* [gentoo-commits] proj/linux-patches:6.14 commit in: /
@ 2025-03-02 22:41 Mike Pagano
0 siblings, 0 replies; 21+ messages in thread
From: Mike Pagano @ 2025-03-02 22:41 UTC (permalink / raw)
To: gentoo-commits
commit: 6bbfb29704d75c82b3ed18db49ad93e666e132dd
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Mar 2 22:40:52 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Mar 2 22:40:52 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=6bbfb297
Create the 6.14 branch with genpatches
CPU Optimization patch
Enable link security restrictions by default.
sparc: Address -Warray-bounds warnings
prctl: Temporarily disable prctl(PR_SET_MDWE) on parisc
Bluetooth: Check key sizes only when Secure Simple Pairing is
enabled. See bug #686758
bpf: mark get_entry_ip as __maybe_unused
sign-file: full functionality with modern LibreSSL
GCC 15 kbuild fixes
libbpf: workaround -Wmaybe-uninitialized false positive
Print firmware info (Reqs CONFIG_GENTOO_PRINT_FIRMWARE_INFO).
Add Gentoo Linux support config settings and defaults.
KVM: x86: Snapshot the host's DEBUGCTL in common x86
KVM: SVM: Manually zero/restore DEBUGCTL if LBR
virtualization is disabled
x86/insn_decoder_test: allow longer symbol-names
menuconfig: Allow sorting the entries alphabetically
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 52 ++
...ble-link-security-restrictions-by-default.patch | 17 +
1700_sparc-address-warray-bound-warnings.patch | 17 +
1730_parisc-Disable-prctl.patch | 51 ++
...sn-decoder-test-allow-longer-symbol-names.patch | 49 ++
1750_KVM-x86-Snapshot-hosts-DEBUGCTL.patch | 95 +++
1751_KVM-SVM-Manually-zero-restore-DEBUGCTL.patch | 69 ++
...zes-only-if-Secure-Simple-Pairing-enabled.patch | 37 +
 2901_permit-menuconfig-sorting.patch | 219 ++++++
2910_bfp-mark-get-entry-ip-as--maybe-unused.patch | 11 +
2920_sign-file-patch-for-libressl.patch | 16 +
...workaround-Wmaybe-uninitialized-false-pos.patch | 98 +++
3000_Support-printing-firmware-info.patch | 14 +
5010_enable-cpu-optimizations-universal.patch | 768 +++++++++++++++++++++
14 files changed, 1513 insertions(+)
diff --git a/0000_README b/0000_README
index 90189932..a3cfadee 100644
--- a/0000_README
+++ b/0000_README
@@ -43,6 +43,58 @@ EXPERIMENTAL
Individual Patch Descriptions:
--------------------------------------------------------------------------
+Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
+From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
+Desc: Enable link security restrictions by default.
+
+Patch: 1700_sparc-address-warray-bound-warnings.patch
+From: https://github.com/KSPP/linux/issues/109
+Desc: Address -Warray-bounds warnings
+
+Patch: 1730_parisc-Disable-prctl.patch
+From: https://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux.git
+Desc: prctl: Temporarily disable prctl(PR_SET_MDWE) on parisc
+
+Patch: 1740_x86-insn-decoder-test-allow-longer-symbol-names.patch
+From: https://gitlab.com/cki-project/kernel-ark/-/commit/8d4a52c3921d278f27241fc0c6949d8fdc13a7f5
+Desc: x86/insn_decoder_test: allow longer symbol-names
+
+Patch: 1750_KVM-x86-Snapshot-hosts-DEBUGCTL.patch
+From: https://bugzilla.kernel.org/show_bug.cgi?id=219787
+Desc: KVM: x86: Snapshot the host's DEBUGCTL in common x86
+
+Patch: 1751_KVM-SVM-Manually-zero-restore-DEBUGCTL.patch
+From: https://bugzilla.kernel.org/show_bug.cgi?id=219787
+Desc: KVM: SVM: Manually zero/restore DEBUGCTL if LBR virtualization is disabled
+
+Patch: 2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
+From: https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
+Desc: Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
+
+Patch: 2901_permit-menuconfig-sorting.patch
+From: https://lore.kernel.org/
+Desc: menuconfig: Allow sorting the entries alphabetically
+
+Patch: 2910_bfp-mark-get-entry-ip-as--maybe-unused.patch
+From: https://www.spinics.net/lists/stable/msg604665.html
+Desc: bpf: mark get_entry_ip as __maybe_unused
+
+Patch: 2920_sign-file-patch-for-libressl.patch
+From: https://bugs.gentoo.org/717166
+Desc: sign-file: full functionality with modern LibreSSL
+
+Patch: 2990_libbpf-v2-workaround-Wmaybe-uninitialized-false-pos.patch
+From: https://lore.kernel.org/bpf/
+Desc: libbpf: workaround -Wmaybe-uninitialized false positive
+
+Patch: 3000_Support-printing-firmware-info.patch
+From: https://bugs.gentoo.org/732852
+Desc: Print firmware info (Reqs CONFIG_GENTOO_PRINT_FIRMWARE_INFO). Thanks to Georgy Yakovlev
+
Patch: 4567_distro-Gentoo-Kconfig.patch
From: Tom Wijsman <TomWij@gentoo.org>
Desc: Add Gentoo Linux support config settings and defaults.
+
+Patch: 5010_enable-cpu-optimizations-universal.patch
+From: https://github.com/graysky2/kernel_compiler_patch
+Desc: Kernel >= 5.15 patch enables gcc >= v11.1 optimizations for additional CPUs.
diff --git a/1510_fs-enable-link-security-restrictions-by-default.patch b/1510_fs-enable-link-security-restrictions-by-default.patch
new file mode 100644
index 00000000..e8c30157
--- /dev/null
+++ b/1510_fs-enable-link-security-restrictions-by-default.patch
@@ -0,0 +1,17 @@
+--- a/fs/namei.c 2022-01-23 13:02:27.876558299 -0500
++++ b/fs/namei.c 2022-03-06 12:47:39.375719693 -0500
+@@ -1020,10 +1020,10 @@ static inline void put_link(struct namei
+ path_put(&last->link);
+ }
+
+-static int sysctl_protected_symlinks __read_mostly;
+-static int sysctl_protected_hardlinks __read_mostly;
+-static int sysctl_protected_fifos __read_mostly;
+-static int sysctl_protected_regular __read_mostly;
++static int sysctl_protected_symlinks __read_mostly = 1;
++static int sysctl_protected_hardlinks __read_mostly = 1;
++int sysctl_protected_fifos __read_mostly = 1;
++int sysctl_protected_regular __read_mostly = 1;
+
+ #ifdef CONFIG_SYSCTL
+ static struct ctl_table namei_sysctls[] = {
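The effect of these new defaults can be confirmed at runtime through procfs. Below is a
minimal stand-alone sketch (not part of the patch) that reads one of the four knobs the
patch flips to 1; the path is the standard sysctl location for fs.protected_symlinks.

    #include <stdio.h>

    int main(void)
    {
        /* fs.protected_symlinks; the patch also defaults protected_hardlinks,
         * protected_fifos and protected_regular to 1 */
        FILE *f = fopen("/proc/sys/fs/protected_symlinks", "r");
        int val = -1;

        if (f) {
            if (fscanf(f, "%d", &val) != 1)
                val = -1;
            fclose(f);
        }
        printf("fs.protected_symlinks = %d\n", val);
        return 0;
    }

On a kernel carrying this patch the value is 1 without any sysctl.conf entry; the
compiled-in default can still be overridden at runtime in the usual way.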
diff --git a/1700_sparc-address-warray-bound-warnings.patch b/1700_sparc-address-warray-bound-warnings.patch
new file mode 100644
index 00000000..f9393555
--- /dev/null
+++ b/1700_sparc-address-warray-bound-warnings.patch
@@ -0,0 +1,17 @@
+--- a/arch/sparc/mm/init_64.c 2022-05-24 16:48:40.749677491 -0400
++++ b/arch/sparc/mm/init_64.c 2022-05-24 16:55:15.511356945 -0400
+@@ -3052,11 +3052,11 @@ static inline resource_size_t compute_ke
+ static void __init kernel_lds_init(void)
+ {
+ code_resource.start = compute_kern_paddr(_text);
+- code_resource.end = compute_kern_paddr(_etext - 1);
++ code_resource.end = compute_kern_paddr(_etext) - 1;
+ data_resource.start = compute_kern_paddr(_etext);
+- data_resource.end = compute_kern_paddr(_edata - 1);
++ data_resource.end = compute_kern_paddr(_edata) - 1;
+ bss_resource.start = compute_kern_paddr(__bss_start);
+- bss_resource.end = compute_kern_paddr(_end - 1);
++ bss_resource.end = compute_kern_paddr(_end) - 1;
+ }
+
+ static int __init report_memory(void)
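The change looks cosmetic but matters to GCC's -Warray-bounds analysis: the linker
symbols are declared as external arrays, so "_etext - 1" forms a pointer before the
start of the _etext object, while subtracting after the conversion keeps the pointer in
range and yields the same numeric address. A stand-alone sketch of the two forms is
below; the to_paddr() macro and its offset are hypothetical stand-ins for
compute_kern_paddr(), and the program assumes a toolchain whose default linker script
provides _etext, as GNU ld does.

    #include <stdio.h>

    extern char _etext[];   /* linker-provided end-of-text marker */

    /* hypothetical stand-in for the kernel's compute_kern_paddr() */
    #define to_paddr(p)   ((unsigned long)(p) - 0x400000UL)

    /* old form: "_etext - 1" creates a pointer before the _etext object,
     * which is what -Warray-bounds complains about */
    unsigned long end_old(void) { return to_paddr(_etext - 1); }

    /* new form: convert first, then subtract; same value, and no
     * out-of-bounds pointer is ever formed */
    unsigned long end_new(void) { return to_paddr(_etext) - 1; }

    int main(void)
    {
        printf("old=%#lx new=%#lx\n", end_old(), end_new());
        return 0;
    }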
diff --git a/1730_parisc-Disable-prctl.patch b/1730_parisc-Disable-prctl.patch
new file mode 100644
index 00000000..f892d6a1
--- /dev/null
+++ b/1730_parisc-Disable-prctl.patch
@@ -0,0 +1,51 @@
+From 339b41ec357c24c02ed4aed6267dbfd443ee1e8e Mon Sep 17 00:00:00 2001
+From: Helge Deller <deller@gmx.de>
+Date: Mon, 13 Nov 2023 16:06:18 +0100
+Subject: prctl: Temporarily disable prctl(PR_SET_MDWE) on parisc
+
+systemd-254 tries to use prctl(PR_SET_MDWE) for systemd's
+MemoryDenyWriteExecute functionality, but fails on PA-RISC/HPPA which
+still needs executable stacks.
+
+Temporarily disable prctl(PR_SET_MDWE) by returning -ENODEV on parisc
+for now. Note that we can't return -EINVAL since systemd will then try
+to use seccomp instead.
+
+Reported-by: Sam James <sam@gentoo.org>
+Signed-off-by: Helge Deller <deller@gmx.de>
+Link: https://lore.kernel.org/all/875y2jro9a.fsf@gentoo.org/
+Link: https://github.com/systemd/systemd/issues/29775.
+Cc: <stable@vger.kernel.org> # v6.3+
+---
+ kernel/sys.c | 10 ++++++++--
+ 1 file changed, 8 insertions(+), 2 deletions(-)
+
+diff --git a/kernel/sys.c b/kernel/sys.c
+index 420d9cb9cc8e2..8e3eaf650d07d 100644
+--- a/kernel/sys.c
++++ b/kernel/sys.c
+@@ -2700,10 +2700,16 @@ SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2, unsigned long, arg3,
+ break;
+ #endif
+ case PR_SET_MDWE:
+- error = prctl_set_mdwe(arg2, arg3, arg4, arg5);
++ if (IS_ENABLED(CONFIG_PARISC))
++ error = -EINVAL;
++ else
++ error = prctl_set_mdwe(arg2, arg3, arg4, arg5);
+ break;
+ case PR_GET_MDWE:
+- error = prctl_get_mdwe(arg2, arg3, arg4, arg5);
++ if (IS_ENABLED(CONFIG_PARISC))
++ error = -EINVAL;
++ else
++ error = prctl_get_mdwe(arg2, arg3, arg4, arg5);
+ break;
+ case PR_SET_VMA:
+ error = prctl_set_vma(arg2, arg3, arg4, arg5);
+--
+cgit
+Filename: fallback-exec-stack.patch. Size: 2kb. View raw, copy, hex, or download this file.
+View source code, the removal or expiry stories, or read the about page.
+
+This website does not claim ownership of, copyright on, and assumes no liability for provided content. Toggle color scheme.
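For context on the error-code discussion above: systemd probes MDWE support with a
plain prctl() call and decides what to do from the result. A hedged userspace sketch of
such a probe follows; the error handling is illustrative only, not systemd's actual
logic, and PR_SET_MDWE / PR_MDWE_REFUSE_EXEC_GAIN are provided by linux/prctl.h on
kernels v6.3 and later.

    #include <stdio.h>
    #include <errno.h>
    #include <string.h>
    #include <sys/prctl.h>
    #include <linux/prctl.h>

    int main(void)
    {
        /* ask the kernel to refuse future writable+executable mappings */
        if (prctl(PR_SET_MDWE, PR_MDWE_REFUSE_EXEC_GAIN, 0, 0, 0) == 0) {
            printf("MDWE enabled for this process\n");
        } else {
            /* on the patched parisc kernel this path is taken, and the
             * caller can decide whether to fall back to something else */
            printf("MDWE unavailable: %s\n", strerror(errno));
        }
        return 0;
    }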
diff --git a/1740_x86-insn-decoder-test-allow-longer-symbol-names.patch b/1740_x86-insn-decoder-test-allow-longer-symbol-names.patch
new file mode 100644
index 00000000..70c706ba
--- /dev/null
+++ b/1740_x86-insn-decoder-test-allow-longer-symbol-names.patch
@@ -0,0 +1,49 @@
+From 8d4a52c3921d278f27241fc0c6949d8fdc13a7f5 Mon Sep 17 00:00:00 2001
+From: David Rheinsberg <david@readahead.eu>
+Date: Tue, 24 Jan 2023 12:04:59 +0100
+Subject: [PATCH] x86/insn_decoder_test: allow longer symbol-names
+
+Increase the allowed line-length of the insn-decoder-test to 4k to allow
+for symbol-names longer than 256 characters.
+
+The insn-decoder-test takes objdump output as input, which may contain
+symbol-names as instruction arguments. With rust-code entering the
+kernel, those symbol-names will include mangled-symbols which might
+exceed the current line-length-limit of the tool.
+
+By bumping the line-length-limit of the tool to 4k, we get a reasonable
+buffer for all objdump outputs I have seen so far. Unfortunately, ELF
+symbol-names are not restricted in length, so technically this might
+still end up failing if we encounter longer names in the future.
+
+My compile-failure looks like this:
+
+ arch/x86/tools/insn_decoder_test: error: malformed line 1152000:
+ tBb_+0xf2>
+
+..which overflowed by 10 characters reading this line:
+
+ ffffffff81458193: 74 3d je ffffffff814581d2 <_RNvXse_NtNtNtCshGpAVYOtgW1_4core4iter8adapters7flattenINtB5_13FlattenCompatINtNtB7_3map3MapNtNtNtBb_3str4iter5CharsNtB1v_17CharEscapeDefaultENtNtBb_4char13EscapeDefaultENtNtBb_3fmt5Debug3fmtBb_+0xf2>
+
+Signed-off-by: David Rheinsberg <david@readahead.eu>
+Signed-off-by: Scott Weaver <scweaver@redhat.com>
+---
+ arch/x86/tools/insn_decoder_test.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/arch/x86/tools/insn_decoder_test.c b/arch/x86/tools/insn_decoder_test.c
+index 472540aeabc23..366e07546344b 100644
+--- a/arch/x86/tools/insn_decoder_test.c
++++ b/arch/x86/tools/insn_decoder_test.c
+@@ -106,7 +106,7 @@ static void parse_args(int argc, char **argv)
+ }
+ }
+
+-#define BUFSIZE 256
++#define BUFSIZE 4096
+
+ int main(int argc, char **argv)
+ {
+--
+GitLab
+
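The failure mode being fixed is the classic fixed-buffer fgets() split: any objdump
line longer than BUFSIZE-1 bytes is returned in two pieces, and the second piece
("tBb_+0xf2>" in the report above) no longer parses as an instruction line. A small
stand-alone illustration is below; the 256/4096 values mirror the patch, everything
else is illustrative.

    #include <stdio.h>
    #include <string.h>

    #define BUFSIZE 256    /* old limit; the patch raises this to 4096 */

    int main(void)
    {
        char line[BUFSIZE];
        unsigned long lineno = 0;

        while (fgets(line, BUFSIZE, stdin)) {
            lineno++;
            /* a line longer than BUFSIZE-1 bytes has no trailing '\n' here,
             * and the next fgets() call returns its tail as a bogus "line" */
            if (!strchr(line, '\n'))
                fprintf(stderr, "line %lu exceeds %d bytes, will be split\n",
                        lineno, BUFSIZE - 1);
        }
        return 0;
    }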
diff --git a/1750_KVM-x86-Snapshot-hosts-DEBUGCTL.patch b/1750_KVM-x86-Snapshot-hosts-DEBUGCTL.patch
new file mode 100644
index 00000000..0265460c
--- /dev/null
+++ b/1750_KVM-x86-Snapshot-hosts-DEBUGCTL.patch
@@ -0,0 +1,95 @@
+From d8595d6256fd46ece44b3433954e8545a0d199b8 Mon Sep 17 00:00:00 2001
+From: Sean Christopherson <seanjc@google.com>
+Date: Fri, 21 Feb 2025 07:45:22 -0800
+Subject: [PATCH 1/2] KVM: x86: Snapshot the host's DEBUGCTL in common x86
+
+Move KVM's snapshot of DEBUGCTL to kvm_vcpu_arch and take the snapshot in
+common x86, so that SVM can also use the snapshot.
+
+Opportunistically change the field to a u64. While bits 63:32 are reserved
+on AMD, not mentioned at all in Intel's SDM, and managed as an "unsigned
+long" by the kernel, DEBUGCTL is an MSR and therefore a 64-bit value.
+
+Cc: stable@vger.kernel.org
+Signed-off-by: Sean Christopherson <seanjc@google.com>
+---
+ arch/x86/include/asm/kvm_host.h | 1 +
+ arch/x86/kvm/vmx/vmx.c | 8 ++------
+ arch/x86/kvm/vmx/vmx.h | 2 --
+ arch/x86/kvm/x86.c | 1 +
+ 4 files changed, 4 insertions(+), 8 deletions(-)
+
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index 0b7af5902ff7..32ae3aa50c7e 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -780,6 +780,7 @@ struct kvm_vcpu_arch {
+ u32 pkru;
+ u32 hflags;
+ u64 efer;
++ u64 host_debugctl;
+ u64 apic_base;
+ struct kvm_lapic *apic; /* kernel irqchip context */
+ bool load_eoi_exitmap_pending;
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index 6c56d5235f0f..3b92f893b239 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -1514,16 +1514,12 @@ void vmx_vcpu_load_vmcs(struct kvm_vcpu *vcpu, int cpu,
+ */
+ void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+ {
+- struct vcpu_vmx *vmx = to_vmx(vcpu);
+-
+ if (vcpu->scheduled_out && !kvm_pause_in_guest(vcpu->kvm))
+ shrink_ple_window(vcpu);
+
+ vmx_vcpu_load_vmcs(vcpu, cpu, NULL);
+
+ vmx_vcpu_pi_load(vcpu, cpu);
+-
+- vmx->host_debugctlmsr = get_debugctlmsr();
+ }
+
+ void vmx_vcpu_put(struct kvm_vcpu *vcpu)
+@@ -7458,8 +7454,8 @@ fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu, bool force_immediate_exit)
+ }
+
+ /* MSR_IA32_DEBUGCTLMSR is zeroed on vmexit. Restore it if needed */
+- if (vmx->host_debugctlmsr)
+- update_debugctlmsr(vmx->host_debugctlmsr);
++ if (vcpu->arch.host_debugctl)
++ update_debugctlmsr(vcpu->arch.host_debugctl);
+
+ #ifndef CONFIG_X86_64
+ /*
+diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
+index 8b111ce1087c..951e44dc9d0e 100644
+--- a/arch/x86/kvm/vmx/vmx.h
++++ b/arch/x86/kvm/vmx/vmx.h
+@@ -340,8 +340,6 @@ struct vcpu_vmx {
+ /* apic deadline value in host tsc */
+ u64 hv_deadline_tsc;
+
+- unsigned long host_debugctlmsr;
+-
+ /*
+ * Only bits masked by msr_ia32_feature_control_valid_bits can be set in
+ * msr_ia32_feature_control. FEAT_CTL_LOCKED is always included
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 02159c967d29..5c6fd0edc41f 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -4968,6 +4968,7 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+
+ /* Save host pkru register if supported */
+ vcpu->arch.host_pkru = read_pkru();
++ vcpu->arch.host_debugctl = get_debugctlmsr();
+
+ /* Apply any externally detected TSC adjustments (due to suspend) */
+ if (unlikely(vcpu->arch.tsc_offset_adjustment)) {
+
+base-commit: 0ad2507d5d93f39619fc42372c347d6006b64319
+--
+2.48.1.658.g4767266eb4-goog
+
diff --git a/1751_KVM-SVM-Manually-zero-restore-DEBUGCTL.patch b/1751_KVM-SVM-Manually-zero-restore-DEBUGCTL.patch
new file mode 100644
index 00000000..e3ce9fe4
--- /dev/null
+++ b/1751_KVM-SVM-Manually-zero-restore-DEBUGCTL.patch
@@ -0,0 +1,69 @@
+From d02de0dfc6fd10f7bc4f7067fb9765c24948c737 Mon Sep 17 00:00:00 2001
+From: Sean Christopherson <seanjc@google.com>
+Date: Fri, 21 Feb 2025 08:16:36 -0800
+Subject: [PATCH 2/2] KVM: SVM: Manually zero/restore DEBUGCTL if LBR
+ virtualization is disabled
+
+Manually zero DEBUGCTL prior to VMRUN if the host's value is non-zero and
+LBR virtualization is disabled, as hardware only context switches DEBUGCTL
+if LBR virtualization is fully enabled. Running the guest with the host's
+value has likely been mildly problematic for quite some time, e.g. it will
+result in undesirable behavior if host is running with BTF=1.
+
+But the bug became fatal with the introduction of Bus Lock Trap ("Detect"
+in kernel parlance) support for AMD (commit 408eb7417a92
+("x86/bus_lock: Add support for AMD")), as a bus lock in the guest will
+trigger an unexpected #DB.
+
+Note, KVM could suppress the bus lock #DB, i.e. simply resume the guest
+without injecting a #DB, but that wouldn't address things like BTF. And
+it appears that AMD CPUs incorrectly clear DR6_BUS_LOCK (it's active low)
+when delivering a #DB that is NOT a bus lock trap, and BUS_LOCK_DETECT is
+enabled in DEBUGCTL.
+
+Reported-by: rangemachine@gmail.com
+Reported-by: whanos@sergal.fun
+Closes: https://bugzilla.kernel.org/show_bug.cgi?id=219787
+Closes: https://lore.kernel.org/all/bug-219787-28872@https.bugzilla.kernel.org%2F
+Cc: Ravi Bangoria <ravi.bangoria@amd.com>
+Cc: stable@vger.kernel.org
+Signed-off-by: Sean Christopherson <seanjc@google.com>
+---
+ arch/x86/kvm/svm/svm.c | 14 ++++++++++++++
+ 1 file changed, 14 insertions(+)
+
+diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
+index a713c803a3a3..a50ca1f17e31 100644
+--- a/arch/x86/kvm/svm/svm.c
++++ b/arch/x86/kvm/svm/svm.c
+@@ -4253,6 +4253,16 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu,
+ clgi();
+ kvm_load_guest_xsave_state(vcpu);
+
++ /*
++ * Hardware only context switches DEBUGCTL if LBR virtualization is
++ * enabled. Manually zero DEBUGCTL if necessary (and restore it after)
++ * VM-Exit, as running with the host's DEBUGCTL can negatively affect
++ * guest state and can even be fatal, e.g. due to bus lock detect.
++ */
++ if (vcpu->arch.host_debugctl &&
++ !(svm->vmcb->control.virt_ext & LBR_CTL_ENABLE_MASK))
++ update_debugctlmsr(0);
++
+ kvm_wait_lapic_expire(vcpu);
+
+ /*
+@@ -4280,6 +4290,10 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu,
+ if (unlikely(svm->vmcb->control.exit_code == SVM_EXIT_NMI))
+ kvm_before_interrupt(vcpu, KVM_HANDLING_NMI);
+
++ if (vcpu->arch.host_debugctl &&
++ !(svm->vmcb->control.virt_ext & LBR_CTL_ENABLE_MASK))
++ update_debugctlmsr(vcpu->arch.host_debugctl);
++
+ kvm_load_host_xsave_state(vcpu);
+ stgi();
+
+--
+2.48.1.658.g4767266eb4-goog
+
diff --git a/2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch b/2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
new file mode 100644
index 00000000..394ad48f
--- /dev/null
+++ b/2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
@@ -0,0 +1,37 @@
+The encryption is only mandatory to be enforced when both sides are using
+Secure Simple Pairing and this means the key size check makes only sense
+in that case.
+
+On legacy Bluetooth 2.0 and earlier devices like mice the encryption was
+optional and thus causing an issue if the key size check is not bound to
+using Secure Simple Pairing.
+
+Fixes: d5bb334a8e17 ("Bluetooth: Align minimum encryption key size for LE and BR/EDR connections")
+Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
+Cc: stable@vger.kernel.org
+---
+ net/bluetooth/hci_conn.c | 9 +++++++--
+ 1 file changed, 7 insertions(+), 2 deletions(-)
+
+diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c
+index 3cf0764d5793..7516cdde3373 100644
+--- a/net/bluetooth/hci_conn.c
++++ b/net/bluetooth/hci_conn.c
+@@ -1272,8 +1272,13 @@ int hci_conn_check_link_mode(struct hci_conn *conn)
+ return 0;
+ }
+
+- if (hci_conn_ssp_enabled(conn) &&
+- !test_bit(HCI_CONN_ENCRYPT, &conn->flags))
++ /* If Secure Simple Pairing is not enabled, then legacy connection
++ * setup is used and no encryption or key sizes can be enforced.
++ */
++ if (!hci_conn_ssp_enabled(conn))
++ return 1;
++
++ if (!test_bit(HCI_CONN_ENCRYPT, &conn->flags))
+ return 0;
+
+ /* The minimum encryption key size needs to be enforced by the
+--
+2.20.1
diff --git a/2901_permit-menuconfig-sorting.patch b/2901_permit-menuconfig-sorting.patch
new file mode 100644
index 00000000..1ceade0c
--- /dev/null
+++ b/2901_permit-menuconfig-sorting.patch
@@ -0,0 +1,219 @@
+From git@z Thu Jan 1 00:00:00 1970
+Subject: [PATCH] menuconfig: Allow sorting the entries alphabetically
+From: Ivan Orlov <ivan.orlov0322@gmail.com>
+Date: Fri, 16 Aug 2024 15:18:31 +0100
+Message-Id: <20240816141831.104085-1-ivan.orlov0322@gmail.com>
+MIME-Version: 1.0
+Content-Type: text/plain; charset="utf-8"
+Content-Transfer-Encoding: 7bit
+
+Implement the functionality which allows to sort the Kconfig entries
+alphabetically if user decides to. It may help finding the desired entry
+faster, so the user will spend less time looking through the list.
+
+The sorting is done on the dialog_list elements in the 'dialog_menu'
+function, so on the option "representation" layer. The sorting could be
+enabled/disabled by pressing the '>' key. The labels are sorted in the
+following way:
+
+1. Put all entries into the array (from the linked list)
+2. Sort them alphabetically using qsort and custom comparator
+3. Restore the items linked list structure
+
+I know that this modification includes the ugly heuristics for
+extracting the actual label text from " [ ] Some-option"-like
+expressions (to be able to alphabetically compare the labels), and I
+would be happy to discuss alternative solutions.
+
+Signed-off-by: Ivan Orlov <ivan.orlov0322@gmail.com>
+---
+ scripts/kconfig/lxdialog/dialog.h | 5 +-
+ scripts/kconfig/lxdialog/menubox.c | 7 ++-
+ scripts/kconfig/lxdialog/util.c | 79 ++++++++++++++++++++++++++++++
+ scripts/kconfig/mconf.c | 9 +++-
+ 4 files changed, 97 insertions(+), 3 deletions(-)
+
+diff --git a/scripts/kconfig/lxdialog/dialog.h b/scripts/kconfig/lxdialog/dialog.h
+index f6c2ebe6d1f9..a036ed8cb43c 100644
+--- a/scripts/kconfig/lxdialog/dialog.h
++++ b/scripts/kconfig/lxdialog/dialog.h
+@@ -58,6 +58,8 @@
+ #define ACS_DARROW 'v'
+ #endif
+
++#define KEY_ACTION_SORT 11
++
+ /* error return codes */
+ #define ERRDISPLAYTOOSMALL (KEY_MAX + 1)
+
+@@ -127,6 +129,7 @@ void item_set_selected(int val);
+ int item_activate_selected(void);
+ void *item_data(void);
+ char item_tag(void);
++void sort_items(void);
+
+ /* item list manipulation for lxdialog use */
+ #define MAXITEMSTR 200
+@@ -196,7 +199,7 @@ int dialog_textbox(const char *title, const char *tbuf, int initial_height,
+ int initial_width, int *_vscroll, int *_hscroll,
+ int (*extra_key_cb)(int, size_t, size_t, void *), void *data);
+ int dialog_menu(const char *title, const char *prompt,
+- const void *selected, int *s_scroll);
++ const void *selected, int *s_scroll, bool sort);
+ int dialog_checklist(const char *title, const char *prompt, int height,
+ int width, int list_height);
+ int dialog_inputbox(const char *title, const char *prompt, int height,
+diff --git a/scripts/kconfig/lxdialog/menubox.c b/scripts/kconfig/lxdialog/menubox.c
+index 6e6244df0c56..4cba15f967c5 100644
+--- a/scripts/kconfig/lxdialog/menubox.c
++++ b/scripts/kconfig/lxdialog/menubox.c
+@@ -161,7 +161,7 @@ static void do_scroll(WINDOW *win, int *scroll, int n)
+ * Display a menu for choosing among a number of options
+ */
+ int dialog_menu(const char *title, const char *prompt,
+- const void *selected, int *s_scroll)
++ const void *selected, int *s_scroll, bool sort)
+ {
+ int i, j, x, y, box_x, box_y;
+ int height, width, menu_height;
+@@ -181,6 +181,9 @@ int dialog_menu(const char *title, const char *prompt,
+
+ max_choice = MIN(menu_height, item_count());
+
++ if (sort)
++ sort_items();
++
+ /* center dialog box on screen */
+ x = (getmaxx(stdscr) - width) / 2;
+ y = (getmaxy(stdscr) - height) / 2;
+@@ -408,6 +411,8 @@ int dialog_menu(const char *title, const char *prompt,
+ delwin(menu);
+ delwin(dialog);
+ goto do_resize;
++ case '>':
++ return KEY_ACTION_SORT;
+ }
+ }
+ delwin(menu);
+diff --git a/scripts/kconfig/lxdialog/util.c b/scripts/kconfig/lxdialog/util.c
+index 964139c87fcb..cc87ddd69c10 100644
+--- a/scripts/kconfig/lxdialog/util.c
++++ b/scripts/kconfig/lxdialog/util.c
+@@ -563,6 +563,85 @@ void item_reset(void)
+ item_cur = &item_nil;
+ }
+
++/*
++ * Function skips a part of the label to get the actual label text
++ * (without the '[ ]'-like prefix).
++ */
++static char *skip_spec_characters(char *s)
++{
++ bool unbalanced = false;
++
++ while (*s) {
++ if (isalnum(*s) && !unbalanced) {
++ break;
++ } else if (*s == '[' || *s == '<' || *s == '(') {
++ /*
++ * '[', '<' or '(' means that we need to look for
++ * closure
++ */
++ unbalanced = true;
++ } else if (*s == '-') {
++ /*
++ * Labels could start with "-*-", so '-' here either
++ * opens or closes the "checkbox"
++ */
++ unbalanced = !unbalanced;
++ } else if (*s == '>' || *s == ']' || *s == ')') {
++ unbalanced = false;
++ }
++ s++;
++ }
++ return s;
++}
++
++static int compare_labels(const void *a, const void *b)
++{
++ struct dialog_list *el1 = *((struct dialog_list **)a);
++ struct dialog_list *el2 = *((struct dialog_list **)b);
++
++ return strcasecmp(skip_spec_characters(el1->node.str),
++ skip_spec_characters(el2->node.str));
++}
++
++void sort_items(void)
++{
++ struct dialog_list **arr;
++ struct dialog_list *cur;
++ size_t n, i;
++
++ n = item_count();
++ if (n == 0)
++ return;
++
++ /* Copy all items from linked list into array */
++ cur = item_head;
++ arr = malloc(sizeof(*arr) * n);
++
++ if (!arr) {
++ /* Don't have enough memory, so don't do anything */
++ return;
++ }
++
++ for (i = 0; i < n; i++) {
++ arr[i] = cur;
++ cur = cur->next;
++ }
++
++ qsort(arr, n, sizeof(struct dialog_list *), compare_labels);
++
++ /* Restore the linked list structure from the sorted array */
++ for (i = 0; i < n; i++) {
++ if (i < n - 1)
++ arr[i]->next = arr[i + 1];
++ else
++ arr[i]->next = NULL;
++ }
++
++ item_head = arr[0];
++
++ free(arr);
++}
++
+ void item_make(const char *fmt, ...)
+ {
+ va_list ap;
+diff --git a/scripts/kconfig/mconf.c b/scripts/kconfig/mconf.c
+index 3887eac75289..8a961a41cae4 100644
+--- a/scripts/kconfig/mconf.c
++++ b/scripts/kconfig/mconf.c
+@@ -749,6 +749,7 @@ static void conf_save(void)
+ }
+ }
+
++static bool should_sort;
+ static void conf(struct menu *menu, struct menu *active_menu)
+ {
+ struct menu *submenu;
+@@ -774,9 +775,15 @@ static void conf(struct menu *menu, struct menu *active_menu)
+ dialog_clear();
+ res = dialog_menu(prompt ? prompt : "Main Menu",
+ menu_instructions,
+- active_menu, &s_scroll);
++ active_menu, &s_scroll, should_sort);
+ if (res == 1 || res == KEY_ESC || res == -ERRDISPLAYTOOSMALL)
+ break;
++
++ if (res == KEY_ACTION_SORT) {
++ should_sort = !should_sort;
++ continue;
++ }
++
+ if (item_count() != 0) {
+ if (!item_activate_selected())
+ continue;
+--
+2.34.1
+
diff --git a/2910_bfp-mark-get-entry-ip-as--maybe-unused.patch b/2910_bfp-mark-get-entry-ip-as--maybe-unused.patch
new file mode 100644
index 00000000..a75b90c8
--- /dev/null
+++ b/2910_bfp-mark-get-entry-ip-as--maybe-unused.patch
@@ -0,0 +1,11 @@
+--- a/kernel/trace/bpf_trace.c 2022-11-09 13:30:24.192940988 -0500
++++ b/kernel/trace/bpf_trace.c 2022-11-09 13:30:59.029810818 -0500
+@@ -1027,7 +1027,7 @@ static const struct bpf_func_proto bpf_g
+ };
+
+ #ifdef CONFIG_X86_KERNEL_IBT
+-static unsigned long get_entry_ip(unsigned long fentry_ip)
++static unsigned long __maybe_unused get_entry_ip(unsigned long fentry_ip)
+ {
+ u32 instr;
+
diff --git a/2920_sign-file-patch-for-libressl.patch b/2920_sign-file-patch-for-libressl.patch
new file mode 100644
index 00000000..e6ec017d
--- /dev/null
+++ b/2920_sign-file-patch-for-libressl.patch
@@ -0,0 +1,16 @@
+--- a/scripts/sign-file.c 2020-05-20 18:47:21.282820662 -0400
++++ b/scripts/sign-file.c 2020-05-20 18:48:37.991081899 -0400
+@@ -41,9 +41,10 @@
+ * signing with anything other than SHA1 - so we're stuck with that if such is
+ * the case.
+ */
+-#if defined(LIBRESSL_VERSION_NUMBER) || \
+- OPENSSL_VERSION_NUMBER < 0x10000000L || \
+- defined(OPENSSL_NO_CMS)
++#if defined(OPENSSL_NO_CMS) || \
++ ( defined(LIBRESSL_VERSION_NUMBER) \
++ && (LIBRESSL_VERSION_NUMBER < 0x3010000fL) ) || \
++ OPENSSL_VERSION_NUMBER < 0x10000000L
+ #define USE_PKCS7
+ #endif
+ #ifndef USE_PKCS7
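The hex constants in the new condition follow the long-standing OpenSSL-style version
encoding (0xMNNFFPPS: major, minor, fix, patch, status), so 0x3010000fL reads as
"LibreSSL 3.1.0 release", the cut-off below which sign-file falls back to PKCS#7. A
tiny sketch that prints whichever version macro is in effect (it assumes an OpenSSL or
LibreSSL development header is installed):

    #include <stdio.h>
    #include <openssl/opensslv.h>

    int main(void)
    {
    #if defined(LIBRESSL_VERSION_NUMBER)
        printf("LibreSSL version number: 0x%08lxL\n",
               (unsigned long)LIBRESSL_VERSION_NUMBER);
    #else
        printf("OpenSSL version number:  0x%08lxL\n",
               (unsigned long)OPENSSL_VERSION_NUMBER);
    #endif
        return 0;
    }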
diff --git a/2990_libbpf-v2-workaround-Wmaybe-uninitialized-false-pos.patch b/2990_libbpf-v2-workaround-Wmaybe-uninitialized-false-pos.patch
new file mode 100644
index 00000000..af5e117f
--- /dev/null
+++ b/2990_libbpf-v2-workaround-Wmaybe-uninitialized-false-pos.patch
@@ -0,0 +1,98 @@
+From git@z Thu Jan 1 00:00:00 1970
+Subject: [PATCH v2] libbpf: workaround -Wmaybe-uninitialized false positive
+From: Sam James <sam@gentoo.org>
+Date: Fri, 09 Aug 2024 18:26:41 +0100
+Message-Id: <8f5c3b173e4cb216322ae19ade2766940c6fbebb.1723224401.git.sam@gentoo.org>
+MIME-Version: 1.0
+Content-Type: text/plain; charset="utf-8"
+Content-Transfer-Encoding: 8bit
+
+In `elf_close`, we get this with GCC 15 -O3 (at least):
+```
+In function ‘elf_close’,
+ inlined from ‘elf_close’ at elf.c:53:6,
+ inlined from ‘elf_find_func_offset_from_file’ at elf.c:384:2:
+elf.c:57:9: warning: ‘elf_fd.elf’ may be used uninitialized [-Wmaybe-uninitialized]
+ 57 | elf_end(elf_fd->elf);
+ | ^~~~~~~~~~~~~~~~~~~~
+elf.c: In function ‘elf_find_func_offset_from_file’:
+elf.c:377:23: note: ‘elf_fd.elf’ was declared here
+ 377 | struct elf_fd elf_fd;
+ | ^~~~~~
+In function ‘elf_close’,
+ inlined from ‘elf_close’ at elf.c:53:6,
+ inlined from ‘elf_find_func_offset_from_file’ at elf.c:384:2:
+elf.c:58:9: warning: ‘elf_fd.fd’ may be used uninitialized [-Wmaybe-uninitialized]
+ 58 | close(elf_fd->fd);
+ | ^~~~~~~~~~~~~~~~~
+elf.c: In function ‘elf_find_func_offset_from_file’:
+elf.c:377:23: note: ‘elf_fd.fd’ was declared here
+ 377 | struct elf_fd elf_fd;
+ | ^~~~~~
+```
+
+In reality, our use is fine, it's just that GCC doesn't model errno
+here (see linked GCC bug). Suppress -Wmaybe-uninitialized accordingly.
+
+Link: https://gcc.gnu.org/PR114952
+Signed-off-by: Sam James <sam@gentoo.org>
+---
+v2: Fix Clang build.
+
+Range-diff against v1:
+1: 3ebbe7a4e93a ! 1: 8f5c3b173e4c libbpf: workaround -Wmaybe-uninitialized false positive
+ @@ tools/lib/bpf/elf.c: long elf_find_func_offset(Elf *elf, const char *binary_path
+ return ret;
+ }
+
+ ++#if !defined(__clang__)
+ +#pragma GCC diagnostic push
+ +/* https://gcc.gnu.org/PR114952 */
+ +#pragma GCC diagnostic ignored "-Wmaybe-uninitialized"
+ ++#endif
+ /* Find offset of function name in ELF object specified by path. "name" matches
+ * symbol name or name@@LIB for library functions.
+ */
+ @@ tools/lib/bpf/elf.c: long elf_find_func_offset_from_file(const char *binary_path
+ elf_close(&elf_fd);
+ return ret;
+ }
+ ++#if !defined(__clang__)
+ +#pragma GCC diagnostic pop
+ ++#endif
+
+ struct symbol {
+ const char *name;
+
+ tools/lib/bpf/elf.c | 8 ++++++++
+ 1 file changed, 8 insertions(+)
+
+diff --git a/tools/lib/bpf/elf.c b/tools/lib/bpf/elf.c
+index c92e02394159..7058425ca85b 100644
+--- a/tools/lib/bpf/elf.c
++++ b/tools/lib/bpf/elf.c
+@@ -369,6 +369,11 @@ long elf_find_func_offset(Elf *elf, const char *binary_path, const char *name)
+ return ret;
+ }
+
++#if !defined(__clang__)
++#pragma GCC diagnostic push
++/* https://gcc.gnu.org/PR114952 */
++#pragma GCC diagnostic ignored "-Wmaybe-uninitialized"
++#endif
+ /* Find offset of function name in ELF object specified by path. "name" matches
+ * symbol name or name@@LIB for library functions.
+ */
+@@ -384,6 +389,9 @@ long elf_find_func_offset_from_file(const char *binary_path, const char *name)
+ elf_close(&elf_fd);
+ return ret;
+ }
++#if !defined(__clang__)
++#pragma GCC diagnostic pop
++#endif
+
+ struct symbol {
+ const char *name;
+--
+2.45.2
+
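The pattern being silenced is a struct that is written only on the success path and
read only after the caller has checked the return value; GCC's flow analysis, which
does not model the errno/return-code contract (see PR114952 linked above), can still
flag it once everything is inlined at -O3. Below is a schematic, stand-alone reduction
of that shape; the names are invented, and the real warning needs the cross-function
inlining in elf.c, so this toy version will typically compile cleanly.

    #include <stdio.h>

    struct elf_like_fd { int fd; };

    /* fills *out only when it returns 0, mirroring libbpf's elf_open() */
    static int open_thing(struct elf_like_fd *out, int should_succeed)
    {
        if (!should_succeed)
            return -1;          /* *out deliberately left untouched */
        out->fd = 42;
        return 0;
    }

    int main(void)
    {
        struct elf_like_fd h;

        if (open_thing(&h, 1) == 0)
            printf("fd = %d\n", h.fd);   /* only reached when h was written */
        return 0;
    }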
diff --git a/3000_Support-printing-firmware-info.patch b/3000_Support-printing-firmware-info.patch
new file mode 100644
index 00000000..a630cfbe
--- /dev/null
+++ b/3000_Support-printing-firmware-info.patch
@@ -0,0 +1,14 @@
+--- a/drivers/base/firmware_loader/main.c 2021-08-24 15:42:07.025482085 -0400
++++ b/drivers/base/firmware_loader/main.c 2021-08-24 15:44:40.782975313 -0400
+@@ -809,6 +809,11 @@ _request_firmware(const struct firmware
+
+ ret = _request_firmware_prepare(&fw, name, device, buf, size,
+ offset, opt_flags);
++
++#ifdef CONFIG_GENTOO_PRINT_FIRMWARE_INFO
++ printk(KERN_NOTICE "Loading firmware: %s\n", name);
++#endif
++
+ if (ret <= 0) /* error or already assigned */
+ goto out;
+
diff --git a/5010_enable-cpu-optimizations-universal.patch b/5010_enable-cpu-optimizations-universal.patch
new file mode 100644
index 00000000..5011aaa6
--- /dev/null
+++ b/5010_enable-cpu-optimizations-universal.patch
@@ -0,0 +1,768 @@
+From d66d4da9e6fbd22780826ec7d55d65c3ecaf1e66 Mon Sep 17 00:00:00 2001
+From: graysky <therealgraysky AT proton DOT me>
+Date: Mon, 16 Sep 2024 05:55:58 -0400
+
+FEATURES
+This patch adds additional tunings via new x86-64 ISA levels and
+more micro-architecture options to the Linux kernel in three classes.
+
+1. New generic x86-64 ISA levels
+
+These are selectable under:
+ Processor type and features ---> x86-64 compiler ISA level
+
+• x86-64 A value of (1) is the default
+• x86-64-v2 A value of (2) brings support for vector
+ instructions up to Streaming SIMD Extensions 4.2 (SSE4.2)
+ and Supplemental Streaming SIMD Extensions 3 (SSSE3), the
+ POPCNT instruction, and CMPXCHG16B.
+• x86-64-v3 A value of (3) adds vector instructions up to AVX2, MOVBE,
+ and additional bit-manipulation instructions.
+
+There is also x86-64-v4 but including this makes little sense as
+the kernel does not use any of the AVX512 instructions anyway.
+
+Users of glibc 2.33 and above can see which level is supported by running:
+ /lib/ld-linux-x86-64.so.2 --help | grep supported
+Or
+ /lib64/ld-linux-x86-64.so.2 --help | grep supported
+
+2. New micro-architectures
+
+These are selectable under:
+ Processor type and features ---> Processor family
+
+• AMD Improved K8-family
+• AMD K10-family
+• AMD Family 10h (Barcelona)
+• AMD Family 14h (Bobcat)
+• AMD Family 16h (Jaguar)
+• AMD Family 15h (Bulldozer)
+• AMD Family 15h (Piledriver)
+• AMD Family 15h (Steamroller)
+• AMD Family 15h (Excavator)
+• AMD Family 17h (Zen)
+• AMD Family 17h (Zen 2)
+• AMD Family 19h (Zen 3)**
+• AMD Family 19h (Zen 4)‡
+• AMD Family 1Ah (Zen 5)§
+• Intel Silvermont low-power processors
+• Intel Goldmont low-power processors (Apollo Lake and Denverton)
+• Intel Goldmont Plus low-power processors (Gemini Lake)
+• Intel 1st Gen Core i3/i5/i7 (Nehalem)
+• Intel 1.5 Gen Core i3/i5/i7 (Westmere)
+• Intel 2nd Gen Core i3/i5/i7 (Sandybridge)
+• Intel 3rd Gen Core i3/i5/i7 (Ivybridge)
+• Intel 4th Gen Core i3/i5/i7 (Haswell)
+• Intel 5th Gen Core i3/i5/i7 (Broadwell)
+• Intel 6th Gen Core i3/i5/i7 (Skylake)
+• Intel 6th Gen Core i7/i9 (Skylake X)
+• Intel 8th Gen Core i3/i5/i7 (Cannon Lake)
+• Intel 10th Gen Core i7/i9 (Ice Lake)
+• Intel Xeon (Cascade Lake)
+• Intel Xeon (Cooper Lake)*
+• Intel 3rd Gen 10nm++ i3/i5/i7/i9-family (Tiger Lake)*
+• Intel 4th Gen 10nm++ Xeon (Sapphire Rapids)†
+• Intel 11th Gen i3/i5/i7/i9-family (Rocket Lake)†
+• Intel 12th Gen i3/i5/i7/i9-family (Alder Lake)†
+• Intel 13th Gen i3/i5/i7/i9-family (Raptor Lake)‡
+• Intel 14th Gen i3/i5/i7/i9-family (Meteor Lake)‡
+• Intel 5th Gen 10nm++ Xeon (Emerald Rapids)‡
+
+Notes: If not otherwise noted, gcc >=9.1 is required for support.
+ *Requires gcc >=10.1 or clang >=10.0
+ **Required gcc >=10.3 or clang >=12.0
+ †Required gcc >=11.1 or clang >=12.0
+ ‡Required gcc >=13.0 or clang >=15.0.5
+ §Required gcc >14.0 or clang >=19.0?
+
+3. Auto-detected micro-architecture levels
+
+Compile by passing the '-march=native' option which, "selects the CPU
+to generate code for at compilation time by determining the processor type of
+the compiling machine. Using -march=native enables all instruction subsets
+supported by the local machine and will produce code optimized for the local
+machine under the constraints of the selected instruction set."[1]
+
+Users of Intel CPUs should select the 'Intel-Native' option and users of AMD
+CPUs should select the 'AMD-Native' option.
+
+MINOR NOTES RELATING TO INTEL ATOM PROCESSORS
+This patch also changes -march=atom to -march=bonnell in accordance with the
+gcc v4.9 changes. Upstream is using the deprecated -march=atom flags when I
+believe it should use the newer -march=bonnell flag for atom processors.[2]
+
+It is not recommended to compile on Atom-CPUs with the 'native' option.[3] The
+recommendation is to use the 'atom' option instead.
+
+BENEFITS
+Small but real speed increases are measurable using a make endpoint comparing
+a generic kernel to one built with one of the respective microarchs.
+
+See the following experimental evidence supporting this statement:
+https://github.com/graysky2/kernel_compiler_patch?tab=readme-ov-file#benchmarks
+
+REQUIREMENTS
+linux version 6.1.79+
+gcc version >=9.0 or clang version >=9.0
+
+ACKNOWLEDGMENTS
+This patch builds on the seminal work by Jeroen.[4]
+
+REFERENCES
+1. https://gcc.gnu.org/onlinedocs/gcc/x86-Options.html#index-x86-Options
+2. https://bugzilla.kernel.org/show_bug.cgi?id=77461
+3. https://github.com/graysky2/kernel_gcc_patch/issues/15
+4. http://www.linuxforge.net/docs/linux/linux-gcc.php
+
+---
+ arch/x86/Kconfig.cpu | 367 ++++++++++++++++++++++++++++++--
+ arch/x86/Makefile | 89 +++++++-
+ arch/x86/include/asm/vermagic.h | 72 +++++++
+ 3 files changed, 511 insertions(+), 17 deletions(-)
+
+diff --git a/arch/x86/Kconfig.cpu b/arch/x86/Kconfig.cpu
+index ce5ed2c2db0c..6d89f21aba52 100644
+--- a/arch/x86/Kconfig.cpu
++++ b/arch/x86/Kconfig.cpu
+@@ -155,9 +155,8 @@ config MPENTIUM4
+ -Paxville
+ -Dempsey
+
+-
+ config MK6
+- bool "K6/K6-II/K6-III"
++ bool "AMD K6/K6-II/K6-III"
+ depends on X86_32
+ help
+ Select this for an AMD K6-family processor. Enables use of
+@@ -165,7 +164,7 @@ config MK6
+ flags to GCC.
+
+ config MK7
+- bool "Athlon/Duron/K7"
++ bool "AMD Athlon/Duron/K7"
+ depends on X86_32
+ help
+ Select this for an AMD Athlon K7-family processor. Enables use of
+@@ -173,12 +172,114 @@ config MK7
+ flags to GCC.
+
+ config MK8
+- bool "Opteron/Athlon64/Hammer/K8"
++ bool "AMD Opteron/Athlon64/Hammer/K8"
+ help
+ Select this for an AMD Opteron or Athlon64 Hammer-family processor.
+ Enables use of some extended instructions, and passes appropriate
+ optimization flags to GCC.
+
++config MK8SSE3
++ bool "AMD Opteron/Athlon64/Hammer/K8 with SSE3"
++ help
++ Select this for improved AMD Opteron or Athlon64 Hammer-family processors.
++ Enables use of some extended instructions, and passes appropriate
++ optimization flags to GCC.
++
++config MK10
++ bool "AMD 61xx/7x50/PhenomX3/X4/II/K10"
++ help
++ Select this for an AMD 61xx Eight-Core Magny-Cours, Athlon X2 7x50,
++ Phenom X3/X4/II, Athlon II X2/X3/X4, or Turion II-family processor.
++ Enables use of some extended instructions, and passes appropriate
++ optimization flags to GCC.
++
++config MBARCELONA
++ bool "AMD Barcelona"
++ help
++ Select this for AMD Family 10h Barcelona processors.
++
++ Enables -march=barcelona
++
++config MBOBCAT
++ bool "AMD Bobcat"
++ help
++ Select this for AMD Family 14h Bobcat processors.
++
++ Enables -march=btver1
++
++config MJAGUAR
++ bool "AMD Jaguar"
++ help
++ Select this for AMD Family 16h Jaguar processors.
++
++ Enables -march=btver2
++
++config MBULLDOZER
++ bool "AMD Bulldozer"
++ help
++ Select this for AMD Family 15h Bulldozer processors.
++
++ Enables -march=bdver1
++
++config MPILEDRIVER
++ bool "AMD Piledriver"
++ help
++ Select this for AMD Family 15h Piledriver processors.
++
++ Enables -march=bdver2
++
++config MSTEAMROLLER
++ bool "AMD Steamroller"
++ help
++ Select this for AMD Family 15h Steamroller processors.
++
++ Enables -march=bdver3
++
++config MEXCAVATOR
++ bool "AMD Excavator"
++ help
++ Select this for AMD Family 15h Excavator processors.
++
++ Enables -march=bdver4
++
++config MZEN
++ bool "AMD Zen"
++ help
++ Select this for AMD Family 17h Zen processors.
++
++ Enables -march=znver1
++
++config MZEN2
++ bool "AMD Zen 2"
++ help
++ Select this for AMD Family 17h Zen 2 processors.
++
++ Enables -march=znver2
++
++config MZEN3
++ bool "AMD Zen 3"
++ depends on (CC_IS_GCC && GCC_VERSION >= 100300) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
++ help
++ Select this for AMD Family 19h Zen 3 processors.
++
++ Enables -march=znver3
++
++config MZEN4
++ bool "AMD Zen 4"
++ depends on (CC_IS_GCC && GCC_VERSION >= 130000) || (CC_IS_CLANG && CLANG_VERSION >= 160000)
++ help
++ Select this for AMD Family 19h Zen 4 processors.
++
++ Enables -march=znver4
++
++config MZEN5
++ bool "AMD Zen 5"
++ depends on (CC_IS_GCC && GCC_VERSION > 140000) || (CC_IS_CLANG && CLANG_VERSION >= 190100)
++ help
++ Select this for AMD Family 19h Zen 5 processors.
++
++ Enables -march=znver5
++
+ config MCRUSOE
+ bool "Crusoe"
+ depends on X86_32
+@@ -269,8 +370,17 @@ config MPSC
+ using the cpu family field
+ in /proc/cpuinfo. Family 15 is an older Xeon, Family 6 a newer one.
+
++config MATOM
++ bool "Intel Atom"
++ help
++
++ Select this for the Intel Atom platform. Intel Atom CPUs have an
++ in-order pipelining architecture and thus can benefit from
++ accordingly optimized code. Use a recent GCC with specific Atom
++ support in order to fully benefit from selecting this option.
++
+ config MCORE2
+- bool "Core 2/newer Xeon"
++ bool "Intel Core 2"
+ help
+
+ Select this for Intel Core 2 and newer Core 2 Xeons (Xeon 51xx and
+@@ -278,14 +388,199 @@ config MCORE2
+ family in /proc/cpuinfo. Newer ones have 6 and older ones 15
+ (not a typo)
+
+-config MATOM
+- bool "Intel Atom"
++ Enables -march=core2
++
++config MNEHALEM
++ bool "Intel Nehalem"
+ help
+
+- Select this for the Intel Atom platform. Intel Atom CPUs have an
+- in-order pipelining architecture and thus can benefit from
+- accordingly optimized code. Use a recent GCC with specific Atom
+- support in order to fully benefit from selecting this option.
++ Select this for 1st Gen Core processors in the Nehalem family.
++
++ Enables -march=nehalem
++
++config MWESTMERE
++ bool "Intel Westmere"
++ help
++
++ Select this for the Intel Westmere formerly Nehalem-C family.
++
++ Enables -march=westmere
++
++config MSILVERMONT
++ bool "Intel Silvermont"
++ help
++
++ Select this for the Intel Silvermont platform.
++
++ Enables -march=silvermont
++
++config MGOLDMONT
++ bool "Intel Goldmont"
++ help
++
++ Select this for the Intel Goldmont platform including Apollo Lake and Denverton.
++
++ Enables -march=goldmont
++
++config MGOLDMONTPLUS
++ bool "Intel Goldmont Plus"
++ help
++
++ Select this for the Intel Goldmont Plus platform including Gemini Lake.
++
++ Enables -march=goldmont-plus
++
++config MSANDYBRIDGE
++ bool "Intel Sandy Bridge"
++ help
++
++ Select this for 2nd Gen Core processors in the Sandy Bridge family.
++
++ Enables -march=sandybridge
++
++config MIVYBRIDGE
++ bool "Intel Ivy Bridge"
++ help
++
++ Select this for 3rd Gen Core processors in the Ivy Bridge family.
++
++ Enables -march=ivybridge
++
++config MHASWELL
++ bool "Intel Haswell"
++ help
++
++ Select this for 4th Gen Core processors in the Haswell family.
++
++ Enables -march=haswell
++
++config MBROADWELL
++ bool "Intel Broadwell"
++ help
++
++ Select this for 5th Gen Core processors in the Broadwell family.
++
++ Enables -march=broadwell
++
++config MSKYLAKE
++ bool "Intel Skylake"
++ help
++
++ Select this for 6th Gen Core processors in the Skylake family.
++
++ Enables -march=skylake
++
++config MSKYLAKEX
++ bool "Intel Skylake X"
++ help
++
++ Select this for 6th Gen Core processors in the Skylake X family.
++
++ Enables -march=skylake-avx512
++
++config MCANNONLAKE
++ bool "Intel Cannon Lake"
++ help
++
++ Select this for 8th Gen Core processors
++
++ Enables -march=cannonlake
++
++config MICELAKE_CLIENT
++ bool "Intel Ice Lake"
++ help
++
++ Select this for 10th Gen Core client processors in the Ice Lake family.
++
++ Enables -march=icelake-client
++
++config MICELAKE_SERVER
++ bool "Intel Ice Lake Server"
++ help
++
++ Select this for 10th Gen Core server processors in the Ice Lake family.
++
++ Enables -march=icelake-server
++
++config MCASCADELAKE
++ bool "Intel Cascade Lake"
++ help
++
++ Select this for Xeon processors in the Cascade Lake family.
++
++ Enables -march=cascadelake
++
++config MCOOPERLAKE
++ bool "Intel Cooper Lake"
++ depends on (CC_IS_GCC && GCC_VERSION > 100100) || (CC_IS_CLANG && CLANG_VERSION >= 100000)
++ help
++
++ Select this for Xeon processors in the Cooper Lake family.
++
++ Enables -march=cooperlake
++
++config MTIGERLAKE
++ bool "Intel Tiger Lake"
++ depends on (CC_IS_GCC && GCC_VERSION > 100100) || (CC_IS_CLANG && CLANG_VERSION >= 100000)
++ help
++
++ Select this for third-generation 10 nm process processors in the Tiger Lake family.
++
++ Enables -march=tigerlake
++
++config MSAPPHIRERAPIDS
++ bool "Intel Sapphire Rapids"
++ depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
++ help
++
++ Select this for fourth-generation 10 nm process processors in the Sapphire Rapids family.
++
++ Enables -march=sapphirerapids
++
++config MROCKETLAKE
++ bool "Intel Rocket Lake"
++ depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
++ help
++
++ Select this for eleventh-generation processors in the Rocket Lake family.
++
++ Enables -march=rocketlake
++
++config MALDERLAKE
++ bool "Intel Alder Lake"
++ depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
++ help
++
++ Select this for twelfth-generation processors in the Alder Lake family.
++
++ Enables -march=alderlake
++
++config MRAPTORLAKE
++ bool "Intel Raptor Lake"
++ depends on (CC_IS_GCC && GCC_VERSION >= 130000) || (CC_IS_CLANG && CLANG_VERSION >= 150500)
++ help
++
++ Select this for thirteenth-generation processors in the Raptor Lake family.
++
++ Enables -march=raptorlake
++
++config MMETEORLAKE
++ bool "Intel Meteor Lake"
++ depends on (CC_IS_GCC && GCC_VERSION >= 130000) || (CC_IS_CLANG && CLANG_VERSION >= 150500)
++ help
++
++ Select this for fourteenth-generation processors in the Meteor Lake family.
++
++ Enables -march=meteorlake
++
++config MEMERALDRAPIDS
++ bool "Intel Emerald Rapids"
++ depends on (CC_IS_GCC && GCC_VERSION > 130000) || (CC_IS_CLANG && CLANG_VERSION >= 150500)
++ help
++
++ Select this for fifth-generation 10 nm process processors in the Emerald Rapids family.
++
++ Enables -march=emeraldrapids
+
+ config GENERIC_CPU
+ bool "Generic-x86-64"
+@@ -294,6 +589,26 @@ config GENERIC_CPU
+ Generic x86-64 CPU.
+ Run equally well on all x86-64 CPUs.
+
++config MNATIVE_INTEL
++ bool "Intel-Native optimizations autodetected by the compiler"
++ help
++
++ Clang 3.8, GCC 4.2 and above support -march=native, which automatically detects
++ the optimum settings to use based on your processor. Do NOT use this
++ for AMD CPUs. Intel Only!
++
++ Enables -march=native
++
++config MNATIVE_AMD
++ bool "AMD-Native optimizations autodetected by the compiler"
++ help
++
++ Clang 3.8, GCC 4.2 and above support -march=native, which automatically detects
++ the optimum settings to use based on your processor. Do NOT use this
++ for Intel CPUs. AMD Only!
++
++ Enables -march=native
++
+ endchoice
+
+ config X86_GENERIC
+@@ -308,6 +623,30 @@ config X86_GENERIC
+ This is really intended for distributors who need more
+ generic optimizations.
+
++config X86_64_VERSION
++ int "x86-64 compiler ISA level"
++ range 1 3
++ depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
++ depends on X86_64 && GENERIC_CPU
++ help
++ Specify a specific x86-64 compiler ISA level.
++
++ There are three x86-64 ISA levels that work on top of
++ the x86-64 baseline, namely: x86-64-v2, x86-64-v3, and x86-64-v4.
++
++ x86-64-v2 brings support for vector instructions up to Streaming SIMD
++ Extensions 4.2 (SSE4.2) and Supplemental Streaming SIMD Extensions 3
++ (SSSE3), the POPCNT instruction, and CMPXCHG16B.
++
++ x86-64-v3 adds vector instructions up to AVX2, MOVBE, and additional
++ bit-manipulation instructions.
++
++ x86-64-v4 is not included since the kernel does not use AVX512 instructions
++
++ You can find the best version for your CPU by running one of the following:
++ /lib/ld-linux-x86-64.so.2 --help | grep supported
++ /lib64/ld-linux-x86-64.so.2 --help | grep supported
++
+ #
+ # Define implied options from the CPU selection here
+ config X86_INTERNODE_CACHE_SHIFT
+@@ -318,7 +657,7 @@ config X86_INTERNODE_CACHE_SHIFT
+ config X86_L1_CACHE_SHIFT
+ int
+ default "7" if MPENTIUM4 || MPSC
+- default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
++ default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MZEN4 || MZEN5 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE_CLIENT || MICELAKE_SERVER || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MRAPTORLAKE || MMETEORLAKE || MEMERALDRAPIDS || MNATIVE_INTEL || MNATIVE_AMD
+ default "4" if MELAN || M486SX || M486 || MGEODEGX1
+ default "5" if MWINCHIP3D || MWINCHIPC6 || MCRUSOE || MEFFICEON || MCYRIXIII || MK6 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || M586 || MVIAC3_2 || MGEODE_LX
+
+@@ -336,11 +675,11 @@ config X86_ALIGNMENT_16
+
+ config X86_INTEL_USERCOPY
+ def_bool y
+- depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2
++ depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE_CLIENT || MICELAKE_SERVER || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MRAPTORLAKE || MMETEORLAKE || MEMERALDRAPIDS || MNATIVE_INTEL
+
+ config X86_USE_PPRO_CHECKSUM
+ def_bool y
+- depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM
++ depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MZEN4 || MZEN5 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE_CLIENT || MICELAKE_SERVER || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MRAPTORLAKE || MMETEORLAKE || MEMERALDRAPIDS || MNATIVE_INTEL || MNATIVE_AMD
+
+ #
+ # P6_NOPs are a relatively minor optimization that require a family >=
+diff --git a/arch/x86/Makefile b/arch/x86/Makefile
+index 3419ffa2a350..aafb069de612 100644
+--- a/arch/x86/Makefile
++++ b/arch/x86/Makefile
+@@ -152,15 +152,98 @@ else
+ cflags-$(CONFIG_MK8) += -march=k8
+ cflags-$(CONFIG_MPSC) += -march=nocona
+ cflags-$(CONFIG_MCORE2) += -march=core2
+- cflags-$(CONFIG_MATOM) += -march=atom
+- cflags-$(CONFIG_GENERIC_CPU) += -mtune=generic
++ cflags-$(CONFIG_MATOM) += -march=bonnell
++ ifeq ($(CONFIG_X86_64_VERSION),1)
++ cflags-$(CONFIG_GENERIC_CPU) += -mtune=generic
++ rustflags-$(CONFIG_GENERIC_CPU) += -Ztune-cpu=generic
++ else
++ cflags-$(CONFIG_GENERIC_CPU) += -march=x86-64-v$(CONFIG_X86_64_VERSION)
++ rustflags-$(CONFIG_GENERIC_CPU) += -Ctarget-cpu=x86-64-v$(CONFIG_X86_64_VERSION)
++ endif
++ cflags-$(CONFIG_MK8SSE3) += -march=k8-sse3
++ cflags-$(CONFIG_MK10) += -march=amdfam10
++ cflags-$(CONFIG_MBARCELONA) += -march=barcelona
++ cflags-$(CONFIG_MBOBCAT) += -march=btver1
++ cflags-$(CONFIG_MJAGUAR) += -march=btver2
++ cflags-$(CONFIG_MBULLDOZER) += -march=bdver1
++ cflags-$(CONFIG_MPILEDRIVER) += -march=bdver2 -mno-tbm
++ cflags-$(CONFIG_MSTEAMROLLER) += -march=bdver3 -mno-tbm
++ cflags-$(CONFIG_MEXCAVATOR) += -march=bdver4 -mno-tbm
++ cflags-$(CONFIG_MZEN) += -march=znver1
++ cflags-$(CONFIG_MZEN2) += -march=znver2
++ cflags-$(CONFIG_MZEN3) += -march=znver3
++ cflags-$(CONFIG_MZEN4) += -march=znver4
++ cflags-$(CONFIG_MZEN5) += -march=znver5
++ cflags-$(CONFIG_MNATIVE_INTEL) += -march=native
++ cflags-$(CONFIG_MNATIVE_AMD) += -march=native -mno-tbm
++ cflags-$(CONFIG_MNEHALEM) += -march=nehalem
++ cflags-$(CONFIG_MWESTMERE) += -march=westmere
++ cflags-$(CONFIG_MSILVERMONT) += -march=silvermont
++ cflags-$(CONFIG_MGOLDMONT) += -march=goldmont
++ cflags-$(CONFIG_MGOLDMONTPLUS) += -march=goldmont-plus
++ cflags-$(CONFIG_MSANDYBRIDGE) += -march=sandybridge
++ cflags-$(CONFIG_MIVYBRIDGE) += -march=ivybridge
++ cflags-$(CONFIG_MHASWELL) += -march=haswell
++ cflags-$(CONFIG_MBROADWELL) += -march=broadwell
++ cflags-$(CONFIG_MSKYLAKE) += -march=skylake
++ cflags-$(CONFIG_MSKYLAKEX) += -march=skylake-avx512
++ cflags-$(CONFIG_MCANNONLAKE) += -march=cannonlake
++ cflags-$(CONFIG_MICELAKE_CLIENT) += -march=icelake-client
++ cflags-$(CONFIG_MICELAKE_SERVER) += -march=icelake-server
++ cflags-$(CONFIG_MCASCADELAKE) += -march=cascadelake
++ cflags-$(CONFIG_MCOOPERLAKE) += -march=cooperlake
++ cflags-$(CONFIG_MTIGERLAKE) += -march=tigerlake
++ cflags-$(CONFIG_MSAPPHIRERAPIDS) += -march=sapphirerapids
++ cflags-$(CONFIG_MROCKETLAKE) += -march=rocketlake
++ cflags-$(CONFIG_MALDERLAKE) += -march=alderlake
++ cflags-$(CONFIG_MRAPTORLAKE) += -march=raptorlake
++ cflags-$(CONFIG_MMETEORLAKE) += -march=meteorlake
++ cflags-$(CONFIG_MEMERALDRAPIDS) += -march=emeraldrapids
+ KBUILD_CFLAGS += $(cflags-y)
+
+ rustflags-$(CONFIG_MK8) += -Ctarget-cpu=k8
+ rustflags-$(CONFIG_MPSC) += -Ctarget-cpu=nocona
+ rustflags-$(CONFIG_MCORE2) += -Ctarget-cpu=core2
+ rustflags-$(CONFIG_MATOM) += -Ctarget-cpu=atom
+- rustflags-$(CONFIG_GENERIC_CPU) += -Ztune-cpu=generic
++ rustflags-$(CONFIG_MK8SSE3) += -Ctarget-cpu=k8-sse3
++ rustflags-$(CONFIG_MK10) += -Ctarget-cpu=amdfam10
++ rustflags-$(CONFIG_MBARCELONA) += -Ctarget-cpu=barcelona
++ rustflags-$(CONFIG_MBOBCAT) += -Ctarget-cpu=btver1
++ rustflags-$(CONFIG_MJAGUAR) += -Ctarget-cpu=btver2
++ rustflags-$(CONFIG_MBULLDOZER) += -Ctarget-cpu=bdver1
++ rustflags-$(CONFIG_MPILEDRIVER) += -Ctarget-cpu=bdver2
++ rustflags-$(CONFIG_MSTEAMROLLER) += -Ctarget-cpu=bdver3
++ rustflags-$(CONFIG_MEXCAVATOR) += -Ctarget-cpu=bdver4
++ rustflags-$(CONFIG_MZEN) += -Ctarget-cpu=znver1
++ rustflags-$(CONFIG_MZEN2) += -Ctarget-cpu=znver2
++ rustflags-$(CONFIG_MZEN3) += -Ctarget-cpu=znver3
++ rustflags-$(CONFIG_MZEN4) += -Ctarget-cpu=znver4
++ rustflags-$(CONFIG_MZEN5) += -Ctarget-cpu=znver5
++ rustflags-$(CONFIG_MNATIVE_INTEL) += -Ctarget-cpu=native
++ rustflags-$(CONFIG_MNATIVE_AMD) += -Ctarget-cpu=native
++ rustflags-$(CONFIG_MNEHALEM) += -Ctarget-cpu=nehalem
++ rustflags-$(CONFIG_MWESTMERE) += -Ctarget-cpu=westmere
++ rustflags-$(CONFIG_MSILVERMONT) += -Ctarget-cpu=silvermont
++ rustflags-$(CONFIG_MGOLDMONT) += -Ctarget-cpu=goldmont
++ rustflags-$(CONFIG_MGOLDMONTPLUS) += -Ctarget-cpu=goldmont-plus
++ rustflags-$(CONFIG_MSANDYBRIDGE) += -Ctarget-cpu=sandybridge
++ rustflags-$(CONFIG_MIVYBRIDGE) += -Ctarget-cpu=ivybridge
++ rustflags-$(CONFIG_MHASWELL) += -Ctarget-cpu=haswell
++ rustflags-$(CONFIG_MBROADWELL) += -Ctarget-cpu=broadwell
++ rustflags-$(CONFIG_MSKYLAKE) += -Ctarget-cpu=skylake
++ rustflags-$(CONFIG_MSKYLAKEX) += -Ctarget-cpu=skylake-avx512
++ rustflags-$(CONFIG_MCANNONLAKE) += -Ctarget-cpu=cannonlake
++ rustflags-$(CONFIG_MICELAKE_CLIENT) += -Ctarget-cpu=icelake-client
++ rustflags-$(CONFIG_MICELAKE_SERVER) += -Ctarget-cpu=icelake-server
++ rustflags-$(CONFIG_MCASCADELAKE) += -Ctarget-cpu=cascadelake
++ rustflags-$(CONFIG_MCOOPERLAKE) += -Ctarget-cpu=cooperlake
++ rustflags-$(CONFIG_MTIGERLAKE) += -Ctarget-cpu=tigerlake
++ rustflags-$(CONFIG_MSAPPHIRERAPIDS) += -Ctarget-cpu=sapphirerapids
++ rustflags-$(CONFIG_MROCKETLAKE) += -Ctarget-cpu=rocketlake
++ rustflags-$(CONFIG_MALDERLAKE) += -Ctarget-cpu=alderlake
++ rustflags-$(CONFIG_MRAPTORLAKE) += -Ctarget-cpu=raptorlake
++ rustflags-$(CONFIG_MMETEORLAKE) += -Ctarget-cpu=meteorlake
++ rustflags-$(CONFIG_MEMERALDRAPIDS) += -Ctarget-cpu=emeraldrapids
+ KBUILD_RUSTFLAGS += $(rustflags-y)
+
+ KBUILD_CFLAGS += -mno-red-zone
+diff --git a/arch/x86/include/asm/vermagic.h b/arch/x86/include/asm/vermagic.h
+index 75884d2cdec3..2fdae271f47f 100644
+--- a/arch/x86/include/asm/vermagic.h
++++ b/arch/x86/include/asm/vermagic.h
+@@ -17,6 +17,56 @@
+ #define MODULE_PROC_FAMILY "586MMX "
+ #elif defined CONFIG_MCORE2
+ #define MODULE_PROC_FAMILY "CORE2 "
++#elif defined CONFIG_MNATIVE_INTEL
++#define MODULE_PROC_FAMILY "NATIVE_INTEL "
++#elif defined CONFIG_MNATIVE_AMD
++#define MODULE_PROC_FAMILY "NATIVE_AMD "
++#elif defined CONFIG_MNEHALEM
++#define MODULE_PROC_FAMILY "NEHALEM "
++#elif defined CONFIG_MWESTMERE
++#define MODULE_PROC_FAMILY "WESTMERE "
++#elif defined CONFIG_MSILVERMONT
++#define MODULE_PROC_FAMILY "SILVERMONT "
++#elif defined CONFIG_MGOLDMONT
++#define MODULE_PROC_FAMILY "GOLDMONT "
++#elif defined CONFIG_MGOLDMONTPLUS
++#define MODULE_PROC_FAMILY "GOLDMONTPLUS "
++#elif defined CONFIG_MSANDYBRIDGE
++#define MODULE_PROC_FAMILY "SANDYBRIDGE "
++#elif defined CONFIG_MIVYBRIDGE
++#define MODULE_PROC_FAMILY "IVYBRIDGE "
++#elif defined CONFIG_MHASWELL
++#define MODULE_PROC_FAMILY "HASWELL "
++#elif defined CONFIG_MBROADWELL
++#define MODULE_PROC_FAMILY "BROADWELL "
++#elif defined CONFIG_MSKYLAKE
++#define MODULE_PROC_FAMILY "SKYLAKE "
++#elif defined CONFIG_MSKYLAKEX
++#define MODULE_PROC_FAMILY "SKYLAKEX "
++#elif defined CONFIG_MCANNONLAKE
++#define MODULE_PROC_FAMILY "CANNONLAKE "
++#elif defined CONFIG_MICELAKE_CLIENT
++#define MODULE_PROC_FAMILY "ICELAKE_CLIENT "
++#elif defined CONFIG_MICELAKE_SERVER
++#define MODULE_PROC_FAMILY "ICELAKE_SERVER "
++#elif defined CONFIG_MCASCADELAKE
++#define MODULE_PROC_FAMILY "CASCADELAKE "
++#elif defined CONFIG_MCOOPERLAKE
++#define MODULE_PROC_FAMILY "COOPERLAKE "
++#elif defined CONFIG_MTIGERLAKE
++#define MODULE_PROC_FAMILY "TIGERLAKE "
++#elif defined CONFIG_MSAPPHIRERAPIDS
++#define MODULE_PROC_FAMILY "SAPPHIRERAPIDS "
++#elif defined CONFIG_MROCKETLAKE
++#define MODULE_PROC_FAMILY "ROCKETLAKE "
++#elif defined CONFIG_MALDERLAKE
++#define MODULE_PROC_FAMILY "ALDERLAKE "
++#elif defined CONFIG_MRAPTORLAKE
++#define MODULE_PROC_FAMILY "RAPTORLAKE "
++#elif defined CONFIG_MMETEORLAKE
++#define MODULE_PROC_FAMILY "METEORLAKE "
++#elif defined CONFIG_MEMERALDRAPIDS
++#define MODULE_PROC_FAMILY "EMERALDRAPIDS "
+ #elif defined CONFIG_MATOM
+ #define MODULE_PROC_FAMILY "ATOM "
+ #elif defined CONFIG_M686
+@@ -35,6 +85,28 @@
+ #define MODULE_PROC_FAMILY "K7 "
+ #elif defined CONFIG_MK8
+ #define MODULE_PROC_FAMILY "K8 "
++#elif defined CONFIG_MK8SSE3
++#define MODULE_PROC_FAMILY "K8SSE3 "
++#elif defined CONFIG_MK10
++#define MODULE_PROC_FAMILY "K10 "
++#elif defined CONFIG_MBARCELONA
++#define MODULE_PROC_FAMILY "BARCELONA "
++#elif defined CONFIG_MBOBCAT
++#define MODULE_PROC_FAMILY "BOBCAT "
++#elif defined CONFIG_MBULLDOZER
++#define MODULE_PROC_FAMILY "BULLDOZER "
++#elif defined CONFIG_MPILEDRIVER
++#define MODULE_PROC_FAMILY "PILEDRIVER "
++#elif defined CONFIG_MSTEAMROLLER
++#define MODULE_PROC_FAMILY "STEAMROLLER "
++#elif defined CONFIG_MJAGUAR
++#define MODULE_PROC_FAMILY "JAGUAR "
++#elif defined CONFIG_MEXCAVATOR
++#define MODULE_PROC_FAMILY "EXCAVATOR "
++#elif defined CONFIG_MZEN
++#define MODULE_PROC_FAMILY "ZEN "
++#elif defined CONFIG_MZEN2
++#define MODULE_PROC_FAMILY "ZEN2 "
+ #elif defined CONFIG_MELAN
+ #define MODULE_PROC_FAMILY "ELAN "
+ #elif defined CONFIG_MCRUSOE
+--
+2.47.1
+
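The vermagic.h hunk above relies on plain preprocessor selection: the build defines exactly one of the mutually exclusive CONFIG_M* symbols, so exactly one branch of the #elif chain survives and sets MODULE_PROC_FAMILY to that family's tag. The following is a minimal, self-contained sketch of the same selection mechanism, not part of the patch; CONFIG_MZEN3 and CONFIG_MRAPTORLAKE are defined on the compiler command line here, and the GENERIC fallback exists only in this demo:

    /*
     * Illustration only -- mimics the #elif selection added to
     * arch/x86/include/asm/vermagic.h by the patch above.
     * Build with e.g.:  cc -DCONFIG_MZEN3 -o vermagic-demo vermagic-demo.c
     */
    #include <stdio.h>

    #if defined(CONFIG_MZEN3)
    #define MODULE_PROC_FAMILY "ZEN3 "
    #elif defined(CONFIG_MRAPTORLAKE)
    #define MODULE_PROC_FAMILY "RAPTORLAKE "
    #else
    #define MODULE_PROC_FAMILY "GENERIC "   /* demo-only fallback */
    #endif

    int main(void)
    {
        /* On configurations where the architecture folds MODULE_PROC_FAMILY
         * into the module vermagic, this string is what keeps modules and
         * kernel matched to a single CPU family. */
        printf("MODULE_PROC_FAMILY = \"%s\"\n", MODULE_PROC_FAMILY);
        return 0;
    }

Rebuilding with -DCONFIG_MRAPTORLAKE instead prints RAPTORLAKE, mirroring how a kernel configured for a different family ends up with a different MODULE_PROC_FAMILY string.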
* [gentoo-commits] proj/linux-patches:6.14 commit in: /
@ 2025-03-17 17:31 Mike Pagano
0 siblings, 0 replies; 21+ messages in thread
From: Mike Pagano @ 2025-03-17 17:31 UTC (permalink / raw
To: gentoo-commits
commit: 4552ebd04b070ec63b3e68be90f0c1625940f496
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Mar 17 17:31:26 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Mar 17 17:31:26 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=4552ebd0
Removed unneeded patches
Removed:
1750_KVM-x86-Snapshot-hosts-DEBUGCTL.patch
1751_KVM-SVM-Manually-zero-restore-DEBUGCTL.patch
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 8 --
1750_KVM-x86-Snapshot-hosts-DEBUGCTL.patch | 95 -----------------------
1751_KVM-SVM-Manually-zero-restore-DEBUGCTL.patch | 69 ----------------
3 files changed, 172 deletions(-)
diff --git a/0000_README b/0000_README
index a3cfadee..34375f37 100644
--- a/0000_README
+++ b/0000_README
@@ -59,14 +59,6 @@ Patch: 1740_x86-insn-decoder-test-allow-longer-symbol-names.patch
From: https://gitlab.com/cki-project/kernel-ark/-/commit/8d4a52c3921d278f27241fc0c6949d8fdc13a7f5
Desc: x86/insn_decoder_test: allow longer symbol-names
-Patch: 1750_KVM-x86-Snapshot-hosts-DEBUGCTL.patch
-From: https://bugzilla.kernel.org/show_bug.cgi?id=219787
-Desc: KVM: x86: Snapshot the host's DEBUGCTL in common x86
-
-Patch: 1751_KVM-SVM-Manually-zero-restore-DEBUGCTL.patch
-From: https://bugzilla.kernel.org/show_bug.cgi?id=219787
-Desc: KVM: SVM: Manually zero/restore DEBUGCTL if LBR virtualization is disabled
-
Patch: 2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
From: https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
Desc: Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
diff --git a/1750_KVM-x86-Snapshot-hosts-DEBUGCTL.patch b/1750_KVM-x86-Snapshot-hosts-DEBUGCTL.patch
deleted file mode 100644
index 0265460c..00000000
--- a/1750_KVM-x86-Snapshot-hosts-DEBUGCTL.patch
+++ /dev/null
@@ -1,95 +0,0 @@
-From d8595d6256fd46ece44b3433954e8545a0d199b8 Mon Sep 17 00:00:00 2001
-From: Sean Christopherson <seanjc@google.com>
-Date: Fri, 21 Feb 2025 07:45:22 -0800
-Subject: [PATCH 1/2] KVM: x86: Snapshot the host's DEBUGCTL in common x86
-
-Move KVM's snapshot of DEBUGCTL to kvm_vcpu_arch and take the snapshot in
-common x86, so that SVM can also use the snapshot.
-
-Opportunistically change the field to a u64. While bits 63:32 are reserved
-on AMD, not mentioned at all in Intel's SDM, and managed as an "unsigned
-long" by the kernel, DEBUGCTL is an MSR and therefore a 64-bit value.
-
-Cc: stable@vger.kernel.org
-Signed-off-by: Sean Christopherson <seanjc@google.com>
----
- arch/x86/include/asm/kvm_host.h | 1 +
- arch/x86/kvm/vmx/vmx.c | 8 ++------
- arch/x86/kvm/vmx/vmx.h | 2 --
- arch/x86/kvm/x86.c | 1 +
- 4 files changed, 4 insertions(+), 8 deletions(-)
-
-diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
-index 0b7af5902ff7..32ae3aa50c7e 100644
---- a/arch/x86/include/asm/kvm_host.h
-+++ b/arch/x86/include/asm/kvm_host.h
-@@ -780,6 +780,7 @@ struct kvm_vcpu_arch {
- u32 pkru;
- u32 hflags;
- u64 efer;
-+ u64 host_debugctl;
- u64 apic_base;
- struct kvm_lapic *apic; /* kernel irqchip context */
- bool load_eoi_exitmap_pending;
-diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
-index 6c56d5235f0f..3b92f893b239 100644
---- a/arch/x86/kvm/vmx/vmx.c
-+++ b/arch/x86/kvm/vmx/vmx.c
-@@ -1514,16 +1514,12 @@ void vmx_vcpu_load_vmcs(struct kvm_vcpu *vcpu, int cpu,
- */
- void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
- {
-- struct vcpu_vmx *vmx = to_vmx(vcpu);
--
- if (vcpu->scheduled_out && !kvm_pause_in_guest(vcpu->kvm))
- shrink_ple_window(vcpu);
-
- vmx_vcpu_load_vmcs(vcpu, cpu, NULL);
-
- vmx_vcpu_pi_load(vcpu, cpu);
--
-- vmx->host_debugctlmsr = get_debugctlmsr();
- }
-
- void vmx_vcpu_put(struct kvm_vcpu *vcpu)
-@@ -7458,8 +7454,8 @@ fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu, bool force_immediate_exit)
- }
-
- /* MSR_IA32_DEBUGCTLMSR is zeroed on vmexit. Restore it if needed */
-- if (vmx->host_debugctlmsr)
-- update_debugctlmsr(vmx->host_debugctlmsr);
-+ if (vcpu->arch.host_debugctl)
-+ update_debugctlmsr(vcpu->arch.host_debugctl);
-
- #ifndef CONFIG_X86_64
- /*
-diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
-index 8b111ce1087c..951e44dc9d0e 100644
---- a/arch/x86/kvm/vmx/vmx.h
-+++ b/arch/x86/kvm/vmx/vmx.h
-@@ -340,8 +340,6 @@ struct vcpu_vmx {
- /* apic deadline value in host tsc */
- u64 hv_deadline_tsc;
-
-- unsigned long host_debugctlmsr;
--
- /*
- * Only bits masked by msr_ia32_feature_control_valid_bits can be set in
- * msr_ia32_feature_control. FEAT_CTL_LOCKED is always included
-diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
-index 02159c967d29..5c6fd0edc41f 100644
---- a/arch/x86/kvm/x86.c
-+++ b/arch/x86/kvm/x86.c
-@@ -4968,6 +4968,7 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
-
- /* Save host pkru register if supported */
- vcpu->arch.host_pkru = read_pkru();
-+ vcpu->arch.host_debugctl = get_debugctlmsr();
-
- /* Apply any externally detected TSC adjustments (due to suspend) */
- if (unlikely(vcpu->arch.tsc_offset_adjustment)) {
-
-base-commit: 0ad2507d5d93f39619fc42372c347d6006b64319
---
-2.48.1.658.g4767266eb4-goog
-
diff --git a/1751_KVM-SVM-Manually-zero-restore-DEBUGCTL.patch b/1751_KVM-SVM-Manually-zero-restore-DEBUGCTL.patch
deleted file mode 100644
index e3ce9fe4..00000000
--- a/1751_KVM-SVM-Manually-zero-restore-DEBUGCTL.patch
+++ /dev/null
@@ -1,69 +0,0 @@
-From d02de0dfc6fd10f7bc4f7067fb9765c24948c737 Mon Sep 17 00:00:00 2001
-From: Sean Christopherson <seanjc@google.com>
-Date: Fri, 21 Feb 2025 08:16:36 -0800
-Subject: [PATCH 2/2] KVM: SVM: Manually zero/restore DEBUGCTL if LBR
- virtualization is disabled
-
-Manually zero DEBUGCTL prior to VMRUN if the host's value is non-zero and
-LBR virtualization is disabled, as hardware only context switches DEBUGCTL
-if LBR virtualization is fully enabled. Running the guest with the host's
-value has likely been mildly problematic for quite some time, e.g. it will
-result in undesirable behavior if host is running with BTF=1.
-
-But the bug became fatal with the introduction of Bus Lock Trap ("Detect"
-in kernel paralance) support for AMD (commit 408eb7417a92
-("x86/bus_lock: Add support for AMD")), as a bus lock in the guest will
-trigger an unexpected #DB.
-
-Note, KVM could suppress the bus lock #DB, i.e. simply resume the guest
-without injecting a #DB, but that wouldn't address things like BTF. And
-it appears that AMD CPUs incorrectly clear DR6_BUS_LOCK (it's active low)
-when delivering a #DB that is NOT a bus lock trap, and BUS_LOCK_DETECT is
-enabled in DEBUGCTL.
-
-Reported-by: rangemachine@gmail.com
-Reported-by: whanos@sergal.fun
-Closes: https://bugzilla.kernel.org/show_bug.cgi?id=219787
-Closes: https://lore.kernel.org/all/bug-219787-28872@https.bugzilla.kernel.org%2F
-Cc: Ravi Bangoria <ravi.bangoria@amd.com>
-Cc: stable@vger.kernel.org
-Signed-off-by: Sean Christopherson <seanjc@google.com>
----
- arch/x86/kvm/svm/svm.c | 14 ++++++++++++++
- 1 file changed, 14 insertions(+)
-
-diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
-index a713c803a3a3..a50ca1f17e31 100644
---- a/arch/x86/kvm/svm/svm.c
-+++ b/arch/x86/kvm/svm/svm.c
-@@ -4253,6 +4253,16 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu,
- clgi();
- kvm_load_guest_xsave_state(vcpu);
-
-+ /*
-+ * Hardware only context switches DEBUGCTL if LBR virtualization is
-+ * enabled. Manually zero DEBUGCTL if necessary (and restore it after)
-+ * VM-Exit, as running with the host's DEBUGCTL can negatively affect
-+ * guest state and can even be fatal, e.g. due to bus lock detect.
-+ */
-+ if (vcpu->arch.host_debugctl &&
-+ !(svm->vmcb->control.virt_ext & LBR_CTL_ENABLE_MASK))
-+ update_debugctlmsr(0);
-+
- kvm_wait_lapic_expire(vcpu);
-
- /*
-@@ -4280,6 +4290,10 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu,
- if (unlikely(svm->vmcb->control.exit_code == SVM_EXIT_NMI))
- kvm_before_interrupt(vcpu, KVM_HANDLING_NMI);
-
-+ if (vcpu->arch.host_debugctl &&
-+ !(svm->vmcb->control.virt_ext & LBR_CTL_ENABLE_MASK))
-+ update_debugctlmsr(vcpu->arch.host_debugctl);
-+
- kvm_load_host_xsave_state(vcpu);
- stgi();
-
---
-2.48.1.658.g4767266eb4-goog
-
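The patch removed above compensated for the fact that SVM hardware only context-switches DEBUGCTL when LBR virtualization is enabled: when the host value is non-zero and LBR virtualization is off, DEBUGCTL is cleared before entering the guest and restored after VM-Exit. Below is a stripped-down, self-contained sketch of that save/clear/restore pattern; read_debugctl(), write_debugctl() and run_guest() are hypothetical stand-ins for the kernel's MSR accessors and VMRUN path, and a plain flag replaces the VMCB LBR_CTL_ENABLE_MASK check:

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical model of the DEBUGCTL MSR; pretend the host enabled BTF. */
    static unsigned long long debugctl_msr = 0x2;

    static unsigned long long read_debugctl(void) { return debugctl_msr; }
    static void write_debugctl(unsigned long long v) { debugctl_msr = v; }
    static void run_guest(void)
    {
        printf("guest runs with DEBUGCTL=%#llx\n", debugctl_msr);
    }

    static void vcpu_run(bool lbr_virt_enabled)
    {
        unsigned long long host_debugctl = read_debugctl();

        /* Hardware will not swap DEBUGCTL for us unless LBR virtualization
         * is on, so clear it manually to keep host-only bits (e.g. BTF)
         * from leaking into the guest. */
        if (host_debugctl && !lbr_virt_enabled)
            write_debugctl(0);

        run_guest();

        /* Restore the host value once the guest has exited. */
        if (host_debugctl && !lbr_virt_enabled)
            write_debugctl(host_debugctl);
    }

    int main(void)
    {
        vcpu_run(false);
        printf("host DEBUGCTL restored to %#llx\n", read_debugctl());
        return 0;
    }

When LBR virtualization is enabled, this manual step is unnecessary, since the hardware then saves and restores DEBUGCTL across VMRUN on its own.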
* [gentoo-commits] proj/linux-patches:6.14 commit in: /
@ 2025-03-25 19:35 Mike Pagano
0 siblings, 0 replies; 21+ messages in thread
From: Mike Pagano @ 2025-03-25 19:35 UTC (permalink / raw
To: gentoo-commits
commit: fe5545b758c2f7ccd67049b5677cd0e3b6908796
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Mar 25 18:48:28 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Mar 25 19:34:52 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=fe5545b7
Fix ARM64 circular dependencies for KSPP setting
Bug: https://bugs.gentoo.org/952015
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
4567_distro-Gentoo-Kconfig.patch | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index 74e75c40..c308dca8 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -267,7 +267,7 @@
+ select ARM64_BTI
+ select ARM64_E0PD
+ select ARM64_EPAN if ARM64_PAN=y
-+ select ARM64_MTE if (ARM64_AS_HAS_MTE=y && ARM64_TAGGED_ADDR_ABI=y ) && ( AS_HAS_ARMV8_5=y ) && ( AS_HAS_LSE_ATOMICS=y ) && ( ARM64_PAN=y )
++ select ARM64_MTE if (ARM64_AS_HAS_MTE=y && ARM64_TAGGED_ADDR_ABI=y ) && ( AS_HAS_ARMV8_5=y ) && ( AS_HAS_LSE_ATOMICS=y )
+ select ARM64_PTR_AUTH
+ select ARM64_PTR_AUTH_KERNEL if ( ARM64_PTR_AUTH=y ) && (( CC_HAS_SIGN_RETURN_ADDRESS=y || CC_HAS_BRANCH_PROT_PAC_RET=y ) && AS_HAS_ARMV8_3=y ) && ( LD_IS_LLD=y || LD_VERSION >= 23301 || ( CC_IS_GCC=y && GCC_VERSION < 90100 )) && (CC_IS_CLANG=n || AS_HAS_CFI_NEGATE_RA_STATE=y ) && ((FUNCTION_GRAPH_TRACER=n || DYNAMIC_FTRACE_WITH_ARGS=y ))
+ select ARM64_BTI_KERNEL if ( ARM64_BTI=y ) && ( ARM64_PTR_AUTH_KERNEL=y ) && ( CC_HAS_BRANCH_PROT_PAC_RET_BTI=y ) && (CC_IS_GCC=n || GCC_VERSION >= 100100 ) && (CC_IS_GCC=n ) && ((FUNCTION_GRAPH_TRACE=n || DYNAMIC_FTRACE_WITH_ARG=y ))
* [gentoo-commits] proj/linux-patches:6.14 commit in: /
@ 2025-04-07 10:28 Mike Pagano
0 siblings, 0 replies; 21+ messages in thread
From: Mike Pagano @ 2025-04-07 10:28 UTC (permalink / raw
To: gentoo-commits
commit: 3815726101d2ccaca74262765bae03eae8f0301a
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Apr 7 10:28:20 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Apr 7 10:28:20 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=38157261
Linux patch 6.14.1
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 3 +
1000_linux-6.14.1.patch | 906 ++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 909 insertions(+)
diff --git a/0000_README b/0000_README
index 34375f37..e248ad77 100644
--- a/0000_README
+++ b/0000_README
@@ -42,6 +42,9 @@ EXPERIMENTAL
Individual Patch Descriptions:
--------------------------------------------------------------------------
+Patch: 1000_linux-6.14.1.patch
+From: https://www.kernel.org
+Desc: Linux 6.14.1
Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
diff --git a/1000_linux-6.14.1.patch b/1000_linux-6.14.1.patch
new file mode 100644
index 00000000..243d08ec
--- /dev/null
+++ b/1000_linux-6.14.1.patch
@@ -0,0 +1,906 @@
+diff --git a/Makefile b/Makefile
+index 8b6764d44a610f..3ede59c1146cb6 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 14
+-SUBLEVEL = 0
++SUBLEVEL = 1
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+diff --git a/drivers/counter/microchip-tcb-capture.c b/drivers/counter/microchip-tcb-capture.c
+index 2f096a5b973d18..c391ac38b99093 100644
+--- a/drivers/counter/microchip-tcb-capture.c
++++ b/drivers/counter/microchip-tcb-capture.c
+@@ -368,6 +368,25 @@ static int mchp_tc_probe(struct platform_device *pdev)
+ channel);
+ }
+
++ /* Disable Quadrature Decoder and position measure */
++ ret = regmap_update_bits(regmap, ATMEL_TC_BMR, ATMEL_TC_QDEN | ATMEL_TC_POSEN, 0);
++ if (ret)
++ return ret;
++
++ /* Setup the period capture mode */
++ ret = regmap_update_bits(regmap, ATMEL_TC_REG(priv->channel[0], CMR),
++ ATMEL_TC_WAVE | ATMEL_TC_ABETRG | ATMEL_TC_CMR_MASK |
++ ATMEL_TC_TCCLKS,
++ ATMEL_TC_CMR_MASK);
++ if (ret)
++ return ret;
++
++ /* Enable clock and trigger counter */
++ ret = regmap_write(regmap, ATMEL_TC_REG(priv->channel[0], CCR),
++ ATMEL_TC_CLKEN | ATMEL_TC_SWTRG);
++ if (ret)
++ return ret;
++
+ priv->tc_cfg = tcb_config;
+ priv->regmap = regmap;
+ counter->name = dev_name(&pdev->dev);
+diff --git a/drivers/counter/stm32-lptimer-cnt.c b/drivers/counter/stm32-lptimer-cnt.c
+index cf73f65baf606c..b249c8647639f7 100644
+--- a/drivers/counter/stm32-lptimer-cnt.c
++++ b/drivers/counter/stm32-lptimer-cnt.c
+@@ -58,37 +58,43 @@ static int stm32_lptim_set_enable_state(struct stm32_lptim_cnt *priv,
+ return 0;
+ }
+
++ ret = clk_enable(priv->clk);
++ if (ret)
++ goto disable_cnt;
++
+ /* LP timer must be enabled before writing CMP & ARR */
+ ret = regmap_write(priv->regmap, STM32_LPTIM_ARR, priv->ceiling);
+ if (ret)
+- return ret;
++ goto disable_clk;
+
+ ret = regmap_write(priv->regmap, STM32_LPTIM_CMP, 0);
+ if (ret)
+- return ret;
++ goto disable_clk;
+
+ /* ensure CMP & ARR registers are properly written */
+ ret = regmap_read_poll_timeout(priv->regmap, STM32_LPTIM_ISR, val,
+ (val & STM32_LPTIM_CMPOK_ARROK) == STM32_LPTIM_CMPOK_ARROK,
+ 100, 1000);
+ if (ret)
+- return ret;
++ goto disable_clk;
+
+ ret = regmap_write(priv->regmap, STM32_LPTIM_ICR,
+ STM32_LPTIM_CMPOKCF_ARROKCF);
+ if (ret)
+- return ret;
++ goto disable_clk;
+
+- ret = clk_enable(priv->clk);
+- if (ret) {
+- regmap_write(priv->regmap, STM32_LPTIM_CR, 0);
+- return ret;
+- }
+ priv->enabled = true;
+
+ /* Start LP timer in continuous mode */
+ return regmap_update_bits(priv->regmap, STM32_LPTIM_CR,
+ STM32_LPTIM_CNTSTRT, STM32_LPTIM_CNTSTRT);
++
++disable_clk:
++ clk_disable(priv->clk);
++disable_cnt:
++ regmap_write(priv->regmap, STM32_LPTIM_CR, 0);
++
++ return ret;
+ }
+
+ static int stm32_lptim_setup(struct stm32_lptim_cnt *priv, int enable)
+diff --git a/drivers/hid/hid-plantronics.c b/drivers/hid/hid-plantronics.c
+index 25cfd964dc25d9..acb9eb18f7ccfe 100644
+--- a/drivers/hid/hid-plantronics.c
++++ b/drivers/hid/hid-plantronics.c
+@@ -6,9 +6,6 @@
+ * Copyright (c) 2015-2018 Terry Junge <terry.junge@plantronics.com>
+ */
+
+-/*
+- */
+-
+ #include "hid-ids.h"
+
+ #include <linux/hid.h>
+@@ -23,30 +20,28 @@
+
+ #define PLT_VOL_UP 0x00b1
+ #define PLT_VOL_DOWN 0x00b2
++#define PLT_MIC_MUTE 0x00b5
+
+ #define PLT1_VOL_UP (PLT_HID_1_0_PAGE | PLT_VOL_UP)
+ #define PLT1_VOL_DOWN (PLT_HID_1_0_PAGE | PLT_VOL_DOWN)
++#define PLT1_MIC_MUTE (PLT_HID_1_0_PAGE | PLT_MIC_MUTE)
+ #define PLT2_VOL_UP (PLT_HID_2_0_PAGE | PLT_VOL_UP)
+ #define PLT2_VOL_DOWN (PLT_HID_2_0_PAGE | PLT_VOL_DOWN)
++#define PLT2_MIC_MUTE (PLT_HID_2_0_PAGE | PLT_MIC_MUTE)
++#define HID_TELEPHONY_MUTE (HID_UP_TELEPHONY | 0x2f)
++#define HID_CONSUMER_MUTE (HID_UP_CONSUMER | 0xe2)
+
+ #define PLT_DA60 0xda60
+ #define PLT_BT300_MIN 0x0413
+ #define PLT_BT300_MAX 0x0418
+
+-
+-#define PLT_ALLOW_CONSUMER (field->application == HID_CP_CONSUMERCONTROL && \
+- (usage->hid & HID_USAGE_PAGE) == HID_UP_CONSUMER)
+-
+-#define PLT_QUIRK_DOUBLE_VOLUME_KEYS BIT(0)
+-#define PLT_QUIRK_FOLLOWED_OPPOSITE_VOLUME_KEYS BIT(1)
+-
+ #define PLT_DOUBLE_KEY_TIMEOUT 5 /* ms */
+-#define PLT_FOLLOWED_OPPOSITE_KEY_TIMEOUT 220 /* ms */
+
+ struct plt_drv_data {
+ unsigned long device_type;
+- unsigned long last_volume_key_ts;
+- u32 quirks;
++ unsigned long last_key_ts;
++ unsigned long double_key_to;
++ __u16 last_key;
+ };
+
+ static int plantronics_input_mapping(struct hid_device *hdev,
+@@ -58,34 +53,43 @@ static int plantronics_input_mapping(struct hid_device *hdev,
+ unsigned short mapped_key;
+ struct plt_drv_data *drv_data = hid_get_drvdata(hdev);
+ unsigned long plt_type = drv_data->device_type;
++ int allow_mute = usage->hid == HID_TELEPHONY_MUTE;
++ int allow_consumer = field->application == HID_CP_CONSUMERCONTROL &&
++ (usage->hid & HID_USAGE_PAGE) == HID_UP_CONSUMER &&
++ usage->hid != HID_CONSUMER_MUTE;
+
+ /* special case for PTT products */
+ if (field->application == HID_GD_JOYSTICK)
+ goto defaulted;
+
+- /* handle volume up/down mapping */
+ /* non-standard types or multi-HID interfaces - plt_type is PID */
+ if (!(plt_type & HID_USAGE_PAGE)) {
+ switch (plt_type) {
+ case PLT_DA60:
+- if (PLT_ALLOW_CONSUMER)
++ if (allow_consumer)
+ goto defaulted;
+- goto ignored;
++ if (usage->hid == HID_CONSUMER_MUTE) {
++ mapped_key = KEY_MICMUTE;
++ goto mapped;
++ }
++ break;
+ default:
+- if (PLT_ALLOW_CONSUMER)
++ if (allow_consumer || allow_mute)
+ goto defaulted;
+ }
++ goto ignored;
+ }
+- /* handle standard types - plt_type is 0xffa0uuuu or 0xffa2uuuu */
+- /* 'basic telephony compliant' - allow default consumer page map */
+- else if ((plt_type & HID_USAGE) >= PLT_BASIC_TELEPHONY &&
+- (plt_type & HID_USAGE) != PLT_BASIC_EXCEPTION) {
+- if (PLT_ALLOW_CONSUMER)
+- goto defaulted;
+- }
+- /* not 'basic telephony' - apply legacy mapping */
+- /* only map if the field is in the device's primary vendor page */
+- else if (!((field->application ^ plt_type) & HID_USAGE_PAGE)) {
++
++ /* handle standard consumer control mapping */
++ /* and standard telephony mic mute mapping */
++ if (allow_consumer || allow_mute)
++ goto defaulted;
++
++ /* handle vendor unique types - plt_type is 0xffa0uuuu or 0xffa2uuuu */
++ /* if not 'basic telephony compliant' - map vendor unique controls */
++ if (!((plt_type & HID_USAGE) >= PLT_BASIC_TELEPHONY &&
++ (plt_type & HID_USAGE) != PLT_BASIC_EXCEPTION) &&
++ !((field->application ^ plt_type) & HID_USAGE_PAGE))
+ switch (usage->hid) {
+ case PLT1_VOL_UP:
+ case PLT2_VOL_UP:
+@@ -95,8 +99,11 @@ static int plantronics_input_mapping(struct hid_device *hdev,
+ case PLT2_VOL_DOWN:
+ mapped_key = KEY_VOLUMEDOWN;
+ goto mapped;
++ case PLT1_MIC_MUTE:
++ case PLT2_MIC_MUTE:
++ mapped_key = KEY_MICMUTE;
++ goto mapped;
+ }
+- }
+
+ /*
+ * Future mapping of call control or other usages,
+@@ -105,6 +112,8 @@ static int plantronics_input_mapping(struct hid_device *hdev,
+ */
+
+ ignored:
++ hid_dbg(hdev, "usage: %08x (appl: %08x) - ignored\n",
++ usage->hid, field->application);
+ return -1;
+
+ defaulted:
+@@ -123,38 +132,26 @@ static int plantronics_event(struct hid_device *hdev, struct hid_field *field,
+ struct hid_usage *usage, __s32 value)
+ {
+ struct plt_drv_data *drv_data = hid_get_drvdata(hdev);
++ unsigned long prev_tsto, cur_ts;
++ __u16 prev_key, cur_key;
+
+- if (drv_data->quirks & PLT_QUIRK_DOUBLE_VOLUME_KEYS) {
+- unsigned long prev_ts, cur_ts;
++ /* Usages are filtered in plantronics_usages. */
+
+- /* Usages are filtered in plantronics_usages. */
++ /* HZ too low for ms resolution - double key detection disabled */
++ /* or it is a key release - handle key presses only. */
++ if (!drv_data->double_key_to || !value)
++ return 0;
+
+- if (!value) /* Handle key presses only. */
+- return 0;
++ prev_tsto = drv_data->last_key_ts + drv_data->double_key_to;
++ cur_ts = drv_data->last_key_ts = jiffies;
++ prev_key = drv_data->last_key;
++ cur_key = drv_data->last_key = usage->code;
+
+- prev_ts = drv_data->last_volume_key_ts;
+- cur_ts = jiffies;
+- if (jiffies_to_msecs(cur_ts - prev_ts) <= PLT_DOUBLE_KEY_TIMEOUT)
+- return 1; /* Ignore the repeated key. */
+-
+- drv_data->last_volume_key_ts = cur_ts;
++ /* If the same key occurs in <= double_key_to -- ignore it */
++ if (prev_key == cur_key && time_before_eq(cur_ts, prev_tsto)) {
++ hid_dbg(hdev, "double key %d ignored\n", cur_key);
++ return 1; /* Ignore the repeated key. */
+ }
+- if (drv_data->quirks & PLT_QUIRK_FOLLOWED_OPPOSITE_VOLUME_KEYS) {
+- unsigned long prev_ts, cur_ts;
+-
+- /* Usages are filtered in plantronics_usages. */
+-
+- if (!value) /* Handle key presses only. */
+- return 0;
+-
+- prev_ts = drv_data->last_volume_key_ts;
+- cur_ts = jiffies;
+- if (jiffies_to_msecs(cur_ts - prev_ts) <= PLT_FOLLOWED_OPPOSITE_KEY_TIMEOUT)
+- return 1; /* Ignore the followed opposite volume key. */
+-
+- drv_data->last_volume_key_ts = cur_ts;
+- }
+-
+ return 0;
+ }
+
+@@ -196,12 +193,16 @@ static int plantronics_probe(struct hid_device *hdev,
+ ret = hid_parse(hdev);
+ if (ret) {
+ hid_err(hdev, "parse failed\n");
+- goto err;
++ return ret;
+ }
+
+ drv_data->device_type = plantronics_device_type(hdev);
+- drv_data->quirks = id->driver_data;
+- drv_data->last_volume_key_ts = jiffies - msecs_to_jiffies(PLT_DOUBLE_KEY_TIMEOUT);
++ drv_data->double_key_to = msecs_to_jiffies(PLT_DOUBLE_KEY_TIMEOUT);
++ drv_data->last_key_ts = jiffies - drv_data->double_key_to;
++
++ /* if HZ does not allow ms resolution - disable double key detection */
++ if (drv_data->double_key_to < PLT_DOUBLE_KEY_TIMEOUT)
++ drv_data->double_key_to = 0;
+
+ hid_set_drvdata(hdev, drv_data);
+
+@@ -210,29 +211,10 @@ static int plantronics_probe(struct hid_device *hdev,
+ if (ret)
+ hid_err(hdev, "hw start failed\n");
+
+-err:
+ return ret;
+ }
+
+ static const struct hid_device_id plantronics_devices[] = {
+- { HID_USB_DEVICE(USB_VENDOR_ID_PLANTRONICS,
+- USB_DEVICE_ID_PLANTRONICS_BLACKWIRE_3210_SERIES),
+- .driver_data = PLT_QUIRK_DOUBLE_VOLUME_KEYS },
+- { HID_USB_DEVICE(USB_VENDOR_ID_PLANTRONICS,
+- USB_DEVICE_ID_PLANTRONICS_BLACKWIRE_3220_SERIES),
+- .driver_data = PLT_QUIRK_DOUBLE_VOLUME_KEYS },
+- { HID_USB_DEVICE(USB_VENDOR_ID_PLANTRONICS,
+- USB_DEVICE_ID_PLANTRONICS_BLACKWIRE_3215_SERIES),
+- .driver_data = PLT_QUIRK_DOUBLE_VOLUME_KEYS },
+- { HID_USB_DEVICE(USB_VENDOR_ID_PLANTRONICS,
+- USB_DEVICE_ID_PLANTRONICS_BLACKWIRE_3225_SERIES),
+- .driver_data = PLT_QUIRK_DOUBLE_VOLUME_KEYS },
+- { HID_USB_DEVICE(USB_VENDOR_ID_PLANTRONICS,
+- USB_DEVICE_ID_PLANTRONICS_BLACKWIRE_3325_SERIES),
+- .driver_data = PLT_QUIRK_FOLLOWED_OPPOSITE_VOLUME_KEYS },
+- { HID_USB_DEVICE(USB_VENDOR_ID_PLANTRONICS,
+- USB_DEVICE_ID_PLANTRONICS_ENCOREPRO_500_SERIES),
+- .driver_data = PLT_QUIRK_FOLLOWED_OPPOSITE_VOLUME_KEYS },
+ { HID_USB_DEVICE(USB_VENDOR_ID_PLANTRONICS, HID_ANY_ID) },
+ { }
+ };
+@@ -241,6 +223,14 @@ MODULE_DEVICE_TABLE(hid, plantronics_devices);
+ static const struct hid_usage_id plantronics_usages[] = {
+ { HID_CP_VOLUMEUP, EV_KEY, HID_ANY_ID },
+ { HID_CP_VOLUMEDOWN, EV_KEY, HID_ANY_ID },
++ { HID_TELEPHONY_MUTE, EV_KEY, HID_ANY_ID },
++ { HID_CONSUMER_MUTE, EV_KEY, HID_ANY_ID },
++ { PLT2_VOL_UP, EV_KEY, HID_ANY_ID },
++ { PLT2_VOL_DOWN, EV_KEY, HID_ANY_ID },
++ { PLT2_MIC_MUTE, EV_KEY, HID_ANY_ID },
++ { PLT1_VOL_UP, EV_KEY, HID_ANY_ID },
++ { PLT1_VOL_DOWN, EV_KEY, HID_ANY_ID },
++ { PLT1_MIC_MUTE, EV_KEY, HID_ANY_ID },
+ { HID_TERMINATOR, HID_TERMINATOR, HID_TERMINATOR }
+ };
+
+diff --git a/drivers/memstick/host/rtsx_usb_ms.c b/drivers/memstick/host/rtsx_usb_ms.c
+index 6eb892fd4d3432..3878136227e49c 100644
+--- a/drivers/memstick/host/rtsx_usb_ms.c
++++ b/drivers/memstick/host/rtsx_usb_ms.c
+@@ -813,6 +813,7 @@ static void rtsx_usb_ms_drv_remove(struct platform_device *pdev)
+
+ host->eject = true;
+ cancel_work_sync(&host->handle_req);
++ cancel_delayed_work_sync(&host->poll_card);
+
+ mutex_lock(&host->host_mutex);
+ if (host->req) {
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index e9208a8d2bfa61..a5a4252ea19fdd 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -1365,9 +1365,11 @@ static const struct usb_device_id products[] = {
+ {QMI_QUIRK_SET_DTR(0x1bc7, 0x10a0, 0)}, /* Telit FN920C04 */
+ {QMI_QUIRK_SET_DTR(0x1bc7, 0x10a4, 0)}, /* Telit FN920C04 */
+ {QMI_QUIRK_SET_DTR(0x1bc7, 0x10a9, 0)}, /* Telit FN920C04 */
++ {QMI_QUIRK_SET_DTR(0x1bc7, 0x10b0, 0)}, /* Telit FE990B */
+ {QMI_QUIRK_SET_DTR(0x1bc7, 0x10c0, 0)}, /* Telit FE910C04 */
+ {QMI_QUIRK_SET_DTR(0x1bc7, 0x10c4, 0)}, /* Telit FE910C04 */
+ {QMI_QUIRK_SET_DTR(0x1bc7, 0x10c8, 0)}, /* Telit FE910C04 */
++ {QMI_QUIRK_SET_DTR(0x1bc7, 0x10d0, 0)}, /* Telit FN990B */
+ {QMI_FIXED_INTF(0x1bc7, 0x1100, 3)}, /* Telit ME910 */
+ {QMI_FIXED_INTF(0x1bc7, 0x1101, 3)}, /* Telit ME910 dual modem */
+ {QMI_FIXED_INTF(0x1bc7, 0x1200, 5)}, /* Telit LE920 */
+diff --git a/drivers/net/usb/usbnet.c b/drivers/net/usb/usbnet.c
+index 44179f4e807fc3..aeab2308b15008 100644
+--- a/drivers/net/usb/usbnet.c
++++ b/drivers/net/usb/usbnet.c
+@@ -178,6 +178,17 @@ int usbnet_get_ethernet_addr(struct usbnet *dev, int iMACAddress)
+ }
+ EXPORT_SYMBOL_GPL(usbnet_get_ethernet_addr);
+
++static bool usbnet_needs_usb_name_format(struct usbnet *dev, struct net_device *net)
++{
++ /* Point to point devices which don't have a real MAC address
++ * (or report a fake local one) have historically used the usb%d
++ * naming. Preserve this..
++ */
++ return (dev->driver_info->flags & FLAG_POINTTOPOINT) != 0 &&
++ (is_zero_ether_addr(net->dev_addr) ||
++ is_local_ether_addr(net->dev_addr));
++}
++
+ static void intr_complete (struct urb *urb)
+ {
+ struct usbnet *dev = urb->context;
+@@ -1762,13 +1773,11 @@ usbnet_probe (struct usb_interface *udev, const struct usb_device_id *prod)
+ if (status < 0)
+ goto out1;
+
+- // heuristic: "usb%d" for links we know are two-host,
+- // else "eth%d" when there's reasonable doubt. userspace
+- // can rename the link if it knows better.
++ /* heuristic: rename to "eth%d" if we are not sure this link
++ * is two-host (these links keep "usb%d")
++ */
+ if ((dev->driver_info->flags & FLAG_ETHER) != 0 &&
+- ((dev->driver_info->flags & FLAG_POINTTOPOINT) == 0 ||
+- /* somebody touched it*/
+- !is_zero_ether_addr(net->dev_addr)))
++ !usbnet_needs_usb_name_format(dev, net))
+ strscpy(net->name, "eth%d", sizeof(net->name));
+ /* WLAN devices should always be named "wlan%d" */
+ if ((dev->driver_info->flags & FLAG_WLAN) != 0)
+diff --git a/drivers/tty/serial/8250/8250_dma.c b/drivers/tty/serial/8250/8250_dma.c
+index f245a84f4a508d..bdd26c9f34bdf2 100644
+--- a/drivers/tty/serial/8250/8250_dma.c
++++ b/drivers/tty/serial/8250/8250_dma.c
+@@ -162,7 +162,7 @@ void serial8250_tx_dma_flush(struct uart_8250_port *p)
+ */
+ dma->tx_size = 0;
+
+- dmaengine_terminate_async(dma->rxchan);
++ dmaengine_terminate_async(dma->txchan);
+ }
+
+ int serial8250_rx_dma(struct uart_8250_port *p)
+diff --git a/drivers/tty/serial/8250/8250_pci.c b/drivers/tty/serial/8250/8250_pci.c
+index df4d0d832e5421..73c200127b089f 100644
+--- a/drivers/tty/serial/8250/8250_pci.c
++++ b/drivers/tty/serial/8250/8250_pci.c
+@@ -2727,6 +2727,22 @@ static struct pci_serial_quirk pci_serial_quirks[] = {
+ .init = pci_oxsemi_tornado_init,
+ .setup = pci_oxsemi_tornado_setup,
+ },
++ {
++ .vendor = PCI_VENDOR_ID_INTASHIELD,
++ .device = 0x4026,
++ .subvendor = PCI_ANY_ID,
++ .subdevice = PCI_ANY_ID,
++ .init = pci_oxsemi_tornado_init,
++ .setup = pci_oxsemi_tornado_setup,
++ },
++ {
++ .vendor = PCI_VENDOR_ID_INTASHIELD,
++ .device = 0x4021,
++ .subvendor = PCI_ANY_ID,
++ .subdevice = PCI_ANY_ID,
++ .init = pci_oxsemi_tornado_init,
++ .setup = pci_oxsemi_tornado_setup,
++ },
+ {
+ .vendor = PCI_VENDOR_ID_INTEL,
+ .device = 0x8811,
+@@ -5253,6 +5269,14 @@ static const struct pci_device_id serial_pci_tbl[] = {
+ PCI_ANY_ID, PCI_ANY_ID,
+ 0, 0,
+ pbn_b2_2_115200 },
++ { PCI_VENDOR_ID_INTASHIELD, 0x0BA2,
++ PCI_ANY_ID, PCI_ANY_ID,
++ 0, 0,
++ pbn_b2_2_115200 },
++ { PCI_VENDOR_ID_INTASHIELD, 0x0BA3,
++ PCI_ANY_ID, PCI_ANY_ID,
++ 0, 0,
++ pbn_b2_2_115200 },
+ /*
+ * Brainboxes UC-235/246
+ */
+@@ -5373,6 +5397,14 @@ static const struct pci_device_id serial_pci_tbl[] = {
+ PCI_ANY_ID, PCI_ANY_ID,
+ 0, 0,
+ pbn_b2_4_115200 },
++ { PCI_VENDOR_ID_INTASHIELD, 0x0C42,
++ PCI_ANY_ID, PCI_ANY_ID,
++ 0, 0,
++ pbn_b2_4_115200 },
++ { PCI_VENDOR_ID_INTASHIELD, 0x0C43,
++ PCI_ANY_ID, PCI_ANY_ID,
++ 0, 0,
++ pbn_b2_4_115200 },
+ /*
+ * Brainboxes UC-420
+ */
+@@ -5599,6 +5631,20 @@ static const struct pci_device_id serial_pci_tbl[] = {
+ PCI_ANY_ID, PCI_ANY_ID,
+ 0, 0,
+ pbn_oxsemi_1_15625000 },
++ /*
++ * Brainboxes XC-235
++ */
++ { PCI_VENDOR_ID_INTASHIELD, 0x4026,
++ PCI_ANY_ID, PCI_ANY_ID,
++ 0, 0,
++ pbn_oxsemi_1_15625000 },
++ /*
++ * Brainboxes XC-475
++ */
++ { PCI_VENDOR_ID_INTASHIELD, 0x4021,
++ PCI_ANY_ID, PCI_ANY_ID,
++ 0, 0,
++ pbn_oxsemi_1_15625000 },
+
+ /*
+ * Perle PCI-RAS cards
+diff --git a/drivers/tty/serial/fsl_lpuart.c b/drivers/tty/serial/fsl_lpuart.c
+index c91b9d9818cd83..79623b2482a04c 100644
+--- a/drivers/tty/serial/fsl_lpuart.c
++++ b/drivers/tty/serial/fsl_lpuart.c
+@@ -1484,6 +1484,19 @@ static int lpuart32_config_rs485(struct uart_port *port, struct ktermios *termio
+
+ unsigned long modem = lpuart32_read(&sport->port, UARTMODIR)
+ & ~(UARTMODIR_TXRTSPOL | UARTMODIR_TXRTSE);
++ u32 ctrl;
++
++ /* TXRTSE and TXRTSPOL only can be changed when transmitter is disabled. */
++ ctrl = lpuart32_read(&sport->port, UARTCTRL);
++ if (ctrl & UARTCTRL_TE) {
++ /* wait for the transmit engine to complete */
++ lpuart32_wait_bit_set(&sport->port, UARTSTAT, UARTSTAT_TC);
++ lpuart32_write(&sport->port, ctrl & ~UARTCTRL_TE, UARTCTRL);
++
++ while (lpuart32_read(&sport->port, UARTCTRL) & UARTCTRL_TE)
++ cpu_relax();
++ }
++
+ lpuart32_write(&sport->port, modem, UARTMODIR);
+
+ if (rs485->flags & SER_RS485_ENABLED) {
+@@ -1503,6 +1516,10 @@ static int lpuart32_config_rs485(struct uart_port *port, struct ktermios *termio
+ }
+
+ lpuart32_write(&sport->port, modem, UARTMODIR);
++
++ if (ctrl & UARTCTRL_TE)
++ lpuart32_write(&sport->port, ctrl, UARTCTRL);
++
+ return 0;
+ }
+
+diff --git a/drivers/tty/serial/stm32-usart.c b/drivers/tty/serial/stm32-usart.c
+index 1ec5d8c3aef8dd..0854ad8c90cd24 100644
+--- a/drivers/tty/serial/stm32-usart.c
++++ b/drivers/tty/serial/stm32-usart.c
+@@ -965,10 +965,8 @@ static void stm32_usart_start_tx(struct uart_port *port)
+ {
+ struct tty_port *tport = &port->state->port;
+
+- if (kfifo_is_empty(&tport->xmit_fifo) && !port->x_char) {
+- stm32_usart_rs485_rts_disable(port);
++ if (kfifo_is_empty(&tport->xmit_fifo) && !port->x_char)
+ return;
+- }
+
+ stm32_usart_rs485_rts_enable(port);
+
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index 965bffce301e24..5e89e9cdcec2e9 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -2865,6 +2865,10 @@ static int handle_tx_event(struct xhci_hcd *xhci,
+ if (!ep_seg) {
+
+ if (ep->skip && usb_endpoint_xfer_isoc(&td->urb->ep->desc)) {
++ /* this event is unlikely to match any TD, don't skip them all */
++ if (trb_comp_code == COMP_STOPPED_LENGTH_INVALID)
++ return 0;
++
+ skip_isoc_td(xhci, td, ep, status);
+ if (!list_empty(&ep_ring->td_list))
+ continue;
+diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
+index 779b01dee068f5..59c6c1c701b9c9 100644
+--- a/drivers/usb/host/xhci.h
++++ b/drivers/usb/host/xhci.h
+@@ -1759,11 +1759,20 @@ static inline void xhci_write_64(struct xhci_hcd *xhci,
+ }
+
+
+-/* Link TRB chain should always be set on 0.95 hosts, and AMD 0.96 ISOC rings */
++/*
++ * Reportedly, some chapters of v0.95 spec said that Link TRB always has its chain bit set.
++ * Other chapters and later specs say that it should only be set if the link is inside a TD
++ * which continues from the end of one segment to the next segment.
++ *
++ * Some 0.95 hardware was found to misbehave if any link TRB doesn't have the chain bit set.
++ *
++ * 0.96 hardware from AMD and NEC was found to ignore unchained isochronous link TRBs when
++ * "resynchronizing the pipe" after a Missed Service Error.
++ */
+ static inline bool xhci_link_chain_quirk(struct xhci_hcd *xhci, enum xhci_ring_type type)
+ {
+ return (xhci->quirks & XHCI_LINK_TRB_QUIRK) ||
+- (type == TYPE_ISOC && (xhci->quirks & XHCI_AMD_0x96_HOST));
++ (type == TYPE_ISOC && (xhci->quirks & (XHCI_AMD_0x96_HOST | XHCI_NEC_HOST)));
+ }
+
+ /* xHCI debugging */
+diff --git a/kernel/cgroup/rstat.c b/kernel/cgroup/rstat.c
+index aac91466279f17..3e01781aeb7bd0 100644
+--- a/kernel/cgroup/rstat.c
++++ b/kernel/cgroup/rstat.c
+@@ -612,36 +612,33 @@ static void cgroup_force_idle_show(struct seq_file *seq, struct cgroup_base_stat
+ void cgroup_base_stat_cputime_show(struct seq_file *seq)
+ {
+ struct cgroup *cgrp = seq_css(seq)->cgroup;
+- u64 usage, utime, stime, ntime;
++ struct cgroup_base_stat bstat;
+
+ if (cgroup_parent(cgrp)) {
+ cgroup_rstat_flush_hold(cgrp);
+- usage = cgrp->bstat.cputime.sum_exec_runtime;
++ bstat = cgrp->bstat;
+ cputime_adjust(&cgrp->bstat.cputime, &cgrp->prev_cputime,
+- &utime, &stime);
+- ntime = cgrp->bstat.ntime;
++ &bstat.cputime.utime, &bstat.cputime.stime);
+ cgroup_rstat_flush_release(cgrp);
+ } else {
+- /* cgrp->bstat of root is not actually used, reuse it */
+- root_cgroup_cputime(&cgrp->bstat);
+- usage = cgrp->bstat.cputime.sum_exec_runtime;
+- utime = cgrp->bstat.cputime.utime;
+- stime = cgrp->bstat.cputime.stime;
+- ntime = cgrp->bstat.ntime;
++ root_cgroup_cputime(&bstat);
+ }
+
+- do_div(usage, NSEC_PER_USEC);
+- do_div(utime, NSEC_PER_USEC);
+- do_div(stime, NSEC_PER_USEC);
+- do_div(ntime, NSEC_PER_USEC);
++ do_div(bstat.cputime.sum_exec_runtime, NSEC_PER_USEC);
++ do_div(bstat.cputime.utime, NSEC_PER_USEC);
++ do_div(bstat.cputime.stime, NSEC_PER_USEC);
++ do_div(bstat.ntime, NSEC_PER_USEC);
+
+ seq_printf(seq, "usage_usec %llu\n"
+ "user_usec %llu\n"
+ "system_usec %llu\n"
+ "nice_usec %llu\n",
+- usage, utime, stime, ntime);
++ bstat.cputime.sum_exec_runtime,
++ bstat.cputime.utime,
++ bstat.cputime.stime,
++ bstat.ntime);
+
+- cgroup_force_idle_show(seq, &cgrp->bstat);
++ cgroup_force_idle_show(seq, &bstat);
+ }
+
+ /* Add bpf kfuncs for cgroup_rstat_updated() and cgroup_rstat_flush() */
+diff --git a/net/atm/mpc.c b/net/atm/mpc.c
+index 324e3ab96bb393..12da0269275c54 100644
+--- a/net/atm/mpc.c
++++ b/net/atm/mpc.c
+@@ -1314,6 +1314,8 @@ static void MPOA_cache_impos_rcvd(struct k_message *msg,
+ holding_time = msg->content.eg_info.holding_time;
+ dprintk("(%s) entry = %p, holding_time = %u\n",
+ mpc->dev->name, entry, holding_time);
++ if (entry == NULL && !holding_time)
++ return;
+ if (entry == NULL && holding_time) {
+ entry = mpc->eg_ops->add_entry(msg, mpc);
+ mpc->eg_ops->put(entry);
+diff --git a/net/ipv6/netfilter/nf_socket_ipv6.c b/net/ipv6/netfilter/nf_socket_ipv6.c
+index a7690ec6232596..9ea5ef56cb2704 100644
+--- a/net/ipv6/netfilter/nf_socket_ipv6.c
++++ b/net/ipv6/netfilter/nf_socket_ipv6.c
+@@ -103,6 +103,10 @@ struct sock *nf_sk_lookup_slow_v6(struct net *net, const struct sk_buff *skb,
+ struct sk_buff *data_skb = NULL;
+ int doff = 0;
+ int thoff = 0, tproto;
++#if IS_ENABLED(CONFIG_NF_CONNTRACK)
++ enum ip_conntrack_info ctinfo;
++ struct nf_conn const *ct;
++#endif
+
+ tproto = ipv6_find_hdr(skb, &thoff, -1, NULL, NULL);
+ if (tproto < 0) {
+@@ -136,6 +140,25 @@ struct sock *nf_sk_lookup_slow_v6(struct net *net, const struct sk_buff *skb,
+ return NULL;
+ }
+
++#if IS_ENABLED(CONFIG_NF_CONNTRACK)
++ /* Do the lookup with the original socket address in
++ * case this is a reply packet of an established
++ * SNAT-ted connection.
++ */
++ ct = nf_ct_get(skb, &ctinfo);
++ if (ct &&
++ ((tproto != IPPROTO_ICMPV6 &&
++ ctinfo == IP_CT_ESTABLISHED_REPLY) ||
++ (tproto == IPPROTO_ICMPV6 &&
++ ctinfo == IP_CT_RELATED_REPLY)) &&
++ (ct->status & IPS_SRC_NAT_DONE)) {
++ daddr = &ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.src.u3.in6;
++ dport = (tproto == IPPROTO_TCP) ?
++ ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.src.u.tcp.port :
++ ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.src.u.udp.port;
++ }
++#endif
++
+ return nf_socket_get_sock_v6(net, data_skb, doff, tproto, saddr, daddr,
+ sport, dport, indev);
+ }
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index a84857a3c2bfbe..78aab243c8b655 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -10519,6 +10519,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x103c, 0x8811, "HP Spectre x360 15-eb1xxx", ALC285_FIXUP_HP_SPECTRE_X360_EB1),
+ SND_PCI_QUIRK(0x103c, 0x8812, "HP Spectre x360 15-eb1xxx", ALC285_FIXUP_HP_SPECTRE_X360_EB1),
+ SND_PCI_QUIRK(0x103c, 0x881d, "HP 250 G8 Notebook PC", ALC236_FIXUP_HP_MUTE_LED_COEFBIT2),
++ SND_PCI_QUIRK(0x103c, 0x881e, "HP Laptop 15s-du3xxx", ALC236_FIXUP_HP_MUTE_LED_COEFBIT2),
+ SND_PCI_QUIRK(0x103c, 0x8846, "HP EliteBook 850 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x8847, "HP EliteBook x360 830 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x884b, "HP EliteBook 840 Aero G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
+@@ -10783,6 +10784,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1043, 0x1d4e, "ASUS TM420", ALC256_FIXUP_ASUS_HPE),
+ SND_PCI_QUIRK(0x1043, 0x1da2, "ASUS UP6502ZA/ZD", ALC245_FIXUP_CS35L41_SPI_2),
+ SND_PCI_QUIRK(0x1043, 0x1df3, "ASUS UM5606WA", ALC294_FIXUP_BASS_SPEAKER_15),
++ SND_PCI_QUIRK(0x1043, 0x1264, "ASUS UM5606KA", ALC294_FIXUP_BASS_SPEAKER_15),
+ SND_PCI_QUIRK(0x1043, 0x1e02, "ASUS UX3402ZA", ALC245_FIXUP_CS35L41_SPI_2),
+ SND_PCI_QUIRK(0x1043, 0x1e11, "ASUS Zephyrus G15", ALC289_FIXUP_ASUS_GA502),
+ SND_PCI_QUIRK(0x1043, 0x1e12, "ASUS UM3402", ALC287_FIXUP_CS35L41_I2C_2),
+diff --git a/sound/usb/mixer_quirks.c b/sound/usb/mixer_quirks.c
+index ed6127b0389fff..3d36d22f8e9e6b 100644
+--- a/sound/usb/mixer_quirks.c
++++ b/sound/usb/mixer_quirks.c
+@@ -4223,6 +4223,52 @@ static void snd_dragonfly_quirk_db_scale(struct usb_mixer_interface *mixer,
+ }
+ }
+
++/*
++ * Some Plantronics headsets have control names that don't meet ALSA naming
++ * standards. This function fixes nonstandard source names. By the time
++ * this function is called the control name should look like one of these:
++ * "source names Playback Volume"
++ * "source names Playback Switch"
++ * "source names Capture Volume"
++ * "source names Capture Switch"
++ * If any of the trigger words are found in the name then the name will
++ * be changed to:
++ * "Headset Playback Volume"
++ * "Headset Playback Switch"
++ * "Headset Capture Volume"
++ * "Headset Capture Switch"
++ * depending on the current suffix.
++ */
++static void snd_fix_plt_name(struct snd_usb_audio *chip,
++ struct snd_ctl_elem_id *id)
++{
++ /* no variant of "Sidetone" should be added to this list */
++ static const char * const trigger[] = {
++ "Earphone", "Microphone", "Receive", "Transmit"
++ };
++ static const char * const suffix[] = {
++ " Playback Volume", " Playback Switch",
++ " Capture Volume", " Capture Switch"
++ };
++ int i;
++
++ for (i = 0; i < ARRAY_SIZE(trigger); i++)
++ if (strstr(id->name, trigger[i]))
++ goto triggered;
++ usb_audio_dbg(chip, "no change in %s\n", id->name);
++ return;
++
++triggered:
++ for (i = 0; i < ARRAY_SIZE(suffix); i++)
++ if (strstr(id->name, suffix[i])) {
++ usb_audio_dbg(chip, "fixing kctl name %s\n", id->name);
++ snprintf(id->name, sizeof(id->name), "Headset%s",
++ suffix[i]);
++ return;
++ }
++ usb_audio_dbg(chip, "something wrong in kctl name %s\n", id->name);
++}
++
+ void snd_usb_mixer_fu_apply_quirk(struct usb_mixer_interface *mixer,
+ struct usb_mixer_elem_info *cval, int unitid,
+ struct snd_kcontrol *kctl)
+@@ -4240,5 +4286,10 @@ void snd_usb_mixer_fu_apply_quirk(struct usb_mixer_interface *mixer,
+ cval->min_mute = 1;
+ break;
+ }
++
++ /* ALSA-ify some Plantronics headset control names */
++ if (USB_ID_VENDOR(mixer->chip->usb_id) == 0x047f &&
++ (cval->control == UAC_FU_MUTE || cval->control == UAC_FU_VOLUME))
++ snd_fix_plt_name(mixer->chip, &kctl->id);
+ }
+
+diff --git a/tools/perf/Documentation/intel-hybrid.txt b/tools/perf/Documentation/intel-hybrid.txt
+index e7a776ad25d719..0379903673a4ac 100644
+--- a/tools/perf/Documentation/intel-hybrid.txt
++++ b/tools/perf/Documentation/intel-hybrid.txt
+@@ -8,15 +8,15 @@ Part of events are available on core cpu, part of events are available
+ on atom cpu and even part of events are available on both.
+
+ Kernel exports two new cpu pmus via sysfs:
+-/sys/devices/cpu_core
+-/sys/devices/cpu_atom
++/sys/bus/event_source/devices/cpu_core
++/sys/bus/event_source/devices/cpu_atom
+
+ The 'cpus' files are created under the directories. For example,
+
+-cat /sys/devices/cpu_core/cpus
++cat /sys/bus/event_source/devices/cpu_core/cpus
+ 0-15
+
+-cat /sys/devices/cpu_atom/cpus
++cat /sys/bus/event_source/devices/cpu_atom/cpus
+ 16-23
+
+ It indicates cpu0-cpu15 are core cpus and cpu16-cpu23 are atom cpus.
+@@ -60,8 +60,8 @@ can't carry pmu information. So now this type is extended to be PMU aware
+ type. The PMU type ID is stored at attr.config[63:32].
+
+ PMU type ID is retrieved from sysfs.
+-/sys/devices/cpu_atom/type
+-/sys/devices/cpu_core/type
++/sys/bus/event_source/devices/cpu_atom/type
++/sys/bus/event_source/devices/cpu_core/type
+
+ The new attr.config layout for PERF_TYPE_HARDWARE:
+
+diff --git a/tools/perf/Documentation/perf-list.txt b/tools/perf/Documentation/perf-list.txt
+index d0c65fad419a07..c3ffd93f94d73b 100644
+--- a/tools/perf/Documentation/perf-list.txt
++++ b/tools/perf/Documentation/perf-list.txt
+@@ -188,7 +188,7 @@ in the CPU vendor specific documentation.
+
+ The available PMUs and their raw parameters can be listed with
+
+- ls /sys/devices/*/format
++ ls /sys/bus/event_source/devices/*/format
+
+ For example the raw event "LSD.UOPS" core pmu event above could
+ be specified as
+diff --git a/tools/perf/arch/x86/util/iostat.c b/tools/perf/arch/x86/util/iostat.c
+index 00f645a0c18ac9..7442a2cd87eda7 100644
+--- a/tools/perf/arch/x86/util/iostat.c
++++ b/tools/perf/arch/x86/util/iostat.c
+@@ -32,7 +32,7 @@
+ #define MAX_PATH 1024
+ #endif
+
+-#define UNCORE_IIO_PMU_PATH "devices/uncore_iio_%d"
++#define UNCORE_IIO_PMU_PATH "bus/event_source/devices/uncore_iio_%d"
+ #define SYSFS_UNCORE_PMU_PATH "%s/"UNCORE_IIO_PMU_PATH
+ #define PLATFORM_MAPPING_PATH UNCORE_IIO_PMU_PATH"/die%d"
+
+diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
+index 77e327d4a9a70c..68ea7589c143bd 100644
+--- a/tools/perf/builtin-stat.c
++++ b/tools/perf/builtin-stat.c
+@@ -97,7 +97,7 @@
+ #include <internal/threadmap.h>
+
+ #define DEFAULT_SEPARATOR " "
+-#define FREEZE_ON_SMI_PATH "devices/cpu/freeze_on_smi"
++#define FREEZE_ON_SMI_PATH "bus/event_source/devices/cpu/freeze_on_smi"
+
+ static void print_counters(struct timespec *ts, int argc, const char **argv);
+
+diff --git a/tools/perf/util/mem-events.c b/tools/perf/util/mem-events.c
+index 3692e988c86e5d..0277d3e1505c52 100644
+--- a/tools/perf/util/mem-events.c
++++ b/tools/perf/util/mem-events.c
+@@ -189,7 +189,7 @@ static bool perf_pmu__mem_events_supported(const char *mnt, struct perf_pmu *pmu
+ if (!e->event_name)
+ return true;
+
+- scnprintf(path, PATH_MAX, "%s/devices/%s/events/%s", mnt, pmu->name, e->event_name);
++ scnprintf(path, PATH_MAX, "%s/bus/event_source/devices/%s/events/%s", mnt, pmu->name, e->event_name);
+
+ return !stat(path, &st);
+ }
+diff --git a/tools/perf/util/pmu.c b/tools/perf/util/pmu.c
+index 6206c8fe2bf941..a8193ac8f2e7d0 100644
+--- a/tools/perf/util/pmu.c
++++ b/tools/perf/util/pmu.c
+@@ -36,12 +36,12 @@
+ #define UNIT_MAX_LEN 31 /* max length for event unit name */
+
+ enum event_source {
+- /* An event loaded from /sys/devices/<pmu>/events. */
++ /* An event loaded from /sys/bus/event_source/devices/<pmu>/events. */
+ EVENT_SRC_SYSFS,
+ /* An event loaded from a CPUID matched json file. */
+ EVENT_SRC_CPU_JSON,
+ /*
+- * An event loaded from a /sys/devices/<pmu>/identifier matched json
++ * An event loaded from a /sys/bus/event_source/devices/<pmu>/identifier matched json
+ * file.
+ */
+ EVENT_SRC_SYS_JSON,
* [gentoo-commits] proj/linux-patches:6.14 commit in: /
@ 2025-04-10 13:38 Mike Pagano
0 siblings, 0 replies; 21+ messages in thread
From: Mike Pagano @ 2025-04-10 13:38 UTC (permalink / raw
To: gentoo-commits
commit: a95fd77ad2318fd4fb9f5162c0649df68e29b0db
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Apr 10 13:37:45 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Apr 10 13:37:45 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=a95fd77a
Linux patch 6.14.2
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1001_linux-6.14.2.patch | 36703 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 36707 insertions(+)
diff --git a/0000_README b/0000_README
index e248ad77..c3c35403 100644
--- a/0000_README
+++ b/0000_README
@@ -46,6 +46,10 @@ Patch: 1000_linux-6.14.1.patch
From: https://www.kernel.org
Desc: Linux 6.14.1
+Patch: 1001_linux-6.14.2.patch
+From: https://www.kernel.org
+Desc: Linux 6.14.2
+
Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
Desc: Enable link security restrictions by default.
diff --git a/1001_linux-6.14.2.patch b/1001_linux-6.14.2.patch
new file mode 100644
index 00000000..6bbcf1fe
--- /dev/null
+++ b/1001_linux-6.14.2.patch
@@ -0,0 +1,36703 @@
+diff --git a/Documentation/devicetree/bindings/vendor-prefixes.yaml b/Documentation/devicetree/bindings/vendor-prefixes.yaml
+index 5079ca6ce1d1e9..b5979832ddce65 100644
+--- a/Documentation/devicetree/bindings/vendor-prefixes.yaml
++++ b/Documentation/devicetree/bindings/vendor-prefixes.yaml
+@@ -593,6 +593,8 @@ patternProperties:
+ description: GlobalTop Technology, Inc.
+ "^gmt,.*":
+ description: Global Mixed-mode Technology, Inc.
++ "^gocontroll,.*":
++ description: GOcontroll Modular Embedded Electronics B.V.
+ "^goldelico,.*":
+ description: Golden Delicious Computers GmbH & Co. KG
+ "^goodix,.*":
+diff --git a/Documentation/netlink/specs/netdev.yaml b/Documentation/netlink/specs/netdev.yaml
+index cbb544bd6c848c..901b5afb3df0d3 100644
+--- a/Documentation/netlink/specs/netdev.yaml
++++ b/Documentation/netlink/specs/netdev.yaml
+@@ -70,6 +70,10 @@ definitions:
+ name: tx-checksum
+ doc:
+ L3 checksum HW offload is supported by the driver.
++ -
++ name: tx-launch-time-fifo
++ doc:
++ Launch time HW offload is supported by the driver.
+ -
+ name: queue-type
+ type: enum
+diff --git a/Documentation/netlink/specs/rt_route.yaml b/Documentation/netlink/specs/rt_route.yaml
+index a674103e5bc4ea..292469c7d4b9f1 100644
+--- a/Documentation/netlink/specs/rt_route.yaml
++++ b/Documentation/netlink/specs/rt_route.yaml
+@@ -80,165 +80,167 @@ definitions:
+ attribute-sets:
+ -
+ name: route-attrs
++ name-prefix: rta-
+ attributes:
+ -
+- name: rta-dst
++ name: dst
+ type: binary
+ display-hint: ipv4
+ -
+- name: rta-src
++ name: src
+ type: binary
+ display-hint: ipv4
+ -
+- name: rta-iif
++ name: iif
+ type: u32
+ -
+- name: rta-oif
++ name: oif
+ type: u32
+ -
+- name: rta-gateway
++ name: gateway
+ type: binary
+ display-hint: ipv4
+ -
+- name: rta-priority
++ name: priority
+ type: u32
+ -
+- name: rta-prefsrc
++ name: prefsrc
+ type: binary
+ display-hint: ipv4
+ -
+- name: rta-metrics
++ name: metrics
+ type: nest
+- nested-attributes: rta-metrics
++ nested-attributes: metrics
+ -
+- name: rta-multipath
++ name: multipath
+ type: binary
+ -
+- name: rta-protoinfo # not used
++ name: protoinfo # not used
+ type: binary
+ -
+- name: rta-flow
++ name: flow
+ type: u32
+ -
+- name: rta-cacheinfo
++ name: cacheinfo
+ type: binary
+ struct: rta-cacheinfo
+ -
+- name: rta-session # not used
++ name: session # not used
+ type: binary
+ -
+- name: rta-mp-algo # not used
++ name: mp-algo # not used
+ type: binary
+ -
+- name: rta-table
++ name: table
+ type: u32
+ -
+- name: rta-mark
++ name: mark
+ type: u32
+ -
+- name: rta-mfc-stats
++ name: mfc-stats
+ type: binary
+ -
+- name: rta-via
++ name: via
+ type: binary
+ -
+- name: rta-newdst
++ name: newdst
+ type: binary
+ -
+- name: rta-pref
++ name: pref
+ type: u8
+ -
+- name: rta-encap-type
++ name: encap-type
+ type: u16
+ -
+- name: rta-encap
++ name: encap
+ type: binary # tunnel specific nest
+ -
+- name: rta-expires
++ name: expires
+ type: u32
+ -
+- name: rta-pad
++ name: pad
+ type: binary
+ -
+- name: rta-uid
++ name: uid
+ type: u32
+ -
+- name: rta-ttl-propagate
++ name: ttl-propagate
+ type: u8
+ -
+- name: rta-ip-proto
++ name: ip-proto
+ type: u8
+ -
+- name: rta-sport
++ name: sport
+ type: u16
+ -
+- name: rta-dport
++ name: dport
+ type: u16
+ -
+- name: rta-nh-id
++ name: nh-id
+ type: u32
+ -
+- name: rta-flowlabel
++ name: flowlabel
+ type: u32
+ byte-order: big-endian
+ display-hint: hex
+ -
+- name: rta-metrics
++ name: metrics
++ name-prefix: rtax-
+ attributes:
+ -
+- name: rtax-unspec
++ name: unspec
+ type: unused
+ value: 0
+ -
+- name: rtax-lock
++ name: lock
+ type: u32
+ -
+- name: rtax-mtu
++ name: mtu
+ type: u32
+ -
+- name: rtax-window
++ name: window
+ type: u32
+ -
+- name: rtax-rtt
++ name: rtt
+ type: u32
+ -
+- name: rtax-rttvar
++ name: rttvar
+ type: u32
+ -
+- name: rtax-ssthresh
++ name: ssthresh
+ type: u32
+ -
+- name: rtax-cwnd
++ name: cwnd
+ type: u32
+ -
+- name: rtax-advmss
++ name: advmss
+ type: u32
+ -
+- name: rtax-reordering
++ name: reordering
+ type: u32
+ -
+- name: rtax-hoplimit
++ name: hoplimit
+ type: u32
+ -
+- name: rtax-initcwnd
++ name: initcwnd
+ type: u32
+ -
+- name: rtax-features
++ name: features
+ type: u32
+ -
+- name: rtax-rto-min
++ name: rto-min
+ type: u32
+ -
+- name: rtax-initrwnd
++ name: initrwnd
+ type: u32
+ -
+- name: rtax-quickack
++ name: quickack
+ type: u32
+ -
+- name: rtax-cc-algo
++ name: cc-algo
+ type: string
+ -
+- name: rtax-fastopen-no-cookie
++ name: fastopen-no-cookie
+ type: u32
+
+ operations:
+@@ -254,18 +256,18 @@ operations:
+ value: 26
+ attributes:
+ - rtm-family
+- - rta-src
++ - src
+ - rtm-src-len
+- - rta-dst
++ - dst
+ - rtm-dst-len
+- - rta-iif
+- - rta-oif
+- - rta-ip-proto
+- - rta-sport
+- - rta-dport
+- - rta-mark
+- - rta-uid
+- - rta-flowlabel
++ - iif
++ - oif
++ - ip-proto
++ - sport
++ - dport
++ - mark
++ - uid
++ - flowlabel
+ reply:
+ value: 24
+ attributes: &all-route-attrs
+@@ -278,34 +280,34 @@ operations:
+ - rtm-scope
+ - rtm-type
+ - rtm-flags
+- - rta-dst
+- - rta-src
+- - rta-iif
+- - rta-oif
+- - rta-gateway
+- - rta-priority
+- - rta-prefsrc
+- - rta-metrics
+- - rta-multipath
+- - rta-flow
+- - rta-cacheinfo
+- - rta-table
+- - rta-mark
+- - rta-mfc-stats
+- - rta-via
+- - rta-newdst
+- - rta-pref
+- - rta-encap-type
+- - rta-encap
+- - rta-expires
+- - rta-pad
+- - rta-uid
+- - rta-ttl-propagate
+- - rta-ip-proto
+- - rta-sport
+- - rta-dport
+- - rta-nh-id
+- - rta-flowlabel
++ - dst
++ - src
++ - iif
++ - oif
++ - gateway
++ - priority
++ - prefsrc
++ - metrics
++ - multipath
++ - flow
++ - cacheinfo
++ - table
++ - mark
++ - mfc-stats
++ - via
++ - newdst
++ - pref
++ - encap-type
++ - encap
++ - expires
++ - pad
++ - uid
++ - ttl-propagate
++ - ip-proto
++ - sport
++ - dport
++ - nh-id
++ - flowlabel
+ dump:
+ request:
+ value: 26
+diff --git a/Documentation/networking/xsk-tx-metadata.rst b/Documentation/networking/xsk-tx-metadata.rst
+index e76b0cfc32f7d0..df53a10ccac34b 100644
+--- a/Documentation/networking/xsk-tx-metadata.rst
++++ b/Documentation/networking/xsk-tx-metadata.rst
+@@ -50,6 +50,10 @@ The flags field enables the particular offload:
+ checksum. ``csum_start`` specifies byte offset of where the checksumming
+ should start and ``csum_offset`` specifies byte offset where the
+ device should store the computed checksum.
++- ``XDP_TXMD_FLAGS_LAUNCH_TIME``: requests the device to schedule the
++ packet for transmission at a pre-determined time called launch time. The
++ value of launch time is indicated by ``launch_time`` field of
++ ``union xsk_tx_metadata``.
+
+ Besides the flags above, in order to trigger the offloads, the first
+ packet's ``struct xdp_desc`` descriptor should set ``XDP_TX_METADATA``
+@@ -65,6 +69,63 @@ In this case, when running in ``XDK_COPY`` mode, the TX checksum
+ is calculated on the CPU. Do not enable this option in production because
+ it will negatively affect performance.
+
++Launch Time
++===========
++
++The value of the requested launch time should be based on the device's PTP
++Hardware Clock (PHC) to ensure accuracy. AF_XDP takes a different data path
++compared to the ETF queuing discipline, which organizes packets and delays
++their transmission. Instead, AF_XDP immediately hands off the packets to
++the device driver without rearranging their order or holding them prior to
++transmission. Since the driver maintains FIFO behavior and does not perform
++packet reordering, a packet with a launch time request will block other
++packets in the same Tx Queue until it is sent. Therefore, it is recommended
++to allocate separate queue for scheduling traffic that is intended for
++future transmission.
++
++In scenarios where the launch time offload feature is disabled, the device
++driver is expected to disregard the launch time request. For correct
++interpretation and meaningful operation, the launch time should never be
++set to a value larger than the farthest programmable time in the future
++(the horizon). Different devices have different hardware limitations on the
++launch time offload feature.
++
++stmmac driver
++-------------
++
++For stmmac, TSO and launch time (TBS) features are mutually exclusive for
++each individual Tx Queue. By default, the driver configures Tx Queue 0 to
++support TSO and the rest of the Tx Queues to support TBS. The launch time
++hardware offload feature can be enabled or disabled by using the tc-etf
++command to call the driver's ndo_setup_tc() callback.
++
++The value of the launch time that is programmed in the Enhanced Normal
++Transmit Descriptors is a 32-bit value, where the most significant 8 bits
++represent the time in seconds and the remaining 24 bits represent the time
++in 256 ns increments. The programmed launch time is compared against the
++PTP time (bits[39:8]) and rolls over after 256 seconds. Therefore, the
++horizon of the launch time for dwmac4 and dwxlgmac2 is 128 seconds in the
++future.
++
++igc driver
++----------
++
++For igc, all four Tx Queues support the launch time feature. The launch
++time hardware offload feature can be enabled or disabled by using the
++tc-etf command to call the driver's ndo_setup_tc() callback. When entering
++TSN mode, the igc driver will reset the device and create a default Qbv
++schedule with a 1-second cycle time, with all Tx Queues open at all times.
++
++The value of the launch time that is programmed in the Advanced Transmit
++Context Descriptor is a relative offset to the starting time of the Qbv
++transmission window of the queue. The Frst flag of the descriptor can be
++set to schedule the packet for the next Qbv cycle. Therefore, the horizon
++of the launch time for i225 and i226 is the ending time of the next cycle
++of the Qbv transmission window of the queue. For example, when the Qbv
++cycle time is set to 1 second, the horizon of the launch time ranges
++from 1 second to 2 seconds, depending on where the Qbv cycle is currently
++running.
++
+ Querying Device Capabilities
+ ============================
+
+@@ -74,6 +135,7 @@ Refer to ``xsk-flags`` features bitmask in
+
+ - ``tx-timestamp``: device supports ``XDP_TXMD_FLAGS_TIMESTAMP``
+ - ``tx-checksum``: device supports ``XDP_TXMD_FLAGS_CHECKSUM``
++- ``tx-launch-time-fifo``: device supports ``XDP_TXMD_FLAGS_LAUNCH_TIME``
+
+ See ``tools/net/ynl/samples/netdev.c`` on how to query this information.
+
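+
+For readers of the launch-time documentation above, here is a minimal user-space
+sketch of how a producer could request launch-time scheduling. It is illustrative
+only: it assumes the ``struct xsk_tx_metadata`` / ``XDP_TXMD_FLAGS_LAUNCH_TIME``
+UAPI from ``include/uapi/linux/if_xdp.h`` (as extended by this patch) and an
+AF_XDP socket whose umem was registered with Tx metadata headroom in front of
+each frame; ``frame``, ``tx_metadata_len`` and ``phc_now_ns`` are placeholder
+names supplied by the application, not kernel APIs.
+
+  #include <linux/if_xdp.h>
+  #include <stdint.h>
+  #include <string.h>
+
+  /* Sketch only: ask that one descriptor be transmitted at an absolute PHC time. */
+  static void request_launch_time(void *frame, uint32_t tx_metadata_len,
+                                  struct xdp_desc *desc, uint64_t phc_now_ns)
+  {
+          /* Tx metadata lives in the headroom immediately before the packet data. */
+          struct xsk_tx_metadata *meta =
+                  (struct xsk_tx_metadata *)((char *)frame - tx_metadata_len);
+
+          memset(meta, 0, sizeof(*meta));
+          meta->flags = XDP_TXMD_FLAGS_LAUNCH_TIME;
+          /* Absolute PHC timestamp; keep it well inside the device horizon
+           * (e.g. 128 s for stmmac dwmac4/dwxlgmac2, the end of the next Qbv
+           * cycle for igc i225/i226). */
+          meta->request.launch_time = phc_now_ns + 500000; /* now + 500 us */
+
+          /* Mark the descriptor as carrying Tx metadata. */
+          desc->options |= XDP_TX_METADATA;
+  }
+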
+diff --git a/Makefile b/Makefile
+index 3ede59c1146cb6..907a4565f06ab4 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 14
+-SUBLEVEL = 1
++SUBLEVEL = 2
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
+index 835b5f100e926e..f3f6b7a33b7934 100644
+--- a/arch/arm/Kconfig
++++ b/arch/arm/Kconfig
+@@ -121,7 +121,7 @@ config ARM
+ select HAVE_KERNEL_XZ
+ select HAVE_KPROBES if !XIP_KERNEL && !CPU_ENDIAN_BE32 && !CPU_V7M
+ select HAVE_KRETPROBES if HAVE_KPROBES
+- select HAVE_LD_DEAD_CODE_DATA_ELIMINATION if (LD_VERSION >= 23600 || LD_IS_LLD)
++ select HAVE_LD_DEAD_CODE_DATA_ELIMINATION if (LD_VERSION >= 23600 || LD_CAN_USE_KEEP_IN_OVERLAY)
+ select HAVE_MOD_ARCH_SPECIFIC
+ select HAVE_NMI
+ select HAVE_OPTPROBES if !THUMB2_KERNEL
+diff --git a/arch/arm/boot/dts/nxp/imx/imx6ul-tqma6ul1-mba6ulx.dts b/arch/arm/boot/dts/nxp/imx/imx6ul-tqma6ul1-mba6ulx.dts
+index f2a5f17f312e58..2e7b96e7b791db 100644
+--- a/arch/arm/boot/dts/nxp/imx/imx6ul-tqma6ul1-mba6ulx.dts
++++ b/arch/arm/boot/dts/nxp/imx/imx6ul-tqma6ul1-mba6ulx.dts
+@@ -6,8 +6,9 @@
+
+ /dts-v1/;
+
+-#include "imx6ul-tqma6ul1.dtsi"
++#include "imx6ul-tqma6ul2.dtsi"
+ #include "mba6ulx.dtsi"
++#include "imx6ul-tqma6ul1.dtsi"
+
+ / {
+ model = "TQ-Systems TQMa6UL1 SoM on MBa6ULx board";
+diff --git a/arch/arm/boot/dts/nxp/imx/imx6ul-tqma6ul1.dtsi b/arch/arm/boot/dts/nxp/imx/imx6ul-tqma6ul1.dtsi
+index 24192d012ef7e6..79c8c5529135a4 100644
+--- a/arch/arm/boot/dts/nxp/imx/imx6ul-tqma6ul1.dtsi
++++ b/arch/arm/boot/dts/nxp/imx/imx6ul-tqma6ul1.dtsi
+@@ -4,8 +4,6 @@
+ * Author: Markus Niebel <Markus.Niebel@tq-group.com>
+ */
+
+-#include "imx6ul-tqma6ul2.dtsi"
+-
+ / {
+ model = "TQ-Systems TQMa6UL1 SoM";
+ compatible = "tq,imx6ul-tqma6ul1", "fsl,imx6ul";
+diff --git a/arch/arm/boot/dts/ti/omap/omap4-panda-a4.dts b/arch/arm/boot/dts/ti/omap/omap4-panda-a4.dts
+index 8fd076e5d1b019..4b8bfd0188add2 100644
+--- a/arch/arm/boot/dts/ti/omap/omap4-panda-a4.dts
++++ b/arch/arm/boot/dts/ti/omap/omap4-panda-a4.dts
+@@ -7,6 +7,11 @@
+ #include "omap443x.dtsi"
+ #include "omap4-panda-common.dtsi"
+
++/ {
++ model = "TI OMAP4 PandaBoard (A4)";
++ compatible = "ti,omap4-panda-a4", "ti,omap4-panda", "ti,omap4430", "ti,omap4";
++};
++
+ /* Pandaboard Rev A4+ have external pullups on SCL & SDA */
+ &dss_hdmi_pins {
+ pinctrl-single,pins = <
+diff --git a/arch/arm/include/asm/vmlinux.lds.h b/arch/arm/include/asm/vmlinux.lds.h
+index d60f6e83a9f700..14811b4f48ec8a 100644
+--- a/arch/arm/include/asm/vmlinux.lds.h
++++ b/arch/arm/include/asm/vmlinux.lds.h
+@@ -34,6 +34,12 @@
+ #define NOCROSSREFS
+ #endif
+
++#ifdef CONFIG_LD_CAN_USE_KEEP_IN_OVERLAY
++#define OVERLAY_KEEP(x) KEEP(x)
++#else
++#define OVERLAY_KEEP(x) x
++#endif
++
+ /* Set start/end symbol names to the LMA for the section */
+ #define ARM_LMA(sym, section) \
+ sym##_start = LOADADDR(section); \
+@@ -125,13 +131,13 @@
+ __vectors_lma = .; \
+ OVERLAY 0xffff0000 : NOCROSSREFS AT(__vectors_lma) { \
+ .vectors { \
+- *(.vectors) \
++ OVERLAY_KEEP(*(.vectors)) \
+ } \
+ .vectors.bhb.loop8 { \
+- *(.vectors.bhb.loop8) \
++ OVERLAY_KEEP(*(.vectors.bhb.loop8)) \
+ } \
+ .vectors.bhb.bpiall { \
+- *(.vectors.bhb.bpiall) \
++ OVERLAY_KEEP(*(.vectors.bhb.bpiall)) \
+ } \
+ } \
+ ARM_LMA(__vectors, .vectors); \
+diff --git a/arch/arm64/boot/dts/freescale/imx8mp-skov-reva.dtsi b/arch/arm64/boot/dts/freescale/imx8mp-skov-reva.dtsi
+index 59813ef8e2bb3a..7ae686d37ddaca 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mp-skov-reva.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mp-skov-reva.dtsi
+@@ -163,6 +163,19 @@ reg_vsd_3v3: regulator-vsd-3v3 {
+ };
+ };
+
++/*
++ * Board is passively cooled and heatsink is specced for continuous operation
++ * at 1.2 GHz only. Short bouts of 1.6 GHz are ok, but these should be done
++ * intentionally, not as part of suspend/resume cycles.
++ */
++&{/opp-table/opp-1600000000} {
++ /delete-property/ opp-suspend;
++};
++
++&{/opp-table/opp-1800000000} {
++ /delete-property/ opp-suspend;
++};
++
+ &A53_0 {
+ cpu-supply = <&reg_vdd_arm>;
+ };
+@@ -247,20 +260,20 @@ reg_vdd_soc: BUCK1 {
+
+ reg_vdd_arm: BUCK2 {
+ regulator-name = "VDD_ARM";
+- regulator-min-microvolt = <600000>;
+- regulator-max-microvolt = <2187500>;
++ regulator-min-microvolt = <850000>;
++ regulator-max-microvolt = <1000000>;
+ vin-supply = <&reg_5v_p>;
+ regulator-boot-on;
+ regulator-always-on;
+ regulator-ramp-delay = <3125>;
+- nxp,dvs-run-voltage = <950000>;
++ nxp,dvs-run-voltage = <850000>;
+ nxp,dvs-standby-voltage = <850000>;
+ };
+
+ reg_vdd_3v3: BUCK4 {
+ regulator-name = "VDD_3V3";
+- regulator-min-microvolt = <600000>;
+- regulator-max-microvolt = <3400000>;
++ regulator-min-microvolt = <3300000>;
++ regulator-max-microvolt = <3300000>;
+ vin-supply = <&reg_5v_p>;
+ regulator-boot-on;
+ regulator-always-on;
+@@ -268,8 +281,8 @@ reg_vdd_3v3: BUCK4 {
+
+ reg_vdd_1v8: BUCK5 {
+ regulator-name = "VDD_1V8";
+- regulator-min-microvolt = <600000>;
+- regulator-max-microvolt = <3400000>;
++ regulator-min-microvolt = <1800000>;
++ regulator-max-microvolt = <1800000>;
+ vin-supply = <&reg_5v_p>;
+ regulator-boot-on;
+ regulator-always-on;
+@@ -277,8 +290,8 @@ reg_vdd_1v8: BUCK5 {
+
+ reg_nvcc_dram_1v1: BUCK6 {
+ regulator-name = "NVCC_DRAM_1V1";
+- regulator-min-microvolt = <600000>;
+- regulator-max-microvolt = <3400000>;
++ regulator-min-microvolt = <1100000>;
++ regulator-max-microvolt = <1100000>;
+ vin-supply = <&reg_5v_p>;
+ regulator-boot-on;
+ regulator-always-on;
+@@ -286,8 +299,8 @@ reg_nvcc_dram_1v1: BUCK6 {
+
+ reg_nvcc_snvs_1v8: LDO1 {
+ regulator-name = "NVCC_SNVS_1V8";
+- regulator-min-microvolt = <1600000>;
+- regulator-max-microvolt = <3300000>;
++ regulator-min-microvolt = <1800000>;
++ regulator-max-microvolt = <1800000>;
+ vin-supply = <&reg_5v_p>;
+ regulator-boot-on;
+ regulator-always-on;
+@@ -295,8 +308,8 @@ reg_nvcc_snvs_1v8: LDO1 {
+
+ reg_vdda_1v8: LDO3 {
+ regulator-name = "VDDA_1V8";
+- regulator-min-microvolt = <800000>;
+- regulator-max-microvolt = <3300000>;
++ regulator-min-microvolt = <1800000>;
++ regulator-max-microvolt = <1800000>;
+ vin-supply = <&reg_5v_p>;
+ regulator-boot-on;
+ regulator-always-on;
+diff --git a/arch/arm64/boot/dts/freescale/imx8mp.dtsi b/arch/arm64/boot/dts/freescale/imx8mp.dtsi
+index e0d3b8cba221e8..54147bce3b8380 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mp.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mp.dtsi
+@@ -834,7 +834,7 @@ pgc_audio: power-domain@5 {
+ assigned-clock-parents = <&clk IMX8MP_SYS_PLL1_800M>,
+ <&clk IMX8MP_SYS_PLL1_800M>;
+ assigned-clock-rates = <400000000>,
+- <600000000>;
++ <800000000>;
+ };
+
+ pgc_gpu2d: power-domain@6 {
+@@ -1619,10 +1619,11 @@ audio_blk_ctrl: clock-controller@30e20000 {
+ <&clk IMX8MP_CLK_SAI3>,
+ <&clk IMX8MP_CLK_SAI5>,
+ <&clk IMX8MP_CLK_SAI6>,
+- <&clk IMX8MP_CLK_SAI7>;
++ <&clk IMX8MP_CLK_SAI7>,
++ <&clk IMX8MP_CLK_AUDIO_AXI_ROOT>;
+ clock-names = "ahb",
+ "sai1", "sai2", "sai3",
+- "sai5", "sai6", "sai7";
++ "sai5", "sai6", "sai7", "axi";
+ power-domains = <&pgc_audio>;
+ assigned-clocks = <&clk IMX8MP_AUDIO_PLL1>,
+ <&clk IMX8MP_AUDIO_PLL2>;
+diff --git a/arch/arm64/boot/dts/mediatek/mt6359.dtsi b/arch/arm64/boot/dts/mediatek/mt6359.dtsi
+index 150ad84d5d2b30..7b10f9c59819a9 100644
+--- a/arch/arm64/boot/dts/mediatek/mt6359.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt6359.dtsi
+@@ -15,7 +15,8 @@ pmic_adc: adc {
+ #io-channel-cells = <1>;
+ };
+
+- mt6359codec: mt6359codec {
++ mt6359codec: audio-codec {
++ compatible = "mediatek,mt6359-codec";
+ };
+
+ regulators {
+diff --git a/arch/arm64/boot/dts/mediatek/mt8173-elm.dtsi b/arch/arm64/boot/dts/mediatek/mt8173-elm.dtsi
+index b5d4b5baf4785f..0d995b342d4631 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8173-elm.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8173-elm.dtsi
+@@ -925,8 +925,6 @@ &pwm0 {
+ &pwrap {
+ pmic: pmic {
+ compatible = "mediatek,mt6397";
+- #address-cells = <1>;
+- #size-cells = <1>;
+ interrupts-extended = <&pio 11 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+diff --git a/arch/arm64/boot/dts/mediatek/mt8173.dtsi b/arch/arm64/boot/dts/mediatek/mt8173.dtsi
+index 3458be7f7f6114..0ca63e8c4e16ce 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8173.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8173.dtsi
+@@ -352,14 +352,14 @@ topckgen: clock-controller@10000000 {
+ #clock-cells = <1>;
+ };
+
+- infracfg: power-controller@10001000 {
++ infracfg: clock-controller@10001000 {
+ compatible = "mediatek,mt8173-infracfg", "syscon";
+ reg = <0 0x10001000 0 0x1000>;
+ #clock-cells = <1>;
+ #reset-cells = <1>;
+ };
+
+- pericfg: power-controller@10003000 {
++ pericfg: clock-controller@10003000 {
+ compatible = "mediatek,mt8173-pericfg", "syscon";
+ reg = <0 0x10003000 0 0x1000>;
+ #clock-cells = <1>;
+@@ -564,7 +564,7 @@ vpu: vpu@10020000 {
+ memory-region = <&vpu_dma_reserved>;
+ };
+
+- sysirq: intpol-controller@10200620 {
++ sysirq: interrupt-controller@10200620 {
+ compatible = "mediatek,mt8173-sysirq",
+ "mediatek,mt6577-sysirq";
+ interrupt-controller;
+diff --git a/arch/arm64/boot/dts/mediatek/mt8390-genio-700-evk.dts b/arch/arm64/boot/dts/mediatek/mt8390-genio-700-evk.dts
+index 04e4a2f73799d0..612336713a64ee 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8390-genio-700-evk.dts
++++ b/arch/arm64/boot/dts/mediatek/mt8390-genio-700-evk.dts
+@@ -8,1047 +8,16 @@
+ /dts-v1/;
+
+ #include "mt8188.dtsi"
+-#include "mt6359.dtsi"
+-#include <dt-bindings/gpio/gpio.h>
+-#include <dt-bindings/input/input.h>
+-#include <dt-bindings/interrupt-controller/irq.h>
+-#include <dt-bindings/pinctrl/mediatek,mt8188-pinfunc.h>
+-#include <dt-bindings/regulator/mediatek,mt6360-regulator.h>
+-#include <dt-bindings/spmi/spmi.h>
+-#include <dt-bindings/usb/pd.h>
++#include "mt8390-genio-common.dtsi"
+
+ / {
+ model = "MediaTek Genio-700 EVK";
+ compatible = "mediatek,mt8390-evk", "mediatek,mt8390",
+ "mediatek,mt8188";
+
+- aliases {
+- ethernet0 = &eth;
+- i2c0 = &i2c0;
+- i2c1 = &i2c1;
+- i2c2 = &i2c2;
+- i2c3 = &i2c3;
+- i2c4 = &i2c4;
+- i2c5 = &i2c5;
+- i2c6 = &i2c6;
+- mmc0 = &mmc0;
+- mmc1 = &mmc1;
+- serial0 = &uart0;
+- };
+-
+- chosen {
+- stdout-path = "serial0:921600n8";
+- };
+-
+- firmware {
+- optee {
+- compatible = "linaro,optee-tz";
+- method = "smc";
+- };
+- };
+-
+ memory@40000000 {
+ device_type = "memory";
+ reg = <0 0x40000000 0x2 0x00000000>;
+ };
+-
+- reserved-memory {
+- #address-cells = <2>;
+- #size-cells = <2>;
+- ranges;
+-
+- /*
+- * 12 MiB reserved for OP-TEE (BL32)
+- * +-----------------------+ 0x43e0_0000
+- * | SHMEM 2MiB |
+- * +-----------------------+ 0x43c0_0000
+- * | | TA_RAM 8MiB |
+- * + TZDRAM +--------------+ 0x4340_0000
+- * | | TEE_RAM 2MiB |
+- * +-----------------------+ 0x4320_0000
+- */
+- optee_reserved: optee@43200000 {
+- no-map;
+- reg = <0 0x43200000 0 0x00c00000>;
+- };
+-
+- scp_mem: memory@50000000 {
+- compatible = "shared-dma-pool";
+- reg = <0 0x50000000 0 0x2900000>;
+- no-map;
+- };
+-
+- /* 2 MiB reserved for ARM Trusted Firmware (BL31) */
+- bl31_secmon_reserved: memory@54600000 {
+- no-map;
+- reg = <0 0x54600000 0x0 0x200000>;
+- };
+-
+- apu_mem: memory@55000000 {
+- compatible = "shared-dma-pool";
+- reg = <0 0x55000000 0 0x1400000>; /* 20 MB */
+- };
+-
+- vpu_mem: memory@57000000 {
+- compatible = "shared-dma-pool";
+- reg = <0 0x57000000 0 0x1400000>; /* 20 MB */
+- };
+-
+- adsp_mem: memory@60000000 {
+- compatible = "shared-dma-pool";
+- reg = <0 0x60000000 0 0xf00000>;
+- no-map;
+- };
+-
+- afe_dma_mem: memory@60f00000 {
+- compatible = "shared-dma-pool";
+- reg = <0 0x60f00000 0 0x100000>;
+- no-map;
+- };
+-
+- adsp_dma_mem: memory@61000000 {
+- compatible = "shared-dma-pool";
+- reg = <0 0x61000000 0 0x100000>;
+- no-map;
+- };
+- };
+-
+- common_fixed_5v: regulator-0 {
+- compatible = "regulator-fixed";
+- regulator-name = "vdd_5v";
+- regulator-min-microvolt = <5000000>;
+- regulator-max-microvolt = <5000000>;
+- gpio = <&pio 10 GPIO_ACTIVE_HIGH>;
+- enable-active-high;
+- regulator-always-on;
+- vin-supply = <&reg_vsys>;
+- };
+-
+- edp_panel_fixed_3v3: regulator-1 {
+- compatible = "regulator-fixed";
+- regulator-name = "vedp_3v3";
+- regulator-min-microvolt = <3300000>;
+- regulator-max-microvolt = <3300000>;
+- enable-active-high;
+- gpio = <&pio 15 GPIO_ACTIVE_HIGH>;
+- pinctrl-names = "default";
+- pinctrl-0 = <&edp_panel_3v3_en_pins>;
+- vin-supply = <&reg_vsys>;
+- };
+-
+- gpio_fixed_3v3: regulator-2 {
+- compatible = "regulator-fixed";
+- regulator-name = "ext_3v3";
+- regulator-min-microvolt = <3300000>;
+- regulator-max-microvolt = <3300000>;
+- gpio = <&pio 9 GPIO_ACTIVE_HIGH>;
+- enable-active-high;
+- regulator-always-on;
+- vin-supply = <&reg_vsys>;
+- };
+-
+- /* system wide 4.2V power rail from charger */
+- reg_vsys: regulator-vsys {
+- compatible = "regulator-fixed";
+- regulator-name = "vsys";
+- regulator-always-on;
+- regulator-boot-on;
+- };
+-
+- /* used by mmc2 */
+- sdio_fixed_1v8: regulator-3 {
+- compatible = "regulator-fixed";
+- regulator-name = "vio18_conn";
+- regulator-min-microvolt = <1800000>;
+- regulator-max-microvolt = <1800000>;
+- enable-active-high;
+- regulator-always-on;
+- };
+-
+- /* used by mmc2 */
+- sdio_fixed_3v3: regulator-4 {
+- compatible = "regulator-fixed";
+- regulator-name = "wifi_3v3";
+- regulator-min-microvolt = <3300000>;
+- regulator-max-microvolt = <3300000>;
+- gpio = <&pio 74 GPIO_ACTIVE_HIGH>;
+- enable-active-high;
+- regulator-always-on;
+- vin-supply = <&reg_vsys>;
+- };
+-
+- touch0_fixed_3v3: regulator-5 {
+- compatible = "regulator-fixed";
+- regulator-name = "vio33_tp1";
+- regulator-min-microvolt = <3300000>;
+- regulator-max-microvolt = <3300000>;
+- gpio = <&pio 119 GPIO_ACTIVE_HIGH>;
+- enable-active-high;
+- vin-supply = <&reg_vsys>;
+- };
+-
+- usb_hub_fixed_3v3: regulator-6 {
+- compatible = "regulator-fixed";
+- regulator-name = "vhub_3v3";
+- regulator-min-microvolt = <3300000>;
+- regulator-max-microvolt = <3300000>;
+- gpio = <&pio 112 GPIO_ACTIVE_HIGH>; /* HUB_3V3_EN */
+- startup-delay-us = <10000>;
+- enable-active-high;
+- vin-supply = <&reg_vsys>;
+- };
+-
+- usb_p0_vbus: regulator-7 {
+- compatible = "regulator-fixed";
+- regulator-name = "vbus_p0";
+- regulator-min-microvolt = <5000000>;
+- regulator-max-microvolt = <5000000>;
+- gpio = <&pio 84 GPIO_ACTIVE_HIGH>;
+- enable-active-high;
+- vin-supply = <&reg_vsys>;
+- };
+-
+- usb_p1_vbus: regulator-8 {
+- compatible = "regulator-fixed";
+- regulator-name = "vbus_p1";
+- regulator-min-microvolt = <5000000>;
+- regulator-max-microvolt = <5000000>;
+- gpio = <&pio 87 GPIO_ACTIVE_HIGH>;
+- enable-active-high;
+- vin-supply = <&reg_vsys>;
+- };
+-
+- /* used by ssusb2 */
+- usb_p2_vbus: regulator-9 {
+- compatible = "regulator-fixed";
+- regulator-name = "wifi_3v3";
+- regulator-min-microvolt = <5000000>;
+- regulator-max-microvolt = <5000000>;
+- enable-active-high;
+- };
+-};
+-
+-&adsp {
+- memory-region = <&adsp_dma_mem>, <&adsp_mem>;
+- status = "okay";
+-};
+-
+-&afe {
+- memory-region = <&afe_dma_mem>;
+- status = "okay";
+-};
+-
+-&gpu {
+- mali-supply = <&mt6359_vproc2_buck_reg>;
+- status = "okay";
+-};
+-
+-&i2c0 {
+- pinctrl-names = "default";
+- pinctrl-0 = <&i2c0_pins>;
+- clock-frequency = <400000>;
+- status = "okay";
+-
+- touchscreen@5d {
+- compatible = "goodix,gt9271";
+- reg = <0x5d>;
+- interrupt-parent = <&pio>;
+- interrupts-extended = <&pio 6 IRQ_TYPE_EDGE_RISING>;
+- irq-gpios = <&pio 6 GPIO_ACTIVE_HIGH>;
+- reset-gpios = <&pio 5 GPIO_ACTIVE_HIGH>;
+- AVDD28-supply = <&touch0_fixed_3v3>;
+- VDDIO-supply = <&mt6359_vio18_ldo_reg>;
+- pinctrl-names = "default";
+- pinctrl-0 = <&touch_pins>;
+- };
+-};
+-
+-&i2c1 {
+- pinctrl-names = "default";
+- pinctrl-0 = <&i2c1_pins>;
+- clock-frequency = <400000>;
+- status = "okay";
+-};
+-
+-&i2c2 {
+- pinctrl-names = "default";
+- pinctrl-0 = <&i2c2_pins>;
+- clock-frequency = <400000>;
+- status = "okay";
+-};
+-
+-&i2c3 {
+- pinctrl-names = "default";
+- pinctrl-0 = <&i2c3_pins>;
+- clock-frequency = <400000>;
+- status = "okay";
+-};
+-
+-&i2c4 {
+- pinctrl-names = "default";
+- pinctrl-0 = <&i2c4_pins>;
+- clock-frequency = <1000000>;
+- status = "okay";
+-};
+-
+-&i2c5 {
+- pinctrl-names = "default";
+- pinctrl-0 = <&i2c5_pins>;
+- clock-frequency = <400000>;
+- status = "okay";
+-};
+-
+-&i2c6 {
+- pinctrl-names = "default";
+- pinctrl-0 = <&i2c6_pins>;
+- clock-frequency = <400000>;
+- status = "okay";
+-};
+-
+-&mfg0 {
+- domain-supply = <&mt6359_vproc2_buck_reg>;
+-};
+-
+-&mfg1 {
+- domain-supply = <&mt6359_vsram_others_ldo_reg>;
+-};
+-
+-&mmc0 {
+- status = "okay";
+- pinctrl-names = "default", "state_uhs";
+- pinctrl-0 = <&mmc0_default_pins>;
+- pinctrl-1 = <&mmc0_uhs_pins>;
+- bus-width = <8>;
+- max-frequency = <200000000>;
+- cap-mmc-highspeed;
+- mmc-hs200-1_8v;
+- mmc-hs400-1_8v;
+- supports-cqe;
+- cap-mmc-hw-reset;
+- no-sdio;
+- no-sd;
+- hs400-ds-delay = <0x1481b>;
+- vmmc-supply = <&mt6359_vemc_1_ldo_reg>;
+- vqmmc-supply = <&mt6359_vufs_ldo_reg>;
+- non-removable;
+-};
+-
+-&mmc1 {
+- status = "okay";
+- pinctrl-names = "default", "state_uhs";
+- pinctrl-0 = <&mmc1_default_pins>;
+- pinctrl-1 = <&mmc1_uhs_pins>;
+- bus-width = <4>;
+- max-frequency = <200000000>;
+- cap-sd-highspeed;
+- sd-uhs-sdr50;
+- sd-uhs-sdr104;
+- no-mmc;
+- no-sdio;
+- cd-gpios = <&pio 2 GPIO_ACTIVE_LOW>;
+- vmmc-supply = <&mt6359_vpa_buck_reg>;
+- vqmmc-supply = <&mt6359_vsim1_ldo_reg>;
+-};
+-
+-&mt6359_vbbck_ldo_reg {
+- regulator-always-on;
+-};
+-
+-&mt6359_vcn18_ldo_reg {
+- regulator-name = "vcn18_pmu";
+- regulator-always-on;
+-};
+-
+-&mt6359_vcn33_2_bt_ldo_reg {
+- regulator-name = "vcn33_2_pmu";
+- regulator-always-on;
+-};
+-
+-&mt6359_vcore_buck_reg {
+- regulator-name = "dvdd_proc_l";
+- regulator-always-on;
+-};
+-
+-&mt6359_vgpu11_buck_reg {
+- regulator-name = "dvdd_core";
+- regulator-always-on;
+-};
+-
+-&mt6359_vpa_buck_reg {
+- regulator-name = "vpa_pmu";
+- regulator-max-microvolt = <3100000>;
+-};
+-
+-&mt6359_vproc2_buck_reg {
+- /* The name "vgpu" is required by mtk-regulator-coupler */
+- regulator-name = "vgpu";
+- regulator-min-microvolt = <550000>;
+- regulator-max-microvolt = <800000>;
+- regulator-coupled-with = <&mt6359_vsram_others_ldo_reg>;
+- regulator-coupled-max-spread = <6250>;
+-};
+-
+-&mt6359_vpu_buck_reg {
+- regulator-name = "dvdd_adsp";
+- regulator-always-on;
+-};
+-
+-&mt6359_vrf12_ldo_reg {
+- regulator-name = "va12_abb2_pmu";
+- regulator-always-on;
+-};
+-
+-&mt6359_vsim1_ldo_reg {
+- regulator-name = "vsim1_pmu";
+- regulator-enable-ramp-delay = <480>;
+-};
+-
+-&mt6359_vsram_others_ldo_reg {
+- /* The name "vsram_gpu" is required by mtk-regulator-coupler */
+- regulator-name = "vsram_gpu";
+- regulator-min-microvolt = <750000>;
+- regulator-max-microvolt = <800000>;
+- regulator-coupled-with = <&mt6359_vproc2_buck_reg>;
+- regulator-coupled-max-spread = <6250>;
+-};
+-
+-&mt6359_vufs_ldo_reg {
+- regulator-name = "vufs18_pmu";
+- regulator-always-on;
+-};
+-
+-&mt6359codec {
+- mediatek,mic-type-0 = <1>; /* ACC */
+- mediatek,mic-type-1 = <3>; /* DCC */
+-};
+-
+-&pcie {
+- pinctrl-names = "default";
+- pinctrl-0 = <&pcie_pins_default>;
+- status = "okay";
+-};
+-
+-&pciephy {
+- status = "okay";
+-};
+-
+-&pio {
+- audio_default_pins: audio-default-pins {
+- pins-cmd-dat {
+- pinmux = <PINMUX_GPIO101__FUNC_O_AUD_CLK_MOSI>,
+- <PINMUX_GPIO102__FUNC_O_AUD_SYNC_MOSI>,
+- <PINMUX_GPIO103__FUNC_O_AUD_DAT_MOSI0>,
+- <PINMUX_GPIO104__FUNC_O_AUD_DAT_MOSI1>,
+- <PINMUX_GPIO105__FUNC_I0_AUD_DAT_MISO0>,
+- <PINMUX_GPIO106__FUNC_I0_AUD_DAT_MISO1>,
+- <PINMUX_GPIO107__FUNC_B0_I2SIN_MCK>,
+- <PINMUX_GPIO108__FUNC_B0_I2SIN_BCK>,
+- <PINMUX_GPIO109__FUNC_B0_I2SIN_WS>,
+- <PINMUX_GPIO110__FUNC_I0_I2SIN_D0>,
+- <PINMUX_GPIO114__FUNC_O_I2SO2_MCK>,
+- <PINMUX_GPIO115__FUNC_B0_I2SO2_BCK>,
+- <PINMUX_GPIO116__FUNC_B0_I2SO2_WS>,
+- <PINMUX_GPIO117__FUNC_O_I2SO2_D0>,
+- <PINMUX_GPIO118__FUNC_O_I2SO2_D1>,
+- <PINMUX_GPIO121__FUNC_B0_PCM_CLK>,
+- <PINMUX_GPIO122__FUNC_B0_PCM_SYNC>,
+- <PINMUX_GPIO124__FUNC_I0_PCM_DI>,
+- <PINMUX_GPIO125__FUNC_O_DMIC1_CLK>,
+- <PINMUX_GPIO126__FUNC_I0_DMIC1_DAT>,
+- <PINMUX_GPIO128__FUNC_O_DMIC2_CLK>,
+- <PINMUX_GPIO129__FUNC_I0_DMIC2_DAT>;
+- };
+- };
+-
+- dptx_pins: dptx-pins {
+- pins-cmd-dat {
+- pinmux = <PINMUX_GPIO46__FUNC_I0_DP_TX_HPD>;
+- bias-pull-up;
+- };
+- };
+-
+- edp_panel_3v3_en_pins: edp-panel-3v3-en-pins {
+- pins1 {
+- pinmux = <PINMUX_GPIO15__FUNC_B_GPIO15>;
+- output-high;
+- };
+- };
+-
+- eth_default_pins: eth-default-pins {
+- pins-cc {
+- pinmux = <PINMUX_GPIO139__FUNC_B0_GBE_TXC>,
+- <PINMUX_GPIO140__FUNC_I0_GBE_RXC>,
+- <PINMUX_GPIO141__FUNC_I0_GBE_RXDV>,
+- <PINMUX_GPIO142__FUNC_O_GBE_TXEN>;
+- drive-strength = <8>;
+- };
+-
+- pins-mdio {
+- pinmux = <PINMUX_GPIO143__FUNC_O_GBE_MDC>,
+- <PINMUX_GPIO144__FUNC_B1_GBE_MDIO>;
+- drive-strength = <8>;
+- input-enable;
+- };
+-
+- pins-power {
+- pinmux = <PINMUX_GPIO145__FUNC_B_GPIO145>,
+- <PINMUX_GPIO146__FUNC_B_GPIO146>;
+- output-high;
+- };
+-
+- pins-rxd {
+- pinmux = <PINMUX_GPIO135__FUNC_I0_GBE_RXD3>,
+- <PINMUX_GPIO136__FUNC_I0_GBE_RXD2>,
+- <PINMUX_GPIO137__FUNC_I0_GBE_RXD1>,
+- <PINMUX_GPIO138__FUNC_I0_GBE_RXD0>;
+- drive-strength = <8>;
+- };
+-
+- pins-txd {
+- pinmux = <PINMUX_GPIO131__FUNC_O_GBE_TXD3>,
+- <PINMUX_GPIO132__FUNC_O_GBE_TXD2>,
+- <PINMUX_GPIO133__FUNC_O_GBE_TXD1>,
+- <PINMUX_GPIO134__FUNC_O_GBE_TXD0>;
+- drive-strength = <8>;
+- };
+- };
+-
+- eth_sleep_pins: eth-sleep-pins {
+- pins-cc {
+- pinmux = <PINMUX_GPIO139__FUNC_B_GPIO139>,
+- <PINMUX_GPIO140__FUNC_B_GPIO140>,
+- <PINMUX_GPIO141__FUNC_B_GPIO141>,
+- <PINMUX_GPIO142__FUNC_B_GPIO142>;
+- };
+-
+- pins-mdio {
+- pinmux = <PINMUX_GPIO143__FUNC_B_GPIO143>,
+- <PINMUX_GPIO144__FUNC_B_GPIO144>;
+- input-disable;
+- bias-disable;
+- };
+-
+- pins-rxd {
+- pinmux = <PINMUX_GPIO135__FUNC_B_GPIO135>,
+- <PINMUX_GPIO136__FUNC_B_GPIO136>,
+- <PINMUX_GPIO137__FUNC_B_GPIO137>,
+- <PINMUX_GPIO138__FUNC_B_GPIO138>;
+- };
+-
+- pins-txd {
+- pinmux = <PINMUX_GPIO131__FUNC_B_GPIO131>,
+- <PINMUX_GPIO132__FUNC_B_GPIO132>,
+- <PINMUX_GPIO133__FUNC_B_GPIO133>,
+- <PINMUX_GPIO134__FUNC_B_GPIO134>;
+- };
+- };
+-
+- i2c0_pins: i2c0-pins {
+- pins {
+- pinmux = <PINMUX_GPIO56__FUNC_B1_SDA0>,
+- <PINMUX_GPIO55__FUNC_B1_SCL0>;
+- bias-pull-up = <MTK_PULL_SET_RSEL_011>;
+- drive-strength-microamp = <1000>;
+- };
+- };
+-
+- i2c1_pins: i2c1-pins {
+- pins {
+- pinmux = <PINMUX_GPIO58__FUNC_B1_SDA1>,
+- <PINMUX_GPIO57__FUNC_B1_SCL1>;
+- bias-pull-up = <MTK_PULL_SET_RSEL_011>;
+- drive-strength-microamp = <1000>;
+- };
+- };
+-
+- i2c2_pins: i2c2-pins {
+- pins {
+- pinmux = <PINMUX_GPIO60__FUNC_B1_SDA2>,
+- <PINMUX_GPIO59__FUNC_B1_SCL2>;
+- bias-pull-up = <MTK_PULL_SET_RSEL_011>;
+- drive-strength-microamp = <1000>;
+- };
+- };
+-
+- i2c3_pins: i2c3-pins {
+- pins {
+- pinmux = <PINMUX_GPIO62__FUNC_B1_SDA3>,
+- <PINMUX_GPIO61__FUNC_B1_SCL3>;
+- bias-pull-up = <MTK_PULL_SET_RSEL_011>;
+- drive-strength-microamp = <1000>;
+- };
+- };
+-
+- i2c4_pins: i2c4-pins {
+- pins {
+- pinmux = <PINMUX_GPIO64__FUNC_B1_SDA4>,
+- <PINMUX_GPIO63__FUNC_B1_SCL4>;
+- bias-pull-up = <MTK_PULL_SET_RSEL_011>;
+- drive-strength-microamp = <1000>;
+- };
+- };
+-
+- i2c5_pins: i2c5-pins {
+- pins {
+- pinmux = <PINMUX_GPIO66__FUNC_B1_SDA5>,
+- <PINMUX_GPIO65__FUNC_B1_SCL5>;
+- bias-pull-up = <MTK_PULL_SET_RSEL_011>;
+- drive-strength-microamp = <1000>;
+- };
+- };
+-
+- i2c6_pins: i2c6-pins {
+- pins {
+- pinmux = <PINMUX_GPIO68__FUNC_B1_SDA6>,
+- <PINMUX_GPIO67__FUNC_B1_SCL6>;
+- bias-pull-up = <MTK_PULL_SET_RSEL_011>;
+- drive-strength-microamp = <1000>;
+- };
+- };
+-
+- gpio_key_pins: gpio-key-pins {
+- pins {
+- pinmux = <PINMUX_GPIO42__FUNC_B1_KPCOL0>,
+- <PINMUX_GPIO43__FUNC_B1_KPCOL1>,
+- <PINMUX_GPIO44__FUNC_B1_KPROW0>;
+- };
+- };
+-
+- mmc0_default_pins: mmc0-default-pins {
+- pins-clk {
+- pinmux = <PINMUX_GPIO157__FUNC_B1_MSDC0_CLK>;
+- drive-strength = <6>;
+- bias-pull-down = <MTK_PUPD_SET_R1R0_10>;
+- };
+-
+- pins-cmd-dat {
+- pinmux = <PINMUX_GPIO161__FUNC_B1_MSDC0_DAT0>,
+- <PINMUX_GPIO160__FUNC_B1_MSDC0_DAT1>,
+- <PINMUX_GPIO159__FUNC_B1_MSDC0_DAT2>,
+- <PINMUX_GPIO158__FUNC_B1_MSDC0_DAT3>,
+- <PINMUX_GPIO154__FUNC_B1_MSDC0_DAT4>,
+- <PINMUX_GPIO153__FUNC_B1_MSDC0_DAT5>,
+- <PINMUX_GPIO152__FUNC_B1_MSDC0_DAT6>,
+- <PINMUX_GPIO151__FUNC_B1_MSDC0_DAT7>,
+- <PINMUX_GPIO156__FUNC_B1_MSDC0_CMD>;
+- input-enable;
+- drive-strength = <6>;
+- bias-pull-up = <MTK_PUPD_SET_R1R0_01>;
+- };
+-
+- pins-rst {
+- pinmux = <PINMUX_GPIO155__FUNC_O_MSDC0_RSTB>;
+- drive-strength = <6>;
+- bias-pull-up = <MTK_PUPD_SET_R1R0_01>;
+- };
+- };
+-
+- mmc0_uhs_pins: mmc0-uhs-pins {
+- pins-clk {
+- pinmux = <PINMUX_GPIO157__FUNC_B1_MSDC0_CLK>;
+- drive-strength = <8>;
+- bias-pull-down = <MTK_PUPD_SET_R1R0_10>;
+- };
+-
+- pins-cmd-dat {
+- pinmux = <PINMUX_GPIO161__FUNC_B1_MSDC0_DAT0>,
+- <PINMUX_GPIO160__FUNC_B1_MSDC0_DAT1>,
+- <PINMUX_GPIO159__FUNC_B1_MSDC0_DAT2>,
+- <PINMUX_GPIO158__FUNC_B1_MSDC0_DAT3>,
+- <PINMUX_GPIO154__FUNC_B1_MSDC0_DAT4>,
+- <PINMUX_GPIO153__FUNC_B1_MSDC0_DAT5>,
+- <PINMUX_GPIO152__FUNC_B1_MSDC0_DAT6>,
+- <PINMUX_GPIO151__FUNC_B1_MSDC0_DAT7>,
+- <PINMUX_GPIO156__FUNC_B1_MSDC0_CMD>;
+- input-enable;
+- drive-strength = <8>;
+- bias-pull-up = <MTK_PUPD_SET_R1R0_01>;
+- };
+-
+- pins-ds {
+- pinmux = <PINMUX_GPIO162__FUNC_B0_MSDC0_DSL>;
+- drive-strength = <8>;
+- bias-pull-down = <MTK_PUPD_SET_R1R0_10>;
+- };
+-
+- pins-rst {
+- pinmux = <PINMUX_GPIO155__FUNC_O_MSDC0_RSTB>;
+- drive-strength = <8>;
+- bias-pull-up = <MTK_PUPD_SET_R1R0_01>;
+- };
+- };
+-
+- mmc1_default_pins: mmc1-default-pins {
+- pins-clk {
+- pinmux = <PINMUX_GPIO164__FUNC_B1_MSDC1_CLK>;
+- drive-strength = <6>;
+- bias-pull-down = <MTK_PUPD_SET_R1R0_10>;
+- };
+-
+- pins-cmd-dat {
+- pinmux = <PINMUX_GPIO163__FUNC_B1_MSDC1_CMD>,
+- <PINMUX_GPIO165__FUNC_B1_MSDC1_DAT0>,
+- <PINMUX_GPIO166__FUNC_B1_MSDC1_DAT1>,
+- <PINMUX_GPIO167__FUNC_B1_MSDC1_DAT2>,
+- <PINMUX_GPIO168__FUNC_B1_MSDC1_DAT3>;
+- input-enable;
+- drive-strength = <6>;
+- bias-pull-up = <MTK_PUPD_SET_R1R0_01>;
+- };
+-
+- pins-insert {
+- pinmux = <PINMUX_GPIO2__FUNC_B_GPIO2>;
+- bias-pull-up;
+- };
+- };
+-
+- mmc1_uhs_pins: mmc1-uhs-pins {
+- pins-clk {
+- pinmux = <PINMUX_GPIO164__FUNC_B1_MSDC1_CLK>;
+- drive-strength = <6>;
+- bias-pull-down = <MTK_PUPD_SET_R1R0_10>;
+- };
+-
+- pins-cmd-dat {
+- pinmux = <PINMUX_GPIO163__FUNC_B1_MSDC1_CMD>,
+- <PINMUX_GPIO165__FUNC_B1_MSDC1_DAT0>,
+- <PINMUX_GPIO166__FUNC_B1_MSDC1_DAT1>,
+- <PINMUX_GPIO167__FUNC_B1_MSDC1_DAT2>,
+- <PINMUX_GPIO168__FUNC_B1_MSDC1_DAT3>;
+- input-enable;
+- drive-strength = <6>;
+- bias-pull-up = <MTK_PUPD_SET_R1R0_01>;
+- };
+- };
+-
+- mmc2_default_pins: mmc2-default-pins {
+- pins-clk {
+- pinmux = <PINMUX_GPIO170__FUNC_B1_MSDC2_CLK>;
+- drive-strength = <4>;
+- bias-pull-down = <MTK_PUPD_SET_R1R0_10>;
+- };
+-
+- pins-cmd-dat {
+- pinmux = <PINMUX_GPIO169__FUNC_B1_MSDC2_CMD>,
+- <PINMUX_GPIO171__FUNC_B1_MSDC2_DAT0>,
+- <PINMUX_GPIO172__FUNC_B1_MSDC2_DAT1>,
+- <PINMUX_GPIO173__FUNC_B1_MSDC2_DAT2>,
+- <PINMUX_GPIO174__FUNC_B1_MSDC2_DAT3>;
+- input-enable;
+- drive-strength = <6>;
+- bias-pull-up = <MTK_PUPD_SET_R1R0_01>;
+- };
+-
+- pins-pcm {
+- pinmux = <PINMUX_GPIO123__FUNC_O_PCM_DO>;
+- };
+- };
+-
+- mmc2_uhs_pins: mmc2-uhs-pins {
+- pins-clk {
+- pinmux = <PINMUX_GPIO170__FUNC_B1_MSDC2_CLK>;
+- drive-strength = <4>;
+- bias-pull-down = <MTK_PUPD_SET_R1R0_10>;
+- };
+-
+- pins-cmd-dat {
+- pinmux = <PINMUX_GPIO169__FUNC_B1_MSDC2_CMD>,
+- <PINMUX_GPIO171__FUNC_B1_MSDC2_DAT0>,
+- <PINMUX_GPIO172__FUNC_B1_MSDC2_DAT1>,
+- <PINMUX_GPIO173__FUNC_B1_MSDC2_DAT2>,
+- <PINMUX_GPIO174__FUNC_B1_MSDC2_DAT3>;
+- input-enable;
+- drive-strength = <6>;
+- bias-pull-up = <MTK_PUPD_SET_R1R0_01>;
+- };
+- };
+-
+- mmc2_eint_pins: mmc2-eint-pins {
+- pins-dat1 {
+- pinmux = <PINMUX_GPIO172__FUNC_B_GPIO172>;
+- input-enable;
+- bias-pull-up = <MTK_PUPD_SET_R1R0_01>;
+- };
+- };
+-
+- mmc2_dat1_pins: mmc2-dat1-pins {
+- pins-dat1 {
+- pinmux = <PINMUX_GPIO172__FUNC_B1_MSDC2_DAT1>;
+- input-enable;
+- drive-strength = <6>;
+- bias-pull-up = <MTK_PUPD_SET_R1R0_01>;
+- };
+- };
+-
+- panel_default_pins: panel-default-pins {
+- pins-dcdc {
+- pinmux = <PINMUX_GPIO45__FUNC_B_GPIO45>;
+- output-low;
+- };
+-
+- pins-en {
+- pinmux = <PINMUX_GPIO111__FUNC_B_GPIO111>;
+- output-low;
+- };
+-
+- pins-rst {
+- pinmux = <PINMUX_GPIO25__FUNC_B_GPIO25>;
+- output-high;
+- };
+- };
+-
+- pcie_pins_default: pcie-default {
+- mux {
+- pinmux = <PINMUX_GPIO47__FUNC_I1_WAKEN>,
+- <PINMUX_GPIO48__FUNC_O_PERSTN>,
+- <PINMUX_GPIO49__FUNC_B1_CLKREQN>;
+- bias-pull-up;
+- };
+- };
+-
+- rt1715_int_pins: rt1715-int-pins {
+- pins_cmd0_dat {
+- pinmux = <PINMUX_GPIO12__FUNC_B_GPIO12>;
+- bias-pull-up;
+- input-enable;
+- };
+- };
+-
+- spi0_pins: spi0-pins {
+- pins-spi {
+- pinmux = <PINMUX_GPIO69__FUNC_O_SPIM0_CSB>,
+- <PINMUX_GPIO70__FUNC_O_SPIM0_CLK>,
+- <PINMUX_GPIO71__FUNC_B0_SPIM0_MOSI>,
+- <PINMUX_GPIO72__FUNC_B0_SPIM0_MISO>;
+- bias-disable;
+- };
+- };
+-
+- spi1_pins: spi1-pins {
+- pins-spi {
+- pinmux = <PINMUX_GPIO75__FUNC_O_SPIM1_CSB>,
+- <PINMUX_GPIO76__FUNC_O_SPIM1_CLK>,
+- <PINMUX_GPIO77__FUNC_B0_SPIM1_MOSI>,
+- <PINMUX_GPIO78__FUNC_B0_SPIM1_MISO>;
+- bias-disable;
+- };
+- };
+-
+- spi2_pins: spi2-pins {
+- pins-spi {
+- pinmux = <PINMUX_GPIO79__FUNC_O_SPIM2_CSB>,
+- <PINMUX_GPIO80__FUNC_O_SPIM2_CLK>,
+- <PINMUX_GPIO81__FUNC_B0_SPIM2_MOSI>,
+- <PINMUX_GPIO82__FUNC_B0_SPIM2_MISO>;
+- bias-disable;
+- };
+- };
+-
+- touch_pins: touch-pins {
+- pins-irq {
+- pinmux = <PINMUX_GPIO6__FUNC_B_GPIO6>;
+- input-enable;
+- bias-disable;
+- };
+-
+- pins-reset {
+- pinmux = <PINMUX_GPIO5__FUNC_B_GPIO5>;
+- output-high;
+- };
+- };
+-
+- uart0_pins: uart0-pins {
+- pins {
+- pinmux = <PINMUX_GPIO31__FUNC_O_UTXD0>,
+- <PINMUX_GPIO32__FUNC_I1_URXD0>;
+- bias-pull-up;
+- };
+- };
+-
+- uart1_pins: uart1-pins {
+- pins {
+- pinmux = <PINMUX_GPIO33__FUNC_O_UTXD1>,
+- <PINMUX_GPIO34__FUNC_I1_URXD1>;
+- bias-pull-up;
+- };
+- };
+-
+- uart2_pins: uart2-pins {
+- pins {
+- pinmux = <PINMUX_GPIO35__FUNC_O_UTXD2>,
+- <PINMUX_GPIO36__FUNC_I1_URXD2>;
+- bias-pull-up;
+- };
+- };
+-
+- usb_default_pins: usb-default-pins {
+- pins-iddig {
+- pinmux = <PINMUX_GPIO83__FUNC_B_GPIO83>;
+- input-enable;
+- bias-pull-up;
+- };
+-
+- pins-valid {
+- pinmux = <PINMUX_GPIO85__FUNC_I0_VBUSVALID>;
+- input-enable;
+- };
+-
+- pins-vbus {
+- pinmux = <PINMUX_GPIO84__FUNC_O_USB_DRVVBUS>;
+- output-high;
+- };
+-
+- };
+-
+- usb1_default_pins: usb1-default-pins {
+- pins-valid {
+- pinmux = <PINMUX_GPIO88__FUNC_I0_VBUSVALID_1P>;
+- input-enable;
+- };
+-
+- pins-usb-hub-3v3-en {
+- pinmux = <PINMUX_GPIO112__FUNC_B_GPIO112>;
+- output-high;
+- };
+- };
+-
+- wifi_pwrseq_pins: wifi-pwrseq-pins {
+- pins-wifi-enable {
+- pinmux = <PINMUX_GPIO127__FUNC_B_GPIO127>;
+- output-low;
+- };
+- };
+-};
+-
+-&eth {
+- phy-mode ="rgmii-id";
+- phy-handle = <&ethernet_phy0>;
+- pinctrl-names = "default", "sleep";
+- pinctrl-0 = <&eth_default_pins>;
+- pinctrl-1 = <&eth_sleep_pins>;
+- mediatek,mac-wol;
+- snps,reset-gpio = <&pio 147 GPIO_ACTIVE_HIGH>;
+- snps,reset-delays-us = <0 10000 10000>;
+- status = "okay";
+-};
+-
+-&eth_mdio {
+- ethernet_phy0: ethernet-phy@1 {
+- compatible = "ethernet-phy-id001c.c916";
+- reg = <0x1>;
+- };
+-};
+-
+-&pmic {
+- interrupt-parent = <&pio>;
+- interrupts = <222 IRQ_TYPE_LEVEL_HIGH>;
+-
+- mt6359keys: keys {
+- compatible = "mediatek,mt6359-keys";
+- mediatek,long-press-mode = <1>;
+- power-off-time-sec = <0>;
+-
+- power-key {
+- linux,keycodes = <KEY_POWER>;
+- wakeup-source;
+- };
+- };
+-};
+-
+-&scp {
+- memory-region = <&scp_mem>;
+- status = "okay";
+-};
+-
+-&sound {
+- compatible = "mediatek,mt8390-mt6359-evk", "mediatek,mt8188-mt6359-evb";
+- model = "mt8390-evk";
+- pinctrl-names = "default";
+- pinctrl-0 = <&audio_default_pins>;
+- audio-routing =
+- "Headphone", "Headphone L",
+- "Headphone", "Headphone R";
+- mediatek,adsp = <&adsp>;
+- status = "okay";
+-
+- dai-link-0 {
+- link-name = "DL_SRC_BE";
+-
+- codec {
+- sound-dai = <&pmic 0>;
+- };
+- };
+-};
+-
+-&spi2 {
+- pinctrl-0 = <&spi2_pins>;
+- pinctrl-names = "default";
+- mediatek,pad-select = <0>;
+- #address-cells = <1>;
+- #size-cells = <0>;
+- status = "okay";
+ };
+
+-&uart0 {
+- pinctrl-0 = <&uart0_pins>;
+- pinctrl-names = "default";
+- status = "okay";
+-};
+-
+-&uart1 {
+- pinctrl-0 = <&uart1_pins>;
+- pinctrl-names = "default";
+- status = "okay";
+-};
+-
+-&uart2 {
+- pinctrl-0 = <&uart2_pins>;
+- pinctrl-names = "default";
+- status = "okay";
+-};
+-
+-&u3phy0 {
+- status = "okay";
+-};
+-
+-&u3phy1 {
+- status = "okay";
+-};
+-
+-&u3phy2 {
+- status = "okay";
+-};
+-
+-&xhci0 {
+- status = "okay";
+- vusb33-supply = <&mt6359_vusb_ldo_reg>;
+-};
+-
+-&xhci1 {
+- status = "okay";
+- vusb33-supply = <&mt6359_vusb_ldo_reg>;
+- #address-cells = <1>;
+- #size-cells = <0>;
+-
+- hub_2_0: hub@1 {
+- compatible = "usb451,8025";
+- reg = <1>;
+- peer-hub = <&hub_3_0>;
+- reset-gpios = <&pio 7 GPIO_ACTIVE_HIGH>;
+- vdd-supply = <&usb_hub_fixed_3v3>;
+- };
+-
+- hub_3_0: hub@2 {
+- compatible = "usb451,8027";
+- reg = <2>;
+- peer-hub = <&hub_2_0>;
+- reset-gpios = <&pio 7 GPIO_ACTIVE_HIGH>;
+- vdd-supply = <&usb_hub_fixed_3v3>;
+- };
+-};
+-
+-&xhci2 {
+- status = "okay";
+- vusb33-supply = <&mt6359_vusb_ldo_reg>;
+- vbus-supply = <&sdio_fixed_3v3>; /* wifi_3v3 */
+-};
+diff --git a/arch/arm64/boot/dts/mediatek/mt8390-genio-common.dtsi b/arch/arm64/boot/dts/mediatek/mt8390-genio-common.dtsi
+new file mode 100644
+index 00000000000000..e828864433a6f4
+--- /dev/null
++++ b/arch/arm64/boot/dts/mediatek/mt8390-genio-common.dtsi
+@@ -0,0 +1,1046 @@
++// SPDX-License-Identifier: (GPL-2.0 OR MIT)
++/*
++ * Copyright (C) 2023 MediaTek Inc.
++ * Author: Chris Chen <chris-qj.chen@mediatek.com>
++ * Pablo Sun <pablo.sun@mediatek.com>
++ * Macpaul Lin <macpaul.lin@mediatek.com>
++ *
++ * Copyright (C) 2025 Collabora Ltd.
++ * Louis-Alexis Eyraud <louisalexis.eyraud@collabora.com>
++ * AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com>
++ */
++
++#include "mt6359.dtsi"
++#include <dt-bindings/gpio/gpio.h>
++#include <dt-bindings/input/input.h>
++#include <dt-bindings/interrupt-controller/irq.h>
++#include <dt-bindings/pinctrl/mediatek,mt8188-pinfunc.h>
++#include <dt-bindings/regulator/mediatek,mt6360-regulator.h>
++#include <dt-bindings/spmi/spmi.h>
++#include <dt-bindings/usb/pd.h>
++
++/ {
++ aliases {
++ ethernet0 = &eth;
++ i2c0 = &i2c0;
++ i2c1 = &i2c1;
++ i2c2 = &i2c2;
++ i2c3 = &i2c3;
++ i2c4 = &i2c4;
++ i2c5 = &i2c5;
++ i2c6 = &i2c6;
++ mmc0 = &mmc0;
++ mmc1 = &mmc1;
++ serial0 = &uart0;
++ };
++
++ chosen {
++ stdout-path = "serial0:921600n8";
++ };
++
++ firmware {
++ optee {
++ compatible = "linaro,optee-tz";
++ method = "smc";
++ };
++ };
++ reserved-memory {
++ #address-cells = <2>;
++ #size-cells = <2>;
++ ranges;
++
++ /*
++ * 12 MiB reserved for OP-TEE (BL32)
++ * +-----------------------+ 0x43e0_0000
++ * | SHMEM 2MiB |
++ * +-----------------------+ 0x43c0_0000
++ * | | TA_RAM 8MiB |
++ * + TZDRAM +--------------+ 0x4340_0000
++ * | | TEE_RAM 2MiB |
++ * +-----------------------+ 0x4320_0000
++ */
++ optee_reserved: optee@43200000 {
++ no-map;
++ reg = <0 0x43200000 0 0x00c00000>;
++ };
++
++ scp_mem: memory@50000000 {
++ compatible = "shared-dma-pool";
++ reg = <0 0x50000000 0 0x2900000>;
++ no-map;
++ };
++
++ /* 2 MiB reserved for ARM Trusted Firmware (BL31) */
++ bl31_secmon_reserved: memory@54600000 {
++ no-map;
++ reg = <0 0x54600000 0x0 0x200000>;
++ };
++
++ apu_mem: memory@55000000 {
++ compatible = "shared-dma-pool";
++ reg = <0 0x55000000 0 0x1400000>; /* 20 MB */
++ };
++
++ vpu_mem: memory@57000000 {
++ compatible = "shared-dma-pool";
++ reg = <0 0x57000000 0 0x1400000>; /* 20 MB */
++ };
++
++ adsp_mem: memory@60000000 {
++ compatible = "shared-dma-pool";
++ reg = <0 0x60000000 0 0xf00000>;
++ no-map;
++ };
++
++ afe_dma_mem: memory@60f00000 {
++ compatible = "shared-dma-pool";
++ reg = <0 0x60f00000 0 0x100000>;
++ no-map;
++ };
++
++ adsp_dma_mem: memory@61000000 {
++ compatible = "shared-dma-pool";
++ reg = <0 0x61000000 0 0x100000>;
++ no-map;
++ };
++ };
++
++ common_fixed_5v: regulator-0 {
++ compatible = "regulator-fixed";
++ regulator-name = "vdd_5v";
++ regulator-min-microvolt = <5000000>;
++ regulator-max-microvolt = <5000000>;
++ gpio = <&pio 10 GPIO_ACTIVE_HIGH>;
++ enable-active-high;
++ regulator-always-on;
++ vin-supply = <&reg_vsys>;
++ };
++
++ edp_panel_fixed_3v3: regulator-1 {
++ compatible = "regulator-fixed";
++ regulator-name = "vedp_3v3";
++ regulator-min-microvolt = <3300000>;
++ regulator-max-microvolt = <3300000>;
++ enable-active-high;
++ gpio = <&pio 15 GPIO_ACTIVE_HIGH>;
++ pinctrl-names = "default";
++ pinctrl-0 = <&edp_panel_3v3_en_pins>;
++ vin-supply = <&reg_vsys>;
++ };
++
++ gpio_fixed_3v3: regulator-2 {
++ compatible = "regulator-fixed";
++ regulator-name = "ext_3v3";
++ regulator-min-microvolt = <3300000>;
++ regulator-max-microvolt = <3300000>;
++ gpio = <&pio 9 GPIO_ACTIVE_HIGH>;
++ enable-active-high;
++ regulator-always-on;
++ vin-supply = <&reg_vsys>;
++ };
++
++ /* system wide 4.2V power rail from charger */
++ reg_vsys: regulator-vsys {
++ compatible = "regulator-fixed";
++ regulator-name = "vsys";
++ regulator-always-on;
++ regulator-boot-on;
++ };
++
++ /* used by mmc2 */
++ sdio_fixed_1v8: regulator-3 {
++ compatible = "regulator-fixed";
++ regulator-name = "vio18_conn";
++ regulator-min-microvolt = <1800000>;
++ regulator-max-microvolt = <1800000>;
++ enable-active-high;
++ regulator-always-on;
++ };
++
++ /* used by mmc2 */
++ sdio_fixed_3v3: regulator-4 {
++ compatible = "regulator-fixed";
++ regulator-name = "wifi_3v3";
++ regulator-min-microvolt = <3300000>;
++ regulator-max-microvolt = <3300000>;
++ gpio = <&pio 74 GPIO_ACTIVE_HIGH>;
++ enable-active-high;
++ regulator-always-on;
++ vin-supply = <&reg_vsys>;
++ };
++
++ touch0_fixed_3v3: regulator-5 {
++ compatible = "regulator-fixed";
++ regulator-name = "vio33_tp1";
++ regulator-min-microvolt = <3300000>;
++ regulator-max-microvolt = <3300000>;
++ gpio = <&pio 119 GPIO_ACTIVE_HIGH>;
++ enable-active-high;
++ vin-supply = <&reg_vsys>;
++ };
++
++ usb_hub_fixed_3v3: regulator-6 {
++ compatible = "regulator-fixed";
++ regulator-name = "vhub_3v3";
++ regulator-min-microvolt = <3300000>;
++ regulator-max-microvolt = <3300000>;
++ gpio = <&pio 112 GPIO_ACTIVE_HIGH>; /* HUB_3V3_EN */
++ startup-delay-us = <10000>;
++ enable-active-high;
++ vin-supply = <&reg_vsys>;
++ };
++
++ usb_p0_vbus: regulator-7 {
++ compatible = "regulator-fixed";
++ regulator-name = "vbus_p0";
++ regulator-min-microvolt = <5000000>;
++ regulator-max-microvolt = <5000000>;
++ gpio = <&pio 84 GPIO_ACTIVE_HIGH>;
++ enable-active-high;
++ vin-supply = <&reg_vsys>;
++ };
++
++ usb_p1_vbus: regulator-8 {
++ compatible = "regulator-fixed";
++ regulator-name = "vbus_p1";
++ regulator-min-microvolt = <5000000>;
++ regulator-max-microvolt = <5000000>;
++ gpio = <&pio 87 GPIO_ACTIVE_HIGH>;
++ enable-active-high;
++ vin-supply = <&reg_vsys>;
++ };
++
++ /* used by ssusb2 */
++ usb_p2_vbus: regulator-9 {
++ compatible = "regulator-fixed";
++ regulator-name = "vbus_p2";
++ regulator-min-microvolt = <5000000>;
++ regulator-max-microvolt = <5000000>;
++ enable-active-high;
++ };
++};
++
++&adsp {
++ memory-region = <&adsp_dma_mem>, <&adsp_mem>;
++ status = "okay";
++};
++
++&afe {
++ memory-region = <&afe_dma_mem>;
++ status = "okay";
++};
++
++&gpu {
++ mali-supply = <&mt6359_vproc2_buck_reg>;
++ status = "okay";
++};
++
++&i2c0 {
++ pinctrl-names = "default";
++ pinctrl-0 = <&i2c0_pins>;
++ clock-frequency = <400000>;
++ status = "okay";
++
++ touchscreen@5d {
++ compatible = "goodix,gt9271";
++ reg = <0x5d>;
++ interrupt-parent = <&pio>;
++ interrupts-extended = <&pio 6 IRQ_TYPE_EDGE_RISING>;
++ irq-gpios = <&pio 6 GPIO_ACTIVE_HIGH>;
++ reset-gpios = <&pio 5 GPIO_ACTIVE_HIGH>;
++ AVDD28-supply = <&touch0_fixed_3v3>;
++ VDDIO-supply = <&mt6359_vio18_ldo_reg>;
++ pinctrl-names = "default";
++ pinctrl-0 = <&touch_pins>;
++ };
++};
++
++&i2c1 {
++ pinctrl-names = "default";
++ pinctrl-0 = <&i2c1_pins>;
++ clock-frequency = <400000>;
++ status = "okay";
++};
++
++&i2c2 {
++ pinctrl-names = "default";
++ pinctrl-0 = <&i2c2_pins>;
++ clock-frequency = <400000>;
++ status = "okay";
++};
++
++&i2c3 {
++ pinctrl-names = "default";
++ pinctrl-0 = <&i2c3_pins>;
++ clock-frequency = <400000>;
++ status = "okay";
++};
++
++&i2c4 {
++ pinctrl-names = "default";
++ pinctrl-0 = <&i2c4_pins>;
++ clock-frequency = <1000000>;
++ status = "okay";
++};
++
++&i2c5 {
++ pinctrl-names = "default";
++ pinctrl-0 = <&i2c5_pins>;
++ clock-frequency = <400000>;
++ status = "okay";
++};
++
++&i2c6 {
++ pinctrl-names = "default";
++ pinctrl-0 = <&i2c6_pins>;
++ clock-frequency = <400000>;
++ status = "okay";
++};
++
++&mfg0 {
++ domain-supply = <&mt6359_vproc2_buck_reg>;
++};
++
++&mfg1 {
++ domain-supply = <&mt6359_vsram_others_ldo_reg>;
++};
++
++&mmc0 {
++ status = "okay";
++ pinctrl-names = "default", "state_uhs";
++ pinctrl-0 = <&mmc0_default_pins>;
++ pinctrl-1 = <&mmc0_uhs_pins>;
++ bus-width = <8>;
++ max-frequency = <200000000>;
++ cap-mmc-highspeed;
++ mmc-hs200-1_8v;
++ mmc-hs400-1_8v;
++ supports-cqe;
++ cap-mmc-hw-reset;
++ no-sdio;
++ no-sd;
++ hs400-ds-delay = <0x1481b>;
++ vmmc-supply = <&mt6359_vemc_1_ldo_reg>;
++ vqmmc-supply = <&mt6359_vufs_ldo_reg>;
++ non-removable;
++};
++
++&mmc1 {
++ status = "okay";
++ pinctrl-names = "default", "state_uhs";
++ pinctrl-0 = <&mmc1_default_pins>;
++ pinctrl-1 = <&mmc1_uhs_pins>;
++ bus-width = <4>;
++ max-frequency = <200000000>;
++ cap-sd-highspeed;
++ sd-uhs-sdr50;
++ sd-uhs-sdr104;
++ no-mmc;
++ no-sdio;
++ cd-gpios = <&pio 2 GPIO_ACTIVE_LOW>;
++ vmmc-supply = <&mt6359_vpa_buck_reg>;
++ vqmmc-supply = <&mt6359_vsim1_ldo_reg>;
++};
++
++&mt6359_vbbck_ldo_reg {
++ regulator-always-on;
++};
++
++&mt6359_vcn18_ldo_reg {
++ regulator-name = "vcn18_pmu";
++ regulator-always-on;
++};
++
++&mt6359_vcn33_2_bt_ldo_reg {
++ regulator-name = "vcn33_2_pmu";
++ regulator-always-on;
++};
++
++&mt6359_vcore_buck_reg {
++ regulator-name = "dvdd_proc_l";
++ regulator-always-on;
++};
++
++&mt6359_vgpu11_buck_reg {
++ regulator-name = "dvdd_core";
++ regulator-always-on;
++};
++
++&mt6359_vpa_buck_reg {
++ regulator-name = "vpa_pmu";
++ regulator-max-microvolt = <3100000>;
++};
++
++&mt6359_vproc2_buck_reg {
++ /* The name "vgpu" is required by mtk-regulator-coupler */
++ regulator-name = "vgpu";
++ regulator-min-microvolt = <550000>;
++ regulator-max-microvolt = <800000>;
++ regulator-coupled-with = <&mt6359_vsram_others_ldo_reg>;
++ regulator-coupled-max-spread = <6250>;
++};
++
++&mt6359_vpu_buck_reg {
++ regulator-name = "dvdd_adsp";
++ regulator-always-on;
++};
++
++&mt6359_vrf12_ldo_reg {
++ regulator-name = "va12_abb2_pmu";
++ regulator-always-on;
++};
++
++&mt6359_vsim1_ldo_reg {
++ regulator-name = "vsim1_pmu";
++ regulator-enable-ramp-delay = <480>;
++};
++
++&mt6359_vsram_others_ldo_reg {
++ /* The name "vsram_gpu" is required by mtk-regulator-coupler */
++ regulator-name = "vsram_gpu";
++ regulator-min-microvolt = <750000>;
++ regulator-max-microvolt = <800000>;
++ regulator-coupled-with = <&mt6359_vproc2_buck_reg>;
++ regulator-coupled-max-spread = <6250>;
++};
++
++&mt6359_vufs_ldo_reg {
++ regulator-name = "vufs18_pmu";
++ regulator-always-on;
++};
++
++&mt6359codec {
++ mediatek,mic-type-0 = <1>; /* ACC */
++ mediatek,mic-type-1 = <3>; /* DCC */
++};
++
++&pcie {
++ pinctrl-names = "default";
++ pinctrl-0 = <&pcie_pins_default>;
++ status = "okay";
++};
++
++&pciephy {
++ status = "okay";
++};
++
++&pio {
++ audio_default_pins: audio-default-pins {
++ pins-cmd-dat {
++ pinmux = <PINMUX_GPIO101__FUNC_O_AUD_CLK_MOSI>,
++ <PINMUX_GPIO102__FUNC_O_AUD_SYNC_MOSI>,
++ <PINMUX_GPIO103__FUNC_O_AUD_DAT_MOSI0>,
++ <PINMUX_GPIO104__FUNC_O_AUD_DAT_MOSI1>,
++ <PINMUX_GPIO105__FUNC_I0_AUD_DAT_MISO0>,
++ <PINMUX_GPIO106__FUNC_I0_AUD_DAT_MISO1>,
++ <PINMUX_GPIO107__FUNC_B0_I2SIN_MCK>,
++ <PINMUX_GPIO108__FUNC_B0_I2SIN_BCK>,
++ <PINMUX_GPIO109__FUNC_B0_I2SIN_WS>,
++ <PINMUX_GPIO110__FUNC_I0_I2SIN_D0>,
++ <PINMUX_GPIO114__FUNC_O_I2SO2_MCK>,
++ <PINMUX_GPIO115__FUNC_B0_I2SO2_BCK>,
++ <PINMUX_GPIO116__FUNC_B0_I2SO2_WS>,
++ <PINMUX_GPIO117__FUNC_O_I2SO2_D0>,
++ <PINMUX_GPIO118__FUNC_O_I2SO2_D1>,
++ <PINMUX_GPIO121__FUNC_B0_PCM_CLK>,
++ <PINMUX_GPIO122__FUNC_B0_PCM_SYNC>,
++ <PINMUX_GPIO124__FUNC_I0_PCM_DI>,
++ <PINMUX_GPIO125__FUNC_O_DMIC1_CLK>,
++ <PINMUX_GPIO126__FUNC_I0_DMIC1_DAT>,
++ <PINMUX_GPIO128__FUNC_O_DMIC2_CLK>,
++ <PINMUX_GPIO129__FUNC_I0_DMIC2_DAT>;
++ };
++ };
++
++ dptx_pins: dptx-pins {
++ pins-cmd-dat {
++ pinmux = <PINMUX_GPIO46__FUNC_I0_DP_TX_HPD>;
++ bias-pull-up;
++ };
++ };
++
++ edp_panel_3v3_en_pins: edp-panel-3v3-en-pins {
++ pins1 {
++ pinmux = <PINMUX_GPIO15__FUNC_B_GPIO15>;
++ output-high;
++ };
++ };
++
++ eth_default_pins: eth-default-pins {
++ pins-cc {
++ pinmux = <PINMUX_GPIO139__FUNC_B0_GBE_TXC>,
++ <PINMUX_GPIO140__FUNC_I0_GBE_RXC>,
++ <PINMUX_GPIO141__FUNC_I0_GBE_RXDV>,
++ <PINMUX_GPIO142__FUNC_O_GBE_TXEN>;
++ drive-strength = <8>;
++ };
++
++ pins-mdio {
++ pinmux = <PINMUX_GPIO143__FUNC_O_GBE_MDC>,
++ <PINMUX_GPIO144__FUNC_B1_GBE_MDIO>;
++ drive-strength = <8>;
++ input-enable;
++ };
++
++ pins-power {
++ pinmux = <PINMUX_GPIO145__FUNC_B_GPIO145>,
++ <PINMUX_GPIO146__FUNC_B_GPIO146>;
++ output-high;
++ };
++
++ pins-rxd {
++ pinmux = <PINMUX_GPIO135__FUNC_I0_GBE_RXD3>,
++ <PINMUX_GPIO136__FUNC_I0_GBE_RXD2>,
++ <PINMUX_GPIO137__FUNC_I0_GBE_RXD1>,
++ <PINMUX_GPIO138__FUNC_I0_GBE_RXD0>;
++ drive-strength = <8>;
++ };
++
++ pins-txd {
++ pinmux = <PINMUX_GPIO131__FUNC_O_GBE_TXD3>,
++ <PINMUX_GPIO132__FUNC_O_GBE_TXD2>,
++ <PINMUX_GPIO133__FUNC_O_GBE_TXD1>,
++ <PINMUX_GPIO134__FUNC_O_GBE_TXD0>;
++ drive-strength = <8>;
++ };
++ };
++
++ eth_sleep_pins: eth-sleep-pins {
++ pins-cc {
++ pinmux = <PINMUX_GPIO139__FUNC_B_GPIO139>,
++ <PINMUX_GPIO140__FUNC_B_GPIO140>,
++ <PINMUX_GPIO141__FUNC_B_GPIO141>,
++ <PINMUX_GPIO142__FUNC_B_GPIO142>;
++ };
++
++ pins-mdio {
++ pinmux = <PINMUX_GPIO143__FUNC_B_GPIO143>,
++ <PINMUX_GPIO144__FUNC_B_GPIO144>;
++ input-disable;
++ bias-disable;
++ };
++
++ pins-rxd {
++ pinmux = <PINMUX_GPIO135__FUNC_B_GPIO135>,
++ <PINMUX_GPIO136__FUNC_B_GPIO136>,
++ <PINMUX_GPIO137__FUNC_B_GPIO137>,
++ <PINMUX_GPIO138__FUNC_B_GPIO138>;
++ };
++
++ pins-txd {
++ pinmux = <PINMUX_GPIO131__FUNC_B_GPIO131>,
++ <PINMUX_GPIO132__FUNC_B_GPIO132>,
++ <PINMUX_GPIO133__FUNC_B_GPIO133>,
++ <PINMUX_GPIO134__FUNC_B_GPIO134>;
++ };
++ };
++
++ i2c0_pins: i2c0-pins {
++ pins {
++ pinmux = <PINMUX_GPIO56__FUNC_B1_SDA0>,
++ <PINMUX_GPIO55__FUNC_B1_SCL0>;
++ bias-pull-up = <MTK_PULL_SET_RSEL_011>;
++ drive-strength-microamp = <1000>;
++ };
++ };
++
++ i2c1_pins: i2c1-pins {
++ pins {
++ pinmux = <PINMUX_GPIO58__FUNC_B1_SDA1>,
++ <PINMUX_GPIO57__FUNC_B1_SCL1>;
++ bias-pull-up = <MTK_PULL_SET_RSEL_011>;
++ drive-strength-microamp = <1000>;
++ };
++ };
++
++ i2c2_pins: i2c2-pins {
++ pins {
++ pinmux = <PINMUX_GPIO60__FUNC_B1_SDA2>,
++ <PINMUX_GPIO59__FUNC_B1_SCL2>;
++ bias-pull-up = <MTK_PULL_SET_RSEL_011>;
++ drive-strength-microamp = <1000>;
++ };
++ };
++
++ i2c3_pins: i2c3-pins {
++ pins {
++ pinmux = <PINMUX_GPIO62__FUNC_B1_SDA3>,
++ <PINMUX_GPIO61__FUNC_B1_SCL3>;
++ bias-pull-up = <MTK_PULL_SET_RSEL_011>;
++ drive-strength-microamp = <1000>;
++ };
++ };
++
++ i2c4_pins: i2c4-pins {
++ pins {
++ pinmux = <PINMUX_GPIO64__FUNC_B1_SDA4>,
++ <PINMUX_GPIO63__FUNC_B1_SCL4>;
++ bias-pull-up = <MTK_PULL_SET_RSEL_011>;
++ drive-strength-microamp = <1000>;
++ };
++ };
++
++ i2c5_pins: i2c5-pins {
++ pins {
++ pinmux = <PINMUX_GPIO66__FUNC_B1_SDA5>,
++ <PINMUX_GPIO65__FUNC_B1_SCL5>;
++ bias-pull-up = <MTK_PULL_SET_RSEL_011>;
++ drive-strength-microamp = <1000>;
++ };
++ };
++
++ i2c6_pins: i2c6-pins {
++ pins {
++ pinmux = <PINMUX_GPIO68__FUNC_B1_SDA6>,
++ <PINMUX_GPIO67__FUNC_B1_SCL6>;
++ bias-pull-up = <MTK_PULL_SET_RSEL_011>;
++ drive-strength-microamp = <1000>;
++ };
++ };
++
++ gpio_key_pins: gpio-key-pins {
++ pins {
++ pinmux = <PINMUX_GPIO42__FUNC_B1_KPCOL0>,
++ <PINMUX_GPIO43__FUNC_B1_KPCOL1>,
++ <PINMUX_GPIO44__FUNC_B1_KPROW0>;
++ };
++ };
++
++ mmc0_default_pins: mmc0-default-pins {
++ pins-clk {
++ pinmux = <PINMUX_GPIO157__FUNC_B1_MSDC0_CLK>;
++ drive-strength = <6>;
++ bias-pull-down = <MTK_PUPD_SET_R1R0_10>;
++ };
++
++ pins-cmd-dat {
++ pinmux = <PINMUX_GPIO161__FUNC_B1_MSDC0_DAT0>,
++ <PINMUX_GPIO160__FUNC_B1_MSDC0_DAT1>,
++ <PINMUX_GPIO159__FUNC_B1_MSDC0_DAT2>,
++ <PINMUX_GPIO158__FUNC_B1_MSDC0_DAT3>,
++ <PINMUX_GPIO154__FUNC_B1_MSDC0_DAT4>,
++ <PINMUX_GPIO153__FUNC_B1_MSDC0_DAT5>,
++ <PINMUX_GPIO152__FUNC_B1_MSDC0_DAT6>,
++ <PINMUX_GPIO151__FUNC_B1_MSDC0_DAT7>,
++ <PINMUX_GPIO156__FUNC_B1_MSDC0_CMD>;
++ input-enable;
++ drive-strength = <6>;
++ bias-pull-up = <MTK_PUPD_SET_R1R0_01>;
++ };
++
++ pins-rst {
++ pinmux = <PINMUX_GPIO155__FUNC_O_MSDC0_RSTB>;
++ drive-strength = <6>;
++ bias-pull-up = <MTK_PUPD_SET_R1R0_01>;
++ };
++ };
++
++ mmc0_uhs_pins: mmc0-uhs-pins {
++ pins-clk {
++ pinmux = <PINMUX_GPIO157__FUNC_B1_MSDC0_CLK>;
++ drive-strength = <8>;
++ bias-pull-down = <MTK_PUPD_SET_R1R0_10>;
++ };
++
++ pins-cmd-dat {
++ pinmux = <PINMUX_GPIO161__FUNC_B1_MSDC0_DAT0>,
++ <PINMUX_GPIO160__FUNC_B1_MSDC0_DAT1>,
++ <PINMUX_GPIO159__FUNC_B1_MSDC0_DAT2>,
++ <PINMUX_GPIO158__FUNC_B1_MSDC0_DAT3>,
++ <PINMUX_GPIO154__FUNC_B1_MSDC0_DAT4>,
++ <PINMUX_GPIO153__FUNC_B1_MSDC0_DAT5>,
++ <PINMUX_GPIO152__FUNC_B1_MSDC0_DAT6>,
++ <PINMUX_GPIO151__FUNC_B1_MSDC0_DAT7>,
++ <PINMUX_GPIO156__FUNC_B1_MSDC0_CMD>;
++ input-enable;
++ drive-strength = <8>;
++ bias-pull-up = <MTK_PUPD_SET_R1R0_01>;
++ };
++
++ pins-ds {
++ pinmux = <PINMUX_GPIO162__FUNC_B0_MSDC0_DSL>;
++ drive-strength = <8>;
++ bias-pull-down = <MTK_PUPD_SET_R1R0_10>;
++ };
++
++ pins-rst {
++ pinmux = <PINMUX_GPIO155__FUNC_O_MSDC0_RSTB>;
++ drive-strength = <8>;
++ bias-pull-up = <MTK_PUPD_SET_R1R0_01>;
++ };
++ };
++
++ mmc1_default_pins: mmc1-default-pins {
++ pins-clk {
++ pinmux = <PINMUX_GPIO164__FUNC_B1_MSDC1_CLK>;
++ drive-strength = <6>;
++ bias-pull-down = <MTK_PUPD_SET_R1R0_10>;
++ };
++
++ pins-cmd-dat {
++ pinmux = <PINMUX_GPIO163__FUNC_B1_MSDC1_CMD>,
++ <PINMUX_GPIO165__FUNC_B1_MSDC1_DAT0>,
++ <PINMUX_GPIO166__FUNC_B1_MSDC1_DAT1>,
++ <PINMUX_GPIO167__FUNC_B1_MSDC1_DAT2>,
++ <PINMUX_GPIO168__FUNC_B1_MSDC1_DAT3>;
++ input-enable;
++ drive-strength = <6>;
++ bias-pull-up = <MTK_PUPD_SET_R1R0_01>;
++ };
++
++ pins-insert {
++ pinmux = <PINMUX_GPIO2__FUNC_B_GPIO2>;
++ bias-pull-up;
++ };
++ };
++
++ mmc1_uhs_pins: mmc1-uhs-pins {
++ pins-clk {
++ pinmux = <PINMUX_GPIO164__FUNC_B1_MSDC1_CLK>;
++ drive-strength = <6>;
++ bias-pull-down = <MTK_PUPD_SET_R1R0_10>;
++ };
++
++ pins-cmd-dat {
++ pinmux = <PINMUX_GPIO163__FUNC_B1_MSDC1_CMD>,
++ <PINMUX_GPIO165__FUNC_B1_MSDC1_DAT0>,
++ <PINMUX_GPIO166__FUNC_B1_MSDC1_DAT1>,
++ <PINMUX_GPIO167__FUNC_B1_MSDC1_DAT2>,
++ <PINMUX_GPIO168__FUNC_B1_MSDC1_DAT3>;
++ input-enable;
++ drive-strength = <6>;
++ bias-pull-up = <MTK_PUPD_SET_R1R0_01>;
++ };
++ };
++
++ mmc2_default_pins: mmc2-default-pins {
++ pins-clk {
++ pinmux = <PINMUX_GPIO170__FUNC_B1_MSDC2_CLK>;
++ drive-strength = <4>;
++ bias-pull-down = <MTK_PUPD_SET_R1R0_10>;
++ };
++
++ pins-cmd-dat {
++ pinmux = <PINMUX_GPIO169__FUNC_B1_MSDC2_CMD>,
++ <PINMUX_GPIO171__FUNC_B1_MSDC2_DAT0>,
++ <PINMUX_GPIO172__FUNC_B1_MSDC2_DAT1>,
++ <PINMUX_GPIO173__FUNC_B1_MSDC2_DAT2>,
++ <PINMUX_GPIO174__FUNC_B1_MSDC2_DAT3>;
++ input-enable;
++ drive-strength = <6>;
++ bias-pull-up = <MTK_PUPD_SET_R1R0_01>;
++ };
++
++ pins-pcm {
++ pinmux = <PINMUX_GPIO123__FUNC_O_PCM_DO>;
++ };
++ };
++
++ mmc2_uhs_pins: mmc2-uhs-pins {
++ pins-clk {
++ pinmux = <PINMUX_GPIO170__FUNC_B1_MSDC2_CLK>;
++ drive-strength = <4>;
++ bias-pull-down = <MTK_PUPD_SET_R1R0_10>;
++ };
++
++ pins-cmd-dat {
++ pinmux = <PINMUX_GPIO169__FUNC_B1_MSDC2_CMD>,
++ <PINMUX_GPIO171__FUNC_B1_MSDC2_DAT0>,
++ <PINMUX_GPIO172__FUNC_B1_MSDC2_DAT1>,
++ <PINMUX_GPIO173__FUNC_B1_MSDC2_DAT2>,
++ <PINMUX_GPIO174__FUNC_B1_MSDC2_DAT3>;
++ input-enable;
++ drive-strength = <6>;
++ bias-pull-up = <MTK_PUPD_SET_R1R0_01>;
++ };
++ };
++
++ mmc2_eint_pins: mmc2-eint-pins {
++ pins-dat1 {
++ pinmux = <PINMUX_GPIO172__FUNC_B_GPIO172>;
++ input-enable;
++ bias-pull-up = <MTK_PUPD_SET_R1R0_01>;
++ };
++ };
++
++ mmc2_dat1_pins: mmc2-dat1-pins {
++ pins-dat1 {
++ pinmux = <PINMUX_GPIO172__FUNC_B1_MSDC2_DAT1>;
++ input-enable;
++ drive-strength = <6>;
++ bias-pull-up = <MTK_PUPD_SET_R1R0_01>;
++ };
++ };
++
++ panel_default_pins: panel-default-pins {
++ pins-dcdc {
++ pinmux = <PINMUX_GPIO45__FUNC_B_GPIO45>;
++ output-low;
++ };
++
++ pins-en {
++ pinmux = <PINMUX_GPIO111__FUNC_B_GPIO111>;
++ output-low;
++ };
++
++ pins-rst {
++ pinmux = <PINMUX_GPIO25__FUNC_B_GPIO25>;
++ output-high;
++ };
++ };
++
++ pcie_pins_default: pcie-default {
++ mux {
++ pinmux = <PINMUX_GPIO47__FUNC_I1_WAKEN>,
++ <PINMUX_GPIO48__FUNC_O_PERSTN>,
++ <PINMUX_GPIO49__FUNC_B1_CLKREQN>;
++ bias-pull-up;
++ };
++ };
++
++ rt1715_int_pins: rt1715-int-pins {
++ pins_cmd0_dat {
++ pinmux = <PINMUX_GPIO12__FUNC_B_GPIO12>;
++ bias-pull-up;
++ input-enable;
++ };
++ };
++
++ spi0_pins: spi0-pins {
++ pins-spi {
++ pinmux = <PINMUX_GPIO69__FUNC_O_SPIM0_CSB>,
++ <PINMUX_GPIO70__FUNC_O_SPIM0_CLK>,
++ <PINMUX_GPIO71__FUNC_B0_SPIM0_MOSI>,
++ <PINMUX_GPIO72__FUNC_B0_SPIM0_MISO>;
++ bias-disable;
++ };
++ };
++
++ spi1_pins: spi1-pins {
++ pins-spi {
++ pinmux = <PINMUX_GPIO75__FUNC_O_SPIM1_CSB>,
++ <PINMUX_GPIO76__FUNC_O_SPIM1_CLK>,
++ <PINMUX_GPIO77__FUNC_B0_SPIM1_MOSI>,
++ <PINMUX_GPIO78__FUNC_B0_SPIM1_MISO>;
++ bias-disable;
++ };
++ };
++
++ spi2_pins: spi2-pins {
++ pins-spi {
++ pinmux = <PINMUX_GPIO79__FUNC_O_SPIM2_CSB>,
++ <PINMUX_GPIO80__FUNC_O_SPIM2_CLK>,
++ <PINMUX_GPIO81__FUNC_B0_SPIM2_MOSI>,
++ <PINMUX_GPIO82__FUNC_B0_SPIM2_MISO>;
++ bias-disable;
++ };
++ };
++
++ touch_pins: touch-pins {
++ pins-irq {
++ pinmux = <PINMUX_GPIO6__FUNC_B_GPIO6>;
++ input-enable;
++ bias-disable;
++ };
++
++ pins-reset {
++ pinmux = <PINMUX_GPIO5__FUNC_B_GPIO5>;
++ output-high;
++ };
++ };
++
++ uart0_pins: uart0-pins {
++ pins {
++ pinmux = <PINMUX_GPIO31__FUNC_O_UTXD0>,
++ <PINMUX_GPIO32__FUNC_I1_URXD0>;
++ bias-pull-up;
++ };
++ };
++
++ uart1_pins: uart1-pins {
++ pins {
++ pinmux = <PINMUX_GPIO33__FUNC_O_UTXD1>,
++ <PINMUX_GPIO34__FUNC_I1_URXD1>;
++ bias-pull-up;
++ };
++ };
++
++ uart2_pins: uart2-pins {
++ pins {
++ pinmux = <PINMUX_GPIO35__FUNC_O_UTXD2>,
++ <PINMUX_GPIO36__FUNC_I1_URXD2>;
++ bias-pull-up;
++ };
++ };
++
++ usb_default_pins: usb-default-pins {
++ pins-iddig {
++ pinmux = <PINMUX_GPIO83__FUNC_B_GPIO83>;
++ input-enable;
++ bias-pull-up;
++ };
++
++ pins-valid {
++ pinmux = <PINMUX_GPIO85__FUNC_I0_VBUSVALID>;
++ input-enable;
++ };
++
++ pins-vbus {
++ pinmux = <PINMUX_GPIO84__FUNC_O_USB_DRVVBUS>;
++ output-high;
++ };
++
++ };
++
++ usb1_default_pins: usb1-default-pins {
++ pins-valid {
++ pinmux = <PINMUX_GPIO88__FUNC_I0_VBUSVALID_1P>;
++ input-enable;
++ };
++
++ pins-usb-hub-3v3-en {
++ pinmux = <PINMUX_GPIO112__FUNC_B_GPIO112>;
++ output-high;
++ };
++ };
++
++ wifi_pwrseq_pins: wifi-pwrseq-pins {
++ pins-wifi-enable {
++ pinmux = <PINMUX_GPIO127__FUNC_B_GPIO127>;
++ output-low;
++ };
++ };
++};
++
++&eth {
++ phy-mode ="rgmii-id";
++ phy-handle = <&ethernet_phy0>;
++ pinctrl-names = "default", "sleep";
++ pinctrl-0 = <&eth_default_pins>;
++ pinctrl-1 = <&eth_sleep_pins>;
++ mediatek,mac-wol;
++ snps,reset-gpio = <&pio 147 GPIO_ACTIVE_HIGH>;
++ snps,reset-delays-us = <0 10000 10000>;
++ status = "okay";
++};
++
++&eth_mdio {
++ ethernet_phy0: ethernet-phy@1 {
++ compatible = "ethernet-phy-id001c.c916";
++ reg = <0x1>;
++ };
++};
++
++&pmic {
++ interrupt-parent = <&pio>;
++ interrupts = <222 IRQ_TYPE_LEVEL_HIGH>;
++
++ mt6359keys: keys {
++ compatible = "mediatek,mt6359-keys";
++ mediatek,long-press-mode = <1>;
++ power-off-time-sec = <0>;
++
++ power-key {
++ linux,keycodes = <KEY_POWER>;
++ wakeup-source;
++ };
++ };
++};
++
++&scp {
++ memory-region = <&scp_mem>;
++ status = "okay";
++};
++
++&sound {
++ compatible = "mediatek,mt8390-mt6359-evk", "mediatek,mt8188-mt6359-evb";
++ model = "mt8390-evk";
++ pinctrl-names = "default";
++ pinctrl-0 = <&audio_default_pins>;
++ audio-routing =
++ "Headphone", "Headphone L",
++ "Headphone", "Headphone R";
++ mediatek,adsp = <&adsp>;
++ status = "okay";
++
++ dai-link-0 {
++ link-name = "DL_SRC_BE";
++
++ codec {
++ sound-dai = <&pmic 0>;
++ };
++ };
++};
++
++&spi2 {
++ pinctrl-0 = <&spi2_pins>;
++ pinctrl-names = "default";
++ mediatek,pad-select = <0>;
++ #address-cells = <1>;
++ #size-cells = <0>;
++ status = "okay";
++};
++
++&uart0 {
++ pinctrl-0 = <&uart0_pins>;
++ pinctrl-names = "default";
++ status = "okay";
++};
++
++&uart1 {
++ pinctrl-0 = <&uart1_pins>;
++ pinctrl-names = "default";
++ status = "okay";
++};
++
++&uart2 {
++ pinctrl-0 = <&uart2_pins>;
++ pinctrl-names = "default";
++ status = "okay";
++};
++
++&u3phy0 {
++ status = "okay";
++};
++
++&u3phy1 {
++ status = "okay";
++};
++
++&u3phy2 {
++ status = "okay";
++};
++
++&xhci0 {
++ status = "okay";
++ vusb33-supply = <&mt6359_vusb_ldo_reg>;
++};
++
++&xhci1 {
++ status = "okay";
++ vusb33-supply = <&mt6359_vusb_ldo_reg>;
++ #address-cells = <1>;
++ #size-cells = <0>;
++
++ hub_2_0: hub@1 {
++ compatible = "usb451,8025";
++ reg = <1>;
++ peer-hub = <&hub_3_0>;
++ reset-gpios = <&pio 7 GPIO_ACTIVE_HIGH>;
++ vdd-supply = <&usb_hub_fixed_3v3>;
++ };
++
++ hub_3_0: hub@2 {
++ compatible = "usb451,8027";
++ reg = <2>;
++ peer-hub = <&hub_2_0>;
++ reset-gpios = <&pio 7 GPIO_ACTIVE_HIGH>;
++ vdd-supply = <&usb_hub_fixed_3v3>;
++ };
++};
++
++&xhci2 {
++ status = "okay";
++ vusb33-supply = <&mt6359_vusb_ldo_reg>;
++ vbus-supply = <&sdio_fixed_3v3>; /* wifi_3v3 */
++};
+diff --git a/arch/arm64/boot/dts/renesas/r8a774c0.dtsi b/arch/arm64/boot/dts/renesas/r8a774c0.dtsi
+index 7655d5e3a03416..522e20924e94a2 100644
+--- a/arch/arm64/boot/dts/renesas/r8a774c0.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a774c0.dtsi
+@@ -47,16 +47,20 @@ can_clk: can {
+ cluster1_opp: opp-table-1 {
+ compatible = "operating-points-v2";
+ opp-shared;
++
+ opp-800000000 {
+ opp-hz = /bits/ 64 <800000000>;
++ opp-microvolt = <1030000>;
+ clock-latency-ns = <300000>;
+ };
+ opp-1000000000 {
+ opp-hz = /bits/ 64 <1000000000>;
++ opp-microvolt = <1030000>;
+ clock-latency-ns = <300000>;
+ };
+ opp-1200000000 {
+ opp-hz = /bits/ 64 <1200000000>;
++ opp-microvolt = <1030000>;
+ clock-latency-ns = <300000>;
+ opp-suspend;
+ };
+diff --git a/arch/arm64/boot/dts/renesas/r8a77990.dtsi b/arch/arm64/boot/dts/renesas/r8a77990.dtsi
+index 233af3081e84a4..50fbf7251665a4 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77990.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a77990.dtsi
+@@ -47,16 +47,20 @@ can_clk: can {
+ cluster1_opp: opp-table-1 {
+ compatible = "operating-points-v2";
+ opp-shared;
++
+ opp-800000000 {
+ opp-hz = /bits/ 64 <800000000>;
++ opp-microvolt = <1030000>;
+ clock-latency-ns = <300000>;
+ };
+ opp-1000000000 {
+ opp-hz = /bits/ 64 <1000000000>;
++ opp-microvolt = <1030000>;
+ clock-latency-ns = <300000>;
+ };
+ opp-1200000000 {
+ opp-hz = /bits/ 64 <1200000000>;
++ opp-microvolt = <1030000>;
+ clock-latency-ns = <300000>;
+ opp-suspend;
+ };
+diff --git a/arch/arm64/boot/dts/rockchip/rk3308-roc-cc.dts b/arch/arm64/boot/dts/rockchip/rk3308-roc-cc.dts
+index 629121de5a13d6..5e71819489920e 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3308-roc-cc.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3308-roc-cc.dts
+@@ -147,7 +147,7 @@ rtc: rtc@51 {
+
+ &pwm5 {
+ status = "okay";
+- pinctrl-names = "active";
++ pinctrl-names = "default";
+ pinctrl-0 = <&pwm5_pin_pull_down>;
+ };
+
+diff --git a/arch/arm64/boot/dts/rockchip/rk3318-a95x-z2.dts b/arch/arm64/boot/dts/rockchip/rk3318-a95x-z2.dts
+index a94114fb7cc1d1..96c27fc5005d1f 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3318-a95x-z2.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3318-a95x-z2.dts
+@@ -274,13 +274,13 @@ otg_vbus_drv: otg-vbus-drv {
+
+ &pwm0 {
+ pinctrl-0 = <&pwm0_pin_pull_up>;
+- pinctrl-names = "active";
++ pinctrl-names = "default";
+ status = "okay";
+ };
+
+ &pwm1 {
+ pinctrl-0 = <&pwm1_pin_pull_up>;
+- pinctrl-names = "active";
++ pinctrl-names = "default";
+ status = "okay";
+ };
+
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-nanopi4.dtsi b/arch/arm64/boot/dts/rockchip/rk3399-nanopi4.dtsi
+index b169be06d4d1f7..c8eb5481f43d02 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-nanopi4.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3399-nanopi4.dtsi
+@@ -603,7 +603,7 @@ &pwm1 {
+ };
+
+ &pwm2 {
+- pinctrl-names = "active";
++ pinctrl-names = "default";
+ pinctrl-0 = <&pwm2_pin_pull_down>;
+ status = "okay";
+ };
+diff --git a/arch/arm64/boot/dts/rockchip/rk3568-rock-3a.dts b/arch/arm64/boot/dts/rockchip/rk3568-rock-3a.dts
+index ac79140a9ecd63..44cfdfeed66813 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3568-rock-3a.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3568-rock-3a.dts
+@@ -778,20 +778,6 @@ &uart1 {
+ pinctrl-0 = <&uart1m0_xfer &uart1m0_ctsn &uart1m0_rtsn>;
+ uart-has-rtscts;
+ status = "okay";
+-
+- bluetooth {
+- compatible = "brcm,bcm43438-bt";
+- clocks = <&rk809 1>;
+- clock-names = "lpo";
+- device-wakeup-gpios = <&gpio4 RK_PB5 GPIO_ACTIVE_HIGH>;
+- host-wakeup-gpios = <&gpio4 RK_PB4 GPIO_ACTIVE_HIGH>;
+- shutdown-gpios = <&gpio4 RK_PB2 GPIO_ACTIVE_HIGH>;
+- pinctrl-names = "default";
+- pinctrl-0 = <&bt_host_wake &bt_wake &bt_enable>;
+- vbat-supply = <&vcc3v3_sys>;
+- vddio-supply = <&vcc_1v8>;
+- /* vddio comes from regulator on module, use IO bank voltage instead */
+- };
+ };
+
+ &uart2 {
+diff --git a/arch/arm64/boot/dts/rockchip/rk356x-base.dtsi b/arch/arm64/boot/dts/rockchip/rk356x-base.dtsi
+index e5539062911405..8421d4b8c7719f 100644
+--- a/arch/arm64/boot/dts/rockchip/rk356x-base.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk356x-base.dtsi
+@@ -174,6 +174,18 @@ psci {
+ method = "smc";
+ };
+
++ reserved-memory {
++ #address-cells = <2>;
++ #size-cells = <2>;
++ ranges;
++
++ scmi_shmem: shmem@10f000 {
++ compatible = "arm,scmi-shmem";
++ reg = <0x0 0x0010f000 0x0 0x100>;
++ no-map;
++ };
++ };
++
+ timer {
+ compatible = "arm,armv8-timer";
+ interrupts = <GIC_PPI 13 IRQ_TYPE_LEVEL_HIGH>,
+@@ -199,19 +211,6 @@ xin32k: xin32k {
+ #clock-cells = <0>;
+ };
+
+- sram@10f000 {
+- compatible = "mmio-sram";
+- reg = <0x0 0x0010f000 0x0 0x100>;
+- #address-cells = <1>;
+- #size-cells = <1>;
+- ranges = <0 0x0 0x0010f000 0x100>;
+-
+- scmi_shmem: sram@0 {
+- compatible = "arm,scmi-shmem";
+- reg = <0x0 0x100>;
+- };
+- };
+-
+ sata1: sata@fc400000 {
+ compatible = "rockchip,rk3568-dwc-ahci", "snps,dwc-ahci";
+ reg = <0 0xfc400000 0 0x1000>;
+diff --git a/arch/arm64/boot/dts/rockchip/rk3576-armsom-sige5.dts b/arch/arm64/boot/dts/rockchip/rk3576-armsom-sige5.dts
+index 7c7331936a7fd5..a9b9db31d2a3e6 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3576-armsom-sige5.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3576-armsom-sige5.dts
+@@ -182,8 +182,7 @@ &gmac0 {
+ &eth0m0_tx_bus2
+ &eth0m0_rx_bus2
+ &eth0m0_rgmii_clk
+- &eth0m0_rgmii_bus
+- &ethm0_clk0_25m_out>;
++ &eth0m0_rgmii_bus>;
+
+ phy-handle = <&rgmii_phy0>;
+ status = "okay";
+diff --git a/arch/arm64/boot/dts/rockchip/rk3588-orangepi-5-compact.dtsi b/arch/arm64/boot/dts/rockchip/rk3588-orangepi-5-compact.dtsi
+index 87090cb98020b9..bcf3cf704a00e1 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3588-orangepi-5-compact.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3588-orangepi-5-compact.dtsi
+@@ -73,7 +73,7 @@ &led_green_pwm {
+
+ /* phy2 */
+ &pcie2x1l1 {
+- reset-gpios = <&gpio4 RK_PD4 GPIO_ACTIVE_HIGH>;
++ reset-gpios = <&gpio3 RK_PD4 GPIO_ACTIVE_HIGH>;
+ vpcie3v3-supply = <&vcc3v3_pcie_eth>;
+ status = "okay";
+ };
+diff --git a/arch/arm64/boot/dts/rockchip/rk3588s-coolpi-4b.dts b/arch/arm64/boot/dts/rockchip/rk3588s-coolpi-4b.dts
+index 9c394f733bbfbb..b2c30122aacc57 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3588s-coolpi-4b.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3588s-coolpi-4b.dts
+@@ -429,7 +429,7 @@ &pwm2 {
+ };
+
+ &pwm13 {
+- pinctrl-names = "active";
++ pinctrl-names = "default";
+ pinctrl-0 = <&pwm13m2_pins>;
+ status = "okay";
+ };
+diff --git a/arch/arm64/boot/dts/ti/k3-am62-verdin-dahlia.dtsi b/arch/arm64/boot/dts/ti/k3-am62-verdin-dahlia.dtsi
+index 9202181fbd6528..fcc4cb2e9389bc 100644
+--- a/arch/arm64/boot/dts/ti/k3-am62-verdin-dahlia.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am62-verdin-dahlia.dtsi
+@@ -28,10 +28,10 @@ sound {
+ "Headphone Jack", "HPOUTR",
+ "IN2L", "Line In Jack",
+ "IN2R", "Line In Jack",
+- "Headphone Jack", "MICBIAS",
+- "IN1L", "Headphone Jack";
++ "Microphone Jack", "MICBIAS",
++ "IN1L", "Microphone Jack";
+ simple-audio-card,widgets =
+- "Microphone", "Headphone Jack",
++ "Microphone", "Microphone Jack",
+ "Headphone", "Headphone Jack",
+ "Line", "Line In Jack";
+
+diff --git a/arch/arm64/boot/dts/ti/k3-am62p-j722s-common-mcu.dtsi b/arch/arm64/boot/dts/ti/k3-am62p-j722s-common-mcu.dtsi
+index b33aff0d65c9de..bd6a00d13aea75 100644
+--- a/arch/arm64/boot/dts/ti/k3-am62p-j722s-common-mcu.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am62p-j722s-common-mcu.dtsi
+@@ -12,15 +12,7 @@ mcu_pmx0: pinctrl@4084000 {
+ #pinctrl-cells = <1>;
+ pinctrl-single,register-width = <32>;
+ pinctrl-single,function-mask = <0xffffffff>;
+- pinctrl-single,gpio-range =
+- <&mcu_pmx_range 0 21 PIN_GPIO_RANGE_IOPAD>,
+- <&mcu_pmx_range 23 1 PIN_GPIO_RANGE_IOPAD>,
+- <&mcu_pmx_range 32 2 PIN_GPIO_RANGE_IOPAD>;
+ bootph-all;
+-
+- mcu_pmx_range: gpio-range {
+- #pinctrl-single,gpio-range-cells = <3>;
+- };
+ };
+
+ mcu_esm: esm@4100000 {
+diff --git a/arch/arm64/boot/dts/ti/k3-am62p-main.dtsi b/arch/arm64/boot/dts/ti/k3-am62p-main.dtsi
+index 420c77c8e9e5e2..6aea9d3f134e4b 100644
+--- a/arch/arm64/boot/dts/ti/k3-am62p-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am62p-main.dtsi
+@@ -42,17 +42,23 @@ &inta_main_dmss {
+ ti,interrupt-ranges = <5 69 35>;
+ };
+
+-&main_pmx0 {
+- pinctrl-single,gpio-range =
+- <&main_pmx0_range 0 32 PIN_GPIO_RANGE_IOPAD>,
+- <&main_pmx0_range 33 38 PIN_GPIO_RANGE_IOPAD>,
+- <&main_pmx0_range 72 22 PIN_GPIO_RANGE_IOPAD>,
+- <&main_pmx0_range 137 5 PIN_GPIO_RANGE_IOPAD>,
+- <&main_pmx0_range 143 3 PIN_GPIO_RANGE_IOPAD>,
+- <&main_pmx0_range 149 2 PIN_GPIO_RANGE_IOPAD>;
++&main_conf {
++ audio_refclk0: clock-controller@82e0 {
++ compatible = "ti,am62-audio-refclk";
++ reg = <0x82e0 0x4>;
++ clocks = <&k3_clks 157 0>;
++ assigned-clocks = <&k3_clks 157 0>;
++ assigned-clock-parents = <&k3_clks 157 16>;
++ #clock-cells = <0>;
++ };
+
+- main_pmx0_range: gpio-range {
+- #pinctrl-single,gpio-range-cells = <3>;
++ audio_refclk1: clock-controller@82e4 {
++ compatible = "ti,am62-audio-refclk";
++ reg = <0x82e4 0x4>;
++ clocks = <&k3_clks 157 18>;
++ assigned-clocks = <&k3_clks 157 18>;
++ assigned-clock-parents = <&k3_clks 157 34>;
++ #clock-cells = <0>;
+ };
+ };
+
+diff --git a/arch/arm64/boot/dts/ti/k3-j722s-evm.dts b/arch/arm64/boot/dts/ti/k3-j722s-evm.dts
+index d184e9c1a0a598..adee69607fdbf5 100644
+--- a/arch/arm64/boot/dts/ti/k3-j722s-evm.dts
++++ b/arch/arm64/boot/dts/ti/k3-j722s-evm.dts
+@@ -590,7 +590,7 @@ exp1: gpio@23 {
+ p05-hog {
+ /* P05 - USB2.0_MUX_SEL */
+ gpio-hog;
+- gpios = <5 GPIO_ACTIVE_HIGH>;
++ gpios = <5 GPIO_ACTIVE_LOW>;
+ output-high;
+ };
+
+diff --git a/arch/arm64/boot/dts/ti/k3-j722s-main.dtsi b/arch/arm64/boot/dts/ti/k3-j722s-main.dtsi
+index 3ac2d45a055857..6da7b3a2943c44 100644
+--- a/arch/arm64/boot/dts/ti/k3-j722s-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j722s-main.dtsi
+@@ -251,21 +251,6 @@ &inta_main_dmss {
+ ti,interrupt-ranges = <7 71 21>;
+ };
+
+-&main_pmx0 {
+- pinctrl-single,gpio-range =
+- <&main_pmx0_range 0 32 PIN_GPIO_RANGE_IOPAD>,
+- <&main_pmx0_range 33 38 PIN_GPIO_RANGE_IOPAD>,
+- <&main_pmx0_range 72 17 PIN_GPIO_RANGE_IOPAD>,
+- <&main_pmx0_range 101 25 PIN_GPIO_RANGE_IOPAD>,
+- <&main_pmx0_range 137 5 PIN_GPIO_RANGE_IOPAD>,
+- <&main_pmx0_range 143 3 PIN_GPIO_RANGE_IOPAD>,
+- <&main_pmx0_range 149 2 PIN_GPIO_RANGE_IOPAD>;
+-
+- main_pmx0_range: gpio-range {
+- #pinctrl-single,gpio-range-cells = <3>;
+- };
+-};
+-
+ &main_gpio0 {
+ gpio-ranges = <&main_pmx0 0 0 32>, <&main_pmx0 32 33 38>,
+ <&main_pmx0 70 72 17>;
+diff --git a/arch/arm64/include/asm/mem_encrypt.h b/arch/arm64/include/asm/mem_encrypt.h
+index f8f78f622dd2c9..a2a1eeb36d4b50 100644
+--- a/arch/arm64/include/asm/mem_encrypt.h
++++ b/arch/arm64/include/asm/mem_encrypt.h
+@@ -21,4 +21,15 @@ static inline bool force_dma_unencrypted(struct device *dev)
+ return is_realm_world();
+ }
+
++/*
++ * For Arm CCA guests, canonical addresses are "encrypted", so no changes
++ * required for dma_addr_encrypted().
++ * The unencrypted DMA buffers must be accessed via the unprotected IPA,
++ * "top IPA bit" set.
++ */
++#define dma_addr_unencrypted(x) ((x) | PROT_NS_SHARED)
++
++/* Clear the "top" IPA bit while converting back */
++#define dma_addr_canonical(x) ((x) & ~PROT_NS_SHARED)
++
+ #endif /* __ASM_MEM_ENCRYPT_H */
+diff --git a/arch/arm64/kernel/compat_alignment.c b/arch/arm64/kernel/compat_alignment.c
+index deff21bfa6800c..b68e1d328d4cb9 100644
+--- a/arch/arm64/kernel/compat_alignment.c
++++ b/arch/arm64/kernel/compat_alignment.c
+@@ -368,6 +368,8 @@ int do_compat_alignment_fixup(unsigned long addr, struct pt_regs *regs)
+ return 1;
+ }
+
++ if (!handler)
++ return 1;
+ type = handler(addr, instr, regs);
+
+ if (type == TYPE_ERROR || type == TYPE_FAULT)
+diff --git a/arch/loongarch/Kconfig b/arch/loongarch/Kconfig
+index 2b8bd27a852fee..bdb989c49c094c 100644
+--- a/arch/loongarch/Kconfig
++++ b/arch/loongarch/Kconfig
+@@ -382,8 +382,8 @@ config CMDLINE_BOOTLOADER
+ config CMDLINE_EXTEND
+ bool "Use built-in to extend bootloader kernel arguments"
+ help
+- The command-line arguments provided during boot will be
+- appended to the built-in command line. This is useful in
++ The built-in command line will be appended to the command-
++ line arguments provided during boot. This is useful in
+ cases where the provided arguments are insufficient and
+ you don't want to or cannot modify them.
+
+diff --git a/arch/loongarch/include/asm/cache.h b/arch/loongarch/include/asm/cache.h
+index 1b6d0961719989..aa622c75441442 100644
+--- a/arch/loongarch/include/asm/cache.h
++++ b/arch/loongarch/include/asm/cache.h
+@@ -8,6 +8,8 @@
+ #define L1_CACHE_SHIFT CONFIG_L1_CACHE_SHIFT
+ #define L1_CACHE_BYTES (1 << L1_CACHE_SHIFT)
+
++#define ARCH_DMA_MINALIGN (16)
++
+ #define __read_mostly __section(".data..read_mostly")
+
+ #endif /* _ASM_CACHE_H */
+diff --git a/arch/loongarch/include/asm/irq.h b/arch/loongarch/include/asm/irq.h
+index a0ca84da8541d5..12bd15578c3369 100644
+--- a/arch/loongarch/include/asm/irq.h
++++ b/arch/loongarch/include/asm/irq.h
+@@ -53,7 +53,7 @@ void spurious_interrupt(void);
+ #define arch_trigger_cpumask_backtrace arch_trigger_cpumask_backtrace
+ void arch_trigger_cpumask_backtrace(const struct cpumask *mask, int exclude_cpu);
+
+-#define MAX_IO_PICS 2
++#define MAX_IO_PICS 8
+ #define NR_IRQS (64 + NR_VECTORS * (NR_CPUS + MAX_IO_PICS))
+
+ struct acpi_vector_group {
+diff --git a/arch/loongarch/include/asm/stacktrace.h b/arch/loongarch/include/asm/stacktrace.h
+index f23adb15f418fb..fc8b64773794a9 100644
+--- a/arch/loongarch/include/asm/stacktrace.h
++++ b/arch/loongarch/include/asm/stacktrace.h
+@@ -8,6 +8,7 @@
+ #include <asm/asm.h>
+ #include <asm/ptrace.h>
+ #include <asm/loongarch.h>
++#include <asm/unwind_hints.h>
+ #include <linux/stringify.h>
+
+ enum stack_type {
+@@ -43,6 +44,7 @@ int get_stack_info(unsigned long stack, struct task_struct *task, struct stack_i
+ static __always_inline void prepare_frametrace(struct pt_regs *regs)
+ {
+ __asm__ __volatile__(
++ UNWIND_HINT_SAVE
+ /* Save $ra */
+ STORE_ONE_REG(1)
+ /* Use $ra to save PC */
+@@ -80,6 +82,7 @@ static __always_inline void prepare_frametrace(struct pt_regs *regs)
+ STORE_ONE_REG(29)
+ STORE_ONE_REG(30)
+ STORE_ONE_REG(31)
++ UNWIND_HINT_RESTORE
+ : "=m" (regs->csr_era)
+ : "r" (regs->regs)
+ : "memory");
+diff --git a/arch/loongarch/include/asm/unwind_hints.h b/arch/loongarch/include/asm/unwind_hints.h
+index a01086ad9ddea4..2c68bc72736c95 100644
+--- a/arch/loongarch/include/asm/unwind_hints.h
++++ b/arch/loongarch/include/asm/unwind_hints.h
+@@ -23,6 +23,14 @@
+ UNWIND_HINT sp_reg=ORC_REG_SP type=UNWIND_HINT_TYPE_CALL
+ .endm
+
+-#endif /* __ASSEMBLY__ */
++#else /* !__ASSEMBLY__ */
++
++#define UNWIND_HINT_SAVE \
++ UNWIND_HINT(UNWIND_HINT_TYPE_SAVE, 0, 0, 0)
++
++#define UNWIND_HINT_RESTORE \
++ UNWIND_HINT(UNWIND_HINT_TYPE_RESTORE, 0, 0, 0)
++
++#endif /* !__ASSEMBLY__ */
+
+ #endif /* _ASM_LOONGARCH_UNWIND_HINTS_H */
+diff --git a/arch/loongarch/kernel/env.c b/arch/loongarch/kernel/env.c
+index 2f1f5b08638f81..27144de5c5fe4f 100644
+--- a/arch/loongarch/kernel/env.c
++++ b/arch/loongarch/kernel/env.c
+@@ -68,6 +68,8 @@ static int __init fdt_cpu_clk_init(void)
+ return -ENODEV;
+
+ clk = of_clk_get(np, 0);
++ of_node_put(np);
++
+ if (IS_ERR(clk))
+ return -ENODEV;
+
+diff --git a/arch/loongarch/kernel/kgdb.c b/arch/loongarch/kernel/kgdb.c
+index 445c452d72a79c..7be5b4c0c90020 100644
+--- a/arch/loongarch/kernel/kgdb.c
++++ b/arch/loongarch/kernel/kgdb.c
+@@ -8,6 +8,7 @@
+ #include <linux/hw_breakpoint.h>
+ #include <linux/kdebug.h>
+ #include <linux/kgdb.h>
++#include <linux/objtool.h>
+ #include <linux/processor.h>
+ #include <linux/ptrace.h>
+ #include <linux/sched.h>
+@@ -224,13 +225,13 @@ void kgdb_arch_set_pc(struct pt_regs *regs, unsigned long pc)
+ regs->csr_era = pc;
+ }
+
+-void arch_kgdb_breakpoint(void)
++noinline void arch_kgdb_breakpoint(void)
+ {
+ __asm__ __volatile__ ( \
+ ".globl kgdb_breakinst\n\t" \
+- "nop\n" \
+ "kgdb_breakinst:\tbreak 2\n\t"); /* BRK_KDB = 2 */
+ }
++STACK_FRAME_NON_STANDARD(arch_kgdb_breakpoint);
+
+ /*
+ * Calls linux_debug_hook before the kernel dies. If KGDB is enabled,
+diff --git a/arch/loongarch/net/bpf_jit.c b/arch/loongarch/net/bpf_jit.c
+index ea357a3edc0943..fa1500d4aa3e3a 100644
+--- a/arch/loongarch/net/bpf_jit.c
++++ b/arch/loongarch/net/bpf_jit.c
+@@ -142,6 +142,8 @@ static void build_prologue(struct jit_ctx *ctx)
+ */
+ if (seen_tail_call(ctx) && seen_call(ctx))
+ move_reg(ctx, TCC_SAVED, REG_TCC);
++ else
++ emit_insn(ctx, nop);
+
+ ctx->stack_size = stack_adjust;
+ }
+@@ -905,7 +907,10 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, bool ext
+
+ move_addr(ctx, t1, func_addr);
+ emit_insn(ctx, jirl, LOONGARCH_GPR_RA, t1, 0);
+- move_reg(ctx, regmap[BPF_REG_0], LOONGARCH_GPR_A0);
++
++ if (insn->src_reg != BPF_PSEUDO_CALL)
++ move_reg(ctx, regmap[BPF_REG_0], LOONGARCH_GPR_A0);
++
+ break;
+
+ /* tail call */
+@@ -930,7 +935,10 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, bool ext
+ {
+ const u64 imm64 = (u64)(insn + 1)->imm << 32 | (u32)insn->imm;
+
+- move_imm(ctx, dst, imm64, is32);
++ if (bpf_pseudo_func(insn))
++ move_addr(ctx, dst, imm64);
++ else
++ move_imm(ctx, dst, imm64, is32);
+ return 1;
+ }
+
+diff --git a/arch/loongarch/net/bpf_jit.h b/arch/loongarch/net/bpf_jit.h
+index 68586338ecf859..f9c569f5394914 100644
+--- a/arch/loongarch/net/bpf_jit.h
++++ b/arch/loongarch/net/bpf_jit.h
+@@ -27,6 +27,11 @@ struct jit_data {
+ struct jit_ctx ctx;
+ };
+
++static inline void emit_nop(union loongarch_instruction *insn)
++{
++ insn->word = INSN_NOP;
++}
++
+ #define emit_insn(ctx, func, ...) \
+ do { \
+ if (ctx->image != NULL) { \
+diff --git a/arch/m68k/include/asm/processor.h b/arch/m68k/include/asm/processor.h
+index 8f2676c3a9882d..3c43c09d448945 100644
+--- a/arch/m68k/include/asm/processor.h
++++ b/arch/m68k/include/asm/processor.h
+@@ -95,10 +95,24 @@ static inline void set_fc(unsigned long val)
+ "movec %0,%/dfc\n\t"
+ : /* no outputs */ : "r" (val) : "memory");
+ }
++
++static inline unsigned long get_fc(void)
++{
++ unsigned long val;
++
++ __asm__ ("movec %/dfc,%0" : "=r" (val) : );
++
++ return val;
++}
+ #else
+ static inline void set_fc(unsigned long val)
+ {
+ }
++
++static inline unsigned long get_fc(void)
++{
++ return USER_DATA;
++}
+ #endif /* CONFIG_CPU_HAS_ADDRESS_SPACES */
+
+ struct thread_struct {
+diff --git a/arch/m68k/sun3/mmu_emu.c b/arch/m68k/sun3/mmu_emu.c
+index 119bd32efcfbc9..b39fc3717d8eae 100644
+--- a/arch/m68k/sun3/mmu_emu.c
++++ b/arch/m68k/sun3/mmu_emu.c
+@@ -17,6 +17,7 @@
+ #include <linux/bitops.h>
+ #include <linux/module.h>
+ #include <linux/sched/mm.h>
++#include <linux/string_choices.h>
+
+ #include <asm/setup.h>
+ #include <asm/traps.h>
+@@ -370,8 +371,8 @@ int mmu_emu_handle_fault (unsigned long vaddr, int read_flag, int kernel_fault)
+ }
+
+ #ifdef DEBUG_MMU_EMU
+- pr_info("%s: vaddr=%lx type=%s crp=%p\n", __func__, vaddr,
+- read_flag ? "read" : "write", crp);
++ pr_info("%s: vaddr=%lx type=%s crp=%px\n", __func__, vaddr,
++ str_read_write(read_flag), crp);
+ #endif
+
+ segment = (vaddr >> SUN3_PMEG_SIZE_BITS) & 0x7FF;
+@@ -417,7 +418,7 @@ int mmu_emu_handle_fault (unsigned long vaddr, int read_flag, int kernel_fault)
+ pte_val (*pte) |= SUN3_PAGE_ACCESSED;
+
+ #ifdef DEBUG_MMU_EMU
+- pr_info("seg:%ld crp:%p ->", get_fs().seg, crp);
++ pr_info("seg:%ld crp:%px ->", get_fc(), crp);
+ print_pte_vaddr (vaddr);
+ pr_cont("\n");
+ #endif
+diff --git a/arch/parisc/include/uapi/asm/socket.h b/arch/parisc/include/uapi/asm/socket.h
+index aa9cd4b951fe53..96831c98860658 100644
+--- a/arch/parisc/include/uapi/asm/socket.h
++++ b/arch/parisc/include/uapi/asm/socket.h
+@@ -132,16 +132,16 @@
+ #define SO_PASSPIDFD 0x404A
+ #define SO_PEERPIDFD 0x404B
+
+-#define SO_DEVMEM_LINEAR 78
+-#define SCM_DEVMEM_LINEAR SO_DEVMEM_LINEAR
+-#define SO_DEVMEM_DMABUF 79
+-#define SCM_DEVMEM_DMABUF SO_DEVMEM_DMABUF
+-#define SO_DEVMEM_DONTNEED 80
+-
+ #define SCM_TS_OPT_ID 0x404C
+
+ #define SO_RCVPRIORITY 0x404D
+
++#define SO_DEVMEM_LINEAR 0x404E
++#define SCM_DEVMEM_LINEAR SO_DEVMEM_LINEAR
++#define SO_DEVMEM_DMABUF 0x404F
++#define SCM_DEVMEM_DMABUF SO_DEVMEM_DMABUF
++#define SO_DEVMEM_DONTNEED 0x4050
++
+ #if !defined(__KERNEL__)
+
+ #if __BITS_PER_LONG == 64
+diff --git a/arch/powerpc/configs/mpc885_ads_defconfig b/arch/powerpc/configs/mpc885_ads_defconfig
+index 77306be62e9ee8..129355f87f80fc 100644
+--- a/arch/powerpc/configs/mpc885_ads_defconfig
++++ b/arch/powerpc/configs/mpc885_ads_defconfig
+@@ -78,4 +78,4 @@ CONFIG_DEBUG_VM_PGTABLE=y
+ CONFIG_DETECT_HUNG_TASK=y
+ CONFIG_BDI_SWITCH=y
+ CONFIG_PPC_EARLY_DEBUG=y
+-CONFIG_GENERIC_PTDUMP=y
++CONFIG_PTDUMP_DEBUGFS=y
+diff --git a/arch/powerpc/crypto/Makefile b/arch/powerpc/crypto/Makefile
+index 9b38f4a7bc1525..2f00b22b0823ea 100644
+--- a/arch/powerpc/crypto/Makefile
++++ b/arch/powerpc/crypto/Makefile
+@@ -51,3 +51,4 @@ $(obj)/aesp8-ppc.S $(obj)/ghashp8-ppc.S: $(obj)/%.S: $(src)/%.pl FORCE
+ OBJECT_FILES_NON_STANDARD_aesp10-ppc.o := y
+ OBJECT_FILES_NON_STANDARD_ghashp10-ppc.o := y
+ OBJECT_FILES_NON_STANDARD_aesp8-ppc.o := y
++OBJECT_FILES_NON_STANDARD_ghashp8-ppc.o := y
+diff --git a/arch/powerpc/kexec/relocate_32.S b/arch/powerpc/kexec/relocate_32.S
+index 104c9911f40611..dd86e338307d3f 100644
+--- a/arch/powerpc/kexec/relocate_32.S
++++ b/arch/powerpc/kexec/relocate_32.S
+@@ -348,16 +348,13 @@ write_utlb:
+ rlwinm r10, r24, 0, 22, 27
+
+ cmpwi r10, PPC47x_TLB0_4K
+- bne 0f
+ li r10, 0x1000 /* r10 = 4k */
+- ANNOTATE_INTRA_FUNCTION_CALL
+- bl 1f
++ beq 0f
+
+-0:
+ /* Defaults to 256M */
+ lis r10, 0x1000
+
+- bcl 20,31,$+4
++0: bcl 20,31,$+4
+ 1: mflr r4
+ addi r4, r4, (2f-1b) /* virtual address of 2f */
+
+diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c
+index 2b79171ee185be..f4e03aaabb4c36 100644
+--- a/arch/powerpc/perf/core-book3s.c
++++ b/arch/powerpc/perf/core-book3s.c
+@@ -132,7 +132,10 @@ static unsigned long ebb_switch_in(bool ebb, struct cpu_hw_events *cpuhw)
+
+ static inline void power_pmu_bhrb_enable(struct perf_event *event) {}
+ static inline void power_pmu_bhrb_disable(struct perf_event *event) {}
+-static void power_pmu_sched_task(struct perf_event_pmu_context *pmu_ctx, bool sched_in) {}
++static void power_pmu_sched_task(struct perf_event_pmu_context *pmu_ctx,
++ struct task_struct *task, bool sched_in)
++{
++}
+ static inline void power_pmu_bhrb_read(struct perf_event *event, struct cpu_hw_events *cpuhw) {}
+ static void pmao_restore_workaround(bool ebb) { }
+ #endif /* CONFIG_PPC32 */
+@@ -444,7 +447,8 @@ static void power_pmu_bhrb_disable(struct perf_event *event)
+ /* Called from ctxsw to prevent one process's branch entries to
+ * mingle with the other process's entries during context switch.
+ */
+-static void power_pmu_sched_task(struct perf_event_pmu_context *pmu_ctx, bool sched_in)
++static void power_pmu_sched_task(struct perf_event_pmu_context *pmu_ctx,
++ struct task_struct *task, bool sched_in)
+ {
+ if (!ppmu->bhrb_nr)
+ return;
+diff --git a/arch/powerpc/perf/vpa-pmu.c b/arch/powerpc/perf/vpa-pmu.c
+index 6a5bfd2a13b5aa..8407334689596a 100644
+--- a/arch/powerpc/perf/vpa-pmu.c
++++ b/arch/powerpc/perf/vpa-pmu.c
+@@ -156,6 +156,7 @@ static void vpa_pmu_del(struct perf_event *event, int flags)
+ }
+
+ static struct pmu vpa_pmu = {
++ .module = THIS_MODULE,
+ .task_ctx_nr = perf_sw_context,
+ .name = "vpa_pmu",
+ .event_init = vpa_pmu_event_init,
+diff --git a/arch/powerpc/platforms/cell/spufs/gang.c b/arch/powerpc/platforms/cell/spufs/gang.c
+index 827d338deaf4c6..2c2999de6bfa25 100644
+--- a/arch/powerpc/platforms/cell/spufs/gang.c
++++ b/arch/powerpc/platforms/cell/spufs/gang.c
+@@ -25,6 +25,7 @@ struct spu_gang *alloc_spu_gang(void)
+ mutex_init(&gang->aff_mutex);
+ INIT_LIST_HEAD(&gang->list);
+ INIT_LIST_HEAD(&gang->aff_list_head);
++ gang->alive = 1;
+
+ out:
+ return gang;
+diff --git a/arch/powerpc/platforms/cell/spufs/inode.c b/arch/powerpc/platforms/cell/spufs/inode.c
+index 70236d1df3d3e0..9f9e4b87162782 100644
+--- a/arch/powerpc/platforms/cell/spufs/inode.c
++++ b/arch/powerpc/platforms/cell/spufs/inode.c
+@@ -192,13 +192,32 @@ static int spufs_fill_dir(struct dentry *dir,
+ return -ENOMEM;
+ ret = spufs_new_file(dir->d_sb, dentry, files->ops,
+ files->mode & mode, files->size, ctx);
+- if (ret)
++ if (ret) {
++ dput(dentry);
+ return ret;
++ }
+ files++;
+ }
+ return 0;
+ }
+
++static void unuse_gang(struct dentry *dir)
++{
++ struct inode *inode = dir->d_inode;
++ struct spu_gang *gang = SPUFS_I(inode)->i_gang;
++
++ if (gang) {
++ bool dead;
++
++ inode_lock(inode); // exclusion with spufs_create_context()
++ dead = !--gang->alive;
++ inode_unlock(inode);
++
++ if (dead)
++ simple_recursive_removal(dir, NULL);
++ }
++}
++
+ static int spufs_dir_close(struct inode *inode, struct file *file)
+ {
+ struct inode *parent;
+@@ -213,6 +232,7 @@ static int spufs_dir_close(struct inode *inode, struct file *file)
+ inode_unlock(parent);
+ WARN_ON(ret);
+
++ unuse_gang(dir->d_parent);
+ return dcache_dir_close(inode, file);
+ }
+
+@@ -405,7 +425,7 @@ spufs_create_context(struct inode *inode, struct dentry *dentry,
+ {
+ int ret;
+ int affinity;
+- struct spu_gang *gang;
++ struct spu_gang *gang = SPUFS_I(inode)->i_gang;
+ struct spu_context *neighbor;
+ struct path path = {.mnt = mnt, .dentry = dentry};
+
+@@ -420,11 +440,15 @@ spufs_create_context(struct inode *inode, struct dentry *dentry,
+ if ((flags & SPU_CREATE_ISOLATE) && !isolated_loader)
+ return -ENODEV;
+
+- gang = NULL;
++ if (gang) {
++ if (!gang->alive)
++ return -ENOENT;
++ gang->alive++;
++ }
++
+ neighbor = NULL;
+ affinity = flags & (SPU_CREATE_AFFINITY_MEM | SPU_CREATE_AFFINITY_SPU);
+ if (affinity) {
+- gang = SPUFS_I(inode)->i_gang;
+ if (!gang)
+ return -EINVAL;
+ mutex_lock(&gang->aff_mutex);
+@@ -436,8 +460,11 @@ spufs_create_context(struct inode *inode, struct dentry *dentry,
+ }
+
+ ret = spufs_mkdir(inode, dentry, flags, mode & 0777);
+- if (ret)
++ if (ret) {
++ if (neighbor)
++ put_spu_context(neighbor);
+ goto out_aff_unlock;
++ }
+
+ if (affinity) {
+ spufs_set_affinity(flags, SPUFS_I(d_inode(dentry))->i_ctx,
+@@ -453,6 +480,8 @@ spufs_create_context(struct inode *inode, struct dentry *dentry,
+ out_aff_unlock:
+ if (affinity)
+ mutex_unlock(&gang->aff_mutex);
++ if (ret && gang)
++ gang->alive--; // can't reach 0
+ return ret;
+ }
+
+@@ -482,6 +511,7 @@ spufs_mkgang(struct inode *dir, struct dentry *dentry, umode_t mode)
+ inode->i_fop = &simple_dir_operations;
+
+ d_instantiate(dentry, inode);
++ dget(dentry);
+ inc_nlink(dir);
+ inc_nlink(d_inode(dentry));
+ return ret;
+@@ -492,6 +522,21 @@ spufs_mkgang(struct inode *dir, struct dentry *dentry, umode_t mode)
+ return ret;
+ }
+
++static int spufs_gang_close(struct inode *inode, struct file *file)
++{
++ unuse_gang(file->f_path.dentry);
++ return dcache_dir_close(inode, file);
++}
++
++static const struct file_operations spufs_gang_fops = {
++ .open = dcache_dir_open,
++ .release = spufs_gang_close,
++ .llseek = dcache_dir_lseek,
++ .read = generic_read_dir,
++ .iterate_shared = dcache_readdir,
++ .fsync = noop_fsync,
++};
++
+ static int spufs_gang_open(const struct path *path)
+ {
+ int ret;
+@@ -511,7 +556,7 @@ static int spufs_gang_open(const struct path *path)
+ return PTR_ERR(filp);
+ }
+
+- filp->f_op = &simple_dir_operations;
++ filp->f_op = &spufs_gang_fops;
+ fd_install(ret, filp);
+ return ret;
+ }
+@@ -526,10 +571,8 @@ static int spufs_create_gang(struct inode *inode,
+ ret = spufs_mkgang(inode, dentry, mode & 0777);
+ if (!ret) {
+ ret = spufs_gang_open(&path);
+- if (ret < 0) {
+- int err = simple_rmdir(inode, dentry);
+- WARN_ON(err);
+- }
++ if (ret < 0)
++ unuse_gang(dentry);
+ }
+ return ret;
+ }
+diff --git a/arch/powerpc/platforms/cell/spufs/spufs.h b/arch/powerpc/platforms/cell/spufs/spufs.h
+index 84958487f696a4..d33787c57c39a2 100644
+--- a/arch/powerpc/platforms/cell/spufs/spufs.h
++++ b/arch/powerpc/platforms/cell/spufs/spufs.h
+@@ -151,6 +151,8 @@ struct spu_gang {
+ int aff_flags;
+ struct spu *aff_ref_spu;
+ atomic_t aff_sched_count;
++
++ int alive;
+ };
+
+ /* Flag bits for spu_gang aff_flags */
+diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
+index 7612c52e9b1e35..5d63abc499ce7d 100644
+--- a/arch/riscv/Kconfig
++++ b/arch/riscv/Kconfig
+@@ -149,7 +149,7 @@ config RISCV
+ select HAVE_DYNAMIC_FTRACE_WITH_ARGS if HAVE_DYNAMIC_FTRACE
+ select HAVE_FTRACE_GRAPH_FUNC
+ select HAVE_FTRACE_MCOUNT_RECORD if !XIP_KERNEL
+- select HAVE_FUNCTION_GRAPH_TRACER
++ select HAVE_FUNCTION_GRAPH_TRACER if HAVE_DYNAMIC_FTRACE_WITH_ARGS
+ select HAVE_FUNCTION_GRAPH_FREGS
+ select HAVE_FUNCTION_TRACER if !XIP_KERNEL && !PREEMPTION
+ select HAVE_EBPF_JIT if MMU
+diff --git a/arch/riscv/errata/Makefile b/arch/riscv/errata/Makefile
+index f0da9d7b39c374..bc6c77ba837d2d 100644
+--- a/arch/riscv/errata/Makefile
++++ b/arch/riscv/errata/Makefile
+@@ -1,5 +1,9 @@
+ ifdef CONFIG_RELOCATABLE
+-KBUILD_CFLAGS += -fno-pie
++# We can't use PIC/PIE when handling early-boot errata parsing, as the kernel
++# doesn't have a GOT setup at that point. So instead just use medany: it's
++# usually position-independent, so it should be good enough for the errata
++# handling.
++KBUILD_CFLAGS += -fno-pie -mcmodel=medany
+ endif
+
+ ifdef CONFIG_RISCV_ALTERNATIVE_EARLY
+diff --git a/arch/riscv/include/asm/cpufeature.h b/arch/riscv/include/asm/cpufeature.h
+index 569140d6e6399e..19defdc2002d85 100644
+--- a/arch/riscv/include/asm/cpufeature.h
++++ b/arch/riscv/include/asm/cpufeature.h
+@@ -63,7 +63,7 @@ void __init riscv_user_isa_enable(void);
+ #define __RISCV_ISA_EXT_SUPERSET_VALIDATE(_name, _id, _sub_exts, _validate) \
+ _RISCV_ISA_EXT_DATA(_name, _id, _sub_exts, ARRAY_SIZE(_sub_exts), _validate)
+
+-bool check_unaligned_access_emulated_all_cpus(void);
++bool __init check_unaligned_access_emulated_all_cpus(void);
+ #if defined(CONFIG_RISCV_SCALAR_MISALIGNED)
+ void check_unaligned_access_emulated(struct work_struct *work __always_unused);
+ void unaligned_emulation_finish(void);
+@@ -76,7 +76,7 @@ static inline bool unaligned_ctl_available(void)
+ }
+ #endif
+
+-bool check_vector_unaligned_access_emulated_all_cpus(void);
++bool __init check_vector_unaligned_access_emulated_all_cpus(void);
+ #if defined(CONFIG_RISCV_VECTOR_MISALIGNED)
+ void check_vector_unaligned_access_emulated(struct work_struct *work __always_unused);
+ DECLARE_PER_CPU(long, vector_misaligned_access);
+diff --git a/arch/riscv/include/asm/ftrace.h b/arch/riscv/include/asm/ftrace.h
+index c4721ce44ca474..2636ee00ccf0fd 100644
+--- a/arch/riscv/include/asm/ftrace.h
++++ b/arch/riscv/include/asm/ftrace.h
+@@ -92,7 +92,7 @@ struct dyn_arch_ftrace {
+ #define make_call_t0(caller, callee, call) \
+ do { \
+ unsigned int offset = \
+- (unsigned long) callee - (unsigned long) caller; \
++ (unsigned long) (callee) - (unsigned long) (caller); \
+ call[0] = to_auipc_t0(offset); \
+ call[1] = to_jalr_t0(offset); \
+ } while (0)
+@@ -108,7 +108,7 @@ do { \
+ #define make_call_ra(caller, callee, call) \
+ do { \
+ unsigned int offset = \
+- (unsigned long) callee - (unsigned long) caller; \
++ (unsigned long) (callee) - (unsigned long) (caller); \
+ call[0] = to_auipc_ra(offset); \
+ call[1] = to_jalr_ra(offset); \
+ } while (0)
+diff --git a/arch/riscv/kernel/elf_kexec.c b/arch/riscv/kernel/elf_kexec.c
+index 3c37661801f95d..e783a72d051f43 100644
+--- a/arch/riscv/kernel/elf_kexec.c
++++ b/arch/riscv/kernel/elf_kexec.c
+@@ -468,6 +468,9 @@ int arch_kexec_apply_relocations_add(struct purgatory_info *pi,
+ case R_RISCV_ALIGN:
+ case R_RISCV_RELAX:
+ break;
++ case R_RISCV_64:
++ *(u64 *)loc = val;
++ break;
+ default:
+ pr_err("Unknown rela relocation: %d\n", r_type);
+ return -ENOEXEC;
+diff --git a/arch/riscv/kernel/mcount.S b/arch/riscv/kernel/mcount.S
+index 068168046e0efa..da4a4000e57eae 100644
+--- a/arch/riscv/kernel/mcount.S
++++ b/arch/riscv/kernel/mcount.S
+@@ -12,8 +12,6 @@
+ #include <asm/asm-offsets.h>
+ #include <asm/ftrace.h>
+
+-#define ABI_SIZE_ON_STACK 80
+-
+ .text
+
+ .macro SAVE_ABI_STATE
+@@ -28,12 +26,12 @@
+ * register if a0 was not saved.
+ */
+ .macro SAVE_RET_ABI_STATE
+- addi sp, sp, -ABI_SIZE_ON_STACK
+- REG_S ra, 1*SZREG(sp)
+- REG_S s0, 8*SZREG(sp)
+- REG_S a0, 10*SZREG(sp)
+- REG_S a1, 11*SZREG(sp)
+- addi s0, sp, ABI_SIZE_ON_STACK
++ addi sp, sp, -FREGS_SIZE_ON_STACK
++ REG_S ra, FREGS_RA(sp)
++ REG_S s0, FREGS_S0(sp)
++ REG_S a0, FREGS_A0(sp)
++ REG_S a1, FREGS_A1(sp)
++ addi s0, sp, FREGS_SIZE_ON_STACK
+ .endm
+
+ .macro RESTORE_ABI_STATE
+@@ -43,11 +41,11 @@
+ .endm
+
+ .macro RESTORE_RET_ABI_STATE
+- REG_L ra, 1*SZREG(sp)
+- REG_L s0, 8*SZREG(sp)
+- REG_L a0, 10*SZREG(sp)
+- REG_L a1, 11*SZREG(sp)
+- addi sp, sp, ABI_SIZE_ON_STACK
++ REG_L ra, FREGS_RA(sp)
++ REG_L s0, FREGS_S0(sp)
++ REG_L a0, FREGS_A0(sp)
++ REG_L a1, FREGS_A1(sp)
++ addi sp, sp, FREGS_SIZE_ON_STACK
+ .endm
+
+ SYM_TYPED_FUNC_START(ftrace_stub)
+diff --git a/arch/riscv/kernel/traps_misaligned.c b/arch/riscv/kernel/traps_misaligned.c
+index 7cc108aed74e8b..4354c87c0376fd 100644
+--- a/arch/riscv/kernel/traps_misaligned.c
++++ b/arch/riscv/kernel/traps_misaligned.c
+@@ -605,16 +605,10 @@ void check_vector_unaligned_access_emulated(struct work_struct *work __always_un
+ kernel_vector_end();
+ }
+
+-bool check_vector_unaligned_access_emulated_all_cpus(void)
++bool __init check_vector_unaligned_access_emulated_all_cpus(void)
+ {
+ int cpu;
+
+- if (!has_vector()) {
+- for_each_online_cpu(cpu)
+- per_cpu(vector_misaligned_access, cpu) = RISCV_HWPROBE_MISALIGNED_VECTOR_UNSUPPORTED;
+- return false;
+- }
+-
+ schedule_on_each_cpu(check_vector_unaligned_access_emulated);
+
+ for_each_online_cpu(cpu)
+@@ -625,7 +619,7 @@ bool check_vector_unaligned_access_emulated_all_cpus(void)
+ return true;
+ }
+ #else
+-bool check_vector_unaligned_access_emulated_all_cpus(void)
++bool __init check_vector_unaligned_access_emulated_all_cpus(void)
+ {
+ return false;
+ }
+@@ -659,7 +653,7 @@ void check_unaligned_access_emulated(struct work_struct *work __always_unused)
+ }
+ }
+
+-bool check_unaligned_access_emulated_all_cpus(void)
++bool __init check_unaligned_access_emulated_all_cpus(void)
+ {
+ int cpu;
+
+@@ -684,7 +678,7 @@ bool unaligned_ctl_available(void)
+ return unaligned_ctl;
+ }
+ #else
+-bool check_unaligned_access_emulated_all_cpus(void)
++bool __init check_unaligned_access_emulated_all_cpus(void)
+ {
+ return false;
+ }
+diff --git a/arch/riscv/kernel/unaligned_access_speed.c b/arch/riscv/kernel/unaligned_access_speed.c
+index 91f189cf16113c..a42115fbdeb897 100644
+--- a/arch/riscv/kernel/unaligned_access_speed.c
++++ b/arch/riscv/kernel/unaligned_access_speed.c
+@@ -121,7 +121,7 @@ static int check_unaligned_access(void *param)
+ return 0;
+ }
+
+-static void check_unaligned_access_nonboot_cpu(void *param)
++static void __init check_unaligned_access_nonboot_cpu(void *param)
+ {
+ unsigned int cpu = smp_processor_id();
+ struct page **pages = param;
+@@ -175,7 +175,7 @@ static void set_unaligned_access_static_branches(void)
+ modify_unaligned_access_branches(&fast_and_online, num_online_cpus());
+ }
+
+-static int lock_and_set_unaligned_access_static_branch(void)
++static int __init lock_and_set_unaligned_access_static_branch(void)
+ {
+ cpus_read_lock();
+ set_unaligned_access_static_branches();
+@@ -218,7 +218,7 @@ static int riscv_offline_cpu(unsigned int cpu)
+ }
+
+ /* Measure unaligned access speed on all CPUs present at boot in parallel. */
+-static int check_unaligned_access_speed_all_cpus(void)
++static void __init check_unaligned_access_speed_all_cpus(void)
+ {
+ unsigned int cpu;
+ unsigned int cpu_count = num_possible_cpus();
+@@ -226,7 +226,7 @@ static int check_unaligned_access_speed_all_cpus(void)
+
+ if (!bufs) {
+ pr_warn("Allocation failure, not measuring misaligned performance\n");
+- return 0;
++ return;
+ }
+
+ /*
+@@ -247,13 +247,6 @@ static int check_unaligned_access_speed_all_cpus(void)
+ /* Check core 0. */
+ smp_call_on_cpu(0, check_unaligned_access, bufs[0], true);
+
+- /*
+- * Setup hotplug callbacks for any new CPUs that come online or go
+- * offline.
+- */
+- cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN, "riscv:online",
+- riscv_online_cpu, riscv_offline_cpu);
+-
+ out:
+ for_each_cpu(cpu, cpu_online_mask) {
+ if (bufs[cpu])
+@@ -261,12 +254,10 @@ static int check_unaligned_access_speed_all_cpus(void)
+ }
+
+ kfree(bufs);
+- return 0;
+ }
+ #else /* CONFIG_RISCV_PROBE_UNALIGNED_ACCESS */
+-static int check_unaligned_access_speed_all_cpus(void)
++static void __init check_unaligned_access_speed_all_cpus(void)
+ {
+- return 0;
+ }
+ #endif
+
+@@ -349,7 +340,7 @@ static void check_vector_unaligned_access(struct work_struct *work __always_unus
+ pr_warn("cpu%d: rdtime lacks granularity needed to measure unaligned vector access speed\n",
+ cpu);
+
+- return;
++ goto free;
+ }
+
+ if (word_cycles < byte_cycles)
+@@ -363,57 +354,69 @@ static void check_vector_unaligned_access(struct work_struct *work __always_unus
+ (speed == RISCV_HWPROBE_MISALIGNED_VECTOR_FAST) ? "fast" : "slow");
+
+ per_cpu(vector_misaligned_access, cpu) = speed;
+-}
+-
+-static int riscv_online_cpu_vec(unsigned int cpu)
+-{
+- if (!has_vector())
+- return 0;
+
+- if (per_cpu(vector_misaligned_access, cpu) != RISCV_HWPROBE_MISALIGNED_VECTOR_UNSUPPORTED)
+- return 0;
+-
+- check_vector_unaligned_access_emulated(NULL);
+- check_vector_unaligned_access(NULL);
+- return 0;
++free:
++ __free_pages(page, MISALIGNED_BUFFER_ORDER);
+ }
+
+ /* Measure unaligned access speed on all CPUs present at boot in parallel. */
+-static int vec_check_unaligned_access_speed_all_cpus(void *unused __always_unused)
++static int __init vec_check_unaligned_access_speed_all_cpus(void *unused __always_unused)
+ {
+ schedule_on_each_cpu(check_vector_unaligned_access);
+
+- /*
+- * Setup hotplug callbacks for any new CPUs that come online or go
+- * offline.
+- */
+- cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN, "riscv:online",
+- riscv_online_cpu_vec, NULL);
+-
+ return 0;
+ }
+ #else /* CONFIG_RISCV_PROBE_VECTOR_UNALIGNED_ACCESS */
+-static int vec_check_unaligned_access_speed_all_cpus(void *unused __always_unused)
++static int __init vec_check_unaligned_access_speed_all_cpus(void *unused __always_unused)
+ {
+ return 0;
+ }
+ #endif
+
+-static int check_unaligned_access_all_cpus(void)
++static int riscv_online_cpu_vec(unsigned int cpu)
+ {
+- bool all_cpus_emulated, all_cpus_vec_unsupported;
++ if (!has_vector()) {
++ per_cpu(vector_misaligned_access, cpu) = RISCV_HWPROBE_MISALIGNED_VECTOR_UNSUPPORTED;
++ return 0;
++ }
++
++#ifdef CONFIG_RISCV_PROBE_VECTOR_UNALIGNED_ACCESS
++ if (per_cpu(vector_misaligned_access, cpu) != RISCV_HWPROBE_MISALIGNED_VECTOR_UNKNOWN)
++ return 0;
+
+- all_cpus_emulated = check_unaligned_access_emulated_all_cpus();
+- all_cpus_vec_unsupported = check_vector_unaligned_access_emulated_all_cpus();
++ check_vector_unaligned_access_emulated(NULL);
++ check_vector_unaligned_access(NULL);
++#endif
++
++ return 0;
++}
+
+- if (!all_cpus_vec_unsupported &&
+- IS_ENABLED(CONFIG_RISCV_PROBE_VECTOR_UNALIGNED_ACCESS)) {
++static int __init check_unaligned_access_all_cpus(void)
++{
++ int cpu;
++
++ if (!check_unaligned_access_emulated_all_cpus())
++ check_unaligned_access_speed_all_cpus();
++
++ if (!has_vector()) {
++ for_each_online_cpu(cpu)
++ per_cpu(vector_misaligned_access, cpu) = RISCV_HWPROBE_MISALIGNED_VECTOR_UNSUPPORTED;
++ } else if (!check_vector_unaligned_access_emulated_all_cpus() &&
++ IS_ENABLED(CONFIG_RISCV_PROBE_VECTOR_UNALIGNED_ACCESS)) {
+ kthread_run(vec_check_unaligned_access_speed_all_cpus,
+ NULL, "vec_check_unaligned_access_speed_all_cpus");
+ }
+
+- if (!all_cpus_emulated)
+- return check_unaligned_access_speed_all_cpus();
++ /*
++ * Setup hotplug callbacks for any new CPUs that come online or go
++ * offline.
++ */
++#ifdef CONFIG_RISCV_PROBE_UNALIGNED_ACCESS
++ cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN, "riscv:online",
++ riscv_online_cpu, riscv_offline_cpu);
++#endif
++ cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN, "riscv:online",
++ riscv_online_cpu_vec, NULL);
+
+ return 0;
+ }
+diff --git a/arch/riscv/kernel/vec-copy-unaligned.S b/arch/riscv/kernel/vec-copy-unaligned.S
+index d16f19f1b3b65f..7ce4de6f6e694a 100644
+--- a/arch/riscv/kernel/vec-copy-unaligned.S
++++ b/arch/riscv/kernel/vec-copy-unaligned.S
+@@ -11,7 +11,7 @@
+
+ #define WORD_SEW CONCATENATE(e, WORD_EEW)
+ #define VEC_L CONCATENATE(vle, WORD_EEW).v
+-#define VEC_S CONCATENATE(vle, WORD_EEW).v
++#define VEC_S CONCATENATE(vse, WORD_EEW).v
+
+ /* void __riscv_copy_vec_words_unaligned(void *, const void *, size_t) */
+ /* Performs a memcpy without aligning buffers, using word loads and stores. */
+diff --git a/arch/riscv/kvm/main.c b/arch/riscv/kvm/main.c
+index 1fa8be5ee5097f..4b24705dc63a99 100644
+--- a/arch/riscv/kvm/main.c
++++ b/arch/riscv/kvm/main.c
+@@ -172,8 +172,8 @@ module_init(riscv_kvm_init);
+
+ static void __exit riscv_kvm_exit(void)
+ {
+- kvm_riscv_teardown();
+-
+ kvm_exit();
++
++ kvm_riscv_teardown();
+ }
+ module_exit(riscv_kvm_exit);
+diff --git a/arch/riscv/kvm/vcpu_pmu.c b/arch/riscv/kvm/vcpu_pmu.c
+index 2707a51b082ca7..78ac3216a54ddb 100644
+--- a/arch/riscv/kvm/vcpu_pmu.c
++++ b/arch/riscv/kvm/vcpu_pmu.c
+@@ -666,6 +666,7 @@ int kvm_riscv_vcpu_pmu_ctr_cfg_match(struct kvm_vcpu *vcpu, unsigned long ctr_ba
+ .type = etype,
+ .size = sizeof(struct perf_event_attr),
+ .pinned = true,
++ .disabled = true,
+ /*
+ * It should never reach here if the platform doesn't support the sscofpmf
+ * extension as mode filtering won't work without it.
+diff --git a/arch/riscv/mm/hugetlbpage.c b/arch/riscv/mm/hugetlbpage.c
+index b4a78a4b35cff5..375dd96bb4a0d2 100644
+--- a/arch/riscv/mm/hugetlbpage.c
++++ b/arch/riscv/mm/hugetlbpage.c
+@@ -148,22 +148,25 @@ unsigned long hugetlb_mask_last_page(struct hstate *h)
+ static pte_t get_clear_contig(struct mm_struct *mm,
+ unsigned long addr,
+ pte_t *ptep,
+- unsigned long pte_num)
++ unsigned long ncontig)
+ {
+- pte_t orig_pte = ptep_get(ptep);
+- unsigned long i;
+-
+- for (i = 0; i < pte_num; i++, addr += PAGE_SIZE, ptep++) {
+- pte_t pte = ptep_get_and_clear(mm, addr, ptep);
+-
+- if (pte_dirty(pte))
+- orig_pte = pte_mkdirty(orig_pte);
+-
+- if (pte_young(pte))
+- orig_pte = pte_mkyoung(orig_pte);
++ pte_t pte, tmp_pte;
++ bool present;
++
++ pte = ptep_get_and_clear(mm, addr, ptep);
++ present = pte_present(pte);
++ while (--ncontig) {
++ ptep++;
++ addr += PAGE_SIZE;
++ tmp_pte = ptep_get_and_clear(mm, addr, ptep);
++ if (present) {
++ if (pte_dirty(tmp_pte))
++ pte = pte_mkdirty(pte);
++ if (pte_young(tmp_pte))
++ pte = pte_mkyoung(pte);
++ }
+ }
+-
+- return orig_pte;
++ return pte;
+ }
+
+ static pte_t get_clear_contig_flush(struct mm_struct *mm,
+@@ -212,6 +215,26 @@ static void clear_flush(struct mm_struct *mm,
+ flush_tlb_range(&vma, saddr, addr);
+ }
+
++static int num_contig_ptes_from_size(unsigned long sz, size_t *pgsize)
++{
++ unsigned long hugepage_shift;
++
++ if (sz >= PGDIR_SIZE)
++ hugepage_shift = PGDIR_SHIFT;
++ else if (sz >= P4D_SIZE)
++ hugepage_shift = P4D_SHIFT;
++ else if (sz >= PUD_SIZE)
++ hugepage_shift = PUD_SHIFT;
++ else if (sz >= PMD_SIZE)
++ hugepage_shift = PMD_SHIFT;
++ else
++ hugepage_shift = PAGE_SHIFT;
++
++ *pgsize = 1 << hugepage_shift;
++
++ return sz >> hugepage_shift;
++}
++
+ /*
+ * When dealing with NAPOT mappings, the privileged specification indicates that
+ * "if an update needs to be made, the OS generally should first mark all of the
+@@ -226,22 +249,10 @@ void set_huge_pte_at(struct mm_struct *mm,
+ pte_t pte,
+ unsigned long sz)
+ {
+- unsigned long hugepage_shift, pgsize;
++ size_t pgsize;
+ int i, pte_num;
+
+- if (sz >= PGDIR_SIZE)
+- hugepage_shift = PGDIR_SHIFT;
+- else if (sz >= P4D_SIZE)
+- hugepage_shift = P4D_SHIFT;
+- else if (sz >= PUD_SIZE)
+- hugepage_shift = PUD_SHIFT;
+- else if (sz >= PMD_SIZE)
+- hugepage_shift = PMD_SHIFT;
+- else
+- hugepage_shift = PAGE_SHIFT;
+-
+- pte_num = sz >> hugepage_shift;
+- pgsize = 1 << hugepage_shift;
++ pte_num = num_contig_ptes_from_size(sz, &pgsize);
+
+ if (!pte_present(pte)) {
+ for (i = 0; i < pte_num; i++, ptep++, addr += pgsize)
+@@ -295,13 +306,14 @@ pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
+ unsigned long addr,
+ pte_t *ptep, unsigned long sz)
+ {
++ size_t pgsize;
+ pte_t orig_pte = ptep_get(ptep);
+ int pte_num;
+
+ if (!pte_napot(orig_pte))
+ return ptep_get_and_clear(mm, addr, ptep);
+
+- pte_num = napot_pte_num(napot_cont_order(orig_pte));
++ pte_num = num_contig_ptes_from_size(sz, &pgsize);
+
+ return get_clear_contig(mm, addr, ptep, pte_num);
+ }
+@@ -351,6 +363,7 @@ void huge_pte_clear(struct mm_struct *mm,
+ pte_t *ptep,
+ unsigned long sz)
+ {
++ size_t pgsize;
+ pte_t pte = ptep_get(ptep);
+ int i, pte_num;
+
+@@ -359,8 +372,9 @@ void huge_pte_clear(struct mm_struct *mm,
+ return;
+ }
+
+- pte_num = napot_pte_num(napot_cont_order(pte));
+- for (i = 0; i < pte_num; i++, addr += PAGE_SIZE, ptep++)
++ pte_num = num_contig_ptes_from_size(sz, &pgsize);
++
++ for (i = 0; i < pte_num; i++, addr += pgsize, ptep++)
+ pte_clear(mm, addr, ptep);
+ }
+
+diff --git a/arch/riscv/purgatory/entry.S b/arch/riscv/purgatory/entry.S
+index 0e6ca6d5ae4b41..c5db2f072c341a 100644
+--- a/arch/riscv/purgatory/entry.S
++++ b/arch/riscv/purgatory/entry.S
+@@ -12,6 +12,7 @@
+
+ .text
+
++.align 2
+ SYM_CODE_START(purgatory_start)
+
+ lla sp, .Lstack
+diff --git a/arch/s390/include/asm/io.h b/arch/s390/include/asm/io.h
+index fc9933a743d692..251e0372ccbd0a 100644
+--- a/arch/s390/include/asm/io.h
++++ b/arch/s390/include/asm/io.h
+@@ -34,8 +34,6 @@ void unxlate_dev_mem_ptr(phys_addr_t phys, void *addr);
+
+ #define ioremap_wc(addr, size) \
+ ioremap_prot((addr), (size), pgprot_val(pgprot_writecombine(PAGE_KERNEL)))
+-#define ioremap_wt(addr, size) \
+- ioremap_prot((addr), (size), pgprot_val(pgprot_writethrough(PAGE_KERNEL)))
+
+ static inline void __iomem *ioport_map(unsigned long port, unsigned int nr)
+ {
+diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
+index 3ca5af4cfe432e..2467e521e1c0fb 100644
+--- a/arch/s390/include/asm/pgtable.h
++++ b/arch/s390/include/asm/pgtable.h
+@@ -1402,9 +1402,6 @@ void gmap_pmdp_idte_global(struct mm_struct *mm, unsigned long vmaddr);
+ #define pgprot_writecombine pgprot_writecombine
+ pgprot_t pgprot_writecombine(pgprot_t prot);
+
+-#define pgprot_writethrough pgprot_writethrough
+-pgprot_t pgprot_writethrough(pgprot_t prot);
+-
+ #define PFN_PTE_SHIFT PAGE_SHIFT
+
+ /*
+diff --git a/arch/s390/kernel/entry.S b/arch/s390/kernel/entry.S
+index 4cc3408c4dacff..88e09a650d2dfe 100644
+--- a/arch/s390/kernel/entry.S
++++ b/arch/s390/kernel/entry.S
+@@ -467,7 +467,7 @@ SYM_CODE_START(mcck_int_handler)
+ clgrjl %r9,%r14, 4f
+ larl %r14,.Lsie_leave
+ clgrjhe %r9,%r14, 4f
+- lg %r10,__LC_PCPU
++ lg %r10,__LC_PCPU(%r13)
+ oi __PCPU_FLAGS+7(%r10), _CIF_MCCK_GUEST
+ 4: BPENTER __SF_SIE_FLAGS(%r15),_TIF_ISOLATE_BP_GUEST
+ SIEEXIT __SF_SIE_CONTROL(%r15),%r13
+diff --git a/arch/s390/kernel/perf_pai_crypto.c b/arch/s390/kernel/perf_pai_crypto.c
+index 10725f5a6f0fd1..63875270941bc4 100644
+--- a/arch/s390/kernel/perf_pai_crypto.c
++++ b/arch/s390/kernel/perf_pai_crypto.c
+@@ -518,7 +518,8 @@ static void paicrypt_have_samples(void)
+ /* Called on schedule-in and schedule-out. No access to event structure,
+ * but for sampling only event CRYPTO_ALL is allowed.
+ */
+-static void paicrypt_sched_task(struct perf_event_pmu_context *pmu_ctx, bool sched_in)
++static void paicrypt_sched_task(struct perf_event_pmu_context *pmu_ctx,
++ struct task_struct *task, bool sched_in)
+ {
+ /* We started with a clean page on event installation. So read out
+ * results on schedule_out and if page was dirty, save old values.
+diff --git a/arch/s390/kernel/perf_pai_ext.c b/arch/s390/kernel/perf_pai_ext.c
+index a8f0bad99cf04f..fd14d5ebccbca0 100644
+--- a/arch/s390/kernel/perf_pai_ext.c
++++ b/arch/s390/kernel/perf_pai_ext.c
+@@ -542,7 +542,8 @@ static void paiext_have_samples(void)
+ /* Called on schedule-in and schedule-out. No access to event structure,
+ * but for sampling only event NNPA_ALL is allowed.
+ */
+-static void paiext_sched_task(struct perf_event_pmu_context *pmu_ctx, bool sched_in)
++static void paiext_sched_task(struct perf_event_pmu_context *pmu_ctx,
++ struct task_struct *task, bool sched_in)
+ {
+ /* We started with a clean page on event installation. So read out
+ * results on schedule_out and if page was dirty, save old values.
+diff --git a/arch/s390/mm/pgtable.c b/arch/s390/mm/pgtable.c
+index f05e62e037c284..a248764ad95860 100644
+--- a/arch/s390/mm/pgtable.c
++++ b/arch/s390/mm/pgtable.c
+@@ -34,16 +34,6 @@ pgprot_t pgprot_writecombine(pgprot_t prot)
+ }
+ EXPORT_SYMBOL_GPL(pgprot_writecombine);
+
+-pgprot_t pgprot_writethrough(pgprot_t prot)
+-{
+- /*
+- * mio_wb_bit_mask may be set on a different CPU, but it is only set
+- * once at init and only read afterwards.
+- */
+- return __pgprot(pgprot_val(prot) & ~mio_wb_bit_mask);
+-}
+-EXPORT_SYMBOL_GPL(pgprot_writethrough);
+-
+ static inline void ptep_ipte_local(struct mm_struct *mm, unsigned long addr,
+ pte_t *ptep, int nodat)
+ {
+diff --git a/arch/um/include/shared/os.h b/arch/um/include/shared/os.h
+index 5babad8c5f75ed..bc02767f063978 100644
+--- a/arch/um/include/shared/os.h
++++ b/arch/um/include/shared/os.h
+@@ -213,7 +213,6 @@ extern int os_protect_memory(void *addr, unsigned long len,
+ extern int os_unmap_memory(void *addr, int len);
+ extern int os_drop_memory(void *addr, int length);
+ extern int can_drop_memory(void);
+-extern int os_mincore(void *addr, unsigned long len);
+
+ void os_set_pdeathsig(void);
+
+diff --git a/arch/um/kernel/Makefile b/arch/um/kernel/Makefile
+index f8567b933ffaa9..4df1cd0d20179e 100644
+--- a/arch/um/kernel/Makefile
++++ b/arch/um/kernel/Makefile
+@@ -17,7 +17,7 @@ extra-y := vmlinux.lds
+ obj-y = config.o exec.o exitcode.o irq.o ksyms.o mem.o \
+ physmem.o process.o ptrace.o reboot.o sigio.o \
+ signal.o sysrq.o time.o tlb.o trap.o \
+- um_arch.o umid.o maccess.o kmsg_dump.o capflags.o skas/
++ um_arch.o umid.o kmsg_dump.o capflags.o skas/
+ obj-y += load_file.o
+
+ obj-$(CONFIG_BLK_DEV_INITRD) += initrd.o
+diff --git a/arch/um/kernel/maccess.c b/arch/um/kernel/maccess.c
+deleted file mode 100644
+index 8ccd56813f684f..00000000000000
+--- a/arch/um/kernel/maccess.c
++++ /dev/null
+@@ -1,19 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0-only
+-/*
+- * Copyright (C) 2013 Richard Weinberger <richrd@nod.at>
+- */
+-
+-#include <linux/uaccess.h>
+-#include <linux/kernel.h>
+-#include <os.h>
+-
+-bool copy_from_kernel_nofault_allowed(const void *src, size_t size)
+-{
+- void *psrc = (void *)rounddown((unsigned long)src, PAGE_SIZE);
+-
+- if ((unsigned long)src < PAGE_SIZE || size <= 0)
+- return false;
+- if (os_mincore(psrc, size + src - psrc) <= 0)
+- return false;
+- return true;
+-}
+diff --git a/arch/um/os-Linux/process.c b/arch/um/os-Linux/process.c
+index 9f086f9394202d..184566edeee997 100644
+--- a/arch/um/os-Linux/process.c
++++ b/arch/um/os-Linux/process.c
+@@ -142,57 +142,6 @@ int __init can_drop_memory(void)
+ return ok;
+ }
+
+-static int os_page_mincore(void *addr)
+-{
+- char vec[2];
+- int ret;
+-
+- ret = mincore(addr, UM_KERN_PAGE_SIZE, vec);
+- if (ret < 0) {
+- if (errno == ENOMEM || errno == EINVAL)
+- return 0;
+- else
+- return -errno;
+- }
+-
+- return vec[0] & 1;
+-}
+-
+-int os_mincore(void *addr, unsigned long len)
+-{
+- char *vec;
+- int ret, i;
+-
+- if (len <= UM_KERN_PAGE_SIZE)
+- return os_page_mincore(addr);
+-
+- vec = calloc(1, (len + UM_KERN_PAGE_SIZE - 1) / UM_KERN_PAGE_SIZE);
+- if (!vec)
+- return -ENOMEM;
+-
+- ret = mincore(addr, UM_KERN_PAGE_SIZE, vec);
+- if (ret < 0) {
+- if (errno == ENOMEM || errno == EINVAL)
+- ret = 0;
+- else
+- ret = -errno;
+-
+- goto out;
+- }
+-
+- for (i = 0; i < ((len + UM_KERN_PAGE_SIZE - 1) / UM_KERN_PAGE_SIZE); i++) {
+- if (!(vec[i] & 1)) {
+- ret = 0;
+- goto out;
+- }
+- }
+-
+- ret = 1;
+-out:
+- free(vec);
+- return ret;
+-}
+-
+ void init_new_thread_signals(void)
+ {
+ set_handler(SIGSEGV);
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index 0e27ebd7e36a9e..aaec6ebd6c4e01 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -232,7 +232,7 @@ config X86
+ select HAVE_SAMPLE_FTRACE_DIRECT_MULTI if X86_64
+ select HAVE_EBPF_JIT
+ select HAVE_EFFICIENT_UNALIGNED_ACCESS
+- select HAVE_EISA
++ select HAVE_EISA if X86_32
+ select HAVE_EXIT_THREAD
+ select HAVE_GUP_FAST
+ select HAVE_FENTRY if X86_64 || DYNAMIC_FTRACE
+@@ -902,6 +902,7 @@ config INTEL_TDX_GUEST
+ depends on X86_64 && CPU_SUP_INTEL
+ depends on X86_X2APIC
+ depends on EFI_STUB
++ depends on PARAVIRT
+ select ARCH_HAS_CC_PLATFORM
+ select X86_MEM_ENCRYPT
+ select X86_MCE
+diff --git a/arch/x86/Kconfig.cpu b/arch/x86/Kconfig.cpu
+index 2a7279d80460a8..42e6a40876ea4c 100644
+--- a/arch/x86/Kconfig.cpu
++++ b/arch/x86/Kconfig.cpu
+@@ -368,7 +368,7 @@ config X86_HAVE_PAE
+
+ config X86_CMPXCHG64
+ def_bool y
+- depends on X86_HAVE_PAE || M586TSC || M586MMX || MK6 || MK7
++ depends on X86_HAVE_PAE || M586TSC || M586MMX || MK6 || MK7 || MGEODEGX1 || MGEODE_LX
+
+ # this should be set for all -march=.. options where the compiler
+ # generates cmov.
+diff --git a/arch/x86/Makefile.um b/arch/x86/Makefile.um
+index a46b1397ad01c2..c86cbd9cbba38f 100644
+--- a/arch/x86/Makefile.um
++++ b/arch/x86/Makefile.um
+@@ -7,12 +7,13 @@ core-y += arch/x86/crypto/
+ # GCC versions < 11. See:
+ # https://gcc.gnu.org/bugzilla/show_bug.cgi?id=99652
+ #
+-ifeq ($(CONFIG_CC_IS_CLANG),y)
+-KBUILD_CFLAGS += -mno-sse -mno-mmx -mno-sse2 -mno-3dnow -mno-avx
+-KBUILD_RUSTFLAGS += --target=$(objtree)/scripts/target.json
++ifeq ($(call gcc-min-version, 110000)$(CONFIG_CC_IS_CLANG),y)
++KBUILD_CFLAGS += -mno-sse -mno-mmx -mno-sse2 -mno-3dnow -mno-avx
+ KBUILD_RUSTFLAGS += -Ctarget-feature=-sse,-sse2,-sse3,-ssse3,-sse4.1,-sse4.2,-avx,-avx2
+ endif
+
++KBUILD_RUSTFLAGS += --target=$(objtree)/scripts/target.json
++
+ ifeq ($(CONFIG_X86_32),y)
+ START := 0x8048000
+
+diff --git a/arch/x86/coco/tdx/tdx.c b/arch/x86/coco/tdx/tdx.c
+index 32809a06dab46d..6aad910d119da0 100644
+--- a/arch/x86/coco/tdx/tdx.c
++++ b/arch/x86/coco/tdx/tdx.c
+@@ -14,6 +14,7 @@
+ #include <asm/ia32.h>
+ #include <asm/insn.h>
+ #include <asm/insn-eval.h>
++#include <asm/paravirt_types.h>
+ #include <asm/pgtable.h>
+ #include <asm/set_memory.h>
+ #include <asm/traps.h>
+@@ -398,7 +399,7 @@ static int handle_halt(struct ve_info *ve)
+ return ve_instr_len(ve);
+ }
+
+-void __cpuidle tdx_safe_halt(void)
++void __cpuidle tdx_halt(void)
+ {
+ const bool irq_disabled = false;
+
+@@ -409,6 +410,16 @@ void __cpuidle tdx_safe_halt(void)
+ WARN_ONCE(1, "HLT instruction emulation failed\n");
+ }
+
++static void __cpuidle tdx_safe_halt(void)
++{
++ tdx_halt();
++ /*
++ * "__cpuidle" section doesn't support instrumentation, so stick
++ * with raw_* variant that avoids tracing hooks.
++ */
++ raw_local_irq_enable();
++}
++
+ static int read_msr(struct pt_regs *regs, struct ve_info *ve)
+ {
+ struct tdx_module_args args = {
+@@ -1109,6 +1120,19 @@ void __init tdx_early_init(void)
+ x86_platform.guest.enc_kexec_begin = tdx_kexec_begin;
+ x86_platform.guest.enc_kexec_finish = tdx_kexec_finish;
+
++ /*
++ * Avoid "sti;hlt" execution in TDX guests as HLT induces a #VE that
++ * will enable interrupts before HLT TDCALL invocation if executed
++ * in STI-shadow, possibly resulting in missed wakeup events.
++ *
++ * Modify all possible HLT execution paths to use TDX specific routines
++ * that directly execute TDCALL and toggle the interrupt state as
++ * needed after TDCALL completion. This also reduces HLT related #VEs
++ * in addition to having a reliable halt logic execution.
++ */
++ pv_ops.irq.safe_halt = tdx_safe_halt;
++ pv_ops.irq.halt = tdx_halt;
++
+ /*
+ * TDX intercepts the RDMSR to read the X2APIC ID in the parallel
+ * bringup low level code. That raises #VE which cannot be handled
+diff --git a/arch/x86/entry/calling.h b/arch/x86/entry/calling.h
+index ea81770629eea6..626a81c6015bda 100644
+--- a/arch/x86/entry/calling.h
++++ b/arch/x86/entry/calling.h
+@@ -70,6 +70,8 @@ For 32-bit we have the following conventions - kernel is built with
+ pushq %rsi /* pt_regs->si */
+ movq 8(%rsp), %rsi /* temporarily store the return address in %rsi */
+ movq %rdi, 8(%rsp) /* pt_regs->di (overwriting original return address) */
++ /* We just clobbered the return address - use the IRET frame for unwinding: */
++ UNWIND_HINT_IRET_REGS offset=3*8
+ .else
+ pushq %rdi /* pt_regs->di */
+ pushq %rsi /* pt_regs->si */
+diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
+index 14db5b85114c1c..3514bf2978eed3 100644
+--- a/arch/x86/entry/common.c
++++ b/arch/x86/entry/common.c
+@@ -142,7 +142,7 @@ static __always_inline int syscall_32_enter(struct pt_regs *regs)
+ #ifdef CONFIG_IA32_EMULATION
+ bool __ia32_enabled __ro_after_init = !IS_ENABLED(CONFIG_IA32_EMULATION_DEFAULT_DISABLED);
+
+-static int ia32_emulation_override_cmdline(char *arg)
++static int __init ia32_emulation_override_cmdline(char *arg)
+ {
+ return kstrtobool(arg, &__ia32_enabled);
+ }
+diff --git a/arch/x86/entry/vdso/vdso-layout.lds.S b/arch/x86/entry/vdso/vdso-layout.lds.S
+index 872947c1004c35..918606ff92a988 100644
+--- a/arch/x86/entry/vdso/vdso-layout.lds.S
++++ b/arch/x86/entry/vdso/vdso-layout.lds.S
+@@ -24,7 +24,7 @@ SECTIONS
+
+ timens_page = vvar_start + PAGE_SIZE;
+
+- vclock_pages = vvar_start + VDSO_NR_VCLOCK_PAGES * PAGE_SIZE;
++ vclock_pages = VDSO_VCLOCK_PAGES_START(vvar_start);
+ pvclock_page = vclock_pages + VDSO_PAGE_PVCLOCK_OFFSET * PAGE_SIZE;
+ hvclock_page = vclock_pages + VDSO_PAGE_HVCLOCK_OFFSET * PAGE_SIZE;
+
+diff --git a/arch/x86/entry/vdso/vma.c b/arch/x86/entry/vdso/vma.c
+index 39e6efc1a9cab7..aa62949335ecec 100644
+--- a/arch/x86/entry/vdso/vma.c
++++ b/arch/x86/entry/vdso/vma.c
+@@ -290,7 +290,7 @@ static int map_vdso(const struct vdso_image *image, unsigned long addr)
+ }
+
+ vma = _install_special_mapping(mm,
+- addr + (__VVAR_PAGES - VDSO_NR_VCLOCK_PAGES) * PAGE_SIZE,
++ VDSO_VCLOCK_PAGES_START(addr),
+ VDSO_NR_VCLOCK_PAGES * PAGE_SIZE,
+ VM_READ|VM_MAYREAD|VM_IO|VM_DONTDUMP|
+ VM_PFNMAP,
+diff --git a/arch/x86/events/amd/brs.c b/arch/x86/events/amd/brs.c
+index 780acd3dff22a2..ec34274633824e 100644
+--- a/arch/x86/events/amd/brs.c
++++ b/arch/x86/events/amd/brs.c
+@@ -381,7 +381,8 @@ static void amd_brs_poison_buffer(void)
+ * On ctxswin, sched_in = true, called after the PMU has started
+ * On ctxswout, sched_in = false, called before the PMU is stopped
+ */
+-void amd_pmu_brs_sched_task(struct perf_event_pmu_context *pmu_ctx, bool sched_in)
++void amd_pmu_brs_sched_task(struct perf_event_pmu_context *pmu_ctx,
++ struct task_struct *task, bool sched_in)
+ {
+ struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
+
+diff --git a/arch/x86/events/amd/lbr.c b/arch/x86/events/amd/lbr.c
+index 19c7b76e21bcb4..c06ccca96851f3 100644
+--- a/arch/x86/events/amd/lbr.c
++++ b/arch/x86/events/amd/lbr.c
+@@ -371,7 +371,8 @@ void amd_pmu_lbr_del(struct perf_event *event)
+ perf_sched_cb_dec(event->pmu);
+ }
+
+-void amd_pmu_lbr_sched_task(struct perf_event_pmu_context *pmu_ctx, bool sched_in)
++void amd_pmu_lbr_sched_task(struct perf_event_pmu_context *pmu_ctx,
++ struct task_struct *task, bool sched_in)
+ {
+ struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
+
+diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
+index 2092d615333da5..3a27c50080f4fe 100644
+--- a/arch/x86/events/core.c
++++ b/arch/x86/events/core.c
+@@ -2625,9 +2625,10 @@ static const struct attribute_group *x86_pmu_attr_groups[] = {
+ NULL,
+ };
+
+-static void x86_pmu_sched_task(struct perf_event_pmu_context *pmu_ctx, bool sched_in)
++static void x86_pmu_sched_task(struct perf_event_pmu_context *pmu_ctx,
++ struct task_struct *task, bool sched_in)
+ {
+- static_call_cond(x86_pmu_sched_task)(pmu_ctx, sched_in);
++ static_call_cond(x86_pmu_sched_task)(pmu_ctx, task, sched_in);
+ }
+
+ static void x86_pmu_swap_task_ctx(struct perf_event_pmu_context *prev_epc,
+diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
+index cdb19e3ba3aa3d..9e8de416d1f023 100644
+--- a/arch/x86/events/intel/core.c
++++ b/arch/x86/events/intel/core.c
+@@ -2779,28 +2779,33 @@ static u64 icl_update_topdown_event(struct perf_event *event)
+
+ DEFINE_STATIC_CALL(intel_pmu_update_topdown_event, x86_perf_event_update);
+
+-static void intel_pmu_read_topdown_event(struct perf_event *event)
++static void intel_pmu_read_event(struct perf_event *event)
+ {
+- struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
++ if (event->hw.flags & (PERF_X86_EVENT_AUTO_RELOAD | PERF_X86_EVENT_TOPDOWN)) {
++ struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
++ bool pmu_enabled = cpuc->enabled;
+
+- /* Only need to call update_topdown_event() once for group read. */
+- if ((cpuc->txn_flags & PERF_PMU_TXN_READ) &&
+- !is_slots_event(event))
+- return;
++ /* Only need to call update_topdown_event() once for group read. */
++ if (is_metric_event(event) && (cpuc->txn_flags & PERF_PMU_TXN_READ))
++ return;
+
+- perf_pmu_disable(event->pmu);
+- static_call(intel_pmu_update_topdown_event)(event);
+- perf_pmu_enable(event->pmu);
+-}
++ cpuc->enabled = 0;
++ if (pmu_enabled)
++ intel_pmu_disable_all();
+
+-static void intel_pmu_read_event(struct perf_event *event)
+-{
+- if (event->hw.flags & PERF_X86_EVENT_AUTO_RELOAD)
+- intel_pmu_auto_reload_read(event);
+- else if (is_topdown_count(event))
+- intel_pmu_read_topdown_event(event);
+- else
+- x86_perf_event_update(event);
++ if (is_topdown_event(event))
++ static_call(intel_pmu_update_topdown_event)(event);
++ else
++ intel_pmu_drain_pebs_buffer();
++
++ cpuc->enabled = pmu_enabled;
++ if (pmu_enabled)
++ intel_pmu_enable_all(0);
++
++ return;
++ }
++
++ x86_perf_event_update(event);
+ }
+
+ static void intel_pmu_enable_fixed(struct perf_event *event)
+@@ -3070,7 +3075,7 @@ static int handle_pmi_common(struct pt_regs *regs, u64 status)
+
+ handled++;
+ x86_pmu_handle_guest_pebs(regs, &data);
+- x86_pmu.drain_pebs(regs, &data);
++ static_call(x86_pmu_drain_pebs)(regs, &data);
+ status &= intel_ctrl | GLOBAL_STATUS_TRACE_TOPAPMI;
+
+ /*
+@@ -5244,10 +5249,10 @@ static void intel_pmu_cpu_dead(int cpu)
+ }
+
+ static void intel_pmu_sched_task(struct perf_event_pmu_context *pmu_ctx,
+- bool sched_in)
++ struct task_struct *task, bool sched_in)
+ {
+ intel_pmu_pebs_sched_task(pmu_ctx, sched_in);
+- intel_pmu_lbr_sched_task(pmu_ctx, sched_in);
++ intel_pmu_lbr_sched_task(pmu_ctx, task, sched_in);
+ }
+
+ static void intel_pmu_swap_task_ctx(struct perf_event_pmu_context *prev_epc,
+diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
+index f122882ef278f8..33f4bb22fc0ee5 100644
+--- a/arch/x86/events/intel/ds.c
++++ b/arch/x86/events/intel/ds.c
+@@ -953,11 +953,11 @@ int intel_pmu_drain_bts_buffer(void)
+ return 1;
+ }
+
+-static inline void intel_pmu_drain_pebs_buffer(void)
++void intel_pmu_drain_pebs_buffer(void)
+ {
+ struct perf_sample_data data;
+
+- x86_pmu.drain_pebs(NULL, &data);
++ static_call(x86_pmu_drain_pebs)(NULL, &data);
+ }
+
+ /*
+@@ -2094,15 +2094,6 @@ get_next_pebs_record_by_bit(void *base, void *top, int bit)
+ return NULL;
+ }
+
+-void intel_pmu_auto_reload_read(struct perf_event *event)
+-{
+- WARN_ON(!(event->hw.flags & PERF_X86_EVENT_AUTO_RELOAD));
+-
+- perf_pmu_disable(event->pmu);
+- intel_pmu_drain_pebs_buffer();
+- perf_pmu_enable(event->pmu);
+-}
+-
+ /*
+ * Special variant of intel_pmu_save_and_restart() for auto-reload.
+ */
+diff --git a/arch/x86/events/intel/lbr.c b/arch/x86/events/intel/lbr.c
+index dc641b50814e21..24719adbcd7ead 100644
+--- a/arch/x86/events/intel/lbr.c
++++ b/arch/x86/events/intel/lbr.c
+@@ -422,11 +422,17 @@ static __always_inline bool lbr_is_reset_in_cstate(void *ctx)
+ return !rdlbr_from(((struct x86_perf_task_context *)ctx)->tos, NULL);
+ }
+
++static inline bool has_lbr_callstack_users(void *ctx)
++{
++ return task_context_opt(ctx)->lbr_callstack_users ||
++ x86_pmu.lbr_callstack_users;
++}
++
+ static void __intel_pmu_lbr_restore(void *ctx)
+ {
+ struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
+
+- if (task_context_opt(ctx)->lbr_callstack_users == 0 ||
++ if (!has_lbr_callstack_users(ctx) ||
+ task_context_opt(ctx)->lbr_stack_state == LBR_NONE) {
+ intel_pmu_lbr_reset();
+ return;
+@@ -503,7 +509,7 @@ static void __intel_pmu_lbr_save(void *ctx)
+ {
+ struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
+
+- if (task_context_opt(ctx)->lbr_callstack_users == 0) {
++ if (!has_lbr_callstack_users(ctx)) {
+ task_context_opt(ctx)->lbr_stack_state = LBR_NONE;
+ return;
+ }
+@@ -539,9 +545,11 @@ void intel_pmu_lbr_swap_task_ctx(struct perf_event_pmu_context *prev_epc,
+ task_context_opt(next_ctx_data)->lbr_callstack_users);
+ }
+
+-void intel_pmu_lbr_sched_task(struct perf_event_pmu_context *pmu_ctx, bool sched_in)
++void intel_pmu_lbr_sched_task(struct perf_event_pmu_context *pmu_ctx,
++ struct task_struct *task, bool sched_in)
+ {
+ struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
++ struct perf_ctx_data *ctx_data;
+ void *task_ctx;
+
+ if (!cpuc->lbr_users)
+@@ -552,14 +560,18 @@ void intel_pmu_lbr_sched_task(struct perf_event_pmu_context *pmu_ctx, bool sched
+ * the task was scheduled out, restore the stack. Otherwise flush
+ * the LBR stack.
+ */
+- task_ctx = pmu_ctx ? pmu_ctx->task_ctx_data : NULL;
++ rcu_read_lock();
++ ctx_data = rcu_dereference(task->perf_ctx_data);
++ task_ctx = ctx_data ? ctx_data->data : NULL;
+ if (task_ctx) {
+ if (sched_in)
+ __intel_pmu_lbr_restore(task_ctx);
+ else
+ __intel_pmu_lbr_save(task_ctx);
++ rcu_read_unlock();
+ return;
+ }
++ rcu_read_unlock();
+
+ /*
+ * Since a context switch can flip the address space and LBR entries
+@@ -588,9 +600,19 @@ void intel_pmu_lbr_add(struct perf_event *event)
+
+ cpuc->br_sel = event->hw.branch_reg.reg;
+
+- if (branch_user_callstack(cpuc->br_sel) && event->pmu_ctx->task_ctx_data)
+- task_context_opt(event->pmu_ctx->task_ctx_data)->lbr_callstack_users++;
++ if (branch_user_callstack(cpuc->br_sel)) {
++ if (event->attach_state & PERF_ATTACH_TASK) {
++ struct task_struct *task = event->hw.target;
++ struct perf_ctx_data *ctx_data;
+
++ rcu_read_lock();
++ ctx_data = rcu_dereference(task->perf_ctx_data);
++ if (ctx_data)
++ task_context_opt(ctx_data->data)->lbr_callstack_users++;
++ rcu_read_unlock();
++ } else
++ x86_pmu.lbr_callstack_users++;
++ }
+ /*
+ * Request pmu::sched_task() callback, which will fire inside the
+ * regular perf event scheduling, so that call will:
+@@ -664,9 +686,19 @@ void intel_pmu_lbr_del(struct perf_event *event)
+ if (!x86_pmu.lbr_nr)
+ return;
+
+- if (branch_user_callstack(cpuc->br_sel) &&
+- event->pmu_ctx->task_ctx_data)
+- task_context_opt(event->pmu_ctx->task_ctx_data)->lbr_callstack_users--;
++ if (branch_user_callstack(cpuc->br_sel)) {
++ if (event->attach_state & PERF_ATTACH_TASK) {
++ struct task_struct *task = event->hw.target;
++ struct perf_ctx_data *ctx_data;
++
++ rcu_read_lock();
++ ctx_data = rcu_dereference(task->perf_ctx_data);
++ if (ctx_data)
++ task_context_opt(ctx_data->data)->lbr_callstack_users--;
++ rcu_read_unlock();
++ } else
++ x86_pmu.lbr_callstack_users--;
++ }
+
+ if (event->hw.flags & PERF_X86_EVENT_LBR_SELECT)
+ cpuc->lbr_select = 0;
+diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
+index 31c2771545a6c6..1dfa78a30266c9 100644
+--- a/arch/x86/events/perf_event.h
++++ b/arch/x86/events/perf_event.h
+@@ -869,7 +869,7 @@ struct x86_pmu {
+
+ void (*check_microcode)(void);
+ void (*sched_task)(struct perf_event_pmu_context *pmu_ctx,
+- bool sched_in);
++ struct task_struct *task, bool sched_in);
+
+ /*
+ * Intel Arch Perfmon v2+
+@@ -914,6 +914,7 @@ struct x86_pmu {
+ const int *lbr_sel_map; /* lbr_select mappings */
+ int *lbr_ctl_map; /* LBR_CTL mappings */
+ };
++ u64 lbr_callstack_users; /* lbr callstack system wide users */
+ bool lbr_double_abort; /* duplicated lbr aborts */
+ bool lbr_pt_coexist; /* (LBR|BTS) may coexist with PT */
+
+@@ -1107,6 +1108,7 @@ extern struct x86_pmu x86_pmu __read_mostly;
+
+ DECLARE_STATIC_CALL(x86_pmu_set_period, *x86_pmu.set_period);
+ DECLARE_STATIC_CALL(x86_pmu_update, *x86_pmu.update);
++DECLARE_STATIC_CALL(x86_pmu_drain_pebs, *x86_pmu.drain_pebs);
+
+ static __always_inline struct x86_perf_task_context_opt *task_context_opt(void *ctx)
+ {
+@@ -1394,7 +1396,8 @@ void amd_pmu_lbr_reset(void);
+ void amd_pmu_lbr_read(void);
+ void amd_pmu_lbr_add(struct perf_event *event);
+ void amd_pmu_lbr_del(struct perf_event *event);
+-void amd_pmu_lbr_sched_task(struct perf_event_pmu_context *pmu_ctx, bool sched_in);
++void amd_pmu_lbr_sched_task(struct perf_event_pmu_context *pmu_ctx,
++ struct task_struct *task, bool sched_in);
+ void amd_pmu_lbr_enable_all(void);
+ void amd_pmu_lbr_disable_all(void);
+ int amd_pmu_lbr_hw_config(struct perf_event *event);
+@@ -1448,7 +1451,8 @@ static inline void amd_pmu_brs_del(struct perf_event *event)
+ perf_sched_cb_dec(event->pmu);
+ }
+
+-void amd_pmu_brs_sched_task(struct perf_event_pmu_context *pmu_ctx, bool sched_in);
++void amd_pmu_brs_sched_task(struct perf_event_pmu_context *pmu_ctx,
++ struct task_struct *task, bool sched_in);
+ #else
+ static inline int amd_brs_init(void)
+ {
+@@ -1473,7 +1477,8 @@ static inline void amd_pmu_brs_del(struct perf_event *event)
+ {
+ }
+
+-static inline void amd_pmu_brs_sched_task(struct perf_event_pmu_context *pmu_ctx, bool sched_in)
++static inline void amd_pmu_brs_sched_task(struct perf_event_pmu_context *pmu_ctx,
++ struct task_struct *task, bool sched_in)
+ {
+ }
+
+@@ -1643,7 +1648,7 @@ void intel_pmu_pebs_disable_all(void);
+
+ void intel_pmu_pebs_sched_task(struct perf_event_pmu_context *pmu_ctx, bool sched_in);
+
+-void intel_pmu_auto_reload_read(struct perf_event *event);
++void intel_pmu_drain_pebs_buffer(void);
+
+ void intel_pmu_store_pebs_lbrs(struct lbr_entry *lbr);
+
+@@ -1656,7 +1661,8 @@ void intel_pmu_lbr_save_brstack(struct perf_sample_data *data,
+ void intel_pmu_lbr_swap_task_ctx(struct perf_event_pmu_context *prev_epc,
+ struct perf_event_pmu_context *next_epc);
+
+-void intel_pmu_lbr_sched_task(struct perf_event_pmu_context *pmu_ctx, bool sched_in);
++void intel_pmu_lbr_sched_task(struct perf_event_pmu_context *pmu_ctx,
++ struct task_struct *task, bool sched_in);
+
+ u64 lbr_from_signext_quirk_wr(u64 val);
+
+diff --git a/arch/x86/hyperv/ivm.c b/arch/x86/hyperv/ivm.c
+index ec7880271cf98a..77bf05f06b9efa 100644
+--- a/arch/x86/hyperv/ivm.c
++++ b/arch/x86/hyperv/ivm.c
+@@ -338,7 +338,7 @@ int hv_snp_boot_ap(u32 cpu, unsigned long start_ip)
+ vmsa->sev_features = sev_status >> 2;
+
+ ret = snp_set_vmsa(vmsa, true);
+- if (!ret) {
++ if (ret) {
+ pr_err("RMPADJUST(%llx) failed: %llx\n", (u64)vmsa, ret);
+ free_page((u64)vmsa);
+ return ret;
+diff --git a/arch/x86/include/asm/irqflags.h b/arch/x86/include/asm/irqflags.h
+index cf7fc2b8e3ce1f..1c2db11a2c3cb9 100644
+--- a/arch/x86/include/asm/irqflags.h
++++ b/arch/x86/include/asm/irqflags.h
+@@ -76,6 +76,28 @@ static __always_inline void native_local_irq_restore(unsigned long flags)
+
+ #endif
+
++#ifndef CONFIG_PARAVIRT
++#ifndef __ASSEMBLY__
++/*
++ * Used in the idle loop; sti takes one instruction cycle
++ * to complete:
++ */
++static __always_inline void arch_safe_halt(void)
++{
++ native_safe_halt();
++}
++
++/*
++ * Used when interrupts are already enabled or to
++ * shutdown the processor:
++ */
++static __always_inline void halt(void)
++{
++ native_halt();
++}
++#endif /* __ASSEMBLY__ */
++#endif /* CONFIG_PARAVIRT */
++
+ #ifdef CONFIG_PARAVIRT_XXL
+ #include <asm/paravirt.h>
+ #else
+@@ -97,24 +119,6 @@ static __always_inline void arch_local_irq_enable(void)
+ native_irq_enable();
+ }
+
+-/*
+- * Used in the idle loop; sti takes one instruction cycle
+- * to complete:
+- */
+-static __always_inline void arch_safe_halt(void)
+-{
+- native_safe_halt();
+-}
+-
+-/*
+- * Used when interrupts are already enabled or to
+- * shutdown the processor:
+- */
+-static __always_inline void halt(void)
+-{
+- native_halt();
+-}
+-
+ /*
+ * For spinlocks, etc:
+ */
+diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
+index 041aff51eb50fa..29e7331a0c98d4 100644
+--- a/arch/x86/include/asm/paravirt.h
++++ b/arch/x86/include/asm/paravirt.h
+@@ -107,6 +107,16 @@ static inline void notify_page_enc_status_changed(unsigned long pfn,
+ PVOP_VCALL3(mmu.notify_page_enc_status_changed, pfn, npages, enc);
+ }
+
++static __always_inline void arch_safe_halt(void)
++{
++ PVOP_VCALL0(irq.safe_halt);
++}
++
++static inline void halt(void)
++{
++ PVOP_VCALL0(irq.halt);
++}
++
+ #ifdef CONFIG_PARAVIRT_XXL
+ static inline void load_sp0(unsigned long sp0)
+ {
+@@ -170,16 +180,6 @@ static inline void __write_cr4(unsigned long x)
+ PVOP_VCALL1(cpu.write_cr4, x);
+ }
+
+-static __always_inline void arch_safe_halt(void)
+-{
+- PVOP_VCALL0(irq.safe_halt);
+-}
+-
+-static inline void halt(void)
+-{
+- PVOP_VCALL0(irq.halt);
+-}
+-
+ static inline u64 paravirt_read_msr(unsigned msr)
+ {
+ return PVOP_CALL1(u64, cpu.read_msr, msr);
+diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
+index fea56b04f43650..abccfccc2e3fa5 100644
+--- a/arch/x86/include/asm/paravirt_types.h
++++ b/arch/x86/include/asm/paravirt_types.h
+@@ -120,10 +120,9 @@ struct pv_irq_ops {
+ struct paravirt_callee_save save_fl;
+ struct paravirt_callee_save irq_disable;
+ struct paravirt_callee_save irq_enable;
+-
++#endif
+ void (*safe_halt)(void);
+ void (*halt)(void);
+-#endif
+ } __no_randomize_layout;
+
+ struct pv_mmu_ops {
+diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h
+index b4b16dafd55ede..40f9a97371a906 100644
+--- a/arch/x86/include/asm/tdx.h
++++ b/arch/x86/include/asm/tdx.h
+@@ -58,7 +58,7 @@ void tdx_get_ve_info(struct ve_info *ve);
+
+ bool tdx_handle_virt_exception(struct pt_regs *regs, struct ve_info *ve);
+
+-void tdx_safe_halt(void);
++void tdx_halt(void);
+
+ bool tdx_early_handle_ve(struct pt_regs *regs);
+
+@@ -72,7 +72,7 @@ void __init tdx_dump_td_ctls(u64 td_ctls);
+ #else
+
+ static inline void tdx_early_init(void) { };
+-static inline void tdx_safe_halt(void) { };
++static inline void tdx_halt(void) { };
+
+ static inline bool tdx_early_handle_ve(struct pt_regs *regs) { return false; }
+
+diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
+index 02fc2aa06e9e0e..3da64513974853 100644
+--- a/arch/x86/include/asm/tlbflush.h
++++ b/arch/x86/include/asm/tlbflush.h
+@@ -242,7 +242,7 @@ void flush_tlb_multi(const struct cpumask *cpumask,
+ flush_tlb_mm_range((vma)->vm_mm, start, end, \
+ ((vma)->vm_flags & VM_HUGETLB) \
+ ? huge_page_shift(hstate_vma(vma)) \
+- : PAGE_SHIFT, false)
++ : PAGE_SHIFT, true)
+
+ extern void flush_tlb_all(void);
+ extern void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
+diff --git a/arch/x86/include/asm/vdso/vsyscall.h b/arch/x86/include/asm/vdso/vsyscall.h
+index 37b4a70559a822..88b31d4cdfaf33 100644
+--- a/arch/x86/include/asm/vdso/vsyscall.h
++++ b/arch/x86/include/asm/vdso/vsyscall.h
+@@ -6,6 +6,7 @@
+ #define __VVAR_PAGES 4
+
+ #define VDSO_NR_VCLOCK_PAGES 2
++#define VDSO_VCLOCK_PAGES_START(_b) ((_b) + (__VVAR_PAGES - VDSO_NR_VCLOCK_PAGES) * PAGE_SIZE)
+ #define VDSO_PAGE_PVCLOCK_OFFSET 0
+ #define VDSO_PAGE_HVCLOCK_OFFSET 1
+
+diff --git a/arch/x86/kernel/cpu/bus_lock.c b/arch/x86/kernel/cpu/bus_lock.c
+index 6cba85c79d42d6..97222efb4d2a6c 100644
+--- a/arch/x86/kernel/cpu/bus_lock.c
++++ b/arch/x86/kernel/cpu/bus_lock.c
+@@ -192,7 +192,13 @@ static void __split_lock_reenable(struct work_struct *work)
+ {
+ sld_update_msr(true);
+ }
+-static DECLARE_DELAYED_WORK(sl_reenable, __split_lock_reenable);
++/*
++ * In order for each CPU to schedule its delayed work independently of the
++ * others, delayed work struct must be per-CPU. This is not required when
++ * sysctl_sld_mitigate is enabled because of the semaphore that limits
++ * the number of simultaneously scheduled delayed works to 1.
++ */
++static DEFINE_PER_CPU(struct delayed_work, sl_reenable);
+
+ /*
+ * If a CPU goes offline with pending delayed work to re-enable split lock
+@@ -213,7 +219,7 @@ static int splitlock_cpu_offline(unsigned int cpu)
+
+ static void split_lock_warn(unsigned long ip)
+ {
+- struct delayed_work *work;
++ struct delayed_work *work = NULL;
+ int cpu;
+
+ if (!current->reported_split_lock)
+@@ -235,11 +241,17 @@ static void split_lock_warn(unsigned long ip)
+ if (down_interruptible(&buslock_sem) == -EINTR)
+ return;
+ work = &sl_reenable_unlock;
+- } else {
+- work = &sl_reenable;
+ }
+
+ cpu = get_cpu();
++
++ if (!work) {
++ work = this_cpu_ptr(&sl_reenable);
++ /* Deferred initialization of per-CPU struct */
++ if (!work->work.func)
++ INIT_DELAYED_WORK(work, __split_lock_reenable);
++ }
++
+ schedule_delayed_work_on(cpu, work, 2);
+
+ /* Disable split lock detection on this CPU to make progress */
+diff --git a/arch/x86/kernel/cpu/mce/severity.c b/arch/x86/kernel/cpu/mce/severity.c
+index dac4d64dfb2a8e..2235a74774360d 100644
+--- a/arch/x86/kernel/cpu/mce/severity.c
++++ b/arch/x86/kernel/cpu/mce/severity.c
+@@ -300,13 +300,12 @@ static noinstr int error_context(struct mce *m, struct pt_regs *regs)
+ copy_user = is_copy_from_user(regs);
+ instrumentation_end();
+
+- switch (fixup_type) {
+- case EX_TYPE_UACCESS:
+- if (!copy_user)
+- return IN_KERNEL;
+- m->kflags |= MCE_IN_KERNEL_COPYIN;
+- fallthrough;
++ if (copy_user) {
++ m->kflags |= MCE_IN_KERNEL_COPYIN | MCE_IN_KERNEL_RECOV;
++ return IN_KERNEL_RECOV;
++ }
+
++ switch (fixup_type) {
+ case EX_TYPE_FAULT_MCE_SAFE:
+ case EX_TYPE_DEFAULT_MCE_SAFE:
+ m->kflags |= MCE_IN_KERNEL_RECOV;
+diff --git a/arch/x86/kernel/cpu/microcode/amd.c b/arch/x86/kernel/cpu/microcode/amd.c
+index 138689b8e1d833..b61028cf5c8a3b 100644
+--- a/arch/x86/kernel/cpu/microcode/amd.c
++++ b/arch/x86/kernel/cpu/microcode/amd.c
+@@ -600,7 +600,7 @@ static bool __apply_microcode_amd(struct microcode_amd *mc, u32 *cur_rev,
+ unsigned long p_addr = (unsigned long)&mc->hdr.data_code;
+
+ if (!verify_sha256_digest(mc->hdr.patch_id, *cur_rev, (const u8 *)p_addr, psize))
+- return -1;
++ return false;
+
+ native_wrmsrl(MSR_AMD64_PATCH_LOADER, p_addr);
+
+diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+index 6419e04d8a7b2b..04b653d613e884 100644
+--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
++++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+@@ -157,7 +157,8 @@ static int closid_alloc(void)
+
+ lockdep_assert_held(&rdtgroup_mutex);
+
+- if (IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID)) {
++ if (IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID) &&
++ is_llc_occupancy_enabled()) {
+ cleanest_closid = resctrl_find_cleanest_closid();
+ if (cleanest_closid < 0)
+ return cleanest_closid;
+diff --git a/arch/x86/kernel/dumpstack.c b/arch/x86/kernel/dumpstack.c
+index a7d562697e50e4..b2b118a8c09be9 100644
+--- a/arch/x86/kernel/dumpstack.c
++++ b/arch/x86/kernel/dumpstack.c
+@@ -195,6 +195,7 @@ static void show_trace_log_lvl(struct task_struct *task, struct pt_regs *regs,
+ printk("%sCall Trace:\n", log_lvl);
+
+ unwind_start(&state, task, regs, stack);
++ stack = stack ?: get_stack_pointer(task, regs);
+ regs = unwind_get_entry_regs(&state, &partial);
+
+ /*
+@@ -213,9 +214,7 @@ static void show_trace_log_lvl(struct task_struct *task, struct pt_regs *regs,
+ * - hardirq stack
+ * - entry stack
+ */
+- for (stack = stack ?: get_stack_pointer(task, regs);
+- stack;
+- stack = stack_info.next_sp) {
++ for (; stack; stack = stack_info.next_sp) {
+ const char *stack_name;
+
+ stack = PTR_ALIGN(stack, sizeof(long));
+diff --git a/arch/x86/kernel/fpu/core.c b/arch/x86/kernel/fpu/core.c
+index 1209c7aebb211f..dcac3c058fb761 100644
+--- a/arch/x86/kernel/fpu/core.c
++++ b/arch/x86/kernel/fpu/core.c
+@@ -220,7 +220,7 @@ bool fpu_alloc_guest_fpstate(struct fpu_guest *gfpu)
+ struct fpstate *fpstate;
+ unsigned int size;
+
+- size = fpu_user_cfg.default_size + ALIGN(offsetof(struct fpstate, regs), 64);
++ size = fpu_kernel_cfg.default_size + ALIGN(offsetof(struct fpstate, regs), 64);
+ fpstate = vzalloc(size);
+ if (!fpstate)
+ return false;
+@@ -232,8 +232,8 @@ bool fpu_alloc_guest_fpstate(struct fpu_guest *gfpu)
+ fpstate->is_guest = true;
+
+ gfpu->fpstate = fpstate;
+- gfpu->xfeatures = fpu_user_cfg.default_features;
+- gfpu->perm = fpu_user_cfg.default_features;
++ gfpu->xfeatures = fpu_kernel_cfg.default_features;
++ gfpu->perm = fpu_kernel_cfg.default_features;
+
+ /*
+ * KVM sets the FP+SSE bits in the XSAVE header when copying FPU state
+diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
+index 1ccaa3397a6708..c5bb980b8a6732 100644
+--- a/arch/x86/kernel/paravirt.c
++++ b/arch/x86/kernel/paravirt.c
+@@ -110,6 +110,11 @@ int paravirt_disable_iospace(void)
+ return request_resource(&ioport_resource, &reserve_ioports);
+ }
+
++static noinstr void pv_native_safe_halt(void)
++{
++ native_safe_halt();
++}
++
+ #ifdef CONFIG_PARAVIRT_XXL
+ static noinstr void pv_native_write_cr2(unsigned long val)
+ {
+@@ -125,11 +130,6 @@ static noinstr void pv_native_set_debugreg(int regno, unsigned long val)
+ {
+ native_set_debugreg(regno, val);
+ }
+-
+-static noinstr void pv_native_safe_halt(void)
+-{
+- native_safe_halt();
+-}
+ #endif
+
+ struct pv_info pv_info = {
+@@ -186,9 +186,11 @@ struct paravirt_patch_template pv_ops = {
+ .irq.save_fl = __PV_IS_CALLEE_SAVE(pv_native_save_fl),
+ .irq.irq_disable = __PV_IS_CALLEE_SAVE(pv_native_irq_disable),
+ .irq.irq_enable = __PV_IS_CALLEE_SAVE(pv_native_irq_enable),
++#endif /* CONFIG_PARAVIRT_XXL */
++
++ /* Irq HLT ops. */
+ .irq.safe_halt = pv_native_safe_halt,
+ .irq.halt = native_halt,
+-#endif /* CONFIG_PARAVIRT_XXL */
+
+ /* Mmu ops. */
+ .mmu.flush_tlb_user = native_flush_tlb_local,
+diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
+index 6da6769d7254a4..21561262a82161 100644
+--- a/arch/x86/kernel/process.c
++++ b/arch/x86/kernel/process.c
+@@ -93,7 +93,12 @@ EXPORT_PER_CPU_SYMBOL_GPL(__tss_limit_invalid);
+ */
+ int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
+ {
+- memcpy(dst, src, arch_task_struct_size);
++ /* init_task is not dynamically sized (incomplete FPU state) */
++ if (unlikely(src == &init_task))
++ memcpy_and_pad(dst, arch_task_struct_size, src, sizeof(init_task), 0);
++ else
++ memcpy(dst, src, arch_task_struct_size);
++
+ #ifdef CONFIG_VM86
+ dst->thread.vm86 = NULL;
+ #endif
+@@ -934,7 +939,7 @@ void __init select_idle_routine(void)
+ static_call_update(x86_idle, mwait_idle);
+ } else if (cpu_feature_enabled(X86_FEATURE_TDX_GUEST)) {
+ pr_info("using TDX aware idle routine\n");
+- static_call_update(x86_idle, tdx_safe_halt);
++ static_call_update(x86_idle, tdx_halt);
+ } else {
+ static_call_update(x86_idle, default_idle);
+ }
+diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
+index 2dbadf347b5f4f..5e3e036e6e537f 100644
+--- a/arch/x86/kernel/traps.c
++++ b/arch/x86/kernel/traps.c
+@@ -379,6 +379,21 @@ __visible void __noreturn handle_stack_overflow(struct pt_regs *regs,
+ }
+ #endif
+
++/*
++ * Prevent the compiler and/or objtool from marking the !CONFIG_X86_ESPFIX64
++ * version of exc_double_fault() as noreturn. Otherwise the noreturn mismatch
++ * between configs triggers objtool warnings.
++ *
++ * This is a temporary hack until we have compiler or plugin support for
++ * annotating noreturns.
++ */
++#ifdef CONFIG_X86_ESPFIX64
++#define always_true() true
++#else
++bool always_true(void);
++bool __weak always_true(void) { return true; }
++#endif
++
+ /*
+ * Runs on an IST stack for x86_64 and on a special task stack for x86_32.
+ *
+@@ -514,7 +529,8 @@ DEFINE_IDTENTRY_DF(exc_double_fault)
+
+ pr_emerg("PANIC: double fault, error_code: 0x%lx\n", error_code);
+ die("double fault", regs, error_code);
+- panic("Machine halted.");
++ if (always_true())
++ panic("Machine halted.");
+ instrumentation_end();
+ }
+
+diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c
+index 34dec0b72ea8dd..88e5a4ed9db3a7 100644
+--- a/arch/x86/kernel/tsc.c
++++ b/arch/x86/kernel/tsc.c
+@@ -959,7 +959,7 @@ static unsigned long long cyc2ns_suspend;
+
+ void tsc_save_sched_clock_state(void)
+ {
+- if (!sched_clock_stable())
++ if (!static_branch_likely(&__use_tsc) && !sched_clock_stable())
+ return;
+
+ cyc2ns_suspend = sched_clock();
+@@ -979,7 +979,7 @@ void tsc_restore_sched_clock_state(void)
+ unsigned long flags;
+ int cpu;
+
+- if (!sched_clock_stable())
++ if (!static_branch_likely(&__use_tsc) && !sched_clock_stable())
+ return;
+
+ local_irq_save(flags);
+diff --git a/arch/x86/kernel/uprobes.c b/arch/x86/kernel/uprobes.c
+index 5a952c5ea66bc6..9194695662b26f 100644
+--- a/arch/x86/kernel/uprobes.c
++++ b/arch/x86/kernel/uprobes.c
+@@ -357,19 +357,23 @@ void *arch_uprobe_trampoline(unsigned long *psize)
+ return &insn;
+ }
+
+-static unsigned long trampoline_check_ip(void)
++static unsigned long trampoline_check_ip(unsigned long tramp)
+ {
+- unsigned long tramp = uprobe_get_trampoline_vaddr();
+-
+ return tramp + (uretprobe_syscall_check - uretprobe_trampoline_entry);
+ }
+
+ SYSCALL_DEFINE0(uretprobe)
+ {
+ struct pt_regs *regs = task_pt_regs(current);
+- unsigned long err, ip, sp, r11_cx_ax[3];
++ unsigned long err, ip, sp, r11_cx_ax[3], tramp;
++
++ /* If there's no trampoline, we are called from wrong place. */
++ tramp = uprobe_get_trampoline_vaddr();
++ if (unlikely(tramp == UPROBE_NO_TRAMPOLINE_VADDR))
++ goto sigill;
+
+- if (regs->ip != trampoline_check_ip())
++ /* Make sure the ip matches the only allowed sys_uretprobe caller. */
++ if (unlikely(regs->ip != trampoline_check_ip(tramp)))
+ goto sigill;
+
+ err = copy_from_user(r11_cx_ax, (void __user *)regs->sp, sizeof(r11_cx_ax));
+diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
+index 661108d65ee72f..510901b8c3699b 100644
+--- a/arch/x86/kvm/svm/sev.c
++++ b/arch/x86/kvm/svm/sev.c
+@@ -3969,16 +3969,12 @@ static int sev_snp_ap_creation(struct vcpu_svm *svm)
+
+ /*
+ * The target vCPU is valid, so the vCPU will be kicked unless the
+- * request is for CREATE_ON_INIT. For any errors at this stage, the
+- * kick will place the vCPU in an non-runnable state.
++ * request is for CREATE_ON_INIT.
+ */
+ kick = true;
+
+ mutex_lock(&target_svm->sev_es.snp_vmsa_mutex);
+
+- target_svm->sev_es.snp_vmsa_gpa = INVALID_PAGE;
+- target_svm->sev_es.snp_ap_waiting_for_reset = true;
+-
+ /* Interrupt injection mode shouldn't change for AP creation */
+ if (request < SVM_VMGEXIT_AP_DESTROY) {
+ u64 sev_features;
+@@ -4024,20 +4020,23 @@ static int sev_snp_ap_creation(struct vcpu_svm *svm)
+ target_svm->sev_es.snp_vmsa_gpa = svm->vmcb->control.exit_info_2;
+ break;
+ case SVM_VMGEXIT_AP_DESTROY:
++ target_svm->sev_es.snp_vmsa_gpa = INVALID_PAGE;
+ break;
+ default:
+ vcpu_unimpl(vcpu, "vmgexit: invalid AP creation request [%#x] from guest\n",
+ request);
+ ret = -EINVAL;
+- break;
++ goto out;
+ }
+
+-out:
++ target_svm->sev_es.snp_ap_waiting_for_reset = true;
++
+ if (kick) {
+ kvm_make_request(KVM_REQ_UPDATE_PROTECTED_GUEST_STATE, target_vcpu);
+ kvm_vcpu_kick(target_vcpu);
+ }
+
++out:
+ mutex_unlock(&target_svm->sev_es.snp_vmsa_mutex);
+
+ return ret;
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 4b64ab350bcd4d..01d3fa84d2a459 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -4573,6 +4573,11 @@ static bool kvm_is_vm_type_supported(unsigned long type)
+ return type < 32 && (kvm_caps.supported_vm_types & BIT(type));
+ }
+
++static inline u32 kvm_sync_valid_fields(struct kvm *kvm)
++{
++ return kvm && kvm->arch.has_protected_state ? 0 : KVM_SYNC_X86_VALID_FIELDS;
++}
++
+ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
+ {
+ int r = 0;
+@@ -4681,7 +4686,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
+ break;
+ #endif
+ case KVM_CAP_SYNC_REGS:
+- r = KVM_SYNC_X86_VALID_FIELDS;
++ r = kvm_sync_valid_fields(kvm);
+ break;
+ case KVM_CAP_ADJUST_CLOCK:
+ r = KVM_CLOCK_VALID_FLAGS;
+@@ -11474,6 +11479,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
+ {
+ struct kvm_queued_exception *ex = &vcpu->arch.exception;
+ struct kvm_run *kvm_run = vcpu->run;
++ u32 sync_valid_fields;
+ int r;
+
+ r = kvm_mmu_post_init_vm(vcpu->kvm);
+@@ -11519,8 +11525,9 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
+ goto out;
+ }
+
+- if ((kvm_run->kvm_valid_regs & ~KVM_SYNC_X86_VALID_FIELDS) ||
+- (kvm_run->kvm_dirty_regs & ~KVM_SYNC_X86_VALID_FIELDS)) {
++ sync_valid_fields = kvm_sync_valid_fields(vcpu->kvm);
++ if ((kvm_run->kvm_valid_regs & ~sync_valid_fields) ||
++ (kvm_run->kvm_dirty_regs & ~sync_valid_fields)) {
+ r = -EINVAL;
+ goto out;
+ }
+@@ -11578,7 +11585,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
+
+ out:
+ kvm_put_guest_fpu(vcpu);
+- if (kvm_run->kvm_valid_regs)
++ if (kvm_run->kvm_valid_regs && likely(!vcpu->arch.guest_state_protected))
+ store_regs(vcpu);
+ post_kvm_run_save(vcpu);
+ kvm_vcpu_srcu_read_unlock(vcpu);
+diff --git a/arch/x86/lib/copy_user_64.S b/arch/x86/lib/copy_user_64.S
+index fc9fb5d0617443..b8f74d80f35c61 100644
+--- a/arch/x86/lib/copy_user_64.S
++++ b/arch/x86/lib/copy_user_64.S
+@@ -74,6 +74,24 @@ SYM_FUNC_START(rep_movs_alternative)
+ _ASM_EXTABLE_UA( 0b, 1b)
+
+ .Llarge_movsq:
++ /* Do the first possibly unaligned word */
++0: movq (%rsi),%rax
++1: movq %rax,(%rdi)
++
++ _ASM_EXTABLE_UA( 0b, .Lcopy_user_tail)
++ _ASM_EXTABLE_UA( 1b, .Lcopy_user_tail)
++
++ /* What would be the offset to the aligned destination? */
++ leaq 8(%rdi),%rax
++ andq $-8,%rax
++ subq %rdi,%rax
++
++ /* .. and update pointers and count to match */
++ addq %rax,%rdi
++ addq %rax,%rsi
++ subq %rax,%rcx
++
++ /* make %rcx contain the number of words, %rax the remainder */
+ movq %rcx,%rax
+ shrq $3,%rcx
+ andl $7,%eax
+diff --git a/arch/x86/mm/mem_encrypt_identity.c b/arch/x86/mm/mem_encrypt_identity.c
+index e6c7686f443a06..9fce5b87b8c50f 100644
+--- a/arch/x86/mm/mem_encrypt_identity.c
++++ b/arch/x86/mm/mem_encrypt_identity.c
+@@ -565,7 +565,7 @@ void __head sme_enable(struct boot_params *bp)
+ }
+
+ RIP_REL_REF(sme_me_mask) = me_mask;
+- physical_mask &= ~me_mask;
+- cc_vendor = CC_VENDOR_AMD;
++ RIP_REL_REF(physical_mask) &= ~me_mask;
++ RIP_REL_REF(cc_vendor) = CC_VENDOR_AMD;
+ cc_set_mask(me_mask);
+ }
+diff --git a/arch/x86/mm/pat/cpa-test.c b/arch/x86/mm/pat/cpa-test.c
+index 3d2f7f0a6ed142..ad3c1feec990db 100644
+--- a/arch/x86/mm/pat/cpa-test.c
++++ b/arch/x86/mm/pat/cpa-test.c
+@@ -183,7 +183,7 @@ static int pageattr_test(void)
+ break;
+
+ case 1:
+- err = change_page_attr_set(addrs, len[1], PAGE_CPA_TEST, 1);
++ err = change_page_attr_set(addrs, len[i], PAGE_CPA_TEST, 1);
+ break;
+
+ case 2:
+diff --git a/arch/x86/mm/pat/memtype.c b/arch/x86/mm/pat/memtype.c
+index feb8cc6a12bf23..d721cc19addbd6 100644
+--- a/arch/x86/mm/pat/memtype.c
++++ b/arch/x86/mm/pat/memtype.c
+@@ -984,29 +984,42 @@ static int get_pat_info(struct vm_area_struct *vma, resource_size_t *paddr,
+ return -EINVAL;
+ }
+
+-/*
+- * track_pfn_copy is called when vma that is covering the pfnmap gets
+- * copied through copy_page_range().
+- *
+- * If the vma has a linear pfn mapping for the entire range, we get the prot
+- * from pte and reserve the entire vma range with single reserve_pfn_range call.
+- */
+-int track_pfn_copy(struct vm_area_struct *vma)
++int track_pfn_copy(struct vm_area_struct *dst_vma,
++ struct vm_area_struct *src_vma, unsigned long *pfn)
+ {
++ const unsigned long vma_size = src_vma->vm_end - src_vma->vm_start;
+ resource_size_t paddr;
+- unsigned long vma_size = vma->vm_end - vma->vm_start;
+ pgprot_t pgprot;
++ int rc;
+
+- if (vma->vm_flags & VM_PAT) {
+- if (get_pat_info(vma, &paddr, &pgprot))
+- return -EINVAL;
+- /* reserve the whole chunk covered by vma. */
+- return reserve_pfn_range(paddr, vma_size, &pgprot, 1);
+- }
++ if (!(src_vma->vm_flags & VM_PAT))
++ return 0;
++
++ /*
++ * Duplicate the PAT information for the dst VMA based on the src
++ * VMA.
++ */
++ if (get_pat_info(src_vma, &paddr, &pgprot))
++ return -EINVAL;
++ rc = reserve_pfn_range(paddr, vma_size, &pgprot, 1);
++ if (rc)
++ return rc;
+
++ /* Reservation for the destination VMA succeeded. */
++ vm_flags_set(dst_vma, VM_PAT);
++ *pfn = PHYS_PFN(paddr);
+ return 0;
+ }
+
++void untrack_pfn_copy(struct vm_area_struct *dst_vma, unsigned long pfn)
++{
++ untrack_pfn(dst_vma, pfn, dst_vma->vm_end - dst_vma->vm_start, true);
++ /*
++ * Reservation was freed, any copied page tables will get cleaned
++ * up later, but without getting PAT involved again.
++ */
++}
++
+ /*
+ * prot is passed in as a parameter for the new mapping. If the vma has
+ * a linear pfn mapping for the entire range, or no vma is provided,
+@@ -1095,15 +1108,6 @@ void untrack_pfn(struct vm_area_struct *vma, unsigned long pfn,
+ }
+ }
+
+-/*
+- * untrack_pfn_clear is called if the following situation fits:
+- *
+- * 1) while mremapping a pfnmap for a new region, with the old vma after
+- * its pfnmap page table has been removed. The new vma has a new pfnmap
+- * to the same pfn & cache type with VM_PAT set.
+- * 2) while duplicating vm area, the new vma fails to copy the pgtable from
+- * old vma.
+- */
+ void untrack_pfn_clear(struct vm_area_struct *vma)
+ {
+ vm_flags_clear(vma, VM_PAT);
+diff --git a/block/badblocks.c b/block/badblocks.c
+index db4ec8b9b2a8c2..dc147c0179612f 100644
+--- a/block/badblocks.c
++++ b/block/badblocks.c
+@@ -527,51 +527,6 @@ static int prev_badblocks(struct badblocks *bb, struct badblocks_context *bad,
+ return ret;
+ }
+
+-/*
+- * Return 'true' if the range indicated by 'bad' can be backward merged
+- * with the bad range (from the bad table) index by 'behind'.
+- */
+-static bool can_merge_behind(struct badblocks *bb,
+- struct badblocks_context *bad, int behind)
+-{
+- sector_t sectors = bad->len;
+- sector_t s = bad->start;
+- u64 *p = bb->page;
+-
+- if ((s < BB_OFFSET(p[behind])) &&
+- ((s + sectors) >= BB_OFFSET(p[behind])) &&
+- ((BB_END(p[behind]) - s) <= BB_MAX_LEN) &&
+- BB_ACK(p[behind]) == bad->ack)
+- return true;
+- return false;
+-}
+-
+-/*
+- * Do backward merge for range indicated by 'bad' and the bad range
+- * (from the bad table) indexed by 'behind'. The return value is merged
+- * sectors from bad->len.
+- */
+-static int behind_merge(struct badblocks *bb, struct badblocks_context *bad,
+- int behind)
+-{
+- sector_t sectors = bad->len;
+- sector_t s = bad->start;
+- u64 *p = bb->page;
+- int merged = 0;
+-
+- WARN_ON(s >= BB_OFFSET(p[behind]));
+- WARN_ON((s + sectors) < BB_OFFSET(p[behind]));
+-
+- if (s < BB_OFFSET(p[behind])) {
+- merged = BB_OFFSET(p[behind]) - s;
+- p[behind] = BB_MAKE(s, BB_LEN(p[behind]) + merged, bad->ack);
+-
+- WARN_ON((BB_LEN(p[behind]) + merged) >= BB_MAX_LEN);
+- }
+-
+- return merged;
+-}
+-
+ /*
+ * Return 'true' if the range indicated by 'bad' can be forward
+ * merged with the bad range (from the bad table) indexed by 'prev'.
+@@ -745,7 +700,7 @@ static bool can_front_overwrite(struct badblocks *bb, int prev,
+ *extra = 2;
+ }
+
+- if ((bb->count + (*extra)) >= MAX_BADBLOCKS)
++ if ((bb->count + (*extra)) > MAX_BADBLOCKS)
+ return false;
+
+ return true;
+@@ -855,40 +810,60 @@ static void badblocks_update_acked(struct badblocks *bb)
+ bb->unacked_exist = 0;
+ }
+
++/*
++ * Return 'true' if the range indicated by 'bad' is exactly backward
++ * overlapped with the bad range (from bad table) indexed by 'behind'.
++ */
++static bool try_adjacent_combine(struct badblocks *bb, int prev)
++{
++ u64 *p = bb->page;
++
++ if (prev >= 0 && (prev + 1) < bb->count &&
++ BB_END(p[prev]) == BB_OFFSET(p[prev + 1]) &&
++ (BB_LEN(p[prev]) + BB_LEN(p[prev + 1])) <= BB_MAX_LEN &&
++ BB_ACK(p[prev]) == BB_ACK(p[prev + 1])) {
++ p[prev] = BB_MAKE(BB_OFFSET(p[prev]),
++ BB_LEN(p[prev]) + BB_LEN(p[prev + 1]),
++ BB_ACK(p[prev]));
++
++ if ((prev + 2) < bb->count)
++ memmove(p + prev + 1, p + prev + 2,
++ (bb->count - (prev + 2)) * 8);
++ bb->count--;
++ return true;
++ }
++ return false;
++}
++
+ /* Do exact work to set bad block range into the bad block table */
+-static int _badblocks_set(struct badblocks *bb, sector_t s, int sectors,
+- int acknowledged)
++static bool _badblocks_set(struct badblocks *bb, sector_t s, sector_t sectors,
++ int acknowledged)
+ {
+- int retried = 0, space_desired = 0;
+- int orig_len, len = 0, added = 0;
++ int len = 0, added = 0;
+ struct badblocks_context bad;
+ int prev = -1, hint = -1;
+- sector_t orig_start;
+ unsigned long flags;
+- int rv = 0;
+ u64 *p;
+
+ if (bb->shift < 0)
+ /* badblocks are disabled */
+- return 1;
++ return false;
+
+ if (sectors == 0)
+ /* Invalid sectors number */
+- return 1;
++ return false;
+
+ if (bb->shift) {
+ /* round the start down, and the end up */
+ sector_t next = s + sectors;
+
+- rounddown(s, bb->shift);
+- roundup(next, bb->shift);
++ rounddown(s, 1 << bb->shift);
++ roundup(next, 1 << bb->shift);
+ sectors = next - s;
+ }
+
+ write_seqlock_irqsave(&bb->lock, flags);
+
+- orig_start = s;
+- orig_len = sectors;
+ bad.ack = acknowledged;
+ p = bb->page;
+
+@@ -897,6 +872,9 @@ static int _badblocks_set(struct badblocks *bb, sector_t s, int sectors,
+ bad.len = sectors;
+ len = 0;
+
++ if (badblocks_full(bb))
++ goto out;
++
+ if (badblocks_empty(bb)) {
+ len = insert_at(bb, 0, &bad);
+ bb->count++;
+@@ -908,32 +886,14 @@ static int _badblocks_set(struct badblocks *bb, sector_t s, int sectors,
+
+ /* start before all badblocks */
+ if (prev < 0) {
+- if (!badblocks_full(bb)) {
+- /* insert on the first */
+- if (bad.len > (BB_OFFSET(p[0]) - bad.start))
+- bad.len = BB_OFFSET(p[0]) - bad.start;
+- len = insert_at(bb, 0, &bad);
+- bb->count++;
+- added++;
+- hint = 0;
+- goto update_sectors;
+- }
+-
+- /* No sapce, try to merge */
+- if (overlap_behind(bb, &bad, 0)) {
+- if (can_merge_behind(bb, &bad, 0)) {
+- len = behind_merge(bb, &bad, 0);
+- added++;
+- } else {
+- len = BB_OFFSET(p[0]) - s;
+- space_desired = 1;
+- }
+- hint = 0;
+- goto update_sectors;
+- }
+-
+- /* no table space and give up */
+- goto out;
++ /* insert on the first */
++ if (bad.len > (BB_OFFSET(p[0]) - bad.start))
++ bad.len = BB_OFFSET(p[0]) - bad.start;
++ len = insert_at(bb, 0, &bad);
++ bb->count++;
++ added++;
++ hint = ++prev;
++ goto update_sectors;
+ }
+
+ /* in case p[prev-1] can be merged with p[prev] */
+@@ -953,6 +913,9 @@ static int _badblocks_set(struct badblocks *bb, sector_t s, int sectors,
+ int extra = 0;
+
+ if (!can_front_overwrite(bb, prev, &bad, &extra)) {
++ if (extra > 0)
++ goto out;
++
+ len = min_t(sector_t,
+ BB_END(p[prev]) - s, sectors);
+ hint = prev;
+@@ -979,24 +942,6 @@ static int _badblocks_set(struct badblocks *bb, sector_t s, int sectors,
+ goto update_sectors;
+ }
+
+- /* if no space in table, still try to merge in the covered range */
+- if (badblocks_full(bb)) {
+- /* skip the cannot-merge range */
+- if (((prev + 1) < bb->count) &&
+- overlap_behind(bb, &bad, prev + 1) &&
+- ((s + sectors) >= BB_END(p[prev + 1]))) {
+- len = BB_END(p[prev + 1]) - s;
+- hint = prev + 1;
+- goto update_sectors;
+- }
+-
+- /* no retry any more */
+- len = sectors;
+- space_desired = 1;
+- hint = -1;
+- goto update_sectors;
+- }
+-
+ /* cannot merge and there is space in bad table */
+ if ((prev + 1) < bb->count &&
+ overlap_behind(bb, &bad, prev + 1))
+@@ -1006,7 +951,7 @@ static int _badblocks_set(struct badblocks *bb, sector_t s, int sectors,
+ len = insert_at(bb, prev + 1, &bad);
+ bb->count++;
+ added++;
+- hint = prev + 1;
++ hint = ++prev;
+
+ update_sectors:
+ s += len;
+@@ -1015,35 +960,12 @@ static int _badblocks_set(struct badblocks *bb, sector_t s, int sectors,
+ if (sectors > 0)
+ goto re_insert;
+
+- WARN_ON(sectors < 0);
+-
+ /*
+ * Check whether the following already set range can be
+ * merged. (prev < 0) condition is not handled here,
+ * because it's already complicated enough.
+ */
+- if (prev >= 0 &&
+- (prev + 1) < bb->count &&
+- BB_END(p[prev]) == BB_OFFSET(p[prev + 1]) &&
+- (BB_LEN(p[prev]) + BB_LEN(p[prev + 1])) <= BB_MAX_LEN &&
+- BB_ACK(p[prev]) == BB_ACK(p[prev + 1])) {
+- p[prev] = BB_MAKE(BB_OFFSET(p[prev]),
+- BB_LEN(p[prev]) + BB_LEN(p[prev + 1]),
+- BB_ACK(p[prev]));
+-
+- if ((prev + 2) < bb->count)
+- memmove(p + prev + 1, p + prev + 2,
+- (bb->count - (prev + 2)) * 8);
+- bb->count--;
+- }
+-
+- if (space_desired && !badblocks_full(bb)) {
+- s = orig_start;
+- sectors = orig_len;
+- space_desired = 0;
+- if (retried++ < 3)
+- goto re_insert;
+- }
++ try_adjacent_combine(bb, prev);
+
+ out:
+ if (added) {
+@@ -1057,10 +979,7 @@ static int _badblocks_set(struct badblocks *bb, sector_t s, int sectors,
+
+ write_sequnlock_irqrestore(&bb->lock, flags);
+
+- if (!added)
+- rv = 1;
+-
+- return rv;
++ return sectors == 0;
+ }
+
+ /*
+@@ -1131,21 +1050,20 @@ static int front_splitting_clear(struct badblocks *bb, int prev,
+ }
+
+ /* Do the exact work to clear bad block range from the bad block table */
+-static int _badblocks_clear(struct badblocks *bb, sector_t s, int sectors)
++static bool _badblocks_clear(struct badblocks *bb, sector_t s, sector_t sectors)
+ {
+ struct badblocks_context bad;
+ int prev = -1, hint = -1;
+ int len = 0, cleared = 0;
+- int rv = 0;
+ u64 *p;
+
+ if (bb->shift < 0)
+ /* badblocks are disabled */
+- return 1;
++ return false;
+
+ if (sectors == 0)
+ /* Invalid sectors number */
+- return 1;
++ return false;
+
+ if (bb->shift) {
+ sector_t target;
+@@ -1157,8 +1075,8 @@ static int _badblocks_clear(struct badblocks *bb, sector_t s, int sectors)
+ * isn't than to think a block is not bad when it is.
+ */
+ target = s + sectors;
+- roundup(s, bb->shift);
+- rounddown(target, bb->shift);
++ roundup(s, 1 << bb->shift);
++ rounddown(target, 1 << bb->shift);
+ sectors = target - s;
+ }
+
+@@ -1214,7 +1132,7 @@ static int _badblocks_clear(struct badblocks *bb, sector_t s, int sectors)
+ if ((BB_OFFSET(p[prev]) < bad.start) &&
+ (BB_END(p[prev]) > (bad.start + bad.len))) {
+ /* Splitting */
+- if ((bb->count + 1) < MAX_BADBLOCKS) {
++ if ((bb->count + 1) <= MAX_BADBLOCKS) {
+ len = front_splitting_clear(bb, prev, &bad);
+ bb->count += 1;
+ cleared++;
+@@ -1255,8 +1173,6 @@ static int _badblocks_clear(struct badblocks *bb, sector_t s, int sectors)
+ if (sectors > 0)
+ goto re_clear;
+
+- WARN_ON(sectors < 0);
+-
+ if (cleared) {
+ badblocks_update_acked(bb);
+ set_changed(bb);
+@@ -1265,40 +1181,21 @@ static int _badblocks_clear(struct badblocks *bb, sector_t s, int sectors)
+ write_sequnlock_irq(&bb->lock);
+
+ if (!cleared)
+- rv = 1;
++ return false;
+
+- return rv;
++ return true;
+ }
+
+ /* Do the exact work to check bad blocks range from the bad block table */
+-static int _badblocks_check(struct badblocks *bb, sector_t s, int sectors,
+- sector_t *first_bad, int *bad_sectors)
++static int _badblocks_check(struct badblocks *bb, sector_t s, sector_t sectors,
++ sector_t *first_bad, sector_t *bad_sectors)
+ {
+- int unacked_badblocks, acked_badblocks;
+ int prev = -1, hint = -1, set = 0;
+ struct badblocks_context bad;
+- unsigned int seq;
++ int unacked_badblocks = 0;
++ int acked_badblocks = 0;
++ u64 *p = bb->page;
+ int len, rv;
+- u64 *p;
+-
+- WARN_ON(bb->shift < 0 || sectors == 0);
+-
+- if (bb->shift > 0) {
+- sector_t target;
+-
+- /* round the start down, and the end up */
+- target = s + sectors;
+- rounddown(s, bb->shift);
+- roundup(target, bb->shift);
+- sectors = target - s;
+- }
+-
+-retry:
+- seq = read_seqbegin(&bb->lock);
+-
+- p = bb->page;
+- unacked_badblocks = 0;
+- acked_badblocks = 0;
+
+ re_check:
+ bad.start = s;
+@@ -1364,9 +1261,6 @@ static int _badblocks_check(struct badblocks *bb, sector_t s, int sectors,
+ else
+ rv = 0;
+
+- if (read_seqretry(&bb->lock, seq))
+- goto retry;
+-
+ return rv;
+ }
+
+@@ -1404,10 +1298,30 @@ static int _badblocks_check(struct badblocks *bb, sector_t s, int sectors,
+ * -1: there are bad blocks which have not yet been acknowledged in metadata.
+ * plus the start/length of the first bad section we overlap.
+ */
+-int badblocks_check(struct badblocks *bb, sector_t s, int sectors,
+- sector_t *first_bad, int *bad_sectors)
++int badblocks_check(struct badblocks *bb, sector_t s, sector_t sectors,
++ sector_t *first_bad, sector_t *bad_sectors)
+ {
+- return _badblocks_check(bb, s, sectors, first_bad, bad_sectors);
++ unsigned int seq;
++ int rv;
++
++ WARN_ON(bb->shift < 0 || sectors == 0);
++
++ if (bb->shift > 0) {
++ /* round the start down, and the end up */
++ sector_t target = s + sectors;
++
++ rounddown(s, 1 << bb->shift);
++ roundup(target, 1 << bb->shift);
++ sectors = target - s;
++ }
++
++retry:
++ seq = read_seqbegin(&bb->lock);
++ rv = _badblocks_check(bb, s, sectors, first_bad, bad_sectors);
++ if (read_seqretry(&bb->lock, seq))
++ goto retry;
++
++ return rv;
+ }
+ EXPORT_SYMBOL_GPL(badblocks_check);
+
+@@ -1423,11 +1337,12 @@ EXPORT_SYMBOL_GPL(badblocks_check);
+ * decide how best to handle it.
+ *
+ * Return:
+- * 0: success
+- * 1: failed to set badblocks (out of space)
++ * true: success
++ * false: failed to set badblocks (out of space). Partial setting will be
++ * treated as failure.
+ */
+-int badblocks_set(struct badblocks *bb, sector_t s, int sectors,
+- int acknowledged)
++bool badblocks_set(struct badblocks *bb, sector_t s, sector_t sectors,
++ int acknowledged)
+ {
+ return _badblocks_set(bb, s, sectors, acknowledged);
+ }
+@@ -1444,10 +1359,10 @@ EXPORT_SYMBOL_GPL(badblocks_set);
+ * drop the remove request.
+ *
+ * Return:
+- * 0: success
+- * 1: failed to clear badblocks
++ * true: success
++ * false: failed to clear badblocks
+ */
+-int badblocks_clear(struct badblocks *bb, sector_t s, int sectors)
++bool badblocks_clear(struct badblocks *bb, sector_t s, sector_t sectors)
+ {
+ return _badblocks_clear(bb, s, sectors);
+ }
+@@ -1479,6 +1394,11 @@ void ack_all_badblocks(struct badblocks *bb)
+ p[i] = BB_MAKE(start, len, 1);
+ }
+ }
++
++ for (i = 0; i < bb->count ; i++)
++ while (try_adjacent_combine(bb, i))
++ ;
++
+ bb->unacked_exist = 0;
+ }
+ write_sequnlock_irq(&bb->lock);
+@@ -1564,10 +1484,10 @@ ssize_t badblocks_store(struct badblocks *bb, const char *page, size_t len,
+ return -EINVAL;
+ }
+
+- if (badblocks_set(bb, sector, length, !unack))
++ if (!badblocks_set(bb, sector, length, !unack))
+ return -ENOSPC;
+- else
+- return len;
++
++ return len;
+ }
+ EXPORT_SYMBOL_GPL(badblocks_store);
+
+diff --git a/block/bio.c b/block/bio.c
+index 6ac5983ba51e6c..6deea10b2cd3d6 100644
+--- a/block/bio.c
++++ b/block/bio.c
+@@ -1026,9 +1026,10 @@ EXPORT_SYMBOL(bio_add_page);
+ void bio_add_folio_nofail(struct bio *bio, struct folio *folio, size_t len,
+ size_t off)
+ {
++ unsigned long nr = off / PAGE_SIZE;
++
+ WARN_ON_ONCE(len > UINT_MAX);
+- WARN_ON_ONCE(off > UINT_MAX);
+- __bio_add_page(bio, &folio->page, len, off);
++ __bio_add_page(bio, folio_page(folio, nr), len, off % PAGE_SIZE);
+ }
+ EXPORT_SYMBOL_GPL(bio_add_folio_nofail);
+
+@@ -1049,9 +1050,11 @@ EXPORT_SYMBOL_GPL(bio_add_folio_nofail);
+ bool bio_add_folio(struct bio *bio, struct folio *folio, size_t len,
+ size_t off)
+ {
+- if (len > UINT_MAX || off > UINT_MAX)
++ unsigned long nr = off / PAGE_SIZE;
++
++ if (len > UINT_MAX)
+ return false;
+- return bio_add_page(bio, &folio->page, len, off) > 0;
++ return bio_add_page(bio, folio_page(folio, nr), len, off % PAGE_SIZE) > 0;
+ }
+ EXPORT_SYMBOL(bio_add_folio);
+
+diff --git a/block/blk-settings.c b/block/blk-settings.c
+index b9c6f0ec1c4992..66721afeea5467 100644
+--- a/block/blk-settings.c
++++ b/block/blk-settings.c
+@@ -114,6 +114,7 @@ static int blk_validate_integrity_limits(struct queue_limits *lim)
+ pr_warn("invalid PI settings.\n");
+ return -EINVAL;
+ }
++ bi->flags |= BLK_INTEGRITY_NOGENERATE | BLK_INTEGRITY_NOVERIFY;
+ return 0;
+ }
+
+@@ -867,36 +868,28 @@ bool queue_limits_stack_integrity(struct queue_limits *t,
+ if (!IS_ENABLED(CONFIG_BLK_DEV_INTEGRITY))
+ return true;
+
+- if (!ti->tuple_size) {
+- /* inherit the settings from the first underlying device */
+- if (!(ti->flags & BLK_INTEGRITY_STACKED)) {
+- ti->flags = BLK_INTEGRITY_DEVICE_CAPABLE |
+- (bi->flags & BLK_INTEGRITY_REF_TAG);
+- ti->csum_type = bi->csum_type;
+- ti->tuple_size = bi->tuple_size;
+- ti->pi_offset = bi->pi_offset;
+- ti->interval_exp = bi->interval_exp;
+- ti->tag_size = bi->tag_size;
+- goto done;
+- }
+- if (!bi->tuple_size)
+- goto done;
++ if (ti->flags & BLK_INTEGRITY_STACKED) {
++ if (ti->tuple_size != bi->tuple_size)
++ goto incompatible;
++ if (ti->interval_exp != bi->interval_exp)
++ goto incompatible;
++ if (ti->tag_size != bi->tag_size)
++ goto incompatible;
++ if (ti->csum_type != bi->csum_type)
++ goto incompatible;
++ if ((ti->flags & BLK_INTEGRITY_REF_TAG) !=
++ (bi->flags & BLK_INTEGRITY_REF_TAG))
++ goto incompatible;
++ } else {
++ ti->flags = BLK_INTEGRITY_STACKED;
++ ti->flags |= (bi->flags & BLK_INTEGRITY_DEVICE_CAPABLE) |
++ (bi->flags & BLK_INTEGRITY_REF_TAG);
++ ti->csum_type = bi->csum_type;
++ ti->tuple_size = bi->tuple_size;
++ ti->pi_offset = bi->pi_offset;
++ ti->interval_exp = bi->interval_exp;
++ ti->tag_size = bi->tag_size;
+ }
+-
+- if (ti->tuple_size != bi->tuple_size)
+- goto incompatible;
+- if (ti->interval_exp != bi->interval_exp)
+- goto incompatible;
+- if (ti->tag_size != bi->tag_size)
+- goto incompatible;
+- if (ti->csum_type != bi->csum_type)
+- goto incompatible;
+- if ((ti->flags & BLK_INTEGRITY_REF_TAG) !=
+- (bi->flags & BLK_INTEGRITY_REF_TAG))
+- goto incompatible;
+-
+-done:
+- ti->flags |= BLK_INTEGRITY_STACKED;
+ return true;
+
+ incompatible:
+diff --git a/block/blk-throttle.c b/block/blk-throttle.c
+index 8d149aff9fd0b7..a52f0d6b40ad4e 100644
+--- a/block/blk-throttle.c
++++ b/block/blk-throttle.c
+@@ -599,14 +599,23 @@ static inline void throtl_trim_slice(struct throtl_grp *tg, bool rw)
+ * sooner, then we need to reduce slice_end. A high bogus slice_end
+ * is bad because it does not allow new slice to start.
+ */
+-
+ throtl_set_slice_end(tg, rw, jiffies + tg->td->throtl_slice);
+
+ time_elapsed = rounddown(jiffies - tg->slice_start[rw],
+ tg->td->throtl_slice);
+- if (!time_elapsed)
++ /* Don't trim slice until at least 2 slices are used */
++ if (time_elapsed < tg->td->throtl_slice * 2)
+ return;
+
++ /*
++ * The bio submission time may be a few jiffies more than the expected
++ * waiting time, due to 'extra_bytes' can't be divided in
++ * tg_within_bps_limit(), and also due to timer wakeup delay. In this
++ * case, adjust slice_start will discard the extra wait time, causing
++ * lower rate than expected. Therefore, other than the above rounddown,
++ * one extra slice is preserved for deviation.
++ */
++ time_elapsed -= tg->td->throtl_slice;
+ bytes_trim = calculate_bytes_allowed(tg_bps_limit(tg, rw),
+ time_elapsed) +
+ tg->carryover_bytes[rw];
+diff --git a/crypto/algapi.c b/crypto/algapi.c
+index 5318c214debb0e..6120329eadadab 100644
+--- a/crypto/algapi.c
++++ b/crypto/algapi.c
+@@ -464,8 +464,7 @@ void crypto_unregister_alg(struct crypto_alg *alg)
+ if (WARN_ON(refcount_read(&alg->cra_refcnt) != 1))
+ return;
+
+- if (alg->cra_destroy)
+- alg->cra_destroy(alg);
++ crypto_alg_put(alg);
+
+ crypto_remove_final(&list);
+ }
+diff --git a/crypto/api.c b/crypto/api.c
+index bfd177a4313a01..c2c4eb14ef955f 100644
+--- a/crypto/api.c
++++ b/crypto/api.c
+@@ -36,7 +36,8 @@ EXPORT_SYMBOL_GPL(crypto_chain);
+ DEFINE_STATIC_KEY_FALSE(__crypto_boot_test_finished);
+ #endif
+
+-static struct crypto_alg *crypto_larval_wait(struct crypto_alg *alg);
++static struct crypto_alg *crypto_larval_wait(struct crypto_alg *alg,
++ u32 type, u32 mask);
+ static struct crypto_alg *crypto_alg_lookup(const char *name, u32 type,
+ u32 mask);
+
+@@ -145,7 +146,7 @@ static struct crypto_alg *crypto_larval_add(const char *name, u32 type,
+ if (alg != &larval->alg) {
+ kfree(larval);
+ if (crypto_is_larval(alg))
+- alg = crypto_larval_wait(alg);
++ alg = crypto_larval_wait(alg, type, mask);
+ }
+
+ return alg;
+@@ -197,7 +198,8 @@ static void crypto_start_test(struct crypto_larval *larval)
+ crypto_schedule_test(larval);
+ }
+
+-static struct crypto_alg *crypto_larval_wait(struct crypto_alg *alg)
++static struct crypto_alg *crypto_larval_wait(struct crypto_alg *alg,
++ u32 type, u32 mask)
+ {
+ struct crypto_larval *larval;
+ long time_left;
+@@ -219,12 +221,7 @@ static struct crypto_alg *crypto_larval_wait(struct crypto_alg *alg)
+ crypto_larval_kill(larval);
+ alg = ERR_PTR(-ETIMEDOUT);
+ } else if (!alg) {
+- u32 type;
+- u32 mask;
+-
+ alg = &larval->alg;
+- type = alg->cra_flags & ~(CRYPTO_ALG_LARVAL | CRYPTO_ALG_DEAD);
+- mask = larval->mask;
+ alg = crypto_alg_lookup(alg->cra_name, type, mask) ?:
+ ERR_PTR(-EAGAIN);
+ } else if (IS_ERR(alg))
+@@ -304,7 +301,7 @@ static struct crypto_alg *crypto_larval_lookup(const char *name, u32 type,
+ }
+
+ if (!IS_ERR_OR_NULL(alg) && crypto_is_larval(alg))
+- alg = crypto_larval_wait(alg);
++ alg = crypto_larval_wait(alg, type, mask);
+ else if (alg)
+ ;
+ else if (!(mask & CRYPTO_ALG_TESTED))
+@@ -352,7 +349,7 @@ struct crypto_alg *crypto_alg_mod_lookup(const char *name, u32 type, u32 mask)
+ ok = crypto_probing_notify(CRYPTO_MSG_ALG_REQUEST, larval);
+
+ if (ok == NOTIFY_STOP)
+- alg = crypto_larval_wait(larval);
++ alg = crypto_larval_wait(larval, type, mask);
+ else {
+ crypto_mod_put(larval);
+ alg = ERR_PTR(-ENOENT);
+diff --git a/crypto/bpf_crypto_skcipher.c b/crypto/bpf_crypto_skcipher.c
+index b5e657415770a3..a88798d3e8c872 100644
+--- a/crypto/bpf_crypto_skcipher.c
++++ b/crypto/bpf_crypto_skcipher.c
+@@ -80,3 +80,4 @@ static void __exit bpf_crypto_skcipher_exit(void)
+ module_init(bpf_crypto_skcipher_init);
+ module_exit(bpf_crypto_skcipher_exit);
+ MODULE_LICENSE("GPL");
++MODULE_DESCRIPTION("Symmetric key cipher support for BPF");
+diff --git a/drivers/accel/amdxdna/aie2_smu.c b/drivers/accel/amdxdna/aie2_smu.c
+index 73388443c67678..d303701b0ded4c 100644
+--- a/drivers/accel/amdxdna/aie2_smu.c
++++ b/drivers/accel/amdxdna/aie2_smu.c
+@@ -64,6 +64,7 @@ int npu1_set_dpm(struct amdxdna_dev_hdl *ndev, u32 dpm_level)
+ if (ret) {
+ XDNA_ERR(ndev->xdna, "Set npu clock to %d failed, ret %d\n",
+ ndev->priv->dpm_clk_tbl[dpm_level].npuclk, ret);
++ return ret;
+ }
+ ndev->npuclk_freq = freq;
+
+@@ -72,6 +73,7 @@ int npu1_set_dpm(struct amdxdna_dev_hdl *ndev, u32 dpm_level)
+ if (ret) {
+ XDNA_ERR(ndev->xdna, "Set h clock to %d failed, ret %d\n",
+ ndev->priv->dpm_clk_tbl[dpm_level].hclk, ret);
++ return ret;
+ }
+ ndev->hclk_freq = freq;
+ ndev->dpm_level = dpm_level;
+diff --git a/drivers/acpi/acpi_video.c b/drivers/acpi/acpi_video.c
+index a972831dbd667d..064c7afba740d5 100644
+--- a/drivers/acpi/acpi_video.c
++++ b/drivers/acpi/acpi_video.c
+@@ -648,6 +648,13 @@ acpi_video_device_EDID(struct acpi_video_device *device, void **edid, int length
+
+ obj = buffer.pointer;
+
++ /*
++ * Some buggy implementations incorrectly return the EDID buffer in an ACPI package.
++ * In this case, extract the buffer from the package.
++ */
++ if (obj && obj->type == ACPI_TYPE_PACKAGE && obj->package.count == 1)
++ obj = &obj->package.elements[0];
++
+ if (obj && obj->type == ACPI_TYPE_BUFFER) {
+ *edid = kmemdup(obj->buffer.pointer, obj->buffer.length, GFP_KERNEL);
+ ret = *edid ? obj->buffer.length : -ENOMEM;
+@@ -657,7 +664,7 @@ acpi_video_device_EDID(struct acpi_video_device *device, void **edid, int length
+ ret = -EFAULT;
+ }
+
+- kfree(obj);
++ kfree(buffer.pointer);
+ return ret;
+ }
+
+diff --git a/drivers/acpi/nfit/core.c b/drivers/acpi/nfit/core.c
+index a5d47819b3a4e2..ae035b93da0878 100644
+--- a/drivers/acpi/nfit/core.c
++++ b/drivers/acpi/nfit/core.c
+@@ -485,7 +485,7 @@ int acpi_nfit_ctl(struct nvdimm_bus_descriptor *nd_desc, struct nvdimm *nvdimm,
+ cmd_mask = nd_desc->cmd_mask;
+ if (cmd == ND_CMD_CALL && call_pkg->nd_family) {
+ family = call_pkg->nd_family;
+- if (family > NVDIMM_BUS_FAMILY_MAX ||
++ if (call_pkg->nd_family > NVDIMM_BUS_FAMILY_MAX ||
+ !test_bit(family, &nd_desc->bus_family_mask))
+ return -EINVAL;
+ family = array_index_nospec(family,
+diff --git a/drivers/acpi/platform_profile.c b/drivers/acpi/platform_profile.c
+index ef9444482db198..174a6439a412b9 100644
+--- a/drivers/acpi/platform_profile.c
++++ b/drivers/acpi/platform_profile.c
+@@ -289,14 +289,14 @@ static int _remove_hidden_choices(struct device *dev, void *arg)
+
+ /**
+ * platform_profile_choices_show - Show the available profile choices for legacy sysfs interface
+- * @dev: The device
++ * @kobj: The kobject
+ * @attr: The attribute
+ * @buf: The buffer to write to
+ *
+ * Return: The number of bytes written
+ */
+-static ssize_t platform_profile_choices_show(struct device *dev,
+- struct device_attribute *attr,
++static ssize_t platform_profile_choices_show(struct kobject *kobj,
++ struct kobj_attribute *attr,
+ char *buf)
+ {
+ struct aggregate_choices_data data = {
+@@ -371,14 +371,14 @@ static int _store_and_notify(struct device *dev, void *data)
+
+ /**
+ * platform_profile_show - Show the current profile for legacy sysfs interface
+- * @dev: The device
++ * @kobj: The kobject
+ * @attr: The attribute
+ * @buf: The buffer to write to
+ *
+ * Return: The number of bytes written
+ */
+-static ssize_t platform_profile_show(struct device *dev,
+- struct device_attribute *attr,
++static ssize_t platform_profile_show(struct kobject *kobj,
++ struct kobj_attribute *attr,
+ char *buf)
+ {
+ enum platform_profile_option profile = PLATFORM_PROFILE_LAST;
+@@ -400,15 +400,15 @@ static ssize_t platform_profile_show(struct device *dev,
+
+ /**
+ * platform_profile_store - Set the profile for legacy sysfs interface
+- * @dev: The device
++ * @kobj: The kobject
+ * @attr: The attribute
+ * @buf: The buffer to read from
+ * @count: The number of bytes to read
+ *
+ * Return: The number of bytes read
+ */
+-static ssize_t platform_profile_store(struct device *dev,
+- struct device_attribute *attr,
++static ssize_t platform_profile_store(struct kobject *kobj,
++ struct kobj_attribute *attr,
+ const char *buf, size_t count)
+ {
+ struct aggregate_choices_data data = {
+@@ -442,12 +442,12 @@ static ssize_t platform_profile_store(struct device *dev,
+ return count;
+ }
+
+-static DEVICE_ATTR_RO(platform_profile_choices);
+-static DEVICE_ATTR_RW(platform_profile);
++static struct kobj_attribute attr_platform_profile_choices = __ATTR_RO(platform_profile_choices);
++static struct kobj_attribute attr_platform_profile = __ATTR_RW(platform_profile);
+
+ static struct attribute *platform_profile_attrs[] = {
+- &dev_attr_platform_profile_choices.attr,
+- &dev_attr_platform_profile.attr,
++ &attr_platform_profile_choices.attr,
++ &attr_platform_profile.attr,
+ NULL
+ };
+
+diff --git a/drivers/acpi/processor_idle.c b/drivers/acpi/processor_idle.c
+index 698897b29de244..2df1296ff44d5e 100644
+--- a/drivers/acpi/processor_idle.c
++++ b/drivers/acpi/processor_idle.c
+@@ -268,6 +268,10 @@ static int acpi_processor_get_power_info_fadt(struct acpi_processor *pr)
+ ACPI_CX_DESC_LEN, "ACPI P_LVL3 IOPORT 0x%x",
+ pr->power.states[ACPI_STATE_C3].address);
+
++ if (!pr->power.states[ACPI_STATE_C2].address &&
++ !pr->power.states[ACPI_STATE_C3].address)
++ return -ENODEV;
++
+ return 0;
+ }
+
+diff --git a/drivers/acpi/resource.c b/drivers/acpi/resource.c
+index b4cd14e7fa76cc..14c7bac4100b46 100644
+--- a/drivers/acpi/resource.c
++++ b/drivers/acpi/resource.c
+@@ -440,6 +440,13 @@ static const struct dmi_system_id irq1_level_low_skip_override[] = {
+ DMI_MATCH(DMI_BOARD_NAME, "S5602ZA"),
+ },
+ },
++ {
++ /* Asus Vivobook X1404VAP */
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++ DMI_MATCH(DMI_BOARD_NAME, "X1404VAP"),
++ },
++ },
+ {
+ /* Asus Vivobook X1504VAP */
+ .matches = {
+diff --git a/drivers/acpi/x86/utils.c b/drivers/acpi/x86/utils.c
+index 068c1612660bc0..4ee30c2897a2b9 100644
+--- a/drivers/acpi/x86/utils.c
++++ b/drivers/acpi/x86/utils.c
+@@ -374,7 +374,8 @@ static const struct dmi_system_id acpi_quirk_skip_dmi_ids[] = {
+ DMI_MATCH(DMI_PRODUCT_VERSION, "Blade3-10A-001"),
+ },
+ .driver_data = (void *)(ACPI_QUIRK_SKIP_I2C_CLIENTS |
+- ACPI_QUIRK_SKIP_ACPI_AC_AND_BATTERY),
++ ACPI_QUIRK_SKIP_ACPI_AC_AND_BATTERY |
++ ACPI_QUIRK_SKIP_GPIO_EVENT_HANDLERS),
+ },
+ {
+ /* Medion Lifetab S10346 */
+diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
+index d956735e2a7645..3d730c10f7beaf 100644
+--- a/drivers/ata/libata-core.c
++++ b/drivers/ata/libata-core.c
+@@ -2243,7 +2243,7 @@ static void ata_dev_config_ncq_non_data(struct ata_device *dev)
+
+ if (!ata_log_supported(dev, ATA_LOG_NCQ_NON_DATA)) {
+ ata_dev_warn(dev,
+- "NCQ Send/Recv Log not supported\n");
++ "NCQ Non-Data Log not supported\n");
+ return;
+ }
+ err_mask = ata_read_log_page(dev, ATA_LOG_NCQ_NON_DATA,
+diff --git a/drivers/auxdisplay/Kconfig b/drivers/auxdisplay/Kconfig
+index 8934e6ad5772b4..bedc6133f970aa 100644
+--- a/drivers/auxdisplay/Kconfig
++++ b/drivers/auxdisplay/Kconfig
+@@ -503,6 +503,7 @@ config HT16K33
+ config MAX6959
+ tristate "Maxim MAX6958/6959 7-segment LED controller"
+ depends on I2C
++ select BITREVERSE
+ select REGMAP_I2C
+ select LINEDISP
+ help
+diff --git a/drivers/auxdisplay/panel.c b/drivers/auxdisplay/panel.c
+index a731f28455b45f..6dc8798d01f98c 100644
+--- a/drivers/auxdisplay/panel.c
++++ b/drivers/auxdisplay/panel.c
+@@ -1664,7 +1664,7 @@ static void panel_attach(struct parport *port)
+ if (lcd.enabled)
+ charlcd_unregister(lcd.charlcd);
+ err_unreg_device:
+- kfree(lcd.charlcd);
++ charlcd_free(lcd.charlcd);
+ lcd.charlcd = NULL;
+ parport_unregister_device(pprt);
+ pprt = NULL;
+@@ -1692,7 +1692,7 @@ static void panel_detach(struct parport *port)
+ charlcd_unregister(lcd.charlcd);
+ lcd.initialized = false;
+ kfree(lcd.charlcd->drvdata);
+- kfree(lcd.charlcd);
++ charlcd_free(lcd.charlcd);
+ lcd.charlcd = NULL;
+ }
+
+diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
+index 40e1d8d8a58930..23be2d1b040798 100644
+--- a/drivers/base/power/main.c
++++ b/drivers/base/power/main.c
+@@ -929,6 +929,9 @@ static void device_resume(struct device *dev, pm_message_t state, bool async)
+ if (dev->power.syscore)
+ goto Complete;
+
++ if (!dev->power.is_suspended)
++ goto Complete;
++
+ if (dev->power.direct_complete) {
+ /* Match the pm_runtime_disable() in device_suspend(). */
+ pm_runtime_enable(dev);
+@@ -947,9 +950,6 @@ static void device_resume(struct device *dev, pm_message_t state, bool async)
+ */
+ dev->power.is_prepared = false;
+
+- if (!dev->power.is_suspended)
+- goto Unlock;
+-
+ if (dev->pm_domain) {
+ info = "power domain ";
+ callback = pm_op(&dev->pm_domain->ops, state);
+@@ -989,7 +989,6 @@ static void device_resume(struct device *dev, pm_message_t state, bool async)
+ error = dpm_run_callback(callback, dev, state, info);
+ dev->power.is_suspended = false;
+
+- Unlock:
+ device_unlock(dev);
+ dpm_watchdog_clear(&wd);
+
+@@ -1270,14 +1269,13 @@ static int device_suspend_noirq(struct device *dev, pm_message_t state, bool asy
+ dev->power.is_noirq_suspended = true;
+
+ /*
+- * Skipping the resume of devices that were in use right before the
+- * system suspend (as indicated by their PM-runtime usage counters)
+- * would be suboptimal. Also resume them if doing that is not allowed
+- * to be skipped.
++ * Devices must be resumed unless they are explicitly allowed to be left
++ * in suspend, but even in that case skipping the resume of devices that
++ * were in use right before the system suspend (as indicated by their
++ * runtime PM usage counters and child counters) would be suboptimal.
+ */
+- if (atomic_read(&dev->power.usage_count) > 1 ||
+- !(dev_pm_test_driver_flags(dev, DPM_FLAG_MAY_SKIP_RESUME) &&
+- dev->power.may_skip_resume))
++ if (!(dev_pm_test_driver_flags(dev, DPM_FLAG_MAY_SKIP_RESUME) &&
++ dev->power.may_skip_resume) || !pm_runtime_need_not_resume(dev))
+ dev->power.must_resume = true;
+
+ if (dev->power.must_resume) {
+@@ -1650,6 +1648,7 @@ static int device_suspend(struct device *dev, pm_message_t state, bool async)
+ pm_runtime_disable(dev);
+ if (pm_runtime_status_suspended(dev)) {
+ pm_dev_dbg(dev, state, "direct-complete ");
++ dev->power.is_suspended = true;
+ goto Complete;
+ }
+
+diff --git a/drivers/base/power/runtime.c b/drivers/base/power/runtime.c
+index 2ee45841486bc7..04113adb092b52 100644
+--- a/drivers/base/power/runtime.c
++++ b/drivers/base/power/runtime.c
+@@ -1874,7 +1874,7 @@ void pm_runtime_drop_link(struct device_link *link)
+ pm_request_idle(link->supplier);
+ }
+
+-static bool pm_runtime_need_not_resume(struct device *dev)
++bool pm_runtime_need_not_resume(struct device *dev)
+ {
+ return atomic_read(&dev->power.usage_count) <= 1 &&
+ (atomic_read(&dev->power.child_count) == 0 ||
+diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
+index fdc7a0b2af1097..175566a71bb3f9 100644
+--- a/drivers/block/null_blk/main.c
++++ b/drivers/block/null_blk/main.c
+@@ -559,14 +559,14 @@ static ssize_t nullb_device_badblocks_store(struct config_item *item,
+ goto out;
+ /* enable badblocks */
+ cmpxchg(&t_dev->badblocks.shift, -1, 0);
+- if (buf[0] == '+')
+- ret = badblocks_set(&t_dev->badblocks, start,
+- end - start + 1, 1);
+- else
+- ret = badblocks_clear(&t_dev->badblocks, start,
+- end - start + 1);
+- if (ret == 0)
++ if (buf[0] == '+') {
++ if (badblocks_set(&t_dev->badblocks, start,
++ end - start + 1, 1))
++ ret = count;
++ } else if (badblocks_clear(&t_dev->badblocks, start,
++ end - start + 1)) {
+ ret = count;
++ }
+ out:
+ kfree(orig);
+ return ret;
+@@ -1300,8 +1300,7 @@ static inline blk_status_t null_handle_badblocks(struct nullb_cmd *cmd,
+ sector_t nr_sectors)
+ {
+ struct badblocks *bb = &cmd->nq->dev->badblocks;
+- sector_t first_bad;
+- int bad_sectors;
++ sector_t first_bad, bad_sectors;
+
+ if (badblocks_check(bb, sector, nr_sectors, &first_bad, &bad_sectors))
+ return BLK_STS_IOERR;
+diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
+index ca9a67b5b537ac..b7adfaddc3abb3 100644
+--- a/drivers/block/ublk_drv.c
++++ b/drivers/block/ublk_drv.c
+@@ -1452,17 +1452,27 @@ static void ublk_abort_queue(struct ublk_device *ub, struct ublk_queue *ubq)
+ }
+ }
+
++/* Must be called when queue is frozen */
++static bool ublk_mark_queue_canceling(struct ublk_queue *ubq)
++{
++ bool canceled;
++
++ spin_lock(&ubq->cancel_lock);
++ canceled = ubq->canceling;
++ if (!canceled)
++ ubq->canceling = true;
++ spin_unlock(&ubq->cancel_lock);
++
++ return canceled;
++}
++
+ static bool ublk_abort_requests(struct ublk_device *ub, struct ublk_queue *ubq)
+ {
++ bool was_canceled = ubq->canceling;
+ struct gendisk *disk;
+
+- spin_lock(&ubq->cancel_lock);
+- if (ubq->canceling) {
+- spin_unlock(&ubq->cancel_lock);
++ if (was_canceled)
+ return false;
+- }
+- ubq->canceling = true;
+- spin_unlock(&ubq->cancel_lock);
+
+ spin_lock(&ub->lock);
+ disk = ub->ub_disk;
+@@ -1474,14 +1484,23 @@ static bool ublk_abort_requests(struct ublk_device *ub, struct ublk_queue *ubq)
+ if (!disk)
+ return false;
+
+- /* Now we are serialized with ublk_queue_rq() */
++ /*
++ * Now we are serialized with ublk_queue_rq()
++ *
++	 * Make sure that ubq->canceling is set while the queue is frozen,
++	 * because ublk_queue_rq() has to rely on this flag to avoid touching
++	 * a completed uring_cmd
++ */
+ blk_mq_quiesce_queue(disk->queue);
+- /* abort queue is for making forward progress */
+- ublk_abort_queue(ub, ubq);
++ was_canceled = ublk_mark_queue_canceling(ubq);
++ if (!was_canceled) {
++ /* abort queue is for making forward progress */
++ ublk_abort_queue(ub, ubq);
++ }
+ blk_mq_unquiesce_queue(disk->queue);
+ put_device(disk_to_dev(disk));
+
+- return true;
++ return !was_canceled;
+ }
+
+ static void ublk_cancel_cmd(struct ublk_queue *ubq, struct ublk_io *io,
+diff --git a/drivers/bluetooth/btnxpuart.c b/drivers/bluetooth/btnxpuart.c
+index aa5ec1d444a9d9..6d66668d670a99 100644
+--- a/drivers/bluetooth/btnxpuart.c
++++ b/drivers/bluetooth/btnxpuart.c
+@@ -651,8 +651,10 @@ static int nxp_download_firmware(struct hci_dev *hdev)
+ &nxpdev->tx_state),
+ msecs_to_jiffies(60000));
+
+- release_firmware(nxpdev->fw);
+- memset(nxpdev->fw_name, 0, sizeof(nxpdev->fw_name));
++ if (nxpdev->fw && strlen(nxpdev->fw_name)) {
++ release_firmware(nxpdev->fw);
++ memset(nxpdev->fw_name, 0, sizeof(nxpdev->fw_name));
++ }
+
+ if (err == 0) {
+ bt_dev_err(hdev, "FW Download Timeout. offset: %d",
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index a0fc465458b2f9..699ff21d97675b 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -2477,6 +2477,8 @@ static int btusb_setup_csr(struct hci_dev *hdev)
+ set_bit(HCI_QUIRK_BROKEN_ERR_DATA_REPORTING, &hdev->quirks);
+ set_bit(HCI_QUIRK_BROKEN_FILTER_CLEAR_ALL, &hdev->quirks);
+ set_bit(HCI_QUIRK_NO_SUSPEND_NOTIFIER, &hdev->quirks);
++ set_bit(HCI_QUIRK_BROKEN_READ_VOICE_SETTING, &hdev->quirks);
++ set_bit(HCI_QUIRK_BROKEN_READ_PAGE_SCAN_TYPE, &hdev->quirks);
+
+ /* Clear the reset quirk since this is not an actual
+ * early Bluetooth 1.1 device from CSR.
+diff --git a/drivers/bus/qcom-ssc-block-bus.c b/drivers/bus/qcom-ssc-block-bus.c
+index 85d781a32df4b2..7f5fd4e0940dc1 100644
+--- a/drivers/bus/qcom-ssc-block-bus.c
++++ b/drivers/bus/qcom-ssc-block-bus.c
+@@ -264,18 +264,6 @@ static int qcom_ssc_block_bus_probe(struct platform_device *pdev)
+
+ platform_set_drvdata(pdev, data);
+
+- data->pd_names = qcom_ssc_block_pd_names;
+- data->num_pds = ARRAY_SIZE(qcom_ssc_block_pd_names);
+-
+- /* power domains */
+- ret = qcom_ssc_block_bus_pds_attach(&pdev->dev, data->pds, data->pd_names, data->num_pds);
+- if (ret < 0)
+- return dev_err_probe(&pdev->dev, ret, "error when attaching power domains\n");
+-
+- ret = qcom_ssc_block_bus_pds_enable(data->pds, data->num_pds);
+- if (ret < 0)
+- return dev_err_probe(&pdev->dev, ret, "error when enabling power domains\n");
+-
+ /* low level overrides for when the HW logic doesn't "just work" */
+ res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "mpm_sscaon_config0");
+ data->reg_mpm_sscaon_config0 = devm_ioremap_resource(&pdev->dev, res);
+@@ -343,11 +331,30 @@ static int qcom_ssc_block_bus_probe(struct platform_device *pdev)
+
+ data->ssc_axi_halt = halt_args.args[0];
+
++ /* power domains */
++ data->pd_names = qcom_ssc_block_pd_names;
++ data->num_pds = ARRAY_SIZE(qcom_ssc_block_pd_names);
++
++ ret = qcom_ssc_block_bus_pds_attach(&pdev->dev, data->pds, data->pd_names, data->num_pds);
++ if (ret < 0)
++ return dev_err_probe(&pdev->dev, ret, "error when attaching power domains\n");
++
++ ret = qcom_ssc_block_bus_pds_enable(data->pds, data->num_pds);
++ if (ret < 0) {
++ dev_err_probe(&pdev->dev, ret, "error when enabling power domains\n");
++ goto err_detach_pds_bus;
++ }
++
+ qcom_ssc_block_bus_init(&pdev->dev);
+
+ of_platform_populate(np, NULL, NULL, &pdev->dev);
+
+ return 0;
++
++err_detach_pds_bus:
++ qcom_ssc_block_bus_pds_detach(&pdev->dev, data->pds, data->num_pds);
++
++ return ret;
+ }
+
+ static void qcom_ssc_block_bus_remove(struct platform_device *pdev)
+@@ -356,9 +363,6 @@ static void qcom_ssc_block_bus_remove(struct platform_device *pdev)
+
+ qcom_ssc_block_bus_deinit(&pdev->dev);
+
+- iounmap(data->reg_mpm_sscaon_config0);
+- iounmap(data->reg_mpm_sscaon_config1);
+-
+ qcom_ssc_block_bus_pds_disable(data->pds, data->num_pds);
+ qcom_ssc_block_bus_pds_detach(&pdev->dev, data->pds, data->num_pds);
+ pm_runtime_disable(&pdev->dev);
+diff --git a/drivers/clk/clk-stm32f4.c b/drivers/clk/clk-stm32f4.c
+index f476883bc93ba5..85e23961ec3413 100644
+--- a/drivers/clk/clk-stm32f4.c
++++ b/drivers/clk/clk-stm32f4.c
+@@ -888,7 +888,6 @@ static int __init stm32f4_pll_ssc_parse_dt(struct device_node *np,
+ struct stm32f4_pll_ssc *conf)
+ {
+ int ret;
+- const char *s;
+
+ if (!conf)
+ return -EINVAL;
+@@ -916,7 +915,8 @@ static int __init stm32f4_pll_ssc_parse_dt(struct device_node *np,
+ conf->mod_type = ret;
+
+ pr_debug("%pOF: SSCG settings: mod_freq: %d, mod_depth: %d mod_method: %s [%d]\n",
+- np, conf->mod_freq, conf->mod_depth, s, conf->mod_type);
++ np, conf->mod_freq, conf->mod_depth,
++ stm32f4_ssc_mod_methods[ret], conf->mod_type);
+
+ return 0;
+ }
+diff --git a/drivers/clk/imx/clk-imx8mp-audiomix.c b/drivers/clk/imx/clk-imx8mp-audiomix.c
+index c409fc7e061869..775f62dddb11d8 100644
+--- a/drivers/clk/imx/clk-imx8mp-audiomix.c
++++ b/drivers/clk/imx/clk-imx8mp-audiomix.c
+@@ -180,14 +180,14 @@ static struct clk_imx8mp_audiomix_sel sels[] = {
+ CLK_GATE("asrc", ASRC_IPG),
+ CLK_GATE("pdm", PDM_IPG),
+ CLK_GATE("earc", EARC_IPG),
+- CLK_GATE("ocrama", OCRAMA_IPG),
++ CLK_GATE_PARENT("ocrama", OCRAMA_IPG, "axi"),
+ CLK_GATE("aud2htx", AUD2HTX_IPG),
+ CLK_GATE_PARENT("earc_phy", EARC_PHY, "sai_pll_out_div2"),
+ CLK_GATE("sdma2", SDMA2_ROOT),
+ CLK_GATE("sdma3", SDMA3_ROOT),
+ CLK_GATE("spba2", SPBA2_ROOT),
+- CLK_GATE("dsp", DSP_ROOT),
+- CLK_GATE("dspdbg", DSPDBG_ROOT),
++ CLK_GATE_PARENT("dsp", DSP_ROOT, "axi"),
++ CLK_GATE_PARENT("dspdbg", DSPDBG_ROOT, "axi"),
+ CLK_GATE("edma", EDMA_ROOT),
+ CLK_GATE_PARENT("audpll", AUDPLL_ROOT, "osc_24m"),
+ CLK_GATE("mu2", MU2_ROOT),
+diff --git a/drivers/clk/meson/g12a.c b/drivers/clk/meson/g12a.c
+index cfffd434e998ef..ceabebb1863d6e 100644
+--- a/drivers/clk/meson/g12a.c
++++ b/drivers/clk/meson/g12a.c
+@@ -1137,8 +1137,18 @@ static struct clk_regmap g12a_cpu_clk_div16_en = {
+ .hw.init = &(struct clk_init_data) {
+ .name = "cpu_clk_div16_en",
+ .ops = &clk_regmap_gate_ro_ops,
+- .parent_hws = (const struct clk_hw *[]) {
+- &g12a_cpu_clk.hw
++ .parent_data = &(const struct clk_parent_data) {
++ /*
++ * Note:
++ * G12A and G12B have different cpu clocks (with
++			 * different struct clk_hw). We fall back to the global
++			 * naming string mechanism so this clock picks
++			 * up the appropriate one. The same goes for the other
++			 * clocks using the cpu cluster A clock output that are
++			 * present on both G12 variants.
++ */
++ .name = "cpu_clk",
++ .index = -1,
+ },
+ .num_parents = 1,
+ /*
+@@ -1203,7 +1213,10 @@ static struct clk_regmap g12a_cpu_clk_apb_div = {
+ .hw.init = &(struct clk_init_data){
+ .name = "cpu_clk_apb_div",
+ .ops = &clk_regmap_divider_ro_ops,
+- .parent_hws = (const struct clk_hw *[]) { &g12a_cpu_clk.hw },
++ .parent_data = &(const struct clk_parent_data) {
++ .name = "cpu_clk",
++ .index = -1,
++ },
+ .num_parents = 1,
+ },
+ };
+@@ -1237,7 +1250,10 @@ static struct clk_regmap g12a_cpu_clk_atb_div = {
+ .hw.init = &(struct clk_init_data){
+ .name = "cpu_clk_atb_div",
+ .ops = &clk_regmap_divider_ro_ops,
+- .parent_hws = (const struct clk_hw *[]) { &g12a_cpu_clk.hw },
++ .parent_data = &(const struct clk_parent_data) {
++ .name = "cpu_clk",
++ .index = -1,
++ },
+ .num_parents = 1,
+ },
+ };
+@@ -1271,7 +1287,10 @@ static struct clk_regmap g12a_cpu_clk_axi_div = {
+ .hw.init = &(struct clk_init_data){
+ .name = "cpu_clk_axi_div",
+ .ops = &clk_regmap_divider_ro_ops,
+- .parent_hws = (const struct clk_hw *[]) { &g12a_cpu_clk.hw },
++ .parent_data = &(const struct clk_parent_data) {
++ .name = "cpu_clk",
++ .index = -1,
++ },
+ .num_parents = 1,
+ },
+ };
+@@ -1306,13 +1325,6 @@ static struct clk_regmap g12a_cpu_clk_trace_div = {
+ .name = "cpu_clk_trace_div",
+ .ops = &clk_regmap_divider_ro_ops,
+ .parent_data = &(const struct clk_parent_data) {
+- /*
+- * Note:
+- * G12A and G12B have different cpu_clks (with
+- * different struct clk_hw). We fallback to the global
+- * naming string mechanism so cpu_clk_trace_div picks
+- * up the appropriate one.
+- */
+ .name = "cpu_clk",
+ .index = -1,
+ },
+@@ -4311,7 +4323,7 @@ static MESON_GATE(g12a_spicc_1, HHI_GCLK_MPEG0, 14);
+ static MESON_GATE(g12a_hiu_reg, HHI_GCLK_MPEG0, 19);
+ static MESON_GATE(g12a_mipi_dsi_phy, HHI_GCLK_MPEG0, 20);
+ static MESON_GATE(g12a_assist_misc, HHI_GCLK_MPEG0, 23);
+-static MESON_GATE(g12a_emmc_a, HHI_GCLK_MPEG0, 4);
++static MESON_GATE(g12a_emmc_a, HHI_GCLK_MPEG0, 24);
+ static MESON_GATE(g12a_emmc_b, HHI_GCLK_MPEG0, 25);
+ static MESON_GATE(g12a_emmc_c, HHI_GCLK_MPEG0, 26);
+ static MESON_GATE(g12a_audio_codec, HHI_GCLK_MPEG0, 28);
+diff --git a/drivers/clk/meson/gxbb.c b/drivers/clk/meson/gxbb.c
+index 8575b848538598..3abb44a2532b9e 100644
+--- a/drivers/clk/meson/gxbb.c
++++ b/drivers/clk/meson/gxbb.c
+@@ -1266,14 +1266,13 @@ static struct clk_regmap gxbb_cts_i958 = {
+ },
+ };
+
++/*
++ * This table skips a clock named 'cts_slow_oscin' in the documentation.
++ * This clock does not exist yet in this controller or the AO one.
++ */
++static u32 gxbb_32k_clk_parents_val_table[] = { 0, 2, 3 };
+ static const struct clk_parent_data gxbb_32k_clk_parent_data[] = {
+ { .fw_name = "xtal", },
+- /*
+- * FIXME: This clock is provided by the ao clock controller but the
+- * clock is not yet part of the binding of this controller, so string
+- * name must be use to set this parent.
+- */
+- { .name = "cts_slow_oscin", .index = -1 },
+ { .hw = &gxbb_fclk_div3.hw },
+ { .hw = &gxbb_fclk_div5.hw },
+ };
+@@ -1283,6 +1282,7 @@ static struct clk_regmap gxbb_32k_clk_sel = {
+ .offset = HHI_32K_CLK_CNTL,
+ .mask = 0x3,
+ .shift = 16,
++ .table = gxbb_32k_clk_parents_val_table,
+ },
+ .hw.init = &(struct clk_init_data){
+ .name = "32k_clk_sel",
+@@ -1306,7 +1306,7 @@ static struct clk_regmap gxbb_32k_clk_div = {
+ &gxbb_32k_clk_sel.hw
+ },
+ .num_parents = 1,
+- .flags = CLK_SET_RATE_PARENT | CLK_DIVIDER_ROUND_CLOSEST,
++ .flags = CLK_SET_RATE_PARENT,
+ },
+ };
+
+diff --git a/drivers/clk/mmp/clk-pxa1908-apmu.c b/drivers/clk/mmp/clk-pxa1908-apmu.c
+index 8cfb1258202f6f..d3a070687fc5b9 100644
+--- a/drivers/clk/mmp/clk-pxa1908-apmu.c
++++ b/drivers/clk/mmp/clk-pxa1908-apmu.c
+@@ -87,8 +87,8 @@ static int pxa1908_apmu_probe(struct platform_device *pdev)
+ struct pxa1908_clk_unit *pxa_unit;
+
+ pxa_unit = devm_kzalloc(&pdev->dev, sizeof(*pxa_unit), GFP_KERNEL);
+- if (IS_ERR(pxa_unit))
+- return PTR_ERR(pxa_unit);
++ if (!pxa_unit)
++ return -ENOMEM;
+
+ pxa_unit->base = devm_platform_ioremap_resource(pdev, 0);
+ if (IS_ERR(pxa_unit->base))
+diff --git a/drivers/clk/qcom/gcc-ipq5424.c b/drivers/clk/qcom/gcc-ipq5424.c
+index d5b218b76e2912..3d42f3d85c7a92 100644
+--- a/drivers/clk/qcom/gcc-ipq5424.c
++++ b/drivers/clk/qcom/gcc-ipq5424.c
+@@ -592,13 +592,19 @@ static struct clk_rcg2 gcc_qupv3_spi1_clk_src = {
+ };
+
+ static const struct freq_tbl ftbl_gcc_qupv3_uart0_clk_src[] = {
+- F(960000, P_XO, 10, 2, 5),
+- F(4800000, P_XO, 5, 0, 0),
+- F(9600000, P_XO, 2, 4, 5),
+- F(16000000, P_GPLL0_OUT_MAIN, 10, 1, 5),
++ F(3686400, P_GCC_GPLL0_OUT_MAIN_DIV_CLK_SRC, 1, 144, 15625),
++ F(7372800, P_GCC_GPLL0_OUT_MAIN_DIV_CLK_SRC, 1, 288, 15625),
++ F(14745600, P_GCC_GPLL0_OUT_MAIN_DIV_CLK_SRC, 1, 576, 15625),
+ F(24000000, P_XO, 1, 0, 0),
+ F(25000000, P_GPLL0_OUT_MAIN, 16, 1, 2),
+- F(50000000, P_GPLL0_OUT_MAIN, 16, 0, 0),
++ F(32000000, P_GPLL0_OUT_MAIN, 1, 1, 25),
++ F(40000000, P_GPLL0_OUT_MAIN, 1, 1, 20),
++ F(46400000, P_GPLL0_OUT_MAIN, 1, 29, 500),
++ F(48000000, P_GPLL0_OUT_MAIN, 1, 3, 50),
++ F(51200000, P_GPLL0_OUT_MAIN, 1, 8, 125),
++ F(56000000, P_GPLL0_OUT_MAIN, 1, 7, 100),
++ F(58982400, P_GPLL0_OUT_MAIN, 1, 1152, 15625),
++ F(60000000, P_GPLL0_OUT_MAIN, 1, 3, 40),
+ F(64000000, P_GPLL0_OUT_MAIN, 12.5, 0, 0),
+ { }
+ };
+@@ -634,11 +640,11 @@ static struct clk_rcg2 gcc_qupv3_uart1_clk_src = {
+ static const struct freq_tbl ftbl_gcc_sdcc1_apps_clk_src[] = {
+ F(144000, P_XO, 16, 12, 125),
+ F(400000, P_XO, 12, 1, 5),
+- F(24000000, P_XO, 1, 0, 0),
+- F(48000000, P_GPLL2_OUT_MAIN, 12, 1, 2),
+- F(96000000, P_GPLL2_OUT_MAIN, 6, 1, 2),
++ F(24000000, P_GPLL2_OUT_MAIN, 12, 1, 2),
++ F(48000000, P_GPLL2_OUT_MAIN, 12, 0, 0),
++ F(96000000, P_GPLL2_OUT_MAIN, 6, 0, 0),
+ F(177777778, P_GPLL0_OUT_MAIN, 4.5, 0, 0),
+- F(192000000, P_GPLL2_OUT_MAIN, 6, 0, 0),
++ F(192000000, P_GPLL2_OUT_MAIN, 3, 0, 0),
+ F(200000000, P_GPLL0_OUT_MAIN, 4, 0, 0),
+ { }
+ };
+diff --git a/drivers/clk/qcom/gcc-msm8953.c b/drivers/clk/qcom/gcc-msm8953.c
+index 855a61966f3ef5..8f29ecc74c50bf 100644
+--- a/drivers/clk/qcom/gcc-msm8953.c
++++ b/drivers/clk/qcom/gcc-msm8953.c
+@@ -3770,7 +3770,7 @@ static struct clk_branch gcc_venus0_axi_clk = {
+
+ static struct clk_branch gcc_venus0_core0_vcodec0_clk = {
+ .halt_reg = 0x4c02c,
+- .halt_check = BRANCH_HALT,
++ .halt_check = BRANCH_HALT_SKIP,
+ .clkr = {
+ .enable_reg = 0x4c02c,
+ .enable_mask = BIT(0),
+diff --git a/drivers/clk/qcom/gcc-sm8650.c b/drivers/clk/qcom/gcc-sm8650.c
+index 9dd5c48f33bed5..fa1672c4e7d814 100644
+--- a/drivers/clk/qcom/gcc-sm8650.c
++++ b/drivers/clk/qcom/gcc-sm8650.c
+@@ -3497,7 +3497,7 @@ static struct gdsc usb30_prim_gdsc = {
+ .pd = {
+ .name = "usb30_prim_gdsc",
+ },
+- .pwrsts = PWRSTS_OFF_ON,
++ .pwrsts = PWRSTS_RET_ON,
+ .flags = POLL_CFG_GDSCR | RETAIN_FF_ENABLE,
+ };
+
+@@ -3506,7 +3506,7 @@ static struct gdsc usb3_phy_gdsc = {
+ .pd = {
+ .name = "usb3_phy_gdsc",
+ },
+- .pwrsts = PWRSTS_OFF_ON,
++ .pwrsts = PWRSTS_RET_ON,
+ .flags = POLL_CFG_GDSCR | RETAIN_FF_ENABLE,
+ };
+
+diff --git a/drivers/clk/qcom/gcc-x1e80100.c b/drivers/clk/qcom/gcc-x1e80100.c
+index 7288af845434d8..009f39139b6440 100644
+--- a/drivers/clk/qcom/gcc-x1e80100.c
++++ b/drivers/clk/qcom/gcc-x1e80100.c
+@@ -2564,19 +2564,6 @@ static struct clk_branch gcc_disp_hf_axi_clk = {
+ },
+ };
+
+-static struct clk_branch gcc_disp_xo_clk = {
+- .halt_reg = 0x27018,
+- .halt_check = BRANCH_HALT,
+- .clkr = {
+- .enable_reg = 0x27018,
+- .enable_mask = BIT(0),
+- .hw.init = &(const struct clk_init_data) {
+- .name = "gcc_disp_xo_clk",
+- .ops = &clk_branch2_ops,
+- },
+- },
+-};
+-
+ static struct clk_branch gcc_gp1_clk = {
+ .halt_reg = 0x64000,
+ .halt_check = BRANCH_HALT,
+@@ -2631,21 +2618,6 @@ static struct clk_branch gcc_gp3_clk = {
+ },
+ };
+
+-static struct clk_branch gcc_gpu_cfg_ahb_clk = {
+- .halt_reg = 0x71004,
+- .halt_check = BRANCH_HALT_VOTED,
+- .hwcg_reg = 0x71004,
+- .hwcg_bit = 1,
+- .clkr = {
+- .enable_reg = 0x71004,
+- .enable_mask = BIT(0),
+- .hw.init = &(const struct clk_init_data) {
+- .name = "gcc_gpu_cfg_ahb_clk",
+- .ops = &clk_branch2_ops,
+- },
+- },
+-};
+-
+ static struct clk_branch gcc_gpu_gpll0_cph_clk_src = {
+ .halt_check = BRANCH_HALT_DELAY,
+ .clkr = {
+@@ -6268,7 +6240,6 @@ static struct clk_regmap *gcc_x1e80100_clocks[] = {
+ [GCC_CNOC_PCIE_TUNNEL_CLK] = &gcc_cnoc_pcie_tunnel_clk.clkr,
+ [GCC_DDRSS_GPU_AXI_CLK] = &gcc_ddrss_gpu_axi_clk.clkr,
+ [GCC_DISP_HF_AXI_CLK] = &gcc_disp_hf_axi_clk.clkr,
+- [GCC_DISP_XO_CLK] = &gcc_disp_xo_clk.clkr,
+ [GCC_GP1_CLK] = &gcc_gp1_clk.clkr,
+ [GCC_GP1_CLK_SRC] = &gcc_gp1_clk_src.clkr,
+ [GCC_GP2_CLK] = &gcc_gp2_clk.clkr,
+@@ -6281,7 +6252,6 @@ static struct clk_regmap *gcc_x1e80100_clocks[] = {
+ [GCC_GPLL7] = &gcc_gpll7.clkr,
+ [GCC_GPLL8] = &gcc_gpll8.clkr,
+ [GCC_GPLL9] = &gcc_gpll9.clkr,
+- [GCC_GPU_CFG_AHB_CLK] = &gcc_gpu_cfg_ahb_clk.clkr,
+ [GCC_GPU_GPLL0_CPH_CLK_SRC] = &gcc_gpu_gpll0_cph_clk_src.clkr,
+ [GCC_GPU_GPLL0_DIV_CPH_CLK_SRC] = &gcc_gpu_gpll0_div_cph_clk_src.clkr,
+ [GCC_GPU_MEMNOC_GFX_CLK] = &gcc_gpu_memnoc_gfx_clk.clkr,
+diff --git a/drivers/clk/qcom/mmcc-sdm660.c b/drivers/clk/qcom/mmcc-sdm660.c
+index 98ba5b4518fb3b..b9f02d91004e8b 100644
+--- a/drivers/clk/qcom/mmcc-sdm660.c
++++ b/drivers/clk/qcom/mmcc-sdm660.c
+@@ -2544,7 +2544,7 @@ static struct clk_branch video_core_clk = {
+
+ static struct clk_branch video_subcore0_clk = {
+ .halt_reg = 0x1048,
+- .halt_check = BRANCH_HALT,
++ .halt_check = BRANCH_HALT_SKIP,
+ .clkr = {
+ .enable_reg = 0x1048,
+ .enable_mask = BIT(0),
+diff --git a/drivers/clk/renesas/r9a08g045-cpg.c b/drivers/clk/renesas/r9a08g045-cpg.c
+index 0e7e3bf05b52d1..cb63d397429f6d 100644
+--- a/drivers/clk/renesas/r9a08g045-cpg.c
++++ b/drivers/clk/renesas/r9a08g045-cpg.c
+@@ -51,7 +51,7 @@
+ #define G3S_SEL_SDHI2 SEL_PLL_PACK(G3S_CPG_SDHI_DSEL, 8, 2)
+
+ /* PLL 1/4/6 configuration registers macro. */
+-#define G3S_PLL146_CONF(clk1, clk2) ((clk1) << 22 | (clk2) << 12)
++#define G3S_PLL146_CONF(clk1, clk2, setting) ((clk1) << 22 | (clk2) << 12 | (setting))
+
+ #define DEF_G3S_MUX(_name, _id, _conf, _parent_names, _mux_flags, _clk_flags) \
+ DEF_TYPE(_name, _id, CLK_TYPE_MUX, .conf = (_conf), \
+@@ -134,7 +134,8 @@ static const struct cpg_core_clk r9a08g045_core_clks[] __initconst = {
+
+ /* Internal Core Clocks */
+ DEF_FIXED(".osc_div1000", CLK_OSC_DIV1000, CLK_EXTAL, 1, 1000),
+- DEF_G3S_PLL(".pll1", CLK_PLL1, CLK_EXTAL, G3S_PLL146_CONF(0x4, 0x8)),
++ DEF_G3S_PLL(".pll1", CLK_PLL1, CLK_EXTAL, G3S_PLL146_CONF(0x4, 0x8, 0x100),
++ 1100000000UL),
+ DEF_FIXED(".pll2", CLK_PLL2, CLK_EXTAL, 200, 3),
+ DEF_FIXED(".pll3", CLK_PLL3, CLK_EXTAL, 200, 3),
+ DEF_FIXED(".pll4", CLK_PLL4, CLK_EXTAL, 100, 3),
+diff --git a/drivers/clk/renesas/rzg2l-cpg.c b/drivers/clk/renesas/rzg2l-cpg.c
+index ddf722ca79eb0f..4bd8862dc82be8 100644
+--- a/drivers/clk/renesas/rzg2l-cpg.c
++++ b/drivers/clk/renesas/rzg2l-cpg.c
+@@ -51,6 +51,7 @@
+ #define RZG3S_DIV_M GENMASK(25, 22)
+ #define RZG3S_DIV_NI GENMASK(21, 13)
+ #define RZG3S_DIV_NF GENMASK(12, 1)
++#define RZG3S_SEL_PLL BIT(0)
+
+ #define CLK_ON_R(reg) (reg)
+ #define CLK_MON_R(reg) (0x180 + (reg))
+@@ -60,6 +61,7 @@
+ #define GET_REG_OFFSET(val) ((val >> 20) & 0xfff)
+ #define GET_REG_SAMPLL_CLK1(val) ((val >> 22) & 0xfff)
+ #define GET_REG_SAMPLL_CLK2(val) ((val >> 12) & 0xfff)
++#define GET_REG_SAMPLL_SETTING(val) ((val) & 0xfff)
+
+ #define CPG_WEN_BIT BIT(16)
+
+@@ -943,6 +945,7 @@ rzg2l_cpg_sipll5_register(const struct cpg_core_clk *core,
+
+ struct pll_clk {
+ struct clk_hw hw;
++ unsigned long default_rate;
+ unsigned int conf;
+ unsigned int type;
+ void __iomem *base;
+@@ -980,12 +983,19 @@ static unsigned long rzg3s_cpg_pll_clk_recalc_rate(struct clk_hw *hw,
+ {
+ struct pll_clk *pll_clk = to_pll(hw);
+ struct rzg2l_cpg_priv *priv = pll_clk->priv;
+- u32 nir, nfr, mr, pr, val;
++ u32 nir, nfr, mr, pr, val, setting;
+ u64 rate;
+
+ if (pll_clk->type != CLK_TYPE_G3S_PLL)
+ return parent_rate;
+
++ setting = GET_REG_SAMPLL_SETTING(pll_clk->conf);
++ if (setting) {
++ val = readl(priv->base + setting);
++ if (val & RZG3S_SEL_PLL)
++ return pll_clk->default_rate;
++ }
++
+ val = readl(priv->base + GET_REG_SAMPLL_CLK1(pll_clk->conf));
+
+ pr = 1 << FIELD_GET(RZG3S_DIV_P, val);
+@@ -1038,6 +1048,7 @@ rzg2l_cpg_pll_clk_register(const struct cpg_core_clk *core,
+ pll_clk->base = priv->base;
+ pll_clk->priv = priv;
+ pll_clk->type = core->type;
++ pll_clk->default_rate = core->default_rate;
+
+ ret = devm_clk_hw_register(dev, &pll_clk->hw);
+ if (ret)
+diff --git a/drivers/clk/renesas/rzg2l-cpg.h b/drivers/clk/renesas/rzg2l-cpg.h
+index 881a89b5a71001..b74c94a16986ef 100644
+--- a/drivers/clk/renesas/rzg2l-cpg.h
++++ b/drivers/clk/renesas/rzg2l-cpg.h
+@@ -102,7 +102,10 @@ struct cpg_core_clk {
+ const struct clk_div_table *dtable;
+ const u32 *mtable;
+ const unsigned long invalid_rate;
+- const unsigned long max_rate;
++ union {
++ const unsigned long max_rate;
++ const unsigned long default_rate;
++ };
+ const char * const *parent_names;
+ notifier_fn_t notifier;
+ u32 flag;
+@@ -144,8 +147,9 @@ enum clk_types {
+ DEF_TYPE(_name, _id, _type, .parent = _parent)
+ #define DEF_SAMPLL(_name, _id, _parent, _conf) \
+ DEF_TYPE(_name, _id, CLK_TYPE_SAM_PLL, .parent = _parent, .conf = _conf)
+-#define DEF_G3S_PLL(_name, _id, _parent, _conf) \
+- DEF_TYPE(_name, _id, CLK_TYPE_G3S_PLL, .parent = _parent, .conf = _conf)
++#define DEF_G3S_PLL(_name, _id, _parent, _conf, _default_rate) \
++ DEF_TYPE(_name, _id, CLK_TYPE_G3S_PLL, .parent = _parent, .conf = _conf, \
++ .default_rate = _default_rate)
+ #define DEF_INPUT(_name, _id) \
+ DEF_TYPE(_name, _id, CLK_TYPE_IN)
+ #define DEF_FIXED(_name, _id, _parent, _mult, _div) \
+diff --git a/drivers/clk/rockchip/clk-rk3328.c b/drivers/clk/rockchip/clk-rk3328.c
+index 3bb87b27b662da..cf60fcf2fa5cde 100644
+--- a/drivers/clk/rockchip/clk-rk3328.c
++++ b/drivers/clk/rockchip/clk-rk3328.c
+@@ -201,7 +201,7 @@ PNAME(mux_aclk_peri_pre_p) = { "cpll_peri",
+ "gpll_peri",
+ "hdmiphy_peri" };
+ PNAME(mux_ref_usb3otg_src_p) = { "xin24m",
+- "clk_usb3otg_ref" };
++ "clk_ref_usb3otg_src" };
+ PNAME(mux_xin24m_32k_p) = { "xin24m",
+ "clk_rtc32k" };
+ PNAME(mux_mac2io_src_p) = { "clk_mac2io_src",
+diff --git a/drivers/clk/samsung/clk.c b/drivers/clk/samsung/clk.c
+index 283c523763e6b6..8d440cf56bd459 100644
+--- a/drivers/clk/samsung/clk.c
++++ b/drivers/clk/samsung/clk.c
+@@ -74,12 +74,12 @@ struct samsung_clk_provider * __init samsung_clk_init(struct device *dev,
+ if (!ctx)
+ panic("could not allocate clock provider context.\n");
+
++ ctx->clk_data.num = nr_clks;
+ for (i = 0; i < nr_clks; ++i)
+ ctx->clk_data.hws[i] = ERR_PTR(-ENOENT);
+
+ ctx->dev = dev;
+ ctx->reg_base = base;
+- ctx->clk_data.num = nr_clks;
+ spin_lock_init(&ctx->lock);
+
+ return ctx;
+diff --git a/drivers/cpufreq/Kconfig.arm b/drivers/cpufreq/Kconfig.arm
+index 9e46960f6a862b..4f9cb943d945c2 100644
+--- a/drivers/cpufreq/Kconfig.arm
++++ b/drivers/cpufreq/Kconfig.arm
+@@ -254,7 +254,7 @@ config ARM_TEGRA186_CPUFREQ
+
+ config ARM_TEGRA194_CPUFREQ
+ tristate "Tegra194 CPUFreq support"
+- depends on ARCH_TEGRA_194_SOC || (64BIT && COMPILE_TEST)
++ depends on ARCH_TEGRA_194_SOC || ARCH_TEGRA_234_SOC || (64BIT && COMPILE_TEST)
+ depends on TEGRA_BPMP
+ default y
+ help
+diff --git a/drivers/cpufreq/amd-pstate-trace.h b/drivers/cpufreq/amd-pstate-trace.h
+index 8d692415d90505..f457d4af2c62e5 100644
+--- a/drivers/cpufreq/amd-pstate-trace.h
++++ b/drivers/cpufreq/amd-pstate-trace.h
+@@ -24,9 +24,9 @@
+
+ TRACE_EVENT(amd_pstate_perf,
+
+- TP_PROTO(unsigned long min_perf,
+- unsigned long target_perf,
+- unsigned long capacity,
++ TP_PROTO(u8 min_perf,
++ u8 target_perf,
++ u8 capacity,
+ u64 freq,
+ u64 mperf,
+ u64 aperf,
+@@ -47,9 +47,9 @@ TRACE_EVENT(amd_pstate_perf,
+ ),
+
+ TP_STRUCT__entry(
+- __field(unsigned long, min_perf)
+- __field(unsigned long, target_perf)
+- __field(unsigned long, capacity)
++ __field(u8, min_perf)
++ __field(u8, target_perf)
++ __field(u8, capacity)
+ __field(unsigned long long, freq)
+ __field(unsigned long long, mperf)
+ __field(unsigned long long, aperf)
+@@ -70,10 +70,10 @@ TRACE_EVENT(amd_pstate_perf,
+ __entry->fast_switch = fast_switch;
+ ),
+
+- TP_printk("amd_min_perf=%lu amd_des_perf=%lu amd_max_perf=%lu freq=%llu mperf=%llu aperf=%llu tsc=%llu cpu_id=%u fast_switch=%s",
+- (unsigned long)__entry->min_perf,
+- (unsigned long)__entry->target_perf,
+- (unsigned long)__entry->capacity,
++ TP_printk("amd_min_perf=%hhu amd_des_perf=%hhu amd_max_perf=%hhu freq=%llu mperf=%llu aperf=%llu tsc=%llu cpu_id=%u fast_switch=%s",
++ (u8)__entry->min_perf,
++ (u8)__entry->target_perf,
++ (u8)__entry->capacity,
+ (unsigned long long)__entry->freq,
+ (unsigned long long)__entry->mperf,
+ (unsigned long long)__entry->aperf,
+@@ -86,10 +86,10 @@ TRACE_EVENT(amd_pstate_perf,
+ TRACE_EVENT(amd_pstate_epp_perf,
+
+ TP_PROTO(unsigned int cpu_id,
+- unsigned int highest_perf,
+- unsigned int epp,
+- unsigned int min_perf,
+- unsigned int max_perf,
++ u8 highest_perf,
++ u8 epp,
++ u8 min_perf,
++ u8 max_perf,
+ bool boost
+ ),
+
+@@ -102,10 +102,10 @@ TRACE_EVENT(amd_pstate_epp_perf,
+
+ TP_STRUCT__entry(
+ __field(unsigned int, cpu_id)
+- __field(unsigned int, highest_perf)
+- __field(unsigned int, epp)
+- __field(unsigned int, min_perf)
+- __field(unsigned int, max_perf)
++ __field(u8, highest_perf)
++ __field(u8, epp)
++ __field(u8, min_perf)
++ __field(u8, max_perf)
+ __field(bool, boost)
+ ),
+
+@@ -118,12 +118,12 @@ TRACE_EVENT(amd_pstate_epp_perf,
+ __entry->boost = boost;
+ ),
+
+- TP_printk("cpu%u: [%u<->%u]/%u, epp=%u, boost=%u",
++ TP_printk("cpu%u: [%hhu<->%hhu]/%hhu, epp=%hhu, boost=%u",
+ (unsigned int)__entry->cpu_id,
+- (unsigned int)__entry->min_perf,
+- (unsigned int)__entry->max_perf,
+- (unsigned int)__entry->highest_perf,
+- (unsigned int)__entry->epp,
++ (u8)__entry->min_perf,
++ (u8)__entry->max_perf,
++ (u8)__entry->highest_perf,
++ (u8)__entry->epp,
+ (bool)__entry->boost
+ )
+ );
+diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
+index 313550fa62d41e..bd63837eabb4ef 100644
+--- a/drivers/cpufreq/amd-pstate.c
++++ b/drivers/cpufreq/amd-pstate.c
+@@ -186,7 +186,7 @@ static inline int get_mode_idx_from_str(const char *str, size_t size)
+ static DEFINE_MUTEX(amd_pstate_limits_lock);
+ static DEFINE_MUTEX(amd_pstate_driver_lock);
+
+-static s16 msr_get_epp(struct amd_cpudata *cpudata)
++static u8 msr_get_epp(struct amd_cpudata *cpudata)
+ {
+ u64 value;
+ int ret;
+@@ -207,7 +207,7 @@ static inline s16 amd_pstate_get_epp(struct amd_cpudata *cpudata)
+ return static_call(amd_pstate_get_epp)(cpudata);
+ }
+
+-static s16 shmem_get_epp(struct amd_cpudata *cpudata)
++static u8 shmem_get_epp(struct amd_cpudata *cpudata)
+ {
+ u64 epp;
+ int ret;
+@@ -218,11 +218,11 @@ static s16 shmem_get_epp(struct amd_cpudata *cpudata)
+ return ret;
+ }
+
+- return (s16)(epp & 0xff);
++ return FIELD_GET(AMD_CPPC_EPP_PERF_MASK, epp);
+ }
+
+-static int msr_update_perf(struct amd_cpudata *cpudata, u32 min_perf,
+- u32 des_perf, u32 max_perf, u32 epp, bool fast_switch)
++static int msr_update_perf(struct amd_cpudata *cpudata, u8 min_perf,
++ u8 des_perf, u8 max_perf, u8 epp, bool fast_switch)
+ {
+ u64 value, prev;
+
+@@ -257,15 +257,15 @@ static int msr_update_perf(struct amd_cpudata *cpudata, u32 min_perf,
+ DEFINE_STATIC_CALL(amd_pstate_update_perf, msr_update_perf);
+
+ static inline int amd_pstate_update_perf(struct amd_cpudata *cpudata,
+- u32 min_perf, u32 des_perf,
+- u32 max_perf, u32 epp,
++ u8 min_perf, u8 des_perf,
++ u8 max_perf, u8 epp,
+ bool fast_switch)
+ {
+ return static_call(amd_pstate_update_perf)(cpudata, min_perf, des_perf,
+ max_perf, epp, fast_switch);
+ }
+
+-static int msr_set_epp(struct amd_cpudata *cpudata, u32 epp)
++static int msr_set_epp(struct amd_cpudata *cpudata, u8 epp)
+ {
+ u64 value, prev;
+ int ret;
+@@ -292,12 +292,12 @@ static int msr_set_epp(struct amd_cpudata *cpudata, u32 epp)
+
+ DEFINE_STATIC_CALL(amd_pstate_set_epp, msr_set_epp);
+
+-static inline int amd_pstate_set_epp(struct amd_cpudata *cpudata, u32 epp)
++static inline int amd_pstate_set_epp(struct amd_cpudata *cpudata, u8 epp)
+ {
+ return static_call(amd_pstate_set_epp)(cpudata, epp);
+ }
+
+-static int shmem_set_epp(struct amd_cpudata *cpudata, u32 epp)
++static int shmem_set_epp(struct amd_cpudata *cpudata, u8 epp)
+ {
+ int ret;
+ struct cppc_perf_ctrls perf_ctrls;
+@@ -320,7 +320,7 @@ static int amd_pstate_set_energy_pref_index(struct cpufreq_policy *policy,
+ int pref_index)
+ {
+ struct amd_cpudata *cpudata = policy->driver_data;
+- int epp;
++ u8 epp;
+
+ if (!pref_index)
+ epp = cpudata->epp_default;
+@@ -479,8 +479,8 @@ static inline int amd_pstate_init_perf(struct amd_cpudata *cpudata)
+ return static_call(amd_pstate_init_perf)(cpudata);
+ }
+
+-static int shmem_update_perf(struct amd_cpudata *cpudata, u32 min_perf,
+- u32 des_perf, u32 max_perf, u32 epp, bool fast_switch)
++static int shmem_update_perf(struct amd_cpudata *cpudata, u8 min_perf,
++ u8 des_perf, u8 max_perf, u8 epp, bool fast_switch)
+ {
+ struct cppc_perf_ctrls perf_ctrls;
+
+@@ -531,14 +531,17 @@ static inline bool amd_pstate_sample(struct amd_cpudata *cpudata)
+ return true;
+ }
+
+-static void amd_pstate_update(struct amd_cpudata *cpudata, u32 min_perf,
+- u32 des_perf, u32 max_perf, bool fast_switch, int gov_flags)
++static void amd_pstate_update(struct amd_cpudata *cpudata, u8 min_perf,
++ u8 des_perf, u8 max_perf, bool fast_switch, int gov_flags)
+ {
+ unsigned long max_freq;
+ struct cpufreq_policy *policy = cpufreq_cpu_get(cpudata->cpu);
+- u32 nominal_perf = READ_ONCE(cpudata->nominal_perf);
++ u8 nominal_perf = READ_ONCE(cpudata->nominal_perf);
+
+- des_perf = clamp_t(unsigned long, des_perf, min_perf, max_perf);
++ if (!policy)
++ return;
++
++ des_perf = clamp_t(u8, des_perf, min_perf, max_perf);
+
+ max_freq = READ_ONCE(cpudata->max_limit_freq);
+ policy->cur = div_u64(des_perf * max_freq, max_perf);
+@@ -550,7 +553,7 @@ static void amd_pstate_update(struct amd_cpudata *cpudata, u32 min_perf,
+
+ /* limit the max perf when core performance boost feature is disabled */
+ if (!cpudata->boost_supported)
+- max_perf = min_t(unsigned long, nominal_perf, max_perf);
++ max_perf = min_t(u8, nominal_perf, max_perf);
+
+ if (trace_amd_pstate_perf_enabled() && amd_pstate_sample(cpudata)) {
+ trace_amd_pstate_perf(min_perf, des_perf, max_perf, cpudata->freq,
+@@ -591,7 +594,8 @@ static int amd_pstate_verify(struct cpufreq_policy_data *policy_data)
+
+ static int amd_pstate_update_min_max_limit(struct cpufreq_policy *policy)
+ {
+- u32 max_limit_perf, min_limit_perf, max_perf, max_freq;
++ u8 max_limit_perf, min_limit_perf, max_perf;
++ u32 max_freq;
+ struct amd_cpudata *cpudata = policy->driver_data;
+
+ max_perf = READ_ONCE(cpudata->highest_perf);
+@@ -615,7 +619,7 @@ static int amd_pstate_update_freq(struct cpufreq_policy *policy,
+ {
+ struct cpufreq_freqs freqs;
+ struct amd_cpudata *cpudata = policy->driver_data;
+- unsigned long max_perf, min_perf, des_perf, cap_perf;
++ u8 des_perf, cap_perf;
+
+ if (!cpudata->max_freq)
+ return -ENODEV;
+@@ -624,8 +628,6 @@ static int amd_pstate_update_freq(struct cpufreq_policy *policy,
+ amd_pstate_update_min_max_limit(policy);
+
+ cap_perf = READ_ONCE(cpudata->highest_perf);
+- min_perf = READ_ONCE(cpudata->lowest_perf);
+- max_perf = cap_perf;
+
+ freqs.old = policy->cur;
+ freqs.new = target_freq;
+@@ -642,8 +644,9 @@ static int amd_pstate_update_freq(struct cpufreq_policy *policy,
+ if (!fast_switch)
+ cpufreq_freq_transition_begin(policy, &freqs);
+
+- amd_pstate_update(cpudata, min_perf, des_perf,
+- max_perf, fast_switch, policy->governor->flags);
++ amd_pstate_update(cpudata, cpudata->min_limit_perf, des_perf,
++ cpudata->max_limit_perf, fast_switch,
++ policy->governor->flags);
+
+ if (!fast_switch)
+ cpufreq_freq_transition_end(policy, &freqs, false);
+@@ -671,8 +674,7 @@ static void amd_pstate_adjust_perf(unsigned int cpu,
+ unsigned long target_perf,
+ unsigned long capacity)
+ {
+- unsigned long max_perf, min_perf, des_perf,
+- cap_perf, lowest_nonlinear_perf;
++ u8 max_perf, min_perf, des_perf, cap_perf, min_limit_perf;
+ struct cpufreq_policy *policy = cpufreq_cpu_get(cpu);
+ struct amd_cpudata *cpudata;
+
+@@ -684,20 +686,20 @@ static void amd_pstate_adjust_perf(unsigned int cpu,
+ if (policy->min != cpudata->min_limit_freq || policy->max != cpudata->max_limit_freq)
+ amd_pstate_update_min_max_limit(policy);
+
+-
+ cap_perf = READ_ONCE(cpudata->highest_perf);
+- lowest_nonlinear_perf = READ_ONCE(cpudata->lowest_nonlinear_perf);
++ min_limit_perf = READ_ONCE(cpudata->min_limit_perf);
+
+ des_perf = cap_perf;
+ if (target_perf < capacity)
+ des_perf = DIV_ROUND_UP(cap_perf * target_perf, capacity);
+
+- min_perf = READ_ONCE(cpudata->lowest_perf);
+ if (_min_perf < capacity)
+ min_perf = DIV_ROUND_UP(cap_perf * _min_perf, capacity);
++ else
++ min_perf = cap_perf;
+
+- if (min_perf < lowest_nonlinear_perf)
+- min_perf = lowest_nonlinear_perf;
++ if (min_perf < min_limit_perf)
++ min_perf = min_limit_perf;
+
+ max_perf = cpudata->max_limit_perf;
+ if (max_perf < min_perf)
+@@ -908,8 +910,8 @@ static int amd_pstate_init_freq(struct amd_cpudata *cpudata)
+ {
+ int ret;
+ u32 min_freq, max_freq;
+- u32 highest_perf, nominal_perf, nominal_freq;
+- u32 lowest_nonlinear_perf, lowest_nonlinear_freq;
++ u8 highest_perf, nominal_perf, lowest_nonlinear_perf;
++ u32 nominal_freq, lowest_nonlinear_freq;
+ struct cppc_perf_caps cppc_perf;
+
+ ret = cppc_get_perf_caps(cpudata->cpu, &cppc_perf);
+@@ -1116,7 +1118,7 @@ static ssize_t show_amd_pstate_lowest_nonlinear_freq(struct cpufreq_policy *poli
+ static ssize_t show_amd_pstate_highest_perf(struct cpufreq_policy *policy,
+ char *buf)
+ {
+- u32 perf;
++ u8 perf;
+ struct amd_cpudata *cpudata = policy->driver_data;
+
+ perf = READ_ONCE(cpudata->highest_perf);
+@@ -1127,7 +1129,7 @@ static ssize_t show_amd_pstate_highest_perf(struct cpufreq_policy *policy,
+ static ssize_t show_amd_pstate_prefcore_ranking(struct cpufreq_policy *policy,
+ char *buf)
+ {
+- u32 perf;
++ u8 perf;
+ struct amd_cpudata *cpudata = policy->driver_data;
+
+ perf = READ_ONCE(cpudata->prefcore_ranking);
+@@ -1190,7 +1192,7 @@ static ssize_t show_energy_performance_preference(
+ struct cpufreq_policy *policy, char *buf)
+ {
+ struct amd_cpudata *cpudata = policy->driver_data;
+- int preference;
++ u8 preference;
+
+ switch (cpudata->epp_cached) {
+ case AMD_CPPC_EPP_PERFORMANCE:
+@@ -1552,7 +1554,7 @@ static void amd_pstate_epp_cpu_exit(struct cpufreq_policy *policy)
+ static int amd_pstate_epp_update_limit(struct cpufreq_policy *policy)
+ {
+ struct amd_cpudata *cpudata = policy->driver_data;
+- u32 epp;
++ u8 epp;
+
+ amd_pstate_update_min_max_limit(policy);
+
+@@ -1601,7 +1603,7 @@ static int amd_pstate_epp_set_policy(struct cpufreq_policy *policy)
+ static int amd_pstate_epp_reenable(struct cpufreq_policy *policy)
+ {
+ struct amd_cpudata *cpudata = policy->driver_data;
+- u64 max_perf;
++ u8 max_perf;
+ int ret;
+
+ ret = amd_pstate_cppc_enable(true);
+@@ -1638,7 +1640,7 @@ static int amd_pstate_epp_cpu_online(struct cpufreq_policy *policy)
+ static int amd_pstate_epp_cpu_offline(struct cpufreq_policy *policy)
+ {
+ struct amd_cpudata *cpudata = policy->driver_data;
+- int min_perf;
++ u8 min_perf;
+
+ if (cpudata->suspended)
+ return 0;
+diff --git a/drivers/cpufreq/amd-pstate.h b/drivers/cpufreq/amd-pstate.h
+index 9747e3be6ceee8..19d405c6d805e5 100644
+--- a/drivers/cpufreq/amd-pstate.h
++++ b/drivers/cpufreq/amd-pstate.h
+@@ -70,13 +70,13 @@ struct amd_cpudata {
+ struct freq_qos_request req[2];
+ u64 cppc_req_cached;
+
+- u32 highest_perf;
+- u32 nominal_perf;
+- u32 lowest_nonlinear_perf;
+- u32 lowest_perf;
+- u32 prefcore_ranking;
+- u32 min_limit_perf;
+- u32 max_limit_perf;
++ u8 highest_perf;
++ u8 nominal_perf;
++ u8 lowest_nonlinear_perf;
++ u8 lowest_perf;
++ u8 prefcore_ranking;
++ u8 min_limit_perf;
++ u8 max_limit_perf;
+ u32 min_limit_freq;
+ u32 max_limit_freq;
+
+@@ -93,11 +93,11 @@ struct amd_cpudata {
+ bool hw_prefcore;
+
+ /* EPP feature related attributes*/
+- s16 epp_cached;
++ u8 epp_cached;
+ u32 policy;
+ u64 cppc_cap1_cached;
+ bool suspended;
+- s16 epp_default;
++ u8 epp_default;
+ };
+
+ /*
+diff --git a/drivers/cpufreq/armada-8k-cpufreq.c b/drivers/cpufreq/armada-8k-cpufreq.c
+index 7a979db81f0982..5a3545bd0d8d20 100644
+--- a/drivers/cpufreq/armada-8k-cpufreq.c
++++ b/drivers/cpufreq/armada-8k-cpufreq.c
+@@ -47,7 +47,7 @@ static void __init armada_8k_get_sharing_cpus(struct clk *cur_clk,
+ {
+ int cpu;
+
+- for_each_possible_cpu(cpu) {
++ for_each_present_cpu(cpu) {
+ struct device *cpu_dev;
+ struct clk *clk;
+
+diff --git a/drivers/cpufreq/cpufreq-dt.c b/drivers/cpufreq/cpufreq-dt.c
+index 3a7c3372bda751..f3913eea5e553a 100644
+--- a/drivers/cpufreq/cpufreq-dt.c
++++ b/drivers/cpufreq/cpufreq-dt.c
+@@ -303,7 +303,7 @@ static int dt_cpufreq_probe(struct platform_device *pdev)
+ int ret, cpu;
+
+ /* Request resources early so we can return in case of -EPROBE_DEFER */
+- for_each_possible_cpu(cpu) {
++ for_each_present_cpu(cpu) {
+ ret = dt_cpufreq_early_init(&pdev->dev, cpu);
+ if (ret)
+ goto err;
+diff --git a/drivers/cpufreq/cpufreq_governor.c b/drivers/cpufreq/cpufreq_governor.c
+index af44ee6a64304f..1a7fcaf39cc9b5 100644
+--- a/drivers/cpufreq/cpufreq_governor.c
++++ b/drivers/cpufreq/cpufreq_governor.c
+@@ -145,7 +145,23 @@ unsigned int dbs_update(struct cpufreq_policy *policy)
+ time_elapsed = update_time - j_cdbs->prev_update_time;
+ j_cdbs->prev_update_time = update_time;
+
+- idle_time = cur_idle_time - j_cdbs->prev_cpu_idle;
++ /*
++ * cur_idle_time could be smaller than j_cdbs->prev_cpu_idle if
++ * it's obtained from get_cpu_idle_time_jiffy() when NOHZ is
++	 * off, where idle_time is calculated as the difference between
++	 * the time elapsed in jiffies and the "busy time" obtained from CPU
++	 * statistics. If a CPU is 100% busy, the elapsed time and the busy
++	 * time should grow by the same amount in two consecutive
++	 * samples, but in practice there can be a tiny difference,
++	 * occasionally making the accumulated idle time decrease. Hence,
++	 * in this case, idle_time should be regarded as 0 in order to
++	 * keep the subsequent calculation correct.
++ */
++ if (cur_idle_time > j_cdbs->prev_cpu_idle)
++ idle_time = cur_idle_time - j_cdbs->prev_cpu_idle;
++ else
++ idle_time = 0;
++
+ j_cdbs->prev_cpu_idle = cur_idle_time;
+
+ if (ignore_nice) {
+@@ -162,7 +178,7 @@ unsigned int dbs_update(struct cpufreq_policy *policy)
+ * calls, so the previous load value can be used then.
+ */
+ load = j_cdbs->prev_load;
+- } else if (unlikely((int)idle_time > 2 * sampling_rate &&
++ } else if (unlikely(idle_time > 2 * sampling_rate &&
+ j_cdbs->prev_load)) {
+ /*
+ * If the CPU had gone completely idle and a task has
+@@ -189,30 +205,15 @@ unsigned int dbs_update(struct cpufreq_policy *policy)
+ load = j_cdbs->prev_load;
+ j_cdbs->prev_load = 0;
+ } else {
+- if (time_elapsed >= idle_time) {
++ if (time_elapsed > idle_time)
+ load = 100 * (time_elapsed - idle_time) / time_elapsed;
+- } else {
+- /*
+- * That can happen if idle_time is returned by
+- * get_cpu_idle_time_jiffy(). In that case
+- * idle_time is roughly equal to the difference
+- * between time_elapsed and "busy time" obtained
+- * from CPU statistics. Then, the "busy time"
+- * can end up being greater than time_elapsed
+- * (for example, if jiffies_64 and the CPU
+- * statistics are updated by different CPUs),
+- * so idle_time may in fact be negative. That
+- * means, though, that the CPU was busy all
+- * the time (on the rough average) during the
+- * last sampling interval and 100 can be
+- * returned as the load.
+- */
+- load = (int)idle_time < 0 ? 100 : 0;
+- }
++ else
++ load = 0;
++
+ j_cdbs->prev_load = load;
+ }
+
+- if (unlikely((int)idle_time > 2 * sampling_rate)) {
++ if (unlikely(idle_time > 2 * sampling_rate)) {
+ unsigned int periods = idle_time / sampling_rate;
+
+ if (periods < idle_periods)
+diff --git a/drivers/cpufreq/mediatek-cpufreq-hw.c b/drivers/cpufreq/mediatek-cpufreq-hw.c
+index 9252ebd60373f1..478257523cc3c4 100644
+--- a/drivers/cpufreq/mediatek-cpufreq-hw.c
++++ b/drivers/cpufreq/mediatek-cpufreq-hw.c
+@@ -304,7 +304,7 @@ static int mtk_cpufreq_hw_driver_probe(struct platform_device *pdev)
+ struct regulator *cpu_reg;
+
+ /* Make sure that all CPU supplies are available before proceeding. */
+- for_each_possible_cpu(cpu) {
++ for_each_present_cpu(cpu) {
+ cpu_dev = get_cpu_device(cpu);
+ if (!cpu_dev)
+ return dev_err_probe(&pdev->dev, -EPROBE_DEFER,
+diff --git a/drivers/cpufreq/mediatek-cpufreq.c b/drivers/cpufreq/mediatek-cpufreq.c
+index 663f61565cf728..2e4f9ca0af3504 100644
+--- a/drivers/cpufreq/mediatek-cpufreq.c
++++ b/drivers/cpufreq/mediatek-cpufreq.c
+@@ -632,7 +632,7 @@ static int mtk_cpufreq_probe(struct platform_device *pdev)
+ return dev_err_probe(&pdev->dev, -ENODEV,
+ "failed to get mtk cpufreq platform data\n");
+
+- for_each_possible_cpu(cpu) {
++ for_each_present_cpu(cpu) {
+ info = mtk_cpu_dvfs_info_lookup(cpu);
+ if (info)
+ continue;
+diff --git a/drivers/cpufreq/mvebu-cpufreq.c b/drivers/cpufreq/mvebu-cpufreq.c
+index 7f3cfe668f307c..2aad4c04673cc5 100644
+--- a/drivers/cpufreq/mvebu-cpufreq.c
++++ b/drivers/cpufreq/mvebu-cpufreq.c
+@@ -56,7 +56,7 @@ static int __init armada_xp_pmsu_cpufreq_init(void)
+ * it), and registers the clock notifier that will take care
+ * of doing the PMSU part of a frequency transition.
+ */
+- for_each_possible_cpu(cpu) {
++ for_each_present_cpu(cpu) {
+ struct device *cpu_dev;
+ struct clk *clk;
+ int ret;
+diff --git a/drivers/cpufreq/qcom-cpufreq-hw.c b/drivers/cpufreq/qcom-cpufreq-hw.c
+index b2e7e89feaac41..dce7cad1813fb5 100644
+--- a/drivers/cpufreq/qcom-cpufreq-hw.c
++++ b/drivers/cpufreq/qcom-cpufreq-hw.c
+@@ -306,7 +306,7 @@ static void qcom_get_related_cpus(int index, struct cpumask *m)
+ struct of_phandle_args args;
+ int cpu, ret;
+
+- for_each_possible_cpu(cpu) {
++ for_each_present_cpu(cpu) {
+ cpu_np = of_cpu_device_node_get(cpu);
+ if (!cpu_np)
+ continue;
+diff --git a/drivers/cpufreq/qcom-cpufreq-nvmem.c b/drivers/cpufreq/qcom-cpufreq-nvmem.c
+index 3a8ed723a23e52..54f8117103c850 100644
+--- a/drivers/cpufreq/qcom-cpufreq-nvmem.c
++++ b/drivers/cpufreq/qcom-cpufreq-nvmem.c
+@@ -489,7 +489,7 @@ static int qcom_cpufreq_probe(struct platform_device *pdev)
+ nvmem_cell_put(speedbin_nvmem);
+ }
+
+- for_each_possible_cpu(cpu) {
++ for_each_present_cpu(cpu) {
+ struct dev_pm_opp_config config = {
+ .supported_hw = NULL,
+ };
+@@ -543,7 +543,7 @@ static int qcom_cpufreq_probe(struct platform_device *pdev)
+ dev_err(cpu_dev, "Failed to register platform device\n");
+
+ free_opp:
+- for_each_possible_cpu(cpu) {
++ for_each_present_cpu(cpu) {
+ dev_pm_domain_detach_list(drv->cpus[cpu].pd_list);
+ dev_pm_opp_clear_config(drv->cpus[cpu].opp_token);
+ }
+@@ -557,7 +557,7 @@ static void qcom_cpufreq_remove(struct platform_device *pdev)
+
+ platform_device_unregister(cpufreq_dt_pdev);
+
+- for_each_possible_cpu(cpu) {
++ for_each_present_cpu(cpu) {
+ dev_pm_domain_detach_list(drv->cpus[cpu].pd_list);
+ dev_pm_opp_clear_config(drv->cpus[cpu].opp_token);
+ }
+@@ -568,7 +568,7 @@ static int qcom_cpufreq_suspend(struct device *dev)
+ struct qcom_cpufreq_drv *drv = dev_get_drvdata(dev);
+ unsigned int cpu;
+
+- for_each_possible_cpu(cpu)
++ for_each_present_cpu(cpu)
+ qcom_cpufreq_suspend_pd_devs(drv, cpu);
+
+ return 0;
+diff --git a/drivers/cpufreq/scmi-cpufreq.c b/drivers/cpufreq/scmi-cpufreq.c
+index b8fe758aeb0100..914bf2c940a037 100644
+--- a/drivers/cpufreq/scmi-cpufreq.c
++++ b/drivers/cpufreq/scmi-cpufreq.c
+@@ -104,7 +104,7 @@ scmi_get_sharing_cpus(struct device *cpu_dev, int domain,
+ int cpu, tdomain;
+ struct device *tcpu_dev;
+
+- for_each_possible_cpu(cpu) {
++ for_each_present_cpu(cpu) {
+ if (cpu == cpu_dev->id)
+ continue;
+
+diff --git a/drivers/cpufreq/scpi-cpufreq.c b/drivers/cpufreq/scpi-cpufreq.c
+index cd89c1b9832c02..1f97b949763fa7 100644
+--- a/drivers/cpufreq/scpi-cpufreq.c
++++ b/drivers/cpufreq/scpi-cpufreq.c
+@@ -39,8 +39,9 @@ static unsigned int scpi_cpufreq_get_rate(unsigned int cpu)
+ static int
+ scpi_cpufreq_set_target(struct cpufreq_policy *policy, unsigned int index)
+ {
+- u64 rate = policy->freq_table[index].frequency * 1000;
++ unsigned long freq_khz = policy->freq_table[index].frequency;
+ struct scpi_data *priv = policy->driver_data;
++ unsigned long rate = freq_khz * 1000;
+ int ret;
+
+ ret = clk_set_rate(priv->clk, rate);
+@@ -48,7 +49,7 @@ scpi_cpufreq_set_target(struct cpufreq_policy *policy, unsigned int index)
+ if (ret)
+ return ret;
+
+- if (clk_get_rate(priv->clk) != rate)
++ if (clk_get_rate(priv->clk) / 1000 != freq_khz)
+ return -EIO;
+
+ return 0;
+@@ -64,7 +65,7 @@ scpi_get_sharing_cpus(struct device *cpu_dev, struct cpumask *cpumask)
+ if (domain < 0)
+ return domain;
+
+- for_each_possible_cpu(cpu) {
++ for_each_present_cpu(cpu) {
+ if (cpu == cpu_dev->id)
+ continue;
+
+diff --git a/drivers/cpufreq/sun50i-cpufreq-nvmem.c b/drivers/cpufreq/sun50i-cpufreq-nvmem.c
+index 17d6a149f580dc..47d6840b348994 100644
+--- a/drivers/cpufreq/sun50i-cpufreq-nvmem.c
++++ b/drivers/cpufreq/sun50i-cpufreq-nvmem.c
+@@ -262,7 +262,7 @@ static int sun50i_cpufreq_nvmem_probe(struct platform_device *pdev)
+ snprintf(name, sizeof(name), "speed%d", speed);
+ config.prop_name = name;
+
+- for_each_possible_cpu(cpu) {
++ for_each_present_cpu(cpu) {
+ struct device *cpu_dev = get_cpu_device(cpu);
+
+ if (!cpu_dev) {
+@@ -288,7 +288,7 @@ static int sun50i_cpufreq_nvmem_probe(struct platform_device *pdev)
+ pr_err("Failed to register platform device\n");
+
+ free_opp:
+- for_each_possible_cpu(cpu)
++ for_each_present_cpu(cpu)
+ dev_pm_opp_clear_config(opp_tokens[cpu]);
+ kfree(opp_tokens);
+
+@@ -302,7 +302,7 @@ static void sun50i_cpufreq_nvmem_remove(struct platform_device *pdev)
+
+ platform_device_unregister(cpufreq_dt_pdev);
+
+- for_each_possible_cpu(cpu)
++ for_each_present_cpu(cpu)
+ dev_pm_opp_clear_config(opp_tokens[cpu]);
+
+ kfree(opp_tokens);
+diff --git a/drivers/cpufreq/virtual-cpufreq.c b/drivers/cpufreq/virtual-cpufreq.c
+index a050b3a6737f00..272dc3c85106cb 100644
+--- a/drivers/cpufreq/virtual-cpufreq.c
++++ b/drivers/cpufreq/virtual-cpufreq.c
+@@ -138,7 +138,7 @@ static int virt_cpufreq_get_sharing_cpus(struct cpufreq_policy *policy)
+ cur_perf_domain = readl_relaxed(base + policy->cpu *
+ PER_CPU_OFFSET + REG_PERF_DOMAIN_OFFSET);
+
+- for_each_possible_cpu(cpu) {
++ for_each_present_cpu(cpu) {
+ cpu_dev = get_cpu_device(cpu);
+ if (!cpu_dev)
+ continue;
+diff --git a/drivers/cpuidle/cpuidle-arm.c b/drivers/cpuidle/cpuidle-arm.c
+index caba6f4bb1b793..e044fefdb81678 100644
+--- a/drivers/cpuidle/cpuidle-arm.c
++++ b/drivers/cpuidle/cpuidle-arm.c
+@@ -137,9 +137,9 @@ static int __init arm_idle_init_cpu(int cpu)
+ /*
+ * arm_idle_init - Initializes arm cpuidle driver
+ *
+- * Initializes arm cpuidle driver for all CPUs, if any CPU fails
+- * to register cpuidle driver then rollback to cancel all CPUs
+- * registration.
++ * Initializes arm cpuidle driver for all present CPUs, if any
++ * CPU fails to register cpuidle driver then rollback to cancel
++ * all CPUs registration.
+ */
+ static int __init arm_idle_init(void)
+ {
+@@ -147,7 +147,7 @@ static int __init arm_idle_init(void)
+ struct cpuidle_driver *drv;
+ struct cpuidle_device *dev;
+
+- for_each_possible_cpu(cpu) {
++ for_each_present_cpu(cpu) {
+ ret = arm_idle_init_cpu(cpu);
+ if (ret)
+ goto out_fail;
+diff --git a/drivers/cpuidle/cpuidle-big_little.c b/drivers/cpuidle/cpuidle-big_little.c
+index 74972deda0ead3..4abba42fcc3112 100644
+--- a/drivers/cpuidle/cpuidle-big_little.c
++++ b/drivers/cpuidle/cpuidle-big_little.c
+@@ -148,7 +148,7 @@ static int __init bl_idle_driver_init(struct cpuidle_driver *drv, int part_id)
+ if (!cpumask)
+ return -ENOMEM;
+
+- for_each_possible_cpu(cpu)
++ for_each_present_cpu(cpu)
+ if (smp_cpuid_part(cpu) == part_id)
+ cpumask_set_cpu(cpu, cpumask);
+
+diff --git a/drivers/cpuidle/cpuidle-psci.c b/drivers/cpuidle/cpuidle-psci.c
+index 2562dc001fc1de..a4594c3d6562d3 100644
+--- a/drivers/cpuidle/cpuidle-psci.c
++++ b/drivers/cpuidle/cpuidle-psci.c
+@@ -400,7 +400,7 @@ static int psci_idle_init_cpu(struct device *dev, int cpu)
+ /*
+ * psci_idle_probe - Initializes PSCI cpuidle driver
+ *
+- * Initializes PSCI cpuidle driver for all CPUs, if any CPU fails
++ * Initializes PSCI cpuidle driver for all present CPUs, if any CPU fails
+ * to register cpuidle driver then rollback to cancel all CPUs
+ * registration.
+ */
+@@ -410,7 +410,7 @@ static int psci_cpuidle_probe(struct platform_device *pdev)
+ struct cpuidle_driver *drv;
+ struct cpuidle_device *dev;
+
+- for_each_possible_cpu(cpu) {
++ for_each_present_cpu(cpu) {
+ ret = psci_idle_init_cpu(&pdev->dev, cpu);
+ if (ret)
+ goto out_fail;
+diff --git a/drivers/cpuidle/cpuidle-qcom-spm.c b/drivers/cpuidle/cpuidle-qcom-spm.c
+index 3ab240e0e12292..5f386761b1562a 100644
+--- a/drivers/cpuidle/cpuidle-qcom-spm.c
++++ b/drivers/cpuidle/cpuidle-qcom-spm.c
+@@ -135,7 +135,7 @@ static int spm_cpuidle_drv_probe(struct platform_device *pdev)
+ if (ret)
+ return dev_err_probe(&pdev->dev, ret, "set warm boot addr failed");
+
+- for_each_possible_cpu(cpu) {
++ for_each_present_cpu(cpu) {
+ ret = spm_cpuidle_register(&pdev->dev, cpu);
+ if (ret && ret != -ENODEV) {
+ dev_err(&pdev->dev,
+diff --git a/drivers/cpuidle/cpuidle-riscv-sbi.c b/drivers/cpuidle/cpuidle-riscv-sbi.c
+index 0c92a628bbd40e..0fe1ece9fbdca4 100644
+--- a/drivers/cpuidle/cpuidle-riscv-sbi.c
++++ b/drivers/cpuidle/cpuidle-riscv-sbi.c
+@@ -529,8 +529,8 @@ static int sbi_cpuidle_probe(struct platform_device *pdev)
+ return ret;
+ }
+
+- /* Initialize CPU idle driver for each CPU */
+- for_each_possible_cpu(cpu) {
++ /* Initialize CPU idle driver for each present CPU */
++ for_each_present_cpu(cpu) {
+ ret = sbi_cpuidle_init_cpu(&pdev->dev, cpu);
+ if (ret) {
+ pr_debug("HART%ld: idle driver init failed\n",
+diff --git a/drivers/crypto/hisilicon/sec2/sec.h b/drivers/crypto/hisilicon/sec2/sec.h
+index 4b997023082287..703920b49c7c08 100644
+--- a/drivers/crypto/hisilicon/sec2/sec.h
++++ b/drivers/crypto/hisilicon/sec2/sec.h
+@@ -37,7 +37,6 @@ struct sec_aead_req {
+ u8 *a_ivin;
+ dma_addr_t a_ivin_dma;
+ struct aead_request *aead_req;
+- bool fallback;
+ };
+
+ /* SEC request of Crypto */
+diff --git a/drivers/crypto/hisilicon/sec2/sec_crypto.c b/drivers/crypto/hisilicon/sec2/sec_crypto.c
+index 66bc07da9eb6f7..8ea5305bc320f8 100644
+--- a/drivers/crypto/hisilicon/sec2/sec_crypto.c
++++ b/drivers/crypto/hisilicon/sec2/sec_crypto.c
+@@ -57,7 +57,6 @@
+ #define SEC_TYPE_MASK 0x0F
+ #define SEC_DONE_MASK 0x0001
+ #define SEC_ICV_MASK 0x000E
+-#define SEC_SQE_LEN_RATE_MASK 0x3
+
+ #define SEC_TOTAL_IV_SZ(depth) (SEC_IV_SIZE * (depth))
+ #define SEC_SGL_SGE_NR 128
+@@ -80,16 +79,16 @@
+ #define SEC_TOTAL_PBUF_SZ(depth) (PAGE_SIZE * SEC_PBUF_PAGE_NUM(depth) + \
+ SEC_PBUF_LEFT_SZ(depth))
+
+-#define SEC_SQE_LEN_RATE 4
+ #define SEC_SQE_CFLAG 2
+ #define SEC_SQE_AEAD_FLAG 3
+ #define SEC_SQE_DONE 0x1
+ #define SEC_ICV_ERR 0x2
+-#define MIN_MAC_LEN 4
+ #define MAC_LEN_MASK 0x1U
+ #define MAX_INPUT_DATA_LEN 0xFFFE00
+ #define BITS_MASK 0xFF
++#define WORD_MASK 0x3
+ #define BYTE_BITS 0x8
++#define BYTES_TO_WORDS(bcount) ((bcount) >> 2)
+ #define SEC_XTS_NAME_SZ 0x3
+ #define IV_CM_CAL_NUM 2
+ #define IV_CL_MASK 0x7
+@@ -691,14 +690,10 @@ static int sec_skcipher_fbtfm_init(struct crypto_skcipher *tfm)
+
+ c_ctx->fallback = false;
+
+- /* Currently, only XTS mode need fallback tfm when using 192bit key */
+- if (likely(strncmp(alg, "xts", SEC_XTS_NAME_SZ)))
+- return 0;
+-
+ c_ctx->fbtfm = crypto_alloc_sync_skcipher(alg, 0,
+ CRYPTO_ALG_NEED_FALLBACK);
+ if (IS_ERR(c_ctx->fbtfm)) {
+- pr_err("failed to alloc xts mode fallback tfm!\n");
++ pr_err("failed to alloc fallback tfm for %s!\n", alg);
+ return PTR_ERR(c_ctx->fbtfm);
+ }
+
+@@ -858,7 +853,7 @@ static int sec_skcipher_setkey(struct crypto_skcipher *tfm, const u8 *key,
+ }
+
+ memcpy(c_ctx->c_key, key, keylen);
+- if (c_ctx->fallback && c_ctx->fbtfm) {
++ if (c_ctx->fbtfm) {
+ ret = crypto_sync_skcipher_setkey(c_ctx->fbtfm, key, keylen);
+ if (ret) {
+ dev_err(dev, "failed to set fallback skcipher key!\n");
+@@ -1090,11 +1085,6 @@ static int sec_aead_auth_set_key(struct sec_auth_ctx *ctx,
+ struct crypto_shash *hash_tfm = ctx->hash_tfm;
+ int blocksize, digestsize, ret;
+
+- if (!keys->authkeylen) {
+- pr_err("hisi_sec2: aead auth key error!\n");
+- return -EINVAL;
+- }
+-
+ blocksize = crypto_shash_blocksize(hash_tfm);
+ digestsize = crypto_shash_digestsize(hash_tfm);
+ if (keys->authkeylen > blocksize) {
+@@ -1106,7 +1096,8 @@ static int sec_aead_auth_set_key(struct sec_auth_ctx *ctx,
+ }
+ ctx->a_key_len = digestsize;
+ } else {
+- memcpy(ctx->a_key, keys->authkey, keys->authkeylen);
++ if (keys->authkeylen)
++ memcpy(ctx->a_key, keys->authkey, keys->authkeylen);
+ ctx->a_key_len = keys->authkeylen;
+ }
+
+@@ -1160,8 +1151,10 @@ static int sec_aead_setkey(struct crypto_aead *tfm, const u8 *key,
+ }
+
+ ret = crypto_authenc_extractkeys(&keys, key, keylen);
+- if (ret)
++ if (ret) {
++ dev_err(dev, "sec extract aead keys err!\n");
+ goto bad_key;
++ }
+
+ ret = sec_aead_aes_set_key(c_ctx, &keys);
+ if (ret) {
+@@ -1175,12 +1168,6 @@ static int sec_aead_setkey(struct crypto_aead *tfm, const u8 *key,
+ goto bad_key;
+ }
+
+- if (ctx->a_ctx.a_key_len & SEC_SQE_LEN_RATE_MASK) {
+- ret = -EINVAL;
+- dev_err(dev, "AUTH key length error!\n");
+- goto bad_key;
+- }
+-
+ ret = sec_aead_fallback_setkey(a_ctx, tfm, key, keylen);
+ if (ret) {
+ dev_err(dev, "set sec fallback key err!\n");
+@@ -1583,11 +1570,10 @@ static void sec_auth_bd_fill_ex(struct sec_auth_ctx *ctx, int dir,
+
+ sec_sqe->type2.a_key_addr = cpu_to_le64(ctx->a_key_dma);
+
+- sec_sqe->type2.mac_key_alg = cpu_to_le32(authsize / SEC_SQE_LEN_RATE);
++ sec_sqe->type2.mac_key_alg = cpu_to_le32(BYTES_TO_WORDS(authsize));
+
+ sec_sqe->type2.mac_key_alg |=
+- cpu_to_le32((u32)((ctx->a_key_len) /
+- SEC_SQE_LEN_RATE) << SEC_AKEY_OFFSET);
++ cpu_to_le32((u32)BYTES_TO_WORDS(ctx->a_key_len) << SEC_AKEY_OFFSET);
+
+ sec_sqe->type2.mac_key_alg |=
+ cpu_to_le32((u32)(ctx->a_alg) << SEC_AEAD_ALG_OFFSET);
+@@ -1639,12 +1625,10 @@ static void sec_auth_bd_fill_ex_v3(struct sec_auth_ctx *ctx, int dir,
+ sqe3->a_key_addr = cpu_to_le64(ctx->a_key_dma);
+
+ sqe3->auth_mac_key |=
+- cpu_to_le32((u32)(authsize /
+- SEC_SQE_LEN_RATE) << SEC_MAC_OFFSET_V3);
++ cpu_to_le32(BYTES_TO_WORDS(authsize) << SEC_MAC_OFFSET_V3);
+
+ sqe3->auth_mac_key |=
+- cpu_to_le32((u32)(ctx->a_key_len /
+- SEC_SQE_LEN_RATE) << SEC_AKEY_OFFSET_V3);
++ cpu_to_le32((u32)BYTES_TO_WORDS(ctx->a_key_len) << SEC_AKEY_OFFSET_V3);
+
+ sqe3->auth_mac_key |=
+ cpu_to_le32((u32)(ctx->a_alg) << SEC_AUTH_ALG_OFFSET_V3);
+@@ -2003,8 +1987,7 @@ static int sec_aead_sha512_ctx_init(struct crypto_aead *tfm)
+ return sec_aead_ctx_init(tfm, "sha512");
+ }
+
+-static int sec_skcipher_cryptlen_check(struct sec_ctx *ctx,
+- struct sec_req *sreq)
++static int sec_skcipher_cryptlen_check(struct sec_ctx *ctx, struct sec_req *sreq)
+ {
+ u32 cryptlen = sreq->c_req.sk_req->cryptlen;
+ struct device *dev = ctx->dev;
+@@ -2026,10 +2009,6 @@ static int sec_skcipher_cryptlen_check(struct sec_ctx *ctx,
+ }
+ break;
+ case SEC_CMODE_CTR:
+- if (unlikely(ctx->sec->qm.ver < QM_HW_V3)) {
+- dev_err(dev, "skcipher HW version error!\n");
+- ret = -EINVAL;
+- }
+ break;
+ default:
+ ret = -EINVAL;
+@@ -2038,17 +2017,21 @@ static int sec_skcipher_cryptlen_check(struct sec_ctx *ctx,
+ return ret;
+ }
+
+-static int sec_skcipher_param_check(struct sec_ctx *ctx, struct sec_req *sreq)
++static int sec_skcipher_param_check(struct sec_ctx *ctx,
++ struct sec_req *sreq, bool *need_fallback)
+ {
+ struct skcipher_request *sk_req = sreq->c_req.sk_req;
+ struct device *dev = ctx->dev;
+ u8 c_alg = ctx->c_ctx.c_alg;
+
+- if (unlikely(!sk_req->src || !sk_req->dst ||
+- sk_req->cryptlen > MAX_INPUT_DATA_LEN)) {
++ if (unlikely(!sk_req->src || !sk_req->dst)) {
+ dev_err(dev, "skcipher input param error!\n");
+ return -EINVAL;
+ }
++
++ if (sk_req->cryptlen > MAX_INPUT_DATA_LEN)
++ *need_fallback = true;
++
+ sreq->c_req.c_len = sk_req->cryptlen;
+
+ if (ctx->pbuf_supported && sk_req->cryptlen <= SEC_PBUF_SZ)
+@@ -2106,6 +2089,7 @@ static int sec_skcipher_crypto(struct skcipher_request *sk_req, bool encrypt)
+ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(sk_req);
+ struct sec_req *req = skcipher_request_ctx(sk_req);
+ struct sec_ctx *ctx = crypto_skcipher_ctx(tfm);
++ bool need_fallback = false;
+ int ret;
+
+ if (!sk_req->cryptlen) {
+@@ -2119,11 +2103,11 @@ static int sec_skcipher_crypto(struct skcipher_request *sk_req, bool encrypt)
+ req->c_req.encrypt = encrypt;
+ req->ctx = ctx;
+
+- ret = sec_skcipher_param_check(ctx, req);
++ ret = sec_skcipher_param_check(ctx, req, &need_fallback);
+ if (unlikely(ret))
+ return -EINVAL;
+
+- if (unlikely(ctx->c_ctx.fallback))
++ if (unlikely(ctx->c_ctx.fallback || need_fallback))
+ return sec_skcipher_soft_crypto(ctx, sk_req, encrypt);
+
+ return ctx->req_op->process(ctx, req);
+@@ -2231,52 +2215,35 @@ static int sec_aead_spec_check(struct sec_ctx *ctx, struct sec_req *sreq)
+ struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ size_t sz = crypto_aead_authsize(tfm);
+ u8 c_mode = ctx->c_ctx.c_mode;
+- struct device *dev = ctx->dev;
+ int ret;
+
+- /* Hardware does not handle cases where authsize is less than 4 bytes */
+- if (unlikely(sz < MIN_MAC_LEN)) {
+- sreq->aead_req.fallback = true;
++ if (unlikely(ctx->sec->qm.ver == QM_HW_V2 && !sreq->c_req.c_len))
+ return -EINVAL;
+- }
+
+ if (unlikely(req->cryptlen + req->assoclen > MAX_INPUT_DATA_LEN ||
+- req->assoclen > SEC_MAX_AAD_LEN)) {
+- dev_err(dev, "aead input spec error!\n");
++ req->assoclen > SEC_MAX_AAD_LEN))
+ return -EINVAL;
+- }
+
+ if (c_mode == SEC_CMODE_CCM) {
+- if (unlikely(req->assoclen > SEC_MAX_CCM_AAD_LEN)) {
+- dev_err_ratelimited(dev, "CCM input aad parameter is too long!\n");
++ if (unlikely(req->assoclen > SEC_MAX_CCM_AAD_LEN))
+ return -EINVAL;
+- }
+- ret = aead_iv_demension_check(req);
+- if (ret) {
+- dev_err(dev, "aead input iv param error!\n");
+- return ret;
+- }
+- }
+
+- if (sreq->c_req.encrypt)
+- sreq->c_req.c_len = req->cryptlen;
+- else
+- sreq->c_req.c_len = req->cryptlen - sz;
+- if (c_mode == SEC_CMODE_CBC) {
+- if (unlikely(sreq->c_req.c_len & (AES_BLOCK_SIZE - 1))) {
+- dev_err(dev, "aead crypto length error!\n");
++ ret = aead_iv_demension_check(req);
++ if (unlikely(ret))
++ return -EINVAL;
++ } else if (c_mode == SEC_CMODE_CBC) {
++ if (unlikely(sz & WORD_MASK))
++ return -EINVAL;
++ if (unlikely(ctx->a_ctx.a_key_len & WORD_MASK))
+ return -EINVAL;
+- }
+ }
+
+ return 0;
+ }
+
+-static int sec_aead_param_check(struct sec_ctx *ctx, struct sec_req *sreq)
++static int sec_aead_param_check(struct sec_ctx *ctx, struct sec_req *sreq, bool *need_fallback)
+ {
+ struct aead_request *req = sreq->aead_req.aead_req;
+- struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+- size_t authsize = crypto_aead_authsize(tfm);
+ struct device *dev = ctx->dev;
+ u8 c_alg = ctx->c_ctx.c_alg;
+
+@@ -2285,12 +2252,10 @@ static int sec_aead_param_check(struct sec_ctx *ctx, struct sec_req *sreq)
+ return -EINVAL;
+ }
+
+- if (ctx->sec->qm.ver == QM_HW_V2) {
+- if (unlikely(!req->cryptlen || (!sreq->c_req.encrypt &&
+- req->cryptlen <= authsize))) {
+- sreq->aead_req.fallback = true;
+- return -EINVAL;
+- }
++ if (unlikely(ctx->c_ctx.c_mode == SEC_CMODE_CBC &&
++ sreq->c_req.c_len & (AES_BLOCK_SIZE - 1))) {
++ dev_err(dev, "aead cbc mode input data length error!\n");
++ return -EINVAL;
+ }
+
+ /* Support AES or SM4 */
+@@ -2299,8 +2264,10 @@ static int sec_aead_param_check(struct sec_ctx *ctx, struct sec_req *sreq)
+ return -EINVAL;
+ }
+
+- if (unlikely(sec_aead_spec_check(ctx, sreq)))
++ if (unlikely(sec_aead_spec_check(ctx, sreq))) {
++ *need_fallback = true;
+ return -EINVAL;
++ }
+
+ if (ctx->pbuf_supported && (req->cryptlen + req->assoclen) <=
+ SEC_PBUF_SZ)
+@@ -2344,17 +2311,19 @@ static int sec_aead_crypto(struct aead_request *a_req, bool encrypt)
+ struct crypto_aead *tfm = crypto_aead_reqtfm(a_req);
+ struct sec_req *req = aead_request_ctx(a_req);
+ struct sec_ctx *ctx = crypto_aead_ctx(tfm);
++ size_t sz = crypto_aead_authsize(tfm);
++ bool need_fallback = false;
+ int ret;
+
+ req->flag = a_req->base.flags;
+ req->aead_req.aead_req = a_req;
+ req->c_req.encrypt = encrypt;
+ req->ctx = ctx;
+- req->aead_req.fallback = false;
++ req->c_req.c_len = a_req->cryptlen - (req->c_req.encrypt ? 0 : sz);
+
+- ret = sec_aead_param_check(ctx, req);
++ ret = sec_aead_param_check(ctx, req, &need_fallback);
+ if (unlikely(ret)) {
+- if (req->aead_req.fallback)
++ if (need_fallback)
+ return sec_aead_soft_crypto(ctx, a_req, encrypt);
+ return -EINVAL;
+ }
+diff --git a/drivers/crypto/intel/iaa/iaa_crypto_main.c b/drivers/crypto/intel/iaa/iaa_crypto_main.c
+index c3776b0de51d76..990ea46955bbf3 100644
+--- a/drivers/crypto/intel/iaa/iaa_crypto_main.c
++++ b/drivers/crypto/intel/iaa/iaa_crypto_main.c
+@@ -1537,7 +1537,7 @@ static int iaa_comp_acompress(struct acomp_req *req)
+ iaa_wq = idxd_wq_get_private(wq);
+
+ if (!req->dst) {
+- gfp_t flags = req->flags & CRYPTO_TFM_REQ_MAY_SLEEP ? GFP_KERNEL : GFP_ATOMIC;
++ gfp_t flags = req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP ? GFP_KERNEL : GFP_ATOMIC;
+
+ /* incompressible data will always be < 2 * slen */
+ req->dlen = 2 * req->slen;
+@@ -1619,7 +1619,7 @@ static int iaa_comp_acompress(struct acomp_req *req)
+
+ static int iaa_comp_adecompress_alloc_dest(struct acomp_req *req)
+ {
+- gfp_t flags = req->flags & CRYPTO_TFM_REQ_MAY_SLEEP ?
++ gfp_t flags = req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP ?
+ GFP_KERNEL : GFP_ATOMIC;
+ struct crypto_tfm *tfm = req->base.tfm;
+ dma_addr_t src_addr, dst_addr;
+diff --git a/drivers/crypto/intel/qat/qat_420xx/adf_420xx_hw_data.c b/drivers/crypto/intel/qat/qat_420xx/adf_420xx_hw_data.c
+index 9faef33e54bd32..a17adc4beda2e3 100644
+--- a/drivers/crypto/intel/qat/qat_420xx/adf_420xx_hw_data.c
++++ b/drivers/crypto/intel/qat/qat_420xx/adf_420xx_hw_data.c
+@@ -420,6 +420,7 @@ static void adf_gen4_set_err_mask(struct adf_dev_err_mask *dev_err_mask)
+ dev_err_mask->parerr_cpr_xlt_mask = ADF_420XX_PARITYERRORMASK_CPR_XLT_MASK;
+ dev_err_mask->parerr_dcpr_ucs_mask = ADF_420XX_PARITYERRORMASK_DCPR_UCS_MASK;
+ dev_err_mask->parerr_pke_mask = ADF_420XX_PARITYERRORMASK_PKE_MASK;
++ dev_err_mask->parerr_wat_wcp_mask = ADF_420XX_PARITYERRORMASK_WAT_WCP_MASK;
+ dev_err_mask->ssmfeatren_mask = ADF_420XX_SSMFEATREN_MASK;
+ }
+
+diff --git a/drivers/crypto/intel/qat/qat_common/adf_gen4_ras.c b/drivers/crypto/intel/qat/qat_common/adf_gen4_ras.c
+index 2dd3772bf58a6c..0f7f00a19e7dc6 100644
+--- a/drivers/crypto/intel/qat/qat_common/adf_gen4_ras.c
++++ b/drivers/crypto/intel/qat/qat_common/adf_gen4_ras.c
+@@ -695,7 +695,7 @@ static bool adf_handle_slice_hang_error(struct adf_accel_dev *accel_dev,
+ if (err_mask->parerr_wat_wcp_mask)
+ adf_poll_slicehang_csr(accel_dev, csr,
+ ADF_GEN4_SLICEHANGSTATUS_WAT_WCP,
+- "ath_cph");
++ "wat_wcp");
+
+ return false;
+ }
+@@ -1043,63 +1043,16 @@ static bool adf_handle_ssmcpppar_err(struct adf_accel_dev *accel_dev,
+ return reset_required;
+ }
+
+-static bool adf_handle_rf_parr_err(struct adf_accel_dev *accel_dev,
++static void adf_handle_rf_parr_err(struct adf_accel_dev *accel_dev,
+ void __iomem *csr, u32 iastatssm)
+ {
+- struct adf_dev_err_mask *err_mask = GET_ERR_MASK(accel_dev);
+- u32 reg;
+-
+ if (!(iastatssm & ADF_GEN4_IAINTSTATSSM_SSMSOFTERRORPARITY_BIT))
+- return false;
+-
+- reg = ADF_CSR_RD(csr, ADF_GEN4_SSMSOFTERRORPARITY_SRC);
+- reg &= ADF_GEN4_SSMSOFTERRORPARITY_SRC_BIT;
+- if (reg) {
+- ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+- ADF_CSR_WR(csr, ADF_GEN4_SSMSOFTERRORPARITY_SRC, reg);
+- }
+-
+- reg = ADF_CSR_RD(csr, ADF_GEN4_SSMSOFTERRORPARITY_ATH_CPH);
+- reg &= err_mask->parerr_ath_cph_mask;
+- if (reg) {
+- ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+- ADF_CSR_WR(csr, ADF_GEN4_SSMSOFTERRORPARITY_ATH_CPH, reg);
+- }
+-
+- reg = ADF_CSR_RD(csr, ADF_GEN4_SSMSOFTERRORPARITY_CPR_XLT);
+- reg &= err_mask->parerr_cpr_xlt_mask;
+- if (reg) {
+- ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+- ADF_CSR_WR(csr, ADF_GEN4_SSMSOFTERRORPARITY_CPR_XLT, reg);
+- }
+-
+- reg = ADF_CSR_RD(csr, ADF_GEN4_SSMSOFTERRORPARITY_DCPR_UCS);
+- reg &= err_mask->parerr_dcpr_ucs_mask;
+- if (reg) {
+- ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+- ADF_CSR_WR(csr, ADF_GEN4_SSMSOFTERRORPARITY_DCPR_UCS, reg);
+- }
+-
+- reg = ADF_CSR_RD(csr, ADF_GEN4_SSMSOFTERRORPARITY_PKE);
+- reg &= err_mask->parerr_pke_mask;
+- if (reg) {
+- ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+- ADF_CSR_WR(csr, ADF_GEN4_SSMSOFTERRORPARITY_PKE, reg);
+- }
+-
+- if (err_mask->parerr_wat_wcp_mask) {
+- reg = ADF_CSR_RD(csr, ADF_GEN4_SSMSOFTERRORPARITY_WAT_WCP);
+- reg &= err_mask->parerr_wat_wcp_mask;
+- if (reg) {
+- ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+- ADF_CSR_WR(csr, ADF_GEN4_SSMSOFTERRORPARITY_WAT_WCP,
+- reg);
+- }
+- }
++ return;
+
++ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+ dev_err(&GET_DEV(accel_dev), "Slice ssm soft parity error reported");
+
+- return false;
++ return;
+ }
+
+ static bool adf_handle_ser_err_ssmsh(struct adf_accel_dev *accel_dev,
+@@ -1171,8 +1124,8 @@ static bool adf_handle_iaintstatssm(struct adf_accel_dev *accel_dev,
+ reset_required |= adf_handle_slice_hang_error(accel_dev, csr, iastatssm);
+ reset_required |= adf_handle_spppar_err(accel_dev, csr, iastatssm);
+ reset_required |= adf_handle_ssmcpppar_err(accel_dev, csr, iastatssm);
+- reset_required |= adf_handle_rf_parr_err(accel_dev, csr, iastatssm);
+ reset_required |= adf_handle_ser_err_ssmsh(accel_dev, csr, iastatssm);
++ adf_handle_rf_parr_err(accel_dev, csr, iastatssm);
+
+ ADF_CSR_WR(csr, ADF_GEN4_IAINTSTATSSM, iastatssm);
+
+diff --git a/drivers/crypto/nx/nx-common-pseries.c b/drivers/crypto/nx/nx-common-pseries.c
+index 1660c5cf3641c2..56129bdf53ab03 100644
+--- a/drivers/crypto/nx/nx-common-pseries.c
++++ b/drivers/crypto/nx/nx-common-pseries.c
+@@ -1145,6 +1145,7 @@ static void __init nxcop_get_capabilities(void)
+ {
+ struct hv_vas_all_caps *hv_caps;
+ struct hv_nx_cop_caps *hv_nxc;
++ u64 feat;
+ int rc;
+
+ hv_caps = kmalloc(sizeof(*hv_caps), GFP_KERNEL);
+@@ -1155,27 +1156,26 @@ static void __init nxcop_get_capabilities(void)
+ */
+ rc = h_query_vas_capabilities(H_QUERY_NX_CAPABILITIES, 0,
+ (u64)virt_to_phys(hv_caps));
++ if (!rc)
++ feat = be64_to_cpu(hv_caps->feat_type);
++ kfree(hv_caps);
+ if (rc)
+- goto out;
++ return;
++ if (!(feat & VAS_NX_GZIP_FEAT_BIT))
++ return;
+
+- caps_feat = be64_to_cpu(hv_caps->feat_type);
+ /*
+ * NX-GZIP feature available
+ */
+- if (caps_feat & VAS_NX_GZIP_FEAT_BIT) {
+- hv_nxc = kmalloc(sizeof(*hv_nxc), GFP_KERNEL);
+- if (!hv_nxc)
+- goto out;
+- /*
+- * Get capabilities for NX-GZIP feature
+- */
+- rc = h_query_vas_capabilities(H_QUERY_NX_CAPABILITIES,
+- VAS_NX_GZIP_FEAT,
+- (u64)virt_to_phys(hv_nxc));
+- } else {
+- pr_err("NX-GZIP feature is not available\n");
+- rc = -EINVAL;
+- }
++ hv_nxc = kmalloc(sizeof(*hv_nxc), GFP_KERNEL);
++ if (!hv_nxc)
++ return;
++ /*
++ * Get capabilities for NX-GZIP feature
++ */
++ rc = h_query_vas_capabilities(H_QUERY_NX_CAPABILITIES,
++ VAS_NX_GZIP_FEAT,
++ (u64)virt_to_phys(hv_nxc));
+
+ if (!rc) {
+ nx_cop_caps.descriptor = be64_to_cpu(hv_nxc->descriptor);
+@@ -1185,13 +1185,10 @@ static void __init nxcop_get_capabilities(void)
+ be64_to_cpu(hv_nxc->min_compress_len);
+ nx_cop_caps.min_decompress_len =
+ be64_to_cpu(hv_nxc->min_decompress_len);
+- } else {
+- caps_feat = 0;
++ caps_feat = feat;
+ }
+
+ kfree(hv_nxc);
+-out:
+- kfree(hv_caps);
+ }
+
+ static const struct vio_device_id nx842_vio_driver_ids[] = {
+diff --git a/drivers/crypto/tegra/tegra-se-aes.c b/drivers/crypto/tegra/tegra-se-aes.c
+index d734c9a567868f..ca9d0cca1f748e 100644
+--- a/drivers/crypto/tegra/tegra-se-aes.c
++++ b/drivers/crypto/tegra/tegra-se-aes.c
+@@ -28,6 +28,9 @@ struct tegra_aes_ctx {
+ u32 ivsize;
+ u32 key1_id;
+ u32 key2_id;
++ u32 keylen;
++ u8 key1[AES_MAX_KEY_SIZE];
++ u8 key2[AES_MAX_KEY_SIZE];
+ };
+
+ struct tegra_aes_reqctx {
+@@ -43,8 +46,9 @@ struct tegra_aead_ctx {
+ struct tegra_se *se;
+ unsigned int authsize;
+ u32 alg;
+- u32 keylen;
+ u32 key_id;
++ u32 keylen;
++ u8 key[AES_MAX_KEY_SIZE];
+ };
+
+ struct tegra_aead_reqctx {
+@@ -56,8 +60,8 @@ struct tegra_aead_reqctx {
+ unsigned int cryptlen;
+ unsigned int authsize;
+ bool encrypt;
+- u32 config;
+ u32 crypto_config;
++ u32 config;
+ u32 key_id;
+ u32 iv[4];
+ u8 authdata[16];
+@@ -67,6 +71,8 @@ struct tegra_cmac_ctx {
+ struct tegra_se *se;
+ unsigned int alg;
+ u32 key_id;
++ u32 keylen;
++ u8 key[AES_MAX_KEY_SIZE];
+ struct crypto_shash *fallback_tfm;
+ };
+
+@@ -260,17 +266,13 @@ static int tegra_aes_do_one_req(struct crypto_engine *engine, void *areq)
+ struct tegra_aes_ctx *ctx = crypto_skcipher_ctx(crypto_skcipher_reqtfm(req));
+ struct tegra_aes_reqctx *rctx = skcipher_request_ctx(req);
+ struct tegra_se *se = ctx->se;
+- unsigned int cmdlen;
++ unsigned int cmdlen, key1_id, key2_id;
+ int ret;
+
+- rctx->datbuf.buf = dma_alloc_coherent(se->dev, SE_AES_BUFLEN,
+- &rctx->datbuf.addr, GFP_KERNEL);
+- if (!rctx->datbuf.buf)
+- return -ENOMEM;
+-
+- rctx->datbuf.size = SE_AES_BUFLEN;
+ rctx->iv = (u32 *)req->iv;
+ rctx->len = req->cryptlen;
++ key1_id = ctx->key1_id;
++ key2_id = ctx->key2_id;
+
+ /* Pad input to AES Block size */
+ if (ctx->alg != SE_ALG_XTS) {
+@@ -278,20 +280,59 @@ static int tegra_aes_do_one_req(struct crypto_engine *engine, void *areq)
+ rctx->len += AES_BLOCK_SIZE - (rctx->len % AES_BLOCK_SIZE);
+ }
+
++ rctx->datbuf.size = rctx->len;
++ rctx->datbuf.buf = dma_alloc_coherent(se->dev, rctx->datbuf.size,
++ &rctx->datbuf.addr, GFP_KERNEL);
++ if (!rctx->datbuf.buf) {
++ ret = -ENOMEM;
++ goto out_finalize;
++ }
++
+ scatterwalk_map_and_copy(rctx->datbuf.buf, req->src, 0, req->cryptlen, 0);
+
++ rctx->config = tegra234_aes_cfg(ctx->alg, rctx->encrypt);
++ rctx->crypto_config = tegra234_aes_crypto_cfg(ctx->alg, rctx->encrypt);
++
++ if (!key1_id) {
++ ret = tegra_key_submit_reserved_aes(ctx->se, ctx->key1,
++ ctx->keylen, ctx->alg, &key1_id);
++ if (ret)
++ goto out;
++ }
++
++ rctx->crypto_config |= SE_AES_KEY_INDEX(key1_id);
++
++ if (ctx->alg == SE_ALG_XTS) {
++ if (!key2_id) {
++ ret = tegra_key_submit_reserved_xts(ctx->se, ctx->key2,
++ ctx->keylen, ctx->alg, &key2_id);
++ if (ret)
++ goto out;
++ }
++
++ rctx->crypto_config |= SE_AES_KEY2_INDEX(key2_id);
++ }
++
+ /* Prepare the command and submit for execution */
+ cmdlen = tegra_aes_prep_cmd(ctx, rctx);
+- ret = tegra_se_host1x_submit(se, cmdlen);
++ ret = tegra_se_host1x_submit(se, se->cmdbuf, cmdlen);
+
+ /* Copy the result */
+ tegra_aes_update_iv(req, ctx);
+ scatterwalk_map_and_copy(rctx->datbuf.buf, req->dst, 0, req->cryptlen, 1);
+
++out:
+ /* Free the buffer */
+- dma_free_coherent(ctx->se->dev, SE_AES_BUFLEN,
++ dma_free_coherent(ctx->se->dev, rctx->datbuf.size,
+ rctx->datbuf.buf, rctx->datbuf.addr);
+
++ if (tegra_key_is_reserved(key1_id))
++ tegra_key_invalidate_reserved(ctx->se, key1_id, ctx->alg);
++
++ if (tegra_key_is_reserved(key2_id))
++ tegra_key_invalidate_reserved(ctx->se, key2_id, ctx->alg);
++
++out_finalize:
+ crypto_finalize_skcipher_request(se->engine, req, ret);
+
+ return 0;
+@@ -313,6 +354,7 @@ static int tegra_aes_cra_init(struct crypto_skcipher *tfm)
+ ctx->se = se_alg->se_dev;
+ ctx->key1_id = 0;
+ ctx->key2_id = 0;
++ ctx->keylen = 0;
+
+ algname = crypto_tfm_alg_name(&tfm->base);
+ ret = se_algname_to_algid(algname);
+@@ -341,13 +383,20 @@ static int tegra_aes_setkey(struct crypto_skcipher *tfm,
+ const u8 *key, u32 keylen)
+ {
+ struct tegra_aes_ctx *ctx = crypto_skcipher_ctx(tfm);
++ int ret;
+
+ if (aes_check_keylen(keylen)) {
+ dev_dbg(ctx->se->dev, "invalid key length (%d)\n", keylen);
+ return -EINVAL;
+ }
+
+- return tegra_key_submit(ctx->se, key, keylen, ctx->alg, &ctx->key1_id);
++ ret = tegra_key_submit(ctx->se, key, keylen, ctx->alg, &ctx->key1_id);
++ if (ret) {
++ ctx->keylen = keylen;
++ memcpy(ctx->key1, key, keylen);
++ }
++
++ return 0;
+ }
+
+ static int tegra_xts_setkey(struct crypto_skcipher *tfm,
+@@ -365,11 +414,17 @@ static int tegra_xts_setkey(struct crypto_skcipher *tfm,
+
+ ret = tegra_key_submit(ctx->se, key, len,
+ ctx->alg, &ctx->key1_id);
+- if (ret)
+- return ret;
++ if (ret) {
++ ctx->keylen = len;
++ memcpy(ctx->key1, key, len);
++ }
+
+- return tegra_key_submit(ctx->se, key + len, len,
++ ret = tegra_key_submit(ctx->se, key + len, len,
+ ctx->alg, &ctx->key2_id);
++ if (ret) {
++ ctx->keylen = len;
++ memcpy(ctx->key2, key + len, len);
++ }
+
+ return 0;
+ }
+@@ -443,13 +498,10 @@ static int tegra_aes_crypt(struct skcipher_request *req, bool encrypt)
+ if (!req->cryptlen)
+ return 0;
+
+- rctx->encrypt = encrypt;
+- rctx->config = tegra234_aes_cfg(ctx->alg, encrypt);
+- rctx->crypto_config = tegra234_aes_crypto_cfg(ctx->alg, encrypt);
+- rctx->crypto_config |= SE_AES_KEY_INDEX(ctx->key1_id);
++ if (ctx->alg == SE_ALG_ECB)
++ req->iv = NULL;
+
+- if (ctx->key2_id)
+- rctx->crypto_config |= SE_AES_KEY2_INDEX(ctx->key2_id);
++ rctx->encrypt = encrypt;
+
+ return crypto_transfer_skcipher_request_to_engine(ctx->se->engine, req);
+ }
+@@ -715,11 +767,11 @@ static int tegra_gcm_do_gmac(struct tegra_aead_ctx *ctx, struct tegra_aead_reqct
+
+ rctx->config = tegra234_aes_cfg(SE_ALG_GMAC, rctx->encrypt);
+ rctx->crypto_config = tegra234_aes_crypto_cfg(SE_ALG_GMAC, rctx->encrypt) |
+- SE_AES_KEY_INDEX(ctx->key_id);
++ SE_AES_KEY_INDEX(rctx->key_id);
+
+ cmdlen = tegra_gmac_prep_cmd(ctx, rctx);
+
+- return tegra_se_host1x_submit(se, cmdlen);
++ return tegra_se_host1x_submit(se, se->cmdbuf, cmdlen);
+ }
+
+ static int tegra_gcm_do_crypt(struct tegra_aead_ctx *ctx, struct tegra_aead_reqctx *rctx)
+@@ -732,11 +784,11 @@ static int tegra_gcm_do_crypt(struct tegra_aead_ctx *ctx, struct tegra_aead_reqc
+
+ rctx->config = tegra234_aes_cfg(SE_ALG_GCM, rctx->encrypt);
+ rctx->crypto_config = tegra234_aes_crypto_cfg(SE_ALG_GCM, rctx->encrypt) |
+- SE_AES_KEY_INDEX(ctx->key_id);
++ SE_AES_KEY_INDEX(rctx->key_id);
+
+ /* Prepare command and submit */
+ cmdlen = tegra_gcm_crypt_prep_cmd(ctx, rctx);
+- ret = tegra_se_host1x_submit(se, cmdlen);
++ ret = tegra_se_host1x_submit(se, se->cmdbuf, cmdlen);
+ if (ret)
+ return ret;
+
+@@ -755,11 +807,11 @@ static int tegra_gcm_do_final(struct tegra_aead_ctx *ctx, struct tegra_aead_reqc
+
+ rctx->config = tegra234_aes_cfg(SE_ALG_GCM_FINAL, rctx->encrypt);
+ rctx->crypto_config = tegra234_aes_crypto_cfg(SE_ALG_GCM_FINAL, rctx->encrypt) |
+- SE_AES_KEY_INDEX(ctx->key_id);
++ SE_AES_KEY_INDEX(rctx->key_id);
+
+ /* Prepare command and submit */
+ cmdlen = tegra_gcm_prep_final_cmd(se, cpuvaddr, rctx);
+- ret = tegra_se_host1x_submit(se, cmdlen);
++ ret = tegra_se_host1x_submit(se, se->cmdbuf, cmdlen);
+ if (ret)
+ return ret;
+
+@@ -886,12 +938,12 @@ static int tegra_ccm_do_cbcmac(struct tegra_aead_ctx *ctx, struct tegra_aead_req
+ rctx->config = tegra234_aes_cfg(SE_ALG_CBC_MAC, rctx->encrypt);
+ rctx->crypto_config = tegra234_aes_crypto_cfg(SE_ALG_CBC_MAC,
+ rctx->encrypt) |
+- SE_AES_KEY_INDEX(ctx->key_id);
++ SE_AES_KEY_INDEX(rctx->key_id);
+
+ /* Prepare command and submit */
+ cmdlen = tegra_cbcmac_prep_cmd(ctx, rctx);
+
+- return tegra_se_host1x_submit(se, cmdlen);
++ return tegra_se_host1x_submit(se, se->cmdbuf, cmdlen);
+ }
+
+ static int tegra_ccm_set_msg_len(u8 *block, unsigned int msglen, int csize)
+@@ -1073,7 +1125,7 @@ static int tegra_ccm_do_ctr(struct tegra_aead_ctx *ctx, struct tegra_aead_reqctx
+
+ rctx->config = tegra234_aes_cfg(SE_ALG_CTR, rctx->encrypt);
+ rctx->crypto_config = tegra234_aes_crypto_cfg(SE_ALG_CTR, rctx->encrypt) |
+- SE_AES_KEY_INDEX(ctx->key_id);
++ SE_AES_KEY_INDEX(rctx->key_id);
+
+ /* Copy authdata in the top of buffer for encryption/decryption */
+ if (rctx->encrypt)
+@@ -1098,7 +1150,7 @@ static int tegra_ccm_do_ctr(struct tegra_aead_ctx *ctx, struct tegra_aead_reqctx
+
+ /* Prepare command and submit */
+ cmdlen = tegra_ctr_prep_cmd(ctx, rctx);
+- ret = tegra_se_host1x_submit(se, cmdlen);
++ ret = tegra_se_host1x_submit(se, se->cmdbuf, cmdlen);
+ if (ret)
+ return ret;
+
+@@ -1117,6 +1169,11 @@ static int tegra_ccm_crypt_init(struct aead_request *req, struct tegra_se *se,
+ rctx->assoclen = req->assoclen;
+ rctx->authsize = crypto_aead_authsize(tfm);
+
++ if (rctx->encrypt)
++ rctx->cryptlen = req->cryptlen;
++ else
++ rctx->cryptlen = req->cryptlen - rctx->authsize;
++
+ memcpy(iv, req->iv, 16);
+
+ ret = tegra_ccm_check_iv(iv);
+@@ -1145,30 +1202,35 @@ static int tegra_ccm_do_one_req(struct crypto_engine *engine, void *areq)
+ struct tegra_se *se = ctx->se;
+ int ret;
+
++ ret = tegra_ccm_crypt_init(req, se, rctx);
++ if (ret)
++ goto out_finalize;
++
++ rctx->key_id = ctx->key_id;
++
+ /* Allocate buffers required */
+- rctx->inbuf.buf = dma_alloc_coherent(ctx->se->dev, SE_AES_BUFLEN,
++ rctx->inbuf.size = rctx->assoclen + rctx->authsize + rctx->cryptlen + 100;
++ rctx->inbuf.buf = dma_alloc_coherent(ctx->se->dev, rctx->inbuf.size,
+ &rctx->inbuf.addr, GFP_KERNEL);
+ if (!rctx->inbuf.buf)
+- return -ENOMEM;
+-
+- rctx->inbuf.size = SE_AES_BUFLEN;
++ goto out_finalize;
+
+- rctx->outbuf.buf = dma_alloc_coherent(ctx->se->dev, SE_AES_BUFLEN,
++ rctx->outbuf.size = rctx->assoclen + rctx->authsize + rctx->cryptlen + 100;
++ rctx->outbuf.buf = dma_alloc_coherent(ctx->se->dev, rctx->outbuf.size,
+ &rctx->outbuf.addr, GFP_KERNEL);
+ if (!rctx->outbuf.buf) {
+ ret = -ENOMEM;
+- goto outbuf_err;
++ goto out_free_inbuf;
+ }
+
+- rctx->outbuf.size = SE_AES_BUFLEN;
+-
+- ret = tegra_ccm_crypt_init(req, se, rctx);
+- if (ret)
+- goto out;
++ if (!ctx->key_id) {
++ ret = tegra_key_submit_reserved_aes(ctx->se, ctx->key,
++ ctx->keylen, ctx->alg, &rctx->key_id);
++ if (ret)
++ goto out;
++ }
+
+ if (rctx->encrypt) {
+- rctx->cryptlen = req->cryptlen;
+-
+ /* CBC MAC Operation */
+ ret = tegra_ccm_compute_auth(ctx, rctx);
+ if (ret)
+@@ -1179,8 +1241,6 @@ static int tegra_ccm_do_one_req(struct crypto_engine *engine, void *areq)
+ if (ret)
+ goto out;
+ } else {
+- rctx->cryptlen = req->cryptlen - ctx->authsize;
+-
+ /* CTR operation */
+ ret = tegra_ccm_do_ctr(ctx, rctx);
+ if (ret)
+@@ -1193,13 +1253,17 @@ static int tegra_ccm_do_one_req(struct crypto_engine *engine, void *areq)
+ }
+
+ out:
+- dma_free_coherent(ctx->se->dev, SE_AES_BUFLEN,
++ dma_free_coherent(ctx->se->dev, rctx->inbuf.size,
+ rctx->outbuf.buf, rctx->outbuf.addr);
+
+-outbuf_err:
+- dma_free_coherent(ctx->se->dev, SE_AES_BUFLEN,
++out_free_inbuf:
++ dma_free_coherent(ctx->se->dev, rctx->outbuf.size,
+ rctx->inbuf.buf, rctx->inbuf.addr);
+
++ if (tegra_key_is_reserved(rctx->key_id))
++ tegra_key_invalidate_reserved(ctx->se, rctx->key_id, ctx->alg);
++
++out_finalize:
+ crypto_finalize_aead_request(ctx->se->engine, req, ret);
+
+ return 0;
+@@ -1213,23 +1277,6 @@ static int tegra_gcm_do_one_req(struct crypto_engine *engine, void *areq)
+ struct tegra_aead_reqctx *rctx = aead_request_ctx(req);
+ int ret;
+
+- /* Allocate buffers required */
+- rctx->inbuf.buf = dma_alloc_coherent(ctx->se->dev, SE_AES_BUFLEN,
+- &rctx->inbuf.addr, GFP_KERNEL);
+- if (!rctx->inbuf.buf)
+- return -ENOMEM;
+-
+- rctx->inbuf.size = SE_AES_BUFLEN;
+-
+- rctx->outbuf.buf = dma_alloc_coherent(ctx->se->dev, SE_AES_BUFLEN,
+- &rctx->outbuf.addr, GFP_KERNEL);
+- if (!rctx->outbuf.buf) {
+- ret = -ENOMEM;
+- goto outbuf_err;
+- }
+-
+- rctx->outbuf.size = SE_AES_BUFLEN;
+-
+ rctx->src_sg = req->src;
+ rctx->dst_sg = req->dst;
+ rctx->assoclen = req->assoclen;
+@@ -1243,6 +1290,32 @@ static int tegra_gcm_do_one_req(struct crypto_engine *engine, void *areq)
+ memcpy(rctx->iv, req->iv, GCM_AES_IV_SIZE);
+ rctx->iv[3] = (1 << 24);
+
++ rctx->key_id = ctx->key_id;
++
++ /* Allocate buffers required */
++ rctx->inbuf.size = rctx->assoclen + rctx->authsize + rctx->cryptlen;
++ rctx->inbuf.buf = dma_alloc_coherent(ctx->se->dev, rctx->inbuf.size,
++ &rctx->inbuf.addr, GFP_KERNEL);
++ if (!rctx->inbuf.buf) {
++ ret = -ENOMEM;
++ goto out_finalize;
++ }
++
++ rctx->outbuf.size = rctx->assoclen + rctx->authsize + rctx->cryptlen;
++ rctx->outbuf.buf = dma_alloc_coherent(ctx->se->dev, rctx->outbuf.size,
++ &rctx->outbuf.addr, GFP_KERNEL);
++ if (!rctx->outbuf.buf) {
++ ret = -ENOMEM;
++ goto out_free_inbuf;
++ }
++
++ if (!ctx->key_id) {
++ ret = tegra_key_submit_reserved_aes(ctx->se, ctx->key,
++ ctx->keylen, ctx->alg, &rctx->key_id);
++ if (ret)
++ goto out;
++ }
++
+ /* If there is associated data perform GMAC operation */
+ if (rctx->assoclen) {
+ ret = tegra_gcm_do_gmac(ctx, rctx);
+@@ -1266,14 +1339,17 @@ static int tegra_gcm_do_one_req(struct crypto_engine *engine, void *areq)
+ ret = tegra_gcm_do_verify(ctx->se, rctx);
+
+ out:
+- dma_free_coherent(ctx->se->dev, SE_AES_BUFLEN,
++ dma_free_coherent(ctx->se->dev, rctx->outbuf.size,
+ rctx->outbuf.buf, rctx->outbuf.addr);
+
+-outbuf_err:
+- dma_free_coherent(ctx->se->dev, SE_AES_BUFLEN,
++out_free_inbuf:
++ dma_free_coherent(ctx->se->dev, rctx->inbuf.size,
+ rctx->inbuf.buf, rctx->inbuf.addr);
+
+- /* Finalize the request if there are no errors */
++ if (tegra_key_is_reserved(rctx->key_id))
++ tegra_key_invalidate_reserved(ctx->se, rctx->key_id, ctx->alg);
++
++out_finalize:
+ crypto_finalize_aead_request(ctx->se->engine, req, ret);
+
+ return 0;
+@@ -1295,6 +1371,7 @@ static int tegra_aead_cra_init(struct crypto_aead *tfm)
+
+ ctx->se = se_alg->se_dev;
+ ctx->key_id = 0;
++ ctx->keylen = 0;
+
+ ret = se_algname_to_algid(algname);
+ if (ret < 0) {
+@@ -1376,13 +1453,20 @@ static int tegra_aead_setkey(struct crypto_aead *tfm,
+ const u8 *key, u32 keylen)
+ {
+ struct tegra_aead_ctx *ctx = crypto_aead_ctx(tfm);
++ int ret;
+
+ if (aes_check_keylen(keylen)) {
+ dev_dbg(ctx->se->dev, "invalid key length (%d)\n", keylen);
+ return -EINVAL;
+ }
+
+- return tegra_key_submit(ctx->se, key, keylen, ctx->alg, &ctx->key_id);
++ ret = tegra_key_submit(ctx->se, key, keylen, ctx->alg, &ctx->key_id);
++ if (ret) {
++ ctx->keylen = keylen;
++ memcpy(ctx->key, key, keylen);
++ }
++
++ return 0;
+ }
+
+ static unsigned int tegra_cmac_prep_cmd(struct tegra_cmac_ctx *ctx,
+@@ -1456,6 +1540,35 @@ static void tegra_cmac_paste_result(struct tegra_se *se, struct tegra_cmac_reqct
+ se->base + se->hw->regs->result + (i * 4));
+ }
+
++static int tegra_cmac_do_init(struct ahash_request *req)
++{
++ struct tegra_cmac_reqctx *rctx = ahash_request_ctx(req);
++ struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
++ struct tegra_cmac_ctx *ctx = crypto_ahash_ctx(tfm);
++ struct tegra_se *se = ctx->se;
++ int i;
++
++ rctx->total_len = 0;
++ rctx->datbuf.size = 0;
++ rctx->residue.size = 0;
++ rctx->key_id = ctx->key_id;
++ rctx->task |= SHA_FIRST;
++ rctx->blk_size = crypto_ahash_blocksize(tfm);
++
++ rctx->residue.buf = dma_alloc_coherent(se->dev, rctx->blk_size * 2,
++ &rctx->residue.addr, GFP_KERNEL);
++ if (!rctx->residue.buf)
++ return -ENOMEM;
++
++ rctx->residue.size = 0;
++
++ /* Clear any previous result */
++ for (i = 0; i < CMAC_RESULT_REG_COUNT; i++)
++ writel(0, se->base + se->hw->regs->result + (i * 4));
++
++ return 0;
++}
++
+ static int tegra_cmac_do_update(struct ahash_request *req)
+ {
+ struct tegra_cmac_reqctx *rctx = ahash_request_ctx(req);
+@@ -1483,7 +1596,7 @@ static int tegra_cmac_do_update(struct ahash_request *req)
+ rctx->datbuf.size = (req->nbytes + rctx->residue.size) - nresidue;
+ rctx->total_len += rctx->datbuf.size;
+ rctx->config = tegra234_aes_cfg(SE_ALG_CMAC, 0);
+- rctx->crypto_config = SE_AES_KEY_INDEX(ctx->key_id);
++ rctx->crypto_config = SE_AES_KEY_INDEX(rctx->key_id);
+
+ /*
+ * Keep one block and residue bytes in residue and
+@@ -1497,6 +1610,11 @@ static int tegra_cmac_do_update(struct ahash_request *req)
+ return 0;
+ }
+
++ rctx->datbuf.buf = dma_alloc_coherent(se->dev, rctx->datbuf.size,
++ &rctx->datbuf.addr, GFP_KERNEL);
++ if (!rctx->datbuf.buf)
++ return -ENOMEM;
++
+ /* Copy the previous residue first */
+ if (rctx->residue.size)
+ memcpy(rctx->datbuf.buf, rctx->residue.buf, rctx->residue.size);
+@@ -1511,23 +1629,19 @@ static int tegra_cmac_do_update(struct ahash_request *req)
+ rctx->residue.size = nresidue;
+
+ /*
+- * If this is not the first 'update' call, paste the previous copied
++ * If this is not the first task, paste the previous copied
+ * intermediate results to the registers so that it gets picked up.
+- * This is to support the import/export functionality.
+ */
+ if (!(rctx->task & SHA_FIRST))
+ tegra_cmac_paste_result(ctx->se, rctx);
+
+ cmdlen = tegra_cmac_prep_cmd(ctx, rctx);
++ ret = tegra_se_host1x_submit(se, se->cmdbuf, cmdlen);
+
+- ret = tegra_se_host1x_submit(se, cmdlen);
+- /*
+- * If this is not the final update, copy the intermediate results
+- * from the registers so that it can be used in the next 'update'
+- * call. This is to support the import/export functionality.
+- */
+- if (!(rctx->task & SHA_FINAL))
+- tegra_cmac_copy_result(ctx->se, rctx);
++ tegra_cmac_copy_result(ctx->se, rctx);
++
++ dma_free_coherent(ctx->se->dev, rctx->datbuf.size,
++ rctx->datbuf.buf, rctx->datbuf.addr);
+
+ return ret;
+ }
+@@ -1543,17 +1657,34 @@ static int tegra_cmac_do_final(struct ahash_request *req)
+
+ if (!req->nbytes && !rctx->total_len && ctx->fallback_tfm) {
+ return crypto_shash_tfm_digest(ctx->fallback_tfm,
+- rctx->datbuf.buf, 0, req->result);
++ NULL, 0, req->result);
++ }
++
++ if (rctx->residue.size) {
++ rctx->datbuf.buf = dma_alloc_coherent(se->dev, rctx->residue.size,
++ &rctx->datbuf.addr, GFP_KERNEL);
++ if (!rctx->datbuf.buf) {
++ ret = -ENOMEM;
++ goto out_free;
++ }
++
++ memcpy(rctx->datbuf.buf, rctx->residue.buf, rctx->residue.size);
+ }
+
+- memcpy(rctx->datbuf.buf, rctx->residue.buf, rctx->residue.size);
+ rctx->datbuf.size = rctx->residue.size;
+ rctx->total_len += rctx->residue.size;
+ rctx->config = tegra234_aes_cfg(SE_ALG_CMAC, 0);
+
++ /*
++ * If this is not the first task, paste the previous copied
++ * intermediate results to the registers so that it gets picked up.
++ */
++ if (!(rctx->task & SHA_FIRST))
++ tegra_cmac_paste_result(ctx->se, rctx);
++
+ /* Prepare command and submit */
+ cmdlen = tegra_cmac_prep_cmd(ctx, rctx);
+- ret = tegra_se_host1x_submit(se, cmdlen);
++ ret = tegra_se_host1x_submit(se, se->cmdbuf, cmdlen);
+ if (ret)
+ goto out;
+
+@@ -1565,8 +1696,10 @@ static int tegra_cmac_do_final(struct ahash_request *req)
+ writel(0, se->base + se->hw->regs->result + (i * 4));
+
+ out:
+- dma_free_coherent(se->dev, SE_SHA_BUFLEN,
+- rctx->datbuf.buf, rctx->datbuf.addr);
++ if (rctx->residue.size)
++ dma_free_coherent(se->dev, rctx->datbuf.size,
++ rctx->datbuf.buf, rctx->datbuf.addr);
++out_free:
+ dma_free_coherent(se->dev, crypto_ahash_blocksize(tfm) * 2,
+ rctx->residue.buf, rctx->residue.addr);
+ return ret;
+@@ -1579,17 +1712,41 @@ static int tegra_cmac_do_one_req(struct crypto_engine *engine, void *areq)
+ struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+ struct tegra_cmac_ctx *ctx = crypto_ahash_ctx(tfm);
+ struct tegra_se *se = ctx->se;
+- int ret;
++ int ret = 0;
++
++ if (rctx->task & SHA_INIT) {
++ ret = tegra_cmac_do_init(req);
++ if (ret)
++ goto out;
++
++ rctx->task &= ~SHA_INIT;
++ }
++
++ if (!ctx->key_id) {
++ ret = tegra_key_submit_reserved_aes(ctx->se, ctx->key,
++ ctx->keylen, ctx->alg, &rctx->key_id);
++ if (ret)
++ goto out;
++ }
+
+ if (rctx->task & SHA_UPDATE) {
+ ret = tegra_cmac_do_update(req);
++ if (ret)
++ goto out;
++
+ rctx->task &= ~SHA_UPDATE;
+ }
+
+ if (rctx->task & SHA_FINAL) {
+ ret = tegra_cmac_do_final(req);
++ if (ret)
++ goto out;
++
+ rctx->task &= ~SHA_FINAL;
+ }
++out:
++ if (tegra_key_is_reserved(rctx->key_id))
++ tegra_key_invalidate_reserved(ctx->se, rctx->key_id, ctx->alg);
+
+ crypto_finalize_hash_request(se->engine, req, ret);
+
+@@ -1631,6 +1788,7 @@ static int tegra_cmac_cra_init(struct crypto_tfm *tfm)
+
+ ctx->se = se_alg->se_dev;
+ ctx->key_id = 0;
++ ctx->keylen = 0;
+
+ ret = se_algname_to_algid(algname);
+ if (ret < 0) {
+@@ -1655,51 +1813,11 @@ static void tegra_cmac_cra_exit(struct crypto_tfm *tfm)
+ tegra_key_invalidate(ctx->se, ctx->key_id, ctx->alg);
+ }
+
+-static int tegra_cmac_init(struct ahash_request *req)
+-{
+- struct tegra_cmac_reqctx *rctx = ahash_request_ctx(req);
+- struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+- struct tegra_cmac_ctx *ctx = crypto_ahash_ctx(tfm);
+- struct tegra_se *se = ctx->se;
+- int i;
+-
+- rctx->total_len = 0;
+- rctx->datbuf.size = 0;
+- rctx->residue.size = 0;
+- rctx->task = SHA_FIRST;
+- rctx->blk_size = crypto_ahash_blocksize(tfm);
+-
+- rctx->residue.buf = dma_alloc_coherent(se->dev, rctx->blk_size * 2,
+- &rctx->residue.addr, GFP_KERNEL);
+- if (!rctx->residue.buf)
+- goto resbuf_fail;
+-
+- rctx->residue.size = 0;
+-
+- rctx->datbuf.buf = dma_alloc_coherent(se->dev, SE_SHA_BUFLEN,
+- &rctx->datbuf.addr, GFP_KERNEL);
+- if (!rctx->datbuf.buf)
+- goto datbuf_fail;
+-
+- rctx->datbuf.size = 0;
+-
+- /* Clear any previous result */
+- for (i = 0; i < CMAC_RESULT_REG_COUNT; i++)
+- writel(0, se->base + se->hw->regs->result + (i * 4));
+-
+- return 0;
+-
+-datbuf_fail:
+- dma_free_coherent(se->dev, rctx->blk_size, rctx->residue.buf,
+- rctx->residue.addr);
+-resbuf_fail:
+- return -ENOMEM;
+-}
+-
+ static int tegra_cmac_setkey(struct crypto_ahash *tfm, const u8 *key,
+ unsigned int keylen)
+ {
+ struct tegra_cmac_ctx *ctx = crypto_ahash_ctx(tfm);
++ int ret;
+
+ if (aes_check_keylen(keylen)) {
+ dev_dbg(ctx->se->dev, "invalid key length (%d)\n", keylen);
+@@ -1709,7 +1827,24 @@ static int tegra_cmac_setkey(struct crypto_ahash *tfm, const u8 *key,
+ if (ctx->fallback_tfm)
+ crypto_shash_setkey(ctx->fallback_tfm, key, keylen);
+
+- return tegra_key_submit(ctx->se, key, keylen, ctx->alg, &ctx->key_id);
++ ret = tegra_key_submit(ctx->se, key, keylen, ctx->alg, &ctx->key_id);
++ if (ret) {
++ ctx->keylen = keylen;
++ memcpy(ctx->key, key, keylen);
++ }
++
++ return 0;
++}
++
++static int tegra_cmac_init(struct ahash_request *req)
++{
++ struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
++ struct tegra_cmac_ctx *ctx = crypto_ahash_ctx(tfm);
++ struct tegra_cmac_reqctx *rctx = ahash_request_ctx(req);
++
++ rctx->task = SHA_INIT;
++
++ return crypto_transfer_hash_request_to_engine(ctx->se->engine, req);
+ }
+
+ static int tegra_cmac_update(struct ahash_request *req)
+@@ -1750,13 +1885,9 @@ static int tegra_cmac_digest(struct ahash_request *req)
+ struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+ struct tegra_cmac_ctx *ctx = crypto_ahash_ctx(tfm);
+ struct tegra_cmac_reqctx *rctx = ahash_request_ctx(req);
+- int ret;
+
+- ret = tegra_cmac_init(req);
+- if (ret)
+- return ret;
++ rctx->task |= SHA_INIT | SHA_UPDATE | SHA_FINAL;
+
+- rctx->task |= SHA_UPDATE | SHA_FINAL;
+ return crypto_transfer_hash_request_to_engine(ctx->se->engine, req);
+ }
+
+diff --git a/drivers/crypto/tegra/tegra-se-hash.c b/drivers/crypto/tegra/tegra-se-hash.c
+index 0b5cdd5676b17e..42d007b7af45dc 100644
+--- a/drivers/crypto/tegra/tegra-se-hash.c
++++ b/drivers/crypto/tegra/tegra-se-hash.c
+@@ -34,6 +34,7 @@ struct tegra_sha_reqctx {
+ struct tegra_se_datbuf datbuf;
+ struct tegra_se_datbuf residue;
+ struct tegra_se_datbuf digest;
++ struct tegra_se_datbuf intr_res;
+ unsigned int alg;
+ unsigned int config;
+ unsigned int total_len;
+@@ -211,9 +212,62 @@ static int tegra_sha_fallback_export(struct ahash_request *req, void *out)
+ return crypto_ahash_export(&rctx->fallback_req, out);
+ }
+
+-static int tegra_sha_prep_cmd(struct tegra_se *se, u32 *cpuvaddr,
++static int tegra_se_insert_hash_result(struct tegra_sha_ctx *ctx, u32 *cpuvaddr,
++ struct tegra_sha_reqctx *rctx)
++{
++ __be32 *res_be = (__be32 *)rctx->intr_res.buf;
++ u32 *res = (u32 *)rctx->intr_res.buf;
++ int i = 0, j;
++
++ cpuvaddr[i++] = 0;
++ cpuvaddr[i++] = host1x_opcode_setpayload(HASH_RESULT_REG_COUNT);
++ cpuvaddr[i++] = se_host1x_opcode_incr_w(SE_SHA_HASH_RESULT);
++
++ for (j = 0; j < HASH_RESULT_REG_COUNT; j++) {
++ int idx = j;
++
++ /*
++ * The initial, intermediate and final hash value of SHA-384, SHA-512
++ * in SHA_HASH_RESULT registers follow the below layout of bytes.
++ *
++ * +---------------+------------+
++ * | HASH_RESULT_0 | B4...B7 |
++ * +---------------+------------+
++ * | HASH_RESULT_1 | B0...B3 |
++ * +---------------+------------+
++ * | HASH_RESULT_2 | B12...B15 |
++ * +---------------+------------+
++ * | HASH_RESULT_3 | B8...B11 |
++ * +---------------+------------+
++ * | ...... |
++ * +---------------+------------+
++ * | HASH_RESULT_14| B60...B63 |
++ * +---------------+------------+
++ * | HASH_RESULT_15| B56...B59 |
++ * +---------------+------------+
++ *
++ */
++ if (ctx->alg == SE_ALG_SHA384 || ctx->alg == SE_ALG_SHA512)
++ idx = (j % 2) ? j - 1 : j + 1;
++
++ /* For SHA-1, SHA-224, SHA-256, SHA-384, SHA-512 the initial
++ * intermediate and final hash value when stored in
++ * SHA_HASH_RESULT registers, the byte order is NOT in
++ * little-endian.
++ */
++ if (ctx->alg <= SE_ALG_SHA512)
++ cpuvaddr[i++] = be32_to_cpu(res_be[idx]);
++ else
++ cpuvaddr[i++] = res[idx];
++ }
++
++ return i;
++}
++
++static int tegra_sha_prep_cmd(struct tegra_sha_ctx *ctx, u32 *cpuvaddr,
+ struct tegra_sha_reqctx *rctx)
+ {
++ struct tegra_se *se = ctx->se;
+ u64 msg_len, msg_left;
+ int i = 0;
+
+@@ -241,7 +295,7 @@ static int tegra_sha_prep_cmd(struct tegra_se *se, u32 *cpuvaddr,
+ cpuvaddr[i++] = upper_32_bits(msg_left);
+ cpuvaddr[i++] = 0;
+ cpuvaddr[i++] = 0;
+- cpuvaddr[i++] = host1x_opcode_setpayload(6);
++ cpuvaddr[i++] = host1x_opcode_setpayload(2);
+ cpuvaddr[i++] = se_host1x_opcode_incr_w(SE_SHA_CFG);
+ cpuvaddr[i++] = rctx->config;
+
+@@ -249,15 +303,29 @@ static int tegra_sha_prep_cmd(struct tegra_se *se, u32 *cpuvaddr,
+ cpuvaddr[i++] = SE_SHA_TASK_HASH_INIT;
+ rctx->task &= ~SHA_FIRST;
+ } else {
+- cpuvaddr[i++] = 0;
++ /*
++ * If it isn't the first task, program the HASH_RESULT register
++ * with the intermediate result from the previous task
++ */
++ i += tegra_se_insert_hash_result(ctx, cpuvaddr + i, rctx);
+ }
+
++ cpuvaddr[i++] = host1x_opcode_setpayload(4);
++ cpuvaddr[i++] = se_host1x_opcode_incr_w(SE_SHA_IN_ADDR);
+ cpuvaddr[i++] = rctx->datbuf.addr;
+ cpuvaddr[i++] = (u32)(SE_ADDR_HI_MSB(upper_32_bits(rctx->datbuf.addr)) |
+ SE_ADDR_HI_SZ(rctx->datbuf.size));
+- cpuvaddr[i++] = rctx->digest.addr;
+- cpuvaddr[i++] = (u32)(SE_ADDR_HI_MSB(upper_32_bits(rctx->digest.addr)) |
+- SE_ADDR_HI_SZ(rctx->digest.size));
++
++ if (rctx->task & SHA_UPDATE) {
++ cpuvaddr[i++] = rctx->intr_res.addr;
++ cpuvaddr[i++] = (u32)(SE_ADDR_HI_MSB(upper_32_bits(rctx->intr_res.addr)) |
++ SE_ADDR_HI_SZ(rctx->intr_res.size));
++ } else {
++ cpuvaddr[i++] = rctx->digest.addr;
++ cpuvaddr[i++] = (u32)(SE_ADDR_HI_MSB(upper_32_bits(rctx->digest.addr)) |
++ SE_ADDR_HI_SZ(rctx->digest.size));
++ }
++
+ if (rctx->key_id) {
+ cpuvaddr[i++] = host1x_opcode_setpayload(1);
+ cpuvaddr[i++] = se_host1x_opcode_nonincr_w(SE_SHA_CRYPTO_CFG);
+@@ -266,42 +334,72 @@ static int tegra_sha_prep_cmd(struct tegra_se *se, u32 *cpuvaddr,
+
+ cpuvaddr[i++] = host1x_opcode_setpayload(1);
+ cpuvaddr[i++] = se_host1x_opcode_nonincr_w(SE_SHA_OPERATION);
+- cpuvaddr[i++] = SE_SHA_OP_WRSTALL |
+- SE_SHA_OP_START |
++ cpuvaddr[i++] = SE_SHA_OP_WRSTALL | SE_SHA_OP_START |
+ SE_SHA_OP_LASTBUF;
+ cpuvaddr[i++] = se_host1x_opcode_nonincr(host1x_uclass_incr_syncpt_r(), 1);
+ cpuvaddr[i++] = host1x_uclass_incr_syncpt_cond_f(1) |
+ host1x_uclass_incr_syncpt_indx_f(se->syncpt_id);
+
+- dev_dbg(se->dev, "msg len %llu msg left %llu cfg %#x",
+- msg_len, msg_left, rctx->config);
++ dev_dbg(se->dev, "msg len %llu msg left %llu sz %zd cfg %#x",
++ msg_len, msg_left, rctx->datbuf.size, rctx->config);
+
+ return i;
+ }
+
+-static void tegra_sha_copy_hash_result(struct tegra_se *se, struct tegra_sha_reqctx *rctx)
++static int tegra_sha_do_init(struct ahash_request *req)
+ {
+- int i;
++ struct tegra_sha_reqctx *rctx = ahash_request_ctx(req);
++ struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
++ struct tegra_sha_ctx *ctx = crypto_ahash_ctx(tfm);
++ struct tegra_se *se = ctx->se;
+
+- for (i = 0; i < HASH_RESULT_REG_COUNT; i++)
+- rctx->result[i] = readl(se->base + se->hw->regs->result + (i * 4));
+-}
++ if (ctx->fallback)
++ return tegra_sha_fallback_init(req);
+
+-static void tegra_sha_paste_hash_result(struct tegra_se *se, struct tegra_sha_reqctx *rctx)
+-{
+- int i;
++ rctx->total_len = 0;
++ rctx->datbuf.size = 0;
++ rctx->residue.size = 0;
++ rctx->key_id = ctx->key_id;
++ rctx->task |= SHA_FIRST;
++ rctx->alg = ctx->alg;
++ rctx->blk_size = crypto_ahash_blocksize(tfm);
++ rctx->digest.size = crypto_ahash_digestsize(tfm);
++
++ rctx->digest.buf = dma_alloc_coherent(se->dev, rctx->digest.size,
++ &rctx->digest.addr, GFP_KERNEL);
++ if (!rctx->digest.buf)
++ goto digbuf_fail;
++
++ rctx->residue.buf = dma_alloc_coherent(se->dev, rctx->blk_size,
++ &rctx->residue.addr, GFP_KERNEL);
++ if (!rctx->residue.buf)
++ goto resbuf_fail;
++
++ rctx->intr_res.size = HASH_RESULT_REG_COUNT * 4;
++ rctx->intr_res.buf = dma_alloc_coherent(se->dev, rctx->intr_res.size,
++ &rctx->intr_res.addr, GFP_KERNEL);
++ if (!rctx->intr_res.buf)
++ goto intr_res_fail;
++
++ return 0;
+
+- for (i = 0; i < HASH_RESULT_REG_COUNT; i++)
+- writel(rctx->result[i],
+- se->base + se->hw->regs->result + (i * 4));
++intr_res_fail:
++ dma_free_coherent(se->dev, rctx->residue.size, rctx->residue.buf,
++ rctx->residue.addr);
++resbuf_fail:
++ dma_free_coherent(se->dev, rctx->digest.size, rctx->digest.buf,
++ rctx->digest.addr);
++digbuf_fail:
++ return -ENOMEM;
+ }
+
+ static int tegra_sha_do_update(struct ahash_request *req)
+ {
+ struct tegra_sha_ctx *ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(req));
+ struct tegra_sha_reqctx *rctx = ahash_request_ctx(req);
++ struct tegra_se *se = ctx->se;
+ unsigned int nblks, nresidue, size, ret;
+- u32 *cpuvaddr = ctx->se->cmdbuf->addr;
++ u32 *cpuvaddr = se->cmdbuf->addr;
+
+ nresidue = (req->nbytes + rctx->residue.size) % rctx->blk_size;
+ nblks = (req->nbytes + rctx->residue.size) / rctx->blk_size;
+@@ -317,7 +415,6 @@ static int tegra_sha_do_update(struct ahash_request *req)
+
+ rctx->src_sg = req->src;
+ rctx->datbuf.size = (req->nbytes + rctx->residue.size) - nresidue;
+- rctx->total_len += rctx->datbuf.size;
+
+ /*
+ * If nbytes are less than a block size, copy it residue and
+@@ -326,11 +423,16 @@ static int tegra_sha_do_update(struct ahash_request *req)
+ if (nblks < 1) {
+ scatterwalk_map_and_copy(rctx->residue.buf + rctx->residue.size,
+ rctx->src_sg, 0, req->nbytes, 0);
+-
+ rctx->residue.size += req->nbytes;
++
+ return 0;
+ }
+
++ rctx->datbuf.buf = dma_alloc_coherent(se->dev, rctx->datbuf.size,
++ &rctx->datbuf.addr, GFP_KERNEL);
++ if (!rctx->datbuf.buf)
++ return -ENOMEM;
++
+ /* Copy the previous residue first */
+ if (rctx->residue.size)
+ memcpy(rctx->datbuf.buf, rctx->residue.buf, rctx->residue.size);
+@@ -343,29 +445,16 @@ static int tegra_sha_do_update(struct ahash_request *req)
+
+ /* Update residue value with the residue after current block */
+ rctx->residue.size = nresidue;
++ rctx->total_len += rctx->datbuf.size;
+
+ rctx->config = tegra_sha_get_config(rctx->alg) |
+- SE_SHA_DST_HASH_REG;
+-
+- /*
+- * If this is not the first 'update' call, paste the previous copied
+- * intermediate results to the registers so that it gets picked up.
+- * This is to support the import/export functionality.
+- */
+- if (!(rctx->task & SHA_FIRST))
+- tegra_sha_paste_hash_result(ctx->se, rctx);
++ SE_SHA_DST_MEMORY;
+
+- size = tegra_sha_prep_cmd(ctx->se, cpuvaddr, rctx);
++ size = tegra_sha_prep_cmd(ctx, cpuvaddr, rctx);
++ ret = tegra_se_host1x_submit(se, se->cmdbuf, size);
+
+- ret = tegra_se_host1x_submit(ctx->se, size);
+-
+- /*
+- * If this is not the final update, copy the intermediate results
+- * from the registers so that it can be used in the next 'update'
+- * call. This is to support the import/export functionality.
+- */
+- if (!(rctx->task & SHA_FINAL))
+- tegra_sha_copy_hash_result(ctx->se, rctx);
++ dma_free_coherent(se->dev, rctx->datbuf.size,
++ rctx->datbuf.buf, rctx->datbuf.addr);
+
+ return ret;
+ }
+@@ -379,16 +468,25 @@ static int tegra_sha_do_final(struct ahash_request *req)
+ u32 *cpuvaddr = se->cmdbuf->addr;
+ int size, ret = 0;
+
+- memcpy(rctx->datbuf.buf, rctx->residue.buf, rctx->residue.size);
++ if (rctx->residue.size) {
++ rctx->datbuf.buf = dma_alloc_coherent(se->dev, rctx->residue.size,
++ &rctx->datbuf.addr, GFP_KERNEL);
++ if (!rctx->datbuf.buf) {
++ ret = -ENOMEM;
++ goto out_free;
++ }
++
++ memcpy(rctx->datbuf.buf, rctx->residue.buf, rctx->residue.size);
++ }
++
+ rctx->datbuf.size = rctx->residue.size;
+ rctx->total_len += rctx->residue.size;
+
+ rctx->config = tegra_sha_get_config(rctx->alg) |
+ SE_SHA_DST_MEMORY;
+
+- size = tegra_sha_prep_cmd(se, cpuvaddr, rctx);
+-
+- ret = tegra_se_host1x_submit(se, size);
++ size = tegra_sha_prep_cmd(ctx, cpuvaddr, rctx);
++ ret = tegra_se_host1x_submit(se, se->cmdbuf, size);
+ if (ret)
+ goto out;
+
+@@ -396,12 +494,18 @@ static int tegra_sha_do_final(struct ahash_request *req)
+ memcpy(req->result, rctx->digest.buf, rctx->digest.size);
+
+ out:
+- dma_free_coherent(se->dev, SE_SHA_BUFLEN,
+- rctx->datbuf.buf, rctx->datbuf.addr);
++ if (rctx->residue.size)
++ dma_free_coherent(se->dev, rctx->datbuf.size,
++ rctx->datbuf.buf, rctx->datbuf.addr);
++out_free:
+ dma_free_coherent(se->dev, crypto_ahash_blocksize(tfm),
+ rctx->residue.buf, rctx->residue.addr);
+ dma_free_coherent(se->dev, rctx->digest.size, rctx->digest.buf,
+ rctx->digest.addr);
++
++ dma_free_coherent(se->dev, rctx->intr_res.size, rctx->intr_res.buf,
++ rctx->intr_res.addr);
++
+ return ret;
+ }
+
+@@ -414,16 +518,31 @@ static int tegra_sha_do_one_req(struct crypto_engine *engine, void *areq)
+ struct tegra_se *se = ctx->se;
+ int ret = 0;
+
++ if (rctx->task & SHA_INIT) {
++ ret = tegra_sha_do_init(req);
++ if (ret)
++ goto out;
++
++ rctx->task &= ~SHA_INIT;
++ }
++
+ if (rctx->task & SHA_UPDATE) {
+ ret = tegra_sha_do_update(req);
++ if (ret)
++ goto out;
++
+ rctx->task &= ~SHA_UPDATE;
+ }
+
+ if (rctx->task & SHA_FINAL) {
+ ret = tegra_sha_do_final(req);
++ if (ret)
++ goto out;
++
+ rctx->task &= ~SHA_FINAL;
+ }
+
++out:
+ crypto_finalize_hash_request(se->engine, req, ret);
+
+ return 0;
+@@ -497,52 +616,6 @@ static void tegra_sha_cra_exit(struct crypto_tfm *tfm)
+ tegra_key_invalidate(ctx->se, ctx->key_id, ctx->alg);
+ }
+
+-static int tegra_sha_init(struct ahash_request *req)
+-{
+- struct tegra_sha_reqctx *rctx = ahash_request_ctx(req);
+- struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+- struct tegra_sha_ctx *ctx = crypto_ahash_ctx(tfm);
+- struct tegra_se *se = ctx->se;
+-
+- if (ctx->fallback)
+- return tegra_sha_fallback_init(req);
+-
+- rctx->total_len = 0;
+- rctx->datbuf.size = 0;
+- rctx->residue.size = 0;
+- rctx->key_id = ctx->key_id;
+- rctx->task = SHA_FIRST;
+- rctx->alg = ctx->alg;
+- rctx->blk_size = crypto_ahash_blocksize(tfm);
+- rctx->digest.size = crypto_ahash_digestsize(tfm);
+-
+- rctx->digest.buf = dma_alloc_coherent(se->dev, rctx->digest.size,
+- &rctx->digest.addr, GFP_KERNEL);
+- if (!rctx->digest.buf)
+- goto digbuf_fail;
+-
+- rctx->residue.buf = dma_alloc_coherent(se->dev, rctx->blk_size,
+- &rctx->residue.addr, GFP_KERNEL);
+- if (!rctx->residue.buf)
+- goto resbuf_fail;
+-
+- rctx->datbuf.buf = dma_alloc_coherent(se->dev, SE_SHA_BUFLEN,
+- &rctx->datbuf.addr, GFP_KERNEL);
+- if (!rctx->datbuf.buf)
+- goto datbuf_fail;
+-
+- return 0;
+-
+-datbuf_fail:
+- dma_free_coherent(se->dev, rctx->blk_size, rctx->residue.buf,
+- rctx->residue.addr);
+-resbuf_fail:
+- dma_free_coherent(se->dev, SE_SHA_BUFLEN, rctx->datbuf.buf,
+- rctx->datbuf.addr);
+-digbuf_fail:
+- return -ENOMEM;
+-}
+-
+ static int tegra_hmac_fallback_setkey(struct tegra_sha_ctx *ctx, const u8 *key,
+ unsigned int keylen)
+ {
+@@ -559,13 +632,29 @@ static int tegra_hmac_setkey(struct crypto_ahash *tfm, const u8 *key,
+ unsigned int keylen)
+ {
+ struct tegra_sha_ctx *ctx = crypto_ahash_ctx(tfm);
++ int ret;
+
+ if (aes_check_keylen(keylen))
+ return tegra_hmac_fallback_setkey(ctx, key, keylen);
+
++ ret = tegra_key_submit(ctx->se, key, keylen, ctx->alg, &ctx->key_id);
++ if (ret)
++ return tegra_hmac_fallback_setkey(ctx, key, keylen);
++
+ ctx->fallback = false;
+
+- return tegra_key_submit(ctx->se, key, keylen, ctx->alg, &ctx->key_id);
++ return 0;
++}
++
++static int tegra_sha_init(struct ahash_request *req)
++{
++ struct tegra_sha_reqctx *rctx = ahash_request_ctx(req);
++ struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
++ struct tegra_sha_ctx *ctx = crypto_ahash_ctx(tfm);
++
++ rctx->task = SHA_INIT;
++
++ return crypto_transfer_hash_request_to_engine(ctx->se->engine, req);
+ }
+
+ static int tegra_sha_update(struct ahash_request *req)
+@@ -615,16 +704,12 @@ static int tegra_sha_digest(struct ahash_request *req)
+ struct tegra_sha_reqctx *rctx = ahash_request_ctx(req);
+ struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+ struct tegra_sha_ctx *ctx = crypto_ahash_ctx(tfm);
+- int ret;
+
+ if (ctx->fallback)
+ return tegra_sha_fallback_digest(req);
+
+- ret = tegra_sha_init(req);
+- if (ret)
+- return ret;
++ rctx->task |= SHA_INIT | SHA_UPDATE | SHA_FINAL;
+
+- rctx->task |= SHA_UPDATE | SHA_FINAL;
+ return crypto_transfer_hash_request_to_engine(ctx->se->engine, req);
+ }
+
+diff --git a/drivers/crypto/tegra/tegra-se-key.c b/drivers/crypto/tegra/tegra-se-key.c
+index ac14678dbd30d5..956fa9b4e9b1a0 100644
+--- a/drivers/crypto/tegra/tegra-se-key.c
++++ b/drivers/crypto/tegra/tegra-se-key.c
+@@ -115,11 +115,17 @@ static int tegra_key_insert(struct tegra_se *se, const u8 *key,
+ u32 keylen, u16 slot, u32 alg)
+ {
+ const u32 *keyval = (u32 *)key;
+- u32 *addr = se->cmdbuf->addr, size;
++ u32 *addr = se->keybuf->addr, size;
++ int ret;
++
++ mutex_lock(&kslt_lock);
+
+ size = tegra_key_prep_ins_cmd(se, addr, keyval, keylen, slot, alg);
++ ret = tegra_se_host1x_submit(se, se->keybuf, size);
++
++ mutex_unlock(&kslt_lock);
+
+- return tegra_se_host1x_submit(se, size);
++ return ret;
+ }
+
+ void tegra_key_invalidate(struct tegra_se *se, u32 keyid, u32 alg)
+@@ -135,6 +141,23 @@ void tegra_key_invalidate(struct tegra_se *se, u32 keyid, u32 alg)
+ tegra_keyslot_free(keyid);
+ }
+
++void tegra_key_invalidate_reserved(struct tegra_se *se, u32 keyid, u32 alg)
++{
++ u8 zkey[AES_MAX_KEY_SIZE] = {0};
++
++ if (!keyid)
++ return;
++
++ /* Overwrite the key with 0s */
++ tegra_key_insert(se, zkey, AES_MAX_KEY_SIZE, keyid, alg);
++}
++
++inline int tegra_key_submit_reserved(struct tegra_se *se, const u8 *key,
++ u32 keylen, u32 alg, u32 *keyid)
++{
++ return tegra_key_insert(se, key, keylen, *keyid, alg);
++}
++
+ int tegra_key_submit(struct tegra_se *se, const u8 *key, u32 keylen, u32 alg, u32 *keyid)
+ {
+ int ret;
+@@ -143,7 +166,7 @@ int tegra_key_submit(struct tegra_se *se, const u8 *key, u32 keylen, u32 alg, u3
+ if (!tegra_key_in_kslt(*keyid)) {
+ *keyid = tegra_keyslot_alloc();
+ if (!(*keyid)) {
+- dev_err(se->dev, "failed to allocate key slot\n");
++ dev_dbg(se->dev, "failed to allocate key slot\n");
+ return -ENOMEM;
+ }
+ }
+diff --git a/drivers/crypto/tegra/tegra-se-main.c b/drivers/crypto/tegra/tegra-se-main.c
+index 918c0b10614d43..1c94f1de05467a 100644
+--- a/drivers/crypto/tegra/tegra-se-main.c
++++ b/drivers/crypto/tegra/tegra-se-main.c
+@@ -141,7 +141,7 @@ static struct tegra_se_cmdbuf *tegra_se_host1x_bo_alloc(struct tegra_se *se, ssi
+ return cmdbuf;
+ }
+
+-int tegra_se_host1x_submit(struct tegra_se *se, u32 size)
++int tegra_se_host1x_submit(struct tegra_se *se, struct tegra_se_cmdbuf *cmdbuf, u32 size)
+ {
+ struct host1x_job *job;
+ int ret;
+@@ -160,9 +160,9 @@ int tegra_se_host1x_submit(struct tegra_se *se, u32 size)
+ job->engine_fallback_streamid = se->stream_id;
+ job->engine_streamid_offset = SE_STREAM_ID;
+
+- se->cmdbuf->words = size;
++ cmdbuf->words = size;
+
+- host1x_job_add_gather(job, &se->cmdbuf->bo, size, 0);
++ host1x_job_add_gather(job, &cmdbuf->bo, size, 0);
+
+ ret = host1x_job_pin(job, se->dev);
+ if (ret) {
+@@ -220,14 +220,22 @@ static int tegra_se_client_init(struct host1x_client *client)
+ goto syncpt_put;
+ }
+
++ se->keybuf = tegra_se_host1x_bo_alloc(se, SZ_4K);
++ if (!se->keybuf) {
++ ret = -ENOMEM;
++ goto cmdbuf_put;
++ }
++
+ ret = se->hw->init_alg(se);
+ if (ret) {
+ dev_err(se->dev, "failed to register algorithms\n");
+- goto cmdbuf_put;
++ goto keybuf_put;
+ }
+
+ return 0;
+
++keybuf_put:
++ tegra_se_cmdbuf_put(&se->keybuf->bo);
+ cmdbuf_put:
+ tegra_se_cmdbuf_put(&se->cmdbuf->bo);
+ syncpt_put:
+diff --git a/drivers/crypto/tegra/tegra-se.h b/drivers/crypto/tegra/tegra-se.h
+index b9dd7ceb8783c9..b6cac9384f666d 100644
+--- a/drivers/crypto/tegra/tegra-se.h
++++ b/drivers/crypto/tegra/tegra-se.h
+@@ -24,6 +24,7 @@
+ #define SE_STREAM_ID 0x90
+
+ #define SE_SHA_CFG 0x4004
++#define SE_SHA_IN_ADDR 0x400c
+ #define SE_SHA_KEY_ADDR 0x4094
+ #define SE_SHA_KEY_DATA 0x4098
+ #define SE_SHA_KEYMANIFEST 0x409c
+@@ -340,12 +341,14 @@
+ #define SE_CRYPTO_CTR_REG_COUNT 4
+ #define SE_MAX_KEYSLOT 15
+ #define SE_MAX_MEM_ALLOC SZ_4M
+-#define SE_AES_BUFLEN 0x8000
+-#define SE_SHA_BUFLEN 0x2000
++
++#define TEGRA_AES_RESERVED_KSLT 14
++#define TEGRA_XTS_RESERVED_KSLT 15
+
+ #define SHA_FIRST BIT(0)
+-#define SHA_UPDATE BIT(1)
+-#define SHA_FINAL BIT(2)
++#define SHA_INIT BIT(1)
++#define SHA_UPDATE BIT(2)
++#define SHA_FINAL BIT(3)
+
+ /* Security Engine operation modes */
+ enum se_aes_alg {
+@@ -420,6 +423,7 @@ struct tegra_se {
+ struct host1x_client client;
+ struct host1x_channel *channel;
+ struct tegra_se_cmdbuf *cmdbuf;
++ struct tegra_se_cmdbuf *keybuf;
+ struct crypto_engine *engine;
+ struct host1x_syncpt *syncpt;
+ struct device *dev;
+@@ -501,8 +505,33 @@ void tegra_deinit_aes(struct tegra_se *se);
+ void tegra_deinit_hash(struct tegra_se *se);
+ int tegra_key_submit(struct tegra_se *se, const u8 *key,
+ u32 keylen, u32 alg, u32 *keyid);
++
++int tegra_key_submit_reserved(struct tegra_se *se, const u8 *key,
++ u32 keylen, u32 alg, u32 *keyid);
++
+ void tegra_key_invalidate(struct tegra_se *se, u32 keyid, u32 alg);
+-int tegra_se_host1x_submit(struct tegra_se *se, u32 size);
++void tegra_key_invalidate_reserved(struct tegra_se *se, u32 keyid, u32 alg);
++int tegra_se_host1x_submit(struct tegra_se *se, struct tegra_se_cmdbuf *cmdbuf, u32 size);
++
++static inline int tegra_key_submit_reserved_aes(struct tegra_se *se, const u8 *key,
++ u32 keylen, u32 alg, u32 *keyid)
++{
++ *keyid = TEGRA_AES_RESERVED_KSLT;
++ return tegra_key_submit_reserved(se, key, keylen, alg, keyid);
++}
++
++static inline int tegra_key_submit_reserved_xts(struct tegra_se *se, const u8 *key,
++ u32 keylen, u32 alg, u32 *keyid)
++{
++ *keyid = TEGRA_XTS_RESERVED_KSLT;
++ return tegra_key_submit_reserved(se, key, keylen, alg, keyid);
++}
++
++static inline bool tegra_key_is_reserved(u32 keyid)
++{
++ return ((keyid == TEGRA_AES_RESERVED_KSLT) ||
++ (keyid == TEGRA_XTS_RESERVED_KSLT));
++}
+
+ /* HOST1x OPCODES */
+ static inline u32 host1x_opcode_setpayload(unsigned int payload)
+diff --git a/drivers/dma/amd/ae4dma/ae4dma-pci.c b/drivers/dma/amd/ae4dma/ae4dma-pci.c
+index aad0dc4294a394..587c5a10c1a8b2 100644
+--- a/drivers/dma/amd/ae4dma/ae4dma-pci.c
++++ b/drivers/dma/amd/ae4dma/ae4dma-pci.c
+@@ -46,8 +46,8 @@ static int ae4_get_irqs(struct ae4_device *ae4)
+
+ } else {
+ ae4_msix->msix_count = ret;
+- for (i = 0; i < MAX_AE4_HW_QUEUES; i++)
+- ae4->ae4_irq[i] = ae4_msix->msix_entry[i].vector;
++ for (i = 0; i < ae4_msix->msix_count; i++)
++ ae4->ae4_irq[i] = pci_irq_vector(pdev, i);
+ }
+
+ return ret;
+diff --git a/drivers/dma/amd/ae4dma/ae4dma.h b/drivers/dma/amd/ae4dma/ae4dma.h
+index 265c5d4360080d..57f6048726bb68 100644
+--- a/drivers/dma/amd/ae4dma/ae4dma.h
++++ b/drivers/dma/amd/ae4dma/ae4dma.h
+@@ -37,6 +37,8 @@
+ #define AE4_DMA_VERSION 4
+ #define CMD_AE4_DESC_DW0_VAL 2
+
++#define AE4_TIME_OUT 5000
++
+ struct ae4_msix {
+ int msix_count;
+ struct msix_entry msix_entry[MAX_AE4_HW_QUEUES];
+diff --git a/drivers/dma/amd/ptdma/ptdma-dmaengine.c b/drivers/dma/amd/ptdma/ptdma-dmaengine.c
+index 35c84ec9608b4f..715ac3ae067b85 100644
+--- a/drivers/dma/amd/ptdma/ptdma-dmaengine.c
++++ b/drivers/dma/amd/ptdma/ptdma-dmaengine.c
+@@ -198,8 +198,10 @@ static struct pt_dma_desc *pt_handle_active_desc(struct pt_dma_chan *chan,
+ {
+ struct dma_async_tx_descriptor *tx_desc;
+ struct virt_dma_desc *vd;
++ struct pt_device *pt;
+ unsigned long flags;
+
++ pt = chan->pt;
+ /* Loop over descriptors until one is found with commands */
+ do {
+ if (desc) {
+@@ -217,7 +219,7 @@ static struct pt_dma_desc *pt_handle_active_desc(struct pt_dma_chan *chan,
+
+ spin_lock_irqsave(&chan->vc.lock, flags);
+
+- if (desc) {
++ if (pt->ver != AE4_DMA_VERSION && desc) {
+ if (desc->status != DMA_COMPLETE) {
+ if (desc->status != DMA_ERROR)
+ desc->status = DMA_COMPLETE;
+@@ -235,7 +237,7 @@ static struct pt_dma_desc *pt_handle_active_desc(struct pt_dma_chan *chan,
+
+ spin_unlock_irqrestore(&chan->vc.lock, flags);
+
+- if (tx_desc) {
++ if (pt->ver != AE4_DMA_VERSION && tx_desc) {
+ dmaengine_desc_get_callback_invoke(tx_desc, NULL);
+ dma_run_dependencies(tx_desc);
+ vchan_vdesc_fini(vd);
+@@ -245,11 +247,25 @@ static struct pt_dma_desc *pt_handle_active_desc(struct pt_dma_chan *chan,
+ return NULL;
+ }
+
++static inline bool ae4_core_queue_full(struct pt_cmd_queue *cmd_q)
++{
++ u32 front_wi = readl(cmd_q->reg_control + AE4_WR_IDX_OFF);
++ u32 rear_ri = readl(cmd_q->reg_control + AE4_RD_IDX_OFF);
++
++ if (((MAX_CMD_QLEN + front_wi - rear_ri) % MAX_CMD_QLEN) >= (MAX_CMD_QLEN - 1))
++ return true;
++
++ return false;
++}
++
+ static void pt_cmd_callback(void *data, int err)
+ {
+ struct pt_dma_desc *desc = data;
++ struct ae4_cmd_queue *ae4cmd_q;
+ struct dma_chan *dma_chan;
+ struct pt_dma_chan *chan;
++ struct ae4_device *ae4;
++ struct pt_device *pt;
+ int ret;
+
+ if (err == -EINPROGRESS)
+@@ -257,11 +273,32 @@ static void pt_cmd_callback(void *data, int err)
+
+ dma_chan = desc->vd.tx.chan;
+ chan = to_pt_chan(dma_chan);
++ pt = chan->pt;
+
+ if (err)
+ desc->status = DMA_ERROR;
+
+ while (true) {
++ if (pt->ver == AE4_DMA_VERSION) {
++ ae4 = container_of(pt, struct ae4_device, pt);
++ ae4cmd_q = &ae4->ae4cmd_q[chan->id];
++
++ if (ae4cmd_q->q_cmd_count >= (CMD_Q_LEN - 1) ||
++ ae4_core_queue_full(&ae4cmd_q->cmd_q)) {
++ wake_up(&ae4cmd_q->q_w);
++
++ if (wait_for_completion_timeout(&ae4cmd_q->cmp,
++ msecs_to_jiffies(AE4_TIME_OUT))
++ == 0) {
++ dev_err(pt->dev, "TIMEOUT %d:\n", ae4cmd_q->id);
++ break;
++ }
++
++ reinit_completion(&ae4cmd_q->cmp);
++ continue;
++ }
++ }
++
+ /* Check for DMA descriptor completion */
+ desc = pt_handle_active_desc(chan, desc);
+
+@@ -296,6 +333,49 @@ static struct pt_dma_desc *pt_alloc_dma_desc(struct pt_dma_chan *chan,
+ return desc;
+ }
+
++static void pt_cmd_callback_work(void *data, int err)
++{
++ struct dma_async_tx_descriptor *tx_desc;
++ struct pt_dma_desc *desc = data;
++ struct dma_chan *dma_chan;
++ struct virt_dma_desc *vd;
++ struct pt_dma_chan *chan;
++ unsigned long flags;
++
++ dma_chan = desc->vd.tx.chan;
++ chan = to_pt_chan(dma_chan);
++
++ if (err == -EINPROGRESS)
++ return;
++
++ tx_desc = &desc->vd.tx;
++ vd = &desc->vd;
++
++ if (err)
++ desc->status = DMA_ERROR;
++
++ spin_lock_irqsave(&chan->vc.lock, flags);
++ if (desc) {
++ if (desc->status != DMA_COMPLETE) {
++ if (desc->status != DMA_ERROR)
++ desc->status = DMA_COMPLETE;
++
++ dma_cookie_complete(tx_desc);
++ dma_descriptor_unmap(tx_desc);
++ } else {
++ tx_desc = NULL;
++ }
++ }
++ spin_unlock_irqrestore(&chan->vc.lock, flags);
++
++ if (tx_desc) {
++ dmaengine_desc_get_callback_invoke(tx_desc, NULL);
++ dma_run_dependencies(tx_desc);
++ list_del(&desc->vd.node);
++ vchan_vdesc_fini(vd);
++ }
++}
++
+ static struct pt_dma_desc *pt_create_desc(struct dma_chan *dma_chan,
+ dma_addr_t dst,
+ dma_addr_t src,
+@@ -327,6 +407,7 @@ static struct pt_dma_desc *pt_create_desc(struct dma_chan *dma_chan,
+ desc->len = len;
+
+ if (pt->ver == AE4_DMA_VERSION) {
++ pt_cmd->pt_cmd_callback = pt_cmd_callback_work;
+ ae4 = container_of(pt, struct ae4_device, pt);
+ ae4cmd_q = &ae4->ae4cmd_q[chan->id];
+ mutex_lock(&ae4cmd_q->cmd_lock);
+@@ -367,13 +448,16 @@ static void pt_issue_pending(struct dma_chan *dma_chan)
+ {
+ struct pt_dma_chan *chan = to_pt_chan(dma_chan);
+ struct pt_dma_desc *desc;
++ struct pt_device *pt;
+ unsigned long flags;
+ bool engine_is_idle = true;
+
++ pt = chan->pt;
++
+ spin_lock_irqsave(&chan->vc.lock, flags);
+
+ desc = pt_next_dma_desc(chan);
+- if (desc)
++ if (desc && pt->ver != AE4_DMA_VERSION)
+ engine_is_idle = false;
+
+ vchan_issue_pending(&chan->vc);
+diff --git a/drivers/dma/fsl-edma-main.c b/drivers/dma/fsl-edma-main.c
+index f989b6c9c0a984..e3e0e88a76d3c3 100644
+--- a/drivers/dma/fsl-edma-main.c
++++ b/drivers/dma/fsl-edma-main.c
+@@ -401,6 +401,7 @@ fsl_edma2_irq_init(struct platform_device *pdev,
+
+ /* The last IRQ is for eDMA err */
+ if (i == count - 1) {
++ fsl_edma->errirq = irq;
+ ret = devm_request_irq(&pdev->dev, irq,
+ fsl_edma_err_handler,
+ 0, "eDMA2-ERR", fsl_edma);
+@@ -420,10 +421,13 @@ static void fsl_edma_irq_exit(
+ struct platform_device *pdev, struct fsl_edma_engine *fsl_edma)
+ {
+ if (fsl_edma->txirq == fsl_edma->errirq) {
+- devm_free_irq(&pdev->dev, fsl_edma->txirq, fsl_edma);
++ if (fsl_edma->txirq >= 0)
++ devm_free_irq(&pdev->dev, fsl_edma->txirq, fsl_edma);
+ } else {
+- devm_free_irq(&pdev->dev, fsl_edma->txirq, fsl_edma);
+- devm_free_irq(&pdev->dev, fsl_edma->errirq, fsl_edma);
++ if (fsl_edma->txirq >= 0)
++ devm_free_irq(&pdev->dev, fsl_edma->txirq, fsl_edma);
++ if (fsl_edma->errirq >= 0)
++ devm_free_irq(&pdev->dev, fsl_edma->errirq, fsl_edma);
+ }
+ }
+
+@@ -620,6 +624,8 @@ static int fsl_edma_probe(struct platform_device *pdev)
+ if (!fsl_edma)
+ return -ENOMEM;
+
++ fsl_edma->errirq = -EINVAL;
++ fsl_edma->txirq = -EINVAL;
+ fsl_edma->drvdata = drvdata;
+ fsl_edma->n_chans = chans;
+ mutex_init(&fsl_edma->fsl_edma_mutex);
+@@ -802,9 +808,9 @@ static void fsl_edma_remove(struct platform_device *pdev)
+ struct fsl_edma_engine *fsl_edma = platform_get_drvdata(pdev);
+
+ fsl_edma_irq_exit(pdev, fsl_edma);
+- fsl_edma_cleanup_vchan(&fsl_edma->dma_dev);
+ of_dma_controller_free(np);
+ dma_async_device_unregister(&fsl_edma->dma_dev);
++ fsl_edma_cleanup_vchan(&fsl_edma->dma_dev);
+ fsl_disable_clocks(fsl_edma, fsl_edma->drvdata->dmamuxs);
+ }
+
+diff --git a/drivers/edac/i10nm_base.c b/drivers/edac/i10nm_base.c
+index f45d849d3f1509..355a977019e944 100644
+--- a/drivers/edac/i10nm_base.c
++++ b/drivers/edac/i10nm_base.c
+@@ -751,6 +751,8 @@ static int i10nm_get_ddr_munits(void)
+ continue;
+ } else {
+ d->imc[lmc].mdev = mdev;
++ if (res_cfg->type == SPR)
++ skx_set_mc_mapping(d, i, lmc);
+ lmc++;
+ }
+ }
+diff --git a/drivers/edac/ie31200_edac.c b/drivers/edac/ie31200_edac.c
+index 4fc16922dc1af5..9b02a6b43ab581 100644
+--- a/drivers/edac/ie31200_edac.c
++++ b/drivers/edac/ie31200_edac.c
+@@ -94,8 +94,6 @@
+ (((did) & PCI_DEVICE_ID_INTEL_IE31200_HB_CFL_MASK) == \
+ PCI_DEVICE_ID_INTEL_IE31200_HB_CFL_MASK))
+
+-#define IE31200_DIMMS 4
+-#define IE31200_RANKS 8
+ #define IE31200_RANKS_PER_CHANNEL 4
+ #define IE31200_DIMMS_PER_CHANNEL 2
+ #define IE31200_CHANNELS 2
+@@ -167,6 +165,7 @@
+ #define IE31200_MAD_DIMM_0_OFFSET 0x5004
+ #define IE31200_MAD_DIMM_0_OFFSET_SKL 0x500C
+ #define IE31200_MAD_DIMM_SIZE GENMASK_ULL(7, 0)
++#define IE31200_MAD_DIMM_SIZE_SKL GENMASK_ULL(5, 0)
+ #define IE31200_MAD_DIMM_A_RANK BIT(17)
+ #define IE31200_MAD_DIMM_A_RANK_SHIFT 17
+ #define IE31200_MAD_DIMM_A_RANK_SKL BIT(10)
+@@ -380,7 +379,7 @@ static void __iomem *ie31200_map_mchbar(struct pci_dev *pdev)
+ static void __skl_populate_dimm_info(struct dimm_data *dd, u32 addr_decode,
+ int chan)
+ {
+- dd->size = (addr_decode >> (chan << 4)) & IE31200_MAD_DIMM_SIZE;
++ dd->size = (addr_decode >> (chan << 4)) & IE31200_MAD_DIMM_SIZE_SKL;
+ dd->dual_rank = (addr_decode & (IE31200_MAD_DIMM_A_RANK_SKL << (chan << 4))) ? 1 : 0;
+ dd->x16_width = ((addr_decode & (IE31200_MAD_DIMM_A_WIDTH_SKL << (chan << 4))) >>
+ (IE31200_MAD_DIMM_A_WIDTH_SKL_SHIFT + (chan << 4)));
+@@ -429,7 +428,7 @@ static int ie31200_probe1(struct pci_dev *pdev, int dev_idx)
+
+ nr_channels = how_many_channels(pdev);
+ layers[0].type = EDAC_MC_LAYER_CHIP_SELECT;
+- layers[0].size = IE31200_DIMMS;
++ layers[0].size = IE31200_RANKS_PER_CHANNEL;
+ layers[0].is_virt_csrow = true;
+ layers[1].type = EDAC_MC_LAYER_CHANNEL;
+ layers[1].size = nr_channels;
+@@ -622,7 +621,7 @@ static int __init ie31200_init(void)
+
+ pci_rc = pci_register_driver(&ie31200_driver);
+ if (pci_rc < 0)
+- goto fail0;
++ return pci_rc;
+
+ if (!mci_pdev) {
+ ie31200_registered = 0;
+@@ -633,11 +632,13 @@ static int __init ie31200_init(void)
+ if (mci_pdev)
+ break;
+ }
++
+ if (!mci_pdev) {
+ edac_dbg(0, "ie31200 pci_get_device fail\n");
+ pci_rc = -ENODEV;
+- goto fail1;
++ goto fail0;
+ }
++
+ pci_rc = ie31200_init_one(mci_pdev, &ie31200_pci_tbl[i]);
+ if (pci_rc < 0) {
+ edac_dbg(0, "ie31200 init fail\n");
+@@ -645,12 +646,12 @@ static int __init ie31200_init(void)
+ goto fail1;
+ }
+ }
+- return 0;
+
++ return 0;
+ fail1:
+- pci_unregister_driver(&ie31200_driver);
+-fail0:
+ pci_dev_put(mci_pdev);
++fail0:
++ pci_unregister_driver(&ie31200_driver);
+
+ return pci_rc;
+ }
+diff --git a/drivers/edac/igen6_edac.c b/drivers/edac/igen6_edac.c
+index fdf3a84fe6988b..595908af9e5c93 100644
+--- a/drivers/edac/igen6_edac.c
++++ b/drivers/edac/igen6_edac.c
+@@ -785,13 +785,22 @@ static u64 ecclog_read_and_clear(struct igen6_imc *imc)
+ {
+ u64 ecclog = readq(imc->window + ECC_ERROR_LOG_OFFSET);
+
+- if (ecclog & (ECC_ERROR_LOG_CE | ECC_ERROR_LOG_UE)) {
+- /* Clear CE/UE bits by writing 1s */
+- writeq(ecclog, imc->window + ECC_ERROR_LOG_OFFSET);
+- return ecclog;
+- }
++ /*
++ * Quirk: The ECC_ERROR_LOG register of certain SoCs may contain
++ * the invalid value ~0. This will result in a flood of invalid
++ * error reports in polling mode. Skip it.
++ */
++ if (ecclog == ~0)
++ return 0;
+
+- return 0;
++	/* Neither a CE nor a UE. Skip it. */
++ if (!(ecclog & (ECC_ERROR_LOG_CE | ECC_ERROR_LOG_UE)))
++ return 0;
++
++ /* Clear CE/UE bits by writing 1s */
++ writeq(ecclog, imc->window + ECC_ERROR_LOG_OFFSET);
++
++ return ecclog;
+ }
+
+ static void errsts_clear(struct igen6_imc *imc)
+diff --git a/drivers/edac/skx_common.c b/drivers/edac/skx_common.c
+index f7bd930e058fed..fa5b442b184499 100644
+--- a/drivers/edac/skx_common.c
++++ b/drivers/edac/skx_common.c
+@@ -121,6 +121,35 @@ void skx_adxl_put(void)
+ }
+ EXPORT_SYMBOL_GPL(skx_adxl_put);
+
++static void skx_init_mc_mapping(struct skx_dev *d)
++{
++ /*
++ * By default, the BIOS presents all memory controllers within each
++ * socket to the EDAC driver. The physical indices are the same as
++ * the logical indices of the memory controllers enumerated by the
++ * EDAC driver.
++ */
++ for (int i = 0; i < NUM_IMC; i++)
++ d->mc_mapping[i] = i;
++}
++
++void skx_set_mc_mapping(struct skx_dev *d, u8 pmc, u8 lmc)
++{
++ edac_dbg(0, "Set the mapping of mc phy idx to logical idx: %02d -> %02d\n",
++ pmc, lmc);
++
++ d->mc_mapping[pmc] = lmc;
++}
++EXPORT_SYMBOL_GPL(skx_set_mc_mapping);
++
++static u8 skx_get_mc_mapping(struct skx_dev *d, u8 pmc)
++{
++ edac_dbg(0, "Get the mapping of mc phy idx to logical idx: %02d -> %02d\n",
++ pmc, d->mc_mapping[pmc]);
++
++ return d->mc_mapping[pmc];
++}
++
+ static bool skx_adxl_decode(struct decoded_addr *res, enum error_source err_src)
+ {
+ struct skx_dev *d;
+@@ -188,6 +217,8 @@ static bool skx_adxl_decode(struct decoded_addr *res, enum error_source err_src)
+ return false;
+ }
+
++ res->imc = skx_get_mc_mapping(d, res->imc);
++
+ for (i = 0; i < adxl_component_count; i++) {
+ if (adxl_values[i] == ~0x0ull)
+ continue;
+@@ -326,6 +357,8 @@ int skx_get_all_bus_mappings(struct res_config *cfg, struct list_head **list)
+ d->bus[0], d->bus[1], d->bus[2], d->bus[3]);
+ list_add_tail(&d->list, &dev_edac_list);
+ prev = pdev;
++
++ skx_init_mc_mapping(d);
+ }
+
+ if (list)
+diff --git a/drivers/edac/skx_common.h b/drivers/edac/skx_common.h
+index b0845bdd45164b..ca5408803f8787 100644
+--- a/drivers/edac/skx_common.h
++++ b/drivers/edac/skx_common.h
+@@ -93,6 +93,16 @@ struct skx_dev {
+ struct pci_dev *uracu; /* for i10nm CPU */
+ struct pci_dev *pcu_cr3; /* for HBM memory detection */
+ u32 mcroute;
++ /*
++ * Some server BIOS may hide certain memory controllers, and the
++ * EDAC driver skips those hidden memory controllers. However, the
++ * ADXL still decodes memory error address using physical memory
++ * controller indices. The mapping table is used to convert the
++ * physical indices (reported by ADXL) to the logical indices
++	 * (used by the EDAC driver) of present memory controllers during the
++ * error handling process.
++ */
++ u8 mc_mapping[NUM_IMC];
+ struct skx_imc {
+ struct mem_ctl_info *mci;
+ struct pci_dev *mdev; /* for i10nm CPU */
+@@ -242,6 +252,7 @@ void skx_adxl_put(void);
+ void skx_set_decode(skx_decode_f decode, skx_show_retry_log_f show_retry_log);
+ void skx_set_mem_cfg(bool mem_cfg_2lm);
+ void skx_set_res_cfg(struct res_config *cfg);
++void skx_set_mc_mapping(struct skx_dev *d, u8 pmc, u8 lmc);
+
+ int skx_get_src_id(struct skx_dev *d, int off, u8 *id);
+
+diff --git a/drivers/firmware/arm_ffa/bus.c b/drivers/firmware/arm_ffa/bus.c
+index dfda5ffc14db72..fa09a82b44921e 100644
+--- a/drivers/firmware/arm_ffa/bus.c
++++ b/drivers/firmware/arm_ffa/bus.c
+@@ -160,11 +160,12 @@ static int __ffa_devices_unregister(struct device *dev, void *data)
+ return 0;
+ }
+
+-static void ffa_devices_unregister(void)
++void ffa_devices_unregister(void)
+ {
+ bus_for_each_dev(&ffa_bus_type, NULL, NULL,
+ __ffa_devices_unregister);
+ }
++EXPORT_SYMBOL_GPL(ffa_devices_unregister);
+
+ bool ffa_device_is_valid(struct ffa_device *ffa_dev)
+ {
+diff --git a/drivers/firmware/arm_ffa/driver.c b/drivers/firmware/arm_ffa/driver.c
+index 2c2ec3c35f1561..655672a8809595 100644
+--- a/drivers/firmware/arm_ffa/driver.c
++++ b/drivers/firmware/arm_ffa/driver.c
+@@ -145,7 +145,7 @@ static int ffa_version_check(u32 *version)
+ .a0 = FFA_VERSION, .a1 = FFA_DRIVER_VERSION,
+ }, &ver);
+
+- if (ver.a0 == FFA_RET_NOT_SUPPORTED) {
++ if ((s32)ver.a0 == FFA_RET_NOT_SUPPORTED) {
+ pr_info("FFA_VERSION returned not supported\n");
+ return -EOPNOTSUPP;
+ }
+@@ -899,7 +899,7 @@ static void ffa_notification_info_get(void)
+ }, &ret);
+
+ if (ret.a0 != FFA_FN_NATIVE(SUCCESS) && ret.a0 != FFA_SUCCESS) {
+- if (ret.a2 != FFA_RET_NO_DATA)
++ if ((s32)ret.a2 != FFA_RET_NO_DATA)
+ pr_err("Notification Info fetch failed: 0x%lx (0x%lx)",
+ ret.a0, ret.a2);
+ return;
+@@ -935,7 +935,7 @@ static void ffa_notification_info_get(void)
+ }
+
+ /* Per vCPU Notification */
+- for (idx = 0; idx < ids_count[list]; idx++) {
++ for (idx = 1; idx < ids_count[list]; idx++) {
+ if (ids_processed >= max_ids - 1)
+ break;
+
+@@ -1384,11 +1384,30 @@ static struct notifier_block ffa_bus_nb = {
+ .notifier_call = ffa_bus_notifier,
+ };
+
++static int ffa_xa_add_partition_info(int vm_id)
++{
++ struct ffa_dev_part_info *info;
++ int ret;
++
++ info = kzalloc(sizeof(*info), GFP_KERNEL);
++ if (!info)
++ return -ENOMEM;
++
++ rwlock_init(&info->rw_lock);
++ ret = xa_insert(&drv_info->partition_info, vm_id, info, GFP_KERNEL);
++ if (ret) {
++ pr_err("%s: failed to save partition ID 0x%x - ret:%d. Abort.\n",
++ __func__, vm_id, ret);
++ kfree(info);
++ }
++
++ return ret;
++}
++
+ static int ffa_setup_partitions(void)
+ {
+ int count, idx, ret;
+ struct ffa_device *ffa_dev;
+- struct ffa_dev_part_info *info;
+ struct ffa_partition_info *pbuf, *tpbuf;
+
+ if (drv_info->version == FFA_VERSION_1_0) {
+@@ -1422,42 +1441,18 @@ static int ffa_setup_partitions(void)
+ !(tpbuf->properties & FFA_PARTITION_AARCH64_EXEC))
+ ffa_mode_32bit_set(ffa_dev);
+
+- info = kzalloc(sizeof(*info), GFP_KERNEL);
+- if (!info) {
++ if (ffa_xa_add_partition_info(ffa_dev->vm_id)) {
+ ffa_device_unregister(ffa_dev);
+ continue;
+ }
+- rwlock_init(&info->rw_lock);
+- ret = xa_insert(&drv_info->partition_info, tpbuf->id,
+- info, GFP_KERNEL);
+- if (ret) {
+- pr_err("%s: failed to save partition ID 0x%x - ret:%d\n",
+- __func__, tpbuf->id, ret);
+- ffa_device_unregister(ffa_dev);
+- kfree(info);
+- }
+ }
+
+ kfree(pbuf);
+
+ /* Allocate for the host */
+- info = kzalloc(sizeof(*info), GFP_KERNEL);
+- if (!info) {
+- /* Already registered devices are freed on bus_exit */
+- ffa_partitions_cleanup();
+- return -ENOMEM;
+- }
+-
+- rwlock_init(&info->rw_lock);
+- ret = xa_insert(&drv_info->partition_info, drv_info->vm_id,
+- info, GFP_KERNEL);
+- if (ret) {
+- pr_err("%s: failed to save Host partition ID 0x%x - ret:%d. Abort.\n",
+- __func__, drv_info->vm_id, ret);
+- kfree(info);
+- /* Already registered devices are freed on bus_exit */
++ ret = ffa_xa_add_partition_info(drv_info->vm_id);
++ if (ret)
+ ffa_partitions_cleanup();
+- }
+
+ return ret;
+ }
+@@ -1467,6 +1462,9 @@ static void ffa_partitions_cleanup(void)
+ struct ffa_dev_part_info *info;
+ unsigned long idx;
+
++ /* Clean up/free all registered devices */
++ ffa_devices_unregister();
++
+ xa_for_each(&drv_info->partition_info, idx, info) {
+ xa_erase(&drv_info->partition_info, idx);
+ kfree(info);
+diff --git a/drivers/firmware/arm_scmi/driver.c b/drivers/firmware/arm_scmi/driver.c
+index 60050da54bf24c..1c75a4c9c37166 100644
+--- a/drivers/firmware/arm_scmi/driver.c
++++ b/drivers/firmware/arm_scmi/driver.c
+@@ -1997,17 +1997,7 @@ static void scmi_common_fastchannel_db_ring(struct scmi_fc_db_info *db)
+ else if (db->width == 4)
+ SCMI_PROTO_FC_RING_DB(32);
+ else /* db->width == 8 */
+-#ifdef CONFIG_64BIT
+ SCMI_PROTO_FC_RING_DB(64);
+-#else
+- {
+- u64 val = 0;
+-
+- if (db->mask)
+- val = ioread64_hi_lo(db->addr) & db->mask;
+- iowrite64_hi_lo(db->set | val, db->addr);
+- }
+-#endif
+ }
+
+ /**
+diff --git a/drivers/firmware/cirrus/cs_dsp.c b/drivers/firmware/cirrus/cs_dsp.c
+index 42433c19eb3088..560724ce21aa3d 100644
+--- a/drivers/firmware/cirrus/cs_dsp.c
++++ b/drivers/firmware/cirrus/cs_dsp.c
+@@ -1631,6 +1631,7 @@ static int cs_dsp_load(struct cs_dsp *dsp, const struct firmware *firmware,
+
+ cs_dsp_debugfs_save_wmfwname(dsp, file);
+
++ ret = 0;
+ out_fw:
+ cs_dsp_buf_free(&buf_list);
+
+@@ -2338,6 +2339,7 @@ static int cs_dsp_load_coeff(struct cs_dsp *dsp, const struct firmware *firmware
+
+ cs_dsp_debugfs_save_binname(dsp, file);
+
++ ret = 0;
+ out_fw:
+ cs_dsp_buf_free(&buf_list);
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index 018dfccd771baa..f5909977eed4b7 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -4223,7 +4223,6 @@ int amdgpu_device_init(struct amdgpu_device *adev,
+ mutex_init(&adev->grbm_idx_mutex);
+ mutex_init(&adev->mn_lock);
+ mutex_init(&adev->virt.vf_errors.lock);
+- mutex_init(&adev->virt.rlcg_reg_lock);
+ hash_init(adev->mn_hash);
+ mutex_init(&adev->psp.mutex);
+ mutex_init(&adev->notifier_lock);
+@@ -4249,6 +4248,7 @@ int amdgpu_device_init(struct amdgpu_device *adev,
+ spin_lock_init(&adev->se_cac_idx_lock);
+ spin_lock_init(&adev->audio_endpt_idx_lock);
+ spin_lock_init(&adev->mm_stats.lock);
++ spin_lock_init(&adev->virt.rlcg_reg_lock);
+ spin_lock_init(&adev->wb.lock);
+
+ INIT_LIST_HEAD(&adev->reset_list);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c
+index 709c11cbeabd88..6fa20980a0b15e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c
+@@ -145,9 +145,8 @@ int amdgpu_mes_init(struct amdgpu_device *adev)
+ adev->mes.vmid_mask_gfxhub = 0xffffff00;
+
+ for (i = 0; i < AMDGPU_MES_MAX_COMPUTE_PIPES; i++) {
+- /* use only 1st MEC pipes */
+- if (i >= adev->gfx.mec.num_pipe_per_mec)
+- continue;
++ if (i >= (adev->gfx.mec.num_pipe_per_mec * adev->gfx.mec.num_mec))
++ break;
+ adev->mes.compute_hqd_mask[i] = 0xc;
+ }
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_umsch_mm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_umsch_mm.c
+index dde15c6a96e1ae..a7f2648245ec0b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_umsch_mm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_umsch_mm.c
+@@ -32,462 +32,7 @@
+ #include "amdgpu_umsch_mm.h"
+ #include "umsch_mm_v4_0.h"
+
+-struct umsch_mm_test_ctx_data {
+- uint8_t process_csa[PAGE_SIZE];
+- uint8_t vpe_ctx_csa[PAGE_SIZE];
+- uint8_t vcn_ctx_csa[PAGE_SIZE];
+-};
+-
+-struct umsch_mm_test_mqd_data {
+- uint8_t vpe_mqd[PAGE_SIZE];
+- uint8_t vcn_mqd[PAGE_SIZE];
+-};
+-
+-struct umsch_mm_test_ring_data {
+- uint8_t vpe_ring[PAGE_SIZE];
+- uint8_t vpe_ib[PAGE_SIZE];
+- uint8_t vcn_ring[PAGE_SIZE];
+- uint8_t vcn_ib[PAGE_SIZE];
+-};
+-
+-struct umsch_mm_test_queue_info {
+- uint64_t mqd_addr;
+- uint64_t csa_addr;
+- uint32_t doorbell_offset_0;
+- uint32_t doorbell_offset_1;
+- enum UMSCH_SWIP_ENGINE_TYPE engine;
+-};
+-
+-struct umsch_mm_test {
+- struct amdgpu_bo *ctx_data_obj;
+- uint64_t ctx_data_gpu_addr;
+- uint32_t *ctx_data_cpu_addr;
+-
+- struct amdgpu_bo *mqd_data_obj;
+- uint64_t mqd_data_gpu_addr;
+- uint32_t *mqd_data_cpu_addr;
+-
+- struct amdgpu_bo *ring_data_obj;
+- uint64_t ring_data_gpu_addr;
+- uint32_t *ring_data_cpu_addr;
+-
+-
+- struct amdgpu_vm *vm;
+- struct amdgpu_bo_va *bo_va;
+- uint32_t pasid;
+- uint32_t vm_cntx_cntl;
+- uint32_t num_queues;
+-};
+-
+-static int map_ring_data(struct amdgpu_device *adev, struct amdgpu_vm *vm,
+- struct amdgpu_bo *bo, struct amdgpu_bo_va **bo_va,
+- uint64_t addr, uint32_t size)
+-{
+- struct amdgpu_sync sync;
+- struct drm_exec exec;
+- int r;
+-
+- amdgpu_sync_create(&sync);
+-
+- drm_exec_init(&exec, 0, 0);
+- drm_exec_until_all_locked(&exec) {
+- r = drm_exec_lock_obj(&exec, &bo->tbo.base);
+- drm_exec_retry_on_contention(&exec);
+- if (unlikely(r))
+- goto error_fini_exec;
+-
+- r = amdgpu_vm_lock_pd(vm, &exec, 0);
+- drm_exec_retry_on_contention(&exec);
+- if (unlikely(r))
+- goto error_fini_exec;
+- }
+-
+- *bo_va = amdgpu_vm_bo_add(adev, vm, bo);
+- if (!*bo_va) {
+- r = -ENOMEM;
+- goto error_fini_exec;
+- }
+-
+- r = amdgpu_vm_bo_map(adev, *bo_va, addr, 0, size,
+- AMDGPU_PTE_READABLE | AMDGPU_PTE_WRITEABLE |
+- AMDGPU_PTE_EXECUTABLE);
+-
+- if (r)
+- goto error_del_bo_va;
+-
+-
+- r = amdgpu_vm_bo_update(adev, *bo_va, false);
+- if (r)
+- goto error_del_bo_va;
+-
+- amdgpu_sync_fence(&sync, (*bo_va)->last_pt_update);
+-
+- r = amdgpu_vm_update_pdes(adev, vm, false);
+- if (r)
+- goto error_del_bo_va;
+-
+- amdgpu_sync_fence(&sync, vm->last_update);
+-
+- amdgpu_sync_wait(&sync, false);
+- drm_exec_fini(&exec);
+-
+- amdgpu_sync_free(&sync);
+-
+- return 0;
+-
+-error_del_bo_va:
+- amdgpu_vm_bo_del(adev, *bo_va);
+- amdgpu_sync_free(&sync);
+-
+-error_fini_exec:
+- drm_exec_fini(&exec);
+- amdgpu_sync_free(&sync);
+- return r;
+-}
+-
+-static int unmap_ring_data(struct amdgpu_device *adev, struct amdgpu_vm *vm,
+- struct amdgpu_bo *bo, struct amdgpu_bo_va *bo_va,
+- uint64_t addr)
+-{
+- struct drm_exec exec;
+- long r;
+-
+- drm_exec_init(&exec, 0, 0);
+- drm_exec_until_all_locked(&exec) {
+- r = drm_exec_lock_obj(&exec, &bo->tbo.base);
+- drm_exec_retry_on_contention(&exec);
+- if (unlikely(r))
+- goto out_unlock;
+-
+- r = amdgpu_vm_lock_pd(vm, &exec, 0);
+- drm_exec_retry_on_contention(&exec);
+- if (unlikely(r))
+- goto out_unlock;
+- }
+-
+-
+- r = amdgpu_vm_bo_unmap(adev, bo_va, addr);
+- if (r)
+- goto out_unlock;
+-
+- amdgpu_vm_bo_del(adev, bo_va);
+-
+-out_unlock:
+- drm_exec_fini(&exec);
+-
+- return r;
+-}
+-
+-static void setup_vpe_queue(struct amdgpu_device *adev,
+- struct umsch_mm_test *test,
+- struct umsch_mm_test_queue_info *qinfo)
+-{
+- struct MQD_INFO *mqd = (struct MQD_INFO *)test->mqd_data_cpu_addr;
+- uint64_t ring_gpu_addr = test->ring_data_gpu_addr;
+-
+- mqd->rb_base_lo = (ring_gpu_addr >> 8);
+- mqd->rb_base_hi = (ring_gpu_addr >> 40);
+- mqd->rb_size = PAGE_SIZE / 4;
+- mqd->wptr_val = 0;
+- mqd->rptr_val = 0;
+- mqd->unmapped = 1;
+-
+- if (adev->vpe.collaborate_mode)
+- memcpy(++mqd, test->mqd_data_cpu_addr, sizeof(struct MQD_INFO));
+-
+- qinfo->mqd_addr = test->mqd_data_gpu_addr;
+- qinfo->csa_addr = test->ctx_data_gpu_addr +
+- offsetof(struct umsch_mm_test_ctx_data, vpe_ctx_csa);
+- qinfo->doorbell_offset_0 = 0;
+- qinfo->doorbell_offset_1 = 0;
+-}
+-
+-static void setup_vcn_queue(struct amdgpu_device *adev,
+- struct umsch_mm_test *test,
+- struct umsch_mm_test_queue_info *qinfo)
+-{
+-}
+-
+-static int add_test_queue(struct amdgpu_device *adev,
+- struct umsch_mm_test *test,
+- struct umsch_mm_test_queue_info *qinfo)
+-{
+- struct umsch_mm_add_queue_input queue_input = {};
+- int r;
+-
+- queue_input.process_id = test->pasid;
+- queue_input.page_table_base_addr = amdgpu_gmc_pd_addr(test->vm->root.bo);
+-
+- queue_input.process_va_start = 0;
+- queue_input.process_va_end = (adev->vm_manager.max_pfn - 1) << AMDGPU_GPU_PAGE_SHIFT;
+-
+- queue_input.process_quantum = 100000; /* 10ms */
+- queue_input.process_csa_addr = test->ctx_data_gpu_addr +
+- offsetof(struct umsch_mm_test_ctx_data, process_csa);
+-
+- queue_input.context_quantum = 10000; /* 1ms */
+- queue_input.context_csa_addr = qinfo->csa_addr;
+-
+- queue_input.inprocess_context_priority = CONTEXT_PRIORITY_LEVEL_NORMAL;
+- queue_input.context_global_priority_level = CONTEXT_PRIORITY_LEVEL_NORMAL;
+- queue_input.doorbell_offset_0 = qinfo->doorbell_offset_0;
+- queue_input.doorbell_offset_1 = qinfo->doorbell_offset_1;
+-
+- queue_input.engine_type = qinfo->engine;
+- queue_input.mqd_addr = qinfo->mqd_addr;
+- queue_input.vm_context_cntl = test->vm_cntx_cntl;
+-
+- amdgpu_umsch_mm_lock(&adev->umsch_mm);
+- r = adev->umsch_mm.funcs->add_queue(&adev->umsch_mm, &queue_input);
+- amdgpu_umsch_mm_unlock(&adev->umsch_mm);
+- if (r)
+- return r;
+-
+- return 0;
+-}
+-
+-static int remove_test_queue(struct amdgpu_device *adev,
+- struct umsch_mm_test *test,
+- struct umsch_mm_test_queue_info *qinfo)
+-{
+- struct umsch_mm_remove_queue_input queue_input = {};
+- int r;
+-
+- queue_input.doorbell_offset_0 = qinfo->doorbell_offset_0;
+- queue_input.doorbell_offset_1 = qinfo->doorbell_offset_1;
+- queue_input.context_csa_addr = qinfo->csa_addr;
+-
+- amdgpu_umsch_mm_lock(&adev->umsch_mm);
+- r = adev->umsch_mm.funcs->remove_queue(&adev->umsch_mm, &queue_input);
+- amdgpu_umsch_mm_unlock(&adev->umsch_mm);
+- if (r)
+- return r;
+-
+- return 0;
+-}
+-
+-static int submit_vpe_queue(struct amdgpu_device *adev, struct umsch_mm_test *test)
+-{
+- struct MQD_INFO *mqd = (struct MQD_INFO *)test->mqd_data_cpu_addr;
+- uint32_t *ring = test->ring_data_cpu_addr +
+- offsetof(struct umsch_mm_test_ring_data, vpe_ring) / 4;
+- uint32_t *ib = test->ring_data_cpu_addr +
+- offsetof(struct umsch_mm_test_ring_data, vpe_ib) / 4;
+- uint64_t ib_gpu_addr = test->ring_data_gpu_addr +
+- offsetof(struct umsch_mm_test_ring_data, vpe_ib);
+- uint32_t *fence = ib + 2048 / 4;
+- uint64_t fence_gpu_addr = ib_gpu_addr + 2048;
+- const uint32_t test_pattern = 0xdeadbeef;
+- int i;
+-
+- ib[0] = VPE_CMD_HEADER(VPE_CMD_OPCODE_FENCE, 0);
+- ib[1] = lower_32_bits(fence_gpu_addr);
+- ib[2] = upper_32_bits(fence_gpu_addr);
+- ib[3] = test_pattern;
+-
+- ring[0] = VPE_CMD_HEADER(VPE_CMD_OPCODE_INDIRECT, 0);
+- ring[1] = (ib_gpu_addr & 0xffffffe0);
+- ring[2] = upper_32_bits(ib_gpu_addr);
+- ring[3] = 4;
+- ring[4] = 0;
+- ring[5] = 0;
+-
+- mqd->wptr_val = (6 << 2);
+- if (adev->vpe.collaborate_mode)
+- (++mqd)->wptr_val = (6 << 2);
+-
+- WDOORBELL32(adev->umsch_mm.agdb_index[CONTEXT_PRIORITY_LEVEL_NORMAL], mqd->wptr_val);
+-
+- for (i = 0; i < adev->usec_timeout; i++) {
+- if (*fence == test_pattern)
+- return 0;
+- udelay(1);
+- }
+-
+- dev_err(adev->dev, "vpe queue submission timeout\n");
+-
+- return -ETIMEDOUT;
+-}
+-
+-static int submit_vcn_queue(struct amdgpu_device *adev, struct umsch_mm_test *test)
+-{
+- return 0;
+-}
+-
+-static int setup_umsch_mm_test(struct amdgpu_device *adev,
+- struct umsch_mm_test *test)
+-{
+- struct amdgpu_vmhub *hub = &adev->vmhub[AMDGPU_MMHUB0(0)];
+- int r;
+-
+- test->vm_cntx_cntl = hub->vm_cntx_cntl;
+-
+- test->vm = kzalloc(sizeof(*test->vm), GFP_KERNEL);
+- if (!test->vm) {
+- r = -ENOMEM;
+- return r;
+- }
+-
+- r = amdgpu_vm_init(adev, test->vm, -1);
+- if (r)
+- goto error_free_vm;
+-
+- r = amdgpu_pasid_alloc(16);
+- if (r < 0)
+- goto error_fini_vm;
+- test->pasid = r;
+-
+- r = amdgpu_bo_create_kernel(adev, sizeof(struct umsch_mm_test_ctx_data),
+- PAGE_SIZE, AMDGPU_GEM_DOMAIN_GTT,
+- &test->ctx_data_obj,
+- &test->ctx_data_gpu_addr,
+- (void **)&test->ctx_data_cpu_addr);
+- if (r)
+- goto error_free_pasid;
+-
+- memset(test->ctx_data_cpu_addr, 0, sizeof(struct umsch_mm_test_ctx_data));
+-
+- r = amdgpu_bo_create_kernel(adev, PAGE_SIZE,
+- PAGE_SIZE, AMDGPU_GEM_DOMAIN_GTT,
+- &test->mqd_data_obj,
+- &test->mqd_data_gpu_addr,
+- (void **)&test->mqd_data_cpu_addr);
+- if (r)
+- goto error_free_ctx_data_obj;
+-
+- memset(test->mqd_data_cpu_addr, 0, PAGE_SIZE);
+-
+- r = amdgpu_bo_create_kernel(adev, sizeof(struct umsch_mm_test_ring_data),
+- PAGE_SIZE, AMDGPU_GEM_DOMAIN_GTT,
+- &test->ring_data_obj,
+- NULL,
+- (void **)&test->ring_data_cpu_addr);
+- if (r)
+- goto error_free_mqd_data_obj;
+-
+- memset(test->ring_data_cpu_addr, 0, sizeof(struct umsch_mm_test_ring_data));
+-
+- test->ring_data_gpu_addr = AMDGPU_VA_RESERVED_BOTTOM;
+- r = map_ring_data(adev, test->vm, test->ring_data_obj, &test->bo_va,
+- test->ring_data_gpu_addr, sizeof(struct umsch_mm_test_ring_data));
+- if (r)
+- goto error_free_ring_data_obj;
+-
+- return 0;
+-
+-error_free_ring_data_obj:
+- amdgpu_bo_free_kernel(&test->ring_data_obj, NULL,
+- (void **)&test->ring_data_cpu_addr);
+-error_free_mqd_data_obj:
+- amdgpu_bo_free_kernel(&test->mqd_data_obj, &test->mqd_data_gpu_addr,
+- (void **)&test->mqd_data_cpu_addr);
+-error_free_ctx_data_obj:
+- amdgpu_bo_free_kernel(&test->ctx_data_obj, &test->ctx_data_gpu_addr,
+- (void **)&test->ctx_data_cpu_addr);
+-error_free_pasid:
+- amdgpu_pasid_free(test->pasid);
+-error_fini_vm:
+- amdgpu_vm_fini(adev, test->vm);
+-error_free_vm:
+- kfree(test->vm);
+-
+- return r;
+-}
+-
+-static void cleanup_umsch_mm_test(struct amdgpu_device *adev,
+- struct umsch_mm_test *test)
+-{
+- unmap_ring_data(adev, test->vm, test->ring_data_obj,
+- test->bo_va, test->ring_data_gpu_addr);
+- amdgpu_bo_free_kernel(&test->mqd_data_obj, &test->mqd_data_gpu_addr,
+- (void **)&test->mqd_data_cpu_addr);
+- amdgpu_bo_free_kernel(&test->ring_data_obj, NULL,
+- (void **)&test->ring_data_cpu_addr);
+- amdgpu_bo_free_kernel(&test->ctx_data_obj, &test->ctx_data_gpu_addr,
+- (void **)&test->ctx_data_cpu_addr);
+- amdgpu_pasid_free(test->pasid);
+- amdgpu_vm_fini(adev, test->vm);
+- kfree(test->vm);
+-}
+-
+-static int setup_test_queues(struct amdgpu_device *adev,
+- struct umsch_mm_test *test,
+- struct umsch_mm_test_queue_info *qinfo)
+-{
+- int i, r;
+-
+- for (i = 0; i < test->num_queues; i++) {
+- if (qinfo[i].engine == UMSCH_SWIP_ENGINE_TYPE_VPE)
+- setup_vpe_queue(adev, test, &qinfo[i]);
+- else
+- setup_vcn_queue(adev, test, &qinfo[i]);
+-
+- r = add_test_queue(adev, test, &qinfo[i]);
+- if (r)
+- return r;
+- }
+-
+- return 0;
+-}
+-
+-static int submit_test_queues(struct amdgpu_device *adev,
+- struct umsch_mm_test *test,
+- struct umsch_mm_test_queue_info *qinfo)
+-{
+- int i, r;
+-
+- for (i = 0; i < test->num_queues; i++) {
+- if (qinfo[i].engine == UMSCH_SWIP_ENGINE_TYPE_VPE)
+- r = submit_vpe_queue(adev, test);
+- else
+- r = submit_vcn_queue(adev, test);
+- if (r)
+- return r;
+- }
+-
+- return 0;
+-}
+-
+-static void cleanup_test_queues(struct amdgpu_device *adev,
+- struct umsch_mm_test *test,
+- struct umsch_mm_test_queue_info *qinfo)
+-{
+- int i;
+-
+- for (i = 0; i < test->num_queues; i++)
+- remove_test_queue(adev, test, &qinfo[i]);
+-}
+-
+-static int umsch_mm_test(struct amdgpu_device *adev)
+-{
+- struct umsch_mm_test_queue_info qinfo[] = {
+- { .engine = UMSCH_SWIP_ENGINE_TYPE_VPE },
+- };
+- struct umsch_mm_test test = { .num_queues = ARRAY_SIZE(qinfo) };
+- int r;
+-
+- r = setup_umsch_mm_test(adev, &test);
+- if (r)
+- return r;
+-
+- r = setup_test_queues(adev, &test, qinfo);
+- if (r)
+- goto cleanup;
+-
+- r = submit_test_queues(adev, &test, qinfo);
+- if (r)
+- goto cleanup;
+-
+- cleanup_test_queues(adev, &test, qinfo);
+- cleanup_umsch_mm_test(adev, &test);
+-
+- return 0;
+-
+-cleanup:
+- cleanup_test_queues(adev, &test, qinfo);
+- cleanup_umsch_mm_test(adev, &test);
+- return r;
+-}
++MODULE_FIRMWARE("amdgpu/umsch_mm_4_0_0.bin");
+
+ int amdgpu_umsch_mm_submit_pkt(struct amdgpu_umsch_mm *umsch, void *pkt, int ndws)
+ {
+@@ -584,7 +129,7 @@ int amdgpu_umsch_mm_init_microcode(struct amdgpu_umsch_mm *umsch)
+ fw_name = "amdgpu/umsch_mm_4_0_0.bin";
+ break;
+ default:
+- break;
++ return -EINVAL;
+ }
+
+ r = amdgpu_ucode_request(adev, &adev->umsch_mm.fw, AMDGPU_UCODE_REQUIRED,
+@@ -792,7 +337,7 @@ static int umsch_mm_late_init(struct amdgpu_ip_block *ip_block)
+ if (amdgpu_in_reset(adev) || adev->in_s0ix || adev->in_suspend)
+ return 0;
+
+- return umsch_mm_test(adev);
++ return 0;
+ }
+
+ static int umsch_mm_sw_init(struct amdgpu_ip_block *ip_block)
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c
+index 0af469ec6fccdd..13e5709ea1caa3 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c
+@@ -1017,6 +1017,7 @@ u32 amdgpu_virt_rlcg_reg_rw(struct amdgpu_device *adev, u32 offset, u32 v, u32 f
+ void *scratch_reg2;
+ void *scratch_reg3;
+ void *spare_int;
++ unsigned long flags;
+
+ if (!adev->gfx.rlc.rlcg_reg_access_supported) {
+ dev_err(adev->dev,
+@@ -1038,7 +1039,7 @@ u32 amdgpu_virt_rlcg_reg_rw(struct amdgpu_device *adev, u32 offset, u32 v, u32 f
+ scratch_reg2 = (void __iomem *)adev->rmmio + 4 * reg_access_ctrl->scratch_reg2;
+ scratch_reg3 = (void __iomem *)adev->rmmio + 4 * reg_access_ctrl->scratch_reg3;
+
+- mutex_lock(&adev->virt.rlcg_reg_lock);
++ spin_lock_irqsave(&adev->virt.rlcg_reg_lock, flags);
+
+ if (reg_access_ctrl->spare_int)
+ spare_int = (void __iomem *)adev->rmmio + 4 * reg_access_ctrl->spare_int;
+@@ -1097,7 +1098,7 @@ u32 amdgpu_virt_rlcg_reg_rw(struct amdgpu_device *adev, u32 offset, u32 v, u32 f
+
+ ret = readl(scratch_reg0);
+
+- mutex_unlock(&adev->virt.rlcg_reg_lock);
++ spin_unlock_irqrestore(&adev->virt.rlcg_reg_lock, flags);
+
+ return ret;
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h
+index 5381b8d596e622..0ca73343a76893 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h
+@@ -279,7 +279,8 @@ struct amdgpu_virt {
+ /* the ucode id to signal the autoload */
+ uint32_t autoload_ucode_id;
+
+- struct mutex rlcg_reg_lock;
++ /* Spinlock to protect access to the RLCG register interface */
++ spinlock_t rlcg_reg_lock;
+
+ union amd_sriov_ras_caps ras_en_caps;
+ union amd_sriov_ras_caps ras_telemetry_en_caps;
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
+index 56c06b72a70ac5..cfb51baa581a13 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
+@@ -1559,7 +1559,7 @@ static int gfx_v11_0_sw_init(struct amdgpu_ip_block *ip_block)
+ adev->gfx.me.num_me = 1;
+ adev->gfx.me.num_pipe_per_me = 1;
+ adev->gfx.me.num_queue_per_pipe = 1;
+- adev->gfx.mec.num_mec = 2;
++ adev->gfx.mec.num_mec = 1;
+ adev->gfx.mec.num_pipe_per_mec = 4;
+ adev->gfx.mec.num_queue_per_pipe = 4;
+ break;
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
+index 48ff00427882c4..c21b168f75a754 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
+@@ -1337,7 +1337,7 @@ static int gfx_v12_0_sw_init(struct amdgpu_ip_block *ip_block)
+ adev->gfx.me.num_me = 1;
+ adev->gfx.me.num_pipe_per_me = 1;
+ adev->gfx.me.num_queue_per_pipe = 1;
+- adev->gfx.mec.num_mec = 2;
++ adev->gfx.mec.num_mec = 1;
+ adev->gfx.mec.num_pipe_per_mec = 2;
+ adev->gfx.mec.num_queue_per_pipe = 4;
+ break;
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+index 0dce4421418c51..eda0dc83714a50 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+@@ -1269,6 +1269,7 @@ static void gfx_v9_0_check_fw_write_wait(struct amdgpu_device *adev)
+ adev->gfx.mec_fw_write_wait = false;
+
+ if ((amdgpu_ip_version(adev, GC_HWIP, 0) != IP_VERSION(9, 4, 1)) &&
++ (amdgpu_ip_version(adev, GC_HWIP, 0) != IP_VERSION(9, 4, 2)) &&
+ ((adev->gfx.mec_fw_version < 0x000001a5) ||
+ (adev->gfx.mec_feature_version < 46) ||
+ (adev->gfx.pfp_fw_version < 0x000000b7) ||
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_1.c b/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_1.c
+index 8b463c977d08f2..8b0b3739a53776 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_1.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_1.c
+@@ -575,8 +575,10 @@ static int vcn_v5_0_1_start(struct amdgpu_device *adev)
+ uint32_t tmp;
+ int i, j, k, r, vcn_inst;
+
+- if (adev->pm.dpm_enabled)
+- amdgpu_dpm_enable_uvd(adev, true);
++ for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
++ if (adev->pm.dpm_enabled)
++ amdgpu_dpm_enable_vcn(adev, true, i);
++ }
+
+ for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
+ fw_shared = adev->vcn.inst[i].fw_shared.cpu_addr;
+@@ -816,8 +818,10 @@ static int vcn_v5_0_1_stop(struct amdgpu_device *adev)
+ WREG32_SOC15(VCN, vcn_inst, regUVD_STATUS, 0);
+ }
+
+- if (adev->pm.dpm_enabled)
+- amdgpu_dpm_enable_uvd(adev, false);
++ for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
++ if (adev->pm.dpm_enabled)
++ amdgpu_dpm_enable_vcn(adev, false, i);
++ }
+
+ return 0;
+ }
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+index 34c2c42c0f95c6..ad9cb50a9fa38b 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+@@ -207,21 +207,6 @@ static int add_queue_mes(struct device_queue_manager *dqm, struct queue *q,
+ if (!down_read_trylock(&adev->reset_domain->sem))
+ return -EIO;
+
+- if (!pdd->proc_ctx_cpu_ptr) {
+- r = amdgpu_amdkfd_alloc_gtt_mem(adev,
+- AMDGPU_MES_PROC_CTX_SIZE,
+- &pdd->proc_ctx_bo,
+- &pdd->proc_ctx_gpu_addr,
+- &pdd->proc_ctx_cpu_ptr,
+- false);
+- if (r) {
+- dev_err(adev->dev,
+- "failed to allocate process context bo\n");
+- return r;
+- }
+- memset(pdd->proc_ctx_cpu_ptr, 0, AMDGPU_MES_PROC_CTX_SIZE);
+- }
+-
+ memset(&queue_input, 0x0, sizeof(struct mes_add_queue_input));
+ queue_input.process_id = qpd->pqm->process->pasid;
+ queue_input.page_table_base_addr = qpd->page_table_base;
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
+index bd36a75309e120..6c02bc36d63446 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
+@@ -363,10 +363,26 @@ int pqm_create_queue(struct process_queue_manager *pqm,
+ if (retval != 0)
+ return retval;
+
++ /* Register process if this is the first queue */
+ if (list_empty(&pdd->qpd.queues_list) &&
+ list_empty(&pdd->qpd.priv_queue_list))
+ dev->dqm->ops.register_process(dev->dqm, &pdd->qpd);
+
++ /* Allocate proc_ctx_bo only if MES is enabled and this is the first queue */
++ if (!pdd->proc_ctx_cpu_ptr && dev->kfd->shared_resources.enable_mes) {
++ retval = amdgpu_amdkfd_alloc_gtt_mem(dev->adev,
++ AMDGPU_MES_PROC_CTX_SIZE,
++ &pdd->proc_ctx_bo,
++ &pdd->proc_ctx_gpu_addr,
++ &pdd->proc_ctx_cpu_ptr,
++ false);
++ if (retval) {
++ dev_err(dev->adev->dev, "failed to allocate process context bo\n");
++ return retval;
++ }
++ memset(pdd->proc_ctx_cpu_ptr, 0, AMDGPU_MES_PROC_CTX_SIZE);
++ }
++
+ pqn = kzalloc(sizeof(*pqn), GFP_KERNEL);
+ if (!pqn) {
+ retval = -ENOMEM;
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dmub_hw_lock_mgr.c b/drivers/gpu/drm/amd/display/dc/dce/dmub_hw_lock_mgr.c
+index 6e2fce329d7382..d37ecfdde4f1bc 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dmub_hw_lock_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/dce/dmub_hw_lock_mgr.c
+@@ -63,6 +63,10 @@ void dmub_hw_lock_mgr_inbox0_cmd(struct dc_dmub_srv *dmub_srv,
+
+ bool should_use_dmub_lock(struct dc_link *link)
+ {
++ /* ASIC doesn't support DMUB */
++ if (!link->ctx->dmub_srv)
++ return false;
++
+ if (link->psr_settings.psr_version == DC_PSR_VERSION_SU_1)
+ return true;
+
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c b/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c
+index cee1b351e10589..f1fe49401bc0ac 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c
+@@ -281,10 +281,10 @@ static void CalculateDynamicMetadataParameters(
+ double DISPCLK,
+ double DCFClkDeepSleep,
+ double PixelClock,
+- long HTotal,
+- long VBlank,
+- long DynamicMetadataTransmittedBytes,
+- long DynamicMetadataLinesBeforeActiveRequired,
++ unsigned int HTotal,
++ unsigned int VBlank,
++ unsigned int DynamicMetadataTransmittedBytes,
++ int DynamicMetadataLinesBeforeActiveRequired,
+ int InterlaceEnable,
+ bool ProgressiveToInterlaceUnitInOPP,
+ double *Tsetup,
+@@ -3265,8 +3265,8 @@ static double CalculateWriteBackDelay(
+
+
+ static void CalculateDynamicMetadataParameters(int MaxInterDCNTileRepeaters, double DPPCLK, double DISPCLK,
+- double DCFClkDeepSleep, double PixelClock, long HTotal, long VBlank, long DynamicMetadataTransmittedBytes,
+- long DynamicMetadataLinesBeforeActiveRequired, int InterlaceEnable, bool ProgressiveToInterlaceUnitInOPP,
++ double DCFClkDeepSleep, double PixelClock, unsigned int HTotal, unsigned int VBlank, unsigned int DynamicMetadataTransmittedBytes,
++ int DynamicMetadataLinesBeforeActiveRequired, int InterlaceEnable, bool ProgressiveToInterlaceUnitInOPP,
+ double *Tsetup, double *Tdmbf, double *Tdmec, double *Tdmsks)
+ {
+ double TotalRepeaterDelayTime = 0;
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_core/dml2_core_dcn4.c b/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_core/dml2_core_dcn4.c
+index d68b4567e218aa..7216d25c783e68 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_core/dml2_core_dcn4.c
++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_core/dml2_core_dcn4.c
+@@ -141,9 +141,8 @@ bool core_dcn4_initialize(struct dml2_core_initialize_in_out *in_out)
+ core->clean_me_up.mode_lib.ip.subvp_fw_processing_delay_us = core_dcn4_ip_caps_base.subvp_pstate_allow_width_us;
+ core->clean_me_up.mode_lib.ip.subvp_swath_height_margin_lines = core_dcn4_ip_caps_base.subvp_swath_height_margin_lines;
+ } else {
+- memcpy(&core->clean_me_up.mode_lib.ip, &core_dcn4_ip_caps_base, sizeof(struct dml2_core_ip_params));
++ memcpy(&core->clean_me_up.mode_lib.ip, &core_dcn4_ip_caps_base, sizeof(struct dml2_core_ip_params));
+ patch_ip_params_with_ip_caps(&core->clean_me_up.mode_lib.ip, in_out->ip_caps);
+-
+ core->clean_me_up.mode_lib.ip.imall_supported = false;
+ }
+
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.c b/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.c
+index 9f55207ea9bc38..d834d134ad2b87 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.c
+@@ -459,8 +459,7 @@ int smu_cmn_send_smc_msg_with_param(struct smu_context *smu,
+ }
+ if (read_arg) {
+ smu_cmn_read_arg(smu, read_arg);
+- dev_dbg(adev->dev, "smu send message: %s(%d) param: 0x%08x, resp: 0x%08x,\
+- readval: 0x%08x\n",
++ dev_dbg(adev->dev, "smu send message: %s(%d) param: 0x%08x, resp: 0x%08x, readval: 0x%08x\n",
+ smu_get_message_name(smu, msg), index, param, reg, *read_arg);
+ } else {
+ dev_dbg(adev->dev, "smu send message: %s(%d) param: 0x%08x, resp: 0x%08x\n",
+diff --git a/drivers/gpu/drm/bridge/cadence/cdns-mhdp8546-core.c b/drivers/gpu/drm/bridge/cadence/cdns-mhdp8546-core.c
+index d081850e3c03e9..d4e4f484cbe5ef 100644
+--- a/drivers/gpu/drm/bridge/cadence/cdns-mhdp8546-core.c
++++ b/drivers/gpu/drm/bridge/cadence/cdns-mhdp8546-core.c
+@@ -2463,9 +2463,9 @@ static int cdns_mhdp_probe(struct platform_device *pdev)
+ if (!mhdp)
+ return -ENOMEM;
+
+- clk = devm_clk_get(dev, NULL);
++ clk = devm_clk_get_enabled(dev, NULL);
+ if (IS_ERR(clk)) {
+- dev_err(dev, "couldn't get clk: %ld\n", PTR_ERR(clk));
++ dev_err(dev, "couldn't get and enable clk: %ld\n", PTR_ERR(clk));
+ return PTR_ERR(clk);
+ }
+
+@@ -2504,14 +2504,12 @@ static int cdns_mhdp_probe(struct platform_device *pdev)
+
+ mhdp->info = of_device_get_match_data(dev);
+
+- clk_prepare_enable(clk);
+-
+ pm_runtime_enable(dev);
+ ret = pm_runtime_resume_and_get(dev);
+ if (ret < 0) {
+ dev_err(dev, "pm_runtime_resume_and_get failed\n");
+ pm_runtime_disable(dev);
+- goto clk_disable;
++ return ret;
+ }
+
+ if (mhdp->info && mhdp->info->ops && mhdp->info->ops->init) {
+@@ -2590,8 +2588,6 @@ static int cdns_mhdp_probe(struct platform_device *pdev)
+ runtime_put:
+ pm_runtime_put_sync(dev);
+ pm_runtime_disable(dev);
+-clk_disable:
+- clk_disable_unprepare(mhdp->clk);
+
+ return ret;
+ }
+@@ -2632,8 +2628,6 @@ static void cdns_mhdp_remove(struct platform_device *pdev)
+ cancel_work_sync(&mhdp->modeset_retry_work);
+ flush_work(&mhdp->hpd_work);
+ /* Ignoring mhdp->hdcp.check_work and mhdp->hdcp.prop_work here. */
+-
+- clk_disable_unprepare(mhdp->clk);
+ }
+
+ static const struct of_device_id mhdp_ids[] = {
+diff --git a/drivers/gpu/drm/bridge/ite-it6505.c b/drivers/gpu/drm/bridge/ite-it6505.c
+index 88ef76a37fe6ac..76dabca04d0d19 100644
+--- a/drivers/gpu/drm/bridge/ite-it6505.c
++++ b/drivers/gpu/drm/bridge/ite-it6505.c
+@@ -2250,12 +2250,13 @@ static bool it6505_hdcp_part2_ksvlist_check(struct it6505 *it6505)
+ continue;
+ }
+
+- for (i = 0; i < 5; i++) {
++ for (i = 0; i < 5; i++)
+ if (bv[i][3] != av[i][0] || bv[i][2] != av[i][1] ||
+- av[i][1] != av[i][2] || bv[i][0] != av[i][3])
++ bv[i][1] != av[i][2] || bv[i][0] != av[i][3])
+ break;
+
+- DRM_DEV_DEBUG_DRIVER(dev, "V' all match!! %d, %d", retry, i);
++ if (i == 5) {
++ DRM_DEV_DEBUG_DRIVER(dev, "V' all match!! %d", retry);
+ return true;
+ }
+ }
+diff --git a/drivers/gpu/drm/bridge/ti-sn65dsi86.c b/drivers/gpu/drm/bridge/ti-sn65dsi86.c
+index e4d9006b59f1b9..b3d617505dda7d 100644
+--- a/drivers/gpu/drm/bridge/ti-sn65dsi86.c
++++ b/drivers/gpu/drm/bridge/ti-sn65dsi86.c
+@@ -480,6 +480,7 @@ static int ti_sn65dsi86_add_aux_device(struct ti_sn65dsi86 *pdata,
+ const char *name)
+ {
+ struct device *dev = pdata->dev;
++ const struct i2c_client *client = to_i2c_client(dev);
+ struct auxiliary_device *aux;
+ int ret;
+
+@@ -488,6 +489,7 @@ static int ti_sn65dsi86_add_aux_device(struct ti_sn65dsi86 *pdata,
+ return -ENOMEM;
+
+ aux->name = name;
++ aux->id = (client->adapter->nr << 10) | client->addr;
+ aux->dev.parent = dev;
+ aux->dev.release = ti_sn65dsi86_aux_device_release;
+ device_set_of_node_from_dev(&aux->dev, dev);
+diff --git a/drivers/gpu/drm/display/drm_dp_mst_topology.c b/drivers/gpu/drm/display/drm_dp_mst_topology.c
+index 6d09bef671da05..314b394cb7e126 100644
+--- a/drivers/gpu/drm/display/drm_dp_mst_topology.c
++++ b/drivers/gpu/drm/display/drm_dp_mst_topology.c
+@@ -175,13 +175,13 @@ static int
+ drm_dp_mst_rad_to_str(const u8 rad[8], u8 lct, char *out, size_t len)
+ {
+ int i;
+- u8 unpacked_rad[16];
++ u8 unpacked_rad[16] = {};
+
+- for (i = 0; i < lct; i++) {
++ for (i = 1; i < lct; i++) {
+ if (i % 2)
+- unpacked_rad[i] = rad[i / 2] >> 4;
++ unpacked_rad[i] = rad[(i - 1) / 2] >> 4;
+ else
+- unpacked_rad[i] = rad[i / 2] & BIT_MASK(4);
++ unpacked_rad[i] = rad[(i - 1) / 2] & 0xF;
+ }
+
+ /* TODO: Eventually add something to printk so we can format the rad
+diff --git a/drivers/gpu/drm/drm_file.c b/drivers/gpu/drm/drm_file.c
+index 2289e71e2fa249..c299cd94d3f78f 100644
+--- a/drivers/gpu/drm/drm_file.c
++++ b/drivers/gpu/drm/drm_file.c
+@@ -830,8 +830,11 @@ void drm_send_event(struct drm_device *dev, struct drm_pending_event *e)
+ }
+ EXPORT_SYMBOL(drm_send_event);
+
+-static void print_size(struct drm_printer *p, const char *stat,
+- const char *region, u64 sz)
++void drm_fdinfo_print_size(struct drm_printer *p,
++ const char *prefix,
++ const char *stat,
++ const char *region,
++ u64 sz)
+ {
+ const char *units[] = {"", " KiB", " MiB"};
+ unsigned u;
+@@ -842,8 +845,10 @@ static void print_size(struct drm_printer *p, const char *stat,
+ sz = div_u64(sz, SZ_1K);
+ }
+
+- drm_printf(p, "drm-%s-%s:\t%llu%s\n", stat, region, sz, units[u]);
++ drm_printf(p, "%s-%s-%s:\t%llu%s\n",
++ prefix, stat, region, sz, units[u]);
+ }
++EXPORT_SYMBOL(drm_fdinfo_print_size);
+
+ int drm_memory_stats_is_zero(const struct drm_memory_stats *stats)
+ {
+@@ -868,17 +873,22 @@ void drm_print_memory_stats(struct drm_printer *p,
+ enum drm_gem_object_status supported_status,
+ const char *region)
+ {
+- print_size(p, "total", region, stats->private + stats->shared);
+- print_size(p, "shared", region, stats->shared);
++ const char *prefix = "drm";
++
++ drm_fdinfo_print_size(p, prefix, "total", region,
++ stats->private + stats->shared);
++ drm_fdinfo_print_size(p, prefix, "shared", region, stats->shared);
+
+ if (supported_status & DRM_GEM_OBJECT_ACTIVE)
+- print_size(p, "active", region, stats->active);
++ drm_fdinfo_print_size(p, prefix, "active", region, stats->active);
+
+ if (supported_status & DRM_GEM_OBJECT_RESIDENT)
+- print_size(p, "resident", region, stats->resident);
++ drm_fdinfo_print_size(p, prefix, "resident", region,
++ stats->resident);
+
+ if (supported_status & DRM_GEM_OBJECT_PURGEABLE)
+- print_size(p, "purgeable", region, stats->purgeable);
++ drm_fdinfo_print_size(p, prefix, "purgeable", region,
++ stats->purgeable);
+ }
+ EXPORT_SYMBOL(drm_print_memory_stats);
+
+diff --git a/drivers/gpu/drm/mediatek/mtk_crtc.c b/drivers/gpu/drm/mediatek/mtk_crtc.c
+index 5674f5707cca83..8f6fba4217ece5 100644
+--- a/drivers/gpu/drm/mediatek/mtk_crtc.c
++++ b/drivers/gpu/drm/mediatek/mtk_crtc.c
+@@ -620,13 +620,16 @@ static void mtk_crtc_update_config(struct mtk_crtc *mtk_crtc, bool needs_vblank)
+
+ mbox_send_message(mtk_crtc->cmdq_client.chan, cmdq_handle);
+ mbox_client_txdone(mtk_crtc->cmdq_client.chan, 0);
++ goto update_config_out;
+ }
+-#else
++#endif
+ spin_lock_irqsave(&mtk_crtc->config_lock, flags);
+ mtk_crtc->config_updating = false;
+ spin_unlock_irqrestore(&mtk_crtc->config_lock, flags);
+-#endif
+
++#if IS_REACHABLE(CONFIG_MTK_CMDQ)
++update_config_out:
++#endif
+ mutex_unlock(&mtk_crtc->hw_lock);
+ }
+
+diff --git a/drivers/gpu/drm/mediatek/mtk_dp.c b/drivers/gpu/drm/mediatek/mtk_dp.c
+index cd385ba4c66aaa..d2cf09124d1085 100644
+--- a/drivers/gpu/drm/mediatek/mtk_dp.c
++++ b/drivers/gpu/drm/mediatek/mtk_dp.c
+@@ -1766,7 +1766,7 @@ static int mtk_dp_parse_capabilities(struct mtk_dp *mtk_dp)
+
+ ret = drm_dp_dpcd_readb(&mtk_dp->aux, DP_MSTM_CAP, &val);
+ if (ret < 1) {
+- drm_err(mtk_dp->drm_dev, "Read mstm cap failed\n");
++ dev_err(mtk_dp->dev, "Read mstm cap failed: %zd\n", ret);
+ return ret == 0 ? -EIO : ret;
+ }
+
+@@ -1776,7 +1776,7 @@ static int mtk_dp_parse_capabilities(struct mtk_dp *mtk_dp)
+ DP_DEVICE_SERVICE_IRQ_VECTOR_ESI0,
+ &val);
+ if (ret < 1) {
+- drm_err(mtk_dp->drm_dev, "Read irq vector failed\n");
++ dev_err(mtk_dp->dev, "Read irq vector failed: %zd\n", ret);
+ return ret == 0 ? -EIO : ret;
+ }
+
+@@ -2059,7 +2059,7 @@ static int mtk_dp_wait_hpd_asserted(struct drm_dp_aux *mtk_aux, unsigned long wa
+
+ ret = mtk_dp_parse_capabilities(mtk_dp);
+ if (ret) {
+- drm_err(mtk_dp->drm_dev, "Can't parse capabilities\n");
++ dev_err(mtk_dp->dev, "Can't parse capabilities: %d\n", ret);
+ return ret;
+ }
+
+diff --git a/drivers/gpu/drm/mediatek/mtk_dsi.c b/drivers/gpu/drm/mediatek/mtk_dsi.c
+index 40752f2320548f..852aeef9f38dcd 100644
+--- a/drivers/gpu/drm/mediatek/mtk_dsi.c
++++ b/drivers/gpu/drm/mediatek/mtk_dsi.c
+@@ -1116,12 +1116,12 @@ static ssize_t mtk_dsi_host_transfer(struct mipi_dsi_host *host,
+ const struct mipi_dsi_msg *msg)
+ {
+ struct mtk_dsi *dsi = host_to_dsi(host);
+- u32 recv_cnt, i;
++ ssize_t recv_cnt;
+ u8 read_data[16];
+ void *src_addr;
+ u8 irq_flag = CMD_DONE_INT_FLAG;
+ u32 dsi_mode;
+- int ret;
++ int ret, i;
+
+ dsi_mode = readl(dsi->regs + DSI_MODE_CTRL);
+ if (dsi_mode & MODE) {
+@@ -1170,7 +1170,7 @@ static ssize_t mtk_dsi_host_transfer(struct mipi_dsi_host *host,
+ if (recv_cnt)
+ memcpy(msg->rx_buf, src_addr, recv_cnt);
+
+- DRM_INFO("dsi get %d byte data from the panel address(0x%x)\n",
++ DRM_INFO("dsi get %zd byte data from the panel address(0x%x)\n",
+ recv_cnt, *((u8 *)(msg->tx_buf)));
+
+ restore_dsi_mode:
+diff --git a/drivers/gpu/drm/mediatek/mtk_hdmi.c b/drivers/gpu/drm/mediatek/mtk_hdmi.c
+index ca82bc829cb964..250ad0d4027d6e 100644
+--- a/drivers/gpu/drm/mediatek/mtk_hdmi.c
++++ b/drivers/gpu/drm/mediatek/mtk_hdmi.c
+@@ -137,7 +137,7 @@ enum hdmi_aud_channel_swap_type {
+
+ struct hdmi_audio_param {
+ enum hdmi_audio_coding_type aud_codec;
+- enum hdmi_audio_sample_size aud_sampe_size;
++ enum hdmi_audio_sample_size aud_sample_size;
+ enum hdmi_aud_input_type aud_input_type;
+ enum hdmi_aud_i2s_fmt aud_i2s_fmt;
+ enum hdmi_aud_mclk aud_mclk;
+@@ -173,6 +173,7 @@ struct mtk_hdmi {
+ unsigned int sys_offset;
+ void __iomem *regs;
+ enum hdmi_colorspace csp;
++ struct platform_device *audio_pdev;
+ struct hdmi_audio_param aud_param;
+ bool audio_enable;
+ bool powered;
+@@ -1074,7 +1075,7 @@ static int mtk_hdmi_output_init(struct mtk_hdmi *hdmi)
+
+ hdmi->csp = HDMI_COLORSPACE_RGB;
+ aud_param->aud_codec = HDMI_AUDIO_CODING_TYPE_PCM;
+- aud_param->aud_sampe_size = HDMI_AUDIO_SAMPLE_SIZE_16;
++ aud_param->aud_sample_size = HDMI_AUDIO_SAMPLE_SIZE_16;
+ aud_param->aud_input_type = HDMI_AUD_INPUT_I2S;
+ aud_param->aud_i2s_fmt = HDMI_I2S_MODE_I2S_24BIT;
+ aud_param->aud_mclk = HDMI_AUD_MCLK_128FS;
+@@ -1572,14 +1573,14 @@ static int mtk_hdmi_audio_hw_params(struct device *dev, void *data,
+ switch (daifmt->fmt) {
+ case HDMI_I2S:
+ hdmi_params.aud_codec = HDMI_AUDIO_CODING_TYPE_PCM;
+- hdmi_params.aud_sampe_size = HDMI_AUDIO_SAMPLE_SIZE_16;
++ hdmi_params.aud_sample_size = HDMI_AUDIO_SAMPLE_SIZE_16;
+ hdmi_params.aud_input_type = HDMI_AUD_INPUT_I2S;
+ hdmi_params.aud_i2s_fmt = HDMI_I2S_MODE_I2S_24BIT;
+ hdmi_params.aud_mclk = HDMI_AUD_MCLK_128FS;
+ break;
+ case HDMI_SPDIF:
+ hdmi_params.aud_codec = HDMI_AUDIO_CODING_TYPE_PCM;
+- hdmi_params.aud_sampe_size = HDMI_AUDIO_SAMPLE_SIZE_16;
++ hdmi_params.aud_sample_size = HDMI_AUDIO_SAMPLE_SIZE_16;
+ hdmi_params.aud_input_type = HDMI_AUD_INPUT_SPDIF;
+ break;
+ default:
+@@ -1662,6 +1663,11 @@ static const struct hdmi_codec_ops mtk_hdmi_audio_codec_ops = {
+ .hook_plugged_cb = mtk_hdmi_audio_hook_plugged_cb,
+ };
+
++static void mtk_hdmi_unregister_audio_driver(void *data)
++{
++ platform_device_unregister(data);
++}
++
+ static int mtk_hdmi_register_audio_driver(struct device *dev)
+ {
+ struct mtk_hdmi *hdmi = dev_get_drvdata(dev);
+@@ -1672,13 +1678,20 @@ static int mtk_hdmi_register_audio_driver(struct device *dev)
+ .data = hdmi,
+ .no_capture_mute = 1,
+ };
+- struct platform_device *pdev;
++ int ret;
+
+- pdev = platform_device_register_data(dev, HDMI_CODEC_DRV_NAME,
+- PLATFORM_DEVID_AUTO, &codec_data,
+- sizeof(codec_data));
+- if (IS_ERR(pdev))
+- return PTR_ERR(pdev);
++ hdmi->audio_pdev = platform_device_register_data(dev,
++ HDMI_CODEC_DRV_NAME,
++ PLATFORM_DEVID_AUTO,
++ &codec_data,
++ sizeof(codec_data));
++ if (IS_ERR(hdmi->audio_pdev))
++ return PTR_ERR(hdmi->audio_pdev);
++
++ ret = devm_add_action_or_reset(dev, mtk_hdmi_unregister_audio_driver,
++ hdmi->audio_pdev);
++ if (ret)
++ return ret;
+
+ DRM_INFO("%s driver bound to HDMI\n", HDMI_CODEC_DRV_NAME);
+ return 0;
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c
+index 0fcae53c0b140b..159665cb6b14f9 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c
+@@ -1507,6 +1507,8 @@ static void a6xx_get_indexed_registers(struct msm_gpu *gpu,
+
+ /* Restore the size in the hardware */
+ gpu_write(gpu, REG_A6XX_CP_MEM_POOL_SIZE, mempool_size);
++
++ a6xx_state->nr_indexed_regs = count;
+ }
+
+ static void a7xx_get_indexed_registers(struct msm_gpu *gpu,
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
+index e5dcd41a361f45..29485e76f531fa 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
+@@ -1262,10 +1262,6 @@ static int dpu_crtc_atomic_check(struct drm_crtc *crtc,
+
+ DRM_DEBUG_ATOMIC("%s: check\n", dpu_crtc->name);
+
+- /* force a full mode set if active state changed */
+- if (crtc_state->active_changed)
+- crtc_state->mode_changed = true;
+-
+ if (cstate->num_mixers) {
+ rc = _dpu_crtc_check_and_setup_lm_bounds(crtc, crtc_state);
+ if (rc)
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+index 48e6e8d74c855b..7b56da24711e43 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+@@ -622,9 +622,9 @@ bool dpu_encoder_use_dsc_merge(struct drm_encoder *drm_enc)
+ if (dpu_enc->phys_encs[i])
+ intf_count++;
+
+- /* See dpu_encoder_get_topology, we only support 2:2:1 topology */
+- if (dpu_enc->dsc)
+- num_dsc = 2;
++ for (i = 0; i < MAX_CHANNELS_PER_ENC; i++)
++ if (dpu_enc->hw_dsc[i])
++ num_dsc++;
+
+ return (num_dsc > 0) && (num_dsc > intf_count);
+ }
+@@ -649,11 +649,14 @@ struct drm_dsc_config *dpu_encoder_get_dsc_config(struct drm_encoder *drm_enc)
+
+ static struct msm_display_topology dpu_encoder_get_topology(
+ struct dpu_encoder_virt *dpu_enc,
+- struct dpu_kms *dpu_kms,
+ struct drm_display_mode *mode,
+ struct drm_crtc_state *crtc_state,
+- struct drm_dsc_config *dsc)
++ struct drm_connector_state *conn_state)
+ {
++ struct msm_drm_private *priv = dpu_enc->base.dev->dev_private;
++ struct msm_display_info *disp_info = &dpu_enc->disp_info;
++ struct dpu_kms *dpu_kms = to_dpu_kms(priv->kms);
++ struct drm_dsc_config *dsc = dpu_encoder_get_dsc_config(&dpu_enc->base);
+ struct msm_display_topology topology = {0};
+ int i, intf_count = 0;
+
+@@ -686,14 +689,38 @@ static struct msm_display_topology dpu_encoder_get_topology(
+
+ if (dsc) {
+ /*
+- * In case of Display Stream Compression (DSC), we would use
+- * 2 DSC encoders, 2 layer mixers and 1 interface
+- * this is power optimal and can drive up to (including) 4k
+- * screens
++ * Use 2 DSC encoders, 2 layer mixers and 1 or 2 interfaces
++ * when Display Stream Compression (DSC) is enabled,
++ * and when enough DSC blocks are available.
++ * This is power-optimal and can drive up to (including) 4k
++ * screens.
+ */
+- topology.num_dsc = 2;
+- topology.num_lm = 2;
+- topology.num_intf = 1;
++ WARN(topology.num_intf > 2,
++ "DSC topology cannot support more than 2 interfaces\n");
++ if (intf_count >= 2 || dpu_kms->catalog->dsc_count >= 2) {
++ topology.num_dsc = 2;
++ topology.num_lm = 2;
++ } else {
++ topology.num_dsc = 1;
++ topology.num_lm = 1;
++ }
++ }
++
++ /*
++ * Use CDM only for writeback or DP at the moment as other interfaces cannot handle it.
++ * If writeback itself cannot handle cdm for some reason it will fail in its atomic_check()
++ * earlier.
++ */
++ if (disp_info->intf_type == INTF_WB && conn_state->writeback_job) {
++ struct drm_framebuffer *fb;
++
++ fb = conn_state->writeback_job->fb;
++
++ if (fb && MSM_FORMAT_IS_YUV(msm_framebuffer_format(fb)))
++ topology.needs_cdm = true;
++ } else if (disp_info->intf_type == INTF_DP) {
++ if (msm_dp_is_yuv_420_enabled(priv->dp[disp_info->h_tile_instance[0]], mode))
++ topology.needs_cdm = true;
+ }
+
+ return topology;
+@@ -733,6 +760,34 @@ static void dpu_encoder_assign_crtc_resources(struct dpu_kms *dpu_kms,
+ cstate->num_mixers = num_lm;
+ }
+
++/**
++ * dpu_encoder_virt_check_mode_changed: check if full modeset is required
++ * @drm_enc: Pointer to drm encoder structure
++ * @crtc_state: Corresponding CRTC state to be checked
++ * @conn_state: Corresponding Connector's state to be checked
++ *
++ * Check if the changes in the object properties demand full mode set.
++ */
++int dpu_encoder_virt_check_mode_changed(struct drm_encoder *drm_enc,
++ struct drm_crtc_state *crtc_state,
++ struct drm_connector_state *conn_state)
++{
++ struct dpu_encoder_virt *dpu_enc = to_dpu_encoder_virt(drm_enc);
++ struct msm_display_topology topology;
++
++ DPU_DEBUG_ENC(dpu_enc, "\n");
++
++ /* Using mode instead of adjusted_mode as it wasn't computed yet */
++ topology = dpu_encoder_get_topology(dpu_enc, &crtc_state->mode, crtc_state, conn_state);
++
++ if (topology.needs_cdm && !dpu_enc->cur_master->hw_cdm)
++ crtc_state->mode_changed = true;
++ else if (!topology.needs_cdm && dpu_enc->cur_master->hw_cdm)
++ crtc_state->mode_changed = true;
++
++ return 0;
++}
++
+ static int dpu_encoder_virt_atomic_check(
+ struct drm_encoder *drm_enc,
+ struct drm_crtc_state *crtc_state,
+@@ -743,10 +798,7 @@ static int dpu_encoder_virt_atomic_check(
+ struct dpu_kms *dpu_kms;
+ struct drm_display_mode *adj_mode;
+ struct msm_display_topology topology;
+- struct msm_display_info *disp_info;
+ struct dpu_global_state *global_state;
+- struct drm_framebuffer *fb;
+- struct drm_dsc_config *dsc;
+ int ret = 0;
+
+ if (!drm_enc || !crtc_state || !conn_state) {
+@@ -759,7 +811,6 @@ static int dpu_encoder_virt_atomic_check(
+ DPU_DEBUG_ENC(dpu_enc, "\n");
+
+ priv = drm_enc->dev->dev_private;
+- disp_info = &dpu_enc->disp_info;
+ dpu_kms = to_dpu_kms(priv->kms);
+ adj_mode = &crtc_state->adjusted_mode;
+ global_state = dpu_kms_get_global_state(crtc_state->state);
+@@ -768,37 +819,15 @@ static int dpu_encoder_virt_atomic_check(
+
+ trace_dpu_enc_atomic_check(DRMID(drm_enc));
+
+- dsc = dpu_encoder_get_dsc_config(drm_enc);
+-
+- topology = dpu_encoder_get_topology(dpu_enc, dpu_kms, adj_mode, crtc_state, dsc);
+-
+- /*
+- * Use CDM only for writeback or DP at the moment as other interfaces cannot handle it.
+- * If writeback itself cannot handle cdm for some reason it will fail in its atomic_check()
+- * earlier.
+- */
+- if (disp_info->intf_type == INTF_WB && conn_state->writeback_job) {
+- fb = conn_state->writeback_job->fb;
+-
+- if (fb && MSM_FORMAT_IS_YUV(msm_framebuffer_format(fb)))
+- topology.needs_cdm = true;
+- } else if (disp_info->intf_type == INTF_DP) {
+- if (msm_dp_is_yuv_420_enabled(priv->dp[disp_info->h_tile_instance[0]], adj_mode))
+- topology.needs_cdm = true;
+- }
++ topology = dpu_encoder_get_topology(dpu_enc, adj_mode, crtc_state, conn_state);
+
+- if (topology.needs_cdm && !dpu_enc->cur_master->hw_cdm)
+- crtc_state->mode_changed = true;
+- else if (!topology.needs_cdm && dpu_enc->cur_master->hw_cdm)
+- crtc_state->mode_changed = true;
+ /*
+ * Release and Allocate resources on every modeset
+- * Dont allocate when active is false.
+ */
+ if (drm_atomic_crtc_needs_modeset(crtc_state)) {
+ dpu_rm_release(global_state, drm_enc);
+
+- if (!crtc_state->active_changed || crtc_state->enable)
++ if (crtc_state->enable)
+ ret = dpu_rm_reserve(&dpu_kms->rm, global_state,
+ drm_enc, crtc_state, &topology);
+ if (!ret)
+@@ -2020,7 +2049,6 @@ static void dpu_encoder_dsc_pipe_cfg(struct dpu_hw_ctl *ctl,
+ static void dpu_encoder_prep_dsc(struct dpu_encoder_virt *dpu_enc,
+ struct drm_dsc_config *dsc)
+ {
+- /* coding only for 2LM, 2enc, 1 dsc config */
+ struct dpu_encoder_phys *enc_master = dpu_enc->cur_master;
+ struct dpu_hw_ctl *ctl = enc_master->hw_ctl;
+ struct dpu_hw_dsc *hw_dsc[MAX_CHANNELS_PER_ENC];
+@@ -2030,22 +2058,24 @@ static void dpu_encoder_prep_dsc(struct dpu_encoder_virt *dpu_enc,
+ int dsc_common_mode;
+ int pic_width;
+ u32 initial_lines;
++ int num_dsc = 0;
+ int i;
+
+ for (i = 0; i < MAX_CHANNELS_PER_ENC; i++) {
+ hw_pp[i] = dpu_enc->hw_pp[i];
+ hw_dsc[i] = dpu_enc->hw_dsc[i];
+
+- if (!hw_pp[i] || !hw_dsc[i]) {
+- DPU_ERROR_ENC(dpu_enc, "invalid params for DSC\n");
+- return;
+- }
++ if (!hw_pp[i] || !hw_dsc[i])
++ break;
++
++ num_dsc++;
+ }
+
+- dsc_common_mode = 0;
+ pic_width = dsc->pic_width;
+
+- dsc_common_mode = DSC_MODE_SPLIT_PANEL;
++ dsc_common_mode = 0;
++ if (num_dsc > 1)
++ dsc_common_mode |= DSC_MODE_SPLIT_PANEL;
+ if (dpu_encoder_use_dsc_merge(enc_master->parent))
+ dsc_common_mode |= DSC_MODE_MULTIPLEX;
+ if (enc_master->intf_mode == INTF_MODE_VIDEO)
+@@ -2054,14 +2084,10 @@ static void dpu_encoder_prep_dsc(struct dpu_encoder_virt *dpu_enc,
+ this_frame_slices = pic_width / dsc->slice_width;
+ intf_ip_w = this_frame_slices * dsc->slice_width;
+
+- /*
+- * dsc merge case: when using 2 encoders for the same stream,
+- * no. of slices need to be same on both the encoders.
+- */
+- enc_ip_w = intf_ip_w / 2;
++ enc_ip_w = intf_ip_w / num_dsc;
+ initial_lines = dpu_encoder_dsc_initial_line_calc(dsc, enc_ip_w);
+
+- for (i = 0; i < MAX_CHANNELS_PER_ENC; i++)
++ for (i = 0; i < num_dsc; i++)
+ dpu_encoder_dsc_pipe_cfg(ctl, hw_dsc[i], hw_pp[i],
+ dsc, dsc_common_mode, initial_lines);
+ }
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.h
+index 92b5ee390788d1..da133ee4701a32 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.h
+@@ -88,4 +88,8 @@ void dpu_encoder_cleanup_wb_job(struct drm_encoder *drm_enc,
+
+ bool dpu_encoder_is_valid_for_commit(struct drm_encoder *drm_enc);
+
++int dpu_encoder_virt_check_mode_changed(struct drm_encoder *drm_enc,
++ struct drm_crtc_state *crtc_state,
++ struct drm_connector_state *conn_state);
++
+ #endif /* __DPU_ENCODER_H__ */
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
+index 97e9cb8c2b099f..8741dc6fc8ddc4 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
+@@ -446,6 +446,29 @@ static void dpu_kms_disable_commit(struct msm_kms *kms)
+ pm_runtime_put_sync(&dpu_kms->pdev->dev);
+ }
+
++static int dpu_kms_check_mode_changed(struct msm_kms *kms, struct drm_atomic_state *state)
++{
++ struct drm_crtc_state *new_crtc_state;
++ struct drm_connector *connector;
++ struct drm_connector_state *new_conn_state;
++ int i;
++
++ for_each_new_connector_in_state(state, connector, new_conn_state, i) {
++ struct drm_encoder *encoder;
++
++ if (!new_conn_state->crtc || !new_conn_state->best_encoder)
++ continue;
++
++ new_crtc_state = drm_atomic_get_new_crtc_state(state, new_conn_state->crtc);
++
++ encoder = new_conn_state->best_encoder;
++
++ dpu_encoder_virt_check_mode_changed(encoder, new_crtc_state, new_conn_state);
++ }
++
++ return 0;
++}
++
+ static void dpu_kms_flush_commit(struct msm_kms *kms, unsigned crtc_mask)
+ {
+ struct dpu_kms *dpu_kms = to_dpu_kms(kms);
+@@ -1062,6 +1085,7 @@ static const struct msm_kms_funcs kms_funcs = {
+ .irq = dpu_core_irq,
+ .enable_commit = dpu_kms_enable_commit,
+ .disable_commit = dpu_kms_disable_commit,
++ .check_mode_changed = dpu_kms_check_mode_changed,
+ .flush_commit = dpu_kms_flush_commit,
+ .wait_flush = dpu_kms_wait_flush,
+ .complete_commit = dpu_kms_complete_commit,
+diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c b/drivers/gpu/drm/msm/dsi/dsi_host.c
+index 007311c21fdaa0..42e100a8adca09 100644
+--- a/drivers/gpu/drm/msm/dsi/dsi_host.c
++++ b/drivers/gpu/drm/msm/dsi/dsi_host.c
+@@ -846,7 +846,7 @@ static void dsi_ctrl_enable(struct msm_dsi_host *msm_host,
+ dsi_write(msm_host, REG_DSI_CPHY_MODE_CTRL, BIT(0));
+ }
+
+-static void dsi_update_dsc_timing(struct msm_dsi_host *msm_host, bool is_cmd_mode, u32 hdisplay)
++static void dsi_update_dsc_timing(struct msm_dsi_host *msm_host, bool is_cmd_mode)
+ {
+ struct drm_dsc_config *dsc = msm_host->dsc;
+ u32 reg, reg_ctrl, reg_ctrl2;
+@@ -858,7 +858,7 @@ static void dsi_update_dsc_timing(struct msm_dsi_host *msm_host, bool is_cmd_mod
+ /* first calculate dsc parameters and then program
+ * compress mode registers
+ */
+- slice_per_intf = msm_dsc_get_slices_per_intf(dsc, hdisplay);
++ slice_per_intf = dsc->slice_count;
+
+ total_bytes_per_intf = dsc->slice_chunk_size * slice_per_intf;
+ bytes_per_pkt = dsc->slice_chunk_size; /* * slice_per_pkt; */
+@@ -991,7 +991,7 @@ static void dsi_timing_setup(struct msm_dsi_host *msm_host, bool is_bonded_dsi)
+
+ if (msm_host->mode_flags & MIPI_DSI_MODE_VIDEO) {
+ if (msm_host->dsc)
+- dsi_update_dsc_timing(msm_host, false, mode->hdisplay);
++ dsi_update_dsc_timing(msm_host, false);
+
+ dsi_write(msm_host, REG_DSI_ACTIVE_H,
+ DSI_ACTIVE_H_START(ha_start) |
+@@ -1012,7 +1012,7 @@ static void dsi_timing_setup(struct msm_dsi_host *msm_host, bool is_bonded_dsi)
+ DSI_ACTIVE_VSYNC_VPOS_END(vs_end));
+ } else { /* command mode */
+ if (msm_host->dsc)
+- dsi_update_dsc_timing(msm_host, true, mode->hdisplay);
++ dsi_update_dsc_timing(msm_host, true);
+
+ /* image data and 1 byte write_memory_start cmd */
+ if (!msm_host->dsc)
+diff --git a/drivers/gpu/drm/msm/dsi/dsi_manager.c b/drivers/gpu/drm/msm/dsi/dsi_manager.c
+index a210b7c9e5ca28..4fabb01345aa2a 100644
+--- a/drivers/gpu/drm/msm/dsi/dsi_manager.c
++++ b/drivers/gpu/drm/msm/dsi/dsi_manager.c
+@@ -74,17 +74,35 @@ static int dsi_mgr_setup_components(int id)
+ int ret;
+
+ if (!IS_BONDED_DSI()) {
++ /*
++ * Set the usecase before calling msm_dsi_host_register(), which would
++ * already program the PLL source mux based on a default usecase.
++ */
++ msm_dsi_phy_set_usecase(msm_dsi->phy, MSM_DSI_PHY_STANDALONE);
++ msm_dsi_host_set_phy_mode(msm_dsi->host, msm_dsi->phy);
++
+ ret = msm_dsi_host_register(msm_dsi->host);
+ if (ret)
+ return ret;
+-
+- msm_dsi_phy_set_usecase(msm_dsi->phy, MSM_DSI_PHY_STANDALONE);
+- msm_dsi_host_set_phy_mode(msm_dsi->host, msm_dsi->phy);
+ } else if (other_dsi) {
+ struct msm_dsi *master_link_dsi = IS_MASTER_DSI_LINK(id) ?
+ msm_dsi : other_dsi;
+ struct msm_dsi *slave_link_dsi = IS_MASTER_DSI_LINK(id) ?
+ other_dsi : msm_dsi;
++
++ /*
++ * PLL0 is to drive both DSI link clocks in bonded DSI mode.
++ *
++ * Set the usecase before calling msm_dsi_host_register(), which would
++ * already program the PLL source mux based on a default usecase.
++ */
++ msm_dsi_phy_set_usecase(clk_master_dsi->phy,
++ MSM_DSI_PHY_MASTER);
++ msm_dsi_phy_set_usecase(clk_slave_dsi->phy,
++ MSM_DSI_PHY_SLAVE);
++ msm_dsi_host_set_phy_mode(msm_dsi->host, msm_dsi->phy);
++ msm_dsi_host_set_phy_mode(other_dsi->host, other_dsi->phy);
++
+ /* Register slave host first, so that slave DSI device
+ * has a chance to probe, and do not block the master
+ * DSI device's probe.
+@@ -98,14 +116,6 @@ static int dsi_mgr_setup_components(int id)
+ ret = msm_dsi_host_register(master_link_dsi->host);
+ if (ret)
+ return ret;
+-
+- /* PLL0 is to drive both 2 DSI link clocks in bonded DSI mode. */
+- msm_dsi_phy_set_usecase(clk_master_dsi->phy,
+- MSM_DSI_PHY_MASTER);
+- msm_dsi_phy_set_usecase(clk_slave_dsi->phy,
+- MSM_DSI_PHY_SLAVE);
+- msm_dsi_host_set_phy_mode(msm_dsi->host, msm_dsi->phy);
+- msm_dsi_host_set_phy_mode(other_dsi->host, other_dsi->phy);
+ }
+
+ return 0;
+diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c
+index 798168180c1ab6..a2c87c84aa05b8 100644
+--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c
++++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c
+@@ -305,7 +305,7 @@ static void dsi_pll_commit(struct dsi_pll_7nm *pll, struct dsi_pll_config *confi
+ writel(pll->phy->cphy_mode ? 0x00 : 0x10,
+ base + REG_DSI_7nm_PHY_PLL_CMODE_1);
+ writel(config->pll_clock_inverters,
+- base + REG_DSI_7nm_PHY_PLL_CLOCK_INVERTERS);
++ base + REG_DSI_7nm_PHY_PLL_CLOCK_INVERTERS_1);
+ }
+
+ static int dsi_pll_7nm_vco_set_rate(struct clk_hw *hw, unsigned long rate,
+diff --git a/drivers/gpu/drm/msm/msm_atomic.c b/drivers/gpu/drm/msm/msm_atomic.c
+index a7a2384044ffdb..364df245e3a209 100644
+--- a/drivers/gpu/drm/msm/msm_atomic.c
++++ b/drivers/gpu/drm/msm/msm_atomic.c
+@@ -183,10 +183,16 @@ static unsigned get_crtc_mask(struct drm_atomic_state *state)
+
+ int msm_atomic_check(struct drm_device *dev, struct drm_atomic_state *state)
+ {
++ struct msm_drm_private *priv = dev->dev_private;
++ struct msm_kms *kms = priv->kms;
+ struct drm_crtc_state *old_crtc_state, *new_crtc_state;
+ struct drm_crtc *crtc;
+- int i;
++ int i, ret = 0;
+
++ /*
++ * FIXME: stop setting allow_modeset and move this check to the DPU
++ * driver.
++ */
+ for_each_oldnew_crtc_in_state(state, crtc, old_crtc_state,
+ new_crtc_state, i) {
+ if ((old_crtc_state->ctm && !new_crtc_state->ctm) ||
+@@ -196,6 +202,11 @@ int msm_atomic_check(struct drm_device *dev, struct drm_atomic_state *state)
+ }
+ }
+
++ if (kms && kms->funcs && kms->funcs->check_mode_changed)
++ ret = kms->funcs->check_mode_changed(kms, state);
++ if (ret)
++ return ret;
++
+ return drm_atomic_helper_check(dev, state);
+ }
+
+diff --git a/drivers/gpu/drm/msm/msm_dsc_helper.h b/drivers/gpu/drm/msm/msm_dsc_helper.h
+index b9049fe1e27907..63f95523b2cbb4 100644
+--- a/drivers/gpu/drm/msm/msm_dsc_helper.h
++++ b/drivers/gpu/drm/msm/msm_dsc_helper.h
+@@ -12,17 +12,6 @@
+ #include <linux/math.h>
+ #include <drm/display/drm_dsc_helper.h>
+
+-/**
+- * msm_dsc_get_slices_per_intf() - calculate number of slices per interface
+- * @dsc: Pointer to drm dsc config struct
+- * @intf_width: interface width in pixels
+- * Returns: Integer representing the number of slices for the given interface
+- */
+-static inline u32 msm_dsc_get_slices_per_intf(const struct drm_dsc_config *dsc, u32 intf_width)
+-{
+- return DIV_ROUND_UP(intf_width, dsc->slice_width);
+-}
+-
+ /**
+ * msm_dsc_get_bytes_per_line() - calculate bytes per line
+ * @dsc: Pointer to drm dsc config struct
+diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
+index dee47040303684..3e9aa2cc38ef99 100644
+--- a/drivers/gpu/drm/msm/msm_gem_submit.c
++++ b/drivers/gpu/drm/msm/msm_gem_submit.c
+@@ -509,7 +509,7 @@ static struct drm_syncobj **msm_parse_deps(struct msm_gem_submit *submit,
+ }
+
+ if (syncobj_desc.flags & ~MSM_SUBMIT_SYNCOBJ_FLAGS) {
+- ret = -SUBMIT_ERROR(EINVAL, submit, "invalid syncobj flags: %x", syncobj_desc.flags);
++ ret = SUBMIT_ERROR(EINVAL, submit, "invalid syncobj flags: %x", syncobj_desc.flags);
+ break;
+ }
+
+diff --git a/drivers/gpu/drm/msm/msm_kms.h b/drivers/gpu/drm/msm/msm_kms.h
+index e60162744c6697..ec2a75af89b097 100644
+--- a/drivers/gpu/drm/msm/msm_kms.h
++++ b/drivers/gpu/drm/msm/msm_kms.h
+@@ -59,6 +59,13 @@ struct msm_kms_funcs {
+ void (*enable_commit)(struct msm_kms *kms);
+ void (*disable_commit)(struct msm_kms *kms);
+
++ /**
++ * @check_mode_changed:
++ *
++ * Verify if the commit requires a full modeset on one of CRTCs.
++ */
++ int (*check_mode_changed)(struct msm_kms *kms, struct drm_atomic_state *state);
++
+ /**
+ * Prepare for atomic commit. This is called after any previous
+ * (async or otherwise) commit has completed.
+diff --git a/drivers/gpu/drm/panel/panel-ilitek-ili9882t.c b/drivers/gpu/drm/panel/panel-ilitek-ili9882t.c
+index 266a087fe14c13..3c24a63b6be8c7 100644
+--- a/drivers/gpu/drm/panel/panel-ilitek-ili9882t.c
++++ b/drivers/gpu/drm/panel/panel-ilitek-ili9882t.c
+@@ -607,7 +607,7 @@ static int ili9882t_add(struct ili9882t *ili)
+
+ ili->enable_gpio = devm_gpiod_get(dev, "enable", GPIOD_OUT_LOW);
+ if (IS_ERR(ili->enable_gpio)) {
+- dev_err(dev, "cannot get reset-gpios %ld\n",
++ dev_err(dev, "cannot get enable-gpios %ld\n",
+ PTR_ERR(ili->enable_gpio));
+ return PTR_ERR(ili->enable_gpio);
+ }
+diff --git a/drivers/gpu/drm/panthor/panthor_device.c b/drivers/gpu/drm/panthor/panthor_device.c
+index 0a37cfeeb181cb..a9da1d1eeb7071 100644
+--- a/drivers/gpu/drm/panthor/panthor_device.c
++++ b/drivers/gpu/drm/panthor/panthor_device.c
+@@ -128,14 +128,11 @@ static void panthor_device_reset_work(struct work_struct *work)
+ struct panthor_device *ptdev = container_of(work, struct panthor_device, reset.work);
+ int ret = 0, cookie;
+
+- if (atomic_read(&ptdev->pm.state) != PANTHOR_DEVICE_PM_STATE_ACTIVE) {
+- /*
+- * No need for a reset as the device has been (or will be)
+- * powered down
+- */
+- atomic_set(&ptdev->reset.pending, 0);
++ /* If the device is entering suspend, we don't reset. A slow reset will
++ * be forced at resume time instead.
++ */
++ if (atomic_read(&ptdev->pm.state) != PANTHOR_DEVICE_PM_STATE_ACTIVE)
+ return;
+- }
+
+ if (!drm_dev_enter(&ptdev->base, &cookie))
+ return;
+@@ -477,6 +474,14 @@ int panthor_device_resume(struct device *dev)
+
+ if (panthor_device_is_initialized(ptdev) &&
+ drm_dev_enter(&ptdev->base, &cookie)) {
++ /* If there was a reset pending at the time we suspended the
++ * device, we force a slow reset.
++ */
++ if (atomic_read(&ptdev->reset.pending)) {
++ ptdev->reset.fast = false;
++ atomic_set(&ptdev->reset.pending, 0);
++ }
++
+ ret = panthor_device_resume_hw_components(ptdev);
+ if (ret && ptdev->reset.fast) {
+ drm_err(&ptdev->base, "Fast reset failed, trying a slow reset");
+@@ -493,9 +498,6 @@ int panthor_device_resume(struct device *dev)
+ goto err_suspend_devfreq;
+ }
+
+- if (atomic_read(&ptdev->reset.pending))
+- queue_work(ptdev->reset.wq, &ptdev->reset.work);
+-
+ /* Clear all IOMEM mappings pointing to this device after we've
+ * resumed. This way the fake mappings pointing to the dummy pages
+ * are removed and the real iomem mapping will be restored on next
+diff --git a/drivers/gpu/drm/panthor/panthor_drv.c b/drivers/gpu/drm/panthor/panthor_drv.c
+index 08136e790ca0a6..06fe46e320738f 100644
+--- a/drivers/gpu/drm/panthor/panthor_drv.c
++++ b/drivers/gpu/drm/panthor/panthor_drv.c
+@@ -1458,12 +1458,26 @@ static void panthor_gpu_show_fdinfo(struct panthor_device *ptdev,
+ drm_printf(p, "drm-curfreq-panthor:\t%lu Hz\n", ptdev->current_frequency);
+ }
+
++static void panthor_show_internal_memory_stats(struct drm_printer *p, struct drm_file *file)
++{
++ char *drv_name = file->minor->dev->driver->name;
++ struct panthor_file *pfile = file->driver_priv;
++ struct drm_memory_stats stats = {0};
++
++ panthor_fdinfo_gather_group_mem_info(pfile, &stats);
++ panthor_vm_heaps_sizes(pfile, &stats);
++
++ drm_fdinfo_print_size(p, drv_name, "resident", "memory", stats.resident);
++ drm_fdinfo_print_size(p, drv_name, "active", "memory", stats.active);
++}
++
+ static void panthor_show_fdinfo(struct drm_printer *p, struct drm_file *file)
+ {
+ struct drm_device *dev = file->minor->dev;
+ struct panthor_device *ptdev = container_of(dev, struct panthor_device, base);
+
+ panthor_gpu_show_fdinfo(ptdev, file->driver_priv, p);
++ panthor_show_internal_memory_stats(p, file);
+
+ drm_show_memory_stats(p, file);
+ }
+diff --git a/drivers/gpu/drm/panthor/panthor_fw.c b/drivers/gpu/drm/panthor/panthor_fw.c
+index 68eb4fb4d3a8ae..a024b475b68870 100644
+--- a/drivers/gpu/drm/panthor/panthor_fw.c
++++ b/drivers/gpu/drm/panthor/panthor_fw.c
+@@ -637,8 +637,8 @@ static int panthor_fw_read_build_info(struct panthor_device *ptdev,
+ u32 ehdr)
+ {
+ struct panthor_fw_build_info_hdr hdr;
+- char header[9];
+- const char git_sha_header[sizeof(header)] = "git_sha: ";
++ static const char git_sha_header[] = "git_sha: ";
++ const int header_len = sizeof(git_sha_header) - 1;
+ int ret;
+
+ ret = panthor_fw_binary_iter_read(ptdev, iter, &hdr, sizeof(hdr));
+@@ -652,8 +652,7 @@ static int panthor_fw_read_build_info(struct panthor_device *ptdev,
+ return 0;
+ }
+
+- if (memcmp(git_sha_header, fw->data + hdr.meta_start,
+- sizeof(git_sha_header))) {
++ if (memcmp(git_sha_header, fw->data + hdr.meta_start, header_len)) {
+ /* Not the expected header, this isn't metadata we understand */
+ return 0;
+ }
+@@ -666,7 +665,7 @@ static int panthor_fw_read_build_info(struct panthor_device *ptdev,
+ }
+
+ drm_info(&ptdev->base, "Firmware git sha: %s\n",
+- fw->data + hdr.meta_start + sizeof(git_sha_header));
++ fw->data + hdr.meta_start + header_len);
+
+ return 0;
+ }
+diff --git a/drivers/gpu/drm/panthor/panthor_fw.h b/drivers/gpu/drm/panthor/panthor_fw.h
+index 22448abde99232..6598d96c6d2aab 100644
+--- a/drivers/gpu/drm/panthor/panthor_fw.h
++++ b/drivers/gpu/drm/panthor/panthor_fw.h
+@@ -102,9 +102,9 @@ struct panthor_fw_cs_output_iface {
+ #define CS_STATUS_BLOCKED_REASON_SB_WAIT 1
+ #define CS_STATUS_BLOCKED_REASON_PROGRESS_WAIT 2
+ #define CS_STATUS_BLOCKED_REASON_SYNC_WAIT 3
+-#define CS_STATUS_BLOCKED_REASON_DEFERRED 5
+-#define CS_STATUS_BLOCKED_REASON_RES 6
+-#define CS_STATUS_BLOCKED_REASON_FLUSH 7
++#define CS_STATUS_BLOCKED_REASON_DEFERRED 4
++#define CS_STATUS_BLOCKED_REASON_RESOURCE 5
++#define CS_STATUS_BLOCKED_REASON_FLUSH 6
+ #define CS_STATUS_BLOCKED_REASON_MASK GENMASK(3, 0)
+ u32 status_blocked_reason;
+ u32 status_wait_sync_value_hi;
+diff --git a/drivers/gpu/drm/panthor/panthor_heap.c b/drivers/gpu/drm/panthor/panthor_heap.c
+index 3796a9eb22af2f..3bdf61c142644a 100644
+--- a/drivers/gpu/drm/panthor/panthor_heap.c
++++ b/drivers/gpu/drm/panthor/panthor_heap.c
+@@ -97,6 +97,9 @@ struct panthor_heap_pool {
+
+ /** @gpu_contexts: Buffer object containing the GPU heap contexts. */
+ struct panthor_kernel_bo *gpu_contexts;
++
++ /** @size: Size of all chunks across all heaps in the pool. */
++ atomic_t size;
+ };
+
+ static int panthor_heap_ctx_stride(struct panthor_device *ptdev)
+@@ -118,7 +121,7 @@ static void *panthor_get_heap_ctx(struct panthor_heap_pool *pool, int id)
+ panthor_get_heap_ctx_offset(pool, id);
+ }
+
+-static void panthor_free_heap_chunk(struct panthor_vm *vm,
++static void panthor_free_heap_chunk(struct panthor_heap_pool *pool,
+ struct panthor_heap *heap,
+ struct panthor_heap_chunk *chunk)
+ {
+@@ -127,12 +130,13 @@ static void panthor_free_heap_chunk(struct panthor_vm *vm,
+ heap->chunk_count--;
+ mutex_unlock(&heap->lock);
+
++ atomic_sub(heap->chunk_size, &pool->size);
++
+ panthor_kernel_bo_destroy(chunk->bo);
+ kfree(chunk);
+ }
+
+-static int panthor_alloc_heap_chunk(struct panthor_device *ptdev,
+- struct panthor_vm *vm,
++static int panthor_alloc_heap_chunk(struct panthor_heap_pool *pool,
+ struct panthor_heap *heap,
+ bool initial_chunk)
+ {
+@@ -144,7 +148,7 @@ static int panthor_alloc_heap_chunk(struct panthor_device *ptdev,
+ if (!chunk)
+ return -ENOMEM;
+
+- chunk->bo = panthor_kernel_bo_create(ptdev, vm, heap->chunk_size,
++ chunk->bo = panthor_kernel_bo_create(pool->ptdev, pool->vm, heap->chunk_size,
+ DRM_PANTHOR_BO_NO_MMAP,
+ DRM_PANTHOR_VM_BIND_OP_MAP_NOEXEC,
+ PANTHOR_VM_KERNEL_AUTO_VA);
+@@ -180,6 +184,8 @@ static int panthor_alloc_heap_chunk(struct panthor_device *ptdev,
+ heap->chunk_count++;
+ mutex_unlock(&heap->lock);
+
++ atomic_add(heap->chunk_size, &pool->size);
++
+ return 0;
+
+ err_destroy_bo:
+@@ -191,17 +197,16 @@ static int panthor_alloc_heap_chunk(struct panthor_device *ptdev,
+ return ret;
+ }
+
+-static void panthor_free_heap_chunks(struct panthor_vm *vm,
++static void panthor_free_heap_chunks(struct panthor_heap_pool *pool,
+ struct panthor_heap *heap)
+ {
+ struct panthor_heap_chunk *chunk, *tmp;
+
+ list_for_each_entry_safe(chunk, tmp, &heap->chunks, node)
+- panthor_free_heap_chunk(vm, heap, chunk);
++ panthor_free_heap_chunk(pool, heap, chunk);
+ }
+
+-static int panthor_alloc_heap_chunks(struct panthor_device *ptdev,
+- struct panthor_vm *vm,
++static int panthor_alloc_heap_chunks(struct panthor_heap_pool *pool,
+ struct panthor_heap *heap,
+ u32 chunk_count)
+ {
+@@ -209,7 +214,7 @@ static int panthor_alloc_heap_chunks(struct panthor_device *ptdev,
+ u32 i;
+
+ for (i = 0; i < chunk_count; i++) {
+- ret = panthor_alloc_heap_chunk(ptdev, vm, heap, true);
++ ret = panthor_alloc_heap_chunk(pool, heap, true);
+ if (ret)
+ return ret;
+ }
+@@ -226,7 +231,7 @@ panthor_heap_destroy_locked(struct panthor_heap_pool *pool, u32 handle)
+ if (!heap)
+ return -EINVAL;
+
+- panthor_free_heap_chunks(pool->vm, heap);
++ panthor_free_heap_chunks(pool, heap);
+ mutex_destroy(&heap->lock);
+ kfree(heap);
+ return 0;
+@@ -308,8 +313,7 @@ int panthor_heap_create(struct panthor_heap_pool *pool,
+ heap->max_chunks = max_chunks;
+ heap->target_in_flight = target_in_flight;
+
+- ret = panthor_alloc_heap_chunks(pool->ptdev, vm, heap,
+- initial_chunk_count);
++ ret = panthor_alloc_heap_chunks(pool, heap, initial_chunk_count);
+ if (ret)
+ goto err_free_heap;
+
+@@ -342,7 +346,7 @@ int panthor_heap_create(struct panthor_heap_pool *pool,
+ return id;
+
+ err_free_heap:
+- panthor_free_heap_chunks(pool->vm, heap);
++ panthor_free_heap_chunks(pool, heap);
+ mutex_destroy(&heap->lock);
+ kfree(heap);
+
+@@ -389,6 +393,7 @@ int panthor_heap_return_chunk(struct panthor_heap_pool *pool,
+ removed = chunk;
+ list_del(&chunk->node);
+ heap->chunk_count--;
++ atomic_sub(heap->chunk_size, &pool->size);
+ break;
+ }
+ }
+@@ -466,7 +471,7 @@ int panthor_heap_grow(struct panthor_heap_pool *pool,
+ * further jobs in this queue fail immediately instead of having to
+ * wait for the job timeout.
+ */
+- ret = panthor_alloc_heap_chunk(pool->ptdev, pool->vm, heap, false);
++ ret = panthor_alloc_heap_chunk(pool, heap, false);
+ if (ret)
+ goto out_unlock;
+
+@@ -560,6 +565,8 @@ panthor_heap_pool_create(struct panthor_device *ptdev, struct panthor_vm *vm)
+ if (ret)
+ goto err_destroy_pool;
+
++ atomic_add(pool->gpu_contexts->obj->size, &pool->size);
++
+ return pool;
+
+ err_destroy_pool:
+@@ -594,8 +601,10 @@ void panthor_heap_pool_destroy(struct panthor_heap_pool *pool)
+ xa_for_each(&pool->xa, i, heap)
+ drm_WARN_ON(&pool->ptdev->base, panthor_heap_destroy_locked(pool, i));
+
+- if (!IS_ERR_OR_NULL(pool->gpu_contexts))
++ if (!IS_ERR_OR_NULL(pool->gpu_contexts)) {
++ atomic_sub(pool->gpu_contexts->obj->size, &pool->size);
+ panthor_kernel_bo_destroy(pool->gpu_contexts);
++ }
+
+ /* Reflects the fact the pool has been destroyed. */
+ pool->vm = NULL;
+@@ -603,3 +612,18 @@ void panthor_heap_pool_destroy(struct panthor_heap_pool *pool)
+
+ panthor_heap_pool_put(pool);
+ }
++
++/**
++ * panthor_heap_pool_size() - Get a heap pool's total size
++ * @pool: Pool whose total chunks size to return
++ *
++ * Returns the aggregated size of all chunks for all heaps in the pool
++ *
++ */
++size_t panthor_heap_pool_size(struct panthor_heap_pool *pool)
++{
++ if (!pool)
++ return 0;
++
++ return atomic_read(&pool->size);
++}
+diff --git a/drivers/gpu/drm/panthor/panthor_heap.h b/drivers/gpu/drm/panthor/panthor_heap.h
+index 25a5f2bba44570..e3358d4e8edb21 100644
+--- a/drivers/gpu/drm/panthor/panthor_heap.h
++++ b/drivers/gpu/drm/panthor/panthor_heap.h
+@@ -27,6 +27,8 @@ struct panthor_heap_pool *
+ panthor_heap_pool_get(struct panthor_heap_pool *pool);
+ void panthor_heap_pool_put(struct panthor_heap_pool *pool);
+
++size_t panthor_heap_pool_size(struct panthor_heap_pool *pool);
++
+ int panthor_heap_grow(struct panthor_heap_pool *pool,
+ u64 heap_gpu_va,
+ u32 renderpasses_in_flight,
+diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
+index c39e3eb1c15d53..1202de8811c2ae 100644
+--- a/drivers/gpu/drm/panthor/panthor_mmu.c
++++ b/drivers/gpu/drm/panthor/panthor_mmu.c
+@@ -1941,6 +1941,33 @@ struct panthor_heap_pool *panthor_vm_get_heap_pool(struct panthor_vm *vm, bool c
+ return pool;
+ }
+
++/**
++ * panthor_vm_heaps_sizes() - Calculate size of all heap chunks across all
++ * heaps over all the heap pools in a VM
++ * @pfile: File.
++ * @stats: Memory stats to be updated.
++ *
++ * Calculate all heap chunk sizes in all heap pools bound to a VM. If the VM
++ * is active, record the size as active as well.
++ */
++void panthor_vm_heaps_sizes(struct panthor_file *pfile, struct drm_memory_stats *stats)
++{
++ struct panthor_vm *vm;
++ unsigned long i;
++
++ if (!pfile->vms)
++ return;
++
++ xa_lock(&pfile->vms->xa);
++ xa_for_each(&pfile->vms->xa, i, vm) {
++ size_t size = panthor_heap_pool_size(vm->heaps.pool);
++ stats->resident += size;
++ if (vm->as.id >= 0)
++ stats->active += size;
++ }
++ xa_unlock(&pfile->vms->xa);
++}
++
+ static u64 mair_to_memattr(u64 mair, bool coherent)
+ {
+ u64 memattr = 0;
+diff --git a/drivers/gpu/drm/panthor/panthor_mmu.h b/drivers/gpu/drm/panthor/panthor_mmu.h
+index 8d21e83d8aba1e..fc274637114e55 100644
+--- a/drivers/gpu/drm/panthor/panthor_mmu.h
++++ b/drivers/gpu/drm/panthor/panthor_mmu.h
+@@ -9,6 +9,7 @@
+
+ struct drm_exec;
+ struct drm_sched_job;
++struct drm_memory_stats;
+ struct panthor_gem_object;
+ struct panthor_heap_pool;
+ struct panthor_vm;
+@@ -37,6 +38,8 @@ int panthor_vm_flush_all(struct panthor_vm *vm);
+ struct panthor_heap_pool *
+ panthor_vm_get_heap_pool(struct panthor_vm *vm, bool create);
+
++void panthor_vm_heaps_sizes(struct panthor_file *pfile, struct drm_memory_stats *stats);
++
+ struct panthor_vm *panthor_vm_get(struct panthor_vm *vm);
+ void panthor_vm_put(struct panthor_vm *vm);
+ struct panthor_vm *panthor_vm_create(struct panthor_device *ptdev, bool for_mcu,
+diff --git a/drivers/gpu/drm/panthor/panthor_sched.c b/drivers/gpu/drm/panthor/panthor_sched.c
+index 77b184c3fb0cec..b8dbeb1586f64e 100644
+--- a/drivers/gpu/drm/panthor/panthor_sched.c
++++ b/drivers/gpu/drm/panthor/panthor_sched.c
+@@ -9,6 +9,7 @@
+ #include <drm/panthor_drm.h>
+
+ #include <linux/build_bug.h>
++#include <linux/cleanup.h>
+ #include <linux/clk.h>
+ #include <linux/delay.h>
+ #include <linux/dma-mapping.h>
+@@ -628,16 +629,19 @@ struct panthor_group {
+ */
+ struct panthor_kernel_bo *syncobjs;
+
+- /** @fdinfo: Per-file total cycle and timestamp values reference. */
++ /** @fdinfo: Per-file info exposed through /proc/<process>/fdinfo */
+ struct {
+ /** @data: Total sampled values for jobs in queues from this group. */
+ struct panthor_gpu_usage data;
+
+ /**
+- * @lock: Mutex to govern concurrent access from drm file's fdinfo callback
+- * and job post-completion processing function
++ * @fdinfo.lock: Spinlock to govern concurrent access from drm file's fdinfo
++ * callback and job post-completion processing function
+ */
+- struct mutex lock;
++ spinlock_t lock;
++
++ /** @fdinfo.kbo_sizes: Aggregate size of private kernel BO's held by the group. */
++ size_t kbo_sizes;
+ } fdinfo;
+
+ /** @state: Group state. */
+@@ -910,8 +914,6 @@ static void group_release_work(struct work_struct *work)
+ release_work);
+ u32 i;
+
+- mutex_destroy(&group->fdinfo.lock);
+-
+ for (i = 0; i < group->queue_count; i++)
+ group_free_queue(group, group->queues[i]);
+
+@@ -2861,12 +2863,12 @@ static void update_fdinfo_stats(struct panthor_job *job)
+ struct panthor_job_profiling_data *slots = queue->profiling.slots->kmap;
+ struct panthor_job_profiling_data *data = &slots[job->profiling.slot];
+
+- mutex_lock(&group->fdinfo.lock);
+- if (job->profiling.mask & PANTHOR_DEVICE_PROFILING_CYCLES)
+- fdinfo->cycles += data->cycles.after - data->cycles.before;
+- if (job->profiling.mask & PANTHOR_DEVICE_PROFILING_TIMESTAMP)
+- fdinfo->time += data->time.after - data->time.before;
+- mutex_unlock(&group->fdinfo.lock);
++ scoped_guard(spinlock, &group->fdinfo.lock) {
++ if (job->profiling.mask & PANTHOR_DEVICE_PROFILING_CYCLES)
++ fdinfo->cycles += data->cycles.after - data->cycles.before;
++ if (job->profiling.mask & PANTHOR_DEVICE_PROFILING_TIMESTAMP)
++ fdinfo->time += data->time.after - data->time.before;
++ }
+ }
+
+ void panthor_fdinfo_gather_group_samples(struct panthor_file *pfile)
+@@ -2878,14 +2880,15 @@ void panthor_fdinfo_gather_group_samples(struct panthor_file *pfile)
+ if (IS_ERR_OR_NULL(gpool))
+ return;
+
++ xa_lock(&gpool->xa);
+ xa_for_each(&gpool->xa, i, group) {
+- mutex_lock(&group->fdinfo.lock);
++ guard(spinlock)(&group->fdinfo.lock);
+ pfile->stats.cycles += group->fdinfo.data.cycles;
+ pfile->stats.time += group->fdinfo.data.time;
+ group->fdinfo.data.cycles = 0;
+ group->fdinfo.data.time = 0;
+- mutex_unlock(&group->fdinfo.lock);
+ }
++ xa_unlock(&gpool->xa);
+ }
+
+ static void group_sync_upd_work(struct work_struct *work)
+@@ -3381,6 +3384,29 @@ group_create_queue(struct panthor_group *group,
+ return ERR_PTR(ret);
+ }
+
++static void add_group_kbo_sizes(struct panthor_device *ptdev,
++ struct panthor_group *group)
++{
++ struct panthor_queue *queue;
++ int i;
++
++ if (drm_WARN_ON(&ptdev->base, IS_ERR_OR_NULL(group)))
++ return;
++ if (drm_WARN_ON(&ptdev->base, ptdev != group->ptdev))
++ return;
++
++ group->fdinfo.kbo_sizes += group->suspend_buf->obj->size;
++ group->fdinfo.kbo_sizes += group->protm_suspend_buf->obj->size;
++ group->fdinfo.kbo_sizes += group->syncobjs->obj->size;
++
++ for (i = 0; i < group->queue_count; i++) {
++ queue = group->queues[i];
++ group->fdinfo.kbo_sizes += queue->ringbuf->obj->size;
++ group->fdinfo.kbo_sizes += queue->iface.mem->obj->size;
++ group->fdinfo.kbo_sizes += queue->profiling.slots->obj->size;
++ }
++}
++
+ #define MAX_GROUPS_PER_POOL 128
+
+ int panthor_group_create(struct panthor_file *pfile,
+@@ -3505,7 +3531,8 @@ int panthor_group_create(struct panthor_file *pfile,
+ }
+ mutex_unlock(&sched->reset.lock);
+
+- mutex_init(&group->fdinfo.lock);
++ add_group_kbo_sizes(group->ptdev, group);
++ spin_lock_init(&group->fdinfo.lock);
+
+ return gid;
+
+@@ -3624,6 +3651,33 @@ void panthor_group_pool_destroy(struct panthor_file *pfile)
+ pfile->groups = NULL;
+ }
+
++/**
++ * panthor_fdinfo_gather_group_mem_info() - Retrieve aggregate size of all private kernel BO's
++ * belonging to all the groups owned by an open Panthor file
++ * @pfile: File.
++ * @stats: Memory statistics to be updated.
++ *
++ */
++void
++panthor_fdinfo_gather_group_mem_info(struct panthor_file *pfile,
++ struct drm_memory_stats *stats)
++{
++ struct panthor_group_pool *gpool = pfile->groups;
++ struct panthor_group *group;
++ unsigned long i;
++
++ if (IS_ERR_OR_NULL(gpool))
++ return;
++
++ xa_lock(&gpool->xa);
++ xa_for_each(&gpool->xa, i, group) {
++ stats->resident += group->fdinfo.kbo_sizes;
++ if (group->csg_id >= 0)
++ stats->active += group->fdinfo.kbo_sizes;
++ }
++ xa_unlock(&gpool->xa);
++}
++
+ static void job_release(struct kref *ref)
+ {
+ struct panthor_job *job = container_of(ref, struct panthor_job, refcount);
+diff --git a/drivers/gpu/drm/panthor/panthor_sched.h b/drivers/gpu/drm/panthor/panthor_sched.h
+index 5ae6b4bde7c50f..e650a445cf5070 100644
+--- a/drivers/gpu/drm/panthor/panthor_sched.h
++++ b/drivers/gpu/drm/panthor/panthor_sched.h
+@@ -9,6 +9,7 @@ struct dma_fence;
+ struct drm_file;
+ struct drm_gem_object;
+ struct drm_sched_job;
++struct drm_memory_stats;
+ struct drm_panthor_group_create;
+ struct drm_panthor_queue_create;
+ struct drm_panthor_group_get_state;
+@@ -36,6 +37,8 @@ void panthor_job_update_resvs(struct drm_exec *exec, struct drm_sched_job *job);
+
+ int panthor_group_pool_create(struct panthor_file *pfile);
+ void panthor_group_pool_destroy(struct panthor_file *pfile);
++void panthor_fdinfo_gather_group_mem_info(struct panthor_file *pfile,
++ struct drm_memory_stats *stats);
+
+ int panthor_sched_init(struct panthor_device *ptdev);
+ void panthor_sched_unplug(struct panthor_device *ptdev);
+diff --git a/drivers/gpu/drm/solomon/ssd130x-spi.c b/drivers/gpu/drm/solomon/ssd130x-spi.c
+index 08334be386946e..7c935870f7d2a9 100644
+--- a/drivers/gpu/drm/solomon/ssd130x-spi.c
++++ b/drivers/gpu/drm/solomon/ssd130x-spi.c
+@@ -151,7 +151,6 @@ static const struct of_device_id ssd130x_of_match[] = {
+ };
+ MODULE_DEVICE_TABLE(of, ssd130x_of_match);
+
+-#if IS_MODULE(CONFIG_DRM_SSD130X_SPI)
+ /*
+ * The SPI core always reports a MODALIAS uevent of the form "spi:<dev>", even
+ * if the device was registered via OF. This means that the module will not be
+@@ -160,7 +159,7 @@ MODULE_DEVICE_TABLE(of, ssd130x_of_match);
+ * To workaround this issue, add a SPI device ID table. Even when this should
+ * not be needed for this driver to match the registered SPI devices.
+ */
+-static const struct spi_device_id ssd130x_spi_table[] = {
++static const struct spi_device_id ssd130x_spi_id[] = {
+ /* ssd130x family */
+ { "sh1106", SH1106_ID },
+ { "ssd1305", SSD1305_ID },
+@@ -175,14 +174,14 @@ static const struct spi_device_id ssd130x_spi_table[] = {
+ { "ssd1331", SSD1331_ID },
+ { /* sentinel */ }
+ };
+-MODULE_DEVICE_TABLE(spi, ssd130x_spi_table);
+-#endif
++MODULE_DEVICE_TABLE(spi, ssd130x_spi_id);
+
+ static struct spi_driver ssd130x_spi_driver = {
+ .driver = {
+ .name = DRIVER_NAME,
+ .of_match_table = ssd130x_of_match,
+ },
++ .id_table = ssd130x_spi_id,
+ .probe = ssd130x_spi_probe,
+ .remove = ssd130x_spi_remove,
+ .shutdown = ssd130x_spi_shutdown,
+diff --git a/drivers/gpu/drm/solomon/ssd130x.c b/drivers/gpu/drm/solomon/ssd130x.c
+index b777690fd6607e..dd2006d51c7a2f 100644
+--- a/drivers/gpu/drm/solomon/ssd130x.c
++++ b/drivers/gpu/drm/solomon/ssd130x.c
+@@ -880,7 +880,7 @@ static int ssd132x_update_rect(struct ssd130x_device *ssd130x,
+ u8 n1 = buf[i * width + j];
+ u8 n2 = buf[i * width + j + 1];
+
+- data_array[array_idx++] = (n2 << 4) | n1;
++ data_array[array_idx++] = (n2 & 0xf0) | (n1 >> 4);
+ }
+ }
+
+@@ -1037,7 +1037,7 @@ static int ssd132x_fb_blit_rect(struct drm_framebuffer *fb,
+ struct drm_format_conv_state *fmtcnv_state)
+ {
+ struct ssd130x_device *ssd130x = drm_to_ssd130x(fb->dev);
+- unsigned int dst_pitch = drm_rect_width(rect);
++ unsigned int dst_pitch;
+ struct iosys_map dst;
+ int ret = 0;
+
+@@ -1046,6 +1046,8 @@ static int ssd132x_fb_blit_rect(struct drm_framebuffer *fb,
+ rect->x2 = min_t(unsigned int, round_up(rect->x2, SSD132X_SEGMENT_WIDTH),
+ ssd130x->width);
+
++ dst_pitch = drm_rect_width(rect);
++
+ ret = drm_gem_fb_begin_cpu_access(fb, DMA_FROM_DEVICE);
+ if (ret)
+ return ret;
+diff --git a/drivers/gpu/drm/vkms/vkms_drv.c b/drivers/gpu/drm/vkms/vkms_drv.c
+index e0409aba934969..7fefef690ab6bc 100644
+--- a/drivers/gpu/drm/vkms/vkms_drv.c
++++ b/drivers/gpu/drm/vkms/vkms_drv.c
+@@ -244,17 +244,19 @@ static int __init vkms_init(void)
+ if (!config)
+ return -ENOMEM;
+
+- default_config = config;
+-
+ config->cursor = enable_cursor;
+ config->writeback = enable_writeback;
+ config->overlay = enable_overlay;
+
+ ret = vkms_create(config);
+- if (ret)
++ if (ret) {
+ kfree(config);
++ return ret;
++ }
+
+- return ret;
++ default_config = config;
++
++ return 0;
+ }
+
+ static void vkms_destroy(struct vkms_config *config)
+@@ -278,9 +280,10 @@ static void vkms_destroy(struct vkms_config *config)
+
+ static void __exit vkms_exit(void)
+ {
+- if (default_config->dev)
+- vkms_destroy(default_config);
++ if (!default_config)
++ return;
+
++ vkms_destroy(default_config);
+ kfree(default_config);
+ }
+
+diff --git a/drivers/gpu/drm/xe/Kconfig b/drivers/gpu/drm/xe/Kconfig
+index b51a2bde73e294..dcf6583a4c5226 100644
+--- a/drivers/gpu/drm/xe/Kconfig
++++ b/drivers/gpu/drm/xe/Kconfig
+@@ -52,7 +52,7 @@ config DRM_XE
+ config DRM_XE_DISPLAY
+ bool "Enable display support"
+ depends on DRM_XE && DRM_XE=m && HAS_IOPORT
+- select FB_IOMEM_HELPERS
++ select FB_IOMEM_HELPERS if DRM_FBDEV_EMULATION
+ select I2C
+ select I2C_ALGOBIT
+ default y
+diff --git a/drivers/gpu/drm/xlnx/zynqmp_dp.c b/drivers/gpu/drm/xlnx/zynqmp_dp.c
+index 979f6d3239ba69..189a08cdc73c05 100644
+--- a/drivers/gpu/drm/xlnx/zynqmp_dp.c
++++ b/drivers/gpu/drm/xlnx/zynqmp_dp.c
+@@ -2295,7 +2295,7 @@ static int zynqmp_dp_ignore_hpd_set(void *data, u64 val)
+
+ mutex_lock(&dp->lock);
+ dp->ignore_hpd = val;
+- mutex_lock(&dp->lock);
++ mutex_unlock(&dp->lock);
+ return 0;
+ }
+
+diff --git a/drivers/gpu/drm/xlnx/zynqmp_dp_audio.c b/drivers/gpu/drm/xlnx/zynqmp_dp_audio.c
+index fa5f0ace608428..f07ff4eb3a6de7 100644
+--- a/drivers/gpu/drm/xlnx/zynqmp_dp_audio.c
++++ b/drivers/gpu/drm/xlnx/zynqmp_dp_audio.c
+@@ -323,12 +323,16 @@ int zynqmp_audio_init(struct zynqmp_dpsub *dpsub)
+
+ audio->dai_name = devm_kasprintf(dev, GFP_KERNEL,
+ "%s-dai", dev_name(dev));
++ if (!audio->dai_name)
++ return -ENOMEM;
+
+ for (unsigned int i = 0; i < ZYNQMP_NUM_PCMS; ++i) {
+ audio->link_names[i] = devm_kasprintf(dev, GFP_KERNEL,
+ "%s-dp-%u", dev_name(dev), i);
+ audio->pcm_names[i] = devm_kasprintf(dev, GFP_KERNEL,
+ "%s-pcm-%u", dev_name(dev), i);
++ if (!audio->link_names[i] || !audio->pcm_names[i])
++ return -ENOMEM;
+ }
+
+ audio->base = devm_platform_ioremap_resource_byname(pdev, "aud");
+diff --git a/drivers/gpu/drm/xlnx/zynqmp_dpsub.c b/drivers/gpu/drm/xlnx/zynqmp_dpsub.c
+index f953ca48a9303d..3a9544b97bc531 100644
+--- a/drivers/gpu/drm/xlnx/zynqmp_dpsub.c
++++ b/drivers/gpu/drm/xlnx/zynqmp_dpsub.c
+@@ -201,6 +201,8 @@ static int zynqmp_dpsub_probe(struct platform_device *pdev)
+ if (ret)
+ return ret;
+
++ dma_set_max_seg_size(&pdev->dev, DMA_BIT_MASK(32));
++
+ /* Try the reserved memory. Proceed if there's none. */
+ of_reserved_mem_device_init(&pdev->dev);
+
+diff --git a/drivers/greybus/gb-beagleplay.c b/drivers/greybus/gb-beagleplay.c
+index 473ac3f2d38219..da31f1131afcab 100644
+--- a/drivers/greybus/gb-beagleplay.c
++++ b/drivers/greybus/gb-beagleplay.c
+@@ -912,7 +912,9 @@ static enum fw_upload_err cc1352_prepare(struct fw_upload *fw_upload,
+ cc1352_bootloader_reset(bg);
+ WRITE_ONCE(bg->flashing_mode, false);
+ msleep(200);
+- gb_greybus_init(bg);
++ if (gb_greybus_init(bg) < 0)
++ return dev_err_probe(&bg->sd->dev, FW_UPLOAD_ERR_RW_ERROR,
++ "Failed to initialize greybus");
+ gb_beagleplay_start_svc(bg);
+ return FW_UPLOAD_ERR_FW_INVALID;
+ }
+diff --git a/drivers/hid/Makefile b/drivers/hid/Makefile
+index 482b096eea2808..0abfe51704a0b4 100644
+--- a/drivers/hid/Makefile
++++ b/drivers/hid/Makefile
+@@ -166,7 +166,6 @@ obj-$(CONFIG_USB_KBD) += usbhid/
+ obj-$(CONFIG_I2C_HID_CORE) += i2c-hid/
+
+ obj-$(CONFIG_INTEL_ISH_HID) += intel-ish-hid/
+-obj-$(INTEL_ISH_FIRMWARE_DOWNLOADER) += intel-ish-hid/
+
+ obj-$(CONFIG_AMD_SFH_HID) += amd-sfh-hid/
+
+diff --git a/drivers/hwtracing/coresight/coresight-catu.c b/drivers/hwtracing/coresight/coresight-catu.c
+index 275cc0d9f505fb..3378bb77e6b41d 100644
+--- a/drivers/hwtracing/coresight/coresight-catu.c
++++ b/drivers/hwtracing/coresight/coresight-catu.c
+@@ -269,7 +269,7 @@ catu_init_sg_table(struct device *catu_dev, int node,
+ * Each table can address upto 1MB and we can have
+ * CATU_PAGES_PER_SYSPAGE tables in a system page.
+ */
+- nr_tpages = DIV_ROUND_UP(size, SZ_1M) / CATU_PAGES_PER_SYSPAGE;
++ nr_tpages = DIV_ROUND_UP(size, CATU_PAGES_PER_SYSPAGE * SZ_1M);
+ catu_table = tmc_alloc_sg_table(catu_dev, node, nr_tpages,
+ size >> PAGE_SHIFT, pages);
+ if (IS_ERR(catu_table))
+diff --git a/drivers/hwtracing/coresight/coresight-core.c b/drivers/hwtracing/coresight/coresight-core.c
+index 0a9380350fb525..4936dc2f7a56b1 100644
+--- a/drivers/hwtracing/coresight/coresight-core.c
++++ b/drivers/hwtracing/coresight/coresight-core.c
+@@ -1092,18 +1092,20 @@ static void coresight_remove_conns(struct coresight_device *csdev)
+ }
+
+ /**
+- * coresight_timeout - loop until a bit has changed to a specific register
+- * state.
++ * coresight_timeout_action - loop until a bit has changed to a specific register
++ * state, with a callback after every trial.
+ * @csa: coresight device access for the device
+ * @offset: Offset of the register from the base of the device.
+ * @position: the position of the bit of interest.
+ * @value: the value the bit should have.
++ * @cb: Callback after each trial.
+ *
+ * Return: 0 as soon as the bit has taken the desired state or -EAGAIN if
+ * TIMEOUT_US has elapsed, which ever happens first.
+ */
+-int coresight_timeout(struct csdev_access *csa, u32 offset,
+- int position, int value)
++int coresight_timeout_action(struct csdev_access *csa, u32 offset,
++ int position, int value,
++ coresight_timeout_cb_t cb)
+ {
+ int i;
+ u32 val;
+@@ -1119,7 +1121,8 @@ int coresight_timeout(struct csdev_access *csa, u32 offset,
+ if (!(val & BIT(position)))
+ return 0;
+ }
+-
++ if (cb)
++ cb(csa, offset, position, value);
+ /*
+ * Delay is arbitrary - the specification doesn't say how long
+ * we are expected to wait. Extra check required to make sure
+@@ -1131,6 +1134,13 @@ int coresight_timeout(struct csdev_access *csa, u32 offset,
+
+ return -EAGAIN;
+ }
++EXPORT_SYMBOL_GPL(coresight_timeout_action);
++
++int coresight_timeout(struct csdev_access *csa, u32 offset,
++ int position, int value)
++{
++ return coresight_timeout_action(csa, offset, position, value, NULL);
++}
+ EXPORT_SYMBOL_GPL(coresight_timeout);
+
+ u32 coresight_relaxed_read32(struct coresight_device *csdev, u32 offset)
+diff --git a/drivers/hwtracing/coresight/coresight-etm4x-core.c b/drivers/hwtracing/coresight/coresight-etm4x-core.c
+index 2c1a60577728e2..5bda265d023400 100644
+--- a/drivers/hwtracing/coresight/coresight-etm4x-core.c
++++ b/drivers/hwtracing/coresight/coresight-etm4x-core.c
+@@ -428,6 +428,29 @@ static void etm4_check_arch_features(struct etmv4_drvdata *drvdata,
+ }
+ #endif /* CONFIG_ETM4X_IMPDEF_FEATURE */
+
++static void etm4x_sys_ins_barrier(struct csdev_access *csa, u32 offset, int pos, int val)
++{
++ if (!csa->io_mem)
++ isb();
++}
++
++/*
++ * etm4x_wait_status: Poll for TRCSTATR.<pos> == <val>. While using system
++ * instruction to access the trace unit, each access must be separated by a
++ * synchronization barrier. See ARM IHI0064H.b section "4.3.7 Synchronization of
++ * register updates", for system instructions section, in "Notes":
++ *
++ * "In particular, whenever disabling or enabling the trace unit, a poll of
++ * TRCSTATR needs explicit synchronization between each read of TRCSTATR"
++ */
++static int etm4x_wait_status(struct csdev_access *csa, int pos, int val)
++{
++ if (!csa->io_mem)
++ return coresight_timeout_action(csa, TRCSTATR, pos, val,
++ etm4x_sys_ins_barrier);
++ return coresight_timeout(csa, TRCSTATR, pos, val);
++}
++
+ static int etm4_enable_hw(struct etmv4_drvdata *drvdata)
+ {
+ int i, rc;
+@@ -459,7 +482,7 @@ static int etm4_enable_hw(struct etmv4_drvdata *drvdata)
+ isb();
+
+ /* wait for TRCSTATR.IDLE to go up */
+- if (coresight_timeout(csa, TRCSTATR, TRCSTATR_IDLE_BIT, 1))
++ if (etm4x_wait_status(csa, TRCSTATR_IDLE_BIT, 1))
+ dev_err(etm_dev,
+ "timeout while waiting for Idle Trace Status\n");
+ if (drvdata->nr_pe)
+@@ -552,7 +575,7 @@ static int etm4_enable_hw(struct etmv4_drvdata *drvdata)
+ isb();
+
+ /* wait for TRCSTATR.IDLE to go back down to '0' */
+- if (coresight_timeout(csa, TRCSTATR, TRCSTATR_IDLE_BIT, 0))
++ if (etm4x_wait_status(csa, TRCSTATR_IDLE_BIT, 0))
+ dev_err(etm_dev,
+ "timeout while waiting for Idle Trace Status\n");
+
+@@ -941,10 +964,25 @@ static void etm4_disable_hw(void *info)
+ tsb_csync();
+ etm4x_relaxed_write32(csa, control, TRCPRGCTLR);
+
++ /*
++ * As recommended by section 4.3.7 ("Synchronization when using system
++ * instructions to program the trace unit") of ARM IHI 0064H.b, the
++ * self-hosted trace analyzer must perform a Context synchronization
++ * event between writing to the TRCPRGCTLR and reading the TRCSTATR.
++ */
++ if (!csa->io_mem)
++ isb();
++
+ /* wait for TRCSTATR.PMSTABLE to go to '1' */
+- if (coresight_timeout(csa, TRCSTATR, TRCSTATR_PMSTABLE_BIT, 1))
++ if (etm4x_wait_status(csa, TRCSTATR_PMSTABLE_BIT, 1))
+ dev_err(etm_dev,
+ "timeout while waiting for PM stable Trace Status\n");
++ /*
++ * As recommended by section 4.3.7 (Synchronization of register updates)
++ * of ARM IHI 0064H.b.
++ */
++ isb();
++
+ /* read the status of the single shot comparators */
+ for (i = 0; i < drvdata->nr_ss_cmp; i++) {
+ config->ss_status[i] =
+@@ -1746,7 +1784,7 @@ static int __etm4_cpu_save(struct etmv4_drvdata *drvdata)
+ etm4_os_lock(drvdata);
+
+ /* wait for TRCSTATR.PMSTABLE to go up */
+- if (coresight_timeout(csa, TRCSTATR, TRCSTATR_PMSTABLE_BIT, 1)) {
++ if (etm4x_wait_status(csa, TRCSTATR_PMSTABLE_BIT, 1)) {
+ dev_err(etm_dev,
+ "timeout while waiting for PM Stable Status\n");
+ etm4_os_unlock(drvdata);
+@@ -1837,7 +1875,7 @@ static int __etm4_cpu_save(struct etmv4_drvdata *drvdata)
+ state->trcpdcr = etm4x_read32(csa, TRCPDCR);
+
+ /* wait for TRCSTATR.IDLE to go up */
+- if (coresight_timeout(csa, TRCSTATR, TRCSTATR_IDLE_BIT, 1)) {
++ if (etm4x_wait_status(csa, TRCSTATR_PMSTABLE_BIT, 1)) {
+ dev_err(etm_dev,
+ "timeout while waiting for Idle Trace Status\n");
+ etm4_os_unlock(drvdata);
+diff --git a/drivers/i3c/master/svc-i3c-master.c b/drivers/i3c/master/svc-i3c-master.c
+index d6057d8c7dec4e..ecc07c17f4c798 100644
+--- a/drivers/i3c/master/svc-i3c-master.c
++++ b/drivers/i3c/master/svc-i3c-master.c
+@@ -1037,7 +1037,7 @@ static int svc_i3c_update_ibirules(struct svc_i3c_master *master)
+
+ /* Create the IBIRULES register for both cases */
+ i3c_bus_for_each_i3cdev(&master->base.bus, dev) {
+- if (I3C_BCR_DEVICE_ROLE(dev->info.bcr) == I3C_BCR_I3C_MASTER)
++ if (!(dev->info.bcr & I3C_BCR_IBI_REQ_CAP))
+ continue;
+
+ if (dev->info.bcr & I3C_BCR_IBI_PAYLOAD) {
+diff --git a/drivers/iio/accel/mma8452.c b/drivers/iio/accel/mma8452.c
+index 962d289065ab7b..1b2014c4c4b469 100644
+--- a/drivers/iio/accel/mma8452.c
++++ b/drivers/iio/accel/mma8452.c
+@@ -712,7 +712,7 @@ static int mma8452_write_raw(struct iio_dev *indio_dev,
+ int val, int val2, long mask)
+ {
+ struct mma8452_data *data = iio_priv(indio_dev);
+- int i, ret;
++ int i, j, ret;
+
+ ret = iio_device_claim_direct_mode(indio_dev);
+ if (ret)
+@@ -772,14 +772,18 @@ static int mma8452_write_raw(struct iio_dev *indio_dev,
+ break;
+
+ case IIO_CHAN_INFO_OVERSAMPLING_RATIO:
+- ret = mma8452_get_odr_index(data);
++ j = mma8452_get_odr_index(data);
+
+ for (i = 0; i < ARRAY_SIZE(mma8452_os_ratio); i++) {
+- if (mma8452_os_ratio[i][ret] == val) {
++ if (mma8452_os_ratio[i][j] == val) {
+ ret = mma8452_set_power_mode(data, i);
+ break;
+ }
+ }
++ if (i == ARRAY_SIZE(mma8452_os_ratio)) {
++ ret = -EINVAL;
++ break;
++ }
+ break;
+ default:
+ ret = -EINVAL;
+diff --git a/drivers/iio/accel/msa311.c b/drivers/iio/accel/msa311.c
+index e7fb860f32337c..c2b05d1f7239a1 100644
+--- a/drivers/iio/accel/msa311.c
++++ b/drivers/iio/accel/msa311.c
+@@ -594,23 +594,25 @@ static int msa311_read_raw_data(struct iio_dev *indio_dev,
+ __le16 axis;
+ int err;
+
+- err = pm_runtime_resume_and_get(dev);
++ err = iio_device_claim_direct_mode(indio_dev);
+ if (err)
+ return err;
+
+- err = iio_device_claim_direct_mode(indio_dev);
+- if (err)
++ err = pm_runtime_resume_and_get(dev);
++ if (err) {
++ iio_device_release_direct_mode(indio_dev);
+ return err;
++ }
+
+ mutex_lock(&msa311->lock);
+ err = msa311_get_axis(msa311, chan, &axis);
+ mutex_unlock(&msa311->lock);
+
+- iio_device_release_direct_mode(indio_dev);
+-
+ pm_runtime_mark_last_busy(dev);
+ pm_runtime_put_autosuspend(dev);
+
++ iio_device_release_direct_mode(indio_dev);
++
+ if (err) {
+ dev_err(dev, "can't get axis %s (%pe)\n",
+ chan->datasheet_name, ERR_PTR(err));
+@@ -756,10 +758,6 @@ static int msa311_write_samp_freq(struct iio_dev *indio_dev, int val, int val2)
+ unsigned int odr;
+ int err;
+
+- err = pm_runtime_resume_and_get(dev);
+- if (err)
+- return err;
+-
+ /*
+ * Sampling frequency changing is prohibited when buffer mode is
+ * enabled, because sometimes MSA311 chip returns outliers during
+@@ -769,6 +767,12 @@ static int msa311_write_samp_freq(struct iio_dev *indio_dev, int val, int val2)
+ if (err)
+ return err;
+
++ err = pm_runtime_resume_and_get(dev);
++ if (err) {
++ iio_device_release_direct_mode(indio_dev);
++ return err;
++ }
++
+ err = -EINVAL;
+ for (odr = 0; odr < ARRAY_SIZE(msa311_odr_table); odr++)
+ if (val == msa311_odr_table[odr].integral &&
+@@ -779,11 +783,11 @@ static int msa311_write_samp_freq(struct iio_dev *indio_dev, int val, int val2)
+ break;
+ }
+
+- iio_device_release_direct_mode(indio_dev);
+-
+ pm_runtime_mark_last_busy(dev);
+ pm_runtime_put_autosuspend(dev);
+
++ iio_device_release_direct_mode(indio_dev);
++
+ if (err)
+ dev_err(dev, "can't update frequency (%pe)\n", ERR_PTR(err));
+
+diff --git a/drivers/iio/adc/ad4130.c b/drivers/iio/adc/ad4130.c
+index de32cc9d18c5ef..712f95f53c9ecd 100644
+--- a/drivers/iio/adc/ad4130.c
++++ b/drivers/iio/adc/ad4130.c
+@@ -223,6 +223,10 @@ enum ad4130_pin_function {
+ AD4130_PIN_FN_VBIAS = BIT(3),
+ };
+
++/*
++ * If you make adaptations in this struct, you most likely also have to adapt
++ * ad4130_setup_info_eq(), too.
++ */
+ struct ad4130_setup_info {
+ unsigned int iout0_val;
+ unsigned int iout1_val;
+@@ -591,6 +595,40 @@ static irqreturn_t ad4130_irq_handler(int irq, void *private)
+ return IRQ_HANDLED;
+ }
+
++static bool ad4130_setup_info_eq(struct ad4130_setup_info *a,
++ struct ad4130_setup_info *b)
++{
++ /*
++ * This is just to make sure that the comparison is adapted after
++ * struct ad4130_setup_info was changed.
++ */
++ static_assert(sizeof(*a) ==
++ sizeof(struct {
++ unsigned int iout0_val;
++ unsigned int iout1_val;
++ unsigned int burnout;
++ unsigned int pga;
++ unsigned int fs;
++ u32 ref_sel;
++ enum ad4130_filter_mode filter_mode;
++ bool ref_bufp;
++ bool ref_bufm;
++ }));
++
++ if (a->iout0_val != b->iout0_val ||
++ a->iout1_val != b->iout1_val ||
++ a->burnout != b->burnout ||
++ a->pga != b->pga ||
++ a->fs != b->fs ||
++ a->ref_sel != b->ref_sel ||
++ a->filter_mode != b->filter_mode ||
++ a->ref_bufp != b->ref_bufp ||
++ a->ref_bufm != b->ref_bufm)
++ return false;
++
++ return true;
++}
++
+ static int ad4130_find_slot(struct ad4130_state *st,
+ struct ad4130_setup_info *target_setup_info,
+ unsigned int *slot, bool *overwrite)
+@@ -604,8 +642,7 @@ static int ad4130_find_slot(struct ad4130_state *st,
+ struct ad4130_slot_info *slot_info = &st->slots_info[i];
+
+ /* Immediately accept a matching setup info. */
+- if (!memcmp(target_setup_info, &slot_info->setup,
+- sizeof(*target_setup_info))) {
++ if (ad4130_setup_info_eq(target_setup_info, &slot_info->setup)) {
+ *slot = i;
+ return 0;
+ }
+diff --git a/drivers/iio/adc/ad7124.c b/drivers/iio/adc/ad7124.c
+index 6ae27cdd32503c..de90ecb5f6307e 100644
+--- a/drivers/iio/adc/ad7124.c
++++ b/drivers/iio/adc/ad7124.c
+@@ -151,7 +151,11 @@ struct ad7124_chip_info {
+ struct ad7124_channel_config {
+ bool live;
+ unsigned int cfg_slot;
+- /* Following fields are used to compare equality. */
++ /*
++ * Following fields are used to compare for equality. If you
++ * make adaptations in it, you most likely also have to adapt
++ * ad7124_find_similar_live_cfg(), too.
++ */
+ struct_group(config_props,
+ enum ad7124_ref_sel refsel;
+ bool bipolar;
+@@ -338,15 +342,38 @@ static struct ad7124_channel_config *ad7124_find_similar_live_cfg(struct ad7124_
+ struct ad7124_channel_config *cfg)
+ {
+ struct ad7124_channel_config *cfg_aux;
+- ptrdiff_t cmp_size;
+ int i;
+
+- cmp_size = sizeof_field(struct ad7124_channel_config, config_props);
++ /*
++ * This is just to make sure that the comparison is adapted after
++ * struct ad7124_channel_config was changed.
++ */
++ static_assert(sizeof_field(struct ad7124_channel_config, config_props) ==
++ sizeof(struct {
++ enum ad7124_ref_sel refsel;
++ bool bipolar;
++ bool buf_positive;
++ bool buf_negative;
++ unsigned int vref_mv;
++ unsigned int pga_bits;
++ unsigned int odr;
++ unsigned int odr_sel_bits;
++ unsigned int filter_type;
++ }));
++
+ for (i = 0; i < st->num_channels; i++) {
+ cfg_aux = &st->channels[i].cfg;
+
+ if (cfg_aux->live &&
+- !memcmp(&cfg->config_props, &cfg_aux->config_props, cmp_size))
++ cfg->refsel == cfg_aux->refsel &&
++ cfg->bipolar == cfg_aux->bipolar &&
++ cfg->buf_positive == cfg_aux->buf_positive &&
++ cfg->buf_negative == cfg_aux->buf_negative &&
++ cfg->vref_mv == cfg_aux->vref_mv &&
++ cfg->pga_bits == cfg_aux->pga_bits &&
++ cfg->odr == cfg_aux->odr &&
++ cfg->odr_sel_bits == cfg_aux->odr_sel_bits &&
++ cfg->filter_type == cfg_aux->filter_type)
+ return cfg_aux;
+ }
+
+@@ -540,14 +567,21 @@ static int ad7124_append_status(struct ad_sigma_delta *sd, bool append)
+ return 0;
+ }
+
+-static int ad7124_disable_all(struct ad_sigma_delta *sd)
++static int ad7124_disable_one(struct ad_sigma_delta *sd, unsigned int chan)
+ {
+ struct ad7124_state *st = container_of(sd, struct ad7124_state, sd);
++
++ /* The relevant thing here is that AD7124_CHANNEL_EN_MSK is cleared. */
++ return ad_sd_write_reg(&st->sd, AD7124_CHANNEL(chan), 2, 0);
++}
++
++static int ad7124_disable_all(struct ad_sigma_delta *sd)
++{
+ int ret;
+ int i;
+
+- for (i = 0; i < st->num_channels; i++) {
+- ret = ad7124_spi_write_mask(st, AD7124_CHANNEL(i), AD7124_CHANNEL_EN_MSK, 0, 2);
++ for (i = 0; i < 16; i++) {
++ ret = ad7124_disable_one(sd, i);
+ if (ret < 0)
+ return ret;
+ }
+@@ -555,13 +589,6 @@ static int ad7124_disable_all(struct ad_sigma_delta *sd)
+ return 0;
+ }
+
+-static int ad7124_disable_one(struct ad_sigma_delta *sd, unsigned int chan)
+-{
+- struct ad7124_state *st = container_of(sd, struct ad7124_state, sd);
+-
+- return ad7124_spi_write_mask(st, AD7124_CHANNEL(chan), AD7124_CHANNEL_EN_MSK, 0, 2);
+-}
+-
+ static const struct ad_sigma_delta_info ad7124_sigma_delta_info = {
+ .set_channel = ad7124_set_channel,
+ .append_status = ad7124_append_status,
+@@ -1016,11 +1043,10 @@ static int ad7124_setup(struct ad7124_state *st)
+ * set all channels to this default value.
+ */
+ ad7124_set_channel_odr(st, i, 10);
+-
+- /* Disable all channels to prevent unintended conversions. */
+- ad_sd_write_reg(&st->sd, AD7124_CHANNEL(i), 2, 0);
+ }
+
++ ad7124_disable_all(&st->sd);
++
+ ret = ad_sd_write_reg(&st->sd, AD7124_ADC_CONTROL, 2, st->adc_control);
+ if (ret < 0)
+ return dev_err_probe(dev, ret, "Failed to setup CONTROL register\n");
+diff --git a/drivers/iio/adc/ad7173.c b/drivers/iio/adc/ad7173.c
+index 6c4ed10ae580d6..4f8810e35a8d11 100644
+--- a/drivers/iio/adc/ad7173.c
++++ b/drivers/iio/adc/ad7173.c
+@@ -189,7 +189,11 @@ struct ad7173_channel_config {
+ u8 cfg_slot;
+ bool live;
+
+- /* Following fields are used to compare equality. */
++ /*
++ * Following fields are used to compare equality. If you
++ * make adaptations in it, you most likely also have to adapt
++ * ad7173_find_live_config(), too.
++ */
+ struct_group(config_props,
+ bool bipolar;
+ bool input_buf;
+@@ -559,6 +563,9 @@ static ssize_t ad7173_write_syscalib(struct iio_dev *indio_dev,
+ if (ret)
+ return ret;
+
++ if (!iio_device_claim_direct(indio_dev))
++ return -EBUSY;
++
+ mode = st->channels[chan->channel].syscalib_mode;
+ if (sys_calib) {
+ if (mode == AD7173_SYSCALIB_ZERO_SCALE)
+@@ -569,6 +576,8 @@ static ssize_t ad7173_write_syscalib(struct iio_dev *indio_dev,
+ chan->address);
+ }
+
++ iio_device_release_direct(indio_dev);
++
+ return ret ? : len;
+ }
+
+@@ -712,15 +721,28 @@ static struct ad7173_channel_config *
+ ad7173_find_live_config(struct ad7173_state *st, struct ad7173_channel_config *cfg)
+ {
+ struct ad7173_channel_config *cfg_aux;
+- ptrdiff_t cmp_size;
+ int i;
+
+- cmp_size = sizeof_field(struct ad7173_channel_config, config_props);
++ /*
++ * This is just to make sure that the comparison is adapted after
++ * struct ad7173_channel_config was changed.
++ */
++ static_assert(sizeof_field(struct ad7173_channel_config, config_props) ==
++ sizeof(struct {
++ bool bipolar;
++ bool input_buf;
++ u8 odr;
++ u8 ref_sel;
++ }));
++
+ for (i = 0; i < st->num_channels; i++) {
+ cfg_aux = &st->channels[i].cfg;
+
+ if (cfg_aux->live &&
+- !memcmp(&cfg->config_props, &cfg_aux->config_props, cmp_size))
++ cfg->bipolar == cfg_aux->bipolar &&
++ cfg->input_buf == cfg_aux->input_buf &&
++ cfg->odr == cfg_aux->odr &&
++ cfg->ref_sel == cfg_aux->ref_sel)
+ return cfg_aux;
+ }
+ return NULL;
+diff --git a/drivers/iio/adc/ad7192.c b/drivers/iio/adc/ad7192.c
+index cfaf8f7e0a07da..1ebb738d99f57d 100644
+--- a/drivers/iio/adc/ad7192.c
++++ b/drivers/iio/adc/ad7192.c
+@@ -256,6 +256,9 @@ static ssize_t ad7192_write_syscalib(struct iio_dev *indio_dev,
+ if (ret)
+ return ret;
+
++ if (!iio_device_claim_direct(indio_dev))
++ return -EBUSY;
++
+ temp = st->syscalib_mode[chan->channel];
+ if (sys_calib) {
+ if (temp == AD7192_SYSCALIB_ZERO_SCALE)
+@@ -266,6 +269,8 @@ static ssize_t ad7192_write_syscalib(struct iio_dev *indio_dev,
+ chan->address);
+ }
+
++ iio_device_release_direct(indio_dev);
++
+ return ret ? ret : len;
+ }
+
+diff --git a/drivers/iio/adc/ad7768-1.c b/drivers/iio/adc/ad7768-1.c
+index 113703fb724544..6f8816483f1a02 100644
+--- a/drivers/iio/adc/ad7768-1.c
++++ b/drivers/iio/adc/ad7768-1.c
+@@ -574,6 +574,21 @@ static int ad7768_probe(struct spi_device *spi)
+ return -ENOMEM;
+
+ st = iio_priv(indio_dev);
++ /*
++ * Datasheet recommends SDI line to be kept high when data is not being
++ * clocked out of the controller and the spi clock is free running,
++ * to prevent accidental reset.
++ * Since many controllers do not support the SPI_MOSI_IDLE_HIGH flag
++ * yet, only request the MOSI idle state to enable if the controller
++ * supports it.
++ */
++ if (spi->controller->mode_bits & SPI_MOSI_IDLE_HIGH) {
++ spi->mode |= SPI_MOSI_IDLE_HIGH;
++ ret = spi_setup(spi);
++ if (ret < 0)
++ return ret;
++ }
++
+ st->spi = spi;
+
+ st->vref = devm_regulator_get(&spi->dev, "vref");
+diff --git a/drivers/iio/adc/ad_sigma_delta.c b/drivers/iio/adc/ad_sigma_delta.c
+index d5d81581ab3409..77b4e8bc47485c 100644
+--- a/drivers/iio/adc/ad_sigma_delta.c
++++ b/drivers/iio/adc/ad_sigma_delta.c
+@@ -339,6 +339,7 @@ int ad_sd_calibrate(struct ad_sigma_delta *sigma_delta,
+ out:
+ sigma_delta->keep_cs_asserted = false;
+ ad_sigma_delta_set_mode(sigma_delta, AD_SD_MODE_IDLE);
++ ad_sigma_delta_disable_one(sigma_delta, channel);
+ sigma_delta->bus_locked = false;
+ spi_bus_unlock(sigma_delta->spi->controller);
+
+diff --git a/drivers/iio/dac/adi-axi-dac.c b/drivers/iio/dac/adi-axi-dac.c
+index b143f7ed684727..ac871deb8063cd 100644
+--- a/drivers/iio/dac/adi-axi-dac.c
++++ b/drivers/iio/dac/adi-axi-dac.c
+@@ -585,6 +585,14 @@ static int axi_dac_ddr_disable(struct iio_backend *back)
+ static int axi_dac_data_stream_enable(struct iio_backend *back)
+ {
+ struct axi_dac_state *st = iio_backend_get_priv(back);
++ int ret, val;
++
++ ret = regmap_read_poll_timeout(st->regmap,
++ AXI_DAC_UI_STATUS_REG, val,
++ FIELD_GET(AXI_DAC_UI_STATUS_IF_BUSY, val) == 0,
++ 10, 100 * KILO);
++ if (ret)
++ return ret;
+
+ return regmap_set_bits(st->regmap, AXI_DAC_CUSTOM_CTRL_REG,
+ AXI_DAC_CUSTOM_CTRL_STREAM_ENABLE);
+diff --git a/drivers/iio/industrialio-backend.c b/drivers/iio/industrialio-backend.c
+index 36328127203520..aa2b8b38ab5876 100644
+--- a/drivers/iio/industrialio-backend.c
++++ b/drivers/iio/industrialio-backend.c
+@@ -155,10 +155,12 @@ static ssize_t iio_backend_debugfs_write_reg(struct file *file,
+ ssize_t rc;
+ int ret;
+
+- rc = simple_write_to_buffer(buf, sizeof(buf), ppos, userbuf, count);
++ rc = simple_write_to_buffer(buf, sizeof(buf) - 1, ppos, userbuf, count);
+ if (rc < 0)
+ return rc;
+
++ buf[count] = '\0';
++
+ ret = sscanf(buf, "%i %i", &back->cached_reg_addr, &val);
+
+ switch (ret) {
+diff --git a/drivers/iio/industrialio-gts-helper.c b/drivers/iio/industrialio-gts-helper.c
+index d70ebe3bf77429..d14f3507f34ea8 100644
+--- a/drivers/iio/industrialio-gts-helper.c
++++ b/drivers/iio/industrialio-gts-helper.c
+@@ -950,7 +950,15 @@ int iio_gts_find_gain_time_sel_for_scale(struct iio_gts *gts, int scale_int,
+ }
+ EXPORT_SYMBOL_NS_GPL(iio_gts_find_gain_time_sel_for_scale, "IIO_GTS_HELPER");
+
+-static int iio_gts_get_total_gain(struct iio_gts *gts, int gain, int time)
++/**
++ * iio_gts_get_total_gain - Fetch total gain for given HW-gain and time
++ * @gts: Gain time scale descriptor
++ * @gain: HW-gain for which the total gain is searched for
++ * @time: Integration time for which the total gain is searched for
++ *
++ * Return: total gain on success and -EINVAL on error.
++ */
++int iio_gts_get_total_gain(struct iio_gts *gts, int gain, int time)
+ {
+ const struct iio_itime_sel_mul *itime;
+
+@@ -966,6 +974,7 @@ static int iio_gts_get_total_gain(struct iio_gts *gts, int gain, int time)
+
+ return gain * itime->mul;
+ }
++EXPORT_SYMBOL_NS_GPL(iio_gts_get_total_gain, "IIO_GTS_HELPER");
+
+ static int iio_gts_get_scale_linear(struct iio_gts *gts, int gain, int time,
+ u64 *scale)
+diff --git a/drivers/iio/light/Kconfig b/drivers/iio/light/Kconfig
+index e34e551eef3e8d..eb7f56eaeae07c 100644
+--- a/drivers/iio/light/Kconfig
++++ b/drivers/iio/light/Kconfig
+@@ -683,6 +683,7 @@ config VEML6030
+ select REGMAP_I2C
+ select IIO_BUFFER
+ select IIO_TRIGGERED_BUFFER
++ select IIO_GTS_HELPER
+ depends on I2C
+ help
+ Say Y here if you want to build a driver for the Vishay VEML6030
+diff --git a/drivers/iio/light/veml6030.c b/drivers/iio/light/veml6030.c
+index 9b71825eea9bee..750d3c2267a491 100644
+--- a/drivers/iio/light/veml6030.c
++++ b/drivers/iio/light/veml6030.c
+@@ -24,10 +24,12 @@
+ #include <linux/regmap.h>
+ #include <linux/interrupt.h>
+ #include <linux/pm_runtime.h>
++#include <linux/units.h>
+ #include <linux/regulator/consumer.h>
+ #include <linux/iio/iio.h>
+ #include <linux/iio/sysfs.h>
+ #include <linux/iio/events.h>
++#include <linux/iio/iio-gts-helper.h>
+ #include <linux/iio/trigger_consumer.h>
+ #include <linux/iio/triggered_buffer.h>
+
+@@ -59,22 +61,36 @@
+ #define VEML6035_INT_CHAN BIT(3)
+ #define VEML6035_CHAN_EN BIT(2)
+
++/* Regfields */
++#define VEML6030_GAIN_RF REG_FIELD(VEML6030_REG_ALS_CONF, 11, 12)
++#define VEML6030_IT_RF REG_FIELD(VEML6030_REG_ALS_CONF, 6, 9)
++
++#define VEML6035_GAIN_RF REG_FIELD(VEML6030_REG_ALS_CONF, 10, 12)
++
++/* Maximum scales x 10000 to work with integers */
++#define VEML6030_MAX_SCALE 21504
++#define VEML6035_MAX_SCALE 4096
++
+ enum veml6030_scan {
+ VEML6030_SCAN_ALS,
+ VEML6030_SCAN_WH,
+ VEML6030_SCAN_TIMESTAMP,
+ };
+
++struct veml6030_rf {
++ struct regmap_field *it;
++ struct regmap_field *gain;
++};
++
+ struct veml603x_chip {
+ const char *name;
+- const int(*scale_vals)[][2];
+- const int num_scale_vals;
+ const struct iio_chan_spec *channels;
+ const int num_channels;
++ const struct reg_field gain_rf;
++ const struct reg_field it_rf;
++ const int max_scale;
+ int (*hw_init)(struct iio_dev *indio_dev, struct device *dev);
+ int (*set_info)(struct iio_dev *indio_dev);
+- int (*set_als_gain)(struct iio_dev *indio_dev, int val, int val2);
+- int (*get_als_gain)(struct iio_dev *indio_dev, int *val, int *val2);
+ };
+
+ /*
+@@ -91,40 +107,56 @@ struct veml603x_chip {
+ struct veml6030_data {
+ struct i2c_client *client;
+ struct regmap *regmap;
+- int cur_resolution;
+- int cur_gain;
+- int cur_integration_time;
++ struct veml6030_rf rf;
+ const struct veml603x_chip *chip;
++ struct iio_gts gts;
++
+ };
+
+-static const int veml6030_it_times[][2] = {
+- { 0, 25000 },
+- { 0, 50000 },
+- { 0, 100000 },
+- { 0, 200000 },
+- { 0, 400000 },
+- { 0, 800000 },
++#define VEML6030_SEL_IT_25MS 0x0C
++#define VEML6030_SEL_IT_50MS 0x08
++#define VEML6030_SEL_IT_100MS 0x00
++#define VEML6030_SEL_IT_200MS 0x01
++#define VEML6030_SEL_IT_400MS 0x02
++#define VEML6030_SEL_IT_800MS 0x03
++static const struct iio_itime_sel_mul veml6030_it_sel[] = {
++ GAIN_SCALE_ITIME_US(25000, VEML6030_SEL_IT_25MS, 1),
++ GAIN_SCALE_ITIME_US(50000, VEML6030_SEL_IT_50MS, 2),
++ GAIN_SCALE_ITIME_US(100000, VEML6030_SEL_IT_100MS, 4),
++ GAIN_SCALE_ITIME_US(200000, VEML6030_SEL_IT_200MS, 8),
++ GAIN_SCALE_ITIME_US(400000, VEML6030_SEL_IT_400MS, 16),
++ GAIN_SCALE_ITIME_US(800000, VEML6030_SEL_IT_800MS, 32),
+ };
+
+-/*
+- * Scale is 1/gain. Value 0.125 is ALS gain x (1/8), 0.25 is
+- * ALS gain x (1/4), 0.5 is ALS gain x (1/2), 1.0 is ALS gain x 1,
+- * 2.0 is ALS gain x2, and 4.0 is ALS gain x 4.
++/* Gains are multiplied by 8 to work with integers. The values in the
++ * iio-gts tables don't need corrections because the maximum value of
++ * the scale refers to GAIN = x1, and the rest of the values are
++ * obtained from the resulting linear function.
+ */
+-static const int veml6030_scale_vals[][2] = {
+- { 0, 125000 },
+- { 0, 250000 },
+- { 1, 0 },
+- { 2, 0 },
++#define VEML6030_SEL_MILLI_GAIN_X125 2
++#define VEML6030_SEL_MILLI_GAIN_X250 3
++#define VEML6030_SEL_MILLI_GAIN_X1000 0
++#define VEML6030_SEL_MILLI_GAIN_X2000 1
++static const struct iio_gain_sel_pair veml6030_gain_sel[] = {
++ GAIN_SCALE_GAIN(1, VEML6030_SEL_MILLI_GAIN_X125),
++ GAIN_SCALE_GAIN(2, VEML6030_SEL_MILLI_GAIN_X250),
++ GAIN_SCALE_GAIN(8, VEML6030_SEL_MILLI_GAIN_X1000),
++ GAIN_SCALE_GAIN(16, VEML6030_SEL_MILLI_GAIN_X2000),
+ };
+
+-static const int veml6035_scale_vals[][2] = {
+- { 0, 125000 },
+- { 0, 250000 },
+- { 0, 500000 },
+- { 1, 0 },
+- { 2, 0 },
+- { 4, 0 },
++#define VEML6035_SEL_MILLI_GAIN_X125 4
++#define VEML6035_SEL_MILLI_GAIN_X250 5
++#define VEML6035_SEL_MILLI_GAIN_X500 7
++#define VEML6035_SEL_MILLI_GAIN_X1000 0
++#define VEML6035_SEL_MILLI_GAIN_X2000 1
++#define VEML6035_SEL_MILLI_GAIN_X4000 3
++static const struct iio_gain_sel_pair veml6035_gain_sel[] = {
++ GAIN_SCALE_GAIN(1, VEML6035_SEL_MILLI_GAIN_X125),
++ GAIN_SCALE_GAIN(2, VEML6035_SEL_MILLI_GAIN_X250),
++ GAIN_SCALE_GAIN(4, VEML6035_SEL_MILLI_GAIN_X500),
++ GAIN_SCALE_GAIN(8, VEML6035_SEL_MILLI_GAIN_X1000),
++ GAIN_SCALE_GAIN(16, VEML6035_SEL_MILLI_GAIN_X2000),
++ GAIN_SCALE_GAIN(32, VEML6035_SEL_MILLI_GAIN_X4000),
+ };
+
+ /*
+@@ -327,105 +359,73 @@ static const struct regmap_config veml6030_regmap_config = {
+ .val_format_endian = REGMAP_ENDIAN_LITTLE,
+ };
+
+-static int veml6030_get_intgrn_tm(struct iio_dev *indio_dev,
+- int *val, int *val2)
++static int veml6030_get_it(struct veml6030_data *data, int *val, int *val2)
+ {
+- int ret, reg;
+- struct veml6030_data *data = iio_priv(indio_dev);
++ int ret, it_idx;
+
+- ret = regmap_read(data->regmap, VEML6030_REG_ALS_CONF, ®);
+- if (ret) {
+- dev_err(&data->client->dev,
+- "can't read als conf register %d\n", ret);
++ ret = regmap_field_read(data->rf.it, &it_idx);
++ if (ret)
+ return ret;
+- }
+
+- switch ((reg >> 6) & 0xF) {
+- case 0:
+- *val2 = 100000;
+- break;
+- case 1:
+- *val2 = 200000;
+- break;
+- case 2:
+- *val2 = 400000;
+- break;
+- case 3:
+- *val2 = 800000;
+- break;
+- case 8:
+- *val2 = 50000;
+- break;
+- case 12:
+- *val2 = 25000;
+- break;
+- default:
+- return -EINVAL;
+- }
++ ret = iio_gts_find_int_time_by_sel(&data->gts, it_idx);
++ if (ret < 0)
++ return ret;
+
++ *val2 = ret;
+ *val = 0;
++
+ return IIO_VAL_INT_PLUS_MICRO;
+ }
+
+-static int veml6030_set_intgrn_tm(struct iio_dev *indio_dev,
+- int val, int val2)
++static int veml6030_set_it(struct iio_dev *indio_dev, int val, int val2)
+ {
+- int ret, new_int_time, int_idx;
+ struct veml6030_data *data = iio_priv(indio_dev);
++ int ret, gain_idx, it_idx, new_gain, prev_gain, prev_it;
++ bool in_range;
+
+- if (val)
++ if (val || !iio_gts_valid_time(&data->gts, val2))
+ return -EINVAL;
+
+- switch (val2) {
+- case 25000:
+- new_int_time = 0x300;
+- int_idx = 5;
+- break;
+- case 50000:
+- new_int_time = 0x200;
+- int_idx = 4;
+- break;
+- case 100000:
+- new_int_time = 0x00;
+- int_idx = 3;
+- break;
+- case 200000:
+- new_int_time = 0x40;
+- int_idx = 2;
+- break;
+- case 400000:
+- new_int_time = 0x80;
+- int_idx = 1;
+- break;
+- case 800000:
+- new_int_time = 0xC0;
+- int_idx = 0;
+- break;
+- default:
+- return -EINVAL;
+- }
++ ret = regmap_field_read(data->rf.it, &it_idx);
++ if (ret)
++ return ret;
+
+- ret = regmap_update_bits(data->regmap, VEML6030_REG_ALS_CONF,
+- VEML6030_ALS_IT, new_int_time);
+- if (ret) {
+- dev_err(&data->client->dev,
+- "can't update als integration time %d\n", ret);
++ ret = regmap_field_read(data->rf.gain, &gain_idx);
++ if (ret)
+ return ret;
+- }
+
+- /*
+- * Cache current integration time and update resolution. For every
+- * increase in integration time to next level, resolution is halved
+- * and vice-versa.
+- */
+- if (data->cur_integration_time < int_idx)
+- data->cur_resolution <<= int_idx - data->cur_integration_time;
+- else if (data->cur_integration_time > int_idx)
+- data->cur_resolution >>= data->cur_integration_time - int_idx;
++ prev_it = iio_gts_find_int_time_by_sel(&data->gts, it_idx);
++ if (prev_it < 0)
++ return prev_it;
++
++ if (prev_it == val2)
++ return 0;
+
+- data->cur_integration_time = int_idx;
++ prev_gain = iio_gts_find_gain_by_sel(&data->gts, gain_idx);
++ if (prev_gain < 0)
++ return prev_gain;
+
+- return ret;
++ ret = iio_gts_find_new_gain_by_gain_time_min(&data->gts, prev_gain, prev_it,
++ val2, &new_gain, &in_range);
++ if (ret)
++ return ret;
++
++ if (!in_range)
++ dev_dbg(&data->client->dev, "Optimal gain out of range\n");
++
++ ret = iio_gts_find_sel_by_int_time(&data->gts, val2);
++ if (ret < 0)
++ return ret;
++
++ ret = regmap_field_write(data->rf.it, ret);
++ if (ret)
++ return ret;
++
++ ret = iio_gts_find_sel_by_gain(&data->gts, new_gain);
++ if (ret < 0)
++ return ret;
++
++ return regmap_field_write(data->rf.gain, ret);
+ }
+
+ static int veml6030_read_persistence(struct iio_dev *indio_dev,
+@@ -434,7 +434,7 @@ static int veml6030_read_persistence(struct iio_dev *indio_dev,
+ int ret, reg, period, x, y;
+ struct veml6030_data *data = iio_priv(indio_dev);
+
+- ret = veml6030_get_intgrn_tm(indio_dev, &x, &y);
++ ret = veml6030_get_it(data, &x, &y);
+ if (ret < 0)
+ return ret;
+
+@@ -459,7 +459,7 @@ static int veml6030_write_persistence(struct iio_dev *indio_dev,
+ int ret, period, x, y;
+ struct veml6030_data *data = iio_priv(indio_dev);
+
+- ret = veml6030_get_intgrn_tm(indio_dev, &x, &y);
++ ret = veml6030_get_it(data, &x, &y);
+ if (ret < 0)
+ return ret;
+
+@@ -488,177 +488,29 @@ static int veml6030_write_persistence(struct iio_dev *indio_dev,
+ return ret;
+ }
+
+-/*
+- * Cache currently set gain & update resolution. For every
+- * increase in the gain to next level, resolution is halved
+- * and vice-versa.
+- */
+-static void veml6030_update_gain_res(struct veml6030_data *data, int gain_idx)
+-{
+- if (data->cur_gain < gain_idx)
+- data->cur_resolution <<= gain_idx - data->cur_gain;
+- else if (data->cur_gain > gain_idx)
+- data->cur_resolution >>= data->cur_gain - gain_idx;
+-
+- data->cur_gain = gain_idx;
+-}
+-
+-static int veml6030_set_als_gain(struct iio_dev *indio_dev,
+- int val, int val2)
++static int veml6030_set_scale(struct iio_dev *indio_dev, int val, int val2)
+ {
+- int ret, new_gain, gain_idx;
++ int ret, gain_sel, it_idx, it_sel;
+ struct veml6030_data *data = iio_priv(indio_dev);
+
+- if (val == 0 && val2 == 125000) {
+- new_gain = 0x1000; /* 0x02 << 11 */
+- gain_idx = 3;
+- } else if (val == 0 && val2 == 250000) {
+- new_gain = 0x1800;
+- gain_idx = 2;
+- } else if (val == 1 && val2 == 0) {
+- new_gain = 0x00;
+- gain_idx = 1;
+- } else if (val == 2 && val2 == 0) {
+- new_gain = 0x800;
+- gain_idx = 0;
+- } else {
+- return -EINVAL;
+- }
+-
+- ret = regmap_update_bits(data->regmap, VEML6030_REG_ALS_CONF,
+- VEML6030_ALS_GAIN, new_gain);
+- if (ret) {
+- dev_err(&data->client->dev,
+- "can't set als gain %d\n", ret);
++ ret = regmap_field_read(data->rf.it, &it_idx);
++ if (ret)
+ return ret;
+- }
+
+- veml6030_update_gain_res(data, gain_idx);
+-
+- return 0;
+-}
+-
+-static int veml6035_set_als_gain(struct iio_dev *indio_dev, int val, int val2)
+-{
+- int ret, new_gain, gain_idx;
+- struct veml6030_data *data = iio_priv(indio_dev);
+-
+- if (val == 0 && val2 == 125000) {
+- new_gain = VEML6035_SENS;
+- gain_idx = 5;
+- } else if (val == 0 && val2 == 250000) {
+- new_gain = VEML6035_SENS | VEML6035_GAIN;
+- gain_idx = 4;
+- } else if (val == 0 && val2 == 500000) {
+- new_gain = VEML6035_SENS | VEML6035_GAIN |
+- VEML6035_DG;
+- gain_idx = 3;
+- } else if (val == 1 && val2 == 0) {
+- new_gain = 0x0000;
+- gain_idx = 2;
+- } else if (val == 2 && val2 == 0) {
+- new_gain = VEML6035_GAIN;
+- gain_idx = 1;
+- } else if (val == 4 && val2 == 0) {
+- new_gain = VEML6035_GAIN | VEML6035_DG;
+- gain_idx = 0;
+- } else {
+- return -EINVAL;
+- }
+-
+- ret = regmap_update_bits(data->regmap, VEML6030_REG_ALS_CONF,
+- VEML6035_GAIN_M, new_gain);
+- if (ret) {
+- dev_err(&data->client->dev, "can't set als gain %d\n", ret);
++ ret = iio_gts_find_gain_time_sel_for_scale(&data->gts, val, val2,
++ &gain_sel, &it_sel);
++ if (ret)
+ return ret;
+- }
+-
+- veml6030_update_gain_res(data, gain_idx);
+
+- return 0;
+-}
+-
+-static int veml6030_get_als_gain(struct iio_dev *indio_dev,
+- int *val, int *val2)
+-{
+- int ret, reg;
+- struct veml6030_data *data = iio_priv(indio_dev);
+-
+- ret = regmap_read(data->regmap, VEML6030_REG_ALS_CONF, ®);
+- if (ret) {
+- dev_err(&data->client->dev,
+- "can't read als conf register %d\n", ret);
++ ret = regmap_field_write(data->rf.it, it_sel);
++ if (ret)
+ return ret;
+- }
+-
+- switch ((reg >> 11) & 0x03) {
+- case 0:
+- *val = 1;
+- *val2 = 0;
+- break;
+- case 1:
+- *val = 2;
+- *val2 = 0;
+- break;
+- case 2:
+- *val = 0;
+- *val2 = 125000;
+- break;
+- case 3:
+- *val = 0;
+- *val2 = 250000;
+- break;
+- default:
+- return -EINVAL;
+- }
+-
+- return IIO_VAL_INT_PLUS_MICRO;
+-}
+-
+-static int veml6035_get_als_gain(struct iio_dev *indio_dev, int *val, int *val2)
+-{
+- int ret, reg;
+- struct veml6030_data *data = iio_priv(indio_dev);
+
+- ret = regmap_read(data->regmap, VEML6030_REG_ALS_CONF, ®);
+- if (ret) {
+- dev_err(&data->client->dev,
+- "can't read als conf register %d\n", ret);
++ ret = regmap_field_write(data->rf.gain, gain_sel);
++ if (ret)
+ return ret;
+- }
+
+- switch (FIELD_GET(VEML6035_GAIN_M, reg)) {
+- case 0:
+- *val = 1;
+- *val2 = 0;
+- break;
+- case 1:
+- case 2:
+- *val = 2;
+- *val2 = 0;
+- break;
+- case 3:
+- *val = 4;
+- *val2 = 0;
+- break;
+- case 4:
+- *val = 0;
+- *val2 = 125000;
+- break;
+- case 5:
+- case 6:
+- *val = 0;
+- *val2 = 250000;
+- break;
+- case 7:
+- *val = 0;
+- *val2 = 500000;
+- break;
+- default:
+- return -EINVAL;
+- }
+-
+- return IIO_VAL_INT_PLUS_MICRO;
++ return 0;
+ }
+
+ static int veml6030_read_thresh(struct iio_dev *indio_dev,
+@@ -705,6 +557,71 @@ static int veml6030_write_thresh(struct iio_dev *indio_dev,
+ return ret;
+ }
+
++static int veml6030_get_total_gain(struct veml6030_data *data)
++{
++ int gain, it, reg, ret;
++
++ ret = regmap_field_read(data->rf.gain, ®);
++ if (ret)
++ return ret;
++
++ gain = iio_gts_find_gain_by_sel(&data->gts, reg);
++ if (gain < 0)
++ return gain;
++
++ ret = regmap_field_read(data->rf.it, ®);
++ if (ret)
++ return ret;
++
++ it = iio_gts_find_int_time_by_sel(&data->gts, reg);
++ if (it < 0)
++ return it;
++
++ return iio_gts_get_total_gain(&data->gts, gain, it);
++}
++
++static int veml6030_get_scale(struct veml6030_data *data, int *val, int *val2)
++{
++ int gain, it, reg, ret;
++
++ ret = regmap_field_read(data->rf.gain, ®);
++ if (ret)
++ return ret;
++
++ gain = iio_gts_find_gain_by_sel(&data->gts, reg);
++ if (gain < 0)
++ return gain;
++
++ ret = regmap_field_read(data->rf.it, ®);
++ if (ret)
++ return ret;
++
++ it = iio_gts_find_int_time_by_sel(&data->gts, reg);
++ if (it < 0)
++ return it;
++
++ ret = iio_gts_get_scale(&data->gts, gain, it, val, val2);
++ if (ret)
++ return ret;
++
++ return IIO_VAL_INT_PLUS_NANO;
++}
++
++static int veml6030_process_als(struct veml6030_data *data, int raw,
++ int *val, int *val2)
++{
++ int total_gain;
++
++ total_gain = veml6030_get_total_gain(data);
++ if (total_gain < 0)
++ return total_gain;
++
++ *val = raw * data->chip->max_scale / total_gain / 10000;
++ *val2 = raw * data->chip->max_scale / total_gain % 10000 * 100;
++
++ return IIO_VAL_INT_PLUS_MICRO;
++}
++
+ /*
+ * Provide both raw as well as light reading in lux.
+ * light (in lux) = resolution * raw reading
+@@ -728,11 +645,9 @@ static int veml6030_read_raw(struct iio_dev *indio_dev,
+ dev_err(dev, "can't read als data %d\n", ret);
+ return ret;
+ }
+- if (mask == IIO_CHAN_INFO_PROCESSED) {
+- *val = (reg * data->cur_resolution) / 10000;
+- *val2 = (reg * data->cur_resolution) % 10000 * 100;
+- return IIO_VAL_INT_PLUS_MICRO;
+- }
++ if (mask == IIO_CHAN_INFO_PROCESSED)
++ return veml6030_process_als(data, reg, val, val2);
++
+ *val = reg;
+ return IIO_VAL_INT;
+ case IIO_INTENSITY:
+@@ -747,9 +662,9 @@ static int veml6030_read_raw(struct iio_dev *indio_dev,
+ return -EINVAL;
+ }
+ case IIO_CHAN_INFO_INT_TIME:
+- return veml6030_get_intgrn_tm(indio_dev, val, val2);
++ return veml6030_get_it(data, val, val2);
+ case IIO_CHAN_INFO_SCALE:
+- return data->chip->get_als_gain(indio_dev, val, val2);
++ return veml6030_get_scale(data, val, val2);
+ default:
+ return -EINVAL;
+ }
+@@ -764,15 +679,9 @@ static int veml6030_read_avail(struct iio_dev *indio_dev,
+
+ switch (mask) {
+ case IIO_CHAN_INFO_INT_TIME:
+- *vals = (int *)&veml6030_it_times;
+- *length = 2 * ARRAY_SIZE(veml6030_it_times);
+- *type = IIO_VAL_INT_PLUS_MICRO;
+- return IIO_AVAIL_LIST;
++ return iio_gts_avail_times(&data->gts, vals, type, length);
+ case IIO_CHAN_INFO_SCALE:
+- *vals = (int *)*data->chip->scale_vals;
+- *length = 2 * data->chip->num_scale_vals;
+- *type = IIO_VAL_INT_PLUS_MICRO;
+- return IIO_AVAIL_LIST;
++ return iio_gts_all_avail_scales(&data->gts, vals, type, length);
+ }
+
+ return -EINVAL;
+@@ -782,13 +691,25 @@ static int veml6030_write_raw(struct iio_dev *indio_dev,
+ struct iio_chan_spec const *chan,
+ int val, int val2, long mask)
+ {
+- struct veml6030_data *data = iio_priv(indio_dev);
+-
+ switch (mask) {
+ case IIO_CHAN_INFO_INT_TIME:
+- return veml6030_set_intgrn_tm(indio_dev, val, val2);
++ return veml6030_set_it(indio_dev, val, val2);
++ case IIO_CHAN_INFO_SCALE:
++ return veml6030_set_scale(indio_dev, val, val2);
++ default:
++ return -EINVAL;
++ }
++}
++
++static int veml6030_write_raw_get_fmt(struct iio_dev *indio_dev,
++ struct iio_chan_spec const *chan,
++ long mask)
++{
++ switch (mask) {
+ case IIO_CHAN_INFO_SCALE:
+- return data->chip->set_als_gain(indio_dev, val, val2);
++ return IIO_VAL_INT_PLUS_NANO;
++ case IIO_CHAN_INFO_INT_TIME:
++ return IIO_VAL_INT_PLUS_MICRO;
+ default:
+ return -EINVAL;
+ }
+@@ -886,6 +807,7 @@ static const struct iio_info veml6030_info = {
+ .read_raw = veml6030_read_raw,
+ .read_avail = veml6030_read_avail,
+ .write_raw = veml6030_write_raw,
++ .write_raw_get_fmt = veml6030_write_raw_get_fmt,
+ .read_event_value = veml6030_read_event_val,
+ .write_event_value = veml6030_write_event_val,
+ .read_event_config = veml6030_read_interrupt_config,
+@@ -897,6 +819,7 @@ static const struct iio_info veml6030_info_no_irq = {
+ .read_raw = veml6030_read_raw,
+ .read_avail = veml6030_read_avail,
+ .write_raw = veml6030_write_raw,
++ .write_raw_get_fmt = veml6030_write_raw_get_fmt,
+ };
+
+ static irqreturn_t veml6030_event_handler(int irq, void *private)
+@@ -990,6 +913,27 @@ static int veml7700_set_info(struct iio_dev *indio_dev)
+ return 0;
+ }
+
++static int veml6030_regfield_init(struct iio_dev *indio_dev)
++{
++ struct veml6030_data *data = iio_priv(indio_dev);
++ struct regmap *regmap = data->regmap;
++ struct device *dev = &data->client->dev;
++ struct regmap_field *rm_field;
++ struct veml6030_rf *rf = &data->rf;
++
++ rm_field = devm_regmap_field_alloc(dev, regmap, data->chip->it_rf);
++ if (IS_ERR(rm_field))
++ return PTR_ERR(rm_field);
++ rf->it = rm_field;
++
++ rm_field = devm_regmap_field_alloc(dev, regmap, data->chip->gain_rf);
++ if (IS_ERR(rm_field))
++ return PTR_ERR(rm_field);
++ rf->gain = rm_field;
++
++ return 0;
++}
++
+ /*
+ * Set ALS gain to 1/8, integration time to 100 ms, PSM to mode 2,
+ * persistence to 1 x integration time and the threshold
+@@ -1001,6 +945,13 @@ static int veml6030_hw_init(struct iio_dev *indio_dev, struct device *dev)
+ int ret, val;
+ struct veml6030_data *data = iio_priv(indio_dev);
+
++ ret = devm_iio_init_iio_gts(dev, 2, 150400000,
++ veml6030_gain_sel, ARRAY_SIZE(veml6030_gain_sel),
++ veml6030_it_sel, ARRAY_SIZE(veml6030_it_sel),
++ &data->gts);
++ if (ret)
++ return dev_err_probe(dev, ret, "failed to init iio gts\n");
++
+ ret = veml6030_als_shut_down(data);
+ if (ret)
+ return dev_err_probe(dev, ret, "can't shutdown als\n");
+@@ -1036,11 +987,6 @@ static int veml6030_hw_init(struct iio_dev *indio_dev, struct device *dev)
+ return dev_err_probe(dev, ret,
+ "can't clear als interrupt status\n");
+
+- /* Cache currently active measurement parameters */
+- data->cur_gain = 3;
+- data->cur_resolution = 5376;
+- data->cur_integration_time = 3;
+-
+ return ret;
+ }
+
+@@ -1056,6 +1002,13 @@ static int veml6035_hw_init(struct iio_dev *indio_dev, struct device *dev)
+ int ret, val;
+ struct veml6030_data *data = iio_priv(indio_dev);
+
++ ret = devm_iio_init_iio_gts(dev, 0, 409600000,
++ veml6035_gain_sel, ARRAY_SIZE(veml6035_gain_sel),
++ veml6030_it_sel, ARRAY_SIZE(veml6030_it_sel),
++ &data->gts);
++ if (ret)
++ return dev_err_probe(dev, ret, "failed to init iio gts\n");
++
+ ret = veml6030_als_shut_down(data);
+ if (ret)
+ return dev_err_probe(dev, ret, "can't shutdown als\n");
+@@ -1092,11 +1045,6 @@ static int veml6035_hw_init(struct iio_dev *indio_dev, struct device *dev)
+ return dev_err_probe(dev, ret,
+ "can't clear als interrupt status\n");
+
+- /* Cache currently active measurement parameters */
+- data->cur_gain = 5;
+- data->cur_resolution = 1024;
+- data->cur_integration_time = 3;
+-
+ return 0;
+ }
+
+@@ -1143,6 +1091,11 @@ static int veml6030_probe(struct i2c_client *client)
+ if (ret < 0)
+ return ret;
+
++ ret = veml6030_regfield_init(indio_dev);
++ if (ret)
++ return dev_err_probe(&client->dev, ret,
++ "failed to init regfields\n");
++
+ ret = data->chip->hw_init(indio_dev, &client->dev);
+ if (ret < 0)
+ return ret;
+@@ -1187,38 +1140,35 @@ static DEFINE_RUNTIME_DEV_PM_OPS(veml6030_pm_ops, veml6030_runtime_suspend,
+
+ static const struct veml603x_chip veml6030_chip = {
+ .name = "veml6030",
+- .scale_vals = &veml6030_scale_vals,
+- .num_scale_vals = ARRAY_SIZE(veml6030_scale_vals),
+ .channels = veml6030_channels,
+ .num_channels = ARRAY_SIZE(veml6030_channels),
++ .gain_rf = VEML6030_GAIN_RF,
++ .it_rf = VEML6030_IT_RF,
++ .max_scale = VEML6030_MAX_SCALE,
+ .hw_init = veml6030_hw_init,
+ .set_info = veml6030_set_info,
+- .set_als_gain = veml6030_set_als_gain,
+- .get_als_gain = veml6030_get_als_gain,
+ };
+
+ static const struct veml603x_chip veml6035_chip = {
+ .name = "veml6035",
+- .scale_vals = &veml6035_scale_vals,
+- .num_scale_vals = ARRAY_SIZE(veml6035_scale_vals),
+ .channels = veml6030_channels,
+ .num_channels = ARRAY_SIZE(veml6030_channels),
++ .gain_rf = VEML6035_GAIN_RF,
++ .it_rf = VEML6030_IT_RF,
++ .max_scale = VEML6035_MAX_SCALE,
+ .hw_init = veml6035_hw_init,
+ .set_info = veml6030_set_info,
+- .set_als_gain = veml6035_set_als_gain,
+- .get_als_gain = veml6035_get_als_gain,
+ };
+
+ static const struct veml603x_chip veml7700_chip = {
+ .name = "veml7700",
+- .scale_vals = &veml6030_scale_vals,
+- .num_scale_vals = ARRAY_SIZE(veml6030_scale_vals),
+ .channels = veml7700_channels,
+ .num_channels = ARRAY_SIZE(veml7700_channels),
++ .gain_rf = VEML6030_GAIN_RF,
++ .it_rf = VEML6030_IT_RF,
++ .max_scale = VEML6030_MAX_SCALE,
+ .hw_init = veml6030_hw_init,
+ .set_info = veml7700_set_info,
+- .set_als_gain = veml6030_set_als_gain,
+- .get_als_gain = veml6030_get_als_gain,
+ };
+
+ static const struct of_device_id veml6030_of_match[] = {
+@@ -1260,3 +1210,4 @@ module_i2c_driver(veml6030_driver);
+ MODULE_AUTHOR("Rishi Gupta <gupt21@gmail.com>");
+ MODULE_DESCRIPTION("VEML6030 Ambient Light Sensor");
+ MODULE_LICENSE("GPL v2");
++MODULE_IMPORT_NS("IIO_GTS_HELPER");
+diff --git a/drivers/iio/light/veml6075.c b/drivers/iio/light/veml6075.c
+index 05d4c0e9015d6e..859891e8f11521 100644
+--- a/drivers/iio/light/veml6075.c
++++ b/drivers/iio/light/veml6075.c
+@@ -195,13 +195,17 @@ static int veml6075_read_uv_direct(struct veml6075_data *data, int chan,
+
+ static int veml6075_read_int_time_index(struct veml6075_data *data)
+ {
+- int ret, conf;
++ int ret, conf, int_index;
+
+ ret = regmap_read(data->regmap, VEML6075_CMD_CONF, &conf);
+ if (ret < 0)
+ return ret;
+
+- return FIELD_GET(VEML6075_CONF_IT, conf);
++ int_index = FIELD_GET(VEML6075_CONF_IT, conf);
++ if (int_index >= ARRAY_SIZE(veml6075_it_ms))
++ return -EINVAL;
++
++ return int_index;
+ }
+
+ static int veml6075_read_int_time_ms(struct veml6075_data *data, int *val)
+diff --git a/drivers/infiniband/core/device.c b/drivers/infiniband/core/device.c
+index 0ded91f056f39a..ee75b99f84bcc2 100644
+--- a/drivers/infiniband/core/device.c
++++ b/drivers/infiniband/core/device.c
+@@ -528,6 +528,8 @@ static struct class ib_class = {
+ static void rdma_init_coredev(struct ib_core_device *coredev,
+ struct ib_device *dev, struct net *net)
+ {
++ bool is_full_dev = &dev->coredev == coredev;
++
+ /* This BUILD_BUG_ON is intended to catch layout change
+ * of union of ib_core_device and device.
+ * dev must be the first element as ib_core and providers
+@@ -539,6 +541,13 @@ static void rdma_init_coredev(struct ib_core_device *coredev,
+
+ coredev->dev.class = &ib_class;
+ coredev->dev.groups = dev->groups;
++
++ /*
++ * Don't expose hw counters outside of the init namespace.
++ */
++ if (!is_full_dev && dev->hw_stats_attr_index)
++ coredev->dev.groups[dev->hw_stats_attr_index] = NULL;
++
+ device_initialize(&coredev->dev);
+ coredev->owner = dev;
+ INIT_LIST_HEAD(&coredev->port_list);
+@@ -1341,9 +1350,11 @@ static void ib_device_notify_register(struct ib_device *device)
+ u32 port;
+ int ret;
+
++ down_read(&devices_rwsem);
++
+ ret = rdma_nl_notify_event(device, 0, RDMA_REGISTER_EVENT);
+ if (ret)
+- return;
++ goto out;
+
+ rdma_for_each_port(device, port) {
+ netdev = ib_device_get_netdev(device, port);
+@@ -1354,8 +1365,11 @@ static void ib_device_notify_register(struct ib_device *device)
+ RDMA_NETDEV_ATTACH_EVENT);
+ dev_put(netdev);
+ if (ret)
+- return;
++ goto out;
+ }
++
++out:
++ up_read(&devices_rwsem);
+ }
+
+ /**
+diff --git a/drivers/infiniband/core/mad.c b/drivers/infiniband/core/mad.c
+index 1fd54d5c4dd8b7..73f3a0b9a54b5f 100644
+--- a/drivers/infiniband/core/mad.c
++++ b/drivers/infiniband/core/mad.c
+@@ -2671,11 +2671,11 @@ static int ib_mad_post_receive_mads(struct ib_mad_qp_info *qp_info,
+ struct ib_mad_private *mad)
+ {
+ unsigned long flags;
+- int post, ret;
+ struct ib_mad_private *mad_priv;
+ struct ib_sge sg_list;
+ struct ib_recv_wr recv_wr;
+ struct ib_mad_queue *recv_queue = &qp_info->recv_queue;
++ int ret = 0;
+
+ /* Initialize common scatter list fields */
+ sg_list.lkey = qp_info->port_priv->pd->local_dma_lkey;
+@@ -2685,7 +2685,7 @@ static int ib_mad_post_receive_mads(struct ib_mad_qp_info *qp_info,
+ recv_wr.sg_list = &sg_list;
+ recv_wr.num_sge = 1;
+
+- do {
++ while (true) {
+ /* Allocate and map receive buffer */
+ if (mad) {
+ mad_priv = mad;
+@@ -2693,10 +2693,8 @@ static int ib_mad_post_receive_mads(struct ib_mad_qp_info *qp_info,
+ } else {
+ mad_priv = alloc_mad_private(port_mad_size(qp_info->port_priv),
+ GFP_ATOMIC);
+- if (!mad_priv) {
+- ret = -ENOMEM;
+- break;
+- }
++ if (!mad_priv)
++ return -ENOMEM;
+ }
+ sg_list.length = mad_priv_dma_size(mad_priv);
+ sg_list.addr = ib_dma_map_single(qp_info->port_priv->device,
+@@ -2705,37 +2703,41 @@ static int ib_mad_post_receive_mads(struct ib_mad_qp_info *qp_info,
+ DMA_FROM_DEVICE);
+ if (unlikely(ib_dma_mapping_error(qp_info->port_priv->device,
+ sg_list.addr))) {
+- kfree(mad_priv);
+ ret = -ENOMEM;
+- break;
++ goto free_mad_priv;
+ }
+ mad_priv->header.mapping = sg_list.addr;
+ mad_priv->header.mad_list.mad_queue = recv_queue;
+ mad_priv->header.mad_list.cqe.done = ib_mad_recv_done;
+ recv_wr.wr_cqe = &mad_priv->header.mad_list.cqe;
+-
+- /* Post receive WR */
+ spin_lock_irqsave(&recv_queue->lock, flags);
+- post = (++recv_queue->count < recv_queue->max_active);
+- list_add_tail(&mad_priv->header.mad_list.list, &recv_queue->list);
++ if (recv_queue->count >= recv_queue->max_active) {
++ /* Fully populated the receive queue */
++ spin_unlock_irqrestore(&recv_queue->lock, flags);
++ break;
++ }
++ recv_queue->count++;
++ list_add_tail(&mad_priv->header.mad_list.list,
++ &recv_queue->list);
+ spin_unlock_irqrestore(&recv_queue->lock, flags);
++
+ ret = ib_post_recv(qp_info->qp, &recv_wr, NULL);
+ if (ret) {
+ spin_lock_irqsave(&recv_queue->lock, flags);
+ list_del(&mad_priv->header.mad_list.list);
+ recv_queue->count--;
+ spin_unlock_irqrestore(&recv_queue->lock, flags);
+- ib_dma_unmap_single(qp_info->port_priv->device,
+- mad_priv->header.mapping,
+- mad_priv_dma_size(mad_priv),
+- DMA_FROM_DEVICE);
+- kfree(mad_priv);
+ dev_err(&qp_info->port_priv->device->dev,
+ "ib_post_recv failed: %d\n", ret);
+ break;
+ }
+- } while (post);
++ }
+
++ ib_dma_unmap_single(qp_info->port_priv->device,
++ mad_priv->header.mapping,
++ mad_priv_dma_size(mad_priv), DMA_FROM_DEVICE);
++free_mad_priv:
++ kfree(mad_priv);
+ return ret;
+ }
+
+diff --git a/drivers/infiniband/core/sysfs.c b/drivers/infiniband/core/sysfs.c
+index 9f97bef0214975..210092b9bf17d2 100644
+--- a/drivers/infiniband/core/sysfs.c
++++ b/drivers/infiniband/core/sysfs.c
+@@ -988,6 +988,7 @@ int ib_setup_device_attrs(struct ib_device *ibdev)
+ for (i = 0; i != ARRAY_SIZE(ibdev->groups); i++)
+ if (!ibdev->groups[i]) {
+ ibdev->groups[i] = &data->group;
++ ibdev->hw_stats_attr_index = i;
+ return 0;
+ }
+ WARN(true, "struct ib_device->groups is too small");
+diff --git a/drivers/infiniband/hw/erdma/erdma_cm.c b/drivers/infiniband/hw/erdma/erdma_cm.c
+index 1b23c698ec25c3..e0acc185e71930 100644
+--- a/drivers/infiniband/hw/erdma/erdma_cm.c
++++ b/drivers/infiniband/hw/erdma/erdma_cm.c
+@@ -709,7 +709,6 @@ static void erdma_accept_newconn(struct erdma_cep *cep)
+ erdma_cancel_mpatimer(new_cep);
+
+ erdma_cep_put(new_cep);
+- new_cep->sock = NULL;
+ }
+
+ if (new_s) {
+diff --git a/drivers/infiniband/hw/mana/main.c b/drivers/infiniband/hw/mana/main.c
+index 457cea6d990958..f6bf289041bfe3 100644
+--- a/drivers/infiniband/hw/mana/main.c
++++ b/drivers/infiniband/hw/mana/main.c
+@@ -358,7 +358,7 @@ static int mana_ib_gd_create_dma_region(struct mana_ib_dev *dev, struct ib_umem
+ unsigned int tail = 0;
+ u64 *page_addr_list;
+ void *request_buf;
+- int err;
++ int err = 0;
+
+ gc = mdev_to_gc(dev);
+ hwc = gc->hwc.driver_data;
+diff --git a/drivers/infiniband/hw/mlx5/cq.c b/drivers/infiniband/hw/mlx5/cq.c
+index 4c54dc57806901..1aa5311b03e9f5 100644
+--- a/drivers/infiniband/hw/mlx5/cq.c
++++ b/drivers/infiniband/hw/mlx5/cq.c
+@@ -490,7 +490,7 @@ static int mlx5_poll_one(struct mlx5_ib_cq *cq,
+ }
+
+ qpn = ntohl(cqe64->sop_drop_qpn) & 0xffffff;
+- if (!*cur_qp || (qpn != (*cur_qp)->ibqp.qp_num)) {
++ if (!*cur_qp || (qpn != (*cur_qp)->trans_qp.base.mqp.qpn)) {
+ /* We do not have to take the QP table lock here,
+ * because CQs will be locked while QPs are removed
+ * from the table.
+diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
+index 753faa9ad06a88..068eac3bdb50ba 100644
+--- a/drivers/infiniband/hw/mlx5/mr.c
++++ b/drivers/infiniband/hw/mlx5/mr.c
+@@ -56,7 +56,7 @@ static void
+ create_mkey_callback(int status, struct mlx5_async_work *context);
+ static struct mlx5_ib_mr *reg_create(struct ib_pd *pd, struct ib_umem *umem,
+ u64 iova, int access_flags,
+- unsigned int page_size, bool populate,
++ unsigned long page_size, bool populate,
+ int access_mode);
+ static int __mlx5_ib_dereg_mr(struct ib_mr *ibmr);
+
+@@ -919,6 +919,25 @@ mlx5r_cache_create_ent_locked(struct mlx5_ib_dev *dev,
+ return ERR_PTR(ret);
+ }
+
++static void mlx5r_destroy_cache_entries(struct mlx5_ib_dev *dev)
++{
++ struct rb_root *root = &dev->cache.rb_root;
++ struct mlx5_cache_ent *ent;
++ struct rb_node *node;
++
++ mutex_lock(&dev->cache.rb_lock);
++ node = rb_first(root);
++ while (node) {
++ ent = rb_entry(node, struct mlx5_cache_ent, node);
++ node = rb_next(node);
++ clean_keys(dev, ent);
++ rb_erase(&ent->node, root);
++ mlx5r_mkeys_uninit(ent);
++ kfree(ent);
++ }
++ mutex_unlock(&dev->cache.rb_lock);
++}
++
+ int mlx5_mkey_cache_init(struct mlx5_ib_dev *dev)
+ {
+ struct mlx5_mkey_cache *cache = &dev->cache;
+@@ -970,6 +989,8 @@ int mlx5_mkey_cache_init(struct mlx5_ib_dev *dev)
+ err:
+ mutex_unlock(&cache->rb_lock);
+ mlx5_mkey_cache_debugfs_cleanup(dev);
++ mlx5r_destroy_cache_entries(dev);
++ destroy_workqueue(cache->wq);
+ mlx5_ib_warn(dev, "failed to create mkey cache entry\n");
+ return ret;
+ }
+@@ -1003,17 +1024,7 @@ void mlx5_mkey_cache_cleanup(struct mlx5_ib_dev *dev)
+ mlx5_cmd_cleanup_async_ctx(&dev->async_ctx);
+
+ /* At this point all entries are disabled and have no concurrent work. */
+- mutex_lock(&dev->cache.rb_lock);
+- node = rb_first(root);
+- while (node) {
+- ent = rb_entry(node, struct mlx5_cache_ent, node);
+- node = rb_next(node);
+- clean_keys(dev, ent);
+- rb_erase(&ent->node, root);
+- mlx5r_mkeys_uninit(ent);
+- kfree(ent);
+- }
+- mutex_unlock(&dev->cache.rb_lock);
++ mlx5r_destroy_cache_entries(dev);
+
+ destroy_workqueue(dev->cache.wq);
+ del_timer_sync(&dev->delay_timer);
+@@ -1115,7 +1126,7 @@ static struct mlx5_ib_mr *alloc_cacheable_mr(struct ib_pd *pd,
+ struct mlx5r_cache_rb_key rb_key = {};
+ struct mlx5_cache_ent *ent;
+ struct mlx5_ib_mr *mr;
+- unsigned int page_size;
++ unsigned long page_size;
+
+ if (umem->is_dmabuf)
+ page_size = mlx5_umem_dmabuf_default_pgsz(umem, iova);
+@@ -1219,7 +1230,7 @@ reg_create_crossing_vhca_mr(struct ib_pd *pd, u64 iova, u64 length, int access_f
+ */
+ static struct mlx5_ib_mr *reg_create(struct ib_pd *pd, struct ib_umem *umem,
+ u64 iova, int access_flags,
+- unsigned int page_size, bool populate,
++ unsigned long page_size, bool populate,
+ int access_mode)
+ {
+ struct mlx5_ib_dev *dev = to_mdev(pd->device);
+@@ -1425,7 +1436,7 @@ static struct ib_mr *create_real_mr(struct ib_pd *pd, struct ib_umem *umem,
+ mr = alloc_cacheable_mr(pd, umem, iova, access_flags,
+ MLX5_MKC_ACCESS_MODE_MTT);
+ } else {
+- unsigned int page_size =
++ unsigned long page_size =
+ mlx5_umem_mkc_find_best_pgsz(dev, umem, iova);
+
+ mutex_lock(&dev->slow_path_mutex);
+diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/odp.c
+index e77c9280c07e43..86d8fa63bf691a 100644
+--- a/drivers/infiniband/hw/mlx5/odp.c
++++ b/drivers/infiniband/hw/mlx5/odp.c
+@@ -309,9 +309,6 @@ static bool mlx5_ib_invalidate_range(struct mmu_interval_notifier *mni,
+ blk_start_idx = idx;
+ in_block = 1;
+ }
+-
+- /* Count page invalidations */
+- invalidations += idx - blk_start_idx + 1;
+ } else {
+ u64 umr_offset = idx & umr_block_mask;
+
+@@ -321,14 +318,19 @@ static bool mlx5_ib_invalidate_range(struct mmu_interval_notifier *mni,
+ MLX5_IB_UPD_XLT_ZAP |
+ MLX5_IB_UPD_XLT_ATOMIC);
+ in_block = 0;
++ /* Count page invalidations */
++ invalidations += idx - blk_start_idx + 1;
+ }
+ }
+ }
+- if (in_block)
++ if (in_block) {
+ mlx5r_umr_update_xlt(mr, blk_start_idx,
+ idx - blk_start_idx + 1, 0,
+ MLX5_IB_UPD_XLT_ZAP |
+ MLX5_IB_UPD_XLT_ATOMIC);
++ /* Count page invalidations */
++ invalidations += idx - blk_start_idx + 1;
++ }
+
+ mlx5_update_odp_stats_with_handled(mr, invalidations, invalidations);
+
+diff --git a/drivers/iommu/amd/amd_iommu.h b/drivers/iommu/amd/amd_iommu.h
+index 68debf5ee2d751..e3bf27da1339e3 100644
+--- a/drivers/iommu/amd/amd_iommu.h
++++ b/drivers/iommu/amd/amd_iommu.h
+@@ -176,12 +176,11 @@ void amd_iommu_apply_ivrs_quirks(void);
+ #else
+ static inline void amd_iommu_apply_ivrs_quirks(void) { }
+ #endif
++struct dev_table_entry *amd_iommu_get_ivhd_dte_flags(u16 segid, u16 devid);
+
+ void amd_iommu_domain_set_pgtable(struct protection_domain *domain,
+ u64 *root, int mode);
+ struct dev_table_entry *get_dev_table(struct amd_iommu *iommu);
+-
+-#endif
+-
+-struct dev_table_entry *amd_iommu_get_ivhd_dte_flags(u16 segid, u16 devid);
+ struct iommu_dev_data *search_dev_data(struct amd_iommu *iommu, u16 devid);
++
++#endif /* AMD_IOMMU_H */
+diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
+index bf1f0c81434830..25d31f8c129a68 100644
+--- a/drivers/iommu/intel/iommu.c
++++ b/drivers/iommu/intel/iommu.c
+@@ -2871,16 +2871,19 @@ void intel_iommu_shutdown(void)
+ if (no_iommu || dmar_disabled)
+ return;
+
+- down_write(&dmar_global_lock);
++ /*
++ * All other CPUs were brought down, hotplug interrupts were disabled,
++ * no lock and RCU checking needed anymore
++ */
++ list_for_each_entry(drhd, &dmar_drhd_units, list) {
++ iommu = drhd->iommu;
+
+- /* Disable PMRs explicitly here. */
+- for_each_iommu(iommu, drhd)
++ /* Disable PMRs explicitly here. */
+ iommu_disable_protect_mem_regions(iommu);
+
+- /* Make sure the IOMMUs are switched off */
+- intel_disable_iommus();
+-
+- up_write(&dmar_global_lock);
++ /* Make sure the IOMMUs are switched off */
++ iommu_disable_translation(iommu);
++ }
+ }
+
+ static struct intel_iommu *dev_to_intel_iommu(struct device *dev)
+diff --git a/drivers/iommu/io-pgtable-dart.c b/drivers/iommu/io-pgtable-dart.c
+index c004640640ee50..06aca9ab52f9a8 100644
+--- a/drivers/iommu/io-pgtable-dart.c
++++ b/drivers/iommu/io-pgtable-dart.c
+@@ -135,7 +135,6 @@ static int dart_init_pte(struct dart_io_pgtable *data,
+ pte |= FIELD_PREP(APPLE_DART_PTE_SUBPAGE_START, 0);
+ pte |= FIELD_PREP(APPLE_DART_PTE_SUBPAGE_END, 0xfff);
+
+- pte |= APPLE_DART1_PTE_PROT_SP_DIS;
+ pte |= APPLE_DART_PTE_VALID;
+
+ for (i = 0; i < num_entries; i++)
+@@ -211,6 +210,7 @@ static dart_iopte dart_prot_to_pte(struct dart_io_pgtable *data,
+ dart_iopte pte = 0;
+
+ if (data->iop.fmt == APPLE_DART) {
++ pte |= APPLE_DART1_PTE_PROT_SP_DIS;
+ if (!(prot & IOMMU_WRITE))
+ pte |= APPLE_DART1_PTE_PROT_NO_WRITE;
+ if (!(prot & IOMMU_READ))
+diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
+index 60aed01e54f27c..e3df1f06afbeb3 100644
+--- a/drivers/iommu/iommu.c
++++ b/drivers/iommu/iommu.c
+@@ -3097,6 +3097,11 @@ int iommu_device_use_default_domain(struct device *dev)
+ return 0;
+
+ mutex_lock(&group->mutex);
++ /* We may race against bus_iommu_probe() finalising groups here */
++ if (!group->default_domain) {
++ ret = -EPROBE_DEFER;
++ goto unlock_out;
++ }
+ if (group->owner_cnt) {
+ if (group->domain != group->default_domain || group->owner ||
+ !xa_empty(&group->pasid_array)) {
+diff --git a/drivers/leds/led-core.c b/drivers/leds/led-core.c
+index f6c46d2e5276b5..e3d8ddcff5670e 100644
+--- a/drivers/leds/led-core.c
++++ b/drivers/leds/led-core.c
+@@ -159,8 +159,19 @@ static void set_brightness_delayed(struct work_struct *ws)
+ * before this work item runs once. To make sure this works properly
+ * handle LED_SET_BRIGHTNESS_OFF first.
+ */
+- if (test_and_clear_bit(LED_SET_BRIGHTNESS_OFF, &led_cdev->work_flags))
++ if (test_and_clear_bit(LED_SET_BRIGHTNESS_OFF, &led_cdev->work_flags)) {
+ set_brightness_delayed_set_brightness(led_cdev, LED_OFF);
++ /*
++ * The consecutives led_set_brightness(LED_OFF),
++ * led_set_brightness(LED_FULL) could have been executed out of
++ * order (LED_FULL first), if the work_flags has been set
++ * between LED_SET_BRIGHTNESS_OFF and LED_SET_BRIGHTNESS of this
++ * work. To avoid ending with the LED turned off, turn the LED
++ * on again.
++ */
++ if (led_cdev->delayed_set_value != LED_OFF)
++ set_bit(LED_SET_BRIGHTNESS, &led_cdev->work_flags);
++ }
+
+ if (test_and_clear_bit(LED_SET_BRIGHTNESS, &led_cdev->work_flags))
+ set_brightness_delayed_set_brightness(led_cdev, led_cdev->delayed_set_value);
+@@ -331,10 +342,13 @@ void led_set_brightness_nopm(struct led_classdev *led_cdev, unsigned int value)
+ * change is done immediately afterwards (before the work runs),
+ * it uses a separate work_flag.
+ */
+- if (value) {
+- led_cdev->delayed_set_value = value;
++ led_cdev->delayed_set_value = value;
++ /* Ensure delayed_set_value is seen before work_flags modification */
++ smp_mb__before_atomic();
++
++ if (value)
+ set_bit(LED_SET_BRIGHTNESS, &led_cdev->work_flags);
+- } else {
++ else {
+ clear_bit(LED_SET_BRIGHTNESS, &led_cdev->work_flags);
+ clear_bit(LED_SET_BLINK, &led_cdev->work_flags);
+ set_bit(LED_SET_BRIGHTNESS_OFF, &led_cdev->work_flags);
+diff --git a/drivers/leds/leds-st1202.c b/drivers/leds/leds-st1202.c
+index e894b3f9a0f46b..4cebc0203c227a 100644
+--- a/drivers/leds/leds-st1202.c
++++ b/drivers/leds/leds-st1202.c
+@@ -345,7 +345,9 @@ static int st1202_probe(struct i2c_client *client)
+ if (!chip)
+ return -ENOMEM;
+
+- devm_mutex_init(&client->dev, &chip->lock);
++ ret = devm_mutex_init(&client->dev, &chip->lock);
++ if (ret < 0)
++ return ret;
+ chip->client = client;
+
+ ret = st1202_dt_init(chip);
+diff --git a/drivers/md/md-bitmap.c b/drivers/md/md-bitmap.c
+index 23c09d22fcdbc1..9ae6cc8e30cbdc 100644
+--- a/drivers/md/md-bitmap.c
++++ b/drivers/md/md-bitmap.c
+@@ -426,8 +426,8 @@ static int __write_sb_page(struct md_rdev *rdev, struct bitmap *bitmap,
+ struct block_device *bdev;
+ struct mddev *mddev = bitmap->mddev;
+ struct bitmap_storage *store = &bitmap->storage;
+- unsigned int bitmap_limit = (bitmap->storage.file_pages - pg_index) <<
+- PAGE_SHIFT;
++ unsigned long num_pages = bitmap->storage.file_pages;
++ unsigned int bitmap_limit = (num_pages - pg_index % num_pages) << PAGE_SHIFT;
+ loff_t sboff, offset = mddev->bitmap_info.offset;
+ sector_t ps = pg_index * PAGE_SIZE / SECTOR_SIZE;
+ unsigned int size = PAGE_SIZE;
+@@ -436,7 +436,7 @@ static int __write_sb_page(struct md_rdev *rdev, struct bitmap *bitmap,
+
+ bdev = (rdev->meta_bdev) ? rdev->meta_bdev : rdev->bdev;
+ /* we compare length (page numbers), not page offset. */
+- if ((pg_index - store->sb_index) == store->file_pages - 1) {
++ if ((pg_index - store->sb_index) == num_pages - 1) {
+ unsigned int last_page_size = store->bytes & (PAGE_SIZE - 1);
+
+ if (last_page_size == 0)
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 30b3dbbce2d2df..ef859ccb03661a 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -629,6 +629,12 @@ static void __mddev_put(struct mddev *mddev)
+ queue_work(md_misc_wq, &mddev->del_work);
+ }
+
++static void mddev_put_locked(struct mddev *mddev)
++{
++ if (atomic_dec_and_test(&mddev->active))
++ __mddev_put(mddev);
++}
++
+ void mddev_put(struct mddev *mddev)
+ {
+ if (!atomic_dec_and_lock(&mddev->active, &all_mddevs_lock))
+@@ -1748,7 +1754,7 @@ static int super_1_load(struct md_rdev *rdev, struct md_rdev *refdev, int minor_
+ count <<= sb->bblog_shift;
+ if (bb + 1 == 0)
+ break;
+- if (badblocks_set(&rdev->badblocks, sector, count, 1))
++ if (!badblocks_set(&rdev->badblocks, sector, count, 1))
+ return -EINVAL;
+ }
+ } else if (sb->bblog_offset != 0)
+@@ -8461,9 +8467,7 @@ static int md_seq_show(struct seq_file *seq, void *v)
+ if (mddev == list_last_entry(&all_mddevs, struct mddev, all_mddevs))
+ status_unused(seq);
+
+- if (atomic_dec_and_test(&mddev->active))
+- __mddev_put(mddev);
+-
++ mddev_put_locked(mddev);
+ return 0;
+ }
+
+@@ -9460,6 +9464,13 @@ static bool md_choose_sync_action(struct mddev *mddev, int *spares)
+ return true;
+ }
+
++ /* Check if resync is in progress. */
++ if (mddev->recovery_cp < MaxSector) {
++ set_bit(MD_RECOVERY_SYNC, &mddev->recovery);
++ clear_bit(MD_RECOVERY_RECOVER, &mddev->recovery);
++ return true;
++ }
++
+ /*
+ * Remove any failed drives, then add spares if possible. Spares are
+ * also removed and re-added, to allow the personality to fail the
+@@ -9476,13 +9487,6 @@ static bool md_choose_sync_action(struct mddev *mddev, int *spares)
+ return true;
+ }
+
+- /* Check if recovery is in progress. */
+- if (mddev->recovery_cp < MaxSector) {
+- set_bit(MD_RECOVERY_SYNC, &mddev->recovery);
+- clear_bit(MD_RECOVERY_RECOVER, &mddev->recovery);
+- return true;
+- }
+-
+ /* Delay to choose resync/check/repair in md_do_sync(). */
+ if (test_bit(MD_RECOVERY_SYNC, &mddev->recovery))
+ return true;
+@@ -9846,7 +9850,6 @@ int rdev_set_badblocks(struct md_rdev *rdev, sector_t s, int sectors,
+ int is_new)
+ {
+ struct mddev *mddev = rdev->mddev;
+- int rv;
+
+ /*
+ * Recording new badblocks for faulty rdev will force unnecessary
+@@ -9862,44 +9865,46 @@ int rdev_set_badblocks(struct md_rdev *rdev, sector_t s, int sectors,
+ s += rdev->new_data_offset;
+ else
+ s += rdev->data_offset;
+- rv = badblocks_set(&rdev->badblocks, s, sectors, 0);
+- if (rv == 0) {
+- /* Make sure they get written out promptly */
+- if (test_bit(ExternalBbl, &rdev->flags))
+- sysfs_notify_dirent_safe(rdev->sysfs_unack_badblocks);
+- sysfs_notify_dirent_safe(rdev->sysfs_state);
+- set_mask_bits(&mddev->sb_flags, 0,
+- BIT(MD_SB_CHANGE_CLEAN) | BIT(MD_SB_CHANGE_PENDING));
+- md_wakeup_thread(rdev->mddev->thread);
+- return 1;
+- } else
++
++ if (!badblocks_set(&rdev->badblocks, s, sectors, 0))
+ return 0;
++
++ /* Make sure they get written out promptly */
++ if (test_bit(ExternalBbl, &rdev->flags))
++ sysfs_notify_dirent_safe(rdev->sysfs_unack_badblocks);
++ sysfs_notify_dirent_safe(rdev->sysfs_state);
++ set_mask_bits(&mddev->sb_flags, 0,
++ BIT(MD_SB_CHANGE_CLEAN) | BIT(MD_SB_CHANGE_PENDING));
++ md_wakeup_thread(rdev->mddev->thread);
++ return 1;
+ }
+ EXPORT_SYMBOL_GPL(rdev_set_badblocks);
+
+ int rdev_clear_badblocks(struct md_rdev *rdev, sector_t s, int sectors,
+ int is_new)
+ {
+- int rv;
+ if (is_new)
+ s += rdev->new_data_offset;
+ else
+ s += rdev->data_offset;
+- rv = badblocks_clear(&rdev->badblocks, s, sectors);
+- if ((rv == 0) && test_bit(ExternalBbl, &rdev->flags))
++
++ if (!badblocks_clear(&rdev->badblocks, s, sectors))
++ return 0;
++
++ if (test_bit(ExternalBbl, &rdev->flags))
+ sysfs_notify_dirent_safe(rdev->sysfs_badblocks);
+- return rv;
++ return 1;
+ }
+ EXPORT_SYMBOL_GPL(rdev_clear_badblocks);
+
+ static int md_notify_reboot(struct notifier_block *this,
+ unsigned long code, void *x)
+ {
+- struct mddev *mddev, *n;
++ struct mddev *mddev;
+ int need_delay = 0;
+
+ spin_lock(&all_mddevs_lock);
+- list_for_each_entry_safe(mddev, n, &all_mddevs, all_mddevs) {
++ list_for_each_entry(mddev, &all_mddevs, all_mddevs) {
+ if (!mddev_get(mddev))
+ continue;
+ spin_unlock(&all_mddevs_lock);
+@@ -9911,8 +9916,8 @@ static int md_notify_reboot(struct notifier_block *this,
+ mddev_unlock(mddev);
+ }
+ need_delay = 1;
+- mddev_put(mddev);
+ spin_lock(&all_mddevs_lock);
++ mddev_put_locked(mddev);
+ }
+ spin_unlock(&all_mddevs_lock);
+
+@@ -10245,7 +10250,7 @@ void md_autostart_arrays(int part)
+
+ static __exit void md_exit(void)
+ {
+- struct mddev *mddev, *n;
++ struct mddev *mddev;
+ int delay = 1;
+
+ unregister_blkdev(MD_MAJOR,"md");
+@@ -10266,7 +10271,7 @@ static __exit void md_exit(void)
+ remove_proc_entry("mdstat", NULL);
+
+ spin_lock(&all_mddevs_lock);
+- list_for_each_entry_safe(mddev, n, &all_mddevs, all_mddevs) {
++ list_for_each_entry(mddev, &all_mddevs, all_mddevs) {
+ if (!mddev_get(mddev))
+ continue;
+ spin_unlock(&all_mddevs_lock);
+@@ -10278,8 +10283,8 @@ static __exit void md_exit(void)
+ * the mddev for destruction by a workqueue, and the
+ * destroy_workqueue() below will wait for that to complete.
+ */
+- mddev_put(mddev);
+ spin_lock(&all_mddevs_lock);
++ mddev_put_locked(mddev);
+ }
+ spin_unlock(&all_mddevs_lock);
+
+diff --git a/drivers/md/md.h b/drivers/md/md.h
+index def808064ad8ef..cc31c795369da8 100644
+--- a/drivers/md/md.h
++++ b/drivers/md/md.h
+@@ -266,8 +266,8 @@ enum flag_bits {
+ Nonrot, /* non-rotational device (SSD) */
+ };
+
+-static inline int is_badblock(struct md_rdev *rdev, sector_t s, int sectors,
+- sector_t *first_bad, int *bad_sectors)
++static inline int is_badblock(struct md_rdev *rdev, sector_t s, sector_t sectors,
++ sector_t *first_bad, sector_t *bad_sectors)
+ {
+ if (unlikely(rdev->badblocks.count)) {
+ int rv = badblocks_check(&rdev->badblocks, rdev->data_offset + s,
+@@ -284,7 +284,7 @@ static inline int rdev_has_badblock(struct md_rdev *rdev, sector_t s,
+ int sectors)
+ {
+ sector_t first_bad;
+- int bad_sectors;
++ sector_t bad_sectors;
+
+ return is_badblock(rdev, s, sectors, &first_bad, &bad_sectors);
+ }
+diff --git a/drivers/md/raid1-10.c b/drivers/md/raid1-10.c
+index 4378d3250bd757..62b980b12f93aa 100644
+--- a/drivers/md/raid1-10.c
++++ b/drivers/md/raid1-10.c
+@@ -247,7 +247,7 @@ static inline int raid1_check_read_range(struct md_rdev *rdev,
+ sector_t this_sector, int *len)
+ {
+ sector_t first_bad;
+- int bad_sectors;
++ sector_t bad_sectors;
+
+ /* no bad block overlap */
+ if (!is_badblock(rdev, this_sector, *len, &first_bad, &bad_sectors))
+diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
+index 10ea3af40991df..15829ab192d2b1 100644
+--- a/drivers/md/raid1.c
++++ b/drivers/md/raid1.c
+@@ -45,6 +45,7 @@
+
+ static void allow_barrier(struct r1conf *conf, sector_t sector_nr);
+ static void lower_barrier(struct r1conf *conf, sector_t sector_nr);
++static void raid1_free(struct mddev *mddev, void *priv);
+
+ #define RAID_1_10_NAME "raid1"
+ #include "raid1-10.c"
+@@ -1315,8 +1316,6 @@ static void raid1_read_request(struct mddev *mddev, struct bio *bio,
+ struct r1conf *conf = mddev->private;
+ struct raid1_info *mirror;
+ struct bio *read_bio;
+- const enum req_op op = bio_op(bio);
+- const blk_opf_t do_sync = bio->bi_opf & REQ_SYNC;
+ int max_sectors;
+ int rdisk, error;
+ bool r1bio_existed = !!r1_bio;
+@@ -1404,7 +1403,6 @@ static void raid1_read_request(struct mddev *mddev, struct bio *bio,
+ read_bio->bi_iter.bi_sector = r1_bio->sector +
+ mirror->rdev->data_offset;
+ read_bio->bi_end_io = raid1_end_read_request;
+- read_bio->bi_opf = op | do_sync;
+ if (test_bit(FailFast, &mirror->rdev->flags) &&
+ test_bit(R1BIO_FailFast, &r1_bio->state))
+ read_bio->bi_opf |= MD_FAILFAST;
+@@ -1537,7 +1535,7 @@ static void raid1_write_request(struct mddev *mddev, struct bio *bio,
+ atomic_inc(&rdev->nr_pending);
+ if (test_bit(WriteErrorSeen, &rdev->flags)) {
+ sector_t first_bad;
+- int bad_sectors;
++ sector_t bad_sectors;
+ int is_bad;
+
+ is_bad = is_badblock(rdev, r1_bio->sector, max_sectors,
+@@ -1653,8 +1651,6 @@ static void raid1_write_request(struct mddev *mddev, struct bio *bio,
+
+ mbio->bi_iter.bi_sector = (r1_bio->sector + rdev->data_offset);
+ mbio->bi_end_io = raid1_end_write_request;
+- mbio->bi_opf = bio_op(bio) |
+- (bio->bi_opf & (REQ_SYNC | REQ_FUA | REQ_ATOMIC));
+ if (test_bit(FailFast, &rdev->flags) &&
+ !test_bit(WriteMostly, &rdev->flags) &&
+ conf->raid_disks - mddev->degraded > 1)
+@@ -2886,7 +2882,7 @@ static sector_t raid1_sync_request(struct mddev *mddev, sector_t sector_nr,
+ } else {
+ /* may need to read from here */
+ sector_t first_bad = MaxSector;
+- int bad_sectors;
++ sector_t bad_sectors;
+
+ if (is_badblock(rdev, sector_nr, good_sectors,
+ &first_bad, &bad_sectors)) {
+@@ -3256,8 +3252,11 @@ static int raid1_run(struct mddev *mddev)
+
+ if (!mddev_is_dm(mddev)) {
+ ret = raid1_set_limits(mddev);
+- if (ret)
++ if (ret) {
++ if (!mddev->private)
++ raid1_free(mddev, conf);
+ return ret;
++ }
+ }
+
+ mddev->degraded = 0;
+@@ -3271,6 +3270,8 @@ static int raid1_run(struct mddev *mddev)
+ */
+ if (conf->raid_disks - mddev->degraded < 1) {
+ md_unregister_thread(mddev, &conf->thread);
++ if (!mddev->private)
++ raid1_free(mddev, conf);
+ return -EINVAL;
+ }
+
+diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
+index 15b9ae5bf84d8d..af010b64be63b3 100644
+--- a/drivers/md/raid10.c
++++ b/drivers/md/raid10.c
+@@ -747,7 +747,7 @@ static struct md_rdev *read_balance(struct r10conf *conf,
+
+ for (slot = 0; slot < conf->copies ; slot++) {
+ sector_t first_bad;
+- int bad_sectors;
++ sector_t bad_sectors;
+ sector_t dev_sector;
+ unsigned int pending;
+ bool nonrot;
+@@ -1146,8 +1146,6 @@ static void raid10_read_request(struct mddev *mddev, struct bio *bio,
+ {
+ struct r10conf *conf = mddev->private;
+ struct bio *read_bio;
+- const enum req_op op = bio_op(bio);
+- const blk_opf_t do_sync = bio->bi_opf & REQ_SYNC;
+ int max_sectors;
+ struct md_rdev *rdev;
+ char b[BDEVNAME_SIZE];
+@@ -1228,7 +1226,6 @@ static void raid10_read_request(struct mddev *mddev, struct bio *bio,
+ read_bio->bi_iter.bi_sector = r10_bio->devs[slot].addr +
+ choose_data_offset(r10_bio, rdev);
+ read_bio->bi_end_io = raid10_end_read_request;
+- read_bio->bi_opf = op | do_sync;
+ if (test_bit(FailFast, &rdev->flags) &&
+ test_bit(R10BIO_FailFast, &r10_bio->state))
+ read_bio->bi_opf |= MD_FAILFAST;
+@@ -1247,10 +1244,6 @@ static void raid10_write_one_disk(struct mddev *mddev, struct r10bio *r10_bio,
+ struct bio *bio, bool replacement,
+ int n_copy)
+ {
+- const enum req_op op = bio_op(bio);
+- const blk_opf_t do_sync = bio->bi_opf & REQ_SYNC;
+- const blk_opf_t do_fua = bio->bi_opf & REQ_FUA;
+- const blk_opf_t do_atomic = bio->bi_opf & REQ_ATOMIC;
+ unsigned long flags;
+ struct r10conf *conf = mddev->private;
+ struct md_rdev *rdev;
+@@ -1269,7 +1262,6 @@ static void raid10_write_one_disk(struct mddev *mddev, struct r10bio *r10_bio,
+ mbio->bi_iter.bi_sector = (r10_bio->devs[n_copy].addr +
+ choose_data_offset(r10_bio, rdev));
+ mbio->bi_end_io = raid10_end_write_request;
+- mbio->bi_opf = op | do_sync | do_fua | do_atomic;
+ if (!replacement && test_bit(FailFast,
+ &conf->mirrors[devnum].rdev->flags)
+ && enough(conf, devnum))
+@@ -1438,7 +1430,7 @@ static void raid10_write_request(struct mddev *mddev, struct bio *bio,
+ if (rdev && test_bit(WriteErrorSeen, &rdev->flags)) {
+ sector_t first_bad;
+ sector_t dev_sector = r10_bio->devs[i].addr;
+- int bad_sectors;
++ sector_t bad_sectors;
+ int is_bad;
+
+ is_bad = is_badblock(rdev, dev_sector, max_sectors,
+@@ -1631,11 +1623,10 @@ static int raid10_handle_discard(struct mddev *mddev, struct bio *bio)
+ if (test_bit(MD_RECOVERY_RESHAPE, &mddev->recovery))
+ return -EAGAIN;
+
+- if (WARN_ON_ONCE(bio->bi_opf & REQ_NOWAIT)) {
++ if (!wait_barrier(conf, bio->bi_opf & REQ_NOWAIT)) {
+ bio_wouldblock_error(bio);
+ return 0;
+ }
+- wait_barrier(conf, false);
+
+ /*
+ * Check reshape again to avoid reshape happens after checking
+@@ -3413,7 +3404,7 @@ static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr,
+ sector_t from_addr, to_addr;
+ struct md_rdev *rdev = conf->mirrors[d].rdev;
+ sector_t sector, first_bad;
+- int bad_sectors;
++ sector_t bad_sectors;
+ if (!rdev ||
+ !test_bit(In_sync, &rdev->flags))
+ continue;
+@@ -3609,7 +3600,7 @@ static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr,
+ for (i = 0; i < conf->copies; i++) {
+ int d = r10_bio->devs[i].devnum;
+ sector_t first_bad, sector;
+- int bad_sectors;
++ sector_t bad_sectors;
+ struct md_rdev *rdev;
+
+ if (r10_bio->devs[i].repl_bio)
+diff --git a/drivers/media/dvb-frontends/dib8000.c b/drivers/media/dvb-frontends/dib8000.c
+index 2f5165918163df..cfe59c3255f706 100644
+--- a/drivers/media/dvb-frontends/dib8000.c
++++ b/drivers/media/dvb-frontends/dib8000.c
+@@ -2701,8 +2701,11 @@ static void dib8000_set_dds(struct dib8000_state *state, s32 offset_khz)
+ u8 ratio;
+
+ if (state->revision == 0x8090) {
++ u32 internal = dib8000_read32(state, 23) / 1000;
++
+ ratio = 4;
+- unit_khz_dds_val = (1<<26) / (dib8000_read32(state, 23) / 1000);
++
++ unit_khz_dds_val = (1<<26) / (internal ?: 1);
+ if (offset_khz < 0)
+ dds = (1 << 26) - (abs_offset_khz * unit_khz_dds_val);
+ else
+diff --git a/drivers/media/platform/allegro-dvt/allegro-core.c b/drivers/media/platform/allegro-dvt/allegro-core.c
+index e491399afcc984..eb03df0d865274 100644
+--- a/drivers/media/platform/allegro-dvt/allegro-core.c
++++ b/drivers/media/platform/allegro-dvt/allegro-core.c
+@@ -3912,6 +3912,7 @@ static int allegro_probe(struct platform_device *pdev)
+ if (ret < 0) {
+ v4l2_err(&dev->v4l2_dev,
+ "failed to request firmware: %d\n", ret);
++ v4l2_device_unregister(&dev->v4l2_dev);
+ return ret;
+ }
+
+diff --git a/drivers/media/platform/ti/omap3isp/isp.c b/drivers/media/platform/ti/omap3isp/isp.c
+index 405ca215179dd3..a7fd808aea1e78 100644
+--- a/drivers/media/platform/ti/omap3isp/isp.c
++++ b/drivers/media/platform/ti/omap3isp/isp.c
+@@ -1961,6 +1961,13 @@ static int isp_attach_iommu(struct isp_device *isp)
+ struct dma_iommu_mapping *mapping;
+ int ret;
+
++ /* We always want to replace any default mapping from the arch code */
++ mapping = to_dma_iommu_mapping(isp->dev);
++ if (mapping) {
++ arm_iommu_detach_device(isp->dev);
++ arm_iommu_release_mapping(mapping);
++ }
++
+ /*
+ * Create the ARM mapping, used by the ARM DMA mapping core to allocate
+ * VAs. This will allocate a corresponding IOMMU domain.
+diff --git a/drivers/media/platform/verisilicon/hantro_g2_hevc_dec.c b/drivers/media/platform/verisilicon/hantro_g2_hevc_dec.c
+index 85a44143b3786b..0e212198dd65b1 100644
+--- a/drivers/media/platform/verisilicon/hantro_g2_hevc_dec.c
++++ b/drivers/media/platform/verisilicon/hantro_g2_hevc_dec.c
+@@ -518,6 +518,7 @@ static void set_buffers(struct hantro_ctx *ctx)
+ hantro_reg_write(vpu, &g2_stream_len, src_len);
+ hantro_reg_write(vpu, &g2_strm_buffer_len, src_buf_len);
+ hantro_reg_write(vpu, &g2_strm_start_offset, 0);
++ hantro_reg_write(vpu, &g2_start_bit, 0);
+ hantro_reg_write(vpu, &g2_write_mvs_e, 1);
+
+ hantro_write_addr(vpu, G2_TILE_SIZES_ADDR, ctx->hevc_dec.tile_sizes.dma);
+diff --git a/drivers/media/rc/streamzap.c b/drivers/media/rc/streamzap.c
+index 9b209e687f256d..2ce62fe5d60f5a 100644
+--- a/drivers/media/rc/streamzap.c
++++ b/drivers/media/rc/streamzap.c
+@@ -385,8 +385,8 @@ static void streamzap_disconnect(struct usb_interface *interface)
+ if (!sz)
+ return;
+
+- rc_unregister_device(sz->rdev);
+ usb_kill_urb(sz->urb_in);
++ rc_unregister_device(sz->rdev);
+ usb_free_urb(sz->urb_in);
+ usb_free_coherent(usbdev, sz->buf_in_len, sz->buf_in, sz->dma_in);
+
+diff --git a/drivers/media/test-drivers/vimc/vimc-streamer.c b/drivers/media/test-drivers/vimc/vimc-streamer.c
+index 807551a5143b78..15d863f97cbf96 100644
+--- a/drivers/media/test-drivers/vimc/vimc-streamer.c
++++ b/drivers/media/test-drivers/vimc/vimc-streamer.c
+@@ -59,6 +59,12 @@ static void vimc_streamer_pipeline_terminate(struct vimc_stream *stream)
+ continue;
+
+ sd = media_entity_to_v4l2_subdev(ved->ent);
++ /*
++ * Do not call .s_stream() to stop an already
++ * stopped/unstarted subdev.
++ */
++ if (!v4l2_subdev_is_streaming(sd))
++ continue;
+ v4l2_subdev_call(sd, video, s_stream, 0);
+ }
+ }
+diff --git a/drivers/memory/mtk-smi.c b/drivers/memory/mtk-smi.c
+index 5710348f72f6ff..a8f5467d6b31e3 100644
+--- a/drivers/memory/mtk-smi.c
++++ b/drivers/memory/mtk-smi.c
+@@ -332,6 +332,38 @@ static const u8 mtk_smi_larb_mt8188_ostd[][SMI_LARB_PORT_NR_MAX] = {
+ [25] = {0x01},
+ };
+
++static const u8 mtk_smi_larb_mt8192_ostd[][SMI_LARB_PORT_NR_MAX] = {
++ [0] = {0x2, 0x2, 0x28, 0xa, 0xc, 0x28,},
++ [1] = {0x2, 0x2, 0x18, 0x18, 0x18, 0xa, 0xc, 0x28,},
++ [2] = {0x5, 0x5, 0x5, 0x5, 0x1,},
++ [3] = {},
++ [4] = {0x28, 0x19, 0xb, 0x1, 0x1, 0x1, 0x1, 0x1, 0x1, 0x4, 0x1,},
++ [5] = {0x1, 0x1, 0x4, 0x1, 0x1, 0x1, 0x1, 0x16,},
++ [6] = {},
++ [7] = {0x1, 0x3, 0x2, 0x1, 0x1, 0x5, 0x2, 0x12, 0x13, 0x4, 0x4, 0x1,
++ 0x4, 0x2, 0x1,},
++ [8] = {},
++ [9] = {0xa, 0x7, 0xf, 0x8, 0x1, 0x8, 0x9, 0x3, 0x3, 0x6, 0x7, 0x4,
++ 0xa, 0x3, 0x4, 0xe, 0x1, 0x7, 0x1, 0x1, 0x1, 0x1, 0x1, 0x1,
++ 0x1, 0x1, 0x1, 0x1, 0x1,},
++ [10] = {},
++ [11] = {0x1, 0x1, 0x1, 0x1, 0x1, 0x1, 0x1, 0x1, 0x1, 0x1, 0x1, 0x1,
++ 0x1, 0x1, 0x1, 0xe, 0x1, 0x7, 0x8, 0x7, 0x7, 0x1, 0x6, 0x2,
++ 0xf, 0x8, 0x1, 0x1, 0x1,},
++ [12] = {},
++ [13] = {0x2, 0xc, 0xc, 0xe, 0x6, 0x6, 0x6, 0x6, 0x6, 0x12, 0x6, 0x28,
++ 0x2, 0xc, 0xc, 0x28, 0x12, 0x6,},
++ [14] = {},
++ [15] = {0x28, 0x14, 0x2, 0xc, 0x18, 0x4, 0x28, 0x14, 0x4, 0x4, 0x4, 0x2,
++ 0x4, 0x2, 0x8, 0x4, 0x4,},
++ [16] = {0x28, 0x14, 0x2, 0xc, 0x18, 0x4, 0x28, 0x14, 0x4, 0x4, 0x4, 0x2,
++ 0x4, 0x2, 0x8, 0x4, 0x4,},
++ [17] = {0x28, 0x14, 0x2, 0xc, 0x18, 0x4, 0x28, 0x14, 0x4, 0x4, 0x4, 0x2,
++ 0x4, 0x2, 0x8, 0x4, 0x4,},
++ [18] = {0x2, 0x2, 0x4, 0x2,},
++ [19] = {0x9, 0x9, 0x5, 0x5, 0x1, 0x1,},
++};
++
+ static const u8 mtk_smi_larb_mt8195_ostd[][SMI_LARB_PORT_NR_MAX] = {
+ [0] = {0x0a, 0xc, 0x22, 0x22, 0x01, 0x0a,}, /* larb0 */
+ [1] = {0x0a, 0xc, 0x22, 0x22, 0x01, 0x0a,}, /* larb1 */
+@@ -427,6 +459,7 @@ static const struct mtk_smi_larb_gen mtk_smi_larb_mt8188 = {
+
+ static const struct mtk_smi_larb_gen mtk_smi_larb_mt8192 = {
+ .config_port = mtk_smi_larb_config_port_gen2_general,
++ .ostd = mtk_smi_larb_mt8192_ostd,
+ };
+
+ static const struct mtk_smi_larb_gen mtk_smi_larb_mt8195 = {
+diff --git a/drivers/mfd/sm501.c b/drivers/mfd/sm501.c
+index 0469e85d72cff3..7ee293b09f628a 100644
+--- a/drivers/mfd/sm501.c
++++ b/drivers/mfd/sm501.c
+@@ -920,7 +920,7 @@ static void sm501_gpio_set(struct gpio_chip *chip, unsigned offset, int value)
+ {
+ struct sm501_gpio_chip *smchip = gpiochip_get_data(chip);
+ struct sm501_gpio *smgpio = smchip->ourgpio;
+- unsigned long bit = 1 << offset;
++ unsigned long bit = BIT(offset);
+ void __iomem *regs = smchip->regbase;
+ unsigned long save;
+ unsigned long val;
+@@ -946,7 +946,7 @@ static int sm501_gpio_input(struct gpio_chip *chip, unsigned offset)
+ struct sm501_gpio_chip *smchip = gpiochip_get_data(chip);
+ struct sm501_gpio *smgpio = smchip->ourgpio;
+ void __iomem *regs = smchip->regbase;
+- unsigned long bit = 1 << offset;
++ unsigned long bit = BIT(offset);
+ unsigned long save;
+ unsigned long ddr;
+
+@@ -971,7 +971,7 @@ static int sm501_gpio_output(struct gpio_chip *chip,
+ {
+ struct sm501_gpio_chip *smchip = gpiochip_get_data(chip);
+ struct sm501_gpio *smgpio = smchip->ourgpio;
+- unsigned long bit = 1 << offset;
++ unsigned long bit = BIT(offset);
+ void __iomem *regs = smchip->regbase;
+ unsigned long save;
+ unsigned long val;
+diff --git a/drivers/misc/pci_endpoint_test.c b/drivers/misc/pci_endpoint_test.c
+index d5ac71a4938659..9dac7cbe8748cc 100644
+--- a/drivers/misc/pci_endpoint_test.c
++++ b/drivers/misc/pci_endpoint_test.c
+@@ -272,9 +272,9 @@ static const u32 bar_test_pattern[] = {
+ };
+
+ static int pci_endpoint_test_bar_memcmp(struct pci_endpoint_test *test,
+- enum pci_barno barno, int offset,
+- void *write_buf, void *read_buf,
+- int size)
++ enum pci_barno barno,
++ resource_size_t offset, void *write_buf,
++ void *read_buf, int size)
+ {
+ memset(write_buf, bar_test_pattern[barno], size);
+ memcpy_toio(test->bar[barno] + offset, write_buf, size);
+@@ -287,10 +287,11 @@ static int pci_endpoint_test_bar_memcmp(struct pci_endpoint_test *test,
+ static int pci_endpoint_test_bar(struct pci_endpoint_test *test,
+ enum pci_barno barno)
+ {
+- int j, bar_size, buf_size, iters;
++ resource_size_t bar_size, offset = 0;
+ void *write_buf __free(kfree) = NULL;
+ void *read_buf __free(kfree) = NULL;
+ struct pci_dev *pdev = test->pdev;
++ int buf_size;
+
+ if (!test->bar[barno])
+ return -ENOMEM;
+@@ -314,11 +315,12 @@ static int pci_endpoint_test_bar(struct pci_endpoint_test *test,
+ if (!read_buf)
+ return -ENOMEM;
+
+- iters = bar_size / buf_size;
+- for (j = 0; j < iters; j++)
+- if (pci_endpoint_test_bar_memcmp(test, barno, buf_size * j,
+- write_buf, read_buf, buf_size))
++ while (offset < bar_size) {
++ if (pci_endpoint_test_bar_memcmp(test, barno, offset, write_buf,
++ read_buf, buf_size))
+ return -EIO;
++ offset += buf_size;
++ }
+
+ return 0;
+ }
+@@ -382,7 +384,7 @@ static int pci_endpoint_test_bars_read_bar(struct pci_endpoint_test *test,
+ static int pci_endpoint_test_bars(struct pci_endpoint_test *test)
+ {
+ enum pci_barno bar;
+- bool ret;
++ int ret;
+
+ /* Write all BARs in order (without reading). */
+ for (bar = 0; bar < PCI_STD_NUM_BARS; bar++)
+@@ -398,7 +400,7 @@ static int pci_endpoint_test_bars(struct pci_endpoint_test *test)
+ for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) {
+ if (test->bar[bar]) {
+ ret = pci_endpoint_test_bars_read_bar(test, bar);
+- if (!ret)
++ if (ret)
+ return ret;
+ }
+ }
+diff --git a/drivers/mmc/host/omap.c b/drivers/mmc/host/omap.c
+index 62252ad4e20de1..3cdb2fc4496527 100644
+--- a/drivers/mmc/host/omap.c
++++ b/drivers/mmc/host/omap.c
+@@ -1272,19 +1272,25 @@ static int mmc_omap_new_slot(struct mmc_omap_host *host, int id)
+ /* Check for some optional GPIO controls */
+ slot->vsd = devm_gpiod_get_index_optional(host->dev, "vsd",
+ id, GPIOD_OUT_LOW);
+- if (IS_ERR(slot->vsd))
+- return dev_err_probe(host->dev, PTR_ERR(slot->vsd),
++ if (IS_ERR(slot->vsd)) {
++ r = dev_err_probe(host->dev, PTR_ERR(slot->vsd),
+ "error looking up VSD GPIO\n");
++ goto err_free_host;
++ }
+ slot->vio = devm_gpiod_get_index_optional(host->dev, "vio",
+ id, GPIOD_OUT_LOW);
+- if (IS_ERR(slot->vio))
+- return dev_err_probe(host->dev, PTR_ERR(slot->vio),
++ if (IS_ERR(slot->vio)) {
++ r = dev_err_probe(host->dev, PTR_ERR(slot->vio),
+ "error looking up VIO GPIO\n");
++ goto err_free_host;
++ }
+ slot->cover = devm_gpiod_get_index_optional(host->dev, "cover",
+ id, GPIOD_IN);
+- if (IS_ERR(slot->cover))
+- return dev_err_probe(host->dev, PTR_ERR(slot->cover),
++ if (IS_ERR(slot->cover)) {
++ r = dev_err_probe(host->dev, PTR_ERR(slot->cover),
+ "error looking up cover switch GPIO\n");
++ goto err_free_host;
++ }
+
+ host->slots[id] = slot;
+
+@@ -1344,6 +1350,7 @@ static int mmc_omap_new_slot(struct mmc_omap_host *host, int id)
+ device_remove_file(&mmc->class_dev, &dev_attr_slot_name);
+ err_remove_host:
+ mmc_remove_host(mmc);
++err_free_host:
+ mmc_free_host(mmc);
+ return r;
+ }
+diff --git a/drivers/mmc/host/sdhci-omap.c b/drivers/mmc/host/sdhci-omap.c
+index 54d795205fb443..26a9a8b5682af1 100644
+--- a/drivers/mmc/host/sdhci-omap.c
++++ b/drivers/mmc/host/sdhci-omap.c
+@@ -1339,8 +1339,8 @@ static int sdhci_omap_probe(struct platform_device *pdev)
+ /* R1B responses is required to properly manage HW busy detection. */
+ mmc->caps |= MMC_CAP_NEED_RSP_BUSY;
+
+- /* Allow card power off and runtime PM for eMMC/SD card devices */
+- mmc->caps |= MMC_CAP_POWER_OFF_CARD | MMC_CAP_AGGRESSIVE_PM;
++ /* Enable SDIO card power off. */
++ mmc->caps |= MMC_CAP_POWER_OFF_CARD;
+
+ ret = sdhci_setup_host(host);
+ if (ret)
+diff --git a/drivers/mmc/host/sdhci-pxav3.c b/drivers/mmc/host/sdhci-pxav3.c
+index 990723a008aec5..3fb56face3d812 100644
+--- a/drivers/mmc/host/sdhci-pxav3.c
++++ b/drivers/mmc/host/sdhci-pxav3.c
+@@ -399,6 +399,7 @@ static int sdhci_pxav3_probe(struct platform_device *pdev)
+ if (!IS_ERR(pxa->clk_core))
+ clk_prepare_enable(pxa->clk_core);
+
++ host->mmc->caps |= MMC_CAP_NEED_RSP_BUSY;
+ /* enable 1/8V DDR capable */
+ host->mmc->caps |= MMC_CAP_1_8V_DDR;
+
+diff --git a/drivers/net/arcnet/com20020-pci.c b/drivers/net/arcnet/com20020-pci.c
+index c5e571ec94c990..0472bcdff13072 100644
+--- a/drivers/net/arcnet/com20020-pci.c
++++ b/drivers/net/arcnet/com20020-pci.c
+@@ -251,18 +251,33 @@ static int com20020pci_probe(struct pci_dev *pdev,
+ card->tx_led.default_trigger = devm_kasprintf(&pdev->dev,
+ GFP_KERNEL, "arc%d-%d-tx",
+ dev->dev_id, i);
++ if (!card->tx_led.default_trigger) {
++ ret = -ENOMEM;
++ goto err_free_arcdev;
++ }
+ card->tx_led.name = devm_kasprintf(&pdev->dev, GFP_KERNEL,
+ "pci:green:tx:%d-%d",
+ dev->dev_id, i);
+-
++ if (!card->tx_led.name) {
++ ret = -ENOMEM;
++ goto err_free_arcdev;
++ }
+ card->tx_led.dev = &dev->dev;
+ card->recon_led.brightness_set = led_recon_set;
+ card->recon_led.default_trigger = devm_kasprintf(&pdev->dev,
+ GFP_KERNEL, "arc%d-%d-recon",
+ dev->dev_id, i);
++ if (!card->recon_led.default_trigger) {
++ ret = -ENOMEM;
++ goto err_free_arcdev;
++ }
+ card->recon_led.name = devm_kasprintf(&pdev->dev, GFP_KERNEL,
+ "pci:red:recon:%d-%d",
+ dev->dev_id, i);
++ if (!card->recon_led.name) {
++ ret = -ENOMEM;
++ goto err_free_arcdev;
++ }
+ card->recon_led.dev = &dev->dev;
+
+ ret = devm_led_classdev_register(&pdev->dev, &card->tx_led);
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index e45bba240cbcda..4da5fcb7def47f 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -322,9 +322,9 @@ static bool bond_sk_check(struct bonding *bond)
+ }
+ }
+
+-static bool bond_xdp_check(struct bonding *bond)
++bool bond_xdp_check(struct bonding *bond, int mode)
+ {
+- switch (BOND_MODE(bond)) {
++ switch (mode) {
+ case BOND_MODE_ROUNDROBIN:
+ case BOND_MODE_ACTIVEBACKUP:
+ return true;
+@@ -1937,7 +1937,7 @@ void bond_xdp_set_features(struct net_device *bond_dev)
+
+ ASSERT_RTNL();
+
+- if (!bond_xdp_check(bond) || !bond_has_slaves(bond)) {
++ if (!bond_xdp_check(bond, BOND_MODE(bond)) || !bond_has_slaves(bond)) {
+ xdp_clear_features_flag(bond_dev);
+ return;
+ }
+@@ -5699,7 +5699,7 @@ static int bond_xdp_set(struct net_device *dev, struct bpf_prog *prog,
+
+ ASSERT_RTNL();
+
+- if (!bond_xdp_check(bond)) {
++ if (!bond_xdp_check(bond, BOND_MODE(bond))) {
+ BOND_NL_ERR(dev, extack,
+ "No native XDP support for the current bonding mode");
+ return -EOPNOTSUPP;
+diff --git a/drivers/net/bonding/bond_options.c b/drivers/net/bonding/bond_options.c
+index d1b095af253bdc..91893c29b8995b 100644
+--- a/drivers/net/bonding/bond_options.c
++++ b/drivers/net/bonding/bond_options.c
+@@ -868,6 +868,9 @@ static bool bond_set_xfrm_features(struct bonding *bond)
+ static int bond_option_mode_set(struct bonding *bond,
+ const struct bond_opt_value *newval)
+ {
++ if (bond->xdp_prog && !bond_xdp_check(bond, newval->value))
++ return -EOPNOTSUPP;
++
+ if (!bond_mode_uses_arp(newval->value)) {
+ if (bond->params.arp_interval) {
+ netdev_dbg(bond->dev, "%s mode is incompatible with arp monitoring, start mii monitoring\n",
+diff --git a/drivers/net/can/rockchip/rockchip_canfd-core.c b/drivers/net/can/rockchip/rockchip_canfd-core.c
+index d9a937ba126c3c..46201c126703ce 100644
+--- a/drivers/net/can/rockchip/rockchip_canfd-core.c
++++ b/drivers/net/can/rockchip/rockchip_canfd-core.c
+@@ -236,11 +236,6 @@ static void rkcanfd_chip_fifo_setup(struct rkcanfd_priv *priv)
+ {
+ u32 reg;
+
+- /* TXE FIFO */
+- reg = rkcanfd_read(priv, RKCANFD_REG_RX_FIFO_CTRL);
+- reg |= RKCANFD_REG_RX_FIFO_CTRL_RX_FIFO_ENABLE;
+- rkcanfd_write(priv, RKCANFD_REG_RX_FIFO_CTRL, reg);
+-
+ /* RX FIFO */
+ reg = rkcanfd_read(priv, RKCANFD_REG_RX_FIFO_CTRL);
+ reg |= RKCANFD_REG_RX_FIFO_CTRL_RX_FIFO_ENABLE;
+diff --git a/drivers/net/dsa/microchip/ksz8.c b/drivers/net/dsa/microchip/ksz8.c
+index da7110d675583d..be433b4e2b1ca8 100644
+--- a/drivers/net/dsa/microchip/ksz8.c
++++ b/drivers/net/dsa/microchip/ksz8.c
+@@ -1625,7 +1625,6 @@ void ksz8_port_setup(struct ksz_device *dev, int port, bool cpu_port)
+ const u16 *regs = dev->info->regs;
+ struct dsa_switch *ds = dev->ds;
+ const u32 *masks;
+- int queues;
+ u8 member;
+
+ masks = dev->info->masks;
+@@ -1633,15 +1632,7 @@ void ksz8_port_setup(struct ksz_device *dev, int port, bool cpu_port)
+ /* enable broadcast storm limit */
+ ksz_port_cfg(dev, port, P_BCAST_STORM_CTRL, PORT_BROADCAST_STORM, true);
+
+- /* For KSZ88x3 enable only one queue by default, otherwise we won't
+- * be able to get rid of PCP prios on Port 2.
+- */
+- if (ksz_is_ksz88x3(dev))
+- queues = 1;
+- else
+- queues = dev->info->num_tx_queues;
+-
+- ksz8_port_queue_split(dev, port, queues);
++ ksz8_port_queue_split(dev, port, dev->info->num_tx_queues);
+
+ /* replace priority */
+ ksz_port_cfg(dev, port, P_802_1P_CTRL,
+diff --git a/drivers/net/dsa/microchip/ksz_dcb.c b/drivers/net/dsa/microchip/ksz_dcb.c
+index 30b4a6186e38f4..c3b501997ac948 100644
+--- a/drivers/net/dsa/microchip/ksz_dcb.c
++++ b/drivers/net/dsa/microchip/ksz_dcb.c
+@@ -10,7 +10,12 @@
+ #include "ksz_dcb.h"
+ #include "ksz8.h"
+
+-#define KSZ8_REG_PORT_1_CTRL_0 0x10
++/* Port X Control 0 register.
++ * The datasheet specifies: Port 1 - 0x10, Port 2 - 0x20, Port 3 - 0x30.
++ * However, the driver uses get_port_addr(), which maps Port 1 to offset 0.
++ * Therefore, we define the base offset as 0x00 here to align with that logic.
++ */
++#define KSZ8_REG_PORT_1_CTRL_0 0x00
+ #define KSZ8_PORT_DIFFSERV_ENABLE BIT(6)
+ #define KSZ8_PORT_802_1P_ENABLE BIT(5)
+ #define KSZ8_PORT_BASED_PRIO_M GENMASK(4, 3)
+@@ -181,49 +186,6 @@ int ksz_port_get_default_prio(struct dsa_switch *ds, int port)
+ return (data & mask) >> shift;
+ }
+
+-/**
+- * ksz88x3_port_set_default_prio_quirks - Quirks for default priority
+- * @dev: Pointer to the KSZ switch device structure
+- * @port: Port number for which to set the default priority
+- * @prio: Priority value to set
+- *
+- * This function implements quirks for setting the default priority on KSZ88x3
+- * devices. On Port 2, no other priority providers are working
+- * except of PCP. So, configuring default priority on Port 2 is not possible.
+- * On Port 1, it is not possible to configure port priority if PCP
+- * apptrust on Port 2 is disabled. Since we disable multiple queues on the
+- * switch to disable PCP on Port 2, we need to ensure that the default priority
+- * configuration on Port 1 is in agreement with the configuration on Port 2.
+- *
+- * Return: 0 on success, or a negative error code on failure
+- */
+-static int ksz88x3_port_set_default_prio_quirks(struct ksz_device *dev, int port,
+- u8 prio)
+-{
+- if (!prio)
+- return 0;
+-
+- if (port == KSZ_PORT_2) {
+- dev_err(dev->dev, "Port priority configuration is not working on Port 2\n");
+- return -EINVAL;
+- } else if (port == KSZ_PORT_1) {
+- u8 port2_data;
+- int ret;
+-
+- ret = ksz_pread8(dev, KSZ_PORT_2, KSZ8_REG_PORT_1_CTRL_0,
+- &port2_data);
+- if (ret)
+- return ret;
+-
+- if (!(port2_data & KSZ8_PORT_802_1P_ENABLE)) {
+- dev_err(dev->dev, "Not possible to configure port priority on Port 1 if PCP apptrust on Port 2 is disabled\n");
+- return -EINVAL;
+- }
+- }
+-
+- return 0;
+-}
+-
+ /**
+ * ksz_port_set_default_prio - Sets the default priority for a port on a KSZ
+ * switch
+@@ -239,18 +201,12 @@ static int ksz88x3_port_set_default_prio_quirks(struct ksz_device *dev, int port
+ int ksz_port_set_default_prio(struct dsa_switch *ds, int port, u8 prio)
+ {
+ struct ksz_device *dev = ds->priv;
+- int reg, shift, ret;
++ int reg, shift;
+ u8 mask;
+
+ if (prio >= dev->info->num_ipms)
+ return -EINVAL;
+
+- if (ksz_is_ksz88x3(dev)) {
+- ret = ksz88x3_port_set_default_prio_quirks(dev, port, prio);
+- if (ret)
+- return ret;
+- }
+-
+ ksz_get_default_port_prio_reg(dev, &reg, &mask, &shift);
+
+ return ksz_prmw8(dev, port, reg, mask, (prio << shift) & mask);
+@@ -518,155 +474,6 @@ static int ksz_port_set_apptrust_validate(struct ksz_device *dev, int port,
+ return -EINVAL;
+ }
+
+-/**
+- * ksz88x3_port1_apptrust_quirk - Quirk for apptrust configuration on Port 1
+- * of KSZ88x3 devices
+- * @dev: Pointer to the KSZ switch device structure
+- * @port: Port number for which to set the apptrust selectors
+- * @reg: Register address for the apptrust configuration
+- * @port1_data: Data to set for the apptrust configuration
+- *
+- * This function implements a quirk for apptrust configuration on Port 1 of
+- * KSZ88x3 devices. It ensures that apptrust configuration on Port 1 is not
+- * possible if PCP apptrust on Port 2 is disabled. This is because the Port 2
+- * seems to be permanently hardwired to PCP classification, so we need to
+- * do Port 1 configuration always in agreement with Port 2 configuration.
+- *
+- * Return: 0 on success, or a negative error code on failure
+- */
+-static int ksz88x3_port1_apptrust_quirk(struct ksz_device *dev, int port,
+- int reg, u8 port1_data)
+-{
+- u8 port2_data;
+- int ret;
+-
+- /* If no apptrust is requested for Port 1, no need to care about Port 2
+- * configuration.
+- */
+- if (!(port1_data & (KSZ8_PORT_802_1P_ENABLE | KSZ8_PORT_DIFFSERV_ENABLE)))
+- return 0;
+-
+- /* We got request to enable any apptrust on Port 1. To make it possible,
+- * we need to enable multiple queues on the switch. If we enable
+- * multiqueue support, PCP classification on Port 2 will be
+- * automatically activated by HW.
+- */
+- ret = ksz_pread8(dev, KSZ_PORT_2, reg, &port2_data);
+- if (ret)
+- return ret;
+-
+- /* If KSZ8_PORT_802_1P_ENABLE bit is set on Port 2, the driver showed
+- * the interest in PCP classification on Port 2. In this case,
+- * multiqueue support is enabled and we can enable any apptrust on
+- * Port 1.
+- * If KSZ8_PORT_802_1P_ENABLE bit is not set on Port 2, the PCP
+- * classification on Port 2 is still active, but the driver disabled
+- * multiqueue support and made frame prioritization inactive for
+- * all ports. In this case, we can't enable any apptrust on Port 1.
+- */
+- if (!(port2_data & KSZ8_PORT_802_1P_ENABLE)) {
+- dev_err(dev->dev, "Not possible to enable any apptrust on Port 1 if PCP apptrust on Port 2 is disabled\n");
+- return -EINVAL;
+- }
+-
+- return 0;
+-}
+-
+-/**
+- * ksz88x3_port2_apptrust_quirk - Quirk for apptrust configuration on Port 2
+- * of KSZ88x3 devices
+- * @dev: Pointer to the KSZ switch device structure
+- * @port: Port number for which to set the apptrust selectors
+- * @reg: Register address for the apptrust configuration
+- * @port2_data: Data to set for the apptrust configuration
+- *
+- * This function implements a quirk for apptrust configuration on Port 2 of
+- * KSZ88x3 devices. It ensures that DSCP apptrust is not working on Port 2 and
+- * that it is not possible to disable PCP on Port 2. The only way to disable PCP
+- * on Port 2 is to disable multiple queues on the switch.
+- *
+- * Return: 0 on success, or a negative error code on failure
+- */
+-static int ksz88x3_port2_apptrust_quirk(struct ksz_device *dev, int port,
+- int reg, u8 port2_data)
+-{
+- struct dsa_switch *ds = dev->ds;
+- u8 port1_data;
+- int ret;
+-
+- /* First validate Port 2 configuration. DiffServ/DSCP is not working
+- * on this port.
+- */
+- if (port2_data & KSZ8_PORT_DIFFSERV_ENABLE) {
+- dev_err(dev->dev, "DSCP apptrust is not working on Port 2\n");
+- return -EINVAL;
+- }
+-
+- /* If PCP support is requested, we need to enable all queues on the
+- * switch to make PCP priority working on Port 2.
+- */
+- if (port2_data & KSZ8_PORT_802_1P_ENABLE)
+- return ksz8_all_queues_split(dev, dev->info->num_tx_queues);
+-
+- /* We got request to disable PCP priority on Port 2.
+- * Now, we need to compare Port 2 configuration with Port 1
+- * configuration.
+- */
+- ret = ksz_pread8(dev, KSZ_PORT_1, reg, &port1_data);
+- if (ret)
+- return ret;
+-
+- /* If Port 1 has any apptrust enabled, we can't disable multiple queues
+- * on the switch, so we can't disable PCP on Port 2.
+- */
+- if (port1_data & (KSZ8_PORT_802_1P_ENABLE | KSZ8_PORT_DIFFSERV_ENABLE)) {
+- dev_err(dev->dev, "Not possible to disable PCP on Port 2 if any apptrust is enabled on Port 1\n");
+- return -EINVAL;
+- }
+-
+- /* Now we need to ensure that default priority on Port 1 is set to 0
+- * otherwise we can't disable multiqueue support on the switch.
+- */
+- ret = ksz_port_get_default_prio(ds, KSZ_PORT_1);
+- if (ret < 0) {
+- return ret;
+- } else if (ret) {
+- dev_err(dev->dev, "Not possible to disable PCP on Port 2 if non zero default priority is set on Port 1\n");
+- return -EINVAL;
+- }
+-
+- /* Port 1 has no apptrust or default priority set and we got request to
+- * disable PCP on Port 2. We can disable multiqueue support to disable
+- * PCP on Port 2.
+- */
+- return ksz8_all_queues_split(dev, 1);
+-}
+-
+-/**
+- * ksz88x3_port_apptrust_quirk - Quirk for apptrust configuration on KSZ88x3
+- * devices
+- * @dev: Pointer to the KSZ switch device structure
+- * @port: Port number for which to set the apptrust selectors
+- * @reg: Register address for the apptrust configuration
+- * @data: Data to set for the apptrust configuration
+- *
+- * This function implements a quirk for apptrust configuration on KSZ88x3
+- * devices. It ensures that apptrust configuration on Port 1 and
+- * Port 2 is done in agreement with each other.
+- *
+- * Return: 0 on success, or a negative error code on failure
+- */
+-static int ksz88x3_port_apptrust_quirk(struct ksz_device *dev, int port,
+- int reg, u8 data)
+-{
+- if (port == KSZ_PORT_1)
+- return ksz88x3_port1_apptrust_quirk(dev, port, reg, data);
+- else if (port == KSZ_PORT_2)
+- return ksz88x3_port2_apptrust_quirk(dev, port, reg, data);
+-
+- return 0;
+-}
+-
+ /**
+ * ksz_port_set_apptrust - Sets the apptrust selectors for a port on a KSZ
+ * switch
+@@ -707,12 +514,6 @@ int ksz_port_set_apptrust(struct dsa_switch *ds, int port,
+ }
+ }
+
+- if (ksz_is_ksz88x3(dev)) {
+- ret = ksz88x3_port_apptrust_quirk(dev, port, reg, data);
+- if (ret)
+- return ret;
+- }
+-
+ return ksz_prmw8(dev, port, reg, mask, data);
+ }
+
+@@ -799,21 +600,5 @@ int ksz_dcb_init_port(struct ksz_device *dev, int port)
+ */
+ int ksz_dcb_init(struct ksz_device *dev)
+ {
+- int ret;
+-
+- ret = ksz_init_global_dscp_map(dev);
+- if (ret)
+- return ret;
+-
+- /* Enable 802.1p priority control on Port 2 during switch initialization.
+- * This setup is critical for the apptrust functionality on Port 1, which
+- * relies on the priority settings of Port 2. Note: Port 1 is naturally
+- * configured before Port 2, necessitating this configuration order.
+- */
+- if (ksz_is_ksz88x3(dev))
+- return ksz_prmw8(dev, KSZ_PORT_2, KSZ8_REG_PORT_1_CTRL_0,
+- KSZ8_PORT_802_1P_ENABLE,
+- KSZ8_PORT_802_1P_ENABLE);
+-
+- return 0;
++ return ksz_init_global_dscp_map(dev);
+ }
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
+index 5db96ca52505ab..4a9fbfa8db41a5 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.c
++++ b/drivers/net/dsa/mv88e6xxx/chip.c
+@@ -5145,6 +5145,7 @@ static const struct mv88e6xxx_ops mv88e6320_ops = {
+ .port_set_rgmii_delay = mv88e6320_port_set_rgmii_delay,
+ .port_set_speed_duplex = mv88e6185_port_set_speed_duplex,
+ .port_tag_remap = mv88e6095_port_tag_remap,
++ .port_set_policy = mv88e6352_port_set_policy,
+ .port_set_frame_mode = mv88e6351_port_set_frame_mode,
+ .port_set_ucast_flood = mv88e6352_port_set_ucast_flood,
+ .port_set_mcast_flood = mv88e6352_port_set_mcast_flood,
+@@ -5169,8 +5170,10 @@ static const struct mv88e6xxx_ops mv88e6320_ops = {
+ .hardware_reset_pre = mv88e6xxx_g2_eeprom_wait,
+ .hardware_reset_post = mv88e6xxx_g2_eeprom_wait,
+ .reset = mv88e6352_g1_reset,
+- .vtu_getnext = mv88e6185_g1_vtu_getnext,
+- .vtu_loadpurge = mv88e6185_g1_vtu_loadpurge,
++ .vtu_getnext = mv88e6352_g1_vtu_getnext,
++ .vtu_loadpurge = mv88e6352_g1_vtu_loadpurge,
++ .stu_getnext = mv88e6352_g1_stu_getnext,
++ .stu_loadpurge = mv88e6352_g1_stu_loadpurge,
+ .gpio_ops = &mv88e6352_gpio_ops,
+ .avb_ops = &mv88e6352_avb_ops,
+ .ptp_ops = &mv88e6352_ptp_ops,
+@@ -5194,6 +5197,7 @@ static const struct mv88e6xxx_ops mv88e6321_ops = {
+ .port_set_rgmii_delay = mv88e6320_port_set_rgmii_delay,
+ .port_set_speed_duplex = mv88e6185_port_set_speed_duplex,
+ .port_tag_remap = mv88e6095_port_tag_remap,
++ .port_set_policy = mv88e6352_port_set_policy,
+ .port_set_frame_mode = mv88e6351_port_set_frame_mode,
+ .port_set_ucast_flood = mv88e6352_port_set_ucast_flood,
+ .port_set_mcast_flood = mv88e6352_port_set_mcast_flood,
+@@ -5217,8 +5221,10 @@ static const struct mv88e6xxx_ops mv88e6321_ops = {
+ .hardware_reset_pre = mv88e6xxx_g2_eeprom_wait,
+ .hardware_reset_post = mv88e6xxx_g2_eeprom_wait,
+ .reset = mv88e6352_g1_reset,
+- .vtu_getnext = mv88e6185_g1_vtu_getnext,
+- .vtu_loadpurge = mv88e6185_g1_vtu_loadpurge,
++ .vtu_getnext = mv88e6352_g1_vtu_getnext,
++ .vtu_loadpurge = mv88e6352_g1_vtu_loadpurge,
++ .stu_getnext = mv88e6352_g1_stu_getnext,
++ .stu_loadpurge = mv88e6352_g1_stu_loadpurge,
+ .gpio_ops = &mv88e6352_gpio_ops,
+ .avb_ops = &mv88e6352_avb_ops,
+ .ptp_ops = &mv88e6352_ptp_ops,
+@@ -5818,7 +5824,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
+ .global1_addr = 0x1b,
+ .global2_addr = 0x1c,
+ .age_time_coeff = 3750,
+- .atu_move_port_mask = 0x1f,
++ .atu_move_port_mask = 0xf,
+ .g1_irqs = 9,
+ .g2_irqs = 10,
+ .stats_type = STATS_TYPE_BANK0 | STATS_TYPE_BANK1,
+@@ -6239,6 +6245,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
+ .num_internal_phys = 5,
+ .num_gpio = 15,
+ .max_vid = 4095,
++ .max_sid = 63,
+ .port_base_addr = 0x10,
+ .phy_base_addr = 0x0,
+ .global1_addr = 0x1b,
+@@ -6265,6 +6272,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
+ .num_internal_phys = 5,
+ .num_gpio = 15,
+ .max_vid = 4095,
++ .max_sid = 63,
+ .port_base_addr = 0x10,
+ .phy_base_addr = 0x0,
+ .global1_addr = 0x1b,
+@@ -6274,6 +6282,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
+ .g2_irqs = 10,
+ .stats_type = STATS_TYPE_BANK0 | STATS_TYPE_BANK1,
+ .atu_move_port_mask = 0xf,
++ .pvt = true,
+ .multi_chip = true,
+ .edsa_support = MV88E6XXX_EDSA_SUPPORTED,
+ .ptp_support = true,
+@@ -6296,7 +6305,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
+ .global1_addr = 0x1b,
+ .global2_addr = 0x1c,
+ .age_time_coeff = 3750,
+- .atu_move_port_mask = 0x1f,
++ .atu_move_port_mask = 0xf,
+ .g1_irqs = 9,
+ .g2_irqs = 10,
+ .stats_type = STATS_TYPE_BANK0 | STATS_TYPE_BANK1,
+@@ -7322,13 +7331,13 @@ static int mv88e6xxx_probe(struct mdio_device *mdiodev)
+ err = mv88e6xxx_switch_reset(chip);
+ mv88e6xxx_reg_unlock(chip);
+ if (err)
+- goto out;
++ goto out_phy;
+
+ if (np) {
+ chip->irq = of_irq_get(np, 0);
+ if (chip->irq == -EPROBE_DEFER) {
+ err = chip->irq;
+- goto out;
++ goto out_phy;
+ }
+ }
+
+@@ -7347,7 +7356,7 @@ static int mv88e6xxx_probe(struct mdio_device *mdiodev)
+ mv88e6xxx_reg_unlock(chip);
+
+ if (err)
+- goto out;
++ goto out_phy;
+
+ if (chip->info->g2_irqs > 0) {
+ err = mv88e6xxx_g2_irq_setup(chip);
+@@ -7381,6 +7390,8 @@ static int mv88e6xxx_probe(struct mdio_device *mdiodev)
+ mv88e6xxx_g1_irq_free(chip);
+ else
+ mv88e6xxx_irq_poll_free(chip);
++out_phy:
++ mv88e6xxx_phy_destroy(chip);
+ out:
+ if (pdata)
+ dev_put(pdata->netdev);
+@@ -7403,7 +7414,6 @@ static void mv88e6xxx_remove(struct mdio_device *mdiodev)
+ mv88e6xxx_ptp_free(chip);
+ }
+
+- mv88e6xxx_phy_destroy(chip);
+ mv88e6xxx_unregister_switch(chip);
+
+ mv88e6xxx_g1_vtu_prob_irq_free(chip);
+@@ -7416,6 +7426,8 @@ static void mv88e6xxx_remove(struct mdio_device *mdiodev)
+ mv88e6xxx_g1_irq_free(chip);
+ else
+ mv88e6xxx_irq_poll_free(chip);
++
++ mv88e6xxx_phy_destroy(chip);
+ }
+
+ static void mv88e6xxx_shutdown(struct mdio_device *mdiodev)
+diff --git a/drivers/net/dsa/mv88e6xxx/phy.c b/drivers/net/dsa/mv88e6xxx/phy.c
+index 8bb88b3d900db3..ee9e5d7e527709 100644
+--- a/drivers/net/dsa/mv88e6xxx/phy.c
++++ b/drivers/net/dsa/mv88e6xxx/phy.c
+@@ -229,7 +229,10 @@ static void mv88e6xxx_phy_ppu_state_init(struct mv88e6xxx_chip *chip)
+
+ static void mv88e6xxx_phy_ppu_state_destroy(struct mv88e6xxx_chip *chip)
+ {
++ mutex_lock(&chip->ppu_mutex);
+ del_timer_sync(&chip->ppu_timer);
++ cancel_work_sync(&chip->ppu_work);
++ mutex_unlock(&chip->ppu_mutex);
+ }
+
+ int mv88e6185_phy_ppu_read(struct mv88e6xxx_chip *chip, struct mii_bus *bus,
+diff --git a/drivers/net/dsa/sja1105/sja1105_ethtool.c b/drivers/net/dsa/sja1105/sja1105_ethtool.c
+index 2ea64b1d026d73..84d7d3f66bd037 100644
+--- a/drivers/net/dsa/sja1105/sja1105_ethtool.c
++++ b/drivers/net/dsa/sja1105/sja1105_ethtool.c
+@@ -571,6 +571,9 @@ void sja1105_get_ethtool_stats(struct dsa_switch *ds, int port, u64 *data)
+ max_ctr = __MAX_SJA1105PQRS_PORT_COUNTER;
+
+ for (i = 0; i < max_ctr; i++) {
++ if (!strlen(sja1105_port_counters[i].name))
++ continue;
++
+ rc = sja1105_port_counter_read(priv, port, i, &data[k++]);
+ if (rc) {
+ dev_err(ds->dev,
+@@ -596,8 +599,12 @@ void sja1105_get_strings(struct dsa_switch *ds, int port,
+ else
+ max_ctr = __MAX_SJA1105PQRS_PORT_COUNTER;
+
+- for (i = 0; i < max_ctr; i++)
++ for (i = 0; i < max_ctr; i++) {
++ if (!strlen(sja1105_port_counters[i].name))
++ continue;
++
+ ethtool_puts(&data, sja1105_port_counters[i].name);
++ }
+ }
+
+ int sja1105_get_sset_count(struct dsa_switch *ds, int port, int sset)
+diff --git a/drivers/net/dsa/sja1105/sja1105_ptp.c b/drivers/net/dsa/sja1105/sja1105_ptp.c
+index a1f4ca6ad888f1..08b45fdd1d2482 100644
+--- a/drivers/net/dsa/sja1105/sja1105_ptp.c
++++ b/drivers/net/dsa/sja1105/sja1105_ptp.c
+@@ -61,17 +61,21 @@ enum sja1105_ptp_clk_mode {
+ int sja1105_hwtstamp_set(struct dsa_switch *ds, int port, struct ifreq *ifr)
+ {
+ struct sja1105_private *priv = ds->priv;
++ unsigned long hwts_tx_en, hwts_rx_en;
+ struct hwtstamp_config config;
+
+ if (copy_from_user(&config, ifr->ifr_data, sizeof(config)))
+ return -EFAULT;
+
++ hwts_tx_en = priv->hwts_tx_en;
++ hwts_rx_en = priv->hwts_rx_en;
++
+ switch (config.tx_type) {
+ case HWTSTAMP_TX_OFF:
+- priv->hwts_tx_en &= ~BIT(port);
++ hwts_tx_en &= ~BIT(port);
+ break;
+ case HWTSTAMP_TX_ON:
+- priv->hwts_tx_en |= BIT(port);
++ hwts_tx_en |= BIT(port);
+ break;
+ default:
+ return -ERANGE;
+@@ -79,15 +83,21 @@ int sja1105_hwtstamp_set(struct dsa_switch *ds, int port, struct ifreq *ifr)
+
+ switch (config.rx_filter) {
+ case HWTSTAMP_FILTER_NONE:
+- priv->hwts_rx_en &= ~BIT(port);
++ hwts_rx_en &= ~BIT(port);
+ break;
+- default:
+- priv->hwts_rx_en |= BIT(port);
++ case HWTSTAMP_FILTER_PTP_V2_L2_EVENT:
++ hwts_rx_en |= BIT(port);
+ break;
++ default:
++ return -ERANGE;
+ }
+
+ if (copy_to_user(ifr->ifr_data, &config, sizeof(config)))
+ return -EFAULT;
++
++ priv->hwts_tx_en = hwts_tx_en;
++ priv->hwts_rx_en = hwts_rx_en;
++
+ return 0;
+ }
+
+diff --git a/drivers/net/dsa/sja1105/sja1105_static_config.c b/drivers/net/dsa/sja1105/sja1105_static_config.c
+index 3d790f8c6f4dab..ffece8a400a668 100644
+--- a/drivers/net/dsa/sja1105/sja1105_static_config.c
++++ b/drivers/net/dsa/sja1105/sja1105_static_config.c
+@@ -1917,8 +1917,10 @@ int sja1105_table_delete_entry(struct sja1105_table *table, int i)
+ if (i > table->entry_count)
+ return -ERANGE;
+
+- memmove(entries + i * entry_size, entries + (i + 1) * entry_size,
+- (table->entry_count - i) * entry_size);
++ if (i + 1 < table->entry_count) {
++ memmove(entries + i * entry_size, entries + (i + 1) * entry_size,
++ (table->entry_count - i - 1) * entry_size);
++ }
+
+ table->entry_count--;
+
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index 55f553debd3b29..2cd79b59cf0022 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -485,6 +485,17 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
+ txr = &bp->tx_ring[bp->tx_ring_map[i]];
+ prod = txr->tx_prod;
+
++#if (MAX_SKB_FRAGS > TX_MAX_FRAGS)
++ if (skb_shinfo(skb)->nr_frags > TX_MAX_FRAGS) {
++ netdev_warn_once(dev, "SKB has too many (%d) fragments, max supported is %d. SKB will be linearized.\n",
++ skb_shinfo(skb)->nr_frags, TX_MAX_FRAGS);
++ if (skb_linearize(skb)) {
++ dev_kfree_skb_any(skb);
++ dev_core_stats_tx_dropped_inc(dev);
++ return NETDEV_TX_OK;
++ }
++ }
++#endif
+ free_size = bnxt_tx_avail(bp, txr);
+ if (unlikely(free_size < skb_shinfo(skb)->nr_frags + 2)) {
+ /* We must have raced with NAPI cleanup */
+@@ -564,7 +575,7 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
+ TX_BD_FLAGS_LHINT_512_AND_SMALLER |
+ TX_BD_FLAGS_COAL_NOW |
+ TX_BD_FLAGS_PACKET_END |
+- (2 << TX_BD_FLAGS_BD_CNT_SHIFT));
++ TX_BD_CNT(2));
+
+ if (skb->ip_summed == CHECKSUM_PARTIAL)
+ tx_push1->tx_bd_hsize_lflags =
+@@ -639,7 +650,7 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
+
+ dma_unmap_addr_set(tx_buf, mapping, mapping);
+ flags = (len << TX_BD_LEN_SHIFT) | TX_BD_TYPE_LONG_TX_BD |
+- ((last_frag + 2) << TX_BD_FLAGS_BD_CNT_SHIFT);
++ TX_BD_CNT(last_frag + 2);
+
+ txbd->tx_bd_haddr = cpu_to_le64(mapping);
+ txbd->tx_bd_opaque = SET_TX_OPAQUE(bp, txr, prod, 2 + last_frag);
+@@ -15651,7 +15662,7 @@ static int bnxt_queue_start(struct net_device *dev, void *qmem, int idx)
+ cpr = &rxr->bnapi->cp_ring;
+ cpr->sw_stats->rx.rx_resets++;
+
+- for (i = 0; i <= bp->nr_vnics; i++) {
++ for (i = 0; i < bp->nr_vnics; i++) {
+ vnic = &bp->vnic_info[i];
+
+ rc = bnxt_hwrm_vnic_set_rss_p5(bp, vnic, true);
+@@ -15679,7 +15690,7 @@ static int bnxt_queue_stop(struct net_device *dev, void *qmem, int idx)
+ struct bnxt_vnic_info *vnic;
+ int i;
+
+- for (i = 0; i <= bp->nr_vnics; i++) {
++ for (i = 0; i < bp->nr_vnics; i++) {
+ vnic = &bp->vnic_info[i];
+ vnic->mru = 0;
+ bnxt_hwrm_vnic_update(bp, vnic,
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+index 2373f423a523ec..d621fb621f30c7 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+@@ -82,6 +82,12 @@ struct tx_bd {
+ #define TX_OPAQUE_PROD(bp, opq) ((TX_OPAQUE_IDX(opq) + TX_OPAQUE_BDS(opq)) &\
+ (bp)->tx_ring_mask)
+
++#define TX_BD_CNT(n) (((n) << TX_BD_FLAGS_BD_CNT_SHIFT) & TX_BD_FLAGS_BD_CNT)
++
++#define TX_MAX_BD_CNT 32
++
++#define TX_MAX_FRAGS (TX_MAX_BD_CNT - 2)
++
+ struct tx_bd_ext {
+ __le32 tx_bd_hsize_lflags;
+ #define TX_BD_FLAGS_TCP_UDP_CHKSUM (1 << 0)
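The new TX_BD_CNT() macro shifts the descriptor count into its field and masks it, so an oversized count cannot corrupt neighbouring flag bits. A small self-contained sketch of the same masked-shift idiom; the shift and mask below are placeholders rather than the real bnxt bit layout:

#include <assert.h>
#include <stdint.h>

#define BD_CNT_SHIFT    24
#define BD_CNT_MASK     (0x1fu << BD_CNT_SHIFT)
#define BD_CNT(n)       (((uint32_t)(n) << BD_CNT_SHIFT) & BD_CNT_MASK)

int main(void)
{
    uint32_t flags = 0x00ffffffu;    /* unrelated flag bits below the field */

    flags |= BD_CNT(2);
    assert((flags & BD_CNT_MASK) >> BD_CNT_SHIFT == 2);

    /* A value wider than the field is truncated by the mask instead of
     * spilling into the bits above it. */
    assert((BD_CNT(0x40) & ~BD_CNT_MASK) == 0);
    return 0;
}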
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
+index 299822cacca48e..d71bad3cfd6bd6 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
+@@ -48,8 +48,7 @@ struct bnxt_sw_tx_bd *bnxt_xmit_bd(struct bnxt *bp,
+ tx_buf->page = virt_to_head_page(xdp->data);
+
+ txbd = &txr->tx_desc_ring[TX_RING(bp, prod)][TX_IDX(prod)];
+- flags = (len << TX_BD_LEN_SHIFT) |
+- ((num_frags + 1) << TX_BD_FLAGS_BD_CNT_SHIFT) |
++ flags = (len << TX_BD_LEN_SHIFT) | TX_BD_CNT(num_frags + 1) |
+ bnxt_lhint_arr[len >> 9];
+ txbd->tx_bd_len_flags_type = cpu_to_le32(flags);
+ txbd->tx_bd_opaque = SET_TX_OPAQUE(bp, txr, prod, 1 + num_frags);
+diff --git a/drivers/net/ethernet/ibm/ibmveth.c b/drivers/net/ethernet/ibm/ibmveth.c
+index b619a3ec245b24..04192190bebabb 100644
+--- a/drivers/net/ethernet/ibm/ibmveth.c
++++ b/drivers/net/ethernet/ibm/ibmveth.c
+@@ -1802,18 +1802,22 @@ static ssize_t veth_pool_store(struct kobject *kobj, struct attribute *attr,
+ long value = simple_strtol(buf, NULL, 10);
+ long rc;
+
++ rtnl_lock();
++
+ if (attr == &veth_active_attr) {
+ if (value && !pool->active) {
+ if (netif_running(netdev)) {
+ if (ibmveth_alloc_buffer_pool(pool)) {
+ netdev_err(netdev,
+ "unable to alloc pool\n");
+- return -ENOMEM;
++ rc = -ENOMEM;
++ goto unlock_err;
+ }
+ pool->active = 1;
+ ibmveth_close(netdev);
+- if ((rc = ibmveth_open(netdev)))
+- return rc;
++ rc = ibmveth_open(netdev);
++ if (rc)
++ goto unlock_err;
+ } else {
+ pool->active = 1;
+ }
+@@ -1833,48 +1837,59 @@ static ssize_t veth_pool_store(struct kobject *kobj, struct attribute *attr,
+
+ if (i == IBMVETH_NUM_BUFF_POOLS) {
+ netdev_err(netdev, "no active pool >= MTU\n");
+- return -EPERM;
++ rc = -EPERM;
++ goto unlock_err;
+ }
+
+ if (netif_running(netdev)) {
+ ibmveth_close(netdev);
+ pool->active = 0;
+- if ((rc = ibmveth_open(netdev)))
+- return rc;
++ rc = ibmveth_open(netdev);
++ if (rc)
++ goto unlock_err;
+ }
+ pool->active = 0;
+ }
+ } else if (attr == &veth_num_attr) {
+ if (value <= 0 || value > IBMVETH_MAX_POOL_COUNT) {
+- return -EINVAL;
++ rc = -EINVAL;
++ goto unlock_err;
+ } else {
+ if (netif_running(netdev)) {
+ ibmveth_close(netdev);
+ pool->size = value;
+- if ((rc = ibmveth_open(netdev)))
+- return rc;
++ rc = ibmveth_open(netdev);
++ if (rc)
++ goto unlock_err;
+ } else {
+ pool->size = value;
+ }
+ }
+ } else if (attr == &veth_size_attr) {
+ if (value <= IBMVETH_BUFF_OH || value > IBMVETH_MAX_BUF_SIZE) {
+- return -EINVAL;
++ rc = -EINVAL;
++ goto unlock_err;
+ } else {
+ if (netif_running(netdev)) {
+ ibmveth_close(netdev);
+ pool->buff_size = value;
+- if ((rc = ibmveth_open(netdev)))
+- return rc;
++ rc = ibmveth_open(netdev);
++ if (rc)
++ goto unlock_err;
+ } else {
+ pool->buff_size = value;
+ }
+ }
+ }
++ rtnl_unlock();
+
+ /* kick the interrupt handler to allocate/deallocate pools */
+ ibmveth_interrupt(netdev->irq, netdev);
+ return count;
++
++unlock_err:
++ rtnl_unlock();
++ return rc;
+ }
+
+
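The ibmveth change wraps the whole sysfs store handler in rtnl_lock()/rtnl_unlock() and routes every early return through one unlock label. A sketch of that single-exit locking shape, with a pthread mutex standing in for the RTNL lock and the reconfiguration step reduced to a stub:

#include <errno.h>
#include <pthread.h>

static pthread_mutex_t cfg_lock = PTHREAD_MUTEX_INITIALIZER;

static int apply_setting(long value, int running)
{
    int rc = 0;

    pthread_mutex_lock(&cfg_lock);

    if (value <= 0) {
        rc = -EINVAL;
        goto unlock;                 /* never return with the lock held */
    }

    if (running) {
        rc = 0;                      /* pretend the re-open succeeded */
        if (rc)
            goto unlock;
    }

unlock:
    pthread_mutex_unlock(&cfg_lock);
    return rc;
}

int main(void)
{
    return apply_setting(64, 1);
}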
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
+index 0676fc547b6f47..480606d1245ea4 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.c
++++ b/drivers/net/ethernet/ibm/ibmvnic.c
+@@ -4829,6 +4829,18 @@ static void vnic_add_client_data(struct ibmvnic_adapter *adapter,
+ strscpy(vlcd->name, adapter->netdev->name, len);
+ }
+
++static void ibmvnic_print_hex_dump(struct net_device *dev, void *buf,
++ size_t len)
++{
++ unsigned char hex_str[16 * 3];
++
++ for (size_t i = 0; i < len; i += 16) {
++ hex_dump_to_buffer((unsigned char *)buf + i, len - i, 16, 8,
++ hex_str, sizeof(hex_str), false);
++ netdev_dbg(dev, "%s\n", hex_str);
++ }
++}
++
+ static int send_login(struct ibmvnic_adapter *adapter)
+ {
+ struct ibmvnic_login_rsp_buffer *login_rsp_buffer;
+@@ -4939,10 +4951,8 @@ static int send_login(struct ibmvnic_adapter *adapter)
+ vnic_add_client_data(adapter, vlcd);
+
+ netdev_dbg(adapter->netdev, "Login Buffer:\n");
+- for (i = 0; i < (adapter->login_buf_sz - 1) / 8 + 1; i++) {
+- netdev_dbg(adapter->netdev, "%016lx\n",
+- ((unsigned long *)(adapter->login_buf))[i]);
+- }
++ ibmvnic_print_hex_dump(adapter->netdev, adapter->login_buf,
++ adapter->login_buf_sz);
+
+ memset(&crq, 0, sizeof(crq));
+ crq.login.first = IBMVNIC_CRQ_CMD;
+@@ -5319,15 +5329,13 @@ static void handle_query_ip_offload_rsp(struct ibmvnic_adapter *adapter)
+ {
+ struct device *dev = &adapter->vdev->dev;
+ struct ibmvnic_query_ip_offload_buffer *buf = &adapter->ip_offload_buf;
+- int i;
+
+ dma_unmap_single(dev, adapter->ip_offload_tok,
+ sizeof(adapter->ip_offload_buf), DMA_FROM_DEVICE);
+
+ netdev_dbg(adapter->netdev, "Query IP Offload Buffer:\n");
+- for (i = 0; i < (sizeof(adapter->ip_offload_buf) - 1) / 8 + 1; i++)
+- netdev_dbg(adapter->netdev, "%016lx\n",
+- ((unsigned long *)(buf))[i]);
++ ibmvnic_print_hex_dump(adapter->netdev, buf,
++ sizeof(adapter->ip_offload_buf));
+
+ netdev_dbg(adapter->netdev, "ipv4_chksum = %d\n", buf->ipv4_chksum);
+ netdev_dbg(adapter->netdev, "ipv6_chksum = %d\n", buf->ipv6_chksum);
+@@ -5558,10 +5566,8 @@ static int handle_login_rsp(union ibmvnic_crq *login_rsp_crq,
+ netdev->mtu = adapter->req_mtu - ETH_HLEN;
+
+ netdev_dbg(adapter->netdev, "Login Response Buffer:\n");
+- for (i = 0; i < (adapter->login_rsp_buf_sz - 1) / 8 + 1; i++) {
+- netdev_dbg(adapter->netdev, "%016lx\n",
+- ((unsigned long *)(adapter->login_rsp_buf))[i]);
+- }
++ ibmvnic_print_hex_dump(netdev, adapter->login_rsp_buf,
++ adapter->login_rsp_buf_sz);
+
+ /* Sanity checks */
+ if (login->num_txcomp_subcrqs != login_rsp->num_txsubm_subcrqs ||
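The ibmvnic hunks replace ad-hoc 8-byte dumps of the login and offload buffers with a helper that prints 16 bytes per line, which also avoids reading past a buffer whose size is not a multiple of 8. A user-space sketch of such a helper, with printf standing in for netdev_dbg()/hex_dump_to_buffer():

#include <stdio.h>
#include <stddef.h>

static void print_hex_dump(const void *buf, size_t len)
{
    const unsigned char *p = buf;

    for (size_t i = 0; i < len; i += 16) {
        size_t chunk = len - i < 16 ? len - i : 16;   /* short last line */

        for (size_t j = 0; j < chunk; j++)
            printf("%02x ", p[i + j]);
        printf("\n");
    }
}

int main(void)
{
    unsigned char login_buf[37];

    for (size_t i = 0; i < sizeof(login_buf); i++)
        login_buf[i] = (unsigned char)i;

    print_hex_dump(login_buf, sizeof(login_buf));     /* 16 + 16 + 5 bytes */
    return 0;
}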
+diff --git a/drivers/net/ethernet/intel/e1000e/defines.h b/drivers/net/ethernet/intel/e1000e/defines.h
+index 5e2cfa73f8891c..8294a7c4f122c3 100644
+--- a/drivers/net/ethernet/intel/e1000e/defines.h
++++ b/drivers/net/ethernet/intel/e1000e/defines.h
+@@ -803,4 +803,7 @@
+ /* SerDes Control */
+ #define E1000_GEN_POLL_TIMEOUT 640
+
++#define E1000_FEXTNVM12_PHYPD_CTRL_MASK 0x00C00000
++#define E1000_FEXTNVM12_PHYPD_CTRL_P1 0x00800000
++
+ #endif /* _E1000_DEFINES_H_ */
+diff --git a/drivers/net/ethernet/intel/e1000e/ich8lan.c b/drivers/net/ethernet/intel/e1000e/ich8lan.c
+index 2f9655cf5dd9ee..364378133526a1 100644
+--- a/drivers/net/ethernet/intel/e1000e/ich8lan.c
++++ b/drivers/net/ethernet/intel/e1000e/ich8lan.c
+@@ -285,6 +285,45 @@ static void e1000_toggle_lanphypc_pch_lpt(struct e1000_hw *hw)
+ }
+ }
+
++/**
++ * e1000_reconfigure_k1_exit_timeout - reconfigure K1 exit timeout to
++ * align to MTP and later platform requirements.
++ * @hw: pointer to the HW structure
++ *
++ * Context: PHY semaphore must be held by caller.
++ * Return: 0 on success, negative on failure
++ */
++static s32 e1000_reconfigure_k1_exit_timeout(struct e1000_hw *hw)
++{
++ u16 phy_timeout;
++ u32 fextnvm12;
++ s32 ret_val;
++
++ if (hw->mac.type < e1000_pch_mtp)
++ return 0;
++
++ /* Change Kumeran K1 power down state from P0s to P1 */
++ fextnvm12 = er32(FEXTNVM12);
++ fextnvm12 &= ~E1000_FEXTNVM12_PHYPD_CTRL_MASK;
++ fextnvm12 |= E1000_FEXTNVM12_PHYPD_CTRL_P1;
++ ew32(FEXTNVM12, fextnvm12);
++
++	/* Wait for the interface to settle */
++ usleep_range(1000, 1100);
++
++ /* Change K1 exit timeout */
++ ret_val = e1e_rphy_locked(hw, I217_PHY_TIMEOUTS_REG,
++ &phy_timeout);
++ if (ret_val)
++ return ret_val;
++
++ phy_timeout &= ~I217_PHY_TIMEOUTS_K1_EXIT_TO_MASK;
++ phy_timeout |= 0xF00;
++
++ return e1e_wphy_locked(hw, I217_PHY_TIMEOUTS_REG,
++ phy_timeout);
++}
++
+ /**
+ * e1000_init_phy_workarounds_pchlan - PHY initialization workarounds
+ * @hw: pointer to the HW structure
+@@ -327,15 +366,22 @@ static s32 e1000_init_phy_workarounds_pchlan(struct e1000_hw *hw)
+ * LANPHYPC Value bit to force the interconnect to PCIe mode.
+ */
+ switch (hw->mac.type) {
++ case e1000_pch_mtp:
++ case e1000_pch_lnp:
++ case e1000_pch_ptp:
++ case e1000_pch_nvp:
++ /* At this point the PHY might be inaccessible so don't
++ * propagate the failure
++ */
++ if (e1000_reconfigure_k1_exit_timeout(hw))
++ e_dbg("Failed to reconfigure K1 exit timeout\n");
++
++ fallthrough;
+ case e1000_pch_lpt:
+ case e1000_pch_spt:
+ case e1000_pch_cnp:
+ case e1000_pch_tgp:
+ case e1000_pch_adp:
+- case e1000_pch_mtp:
+- case e1000_pch_lnp:
+- case e1000_pch_ptp:
+- case e1000_pch_nvp:
+ if (e1000_phy_is_accessible_pchlan(hw))
+ break;
+
+@@ -419,8 +465,20 @@ static s32 e1000_init_phy_workarounds_pchlan(struct e1000_hw *hw)
+ * the PHY is in.
+ */
+ ret_val = hw->phy.ops.check_reset_block(hw);
+- if (ret_val)
++ if (ret_val) {
+ e_err("ME blocked access to PHY after reset\n");
++ goto out;
++ }
++
++ if (hw->mac.type >= e1000_pch_mtp) {
++ ret_val = hw->phy.ops.acquire(hw);
++ if (ret_val) {
++ e_err("Failed to reconfigure K1 exit timeout\n");
++ goto out;
++ }
++ ret_val = e1000_reconfigure_k1_exit_timeout(hw);
++ hw->phy.ops.release(hw);
++ }
+ }
+
+ out:
+@@ -4888,6 +4946,18 @@ static s32 e1000_init_hw_ich8lan(struct e1000_hw *hw)
+ u16 i;
+
+ e1000_initialize_hw_bits_ich8lan(hw);
++ if (hw->mac.type >= e1000_pch_mtp) {
++ ret_val = hw->phy.ops.acquire(hw);
++ if (ret_val)
++ return ret_val;
++
++ ret_val = e1000_reconfigure_k1_exit_timeout(hw);
++ hw->phy.ops.release(hw);
++ if (ret_val) {
++ e_dbg("Error failed to reconfigure K1 exit timeout\n");
++ return ret_val;
++ }
++ }
+
+ /* Initialize identification LED */
+ ret_val = mac->ops.id_led_init(hw);
+diff --git a/drivers/net/ethernet/intel/e1000e/ich8lan.h b/drivers/net/ethernet/intel/e1000e/ich8lan.h
+index 2504b11c3169fa..5feb589a9b5ff2 100644
+--- a/drivers/net/ethernet/intel/e1000e/ich8lan.h
++++ b/drivers/net/ethernet/intel/e1000e/ich8lan.h
+@@ -219,6 +219,10 @@
+ #define I217_PLL_CLOCK_GATE_REG PHY_REG(772, 28)
+ #define I217_PLL_CLOCK_GATE_MASK 0x07FF
+
++/* PHY Timeouts */
++#define I217_PHY_TIMEOUTS_REG PHY_REG(770, 21)
++#define I217_PHY_TIMEOUTS_K1_EXIT_TO_MASK 0x0FC0
++
+ #define SW_FLAG_TIMEOUT 1000 /* SW Semaphore flag timeout in ms */
+
+ /* Inband Control */
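The e1000e patch reprograms the Kumeran power-down state and the K1 exit timeout by clearing each field with its mask before OR-ing in the new value. A sketch of that read-modify-write idiom against a fake register word; the constants mirror the defines added above but the register model is purely illustrative:

#include <stdint.h>

#define PHYPD_CTRL_MASK 0x00C00000u
#define PHYPD_CTRL_P1   0x00800000u

static uint32_t regs[1];             /* fake MMIO word */

static uint32_t reg_read(int idx)           { return regs[idx]; }
static void reg_write(int idx, uint32_t v)  { regs[idx] = v; }

static void set_power_down_state(void)
{
    uint32_t val = reg_read(0);

    val &= ~PHYPD_CTRL_MASK;         /* drop the old field contents */
    val |= PHYPD_CTRL_P1;            /* select the new state */
    reg_write(0, val);
}

int main(void)
{
    regs[0] = 0x00400001u;           /* old state plus an unrelated bit */
    set_power_down_state();
    return regs[0] == 0x00800001u ? 0 : 1;
}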
+diff --git a/drivers/net/ethernet/intel/ice/devlink/health.c b/drivers/net/ethernet/intel/ice/devlink/health.c
+index ea40f794125900..19c3d37aa768b3 100644
+--- a/drivers/net/ethernet/intel/ice/devlink/health.c
++++ b/drivers/net/ethernet/intel/ice/devlink/health.c
+@@ -25,10 +25,10 @@ struct ice_health_status {
+ * The below lookup requires to be sorted by code.
+ */
+
+-static const char *const ice_common_port_solutions =
++static const char ice_common_port_solutions[] =
+ "Check your cable connection. Change or replace the module or cable. Manually set speed and duplex.";
+-static const char *const ice_port_number_label = "Port Number";
+-static const char *const ice_update_nvm_solution = "Update to the latest NVM image.";
++static const char ice_port_number_label[] = "Port Number";
++static const char ice_update_nvm_solution[] = "Update to the latest NVM image.";
+
+ static const struct ice_health_status ice_health_status_lookup[] = {
+ {ICE_AQC_HEALTH_STATUS_ERR_UNKNOWN_MOD_STRICT, "An unsupported module was detected.",
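The ice health strings are switched from "static const char *const" to "static const char x[]". Both are immutable, but the array form is a single object holding the characters, while the pointer form adds a separate pointer object that must be stored (and relocated in PIC builds) on top of the string data. A tiny program showing the difference:

#include <stdio.h>

static const char solution_array[] = "Update to the latest NVM image.";
static const char *const solution_ptr = "Update to the latest NVM image.";

int main(void)
{
    printf("array object: %zu bytes\n", sizeof(solution_array));  /* 32 */
    printf("pointer object: %zu bytes\n", sizeof(solution_ptr));  /* 8 on LP64 */
    return 0;
}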
+diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c
+index 7a2a2e8da8fabd..1e801300310e9f 100644
+--- a/drivers/net/ethernet/intel/ice/ice_common.c
++++ b/drivers/net/ethernet/intel/ice/ice_common.c
+@@ -2271,7 +2271,8 @@ ice_parse_common_caps(struct ice_hw *hw, struct ice_hw_common_caps *caps,
+ caps->nvm_unified_update);
+ break;
+ case ICE_AQC_CAPS_RDMA:
+- caps->rdma = (number == 1);
++ if (IS_ENABLED(CONFIG_INFINIBAND_IRDMA))
++ caps->rdma = (number == 1);
+ ice_debug(hw, ICE_DBG_INIT, "%s: rdma = %d\n", prefix, caps->rdma);
+ break;
+ case ICE_AQC_CAPS_MAX_MTU:
+diff --git a/drivers/net/ethernet/intel/ice/ice_ptp.c b/drivers/net/ethernet/intel/ice/ice_ptp.c
+index e26320ce52ca17..a99e0fbd0b8b55 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ptp.c
++++ b/drivers/net/ethernet/intel/ice/ice_ptp.c
+@@ -1783,6 +1783,7 @@ static int ice_ptp_write_perout(struct ice_hw *hw, unsigned int chan,
+ 8 + chan + (tmr_idx * 4));
+
+ wr32(hw, GLGEN_GPIO_CTL(gpio_pin), val);
++ ice_flush(hw);
+
+ return 0;
+ }
+@@ -1843,9 +1844,10 @@ static int ice_ptp_cfg_perout(struct ice_pf *pf, struct ptp_perout_request *rq,
+ div64_u64_rem(start, period, &phase);
+
+ /* If we have only phase or start time is in the past, start the timer
+- * at the next multiple of period, maintaining phase.
++ * at the next multiple of period, maintaining phase at least 0.5 second
++ * from now, so we have time to write it to HW.
+ */
+- clk = ice_ptp_read_src_clk_reg(pf, NULL);
++ clk = ice_ptp_read_src_clk_reg(pf, NULL) + NSEC_PER_MSEC * 500;
+ if (rq->flags & PTP_PEROUT_PHASE || start <= clk - prop_delay_ns)
+ start = div64_u64(clk + period - 1, period) * period + phase;
+
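The ice PTP change starts the periodic output at the next period boundary that is also at least 0.5 s in the future, so the value can be written to hardware before the edge fires. A self-contained sketch of that rounding (not the exact driver formula), assuming phase < period and a 64-bit nanosecond clock:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

static uint64_t next_start(uint64_t now_ns, uint64_t period_ns,
                           uint64_t phase_ns)
{
    uint64_t earliest = now_ns + 500000000ull;    /* now + 0.5 s */

    /* Smallest k with k * period + phase >= earliest. */
    return (earliest - phase_ns + period_ns - 1) / period_ns * period_ns
           + phase_ns;
}

int main(void)
{
    uint64_t start = next_start(1000000000ull,    /* now    = 1 s    */
                                250000000ull,     /* period = 250 ms */
                                10000000ull);     /* phase  = 10 ms  */

    printf("%" PRIu64 "\n", start);               /* 1510000000 */
    return 0;
}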
+diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl.c b/drivers/net/ethernet/intel/ice/ice_virtchnl.c
+index ff4ad788d96ac5..1af51469f070b6 100644
+--- a/drivers/net/ethernet/intel/ice/ice_virtchnl.c
++++ b/drivers/net/ethernet/intel/ice/ice_virtchnl.c
+@@ -562,7 +562,7 @@ bool ice_vc_isvalid_vsi_id(struct ice_vf *vf, u16 vsi_id)
+ *
+ * check for the valid queue ID
+ */
+-static bool ice_vc_isvalid_q_id(struct ice_vsi *vsi, u8 qid)
++static bool ice_vc_isvalid_q_id(struct ice_vsi *vsi, u16 qid)
+ {
+ /* allocated Tx and Rx queues should be always equal for VF VSI */
+ return qid < vsi->alloc_txq;
+@@ -1862,15 +1862,33 @@ static int ice_vc_cfg_q_bw(struct ice_vf *vf, u8 *msg)
+
+ for (i = 0; i < qbw->num_queues; i++) {
+ if (qbw->cfg[i].shaper.peak != 0 && vf->max_tx_rate != 0 &&
+- qbw->cfg[i].shaper.peak > vf->max_tx_rate)
++ qbw->cfg[i].shaper.peak > vf->max_tx_rate) {
+ dev_warn(ice_pf_to_dev(vf->pf), "The maximum queue %d rate limit configuration may not take effect because the maximum TX rate for VF-%d is %d\n",
+ qbw->cfg[i].queue_id, vf->vf_id,
+ vf->max_tx_rate);
++ v_ret = VIRTCHNL_STATUS_ERR_PARAM;
++ goto err;
++ }
+ if (qbw->cfg[i].shaper.committed != 0 && vf->min_tx_rate != 0 &&
+- qbw->cfg[i].shaper.committed < vf->min_tx_rate)
++ qbw->cfg[i].shaper.committed < vf->min_tx_rate) {
+ dev_warn(ice_pf_to_dev(vf->pf), "The minimum queue %d rate limit configuration may not take effect because the minimum TX rate for VF-%d is %d\n",
+ qbw->cfg[i].queue_id, vf->vf_id,
+- vf->max_tx_rate);
++ vf->min_tx_rate);
++ v_ret = VIRTCHNL_STATUS_ERR_PARAM;
++ goto err;
++ }
++ if (qbw->cfg[i].queue_id > vf->num_vf_qs) {
++ dev_warn(ice_pf_to_dev(vf->pf), "VF-%d trying to configure invalid queue_id\n",
++ vf->vf_id);
++ v_ret = VIRTCHNL_STATUS_ERR_PARAM;
++ goto err;
++ }
++ if (qbw->cfg[i].tc >= ICE_MAX_TRAFFIC_CLASS) {
++ dev_warn(ice_pf_to_dev(vf->pf), "VF-%d trying to configure a traffic class higher than allowed\n",
++ vf->vf_id);
++ v_ret = VIRTCHNL_STATUS_ERR_PARAM;
++ goto err;
++ }
+ }
+
+ for (i = 0; i < qbw->num_queues; i++) {
+@@ -1900,13 +1918,21 @@ static int ice_vc_cfg_q_bw(struct ice_vf *vf, u8 *msg)
+ */
+ static int ice_vc_cfg_q_quanta(struct ice_vf *vf, u8 *msg)
+ {
++ u16 quanta_prof_id, quanta_size, start_qid, num_queues, end_qid, i;
+ enum virtchnl_status_code v_ret = VIRTCHNL_STATUS_SUCCESS;
+- u16 quanta_prof_id, quanta_size, start_qid, end_qid, i;
+ struct virtchnl_quanta_cfg *qquanta =
+ (struct virtchnl_quanta_cfg *)msg;
+ struct ice_vsi *vsi;
+ int ret;
+
++ start_qid = qquanta->queue_select.start_queue_id;
++ num_queues = qquanta->queue_select.num_queues;
++
++ if (check_add_overflow(start_qid, num_queues, &end_qid)) {
++ v_ret = VIRTCHNL_STATUS_ERR_PARAM;
++ goto err;
++ }
++
+ if (!test_bit(ICE_VF_STATE_ACTIVE, vf->vf_states)) {
+ v_ret = VIRTCHNL_STATUS_ERR_PARAM;
+ goto err;
+@@ -1918,8 +1944,6 @@ static int ice_vc_cfg_q_quanta(struct ice_vf *vf, u8 *msg)
+ goto err;
+ }
+
+- end_qid = qquanta->queue_select.start_queue_id +
+- qquanta->queue_select.num_queues;
+ if (end_qid > ICE_MAX_RSS_QS_PER_VF ||
+ end_qid > min_t(u16, vsi->alloc_txq, vsi->alloc_rxq)) {
+ dev_err(ice_pf_to_dev(vf->pf), "VF-%d trying to configure more than allocated number of queues: %d\n",
+@@ -1948,7 +1972,6 @@ static int ice_vc_cfg_q_quanta(struct ice_vf *vf, u8 *msg)
+ goto err;
+ }
+
+- start_qid = qquanta->queue_select.start_queue_id;
+ for (i = start_qid; i < end_qid; i++)
+ vsi->tx_rings[i]->quanta_prof_id = quanta_prof_id;
+
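The virtchnl quanta handler now validates start_queue_id + num_queues with check_add_overflow() before the sum is ever used. A user-space equivalent using __builtin_add_overflow(), the GCC/Clang builtin that the kernel helper wraps; MAX_QUEUES is a placeholder limit:

#include <errno.h>
#include <stdint.h>

#define MAX_QUEUES 256u

static int validate_range(uint16_t start, uint16_t count)
{
    uint16_t end;

    if (__builtin_add_overflow(start, count, &end))
        return -EINVAL;              /* 16-bit wrap-around */

    if (end > MAX_QUEUES)
        return -EINVAL;

    return 0;
}

int main(void)
{
    /* 65535 + 1 wraps a u16 and must be rejected. */
    return validate_range(65535, 1) == -EINVAL &&
           validate_range(0, 128) == 0 ? 0 : 1;
}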
+diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.c
+index 14e3f0f89c78d6..9be4bd717512d0 100644
+--- a/drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.c
++++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.c
+@@ -832,21 +832,27 @@ ice_vc_fdir_parse_raw(struct ice_vf *vf,
+ struct virtchnl_proto_hdrs *proto,
+ struct virtchnl_fdir_fltr_conf *conf)
+ {
+- u8 *pkt_buf, *msk_buf __free(kfree);
++ u8 *pkt_buf, *msk_buf __free(kfree) = NULL;
+ struct ice_parser_result rslt;
+ struct ice_pf *pf = vf->pf;
++ u16 pkt_len, udp_port = 0;
+ struct ice_parser *psr;
+ int status = -ENOMEM;
+ struct ice_hw *hw;
+- u16 udp_port = 0;
+
+- pkt_buf = kzalloc(proto->raw.pkt_len, GFP_KERNEL);
+- msk_buf = kzalloc(proto->raw.pkt_len, GFP_KERNEL);
++ pkt_len = proto->raw.pkt_len;
++
++ if (!pkt_len || pkt_len > VIRTCHNL_MAX_SIZE_RAW_PACKET)
++ return -EINVAL;
++
++ pkt_buf = kzalloc(pkt_len, GFP_KERNEL);
++ msk_buf = kzalloc(pkt_len, GFP_KERNEL);
++
+ if (!pkt_buf || !msk_buf)
+ goto err_mem_alloc;
+
+- memcpy(pkt_buf, proto->raw.spec, proto->raw.pkt_len);
+- memcpy(msk_buf, proto->raw.mask, proto->raw.pkt_len);
++ memcpy(pkt_buf, proto->raw.spec, pkt_len);
++ memcpy(msk_buf, proto->raw.mask, pkt_len);
+
+ hw = &pf->hw;
+
+@@ -862,7 +868,7 @@ ice_vc_fdir_parse_raw(struct ice_vf *vf,
+ if (ice_get_open_tunnel_port(hw, &udp_port, TNL_VXLAN))
+ ice_parser_vxlan_tunnel_set(psr, udp_port, true);
+
+- status = ice_parser_run(psr, pkt_buf, proto->raw.pkt_len, &rslt);
++ status = ice_parser_run(psr, pkt_buf, pkt_len, &rslt);
+ if (status)
+ goto err_parser_destroy;
+
+@@ -876,7 +882,7 @@ ice_vc_fdir_parse_raw(struct ice_vf *vf,
+ }
+
+ status = ice_parser_profile_init(&rslt, pkt_buf, msk_buf,
+- proto->raw.pkt_len, ICE_BLK_FD,
++ pkt_len, ICE_BLK_FD,
+ conf->prof);
+ if (status)
+ goto err_parser_profile_init;
+@@ -885,7 +891,7 @@ ice_vc_fdir_parse_raw(struct ice_vf *vf,
+ ice_parser_profile_dump(hw, conf->prof);
+
+ /* Store raw flow info into @conf */
+- conf->pkt_len = proto->raw.pkt_len;
++ conf->pkt_len = pkt_len;
+ conf->pkt_buf = pkt_buf;
+ conf->parser_ena = true;
+
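The flow-director fix initialises the auto-freed mask buffer and bounds-checks the guest-supplied pkt_len before it drives two allocations and two memcpy()s. A sketch of that order of operations; MAX_RAW_PACKET and parse_raw() are illustrative names, not the virtchnl API:

#include <errno.h>
#include <stdlib.h>
#include <string.h>

#define MAX_RAW_PACKET 1600u

static int parse_raw(const void *spec, const void *mask, size_t pkt_len)
{
    unsigned char *pkt_buf = NULL, *msk_buf = NULL;   /* safe to free early */
    int ret = -ENOMEM;

    if (!pkt_len || pkt_len > MAX_RAW_PACKET)
        return -EINVAL;              /* reject before any allocation */

    pkt_buf = malloc(pkt_len);
    msk_buf = malloc(pkt_len);
    if (!pkt_buf || !msk_buf)
        goto out;

    memcpy(pkt_buf, spec, pkt_len);
    memcpy(msk_buf, mask, pkt_len);

    /* ... run the parser over pkt_buf/msk_buf ... */
    ret = 0;
out:
    free(msk_buf);                   /* free(NULL) is a no-op */
    free(pkt_buf);
    return ret;
}

int main(void)
{
    unsigned char spec[64] = { 0 }, mask[64] = { 0xff };

    return parse_raw(spec, mask, sizeof(spec));
}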
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_lib.c b/drivers/net/ethernet/intel/idpf/idpf_lib.c
+index a3d6b8f198a86a..a055a47449f128 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_lib.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_lib.c
+@@ -927,15 +927,19 @@ static int idpf_stop(struct net_device *netdev)
+ static void idpf_decfg_netdev(struct idpf_vport *vport)
+ {
+ struct idpf_adapter *adapter = vport->adapter;
++ u16 idx = vport->idx;
+
+ kfree(vport->rx_ptype_lkup);
+ vport->rx_ptype_lkup = NULL;
+
+- unregister_netdev(vport->netdev);
+- free_netdev(vport->netdev);
++ if (test_and_clear_bit(IDPF_VPORT_REG_NETDEV,
++ adapter->vport_config[idx]->flags)) {
++ unregister_netdev(vport->netdev);
++ free_netdev(vport->netdev);
++ }
+ vport->netdev = NULL;
+
+- adapter->netdevs[vport->idx] = NULL;
++ adapter->netdevs[idx] = NULL;
+ }
+
+ /**
+@@ -1536,13 +1540,22 @@ void idpf_init_task(struct work_struct *work)
+ }
+
+ for (index = 0; index < adapter->max_vports; index++) {
+- if (adapter->netdevs[index] &&
+- !test_bit(IDPF_VPORT_REG_NETDEV,
+- adapter->vport_config[index]->flags)) {
+- register_netdev(adapter->netdevs[index]);
+- set_bit(IDPF_VPORT_REG_NETDEV,
+- adapter->vport_config[index]->flags);
++ struct net_device *netdev = adapter->netdevs[index];
++ struct idpf_vport_config *vport_config;
++
++ vport_config = adapter->vport_config[index];
++
++ if (!netdev ||
++ test_bit(IDPF_VPORT_REG_NETDEV, vport_config->flags))
++ continue;
++
++ err = register_netdev(netdev);
++ if (err) {
++ dev_err(&pdev->dev, "failed to register netdev for vport %d: %pe\n",
++ index, ERR_PTR(err));
++ continue;
+ }
++ set_bit(IDPF_VPORT_REG_NETDEV, vport_config->flags);
+ }
+
+ /* As all the required vports are created, clear the reset flag
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_main.c b/drivers/net/ethernet/intel/idpf/idpf_main.c
+index b6c515d14cbf08..bec4a02c53733e 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_main.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_main.c
+@@ -87,7 +87,11 @@ static void idpf_remove(struct pci_dev *pdev)
+ */
+ static void idpf_shutdown(struct pci_dev *pdev)
+ {
+- idpf_remove(pdev);
++ struct idpf_adapter *adapter = pci_get_drvdata(pdev);
++
++ cancel_delayed_work_sync(&adapter->vc_event_task);
++ idpf_vc_core_deinit(adapter);
++ idpf_deinit_dflt_mbx(adapter);
+
+ if (system_state == SYSTEM_POWER_OFF)
+ pci_set_power_state(pdev, PCI_D3hot);
+diff --git a/drivers/net/ethernet/intel/igb/igb_ptp.c b/drivers/net/ethernet/intel/igb/igb_ptp.c
+index f9457055612004..f323e1c1989f1b 100644
+--- a/drivers/net/ethernet/intel/igb/igb_ptp.c
++++ b/drivers/net/ethernet/intel/igb/igb_ptp.c
+@@ -509,6 +509,12 @@ static int igb_ptp_feature_enable_82580(struct ptp_clock_info *ptp,
+ PTP_STRICT_FLAGS))
+ return -EOPNOTSUPP;
+
++ /* Both the rising and falling edge are timestamped */
++ if (rq->extts.flags & PTP_STRICT_FLAGS &&
++ (rq->extts.flags & PTP_ENABLE_FEATURE) &&
++ (rq->extts.flags & PTP_EXTTS_EDGES) != PTP_EXTTS_EDGES)
++ return -EOPNOTSUPP;
++
+ if (on) {
+ pin = ptp_find_pin(igb->ptp_clock, PTP_PF_EXTTS,
+ rq->extts.index);
+diff --git a/drivers/net/ethernet/intel/igc/igc.h b/drivers/net/ethernet/intel/igc/igc.h
+index b8111ad9a9a83d..cd1d7b6c178235 100644
+--- a/drivers/net/ethernet/intel/igc/igc.h
++++ b/drivers/net/ethernet/intel/igc/igc.h
+@@ -579,6 +579,7 @@ struct igc_metadata_request {
+ struct xsk_tx_metadata *meta;
+ struct igc_ring *tx_ring;
+ u32 cmd_type;
++ u16 used_desc;
+ };
+
+ struct igc_q_vector {
+diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
+index 84307bb7313e0f..706dd26d4dde26 100644
+--- a/drivers/net/ethernet/intel/igc/igc_main.c
++++ b/drivers/net/ethernet/intel/igc/igc_main.c
+@@ -1092,7 +1092,8 @@ static int igc_init_empty_frame(struct igc_ring *ring,
+
+ dma = dma_map_single(ring->dev, skb->data, size, DMA_TO_DEVICE);
+ if (dma_mapping_error(ring->dev, dma)) {
+- netdev_err_once(ring->netdev, "Failed to map DMA for TX\n");
++ net_err_ratelimited("%s: DMA mapping error for empty frame\n",
++ netdev_name(ring->netdev));
+ return -ENOMEM;
+ }
+
+@@ -1108,20 +1109,12 @@ static int igc_init_empty_frame(struct igc_ring *ring,
+ return 0;
+ }
+
+-static int igc_init_tx_empty_descriptor(struct igc_ring *ring,
+- struct sk_buff *skb,
+- struct igc_tx_buffer *first)
++static void igc_init_tx_empty_descriptor(struct igc_ring *ring,
++ struct sk_buff *skb,
++ struct igc_tx_buffer *first)
+ {
+ union igc_adv_tx_desc *desc;
+ u32 cmd_type, olinfo_status;
+- int err;
+-
+- if (!igc_desc_unused(ring))
+- return -EBUSY;
+-
+- err = igc_init_empty_frame(ring, first, skb);
+- if (err)
+- return err;
+
+ cmd_type = IGC_ADVTXD_DTYP_DATA | IGC_ADVTXD_DCMD_DEXT |
+ IGC_ADVTXD_DCMD_IFCS | IGC_TXD_DCMD |
+@@ -1140,8 +1133,6 @@ static int igc_init_tx_empty_descriptor(struct igc_ring *ring,
+ ring->next_to_use++;
+ if (ring->next_to_use == ring->count)
+ ring->next_to_use = 0;
+-
+- return 0;
+ }
+
+ #define IGC_EMPTY_FRAME_SIZE 60
+@@ -1567,6 +1558,40 @@ static bool igc_request_tx_tstamp(struct igc_adapter *adapter, struct sk_buff *s
+ return false;
+ }
+
++static int igc_insert_empty_frame(struct igc_ring *tx_ring)
++{
++ struct igc_tx_buffer *empty_info;
++ struct sk_buff *empty_skb;
++ void *data;
++ int ret;
++
++ empty_info = &tx_ring->tx_buffer_info[tx_ring->next_to_use];
++ empty_skb = alloc_skb(IGC_EMPTY_FRAME_SIZE, GFP_ATOMIC);
++ if (unlikely(!empty_skb)) {
++ net_err_ratelimited("%s: skb alloc error for empty frame\n",
++ netdev_name(tx_ring->netdev));
++ return -ENOMEM;
++ }
++
++ data = skb_put(empty_skb, IGC_EMPTY_FRAME_SIZE);
++ memset(data, 0, IGC_EMPTY_FRAME_SIZE);
++
++ /* Prepare DMA mapping and Tx buffer information */
++ ret = igc_init_empty_frame(tx_ring, empty_info, empty_skb);
++ if (unlikely(ret)) {
++ dev_kfree_skb_any(empty_skb);
++ return ret;
++ }
++
++ /* Prepare advanced context descriptor for empty packet */
++ igc_tx_ctxtdesc(tx_ring, 0, false, 0, 0, 0);
++
++ /* Prepare advanced data descriptor for empty packet */
++ igc_init_tx_empty_descriptor(tx_ring, empty_skb, empty_info);
++
++ return 0;
++}
++
+ static netdev_tx_t igc_xmit_frame_ring(struct sk_buff *skb,
+ struct igc_ring *tx_ring)
+ {
+@@ -1586,6 +1611,7 @@ static netdev_tx_t igc_xmit_frame_ring(struct sk_buff *skb,
+ * + 1 desc for skb_headlen/IGC_MAX_DATA_PER_TXD,
+ * + 2 desc gap to keep tail from touching head,
+ * + 1 desc for context descriptor,
++ * + 2 desc for inserting an empty packet for launch time,
+ * otherwise try next time
+ */
+ for (f = 0; f < skb_shinfo(skb)->nr_frags; f++)
+@@ -1605,24 +1631,16 @@ static netdev_tx_t igc_xmit_frame_ring(struct sk_buff *skb,
+ launch_time = igc_tx_launchtime(tx_ring, txtime, &first_flag, &insert_empty);
+
+ if (insert_empty) {
+- struct igc_tx_buffer *empty_info;
+- struct sk_buff *empty;
+- void *data;
+-
+- empty_info = &tx_ring->tx_buffer_info[tx_ring->next_to_use];
+- empty = alloc_skb(IGC_EMPTY_FRAME_SIZE, GFP_ATOMIC);
+- if (!empty)
+- goto done;
+-
+- data = skb_put(empty, IGC_EMPTY_FRAME_SIZE);
+- memset(data, 0, IGC_EMPTY_FRAME_SIZE);
+-
+- igc_tx_ctxtdesc(tx_ring, 0, false, 0, 0, 0);
+-
+- if (igc_init_tx_empty_descriptor(tx_ring,
+- empty,
+- empty_info) < 0)
+- dev_kfree_skb_any(empty);
++ /* Reset the launch time if the required empty frame fails to
++ * be inserted. However, this packet is not dropped, so it
++ * "dirties" the current Qbv cycle. This ensures that the
++ * upcoming packet, which is scheduled in the next Qbv cycle,
++ * does not require an empty frame. This way, the launch time
++ * continues to function correctly despite the current failure
++ * to insert the empty frame.
++ */
++ if (igc_insert_empty_frame(tx_ring))
++ launch_time = 0;
+ }
+
+ done:
+@@ -2953,9 +2971,48 @@ static u64 igc_xsk_fill_timestamp(void *_priv)
+ return *(u64 *)_priv;
+ }
+
++static void igc_xsk_request_launch_time(u64 launch_time, void *_priv)
++{
++ struct igc_metadata_request *meta_req = _priv;
++ struct igc_ring *tx_ring = meta_req->tx_ring;
++ __le32 launch_time_offset;
++ bool insert_empty = false;
++ bool first_flag = false;
++ u16 used_desc = 0;
++
++ if (!tx_ring->launchtime_enable)
++ return;
++
++ launch_time_offset = igc_tx_launchtime(tx_ring,
++ ns_to_ktime(launch_time),
++ &first_flag, &insert_empty);
++ if (insert_empty) {
++ /* Disregard the launch time request if the required empty frame
++ * fails to be inserted.
++ */
++ if (igc_insert_empty_frame(tx_ring))
++ return;
++
++ meta_req->tx_buffer =
++ &tx_ring->tx_buffer_info[tx_ring->next_to_use];
++ /* Inserting an empty packet requires two descriptors:
++ * one data descriptor and one context descriptor.
++ */
++ used_desc += 2;
++ }
++
++ /* Use one context descriptor to specify launch time and first flag. */
++ igc_tx_ctxtdesc(tx_ring, launch_time_offset, first_flag, 0, 0, 0);
++ used_desc += 1;
++
++ /* Update the number of used descriptors in this request */
++ meta_req->used_desc += used_desc;
++}
++
+ const struct xsk_tx_metadata_ops igc_xsk_tx_metadata_ops = {
+ .tmo_request_timestamp = igc_xsk_request_timestamp,
+ .tmo_fill_timestamp = igc_xsk_fill_timestamp,
++ .tmo_request_launch_time = igc_xsk_request_launch_time,
+ };
+
+ static void igc_xdp_xmit_zc(struct igc_ring *ring)
+@@ -2978,7 +3035,13 @@ static void igc_xdp_xmit_zc(struct igc_ring *ring)
+ ntu = ring->next_to_use;
+ budget = igc_desc_unused(ring);
+
+- while (xsk_tx_peek_desc(pool, &xdp_desc) && budget--) {
++ /* Packets with launch time require one data descriptor and one context
++ * descriptor. When the launch time falls into the next Qbv cycle, we
++ * may need to insert an empty packet, which requires two more
++ * descriptors. Therefore, to be safe, we always ensure we have at least
++ * 4 descriptors available.
++ */
++ while (budget >= 4 && xsk_tx_peek_desc(pool, &xdp_desc)) {
+ struct igc_metadata_request meta_req;
+ struct xsk_tx_metadata *meta = NULL;
+ struct igc_tx_buffer *bi;
+@@ -2999,9 +3062,19 @@ static void igc_xdp_xmit_zc(struct igc_ring *ring)
+ meta_req.tx_ring = ring;
+ meta_req.tx_buffer = bi;
+ meta_req.meta = meta;
++ meta_req.used_desc = 0;
+ xsk_tx_metadata_request(meta, &igc_xsk_tx_metadata_ops,
+ &meta_req);
+
++ /* xsk_tx_metadata_request() may have updated next_to_use */
++ ntu = ring->next_to_use;
++
++ /* xsk_tx_metadata_request() may have updated Tx buffer info */
++ bi = meta_req.tx_buffer;
++
++ /* xsk_tx_metadata_request() may use a few descriptors */
++ budget -= meta_req.used_desc;
++
+ tx_desc = IGC_TX_DESC(ring, ntu);
+ tx_desc->read.cmd_type_len = cpu_to_le32(meta_req.cmd_type);
+ tx_desc->read.olinfo_status = cpu_to_le32(olinfo_status);
+@@ -3019,9 +3092,11 @@ static void igc_xdp_xmit_zc(struct igc_ring *ring)
+ ntu++;
+ if (ntu == ring->count)
+ ntu = 0;
++
++ ring->next_to_use = ntu;
++ budget--;
+ }
+
+- ring->next_to_use = ntu;
+ if (tx_desc) {
+ igc_flush_tx_descriptors(ring);
+ xsk_tx_release(pool);
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_e610.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_e610.c
+index cb07ecd8937d34..00935747c8c551 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_e610.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_e610.c
+@@ -1453,9 +1453,11 @@ enum ixgbe_media_type ixgbe_get_media_type_e610(struct ixgbe_hw *hw)
+ hw->link.link_info.phy_type_low = 0;
+ } else {
+ highest_bit = fls64(le64_to_cpu(pcaps.phy_type_low));
+- if (highest_bit)
++ if (highest_bit) {
+ hw->link.link_info.phy_type_low =
+ BIT_ULL(highest_bit - 1);
++ hw->link.link_info.phy_type_high = 0;
++ }
+ }
+ }
+
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2.h b/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
+index 44fe9b68d1c223..061fcd444d503a 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
+@@ -1113,6 +1113,9 @@ struct mvpp2 {
+
+ /* Spinlocks for CM3 shared memory configuration */
+ spinlock_t mss_spinlock;
++
++ /* Spinlock for shared PRS parser memory and shadow table */
++ spinlock_t prs_spinlock;
+ };
+
+ struct mvpp2_pcpu_stats {
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+index dd76c1b7ed3a18..c63e5f1b168a9b 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+@@ -7722,8 +7722,9 @@ static int mvpp2_probe(struct platform_device *pdev)
+ if (mvpp2_read(priv, MVPP2_VER_ID_REG) == MVPP2_VER_PP23)
+ priv->hw_version = MVPP23;
+
+- /* Init mss lock */
++ /* Init locks for shared packet processor resources */
+ spin_lock_init(&priv->mss_spinlock);
++ spin_lock_init(&priv->prs_spinlock);
+
+ /* Initialize network controller */
+ err = mvpp2_init(pdev, priv);
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c
+index 9af22f497a40f5..93e978bdf303c4 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c
+@@ -23,6 +23,8 @@ static int mvpp2_prs_hw_write(struct mvpp2 *priv, struct mvpp2_prs_entry *pe)
+ {
+ int i;
+
++ lockdep_assert_held(&priv->prs_spinlock);
++
+ if (pe->index > MVPP2_PRS_TCAM_SRAM_SIZE - 1)
+ return -EINVAL;
+
+@@ -43,11 +45,13 @@ static int mvpp2_prs_hw_write(struct mvpp2 *priv, struct mvpp2_prs_entry *pe)
+ }
+
+ /* Initialize tcam entry from hw */
+-int mvpp2_prs_init_from_hw(struct mvpp2 *priv, struct mvpp2_prs_entry *pe,
+- int tid)
++static int __mvpp2_prs_init_from_hw(struct mvpp2 *priv,
++ struct mvpp2_prs_entry *pe, int tid)
+ {
+ int i;
+
++ lockdep_assert_held(&priv->prs_spinlock);
++
+ if (tid > MVPP2_PRS_TCAM_SRAM_SIZE - 1)
+ return -EINVAL;
+
+@@ -73,6 +77,18 @@ int mvpp2_prs_init_from_hw(struct mvpp2 *priv, struct mvpp2_prs_entry *pe,
+ return 0;
+ }
+
++int mvpp2_prs_init_from_hw(struct mvpp2 *priv, struct mvpp2_prs_entry *pe,
++ int tid)
++{
++ int err;
++
++ spin_lock_bh(&priv->prs_spinlock);
++ err = __mvpp2_prs_init_from_hw(priv, pe, tid);
++ spin_unlock_bh(&priv->prs_spinlock);
++
++ return err;
++}
++
+ /* Invalidate tcam hw entry */
+ static void mvpp2_prs_hw_inv(struct mvpp2 *priv, int index)
+ {
+@@ -374,7 +390,7 @@ static int mvpp2_prs_flow_find(struct mvpp2 *priv, int flow)
+ priv->prs_shadow[tid].lu != MVPP2_PRS_LU_FLOWS)
+ continue;
+
+- mvpp2_prs_init_from_hw(priv, &pe, tid);
++ __mvpp2_prs_init_from_hw(priv, &pe, tid);
+ bits = mvpp2_prs_sram_ai_get(&pe);
+
+ /* Sram store classification lookup ID in AI bits [5:0] */
+@@ -441,7 +457,7 @@ static void mvpp2_prs_mac_drop_all_set(struct mvpp2 *priv, int port, bool add)
+
+ if (priv->prs_shadow[MVPP2_PE_DROP_ALL].valid) {
+ /* Entry exist - update port only */
+- mvpp2_prs_init_from_hw(priv, &pe, MVPP2_PE_DROP_ALL);
++ __mvpp2_prs_init_from_hw(priv, &pe, MVPP2_PE_DROP_ALL);
+ } else {
+ /* Entry doesn't exist - create new */
+ memset(&pe, 0, sizeof(pe));
+@@ -469,14 +485,17 @@ static void mvpp2_prs_mac_drop_all_set(struct mvpp2 *priv, int port, bool add)
+ }
+
+ /* Set port to unicast or multicast promiscuous mode */
+-void mvpp2_prs_mac_promisc_set(struct mvpp2 *priv, int port,
+- enum mvpp2_prs_l2_cast l2_cast, bool add)
++static void __mvpp2_prs_mac_promisc_set(struct mvpp2 *priv, int port,
++ enum mvpp2_prs_l2_cast l2_cast,
++ bool add)
+ {
+ struct mvpp2_prs_entry pe;
+ unsigned char cast_match;
+ unsigned int ri;
+ int tid;
+
++ lockdep_assert_held(&priv->prs_spinlock);
++
+ if (l2_cast == MVPP2_PRS_L2_UNI_CAST) {
+ cast_match = MVPP2_PRS_UCAST_VAL;
+ tid = MVPP2_PE_MAC_UC_PROMISCUOUS;
+@@ -489,7 +508,7 @@ void mvpp2_prs_mac_promisc_set(struct mvpp2 *priv, int port,
+
+ /* promiscuous mode - Accept unknown unicast or multicast packets */
+ if (priv->prs_shadow[tid].valid) {
+- mvpp2_prs_init_from_hw(priv, &pe, tid);
++ __mvpp2_prs_init_from_hw(priv, &pe, tid);
+ } else {
+ memset(&pe, 0, sizeof(pe));
+ mvpp2_prs_tcam_lu_set(&pe, MVPP2_PRS_LU_MAC);
+@@ -522,6 +541,14 @@ void mvpp2_prs_mac_promisc_set(struct mvpp2 *priv, int port,
+ mvpp2_prs_hw_write(priv, &pe);
+ }
+
++void mvpp2_prs_mac_promisc_set(struct mvpp2 *priv, int port,
++ enum mvpp2_prs_l2_cast l2_cast, bool add)
++{
++ spin_lock_bh(&priv->prs_spinlock);
++ __mvpp2_prs_mac_promisc_set(priv, port, l2_cast, add);
++ spin_unlock_bh(&priv->prs_spinlock);
++}
++
+ /* Set entry for dsa packets */
+ static void mvpp2_prs_dsa_tag_set(struct mvpp2 *priv, int port, bool add,
+ bool tagged, bool extend)
+@@ -539,7 +566,7 @@ static void mvpp2_prs_dsa_tag_set(struct mvpp2 *priv, int port, bool add,
+
+ if (priv->prs_shadow[tid].valid) {
+ /* Entry exist - update port only */
+- mvpp2_prs_init_from_hw(priv, &pe, tid);
++ __mvpp2_prs_init_from_hw(priv, &pe, tid);
+ } else {
+ /* Entry doesn't exist - create new */
+ memset(&pe, 0, sizeof(pe));
+@@ -610,7 +637,7 @@ static void mvpp2_prs_dsa_tag_ethertype_set(struct mvpp2 *priv, int port,
+
+ if (priv->prs_shadow[tid].valid) {
+ /* Entry exist - update port only */
+- mvpp2_prs_init_from_hw(priv, &pe, tid);
++ __mvpp2_prs_init_from_hw(priv, &pe, tid);
+ } else {
+ /* Entry doesn't exist - create new */
+ memset(&pe, 0, sizeof(pe));
+@@ -673,7 +700,7 @@ static int mvpp2_prs_vlan_find(struct mvpp2 *priv, unsigned short tpid, int ai)
+ priv->prs_shadow[tid].lu != MVPP2_PRS_LU_VLAN)
+ continue;
+
+- mvpp2_prs_init_from_hw(priv, &pe, tid);
++ __mvpp2_prs_init_from_hw(priv, &pe, tid);
+ match = mvpp2_prs_tcam_data_cmp(&pe, 0, tpid);
+ if (!match)
+ continue;
+@@ -726,7 +753,7 @@ static int mvpp2_prs_vlan_add(struct mvpp2 *priv, unsigned short tpid, int ai,
+ priv->prs_shadow[tid_aux].lu != MVPP2_PRS_LU_VLAN)
+ continue;
+
+- mvpp2_prs_init_from_hw(priv, &pe, tid_aux);
++ __mvpp2_prs_init_from_hw(priv, &pe, tid_aux);
+ ri_bits = mvpp2_prs_sram_ri_get(&pe);
+ if ((ri_bits & MVPP2_PRS_RI_VLAN_MASK) ==
+ MVPP2_PRS_RI_VLAN_DOUBLE)
+@@ -760,7 +787,7 @@ static int mvpp2_prs_vlan_add(struct mvpp2 *priv, unsigned short tpid, int ai,
+
+ mvpp2_prs_shadow_set(priv, pe.index, MVPP2_PRS_LU_VLAN);
+ } else {
+- mvpp2_prs_init_from_hw(priv, &pe, tid);
++ __mvpp2_prs_init_from_hw(priv, &pe, tid);
+ }
+ /* Update ports' mask */
+ mvpp2_prs_tcam_port_map_set(&pe, port_map);
+@@ -800,7 +827,7 @@ static int mvpp2_prs_double_vlan_find(struct mvpp2 *priv, unsigned short tpid1,
+ priv->prs_shadow[tid].lu != MVPP2_PRS_LU_VLAN)
+ continue;
+
+- mvpp2_prs_init_from_hw(priv, &pe, tid);
++ __mvpp2_prs_init_from_hw(priv, &pe, tid);
+
+ match = mvpp2_prs_tcam_data_cmp(&pe, 0, tpid1) &&
+ mvpp2_prs_tcam_data_cmp(&pe, 4, tpid2);
+@@ -849,7 +876,7 @@ static int mvpp2_prs_double_vlan_add(struct mvpp2 *priv, unsigned short tpid1,
+ priv->prs_shadow[tid_aux].lu != MVPP2_PRS_LU_VLAN)
+ continue;
+
+- mvpp2_prs_init_from_hw(priv, &pe, tid_aux);
++ __mvpp2_prs_init_from_hw(priv, &pe, tid_aux);
+ ri_bits = mvpp2_prs_sram_ri_get(&pe);
+ ri_bits &= MVPP2_PRS_RI_VLAN_MASK;
+ if (ri_bits == MVPP2_PRS_RI_VLAN_SINGLE ||
+@@ -880,7 +907,7 @@ static int mvpp2_prs_double_vlan_add(struct mvpp2 *priv, unsigned short tpid1,
+
+ mvpp2_prs_shadow_set(priv, pe.index, MVPP2_PRS_LU_VLAN);
+ } else {
+- mvpp2_prs_init_from_hw(priv, &pe, tid);
++ __mvpp2_prs_init_from_hw(priv, &pe, tid);
+ }
+
+ /* Update ports' mask */
+@@ -1213,8 +1240,8 @@ static void mvpp2_prs_mac_init(struct mvpp2 *priv)
+ /* Create dummy entries for drop all and promiscuous modes */
+ mvpp2_prs_drop_fc(priv);
+ mvpp2_prs_mac_drop_all_set(priv, 0, false);
+- mvpp2_prs_mac_promisc_set(priv, 0, MVPP2_PRS_L2_UNI_CAST, false);
+- mvpp2_prs_mac_promisc_set(priv, 0, MVPP2_PRS_L2_MULTI_CAST, false);
++ __mvpp2_prs_mac_promisc_set(priv, 0, MVPP2_PRS_L2_UNI_CAST, false);
++ __mvpp2_prs_mac_promisc_set(priv, 0, MVPP2_PRS_L2_MULTI_CAST, false);
+ }
+
+ /* Set default entries for various types of dsa packets */
+@@ -1533,12 +1560,6 @@ static int mvpp2_prs_vlan_init(struct platform_device *pdev, struct mvpp2 *priv)
+ struct mvpp2_prs_entry pe;
+ int err;
+
+- priv->prs_double_vlans = devm_kcalloc(&pdev->dev, sizeof(bool),
+- MVPP2_PRS_DBL_VLANS_MAX,
+- GFP_KERNEL);
+- if (!priv->prs_double_vlans)
+- return -ENOMEM;
+-
+ /* Double VLAN: 0x88A8, 0x8100 */
+ err = mvpp2_prs_double_vlan_add(priv, ETH_P_8021AD, ETH_P_8021Q,
+ MVPP2_PRS_PORT_MASK);
+@@ -1941,7 +1962,7 @@ static int mvpp2_prs_vid_range_find(struct mvpp2_port *port, u16 vid, u16 mask)
+ port->priv->prs_shadow[tid].lu != MVPP2_PRS_LU_VID)
+ continue;
+
+- mvpp2_prs_init_from_hw(port->priv, &pe, tid);
++ __mvpp2_prs_init_from_hw(port->priv, &pe, tid);
+
+ mvpp2_prs_tcam_data_byte_get(&pe, 2, &byte[0], &enable[0]);
+ mvpp2_prs_tcam_data_byte_get(&pe, 3, &byte[1], &enable[1]);
+@@ -1970,6 +1991,8 @@ int mvpp2_prs_vid_entry_add(struct mvpp2_port *port, u16 vid)
+
+ memset(&pe, 0, sizeof(pe));
+
++ spin_lock_bh(&priv->prs_spinlock);
++
+ /* Scan TCAM and see if entry with this <vid,port> already exist */
+ tid = mvpp2_prs_vid_range_find(port, vid, mask);
+
+@@ -1988,8 +2011,10 @@ int mvpp2_prs_vid_entry_add(struct mvpp2_port *port, u16 vid)
+ MVPP2_PRS_VLAN_FILT_MAX_ENTRY);
+
+ /* There isn't room for a new VID filter */
+- if (tid < 0)
++ if (tid < 0) {
++ spin_unlock_bh(&priv->prs_spinlock);
+ return tid;
++ }
+
+ mvpp2_prs_tcam_lu_set(&pe, MVPP2_PRS_LU_VID);
+ pe.index = tid;
+@@ -1997,7 +2022,7 @@ int mvpp2_prs_vid_entry_add(struct mvpp2_port *port, u16 vid)
+ /* Mask all ports */
+ mvpp2_prs_tcam_port_map_set(&pe, 0);
+ } else {
+- mvpp2_prs_init_from_hw(priv, &pe, tid);
++ __mvpp2_prs_init_from_hw(priv, &pe, tid);
+ }
+
+ /* Enable the current port */
+@@ -2019,6 +2044,7 @@ int mvpp2_prs_vid_entry_add(struct mvpp2_port *port, u16 vid)
+ mvpp2_prs_shadow_set(priv, pe.index, MVPP2_PRS_LU_VID);
+ mvpp2_prs_hw_write(priv, &pe);
+
++ spin_unlock_bh(&priv->prs_spinlock);
+ return 0;
+ }
+
+@@ -2028,15 +2054,16 @@ void mvpp2_prs_vid_entry_remove(struct mvpp2_port *port, u16 vid)
+ struct mvpp2 *priv = port->priv;
+ int tid;
+
+- /* Scan TCAM and see if entry with this <vid,port> already exist */
+- tid = mvpp2_prs_vid_range_find(port, vid, 0xfff);
++ spin_lock_bh(&priv->prs_spinlock);
+
+- /* No such entry */
+- if (tid < 0)
+- return;
++ /* Invalidate TCAM entry with this <vid,port>, if it exists */
++ tid = mvpp2_prs_vid_range_find(port, vid, 0xfff);
++ if (tid >= 0) {
++ mvpp2_prs_hw_inv(priv, tid);
++ priv->prs_shadow[tid].valid = false;
++ }
+
+- mvpp2_prs_hw_inv(priv, tid);
+- priv->prs_shadow[tid].valid = false;
++ spin_unlock_bh(&priv->prs_spinlock);
+ }
+
+ /* Remove all existing VID filters on this port */
+@@ -2045,6 +2072,8 @@ void mvpp2_prs_vid_remove_all(struct mvpp2_port *port)
+ struct mvpp2 *priv = port->priv;
+ int tid;
+
++ spin_lock_bh(&priv->prs_spinlock);
++
+ for (tid = MVPP2_PRS_VID_PORT_FIRST(port->id);
+ tid <= MVPP2_PRS_VID_PORT_LAST(port->id); tid++) {
+ if (priv->prs_shadow[tid].valid) {
+@@ -2052,6 +2081,8 @@ void mvpp2_prs_vid_remove_all(struct mvpp2_port *port)
+ priv->prs_shadow[tid].valid = false;
+ }
+ }
++
++ spin_unlock_bh(&priv->prs_spinlock);
+ }
+
+ /* Remove VID filering entry for this port */
+@@ -2060,10 +2091,14 @@ void mvpp2_prs_vid_disable_filtering(struct mvpp2_port *port)
+ unsigned int tid = MVPP2_PRS_VID_PORT_DFLT(port->id);
+ struct mvpp2 *priv = port->priv;
+
++ spin_lock_bh(&priv->prs_spinlock);
++
+ /* Invalidate the guard entry */
+ mvpp2_prs_hw_inv(priv, tid);
+
+ priv->prs_shadow[tid].valid = false;
++
++ spin_unlock_bh(&priv->prs_spinlock);
+ }
+
+ /* Add guard entry that drops packets when no VID is matched on this port */
+@@ -2079,6 +2114,8 @@ void mvpp2_prs_vid_enable_filtering(struct mvpp2_port *port)
+
+ memset(&pe, 0, sizeof(pe));
+
++ spin_lock_bh(&priv->prs_spinlock);
++
+ pe.index = tid;
+
+ reg_val = mvpp2_read(priv, MVPP2_MH_REG(port->id));
+@@ -2111,6 +2148,8 @@ void mvpp2_prs_vid_enable_filtering(struct mvpp2_port *port)
+ /* Update shadow table */
+ mvpp2_prs_shadow_set(priv, pe.index, MVPP2_PRS_LU_VID);
+ mvpp2_prs_hw_write(priv, &pe);
++
++ spin_unlock_bh(&priv->prs_spinlock);
+ }
+
+ /* Parser default initialization */
+@@ -2118,6 +2157,20 @@ int mvpp2_prs_default_init(struct platform_device *pdev, struct mvpp2 *priv)
+ {
+ int err, index, i;
+
++ priv->prs_shadow = devm_kcalloc(&pdev->dev, MVPP2_PRS_TCAM_SRAM_SIZE,
++ sizeof(*priv->prs_shadow),
++ GFP_KERNEL);
++ if (!priv->prs_shadow)
++ return -ENOMEM;
++
++ priv->prs_double_vlans = devm_kcalloc(&pdev->dev, sizeof(bool),
++ MVPP2_PRS_DBL_VLANS_MAX,
++ GFP_KERNEL);
++ if (!priv->prs_double_vlans)
++ return -ENOMEM;
++
++ spin_lock_bh(&priv->prs_spinlock);
++
+ /* Enable tcam table */
+ mvpp2_write(priv, MVPP2_PRS_TCAM_CTRL_REG, MVPP2_PRS_TCAM_EN_MASK);
+
+@@ -2136,12 +2189,6 @@ int mvpp2_prs_default_init(struct platform_device *pdev, struct mvpp2 *priv)
+ for (index = 0; index < MVPP2_PRS_TCAM_SRAM_SIZE; index++)
+ mvpp2_prs_hw_inv(priv, index);
+
+- priv->prs_shadow = devm_kcalloc(&pdev->dev, MVPP2_PRS_TCAM_SRAM_SIZE,
+- sizeof(*priv->prs_shadow),
+- GFP_KERNEL);
+- if (!priv->prs_shadow)
+- return -ENOMEM;
+-
+ /* Always start from lookup = 0 */
+ for (index = 0; index < MVPP2_MAX_PORTS; index++)
+ mvpp2_prs_hw_port_init(priv, index, MVPP2_PRS_LU_MH,
+@@ -2158,26 +2205,13 @@ int mvpp2_prs_default_init(struct platform_device *pdev, struct mvpp2 *priv)
+ mvpp2_prs_vid_init(priv);
+
+ err = mvpp2_prs_etype_init(priv);
+- if (err)
+- return err;
+-
+- err = mvpp2_prs_vlan_init(pdev, priv);
+- if (err)
+- return err;
+-
+- err = mvpp2_prs_pppoe_init(priv);
+- if (err)
+- return err;
+-
+- err = mvpp2_prs_ip6_init(priv);
+- if (err)
+- return err;
+-
+- err = mvpp2_prs_ip4_init(priv);
+- if (err)
+- return err;
++ err = err ? : mvpp2_prs_vlan_init(pdev, priv);
++ err = err ? : mvpp2_prs_pppoe_init(priv);
++ err = err ? : mvpp2_prs_ip6_init(priv);
++ err = err ? : mvpp2_prs_ip4_init(priv);
+
+- return 0;
++ spin_unlock_bh(&priv->prs_spinlock);
++ return err;
+ }
+
+ /* Compare MAC DA with tcam entry data */
+@@ -2217,7 +2251,7 @@ mvpp2_prs_mac_da_range_find(struct mvpp2 *priv, int pmap, const u8 *da,
+ (priv->prs_shadow[tid].udf != udf_type))
+ continue;
+
+- mvpp2_prs_init_from_hw(priv, &pe, tid);
++ __mvpp2_prs_init_from_hw(priv, &pe, tid);
+ entry_pmap = mvpp2_prs_tcam_port_map_get(&pe);
+
+ if (mvpp2_prs_mac_range_equals(&pe, da, mask) &&
+@@ -2229,7 +2263,8 @@ mvpp2_prs_mac_da_range_find(struct mvpp2 *priv, int pmap, const u8 *da,
+ }
+
+ /* Update parser's mac da entry */
+-int mvpp2_prs_mac_da_accept(struct mvpp2_port *port, const u8 *da, bool add)
++static int __mvpp2_prs_mac_da_accept(struct mvpp2_port *port,
++ const u8 *da, bool add)
+ {
+ unsigned char mask[ETH_ALEN] = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff };
+ struct mvpp2 *priv = port->priv;
+@@ -2261,7 +2296,7 @@ int mvpp2_prs_mac_da_accept(struct mvpp2_port *port, const u8 *da, bool add)
+ /* Mask all ports */
+ mvpp2_prs_tcam_port_map_set(&pe, 0);
+ } else {
+- mvpp2_prs_init_from_hw(priv, &pe, tid);
++ __mvpp2_prs_init_from_hw(priv, &pe, tid);
+ }
+
+ mvpp2_prs_tcam_lu_set(&pe, MVPP2_PRS_LU_MAC);
+@@ -2317,6 +2352,17 @@ int mvpp2_prs_mac_da_accept(struct mvpp2_port *port, const u8 *da, bool add)
+ return 0;
+ }
+
++int mvpp2_prs_mac_da_accept(struct mvpp2_port *port, const u8 *da, bool add)
++{
++ int err;
++
++ spin_lock_bh(&port->priv->prs_spinlock);
++ err = __mvpp2_prs_mac_da_accept(port, da, add);
++ spin_unlock_bh(&port->priv->prs_spinlock);
++
++ return err;
++}
++
+ int mvpp2_prs_update_mac_da(struct net_device *dev, const u8 *da)
+ {
+ struct mvpp2_port *port = netdev_priv(dev);
+@@ -2345,6 +2391,8 @@ void mvpp2_prs_mac_del_all(struct mvpp2_port *port)
+ unsigned long pmap;
+ int index, tid;
+
++ spin_lock_bh(&priv->prs_spinlock);
++
+ for (tid = MVPP2_PE_MAC_RANGE_START;
+ tid <= MVPP2_PE_MAC_RANGE_END; tid++) {
+ unsigned char da[ETH_ALEN], da_mask[ETH_ALEN];
+@@ -2354,7 +2402,7 @@ void mvpp2_prs_mac_del_all(struct mvpp2_port *port)
+ (priv->prs_shadow[tid].udf != MVPP2_PRS_UDF_MAC_DEF))
+ continue;
+
+- mvpp2_prs_init_from_hw(priv, &pe, tid);
++ __mvpp2_prs_init_from_hw(priv, &pe, tid);
+
+ pmap = mvpp2_prs_tcam_port_map_get(&pe);
+
+@@ -2375,14 +2423,17 @@ void mvpp2_prs_mac_del_all(struct mvpp2_port *port)
+ continue;
+
+ /* Remove entry from TCAM */
+- mvpp2_prs_mac_da_accept(port, da, false);
++ __mvpp2_prs_mac_da_accept(port, da, false);
+ }
++
++ spin_unlock_bh(&priv->prs_spinlock);
+ }
+
+ int mvpp2_prs_tag_mode_set(struct mvpp2 *priv, int port, int type)
+ {
+ switch (type) {
+ case MVPP2_TAG_TYPE_EDSA:
++ spin_lock_bh(&priv->prs_spinlock);
+ /* Add port to EDSA entries */
+ mvpp2_prs_dsa_tag_set(priv, port, true,
+ MVPP2_PRS_TAGGED, MVPP2_PRS_EDSA);
+@@ -2393,9 +2444,11 @@ int mvpp2_prs_tag_mode_set(struct mvpp2 *priv, int port, int type)
+ MVPP2_PRS_TAGGED, MVPP2_PRS_DSA);
+ mvpp2_prs_dsa_tag_set(priv, port, false,
+ MVPP2_PRS_UNTAGGED, MVPP2_PRS_DSA);
++ spin_unlock_bh(&priv->prs_spinlock);
+ break;
+
+ case MVPP2_TAG_TYPE_DSA:
++ spin_lock_bh(&priv->prs_spinlock);
+ /* Add port to DSA entries */
+ mvpp2_prs_dsa_tag_set(priv, port, true,
+ MVPP2_PRS_TAGGED, MVPP2_PRS_DSA);
+@@ -2406,10 +2459,12 @@ int mvpp2_prs_tag_mode_set(struct mvpp2 *priv, int port, int type)
+ MVPP2_PRS_TAGGED, MVPP2_PRS_EDSA);
+ mvpp2_prs_dsa_tag_set(priv, port, false,
+ MVPP2_PRS_UNTAGGED, MVPP2_PRS_EDSA);
++ spin_unlock_bh(&priv->prs_spinlock);
+ break;
+
+ case MVPP2_TAG_TYPE_MH:
+ case MVPP2_TAG_TYPE_NONE:
++ spin_lock_bh(&priv->prs_spinlock);
+ /* Remove port form EDSA and DSA entries */
+ mvpp2_prs_dsa_tag_set(priv, port, false,
+ MVPP2_PRS_TAGGED, MVPP2_PRS_DSA);
+@@ -2419,6 +2474,7 @@ int mvpp2_prs_tag_mode_set(struct mvpp2 *priv, int port, int type)
+ MVPP2_PRS_TAGGED, MVPP2_PRS_EDSA);
+ mvpp2_prs_dsa_tag_set(priv, port, false,
+ MVPP2_PRS_UNTAGGED, MVPP2_PRS_EDSA);
++ spin_unlock_bh(&priv->prs_spinlock);
+ break;
+
+ default:
+@@ -2437,11 +2493,15 @@ int mvpp2_prs_add_flow(struct mvpp2 *priv, int flow, u32 ri, u32 ri_mask)
+
+ memset(&pe, 0, sizeof(pe));
+
++ spin_lock_bh(&priv->prs_spinlock);
++
+ tid = mvpp2_prs_tcam_first_free(priv,
+ MVPP2_PE_LAST_FREE_TID,
+ MVPP2_PE_FIRST_FREE_TID);
+- if (tid < 0)
++ if (tid < 0) {
++ spin_unlock_bh(&priv->prs_spinlock);
+ return tid;
++ }
+
+ pe.index = tid;
+
+@@ -2461,6 +2521,7 @@ int mvpp2_prs_add_flow(struct mvpp2 *priv, int flow, u32 ri, u32 ri_mask)
+ mvpp2_prs_tcam_port_map_set(&pe, MVPP2_PRS_PORT_MASK);
+ mvpp2_prs_hw_write(priv, &pe);
+
++ spin_unlock_bh(&priv->prs_spinlock);
+ return 0;
+ }
+
+@@ -2472,6 +2533,8 @@ int mvpp2_prs_def_flow(struct mvpp2_port *port)
+
+ memset(&pe, 0, sizeof(pe));
+
++ spin_lock_bh(&port->priv->prs_spinlock);
++
+ tid = mvpp2_prs_flow_find(port->priv, port->id);
+
+ /* Such entry not exist */
+@@ -2480,8 +2543,10 @@ int mvpp2_prs_def_flow(struct mvpp2_port *port)
+ tid = mvpp2_prs_tcam_first_free(port->priv,
+ MVPP2_PE_LAST_FREE_TID,
+ MVPP2_PE_FIRST_FREE_TID);
+- if (tid < 0)
++ if (tid < 0) {
++ spin_unlock_bh(&port->priv->prs_spinlock);
+ return tid;
++ }
+
+ pe.index = tid;
+
+@@ -2492,13 +2557,14 @@ int mvpp2_prs_def_flow(struct mvpp2_port *port)
+ /* Update shadow table */
+ mvpp2_prs_shadow_set(port->priv, pe.index, MVPP2_PRS_LU_FLOWS);
+ } else {
+- mvpp2_prs_init_from_hw(port->priv, &pe, tid);
++ __mvpp2_prs_init_from_hw(port->priv, &pe, tid);
+ }
+
+ mvpp2_prs_tcam_lu_set(&pe, MVPP2_PRS_LU_FLOWS);
+ mvpp2_prs_tcam_port_map_set(&pe, (1 << port->id));
+ mvpp2_prs_hw_write(port->priv, &pe);
+
++ spin_unlock_bh(&port->priv->prs_spinlock);
+ return 0;
+ }
+
+@@ -2509,11 +2575,14 @@ int mvpp2_prs_hits(struct mvpp2 *priv, int index)
+ if (index > MVPP2_PRS_TCAM_SRAM_SIZE)
+ return -EINVAL;
+
++ spin_lock_bh(&priv->prs_spinlock);
++
+ mvpp2_write(priv, MVPP2_PRS_TCAM_HIT_IDX_REG, index);
+
+ val = mvpp2_read(priv, MVPP2_PRS_TCAM_HIT_CNT_REG);
+
+ val &= MVPP2_PRS_TCAM_HIT_CNT_MASK;
+
++ spin_unlock_bh(&priv->prs_spinlock);
+ return val;
+ }
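The mvpp2 parser changes split each entry point into a locked wrapper and an unlocked __ helper that asserts the lock is held, so internal callers that already own prs_spinlock can reuse the helper without deadlocking. A pthread-based sketch of that shape, with assert() playing the role of lockdep_assert_held():

#include <assert.h>
#include <pthread.h>

static pthread_mutex_t prs_lock = PTHREAD_MUTEX_INITIALIZER;
static int prs_lock_held;
static int shadow_table[64];

static int __prs_update(int tid, int val)
{
    assert(prs_lock_held);           /* callers must hold the lock */
    if (tid < 0 || tid >= 64)
        return -1;
    shadow_table[tid] = val;         /* shadow table + hardware write go here */
    return 0;
}

/* Public entry point: takes the lock, then calls the unlocked helper.
 * Other locked paths call __prs_update() directly while already holding
 * the lock, avoiding recursive locking. */
static int prs_update(int tid, int val)
{
    int err;

    pthread_mutex_lock(&prs_lock);
    prs_lock_held = 1;
    err = __prs_update(tid, val);
    prs_lock_held = 0;
    pthread_mutex_unlock(&prs_lock);

    return err;
}

int main(void)
{
    return prs_update(3, 42);
}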
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+index cd0d7b7774f1af..6575c422635b76 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+@@ -2634,7 +2634,7 @@ static irqreturn_t rvu_mbox_intr_handler(int irq, void *rvu_irq)
+ rvupf_write64(rvu, RVU_PF_VFPF_MBOX_INTX(1), intr);
+
+ rvu_queue_work(&rvu->afvf_wq_info, 64, vfs, intr);
+- vfs -= 64;
++ vfs = 64;
+ }
+
+ intr = rvupf_read64(rvu, RVU_PF_VFPF_MBOX_INTX(0));
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_devlink.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_devlink.c
+index dab4deca893f5d..27c3a2daaaa958 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_devlink.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_devlink.c
+@@ -207,7 +207,7 @@ static void rvu_nix_unregister_interrupts(struct rvu *rvu)
+ rvu->irq_allocated[offs + NIX_AF_INT_VEC_RVU] = false;
+ }
+
+- for (i = NIX_AF_INT_VEC_AF_ERR; i < NIX_AF_INT_VEC_CNT; i++)
++ for (i = NIX_AF_INT_VEC_GEN; i < NIX_AF_INT_VEC_CNT; i++)
+ if (rvu->irq_allocated[offs + i]) {
+ free_irq(pci_irq_vector(rvu->pdev, offs + i), rvu_dl);
+ rvu->irq_allocated[offs + i] = false;
+diff --git a/drivers/net/ethernet/mediatek/airoha_eth.c b/drivers/net/ethernet/mediatek/airoha_eth.c
+index 09f448f2912402..0c244ea5244ccc 100644
+--- a/drivers/net/ethernet/mediatek/airoha_eth.c
++++ b/drivers/net/ethernet/mediatek/airoha_eth.c
+@@ -1547,7 +1547,7 @@ static int airoha_qdma_get_gdm_port(struct airoha_eth *eth,
+
+ sport = FIELD_GET(QDMA_ETH_RXMSG_SPORT_MASK, msg1);
+ switch (sport) {
+- case 0x10 ... 0x13:
++ case 0x10 ... 0x14:
+ port = 0;
+ break;
+ case 0x2 ... 0x4:
+@@ -2793,7 +2793,7 @@ static int airoha_qdma_set_tx_ets_sched(struct airoha_gdm_port *port,
+ struct tc_ets_qopt_offload_replace_params *p = &opt->replace_params;
+ enum tx_sched_mode mode = TC_SCH_SP;
+ u16 w[AIROHA_NUM_QOS_QUEUES] = {};
+- int i, nstrict = 0, nwrr, qidx;
++ int i, nstrict = 0;
+
+ if (p->bands > AIROHA_NUM_QOS_QUEUES)
+ return -EINVAL;
+@@ -2811,17 +2811,17 @@ static int airoha_qdma_set_tx_ets_sched(struct airoha_gdm_port *port,
+ * lowest priorities with respect to SP ones.
+ * e.g: WRR0, WRR1, .., WRRm, SP0, SP1, .., SPn
+ */
+- nwrr = p->bands - nstrict;
+- qidx = nstrict && nwrr ? nstrict : 0;
+- for (i = 1; i <= p->bands; i++) {
+- if (p->priomap[i % AIROHA_NUM_QOS_QUEUES] != qidx)
++ for (i = 0; i < nstrict; i++) {
++ if (p->priomap[p->bands - i - 1] != i)
+ return -EINVAL;
+-
+- qidx = i == nwrr ? 0 : qidx + 1;
+ }
+
+- for (i = 0; i < nwrr; i++)
++ for (i = 0; i < p->bands - nstrict; i++) {
++ if (p->priomap[i] != nstrict + i)
++ return -EINVAL;
++
+ w[i] = p->weights[nstrict + i];
++ }
+
+ if (!nstrict)
+ mode = TC_SCH_WRR8;
+@@ -3082,7 +3082,7 @@ static int airoha_tc_get_htb_get_leaf_queue(struct airoha_gdm_port *port,
+ return -EINVAL;
+ }
+
+- opt->qid = channel;
++ opt->qid = AIROHA_NUM_TX_RING + channel;
+
+ return 0;
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/params.c b/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
+index 64b62ed17b07a7..31eb99f09c63c1 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
+@@ -423,7 +423,7 @@ u8 mlx5e_shampo_get_log_pkt_per_rsrv(struct mlx5_core_dev *mdev,
+ struct mlx5e_params *params)
+ {
+ u32 resrv_size = BIT(mlx5e_shampo_get_log_rsrv_size(mdev, params)) *
+- PAGE_SIZE;
++ MLX5E_SHAMPO_WQ_BASE_RESRV_SIZE;
+
+ return order_base_2(DIV_ROUND_UP(resrv_size, params->sw_mtu));
+ }
+@@ -827,7 +827,8 @@ static u32 mlx5e_shampo_get_log_cq_size(struct mlx5_core_dev *mdev,
+ struct mlx5e_params *params,
+ struct mlx5e_xsk_param *xsk)
+ {
+- int rsrv_size = BIT(mlx5e_shampo_get_log_rsrv_size(mdev, params)) * PAGE_SIZE;
++ int rsrv_size = BIT(mlx5e_shampo_get_log_rsrv_size(mdev, params)) *
++ MLX5E_SHAMPO_WQ_BASE_RESRV_SIZE;
+ u16 num_strides = BIT(mlx5e_mpwqe_get_log_num_strides(mdev, params, xsk));
+ int pkt_per_rsrv = BIT(mlx5e_shampo_get_log_pkt_per_rsrv(mdev, params));
+ u8 log_stride_sz = mlx5e_mpwqe_get_log_stride_size(mdev, params, xsk);
+@@ -1036,7 +1037,8 @@ u32 mlx5e_shampo_hd_per_wqe(struct mlx5_core_dev *mdev,
+ struct mlx5e_params *params,
+ struct mlx5e_rq_param *rq_param)
+ {
+- int resv_size = BIT(mlx5e_shampo_get_log_rsrv_size(mdev, params)) * PAGE_SIZE;
++ int resv_size = BIT(mlx5e_shampo_get_log_rsrv_size(mdev, params)) *
++ MLX5E_SHAMPO_WQ_BASE_RESRV_SIZE;
+ u16 num_strides = BIT(mlx5e_mpwqe_get_log_num_strides(mdev, params, NULL));
+ int pkt_per_resv = BIT(mlx5e_shampo_get_log_pkt_per_rsrv(mdev, params));
+ u8 log_stride_sz = mlx5e_mpwqe_get_log_stride_size(mdev, params, NULL);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_fs_ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/en_fs_ethtool.c
+index 773624bb2c5d54..d68230a7b9f46c 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_fs_ethtool.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_fs_ethtool.c
+@@ -884,8 +884,10 @@ static int flow_type_to_traffic_type(u32 flow_type)
+ case ESP_V6_FLOW:
+ return MLX5_TT_IPV6_IPSEC_ESP;
+ case IPV4_FLOW:
++ case IP_USER_FLOW:
+ return MLX5_TT_IPV4;
+ case IPV6_FLOW:
++ case IPV6_USER_FLOW:
+ return MLX5_TT_IPV6;
+ default:
+ return -EINVAL;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c b/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c
+index ed2ba272946b9d..6c9737c5373487 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c
+@@ -1052,6 +1052,10 @@ static void mlx5_do_bond(struct mlx5_lag *ldev)
+ if (err) {
+ if (shared_fdb || roce_lag)
+ mlx5_lag_add_devices(ldev);
++ if (shared_fdb) {
++ mlx5_ldev_for_each(i, 0, ldev)
++ mlx5_eswitch_reload_ib_reps(ldev->pf[i].dev->priv.eswitch);
++ }
+
+ return;
+ } else if (roce_lag) {
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+index ec956c4bcebdba..7c3312d6aed9b2 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+@@ -1205,24 +1205,24 @@ static int mlx5_function_enable(struct mlx5_core_dev *dev, bool boot, u64 timeou
+ dev->caps.embedded_cpu = mlx5_read_embedded_cpu(dev);
+ mlx5_cmd_set_state(dev, MLX5_CMDIF_STATE_UP);
+
+- mlx5_start_health_poll(dev);
+-
+ err = mlx5_core_enable_hca(dev, 0);
+ if (err) {
+ mlx5_core_err(dev, "enable hca failed\n");
+- goto stop_health_poll;
++ goto err_cmd_cleanup;
+ }
+
++ mlx5_start_health_poll(dev);
++
+ err = mlx5_core_set_issi(dev);
+ if (err) {
+ mlx5_core_err(dev, "failed to set issi\n");
+- goto err_disable_hca;
++ goto stop_health_poll;
+ }
+
+ err = mlx5_satisfy_startup_pages(dev, 1);
+ if (err) {
+ mlx5_core_err(dev, "failed to allocate boot pages\n");
+- goto err_disable_hca;
++ goto stop_health_poll;
+ }
+
+ err = mlx5_tout_query_dtor(dev);
+@@ -1235,10 +1235,9 @@ static int mlx5_function_enable(struct mlx5_core_dev *dev, bool boot, u64 timeou
+
+ reclaim_boot_pages:
+ mlx5_reclaim_startup_pages(dev);
+-err_disable_hca:
+- mlx5_core_disable_hca(dev, 0);
+ stop_health_poll:
+ mlx5_stop_health_poll(dev, boot);
++ mlx5_core_disable_hca(dev, 0);
+ err_cmd_cleanup:
+ mlx5_cmd_set_state(dev, MLX5_CMDIF_STATE_DOWN);
+ mlx5_cmd_disable(dev);
+@@ -1249,8 +1248,8 @@ static int mlx5_function_enable(struct mlx5_core_dev *dev, bool boot, u64 timeou
+ static void mlx5_function_disable(struct mlx5_core_dev *dev, bool boot)
+ {
+ mlx5_reclaim_startup_pages(dev);
+- mlx5_core_disable_hca(dev, 0);
+ mlx5_stop_health_poll(dev, boot);
++ mlx5_core_disable_hca(dev, 0);
+ mlx5_cmd_set_state(dev, MLX5_CMDIF_STATE_DOWN);
+ mlx5_cmd_disable(dev);
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_bloom_filter.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_bloom_filter.c
+index a54eedb69a3f5b..067f0055a55af8 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_bloom_filter.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_bloom_filter.c
+@@ -212,7 +212,22 @@ static const u8 mlxsw_sp4_acl_bf_crc6_tab[256] = {
+ * This array defines key offsets for easy access when copying key blocks from
+ * entry key to Bloom filter chunk.
+ */
+-static const u8 chunk_key_offsets[MLXSW_BLOOM_KEY_CHUNKS] = {2, 20, 38};
++static char *
++mlxsw_sp_acl_bf_enc_key_get(struct mlxsw_sp_acl_atcam_entry *aentry,
++ u8 chunk_index)
++{
++ switch (chunk_index) {
++ case 0:
++ return &aentry->ht_key.enc_key[2];
++ case 1:
++ return &aentry->ht_key.enc_key[20];
++ case 2:
++ return &aentry->ht_key.enc_key[38];
++ default:
++ WARN_ON_ONCE(1);
++ return &aentry->ht_key.enc_key[0];
++ }
++}
+
+ static u16 mlxsw_sp2_acl_bf_crc16_byte(u16 crc, u8 c)
+ {
+@@ -235,9 +250,10 @@ __mlxsw_sp_acl_bf_key_encode(struct mlxsw_sp_acl_atcam_region *aregion,
+ u8 key_offset, u8 chunk_key_len, u8 chunk_len)
+ {
+ struct mlxsw_afk_key_info *key_info = aregion->region->key_info;
+- u8 chunk_index, chunk_count, block_count;
++ u8 chunk_index, chunk_count;
+ char *chunk = output;
+ __be16 erp_region_id;
++ u32 block_count;
+
+ block_count = mlxsw_afk_key_info_blocks_count_get(key_info);
+ chunk_count = 1 + ((block_count - 1) >> 2);
+@@ -245,12 +261,13 @@ __mlxsw_sp_acl_bf_key_encode(struct mlxsw_sp_acl_atcam_region *aregion,
+ (aregion->region->id << 4));
+ for (chunk_index = max_chunks - chunk_count; chunk_index < max_chunks;
+ chunk_index++) {
++ char *enc_key;
++
+ memset(chunk, 0, pad_bytes);
+ memcpy(chunk + pad_bytes, &erp_region_id,
+ sizeof(erp_region_id));
+- memcpy(chunk + key_offset,
+- &aentry->ht_key.enc_key[chunk_key_offsets[chunk_index]],
+- chunk_key_len);
++ enc_key = mlxsw_sp_acl_bf_enc_key_get(aentry, chunk_index);
++ memcpy(chunk + key_offset, enc_key, chunk_key_len);
+ chunk += chunk_len;
+ }
+ *len = chunk_count * chunk_len;
+diff --git a/drivers/net/ethernet/microchip/lan743x_ptp.c b/drivers/net/ethernet/microchip/lan743x_ptp.c
+index 4a777b449ecd03..0be44dcb339387 100644
+--- a/drivers/net/ethernet/microchip/lan743x_ptp.c
++++ b/drivers/net/ethernet/microchip/lan743x_ptp.c
+@@ -942,6 +942,12 @@ static int lan743x_ptp_io_extts(struct lan743x_adapter *adapter, int on,
+
+ extts = &ptp->extts[index];
+
++ if (extts_request->flags & ~(PTP_ENABLE_FEATURE |
++ PTP_RISING_EDGE |
++ PTP_FALLING_EDGE |
++ PTP_STRICT_FLAGS))
++ return -EOPNOTSUPP;
++
+ if (on) {
+ extts_pin = ptp_find_pin(ptp->ptp_clock, PTP_PF_EXTTS, index);
+ if (extts_pin < 0)
+diff --git a/drivers/net/ethernet/renesas/ravb_ptp.c b/drivers/net/ethernet/renesas/ravb_ptp.c
+index 6e4ef7af27bf31..b4365906669f3b 100644
+--- a/drivers/net/ethernet/renesas/ravb_ptp.c
++++ b/drivers/net/ethernet/renesas/ravb_ptp.c
+@@ -179,8 +179,7 @@ static int ravb_ptp_extts(struct ptp_clock_info *ptp,
+ /* Reject requests with unsupported flags */
+ if (req->flags & ~(PTP_ENABLE_FEATURE |
+ PTP_RISING_EDGE |
+- PTP_FALLING_EDGE |
+- PTP_STRICT_FLAGS))
++ PTP_FALLING_EDGE))
+ return -EOPNOTSUPP;
+
+ if (req->index)
+diff --git a/drivers/net/ethernet/sfc/ef100_netdev.c b/drivers/net/ethernet/sfc/ef100_netdev.c
+index 7f7d560cb2b4c9..3a06e3b1bd6bf8 100644
+--- a/drivers/net/ethernet/sfc/ef100_netdev.c
++++ b/drivers/net/ethernet/sfc/ef100_netdev.c
+@@ -450,9 +450,9 @@ int ef100_probe_netdev(struct efx_probe_data *probe_data)
+ net_dev->hw_enc_features |= efx->type->offload_features;
+ net_dev->vlan_features |= NETIF_F_HW_CSUM | NETIF_F_SG |
+ NETIF_F_HIGHDMA | NETIF_F_ALL_TSO;
+- netif_set_tso_max_segs(net_dev,
+- ESE_EF100_DP_GZ_TSO_MAX_HDR_NUM_SEGS_DEFAULT);
+- efx->mdio.dev = net_dev;
++ nic_data = efx->nic_data;
++ netif_set_tso_max_size(efx->net_dev, nic_data->tso_max_payload_len);
++ netif_set_tso_max_segs(efx->net_dev, nic_data->tso_max_payload_num_segs);
+
+ rc = efx_ef100_init_datapath_caps(efx);
+ if (rc < 0)
+@@ -478,7 +478,6 @@ int ef100_probe_netdev(struct efx_probe_data *probe_data)
+ /* Don't fail init if RSS setup doesn't work. */
+ efx_mcdi_push_default_indir_table(efx, efx->n_rx_channels);
+
+- nic_data = efx->nic_data;
+ rc = ef100_get_mac_address(efx, net_dev->perm_addr, CLIENT_HANDLE_SELF,
+ efx->type->is_vf);
+ if (rc)
+diff --git a/drivers/net/ethernet/sfc/ef100_nic.c b/drivers/net/ethernet/sfc/ef100_nic.c
+index 62e674d6ff60c7..3ad95a4c8af2d3 100644
+--- a/drivers/net/ethernet/sfc/ef100_nic.c
++++ b/drivers/net/ethernet/sfc/ef100_nic.c
+@@ -887,8 +887,7 @@ static int ef100_process_design_param(struct efx_nic *efx,
+ case ESE_EF100_DP_GZ_TSO_MAX_HDR_NUM_SEGS:
+ /* We always put HDR_NUM_SEGS=1 in our TSO descriptors */
+ if (!reader->value) {
+- netif_err(efx, probe, efx->net_dev,
+- "TSO_MAX_HDR_NUM_SEGS < 1\n");
++ pci_err(efx->pci_dev, "TSO_MAX_HDR_NUM_SEGS < 1\n");
+ return -EOPNOTSUPP;
+ }
+ return 0;
+@@ -901,32 +900,28 @@ static int ef100_process_design_param(struct efx_nic *efx,
+ */
+ if (!reader->value || reader->value > EFX_MIN_DMAQ_SIZE ||
+ EFX_MIN_DMAQ_SIZE % (u32)reader->value) {
+- netif_err(efx, probe, efx->net_dev,
+- "%s size granularity is %llu, can't guarantee safety\n",
+- reader->type == ESE_EF100_DP_GZ_RXQ_SIZE_GRANULARITY ? "RXQ" : "TXQ",
+- reader->value);
++ pci_err(efx->pci_dev,
++ "%s size granularity is %llu, can't guarantee safety\n",
++ reader->type == ESE_EF100_DP_GZ_RXQ_SIZE_GRANULARITY ? "RXQ" : "TXQ",
++ reader->value);
+ return -EOPNOTSUPP;
+ }
+ return 0;
+ case ESE_EF100_DP_GZ_TSO_MAX_PAYLOAD_LEN:
+ nic_data->tso_max_payload_len = min_t(u64, reader->value,
+ GSO_LEGACY_MAX_SIZE);
+- netif_set_tso_max_size(efx->net_dev,
+- nic_data->tso_max_payload_len);
+ return 0;
+ case ESE_EF100_DP_GZ_TSO_MAX_PAYLOAD_NUM_SEGS:
+ nic_data->tso_max_payload_num_segs = min_t(u64, reader->value, 0xffff);
+- netif_set_tso_max_segs(efx->net_dev,
+- nic_data->tso_max_payload_num_segs);
+ return 0;
+ case ESE_EF100_DP_GZ_TSO_MAX_NUM_FRAMES:
+ nic_data->tso_max_frames = min_t(u64, reader->value, 0xffff);
+ return 0;
+ case ESE_EF100_DP_GZ_COMPAT:
+ if (reader->value) {
+- netif_err(efx, probe, efx->net_dev,
+- "DP_COMPAT has unknown bits %#llx, driver not compatible with this hw\n",
+- reader->value);
++ pci_err(efx->pci_dev,
++ "DP_COMPAT has unknown bits %#llx, driver not compatible with this hw\n",
++ reader->value);
+ return -EOPNOTSUPP;
+ }
+ return 0;
+@@ -946,10 +941,10 @@ static int ef100_process_design_param(struct efx_nic *efx,
+ * So the value of this shouldn't matter.
+ */
+ if (reader->value != ESE_EF100_DP_GZ_VI_STRIDES_DEFAULT)
+- netif_dbg(efx, probe, efx->net_dev,
+- "NIC has other than default VI_STRIDES (mask "
+- "%#llx), early probing might use wrong one\n",
+- reader->value);
++ pci_dbg(efx->pci_dev,
++ "NIC has other than default VI_STRIDES (mask "
++ "%#llx), early probing might use wrong one\n",
++ reader->value);
+ return 0;
+ case ESE_EF100_DP_GZ_RX_MAX_RUNT:
+ /* Driver doesn't look at L2_STATUS:LEN_ERR bit, so we don't
+@@ -961,9 +956,9 @@ static int ef100_process_design_param(struct efx_nic *efx,
+ /* Host interface says "Drivers should ignore design parameters
+ * that they do not recognise."
+ */
+- netif_dbg(efx, probe, efx->net_dev,
+- "Ignoring unrecognised design parameter %u\n",
+- reader->type);
++ pci_dbg(efx->pci_dev,
++ "Ignoring unrecognised design parameter %u\n",
++ reader->type);
+ return 0;
+ }
+ }
+@@ -999,13 +994,13 @@ static int ef100_check_design_params(struct efx_nic *efx)
+ */
+ if (reader.state != EF100_TLV_TYPE) {
+ if (reader.state == EF100_TLV_TYPE_CONT)
+- netif_err(efx, probe, efx->net_dev,
+- "truncated design parameter (incomplete type %u)\n",
+- reader.type);
++ pci_err(efx->pci_dev,
++ "truncated design parameter (incomplete type %u)\n",
++ reader.type);
+ else
+- netif_err(efx, probe, efx->net_dev,
+- "truncated design parameter %u\n",
+- reader.type);
++ pci_err(efx->pci_dev,
++ "truncated design parameter %u\n",
++ reader.type);
+ rc = -EIO;
+ }
+ out:
+diff --git a/drivers/net/ethernet/sfc/efx.c b/drivers/net/ethernet/sfc/efx.c
+index 650136dfc642fd..112e55b98ed3b6 100644
+--- a/drivers/net/ethernet/sfc/efx.c
++++ b/drivers/net/ethernet/sfc/efx.c
+@@ -474,28 +474,6 @@ void efx_get_irq_moderation(struct efx_nic *efx, unsigned int *tx_usecs,
+ }
+ }
+
+-/**************************************************************************
+- *
+- * ioctls
+- *
+- *************************************************************************/
+-
+-/* Net device ioctl
+- * Context: process, rtnl_lock() held.
+- */
+-static int efx_ioctl(struct net_device *net_dev, struct ifreq *ifr, int cmd)
+-{
+- struct efx_nic *efx = efx_netdev_priv(net_dev);
+- struct mii_ioctl_data *data = if_mii(ifr);
+-
+- /* Convert phy_id from older PRTAD/DEVAD format */
+- if ((cmd == SIOCGMIIREG || cmd == SIOCSMIIREG) &&
+- (data->phy_id & 0xfc00) == 0x0400)
+- data->phy_id ^= MDIO_PHY_ID_C45 | 0x0400;
+-
+- return mdio_mii_ioctl(&efx->mdio, data, cmd);
+-}
+-
+ /**************************************************************************
+ *
+ * Kernel net device interface
+@@ -593,7 +571,6 @@ static const struct net_device_ops efx_netdev_ops = {
+ .ndo_tx_timeout = efx_watchdog,
+ .ndo_start_xmit = efx_hard_start_xmit,
+ .ndo_validate_addr = eth_validate_addr,
+- .ndo_eth_ioctl = efx_ioctl,
+ .ndo_change_mtu = efx_change_mtu,
+ .ndo_set_mac_address = efx_set_mac_address,
+ .ndo_set_rx_mode = efx_set_rx_mode,
+@@ -1201,7 +1178,6 @@ static int efx_pci_probe(struct pci_dev *pci_dev,
+ rc = efx_init_struct(efx, pci_dev);
+ if (rc)
+ goto fail1;
+- efx->mdio.dev = net_dev;
+
+ pci_info(pci_dev, "Solarflare NIC detected\n");
+
+diff --git a/drivers/net/ethernet/sfc/mcdi_port.c b/drivers/net/ethernet/sfc/mcdi_port.c
+index ad4694fa3ddae6..7b236d291d8c2f 100644
+--- a/drivers/net/ethernet/sfc/mcdi_port.c
++++ b/drivers/net/ethernet/sfc/mcdi_port.c
+@@ -17,58 +17,6 @@
+ #include "selftest.h"
+ #include "mcdi_port_common.h"
+
+-static int efx_mcdi_mdio_read(struct net_device *net_dev,
+- int prtad, int devad, u16 addr)
+-{
+- struct efx_nic *efx = efx_netdev_priv(net_dev);
+- MCDI_DECLARE_BUF(inbuf, MC_CMD_MDIO_READ_IN_LEN);
+- MCDI_DECLARE_BUF(outbuf, MC_CMD_MDIO_READ_OUT_LEN);
+- size_t outlen;
+- int rc;
+-
+- MCDI_SET_DWORD(inbuf, MDIO_READ_IN_BUS, efx->mdio_bus);
+- MCDI_SET_DWORD(inbuf, MDIO_READ_IN_PRTAD, prtad);
+- MCDI_SET_DWORD(inbuf, MDIO_READ_IN_DEVAD, devad);
+- MCDI_SET_DWORD(inbuf, MDIO_READ_IN_ADDR, addr);
+-
+- rc = efx_mcdi_rpc(efx, MC_CMD_MDIO_READ, inbuf, sizeof(inbuf),
+- outbuf, sizeof(outbuf), &outlen);
+- if (rc)
+- return rc;
+-
+- if (MCDI_DWORD(outbuf, MDIO_READ_OUT_STATUS) !=
+- MC_CMD_MDIO_STATUS_GOOD)
+- return -EIO;
+-
+- return (u16)MCDI_DWORD(outbuf, MDIO_READ_OUT_VALUE);
+-}
+-
+-static int efx_mcdi_mdio_write(struct net_device *net_dev,
+- int prtad, int devad, u16 addr, u16 value)
+-{
+- struct efx_nic *efx = efx_netdev_priv(net_dev);
+- MCDI_DECLARE_BUF(inbuf, MC_CMD_MDIO_WRITE_IN_LEN);
+- MCDI_DECLARE_BUF(outbuf, MC_CMD_MDIO_WRITE_OUT_LEN);
+- size_t outlen;
+- int rc;
+-
+- MCDI_SET_DWORD(inbuf, MDIO_WRITE_IN_BUS, efx->mdio_bus);
+- MCDI_SET_DWORD(inbuf, MDIO_WRITE_IN_PRTAD, prtad);
+- MCDI_SET_DWORD(inbuf, MDIO_WRITE_IN_DEVAD, devad);
+- MCDI_SET_DWORD(inbuf, MDIO_WRITE_IN_ADDR, addr);
+- MCDI_SET_DWORD(inbuf, MDIO_WRITE_IN_VALUE, value);
+-
+- rc = efx_mcdi_rpc(efx, MC_CMD_MDIO_WRITE, inbuf, sizeof(inbuf),
+- outbuf, sizeof(outbuf), &outlen);
+- if (rc)
+- return rc;
+-
+- if (MCDI_DWORD(outbuf, MDIO_WRITE_OUT_STATUS) !=
+- MC_CMD_MDIO_STATUS_GOOD)
+- return -EIO;
+-
+- return 0;
+-}
+
+ u32 efx_mcdi_phy_get_caps(struct efx_nic *efx)
+ {
+@@ -97,12 +45,7 @@ int efx_mcdi_port_probe(struct efx_nic *efx)
+ {
+ int rc;
+
+- /* Set up MDIO structure for PHY */
+- efx->mdio.mode_support = MDIO_SUPPORTS_C45 | MDIO_EMULATE_C22;
+- efx->mdio.mdio_read = efx_mcdi_mdio_read;
+- efx->mdio.mdio_write = efx_mcdi_mdio_write;
+-
+- /* Fill out MDIO structure, loopback modes, and initial link state */
++ /* Fill out loopback modes and initial link state */
+ rc = efx_mcdi_phy_probe(efx);
+ if (rc != 0)
+ return rc;
+diff --git a/drivers/net/ethernet/sfc/mcdi_port_common.c b/drivers/net/ethernet/sfc/mcdi_port_common.c
+index 76ea26722ca469..dae684194ac8c9 100644
+--- a/drivers/net/ethernet/sfc/mcdi_port_common.c
++++ b/drivers/net/ethernet/sfc/mcdi_port_common.c
+@@ -448,15 +448,6 @@ int efx_mcdi_phy_probe(struct efx_nic *efx)
+ efx->phy_data = phy_data;
+ efx->phy_type = phy_data->type;
+
+- efx->mdio_bus = phy_data->channel;
+- efx->mdio.prtad = phy_data->port;
+- efx->mdio.mmds = phy_data->mmd_mask & ~(1 << MC_CMD_MMD_CLAUSE22);
+- efx->mdio.mode_support = 0;
+- if (phy_data->mmd_mask & (1 << MC_CMD_MMD_CLAUSE22))
+- efx->mdio.mode_support |= MDIO_SUPPORTS_C22;
+- if (phy_data->mmd_mask & ~(1 << MC_CMD_MMD_CLAUSE22))
+- efx->mdio.mode_support |= MDIO_SUPPORTS_C45 | MDIO_EMULATE_C22;
+-
+ caps = MCDI_DWORD(outbuf, GET_LINK_OUT_CAP);
+ if (caps & (1 << MC_CMD_PHY_CAP_AN_LBN))
+ mcdi_to_ethtool_linkset(phy_data->media, caps,
+@@ -546,8 +537,6 @@ void efx_mcdi_phy_get_link_ksettings(struct efx_nic *efx, struct ethtool_link_ks
+ cmd->base.port = mcdi_to_ethtool_media(phy_cfg->media);
+ cmd->base.phy_address = phy_cfg->port;
+ cmd->base.autoneg = !!(efx->link_advertising[0] & ADVERTISED_Autoneg);
+- cmd->base.mdio_support = (efx->mdio.mode_support &
+- (MDIO_SUPPORTS_C45 | MDIO_SUPPORTS_C22));
+
+ mcdi_to_ethtool_linkset(phy_cfg->media, phy_cfg->supported_cap,
+ cmd->link_modes.supported);
+diff --git a/drivers/net/ethernet/sfc/net_driver.h b/drivers/net/ethernet/sfc/net_driver.h
+index f70a7b7d6345c5..1d3e0f3101d4fc 100644
+--- a/drivers/net/ethernet/sfc/net_driver.h
++++ b/drivers/net/ethernet/sfc/net_driver.h
+@@ -15,7 +15,7 @@
+ #include <linux/ethtool.h>
+ #include <linux/if_vlan.h>
+ #include <linux/timer.h>
+-#include <linux/mdio.h>
++#include <linux/mii.h>
+ #include <linux/list.h>
+ #include <linux/pci.h>
+ #include <linux/device.h>
+@@ -956,8 +956,6 @@ struct efx_mae;
+ * @stats_buffer: DMA buffer for statistics
+ * @phy_type: PHY type
+ * @phy_data: PHY private data (including PHY-specific stats)
+- * @mdio: PHY MDIO interface
+- * @mdio_bus: PHY MDIO bus ID (only used by Siena)
+ * @phy_mode: PHY operating mode. Serialised by @mac_lock.
+ * @link_advertising: Autonegotiation advertising flags
+ * @fec_config: Forward Error Correction configuration flags. For bit positions
+@@ -1131,8 +1129,6 @@ struct efx_nic {
+
+ unsigned int phy_type;
+ void *phy_data;
+- struct mdio_if_info mdio;
+- unsigned int mdio_bus;
+ enum efx_phy_mode phy_mode;
+
+ __ETHTOOL_DECLARE_LINK_MODE_MASK(link_advertising);
+diff --git a/drivers/net/ethernet/wangxun/libwx/wx_lib.c b/drivers/net/ethernet/wangxun/libwx/wx_lib.c
+index 2b3d6586f44a53..497abf2723a5e4 100644
+--- a/drivers/net/ethernet/wangxun/libwx/wx_lib.c
++++ b/drivers/net/ethernet/wangxun/libwx/wx_lib.c
+@@ -1082,26 +1082,6 @@ static void wx_tx_ctxtdesc(struct wx_ring *tx_ring, u32 vlan_macip_lens,
+ context_desc->mss_l4len_idx = cpu_to_le32(mss_l4len_idx);
+ }
+
+-static void wx_get_ipv6_proto(struct sk_buff *skb, int offset, u8 *nexthdr)
+-{
+- struct ipv6hdr *hdr = (struct ipv6hdr *)(skb->data + offset);
+-
+- *nexthdr = hdr->nexthdr;
+- offset += sizeof(struct ipv6hdr);
+- while (ipv6_ext_hdr(*nexthdr)) {
+- struct ipv6_opt_hdr _hdr, *hp;
+-
+- if (*nexthdr == NEXTHDR_NONE)
+- return;
+- hp = skb_header_pointer(skb, offset, sizeof(_hdr), &_hdr);
+- if (!hp)
+- return;
+- if (*nexthdr == NEXTHDR_FRAGMENT)
+- break;
+- *nexthdr = hp->nexthdr;
+- }
+-}
+-
+ union network_header {
+ struct iphdr *ipv4;
+ struct ipv6hdr *ipv6;
+@@ -1112,6 +1092,8 @@ static u8 wx_encode_tx_desc_ptype(const struct wx_tx_buffer *first)
+ {
+ u8 tun_prot = 0, l4_prot = 0, ptype = 0;
+ struct sk_buff *skb = first->skb;
++ unsigned char *exthdr, *l4_hdr;
++ __be16 frag_off;
+
+ if (skb->encapsulation) {
+ union network_header hdr;
+@@ -1122,14 +1104,18 @@ static u8 wx_encode_tx_desc_ptype(const struct wx_tx_buffer *first)
+ ptype = WX_PTYPE_TUN_IPV4;
+ break;
+ case htons(ETH_P_IPV6):
+- wx_get_ipv6_proto(skb, skb_network_offset(skb), &tun_prot);
++ l4_hdr = skb_transport_header(skb);
++ exthdr = skb_network_header(skb) + sizeof(struct ipv6hdr);
++ tun_prot = ipv6_hdr(skb)->nexthdr;
++ if (l4_hdr != exthdr)
++ ipv6_skip_exthdr(skb, exthdr - skb->data, &tun_prot, &frag_off);
+ ptype = WX_PTYPE_TUN_IPV6;
+ break;
+ default:
+ return ptype;
+ }
+
+- if (tun_prot == IPPROTO_IPIP) {
++ if (tun_prot == IPPROTO_IPIP || tun_prot == IPPROTO_IPV6) {
+ hdr.raw = (void *)inner_ip_hdr(skb);
+ ptype |= WX_PTYPE_PKT_IPIP;
+ } else if (tun_prot == IPPROTO_UDP) {
+@@ -1166,7 +1152,11 @@ static u8 wx_encode_tx_desc_ptype(const struct wx_tx_buffer *first)
+ l4_prot = hdr.ipv4->protocol;
+ break;
+ case 6:
+- wx_get_ipv6_proto(skb, skb_inner_network_offset(skb), &l4_prot);
++ l4_hdr = skb_inner_transport_header(skb);
++ exthdr = skb_inner_network_header(skb) + sizeof(struct ipv6hdr);
++ l4_prot = inner_ipv6_hdr(skb)->nexthdr;
++ if (l4_hdr != exthdr)
++ ipv6_skip_exthdr(skb, exthdr - skb->data, &l4_prot, &frag_off);
+ ptype |= WX_PTYPE_PKT_IPV6;
+ break;
+ default:
+@@ -1179,7 +1169,11 @@ static u8 wx_encode_tx_desc_ptype(const struct wx_tx_buffer *first)
+ ptype = WX_PTYPE_PKT_IP;
+ break;
+ case htons(ETH_P_IPV6):
+- wx_get_ipv6_proto(skb, skb_network_offset(skb), &l4_prot);
++ l4_hdr = skb_transport_header(skb);
++ exthdr = skb_network_header(skb) + sizeof(struct ipv6hdr);
++ l4_prot = ipv6_hdr(skb)->nexthdr;
++ if (l4_hdr != exthdr)
++ ipv6_skip_exthdr(skb, exthdr - skb->data, &l4_prot, &frag_off);
+ ptype = WX_PTYPE_PKT_IP | WX_PTYPE_PKT_IPV6;
+ break;
+ default:
+@@ -1269,13 +1263,20 @@ static int wx_tso(struct wx_ring *tx_ring, struct wx_tx_buffer *first,
+
+ /* vlan_macip_lens: HEADLEN, MACLEN, VLAN tag */
+ if (enc) {
++ unsigned char *exthdr, *l4_hdr;
++ __be16 frag_off;
++
+ switch (first->protocol) {
+ case htons(ETH_P_IP):
+ tun_prot = ip_hdr(skb)->protocol;
+ first->tx_flags |= WX_TX_FLAGS_OUTER_IPV4;
+ break;
+ case htons(ETH_P_IPV6):
++ l4_hdr = skb_transport_header(skb);
++ exthdr = skb_network_header(skb) + sizeof(struct ipv6hdr);
+ tun_prot = ipv6_hdr(skb)->nexthdr;
++ if (l4_hdr != exthdr)
++ ipv6_skip_exthdr(skb, exthdr - skb->data, &tun_prot, &frag_off);
+ break;
+ default:
+ break;
+@@ -1298,6 +1299,7 @@ static int wx_tso(struct wx_ring *tx_ring, struct wx_tx_buffer *first,
+ WX_TXD_TUNNEL_LEN_SHIFT);
+ break;
+ case IPPROTO_IPIP:
++ case IPPROTO_IPV6:
+ tunhdr_eiplen_tunlen = (((char *)inner_ip_hdr(skb) -
+ (char *)ip_hdr(skb)) >> 2) <<
+ WX_TXD_OUTER_IPLEN_SHIFT;
+@@ -1335,12 +1337,15 @@ static void wx_tx_csum(struct wx_ring *tx_ring, struct wx_tx_buffer *first,
+ u8 tun_prot = 0;
+
+ if (skb->ip_summed != CHECKSUM_PARTIAL) {
++csum_failed:
+ if (!(first->tx_flags & WX_TX_FLAGS_HW_VLAN) &&
+ !(first->tx_flags & WX_TX_FLAGS_CC))
+ return;
+ vlan_macip_lens = skb_network_offset(skb) <<
+ WX_TXD_MACLEN_SHIFT;
+ } else {
++ unsigned char *exthdr, *l4_hdr;
++ __be16 frag_off;
+ u8 l4_prot = 0;
+ union {
+ struct iphdr *ipv4;
+@@ -1362,7 +1367,12 @@ static void wx_tx_csum(struct wx_ring *tx_ring, struct wx_tx_buffer *first,
+ tun_prot = ip_hdr(skb)->protocol;
+ break;
+ case htons(ETH_P_IPV6):
++ l4_hdr = skb_transport_header(skb);
++ exthdr = skb_network_header(skb) + sizeof(struct ipv6hdr);
+ tun_prot = ipv6_hdr(skb)->nexthdr;
++ if (l4_hdr != exthdr)
++ ipv6_skip_exthdr(skb, exthdr - skb->data,
++ &tun_prot, &frag_off);
+ break;
+ default:
+ return;
+@@ -1386,6 +1396,7 @@ static void wx_tx_csum(struct wx_ring *tx_ring, struct wx_tx_buffer *first,
+ WX_TXD_TUNNEL_LEN_SHIFT);
+ break;
+ case IPPROTO_IPIP:
++ case IPPROTO_IPV6:
+ tunhdr_eiplen_tunlen = (((char *)inner_ip_hdr(skb) -
+ (char *)ip_hdr(skb)) >> 2) <<
+ WX_TXD_OUTER_IPLEN_SHIFT;
+@@ -1408,7 +1419,10 @@ static void wx_tx_csum(struct wx_ring *tx_ring, struct wx_tx_buffer *first,
+ break;
+ case 6:
+ vlan_macip_lens |= (transport_hdr.raw - network_hdr.raw) >> 1;
++ exthdr = network_hdr.raw + sizeof(struct ipv6hdr);
+ l4_prot = network_hdr.ipv6->nexthdr;
++ if (transport_hdr.raw != exthdr)
++ ipv6_skip_exthdr(skb, exthdr - skb->data, &l4_prot, &frag_off);
+ break;
+ default:
+ break;
+@@ -1428,7 +1442,8 @@ static void wx_tx_csum(struct wx_ring *tx_ring, struct wx_tx_buffer *first,
+ WX_TXD_L4LEN_SHIFT;
+ break;
+ default:
+- break;
++ skb_checksum_help(skb);
++ goto csum_failed;
+ }
+
+ /* update TX checksum flag */
+diff --git a/drivers/net/ipvlan/ipvlan_l3s.c b/drivers/net/ipvlan/ipvlan_l3s.c
+index b4ef386bdb1ba5..7c017fe35522a2 100644
+--- a/drivers/net/ipvlan/ipvlan_l3s.c
++++ b/drivers/net/ipvlan/ipvlan_l3s.c
+@@ -226,5 +226,4 @@ void ipvlan_l3s_unregister(struct ipvl_port *port)
+
+ dev->priv_flags &= ~IFF_L3MDEV_RX_HANDLER;
+ ipvlan_unregister_nf_hook(read_pnet(&port->pnet));
+- dev->l3mdev_ops = NULL;
+ }
+diff --git a/drivers/net/phy/bcm-phy-ptp.c b/drivers/net/phy/bcm-phy-ptp.c
+index 208e8f561e0696..eba8b5fb1365f4 100644
+--- a/drivers/net/phy/bcm-phy-ptp.c
++++ b/drivers/net/phy/bcm-phy-ptp.c
+@@ -597,7 +597,8 @@ static int bcm_ptp_perout_locked(struct bcm_ptp_private *priv,
+
+ period = BCM_MAX_PERIOD_8NS; /* write nonzero value */
+
+- if (req->flags & PTP_PEROUT_PHASE)
++ /* Reject unsupported flags */
++ if (req->flags & ~PTP_PEROUT_DUTY_CYCLE)
+ return -EOPNOTSUPP;
+
+ if (req->flags & PTP_PEROUT_DUTY_CYCLE)
+diff --git a/drivers/net/phy/broadcom.c b/drivers/net/phy/broadcom.c
+index 22edb7e4c1a1f9..5a963b2b7ea78c 100644
+--- a/drivers/net/phy/broadcom.c
++++ b/drivers/net/phy/broadcom.c
+@@ -859,7 +859,7 @@ static int brcm_fet_config_init(struct phy_device *phydev)
+ return reg;
+
+ /* Unmask events we are interested in and mask interrupts globally. */
+- if (phydev->phy_id == PHY_ID_BCM5221)
++ if (phydev->drv->phy_id == PHY_ID_BCM5221)
+ reg = MII_BRCM_FET_IR_ENABLE |
+ MII_BRCM_FET_IR_MASK;
+ else
+@@ -888,7 +888,7 @@ static int brcm_fet_config_init(struct phy_device *phydev)
+ return err;
+ }
+
+- if (phydev->phy_id != PHY_ID_BCM5221) {
++ if (phydev->drv->phy_id != PHY_ID_BCM5221) {
+ /* Set the LED mode */
+ reg = __phy_read(phydev, MII_BRCM_FET_SHDW_AUXMODE4);
+ if (reg < 0) {
+@@ -1009,7 +1009,7 @@ static int brcm_fet_suspend(struct phy_device *phydev)
+ return err;
+ }
+
+- if (phydev->phy_id == PHY_ID_BCM5221)
++ if (phydev->drv->phy_id == PHY_ID_BCM5221)
+ /* Force Low Power Mode with clock enabled */
+ reg = BCM5221_SHDW_AM4_EN_CLK_LPM | BCM5221_SHDW_AM4_FORCE_LPM;
+ else
+diff --git a/drivers/net/usb/rndis_host.c b/drivers/net/usb/rndis_host.c
+index 7b3739b29c8f72..bb0bf141587274 100644
+--- a/drivers/net/usb/rndis_host.c
++++ b/drivers/net/usb/rndis_host.c
+@@ -630,6 +630,16 @@ static const struct driver_info zte_rndis_info = {
+ .tx_fixup = rndis_tx_fixup,
+ };
+
++static const struct driver_info wwan_rndis_info = {
++ .description = "Mobile Broadband RNDIS device",
++ .flags = FLAG_WWAN | FLAG_POINTTOPOINT | FLAG_FRAMING_RN | FLAG_NO_SETINT,
++ .bind = rndis_bind,
++ .unbind = rndis_unbind,
++ .status = rndis_status,
++ .rx_fixup = rndis_rx_fixup,
++ .tx_fixup = rndis_tx_fixup,
++};
++
+ /*-------------------------------------------------------------------------*/
+
+ static const struct usb_device_id products [] = {
+@@ -666,9 +676,11 @@ static const struct usb_device_id products [] = {
+ USB_INTERFACE_INFO(USB_CLASS_WIRELESS_CONTROLLER, 1, 3),
+ .driver_info = (unsigned long) &rndis_info,
+ }, {
+- /* Novatel Verizon USB730L */
++ /* Mobile Broadband Modem, seen in Novatel Verizon USB730L and
++ * Telit FN990A (RNDIS)
++ */
+ USB_INTERFACE_INFO(USB_CLASS_MISC, 4, 1),
+- .driver_info = (unsigned long) &rndis_info,
++ .driver_info = (unsigned long)&wwan_rndis_info,
+ },
+ { }, // END
+ };
+diff --git a/drivers/net/usb/usbnet.c b/drivers/net/usb/usbnet.c
+index aeab2308b15008..724b93aa4f7eb3 100644
+--- a/drivers/net/usb/usbnet.c
++++ b/drivers/net/usb/usbnet.c
+@@ -530,7 +530,8 @@ static int rx_submit (struct usbnet *dev, struct urb *urb, gfp_t flags)
+ netif_device_present (dev->net) &&
+ test_bit(EVENT_DEV_OPEN, &dev->flags) &&
+ !test_bit (EVENT_RX_HALT, &dev->flags) &&
+- !test_bit (EVENT_DEV_ASLEEP, &dev->flags)) {
++ !test_bit (EVENT_DEV_ASLEEP, &dev->flags) &&
++ !usbnet_going_away(dev)) {
+ switch (retval = usb_submit_urb (urb, GFP_ATOMIC)) {
+ case -EPIPE:
+ usbnet_defer_kevent (dev, EVENT_RX_HALT);
+@@ -551,8 +552,7 @@ static int rx_submit (struct usbnet *dev, struct urb *urb, gfp_t flags)
+ tasklet_schedule (&dev->bh);
+ break;
+ case 0:
+- if (!usbnet_going_away(dev))
+- __usbnet_queue_skb(&dev->rxq, skb, rx_start);
++ __usbnet_queue_skb(&dev->rxq, skb, rx_start);
+ }
+ } else {
+ netif_dbg(dev, ifdown, dev->net, "rx: stopped\n");
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index 7646ddd9bef70c..d1ed544ba03ac4 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -368,15 +368,15 @@ struct receive_queue {
+ */
+ #define VIRTIO_NET_RSS_MAX_KEY_SIZE 40
+ struct virtio_net_ctrl_rss {
+- u32 hash_types;
+- u16 indirection_table_mask;
+- u16 unclassified_queue;
+- u16 hash_cfg_reserved; /* for HASH_CONFIG (see virtio_net_hash_config for details) */
+- u16 max_tx_vq;
++ __le32 hash_types;
++ __le16 indirection_table_mask;
++ __le16 unclassified_queue;
++ __le16 hash_cfg_reserved; /* for HASH_CONFIG (see virtio_net_hash_config for details) */
++ __le16 max_tx_vq;
+ u8 hash_key_length;
+ u8 key[VIRTIO_NET_RSS_MAX_KEY_SIZE];
+
+- u16 *indirection_table;
++ __le16 *indirection_table;
+ };
+
+ /* Control VQ buffers: protected by the rtnl lock */
+@@ -3576,9 +3576,9 @@ static void virtnet_rss_update_by_qpairs(struct virtnet_info *vi, u16 queue_pair
+
+ for (; i < vi->rss_indir_table_size; ++i) {
+ indir_val = ethtool_rxfh_indir_default(i, queue_pairs);
+- vi->rss.indirection_table[i] = indir_val;
++ vi->rss.indirection_table[i] = cpu_to_le16(indir_val);
+ }
+- vi->rss.max_tx_vq = queue_pairs;
++ vi->rss.max_tx_vq = cpu_to_le16(queue_pairs);
+ }
+
+ static int virtnet_set_queues(struct virtnet_info *vi, u16 queue_pairs)
+@@ -4097,10 +4097,10 @@ static bool virtnet_commit_rss_command(struct virtnet_info *vi)
+
+ static void virtnet_init_default_rss(struct virtnet_info *vi)
+ {
+- vi->rss.hash_types = vi->rss_hash_types_supported;
++ vi->rss.hash_types = cpu_to_le32(vi->rss_hash_types_supported);
+ vi->rss_hash_types_saved = vi->rss_hash_types_supported;
+ vi->rss.indirection_table_mask = vi->rss_indir_table_size
+- ? vi->rss_indir_table_size - 1 : 0;
++ ? cpu_to_le16(vi->rss_indir_table_size - 1) : 0;
+ vi->rss.unclassified_queue = 0;
+
+ virtnet_rss_update_by_qpairs(vi, vi->curr_queue_pairs);
+@@ -4218,7 +4218,7 @@ static bool virtnet_set_hashflow(struct virtnet_info *vi, struct ethtool_rxnfc *
+
+ if (new_hashtypes != vi->rss_hash_types_saved) {
+ vi->rss_hash_types_saved = new_hashtypes;
+- vi->rss.hash_types = vi->rss_hash_types_saved;
++ vi->rss.hash_types = cpu_to_le32(vi->rss_hash_types_saved);
+ if (vi->dev->features & NETIF_F_RXHASH)
+ return virtnet_commit_rss_command(vi);
+ }
+@@ -5398,7 +5398,7 @@ static int virtnet_get_rxfh(struct net_device *dev,
+
+ if (rxfh->indir) {
+ for (i = 0; i < vi->rss_indir_table_size; ++i)
+- rxfh->indir[i] = vi->rss.indirection_table[i];
++ rxfh->indir[i] = le16_to_cpu(vi->rss.indirection_table[i]);
+ }
+
+ if (rxfh->key)
+@@ -5426,7 +5426,7 @@ static int virtnet_set_rxfh(struct net_device *dev,
+ return -EOPNOTSUPP;
+
+ for (i = 0; i < vi->rss_indir_table_size; ++i)
+- vi->rss.indirection_table[i] = rxfh->indir[i];
++ vi->rss.indirection_table[i] = cpu_to_le16(rxfh->indir[i]);
+ update = true;
+ }
+
+@@ -6044,9 +6044,9 @@ static int virtnet_set_features(struct net_device *dev,
+
+ if ((dev->features ^ features) & NETIF_F_RXHASH) {
+ if (features & NETIF_F_RXHASH)
+- vi->rss.hash_types = vi->rss_hash_types_saved;
++ vi->rss.hash_types = cpu_to_le32(vi->rss_hash_types_saved);
+ else
+- vi->rss.hash_types = VIRTIO_NET_HASH_REPORT_NONE;
++ vi->rss.hash_types = cpu_to_le32(VIRTIO_NET_HASH_REPORT_NONE);
+
+ if (!virtnet_commit_rss_command(vi))
+ return -EINVAL;
+diff --git a/drivers/net/vmxnet3/vmxnet3_drv.c b/drivers/net/vmxnet3/vmxnet3_drv.c
+index 6793fa09f9d1ad..3df6aabc7e339e 100644
+--- a/drivers/net/vmxnet3/vmxnet3_drv.c
++++ b/drivers/net/vmxnet3/vmxnet3_drv.c
+@@ -2033,6 +2033,11 @@ vmxnet3_rq_cleanup(struct vmxnet3_rx_queue *rq,
+
+ rq->comp_ring.gen = VMXNET3_INIT_GEN;
+ rq->comp_ring.next2proc = 0;
++
++ if (xdp_rxq_info_is_reg(&rq->xdp_rxq))
++ xdp_rxq_info_unreg(&rq->xdp_rxq);
++ page_pool_destroy(rq->page_pool);
++ rq->page_pool = NULL;
+ }
+
+
+@@ -2073,11 +2078,6 @@ static void vmxnet3_rq_destroy(struct vmxnet3_rx_queue *rq,
+ }
+ }
+
+- if (xdp_rxq_info_is_reg(&rq->xdp_rxq))
+- xdp_rxq_info_unreg(&rq->xdp_rxq);
+- page_pool_destroy(rq->page_pool);
+- rq->page_pool = NULL;
+-
+ if (rq->data_ring.base) {
+ dma_free_coherent(&adapter->pdev->dev,
+ rq->rx_ring[0].size * rq->data_ring.desc_size,
+diff --git a/drivers/net/wireless/ath/ath11k/dp_rx.c b/drivers/net/wireless/ath/ath11k/dp_rx.c
+index 029ecf51c9efdb..b8b3dce9cdb53a 100644
+--- a/drivers/net/wireless/ath/ath11k/dp_rx.c
++++ b/drivers/net/wireless/ath/ath11k/dp_rx.c
+@@ -4783,7 +4783,7 @@ u32 ath11k_dp_rx_mon_mpdu_pop(struct ath11k *ar, int mac_id,
+ if (!msdu) {
+ ath11k_dbg(ar->ab, ATH11K_DBG_DATA,
+ "msdu_pop: invalid buf_id %d\n", buf_id);
+- break;
++ goto next_msdu;
+ }
+ rxcb = ATH11K_SKB_RXCB(msdu);
+ if (!rxcb->unmapped) {
+@@ -5148,7 +5148,7 @@ static void ath11k_dp_rx_mon_dest_process(struct ath11k *ar, int mac_id,
+ struct ath11k_mon_data *pmon = (struct ath11k_mon_data *)&dp->mon_data;
+ const struct ath11k_hw_hal_params *hal_params;
+ void *ring_entry;
+- void *mon_dst_srng;
++ struct hal_srng *mon_dst_srng;
+ u32 ppdu_id;
+ u32 rx_bufs_used;
+ u32 ring_id;
+@@ -5165,6 +5165,7 @@ static void ath11k_dp_rx_mon_dest_process(struct ath11k *ar, int mac_id,
+
+ spin_lock_bh(&pmon->mon_lock);
+
++ spin_lock_bh(&mon_dst_srng->lock);
+ ath11k_hal_srng_access_begin(ar->ab, mon_dst_srng);
+
+ ppdu_id = pmon->mon_ppdu_info.ppdu_id;
+@@ -5223,6 +5224,7 @@ static void ath11k_dp_rx_mon_dest_process(struct ath11k *ar, int mac_id,
+ mon_dst_srng);
+ }
+ ath11k_hal_srng_access_end(ar->ab, mon_dst_srng);
++ spin_unlock_bh(&mon_dst_srng->lock);
+
+ spin_unlock_bh(&pmon->mon_lock);
+
+@@ -5410,7 +5412,7 @@ ath11k_dp_rx_full_mon_mpdu_pop(struct ath11k *ar,
+ "full mon msdu_pop: invalid buf_id %d\n",
+ buf_id);
+ spin_unlock_bh(&rx_ring->idr_lock);
+- break;
++ goto next_msdu;
+ }
+ idr_remove(&rx_ring->bufs_idr, buf_id);
+ spin_unlock_bh(&rx_ring->idr_lock);
+@@ -5612,7 +5614,7 @@ static int ath11k_dp_full_mon_process_rx(struct ath11k_base *ab, int mac_id,
+ struct hal_sw_mon_ring_entries *sw_mon_entries;
+ struct ath11k_pdev_mon_stats *rx_mon_stats;
+ struct sk_buff *head_msdu, *tail_msdu;
+- void *mon_dst_srng = &ar->ab->hal.srng_list[dp->rxdma_mon_dst_ring.ring_id];
++ struct hal_srng *mon_dst_srng;
+ void *ring_entry;
+ u32 rx_bufs_used = 0, mpdu_rx_bufs_used;
+ int quota = 0, ret;
+@@ -5628,6 +5630,9 @@ static int ath11k_dp_full_mon_process_rx(struct ath11k_base *ab, int mac_id,
+ goto reap_status_ring;
+ }
+
++ mon_dst_srng = &ar->ab->hal.srng_list[dp->rxdma_mon_dst_ring.ring_id];
++ spin_lock_bh(&mon_dst_srng->lock);
++
+ ath11k_hal_srng_access_begin(ar->ab, mon_dst_srng);
+ while ((ring_entry = ath11k_hal_srng_dst_peek(ar->ab, mon_dst_srng))) {
+ head_msdu = NULL;
+@@ -5671,6 +5676,7 @@ static int ath11k_dp_full_mon_process_rx(struct ath11k_base *ab, int mac_id,
+ }
+
+ ath11k_hal_srng_access_end(ar->ab, mon_dst_srng);
++ spin_unlock_bh(&mon_dst_srng->lock);
+ spin_unlock_bh(&pmon->mon_lock);
+
+ if (rx_bufs_used) {
+diff --git a/drivers/net/wireless/ath/ath11k/mac.c b/drivers/net/wireless/ath/ath11k/mac.c
+index 1556392f7ad489..1298a3190a3c5d 100644
+--- a/drivers/net/wireless/ath/ath11k/mac.c
++++ b/drivers/net/wireless/ath/ath11k/mac.c
+@@ -5336,8 +5336,6 @@ static int ath11k_mac_set_txbf_conf(struct ath11k_vif *arvif)
+ if (vht_cap & (IEEE80211_VHT_CAP_SU_BEAMFORMEE_CAPABLE)) {
+ nsts = vht_cap & IEEE80211_VHT_CAP_BEAMFORMEE_STS_MASK;
+ nsts >>= IEEE80211_VHT_CAP_BEAMFORMEE_STS_SHIFT;
+- if (nsts > (ar->num_rx_chains - 1))
+- nsts = ar->num_rx_chains - 1;
+ value |= SM(nsts, WMI_TXBF_STS_CAP_OFFSET);
+ }
+
+@@ -5421,9 +5419,6 @@ static void ath11k_set_vht_txbf_cap(struct ath11k *ar, u32 *vht_cap)
+
+ /* Enable Beamformee STS Field only if SU BF is enabled */
+ if (subfee) {
+- if (nsts > (ar->num_rx_chains - 1))
+- nsts = ar->num_rx_chains - 1;
+-
+ nsts <<= IEEE80211_VHT_CAP_BEAMFORMEE_STS_SHIFT;
+ nsts &= IEEE80211_VHT_CAP_BEAMFORMEE_STS_MASK;
+ *vht_cap |= nsts;
+diff --git a/drivers/net/wireless/ath/ath11k/pci.c b/drivers/net/wireless/ath/ath11k/pci.c
+index b93f04973ad79f..eaac9eabcc70a6 100644
+--- a/drivers/net/wireless/ath/ath11k/pci.c
++++ b/drivers/net/wireless/ath/ath11k/pci.c
+@@ -939,6 +939,8 @@ static int ath11k_pci_probe(struct pci_dev *pdev,
+ return 0;
+
+ err_free_irq:
++ /* __free_irq() expects the caller to have cleared the affinity hint */
++ ath11k_pci_set_irq_affinity_hint(ab_pci, NULL);
+ ath11k_pcic_free_irq(ab);
+
+ err_ce_free:
+diff --git a/drivers/net/wireless/ath/ath11k/reg.c b/drivers/net/wireless/ath/ath11k/reg.c
+index b0f289784dd3a2..7bfe47ad62a07f 100644
+--- a/drivers/net/wireless/ath/ath11k/reg.c
++++ b/drivers/net/wireless/ath/ath11k/reg.c
+@@ -1,7 +1,7 @@
+ // SPDX-License-Identifier: BSD-3-Clause-Clear
+ /*
+ * Copyright (c) 2018-2019 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2024 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2025 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+ #include <linux/rtnetlink.h>
+
+@@ -55,6 +55,19 @@ ath11k_reg_notifier(struct wiphy *wiphy, struct regulatory_request *request)
+ ath11k_dbg(ar->ab, ATH11K_DBG_REG,
+ "Regulatory Notification received for %s\n", wiphy_name(wiphy));
+
++ if (request->initiator == NL80211_REGDOM_SET_BY_DRIVER) {
++ ath11k_dbg(ar->ab, ATH11K_DBG_REG,
++ "driver initiated regd update\n");
++ if (ar->state != ATH11K_STATE_ON)
++ return;
++
++ ret = ath11k_reg_update_chan_list(ar, true);
++ if (ret)
++ ath11k_warn(ar->ab, "failed to update channel list: %d\n", ret);
++
++ return;
++ }
++
+ /* Currently supporting only General User Hints. Cell base user
+ * hints to be handled later.
+ * Hints from other sources like Core, Beacons are not expected for
+@@ -293,12 +306,6 @@ int ath11k_regd_update(struct ath11k *ar)
+ if (ret)
+ goto err;
+
+- if (ar->state == ATH11K_STATE_ON) {
+- ret = ath11k_reg_update_chan_list(ar, true);
+- if (ret)
+- goto err;
+- }
+-
+ return 0;
+ err:
+ ath11k_warn(ab, "failed to perform regd update : %d\n", ret);
+@@ -977,6 +984,7 @@ void ath11k_regd_update_work(struct work_struct *work)
+ void ath11k_reg_init(struct ath11k *ar)
+ {
+ ar->hw->wiphy->regulatory_flags = REGULATORY_WIPHY_SELF_MANAGED;
++ ar->hw->wiphy->flags |= WIPHY_FLAG_NOTIFY_REGDOM_BY_DRIVER;
+ ar->hw->wiphy->reg_notifier = ath11k_reg_notifier;
+ }
+
+diff --git a/drivers/net/wireless/ath/ath12k/core.c b/drivers/net/wireless/ath/ath12k/core.c
+index 0606116d6b9c49..212cd935e60a05 100644
+--- a/drivers/net/wireless/ath/ath12k/core.c
++++ b/drivers/net/wireless/ath/ath12k/core.c
+@@ -1122,16 +1122,18 @@ int ath12k_core_qmi_firmware_ready(struct ath12k_base *ab)
+ ath12k_core_stop(ab);
+ mutex_unlock(&ab->core_lock);
+ }
++ mutex_unlock(&ag->mutex);
+ goto exit;
+
+ err_dp_free:
+ ath12k_dp_free(ab);
+ mutex_unlock(&ab->core_lock);
++ mutex_unlock(&ag->mutex);
++
+ err_firmware_stop:
+ ath12k_qmi_firmware_stop(ab);
+
+ exit:
+- mutex_unlock(&ag->mutex);
+ return ret;
+ }
+
+diff --git a/drivers/net/wireless/ath/ath12k/dp_rx.c b/drivers/net/wireless/ath/ath12k/dp_rx.c
+index dad35bfd83f627..68d609f2ac60e1 100644
+--- a/drivers/net/wireless/ath/ath12k/dp_rx.c
++++ b/drivers/net/wireless/ath/ath12k/dp_rx.c
+@@ -4032,7 +4032,7 @@ int ath12k_dp_rx_process_wbm_err(struct ath12k_base *ab,
+ hw_links[hw_link_id].pdev_idx);
+ ar = partner_ab->pdevs[pdev_id].ar;
+
+- if (!ar || !rcu_dereference(ar->ab->pdevs_active[hw_link_id])) {
++ if (!ar || !rcu_dereference(ar->ab->pdevs_active[pdev_id])) {
+ dev_kfree_skb_any(msdu);
+ continue;
+ }
+diff --git a/drivers/net/wireless/ath/ath12k/dp_tx.c b/drivers/net/wireless/ath/ath12k/dp_tx.c
+index a8d341a6df01ea..1fffabaca527a4 100644
+--- a/drivers/net/wireless/ath/ath12k/dp_tx.c
++++ b/drivers/net/wireless/ath/ath12k/dp_tx.c
+@@ -368,6 +368,7 @@ int ath12k_dp_tx(struct ath12k *ar, struct ath12k_link_vif *arvif,
+ add_htt_metadata = true;
+ msdu_ext_desc = true;
+ ti.flags0 |= u32_encode_bits(1, HAL_TCL_DATA_CMD_INFO2_TO_FW);
++ ti.meta_data_flags |= HTT_TCL_META_DATA_VALID_HTT;
+ ti.encap_type = HAL_TCL_ENCAP_TYPE_RAW;
+ ti.encrypt_type = HAL_ENCRYPT_TYPE_OPEN;
+ }
+@@ -398,6 +399,7 @@ int ath12k_dp_tx(struct ath12k *ar, struct ath12k_link_vif *arvif,
+ if (ret < 0) {
+ ath12k_dbg(ab, ATH12K_DBG_DP_TX,
+ "Failed to add HTT meta data, dropping packet\n");
++ kfree_skb(skb_ext_desc);
+ goto fail_unmap_dma;
+ }
+ }
+diff --git a/drivers/net/wireless/ath/ath12k/mac.c b/drivers/net/wireless/ath/ath12k/mac.c
+index 2d062b5904a8ef..9c3e66dbe0c3be 100644
+--- a/drivers/net/wireless/ath/ath12k/mac.c
++++ b/drivers/net/wireless/ath/ath12k/mac.c
+@@ -8066,6 +8066,7 @@ static void ath12k_mac_vif_cache_flush(struct ath12k *ar, struct ath12k_link_vif
+ struct ieee80211_vif *vif = ath12k_ahvif_to_vif(ahvif);
+ struct ath12k_vif_cache *cache = ahvif->cache[arvif->link_id];
+ struct ath12k_base *ab = ar->ab;
++ struct ieee80211_bss_conf *link_conf;
+
+ int ret;
+
+@@ -8084,7 +8085,13 @@ static void ath12k_mac_vif_cache_flush(struct ath12k *ar, struct ath12k_link_vif
+ }
+
+ if (cache->bss_conf_changed) {
+- ath12k_mac_bss_info_changed(ar, arvif, &vif->bss_conf,
++ link_conf = ath12k_mac_get_link_bss_conf(arvif);
++ if (!link_conf) {
++ ath12k_warn(ar->ab, "unable to access bss link conf in cache flush for vif %pM link %u\n",
++ vif->addr, arvif->link_id);
++ return;
++ }
++ ath12k_mac_bss_info_changed(ar, arvif, link_conf,
+ cache->bss_conf_changed);
+ }
+
+diff --git a/drivers/net/wireless/ath/ath12k/pci.c b/drivers/net/wireless/ath/ath12k/pci.c
+index 06cff3849ab8da..2851f6944b864b 100644
+--- a/drivers/net/wireless/ath/ath12k/pci.c
++++ b/drivers/net/wireless/ath/ath12k/pci.c
+@@ -1689,6 +1689,8 @@ static int ath12k_pci_probe(struct pci_dev *pdev,
+ return 0;
+
+ err_free_irq:
++ /* __free_irq() expects the caller to have cleared the affinity hint */
++ ath12k_pci_set_irq_affinity_hint(ab_pci, NULL);
+ ath12k_pci_free_irq(ab);
+
+ err_ce_free:
+diff --git a/drivers/net/wireless/ath/ath12k/wmi.c b/drivers/net/wireless/ath/ath12k/wmi.c
+index abb510d235a527..7a87777e0a047d 100644
+--- a/drivers/net/wireless/ath/ath12k/wmi.c
++++ b/drivers/net/wireless/ath/ath12k/wmi.c
+@@ -2794,6 +2794,8 @@ int ath12k_wmi_send_scan_chan_list_cmd(struct ath12k *ar,
+ WMI_CHAN_REG_INFO1_REG_CLS);
+ *reg2 |= le32_encode_bits(channel_arg->antennamax,
+ WMI_CHAN_REG_INFO2_ANT_MAX);
++ *reg2 |= le32_encode_bits(channel_arg->maxregpower,
++ WMI_CHAN_REG_INFO2_MAX_TX_PWR);
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_WMI,
+ "WMI chan scan list chan[%d] = %u, chan_info->info %8x\n",
+diff --git a/drivers/net/wireless/ath/ath9k/common-spectral.c b/drivers/net/wireless/ath/ath9k/common-spectral.c
+index 628eeec4b82fe8..300d178830adf6 100644
+--- a/drivers/net/wireless/ath/ath9k/common-spectral.c
++++ b/drivers/net/wireless/ath/ath9k/common-spectral.c
+@@ -628,12 +628,12 @@ int ath_cmn_process_fft(struct ath_spec_scan_priv *spec_priv, struct ieee80211_h
+ else
+ RX_STAT_INC(sc, rx_spectral_sample_err);
+
+- memset(sample_buf, 0, SPECTRAL_SAMPLE_MAX_LEN);
+-
+ /* Mix the received bins to the /dev/random
+ * pool
+ */
+ add_device_randomness(sample_buf, num_bins);
++
++ memset(sample_buf, 0, SPECTRAL_SAMPLE_MAX_LEN);
+ }
+
+ /* Process a normal frame */
+diff --git a/drivers/net/wireless/marvell/mwifiex/fw.h b/drivers/net/wireless/marvell/mwifiex/fw.h
+index 4a96281792cc1a..91458f3bd14a54 100644
+--- a/drivers/net/wireless/marvell/mwifiex/fw.h
++++ b/drivers/net/wireless/marvell/mwifiex/fw.h
+@@ -454,6 +454,11 @@ enum mwifiex_channel_flags {
+ #define HostCmd_RET_BIT 0x8000
+ #define HostCmd_ACT_GEN_GET 0x0000
+ #define HostCmd_ACT_GEN_SET 0x0001
++#define HOST_CMD_ACT_GEN_SET 0x0001
++/* Add this non-CamelCase-style macro to comply with checkpatch requirements.
++ * This macro will eventually replace all existing CamelCase-style macros in
++ * the future for consistency.
++ */
+ #define HostCmd_ACT_GEN_REMOVE 0x0004
+ #define HostCmd_ACT_BITWISE_SET 0x0002
+ #define HostCmd_ACT_BITWISE_CLR 0x0003
+@@ -2352,6 +2357,14 @@ struct host_cmd_ds_add_station {
+ u8 tlv[];
+ } __packed;
+
++#define MWIFIEX_CFG_TYPE_CAL 0x2
++
++struct host_cmd_ds_802_11_cfg_data {
++ __le16 action;
++ __le16 type;
++ __le16 data_len;
++} __packed;
++
+ struct host_cmd_ds_command {
+ __le16 command;
+ __le16 size;
+@@ -2431,6 +2444,7 @@ struct host_cmd_ds_command {
+ struct host_cmd_ds_pkt_aggr_ctrl pkt_aggr_ctrl;
+ struct host_cmd_ds_sta_configure sta_cfg;
+ struct host_cmd_ds_add_station sta_info;
++ struct host_cmd_ds_802_11_cfg_data cfg_data;
+ } params;
+ } __packed;
+
+diff --git a/drivers/net/wireless/marvell/mwifiex/main.c b/drivers/net/wireless/marvell/mwifiex/main.c
+index 855019fe548582..80fc6d5afe8602 100644
+--- a/drivers/net/wireless/marvell/mwifiex/main.c
++++ b/drivers/net/wireless/marvell/mwifiex/main.c
+@@ -691,10 +691,6 @@ static int _mwifiex_fw_dpc(const struct firmware *firmware, void *context)
+
+ init_failed = true;
+ done:
+- if (adapter->cal_data) {
+- release_firmware(adapter->cal_data);
+- adapter->cal_data = NULL;
+- }
+ if (adapter->firmware) {
+ release_firmware(adapter->firmware);
+ adapter->firmware = NULL;
+diff --git a/drivers/net/wireless/marvell/mwifiex/sta_cmd.c b/drivers/net/wireless/marvell/mwifiex/sta_cmd.c
+index e2800a831c8edd..c4689f5a1acc8e 100644
+--- a/drivers/net/wireless/marvell/mwifiex/sta_cmd.c
++++ b/drivers/net/wireless/marvell/mwifiex/sta_cmd.c
+@@ -1507,6 +1507,7 @@ static int mwifiex_cmd_cfg_data(struct mwifiex_private *priv,
+ u32 len;
+ u8 *data = (u8 *)cmd + S_DS_GEN;
+ int ret;
++ struct host_cmd_ds_802_11_cfg_data *pcfg_data;
+
+ if (prop) {
+ len = prop->length;
+@@ -1514,12 +1515,20 @@ static int mwifiex_cmd_cfg_data(struct mwifiex_private *priv,
+ data, len);
+ if (ret)
+ return ret;
++
++ cmd->size = cpu_to_le16(S_DS_GEN + len);
+ mwifiex_dbg(adapter, INFO,
+ "download cfg_data from device tree: %s\n",
+ prop->name);
+ } else if (adapter->cal_data->data && adapter->cal_data->size > 0) {
+ len = mwifiex_parse_cal_cfg((u8 *)adapter->cal_data->data,
+- adapter->cal_data->size, data);
++ adapter->cal_data->size,
++ data + sizeof(*pcfg_data));
++ pcfg_data = &cmd->params.cfg_data;
++ pcfg_data->action = cpu_to_le16(HOST_CMD_ACT_GEN_SET);
++ pcfg_data->type = cpu_to_le16(MWIFIEX_CFG_TYPE_CAL);
++ pcfg_data->data_len = cpu_to_le16(len);
++ cmd->size = cpu_to_le16(S_DS_GEN + sizeof(*pcfg_data) + len);
+ mwifiex_dbg(adapter, INFO,
+ "download cfg_data from config file\n");
+ } else {
+@@ -1527,7 +1536,6 @@ static int mwifiex_cmd_cfg_data(struct mwifiex_private *priv,
+ }
+
+ cmd->command = cpu_to_le16(HostCmd_CMD_CFG_DATA);
+- cmd->size = cpu_to_le16(S_DS_GEN + len);
+
+ return 0;
+ }
+@@ -2293,9 +2301,13 @@ int mwifiex_sta_init_cmd(struct mwifiex_private *priv, u8 first_sta, bool init)
+ "marvell,caldata");
+ }
+
+- if (adapter->cal_data)
++ if (adapter->cal_data) {
+ mwifiex_send_cmd(priv, HostCmd_CMD_CFG_DATA,
+ HostCmd_ACT_GEN_SET, 0, NULL, true);
++ release_firmware(adapter->cal_data);
++ adapter->cal_data = NULL;
++ }
++
+
+ /* Read MAC address from HW */
+ ret = mwifiex_send_cmd(priv, HostCmd_CMD_GET_HW_SPEC,
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/debugfs.c b/drivers/net/wireless/mediatek/mt76/mt7915/debugfs.c
+index 578013884e4381..4fec7d000a631a 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/debugfs.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/debugfs.c
+@@ -303,9 +303,9 @@ static int mt7915_muru_stats_show(struct seq_file *file, void *data)
+ phy->mib.dl_vht_3mu_cnt,
+ phy->mib.dl_vht_4mu_cnt);
+
+- sub_total_cnt = phy->mib.dl_vht_2mu_cnt +
+- phy->mib.dl_vht_3mu_cnt +
+- phy->mib.dl_vht_4mu_cnt;
++ sub_total_cnt = (u64)phy->mib.dl_vht_2mu_cnt +
++ phy->mib.dl_vht_3mu_cnt +
++ phy->mib.dl_vht_4mu_cnt;
+
+ seq_printf(file, "\nTotal non-HE MU-MIMO DL PPDU count: %lld",
+ sub_total_cnt);
+@@ -353,26 +353,27 @@ static int mt7915_muru_stats_show(struct seq_file *file, void *data)
+ phy->mib.dl_he_9to16ru_cnt,
+ phy->mib.dl_he_gtr16ru_cnt);
+
+- sub_total_cnt = phy->mib.dl_he_2mu_cnt +
+- phy->mib.dl_he_3mu_cnt +
+- phy->mib.dl_he_4mu_cnt;
++ sub_total_cnt = (u64)phy->mib.dl_he_2mu_cnt +
++ phy->mib.dl_he_3mu_cnt +
++ phy->mib.dl_he_4mu_cnt;
+ total_ppdu_cnt = sub_total_cnt;
+
+ seq_printf(file, "\nTotal HE MU-MIMO DL PPDU count: %lld",
+ sub_total_cnt);
+
+- sub_total_cnt = phy->mib.dl_he_2ru_cnt +
+- phy->mib.dl_he_3ru_cnt +
+- phy->mib.dl_he_4ru_cnt +
+- phy->mib.dl_he_5to8ru_cnt +
+- phy->mib.dl_he_9to16ru_cnt +
+- phy->mib.dl_he_gtr16ru_cnt;
++ sub_total_cnt = (u64)phy->mib.dl_he_2ru_cnt +
++ phy->mib.dl_he_3ru_cnt +
++ phy->mib.dl_he_4ru_cnt +
++ phy->mib.dl_he_5to8ru_cnt +
++ phy->mib.dl_he_9to16ru_cnt +
++ phy->mib.dl_he_gtr16ru_cnt;
+ total_ppdu_cnt += sub_total_cnt;
+
+ seq_printf(file, "\nTotal HE OFDMA DL PPDU count: %lld",
+ sub_total_cnt);
+
+- total_ppdu_cnt += phy->mib.dl_he_su_cnt + phy->mib.dl_he_ext_su_cnt;
++ total_ppdu_cnt += (u64)phy->mib.dl_he_su_cnt +
++ phy->mib.dl_he_ext_su_cnt;
+
+ seq_printf(file, "\nAll HE DL PPDU count: %lld", total_ppdu_cnt);
+
+@@ -404,20 +405,20 @@ static int mt7915_muru_stats_show(struct seq_file *file, void *data)
+ phy->mib.ul_hetrig_9to16ru_cnt,
+ phy->mib.ul_hetrig_gtr16ru_cnt);
+
+- sub_total_cnt = phy->mib.ul_hetrig_2mu_cnt +
+- phy->mib.ul_hetrig_3mu_cnt +
+- phy->mib.ul_hetrig_4mu_cnt;
++ sub_total_cnt = (u64)phy->mib.ul_hetrig_2mu_cnt +
++ phy->mib.ul_hetrig_3mu_cnt +
++ phy->mib.ul_hetrig_4mu_cnt;
+ total_ppdu_cnt = sub_total_cnt;
+
+ seq_printf(file, "\nTotal HE MU-MIMO UL TB PPDU count: %lld",
+ sub_total_cnt);
+
+- sub_total_cnt = phy->mib.ul_hetrig_2ru_cnt +
+- phy->mib.ul_hetrig_3ru_cnt +
+- phy->mib.ul_hetrig_4ru_cnt +
+- phy->mib.ul_hetrig_5to8ru_cnt +
+- phy->mib.ul_hetrig_9to16ru_cnt +
+- phy->mib.ul_hetrig_gtr16ru_cnt;
++ sub_total_cnt = (u64)phy->mib.ul_hetrig_2ru_cnt +
++ phy->mib.ul_hetrig_3ru_cnt +
++ phy->mib.ul_hetrig_4ru_cnt +
++ phy->mib.ul_hetrig_5to8ru_cnt +
++ phy->mib.ul_hetrig_9to16ru_cnt +
++ phy->mib.ul_hetrig_gtr16ru_cnt;
+ total_ppdu_cnt += sub_total_cnt;
+
+ seq_printf(file, "\nTotal HE OFDMA UL TB PPDU count: %lld",
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/main.c b/drivers/net/wireless/mediatek/mt76/mt7921/main.c
+index 13e58c328aff4d..78b77a54d19576 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/main.c
+@@ -811,6 +811,7 @@ int mt7921_mac_sta_add(struct mt76_dev *mdev, struct ieee80211_vif *vif,
+ msta->deflink.wcid.phy_idx = mvif->bss_conf.mt76.band_idx;
+ msta->deflink.wcid.tx_info |= MT_WCID_TX_INFO_SET;
+ msta->deflink.last_txs = jiffies;
++ msta->deflink.sta = msta;
+
+ ret = mt76_connac_pm_wake(&dev->mphy, &dev->pm);
+ if (ret)
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c
+index 15815ad84713a4..9e192b7e1d2e08 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c
+@@ -3155,7 +3155,6 @@ __mt7925_mcu_set_clc(struct mt792x_dev *dev, u8 *alpha2,
+
+ .idx = idx,
+ .env = env_cap,
+- .acpi_conf = mt792x_acpi_get_flags(&dev->phy),
+ };
+ int ret, valid_cnt = 0;
+ u8 i, *pos;
+diff --git a/drivers/net/wireless/realtek/rtw89/core.h b/drivers/net/wireless/realtek/rtw89/core.h
+index ff4894c7fa8a5c..93e41def81b409 100644
+--- a/drivers/net/wireless/realtek/rtw89/core.h
++++ b/drivers/net/wireless/realtek/rtw89/core.h
+@@ -5135,7 +5135,7 @@ struct rtw89_tssi_info {
+ u32 alignment_backup_by_ch[RF_PATH_MAX][TSSI_MAX_CH_NUM][TSSI_ALIMK_VALUE_NUM];
+ u32 alignment_value[RF_PATH_MAX][TSSI_ALIMK_MAX][TSSI_ALIMK_VALUE_NUM];
+ bool alignment_done[RF_PATH_MAX][TSSI_ALIMK_MAX];
+- u32 tssi_alimk_time;
++ u64 tssi_alimk_time;
+ };
+
+ struct rtw89_power_trim_info {
+diff --git a/drivers/net/wireless/realtek/rtw89/fw.c b/drivers/net/wireless/realtek/rtw89/fw.c
+index 5d4ad23cc3bd4d..2f3869c7006967 100644
+--- a/drivers/net/wireless/realtek/rtw89/fw.c
++++ b/drivers/net/wireless/realtek/rtw89/fw.c
+@@ -988,7 +988,7 @@ int rtw89_build_txpwr_trk_tbl_from_elm(struct rtw89_dev *rtwdev,
+ bitmap = le32_to_cpu(elm->u.txpwr_trk.bitmap);
+
+ if ((bitmap & needed_bitmap) != needed_bitmap) {
+- rtw89_warn(rtwdev, "needed txpwr trk bitmap %08x but %0x8x\n",
++ rtw89_warn(rtwdev, "needed txpwr trk bitmap %08x but %08x\n",
+ needed_bitmap, bitmap);
+ return -ENOENT;
+ }
+@@ -5311,6 +5311,7 @@ int rtw89_fw_h2c_scan_offload_be(struct rtw89_dev *rtwdev,
+ u8 macc_role_size = sizeof(*macc_role) * option->num_macc_role;
+ u8 opch_size = sizeof(*opch) * option->num_opch;
+ u8 probe_id[NUM_NL80211_BANDS];
++ u8 scan_offload_ver = U8_MAX;
+ u8 cfg_len = sizeof(*h2c);
+ unsigned int cond;
+ u8 ver = U8_MAX;
+@@ -5321,6 +5322,11 @@ int rtw89_fw_h2c_scan_offload_be(struct rtw89_dev *rtwdev,
+
+ rtw89_scan_get_6g_disabled_chan(rtwdev, option);
+
++ if (RTW89_CHK_FW_FEATURE(SCAN_OFFLOAD_BE_V0, &rtwdev->fw)) {
++ cfg_len = offsetofend(typeof(*h2c), w8);
++ scan_offload_ver = 0;
++ }
++
+ len = cfg_len + macc_role_size + opch_size;
+ skb = rtw89_fw_h2c_alloc_skb_with_hdr(rtwdev, len);
+ if (!skb) {
+@@ -5392,10 +5398,8 @@ int rtw89_fw_h2c_scan_offload_be(struct rtw89_dev *rtwdev,
+ RTW89_H2C_SCANOFLD_BE_W8_PROBE_RATE_6GHZ);
+ }
+
+- if (RTW89_CHK_FW_FEATURE(SCAN_OFFLOAD_BE_V0, &rtwdev->fw)) {
+- cfg_len = offsetofend(typeof(*h2c), w8);
++ if (scan_offload_ver == 0)
+ goto flex_member;
+- }
+
+ h2c->w9 = le32_encode_bits(sizeof(*h2c) / sizeof(h2c->w0),
+ RTW89_H2C_SCANOFLD_BE_W9_SIZE_CFG) |
+diff --git a/drivers/net/wireless/realtek/rtw89/pci.h b/drivers/net/wireless/realtek/rtw89/pci.h
+index 4d11c3dd60a5d6..79fef5f9014080 100644
+--- a/drivers/net/wireless/realtek/rtw89/pci.h
++++ b/drivers/net/wireless/realtek/rtw89/pci.h
+@@ -455,34 +455,36 @@
+ #define B_BE_RX0DMA_INT_EN BIT(0)
+
+ #define R_BE_HAXI_HISR00 0xB0B4
+-#define B_BE_RDU_CH6_INT BIT(28)
+-#define B_BE_RDU_CH5_INT BIT(27)
+-#define B_BE_RDU_CH4_INT BIT(26)
+-#define B_BE_RDU_CH2_INT BIT(25)
+-#define B_BE_RDU_CH1_INT BIT(24)
+-#define B_BE_RDU_CH0_INT BIT(23)
+-#define B_BE_RXDMA_STUCK_INT BIT(22)
+-#define B_BE_TXDMA_STUCK_INT BIT(21)
+-#define B_BE_TXDMA_CH14_INT BIT(20)
+-#define B_BE_TXDMA_CH13_INT BIT(19)
+-#define B_BE_TXDMA_CH12_INT BIT(18)
+-#define B_BE_TXDMA_CH11_INT BIT(17)
+-#define B_BE_TXDMA_CH10_INT BIT(16)
+-#define B_BE_TXDMA_CH9_INT BIT(15)
+-#define B_BE_TXDMA_CH8_INT BIT(14)
+-#define B_BE_TXDMA_CH7_INT BIT(13)
+-#define B_BE_TXDMA_CH6_INT BIT(12)
+-#define B_BE_TXDMA_CH5_INT BIT(11)
+-#define B_BE_TXDMA_CH4_INT BIT(10)
+-#define B_BE_TXDMA_CH3_INT BIT(9)
+-#define B_BE_TXDMA_CH2_INT BIT(8)
+-#define B_BE_TXDMA_CH1_INT BIT(7)
+-#define B_BE_TXDMA_CH0_INT BIT(6)
+-#define B_BE_RPQ1DMA_INT BIT(5)
+-#define B_BE_RX1P1DMA_INT BIT(4)
++#define B_BE_RDU_CH5_INT_V1 BIT(30)
++#define B_BE_RDU_CH4_INT_V1 BIT(29)
++#define B_BE_RDU_CH3_INT_V1 BIT(28)
++#define B_BE_RDU_CH2_INT_V1 BIT(27)
++#define B_BE_RDU_CH1_INT_V1 BIT(26)
++#define B_BE_RDU_CH0_INT_V1 BIT(25)
++#define B_BE_RXDMA_STUCK_INT_V1 BIT(24)
++#define B_BE_TXDMA_STUCK_INT_V1 BIT(23)
++#define B_BE_TXDMA_CH14_INT_V1 BIT(22)
++#define B_BE_TXDMA_CH13_INT_V1 BIT(21)
++#define B_BE_TXDMA_CH12_INT_V1 BIT(20)
++#define B_BE_TXDMA_CH11_INT_V1 BIT(19)
++#define B_BE_TXDMA_CH10_INT_V1 BIT(18)
++#define B_BE_TXDMA_CH9_INT_V1 BIT(17)
++#define B_BE_TXDMA_CH8_INT_V1 BIT(16)
++#define B_BE_TXDMA_CH7_INT_V1 BIT(15)
++#define B_BE_TXDMA_CH6_INT_V1 BIT(14)
++#define B_BE_TXDMA_CH5_INT_V1 BIT(13)
++#define B_BE_TXDMA_CH4_INT_V1 BIT(12)
++#define B_BE_TXDMA_CH3_INT_V1 BIT(11)
++#define B_BE_TXDMA_CH2_INT_V1 BIT(10)
++#define B_BE_TXDMA_CH1_INT_V1 BIT(9)
++#define B_BE_TXDMA_CH0_INT_V1 BIT(8)
++#define B_BE_RX1P1DMA_INT_V1 BIT(7)
++#define B_BE_RX0P1DMA_INT_V1 BIT(6)
++#define B_BE_RO1DMA_INT BIT(5)
++#define B_BE_RP1DMA_INT BIT(4)
+ #define B_BE_RX1DMA_INT BIT(3)
+-#define B_BE_RPQ0DMA_INT BIT(2)
+-#define B_BE_RX0P1DMA_INT BIT(1)
++#define B_BE_RO0DMA_INT BIT(2)
++#define B_BE_RP0DMA_INT BIT(1)
+ #define B_BE_RX0DMA_INT BIT(0)
+
+ /* TX/RX */
+diff --git a/drivers/net/wireless/realtek/rtw89/pci_be.c b/drivers/net/wireless/realtek/rtw89/pci_be.c
+index cd39eebe818615..12e6a0cbb889b2 100644
+--- a/drivers/net/wireless/realtek/rtw89/pci_be.c
++++ b/drivers/net/wireless/realtek/rtw89/pci_be.c
+@@ -666,7 +666,7 @@ SIMPLE_DEV_PM_OPS(rtw89_pm_ops_be, rtw89_pci_suspend_be, rtw89_pci_resume_be);
+ EXPORT_SYMBOL(rtw89_pm_ops_be);
+
+ const struct rtw89_pci_gen_def rtw89_pci_gen_be = {
+- .isr_rdu = B_BE_RDU_CH1_INT | B_BE_RDU_CH0_INT,
++ .isr_rdu = B_BE_RDU_CH1_INT_V1 | B_BE_RDU_CH0_INT_V1,
+ .isr_halt_c2h = B_BE_HALT_C2H_INT,
+ .isr_wdt_timeout = B_BE_WDT_TIMEOUT_INT,
+ .isr_clear_rpq = {R_BE_PCIE_DMA_ISR, B_BE_PCIE_RX_RPQ0_ISR_V1},
+diff --git a/drivers/net/wireless/realtek/rtw89/rtw8852b_rfk.c b/drivers/net/wireless/realtek/rtw89/rtw8852b_rfk.c
+index ef47a5facc8361..fbf82d42687bac 100644
+--- a/drivers/net/wireless/realtek/rtw89/rtw8852b_rfk.c
++++ b/drivers/net/wireless/realtek/rtw89/rtw8852b_rfk.c
+@@ -3585,9 +3585,10 @@ static void _tssi_alimentk(struct rtw89_dev *rtwdev, enum rtw89_phy_idx phy,
+ u8 ch_idx = _tssi_ch_to_idx(rtwdev, channel);
+ struct rtw8852bx_bb_tssi_bak tssi_bak;
+ s32 aliment_diff, tssi_cw_default;
+- u32 start_time, finish_time;
+ u32 bb_reg_backup[8] = {0};
++ ktime_t start_time;
+ const s16 *power;
++ s64 this_time;
+ u8 band;
+ bool ok;
+ u32 tmp;
+@@ -3613,7 +3614,7 @@ static void _tssi_alimentk(struct rtw89_dev *rtwdev, enum rtw89_phy_idx phy,
+ return;
+ }
+
+- start_time = ktime_get_ns();
++ start_time = ktime_get();
+
+ if (chan->band_type == RTW89_BAND_2G)
+ power = power_2g;
+@@ -3738,12 +3739,12 @@ static void _tssi_alimentk(struct rtw89_dev *rtwdev, enum rtw89_phy_idx phy,
+ rtw8852bx_bb_restore_tssi(rtwdev, phy, &tssi_bak);
+ rtw8852bx_bb_tx_mode_switch(rtwdev, phy, 0);
+
+- finish_time = ktime_get_ns();
+- tssi_info->tssi_alimk_time += finish_time - start_time;
++ this_time = ktime_us_delta(ktime_get(), start_time);
++ tssi_info->tssi_alimk_time += this_time;
+
+ rtw89_debug(rtwdev, RTW89_DBG_RFK,
+- "[TSSI PA K] %s processing time = %d ms\n", __func__,
+- tssi_info->tssi_alimk_time);
++ "[TSSI PA K] %s processing time = %lld us (acc = %llu us)\n",
++ __func__, this_time, tssi_info->tssi_alimk_time);
+ }
+
+ void rtw8852b_dpk_init(struct rtw89_dev *rtwdev)
+diff --git a/drivers/net/wireless/realtek/rtw89/rtw8852bt_rfk.c b/drivers/net/wireless/realtek/rtw89/rtw8852bt_rfk.c
+index 336a83e1d46be9..6e6889eea9a0d9 100644
+--- a/drivers/net/wireless/realtek/rtw89/rtw8852bt_rfk.c
++++ b/drivers/net/wireless/realtek/rtw89/rtw8852bt_rfk.c
+@@ -3663,9 +3663,10 @@ static void _tssi_alimentk(struct rtw89_dev *rtwdev, enum rtw89_phy_idx phy,
+ u8 ch_idx = _tssi_ch_to_idx(rtwdev, channel);
+ struct rtw8852bx_bb_tssi_bak tssi_bak;
+ s32 aliment_diff, tssi_cw_default;
+- u32 start_time, finish_time;
+ u32 bb_reg_backup[8] = {};
++ ktime_t start_time;
+ const s16 *power;
++ s64 this_time;
+ u8 band;
+ bool ok;
+ u32 tmp;
+@@ -3675,7 +3676,7 @@ static void _tssi_alimentk(struct rtw89_dev *rtwdev, enum rtw89_phy_idx phy,
+ "======> %s channel=%d path=%d\n", __func__, channel,
+ path);
+
+- start_time = ktime_get_ns();
++ start_time = ktime_get();
+
+ if (chan->band_type == RTW89_BAND_2G)
+ power = power_2g;
+@@ -3802,12 +3803,12 @@ static void _tssi_alimentk(struct rtw89_dev *rtwdev, enum rtw89_phy_idx phy,
+ rtw8852bx_bb_restore_tssi(rtwdev, phy, &tssi_bak);
+ rtw8852bx_bb_tx_mode_switch(rtwdev, phy, 0);
+
+- finish_time = ktime_get_ns();
+- tssi_info->tssi_alimk_time += finish_time - start_time;
++ this_time = ktime_us_delta(ktime_get(), start_time);
++ tssi_info->tssi_alimk_time += this_time;
+
+ rtw89_debug(rtwdev, RTW89_DBG_RFK,
+- "[TSSI PA K] %s processing time = %d ms\n", __func__,
+- tssi_info->tssi_alimk_time);
++ "[TSSI PA K] %s processing time = %lld us (acc = %llu us)\n",
++ __func__, this_time, tssi_info->tssi_alimk_time);
+ }
+
+ void rtw8852bt_dpk_init(struct rtw89_dev *rtwdev)
+diff --git a/drivers/ntb/hw/intel/ntb_hw_gen3.c b/drivers/ntb/hw/intel/ntb_hw_gen3.c
+index ffcfc3e02c3532..a5aa96a31f4a64 100644
+--- a/drivers/ntb/hw/intel/ntb_hw_gen3.c
++++ b/drivers/ntb/hw/intel/ntb_hw_gen3.c
+@@ -215,6 +215,9 @@ static int gen3_init_ntb(struct intel_ntb_dev *ndev)
+ }
+
+ ndev->db_valid_mask = BIT_ULL(ndev->db_count) - 1;
++ /* Make sure we are not using DB's used for link status */
++ if (ndev->hwerr_flags & NTB_HWERR_MSIX_VECTOR32_BAD)
++ ndev->db_valid_mask &= ~ndev->db_link_mask;
+
+ ndev->reg->db_iowrite(ndev->db_valid_mask,
+ ndev->self_mmio +
+diff --git a/drivers/ntb/hw/mscc/ntb_hw_switchtec.c b/drivers/ntb/hw/mscc/ntb_hw_switchtec.c
+index ad1786be2554b3..f851397b65d6e5 100644
+--- a/drivers/ntb/hw/mscc/ntb_hw_switchtec.c
++++ b/drivers/ntb/hw/mscc/ntb_hw_switchtec.c
+@@ -288,7 +288,7 @@ static int switchtec_ntb_mw_set_trans(struct ntb_dev *ntb, int pidx, int widx,
+ if (size != 0 && xlate_pos < 12)
+ return -EINVAL;
+
+- if (!IS_ALIGNED(addr, BIT_ULL(xlate_pos))) {
++ if (xlate_pos >= 0 && !IS_ALIGNED(addr, BIT_ULL(xlate_pos))) {
+ /*
+ * In certain circumstances we can get a buffer that is
+ * not aligned to its size. (Most of the time
+diff --git a/drivers/ntb/test/ntb_perf.c b/drivers/ntb/test/ntb_perf.c
+index 72bc1d017a46ee..dfd175f79e8f08 100644
+--- a/drivers/ntb/test/ntb_perf.c
++++ b/drivers/ntb/test/ntb_perf.c
+@@ -839,10 +839,8 @@ static int perf_copy_chunk(struct perf_thread *pthr,
+ dma_set_unmap(tx, unmap);
+
+ ret = dma_submit_error(dmaengine_submit(tx));
+- if (ret) {
+- dmaengine_unmap_put(unmap);
++ if (ret)
+ goto err_free_resource;
+- }
+
+ dmaengine_unmap_put(unmap);
+
+diff --git a/drivers/nvdimm/badrange.c b/drivers/nvdimm/badrange.c
+index a002ea6fdd8424..ee478ccde7c6c7 100644
+--- a/drivers/nvdimm/badrange.c
++++ b/drivers/nvdimm/badrange.c
+@@ -167,7 +167,7 @@ static void set_badblock(struct badblocks *bb, sector_t s, int num)
+ dev_dbg(bb->dev, "Found a bad range (0x%llx, 0x%llx)\n",
+ (u64) s * 512, (u64) num * 512);
+ /* this isn't an error as the hardware will still throw an exception */
+- if (badblocks_set(bb, s, num, 1))
++ if (!badblocks_set(bb, s, num, 1))
+ dev_info_once(bb->dev, "%s: failed for sector %llx\n",
+ __func__, (u64) s);
+ }
+diff --git a/drivers/nvdimm/nd.h b/drivers/nvdimm/nd.h
+index 5ca06e9a2d2925..cc5c8f3f81e8d4 100644
+--- a/drivers/nvdimm/nd.h
++++ b/drivers/nvdimm/nd.h
+@@ -673,7 +673,7 @@ static inline bool is_bad_pmem(struct badblocks *bb, sector_t sector,
+ {
+ if (bb->count) {
+ sector_t first_bad;
+- int num_bad;
++ sector_t num_bad;
+
+ return !!badblocks_check(bb, sector, len / 512, &first_bad,
+ &num_bad);
+diff --git a/drivers/nvdimm/pfn_devs.c b/drivers/nvdimm/pfn_devs.c
+index cfdfe0eaa51210..8f3e816e805d88 100644
+--- a/drivers/nvdimm/pfn_devs.c
++++ b/drivers/nvdimm/pfn_devs.c
+@@ -367,9 +367,10 @@ static int nd_pfn_clear_memmap_errors(struct nd_pfn *nd_pfn)
+ struct nd_namespace_common *ndns = nd_pfn->ndns;
+ void *zero_page = page_address(ZERO_PAGE(0));
+ struct nd_pfn_sb *pfn_sb = nd_pfn->pfn_sb;
+- int num_bad, meta_num, rc, bb_present;
++ int meta_num, rc, bb_present;
+ sector_t first_bad, meta_start;
+ struct nd_namespace_io *nsio;
++ sector_t num_bad;
+
+ if (nd_pfn->mode != PFN_MODE_PMEM)
+ return 0;
+@@ -394,7 +395,7 @@ static int nd_pfn_clear_memmap_errors(struct nd_pfn *nd_pfn)
+ bb_present = badblocks_check(&nd_region->bb, meta_start,
+ meta_num, &first_bad, &num_bad);
+ if (bb_present) {
+- dev_dbg(&nd_pfn->dev, "meta: %x badblocks at %llx\n",
++ dev_dbg(&nd_pfn->dev, "meta: %llx badblocks at %llx\n",
+ num_bad, first_bad);
+ nsoff = ALIGN_DOWN((nd_region->ndr_start
+ + (first_bad << 9)) - nsio->res.start,
+@@ -413,7 +414,7 @@ static int nd_pfn_clear_memmap_errors(struct nd_pfn *nd_pfn)
+ }
+ if (rc) {
+ dev_err(&nd_pfn->dev,
+- "error clearing %x badblocks at %llx\n",
++ "error clearing %llx badblocks at %llx\n",
+ num_bad, first_bad);
+ return rc;
+ }
+diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
+index d81faa9d89c935..43156e1576c939 100644
+--- a/drivers/nvdimm/pmem.c
++++ b/drivers/nvdimm/pmem.c
+@@ -249,7 +249,7 @@ __weak long __pmem_direct_access(struct pmem_device *pmem, pgoff_t pgoff,
+ unsigned int num = PFN_PHYS(nr_pages) >> SECTOR_SHIFT;
+ struct badblocks *bb = &pmem->bb;
+ sector_t first_bad;
+- int num_bad;
++ sector_t num_bad;
+
+ if (kaddr)
+ *kaddr = pmem->virt_addr + offset;
+diff --git a/drivers/nvme/host/ioctl.c b/drivers/nvme/host/ioctl.c
+index 24e2c702da7a2e..fed6b29098ad3d 100644
+--- a/drivers/nvme/host/ioctl.c
++++ b/drivers/nvme/host/ioctl.c
+@@ -141,7 +141,7 @@ static int nvme_map_user_request(struct request *req, u64 ubuffer,
+ struct iov_iter iter;
+
+ /* fixedbufs is only for non-vectored io */
+- if (WARN_ON_ONCE(flags & NVME_IOCTL_VEC)) {
++ if (flags & NVME_IOCTL_VEC) {
+ ret = -EINVAL;
+ goto out;
+ }
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 3ad7f197c80871..1dc12784efafc6 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -989,6 +989,9 @@ static void nvme_submit_cmds(struct nvme_queue *nvmeq, struct rq_list *rqlist)
+ {
+ struct request *req;
+
++ if (rq_list_empty(rqlist))
++ return;
++
+ spin_lock(&nvmeq->sq_lock);
+ while ((req = rq_list_pop(rqlist))) {
+ struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
+diff --git a/drivers/nvme/target/debugfs.c b/drivers/nvme/target/debugfs.c
+index 220c7391fc19ad..c6571fbd35e30e 100644
+--- a/drivers/nvme/target/debugfs.c
++++ b/drivers/nvme/target/debugfs.c
+@@ -78,7 +78,7 @@ static int nvmet_ctrl_state_show(struct seq_file *m, void *p)
+ bool sep = false;
+ int i;
+
+- for (i = 0; i < 7; i++) {
++ for (i = 0; i < ARRAY_SIZE(csts_state_names); i++) {
+ int state = BIT(i);
+
+ if (!(ctrl->csts & state))
+diff --git a/drivers/nvme/target/pci-epf.c b/drivers/nvme/target/pci-epf.c
+index b1e31483f1574a..99563648c318f5 100644
+--- a/drivers/nvme/target/pci-epf.c
++++ b/drivers/nvme/target/pci-epf.c
+@@ -2129,8 +2129,15 @@ static int nvmet_pci_epf_configure_bar(struct nvmet_pci_epf *nvme_epf)
+ return -ENODEV;
+ }
+
+- if (epc_features->bar[BAR_0].only_64bit)
+- epf->bar[BAR_0].flags |= PCI_BASE_ADDRESS_MEM_TYPE_64;
++ /*
++ * While NVMe PCIe Transport Specification 1.1, section 2.1.10, claims
++ * that the BAR0 type is Implementation Specific, in NVMe 1.1, the type
++ * is required to be 64-bit. Thus, for interoperability, always set the
++ * type to 64-bit. In the rare case that the PCI EPC does not support
++ * configuring BAR0 as 64-bit, the call to pci_epc_set_bar() will fail,
++ * and we will return failure back to the user.
++ */
++ epf->bar[BAR_0].flags |= PCI_BASE_ADDRESS_MEM_TYPE_64;
+
+ /*
+ * Calculate the size of the register bar: NVMe registers first with
+diff --git a/drivers/pci/controller/cadence/pcie-cadence-ep.c b/drivers/pci/controller/cadence/pcie-cadence-ep.c
+index e0cc4560dfde7f..0bf4cde34f5171 100644
+--- a/drivers/pci/controller/cadence/pcie-cadence-ep.c
++++ b/drivers/pci/controller/cadence/pcie-cadence-ep.c
+@@ -352,8 +352,7 @@ static void cdns_pcie_ep_assert_intx(struct cdns_pcie_ep *ep, u8 fn, u8 intx,
+ spin_unlock_irqrestore(&ep->lock, flags);
+
+ offset = CDNS_PCIE_NORMAL_MSG_ROUTING(MSG_ROUTING_LOCAL) |
+- CDNS_PCIE_NORMAL_MSG_CODE(msg_code) |
+- CDNS_PCIE_MSG_NO_DATA;
++ CDNS_PCIE_NORMAL_MSG_CODE(msg_code);
+ writel(0, ep->irq_cpu_addr + offset);
+ }
+
+diff --git a/drivers/pci/controller/cadence/pcie-cadence.h b/drivers/pci/controller/cadence/pcie-cadence.h
+index f5eeff834ec192..39ee9945c903ec 100644
+--- a/drivers/pci/controller/cadence/pcie-cadence.h
++++ b/drivers/pci/controller/cadence/pcie-cadence.h
+@@ -246,7 +246,7 @@ struct cdns_pcie_rp_ib_bar {
+ #define CDNS_PCIE_NORMAL_MSG_CODE_MASK GENMASK(15, 8)
+ #define CDNS_PCIE_NORMAL_MSG_CODE(code) \
+ (((code) << 8) & CDNS_PCIE_NORMAL_MSG_CODE_MASK)
+-#define CDNS_PCIE_MSG_NO_DATA BIT(16)
++#define CDNS_PCIE_MSG_DATA BIT(16)
+
+ struct cdns_pcie;
+
+diff --git a/drivers/pci/controller/dwc/pcie-designware-ep.c b/drivers/pci/controller/dwc/pcie-designware-ep.c
+index 8e07d432e74f20..e41479a9ca0275 100644
+--- a/drivers/pci/controller/dwc/pcie-designware-ep.c
++++ b/drivers/pci/controller/dwc/pcie-designware-ep.c
+@@ -773,6 +773,7 @@ int dw_pcie_ep_init_registers(struct dw_pcie_ep *ep)
+ if (ret)
+ return ret;
+
++ ret = -ENOMEM;
+ if (!ep->ib_window_map) {
+ ep->ib_window_map = devm_bitmap_zalloc(dev, pci->num_ib_windows,
+ GFP_KERNEL);
+diff --git a/drivers/pci/controller/dwc/pcie-histb.c b/drivers/pci/controller/dwc/pcie-histb.c
+index 615a0e3e6d7eb5..1f2f4c28a94957 100644
+--- a/drivers/pci/controller/dwc/pcie-histb.c
++++ b/drivers/pci/controller/dwc/pcie-histb.c
+@@ -409,16 +409,21 @@ static int histb_pcie_probe(struct platform_device *pdev)
+ ret = histb_pcie_host_enable(pp);
+ if (ret) {
+ dev_err(dev, "failed to enable host\n");
+- return ret;
++ goto err_exit_phy;
+ }
+
+ ret = dw_pcie_host_init(pp);
+ if (ret) {
+ dev_err(dev, "failed to initialize host\n");
+- return ret;
++ goto err_exit_phy;
+ }
+
+ return 0;
++
++err_exit_phy:
++ phy_exit(hipcie->phy);
++
++ return ret;
+ }
+
+ static void histb_pcie_remove(struct platform_device *pdev)
+@@ -427,8 +432,7 @@ static void histb_pcie_remove(struct platform_device *pdev)
+
+ histb_pcie_host_disable(hipcie);
+
+- if (hipcie->phy)
+- phy_exit(hipcie->phy);
++ phy_exit(hipcie->phy);
+ }
+
+ static const struct of_device_id histb_pcie_of_match[] = {
+diff --git a/drivers/pci/controller/pcie-brcmstb.c b/drivers/pci/controller/pcie-brcmstb.c
+index e733a27dc8df8e..3d7dbfcd689e3e 100644
+--- a/drivers/pci/controller/pcie-brcmstb.c
++++ b/drivers/pci/controller/pcie-brcmstb.c
+@@ -403,10 +403,10 @@ static int brcm_pcie_set_ssc(struct brcm_pcie *pcie)
+ static void brcm_pcie_set_gen(struct brcm_pcie *pcie, int gen)
+ {
+ u16 lnkctl2 = readw(pcie->base + BRCM_PCIE_CAP_REGS + PCI_EXP_LNKCTL2);
+- u32 lnkcap = readl(pcie->base + BRCM_PCIE_CAP_REGS + PCI_EXP_LNKCAP);
++ u32 lnkcap = readl(pcie->base + PCIE_RC_CFG_PRIV1_LINK_CAPABILITY);
+
+ lnkcap = (lnkcap & ~PCI_EXP_LNKCAP_SLS) | gen;
+- writel(lnkcap, pcie->base + BRCM_PCIE_CAP_REGS + PCI_EXP_LNKCAP);
++ writel(lnkcap, pcie->base + PCIE_RC_CFG_PRIV1_LINK_CAPABILITY);
+
+ lnkctl2 = (lnkctl2 & ~0xf) | gen;
+ writew(lnkctl2, pcie->base + BRCM_PCIE_CAP_REGS + PCI_EXP_LNKCTL2);
+@@ -1276,6 +1276,10 @@ static int brcm_pcie_start_link(struct brcm_pcie *pcie)
+ bool ssc_good = false;
+ int ret, i;
+
++ /* Limit the generation if specified */
++ if (pcie->gen)
++ brcm_pcie_set_gen(pcie, pcie->gen);
++
+ /* Unassert the fundamental reset */
+ ret = pcie->perst_set(pcie, 0);
+ if (ret)
+@@ -1302,9 +1306,6 @@ static int brcm_pcie_start_link(struct brcm_pcie *pcie)
+
+ brcm_config_clkreq(pcie);
+
+- if (pcie->gen)
+- brcm_pcie_set_gen(pcie, pcie->gen);
+-
+ if (pcie->ssc) {
+ ret = brcm_pcie_set_ssc(pcie);
+ if (ret == 0)
+@@ -1367,7 +1368,8 @@ static int brcm_pcie_add_bus(struct pci_bus *bus)
+
+ ret = regulator_bulk_get(dev, sr->num_supplies, sr->supplies);
+ if (ret) {
+- dev_info(dev, "No regulators for downstream device\n");
++ dev_info(dev, "Did not get regulators, err=%d\n", ret);
++ pcie->sr = NULL;
+ goto no_regulators;
+ }
+
+@@ -1390,7 +1392,7 @@ static void brcm_pcie_remove_bus(struct pci_bus *bus)
+ struct subdev_regulators *sr = pcie->sr;
+ struct device *dev = &bus->dev;
+
+- if (!sr)
++ if (!sr || !bus->parent || !pci_is_root_bus(bus->parent))
+ return;
+
+ if (regulator_bulk_disable(sr->num_supplies, sr->supplies))
+diff --git a/drivers/pci/controller/pcie-mediatek-gen3.c b/drivers/pci/controller/pcie-mediatek-gen3.c
+index aa24ac9aaecc74..d0cc7f3b4b520f 100644
+--- a/drivers/pci/controller/pcie-mediatek-gen3.c
++++ b/drivers/pci/controller/pcie-mediatek-gen3.c
+@@ -15,6 +15,7 @@
+ #include <linux/irqchip/chained_irq.h>
+ #include <linux/irqdomain.h>
+ #include <linux/kernel.h>
++#include <linux/mfd/syscon.h>
+ #include <linux/module.h>
+ #include <linux/msi.h>
+ #include <linux/of_device.h>
+@@ -24,6 +25,7 @@
+ #include <linux/platform_device.h>
+ #include <linux/pm_domain.h>
+ #include <linux/pm_runtime.h>
++#include <linux/regmap.h>
+ #include <linux/reset.h>
+
+ #include "../pci.h"
+@@ -930,9 +932,13 @@ static int mtk_pcie_parse_port(struct mtk_gen3_pcie *pcie)
+
+ static int mtk_pcie_en7581_power_up(struct mtk_gen3_pcie *pcie)
+ {
++ struct pci_host_bridge *host = pci_host_bridge_from_priv(pcie);
+ struct device *dev = pcie->dev;
++ struct resource_entry *entry;
++ struct regmap *pbus_regmap;
++ u32 val, args[2], size;
++ resource_size_t addr;
+ int err;
+- u32 val;
+
+ /*
+ * The controller may have been left out of reset by the bootloader
+@@ -945,6 +951,26 @@ static int mtk_pcie_en7581_power_up(struct mtk_gen3_pcie *pcie)
+ /* Wait for the time needed to complete the reset lines assert. */
+ msleep(PCIE_EN7581_RESET_TIME_MS);
+
++ /*
++ * Configure PBus base address and base address mask to allow the
++ * hw to detect if a given address is accessible on PCIe controller.
++ */
++ pbus_regmap = syscon_regmap_lookup_by_phandle_args(dev->of_node,
++ "mediatek,pbus-csr",
++ ARRAY_SIZE(args),
++ args);
++ if (IS_ERR(pbus_regmap))
++ return PTR_ERR(pbus_regmap);
++
++ entry = resource_list_first_type(&host->windows, IORESOURCE_MEM);
++ if (!entry)
++ return -ENODEV;
++
++ addr = entry->res->start - entry->offset;
++ regmap_write(pbus_regmap, args[0], lower_32_bits(addr));
++ size = lower_32_bits(resource_size(entry->res));
++ regmap_write(pbus_regmap, args[1], GENMASK(31, __fls(size)));
++
+ /*
+ * Unlike the other MediaTek Gen3 controllers, the Airoha EN7581
+ * requires PHY initialization and power-on before PHY reset deassert.
+diff --git a/drivers/pci/controller/pcie-xilinx-cpm.c b/drivers/pci/controller/pcie-xilinx-cpm.c
+index 81e8bfae53d006..dc8ecdbee56c89 100644
+--- a/drivers/pci/controller/pcie-xilinx-cpm.c
++++ b/drivers/pci/controller/pcie-xilinx-cpm.c
+@@ -583,15 +583,17 @@ static int xilinx_cpm_pcie_probe(struct platform_device *pdev)
+ return err;
+
+ bus = resource_list_first_type(&bridge->windows, IORESOURCE_BUS);
+- if (!bus)
+- return -ENODEV;
++ if (!bus) {
++ err = -ENODEV;
++ goto err_free_irq_domains;
++ }
+
+ port->variant = of_device_get_match_data(dev);
+
+ err = xilinx_cpm_pcie_parse_dt(port, bus->res);
+ if (err) {
+ dev_err(dev, "Parsing DT failed\n");
+- goto err_parse_dt;
++ goto err_free_irq_domains;
+ }
+
+ xilinx_cpm_pcie_init_port(port);
+@@ -615,7 +617,7 @@ static int xilinx_cpm_pcie_probe(struct platform_device *pdev)
+ xilinx_cpm_free_interrupts(port);
+ err_setup_irq:
+ pci_ecam_free(port->cfg);
+-err_parse_dt:
++err_free_irq_domains:
+ xilinx_cpm_free_irq_domains(port);
+ return err;
+ }
+diff --git a/drivers/pci/endpoint/functions/pci-epf-test.c b/drivers/pci/endpoint/functions/pci-epf-test.c
+index b94e205ae10b94..2409787cf56d99 100644
+--- a/drivers/pci/endpoint/functions/pci-epf-test.c
++++ b/drivers/pci/endpoint/functions/pci-epf-test.c
+@@ -66,17 +66,17 @@ struct pci_epf_test {
+ };
+
+ struct pci_epf_test_reg {
+- u32 magic;
+- u32 command;
+- u32 status;
+- u64 src_addr;
+- u64 dst_addr;
+- u32 size;
+- u32 checksum;
+- u32 irq_type;
+- u32 irq_number;
+- u32 flags;
+- u32 caps;
++ __le32 magic;
++ __le32 command;
++ __le32 status;
++ __le64 src_addr;
++ __le64 dst_addr;
++ __le32 size;
++ __le32 checksum;
++ __le32 irq_type;
++ __le32 irq_number;
++ __le32 flags;
++ __le32 caps;
+ } __packed;
+
+ static struct pci_epf_header test_header = {
+@@ -324,13 +324,17 @@ static void pci_epf_test_copy(struct pci_epf_test *epf_test,
+ struct pci_epc *epc = epf->epc;
+ struct device *dev = &epf->dev;
+ struct pci_epc_map src_map, dst_map;
+- u64 src_addr = reg->src_addr;
+- u64 dst_addr = reg->dst_addr;
+- size_t copy_size = reg->size;
++ u64 src_addr = le64_to_cpu(reg->src_addr);
++ u64 dst_addr = le64_to_cpu(reg->dst_addr);
++ size_t orig_size, copy_size;
+ ssize_t map_size = 0;
++ u32 flags = le32_to_cpu(reg->flags);
++ u32 status = 0;
+ void *copy_buf = NULL, *buf;
+
+- if (reg->flags & FLAG_USE_DMA) {
++ orig_size = copy_size = le32_to_cpu(reg->size);
++
++ if (flags & FLAG_USE_DMA) {
+ if (!dma_has_cap(DMA_MEMCPY, epf_test->dma_chan_tx->device->cap_mask)) {
+ dev_err(dev, "DMA controller doesn't support MEMCPY\n");
+ ret = -EINVAL;
+@@ -350,7 +354,7 @@ static void pci_epf_test_copy(struct pci_epf_test *epf_test,
+ src_addr, copy_size, &src_map);
+ if (ret) {
+ dev_err(dev, "Failed to map source address\n");
+- reg->status = STATUS_SRC_ADDR_INVALID;
++ status = STATUS_SRC_ADDR_INVALID;
+ goto free_buf;
+ }
+
+@@ -358,7 +362,7 @@ static void pci_epf_test_copy(struct pci_epf_test *epf_test,
+ dst_addr, copy_size, &dst_map);
+ if (ret) {
+ dev_err(dev, "Failed to map destination address\n");
+- reg->status = STATUS_DST_ADDR_INVALID;
++ status = STATUS_DST_ADDR_INVALID;
+ pci_epc_mem_unmap(epc, epf->func_no, epf->vfunc_no,
+ &src_map);
+ goto free_buf;
+@@ -367,7 +371,7 @@ static void pci_epf_test_copy(struct pci_epf_test *epf_test,
+ map_size = min_t(size_t, dst_map.pci_size, src_map.pci_size);
+
+ ktime_get_ts64(&start);
+- if (reg->flags & FLAG_USE_DMA) {
++ if (flags & FLAG_USE_DMA) {
+ ret = pci_epf_test_data_transfer(epf_test,
+ dst_map.phys_addr, src_map.phys_addr,
+ map_size, 0, DMA_MEM_TO_MEM);
+@@ -391,8 +395,8 @@ static void pci_epf_test_copy(struct pci_epf_test *epf_test,
+ map_size = 0;
+ }
+
+- pci_epf_test_print_rate(epf_test, "COPY", reg->size, &start,
+- &end, reg->flags & FLAG_USE_DMA);
++ pci_epf_test_print_rate(epf_test, "COPY", orig_size, &start, &end,
++ flags & FLAG_USE_DMA);
+
+ unmap:
+ if (map_size) {
+@@ -405,9 +409,10 @@ static void pci_epf_test_copy(struct pci_epf_test *epf_test,
+
+ set_status:
+ if (!ret)
+- reg->status |= STATUS_COPY_SUCCESS;
++ status |= STATUS_COPY_SUCCESS;
+ else
+- reg->status |= STATUS_COPY_FAIL;
++ status |= STATUS_COPY_FAIL;
++ reg->status = cpu_to_le32(status);
+ }
+
+ static void pci_epf_test_read(struct pci_epf_test *epf_test,
+@@ -423,9 +428,14 @@ static void pci_epf_test_read(struct pci_epf_test *epf_test,
+ struct pci_epc *epc = epf->epc;
+ struct device *dev = &epf->dev;
+ struct device *dma_dev = epf->epc->dev.parent;
+- u64 src_addr = reg->src_addr;
+- size_t src_size = reg->size;
++ u64 src_addr = le64_to_cpu(reg->src_addr);
++ size_t orig_size, src_size;
+ ssize_t map_size = 0;
++ u32 flags = le32_to_cpu(reg->flags);
++ u32 checksum = le32_to_cpu(reg->checksum);
++ u32 status = 0;
++
++ orig_size = src_size = le32_to_cpu(reg->size);
+
+ src_buf = kzalloc(src_size, GFP_KERNEL);
+ if (!src_buf) {
+@@ -439,12 +449,12 @@ static void pci_epf_test_read(struct pci_epf_test *epf_test,
+ src_addr, src_size, &map);
+ if (ret) {
+ dev_err(dev, "Failed to map address\n");
+- reg->status = STATUS_SRC_ADDR_INVALID;
++ status = STATUS_SRC_ADDR_INVALID;
+ goto free_buf;
+ }
+
+ map_size = map.pci_size;
+- if (reg->flags & FLAG_USE_DMA) {
++ if (flags & FLAG_USE_DMA) {
+ dst_phys_addr = dma_map_single(dma_dev, buf, map_size,
+ DMA_FROM_DEVICE);
+ if (dma_mapping_error(dma_dev, dst_phys_addr)) {
+@@ -481,11 +491,11 @@ static void pci_epf_test_read(struct pci_epf_test *epf_test,
+ map_size = 0;
+ }
+
+- pci_epf_test_print_rate(epf_test, "READ", reg->size, &start,
+- &end, reg->flags & FLAG_USE_DMA);
++ pci_epf_test_print_rate(epf_test, "READ", orig_size, &start, &end,
++ flags & FLAG_USE_DMA);
+
+- crc32 = crc32_le(~0, src_buf, reg->size);
+- if (crc32 != reg->checksum)
++ crc32 = crc32_le(~0, src_buf, orig_size);
++ if (crc32 != checksum)
+ ret = -EIO;
+
+ unmap:
+@@ -497,9 +507,10 @@ static void pci_epf_test_read(struct pci_epf_test *epf_test,
+
+ set_status:
+ if (!ret)
+- reg->status |= STATUS_READ_SUCCESS;
++ status |= STATUS_READ_SUCCESS;
+ else
+- reg->status |= STATUS_READ_FAIL;
++ status |= STATUS_READ_FAIL;
++ reg->status = cpu_to_le32(status);
+ }
+
+ static void pci_epf_test_write(struct pci_epf_test *epf_test,
+@@ -514,9 +525,13 @@ static void pci_epf_test_write(struct pci_epf_test *epf_test,
+ struct pci_epc *epc = epf->epc;
+ struct device *dev = &epf->dev;
+ struct device *dma_dev = epf->epc->dev.parent;
+- u64 dst_addr = reg->dst_addr;
+- size_t dst_size = reg->size;
++ u64 dst_addr = le64_to_cpu(reg->dst_addr);
++ size_t orig_size, dst_size;
+ ssize_t map_size = 0;
++ u32 flags = le32_to_cpu(reg->flags);
++ u32 status = 0;
++
++ orig_size = dst_size = le32_to_cpu(reg->size);
+
+ dst_buf = kzalloc(dst_size, GFP_KERNEL);
+ if (!dst_buf) {
+@@ -524,7 +539,7 @@ static void pci_epf_test_write(struct pci_epf_test *epf_test,
+ goto set_status;
+ }
+ get_random_bytes(dst_buf, dst_size);
+- reg->checksum = crc32_le(~0, dst_buf, dst_size);
++ reg->checksum = cpu_to_le32(crc32_le(~0, dst_buf, dst_size));
+ buf = dst_buf;
+
+ while (dst_size) {
+@@ -532,12 +547,12 @@ static void pci_epf_test_write(struct pci_epf_test *epf_test,
+ dst_addr, dst_size, &map);
+ if (ret) {
+ dev_err(dev, "Failed to map address\n");
+- reg->status = STATUS_DST_ADDR_INVALID;
++ status = STATUS_DST_ADDR_INVALID;
+ goto free_buf;
+ }
+
+ map_size = map.pci_size;
+- if (reg->flags & FLAG_USE_DMA) {
++ if (flags & FLAG_USE_DMA) {
+ src_phys_addr = dma_map_single(dma_dev, buf, map_size,
+ DMA_TO_DEVICE);
+ if (dma_mapping_error(dma_dev, src_phys_addr)) {
+@@ -576,8 +591,8 @@ static void pci_epf_test_write(struct pci_epf_test *epf_test,
+ map_size = 0;
+ }
+
+- pci_epf_test_print_rate(epf_test, "WRITE", reg->size, &start,
+- &end, reg->flags & FLAG_USE_DMA);
++ pci_epf_test_print_rate(epf_test, "WRITE", orig_size, &start, &end,
++ flags & FLAG_USE_DMA);
+
+ /*
+ * wait 1ms inorder for the write to complete. Without this delay L3
+@@ -594,9 +609,10 @@ static void pci_epf_test_write(struct pci_epf_test *epf_test,
+
+ set_status:
+ if (!ret)
+- reg->status |= STATUS_WRITE_SUCCESS;
++ status |= STATUS_WRITE_SUCCESS;
+ else
+- reg->status |= STATUS_WRITE_FAIL;
++ status |= STATUS_WRITE_FAIL;
++ reg->status = cpu_to_le32(status);
+ }
+
+ static void pci_epf_test_raise_irq(struct pci_epf_test *epf_test,
+@@ -605,39 +621,42 @@ static void pci_epf_test_raise_irq(struct pci_epf_test *epf_test,
+ struct pci_epf *epf = epf_test->epf;
+ struct device *dev = &epf->dev;
+ struct pci_epc *epc = epf->epc;
+- u32 status = reg->status | STATUS_IRQ_RAISED;
++ u32 status = le32_to_cpu(reg->status);
++ u32 irq_number = le32_to_cpu(reg->irq_number);
++ u32 irq_type = le32_to_cpu(reg->irq_type);
+ int count;
+
+ /*
+ * Set the status before raising the IRQ to ensure that the host sees
+ * the updated value when it gets the IRQ.
+ */
+- WRITE_ONCE(reg->status, status);
++ status |= STATUS_IRQ_RAISED;
++ WRITE_ONCE(reg->status, cpu_to_le32(status));
+
+- switch (reg->irq_type) {
++ switch (irq_type) {
+ case IRQ_TYPE_INTX:
+ pci_epc_raise_irq(epc, epf->func_no, epf->vfunc_no,
+ PCI_IRQ_INTX, 0);
+ break;
+ case IRQ_TYPE_MSI:
+ count = pci_epc_get_msi(epc, epf->func_no, epf->vfunc_no);
+- if (reg->irq_number > count || count <= 0) {
++ if (irq_number > count || count <= 0) {
+ dev_err(dev, "Invalid MSI IRQ number %d / %d\n",
+- reg->irq_number, count);
++ irq_number, count);
+ return;
+ }
+ pci_epc_raise_irq(epc, epf->func_no, epf->vfunc_no,
+- PCI_IRQ_MSI, reg->irq_number);
++ PCI_IRQ_MSI, irq_number);
+ break;
+ case IRQ_TYPE_MSIX:
+ count = pci_epc_get_msix(epc, epf->func_no, epf->vfunc_no);
+- if (reg->irq_number > count || count <= 0) {
++ if (irq_number > count || count <= 0) {
+ dev_err(dev, "Invalid MSIX IRQ number %d / %d\n",
+- reg->irq_number, count);
++ irq_number, count);
+ return;
+ }
+ pci_epc_raise_irq(epc, epf->func_no, epf->vfunc_no,
+- PCI_IRQ_MSIX, reg->irq_number);
++ PCI_IRQ_MSIX, irq_number);
+ break;
+ default:
+ dev_err(dev, "Failed to raise IRQ, unknown type\n");
+@@ -654,21 +673,22 @@ static void pci_epf_test_cmd_handler(struct work_struct *work)
+ struct device *dev = &epf->dev;
+ enum pci_barno test_reg_bar = epf_test->test_reg_bar;
+ struct pci_epf_test_reg *reg = epf_test->reg[test_reg_bar];
++ u32 irq_type = le32_to_cpu(reg->irq_type);
+
+- command = READ_ONCE(reg->command);
++ command = le32_to_cpu(READ_ONCE(reg->command));
+ if (!command)
+ goto reset_handler;
+
+ WRITE_ONCE(reg->command, 0);
+ WRITE_ONCE(reg->status, 0);
+
+- if ((READ_ONCE(reg->flags) & FLAG_USE_DMA) &&
++ if ((le32_to_cpu(READ_ONCE(reg->flags)) & FLAG_USE_DMA) &&
+ !epf_test->dma_supported) {
+ dev_err(dev, "Cannot transfer data using DMA\n");
+ goto reset_handler;
+ }
+
+- if (reg->irq_type > IRQ_TYPE_MSIX) {
++ if (irq_type > IRQ_TYPE_MSIX) {
+ dev_err(dev, "Failed to detect IRQ type\n");
+ goto reset_handler;
+ }
+diff --git a/drivers/pci/hotplug/pciehp_hpc.c b/drivers/pci/hotplug/pciehp_hpc.c
+index bb5a8d9f03ad98..28ab393af1c04b 100644
+--- a/drivers/pci/hotplug/pciehp_hpc.c
++++ b/drivers/pci/hotplug/pciehp_hpc.c
+@@ -842,7 +842,9 @@ void pcie_enable_interrupt(struct controller *ctrl)
+ {
+ u16 mask;
+
+- mask = PCI_EXP_SLTCTL_HPIE | PCI_EXP_SLTCTL_DLLSCE;
++ mask = PCI_EXP_SLTCTL_DLLSCE;
++ if (!pciehp_poll_mode)
++ mask |= PCI_EXP_SLTCTL_HPIE;
+ pcie_write_cmd(ctrl, mask, mask);
+ }
+
+diff --git a/drivers/pci/iov.c b/drivers/pci/iov.c
+index 9e4770cdd4d5ae..a964c66b42950f 100644
+--- a/drivers/pci/iov.c
++++ b/drivers/pci/iov.c
+@@ -285,23 +285,16 @@ const struct attribute_group sriov_vf_dev_attr_group = {
+ .is_visible = sriov_vf_attrs_are_visible,
+ };
+
+-int pci_iov_add_virtfn(struct pci_dev *dev, int id)
++static struct pci_dev *pci_iov_scan_device(struct pci_dev *dev, int id,
++ struct pci_bus *bus)
+ {
+- int i;
+- int rc = -ENOMEM;
+- u64 size;
+- struct pci_dev *virtfn;
+- struct resource *res;
+ struct pci_sriov *iov = dev->sriov;
+- struct pci_bus *bus;
+-
+- bus = virtfn_add_bus(dev->bus, pci_iov_virtfn_bus(dev, id));
+- if (!bus)
+- goto failed;
++ struct pci_dev *virtfn;
++ int rc;
+
+ virtfn = pci_alloc_dev(bus);
+ if (!virtfn)
+- goto failed0;
++ return ERR_PTR(-ENOMEM);
+
+ virtfn->devfn = pci_iov_virtfn_devfn(dev, id);
+ virtfn->vendor = dev->vendor;
+@@ -314,8 +307,35 @@ int pci_iov_add_virtfn(struct pci_dev *dev, int id)
+ pci_read_vf_config_common(virtfn);
+
+ rc = pci_setup_device(virtfn);
+- if (rc)
+- goto failed1;
++ if (rc) {
++ pci_dev_put(dev);
++ pci_bus_put(virtfn->bus);
++ kfree(virtfn);
++ return ERR_PTR(rc);
++ }
++
++ return virtfn;
++}
++
++int pci_iov_add_virtfn(struct pci_dev *dev, int id)
++{
++ struct pci_bus *bus;
++ struct pci_dev *virtfn;
++ struct resource *res;
++ int rc, i;
++ u64 size;
++
++ bus = virtfn_add_bus(dev->bus, pci_iov_virtfn_bus(dev, id));
++ if (!bus) {
++ rc = -ENOMEM;
++ goto failed;
++ }
++
++ virtfn = pci_iov_scan_device(dev, id, bus);
++ if (IS_ERR(virtfn)) {
++ rc = PTR_ERR(virtfn);
++ goto failed0;
++ }
+
+ virtfn->dev.parent = dev->dev.parent;
+ virtfn->multifunction = 0;
+diff --git a/drivers/pci/pci-sysfs.c b/drivers/pci/pci-sysfs.c
+index b46ce1a2c5542c..0e7eb2a42d88d8 100644
+--- a/drivers/pci/pci-sysfs.c
++++ b/drivers/pci/pci-sysfs.c
+@@ -1556,7 +1556,7 @@ static ssize_t __resource_resize_store(struct device *dev, int n,
+ return -EINVAL;
+
+ device_lock(dev);
+- if (dev->driver) {
++ if (dev->driver || pci_num_vf(pdev)) {
+ ret = -EBUSY;
+ goto unlock;
+ }
+@@ -1578,7 +1578,7 @@ static ssize_t __resource_resize_store(struct device *dev, int n,
+
+ pci_remove_resource_files(pdev);
+
+- for (i = 0; i < PCI_STD_NUM_BARS; i++) {
++ for (i = 0; i < PCI_BRIDGE_RESOURCES; i++) {
+ if (pci_resource_len(pdev, i) &&
+ pci_resource_flags(pdev, i) == flags)
+ pci_release_resource(pdev, i);
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index 869d204a70a372..3e78cf86ef03ba 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -954,8 +954,10 @@ struct pci_acs {
+ };
+
+ static void __pci_config_acs(struct pci_dev *dev, struct pci_acs *caps,
+- const char *p, u16 mask, u16 flags)
++ const char *p, const u16 acs_mask, const u16 acs_flags)
+ {
++ u16 flags = acs_flags;
++ u16 mask = acs_mask;
+ char *delimit;
+ int ret = 0;
+
+@@ -963,7 +965,7 @@ static void __pci_config_acs(struct pci_dev *dev, struct pci_acs *caps,
+ return;
+
+ while (*p) {
+- if (!mask) {
++ if (!acs_mask) {
+ /* Check for ACS flags */
+ delimit = strstr(p, "@");
+ if (delimit) {
+@@ -971,6 +973,8 @@ static void __pci_config_acs(struct pci_dev *dev, struct pci_acs *caps,
+ u32 shift = 0;
+
+ end = delimit - p - 1;
++ mask = 0;
++ flags = 0;
+
+ while (end > -1) {
+ if (*(p + end) == '0') {
+@@ -1027,10 +1031,14 @@ static void __pci_config_acs(struct pci_dev *dev, struct pci_acs *caps,
+
+ pci_dbg(dev, "ACS mask = %#06x\n", mask);
+ pci_dbg(dev, "ACS flags = %#06x\n", flags);
++ pci_dbg(dev, "ACS control = %#06x\n", caps->ctrl);
++ pci_dbg(dev, "ACS fw_ctrl = %#06x\n", caps->fw_ctrl);
+
+- /* If mask is 0 then we copy the bit from the firmware setting. */
+- caps->ctrl = (caps->ctrl & ~mask) | (caps->fw_ctrl & mask);
+- caps->ctrl |= flags;
++ /*
++ * For mask bits that are 0, copy them from the firmware setting
++ * and apply flags for all the mask bits that are 1.
++ */
++ caps->ctrl = (caps->fw_ctrl & ~mask) | (flags & mask);
+
+ pci_info(dev, "Configured ACS to %#06x\n", caps->ctrl);
+ }
+@@ -5405,6 +5413,8 @@ static bool pci_bus_resettable(struct pci_bus *bus)
+ return false;
+
+ list_for_each_entry(dev, &bus->devices, bus_list) {
++ if (!pci_reset_supported(dev))
++ return false;
+ if (dev->dev_flags & PCI_DEV_FLAGS_NO_BUS_RESET ||
+ (dev->subordinate && !pci_bus_resettable(dev->subordinate)))
+ return false;
+@@ -5481,6 +5491,8 @@ static bool pci_slot_resettable(struct pci_slot *slot)
+ list_for_each_entry(dev, &slot->bus->devices, bus_list) {
+ if (!dev->slot || dev->slot != slot)
+ continue;
++ if (!pci_reset_supported(dev))
++ return false;
+ if (dev->dev_flags & PCI_DEV_FLAGS_NO_BUS_RESET ||
+ (dev->subordinate && !pci_bus_resettable(dev->subordinate)))
+ return false;
+diff --git a/drivers/pci/pcie/aspm.c b/drivers/pci/pcie/aspm.c
+index da3e7edcf49d98..29fcb0689a918f 100644
+--- a/drivers/pci/pcie/aspm.c
++++ b/drivers/pci/pcie/aspm.c
+@@ -1270,16 +1270,16 @@ void pcie_aspm_exit_link_state(struct pci_dev *pdev)
+ parent_link = link->parent;
+
+ /*
+- * link->downstream is a pointer to the pci_dev of function 0. If
+- * we remove that function, the pci_dev is about to be deallocated,
+- * so we can't use link->downstream again. Free the link state to
+- * avoid this.
++ * Free the parent link state, no later than function 0 (i.e.
++ * link->downstream) being removed.
+ *
+- * If we're removing a non-0 function, it's possible we could
+- * retain the link state, but PCIe r6.0, sec 7.5.3.7, recommends
+- * programming the same ASPM Control value for all functions of
+- * multi-function devices, so disable ASPM for all of them.
++ * Do not free the link state any earlier. If function 0 is a
++ * switch upstream port, this link state is parent_link to all
++ * subordinate ones.
+ */
++ if (pdev != link->downstream)
++ goto out;
++
+ pcie_config_aspm_link(link, 0);
+ list_del(&link->sibling);
+ free_link_state(link);
+@@ -1290,6 +1290,7 @@ void pcie_aspm_exit_link_state(struct pci_dev *pdev)
+ pcie_config_aspm_path(parent_link);
+ }
+
++ out:
+ mutex_unlock(&aspm_lock);
+ up_read(&pci_bus_sem);
+ }
+diff --git a/drivers/pci/pcie/bwctrl.c b/drivers/pci/pcie/bwctrl.c
+index 0a5e7efbce2cca..d8d2aa85a22928 100644
+--- a/drivers/pci/pcie/bwctrl.c
++++ b/drivers/pci/pcie/bwctrl.c
+@@ -113,7 +113,7 @@ static u16 pcie_bwctrl_select_speed(struct pci_dev *port, enum pci_bus_speed spe
+ up_read(&pci_bus_sem);
+ }
+ if (!supported_speeds)
+- return PCI_EXP_LNKCAP2_SLS_2_5GB;
++ supported_speeds = PCI_EXP_LNKCAP2_SLS_2_5GB;
+
+ return pcie_supported_speeds2target_speed(supported_speeds & desired_speeds);
+ }
+@@ -294,6 +294,10 @@ static int pcie_bwnotif_probe(struct pcie_device *srv)
+ struct pci_dev *port = srv->port;
+ int ret;
+
++ /* Can happen if we run out of bus numbers during enumeration. */
++ if (!port->subordinate)
++ return -ENODEV;
++
+ struct pcie_bwctrl_data *data = devm_kzalloc(&srv->device,
+ sizeof(*data), GFP_KERNEL);
+ if (!data)
+diff --git a/drivers/pci/pcie/portdrv.c b/drivers/pci/pcie/portdrv.c
+index 02e73099bad053..e8318fd5f6ed53 100644
+--- a/drivers/pci/pcie/portdrv.c
++++ b/drivers/pci/pcie/portdrv.c
+@@ -228,10 +228,12 @@ static int get_port_device_capability(struct pci_dev *dev)
+
+ /*
+ * Disable hot-plug interrupts in case they have been enabled
+- * by the BIOS and the hot-plug service driver is not loaded.
++ * by the BIOS and the hot-plug service driver won't be loaded
++ * to handle them.
+ */
+- pcie_capability_clear_word(dev, PCI_EXP_SLTCTL,
+- PCI_EXP_SLTCTL_CCIE | PCI_EXP_SLTCTL_HPIE);
++ if (!IS_ENABLED(CONFIG_HOTPLUG_PCI_PCIE))
++ pcie_capability_clear_word(dev, PCI_EXP_SLTCTL,
++ PCI_EXP_SLTCTL_CCIE | PCI_EXP_SLTCTL_HPIE);
+ }
+
+ #ifdef CONFIG_PCIEAER
+diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
+index 246744d8d268a2..0154b48bfbd7b4 100644
+--- a/drivers/pci/probe.c
++++ b/drivers/pci/probe.c
+@@ -996,10 +996,9 @@ static int pci_register_host_bridge(struct pci_host_bridge *bridge)
+ /* Temporarily move resources off the list */
+ list_splice_init(&bridge->windows, &resources);
+ err = device_add(&bridge->dev);
+- if (err) {
+- put_device(&bridge->dev);
++ if (err)
+ goto free;
+- }
++
+ bus->bridge = get_device(&bridge->dev);
+ device_enable_async_suspend(bus->bridge);
+ pci_set_bus_of_node(bus);
+diff --git a/drivers/pci/setup-bus.c b/drivers/pci/setup-bus.c
+index 5e00cecf1f1af3..8707c5b08cf341 100644
+--- a/drivers/pci/setup-bus.c
++++ b/drivers/pci/setup-bus.c
+@@ -927,9 +927,14 @@ static void pbus_size_io(struct pci_bus *bus, resource_size_t min_size,
+
+ size0 = calculate_iosize(size, min_size, size1, 0, 0,
+ resource_size(b_res), min_align);
+- size1 = (!realloc_head || (realloc_head && !add_size && !children_add_size)) ? size0 :
+- calculate_iosize(size, min_size, size1, add_size, children_add_size,
+- resource_size(b_res), min_align);
++
++ size1 = size0;
++ if (realloc_head && (add_size > 0 || children_add_size > 0)) {
++ size1 = calculate_iosize(size, min_size, size1, add_size,
++ children_add_size, resource_size(b_res),
++ min_align);
++ }
++
+ if (!size0 && !size1) {
+ if (bus->self && (b_res->start || b_res->end))
+ pci_info(bus->self, "disabling bridge window %pR to %pR (unused)\n",
+@@ -1058,7 +1063,7 @@ static int pbus_size_mem(struct pci_bus *bus, unsigned long mask,
+ struct list_head *realloc_head)
+ {
+ struct pci_dev *dev;
+- resource_size_t min_align, win_align, align, size, size0, size1;
++ resource_size_t min_align, win_align, align, size, size0, size1 = 0;
+ resource_size_t aligns[24]; /* Alignments from 1MB to 8TB */
+ int order, max_order;
+ struct resource *b_res = find_bus_resource_of_type(bus,
+@@ -1141,7 +1146,6 @@ static int pbus_size_mem(struct pci_bus *bus, unsigned long mask,
+ min_align = calculate_mem_align(aligns, max_order);
+ min_align = max(min_align, win_align);
+ size0 = calculate_memsize(size, min_size, 0, 0, resource_size(b_res), min_align);
+- add_align = max(min_align, add_align);
+
+ if (bus->self && size0 &&
+ !pbus_upstream_space_available(bus, mask | IORESOURCE_PREFETCH, type,
+@@ -1149,14 +1153,28 @@ static int pbus_size_mem(struct pci_bus *bus, unsigned long mask,
+ min_align = 1ULL << (max_order + __ffs(SZ_1M));
+ min_align = max(min_align, win_align);
+ size0 = calculate_memsize(size, min_size, 0, 0, resource_size(b_res), win_align);
+- add_align = win_align;
+ pci_info(bus->self, "bridge window %pR to %pR requires relaxed alignment rules\n",
+ b_res, &bus->busn_res);
+ }
+
+- size1 = (!realloc_head || (realloc_head && !add_size && !children_add_size)) ? size0 :
+- calculate_memsize(size, min_size, add_size, children_add_size,
+- resource_size(b_res), add_align);
++ if (realloc_head && (add_size > 0 || children_add_size > 0)) {
++ add_align = max(min_align, add_align);
++ size1 = calculate_memsize(size, min_size, add_size, children_add_size,
++ resource_size(b_res), add_align);
++
++ if (bus->self && size1 &&
++ !pbus_upstream_space_available(bus, mask | IORESOURCE_PREFETCH, type,
++ size1, add_align)) {
++ min_align = 1ULL << (max_order + __ffs(SZ_1M));
++ min_align = max(min_align, win_align);
++ size1 = calculate_memsize(size, min_size, add_size, children_add_size,
++ resource_size(b_res), win_align);
++ pci_info(bus->self,
++ "bridge window %pR to %pR requires relaxed alignment rules\n",
++ b_res, &bus->busn_res);
++ }
++ }
++
+ if (!size0 && !size1) {
+ if (bus->self && (b_res->start || b_res->end))
+ pci_info(bus->self, "disabling bridge window %pR to %pR (unused)\n",
+@@ -2102,8 +2120,7 @@ pci_root_bus_distribute_available_resources(struct pci_bus *bus,
+ * in case of root bus.
+ */
+ if (bridge && pci_bridge_resources_not_assigned(dev))
+- pci_bridge_distribute_available_resources(bridge,
+- add_list);
++ pci_bridge_distribute_available_resources(dev, add_list);
+ else
+ pci_root_bus_distribute_available_resources(b, add_list);
+ }
+diff --git a/drivers/phy/rockchip/phy-rockchip-samsung-hdptx.c b/drivers/phy/rockchip/phy-rockchip-samsung-hdptx.c
+index 0965b9d4f9cf19..2fb4f297fda3d6 100644
+--- a/drivers/phy/rockchip/phy-rockchip-samsung-hdptx.c
++++ b/drivers/phy/rockchip/phy-rockchip-samsung-hdptx.c
+@@ -263,11 +263,22 @@ enum rk_hdptx_reset {
+ RST_MAX
+ };
+
++#define MAX_HDPTX_PHY_NUM 2
++
++struct rk_hdptx_phy_cfg {
++ unsigned int num_phys;
++ unsigned int phy_ids[MAX_HDPTX_PHY_NUM];
++};
++
+ struct rk_hdptx_phy {
+ struct device *dev;
+ struct regmap *regmap;
+ struct regmap *grf;
+
++ /* PHY const config */
++ const struct rk_hdptx_phy_cfg *cfgs;
++ int phy_id;
++
+ struct phy *phy;
+ struct phy_config *phy_cfg;
+ struct clk_bulk_data *clks;
+@@ -1007,15 +1018,14 @@ static int rk_hdptx_phy_clk_register(struct rk_hdptx_phy *hdptx)
+ struct device *dev = hdptx->dev;
+ const char *name, *pname;
+ struct clk *refclk;
+- int ret, id;
++ int ret;
+
+ refclk = devm_clk_get(dev, "ref");
+ if (IS_ERR(refclk))
+ return dev_err_probe(dev, PTR_ERR(refclk),
+ "Failed to get ref clock\n");
+
+- id = of_alias_get_id(dev->of_node, "hdptxphy");
+- name = id > 0 ? "clk_hdmiphy_pixel1" : "clk_hdmiphy_pixel0";
++ name = hdptx->phy_id > 0 ? "clk_hdmiphy_pixel1" : "clk_hdmiphy_pixel0";
+ pname = __clk_get_name(refclk);
+
+ hdptx->hw.init = CLK_HW_INIT(name, pname, &hdptx_phy_clk_ops,
+@@ -1058,8 +1068,9 @@ static int rk_hdptx_phy_probe(struct platform_device *pdev)
+ struct phy_provider *phy_provider;
+ struct device *dev = &pdev->dev;
+ struct rk_hdptx_phy *hdptx;
++ struct resource *res;
+ void __iomem *regs;
+- int ret;
++ int ret, id;
+
+ hdptx = devm_kzalloc(dev, sizeof(*hdptx), GFP_KERNEL);
+ if (!hdptx)
+@@ -1067,11 +1078,27 @@ static int rk_hdptx_phy_probe(struct platform_device *pdev)
+
+ hdptx->dev = dev;
+
+- regs = devm_platform_ioremap_resource(pdev, 0);
++ regs = devm_platform_get_and_ioremap_resource(pdev, 0, &res);
+ if (IS_ERR(regs))
+ return dev_err_probe(dev, PTR_ERR(regs),
+ "Failed to ioremap resource\n");
+
++ hdptx->cfgs = device_get_match_data(dev);
++ if (!hdptx->cfgs)
++ return dev_err_probe(dev, -EINVAL, "missing match data\n");
++
++ /* find the phy-id from the io address */
++ hdptx->phy_id = -ENODEV;
++ for (id = 0; id < hdptx->cfgs->num_phys; id++) {
++ if (res->start == hdptx->cfgs->phy_ids[id]) {
++ hdptx->phy_id = id;
++ break;
++ }
++ }
++
++ if (hdptx->phy_id < 0)
++ return dev_err_probe(dev, -ENODEV, "no matching device found\n");
++
+ ret = devm_clk_bulk_get_all(dev, &hdptx->clks);
+ if (ret < 0)
+ return dev_err_probe(dev, ret, "Failed to get clocks\n");
+@@ -1132,8 +1159,19 @@ static const struct dev_pm_ops rk_hdptx_phy_pm_ops = {
+ rk_hdptx_phy_runtime_resume, NULL)
+ };
+
++static const struct rk_hdptx_phy_cfg rk3588_hdptx_phy_cfgs = {
++ .num_phys = 2,
++ .phy_ids = {
++ 0xfed60000,
++ 0xfed70000,
++ },
++};
++
+ static const struct of_device_id rk_hdptx_phy_of_match[] = {
+- { .compatible = "rockchip,rk3588-hdptx-phy", },
++ {
++ .compatible = "rockchip,rk3588-hdptx-phy",
++ .data = &rk3588_hdptx_phy_cfgs
++ },
+ {}
+ };
+ MODULE_DEVICE_TABLE(of, rk_hdptx_phy_of_match);
+diff --git a/drivers/pinctrl/bcm/pinctrl-bcm2835.c b/drivers/pinctrl/bcm/pinctrl-bcm2835.c
+index cc1fe0555e196a..eaeec096bc9a96 100644
+--- a/drivers/pinctrl/bcm/pinctrl-bcm2835.c
++++ b/drivers/pinctrl/bcm/pinctrl-bcm2835.c
+@@ -346,14 +346,14 @@ static int bcm2835_gpio_get_direction(struct gpio_chip *chip, unsigned int offse
+ struct bcm2835_pinctrl *pc = gpiochip_get_data(chip);
+ enum bcm2835_fsel fsel = bcm2835_pinctrl_fsel_get(pc, offset);
+
+- /* Alternative function doesn't clearly provide a direction */
+- if (fsel > BCM2835_FSEL_GPIO_OUT)
+- return -EINVAL;
+-
+- if (fsel == BCM2835_FSEL_GPIO_IN)
+- return GPIO_LINE_DIRECTION_IN;
++ if (fsel == BCM2835_FSEL_GPIO_OUT)
++ return GPIO_LINE_DIRECTION_OUT;
+
+- return GPIO_LINE_DIRECTION_OUT;
++ /*
++ * Alternative function doesn't clearly provide a direction. Default
++ * to INPUT.
++ */
++ return GPIO_LINE_DIRECTION_IN;
+ }
+
+ static void bcm2835_gpio_set(struct gpio_chip *chip, unsigned offset, int value)
+diff --git a/drivers/pinctrl/intel/pinctrl-intel.c b/drivers/pinctrl/intel/pinctrl-intel.c
+index 527e4b87ae52e8..f8b0221055e4a7 100644
+--- a/drivers/pinctrl/intel/pinctrl-intel.c
++++ b/drivers/pinctrl/intel/pinctrl-intel.c
+@@ -1543,7 +1543,6 @@ static int intel_pinctrl_probe_pwm(struct intel_pinctrl *pctrl,
+ .clk_rate = 19200000,
+ .npwm = 1,
+ .base_unit_bits = 22,
+- .bypass = true,
+ };
+ struct pwm_chip *chip;
+
+diff --git a/drivers/pinctrl/nuvoton/pinctrl-npcm8xx.c b/drivers/pinctrl/nuvoton/pinctrl-npcm8xx.c
+index d09a5e9b2eca53..f6a1e684a3864e 100644
+--- a/drivers/pinctrl/nuvoton/pinctrl-npcm8xx.c
++++ b/drivers/pinctrl/nuvoton/pinctrl-npcm8xx.c
+@@ -1290,12 +1290,14 @@ static struct npcm8xx_func npcm8xx_funcs[] = {
+ };
+
+ #define NPCM8XX_PINCFG(a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p, q) \
+- [a] { .fn0 = fn_ ## b, .reg0 = NPCM8XX_GCR_ ## c, .bit0 = d, \
++ [a] = { \
++ .flag = q, \
++ .fn0 = fn_ ## b, .reg0 = NPCM8XX_GCR_ ## c, .bit0 = d, \
+ .fn1 = fn_ ## e, .reg1 = NPCM8XX_GCR_ ## f, .bit1 = g, \
+ .fn2 = fn_ ## h, .reg2 = NPCM8XX_GCR_ ## i, .bit2 = j, \
+ .fn3 = fn_ ## k, .reg3 = NPCM8XX_GCR_ ## l, .bit3 = m, \
+ .fn4 = fn_ ## n, .reg4 = NPCM8XX_GCR_ ## o, .bit4 = p, \
+- .flag = q }
++ }
+
+ /* Drive strength controlled by NPCM8XX_GP_N_ODSC */
+ #define DRIVE_STRENGTH_LO_SHIFT 8
+@@ -2361,8 +2363,8 @@ static int npcm8xx_gpio_fw(struct npcm8xx_pinctrl *pctrl)
+ return dev_err_probe(dev, ret, "gpio-ranges fail for GPIO bank %u\n", id);
+
+ ret = fwnode_irq_get(child, 0);
+- if (!ret)
+- return dev_err_probe(dev, ret, "No IRQ for GPIO bank %u\n", id);
++ if (ret < 0)
++ return dev_err_probe(dev, ret, "Failed to retrieve IRQ for bank %u\n", id);
+
+ pctrl->gpio_bank[id].irq = ret;
+ pctrl->gpio_bank[id].irq_chip = npcmgpio_irqchip;
+diff --git a/drivers/pinctrl/renesas/pinctrl-rza2.c b/drivers/pinctrl/renesas/pinctrl-rza2.c
+index dd1f8c29d3e755..8b36161c7c5022 100644
+--- a/drivers/pinctrl/renesas/pinctrl-rza2.c
++++ b/drivers/pinctrl/renesas/pinctrl-rza2.c
+@@ -256,6 +256,8 @@ static int rza2_gpio_register(struct rza2_pinctrl_priv *priv)
+ return ret;
+ }
+
++ of_node_put(of_args.np);
++
+ if ((of_args.args[0] != 0) ||
+ (of_args.args[1] != 0) ||
+ (of_args.args[2] != priv->npins)) {
+diff --git a/drivers/pinctrl/renesas/pinctrl-rzg2l.c b/drivers/pinctrl/renesas/pinctrl-rzg2l.c
+index ce4a07a3df49ad..d1da7f53fc6008 100644
+--- a/drivers/pinctrl/renesas/pinctrl-rzg2l.c
++++ b/drivers/pinctrl/renesas/pinctrl-rzg2l.c
+@@ -2756,6 +2756,8 @@ static int rzg2l_gpio_register(struct rzg2l_pinctrl *pctrl)
+ if (ret)
+ return dev_err_probe(pctrl->dev, ret, "Unable to parse gpio-ranges\n");
+
++ of_node_put(of_args.np);
++
+ if (of_args.args[0] != 0 || of_args.args[1] != 0 ||
+ of_args.args[2] != pctrl->data->n_port_pins)
+ return dev_err_probe(pctrl->dev, -EINVAL,
+@@ -3386,6 +3388,7 @@ static struct platform_driver rzg2l_pinctrl_driver = {
+ .name = DRV_NAME,
+ .of_match_table = of_match_ptr(rzg2l_pinctrl_of_table),
+ .pm = pm_sleep_ptr(&rzg2l_pinctrl_pm_ops),
++ .suppress_bind_attrs = true,
+ },
+ .probe = rzg2l_pinctrl_probe,
+ };
+diff --git a/drivers/pinctrl/renesas/pinctrl-rzv2m.c b/drivers/pinctrl/renesas/pinctrl-rzv2m.c
+index 4062c56619f595..8c7169db4fcce6 100644
+--- a/drivers/pinctrl/renesas/pinctrl-rzv2m.c
++++ b/drivers/pinctrl/renesas/pinctrl-rzv2m.c
+@@ -940,6 +940,8 @@ static int rzv2m_gpio_register(struct rzv2m_pinctrl *pctrl)
+ return ret;
+ }
+
++ of_node_put(of_args.np);
++
+ if (of_args.args[0] != 0 || of_args.args[1] != 0 ||
+ of_args.args[2] != pctrl->data->n_port_pins) {
+ dev_err(pctrl->dev, "gpio-ranges does not match selected SOC\n");
+diff --git a/drivers/pinctrl/tegra/pinctrl-tegra.c b/drivers/pinctrl/tegra/pinctrl-tegra.c
+index c83e5a65e6801c..3b046450bd3ff8 100644
+--- a/drivers/pinctrl/tegra/pinctrl-tegra.c
++++ b/drivers/pinctrl/tegra/pinctrl-tegra.c
+@@ -270,6 +270,9 @@ static int tegra_pinctrl_set_mux(struct pinctrl_dev *pctldev,
+ val = pmx_readl(pmx, g->mux_bank, g->mux_reg);
+ val &= ~(0x3 << g->mux_bit);
+ val |= i << g->mux_bit;
++ /* Set the SFIO/GPIO selection to SFIO when under pinmux control*/
++ if (pmx->soc->sfsel_in_mux)
++ val |= (1 << g->sfsel_bit);
+ pmx_writel(pmx, val, g->mux_bank, g->mux_reg);
+
+ return 0;
+diff --git a/drivers/platform/x86/dell/dell-uart-backlight.c b/drivers/platform/x86/dell/dell-uart-backlight.c
+index 50002ef13d5d4d..8f868f845350af 100644
+--- a/drivers/platform/x86/dell/dell-uart-backlight.c
++++ b/drivers/platform/x86/dell/dell-uart-backlight.c
+@@ -325,7 +325,7 @@ static int dell_uart_bl_serdev_probe(struct serdev_device *serdev)
+ return PTR_ERR_OR_ZERO(dell_bl->bl);
+ }
+
+-struct serdev_device_driver dell_uart_bl_serdev_driver = {
++static struct serdev_device_driver dell_uart_bl_serdev_driver = {
+ .probe = dell_uart_bl_serdev_probe,
+ .driver = {
+ .name = KBUILD_MODNAME,
+diff --git a/drivers/platform/x86/dell/dell-wmi-ddv.c b/drivers/platform/x86/dell/dell-wmi-ddv.c
+index e75cd6e1efe6ac..ab5f7d3ab82497 100644
+--- a/drivers/platform/x86/dell/dell-wmi-ddv.c
++++ b/drivers/platform/x86/dell/dell-wmi-ddv.c
+@@ -665,8 +665,10 @@ static ssize_t temp_show(struct device *dev, struct device_attribute *attr, char
+ if (ret < 0)
+ return ret;
+
+- /* Use 2731 instead of 2731.5 to avoid unnecessary rounding */
+- return sysfs_emit(buf, "%d\n", value - 2731);
++ /* Use 2732 instead of 2731.5 to avoid unnecessary rounding and to emulate
++ * the behaviour of the OEM application which seems to round down the result.
++ */
++ return sysfs_emit(buf, "%d\n", value - 2732);
+ }
+
+ static ssize_t eppid_show(struct device *dev, struct device_attribute *attr, char *buf)
+diff --git a/drivers/platform/x86/intel/speed_select_if/isst_if_common.c b/drivers/platform/x86/intel/speed_select_if/isst_if_common.c
+index dbcd3087aaa4b0..31239a93dd71bd 100644
+--- a/drivers/platform/x86/intel/speed_select_if/isst_if_common.c
++++ b/drivers/platform/x86/intel/speed_select_if/isst_if_common.c
+@@ -84,7 +84,7 @@ static DECLARE_HASHTABLE(isst_hash, 8);
+ static DEFINE_MUTEX(isst_hash_lock);
+
+ static int isst_store_new_cmd(int cmd, u32 cpu, int mbox_cmd_type, u32 param,
+- u32 data)
++ u64 data)
+ {
+ struct isst_cmd *sst_cmd;
+
+diff --git a/drivers/platform/x86/lenovo-yoga-tab2-pro-1380-fastcharger.c b/drivers/platform/x86/lenovo-yoga-tab2-pro-1380-fastcharger.c
+index a96b215cd2c5ea..25933cd018d172 100644
+--- a/drivers/platform/x86/lenovo-yoga-tab2-pro-1380-fastcharger.c
++++ b/drivers/platform/x86/lenovo-yoga-tab2-pro-1380-fastcharger.c
+@@ -219,7 +219,7 @@ static int yt2_1380_fc_serdev_probe(struct serdev_device *serdev)
+ return 0;
+ }
+
+-struct serdev_device_driver yt2_1380_fc_serdev_driver = {
++static struct serdev_device_driver yt2_1380_fc_serdev_driver = {
+ .probe = yt2_1380_fc_serdev_probe,
+ .driver = {
+ .name = KBUILD_MODNAME,
+diff --git a/drivers/platform/x86/thinkpad_acpi.c b/drivers/platform/x86/thinkpad_acpi.c
+index 1cc91173e01277..2ff38ca9ddb400 100644
+--- a/drivers/platform/x86/thinkpad_acpi.c
++++ b/drivers/platform/x86/thinkpad_acpi.c
+@@ -8797,6 +8797,7 @@ static const struct attribute_group fan_driver_attr_group = {
+ #define TPACPI_FAN_NS 0x0010 /* For EC with non-Standard register addresses */
+ #define TPACPI_FAN_DECRPM 0x0020 /* For ECFW's with RPM in register as decimal */
+ #define TPACPI_FAN_TPR 0x0040 /* Fan speed is in Ticks Per Revolution */
++#define TPACPI_FAN_NOACPI 0x0080 /* Don't use ACPI methods even if detected */
+
+ static const struct tpacpi_quirk fan_quirk_table[] __initconst = {
+ TPACPI_QEC_IBM('1', 'Y', TPACPI_FAN_Q1),
+@@ -8827,6 +8828,9 @@ static const struct tpacpi_quirk fan_quirk_table[] __initconst = {
+ TPACPI_Q_LNV3('N', '1', 'O', TPACPI_FAN_NOFAN), /* X1 Tablet (2nd gen) */
+ TPACPI_Q_LNV3('R', '0', 'Q', TPACPI_FAN_DECRPM),/* L480 */
+ TPACPI_Q_LNV('8', 'F', TPACPI_FAN_TPR), /* ThinkPad x120e */
++ TPACPI_Q_LNV3('R', '0', '0', TPACPI_FAN_NOACPI),/* E560 */
++ TPACPI_Q_LNV3('R', '1', '2', TPACPI_FAN_NOACPI),/* T495 */
++ TPACPI_Q_LNV3('R', '1', '3', TPACPI_FAN_NOACPI),/* T495s */
+ };
+
+ static int __init fan_init(struct ibm_init_struct *iibm)
+@@ -8878,6 +8882,13 @@ static int __init fan_init(struct ibm_init_struct *iibm)
+ tp_features.fan_ctrl_status_undef = 1;
+ }
+
++ if (quirks & TPACPI_FAN_NOACPI) {
++ /* E560, T495, T495s */
++ pr_info("Ignoring buggy ACPI fan access method\n");
++ fang_handle = NULL;
++ fanw_handle = NULL;
++ }
++
+ if (gfan_handle) {
+ /* 570, 600e/x, 770e, 770x */
+ fan_status_access_mode = TPACPI_FAN_RD_ACPI_GFAN;
+diff --git a/drivers/power/supply/bq27xxx_battery.c b/drivers/power/supply/bq27xxx_battery.c
+index 90a5bccfc6b9bc..a8d8bcaace2f03 100644
+--- a/drivers/power/supply/bq27xxx_battery.c
++++ b/drivers/power/supply/bq27xxx_battery.c
+@@ -1918,7 +1918,6 @@ static void bq27xxx_battery_update_unlocked(struct bq27xxx_device_info *di)
+ cache.flags = -1; /* read error */
+ if (cache.flags >= 0) {
+ cache.capacity = bq27xxx_battery_read_soc(di);
+- di->cache.flags = cache.flags;
+
+ /*
+ * On gauges with signed current reporting the current must be
+diff --git a/drivers/power/supply/max77693_charger.c b/drivers/power/supply/max77693_charger.c
+index cdea35c0d1de11..027d6a539b65a2 100644
+--- a/drivers/power/supply/max77693_charger.c
++++ b/drivers/power/supply/max77693_charger.c
+@@ -608,7 +608,7 @@ static int max77693_set_charge_input_threshold_volt(struct max77693_charger *chg
+ case 4700000:
+ case 4800000:
+ case 4900000:
+- data = (uvolt - 4700000) / 100000;
++ data = ((uvolt - 4700000) / 100000) + 1;
+ break;
+ default:
+ dev_err(chg->dev, "Wrong value for charge input voltage regulation threshold\n");
+diff --git a/drivers/ptp/ptp_ocp.c b/drivers/ptp/ptp_ocp.c
+index b651087f426f50..4a87af0980d695 100644
+--- a/drivers/ptp/ptp_ocp.c
++++ b/drivers/ptp/ptp_ocp.c
+@@ -2090,6 +2090,10 @@ ptp_ocp_signal_from_perout(struct ptp_ocp *bp, int gen,
+ {
+ struct ptp_ocp_signal s = { };
+
++ if (req->flags & ~(PTP_PEROUT_DUTY_CYCLE |
++ PTP_PEROUT_PHASE))
++ return -EOPNOTSUPP;
++
+ s.polarity = bp->signal[gen].polarity;
+ s.period = ktime_set(req->period.sec, req->period.nsec);
+ if (!s.period)
+diff --git a/drivers/regulator/pca9450-regulator.c b/drivers/regulator/pca9450-regulator.c
+index faa6b79c27d75f..dfe1dd93d56f53 100644
+--- a/drivers/regulator/pca9450-regulator.c
++++ b/drivers/regulator/pca9450-regulator.c
+@@ -460,7 +460,7 @@ static const struct pca9450_regulator_desc pca9450a_regulators[] = {
+ .n_linear_ranges = ARRAY_SIZE(pca9450_ldo5_volts),
+ .vsel_reg = PCA9450_REG_LDO5CTRL_H,
+ .vsel_mask = LDO5HOUT_MASK,
+- .enable_reg = PCA9450_REG_LDO5CTRL_H,
++ .enable_reg = PCA9450_REG_LDO5CTRL_L,
+ .enable_mask = LDO5H_EN_MASK,
+ .owner = THIS_MODULE,
+ },
+@@ -674,7 +674,7 @@ static const struct pca9450_regulator_desc pca9450bc_regulators[] = {
+ .n_linear_ranges = ARRAY_SIZE(pca9450_ldo5_volts),
+ .vsel_reg = PCA9450_REG_LDO5CTRL_H,
+ .vsel_mask = LDO5HOUT_MASK,
+- .enable_reg = PCA9450_REG_LDO5CTRL_H,
++ .enable_reg = PCA9450_REG_LDO5CTRL_L,
+ .enable_mask = LDO5H_EN_MASK,
+ .owner = THIS_MODULE,
+ },
+@@ -864,7 +864,7 @@ static const struct pca9450_regulator_desc pca9451a_regulators[] = {
+ .n_linear_ranges = ARRAY_SIZE(pca9450_ldo5_volts),
+ .vsel_reg = PCA9450_REG_LDO5CTRL_H,
+ .vsel_mask = LDO5HOUT_MASK,
+- .enable_reg = PCA9450_REG_LDO5CTRL_H,
++ .enable_reg = PCA9450_REG_LDO5CTRL_L,
+ .enable_mask = LDO5H_EN_MASK,
+ .owner = THIS_MODULE,
+ },
+diff --git a/drivers/remoteproc/qcom_q6v5_mss.c b/drivers/remoteproc/qcom_q6v5_mss.c
+index e78bd986dc3f25..2c80d7fe39f8e3 100644
+--- a/drivers/remoteproc/qcom_q6v5_mss.c
++++ b/drivers/remoteproc/qcom_q6v5_mss.c
+@@ -1831,6 +1831,13 @@ static int q6v5_pds_attach(struct device *dev, struct device **devs,
+ while (pd_names[num_pds])
+ num_pds++;
+
++ /* Handle single power domain */
++ if (num_pds == 1 && dev->pm_domain) {
++ devs[0] = dev;
++ pm_runtime_enable(dev);
++ return 1;
++ }
++
+ for (i = 0; i < num_pds; i++) {
+ devs[i] = dev_pm_domain_attach_by_name(dev, pd_names[i]);
+ if (IS_ERR_OR_NULL(devs[i])) {
+@@ -1851,8 +1858,15 @@ static int q6v5_pds_attach(struct device *dev, struct device **devs,
+ static void q6v5_pds_detach(struct q6v5 *qproc, struct device **pds,
+ size_t pd_count)
+ {
++ struct device *dev = qproc->dev;
+ int i;
+
++ /* Handle single power domain */
++ if (pd_count == 1 && dev->pm_domain) {
++ pm_runtime_disable(dev);
++ return;
++ }
++
+ for (i = 0; i < pd_count; i++)
+ dev_pm_domain_detach(pds[i], false);
+ }
+@@ -2449,13 +2463,13 @@ static const struct rproc_hexagon_res msm8974_mss = {
+ .supply = "pll",
+ .uA = 100000,
+ },
+- {}
+- },
+- .fallback_proxy_supply = (struct qcom_mss_reg_res[]) {
+ {
+ .supply = "mx",
+ .uV = 1050000,
+ },
++ {}
++ },
++ .fallback_proxy_supply = (struct qcom_mss_reg_res[]) {
+ {
+ .supply = "cx",
+ .uA = 100000,
+@@ -2481,7 +2495,6 @@ static const struct rproc_hexagon_res msm8974_mss = {
+ NULL
+ },
+ .proxy_pd_names = (char*[]){
+- "mx",
+ "cx",
+ NULL
+ },
+diff --git a/drivers/remoteproc/qcom_q6v5_pas.c b/drivers/remoteproc/qcom_q6v5_pas.c
+index 97c4bdd9222a8d..60923ed1290492 100644
+--- a/drivers/remoteproc/qcom_q6v5_pas.c
++++ b/drivers/remoteproc/qcom_q6v5_pas.c
+@@ -501,16 +501,16 @@ static int adsp_pds_attach(struct device *dev, struct device **devs,
+ if (!pd_names)
+ return 0;
+
++ while (pd_names[num_pds])
++ num_pds++;
++
+ /* Handle single power domain */
+- if (dev->pm_domain) {
++ if (num_pds == 1 && dev->pm_domain) {
+ devs[0] = dev;
+ pm_runtime_enable(dev);
+ return 1;
+ }
+
+- while (pd_names[num_pds])
+- num_pds++;
+-
+ for (i = 0; i < num_pds; i++) {
+ devs[i] = dev_pm_domain_attach_by_name(dev, pd_names[i]);
+ if (IS_ERR_OR_NULL(devs[i])) {
+@@ -535,7 +535,7 @@ static void adsp_pds_detach(struct qcom_adsp *adsp, struct device **pds,
+ int i;
+
+ /* Handle single power domain */
+- if (dev->pm_domain && pd_count) {
++ if (pd_count == 1 && dev->pm_domain) {
+ pm_runtime_disable(dev);
+ return;
+ }
+@@ -1348,6 +1348,7 @@ static const struct adsp_data sc7280_wpss_resource = {
+ .crash_reason_smem = 626,
+ .firmware_name = "wpss.mdt",
+ .pas_id = 6,
++ .minidump_id = 4,
+ .auto_boot = false,
+ .proxy_pd_names = (char*[]){
+ "cx",
+@@ -1410,7 +1411,7 @@ static const struct adsp_data sm8650_mpss_resource = {
+ };
+
+ static const struct of_device_id adsp_of_match[] = {
+- { .compatible = "qcom,msm8226-adsp-pil", .data = &adsp_resource_init},
++ { .compatible = "qcom,msm8226-adsp-pil", .data = &msm8996_adsp_resource},
+ { .compatible = "qcom,msm8953-adsp-pil", .data = &msm8996_adsp_resource},
+ { .compatible = "qcom,msm8974-adsp-pil", .data = &adsp_resource_init},
+ { .compatible = "qcom,msm8996-adsp-pil", .data = &msm8996_adsp_resource},
+diff --git a/drivers/remoteproc/remoteproc_core.c b/drivers/remoteproc/remoteproc_core.c
+index c2cf0d27772966..b21eedefff877a 100644
+--- a/drivers/remoteproc/remoteproc_core.c
++++ b/drivers/remoteproc/remoteproc_core.c
+@@ -2025,6 +2025,7 @@ int rproc_shutdown(struct rproc *rproc)
+ kfree(rproc->cached_table);
+ rproc->cached_table = NULL;
+ rproc->table_ptr = NULL;
++ rproc->table_sz = 0;
+ out:
+ mutex_unlock(&rproc->lock);
+ return ret;
+diff --git a/drivers/rtc/rtc-renesas-rtca3.c b/drivers/rtc/rtc-renesas-rtca3.c
+index a056291d388765..ab816bdf0d7761 100644
+--- a/drivers/rtc/rtc-renesas-rtca3.c
++++ b/drivers/rtc/rtc-renesas-rtca3.c
+@@ -586,17 +586,14 @@ static int rtca3_initial_setup(struct clk *clk, struct rtca3_priv *priv)
+ */
+ usleep_range(sleep_us, sleep_us + 10);
+
+- /* Disable all interrupts. */
+- mask = RTCA3_RCR1_AIE | RTCA3_RCR1_CIE | RTCA3_RCR1_PIE;
+- ret = rtca3_alarm_irq_set_helper(priv, mask, 0);
+- if (ret)
+- return ret;
+-
+ mask = RTCA3_RCR2_START | RTCA3_RCR2_HR24;
+ val = readb(priv->base + RTCA3_RCR2);
+- /* Nothing to do if already started in 24 hours and calendar count mode. */
+- if ((val & mask) == mask)
+- return 0;
++ /* Only disable the interrupts if already started in 24 hours and calendar count mode. */
++ if ((val & mask) == mask) {
++ /* Disable all interrupts. */
++ mask = RTCA3_RCR1_AIE | RTCA3_RCR1_CIE | RTCA3_RCR1_PIE;
++ return rtca3_alarm_irq_set_helper(priv, mask, 0);
++ }
+
+ /* Reconfigure the RTC in 24 hours and calendar count mode. */
+ mask = RTCA3_RCR2_START | RTCA3_RCR2_CNTMD;
+diff --git a/drivers/scsi/hisi_sas/hisi_sas.h b/drivers/scsi/hisi_sas/hisi_sas.h
+index 2d438d722d0b4f..e17f5d8226bf28 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas.h
++++ b/drivers/scsi/hisi_sas/hisi_sas.h
+@@ -633,8 +633,7 @@ extern struct dentry *hisi_sas_debugfs_dir;
+ extern void hisi_sas_stop_phys(struct hisi_hba *hisi_hba);
+ extern int hisi_sas_alloc(struct hisi_hba *hisi_hba);
+ extern void hisi_sas_free(struct hisi_hba *hisi_hba);
+-extern u8 hisi_sas_get_ata_protocol(struct host_to_dev_fis *fis,
+- int direction);
++extern u8 hisi_sas_get_ata_protocol(struct sas_task *task);
+ extern struct hisi_sas_port *to_hisi_sas_port(struct asd_sas_port *sas_port);
+ extern void hisi_sas_sata_done(struct sas_task *task,
+ struct hisi_sas_slot *slot);
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_main.c b/drivers/scsi/hisi_sas/hisi_sas_main.c
+index da4a2ed8ee863e..3596414d970b24 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_main.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_main.c
+@@ -21,8 +21,32 @@ struct hisi_sas_internal_abort_data {
+ bool rst_ha_timeout; /* reset the HA for timeout */
+ };
+
+-u8 hisi_sas_get_ata_protocol(struct host_to_dev_fis *fis, int direction)
++static u8 hisi_sas_get_ata_protocol_from_tf(struct ata_queued_cmd *qc)
+ {
++ if (!qc)
++ return HISI_SAS_SATA_PROTOCOL_PIO;
++
++ switch (qc->tf.protocol) {
++ case ATA_PROT_NODATA:
++ return HISI_SAS_SATA_PROTOCOL_NONDATA;
++ case ATA_PROT_PIO:
++ return HISI_SAS_SATA_PROTOCOL_PIO;
++ case ATA_PROT_DMA:
++ return HISI_SAS_SATA_PROTOCOL_DMA;
++ case ATA_PROT_NCQ_NODATA:
++ case ATA_PROT_NCQ:
++ return HISI_SAS_SATA_PROTOCOL_FPDMA;
++ default:
++ return HISI_SAS_SATA_PROTOCOL_PIO;
++ }
++}
++
++u8 hisi_sas_get_ata_protocol(struct sas_task *task)
++{
++ struct host_to_dev_fis *fis = &task->ata_task.fis;
++ struct ata_queued_cmd *qc = task->uldd_task;
++ int direction = task->data_dir;
++
+ switch (fis->command) {
+ case ATA_CMD_FPDMA_WRITE:
+ case ATA_CMD_FPDMA_READ:
+@@ -93,7 +117,7 @@ u8 hisi_sas_get_ata_protocol(struct host_to_dev_fis *fis, int direction)
+ {
+ if (direction == DMA_NONE)
+ return HISI_SAS_SATA_PROTOCOL_NONDATA;
+- return HISI_SAS_SATA_PROTOCOL_PIO;
++ return hisi_sas_get_ata_protocol_from_tf(qc);
+ }
+ }
+ }
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
+index 71cd5b4450c2b3..6e7f99fcc8247d 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
+@@ -2538,9 +2538,7 @@ static void prep_ata_v2_hw(struct hisi_hba *hisi_hba,
+ (task->ata_task.fis.control & ATA_SRST))
+ dw1 |= 1 << CMD_HDR_RESET_OFF;
+
+- dw1 |= (hisi_sas_get_ata_protocol(
+- &task->ata_task.fis, task->data_dir))
+- << CMD_HDR_FRAME_TYPE_OFF;
++ dw1 |= (hisi_sas_get_ata_protocol(task)) << CMD_HDR_FRAME_TYPE_OFF;
+ dw1 |= sas_dev->device_id << CMD_HDR_DEV_ID_OFF;
+ hdr->dw1 = cpu_to_le32(dw1);
+
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+index 48b95d9a792753..095bbf80c34efb 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+@@ -1456,9 +1456,7 @@ static void prep_ata_v3_hw(struct hisi_hba *hisi_hba,
+ (task->ata_task.fis.control & ATA_SRST))
+ dw1 |= 1 << CMD_HDR_RESET_OFF;
+
+- dw1 |= (hisi_sas_get_ata_protocol(
+- &task->ata_task.fis, task->data_dir))
+- << CMD_HDR_FRAME_TYPE_OFF;
++ dw1 |= (hisi_sas_get_ata_protocol(task)) << CMD_HDR_FRAME_TYPE_OFF;
+ dw1 |= sas_dev->device_id << CMD_HDR_DEV_ID_OFF;
+
+ if (FIS_CMD_IS_UNCONSTRAINED(task->ata_task.fis))
+diff --git a/drivers/scsi/mpi3mr/mpi3mr_app.c b/drivers/scsi/mpi3mr/mpi3mr_app.c
+index 7589f48aebc80f..f4b5813e6fc4cf 100644
+--- a/drivers/scsi/mpi3mr/mpi3mr_app.c
++++ b/drivers/scsi/mpi3mr/mpi3mr_app.c
+@@ -2339,6 +2339,7 @@ static long mpi3mr_bsg_process_mpt_cmds(struct bsg_job *job)
+ }
+
+ if (!mrioc->ioctl_sges_allocated) {
++ mutex_unlock(&mrioc->bsg_cmds.mutex);
+ dprint_bsg_err(mrioc, "%s: DMA memory was not allocated\n",
+ __func__);
+ return -ENOMEM;
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_base.c b/drivers/scsi/mpt3sas/mpt3sas_base.c
+index dc43cfa83088bb..212e3b86bb8178 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_base.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_base.c
+@@ -8018,7 +8018,7 @@ _base_diag_reset(struct MPT3SAS_ADAPTER *ioc)
+
+ mutex_lock(&ioc->hostdiag_unlock_mutex);
+ if (mpt3sas_base_unlock_and_get_host_diagnostic(ioc, &host_diagnostic))
+- goto out;
++ goto unlock;
+
+ hcb_size = ioc->base_readl(&ioc->chip->HCBSize);
+ drsprintk(ioc, ioc_info(ioc, "diag reset: issued\n"));
+@@ -8038,7 +8038,7 @@ _base_diag_reset(struct MPT3SAS_ADAPTER *ioc)
+ ioc_info(ioc,
+ "Invalid host diagnostic register value\n");
+ _base_dump_reg_set(ioc);
+- goto out;
++ goto unlock;
+ }
+ if (!(host_diagnostic & MPI2_DIAG_RESET_ADAPTER))
+ break;
+@@ -8074,17 +8074,19 @@ _base_diag_reset(struct MPT3SAS_ADAPTER *ioc)
+ ioc_err(ioc, "%s: failed going to ready state (ioc_state=0x%x)\n",
+ __func__, ioc_state);
+ _base_dump_reg_set(ioc);
+- goto out;
++ goto fail;
+ }
+
+ pci_cfg_access_unlock(ioc->pdev);
+ ioc_info(ioc, "diag reset: SUCCESS\n");
+ return 0;
+
+- out:
++unlock:
++ mutex_unlock(&ioc->hostdiag_unlock_mutex);
++
++fail:
+ pci_cfg_access_unlock(ioc->pdev);
+ ioc_err(ioc, "diag reset: FAILED\n");
+- mutex_unlock(&ioc->hostdiag_unlock_mutex);
+ return -EFAULT;
+ }
+
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_scsih.c b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+index a456e5ec74d886..9c2d3178f3844d 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_scsih.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+@@ -2703,7 +2703,7 @@ scsih_sdev_configure(struct scsi_device *sdev, struct queue_limits *lim)
+ ssp_target = 1;
+ if (sas_device->device_info &
+ MPI2_SAS_DEVICE_INFO_SEP) {
+- sdev_printk(KERN_WARNING, sdev,
++ sdev_printk(KERN_INFO, sdev,
+ "set ignore_delay_remove for handle(0x%04x)\n",
+ sas_device_priv_data->sas_target->handle);
+ sas_device_priv_data->ignore_delay_remove = 1;
+diff --git a/drivers/soc/mediatek/mt8167-mmsys.h b/drivers/soc/mediatek/mt8167-mmsys.h
+index f7a35b3656bb13..655ef962abe9f4 100644
+--- a/drivers/soc/mediatek/mt8167-mmsys.h
++++ b/drivers/soc/mediatek/mt8167-mmsys.h
+@@ -17,18 +17,23 @@ static const struct mtk_mmsys_routes mt8167_mmsys_routing_table[] = {
+ {
+ DDP_COMPONENT_OVL0, DDP_COMPONENT_COLOR0,
+ MT8167_DISP_REG_CONFIG_DISP_OVL0_MOUT_EN, OVL0_MOUT_EN_COLOR0,
++ OVL0_MOUT_EN_COLOR0
+ }, {
+ DDP_COMPONENT_DITHER0, DDP_COMPONENT_RDMA0,
+- MT8167_DISP_REG_CONFIG_DISP_DITHER_MOUT_EN, MT8167_DITHER_MOUT_EN_RDMA0
++ MT8167_DISP_REG_CONFIG_DISP_DITHER_MOUT_EN, MT8167_DITHER_MOUT_EN_RDMA0,
++ MT8167_DITHER_MOUT_EN_RDMA0
+ }, {
+ DDP_COMPONENT_OVL0, DDP_COMPONENT_COLOR0,
+- MT8167_DISP_REG_CONFIG_DISP_COLOR0_SEL_IN, COLOR0_SEL_IN_OVL0
++ MT8167_DISP_REG_CONFIG_DISP_COLOR0_SEL_IN, COLOR0_SEL_IN_OVL0,
++ COLOR0_SEL_IN_OVL0
+ }, {
+ DDP_COMPONENT_RDMA0, DDP_COMPONENT_DSI0,
+- MT8167_DISP_REG_CONFIG_DISP_DSI0_SEL_IN, MT8167_DSI0_SEL_IN_RDMA0
++ MT8167_DISP_REG_CONFIG_DISP_DSI0_SEL_IN, MT8167_DSI0_SEL_IN_RDMA0,
++ MT8167_DSI0_SEL_IN_RDMA0
+ }, {
+ DDP_COMPONENT_RDMA0, DDP_COMPONENT_DSI0,
+- MT8167_DISP_REG_CONFIG_DISP_RDMA0_SOUT_SEL_IN, MT8167_RDMA0_SOUT_DSI0
++ MT8167_DISP_REG_CONFIG_DISP_RDMA0_SOUT_SEL_IN, MT8167_RDMA0_SOUT_DSI0,
++ MT8167_RDMA0_SOUT_DSI0
+ },
+ };
+
+diff --git a/drivers/soc/mediatek/mt8188-mmsys.h b/drivers/soc/mediatek/mt8188-mmsys.h
+index 6bebf1a69fc076..a1d63be0a73dc6 100644
+--- a/drivers/soc/mediatek/mt8188-mmsys.h
++++ b/drivers/soc/mediatek/mt8188-mmsys.h
+@@ -343,7 +343,7 @@ static const struct mtk_mmsys_routes mmsys_mt8188_vdo1_routing_table[] = {
+ MT8188_DISP_DPI1_SEL_IN_FROM_VPP_MERGE4_MOUT
+ }, {
+ DDP_COMPONENT_MERGE5, DDP_COMPONENT_DPI1,
+- MT8188_VDO1_MERGE4_SOUT_SEL, GENMASK(1, 0),
++ MT8188_VDO1_MERGE4_SOUT_SEL, GENMASK(3, 0),
+ MT8188_MERGE4_SOUT_TO_DPI1_SEL
+ }, {
+ DDP_COMPONENT_MERGE5, DDP_COMPONENT_DP_INTF1,
+diff --git a/drivers/soc/mediatek/mt8365-mmsys.h b/drivers/soc/mediatek/mt8365-mmsys.h
+index 7abaf048d91e86..ae37945e6c67cc 100644
+--- a/drivers/soc/mediatek/mt8365-mmsys.h
++++ b/drivers/soc/mediatek/mt8365-mmsys.h
+@@ -14,8 +14,9 @@
+ #define MT8365_DISP_REG_CONFIG_DISP_DPI0_SEL_IN 0xfd8
+ #define MT8365_DISP_REG_CONFIG_DISP_LVDS_SYS_CFG_00 0xfdc
+
++#define MT8365_DISP_MS_IN_OUT_MASK GENMASK(3, 0)
+ #define MT8365_RDMA0_SOUT_COLOR0 0x1
+-#define MT8365_DITHER_MOUT_EN_DSI0 0x1
++#define MT8365_DITHER_MOUT_EN_DSI0 BIT(0)
+ #define MT8365_DSI0_SEL_IN_DITHER 0x1
+ #define MT8365_RDMA0_SEL_IN_OVL0 0x0
+ #define MT8365_RDMA0_RSZ0_SEL_IN_RDMA0 0x0
+@@ -30,52 +31,43 @@ static const struct mtk_mmsys_routes mt8365_mmsys_routing_table[] = {
+ {
+ DDP_COMPONENT_OVL0, DDP_COMPONENT_RDMA0,
+ MT8365_DISP_REG_CONFIG_DISP_OVL0_MOUT_EN,
+- MT8365_OVL0_MOUT_PATH0_SEL, MT8365_OVL0_MOUT_PATH0_SEL
+- },
+- {
++ MT8365_DISP_MS_IN_OUT_MASK, MT8365_OVL0_MOUT_PATH0_SEL
++ }, {
+ DDP_COMPONENT_OVL0, DDP_COMPONENT_RDMA0,
+ MT8365_DISP_REG_CONFIG_DISP_RDMA0_SEL_IN,
+- MT8365_RDMA0_SEL_IN_OVL0, MT8365_RDMA0_SEL_IN_OVL0
+- },
+- {
++ MT8365_DISP_MS_IN_OUT_MASK, MT8365_RDMA0_SEL_IN_OVL0
++ }, {
+ DDP_COMPONENT_RDMA0, DDP_COMPONENT_COLOR0,
+ MT8365_DISP_REG_CONFIG_DISP_RDMA0_SOUT_SEL,
+- MT8365_RDMA0_SOUT_COLOR0, MT8365_RDMA0_SOUT_COLOR0
+- },
+- {
++ MT8365_DISP_MS_IN_OUT_MASK, MT8365_RDMA0_SOUT_COLOR0
++ }, {
+ DDP_COMPONENT_COLOR0, DDP_COMPONENT_CCORR,
+ MT8365_DISP_REG_CONFIG_DISP_COLOR0_SEL_IN,
+- MT8365_DISP_COLOR_SEL_IN_COLOR0,MT8365_DISP_COLOR_SEL_IN_COLOR0
+- },
+- {
++ MT8365_DISP_MS_IN_OUT_MASK, MT8365_DISP_COLOR_SEL_IN_COLOR0
++ }, {
+ DDP_COMPONENT_DITHER0, DDP_COMPONENT_DSI0,
+ MT8365_DISP_REG_CONFIG_DISP_DITHER0_MOUT_EN,
+- MT8365_DITHER_MOUT_EN_DSI0, MT8365_DITHER_MOUT_EN_DSI0
+- },
+- {
++ MT8365_DISP_MS_IN_OUT_MASK, MT8365_DITHER_MOUT_EN_DSI0
++ }, {
+ DDP_COMPONENT_DITHER0, DDP_COMPONENT_DSI0,
+ MT8365_DISP_REG_CONFIG_DISP_DSI0_SEL_IN,
+- MT8365_DSI0_SEL_IN_DITHER, MT8365_DSI0_SEL_IN_DITHER
+- },
+- {
++ MT8365_DISP_MS_IN_OUT_MASK, MT8365_DSI0_SEL_IN_DITHER
++ }, {
+ DDP_COMPONENT_RDMA0, DDP_COMPONENT_COLOR0,
+ MT8365_DISP_REG_CONFIG_DISP_RDMA0_RSZ0_SEL_IN,
+- MT8365_RDMA0_RSZ0_SEL_IN_RDMA0, MT8365_RDMA0_RSZ0_SEL_IN_RDMA0
+- },
+- {
++ MT8365_DISP_MS_IN_OUT_MASK, MT8365_RDMA0_RSZ0_SEL_IN_RDMA0
++ }, {
+ DDP_COMPONENT_RDMA1, DDP_COMPONENT_DPI0,
+ MT8365_DISP_REG_CONFIG_DISP_LVDS_SYS_CFG_00,
+ MT8365_LVDS_SYS_CFG_00_SEL_LVDS_PXL_CLK, MT8365_LVDS_SYS_CFG_00_SEL_LVDS_PXL_CLK
+- },
+- {
++ }, {
+ DDP_COMPONENT_RDMA1, DDP_COMPONENT_DPI0,
+ MT8365_DISP_REG_CONFIG_DISP_DPI0_SEL_IN,
+- MT8365_DPI0_SEL_IN_RDMA1, MT8365_DPI0_SEL_IN_RDMA1
+- },
+- {
++ MT8365_DISP_MS_IN_OUT_MASK, MT8365_DPI0_SEL_IN_RDMA1
++ }, {
+ DDP_COMPONENT_RDMA1, DDP_COMPONENT_DPI0,
+ MT8365_DISP_REG_CONFIG_DISP_RDMA1_SOUT_SEL,
+- MT8365_RDMA1_SOUT_DPI0, MT8365_RDMA1_SOUT_DPI0
++ MT8365_DISP_MS_IN_OUT_MASK, MT8365_RDMA1_SOUT_DPI0
+ },
+ };
+
+diff --git a/drivers/soundwire/generic_bandwidth_allocation.c b/drivers/soundwire/generic_bandwidth_allocation.c
+index 59965f43c2fb0e..f78a2a16581a71 100644
+--- a/drivers/soundwire/generic_bandwidth_allocation.c
++++ b/drivers/soundwire/generic_bandwidth_allocation.c
+@@ -194,10 +194,11 @@ static int sdw_compute_group_params(struct sdw_bus *bus,
+ continue;
+ } else {
+ /*
+- * Include runtimes with running (ENABLED state) and paused (DISABLED state)
+- * streams
++ * Include runtimes with running (ENABLED/PREPARED state) and
++ * paused (DISABLED state) streams
+ */
+ if (m_rt->stream->state != SDW_STREAM_ENABLED &&
++ m_rt->stream->state != SDW_STREAM_PREPARED &&
+ m_rt->stream->state != SDW_STREAM_DISABLED)
+ continue;
+ }
+diff --git a/drivers/soundwire/slave.c b/drivers/soundwire/slave.c
+index 4869b073b11c2f..d2d99555ec5a50 100644
+--- a/drivers/soundwire/slave.c
++++ b/drivers/soundwire/slave.c
+@@ -13,6 +13,7 @@ static void sdw_slave_release(struct device *dev)
+ {
+ struct sdw_slave *slave = dev_to_sdw_dev(dev);
+
++ of_node_put(slave->dev.of_node);
+ mutex_destroy(&slave->sdw_dev_lock);
+ kfree(slave);
+ }
+diff --git a/drivers/spi/spi-amd.c b/drivers/spi/spi-amd.c
+index c85997478b8190..17fc0b17e756dd 100644
+--- a/drivers/spi/spi-amd.c
++++ b/drivers/spi/spi-amd.c
+@@ -302,7 +302,7 @@ static void amd_set_spi_freq(struct amd_spi *amd_spi, u32 speed_hz)
+ {
+ unsigned int i, spd7_val, alt_spd;
+
+- for (i = 0; i < ARRAY_SIZE(amd_spi_freq); i++)
++ for (i = 0; i < ARRAY_SIZE(amd_spi_freq)-1; i++)
+ if (speed_hz >= amd_spi_freq[i].speed_hz)
+ break;
+
+diff --git a/drivers/spi/spi-bcm2835.c b/drivers/spi/spi-bcm2835.c
+index 0d1aa659248460..77de5a07639afb 100644
+--- a/drivers/spi/spi-bcm2835.c
++++ b/drivers/spi/spi-bcm2835.c
+@@ -1162,7 +1162,8 @@ static void bcm2835_spi_cleanup(struct spi_device *spi)
+ sizeof(u32),
+ DMA_TO_DEVICE);
+
+- gpiod_put(bs->cs_gpio);
++ if (!IS_ERR(bs->cs_gpio))
++ gpiod_put(bs->cs_gpio);
+ spi_set_csgpiod(spi, 0, NULL);
+
+ kfree(target);
+@@ -1225,7 +1226,12 @@ static int bcm2835_spi_setup(struct spi_device *spi)
+ struct bcm2835_spi *bs = spi_controller_get_devdata(ctlr);
+ struct bcm2835_spidev *target = spi_get_ctldata(spi);
+ struct gpiod_lookup_table *lookup __free(kfree) = NULL;
+- int ret;
++ const char *pinctrl_compats[] = {
++ "brcm,bcm2835-gpio",
++ "brcm,bcm2711-gpio",
++ "brcm,bcm7211-gpio",
++ };
++ int ret, i;
+ u32 cs;
+
+ if (!target) {
+@@ -1290,6 +1296,14 @@ static int bcm2835_spi_setup(struct spi_device *spi)
+ goto err_cleanup;
+ }
+
++ for (i = 0; i < ARRAY_SIZE(pinctrl_compats); i++) {
++ if (of_find_compatible_node(NULL, NULL, pinctrl_compats[i]))
++ break;
++ }
++
++ if (i == ARRAY_SIZE(pinctrl_compats))
++ return 0;
++
+ /*
+ * TODO: The code below is a slightly better alternative to the utter
+ * abuse of the GPIO API that I found here before. It creates a
+diff --git a/drivers/spi/spi-cadence-xspi.c b/drivers/spi/spi-cadence-xspi.c
+index aed98ab1433467..6dcba0e0ddaa3e 100644
+--- a/drivers/spi/spi-cadence-xspi.c
++++ b/drivers/spi/spi-cadence-xspi.c
+@@ -432,7 +432,7 @@ static bool cdns_mrvl_xspi_setup_clock(struct cdns_xspi_dev *cdns_xspi,
+ u32 clk_reg;
+ bool update_clk = false;
+
+- while (i < ARRAY_SIZE(cdns_mrvl_xspi_clk_div_list)) {
++ while (i < (ARRAY_SIZE(cdns_mrvl_xspi_clk_div_list) - 1)) {
+ clk_val = MRVL_XSPI_CLOCK_DIVIDED(
+ cdns_mrvl_xspi_clk_div_list[i]);
+ if (clk_val <= requested_clk)
+diff --git a/drivers/staging/gpib/agilent_82350b/agilent_82350b.c b/drivers/staging/gpib/agilent_82350b/agilent_82350b.c
+index 3f4f95b7fe34ac..c62407077d37f1 100644
+--- a/drivers/staging/gpib/agilent_82350b/agilent_82350b.c
++++ b/drivers/staging/gpib/agilent_82350b/agilent_82350b.c
+@@ -848,6 +848,7 @@ static gpib_interface_t agilent_82350b_unaccel_interface = {
+ .primary_address = agilent_82350b_primary_address,
+ .secondary_address = agilent_82350b_secondary_address,
+ .serial_poll_response = agilent_82350b_serial_poll_response,
++ .serial_poll_status = agilent_82350b_serial_poll_status,
+ .t1_delay = agilent_82350b_t1_delay,
+ .return_to_local = agilent_82350b_return_to_local,
+ };
+@@ -875,6 +876,7 @@ static gpib_interface_t agilent_82350b_interface = {
+ .primary_address = agilent_82350b_primary_address,
+ .secondary_address = agilent_82350b_secondary_address,
+ .serial_poll_response = agilent_82350b_serial_poll_response,
++ .serial_poll_status = agilent_82350b_serial_poll_status,
+ .t1_delay = agilent_82350b_t1_delay,
+ .return_to_local = agilent_82350b_return_to_local,
+ };
+diff --git a/drivers/staging/gpib/agilent_82357a/agilent_82357a.c b/drivers/staging/gpib/agilent_82357a/agilent_82357a.c
+index 69f0e490d401d8..e0d36f0dff25ed 100644
+--- a/drivers/staging/gpib/agilent_82357a/agilent_82357a.c
++++ b/drivers/staging/gpib/agilent_82357a/agilent_82357a.c
+@@ -7,6 +7,10 @@
+
+ #define _GNU_SOURCE
+
++#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
++#define dev_fmt pr_fmt
++#define DRV_NAME KBUILD_MODNAME
++
+ #include <linux/kernel.h>
+ #include <linux/module.h>
+ #include <linux/slab.h>
+@@ -79,14 +83,12 @@ static int agilent_82357a_send_bulk_msg(struct agilent_82357a_priv *a_priv, void
+
+ retval = usb_submit_urb(a_priv->bulk_urb, GFP_KERNEL);
+ if (retval) {
+- dev_err(&usb_dev->dev, "%s: failed to submit bulk out urb, retval=%i\n",
+- __func__, retval);
++ dev_err(&usb_dev->dev, "failed to submit bulk out urb, retval=%i\n", retval);
+ mutex_unlock(&a_priv->bulk_alloc_lock);
+ goto cleanup;
+ }
+ mutex_unlock(&a_priv->bulk_alloc_lock);
+ if (down_interruptible(&context->complete)) {
+- dev_err(&usb_dev->dev, "%s: interrupted\n", __func__);
+ retval = -ERESTARTSYS;
+ goto cleanup;
+ }
+@@ -149,14 +151,12 @@ static int agilent_82357a_receive_bulk_msg(struct agilent_82357a_priv *a_priv, v
+
+ retval = usb_submit_urb(a_priv->bulk_urb, GFP_KERNEL);
+ if (retval) {
+- dev_err(&usb_dev->dev, "%s: failed to submit bulk out urb, retval=%i\n",
+- __func__, retval);
++ dev_err(&usb_dev->dev, "failed to submit bulk in urb, retval=%i\n", retval);
+ mutex_unlock(&a_priv->bulk_alloc_lock);
+ goto cleanup;
+ }
+ mutex_unlock(&a_priv->bulk_alloc_lock);
+ if (down_interruptible(&context->complete)) {
+- dev_err(&usb_dev->dev, "%s: interrupted\n", __func__);
+ retval = -ERESTARTSYS;
+ goto cleanup;
+ }
+@@ -205,7 +205,6 @@ static int agilent_82357a_receive_control_msg(struct agilent_82357a_priv *a_priv
+
+ static void agilent_82357a_dump_raw_block(const u8 *raw_data, int length)
+ {
+- pr_info("hex block dump\n");
+ print_hex_dump(KERN_INFO, "", DUMP_PREFIX_NONE, 8, 1, raw_data, length, true);
+ }
+
+@@ -225,7 +224,7 @@ static int agilent_82357a_write_registers(struct agilent_82357a_priv *a_priv,
+ static const int max_writes = 31;
+
+ if (num_writes > max_writes) {
+- dev_err(&usb_dev->dev, "%s: bug! num_writes=%i too large\n", __func__, num_writes);
++ dev_err(&usb_dev->dev, "bug! num_writes=%i too large\n", num_writes);
+ return -EIO;
+ }
+ out_data_length = num_writes * bytes_per_write + header_length;
+@@ -239,8 +238,7 @@ static int agilent_82357a_write_registers(struct agilent_82357a_priv *a_priv,
+ out_data[i++] = writes[j].address;
+ out_data[i++] = writes[j].value;
+ }
+- if (i > out_data_length)
+- dev_err(&usb_dev->dev, "%s: bug! buffer overrun\n", __func__);
++
+ retval = mutex_lock_interruptible(&a_priv->bulk_transfer_lock);
+ if (retval) {
+ kfree(out_data);
+@@ -249,8 +247,8 @@ static int agilent_82357a_write_registers(struct agilent_82357a_priv *a_priv,
+ retval = agilent_82357a_send_bulk_msg(a_priv, out_data, i, &bytes_written, 1000);
+ kfree(out_data);
+ if (retval) {
+- dev_err(&usb_dev->dev, "%s: agilent_82357a_send_bulk_msg returned %i, bytes_written=%i, i=%i\n",
+- __func__, retval, bytes_written, i);
++ dev_err(&usb_dev->dev, "send_bulk_msg returned %i, bytes_written=%i, i=%i\n",
++ retval, bytes_written, i);
+ mutex_unlock(&a_priv->bulk_transfer_lock);
+ return retval;
+ }
+@@ -265,20 +263,19 @@ static int agilent_82357a_write_registers(struct agilent_82357a_priv *a_priv,
+ mutex_unlock(&a_priv->bulk_transfer_lock);
+
+ if (retval) {
+- dev_err(&usb_dev->dev, "%s: agilent_82357a_receive_bulk_msg returned %i, bytes_read=%i\n",
+- __func__, retval, bytes_read);
++ dev_err(&usb_dev->dev, "receive_bulk_msg returned %i, bytes_read=%i\n",
++ retval, bytes_read);
+ agilent_82357a_dump_raw_block(in_data, bytes_read);
+ kfree(in_data);
+ return -EIO;
+ }
+ if (in_data[0] != (0xff & ~DATA_PIPE_CMD_WR_REGS)) {
+- dev_err(&usb_dev->dev, "%s: error, bulk command=0x%x != ~DATA_PIPE_CMD_WR_REGS\n",
+- __func__, in_data[0]);
++ dev_err(&usb_dev->dev, "bulk command=0x%x != ~DATA_PIPE_CMD_WR_REGS\n", in_data[0]);
+ return -EIO;
+ }
+ if (in_data[1]) {
+- dev_err(&usb_dev->dev, "%s: nonzero error code 0x%x in DATA_PIPE_CMD_WR_REGS response\n",
+- __func__, in_data[1]);
++ dev_err(&usb_dev->dev, "nonzero error code 0x%x in DATA_PIPE_CMD_WR_REGS response\n",
++ in_data[1]);
+ return -EIO;
+ }
+ kfree(in_data);
+@@ -299,9 +296,10 @@ static int agilent_82357a_read_registers(struct agilent_82357a_priv *a_priv,
+ static const int header_length = 2;
+ static const int max_reads = 62;
+
+- if (num_reads > max_reads)
+- dev_err(&usb_dev->dev, "%s: bug! num_reads=%i too large\n", __func__, num_reads);
+-
++ if (num_reads > max_reads) {
++ dev_err(&usb_dev->dev, "bug! num_reads=%i too large\n", num_reads);
++ return -EIO;
++ }
+ out_data_length = num_reads + header_length;
+ out_data = kmalloc(out_data_length, GFP_KERNEL);
+ if (!out_data)
+@@ -311,8 +309,7 @@ static int agilent_82357a_read_registers(struct agilent_82357a_priv *a_priv,
+ out_data[i++] = num_reads;
+ for (j = 0; j < num_reads; j++)
+ out_data[i++] = reads[j].address;
+- if (i > out_data_length)
+- dev_err(&usb_dev->dev, "%s: bug! buffer overrun\n", __func__);
++
+ if (blocking) {
+ retval = mutex_lock_interruptible(&a_priv->bulk_transfer_lock);
+ if (retval) {
+@@ -329,8 +326,8 @@ static int agilent_82357a_read_registers(struct agilent_82357a_priv *a_priv,
+ retval = agilent_82357a_send_bulk_msg(a_priv, out_data, i, &bytes_written, 1000);
+ kfree(out_data);
+ if (retval) {
+- dev_err(&usb_dev->dev, "%s: agilent_82357a_send_bulk_msg returned %i, bytes_written=%i, i=%i\n",
+- __func__, retval, bytes_written, i);
++ dev_err(&usb_dev->dev, "send_bulk_msg returned %i, bytes_written=%i, i=%i\n",
++ retval, bytes_written, i);
+ mutex_unlock(&a_priv->bulk_transfer_lock);
+ return retval;
+ }
+@@ -345,21 +342,20 @@ static int agilent_82357a_read_registers(struct agilent_82357a_priv *a_priv,
+ mutex_unlock(&a_priv->bulk_transfer_lock);
+
+ if (retval) {
+- dev_err(&usb_dev->dev, "%s: agilent_82357a_receive_bulk_msg returned %i, bytes_read=%i\n",
+- __func__, retval, bytes_read);
++ dev_err(&usb_dev->dev, "receive_bulk_msg returned %i, bytes_read=%i\n",
++ retval, bytes_read);
+ agilent_82357a_dump_raw_block(in_data, bytes_read);
+ kfree(in_data);
+ return -EIO;
+ }
+ i = 0;
+ if (in_data[i++] != (0xff & ~DATA_PIPE_CMD_RD_REGS)) {
+- dev_err(&usb_dev->dev, "%s: error, bulk command=0x%x != ~DATA_PIPE_CMD_RD_REGS\n",
+- __func__, in_data[0]);
++ dev_err(&usb_dev->dev, "bulk command=0x%x != ~DATA_PIPE_CMD_RD_REGS\n", in_data[0]);
+ return -EIO;
+ }
+ if (in_data[i++]) {
+- dev_err(&usb_dev->dev, "%s: nonzero error code 0x%x in DATA_PIPE_CMD_RD_REGS response\n",
+- __func__, in_data[1]);
++ dev_err(&usb_dev->dev, "nonzero error code 0x%x in DATA_PIPE_CMD_RD_REGS response\n",
++ in_data[1]);
+ return -EIO;
+ }
+ for (j = 0; j < num_reads; j++)
+@@ -390,14 +386,13 @@ static int agilent_82357a_abort(struct agilent_82357a_priv *a_priv, int flush)
+ wIndex, status_data,
+ status_data_len, 100);
+ if (receive_control_retval < 0) {
+- dev_err(&usb_dev->dev, "%s: agilent_82357a_receive_control_msg() returned %i\n",
+- __func__, receive_control_retval);
++ dev_err(&usb_dev->dev, "82357a_receive_control_msg() returned %i\n",
++ receive_control_retval);
+ retval = -EIO;
+ goto cleanup;
+ }
+ if (status_data[0] != (~XFER_ABORT & 0xff)) {
+- dev_err(&usb_dev->dev, "%s: error, major code=0x%x != ~XFER_ABORT\n",
+- __func__, status_data[0]);
++ dev_err(&usb_dev->dev, "major code=0x%x != ~XFER_ABORT\n", status_data[0]);
+ retval = -EIO;
+ goto cleanup;
+ }
+@@ -413,8 +408,7 @@ static int agilent_82357a_abort(struct agilent_82357a_priv *a_priv, int flush)
+ fallthrough;
+ case UGP_ERR_FLUSHING_ALREADY:
+ default:
+- dev_err(&usb_dev->dev, "%s: abort returned error code=0x%x\n",
+- __func__, status_data[1]);
++ dev_err(&usb_dev->dev, "abort returned error code=0x%x\n", status_data[1]);
+ retval = -EIO;
+ break;
+ }
+@@ -433,7 +427,7 @@ static int agilent_82357a_read(gpib_board_t *board, uint8_t *buffer, size_t leng
+ {
+ int retval;
+ struct agilent_82357a_priv *a_priv = board->private_data;
+- struct usb_device *usb_dev = interface_to_usbdev(a_priv->bus_interface);
++ struct usb_device *usb_dev;
+ u8 *out_data, *in_data;
+ int out_data_length, in_data_length;
+ int bytes_written, bytes_read;
+@@ -444,6 +438,10 @@ static int agilent_82357a_read(gpib_board_t *board, uint8_t *buffer, size_t leng
+
+ *nbytes = 0;
+ *end = 0;
++
++ if (!a_priv->bus_interface)
++ return -ENODEV;
++ usb_dev = interface_to_usbdev(a_priv->bus_interface);
+ out_data_length = 0x9;
+ out_data = kmalloc(out_data_length, GFP_KERNEL);
+ if (!out_data)
+@@ -469,8 +467,8 @@ static int agilent_82357a_read(gpib_board_t *board, uint8_t *buffer, size_t leng
+ retval = agilent_82357a_send_bulk_msg(a_priv, out_data, i, &bytes_written, msec_timeout);
+ kfree(out_data);
+ if (retval || bytes_written != i) {
+- dev_err(&usb_dev->dev, "%s: agilent_82357a_send_bulk_msg returned %i, bytes_written=%i, i=%i\n",
+- __func__, retval, bytes_written, i);
++ dev_err(&usb_dev->dev, "send_bulk_msg returned %i, bytes_written=%i, i=%i\n",
++ retval, bytes_written, i);
+ mutex_unlock(&a_priv->bulk_transfer_lock);
+ if (retval < 0)
+ return retval;
+@@ -501,19 +499,19 @@ static int agilent_82357a_read(gpib_board_t *board, uint8_t *buffer, size_t leng
+ &extra_bytes_read, 100);
+ bytes_read += extra_bytes_read;
+ if (extra_bytes_retval) {
+- dev_err(&usb_dev->dev, "%s: extra_bytes_retval=%i, bytes_read=%i\n",
+- __func__, extra_bytes_retval, bytes_read);
++ dev_err(&usb_dev->dev, "extra_bytes_retval=%i, bytes_read=%i\n",
++ extra_bytes_retval, bytes_read);
+ agilent_82357a_abort(a_priv, 0);
+ }
+ } else if (retval) {
+- dev_err(&usb_dev->dev, "%s: agilent_82357a_receive_bulk_msg returned %i, bytes_read=%i\n",
+- __func__, retval, bytes_read);
++ dev_err(&usb_dev->dev, "receive_bulk_msg returned %i, bytes_read=%i\n",
++ retval, bytes_read);
+ agilent_82357a_abort(a_priv, 0);
+ }
+ mutex_unlock(&a_priv->bulk_transfer_lock);
+ if (bytes_read > length + 1) {
+ bytes_read = length + 1;
+- pr_warn("%s: bytes_read > length? truncating", __func__);
++ dev_warn(&usb_dev->dev, "bytes_read > length? truncating");
+ }
+
+ if (bytes_read >= 1) {
+@@ -540,7 +538,7 @@ static ssize_t agilent_82357a_generic_write(gpib_board_t *board, uint8_t *buffer
+ {
+ int retval;
+ struct agilent_82357a_priv *a_priv = board->private_data;
+- struct usb_device *usb_dev = interface_to_usbdev(a_priv->bus_interface);
++ struct usb_device *usb_dev;
+ u8 *out_data = NULL;
+ u8 *status_data = NULL;
+ int out_data_length;
+@@ -551,6 +549,10 @@ static ssize_t agilent_82357a_generic_write(gpib_board_t *board, uint8_t *buffer
+ struct agilent_82357a_register_pairlet read_reg;
+
+ *bytes_written = 0;
++ if (!a_priv->bus_interface)
++ return -ENODEV;
++
++ usb_dev = interface_to_usbdev(a_priv->bus_interface);
+ out_data_length = length + 0x8;
+ out_data = kmalloc(out_data_length, GFP_KERNEL);
+ if (!out_data)
+@@ -584,8 +586,8 @@ static ssize_t agilent_82357a_generic_write(gpib_board_t *board, uint8_t *buffer
+ kfree(out_data);
+ if (retval || raw_bytes_written != i) {
+ agilent_82357a_abort(a_priv, 0);
+- dev_err(&usb_dev->dev, "%s: agilent_82357a_send_bulk_msg returned %i, raw_bytes_written=%i, i=%i\n",
+- __func__, retval, raw_bytes_written, i);
++ dev_err(&usb_dev->dev, "send_bulk_msg returned %i, raw_bytes_written=%i, i=%i\n",
++ retval, raw_bytes_written, i);
+ mutex_unlock(&a_priv->bulk_transfer_lock);
+ if (retval < 0)
+ return retval;
+@@ -597,7 +599,7 @@ static ssize_t agilent_82357a_generic_write(gpib_board_t *board, uint8_t *buffer
+ &a_priv->interrupt_flags) ||
+ test_bit(TIMO_NUM, &board->status));
+ if (retval) {
+- dev_err(&usb_dev->dev, "%s: wait write complete interrupted\n", __func__);
++ dev_dbg(&usb_dev->dev, "wait write complete interrupted\n");
+ agilent_82357a_abort(a_priv, 0);
+ mutex_unlock(&a_priv->bulk_transfer_lock);
+ return -ERESTARTSYS;
+@@ -614,8 +616,7 @@ static ssize_t agilent_82357a_generic_write(gpib_board_t *board, uint8_t *buffer
+ read_reg.address = BSR;
+ retval = agilent_82357a_read_registers(a_priv, &read_reg, 1, 1);
+ if (retval) {
+- dev_err(&usb_dev->dev, "%s: agilent_82357a_read_registers() returned error\n",
+- __func__);
++ dev_err(&usb_dev->dev, "read_registers() returned error\n");
+ return -ETIMEDOUT;
+ }
+
+@@ -632,8 +633,7 @@ static ssize_t agilent_82357a_generic_write(gpib_board_t *board, uint8_t *buffer
+ read_reg.address = ADSR;
+ retval = agilent_82357a_read_registers(a_priv, &read_reg, 1, 1);
+ if (retval) {
+- dev_err(&usb_dev->dev, "%s: agilent_82357a_read_registers() returned error\n",
+- __func__);
++ dev_err(&usb_dev->dev, "read_registers() returned error\n");
+ return -ETIMEDOUT;
+ }
+ adsr = read_reg.value;
+@@ -659,8 +659,7 @@ static ssize_t agilent_82357a_generic_write(gpib_board_t *board, uint8_t *buffer
+ 100);
+ mutex_unlock(&a_priv->bulk_transfer_lock);
+ if (retval < 0) {
+- dev_err(&usb_dev->dev, "%s: agilent_82357a_receive_control_msg() returned %i\n",
+- __func__, retval);
++ dev_err(&usb_dev->dev, "receive_control_msg() returned %i\n", retval);
+ kfree(status_data);
+ return -EIO;
+ }
+@@ -699,17 +698,20 @@ int agilent_82357a_take_control_internal(gpib_board_t *board, int synchronous)
+ write.value = AUX_TCA;
+ retval = agilent_82357a_write_registers(a_priv, &write, 1);
+ if (retval)
+- dev_err(&usb_dev->dev, "%s: agilent_82357a_write_registers() returned error\n",
+- __func__);
++ dev_err(&usb_dev->dev, "write_registers() returned error\n");
+
+ return retval;
+ }
+
+ static int agilent_82357a_take_control(gpib_board_t *board, int synchronous)
+ {
++ struct agilent_82357a_priv *a_priv = board->private_data;
+ const int timeout = 10;
+ int i;
+
++ if (!a_priv->bus_interface)
++ return -ENODEV;
++
+ /* It looks like the 9914 does not handle tcs properly.
+ * See comment above tms9914_take_control_workaround() in
+ * drivers/gpib/tms9914/tms9914_aux.c
+@@ -733,16 +735,19 @@ static int agilent_82357a_take_control(gpib_board_t *board, int synchronous)
+ static int agilent_82357a_go_to_standby(gpib_board_t *board)
+ {
+ struct agilent_82357a_priv *a_priv = board->private_data;
+- struct usb_device *usb_dev = interface_to_usbdev(a_priv->bus_interface);
++ struct usb_device *usb_dev;
+ struct agilent_82357a_register_pairlet write;
+ int retval;
+
++ if (!a_priv->bus_interface)
++ return -ENODEV;
++
++ usb_dev = interface_to_usbdev(a_priv->bus_interface);
+ write.address = AUXCR;
+ write.value = AUX_GTS;
+ retval = agilent_82357a_write_registers(a_priv, &write, 1);
+ if (retval)
+- dev_err(&usb_dev->dev, "%s: agilent_82357a_write_registers() returned error\n",
+- __func__);
++ dev_err(&usb_dev->dev, "write_registers() returned error\n");
+ return 0;
+ }
+
+@@ -750,11 +755,15 @@ static int agilent_82357a_go_to_standby(gpib_board_t *board)
+ static void agilent_82357a_request_system_control(gpib_board_t *board, int request_control)
+ {
+ struct agilent_82357a_priv *a_priv = board->private_data;
+- struct usb_device *usb_dev = interface_to_usbdev(a_priv->bus_interface);
++ struct usb_device *usb_dev;
+ struct agilent_82357a_register_pairlet writes[2];
+ int retval;
+ int i = 0;
+
++ if (!a_priv->bus_interface)
++ return; // -ENODEV;
++
++ usb_dev = interface_to_usbdev(a_priv->bus_interface);
+ /* 82357B needs bit to be set in 9914 AUXCR register */
+ writes[i].address = AUXCR;
+ if (request_control) {
+@@ -771,18 +780,21 @@ static void agilent_82357a_request_system_control(gpib_board_t *board, int reque
+ ++i;
+ retval = agilent_82357a_write_registers(a_priv, writes, i);
+ if (retval)
+- dev_err(&usb_dev->dev, "%s: agilent_82357a_write_registers() returned error\n",
+- __func__);
++ dev_err(&usb_dev->dev, "write_registers() returned error\n");
+ return;// retval;
+ }
+
+ static void agilent_82357a_interface_clear(gpib_board_t *board, int assert)
+ {
+ struct agilent_82357a_priv *a_priv = board->private_data;
+- struct usb_device *usb_dev = interface_to_usbdev(a_priv->bus_interface);
++ struct usb_device *usb_dev;
+ struct agilent_82357a_register_pairlet write;
+ int retval;
+
++ if (!a_priv->bus_interface)
++ return; // -ENODEV;
++
++ usb_dev = interface_to_usbdev(a_priv->bus_interface);
+ write.address = AUXCR;
+ write.value = AUX_SIC;
+ if (assert) {
+@@ -791,25 +803,27 @@ static void agilent_82357a_interface_clear(gpib_board_t *board, int assert)
+ }
+ retval = agilent_82357a_write_registers(a_priv, &write, 1);
+ if (retval)
+- dev_err(&usb_dev->dev, "%s: agilent_82357a_write_registers() returned error\n",
+- __func__);
++ dev_err(&usb_dev->dev, "write_registers() returned error\n");
+ }
+
+ static void agilent_82357a_remote_enable(gpib_board_t *board, int enable)
+ {
+ struct agilent_82357a_priv *a_priv = board->private_data;
+- struct usb_device *usb_dev = interface_to_usbdev(a_priv->bus_interface);
++ struct usb_device *usb_dev;
+ struct agilent_82357a_register_pairlet write;
+ int retval;
+
++ if (!a_priv->bus_interface)
++ return; //-ENODEV;
++
++ usb_dev = interface_to_usbdev(a_priv->bus_interface);
+ write.address = AUXCR;
+ write.value = AUX_SRE;
+ if (enable)
+ write.value |= AUX_CS;
+ retval = agilent_82357a_write_registers(a_priv, &write, 1);
+ if (retval)
+- dev_err(&usb_dev->dev, "%s: agilent_82357a_write_registers() returned error\n",
+- __func__);
++ dev_err(&usb_dev->dev, "write_registers() returned error\n");
+ a_priv->ren_state = enable;
+ return;// 0;
+ }
+@@ -818,10 +832,11 @@ static int agilent_82357a_enable_eos(gpib_board_t *board, uint8_t eos_byte, int
+ {
+ struct agilent_82357a_priv *a_priv = board->private_data;
+
+- if (compare_8_bits == 0) {
+- pr_warn("%s: hardware only supports 8-bit EOS compare", __func__);
++ if (!a_priv->bus_interface)
++ return -ENODEV;
++ if (compare_8_bits == 0)
+ return -EOPNOTSUPP;
+- }
++
+ a_priv->eos_char = eos_byte;
+ a_priv->eos_mode = REOS | BIN;
+ return 0;
+@@ -837,10 +852,13 @@ static void agilent_82357a_disable_eos(gpib_board_t *board)
+ static unsigned int agilent_82357a_update_status(gpib_board_t *board, unsigned int clear_mask)
+ {
+ struct agilent_82357a_priv *a_priv = board->private_data;
+- struct usb_device *usb_dev = interface_to_usbdev(a_priv->bus_interface);
++ struct usb_device *usb_dev;
+ struct agilent_82357a_register_pairlet address_status, bus_status;
+ int retval;
+
++ if (!a_priv->bus_interface)
++ return -ENODEV;
++ usb_dev = interface_to_usbdev(a_priv->bus_interface);
+ board->status &= ~clear_mask;
+ if (a_priv->is_cic)
+ set_bit(CIC_NUM, &board->status);
+@@ -850,8 +868,7 @@ static unsigned int agilent_82357a_update_status(gpib_board_t *board, unsigned i
+ retval = agilent_82357a_read_registers(a_priv, &address_status, 1, 0);
+ if (retval) {
+ if (retval != -EAGAIN)
+- dev_err(&usb_dev->dev, "%s: agilent_82357a_read_registers() returned error\n",
+- __func__);
++ dev_err(&usb_dev->dev, "read_registers() returned error\n");
+ return board->status;
+ }
+ // check for remote/local
+@@ -883,8 +900,7 @@ static unsigned int agilent_82357a_update_status(gpib_board_t *board, unsigned i
+ retval = agilent_82357a_read_registers(a_priv, &bus_status, 1, 0);
+ if (retval) {
+ if (retval != -EAGAIN)
+- dev_err(&usb_dev->dev, "%s: agilent_82357a_read_registers() returned error\n",
+- __func__);
++ dev_err(&usb_dev->dev, "read_registers() returned error\n");
+ return board->status;
+ }
+ if (bus_status.value & BSR_SRQ_BIT)
+@@ -902,13 +918,15 @@ static int agilent_82357a_primary_address(gpib_board_t *board, unsigned int addr
+ struct agilent_82357a_register_pairlet write;
+ int retval;
+
++ if (!a_priv->bus_interface)
++ return -ENODEV;
++ usb_dev = interface_to_usbdev(a_priv->bus_interface);
+ // put primary address in address0
+ write.address = ADR;
+ write.value = address & ADDRESS_MASK;
+ retval = agilent_82357a_write_registers(a_priv, &write, 1);
+ if (retval) {
+- dev_err(&usb_dev->dev, "%s: agilent_82357a_write_registers() returned error\n",
+- __func__);
++ dev_err(&usb_dev->dev, "write_registers() returned error\n");
+ return retval;
+ }
+ return retval;
+@@ -917,18 +935,21 @@ static int agilent_82357a_primary_address(gpib_board_t *board, unsigned int addr
+ static int agilent_82357a_secondary_address(gpib_board_t *board, unsigned int address, int enable)
+ {
+ if (enable)
+- pr_warn("%s: warning: assigning a secondary address not supported\n", __func__);
+- return -EOPNOTSUPP;
++ return -EOPNOTSUPP;
++ return 0;
+ }
+
+ static int agilent_82357a_parallel_poll(gpib_board_t *board, uint8_t *result)
+ {
+ struct agilent_82357a_priv *a_priv = board->private_data;
+- struct usb_device *usb_dev = interface_to_usbdev(a_priv->bus_interface);
++ struct usb_device *usb_dev;
+ struct agilent_82357a_register_pairlet writes[2];
+ struct agilent_82357a_register_pairlet read;
+ int retval;
+
++ if (!a_priv->bus_interface)
++ return -ENODEV;
++ usb_dev = interface_to_usbdev(a_priv->bus_interface);
+ // execute parallel poll
+ writes[0].address = AUXCR;
+ writes[0].value = AUX_CS | AUX_RPP;
+@@ -936,16 +957,14 @@ static int agilent_82357a_parallel_poll(gpib_board_t *board, uint8_t *result)
+ writes[1].value = a_priv->hw_control_bits & ~NOT_PARALLEL_POLL;
+ retval = agilent_82357a_write_registers(a_priv, writes, 2);
+ if (retval) {
+- dev_err(&usb_dev->dev, "%s: agilent_82357a_write_registers() returned error\n",
+- __func__);
++ dev_err(&usb_dev->dev, "write_registers() returned error\n");
+ return retval;
+ }
+ udelay(2); //silly, since usb write will take way longer
+ read.address = CPTR;
+ retval = agilent_82357a_read_registers(a_priv, &read, 1, 1);
+ if (retval) {
+- dev_err(&usb_dev->dev, "%s: agilent_82357a_read_registers() returned error\n",
+- __func__);
++ dev_err(&usb_dev->dev, "read_registers() returned error\n");
+ return retval;
+ }
+ *result = read.value;
+@@ -956,8 +975,7 @@ static int agilent_82357a_parallel_poll(gpib_board_t *board, uint8_t *result)
+ writes[1].value = AUX_RPP;
+ retval = agilent_82357a_write_registers(a_priv, writes, 2);
+ if (retval) {
+- dev_err(&usb_dev->dev, "%s: agilent_82357a_write_registers() returned error\n",
+- __func__);
++ dev_err(&usb_dev->dev, "write_registers() returned error\n");
+ return retval;
+ }
+ return 0;
+@@ -996,17 +1014,19 @@ static void agilent_82357a_return_to_local(gpib_board_t *board)
+ static int agilent_82357a_line_status(const gpib_board_t *board)
+ {
+ struct agilent_82357a_priv *a_priv = board->private_data;
+- struct usb_device *usb_dev = interface_to_usbdev(a_priv->bus_interface);
++ struct usb_device *usb_dev;
+ struct agilent_82357a_register_pairlet bus_status;
+ int retval;
+ int status = ValidALL;
+
++ if (!a_priv->bus_interface)
++ return -ENODEV;
++ usb_dev = interface_to_usbdev(a_priv->bus_interface);
+ bus_status.address = BSR;
+ retval = agilent_82357a_read_registers(a_priv, &bus_status, 1, 0);
+ if (retval) {
+ if (retval != -EAGAIN)
+- dev_err(&usb_dev->dev, "%s: agilent_82357a_read_registers() returned error\n",
+- __func__);
++ dev_err(&usb_dev->dev, "read_registers() returned error\n");
+ return retval;
+ }
+ if (bus_status.value & BSR_REN_BIT)
+@@ -1047,16 +1067,18 @@ static unsigned short nanosec_to_fast_talker_bits(unsigned int *nanosec)
+ static unsigned int agilent_82357a_t1_delay(gpib_board_t *board, unsigned int nanosec)
+ {
+ struct agilent_82357a_priv *a_priv = board->private_data;
+- struct usb_device *usb_dev = interface_to_usbdev(a_priv->bus_interface);
++ struct usb_device *usb_dev;
+ struct agilent_82357a_register_pairlet write;
+ int retval;
+
++ if (!a_priv->bus_interface)
++ return -ENODEV;
++ usb_dev = interface_to_usbdev(a_priv->bus_interface);
+ write.address = FAST_TALKER_T1;
+ write.value = nanosec_to_fast_talker_bits(&nanosec);
+ retval = agilent_82357a_write_registers(a_priv, &write, 1);
+ if (retval)
+- dev_err(&usb_dev->dev, "%s: agilent_82357a_write_registers() returned error\n",
+- __func__);
++ dev_err(&usb_dev->dev, "write_registers() returned error\n");
+ return nanosec;
+ }
+
+@@ -1081,7 +1103,7 @@ static void agilent_82357a_interrupt_complete(struct urb *urb)
+ default: /* other error, resubmit */
+ retval = usb_submit_urb(a_priv->interrupt_urb, GFP_ATOMIC);
+ if (retval)
+- dev_err(&usb_dev->dev, "%s: failed to resubmit interrupt urb\n", __func__);
++ dev_err(&usb_dev->dev, "failed to resubmit interrupt urb\n");
+ return;
+ }
+
+@@ -1097,7 +1119,7 @@ static void agilent_82357a_interrupt_complete(struct urb *urb)
+
+ retval = usb_submit_urb(a_priv->interrupt_urb, GFP_ATOMIC);
+ if (retval)
+- dev_err(&usb_dev->dev, "%s: failed to resubmit interrupt urb\n", __func__);
++ dev_err(&usb_dev->dev, "failed to resubmit interrupt urb\n");
+ }
+
+ static int agilent_82357a_setup_urbs(gpib_board_t *board)
+@@ -1133,8 +1155,7 @@ static int agilent_82357a_setup_urbs(gpib_board_t *board)
+ if (retval) {
+ usb_free_urb(a_priv->interrupt_urb);
+ a_priv->interrupt_urb = NULL;
+- dev_err(&usb_dev->dev, "%s: failed to submit first interrupt urb, retval=%i\n",
+- __func__, retval);
++ dev_err(&usb_dev->dev, "failed to submit first interrupt urb, retval=%i\n", retval);
+ goto setup_exit;
+ }
+ mutex_unlock(&a_priv->interrupt_alloc_lock);
+@@ -1184,108 +1205,78 @@ static void agilent_82357a_free_private(gpib_board_t *board)
+ {
+ kfree(board->private_data);
+ board->private_data = NULL;
+-
+ }
+
++#define INIT_NUM_REG_WRITES 18
+ static int agilent_82357a_init(gpib_board_t *board)
+ {
+ struct agilent_82357a_priv *a_priv = board->private_data;
+ struct usb_device *usb_dev = interface_to_usbdev(a_priv->bus_interface);
+ struct agilent_82357a_register_pairlet hw_control;
+- struct agilent_82357a_register_pairlet writes[0x20];
++ struct agilent_82357a_register_pairlet writes[INIT_NUM_REG_WRITES];
+ int retval;
+- int i;
+ unsigned int nanosec;
+
+- i = 0;
+- writes[i].address = LED_CONTROL;
+- writes[i].value = FAIL_LED_ON;
+- ++i;
+- writes[i].address = RESET_TO_POWERUP;
+- writes[i].value = RESET_SPACEBALL;
+- ++i;
+- retval = agilent_82357a_write_registers(a_priv, writes, i);
++ writes[0].address = LED_CONTROL;
++ writes[0].value = FAIL_LED_ON;
++ writes[1].address = RESET_TO_POWERUP;
++ writes[1].value = RESET_SPACEBALL;
++ retval = agilent_82357a_write_registers(a_priv, writes, 2);
+ if (retval) {
+- dev_err(&usb_dev->dev, "%s: agilent_82357a_write_registers() returned error\n",
+- __func__);
++ dev_err(&usb_dev->dev, "write_registers() returned error\n");
+ return -EIO;
+ }
+ set_current_state(TASK_INTERRUPTIBLE);
+ if (schedule_timeout(usec_to_jiffies(2000)))
+ return -ERESTARTSYS;
+- i = 0;
+- writes[i].address = AUXCR;
+- writes[i].value = AUX_NBAF;
+- ++i;
+- writes[i].address = AUXCR;
+- writes[i].value = AUX_HLDE;
+- ++i;
+- writes[i].address = AUXCR;
+- writes[i].value = AUX_TON;
+- ++i;
+- writes[i].address = AUXCR;
+- writes[i].value = AUX_LON;
+- ++i;
+- writes[i].address = AUXCR;
+- writes[i].value = AUX_RSV2;
+- ++i;
+- writes[i].address = AUXCR;
+- writes[i].value = AUX_INVAL;
+- ++i;
+- writes[i].address = AUXCR;
+- writes[i].value = AUX_RPP;
+- ++i;
+- writes[i].address = AUXCR;
+- writes[i].value = AUX_STDL;
+- ++i;
+- writes[i].address = AUXCR;
+- writes[i].value = AUX_VSTDL;
+- ++i;
+- writes[i].address = FAST_TALKER_T1;
++ writes[0].address = AUXCR;
++ writes[0].value = AUX_NBAF;
++ writes[1].address = AUXCR;
++ writes[1].value = AUX_HLDE;
++ writes[2].address = AUXCR;
++ writes[2].value = AUX_TON;
++ writes[3].address = AUXCR;
++ writes[3].value = AUX_LON;
++ writes[4].address = AUXCR;
++ writes[4].value = AUX_RSV2;
++ writes[5].address = AUXCR;
++ writes[5].value = AUX_INVAL;
++ writes[6].address = AUXCR;
++ writes[6].value = AUX_RPP;
++ writes[7].address = AUXCR;
++ writes[7].value = AUX_STDL;
++ writes[8].address = AUXCR;
++ writes[8].value = AUX_VSTDL;
++ writes[9].address = FAST_TALKER_T1;
+ nanosec = board->t1_nano_sec;
+- writes[i].value = nanosec_to_fast_talker_bits(&nanosec);
++ writes[9].value = nanosec_to_fast_talker_bits(&nanosec);
+ board->t1_nano_sec = nanosec;
+- ++i;
+- writes[i].address = ADR;
+- writes[i].value = board->pad & ADDRESS_MASK;
+- ++i;
+- writes[i].address = PPR;
+- writes[i].value = 0;
+- ++i;
+- writes[i].address = SPMR;
+- writes[i].value = 0;
+- ++i;
+- writes[i].address = PROTOCOL_CONTROL;
+- writes[i].value = WRITE_COMPLETE_INTERRUPT_EN;
+- ++i;
+- writes[i].address = IMR0;
+- writes[i].value = HR_BOIE | HR_BIIE;
+- ++i;
+- writes[i].address = IMR1;
+- writes[i].value = HR_SRQIE;
+- ++i;
++ writes[10].address = ADR;
++ writes[10].value = board->pad & ADDRESS_MASK;
++ writes[11].address = PPR;
++ writes[11].value = 0;
++ writes[12].address = SPMR;
++ writes[12].value = 0;
++ writes[13].address = PROTOCOL_CONTROL;
++ writes[13].value = WRITE_COMPLETE_INTERRUPT_EN;
++ writes[14].address = IMR0;
++ writes[14].value = HR_BOIE | HR_BIIE;
++ writes[15].address = IMR1;
++ writes[15].value = HR_SRQIE;
+ // turn off reset state
+- writes[i].address = AUXCR;
+- writes[i].value = AUX_CHIP_RESET;
+- ++i;
+- writes[i].address = LED_CONTROL;
+- writes[i].value = FIRMWARE_LED_CONTROL;
+- ++i;
+- if (i > ARRAY_SIZE(writes)) {
+- dev_err(&usb_dev->dev, "%s: bug! writes[] overflow\n", __func__);
+- return -EFAULT;
+- }
+- retval = agilent_82357a_write_registers(a_priv, writes, i);
++ writes[16].address = AUXCR;
++ writes[16].value = AUX_CHIP_RESET;
++ writes[17].address = LED_CONTROL;
++ writes[17].value = FIRMWARE_LED_CONTROL;
++ retval = agilent_82357a_write_registers(a_priv, writes, INIT_NUM_REG_WRITES);
+ if (retval) {
+- dev_err(&usb_dev->dev, "%s: agilent_82357a_write_registers() returned error\n",
+- __func__);
++ dev_err(&usb_dev->dev, "write_registers() returned error\n");
+ return -EIO;
+ }
+ hw_control.address = HW_CONTROL;
+ retval = agilent_82357a_read_registers(a_priv, &hw_control, 1, 1);
+ if (retval) {
+- dev_err(&usb_dev->dev, "%s: agilent_82357a_read_registers() returned error\n",
+- __func__);
++ dev_err(&usb_dev->dev, "read_registers() returned error\n");
+ return -EIO;
+ }
+ a_priv->hw_control_bits = (hw_control.value & ~0x7) | NOT_TI_RESET | NOT_PARALLEL_POLL;
+@@ -1336,7 +1327,7 @@ static int agilent_82357a_attach(gpib_board_t *board, const gpib_board_config_t
+ }
+ if (i == MAX_NUM_82357A_INTERFACES) {
+ dev_err(board->gpib_dev,
+- "No Agilent 82357 gpib adapters found, have you loaded its firmware?\n");
++ "No supported adapters found, have you loaded its firmware?\n");
+ retval = -ENODEV;
+ goto attach_fail;
+ }
+@@ -1372,8 +1363,7 @@ static int agilent_82357a_attach(gpib_board_t *board, const gpib_board_config_t
+ goto attach_fail;
+ }
+
+- dev_info(&usb_dev->dev,
+- "bus %d dev num %d attached to gpib minor %d, agilent usb interface %i\n",
++ dev_info(&usb_dev->dev, "bus %d dev num %d attached to gpib%d, interface %i\n",
+ usb_dev->bus->busnum, usb_dev->devnum, board->minor, i);
+ mutex_unlock(&agilent_82357a_hotplug_lock);
+ return retval;
+@@ -1390,37 +1380,24 @@ static int agilent_82357a_go_idle(gpib_board_t *board)
+ struct usb_device *usb_dev = interface_to_usbdev(a_priv->bus_interface);
+ struct agilent_82357a_register_pairlet writes[0x20];
+ int retval;
+- int i;
+
+- i = 0;
+ // turn on tms9914 reset state
+- writes[i].address = AUXCR;
+- writes[i].value = AUX_CS | AUX_CHIP_RESET;
+- ++i;
++ writes[0].address = AUXCR;
++ writes[0].value = AUX_CS | AUX_CHIP_RESET;
+ a_priv->hw_control_bits &= ~NOT_TI_RESET;
+- writes[i].address = HW_CONTROL;
+- writes[i].value = a_priv->hw_control_bits;
+- ++i;
+- writes[i].address = PROTOCOL_CONTROL;
+- writes[i].value = 0;
+- ++i;
+- writes[i].address = IMR0;
+- writes[i].value = 0;
+- ++i;
+- writes[i].address = IMR1;
+- writes[i].value = 0;
+- ++i;
+- writes[i].address = LED_CONTROL;
+- writes[i].value = 0;
+- ++i;
+- if (i > ARRAY_SIZE(writes)) {
+- dev_err(&usb_dev->dev, "%s: bug! writes[] overflow\n", __func__);
+- return -EFAULT;
+- }
+- retval = agilent_82357a_write_registers(a_priv, writes, i);
++ writes[1].address = HW_CONTROL;
++ writes[1].value = a_priv->hw_control_bits;
++ writes[2].address = PROTOCOL_CONTROL;
++ writes[2].value = 0;
++ writes[3].address = IMR0;
++ writes[3].value = 0;
++ writes[4].address = IMR1;
++ writes[4].value = 0;
++ writes[5].address = LED_CONTROL;
++ writes[5].value = 0;
++ retval = agilent_82357a_write_registers(a_priv, writes, 6);
+ if (retval) {
+- dev_err(&usb_dev->dev, "%s: agilent_82357a_write_registers() returned error\n",
+- __func__);
++ dev_err(&usb_dev->dev, "write_registers() returned error\n");
+ return -EIO;
+ }
+ return 0;
+@@ -1445,7 +1422,6 @@ static void agilent_82357a_detach(gpib_board_t *board)
+ agilent_82357a_release_urbs(a_priv);
+ agilent_82357a_free_private(board);
+ }
+- dev_info(board->gpib_dev, "%s: detached\n", __func__);
+ mutex_unlock(&agilent_82357a_hotplug_lock);
+ }
+
+@@ -1510,8 +1486,7 @@ static int agilent_82357a_driver_probe(struct usb_interface *interface,
+ if (i == MAX_NUM_82357A_INTERFACES) {
+ usb_put_dev(usb_dev);
+ mutex_unlock(&agilent_82357a_hotplug_lock);
+- dev_err(&usb_dev->dev, "%s: out of space in agilent_82357a_driver_interfaces[]\n",
+- __func__);
++ dev_err(&usb_dev->dev, "out of space in agilent_82357a_driver_interfaces[]\n");
+ return -1;
+ }
+ path = kmalloc(path_length, GFP_KERNEL);
+@@ -1552,13 +1527,12 @@ static void agilent_82357a_driver_disconnect(struct usb_interface *interface)
+ mutex_unlock(&a_priv->control_alloc_lock);
+ }
+ }
+- dev_dbg(&usb_dev->dev, "nulled agilent_82357a_driver_interfaces[%i]\n", i);
+ agilent_82357a_driver_interfaces[i] = NULL;
+ break;
+ }
+ }
+ if (i == MAX_NUM_82357A_INTERFACES)
+- dev_err(&usb_dev->dev, "unable to find interface in agilent_82357a_driver_interfaces[]? bug?\n");
++ dev_err(&usb_dev->dev, "unable to find interface - bug?\n");
+ usb_put_dev(usb_dev);
+
+ mutex_unlock(&agilent_82357a_hotplug_lock);
+@@ -1583,18 +1557,18 @@ static int agilent_82357a_driver_suspend(struct usb_interface *interface, pm_mes
+ agilent_82357a_abort(a_priv, 0);
+ retval = agilent_82357a_go_idle(board);
+ if (retval) {
+- dev_err(&usb_dev->dev, "%s: failed to go idle, retval=%i\n",
+- __func__, retval);
++ dev_err(&usb_dev->dev, "failed to go idle, retval=%i\n",
++ retval);
+ mutex_unlock(&agilent_82357a_hotplug_lock);
+ return retval;
+ }
+ mutex_lock(&a_priv->interrupt_alloc_lock);
+ agilent_82357a_cleanup_urbs(a_priv);
+ mutex_unlock(&a_priv->interrupt_alloc_lock);
+- dev_info(&usb_dev->dev,
+- "bus %d dev num %d gpib minor %d, agilent usb interface %i suspended\n",
+- usb_dev->bus->busnum, usb_dev->devnum,
+- board->minor, i);
++ dev_dbg(&usb_dev->dev,
++ "bus %d dev num %d gpib %d, interface %i suspended\n",
++ usb_dev->bus->busnum, usb_dev->devnum,
++ board->minor, i);
+ }
+ }
+ break;
+@@ -1631,8 +1605,8 @@ static int agilent_82357a_driver_resume(struct usb_interface *interface)
+ mutex_lock(&a_priv->interrupt_alloc_lock);
+ retval = usb_submit_urb(a_priv->interrupt_urb, GFP_KERNEL);
+ if (retval) {
+- dev_err(&usb_dev->dev, "%s: failed to resubmit interrupt urb, retval=%i\n",
+- __func__, retval);
++ dev_err(&usb_dev->dev, "failed to resubmit interrupt urb in resume, retval=%i\n",
++ retval);
+ mutex_unlock(&a_priv->interrupt_alloc_lock);
+ mutex_unlock(&agilent_82357a_hotplug_lock);
+ return retval;
+@@ -1655,9 +1629,9 @@ static int agilent_82357a_driver_resume(struct usb_interface *interface)
+ // assert/unassert REN
+ agilent_82357a_remote_enable(board, a_priv->ren_state);
+
+- dev_info(&usb_dev->dev,
+- "bus %d dev num %d gpib minor %d, agilent usb interface %i resumed\n",
+- usb_dev->bus->busnum, usb_dev->devnum, board->minor, i);
++ dev_dbg(&usb_dev->dev,
++ "bus %d dev num %d gpib%d, interface %i resumed\n",
++ usb_dev->bus->busnum, usb_dev->devnum, board->minor, i);
+ }
+
+ resume_exit:
+@@ -1667,7 +1641,7 @@ static int agilent_82357a_driver_resume(struct usb_interface *interface)
+ }
+
+ static struct usb_driver agilent_82357a_bus_driver = {
+- .name = "agilent_82357a_gpib",
++ .name = DRV_NAME,
+ .probe = agilent_82357a_driver_probe,
+ .disconnect = agilent_82357a_driver_disconnect,
+ .suspend = agilent_82357a_driver_suspend,
+@@ -1680,19 +1654,18 @@ static int __init agilent_82357a_init_module(void)
+ int i;
+ int ret;
+
+- pr_info("agilent_82357a_gpib driver loading");
+ for (i = 0; i < MAX_NUM_82357A_INTERFACES; ++i)
+ agilent_82357a_driver_interfaces[i] = NULL;
+
+ ret = usb_register(&agilent_82357a_bus_driver);
+ if (ret) {
+- pr_err("agilent_82357a: usb_register failed: error = %d\n", ret);
++ pr_err("usb_register failed: error = %d\n", ret);
+ return ret;
+ }
+
+ ret = gpib_register_driver(&agilent_82357a_gpib_interface, THIS_MODULE);
+ if (ret) {
+- pr_err("agilent_82357a: gpib_register_driver failed: error = %d\n", ret);
++ pr_err("gpib_register_driver failed: error = %d\n", ret);
+ usb_deregister(&agilent_82357a_bus_driver);
+ return ret;
+ }
+@@ -1702,7 +1675,6 @@ static int __init agilent_82357a_init_module(void)
+
+ static void __exit agilent_82357a_exit_module(void)
+ {
+- pr_info("agilent_82357a_gpib driver unloading");
+ gpib_unregister_driver(&agilent_82357a_gpib_interface);
+ usb_deregister(&agilent_82357a_bus_driver);
+ }
+diff --git a/drivers/staging/gpib/cb7210/cb7210.c b/drivers/staging/gpib/cb7210/cb7210.c
+index 4d22f647a453fb..ab93061263bfef 100644
+--- a/drivers/staging/gpib/cb7210/cb7210.c
++++ b/drivers/staging/gpib/cb7210/cb7210.c
+@@ -1342,8 +1342,8 @@ static struct pcmcia_device_id cb_pcmcia_ids[] = {
+ MODULE_DEVICE_TABLE(pcmcia, cb_pcmcia_ids);
+
+ static struct pcmcia_driver cb_gpib_cs_driver = {
++ .name = "cb_gpib_cs",
+ .owner = THIS_MODULE,
+- .drv = { .name = "cb_gpib_cs", },
+ .id_table = cb_pcmcia_ids,
+ .probe = cb_gpib_probe,
+ .remove = cb_gpib_remove,
+diff --git a/drivers/staging/gpib/hp_82341/hp_82341.c b/drivers/staging/gpib/hp_82341/hp_82341.c
+index 0ddae295912faf..589c4fee1d5626 100644
+--- a/drivers/staging/gpib/hp_82341/hp_82341.c
++++ b/drivers/staging/gpib/hp_82341/hp_82341.c
+@@ -718,7 +718,7 @@ int hp_82341_attach(gpib_board_t *board, const gpib_board_config_t *config)
+ for (i = 0; i < hp_82341_num_io_regions; ++i) {
+ start_addr = iobase + i * hp_priv->io_region_offset;
+ if (!request_region(start_addr, hp_82341_region_iosize, "hp_82341")) {
+- pr_err("hp_82341: failed to allocate io ports 0x%lx-0x%lx\n",
++ pr_err("hp_82341: failed to allocate io ports 0x%x-0x%x\n",
+ start_addr,
+ start_addr + hp_82341_region_iosize - 1);
+ return -EIO;
+diff --git a/drivers/staging/gpib/ni_usb/ni_usb_gpib.c b/drivers/staging/gpib/ni_usb/ni_usb_gpib.c
+index d0656dc520f506..1b976a28a7fe48 100644
+--- a/drivers/staging/gpib/ni_usb/ni_usb_gpib.c
++++ b/drivers/staging/gpib/ni_usb/ni_usb_gpib.c
+@@ -5,6 +5,10 @@
+ * copyright : (C) 2004 by Frank Mori Hess
+ ***************************************************************************/
+
++#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
++#define dev_fmt pr_fmt
++#define DRV_NAME KBUILD_MODNAME
++
+ #include <linux/kernel.h>
+ #include <linux/module.h>
+ #include <linux/slab.h>
+@@ -75,7 +79,7 @@ static unsigned short ni_usb_timeout_code(unsigned int usec)
+ */
+ else if (usec <= 1000000000)
+ return 0x02;
+- pr_err("%s: bug? usec is greater than 1e9\n", __func__);
++ pr_err("bug? usec is greater than 1e9\n");
+ return 0xf0;
+ }
+
+@@ -83,8 +87,6 @@ static void ni_usb_bulk_complete(struct urb *urb)
+ {
+ struct ni_usb_urb_ctx *context = urb->context;
+
+-// printk("debug: %s: status=0x%x, error_count=%i, actual_length=%i\n", __func__,
+-// urb->status, urb->error_count, urb->actual_length);
+ complete(&context->complete);
+ }
+
+@@ -137,8 +139,8 @@ static int ni_usb_nonblocking_send_bulk_msg(struct ni_usb_priv *ni_priv, void *d
+ del_timer_sync(&ni_priv->bulk_timer);
+ usb_free_urb(ni_priv->bulk_urb);
+ ni_priv->bulk_urb = NULL;
+- dev_err(&usb_dev->dev, "%s: failed to submit bulk out urb, retval=%i\n",
+- __func__, retval);
++ dev_err(&usb_dev->dev, "failed to submit bulk out urb, retval=%i\n",
++ retval);
+ mutex_unlock(&ni_priv->bulk_transfer_lock);
+ return retval;
+ }
+@@ -146,7 +148,7 @@ static int ni_usb_nonblocking_send_bulk_msg(struct ni_usb_priv *ni_priv, void *d
+ wait_for_completion(&context->complete); // wait for ni_usb_bulk_complete
+ if (context->timed_out) {
+ usb_kill_urb(ni_priv->bulk_urb);
+- dev_err(&usb_dev->dev, "%s: killed urb due to timeout\n", __func__);
++ dev_err(&usb_dev->dev, "killed urb due to timeout\n");
+ retval = -ETIMEDOUT;
+ } else {
+ retval = ni_priv->bulk_urb->status;
+@@ -218,14 +220,12 @@ static int ni_usb_nonblocking_receive_bulk_msg(struct ni_usb_priv *ni_priv,
+ if (timeout_msecs)
+ mod_timer(&ni_priv->bulk_timer, jiffies + msecs_to_jiffies(timeout_msecs));
+
+- //printk("%s: submitting urb\n", __func__);
+ retval = usb_submit_urb(ni_priv->bulk_urb, GFP_KERNEL);
+ if (retval) {
+ del_timer_sync(&ni_priv->bulk_timer);
+ usb_free_urb(ni_priv->bulk_urb);
+ ni_priv->bulk_urb = NULL;
+- dev_err(&usb_dev->dev, "%s: failed to submit bulk out urb, retval=%i\n",
+- __func__, retval);
++ dev_err(&usb_dev->dev, "failed to submit bulk in urb, retval=%i\n", retval);
+ mutex_unlock(&ni_priv->bulk_transfer_lock);
+ return retval;
+ }
+@@ -250,7 +250,7 @@ static int ni_usb_nonblocking_receive_bulk_msg(struct ni_usb_priv *ni_priv,
+ }
+ if (context->timed_out) {
+ usb_kill_urb(ni_priv->bulk_urb);
+- dev_err(&usb_dev->dev, "%s: killed urb due to timeout\n", __func__);
++ dev_err(&usb_dev->dev, "killed urb due to timeout\n");
+ retval = -ETIMEDOUT;
+ } else {
+ if (ni_priv->bulk_urb->status)
+@@ -330,14 +330,14 @@ static void ni_usb_soft_update_status(gpib_board_t *board, unsigned int ni_usb_i
+ ni_priv->monitored_ibsta_bits &= ~ni_usb_ibsta;
+ need_monitoring_bits &= ~ni_priv->monitored_ibsta_bits; /* mm - monitored set */
+ spin_unlock_irqrestore(&board->spinlock, flags);
+- dev_dbg(&usb_dev->dev, "%s: need_monitoring_bits=0x%x\n", __func__, need_monitoring_bits);
++ dev_dbg(&usb_dev->dev, "need_monitoring_bits=0x%x\n", need_monitoring_bits);
+
+ if (need_monitoring_bits & ~ni_usb_ibsta)
+ ni_usb_set_interrupt_monitor(board, ni_usb_ibsta_monitor_mask);
+ else if (need_monitoring_bits & ni_usb_ibsta)
+ wake_up_interruptible(&board->wait);
+
+- dev_dbg(&usb_dev->dev, "%s: ni_usb_ibsta=0x%x\n", __func__, ni_usb_ibsta);
++ dev_dbg(&usb_dev->dev, "ibsta=0x%x\n", ni_usb_ibsta);
+ }
+
+ static int ni_usb_parse_status_block(const u8 *buffer, struct ni_usb_status_block *status)
+@@ -371,7 +371,7 @@ static int ni_usb_parse_register_read_block(const u8 *raw_data, unsigned int *re
+ int k;
+
+ if (raw_data[i++] != NIUSB_REGISTER_READ_DATA_START_ID) {
+- pr_err("%s: parse error: wrong start id\n", __func__);
++ pr_err("parse error: wrong start id\n");
+ unexpected = 1;
+ }
+ for (k = 0; k < results_per_chunk && j < num_results; ++k)
+@@ -380,18 +380,18 @@ static int ni_usb_parse_register_read_block(const u8 *raw_data, unsigned int *re
+ while (i % 4)
+ i++;
+ if (raw_data[i++] != NIUSB_REGISTER_READ_DATA_END_ID) {
+- pr_err("%s: parse error: wrong end id\n", __func__);
++ pr_err("parse error: wrong end id\n");
+ unexpected = 1;
+ }
+ if (raw_data[i++] % results_per_chunk != num_results % results_per_chunk) {
+- pr_err("%s: parse error: wrong count=%i for NIUSB_REGISTER_READ_DATA_END\n",
+- __func__, (int)raw_data[i - 1]);
++ pr_err("parse error: wrong count=%i for NIUSB_REGISTER_READ_DATA_END\n",
++ (int)raw_data[i - 1]);
+ unexpected = 1;
+ }
+ while (i % 4) {
+ if (raw_data[i++] != 0) {
+- pr_err("%s: unexpected data: raw_data[%i]=0x%x, expected 0\n",
+- __func__, i - 1, (int)raw_data[i - 1]);
++ pr_err("unexpected data: raw_data[%i]=0x%x, expected 0\n",
++ i - 1, (int)raw_data[i - 1]);
+ unexpected = 1;
+ }
+ }
+@@ -408,9 +408,8 @@ static int ni_usb_parse_termination_block(const u8 *buffer)
+ buffer[i++] != 0x0 ||
+ buffer[i++] != 0x0 ||
+ buffer[i++] != 0x0) {
+- pr_err("%s: received unexpected termination block\n", __func__);
+- pr_err(" expected: 0x%x 0x%x 0x%x 0x%x\n",
+- NIUSB_TERM_ID, 0x0, 0x0, 0x0);
++ pr_err("received unexpected termination block\n");
++ pr_err(" expected: 0x%x 0x%x 0x%x 0x%x\n", NIUSB_TERM_ID, 0x0, 0x0, 0x0);
+ pr_err(" received: 0x%x 0x%x 0x%x 0x%x\n",
+ buffer[i - 4], buffer[i - 3], buffer[i - 2], buffer[i - 1]);
+ }
+@@ -438,12 +437,12 @@ static int parse_board_ibrd_readback(const u8 *raw_data, struct ni_usb_status_bl
+ } else if (raw_data[i] == NIUSB_IBRD_EXTENDED_DATA_ID) {
+ data_block_length = ibrd_extended_data_block_length;
+ if (raw_data[++i] != 0) {
+- pr_err("%s: unexpected data: raw_data[%i]=0x%x, expected 0\n",
+- __func__, i, (int)raw_data[i]);
++ pr_err("unexpected data: raw_data[%i]=0x%x, expected 0\n",
++ i, (int)raw_data[i]);
+ unexpected = 1;
+ }
+ } else {
+- pr_err("%s: logic bug!\n", __func__);
++ pr_err("Unexpected NIUSB_IBRD ID\n");
+ return -EINVAL;
+ }
+ ++i;
+@@ -457,7 +456,7 @@ static int parse_board_ibrd_readback(const u8 *raw_data, struct ni_usb_status_bl
+ }
+ i += ni_usb_parse_status_block(&raw_data[i], status);
+ if (status->id != NIUSB_IBRD_STATUS_ID) {
+- pr_err("%s: bug: status->id=%i, != ibrd_status_id\n", __func__, status->id);
++ pr_err("bug: status->id=%i, != ibrd_status_id\n", status->id);
+ return -EIO;
+ }
+ adr1_bits = raw_data[i++];
+@@ -468,29 +467,28 @@ static int parse_board_ibrd_readback(const u8 *raw_data, struct ni_usb_status_bl
+ *actual_bytes_read = 0;
+ }
+ if (*actual_bytes_read > j)
+- pr_err("%s: bug: discarded data. actual_bytes_read=%i, j=%i\n",
+- __func__, *actual_bytes_read, j);
++ pr_err("bug: discarded data. actual_bytes_read=%i, j=%i\n", *actual_bytes_read, j);
+ for (k = 0; k < 2; k++)
+ if (raw_data[i++] != 0) {
+- pr_err("%s: unexpected data: raw_data[%i]=0x%x, expected 0\n",
+- __func__, i - 1, (int)raw_data[i - 1]);
++ pr_err("unexpected data: raw_data[%i]=0x%x, expected 0\n",
++ i - 1, (int)raw_data[i - 1]);
+ unexpected = 1;
+ }
+ i += ni_usb_parse_status_block(&raw_data[i], &register_write_status);
+ if (register_write_status.id != NIUSB_REG_WRITE_ID) {
+- pr_err("%s: unexpected data: register write status id=0x%x, expected 0x%x\n",
+- __func__, register_write_status.id, NIUSB_REG_WRITE_ID);
++ pr_err("unexpected data: register write status id=0x%x, expected 0x%x\n",
++ register_write_status.id, NIUSB_REG_WRITE_ID);
+ unexpected = 1;
+ }
+ if (raw_data[i++] != 2) {
+- pr_err("%s: unexpected data: register write count=%i, expected 2\n",
+- __func__, (int)raw_data[i - 1]);
++ pr_err("unexpected data: register write count=%i, expected 2\n",
++ (int)raw_data[i - 1]);
+ unexpected = 1;
+ }
+ for (k = 0; k < 3; k++)
+ if (raw_data[i++] != 0) {
+- pr_err("%s: unexpected data: raw_data[%i]=0x%x, expected 0\n",
+- __func__, i - 1, (int)raw_data[i - 1]);
++ pr_err("unexpected data: raw_data[%i]=0x%x, expected 0\n",
++ i - 1, (int)raw_data[i - 1]);
+ unexpected = 1;
+ }
+ i += ni_usb_parse_termination_block(&raw_data[i]);
+@@ -530,18 +528,14 @@ static int ni_usb_write_registers(struct ni_usb_priv *ni_priv,
+
+ out_data_length = num_writes * bytes_per_write + 0x10;
+ out_data = kmalloc(out_data_length, GFP_KERNEL);
+- if (!out_data) {
+- dev_err(&usb_dev->dev, "%s: kmalloc failed\n", __func__);
++ if (!out_data)
+ return -ENOMEM;
+- }
+ i += ni_usb_bulk_register_write_header(&out_data[i], num_writes);
+ for (j = 0; j < num_writes; j++)
+ i += ni_usb_bulk_register_write(&out_data[i], writes[j]);
+ while (i % 4)
+ out_data[i++] = 0x00;
+ i += ni_usb_bulk_termination(&out_data[i]);
+- if (i > out_data_length)
+- dev_err(&usb_dev->dev, "%s: bug! buffer overrun\n", __func__);
+
+ mutex_lock(&ni_priv->addressed_transfer_lock);
+
+@@ -549,22 +543,21 @@ static int ni_usb_write_registers(struct ni_usb_priv *ni_priv,
+ kfree(out_data);
+ if (retval) {
+ mutex_unlock(&ni_priv->addressed_transfer_lock);
+- dev_err(&usb_dev->dev, "%s: ni_usb_send_bulk_msg returned %i, bytes_written=%i, i=%i\n",
+- __func__, retval, bytes_written, i);
++ dev_err(&usb_dev->dev, "send_bulk_msg returned %i, bytes_written=%i, i=%i\n",
++ retval, bytes_written, i);
+ return retval;
+ }
+
+ in_data = kmalloc(in_data_length, GFP_KERNEL);
+ if (!in_data) {
+ mutex_unlock(&ni_priv->addressed_transfer_lock);
+- dev_err(&usb_dev->dev, "%s: kmalloc failed\n", __func__);
+ return -ENOMEM;
+ }
+ retval = ni_usb_receive_bulk_msg(ni_priv, in_data, in_data_length, &bytes_read, 1000, 0);
+ if (retval || bytes_read != 16) {
+ mutex_unlock(&ni_priv->addressed_transfer_lock);
+- dev_err(&usb_dev->dev, "%s: ni_usb_receive_bulk_msg returned %i, bytes_read=%i\n",
+- __func__, retval, bytes_read);
++ dev_err(&usb_dev->dev, "receive_bulk_msg returned %i, bytes_read=%i\n",
++ retval, bytes_read);
+ ni_usb_dump_raw_block(in_data, bytes_read);
+ kfree(in_data);
+ return retval;
+@@ -576,18 +569,16 @@ static int ni_usb_write_registers(struct ni_usb_priv *ni_priv,
+ //FIXME parse extra 09 status bits and termination
+ kfree(in_data);
+ if (status.id != NIUSB_REG_WRITE_ID) {
+- dev_err(&usb_dev->dev, "%s: parse error, id=0x%x != NIUSB_REG_WRITE_ID\n",
+- __func__, status.id);
++ dev_err(&usb_dev->dev, "parse error, id=0x%x != NIUSB_REG_WRITE_ID\n", status.id);
+ return -EIO;
+ }
+ if (status.error_code) {
+- dev_err(&usb_dev->dev, "%s: nonzero error code 0x%x\n",
+- __func__, status.error_code);
++ dev_err(&usb_dev->dev, "nonzero error code 0x%x\n", status.error_code);
+ return -EIO;
+ }
+ if (reg_writes_completed != num_writes) {
+- dev_err(&usb_dev->dev, "%s: reg_writes_completed=%i, num_writes=%i\n",
+- __func__, reg_writes_completed, num_writes);
++ dev_err(&usb_dev->dev, "reg_writes_completed=%i, num_writes=%i\n",
++ reg_writes_completed, num_writes);
+ return -EIO;
+ }
+ if (ibsta)
+@@ -601,7 +592,7 @@ static int ni_usb_read(gpib_board_t *board, uint8_t *buffer, size_t length,
+ {
+ int retval, parse_retval;
+ struct ni_usb_priv *ni_priv = board->private_data;
+- struct usb_device *usb_dev = interface_to_usbdev(ni_priv->bus_interface);
++ struct usb_device *usb_dev;
+ u8 *out_data, *in_data;
+ static const int out_data_length = 0x20;
+ int in_data_length;
+@@ -614,10 +605,11 @@ static int ni_usb_read(gpib_board_t *board, uint8_t *buffer, size_t length,
+ struct ni_usb_register reg;
+
+ *bytes_read = 0;
+- if (length > max_read_length) {
+- length = max_read_length;
+- dev_err(&usb_dev->dev, "%s: read length too long\n", __func__);
+- }
++ if (!ni_priv->bus_interface)
++ return -ENODEV;
++ if (length > max_read_length)
++ return -EINVAL;
++ usb_dev = interface_to_usbdev(ni_priv->bus_interface);
+ out_data = kmalloc(out_data_length, GFP_KERNEL);
+ if (!out_data)
+ return -ENOMEM;
+@@ -649,8 +641,8 @@ static int ni_usb_read(gpib_board_t *board, uint8_t *buffer, size_t length,
+ if (retval || usb_bytes_written != i) {
+ if (retval == 0)
+ retval = -EIO;
+- dev_err(&usb_dev->dev, "%s: ni_usb_send_bulk_msg returned %i, usb_bytes_written=%i, i=%i\n",
+- __func__, retval, usb_bytes_written, i);
++ dev_err(&usb_dev->dev, "send_bulk_msg returned %i, usb_bytes_written=%i, i=%i\n",
++ retval, usb_bytes_written, i);
+ mutex_unlock(&ni_priv->addressed_transfer_lock);
+ return retval;
+ }
+@@ -668,8 +660,8 @@ static int ni_usb_read(gpib_board_t *board, uint8_t *buffer, size_t length,
+
+ if (retval == -ERESTARTSYS) {
+ } else if (retval) {
+- dev_err(&usb_dev->dev, "%s: ni_usb_receive_bulk_msg returned %i, usb_bytes_read=%i\n",
+- __func__, retval, usb_bytes_read);
++ dev_err(&usb_dev->dev, "receive_bulk_msg returned %i, usb_bytes_read=%i\n",
++ retval, usb_bytes_read);
+ kfree(in_data);
+ return retval;
+ }
+@@ -677,14 +669,14 @@ static int ni_usb_read(gpib_board_t *board, uint8_t *buffer, size_t length,
+ if (parse_retval != usb_bytes_read) {
+ if (parse_retval >= 0)
+ parse_retval = -EIO;
+- dev_err(&usb_dev->dev, "%s: retval=%i usb_bytes_read=%i\n",
+- __func__, parse_retval, usb_bytes_read);
++ dev_err(&usb_dev->dev, "retval=%i usb_bytes_read=%i\n",
++ parse_retval, usb_bytes_read);
+ kfree(in_data);
+ return parse_retval;
+ }
+ if (actual_length != length - status.count) {
+- dev_err(&usb_dev->dev, "%s: actual_length=%i expected=%li\n",
+- __func__, actual_length, (long)(length - status.count));
++ dev_err(&usb_dev->dev, "actual_length=%i expected=%li\n",
++ actual_length, (long)(length - status.count));
+ ni_usb_dump_raw_block(in_data, usb_bytes_read);
+ }
+ kfree(in_data);
+@@ -699,7 +691,7 @@ static int ni_usb_read(gpib_board_t *board, uint8_t *buffer, size_t length,
+ break;
+ case NIUSB_ATN_STATE_ERROR:
+ retval = -EIO;
+- dev_err(&usb_dev->dev, "%s: read when ATN set\n", __func__);
++ dev_err(&usb_dev->dev, "read when ATN set\n");
+ break;
+ case NIUSB_ADDRESSING_ERROR:
+ retval = -EIO;
+@@ -708,12 +700,11 @@ static int ni_usb_read(gpib_board_t *board, uint8_t *buffer, size_t length,
+ retval = -ETIMEDOUT;
+ break;
+ case NIUSB_EOSMODE_ERROR:
+- dev_err(&usb_dev->dev, "%s: driver bug, we should have been able to avoid NIUSB_EOSMODE_ERROR.\n",
+- __func__);
++ dev_err(&usb_dev->dev, "driver bug, we should have been able to avoid NIUSB_EOSMODE_ERROR.\n");
+ retval = -EINVAL;
+ break;
+ default:
+- dev_err(&usb_dev->dev, "%s: unknown error code=%i\n", __func__, status.error_code);
++ dev_err(&usb_dev->dev, "unknown error code=%i\n", status.error_code);
+ retval = -EIO;
+ break;
+ }
+@@ -731,7 +722,7 @@ static int ni_usb_write(gpib_board_t *board, uint8_t *buffer, size_t length,
+ {
+ int retval;
+ struct ni_usb_priv *ni_priv = board->private_data;
+- struct usb_device *usb_dev = interface_to_usbdev(ni_priv->bus_interface);
++ struct usb_device *usb_dev;
+ u8 *out_data, *in_data;
+ int out_data_length;
+ static const int in_data_length = 0x10;
+@@ -741,12 +732,11 @@ static int ni_usb_write(gpib_board_t *board, uint8_t *buffer, size_t length,
+ struct ni_usb_status_block status;
+ static const int max_write_length = 0xffff;
+
+- *bytes_written = 0;
+- if (length > max_write_length) {
+- length = max_write_length;
+- send_eoi = 0;
+- dev_err(&usb_dev->dev, "%s: write length too long\n", __func__);
+- }
++ if (!ni_priv->bus_interface)
++ return -ENODEV;
++ if (length > max_write_length)
++ return -EINVAL;
++ usb_dev = interface_to_usbdev(ni_priv->bus_interface);
+ out_data_length = length + 0x10;
+ out_data = kmalloc(out_data_length, GFP_KERNEL);
+ if (!out_data)
+@@ -777,8 +767,8 @@ static int ni_usb_write(gpib_board_t *board, uint8_t *buffer, size_t length,
+ kfree(out_data);
+ if (retval || usb_bytes_written != i) {
+ mutex_unlock(&ni_priv->addressed_transfer_lock);
+- dev_err(&usb_dev->dev, "%s: ni_usb_send_bulk_msg returned %i, usb_bytes_written=%i, i=%i\n",
+- __func__, retval, usb_bytes_written, i);
++ dev_err(&usb_dev->dev, "send_bulk_msg returned %i, usb_bytes_written=%i, i=%i\n",
++ retval, usb_bytes_written, i);
+ return retval;
+ }
+
+@@ -793,8 +783,8 @@ static int ni_usb_write(gpib_board_t *board, uint8_t *buffer, size_t length,
+ mutex_unlock(&ni_priv->addressed_transfer_lock);
+
+ if ((retval && retval != -ERESTARTSYS) || usb_bytes_read != 12) {
+- dev_err(&usb_dev->dev, "%s: ni_usb_receive_bulk_msg returned %i, usb_bytes_read=%i\n",
+- __func__, retval, usb_bytes_read);
++ dev_err(&usb_dev->dev, "receive_bulk_msg returned %i, usb_bytes_read=%i\n",
++ retval, usb_bytes_read);
+ kfree(in_data);
+ return retval;
+ }
+@@ -810,8 +800,8 @@ static int ni_usb_write(gpib_board_t *board, uint8_t *buffer, size_t length,
+ */
+ break;
+ case NIUSB_ADDRESSING_ERROR:
+- dev_err(&usb_dev->dev, "%s: Addressing error retval %d error code=%i\n",
+- __func__, retval, status.error_code);
++ dev_err(&usb_dev->dev, "Addressing error retval %d error code=%i\n",
++ retval, status.error_code);
+ retval = -ENXIO;
+ break;
+ case NIUSB_NO_LISTENER_ERROR:
+@@ -821,8 +811,7 @@ static int ni_usb_write(gpib_board_t *board, uint8_t *buffer, size_t length,
+ retval = -ETIMEDOUT;
+ break;
+ default:
+- dev_err(&usb_dev->dev, "%s: unknown error code=%i\n",
+- __func__, status.error_code);
++ dev_err(&usb_dev->dev, "unknown error code=%i\n", status.error_code);
+ retval = -EPIPE;
+ break;
+ }
+@@ -836,7 +825,7 @@ static int ni_usb_command_chunk(gpib_board_t *board, uint8_t *buffer, size_t len
+ {
+ int retval;
+ struct ni_usb_priv *ni_priv = board->private_data;
+- struct usb_device *usb_dev = interface_to_usbdev(ni_priv->bus_interface);
++ struct usb_device *usb_dev;
+ u8 *out_data, *in_data;
+ int out_data_length;
+ static const int in_data_length = 0x10;
+@@ -848,8 +837,11 @@ static int ni_usb_command_chunk(gpib_board_t *board, uint8_t *buffer, size_t len
+ static const int max_command_length = 0x10;
+
+ *command_bytes_written = 0;
++ if (!ni_priv->bus_interface)
++ return -ENODEV;
+ if (length > max_command_length)
+ length = max_command_length;
++ usb_dev = interface_to_usbdev(ni_priv->bus_interface);
+ out_data_length = length + 0x10;
+ out_data = kmalloc(out_data_length, GFP_KERNEL);
+ if (!out_data)
+@@ -873,8 +865,8 @@ static int ni_usb_command_chunk(gpib_board_t *board, uint8_t *buffer, size_t len
+ kfree(out_data);
+ if (retval || bytes_written != i) {
+ mutex_unlock(&ni_priv->addressed_transfer_lock);
+- dev_err(&usb_dev->dev, "%s: ni_usb_send_bulk_msg returned %i, bytes_written=%i, i=%i\n",
+- __func__, retval, bytes_written, i);
++ dev_err(&usb_dev->dev, "send_bulk_msg returned %i, bytes_written=%i, i=%i\n",
++ retval, bytes_written, i);
+ return retval;
+ }
+
+@@ -890,8 +882,8 @@ static int ni_usb_command_chunk(gpib_board_t *board, uint8_t *buffer, size_t len
+ mutex_unlock(&ni_priv->addressed_transfer_lock);
+
+ if ((retval && retval != -ERESTARTSYS) || bytes_read != 12) {
+- dev_err(&usb_dev->dev, "%s: ni_usb_receive_bulk_msg returned %i, bytes_read=%i\n",
+- __func__, retval, bytes_read);
++ dev_err(&usb_dev->dev, "receive_bulk_msg returned %i, bytes_read=%i\n",
++ retval, bytes_read);
+ kfree(in_data);
+ return retval;
+ }
+@@ -909,12 +901,12 @@ static int ni_usb_command_chunk(gpib_board_t *board, uint8_t *buffer, size_t len
+ case NIUSB_NO_BUS_ERROR:
+ return -ENOTCONN;
+ case NIUSB_EOSMODE_ERROR:
+- dev_err(&usb_dev->dev, "%s: got eosmode error. Driver bug?\n", __func__);
++ dev_err(&usb_dev->dev, "got eosmode error. Driver bug?\n");
+ return -EIO;
+ case NIUSB_TIMEOUT_ERROR:
+ return -ETIMEDOUT;
+ default:
+- dev_err(&usb_dev->dev, "%s: unknown error code=%i\n", __func__, status.error_code);
++ dev_err(&usb_dev->dev, "unknown error code=%i\n", status.error_code);
+ return -EIO;
+ }
+ ni_usb_soft_update_status(board, status.ibsta, 0);
+@@ -942,7 +934,7 @@ static int ni_usb_take_control(gpib_board_t *board, int synchronous)
+ {
+ int retval;
+ struct ni_usb_priv *ni_priv = board->private_data;
+- struct usb_device *usb_dev = interface_to_usbdev(ni_priv->bus_interface);
++ struct usb_device *usb_dev;
+ u8 *out_data, *in_data;
+ static const int out_data_length = 0x10;
+ static const int in_data_length = 0x10;
+@@ -950,6 +942,9 @@ static int ni_usb_take_control(gpib_board_t *board, int synchronous)
+ int i = 0;
+ struct ni_usb_status_block status;
+
++ if (!ni_priv->bus_interface)
++ return -ENODEV;
++ usb_dev = interface_to_usbdev(ni_priv->bus_interface);
+ out_data = kmalloc(out_data_length, GFP_KERNEL);
+ if (!out_data)
+ return -ENOMEM;
+@@ -968,15 +963,14 @@ static int ni_usb_take_control(gpib_board_t *board, int synchronous)
+ kfree(out_data);
+ if (retval || bytes_written != i) {
+ mutex_unlock(&ni_priv->addressed_transfer_lock);
+- dev_err(&usb_dev->dev, "%s: ni_usb_send_bulk_msg returned %i, bytes_written=%i, i=%i\n",
+- __func__, retval, bytes_written, i);
++ dev_err(&usb_dev->dev, "send_bulk_msg returned %i, bytes_written=%i, i=%i\n",
++ retval, bytes_written, i);
+ return retval;
+ }
+
+ in_data = kmalloc(in_data_length, GFP_KERNEL);
+ if (!in_data) {
+ mutex_unlock(&ni_priv->addressed_transfer_lock);
+- dev_err(&usb_dev->dev, "%s: kmalloc failed\n", __func__);
+ return -ENOMEM;
+ }
+ retval = ni_usb_receive_bulk_msg(ni_priv, in_data, in_data_length, &bytes_read, 1000, 1);
+@@ -986,8 +980,8 @@ static int ni_usb_take_control(gpib_board_t *board, int synchronous)
+ if ((retval && retval != -ERESTARTSYS) || bytes_read != 12) {
+ if (retval == 0)
+ retval = -EIO;
+- dev_err(&usb_dev->dev, "%s: ni_usb_receive_bulk_msg returned %i, bytes_read=%i\n",
+- __func__, retval, bytes_read);
++ dev_err(&usb_dev->dev, "receive_bulk_msg returned %i, bytes_read=%i\n",
++ retval, bytes_read);
+ kfree(in_data);
+ return retval;
+ }
+@@ -1001,7 +995,7 @@ static int ni_usb_go_to_standby(gpib_board_t *board)
+ {
+ int retval;
+ struct ni_usb_priv *ni_priv = board->private_data;
+- struct usb_device *usb_dev = interface_to_usbdev(ni_priv->bus_interface);
++ struct usb_device *usb_dev;
+ u8 *out_data, *in_data;
+ static const int out_data_length = 0x10;
+ static const int in_data_length = 0x20;
+@@ -1009,6 +1003,9 @@ static int ni_usb_go_to_standby(gpib_board_t *board)
+ int i = 0;
+ struct ni_usb_status_block status;
+
++ if (!ni_priv->bus_interface)
++ return -ENODEV;
++ usb_dev = interface_to_usbdev(ni_priv->bus_interface);
+ out_data = kmalloc(out_data_length, GFP_KERNEL);
+ if (!out_data)
+ return -ENOMEM;
+@@ -1025,15 +1022,14 @@ static int ni_usb_go_to_standby(gpib_board_t *board)
+ kfree(out_data);
+ if (retval || bytes_written != i) {
+ mutex_unlock(&ni_priv->addressed_transfer_lock);
+- dev_err(&usb_dev->dev, "%s: ni_usb_send_bulk_msg returned %i, bytes_written=%i, i=%i\n",
+- __func__, retval, bytes_written, i);
++ dev_err(&usb_dev->dev, "send_bulk_msg returned %i, bytes_written=%i, i=%i\n",
++ retval, bytes_written, i);
+ return retval;
+ }
+
+ in_data = kmalloc(in_data_length, GFP_KERNEL);
+ if (!in_data) {
+ mutex_unlock(&ni_priv->addressed_transfer_lock);
+- dev_err(&usb_dev->dev, "%s: kmalloc failed\n", __func__);
+ return -ENOMEM;
+ }
+ retval = ni_usb_receive_bulk_msg(ni_priv, in_data, in_data_length, &bytes_read, 1000, 0);
+@@ -1041,16 +1037,15 @@ static int ni_usb_go_to_standby(gpib_board_t *board)
+ mutex_unlock(&ni_priv->addressed_transfer_lock);
+
+ if (retval || bytes_read != 12) {
+- dev_err(&usb_dev->dev, "%s: ni_usb_receive_bulk_msg returned %i, bytes_read=%i\n",
+- __func__, retval, bytes_read);
++ dev_err(&usb_dev->dev, "receive_bulk_msg returned %i, bytes_read=%i\n",
++ retval, bytes_read);
+ kfree(in_data);
+ return retval;
+ }
+ ni_usb_parse_status_block(in_data, &status);
+ kfree(in_data);
+ if (status.id != NIUSB_IBGTS_ID)
+- dev_err(&usb_dev->dev, "%s: bug: status.id 0x%x != INUSB_IBGTS_ID\n",
+- __func__, status.id);
++ dev_err(&usb_dev->dev, "bug: status.id 0x%x != INUSB_IBGTS_ID\n", status.id);
+ ni_usb_soft_update_status(board, status.ibsta, 0);
+ return 0;
+ }
+@@ -1059,11 +1054,14 @@ static void ni_usb_request_system_control(gpib_board_t *board, int request_contr
+ {
+ int retval;
+ struct ni_usb_priv *ni_priv = board->private_data;
+- struct usb_device *usb_dev = interface_to_usbdev(ni_priv->bus_interface);
++ struct usb_device *usb_dev;
+ int i = 0;
+ struct ni_usb_register writes[4];
+ unsigned int ibsta;
+
++ if (!ni_priv->bus_interface)
++ return; // -ENODEV;
++ usb_dev = interface_to_usbdev(ni_priv->bus_interface);
+ if (request_control) {
+ writes[i].device = NIUSB_SUBDEV_TNT4882;
+ writes[i].address = CMDR;
+@@ -1093,7 +1091,7 @@ static void ni_usb_request_system_control(gpib_board_t *board, int request_contr
+ }
+ retval = ni_usb_write_registers(ni_priv, writes, i, &ibsta);
+ if (retval < 0) {
+- dev_err(&usb_dev->dev, "%s: register write failed, retval=%i\n", __func__, retval);
++ dev_err(&usb_dev->dev, "register write failed, retval=%i\n", retval);
+ return; // retval;
+ }
+ if (!request_control)
+@@ -1107,7 +1105,7 @@ static void ni_usb_interface_clear(gpib_board_t *board, int assert)
+ {
+ int retval;
+ struct ni_usb_priv *ni_priv = board->private_data;
+- struct usb_device *usb_dev = interface_to_usbdev(ni_priv->bus_interface);
++ struct usb_device *usb_dev;
+ u8 *out_data, *in_data;
+ static const int out_data_length = 0x10;
+ static const int in_data_length = 0x10;
+@@ -1115,14 +1113,15 @@ static void ni_usb_interface_clear(gpib_board_t *board, int assert)
+ int i = 0;
+ struct ni_usb_status_block status;
+
+- // FIXME: we are going to pulse when assert is true, and ignore otherwise
++ if (!ni_priv->bus_interface)
++ return; // -ENODEV;
++ usb_dev = interface_to_usbdev(ni_priv->bus_interface);
++// FIXME: we are going to pulse when assert is true, and ignore otherwise
+ if (assert == 0)
+ return;
+ out_data = kmalloc(out_data_length, GFP_KERNEL);
+- if (!out_data) {
+- dev_err(&usb_dev->dev, "%s: kmalloc failed\n", __func__);
++ if (!out_data)
+ return;
+- }
+ out_data[i++] = NIUSB_IBSIC_ID;
+ out_data[i++] = 0x0;
+ out_data[i++] = 0x0;
+@@ -1131,8 +1130,8 @@ static void ni_usb_interface_clear(gpib_board_t *board, int assert)
+ retval = ni_usb_send_bulk_msg(ni_priv, out_data, i, &bytes_written, 1000);
+ kfree(out_data);
+ if (retval || bytes_written != i) {
+- dev_err(&usb_dev->dev, "%s: ni_usb_send_bulk_msg returned %i, bytes_written=%i, i=%i\n",
+- __func__, retval, bytes_written, i);
++ dev_err(&usb_dev->dev, "send_bulk_msg returned %i, bytes_written=%i, i=%i\n",
++ retval, bytes_written, i);
+ return;
+ }
+ in_data = kmalloc(in_data_length, GFP_KERNEL);
+@@ -1141,8 +1140,8 @@ static void ni_usb_interface_clear(gpib_board_t *board, int assert)
+
+ retval = ni_usb_receive_bulk_msg(ni_priv, in_data, in_data_length, &bytes_read, 1000, 0);
+ if (retval || bytes_read != 12) {
+- dev_err(&usb_dev->dev, "%s: ni_usb_receive_bulk_msg returned %i, bytes_read=%i\n",
+- __func__, retval, bytes_read);
++ dev_err(&usb_dev->dev, "receive_bulk_msg returned %i, bytes_read=%i\n",
++ retval, bytes_read);
+ kfree(in_data);
+ return;
+ }
+@@ -1155,10 +1154,13 @@ static void ni_usb_remote_enable(gpib_board_t *board, int enable)
+ {
+ int retval;
+ struct ni_usb_priv *ni_priv = board->private_data;
+- struct usb_device *usb_dev = interface_to_usbdev(ni_priv->bus_interface);
++ struct usb_device *usb_dev;
+ struct ni_usb_register reg;
+ unsigned int ibsta;
+
++ if (!ni_priv->bus_interface)
++ return; // -ENODEV;
++ usb_dev = interface_to_usbdev(ni_priv->bus_interface);
+ reg.device = NIUSB_SUBDEV_TNT4882;
+ reg.address = nec7210_to_tnt4882_offset(AUXMR);
+ if (enable)
+@@ -1167,7 +1169,7 @@ static void ni_usb_remote_enable(gpib_board_t *board, int enable)
+ reg.value = AUX_CREN;
+ retval = ni_usb_write_registers(ni_priv, &reg, 1, &ibsta);
+ if (retval < 0) {
+- dev_err(&usb_dev->dev, "%s: register write failed, retval=%i\n", __func__, retval);
++ dev_err(&usb_dev->dev, "register write failed, retval=%i\n", retval);
+ return; //retval;
+ }
+ ni_priv->ren_state = enable;
+@@ -1202,12 +1204,14 @@ static unsigned int ni_usb_update_status(gpib_board_t *board, unsigned int clear
+ {
+ int retval;
+ struct ni_usb_priv *ni_priv = board->private_data;
+- struct usb_device *usb_dev = interface_to_usbdev(ni_priv->bus_interface);
++ struct usb_device *usb_dev;
+ static const int buffer_length = 8;
+ u8 *buffer;
+ struct ni_usb_status_block status;
+
+- //printk("%s: receive control pipe is %i\n", __func__, pipe);
++ if (!ni_priv->bus_interface)
++ return -ENODEV;
++ usb_dev = interface_to_usbdev(ni_priv->bus_interface);
+ buffer = kmalloc(buffer_length, GFP_KERNEL);
+ if (!buffer)
+ return board->status;
+@@ -1216,7 +1220,7 @@ static unsigned int ni_usb_update_status(gpib_board_t *board, unsigned int clear
+ USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+ 0x200, 0x0, buffer, buffer_length, 1000);
+ if (retval != buffer_length) {
+- dev_err(&usb_dev->dev, "%s: usb_control_msg returned %i\n", __func__, retval);
++ dev_err(&usb_dev->dev, "usb_control_msg returned %i\n", retval);
+ kfree(buffer);
+ return board->status;
+ }
+@@ -1235,7 +1239,6 @@ static void ni_usb_stop(struct ni_usb_priv *ni_priv)
+ u8 *buffer;
+ struct ni_usb_status_block status;
+
+- //printk("%s: receive control pipe is %i\n", __func__, pipe);
+ buffer = kmalloc(buffer_length, GFP_KERNEL);
+ if (!buffer)
+ return;
+@@ -1244,7 +1247,7 @@ static void ni_usb_stop(struct ni_usb_priv *ni_priv)
+ USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+ 0x0, 0x0, buffer, buffer_length, 1000);
+ if (retval != buffer_length) {
+- dev_err(&usb_dev->dev, "%s: usb_control_msg returned %i\n", __func__, retval);
++ dev_err(&usb_dev->dev, "usb_control_msg returned %i\n", retval);
+ kfree(buffer);
+ return;
+ }
+@@ -1256,11 +1259,14 @@ static int ni_usb_primary_address(gpib_board_t *board, unsigned int address)
+ {
+ int retval;
+ struct ni_usb_priv *ni_priv = board->private_data;
+- struct usb_device *usb_dev = interface_to_usbdev(ni_priv->bus_interface);
++ struct usb_device *usb_dev;
+ int i = 0;
+ struct ni_usb_register writes[2];
+ unsigned int ibsta;
+
++ if (!ni_priv->bus_interface)
++ return -ENODEV;
++ usb_dev = interface_to_usbdev(ni_priv->bus_interface);
+ writes[i].device = NIUSB_SUBDEV_TNT4882;
+ writes[i].address = nec7210_to_tnt4882_offset(ADR);
+ writes[i].value = address;
+@@ -1271,7 +1277,7 @@ static int ni_usb_primary_address(gpib_board_t *board, unsigned int address)
+ i++;
+ retval = ni_usb_write_registers(ni_priv, writes, i, &ibsta);
+ if (retval < 0) {
+- dev_err(&usb_dev->dev, "%s: register write failed, retval=%i\n", __func__, retval);
++ dev_err(&usb_dev->dev, "register write failed, retval=%i\n", retval);
+ return retval;
+ }
+ ni_usb_soft_update_status(board, ibsta, 0);
+@@ -1311,15 +1317,18 @@ static int ni_usb_secondary_address(gpib_board_t *board, unsigned int address, i
+ {
+ int retval;
+ struct ni_usb_priv *ni_priv = board->private_data;
+- struct usb_device *usb_dev = interface_to_usbdev(ni_priv->bus_interface);
++ struct usb_device *usb_dev;
+ int i = 0;
+ struct ni_usb_register writes[3];
+ unsigned int ibsta;
+
++ if (!ni_priv->bus_interface)
++ return -ENODEV;
++ usb_dev = interface_to_usbdev(ni_priv->bus_interface);
+ i += ni_usb_write_sad(writes, address, enable);
+ retval = ni_usb_write_registers(ni_priv, writes, i, &ibsta);
+ if (retval < 0) {
+- dev_err(&usb_dev->dev, "%s: register write failed, retval=%i\n", __func__, retval);
++ dev_err(&usb_dev->dev, "register write failed, retval=%i\n", retval);
+ return retval;
+ }
+ ni_usb_soft_update_status(board, ibsta, 0);
+@@ -1330,7 +1339,7 @@ static int ni_usb_parallel_poll(gpib_board_t *board, uint8_t *result)
+ {
+ int retval;
+ struct ni_usb_priv *ni_priv = board->private_data;
+- struct usb_device *usb_dev = interface_to_usbdev(ni_priv->bus_interface);
++ struct usb_device *usb_dev;
+ u8 *out_data, *in_data;
+ static const int out_data_length = 0x10;
+ static const int in_data_length = 0x20;
+@@ -1339,6 +1348,9 @@ static int ni_usb_parallel_poll(gpib_board_t *board, uint8_t *result)
+ int j = 0;
+ struct ni_usb_status_block status;
+
++ if (!ni_priv->bus_interface)
++ return -ENODEV;
++ usb_dev = interface_to_usbdev(ni_priv->bus_interface);
+ out_data = kmalloc(out_data_length, GFP_KERNEL);
+ if (!out_data)
+ return -ENOMEM;
+@@ -1353,8 +1365,8 @@ static int ni_usb_parallel_poll(gpib_board_t *board, uint8_t *result)
+
+ kfree(out_data);
+ if (retval || bytes_written != i) {
+- dev_err(&usb_dev->dev, "%s: ni_usb_send_bulk_msg returned %i, bytes_written=%i, i=%i\n",
+- __func__, retval, bytes_written, i);
++ dev_err(&usb_dev->dev, "send_bulk_msg returned %i, bytes_written=%i, i=%i\n",
++ retval, bytes_written, i);
+ return retval;
+ }
+ in_data = kmalloc(in_data_length, GFP_KERNEL);
+@@ -1366,8 +1378,8 @@ static int ni_usb_parallel_poll(gpib_board_t *board, uint8_t *result)
+ &bytes_read, 1000, 1);
+
+ if (retval && retval != -ERESTARTSYS) {
+- dev_err(&usb_dev->dev, "%s: ni_usb_receive_bulk_msg returned %i, bytes_read=%i\n",
+- __func__, retval, bytes_read);
++ dev_err(&usb_dev->dev, "receive_bulk_msg returned %i, bytes_read=%i\n",
++ retval, bytes_read);
+ kfree(in_data);
+ return retval;
+ }
+@@ -1382,18 +1394,21 @@ static void ni_usb_parallel_poll_configure(gpib_board_t *board, uint8_t config)
+ {
+ int retval;
+ struct ni_usb_priv *ni_priv = board->private_data;
+- struct usb_device *usb_dev = interface_to_usbdev(ni_priv->bus_interface);
++ struct usb_device *usb_dev;
+ int i = 0;
+ struct ni_usb_register writes[1];
+ unsigned int ibsta;
+
++ if (!ni_priv->bus_interface)
++ return; // -ENODEV;
++ usb_dev = interface_to_usbdev(ni_priv->bus_interface);
+ writes[i].device = NIUSB_SUBDEV_TNT4882;
+ writes[i].address = nec7210_to_tnt4882_offset(AUXMR);
+ writes[i].value = PPR | config;
+ i++;
+ retval = ni_usb_write_registers(ni_priv, writes, i, &ibsta);
+ if (retval < 0) {
+- dev_err(&usb_dev->dev, "%s: register write failed, retval=%i\n", __func__, retval);
++ dev_err(&usb_dev->dev, "register write failed, retval=%i\n", retval);
+ return;// retval;
+ }
+ ni_usb_soft_update_status(board, ibsta, 0);
+@@ -1404,11 +1419,14 @@ static void ni_usb_parallel_poll_response(gpib_board_t *board, int ist)
+ {
+ int retval;
+ struct ni_usb_priv *ni_priv = board->private_data;
+- struct usb_device *usb_dev = interface_to_usbdev(ni_priv->bus_interface);
++ struct usb_device *usb_dev;
+ int i = 0;
+ struct ni_usb_register writes[1];
+ unsigned int ibsta;
+
++ if (!ni_priv->bus_interface)
++ return; // -ENODEV;
++ usb_dev = interface_to_usbdev(ni_priv->bus_interface);
+ writes[i].device = NIUSB_SUBDEV_TNT4882;
+ writes[i].address = nec7210_to_tnt4882_offset(AUXMR);
+ if (ist)
+@@ -1418,7 +1436,7 @@ static void ni_usb_parallel_poll_response(gpib_board_t *board, int ist)
+ i++;
+ retval = ni_usb_write_registers(ni_priv, writes, i, &ibsta);
+ if (retval < 0) {
+- dev_err(&usb_dev->dev, "%s: register write failed, retval=%i\n", __func__, retval);
++ dev_err(&usb_dev->dev, "register write failed, retval=%i\n", retval);
+ return;// retval;
+ }
+ ni_usb_soft_update_status(board, ibsta, 0);
+@@ -1429,18 +1447,21 @@ static void ni_usb_serial_poll_response(gpib_board_t *board, u8 status)
+ {
+ int retval;
+ struct ni_usb_priv *ni_priv = board->private_data;
+- struct usb_device *usb_dev = interface_to_usbdev(ni_priv->bus_interface);
++ struct usb_device *usb_dev;
+ int i = 0;
+ struct ni_usb_register writes[1];
+ unsigned int ibsta;
+
++ if (!ni_priv->bus_interface)
++ return; // -ENODEV;
++ usb_dev = interface_to_usbdev(ni_priv->bus_interface);
+ writes[i].device = NIUSB_SUBDEV_TNT4882;
+ writes[i].address = nec7210_to_tnt4882_offset(SPMR);
+ writes[i].value = status;
+ i++;
+ retval = ni_usb_write_registers(ni_priv, writes, i, &ibsta);
+ if (retval < 0) {
+- dev_err(&usb_dev->dev, "%s: register write failed, retval=%i\n", __func__, retval);
++ dev_err(&usb_dev->dev, "register write failed, retval=%i\n", retval);
+ return;// retval;
+ }
+ ni_usb_soft_update_status(board, ibsta, 0);
+@@ -1456,18 +1477,21 @@ static void ni_usb_return_to_local(gpib_board_t *board)
+ {
+ int retval;
+ struct ni_usb_priv *ni_priv = board->private_data;
+- struct usb_device *usb_dev = interface_to_usbdev(ni_priv->bus_interface);
++ struct usb_device *usb_dev;
+ int i = 0;
+ struct ni_usb_register writes[1];
+ unsigned int ibsta;
+
++ if (!ni_priv->bus_interface)
++ return; // -ENODEV;
++ usb_dev = interface_to_usbdev(ni_priv->bus_interface);
+ writes[i].device = NIUSB_SUBDEV_TNT4882;
+ writes[i].address = nec7210_to_tnt4882_offset(AUXMR);
+ writes[i].value = AUX_RTL;
+ i++;
+ retval = ni_usb_write_registers(ni_priv, writes, i, &ibsta);
+ if (retval < 0) {
+- dev_err(&usb_dev->dev, "%s: register write failed, retval=%i\n", __func__, retval);
++ dev_err(&usb_dev->dev, "register write failed, retval=%i\n", retval);
+ return;// retval;
+ }
+ ni_usb_soft_update_status(board, ibsta, 0);
+@@ -1478,7 +1502,7 @@ static int ni_usb_line_status(const gpib_board_t *board)
+ {
+ int retval;
+ struct ni_usb_priv *ni_priv = board->private_data;
+- struct usb_device *usb_dev = interface_to_usbdev(ni_priv->bus_interface);
++ struct usb_device *usb_dev;
+ u8 *out_data, *in_data;
+ static const int out_data_length = 0x20;
+ static const int in_data_length = 0x20;
+@@ -1488,6 +1512,9 @@ static int ni_usb_line_status(const gpib_board_t *board)
+ int line_status = ValidALL;
+ // NI windows driver reads 0xd(HSSEL), 0xc (ARD0), 0x1f (BSR)
+
++ if (!ni_priv->bus_interface)
++ return -ENODEV;
++ usb_dev = interface_to_usbdev(ni_priv->bus_interface);
+ out_data = kmalloc(out_data_length, GFP_KERNEL);
+ if (!out_data)
+ return -ENOMEM;
+@@ -1509,15 +1536,14 @@ static int ni_usb_line_status(const gpib_board_t *board)
+ if (retval || bytes_written != i) {
+ mutex_unlock(&ni_priv->addressed_transfer_lock);
+ if (retval != -EAGAIN)
+- dev_err(&usb_dev->dev, "%s: ni_usb_send_bulk_msg returned %i, bytes_written=%i, i=%i\n",
+- __func__, retval, bytes_written, i);
++ dev_err(&usb_dev->dev, "send_bulk_msg returned %i, bytes_written=%i, i=%i\n",
++ retval, bytes_written, i);
+ return retval;
+ }
+
+ in_data = kmalloc(in_data_length, GFP_KERNEL);
+ if (!in_data) {
+ mutex_unlock(&ni_priv->addressed_transfer_lock);
+- dev_err(&usb_dev->dev, "%s: kmalloc failed\n", __func__);
+ return -ENOMEM;
+ }
+ retval = ni_usb_nonblocking_receive_bulk_msg(ni_priv, in_data, in_data_length,
+@@ -1527,8 +1553,8 @@ static int ni_usb_line_status(const gpib_board_t *board)
+
+ if (retval) {
+ if (retval != -EAGAIN)
+- dev_err(&usb_dev->dev, "%s: ni_usb_receive_bulk_msg returned %i, bytes_read=%i\n",
+- __func__, retval, bytes_read);
++ dev_err(&usb_dev->dev, "receive_bulk_msg returned %i, bytes_read=%i\n",
++ retval, bytes_read);
+ kfree(in_data);
+ return retval;
+ }
+@@ -1595,16 +1621,19 @@ static unsigned int ni_usb_t1_delay(gpib_board_t *board, unsigned int nano_sec)
+ {
+ int retval;
+ struct ni_usb_priv *ni_priv = board->private_data;
+- struct usb_device *usb_dev = interface_to_usbdev(ni_priv->bus_interface);
++ struct usb_device *usb_dev;
+ struct ni_usb_register writes[3];
+ unsigned int ibsta;
+ unsigned int actual_ns;
+ int i;
+
++ if (!ni_priv->bus_interface)
++ return -ENODEV;
++ usb_dev = interface_to_usbdev(ni_priv->bus_interface);
+ i = ni_usb_setup_t1_delay(writes, nano_sec, &actual_ns);
+ retval = ni_usb_write_registers(ni_priv, writes, i, &ibsta);
+ if (retval < 0) {
+- dev_err(&usb_dev->dev, "%s: register write failed, retval=%i\n", __func__, retval);
++ dev_err(&usb_dev->dev, "register write failed, retval=%i\n", retval);
+ return -1; //FIXME should change return type to int for error reporting
+ }
+ board->t1_nano_sec = actual_ns;
+@@ -1736,7 +1765,7 @@ static int ni_usb_setup_init(gpib_board_t *board, struct ni_usb_register *writes
+ writes[i].value = AUX_CPPF;
+ i++;
+ if (i > NUM_INIT_WRITES) {
+- dev_err(&usb_dev->dev, "%s: bug!, buffer overrun, i=%i\n", __func__, i);
++ dev_err(&usb_dev->dev, "bug!, buffer overrun, i=%i\n", i);
+ return 0;
+ }
+ return i;
+@@ -1762,7 +1791,7 @@ static int ni_usb_init(gpib_board_t *board)
+ return -EFAULT;
+ kfree(writes);
+ if (retval) {
+- dev_err(&usb_dev->dev, "%s: register write failed, retval=%i\n", __func__, retval);
++ dev_err(&usb_dev->dev, "register write failed, retval=%i\n", retval);
+ return retval;
+ }
+ ni_usb_soft_update_status(board, ibsta, 0);
+@@ -1778,9 +1807,6 @@ static void ni_usb_interrupt_complete(struct urb *urb)
+ struct ni_usb_status_block status;
+ unsigned long flags;
+
+-// printk("debug: %s: status=0x%x, error_count=%i, actual_length=%i\n", __func__,
+-// urb->status, urb->error_count, urb->actual_length);
+-
+ switch (urb->status) {
+ /* success */
+ case 0:
+@@ -1793,23 +1819,21 @@ static void ni_usb_interrupt_complete(struct urb *urb)
+ default: /* other error, resubmit */
+ retval = usb_submit_urb(ni_priv->interrupt_urb, GFP_ATOMIC);
+ if (retval)
+- dev_err(&usb_dev->dev, "%s: failed to resubmit interrupt urb\n", __func__);
++ dev_err(&usb_dev->dev, "failed to resubmit interrupt urb\n");
+ return;
+ }
+
+ ni_usb_parse_status_block(urb->transfer_buffer, &status);
+-// printk("debug: ibsta=0x%x\n", status.ibsta);
+
+ spin_lock_irqsave(&board->spinlock, flags);
+ ni_priv->monitored_ibsta_bits &= ~status.ibsta;
+-// printk("debug: monitored_ibsta_bits=0x%x\n", ni_priv->monitored_ibsta_bits);
+ spin_unlock_irqrestore(&board->spinlock, flags);
+
+ wake_up_interruptible(&board->wait);
+
+ retval = usb_submit_urb(ni_priv->interrupt_urb, GFP_ATOMIC);
+ if (retval)
+- dev_err(&usb_dev->dev, "%s: failed to resubmit interrupt urb\n", __func__);
++ dev_err(&usb_dev->dev, "failed to resubmit interrupt urb\n");
+ }
+
+ static int ni_usb_set_interrupt_monitor(gpib_board_t *board, unsigned int monitored_bits)
+@@ -1821,22 +1845,20 @@ static int ni_usb_set_interrupt_monitor(gpib_board_t *board, unsigned int monito
+ u8 *buffer;
+ struct ni_usb_status_block status;
+ unsigned long flags;
+- //printk("%s: receive control pipe is %i\n", __func__, pipe);
++
+ buffer = kmalloc(buffer_length, GFP_KERNEL);
+ if (!buffer)
+ return -ENOMEM;
+
+ spin_lock_irqsave(&board->spinlock, flags);
+ ni_priv->monitored_ibsta_bits = ni_usb_ibsta_monitor_mask & monitored_bits;
+-// dev_err(&usb_dev->dev, "debug: %s: monitored_ibsta_bits=0x%x\n",
+-// __func__, ni_priv->monitored_ibsta_bits);
+ spin_unlock_irqrestore(&board->spinlock, flags);
+ retval = ni_usb_receive_control_msg(ni_priv, NI_USB_WAIT_REQUEST, USB_DIR_IN |
+ USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+ 0x300, ni_usb_ibsta_monitor_mask & monitored_bits,
+ buffer, buffer_length, 1000);
+ if (retval != buffer_length) {
+- dev_err(&usb_dev->dev, "%s: usb_control_msg returned %i\n", __func__, retval);
++ dev_err(&usb_dev->dev, "usb_control_msg returned %i\n", retval);
+ kfree(buffer);
+ return -1;
+ }
+@@ -1872,8 +1894,7 @@ static int ni_usb_setup_urbs(gpib_board_t *board)
+ retval = usb_submit_urb(ni_priv->interrupt_urb, GFP_KERNEL);
+ mutex_unlock(&ni_priv->interrupt_transfer_lock);
+ if (retval) {
+- dev_err(&usb_dev->dev, "%s: failed to submit first interrupt urb, retval=%i\n",
+- __func__, retval);
++ dev_err(&usb_dev->dev, "failed to submit first interrupt urb, retval=%i\n", retval);
+ return retval;
+ }
+ return 0;
+@@ -1904,7 +1925,6 @@ static int ni_usb_b_read_serial_number(struct ni_usb_priv *ni_priv)
+ int j;
+ unsigned int serial_number;
+
+-// printk("%s: %s\n", __func__);
+ in_data = kmalloc(in_data_length, GFP_KERNEL);
+ if (!in_data)
+ return -ENOMEM;
+@@ -1924,20 +1944,19 @@ static int ni_usb_b_read_serial_number(struct ni_usb_priv *ni_priv)
+ i += ni_usb_bulk_termination(&out_data[i]);
+ retval = ni_usb_send_bulk_msg(ni_priv, out_data, out_data_length, &bytes_written, 1000);
+ if (retval) {
+- dev_err(&usb_dev->dev, "%s: ni_usb_send_bulk_msg returned %i, bytes_written=%i, i=%li\n",
+- __func__,
++ dev_err(&usb_dev->dev, "send_bulk_msg returned %i, bytes_written=%i, i=%li\n",
+ retval, bytes_written, (long)out_data_length);
+ goto serial_out;
+ }
+ retval = ni_usb_receive_bulk_msg(ni_priv, in_data, in_data_length, &bytes_read, 1000, 0);
+ if (retval) {
+- dev_err(&usb_dev->dev, "%s: ni_usb_receive_bulk_msg returned %i, bytes_read=%i\n",
+- __func__, retval, bytes_read);
++ dev_err(&usb_dev->dev, "receive_bulk_msg returned %i, bytes_read=%i\n",
++ retval, bytes_read);
+ ni_usb_dump_raw_block(in_data, bytes_read);
+ goto serial_out;
+ }
+ if (ARRAY_SIZE(results) < num_reads) {
+- dev_err(&usb_dev->dev, "Setup bug\n");
++ dev_err(&usb_dev->dev, "serial number eetup bug\n");
+ retval = -EINVAL;
+ goto serial_out;
+ }
+@@ -1945,7 +1964,7 @@ static int ni_usb_b_read_serial_number(struct ni_usb_priv *ni_priv)
+ serial_number = 0;
+ for (j = 0; j < num_reads; ++j)
+ serial_number |= (results[j] & 0xff) << (8 * j);
+- dev_info(&usb_dev->dev, "%s: board serial number is 0x%x\n", __func__, serial_number);
++ dev_dbg(&usb_dev->dev, "board serial number is 0x%x\n", serial_number);
+ retval = 0;
+ serial_out:
+ kfree(in_data);
+@@ -1973,22 +1992,22 @@ static int ni_usb_hs_wait_for_ready(struct ni_usb_priv *ni_priv)
+ USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+ 0x0, 0x0, buffer, buffer_size, 1000);
+ if (retval < 0) {
+- dev_err(&usb_dev->dev, "%s: usb_control_msg request 0x%x returned %i\n",
+- __func__, NI_USB_SERIAL_NUMBER_REQUEST, retval);
++ dev_err(&usb_dev->dev, "usb_control_msg request 0x%x returned %i\n",
++ NI_USB_SERIAL_NUMBER_REQUEST, retval);
+ goto ready_out;
+ }
+ j = 0;
+ if (buffer[j] != NI_USB_SERIAL_NUMBER_REQUEST) {
+- dev_err(&usb_dev->dev, "%s: unexpected data: buffer[%i]=0x%x, expected 0x%x\n",
+- __func__, j, (int)buffer[j], NI_USB_SERIAL_NUMBER_REQUEST);
++ dev_err(&usb_dev->dev, "unexpected data: buffer[%i]=0x%x, expected 0x%x\n",
++ j, (int)buffer[j], NI_USB_SERIAL_NUMBER_REQUEST);
+ unexpected = 1;
+ }
+ if (unexpected)
+ ni_usb_dump_raw_block(buffer, retval);
+ // NI-USB-HS+ pads the serial with 0x0 to make 16 bytes
+ if (retval != 5 && retval != 16) {
+- dev_err(&usb_dev->dev, "%s: received unexpected number of bytes = %i, expected 5 or 16\n",
+- __func__, retval);
++ dev_err(&usb_dev->dev, "received unexpected number of bytes = %i, expected 5 or 16\n",
++ retval);
+ ni_usb_dump_raw_block(buffer, retval);
+ }
+ serial_number = 0;
+@@ -1996,7 +2015,7 @@ static int ni_usb_hs_wait_for_ready(struct ni_usb_priv *ni_priv)
+ serial_number |= (buffer[++j] << 8);
+ serial_number |= (buffer[++j] << 16);
+ serial_number |= (buffer[++j] << 24);
+- dev_info(&usb_dev->dev, "%s: board serial number is 0x%x\n", __func__, serial_number);
++ dev_dbg(&usb_dev->dev, "board serial number is 0x%x\n", serial_number);
+ for (i = 0; i < timeout; ++i) {
+ int ready = 0;
+
+@@ -2004,26 +2023,26 @@ static int ni_usb_hs_wait_for_ready(struct ni_usb_priv *ni_priv)
+ USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+ 0x0, 0x0, buffer, buffer_size, 100);
+ if (retval < 0) {
+- dev_err(&usb_dev->dev, "%s: usb_control_msg request 0x%x returned %i\n",
+- __func__, NI_USB_POLL_READY_REQUEST, retval);
++ dev_err(&usb_dev->dev, "usb_control_msg request 0x%x returned %i\n",
++ NI_USB_POLL_READY_REQUEST, retval);
+ goto ready_out;
+ }
+ j = 0;
+ unexpected = 0;
+ if (buffer[j] != NI_USB_POLL_READY_REQUEST) { // [0]
+- dev_err(&usb_dev->dev, "%s: unexpected data: buffer[%i]=0x%x, expected 0x%x\n",
+- __func__, j, (int)buffer[j], NI_USB_POLL_READY_REQUEST);
++ dev_err(&usb_dev->dev, "unexpected data: buffer[%i]=0x%x, expected 0x%x\n",
++ j, (int)buffer[j], NI_USB_POLL_READY_REQUEST);
+ unexpected = 1;
+ }
+ ++j;
+ if (buffer[j] != 0x1 && buffer[j] != 0x0) { // [1] HS+ sends 0x0
+- dev_err(&usb_dev->dev, "%s: unexpected data: buffer[%i]=0x%x, expected 0x1 or 0x0\n",
+- __func__, j, (int)buffer[j]);
++ dev_err(&usb_dev->dev, "unexpected data: buffer[%i]=0x%x, expected 0x1 or 0x0\n",
++ j, (int)buffer[j]);
+ unexpected = 1;
+ }
+ if (buffer[++j] != 0x0) { // [2]
+- dev_err(&usb_dev->dev, "%s: unexpected data: buffer[%i]=0x%x, expected 0x%x\n",
+- __func__, j, (int)buffer[j], 0x0);
++ dev_err(&usb_dev->dev, "unexpected data: buffer[%i]=0x%x, expected 0x%x\n",
++ j, (int)buffer[j], 0x0);
+ unexpected = 1;
+ }
+ ++j;
+@@ -2031,22 +2050,22 @@ static int ni_usb_hs_wait_for_ready(struct ni_usb_priv *ni_priv)
+ // NI-USB-HS+ sends 0x0
+ if (buffer[j] != 0x1 && buffer[j] != 0x8 && buffer[j] != 0x7 && buffer[j] != 0x0) {
+ // [3]
+- dev_err(&usb_dev->dev, "%s: unexpected data: buffer[%i]=0x%x, expected 0x0, 0x1, 0x7 or 0x8\n",
+- __func__, j, (int)buffer[j]);
++ dev_err(&usb_dev->dev, "unexpected data: buffer[%i]=0x%x, expected 0x0, 0x1, 0x7 or 0x8\n",
++ j, (int)buffer[j]);
+ unexpected = 1;
+ }
+ ++j;
+ // NI-USB-HS+ sends 0 here
+ if (buffer[j] != 0x30 && buffer[j] != 0x0) { // [4]
+- dev_err(&usb_dev->dev, "%s: unexpected data: buffer[%i]=0x%x, expected 0x0 or 0x30\n",
+- __func__, j, (int)buffer[j]);
++ dev_err(&usb_dev->dev, "unexpected data: buffer[%i]=0x%x, expected 0x0 or 0x30\n",
++ j, (int)buffer[j]);
+ unexpected = 1;
+ }
+ ++j;
+ // MC usb-488 (and sometimes NI-USB-HS?) and NI-USB-HS+ sends 0x0 here
+ if (buffer[j] != 0x1 && buffer[j] != 0x0) { // [5]
+- dev_err(&usb_dev->dev, "%s: unexpected data: buffer[%i]=0x%x, expected 0x1 or 0x0\n",
+- __func__, j, (int)buffer[j]);
++ dev_err(&usb_dev->dev, "unexpected data: buffer[%i]=0x%x, expected 0x1 or 0x0\n",
++ j, (int)buffer[j]);
+ unexpected = 1;
+ }
+ if (buffer[++j] != 0x0) { // [6]
+@@ -2054,8 +2073,8 @@ static int ni_usb_hs_wait_for_ready(struct ni_usb_priv *ni_priv)
+ // NI-USB-HS+ sends 0xf here
+ if (buffer[j] != 0x2 && buffer[j] != 0xe && buffer[j] != 0xf &&
+ buffer[j] != 0x16) {
+- dev_err(&usb_dev->dev, "%s: unexpected data: buffer[%i]=0x%x, expected 0x2, 0xe, 0xf or 0x16\n",
+- __func__, j, (int)buffer[j]);
++ dev_err(&usb_dev->dev, "unexpected data: buffer[%i]=0x%x, expected 0x2, 0xe, 0xf or 0x16\n",
++ j, (int)buffer[j]);
+ unexpected = 1;
+ }
+ }
+@@ -2064,30 +2083,30 @@ static int ni_usb_hs_wait_for_ready(struct ni_usb_priv *ni_priv)
+ // MC usb-488 sends 0x5 here; MC usb-488A sends 0x6 here
+ if (buffer[j] != 0x3 && buffer[j] != 0x5 && buffer[j] != 0x6 &&
+ buffer[j] != 0x8) {
+- dev_err(&usb_dev->dev, "%s: unexpected data: buffer[%i]=0x%x, expected 0x3 or 0x5, 0x6 or 0x08\n",
+- __func__, j, (int)buffer[j]);
++ dev_err(&usb_dev->dev, "unexpected data: buffer[%i]=0x%x, expected 0x3 or 0x5, 0x6 or 0x08\n",
++ j, (int)buffer[j]);
+ unexpected = 1;
+ }
+ }
+ ++j;
+ if (buffer[j] != 0x0 && buffer[j] != 0x2) { // [8] MC usb-488 sends 0x2 here
+- dev_err(&usb_dev->dev, "%s: unexpected data: buffer[%i]=0x%x, expected 0x0 or 0x2\n",
+- __func__, j, (int)buffer[j]);
++ dev_err(&usb_dev->dev, " unexpected data: buffer[%i]=0x%x, expected 0x0 or 0x2\n",
++ j, (int)buffer[j]);
+ unexpected = 1;
+ }
+ ++j;
+ // MC usb-488A and NI-USB-HS sends 0x3 here; NI-USB-HS+ sends 0x30 here
+ if (buffer[j] != 0x0 && buffer[j] != 0x3 && buffer[j] != 0x30) { // [9]
+- dev_err(&usb_dev->dev, "%s: unexpected data: buffer[%i]=0x%x, expected 0x0, 0x3 or 0x30\n",
+- __func__, j, (int)buffer[j]);
++ dev_err(&usb_dev->dev, "unexpected data: buffer[%i]=0x%x, expected 0x0, 0x3 or 0x30\n",
++ j, (int)buffer[j]);
+ unexpected = 1;
+ }
+ if (buffer[++j] != 0x0) {
+ ready = 1;
+ if (buffer[j] != 0x96 && buffer[j] != 0x7 && buffer[j] != 0x6e) {
+ // [10] MC usb-488 sends 0x7 here
+- dev_err(&usb_dev->dev, "%s: unexpected data: buffer[%i]=0x%x, expected 0x96, 0x07 or 0x6e\n",
+- __func__, j, (int)buffer[j]);
++ dev_err(&usb_dev->dev, "unexpected data: buffer[%i]=0x%x, expected 0x96, 0x07 or 0x6e\n",
++ j, (int)buffer[j]);
+ unexpected = 1;
+ }
+ }
+@@ -2097,7 +2116,6 @@ static int ni_usb_hs_wait_for_ready(struct ni_usb_priv *ni_priv)
+ break;
+ retval = msleep_interruptible(msec_sleep_duration);
+ if (retval) {
+- dev_err(&usb_dev->dev, "ni_usb_gpib: msleep interrupted\n");
+ retval = -ERESTARTSYS;
+ goto ready_out;
+ }
+@@ -2106,7 +2124,7 @@ static int ni_usb_hs_wait_for_ready(struct ni_usb_priv *ni_priv)
+
+ ready_out:
+ kfree(buffer);
+- dev_dbg(&usb_dev->dev, "%s: exit retval=%d\n", __func__, retval);
++ dev_dbg(&usb_dev->dev, "exit retval=%d\n", retval);
+ return retval;
+ }
+
+@@ -2134,14 +2152,14 @@ static int ni_usb_hs_plus_extra_init(struct ni_usb_priv *ni_priv)
+ USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+ 0x0, 0x0, buffer, transfer_size, 1000);
+ if (retval < 0) {
+- dev_err(&usb_dev->dev, "%s: usb_control_msg request 0x%x returned %i\n",
+- __func__, NI_USB_HS_PLUS_0x48_REQUEST, retval);
++ dev_err(&usb_dev->dev, "usb_control_msg request 0x%x returned %i\n",
++ NI_USB_HS_PLUS_0x48_REQUEST, retval);
+ break;
+ }
+ // expected response data: 48 f3 30 00 00 00 00 00 00 00 00 00 00 00 00 00
+ if (buffer[0] != NI_USB_HS_PLUS_0x48_REQUEST)
+- dev_err(&usb_dev->dev, "%s: unexpected data: buffer[0]=0x%x, expected 0x%x\n",
+- __func__, (int)buffer[0], NI_USB_HS_PLUS_0x48_REQUEST);
++ dev_err(&usb_dev->dev, "unexpected data: buffer[0]=0x%x, expected 0x%x\n",
++ (int)buffer[0], NI_USB_HS_PLUS_0x48_REQUEST);
+
+ transfer_size = 2;
+
+@@ -2149,14 +2167,14 @@ static int ni_usb_hs_plus_extra_init(struct ni_usb_priv *ni_priv)
+ USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+ 0x1, 0x0, buffer, transfer_size, 1000);
+ if (retval < 0) {
+- dev_err(&usb_dev->dev, "%s: usb_control_msg request 0x%x returned %i\n",
+- __func__, NI_USB_HS_PLUS_LED_REQUEST, retval);
++ dev_err(&usb_dev->dev, "usb_control_msg request 0x%x returned %i\n",
++ NI_USB_HS_PLUS_LED_REQUEST, retval);
+ break;
+ }
+ // expected response data: 4b 00
+ if (buffer[0] != NI_USB_HS_PLUS_LED_REQUEST)
+- dev_err(&usb_dev->dev, "%s: unexpected data: buffer[0]=0x%x, expected 0x%x\n",
+- __func__, (int)buffer[0], NI_USB_HS_PLUS_LED_REQUEST);
++ dev_err(&usb_dev->dev, "unexpected data: buffer[0]=0x%x, expected 0x%x\n",
++ (int)buffer[0], NI_USB_HS_PLUS_LED_REQUEST);
+
+ transfer_size = 9;
+
+@@ -2165,15 +2183,14 @@ static int ni_usb_hs_plus_extra_init(struct ni_usb_priv *ni_priv)
+ USB_RECIP_INTERFACE,
+ 0x0, 0x1, buffer, transfer_size, 1000);
+ if (retval < 0) {
+- dev_err(&usb_dev->dev, "%s: usb_control_msg request 0x%x returned %i\n",
+- __func__, NI_USB_HS_PLUS_0xf8_REQUEST, retval);
++ dev_err(&usb_dev->dev, "usb_control_msg request 0x%x returned %i\n",
++ NI_USB_HS_PLUS_0xf8_REQUEST, retval);
+ break;
+ }
+ // expected response data: f8 01 00 00 00 01 00 00 00
+ if (buffer[0] != NI_USB_HS_PLUS_0xf8_REQUEST)
+- dev_err(&usb_dev->dev, "%s: unexpected data: buffer[0]=0x%x, expected 0x%x\n",
+- __func__, (int)buffer[0], NI_USB_HS_PLUS_0xf8_REQUEST);
+-
++ dev_err(&usb_dev->dev, "unexpected data: buffer[0]=0x%x, expected 0x%x\n",
++ (int)buffer[0], NI_USB_HS_PLUS_0xf8_REQUEST);
+ } while (0);
+
+ // cleanup
+@@ -2192,7 +2209,7 @@ static inline int ni_usb_device_match(struct usb_interface *interface,
+ static int ni_usb_attach(gpib_board_t *board, const gpib_board_config_t *config)
+ {
+ int retval;
+- int i;
++ int i, index;
+ struct ni_usb_priv *ni_priv;
+ int product_id;
+ struct usb_device *usb_dev;
+@@ -2211,19 +2228,17 @@ static int ni_usb_attach(gpib_board_t *board, const gpib_board_config_t *config)
+ ni_priv->bus_interface = ni_usb_driver_interfaces[i];
+ usb_set_intfdata(ni_usb_driver_interfaces[i], board);
+ usb_dev = interface_to_usbdev(ni_priv->bus_interface);
+- dev_info(&usb_dev->dev,
+- "bus %d dev num %d attached to gpib minor %d, NI usb interface %i\n",
+- usb_dev->bus->busnum, usb_dev->devnum, board->minor, i);
++ index = i;
+ break;
+ }
+ }
+ if (i == MAX_NUM_NI_USB_INTERFACES) {
+ mutex_unlock(&ni_usb_hotplug_lock);
+- pr_err("No supported NI usb gpib adapters found, have you loaded its firmware?\n");
++ dev_err(board->gpib_dev, "No supported adapters found, have you loaded its firmware?\n");
+ return -ENODEV;
+ }
+ if (usb_reset_configuration(interface_to_usbdev(ni_priv->bus_interface)))
+- dev_err(&usb_dev->dev, "ni_usb_gpib: usb_reset_configuration() failed.\n");
++ dev_err(&usb_dev->dev, "usb_reset_configuration() failed.\n");
+
+ product_id = le16_to_cpu(usb_dev->descriptor.idProduct);
+ ni_priv->product_id = product_id;
+@@ -2296,7 +2311,9 @@ static int ni_usb_attach(gpib_board_t *board, const gpib_board_config_t *config)
+ }
+
+ mutex_unlock(&ni_usb_hotplug_lock);
+- dev_info(&usb_dev->dev, "%s: attached\n", __func__);
++ dev_info(&usb_dev->dev,
++ "bus %d dev num %d attached to gpib%d, intf %i\n",
++ usb_dev->bus->busnum, usb_dev->devnum, board->minor, index);
+ return retval;
+ }
+
+@@ -2304,27 +2321,19 @@ static int ni_usb_shutdown_hardware(struct ni_usb_priv *ni_priv)
+ {
+ struct usb_device *usb_dev = interface_to_usbdev(ni_priv->bus_interface);
+ int retval;
+- int i = 0;
+ struct ni_usb_register writes[2];
+ static const int writes_length = ARRAY_SIZE(writes);
+ unsigned int ibsta;
+
+-// printk("%s: %s\n", __func__);
+- writes[i].device = NIUSB_SUBDEV_TNT4882;
+- writes[i].address = nec7210_to_tnt4882_offset(AUXMR);
+- writes[i].value = AUX_CR;
+- i++;
+- writes[i].device = NIUSB_SUBDEV_UNKNOWN3;
+- writes[i].address = 0x10;
+- writes[i].value = 0x0;
+- i++;
+- if (i > writes_length) {
+- dev_err(&usb_dev->dev, "%s: bug!, buffer overrun, i=%i\n", __func__, i);
+- return -EINVAL;
+- }
+- retval = ni_usb_write_registers(ni_priv, writes, i, &ibsta);
++ writes[0].device = NIUSB_SUBDEV_TNT4882;
++ writes[0].address = nec7210_to_tnt4882_offset(AUXMR);
++ writes[0].value = AUX_CR;
++ writes[1].device = NIUSB_SUBDEV_UNKNOWN3;
++ writes[1].address = 0x10;
++ writes[1].value = 0x0;
++ retval = ni_usb_write_registers(ni_priv, writes, writes_length, &ibsta);
+ if (retval) {
+- dev_err(&usb_dev->dev, "%s: register write failed, retval=%i\n", __func__, retval);
++ dev_err(&usb_dev->dev, "register write failed, retval=%i\n", retval);
+ return retval;
+ }
+ return 0;
+@@ -2413,7 +2422,7 @@ static int ni_usb_driver_probe(struct usb_interface *interface, const struct usb
+ if (i == MAX_NUM_NI_USB_INTERFACES) {
+ usb_put_dev(usb_dev);
+ mutex_unlock(&ni_usb_hotplug_lock);
+- dev_err(&usb_dev->dev, "%s: ni_usb_driver_interfaces[] full\n", __func__);
++ dev_err(&usb_dev->dev, "ni_usb_driver_interfaces[] full\n");
+ return -1;
+ }
+ path = kmalloc(path_length, GFP_KERNEL);
+@@ -2423,7 +2432,7 @@ static int ni_usb_driver_probe(struct usb_interface *interface, const struct usb
+ return -ENOMEM;
+ }
+ usb_make_path(usb_dev, path, path_length);
+- dev_info(&usb_dev->dev, "ni_usb_gpib: probe succeeded for path: %s\n", path);
++ dev_info(&usb_dev->dev, "probe succeeded for path: %s\n", path);
+ kfree(path);
+ mutex_unlock(&ni_usb_hotplug_lock);
+ return 0;
+@@ -2458,8 +2467,7 @@ static void ni_usb_driver_disconnect(struct usb_interface *interface)
+ }
+ }
+ if (i == MAX_NUM_NI_USB_INTERFACES)
+- dev_err(&usb_dev->dev, "%s: unable to find interface in ni_usb_driver_interfaces[]? bug?\n",
+- __func__);
++ dev_err(&usb_dev->dev, "unable to find interface bug?\n");
+ usb_put_dev(usb_dev);
+ mutex_unlock(&ni_usb_hotplug_lock);
+ }
+@@ -2498,9 +2506,9 @@ static int ni_usb_driver_suspend(struct usb_interface *interface, pm_message_t m
+ ni_usb_cleanup_urbs(ni_priv);
+ mutex_unlock(&ni_priv->interrupt_transfer_lock);
+ }
+- dev_info(&usb_dev->dev,
+- "bus %d dev num %d gpib minor %d, ni usb interface %i suspended\n",
+- usb_dev->bus->busnum, usb_dev->devnum, board->minor, i);
++ dev_dbg(&usb_dev->dev,
++ "bus %d dev num %d gpib%d, interface %i suspended\n",
++ usb_dev->bus->busnum, usb_dev->devnum, board->minor, i);
+ }
+
+ mutex_unlock(&ni_usb_hotplug_lock);
+@@ -2535,15 +2543,15 @@ static int ni_usb_driver_resume(struct usb_interface *interface)
+ mutex_lock(&ni_priv->interrupt_transfer_lock);
+ retval = usb_submit_urb(ni_priv->interrupt_urb, GFP_KERNEL);
+ if (retval) {
+- dev_err(&usb_dev->dev, "%s: failed to resubmit interrupt urb, retval=%i\n",
+- __func__, retval);
++ dev_err(&usb_dev->dev, "resume failed to resubmit interrupt urb, retval=%i\n",
++ retval);
+ mutex_unlock(&ni_priv->interrupt_transfer_lock);
+ mutex_unlock(&ni_usb_hotplug_lock);
+ return retval;
+ }
+ mutex_unlock(&ni_priv->interrupt_transfer_lock);
+ } else {
+- dev_err(&usb_dev->dev, "%s: bug! int urb not set up\n", __func__);
++ dev_err(&usb_dev->dev, "bug! resume int urb not set up\n");
+ mutex_unlock(&ni_usb_hotplug_lock);
+ return -EINVAL;
+ }
+@@ -2600,9 +2608,9 @@ static int ni_usb_driver_resume(struct usb_interface *interface)
+ if (ni_priv->ren_state)
+ ni_usb_remote_enable(board, 1);
+
+- dev_info(&usb_dev->dev,
+- "bus %d dev num %d gpib minor %d, ni usb interface %i resumed\n",
+- usb_dev->bus->busnum, usb_dev->devnum, board->minor, i);
++ dev_dbg(&usb_dev->dev,
++ "bus %d dev num %d gpib%d, interface %i resumed\n",
++ usb_dev->bus->busnum, usb_dev->devnum, board->minor, i);
+ }
+
+ mutex_unlock(&ni_usb_hotplug_lock);
+@@ -2610,7 +2618,7 @@ static int ni_usb_driver_resume(struct usb_interface *interface)
+ }
+
+ static struct usb_driver ni_usb_bus_driver = {
+- .name = "ni_usb_gpib",
++ .name = DRV_NAME,
+ .probe = ni_usb_driver_probe,
+ .disconnect = ni_usb_driver_disconnect,
+ .suspend = ni_usb_driver_suspend,
+@@ -2623,19 +2631,18 @@ static int __init ni_usb_init_module(void)
+ int i;
+ int ret;
+
+- pr_info("ni_usb_gpib driver loading\n");
+ for (i = 0; i < MAX_NUM_NI_USB_INTERFACES; i++)
+ ni_usb_driver_interfaces[i] = NULL;
+
+ ret = usb_register(&ni_usb_bus_driver);
+ if (ret) {
+- pr_err("ni_usb_gpib: usb_register failed: error = %d\n", ret);
++ pr_err("usb_register failed: error = %d\n", ret);
+ return ret;
+ }
+
+ ret = gpib_register_driver(&ni_usb_gpib_interface, THIS_MODULE);
+ if (ret) {
+- pr_err("ni_usb_gpib: gpib_register_driver failed: error = %d\n", ret);
++ pr_err("gpib_register_driver failed: error = %d\n", ret);
+ return ret;
+ }
+
+@@ -2644,7 +2651,6 @@ static int __init ni_usb_init_module(void)
+
+ static void __exit ni_usb_exit_module(void)
+ {
+- pr_info("ni_usb_gpib driver unloading\n");
+ gpib_unregister_driver(&ni_usb_gpib_interface);
+ usb_deregister(&ni_usb_bus_driver);
+ }
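Editor's note: the ni_usb_shutdown_hardware() hunk above replaces the incrementally built register list and its runtime overrun check with direct initialization of a fixed-size array passed via ARRAY_SIZE(). A minimal user-space sketch of the same pattern follows; the struct and the consuming function are stand-ins, not the driver's real types.

    #include <stdio.h>
    #include <stddef.h>

    #define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

    struct reg_write {          /* stand-in for struct ni_usb_register */
            int device;
            int address;
            int value;
    };

    /* stand-in consumer: just prints what would be written */
    static int write_registers(const struct reg_write *w, size_t n)
    {
            for (size_t i = 0; i < n; i++)
                    printf("dev %d: [0x%02x] <- 0x%02x\n",
                           w[i].device, w[i].address, w[i].value);
            return 0;
    }

    int main(void)
    {
            /* fixed-size list: the compiler guarantees the bounds, so no
             * runtime "buffer overrun" check is needed */
            struct reg_write writes[2] = {
                    { .device = 1, .address = 0x05, .value = 0x40 },
                    { .device = 3, .address = 0x10, .value = 0x00 },
            };

            return write_registers(writes, ARRAY_SIZE(writes));
    }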
+diff --git a/drivers/staging/rtl8723bs/Kconfig b/drivers/staging/rtl8723bs/Kconfig
+index 8d48c61961a6b7..353e6ee2c14508 100644
+--- a/drivers/staging/rtl8723bs/Kconfig
++++ b/drivers/staging/rtl8723bs/Kconfig
+@@ -4,6 +4,7 @@ config RTL8723BS
+ depends on WLAN && MMC && CFG80211
+ depends on m
+ select CRYPTO
++ select CRYPTO_LIB_AES
+ select CRYPTO_LIB_ARC4
+ help
+ This option enables support for RTL8723BS SDIO drivers, such as
+diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
+index a4e83e5d619bc9..0c7ea2d0ee85e8 100644
+--- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
++++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
+@@ -308,6 +308,20 @@ static struct vchiq_arm_state *vchiq_platform_get_arm_state(struct vchiq_state *
+ return (struct vchiq_arm_state *)state->platform_state;
+ }
+
++static void
++vchiq_platform_uninit(struct vchiq_drv_mgmt *mgmt)
++{
++ struct vchiq_arm_state *arm_state;
++
++ kthread_stop(mgmt->state.sync_thread);
++ kthread_stop(mgmt->state.recycle_thread);
++ kthread_stop(mgmt->state.slot_handler_thread);
++
++ arm_state = vchiq_platform_get_arm_state(&mgmt->state);
++ if (!IS_ERR_OR_NULL(arm_state->ka_thread))
++ kthread_stop(arm_state->ka_thread);
++}
++
+ void vchiq_dump_platform_state(struct seq_file *f)
+ {
+ seq_puts(f, " Platform: 2835 (VC master)\n");
+@@ -1386,8 +1400,6 @@ static int vchiq_probe(struct platform_device *pdev)
+ return ret;
+ }
+
+- vchiq_debugfs_init(&mgmt->state);
+-
+ dev_dbg(&pdev->dev, "arm: platform initialised - version %d (min %d)\n",
+ VCHIQ_VERSION, VCHIQ_VERSION_MIN);
+
+@@ -1398,9 +1410,12 @@ static int vchiq_probe(struct platform_device *pdev)
+ ret = vchiq_register_chrdev(&pdev->dev);
+ if (ret) {
+ dev_err(&pdev->dev, "arm: Failed to initialize vchiq cdev\n");
++ vchiq_platform_uninit(mgmt);
+ return ret;
+ }
+
++ vchiq_debugfs_init(&mgmt->state);
++
+ bcm2835_audio = vchiq_device_register(&pdev->dev, "bcm2835-audio");
+ bcm2835_camera = vchiq_device_register(&pdev->dev, "bcm2835-camera");
+
+@@ -1410,19 +1425,12 @@ static int vchiq_probe(struct platform_device *pdev)
+ static void vchiq_remove(struct platform_device *pdev)
+ {
+ struct vchiq_drv_mgmt *mgmt = dev_get_drvdata(&pdev->dev);
+- struct vchiq_arm_state *arm_state;
+
+ vchiq_device_unregister(bcm2835_audio);
+ vchiq_device_unregister(bcm2835_camera);
+ vchiq_debugfs_deinit();
+ vchiq_deregister_chrdev();
+-
+- kthread_stop(mgmt->state.sync_thread);
+- kthread_stop(mgmt->state.recycle_thread);
+- kthread_stop(mgmt->state.slot_handler_thread);
+-
+- arm_state = vchiq_platform_get_arm_state(&mgmt->state);
+- kthread_stop(arm_state->ka_thread);
++ vchiq_platform_uninit(mgmt);
+ }
+
+ static struct platform_driver vchiq_driver = {
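Editor's note: the vchiq hunks factor the kthread teardown into vchiq_platform_uninit() so the probe error path (failed chrdev registration) and vchiq_remove() share it, and they delay debugfs setup until the character device exists. A small user-space sketch of that error-path structure, with stand-in names:

    #include <stdio.h>

    /* stand-in resources; names are illustrative, not the driver's API */
    static int start_threads(void)        { puts("threads started"); return 0; }
    static void stop_threads(void)        { puts("threads stopped"); }
    static int register_chrdev_stub(void) { return -1; /* simulate failure */ }

    /* one teardown helper shared by the probe error path and remove() */
    static void platform_uninit(void)
    {
            stop_threads();
    }

    static int probe(void)
    {
            int ret = start_threads();
            if (ret)
                    return ret;

            ret = register_chrdev_stub();
            if (ret) {
                    platform_uninit();      /* undo what probe already set up */
                    return ret;
            }

            /* debugfs entries and child devices would only be created past
             * this point, so a failed probe never leaves them dangling */
            return 0;
    }

    int main(void)
    {
            if (probe())
                    puts("probe failed, threads were stopped on the error path");
            return 0;
    }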
+diff --git a/drivers/target/loopback/tcm_loop.c b/drivers/target/loopback/tcm_loop.c
+index 761c511aea07c9..c7b7da6297418f 100644
+--- a/drivers/target/loopback/tcm_loop.c
++++ b/drivers/target/loopback/tcm_loop.c
+@@ -176,7 +176,7 @@ static int tcm_loop_queuecommand(struct Scsi_Host *sh, struct scsi_cmnd *sc)
+
+ memset(tl_cmd, 0, sizeof(*tl_cmd));
+ tl_cmd->sc = sc;
+- tl_cmd->sc_cmd_tag = scsi_cmd_to_rq(sc)->tag;
++ tl_cmd->sc_cmd_tag = blk_mq_unique_tag(scsi_cmd_to_rq(sc));
+
+ tcm_loop_target_queue_cmd(tl_cmd);
+ return 0;
+@@ -242,7 +242,8 @@ static int tcm_loop_abort_task(struct scsi_cmnd *sc)
+ tl_hba = *(struct tcm_loop_hba **)shost_priv(sc->device->host);
+ tl_tpg = &tl_hba->tl_hba_tpgs[sc->device->id];
+ ret = tcm_loop_issue_tmr(tl_tpg, sc->device->lun,
+- scsi_cmd_to_rq(sc)->tag, TMR_ABORT_TASK);
++ blk_mq_unique_tag(scsi_cmd_to_rq(sc)),
++ TMR_ABORT_TASK);
+ return (ret == TMR_FUNCTION_COMPLETE) ? SUCCESS : FAILED;
+ }
+
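Editor's note: the tcm_loop change switches from the per-hardware-queue request tag to blk_mq_unique_tag(), because raw tags repeat across hardware queues. The sketch below illustrates, with an assumed 16-bit split that mirrors but does not quote the kernel's encoding, why packing the queue index into the upper bits yields a host-wide unique value.

    #include <stdio.h>
    #include <stdint.h>

    /* Per-hw-queue tags restart at 0 on every queue, so the raw tag alone
     * does not identify a request host-wide.  Packing the hw queue number
     * into the upper bits makes the value unique; the 16-bit shift is
     * illustrative. */
    static uint32_t unique_tag(uint16_t hwq, uint16_t tag)
    {
            return ((uint32_t)hwq << 16) | tag;
    }

    int main(void)
    {
            /* same per-queue tag, different queues -> different unique tags */
            printf("hwq0 tag5 -> %#x\n", unique_tag(0, 5));
            printf("hwq1 tag5 -> %#x\n", unique_tag(1, 5));
            return 0;
    }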
+diff --git a/drivers/thermal/intel/int340x_thermal/int3402_thermal.c b/drivers/thermal/intel/int340x_thermal/int3402_thermal.c
+index 543b03960e9923..57b90005888a31 100644
+--- a/drivers/thermal/intel/int340x_thermal/int3402_thermal.c
++++ b/drivers/thermal/intel/int340x_thermal/int3402_thermal.c
+@@ -45,6 +45,9 @@ static int int3402_thermal_probe(struct platform_device *pdev)
+ struct int3402_thermal_data *d;
+ int ret;
+
++ if (!adev)
++ return -ENODEV;
++
+ if (!acpi_has_method(adev->handle, "_TMP"))
+ return -ENODEV;
+
+diff --git a/drivers/tty/n_tty.c b/drivers/tty/n_tty.c
+index 5e9ca4376d686e..94fa981081fdb5 100644
+--- a/drivers/tty/n_tty.c
++++ b/drivers/tty/n_tty.c
+@@ -486,7 +486,8 @@ static int do_output_char(u8 c, struct tty_struct *tty, int space)
+ static int process_output(u8 c, struct tty_struct *tty)
+ {
+ struct n_tty_data *ldata = tty->disc_data;
+- int space, retval;
++ unsigned int space;
++ int retval;
+
+ mutex_lock(&ldata->output_lock);
+
+@@ -522,16 +523,16 @@ static ssize_t process_output_block(struct tty_struct *tty,
+ const u8 *buf, unsigned int nr)
+ {
+ struct n_tty_data *ldata = tty->disc_data;
+- int space;
+- int i;
++ unsigned int space;
++ int i;
+ const u8 *cp;
+
+ mutex_lock(&ldata->output_lock);
+
+ space = tty_write_room(tty);
+- if (space <= 0) {
++ if (space == 0) {
+ mutex_unlock(&ldata->output_lock);
+- return space;
++ return 0;
+ }
+ if (nr > space)
+ nr = space;
+@@ -696,7 +697,7 @@ static int n_tty_process_echo_ops(struct tty_struct *tty, size_t *tail,
+ static size_t __process_echoes(struct tty_struct *tty)
+ {
+ struct n_tty_data *ldata = tty->disc_data;
+- int space, old_space;
++ unsigned int space, old_space;
+ size_t tail;
+ u8 c;
+
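Editor's note: the n_tty hunks track the fact that tty_write_room() returns an unsigned count: storing it in a signed int and testing "<= 0" suggests a negative value is possible when it is not. A tiny sketch of the before/after checks, using a stand-in for tty_write_room():

    #include <stdio.h>

    /* stand-in for tty_write_room(): returns an unsigned count */
    static unsigned int write_room(void)
    {
            return 0;       /* no room */
    }

    int main(void)
    {
            /* old style: unsigned result squeezed into a signed int */
            int space_signed = write_room();
            if (space_signed <= 0)
                    puts("signed check: '<= 0' can only ever mean zero here");

            /* new style: keep the value unsigned and test for exactly zero */
            unsigned int space = write_room();
            if (space == 0)
                    puts("unsigned check: no room, return 0 to the caller");

            return 0;
    }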
+diff --git a/drivers/tty/serial/fsl_lpuart.c b/drivers/tty/serial/fsl_lpuart.c
+index 79623b2482a04c..9fdb66f2fcb812 100644
+--- a/drivers/tty/serial/fsl_lpuart.c
++++ b/drivers/tty/serial/fsl_lpuart.c
+@@ -441,7 +441,7 @@ static unsigned int lpuart_get_baud_clk_rate(struct lpuart_port *sport)
+
+ static void lpuart_stop_tx(struct uart_port *port)
+ {
+- unsigned char temp;
++ u8 temp;
+
+ temp = readb(port->membase + UARTCR2);
+ temp &= ~(UARTCR2_TIE | UARTCR2_TCIE);
+@@ -450,7 +450,7 @@ static void lpuart_stop_tx(struct uart_port *port)
+
+ static void lpuart32_stop_tx(struct uart_port *port)
+ {
+- unsigned long temp;
++ u32 temp;
+
+ temp = lpuart32_read(port, UARTCTRL);
+ temp &= ~(UARTCTRL_TIE | UARTCTRL_TCIE);
+@@ -459,7 +459,7 @@ static void lpuart32_stop_tx(struct uart_port *port)
+
+ static void lpuart_stop_rx(struct uart_port *port)
+ {
+- unsigned char temp;
++ u8 temp;
+
+ temp = readb(port->membase + UARTCR2);
+ writeb(temp & ~UARTCR2_RE, port->membase + UARTCR2);
+@@ -467,7 +467,7 @@ static void lpuart_stop_rx(struct uart_port *port)
+
+ static void lpuart32_stop_rx(struct uart_port *port)
+ {
+- unsigned long temp;
++ u32 temp;
+
+ temp = lpuart32_read(port, UARTCTRL);
+ lpuart32_write(port, temp & ~UARTCTRL_RE, UARTCTRL);
+@@ -581,7 +581,7 @@ static int lpuart_dma_tx_request(struct uart_port *port)
+ ret = dmaengine_slave_config(sport->dma_tx_chan, &dma_tx_sconfig);
+
+ if (ret) {
+- dev_err(sport->port.dev,
++ dev_err(port->dev,
+ "DMA slave config failed, err = %d\n", ret);
+ return ret;
+ }
+@@ -611,13 +611,13 @@ static void lpuart_flush_buffer(struct uart_port *port)
+ }
+
+ if (lpuart_is_32(sport)) {
+- val = lpuart32_read(&sport->port, UARTFIFO);
++ val = lpuart32_read(port, UARTFIFO);
+ val |= UARTFIFO_TXFLUSH | UARTFIFO_RXFLUSH;
+- lpuart32_write(&sport->port, val, UARTFIFO);
++ lpuart32_write(port, val, UARTFIFO);
+ } else {
+- val = readb(sport->port.membase + UARTCFIFO);
++ val = readb(port->membase + UARTCFIFO);
+ val |= UARTCFIFO_TXFLUSH | UARTCFIFO_RXFLUSH;
+- writeb(val, sport->port.membase + UARTCFIFO);
++ writeb(val, port->membase + UARTCFIFO);
+ }
+ }
+
+@@ -639,38 +639,36 @@ static void lpuart32_wait_bit_set(struct uart_port *port, unsigned int offset,
+
+ static int lpuart_poll_init(struct uart_port *port)
+ {
+- struct lpuart_port *sport = container_of(port,
+- struct lpuart_port, port);
+ unsigned long flags;
+- unsigned char temp;
++ u8 temp;
+
+- sport->port.fifosize = 0;
++ port->fifosize = 0;
+
+- uart_port_lock_irqsave(&sport->port, &flags);
++ uart_port_lock_irqsave(port, &flags);
+ /* Disable Rx & Tx */
+- writeb(0, sport->port.membase + UARTCR2);
++ writeb(0, port->membase + UARTCR2);
+
+- temp = readb(sport->port.membase + UARTPFIFO);
++ temp = readb(port->membase + UARTPFIFO);
+ /* Enable Rx and Tx FIFO */
+ writeb(temp | UARTPFIFO_RXFE | UARTPFIFO_TXFE,
+- sport->port.membase + UARTPFIFO);
++ port->membase + UARTPFIFO);
+
+ /* flush Tx and Rx FIFO */
+ writeb(UARTCFIFO_TXFLUSH | UARTCFIFO_RXFLUSH,
+- sport->port.membase + UARTCFIFO);
++ port->membase + UARTCFIFO);
+
+ /* explicitly clear RDRF */
+- if (readb(sport->port.membase + UARTSR1) & UARTSR1_RDRF) {
+- readb(sport->port.membase + UARTDR);
+- writeb(UARTSFIFO_RXUF, sport->port.membase + UARTSFIFO);
++ if (readb(port->membase + UARTSR1) & UARTSR1_RDRF) {
++ readb(port->membase + UARTDR);
++ writeb(UARTSFIFO_RXUF, port->membase + UARTSFIFO);
+ }
+
+- writeb(0, sport->port.membase + UARTTWFIFO);
+- writeb(1, sport->port.membase + UARTRWFIFO);
++ writeb(0, port->membase + UARTTWFIFO);
++ writeb(1, port->membase + UARTRWFIFO);
+
+ /* Enable Rx and Tx */
+- writeb(UARTCR2_RE | UARTCR2_TE, sport->port.membase + UARTCR2);
+- uart_port_unlock_irqrestore(&sport->port, flags);
++ writeb(UARTCR2_RE | UARTCR2_TE, port->membase + UARTCR2);
++ uart_port_unlock_irqrestore(port, flags);
+
+ return 0;
+ }
+@@ -693,33 +691,32 @@ static int lpuart_poll_get_char(struct uart_port *port)
+ static int lpuart32_poll_init(struct uart_port *port)
+ {
+ unsigned long flags;
+- struct lpuart_port *sport = container_of(port, struct lpuart_port, port);
+ u32 temp;
+
+- sport->port.fifosize = 0;
++ port->fifosize = 0;
+
+- uart_port_lock_irqsave(&sport->port, &flags);
++ uart_port_lock_irqsave(port, &flags);
+
+ /* Disable Rx & Tx */
+- lpuart32_write(&sport->port, 0, UARTCTRL);
++ lpuart32_write(port, 0, UARTCTRL);
+
+- temp = lpuart32_read(&sport->port, UARTFIFO);
++ temp = lpuart32_read(port, UARTFIFO);
+
+ /* Enable Rx and Tx FIFO */
+- lpuart32_write(&sport->port, temp | UARTFIFO_RXFE | UARTFIFO_TXFE, UARTFIFO);
++ lpuart32_write(port, temp | UARTFIFO_RXFE | UARTFIFO_TXFE, UARTFIFO);
+
+ /* flush Tx and Rx FIFO */
+- lpuart32_write(&sport->port, UARTFIFO_TXFLUSH | UARTFIFO_RXFLUSH, UARTFIFO);
++ lpuart32_write(port, UARTFIFO_TXFLUSH | UARTFIFO_RXFLUSH, UARTFIFO);
+
+ /* explicitly clear RDRF */
+- if (lpuart32_read(&sport->port, UARTSTAT) & UARTSTAT_RDRF) {
+- lpuart32_read(&sport->port, UARTDATA);
+- lpuart32_write(&sport->port, UARTFIFO_RXUF, UARTFIFO);
++ if (lpuart32_read(port, UARTSTAT) & UARTSTAT_RDRF) {
++ lpuart32_read(port, UARTDATA);
++ lpuart32_write(port, UARTFIFO_RXUF, UARTFIFO);
+ }
+
+ /* Enable Rx and Tx */
+- lpuart32_write(&sport->port, UARTCTRL_RE | UARTCTRL_TE, UARTCTRL);
+- uart_port_unlock_irqrestore(&sport->port, flags);
++ lpuart32_write(port, UARTCTRL_RE | UARTCTRL_TE, UARTCTRL);
++ uart_port_unlock_irqrestore(port, flags);
+
+ return 0;
+ }
+@@ -752,7 +749,7 @@ static inline void lpuart_transmit_buffer(struct lpuart_port *sport)
+ static inline void lpuart32_transmit_buffer(struct lpuart_port *sport)
+ {
+ struct tty_port *tport = &sport->port.state->port;
+- unsigned long txcnt;
++ u32 txcnt;
+ unsigned char c;
+
+ if (sport->port.x_char) {
+@@ -789,7 +786,7 @@ static void lpuart_start_tx(struct uart_port *port)
+ {
+ struct lpuart_port *sport = container_of(port,
+ struct lpuart_port, port);
+- unsigned char temp;
++ u8 temp;
+
+ temp = readb(port->membase + UARTCR2);
+ writeb(temp | UARTCR2_TIE, port->membase + UARTCR2);
+@@ -806,7 +803,7 @@ static void lpuart_start_tx(struct uart_port *port)
+ static void lpuart32_start_tx(struct uart_port *port)
+ {
+ struct lpuart_port *sport = container_of(port, struct lpuart_port, port);
+- unsigned long temp;
++ u32 temp;
+
+ if (sport->lpuart_dma_tx_use) {
+ if (!lpuart_stopped_or_empty(port))
+@@ -839,8 +836,8 @@ static unsigned int lpuart_tx_empty(struct uart_port *port)
+ {
+ struct lpuart_port *sport = container_of(port,
+ struct lpuart_port, port);
+- unsigned char sr1 = readb(port->membase + UARTSR1);
+- unsigned char sfifo = readb(port->membase + UARTSFIFO);
++ u8 sr1 = readb(port->membase + UARTSR1);
++ u8 sfifo = readb(port->membase + UARTSFIFO);
+
+ if (sport->dma_tx_in_progress)
+ return 0;
+@@ -855,9 +852,9 @@ static unsigned int lpuart32_tx_empty(struct uart_port *port)
+ {
+ struct lpuart_port *sport = container_of(port,
+ struct lpuart_port, port);
+- unsigned long stat = lpuart32_read(port, UARTSTAT);
+- unsigned long sfifo = lpuart32_read(port, UARTFIFO);
+- unsigned long ctrl = lpuart32_read(port, UARTCTRL);
++ u32 stat = lpuart32_read(port, UARTSTAT);
++ u32 sfifo = lpuart32_read(port, UARTFIFO);
++ u32 ctrl = lpuart32_read(port, UARTCTRL);
+
+ if (sport->dma_tx_in_progress)
+ return 0;
+@@ -884,7 +881,7 @@ static void lpuart_rxint(struct lpuart_port *sport)
+ {
+ unsigned int flg, ignored = 0, overrun = 0;
+ struct tty_port *port = &sport->port.state->port;
+- unsigned char rx, sr;
++ u8 rx, sr;
+
+ uart_port_lock(&sport->port);
+
+@@ -961,7 +958,7 @@ static void lpuart32_rxint(struct lpuart_port *sport)
+ {
+ unsigned int flg, ignored = 0;
+ struct tty_port *port = &sport->port.state->port;
+- unsigned long rx, sr;
++ u32 rx, sr;
+ bool is_break;
+
+ uart_port_lock(&sport->port);
+@@ -1039,7 +1036,7 @@ static void lpuart32_rxint(struct lpuart_port *sport)
+ static irqreturn_t lpuart_int(int irq, void *dev_id)
+ {
+ struct lpuart_port *sport = dev_id;
+- unsigned char sts;
++ u8 sts;
+
+ sts = readb(sport->port.membase + UARTSR1);
+
+@@ -1113,7 +1110,7 @@ static void lpuart_copy_rx_to_tty(struct lpuart_port *sport)
+ int count, copied;
+
+ if (lpuart_is_32(sport)) {
+- unsigned long sr = lpuart32_read(&sport->port, UARTSTAT);
++ u32 sr = lpuart32_read(&sport->port, UARTSTAT);
+
+ if (sr & (UARTSTAT_PE | UARTSTAT_FE)) {
+ /* Clear the error flags */
+@@ -1125,10 +1122,10 @@ static void lpuart_copy_rx_to_tty(struct lpuart_port *sport)
+ sport->port.icount.frame++;
+ }
+ } else {
+- unsigned char sr = readb(sport->port.membase + UARTSR1);
++ u8 sr = readb(sport->port.membase + UARTSR1);
+
+ if (sr & (UARTSR1_PE | UARTSR1_FE)) {
+- unsigned char cr2;
++ u8 cr2;
+
+ /* Disable receiver during this operation... */
+ cr2 = readb(sport->port.membase + UARTCR2);
+@@ -1279,7 +1276,7 @@ static void lpuart32_dma_idleint(struct lpuart_port *sport)
+ static irqreturn_t lpuart32_int(int irq, void *dev_id)
+ {
+ struct lpuart_port *sport = dev_id;
+- unsigned long sts, rxcount;
++ u32 sts, rxcount;
+
+ sts = lpuart32_read(&sport->port, UARTSTAT);
+ rxcount = lpuart32_read(&sport->port, UARTWATER);
+@@ -1411,12 +1408,12 @@ static inline int lpuart_start_rx_dma(struct lpuart_port *sport)
+ dma_async_issue_pending(chan);
+
+ if (lpuart_is_32(sport)) {
+- unsigned long temp = lpuart32_read(&sport->port, UARTBAUD);
++ u32 temp = lpuart32_read(&sport->port, UARTBAUD);
+
+ lpuart32_write(&sport->port, temp | UARTBAUD_RDMAE, UARTBAUD);
+
+ if (sport->dma_idle_int) {
+- unsigned long ctrl = lpuart32_read(&sport->port, UARTCTRL);
++ u32 ctrl = lpuart32_read(&sport->port, UARTCTRL);
+
+ lpuart32_write(&sport->port, ctrl | UARTCTRL_ILIE, UARTCTRL);
+ }
+@@ -1449,12 +1446,9 @@ static void lpuart_dma_rx_free(struct uart_port *port)
+ static int lpuart_config_rs485(struct uart_port *port, struct ktermios *termios,
+ struct serial_rs485 *rs485)
+ {
+- struct lpuart_port *sport = container_of(port,
+- struct lpuart_port, port);
+-
+- u8 modem = readb(sport->port.membase + UARTMODEM) &
++ u8 modem = readb(port->membase + UARTMODEM) &
+ ~(UARTMODEM_TXRTSPOL | UARTMODEM_TXRTSE);
+- writeb(modem, sport->port.membase + UARTMODEM);
++ writeb(modem, port->membase + UARTMODEM);
+
+ if (rs485->flags & SER_RS485_ENABLED) {
+ /* Enable auto RS-485 RTS mode */
+@@ -1472,32 +1466,29 @@ static int lpuart_config_rs485(struct uart_port *port, struct ktermios *termios,
+ modem &= ~UARTMODEM_TXRTSPOL;
+ }
+
+- writeb(modem, sport->port.membase + UARTMODEM);
++ writeb(modem, port->membase + UARTMODEM);
+ return 0;
+ }
+
+ static int lpuart32_config_rs485(struct uart_port *port, struct ktermios *termios,
+ struct serial_rs485 *rs485)
+ {
+- struct lpuart_port *sport = container_of(port,
+- struct lpuart_port, port);
+-
+- unsigned long modem = lpuart32_read(&sport->port, UARTMODIR)
++ u32 modem = lpuart32_read(port, UARTMODIR)
+ & ~(UARTMODIR_TXRTSPOL | UARTMODIR_TXRTSE);
+ u32 ctrl;
+
+ /* TXRTSE and TXRTSPOL only can be changed when transmitter is disabled. */
+- ctrl = lpuart32_read(&sport->port, UARTCTRL);
++ ctrl = lpuart32_read(port, UARTCTRL);
+ if (ctrl & UARTCTRL_TE) {
+ /* wait for the transmit engine to complete */
+- lpuart32_wait_bit_set(&sport->port, UARTSTAT, UARTSTAT_TC);
+- lpuart32_write(&sport->port, ctrl & ~UARTCTRL_TE, UARTCTRL);
++ lpuart32_wait_bit_set(port, UARTSTAT, UARTSTAT_TC);
++ lpuart32_write(port, ctrl & ~UARTCTRL_TE, UARTCTRL);
+
+- while (lpuart32_read(&sport->port, UARTCTRL) & UARTCTRL_TE)
++ while (lpuart32_read(port, UARTCTRL) & UARTCTRL_TE)
+ cpu_relax();
+ }
+
+- lpuart32_write(&sport->port, modem, UARTMODIR);
++ lpuart32_write(port, modem, UARTMODIR);
+
+ if (rs485->flags & SER_RS485_ENABLED) {
+ /* Enable auto RS-485 RTS mode */
+@@ -1515,10 +1506,10 @@ static int lpuart32_config_rs485(struct uart_port *port, struct ktermios *termio
+ modem &= ~UARTMODIR_TXRTSPOL;
+ }
+
+- lpuart32_write(&sport->port, modem, UARTMODIR);
++ lpuart32_write(port, modem, UARTMODIR);
+
+ if (ctrl & UARTCTRL_TE)
+- lpuart32_write(&sport->port, ctrl, UARTCTRL);
++ lpuart32_write(port, ctrl, UARTCTRL);
+
+ return 0;
+ }
+@@ -1577,7 +1568,7 @@ static void lpuart32_set_mctrl(struct uart_port *port, unsigned int mctrl)
+
+ static void lpuart_break_ctl(struct uart_port *port, int break_state)
+ {
+- unsigned char temp;
++ u8 temp;
+
+ temp = readb(port->membase + UARTCR2) & ~UARTCR2_SBK;
+
+@@ -1589,7 +1580,7 @@ static void lpuart_break_ctl(struct uart_port *port, int break_state)
+
+ static void lpuart32_break_ctl(struct uart_port *port, int break_state)
+ {
+- unsigned long temp;
++ u32 temp;
+
+ temp = lpuart32_read(port, UARTCTRL);
+
+@@ -1623,8 +1614,7 @@ static void lpuart32_break_ctl(struct uart_port *port, int break_state)
+
+ static void lpuart_setup_watermark(struct lpuart_port *sport)
+ {
+- unsigned char val, cr2;
+- unsigned char cr2_saved;
++ u8 val, cr2, cr2_saved;
+
+ cr2 = readb(sport->port.membase + UARTCR2);
+ cr2_saved = cr2;
+@@ -1657,7 +1647,7 @@ static void lpuart_setup_watermark(struct lpuart_port *sport)
+
+ static void lpuart_setup_watermark_enable(struct lpuart_port *sport)
+ {
+- unsigned char cr2;
++ u8 cr2;
+
+ lpuart_setup_watermark(sport);
+
+@@ -1668,8 +1658,7 @@ static void lpuart_setup_watermark_enable(struct lpuart_port *sport)
+
+ static void lpuart32_setup_watermark(struct lpuart_port *sport)
+ {
+- unsigned long val, ctrl;
+- unsigned long ctrl_saved;
++ u32 val, ctrl, ctrl_saved;
+
+ ctrl = lpuart32_read(&sport->port, UARTCTRL);
+ ctrl_saved = ctrl;
+@@ -1778,7 +1767,7 @@ static void lpuart_tx_dma_startup(struct lpuart_port *sport)
+ static void lpuart_rx_dma_startup(struct lpuart_port *sport)
+ {
+ int ret;
+- unsigned char cr3;
++ u8 cr3;
+
+ if (uart_console(&sport->port))
+ goto err;
+@@ -1828,14 +1817,14 @@ static void lpuart_hw_setup(struct lpuart_port *sport)
+ static int lpuart_startup(struct uart_port *port)
+ {
+ struct lpuart_port *sport = container_of(port, struct lpuart_port, port);
+- unsigned char temp;
++ u8 temp;
+
+ /* determine FIFO size and enable FIFO mode */
+- temp = readb(sport->port.membase + UARTPFIFO);
++ temp = readb(port->membase + UARTPFIFO);
+
+ sport->txfifo_size = UARTFIFO_DEPTH((temp >> UARTPFIFO_TXSIZE_OFF) &
+ UARTPFIFO_FIFOSIZE_MASK);
+- sport->port.fifosize = sport->txfifo_size;
++ port->fifosize = sport->txfifo_size;
+
+ sport->rxfifo_size = UARTFIFO_DEPTH((temp >> UARTPFIFO_RXSIZE_OFF) &
+ UARTPFIFO_FIFOSIZE_MASK);
+@@ -1848,7 +1837,7 @@ static int lpuart_startup(struct uart_port *port)
+
+ static void lpuart32_hw_disable(struct lpuart_port *sport)
+ {
+- unsigned long temp;
++ u32 temp;
+
+ temp = lpuart32_read(&sport->port, UARTCTRL);
+ temp &= ~(UARTCTRL_RIE | UARTCTRL_ILIE | UARTCTRL_RE |
+@@ -1858,7 +1847,7 @@ static void lpuart32_hw_disable(struct lpuart_port *sport)
+
+ static void lpuart32_configure(struct lpuart_port *sport)
+ {
+- unsigned long temp;
++ u32 temp;
+
+ temp = lpuart32_read(&sport->port, UARTCTRL);
+ if (!sport->lpuart_dma_rx_use)
+@@ -1888,14 +1877,14 @@ static void lpuart32_hw_setup(struct lpuart_port *sport)
+ static int lpuart32_startup(struct uart_port *port)
+ {
+ struct lpuart_port *sport = container_of(port, struct lpuart_port, port);
+- unsigned long temp;
++ u32 temp;
+
+ /* determine FIFO size */
+- temp = lpuart32_read(&sport->port, UARTFIFO);
++ temp = lpuart32_read(port, UARTFIFO);
+
+ sport->txfifo_size = UARTFIFO_DEPTH((temp >> UARTFIFO_TXSIZE_OFF) &
+ UARTFIFO_FIFOSIZE_MASK);
+- sport->port.fifosize = sport->txfifo_size;
++ port->fifosize = sport->txfifo_size;
+
+ sport->rxfifo_size = UARTFIFO_DEPTH((temp >> UARTFIFO_RXSIZE_OFF) &
+ UARTFIFO_FIFOSIZE_MASK);
+@@ -1908,7 +1897,7 @@ static int lpuart32_startup(struct uart_port *port)
+ if (is_layerscape_lpuart(sport)) {
+ sport->rxfifo_size = 16;
+ sport->txfifo_size = 16;
+- sport->port.fifosize = sport->txfifo_size;
++ port->fifosize = sport->txfifo_size;
+ }
+
+ lpuart_request_dma(sport);
+@@ -1942,7 +1931,7 @@ static void lpuart_dma_shutdown(struct lpuart_port *sport)
+ static void lpuart_shutdown(struct uart_port *port)
+ {
+ struct lpuart_port *sport = container_of(port, struct lpuart_port, port);
+- unsigned char temp;
++ u8 temp;
+ unsigned long flags;
+
+ uart_port_lock_irqsave(port, &flags);
+@@ -1962,14 +1951,14 @@ static void lpuart32_shutdown(struct uart_port *port)
+ {
+ struct lpuart_port *sport =
+ container_of(port, struct lpuart_port, port);
+- unsigned long temp;
++ u32 temp;
+ unsigned long flags;
+
+ uart_port_lock_irqsave(port, &flags);
+
+ /* clear status */
+- temp = lpuart32_read(&sport->port, UARTSTAT);
+- lpuart32_write(&sport->port, temp, UARTSTAT);
++ temp = lpuart32_read(port, UARTSTAT);
++ lpuart32_write(port, temp, UARTSTAT);
+
+ /* disable Rx/Tx DMA */
+ temp = lpuart32_read(port, UARTBAUD);
+@@ -1998,17 +1987,17 @@ lpuart_set_termios(struct uart_port *port, struct ktermios *termios,
+ {
+ struct lpuart_port *sport = container_of(port, struct lpuart_port, port);
+ unsigned long flags;
+- unsigned char cr1, old_cr1, old_cr2, cr3, cr4, bdh, modem;
++ u8 cr1, old_cr1, old_cr2, cr3, cr4, bdh, modem;
+ unsigned int baud;
+ unsigned int old_csize = old ? old->c_cflag & CSIZE : CS8;
+ unsigned int sbr, brfa;
+
+- cr1 = old_cr1 = readb(sport->port.membase + UARTCR1);
+- old_cr2 = readb(sport->port.membase + UARTCR2);
+- cr3 = readb(sport->port.membase + UARTCR3);
+- cr4 = readb(sport->port.membase + UARTCR4);
+- bdh = readb(sport->port.membase + UARTBDH);
+- modem = readb(sport->port.membase + UARTMODEM);
++ cr1 = old_cr1 = readb(port->membase + UARTCR1);
++ old_cr2 = readb(port->membase + UARTCR2);
++ cr3 = readb(port->membase + UARTCR3);
++ cr4 = readb(port->membase + UARTCR4);
++ bdh = readb(port->membase + UARTBDH);
++ modem = readb(port->membase + UARTMODEM);
+ /*
+ * only support CS8 and CS7, and for CS7 must enable PE.
+ * supported mode:
+@@ -2040,7 +2029,7 @@ lpuart_set_termios(struct uart_port *port, struct ktermios *termios,
+ * When auto RS-485 RTS mode is enabled,
+ * hardware flow control need to be disabled.
+ */
+- if (sport->port.rs485.flags & SER_RS485_ENABLED)
++ if (port->rs485.flags & SER_RS485_ENABLED)
+ termios->c_cflag &= ~CRTSCTS;
+
+ if (termios->c_cflag & CRTSCTS)
+@@ -2081,59 +2070,59 @@ lpuart_set_termios(struct uart_port *port, struct ktermios *termios,
+ * Need to update the Ring buffer length according to the selected
+ * baud rate and restart Rx DMA path.
+ *
+- * Since timer function acqures sport->port.lock, need to stop before
++ * Since timer function acqures port->lock, need to stop before
+ * acquring same lock because otherwise del_timer_sync() can deadlock.
+ */
+ if (old && sport->lpuart_dma_rx_use)
+- lpuart_dma_rx_free(&sport->port);
++ lpuart_dma_rx_free(port);
+
+- uart_port_lock_irqsave(&sport->port, &flags);
++ uart_port_lock_irqsave(port, &flags);
+
+- sport->port.read_status_mask = 0;
++ port->read_status_mask = 0;
+ if (termios->c_iflag & INPCK)
+- sport->port.read_status_mask |= UARTSR1_FE | UARTSR1_PE;
++ port->read_status_mask |= UARTSR1_FE | UARTSR1_PE;
+ if (termios->c_iflag & (IGNBRK | BRKINT | PARMRK))
+- sport->port.read_status_mask |= UARTSR1_FE;
++ port->read_status_mask |= UARTSR1_FE;
+
+ /* characters to ignore */
+- sport->port.ignore_status_mask = 0;
++ port->ignore_status_mask = 0;
+ if (termios->c_iflag & IGNPAR)
+- sport->port.ignore_status_mask |= UARTSR1_PE;
++ port->ignore_status_mask |= UARTSR1_PE;
+ if (termios->c_iflag & IGNBRK) {
+- sport->port.ignore_status_mask |= UARTSR1_FE;
++ port->ignore_status_mask |= UARTSR1_FE;
+ /*
+ * if we're ignoring parity and break indicators,
+ * ignore overruns too (for real raw support).
+ */
+ if (termios->c_iflag & IGNPAR)
+- sport->port.ignore_status_mask |= UARTSR1_OR;
++ port->ignore_status_mask |= UARTSR1_OR;
+ }
+
+ /* update the per-port timeout */
+ uart_update_timeout(port, termios->c_cflag, baud);
+
+ /* wait transmit engin complete */
+- lpuart_wait_bit_set(&sport->port, UARTSR1, UARTSR1_TC);
++ lpuart_wait_bit_set(port, UARTSR1, UARTSR1_TC);
+
+ /* disable transmit and receive */
+ writeb(old_cr2 & ~(UARTCR2_TE | UARTCR2_RE),
+- sport->port.membase + UARTCR2);
++ port->membase + UARTCR2);
+
+- sbr = sport->port.uartclk / (16 * baud);
+- brfa = ((sport->port.uartclk - (16 * sbr * baud)) * 2) / baud;
++ sbr = port->uartclk / (16 * baud);
++ brfa = ((port->uartclk - (16 * sbr * baud)) * 2) / baud;
+ bdh &= ~UARTBDH_SBR_MASK;
+ bdh |= (sbr >> 8) & 0x1F;
+ cr4 &= ~UARTCR4_BRFA_MASK;
+ brfa &= UARTCR4_BRFA_MASK;
+- writeb(cr4 | brfa, sport->port.membase + UARTCR4);
+- writeb(bdh, sport->port.membase + UARTBDH);
+- writeb(sbr & 0xFF, sport->port.membase + UARTBDL);
+- writeb(cr3, sport->port.membase + UARTCR3);
+- writeb(cr1, sport->port.membase + UARTCR1);
+- writeb(modem, sport->port.membase + UARTMODEM);
++ writeb(cr4 | brfa, port->membase + UARTCR4);
++ writeb(bdh, port->membase + UARTBDH);
++ writeb(sbr & 0xFF, port->membase + UARTBDL);
++ writeb(cr3, port->membase + UARTCR3);
++ writeb(cr1, port->membase + UARTCR1);
++ writeb(modem, port->membase + UARTMODEM);
+
+ /* restore control register */
+- writeb(old_cr2, sport->port.membase + UARTCR2);
++ writeb(old_cr2, port->membase + UARTCR2);
+
+ if (old && sport->lpuart_dma_rx_use) {
+ if (!lpuart_start_rx_dma(sport))
+@@ -2142,7 +2131,7 @@ lpuart_set_termios(struct uart_port *port, struct ktermios *termios,
+ sport->lpuart_dma_rx_use = false;
+ }
+
+- uart_port_unlock_irqrestore(&sport->port, flags);
++ uart_port_unlock_irqrestore(port, flags);
+ }
+
+ static void __lpuart32_serial_setbrg(struct uart_port *port,
+@@ -2236,13 +2225,13 @@ lpuart32_set_termios(struct uart_port *port, struct ktermios *termios,
+ {
+ struct lpuart_port *sport = container_of(port, struct lpuart_port, port);
+ unsigned long flags;
+- unsigned long ctrl, old_ctrl, bd, modem;
++ u32 ctrl, old_ctrl, bd, modem;
+ unsigned int baud;
+ unsigned int old_csize = old ? old->c_cflag & CSIZE : CS8;
+
+- ctrl = old_ctrl = lpuart32_read(&sport->port, UARTCTRL);
+- bd = lpuart32_read(&sport->port, UARTBAUD);
+- modem = lpuart32_read(&sport->port, UARTMODIR);
++ ctrl = old_ctrl = lpuart32_read(port, UARTCTRL);
++ bd = lpuart32_read(port, UARTBAUD);
++ modem = lpuart32_read(port, UARTMODIR);
+ sport->is_cs7 = false;
+ /*
+ * only support CS8 and CS7
+@@ -2276,7 +2265,7 @@ lpuart32_set_termios(struct uart_port *port, struct ktermios *termios,
+ * When auto RS-485 RTS mode is enabled,
+ * hardware flow control need to be disabled.
+ */
+- if (sport->port.rs485.flags & SER_RS485_ENABLED)
++ if (port->rs485.flags & SER_RS485_ENABLED)
+ termios->c_cflag &= ~CRTSCTS;
+
+ if (termios->c_cflag & CRTSCTS)
+@@ -2326,59 +2315,61 @@ lpuart32_set_termios(struct uart_port *port, struct ktermios *termios,
+ * Need to update the Ring buffer length according to the selected
+ * baud rate and restart Rx DMA path.
+ *
+- * Since timer function acqures sport->port.lock, need to stop before
++ * Since timer function acqures port->lock, need to stop before
+ * acquring same lock because otherwise del_timer_sync() can deadlock.
+ */
+ if (old && sport->lpuart_dma_rx_use)
+- lpuart_dma_rx_free(&sport->port);
++ lpuart_dma_rx_free(port);
+
+- uart_port_lock_irqsave(&sport->port, &flags);
++ uart_port_lock_irqsave(port, &flags);
+
+- sport->port.read_status_mask = 0;
++ port->read_status_mask = 0;
+ if (termios->c_iflag & INPCK)
+- sport->port.read_status_mask |= UARTSTAT_FE | UARTSTAT_PE;
++ port->read_status_mask |= UARTSTAT_FE | UARTSTAT_PE;
+ if (termios->c_iflag & (IGNBRK | BRKINT | PARMRK))
+- sport->port.read_status_mask |= UARTSTAT_FE;
++ port->read_status_mask |= UARTSTAT_FE;
+
+ /* characters to ignore */
+- sport->port.ignore_status_mask = 0;
++ port->ignore_status_mask = 0;
+ if (termios->c_iflag & IGNPAR)
+- sport->port.ignore_status_mask |= UARTSTAT_PE;
++ port->ignore_status_mask |= UARTSTAT_PE;
+ if (termios->c_iflag & IGNBRK) {
+- sport->port.ignore_status_mask |= UARTSTAT_FE;
++ port->ignore_status_mask |= UARTSTAT_FE;
+ /*
+ * if we're ignoring parity and break indicators,
+ * ignore overruns too (for real raw support).
+ */
+ if (termios->c_iflag & IGNPAR)
+- sport->port.ignore_status_mask |= UARTSTAT_OR;
++ port->ignore_status_mask |= UARTSTAT_OR;
+ }
+
+ /* update the per-port timeout */
+ uart_update_timeout(port, termios->c_cflag, baud);
+
++ /*
++ * disable CTS to ensure the transmit engine is not blocked by the flow
++ * control when there is dirty data in TX FIFO
++ */
++ lpuart32_write(port, modem & ~UARTMODIR_TXCTSE, UARTMODIR);
++
+ /*
+ * LPUART Transmission Complete Flag may never be set while queuing a break
+ * character, so skip waiting for transmission complete when UARTCTRL_SBK is
+ * asserted.
+ */
+- if (!(old_ctrl & UARTCTRL_SBK)) {
+- lpuart32_write(&sport->port, 0, UARTMODIR);
+- lpuart32_wait_bit_set(&sport->port, UARTSTAT, UARTSTAT_TC);
+- }
++ if (!(old_ctrl & UARTCTRL_SBK))
++ lpuart32_wait_bit_set(port, UARTSTAT, UARTSTAT_TC);
+
+ /* disable transmit and receive */
+- lpuart32_write(&sport->port, old_ctrl & ~(UARTCTRL_TE | UARTCTRL_RE),
++ lpuart32_write(port, old_ctrl & ~(UARTCTRL_TE | UARTCTRL_RE),
+ UARTCTRL);
+
+- lpuart32_write(&sport->port, bd, UARTBAUD);
++ lpuart32_write(port, bd, UARTBAUD);
+ lpuart32_serial_setbrg(sport, baud);
+- /* disable CTS before enabling UARTCTRL_TE to avoid pending idle preamble */
+- lpuart32_write(&sport->port, modem & ~UARTMODIR_TXCTSE, UARTMODIR);
+ /* restore control register */
+- lpuart32_write(&sport->port, ctrl, UARTCTRL);
++ lpuart32_write(port, ctrl, UARTCTRL);
+ /* re-enable the CTS if needed */
+- lpuart32_write(&sport->port, modem, UARTMODIR);
++ lpuart32_write(port, modem, UARTMODIR);
+
+ if ((ctrl & (UARTCTRL_PE | UARTCTRL_M)) == UARTCTRL_PE)
+ sport->is_cs7 = true;
+@@ -2390,7 +2381,7 @@ lpuart32_set_termios(struct uart_port *port, struct ktermios *termios,
+ sport->lpuart_dma_rx_use = false;
+ }
+
+- uart_port_unlock_irqrestore(&sport->port, flags);
++ uart_port_unlock_irqrestore(port, flags);
+ }
+
+ static const char *lpuart_type(struct uart_port *port)
+@@ -2503,7 +2494,7 @@ static void
+ lpuart_console_write(struct console *co, const char *s, unsigned int count)
+ {
+ struct lpuart_port *sport = lpuart_ports[co->index];
+- unsigned char old_cr2, cr2;
++ u8 old_cr2, cr2;
+ unsigned long flags;
+ int locked = 1;
+
+@@ -2533,7 +2524,7 @@ static void
+ lpuart32_console_write(struct console *co, const char *s, unsigned int count)
+ {
+ struct lpuart_port *sport = lpuart_ports[co->index];
+- unsigned long old_cr, cr;
++ u32 old_cr, cr;
+ unsigned long flags;
+ int locked = 1;
+
+@@ -2567,7 +2558,7 @@ static void __init
+ lpuart_console_get_options(struct lpuart_port *sport, int *baud,
+ int *parity, int *bits)
+ {
+- unsigned char cr, bdh, bdl, brfa;
++ u8 cr, bdh, bdl, brfa;
+ unsigned int sbr, uartclk, baud_raw;
+
+ cr = readb(sport->port.membase + UARTCR2);
+@@ -2616,7 +2607,7 @@ static void __init
+ lpuart32_console_get_options(struct lpuart_port *sport, int *baud,
+ int *parity, int *bits)
+ {
+- unsigned long cr, bd;
++ u32 cr, bd;
+ unsigned int sbr, uartclk, baud_raw;
+
+ cr = lpuart32_read(&sport->port, UARTCTRL);
+@@ -2822,13 +2813,13 @@ static int lpuart_global_reset(struct lpuart_port *sport)
+ {
+ struct uart_port *port = &sport->port;
+ void __iomem *global_addr;
+- unsigned long ctrl, bd;
++ u32 ctrl, bd;
+ unsigned int val = 0;
+ int ret;
+
+ ret = clk_prepare_enable(sport->ipg_clk);
+ if (ret) {
+- dev_err(sport->port.dev, "failed to enable uart ipg clk: %d\n", ret);
++ dev_err(port->dev, "failed to enable uart ipg clk: %d\n", ret);
+ return ret;
+ }
+
+@@ -2839,10 +2830,10 @@ static int lpuart_global_reset(struct lpuart_port *sport)
+ */
+ ctrl = lpuart32_read(port, UARTCTRL);
+ if (ctrl & UARTCTRL_TE) {
+- bd = lpuart32_read(&sport->port, UARTBAUD);
++ bd = lpuart32_read(port, UARTBAUD);
+ if (read_poll_timeout(lpuart32_tx_empty, val, val, 1, 100000, false,
+ port)) {
+- dev_warn(sport->port.dev,
++ dev_warn(port->dev,
+ "timeout waiting for transmit engine to complete\n");
+ clk_disable_unprepare(sport->ipg_clk);
+ return 0;
+@@ -3028,7 +3019,7 @@ static int lpuart_runtime_resume(struct device *dev)
+
+ static void serial_lpuart_enable_wakeup(struct lpuart_port *sport, bool on)
+ {
+- unsigned int val, baud;
++ u32 val, baud;
+
+ if (lpuart_is_32(sport)) {
+ val = lpuart32_read(&sport->port, UARTCTRL);
+@@ -3093,7 +3084,7 @@ static int lpuart_suspend_noirq(struct device *dev)
+ static int lpuart_resume_noirq(struct device *dev)
+ {
+ struct lpuart_port *sport = dev_get_drvdata(dev);
+- unsigned int val;
++ u32 val;
+
+ pinctrl_pm_select_default_state(dev);
+
+@@ -3113,7 +3104,8 @@ static int lpuart_resume_noirq(struct device *dev)
+ static int lpuart_suspend(struct device *dev)
+ {
+ struct lpuart_port *sport = dev_get_drvdata(dev);
+- unsigned long temp, flags;
++ u32 temp;
++ unsigned long flags;
+
+ uart_suspend_port(&lpuart_reg, &sport->port);
+
+@@ -3193,7 +3185,7 @@ static void lpuart_console_fixup(struct lpuart_port *sport)
+ * in VLLS mode, or restore console setting here.
+ */
+ if (is_imx7ulp_lpuart(sport) && lpuart_uport_is_active(sport) &&
+- console_suspend_enabled && uart_console(&sport->port)) {
++ console_suspend_enabled && uart_console(uport)) {
+
+ mutex_lock(&port->mutex);
+ memset(&termios, 0, sizeof(struct ktermios));
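Editor's note: the fsl_lpuart changes are largely mechanical: register values move from unsigned char/unsigned long to the fixed-width u8/u32 the hardware actually has, and functions that already receive a struct uart_port * use it directly instead of bouncing through container_of() to the wrapping lpuart_port and back to &sport->port. The user-space sketch below shows why the two pointers refer to the same object; the structs are simplified stand-ins.

    #include <stdio.h>
    #include <stddef.h>

    #define container_of(ptr, type, member) \
            ((type *)((char *)(ptr) - offsetof(type, member)))

    struct uart_port   { unsigned int membase; };               /* stand-in */
    struct lpuart_port { int extra; struct uart_port port; };   /* stand-in */

    int main(void)
    {
            struct lpuart_port sport = { .extra = 1, .port = { .membase = 0x40 } };
            struct uart_port *port = &sport.port;

            /* container_of() recovers the wrapping lpuart_port ... */
            struct lpuart_port *back = container_of(port, struct lpuart_port, port);

            /* ... and &back->port is the very pointer we started from, so the
             * driver can use 'port' directly instead of &sport->port */
            printf("same object: %d\n", &back->port == port);
            printf("membase: %#x\n", port->membase);
            return 0;
    }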
+diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
+index fdf0c1008225a2..9aa7e2a876ec1a 100644
+--- a/drivers/usb/host/xhci-mem.c
++++ b/drivers/usb/host/xhci-mem.c
+@@ -2391,10 +2391,10 @@ int xhci_mem_init(struct xhci_hcd *xhci, gfp_t flags)
+ page_size = readl(&xhci->op_regs->page_size);
+ xhci_dbg_trace(xhci, trace_xhci_dbg_init,
+ "Supported page size register = 0x%x", page_size);
+- i = ffs(page_size);
+- if (i < 16)
++ val = ffs(page_size) - 1;
++ if (val < 16)
+ xhci_dbg_trace(xhci, trace_xhci_dbg_init,
+- "Supported page size of %iK", (1 << (i+12)) / 1024);
++ "Supported page size of %iK", (1 << (val + 12)) / 1024);
+ else
+ xhci_warn(xhci, "WARN: no supported page size\n");
+ /* Use 4K pages, since that's common and the minimum the HC supports */
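Editor's note: the xhci-mem fix accounts for ffs() being 1-based (ffs(0x1) == 1), so the bit index taken from the supported-page-size register needs an explicit minus one before being used as an exponent. A runnable illustration:

    #include <stdio.h>
    #include <strings.h>    /* ffs() */

    int main(void)
    {
            /* xHCI PAGE SIZE register: bit n set means 2^(n+12) bytes are
             * supported.  A 4K-only controller reports 0x1. */
            unsigned int page_size = 0x1;

            /* ffs() is 1-based: ffs(0x1) == 1, so subtract 1 for the bit index */
            int val = ffs(page_size) - 1;

            printf("bit index %d -> page size %uK\n",
                   val, (1u << (val + 12)) / 1024);     /* prints 4K */
            return 0;
    }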
+diff --git a/drivers/usb/typec/altmodes/thunderbolt.c b/drivers/usb/typec/altmodes/thunderbolt.c
+index 1b475b1d98e783..6eadf7835f8f6c 100644
+--- a/drivers/usb/typec/altmodes/thunderbolt.c
++++ b/drivers/usb/typec/altmodes/thunderbolt.c
+@@ -112,7 +112,7 @@ static void tbt_altmode_work(struct work_struct *work)
+ return;
+
+ disable_plugs:
+- for (int i = TYPEC_PLUG_SOP_PP; i > 0; --i) {
++ for (int i = TYPEC_PLUG_SOP_PP; i >= 0; --i) {
+ if (tbt->plug[i])
+ typec_altmode_put_plug(tbt->plug[i]);
+
+@@ -143,7 +143,7 @@ static int tbt_enter_modes_ordered(struct typec_altmode *alt)
+ if (tbt->plug[TYPEC_PLUG_SOP_P]) {
+ ret = typec_cable_altmode_enter(alt, TYPEC_PLUG_SOP_P, NULL);
+ if (ret < 0) {
+- for (int i = TYPEC_PLUG_SOP_PP; i > 0; --i) {
++ for (int i = TYPEC_PLUG_SOP_PP; i >= 0; --i) {
+ if (tbt->plug[i])
+ typec_altmode_put_plug(tbt->plug[i]);
+
+@@ -324,7 +324,7 @@ static void tbt_altmode_remove(struct typec_altmode *alt)
+ {
+ struct tbt_altmode *tbt = typec_altmode_get_drvdata(alt);
+
+- for (int i = TYPEC_PLUG_SOP_PP; i > 0; --i) {
++ for (int i = TYPEC_PLUG_SOP_PP; i >= 0; --i) {
+ if (tbt->plug[i])
+ typec_altmode_put_plug(tbt->plug[i]);
+ }
+@@ -351,10 +351,10 @@ static bool tbt_ready(struct typec_altmode *alt)
+ */
+ for (int i = 0; i < TYPEC_PLUG_SOP_PP + 1; i++) {
+ plug = typec_altmode_get_plug(tbt->alt, i);
+- if (IS_ERR(plug))
++ if (!plug)
+ continue;
+
+- if (!plug || plug->svid != USB_TYPEC_TBT_SID)
++ if (plug->svid != USB_TYPEC_TBT_SID)
+ break;
+
+ plug->desc = "Thunderbolt3";
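Editor's note: the Thunderbolt altmode loops walk the plug array downwards; with the SOP' plug at index 0, the old "i > 0" bound skipped it and leaked its reference, while "i >= 0" releases both plugs. Sketched with stand-in enum values:

    #include <stdio.h>

    enum { PLUG_SOP_P = 0, PLUG_SOP_PP = 1 };   /* SOP' and SOP'' plugs */

    int main(void)
    {
            /* 'i > 0' would stop before index 0 and leak the SOP' reference;
             * 'i >= 0' visits both plugs, matching the fix above */
            for (int i = PLUG_SOP_PP; i >= 0; --i)
                    printf("releasing plug %d\n", i);
            return 0;
    }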
+diff --git a/drivers/usb/typec/ucsi/ucsi_ccg.c b/drivers/usb/typec/ucsi/ucsi_ccg.c
+index 4b1668733a4bec..511dd1b224ae51 100644
+--- a/drivers/usb/typec/ucsi/ucsi_ccg.c
++++ b/drivers/usb/typec/ucsi/ucsi_ccg.c
+@@ -1433,11 +1433,10 @@ static int ucsi_ccg_probe(struct i2c_client *client)
+ uc->fw_build = CCG_FW_BUILD_NVIDIA_TEGRA;
+ else if (!strcmp(fw_name, "nvidia,gpu"))
+ uc->fw_build = CCG_FW_BUILD_NVIDIA;
++ if (!uc->fw_build)
++ dev_err(uc->dev, "failed to get FW build information\n");
+ }
+
+- if (!uc->fw_build)
+- dev_err(uc->dev, "failed to get FW build information\n");
+-
+ /* reset ccg device and initialize ucsi */
+ status = ucsi_ccg_init(uc);
+ if (status < 0) {
+diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
+index 718fa4e0b31ec2..7aeff435c1d873 100644
+--- a/drivers/vhost/scsi.c
++++ b/drivers/vhost/scsi.c
+@@ -1699,14 +1699,19 @@ vhost_scsi_set_endpoint(struct vhost_scsi *vs,
+ }
+ }
+
++ if (vs->vs_tpg) {
++ pr_err("vhost-scsi endpoint already set for %s.\n",
++ vs->vs_vhost_wwpn);
++ ret = -EEXIST;
++ goto out;
++ }
++
+ len = sizeof(vs_tpg[0]) * VHOST_SCSI_MAX_TARGET;
+ vs_tpg = kzalloc(len, GFP_KERNEL);
+ if (!vs_tpg) {
+ ret = -ENOMEM;
+ goto out;
+ }
+- if (vs->vs_tpg)
+- memcpy(vs_tpg, vs->vs_tpg, len);
+
+ mutex_lock(&vhost_scsi_mutex);
+ list_for_each_entry(tpg, &vhost_scsi_list, tv_tpg_list) {
+@@ -1722,12 +1727,6 @@ vhost_scsi_set_endpoint(struct vhost_scsi *vs,
+ tv_tport = tpg->tport;
+
+ if (!strcmp(tv_tport->tport_name, t->vhost_wwpn)) {
+- if (vs->vs_tpg && vs->vs_tpg[tpg->tport_tpgt]) {
+- mutex_unlock(&tpg->tv_tpg_mutex);
+- mutex_unlock(&vhost_scsi_mutex);
+- ret = -EEXIST;
+- goto undepend;
+- }
+ /*
+ * In order to ensure individual vhost-scsi configfs
+ * groups cannot be removed while in use by vhost ioctl,
+@@ -1774,15 +1773,15 @@ vhost_scsi_set_endpoint(struct vhost_scsi *vs,
+ }
+ ret = 0;
+ } else {
+- ret = -EEXIST;
++ ret = -ENODEV;
++ goto free_tpg;
+ }
+
+ /*
+- * Act as synchronize_rcu to make sure access to
+- * old vs->vs_tpg is finished.
++ * Act as synchronize_rcu to make sure requests after this point
++ * see a fully setup device.
+ */
+ vhost_scsi_flush(vs);
+- kfree(vs->vs_tpg);
+ vs->vs_tpg = vs_tpg;
+ goto out;
+
+@@ -1802,6 +1801,7 @@ vhost_scsi_set_endpoint(struct vhost_scsi *vs,
+ target_undepend_item(&tpg->se_tpg.tpg_group.cg_item);
+ }
+ }
++free_tpg:
+ kfree(vs_tpg);
+ out:
+ mutex_unlock(&vs->dev.mutex);
+@@ -1904,6 +1904,7 @@ vhost_scsi_clear_endpoint(struct vhost_scsi *vs,
+ vhost_scsi_flush(vs);
+ kfree(vs->vs_tpg);
+ vs->vs_tpg = NULL;
++ memset(vs->vs_vhost_wwpn, 0, sizeof(vs->vs_vhost_wwpn));
+ WARN_ON(vs->vs_events_nr);
+ mutex_unlock(&vs->dev.mutex);
+ return 0;
+diff --git a/drivers/video/console/Kconfig b/drivers/video/console/Kconfig
+index bc31db6ef7d262..3e9f2bda67027e 100644
+--- a/drivers/video/console/Kconfig
++++ b/drivers/video/console/Kconfig
+@@ -24,7 +24,7 @@ config VGA_CONSOLE
+ Say Y.
+
+ config MDA_CONSOLE
+- depends on !M68K && !PARISC && ISA
++ depends on VGA_CONSOLE && ISA
+ tristate "MDA text console (dual-headed)"
+ help
+ Say Y here if you have an old MDA or monochrome Hercules graphics
+@@ -52,7 +52,7 @@ config DUMMY_CONSOLE
+
+ config DUMMY_CONSOLE_COLUMNS
+ int "Initial number of console screen columns"
+- depends on DUMMY_CONSOLE && !ARCH_FOOTBRIDGE
++ depends on DUMMY_CONSOLE && !(ARCH_FOOTBRIDGE && VGA_CONSOLE)
+ default 160 if PARISC
+ default 80
+ help
+@@ -62,7 +62,7 @@ config DUMMY_CONSOLE_COLUMNS
+
+ config DUMMY_CONSOLE_ROWS
+ int "Initial number of console screen rows"
+- depends on DUMMY_CONSOLE && !ARCH_FOOTBRIDGE
++ depends on DUMMY_CONSOLE && !(ARCH_FOOTBRIDGE && VGA_CONSOLE)
+ default 64 if PARISC
+ default 30 if ARM
+ default 25
+diff --git a/drivers/video/fbdev/au1100fb.c b/drivers/video/fbdev/au1100fb.c
+index 840f221607635b..6251a6b07b3a11 100644
+--- a/drivers/video/fbdev/au1100fb.c
++++ b/drivers/video/fbdev/au1100fb.c
+@@ -137,13 +137,15 @@ static int au1100fb_fb_blank(int blank_mode, struct fb_info *fbi)
+ */
+ int au1100fb_setmode(struct au1100fb_device *fbdev)
+ {
+- struct fb_info *info = &fbdev->info;
++ struct fb_info *info;
+ u32 words;
+ int index;
+
+ if (!fbdev)
+ return -EINVAL;
+
++ info = &fbdev->info;
++
+ /* Update var-dependent FB info */
+ if (panel_is_active(fbdev->panel) || panel_is_color(fbdev->panel)) {
+ if (info->var.bits_per_pixel <= 8) {
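Editor's note: the au1100fb fix moves the "info = &fbdev->info" assignment after the NULL check. Taking a member address of a possibly NULL pointer is undefined behaviour and lets the compiler discard the later check. A small sketch of the corrected ordering, with stand-in structs:

    #include <stdio.h>

    struct fb_info   { int var; };
    struct fb_device { int id; struct fb_info info; };

    static int setmode(struct fb_device *fbdev)
    {
            struct fb_info *info;

            if (!fbdev)                     /* validate first ... */
                    return -1;

            info = &fbdev->info;            /* ... then take the member address.
                                             * Doing this before the check is UB
                                             * on NULL and lets the compiler drop
                                             * the check entirely. */
            info->var = 1;
            return 0;
    }

    int main(void)
    {
            printf("NULL: %d\n", setmode(NULL));    /* -1, handled */
            struct fb_device dev = { .id = 0 };
            printf("dev:  %d\n", setmode(&dev));    /* 0 */
            return 0;
    }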
+diff --git a/drivers/video/fbdev/sm501fb.c b/drivers/video/fbdev/sm501fb.c
+index 7734377b2d87b9..ed6f4f43e2d52a 100644
+--- a/drivers/video/fbdev/sm501fb.c
++++ b/drivers/video/fbdev/sm501fb.c
+@@ -327,6 +327,13 @@ static int sm501fb_check_var(struct fb_var_screeninfo *var,
+ if (var->xres_virtual > 4096 || var->yres_virtual > 2048)
+ return -EINVAL;
+
++ /* geometry sanity checks */
++ if (var->xres + var->xoffset > var->xres_virtual)
++ return -EINVAL;
++
++ if (var->yres + var->yoffset > var->yres_virtual)
++ return -EINVAL;
++
+ /* can cope with 8,16 or 32bpp */
+
+ if (var->bits_per_pixel <= 8)
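Editor's note: the sm501fb hunk adds the usual panning sanity check: the visible resolution plus the pan offset must fit inside the virtual framebuffer, otherwise later address arithmetic walks past the allocation. A compact sketch of the same check:

    #include <stdio.h>

    struct var { unsigned xres, yres, xoffset, yoffset,
                 xres_virtual, yres_virtual; };

    /* panned window must stay inside the virtual framebuffer */
    static int check_var(const struct var *v)
    {
            if (v->xres + v->xoffset > v->xres_virtual)
                    return -1;
            if (v->yres + v->yoffset > v->yres_virtual)
                    return -1;
            return 0;
    }

    int main(void)
    {
            struct var ok  = { 640, 480, 0,   0, 1024, 768 };
            struct var bad = { 640, 480, 512, 0, 1024, 768 };  /* 640+512 > 1024 */

            printf("ok:  %d\n", check_var(&ok));    /* 0 */
            printf("bad: %d\n", check_var(&bad));   /* -1 */
            return 0;
    }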
+diff --git a/drivers/w1/masters/w1-uart.c b/drivers/w1/masters/w1-uart.c
+index a31782e56ba75a..c87eea34780678 100644
+--- a/drivers/w1/masters/w1-uart.c
++++ b/drivers/w1/masters/w1-uart.c
+@@ -372,11 +372,11 @@ static int w1_uart_probe(struct serdev_device *serdev)
+ init_completion(&w1dev->rx_byte_received);
+ mutex_init(&w1dev->rx_mutex);
+
++ serdev_device_set_drvdata(serdev, w1dev);
++ serdev_device_set_client_ops(serdev, &w1_uart_serdev_ops);
+ ret = w1_uart_serdev_open(w1dev);
+ if (ret < 0)
+ return ret;
+- serdev_device_set_drvdata(serdev, w1dev);
+- serdev_device_set_client_ops(serdev, &w1_uart_serdev_ops);
+
+ return w1_add_master_device(&w1dev->bus);
+ }
+diff --git a/fs/9p/vfs_inode_dotl.c b/fs/9p/vfs_inode_dotl.c
+index 143ac03b7425c0..3397939fd2d5af 100644
+--- a/fs/9p/vfs_inode_dotl.c
++++ b/fs/9p/vfs_inode_dotl.c
+@@ -407,8 +407,8 @@ static int v9fs_vfs_mkdir_dotl(struct mnt_idmap *idmap,
+ err);
+ goto error;
+ }
+- v9fs_fid_add(dentry, &fid);
+ v9fs_set_create_acl(inode, fid, dacl, pacl);
++ v9fs_fid_add(dentry, &fid);
+ d_instantiate(dentry, inode);
+ err = 0;
+ inc_nlink(dir);
+diff --git a/fs/autofs/autofs_i.h b/fs/autofs/autofs_i.h
+index 77c7991d89aace..23cea74f9933b5 100644
+--- a/fs/autofs/autofs_i.h
++++ b/fs/autofs/autofs_i.h
+@@ -218,6 +218,8 @@ void autofs_clean_ino(struct autofs_info *);
+
+ static inline int autofs_check_pipe(struct file *pipe)
+ {
++ if (pipe->f_mode & FMODE_PATH)
++ return -EINVAL;
+ if (!(pipe->f_mode & FMODE_CAN_WRITE))
+ return -EINVAL;
+ if (!S_ISFIFO(file_inode(pipe)->i_mode))
+diff --git a/fs/bcachefs/fs-ioctl.c b/fs/bcachefs/fs-ioctl.c
+index 15725b4ce393cc..4d61938204834c 100644
+--- a/fs/bcachefs/fs-ioctl.c
++++ b/fs/bcachefs/fs-ioctl.c
+@@ -515,10 +515,12 @@ static long bch2_ioctl_subvolume_destroy(struct bch_fs *c, struct file *filp,
+ ret = -ENOENT;
+ goto err;
+ }
+- ret = __bch2_unlink(dir, victim, true);
++
++ ret = inode_permission(file_mnt_idmap(filp), d_inode(victim), MAY_WRITE) ?:
++ __bch2_unlink(dir, victim, true);
+ if (!ret) {
+ fsnotify_rmdir(dir, victim);
+- d_delete(victim);
++ d_invalidate(victim);
+ }
+ err:
+ inode_unlock(dir);
+diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
+index c0a8f7d92acc5c..b96b2359433447 100644
+--- a/fs/btrfs/block-group.c
++++ b/fs/btrfs/block-group.c
+@@ -1823,7 +1823,8 @@ void btrfs_reclaim_bgs_work(struct work_struct *work)
+ list_sort(NULL, &fs_info->reclaim_bgs, reclaim_bgs_cmp);
+ while (!list_empty(&fs_info->reclaim_bgs)) {
+ u64 zone_unusable;
+- u64 reclaimed;
++ u64 used;
++ u64 reserved;
+ int ret = 0;
+
+ bg = list_first_entry(&fs_info->reclaim_bgs,
+@@ -1915,19 +1916,42 @@ void btrfs_reclaim_bgs_work(struct work_struct *work)
+ if (ret < 0)
+ goto next;
+
++ /*
++ * The amount of bytes reclaimed corresponds to the sum of the
++ * "used" and "reserved" counters. We have set the block group
++ * to RO above, which prevents reservations from happening but
++ * we may have existing reservations for which allocation has
++ * not yet been done - btrfs_update_block_group() was not yet
++ * called, which is where we will transfer a reserved extent's
++ * size from the "reserved" counter to the "used" counter - this
++ * happens when running delayed references. When we relocate the
++ * chunk below, relocation first flushes dellaloc, waits for
++ * ordered extent completion (which is where we create delayed
++ * references for data extents) and commits the current
++ * transaction (which runs delayed references), and only after
++ * it does the actual work to move extents out of the block
++ * group. So the reported amount of reclaimed bytes is
++ * effectively the sum of the 'used' and 'reserved' counters.
++ */
++ spin_lock(&bg->lock);
++ used = bg->used;
++ reserved = bg->reserved;
++ spin_unlock(&bg->lock);
++
+ btrfs_info(fs_info,
+- "reclaiming chunk %llu with %llu%% used %llu%% unusable",
++ "reclaiming chunk %llu with %llu%% used %llu%% reserved %llu%% unusable",
+ bg->start,
+- div64_u64(bg->used * 100, bg->length),
++ div64_u64(used * 100, bg->length),
++ div64_u64(reserved * 100, bg->length),
+ div64_u64(zone_unusable * 100, bg->length));
+ trace_btrfs_reclaim_block_group(bg);
+- reclaimed = bg->used;
+ ret = btrfs_relocate_chunk(fs_info, bg->start);
+ if (ret) {
+ btrfs_dec_block_group_ro(bg);
+ btrfs_err(fs_info, "error relocating chunk %llu",
+ bg->start);
+- reclaimed = 0;
++ used = 0;
++ reserved = 0;
+ spin_lock(&space_info->lock);
+ space_info->reclaim_errors++;
+ if (READ_ONCE(space_info->periodic_reclaim))
+@@ -1936,7 +1960,8 @@ void btrfs_reclaim_bgs_work(struct work_struct *work)
+ }
+ spin_lock(&space_info->lock);
+ space_info->reclaim_count++;
+- space_info->reclaim_bytes += reclaimed;
++ space_info->reclaim_bytes += used;
++ space_info->reclaim_bytes += reserved;
+ spin_unlock(&space_info->lock);
+
+ next:
+@@ -2771,8 +2796,11 @@ void btrfs_create_pending_block_groups(struct btrfs_trans_handle *trans)
+ /* Already aborted the transaction if it failed. */
+ next:
+ btrfs_dec_delayed_refs_rsv_bg_inserts(fs_info);
++
++ spin_lock(&fs_info->unused_bgs_lock);
+ list_del_init(&block_group->bg_list);
+ clear_bit(BLOCK_GROUP_FLAG_NEW, &block_group->runtime_flags);
++ spin_unlock(&fs_info->unused_bgs_lock);
+
+ /*
+ * If the block group is still unused, add it to the list of
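Editor's note: the btrfs reclaim accounting now snapshots both the "used" and "reserved" counters under the block-group spinlock before relocating the chunk, and adds both to reclaim_bytes, since reserved extents are also rewritten by the relocation. A user-space sketch of the snapshot-then-account pattern, with a pthread mutex standing in for the spinlock:

    #include <stdio.h>
    #include <pthread.h>

    struct block_group {
            pthread_mutex_t lock;
            unsigned long long used;        /* allocation already accounted */
            unsigned long long reserved;    /* reserved, not yet moved to used */
    };

    int main(void)
    {
            struct block_group bg = {
                    .lock = PTHREAD_MUTEX_INITIALIZER,
                    .used = 700ULL << 20, .reserved = 100ULL << 20,
            };
            unsigned long long used, reserved, reclaim_bytes = 0;

            /* snapshot both counters atomically with respect to writers,
             * as the patch does with spin_lock(&bg->lock) */
            pthread_mutex_lock(&bg.lock);
            used = bg.used;
            reserved = bg.reserved;
            pthread_mutex_unlock(&bg.lock);

            /* ... relocate the chunk here; both parts end up rewritten ... */

            reclaim_bytes += used;
            reclaim_bytes += reserved;
            printf("reclaimed %llu MiB\n", reclaim_bytes >> 20);  /* 800 MiB */
            return 0;
    }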
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index f09db62e61a1b0..70b61bc237e98e 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -2561,6 +2561,9 @@ int btrfs_validate_super(const struct btrfs_fs_info *fs_info,
+ ret = -EINVAL;
+ }
+
++ if (ret)
++ return ret;
++
+ ret = validate_sys_chunk_array(fs_info, sb);
+
+ /*
+diff --git a/fs/coredump.c b/fs/coredump.c
+index 4375c70144d0ad..4ebec51fe4f22a 100644
+--- a/fs/coredump.c
++++ b/fs/coredump.c
+@@ -1016,7 +1016,9 @@ static const struct ctl_table coredump_sysctls[] = {
+ .data = &core_pipe_limit,
+ .maxlen = sizeof(unsigned int),
+ .mode = 0644,
+- .proc_handler = proc_dointvec,
++ .proc_handler = proc_dointvec_minmax,
++ .extra1 = SYSCTL_ZERO,
++ .extra2 = SYSCTL_INT_MAX,
+ },
+ {
+ .procname = "core_file_note_size_limit",
+diff --git a/fs/dlm/lockspace.c b/fs/dlm/lockspace.c
+index 8afac6e2dff002..1929327ffbe1cf 100644
+--- a/fs/dlm/lockspace.c
++++ b/fs/dlm/lockspace.c
+@@ -576,7 +576,7 @@ static int new_lockspace(const char *name, const char *cluster,
+ lockspace to start running (via sysfs) in dlm_ls_start(). */
+
+ error = do_uevent(ls, 1);
+- if (error)
++ if (error < 0)
+ goto out_recoverd;
+
+ /* wait until recovery is successful or failed */
+diff --git a/fs/erofs/internal.h b/fs/erofs/internal.h
+index 686d835eb533ac..efd25f3101f1f6 100644
+--- a/fs/erofs/internal.h
++++ b/fs/erofs/internal.h
+@@ -152,8 +152,6 @@ struct erofs_sb_info {
+ /* used for statfs, f_files - f_favail */
+ u64 inos;
+
+- u8 uuid[16]; /* 128-bit uuid for volume */
+- u8 volume_name[16]; /* volume name */
+ u32 feature_compat;
+ u32 feature_incompat;
+
+diff --git a/fs/erofs/super.c b/fs/erofs/super.c
+index 827b6266564944..9f2bce5af9c837 100644
+--- a/fs/erofs/super.c
++++ b/fs/erofs/super.c
+@@ -317,14 +317,6 @@ static int erofs_read_superblock(struct super_block *sb)
+
+ super_set_uuid(sb, (void *)dsb->uuid, sizeof(dsb->uuid));
+
+- ret = strscpy(sbi->volume_name, dsb->volume_name,
+- sizeof(dsb->volume_name));
+- if (ret < 0) { /* -E2BIG */
+- erofs_err(sb, "bad volume name without NIL terminator");
+- ret = -EFSCORRUPTED;
+- goto out;
+- }
+-
+ /* parse on-disk compression configurations */
+ ret = z_erofs_parse_cfgs(sb, dsb);
+ if (ret < 0)
+diff --git a/fs/exec.c b/fs/exec.c
+index 506cd411f4ac2e..17047210be46df 100644
+--- a/fs/exec.c
++++ b/fs/exec.c
+@@ -1229,13 +1229,12 @@ int begin_new_exec(struct linux_binprm * bprm)
+ */
+ bprm->point_of_no_return = true;
+
+- /*
+- * Make this the only thread in the thread group.
+- */
++ /* Make this the only thread in the thread group */
+ retval = de_thread(me);
+ if (retval)
+ goto out;
+-
++ /* see the comment in check_unsafe_exec() */
++ current->fs->in_exec = 0;
+ /*
+ * Cancel any io_uring activity across execve
+ */
+@@ -1497,6 +1496,8 @@ static void free_bprm(struct linux_binprm *bprm)
+ }
+ free_arg_pages(bprm);
+ if (bprm->cred) {
++ /* in case exec fails before de_thread() succeeds */
++ current->fs->in_exec = 0;
+ mutex_unlock(¤t->signal->cred_guard_mutex);
+ abort_creds(bprm->cred);
+ }
+@@ -1618,6 +1619,10 @@ static void check_unsafe_exec(struct linux_binprm *bprm)
+ * suid exec because the differently privileged task
+ * will be able to manipulate the current directory, etc.
+ * It would be nice to force an unshare instead...
++ *
++ * Otherwise we set fs->in_exec = 1 to deny clone(CLONE_FS)
++ * from another sub-thread until de_thread() succeeds, this
++ * state is protected by cred_guard_mutex we hold.
+ */
+ n_fs = 1;
+ spin_lock(&p->fs->lock);
+@@ -1862,7 +1867,6 @@ static int bprm_execve(struct linux_binprm *bprm)
+
+ sched_mm_cid_after_execve(current);
+ /* execve succeeded */
+- current->fs->in_exec = 0;
+ current->in_execve = 0;
+ rseq_execve(current);
+ user_events_execve(current);
+@@ -1881,7 +1885,6 @@ static int bprm_execve(struct linux_binprm *bprm)
+ force_fatal_sig(SIGSEGV);
+
+ sched_mm_cid_after_execve(current);
+- current->fs->in_exec = 0;
+ current->in_execve = 0;
+
+ return retval;
+diff --git a/fs/exfat/fatent.c b/fs/exfat/fatent.c
+index 6f3651c6ca91ef..8df5ad6ebb10cb 100644
+--- a/fs/exfat/fatent.c
++++ b/fs/exfat/fatent.c
+@@ -265,7 +265,7 @@ int exfat_find_last_cluster(struct super_block *sb, struct exfat_chain *p_chain,
+ clu = next;
+ if (exfat_ent_get(sb, clu, &next))
+ return -EIO;
+- } while (next != EXFAT_EOF_CLUSTER);
++ } while (next != EXFAT_EOF_CLUSTER && count <= p_chain->size);
+
+ if (p_chain->size != count) {
+ exfat_fs_error(sb,
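Editor's note: the exfat_find_last_cluster() fix bounds the FAT-chain walk by the expected chain size, so a corrupted cyclic chain cannot spin forever; the existing size comparison afterwards then reports the corruption. A toy illustration with an in-memory FAT containing a self-referencing cluster:

    #include <stdio.h>

    #define EOF_CLUSTER 0xFFFFFFFFu

    /* toy FAT: entry i holds the next cluster of i; cluster 2 points back
     * to itself to simulate a corrupted, cyclic chain */
    static unsigned int fat[] = { 0, 0, 2 };

    static int find_last_cluster(unsigned int start, unsigned int expected)
    {
            unsigned int clu = start, next = start, count = 0;

            do {
                    count++;
                    clu = next;
                    next = fat[clu];
            } while (next != EOF_CLUSTER && count <= expected);  /* bounded */

            if (count != expected) {
                    puts("corrupted chain: walked more clusters than expected");
                    return -1;
            }
            return (int)clu;
    }

    int main(void)
    {
            find_last_cluster(2, 1);  /* exceeds the bound once, then bails out */
            return 0;
    }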
+diff --git a/fs/exfat/file.c b/fs/exfat/file.c
+index 807349d8ea0501..841a5b18e3dfdb 100644
+--- a/fs/exfat/file.c
++++ b/fs/exfat/file.c
+@@ -582,6 +582,9 @@ static ssize_t exfat_file_write_iter(struct kiocb *iocb, struct iov_iter *iter)
+ loff_t pos = iocb->ki_pos;
+ loff_t valid_size;
+
++ if (unlikely(exfat_forced_shutdown(inode->i_sb)))
++ return -EIO;
++
+ inode_lock(inode);
+
+ valid_size = ei->valid_size;
+@@ -635,6 +638,16 @@ static ssize_t exfat_file_write_iter(struct kiocb *iocb, struct iov_iter *iter)
+ return ret;
+ }
+
++static ssize_t exfat_file_read_iter(struct kiocb *iocb, struct iov_iter *iter)
++{
++ struct inode *inode = file_inode(iocb->ki_filp);
++
++ if (unlikely(exfat_forced_shutdown(inode->i_sb)))
++ return -EIO;
++
++ return generic_file_read_iter(iocb, iter);
++}
++
+ static vm_fault_t exfat_page_mkwrite(struct vm_fault *vmf)
+ {
+ int err;
+@@ -672,14 +685,26 @@ static const struct vm_operations_struct exfat_file_vm_ops = {
+
+ static int exfat_file_mmap(struct file *file, struct vm_area_struct *vma)
+ {
++ if (unlikely(exfat_forced_shutdown(file_inode(file)->i_sb)))
++ return -EIO;
++
+ file_accessed(file);
+ vma->vm_ops = &exfat_file_vm_ops;
+ return 0;
+ }
+
++static ssize_t exfat_splice_read(struct file *in, loff_t *ppos,
++ struct pipe_inode_info *pipe, size_t len, unsigned int flags)
++{
++ if (unlikely(exfat_forced_shutdown(file_inode(in)->i_sb)))
++ return -EIO;
++
++ return filemap_splice_read(in, ppos, pipe, len, flags);
++}
++
+ const struct file_operations exfat_file_operations = {
+ .llseek = generic_file_llseek,
+- .read_iter = generic_file_read_iter,
++ .read_iter = exfat_file_read_iter,
+ .write_iter = exfat_file_write_iter,
+ .unlocked_ioctl = exfat_ioctl,
+ #ifdef CONFIG_COMPAT
+@@ -687,7 +712,7 @@ const struct file_operations exfat_file_operations = {
+ #endif
+ .mmap = exfat_file_mmap,
+ .fsync = exfat_file_fsync,
+- .splice_read = filemap_splice_read,
++ .splice_read = exfat_splice_read,
+ .splice_write = iter_file_splice_write,
+ };
+
+diff --git a/fs/exfat/inode.c b/fs/exfat/inode.c
+index 96952d4acb500f..a23677de4544f3 100644
+--- a/fs/exfat/inode.c
++++ b/fs/exfat/inode.c
+@@ -344,7 +344,8 @@ static int exfat_get_block(struct inode *inode, sector_t iblock,
+ * The block has been partially written,
+ * zero the unwritten part and map the block.
+ */
+- loff_t size, off, pos;
++ loff_t size, pos;
++ void *addr;
+
+ max_blocks = 1;
+
+@@ -355,17 +356,43 @@ static int exfat_get_block(struct inode *inode, sector_t iblock,
+ if (!bh_result->b_folio)
+ goto done;
+
++ /*
++ * No buffer_head is allocated.
++ * (1) bmap: It's enough to fill bh_result without I/O.
++ * (2) read: The unwritten part should be filled with 0
++ * If a folio does not have any buffers,
++ * let's return -EAGAIN to fall back to
++ * per-bh IO like block_read_full_folio().
++ */
++ if (!folio_buffers(bh_result->b_folio)) {
++ err = -EAGAIN;
++ goto done;
++ }
++
+ pos = EXFAT_BLK_TO_B(iblock, sb);
+ size = ei->valid_size - pos;
+- off = pos & (PAGE_SIZE - 1);
++ addr = folio_address(bh_result->b_folio) +
++ offset_in_folio(bh_result->b_folio, pos);
++
++ /* Check if bh->b_data points to proper addr in folio */
++ if (bh_result->b_data != addr) {
++ exfat_fs_error_ratelimit(sb,
++ "b_data(%p) != folio_addr(%p)",
++ bh_result->b_data, addr);
++ err = -EINVAL;
++ goto done;
++ }
+
+- folio_set_bh(bh_result, bh_result->b_folio, off);
++ /* Read a block */
+ err = bh_read(bh_result, 0);
+ if (err < 0)
+- goto unlock_ret;
++ goto done;
++
++ /* Zero unwritten part of a block */
++ memset(bh_result->b_data + size, 0,
++ bh_result->b_size - size);
+
+- folio_zero_segment(bh_result->b_folio, off + size,
+- off + sb->s_blocksize);
++ err = 0;
+ } else {
+ /*
+ * The range has not been written, clear the mapped flag
+@@ -376,6 +403,8 @@ static int exfat_get_block(struct inode *inode, sector_t iblock,
+ }
+ done:
+ bh_result->b_size = EXFAT_BLK_TO_B(max_blocks, sb);
++ if (err < 0)
++ clear_buffer_mapped(bh_result);
+ unlock_ret:
+ mutex_unlock(&sbi->s_lock);
+ return err;
+diff --git a/fs/ext4/dir.c b/fs/ext4/dir.c
+index 02d47a64e8d13b..253992fcf57c25 100644
+--- a/fs/ext4/dir.c
++++ b/fs/ext4/dir.c
+@@ -104,6 +104,9 @@ int __ext4_check_dir_entry(const char *function, unsigned int line,
+ else if (unlikely(le32_to_cpu(de->inode) >
+ le32_to_cpu(EXT4_SB(dir->i_sb)->s_es->s_inodes_count)))
+ error_msg = "inode out of bounds";
++ else if (unlikely(next_offset == size && de->name_len == 1 &&
++ de->name[0] == '.'))
++ error_msg = "'.' directory cannot be the last in data block";
+ else
+ return 0;
+
+diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
+index 4e7de7eaa374a0..df30d9f235123b 100644
+--- a/fs/ext4/ext4.h
++++ b/fs/ext4/ext4.h
+@@ -1821,7 +1821,8 @@ static inline int ext4_valid_inum(struct super_block *sb, unsigned long ino)
+ */
+ enum {
+ EXT4_MF_MNTDIR_SAMPLED,
+- EXT4_MF_FC_INELIGIBLE /* Fast commit ineligible */
++ EXT4_MF_FC_INELIGIBLE, /* Fast commit ineligible */
++ EXT4_MF_JOURNAL_DESTROY /* Journal is in process of destroying */
+ };
+
+ static inline void ext4_set_mount_flag(struct super_block *sb, int bit)
+@@ -2232,15 +2233,23 @@ extern int ext4_feature_set_ok(struct super_block *sb, int readonly);
+ /*
+ * Superblock flags
+ */
+-#define EXT4_FLAGS_RESIZING 0
+-#define EXT4_FLAGS_SHUTDOWN 1
+-#define EXT4_FLAGS_BDEV_IS_DAX 2
++enum {
++ EXT4_FLAGS_RESIZING, /* Avoid superblock update and resize race */
++ EXT4_FLAGS_SHUTDOWN, /* Prevent access to the file system */
++ EXT4_FLAGS_BDEV_IS_DAX, /* Current block device support DAX */
++ EXT4_FLAGS_EMERGENCY_RO,/* Emergency read-only due to fs errors */
++};
+
+ static inline int ext4_forced_shutdown(struct super_block *sb)
+ {
+ return test_bit(EXT4_FLAGS_SHUTDOWN, &EXT4_SB(sb)->s_ext4_flags);
+ }
+
++static inline int ext4_emergency_ro(struct super_block *sb)
++{
++ return test_bit(EXT4_FLAGS_EMERGENCY_RO, &EXT4_SB(sb)->s_ext4_flags);
++}
++
+ /*
+ * Default values for user and/or group using reserved blocks
+ */
+diff --git a/fs/ext4/ext4_jbd2.h b/fs/ext4/ext4_jbd2.h
+index 0c77697d5e90d0..ada46189b08603 100644
+--- a/fs/ext4/ext4_jbd2.h
++++ b/fs/ext4/ext4_jbd2.h
+@@ -513,4 +513,33 @@ static inline int ext4_should_dioread_nolock(struct inode *inode)
+ return 1;
+ }
+
++/*
++ * Pass journal explicitly as it may not be cached in the sbi->s_journal in some
++ * cases
++ */
++static inline int ext4_journal_destroy(struct ext4_sb_info *sbi, journal_t *journal)
++{
++ int err = 0;
++
++ /*
++ * At this point only two things can be operating on the journal.
++ * JBD2 thread performing transaction commit and s_sb_upd_work
++ * issuing sb update through the journal. Once we set
++ * EXT4_JOURNAL_DESTROY, new ext4_handle_error() calls will not
++ * queue s_sb_upd_work and ext4_force_commit() makes sure any
++ * ext4_handle_error() calls from the running transaction commit are
++ * finished. Hence no new s_sb_upd_work can be queued after we
++ * flush it here.
++ */
++ ext4_set_mount_flag(sbi->s_sb, EXT4_MF_JOURNAL_DESTROY);
++
++ ext4_force_commit(sbi->s_sb);
++ flush_work(&sbi->s_sb_upd_work);
++
++ err = jbd2_journal_destroy(journal);
++ sbi->s_journal = NULL;
++
++ return err;
++}
++
+ #endif /* _EXT4_JBD2_H */
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 7c54ae5fcbd454..4009f9017a0e97 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -4674,6 +4674,11 @@ static inline int ext4_iget_extra_inode(struct inode *inode,
+ *magic == cpu_to_le32(EXT4_XATTR_MAGIC)) {
+ int err;
+
++ err = xattr_check_inode(inode, IHDR(inode, raw_inode),
++ ITAIL(inode, raw_inode));
++ if (err)
++ return err;
++
+ ext4_set_inode_state(inode, EXT4_STATE_XATTR);
+ err = ext4_find_inline_data_nolock(inode);
+ if (!err && ext4_has_inline_data(inode))
+@@ -5007,8 +5012,16 @@ struct inode *__ext4_iget(struct super_block *sb, unsigned long ino,
+ inode->i_op = &ext4_encrypted_symlink_inode_operations;
+ } else if (ext4_inode_is_fast_symlink(inode)) {
+ inode->i_op = &ext4_fast_symlink_inode_operations;
+- nd_terminate_link(ei->i_data, inode->i_size,
+- sizeof(ei->i_data) - 1);
++ if (inode->i_size == 0 ||
++ inode->i_size >= sizeof(ei->i_data) ||
++ strnlen((char *)ei->i_data, inode->i_size + 1) !=
++ inode->i_size) {
++ ext4_error_inode(inode, function, line, 0,
++ "invalid fast symlink length %llu",
++ (unsigned long long)inode->i_size);
++ ret = -EFSCORRUPTED;
++ goto bad_inode;
++ }
+ inode_set_cached_link(inode, (char *)ei->i_data,
+ inode->i_size);
+ } else {
+@@ -5464,7 +5477,7 @@ int ext4_setattr(struct mnt_idmap *idmap, struct dentry *dentry,
+ oldsize & (inode->i_sb->s_blocksize - 1)) {
+ error = ext4_inode_attach_jinode(inode);
+ if (error)
+- goto err_out;
++ goto out_mmap_sem;
+ }
+
+ handle = ext4_journal_start(inode, EXT4_HT_INODE, 3);
+diff --git a/fs/ext4/mballoc-test.c b/fs/ext4/mballoc-test.c
+index bb2a223b207c19..d634c12f198474 100644
+--- a/fs/ext4/mballoc-test.c
++++ b/fs/ext4/mballoc-test.c
+@@ -796,6 +796,7 @@ static void test_mb_mark_used(struct kunit *test)
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, buddy);
+ grp = kunit_kzalloc(test, offsetof(struct ext4_group_info,
+ bb_counters[MB_NUM_ORDERS(sb)]), GFP_KERNEL);
++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, grp);
+
+ ret = ext4_mb_load_buddy(sb, TEST_GOAL_GROUP, &e4b);
+ KUNIT_ASSERT_EQ(test, ret, 0);
+@@ -860,6 +861,7 @@ static void test_mb_free_blocks(struct kunit *test)
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, buddy);
+ grp = kunit_kzalloc(test, offsetof(struct ext4_group_info,
+ bb_counters[MB_NUM_ORDERS(sb)]), GFP_KERNEL);
++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, grp);
+
+ ret = ext4_mb_load_buddy(sb, TEST_GOAL_GROUP, &e4b);
+ KUNIT_ASSERT_EQ(test, ret, 0);
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index 536d56d1507265..8e49cb7118581d 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -2577,8 +2577,10 @@ static int ext4_dx_add_entry(handle_t *handle, struct ext4_filename *fname,
+ BUFFER_TRACE(frame->bh, "get_write_access");
+ err = ext4_journal_get_write_access(handle, sb, frame->bh,
+ EXT4_JTR_NONE);
+- if (err)
++ if (err) {
++ brelse(bh2);
+ goto journal_error;
++ }
+ if (!add_level) {
+ unsigned icount1 = icount/2, icount2 = icount - icount1;
+ unsigned hash2 = dx_get_hash(entries + icount1);
+@@ -2589,8 +2591,10 @@ static int ext4_dx_add_entry(handle_t *handle, struct ext4_filename *fname,
+ err = ext4_journal_get_write_access(handle, sb,
+ (frame - 1)->bh,
+ EXT4_JTR_NONE);
+- if (err)
++ if (err) {
++ brelse(bh2);
+ goto journal_error;
++ }
+
+ memcpy((char *) entries2, (char *) (entries + icount1),
+ icount2 * sizeof(struct dx_entry));
+@@ -2609,8 +2613,10 @@ static int ext4_dx_add_entry(handle_t *handle, struct ext4_filename *fname,
+ dxtrace(dx_show_index("node",
+ ((struct dx_node *) bh2->b_data)->entries));
+ err = ext4_handle_dirty_dx_node(handle, dir, bh2);
+- if (err)
++ if (err) {
++ brelse(bh2);
+ goto journal_error;
++ }
+ brelse (bh2);
+ err = ext4_handle_dirty_dx_node(handle, dir,
+ (frame - 1)->bh);
+@@ -2635,8 +2641,10 @@ static int ext4_dx_add_entry(handle_t *handle, struct ext4_filename *fname,
+ "Creating %d level index...\n",
+ dxroot->info.indirect_levels));
+ err = ext4_handle_dirty_dx_node(handle, dir, frame->bh);
+- if (err)
++ if (err) {
++ brelse(bh2);
+ goto journal_error;
++ }
+ err = ext4_handle_dirty_dx_node(handle, dir, bh2);
+ brelse(bh2);
+ restart = 1;
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index a50e5c31b93782..dc46a7063f1e17 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -707,11 +707,8 @@ static void ext4_handle_error(struct super_block *sb, bool force_ro, int error,
+ if (test_opt(sb, WARN_ON_ERROR))
+ WARN_ON_ONCE(1);
+
+- if (!continue_fs && !sb_rdonly(sb)) {
+- set_bit(EXT4_FLAGS_SHUTDOWN, &EXT4_SB(sb)->s_ext4_flags);
+- if (journal)
+- jbd2_journal_abort(journal, -EIO);
+- }
++ if (!continue_fs && !ext4_emergency_ro(sb) && journal)
++ jbd2_journal_abort(journal, -EIO);
+
+ if (!bdev_read_only(sb->s_bdev)) {
+ save_error_info(sb, error, ino, block, func, line);
+@@ -719,9 +716,13 @@ static void ext4_handle_error(struct super_block *sb, bool force_ro, int error,
+ * In case the fs should keep running, we need to writeout
+ * superblock through the journal. Due to lock ordering
+ * constraints, it may not be safe to do it right here so we
+- * defer superblock flushing to a workqueue.
++ * defer superblock flushing to a workqueue. We just need to be
++ * careful when the journal is already shutting down. If we get
++ * here in that case, just update the sb directly as the last
++ * transaction won't commit anyway.
+ */
+- if (continue_fs && journal)
++ if (continue_fs && journal &&
++ !ext4_test_mount_flag(sb, EXT4_MF_JOURNAL_DESTROY))
+ schedule_work(&EXT4_SB(sb)->s_sb_upd_work);
+ else
+ ext4_commit_super(sb);
+@@ -737,17 +738,17 @@ static void ext4_handle_error(struct super_block *sb, bool force_ro, int error,
+ sb->s_id);
+ }
+
+- if (sb_rdonly(sb) || continue_fs)
++ if (ext4_emergency_ro(sb) || continue_fs)
+ return;
+
+ ext4_msg(sb, KERN_CRIT, "Remounting filesystem read-only");
+ /*
+- * EXT4_FLAGS_SHUTDOWN was set which stops all filesystem
+- * modifications. We don't set SB_RDONLY because that requires
+- * sb->s_umount semaphore and setting it without proper remount
+- * procedure is confusing code such as freeze_super() leading to
+- * deadlocks and other problems.
++ * We don't set SB_RDONLY because that requires sb->s_umount
++ * semaphore and setting it without proper remount procedure is
++ * confusing code such as freeze_super() leading to deadlocks
++ * and other problems.
+ */
++ set_bit(EXT4_FLAGS_EMERGENCY_RO, &EXT4_SB(sb)->s_ext4_flags);
+ }
+
+ static void update_super_work(struct work_struct *work)
+@@ -1306,18 +1307,17 @@ static void ext4_put_super(struct super_block *sb)
+ ext4_unregister_li_request(sb);
+ ext4_quotas_off(sb, EXT4_MAXQUOTAS);
+
+- flush_work(&sbi->s_sb_upd_work);
+ destroy_workqueue(sbi->rsv_conversion_wq);
+ ext4_release_orphan_info(sb);
+
+ if (sbi->s_journal) {
+ aborted = is_journal_aborted(sbi->s_journal);
+- err = jbd2_journal_destroy(sbi->s_journal);
+- sbi->s_journal = NULL;
++ err = ext4_journal_destroy(sbi, sbi->s_journal);
+ if ((err < 0) && !aborted) {
+ ext4_abort(sb, -err, "Couldn't clean up the journal");
+ }
+- }
++ } else
++ flush_work(&sbi->s_sb_upd_work);
+
+ ext4_es_unregister_shrinker(sbi);
+ timer_shutdown_sync(&sbi->s_err_report);
+@@ -3038,6 +3038,9 @@ static int _ext4_show_options(struct seq_file *seq, struct super_block *sb,
+ if (nodefs && !test_opt(sb, NO_PREFETCH_BLOCK_BITMAPS))
+ SEQ_OPTS_PUTS("prefetch_block_bitmaps");
+
++ if (ext4_emergency_ro(sb))
++ SEQ_OPTS_PUTS("emergency_ro");
++
+ ext4_show_quota_options(seq, sb);
+ return 0;
+ }
+@@ -4973,10 +4976,7 @@ static int ext4_load_and_init_journal(struct super_block *sb,
+ return 0;
+
+ out:
+- /* flush s_sb_upd_work before destroying the journal. */
+- flush_work(&sbi->s_sb_upd_work);
+- jbd2_journal_destroy(sbi->s_journal);
+- sbi->s_journal = NULL;
++ ext4_journal_destroy(sbi, sbi->s_journal);
+ return -EINVAL;
+ }
+
+@@ -5665,10 +5665,7 @@ failed_mount8: __maybe_unused
+ sbi->s_ea_block_cache = NULL;
+
+ if (sbi->s_journal) {
+- /* flush s_sb_upd_work before journal destroy. */
+- flush_work(&sbi->s_sb_upd_work);
+- jbd2_journal_destroy(sbi->s_journal);
+- sbi->s_journal = NULL;
++ ext4_journal_destroy(sbi, sbi->s_journal);
+ }
+ failed_mount3a:
+ ext4_es_unregister_shrinker(sbi);
+@@ -5973,7 +5970,7 @@ static journal_t *ext4_open_dev_journal(struct super_block *sb,
+ return journal;
+
+ out_journal:
+- jbd2_journal_destroy(journal);
++ ext4_journal_destroy(EXT4_SB(sb), journal);
+ out_bdev:
+ bdev_fput(bdev_file);
+ return ERR_PTR(errno);
+@@ -6090,8 +6087,7 @@ static int ext4_load_journal(struct super_block *sb,
+ EXT4_SB(sb)->s_journal = journal;
+ err = ext4_clear_journal_err(sb, es);
+ if (err) {
+- EXT4_SB(sb)->s_journal = NULL;
+- jbd2_journal_destroy(journal);
++ ext4_journal_destroy(EXT4_SB(sb), journal);
+ return err;
+ }
+
+@@ -6109,7 +6105,7 @@ static int ext4_load_journal(struct super_block *sb,
+ return 0;
+
+ err_out:
+- jbd2_journal_destroy(journal);
++ ext4_journal_destroy(EXT4_SB(sb), journal);
+ return err;
+ }
+
+@@ -6817,22 +6813,29 @@ static int ext4_statfs_project(struct super_block *sb,
+ dquot->dq_dqb.dqb_bhardlimit);
+ limit >>= sb->s_blocksize_bits;
+
+- if (limit && buf->f_blocks > limit) {
++ if (limit) {
++ uint64_t remaining = 0;
++
+ curblock = (dquot->dq_dqb.dqb_curspace +
+ dquot->dq_dqb.dqb_rsvspace) >> sb->s_blocksize_bits;
+- buf->f_blocks = limit;
+- buf->f_bfree = buf->f_bavail =
+- (buf->f_blocks > curblock) ?
+- (buf->f_blocks - curblock) : 0;
++ if (limit > curblock)
++ remaining = limit - curblock;
++
++ buf->f_blocks = min(buf->f_blocks, limit);
++ buf->f_bfree = min(buf->f_bfree, remaining);
++ buf->f_bavail = min(buf->f_bavail, remaining);
+ }
+
+ limit = min_not_zero(dquot->dq_dqb.dqb_isoftlimit,
+ dquot->dq_dqb.dqb_ihardlimit);
+- if (limit && buf->f_files > limit) {
+- buf->f_files = limit;
+- buf->f_ffree =
+- (buf->f_files > dquot->dq_dqb.dqb_curinodes) ?
+- (buf->f_files - dquot->dq_dqb.dqb_curinodes) : 0;
++ if (limit) {
++ uint64_t remaining = 0;
++
++ if (limit > dquot->dq_dqb.dqb_curinodes)
++ remaining = limit - dquot->dq_dqb.dqb_curinodes;
++
++ buf->f_files = min(buf->f_files, limit);
++ buf->f_ffree = min(buf->f_ffree, remaining);
+ }
+
+ spin_unlock(&dquot->dq_dqb_lock);
+diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c
+index 7647e9f6e1903a..a10fb8a9d02dc9 100644
+--- a/fs/ext4/xattr.c
++++ b/fs/ext4/xattr.c
+@@ -308,7 +308,7 @@ __ext4_xattr_check_block(struct inode *inode, struct buffer_head *bh,
+ __ext4_xattr_check_block((inode), (bh), __func__, __LINE__)
+
+
+-static inline int
++int
+ __xattr_check_inode(struct inode *inode, struct ext4_xattr_ibody_header *header,
+ void *end, const char *function, unsigned int line)
+ {
+@@ -316,9 +316,6 @@ __xattr_check_inode(struct inode *inode, struct ext4_xattr_ibody_header *header,
+ function, line);
+ }
+
+-#define xattr_check_inode(inode, header, end) \
+- __xattr_check_inode((inode), (header), (end), __func__, __LINE__)
+-
+ static int
+ xattr_find_entry(struct inode *inode, struct ext4_xattr_entry **pentry,
+ void *end, int name_index, const char *name, int sorted)
+@@ -649,10 +646,7 @@ ext4_xattr_ibody_get(struct inode *inode, int name_index, const char *name,
+ return error;
+ raw_inode = ext4_raw_inode(&iloc);
+ header = IHDR(inode, raw_inode);
+- end = (void *)raw_inode + EXT4_SB(inode->i_sb)->s_inode_size;
+- error = xattr_check_inode(inode, header, end);
+- if (error)
+- goto cleanup;
++ end = ITAIL(inode, raw_inode);
+ entry = IFIRST(header);
+ error = xattr_find_entry(inode, &entry, end, name_index, name, 0);
+ if (error)
+@@ -783,7 +777,6 @@ ext4_xattr_ibody_list(struct dentry *dentry, char *buffer, size_t buffer_size)
+ struct ext4_xattr_ibody_header *header;
+ struct ext4_inode *raw_inode;
+ struct ext4_iloc iloc;
+- void *end;
+ int error;
+
+ if (!ext4_test_inode_state(inode, EXT4_STATE_XATTR))
+@@ -793,14 +786,9 @@ ext4_xattr_ibody_list(struct dentry *dentry, char *buffer, size_t buffer_size)
+ return error;
+ raw_inode = ext4_raw_inode(&iloc);
+ header = IHDR(inode, raw_inode);
+- end = (void *)raw_inode + EXT4_SB(inode->i_sb)->s_inode_size;
+- error = xattr_check_inode(inode, header, end);
+- if (error)
+- goto cleanup;
+ error = ext4_xattr_list_entries(dentry, IFIRST(header),
+ buffer, buffer_size);
+
+-cleanup:
+ brelse(iloc.bh);
+ return error;
+ }
+@@ -868,7 +856,6 @@ int ext4_get_inode_usage(struct inode *inode, qsize_t *usage)
+ struct ext4_xattr_ibody_header *header;
+ struct ext4_xattr_entry *entry;
+ qsize_t ea_inode_refs = 0;
+- void *end;
+ int ret;
+
+ lockdep_assert_held_read(&EXT4_I(inode)->xattr_sem);
+@@ -879,10 +866,6 @@ int ext4_get_inode_usage(struct inode *inode, qsize_t *usage)
+ goto out;
+ raw_inode = ext4_raw_inode(&iloc);
+ header = IHDR(inode, raw_inode);
+- end = (void *)raw_inode + EXT4_SB(inode->i_sb)->s_inode_size;
+- ret = xattr_check_inode(inode, header, end);
+- if (ret)
+- goto out;
+
+ for (entry = IFIRST(header); !IS_LAST_ENTRY(entry);
+ entry = EXT4_XATTR_NEXT(entry))
+@@ -2235,11 +2218,8 @@ int ext4_xattr_ibody_find(struct inode *inode, struct ext4_xattr_info *i,
+ header = IHDR(inode, raw_inode);
+ is->s.base = is->s.first = IFIRST(header);
+ is->s.here = is->s.first;
+- is->s.end = (void *)raw_inode + EXT4_SB(inode->i_sb)->s_inode_size;
++ is->s.end = ITAIL(inode, raw_inode);
+ if (ext4_test_inode_state(inode, EXT4_STATE_XATTR)) {
+- error = xattr_check_inode(inode, header, is->s.end);
+- if (error)
+- return error;
+ /* Find the named attribute. */
+ error = xattr_find_entry(inode, &is->s.here, is->s.end,
+ i->name_index, i->name, 0);
+@@ -2786,14 +2766,10 @@ int ext4_expand_extra_isize_ea(struct inode *inode, int new_extra_isize,
+ */
+
+ base = IFIRST(header);
+- end = (void *)raw_inode + EXT4_SB(inode->i_sb)->s_inode_size;
++ end = ITAIL(inode, raw_inode);
+ min_offs = end - base;
+ total_ino = sizeof(struct ext4_xattr_ibody_header) + sizeof(u32);
+
+- error = xattr_check_inode(inode, header, end);
+- if (error)
+- goto cleanup;
+-
+ ifree = ext4_xattr_free_space(base, &min_offs, base, &total_ino);
+ if (ifree >= isize_diff)
+ goto shift;
+diff --git a/fs/ext4/xattr.h b/fs/ext4/xattr.h
+index b25c2d7b5f9915..1fedf44d4fb65e 100644
+--- a/fs/ext4/xattr.h
++++ b/fs/ext4/xattr.h
+@@ -67,6 +67,9 @@ struct ext4_xattr_entry {
+ ((void *)raw_inode + \
+ EXT4_GOOD_OLD_INODE_SIZE + \
+ EXT4_I(inode)->i_extra_isize))
++#define ITAIL(inode, raw_inode) \
++ ((void *)(raw_inode) + \
++ EXT4_SB((inode)->i_sb)->s_inode_size)
+ #define IFIRST(hdr) ((struct ext4_xattr_entry *)((hdr)+1))
+
+ /*
+@@ -206,6 +209,13 @@ extern int ext4_xattr_ibody_set(handle_t *handle, struct inode *inode,
+ extern struct mb_cache *ext4_xattr_create_cache(void);
+ extern void ext4_xattr_destroy_cache(struct mb_cache *);
+
++extern int
++__xattr_check_inode(struct inode *inode, struct ext4_xattr_ibody_header *header,
++ void *end, const char *function, unsigned int line);
++
++#define xattr_check_inode(inode, header, end) \
++ __xattr_check_inode((inode), (header), (end), __func__, __LINE__)
++
+ #ifdef CONFIG_EXT4_FS_SECURITY
+ extern int ext4_init_security(handle_t *handle, struct inode *inode,
+ struct inode *dir, const struct qstr *qstr);
+diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
+index efda9a0229816b..bd890738b94d77 100644
+--- a/fs/f2fs/checkpoint.c
++++ b/fs/f2fs/checkpoint.c
+@@ -1237,7 +1237,7 @@ static int block_operations(struct f2fs_sb_info *sbi)
+ retry_flush_quotas:
+ f2fs_lock_all(sbi);
+ if (__need_flush_quota(sbi)) {
+- int locked;
++ bool need_lock = sbi->umount_lock_holder != current;
+
+ if (++cnt > DEFAULT_RETRY_QUOTA_FLUSH_COUNT) {
+ set_sbi_flag(sbi, SBI_QUOTA_SKIP_FLUSH);
+@@ -1246,11 +1246,13 @@ static int block_operations(struct f2fs_sb_info *sbi)
+ }
+ f2fs_unlock_all(sbi);
+
+- /* only failed during mount/umount/freeze/quotactl */
+- locked = down_read_trylock(&sbi->sb->s_umount);
+- f2fs_quota_sync(sbi->sb, -1);
+- if (locked)
++ /* don't grab s_umount lock during mount/umount/remount/freeze/quotactl */
++ if (!need_lock) {
++ f2fs_do_quota_sync(sbi->sb, -1);
++ } else if (down_read_trylock(&sbi->sb->s_umount)) {
++ f2fs_do_quota_sync(sbi->sb, -1);
+ up_read(&sbi->sb->s_umount);
++ }
+ cond_resched();
+ goto retry_flush_quotas;
+ }
+@@ -1867,7 +1869,8 @@ int f2fs_issue_checkpoint(struct f2fs_sb_info *sbi)
+ struct cp_control cpc;
+
+ cpc.reason = __get_cp_reason(sbi);
+- if (!test_opt(sbi, MERGE_CHECKPOINT) || cpc.reason != CP_SYNC) {
++ if (!test_opt(sbi, MERGE_CHECKPOINT) || cpc.reason != CP_SYNC ||
++ sbi->umount_lock_holder == current) {
+ int ret;
+
+ f2fs_down_write(&sbi->gc_lock);
+diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
+index 985690d81a82c9..9b94810675c193 100644
+--- a/fs/f2fs/compress.c
++++ b/fs/f2fs/compress.c
+@@ -1150,6 +1150,7 @@ static int prepare_compress_overwrite(struct compress_ctx *cc,
+ f2fs_compress_ctx_add_page(cc, page_folio(page));
+
+ if (!PageUptodate(page)) {
++ f2fs_handle_page_eio(sbi, page_folio(page), DATA);
+ release_and_retry:
+ f2fs_put_rpages(cc);
+ f2fs_unlock_rpages(cc, i + 1);
+diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
+index de4da6d9cd93a7..8440a1ed24f232 100644
+--- a/fs/f2fs/data.c
++++ b/fs/f2fs/data.c
+@@ -2178,6 +2178,12 @@ int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret,
+ int i;
+ int ret = 0;
+
++ if (unlikely(f2fs_cp_error(sbi))) {
++ ret = -EIO;
++ from_dnode = false;
++ goto out_put_dnode;
++ }
++
+ f2fs_bug_on(sbi, f2fs_cluster_is_empty(cc));
+
+ last_block_in_file = F2FS_BYTES_TO_BLK(f2fs_readpage_limit(inode) +
+@@ -2221,10 +2227,6 @@ int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret,
+ if (ret)
+ goto out;
+
+- if (unlikely(f2fs_cp_error(sbi))) {
+- ret = -EIO;
+- goto out_put_dnode;
+- }
+ f2fs_bug_on(sbi, dn.data_blkaddr != COMPRESS_ADDR);
+
+ skip_reading_dnode:
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index 1afa7be16e7da5..493dda2d4b6631 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -1659,6 +1659,7 @@ struct f2fs_sb_info {
+
+ unsigned int nquota_files; /* # of quota sysfile */
+ struct f2fs_rwsem quota_sem; /* blocking cp for flags */
++ struct task_struct *umount_lock_holder; /* s_umount lock holder */
+
+ /* # of pages, see count_type */
+ atomic_t nr_pages[NR_COUNT_TYPE];
+@@ -3624,7 +3625,7 @@ int f2fs_inode_dirtied(struct inode *inode, bool sync);
+ void f2fs_inode_synced(struct inode *inode);
+ int f2fs_dquot_initialize(struct inode *inode);
+ int f2fs_enable_quota_files(struct f2fs_sb_info *sbi, bool rdonly);
+-int f2fs_quota_sync(struct super_block *sb, int type);
++int f2fs_do_quota_sync(struct super_block *sb, int type);
+ loff_t max_file_blocks(struct inode *inode);
+ void f2fs_quota_off_umount(struct super_block *sb);
+ void f2fs_save_errors(struct f2fs_sb_info *sbi, unsigned char flag);
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index f92a9fba9991ba..44a658662462da 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -1834,18 +1834,32 @@ static int f2fs_expand_inode_data(struct inode *inode, loff_t offset,
+
+ map.m_len = sec_blks;
+ next_alloc:
++ f2fs_down_write(&sbi->pin_sem);
++
++ if (unlikely(is_sbi_flag_set(sbi, SBI_CP_DISABLED))) {
++ if (has_not_enough_free_secs(sbi, 0, 0)) {
++ f2fs_up_write(&sbi->pin_sem);
++ err = -ENOSPC;
++ f2fs_warn_ratelimited(sbi,
++ "ino:%lu, start:%lu, end:%lu, need to trigger GC to "
++ "reclaim enough free segment when checkpoint is enabled",
++ inode->i_ino, pg_start, pg_end);
++ goto out_err;
++ }
++ }
++
+ if (has_not_enough_free_secs(sbi, 0, f2fs_sb_has_blkzoned(sbi) ?
+ ZONED_PIN_SEC_REQUIRED_COUNT :
+ GET_SEC_FROM_SEG(sbi, overprovision_segments(sbi)))) {
+ f2fs_down_write(&sbi->gc_lock);
+ stat_inc_gc_call_count(sbi, FOREGROUND);
+ err = f2fs_gc(sbi, &gc_control);
+- if (err && err != -ENODATA)
++ if (err && err != -ENODATA) {
++ f2fs_up_write(&sbi->pin_sem);
+ goto out_err;
++ }
+ }
+
+- f2fs_down_write(&sbi->pin_sem);
+-
+ err = f2fs_allocate_pinning_section(sbi);
+ if (err) {
+ f2fs_up_write(&sbi->pin_sem);
+diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c
+index 3dd25f64d6f1e5..cd17d6f4c291f9 100644
+--- a/fs/f2fs/inode.c
++++ b/fs/f2fs/inode.c
+@@ -789,6 +789,13 @@ int f2fs_write_inode(struct inode *inode, struct writeback_control *wbc)
+ !is_inode_flag_set(inode, FI_DIRTY_INODE))
+ return 0;
+
++ /*
++ * no need to update inode page, ultimately f2fs_evict_inode() will
++ * clear dirty status of inode.
++ */
++ if (f2fs_cp_error(sbi))
++ return -EIO;
++
+ if (!f2fs_is_checkpoint_ready(sbi)) {
+ f2fs_mark_inode_dirty_sync(inode, true);
+ return -ENOSPC;
+diff --git a/fs/f2fs/namei.c b/fs/f2fs/namei.c
+index a278c7da817782..3d85d8116dae7b 100644
+--- a/fs/f2fs/namei.c
++++ b/fs/f2fs/namei.c
+@@ -502,6 +502,14 @@ static struct dentry *f2fs_lookup(struct inode *dir, struct dentry *dentry,
+ goto out;
+ }
+
++ if (inode->i_nlink == 0) {
++ f2fs_warn(F2FS_I_SB(inode), "%s: inode (ino=%lx) has zero i_nlink",
++ __func__, inode->i_ino);
++ err = -EFSCORRUPTED;
++ set_sbi_flag(F2FS_I_SB(inode), SBI_NEED_FSCK);
++ goto out_iput;
++ }
++
+ if (IS_ENCRYPTED(dir) &&
+ (S_ISDIR(inode->i_mode) || S_ISLNK(inode->i_mode)) &&
+ !fscrypt_has_permitted_context(dir, inode)) {
+diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
+index c282e8a0a2ec1a..384bca002ec9ac 100644
+--- a/fs/f2fs/segment.c
++++ b/fs/f2fs/segment.c
+@@ -2096,7 +2096,9 @@ static bool add_discard_addrs(struct f2fs_sb_info *sbi, struct cp_control *cpc,
+ return false;
+
+ if (!force) {
+- if (!f2fs_realtime_discard_enable(sbi) || !se->valid_blocks ||
++ if (!f2fs_realtime_discard_enable(sbi) ||
++ (!se->valid_blocks &&
++ !IS_CURSEG(sbi, cpc->trim_start)) ||
+ SM_I(sbi)->dcc_info->nr_discards >=
+ SM_I(sbi)->dcc_info->max_discards)
+ return false;
+@@ -2320,10 +2322,9 @@ static int create_discard_cmd_control(struct f2fs_sb_info *sbi)
+ dcc->discard_granularity = DEFAULT_DISCARD_GRANULARITY;
+ dcc->max_ordered_discard = DEFAULT_MAX_ORDERED_DISCARD_GRANULARITY;
+ dcc->discard_io_aware = DPOLICY_IO_AWARE_ENABLE;
+- if (F2FS_OPTION(sbi).discard_unit == DISCARD_UNIT_SEGMENT)
++ if (F2FS_OPTION(sbi).discard_unit == DISCARD_UNIT_SEGMENT ||
++ F2FS_OPTION(sbi).discard_unit == DISCARD_UNIT_SECTION)
+ dcc->discard_granularity = BLKS_PER_SEG(sbi);
+- else if (F2FS_OPTION(sbi).discard_unit == DISCARD_UNIT_SECTION)
+- dcc->discard_granularity = BLKS_PER_SEC(sbi);
+
+ INIT_LIST_HEAD(&dcc->entry_list);
+ for (i = 0; i < MAX_PLIST_NUM; i++)
+@@ -2806,7 +2807,7 @@ static int get_new_segment(struct f2fs_sb_info *sbi,
+ MAIN_SECS(sbi));
+ if (secno >= MAIN_SECS(sbi)) {
+ ret = -ENOSPC;
+- f2fs_bug_on(sbi, 1);
++ f2fs_bug_on(sbi, !pinning);
+ goto out_unlock;
+ }
+ }
+@@ -2848,7 +2849,7 @@ static int get_new_segment(struct f2fs_sb_info *sbi,
+ out_unlock:
+ spin_unlock(&free_i->segmap_lock);
+
+- if (ret == -ENOSPC)
++ if (ret == -ENOSPC && !pinning)
+ f2fs_stop_checkpoint(sbi, false, STOP_CP_REASON_NO_SEGMENT);
+ return ret;
+ }
+@@ -2921,6 +2922,13 @@ static unsigned int __get_next_segno(struct f2fs_sb_info *sbi, int type)
+ return curseg->segno;
+ }
+
++static void reset_curseg_fields(struct curseg_info *curseg)
++{
++ curseg->inited = false;
++ curseg->segno = NULL_SEGNO;
++ curseg->next_segno = 0;
++}
++
+ /*
+ * Allocate a current working segment.
+ * This function always allocates a free segment in LFS manner.
+@@ -2939,7 +2947,7 @@ static int new_curseg(struct f2fs_sb_info *sbi, int type, bool new_sec)
+ ret = get_new_segment(sbi, &segno, new_sec, pinning);
+ if (ret) {
+ if (ret == -ENOSPC)
+- curseg->segno = NULL_SEGNO;
++ reset_curseg_fields(curseg);
+ return ret;
+ }
+
+@@ -3710,13 +3718,6 @@ static void f2fs_randomize_chunk(struct f2fs_sb_info *sbi,
+ get_random_u32_inclusive(1, sbi->max_fragment_hole);
+ }
+
+-static void reset_curseg_fields(struct curseg_info *curseg)
+-{
+- curseg->inited = false;
+- curseg->segno = NULL_SEGNO;
+- curseg->next_segno = 0;
+-}
+-
+ int f2fs_allocate_data_block(struct f2fs_sb_info *sbi, struct page *page,
+ block_t old_blkaddr, block_t *new_blkaddr,
+ struct f2fs_summary *sum, int type,
+diff --git a/fs/f2fs/segment.h b/fs/f2fs/segment.h
+index 943be4f1d6d2d4..0465dc00b349d2 100644
+--- a/fs/f2fs/segment.h
++++ b/fs/f2fs/segment.h
+@@ -559,13 +559,16 @@ static inline bool has_curseg_enough_space(struct f2fs_sb_info *sbi,
+ unsigned int node_blocks, unsigned int data_blocks,
+ unsigned int dent_blocks)
+ {
+-
+ unsigned int segno, left_blocks, blocks;
+ int i;
+
+ /* check current data/node sections in the worst case. */
+ for (i = CURSEG_HOT_DATA; i < NR_PERSISTENT_LOG; i++) {
+ segno = CURSEG_I(sbi, i)->segno;
++
++ if (unlikely(segno == NULL_SEGNO))
++ return false;
++
+ left_blocks = CAP_BLKS_PER_SEC(sbi) -
+ get_ckpt_valid_blocks(sbi, segno, true);
+
+@@ -576,6 +579,10 @@ static inline bool has_curseg_enough_space(struct f2fs_sb_info *sbi,
+
+ /* check current data section for dentry blocks. */
+ segno = CURSEG_I(sbi, CURSEG_HOT_DATA)->segno;
++
++ if (unlikely(segno == NULL_SEGNO))
++ return false;
++
+ left_blocks = CAP_BLKS_PER_SEC(sbi) -
+ get_ckpt_valid_blocks(sbi, segno, true);
+ if (dent_blocks > left_blocks)
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index 19b67828ae3250..26b1021427ae05 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -1737,22 +1737,28 @@ int f2fs_sync_fs(struct super_block *sb, int sync)
+
+ static int f2fs_freeze(struct super_block *sb)
+ {
++ struct f2fs_sb_info *sbi = F2FS_SB(sb);
++
+ if (f2fs_readonly(sb))
+ return 0;
+
+ /* IO error happened before */
+- if (unlikely(f2fs_cp_error(F2FS_SB(sb))))
++ if (unlikely(f2fs_cp_error(sbi)))
+ return -EIO;
+
+ /* must be clean, since sync_filesystem() was already called */
+- if (is_sbi_flag_set(F2FS_SB(sb), SBI_IS_DIRTY))
++ if (is_sbi_flag_set(sbi, SBI_IS_DIRTY))
+ return -EINVAL;
+
++ sbi->umount_lock_holder = current;
++
+ /* Let's flush checkpoints and stop the thread. */
+- f2fs_flush_ckpt_thread(F2FS_SB(sb));
++ f2fs_flush_ckpt_thread(sbi);
++
++ sbi->umount_lock_holder = NULL;
+
+ /* to avoid deadlock on f2fs_evict_inode->SB_FREEZE_FS */
+- set_sbi_flag(F2FS_SB(sb), SBI_IS_FREEZING);
++ set_sbi_flag(sbi, SBI_IS_FREEZING);
+ return 0;
+ }
+
+@@ -2329,6 +2335,8 @@ static int f2fs_remount(struct super_block *sb, int *flags, char *data)
+ org_mount_opt = sbi->mount_opt;
+ old_sb_flags = sb->s_flags;
+
++ sbi->umount_lock_holder = current;
++
+ #ifdef CONFIG_QUOTA
+ org_mount_opt.s_jquota_fmt = F2FS_OPTION(sbi).s_jquota_fmt;
+ for (i = 0; i < MAXQUOTAS; i++) {
+@@ -2552,6 +2560,8 @@ static int f2fs_remount(struct super_block *sb, int *flags, char *data)
+
+ limit_reserve_root(sbi);
+ *flags = (*flags & ~SB_LAZYTIME) | (sb->s_flags & SB_LAZYTIME);
++
++ sbi->umount_lock_holder = NULL;
+ return 0;
+ restore_checkpoint:
+ if (need_enable_checkpoint) {
+@@ -2592,6 +2602,8 @@ static int f2fs_remount(struct super_block *sb, int *flags, char *data)
+ #endif
+ sbi->mount_opt = org_mount_opt;
+ sb->s_flags = old_sb_flags;
++
++ sbi->umount_lock_holder = NULL;
+ return err;
+ }
+
+@@ -2908,7 +2920,7 @@ static int f2fs_quota_sync_file(struct f2fs_sb_info *sbi, int type)
+ return ret;
+ }
+
+-int f2fs_quota_sync(struct super_block *sb, int type)
++int f2fs_do_quota_sync(struct super_block *sb, int type)
+ {
+ struct f2fs_sb_info *sbi = F2FS_SB(sb);
+ struct quota_info *dqopt = sb_dqopt(sb);
+@@ -2956,11 +2968,21 @@ int f2fs_quota_sync(struct super_block *sb, int type)
+ return ret;
+ }
+
++static int f2fs_quota_sync(struct super_block *sb, int type)
++{
++ int ret;
++
++ F2FS_SB(sb)->umount_lock_holder = current;
++ ret = f2fs_do_quota_sync(sb, type);
++ F2FS_SB(sb)->umount_lock_holder = NULL;
++ return ret;
++}
++
+ static int f2fs_quota_on(struct super_block *sb, int type, int format_id,
+ const struct path *path)
+ {
+ struct inode *inode;
+- int err;
++ int err = 0;
+
+ /* if quota sysfile exists, deny enabling quota with specific file */
+ if (f2fs_sb_has_quota_ino(F2FS_SB(sb))) {
+@@ -2971,31 +2993,34 @@ static int f2fs_quota_on(struct super_block *sb, int type, int format_id,
+ if (path->dentry->d_sb != sb)
+ return -EXDEV;
+
+- err = f2fs_quota_sync(sb, type);
++ F2FS_SB(sb)->umount_lock_holder = current;
++
++ err = f2fs_do_quota_sync(sb, type);
+ if (err)
+- return err;
++ goto out;
+
+ inode = d_inode(path->dentry);
+
+ err = filemap_fdatawrite(inode->i_mapping);
+ if (err)
+- return err;
++ goto out;
+
+ err = filemap_fdatawait(inode->i_mapping);
+ if (err)
+- return err;
++ goto out;
+
+ err = dquot_quota_on(sb, type, format_id, path);
+ if (err)
+- return err;
++ goto out;
+
+ inode_lock(inode);
+ F2FS_I(inode)->i_flags |= F2FS_QUOTA_DEFAULT_FL;
+ f2fs_set_inode_flags(inode);
+ inode_unlock(inode);
+ f2fs_mark_inode_dirty_sync(inode, false);
+-
+- return 0;
++out:
++ F2FS_SB(sb)->umount_lock_holder = NULL;
++ return err;
+ }
+
+ static int __f2fs_quota_off(struct super_block *sb, int type)
+@@ -3006,7 +3031,7 @@ static int __f2fs_quota_off(struct super_block *sb, int type)
+ if (!inode || !igrab(inode))
+ return dquot_quota_off(sb, type);
+
+- err = f2fs_quota_sync(sb, type);
++ err = f2fs_do_quota_sync(sb, type);
+ if (err)
+ goto out_put;
+
+@@ -3029,6 +3054,8 @@ static int f2fs_quota_off(struct super_block *sb, int type)
+ struct f2fs_sb_info *sbi = F2FS_SB(sb);
+ int err;
+
++ F2FS_SB(sb)->umount_lock_holder = current;
++
+ err = __f2fs_quota_off(sb, type);
+
+ /*
+@@ -3038,6 +3065,9 @@ static int f2fs_quota_off(struct super_block *sb, int type)
+ */
+ if (is_journalled_quota(sbi))
+ set_sbi_flag(sbi, SBI_QUOTA_NEED_REPAIR);
++
++ F2FS_SB(sb)->umount_lock_holder = NULL;
++
+ return err;
+ }
+
+@@ -3170,7 +3200,7 @@ int f2fs_dquot_initialize(struct inode *inode)
+ return 0;
+ }
+
+-int f2fs_quota_sync(struct super_block *sb, int type)
++int f2fs_do_quota_sync(struct super_block *sb, int type)
+ {
+ return 0;
+ }
+@@ -4703,6 +4733,7 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent)
+ if (err)
+ goto free_compress_inode;
+
++ sbi->umount_lock_holder = current;
+ #ifdef CONFIG_QUOTA
+ /* Enable quota usage during mount */
+ if (f2fs_sb_has_quota_ino(sbi) && !f2fs_readonly(sb)) {
+@@ -4769,10 +4800,10 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent)
+ }
+ }
+
++reset_checkpoint:
+ #ifdef CONFIG_QUOTA
+ f2fs_recover_quota_end(sbi, quota_enabled);
+ #endif
+-reset_checkpoint:
+ /*
+ * If the f2fs is not readonly and fsync data recovery succeeds,
+ * write pointer consistency of cursegs and other zones are already
+@@ -4829,6 +4860,8 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent)
+ f2fs_update_time(sbi, CP_TIME);
+ f2fs_update_time(sbi, REQ_TIME);
+ clear_sbi_flag(sbi, SBI_CP_DISABLED_QUICK);
++
++ sbi->umount_lock_holder = NULL;
+ return 0;
+
+ sync_free_meta:
+@@ -4931,6 +4964,8 @@ static void kill_f2fs_super(struct super_block *sb)
+ struct f2fs_sb_info *sbi = F2FS_SB(sb);
+
+ if (sb->s_root) {
++ sbi->umount_lock_holder = current;
++
+ set_sbi_flag(sbi, SBI_IS_CLOSE);
+ f2fs_stop_gc_thread(sbi);
+ f2fs_stop_discard_thread(sbi);
+diff --git a/fs/fsopen.c b/fs/fsopen.c
+index 094a7f510edfec..1aaf4cb2afb29e 100644
+--- a/fs/fsopen.c
++++ b/fs/fsopen.c
+@@ -453,7 +453,7 @@ SYSCALL_DEFINE5(fsconfig,
+ case FSCONFIG_SET_FD:
+ param.type = fs_value_is_file;
+ ret = -EBADF;
+- param.file = fget(aux);
++ param.file = fget_raw(aux);
+ if (!param.file)
+ goto out_key;
+ param.dirfd = aux;
+diff --git a/fs/fuse/dax.c b/fs/fuse/dax.c
+index 0b6ee6dd1fd656..b7f805d2a14fd5 100644
+--- a/fs/fuse/dax.c
++++ b/fs/fuse/dax.c
+@@ -682,7 +682,6 @@ static int __fuse_dax_break_layouts(struct inode *inode, bool *retry,
+ 0, 0, fuse_wait_dax_page(inode));
+ }
+
+-/* dmap_end == 0 leads to unmapping of whole file */
+ int fuse_dax_break_layouts(struct inode *inode, u64 dmap_start,
+ u64 dmap_end)
+ {
+diff --git a/fs/fuse/dir.c b/fs/fuse/dir.c
+index 3805f9b06c9d2d..3b031d24d36912 100644
+--- a/fs/fuse/dir.c
++++ b/fs/fuse/dir.c
+@@ -1940,7 +1940,7 @@ int fuse_do_setattr(struct mnt_idmap *idmap, struct dentry *dentry,
+ if (FUSE_IS_DAX(inode) && is_truncate) {
+ filemap_invalidate_lock(mapping);
+ fault_blocked = true;
+- err = fuse_dax_break_layouts(inode, 0, 0);
++ err = fuse_dax_break_layouts(inode, 0, -1);
+ if (err) {
+ filemap_invalidate_unlock(mapping);
+ return err;
+diff --git a/fs/fuse/file.c b/fs/fuse/file.c
+index d63e56fd3dd207..754378dd9f7159 100644
+--- a/fs/fuse/file.c
++++ b/fs/fuse/file.c
+@@ -253,7 +253,7 @@ static int fuse_open(struct inode *inode, struct file *file)
+
+ if (dax_truncate) {
+ filemap_invalidate_lock(inode->i_mapping);
+- err = fuse_dax_break_layouts(inode, 0, 0);
++ err = fuse_dax_break_layouts(inode, 0, -1);
+ if (err)
+ goto out_inode_unlock;
+ }
+@@ -3205,7 +3205,7 @@ static long fuse_file_fallocate(struct file *file, int mode, loff_t offset,
+ inode_lock(inode);
+ if (block_faults) {
+ filemap_invalidate_lock(inode->i_mapping);
+- err = fuse_dax_break_layouts(inode, 0, 0);
++ err = fuse_dax_break_layouts(inode, 0, -1);
+ if (err)
+ goto out;
+ }
+diff --git a/fs/gfs2/super.c b/fs/gfs2/super.c
+index 92a3b6ddafdc19..0e6ad7bf32be87 100644
+--- a/fs/gfs2/super.c
++++ b/fs/gfs2/super.c
+@@ -1338,12 +1338,8 @@ static enum evict_behavior evict_should_delete(struct inode *inode,
+
+ /* Must not read inode block until block type has been verified */
+ ret = gfs2_glock_nq_init(ip->i_gl, LM_ST_EXCLUSIVE, GL_SKIP, gh);
+- if (unlikely(ret)) {
+- glock_clear_object(ip->i_iopen_gh.gh_gl, ip);
+- ip->i_iopen_gh.gh_flags |= GL_NOCACHE;
+- gfs2_glock_dq_uninit(&ip->i_iopen_gh);
+- return EVICT_SHOULD_DEFER_DELETE;
+- }
++ if (unlikely(ret))
++ return EVICT_SHOULD_SKIP_DELETE;
+
+ if (gfs2_inode_already_deleted(ip->i_gl, ip->i_no_formal_ino))
+ return EVICT_SHOULD_SKIP_DELETE;
+@@ -1363,15 +1359,8 @@ static enum evict_behavior evict_should_delete(struct inode *inode,
+
+ should_delete:
+ if (gfs2_holder_initialized(&ip->i_iopen_gh) &&
+- test_bit(HIF_HOLDER, &ip->i_iopen_gh.gh_iflags)) {
+- enum evict_behavior behavior =
+- gfs2_upgrade_iopen_glock(inode);
+-
+- if (behavior != EVICT_SHOULD_DELETE) {
+- gfs2_holder_uninit(&ip->i_iopen_gh);
+- return behavior;
+- }
+- }
++ test_bit(HIF_HOLDER, &ip->i_iopen_gh.gh_iflags))
++ return gfs2_upgrade_iopen_glock(inode);
+ return EVICT_SHOULD_DELETE;
+ }
+
+@@ -1509,7 +1498,7 @@ static void gfs2_evict_inode(struct inode *inode)
+ gfs2_glock_put(io_gl);
+ goto out;
+ }
+- behavior = EVICT_SHOULD_DELETE;
++ behavior = EVICT_SHOULD_SKIP_DELETE;
+ }
+ if (behavior == EVICT_SHOULD_DELETE)
+ ret = evict_unlinked_inode(inode);
+diff --git a/fs/hostfs/hostfs.h b/fs/hostfs/hostfs.h
+index 8b39c15c408ccd..15b2f094d36ef8 100644
+--- a/fs/hostfs/hostfs.h
++++ b/fs/hostfs/hostfs.h
+@@ -60,7 +60,7 @@ struct hostfs_stat {
+ unsigned int uid;
+ unsigned int gid;
+ unsigned long long size;
+- struct hostfs_timespec atime, mtime, ctime;
++ struct hostfs_timespec atime, mtime, ctime, btime;
+ unsigned int blksize;
+ unsigned long long blocks;
+ struct {
+diff --git a/fs/hostfs/hostfs_kern.c b/fs/hostfs/hostfs_kern.c
+index e0741e468956dd..e6e24723572827 100644
+--- a/fs/hostfs/hostfs_kern.c
++++ b/fs/hostfs/hostfs_kern.c
+@@ -33,6 +33,7 @@ struct hostfs_inode_info {
+ struct inode vfs_inode;
+ struct mutex open_mutex;
+ dev_t dev;
++ struct hostfs_timespec btime;
+ };
+
+ static inline struct hostfs_inode_info *HOSTFS_I(struct inode *inode)
+@@ -547,6 +548,7 @@ static int hostfs_inode_set(struct inode *ino, void *data)
+ }
+
+ HOSTFS_I(ino)->dev = dev;
++ HOSTFS_I(ino)->btime = st->btime;
+ ino->i_ino = st->ino;
+ ino->i_mode = st->mode;
+ return hostfs_inode_update(ino, st);
+@@ -557,7 +559,10 @@ static int hostfs_inode_test(struct inode *inode, void *data)
+ const struct hostfs_stat *st = data;
+ dev_t dev = MKDEV(st->dev.maj, st->dev.min);
+
+- return inode->i_ino == st->ino && HOSTFS_I(inode)->dev == dev;
++ return inode->i_ino == st->ino && HOSTFS_I(inode)->dev == dev &&
++ (inode->i_mode & S_IFMT) == (st->mode & S_IFMT) &&
++ HOSTFS_I(inode)->btime.tv_sec == st->btime.tv_sec &&
++ HOSTFS_I(inode)->btime.tv_nsec == st->btime.tv_nsec;
+ }
+
+ static struct inode *hostfs_iget(struct super_block *sb, char *name)
+diff --git a/fs/hostfs/hostfs_user.c b/fs/hostfs/hostfs_user.c
+index 97e9c40a944883..3bcd9f35e70b22 100644
+--- a/fs/hostfs/hostfs_user.c
++++ b/fs/hostfs/hostfs_user.c
+@@ -18,39 +18,48 @@
+ #include "hostfs.h"
+ #include <utime.h>
+
+-static void stat64_to_hostfs(const struct stat64 *buf, struct hostfs_stat *p)
++static void statx_to_hostfs(const struct statx *buf, struct hostfs_stat *p)
+ {
+- p->ino = buf->st_ino;
+- p->mode = buf->st_mode;
+- p->nlink = buf->st_nlink;
+- p->uid = buf->st_uid;
+- p->gid = buf->st_gid;
+- p->size = buf->st_size;
+- p->atime.tv_sec = buf->st_atime;
+- p->atime.tv_nsec = 0;
+- p->ctime.tv_sec = buf->st_ctime;
+- p->ctime.tv_nsec = 0;
+- p->mtime.tv_sec = buf->st_mtime;
+- p->mtime.tv_nsec = 0;
+- p->blksize = buf->st_blksize;
+- p->blocks = buf->st_blocks;
+- p->rdev.maj = os_major(buf->st_rdev);
+- p->rdev.min = os_minor(buf->st_rdev);
+- p->dev.maj = os_major(buf->st_dev);
+- p->dev.min = os_minor(buf->st_dev);
++ p->ino = buf->stx_ino;
++ p->mode = buf->stx_mode;
++ p->nlink = buf->stx_nlink;
++ p->uid = buf->stx_uid;
++ p->gid = buf->stx_gid;
++ p->size = buf->stx_size;
++ p->atime.tv_sec = buf->stx_atime.tv_sec;
++ p->atime.tv_nsec = buf->stx_atime.tv_nsec;
++ p->ctime.tv_sec = buf->stx_ctime.tv_sec;
++ p->ctime.tv_nsec = buf->stx_ctime.tv_nsec;
++ p->mtime.tv_sec = buf->stx_mtime.tv_sec;
++ p->mtime.tv_nsec = buf->stx_mtime.tv_nsec;
++ if (buf->stx_mask & STATX_BTIME) {
++ p->btime.tv_sec = buf->stx_btime.tv_sec;
++ p->btime.tv_nsec = buf->stx_btime.tv_nsec;
++ } else {
++ memset(&p->btime, 0, sizeof(p->btime));
++ }
++ p->blksize = buf->stx_blksize;
++ p->blocks = buf->stx_blocks;
++ p->rdev.maj = buf->stx_rdev_major;
++ p->rdev.min = buf->stx_rdev_minor;
++ p->dev.maj = buf->stx_dev_major;
++ p->dev.min = buf->stx_dev_minor;
+ }
+
+ int stat_file(const char *path, struct hostfs_stat *p, int fd)
+ {
+- struct stat64 buf;
++ struct statx buf;
++ int flags = AT_SYMLINK_NOFOLLOW;
+
+ if (fd >= 0) {
+- if (fstat64(fd, &buf) < 0)
+- return -errno;
+- } else if (lstat64(path, &buf) < 0) {
+- return -errno;
++ flags |= AT_EMPTY_PATH;
++ path = "";
+ }
+- stat64_to_hostfs(&buf, p);
++
++ if ((statx(fd, path, flags, STATX_BASIC_STATS | STATX_BTIME, &buf)) < 0)
++ return -errno;
++
++ statx_to_hostfs(&buf, p);
+ return 0;
+ }
+
+diff --git a/fs/isofs/dir.c b/fs/isofs/dir.c
+index eb2f8273e6f15e..09df40b612fbf2 100644
+--- a/fs/isofs/dir.c
++++ b/fs/isofs/dir.c
+@@ -147,7 +147,8 @@ static int do_isofs_readdir(struct inode *inode, struct file *file,
+ de = tmpde;
+ }
+ /* Basic sanity check, whether name doesn't exceed dir entry */
+- if (de_len < de->name_len[0] +
++ if (de_len < sizeof(struct iso_directory_record) ||
++ de_len < de->name_len[0] +
+ sizeof(struct iso_directory_record)) {
+ printk(KERN_NOTICE "iso9660: Corrupted directory entry"
+ " in block %lu of inode %lu\n", block,
+diff --git a/fs/jbd2/journal.c b/fs/jbd2/journal.c
+index d8084b31b3610a..a10e086a0165b1 100644
+--- a/fs/jbd2/journal.c
++++ b/fs/jbd2/journal.c
+@@ -603,7 +603,7 @@ int jbd2_journal_start_commit(journal_t *journal, tid_t *ptid)
+ int jbd2_trans_will_send_data_barrier(journal_t *journal, tid_t tid)
+ {
+ int ret = 0;
+- transaction_t *commit_trans;
++ transaction_t *commit_trans, *running_trans;
+
+ if (!(journal->j_flags & JBD2_BARRIER))
+ return 0;
+@@ -613,6 +613,16 @@ int jbd2_trans_will_send_data_barrier(journal_t *journal, tid_t tid)
+ goto out;
+ commit_trans = journal->j_committing_transaction;
+ if (!commit_trans || commit_trans->t_tid != tid) {
++ running_trans = journal->j_running_transaction;
++ /*
++ * The query transaction hasn't started committing,
++ * it must still be running.
++ */
++ if (WARN_ON_ONCE(!running_trans ||
++ running_trans->t_tid != tid))
++ goto out;
++
++ running_trans->t_need_data_flush = 1;
+ ret = 1;
+ goto out;
+ }
+@@ -1965,17 +1975,15 @@ static int __jbd2_journal_erase(journal_t *journal, unsigned int flags)
+ return err;
+ }
+
+- if (block_start == ~0ULL) {
+- block_start = phys_block;
+- block_stop = block_start - 1;
+- }
++ if (block_start == ~0ULL)
++ block_stop = block_start = phys_block;
+
+ /*
+ * last block not contiguous with current block,
+ * process last contiguous region and return to this block on
+ * next loop
+ */
+- if (phys_block != block_stop + 1) {
++ if (phys_block != block_stop) {
+ block--;
+ } else {
+ block_stop++;
+@@ -1994,11 +2002,10 @@ static int __jbd2_journal_erase(journal_t *journal, unsigned int flags)
+ */
+ byte_start = block_start * journal->j_blocksize;
+ byte_stop = block_stop * journal->j_blocksize;
+- byte_count = (block_stop - block_start + 1) *
+- journal->j_blocksize;
++ byte_count = (block_stop - block_start) * journal->j_blocksize;
+
+ truncate_inode_pages_range(journal->j_dev->bd_mapping,
+- byte_start, byte_stop);
++ byte_start, byte_stop - 1);
+
+ if (flags & JBD2_JOURNAL_FLUSH_DISCARD) {
+ err = blkdev_issue_discard(journal->j_dev,
+@@ -2013,7 +2020,7 @@ static int __jbd2_journal_erase(journal_t *journal, unsigned int flags)
+ }
+
+ if (unlikely(err != 0)) {
+- pr_err("JBD2: (error %d) unable to wipe journal at physical blocks %llu - %llu",
++ pr_err("JBD2: (error %d) unable to wipe journal at physical blocks [%llu, %llu)",
+ err, block_start, block_stop);
+ return err;
+ }
+diff --git a/fs/jfs/inode.c b/fs/jfs/inode.c
+index 07cfdc4405968b..60fc92dee24d20 100644
+--- a/fs/jfs/inode.c
++++ b/fs/jfs/inode.c
+@@ -369,7 +369,7 @@ void jfs_truncate_nolock(struct inode *ip, loff_t length)
+
+ ASSERT(length >= 0);
+
+- if (test_cflag(COMMIT_Nolink, ip)) {
++ if (test_cflag(COMMIT_Nolink, ip) || isReadOnly(ip)) {
+ xtTruncate(0, ip, length, COMMIT_WMAP);
+ return;
+ }
+diff --git a/fs/jfs/jfs_dtree.c b/fs/jfs/jfs_dtree.c
+index 8f85177f284b5a..93db6eec446556 100644
+--- a/fs/jfs/jfs_dtree.c
++++ b/fs/jfs/jfs_dtree.c
+@@ -117,7 +117,8 @@ do { \
+ if (!(RC)) { \
+ if (((P)->header.nextindex > \
+ (((BN) == 0) ? DTROOTMAXSLOT : (P)->header.maxslot)) || \
+- ((BN) && ((P)->header.maxslot > DTPAGEMAXSLOT))) { \
++ ((BN) && (((P)->header.maxslot > DTPAGEMAXSLOT) || \
++ ((P)->header.stblindex >= DTPAGEMAXSLOT)))) { \
+ BT_PUTPAGE(MP); \
+ jfs_error((IP)->i_sb, \
+ "DT_GETPAGE: dtree page corrupt\n"); \
+diff --git a/fs/jfs/jfs_extent.c b/fs/jfs/jfs_extent.c
+index 63d21822d309be..46529bcc8297ea 100644
+--- a/fs/jfs/jfs_extent.c
++++ b/fs/jfs/jfs_extent.c
+@@ -74,6 +74,11 @@ extAlloc(struct inode *ip, s64 xlen, s64 pno, xad_t * xp, bool abnr)
+ int rc;
+ int xflag;
+
++ if (isReadOnly(ip)) {
++ jfs_error(ip->i_sb, "read-only filesystem\n");
++ return -EIO;
++ }
++
+ /* This blocks if we are low on resources */
+ txBeginAnon(ip->i_sb);
+
+@@ -253,6 +258,11 @@ int extRecord(struct inode *ip, xad_t * xp)
+ {
+ int rc;
+
++ if (isReadOnly(ip)) {
++ jfs_error(ip->i_sb, "read-only filesystem\n");
++ return -EIO;
++ }
++
+ txBeginAnon(ip->i_sb);
+
+ mutex_lock(&JFS_IP(ip)->commit_mutex);
+diff --git a/fs/jfs/jfs_imap.c b/fs/jfs/jfs_imap.c
+index a360b24ed320c0..debfc1389cb3e8 100644
+--- a/fs/jfs/jfs_imap.c
++++ b/fs/jfs/jfs_imap.c
+@@ -3029,14 +3029,23 @@ static void duplicateIXtree(struct super_block *sb, s64 blkno,
+ *
+ * RETURN VALUES:
+ * 0 - success
+- * -ENOMEM - insufficient memory
++ * -EINVAL - unexpected inode type
+ */
+ static int copy_from_dinode(struct dinode * dip, struct inode *ip)
+ {
+ struct jfs_inode_info *jfs_ip = JFS_IP(ip);
+ struct jfs_sb_info *sbi = JFS_SBI(ip->i_sb);
++ int fileset = le32_to_cpu(dip->di_fileset);
++
++ switch (fileset) {
++ case AGGR_RESERVED_I: case AGGREGATE_I: case BMAP_I:
++ case LOG_I: case BADBLOCK_I: case FILESYSTEM_I:
++ break;
++ default:
++ return -EINVAL;
++ }
+
+- jfs_ip->fileset = le32_to_cpu(dip->di_fileset);
++ jfs_ip->fileset = fileset;
+ jfs_ip->mode2 = le32_to_cpu(dip->di_mode);
+ jfs_set_inode_flags(ip);
+
+diff --git a/fs/jfs/xattr.c b/fs/jfs/xattr.c
+index 24afbae87225a7..11d7f74d207be0 100644
+--- a/fs/jfs/xattr.c
++++ b/fs/jfs/xattr.c
+@@ -559,11 +559,16 @@ static int ea_get(struct inode *inode, struct ea_buffer *ea_buf, int min_size)
+
+ size_check:
+ if (EALIST_SIZE(ea_buf->xattr) != ea_size) {
+- int size = clamp_t(int, ea_size, 0, EALIST_SIZE(ea_buf->xattr));
+-
+- printk(KERN_ERR "ea_get: invalid extended attribute\n");
+- print_hex_dump(KERN_ERR, "", DUMP_PREFIX_ADDRESS, 16, 1,
+- ea_buf->xattr, size, 1);
++ if (unlikely(EALIST_SIZE(ea_buf->xattr) > INT_MAX)) {
++ printk(KERN_ERR "ea_get: extended attribute size too large: %u > INT_MAX\n",
++ EALIST_SIZE(ea_buf->xattr));
++ } else {
++ int size = clamp_t(int, ea_size, 0, EALIST_SIZE(ea_buf->xattr));
++
++ printk(KERN_ERR "ea_get: invalid extended attribute\n");
++ print_hex_dump(KERN_ERR, "", DUMP_PREFIX_ADDRESS, 16, 1,
++ ea_buf->xattr, size, 1);
++ }
+ ea_release(inode, ea_buf);
+ rc = -EIO;
+ goto clean_up;
+diff --git a/fs/nfs/delegation.c b/fs/nfs/delegation.c
+index 4db912f5623055..325ba0663a6de2 100644
+--- a/fs/nfs/delegation.c
++++ b/fs/nfs/delegation.c
+@@ -79,6 +79,7 @@ static void nfs_mark_return_delegation(struct nfs_server *server,
+ struct nfs_delegation *delegation)
+ {
+ set_bit(NFS_DELEGATION_RETURN, &delegation->flags);
++ set_bit(NFS4SERV_DELEGRETURN, &server->delegation_flags);
+ set_bit(NFS4CLNT_DELEGRETURN, &server->nfs_client->cl_state);
+ }
+
+@@ -330,14 +331,16 @@ nfs_start_delegation_return(struct nfs_inode *nfsi)
+ }
+
+ static void nfs_abort_delegation_return(struct nfs_delegation *delegation,
+- struct nfs_client *clp, int err)
++ struct nfs_server *server, int err)
+ {
+-
+ spin_lock(&delegation->lock);
+ clear_bit(NFS_DELEGATION_RETURNING, &delegation->flags);
+ if (err == -EAGAIN) {
+ set_bit(NFS_DELEGATION_RETURN_DELAYED, &delegation->flags);
+- set_bit(NFS4CLNT_DELEGRETURN_DELAYED, &clp->cl_state);
++ set_bit(NFS4SERV_DELEGRETURN_DELAYED,
++ &server->delegation_flags);
++ set_bit(NFS4CLNT_DELEGRETURN_DELAYED,
++ &server->nfs_client->cl_state);
+ }
+ spin_unlock(&delegation->lock);
+ }
+@@ -547,7 +550,7 @@ int nfs_inode_set_delegation(struct inode *inode, const struct cred *cred,
+ */
+ static int nfs_end_delegation_return(struct inode *inode, struct nfs_delegation *delegation, int issync)
+ {
+- struct nfs_client *clp = NFS_SERVER(inode)->nfs_client;
++ struct nfs_server *server = NFS_SERVER(inode);
+ unsigned int mode = O_WRONLY | O_RDWR;
+ int err = 0;
+
+@@ -569,11 +572,11 @@ static int nfs_end_delegation_return(struct inode *inode, struct nfs_delegation
+ /*
+ * Guard against state recovery
+ */
+- err = nfs4_wait_clnt_recover(clp);
++ err = nfs4_wait_clnt_recover(server->nfs_client);
+ }
+
+ if (err) {
+- nfs_abort_delegation_return(delegation, clp, err);
++ nfs_abort_delegation_return(delegation, server, err);
+ goto out;
+ }
+
+@@ -590,17 +593,6 @@ static bool nfs_delegation_need_return(struct nfs_delegation *delegation)
+
+ if (test_and_clear_bit(NFS_DELEGATION_RETURN, &delegation->flags))
+ ret = true;
+- else if (test_bit(NFS_DELEGATION_RETURN_IF_CLOSED, &delegation->flags)) {
+- struct inode *inode;
+-
+- spin_lock(&delegation->lock);
+- inode = delegation->inode;
+- if (inode && list_empty(&NFS_I(inode)->open_files))
+- ret = true;
+- spin_unlock(&delegation->lock);
+- }
+- if (ret)
+- clear_bit(NFS_DELEGATION_RETURN_IF_CLOSED, &delegation->flags);
+ if (test_bit(NFS_DELEGATION_RETURNING, &delegation->flags) ||
+ test_bit(NFS_DELEGATION_RETURN_DELAYED, &delegation->flags) ||
+ test_bit(NFS_DELEGATION_REVOKED, &delegation->flags))
+@@ -619,6 +611,9 @@ static int nfs_server_return_marked_delegations(struct nfs_server *server,
+ struct nfs_delegation *place_holder_deleg = NULL;
+ int err = 0;
+
++ if (!test_and_clear_bit(NFS4SERV_DELEGRETURN,
++ &server->delegation_flags))
++ return 0;
+ restart:
+ /*
+ * To avoid quadratic looping we hold a reference
+@@ -670,6 +665,7 @@ static int nfs_server_return_marked_delegations(struct nfs_server *server,
+ cond_resched();
+ if (!err)
+ goto restart;
++ set_bit(NFS4SERV_DELEGRETURN, &server->delegation_flags);
+ set_bit(NFS4CLNT_DELEGRETURN, &server->nfs_client->cl_state);
+ goto out;
+ }
+@@ -684,6 +680,9 @@ static bool nfs_server_clear_delayed_delegations(struct nfs_server *server)
+ struct nfs_delegation *d;
+ bool ret = false;
+
++ if (!test_and_clear_bit(NFS4SERV_DELEGRETURN_DELAYED,
++ &server->delegation_flags))
++ goto out;
+ list_for_each_entry_rcu (d, &server->delegations, super_list) {
+ if (!test_bit(NFS_DELEGATION_RETURN_DELAYED, &d->flags))
+ continue;
+@@ -691,6 +690,7 @@ static bool nfs_server_clear_delayed_delegations(struct nfs_server *server)
+ clear_bit(NFS_DELEGATION_RETURN_DELAYED, &d->flags);
+ ret = true;
+ }
++out:
+ return ret;
+ }
+
+@@ -878,11 +878,25 @@ int nfs4_inode_make_writeable(struct inode *inode)
+ return nfs4_inode_return_delegation(inode);
+ }
+
+-static void nfs_mark_return_if_closed_delegation(struct nfs_server *server,
+- struct nfs_delegation *delegation)
++static void
++nfs_mark_return_if_closed_delegation(struct nfs_server *server,
++ struct nfs_delegation *delegation)
+ {
+- set_bit(NFS_DELEGATION_RETURN_IF_CLOSED, &delegation->flags);
+- set_bit(NFS4CLNT_DELEGRETURN, &server->nfs_client->cl_state);
++ struct inode *inode;
++
++ if (test_bit(NFS_DELEGATION_RETURN, &delegation->flags) ||
++ test_bit(NFS_DELEGATION_RETURN_IF_CLOSED, &delegation->flags))
++ return;
++ spin_lock(&delegation->lock);
++ inode = delegation->inode;
++ if (!inode)
++ goto out;
++ if (list_empty(&NFS_I(inode)->open_files))
++ nfs_mark_return_delegation(server, delegation);
++ else
++ set_bit(NFS_DELEGATION_RETURN_IF_CLOSED, &delegation->flags);
++out:
++ spin_unlock(&delegation->lock);
+ }
+
+ static bool nfs_server_mark_return_all_delegations(struct nfs_server *server)
+@@ -1276,6 +1290,7 @@ static void nfs_mark_test_expired_delegation(struct nfs_server *server,
+ return;
+ clear_bit(NFS_DELEGATION_NEED_RECLAIM, &delegation->flags);
+ set_bit(NFS_DELEGATION_TEST_EXPIRED, &delegation->flags);
++ set_bit(NFS4SERV_DELEGATION_EXPIRED, &server->delegation_flags);
+ set_bit(NFS4CLNT_DELEGATION_EXPIRED, &server->nfs_client->cl_state);
+ }
+
+@@ -1354,6 +1369,9 @@ static int nfs_server_reap_expired_delegations(struct nfs_server *server,
+ nfs4_stateid stateid;
+ unsigned long gen = ++server->delegation_gen;
+
++ if (!test_and_clear_bit(NFS4SERV_DELEGATION_EXPIRED,
++ &server->delegation_flags))
++ return 0;
+ restart:
+ rcu_read_lock();
+ list_for_each_entry_rcu(delegation, &server->delegations, super_list) {
+@@ -1383,6 +1401,9 @@ static int nfs_server_reap_expired_delegations(struct nfs_server *server,
+ goto restart;
+ }
+ nfs_inode_mark_test_expired_delegation(server,inode);
++ set_bit(NFS4SERV_DELEGATION_EXPIRED, &server->delegation_flags);
++ set_bit(NFS4CLNT_DELEGATION_EXPIRED,
++ &server->nfs_client->cl_state);
+ iput(inode);
+ return -EAGAIN;
+ }
+diff --git a/fs/nfs/nfs4xdr.c b/fs/nfs/nfs4xdr.c
+index e8ac3f615f932e..71f45cc0ca74d1 100644
+--- a/fs/nfs/nfs4xdr.c
++++ b/fs/nfs/nfs4xdr.c
+@@ -82,9 +82,8 @@ static int decode_layoutget(struct xdr_stream *xdr, struct rpc_rqst *req,
+ * we currently use size 2 (u64) out of (NFS4_OPAQUE_LIMIT >> 2)
+ */
+ #define pagepad_maxsz (1)
+-#define open_owner_id_maxsz (1 + 2 + 1 + 1 + 2)
+-#define lock_owner_id_maxsz (1 + 1 + 4)
+-#define decode_lockowner_maxsz (1 + XDR_QUADLEN(IDMAP_NAMESZ))
++#define open_owner_id_maxsz (2 + 1 + 2 + 2)
++#define lock_owner_id_maxsz (2 + 1 + 2)
+ #define compound_encode_hdr_maxsz (3 + (NFS4_MAXTAGLEN >> 2))
+ #define compound_decode_hdr_maxsz (3 + (NFS4_MAXTAGLEN >> 2))
+ #define op_encode_hdr_maxsz (1)
+@@ -185,7 +184,7 @@ static int decode_layoutget(struct xdr_stream *xdr, struct rpc_rqst *req,
+ #define encode_claim_null_maxsz (1 + nfs4_name_maxsz)
+ #define encode_open_maxsz (op_encode_hdr_maxsz + \
+ 2 + encode_share_access_maxsz + 2 + \
+- open_owner_id_maxsz + \
++ 1 + open_owner_id_maxsz + \
+ encode_opentype_maxsz + \
+ encode_claim_null_maxsz)
+ #define decode_space_limit_maxsz (3)
+@@ -255,13 +254,14 @@ static int decode_layoutget(struct xdr_stream *xdr, struct rpc_rqst *req,
+ #define encode_link_maxsz (op_encode_hdr_maxsz + \
+ nfs4_name_maxsz)
+ #define decode_link_maxsz (op_decode_hdr_maxsz + decode_change_info_maxsz)
+-#define encode_lockowner_maxsz (7)
++#define encode_lockowner_maxsz (2 + 1 + lock_owner_id_maxsz)
++
+ #define encode_lock_maxsz (op_encode_hdr_maxsz + \
+ 7 + \
+ 1 + encode_stateid_maxsz + 1 + \
+ encode_lockowner_maxsz)
+ #define decode_lock_denied_maxsz \
+- (8 + decode_lockowner_maxsz)
++ (2 + 2 + 1 + 2 + 1 + lock_owner_id_maxsz)
+ #define decode_lock_maxsz (op_decode_hdr_maxsz + \
+ decode_lock_denied_maxsz)
+ #define encode_lockt_maxsz (op_encode_hdr_maxsz + 5 + \
+@@ -617,7 +617,7 @@ static int decode_layoutget(struct xdr_stream *xdr, struct rpc_rqst *req,
+ encode_lockowner_maxsz)
+ #define NFS4_dec_release_lockowner_sz \
+ (compound_decode_hdr_maxsz + \
+- decode_lockowner_maxsz)
++ decode_release_lockowner_maxsz)
+ #define NFS4_enc_access_sz (compound_encode_hdr_maxsz + \
+ encode_sequence_maxsz + \
+ encode_putfh_maxsz + \
+@@ -1412,7 +1412,7 @@ static inline void encode_openhdr(struct xdr_stream *xdr, const struct nfs_opena
+ __be32 *p;
+ /*
+ * opcode 4, seqid 4, share_access 4, share_deny 4, clientid 8, ownerlen 4,
+- * owner 4 = 32
++ * owner 28
+ */
+ encode_nfs4_seqid(xdr, arg->seqid);
+ encode_share_access(xdr, arg->share_access);
+@@ -5077,7 +5077,7 @@ static int decode_link(struct xdr_stream *xdr, struct nfs4_change_info *cinfo)
+ /*
+ * We create the owner, so we know a proper owner.id length is 4.
+ */
+-static int decode_lock_denied (struct xdr_stream *xdr, struct file_lock *fl)
++static int decode_lock_denied(struct xdr_stream *xdr, struct file_lock *fl)
+ {
+ uint64_t offset, length, clientid;
+ __be32 *p;
+diff --git a/fs/nfs/sysfs.c b/fs/nfs/sysfs.c
+index 7b59a40d40c061..784f7c1d003bfc 100644
+--- a/fs/nfs/sysfs.c
++++ b/fs/nfs/sysfs.c
+@@ -14,6 +14,7 @@
+ #include <linux/rcupdate.h>
+ #include <linux/lockd/lockd.h>
+
++#include "internal.h"
+ #include "nfs4_fs.h"
+ #include "netns.h"
+ #include "sysfs.h"
+@@ -228,6 +229,25 @@ static void shutdown_client(struct rpc_clnt *clnt)
+ rpc_cancel_tasks(clnt, -EIO, shutdown_match_client, NULL);
+ }
+
++/*
++ * Shut down the nfs_client only once all the superblocks
++ * have been shut down.
++ */
++static void shutdown_nfs_client(struct nfs_client *clp)
++{
++ struct nfs_server *server;
++ rcu_read_lock();
++ list_for_each_entry_rcu(server, &clp->cl_superblocks, client_link) {
++ if (!(server->flags & NFS_MOUNT_SHUTDOWN)) {
++ rcu_read_unlock();
++ return;
++ }
++ }
++ rcu_read_unlock();
++ nfs_mark_client_ready(clp, -EIO);
++ shutdown_client(clp->cl_rpcclient);
++}
++
+ static ssize_t
+ shutdown_show(struct kobject *kobj, struct kobj_attribute *attr,
+ char *buf)
+@@ -259,7 +279,6 @@ shutdown_store(struct kobject *kobj, struct kobj_attribute *attr,
+
+ server->flags |= NFS_MOUNT_SHUTDOWN;
+ shutdown_client(server->client);
+- shutdown_client(server->nfs_client->cl_rpcclient);
+
+ if (!IS_ERR(server->client_acl))
+ shutdown_client(server->client_acl);
+@@ -267,6 +286,7 @@ shutdown_store(struct kobject *kobj, struct kobj_attribute *attr,
+ if (server->nlm_host)
+ shutdown_client(server->nlm_host->h_rpcclnt);
+ out:
++ shutdown_nfs_client(server->nfs_client);
+ return count;
+ }
+
+diff --git a/fs/nfs/write.c b/fs/nfs/write.c
+index aa3d8bea3ec061..23df8b214474f4 100644
+--- a/fs/nfs/write.c
++++ b/fs/nfs/write.c
+@@ -579,8 +579,10 @@ static struct nfs_page *nfs_lock_and_join_requests(struct folio *folio)
+
+ while (!nfs_lock_request(head)) {
+ ret = nfs_wait_on_request(head);
+- if (ret < 0)
++ if (ret < 0) {
++ nfs_release_request(head);
+ return ERR_PTR(ret);
++ }
+ }
+
+ /* Ensure that nobody removed the request before we locked it */
+diff --git a/fs/nfsd/Kconfig b/fs/nfsd/Kconfig
+index c0bd1509ccd480..792d3fed1b45fd 100644
+--- a/fs/nfsd/Kconfig
++++ b/fs/nfsd/Kconfig
+@@ -172,6 +172,16 @@ config NFSD_LEGACY_CLIENT_TRACKING
+ recoverydir, or spawn a process directly using a usermodehelper
+ upcall.
+
+- These legacy client tracking methods have proven to be probelmatic
++ These legacy client tracking methods have proven to be problematic
+ and will be removed in the future. Say Y here if you need support
+ for them in the interim.
++
++config NFSD_V4_DELEG_TIMESTAMPS
++ bool "Support delegated timestamps"
++ depends on NFSD_V4
++ default n
++ help
++ NFSD implements delegated timestamps according to
++ draft-ietf-nfsv4-delstid-08 "Extending the Opening of Files". This
++ is currently an experimental feature and is therefore left disabled
++ by default.
+diff --git a/fs/nfsd/nfs4callback.c b/fs/nfsd/nfs4callback.c
+index 484077200c5d7e..d649a3d65a3a55 100644
+--- a/fs/nfsd/nfs4callback.c
++++ b/fs/nfsd/nfs4callback.c
+@@ -101,15 +101,15 @@ static int decode_cb_fattr4(struct xdr_stream *xdr, uint32_t *bitmap,
+
+ if (bitmap[0] & FATTR4_WORD0_CHANGE)
+ if (xdr_stream_decode_u64(xdr, &fattr->ncf_cb_change) < 0)
+- return -NFSERR_BAD_XDR;
++ return -EIO;
+ if (bitmap[0] & FATTR4_WORD0_SIZE)
+ if (xdr_stream_decode_u64(xdr, &fattr->ncf_cb_fsize) < 0)
+- return -NFSERR_BAD_XDR;
++ return -EIO;
+ if (bitmap[2] & FATTR4_WORD2_TIME_DELEG_ACCESS) {
+ fattr4_time_deleg_access access;
+
+ if (!xdrgen_decode_fattr4_time_deleg_access(xdr, &access))
+- return -NFSERR_BAD_XDR;
++ return -EIO;
+ fattr->ncf_cb_atime.tv_sec = access.seconds;
+ fattr->ncf_cb_atime.tv_nsec = access.nseconds;
+
+@@ -118,7 +118,7 @@ static int decode_cb_fattr4(struct xdr_stream *xdr, uint32_t *bitmap,
+ fattr4_time_deleg_modify modify;
+
+ if (!xdrgen_decode_fattr4_time_deleg_modify(xdr, &modify))
+- return -NFSERR_BAD_XDR;
++ return -EIO;
+ fattr->ncf_cb_mtime.tv_sec = modify.seconds;
+ fattr->ncf_cb_mtime.tv_nsec = modify.nseconds;
+
+@@ -682,15 +682,15 @@ static int nfs4_xdr_dec_cb_getattr(struct rpc_rqst *rqstp,
+ if (unlikely(status || cb->cb_status))
+ return status;
+ if (xdr_stream_decode_uint32_array(xdr, bitmap, 3) < 0)
+- return -NFSERR_BAD_XDR;
++ return -EIO;
+ if (xdr_stream_decode_u32(xdr, &attrlen) < 0)
+- return -NFSERR_BAD_XDR;
++ return -EIO;
+ maxlen = sizeof(ncf->ncf_cb_change) + sizeof(ncf->ncf_cb_fsize);
+ if (bitmap[2] != 0)
+ maxlen += (sizeof(ncf->ncf_cb_mtime.tv_sec) +
+ sizeof(ncf->ncf_cb_mtime.tv_nsec)) * 2;
+ if (attrlen > maxlen)
+- return -NFSERR_BAD_XDR;
++ return -EIO;
+ status = decode_cb_fattr4(xdr, bitmap, ncf);
+ return status;
+ }
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index 153eeea2c7c999..2de49e2d6ac487 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -1050,6 +1050,12 @@ static struct nfs4_ol_stateid * nfs4_alloc_open_stateid(struct nfs4_client *clp)
+ return openlockstateid(stid);
+ }
+
++/*
++ * As the sc_free callback of deleg, this may be called by nfs4_put_stid
++ * in nfsd_break_one_deleg.
++ * Considering nfsd_break_one_deleg is called with the flc->flc_lock held,
++ * this function mustn't ever sleep.
++ */
+ static void nfs4_free_deleg(struct nfs4_stid *stid)
+ {
+ struct nfs4_delegation *dp = delegstateid(stid);
+@@ -5414,6 +5420,7 @@ static const struct nfsd4_callback_ops nfsd4_cb_recall_ops = {
+
+ static void nfsd_break_one_deleg(struct nfs4_delegation *dp)
+ {
++ bool queued;
+ /*
+ * We're assuming the state code never drops its reference
+ * without first removing the lease. Since we're in this lease
+@@ -5422,7 +5429,10 @@ static void nfsd_break_one_deleg(struct nfs4_delegation *dp)
+ * we know it's safe to take a reference.
+ */
+ refcount_inc(&dp->dl_stid.sc_count);
+- WARN_ON_ONCE(!nfsd4_run_cb(&dp->dl_recall));
++ queued = nfsd4_run_cb(&dp->dl_recall);
++ WARN_ON_ONCE(!queued);
++ if (!queued)
++ nfs4_put_stid(&dp->dl_stid);
+ }
+
+ /* Called from break_lease() with flc_lock held. */
+@@ -5948,11 +5958,23 @@ nfsd4_verify_setuid_write(struct nfsd4_open *open, struct nfsd_file *nf)
+ return 0;
+ }
+
++#ifdef CONFIG_NFSD_V4_DELEG_TIMESTAMPS
++static bool nfsd4_want_deleg_timestamps(const struct nfsd4_open *open)
++{
++ return open->op_deleg_want & OPEN4_SHARE_ACCESS_WANT_DELEG_TIMESTAMPS;
++}
++#else /* CONFIG_NFSD_V4_DELEG_TIMESTAMPS */
++static bool nfsd4_want_deleg_timestamps(const struct nfsd4_open *open)
++{
++ return false;
++}
++#endif /* CONFIG_NFSD_V4_DELEG_TIMESTAMPS */
++
+ static struct nfs4_delegation *
+ nfs4_set_delegation(struct nfsd4_open *open, struct nfs4_ol_stateid *stp,
+ struct svc_fh *parent)
+ {
+- bool deleg_ts = open->op_deleg_want & OPEN4_SHARE_ACCESS_WANT_DELEG_TIMESTAMPS;
++ bool deleg_ts = nfsd4_want_deleg_timestamps(open);
+ struct nfs4_client *clp = stp->st_stid.sc_client;
+ struct nfs4_file *fp = stp->st_stid.sc_file;
+ struct nfs4_clnt_odstate *odstate = stp->st_clnt_odstate;
+@@ -6151,8 +6173,8 @@ static void
+ nfs4_open_delegation(struct nfsd4_open *open, struct nfs4_ol_stateid *stp,
+ struct svc_fh *currentfh)
+ {
+- bool deleg_ts = open->op_deleg_want & OPEN4_SHARE_ACCESS_WANT_DELEG_TIMESTAMPS;
+ struct nfs4_openowner *oo = openowner(stp->st_stateowner);
++ bool deleg_ts = nfsd4_want_deleg_timestamps(open);
+ struct nfs4_client *clp = stp->st_stid.sc_client;
+ struct svc_fh *parent = NULL;
+ struct nfs4_delegation *dp;
+@@ -6860,14 +6882,19 @@ deleg_reaper(struct nfsd_net *nn)
+ spin_lock(&nn->client_lock);
+ list_for_each_safe(pos, next, &nn->client_lru) {
+ clp = list_entry(pos, struct nfs4_client, cl_lru);
+- if (clp->cl_state != NFSD4_ACTIVE ||
+- list_empty(&clp->cl_delegations) ||
+- atomic_read(&clp->cl_delegs_in_recall) ||
+- test_bit(NFSD4_CLIENT_CB_RECALL_ANY, &clp->cl_flags) ||
+- (ktime_get_boottime_seconds() -
+- clp->cl_ra_time < 5)) {
++
++ if (clp->cl_state != NFSD4_ACTIVE)
++ continue;
++ if (list_empty(&clp->cl_delegations))
++ continue;
++ if (atomic_read(&clp->cl_delegs_in_recall))
++ continue;
++ if (test_bit(NFSD4_CLIENT_CB_RECALL_ANY, &clp->cl_flags))
++ continue;
++ if (ktime_get_boottime_seconds() - clp->cl_ra_time < 5)
++ continue;
++ if (clp->cl_cb_state != NFSD4_CB_UP)
+ continue;
+- }
+ list_add(&clp->cl_ra_cblist, &cblist);
+
+ /* release in nfsd4_cb_recall_any_release */
+@@ -7051,7 +7078,7 @@ nfsd4_lookup_stateid(struct nfsd4_compound_state *cstate,
+ */
+ statusmask |= SC_STATUS_REVOKED;
+
+- statusmask |= SC_STATUS_ADMIN_REVOKED;
++ statusmask |= SC_STATUS_ADMIN_REVOKED | SC_STATUS_FREEABLE;
+
+ if (ZERO_STATEID(stateid) || ONE_STATEID(stateid) ||
+ CLOSE_STATEID(stateid))
+@@ -7706,9 +7733,7 @@ nfsd4_delegreturn(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ if ((status = fh_verify(rqstp, &cstate->current_fh, S_IFREG, 0)))
+ return status;
+
+- status = nfsd4_lookup_stateid(cstate, stateid, SC_TYPE_DELEG,
+- SC_STATUS_REVOKED | SC_STATUS_FREEABLE,
+- &s, nn);
++ status = nfsd4_lookup_stateid(cstate, stateid, SC_TYPE_DELEG, SC_STATUS_REVOKED, &s, nn);
+ if (status)
+ goto out;
+ dp = delegstateid(s);
+diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
+index ce2a71e4904c1d..ac265d6fde35df 100644
+--- a/fs/nfsd/nfsctl.c
++++ b/fs/nfsd/nfsctl.c
+@@ -1917,6 +1917,7 @@ int nfsd_nl_listener_set_doit(struct sk_buff *skb, struct genl_info *info)
+ struct svc_serv *serv;
+ LIST_HEAD(permsocks);
+ struct nfsd_net *nn;
++ bool delete = false;
+ int err, rem;
+
+ mutex_lock(&nfsd_mutex);
+@@ -1977,34 +1978,28 @@ int nfsd_nl_listener_set_doit(struct sk_buff *skb, struct genl_info *info)
+ }
+ }
+
+- /* For now, no removing old sockets while server is running */
+- if (serv->sv_nrthreads && !list_empty(&permsocks)) {
++ /*
++ * If there are listener transports remaining on the permsocks list,
++ * it means we were asked to remove a listener.
++ */
++ if (!list_empty(&permsocks)) {
+ list_splice_init(&permsocks, &serv->sv_permsocks);
+- spin_unlock_bh(&serv->sv_lock);
+- err = -EBUSY;
+- goto out_unlock_mtx;
++ delete = true;
+ }
++ spin_unlock_bh(&serv->sv_lock);
+
+- /* Close the remaining sockets on the permsocks list */
+- while (!list_empty(&permsocks)) {
+- xprt = list_first_entry(&permsocks, struct svc_xprt, xpt_list);
+- list_move(&xprt->xpt_list, &serv->sv_permsocks);
+-
+- /*
+- * Newly-created sockets are born with the BUSY bit set. Clear
+- * it if there are no threads, since nothing can pick it up
+- * in that case.
+- */
+- if (!serv->sv_nrthreads)
+- clear_bit(XPT_BUSY, &xprt->xpt_flags);
+-
+- set_bit(XPT_CLOSE, &xprt->xpt_flags);
+- spin_unlock_bh(&serv->sv_lock);
+- svc_xprt_close(xprt);
+- spin_lock_bh(&serv->sv_lock);
++ /* Do not remove listeners while there are active threads. */
++ if (serv->sv_nrthreads) {
++ err = -EBUSY;
++ goto out_unlock_mtx;
+ }
+
+- spin_unlock_bh(&serv->sv_lock);
++ /*
++ * Since we can't delete an arbitrary llist entry, destroy the
++ * remaining listeners and recreate the list.
++ */
++ if (delete)
++ svc_xprt_destroy_all(serv, net);
+
+ /* walk list of addrs again, open any that still don't exist */
+ nlmsg_for_each_attr(attr, info->nlhdr, GENL_HDRLEN, rem) {
+@@ -2031,6 +2026,9 @@ int nfsd_nl_listener_set_doit(struct sk_buff *skb, struct genl_info *info)
+
+ xprt = svc_find_listener(serv, xcl_name, net, sa);
+ if (xprt) {
++ if (delete)
++ WARN_ONCE(1, "Transport type=%s already exists\n",
++ xcl_name);
+ svc_xprt_put(xprt);
+ continue;
+ }
+@@ -2204,8 +2202,14 @@ static __net_init int nfsd_net_init(struct net *net)
+ NFSD_STATS_COUNTERS_NUM);
+ if (retval)
+ goto out_repcache_error;
++
+ memset(&nn->nfsd_svcstats, 0, sizeof(nn->nfsd_svcstats));
+ nn->nfsd_svcstats.program = &nfsd_programs[0];
++ if (!nfsd_proc_stat_init(net)) {
++ retval = -ENOMEM;
++ goto out_proc_error;
++ }
++
+ for (i = 0; i < sizeof(nn->nfsd_versions); i++)
+ nn->nfsd_versions[i] = nfsd_support_version(i);
+ for (i = 0; i < sizeof(nn->nfsd4_minorversions); i++)
+@@ -2215,13 +2219,14 @@ static __net_init int nfsd_net_init(struct net *net)
+ nfsd4_init_leases_net(nn);
+ get_random_bytes(&nn->siphash_key, sizeof(nn->siphash_key));
+ seqlock_init(&nn->writeverf_lock);
+- nfsd_proc_stat_init(net);
+ #if IS_ENABLED(CONFIG_NFS_LOCALIO)
+ spin_lock_init(&nn->local_clients_lock);
+ INIT_LIST_HEAD(&nn->local_clients);
+ #endif
+ return 0;
+
++out_proc_error:
++ percpu_counter_destroy_many(nn->counter, NFSD_STATS_COUNTERS_NUM);
+ out_repcache_error:
+ nfsd_idmap_shutdown(net);
+ out_idmap_error:
+diff --git a/fs/nfsd/stats.c b/fs/nfsd/stats.c
+index bb22893f1157e4..f7eaf95e20fc87 100644
+--- a/fs/nfsd/stats.c
++++ b/fs/nfsd/stats.c
+@@ -73,11 +73,11 @@ static int nfsd_show(struct seq_file *seq, void *v)
+
+ DEFINE_PROC_SHOW_ATTRIBUTE(nfsd);
+
+-void nfsd_proc_stat_init(struct net *net)
++struct proc_dir_entry *nfsd_proc_stat_init(struct net *net)
+ {
+ struct nfsd_net *nn = net_generic(net, nfsd_net_id);
+
+- svc_proc_register(net, &nn->nfsd_svcstats, &nfsd_proc_ops);
++ return svc_proc_register(net, &nn->nfsd_svcstats, &nfsd_proc_ops);
+ }
+
+ void nfsd_proc_stat_shutdown(struct net *net)
+diff --git a/fs/nfsd/stats.h b/fs/nfsd/stats.h
+index 04aacb6c36e257..e4efb0e4e56d46 100644
+--- a/fs/nfsd/stats.h
++++ b/fs/nfsd/stats.h
+@@ -10,7 +10,7 @@
+ #include <uapi/linux/nfsd/stats.h>
+ #include <linux/percpu_counter.h>
+
+-void nfsd_proc_stat_init(struct net *net);
++struct proc_dir_entry *nfsd_proc_stat_init(struct net *net);
+ void nfsd_proc_stat_shutdown(struct net *net);
+
+ static inline void nfsd_stats_rc_hits_inc(struct nfsd_net *nn)
+diff --git a/fs/nfsd/vfs.c b/fs/nfsd/vfs.c
+index 29cb7b812d713c..6cd130b5c2b687 100644
+--- a/fs/nfsd/vfs.c
++++ b/fs/nfsd/vfs.c
+@@ -1931,9 +1931,17 @@ nfsd_rename(struct svc_rqst *rqstp, struct svc_fh *ffhp, char *fname, int flen,
+ return err;
+ }
+
+-/*
+- * Unlink a file or directory
+- * N.B. After this call fhp needs an fh_put
++/**
++ * nfsd_unlink - remove a directory entry
++ * @rqstp: RPC transaction context
++ * @fhp: the file handle of the parent directory to be modified
++ * @type: enforced file type of the object to be removed
++ * @fname: the name of the directory entry to be removed
++ * @flen: length of @fname in octets
++ *
++ * After this call fhp needs an fh_put.
++ *
++ * Returns a generic NFS status code in network byte-order.
+ */
+ __be32
+ nfsd_unlink(struct svc_rqst *rqstp, struct svc_fh *fhp, int type,
+@@ -2007,15 +2015,17 @@ nfsd_unlink(struct svc_rqst *rqstp, struct svc_fh *fhp, int type,
+ fh_drop_write(fhp);
+ out_nfserr:
+ if (host_err == -EBUSY) {
+- /* name is mounted-on. There is no perfect
+- * error status.
++ /*
++ * See RFC 8881 Section 18.25.4 para 4: NFSv4 REMOVE
++ * wants a status unique to the object type.
+ */
+- err = nfserr_file_open;
+- } else {
+- err = nfserrno(host_err);
++ if (type != S_IFDIR)
++ err = nfserr_file_open;
++ else
++ err = nfserr_acces;
+ }
+ out:
+- return err;
++ return err != nfs_ok ? err : nfserrno(host_err);
+ out_unlock:
+ inode_unlock(dirp);
+ goto out_drop_write;
+diff --git a/fs/ntfs3/attrib.c b/fs/ntfs3/attrib.c
+index af94e3737470d8..e946f75eb5406b 100644
+--- a/fs/ntfs3/attrib.c
++++ b/fs/ntfs3/attrib.c
+@@ -2664,8 +2664,9 @@ int attr_set_compress(struct ntfs_inode *ni, bool compr)
+ attr->nres.run_off = cpu_to_le16(run_off);
+ }
+
+- /* Update data attribute flags. */
++ /* Update attribute flags. */
+ if (compr) {
++ attr->flags &= ~ATTR_FLAG_SPARSED;
+ attr->flags |= ATTR_FLAG_COMPRESSED;
+ attr->nres.c_unit = NTFS_LZNT_CUNIT;
+ } else {
+diff --git a/fs/ntfs3/file.c b/fs/ntfs3/file.c
+index 3f96a11804c906..e9f701f884e72c 100644
+--- a/fs/ntfs3/file.c
++++ b/fs/ntfs3/file.c
+@@ -101,8 +101,26 @@ int ntfs_fileattr_set(struct mnt_idmap *idmap, struct dentry *dentry,
+ /* Allowed to change compression for empty files and for directories only. */
+ if (!is_dedup(ni) && !is_encrypted(ni) &&
+ (S_ISREG(inode->i_mode) || S_ISDIR(inode->i_mode))) {
+- /* Change compress state. */
+- int err = ni_set_compress(inode, flags & FS_COMPR_FL);
++ int err = 0;
++ struct address_space *mapping = inode->i_mapping;
++
++ /* write out all data and wait. */
++ filemap_invalidate_lock(mapping);
++ err = filemap_write_and_wait(mapping);
++
++ if (err >= 0) {
++ /* Change compress state. */
++ bool compr = flags & FS_COMPR_FL;
++ err = ni_set_compress(inode, compr);
++
++ /* For files change a_ops too. */
++ if (!err)
++ mapping->a_ops = compr ? &ntfs_aops_cmpr :
++ &ntfs_aops;
++ }
++
++ filemap_invalidate_unlock(mapping);
++
+ if (err)
+ return err;
+ }
+diff --git a/fs/ntfs3/frecord.c b/fs/ntfs3/frecord.c
+index 5df6a0b5add90e..81271196c5571b 100644
+--- a/fs/ntfs3/frecord.c
++++ b/fs/ntfs3/frecord.c
+@@ -3434,10 +3434,12 @@ int ni_set_compress(struct inode *inode, bool compr)
+ }
+
+ ni->std_fa = std->fa;
+- if (compr)
++ if (compr) {
++ std->fa &= ~FILE_ATTRIBUTE_SPARSE_FILE;
+ std->fa |= FILE_ATTRIBUTE_COMPRESSED;
+- else
++ } else {
+ std->fa &= ~FILE_ATTRIBUTE_COMPRESSED;
++ }
+
+ if (ni->std_fa != std->fa) {
+ ni->std_fa = std->fa;
+diff --git a/fs/ntfs3/index.c b/fs/ntfs3/index.c
+index 7eb9fae22f8da6..78d20e4baa2c9a 100644
+--- a/fs/ntfs3/index.c
++++ b/fs/ntfs3/index.c
+@@ -618,7 +618,7 @@ static bool index_hdr_check(const struct INDEX_HDR *hdr, u32 bytes)
+ u32 off = le32_to_cpu(hdr->de_off);
+
+ if (!IS_ALIGNED(off, 8) || tot > bytes || end > tot ||
+- off + sizeof(struct NTFS_DE) > end) {
++ size_add(off, sizeof(struct NTFS_DE)) > end) {
+ /* incorrect index buffer. */
+ return false;
+ }
+@@ -736,7 +736,7 @@ static struct NTFS_DE *hdr_find_e(const struct ntfs_index *indx,
+ if (end > total)
+ return NULL;
+
+- if (off + sizeof(struct NTFS_DE) > end)
++ if (size_add(off, sizeof(struct NTFS_DE)) > end)
+ return NULL;
+
+ e = Add2Ptr(hdr, off);
+diff --git a/fs/ntfs3/ntfs.h b/fs/ntfs3/ntfs.h
+index 241f2ffdd9201a..1ff13b6f961326 100644
+--- a/fs/ntfs3/ntfs.h
++++ b/fs/ntfs3/ntfs.h
+@@ -717,7 +717,7 @@ static inline struct NTFS_DE *hdr_first_de(const struct INDEX_HDR *hdr)
+ struct NTFS_DE *e;
+ u16 esize;
+
+- if (de_off >= used || de_off + sizeof(struct NTFS_DE) > used )
++ if (de_off >= used || size_add(de_off, sizeof(struct NTFS_DE)) > used)
+ return NULL;
+
+ e = Add2Ptr(hdr, de_off);
+diff --git a/fs/ntfs3/super.c b/fs/ntfs3/super.c
+index 6a0f6b0a3ab2a5..920a1ab47b631d 100644
+--- a/fs/ntfs3/super.c
++++ b/fs/ntfs3/super.c
+@@ -555,6 +555,55 @@ static const struct proc_ops ntfs3_label_fops = {
+ .proc_write = ntfs3_label_write,
+ };
+
++static void ntfs_create_procdir(struct super_block *sb)
++{
++ struct proc_dir_entry *e;
++
++ if (!proc_info_root)
++ return;
++
++ e = proc_mkdir(sb->s_id, proc_info_root);
++ if (e) {
++ struct ntfs_sb_info *sbi = sb->s_fs_info;
++
++ proc_create_data("volinfo", 0444, e,
++ &ntfs3_volinfo_fops, sb);
++ proc_create_data("label", 0644, e,
++ &ntfs3_label_fops, sb);
++ sbi->procdir = e;
++ }
++}
++
++static void ntfs_remove_procdir(struct super_block *sb)
++{
++ struct ntfs_sb_info *sbi = sb->s_fs_info;
++
++ if (!sbi->procdir)
++ return;
++
++ remove_proc_entry("label", sbi->procdir);
++ remove_proc_entry("volinfo", sbi->procdir);
++ remove_proc_entry(sb->s_id, proc_info_root);
++ sbi->procdir = NULL;
++}
++
++static void ntfs_create_proc_root(void)
++{
++ proc_info_root = proc_mkdir("fs/ntfs3", NULL);
++}
++
++static void ntfs_remove_proc_root(void)
++{
++ if (proc_info_root) {
++ remove_proc_entry("fs/ntfs3", NULL);
++ proc_info_root = NULL;
++ }
++}
++#else
++static void ntfs_create_procdir(struct super_block *sb) {}
++static void ntfs_remove_procdir(struct super_block *sb) {}
++static void ntfs_create_proc_root(void) {}
++static void ntfs_remove_proc_root(void) {}
+ #endif
+
+ static struct kmem_cache *ntfs_inode_cachep;
+@@ -644,15 +693,7 @@ static void ntfs_put_super(struct super_block *sb)
+ {
+ struct ntfs_sb_info *sbi = sb->s_fs_info;
+
+-#ifdef CONFIG_PROC_FS
+- // Remove /proc/fs/ntfs3/..
+- if (sbi->procdir) {
+- remove_proc_entry("label", sbi->procdir);
+- remove_proc_entry("volinfo", sbi->procdir);
+- remove_proc_entry(sb->s_id, proc_info_root);
+- sbi->procdir = NULL;
+- }
+-#endif
++ ntfs_remove_procdir(sb);
+
+ /* Mark rw ntfs as clear, if possible. */
+ ntfs_set_state(sbi, NTFS_DIRTY_CLEAR);
+@@ -1590,20 +1631,7 @@ static int ntfs_fill_super(struct super_block *sb, struct fs_context *fc)
+ kfree(boot2);
+ }
+
+-#ifdef CONFIG_PROC_FS
+- /* Create /proc/fs/ntfs3/.. */
+- if (proc_info_root) {
+- struct proc_dir_entry *e = proc_mkdir(sb->s_id, proc_info_root);
+- static_assert((S_IRUGO | S_IWUSR) == 0644);
+- if (e) {
+- proc_create_data("volinfo", S_IRUGO, e,
+- &ntfs3_volinfo_fops, sb);
+- proc_create_data("label", S_IRUGO | S_IWUSR, e,
+- &ntfs3_label_fops, sb);
+- sbi->procdir = e;
+- }
+- }
+-#endif
++ ntfs_create_procdir(sb);
+
+ if (is_legacy_ntfs(sb))
+ sb->s_flags |= SB_RDONLY;
+@@ -1853,14 +1881,11 @@ static int __init init_ntfs_fs(void)
+ if (IS_ENABLED(CONFIG_NTFS3_LZX_XPRESS))
+ pr_info("ntfs3: Read-only LZX/Xpress compression included\n");
+
+-#ifdef CONFIG_PROC_FS
+- /* Create "/proc/fs/ntfs3" */
+- proc_info_root = proc_mkdir("fs/ntfs3", NULL);
+-#endif
++ ntfs_create_proc_root();
+
+ err = ntfs3_init_bitmap();
+ if (err)
+- return err;
++ goto out2;
+
+ ntfs_inode_cachep = kmem_cache_create(
+ "ntfs_inode_cache", sizeof(struct ntfs_inode), 0,
+@@ -1880,6 +1905,8 @@ static int __init init_ntfs_fs(void)
+ kmem_cache_destroy(ntfs_inode_cachep);
+ out1:
+ ntfs3_exit_bitmap();
++out2:
++ ntfs_remove_proc_root();
+ return err;
+ }
+
+@@ -1890,11 +1917,7 @@ static void __exit exit_ntfs_fs(void)
+ unregister_filesystem(&ntfs_fs_type);
+ unregister_as_ntfs_legacy();
+ ntfs3_exit_bitmap();
+-
+-#ifdef CONFIG_PROC_FS
+- if (proc_info_root)
+- remove_proc_entry("fs/ntfs3", NULL);
+-#endif
++ ntfs_remove_proc_root();
+ }
+
+ MODULE_LICENSE("GPL");
+diff --git a/fs/ocfs2/alloc.c b/fs/ocfs2/alloc.c
+index 4414743b638e82..b8ac85b548c7e5 100644
+--- a/fs/ocfs2/alloc.c
++++ b/fs/ocfs2/alloc.c
+@@ -1803,6 +1803,14 @@ static int __ocfs2_find_path(struct ocfs2_caching_info *ci,
+
+ el = root_el;
+ while (el->l_tree_depth) {
++ if (unlikely(le16_to_cpu(el->l_tree_depth) >= OCFS2_MAX_PATH_DEPTH)) {
++ ocfs2_error(ocfs2_metadata_cache_get_super(ci),
++ "Owner %llu has invalid tree depth %u in extent list\n",
++ (unsigned long long)ocfs2_metadata_cache_owner(ci),
++ le16_to_cpu(el->l_tree_depth));
++ ret = -EROFS;
++ goto out;
++ }
+ if (le16_to_cpu(el->l_next_free_rec) == 0) {
+ ocfs2_error(ocfs2_metadata_cache_get_super(ci),
+ "Owner %llu has empty extent list at depth %u\n",
+diff --git a/fs/proc/base.c b/fs/proc/base.c
+index cd89e956c32244..7feb8f41aa253f 100644
+--- a/fs/proc/base.c
++++ b/fs/proc/base.c
+@@ -416,7 +416,7 @@ static const struct file_operations proc_pid_cmdline_ops = {
+ #ifdef CONFIG_KALLSYMS
+ /*
+ * Provides a wchan file via kallsyms in a proper one-value-per-file format.
+- * Returns the resolved symbol. If that fails, simply return the address.
++ * Returns the resolved symbol to user space.
+ */
+ static int proc_pid_wchan(struct seq_file *m, struct pid_namespace *ns,
+ struct pid *pid, struct task_struct *task)
+diff --git a/fs/smb/client/connect.c b/fs/smb/client/connect.c
+index 73f93a35eeddbd..cb14a6828c501c 100644
+--- a/fs/smb/client/connect.c
++++ b/fs/smb/client/connect.c
+@@ -300,6 +300,7 @@ cifs_abort_connection(struct TCP_Server_Info *server)
+ server->ssocket->flags);
+ sock_release(server->ssocket);
+ server->ssocket = NULL;
++ put_net(cifs_net_ns(server));
+ }
+ server->sequence_number = 0;
+ server->session_estab = false;
+@@ -3123,8 +3124,12 @@ generic_ip_connect(struct TCP_Server_Info *server)
+ /*
+ * Grab netns reference for the socket.
+ *
+- * It'll be released here, on error, or in clean_demultiplex_info() upon server
+- * teardown.
++ * This reference will be released in several situations:
++ * - In the failure path before the cifsd thread is started.
++ * - In all places where server->ssocket is released, it is
++ * also set to NULL.
++ * - Ultimately in clean_demultiplex_info(), during the final
++ * teardown.
+ */
+ get_net(net);
+
+@@ -3140,10 +3145,8 @@ generic_ip_connect(struct TCP_Server_Info *server)
+ }
+
+ rc = bind_socket(server);
+- if (rc < 0) {
+- put_net(cifs_net_ns(server));
++ if (rc < 0)
+ return rc;
+- }
+
+ /*
+ * Eventually check for other socket options to change from
+@@ -3189,9 +3192,6 @@ generic_ip_connect(struct TCP_Server_Info *server)
+ if (sport == htons(RFC1001_PORT))
+ rc = ip_rfc1001_connect(server);
+
+- if (rc < 0)
+- put_net(cifs_net_ns(server));
+-
+ return rc;
+ }
+
+diff --git a/fs/smb/server/auth.c b/fs/smb/server/auth.c
+index 2a5b4a96bf9938..83caa384974932 100644
+--- a/fs/smb/server/auth.c
++++ b/fs/smb/server/auth.c
+@@ -1016,9 +1016,9 @@ static int ksmbd_get_encryption_key(struct ksmbd_work *work, __u64 ses_id,
+
+ ses_enc_key = enc ? sess->smb3encryptionkey :
+ sess->smb3decryptionkey;
+- if (enc)
+- ksmbd_user_session_get(sess);
+ memcpy(key, ses_enc_key, SMB3_ENC_DEC_KEY_SIZE);
++ if (!enc)
++ ksmbd_user_session_put(sess);
+
+ return 0;
+ }
+@@ -1218,7 +1218,7 @@ int ksmbd_crypt_message(struct ksmbd_work *work, struct kvec *iov,
+ free_sg:
+ kfree(sg);
+ free_req:
+- kfree(req);
++ aead_request_free(req);
+ free_ctx:
+ ksmbd_release_crypto_ctx(ctx);
+ return rc;
+diff --git a/fs/smb/server/connection.h b/fs/smb/server/connection.h
+index 91c2318639e766..14620e147dda57 100644
+--- a/fs/smb/server/connection.h
++++ b/fs/smb/server/connection.h
+@@ -27,6 +27,7 @@ enum {
+ KSMBD_SESS_EXITING,
+ KSMBD_SESS_NEED_RECONNECT,
+ KSMBD_SESS_NEED_NEGOTIATE,
++ KSMBD_SESS_NEED_SETUP,
+ KSMBD_SESS_RELEASING
+ };
+
+@@ -187,6 +188,11 @@ static inline bool ksmbd_conn_need_negotiate(struct ksmbd_conn *conn)
+ return READ_ONCE(conn->status) == KSMBD_SESS_NEED_NEGOTIATE;
+ }
+
++static inline bool ksmbd_conn_need_setup(struct ksmbd_conn *conn)
++{
++ return READ_ONCE(conn->status) == KSMBD_SESS_NEED_SETUP;
++}
++
+ static inline bool ksmbd_conn_need_reconnect(struct ksmbd_conn *conn)
+ {
+ return READ_ONCE(conn->status) == KSMBD_SESS_NEED_RECONNECT;
+@@ -217,6 +223,11 @@ static inline void ksmbd_conn_set_need_negotiate(struct ksmbd_conn *conn)
+ WRITE_ONCE(conn->status, KSMBD_SESS_NEED_NEGOTIATE);
+ }
+
++static inline void ksmbd_conn_set_need_setup(struct ksmbd_conn *conn)
++{
++ WRITE_ONCE(conn->status, KSMBD_SESS_NEED_SETUP);
++}
++
+ static inline void ksmbd_conn_set_need_reconnect(struct ksmbd_conn *conn)
+ {
+ WRITE_ONCE(conn->status, KSMBD_SESS_NEED_RECONNECT);
+diff --git a/fs/smb/server/mgmt/user_session.c b/fs/smb/server/mgmt/user_session.c
+index 71c6939dfbf13b..3f45f28f6f0f8e 100644
+--- a/fs/smb/server/mgmt/user_session.c
++++ b/fs/smb/server/mgmt/user_session.c
+@@ -181,7 +181,7 @@ static void ksmbd_expire_session(struct ksmbd_conn *conn)
+ down_write(&sessions_table_lock);
+ down_write(&conn->session_lock);
+ xa_for_each(&conn->sessions, id, sess) {
+- if (atomic_read(&sess->refcnt) == 0 &&
++ if (atomic_read(&sess->refcnt) <= 1 &&
+ (sess->state != SMB2_SESSION_VALID ||
+ time_after(jiffies,
+ sess->last_active + SMB2_SESSION_TIMEOUT))) {
+@@ -230,7 +230,11 @@ void ksmbd_sessions_deregister(struct ksmbd_conn *conn)
+ if (!ksmbd_chann_del(conn, sess) &&
+ xa_empty(&sess->ksmbd_chann_list)) {
+ hash_del(&sess->hlist);
+- ksmbd_session_destroy(sess);
++ down_write(&conn->session_lock);
++ xa_erase(&conn->sessions, sess->id);
++ up_write(&conn->session_lock);
++ if (atomic_dec_and_test(&sess->refcnt))
++ ksmbd_session_destroy(sess);
+ }
+ }
+ }
+@@ -249,13 +253,30 @@ void ksmbd_sessions_deregister(struct ksmbd_conn *conn)
+ if (xa_empty(&sess->ksmbd_chann_list)) {
+ xa_erase(&conn->sessions, sess->id);
+ hash_del(&sess->hlist);
+- ksmbd_session_destroy(sess);
++ if (atomic_dec_and_test(&sess->refcnt))
++ ksmbd_session_destroy(sess);
+ }
+ }
+ up_write(&conn->session_lock);
+ up_write(&sessions_table_lock);
+ }
+
++bool is_ksmbd_session_in_connection(struct ksmbd_conn *conn,
++ unsigned long long id)
++{
++ struct ksmbd_session *sess;
++
++ down_read(&conn->session_lock);
++ sess = xa_load(&conn->sessions, id);
++ if (sess) {
++ up_read(&conn->session_lock);
++ return true;
++ }
++ up_read(&conn->session_lock);
++
++ return false;
++}
++
+ struct ksmbd_session *ksmbd_session_lookup(struct ksmbd_conn *conn,
+ unsigned long long id)
+ {
+@@ -309,8 +330,8 @@ void ksmbd_user_session_put(struct ksmbd_session *sess)
+
+ if (atomic_read(&sess->refcnt) <= 0)
+ WARN_ON(1);
+- else
+- atomic_dec(&sess->refcnt);
++ else if (atomic_dec_and_test(&sess->refcnt))
++ ksmbd_session_destroy(sess);
+ }
+
+ struct preauth_session *ksmbd_preauth_session_alloc(struct ksmbd_conn *conn,
+@@ -353,13 +374,13 @@ void destroy_previous_session(struct ksmbd_conn *conn,
+ ksmbd_all_conn_set_status(id, KSMBD_SESS_NEED_RECONNECT);
+ err = ksmbd_conn_wait_idle_sess_id(conn, id);
+ if (err) {
+- ksmbd_all_conn_set_status(id, KSMBD_SESS_NEED_NEGOTIATE);
++ ksmbd_all_conn_set_status(id, KSMBD_SESS_NEED_SETUP);
+ goto out;
+ }
+
+ ksmbd_destroy_file_table(&prev_sess->file_table);
+ prev_sess->state = SMB2_SESSION_EXPIRED;
+- ksmbd_all_conn_set_status(id, KSMBD_SESS_NEED_NEGOTIATE);
++ ksmbd_all_conn_set_status(id, KSMBD_SESS_NEED_SETUP);
+ ksmbd_launch_ksmbd_durable_scavenger();
+ out:
+ up_write(&conn->session_lock);
+@@ -417,7 +438,7 @@ static struct ksmbd_session *__session_create(int protocol)
+ xa_init(&sess->rpc_handle_list);
+ sess->sequence_number = 1;
+ rwlock_init(&sess->tree_conns_lock);
+- atomic_set(&sess->refcnt, 1);
++ atomic_set(&sess->refcnt, 2);
+
+ ret = __init_smb2_session(sess);
+ if (ret)
+diff --git a/fs/smb/server/mgmt/user_session.h b/fs/smb/server/mgmt/user_session.h
+index c1c4b20bd5c6cf..f21348381d5984 100644
+--- a/fs/smb/server/mgmt/user_session.h
++++ b/fs/smb/server/mgmt/user_session.h
+@@ -87,6 +87,8 @@ void ksmbd_session_destroy(struct ksmbd_session *sess);
+ struct ksmbd_session *ksmbd_session_lookup_slowpath(unsigned long long id);
+ struct ksmbd_session *ksmbd_session_lookup(struct ksmbd_conn *conn,
+ unsigned long long id);
++bool is_ksmbd_session_in_connection(struct ksmbd_conn *conn,
++ unsigned long long id);
+ int ksmbd_session_register(struct ksmbd_conn *conn,
+ struct ksmbd_session *sess);
+ void ksmbd_sessions_deregister(struct ksmbd_conn *conn);
+diff --git a/fs/smb/server/oplock.c b/fs/smb/server/oplock.c
+index 28886ff1ee5776..f103b1bd040040 100644
+--- a/fs/smb/server/oplock.c
++++ b/fs/smb/server/oplock.c
+@@ -724,8 +724,8 @@ static int smb2_oplock_break_noti(struct oplock_info *opinfo)
+ work->conn = conn;
+ work->sess = opinfo->sess;
+
++ ksmbd_conn_r_count_inc(conn);
+ if (opinfo->op_state == OPLOCK_ACK_WAIT) {
+- ksmbd_conn_r_count_inc(conn);
+ INIT_WORK(&work->work, __smb2_oplock_break_noti);
+ ksmbd_queue_work(work);
+
+@@ -833,8 +833,8 @@ static int smb2_lease_break_noti(struct oplock_info *opinfo)
+ work->conn = conn;
+ work->sess = opinfo->sess;
+
++ ksmbd_conn_r_count_inc(conn);
+ if (opinfo->op_state == OPLOCK_ACK_WAIT) {
+- ksmbd_conn_r_count_inc(conn);
+ INIT_WORK(&work->work, __smb2_lease_break_noti);
+ ksmbd_queue_work(work);
+ wait_for_break_ack(opinfo);
+@@ -1505,6 +1505,10 @@ struct lease_ctx_info *parse_lease_state(void *open_req)
+ if (sizeof(struct lease_context_v2) == le32_to_cpu(cc->DataLength)) {
+ struct create_lease_v2 *lc = (struct create_lease_v2 *)cc;
+
++ if (le16_to_cpu(cc->DataOffset) + le32_to_cpu(cc->DataLength) <
++ sizeof(struct create_lease_v2) - 4)
++ return NULL;
++
+ memcpy(lreq->lease_key, lc->lcontext.LeaseKey, SMB2_LEASE_KEY_SIZE);
+ lreq->req_state = lc->lcontext.LeaseState;
+ lreq->flags = lc->lcontext.LeaseFlags;
+@@ -1517,6 +1521,10 @@ struct lease_ctx_info *parse_lease_state(void *open_req)
+ } else {
+ struct create_lease *lc = (struct create_lease *)cc;
+
++ if (le16_to_cpu(cc->DataOffset) + le32_to_cpu(cc->DataLength) <
++ sizeof(struct create_lease))
++ return NULL;
++
+ memcpy(lreq->lease_key, lc->lcontext.LeaseKey, SMB2_LEASE_KEY_SIZE);
+ lreq->req_state = lc->lcontext.LeaseState;
+ lreq->flags = lc->lcontext.LeaseFlags;
+diff --git a/fs/smb/server/smb2pdu.c b/fs/smb/server/smb2pdu.c
+index c53121538990ec..d24d95d15d876b 100644
+--- a/fs/smb/server/smb2pdu.c
++++ b/fs/smb/server/smb2pdu.c
+@@ -1249,7 +1249,7 @@ int smb2_handle_negotiate(struct ksmbd_work *work)
+ }
+
+ conn->srv_sec_mode = le16_to_cpu(rsp->SecurityMode);
+- ksmbd_conn_set_need_negotiate(conn);
++ ksmbd_conn_set_need_setup(conn);
+
+ err_out:
+ ksmbd_conn_unlock(conn);
+@@ -1271,6 +1271,9 @@ static int alloc_preauth_hash(struct ksmbd_session *sess,
+ if (sess->Preauth_HashValue)
+ return 0;
+
++ if (!conn->preauth_info)
++ return -ENOMEM;
++
+ sess->Preauth_HashValue = kmemdup(conn->preauth_info->Preauth_HashValue,
+ PREAUTH_HASHVALUE_SIZE, KSMBD_DEFAULT_GFP);
+ if (!sess->Preauth_HashValue)
+@@ -1674,6 +1677,11 @@ int smb2_sess_setup(struct ksmbd_work *work)
+
+ ksmbd_debug(SMB, "Received smb2 session setup request\n");
+
++ if (!ksmbd_conn_need_setup(conn) && !ksmbd_conn_good(conn)) {
++ work->send_no_response = 1;
++ return rc;
++ }
++
+ WORK_BUFFERS(work, req, rsp);
+
+ rsp->StructureSize = cpu_to_le16(9);
+@@ -1707,44 +1715,38 @@ int smb2_sess_setup(struct ksmbd_work *work)
+
+ if (conn->dialect != sess->dialect) {
+ rc = -EINVAL;
+- ksmbd_user_session_put(sess);
+ goto out_err;
+ }
+
+ if (!(req->hdr.Flags & SMB2_FLAGS_SIGNED)) {
+ rc = -EINVAL;
+- ksmbd_user_session_put(sess);
+ goto out_err;
+ }
+
+ if (strncmp(conn->ClientGUID, sess->ClientGUID,
+ SMB2_CLIENT_GUID_SIZE)) {
+ rc = -ENOENT;
+- ksmbd_user_session_put(sess);
+ goto out_err;
+ }
+
+ if (sess->state == SMB2_SESSION_IN_PROGRESS) {
+ rc = -EACCES;
+- ksmbd_user_session_put(sess);
+ goto out_err;
+ }
+
+ if (sess->state == SMB2_SESSION_EXPIRED) {
+ rc = -EFAULT;
+- ksmbd_user_session_put(sess);
+ goto out_err;
+ }
+- ksmbd_user_session_put(sess);
+
+ if (ksmbd_conn_need_reconnect(conn)) {
+ rc = -EFAULT;
++ ksmbd_user_session_put(sess);
+ sess = NULL;
+ goto out_err;
+ }
+
+- sess = ksmbd_session_lookup(conn, sess_id);
+- if (!sess) {
++ if (is_ksmbd_session_in_connection(conn, sess_id)) {
+ rc = -EACCES;
+ goto out_err;
+ }
+@@ -1910,10 +1912,12 @@ int smb2_sess_setup(struct ksmbd_work *work)
+
+ sess->last_active = jiffies;
+ sess->state = SMB2_SESSION_EXPIRED;
++ ksmbd_user_session_put(sess);
++ work->sess = NULL;
+ if (try_delay) {
+ ksmbd_conn_set_need_reconnect(conn);
+ ssleep(5);
+- ksmbd_conn_set_need_negotiate(conn);
++ ksmbd_conn_set_need_setup(conn);
+ }
+ }
+ smb2_set_err_rsp(work);
+@@ -2239,14 +2243,15 @@ int smb2_session_logoff(struct ksmbd_work *work)
+ return -ENOENT;
+ }
+
+- ksmbd_destroy_file_table(&sess->file_table);
+ down_write(&conn->session_lock);
+ sess->state = SMB2_SESSION_EXPIRED;
+ up_write(&conn->session_lock);
+
+- ksmbd_free_user(sess->user);
+- sess->user = NULL;
+- ksmbd_all_conn_set_status(sess_id, KSMBD_SESS_NEED_NEGOTIATE);
++ if (sess->user) {
++ ksmbd_free_user(sess->user);
++ sess->user = NULL;
++ }
++ ksmbd_all_conn_set_status(sess_id, KSMBD_SESS_NEED_SETUP);
+
+ rsp->StructureSize = cpu_to_le16(4);
+ err = ksmbd_iov_pin_rsp(work, rsp, sizeof(struct smb2_logoff_rsp));
+@@ -2708,6 +2713,13 @@ static int parse_durable_handle_context(struct ksmbd_work *work,
+ goto out;
+ }
+
++ if (le16_to_cpu(context->DataOffset) +
++ le32_to_cpu(context->DataLength) <
++ sizeof(struct create_durable_reconn_v2_req)) {
++ err = -EINVAL;
++ goto out;
++ }
++
+ recon_v2 = (struct create_durable_reconn_v2_req *)context;
+ persistent_id = recon_v2->Fid.PersistentFileId;
+ dh_info->fp = ksmbd_lookup_durable_fd(persistent_id);
+@@ -2741,6 +2753,13 @@ static int parse_durable_handle_context(struct ksmbd_work *work,
+ goto out;
+ }
+
++ if (le16_to_cpu(context->DataOffset) +
++ le32_to_cpu(context->DataLength) <
++ sizeof(struct create_durable_reconn_req)) {
++ err = -EINVAL;
++ goto out;
++ }
++
+ recon = (struct create_durable_reconn_req *)context;
+ persistent_id = recon->Data.Fid.PersistentFileId;
+ dh_info->fp = ksmbd_lookup_durable_fd(persistent_id);
+@@ -2766,6 +2785,13 @@ static int parse_durable_handle_context(struct ksmbd_work *work,
+ goto out;
+ }
+
++ if (le16_to_cpu(context->DataOffset) +
++ le32_to_cpu(context->DataLength) <
++ sizeof(struct create_durable_req_v2)) {
++ err = -EINVAL;
++ goto out;
++ }
++
+ durable_v2_blob =
+ (struct create_durable_req_v2 *)context;
+ ksmbd_debug(SMB, "Request for durable v2 open\n");
+diff --git a/fs/smb/server/smbacl.c b/fs/smb/server/smbacl.c
+index 49b128698670a8..5aa7a66334d93d 100644
+--- a/fs/smb/server/smbacl.c
++++ b/fs/smb/server/smbacl.c
+@@ -270,6 +270,11 @@ static int sid_to_id(struct mnt_idmap *idmap,
+ return -EIO;
+ }
+
++ if (psid->num_subauth == 0) {
++ pr_err("%s: zero subauthorities!\n", __func__);
++ return -EIO;
++ }
++
+ if (sidtype == SIDOWNER) {
+ kuid_t uid;
+ uid_t id;
+@@ -1026,7 +1031,9 @@ int smb_inherit_dacl(struct ksmbd_conn *conn,
+ struct dentry *parent = path->dentry->d_parent;
+ struct mnt_idmap *idmap = mnt_idmap(path->mnt);
+ int inherited_flags = 0, flags = 0, i, nt_size = 0, pdacl_size;
+- int rc = 0, dacloffset, pntsd_type, pntsd_size, acl_len, aces_size;
++ int rc = 0, pntsd_type, pntsd_size, acl_len, aces_size;
++ unsigned int dacloffset;
++ size_t dacl_struct_end;
+ u16 num_aces, ace_cnt = 0;
+ char *aces_base;
+ bool is_dir = S_ISDIR(d_inode(path->dentry)->i_mode);
+@@ -1035,8 +1042,11 @@ int smb_inherit_dacl(struct ksmbd_conn *conn,
+ parent, &parent_pntsd);
+ if (pntsd_size <= 0)
+ return -ENOENT;
++
+ dacloffset = le32_to_cpu(parent_pntsd->dacloffset);
+- if (!dacloffset || (dacloffset + sizeof(struct smb_acl) > pntsd_size)) {
++ if (!dacloffset ||
++ check_add_overflow(dacloffset, sizeof(struct smb_acl), &dacl_struct_end) ||
++ dacl_struct_end > (size_t)pntsd_size) {
+ rc = -EINVAL;
+ goto free_parent_pntsd;
+ }
+@@ -1240,7 +1250,9 @@ int smb_check_perm_dacl(struct ksmbd_conn *conn, const struct path *path,
+ struct smb_ntsd *pntsd = NULL;
+ struct smb_acl *pdacl;
+ struct posix_acl *posix_acls;
+- int rc = 0, pntsd_size, acl_size, aces_size, pdacl_size, dacl_offset;
++ int rc = 0, pntsd_size, acl_size, aces_size, pdacl_size;
++ unsigned int dacl_offset;
++ size_t dacl_struct_end;
+ struct smb_sid sid;
+ int granted = le32_to_cpu(*pdaccess & ~FILE_MAXIMAL_ACCESS_LE);
+ struct smb_ace *ace;
+@@ -1259,7 +1271,8 @@ int smb_check_perm_dacl(struct ksmbd_conn *conn, const struct path *path,
+
+ dacl_offset = le32_to_cpu(pntsd->dacloffset);
+ if (!dacl_offset ||
+- (dacl_offset + sizeof(struct smb_acl) > pntsd_size))
++ check_add_overflow(dacl_offset, sizeof(struct smb_acl), &dacl_struct_end) ||
++ dacl_struct_end > (size_t)pntsd_size)
+ goto err_out;
+
+ pdacl = (struct smb_acl *)((char *)pntsd + le32_to_cpu(pntsd->dacloffset));
+diff --git a/include/asm-generic/rwonce.h b/include/asm-generic/rwonce.h
+index 8d0a6280e98247..52b969c7cef935 100644
+--- a/include/asm-generic/rwonce.h
++++ b/include/asm-generic/rwonce.h
+@@ -79,10 +79,18 @@ unsigned long __read_once_word_nocheck(const void *addr)
+ (typeof(x))__read_once_word_nocheck(&(x)); \
+ })
+
+-static __no_kasan_or_inline
++static __no_sanitize_or_inline
+ unsigned long read_word_at_a_time(const void *addr)
+ {
++ /* open-coded instrument_read(addr, 1) */
+ kasan_check_read(addr, 1);
++ kcsan_check_read(addr, 1);
++
++ /*
++ * This load can race with concurrent stores to out-of-bounds memory,
++ * but READ_ONCE() can't be used because it requires higher alignment
++ * than plain loads in arm64 builds with LTO.
++ */
+ return *(unsigned long *)addr;
+ }
+
+diff --git a/include/drm/display/drm_dp_mst_helper.h b/include/drm/display/drm_dp_mst_helper.h
+index e39de161c93863..2cfe1d4bfc9600 100644
+--- a/include/drm/display/drm_dp_mst_helper.h
++++ b/include/drm/display/drm_dp_mst_helper.h
+@@ -222,6 +222,13 @@ struct drm_dp_mst_branch {
+ */
+ struct list_head destroy_next;
+
++ /**
++ * @rad: Relative Address of the MST branch.
++ * For &drm_dp_mst_topology_mgr.mst_primary, its rad[] entries are all 0,
++ * unset and unused. For MST branches connected after mst_primary,
++ * in each element of rad[] the nibbles are ordered by the most
++ * significant 4 bits first and the least significant 4 bits second.
++ */
+ u8 rad[8];
+ u8 lct;
+ int num_ports;
+diff --git a/include/drm/drm_file.h b/include/drm/drm_file.h
+index ef817926cddd36..94d365b2250521 100644
+--- a/include/drm/drm_file.h
++++ b/include/drm/drm_file.h
+@@ -495,6 +495,11 @@ struct drm_memory_stats {
+ enum drm_gem_object_status;
+
+ int drm_memory_stats_is_zero(const struct drm_memory_stats *stats);
++void drm_fdinfo_print_size(struct drm_printer *p,
++ const char *prefix,
++ const char *stat,
++ const char *region,
++ u64 sz);
+ void drm_print_memory_stats(struct drm_printer *p,
+ const struct drm_memory_stats *stats,
+ enum drm_gem_object_status supported_status,
+diff --git a/include/linux/arm_ffa.h b/include/linux/arm_ffa.h
+index 74169dd0f65948..53f2837ce7df4e 100644
+--- a/include/linux/arm_ffa.h
++++ b/include/linux/arm_ffa.h
+@@ -176,6 +176,7 @@ void ffa_device_unregister(struct ffa_device *ffa_dev);
+ int ffa_driver_register(struct ffa_driver *driver, struct module *owner,
+ const char *mod_name);
+ void ffa_driver_unregister(struct ffa_driver *driver);
++void ffa_devices_unregister(void);
+ bool ffa_device_is_valid(struct ffa_device *ffa_dev);
+
+ #else
+@@ -188,6 +189,8 @@ ffa_device_register(const struct ffa_partition_info *part_info,
+
+ static inline void ffa_device_unregister(struct ffa_device *dev) {}
+
++static inline void ffa_devices_unregister(void) {}
++
+ static inline int
+ ffa_driver_register(struct ffa_driver *driver, struct module *owner,
+ const char *mod_name)
+diff --git a/include/linux/avf/virtchnl.h b/include/linux/avf/virtchnl.h
+index 13a11f3c09b875..aca06f300f833b 100644
+--- a/include/linux/avf/virtchnl.h
++++ b/include/linux/avf/virtchnl.h
+@@ -1283,7 +1283,7 @@ struct virtchnl_proto_hdrs {
+ * 2 - from the second inner layer
+ * ....
+ **/
+- int count; /* the proto layers must < VIRTCHNL_MAX_NUM_PROTO_HDRS */
++ u32 count; /* the proto layers must < VIRTCHNL_MAX_NUM_PROTO_HDRS */
+ union {
+ struct virtchnl_proto_hdr
+ proto_hdr[VIRTCHNL_MAX_NUM_PROTO_HDRS];
+@@ -1335,7 +1335,7 @@ VIRTCHNL_CHECK_STRUCT_LEN(36, virtchnl_filter_action);
+
+ struct virtchnl_filter_action_set {
+ /* action number must be less then VIRTCHNL_MAX_NUM_ACTIONS */
+- int count;
++ u32 count;
+ struct virtchnl_filter_action actions[VIRTCHNL_MAX_NUM_ACTIONS];
+ };
+
+diff --git a/include/linux/badblocks.h b/include/linux/badblocks.h
+index 670f2dae692fb2..996493917f366e 100644
+--- a/include/linux/badblocks.h
++++ b/include/linux/badblocks.h
+@@ -48,11 +48,11 @@ struct badblocks_context {
+ int ack;
+ };
+
+-int badblocks_check(struct badblocks *bb, sector_t s, int sectors,
+- sector_t *first_bad, int *bad_sectors);
+-int badblocks_set(struct badblocks *bb, sector_t s, int sectors,
+- int acknowledged);
+-int badblocks_clear(struct badblocks *bb, sector_t s, int sectors);
++int badblocks_check(struct badblocks *bb, sector_t s, sector_t sectors,
++ sector_t *first_bad, sector_t *bad_sectors);
++bool badblocks_set(struct badblocks *bb, sector_t s, sector_t sectors,
++ int acknowledged);
++bool badblocks_clear(struct badblocks *bb, sector_t s, sector_t sectors);
+ void ack_all_badblocks(struct badblocks *bb);
+ ssize_t badblocks_show(struct badblocks *bb, char *page, int unack);
+ ssize_t badblocks_store(struct badblocks *bb, const char *page, size_t len,
+diff --git a/include/linux/context_tracking_irq.h b/include/linux/context_tracking_irq.h
+index c50b5670c4a52f..197916ee91a4bd 100644
+--- a/include/linux/context_tracking_irq.h
++++ b/include/linux/context_tracking_irq.h
+@@ -10,12 +10,12 @@ void ct_irq_exit_irqson(void);
+ void ct_nmi_enter(void);
+ void ct_nmi_exit(void);
+ #else
+-static inline void ct_irq_enter(void) { }
+-static inline void ct_irq_exit(void) { }
++static __always_inline void ct_irq_enter(void) { }
++static __always_inline void ct_irq_exit(void) { }
+ static inline void ct_irq_enter_irqson(void) { }
+ static inline void ct_irq_exit_irqson(void) { }
+-static inline void ct_nmi_enter(void) { }
+-static inline void ct_nmi_exit(void) { }
++static __always_inline void ct_nmi_enter(void) { }
++static __always_inline void ct_nmi_exit(void) { }
+ #endif
+
+ #endif
+diff --git a/include/linux/coresight.h b/include/linux/coresight.h
+index 17276965ff1d03..6ddcbb8be51653 100644
+--- a/include/linux/coresight.h
++++ b/include/linux/coresight.h
+@@ -649,6 +649,10 @@ extern int coresight_enable_sysfs(struct coresight_device *csdev);
+ extern void coresight_disable_sysfs(struct coresight_device *csdev);
+ extern int coresight_timeout(struct csdev_access *csa, u32 offset,
+ int position, int value);
++typedef void (*coresight_timeout_cb_t) (struct csdev_access *, u32, int, int);
++extern int coresight_timeout_action(struct csdev_access *csa, u32 offset,
++ int position, int value,
++ coresight_timeout_cb_t cb);
+
+ extern int coresight_claim_device(struct coresight_device *csdev);
+ extern int coresight_claim_device_unlocked(struct coresight_device *csdev);
+diff --git a/include/linux/cpuset.h b/include/linux/cpuset.h
+index 835e7b793f6a3c..5466c96a33dbb9 100644
+--- a/include/linux/cpuset.h
++++ b/include/linux/cpuset.h
+@@ -125,9 +125,11 @@ static inline int cpuset_do_page_mem_spread(void)
+
+ extern bool current_cpuset_is_being_rebound(void);
+
++extern void dl_rebuild_rd_accounting(void);
+ extern void rebuild_sched_domains(void);
+
+ extern void cpuset_print_current_mems_allowed(void);
++extern void cpuset_reset_sched_domains(void);
+
+ /*
+ * read_mems_allowed_begin is required when making decisions involving
+@@ -259,11 +261,20 @@ static inline bool current_cpuset_is_being_rebound(void)
+ return false;
+ }
+
++static inline void dl_rebuild_rd_accounting(void)
++{
++}
++
+ static inline void rebuild_sched_domains(void)
+ {
+ partition_sched_domains(1, NULL, NULL);
+ }
+
++static inline void cpuset_reset_sched_domains(void)
++{
++ partition_sched_domains(1, NULL, NULL);
++}
++
+ static inline void cpuset_print_current_mems_allowed(void)
+ {
+ }
+diff --git a/include/linux/dma-direct.h b/include/linux/dma-direct.h
+index d7e30d4f7503a8..f3bc0bcd70980c 100644
+--- a/include/linux/dma-direct.h
++++ b/include/linux/dma-direct.h
+@@ -78,14 +78,18 @@ static inline dma_addr_t dma_range_map_max(const struct bus_dma_region *map)
+ #define phys_to_dma_unencrypted phys_to_dma
+ #endif
+ #else
+-static inline dma_addr_t phys_to_dma_unencrypted(struct device *dev,
+- phys_addr_t paddr)
++static inline dma_addr_t __phys_to_dma(struct device *dev, phys_addr_t paddr)
+ {
+ if (dev->dma_range_map)
+ return translate_phys_to_dma(dev, paddr);
+ return paddr;
+ }
+
++static inline dma_addr_t phys_to_dma_unencrypted(struct device *dev,
++ phys_addr_t paddr)
++{
++ return dma_addr_unencrypted(__phys_to_dma(dev, paddr));
++}
+ /*
+ * If memory encryption is supported, phys_to_dma will set the memory encryption
+ * bit in the DMA address, and dma_to_phys will clear it.
+@@ -94,19 +98,20 @@ static inline dma_addr_t phys_to_dma_unencrypted(struct device *dev,
+ */
+ static inline dma_addr_t phys_to_dma(struct device *dev, phys_addr_t paddr)
+ {
+- return __sme_set(phys_to_dma_unencrypted(dev, paddr));
++ return dma_addr_encrypted(__phys_to_dma(dev, paddr));
+ }
+
+ static inline phys_addr_t dma_to_phys(struct device *dev, dma_addr_t dma_addr)
+ {
+ phys_addr_t paddr;
+
++ dma_addr = dma_addr_canonical(dma_addr);
+ if (dev->dma_range_map)
+ paddr = translate_dma_to_phys(dev, dma_addr);
+ else
+ paddr = dma_addr;
+
+- return __sme_clr(paddr);
++ return paddr;
+ }
+ #endif /* !CONFIG_ARCH_HAS_PHYS_TO_DMA */
+
+diff --git a/include/linux/fwnode.h b/include/linux/fwnode.h
+index 0731994b9d7c83..6fa0a268d53827 100644
+--- a/include/linux/fwnode.h
++++ b/include/linux/fwnode.h
+@@ -91,7 +91,7 @@ struct fwnode_endpoint {
+ #define SWNODE_GRAPH_PORT_NAME_FMT "port@%u"
+ #define SWNODE_GRAPH_ENDPOINT_NAME_FMT "endpoint@%u"
+
+-#define NR_FWNODE_REFERENCE_ARGS 8
++#define NR_FWNODE_REFERENCE_ARGS 16
+
+ /**
+ * struct fwnode_reference_args - Fwnode reference with additional arguments
+diff --git a/include/linux/if_bridge.h b/include/linux/if_bridge.h
+index 3ff96ae31bf6de..c5fe3b2a53e827 100644
+--- a/include/linux/if_bridge.h
++++ b/include/linux/if_bridge.h
+@@ -65,11 +65,9 @@ struct br_ip_list {
+ #define BR_DEFAULT_AGEING_TIME (300 * HZ)
+
+ struct net_bridge;
+-void brioctl_set(int (*hook)(struct net *net, struct net_bridge *br,
+- unsigned int cmd, struct ifreq *ifr,
++void brioctl_set(int (*hook)(struct net *net, unsigned int cmd,
+ void __user *uarg));
+-int br_ioctl_call(struct net *net, struct net_bridge *br, unsigned int cmd,
+- struct ifreq *ifr, void __user *uarg);
++int br_ioctl_call(struct net *net, unsigned int cmd, void __user *uarg);
+
+ #if IS_ENABLED(CONFIG_BRIDGE) && IS_ENABLED(CONFIG_BRIDGE_IGMP_SNOOPING)
+ int br_multicast_list_adjacent(struct net_device *dev,
+diff --git a/include/linux/iio/iio-gts-helper.h b/include/linux/iio/iio-gts-helper.h
+index e5de7a124bad6e..66f830ab9b49b5 100644
+--- a/include/linux/iio/iio-gts-helper.h
++++ b/include/linux/iio/iio-gts-helper.h
+@@ -208,5 +208,6 @@ int iio_gts_all_avail_scales(struct iio_gts *gts, const int **vals, int *type,
+ int *length);
+ int iio_gts_avail_scales_for_time(struct iio_gts *gts, int time,
+ const int **vals, int *type, int *length);
++int iio_gts_get_total_gain(struct iio_gts *gts, int gain, int time);
+
+ #endif
+diff --git a/include/linux/iio/iio.h b/include/linux/iio/iio.h
+index 56161e02f002cb..5ed03e36178fc2 100644
+--- a/include/linux/iio/iio.h
++++ b/include/linux/iio/iio.h
+@@ -10,6 +10,7 @@
+ #include <linux/device.h>
+ #include <linux/cdev.h>
+ #include <linux/cleanup.h>
++#include <linux/compiler_types.h>
+ #include <linux/slab.h>
+ #include <linux/iio/types.h>
+ /* IIO TODO LIST */
+@@ -662,6 +663,31 @@ int iio_push_event(struct iio_dev *indio_dev, u64 ev_code, s64 timestamp);
+ int iio_device_claim_direct_mode(struct iio_dev *indio_dev);
+ void iio_device_release_direct_mode(struct iio_dev *indio_dev);
+
++/*
++ * Helper functions that allow claim and release of direct mode
++ * in a fashion that doesn't generate many false positives from sparse.
++ * Note this must remain static inline in the header so that sparse
++ * can see the __acquire() marking. Revisit when sparse supports
++ * __cond_acquires()
++ */
++static inline bool iio_device_claim_direct(struct iio_dev *indio_dev)
++{
++ int ret = iio_device_claim_direct_mode(indio_dev);
++
++ if (ret)
++ return false;
++
++ __acquire(iio_dev);
++
++ return true;
++}
++
++static inline void iio_device_release_direct(struct iio_dev *indio_dev)
++{
++ iio_device_release_direct_mode(indio_dev);
++ __release(indio_dev);
++}
++
+ /*
+ * This autocleanup logic is normally used via
+ * iio_device_claim_direct_scoped().
+diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
+index 8cd9327e4e78d6..a1b1be9bf73b2b 100644
+--- a/include/linux/interrupt.h
++++ b/include/linux/interrupt.h
+@@ -448,7 +448,7 @@ irq_calc_affinity_vectors(unsigned int minvec, unsigned int maxvec,
+ static inline void disable_irq_nosync_lockdep(unsigned int irq)
+ {
+ disable_irq_nosync(irq);
+-#ifdef CONFIG_LOCKDEP
++#if defined(CONFIG_LOCKDEP) && !defined(CONFIG_PREEMPT_RT)
+ local_irq_disable();
+ #endif
+ }
+@@ -456,7 +456,7 @@ static inline void disable_irq_nosync_lockdep(unsigned int irq)
+ static inline void disable_irq_nosync_lockdep_irqsave(unsigned int irq, unsigned long *flags)
+ {
+ disable_irq_nosync(irq);
+-#ifdef CONFIG_LOCKDEP
++#if defined(CONFIG_LOCKDEP) && !defined(CONFIG_PREEMPT_RT)
+ local_irq_save(*flags);
+ #endif
+ }
+@@ -471,7 +471,7 @@ static inline void disable_irq_lockdep(unsigned int irq)
+
+ static inline void enable_irq_lockdep(unsigned int irq)
+ {
+-#ifdef CONFIG_LOCKDEP
++#if defined(CONFIG_LOCKDEP) && !defined(CONFIG_PREEMPT_RT)
+ local_irq_enable();
+ #endif
+ enable_irq(irq);
+@@ -479,7 +479,7 @@ static inline void enable_irq_lockdep(unsigned int irq)
+
+ static inline void enable_irq_lockdep_irqrestore(unsigned int irq, unsigned long *flags)
+ {
+-#ifdef CONFIG_LOCKDEP
++#if defined(CONFIG_LOCKDEP) && !defined(CONFIG_PREEMPT_RT)
+ local_irq_restore(*flags);
+ #endif
+ enable_irq(irq);
+diff --git a/include/linux/mem_encrypt.h b/include/linux/mem_encrypt.h
+index ae45263892611d..07584c5e36fb40 100644
+--- a/include/linux/mem_encrypt.h
++++ b/include/linux/mem_encrypt.h
+@@ -26,11 +26,34 @@
+ */
+ #define __sme_set(x) ((x) | sme_me_mask)
+ #define __sme_clr(x) ((x) & ~sme_me_mask)
++
++#define dma_addr_encrypted(x) __sme_set(x)
++#define dma_addr_canonical(x) __sme_clr(x)
++
+ #else
+ #define __sme_set(x) (x)
+ #define __sme_clr(x) (x)
+ #endif
+
++/*
++ * dma_addr_encrypted() and dma_addr_unencrypted() are for converting a given DMA
++ * address to the respective type of addressing.
++ *
++ * dma_addr_canonical() is used to reverse either conversion, mapping an
++ * encrypted or unencrypted DMA address back to the canonical address.
++ */
++#ifndef dma_addr_encrypted
++#define dma_addr_encrypted(x) (x)
++#endif
++
++#ifndef dma_addr_unencrypted
++#define dma_addr_unencrypted(x) (x)
++#endif
++
++#ifndef dma_addr_canonical
++#define dma_addr_canonical(x) (x)
++#endif
++
+ #endif /* __ASSEMBLY__ */
+
+ #endif /* __MEM_ENCRYPT_H__ */
+diff --git a/include/linux/nfs_fs_sb.h b/include/linux/nfs_fs_sb.h
+index f00bfcee7120e8..108862d81b5798 100644
+--- a/include/linux/nfs_fs_sb.h
++++ b/include/linux/nfs_fs_sb.h
+@@ -250,6 +250,10 @@ struct nfs_server {
+ struct list_head ss_copies;
+ struct list_head ss_src_copies;
+
++ unsigned long delegation_flags;
++#define NFS4SERV_DELEGRETURN (1)
++#define NFS4SERV_DELEGATION_EXPIRED (2)
++#define NFS4SERV_DELEGRETURN_DELAYED (3)
+ unsigned long delegation_gen;
+ unsigned long mig_gen;
+ unsigned long mig_status;
+diff --git a/include/linux/nmi.h b/include/linux/nmi.h
+index a8dfb38c9bb6f1..e78fa535f61dd8 100644
+--- a/include/linux/nmi.h
++++ b/include/linux/nmi.h
+@@ -17,7 +17,6 @@
+ void lockup_detector_init(void);
+ void lockup_detector_retry_init(void);
+ void lockup_detector_soft_poweroff(void);
+-void lockup_detector_cleanup(void);
+
+ extern int watchdog_user_enabled;
+ extern int watchdog_thresh;
+@@ -37,7 +36,6 @@ extern int sysctl_hardlockup_all_cpu_backtrace;
+ static inline void lockup_detector_init(void) { }
+ static inline void lockup_detector_retry_init(void) { }
+ static inline void lockup_detector_soft_poweroff(void) { }
+-static inline void lockup_detector_cleanup(void) { }
+ #endif /* !CONFIG_LOCKUP_DETECTOR */
+
+ #ifdef CONFIG_SOFTLOCKUP_DETECTOR
+@@ -104,12 +102,10 @@ void watchdog_hardlockup_check(unsigned int cpu, struct pt_regs *regs);
+ #if defined(CONFIG_HARDLOCKUP_DETECTOR_PERF)
+ extern void hardlockup_detector_perf_stop(void);
+ extern void hardlockup_detector_perf_restart(void);
+-extern void hardlockup_detector_perf_cleanup(void);
+ extern void hardlockup_config_perf_event(const char *str);
+ #else
+ static inline void hardlockup_detector_perf_stop(void) { }
+ static inline void hardlockup_detector_perf_restart(void) { }
+-static inline void hardlockup_detector_perf_cleanup(void) { }
+ static inline void hardlockup_config_perf_event(const char *str) { }
+ #endif
+
+diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
+index 8333f132f4a96c..bcb764c3a8034c 100644
+--- a/include/linux/perf_event.h
++++ b/include/linux/perf_event.h
+@@ -495,7 +495,7 @@ struct pmu {
+ * context-switches callback
+ */
+ void (*sched_task) (struct perf_event_pmu_context *pmu_ctx,
+- bool sched_in);
++ struct task_struct *task, bool sched_in);
+
+ /*
+ * Kmem cache of PMU specific data
+@@ -1020,6 +1020,41 @@ struct perf_event_context {
+ local_t nr_no_switch_fast;
+ };
+
++/**
++ * struct perf_ctx_data - PMU specific data for a task
++ * @rcu_head: To avoid the race on free PMU specific data
++ * @refcount: To track users
++ * @global: To track system-wide users
++ * @ctx_cache: Kmem cache of PMU specific data
++ * @data: PMU specific data
++ *
++ * Currently, the struct is only used in Intel LBR call stack mode to
++ * save/restore the call stack of a task on context switches.
++ *
++ * The rcu_head is used to prevent the race on free the data.
++ * The data only be allocated when Intel LBR call stack mode is enabled.
++ * The data will be freed when the mode is disabled.
++ * The content of the data will only be accessed in context switch, which
++ * should be protected by rcu_read_lock().
++ *
++ * Because of the alignment requirement of Intel Arch LBR, the Kmem cache
++ * is used to allocate the PMU specific data. The ctx_cache is to track
++ * the Kmem cache.
++ *
++ * Careful: Struct perf_ctx_data is added as a pointer in struct task_struct.
++ * When system-wide Intel LBR call stack mode is enabled, a buffer with
++ * constant size will be allocated for each task.
++ * Also, system memory consumption can further grow when the size of
++ * struct perf_ctx_data enlarges.
++ */
++struct perf_ctx_data {
++ struct rcu_head rcu_head;
++ refcount_t refcount;
++ int global;
++ struct kmem_cache *ctx_cache;
++ void *data;
++};
++
+ struct perf_cpu_pmu_context {
+ struct perf_event_pmu_context epc;
+ struct perf_event_pmu_context *task_epc;
+diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
+index 94d267d02372e4..4c107e17c547e5 100644
+--- a/include/linux/pgtable.h
++++ b/include/linux/pgtable.h
+@@ -1508,14 +1508,25 @@ static inline void track_pfn_insert(struct vm_area_struct *vma, pgprot_t *prot,
+ }
+
+ /*
+- * track_pfn_copy is called when vma that is covering the pfnmap gets
+- * copied through copy_page_range().
++ * track_pfn_copy is called when a VM_PFNMAP VMA is about to get the page
++ * tables copied during copy_page_range(). On success, stores the pfn to be
++ * passed to untrack_pfn_copy().
+ */
+-static inline int track_pfn_copy(struct vm_area_struct *vma)
++static inline int track_pfn_copy(struct vm_area_struct *dst_vma,
++ struct vm_area_struct *src_vma, unsigned long *pfn)
+ {
+ return 0;
+ }
+
++/*
++ * untrack_pfn_copy is called when a VM_PFNMAP VMA failed to copy during
++ * copy_page_range(), but after track_pfn_copy() was already called.
++ */
++static inline void untrack_pfn_copy(struct vm_area_struct *dst_vma,
++ unsigned long pfn)
++{
++}
++
+ /*
+ * untrack_pfn is called while unmapping a pfnmap for a region.
+ * untrack can be called for a specific region indicated by pfn and size or
+@@ -1528,8 +1539,10 @@ static inline void untrack_pfn(struct vm_area_struct *vma,
+ }
+
+ /*
+- * untrack_pfn_clear is called while mremapping a pfnmap for a new region
+- * or fails to copy pgtable during duplicate vm area.
++ * untrack_pfn_clear is called in the following cases on a VM_PFNMAP VMA:
++ *
++ * 1) During mremap() on the src VMA after the page tables were moved.
++ * 2) During fork() on the dst VMA, immediately after duplicating the src VMA.
+ */
+ static inline void untrack_pfn_clear(struct vm_area_struct *vma)
+ {
+@@ -1540,7 +1553,10 @@ extern int track_pfn_remap(struct vm_area_struct *vma, pgprot_t *prot,
+ unsigned long size);
+ extern void track_pfn_insert(struct vm_area_struct *vma, pgprot_t *prot,
+ pfn_t pfn);
+-extern int track_pfn_copy(struct vm_area_struct *vma);
++extern int track_pfn_copy(struct vm_area_struct *dst_vma,
++ struct vm_area_struct *src_vma, unsigned long *pfn);
++extern void untrack_pfn_copy(struct vm_area_struct *dst_vma,
++ unsigned long pfn);
+ extern void untrack_pfn(struct vm_area_struct *vma, unsigned long pfn,
+ unsigned long size, bool mm_wr_locked);
+ extern void untrack_pfn_clear(struct vm_area_struct *vma);
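
The new track_pfn_copy() signature above returns a pfn through an out-parameter so that a failed copy_page_range() can later hand it to untrack_pfn_copy(). A loose user-space sketch of that track/undo-on-failure shape; every function and value here is a made-up stand-in, not the kernel implementation:

#include <stdbool.h>
#include <stdio.h>

/* Stand-ins for the real VMA tracking hooks. */
static int track_copy(unsigned long src_start, unsigned long *pfn)
{
	*pfn = src_start >> 12;              /* remember what was tracked */
	printf("tracked pfn %#lx\n", *pfn);
	return 0;
}

static void untrack_copy(unsigned long pfn)
{
	printf("untracked pfn %#lx\n", pfn);
}

static bool copy_range(unsigned long src_start, bool simulate_failure)
{
	unsigned long pfn;

	if (track_copy(src_start, &pfn))
		return false;

	if (simulate_failure) {
		/* Copy failed after tracking succeeded: roll it back. */
		untrack_copy(pfn);
		return false;
	}
	return true;
}

int main(void)
{
	copy_range(0x100000, false);  /* success path: tracking kept */
	copy_range(0x200000, true);   /* failure path: tracking rolled back */
	return 0;
}
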
+diff --git a/include/linux/pm_runtime.h b/include/linux/pm_runtime.h
+index d39dc863f612fe..d0b29cd1fd204e 100644
+--- a/include/linux/pm_runtime.h
++++ b/include/linux/pm_runtime.h
+@@ -66,6 +66,7 @@ static inline bool queue_pm_work(struct work_struct *work)
+
+ extern int pm_generic_runtime_suspend(struct device *dev);
+ extern int pm_generic_runtime_resume(struct device *dev);
++extern bool pm_runtime_need_not_resume(struct device *dev);
+ extern int pm_runtime_force_suspend(struct device *dev);
+ extern int pm_runtime_force_resume(struct device *dev);
+
+@@ -241,6 +242,7 @@ static inline bool queue_pm_work(struct work_struct *work) { return false; }
+
+ static inline int pm_generic_runtime_suspend(struct device *dev) { return 0; }
+ static inline int pm_generic_runtime_resume(struct device *dev) { return 0; }
++static inline bool pm_runtime_need_not_resume(struct device *dev) {return true; }
+ static inline int pm_runtime_force_suspend(struct device *dev) { return 0; }
+ static inline int pm_runtime_force_resume(struct device *dev) { return 0; }
+
+diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
+index 48e5c03df1dd83..bd69ddc102fbc5 100644
+--- a/include/linux/rcupdate.h
++++ b/include/linux/rcupdate.h
+@@ -138,7 +138,7 @@ static inline void rcu_sysrq_end(void) { }
+ #if defined(CONFIG_NO_HZ_FULL) && (!defined(CONFIG_GENERIC_ENTRY) || !defined(CONFIG_KVM_XFER_TO_GUEST_WORK))
+ void rcu_irq_work_resched(void);
+ #else
+-static inline void rcu_irq_work_resched(void) { }
++static __always_inline void rcu_irq_work_resched(void) { }
+ #endif
+
+ #ifdef CONFIG_RCU_NOCB_CPU
+diff --git a/include/linux/reboot.h b/include/linux/reboot.h
+index abcdde4df69796..e97f6b8e858685 100644
+--- a/include/linux/reboot.h
++++ b/include/linux/reboot.h
+@@ -177,16 +177,28 @@ void ctrl_alt_del(void);
+
+ extern void orderly_poweroff(bool force);
+ extern void orderly_reboot(void);
+-void __hw_protection_shutdown(const char *reason, int ms_until_forced, bool shutdown);
++
++/**
++ * enum hw_protection_action - Hardware protection action
++ *
++ * @HWPROT_ACT_SHUTDOWN:
++ * The system should be shut down (powered off) for HW protection.
++ * @HWPROT_ACT_REBOOT:
++ * The system should be rebooted for HW protection.
++ */
++enum hw_protection_action { HWPROT_ACT_SHUTDOWN, HWPROT_ACT_REBOOT };
++
++void __hw_protection_shutdown(const char *reason, int ms_until_forced,
++ enum hw_protection_action action);
+
+ static inline void hw_protection_reboot(const char *reason, int ms_until_forced)
+ {
+- __hw_protection_shutdown(reason, ms_until_forced, false);
++ __hw_protection_shutdown(reason, ms_until_forced, HWPROT_ACT_REBOOT);
+ }
+
+ static inline void hw_protection_shutdown(const char *reason, int ms_until_forced)
+ {
+- __hw_protection_shutdown(reason, ms_until_forced, true);
++ __hw_protection_shutdown(reason, ms_until_forced, HWPROT_ACT_SHUTDOWN);
+ }
+
+ /*
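
The reboot.h hunk replaces the old bool with an explicit hw_protection_action, and the two inline wrappers simply select the enum value. A compilable miniature of that dispatch; hw_protection_trigger() and the printf stand in for the real __hw_protection_shutdown() path:

#include <stdio.h>

enum hw_protection_action { HWPROT_ACT_SHUTDOWN, HWPROT_ACT_REBOOT };

/* Stand-in for __hw_protection_shutdown(): just reports what would happen. */
static void hw_protection_trigger(const char *reason, int ms_until_forced,
				  enum hw_protection_action action)
{
	printf("HW protection: %s -> %s (forced after %d ms)\n",
	       reason,
	       action == HWPROT_ACT_REBOOT ? "reboot" : "shutdown",
	       ms_until_forced);
}

static void hw_protection_reboot(const char *reason, int ms_until_forced)
{
	hw_protection_trigger(reason, ms_until_forced, HWPROT_ACT_REBOOT);
}

static void hw_protection_shutdown(const char *reason, int ms_until_forced)
{
	hw_protection_trigger(reason, ms_until_forced, HWPROT_ACT_SHUTDOWN);
}

int main(void)
{
	hw_protection_shutdown("over-temperature", 3000);
	hw_protection_reboot("regulator failure", 1000);
	return 0;
}
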
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index 9c15365a30c08b..6e5c38718ff562 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -65,6 +65,7 @@ struct mempolicy;
+ struct nameidata;
+ struct nsproxy;
+ struct perf_event_context;
++struct perf_ctx_data;
+ struct pid_namespace;
+ struct pipe_inode_info;
+ struct rcu_node;
+@@ -382,6 +383,11 @@ enum uclamp_id {
+ #ifdef CONFIG_SMP
+ extern struct root_domain def_root_domain;
+ extern struct mutex sched_domains_mutex;
++extern void sched_domains_mutex_lock(void);
++extern void sched_domains_mutex_unlock(void);
++#else
++static inline void sched_domains_mutex_lock(void) { }
++static inline void sched_domains_mutex_unlock(void) { }
+ #endif
+
+ struct sched_param {
+@@ -1311,6 +1317,7 @@ struct task_struct {
+ struct perf_event_context *perf_event_ctxp;
+ struct mutex perf_event_mutex;
+ struct list_head perf_event_list;
++ struct perf_ctx_data __rcu *perf_ctx_data;
+ #endif
+ #ifdef CONFIG_DEBUG_PREEMPT
+ unsigned long preempt_disable_ip;
+diff --git a/include/linux/sched/deadline.h b/include/linux/sched/deadline.h
+index 3a912ab42bb550..f9aabbc9d22ef8 100644
+--- a/include/linux/sched/deadline.h
++++ b/include/linux/sched/deadline.h
+@@ -34,7 +34,11 @@ static inline bool dl_time_before(u64 a, u64 b)
+ struct root_domain;
+ extern void dl_add_task_root_domain(struct task_struct *p);
+ extern void dl_clear_root_domain(struct root_domain *rd);
++extern void dl_clear_root_domain_cpu(int cpu);
+
+ #endif /* CONFIG_SMP */
+
++extern u64 dl_cookie;
++extern bool dl_bw_visited(int cpu, u64 cookie);
++
+ #endif /* _LINUX_SCHED_DEADLINE_H */
+diff --git a/include/linux/sched/smt.h b/include/linux/sched/smt.h
+index fb1e295e7e63e2..166b19af956f8b 100644
+--- a/include/linux/sched/smt.h
++++ b/include/linux/sched/smt.h
+@@ -12,7 +12,7 @@ static __always_inline bool sched_smt_active(void)
+ return static_branch_likely(&sched_smt_present);
+ }
+ #else
+-static inline bool sched_smt_active(void) { return false; }
++static __always_inline bool sched_smt_active(void) { return false; }
+ #endif
+
+ void arch_smt_update(void);
+diff --git a/include/linux/seccomp.h b/include/linux/seccomp.h
+index e45531455d3bbe..d55949071c30ec 100644
+--- a/include/linux/seccomp.h
++++ b/include/linux/seccomp.h
+@@ -22,8 +22,9 @@
+ #include <linux/atomic.h>
+ #include <asm/seccomp.h>
+
+-#ifdef CONFIG_HAVE_ARCH_SECCOMP_FILTER
+ extern int __secure_computing(const struct seccomp_data *sd);
++
++#ifdef CONFIG_HAVE_ARCH_SECCOMP_FILTER
+ static inline int secure_computing(void)
+ {
+ if (unlikely(test_syscall_work(SECCOMP)))
+@@ -32,11 +33,6 @@ static inline int secure_computing(void)
+ }
+ #else
+ extern void secure_computing_strict(int this_syscall);
+-static inline int __secure_computing(const struct seccomp_data *sd)
+-{
+- secure_computing_strict(sd->nr);
+- return 0;
+-}
+ #endif
+
+ extern long prctl_get_seccomp(void);
+diff --git a/include/linux/thermal.h b/include/linux/thermal.h
+index 69f9bedd0ee88c..0b5ed682108073 100644
+--- a/include/linux/thermal.h
++++ b/include/linux/thermal.h
+@@ -86,8 +86,6 @@ struct thermal_trip {
+ #define THERMAL_TRIP_PRIV_TO_INT(_val_) (uintptr_t)(_val_)
+ #define THERMAL_INT_TO_TRIP_PRIV(_val_) (void *)(uintptr_t)(_val_)
+
+-struct thermal_zone_device;
+-
+ struct cooling_spec {
+ unsigned long upper; /* Highest cooling state */
+ unsigned long lower; /* Lowest cooling state */
+diff --git a/include/linux/uprobes.h b/include/linux/uprobes.h
+index b1df7d792fa16b..a6bec560bdbc0c 100644
+--- a/include/linux/uprobes.h
++++ b/include/linux/uprobes.h
+@@ -39,6 +39,8 @@ struct page;
+
+ #define MAX_URETPROBE_DEPTH 64
+
++#define UPROBE_NO_TRAMPOLINE_VADDR (~0UL)
++
+ struct uprobe_consumer {
+ /*
+ * handler() can return UPROBE_HANDLER_REMOVE to signal the need to
+diff --git a/include/linux/writeback.h b/include/linux/writeback.h
+index d11b903c2edb8a..58bda33479146f 100644
+--- a/include/linux/writeback.h
++++ b/include/linux/writeback.h
+@@ -313,6 +313,30 @@ static inline void cgroup_writeback_umount(struct super_block *sb)
+ /*
+ * mm/page-writeback.c
+ */
++/* consolidated parameters for balance_dirty_pages() and its subroutines */
++struct dirty_throttle_control {
++#ifdef CONFIG_CGROUP_WRITEBACK
++ struct wb_domain *dom;
++ struct dirty_throttle_control *gdtc; /* only set in memcg dtc's */
++#endif
++ struct bdi_writeback *wb;
++ struct fprop_local_percpu *wb_completions;
++
++ unsigned long avail; /* dirtyable */
++ unsigned long dirty; /* file_dirty + write + nfs */
++ unsigned long thresh; /* dirty threshold */
++ unsigned long bg_thresh; /* dirty background threshold */
++ unsigned long limit; /* hard dirty limit */
++
++ unsigned long wb_dirty; /* per-wb counterparts */
++ unsigned long wb_thresh;
++ unsigned long wb_bg_thresh;
++
++ unsigned long pos_ratio;
++ bool freerun;
++ bool dirty_exceeded;
++};
++
+ void laptop_io_completion(struct backing_dev_info *info);
+ void laptop_sync_completion(void);
+ void laptop_mode_timer_fn(struct timer_list *t);
+diff --git a/include/net/ax25.h b/include/net/ax25.h
+index 4ee141aae0a29d..a7bba42dde153a 100644
+--- a/include/net/ax25.h
++++ b/include/net/ax25.h
+@@ -418,7 +418,6 @@ void ax25_rt_device_down(struct net_device *);
+ int ax25_rt_ioctl(unsigned int, void __user *);
+ extern const struct seq_operations ax25_rt_seqops;
+ ax25_route *ax25_get_route(ax25_address *addr, struct net_device *dev);
+-int ax25_rt_autobind(ax25_cb *, ax25_address *);
+ struct sk_buff *ax25_rt_build_path(struct sk_buff *, ax25_address *,
+ ax25_address *, ax25_digi *);
+ void ax25_rt_free(void);
+diff --git a/include/net/bluetooth/hci.h b/include/net/bluetooth/hci.h
+index 3ec915738112b7..a8586c3058c7cd 100644
+--- a/include/net/bluetooth/hci.h
++++ b/include/net/bluetooth/hci.h
+@@ -208,6 +208,13 @@ enum {
+ */
+ HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED,
+
++ /* When this quirk is set consider Sync Flow Control as supported by
++ * the driver.
++ *
++ * This quirk must be set before hci_register_dev is called.
++ */
++ HCI_QUIRK_SYNC_FLOWCTL_SUPPORTED,
++
+ /* When this quirk is set, the LE states reported through the
+ * HCI_LE_READ_SUPPORTED_STATES are invalid/broken.
+ *
+@@ -354,6 +361,22 @@ enum {
+ * during the hdev->setup vendor callback.
+ */
+ HCI_QUIRK_FIXUP_LE_EXT_ADV_REPORT_PHY,
++
++ /* When this quirk is set, the HCI_OP_READ_VOICE_SETTING command is
++ * skipped. This is required for a subset of the CSR controller clones
++ * which erroneously claim to support it.
++ *
++ * This quirk must be set before hci_register_dev is called.
++ */
++ HCI_QUIRK_BROKEN_READ_VOICE_SETTING,
++
++ /* When this quirk is set, the HCI_OP_READ_PAGE_SCAN_TYPE command is
++ * skipped. This is required for a subset of the CSR controller clones
++ * which erroneously claim to support it.
++ *
++ * This quirk must be set before hci_register_dev is called.
++ */
++ HCI_QUIRK_BROKEN_READ_PAGE_SCAN_TYPE,
+ };
+
+ /* HCI device flags */
+@@ -432,6 +455,7 @@ enum {
+ HCI_WIDEBAND_SPEECH_ENABLED,
+ HCI_EVENT_FILTER_CONFIGURED,
+ HCI_PA_SYNC,
++ HCI_SCO_FLOWCTL,
+
+ HCI_DUT_MODE,
+ HCI_VENDOR_DIAG,
+@@ -855,6 +879,11 @@ struct hci_cp_remote_name_req_cancel {
+ bdaddr_t bdaddr;
+ } __packed;
+
++struct hci_rp_remote_name_req_cancel {
++ __u8 status;
++ bdaddr_t bdaddr;
++} __packed;
++
+ #define HCI_OP_READ_REMOTE_FEATURES 0x041b
+ struct hci_cp_read_remote_features {
+ __le16 handle;
+@@ -1528,6 +1557,11 @@ struct hci_rp_read_tx_power {
+ __s8 tx_power;
+ } __packed;
+
++#define HCI_OP_WRITE_SYNC_FLOWCTL 0x0c2f
++struct hci_cp_write_sync_flowctl {
++ __u8 enable;
++} __packed;
++
+ #define HCI_OP_READ_PAGE_SCAN_TYPE 0x0c46
+ struct hci_rp_read_page_scan_type {
+ __u8 status;
+diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
+index 6281063cbd8e44..f0b49aad519eb2 100644
+--- a/include/net/bluetooth/hci_core.h
++++ b/include/net/bluetooth/hci_core.h
+@@ -1858,6 +1858,7 @@ void hci_conn_del_sysfs(struct hci_conn *conn);
+ #define lmp_hold_capable(dev) ((dev)->features[0][0] & LMP_HOLD)
+ #define lmp_sniff_capable(dev) ((dev)->features[0][0] & LMP_SNIFF)
+ #define lmp_park_capable(dev) ((dev)->features[0][1] & LMP_PARK)
++#define lmp_sco_capable(dev) ((dev)->features[0][1] & LMP_SCO)
+ #define lmp_inq_rssi_capable(dev) ((dev)->features[0][3] & LMP_RSSI_INQ)
+ #define lmp_esco_capable(dev) ((dev)->features[0][3] & LMP_ESCO)
+ #define lmp_bredr_capable(dev) (!((dev)->features[0][4] & LMP_NO_BREDR))
+@@ -1925,6 +1926,10 @@ void hci_conn_del_sysfs(struct hci_conn *conn);
+ ((dev)->commands[20] & 0x10 && \
+ !test_bit(HCI_QUIRK_BROKEN_READ_ENC_KEY_SIZE, &hdev->quirks))
+
++#define read_voice_setting_capable(dev) \
++ ((dev)->commands[9] & 0x04 && \
++ !test_bit(HCI_QUIRK_BROKEN_READ_VOICE_SETTING, &(dev)->quirks))
++
+ /* Use enhanced synchronous connection if command is supported and its quirk
+ * has not been set.
+ */
+diff --git a/include/net/bonding.h b/include/net/bonding.h
+index 8bb5f016969f10..95f67b308c19a4 100644
+--- a/include/net/bonding.h
++++ b/include/net/bonding.h
+@@ -695,6 +695,7 @@ void bond_debug_register(struct bonding *bond);
+ void bond_debug_unregister(struct bonding *bond);
+ void bond_debug_reregister(struct bonding *bond);
+ const char *bond_mode_name(int mode);
++bool bond_xdp_check(struct bonding *bond, int mode);
+ void bond_setup(struct net_device *bond_dev);
+ unsigned int bond_get_num_tx_queues(void);
+ int bond_netlink_init(void);
+diff --git a/include/net/xdp_sock.h b/include/net/xdp_sock.h
+index bfe625b55d55d7..a58ae7589d1212 100644
+--- a/include/net/xdp_sock.h
++++ b/include/net/xdp_sock.h
+@@ -110,11 +110,16 @@ struct xdp_sock {
+ * indicates position where checksumming should start.
+ * csum_offset indicates position where checksum should be stored.
+ *
++ * void (*tmo_request_launch_time)(u64 launch_time, void *priv)
++ * Called when AF_XDP frame requested launch time HW offload support.
++ * launch_time indicates the PTP time at which the device can schedule the
++ * packet for transmission.
+ */
+ struct xsk_tx_metadata_ops {
+ void (*tmo_request_timestamp)(void *priv);
+ u64 (*tmo_fill_timestamp)(void *priv);
+ void (*tmo_request_checksum)(u16 csum_start, u16 csum_offset, void *priv);
++ void (*tmo_request_launch_time)(u64 launch_time, void *priv);
+ };
+
+ #ifdef CONFIG_XDP_SOCKETS
+@@ -162,6 +167,11 @@ static inline void xsk_tx_metadata_request(const struct xsk_tx_metadata *meta,
+ if (!meta)
+ return;
+
++ if (ops->tmo_request_launch_time)
++ if (meta->flags & XDP_TXMD_FLAGS_LAUNCH_TIME)
++ ops->tmo_request_launch_time(meta->request.launch_time,
++ priv);
++
+ if (ops->tmo_request_timestamp)
+ if (meta->flags & XDP_TXMD_FLAGS_TIMESTAMP)
+ ops->tmo_request_timestamp(priv);
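
The xdp_sock.h hunk adds an optional tmo_request_launch_time() callback that only fires when the frame carries XDP_TXMD_FLAGS_LAUNCH_TIME. A simplified, self-contained sketch of that flag-gated dispatch; the trimmed-down structs and the my_tmo_launch_time() driver hook are assumptions for illustration:

#include <stdint.h>
#include <stdio.h>

#define XDP_TXMD_FLAGS_LAUNCH_TIME (1 << 2)

struct xsk_tx_metadata {
	uint64_t flags;
	uint64_t launch_time;	/* ns against the PTP HW clock */
};

struct xsk_tx_metadata_ops {
	void (*tmo_request_launch_time)(uint64_t launch_time, void *priv);
};

/* Hypothetical driver hook: would program the NIC's launch-time FIFO. */
static void my_tmo_launch_time(uint64_t launch_time, void *priv)
{
	printf("schedule TX at %llu ns (priv %p)\n",
	       (unsigned long long)launch_time, priv);
}

static void xsk_tx_metadata_request(const struct xsk_tx_metadata *meta,
				    const struct xsk_tx_metadata_ops *ops,
				    void *priv)
{
	if (!meta)
		return;

	/* Only call the hook when the driver provides it and the frame asks. */
	if (ops->tmo_request_launch_time &&
	    (meta->flags & XDP_TXMD_FLAGS_LAUNCH_TIME))
		ops->tmo_request_launch_time(meta->launch_time, priv);
}

int main(void)
{
	struct xsk_tx_metadata_ops ops = {
		.tmo_request_launch_time = my_tmo_launch_time,
	};
	struct xsk_tx_metadata meta = {
		.flags = XDP_TXMD_FLAGS_LAUNCH_TIME,
		.launch_time = 1234567890ULL,
	};

	xsk_tx_metadata_request(&meta, &ops, NULL);
	return 0;
}
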
+diff --git a/include/net/xdp_sock_drv.h b/include/net/xdp_sock_drv.h
+index 784cd34f5bba50..fe4afb2517197b 100644
+--- a/include/net/xdp_sock_drv.h
++++ b/include/net/xdp_sock_drv.h
+@@ -199,6 +199,7 @@ static inline void *xsk_buff_raw_get_data(struct xsk_buff_pool *pool, u64 addr)
+ #define XDP_TXMD_FLAGS_VALID ( \
+ XDP_TXMD_FLAGS_TIMESTAMP | \
+ XDP_TXMD_FLAGS_CHECKSUM | \
++ XDP_TXMD_FLAGS_LAUNCH_TIME | \
+ 0)
+
+ static inline bool
+diff --git a/include/net/xfrm.h b/include/net/xfrm.h
+index ed4b83696c77f9..e1eed5d47d0725 100644
+--- a/include/net/xfrm.h
++++ b/include/net/xfrm.h
+@@ -464,6 +464,15 @@ struct xfrm_type_offload {
+
+ int xfrm_register_type_offload(const struct xfrm_type_offload *type, unsigned short family);
+ void xfrm_unregister_type_offload(const struct xfrm_type_offload *type, unsigned short family);
++void xfrm_set_type_offload(struct xfrm_state *x);
++static inline void xfrm_unset_type_offload(struct xfrm_state *x)
++{
++ if (!x->type_offload)
++ return;
++
++ module_put(x->type_offload->owner);
++ x->type_offload = NULL;
++}
+
+ /**
+ * struct xfrm_mode_cbs - XFRM mode callbacks
+@@ -1760,7 +1769,7 @@ void xfrm_spd_getinfo(struct net *net, struct xfrmk_spdinfo *si);
+ u32 xfrm_replay_seqhi(struct xfrm_state *x, __be32 net_seq);
+ int xfrm_init_replay(struct xfrm_state *x, struct netlink_ext_ack *extack);
+ u32 xfrm_state_mtu(struct xfrm_state *x, int mtu);
+-int __xfrm_init_state(struct xfrm_state *x, bool init_replay, bool offload,
++int __xfrm_init_state(struct xfrm_state *x, bool init_replay,
+ struct netlink_ext_ack *extack);
+ int xfrm_init_state(struct xfrm_state *x);
+ int xfrm_input(struct sk_buff *skb, int nexthdr, __be32 spi, int encap_type);
+diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
+index 0ad104dae2539b..43954bb0475a74 100644
+--- a/include/rdma/ib_verbs.h
++++ b/include/rdma/ib_verbs.h
+@@ -2750,6 +2750,7 @@ struct ib_device {
+ * It is a NULL terminated array.
+ */
+ const struct attribute_group *groups[4];
++ u8 hw_stats_attr_index;
+
+ u64 uverbs_cmd_mask;
+
+diff --git a/include/trace/define_trace.h b/include/trace/define_trace.h
+index e1c1079f8c8db3..ed52d0506c69ff 100644
+--- a/include/trace/define_trace.h
++++ b/include/trace/define_trace.h
+@@ -76,6 +76,10 @@
+ #define DECLARE_TRACE(name, proto, args) \
+ DEFINE_TRACE(name, PARAMS(proto), PARAMS(args))
+
++#undef DECLARE_TRACE_CONDITION
++#define DECLARE_TRACE_CONDITION(name, proto, args, cond) \
++ DEFINE_TRACE(name, PARAMS(proto), PARAMS(args))
++
+ /* If requested, create helpers for calling these tracepoints from Rust. */
+ #ifdef CREATE_RUST_TRACE_POINTS
+ #undef DEFINE_RUST_DO_TRACE
+@@ -108,6 +112,8 @@
+ /* Make all open coded DECLARE_TRACE nops */
+ #undef DECLARE_TRACE
+ #define DECLARE_TRACE(name, proto, args)
++#undef DECLARE_TRACE_CONDITION
++#define DECLARE_TRACE_CONDITION(name, proto, args, cond)
+
+ #ifdef TRACEPOINTS_ENABLED
+ #include <trace/trace_events.h>
+@@ -129,6 +135,7 @@
+ #undef DEFINE_EVENT_CONDITION
+ #undef TRACE_HEADER_MULTI_READ
+ #undef DECLARE_TRACE
++#undef DECLARE_TRACE_CONDITION
+
+ /* Only undef what we defined in this file */
+ #ifdef UNDEF_TRACE_INCLUDE_FILE
+diff --git a/include/trace/events/writeback.h b/include/trace/events/writeback.h
+index a261e86e61facf..5755c2a569e16e 100644
+--- a/include/trace/events/writeback.h
++++ b/include/trace/events/writeback.h
+@@ -629,11 +629,7 @@ TRACE_EVENT(bdi_dirty_ratelimit,
+ TRACE_EVENT(balance_dirty_pages,
+
+ TP_PROTO(struct bdi_writeback *wb,
+- unsigned long thresh,
+- unsigned long bg_thresh,
+- unsigned long dirty,
+- unsigned long bdi_thresh,
+- unsigned long bdi_dirty,
++ struct dirty_throttle_control *dtc,
+ unsigned long dirty_ratelimit,
+ unsigned long task_ratelimit,
+ unsigned long dirtied,
+@@ -641,7 +637,7 @@ TRACE_EVENT(balance_dirty_pages,
+ long pause,
+ unsigned long start_time),
+
+- TP_ARGS(wb, thresh, bg_thresh, dirty, bdi_thresh, bdi_dirty,
++ TP_ARGS(wb, dtc,
+ dirty_ratelimit, task_ratelimit,
+ dirtied, period, pause, start_time),
+
+@@ -664,16 +660,15 @@ TRACE_EVENT(balance_dirty_pages,
+ ),
+
+ TP_fast_assign(
+- unsigned long freerun = (thresh + bg_thresh) / 2;
++ unsigned long freerun = (dtc->thresh + dtc->bg_thresh) / 2;
+ strscpy_pad(__entry->bdi, bdi_dev_name(wb->bdi), 32);
+
+- __entry->limit = global_wb_domain.dirty_limit;
+- __entry->setpoint = (global_wb_domain.dirty_limit +
+- freerun) / 2;
+- __entry->dirty = dirty;
++ __entry->limit = dtc->limit;
++ __entry->setpoint = (dtc->limit + freerun) / 2;
++ __entry->dirty = dtc->dirty;
+ __entry->bdi_setpoint = __entry->setpoint *
+- bdi_thresh / (thresh + 1);
+- __entry->bdi_dirty = bdi_dirty;
++ dtc->wb_thresh / (dtc->thresh + 1);
++ __entry->bdi_dirty = dtc->wb_dirty;
+ __entry->dirty_ratelimit = KBps(dirty_ratelimit);
+ __entry->task_ratelimit = KBps(task_ratelimit);
+ __entry->dirtied = dirtied;
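
With the trace event above now reading from dirty_throttle_control, the derived quantities keep their previous formulas: freerun is the midpoint of the two thresholds, the setpoint is the midpoint of limit and freerun, and the per-writeback setpoint scales by wb_thresh / (thresh + 1). A tiny arithmetic sketch with invented sample numbers:

#include <stdio.h>

int main(void)
{
	/* Sample dirty_throttle_control-style inputs (illustrative only). */
	unsigned long thresh = 8000, bg_thresh = 4000, limit = 10000;
	unsigned long wb_thresh = 2000;

	unsigned long freerun  = (thresh + bg_thresh) / 2;  /* 6000 */
	unsigned long setpoint = (limit + freerun) / 2;     /* 8000 */
	/* per-writeback setpoint scales the global one by wb_thresh/thresh */
	unsigned long bdi_setpoint = setpoint * wb_thresh / (thresh + 1);

	printf("freerun=%lu setpoint=%lu bdi_setpoint=%lu\n",
	       freerun, setpoint, bdi_setpoint);
	return 0;
}
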
+diff --git a/include/uapi/linux/if_xdp.h b/include/uapi/linux/if_xdp.h
+index 42ec5ddaab8dc8..42869770776ec0 100644
+--- a/include/uapi/linux/if_xdp.h
++++ b/include/uapi/linux/if_xdp.h
+@@ -127,6 +127,12 @@ struct xdp_options {
+ */
+ #define XDP_TXMD_FLAGS_CHECKSUM (1 << 1)
+
++/* Request launch time hardware offload. The device will schedule the packet for
++ * transmission at a pre-determined time called launch time. The value of
++ * launch time is communicated via launch_time field of struct xsk_tx_metadata.
++ */
++#define XDP_TXMD_FLAGS_LAUNCH_TIME (1 << 2)
++
+ /* AF_XDP offloads request. 'request' union member is consumed by the driver
+ * when the packet is being transmitted. 'completion' union member is
+ * filled by the driver when the transmit completion arrives.
+@@ -142,6 +148,10 @@ struct xsk_tx_metadata {
+ __u16 csum_start;
+ /* Offset from csum_start where checksum should be stored. */
+ __u16 csum_offset;
++
++ /* XDP_TXMD_FLAGS_LAUNCH_TIME */
++ /* Launch time in nanosecond against the PTP HW Clock */
++ __u64 launch_time;
+ } request;
+
+ struct {
+diff --git a/include/uapi/linux/netdev.h b/include/uapi/linux/netdev.h
+index e4be227d3ad648..4324e89a802618 100644
+--- a/include/uapi/linux/netdev.h
++++ b/include/uapi/linux/netdev.h
+@@ -59,10 +59,13 @@ enum netdev_xdp_rx_metadata {
+ * by the driver.
+ * @NETDEV_XSK_FLAGS_TX_CHECKSUM: L3 checksum HW offload is supported by the
+ * driver.
++ * @NETDEV_XSK_FLAGS_TX_LAUNCH_TIME_FIFO: Launch time HW offload is supported
++ * by the driver.
+ */
+ enum netdev_xsk_flags {
+ NETDEV_XSK_FLAGS_TX_TIMESTAMP = 1,
+ NETDEV_XSK_FLAGS_TX_CHECKSUM = 2,
++ NETDEV_XSK_FLAGS_TX_LAUNCH_TIME_FIFO = 4,
+ };
+
+ enum netdev_queue_type {
+diff --git a/init/Kconfig b/init/Kconfig
+index 324c2886b2ea31..5ab47c346ef93a 100644
+--- a/init/Kconfig
++++ b/init/Kconfig
+@@ -129,6 +129,11 @@ config CC_HAS_COUNTED_BY
+ # https://github.com/llvm/llvm-project/pull/112636
+ depends on !(CC_IS_CLANG && CLANG_VERSION < 190103)
+
++config LD_CAN_USE_KEEP_IN_OVERLAY
++ # ld.lld prior to 21.0.0 did not support KEEP within an overlay description
++ # https://github.com/llvm/llvm-project/pull/130661
++ def_bool LD_IS_BFD || LLD_VERSION >= 210000
++
+ config RUSTC_HAS_COERCE_POINTEE
+ def_bool RUSTC_VERSION >= 108400
+
+diff --git a/io_uring/io-wq.c b/io_uring/io-wq.c
+index 91019b4d03088e..24f06fba430968 100644
+--- a/io_uring/io-wq.c
++++ b/io_uring/io-wq.c
+@@ -160,9 +160,9 @@ static inline struct io_wq_acct *io_get_acct(struct io_wq *wq, bool bound)
+ }
+
+ static inline struct io_wq_acct *io_work_get_acct(struct io_wq *wq,
+- struct io_wq_work *work)
++ unsigned int work_flags)
+ {
+- return io_get_acct(wq, !(atomic_read(&work->flags) & IO_WQ_WORK_UNBOUND));
++ return io_get_acct(wq, !(work_flags & IO_WQ_WORK_UNBOUND));
+ }
+
+ static inline struct io_wq_acct *io_wq_get_acct(struct io_worker *worker)
+@@ -452,9 +452,14 @@ static void __io_worker_idle(struct io_wq *wq, struct io_worker *worker)
+ }
+ }
+
++static inline unsigned int __io_get_work_hash(unsigned int work_flags)
++{
++ return work_flags >> IO_WQ_HASH_SHIFT;
++}
++
+ static inline unsigned int io_get_work_hash(struct io_wq_work *work)
+ {
+- return atomic_read(&work->flags) >> IO_WQ_HASH_SHIFT;
++ return __io_get_work_hash(atomic_read(&work->flags));
+ }
+
+ static bool io_wait_on_hash(struct io_wq *wq, unsigned int hash)
+@@ -484,17 +489,19 @@ static struct io_wq_work *io_get_next_work(struct io_wq_acct *acct,
+ struct io_wq *wq = worker->wq;
+
+ wq_list_for_each(node, prev, &acct->work_list) {
++ unsigned int work_flags;
+ unsigned int hash;
+
+ work = container_of(node, struct io_wq_work, list);
+
+ /* not hashed, can run anytime */
+- if (!io_wq_is_hashed(work)) {
++ work_flags = atomic_read(&work->flags);
++ if (!__io_wq_is_hashed(work_flags)) {
+ wq_list_del(&acct->work_list, node, prev);
+ return work;
+ }
+
+- hash = io_get_work_hash(work);
++ hash = __io_get_work_hash(work_flags);
+ /* all items with this hash lie in [work, tail] */
+ tail = wq->hash_tail[hash];
+
+@@ -591,12 +598,15 @@ static void io_worker_handle_work(struct io_wq_acct *acct,
+ /* handle a whole dependent link */
+ do {
+ struct io_wq_work *next_hashed, *linked;
+- unsigned int hash = io_get_work_hash(work);
++ unsigned int work_flags = atomic_read(&work->flags);
++ unsigned int hash = __io_wq_is_hashed(work_flags)
++ ? __io_get_work_hash(work_flags)
++ : -1U;
+
+ next_hashed = wq_next_work(work);
+
+ if (do_kill &&
+- (atomic_read(&work->flags) & IO_WQ_WORK_UNBOUND))
++ (work_flags & IO_WQ_WORK_UNBOUND))
+ atomic_or(IO_WQ_WORK_CANCEL, &work->flags);
+ wq->do_work(work);
+ io_assign_current_work(worker, NULL);
+@@ -916,19 +926,19 @@ static void io_run_cancel(struct io_wq_work *work, struct io_wq *wq)
+ } while (work);
+ }
+
+-static void io_wq_insert_work(struct io_wq *wq, struct io_wq_work *work)
++static void io_wq_insert_work(struct io_wq *wq, struct io_wq_acct *acct,
++ struct io_wq_work *work, unsigned int work_flags)
+ {
+- struct io_wq_acct *acct = io_work_get_acct(wq, work);
+ unsigned int hash;
+ struct io_wq_work *tail;
+
+- if (!io_wq_is_hashed(work)) {
++ if (!__io_wq_is_hashed(work_flags)) {
+ append:
+ wq_list_add_tail(&work->list, &acct->work_list);
+ return;
+ }
+
+- hash = io_get_work_hash(work);
++ hash = __io_get_work_hash(work_flags);
+ tail = wq->hash_tail[hash];
+ wq->hash_tail[hash] = work;
+ if (!tail)
+@@ -944,8 +954,8 @@ static bool io_wq_work_match_item(struct io_wq_work *work, void *data)
+
+ void io_wq_enqueue(struct io_wq *wq, struct io_wq_work *work)
+ {
+- struct io_wq_acct *acct = io_work_get_acct(wq, work);
+ unsigned int work_flags = atomic_read(&work->flags);
++ struct io_wq_acct *acct = io_work_get_acct(wq, work_flags);
+ struct io_cb_cancel_data match = {
+ .fn = io_wq_work_match_item,
+ .data = work,
+@@ -964,7 +974,7 @@ void io_wq_enqueue(struct io_wq *wq, struct io_wq_work *work)
+ }
+
+ raw_spin_lock(&acct->lock);
+- io_wq_insert_work(wq, work);
++ io_wq_insert_work(wq, acct, work, work_flags);
+ clear_bit(IO_ACCT_STALLED_BIT, &acct->flags);
+ raw_spin_unlock(&acct->lock);
+
+@@ -1034,10 +1044,10 @@ static bool io_wq_worker_cancel(struct io_worker *worker, void *data)
+ }
+
+ static inline void io_wq_remove_pending(struct io_wq *wq,
++ struct io_wq_acct *acct,
+ struct io_wq_work *work,
+ struct io_wq_work_node *prev)
+ {
+- struct io_wq_acct *acct = io_work_get_acct(wq, work);
+ unsigned int hash = io_get_work_hash(work);
+ struct io_wq_work *prev_work = NULL;
+
+@@ -1064,7 +1074,7 @@ static bool io_acct_cancel_pending_work(struct io_wq *wq,
+ work = container_of(node, struct io_wq_work, list);
+ if (!match->fn(work, match->data))
+ continue;
+- io_wq_remove_pending(wq, work, prev);
++ io_wq_remove_pending(wq, acct, work, prev);
+ raw_spin_unlock(&acct->lock);
+ io_run_cancel(work, wq);
+ match->nr_pending++;
+diff --git a/io_uring/io-wq.h b/io_uring/io-wq.h
+index b3b004a7b62528..d4fb2940e435f7 100644
+--- a/io_uring/io-wq.h
++++ b/io_uring/io-wq.h
+@@ -54,9 +54,14 @@ int io_wq_cpu_affinity(struct io_uring_task *tctx, cpumask_var_t mask);
+ int io_wq_max_workers(struct io_wq *wq, int *new_count);
+ bool io_wq_worker_stopped(void);
+
++static inline bool __io_wq_is_hashed(unsigned int work_flags)
++{
++ return work_flags & IO_WQ_WORK_HASHED;
++}
++
+ static inline bool io_wq_is_hashed(struct io_wq_work *work)
+ {
+- return atomic_read(&work->flags) & IO_WQ_WORK_HASHED;
++ return __io_wq_is_hashed(atomic_read(&work->flags));
+ }
+
+ typedef bool (work_cancel_fn)(struct io_wq_work *, void *);
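
The io-wq refactor reads work->flags once into a plain integer and threads that snapshot through the helpers instead of re-reading the atomic each time. A small C11 sketch of the read-once pattern; the flag values and hash shift below are illustrative, not the exact io-wq constants:

#include <stdatomic.h>
#include <stdio.h>

#define IO_WQ_WORK_HASHED   (1U << 1)   /* illustrative bit layout */
#define IO_WQ_WORK_UNBOUND  (1U << 2)
#define IO_WQ_HASH_SHIFT    24

struct io_wq_work {
	atomic_uint flags;
};

/* Helpers operate on a snapshot of the flags, not the atomic itself. */
static unsigned int work_is_hashed(unsigned int work_flags)
{
	return work_flags & IO_WQ_WORK_HASHED;
}

static unsigned int work_hash(unsigned int work_flags)
{
	return work_flags >> IO_WQ_HASH_SHIFT;
}

int main(void)
{
	struct io_wq_work work;

	atomic_init(&work.flags, IO_WQ_WORK_HASHED | (5U << IO_WQ_HASH_SHIFT));

	/* Read the atomic exactly once, then reuse the snapshot everywhere. */
	unsigned int work_flags = atomic_load(&work.flags);

	if (work_is_hashed(work_flags))
		printf("hash bucket %u, unbound=%d\n",
		       work_hash(work_flags),
		       !!(work_flags & IO_WQ_WORK_UNBOUND));
	return 0;
}
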
+diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
+index f7acae5f7e1d0b..4910ee7ac18aad 100644
+--- a/io_uring/io_uring.c
++++ b/io_uring/io_uring.c
+@@ -899,7 +899,7 @@ static void io_req_complete_post(struct io_kiocb *req, unsigned issue_flags)
+ * Handle special CQ sync cases via task_work. DEFER_TASKRUN requires
+ * the submitter task context, IOPOLL protects with uring_lock.
+ */
+- if (ctx->task_complete || (ctx->flags & IORING_SETUP_IOPOLL)) {
++ if (ctx->lockless_cq || (req->flags & REQ_F_REISSUE)) {
+ req->io_task_work.func = io_req_task_complete;
+ io_req_task_work_add(req);
+ return;
+@@ -3922,6 +3922,7 @@ static int __init io_uring_init(void)
+ SLAB_HWCACHE_ALIGN | SLAB_PANIC | SLAB_ACCOUNT);
+
+ iou_wq = alloc_workqueue("iou_exit", WQ_UNBOUND, 64);
++ BUG_ON(!iou_wq);
+
+ #ifdef CONFIG_SYSCTL
+ register_sysctl_init("kernel", kernel_io_uring_disabled_table);
+diff --git a/io_uring/net.c b/io_uring/net.c
+index 50e8a3ccc9de9f..16d54cd4d53f38 100644
+--- a/io_uring/net.c
++++ b/io_uring/net.c
+@@ -76,6 +76,8 @@ struct io_sr_msg {
+ /* initialised and used only by !msg send variants */
+ u16 buf_group;
+ u16 buf_index;
++ bool retry;
++ bool imported; /* only for io_send_zc */
+ void __user *msg_control;
+ /* used only for send zerocopy */
+ struct io_kiocb *notif;
+@@ -187,6 +189,7 @@ static inline void io_mshot_prep_retry(struct io_kiocb *req,
+
+ req->flags &= ~REQ_F_BL_EMPTY;
+ sr->done_io = 0;
++ sr->retry = false;
+ sr->len = 0; /* get from the provided buffer */
+ req->buf_index = sr->buf_group;
+ }
+@@ -404,6 +407,7 @@ int io_sendmsg_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ struct io_sr_msg *sr = io_kiocb_to_cmd(req, struct io_sr_msg);
+
+ sr->done_io = 0;
++ sr->retry = false;
+
+ if (req->opcode != IORING_OP_SEND) {
+ if (sqe->addr2 || sqe->file_index)
+@@ -786,6 +790,7 @@ int io_recvmsg_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ struct io_sr_msg *sr = io_kiocb_to_cmd(req, struct io_sr_msg);
+
+ sr->done_io = 0;
++ sr->retry = false;
+
+ if (unlikely(sqe->file_index || sqe->addr2))
+ return -EINVAL;
+@@ -834,6 +839,9 @@ int io_recvmsg_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ return io_recvmsg_prep_setup(req);
+ }
+
++/* bits to clear in old and inherit in new cflags on bundle retry */
++#define CQE_F_MASK (IORING_CQE_F_SOCK_NONEMPTY|IORING_CQE_F_MORE)
++
+ /*
+ * Finishes io_recv and io_recvmsg.
+ *
+@@ -853,9 +861,19 @@ static inline bool io_recv_finish(struct io_kiocb *req, int *ret,
+ if (sr->flags & IORING_RECVSEND_BUNDLE) {
+ cflags |= io_put_kbufs(req, *ret, io_bundle_nbufs(kmsg, *ret),
+ issue_flags);
++ if (sr->retry)
++ cflags = req->cqe.flags | (cflags & CQE_F_MASK);
+ /* bundle with no more immediate buffers, we're done */
+ if (req->flags & REQ_F_BL_EMPTY)
+ goto finish;
++ /* if more is available, retry and append to this one */
++ if (!sr->retry && kmsg->msg.msg_inq > 0 && *ret > 0) {
++ req->cqe.flags = cflags & ~CQE_F_MASK;
++ sr->len = kmsg->msg.msg_inq;
++ sr->done_io += *ret;
++ sr->retry = true;
++ return false;
++ }
+ } else {
+ cflags |= io_put_kbuf(req, *ret, issue_flags);
+ }
+@@ -1234,6 +1252,8 @@ int io_send_zc_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ struct io_kiocb *notif;
+
+ zc->done_io = 0;
++ zc->retry = false;
++ zc->imported = false;
+ req->flags |= REQ_F_POLL_NO_LAZY;
+
+ if (unlikely(READ_ONCE(sqe->__pad2[0]) || READ_ONCE(sqe->addr3)))
+@@ -1396,7 +1416,8 @@ int io_send_zc(struct io_kiocb *req, unsigned int issue_flags)
+ (zc->flags & IORING_RECVSEND_POLL_FIRST))
+ return -EAGAIN;
+
+- if (!zc->done_io) {
++ if (!zc->imported) {
++ zc->imported = true;
+ ret = io_send_zc_import(req, issue_flags);
+ if (unlikely(ret))
+ return ret;
+diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
+index da729cbbaeb90c..a0200fbbace99a 100644
+--- a/kernel/bpf/core.c
++++ b/kernel/bpf/core.c
+@@ -2290,17 +2290,18 @@ void bpf_patch_call_args(struct bpf_insn *insn, u32 stack_depth)
+ insn->code = BPF_JMP | BPF_CALL_ARGS;
+ }
+ #endif
+-#else
++#endif
++
+ static unsigned int __bpf_prog_ret0_warn(const void *ctx,
+ const struct bpf_insn *insn)
+ {
+ /* If this handler ever gets executed, then BPF_JIT_ALWAYS_ON
+- * is not working properly, so warn about it!
++ * is not working properly, or interpreter is being used when
++ * prog->jit_requested is not 0, so warn about it!
+ */
+ WARN_ON_ONCE(1);
+ return 0;
+ }
+-#endif
+
+ bool bpf_prog_map_compatible(struct bpf_map *map,
+ const struct bpf_prog *fp)
+@@ -2380,8 +2381,18 @@ static void bpf_prog_select_func(struct bpf_prog *fp)
+ {
+ #ifndef CONFIG_BPF_JIT_ALWAYS_ON
+ u32 stack_depth = max_t(u32, fp->aux->stack_depth, 1);
++ u32 idx = (round_up(stack_depth, 32) / 32) - 1;
+
+- fp->bpf_func = interpreters[(round_up(stack_depth, 32) / 32) - 1];
++ /* may_goto may cause stack size > 512, leading to idx out-of-bounds.
++ * But for non-JITed programs, we don't need bpf_func, so no bounds
++ * check needed.
++ */
++ if (!fp->jit_requested &&
++ !WARN_ON_ONCE(idx >= ARRAY_SIZE(interpreters))) {
++ fp->bpf_func = interpreters[idx];
++ } else {
++ fp->bpf_func = __bpf_prog_ret0_warn;
++ }
+ #else
+ fp->bpf_func = __bpf_prog_ret0_warn;
+ #endif
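
bpf_prog_select_func() above derives the interpreter table index as round_up(stack_depth, 32) / 32 - 1 and now bounds-checks it, since may_goto can push a non-JITed program past 512 bytes of stack. A plain-C sketch of that computation and check, with an invented table size mirroring MAX_BPF_STACK / 32:

#include <stdio.h>

#define MAX_BPF_STACK    512
#define NR_INTERPRETERS  (MAX_BPF_STACK / 32)   /* 16 entries: 32..512 bytes */

static unsigned int round_up32(unsigned int x)
{
	return (x + 31) & ~31U;
}

int main(void)
{
	unsigned int depths[] = { 1, 64, 512, 576 /* > 512, e.g. with may_goto */ };

	for (unsigned int i = 0; i < sizeof(depths) / sizeof(depths[0]); i++) {
		unsigned int idx = round_up32(depths[i]) / 32 - 1;

		if (idx >= NR_INTERPRETERS)
			printf("depth %u -> idx %u out of bounds, fall back\n",
			       depths[i], idx);
		else
			printf("depth %u -> interpreters[%u]\n", depths[i], idx);
	}
	return 0;
}
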
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 60611df77957a5..c6f3b5f4ff2beb 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -21897,6 +21897,13 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
+ if (subprogs[cur_subprog + 1].start == i + delta + 1) {
+ subprogs[cur_subprog].stack_depth += stack_depth_extra;
+ subprogs[cur_subprog].stack_extra = stack_depth_extra;
++
++ stack_depth = subprogs[cur_subprog].stack_depth;
++ if (stack_depth > MAX_BPF_STACK && !prog->jit_requested) {
++ verbose(env, "stack size %d(extra %d) is too large\n",
++ stack_depth, stack_depth_extra);
++ return -EINVAL;
++ }
+ cur_subprog++;
+ stack_depth = subprogs[cur_subprog].stack_depth;
+ stack_depth_extra = 0;
+diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
+index 0f910c828973a9..1892dc8cd21191 100644
+--- a/kernel/cgroup/cpuset.c
++++ b/kernel/cgroup/cpuset.c
+@@ -954,10 +954,12 @@ static void dl_update_tasks_root_domain(struct cpuset *cs)
+ css_task_iter_end(&it);
+ }
+
+-static void dl_rebuild_rd_accounting(void)
++void dl_rebuild_rd_accounting(void)
+ {
+ struct cpuset *cs = NULL;
+ struct cgroup_subsys_state *pos_css;
++ int cpu;
++ u64 cookie = ++dl_cookie;
+
+ lockdep_assert_held(&cpuset_mutex);
+ lockdep_assert_cpus_held();
+@@ -965,11 +967,12 @@ static void dl_rebuild_rd_accounting(void)
+
+ rcu_read_lock();
+
+- /*
+- * Clear default root domain DL accounting, it will be computed again
+- * if a task belongs to it.
+- */
+- dl_clear_root_domain(&def_root_domain);
++ for_each_possible_cpu(cpu) {
++ if (dl_bw_visited(cpu, cookie))
++ continue;
++
++ dl_clear_root_domain_cpu(cpu);
++ }
+
+ cpuset_for_each_descendant_pre(cs, pos_css, &top_cpuset) {
+
+@@ -994,10 +997,9 @@ static void
+ partition_and_rebuild_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
+ struct sched_domain_attr *dattr_new)
+ {
+- mutex_lock(&sched_domains_mutex);
++ sched_domains_mutex_lock();
+ partition_sched_domains_locked(ndoms_new, doms_new, dattr_new);
+- dl_rebuild_rd_accounting();
+- mutex_unlock(&sched_domains_mutex);
++ sched_domains_mutex_unlock();
+ }
+
+ /*
+@@ -1083,6 +1085,13 @@ void rebuild_sched_domains(void)
+ cpus_read_unlock();
+ }
+
++void cpuset_reset_sched_domains(void)
++{
++ mutex_lock(&cpuset_mutex);
++ partition_sched_domains(1, NULL, NULL);
++ mutex_unlock(&cpuset_mutex);
++}
++
+ /**
+ * cpuset_update_tasks_cpumask - Update the cpumasks of tasks in the cpuset.
+ * @cs: the cpuset in which each task's cpus_allowed mask needs to be changed
+diff --git a/kernel/cpu.c b/kernel/cpu.c
+index 07455d25329c90..ad755db29efd4d 100644
+--- a/kernel/cpu.c
++++ b/kernel/cpu.c
+@@ -1453,11 +1453,6 @@ static int __ref _cpu_down(unsigned int cpu, int tasks_frozen,
+
+ out:
+ cpus_write_unlock();
+- /*
+- * Do post unplug cleanup. This is still protected against
+- * concurrent CPU hotplug via cpu_add_remove_lock.
+- */
+- lockup_detector_cleanup();
+ arch_smt_update();
+ return ret;
+ }
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index 823aa08249161b..f6cf17929bb983 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -2407,6 +2407,7 @@ ctx_time_update_event(struct perf_event_context *ctx, struct perf_event *event)
+ #define DETACH_GROUP 0x01UL
+ #define DETACH_CHILD 0x02UL
+ #define DETACH_DEAD 0x04UL
++#define DETACH_EXIT 0x08UL
+
+ /*
+ * Cross CPU call to remove a performance event
+@@ -2421,6 +2422,7 @@ __perf_remove_from_context(struct perf_event *event,
+ void *info)
+ {
+ struct perf_event_pmu_context *pmu_ctx = event->pmu_ctx;
++ enum perf_event_state state = PERF_EVENT_STATE_OFF;
+ unsigned long flags = (unsigned long)info;
+
+ ctx_time_update(cpuctx, ctx);
+@@ -2429,16 +2431,19 @@ __perf_remove_from_context(struct perf_event *event,
+ * Ensure event_sched_out() switches to OFF, at the very least
+ * this avoids raising perf_pending_task() at this time.
+ */
+- if (flags & DETACH_DEAD)
++ if (flags & DETACH_EXIT)
++ state = PERF_EVENT_STATE_EXIT;
++ if (flags & DETACH_DEAD) {
+ event->pending_disable = 1;
++ state = PERF_EVENT_STATE_DEAD;
++ }
+ event_sched_out(event, ctx);
++ perf_event_set_state(event, min(event->state, state));
+ if (flags & DETACH_GROUP)
+ perf_group_detach(event);
+ if (flags & DETACH_CHILD)
+ perf_child_detach(event);
+ list_del_event(event, ctx);
+- if (flags & DETACH_DEAD)
+- event->state = PERF_EVENT_STATE_DEAD;
+
+ if (!pmu_ctx->nr_events) {
+ pmu_ctx->rotate_necessary = 0;
+@@ -3558,7 +3563,8 @@ static void perf_event_swap_task_ctx_data(struct perf_event_context *prev_ctx,
+ }
+ }
+
+-static void perf_ctx_sched_task_cb(struct perf_event_context *ctx, bool sched_in)
++static void perf_ctx_sched_task_cb(struct perf_event_context *ctx,
++ struct task_struct *task, bool sched_in)
+ {
+ struct perf_event_pmu_context *pmu_ctx;
+ struct perf_cpu_pmu_context *cpc;
+@@ -3567,7 +3573,7 @@ static void perf_ctx_sched_task_cb(struct perf_event_context *ctx, bool sched_in
+ cpc = this_cpu_ptr(pmu_ctx->pmu->cpu_pmu_context);
+
+ if (cpc->sched_cb_usage && pmu_ctx->pmu->sched_task)
+- pmu_ctx->pmu->sched_task(pmu_ctx, sched_in);
++ pmu_ctx->pmu->sched_task(pmu_ctx, task, sched_in);
+ }
+ }
+
+@@ -3630,7 +3636,7 @@ perf_event_context_sched_out(struct task_struct *task, struct task_struct *next)
+ WRITE_ONCE(ctx->task, next);
+ WRITE_ONCE(next_ctx->task, task);
+
+- perf_ctx_sched_task_cb(ctx, false);
++ perf_ctx_sched_task_cb(ctx, task, false);
+ perf_event_swap_task_ctx_data(ctx, next_ctx);
+
+ perf_ctx_enable(ctx, false);
+@@ -3660,7 +3666,7 @@ perf_event_context_sched_out(struct task_struct *task, struct task_struct *next)
+ perf_ctx_disable(ctx, false);
+
+ inside_switch:
+- perf_ctx_sched_task_cb(ctx, false);
++ perf_ctx_sched_task_cb(ctx, task, false);
+ task_ctx_sched_out(ctx, NULL, EVENT_ALL);
+
+ perf_ctx_enable(ctx, false);
+@@ -3702,7 +3708,8 @@ void perf_sched_cb_inc(struct pmu *pmu)
+ * PEBS requires this to provide PID/TID information. This requires we flush
+ * all queued PEBS records before we context switch to a new task.
+ */
+-static void __perf_pmu_sched_task(struct perf_cpu_pmu_context *cpc, bool sched_in)
++static void __perf_pmu_sched_task(struct perf_cpu_pmu_context *cpc,
++ struct task_struct *task, bool sched_in)
+ {
+ struct perf_cpu_context *cpuctx = this_cpu_ptr(&perf_cpu_context);
+ struct pmu *pmu;
+@@ -3716,7 +3723,7 @@ static void __perf_pmu_sched_task(struct perf_cpu_pmu_context *cpc, bool sched_i
+ perf_ctx_lock(cpuctx, cpuctx->task_ctx);
+ perf_pmu_disable(pmu);
+
+- pmu->sched_task(cpc->task_epc, sched_in);
++ pmu->sched_task(cpc->task_epc, task, sched_in);
+
+ perf_pmu_enable(pmu);
+ perf_ctx_unlock(cpuctx, cpuctx->task_ctx);
+@@ -3734,7 +3741,7 @@ static void perf_pmu_sched_task(struct task_struct *prev,
+ return;
+
+ list_for_each_entry(cpc, this_cpu_ptr(&sched_cb_list), sched_cb_entry)
+- __perf_pmu_sched_task(cpc, sched_in);
++ __perf_pmu_sched_task(cpc, sched_in ? next : prev, sched_in);
+ }
+
+ static void perf_event_switch(struct task_struct *task,
+@@ -4029,7 +4036,7 @@ static void perf_event_context_sched_in(struct task_struct *task)
+ perf_ctx_lock(cpuctx, ctx);
+ perf_ctx_disable(ctx, false);
+
+- perf_ctx_sched_task_cb(ctx, true);
++ perf_ctx_sched_task_cb(ctx, task, true);
+
+ perf_ctx_enable(ctx, false);
+ perf_ctx_unlock(cpuctx, ctx);
+@@ -4060,7 +4067,7 @@ static void perf_event_context_sched_in(struct task_struct *task)
+
+ perf_event_sched_in(cpuctx, ctx, NULL);
+
+- perf_ctx_sched_task_cb(cpuctx->task_ctx, true);
++ perf_ctx_sched_task_cb(cpuctx->task_ctx, task, true);
+
+ if (!RB_EMPTY_ROOT(&ctx->pinned_groups.tree))
+ perf_ctx_enable(&cpuctx->ctx, false);
+@@ -13448,12 +13455,7 @@ perf_event_exit_event(struct perf_event *event, struct perf_event_context *ctx)
+ mutex_lock(&parent_event->child_mutex);
+ }
+
+- perf_remove_from_context(event, detach_flags);
+-
+- raw_spin_lock_irq(&ctx->lock);
+- if (event->state > PERF_EVENT_STATE_EXIT)
+- perf_event_set_state(event, PERF_EVENT_STATE_EXIT);
+- raw_spin_unlock_irq(&ctx->lock);
++ perf_remove_from_context(event, detach_flags | DETACH_EXIT);
+
+ /*
+ * Child events can be freed.
+@@ -14002,6 +14004,7 @@ int perf_event_init_task(struct task_struct *child, u64 clone_flags)
+ child->perf_event_ctxp = NULL;
+ mutex_init(&child->perf_event_mutex);
+ INIT_LIST_HEAD(&child->perf_event_list);
++ child->perf_ctx_data = NULL;
+
+ ret = perf_event_init_context(child, clone_flags);
+ if (ret) {
+diff --git a/kernel/events/ring_buffer.c b/kernel/events/ring_buffer.c
+index 180509132d4b68..09459647cb8221 100644
+--- a/kernel/events/ring_buffer.c
++++ b/kernel/events/ring_buffer.c
+@@ -19,7 +19,7 @@
+
+ static void perf_output_wakeup(struct perf_output_handle *handle)
+ {
+- atomic_set(&handle->rb->poll, EPOLLIN);
++ atomic_set(&handle->rb->poll, EPOLLIN | EPOLLRDNORM);
+
+ handle->event->pending_wakeup = 1;
+
+diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
+index b4ca8898fe178e..7420a2a0d1f747 100644
+--- a/kernel/events/uprobes.c
++++ b/kernel/events/uprobes.c
+@@ -173,6 +173,7 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
+ DEFINE_FOLIO_VMA_WALK(pvmw, old_folio, vma, addr, 0);
+ int err;
+ struct mmu_notifier_range range;
++ pte_t pte;
+
+ mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm, addr,
+ addr + PAGE_SIZE);
+@@ -192,6 +193,16 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
+ if (!page_vma_mapped_walk(&pvmw))
+ goto unlock;
+ VM_BUG_ON_PAGE(addr != pvmw.address, old_page);
++ pte = ptep_get(pvmw.pte);
++
++ /*
++ * Handle PFN swap PTES, such as device-exclusive ones, that actually
++ * map pages: simply trigger GUP again to fix it up.
++ */
++ if (unlikely(!pte_present(pte))) {
++ page_vma_mapped_walk_done(&pvmw);
++ goto unlock;
++ }
+
+ if (new_page) {
+ folio_get(new_folio);
+@@ -206,7 +217,7 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
+ inc_mm_counter(mm, MM_ANONPAGES);
+ }
+
+- flush_cache_page(vma, addr, pte_pfn(ptep_get(pvmw.pte)));
++ flush_cache_page(vma, addr, pte_pfn(pte));
+ ptep_clear_flush(vma, addr, pvmw.pte);
+ if (new_page)
+ set_pte_at(mm, addr, pvmw.pte,
+@@ -2169,8 +2180,8 @@ void uprobe_copy_process(struct task_struct *t, unsigned long flags)
+ */
+ unsigned long uprobe_get_trampoline_vaddr(void)
+ {
++ unsigned long trampoline_vaddr = UPROBE_NO_TRAMPOLINE_VADDR;
+ struct xol_area *area;
+- unsigned long trampoline_vaddr = -1;
+
+ /* Pairs with xol_add_vma() smp_store_release() */
+ area = READ_ONCE(current->mm->uprobes_state.xol_area); /* ^^^ */
+diff --git a/kernel/fork.c b/kernel/fork.c
+index 735405a9c5f323..ca2ca3884f763d 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -504,6 +504,10 @@ struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
+ vma_numab_state_init(new);
+ dup_anon_vma_name(orig, new);
+
++ /* track_pfn_copy() will later take care of copying internal state. */
++ if (unlikely(new->vm_flags & VM_PFNMAP))
++ untrack_pfn_clear(new);
++
+ return new;
+ }
+
+diff --git a/kernel/kexec_elf.c b/kernel/kexec_elf.c
+index d3689632e8b90f..3a5c25b2adc94d 100644
+--- a/kernel/kexec_elf.c
++++ b/kernel/kexec_elf.c
+@@ -390,7 +390,7 @@ int kexec_elf_load(struct kimage *image, struct elfhdr *ehdr,
+ struct kexec_buf *kbuf,
+ unsigned long *lowest_load_addr)
+ {
+- unsigned long lowest_addr = UINT_MAX;
++ unsigned long lowest_addr = ULONG_MAX;
+ int ret;
+ size_t i;
+
+diff --git a/kernel/reboot.c b/kernel/reboot.c
+index b5a8569e5d81f5..f348f1ba9e2267 100644
+--- a/kernel/reboot.c
++++ b/kernel/reboot.c
+@@ -932,48 +932,76 @@ void orderly_reboot(void)
+ }
+ EXPORT_SYMBOL_GPL(orderly_reboot);
+
++static const char *hw_protection_action_str(enum hw_protection_action action)
++{
++ switch (action) {
++ case HWPROT_ACT_SHUTDOWN:
++ return "shutdown";
++ case HWPROT_ACT_REBOOT:
++ return "reboot";
++ default:
++ return "undefined";
++ }
++}
++
++static enum hw_protection_action hw_failure_emergency_action;
++
+ /**
+- * hw_failure_emergency_poweroff_func - emergency poweroff work after a known delay
+- * @work: work_struct associated with the emergency poweroff function
++ * hw_failure_emergency_action_func - emergency action work after a known delay
++ * @work: work_struct associated with the emergency action function
+ *
+ * This function is called in very critical situations to force
+- * a kernel poweroff after a configurable timeout value.
++ * a kernel poweroff or reboot after a configurable timeout value.
+ */
+-static void hw_failure_emergency_poweroff_func(struct work_struct *work)
++static void hw_failure_emergency_action_func(struct work_struct *work)
+ {
++ const char *action_str = hw_protection_action_str(hw_failure_emergency_action);
++
++ pr_emerg("Hardware protection timed-out. Trying forced %s\n",
++ action_str);
++
+ /*
+- * We have reached here after the emergency shutdown waiting period has
+- * expired. This means orderly_poweroff has not been able to shut off
+- * the system for some reason.
++ * We have reached here after the emergency action waiting period has
++ * expired. This means orderly_poweroff/reboot has not been able to
++ * shut off the system for some reason.
+ *
+- * Try to shut down the system immediately using kernel_power_off
+- * if populated
++ * Try to shut off the system immediately if possible
+ */
+- pr_emerg("Hardware protection timed-out. Trying forced poweroff\n");
+- kernel_power_off();
++
++ if (hw_failure_emergency_action == HWPROT_ACT_REBOOT)
++ kernel_restart(NULL);
++ else
++ kernel_power_off();
+
+ /*
+ * Worst of the worst case trigger emergency restart
+ */
+- pr_emerg("Hardware protection shutdown failed. Trying emergency restart\n");
++ pr_emerg("Hardware protection %s failed. Trying emergency restart\n",
++ action_str);
+ emergency_restart();
+ }
+
+-static DECLARE_DELAYED_WORK(hw_failure_emergency_poweroff_work,
+- hw_failure_emergency_poweroff_func);
++static DECLARE_DELAYED_WORK(hw_failure_emergency_action_work,
++ hw_failure_emergency_action_func);
+
+ /**
+- * hw_failure_emergency_poweroff - Trigger an emergency system poweroff
++ * hw_failure_emergency_schedule - Schedule an emergency system shutdown or reboot
++ *
++ * @action: The hardware protection action to be taken
++ * @action_delay_ms: Time in milliseconds to elapse before triggering action
+ *
+ * This may be called from any critical situation to trigger a system shutdown
+- * after a given period of time. If time is negative this is not scheduled.
++ * or reboot after a given period of time.
++ * If time is negative this is not scheduled.
+ */
+-static void hw_failure_emergency_poweroff(int poweroff_delay_ms)
++static void hw_failure_emergency_schedule(enum hw_protection_action action,
++ int action_delay_ms)
+ {
+- if (poweroff_delay_ms <= 0)
++ if (action_delay_ms <= 0)
+ return;
+- schedule_delayed_work(&hw_failure_emergency_poweroff_work,
+- msecs_to_jiffies(poweroff_delay_ms));
++ hw_failure_emergency_action = action;
++ schedule_delayed_work(&hw_failure_emergency_action_work,
++ msecs_to_jiffies(action_delay_ms));
+ }
+
+ /**
+@@ -983,10 +1011,7 @@ static void hw_failure_emergency_poweroff(int poweroff_delay_ms)
+ * @ms_until_forced: Time to wait for orderly shutdown or reboot before
+ * triggering it. Negative value disables the forced
+ * shutdown or reboot.
+- * @shutdown: If true, indicates that a shutdown will happen
+- * after the critical tempeature is reached.
+- * If false, indicates that a reboot will happen
+- * after the critical tempeature is reached.
++ * @action: The hardware protection action to be taken.
+ *
+ * Initiate an emergency system shutdown or reboot in order to protect
+ * hardware from further damage. Usage examples include a thermal protection.
+@@ -994,7 +1019,8 @@ static void hw_failure_emergency_poweroff(int poweroff_delay_ms)
+ * pending even if the previous request has given a large timeout for forced
+ * shutdown/reboot.
+ */
+-void __hw_protection_shutdown(const char *reason, int ms_until_forced, bool shutdown)
++void __hw_protection_shutdown(const char *reason, int ms_until_forced,
++ enum hw_protection_action action)
+ {
+ static atomic_t allow_proceed = ATOMIC_INIT(1);
+
+@@ -1008,11 +1034,11 @@ void __hw_protection_shutdown(const char *reason, int ms_until_forced, bool shut
+ * Queue a backup emergency shutdown in the event of
+ * orderly_poweroff failure
+ */
+- hw_failure_emergency_poweroff(ms_until_forced);
+- if (shutdown)
+- orderly_poweroff(true);
+- else
++ hw_failure_emergency_schedule(action, ms_until_forced);
++ if (action == HWPROT_ACT_REBOOT)
+ orderly_reboot();
++ else
++ orderly_poweroff(true);
+ }
+ EXPORT_SYMBOL_GPL(__hw_protection_shutdown);
+
+diff --git a/kernel/rseq.c b/kernel/rseq.c
+index 2cb16091ec0ae4..a7d81229eda04b 100644
+--- a/kernel/rseq.c
++++ b/kernel/rseq.c
+@@ -78,24 +78,24 @@ static int rseq_validate_ro_fields(struct task_struct *t)
+ return -EFAULT;
+ }
+
+-static void rseq_set_ro_fields(struct task_struct *t, u32 cpu_id_start, u32 cpu_id,
+- u32 node_id, u32 mm_cid)
+-{
+- rseq_kernel_fields(t)->cpu_id_start = cpu_id;
+- rseq_kernel_fields(t)->cpu_id = cpu_id;
+- rseq_kernel_fields(t)->node_id = node_id;
+- rseq_kernel_fields(t)->mm_cid = mm_cid;
+-}
++/*
++ * Update an rseq field and its in-kernel copy in lock-step to keep a coherent
++ * state.
++ */
++#define rseq_unsafe_put_user(t, value, field, error_label) \
++ do { \
++ unsafe_put_user(value, &t->rseq->field, error_label); \
++ rseq_kernel_fields(t)->field = value; \
++ } while (0)
++
+ #else
+ static int rseq_validate_ro_fields(struct task_struct *t)
+ {
+ return 0;
+ }
+
+-static void rseq_set_ro_fields(struct task_struct *t, u32 cpu_id_start, u32 cpu_id,
+- u32 node_id, u32 mm_cid)
+-{
+-}
++#define rseq_unsafe_put_user(t, value, field, error_label) \
++ unsafe_put_user(value, &t->rseq->field, error_label)
+ #endif
+
+ /*
+@@ -173,17 +173,18 @@ static int rseq_update_cpu_node_id(struct task_struct *t)
+ WARN_ON_ONCE((int) mm_cid < 0);
+ if (!user_write_access_begin(rseq, t->rseq_len))
+ goto efault;
+- unsafe_put_user(cpu_id, &rseq->cpu_id_start, efault_end);
+- unsafe_put_user(cpu_id, &rseq->cpu_id, efault_end);
+- unsafe_put_user(node_id, &rseq->node_id, efault_end);
+- unsafe_put_user(mm_cid, &rseq->mm_cid, efault_end);
++
++ rseq_unsafe_put_user(t, cpu_id, cpu_id_start, efault_end);
++ rseq_unsafe_put_user(t, cpu_id, cpu_id, efault_end);
++ rseq_unsafe_put_user(t, node_id, node_id, efault_end);
++ rseq_unsafe_put_user(t, mm_cid, mm_cid, efault_end);
++
+ /*
+ * Additional feature fields added after ORIG_RSEQ_SIZE
+ * need to be conditionally updated only if
+ * t->rseq_len != ORIG_RSEQ_SIZE.
+ */
+ user_write_access_end();
+- rseq_set_ro_fields(t, cpu_id, cpu_id, node_id, mm_cid);
+ trace_rseq_update(t);
+ return 0;
+
+@@ -195,6 +196,7 @@ static int rseq_update_cpu_node_id(struct task_struct *t)
+
+ static int rseq_reset_rseq_cpu_node_id(struct task_struct *t)
+ {
++ struct rseq __user *rseq = t->rseq;
+ u32 cpu_id_start = 0, cpu_id = RSEQ_CPU_ID_UNINITIALIZED, node_id = 0,
+ mm_cid = 0;
+
+@@ -202,38 +204,36 @@ static int rseq_reset_rseq_cpu_node_id(struct task_struct *t)
+ * Validate read-only rseq fields.
+ */
+ if (rseq_validate_ro_fields(t))
+- return -EFAULT;
+- /*
+- * Reset cpu_id_start to its initial state (0).
+- */
+- if (put_user(cpu_id_start, &t->rseq->cpu_id_start))
+- return -EFAULT;
+- /*
+- * Reset cpu_id to RSEQ_CPU_ID_UNINITIALIZED, so any user coming
+- * in after unregistration can figure out that rseq needs to be
+- * registered again.
+- */
+- if (put_user(cpu_id, &t->rseq->cpu_id))
+- return -EFAULT;
+- /*
+- * Reset node_id to its initial state (0).
+- */
+- if (put_user(node_id, &t->rseq->node_id))
+- return -EFAULT;
++ goto efault;
++
++ if (!user_write_access_begin(rseq, t->rseq_len))
++ goto efault;
++
+ /*
+- * Reset mm_cid to its initial state (0).
++ * Reset all fields to their initial state.
++ *
++ * All fields have an initial state of 0 except cpu_id which is set to
++ * RSEQ_CPU_ID_UNINITIALIZED, so that any user coming in after
++ * unregistration can figure out that rseq needs to be registered
++ * again.
+ */
+- if (put_user(mm_cid, &t->rseq->mm_cid))
+- return -EFAULT;
+-
+- rseq_set_ro_fields(t, cpu_id_start, cpu_id, node_id, mm_cid);
++ rseq_unsafe_put_user(t, cpu_id_start, cpu_id_start, efault_end);
++ rseq_unsafe_put_user(t, cpu_id, cpu_id, efault_end);
++ rseq_unsafe_put_user(t, node_id, node_id, efault_end);
++ rseq_unsafe_put_user(t, mm_cid, mm_cid, efault_end);
+
+ /*
+ * Additional feature fields added after ORIG_RSEQ_SIZE
+ * need to be conditionally reset only if
+ * t->rseq_len != ORIG_RSEQ_SIZE.
+ */
++ user_write_access_end();
+ return 0;
++
++efault_end:
++ user_write_access_end();
++efault:
++ return -EFAULT;
+ }
+
+ static int rseq_get_rseq_cs(struct task_struct *t, struct rseq_cs *rseq_cs)
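
The rseq rework folds the user-space write and the kernel shadow-copy update into one macro so the two views cannot drift apart. A user-space approximation of that lock-step macro; a null-pointer check stands in for the unsafe_put_user() fault path and the field values are arbitrary:

#include <stdint.h>
#include <stdio.h>

struct rseq_fields {
	uint32_t cpu_id_start, cpu_id, node_id, mm_cid;
};

/* Update the "user" struct and the kernel-side shadow copy in lock-step.
 * The null check mimics a faulting unsafe_put_user() jumping to a label. */
#define rseq_put(user, shadow, value, field, error_label)	\
	do {							\
		if (!(user))					\
			goto error_label;			\
		(user)->field = (value);			\
		(shadow)->field = (value);			\
	} while (0)

int main(void)
{
	struct rseq_fields user = { 0 }, shadow = { 0 };
	struct rseq_fields *uptr = &user;

	rseq_put(uptr, &shadow, 3, cpu_id_start, efault);
	rseq_put(uptr, &shadow, 3, cpu_id, efault);
	rseq_put(uptr, &shadow, 1, node_id, efault);
	rseq_put(uptr, &shadow, 7, mm_cid, efault);

	printf("user cpu_id=%u shadow cpu_id=%u\n",
	       (unsigned)user.cpu_id, (unsigned)shadow.cpu_id);
	return 0;

efault:
	return -1;
}
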
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index 042351c7afce7b..3c7c942c7c4297 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -8183,7 +8183,7 @@ static void cpuset_cpu_active(void)
+ * operation in the resume sequence, just build a single sched
+ * domain, ignoring cpusets.
+ */
+- partition_sched_domains(1, NULL, NULL);
++ cpuset_reset_sched_domains();
+ if (--num_cpus_frozen)
+ return;
+ /*
+@@ -8202,7 +8202,7 @@ static void cpuset_cpu_inactive(unsigned int cpu)
+ cpuset_update_active_cpus();
+ } else {
+ num_cpus_frozen++;
+- partition_sched_domains(1, NULL, NULL);
++ cpuset_reset_sched_domains();
+ }
+ }
+
+@@ -8424,9 +8424,9 @@ void __init sched_init_smp(void)
+ * CPU masks are stable and all blatant races in the below code cannot
+ * happen.
+ */
+- mutex_lock(&sched_domains_mutex);
++ sched_domains_mutex_lock();
+ sched_init_domains(cpu_active_mask);
+- mutex_unlock(&sched_domains_mutex);
++ sched_domains_mutex_unlock();
+
+ /* Move init over to a non-isolated CPU */
+ if (set_cpus_allowed_ptr(current, housekeeping_cpumask(HK_TYPE_DOMAIN)) < 0)
+diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
+index ff4df16b5186d7..5dca336cdd7ca5 100644
+--- a/kernel/sched/deadline.c
++++ b/kernel/sched/deadline.c
+@@ -166,14 +166,14 @@ static inline unsigned long dl_bw_capacity(int i)
+ }
+ }
+
+-static inline bool dl_bw_visited(int cpu, u64 gen)
++bool dl_bw_visited(int cpu, u64 cookie)
+ {
+ struct root_domain *rd = cpu_rq(cpu)->rd;
+
+- if (rd->visit_gen == gen)
++ if (rd->visit_cookie == cookie)
+ return true;
+
+- rd->visit_gen = gen;
++ rd->visit_cookie = cookie;
+ return false;
+ }
+
+@@ -207,7 +207,7 @@ static inline unsigned long dl_bw_capacity(int i)
+ return SCHED_CAPACITY_SCALE;
+ }
+
+-static inline bool dl_bw_visited(int cpu, u64 gen)
++bool dl_bw_visited(int cpu, u64 cookie)
+ {
+ return false;
+ }
+@@ -2956,7 +2956,7 @@ void dl_add_task_root_domain(struct task_struct *p)
+ struct dl_bw *dl_b;
+
+ raw_spin_lock_irqsave(&p->pi_lock, rf.flags);
+- if (!dl_task(p)) {
++ if (!dl_task(p) || dl_entity_is_special(&p->dl)) {
+ raw_spin_unlock_irqrestore(&p->pi_lock, rf.flags);
+ return;
+ }
+@@ -2981,18 +2981,22 @@ void dl_clear_root_domain(struct root_domain *rd)
+ rd->dl_bw.total_bw = 0;
+
+ /*
+- * dl_server bandwidth is only restored when CPUs are attached to root
+- * domains (after domains are created or CPUs moved back to the
+- * default root doamin).
++ * dl_servers are not tasks. Since dl_add_task_root_domain ignores
++ * them, we need to account for them here explicitly.
+ */
+ for_each_cpu(i, rd->span) {
+ struct sched_dl_entity *dl_se = &cpu_rq(i)->fair_server;
+
+ if (dl_server(dl_se) && cpu_active(i))
+- rd->dl_bw.total_bw += dl_se->dl_bw;
++ __dl_add(&rd->dl_bw, dl_se->dl_bw, dl_bw_cpus(i));
+ }
+ }
+
++void dl_clear_root_domain_cpu(int cpu)
++{
++ dl_clear_root_domain(cpu_rq(cpu)->rd);
++}
++
+ #endif /* CONFIG_SMP */
+
+ static void switched_from_dl(struct rq *rq, struct task_struct *p)
+@@ -3171,15 +3175,18 @@ DEFINE_SCHED_CLASS(dl) = {
+ #endif
+ };
+
+-/* Used for dl_bw check and update, used under sched_rt_handler()::mutex */
+-static u64 dl_generation;
++/*
++ * Used for dl_bw check and update, used under sched_rt_handler()::mutex and
++ * sched_domains_mutex.
++ */
++u64 dl_cookie;
+
+ int sched_dl_global_validate(void)
+ {
+ u64 runtime = global_rt_runtime();
+ u64 period = global_rt_period();
+ u64 new_bw = to_ratio(period, runtime);
+- u64 gen = ++dl_generation;
++ u64 cookie = ++dl_cookie;
+ struct dl_bw *dl_b;
+ int cpu, cpus, ret = 0;
+ unsigned long flags;
+@@ -3192,7 +3199,7 @@ int sched_dl_global_validate(void)
+ for_each_online_cpu(cpu) {
+ rcu_read_lock_sched();
+
+- if (dl_bw_visited(cpu, gen))
++ if (dl_bw_visited(cpu, cookie))
+ goto next;
+
+ dl_b = dl_bw_of(cpu);
+@@ -3229,7 +3236,7 @@ static void init_dl_rq_bw_ratio(struct dl_rq *dl_rq)
+ void sched_dl_do_global(void)
+ {
+ u64 new_bw = -1;
+- u64 gen = ++dl_generation;
++ u64 cookie = ++dl_cookie;
+ struct dl_bw *dl_b;
+ int cpu;
+ unsigned long flags;
+@@ -3240,7 +3247,7 @@ void sched_dl_do_global(void)
+ for_each_possible_cpu(cpu) {
+ rcu_read_lock_sched();
+
+- if (dl_bw_visited(cpu, gen)) {
++ if (dl_bw_visited(cpu, cookie)) {
+ rcu_read_unlock_sched();
+ continue;
+ }
+diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
+index ef047add7f9e62..a0893a483d35d5 100644
+--- a/kernel/sched/debug.c
++++ b/kernel/sched/debug.c
+@@ -292,7 +292,7 @@ static ssize_t sched_verbose_write(struct file *filp, const char __user *ubuf,
+ bool orig;
+
+ cpus_read_lock();
+- mutex_lock(&sched_domains_mutex);
++ sched_domains_mutex_lock();
+
+ orig = sched_debug_verbose;
+ result = debugfs_write_file_bool(filp, ubuf, cnt, ppos);
+@@ -304,7 +304,7 @@ static ssize_t sched_verbose_write(struct file *filp, const char __user *ubuf,
+ sd_dentry = NULL;
+ }
+
+- mutex_unlock(&sched_domains_mutex);
++ sched_domains_mutex_unlock();
+ cpus_read_unlock();
+
+ return result;
+@@ -515,9 +515,9 @@ static __init int sched_init_debug(void)
+ debugfs_create_u32("migration_cost_ns", 0644, debugfs_sched, &sysctl_sched_migration_cost);
+ debugfs_create_u32("nr_migrate", 0644, debugfs_sched, &sysctl_sched_nr_migrate);
+
+- mutex_lock(&sched_domains_mutex);
++ sched_domains_mutex_lock();
+ update_sched_domain_debugfs();
+- mutex_unlock(&sched_domains_mutex);
++ sched_domains_mutex_unlock();
+ #endif
+
+ #ifdef CONFIG_NUMA_BALANCING
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index c798d27952431b..89c7260103e18b 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -883,6 +883,26 @@ struct sched_entity *__pick_first_entity(struct cfs_rq *cfs_rq)
+ return __node_2_se(left);
+ }
+
++/*
++ * HACK, stash a copy of deadline at the point of pick in vlag,
++ * which isn't used until dequeue.
++ */
++static inline void set_protect_slice(struct sched_entity *se)
++{
++ se->vlag = se->deadline;
++}
++
++static inline bool protect_slice(struct sched_entity *se)
++{
++ return se->vlag == se->deadline;
++}
++
++static inline void cancel_protect_slice(struct sched_entity *se)
++{
++ if (protect_slice(se))
++ se->vlag = se->deadline + 1;
++}
++
+ /*
+ * Earliest Eligible Virtual Deadline First
+ *
+@@ -919,11 +939,7 @@ static struct sched_entity *pick_eevdf(struct cfs_rq *cfs_rq)
+ if (curr && (!curr->on_rq || !entity_eligible(cfs_rq, curr)))
+ curr = NULL;
+
+- /*
+- * Once selected, run a task until it either becomes non-eligible or
+- * until it gets a new slice. See the HACK in set_next_entity().
+- */
+- if (sched_feat(RUN_TO_PARITY) && curr && curr->vlag == curr->deadline)
++ if (sched_feat(RUN_TO_PARITY) && curr && protect_slice(curr))
+ return curr;
+
+ /* Pick the leftmost entity if it's eligible */
+@@ -5530,11 +5546,8 @@ set_next_entity(struct cfs_rq *cfs_rq, struct sched_entity *se)
+ update_stats_wait_end_fair(cfs_rq, se);
+ __dequeue_entity(cfs_rq, se);
+ update_load_avg(cfs_rq, se, UPDATE_TG);
+- /*
+- * HACK, stash a copy of deadline at the point of pick in vlag,
+- * which isn't used until dequeue.
+- */
+- se->vlag = se->deadline;
++
++ set_protect_slice(se);
+ }
+
+ update_stats_curr_start(cfs_rq, se);
+@@ -6991,6 +7004,8 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
+ update_cfs_group(se);
+
+ se->slice = slice;
++ if (se != cfs_rq->curr)
++ min_vruntime_cb_propagate(&se->run_node, NULL);
+ slice = cfs_rq_min_slice(cfs_rq);
+
+ cfs_rq->h_nr_runnable += h_nr_runnable;
+@@ -7120,6 +7135,8 @@ static int dequeue_entities(struct rq *rq, struct sched_entity *se, int flags)
+ update_cfs_group(se);
+
+ se->slice = slice;
++ if (se != cfs_rq->curr)
++ min_vruntime_cb_propagate(&se->run_node, NULL);
+ slice = cfs_rq_min_slice(cfs_rq);
+
+ cfs_rq->h_nr_runnable -= h_nr_runnable;
+@@ -8783,8 +8800,15 @@ static void check_preempt_wakeup_fair(struct rq *rq, struct task_struct *p, int
+ * Preempt an idle entity in favor of a non-idle entity (and don't preempt
+ * in the inverse case).
+ */
+- if (cse_is_idle && !pse_is_idle)
++ if (cse_is_idle && !pse_is_idle) {
++ /*
++ * When a non-idle entity preempts an idle entity,
++ * don't give the idle entity slice protection.
++ */
++ cancel_protect_slice(se);
+ goto preempt;
++ }
++
+ if (cse_is_idle != pse_is_idle)
+ return;
+
+@@ -8803,8 +8827,8 @@ static void check_preempt_wakeup_fair(struct rq *rq, struct task_struct *p, int
+ * Note that even if @p does not turn out to be the most eligible
+ * task at this moment, current's slice protection will be lost.
+ */
+- if (do_preempt_short(cfs_rq, pse, se) && se->vlag == se->deadline)
+- se->vlag = se->deadline + 1;
++ if (do_preempt_short(cfs_rq, pse, se))
++ cancel_protect_slice(se);
+
+ /*
+ * If @p has become the most eligible task, force preemption.
+diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
+index 4b8e33c615b12c..8cebe71d2bb161 100644
+--- a/kernel/sched/rt.c
++++ b/kernel/sched/rt.c
+@@ -2910,6 +2910,7 @@ static int sched_rt_handler(const struct ctl_table *table, int write, void *buff
+ int ret;
+
+ mutex_lock(&mutex);
++ sched_domains_mutex_lock();
+ old_period = sysctl_sched_rt_period;
+ old_runtime = sysctl_sched_rt_runtime;
+
+@@ -2936,6 +2937,7 @@ static int sched_rt_handler(const struct ctl_table *table, int write, void *buff
+ sysctl_sched_rt_period = old_period;
+ sysctl_sched_rt_runtime = old_runtime;
+ }
++ sched_domains_mutex_unlock();
+ mutex_unlock(&mutex);
+
+ return ret;
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index 023b844159c941..1aa65a0ac58644 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -998,7 +998,7 @@ struct root_domain {
+ * Also, some corner cases, like 'wrap around' is dangerous, but given
+ * that u64 is 'big enough'. So that shouldn't be a concern.
+ */
+- u64 visit_gen;
++ u64 visit_cookie;
+
+ #ifdef HAVE_RT_PUSH_IPI
+ /*
+diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
+index c49aea8c10254d..363ad268a25b0f 100644
+--- a/kernel/sched/topology.c
++++ b/kernel/sched/topology.c
+@@ -6,6 +6,14 @@
+ #include <linux/bsearch.h>
+
+ DEFINE_MUTEX(sched_domains_mutex);
++void sched_domains_mutex_lock(void)
++{
++ mutex_lock(&sched_domains_mutex);
++}
++void sched_domains_mutex_unlock(void)
++{
++ mutex_unlock(&sched_domains_mutex);
++}
+
+ /* Protected by sched_domains_mutex: */
+ static cpumask_var_t sched_domains_tmpmask;
+@@ -560,7 +568,7 @@ static int init_rootdomain(struct root_domain *rd)
+ rd->rto_push_work = IRQ_WORK_INIT_HARD(rto_push_irq_work_func);
+ #endif
+
+- rd->visit_gen = 0;
++ rd->visit_cookie = 0;
+ init_dl_bw(&rd->dl_bw);
+ if (cpudl_init(&rd->cpudl) != 0)
+ goto free_rto_mask;
+@@ -2783,6 +2791,7 @@ void partition_sched_domains_locked(int ndoms_new, cpumask_var_t doms_new[],
+ ndoms_cur = ndoms_new;
+
+ update_sched_domain_debugfs();
++ dl_rebuild_rd_accounting();
+ }
+
+ /*
+@@ -2791,7 +2800,7 @@ void partition_sched_domains_locked(int ndoms_new, cpumask_var_t doms_new[],
+ void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
+ struct sched_domain_attr *dattr_new)
+ {
+- mutex_lock(&sched_domains_mutex);
++ sched_domains_mutex_lock();
+ partition_sched_domains_locked(ndoms_new, doms_new, dattr_new);
+- mutex_unlock(&sched_domains_mutex);
++ sched_domains_mutex_unlock();
+ }
+diff --git a/kernel/seccomp.c b/kernel/seccomp.c
+index 7bbb408431ebcf..3231f63d93d89e 100644
+--- a/kernel/seccomp.c
++++ b/kernel/seccomp.c
+@@ -29,13 +29,11 @@
+ #include <linux/syscalls.h>
+ #include <linux/sysctl.h>
+
++#include <asm/syscall.h>
++
+ /* Not exposed in headers: strictly internal use only. */
+ #define SECCOMP_MODE_DEAD (SECCOMP_MODE_FILTER + 1)
+
+-#ifdef CONFIG_HAVE_ARCH_SECCOMP_FILTER
+-#include <asm/syscall.h>
+-#endif
+-
+ #ifdef CONFIG_SECCOMP_FILTER
+ #include <linux/file.h>
+ #include <linux/filter.h>
+@@ -1074,6 +1072,14 @@ void secure_computing_strict(int this_syscall)
+ else
+ BUG();
+ }
++int __secure_computing(const struct seccomp_data *sd)
++{
++ int this_syscall = sd ? sd->nr :
++ syscall_get_nr(current, current_pt_regs());
++
++ secure_computing_strict(this_syscall);
++ return 0;
++}
+ #else
+
+ #ifdef CONFIG_SECCOMP_FILTER
+diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
+index adc947587eb813..a612f6f182e511 100644
+--- a/kernel/trace/bpf_trace.c
++++ b/kernel/trace/bpf_trace.c
+@@ -843,7 +843,7 @@ static int bpf_send_signal_common(u32 sig, enum pid_type type, struct task_struc
+ if (unlikely(is_global_init(task)))
+ return -EPERM;
+
+- if (!preemptible()) {
++ if (preempt_count() != 0 || irqs_disabled()) {
+ /* Do an early check on signal validity. Otherwise,
+ * the error is lost in deferred irq_work.
+ */
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index bb6089c2951e50..510409f979923b 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -7411,9 +7411,9 @@ static __init int rb_write_something(struct rb_test_data *data, bool nested)
+ /* Ignore dropped events before test starts. */
+ if (started) {
+ if (nested)
+- data->bytes_dropped += len;
+- else
+ data->bytes_dropped_nested += len;
++ else
++ data->bytes_dropped += len;
+ }
+ return len;
+ }
+diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
+index 513de9ceb80ef0..b1f6d04f9fe992 100644
+--- a/kernel/trace/trace_events.c
++++ b/kernel/trace/trace_events.c
+@@ -470,6 +470,7 @@ static void test_event_printk(struct trace_event_call *call)
+ case '%':
+ continue;
+ case 'p':
++ do_pointer:
+ /* Find dereferencing fields */
+ switch (fmt[i + 1]) {
+ case 'B': case 'R': case 'r':
+@@ -498,6 +499,12 @@ static void test_event_printk(struct trace_event_call *call)
+ continue;
+ if (fmt[i + j] == '*') {
+ star = true;
++ /* Handle %*pbl case */
++ if (!j && fmt[i + 1] == 'p') {
++ arg++;
++ i++;
++ goto do_pointer;
++ }
+ continue;
+ }
+ if ((fmt[i + j] == 's')) {
+diff --git a/kernel/trace/trace_events_synth.c b/kernel/trace/trace_events_synth.c
+index e3f7d09e551208..0330ecdfb9f123 100644
+--- a/kernel/trace/trace_events_synth.c
++++ b/kernel/trace/trace_events_synth.c
+@@ -305,7 +305,7 @@ static const char *synth_field_fmt(char *type)
+ else if (strcmp(type, "gfp_t") == 0)
+ fmt = "%x";
+ else if (synth_field_is_string(type))
+- fmt = "%.*s";
++ fmt = "%s";
+ else if (synth_field_is_stack(type))
+ fmt = "%s";
+
+@@ -852,6 +852,38 @@ static struct trace_event_fields synth_event_fields_array[] = {
+ {}
+ };
+
++static int synth_event_reg(struct trace_event_call *call,
++ enum trace_reg type, void *data)
++{
++ struct synth_event *event = container_of(call, struct synth_event, call);
++
++ switch (type) {
++#ifdef CONFIG_PERF_EVENTS
++ case TRACE_REG_PERF_REGISTER:
++#endif
++ case TRACE_REG_REGISTER:
++ if (!try_module_get(event->mod))
++ return -EBUSY;
++ break;
++ default:
++ break;
++ }
++
++ int ret = trace_event_reg(call, type, data);
++
++ switch (type) {
++#ifdef CONFIG_PERF_EVENTS
++ case TRACE_REG_PERF_UNREGISTER:
++#endif
++ case TRACE_REG_UNREGISTER:
++ module_put(event->mod);
++ break;
++ default:
++ break;
++ }
++ return ret;
++}
++
+ static int register_synth_event(struct synth_event *event)
+ {
+ struct trace_event_call *call = &event->call;
+@@ -881,7 +913,7 @@ static int register_synth_event(struct synth_event *event)
+ goto out;
+ }
+ call->flags = TRACE_EVENT_FL_TRACEPOINT;
+- call->class->reg = trace_event_reg;
++ call->class->reg = synth_event_reg;
+ call->class->probe = trace_event_raw_event_synth;
+ call->data = event;
+ call->tp = event->tp;
+diff --git a/kernel/trace/trace_functions_graph.c b/kernel/trace/trace_functions_graph.c
+index 136c750b0b4da4..b3ee425bf2d7bb 100644
+--- a/kernel/trace/trace_functions_graph.c
++++ b/kernel/trace/trace_functions_graph.c
+@@ -1511,6 +1511,7 @@ void graph_trace_close(struct trace_iterator *iter)
+ if (data) {
+ free_percpu(data->cpu_data);
+ kfree(data);
++ iter->private = NULL;
+ }
+ }
+
+diff --git a/kernel/trace/trace_irqsoff.c b/kernel/trace/trace_irqsoff.c
+index 7294ad676379a1..f05b719d071603 100644
+--- a/kernel/trace/trace_irqsoff.c
++++ b/kernel/trace/trace_irqsoff.c
+@@ -250,8 +250,6 @@ static void irqsoff_trace_open(struct trace_iterator *iter)
+ {
+ if (is_graph(iter->tr))
+ graph_trace_open(iter);
+- else
+- iter->private = NULL;
+ }
+
+ static void irqsoff_trace_close(struct trace_iterator *iter)
+diff --git a/kernel/trace/trace_osnoise.c b/kernel/trace/trace_osnoise.c
+index f3a2722ee4c078..c83a51218ee59a 100644
+--- a/kernel/trace/trace_osnoise.c
++++ b/kernel/trace/trace_osnoise.c
+@@ -2032,7 +2032,6 @@ static int start_kthread(unsigned int cpu)
+
+ if (IS_ERR(kthread)) {
+ pr_err(BANNER "could not start sampling thread\n");
+- stop_per_cpu_kthreads();
+ return -ENOMEM;
+ }
+
+diff --git a/kernel/trace/trace_sched_wakeup.c b/kernel/trace/trace_sched_wakeup.c
+index af30586f1aeacb..e24ddcc234812c 100644
+--- a/kernel/trace/trace_sched_wakeup.c
++++ b/kernel/trace/trace_sched_wakeup.c
+@@ -188,8 +188,6 @@ static void wakeup_trace_open(struct trace_iterator *iter)
+ {
+ if (is_graph(iter->tr))
+ graph_trace_open(iter);
+- else
+- iter->private = NULL;
+ }
+
+ static void wakeup_trace_close(struct trace_iterator *iter)
+diff --git a/kernel/watch_queue.c b/kernel/watch_queue.c
+index 5267adeaa40345..41e4e8070923a9 100644
+--- a/kernel/watch_queue.c
++++ b/kernel/watch_queue.c
+@@ -269,6 +269,15 @@ long watch_queue_set_size(struct pipe_inode_info *pipe, unsigned int nr_notes)
+ if (ret < 0)
+ goto error;
+
++ /*
++ * pipe_resize_ring() does not update nr_accounted for watch_queue
++ * pipes, because the above vastly overprovisions. Set nr_accounted and
++ * max_usage on this pipe to the number that was actually charged to
++ * the user above via account_pipe_buffers.
++ */
++ pipe->max_usage = nr_pages;
++ pipe->nr_accounted = nr_pages;
++
+ ret = -ENOMEM;
+ pages = kcalloc(nr_pages, sizeof(struct page *), GFP_KERNEL);
+ if (!pages)
+diff --git a/kernel/watchdog.c b/kernel/watchdog.c
+index b2da7de39d06dd..18156023e46147 100644
+--- a/kernel/watchdog.c
++++ b/kernel/watchdog.c
+@@ -347,8 +347,6 @@ static int __init watchdog_thresh_setup(char *str)
+ }
+ __setup("watchdog_thresh=", watchdog_thresh_setup);
+
+-static void __lockup_detector_cleanup(void);
+-
+ #ifdef CONFIG_SOFTLOCKUP_DETECTOR_INTR_STORM
+ enum stats_per_group {
+ STATS_SYSTEM,
+@@ -886,11 +884,6 @@ static void __lockup_detector_reconfigure(void)
+
+ watchdog_hardlockup_start();
+ cpus_read_unlock();
+- /*
+- * Must be called outside the cpus locked section to prevent
+- * recursive locking in the perf code.
+- */
+- __lockup_detector_cleanup();
+ }
+
+ void lockup_detector_reconfigure(void)
+@@ -940,24 +933,6 @@ static inline void lockup_detector_setup(void)
+ }
+ #endif /* !CONFIG_SOFTLOCKUP_DETECTOR */
+
+-static void __lockup_detector_cleanup(void)
+-{
+- lockdep_assert_held(&watchdog_mutex);
+- hardlockup_detector_perf_cleanup();
+-}
+-
+-/**
+- * lockup_detector_cleanup - Cleanup after cpu hotplug or sysctl changes
+- *
+- * Caller must not hold the cpu hotplug rwsem.
+- */
+-void lockup_detector_cleanup(void)
+-{
+- mutex_lock(&watchdog_mutex);
+- __lockup_detector_cleanup();
+- mutex_unlock(&watchdog_mutex);
+-}
+-
+ /**
+ * lockup_detector_soft_poweroff - Interface to stop lockup detector(s)
+ *
+diff --git a/kernel/watchdog_perf.c b/kernel/watchdog_perf.c
+index 59c1d86a73a248..2fdb96eaf49336 100644
+--- a/kernel/watchdog_perf.c
++++ b/kernel/watchdog_perf.c
+@@ -21,8 +21,6 @@
+ #include <linux/perf_event.h>
+
+ static DEFINE_PER_CPU(struct perf_event *, watchdog_ev);
+-static DEFINE_PER_CPU(struct perf_event *, dead_event);
+-static struct cpumask dead_events_mask;
+
+ static atomic_t watchdog_cpus = ATOMIC_INIT(0);
+
+@@ -181,36 +179,12 @@ void watchdog_hardlockup_disable(unsigned int cpu)
+
+ if (event) {
+ perf_event_disable(event);
++ perf_event_release_kernel(event);
+ this_cpu_write(watchdog_ev, NULL);
+- this_cpu_write(dead_event, event);
+- cpumask_set_cpu(smp_processor_id(), &dead_events_mask);
+ atomic_dec(&watchdog_cpus);
+ }
+ }
+
+-/**
+- * hardlockup_detector_perf_cleanup - Cleanup disabled events and destroy them
+- *
+- * Called from lockup_detector_cleanup(). Serialized by the caller.
+- */
+-void hardlockup_detector_perf_cleanup(void)
+-{
+- int cpu;
+-
+- for_each_cpu(cpu, &dead_events_mask) {
+- struct perf_event *event = per_cpu(dead_event, cpu);
+-
+- /*
+- * Required because for_each_cpu() reports unconditionally
+- * CPU0 as set on UP kernels. Sigh.
+- */
+- if (event)
+- perf_event_release_kernel(event);
+- per_cpu(dead_event, cpu) = NULL;
+- }
+- cpumask_clear(&dead_events_mask);
+-}
+-
+ /**
+ * hardlockup_detector_perf_stop - Globally stop watchdog events
+ *
+diff --git a/lib/842/842_compress.c b/lib/842/842_compress.c
+index c02baa4168e168..055356508d97c5 100644
+--- a/lib/842/842_compress.c
++++ b/lib/842/842_compress.c
+@@ -532,6 +532,8 @@ int sw842_compress(const u8 *in, unsigned int ilen,
+ }
+ if (repeat_count) {
+ ret = add_repeat_template(p, repeat_count);
++ if (ret)
++ return ret;
+ repeat_count = 0;
+ if (next == last) /* reached max repeat bits */
+ goto repeat;
+diff --git a/lib/stackinit_kunit.c b/lib/stackinit_kunit.c
+index 135322592faf80..63aa78e6f5c1ad 100644
+--- a/lib/stackinit_kunit.c
++++ b/lib/stackinit_kunit.c
+@@ -184,6 +184,15 @@ static bool stackinit_range_contains(char *haystack_start, size_t haystack_size,
+ #define INIT_UNION_assigned_copy(var_type) \
+ INIT_STRUCT_assigned_copy(var_type)
+
++/*
++ * The "did we actually fill the stack?" check value needs
++ * to be neither 0 nor any of the "pattern" bytes. The
++ * pattern bytes are compiler, architecture, and type based,
++ * so we have to pick a value that never appears for those
++ * combinations. Use 0x99 which is not 0xFF, 0xFE, nor 0xAA.
++ */
++#define FILL_BYTE 0x99
++
+ /*
+ * @name: unique string name for the test
+ * @var_type: type to be tested for zeroing initialization
+@@ -206,12 +215,12 @@ static noinline void test_ ## name (struct kunit *test) \
+ ZERO_CLONE_ ## which(zero); \
+ /* Clear entire check buffer for 0xFF overlap test. */ \
+ memset(check_buf, 0x00, sizeof(check_buf)); \
+- /* Fill stack with 0xFF. */ \
++ /* Fill stack with FILL_BYTE. */ \
+ ignored = leaf_ ##name((unsigned long)&ignored, 1, \
+ FETCH_ARG_ ## which(zero)); \
+- /* Verify all bytes overwritten with 0xFF. */ \
++ /* Verify all bytes overwritten with FILL_BYTE. */ \
+ for (sum = 0, i = 0; i < target_size; i++) \
+- sum += (check_buf[i] != 0xFF); \
++ sum += (check_buf[i] != FILL_BYTE); \
+ /* Clear entire check buffer for later bit tests. */ \
+ memset(check_buf, 0x00, sizeof(check_buf)); \
+ /* Extract stack-defined variable contents. */ \
+@@ -222,7 +231,8 @@ static noinline void test_ ## name (struct kunit *test) \
+ * possible between the two leaf function calls. \
+ */ \
+ KUNIT_ASSERT_EQ_MSG(test, sum, 0, \
+- "leaf fill was not 0xFF!?\n"); \
++ "leaf fill was not 0x%02X!?\n", \
++ FILL_BYTE); \
+ \
+ /* Validate that compiler lined up fill and target. */ \
+ KUNIT_ASSERT_TRUE_MSG(test, \
+@@ -234,9 +244,9 @@ static noinline void test_ ## name (struct kunit *test) \
+ (int)((ssize_t)(uintptr_t)fill_start - \
+ (ssize_t)(uintptr_t)target_start)); \
+ \
+- /* Look for any bytes still 0xFF in check region. */ \
++ /* Validate check region has no FILL_BYTE bytes. */ \
+ for (sum = 0, i = 0; i < target_size; i++) \
+- sum += (check_buf[i] == 0xFF); \
++ sum += (check_buf[i] == FILL_BYTE); \
+ \
+ if (sum != 0 && xfail) \
+ kunit_skip(test, \
+@@ -271,12 +281,12 @@ static noinline int leaf_ ## name(unsigned long sp, bool fill, \
+ * stack frame of SOME kind... \
+ */ \
+ memset(buf, (char)(sp & 0xff), sizeof(buf)); \
+- /* Fill variable with 0xFF. */ \
++ /* Fill variable with FILL_BYTE. */ \
+ if (fill) { \
+ fill_start = &var; \
+ fill_size = sizeof(var); \
+ memset(fill_start, \
+- (char)((sp & 0xff) | forced_mask), \
++ FILL_BYTE & forced_mask, \
+ fill_size); \
+ } \
+ \
+@@ -469,7 +479,7 @@ static int noinline __leaf_switch_none(int path, bool fill)
+ fill_start = &var;
+ fill_size = sizeof(var);
+
+- memset(fill_start, forced_mask | 0x55, fill_size);
++ memset(fill_start, (forced_mask | 0x55) & FILL_BYTE, fill_size);
+ }
+ memcpy(check_buf, target_start, target_size);
+ break;
+@@ -480,7 +490,7 @@ static int noinline __leaf_switch_none(int path, bool fill)
+ fill_start = &var;
+ fill_size = sizeof(var);
+
+- memset(fill_start, forced_mask | 0xaa, fill_size);
++ memset(fill_start, (forced_mask | 0xaa) & FILL_BYTE, fill_size);
+ }
+ memcpy(check_buf, target_start, target_size);
+ break;
+diff --git a/lib/vsprintf.c b/lib/vsprintf.c
+index 56fe9631929267..a8ac4c4fffcf27 100644
+--- a/lib/vsprintf.c
++++ b/lib/vsprintf.c
+@@ -2285,7 +2285,7 @@ int __init no_hash_pointers_enable(char *str)
+ early_param("no_hash_pointers", no_hash_pointers_enable);
+
+ /* Used for Rust formatting ('%pA'). */
+-char *rust_fmt_argument(char *buf, char *end, void *ptr);
++char *rust_fmt_argument(char *buf, char *end, const void *ptr);
+
+ /*
+ * Show a '%p' thing. A kernel extension is that the '%p' is followed
+diff --git a/mm/gup.c b/mm/gup.c
+index 3883b307780ea1..61e751baf862c5 100644
+--- a/mm/gup.c
++++ b/mm/gup.c
+@@ -1283,6 +1283,9 @@ static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags)
+ if ((gup_flags & FOLL_LONGTERM) && vma_is_fsdax(vma))
+ return -EOPNOTSUPP;
+
++ if ((gup_flags & FOLL_SPLIT_PMD) && is_vm_hugetlb_page(vma))
++ return -EOPNOTSUPP;
++
+ if (vma_is_secretmem(vma))
+ return -EFAULT;
+
+diff --git a/mm/memory.c b/mm/memory.c
+index fb7b8dc7516796..53f7b0aaf2a332 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -1362,12 +1362,12 @@ int
+ copy_page_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma)
+ {
+ pgd_t *src_pgd, *dst_pgd;
+- unsigned long next;
+ unsigned long addr = src_vma->vm_start;
+ unsigned long end = src_vma->vm_end;
+ struct mm_struct *dst_mm = dst_vma->vm_mm;
+ struct mm_struct *src_mm = src_vma->vm_mm;
+ struct mmu_notifier_range range;
++ unsigned long next, pfn;
+ bool is_cow;
+ int ret;
+
+@@ -1378,11 +1378,7 @@ copy_page_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma)
+ return copy_hugetlb_page_range(dst_mm, src_mm, dst_vma, src_vma);
+
+ if (unlikely(src_vma->vm_flags & VM_PFNMAP)) {
+- /*
+- * We do not free on error cases below as remove_vma
+- * gets called on error from higher level routine
+- */
+- ret = track_pfn_copy(src_vma);
++ ret = track_pfn_copy(dst_vma, src_vma, &pfn);
+ if (ret)
+ return ret;
+ }
+@@ -1419,7 +1415,6 @@ copy_page_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma)
+ continue;
+ if (unlikely(copy_p4d_range(dst_vma, src_vma, dst_pgd, src_pgd,
+ addr, next))) {
+- untrack_pfn_clear(dst_vma);
+ ret = -ENOMEM;
+ break;
+ }
+@@ -1429,6 +1424,8 @@ copy_page_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma)
+ raw_write_seqcount_end(&src_mm->write_protect_seq);
+ mmu_notifier_invalidate_range_end(&range);
+ }
++ if (ret && unlikely(src_vma->vm_flags & VM_PFNMAP))
++ untrack_pfn_copy(dst_vma, pfn);
+ return ret;
+ }
+
+@@ -6834,10 +6831,8 @@ void __might_fault(const char *file, int line)
+ if (pagefault_disabled())
+ return;
+ __might_sleep(file, line);
+-#if defined(CONFIG_DEBUG_ATOMIC_SLEEP)
+ if (current->mm)
+ might_lock_read(&current->mm->mmap_lock);
+-#endif
+ }
+ EXPORT_SYMBOL(__might_fault);
+ #endif
+diff --git a/mm/page-writeback.c b/mm/page-writeback.c
+index eb55ece39c565d..3147119a9a0427 100644
+--- a/mm/page-writeback.c
++++ b/mm/page-writeback.c
+@@ -120,29 +120,6 @@ EXPORT_SYMBOL(laptop_mode);
+
+ struct wb_domain global_wb_domain;
+
+-/* consolidated parameters for balance_dirty_pages() and its subroutines */
+-struct dirty_throttle_control {
+-#ifdef CONFIG_CGROUP_WRITEBACK
+- struct wb_domain *dom;
+- struct dirty_throttle_control *gdtc; /* only set in memcg dtc's */
+-#endif
+- struct bdi_writeback *wb;
+- struct fprop_local_percpu *wb_completions;
+-
+- unsigned long avail; /* dirtyable */
+- unsigned long dirty; /* file_dirty + write + nfs */
+- unsigned long thresh; /* dirty threshold */
+- unsigned long bg_thresh; /* dirty background threshold */
+-
+- unsigned long wb_dirty; /* per-wb counterparts */
+- unsigned long wb_thresh;
+- unsigned long wb_bg_thresh;
+-
+- unsigned long pos_ratio;
+- bool freerun;
+- bool dirty_exceeded;
+-};
+-
+ /*
+ * Length of period for aging writeout fractions of bdis. This is an
+ * arbitrarily chosen number. The longer the period, the slower fractions will
+@@ -1095,7 +1072,7 @@ static void wb_position_ratio(struct dirty_throttle_control *dtc)
+ struct bdi_writeback *wb = dtc->wb;
+ unsigned long write_bw = READ_ONCE(wb->avg_write_bandwidth);
+ unsigned long freerun = dirty_freerun_ceiling(dtc->thresh, dtc->bg_thresh);
+- unsigned long limit = hard_dirty_limit(dtc_dom(dtc), dtc->thresh);
++ unsigned long limit = dtc->limit = hard_dirty_limit(dtc_dom(dtc), dtc->thresh);
+ unsigned long wb_thresh = dtc->wb_thresh;
+ unsigned long x_intercept;
+ unsigned long setpoint; /* dirty pages' target balance point */
+@@ -1962,11 +1939,7 @@ static int balance_dirty_pages(struct bdi_writeback *wb,
+ */
+ if (pause < min_pause) {
+ trace_balance_dirty_pages(wb,
+- sdtc->thresh,
+- sdtc->bg_thresh,
+- sdtc->dirty,
+- sdtc->wb_thresh,
+- sdtc->wb_dirty,
++ sdtc,
+ dirty_ratelimit,
+ task_ratelimit,
+ pages_dirtied,
+@@ -1991,11 +1964,7 @@ static int balance_dirty_pages(struct bdi_writeback *wb,
+
+ pause:
+ trace_balance_dirty_pages(wb,
+- sdtc->thresh,
+- sdtc->bg_thresh,
+- sdtc->dirty,
+- sdtc->wb_thresh,
+- sdtc->wb_dirty,
++ sdtc,
+ dirty_ratelimit,
+ task_ratelimit,
+ pages_dirtied,
+diff --git a/mm/zswap.c b/mm/zswap.c
+index 23365e76a3ce37..c7ff9e94520a5f 100644
+--- a/mm/zswap.c
++++ b/mm/zswap.c
+@@ -881,18 +881,32 @@ static int zswap_cpu_comp_dead(unsigned int cpu, struct hlist_node *node)
+ {
+ struct zswap_pool *pool = hlist_entry(node, struct zswap_pool, node);
+ struct crypto_acomp_ctx *acomp_ctx = per_cpu_ptr(pool->acomp_ctx, cpu);
++ struct acomp_req *req;
++ struct crypto_acomp *acomp;
++ u8 *buffer;
++
++ if (IS_ERR_OR_NULL(acomp_ctx))
++ return 0;
+
+ mutex_lock(&acomp_ctx->mutex);
+- if (!IS_ERR_OR_NULL(acomp_ctx)) {
+- if (!IS_ERR_OR_NULL(acomp_ctx->req))
+- acomp_request_free(acomp_ctx->req);
+- acomp_ctx->req = NULL;
+- if (!IS_ERR_OR_NULL(acomp_ctx->acomp))
+- crypto_free_acomp(acomp_ctx->acomp);
+- kfree(acomp_ctx->buffer);
+- }
++ req = acomp_ctx->req;
++ acomp = acomp_ctx->acomp;
++ buffer = acomp_ctx->buffer;
++ acomp_ctx->req = NULL;
++ acomp_ctx->acomp = NULL;
++ acomp_ctx->buffer = NULL;
+ mutex_unlock(&acomp_ctx->mutex);
+
++ /*
++ * Do the actual freeing after releasing the mutex to avoid subtle
++ * locking dependencies causing deadlocks.
++ */
++ if (!IS_ERR_OR_NULL(req))
++ acomp_request_free(req);
++ if (!IS_ERR_OR_NULL(acomp))
++ crypto_free_acomp(acomp);
++ kfree(buffer);
++
+ return 0;
+ }
+
+diff --git a/net/ax25/af_ax25.c b/net/ax25/af_ax25.c
+index 9f3b8b682adb29..3ee7dba3431008 100644
+--- a/net/ax25/af_ax25.c
++++ b/net/ax25/af_ax25.c
+@@ -1270,28 +1270,18 @@ static int __must_check ax25_connect(struct socket *sock,
+ }
+ }
+
+- /*
+- * Must bind first - autobinding in this may or may not work. If
+- * the socket is already bound, check to see if the device has
+- * been filled in, error if it hasn't.
+- */
++ /* Must bind first - autobinding does not work. */
+ if (sock_flag(sk, SOCK_ZAPPED)) {
+- /* check if we can remove this feature. It is broken. */
+- printk(KERN_WARNING "ax25_connect(): %s uses autobind, please contact jreuter@yaina.de\n",
+- current->comm);
+- if ((err = ax25_rt_autobind(ax25, &fsa->fsa_ax25.sax25_call)) < 0) {
+- kfree(digi);
+- goto out_release;
+- }
++ kfree(digi);
++ err = -EINVAL;
++ goto out_release;
++ }
+
+- ax25_fillin_cb(ax25, ax25->ax25_dev);
+- ax25_cb_add(ax25);
+- } else {
+- if (ax25->ax25_dev == NULL) {
+- kfree(digi);
+- err = -EHOSTUNREACH;
+- goto out_release;
+- }
++ /* Check to see if the device has been filled in, error if it hasn't. */
++ if (ax25->ax25_dev == NULL) {
++ kfree(digi);
++ err = -EHOSTUNREACH;
++ goto out_release;
+ }
+
+ if (sk->sk_type == SOCK_SEQPACKET &&
+diff --git a/net/ax25/ax25_route.c b/net/ax25/ax25_route.c
+index 69de75db0c9c21..10577434f40bf7 100644
+--- a/net/ax25/ax25_route.c
++++ b/net/ax25/ax25_route.c
+@@ -373,80 +373,6 @@ ax25_route *ax25_get_route(ax25_address *addr, struct net_device *dev)
+ return ax25_rt;
+ }
+
+-/*
+- * Adjust path: If you specify a default route and want to connect
+- * a target on the digipeater path but w/o having a special route
+- * set before, the path has to be truncated from your target on.
+- */
+-static inline void ax25_adjust_path(ax25_address *addr, ax25_digi *digipeat)
+-{
+- int k;
+-
+- for (k = 0; k < digipeat->ndigi; k++) {
+- if (ax25cmp(addr, &digipeat->calls[k]) == 0)
+- break;
+- }
+-
+- digipeat->ndigi = k;
+-}
+-
+-
+-/*
+- * Find which interface to use.
+- */
+-int ax25_rt_autobind(ax25_cb *ax25, ax25_address *addr)
+-{
+- ax25_uid_assoc *user;
+- ax25_route *ax25_rt;
+- int err = 0;
+-
+- ax25_route_lock_use();
+- ax25_rt = ax25_get_route(addr, NULL);
+- if (!ax25_rt) {
+- ax25_route_lock_unuse();
+- return -EHOSTUNREACH;
+- }
+- rcu_read_lock();
+- if ((ax25->ax25_dev = ax25_dev_ax25dev(ax25_rt->dev)) == NULL) {
+- err = -EHOSTUNREACH;
+- goto put;
+- }
+-
+- user = ax25_findbyuid(current_euid());
+- if (user) {
+- ax25->source_addr = user->call;
+- ax25_uid_put(user);
+- } else {
+- if (ax25_uid_policy && !capable(CAP_NET_BIND_SERVICE)) {
+- err = -EPERM;
+- goto put;
+- }
+- ax25->source_addr = *(ax25_address *)ax25->ax25_dev->dev->dev_addr;
+- }
+-
+- if (ax25_rt->digipeat != NULL) {
+- ax25->digipeat = kmemdup(ax25_rt->digipeat, sizeof(ax25_digi),
+- GFP_ATOMIC);
+- if (ax25->digipeat == NULL) {
+- err = -ENOMEM;
+- goto put;
+- }
+- ax25_adjust_path(addr, ax25->digipeat);
+- }
+-
+- if (ax25->sk != NULL) {
+- local_bh_disable();
+- bh_lock_sock(ax25->sk);
+- sock_reset_flag(ax25->sk, SOCK_ZAPPED);
+- bh_unlock_sock(ax25->sk);
+- local_bh_enable();
+- }
+-
+-put:
+- rcu_read_unlock();
+- ax25_route_lock_unuse();
+- return err;
+-}
+
+ struct sk_buff *ax25_rt_build_path(struct sk_buff *skb, ax25_address *src,
+ ax25_address *dest, ax25_digi *digi)
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index 012fc107901a6e..94d9147612daf0 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -3552,42 +3552,27 @@ static void __check_timeout(struct hci_dev *hdev, unsigned int cnt, u8 type)
+ }
+
+ /* Schedule SCO */
+-static void hci_sched_sco(struct hci_dev *hdev)
++static void hci_sched_sco(struct hci_dev *hdev, __u8 type)
+ {
+ struct hci_conn *conn;
+ struct sk_buff *skb;
+- int quote;
++ int quote, *cnt;
++ unsigned int pkts = hdev->sco_pkts;
+
+- BT_DBG("%s", hdev->name);
++ bt_dev_dbg(hdev, "type %u", type);
+
+- if (!hci_conn_num(hdev, SCO_LINK))
++ if (!hci_conn_num(hdev, type) || !pkts)
+ return;
+
+- while (hdev->sco_cnt && (conn = hci_low_sent(hdev, SCO_LINK, "e))) {
+- while (quote-- && (skb = skb_dequeue(&conn->data_q))) {
+- BT_DBG("skb %p len %d", skb, skb->len);
+- hci_send_frame(hdev, skb);
+-
+- conn->sent++;
+- if (conn->sent == ~0)
+- conn->sent = 0;
+- }
+- }
+-}
+-
+-static void hci_sched_esco(struct hci_dev *hdev)
+-{
+- struct hci_conn *conn;
+- struct sk_buff *skb;
+- int quote;
+-
+- BT_DBG("%s", hdev->name);
+-
+- if (!hci_conn_num(hdev, ESCO_LINK))
+- return;
++ /* Use sco_pkts if flow control has not been enabled, which will limit
++ * the number of buffers sent in a row.
++ */
++ if (!hci_dev_test_flag(hdev, HCI_SCO_FLOWCTL))
++ cnt = &pkts;
++ else
++ cnt = &hdev->sco_cnt;
+
+- while (hdev->sco_cnt && (conn = hci_low_sent(hdev, ESCO_LINK,
+- "e))) {
++ while (*cnt && (conn = hci_low_sent(hdev, type, "e))) {
+ while (quote-- && (skb = skb_dequeue(&conn->data_q))) {
+ BT_DBG("skb %p len %d", skb, skb->len);
+ hci_send_frame(hdev, skb);
+@@ -3595,8 +3580,17 @@ static void hci_sched_esco(struct hci_dev *hdev)
+ conn->sent++;
+ if (conn->sent == ~0)
+ conn->sent = 0;
++ (*cnt)--;
+ }
+ }
++
++ /* Reschedule if all packets were sent and flow control is not enabled,
++ * as there could be more packets queued that could not be sent and,
++ * since no HCI_EV_NUM_COMP_PKTS event will be generated, the reschedule
++ * needs to be forced.
++ */
++ if (!pkts && !hci_dev_test_flag(hdev, HCI_SCO_FLOWCTL))
++ queue_work(hdev->workqueue, &hdev->tx_work);
+ }
+
+ static void hci_sched_acl_pkt(struct hci_dev *hdev)
+@@ -3632,8 +3626,8 @@ static void hci_sched_acl_pkt(struct hci_dev *hdev)
+ chan->conn->sent++;
+
+ /* Send pending SCO packets right away */
+- hci_sched_sco(hdev);
+- hci_sched_esco(hdev);
++ hci_sched_sco(hdev, SCO_LINK);
++ hci_sched_sco(hdev, ESCO_LINK);
+ }
+ }
+
+@@ -3688,8 +3682,8 @@ static void hci_sched_le(struct hci_dev *hdev)
+ chan->conn->sent++;
+
+ /* Send pending SCO packets right away */
+- hci_sched_sco(hdev);
+- hci_sched_esco(hdev);
++ hci_sched_sco(hdev, SCO_LINK);
++ hci_sched_sco(hdev, ESCO_LINK);
+ }
+ }
+
+@@ -3734,8 +3728,8 @@ static void hci_tx_work(struct work_struct *work)
+
+ if (!hci_dev_test_flag(hdev, HCI_USER_CHANNEL)) {
+ /* Schedule queues and send stuff to HCI driver */
+- hci_sched_sco(hdev);
+- hci_sched_esco(hdev);
++ hci_sched_sco(hdev, SCO_LINK);
++ hci_sched_sco(hdev, ESCO_LINK);
+ hci_sched_iso(hdev);
+ hci_sched_acl(hdev);
+ hci_sched_le(hdev);
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index 903b0b52692aa6..e2bfbcee06a800 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -151,7 +151,7 @@ static u8 hci_cc_exit_periodic_inq(struct hci_dev *hdev, void *data,
+ static u8 hci_cc_remote_name_req_cancel(struct hci_dev *hdev, void *data,
+ struct sk_buff *skb)
+ {
+- struct hci_ev_status *rp = data;
++ struct hci_rp_remote_name_req_cancel *rp = data;
+
+ bt_dev_dbg(hdev, "status 0x%2.2x", rp->status);
+
+@@ -4012,8 +4012,8 @@ static const struct hci_cc {
+ HCI_CC_STATUS(HCI_OP_INQUIRY_CANCEL, hci_cc_inquiry_cancel),
+ HCI_CC_STATUS(HCI_OP_PERIODIC_INQ, hci_cc_periodic_inq),
+ HCI_CC_STATUS(HCI_OP_EXIT_PERIODIC_INQ, hci_cc_exit_periodic_inq),
+- HCI_CC_STATUS(HCI_OP_REMOTE_NAME_REQ_CANCEL,
+- hci_cc_remote_name_req_cancel),
++ HCI_CC(HCI_OP_REMOTE_NAME_REQ_CANCEL, hci_cc_remote_name_req_cancel,
++ sizeof(struct hci_rp_remote_name_req_cancel)),
+ HCI_CC(HCI_OP_ROLE_DISCOVERY, hci_cc_role_discovery,
+ sizeof(struct hci_rp_role_discovery)),
+ HCI_CC(HCI_OP_READ_LINK_POLICY, hci_cc_read_link_policy,
+@@ -4442,9 +4442,11 @@ static void hci_num_comp_pkts_evt(struct hci_dev *hdev, void *data,
+ break;
+
+ case SCO_LINK:
++ case ESCO_LINK:
+ hdev->sco_cnt += count;
+ if (hdev->sco_cnt > hdev->sco_pkts)
+ hdev->sco_cnt = hdev->sco_pkts;
++
+ break;
+
+ case ISO_LINK:
+@@ -6051,8 +6053,17 @@ static void process_adv_report(struct hci_dev *hdev, u8 type, bdaddr_t *bdaddr,
+ * a LE Direct Advertising Report event. In that case it is
+ * important to see if the address is matching the local
+ * controller address.
++ *
++ * If local privacy is not enabled, the controller shall not be
++ * generating such an event since, according to its documentation, it is
++ * only valid for filter_policy 0x02 and 0x03; but the fact that it did
++ * generate an LE Direct Advertising Report means it is probably broken
++ * and won't generate any other event, which can potentially break the
++ * auto-connect logic, so in case local privacy is not enabled this
++ * ignores the direct_addr so it works as a regular report.
+ */
+- if (!hci_dev_test_flag(hdev, HCI_MESH) && direct_addr) {
++ if (!hci_dev_test_flag(hdev, HCI_MESH) && direct_addr &&
++ hci_dev_test_flag(hdev, HCI_PRIVACY)) {
+ direct_addr_type = ev_bdaddr_type(hdev, direct_addr_type,
+ &bdaddr_resolved);
+
+@@ -6062,12 +6073,6 @@ static void process_adv_report(struct hci_dev *hdev, u8 type, bdaddr_t *bdaddr,
+ if (!hci_bdaddr_is_rpa(direct_addr, direct_addr_type))
+ return;
+
+- /* If the controller is not using resolvable random
+- * addresses, then this report can be ignored.
+- */
+- if (!hci_dev_test_flag(hdev, HCI_PRIVACY))
+- return;
+-
+ /* If the local IRK of the controller does not match
+ * with the resolvable random address provided, then
+ * this report can be ignored.
+diff --git a/net/bluetooth/hci_sync.c b/net/bluetooth/hci_sync.c
+index dd770ef5ec3684..14c3ee5c6a1e89 100644
+--- a/net/bluetooth/hci_sync.c
++++ b/net/bluetooth/hci_sync.c
+@@ -3696,6 +3696,9 @@ static int hci_read_local_name_sync(struct hci_dev *hdev)
+ /* Read Voice Setting */
+ static int hci_read_voice_setting_sync(struct hci_dev *hdev)
+ {
++ if (!read_voice_setting_capable(hdev))
++ return 0;
++
+ return __hci_cmd_sync_status(hdev, HCI_OP_READ_VOICE_SETTING,
+ 0, NULL, HCI_CMD_TIMEOUT);
+ }
+@@ -3766,6 +3769,28 @@ static int hci_write_ca_timeout_sync(struct hci_dev *hdev)
+ sizeof(param), &param, HCI_CMD_TIMEOUT);
+ }
+
++/* Enable SCO flow control if supported */
++static int hci_write_sync_flowctl_sync(struct hci_dev *hdev)
++{
++ struct hci_cp_write_sync_flowctl cp;
++ int err;
++
++ /* Check if the controller supports SCO and HCI_OP_WRITE_SYNC_FLOWCTL */
++ if (!lmp_sco_capable(hdev) || !(hdev->commands[10] & BIT(4)) ||
++ !test_bit(HCI_QUIRK_SYNC_FLOWCTL_SUPPORTED, &hdev->quirks))
++ return 0;
++
++ memset(&cp, 0, sizeof(cp));
++ cp.enable = 0x01;
++
++ err = __hci_cmd_sync_status(hdev, HCI_OP_WRITE_SYNC_FLOWCTL,
++ sizeof(cp), &cp, HCI_CMD_TIMEOUT);
++ if (!err)
++ hci_dev_set_flag(hdev, HCI_SCO_FLOWCTL);
++
++ return err;
++}
++
+ /* BR Controller init stage 2 command sequence */
+ static const struct hci_init_stage br_init2[] = {
+ /* HCI_OP_READ_BUFFER_SIZE */
+@@ -3784,6 +3809,8 @@ static const struct hci_init_stage br_init2[] = {
+ HCI_INIT(hci_clear_event_filter_sync),
+ /* HCI_OP_WRITE_CA_TIMEOUT */
+ HCI_INIT(hci_write_ca_timeout_sync),
++ /* HCI_OP_WRITE_SYNC_FLOWCTL */
++ HCI_INIT(hci_write_sync_flowctl_sync),
+ {}
+ };
+
+@@ -4129,7 +4156,8 @@ static int hci_read_page_scan_type_sync(struct hci_dev *hdev)
+ * support the Read Page Scan Type command. Check support for
+ * this command in the bit mask of supported commands.
+ */
+- if (!(hdev->commands[13] & 0x01))
++ if (!(hdev->commands[13] & 0x01) ||
++ test_bit(HCI_QUIRK_BROKEN_READ_PAGE_SCAN_TYPE, &hdev->quirks))
+ return 0;
+
+ return __hci_cmd_sync_status(hdev, HCI_OP_READ_PAGE_SCAN_TYPE,
+diff --git a/net/bridge/br_ioctl.c b/net/bridge/br_ioctl.c
+index f213ed10836185..6bc0a11f2ed3e6 100644
+--- a/net/bridge/br_ioctl.c
++++ b/net/bridge/br_ioctl.c
+@@ -394,10 +394,26 @@ static int old_deviceless(struct net *net, void __user *data)
+ return -EOPNOTSUPP;
+ }
+
+-int br_ioctl_stub(struct net *net, struct net_bridge *br, unsigned int cmd,
+- struct ifreq *ifr, void __user *uarg)
++int br_ioctl_stub(struct net *net, unsigned int cmd, void __user *uarg)
+ {
+ int ret = -EOPNOTSUPP;
++ struct ifreq ifr;
++
++ if (cmd == SIOCBRADDIF || cmd == SIOCBRDELIF) {
++ void __user *data;
++ char *colon;
++
++ if (!ns_capable(net->user_ns, CAP_NET_ADMIN))
++ return -EPERM;
++
++ if (get_user_ifreq(&ifr, &data, uarg))
++ return -EFAULT;
++
++ ifr.ifr_name[IFNAMSIZ - 1] = 0;
++ colon = strchr(ifr.ifr_name, ':');
++ if (colon)
++ *colon = 0;
++ }
+
+ rtnl_lock();
+
+@@ -430,7 +446,21 @@ int br_ioctl_stub(struct net *net, struct net_bridge *br, unsigned int cmd,
+ break;
+ case SIOCBRADDIF:
+ case SIOCBRDELIF:
+- ret = add_del_if(br, ifr->ifr_ifindex, cmd == SIOCBRADDIF);
++ {
++ struct net_device *dev;
++
++ dev = __dev_get_by_name(net, ifr.ifr_name);
++ if (!dev || !netif_device_present(dev)) {
++ ret = -ENODEV;
++ break;
++ }
++ if (!netif_is_bridge_master(dev)) {
++ ret = -EOPNOTSUPP;
++ break;
++ }
++
++ ret = add_del_if(netdev_priv(dev), ifr.ifr_ifindex, cmd == SIOCBRADDIF);
++ }
+ break;
+ }
+
+diff --git a/net/bridge/br_private.h b/net/bridge/br_private.h
+index 1054b8a88edc47..d5b3c5936a79e1 100644
+--- a/net/bridge/br_private.h
++++ b/net/bridge/br_private.h
+@@ -949,8 +949,7 @@ br_port_get_check_rtnl(const struct net_device *dev)
+ /* br_ioctl.c */
+ int br_dev_siocdevprivate(struct net_device *dev, struct ifreq *rq,
+ void __user *data, int cmd);
+-int br_ioctl_stub(struct net *net, struct net_bridge *br, unsigned int cmd,
+- struct ifreq *ifr, void __user *uarg);
++int br_ioctl_stub(struct net *net, unsigned int cmd, void __user *uarg);
+
+ /* br_multicast.c */
+ #ifdef CONFIG_BRIDGE_IGMP_SNOOPING
+diff --git a/net/core/dev_ioctl.c b/net/core/dev_ioctl.c
+index 4c2098ac9d7243..57f79f8e84665e 100644
+--- a/net/core/dev_ioctl.c
++++ b/net/core/dev_ioctl.c
+@@ -551,7 +551,6 @@ static int dev_ifsioc(struct net *net, struct ifreq *ifr, void __user *data,
+ int err;
+ struct net_device *dev = __dev_get_by_name(net, ifr->ifr_name);
+ const struct net_device_ops *ops;
+- netdevice_tracker dev_tracker;
+
+ if (!dev)
+ return -ENODEV;
+@@ -614,22 +613,6 @@ static int dev_ifsioc(struct net *net, struct ifreq *ifr, void __user *data,
+ case SIOCWANDEV:
+ return dev_siocwandev(dev, &ifr->ifr_settings);
+
+- case SIOCBRADDIF:
+- case SIOCBRDELIF:
+- if (!netif_device_present(dev))
+- return -ENODEV;
+- if (!netif_is_bridge_master(dev))
+- return -EOPNOTSUPP;
+-
+- netdev_hold(dev, &dev_tracker, GFP_KERNEL);
+- rtnl_net_unlock(net);
+-
+- err = br_ioctl_call(net, netdev_priv(dev), cmd, ifr, NULL);
+-
+- netdev_put(dev, &dev_tracker);
+- rtnl_net_lock(net);
+- return err;
+-
+ case SIOCDEVPRIVATE ... SIOCDEVPRIVATE + 15:
+ return dev_siocdevprivate(dev, ifr, data, cmd);
+
+@@ -812,8 +795,6 @@ int dev_ioctl(struct net *net, unsigned int cmd, struct ifreq *ifr,
+ case SIOCBONDRELEASE:
+ case SIOCBONDSETHWADDR:
+ case SIOCBONDCHANGEACTIVE:
+- case SIOCBRADDIF:
+- case SIOCBRDELIF:
+ case SIOCSHWTSTAMP:
+ if (!ns_capable(net->user_ns, CAP_NET_ADMIN))
+ return -EPERM;
+diff --git a/net/core/dst.c b/net/core/dst.c
+index 9552a90d4772dc..6d76b799ce645d 100644
+--- a/net/core/dst.c
++++ b/net/core/dst.c
+@@ -165,6 +165,14 @@ static void dst_count_dec(struct dst_entry *dst)
+ void dst_release(struct dst_entry *dst)
+ {
+ if (dst && rcuref_put(&dst->__rcuref)) {
++#ifdef CONFIG_DST_CACHE
++ if (dst->flags & DST_METADATA) {
++ struct metadata_dst *md_dst = (struct metadata_dst *)dst;
++
++ if (md_dst->type == METADATA_IP_TUNNEL)
++ dst_cache_reset_now(&md_dst->u.tun_info.dst_cache);
++ }
++#endif
+ dst_count_dec(dst);
+ call_rcu_hurry(&dst->rcu_head, dst_destroy_rcu);
+ }
+diff --git a/net/core/netdev-genl.c b/net/core/netdev-genl.c
+index 715f85c6b62eab..7832abc5ca6e2f 100644
+--- a/net/core/netdev-genl.c
++++ b/net/core/netdev-genl.c
+@@ -52,6 +52,8 @@ XDP_METADATA_KFUNC_xxx
+ xsk_features |= NETDEV_XSK_FLAGS_TX_TIMESTAMP;
+ if (netdev->xsk_tx_metadata_ops->tmo_request_checksum)
+ xsk_features |= NETDEV_XSK_FLAGS_TX_CHECKSUM;
++ if (netdev->xsk_tx_metadata_ops->tmo_request_launch_time)
++ xsk_features |= NETDEV_XSK_FLAGS_TX_LAUNCH_TIME_FIFO;
+ }
+
+ if (nla_put_u32(rsp, NETDEV_A_DEV_IFINDEX, netdev->ifindex) ||
+diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
+index d1e559fce918d3..80e006940f51a9 100644
+--- a/net/core/rtnetlink.c
++++ b/net/core/rtnetlink.c
+@@ -1171,6 +1171,9 @@ static inline int rtnl_vfinfo_size(const struct net_device *dev,
+ /* IFLA_VF_STATS_TX_DROPPED */
+ nla_total_size_64bit(sizeof(__u64)));
+ }
++ if (dev->netdev_ops->ndo_get_vf_guid)
++ size += num_vfs * 2 *
++ nla_total_size(sizeof(struct ifla_vf_guid));
+ return size;
+ } else
+ return 0;
+diff --git a/net/core/rtnl_net_debug.c b/net/core/rtnl_net_debug.c
+index 7ecd28cc1c2256..f3272b09c25568 100644
+--- a/net/core/rtnl_net_debug.c
++++ b/net/core/rtnl_net_debug.c
+@@ -102,7 +102,7 @@ static int __init rtnl_net_debug_init(void)
+ {
+ int ret;
+
+- ret = register_pernet_device(&rtnl_net_debug_net_ops);
++ ret = register_pernet_subsys(&rtnl_net_debug_net_ops);
+ if (ret)
+ return ret;
+
+diff --git a/net/ipv4/ip_tunnel_core.c b/net/ipv4/ip_tunnel_core.c
+index a3676155be78b9..f65d2f7273813b 100644
+--- a/net/ipv4/ip_tunnel_core.c
++++ b/net/ipv4/ip_tunnel_core.c
+@@ -416,7 +416,7 @@ int skb_tunnel_check_pmtu(struct sk_buff *skb, struct dst_entry *encap_dst,
+
+ skb_dst_update_pmtu_no_confirm(skb, mtu);
+
+- if (!reply || skb->pkt_type == PACKET_HOST)
++ if (!reply)
+ return 0;
+
+ if (skb->protocol == htons(ETH_P_IP))
+@@ -451,7 +451,7 @@ static const struct nla_policy
+ geneve_opt_policy[LWTUNNEL_IP_OPT_GENEVE_MAX + 1] = {
+ [LWTUNNEL_IP_OPT_GENEVE_CLASS] = { .type = NLA_U16 },
+ [LWTUNNEL_IP_OPT_GENEVE_TYPE] = { .type = NLA_U8 },
+- [LWTUNNEL_IP_OPT_GENEVE_DATA] = { .type = NLA_BINARY, .len = 128 },
++ [LWTUNNEL_IP_OPT_GENEVE_DATA] = { .type = NLA_BINARY, .len = 127 },
+ };
+
+ static const struct nla_policy
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index a9bb9ce5438eaa..3fe85ecec23615 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -1626,12 +1626,12 @@ static bool udp_skb_has_head_state(struct sk_buff *skb)
+ }
+
+ /* fully reclaim rmem/fwd memory allocated for skb */
+-static void udp_rmem_release(struct sock *sk, int size, int partial,
+- bool rx_queue_lock_held)
++static void udp_rmem_release(struct sock *sk, unsigned int size,
++ int partial, bool rx_queue_lock_held)
+ {
+ struct udp_sock *up = udp_sk(sk);
+ struct sk_buff_head *sk_queue;
+- int amt;
++ unsigned int amt;
+
+ if (likely(partial)) {
+ up->forward_deficit += size;
+@@ -1651,10 +1651,8 @@ static void udp_rmem_release(struct sock *sk, int size, int partial,
+ if (!rx_queue_lock_held)
+ spin_lock(&sk_queue->lock);
+
+-
+- sk_forward_alloc_add(sk, size);
+- amt = (sk->sk_forward_alloc - partial) & ~(PAGE_SIZE - 1);
+- sk_forward_alloc_add(sk, -amt);
++ amt = (size + sk->sk_forward_alloc - partial) & ~(PAGE_SIZE - 1);
++ sk_forward_alloc_add(sk, size - amt);
+
+ if (amt)
+ __sk_mem_reduce_allocated(sk, amt >> PAGE_SHIFT);
+@@ -1726,17 +1724,25 @@ static int udp_rmem_schedule(struct sock *sk, int size)
+ int __udp_enqueue_schedule_skb(struct sock *sk, struct sk_buff *skb)
+ {
+ struct sk_buff_head *list = &sk->sk_receive_queue;
+- int rmem, err = -ENOMEM;
++ unsigned int rmem, rcvbuf;
+ spinlock_t *busy = NULL;
+- int size, rcvbuf;
++ int size, err = -ENOMEM;
+
+- /* Immediately drop when the receive queue is full.
+- * Always allow at least one packet.
+- */
+ rmem = atomic_read(&sk->sk_rmem_alloc);
+ rcvbuf = READ_ONCE(sk->sk_rcvbuf);
+- if (rmem > rcvbuf)
+- goto drop;
++ size = skb->truesize;
++
++ /* Immediately drop when the receive queue is full.
++ * Cast to unsigned int performs the boundary check for INT_MAX.
++ */
++ if (rmem + size > rcvbuf) {
++ if (rcvbuf > INT_MAX >> 1)
++ goto drop;
++
++ /* Always allow at least one packet for small buffer. */
++ if (rmem > rcvbuf)
++ goto drop;
++ }
+
+ /* Under mem pressure, it might be helpful to help udp_recvmsg()
+ * having linear skbs :
+@@ -1746,10 +1752,10 @@ int __udp_enqueue_schedule_skb(struct sock *sk, struct sk_buff *skb)
+ */
+ if (rmem > (rcvbuf >> 1)) {
+ skb_condense(skb);
+-
++ size = skb->truesize;
+ busy = busylock_acquire(sk);
+ }
+- size = skb->truesize;
++
+ udp_set_dev_scratch(skb);
+
+ atomic_add(size, &sk->sk_rmem_alloc);
+@@ -1836,7 +1842,7 @@ EXPORT_SYMBOL_GPL(skb_consume_udp);
+
+ static struct sk_buff *__first_packet_length(struct sock *sk,
+ struct sk_buff_head *rcvq,
+- int *total)
++ unsigned int *total)
+ {
+ struct sk_buff *skb;
+
+@@ -1869,8 +1875,8 @@ static int first_packet_length(struct sock *sk)
+ {
+ struct sk_buff_head *rcvq = &udp_sk(sk)->reader_queue;
+ struct sk_buff_head *sk_queue = &sk->sk_receive_queue;
++ unsigned int total = 0;
+ struct sk_buff *skb;
+- int total = 0;
+ int res;
+
+ spin_lock_bh(&rcvq->lock);
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index ac8cc10765360f..54a8ea004da286 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -5784,6 +5784,27 @@ static void snmp6_fill_stats(u64 *stats, struct inet6_dev *idev, int attrtype,
+ }
+ }
+
++static int inet6_fill_ifla6_stats_attrs(struct sk_buff *skb,
++ struct inet6_dev *idev)
++{
++ struct nlattr *nla;
++
++ nla = nla_reserve(skb, IFLA_INET6_STATS, IPSTATS_MIB_MAX * sizeof(u64));
++ if (!nla)
++ goto nla_put_failure;
++ snmp6_fill_stats(nla_data(nla), idev, IFLA_INET6_STATS, nla_len(nla));
++
++ nla = nla_reserve(skb, IFLA_INET6_ICMP6STATS, ICMP6_MIB_MAX * sizeof(u64));
++ if (!nla)
++ goto nla_put_failure;
++ snmp6_fill_stats(nla_data(nla), idev, IFLA_INET6_ICMP6STATS, nla_len(nla));
++
++ return 0;
++
++nla_put_failure:
++ return -EMSGSIZE;
++}
++
+ static int inet6_fill_ifla6_attrs(struct sk_buff *skb, struct inet6_dev *idev,
+ u32 ext_filter_mask)
+ {
+@@ -5806,18 +5827,10 @@ static int inet6_fill_ifla6_attrs(struct sk_buff *skb, struct inet6_dev *idev,
+
+ /* XXX - MC not implemented */
+
+- if (ext_filter_mask & RTEXT_FILTER_SKIP_STATS)
+- return 0;
+-
+- nla = nla_reserve(skb, IFLA_INET6_STATS, IPSTATS_MIB_MAX * sizeof(u64));
+- if (!nla)
+- goto nla_put_failure;
+- snmp6_fill_stats(nla_data(nla), idev, IFLA_INET6_STATS, nla_len(nla));
+-
+- nla = nla_reserve(skb, IFLA_INET6_ICMP6STATS, ICMP6_MIB_MAX * sizeof(u64));
+- if (!nla)
+- goto nla_put_failure;
+- snmp6_fill_stats(nla_data(nla), idev, IFLA_INET6_ICMP6STATS, nla_len(nla));
++ if (!(ext_filter_mask & RTEXT_FILTER_SKIP_STATS)) {
++ if (inet6_fill_ifla6_stats_attrs(skb, idev) < 0)
++ goto nla_put_failure;
++ }
+
+ nla = nla_reserve(skb, IFLA_INET6_TOKEN, sizeof(struct in6_addr));
+ if (!nla)
+diff --git a/net/ipv6/calipso.c b/net/ipv6/calipso.c
+index dbcea9fee6262d..62618a058b8fad 100644
+--- a/net/ipv6/calipso.c
++++ b/net/ipv6/calipso.c
+@@ -1072,8 +1072,13 @@ static int calipso_sock_getattr(struct sock *sk,
+ struct ipv6_opt_hdr *hop;
+ int opt_len, len, ret_val = -ENOMSG, offset;
+ unsigned char *opt;
+- struct ipv6_txoptions *txopts = txopt_get(inet6_sk(sk));
++ struct ipv6_pinfo *pinfo = inet6_sk(sk);
++ struct ipv6_txoptions *txopts;
++
++ if (!pinfo)
++ return -EAFNOSUPPORT;
+
++ txopts = txopt_get(pinfo);
+ if (!txopts || !txopts->hopopt)
+ goto done;
+
+@@ -1125,8 +1130,13 @@ static int calipso_sock_setattr(struct sock *sk,
+ {
+ int ret_val;
+ struct ipv6_opt_hdr *old, *new;
+- struct ipv6_txoptions *txopts = txopt_get(inet6_sk(sk));
++ struct ipv6_pinfo *pinfo = inet6_sk(sk);
++ struct ipv6_txoptions *txopts;
++
++ if (!pinfo)
++ return -EAFNOSUPPORT;
+
++ txopts = txopt_get(pinfo);
+ old = NULL;
+ if (txopts)
+ old = txopts->hopopt;
+@@ -1153,8 +1163,13 @@ static int calipso_sock_setattr(struct sock *sk,
+ static void calipso_sock_delattr(struct sock *sk)
+ {
+ struct ipv6_opt_hdr *new_hop;
+- struct ipv6_txoptions *txopts = txopt_get(inet6_sk(sk));
++ struct ipv6_pinfo *pinfo = inet6_sk(sk);
++ struct ipv6_txoptions *txopts;
++
++ if (!pinfo)
++ return;
+
++ txopts = txopt_get(pinfo);
+ if (!txopts || !txopts->hopopt)
+ goto done;
+
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index 15ce21afc8c628..169a7b9bc40ea1 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -412,12 +412,37 @@ static bool rt6_check_expired(const struct rt6_info *rt)
+ return false;
+ }
+
++static struct fib6_info *
++rt6_multipath_first_sibling_rcu(const struct fib6_info *rt)
++{
++ struct fib6_info *iter;
++ struct fib6_node *fn;
++
++ fn = rcu_dereference(rt->fib6_node);
++ if (!fn)
++ goto out;
++ iter = rcu_dereference(fn->leaf);
++ if (!iter)
++ goto out;
++
++ while (iter) {
++ if (iter->fib6_metric == rt->fib6_metric &&
++ rt6_qualify_for_ecmp(iter))
++ return iter;
++ iter = rcu_dereference(iter->fib6_next);
++ }
++
++out:
++ return NULL;
++}
++
+ void fib6_select_path(const struct net *net, struct fib6_result *res,
+ struct flowi6 *fl6, int oif, bool have_oif_match,
+ const struct sk_buff *skb, int strict)
+ {
+- struct fib6_info *match = res->f6i;
++ struct fib6_info *first, *match = res->f6i;
+ struct fib6_info *sibling;
++ int hash;
+
+ if (!match->nh && (!match->fib6_nsiblings || have_oif_match))
+ goto out;
+@@ -440,16 +465,25 @@ void fib6_select_path(const struct net *net, struct fib6_result *res,
+ return;
+ }
+
+- if (fl6->mp_hash <= atomic_read(&match->fib6_nh->fib_nh_upper_bound))
++ first = rt6_multipath_first_sibling_rcu(match);
++ if (!first)
+ goto out;
+
+- list_for_each_entry_rcu(sibling, &match->fib6_siblings,
++ hash = fl6->mp_hash;
++ if (hash <= atomic_read(&first->fib6_nh->fib_nh_upper_bound) &&
++ rt6_score_route(first->fib6_nh, first->fib6_flags, oif,
++ strict) >= 0) {
++ match = first;
++ goto out;
++ }
++
++ list_for_each_entry_rcu(sibling, &first->fib6_siblings,
+ fib6_siblings) {
+ const struct fib6_nh *nh = sibling->fib6_nh;
+ int nh_upper_bound;
+
+ nh_upper_bound = atomic_read(&nh->fib_nh_upper_bound);
+- if (fl6->mp_hash > nh_upper_bound)
++ if (hash > nh_upper_bound)
+ continue;
+ if (rt6_score_route(nh, sibling->fib6_flags, oif, strict) < 0)
+ break;
+diff --git a/net/mac80211/cfg.c b/net/mac80211/cfg.c
+index 9351c64608a998..b766472703b12f 100644
+--- a/net/mac80211/cfg.c
++++ b/net/mac80211/cfg.c
+@@ -1908,12 +1908,12 @@ static int sta_link_apply_parameters(struct ieee80211_local *local,
+ }
+
+ if (params->supported_rates &&
+- params->supported_rates_len) {
+- ieee80211_parse_bitrates(link->conf->chanreq.oper.width,
+- sband, params->supported_rates,
+- params->supported_rates_len,
+- &link_sta->pub->supp_rates[sband->band]);
+- }
++ params->supported_rates_len &&
++ !ieee80211_parse_bitrates(link->conf->chanreq.oper.width,
++ sband, params->supported_rates,
++ params->supported_rates_len,
++ &link_sta->pub->supp_rates[sband->band]))
++ return -EINVAL;
+
+ if (params->ht_capa)
+ ieee80211_ht_cap_ie_to_sta_ht_cap(sdata, sband,
+diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
+index 36a9be9a66c8e7..da2c2e6035be8a 100644
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -9946,8 +9946,8 @@ ieee80211_build_ml_reconf_req(struct ieee80211_sub_if_data *sdata,
+ size += 2 + sizeof(struct ieee80211_mle_per_sta_profile) +
+ ETH_ALEN;
+
+- /* SSID element + WMM */
+- size += 2 + sdata->vif.cfg.ssid_len + 9;
++ /* WMM */
++ size += 9;
+ size += ieee80211_link_common_elems_size(sdata, iftype, cbss,
+ elems_len);
+ }
+@@ -10053,11 +10053,6 @@ ieee80211_build_ml_reconf_req(struct ieee80211_sub_if_data *sdata,
+
+ capab_pos = skb_put(skb, 2);
+
+- skb_put_u8(skb, WLAN_EID_SSID);
+- skb_put_u8(skb, sdata->vif.cfg.ssid_len);
+- skb_put_data(skb, sdata->vif.cfg.ssid,
+- sdata->vif.cfg.ssid_len);
+-
+ extra_used =
+ ieee80211_add_link_elems(sdata, skb, &capab, NULL,
+ add_links_data->link[link_id].elems,
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index c2df81b7e95056..a133e1c175ce9c 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -2839,11 +2839,11 @@ static int nf_tables_updchain(struct nft_ctx *ctx, u8 genmask, u8 policy,
+ err = nft_netdev_register_hooks(ctx->net, &hook.list);
+ if (err < 0)
+ goto err_hooks;
++
++ unregister = true;
+ }
+ }
+
+- unregister = true;
+-
+ if (nla[NFTA_CHAIN_COUNTERS]) {
+ if (!nft_is_base_chain(chain)) {
+ err = -EOPNOTSUPP;
+diff --git a/net/netfilter/nf_tables_core.c b/net/netfilter/nf_tables_core.c
+index 75598520b0fa0e..6557a4018c0993 100644
+--- a/net/netfilter/nf_tables_core.c
++++ b/net/netfilter/nf_tables_core.c
+@@ -21,25 +21,22 @@
+ #include <net/netfilter/nf_log.h>
+ #include <net/netfilter/nft_meta.h>
+
+-#if defined(CONFIG_MITIGATION_RETPOLINE) && defined(CONFIG_X86)
+-
++#ifdef CONFIG_MITIGATION_RETPOLINE
+ static struct static_key_false nf_tables_skip_direct_calls;
+
+-static bool nf_skip_indirect_calls(void)
++static inline bool nf_skip_indirect_calls(void)
+ {
+ return static_branch_likely(&nf_tables_skip_direct_calls);
+ }
+
+-static void __init nf_skip_indirect_calls_enable(void)
++static inline void __init nf_skip_indirect_calls_enable(void)
+ {
+ if (!cpu_feature_enabled(X86_FEATURE_RETPOLINE))
+ static_branch_enable(&nf_tables_skip_direct_calls);
+ }
+ #else
+-static inline bool nf_skip_indirect_calls(void) { return false; }
+-
+ static inline void nf_skip_indirect_calls_enable(void) { }
+-#endif
++#endif /* CONFIG_MITIGATION_RETPOLINE */
+
+ static noinline void __nft_trace_packet(const struct nft_pktinfo *pkt,
+ const struct nft_verdict *verdict,
+diff --git a/net/netfilter/nfnetlink_queue.c b/net/netfilter/nfnetlink_queue.c
+index 5c913987901ab4..8b7b39d8a10913 100644
+--- a/net/netfilter/nfnetlink_queue.c
++++ b/net/netfilter/nfnetlink_queue.c
+@@ -567,7 +567,7 @@ nfqnl_build_packet_message(struct net *net, struct nfqnl_instance *queue,
+ enum ip_conntrack_info ctinfo = 0;
+ const struct nfnl_ct_hook *nfnl_ct;
+ bool csum_verify;
+- struct lsm_context ctx;
++ struct lsm_context ctx = { NULL, 0, 0 };
+ int seclen = 0;
+ ktime_t tstamp;
+
+diff --git a/net/netfilter/nft_set_hash.c b/net/netfilter/nft_set_hash.c
+index 8bfac4185ac797..abb0c8ec637191 100644
+--- a/net/netfilter/nft_set_hash.c
++++ b/net/netfilter/nft_set_hash.c
+@@ -309,7 +309,8 @@ static bool nft_rhash_expr_needs_gc_run(const struct nft_set *set,
+
+ nft_setelem_expr_foreach(expr, elem_expr, size) {
+ if (expr->ops->gc &&
+- expr->ops->gc(read_pnet(&set->net), expr))
++ expr->ops->gc(read_pnet(&set->net), expr) &&
++ set->flags & NFT_SET_EVAL)
+ return true;
+ }
+
+diff --git a/net/netfilter/nft_tunnel.c b/net/netfilter/nft_tunnel.c
+index 681301b46aa40b..0c63d1367cf7a7 100644
+--- a/net/netfilter/nft_tunnel.c
++++ b/net/netfilter/nft_tunnel.c
+@@ -335,13 +335,13 @@ static int nft_tunnel_obj_erspan_init(const struct nlattr *attr,
+ static const struct nla_policy nft_tunnel_opts_geneve_policy[NFTA_TUNNEL_KEY_GENEVE_MAX + 1] = {
+ [NFTA_TUNNEL_KEY_GENEVE_CLASS] = { .type = NLA_U16 },
+ [NFTA_TUNNEL_KEY_GENEVE_TYPE] = { .type = NLA_U8 },
+- [NFTA_TUNNEL_KEY_GENEVE_DATA] = { .type = NLA_BINARY, .len = 128 },
++ [NFTA_TUNNEL_KEY_GENEVE_DATA] = { .type = NLA_BINARY, .len = 127 },
+ };
+
+ static int nft_tunnel_obj_geneve_init(const struct nlattr *attr,
+ struct nft_tunnel_opts *opts)
+ {
+- struct geneve_opt *opt = (struct geneve_opt *)opts->u.data + opts->len;
++ struct geneve_opt *opt = (struct geneve_opt *)(opts->u.data + opts->len);
+ struct nlattr *tb[NFTA_TUNNEL_KEY_GENEVE_MAX + 1];
+ int err, data_len;
+
+@@ -625,7 +625,7 @@ static int nft_tunnel_opts_dump(struct sk_buff *skb,
+ if (!inner)
+ goto failure;
+ while (opts->len > offset) {
+- opt = (struct geneve_opt *)opts->u.data + offset;
++ opt = (struct geneve_opt *)(opts->u.data + offset);
+ if (nla_put_be16(skb, NFTA_TUNNEL_KEY_GENEVE_CLASS,
+ opt->opt_class) ||
+ nla_put_u8(skb, NFTA_TUNNEL_KEY_GENEVE_TYPE,
+diff --git a/net/openvswitch/actions.c b/net/openvswitch/actions.c
+index 704c858cf2093b..61fea7baae5d5c 100644
+--- a/net/openvswitch/actions.c
++++ b/net/openvswitch/actions.c
+@@ -947,12 +947,6 @@ static void do_output(struct datapath *dp, struct sk_buff *skb, int out_port,
+ pskb_trim(skb, ovs_mac_header_len(key));
+ }
+
+- /* Need to set the pkt_type to involve the routing layer. The
+- * packet movement through the OVS datapath doesn't generally
+- * use routing, but this is needed for tunnel cases.
+- */
+- skb->pkt_type = PACKET_OUTGOING;
+-
+ if (likely(!mru ||
+ (skb->len <= mru + vport->dev->hard_header_len))) {
+ ovs_vport_send(vport, skb, ovs_key_mac_proto(key));
+diff --git a/net/sched/act_tunnel_key.c b/net/sched/act_tunnel_key.c
+index af7c9984594880..e296714803dc02 100644
+--- a/net/sched/act_tunnel_key.c
++++ b/net/sched/act_tunnel_key.c
+@@ -68,7 +68,7 @@ geneve_opt_policy[TCA_TUNNEL_KEY_ENC_OPT_GENEVE_MAX + 1] = {
+ [TCA_TUNNEL_KEY_ENC_OPT_GENEVE_CLASS] = { .type = NLA_U16 },
+ [TCA_TUNNEL_KEY_ENC_OPT_GENEVE_TYPE] = { .type = NLA_U8 },
+ [TCA_TUNNEL_KEY_ENC_OPT_GENEVE_DATA] = { .type = NLA_BINARY,
+- .len = 128 },
++ .len = 127 },
+ };
+
+ static const struct nla_policy
+diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
+index 03505673d5234d..099ff6a3e1f516 100644
+--- a/net/sched/cls_flower.c
++++ b/net/sched/cls_flower.c
+@@ -766,7 +766,7 @@ geneve_opt_policy[TCA_FLOWER_KEY_ENC_OPT_GENEVE_MAX + 1] = {
+ [TCA_FLOWER_KEY_ENC_OPT_GENEVE_CLASS] = { .type = NLA_U16 },
+ [TCA_FLOWER_KEY_ENC_OPT_GENEVE_TYPE] = { .type = NLA_U8 },
+ [TCA_FLOWER_KEY_ENC_OPT_GENEVE_DATA] = { .type = NLA_BINARY,
+- .len = 128 },
++ .len = 127 },
+ };
+
+ static const struct nla_policy
+diff --git a/net/sched/sch_skbprio.c b/net/sched/sch_skbprio.c
+index 20ff7386b74bd8..f485f62ab721ab 100644
+--- a/net/sched/sch_skbprio.c
++++ b/net/sched/sch_skbprio.c
+@@ -123,8 +123,6 @@ static int skbprio_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+ /* Check to update highest and lowest priorities. */
+ if (skb_queue_empty(lp_qdisc)) {
+ if (q->lowest_prio == q->highest_prio) {
+- /* The incoming packet is the only packet in queue. */
+- BUG_ON(sch->q.qlen != 1);
+ q->lowest_prio = prio;
+ q->highest_prio = prio;
+ } else {
+@@ -156,7 +154,6 @@ static struct sk_buff *skbprio_dequeue(struct Qdisc *sch)
+ /* Update highest priority field. */
+ if (skb_queue_empty(hpq)) {
+ if (q->lowest_prio == q->highest_prio) {
+- BUG_ON(sch->q.qlen);
+ q->highest_prio = 0;
+ q->lowest_prio = SKBPRIO_MAX_PRIORITY - 1;
+ } else {
+diff --git a/net/sctp/sysctl.c b/net/sctp/sysctl.c
+index 8e1e97be4df79f..ee3eac338a9dee 100644
+--- a/net/sctp/sysctl.c
++++ b/net/sctp/sysctl.c
+@@ -525,6 +525,8 @@ static int proc_sctp_do_auth(const struct ctl_table *ctl, int write,
+ return ret;
+ }
+
++static DEFINE_MUTEX(sctp_sysctl_mutex);
++
+ static int proc_sctp_do_udp_port(const struct ctl_table *ctl, int write,
+ void *buffer, size_t *lenp, loff_t *ppos)
+ {
+@@ -549,6 +551,7 @@ static int proc_sctp_do_udp_port(const struct ctl_table *ctl, int write,
+ if (new_value > max || new_value < min)
+ return -EINVAL;
+
++ mutex_lock(&sctp_sysctl_mutex);
+ net->sctp.udp_port = new_value;
+ sctp_udp_sock_stop(net);
+ if (new_value) {
+@@ -561,6 +564,7 @@ static int proc_sctp_do_udp_port(const struct ctl_table *ctl, int write,
+ lock_sock(sk);
+ sctp_sk(sk)->udp_port = htons(net->sctp.udp_port);
+ release_sock(sk);
++ mutex_unlock(&sctp_sysctl_mutex);
+ }
+
+ return ret;
+diff --git a/net/socket.c b/net/socket.c
+index 28bae5a942341b..38227d00d1987b 100644
+--- a/net/socket.c
++++ b/net/socket.c
+@@ -1145,12 +1145,10 @@ static ssize_t sock_write_iter(struct kiocb *iocb, struct iov_iter *from)
+ */
+
+ static DEFINE_MUTEX(br_ioctl_mutex);
+-static int (*br_ioctl_hook)(struct net *net, struct net_bridge *br,
+- unsigned int cmd, struct ifreq *ifr,
++static int (*br_ioctl_hook)(struct net *net, unsigned int cmd,
+ void __user *uarg);
+
+-void brioctl_set(int (*hook)(struct net *net, struct net_bridge *br,
+- unsigned int cmd, struct ifreq *ifr,
++void brioctl_set(int (*hook)(struct net *net, unsigned int cmd,
+ void __user *uarg))
+ {
+ mutex_lock(&br_ioctl_mutex);
+@@ -1159,8 +1157,7 @@ void brioctl_set(int (*hook)(struct net *net, struct net_bridge *br,
+ }
+ EXPORT_SYMBOL(brioctl_set);
+
+-int br_ioctl_call(struct net *net, struct net_bridge *br, unsigned int cmd,
+- struct ifreq *ifr, void __user *uarg)
++int br_ioctl_call(struct net *net, unsigned int cmd, void __user *uarg)
+ {
+ int err = -ENOPKG;
+
+@@ -1169,7 +1166,7 @@ int br_ioctl_call(struct net *net, struct net_bridge *br, unsigned int cmd,
+
+ mutex_lock(&br_ioctl_mutex);
+ if (br_ioctl_hook)
+- err = br_ioctl_hook(net, br, cmd, ifr, uarg);
++ err = br_ioctl_hook(net, cmd, uarg);
+ mutex_unlock(&br_ioctl_mutex);
+
+ return err;
+@@ -1269,7 +1266,9 @@ static long sock_ioctl(struct file *file, unsigned cmd, unsigned long arg)
+ case SIOCSIFBR:
+ case SIOCBRADDBR:
+ case SIOCBRDELBR:
+- err = br_ioctl_call(net, NULL, cmd, NULL, argp);
++ case SIOCBRADDIF:
++ case SIOCBRDELIF:
++ err = br_ioctl_call(net, cmd, argp);
+ break;
+ case SIOCGIFVLAN:
+ case SIOCSIFVLAN:
+@@ -3429,6 +3428,8 @@ static int compat_sock_ioctl_trans(struct file *file, struct socket *sock,
+ case SIOCGPGRP:
+ case SIOCBRADDBR:
+ case SIOCBRDELBR:
++ case SIOCBRADDIF:
++ case SIOCBRDELIF:
+ case SIOCGIFVLAN:
+ case SIOCSIFVLAN:
+ case SIOCGSKNS:
+@@ -3468,8 +3469,6 @@ static int compat_sock_ioctl_trans(struct file *file, struct socket *sock,
+ case SIOCGIFPFLAGS:
+ case SIOCGIFTXQLEN:
+ case SIOCSIFTXQLEN:
+- case SIOCBRADDIF:
+- case SIOCBRDELIF:
+ case SIOCGIFNAME:
+ case SIOCSIFNAME:
+ case SIOCGMIIPHY:
+diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
+index 7e3db87ae4333c..fc6afbc8d6806a 100644
+--- a/net/vmw_vsock/af_vsock.c
++++ b/net/vmw_vsock/af_vsock.c
+@@ -1551,7 +1551,11 @@ static int vsock_connect(struct socket *sock, struct sockaddr *addr,
+ timeout = vsk->connect_timeout;
+ prepare_to_wait(sk_sleep(sk), &wait, TASK_INTERRUPTIBLE);
+
+- while (sk->sk_state != TCP_ESTABLISHED && sk->sk_err == 0) {
++ /* If the socket is already closing or it is in an error state, there
++ * is no point in waiting.
++ */
++ while (sk->sk_state != TCP_ESTABLISHED &&
++ sk->sk_state != TCP_CLOSING && sk->sk_err == 0) {
+ if (flags & O_NONBLOCK) {
+ /* If we're not going to block, we schedule a timeout
+ * function to generate a timeout on the connection
+diff --git a/net/wireless/core.c b/net/wireless/core.c
+index 828e2987263351..ceb768925b8501 100644
+--- a/net/wireless/core.c
++++ b/net/wireless/core.c
+@@ -546,6 +546,9 @@ struct wiphy *wiphy_new_nm(const struct cfg80211_ops *ops, int sizeof_priv,
+ INIT_WORK(&rdev->mgmt_registrations_update_wk,
+ cfg80211_mgmt_registrations_update_wk);
+ spin_lock_init(&rdev->mgmt_registrations_lock);
++ INIT_WORK(&rdev->wiphy_work, cfg80211_wiphy_work);
++ INIT_LIST_HEAD(&rdev->wiphy_work_list);
++ spin_lock_init(&rdev->wiphy_work_lock);
+
+ #ifdef CONFIG_CFG80211_DEFAULT_PS
+ rdev->wiphy.flags |= WIPHY_FLAG_PS_ON_BY_DEFAULT;
+@@ -563,9 +566,6 @@ struct wiphy *wiphy_new_nm(const struct cfg80211_ops *ops, int sizeof_priv,
+ return NULL;
+ }
+
+- INIT_WORK(&rdev->wiphy_work, cfg80211_wiphy_work);
+- INIT_LIST_HEAD(&rdev->wiphy_work_list);
+- spin_lock_init(&rdev->wiphy_work_lock);
+ INIT_WORK(&rdev->rfkill_block, cfg80211_rfkill_block_work);
+ INIT_WORK(&rdev->conn_work, cfg80211_conn_work);
+ INIT_WORK(&rdev->event_work, cfg80211_event_work);
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index aac0e7298dc7ab..b457fe78672b71 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -10172,7 +10172,7 @@ static int nl80211_start_radar_detection(struct sk_buff *skb,
+ switch (wdev->iftype) {
+ case NL80211_IFTYPE_AP:
+ case NL80211_IFTYPE_P2P_GO:
+- wdev->links[0].ap.chandef = chandef;
++ wdev->links[link_id].ap.chandef = chandef;
+ break;
+ case NL80211_IFTYPE_ADHOC:
+ wdev->u.ibss.chandef = chandef;
+diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
+index 89d2bef9646984..a373a7130d7572 100644
+--- a/net/xdp/xsk.c
++++ b/net/xdp/xsk.c
+@@ -742,6 +742,9 @@ static struct sk_buff *xsk_build_skb(struct xdp_sock *xs,
+ goto free_err;
+ }
+ }
++
++ if (meta->flags & XDP_TXMD_FLAGS_LAUNCH_TIME)
++ skb->skb_mstamp_ns = meta->request.launch_time;
+ }
+ }
+
+@@ -802,8 +805,11 @@ static int __xsk_generic_xmit(struct sock *sk)
+ * if there is space in it. This avoids having to implement
+ * any buffering in the Tx path.
+ */
+- if (xsk_cq_reserve_addr_locked(xs->pool, desc.addr))
++ err = xsk_cq_reserve_addr_locked(xs->pool, desc.addr);
++ if (err) {
++ err = -EAGAIN;
+ goto out;
++ }
+
+ skb = xsk_build_skb(xs, &desc);
+ if (IS_ERR(skb)) {
+diff --git a/net/xfrm/xfrm_device.c b/net/xfrm/xfrm_device.c
+index d1fa94e52ceae4..97c8030cc41733 100644
+--- a/net/xfrm/xfrm_device.c
++++ b/net/xfrm/xfrm_device.c
+@@ -244,11 +244,6 @@ int xfrm_dev_state_add(struct net *net, struct xfrm_state *x,
+ xfrm_address_t *daddr;
+ bool is_packet_offload;
+
+- if (!x->type_offload) {
+- NL_SET_ERR_MSG(extack, "Type doesn't support offload");
+- return -EINVAL;
+- }
+-
+ if (xuo->flags &
+ ~(XFRM_OFFLOAD_IPV6 | XFRM_OFFLOAD_INBOUND | XFRM_OFFLOAD_PACKET)) {
+ NL_SET_ERR_MSG(extack, "Unrecognized flags in offload request");
+@@ -310,6 +305,13 @@ int xfrm_dev_state_add(struct net *net, struct xfrm_state *x,
+ return -EINVAL;
+ }
+
++ xfrm_set_type_offload(x);
++ if (!x->type_offload) {
++ NL_SET_ERR_MSG(extack, "Type doesn't support offload");
++ dev_put(dev);
++ return -EINVAL;
++ }
++
+ xso->dev = dev;
+ netdev_tracker_alloc(dev, &xso->dev_tracker, GFP_ATOMIC);
+ xso->real_dev = dev;
+@@ -332,6 +334,7 @@ int xfrm_dev_state_add(struct net *net, struct xfrm_state *x,
+ netdev_put(dev, &xso->dev_tracker);
+ xso->type = XFRM_DEV_OFFLOAD_UNSPECIFIED;
+
++ xfrm_unset_type_offload(x);
+ /* User explicitly requested packet offload mode and configured
+ * policy in addition to the XFRM state. So be civil to users,
+ * and return an error instead of taking fallback path.
+diff --git a/net/xfrm/xfrm_state.c b/net/xfrm/xfrm_state.c
+index ad2202fa82f34d..69af5964c886c9 100644
+--- a/net/xfrm/xfrm_state.c
++++ b/net/xfrm/xfrm_state.c
+@@ -424,18 +424,18 @@ void xfrm_unregister_type_offload(const struct xfrm_type_offload *type,
+ }
+ EXPORT_SYMBOL(xfrm_unregister_type_offload);
+
+-static const struct xfrm_type_offload *
+-xfrm_get_type_offload(u8 proto, unsigned short family, bool try_load)
++void xfrm_set_type_offload(struct xfrm_state *x)
+ {
+ const struct xfrm_type_offload *type = NULL;
+ struct xfrm_state_afinfo *afinfo;
++ bool try_load = true;
+
+ retry:
+- afinfo = xfrm_state_get_afinfo(family);
++ afinfo = xfrm_state_get_afinfo(x->props.family);
+ if (unlikely(afinfo == NULL))
+- return NULL;
++ goto out;
+
+- switch (proto) {
++ switch (x->id.proto) {
+ case IPPROTO_ESP:
+ type = afinfo->type_offload_esp;
+ break;
+@@ -449,18 +449,16 @@ xfrm_get_type_offload(u8 proto, unsigned short family, bool try_load)
+ rcu_read_unlock();
+
+ if (!type && try_load) {
+- request_module("xfrm-offload-%d-%d", family, proto);
++ request_module("xfrm-offload-%d-%d", x->props.family,
++ x->id.proto);
+ try_load = false;
+ goto retry;
+ }
+
+- return type;
+-}
+-
+-static void xfrm_put_type_offload(const struct xfrm_type_offload *type)
+-{
+- module_put(type->owner);
++out:
++ x->type_offload = type;
+ }
++EXPORT_SYMBOL(xfrm_set_type_offload);
+
+ static const struct xfrm_mode xfrm4_mode_map[XFRM_MODE_MAX] = {
+ [XFRM_MODE_BEET] = {
+@@ -609,8 +607,6 @@ static void ___xfrm_state_destroy(struct xfrm_state *x)
+ kfree(x->coaddr);
+ kfree(x->replay_esn);
+ kfree(x->preplay_esn);
+- if (x->type_offload)
+- xfrm_put_type_offload(x->type_offload);
+ if (x->type) {
+ x->type->destructor(x);
+ xfrm_put_type(x->type);
+@@ -784,6 +780,8 @@ void xfrm_dev_state_free(struct xfrm_state *x)
+ struct xfrm_dev_offload *xso = &x->xso;
+ struct net_device *dev = READ_ONCE(xso->dev);
+
++ xfrm_unset_type_offload(x);
++
+ if (dev && dev->xfrmdev_ops) {
+ spin_lock_bh(&xfrm_state_dev_gc_lock);
+ if (!hlist_unhashed(&x->dev_gclist))
+@@ -3122,7 +3120,7 @@ u32 xfrm_state_mtu(struct xfrm_state *x, int mtu)
+ }
+ EXPORT_SYMBOL_GPL(xfrm_state_mtu);
+
+-int __xfrm_init_state(struct xfrm_state *x, bool init_replay, bool offload,
++int __xfrm_init_state(struct xfrm_state *x, bool init_replay,
+ struct netlink_ext_ack *extack)
+ {
+ const struct xfrm_mode *inner_mode;
+@@ -3178,8 +3176,6 @@ int __xfrm_init_state(struct xfrm_state *x, bool init_replay, bool offload,
+ goto error;
+ }
+
+- x->type_offload = xfrm_get_type_offload(x->id.proto, family, offload);
+-
+ err = x->type->init_state(x, extack);
+ if (err)
+ goto error;
+@@ -3229,7 +3225,7 @@ int xfrm_init_state(struct xfrm_state *x)
+ {
+ int err;
+
+- err = __xfrm_init_state(x, true, false, NULL);
++ err = __xfrm_init_state(x, true, NULL);
+ if (!err)
+ x->km.state = XFRM_STATE_VALID;
+
+diff --git a/net/xfrm/xfrm_user.c b/net/xfrm/xfrm_user.c
+index 08c6d6f0179fbf..82a768500999b2 100644
+--- a/net/xfrm/xfrm_user.c
++++ b/net/xfrm/xfrm_user.c
+@@ -907,7 +907,7 @@ static struct xfrm_state *xfrm_state_construct(struct net *net,
+ goto error;
+ }
+
+- err = __xfrm_init_state(x, false, attrs[XFRMA_OFFLOAD_DEV], extack);
++ err = __xfrm_init_state(x, false, extack);
+ if (err)
+ goto error;
+
+diff --git a/rust/Makefile b/rust/Makefile
+index ea3849eb78f658..2c57c624fe7df0 100644
+--- a/rust/Makefile
++++ b/rust/Makefile
+@@ -232,7 +232,8 @@ bindgen_skip_c_flags := -mno-fp-ret-in-387 -mpreferred-stack-boundary=% \
+ -mfunction-return=thunk-extern -mrecord-mcount -mabi=lp64 \
+ -mindirect-branch-cs-prefix -mstack-protector-guard% -mtraceback=no \
+ -mno-pointers-to-nested-functions -mno-string \
+- -mno-strict-align -mstrict-align \
++ -mno-strict-align -mstrict-align -mdirect-extern-access \
++ -mexplicit-relocs -mno-check-zero-division \
+ -fconserve-stack -falign-jumps=% -falign-loops=% \
+ -femit-struct-debug-baseonly -fno-ipa-cp-clone -fno-ipa-sra \
+ -fno-partial-inlining -fplugin-arg-arm_ssp_per_task_plugin-% \
+@@ -246,6 +247,7 @@ bindgen_skip_c_flags := -mno-fp-ret-in-387 -mpreferred-stack-boundary=% \
+ # Derived from `scripts/Makefile.clang`.
+ BINDGEN_TARGET_x86 := x86_64-linux-gnu
+ BINDGEN_TARGET_arm64 := aarch64-linux-gnu
++BINDGEN_TARGET_loongarch := loongarch64-linux-gnusf
+ BINDGEN_TARGET := $(BINDGEN_TARGET_$(SRCARCH))
+
+ # All warnings are inhibited since GCC builds are very experimental,
+diff --git a/rust/kernel/print.rs b/rust/kernel/print.rs
+index b19ee490be58fd..61ee36c5e5f5db 100644
+--- a/rust/kernel/print.rs
++++ b/rust/kernel/print.rs
+@@ -6,12 +6,11 @@
+ //!
+ //! Reference: <https://docs.kernel.org/core-api/printk-basics.html>
+
+-use core::{
++use crate::{
+ ffi::{c_char, c_void},
+- fmt,
++ str::RawFormatter,
+ };
+-
+-use crate::str::RawFormatter;
++use core::fmt;
+
+ // Called from `vsprintf` with format specifier `%pA`.
+ #[expect(clippy::missing_safety_doc)]
+diff --git a/samples/bpf/Makefile b/samples/bpf/Makefile
+index dd9944a97b7e68..5b632635e00dde 100644
+--- a/samples/bpf/Makefile
++++ b/samples/bpf/Makefile
+@@ -307,7 +307,7 @@ $(obj)/$(TRACE_HELPERS): TPROGS_CFLAGS := $(TPROGS_CFLAGS) -D__must_check=
+
+ VMLINUX_BTF_PATHS ?= $(abspath $(if $(O),$(O)/vmlinux)) \
+ $(abspath $(if $(KBUILD_OUTPUT),$(KBUILD_OUTPUT)/vmlinux)) \
+- $(abspath ./vmlinux)
++ $(abspath $(objtree)/vmlinux)
+ VMLINUX_BTF ?= $(abspath $(firstword $(wildcard $(VMLINUX_BTF_PATHS))))
+
+ $(obj)/vmlinux.h: $(VMLINUX_BTF) $(BPFTOOL)
+diff --git a/samples/trace_events/trace-events-sample.h b/samples/trace_events/trace-events-sample.h
+index 999f78d380aee4..1a05fc15335315 100644
+--- a/samples/trace_events/trace-events-sample.h
++++ b/samples/trace_events/trace-events-sample.h
+@@ -319,7 +319,8 @@ TRACE_EVENT(foo_bar,
+ __assign_cpumask(cpum, cpumask_bits(mask));
+ ),
+
+- TP_printk("foo %s %d %s %s %s %s %s %s (%s) (%s) %s", __entry->foo, __entry->bar,
++ TP_printk("foo %s %d %s %s %s %s %s %s (%s) (%s) %s [%d] %*pbl",
++ __entry->foo, __entry->bar,
+
+ /*
+ * Notice here the use of some helper functions. This includes:
+@@ -370,7 +371,10 @@ TRACE_EVENT(foo_bar,
+
+ __get_str(str), __get_str(lstr),
+ __get_bitmask(cpus), __get_cpumask(cpum),
+- __get_str(vstr))
++ __get_str(vstr),
++ __get_dynamic_array_len(cpus),
++ __get_dynamic_array_len(cpus),
++ __get_dynamic_array(cpus))
+ );
+
+ /*
+diff --git a/scripts/gdb/linux/symbols.py b/scripts/gdb/linux/symbols.py
+index f6c1b063775a71..15d76f7d8ebce1 100644
+--- a/scripts/gdb/linux/symbols.py
++++ b/scripts/gdb/linux/symbols.py
+@@ -15,6 +15,7 @@ import gdb
+ import os
+ import re
+
++from itertools import count
+ from linux import modules, utils, constants
+
+
+@@ -95,10 +96,14 @@ lx-symbols command."""
+ except gdb.error:
+ return str(module_addr)
+
+- attrs = sect_attrs['attrs']
+- section_name_to_address = {
+- attrs[n]['battr']['attr']['name'].string(): attrs[n]['address']
+- for n in range(int(sect_attrs['nsections']))}
++ section_name_to_address = {}
++ for i in count():
++ # this is a NULL terminated array
++ if sect_attrs['grp']['bin_attrs'][i] == 0x0:
++ break
++
++ attr = sect_attrs['grp']['bin_attrs'][i].dereference()
++ section_name_to_address[attr['attr']['name'].string()] = attr['private']
+
+ textaddr = section_name_to_address.get(".text", module_addr)
+ args = []
+diff --git a/scripts/package/debian/rules b/scripts/package/debian/rules
+index ca07243bd5cdf6..2b3f9a0bd6c40f 100755
+--- a/scripts/package/debian/rules
++++ b/scripts/package/debian/rules
+@@ -21,9 +21,11 @@ ifeq ($(origin KBUILD_VERBOSE),undefined)
+ endif
+ endif
+
+-revision = $(lastword $(subst -, ,$(shell dpkg-parsechangelog -S Version)))
++revision = $(shell dpkg-parsechangelog -S Version | sed -n 's/.*-//p')
+ CROSS_COMPILE ?= $(filter-out $(DEB_BUILD_GNU_TYPE)-, $(DEB_HOST_GNU_TYPE)-)
+-make-opts = ARCH=$(ARCH) KERNELRELEASE=$(KERNELRELEASE) KBUILD_BUILD_VERSION=$(revision) $(addprefix CROSS_COMPILE=,$(CROSS_COMPILE))
++make-opts = ARCH=$(ARCH) KERNELRELEASE=$(KERNELRELEASE) \
++ $(addprefix KBUILD_BUILD_VERSION=,$(revision)) \
++ $(addprefix CROSS_COMPILE=,$(CROSS_COMPILE))
+
+ binary-targets := $(addprefix binary-, image image-dbg headers libc-dev)
+
+diff --git a/scripts/selinux/install_policy.sh b/scripts/selinux/install_policy.sh
+index 24086793b0d8d4..db40237e60ce7e 100755
+--- a/scripts/selinux/install_policy.sh
++++ b/scripts/selinux/install_policy.sh
+@@ -6,27 +6,24 @@ if [ `id -u` -ne 0 ]; then
+ exit 1
+ fi
+
+-SF=`which setfiles`
+-if [ $? -eq 1 ]; then
++SF=`which setfiles` || {
+ echo "Could not find setfiles"
+ echo "Do you have policycoreutils installed?"
+ exit 1
+-fi
++}
+
+-CP=`which checkpolicy`
+-if [ $? -eq 1 ]; then
++CP=`which checkpolicy` || {
+ echo "Could not find checkpolicy"
+ echo "Do you have checkpolicy installed?"
+ exit 1
+-fi
++}
+ VERS=`$CP -V | awk '{print $1}'`
+
+-ENABLED=`which selinuxenabled`
+-if [ $? -eq 1 ]; then
++ENABLED=`which selinuxenabled` || {
+ echo "Could not find selinuxenabled"
+ echo "Do you have libselinux-utils installed?"
+ exit 1
+-fi
++}
+
+ if selinuxenabled; then
+ echo "SELinux is already enabled"
+diff --git a/security/smack/smack.h b/security/smack/smack.h
+index 4608b07607a3da..c4d998972ba561 100644
+--- a/security/smack/smack.h
++++ b/security/smack/smack.h
+@@ -152,6 +152,7 @@ struct smk_net4addr {
+ struct smack_known *smk_label; /* label */
+ };
+
++#if IS_ENABLED(CONFIG_IPV6)
+ /*
+ * An entry in the table identifying IPv6 hosts.
+ */
+@@ -162,7 +163,9 @@ struct smk_net6addr {
+ int smk_masks; /* mask size */
+ struct smack_known *smk_label; /* label */
+ };
++#endif /* CONFIG_IPV6 */
+
++#ifdef SMACK_IPV6_PORT_LABELING
+ /*
+ * An entry in the table identifying ports.
+ */
+@@ -175,6 +178,7 @@ struct smk_port_label {
+ short smk_sock_type; /* Socket type */
+ short smk_can_reuse;
+ };
++#endif /* SMACK_IPV6_PORT_LABELING */
+
+ struct smack_known_list_elem {
+ struct list_head list;
+@@ -315,7 +319,9 @@ extern struct smack_known smack_known_web;
+ extern struct mutex smack_known_lock;
+ extern struct list_head smack_known_list;
+ extern struct list_head smk_net4addr_list;
++#if IS_ENABLED(CONFIG_IPV6)
+ extern struct list_head smk_net6addr_list;
++#endif /* CONFIG_IPV6 */
+
+ extern struct mutex smack_onlycap_lock;
+ extern struct list_head smack_onlycap_list;
+diff --git a/security/smack/smack_lsm.c b/security/smack/smack_lsm.c
+index 239773cdcdcf48..e68c982e499ebc 100644
+--- a/security/smack/smack_lsm.c
++++ b/security/smack/smack_lsm.c
+@@ -2492,6 +2492,7 @@ static struct smack_known *smack_ipv4host_label(struct sockaddr_in *sip)
+ return NULL;
+ }
+
++#if IS_ENABLED(CONFIG_IPV6)
+ /*
+ * smk_ipv6_localhost - Check for local ipv6 host address
+ * @sip: the address
+@@ -2559,6 +2560,7 @@ static struct smack_known *smack_ipv6host_label(struct sockaddr_in6 *sip)
+
+ return NULL;
+ }
++#endif /* CONFIG_IPV6 */
+
+ /**
+ * smack_netlbl_add - Set the secattr on a socket
+@@ -2663,6 +2665,7 @@ static int smk_ipv4_check(struct sock *sk, struct sockaddr_in *sap)
+ return rc;
+ }
+
++#if IS_ENABLED(CONFIG_IPV6)
+ /**
+ * smk_ipv6_check - check Smack access
+ * @subject: subject Smack label
+@@ -2695,6 +2698,7 @@ static int smk_ipv6_check(struct smack_known *subject,
+ rc = smk_bu_note("IPv6 check", subject, object, MAY_WRITE, rc);
+ return rc;
+ }
++#endif /* CONFIG_IPV6 */
+
+ #ifdef SMACK_IPV6_PORT_LABELING
+ /**
+@@ -3027,7 +3031,9 @@ static int smack_socket_connect(struct socket *sock, struct sockaddr *sap,
+ return 0;
+ if (addrlen < offsetofend(struct sockaddr, sa_family))
+ return 0;
+- if (IS_ENABLED(CONFIG_IPV6) && sap->sa_family == AF_INET6) {
++
++#if IS_ENABLED(CONFIG_IPV6)
++ if (sap->sa_family == AF_INET6) {
+ struct sockaddr_in6 *sip = (struct sockaddr_in6 *)sap;
+ struct smack_known *rsp = NULL;
+
+@@ -3047,6 +3053,8 @@ static int smack_socket_connect(struct socket *sock, struct sockaddr *sap,
+
+ return rc;
+ }
++#endif /* CONFIG_IPV6 */
++
+ if (sap->sa_family != AF_INET || addrlen < sizeof(struct sockaddr_in))
+ return 0;
+ rc = smk_ipv4_check(sock->sk, (struct sockaddr_in *)sap);
+@@ -4342,29 +4350,6 @@ static int smack_socket_getpeersec_dgram(struct socket *sock,
+ return 0;
+ }
+
+-/**
+- * smack_sock_graft - Initialize a newly created socket with an existing sock
+- * @sk: child sock
+- * @parent: parent socket
+- *
+- * Set the smk_{in,out} state of an existing sock based on the process that
+- * is creating the new socket.
+- */
+-static void smack_sock_graft(struct sock *sk, struct socket *parent)
+-{
+- struct socket_smack *ssp;
+- struct smack_known *skp = smk_of_current();
+-
+- if (sk == NULL ||
+- (sk->sk_family != PF_INET && sk->sk_family != PF_INET6))
+- return;
+-
+- ssp = smack_sock(sk);
+- ssp->smk_in = skp;
+- ssp->smk_out = skp;
+- /* cssp->smk_packet is already set in smack_inet_csk_clone() */
+-}
+-
+ /**
+ * smack_inet_conn_request - Smack access check on connect
+ * @sk: socket involved
+@@ -5179,7 +5164,6 @@ static struct security_hook_list smack_hooks[] __ro_after_init = {
+ LSM_HOOK_INIT(sk_free_security, smack_sk_free_security),
+ #endif
+ LSM_HOOK_INIT(sk_clone_security, smack_sk_clone_security),
+- LSM_HOOK_INIT(sock_graft, smack_sock_graft),
+ LSM_HOOK_INIT(inet_conn_request, smack_inet_conn_request),
+ LSM_HOOK_INIT(inet_csk_clone, smack_inet_csk_clone),
+
+diff --git a/sound/core/timer.c b/sound/core/timer.c
+index fbada79380f9ea..d774b9b71ce238 100644
+--- a/sound/core/timer.c
++++ b/sound/core/timer.c
+@@ -1515,91 +1515,97 @@ static void snd_timer_user_copy_id(struct snd_timer_id *id, struct snd_timer *ti
+ id->subdevice = timer->tmr_subdevice;
+ }
+
+-static int snd_timer_user_next_device(struct snd_timer_id __user *_tid)
++static void get_next_device(struct snd_timer_id *id)
+ {
+- struct snd_timer_id id;
+ struct snd_timer *timer;
+ struct list_head *p;
+
+- if (copy_from_user(&id, _tid, sizeof(id)))
+- return -EFAULT;
+- guard(mutex)(®ister_mutex);
+- if (id.dev_class < 0) { /* first item */
++ if (id->dev_class < 0) { /* first item */
+ if (list_empty(&snd_timer_list))
+- snd_timer_user_zero_id(&id);
++ snd_timer_user_zero_id(id);
+ else {
+ timer = list_entry(snd_timer_list.next,
+ struct snd_timer, device_list);
+- snd_timer_user_copy_id(&id, timer);
++ snd_timer_user_copy_id(id, timer);
+ }
+ } else {
+- switch (id.dev_class) {
++ switch (id->dev_class) {
+ case SNDRV_TIMER_CLASS_GLOBAL:
+- id.device = id.device < 0 ? 0 : id.device + 1;
++ id->device = id->device < 0 ? 0 : id->device + 1;
+ list_for_each(p, &snd_timer_list) {
+ timer = list_entry(p, struct snd_timer, device_list);
+ if (timer->tmr_class > SNDRV_TIMER_CLASS_GLOBAL) {
+- snd_timer_user_copy_id(&id, timer);
++ snd_timer_user_copy_id(id, timer);
+ break;
+ }
+- if (timer->tmr_device >= id.device) {
+- snd_timer_user_copy_id(&id, timer);
++ if (timer->tmr_device >= id->device) {
++ snd_timer_user_copy_id(id, timer);
+ break;
+ }
+ }
+ if (p == &snd_timer_list)
+- snd_timer_user_zero_id(&id);
++ snd_timer_user_zero_id(id);
+ break;
+ case SNDRV_TIMER_CLASS_CARD:
+ case SNDRV_TIMER_CLASS_PCM:
+- if (id.card < 0) {
+- id.card = 0;
++ if (id->card < 0) {
++ id->card = 0;
+ } else {
+- if (id.device < 0) {
+- id.device = 0;
++ if (id->device < 0) {
++ id->device = 0;
+ } else {
+- if (id.subdevice < 0)
+- id.subdevice = 0;
+- else if (id.subdevice < INT_MAX)
+- id.subdevice++;
++ if (id->subdevice < 0)
++ id->subdevice = 0;
++ else if (id->subdevice < INT_MAX)
++ id->subdevice++;
+ }
+ }
+ list_for_each(p, &snd_timer_list) {
+ timer = list_entry(p, struct snd_timer, device_list);
+- if (timer->tmr_class > id.dev_class) {
+- snd_timer_user_copy_id(&id, timer);
++ if (timer->tmr_class > id->dev_class) {
++ snd_timer_user_copy_id(id, timer);
+ break;
+ }
+- if (timer->tmr_class < id.dev_class)
++ if (timer->tmr_class < id->dev_class)
+ continue;
+- if (timer->card->number > id.card) {
+- snd_timer_user_copy_id(&id, timer);
++ if (timer->card->number > id->card) {
++ snd_timer_user_copy_id(id, timer);
+ break;
+ }
+- if (timer->card->number < id.card)
++ if (timer->card->number < id->card)
+ continue;
+- if (timer->tmr_device > id.device) {
+- snd_timer_user_copy_id(&id, timer);
++ if (timer->tmr_device > id->device) {
++ snd_timer_user_copy_id(id, timer);
+ break;
+ }
+- if (timer->tmr_device < id.device)
++ if (timer->tmr_device < id->device)
+ continue;
+- if (timer->tmr_subdevice > id.subdevice) {
+- snd_timer_user_copy_id(&id, timer);
++ if (timer->tmr_subdevice > id->subdevice) {
++ snd_timer_user_copy_id(id, timer);
+ break;
+ }
+- if (timer->tmr_subdevice < id.subdevice)
++ if (timer->tmr_subdevice < id->subdevice)
+ continue;
+- snd_timer_user_copy_id(&id, timer);
++ snd_timer_user_copy_id(id, timer);
+ break;
+ }
+ if (p == &snd_timer_list)
+- snd_timer_user_zero_id(&id);
++ snd_timer_user_zero_id(id);
+ break;
+ default:
+- snd_timer_user_zero_id(&id);
++ snd_timer_user_zero_id(id);
+ }
+ }
++}
++
++static int snd_timer_user_next_device(struct snd_timer_id __user *_tid)
++{
++ struct snd_timer_id id;
++
++ if (copy_from_user(&id, _tid, sizeof(id)))
++ return -EFAULT;
++ scoped_guard(mutex, ®ister_mutex)
++ get_next_device(&id);
+ if (copy_to_user(_tid, &id, sizeof(*_tid)))
+ return -EFAULT;
+ return 0;
+@@ -1620,23 +1626,24 @@ static int snd_timer_user_ginfo(struct file *file,
+ tid = ginfo->tid;
+ memset(ginfo, 0, sizeof(*ginfo));
+ ginfo->tid = tid;
+- guard(mutex)(®ister_mutex);
+- t = snd_timer_find(&tid);
+- if (!t)
+- return -ENODEV;
+- ginfo->card = t->card ? t->card->number : -1;
+- if (t->hw.flags & SNDRV_TIMER_HW_SLAVE)
+- ginfo->flags |= SNDRV_TIMER_FLG_SLAVE;
+- strscpy(ginfo->id, t->id, sizeof(ginfo->id));
+- strscpy(ginfo->name, t->name, sizeof(ginfo->name));
+- scoped_guard(spinlock_irq, &t->lock)
+- ginfo->resolution = snd_timer_hw_resolution(t);
+- if (t->hw.resolution_min > 0) {
+- ginfo->resolution_min = t->hw.resolution_min;
+- ginfo->resolution_max = t->hw.resolution_max;
+- }
+- list_for_each(p, &t->open_list_head) {
+- ginfo->clients++;
++ scoped_guard(mutex, ®ister_mutex) {
++ t = snd_timer_find(&tid);
++ if (!t)
++ return -ENODEV;
++ ginfo->card = t->card ? t->card->number : -1;
++ if (t->hw.flags & SNDRV_TIMER_HW_SLAVE)
++ ginfo->flags |= SNDRV_TIMER_FLG_SLAVE;
++ strscpy(ginfo->id, t->id, sizeof(ginfo->id));
++ strscpy(ginfo->name, t->name, sizeof(ginfo->name));
++ scoped_guard(spinlock_irq, &t->lock)
++ ginfo->resolution = snd_timer_hw_resolution(t);
++ if (t->hw.resolution_min > 0) {
++ ginfo->resolution_min = t->hw.resolution_min;
++ ginfo->resolution_max = t->hw.resolution_max;
++ }
++ list_for_each(p, &t->open_list_head) {
++ ginfo->clients++;
++ }
+ }
+ if (copy_to_user(_ginfo, ginfo, sizeof(*ginfo)))
+ return -EFAULT;
+@@ -1674,31 +1681,31 @@ static int snd_timer_user_gstatus(struct file *file,
+ struct snd_timer_gstatus gstatus;
+ struct snd_timer_id tid;
+ struct snd_timer *t;
+- int err = 0;
+
+ if (copy_from_user(&gstatus, _gstatus, sizeof(gstatus)))
+ return -EFAULT;
+ tid = gstatus.tid;
+ memset(&gstatus, 0, sizeof(gstatus));
+ gstatus.tid = tid;
+- guard(mutex)(®ister_mutex);
+- t = snd_timer_find(&tid);
+- if (t != NULL) {
+- guard(spinlock_irq)(&t->lock);
+- gstatus.resolution = snd_timer_hw_resolution(t);
+- if (t->hw.precise_resolution) {
+- t->hw.precise_resolution(t, &gstatus.resolution_num,
+- &gstatus.resolution_den);
++ scoped_guard(mutex, ®ister_mutex) {
++ t = snd_timer_find(&tid);
++ if (t != NULL) {
++ guard(spinlock_irq)(&t->lock);
++ gstatus.resolution = snd_timer_hw_resolution(t);
++ if (t->hw.precise_resolution) {
++ t->hw.precise_resolution(t, &gstatus.resolution_num,
++ &gstatus.resolution_den);
++ } else {
++ gstatus.resolution_num = gstatus.resolution;
++ gstatus.resolution_den = 1000000000uL;
++ }
+ } else {
+- gstatus.resolution_num = gstatus.resolution;
+- gstatus.resolution_den = 1000000000uL;
++ return -ENODEV;
+ }
+- } else {
+- err = -ENODEV;
+ }
+- if (err >= 0 && copy_to_user(_gstatus, &gstatus, sizeof(gstatus)))
+- err = -EFAULT;
+- return err;
++ if (copy_to_user(_gstatus, &gstatus, sizeof(gstatus)))
++ return -EFAULT;
++ return 0;
+ }
+
+ static int snd_timer_user_tselect(struct file *file,
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 78aab243c8b655..65ece19a6dd7d3 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -586,6 +586,9 @@ static void alc_shutup_pins(struct hda_codec *codec)
+ {
+ struct alc_spec *spec = codec->spec;
+
++ if (spec->no_shutup_pins)
++ return;
++
+ switch (codec->core.vendor_id) {
+ case 0x10ec0236:
+ case 0x10ec0256:
+@@ -601,8 +604,7 @@ static void alc_shutup_pins(struct hda_codec *codec)
+ alc_headset_mic_no_shutup(codec);
+ break;
+ default:
+- if (!spec->no_shutup_pins)
+- snd_hda_shutup_pins(codec);
++ snd_hda_shutup_pins(codec);
+ break;
+ }
+ }
+@@ -10700,6 +10702,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1043, 0x103f, "ASUS TX300", ALC282_FIXUP_ASUS_TX300),
+ SND_PCI_QUIRK(0x1043, 0x1054, "ASUS G614FH/FM/FP", ALC287_FIXUP_CS35L41_I2C_2),
+ SND_PCI_QUIRK(0x1043, 0x106d, "Asus K53BE", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
++ SND_PCI_QUIRK(0x1043, 0x106f, "ASUS VivoBook X515UA", ALC256_FIXUP_ASUS_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1043, 0x1074, "ASUS G614PH/PM/PP", ALC287_FIXUP_CS35L41_I2C_2),
+ SND_PCI_QUIRK(0x1043, 0x10a1, "ASUS UX391UA", ALC294_FIXUP_ASUS_SPK),
+ SND_PCI_QUIRK(0x1043, 0x10a4, "ASUS TP3407SA", ALC287_FIXUP_TAS2781_I2C),
+@@ -10733,6 +10736,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1043, 0x1493, "ASUS GV601VV/VU/VJ/VQ/VI", ALC285_FIXUP_ASUS_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1043, 0x14d3, "ASUS G614JY/JZ/JG", ALC245_FIXUP_CS35L41_SPI_2),
+ SND_PCI_QUIRK(0x1043, 0x14e3, "ASUS G513PI/PU/PV", ALC287_FIXUP_CS35L41_I2C_2),
++ SND_PCI_QUIRK(0x1043, 0x14f2, "ASUS VivoBook X515JA", ALC256_FIXUP_ASUS_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1043, 0x1503, "ASUS G733PY/PZ/PZV/PYV", ALC287_FIXUP_CS35L41_I2C_2),
+ SND_PCI_QUIRK(0x1043, 0x1517, "Asus Zenbook UX31A", ALC269VB_FIXUP_ASUS_ZENBOOK_UX31A),
+ SND_PCI_QUIRK(0x1043, 0x1533, "ASUS GV302XA/XJ/XQ/XU/XV/XI", ALC287_FIXUP_CS35L41_I2C_2),
+@@ -10772,6 +10776,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1043, 0x1c43, "ASUS UX8406MA", ALC245_FIXUP_CS35L41_SPI_2),
+ SND_PCI_QUIRK(0x1043, 0x1c62, "ASUS GU603", ALC289_FIXUP_ASUS_GA401),
+ SND_PCI_QUIRK(0x1043, 0x1c63, "ASUS GU605M", ALC285_FIXUP_ASUS_GU605_SPI_SPEAKER2_TO_DAC1),
++ SND_PCI_QUIRK(0x1043, 0x1c80, "ASUS VivoBook TP401", ALC256_FIXUP_ASUS_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1043, 0x1c92, "ASUS ROG Strix G15", ALC285_FIXUP_ASUS_G533Z_PINS),
+ SND_PCI_QUIRK(0x1043, 0x1c9f, "ASUS G614JU/JV/JI", ALC285_FIXUP_ASUS_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1043, 0x1caf, "ASUS G634JY/JZ/JI/JG", ALC285_FIXUP_ASUS_SPI_REAR_SPEAKERS),
+diff --git a/sound/soc/amd/acp/acp-legacy-common.c b/sound/soc/amd/acp/acp-legacy-common.c
+index 7acc7ed2e8cc90..b9f085c560c2da 100644
+--- a/sound/soc/amd/acp/acp-legacy-common.c
++++ b/sound/soc/amd/acp/acp-legacy-common.c
+@@ -13,6 +13,7 @@
+ */
+
+ #include "amd.h"
++#include <linux/acpi.h>
+ #include <linux/pci.h>
+ #include <linux/export.h>
+
+@@ -445,7 +446,9 @@ void check_acp_config(struct pci_dev *pci, struct acp_chip_info *chip)
+ {
+ struct acpi_device *pdm_dev;
+ const union acpi_object *obj;
+- u32 pdm_addr;
++ acpi_handle handle;
++ acpi_integer dmic_status;
++ u32 pdm_addr, ret;
+
+ switch (chip->acp_rev) {
+ case ACP_RN_PCI_ID:
+@@ -477,6 +480,11 @@ void check_acp_config(struct pci_dev *pci, struct acp_chip_info *chip)
+ obj->integer.value == pdm_addr)
+ chip->is_pdm_dev = true;
+ }
++
++ handle = ACPI_HANDLE(&pci->dev);
++ ret = acpi_evaluate_integer(handle, "_WOV", NULL, &dmic_status);
++ if (!ACPI_FAILURE(ret))
++ chip->is_pdm_dev = dmic_status;
+ }
+ }
+ EXPORT_SYMBOL_NS_GPL(check_acp_config, "SND_SOC_ACP_COMMON");
+diff --git a/sound/soc/codecs/cs35l41-spi.c b/sound/soc/codecs/cs35l41-spi.c
+index a6db44520c060b..f9b6bf7bea9c97 100644
+--- a/sound/soc/codecs/cs35l41-spi.c
++++ b/sound/soc/codecs/cs35l41-spi.c
+@@ -32,13 +32,16 @@ static int cs35l41_spi_probe(struct spi_device *spi)
+ const struct regmap_config *regmap_config = &cs35l41_regmap_spi;
+ struct cs35l41_hw_cfg *hw_cfg = dev_get_platdata(&spi->dev);
+ struct cs35l41_private *cs35l41;
++ int ret;
+
+ cs35l41 = devm_kzalloc(&spi->dev, sizeof(struct cs35l41_private), GFP_KERNEL);
+ if (!cs35l41)
+ return -ENOMEM;
+
+ spi->max_speed_hz = CS35L41_SPI_MAX_FREQ;
+- spi_setup(spi);
++ ret = spi_setup(spi);
++ if (ret < 0)
++ return ret;
+
+ spi_set_drvdata(spi, cs35l41);
+ cs35l41->regmap = devm_regmap_init_spi(spi, regmap_config);
+diff --git a/sound/soc/codecs/mt6359.c b/sound/soc/codecs/mt6359.c
+index 0b76a55664b035..f73120c6a6ce68 100644
+--- a/sound/soc/codecs/mt6359.c
++++ b/sound/soc/codecs/mt6359.c
+@@ -2867,9 +2867,12 @@ static int mt6359_parse_dt(struct mt6359_priv *priv)
+ struct device *dev = priv->dev;
+ struct device_node *np;
+
+- np = of_get_child_by_name(dev->parent->of_node, "mt6359codec");
+- if (!np)
+- return -EINVAL;
++ np = of_get_child_by_name(dev->parent->of_node, "audio-codec");
++ if (!np) {
++ np = of_get_child_by_name(dev->parent->of_node, "mt6359codec");
++ if (!np)
++ return -EINVAL;
++ }
+
+ ret = of_property_read_u32(np, "mediatek,dmic-mode",
+ &priv->dmic_one_wire_mode);
+diff --git a/sound/soc/codecs/rt5665.c b/sound/soc/codecs/rt5665.c
+index 47df14ba52784b..4f0236b34a2d9b 100644
+--- a/sound/soc/codecs/rt5665.c
++++ b/sound/soc/codecs/rt5665.c
+@@ -31,9 +31,7 @@
+ #include "rl6231.h"
+ #include "rt5665.h"
+
+-#define RT5665_NUM_SUPPLIES 3
+-
+-static const char *rt5665_supply_names[RT5665_NUM_SUPPLIES] = {
++static const char * const rt5665_supply_names[] = {
+ "AVDD",
+ "MICVDD",
+ "VBAT",
+@@ -46,7 +44,6 @@ struct rt5665_priv {
+ struct gpio_desc *gpiod_ldo1_en;
+ struct gpio_desc *gpiod_reset;
+ struct snd_soc_jack *hs_jack;
+- struct regulator_bulk_data supplies[RT5665_NUM_SUPPLIES];
+ struct delayed_work jack_detect_work;
+ struct delayed_work calibrate_work;
+ struct delayed_work jd_check_work;
+@@ -4471,8 +4468,6 @@ static void rt5665_remove(struct snd_soc_component *component)
+ struct rt5665_priv *rt5665 = snd_soc_component_get_drvdata(component);
+
+ regmap_write(rt5665->regmap, RT5665_RESET, 0);
+-
+- regulator_bulk_disable(ARRAY_SIZE(rt5665->supplies), rt5665->supplies);
+ }
+
+ #ifdef CONFIG_PM
+@@ -4758,7 +4753,7 @@ static int rt5665_i2c_probe(struct i2c_client *i2c)
+ {
+ struct rt5665_platform_data *pdata = dev_get_platdata(&i2c->dev);
+ struct rt5665_priv *rt5665;
+- int i, ret;
++ int ret;
+ unsigned int val;
+
+ rt5665 = devm_kzalloc(&i2c->dev, sizeof(struct rt5665_priv),
+@@ -4774,24 +4769,13 @@ static int rt5665_i2c_probe(struct i2c_client *i2c)
+ else
+ rt5665_parse_dt(rt5665, &i2c->dev);
+
+- for (i = 0; i < ARRAY_SIZE(rt5665->supplies); i++)
+- rt5665->supplies[i].supply = rt5665_supply_names[i];
+-
+- ret = devm_regulator_bulk_get(&i2c->dev, ARRAY_SIZE(rt5665->supplies),
+- rt5665->supplies);
++ ret = devm_regulator_bulk_get_enable(&i2c->dev, ARRAY_SIZE(rt5665_supply_names),
++ rt5665_supply_names);
+ if (ret != 0) {
+ dev_err(&i2c->dev, "Failed to request supplies: %d\n", ret);
+ return ret;
+ }
+
+- ret = regulator_bulk_enable(ARRAY_SIZE(rt5665->supplies),
+- rt5665->supplies);
+- if (ret != 0) {
+- dev_err(&i2c->dev, "Failed to enable supplies: %d\n", ret);
+- return ret;
+- }
+-
+-
+ rt5665->gpiod_ldo1_en = devm_gpiod_get_optional(&i2c->dev,
+ "realtek,ldo1-en",
+ GPIOD_OUT_HIGH);
+diff --git a/sound/soc/fsl/imx-card.c b/sound/soc/fsl/imx-card.c
+index ac043ad367ace0..21f617f6f9fa84 100644
+--- a/sound/soc/fsl/imx-card.c
++++ b/sound/soc/fsl/imx-card.c
+@@ -767,6 +767,8 @@ static int imx_card_probe(struct platform_device *pdev)
+ data->dapm_routes[i].sink =
+ devm_kasprintf(&pdev->dev, GFP_KERNEL, "%d %s",
+ i + 1, "Playback");
++ if (!data->dapm_routes[i].sink)
++ return -ENOMEM;
+ data->dapm_routes[i].source = "CPU-Playback";
+ }
+ }
+@@ -784,6 +786,8 @@ static int imx_card_probe(struct platform_device *pdev)
+ data->dapm_routes[i].source =
+ devm_kasprintf(&pdev->dev, GFP_KERNEL, "%d %s",
+ i + 1, "Capture");
++ if (!data->dapm_routes[i].source)
++ return -ENOMEM;
+ data->dapm_routes[i].sink = "CPU-Capture";
+ }
+ }
+diff --git a/sound/soc/generic/simple-card-utils.c b/sound/soc/generic/simple-card-utils.c
+index c2445c5ccd84c2..32efb30c55d695 100644
+--- a/sound/soc/generic/simple-card-utils.c
++++ b/sound/soc/generic/simple-card-utils.c
+@@ -1077,6 +1077,7 @@ static int graph_get_dai_id(struct device_node *ep)
+ int graph_util_parse_dai(struct device *dev, struct device_node *ep,
+ struct snd_soc_dai_link_component *dlc, int *is_single_link)
+ {
++ struct device_node *node;
+ struct of_phandle_args args = {};
+ struct snd_soc_dai *dai;
+ int ret;
+@@ -1084,7 +1085,7 @@ int graph_util_parse_dai(struct device *dev, struct device_node *ep,
+ if (!ep)
+ return 0;
+
+- struct device_node *node __free(device_node) = of_graph_get_port_parent(ep);
++ node = of_graph_get_port_parent(ep);
+
+ /*
+ * Try to find from DAI node
+@@ -1126,8 +1127,10 @@ int graph_util_parse_dai(struct device *dev, struct device_node *ep,
+ * if he unbinded CPU or Codec.
+ */
+ ret = snd_soc_get_dlc(&args, dlc);
+- if (ret < 0)
++ if (ret < 0) {
++ of_node_put(node);
+ return ret;
++ }
+
+ parse_dai_end:
+ if (is_single_link)
+diff --git a/sound/soc/tegra/tegra210_adx.c b/sound/soc/tegra/tegra210_adx.c
+index 0aa93b948378f3..3c10e09976ad00 100644
+--- a/sound/soc/tegra/tegra210_adx.c
++++ b/sound/soc/tegra/tegra210_adx.c
+@@ -1,5 +1,5 @@
+ // SPDX-License-Identifier: GPL-2.0-only
+-// SPDX-FileCopyrightText: Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES.
++// SPDX-FileCopyrightText: Copyright (c) 2021-2025 NVIDIA CORPORATION & AFFILIATES.
+ // All rights reserved.
+ //
+ // tegra210_adx.c - Tegra210 ADX driver
+@@ -57,8 +57,8 @@ static int tegra210_adx_startup(struct snd_pcm_substream *substream,
+ int err;
+
+ /* Ensure if ADX status is disabled */
+- err = regmap_read_poll_timeout_atomic(adx->regmap, TEGRA210_ADX_STATUS,
+- val, !(val & 0x1), 10, 10000);
++ err = regmap_read_poll_timeout(adx->regmap, TEGRA210_ADX_STATUS,
++ val, !(val & 0x1), 10, 10000);
+ if (err < 0) {
+ dev_err(dai->dev, "failed to stop ADX, err = %d\n", err);
+ return err;
+diff --git a/sound/soc/ti/j721e-evm.c b/sound/soc/ti/j721e-evm.c
+index d9d1e021f5b2ee..0f96cc45578d8c 100644
+--- a/sound/soc/ti/j721e-evm.c
++++ b/sound/soc/ti/j721e-evm.c
+@@ -182,6 +182,8 @@ static int j721e_configure_refclk(struct j721e_priv *priv,
+ clk_id = J721E_CLK_PARENT_48000;
+ else if (!(rate % 11025) && priv->pll_rates[J721E_CLK_PARENT_44100])
+ clk_id = J721E_CLK_PARENT_44100;
++ else if (!(rate % 11025) && priv->pll_rates[J721E_CLK_PARENT_48000])
++ clk_id = J721E_CLK_PARENT_48000;
+ else
+ return ret;
+
+diff --git a/sound/usb/mixer_quirks.c b/sound/usb/mixer_quirks.c
+index 3d36d22f8e9e6b..62b28e9d83c7a7 100644
+--- a/sound/usb/mixer_quirks.c
++++ b/sound/usb/mixer_quirks.c
+@@ -3688,8 +3688,7 @@ static const char *snd_djm_get_label(u8 device_idx, u16 wvalue, u16 windex)
+
+ // common DJM capture level option values
+ static const u16 snd_djm_opts_cap_level[] = {
+- 0x0000, 0x0100, 0x0200, 0x0300, 0x400, 0x500 };
+-
++ 0x0000, 0x0100, 0x0200, 0x0300 };
+
+ // DJM-250MK2
+ static const u16 snd_djm_opts_250mk2_cap1[] = {
+@@ -3831,6 +3830,8 @@ static const struct snd_djm_ctl snd_djm_ctls_750mk2[] = {
+
+
+ // DJM-A9
++static const u16 snd_djm_opts_a9_cap_level[] = {
++ 0x0000, 0x0100, 0x0200, 0x0300, 0x0400, 0x0500 };
+ static const u16 snd_djm_opts_a9_cap1[] = {
+ 0x0107, 0x0108, 0x0109, 0x010a, 0x010e,
+ 0x111, 0x112, 0x113, 0x114, 0x0131, 0x132, 0x133, 0x134 };
+@@ -3844,7 +3845,7 @@ static const u16 snd_djm_opts_a9_cap5[] = {
+ 0x0501, 0x0502, 0x0503, 0x0505, 0x0506, 0x0507, 0x0508, 0x0509, 0x050a, 0x050e };
+
+ static const struct snd_djm_ctl snd_djm_ctls_a9[] = {
+- SND_DJM_CTL("Capture Level", cap_level, 0, SND_DJM_WINDEX_CAPLVL),
++ SND_DJM_CTL("Capture Level", a9_cap_level, 0, SND_DJM_WINDEX_CAPLVL),
+ SND_DJM_CTL("Master Input", a9_cap1, 3, SND_DJM_WINDEX_CAP),
+ SND_DJM_CTL("Ch1 Input", a9_cap2, 2, SND_DJM_WINDEX_CAP),
+ SND_DJM_CTL("Ch2 Input", a9_cap3, 2, SND_DJM_WINDEX_CAP),
+diff --git a/tools/arch/x86/lib/insn.c b/tools/arch/x86/lib/insn.c
+index ab5cdc3337dacb..e91d4c4e1c1621 100644
+--- a/tools/arch/x86/lib/insn.c
++++ b/tools/arch/x86/lib/insn.c
+@@ -13,7 +13,7 @@
+ #endif
+ #include "../include/asm/inat.h" /* __ignore_sync_check__ */
+ #include "../include/asm/insn.h" /* __ignore_sync_check__ */
+-#include "../include/linux/unaligned.h" /* __ignore_sync_check__ */
++#include <linux/unaligned.h> /* __ignore_sync_check__ */
+
+ #include <linux/errno.h>
+ #include <linux/kconfig.h>
+diff --git a/tools/bpf/runqslower/Makefile b/tools/bpf/runqslower/Makefile
+index e49203ebd48c18..78a436c4072e38 100644
+--- a/tools/bpf/runqslower/Makefile
++++ b/tools/bpf/runqslower/Makefile
+@@ -6,6 +6,7 @@ OUTPUT ?= $(abspath .output)/
+ BPFTOOL_OUTPUT := $(OUTPUT)bpftool/
+ DEFAULT_BPFTOOL := $(BPFTOOL_OUTPUT)bootstrap/bpftool
+ BPFTOOL ?= $(DEFAULT_BPFTOOL)
++BPF_TARGET_ENDIAN ?= --target=bpf
+ LIBBPF_SRC := $(abspath ../../lib/bpf)
+ BPFOBJ_OUTPUT := $(OUTPUT)libbpf/
+ BPFOBJ := $(BPFOBJ_OUTPUT)libbpf.a
+@@ -60,7 +61,7 @@ $(OUTPUT)/%.skel.h: $(OUTPUT)/%.bpf.o | $(BPFTOOL)
+ $(QUIET_GEN)$(BPFTOOL) gen skeleton $< > $@
+
+ $(OUTPUT)/%.bpf.o: %.bpf.c $(BPFOBJ) | $(OUTPUT)
+- $(QUIET_GEN)$(CLANG) -g -O2 --target=bpf $(INCLUDES) \
++ $(QUIET_GEN)$(CLANG) -g -O2 $(BPF_TARGET_ENDIAN) $(INCLUDES) \
+ -c $(filter %.c,$^) -o $@ && \
+ $(LLVM_STRIP) -g $@
+
+diff --git a/tools/include/uapi/linux/if_xdp.h b/tools/include/uapi/linux/if_xdp.h
+index 42ec5ddaab8dc8..42869770776ec0 100644
+--- a/tools/include/uapi/linux/if_xdp.h
++++ b/tools/include/uapi/linux/if_xdp.h
+@@ -127,6 +127,12 @@ struct xdp_options {
+ */
+ #define XDP_TXMD_FLAGS_CHECKSUM (1 << 1)
+
++/* Request launch time hardware offload. The device will schedule the packet for
++ * transmission at a pre-determined time called launch time. The value of
++ * launch time is communicated via launch_time field of struct xsk_tx_metadata.
++ */
++#define XDP_TXMD_FLAGS_LAUNCH_TIME (1 << 2)
++
+ /* AF_XDP offloads request. 'request' union member is consumed by the driver
+ * when the packet is being transmitted. 'completion' union member is
+ * filled by the driver when the transmit completion arrives.
+@@ -142,6 +148,10 @@ struct xsk_tx_metadata {
+ __u16 csum_start;
+ /* Offset from csum_start where checksum should be stored. */
+ __u16 csum_offset;
++
++ /* XDP_TXMD_FLAGS_LAUNCH_TIME */
++ /* Launch time in nanosecond against the PTP HW Clock */
++ __u64 launch_time;
+ } request;
+
+ struct {
+diff --git a/tools/include/uapi/linux/netdev.h b/tools/include/uapi/linux/netdev.h
+index e4be227d3ad648..4324e89a802618 100644
+--- a/tools/include/uapi/linux/netdev.h
++++ b/tools/include/uapi/linux/netdev.h
+@@ -59,10 +59,13 @@ enum netdev_xdp_rx_metadata {
+ * by the driver.
+ * @NETDEV_XSK_FLAGS_TX_CHECKSUM: L3 checksum HW offload is supported by the
+ * driver.
++ * @NETDEV_XSK_FLAGS_TX_LAUNCH_TIME_FIFO: Launch time HW offload is supported
++ * by the driver.
+ */
+ enum netdev_xsk_flags {
+ NETDEV_XSK_FLAGS_TX_TIMESTAMP = 1,
+ NETDEV_XSK_FLAGS_TX_CHECKSUM = 2,
++ NETDEV_XSK_FLAGS_TX_LAUNCH_TIME_FIFO = 4,
+ };
+
+ enum netdev_queue_type {
+diff --git a/tools/lib/bpf/btf.c b/tools/lib/bpf/btf.c
+index 48c66f3a920021..560b519f820e24 100644
+--- a/tools/lib/bpf/btf.c
++++ b/tools/lib/bpf/btf.c
+@@ -3015,8 +3015,6 @@ static int btf_ext_parse_info(struct btf_ext *btf_ext, bool is_native)
+ .desc = "line_info",
+ };
+ struct btf_ext_sec_info_param core_relo = {
+- .off = btf_ext->hdr->core_relo_off,
+- .len = btf_ext->hdr->core_relo_len,
+ .min_rec_size = sizeof(struct bpf_core_relo),
+ .ext_info = &btf_ext->core_relo_info,
+ .desc = "core_relo",
+@@ -3034,6 +3032,8 @@ static int btf_ext_parse_info(struct btf_ext *btf_ext, bool is_native)
+ if (btf_ext->hdr->hdr_len < offsetofend(struct btf_ext_header, core_relo_len))
+ return 0; /* skip core relos parsing */
+
++ core_relo.off = btf_ext->hdr->core_relo_off;
++ core_relo.len = btf_ext->hdr->core_relo_len;
+ err = btf_ext_parse_sec_info(btf_ext, &core_relo, is_native);
+ if (err)
+ return err;
+diff --git a/tools/lib/bpf/linker.c b/tools/lib/bpf/linker.c
+index b52f71c59616fd..800e0ef09c3787 100644
+--- a/tools/lib/bpf/linker.c
++++ b/tools/lib/bpf/linker.c
+@@ -2163,7 +2163,7 @@ static int linker_append_elf_sym(struct bpf_linker *linker, struct src_obj *obj,
+
+ obj->sym_map[src_sym_idx] = dst_sym_idx;
+
+- if (sym_type == STT_SECTION && dst_sym) {
++ if (sym_type == STT_SECTION && dst_sec) {
+ dst_sec->sec_sym_idx = dst_sym_idx;
+ dst_sym->st_value = 0;
+ }
+diff --git a/tools/lib/bpf/str_error.c b/tools/lib/bpf/str_error.c
+index 8743049e32b7d4..9a541762f54c08 100644
+--- a/tools/lib/bpf/str_error.c
++++ b/tools/lib/bpf/str_error.c
+@@ -36,7 +36,7 @@ char *libbpf_strerror_r(int err, char *dst, int len)
+ return dst;
+ }
+
+-const char *errstr(int err)
++const char *libbpf_errstr(int err)
+ {
+ static __thread char buf[12];
+
+diff --git a/tools/lib/bpf/str_error.h b/tools/lib/bpf/str_error.h
+index 66ffebde0684aa..53e7fbffc13ec1 100644
+--- a/tools/lib/bpf/str_error.h
++++ b/tools/lib/bpf/str_error.h
+@@ -7,10 +7,13 @@
+ char *libbpf_strerror_r(int err, char *dst, int len);
+
+ /**
+- * @brief **errstr()** returns string corresponding to numeric errno
++ * @brief **libbpf_errstr()** returns string corresponding to numeric errno
+ * @param err negative numeric errno
+ * @return pointer to string representation of the errno, that is invalidated
+ * upon the next call.
+ */
+-const char *errstr(int err);
++const char *libbpf_errstr(int err);
++
++#define errstr(err) libbpf_errstr(err)
++
+ #endif /* __LIBBPF_STR_ERROR_H */
+diff --git a/tools/objtool/arch/loongarch/decode.c b/tools/objtool/arch/loongarch/decode.c
+index 69b66994f2a155..02e49055596674 100644
+--- a/tools/objtool/arch/loongarch/decode.c
++++ b/tools/objtool/arch/loongarch/decode.c
+@@ -5,10 +5,7 @@
+ #include <asm/inst.h>
+ #include <asm/orc_types.h>
+ #include <linux/objtool_types.h>
+-
+-#ifndef EM_LOONGARCH
+-#define EM_LOONGARCH 258
+-#endif
++#include <arch/elf.h>
+
+ int arch_ftrace_match(char *name)
+ {
+@@ -363,3 +360,26 @@ void arch_initial_func_cfi_state(struct cfi_init_state *state)
+ state->cfa.base = CFI_SP;
+ state->cfa.offset = 0;
+ }
++
++unsigned int arch_reloc_size(struct reloc *reloc)
++{
++ switch (reloc_type(reloc)) {
++ case R_LARCH_32:
++ case R_LARCH_32_PCREL:
++ return 4;
++ default:
++ return 8;
++ }
++}
++
++unsigned long arch_jump_table_sym_offset(struct reloc *reloc, struct reloc *table)
++{
++ switch (reloc_type(reloc)) {
++ case R_LARCH_32_PCREL:
++ case R_LARCH_64_PCREL:
++ return reloc->sym->offset + reloc_addend(reloc) -
++ (reloc_offset(reloc) - reloc_offset(table));
++ default:
++ return reloc->sym->offset + reloc_addend(reloc);
++ }
++}
+diff --git a/tools/objtool/arch/loongarch/include/arch/elf.h b/tools/objtool/arch/loongarch/include/arch/elf.h
+index 9623d663220eff..ec79062c9554db 100644
+--- a/tools/objtool/arch/loongarch/include/arch/elf.h
++++ b/tools/objtool/arch/loongarch/include/arch/elf.h
+@@ -18,6 +18,13 @@
+ #ifndef R_LARCH_32_PCREL
+ #define R_LARCH_32_PCREL 99
+ #endif
++#ifndef R_LARCH_64_PCREL
++#define R_LARCH_64_PCREL 109
++#endif
++
++#ifndef EM_LOONGARCH
++#define EM_LOONGARCH 258
++#endif
+
+ #define R_NONE R_LARCH_NONE
+ #define R_ABS32 R_LARCH_32
+diff --git a/tools/objtool/arch/powerpc/decode.c b/tools/objtool/arch/powerpc/decode.c
+index 53b55690f32041..7c0bf242906759 100644
+--- a/tools/objtool/arch/powerpc/decode.c
++++ b/tools/objtool/arch/powerpc/decode.c
+@@ -106,3 +106,17 @@ void arch_initial_func_cfi_state(struct cfi_init_state *state)
+ state->regs[CFI_RA].base = CFI_CFA;
+ state->regs[CFI_RA].offset = 0;
+ }
++
++unsigned int arch_reloc_size(struct reloc *reloc)
++{
++ switch (reloc_type(reloc)) {
++ case R_PPC_REL32:
++ case R_PPC_ADDR32:
++ case R_PPC_UADDR32:
++ case R_PPC_PLT32:
++ case R_PPC_PLTREL32:
++ return 4;
++ default:
++ return 8;
++ }
++}
+diff --git a/tools/objtool/arch/x86/decode.c b/tools/objtool/arch/x86/decode.c
+index fe1362c345647e..fb9691a34d9263 100644
+--- a/tools/objtool/arch/x86/decode.c
++++ b/tools/objtool/arch/x86/decode.c
+@@ -852,3 +852,16 @@ bool arch_is_embedded_insn(struct symbol *sym)
+ return !strcmp(sym->name, "retbleed_return_thunk") ||
+ !strcmp(sym->name, "srso_safe_ret");
+ }
++
++unsigned int arch_reloc_size(struct reloc *reloc)
++{
++ switch (reloc_type(reloc)) {
++ case R_X86_64_32:
++ case R_X86_64_32S:
++ case R_X86_64_PC32:
++ case R_X86_64_PLT32:
++ return 4;
++ default:
++ return 8;
++ }
++}
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index ce973d9d8e6d81..159fb130e28270 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -1944,8 +1944,12 @@ static int add_special_section_alts(struct objtool_file *file)
+ return ret;
+ }
+
+-static int add_jump_table(struct objtool_file *file, struct instruction *insn,
+- struct reloc *next_table)
++__weak unsigned long arch_jump_table_sym_offset(struct reloc *reloc, struct reloc *table)
++{
++ return reloc->sym->offset + reloc_addend(reloc);
++}
++
++static int add_jump_table(struct objtool_file *file, struct instruction *insn)
+ {
+ unsigned long table_size = insn_jump_table_size(insn);
+ struct symbol *pfunc = insn_func(insn)->pfunc;
+@@ -1954,6 +1958,7 @@ static int add_jump_table(struct objtool_file *file, struct instruction *insn,
+ unsigned int prev_offset = 0;
+ struct reloc *reloc = table;
+ struct alternative *alt;
++ unsigned long sym_offset;
+
+ /*
+ * Each @reloc is a switch table relocation which points to the target
+@@ -1964,16 +1969,17 @@ static int add_jump_table(struct objtool_file *file, struct instruction *insn,
+ /* Check for the end of the table: */
+ if (table_size && reloc_offset(reloc) - reloc_offset(table) >= table_size)
+ break;
+- if (reloc != table && reloc == next_table)
++ if (reloc != table && is_jump_table(reloc))
+ break;
+
+ /* Make sure the table entries are consecutive: */
+- if (prev_offset && reloc_offset(reloc) != prev_offset + 8)
++ if (prev_offset && reloc_offset(reloc) != prev_offset + arch_reloc_size(reloc))
+ break;
+
++ sym_offset = arch_jump_table_sym_offset(reloc, table);
++
+ /* Detect function pointers from contiguous objects: */
+- if (reloc->sym->sec == pfunc->sec &&
+- reloc_addend(reloc) == pfunc->offset)
++ if (reloc->sym->sec == pfunc->sec && sym_offset == pfunc->offset)
+ break;
+
+ /*
+@@ -1981,10 +1987,10 @@ static int add_jump_table(struct objtool_file *file, struct instruction *insn,
+ * which point to the end of the function. Ignore them.
+ */
+ if (reloc->sym->sec == pfunc->sec &&
+- reloc_addend(reloc) == pfunc->offset + pfunc->len)
++ sym_offset == pfunc->offset + pfunc->len)
+ goto next;
+
+- dest_insn = find_insn(file, reloc->sym->sec, reloc_addend(reloc));
++ dest_insn = find_insn(file, reloc->sym->sec, sym_offset);
+ if (!dest_insn)
+ break;
+
+@@ -2023,6 +2029,7 @@ static void find_jump_table(struct objtool_file *file, struct symbol *func,
+ struct reloc *table_reloc;
+ struct instruction *dest_insn, *orig_insn = insn;
+ unsigned long table_size;
++ unsigned long sym_offset;
+
+ /*
+ * Backward search using the @first_jump_src links, these help avoid
+@@ -2046,12 +2053,17 @@ static void find_jump_table(struct objtool_file *file, struct symbol *func,
+ table_reloc = arch_find_switch_table(file, insn, &table_size);
+ if (!table_reloc)
+ continue;
+- dest_insn = find_insn(file, table_reloc->sym->sec, reloc_addend(table_reloc));
++
++ sym_offset = table_reloc->sym->offset + reloc_addend(table_reloc);
++
++ dest_insn = find_insn(file, table_reloc->sym->sec, sym_offset);
+ if (!dest_insn || !insn_func(dest_insn) || insn_func(dest_insn)->pfunc != func)
+ continue;
+
++ set_jump_table(table_reloc);
+ orig_insn->_jump_table = table_reloc;
+ orig_insn->_jump_table_size = table_size;
++
+ break;
+ }
+ }
+@@ -2093,31 +2105,20 @@ static void mark_func_jump_tables(struct objtool_file *file,
+ static int add_func_jump_tables(struct objtool_file *file,
+ struct symbol *func)
+ {
+- struct instruction *insn, *insn_t1 = NULL, *insn_t2;
+- int ret = 0;
++ struct instruction *insn;
++ int ret;
+
+ func_for_each_insn(file, func, insn) {
+ if (!insn_jump_table(insn))
+ continue;
+
+- if (!insn_t1) {
+- insn_t1 = insn;
+- continue;
+- }
+
+- insn_t2 = insn;
+-
+- ret = add_jump_table(file, insn_t1, insn_jump_table(insn_t2));
++ ret = add_jump_table(file, insn);
+ if (ret)
+ return ret;
+-
+- insn_t1 = insn_t2;
+ }
+
+- if (insn_t1)
+- ret = add_jump_table(file, insn_t1, NULL);
+-
+- return ret;
++ return 0;
+ }
+
+ /*
+@@ -4008,7 +4009,7 @@ static bool ignore_unreachable_insn(struct objtool_file *file, struct instructio
+ * It may also insert a UD2 after calling a __noreturn function.
+ */
+ prev_insn = prev_insn_same_sec(file, insn);
+- if (prev_insn->dead_end &&
++ if (prev_insn && prev_insn->dead_end &&
+ (insn->type == INSN_BUG ||
+ (insn->type == INSN_JUMP_UNCONDITIONAL &&
+ insn->jump_dest && insn->jump_dest->type == INSN_BUG)))
+@@ -4449,35 +4450,6 @@ static int validate_sls(struct objtool_file *file)
+ return warnings;
+ }
+
+-static bool ignore_noreturn_call(struct instruction *insn)
+-{
+- struct symbol *call_dest = insn_call_dest(insn);
+-
+- /*
+- * FIXME: hack, we need a real noreturn solution
+- *
+- * Problem is, exc_double_fault() may or may not return, depending on
+- * whether CONFIG_X86_ESPFIX64 is set. But objtool has no visibility
+- * to the kernel config.
+- *
+- * Other potential ways to fix it:
+- *
+- * - have compiler communicate __noreturn functions somehow
+- * - remove CONFIG_X86_ESPFIX64
+- * - read the .config file
+- * - add a cmdline option
+- * - create a generic objtool annotation format (vs a bunch of custom
+- * formats) and annotate it
+- */
+- if (!strcmp(call_dest->name, "exc_double_fault")) {
+- /* prevent further unreachable warnings for the caller */
+- insn->sym->warned = 1;
+- return true;
+- }
+-
+- return false;
+-}
+-
+ static int validate_reachable_instructions(struct objtool_file *file)
+ {
+ struct instruction *insn, *prev_insn;
+@@ -4494,7 +4466,7 @@ static int validate_reachable_instructions(struct objtool_file *file)
+ prev_insn = prev_insn_same_sec(file, insn);
+ if (prev_insn && prev_insn->dead_end) {
+ call_dest = insn_call_dest(prev_insn);
+- if (call_dest && !ignore_noreturn_call(prev_insn)) {
++ if (call_dest) {
+ WARN_INSN(insn, "%s() is missing a __noreturn annotation",
+ call_dest->name);
+ warnings++;
+@@ -4517,6 +4489,8 @@ static int disas_funcs(const char *funcs)
+ char *cmd;
+
+ cross_compile = getenv("CROSS_COMPILE");
++ if (!cross_compile)
++ cross_compile = "";
+
+ objdump_str = "%sobjdump -wdr %s | gawk -M -v _funcs='%s' '"
+ "BEGIN { split(_funcs, funcs); }"
+diff --git a/tools/objtool/elf.c b/tools/objtool/elf.c
+index 6f64d611faea96..934855be631c01 100644
+--- a/tools/objtool/elf.c
++++ b/tools/objtool/elf.c
+@@ -583,7 +583,7 @@ static int elf_update_sym_relocs(struct elf *elf, struct symbol *sym)
+ {
+ struct reloc *reloc;
+
+- for (reloc = sym->relocs; reloc; reloc = reloc->sym_next_reloc)
++ for (reloc = sym->relocs; reloc; reloc = sym_next_reloc(reloc))
+ set_reloc_sym(elf, reloc, reloc->sym->idx);
+
+ return 0;
+@@ -880,7 +880,7 @@ static struct reloc *elf_init_reloc(struct elf *elf, struct section *rsec,
+ set_reloc_addend(elf, reloc, addend);
+
+ elf_hash_add(reloc, &reloc->hash, reloc_hash(reloc));
+- reloc->sym_next_reloc = sym->relocs;
++ set_sym_next_reloc(reloc, sym->relocs);
+ sym->relocs = reloc;
+
+ return reloc;
+@@ -979,7 +979,7 @@ static int read_relocs(struct elf *elf)
+ }
+
+ elf_hash_add(reloc, &reloc->hash, reloc_hash(reloc));
+- reloc->sym_next_reloc = sym->relocs;
++ set_sym_next_reloc(reloc, sym->relocs);
+ sym->relocs = reloc;
+
+ nr_reloc++;
+diff --git a/tools/objtool/include/objtool/arch.h b/tools/objtool/include/objtool/arch.h
+index d63b46a19f3979..089a1acc48a8d0 100644
+--- a/tools/objtool/include/objtool/arch.h
++++ b/tools/objtool/include/objtool/arch.h
+@@ -97,4 +97,7 @@ int arch_rewrite_retpolines(struct objtool_file *file);
+
+ bool arch_pc_relative_reloc(struct reloc *reloc);
+
++unsigned int arch_reloc_size(struct reloc *reloc);
++unsigned long arch_jump_table_sym_offset(struct reloc *reloc, struct reloc *table);
++
+ #endif /* _ARCH_H */
+diff --git a/tools/objtool/include/objtool/elf.h b/tools/objtool/include/objtool/elf.h
+index d7e815c2fd1567..764cba535f22e2 100644
+--- a/tools/objtool/include/objtool/elf.h
++++ b/tools/objtool/include/objtool/elf.h
+@@ -77,7 +77,7 @@ struct reloc {
+ struct elf_hash_node hash;
+ struct section *sec;
+ struct symbol *sym;
+- struct reloc *sym_next_reloc;
++ unsigned long _sym_next_reloc;
+ };
+
+ struct elf {
+@@ -297,6 +297,31 @@ static inline void set_reloc_type(struct elf *elf, struct reloc *reloc, unsigned
+ mark_sec_changed(elf, reloc->sec, true);
+ }
+
++#define RELOC_JUMP_TABLE_BIT 1UL
++
++/* Does reloc mark the beginning of a jump table? */
++static inline bool is_jump_table(struct reloc *reloc)
++{
++ return reloc->_sym_next_reloc & RELOC_JUMP_TABLE_BIT;
++}
++
++static inline void set_jump_table(struct reloc *reloc)
++{
++ reloc->_sym_next_reloc |= RELOC_JUMP_TABLE_BIT;
++}
++
++static inline struct reloc *sym_next_reloc(struct reloc *reloc)
++{
++ return (struct reloc *)(reloc->_sym_next_reloc & ~RELOC_JUMP_TABLE_BIT);
++}
++
++static inline void set_sym_next_reloc(struct reloc *reloc, struct reloc *next)
++{
++ unsigned long bit = reloc->_sym_next_reloc & RELOC_JUMP_TABLE_BIT;
++
++ reloc->_sym_next_reloc = (unsigned long)next | bit;
++}
++
+ #define for_each_sec(file, sec) \
+ list_for_each_entry(sec, &file->elf->sections, list)
+
+diff --git a/tools/perf/Makefile.config b/tools/perf/Makefile.config
+index a148ca9efca912..23dbb6bb91cf85 100644
+--- a/tools/perf/Makefile.config
++++ b/tools/perf/Makefile.config
+@@ -497,13 +497,14 @@ ifeq ($(feature-setns), 1)
+ $(call detected,CONFIG_SETNS)
+ endif
+
++ifeq ($(feature-reallocarray), 0)
++ CFLAGS += -DCOMPAT_NEED_REALLOCARRAY
++endif
++
+ ifdef CORESIGHT
+ $(call feature_check,libopencsd)
+ ifeq ($(feature-libopencsd), 1)
+ CFLAGS += -DHAVE_CSTRACE_SUPPORT $(LIBOPENCSD_CFLAGS)
+- ifeq ($(feature-reallocarray), 0)
+- CFLAGS += -DCOMPAT_NEED_REALLOCARRAY
+- endif
+ LDFLAGS += $(LIBOPENCSD_LDFLAGS)
+ EXTLIBS += $(OPENCSDLIBS)
+ $(call detected,CONFIG_LIBOPENCSD)
+@@ -1103,9 +1104,6 @@ ifndef NO_AUXTRACE
+ ifndef NO_AUXTRACE
+ $(call detected,CONFIG_AUXTRACE)
+ CFLAGS += -DHAVE_AUXTRACE_SUPPORT
+- ifeq ($(feature-reallocarray), 0)
+- CFLAGS += -DCOMPAT_NEED_REALLOCARRAY
+- endif
+ endif
+ endif
+
+diff --git a/tools/perf/Makefile.perf b/tools/perf/Makefile.perf
+index 05c083bb112204..eea8877c7cba38 100644
+--- a/tools/perf/Makefile.perf
++++ b/tools/perf/Makefile.perf
+@@ -158,7 +158,7 @@ ifneq ($(OUTPUT),)
+ VPATH += $(OUTPUT)
+ export VPATH
+ # create symlink to the original source
+-SOURCE := $(shell ln -sf $(srctree)/tools/perf $(OUTPUT)/source)
++SOURCE := $(shell ln -sfn $(srctree)/tools/perf $(OUTPUT)/source)
+ endif
+
+ # Do not use make's built-in rules
+diff --git a/tools/perf/arch/powerpc/util/header.c b/tools/perf/arch/powerpc/util/header.c
+index c7df534dbf8f84..0be74f048f964b 100644
+--- a/tools/perf/arch/powerpc/util/header.c
++++ b/tools/perf/arch/powerpc/util/header.c
+@@ -14,8 +14,8 @@
+
+ static bool is_compat_mode(void)
+ {
+- u64 base_platform = getauxval(AT_BASE_PLATFORM);
+- u64 platform = getauxval(AT_PLATFORM);
++ unsigned long base_platform = getauxval(AT_BASE_PLATFORM);
++ unsigned long platform = getauxval(AT_PLATFORM);
+
+ if (!strcmp((char *)platform, (char *)base_platform))
+ return false;
+diff --git a/tools/perf/arch/x86/util/topdown.c b/tools/perf/arch/x86/util/topdown.c
+index f63747d0abdf9e..d1c65483904969 100644
+--- a/tools/perf/arch/x86/util/topdown.c
++++ b/tools/perf/arch/x86/util/topdown.c
+@@ -81,7 +81,7 @@ bool arch_topdown_sample_read(struct evsel *leader)
+ */
+ evlist__for_each_entry(leader->evlist, evsel) {
+ if (evsel->core.leader != leader->core.leader)
+- return false;
++ continue;
+ if (evsel != leader && arch_is_topdown_metrics(evsel))
+ return true;
+ }
+diff --git a/tools/perf/bench/syscall.c b/tools/perf/bench/syscall.c
+index ea4dfc07cbd6b8..e7dc216f717f5a 100644
+--- a/tools/perf/bench/syscall.c
++++ b/tools/perf/bench/syscall.c
+@@ -22,8 +22,7 @@
+ #define __NR_fork -1
+ #endif
+
+-#define LOOPS_DEFAULT 10000000
+-static int loops = LOOPS_DEFAULT;
++static int loops;
+
+ static const struct option options[] = {
+ OPT_INTEGER('l', "loop", &loops, "Specify number of loops"),
+@@ -80,6 +79,18 @@ static int bench_syscall_common(int argc, const char **argv, int syscall)
+ const char *name = NULL;
+ int i;
+
++ switch (syscall) {
++ case __NR_fork:
++ case __NR_execve:
++ /* Limit default loop to 10000 times to save time */
++ loops = 10000;
++ break;
++ default:
++ loops = 10000000;
++ break;
++ }
++
++ /* Options -l and --loops override default above */
+ argc = parse_options(argc, argv, options, bench_syscall_usage, 0);
+
+ gettimeofday(&start, NULL);
+@@ -94,16 +105,9 @@ static int bench_syscall_common(int argc, const char **argv, int syscall)
+ break;
+ case __NR_fork:
+ test_fork();
+- /* Only loop 10000 times to save time */
+- if (i == 10000)
+- loops = 10000;
+ break;
+ case __NR_execve:
+ test_execve();
+- /* Only loop 10000 times to save time */
+- if (i == 10000)
+- loops = 10000;
+- break;
+ default:
+ break;
+ }
+diff --git a/tools/perf/builtin-report.c b/tools/perf/builtin-report.c
+index f5fbd670d619a3..19175fe9a8b125 100644
+--- a/tools/perf/builtin-report.c
++++ b/tools/perf/builtin-report.c
+@@ -1553,12 +1553,12 @@ int cmd_report(int argc, const char **argv)
+ input_name = "perf.data";
+ }
+
++repeat:
+ data.path = input_name;
+ data.force = symbol_conf.force;
+
+ symbol_conf.skip_empty = report.skip_empty;
+
+-repeat:
+ perf_tool__init(&report.tool, ordered_events);
+ report.tool.sample = process_sample_event;
+ report.tool.mmap = perf_event__process_mmap;
+@@ -1719,22 +1719,24 @@ int cmd_report(int argc, const char **argv)
+ symbol_conf.annotate_data_sample = true;
+ }
+
+- if (sort_order && strstr(sort_order, "ipc")) {
+- parse_options_usage(report_usage, options, "s", 1);
+- goto error;
+- }
+-
+- if (sort_order && strstr(sort_order, "symbol")) {
+- if (sort__mode == SORT_MODE__BRANCH) {
+- snprintf(sort_tmp, sizeof(sort_tmp), "%s,%s",
+- sort_order, "ipc_lbr");
+- report.symbol_ipc = true;
+- } else {
+- snprintf(sort_tmp, sizeof(sort_tmp), "%s,%s",
+- sort_order, "ipc_null");
++ if (last_key != K_SWITCH_INPUT_DATA) {
++ if (sort_order && strstr(sort_order, "ipc")) {
++ parse_options_usage(report_usage, options, "s", 1);
++ goto error;
+ }
+
+- sort_order = sort_tmp;
++ if (sort_order && strstr(sort_order, "symbol")) {
++ if (sort__mode == SORT_MODE__BRANCH) {
++ snprintf(sort_tmp, sizeof(sort_tmp), "%s,%s",
++ sort_order, "ipc_lbr");
++ report.symbol_ipc = true;
++ } else {
++ snprintf(sort_tmp, sizeof(sort_tmp), "%s,%s",
++ sort_order, "ipc_null");
++ }
++
++ sort_order = sort_tmp;
++ }
+ }
+
+ if ((last_key != K_SWITCH_INPUT_DATA && last_key != K_RELOAD) &&
+diff --git a/tools/perf/pmu-events/arch/arm64/ampere/ampereonex/metrics.json b/tools/perf/pmu-events/arch/arm64/ampere/ampereonex/metrics.json
+index c5d1d22bd034b1..5228f94a793f95 100644
+--- a/tools/perf/pmu-events/arch/arm64/ampere/ampereonex/metrics.json
++++ b/tools/perf/pmu-events/arch/arm64/ampere/ampereonex/metrics.json
+@@ -229,19 +229,19 @@
+ },
+ {
+ "MetricName": "slots_lost_misspeculation_fraction",
+- "MetricExpr": "(OP_SPEC - OP_RETIRED) / (CPU_CYCLES * #slots)",
++ "MetricExpr": "100 * (OP_SPEC - OP_RETIRED) / (CPU_CYCLES * #slots)",
+ "BriefDescription": "Fraction of slots lost due to misspeculation",
+ "DefaultMetricgroupName": "TopdownL1",
+ "MetricGroup": "Default;TopdownL1",
+- "ScaleUnit": "100percent of slots"
++ "ScaleUnit": "1percent of slots"
+ },
+ {
+ "MetricName": "retired_fraction",
+- "MetricExpr": "OP_RETIRED / (CPU_CYCLES * #slots)",
++ "MetricExpr": "100 * OP_RETIRED / (CPU_CYCLES * #slots)",
+ "BriefDescription": "Fraction of slots retiring, useful work",
+ "DefaultMetricgroupName": "TopdownL1",
+ "MetricGroup": "Default;TopdownL1",
+- "ScaleUnit": "100percent of slots"
++ "ScaleUnit": "1percent of slots"
+ },
+ {
+ "MetricName": "backend_core",
+@@ -266,7 +266,7 @@
+ },
+ {
+ "MetricName": "frontend_bandwidth",
+- "MetricExpr": "frontend_bound - frontend_latency",
++ "MetricExpr": "frontend_bound - 100 * frontend_latency",
+ "BriefDescription": "Fraction of slots the CPU did not dispatch at full bandwidth - able to dispatch partial slots only (1, 2, or 3 uops)",
+ "MetricGroup": "TopdownL2",
+ "ScaleUnit": "1percent of slots"
+diff --git a/tools/perf/pmu-events/empty-pmu-events.c b/tools/perf/pmu-events/empty-pmu-events.c
+index 1c7a2cfa321fcd..0cb7ba7912e8a7 100644
+--- a/tools/perf/pmu-events/empty-pmu-events.c
++++ b/tools/perf/pmu-events/empty-pmu-events.c
+@@ -422,7 +422,7 @@ int pmu_events_table__for_each_event(const struct pmu_events_table *table,
+ const char *pmu_name = &big_c_string[table_pmu->pmu_name.offset];
+ int ret;
+
+- if (pmu && !pmu__name_match(pmu, pmu_name))
++ if (pmu && !perf_pmu__name_wildcard_match(pmu, pmu_name))
+ continue;
+
+ ret = pmu_events_table__for_each_event_pmu(table, table_pmu, fn, data);
+@@ -443,7 +443,7 @@ int pmu_events_table__find_event(const struct pmu_events_table *table,
+ const char *pmu_name = &big_c_string[table_pmu->pmu_name.offset];
+ int ret;
+
+- if (!pmu__name_match(pmu, pmu_name))
++ if (!perf_pmu__name_wildcard_match(pmu, pmu_name))
+ continue;
+
+ ret = pmu_events_table__find_event_pmu(table, table_pmu, name, fn, data);
+@@ -462,7 +462,7 @@ size_t pmu_events_table__num_events(const struct pmu_events_table *table,
+ const struct pmu_table_entry *table_pmu = &table->pmus[i];
+ const char *pmu_name = &big_c_string[table_pmu->pmu_name.offset];
+
+- if (pmu__name_match(pmu, pmu_name))
++ if (perf_pmu__name_wildcard_match(pmu, pmu_name))
+ count += table_pmu->num_entries;
+ }
+ return count;
+@@ -581,7 +581,7 @@ const struct pmu_events_table *perf_pmu__find_events_table(struct perf_pmu *pmu)
+ const struct pmu_table_entry *table_pmu = &map->event_table.pmus[i];
+ const char *pmu_name = &big_c_string[table_pmu->pmu_name.offset];
+
+- if (pmu__name_match(pmu, pmu_name))
++ if (perf_pmu__name_wildcard_match(pmu, pmu_name))
+ return &map->event_table;
+ }
+ return NULL;
+diff --git a/tools/perf/pmu-events/jevents.py b/tools/perf/pmu-events/jevents.py
+index 3e204700b59afc..7499a35bfadd03 100755
+--- a/tools/perf/pmu-events/jevents.py
++++ b/tools/perf/pmu-events/jevents.py
+@@ -945,7 +945,7 @@ int pmu_events_table__for_each_event(const struct pmu_events_table *table,
+ const char *pmu_name = &big_c_string[table_pmu->pmu_name.offset];
+ int ret;
+
+- if (pmu && !pmu__name_match(pmu, pmu_name))
++ if (pmu && !perf_pmu__name_wildcard_match(pmu, pmu_name))
+ continue;
+
+ ret = pmu_events_table__for_each_event_pmu(table, table_pmu, fn, data);
+@@ -966,7 +966,7 @@ int pmu_events_table__find_event(const struct pmu_events_table *table,
+ const char *pmu_name = &big_c_string[table_pmu->pmu_name.offset];
+ int ret;
+
+- if (!pmu__name_match(pmu, pmu_name))
++ if (!perf_pmu__name_wildcard_match(pmu, pmu_name))
+ continue;
+
+ ret = pmu_events_table__find_event_pmu(table, table_pmu, name, fn, data);
+@@ -985,7 +985,7 @@ size_t pmu_events_table__num_events(const struct pmu_events_table *table,
+ const struct pmu_table_entry *table_pmu = &table->pmus[i];
+ const char *pmu_name = &big_c_string[table_pmu->pmu_name.offset];
+
+- if (pmu__name_match(pmu, pmu_name))
++ if (perf_pmu__name_wildcard_match(pmu, pmu_name))
+ count += table_pmu->num_entries;
+ }
+ return count;
+@@ -1104,7 +1104,7 @@ const struct pmu_events_table *perf_pmu__find_events_table(struct perf_pmu *pmu)
+ const struct pmu_table_entry *table_pmu = &map->event_table.pmus[i];
+ const char *pmu_name = &big_c_string[table_pmu->pmu_name.offset];
+
+- if (pmu__name_match(pmu, pmu_name))
++ if (perf_pmu__name_wildcard_match(pmu, pmu_name))
+ return &map->event_table;
+ }
+ return NULL;
+diff --git a/tools/perf/tests/hwmon_pmu.c b/tools/perf/tests/hwmon_pmu.c
+index d2b066a2b557aa..0837aca1cdfa7c 100644
+--- a/tools/perf/tests/hwmon_pmu.c
++++ b/tools/perf/tests/hwmon_pmu.c
+@@ -13,17 +13,23 @@
+ static const struct test_event {
+ const char *name;
+ const char *alias;
+- long config;
++ union hwmon_pmu_event_key key;
+ } test_events[] = {
+ {
+ "temp_test_hwmon_event1",
+ "temp1",
+- 0xA0001,
++ .key = {
++ .num = 1,
++ .type = 10
++ },
+ },
+ {
+ "temp_test_hwmon_event2",
+ "temp2",
+- 0xA0002,
++ .key = {
++ .num = 2,
++ .type = 10
++ },
+ },
+ };
+
+@@ -183,11 +189,11 @@ static int do_test(size_t i, bool with_pmu, bool with_alias)
+ strcmp(evsel->pmu->name, "hwmon_a_test_hwmon_pmu"))
+ continue;
+
+- if (evsel->core.attr.config != (u64)test_events[i].config) {
++ if (evsel->core.attr.config != (u64)test_events[i].key.type_and_num) {
+ pr_debug("FAILED %s:%d Unexpected config for '%s', %lld != %ld\n",
+ __FILE__, __LINE__, str,
+ evsel->core.attr.config,
+- test_events[i].config);
++ test_events[i].key.type_and_num);
+ ret = TEST_FAIL;
+ goto out;
+ }
+diff --git a/tools/perf/tests/pmu.c b/tools/perf/tests/pmu.c
+index 6a681e3fb552d6..4a9f8e090cf4b3 100644
+--- a/tools/perf/tests/pmu.c
++++ b/tools/perf/tests/pmu.c
+@@ -452,9 +452,9 @@ static int test__name_cmp(struct test_suite *test __maybe_unused, int subtest __
+ }
+
+ /**
+- * Test perf_pmu__match() that's used to search for a PMU given a name passed
++ * Test perf_pmu__wildcard_match() that's used to search for a PMU given a name passed
+ * on the command line. The name that's passed may also be a filename type glob
+- * match. If the name does not match, perf_pmu__match() attempts to match the
++ * match. If the name does not match, perf_pmu__wildcard_match() attempts to match the
+ * alias of the PMU, if provided.
+ */
+ static int test__pmu_match(struct test_suite *test __maybe_unused, int subtest __maybe_unused)
+@@ -463,41 +463,44 @@ static int test__pmu_match(struct test_suite *test __maybe_unused, int subtest _
+ .name = "pmuname",
+ };
+
+- TEST_ASSERT_EQUAL("Exact match", perf_pmu__match(&test_pmu, "pmuname"), true);
+- TEST_ASSERT_EQUAL("Longer token", perf_pmu__match(&test_pmu, "longertoken"), false);
+- TEST_ASSERT_EQUAL("Shorter token", perf_pmu__match(&test_pmu, "pmu"), false);
++#define TEST_PMU_MATCH(msg, to_match, expect) \
++ TEST_ASSERT_EQUAL(msg, perf_pmu__wildcard_match(&test_pmu, to_match), expect)
++
++ TEST_PMU_MATCH("Exact match", "pmuname", true);
++ TEST_PMU_MATCH("Longer token", "longertoken", false);
++ TEST_PMU_MATCH("Shorter token", "pmu", false);
+
+ test_pmu.name = "pmuname_10";
+- TEST_ASSERT_EQUAL("Diff suffix_", perf_pmu__match(&test_pmu, "pmuname_2"), false);
+- TEST_ASSERT_EQUAL("Sub suffix_", perf_pmu__match(&test_pmu, "pmuname_1"), true);
+- TEST_ASSERT_EQUAL("Same suffix_", perf_pmu__match(&test_pmu, "pmuname_10"), true);
+- TEST_ASSERT_EQUAL("No suffix_", perf_pmu__match(&test_pmu, "pmuname"), true);
+- TEST_ASSERT_EQUAL("Underscore_", perf_pmu__match(&test_pmu, "pmuname_"), true);
+- TEST_ASSERT_EQUAL("Substring_", perf_pmu__match(&test_pmu, "pmuna"), false);
++ TEST_PMU_MATCH("Diff suffix_", "pmuname_2", false);
++ TEST_PMU_MATCH("Sub suffix_", "pmuname_1", true);
++ TEST_PMU_MATCH("Same suffix_", "pmuname_10", true);
++ TEST_PMU_MATCH("No suffix_", "pmuname", true);
++ TEST_PMU_MATCH("Underscore_", "pmuname_", true);
++ TEST_PMU_MATCH("Substring_", "pmuna", false);
+
+ test_pmu.name = "pmuname_ab23";
+- TEST_ASSERT_EQUAL("Diff suffix hex_", perf_pmu__match(&test_pmu, "pmuname_2"), false);
+- TEST_ASSERT_EQUAL("Sub suffix hex_", perf_pmu__match(&test_pmu, "pmuname_ab"), true);
+- TEST_ASSERT_EQUAL("Same suffix hex_", perf_pmu__match(&test_pmu, "pmuname_ab23"), true);
+- TEST_ASSERT_EQUAL("No suffix hex_", perf_pmu__match(&test_pmu, "pmuname"), true);
+- TEST_ASSERT_EQUAL("Underscore hex_", perf_pmu__match(&test_pmu, "pmuname_"), true);
+- TEST_ASSERT_EQUAL("Substring hex_", perf_pmu__match(&test_pmu, "pmuna"), false);
++ TEST_PMU_MATCH("Diff suffix hex_", "pmuname_2", false);
++ TEST_PMU_MATCH("Sub suffix hex_", "pmuname_ab", true);
++ TEST_PMU_MATCH("Same suffix hex_", "pmuname_ab23", true);
++ TEST_PMU_MATCH("No suffix hex_", "pmuname", true);
++ TEST_PMU_MATCH("Underscore hex_", "pmuname_", true);
++ TEST_PMU_MATCH("Substring hex_", "pmuna", false);
+
+ test_pmu.name = "pmuname10";
+- TEST_ASSERT_EQUAL("Diff suffix", perf_pmu__match(&test_pmu, "pmuname2"), false);
+- TEST_ASSERT_EQUAL("Sub suffix", perf_pmu__match(&test_pmu, "pmuname1"), true);
+- TEST_ASSERT_EQUAL("Same suffix", perf_pmu__match(&test_pmu, "pmuname10"), true);
+- TEST_ASSERT_EQUAL("No suffix", perf_pmu__match(&test_pmu, "pmuname"), true);
+- TEST_ASSERT_EQUAL("Underscore", perf_pmu__match(&test_pmu, "pmuname_"), false);
+- TEST_ASSERT_EQUAL("Substring", perf_pmu__match(&test_pmu, "pmuna"), false);
++ TEST_PMU_MATCH("Diff suffix", "pmuname2", false);
++ TEST_PMU_MATCH("Sub suffix", "pmuname1", true);
++ TEST_PMU_MATCH("Same suffix", "pmuname10", true);
++ TEST_PMU_MATCH("No suffix", "pmuname", true);
++ TEST_PMU_MATCH("Underscore", "pmuname_", false);
++ TEST_PMU_MATCH("Substring", "pmuna", false);
+
+ test_pmu.name = "pmunameab23";
+- TEST_ASSERT_EQUAL("Diff suffix hex", perf_pmu__match(&test_pmu, "pmuname2"), false);
+- TEST_ASSERT_EQUAL("Sub suffix hex", perf_pmu__match(&test_pmu, "pmunameab"), true);
+- TEST_ASSERT_EQUAL("Same suffix hex", perf_pmu__match(&test_pmu, "pmunameab23"), true);
+- TEST_ASSERT_EQUAL("No suffix hex", perf_pmu__match(&test_pmu, "pmuname"), true);
+- TEST_ASSERT_EQUAL("Underscore hex", perf_pmu__match(&test_pmu, "pmuname_"), false);
+- TEST_ASSERT_EQUAL("Substring hex", perf_pmu__match(&test_pmu, "pmuna"), false);
++ TEST_PMU_MATCH("Diff suffix hex", "pmuname2", false);
++ TEST_PMU_MATCH("Sub suffix hex", "pmunameab", true);
++ TEST_PMU_MATCH("Same suffix hex", "pmunameab23", true);
++ TEST_PMU_MATCH("No suffix hex", "pmuname", true);
++ TEST_PMU_MATCH("Underscore hex", "pmuname_", false);
++ TEST_PMU_MATCH("Substring hex", "pmuna", false);
+
+ /*
+ * 2 hex chars or less are not considered suffixes so it shouldn't be
+@@ -505,7 +508,7 @@ static int test__pmu_match(struct test_suite *test __maybe_unused, int subtest _
+ * false results here than above.
+ */
+ test_pmu.name = "pmuname_a3";
+- TEST_ASSERT_EQUAL("Diff suffix 2 hex_", perf_pmu__match(&test_pmu, "pmuname_2"), false);
++ TEST_PMU_MATCH("Diff suffix 2 hex_", "pmuname_2", false);
+ /*
+ * This one should be false, but because pmuname_a3 ends in 3 which is
+ * decimal, it's not possible to determine if it's a short hex suffix or
+@@ -513,19 +516,19 @@ static int test__pmu_match(struct test_suite *test __maybe_unused, int subtest _
+ * length of decimal suffix. Run the test anyway and expect the wrong
+ * result. And slightly fuzzy matching shouldn't do too much harm.
+ */
+- TEST_ASSERT_EQUAL("Sub suffix 2 hex_", perf_pmu__match(&test_pmu, "pmuname_a"), true);
+- TEST_ASSERT_EQUAL("Same suffix 2 hex_", perf_pmu__match(&test_pmu, "pmuname_a3"), true);
+- TEST_ASSERT_EQUAL("No suffix 2 hex_", perf_pmu__match(&test_pmu, "pmuname"), false);
+- TEST_ASSERT_EQUAL("Underscore 2 hex_", perf_pmu__match(&test_pmu, "pmuname_"), false);
+- TEST_ASSERT_EQUAL("Substring 2 hex_", perf_pmu__match(&test_pmu, "pmuna"), false);
++ TEST_PMU_MATCH("Sub suffix 2 hex_", "pmuname_a", true);
++ TEST_PMU_MATCH("Same suffix 2 hex_", "pmuname_a3", true);
++ TEST_PMU_MATCH("No suffix 2 hex_", "pmuname", false);
++ TEST_PMU_MATCH("Underscore 2 hex_", "pmuname_", false);
++ TEST_PMU_MATCH("Substring 2 hex_", "pmuna", false);
+
+ test_pmu.name = "pmuname_5";
+- TEST_ASSERT_EQUAL("Glob 1", perf_pmu__match(&test_pmu, "pmu*"), true);
+- TEST_ASSERT_EQUAL("Glob 2", perf_pmu__match(&test_pmu, "nomatch*"), false);
+- TEST_ASSERT_EQUAL("Seq 1", perf_pmu__match(&test_pmu, "pmuname_[12345]"), true);
+- TEST_ASSERT_EQUAL("Seq 2", perf_pmu__match(&test_pmu, "pmuname_[67890]"), false);
+- TEST_ASSERT_EQUAL("? 1", perf_pmu__match(&test_pmu, "pmuname_?"), true);
+- TEST_ASSERT_EQUAL("? 2", perf_pmu__match(&test_pmu, "pmuname_1?"), false);
++ TEST_PMU_MATCH("Glob 1", "pmu*", true);
++ TEST_PMU_MATCH("Glob 2", "nomatch*", false);
++ TEST_PMU_MATCH("Seq 1", "pmuname_[12345]", true);
++ TEST_PMU_MATCH("Seq 2", "pmuname_[67890]", false);
++ TEST_PMU_MATCH("? 1", "pmuname_?", true);
++ TEST_PMU_MATCH("? 2", "pmuname_1?", false);
+
+ return TEST_OK;
+ }
+diff --git a/tools/perf/tests/shell/coresight/asm_pure_loop/asm_pure_loop.S b/tools/perf/tests/shell/coresight/asm_pure_loop/asm_pure_loop.S
+index 75cf084a927d3d..5777600467723f 100644
+--- a/tools/perf/tests/shell/coresight/asm_pure_loop/asm_pure_loop.S
++++ b/tools/perf/tests/shell/coresight/asm_pure_loop/asm_pure_loop.S
+@@ -26,3 +26,5 @@ skip:
+ mov x0, #0
+ mov x8, #93 // __NR_exit syscall
+ svc #0
++
++.section .note.GNU-stack, "", @progbits
+diff --git a/tools/perf/tests/shell/record_bpf_filter.sh b/tools/perf/tests/shell/record_bpf_filter.sh
+index 1b58ccc1fd882d..4d6c3c1b7fb925 100755
+--- a/tools/perf/tests/shell/record_bpf_filter.sh
++++ b/tools/perf/tests/shell/record_bpf_filter.sh
+@@ -89,7 +89,7 @@ test_bpf_filter_fail() {
+ test_bpf_filter_group() {
+ echo "Group bpf-filter test"
+
+- if ! perf record -e task-clock --filter 'period > 1000 || ip > 0' \
++ if ! perf record -e task-clock --filter 'period > 1000, ip > 0' \
+ -o /dev/null true 2>/dev/null
+ then
+ echo "Group bpf-filter test [Failed should succeed]"
+@@ -97,7 +97,7 @@ test_bpf_filter_group() {
+ return
+ fi
+
+- if ! perf record -e task-clock --filter 'cpu > 0 || ip > 0' \
++ if ! perf record -e task-clock --filter 'period > 1000 , cpu > 0 || ip > 0' \
+ -o /dev/null true 2>&1 | grep -q PERF_SAMPLE_CPU
+ then
+ echo "Group bpf-filter test [Failed forbidden CPU]"
+diff --git a/tools/perf/tests/shell/stat_all_pmu.sh b/tools/perf/tests/shell/stat_all_pmu.sh
+index 8b148b300be113..9c466c0efa857f 100755
+--- a/tools/perf/tests/shell/stat_all_pmu.sh
++++ b/tools/perf/tests/shell/stat_all_pmu.sh
+@@ -2,7 +2,6 @@
+ # perf all PMU test (exclusive)
+ # SPDX-License-Identifier: GPL-2.0
+
+-set -e
+ err=0
+ result=""
+
+@@ -16,34 +15,55 @@ trap trap_cleanup EXIT TERM INT
+ # Test all PMU events; however exclude parameterized ones (name contains '?')
+ for p in $(perf list --raw-dump pmu | sed 's/[[:graph:]]\+?[[:graph:]]\+[[:space:]]//g')
+ do
+- echo "Testing $p"
+- result=$(perf stat -e "$p" true 2>&1)
+- if echo "$result" | grep -q "$p"
++ echo -n "Testing $p -- "
++ output=$(perf stat -e "$p" true 2>&1)
++ stat_result=$?
++ if echo "$output" | grep -q "$p"
+ then
+ # Event seen in output.
+- continue
+- fi
+- if echo "$result" | grep -q "<not supported>"
+- then
+- # Event not supported, so ignore.
+- continue
++ if [ $stat_result -eq 0 ] && ! echo "$output" | grep -q "<not supported>"
++ then
++ # Event supported.
++ echo "supported"
++ continue
++ elif echo "$output" | grep -q "<not supported>"
++ then
++ # Event not supported, so ignore.
++ echo "not supported"
++ continue
++ elif echo "$output" | grep -q "No permission to enable"
++ then
++ # No permissions, so ignore.
++ echo "no permission to enable"
++ continue
++ elif echo "$output" | grep -q "Bad event name"
++ then
++ # Non-existent event.
++ echo "Error: Bad event name"
++ echo "$output"
++ err=1
++ continue
++ fi
+ fi
+- if echo "$result" | grep -q "Access to performance monitoring and observability operations is limited."
++
++ if echo "$output" | grep -q "Access to performance monitoring and observability operations is limited."
+ then
+ # Access is limited, so ignore.
++ echo "access limited"
+ continue
+ fi
+
+ # We failed to see the event and it is supported. Possibly the workload was
+ # too small so retry with something longer.
+- result=$(perf stat -e "$p" perf bench internals synthesize 2>&1)
+- if echo "$result" | grep -q "$p"
++ output=$(perf stat -e "$p" perf bench internals synthesize 2>&1)
++ if echo "$output" | grep -q "$p"
+ then
+ # Event seen in output.
++ echo "supported"
+ continue
+ fi
+ echo "Error: event '$p' not printed in:"
+- echo "$result"
++ echo "$output"
+ err=1
+ done
+
+diff --git a/tools/perf/tests/shell/test_data_symbol.sh b/tools/perf/tests/shell/test_data_symbol.sh
+index c86da02350596b..7da606db97cb46 100755
+--- a/tools/perf/tests/shell/test_data_symbol.sh
++++ b/tools/perf/tests/shell/test_data_symbol.sh
+@@ -18,7 +18,7 @@ skip_if_no_mem_event() {
+
+ skip_if_no_mem_event || exit 2
+
+-skip_test_missing_symbol buf1
++skip_test_missing_symbol workload_datasym_buf1
+
+ TEST_PROGRAM="perf test -w datasym"
+ PERF_DATA=$(mktemp /tmp/__perf_test.perf.data.XXXXX)
+@@ -26,18 +26,19 @@ ERR_FILE=$(mktemp /tmp/__perf_test.stderr.XXXXX)
+
+ check_result() {
+ # The memory report format is as below:
+- # 99.92% ... [.] buf1+0x38
++ # 99.92% ... [.] workload_datasym_buf1+0x38
+ result=$(perf mem report -i ${PERF_DATA} -s symbol_daddr -q 2>&1 |
+- awk '/buf1/ { print $4 }')
++ awk '/workload_datasym_buf1/ { print $4 }')
+
+- # Testing is failed if has no any sample for "buf1"
++ # Testing is failed if has no any sample for "workload_datasym_buf1"
+ [ -z "$result" ] && return 1
+
+ while IFS= read -r line; do
+- # The "data1" and "data2" fields in structure "buf1" have
+- # offset "0x0" and "0x38", returns failure if detect any
+- # other offset value.
+- if [ "$line" != "buf1+0x0" ] && [ "$line" != "buf1+0x38" ]; then
++ # The "data1" and "data2" fields in structure
++ # "workload_datasym_buf1" have offset "0x0" and "0x38", returns
++ # failure if detect any other offset value.
++ if [ "$line" != "workload_datasym_buf1+0x0" ] && \
++ [ "$line" != "workload_datasym_buf1+0x38" ]; then
+ return 1
+ fi
+ done <<< "$result"
+diff --git a/tools/perf/tests/tool_pmu.c b/tools/perf/tests/tool_pmu.c
+index 187942b749b7c9..1e900ef92e3780 100644
+--- a/tools/perf/tests/tool_pmu.c
++++ b/tools/perf/tests/tool_pmu.c
+@@ -27,7 +27,7 @@ static int do_test(enum tool_pmu_event ev, bool with_pmu)
+ parse_events_error__init(&err);
+ ret = parse_events(evlist, str, &err);
+ if (ret) {
+- if (tool_pmu__skip_event(tool_pmu__event_to_str(ev))) {
++ if (!tool_pmu__event_to_str(ev)) {
+ ret = TEST_OK;
+ goto out;
+ }
+@@ -59,7 +59,7 @@ static int do_test(enum tool_pmu_event ev, bool with_pmu)
+ }
+ }
+
+- if (!found && !tool_pmu__skip_event(tool_pmu__event_to_str(ev))) {
++ if (!found && tool_pmu__event_to_str(ev)) {
+ pr_debug("FAILED %s:%d Didn't find tool event '%s' in parsed evsels\n",
+ __FILE__, __LINE__, str);
+ ret = TEST_FAIL;
+diff --git a/tools/perf/tests/workloads/datasym.c b/tools/perf/tests/workloads/datasym.c
+index 8e08fc75a973e5..1d0b7d64e1ba1a 100644
+--- a/tools/perf/tests/workloads/datasym.c
++++ b/tools/perf/tests/workloads/datasym.c
+@@ -1,3 +1,6 @@
++#include <stdlib.h>
++#include <signal.h>
++#include <unistd.h>
+ #include <linux/compiler.h>
+ #include "../tests.h"
+
+@@ -7,16 +10,33 @@ typedef struct _buf {
+ char data2;
+ } buf __attribute__((aligned(64)));
+
+-static buf buf1 = {
++/* volatile to try to avoid the compiler seeing reserved as unused. */
++static volatile buf workload_datasym_buf1 = {
+ /* to have this in the data section */
+ .reserved[0] = 1,
+ };
+
+-static int datasym(int argc __maybe_unused, const char **argv __maybe_unused)
++static volatile sig_atomic_t done;
++
++static void sighandler(int sig __maybe_unused)
++{
++ done = 1;
++}
++
++static int datasym(int argc, const char **argv)
+ {
+- for (;;) {
+- buf1.data1++;
+- if (buf1.data1 == 123) {
++ int sec = 1;
++
++ if (argc > 0)
++ sec = atoi(argv[0]);
++
++ signal(SIGINT, sighandler);
++ signal(SIGALRM, sighandler);
++ alarm(sec);
++
++ while (!done) {
++ workload_datasym_buf1.data1++;
++ if (workload_datasym_buf1.data1 == 123) {
+ /*
+ * Add some 'noise' in the loop to work around errata
+ * 1694299 on Arm N1.
+@@ -30,9 +50,9 @@ static int datasym(int argc __maybe_unused, const char **argv __maybe_unused)
+ * longer a continuous repeating pattern that interacts
+ * badly with the bias.
+ */
+- buf1.data1++;
++ workload_datasym_buf1.data1++;
+ }
+- buf1.data2 += buf1.data1;
++ workload_datasym_buf1.data2 += workload_datasym_buf1.data1;
+ }
+ return 0;
+ }
+diff --git a/tools/perf/util/arm-spe.c b/tools/perf/util/arm-spe.c
+index 12761c39788f80..f1365ce69ba088 100644
+--- a/tools/perf/util/arm-spe.c
++++ b/tools/perf/util/arm-spe.c
+@@ -37,6 +37,8 @@
+ #include "../../arch/arm64/include/asm/cputype.h"
+ #define MAX_TIMESTAMP (~0ULL)
+
++#define is_ldst_op(op) (!!((op) & ARM_SPE_OP_LDST))
++
+ struct arm_spe {
+ struct auxtrace auxtrace;
+ struct auxtrace_queues queues;
+@@ -669,6 +671,10 @@ static u64 arm_spe__synth_data_source(struct arm_spe_queue *speq,
+ {
+ union perf_mem_data_src data_src = { .mem_op = PERF_MEM_OP_NA };
+
++ /* Only synthesize data source for LDST operations */
++ if (!is_ldst_op(record->op))
++ return 0;
++
+ if (record->op & ARM_SPE_OP_LD)
+ data_src.mem_op = PERF_MEM_OP_LOAD;
+ else if (record->op & ARM_SPE_OP_ST)
+@@ -767,7 +773,7 @@ static int arm_spe_sample(struct arm_spe_queue *speq)
+ * When data_src is zero it means the record is not a memory operation,
+ * skip to synthesize memory sample for this case.
+ */
+- if (spe->sample_memory && data_src) {
++ if (spe->sample_memory && is_ldst_op(record->op)) {
+ err = arm_spe__synth_mem_sample(speq, spe->memory_id, data_src);
+ if (err)
+ return err;
+diff --git a/tools/perf/util/bpf-filter.l b/tools/perf/util/bpf-filter.l
+index f313404f95a90d..6aa65ade33851b 100644
+--- a/tools/perf/util/bpf-filter.l
++++ b/tools/perf/util/bpf-filter.l
+@@ -76,7 +76,7 @@ static int path_or_error(void)
+ num_dec [0-9]+
+ num_hex 0[Xx][0-9a-fA-F]+
+ space [ \t]+
+-path [^ \t\n]+
++path [^ \t\n,]+
+ ident [_a-zA-Z][_a-zA-Z0-9]+
+
+ %%
+diff --git a/tools/perf/util/comm.c b/tools/perf/util/comm.c
+index 49b79cf0c5cc51..8aa456d7c2cd2d 100644
+--- a/tools/perf/util/comm.c
++++ b/tools/perf/util/comm.c
+@@ -5,6 +5,8 @@
+ #include <internal/rc_check.h>
+ #include <linux/refcount.h>
+ #include <linux/zalloc.h>
++#include <tools/libc_compat.h> // reallocarray
++
+ #include "rwsem.h"
+
+ DECLARE_RC_STRUCT(comm_str) {
+diff --git a/tools/perf/util/debug.c b/tools/perf/util/debug.c
+index 995f6bb05b5f88..f9ef7d045c92e7 100644
+--- a/tools/perf/util/debug.c
++++ b/tools/perf/util/debug.c
+@@ -46,8 +46,8 @@ int debug_type_profile;
+ FILE *debug_file(void)
+ {
+ if (!_debug_file) {
+- pr_warning_once("debug_file not set");
+ debug_set_file(stderr);
++ pr_warning_once("debug_file not set");
+ }
+ return _debug_file;
+ }
+diff --git a/tools/perf/util/dso.h b/tools/perf/util/dso.h
+index bb8e8f444054d8..c0472a41147c3c 100644
+--- a/tools/perf/util/dso.h
++++ b/tools/perf/util/dso.h
+@@ -808,7 +808,9 @@ static inline bool dso__is_kcore(const struct dso *dso)
+
+ static inline bool dso__is_kallsyms(const struct dso *dso)
+ {
+- return RC_CHK_ACCESS(dso)->kernel && RC_CHK_ACCESS(dso)->long_name[0] != '/';
++ enum dso_binary_type bt = dso__binary_type(dso);
++
++ return bt == DSO_BINARY_TYPE__KALLSYMS || bt == DSO_BINARY_TYPE__GUEST_KALLSYMS;
+ }
+
+ bool dso__is_object_file(const struct dso *dso);
+diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
+index f0dd174e2debdb..633df7d9204c2e 100644
+--- a/tools/perf/util/evlist.c
++++ b/tools/perf/util/evlist.c
+@@ -1373,19 +1373,18 @@ static int evlist__create_syswide_maps(struct evlist *evlist)
+ */
+ cpus = perf_cpu_map__new_online_cpus();
+ if (!cpus)
+- goto out;
++ return -ENOMEM;
+
+ threads = perf_thread_map__new_dummy();
+- if (!threads)
+- goto out_put;
++ if (!threads) {
++ perf_cpu_map__put(cpus);
++ return -ENOMEM;
++ }
+
+ perf_evlist__set_maps(&evlist->core, cpus, threads);
+-
+ perf_thread_map__put(threads);
+-out_put:
+ perf_cpu_map__put(cpus);
+-out:
+- return -ENOMEM;
++ return 0;
+ }
+
+ int evlist__open(struct evlist *evlist)
+diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
+index bc144388f89298..9cd78cdee6282f 100644
+--- a/tools/perf/util/evsel.c
++++ b/tools/perf/util/evsel.c
+@@ -511,6 +511,16 @@ struct evsel *evsel__clone(struct evsel *dest, struct evsel *orig)
+ }
+ evsel->cgrp = cgroup__get(orig->cgrp);
+ #ifdef HAVE_LIBTRACEEVENT
++ if (orig->tp_sys) {
++ evsel->tp_sys = strdup(orig->tp_sys);
++ if (evsel->tp_sys == NULL)
++ goto out_err;
++ }
++ if (orig->tp_name) {
++ evsel->tp_name = strdup(orig->tp_name);
++ if (evsel->tp_name == NULL)
++ goto out_err;
++ }
+ evsel->tp_format = orig->tp_format;
+ #endif
+ evsel->handler = orig->handler;
+@@ -634,7 +644,11 @@ struct tep_event *evsel__tp_format(struct evsel *evsel)
+ if (evsel->core.attr.type != PERF_TYPE_TRACEPOINT)
+ return NULL;
+
+- tp_format = trace_event__tp_format(evsel->tp_sys, evsel->tp_name);
++ if (!evsel->tp_sys)
++ tp_format = trace_event__tp_format_id(evsel->core.attr.config);
++ else
++ tp_format = trace_event__tp_format(evsel->tp_sys, evsel->tp_name);
++
+ if (IS_ERR(tp_format)) {
+ int err = -PTR_ERR(evsel->tp_format);
+
+diff --git a/tools/perf/util/expr.c b/tools/perf/util/expr.c
+index c221dcce666609..6413537442aa80 100644
+--- a/tools/perf/util/expr.c
++++ b/tools/perf/util/expr.c
+@@ -215,6 +215,8 @@ int expr__add_ref(struct expr_parse_ctx *ctx, struct metric_ref *ref)
+ int expr__get_id(struct expr_parse_ctx *ctx, const char *id,
+ struct expr_id_data **data)
+ {
++ if (!ctx || !id)
++ return -1;
+ return hashmap__find(ctx->ids, id, data) ? 0 : -1;
+ }
+
+diff --git a/tools/perf/util/hwmon_pmu.c b/tools/perf/util/hwmon_pmu.c
+index 4acb9bb19b8464..acd889b2462f61 100644
+--- a/tools/perf/util/hwmon_pmu.c
++++ b/tools/perf/util/hwmon_pmu.c
+@@ -107,20 +107,6 @@ struct hwmon_pmu {
+ int hwmon_dir_fd;
+ };
+
+-/**
+- * union hwmon_pmu_event_key: Key for hwmon_pmu->events as such each key
+- * represents an event.
+- *
+- * Related hwmon files start <type><number> that this key represents.
+- */
+-union hwmon_pmu_event_key {
+- long type_and_num;
+- struct {
+- int num :16;
+- enum hwmon_type type :8;
+- };
+-};
+-
+ /**
+ * struct hwmon_pmu_event_value: Value in hwmon_pmu->events.
+ *
+diff --git a/tools/perf/util/hwmon_pmu.h b/tools/perf/util/hwmon_pmu.h
+index 882566846df46c..b3329774d2b22d 100644
+--- a/tools/perf/util/hwmon_pmu.h
++++ b/tools/perf/util/hwmon_pmu.h
+@@ -91,6 +91,22 @@ enum hwmon_item {
+ HWMON_ITEM__MAX,
+ };
+
++/**
++ * union hwmon_pmu_event_key: Key for hwmon_pmu->events as such each key
++ * represents an event.
++ * union is exposed for testing to ensure problems are avoided on big
++ * endian machines.
++ *
++ * Related hwmon files start <type><number> that this key represents.
++ */
++union hwmon_pmu_event_key {
++ long type_and_num;
++ struct {
++ int num :16;
++ enum hwmon_type type :8;
++ };
++};
++
+ bool perf_pmu__is_hwmon(const struct perf_pmu *pmu);
+ bool evsel__is_hwmon(const struct evsel *evsel);
+
+diff --git a/tools/perf/util/intel-tpebs.c b/tools/perf/util/intel-tpebs.c
+index 50a3c3e0716065..2c421b475b3b8b 100644
+--- a/tools/perf/util/intel-tpebs.c
++++ b/tools/perf/util/intel-tpebs.c
+@@ -254,7 +254,7 @@ int tpebs_start(struct evlist *evsel_list)
+ new = zalloc(sizeof(*new));
+ if (!new) {
+ ret = -1;
+- zfree(name);
++ zfree(&name);
+ goto err;
+ }
+ new->name = name;
+diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
+index 2d51badfbf2e2d..9c7bf17bcbe86b 100644
+--- a/tools/perf/util/machine.c
++++ b/tools/perf/util/machine.c
+@@ -1468,8 +1468,6 @@ static int machine__create_modules(struct machine *machine)
+ if (modules__parse(modules, machine, machine__create_module))
+ return -1;
+
+- maps__fixup_end(machine__kernel_maps(machine));
+-
+ if (!machine__set_modules_path(machine))
+ return 0;
+
+@@ -1563,6 +1561,8 @@ int machine__create_kernel_maps(struct machine *machine)
+ }
+ }
+
++ maps__fixup_end(machine__kernel_maps(machine));
++
+ out_put:
+ dso__put(kernel);
+ return ret;
+diff --git a/tools/perf/util/parse-events.c b/tools/perf/util/parse-events.c
+index 1e23faa364b11f..6c36b98875bc91 100644
+--- a/tools/perf/util/parse-events.c
++++ b/tools/perf/util/parse-events.c
+@@ -1660,7 +1660,7 @@ int parse_events_multi_pmu_add_or_add_pmu(struct parse_events_state *parse_state
+ /* Failed to add, try wildcard expansion of event_or_pmu as a PMU name. */
+ while ((pmu = perf_pmus__scan(pmu)) != NULL) {
+ if (!parse_events__filter_pmu(parse_state, pmu) &&
+- perf_pmu__match(pmu, event_or_pmu)) {
++ perf_pmu__wildcard_match(pmu, event_or_pmu)) {
+ bool auto_merge_stats = perf_pmu__auto_merge_stats(pmu);
+
+ if (!parse_events_add_pmu(parse_state, *listp, pmu,
+diff --git a/tools/perf/util/pmu.c b/tools/perf/util/pmu.c
+index a8193ac8f2e7d0..72aa6167c090e2 100644
+--- a/tools/perf/util/pmu.c
++++ b/tools/perf/util/pmu.c
+@@ -596,7 +596,7 @@ static int perf_pmu__new_alias(struct perf_pmu *pmu, const char *name,
+ };
+ if (pmu_events_table__find_event(pmu->events_table, pmu, name,
+ update_alias, &data) == 0)
+- pmu->cpu_json_aliases++;
++ pmu->cpu_common_json_aliases++;
+ }
+ pmu->sysfs_aliases++;
+ break;
+@@ -847,21 +847,23 @@ static size_t pmu_deduped_name_len(const struct perf_pmu *pmu, const char *name,
+ }
+
+ /**
+- * perf_pmu__match_ignoring_suffix - Does the pmu_name match tok ignoring any
+- * trailing suffix? The Suffix must be in form
+- * tok_{digits}, or tok{digits}.
++ * perf_pmu__match_wildcard - Does the pmu_name start with tok and is then only
++ * followed by nothing or a suffix? tok may contain
++ * part of a suffix.
+ * @pmu_name: The pmu_name with possible suffix.
+- * @tok: The possible match to pmu_name without suffix.
++ * @tok: The wildcard argument to match.
+ */
+-static bool perf_pmu__match_ignoring_suffix(const char *pmu_name, const char *tok)
++static bool perf_pmu__match_wildcard(const char *pmu_name, const char *tok)
+ {
+ const char *p, *suffix;
+ bool has_hex = false;
++ size_t tok_len = strlen(tok);
+
+- if (strncmp(pmu_name, tok, strlen(tok)))
++ /* Check start of pmu_name for equality. */
++ if (strncmp(pmu_name, tok, tok_len))
+ return false;
+
+- suffix = p = pmu_name + strlen(tok);
++ suffix = p = pmu_name + tok_len;
+ if (*p == 0)
+ return true;
+
+@@ -887,60 +889,84 @@ static bool perf_pmu__match_ignoring_suffix(const char *pmu_name, const char *to
+ }
+
+ /**
+- * pmu_uncore_alias_match - does name match the PMU name?
+- * @pmu_name: the json struct pmu_event name. This may lack a suffix (which
++ * perf_pmu__match_ignoring_suffix_uncore - Does the pmu_name match tok ignoring
++ * any trailing suffix on pmu_name and
++ * tok? The Suffix must be in form
++ * tok_{digits}, or tok{digits}.
++ * @pmu_name: The pmu_name with possible suffix.
++ * @tok: The possible match to pmu_name.
++ */
++static bool perf_pmu__match_ignoring_suffix_uncore(const char *pmu_name, const char *tok)
++{
++ size_t pmu_name_len, tok_len;
++
++ /* For robustness, check for NULL. */
++ if (pmu_name == NULL)
++ return tok == NULL;
++
++ /* uncore_ prefixes are ignored. */
++ if (!strncmp(pmu_name, "uncore_", 7))
++ pmu_name += 7;
++ if (!strncmp(tok, "uncore_", 7))
++ tok += 7;
++
++ pmu_name_len = pmu_name_len_no_suffix(pmu_name);
++ tok_len = pmu_name_len_no_suffix(tok);
++ if (pmu_name_len != tok_len)
++ return false;
++
++ return strncmp(pmu_name, tok, pmu_name_len) == 0;
++}
++
++
++/**
++ * perf_pmu__match_wildcard_uncore - does to_match match the PMU's name?
++ * @pmu_name: The pmu->name or pmu->alias to match against.
++ * @to_match: the json struct pmu_event name. This may lack a suffix (which
+ * matches) or be of the form "socket,pmuname" which will match
+ * "socketX_pmunameY".
+- * @name: a real full PMU name as from sysfs.
+ */
+-static bool pmu_uncore_alias_match(const char *pmu_name, const char *name)
++static bool perf_pmu__match_wildcard_uncore(const char *pmu_name, const char *to_match)
+ {
+- char *tmp = NULL, *tok, *str;
+- bool res;
+-
+- if (strchr(pmu_name, ',') == NULL)
+- return perf_pmu__match_ignoring_suffix(name, pmu_name);
++ char *mutable_to_match, *tok, *tmp;
+
+- str = strdup(pmu_name);
+- if (!str)
++ if (!pmu_name)
+ return false;
+
+- /*
+- * uncore alias may be from different PMU with common prefix
+- */
+- tok = strtok_r(str, ",", &tmp);
+- if (strncmp(pmu_name, tok, strlen(tok))) {
+- res = false;
+- goto out;
+- }
++ /* uncore_ prefixes are ignored. */
++ if (!strncmp(pmu_name, "uncore_", 7))
++ pmu_name += 7;
++ if (!strncmp(to_match, "uncore_", 7))
++ to_match += 7;
+
+- /*
+- * Match more complex aliases where the alias name is a comma-delimited
+- * list of tokens, orderly contained in the matching PMU name.
+- *
+- * Example: For alias "socket,pmuname" and PMU "socketX_pmunameY", we
+- * match "socket" in "socketX_pmunameY" and then "pmuname" in
+- * "pmunameY".
+- */
+- while (1) {
+- char *next_tok = strtok_r(NULL, ",", &tmp);
++ if (strchr(to_match, ',') == NULL)
++ return perf_pmu__match_wildcard(pmu_name, to_match);
+
+- name = strstr(name, tok);
+- if (!name ||
+- (!next_tok && !perf_pmu__match_ignoring_suffix(name, tok))) {
+- res = false;
+- goto out;
++ /* Process comma separated list of PMU name components. */
++ mutable_to_match = strdup(to_match);
++ if (!mutable_to_match)
++ return false;
++
++ tok = strtok_r(mutable_to_match, ",", &tmp);
++ while (tok) {
++ size_t tok_len = strlen(tok);
++
++ if (strncmp(pmu_name, tok, tok_len)) {
++ /* Mismatch between part of pmu_name and tok. */
++ free(mutable_to_match);
++ return false;
+ }
+- if (!next_tok)
+- break;
+- tok = next_tok;
+- name += strlen(tok);
++ /* Move pmu_name forward over tok and suffix. */
++ pmu_name += tok_len;
++ while (*pmu_name != '\0' && isdigit(*pmu_name))
++ pmu_name++;
++ if (*pmu_name == '_')
++ pmu_name++;
++
++ tok = strtok_r(NULL, ",", &tmp);
+ }
+-
+- res = true;
+-out:
+- free(str);
+- return res;
++ free(mutable_to_match);
++ return *pmu_name == '\0';
+ }
+
+ bool pmu_uncore_identifier_match(const char *compat, const char *id)
+@@ -1003,11 +1029,19 @@ static int pmu_add_sys_aliases_iter_fn(const struct pmu_event *pe,
+ {
+ struct perf_pmu *pmu = vdata;
+
+- if (!pe->compat || !pe->pmu)
++ if (!pe->compat || !pe->pmu) {
++ /* No data to match. */
+ return 0;
++ }
++
++ if (!perf_pmu__match_wildcard_uncore(pmu->name, pe->pmu) &&
++ !perf_pmu__match_wildcard_uncore(pmu->alias_name, pe->pmu)) {
++ /* PMU name/alias_name don't match. */
++ return 0;
++ }
+
+- if (pmu_uncore_alias_match(pe->pmu, pmu->name) &&
+- pmu_uncore_identifier_match(pe->compat, pmu->id)) {
++ if (pmu_uncore_identifier_match(pe->compat, pmu->id)) {
++ /* Id matched. */
+ perf_pmu__new_alias(pmu,
+ pe->name,
+ pe->desc,
+@@ -1016,7 +1050,6 @@ static int pmu_add_sys_aliases_iter_fn(const struct pmu_event *pe,
+ pe,
+ EVENT_SRC_SYS_JSON);
+ }
+-
+ return 0;
+ }
+
+@@ -1851,9 +1884,10 @@ size_t perf_pmu__num_events(struct perf_pmu *pmu)
+ if (pmu->cpu_aliases_added)
+ nr += pmu->cpu_json_aliases;
+ else if (pmu->events_table)
+- nr += pmu_events_table__num_events(pmu->events_table, pmu) - pmu->cpu_json_aliases;
++ nr += pmu_events_table__num_events(pmu->events_table, pmu) -
++ pmu->cpu_common_json_aliases;
+ else
+- assert(pmu->cpu_json_aliases == 0);
++ assert(pmu->cpu_json_aliases == 0 && pmu->cpu_common_json_aliases == 0);
+
+ if (perf_pmu__is_tool(pmu))
+ nr -= tool_pmu__num_skip_events();
+@@ -1974,15 +2008,82 @@ int perf_pmu__for_each_event(struct perf_pmu *pmu, bool skip_duplicate_pmus,
+ return ret;
+ }
+
+-bool pmu__name_match(const struct perf_pmu *pmu, const char *pmu_name)
++static bool perf_pmu___name_match(const struct perf_pmu *pmu, const char *to_match, bool wildcard)
+ {
+- return !strcmp(pmu->name, pmu_name) ||
+- (pmu->is_uncore && pmu_uncore_alias_match(pmu_name, pmu->name)) ||
++ const char *names[2] = {
++ pmu->name,
++ pmu->alias_name,
++ };
++ if (pmu->is_core) {
++ for (size_t i = 0; i < ARRAY_SIZE(names); i++) {
++ const char *name = names[i];
++
++ if (!name)
++ continue;
++
++ if (!strcmp(name, to_match)) {
++ /* Exact name match. */
++ return true;
++ }
++ }
++ if (!strcmp(to_match, "default_core")) {
++ /*
++ * jevents and tests use default_core as a marker for any core
++ * PMU as the PMU name varies across architectures.
++ */
++ return true;
++ }
++ return false;
++ }
++ if (!pmu->is_uncore) {
+ /*
+- * jevents and tests use default_core as a marker for any core
+- * PMU as the PMU name varies across architectures.
++ * PMU isn't core or uncore, some kind of broken CPU mask
++ * situation. Only match exact name.
+ */
+- (pmu->is_core && !strcmp(pmu_name, "default_core"));
++ for (size_t i = 0; i < ARRAY_SIZE(names); i++) {
++ const char *name = names[i];
++
++ if (!name)
++ continue;
++
++ if (!strcmp(name, to_match)) {
++ /* Exact name match. */
++ return true;
++ }
++ }
++ return false;
++ }
++ for (size_t i = 0; i < ARRAY_SIZE(names); i++) {
++ const char *name = names[i];
++
++ if (wildcard && perf_pmu__match_wildcard_uncore(name, to_match))
++ return true;
++ if (!wildcard && perf_pmu__match_ignoring_suffix_uncore(name, to_match))
++ return true;
++ }
++ return false;
++}
++
++/**
++ * perf_pmu__name_wildcard_match - Called by the jevents generated code to see
++ * if pmu matches the json to_match string.
++ * @pmu: The pmu whose name/alias to match.
++ * @to_match: The possible match to pmu_name.
++ */
++bool perf_pmu__name_wildcard_match(const struct perf_pmu *pmu, const char *to_match)
++{
++ return perf_pmu___name_match(pmu, to_match, /*wildcard=*/true);
++}
++
++/**
++ * perf_pmu__name_no_suffix_match - Does pmu's name match to_match ignoring any
++ * trailing suffix on the pmu_name and/or tok?
++ * @pmu: The pmu whose name/alias to match.
++ * @to_match: The possible match to pmu_name.
++ */
++bool perf_pmu__name_no_suffix_match(const struct perf_pmu *pmu, const char *to_match)
++{
++ return perf_pmu___name_match(pmu, to_match, /*wildcard=*/false);
+ }
+
+ bool perf_pmu__is_software(const struct perf_pmu *pmu)
+@@ -2229,29 +2330,31 @@ void perf_pmu__warn_invalid_config(struct perf_pmu *pmu, __u64 config,
+ name ?: "N/A", buf, config_name, config);
+ }
+
+-bool perf_pmu__match(const struct perf_pmu *pmu, const char *tok)
++bool perf_pmu__wildcard_match(const struct perf_pmu *pmu, const char *wildcard_to_match)
+ {
+- const char *name = pmu->name;
+- bool need_fnmatch = strisglob(tok);
++ const char *names[2] = {
++ pmu->name,
++ pmu->alias_name,
++ };
++ bool need_fnmatch = strisglob(wildcard_to_match);
+
+- if (!strncmp(tok, "uncore_", 7))
+- tok += 7;
+- if (!strncmp(name, "uncore_", 7))
+- name += 7;
++ if (!strncmp(wildcard_to_match, "uncore_", 7))
++ wildcard_to_match += 7;
+
+- if (perf_pmu__match_ignoring_suffix(name, tok) ||
+- (need_fnmatch && !fnmatch(tok, name, 0)))
+- return true;
++ for (size_t i = 0; i < ARRAY_SIZE(names); i++) {
++ const char *pmu_name = names[i];
+
+- name = pmu->alias_name;
+- if (!name)
+- return false;
++ if (!pmu_name)
++ continue;
+
+- if (!strncmp(name, "uncore_", 7))
+- name += 7;
++ if (!strncmp(pmu_name, "uncore_", 7))
++ pmu_name += 7;
+
+- return perf_pmu__match_ignoring_suffix(name, tok) ||
+- (need_fnmatch && !fnmatch(tok, name, 0));
++ if (perf_pmu__match_wildcard(pmu_name, wildcard_to_match) ||
++ (need_fnmatch && !fnmatch(wildcard_to_match, pmu_name, 0)))
++ return true;
++ }
++ return false;
+ }
+
+ int perf_pmu__event_source_devices_scnprintf(char *pathname, size_t size)
+diff --git a/tools/perf/util/pmu.h b/tools/perf/util/pmu.h
+index dbed6c243a5ef9..b93014cc3670e8 100644
+--- a/tools/perf/util/pmu.h
++++ b/tools/perf/util/pmu.h
+@@ -37,6 +37,8 @@ struct perf_pmu_caps {
+ };
+
+ enum {
++ PERF_PMU_TYPE_PE_START = 0,
++ PERF_PMU_TYPE_PE_END = 0xFFFEFFFF,
+ PERF_PMU_TYPE_HWMON_START = 0xFFFF0000,
+ PERF_PMU_TYPE_HWMON_END = 0xFFFFFFFD,
+ PERF_PMU_TYPE_TOOL = 0xFFFFFFFE,
+@@ -134,6 +136,11 @@ struct perf_pmu {
+ uint32_t cpu_json_aliases;
+ /** @sys_json_aliases: Number of json event aliases loaded matching the PMU's identifier. */
+ uint32_t sys_json_aliases;
++ /**
++ * @cpu_common_json_aliases: Number of json events that overlapped with sysfs when
++ * loading all sysfs events.
++ */
++ uint32_t cpu_common_json_aliases;
+ /** @sysfs_aliases_loaded: Are sysfs aliases loaded from disk? */
+ bool sysfs_aliases_loaded;
+ /**
+@@ -238,7 +245,8 @@ bool perf_pmu__have_event(struct perf_pmu *pmu, const char *name);
+ size_t perf_pmu__num_events(struct perf_pmu *pmu);
+ int perf_pmu__for_each_event(struct perf_pmu *pmu, bool skip_duplicate_pmus,
+ void *state, pmu_event_callback cb);
+-bool pmu__name_match(const struct perf_pmu *pmu, const char *pmu_name);
++bool perf_pmu__name_wildcard_match(const struct perf_pmu *pmu, const char *to_match);
++bool perf_pmu__name_no_suffix_match(const struct perf_pmu *pmu, const char *to_match);
+
+ /**
+ * perf_pmu_is_software - is the PMU a software PMU as in it uses the
+@@ -273,7 +281,7 @@ void perf_pmu__warn_invalid_config(struct perf_pmu *pmu, __u64 config,
+ const char *config_name);
+ void perf_pmu__warn_invalid_formats(struct perf_pmu *pmu);
+
+-bool perf_pmu__match(const struct perf_pmu *pmu, const char *tok);
++bool perf_pmu__wildcard_match(const struct perf_pmu *pmu, const char *wildcard_to_match);
+
+ int perf_pmu__event_source_devices_scnprintf(char *pathname, size_t size);
+ int perf_pmu__pathname_scnprintf(char *buf, size_t size,
+diff --git a/tools/perf/util/pmus.c b/tools/perf/util/pmus.c
+index b493da0d22ef7b..7959af59908c2d 100644
+--- a/tools/perf/util/pmus.c
++++ b/tools/perf/util/pmus.c
+@@ -37,10 +37,25 @@
+ */
+ static LIST_HEAD(core_pmus);
+ static LIST_HEAD(other_pmus);
+-static bool read_sysfs_core_pmus;
+-static bool read_sysfs_all_pmus;
++enum perf_tool_pmu_type {
++ PERF_TOOL_PMU_TYPE_PE_CORE,
++ PERF_TOOL_PMU_TYPE_PE_OTHER,
++ PERF_TOOL_PMU_TYPE_TOOL,
++ PERF_TOOL_PMU_TYPE_HWMON,
++
++#define PERF_TOOL_PMU_TYPE_PE_CORE_MASK (1 << PERF_TOOL_PMU_TYPE_PE_CORE)
++#define PERF_TOOL_PMU_TYPE_PE_OTHER_MASK (1 << PERF_TOOL_PMU_TYPE_PE_OTHER)
++#define PERF_TOOL_PMU_TYPE_TOOL_MASK (1 << PERF_TOOL_PMU_TYPE_TOOL)
++#define PERF_TOOL_PMU_TYPE_HWMON_MASK (1 << PERF_TOOL_PMU_TYPE_HWMON)
++
++#define PERF_TOOL_PMU_TYPE_ALL_MASK (PERF_TOOL_PMU_TYPE_PE_CORE_MASK | \
++ PERF_TOOL_PMU_TYPE_PE_OTHER_MASK | \
++ PERF_TOOL_PMU_TYPE_TOOL_MASK | \
++ PERF_TOOL_PMU_TYPE_HWMON_MASK)
++};
++static unsigned int read_pmu_types;
+
+-static void pmu_read_sysfs(bool core_only);
++static void pmu_read_sysfs(unsigned int to_read_pmus);
+
+ size_t pmu_name_len_no_suffix(const char *str)
+ {
+@@ -102,8 +117,7 @@ void perf_pmus__destroy(void)
+
+ perf_pmu__delete(pmu);
+ }
+- read_sysfs_core_pmus = false;
+- read_sysfs_all_pmus = false;
++ read_pmu_types = 0;
+ }
+
+ static struct perf_pmu *pmu_find(const char *name)
+@@ -129,6 +143,7 @@ struct perf_pmu *perf_pmus__find(const char *name)
+ struct perf_pmu *pmu;
+ int dirfd;
+ bool core_pmu;
++ unsigned int to_read_pmus = 0;
+
+ /*
+ * Once PMU is loaded it stays in the list,
+@@ -139,11 +154,11 @@ struct perf_pmu *perf_pmus__find(const char *name)
+ if (pmu)
+ return pmu;
+
+- if (read_sysfs_all_pmus)
++ if (read_pmu_types == PERF_TOOL_PMU_TYPE_ALL_MASK)
+ return NULL;
+
+ core_pmu = is_pmu_core(name);
+- if (core_pmu && read_sysfs_core_pmus)
++ if (core_pmu && (read_pmu_types & PERF_TOOL_PMU_TYPE_PE_CORE_MASK))
+ return NULL;
+
+ dirfd = perf_pmu__event_source_devices_fd();
+@@ -151,15 +166,27 @@ struct perf_pmu *perf_pmus__find(const char *name)
+ /*eager_load=*/false);
+ close(dirfd);
+
+- if (!pmu) {
+- /*
+- * Looking up an inidividual PMU failed. This may mean name is
+- * an alias, so read the PMUs from sysfs and try to find again.
+- */
+- pmu_read_sysfs(core_pmu);
++ if (pmu)
++ return pmu;
++
++ /* Looking up an individual perf event PMU failed, check if a tool PMU should be read. */
++ if (!strncmp(name, "hwmon_", 6))
++ to_read_pmus |= PERF_TOOL_PMU_TYPE_HWMON_MASK;
++ else if (!strcmp(name, "tool"))
++ to_read_pmus |= PERF_TOOL_PMU_TYPE_TOOL_MASK;
++
++ if (to_read_pmus) {
++ pmu_read_sysfs(to_read_pmus);
+ pmu = pmu_find(name);
++ if (pmu)
++ return pmu;
+ }
+- return pmu;
++ /* Read all necessary PMUs from sysfs and see if the PMU is found. */
++ to_read_pmus = PERF_TOOL_PMU_TYPE_PE_CORE_MASK;
++ if (!core_pmu)
++ to_read_pmus |= PERF_TOOL_PMU_TYPE_PE_OTHER_MASK;
++ pmu_read_sysfs(to_read_pmus);
++ return pmu_find(name);
+ }
+
+ static struct perf_pmu *perf_pmu__find2(int dirfd, const char *name)
+@@ -176,11 +203,11 @@ static struct perf_pmu *perf_pmu__find2(int dirfd, const char *name)
+ if (pmu)
+ return pmu;
+
+- if (read_sysfs_all_pmus)
++ if (read_pmu_types == PERF_TOOL_PMU_TYPE_ALL_MASK)
+ return NULL;
+
+ core_pmu = is_pmu_core(name);
+- if (core_pmu && read_sysfs_core_pmus)
++ if (core_pmu && (read_pmu_types & PERF_TOOL_PMU_TYPE_PE_CORE_MASK))
+ return NULL;
+
+ return perf_pmu__lookup(core_pmu ? &core_pmus : &other_pmus, dirfd, name,
+@@ -197,52 +224,61 @@ static int pmus_cmp(void *priv __maybe_unused,
+ }
+
+ /* Add all pmus in sysfs to pmu list: */
+-static void pmu_read_sysfs(bool core_only)
++static void pmu_read_sysfs(unsigned int to_read_types)
+ {
+- int fd;
+- DIR *dir;
+- struct dirent *dent;
+ struct perf_pmu *tool_pmu;
+
+- if (read_sysfs_all_pmus || (core_only && read_sysfs_core_pmus))
++ if ((read_pmu_types & to_read_types) == to_read_types) {
++ /* All requested PMU types have been read. */
+ return;
++ }
+
+- fd = perf_pmu__event_source_devices_fd();
+- if (fd < 0)
+- return;
++ if (to_read_types & (PERF_TOOL_PMU_TYPE_PE_CORE_MASK | PERF_TOOL_PMU_TYPE_PE_OTHER_MASK)) {
++ int fd = perf_pmu__event_source_devices_fd();
++ DIR *dir;
++ struct dirent *dent;
++ bool core_only = (to_read_types & PERF_TOOL_PMU_TYPE_PE_OTHER_MASK) == 0;
+
+- dir = fdopendir(fd);
+- if (!dir) {
+- close(fd);
+- return;
+- }
++ if (fd < 0)
++ goto skip_pe_pmus;
+
+- while ((dent = readdir(dir))) {
+- if (!strcmp(dent->d_name, ".") || !strcmp(dent->d_name, ".."))
+- continue;
+- if (core_only && !is_pmu_core(dent->d_name))
+- continue;
+- /* add to static LIST_HEAD(core_pmus) or LIST_HEAD(other_pmus): */
+- perf_pmu__find2(fd, dent->d_name);
+- }
++ dir = fdopendir(fd);
++ if (!dir) {
++ close(fd);
++ goto skip_pe_pmus;
++ }
+
+- closedir(dir);
+- if (list_empty(&core_pmus)) {
++ while ((dent = readdir(dir))) {
++ if (!strcmp(dent->d_name, ".") || !strcmp(dent->d_name, ".."))
++ continue;
++ if (core_only && !is_pmu_core(dent->d_name))
++ continue;
++ /* add to static LIST_HEAD(core_pmus) or LIST_HEAD(other_pmus): */
++ perf_pmu__find2(fd, dent->d_name);
++ }
++
++ closedir(dir);
++ }
++skip_pe_pmus:
++ if ((to_read_types & PERF_TOOL_PMU_TYPE_PE_CORE_MASK) && list_empty(&core_pmus)) {
+ if (!perf_pmu__create_placeholder_core_pmu(&core_pmus))
+ pr_err("Failure to set up any core PMUs\n");
+ }
+ list_sort(NULL, &core_pmus, pmus_cmp);
+- if (!core_only) {
+- tool_pmu = perf_pmus__tool_pmu();
+- list_add_tail(&tool_pmu->list, &other_pmus);
+- perf_pmus__read_hwmon_pmus(&other_pmus);
++
++ if ((to_read_types & PERF_TOOL_PMU_TYPE_TOOL_MASK) != 0 &&
++ (read_pmu_types & PERF_TOOL_PMU_TYPE_TOOL_MASK) == 0) {
++ tool_pmu = tool_pmu__new();
++ if (tool_pmu)
++ list_add_tail(&tool_pmu->list, &other_pmus);
+ }
++ if ((to_read_types & PERF_TOOL_PMU_TYPE_HWMON_MASK) != 0 &&
++ (read_pmu_types & PERF_TOOL_PMU_TYPE_HWMON_MASK) == 0)
++ perf_pmus__read_hwmon_pmus(&other_pmus);
++
+ list_sort(NULL, &other_pmus, pmus_cmp);
+- if (!list_empty(&core_pmus)) {
+- read_sysfs_core_pmus = true;
+- if (!core_only)
+- read_sysfs_all_pmus = true;
+- }
++
++ read_pmu_types |= to_read_types;
+ }
+
+ static struct perf_pmu *__perf_pmus__find_by_type(unsigned int type)
+@@ -263,12 +299,21 @@ static struct perf_pmu *__perf_pmus__find_by_type(unsigned int type)
+
+ struct perf_pmu *perf_pmus__find_by_type(unsigned int type)
+ {
++ unsigned int to_read_pmus;
+ struct perf_pmu *pmu = __perf_pmus__find_by_type(type);
+
+- if (pmu || read_sysfs_all_pmus)
++ if (pmu || (read_pmu_types == PERF_TOOL_PMU_TYPE_ALL_MASK))
+ return pmu;
+
+- pmu_read_sysfs(/*core_only=*/false);
++ if (type >= PERF_PMU_TYPE_PE_START && type <= PERF_PMU_TYPE_PE_END) {
++ to_read_pmus = PERF_TOOL_PMU_TYPE_PE_CORE_MASK |
++ PERF_TOOL_PMU_TYPE_PE_OTHER_MASK;
++ } else if (type >= PERF_PMU_TYPE_HWMON_START && type <= PERF_PMU_TYPE_HWMON_END) {
++ to_read_pmus = PERF_TOOL_PMU_TYPE_HWMON_MASK;
++ } else {
++ to_read_pmus = PERF_TOOL_PMU_TYPE_TOOL_MASK;
++ }
++ pmu_read_sysfs(to_read_pmus);
+ pmu = __perf_pmus__find_by_type(type);
+ return pmu;
+ }
+@@ -282,7 +327,7 @@ struct perf_pmu *perf_pmus__scan(struct perf_pmu *pmu)
+ bool use_core_pmus = !pmu || pmu->is_core;
+
+ if (!pmu) {
+- pmu_read_sysfs(/*core_only=*/false);
++ pmu_read_sysfs(PERF_TOOL_PMU_TYPE_ALL_MASK);
+ pmu = list_prepare_entry(pmu, &core_pmus, list);
+ }
+ if (use_core_pmus) {
+@@ -300,7 +345,7 @@ struct perf_pmu *perf_pmus__scan(struct perf_pmu *pmu)
+ struct perf_pmu *perf_pmus__scan_core(struct perf_pmu *pmu)
+ {
+ if (!pmu) {
+- pmu_read_sysfs(/*core_only=*/true);
++ pmu_read_sysfs(PERF_TOOL_PMU_TYPE_PE_CORE_MASK);
+ return list_first_entry_or_null(&core_pmus, typeof(*pmu), list);
+ }
+ list_for_each_entry_continue(pmu, &core_pmus, list)
+@@ -316,7 +361,7 @@ static struct perf_pmu *perf_pmus__scan_skip_duplicates(struct perf_pmu *pmu)
+ const char *last_pmu_name = (pmu && pmu->name) ? pmu->name : "";
+
+ if (!pmu) {
+- pmu_read_sysfs(/*core_only=*/false);
++ pmu_read_sysfs(PERF_TOOL_PMU_TYPE_ALL_MASK);
+ pmu = list_prepare_entry(pmu, &core_pmus, list);
+ } else
+ last_pmu_name_len = pmu_name_len_no_suffix(pmu->name ?: "");
+@@ -710,11 +755,25 @@ char *perf_pmus__default_pmu_name(void)
+ struct perf_pmu *evsel__find_pmu(const struct evsel *evsel)
+ {
+ struct perf_pmu *pmu = evsel->pmu;
++ bool legacy_core_type;
+
+- if (!pmu) {
+- pmu = perf_pmus__find_by_type(evsel->core.attr.type);
+- ((struct evsel *)evsel)->pmu = pmu;
++ if (pmu)
++ return pmu;
++
++ pmu = perf_pmus__find_by_type(evsel->core.attr.type);
++ legacy_core_type =
++ evsel->core.attr.type == PERF_TYPE_HARDWARE ||
++ evsel->core.attr.type == PERF_TYPE_HW_CACHE;
++ if (!pmu && legacy_core_type) {
++ if (perf_pmus__supports_extended_type()) {
++ u32 type = evsel->core.attr.config >> PERF_PMU_TYPE_SHIFT;
++
++ pmu = perf_pmus__find_by_type(type);
++ } else {
++ pmu = perf_pmus__find_core_pmu();
++ }
+ }
++ ((struct evsel *)evsel)->pmu = pmu;
+ return pmu;
+ }
+
+diff --git a/tools/perf/util/python.c b/tools/perf/util/python.c
+index b4bc57859f7302..a23fa5d9539439 100644
+--- a/tools/perf/util/python.c
++++ b/tools/perf/util/python.c
+@@ -47,7 +47,7 @@ struct pyrf_event {
+ };
+
+ #define sample_members \
+- sample_member_def(sample_ip, ip, T_ULONGLONG, "event type"), \
++ sample_member_def(sample_ip, ip, T_ULONGLONG, "event ip"), \
+ sample_member_def(sample_pid, pid, T_INT, "event pid"), \
+ sample_member_def(sample_tid, tid, T_INT, "event tid"), \
+ sample_member_def(sample_time, time, T_ULONGLONG, "event timestamp"), \
+@@ -481,6 +481,11 @@ static PyObject *pyrf_event__new(const union perf_event *event)
+ event->header.type == PERF_RECORD_SWITCH_CPU_WIDE))
+ return NULL;
+
++ // FIXME this better be dynamic or we need to parse everything
++ // before calling perf_mmap__consume(), including tracepoint fields.
++ if (sizeof(pevent->event) < event->header.size)
++ return NULL;
++
+ ptype = pyrf_event__type[event->header.type];
+ pevent = PyObject_New(struct pyrf_event, ptype);
+ if (pevent != NULL)
+@@ -984,20 +989,22 @@ static PyObject *pyrf_evlist__read_on_cpu(struct pyrf_evlist *pevlist,
+
+ evsel = evlist__event2evsel(evlist, event);
+ if (!evsel) {
++ Py_DECREF(pyevent);
+ Py_INCREF(Py_None);
+ return Py_None;
+ }
+
+ pevent->evsel = evsel;
+
+- err = evsel__parse_sample(evsel, event, &pevent->sample);
+-
+- /* Consume the even only after we parsed it out. */
+ perf_mmap__consume(&md->core);
+
+- if (err)
++ err = evsel__parse_sample(evsel, &pevent->event, &pevent->sample);
++ if (err) {
++ Py_DECREF(pyevent);
+ return PyErr_Format(PyExc_OSError,
+ "perf: can't parse sample, err=%d", err);
++ }
++
+ return pyevent;
+ }
+ end:
+diff --git a/tools/perf/util/stat-shadow.c b/tools/perf/util/stat-shadow.c
+index fa8b2a1048ff99..d83bda5824d22b 100644
+--- a/tools/perf/util/stat-shadow.c
++++ b/tools/perf/util/stat-shadow.c
+@@ -151,6 +151,7 @@ static double find_stat(const struct evsel *evsel, int aggr_idx, enum stat_type
+ {
+ struct evsel *cur;
+ int evsel_ctx = evsel_context(evsel);
++ struct perf_pmu *evsel_pmu = evsel__find_pmu(evsel);
+
+ evlist__for_each_entry(evsel->evlist, cur) {
+ struct perf_stat_aggr *aggr;
+@@ -177,7 +178,7 @@ static double find_stat(const struct evsel *evsel, int aggr_idx, enum stat_type
+ * Except the SW CLOCK events,
+ * ignore if not the PMU we're looking for.
+ */
+- if ((type != STAT_NSECS) && (evsel->pmu != cur->pmu))
++ if ((type != STAT_NSECS) && (evsel_pmu != evsel__find_pmu(cur)))
+ continue;
+
+ aggr = &cur->stats->aggr[aggr_idx];
+diff --git a/tools/perf/util/stat.c b/tools/perf/util/stat.c
+index 7c2ccdcc3fdba3..1f7abd8754c75b 100644
+--- a/tools/perf/util/stat.c
++++ b/tools/perf/util/stat.c
+@@ -535,7 +535,10 @@ static int evsel__merge_aggr_counters(struct evsel *evsel, struct evsel *alias)
+
+ return 0;
+ }
+-/* events should have the same name, scale, unit, cgroup but on different PMUs */
++/*
++ * Events should have the same name, scale, unit, cgroup but on different core
++ * PMUs or on different but matching uncore PMUs.
++ */
+ static bool evsel__is_alias(struct evsel *evsel_a, struct evsel *evsel_b)
+ {
+ if (strcmp(evsel__name(evsel_a), evsel__name(evsel_b)))
+@@ -553,7 +556,13 @@ static bool evsel__is_alias(struct evsel *evsel_a, struct evsel *evsel_b)
+ if (evsel__is_clock(evsel_a) != evsel__is_clock(evsel_b))
+ return false;
+
+- return evsel_a->pmu != evsel_b->pmu;
++ if (evsel_a->pmu == evsel_b->pmu || evsel_a->pmu == NULL || evsel_b->pmu == NULL)
++ return false;
++
++ if (evsel_a->pmu->is_core)
++ return evsel_b->pmu->is_core;
++
++ return perf_pmu__name_no_suffix_match(evsel_a->pmu, evsel_b->pmu->name);
+ }
+
+ static void evsel__merge_aliases(struct evsel *evsel)
+diff --git a/tools/perf/util/tool_pmu.c b/tools/perf/util/tool_pmu.c
+index 4fb09757847944..d43d6cf6e4a20f 100644
+--- a/tools/perf/util/tool_pmu.c
++++ b/tools/perf/util/tool_pmu.c
+@@ -62,7 +62,8 @@ int tool_pmu__num_skip_events(void)
+
+ const char *tool_pmu__event_to_str(enum tool_pmu_event ev)
+ {
+- if (ev > TOOL_PMU__EVENT_NONE && ev < TOOL_PMU__EVENT_MAX)
++ if ((ev > TOOL_PMU__EVENT_NONE && ev < TOOL_PMU__EVENT_MAX) &&
++ !tool_pmu__skip_event(tool_pmu__event_names[ev]))
+ return tool_pmu__event_names[ev];
+
+ return NULL;
+@@ -489,17 +490,24 @@ int evsel__tool_pmu_read(struct evsel *evsel, int cpu_map_idx, int thread)
+ return 0;
+ }
+
+-struct perf_pmu *perf_pmus__tool_pmu(void)
++struct perf_pmu *tool_pmu__new(void)
+ {
+- static struct perf_pmu tool = {
+- .name = "tool",
+- .type = PERF_PMU_TYPE_TOOL,
+- .aliases = LIST_HEAD_INIT(tool.aliases),
+- .caps = LIST_HEAD_INIT(tool.caps),
+- .format = LIST_HEAD_INIT(tool.format),
+- };
+- if (!tool.events_table)
+- tool.events_table = find_core_events_table("common", "common");
+-
+- return &tool;
++ struct perf_pmu *tool = zalloc(sizeof(struct perf_pmu));
++
++ if (!tool)
++ goto out;
++ tool->name = strdup("tool");
++ if (!tool->name) {
++ zfree(&tool);
++ goto out;
++ }
++
++ tool->type = PERF_PMU_TYPE_TOOL;
++ INIT_LIST_HEAD(&tool->aliases);
++ INIT_LIST_HEAD(&tool->caps);
++ INIT_LIST_HEAD(&tool->format);
++ tool->events_table = find_core_events_table("common", "common");
++
++out:
++ return tool;
+ }
+diff --git a/tools/perf/util/tool_pmu.h b/tools/perf/util/tool_pmu.h
+index a60184859080f1..c6ad1dd90a56d4 100644
+--- a/tools/perf/util/tool_pmu.h
++++ b/tools/perf/util/tool_pmu.h
+@@ -51,6 +51,6 @@ int evsel__tool_pmu_open(struct evsel *evsel,
+ int start_cpu_map_idx, int end_cpu_map_idx);
+ int evsel__tool_pmu_read(struct evsel *evsel, int cpu_map_idx, int thread);
+
+-struct perf_pmu *perf_pmus__tool_pmu(void);
++struct perf_pmu *tool_pmu__new(void);
+
+ #endif /* __TOOL_PMU_H */
+diff --git a/tools/perf/util/units.c b/tools/perf/util/units.c
+index 32c39cfe209b3b..4c6a86e1cb54b2 100644
+--- a/tools/perf/util/units.c
++++ b/tools/perf/util/units.c
+@@ -64,7 +64,7 @@ unsigned long convert_unit(unsigned long value, char *unit)
+
+ int unit_number__scnprintf(char *buf, size_t size, u64 n)
+ {
+- char unit[4] = "BKMG";
++ char unit[] = "BKMG";
+ int i = 0;
+
+ while (((n / 1024) > 1) && (i < 3)) {
+diff --git a/tools/power/x86/turbostat/turbostat.8 b/tools/power/x86/turbostat/turbostat.8
+index 99bf905ade8127..e4f9f93c123a2b 100644
+--- a/tools/power/x86/turbostat/turbostat.8
++++ b/tools/power/x86/turbostat/turbostat.8
+@@ -168,6 +168,8 @@ The system configuration dump (if --quiet is not used) is followed by statistics
+ .PP
+ \fBPkgTmp\fP Degrees Celsius reported by the per-package Package Thermal Monitor.
+ .PP
+\fBCoreThr\fP Core Thermal Throttling events during the measurement interval. Note that events since boot can be found in /sys/devices/system/cpu/cpu*/thermal_throttle/*
++.PP
+ \fBGFX%rc6\fP The percentage of time the GPU is in the "render C6" state, rc6, during the measurement interval. From /sys/class/drm/card0/power/rc6_residency_ms or /sys/class/drm/card0/gt/gt0/rc6_residency_ms or /sys/class/drm/card0/device/tile0/gtN/gtidle/idle_residency_ms depending on the graphics driver being used.
+ .PP
+ \fBGFXMHz\fP Instantaneous snapshot of what sysfs presents at the end of the measurement interval. From /sys/class/graphics/fb0/device/drm/card0/gt_cur_freq_mhz or /sys/class/drm/card0/gt_cur_freq_mhz or /sys/class/drm/card0/gt/gt0/rps_cur_freq_mhz or /sys/class/drm/card0/device/tile0/gtN/freq0/cur_freq depending on the graphics driver being used.
+diff --git a/tools/power/x86/turbostat/turbostat.c b/tools/power/x86/turbostat/turbostat.c
+index 8d5011a0bf60d8..4155d9bfcfc6da 100644
+--- a/tools/power/x86/turbostat/turbostat.c
++++ b/tools/power/x86/turbostat/turbostat.c
+@@ -2211,7 +2211,7 @@ int get_msr(int cpu, off_t offset, unsigned long long *msr)
+ return 0;
+ }
+
+-int probe_msr(int cpu, off_t offset)
++int probe_rapl_msr(int cpu, off_t offset, int index)
+ {
+ ssize_t retval;
+ unsigned long long value;
+@@ -2220,13 +2220,22 @@ int probe_msr(int cpu, off_t offset)
+
+ retval = pread(get_msr_fd(cpu), &value, sizeof(value), offset);
+
+- /*
+- * Expect MSRs to accumulate some non-zero value since the system was powered on.
+- * Treat zero as a read failure.
+- */
+- if (retval != sizeof(value) || value == 0)
++ /* if the read failed, the probe fails */
++ if (retval != sizeof(value))
+ return 1;
+
++ /* If an Energy Status Counter MSR returns 0, the probe fails */
++ switch (index) {
++ case RAPL_RCI_INDEX_ENERGY_PKG:
++ case RAPL_RCI_INDEX_ENERGY_CORES:
++ case RAPL_RCI_INDEX_DRAM:
++ case RAPL_RCI_INDEX_GFX:
++ case RAPL_RCI_INDEX_ENERGY_PLATFORM:
++ if (value == 0)
++ return 1;
++ }
++
++ /* PKG,DRAM_PERF_STATUS MSRs, can return any value */
+ return 0;
+ }
+
+@@ -3476,7 +3485,7 @@ void delta_core(struct core_data *new, struct core_data *old)
+ old->c6 = new->c6 - old->c6;
+ old->c7 = new->c7 - old->c7;
+ old->core_temp_c = new->core_temp_c;
+- old->core_throt_cnt = new->core_throt_cnt;
++ old->core_throt_cnt = new->core_throt_cnt - old->core_throt_cnt;
+ old->mc6_us = new->mc6_us - old->mc6_us;
+
+ DELTA_WRAP32(new->core_energy.raw_value, old->core_energy.raw_value);
+@@ -6030,6 +6039,7 @@ int snapshot_graphics(int idx)
+ int retval;
+
+ rewind(gfx_info[idx].fp);
++ fflush(gfx_info[idx].fp);
+
+ switch (idx) {
+ case GFX_rc6:
+@@ -7896,7 +7906,7 @@ void rapl_perf_init(void)
+ rci->flags[cai->rci_index] = cai->flags;
+
+ /* Use MSR for this counter */
+- } else if (!no_msr && cai->msr && probe_msr(cpu, cai->msr) == 0) {
++ } else if (!no_msr && cai->msr && probe_rapl_msr(cpu, cai->msr, cai->rci_index) == 0) {
+ rci->source[cai->rci_index] = COUNTER_SOURCE_MSR;
+ rci->msr[cai->rci_index] = cai->msr;
+ rci->msr_mask[cai->rci_index] = cai->msr_mask;
+@@ -8034,7 +8044,7 @@ void msr_perf_init_(void)
+ cai->present = true;
+
+ /* User MSR for this counter */
+- } else if (!no_msr && cai->msr && probe_msr(cpu, cai->msr) == 0) {
++ } else if (!no_msr && cai->msr && probe_rapl_msr(cpu, cai->msr, cai->rci_index) == 0) {
+ cci->source[cai->rci_index] = COUNTER_SOURCE_MSR;
+ cci->msr[cai->rci_index] = cai->msr;
+ cci->msr_mask[cai->rci_index] = cai->msr_mask;
+@@ -8148,7 +8158,7 @@ void cstate_perf_init_(bool soft_c1)
+
+ /* User MSR for this counter */
+ } else if (!no_msr && cai->msr && pkg_cstate_limit >= cai->pkg_cstate_limit
+- && probe_msr(cpu, cai->msr) == 0) {
++ && probe_rapl_msr(cpu, cai->msr, cai->rci_index) == 0) {
+ cci->source[cai->rci_index] = COUNTER_SOURCE_MSR;
+ cci->msr[cai->rci_index] = cai->msr;
+ }
+diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
+index 87551628e1129e..6722080b2107a5 100644
+--- a/tools/testing/selftests/bpf/Makefile
++++ b/tools/testing/selftests/bpf/Makefile
+@@ -306,6 +306,7 @@ $(OUTPUT)/runqslower: $(BPFOBJ) | $(DEFAULT_BPFTOOL) $(RUNQSLOWER_OUTPUT)
+ BPFTOOL_OUTPUT=$(HOST_BUILD_DIR)/bpftool/ \
+ BPFOBJ_OUTPUT=$(BUILD_DIR)/libbpf/ \
+ BPFOBJ=$(BPFOBJ) BPF_INCLUDE=$(INCLUDE_DIR) \
++ BPF_TARGET_ENDIAN=$(BPF_TARGET_ENDIAN) \
+ EXTRA_CFLAGS='-g $(OPT_FLAGS) $(SAN_CFLAGS) $(EXTRA_CFLAGS)' \
+ EXTRA_LDFLAGS='$(SAN_LDFLAGS) $(EXTRA_LDFLAGS)' && \
+ cp $(RUNQSLOWER_OUTPUT)runqslower $@
+diff --git a/tools/testing/selftests/bpf/prog_tests/bloom_filter_map.c b/tools/testing/selftests/bpf/prog_tests/bloom_filter_map.c
+index cc184e4420f6e3..67557cda220835 100644
+--- a/tools/testing/selftests/bpf/prog_tests/bloom_filter_map.c
++++ b/tools/testing/selftests/bpf/prog_tests/bloom_filter_map.c
+@@ -6,6 +6,10 @@
+ #include <test_progs.h>
+ #include "bloom_filter_map.skel.h"
+
++#ifndef NUMA_NO_NODE
++#define NUMA_NO_NODE (-1)
++#endif
++
+ static void test_fail_cases(void)
+ {
+ LIBBPF_OPTS(bpf_map_create_opts, opts);
+@@ -69,6 +73,7 @@ static void test_success_cases(void)
+
+ /* Create a map */
+ opts.map_flags = BPF_F_ZERO_SEED | BPF_F_NUMA_NODE;
++ opts.numa_node = NUMA_NO_NODE;
+ fd = bpf_map_create(BPF_MAP_TYPE_BLOOM_FILTER, NULL, 0, sizeof(value), 100, &opts);
+ if (!ASSERT_GE(fd, 0, "bpf_map_create bloom filter success case"))
+ return;
+diff --git a/tools/testing/selftests/bpf/prog_tests/tailcalls.c b/tools/testing/selftests/bpf/prog_tests/tailcalls.c
+index 544144620ca61a..66a900327f912d 100644
+--- a/tools/testing/selftests/bpf/prog_tests/tailcalls.c
++++ b/tools/testing/selftests/bpf/prog_tests/tailcalls.c
+@@ -1600,6 +1600,7 @@ static void test_tailcall_bpf2bpf_freplace(void)
+ goto out;
+
+ err = bpf_link__destroy(freplace_link);
++ freplace_link = NULL;
+ if (!ASSERT_OK(err, "destroy link"))
+ goto out;
+
+diff --git a/tools/testing/selftests/bpf/progs/strncmp_bench.c b/tools/testing/selftests/bpf/progs/strncmp_bench.c
+index 18373a7df76e6c..f47bf88f8d2a73 100644
+--- a/tools/testing/selftests/bpf/progs/strncmp_bench.c
++++ b/tools/testing/selftests/bpf/progs/strncmp_bench.c
+@@ -35,7 +35,10 @@ static __always_inline int local_strncmp(const char *s1, unsigned int sz,
+ SEC("tp/syscalls/sys_enter_getpgid")
+ int strncmp_no_helper(void *ctx)
+ {
+- if (local_strncmp(str, cmp_str_len + 1, target) < 0)
++ const char *target_str = target;
++
++ barrier_var(target_str);
++ if (local_strncmp(str, cmp_str_len + 1, target_str) < 0)
+ __sync_add_and_fetch(&hits, 1);
+ return 0;
+ }
+diff --git a/tools/testing/selftests/mm/cow.c b/tools/testing/selftests/mm/cow.c
+index 9446673645ebab..f0cb14ea860842 100644
+--- a/tools/testing/selftests/mm/cow.c
++++ b/tools/testing/selftests/mm/cow.c
+@@ -876,7 +876,7 @@ static void do_run_with_thp(test_fn fn, enum thp_run thp_run, size_t thpsize)
+ mremap_size = thpsize / 2;
+ mremap_mem = mmap(NULL, mremap_size, PROT_NONE,
+ MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
+- if (mem == MAP_FAILED) {
++ if (mremap_mem == MAP_FAILED) {
+ ksft_test_result_fail("mmap() failed\n");
+ goto munmap;
+ }
+diff --git a/tools/testing/selftests/pcie_bwctrl/Makefile b/tools/testing/selftests/pcie_bwctrl/Makefile
+index 3e84e26341d1ca..48ec048f47afda 100644
+--- a/tools/testing/selftests/pcie_bwctrl/Makefile
++++ b/tools/testing/selftests/pcie_bwctrl/Makefile
+@@ -1,2 +1,2 @@
+-TEST_PROGS = set_pcie_cooling_state.sh
++TEST_PROGS = set_pcie_cooling_state.sh set_pcie_speed.sh
+ include ../lib.mk
+diff --git a/tools/verification/rv/Makefile.rv b/tools/verification/rv/Makefile.rv
+index 161baa29eb86c0..2497fb96c83d27 100644
+--- a/tools/verification/rv/Makefile.rv
++++ b/tools/verification/rv/Makefile.rv
+@@ -27,7 +27,7 @@ endif
+
+ INCLUDE := -Iinclude/
+ CFLAGS := -g -DVERSION=\"$(VERSION)\" $(FOPTS) $(WOPTS) $(EXTRA_CFLAGS) $(INCLUDE)
+-LDFLAGS := -ggdb $(EXTRA_LDFLAGS)
++LDFLAGS := -ggdb $(LDFLAGS) $(EXTRA_LDFLAGS)
+
+ INSTALL := install
+ MKDIR := mkdir
* [gentoo-commits] proj/linux-patches:6.14 commit in: /
@ 2025-04-20 9:36 Mike Pagano
0 siblings, 0 replies; 21+ messages in thread
From: Mike Pagano @ 2025-04-20 9:36 UTC (permalink / raw
To: gentoo-commits
commit: be05deca5d1a774ad878353d30de626216ae81ee
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Apr 20 09:36:41 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Apr 20 09:36:41 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=be05deca
Linux patch 6.14.3
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1002_linux-6.14.3.patch | 19087 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 19091 insertions(+)
diff --git a/0000_README b/0000_README
index c3c35403..83dda9ca 100644
--- a/0000_README
+++ b/0000_README
@@ -50,6 +50,10 @@ Patch: 1001_linux-6.14.2.patch
From: https://www.kernel.org
Desc: Linux 6.14.2
+Patch: 1002_linux-6.14.3.patch
+From: https://www.kernel.org
+Desc: Linux 6.14.3
+
Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
Desc: Enable link security restrictions by default.
diff --git a/1002_linux-6.14.3.patch b/1002_linux-6.14.3.patch
new file mode 100644
index 00000000..ee92bc5a
--- /dev/null
+++ b/1002_linux-6.14.3.patch
@@ -0,0 +1,19087 @@
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index fb8752b42ec858..aa7447f8837cb7 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -3116,6 +3116,8 @@
+ * max_sec_lba48: Set or clear transfer size limit to
+ 65535 sectors.
+
++ * external: Mark port as external (hotplug-capable).
++
+ * [no]lpm: Enable or disable link power management.
+
+ * [no]setxfer: Indicate if transfer speed mode setting
+diff --git a/Documentation/arch/arm64/silicon-errata.rst b/Documentation/arch/arm64/silicon-errata.rst
+index f074f6219f5c33..f968c13b46a787 100644
+--- a/Documentation/arch/arm64/silicon-errata.rst
++++ b/Documentation/arch/arm64/silicon-errata.rst
+@@ -284,6 +284,8 @@ stable kernels.
+ +----------------+-----------------+-----------------+-----------------------------+
+ | Rockchip | RK3588 | #3588001 | ROCKCHIP_ERRATUM_3588001 |
+ +----------------+-----------------+-----------------+-----------------------------+
++| Rockchip | RK3568 | #3568002 | ROCKCHIP_ERRATUM_3568002 |
+++----------------+-----------------+-----------------+-----------------------------+
+ +----------------+-----------------+-----------------+-----------------------------+
+ | Fujitsu | A64FX | E#010001 | FUJITSU_ERRATUM_010001 |
+ +----------------+-----------------+-----------------+-----------------------------+
+diff --git a/Documentation/devicetree/bindings/arm/qcom,coresight-tpda.yaml b/Documentation/devicetree/bindings/arm/qcom,coresight-tpda.yaml
+index 76163abed655a2..5ed40f21b8eb5d 100644
+--- a/Documentation/devicetree/bindings/arm/qcom,coresight-tpda.yaml
++++ b/Documentation/devicetree/bindings/arm/qcom,coresight-tpda.yaml
+@@ -55,8 +55,7 @@ properties:
+ - const: arm,primecell
+
+ reg:
+- minItems: 1
+- maxItems: 2
++ maxItems: 1
+
+ clocks:
+ maxItems: 1
+diff --git a/Documentation/devicetree/bindings/arm/qcom,coresight-tpdm.yaml b/Documentation/devicetree/bindings/arm/qcom,coresight-tpdm.yaml
+index 8eec07d9d45428..07d21a3617f5b2 100644
+--- a/Documentation/devicetree/bindings/arm/qcom,coresight-tpdm.yaml
++++ b/Documentation/devicetree/bindings/arm/qcom,coresight-tpdm.yaml
+@@ -41,8 +41,7 @@ properties:
+ - const: arm,primecell
+
+ reg:
+- minItems: 1
+- maxItems: 2
++ maxItems: 1
+
+ qcom,dsb-element-bits:
+ description:
+diff --git a/Documentation/devicetree/bindings/media/i2c/st,st-mipid02.yaml b/Documentation/devicetree/bindings/media/i2c/st,st-mipid02.yaml
+index b68141264c0e9f..4d40e75b4e1eff 100644
+--- a/Documentation/devicetree/bindings/media/i2c/st,st-mipid02.yaml
++++ b/Documentation/devicetree/bindings/media/i2c/st,st-mipid02.yaml
+@@ -71,7 +71,7 @@ properties:
+ description:
+ Any lane can be inverted or not.
+ minItems: 1
+- maxItems: 2
++ maxItems: 3
+
+ required:
+ - data-lanes
+diff --git a/Makefile b/Makefile
+index 907a4565f06ab4..93870f58505f51 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 14
+-SUBLEVEL = 2
++SUBLEVEL = 3
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+@@ -1065,6 +1065,9 @@ ifdef CONFIG_CC_IS_GCC
+ KBUILD_CFLAGS += -fconserve-stack
+ endif
+
++# Ensure compilers do not transform certain loops into calls to wcslen()
++KBUILD_CFLAGS += -fno-builtin-wcslen
++
+ # change __FILE__ to the relative path to the source directory
+ ifdef building_out_of_srctree
+ KBUILD_CPPFLAGS += $(call cc-option,-fmacro-prefix-map=$(srcroot)/=)
+diff --git a/arch/arm/lib/crc-t10dif-glue.c b/arch/arm/lib/crc-t10dif-glue.c
+index d24dee62670ec5..4ab8daa4ec0bcb 100644
+--- a/arch/arm/lib/crc-t10dif-glue.c
++++ b/arch/arm/lib/crc-t10dif-glue.c
+@@ -44,9 +44,7 @@ u16 crc_t10dif_arch(u16 crc, const u8 *data, size_t length)
+ crc_t10dif_pmull8(crc, data, length, buf);
+ kernel_neon_end();
+
+- crc = 0;
+- data = buf;
+- length = sizeof(buf);
++ return crc_t10dif_generic(0, buf, sizeof(buf));
+ }
+ }
+ return crc_t10dif_generic(crc, data, length);
+diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
+index 940343beb3d4cd..3e7483ad5276c3 100644
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -1302,6 +1302,15 @@ config NVIDIA_CARMEL_CNP_ERRATUM
+
+ If unsure, say Y.
+
++config ROCKCHIP_ERRATUM_3568002
++ bool "Rockchip 3568002: GIC600 can not access physical addresses higher than 4GB"
++ default y
++ help
++ The Rockchip RK3566 and RK3568 GIC600 SoC integrations have AXI
++ addressing limited to the first 32bit of physical address space.
++
++ If unsure, say Y.
++
+ config ROCKCHIP_ERRATUM_3588001
+ bool "Rockchip 3588001: GIC600 can not support shareability attributes"
+ default y
+diff --git a/arch/arm64/boot/dts/exynos/google/gs101.dtsi b/arch/arm64/boot/dts/exynos/google/gs101.dtsi
+index c5335dd59dfe9f..813f960895784c 100644
+--- a/arch/arm64/boot/dts/exynos/google/gs101.dtsi
++++ b/arch/arm64/boot/dts/exynos/google/gs101.dtsi
+@@ -1454,6 +1454,7 @@ pinctrl_gsacore: pinctrl@17a80000 {
+ /* TODO: update once support for this CMU exists */
+ clocks = <0>;
+ clock-names = "pclk";
++ status = "disabled";
+ };
+
+ cmu_top: clock-controller@1e080000 {
+diff --git a/arch/arm64/boot/dts/mediatek/mt8173.dtsi b/arch/arm64/boot/dts/mediatek/mt8173.dtsi
+index 0ca63e8c4e16ce..6d1d8877b43f24 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8173.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8173.dtsi
+@@ -1255,8 +1255,7 @@ dpi0_out: endpoint {
+ };
+
+ pwm0: pwm@1401e000 {
+- compatible = "mediatek,mt8173-disp-pwm",
+- "mediatek,mt6595-disp-pwm";
++ compatible = "mediatek,mt8173-disp-pwm";
+ reg = <0 0x1401e000 0 0x1000>;
+ #pwm-cells = <2>;
+ clocks = <&mmsys CLK_MM_DISP_PWM026M>,
+@@ -1266,8 +1265,7 @@ pwm0: pwm@1401e000 {
+ };
+
+ pwm1: pwm@1401f000 {
+- compatible = "mediatek,mt8173-disp-pwm",
+- "mediatek,mt6595-disp-pwm";
++ compatible = "mediatek,mt8173-disp-pwm";
+ reg = <0 0x1401f000 0 0x1000>;
+ #pwm-cells = <2>;
+ clocks = <&mmsys CLK_MM_DISP_PWM126M>,
+diff --git a/arch/arm64/boot/dts/mediatek/mt8188.dtsi b/arch/arm64/boot/dts/mediatek/mt8188.dtsi
+index 338120930b8196..17e22d4515ab69 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8188.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8188.dtsi
+@@ -1392,7 +1392,7 @@ afe: audio-controller@10b10000 {
+ compatible = "mediatek,mt8188-afe";
+ reg = <0 0x10b10000 0 0x10000>;
+ assigned-clocks = <&topckgen CLK_TOP_A1SYS_HP>;
+- assigned-clock-parents = <&clk26m>;
++ assigned-clock-parents = <&topckgen CLK_TOP_APLL1_D4>;
+ clocks = <&clk26m>,
+ <&apmixedsys CLK_APMIXED_APLL1>,
+ <&apmixedsys CLK_APMIXED_APLL2>,
+diff --git a/arch/arm64/boot/dts/nvidia/tegra234-p3768-0000+p3767.dtsi b/arch/arm64/boot/dts/nvidia/tegra234-p3768-0000+p3767.dtsi
+index 19340d13f789f0..41821354bbdae6 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra234-p3768-0000+p3767.dtsi
++++ b/arch/arm64/boot/dts/nvidia/tegra234-p3768-0000+p3767.dtsi
+@@ -227,13 +227,6 @@ key-power {
+ wakeup-event-action = <EV_ACT_ASSERTED>;
+ wakeup-source;
+ };
+-
+- key-suspend {
+- label = "Suspend";
+- gpios = <&gpio TEGRA234_MAIN_GPIO(G, 2) GPIO_ACTIVE_LOW>;
+- linux,input-type = <EV_KEY>;
+- linux,code = <KEY_SLEEP>;
+- };
+ };
+
+ fan: pwm-fan {
+diff --git a/arch/arm64/boot/dts/ti/k3-j784s4-j742s2-main-common.dtsi b/arch/arm64/boot/dts/ti/k3-j784s4-j742s2-main-common.dtsi
+index 83bbf94b58d157..1944616ab3579a 100644
+--- a/arch/arm64/boot/dts/ti/k3-j784s4-j742s2-main-common.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j784s4-j742s2-main-common.dtsi
+@@ -84,7 +84,9 @@ serdes_ln_ctrl: mux-controller@4080 {
+ <0x10 0x3>, <0x14 0x3>, /* SERDES1 lane0/1 select */
+ <0x18 0x3>, <0x1c 0x3>, /* SERDES1 lane2/3 select */
+ <0x20 0x3>, <0x24 0x3>, /* SERDES2 lane0/1 select */
+- <0x28 0x3>, <0x2c 0x3>; /* SERDES2 lane2/3 select */
++ <0x28 0x3>, <0x2c 0x3>, /* SERDES2 lane2/3 select */
++ <0x40 0x3>, <0x44 0x3>, /* SERDES4 lane0/1 select */
++ <0x48 0x3>, <0x4c 0x3>; /* SERDES4 lane2/3 select */
+ idle-states = <J784S4_SERDES0_LANE0_PCIE1_LANE0>,
+ <J784S4_SERDES0_LANE1_PCIE1_LANE1>,
+ <J784S4_SERDES0_LANE2_IP3_UNUSED>,
+@@ -193,7 +195,7 @@ gic500: interrupt-controller@1800000 {
+ ranges;
+ #interrupt-cells = <3>;
+ interrupt-controller;
+- reg = <0x00 0x01800000 0x00 0x200000>, /* GICD */
++ reg = <0x00 0x01800000 0x00 0x10000>, /* GICD */
+ <0x00 0x01900000 0x00 0x100000>, /* GICR */
+ <0x00 0x6f000000 0x00 0x2000>, /* GICC */
+ <0x00 0x6f010000 0x00 0x1000>, /* GICH */
+diff --git a/arch/arm64/include/asm/cputype.h b/arch/arm64/include/asm/cputype.h
+index 6f3f4142e214f7..41c21feaef4ad9 100644
+--- a/arch/arm64/include/asm/cputype.h
++++ b/arch/arm64/include/asm/cputype.h
+@@ -75,6 +75,7 @@
+ #define ARM_CPU_PART_CORTEX_A76 0xD0B
+ #define ARM_CPU_PART_NEOVERSE_N1 0xD0C
+ #define ARM_CPU_PART_CORTEX_A77 0xD0D
++#define ARM_CPU_PART_CORTEX_A76AE 0xD0E
+ #define ARM_CPU_PART_NEOVERSE_V1 0xD40
+ #define ARM_CPU_PART_CORTEX_A78 0xD41
+ #define ARM_CPU_PART_CORTEX_A78AE 0xD42
+@@ -119,6 +120,7 @@
+ #define QCOM_CPU_PART_KRYO 0x200
+ #define QCOM_CPU_PART_KRYO_2XX_GOLD 0x800
+ #define QCOM_CPU_PART_KRYO_2XX_SILVER 0x801
++#define QCOM_CPU_PART_KRYO_3XX_GOLD 0x802
+ #define QCOM_CPU_PART_KRYO_3XX_SILVER 0x803
+ #define QCOM_CPU_PART_KRYO_4XX_GOLD 0x804
+ #define QCOM_CPU_PART_KRYO_4XX_SILVER 0x805
+@@ -159,6 +161,7 @@
+ #define MIDR_CORTEX_A76 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A76)
+ #define MIDR_NEOVERSE_N1 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_N1)
+ #define MIDR_CORTEX_A77 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A77)
++#define MIDR_CORTEX_A76AE MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A76AE)
+ #define MIDR_NEOVERSE_V1 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_V1)
+ #define MIDR_CORTEX_A78 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A78)
+ #define MIDR_CORTEX_A78AE MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A78AE)
+@@ -196,6 +199,7 @@
+ #define MIDR_QCOM_KRYO MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_KRYO)
+ #define MIDR_QCOM_KRYO_2XX_GOLD MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_KRYO_2XX_GOLD)
+ #define MIDR_QCOM_KRYO_2XX_SILVER MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_KRYO_2XX_SILVER)
++#define MIDR_QCOM_KRYO_3XX_GOLD MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_KRYO_3XX_GOLD)
+ #define MIDR_QCOM_KRYO_3XX_SILVER MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_KRYO_3XX_SILVER)
+ #define MIDR_QCOM_KRYO_4XX_GOLD MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_KRYO_4XX_GOLD)
+ #define MIDR_QCOM_KRYO_4XX_SILVER MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_KRYO_4XX_SILVER)
+diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
+index c2417a424b98dd..974d72b5905b86 100644
+--- a/arch/arm64/include/asm/kvm_arm.h
++++ b/arch/arm64/include/asm/kvm_arm.h
+@@ -92,12 +92,12 @@
+ * SWIO: Turn set/way invalidates into set/way clean+invalidate
+ * PTW: Take a stage2 fault if a stage1 walk steps in device memory
+ * TID3: Trap EL1 reads of group 3 ID registers
+- * TID2: Trap CTR_EL0, CCSIDR2_EL1, CLIDR_EL1, and CSSELR_EL1
++ * TID1: Trap REVIDR_EL1, AIDR_EL1, and SMIDR_EL1
+ */
+ #define HCR_GUEST_FLAGS (HCR_TSC | HCR_TSW | HCR_TWE | HCR_TWI | HCR_VM | \
+ HCR_BSU_IS | HCR_FB | HCR_TACR | \
+ HCR_AMO | HCR_SWIO | HCR_TIDCP | HCR_RW | HCR_TLOR | \
+- HCR_FMO | HCR_IMO | HCR_PTW | HCR_TID3)
++ HCR_FMO | HCR_IMO | HCR_PTW | HCR_TID3 | HCR_TID1)
+ #define HCR_HOST_NVHE_FLAGS (HCR_RW | HCR_API | HCR_APK | HCR_ATA)
+ #define HCR_HOST_NVHE_PROTECTED_FLAGS (HCR_HOST_NVHE_FLAGS | HCR_TSC)
+ #define HCR_HOST_VHE_FLAGS (HCR_RW | HCR_TGE | HCR_E2H)
+diff --git a/arch/arm64/include/asm/spectre.h b/arch/arm64/include/asm/spectre.h
+index 0c4d9045c31f47..f1524cdeacf1c4 100644
+--- a/arch/arm64/include/asm/spectre.h
++++ b/arch/arm64/include/asm/spectre.h
+@@ -97,7 +97,6 @@ enum mitigation_state arm64_get_meltdown_state(void);
+
+ enum mitigation_state arm64_get_spectre_bhb_state(void);
+ bool is_spectre_bhb_affected(const struct arm64_cpu_capabilities *entry, int scope);
+-u8 spectre_bhb_loop_affected(int scope);
+ void spectre_bhb_enable_mitigation(const struct arm64_cpu_capabilities *__unused);
+ bool try_emulate_el1_ssbs(struct pt_regs *regs, u32 instr);
+
+diff --git a/arch/arm64/include/asm/traps.h b/arch/arm64/include/asm/traps.h
+index d780d1bd2eacb9..82cf1f879c61df 100644
+--- a/arch/arm64/include/asm/traps.h
++++ b/arch/arm64/include/asm/traps.h
+@@ -109,10 +109,9 @@ static inline void arm64_mops_reset_regs(struct user_pt_regs *regs, unsigned lon
+ int dstreg = ESR_ELx_MOPS_ISS_DESTREG(esr);
+ int srcreg = ESR_ELx_MOPS_ISS_SRCREG(esr);
+ int sizereg = ESR_ELx_MOPS_ISS_SIZEREG(esr);
+- unsigned long dst, src, size;
++ unsigned long dst, size;
+
+ dst = regs->regs[dstreg];
+- src = regs->regs[srcreg];
+ size = regs->regs[sizereg];
+
+ /*
+@@ -129,6 +128,7 @@ static inline void arm64_mops_reset_regs(struct user_pt_regs *regs, unsigned lon
+ }
+ } else {
+ /* CPY* instruction */
++ unsigned long src = regs->regs[srcreg];
+ if (!(option_a ^ wrong_option)) {
+ /* Format is from Option B */
+ if (regs->pstate & PSR_N_BIT) {
+diff --git a/arch/arm64/kernel/proton-pack.c b/arch/arm64/kernel/proton-pack.c
+index da53722f95d41a..0f51fd10b4b063 100644
+--- a/arch/arm64/kernel/proton-pack.c
++++ b/arch/arm64/kernel/proton-pack.c
+@@ -845,52 +845,86 @@ static unsigned long system_bhb_mitigations;
+ * This must be called with SCOPE_LOCAL_CPU for each type of CPU, before any
+ * SCOPE_SYSTEM call will give the right answer.
+ */
+-u8 spectre_bhb_loop_affected(int scope)
++static bool is_spectre_bhb_safe(int scope)
++{
++ static const struct midr_range spectre_bhb_safe_list[] = {
++ MIDR_ALL_VERSIONS(MIDR_CORTEX_A35),
++ MIDR_ALL_VERSIONS(MIDR_CORTEX_A53),
++ MIDR_ALL_VERSIONS(MIDR_CORTEX_A55),
++ MIDR_ALL_VERSIONS(MIDR_CORTEX_A510),
++ MIDR_ALL_VERSIONS(MIDR_CORTEX_A520),
++ MIDR_ALL_VERSIONS(MIDR_BRAHMA_B53),
++ MIDR_ALL_VERSIONS(MIDR_QCOM_KRYO_2XX_SILVER),
++ MIDR_ALL_VERSIONS(MIDR_QCOM_KRYO_3XX_SILVER),
++ MIDR_ALL_VERSIONS(MIDR_QCOM_KRYO_4XX_SILVER),
++ {},
++ };
++ static bool all_safe = true;
++
++ if (scope != SCOPE_LOCAL_CPU)
++ return all_safe;
++
++ if (is_midr_in_range_list(read_cpuid_id(), spectre_bhb_safe_list))
++ return true;
++
++ all_safe = false;
++
++ return false;
++}
++
++static u8 spectre_bhb_loop_affected(void)
+ {
+ u8 k = 0;
+- static u8 max_bhb_k;
+-
+- if (scope == SCOPE_LOCAL_CPU) {
+- static const struct midr_range spectre_bhb_k32_list[] = {
+- MIDR_ALL_VERSIONS(MIDR_CORTEX_A78),
+- MIDR_ALL_VERSIONS(MIDR_CORTEX_A78AE),
+- MIDR_ALL_VERSIONS(MIDR_CORTEX_A78C),
+- MIDR_ALL_VERSIONS(MIDR_CORTEX_X1),
+- MIDR_ALL_VERSIONS(MIDR_CORTEX_A710),
+- MIDR_ALL_VERSIONS(MIDR_CORTEX_X2),
+- MIDR_ALL_VERSIONS(MIDR_NEOVERSE_N2),
+- MIDR_ALL_VERSIONS(MIDR_NEOVERSE_V1),
+- {},
+- };
+- static const struct midr_range spectre_bhb_k24_list[] = {
+- MIDR_ALL_VERSIONS(MIDR_CORTEX_A76),
+- MIDR_ALL_VERSIONS(MIDR_CORTEX_A77),
+- MIDR_ALL_VERSIONS(MIDR_NEOVERSE_N1),
+- {},
+- };
+- static const struct midr_range spectre_bhb_k11_list[] = {
+- MIDR_ALL_VERSIONS(MIDR_AMPERE1),
+- {},
+- };
+- static const struct midr_range spectre_bhb_k8_list[] = {
+- MIDR_ALL_VERSIONS(MIDR_CORTEX_A72),
+- MIDR_ALL_VERSIONS(MIDR_CORTEX_A57),
+- {},
+- };
+-
+- if (is_midr_in_range_list(read_cpuid_id(), spectre_bhb_k32_list))
+- k = 32;
+- else if (is_midr_in_range_list(read_cpuid_id(), spectre_bhb_k24_list))
+- k = 24;
+- else if (is_midr_in_range_list(read_cpuid_id(), spectre_bhb_k11_list))
+- k = 11;
+- else if (is_midr_in_range_list(read_cpuid_id(), spectre_bhb_k8_list))
+- k = 8;
+-
+- max_bhb_k = max(max_bhb_k, k);
+- } else {
+- k = max_bhb_k;
+- }
++
++ static const struct midr_range spectre_bhb_k132_list[] = {
++ MIDR_ALL_VERSIONS(MIDR_CORTEX_X3),
++ MIDR_ALL_VERSIONS(MIDR_NEOVERSE_V2),
++ };
++ static const struct midr_range spectre_bhb_k38_list[] = {
++ MIDR_ALL_VERSIONS(MIDR_CORTEX_A715),
++ MIDR_ALL_VERSIONS(MIDR_CORTEX_A720),
++ };
++ static const struct midr_range spectre_bhb_k32_list[] = {
++ MIDR_ALL_VERSIONS(MIDR_CORTEX_A78),
++ MIDR_ALL_VERSIONS(MIDR_CORTEX_A78AE),
++ MIDR_ALL_VERSIONS(MIDR_CORTEX_A78C),
++ MIDR_ALL_VERSIONS(MIDR_CORTEX_X1),
++ MIDR_ALL_VERSIONS(MIDR_CORTEX_A710),
++ MIDR_ALL_VERSIONS(MIDR_CORTEX_X2),
++ MIDR_ALL_VERSIONS(MIDR_NEOVERSE_N2),
++ MIDR_ALL_VERSIONS(MIDR_NEOVERSE_V1),
++ {},
++ };
++ static const struct midr_range spectre_bhb_k24_list[] = {
++ MIDR_ALL_VERSIONS(MIDR_CORTEX_A76),
++ MIDR_ALL_VERSIONS(MIDR_CORTEX_A76AE),
++ MIDR_ALL_VERSIONS(MIDR_CORTEX_A77),
++ MIDR_ALL_VERSIONS(MIDR_NEOVERSE_N1),
++ MIDR_ALL_VERSIONS(MIDR_QCOM_KRYO_4XX_GOLD),
++ {},
++ };
++ static const struct midr_range spectre_bhb_k11_list[] = {
++ MIDR_ALL_VERSIONS(MIDR_AMPERE1),
++ {},
++ };
++ static const struct midr_range spectre_bhb_k8_list[] = {
++ MIDR_ALL_VERSIONS(MIDR_CORTEX_A72),
++ MIDR_ALL_VERSIONS(MIDR_CORTEX_A57),
++ {},
++ };
++
++ if (is_midr_in_range_list(read_cpuid_id(), spectre_bhb_k132_list))
++ k = 132;
++ else if (is_midr_in_range_list(read_cpuid_id(), spectre_bhb_k38_list))
++ k = 38;
++ else if (is_midr_in_range_list(read_cpuid_id(), spectre_bhb_k32_list))
++ k = 32;
++ else if (is_midr_in_range_list(read_cpuid_id(), spectre_bhb_k24_list))
++ k = 24;
++ else if (is_midr_in_range_list(read_cpuid_id(), spectre_bhb_k11_list))
++ k = 11;
++ else if (is_midr_in_range_list(read_cpuid_id(), spectre_bhb_k8_list))
++ k = 8;
+
+ return k;
+ }
+@@ -916,29 +950,13 @@ static enum mitigation_state spectre_bhb_get_cpu_fw_mitigation_state(void)
+ }
+ }
+
+-static bool is_spectre_bhb_fw_affected(int scope)
++static bool has_spectre_bhb_fw_mitigation(void)
+ {
+- static bool system_affected;
+ enum mitigation_state fw_state;
+ bool has_smccc = arm_smccc_1_1_get_conduit() != SMCCC_CONDUIT_NONE;
+- static const struct midr_range spectre_bhb_firmware_mitigated_list[] = {
+- MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
+- MIDR_ALL_VERSIONS(MIDR_CORTEX_A75),
+- {},
+- };
+- bool cpu_in_list = is_midr_in_range_list(read_cpuid_id(),
+- spectre_bhb_firmware_mitigated_list);
+-
+- if (scope != SCOPE_LOCAL_CPU)
+- return system_affected;
+
+ fw_state = spectre_bhb_get_cpu_fw_mitigation_state();
+- if (cpu_in_list || (has_smccc && fw_state == SPECTRE_MITIGATED)) {
+- system_affected = true;
+- return true;
+- }
+-
+- return false;
++ return has_smccc && fw_state == SPECTRE_MITIGATED;
+ }
+
+ static bool supports_ecbhb(int scope)
+@@ -954,6 +972,8 @@ static bool supports_ecbhb(int scope)
+ ID_AA64MMFR1_EL1_ECBHB_SHIFT);
+ }
+
++static u8 max_bhb_k;
++
+ bool is_spectre_bhb_affected(const struct arm64_cpu_capabilities *entry,
+ int scope)
+ {
+@@ -962,16 +982,18 @@ bool is_spectre_bhb_affected(const struct arm64_cpu_capabilities *entry,
+ if (supports_csv2p3(scope))
+ return false;
+
+- if (supports_clearbhb(scope))
+- return true;
+-
+- if (spectre_bhb_loop_affected(scope))
+- return true;
++ if (is_spectre_bhb_safe(scope))
++ return false;
+
+- if (is_spectre_bhb_fw_affected(scope))
+- return true;
++ /*
++ * At this point the core isn't known to be "safe" so we're going to
++ * assume it's vulnerable. We still need to update `max_bhb_k` though,
++ * but only if we aren't mitigating with clearbhb.
++ */
++ if (scope == SCOPE_LOCAL_CPU && !supports_clearbhb(SCOPE_LOCAL_CPU))
++ max_bhb_k = max(max_bhb_k, spectre_bhb_loop_affected());
+
+- return false;
++ return true;
+ }
+
+ static void this_cpu_set_vectors(enum arm64_bp_harden_el1_vectors slot)
+@@ -1002,7 +1024,7 @@ early_param("nospectre_bhb", parse_spectre_bhb_param);
+ void spectre_bhb_enable_mitigation(const struct arm64_cpu_capabilities *entry)
+ {
+ bp_hardening_cb_t cpu_cb;
+- enum mitigation_state fw_state, state = SPECTRE_VULNERABLE;
++ enum mitigation_state state = SPECTRE_VULNERABLE;
+ struct bp_hardening_data *data = this_cpu_ptr(&bp_hardening_data);
+
+ if (!is_spectre_bhb_affected(entry, SCOPE_LOCAL_CPU))
+@@ -1028,7 +1050,7 @@ void spectre_bhb_enable_mitigation(const struct arm64_cpu_capabilities *entry)
+ this_cpu_set_vectors(EL1_VECTOR_BHB_CLEAR_INSN);
+ state = SPECTRE_MITIGATED;
+ set_bit(BHB_INSN, &system_bhb_mitigations);
+- } else if (spectre_bhb_loop_affected(SCOPE_LOCAL_CPU)) {
++ } else if (spectre_bhb_loop_affected()) {
+ /*
+ * Ensure KVM uses the indirect vector which will have the
+ * branchy-loop added. A57/A72-r0 will already have selected
+@@ -1041,32 +1063,29 @@ void spectre_bhb_enable_mitigation(const struct arm64_cpu_capabilities *entry)
+ this_cpu_set_vectors(EL1_VECTOR_BHB_LOOP);
+ state = SPECTRE_MITIGATED;
+ set_bit(BHB_LOOP, &system_bhb_mitigations);
+- } else if (is_spectre_bhb_fw_affected(SCOPE_LOCAL_CPU)) {
+- fw_state = spectre_bhb_get_cpu_fw_mitigation_state();
+- if (fw_state == SPECTRE_MITIGATED) {
+- /*
+- * Ensure KVM uses one of the spectre bp_hardening
+- * vectors. The indirect vector doesn't include the EL3
+- * call, so needs upgrading to
+- * HYP_VECTOR_SPECTRE_INDIRECT.
+- */
+- if (!data->slot || data->slot == HYP_VECTOR_INDIRECT)
+- data->slot += 1;
+-
+- this_cpu_set_vectors(EL1_VECTOR_BHB_FW);
+-
+- /*
+- * The WA3 call in the vectors supersedes the WA1 call
+- * made during context-switch. Uninstall any firmware
+- * bp_hardening callback.
+- */
+- cpu_cb = spectre_v2_get_sw_mitigation_cb();
+- if (__this_cpu_read(bp_hardening_data.fn) != cpu_cb)
+- __this_cpu_write(bp_hardening_data.fn, NULL);
+-
+- state = SPECTRE_MITIGATED;
+- set_bit(BHB_FW, &system_bhb_mitigations);
+- }
++ } else if (has_spectre_bhb_fw_mitigation()) {
++ /*
++ * Ensure KVM uses one of the spectre bp_hardening
++ * vectors. The indirect vector doesn't include the EL3
++ * call, so needs upgrading to
++ * HYP_VECTOR_SPECTRE_INDIRECT.
++ */
++ if (!data->slot || data->slot == HYP_VECTOR_INDIRECT)
++ data->slot += 1;
++
++ this_cpu_set_vectors(EL1_VECTOR_BHB_FW);
++
++ /*
++ * The WA3 call in the vectors supersedes the WA1 call
++ * made during context-switch. Uninstall any firmware
++ * bp_hardening callback.
++ */
++ cpu_cb = spectre_v2_get_sw_mitigation_cb();
++ if (__this_cpu_read(bp_hardening_data.fn) != cpu_cb)
++ __this_cpu_write(bp_hardening_data.fn, NULL);
++
++ state = SPECTRE_MITIGATED;
++ set_bit(BHB_FW, &system_bhb_mitigations);
+ }
+
+ update_mitigation_state(&spectre_bhb_state, state);
+@@ -1100,7 +1119,6 @@ void noinstr spectre_bhb_patch_loop_iter(struct alt_instr *alt,
+ {
+ u8 rd;
+ u32 insn;
+- u16 loop_count = spectre_bhb_loop_affected(SCOPE_SYSTEM);
+
+ BUG_ON(nr_inst != 1); /* MOV -> MOV */
+
+@@ -1109,7 +1127,7 @@ void noinstr spectre_bhb_patch_loop_iter(struct alt_instr *alt,
+
+ insn = le32_to_cpu(*origptr);
+ rd = aarch64_insn_decode_register(AARCH64_INSN_REGTYPE_RD, insn);
+- insn = aarch64_insn_gen_movewide(rd, loop_count, 0,
++ insn = aarch64_insn_gen_movewide(rd, max_bhb_k, 0,
+ AARCH64_INSN_VARIANT_64BIT,
+ AARCH64_INSN_MOVEWIDE_ZERO);
+ *updptr++ = cpu_to_le32(insn);
+diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
+index 0160b492435113..1a479df5d78eed 100644
+--- a/arch/arm64/kvm/arm.c
++++ b/arch/arm64/kvm/arm.c
+@@ -466,7 +466,11 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
+ if (err)
+ return err;
+
+- return kvm_share_hyp(vcpu, vcpu + 1);
++ err = kvm_share_hyp(vcpu, vcpu + 1);
++ if (err)
++ kvm_vgic_vcpu_destroy(vcpu);
++
++ return err;
+ }
+
+ void kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu)
+diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
+index 82430c1e1dd02b..011ef6dcfcd224 100644
+--- a/arch/arm64/kvm/sys_regs.c
++++ b/arch/arm64/kvm/sys_regs.c
+@@ -1051,26 +1051,9 @@ static bool access_pmu_evtyper(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+
+ static int set_pmreg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r, u64 val)
+ {
+- bool set;
+-
+- val &= kvm_pmu_accessible_counter_mask(vcpu);
+-
+- switch (r->reg) {
+- case PMOVSSET_EL0:
+- /* CRm[1] being set indicates a SET register, and CLR otherwise */
+- set = r->CRm & 2;
+- break;
+- default:
+- /* Op2[0] being set indicates a SET register, and CLR otherwise */
+- set = r->Op2 & 1;
+- break;
+- }
+-
+- if (set)
+- __vcpu_sys_reg(vcpu, r->reg) |= val;
+- else
+- __vcpu_sys_reg(vcpu, r->reg) &= ~val;
++ u64 mask = kvm_pmu_accessible_counter_mask(vcpu);
+
++ __vcpu_sys_reg(vcpu, r->reg) = val & mask;
+ return 0;
+ }
+
+@@ -2493,6 +2476,93 @@ static bool access_mdcr(struct kvm_vcpu *vcpu,
+ return true;
+ }
+
++/*
++ * For historical (ahem ABI) reasons, KVM treated MIDR_EL1, REVIDR_EL1, and
++ * AIDR_EL1 as "invariant" registers, meaning userspace cannot change them.
++ * The values made visible to userspace were the register values of the boot
++ * CPU.
++ *
++ * At the same time, reads from these registers at EL1 previously were not
++ * trapped, allowing the guest to read the actual hardware value. On big-little
++ * machines, this means the VM can see different values depending on where a
++ * given vCPU got scheduled.
++ *
++ * These registers are now trapped as collateral damage from SME, and what
++ * follows attempts to give a user / guest view consistent with the existing
++ * ABI.
++ */
++static bool access_imp_id_reg(struct kvm_vcpu *vcpu,
++ struct sys_reg_params *p,
++ const struct sys_reg_desc *r)
++{
++ if (p->is_write)
++ return write_to_read_only(vcpu, p, r);
++
++ switch (reg_to_encoding(r)) {
++ case SYS_REVIDR_EL1:
++ p->regval = read_sysreg(revidr_el1);
++ break;
++ case SYS_AIDR_EL1:
++ p->regval = read_sysreg(aidr_el1);
++ break;
++ default:
++ WARN_ON_ONCE(1);
++ }
++
++ return true;
++}
++
++static u64 __ro_after_init boot_cpu_midr_val;
++static u64 __ro_after_init boot_cpu_revidr_val;
++static u64 __ro_after_init boot_cpu_aidr_val;
++
++static void init_imp_id_regs(void)
++{
++ boot_cpu_midr_val = read_sysreg(midr_el1);
++ boot_cpu_revidr_val = read_sysreg(revidr_el1);
++ boot_cpu_aidr_val = read_sysreg(aidr_el1);
++}
++
++static int get_imp_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
++ u64 *val)
++{
++ switch (reg_to_encoding(r)) {
++ case SYS_MIDR_EL1:
++ *val = boot_cpu_midr_val;
++ break;
++ case SYS_REVIDR_EL1:
++ *val = boot_cpu_revidr_val;
++ break;
++ case SYS_AIDR_EL1:
++ *val = boot_cpu_aidr_val;
++ break;
++ default:
++ WARN_ON_ONCE(1);
++ return -EINVAL;
++ }
++
++ return 0;
++}
++
++static int set_imp_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
++ u64 val)
++{
++ u64 expected;
++ int ret;
++
++ ret = get_imp_id_reg(vcpu, r, &expected);
++ if (ret)
++ return ret;
++
++ return (expected == val) ? 0 : -EINVAL;
++}
++
++#define IMPLEMENTATION_ID(reg) { \
++ SYS_DESC(SYS_##reg), \
++ .access = access_imp_id_reg, \
++ .get_user = get_imp_id_reg, \
++ .set_user = set_imp_id_reg, \
++}
+
+ /*
+ * Architected system registers.
+@@ -2542,7 +2612,9 @@ static const struct sys_reg_desc sys_reg_descs[] = {
+
+ { SYS_DESC(SYS_DBGVCR32_EL2), undef_access, reset_val, DBGVCR32_EL2, 0 },
+
++ IMPLEMENTATION_ID(MIDR_EL1),
+ { SYS_DESC(SYS_MPIDR_EL1), NULL, reset_mpidr, MPIDR_EL1 },
++ IMPLEMENTATION_ID(REVIDR_EL1),
+
+ /*
+ * ID regs: all ID_SANITISED() entries here must have corresponding
+@@ -2814,6 +2886,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
+ .set_user = set_clidr, .val = ~CLIDR_EL1_RES0 },
+ { SYS_DESC(SYS_CCSIDR2_EL1), undef_access },
+ { SYS_DESC(SYS_SMIDR_EL1), undef_access },
++ IMPLEMENTATION_ID(AIDR_EL1),
+ { SYS_DESC(SYS_CSSELR_EL1), access_csselr, reset_unknown, CSSELR_EL1 },
+ ID_FILTERED(CTR_EL0, ctr_el0,
+ CTR_EL0_DIC_MASK |
+@@ -4272,9 +4345,13 @@ int kvm_handle_cp15_32(struct kvm_vcpu *vcpu)
+ * Certain AArch32 ID registers are handled by rerouting to the AArch64
+ * system register table. Registers in the ID range where CRm=0 are
+ * excluded from this scheme as they do not trivially map into AArch64
+- * system register encodings.
++ * system register encodings, except for AIDR/REVIDR.
+ */
+- if (params.Op1 == 0 && params.CRn == 0 && params.CRm)
++ if (params.Op1 == 0 && params.CRn == 0 &&
++ (params.CRm || params.Op2 == 6 /* REVIDR */))
++ return kvm_emulate_cp15_id_reg(vcpu, &params);
++ if (params.Op1 == 1 && params.CRn == 0 &&
++ params.CRm == 0 && params.Op2 == 7 /* AIDR */)
+ return kvm_emulate_cp15_id_reg(vcpu, &params);
+
+ return kvm_handle_cp_32(vcpu, &params, cp15_regs, ARRAY_SIZE(cp15_regs));
+@@ -4578,65 +4655,6 @@ id_to_sys_reg_desc(struct kvm_vcpu *vcpu, u64 id,
+ return r;
+ }
+
+-/*
+- * These are the invariant sys_reg registers: we let the guest see the
+- * host versions of these, so they're part of the guest state.
+- *
+- * A future CPU may provide a mechanism to present different values to
+- * the guest, or a future kvm may trap them.
+- */
+-
+-#define FUNCTION_INVARIANT(reg) \
+- static u64 reset_##reg(struct kvm_vcpu *v, \
+- const struct sys_reg_desc *r) \
+- { \
+- ((struct sys_reg_desc *)r)->val = read_sysreg(reg); \
+- return ((struct sys_reg_desc *)r)->val; \
+- }
+-
+-FUNCTION_INVARIANT(midr_el1)
+-FUNCTION_INVARIANT(revidr_el1)
+-FUNCTION_INVARIANT(aidr_el1)
+-
+-/* ->val is filled in by kvm_sys_reg_table_init() */
+-static struct sys_reg_desc invariant_sys_regs[] __ro_after_init = {
+- { SYS_DESC(SYS_MIDR_EL1), NULL, reset_midr_el1 },
+- { SYS_DESC(SYS_REVIDR_EL1), NULL, reset_revidr_el1 },
+- { SYS_DESC(SYS_AIDR_EL1), NULL, reset_aidr_el1 },
+-};
+-
+-static int get_invariant_sys_reg(u64 id, u64 __user *uaddr)
+-{
+- const struct sys_reg_desc *r;
+-
+- r = get_reg_by_id(id, invariant_sys_regs,
+- ARRAY_SIZE(invariant_sys_regs));
+- if (!r)
+- return -ENOENT;
+-
+- return put_user(r->val, uaddr);
+-}
+-
+-static int set_invariant_sys_reg(u64 id, u64 __user *uaddr)
+-{
+- const struct sys_reg_desc *r;
+- u64 val;
+-
+- r = get_reg_by_id(id, invariant_sys_regs,
+- ARRAY_SIZE(invariant_sys_regs));
+- if (!r)
+- return -ENOENT;
+-
+- if (get_user(val, uaddr))
+- return -EFAULT;
+-
+- /* This is what we mean by invariant: you can't change it. */
+- if (r->val != val)
+- return -EINVAL;
+-
+- return 0;
+-}
+-
+ static int demux_c15_get(struct kvm_vcpu *vcpu, u64 id, void __user *uaddr)
+ {
+ u32 val;
+@@ -4718,15 +4736,10 @@ int kvm_sys_reg_get_user(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg,
+ int kvm_arm_sys_reg_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
+ {
+ void __user *uaddr = (void __user *)(unsigned long)reg->addr;
+- int err;
+
+ if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_DEMUX)
+ return demux_c15_get(vcpu, reg->id, uaddr);
+
+- err = get_invariant_sys_reg(reg->id, uaddr);
+- if (err != -ENOENT)
+- return err;
+-
+ return kvm_sys_reg_get_user(vcpu, reg,
+ sys_reg_descs, ARRAY_SIZE(sys_reg_descs));
+ }
+@@ -4762,15 +4775,10 @@ int kvm_sys_reg_set_user(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg,
+ int kvm_arm_sys_reg_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
+ {
+ void __user *uaddr = (void __user *)(unsigned long)reg->addr;
+- int err;
+
+ if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_DEMUX)
+ return demux_c15_set(vcpu, reg->id, uaddr);
+
+- err = set_invariant_sys_reg(reg->id, uaddr);
+- if (err != -ENOENT)
+- return err;
+-
+ return kvm_sys_reg_set_user(vcpu, reg,
+ sys_reg_descs, ARRAY_SIZE(sys_reg_descs));
+ }
+@@ -4859,23 +4867,14 @@ static int walk_sys_regs(struct kvm_vcpu *vcpu, u64 __user *uind)
+
+ unsigned long kvm_arm_num_sys_reg_descs(struct kvm_vcpu *vcpu)
+ {
+- return ARRAY_SIZE(invariant_sys_regs)
+- + num_demux_regs()
++ return num_demux_regs()
+ + walk_sys_regs(vcpu, (u64 __user *)NULL);
+ }
+
+ int kvm_arm_copy_sys_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
+ {
+- unsigned int i;
+ int err;
+
+- /* Then give them all the invariant registers' indices. */
+- for (i = 0; i < ARRAY_SIZE(invariant_sys_regs); i++) {
+- if (put_user(sys_reg_to_index(&invariant_sys_regs[i]), uindices))
+- return -EFAULT;
+- uindices++;
+- }
+-
+ err = walk_sys_regs(vcpu, uindices);
+ if (err < 0)
+ return err;
+@@ -5101,15 +5100,12 @@ int __init kvm_sys_reg_table_init(void)
+ valid &= check_sysreg_table(cp14_64_regs, ARRAY_SIZE(cp14_64_regs), true);
+ valid &= check_sysreg_table(cp15_regs, ARRAY_SIZE(cp15_regs), true);
+ valid &= check_sysreg_table(cp15_64_regs, ARRAY_SIZE(cp15_64_regs), true);
+- valid &= check_sysreg_table(invariant_sys_regs, ARRAY_SIZE(invariant_sys_regs), false);
+ valid &= check_sysreg_table(sys_insn_descs, ARRAY_SIZE(sys_insn_descs), false);
+
+ if (!valid)
+ return -EINVAL;
+
+- /* We abuse the reset function to overwrite the table itself. */
+- for (i = 0; i < ARRAY_SIZE(invariant_sys_regs); i++)
+- invariant_sys_regs[i].reset(NULL, &invariant_sys_regs[i]);
++ init_imp_id_regs();
+
+ ret = populate_nv_trap_config();
+
+diff --git a/arch/arm64/lib/crc-t10dif-glue.c b/arch/arm64/lib/crc-t10dif-glue.c
+index dab7e379623298..2dcb1e31bbb2d0 100644
+--- a/arch/arm64/lib/crc-t10dif-glue.c
++++ b/arch/arm64/lib/crc-t10dif-glue.c
+@@ -45,9 +45,7 @@ u16 crc_t10dif_arch(u16 crc, const u8 *data, size_t length)
+ crc_t10dif_pmull_p8(crc, data, length, buf);
+ kernel_neon_end();
+
+- crc = 0;
+- data = buf;
+- length = sizeof(buf);
++ return crc_t10dif_generic(0, buf, sizeof(buf));
+ }
+ }
+ return crc_t10dif_generic(crc, data, length);
+diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
+index 1dfe1a8efdbe41..310ff75891efc8 100644
+--- a/arch/arm64/mm/mmu.c
++++ b/arch/arm64/mm/mmu.c
+@@ -1361,7 +1361,8 @@ int arch_add_memory(int nid, u64 start, u64 size,
+ __remove_pgd_mapping(swapper_pg_dir,
+ __phys_to_virt(start), size);
+ else {
+- max_pfn = PFN_UP(start + size);
++ /* Address of hotplugged memory can be smaller */
++ max_pfn = max(max_pfn, PFN_UP(start + size));
+ max_low_pfn = max_pfn;
+ }
+
+diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
+index ce1d91eed231bc..a7138eb18d598f 100644
+--- a/arch/powerpc/kvm/powerpc.c
++++ b/arch/powerpc/kvm/powerpc.c
+@@ -550,12 +550,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
+
+ #ifdef CONFIG_PPC_BOOK3S_64
+ case KVM_CAP_SPAPR_TCE:
++ fallthrough;
+ case KVM_CAP_SPAPR_TCE_64:
+- r = 1;
+- break;
+ case KVM_CAP_SPAPR_TCE_VFIO:
+- r = !!cpu_has_feature(CPU_FTR_HVMODE);
+- break;
+ case KVM_CAP_PPC_RTAS:
+ case KVM_CAP_PPC_FIXUP_HCALL:
+ case KVM_CAP_PPC_ENABLE_HCALL:
+diff --git a/arch/s390/Makefile b/arch/s390/Makefile
+index 5fae311203c269..fd3b70d9aab157 100644
+--- a/arch/s390/Makefile
++++ b/arch/s390/Makefile
+@@ -15,7 +15,7 @@ KBUILD_CFLAGS_MODULE += -fPIC
+ KBUILD_AFLAGS += -m64
+ KBUILD_CFLAGS += -m64
+ KBUILD_CFLAGS += -fPIC
+-LDFLAGS_vmlinux := -no-pie --emit-relocs --discard-none
++LDFLAGS_vmlinux := $(call ld-option,-no-pie) --emit-relocs --discard-none
+ extra_tools := relocs
+ aflags_dwarf := -Wa,-gdwarf-2
+ KBUILD_AFLAGS_DECOMPRESSOR := $(CLANG_FLAGS) -m64 -D__ASSEMBLY__
+diff --git a/arch/s390/kernel/perf_cpum_cf.c b/arch/s390/kernel/perf_cpum_cf.c
+index 33205dd410e470..60a60185b1d4db 100644
+--- a/arch/s390/kernel/perf_cpum_cf.c
++++ b/arch/s390/kernel/perf_cpum_cf.c
+@@ -858,18 +858,13 @@ static int cpumf_pmu_event_type(struct perf_event *event)
+ static int cpumf_pmu_event_init(struct perf_event *event)
+ {
+ unsigned int type = event->attr.type;
+- int err;
++ int err = -ENOENT;
+
+ if (type == PERF_TYPE_HARDWARE || type == PERF_TYPE_RAW)
+ err = __hw_perf_event_init(event, type);
+ else if (event->pmu->type == type)
+ /* Registered as unknown PMU */
+ err = __hw_perf_event_init(event, cpumf_pmu_event_type(event));
+- else
+- return -ENOENT;
+-
+- if (unlikely(err) && event->destroy)
+- event->destroy(event);
+
+ return err;
+ }
+@@ -1819,8 +1814,6 @@ static int cfdiag_event_init(struct perf_event *event)
+ event->destroy = hw_perf_event_destroy;
+
+ err = cfdiag_event_init2(event);
+- if (unlikely(err))
+- event->destroy(event);
+ out:
+ return err;
+ }
+diff --git a/arch/s390/kernel/perf_cpum_sf.c b/arch/s390/kernel/perf_cpum_sf.c
+index 5f60248cb46873..ad22799d8a7d91 100644
+--- a/arch/s390/kernel/perf_cpum_sf.c
++++ b/arch/s390/kernel/perf_cpum_sf.c
+@@ -885,9 +885,6 @@ static int cpumsf_pmu_event_init(struct perf_event *event)
+ event->attr.exclude_idle = 0;
+
+ err = __hw_perf_event_init(event);
+- if (unlikely(err))
+- if (event->destroy)
+- event->destroy(event);
+ return err;
+ }
+
+diff --git a/arch/s390/pci/pci_bus.c b/arch/s390/pci/pci_bus.c
+index 39a481ec4a402d..983d9239feb411 100644
+--- a/arch/s390/pci/pci_bus.c
++++ b/arch/s390/pci/pci_bus.c
+@@ -335,6 +335,9 @@ static bool zpci_bus_is_isolated_vf(struct zpci_bus *zbus, struct zpci_dev *zdev
+ {
+ struct pci_dev *pdev;
+
++ if (!zdev->vfn)
++ return false;
++
+ pdev = zpci_iov_find_parent_pf(zbus, zdev);
+ if (!pdev)
+ return true;
+diff --git a/arch/s390/pci/pci_mmio.c b/arch/s390/pci/pci_mmio.c
+index 46f99dc164ade4..1997d9b7965df3 100644
+--- a/arch/s390/pci/pci_mmio.c
++++ b/arch/s390/pci/pci_mmio.c
+@@ -175,8 +175,12 @@ SYSCALL_DEFINE3(s390_pci_mmio_write, unsigned long, mmio_addr,
+ args.address = mmio_addr;
+ args.vma = vma;
+ ret = follow_pfnmap_start(&args);
+- if (ret)
+- goto out_unlock_mmap;
++ if (ret) {
++ fixup_user_fault(current->mm, mmio_addr, FAULT_FLAG_WRITE, NULL);
++ ret = follow_pfnmap_start(&args);
++ if (ret)
++ goto out_unlock_mmap;
++ }
+
+ io_addr = (void __iomem *)((args.pfn << PAGE_SHIFT) |
+ (mmio_addr & ~PAGE_MASK));
+@@ -315,14 +319,18 @@ SYSCALL_DEFINE3(s390_pci_mmio_read, unsigned long, mmio_addr,
+ if (!(vma->vm_flags & (VM_IO | VM_PFNMAP)))
+ goto out_unlock_mmap;
+ ret = -EACCES;
+- if (!(vma->vm_flags & VM_WRITE))
++ if (!(vma->vm_flags & VM_READ))
+ goto out_unlock_mmap;
+
+ args.vma = vma;
+ args.address = mmio_addr;
+ ret = follow_pfnmap_start(&args);
+- if (ret)
+- goto out_unlock_mmap;
++ if (ret) {
++ fixup_user_fault(current->mm, mmio_addr, 0, NULL);
++ ret = follow_pfnmap_start(&args);
++ if (ret)
++ goto out_unlock_mmap;
++ }
+
+ io_addr = (void __iomem *)((args.pfn << PAGE_SHIFT) |
+ (mmio_addr & ~PAGE_MASK));
+diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h
+index 2b7f358762c187..dc28f2c4eee3f2 100644
+--- a/arch/sparc/include/asm/pgtable_64.h
++++ b/arch/sparc/include/asm/pgtable_64.h
+@@ -936,7 +936,6 @@ static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr,
+ static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+ pte_t *ptep, pte_t pte, unsigned int nr)
+ {
+- arch_enter_lazy_mmu_mode();
+ for (;;) {
+ __set_pte_at(mm, addr, ptep, pte, 0);
+ if (--nr == 0)
+@@ -945,7 +944,6 @@ static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+ pte_val(pte) += PAGE_SIZE;
+ addr += PAGE_SIZE;
+ }
+- arch_leave_lazy_mmu_mode();
+ }
+ #define set_ptes set_ptes
+
+diff --git a/arch/sparc/mm/tlb.c b/arch/sparc/mm/tlb.c
+index 8648a50afe8899..a35ddcca5e7668 100644
+--- a/arch/sparc/mm/tlb.c
++++ b/arch/sparc/mm/tlb.c
+@@ -52,8 +52,10 @@ void flush_tlb_pending(void)
+
+ void arch_enter_lazy_mmu_mode(void)
+ {
+- struct tlb_batch *tb = this_cpu_ptr(&tlb_batch);
++ struct tlb_batch *tb;
+
++ preempt_disable();
++ tb = this_cpu_ptr(&tlb_batch);
+ tb->active = 1;
+ }
+
+@@ -64,6 +66,7 @@ void arch_leave_lazy_mmu_mode(void)
+ if (tb->tlb_nr)
+ flush_tlb_pending();
+ tb->active = 0;
++ preempt_enable();
+ }
+
+ static void tlb_batch_add_one(struct mm_struct *mm, unsigned long vaddr,
+diff --git a/arch/x86/Kbuild b/arch/x86/Kbuild
+index cf0ad89f5639da..f7fb3d88c57bd8 100644
+--- a/arch/x86/Kbuild
++++ b/arch/x86/Kbuild
+@@ -1,4 +1,8 @@
+ # SPDX-License-Identifier: GPL-2.0
++
++# Branch profiling isn't noinstr-safe. Disable it for arch/x86/*
++subdir-ccflags-$(CONFIG_TRACE_BRANCH_PROFILING) += -DDISABLE_BRANCH_PROFILING
++
+ obj-$(CONFIG_ARCH_HAS_CC_PLATFORM) += coco/
+
+ obj-y += entry/
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index aaec6ebd6c4e01..aeb95b6e553691 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -2452,18 +2452,20 @@ config CC_HAS_NAMED_AS
+ def_bool $(success,echo 'int __seg_fs fs; int __seg_gs gs;' | $(CC) -x c - -S -o /dev/null)
+ depends on CC_IS_GCC
+
++#
++# -fsanitize=kernel-address (KASAN) and -fsanitize=thread (KCSAN)
++# are incompatible with named address spaces with GCC < 13.3
++# (see GCC PR sanitizer/111736 and also PR sanitizer/115172).
++#
++
+ config CC_HAS_NAMED_AS_FIXED_SANITIZERS
+- def_bool CC_IS_GCC && GCC_VERSION >= 130300
++ def_bool y
++ depends on !(KASAN || KCSAN) || GCC_VERSION >= 130300
++ depends on !(UBSAN_BOOL && KASAN) || GCC_VERSION >= 140200
+
+ config USE_X86_SEG_SUPPORT
+- def_bool y
+- depends on CC_HAS_NAMED_AS
+- #
+- # -fsanitize=kernel-address (KASAN) and -fsanitize=thread
+- # (KCSAN) are incompatible with named address spaces with
+- # GCC < 13.3 - see GCC PR sanitizer/111736.
+- #
+- depends on !(KASAN || KCSAN) || CC_HAS_NAMED_AS_FIXED_SANITIZERS
++ def_bool CC_HAS_NAMED_AS
++ depends on CC_HAS_NAMED_AS_FIXED_SANITIZERS
+
+ config CC_HAS_SLS
+ def_bool $(cc-option,-mharden-sls=all)
+diff --git a/arch/x86/coco/sev/core.c b/arch/x86/coco/sev/core.c
+index 96c7bc698e6b62..d14bce0f82cc58 100644
+--- a/arch/x86/coco/sev/core.c
++++ b/arch/x86/coco/sev/core.c
+@@ -9,8 +9,6 @@
+
+ #define pr_fmt(fmt) "SEV: " fmt
+
+-#define DISABLE_BRANCH_PROFILING
+-
+ #include <linux/sched/debug.h> /* For show_regs() */
+ #include <linux/percpu-defs.h>
+ #include <linux/cc_platform.h>
+diff --git a/arch/x86/kernel/acpi/boot.c b/arch/x86/kernel/acpi/boot.c
+index dae6a73be40e1c..9fa321a95eb33f 100644
+--- a/arch/x86/kernel/acpi/boot.c
++++ b/arch/x86/kernel/acpi/boot.c
+@@ -23,6 +23,8 @@
+ #include <linux/serial_core.h>
+ #include <linux/pgtable.h>
+
++#include <xen/xen.h>
++
+ #include <asm/e820/api.h>
+ #include <asm/irqdomain.h>
+ #include <asm/pci_x86.h>
+@@ -1729,6 +1731,15 @@ int __init acpi_mps_check(void)
+ {
+ #if defined(CONFIG_X86_LOCAL_APIC) && !defined(CONFIG_X86_MPPARSE)
+ /* mptable code is not built-in*/
++
++ /*
++ * Xen disables ACPI in PV DomU guests but it still emulates APIC and
++ * supports SMP. Returning early here ensures that APIC is not disabled
++ * unnecessarily and the guest is not limited to a single vCPU.
++ */
++ if (xen_pv_domain() && !xen_initial_domain())
++ return 0;
++
+ if (acpi_disabled || acpi_noirq) {
+ pr_warn("MPS support code is not built-in, using acpi=off or acpi=noirq or pci=noacpi may have problem\n");
+ return 1;
+diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
+index 54194f5995de37..4c9b20d028eb4c 100644
+--- a/arch/x86/kernel/cpu/amd.c
++++ b/arch/x86/kernel/cpu/amd.c
+@@ -632,7 +632,7 @@ static void init_amd_k8(struct cpuinfo_x86 *c)
+ * (model = 0x14) and later actually support it.
+ * (AMD Erratum #110, docId: 25759).
+ */
+- if (c->x86_model < 0x14 && cpu_has(c, X86_FEATURE_LAHF_LM)) {
++ if (c->x86_model < 0x14 && cpu_has(c, X86_FEATURE_LAHF_LM) && !cpu_has(c, X86_FEATURE_HYPERVISOR)) {
+ clear_cpu_cap(c, X86_FEATURE_LAHF_LM);
+ if (!rdmsrl_amd_safe(0xc001100d, &value)) {
+ value &= ~BIT_64(32);
+@@ -803,6 +803,7 @@ static void init_amd_bd(struct cpuinfo_x86 *c)
+ static const struct x86_cpu_id erratum_1386_microcode[] = {
+ X86_MATCH_VFM_STEPS(VFM_MAKE(X86_VENDOR_AMD, 0x17, 0x01), 0x2, 0x2, 0x0800126e),
+ X86_MATCH_VFM_STEPS(VFM_MAKE(X86_VENDOR_AMD, 0x17, 0x31), 0x0, 0x0, 0x08301052),
++ {}
+ };
+
+ static void fix_erratum_1386(struct cpuinfo_x86 *c)
+diff --git a/arch/x86/kernel/e820.c b/arch/x86/kernel/e820.c
+index 82b96ed9890aba..84264205dae55c 100644
+--- a/arch/x86/kernel/e820.c
++++ b/arch/x86/kernel/e820.c
+@@ -754,22 +754,21 @@ void __init e820__memory_setup_extended(u64 phys_addr, u32 data_len)
+ void __init e820__register_nosave_regions(unsigned long limit_pfn)
+ {
+ int i;
+- unsigned long pfn = 0;
++ u64 last_addr = 0;
+
+ for (i = 0; i < e820_table->nr_entries; i++) {
+ struct e820_entry *entry = &e820_table->entries[i];
+
+- if (pfn < PFN_UP(entry->addr))
+- register_nosave_region(pfn, PFN_UP(entry->addr));
+-
+- pfn = PFN_DOWN(entry->addr + entry->size);
+-
+ if (entry->type != E820_TYPE_RAM && entry->type != E820_TYPE_RESERVED_KERN)
+- register_nosave_region(PFN_UP(entry->addr), pfn);
++ continue;
+
+- if (pfn >= limit_pfn)
+- break;
++ if (last_addr < entry->addr)
++ register_nosave_region(PFN_DOWN(last_addr), PFN_UP(entry->addr));
++
++ last_addr = entry->addr + entry->size;
+ }
++
++ register_nosave_region(PFN_DOWN(last_addr), limit_pfn);
+ }
+
+ #ifdef CONFIG_ACPI
+diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
+index 22c9ba305ac171..368157a7f6d213 100644
+--- a/arch/x86/kernel/head64.c
++++ b/arch/x86/kernel/head64.c
+@@ -5,8 +5,6 @@
+ * Copyright (C) 2000 Andrea Arcangeli <andrea@suse.de> SuSE
+ */
+
+-#define DISABLE_BRANCH_PROFILING
+-
+ /* cpu_feature_enabled() cannot be used this early */
+ #define USE_EARLY_PGTABLE_L5
+
+diff --git a/arch/x86/kernel/signal_32.c b/arch/x86/kernel/signal_32.c
+index ef654530bf5a93..98123ff10506c6 100644
+--- a/arch/x86/kernel/signal_32.c
++++ b/arch/x86/kernel/signal_32.c
+@@ -33,25 +33,55 @@
+ #include <asm/smap.h>
+ #include <asm/gsseg.h>
+
++/*
++ * The first GDT descriptor is reserved as 'NULL descriptor'. As bits 0
++ * and 1 of a segment selector, i.e., the RPL bits, are NOT used to index
++ * GDT, selector values 0~3 all point to the NULL descriptor, thus values
++ * 0, 1, 2 and 3 are all valid NULL selector values.
++ *
++ * However IRET zeros ES, FS, GS, and DS segment registers if any of them
++ * is found to have any nonzero NULL selector value, which can be used by
++ * userspace in pre-FRED systems to spot any interrupt/exception by loading
++ * a nonzero NULL selector and waiting for it to become zero. Before FRED
++ * there was nothing software could do to prevent such an information leak.
++ *
++ * ERETU, the only legit instruction to return to userspace from kernel
++ * under FRED, by design does NOT zero any segment register to avoid this
++ * problem behavior.
++ *
++ * As such, leave NULL selector values 0~3 unchanged.
++ */
++static inline u16 fixup_rpl(u16 sel)
++{
++ return sel <= 3 ? sel : sel | 3;
++}
++
+ #ifdef CONFIG_IA32_EMULATION
+ #include <asm/unistd_32_ia32.h>
+
+ static inline void reload_segments(struct sigcontext_32 *sc)
+ {
+- unsigned int cur;
++ u16 cur;
+
++ /*
++ * Reload fs and gs if they have changed in the signal
++ * handler. This does not handle long fs/gs base changes in
++ * the handler, but does not clobber them at least in the
++ * normal case.
++ */
+ savesegment(gs, cur);
+- if ((sc->gs | 0x03) != cur)
+- load_gs_index(sc->gs | 0x03);
++ if (fixup_rpl(sc->gs) != cur)
++ load_gs_index(fixup_rpl(sc->gs));
+ savesegment(fs, cur);
+- if ((sc->fs | 0x03) != cur)
+- loadsegment(fs, sc->fs | 0x03);
++ if (fixup_rpl(sc->fs) != cur)
++ loadsegment(fs, fixup_rpl(sc->fs));
++
+ savesegment(ds, cur);
+- if ((sc->ds | 0x03) != cur)
+- loadsegment(ds, sc->ds | 0x03);
++ if (fixup_rpl(sc->ds) != cur)
++ loadsegment(ds, fixup_rpl(sc->ds));
+ savesegment(es, cur);
+- if ((sc->es | 0x03) != cur)
+- loadsegment(es, sc->es | 0x03);
++ if (fixup_rpl(sc->es) != cur)
++ loadsegment(es, fixup_rpl(sc->es));
+ }
+
+ #define sigset32_t compat_sigset_t
+@@ -105,18 +135,12 @@ static bool ia32_restore_sigcontext(struct pt_regs *regs,
+ regs->orig_ax = -1;
+
+ #ifdef CONFIG_IA32_EMULATION
+- /*
+- * Reload fs and gs if they have changed in the signal
+- * handler. This does not handle long fs/gs base changes in
+- * the handler, but does not clobber them at least in the
+- * normal case.
+- */
+ reload_segments(&sc);
+ #else
+- loadsegment(gs, sc.gs);
+- regs->fs = sc.fs;
+- regs->es = sc.es;
+- regs->ds = sc.ds;
++ loadsegment(gs, fixup_rpl(sc.gs));
++ regs->fs = fixup_rpl(sc.fs);
++ regs->es = fixup_rpl(sc.es);
++ regs->ds = fixup_rpl(sc.ds);
+ #endif
+
+ return fpu__restore_sig(compat_ptr(sc.fpstate), 1);
+diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
+index 121edf1f2a79ac..9b92f3f56f49cb 100644
+--- a/arch/x86/kvm/cpuid.c
++++ b/arch/x86/kvm/cpuid.c
+@@ -1423,8 +1423,8 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
+ }
+ break;
+ case 0xa: { /* Architectural Performance Monitoring */
+- union cpuid10_eax eax;
+- union cpuid10_edx edx;
++ union cpuid10_eax eax = { };
++ union cpuid10_edx edx = { };
+
+ if (!enable_pmu || !static_cpu_has(X86_FEATURE_ARCH_PERFMON)) {
+ entry->eax = entry->ebx = entry->ecx = entry->edx = 0;
+@@ -1440,8 +1440,6 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
+
+ if (kvm_pmu_cap.version)
+ edx.split.anythread_deprecated = 1;
+- edx.split.reserved1 = 0;
+- edx.split.reserved2 = 0;
+
+ entry->eax = eax.full;
+ entry->ebx = kvm_pmu_cap.events_mask;
+@@ -1759,7 +1757,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
+ break;
+ /* AMD Extended Performance Monitoring and Debug */
+ case 0x80000022: {
+- union cpuid_0x80000022_ebx ebx;
++ union cpuid_0x80000022_ebx ebx = { };
+
+ entry->ecx = entry->edx = 0;
+ if (!enable_pmu || !kvm_cpu_cap_has(X86_FEATURE_PERFMON_V2)) {
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 01d3fa84d2a459..91f9590a8ddec6 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -11773,6 +11773,8 @@ int kvm_arch_vcpu_ioctl_get_mpstate(struct kvm_vcpu *vcpu,
+ if (kvm_mpx_supported())
+ kvm_load_guest_fpu(vcpu);
+
++ kvm_vcpu_srcu_read_lock(vcpu);
++
+ r = kvm_apic_accept_events(vcpu);
+ if (r < 0)
+ goto out;
+@@ -11786,6 +11788,8 @@ int kvm_arch_vcpu_ioctl_get_mpstate(struct kvm_vcpu *vcpu,
+ mp_state->mp_state = vcpu->arch.mp_state;
+
+ out:
++ kvm_vcpu_srcu_read_unlock(vcpu);
++
+ if (kvm_mpx_supported())
+ kvm_put_guest_fpu(vcpu);
+ vcpu_put(vcpu);
+diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
+index 9dddf19a557160..0539efd0d216b9 100644
+--- a/arch/x86/mm/kasan_init_64.c
++++ b/arch/x86/mm/kasan_init_64.c
+@@ -1,5 +1,4 @@
+ // SPDX-License-Identifier: GPL-2.0
+-#define DISABLE_BRANCH_PROFILING
+ #define pr_fmt(fmt) "kasan: " fmt
+
+ /* cpu_feature_enabled() cannot be used this early */
+diff --git a/arch/x86/mm/mem_encrypt_amd.c b/arch/x86/mm/mem_encrypt_amd.c
+index b56c5c073003d6..7490ff6d83b1bc 100644
+--- a/arch/x86/mm/mem_encrypt_amd.c
++++ b/arch/x86/mm/mem_encrypt_amd.c
+@@ -7,8 +7,6 @@
+ * Author: Tom Lendacky <thomas.lendacky@amd.com>
+ */
+
+-#define DISABLE_BRANCH_PROFILING
+-
+ #include <linux/linkage.h>
+ #include <linux/init.h>
+ #include <linux/mm.h>
+diff --git a/arch/x86/mm/mem_encrypt_identity.c b/arch/x86/mm/mem_encrypt_identity.c
+index 9fce5b87b8c50f..5eecdd92da1050 100644
+--- a/arch/x86/mm/mem_encrypt_identity.c
++++ b/arch/x86/mm/mem_encrypt_identity.c
+@@ -7,8 +7,6 @@
+ * Author: Tom Lendacky <thomas.lendacky@amd.com>
+ */
+
+-#define DISABLE_BRANCH_PROFILING
+-
+ /*
+ * Since we're dealing with identity mappings, physical and virtual
+ * addresses are the same, so override these defines which are ultimately
+diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
+index ef4514d64c0524..b491d8190a6c5c 100644
+--- a/arch/x86/mm/pat/set_memory.c
++++ b/arch/x86/mm/pat/set_memory.c
+@@ -2420,7 +2420,7 @@ static int __set_pages_np(struct page *page, int numpages)
+ .pgd = NULL,
+ .numpages = numpages,
+ .mask_set = __pgprot(0),
+- .mask_clr = __pgprot(_PAGE_PRESENT | _PAGE_RW),
++ .mask_clr = __pgprot(_PAGE_PRESENT | _PAGE_RW | _PAGE_DIRTY),
+ .flags = CPA_NO_CHECK_ALIAS };
+
+ /*
+@@ -2507,7 +2507,7 @@ int __init kernel_map_pages_in_pgd(pgd_t *pgd, u64 pfn, unsigned long address,
+ .pgd = pgd,
+ .numpages = numpages,
+ .mask_set = __pgprot(0),
+- .mask_clr = __pgprot(~page_flags & (_PAGE_NX|_PAGE_RW)),
++ .mask_clr = __pgprot(~page_flags & (_PAGE_NX|_PAGE_RW|_PAGE_DIRTY)),
+ .flags = CPA_NO_CHECK_ALIAS,
+ };
+
+@@ -2550,7 +2550,7 @@ int __init kernel_unmap_pages_in_pgd(pgd_t *pgd, unsigned long address,
+ .pgd = pgd,
+ .numpages = numpages,
+ .mask_set = __pgprot(0),
+- .mask_clr = __pgprot(_PAGE_PRESENT | _PAGE_RW),
++ .mask_clr = __pgprot(_PAGE_PRESENT | _PAGE_RW | _PAGE_DIRTY),
+ .flags = CPA_NO_CHECK_ALIAS,
+ };
+
+diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
+index 43dcd8c7badc08..1b7710bd0d0511 100644
+--- a/arch/x86/xen/enlighten.c
++++ b/arch/x86/xen/enlighten.c
+@@ -70,6 +70,9 @@ EXPORT_SYMBOL(xen_start_flags);
+ */
+ struct shared_info *HYPERVISOR_shared_info = &xen_dummy_shared_info;
+
++/* Number of pages released from the initial allocation. */
++unsigned long xen_released_pages;
++
+ static __ref void xen_get_vendor(void)
+ {
+ init_cpu_devs();
+@@ -466,6 +469,13 @@ int __init arch_xen_unpopulated_init(struct resource **res)
+ xen_free_unpopulated_pages(1, &pg);
+ }
+
++ /*
++ * Account for the region being in the physmap but unpopulated.
++ * The value in xen_released_pages is used by the balloon
++ * driver to know how much of the physmap is unpopulated and
++ * set an accurate initial memory target.
++ */
++ xen_released_pages += xen_extra_mem[i].n_pfns;
+ /* Zero so region is not also added to the balloon driver. */
+ xen_extra_mem[i].n_pfns = 0;
+ }
+diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
+index c3db71d96c434a..3823e52aef523c 100644
+--- a/arch/x86/xen/setup.c
++++ b/arch/x86/xen/setup.c
+@@ -37,9 +37,6 @@
+
+ #define GB(x) ((uint64_t)(x) * 1024 * 1024 * 1024)
+
+-/* Number of pages released from the initial allocation. */
+-unsigned long xen_released_pages;
+-
+ /* Memory map would allow PCI passthrough. */
+ bool xen_pv_pci_possible;
+
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 40490ac8804570..005c520d3498a2 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -3314,6 +3314,7 @@ int blk_rq_prep_clone(struct request *rq, struct request *rq_src,
+ rq->special_vec = rq_src->special_vec;
+ }
+ rq->nr_phys_segments = rq_src->nr_phys_segments;
++ rq->nr_integrity_segments = rq_src->nr_integrity_segments;
+
+ if (rq->bio && blk_crypto_rq_bio_prep(rq, rq->bio, gfp_mask) < 0)
+ goto free_and_out;
+diff --git a/drivers/accel/ivpu/ivpu_debugfs.c b/drivers/accel/ivpu/ivpu_debugfs.c
+index 8180b95ed69dc7..093a2e93b0b394 100644
+--- a/drivers/accel/ivpu/ivpu_debugfs.c
++++ b/drivers/accel/ivpu/ivpu_debugfs.c
+@@ -331,7 +331,7 @@ ivpu_force_recovery_fn(struct file *file, const char __user *user_buf, size_t si
+ return -EINVAL;
+
+ ret = ivpu_rpm_get(vdev);
+- if (ret)
++ if (ret < 0)
+ return ret;
+
+ ivpu_pm_trigger_recovery(vdev, "debugfs");
+@@ -382,7 +382,7 @@ static int dct_active_set(void *data, u64 active_percent)
+ return -EINVAL;
+
+ ret = ivpu_rpm_get(vdev);
+- if (ret)
++ if (ret < 0)
+ return ret;
+
+ if (active_percent)
+diff --git a/drivers/accel/ivpu/ivpu_ipc.c b/drivers/accel/ivpu/ivpu_ipc.c
+index 01ebf88fe6ef0a..5daaf07fc1a712 100644
+--- a/drivers/accel/ivpu/ivpu_ipc.c
++++ b/drivers/accel/ivpu/ivpu_ipc.c
+@@ -302,7 +302,8 @@ ivpu_ipc_send_receive_internal(struct ivpu_device *vdev, struct vpu_jsm_msg *req
+ struct ivpu_ipc_consumer cons;
+ int ret;
+
+- drm_WARN_ON(&vdev->drm, pm_runtime_status_suspended(vdev->drm.dev));
++ drm_WARN_ON(&vdev->drm, pm_runtime_status_suspended(vdev->drm.dev) &&
++ pm_runtime_enabled(vdev->drm.dev));
+
+ ivpu_ipc_consumer_add(vdev, &cons, channel, NULL);
+
+diff --git a/drivers/accel/ivpu/ivpu_ms.c b/drivers/accel/ivpu/ivpu_ms.c
+index ffe7b10f8a767b..2a043baf10ca17 100644
+--- a/drivers/accel/ivpu/ivpu_ms.c
++++ b/drivers/accel/ivpu/ivpu_ms.c
+@@ -4,6 +4,7 @@
+ */
+
+ #include <drm/drm_file.h>
++#include <linux/pm_runtime.h>
+
+ #include "ivpu_drv.h"
+ #include "ivpu_gem.h"
+@@ -44,6 +45,10 @@ int ivpu_ms_start_ioctl(struct drm_device *dev, void *data, struct drm_file *fil
+ args->sampling_period_ns < MS_MIN_SAMPLE_PERIOD_NS)
+ return -EINVAL;
+
++ ret = ivpu_rpm_get(vdev);
++ if (ret < 0)
++ return ret;
++
+ mutex_lock(&file_priv->ms_lock);
+
+ if (get_instance_by_mask(file_priv, args->metric_group_mask)) {
+@@ -96,6 +101,8 @@ int ivpu_ms_start_ioctl(struct drm_device *dev, void *data, struct drm_file *fil
+ kfree(ms);
+ unlock:
+ mutex_unlock(&file_priv->ms_lock);
++
++ ivpu_rpm_put(vdev);
+ return ret;
+ }
+
+@@ -160,6 +167,10 @@ int ivpu_ms_get_data_ioctl(struct drm_device *dev, void *data, struct drm_file *
+ if (!args->metric_group_mask)
+ return -EINVAL;
+
++ ret = ivpu_rpm_get(vdev);
++ if (ret < 0)
++ return ret;
++
+ mutex_lock(&file_priv->ms_lock);
+
+ ms = get_instance_by_mask(file_priv, args->metric_group_mask);
+@@ -187,6 +198,7 @@ int ivpu_ms_get_data_ioctl(struct drm_device *dev, void *data, struct drm_file *
+ unlock:
+ mutex_unlock(&file_priv->ms_lock);
+
++ ivpu_rpm_put(vdev);
+ return ret;
+ }
+
+@@ -204,11 +216,17 @@ int ivpu_ms_stop_ioctl(struct drm_device *dev, void *data, struct drm_file *file
+ {
+ struct ivpu_file_priv *file_priv = file->driver_priv;
+ struct drm_ivpu_metric_streamer_stop *args = data;
++ struct ivpu_device *vdev = file_priv->vdev;
+ struct ivpu_ms_instance *ms;
++ int ret;
+
+ if (!args->metric_group_mask)
+ return -EINVAL;
+
++ ret = ivpu_rpm_get(vdev);
++ if (ret < 0)
++ return ret;
++
+ mutex_lock(&file_priv->ms_lock);
+
+ ms = get_instance_by_mask(file_priv, args->metric_group_mask);
+@@ -217,6 +235,7 @@ int ivpu_ms_stop_ioctl(struct drm_device *dev, void *data, struct drm_file *file
+
+ mutex_unlock(&file_priv->ms_lock);
+
++ ivpu_rpm_put(vdev);
+ return ms ? 0 : -EINVAL;
+ }
+
+@@ -281,6 +300,9 @@ int ivpu_ms_get_info_ioctl(struct drm_device *dev, void *data, struct drm_file *
+ void ivpu_ms_cleanup(struct ivpu_file_priv *file_priv)
+ {
+ struct ivpu_ms_instance *ms, *tmp;
++ struct ivpu_device *vdev = file_priv->vdev;
++
++ pm_runtime_get_sync(vdev->drm.dev);
+
+ mutex_lock(&file_priv->ms_lock);
+
+@@ -293,6 +315,8 @@ void ivpu_ms_cleanup(struct ivpu_file_priv *file_priv)
+ free_instance(file_priv, ms);
+
+ mutex_unlock(&file_priv->ms_lock);
++
++ pm_runtime_put_autosuspend(vdev->drm.dev);
+ }
+
+ void ivpu_ms_cleanup_all(struct ivpu_device *vdev)
+diff --git a/drivers/acpi/Makefile b/drivers/acpi/Makefile
+index 40208a0f5dfb57..797070fc9a3f42 100644
+--- a/drivers/acpi/Makefile
++++ b/drivers/acpi/Makefile
+@@ -5,6 +5,10 @@
+
+ ccflags-$(CONFIG_ACPI_DEBUG) += -DACPI_DEBUG_OUTPUT
+
++ifdef CONFIG_TRACE_BRANCH_PROFILING
++CFLAGS_processor_idle.o += -DDISABLE_BRANCH_PROFILING
++endif
++
+ #
+ # ACPI Boot-Time Table Parsing
+ #
+diff --git a/drivers/ata/ahci.c b/drivers/ata/ahci.c
+index f813dbdc2346fb..f3a6bfe098cd40 100644
+--- a/drivers/ata/ahci.c
++++ b/drivers/ata/ahci.c
+@@ -63,6 +63,7 @@ enum board_ids {
+ board_ahci_pcs_quirk_no_devslp,
+ board_ahci_pcs_quirk_no_sntf,
+ board_ahci_yes_fbs,
++ board_ahci_yes_fbs_atapi_dma,
+
+ /* board IDs for specific chipsets in alphabetical order */
+ board_ahci_al,
+@@ -188,6 +189,14 @@ static const struct ata_port_info ahci_port_info[] = {
+ .udma_mask = ATA_UDMA6,
+ .port_ops = &ahci_ops,
+ },
++ [board_ahci_yes_fbs_atapi_dma] = {
++ AHCI_HFLAGS (AHCI_HFLAG_YES_FBS |
++ AHCI_HFLAG_ATAPI_DMA_QUIRK),
++ .flags = AHCI_FLAG_COMMON,
++ .pio_mask = ATA_PIO4,
++ .udma_mask = ATA_UDMA6,
++ .port_ops = &ahci_ops,
++ },
+ /* by chipsets */
+ [board_ahci_al] = {
+ AHCI_HFLAGS (AHCI_HFLAG_NO_PMP | AHCI_HFLAG_NO_MSI),
+@@ -589,6 +598,8 @@ static const struct pci_device_id ahci_pci_tbl[] = {
+ .driver_data = board_ahci_yes_fbs },
+ { PCI_DEVICE(PCI_VENDOR_ID_MARVELL_EXT, 0x91a3),
+ .driver_data = board_ahci_yes_fbs },
++ { PCI_DEVICE(PCI_VENDOR_ID_MARVELL_EXT, 0x9215),
++ .driver_data = board_ahci_yes_fbs_atapi_dma },
+ { PCI_DEVICE(PCI_VENDOR_ID_MARVELL_EXT, 0x9230),
+ .driver_data = board_ahci_yes_fbs },
+ { PCI_DEVICE(PCI_VENDOR_ID_MARVELL_EXT, 0x9235),
+diff --git a/drivers/ata/ahci.h b/drivers/ata/ahci.h
+index c842e2de6ef988..2c10c8f440d122 100644
+--- a/drivers/ata/ahci.h
++++ b/drivers/ata/ahci.h
+@@ -246,6 +246,7 @@ enum {
+ AHCI_HFLAG_NO_SXS = BIT(26), /* SXS not supported */
+ AHCI_HFLAG_43BIT_ONLY = BIT(27), /* 43bit DMA addr limit */
+ AHCI_HFLAG_INTEL_PCS_QUIRK = BIT(28), /* apply Intel PCS quirk */
++ AHCI_HFLAG_ATAPI_DMA_QUIRK = BIT(29), /* force ATAPI to use DMA */
+
+ /* ap->flags bits */
+
+diff --git a/drivers/ata/libahci.c b/drivers/ata/libahci.c
+index e7ace4b10f15b7..22afa4ff860d18 100644
+--- a/drivers/ata/libahci.c
++++ b/drivers/ata/libahci.c
+@@ -1322,6 +1322,10 @@ static void ahci_dev_config(struct ata_device *dev)
+ {
+ struct ahci_host_priv *hpriv = dev->link->ap->host->private_data;
+
++ if ((dev->class == ATA_DEV_ATAPI) &&
++ (hpriv->flags & AHCI_HFLAG_ATAPI_DMA_QUIRK))
++ dev->quirks |= ATA_QUIRK_ATAPI_MOD16_DMA;
++
+ if (hpriv->flags & AHCI_HFLAG_SECT255) {
+ dev->max_sectors = 255;
+ ata_dev_info(dev,
+diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
+index 3d730c10f7beaf..05bfcb359f92cb 100644
+--- a/drivers/ata/libata-core.c
++++ b/drivers/ata/libata-core.c
+@@ -88,6 +88,7 @@ struct ata_force_param {
+ unsigned int xfer_mask;
+ unsigned int quirk_on;
+ unsigned int quirk_off;
++ unsigned int pflags_on;
+ u16 lflags_on;
+ u16 lflags_off;
+ };
+@@ -331,6 +332,35 @@ void ata_force_cbl(struct ata_port *ap)
+ }
+ }
+
++/**
++ * ata_force_pflags - force port flags according to libata.force
++ * @ap: ATA port of interest
++ *
++ * Force port flags according to libata.force and whine about it.
++ *
++ * LOCKING:
++ * EH context.
++ */
++static void ata_force_pflags(struct ata_port *ap)
++{
++ int i;
++
++ for (i = ata_force_tbl_size - 1; i >= 0; i--) {
++ const struct ata_force_ent *fe = &ata_force_tbl[i];
++
++ if (fe->port != -1 && fe->port != ap->print_id)
++ continue;
++
++ /* let pflags stack */
++ if (fe->param.pflags_on) {
++ ap->pflags |= fe->param.pflags_on;
++ ata_port_notice(ap,
++ "FORCE: port flag 0x%x forced -> 0x%x\n",
++ fe->param.pflags_on, ap->pflags);
++ }
++ }
++}
++
+ /**
+ * ata_force_link_limits - force link limits according to libata.force
+ * @link: ATA link of interest
+@@ -486,6 +516,7 @@ static void ata_force_quirks(struct ata_device *dev)
+ }
+ }
+ #else
++static inline void ata_force_pflags(struct ata_port *ap) { }
+ static inline void ata_force_link_limits(struct ata_link *link) { }
+ static inline void ata_force_xfermask(struct ata_device *dev) { }
+ static inline void ata_force_quirks(struct ata_device *dev) { }
+@@ -5460,6 +5491,8 @@ struct ata_port *ata_port_alloc(struct ata_host *host)
+ #endif
+ ata_sff_port_init(ap);
+
++ ata_force_pflags(ap);
++
+ return ap;
+ }
+ EXPORT_SYMBOL_GPL(ata_port_alloc);
+@@ -6272,6 +6305,9 @@ EXPORT_SYMBOL_GPL(ata_platform_remove_one);
+ { "no" #name, .lflags_on = (flags) }, \
+ { #name, .lflags_off = (flags) }
+
++#define force_pflag_on(name, flags) \
++ { #name, .pflags_on = (flags) }
++
+ #define force_quirk_on(name, flag) \
+ { #name, .quirk_on = (flag) }
+
+@@ -6331,6 +6367,8 @@ static const struct ata_force_param force_tbl[] __initconst = {
+ force_lflag_on(rstonce, ATA_LFLAG_RST_ONCE),
+ force_lflag_onoff(dbdelay, ATA_LFLAG_NO_DEBOUNCE_DELAY),
+
++ force_pflag_on(external, ATA_PFLAG_EXTERNAL),
++
+ force_quirk_onoff(ncq, ATA_QUIRK_NONCQ),
+ force_quirk_onoff(ncqtrim, ATA_QUIRK_NO_NCQ_TRIM),
+ force_quirk_onoff(ncqati, ATA_QUIRK_NO_NCQ_ON_ATI),
+diff --git a/drivers/ata/libata-eh.c b/drivers/ata/libata-eh.c
+index 3b303d4ae37a01..16cd676eae1f9a 100644
+--- a/drivers/ata/libata-eh.c
++++ b/drivers/ata/libata-eh.c
+@@ -1542,8 +1542,15 @@ unsigned int atapi_eh_request_sense(struct ata_device *dev,
+ tf.flags |= ATA_TFLAG_ISADDR | ATA_TFLAG_DEVICE;
+ tf.command = ATA_CMD_PACKET;
+
+- /* is it pointless to prefer PIO for "safety reasons"? */
+- if (ap->flags & ATA_FLAG_PIO_DMA) {
++ /*
++ * Do not use DMA if the connected device only supports PIO, even if the
++ * port prefers PIO commands via DMA.
++ *
++ * Ideally, we should call atapi_check_dma() to check if it is safe for
++ * the LLD to use DMA for REQUEST_SENSE, but we don't have a qc.
++ * Since we can't check the command, perhaps we should only use pio?
++ */
++ if ((ap->flags & ATA_FLAG_PIO_DMA) && !(dev->flags & ATA_DFLAG_PIO)) {
+ tf.protocol = ATAPI_PROT_DMA;
+ tf.feature |= ATAPI_PKT_DMA;
+ } else {
+diff --git a/drivers/ata/pata_pxa.c b/drivers/ata/pata_pxa.c
+index 434f380114af09..03dbaf4a13a75c 100644
+--- a/drivers/ata/pata_pxa.c
++++ b/drivers/ata/pata_pxa.c
+@@ -223,10 +223,16 @@ static int pxa_ata_probe(struct platform_device *pdev)
+
+ ap->ioaddr.cmd_addr = devm_ioremap(&pdev->dev, cmd_res->start,
+ resource_size(cmd_res));
++ if (!ap->ioaddr.cmd_addr)
++ return -ENOMEM;
+ ap->ioaddr.ctl_addr = devm_ioremap(&pdev->dev, ctl_res->start,
+ resource_size(ctl_res));
++ if (!ap->ioaddr.ctl_addr)
++ return -ENOMEM;
+ ap->ioaddr.bmdma_addr = devm_ioremap(&pdev->dev, dma_res->start,
+ resource_size(dma_res));
++ if (!ap->ioaddr.bmdma_addr)
++ return -ENOMEM;
+
+ /*
+ * Adjust register offsets
+diff --git a/drivers/ata/sata_sx4.c b/drivers/ata/sata_sx4.c
+index a482741eb181ff..c3042eca6332df 100644
+--- a/drivers/ata/sata_sx4.c
++++ b/drivers/ata/sata_sx4.c
+@@ -1117,9 +1117,14 @@ static int pdc20621_prog_dimm0(struct ata_host *host)
+ mmio += PDC_CHIP0_OFS;
+
+ for (i = 0; i < ARRAY_SIZE(pdc_i2c_read_data); i++)
+- pdc20621_i2c_read(host, PDC_DIMM0_SPD_DEV_ADDRESS,
+- pdc_i2c_read_data[i].reg,
+- &spd0[pdc_i2c_read_data[i].ofs]);
++ if (!pdc20621_i2c_read(host, PDC_DIMM0_SPD_DEV_ADDRESS,
++ pdc_i2c_read_data[i].reg,
++ &spd0[pdc_i2c_read_data[i].ofs])) {
++ dev_err(host->dev,
++ "Failed in i2c read at index %d: device=%#x, reg=%#x\n",
++ i, PDC_DIMM0_SPD_DEV_ADDRESS, pdc_i2c_read_data[i].reg);
++ return -EIO;
++ }
+
+ data |= (spd0[4] - 8) | ((spd0[21] != 0) << 3) | ((spd0[3]-11) << 4);
+ data |= ((spd0[17] / 4) << 6) | ((spd0[5] / 2) << 7) |
+@@ -1284,6 +1289,8 @@ static unsigned int pdc20621_dimm_init(struct ata_host *host)
+
+ /* Programming DIMM0 Module Control Register (index_CID0:80h) */
+ size = pdc20621_prog_dimm0(host);
++ if (size < 0)
++ return size;
+ dev_dbg(host->dev, "Local DIMM Size = %dMB\n", size);
+
+ /* Programming DIMM Module Global Control Register (index_CID0:88h) */
+diff --git a/drivers/auxdisplay/hd44780.c b/drivers/auxdisplay/hd44780.c
+index 0526f0d90a793e..9d0ae9c02e9ba2 100644
+--- a/drivers/auxdisplay/hd44780.c
++++ b/drivers/auxdisplay/hd44780.c
+@@ -313,7 +313,7 @@ static int hd44780_probe(struct platform_device *pdev)
+ fail3:
+ kfree(hd);
+ fail2:
+- kfree(lcd);
++ charlcd_free(lcd);
+ fail1:
+ kfree(hdc);
+ return ret;
+@@ -328,7 +328,7 @@ static void hd44780_remove(struct platform_device *pdev)
+ kfree(hdc->hd44780);
+ kfree(lcd->drvdata);
+
+- kfree(lcd);
++ charlcd_free(lcd);
+ }
+
+ static const struct of_device_id hd44780_of_match[] = {
+diff --git a/drivers/base/devres.c b/drivers/base/devres.c
+index 93e7779ef21e86..b955a2f9520bfe 100644
+--- a/drivers/base/devres.c
++++ b/drivers/base/devres.c
+@@ -687,6 +687,13 @@ int devres_release_group(struct device *dev, void *id)
+ spin_unlock_irqrestore(&dev->devres_lock, flags);
+
+ release_nodes(dev, &todo);
++ } else if (list_empty(&dev->devres_head)) {
++ /*
++ * dev is probably dying via devres_release_all(): groups
++ * have already been removed and are on the process of
++ * being released - don't touch and don't warn.
++ */
++ spin_unlock_irqrestore(&dev->devres_lock, flags);
+ } else {
+ WARN_ON(1);
+ spin_unlock_irqrestore(&dev->devres_lock, flags);
+diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
+index b7adfaddc3abb3..971b793dedd03a 100644
+--- a/drivers/block/ublk_drv.c
++++ b/drivers/block/ublk_drv.c
+@@ -1094,6 +1094,25 @@ static void ublk_complete_rq(struct kref *ref)
+ __ublk_complete_rq(req);
+ }
+
++static void ublk_do_fail_rq(struct request *req)
++{
++ struct ublk_queue *ubq = req->mq_hctx->driver_data;
++
++ if (ublk_nosrv_should_reissue_outstanding(ubq->dev))
++ blk_mq_requeue_request(req, false);
++ else
++ __ublk_complete_rq(req);
++}
++
++static void ublk_fail_rq_fn(struct kref *ref)
++{
++ struct ublk_rq_data *data = container_of(ref, struct ublk_rq_data,
++ ref);
++ struct request *req = blk_mq_rq_from_pdu(data);
++
++ ublk_do_fail_rq(req);
++}
++
+ /*
+ * Since __ublk_rq_task_work always fails requests immediately during
+ * exiting, __ublk_fail_req() is only called from abort context during
+@@ -1107,10 +1126,13 @@ static void __ublk_fail_req(struct ublk_queue *ubq, struct ublk_io *io,
+ {
+ WARN_ON_ONCE(io->flags & UBLK_IO_FLAG_ACTIVE);
+
+- if (ublk_nosrv_should_reissue_outstanding(ubq->dev))
+- blk_mq_requeue_request(req, false);
+- else
+- ublk_put_req_ref(ubq, req);
++ if (ublk_need_req_ref(ubq)) {
++ struct ublk_rq_data *data = blk_mq_rq_to_pdu(req);
++
++ kref_put(&data->ref, ublk_fail_rq_fn);
++ } else {
++ ublk_do_fail_rq(req);
++ }
+ }
+
+ static void ubq_complete_io_cmd(struct ublk_io *io, int res,
+diff --git a/drivers/bluetooth/btintel_pcie.c b/drivers/bluetooth/btintel_pcie.c
+index 091ffe3e14954a..6130854b6658ac 100644
+--- a/drivers/bluetooth/btintel_pcie.c
++++ b/drivers/bluetooth/btintel_pcie.c
+@@ -36,6 +36,7 @@
+ /* Intel Bluetooth PCIe device id table */
+ static const struct pci_device_id btintel_pcie_table[] = {
+ { BTINTEL_PCI_DEVICE(0xA876, PCI_ANY_ID) },
++ { BTINTEL_PCI_DEVICE(0xE476, PCI_ANY_ID) },
+ { 0 }
+ };
+ MODULE_DEVICE_TABLE(pci, btintel_pcie_table);
+diff --git a/drivers/bluetooth/btqca.c b/drivers/bluetooth/btqca.c
+index cdf09d9a9ad27c..3d6778b95e0058 100644
+--- a/drivers/bluetooth/btqca.c
++++ b/drivers/bluetooth/btqca.c
+@@ -785,6 +785,7 @@ int qca_uart_setup(struct hci_dev *hdev, uint8_t baudrate,
+ const char *firmware_name, const char *rampatch_name)
+ {
+ struct qca_fw_config config = {};
++ const char *variant = "";
+ int err;
+ u8 rom_ver = 0;
+ u32 soc_ver;
+@@ -815,6 +816,10 @@ int qca_uart_setup(struct hci_dev *hdev, uint8_t baudrate,
+ snprintf(config.fwname, sizeof(config.fwname), "qca/%s", rampatch_name);
+ } else {
+ switch (soc_type) {
++ case QCA_WCN3950:
++ snprintf(config.fwname, sizeof(config.fwname),
++ "qca/cmbtfw%02x.tlv", rom_ver);
++ break;
+ case QCA_WCN3990:
+ case QCA_WCN3991:
+ case QCA_WCN3998:
+@@ -880,16 +885,23 @@ int qca_uart_setup(struct hci_dev *hdev, uint8_t baudrate,
+ }
+ } else {
+ switch (soc_type) {
++ case QCA_WCN3950:
++ if (le32_to_cpu(ver.soc_id) == QCA_WCN3950_SOC_ID_T)
++ variant = "t";
++ else if (le32_to_cpu(ver.soc_id) == QCA_WCN3950_SOC_ID_S)
++ variant = "u";
++
++ snprintf(config.fwname, sizeof(config.fwname),
++ "qca/cmnv%02x%s.bin", rom_ver, variant);
++ break;
+ case QCA_WCN3990:
+ case QCA_WCN3991:
+ case QCA_WCN3998:
+- if (le32_to_cpu(ver.soc_id) == QCA_WCN3991_SOC_ID) {
+- snprintf(config.fwname, sizeof(config.fwname),
+- "qca/crnv%02xu.bin", rom_ver);
+- } else {
+- snprintf(config.fwname, sizeof(config.fwname),
+- "qca/crnv%02x.bin", rom_ver);
+- }
++ if (le32_to_cpu(ver.soc_id) == QCA_WCN3991_SOC_ID)
++ variant = "u";
++
++ snprintf(config.fwname, sizeof(config.fwname),
++ "qca/crnv%02x%s.bin", rom_ver, variant);
+ break;
+ case QCA_WCN3988:
+ snprintf(config.fwname, sizeof(config.fwname),
+@@ -948,6 +960,7 @@ int qca_uart_setup(struct hci_dev *hdev, uint8_t baudrate,
+ * VsMsftOpCode.
+ */
+ switch (soc_type) {
++ case QCA_WCN3950:
+ case QCA_WCN3988:
+ case QCA_WCN3990:
+ case QCA_WCN3991:
+diff --git a/drivers/bluetooth/btqca.h b/drivers/bluetooth/btqca.h
+index 9d28c88002257b..8f3c1b1c77b3de 100644
+--- a/drivers/bluetooth/btqca.h
++++ b/drivers/bluetooth/btqca.h
+@@ -41,6 +41,9 @@
+
+ #define QCA_WCN3991_SOC_ID 0x40014320
+
++#define QCA_WCN3950_SOC_ID_T 0x40074130
++#define QCA_WCN3950_SOC_ID_S 0x40075130
++
+ /* QCA chipset version can be decided by patch and SoC
+ * version, combination with upper 2 bytes from SoC
+ * and lower 2 bytes from patch will be used.
+@@ -145,6 +148,7 @@ enum qca_btsoc_type {
+ QCA_INVALID = -1,
+ QCA_AR3002,
+ QCA_ROME,
++ QCA_WCN3950,
+ QCA_WCN3988,
+ QCA_WCN3990,
+ QCA_WCN3998,
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 699ff21d97675b..bfd769f2026b30 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -376,10 +376,38 @@ static const struct usb_device_id quirks_table[] = {
+ BTUSB_WIDEBAND_SPEECH },
+ { USB_DEVICE(0x0489, 0xe0f3), .driver_info = BTUSB_QCA_WCN6855 |
+ BTUSB_WIDEBAND_SPEECH },
++ { USB_DEVICE(0x0489, 0xe100), .driver_info = BTUSB_QCA_WCN6855 |
++ BTUSB_WIDEBAND_SPEECH },
++ { USB_DEVICE(0x0489, 0xe103), .driver_info = BTUSB_QCA_WCN6855 |
++ BTUSB_WIDEBAND_SPEECH },
++ { USB_DEVICE(0x0489, 0xe10a), .driver_info = BTUSB_QCA_WCN6855 |
++ BTUSB_WIDEBAND_SPEECH },
++ { USB_DEVICE(0x0489, 0xe10d), .driver_info = BTUSB_QCA_WCN6855 |
++ BTUSB_WIDEBAND_SPEECH },
++ { USB_DEVICE(0x0489, 0xe11b), .driver_info = BTUSB_QCA_WCN6855 |
++ BTUSB_WIDEBAND_SPEECH },
++ { USB_DEVICE(0x0489, 0xe11c), .driver_info = BTUSB_QCA_WCN6855 |
++ BTUSB_WIDEBAND_SPEECH },
++ { USB_DEVICE(0x0489, 0xe11f), .driver_info = BTUSB_QCA_WCN6855 |
++ BTUSB_WIDEBAND_SPEECH },
++ { USB_DEVICE(0x0489, 0xe141), .driver_info = BTUSB_QCA_WCN6855 |
++ BTUSB_WIDEBAND_SPEECH },
++ { USB_DEVICE(0x0489, 0xe14a), .driver_info = BTUSB_QCA_WCN6855 |
++ BTUSB_WIDEBAND_SPEECH },
++ { USB_DEVICE(0x0489, 0xe14b), .driver_info = BTUSB_QCA_WCN6855 |
++ BTUSB_WIDEBAND_SPEECH },
++ { USB_DEVICE(0x0489, 0xe14d), .driver_info = BTUSB_QCA_WCN6855 |
++ BTUSB_WIDEBAND_SPEECH },
+ { USB_DEVICE(0x13d3, 0x3623), .driver_info = BTUSB_QCA_WCN6855 |
+ BTUSB_WIDEBAND_SPEECH },
++ { USB_DEVICE(0x13d3, 0x3624), .driver_info = BTUSB_QCA_WCN6855 |
++ BTUSB_WIDEBAND_SPEECH },
+ { USB_DEVICE(0x2c7c, 0x0130), .driver_info = BTUSB_QCA_WCN6855 |
+ BTUSB_WIDEBAND_SPEECH },
++ { USB_DEVICE(0x2c7c, 0x0131), .driver_info = BTUSB_QCA_WCN6855 |
++ BTUSB_WIDEBAND_SPEECH },
++ { USB_DEVICE(0x2c7c, 0x0132), .driver_info = BTUSB_QCA_WCN6855 |
++ BTUSB_WIDEBAND_SPEECH },
+
+ /* Broadcom BCM2035 */
+ { USB_DEVICE(0x0a5c, 0x2009), .driver_info = BTUSB_BCM92035 },
+@@ -640,6 +668,10 @@ static const struct usb_device_id quirks_table[] = {
+ BTUSB_WIDEBAND_SPEECH },
+ { USB_DEVICE(0x0489, 0xe102), .driver_info = BTUSB_MEDIATEK |
+ BTUSB_WIDEBAND_SPEECH },
++ { USB_DEVICE(0x0489, 0xe152), .driver_info = BTUSB_MEDIATEK |
++ BTUSB_WIDEBAND_SPEECH },
++ { USB_DEVICE(0x0489, 0xe153), .driver_info = BTUSB_MEDIATEK |
++ BTUSB_WIDEBAND_SPEECH },
+ { USB_DEVICE(0x04ca, 0x3804), .driver_info = BTUSB_MEDIATEK |
+ BTUSB_WIDEBAND_SPEECH },
+ { USB_DEVICE(0x04ca, 0x38e4), .driver_info = BTUSB_MEDIATEK |
+diff --git a/drivers/bluetooth/hci_ldisc.c b/drivers/bluetooth/hci_ldisc.c
+index d2d6ba8d2f8b1c..acba83156de9a6 100644
+--- a/drivers/bluetooth/hci_ldisc.c
++++ b/drivers/bluetooth/hci_ldisc.c
+@@ -102,7 +102,8 @@ static inline struct sk_buff *hci_uart_dequeue(struct hci_uart *hu)
+ if (!skb) {
+ percpu_down_read(&hu->proto_lock);
+
+- if (test_bit(HCI_UART_PROTO_READY, &hu->flags))
++ if (test_bit(HCI_UART_PROTO_READY, &hu->flags) ||
++ test_bit(HCI_UART_PROTO_INIT, &hu->flags))
+ skb = hu->proto->dequeue(hu);
+
+ percpu_up_read(&hu->proto_lock);
+@@ -124,7 +125,8 @@ int hci_uart_tx_wakeup(struct hci_uart *hu)
+ if (!percpu_down_read_trylock(&hu->proto_lock))
+ return 0;
+
+- if (!test_bit(HCI_UART_PROTO_READY, &hu->flags))
++ if (!test_bit(HCI_UART_PROTO_READY, &hu->flags) &&
++ !test_bit(HCI_UART_PROTO_INIT, &hu->flags))
+ goto no_schedule;
+
+ set_bit(HCI_UART_TX_WAKEUP, &hu->tx_state);
+@@ -278,7 +280,8 @@ static int hci_uart_send_frame(struct hci_dev *hdev, struct sk_buff *skb)
+
+ percpu_down_read(&hu->proto_lock);
+
+- if (!test_bit(HCI_UART_PROTO_READY, &hu->flags)) {
++ if (!test_bit(HCI_UART_PROTO_READY, &hu->flags) &&
++ !test_bit(HCI_UART_PROTO_INIT, &hu->flags)) {
+ percpu_up_read(&hu->proto_lock);
+ return -EUNATCH;
+ }
+@@ -585,7 +588,8 @@ static void hci_uart_tty_wakeup(struct tty_struct *tty)
+ if (tty != hu->tty)
+ return;
+
+- if (test_bit(HCI_UART_PROTO_READY, &hu->flags))
++ if (test_bit(HCI_UART_PROTO_READY, &hu->flags) ||
++ test_bit(HCI_UART_PROTO_INIT, &hu->flags))
+ hci_uart_tx_wakeup(hu);
+ }
+
+@@ -611,7 +615,8 @@ static void hci_uart_tty_receive(struct tty_struct *tty, const u8 *data,
+
+ percpu_down_read(&hu->proto_lock);
+
+- if (!test_bit(HCI_UART_PROTO_READY, &hu->flags)) {
++ if (!test_bit(HCI_UART_PROTO_READY, &hu->flags) &&
++ !test_bit(HCI_UART_PROTO_INIT, &hu->flags)) {
+ percpu_up_read(&hu->proto_lock);
+ return;
+ }
+@@ -707,12 +712,16 @@ static int hci_uart_set_proto(struct hci_uart *hu, int id)
+
+ hu->proto = p;
+
++ set_bit(HCI_UART_PROTO_INIT, &hu->flags);
++
+ err = hci_uart_register_dev(hu);
+ if (err) {
+ return err;
+ }
+
+ set_bit(HCI_UART_PROTO_READY, &hu->flags);
++ clear_bit(HCI_UART_PROTO_INIT, &hu->flags);
++
+ return 0;
+ }
+
+diff --git a/drivers/bluetooth/hci_qca.c b/drivers/bluetooth/hci_qca.c
+index 0ac2168f1dc4f8..f2558506a02c72 100644
+--- a/drivers/bluetooth/hci_qca.c
++++ b/drivers/bluetooth/hci_qca.c
+@@ -623,6 +623,7 @@ static int qca_open(struct hci_uart *hu)
+ qcadev = serdev_device_get_drvdata(hu->serdev);
+
+ switch (qcadev->btsoc_type) {
++ case QCA_WCN3950:
+ case QCA_WCN3988:
+ case QCA_WCN3990:
+ case QCA_WCN3991:
+@@ -1366,6 +1367,7 @@ static int qca_set_baudrate(struct hci_dev *hdev, uint8_t baudrate)
+
+ /* Give the controller time to process the request */
+ switch (qca_soc_type(hu)) {
++ case QCA_WCN3950:
+ case QCA_WCN3988:
+ case QCA_WCN3990:
+ case QCA_WCN3991:
+@@ -1452,6 +1454,7 @@ static unsigned int qca_get_speed(struct hci_uart *hu,
+ static int qca_check_speeds(struct hci_uart *hu)
+ {
+ switch (qca_soc_type(hu)) {
++ case QCA_WCN3950:
+ case QCA_WCN3988:
+ case QCA_WCN3990:
+ case QCA_WCN3991:
+@@ -1494,6 +1497,7 @@ static int qca_set_speed(struct hci_uart *hu, enum qca_speed_type speed_type)
+ * changing the baudrate of chip and host.
+ */
+ switch (soc_type) {
++ case QCA_WCN3950:
+ case QCA_WCN3988:
+ case QCA_WCN3990:
+ case QCA_WCN3991:
+@@ -1528,6 +1532,7 @@ static int qca_set_speed(struct hci_uart *hu, enum qca_speed_type speed_type)
+
+ error:
+ switch (soc_type) {
++ case QCA_WCN3950:
+ case QCA_WCN3988:
+ case QCA_WCN3990:
+ case QCA_WCN3991:
+@@ -1746,6 +1751,7 @@ static int qca_regulator_init(struct hci_uart *hu)
+ }
+
+ switch (soc_type) {
++ case QCA_WCN3950:
+ case QCA_WCN3988:
+ case QCA_WCN3990:
+ case QCA_WCN3991:
+@@ -1776,6 +1782,7 @@ static int qca_regulator_init(struct hci_uart *hu)
+ qca_set_speed(hu, QCA_INIT_SPEED);
+
+ switch (soc_type) {
++ case QCA_WCN3950:
+ case QCA_WCN3988:
+ case QCA_WCN3990:
+ case QCA_WCN3991:
+@@ -1807,6 +1814,7 @@ static int qca_power_on(struct hci_dev *hdev)
+ return 0;
+
+ switch (soc_type) {
++ case QCA_WCN3950:
+ case QCA_WCN3988:
+ case QCA_WCN3990:
+ case QCA_WCN3991:
+@@ -1891,6 +1899,7 @@ static int qca_setup(struct hci_uart *hu)
+ soc_name = "qca2066";
+ break;
+
++ case QCA_WCN3950:
+ case QCA_WCN3988:
+ case QCA_WCN3990:
+ case QCA_WCN3991:
+@@ -1925,6 +1934,7 @@ static int qca_setup(struct hci_uart *hu)
+ clear_bit(QCA_SSR_TRIGGERED, &qca->flags);
+
+ switch (soc_type) {
++ case QCA_WCN3950:
+ case QCA_WCN3988:
+ case QCA_WCN3990:
+ case QCA_WCN3991:
+@@ -1958,6 +1968,7 @@ static int qca_setup(struct hci_uart *hu)
+ }
+
+ switch (soc_type) {
++ case QCA_WCN3950:
+ case QCA_WCN3988:
+ case QCA_WCN3990:
+ case QCA_WCN3991:
+@@ -2046,6 +2057,17 @@ static const struct hci_uart_proto qca_proto = {
+ .dequeue = qca_dequeue,
+ };
+
++static const struct qca_device_data qca_soc_data_wcn3950 __maybe_unused = {
++ .soc_type = QCA_WCN3950,
++ .vregs = (struct qca_vreg []) {
++ { "vddio", 15000 },
++ { "vddxo", 60000 },
++ { "vddrf", 155000 },
++ { "vddch0", 585000 },
++ },
++ .num_vregs = 4,
++};
++
+ static const struct qca_device_data qca_soc_data_wcn3988 __maybe_unused = {
+ .soc_type = QCA_WCN3988,
+ .vregs = (struct qca_vreg []) {
+@@ -2338,6 +2360,7 @@ static int qca_serdev_probe(struct serdev_device *serdev)
+ qcadev->btsoc_type = QCA_ROME;
+
+ switch (qcadev->btsoc_type) {
++ case QCA_WCN3950:
+ case QCA_WCN3988:
+ case QCA_WCN3990:
+ case QCA_WCN3991:
+@@ -2359,6 +2382,7 @@ static int qca_serdev_probe(struct serdev_device *serdev)
+ switch (qcadev->btsoc_type) {
+ case QCA_WCN6855:
+ case QCA_WCN7850:
++ case QCA_WCN6750:
+ if (!device_property_present(&serdev->dev, "enable-gpios")) {
+ /*
+ * Backward compatibility with old DT sources. If the
+@@ -2374,11 +2398,11 @@ static int qca_serdev_probe(struct serdev_device *serdev)
+ break;
+ }
+ fallthrough;
++ case QCA_WCN3950:
+ case QCA_WCN3988:
+ case QCA_WCN3990:
+ case QCA_WCN3991:
+ case QCA_WCN3998:
+- case QCA_WCN6750:
+ qcadev->bt_power->dev = &serdev->dev;
+ err = qca_init_regulators(qcadev->bt_power, data->vregs,
+ data->num_vregs);
+@@ -2683,6 +2707,7 @@ static const struct of_device_id qca_bluetooth_of_match[] = {
+ { .compatible = "qcom,qca6174-bt" },
+ { .compatible = "qcom,qca6390-bt", .data = &qca_soc_data_qca6390},
+ { .compatible = "qcom,qca9377-bt" },
++ { .compatible = "qcom,wcn3950-bt", .data = &qca_soc_data_wcn3950},
+ { .compatible = "qcom,wcn3988-bt", .data = &qca_soc_data_wcn3988},
+ { .compatible = "qcom,wcn3990-bt", .data = &qca_soc_data_wcn3990},
+ { .compatible = "qcom,wcn3991-bt", .data = &qca_soc_data_wcn3991},
+diff --git a/drivers/bluetooth/hci_uart.h b/drivers/bluetooth/hci_uart.h
+index fbf3079b92a533..5ea5dd80e297c7 100644
+--- a/drivers/bluetooth/hci_uart.h
++++ b/drivers/bluetooth/hci_uart.h
+@@ -90,6 +90,7 @@ struct hci_uart {
+ #define HCI_UART_REGISTERED 1
+ #define HCI_UART_PROTO_READY 2
+ #define HCI_UART_NO_SUSPEND_NOTIFIER 3
++#define HCI_UART_PROTO_INIT 4
+
+ /* TX states */
+ #define HCI_UART_SENDING 1
+diff --git a/drivers/bus/mhi/host/main.c b/drivers/bus/mhi/host/main.c
+index 4de75674f19350..aa8a0ef697c779 100644
+--- a/drivers/bus/mhi/host/main.c
++++ b/drivers/bus/mhi/host/main.c
+@@ -1207,11 +1207,16 @@ int mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
+ struct mhi_ring_element *mhi_tre;
+ struct mhi_buf_info *buf_info;
+ int eot, eob, chain, bei;
+- int ret;
++ int ret = 0;
+
+ /* Protect accesses for reading and incrementing WP */
+ write_lock_bh(&mhi_chan->lock);
+
++ if (mhi_chan->ch_state != MHI_CH_STATE_ENABLED) {
++ ret = -ENODEV;
++ goto out;
++ }
++
+ buf_ring = &mhi_chan->buf_ring;
+ tre_ring = &mhi_chan->tre_ring;
+
+@@ -1229,10 +1234,8 @@ int mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
+
+ if (!info->pre_mapped) {
+ ret = mhi_cntrl->map_single(mhi_cntrl, buf_info);
+- if (ret) {
+- write_unlock_bh(&mhi_chan->lock);
+- return ret;
+- }
++ if (ret)
++ goto out;
+ }
+
+ eob = !!(flags & MHI_EOB);
+@@ -1250,9 +1253,10 @@ int mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
+ mhi_add_ring_element(mhi_cntrl, tre_ring);
+ mhi_add_ring_element(mhi_cntrl, buf_ring);
+
++out:
+ write_unlock_bh(&mhi_chan->lock);
+
+- return 0;
++ return ret;
+ }
+
+ int mhi_queue_buf(struct mhi_device *mhi_dev, enum dma_data_direction dir,
+diff --git a/drivers/char/tpm/tpm-chip.c b/drivers/char/tpm/tpm-chip.c
+index 7df7abaf3e526b..e25daf2396d37b 100644
+--- a/drivers/char/tpm/tpm-chip.c
++++ b/drivers/char/tpm/tpm-chip.c
+@@ -168,6 +168,11 @@ int tpm_try_get_ops(struct tpm_chip *chip)
+ goto out_ops;
+
+ mutex_lock(&chip->tpm_mutex);
++
++	/* tpm_chip_start may issue IO that is denied while suspended */
++ if (chip->flags & TPM_CHIP_FLAG_SUSPENDED)
++ goto out_lock;
++
+ rc = tpm_chip_start(chip);
+ if (rc)
+ goto out_lock;
+@@ -300,6 +305,7 @@ int tpm_class_shutdown(struct device *dev)
+ down_write(&chip->ops_sem);
+ if (chip->flags & TPM_CHIP_FLAG_TPM2) {
+ if (!tpm_chip_start(chip)) {
++ tpm2_end_auth_session(chip);
+ tpm2_shutdown(chip, TPM2_SU_CLEAR);
+ tpm_chip_stop(chip);
+ }
+diff --git a/drivers/char/tpm/tpm-interface.c b/drivers/char/tpm/tpm-interface.c
+index b1daa0d7b341b1..f62f7871edbdb0 100644
+--- a/drivers/char/tpm/tpm-interface.c
++++ b/drivers/char/tpm/tpm-interface.c
+@@ -445,18 +445,11 @@ int tpm_get_random(struct tpm_chip *chip, u8 *out, size_t max)
+ if (!chip)
+ return -ENODEV;
+
+- /* Give back zero bytes, as TPM chip has not yet fully resumed: */
+- if (chip->flags & TPM_CHIP_FLAG_SUSPENDED) {
+- rc = 0;
+- goto out;
+- }
+-
+ if (chip->flags & TPM_CHIP_FLAG_TPM2)
+ rc = tpm2_get_random(chip, out, max);
+ else
+ rc = tpm1_get_random(chip, out, max);
+
+-out:
+ tpm_put_ops(chip);
+ return rc;
+ }
+diff --git a/drivers/char/tpm/tpm_tis_core.c b/drivers/char/tpm/tpm_tis_core.c
+index fdef214b9f6bff..ed0d3d8449b306 100644
+--- a/drivers/char/tpm/tpm_tis_core.c
++++ b/drivers/char/tpm/tpm_tis_core.c
+@@ -114,11 +114,10 @@ static int wait_for_tpm_stat(struct tpm_chip *chip, u8 mask,
+ return 0;
+ /* process status changes without irq support */
+ do {
++ usleep_range(priv->timeout_min, priv->timeout_max);
+ status = chip->ops->status(chip);
+ if ((status & mask) == mask)
+ return 0;
+- usleep_range(priv->timeout_min,
+- priv->timeout_max);
+ } while (time_before(jiffies, stop));
+ return -ETIME;
+ }
+@@ -464,7 +463,10 @@ static int tpm_tis_send_data(struct tpm_chip *chip, const u8 *buf, size_t len)
+
+ if (wait_for_tpm_stat(chip, TPM_STS_VALID, chip->timeout_c,
+ &priv->int_queue, false) < 0) {
+- rc = -ETIME;
++ if (test_bit(TPM_TIS_STATUS_VALID_RETRY, &priv->flags))
++ rc = -EAGAIN;
++ else
++ rc = -ETIME;
+ goto out_err;
+ }
+ status = tpm_tis_status(chip);
+@@ -481,7 +483,10 @@ static int tpm_tis_send_data(struct tpm_chip *chip, const u8 *buf, size_t len)
+
+ if (wait_for_tpm_stat(chip, TPM_STS_VALID, chip->timeout_c,
+ &priv->int_queue, false) < 0) {
+- rc = -ETIME;
++ if (test_bit(TPM_TIS_STATUS_VALID_RETRY, &priv->flags))
++ rc = -EAGAIN;
++ else
++ rc = -ETIME;
+ goto out_err;
+ }
+ status = tpm_tis_status(chip);
+@@ -546,9 +551,11 @@ static int tpm_tis_send_main(struct tpm_chip *chip, const u8 *buf, size_t len)
+ if (rc >= 0)
+ /* Data transfer done successfully */
+ break;
+- else if (rc != -EIO)
++ else if (rc != -EAGAIN && rc != -EIO)
+ /* Data transfer failed, not recoverable */
+ return rc;
++
++ usleep_range(priv->timeout_min, priv->timeout_max);
+ }
+
+ /* go and do it */
+@@ -1144,6 +1151,9 @@ int tpm_tis_core_init(struct device *dev, struct tpm_tis_data *priv, int irq,
+ priv->timeout_max = TIS_TIMEOUT_MAX_ATML;
+ }
+
++ if (priv->manufacturer_id == TPM_VID_IFX)
++ set_bit(TPM_TIS_STATUS_VALID_RETRY, &priv->flags);
++
+ if (is_bsw()) {
+ priv->ilb_base_addr = ioremap(INTEL_LEGACY_BLK_BASE_ADDR,
+ ILB_REMAP_SIZE);
+diff --git a/drivers/char/tpm/tpm_tis_core.h b/drivers/char/tpm/tpm_tis_core.h
+index 690ad8e9b73190..970d02c337c7f1 100644
+--- a/drivers/char/tpm/tpm_tis_core.h
++++ b/drivers/char/tpm/tpm_tis_core.h
+@@ -89,6 +89,7 @@ enum tpm_tis_flags {
+ TPM_TIS_INVALID_STATUS = 1,
+ TPM_TIS_DEFAULT_CANCELLATION = 2,
+ TPM_TIS_IRQ_TESTED = 3,
++ TPM_TIS_STATUS_VALID_RETRY = 4,
+ };
+
+ struct tpm_tis_data {
+diff --git a/drivers/clk/qcom/clk-branch.c b/drivers/clk/qcom/clk-branch.c
+index 229480c5b075a0..0f10090d4ae681 100644
+--- a/drivers/clk/qcom/clk-branch.c
++++ b/drivers/clk/qcom/clk-branch.c
+@@ -28,7 +28,7 @@ static bool clk_branch_in_hwcg_mode(const struct clk_branch *br)
+
+ static bool clk_branch_check_halt(const struct clk_branch *br, bool enabling)
+ {
+- bool invert = (br->halt_check == BRANCH_HALT_ENABLE);
++ bool invert = (br->halt_check & BRANCH_HALT_ENABLE);
+ u32 val;
+
+ regmap_read(br->clkr.regmap, br->halt_reg, &val);
+@@ -44,7 +44,7 @@ static bool clk_branch2_check_halt(const struct clk_branch *br, bool enabling)
+ {
+ u32 val;
+ u32 mask;
+- bool invert = (br->halt_check == BRANCH_HALT_ENABLE);
++ bool invert = (br->halt_check & BRANCH_HALT_ENABLE);
+
+ mask = CBCR_NOC_FSM_STATUS;
+ mask |= CBCR_CLK_OFF;
+diff --git a/drivers/clk/qcom/gdsc.c b/drivers/clk/qcom/gdsc.c
+index fa5fe4c2a2ee77..208fc430ec98f1 100644
+--- a/drivers/clk/qcom/gdsc.c
++++ b/drivers/clk/qcom/gdsc.c
+@@ -292,6 +292,9 @@ static int gdsc_enable(struct generic_pm_domain *domain)
+ */
+ udelay(1);
+
++ if (sc->flags & RETAIN_FF_ENABLE)
++ gdsc_retain_ff_on(sc);
++
+ /* Turn on HW trigger mode if supported */
+ if (sc->flags & HW_CTRL) {
+ ret = gdsc_hwctrl(sc, true);
+@@ -308,9 +311,6 @@ static int gdsc_enable(struct generic_pm_domain *domain)
+ udelay(1);
+ }
+
+- if (sc->flags & RETAIN_FF_ENABLE)
+- gdsc_retain_ff_on(sc);
+-
+ return 0;
+ }
+
+@@ -457,13 +457,6 @@ static int gdsc_init(struct gdsc *sc)
+ goto err_disable_supply;
+ }
+
+- /* Turn on HW trigger mode if supported */
+- if (sc->flags & HW_CTRL) {
+- ret = gdsc_hwctrl(sc, true);
+- if (ret < 0)
+- goto err_disable_supply;
+- }
+-
+ /*
+ * Make sure the retain bit is set if the GDSC is already on,
+ * otherwise we end up turning off the GDSC and destroying all
+@@ -471,6 +464,14 @@ static int gdsc_init(struct gdsc *sc)
+ */
+ if (sc->flags & RETAIN_FF_ENABLE)
+ gdsc_retain_ff_on(sc);
++
++ /* Turn on HW trigger mode if supported */
++ if (sc->flags & HW_CTRL) {
++ ret = gdsc_hwctrl(sc, true);
++ if (ret < 0)
++ goto err_disable_supply;
++ }
++
+ } else if (sc->flags & ALWAYS_ON) {
+ /* If ALWAYS_ON GDSCs are not ON, turn them ON */
+ gdsc_enable(&sc->pd);
+@@ -506,6 +507,23 @@ static int gdsc_init(struct gdsc *sc)
+ return ret;
+ }
+
++static void gdsc_pm_subdomain_remove(struct gdsc_desc *desc, size_t num)
++{
++ struct device *dev = desc->dev;
++ struct gdsc **scs = desc->scs;
++ int i;
++
++ /* Remove subdomains */
++ for (i = num - 1; i >= 0; i--) {
++ if (!scs[i])
++ continue;
++ if (scs[i]->parent)
++ pm_genpd_remove_subdomain(scs[i]->parent, &scs[i]->pd);
++ else if (!IS_ERR_OR_NULL(dev->pm_domain))
++ pm_genpd_remove_subdomain(pd_to_genpd(dev->pm_domain), &scs[i]->pd);
++ }
++}
++
+ int gdsc_register(struct gdsc_desc *desc,
+ struct reset_controller_dev *rcdev, struct regmap *regmap)
+ {
+@@ -555,30 +573,27 @@ int gdsc_register(struct gdsc_desc *desc,
+ if (!scs[i])
+ continue;
+ if (scs[i]->parent)
+- pm_genpd_add_subdomain(scs[i]->parent, &scs[i]->pd);
++ ret = pm_genpd_add_subdomain(scs[i]->parent, &scs[i]->pd);
+ else if (!IS_ERR_OR_NULL(dev->pm_domain))
+- pm_genpd_add_subdomain(pd_to_genpd(dev->pm_domain), &scs[i]->pd);
++ ret = pm_genpd_add_subdomain(pd_to_genpd(dev->pm_domain), &scs[i]->pd);
++ if (ret)
++ goto err_pm_subdomain_remove;
+ }
+
+ return of_genpd_add_provider_onecell(dev->of_node, data);
++
++err_pm_subdomain_remove:
++ gdsc_pm_subdomain_remove(desc, i);
++
++ return ret;
+ }
+
+ void gdsc_unregister(struct gdsc_desc *desc)
+ {
+- int i;
+ struct device *dev = desc->dev;
+- struct gdsc **scs = desc->scs;
+ size_t num = desc->num;
+
+- /* Remove subdomains */
+- for (i = 0; i < num; i++) {
+- if (!scs[i])
+- continue;
+- if (scs[i]->parent)
+- pm_genpd_remove_subdomain(scs[i]->parent, &scs[i]->pd);
+- else if (!IS_ERR_OR_NULL(dev->pm_domain))
+- pm_genpd_remove_subdomain(pd_to_genpd(dev->pm_domain), &scs[i]->pd);
+- }
++ gdsc_pm_subdomain_remove(desc, num);
+ of_genpd_del_provider(dev->of_node);
+ }
+
+diff --git a/drivers/clk/renesas/r9a07g043-cpg.c b/drivers/clk/renesas/r9a07g043-cpg.c
+index c3c2b0c4398330..fce2eecfa8c03c 100644
+--- a/drivers/clk/renesas/r9a07g043-cpg.c
++++ b/drivers/clk/renesas/r9a07g043-cpg.c
+@@ -89,7 +89,9 @@ static const struct clk_div_table dtable_1_32[] = {
+
+ /* Mux clock tables */
+ static const char * const sel_pll3_3[] = { ".pll3_533", ".pll3_400" };
++#ifdef CONFIG_ARM64
+ static const char * const sel_pll6_2[] = { ".pll6_250", ".pll5_250" };
++#endif
+ static const char * const sel_sdhi[] = { ".clk_533", ".clk_400", ".clk_266" };
+
+ static const u32 mtable_sdhi[] = { 1, 2, 3 };
+@@ -137,7 +139,12 @@ static const struct cpg_core_clk r9a07g043_core_clks[] __initconst = {
+ DEF_DIV("P2", R9A07G043_CLK_P2, CLK_PLL3_DIV2_4_2, DIVPL3A, dtable_1_32),
+ DEF_FIXED("M0", R9A07G043_CLK_M0, CLK_PLL3_DIV2_4, 1, 1),
+ DEF_FIXED("ZT", R9A07G043_CLK_ZT, CLK_PLL3_DIV2_4_2, 1, 1),
++#ifdef CONFIG_ARM64
+ DEF_MUX("HP", R9A07G043_CLK_HP, SEL_PLL6_2, sel_pll6_2),
++#endif
++#ifdef CONFIG_RISCV
++ DEF_FIXED("HP", R9A07G043_CLK_HP, CLK_PLL6_250, 1, 1),
++#endif
+ DEF_FIXED("SPI0", R9A07G043_CLK_SPI0, CLK_DIV_PLL3_C, 1, 2),
+ DEF_FIXED("SPI1", R9A07G043_CLK_SPI1, CLK_DIV_PLL3_C, 1, 4),
+ DEF_SD_MUX("SD0", R9A07G043_CLK_SD0, SEL_SDHI0, SEL_SDHI0_STS, sel_sdhi,
+diff --git a/drivers/clocksource/timer-stm32-lp.c b/drivers/clocksource/timer-stm32-lp.c
+index a4c95161cb22c4..193e4f643358bc 100644
+--- a/drivers/clocksource/timer-stm32-lp.c
++++ b/drivers/clocksource/timer-stm32-lp.c
+@@ -168,9 +168,7 @@ static int stm32_clkevent_lp_probe(struct platform_device *pdev)
+ }
+
+ if (of_property_read_bool(pdev->dev.parent->of_node, "wakeup-source")) {
+- ret = device_init_wakeup(&pdev->dev, true);
+- if (ret)
+- goto out_clk_disable;
++ device_set_wakeup_capable(&pdev->dev, true);
+
+ ret = dev_pm_set_wake_irq(&pdev->dev, irq);
+ if (ret)
+diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
+index bd63837eabb4ef..1b26845703f68c 100644
+--- a/drivers/cpufreq/amd-pstate.c
++++ b/drivers/cpufreq/amd-pstate.c
+@@ -1619,7 +1619,7 @@ static int amd_pstate_epp_reenable(struct cpufreq_policy *policy)
+ max_perf, policy->boost_enabled);
+ }
+
+- return amd_pstate_update_perf(cpudata, 0, 0, max_perf, cpudata->epp_cached, false);
++ return amd_pstate_epp_update_limit(policy);
+ }
+
+ static int amd_pstate_epp_cpu_online(struct cpufreq_policy *policy)
+@@ -1668,6 +1668,9 @@ static int amd_pstate_epp_suspend(struct cpufreq_policy *policy)
+ if (cppc_state != AMD_PSTATE_ACTIVE)
+ return 0;
+
++ /* invalidate to ensure it's rewritten during resume */
++ cpudata->cppc_req_cached = 0;
++
+ /* set this flag to avoid setting core offline*/
+ cpudata->suspended = true;
+
+diff --git a/drivers/cpuidle/Makefile b/drivers/cpuidle/Makefile
+index d103342b7cfc21..1de9e92c5b0fc9 100644
+--- a/drivers/cpuidle/Makefile
++++ b/drivers/cpuidle/Makefile
+@@ -3,6 +3,9 @@
+ # Makefile for cpuidle.
+ #
+
++# Branch profiling isn't noinstr-safe
++ccflags-$(CONFIG_TRACE_BRANCH_PROFILING) += -DDISABLE_BRANCH_PROFILING
++
+ obj-y += cpuidle.o driver.o governor.o sysfs.o governors/
+ obj-$(CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED) += coupled.o
+ obj-$(CONFIG_DT_IDLE_STATES) += dt_idle_states.o
+diff --git a/drivers/crypto/ccp/sp-pci.c b/drivers/crypto/ccp/sp-pci.c
+index 248d98fd8c48d0..157f9a9ed63616 100644
+--- a/drivers/crypto/ccp/sp-pci.c
++++ b/drivers/crypto/ccp/sp-pci.c
+@@ -189,14 +189,17 @@ static bool sp_pci_is_master(struct sp_device *sp)
+ pdev_new = to_pci_dev(dev_new);
+ pdev_cur = to_pci_dev(dev_cur);
+
+- if (pdev_new->bus->number < pdev_cur->bus->number)
+- return true;
++ if (pci_domain_nr(pdev_new->bus) != pci_domain_nr(pdev_cur->bus))
++ return pci_domain_nr(pdev_new->bus) < pci_domain_nr(pdev_cur->bus);
+
+- if (PCI_SLOT(pdev_new->devfn) < PCI_SLOT(pdev_cur->devfn))
+- return true;
++ if (pdev_new->bus->number != pdev_cur->bus->number)
++ return pdev_new->bus->number < pdev_cur->bus->number;
+
+- if (PCI_FUNC(pdev_new->devfn) < PCI_FUNC(pdev_cur->devfn))
+- return true;
++ if (PCI_SLOT(pdev_new->devfn) != PCI_SLOT(pdev_cur->devfn))
++ return PCI_SLOT(pdev_new->devfn) < PCI_SLOT(pdev_cur->devfn);
++
++ if (PCI_FUNC(pdev_new->devfn) != PCI_FUNC(pdev_cur->devfn))
++ return PCI_FUNC(pdev_new->devfn) < PCI_FUNC(pdev_cur->devfn);
+
+ return false;
+ }
+diff --git a/drivers/firmware/cirrus/test/cs_dsp_test_control_parse.c b/drivers/firmware/cirrus/test/cs_dsp_test_control_parse.c
+index cb90964740ea35..942ba1af5e7c1e 100644
+--- a/drivers/firmware/cirrus/test/cs_dsp_test_control_parse.c
++++ b/drivers/firmware/cirrus/test/cs_dsp_test_control_parse.c
+@@ -73,6 +73,18 @@ static const struct cs_dsp_mock_coeff_def mock_coeff_template = {
+ .length_bytes = 4,
+ };
+
++static char *cs_dsp_ctl_alloc_test_string(struct kunit *test, char c, size_t len)
++{
++ char *str;
++
++ str = kunit_kmalloc(test, len + 1, GFP_KERNEL);
++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, str);
++ memset(str, c, len);
++ str[len] = '\0';
++
++ return str;
++}
++
+ /* Algorithm info block without controls should load */
+ static void cs_dsp_ctl_parse_no_coeffs(struct kunit *test)
+ {
+@@ -160,12 +172,8 @@ static void cs_dsp_ctl_parse_max_v1_name(struct kunit *test)
+ struct cs_dsp_mock_coeff_def def = mock_coeff_template;
+ struct cs_dsp_coeff_ctl *ctl;
+ struct firmware *wmfw;
+- char *name;
+
+- name = kunit_kzalloc(test, 256, GFP_KERNEL);
+- KUNIT_ASSERT_NOT_ERR_OR_NULL(test, name);
+- memset(name, 'A', 255);
+- def.fullname = name;
++ def.fullname = cs_dsp_ctl_alloc_test_string(test, 'A', 255);
+
+ cs_dsp_mock_wmfw_start_alg_info_block(local->wmfw_builder,
+ cs_dsp_ctl_parse_test_algs[0].id,
+@@ -252,14 +260,9 @@ static void cs_dsp_ctl_parse_max_short_name(struct kunit *test)
+ struct cs_dsp_test_local *local = priv->local;
+ struct cs_dsp_mock_coeff_def def = mock_coeff_template;
+ struct cs_dsp_coeff_ctl *ctl;
+- char *name;
+ struct firmware *wmfw;
+
+- name = kunit_kmalloc(test, 255, GFP_KERNEL);
+- KUNIT_ASSERT_NOT_ERR_OR_NULL(test, name);
+- memset(name, 'A', 255);
+-
+- def.shortname = name;
++ def.shortname = cs_dsp_ctl_alloc_test_string(test, 'A', 255);
+
+ cs_dsp_mock_wmfw_start_alg_info_block(local->wmfw_builder,
+ cs_dsp_ctl_parse_test_algs[0].id,
+@@ -273,7 +276,7 @@ static void cs_dsp_ctl_parse_max_short_name(struct kunit *test)
+ ctl = list_first_entry_or_null(&priv->dsp->ctl_list, struct cs_dsp_coeff_ctl, list);
+ KUNIT_ASSERT_NOT_NULL(test, ctl);
+ KUNIT_EXPECT_EQ(test, ctl->subname_len, 255);
+- KUNIT_EXPECT_MEMEQ(test, ctl->subname, name, ctl->subname_len);
++ KUNIT_EXPECT_MEMEQ(test, ctl->subname, def.shortname, ctl->subname_len);
+ KUNIT_EXPECT_EQ(test, ctl->flags, def.flags);
+ KUNIT_EXPECT_EQ(test, ctl->type, def.type);
+ KUNIT_EXPECT_EQ(test, ctl->len, def.length_bytes);
+@@ -323,12 +326,8 @@ static void cs_dsp_ctl_parse_with_max_fullname(struct kunit *test)
+ struct cs_dsp_mock_coeff_def def = mock_coeff_template;
+ struct cs_dsp_coeff_ctl *ctl;
+ struct firmware *wmfw;
+- char *fullname;
+
+- fullname = kunit_kmalloc(test, 255, GFP_KERNEL);
+- KUNIT_ASSERT_NOT_ERR_OR_NULL(test, fullname);
+- memset(fullname, 'A', 255);
+- def.fullname = fullname;
++ def.fullname = cs_dsp_ctl_alloc_test_string(test, 'A', 255);
+
+ cs_dsp_mock_wmfw_start_alg_info_block(local->wmfw_builder,
+ cs_dsp_ctl_parse_test_algs[0].id,
+@@ -392,12 +391,8 @@ static void cs_dsp_ctl_parse_with_max_description(struct kunit *test)
+ struct cs_dsp_mock_coeff_def def = mock_coeff_template;
+ struct cs_dsp_coeff_ctl *ctl;
+ struct firmware *wmfw;
+- char *description;
+
+- description = kunit_kmalloc(test, 65535, GFP_KERNEL);
+- KUNIT_ASSERT_NOT_ERR_OR_NULL(test, description);
+- memset(description, 'A', 65535);
+- def.description = description;
++ def.description = cs_dsp_ctl_alloc_test_string(test, 'A', 65535);
+
+ cs_dsp_mock_wmfw_start_alg_info_block(local->wmfw_builder,
+ cs_dsp_ctl_parse_test_algs[0].id,
+@@ -429,17 +424,9 @@ static void cs_dsp_ctl_parse_with_max_fullname_and_description(struct kunit *tes
+ struct cs_dsp_mock_coeff_def def = mock_coeff_template;
+ struct cs_dsp_coeff_ctl *ctl;
+ struct firmware *wmfw;
+- char *fullname, *description;
+-
+- fullname = kunit_kmalloc(test, 255, GFP_KERNEL);
+- KUNIT_ASSERT_NOT_ERR_OR_NULL(test, fullname);
+- memset(fullname, 'A', 255);
+- def.fullname = fullname;
+
+- description = kunit_kmalloc(test, 65535, GFP_KERNEL);
+- KUNIT_ASSERT_NOT_ERR_OR_NULL(test, description);
+- memset(description, 'A', 65535);
+- def.description = description;
++ def.fullname = cs_dsp_ctl_alloc_test_string(test, 'A', 255);
++ def.description = cs_dsp_ctl_alloc_test_string(test, 'A', 65535);
+
+ cs_dsp_mock_wmfw_start_alg_info_block(local->wmfw_builder,
+ cs_dsp_ctl_parse_test_algs[0].id,
+diff --git a/drivers/gpio/gpio-mpc8xxx.c b/drivers/gpio/gpio-mpc8xxx.c
+index 0cd4c36ae8aaf0..5415175364899e 100644
+--- a/drivers/gpio/gpio-mpc8xxx.c
++++ b/drivers/gpio/gpio-mpc8xxx.c
+@@ -410,7 +410,9 @@ static int mpc8xxx_probe(struct platform_device *pdev)
+ goto err;
+ }
+
+- device_init_wakeup(dev, true);
++ ret = devm_device_init_wakeup(dev);
++ if (ret)
++ return dev_err_probe(dev, ret, "Failed to init wakeup\n");
+
+ return 0;
+ err:
+diff --git a/drivers/gpio/gpio-tegra186.c b/drivers/gpio/gpio-tegra186.c
+index 6895b65c86aff5..d27bfac6c9f53d 100644
+--- a/drivers/gpio/gpio-tegra186.c
++++ b/drivers/gpio/gpio-tegra186.c
+@@ -823,6 +823,7 @@ static int tegra186_gpio_probe(struct platform_device *pdev)
+ struct gpio_irq_chip *irq;
+ struct tegra_gpio *gpio;
+ struct device_node *np;
++ struct resource *res;
+ char **names;
+ int err;
+
+@@ -842,19 +843,19 @@ static int tegra186_gpio_probe(struct platform_device *pdev)
+ gpio->num_banks++;
+
+ /* get register apertures */
+- gpio->secure = devm_platform_ioremap_resource_byname(pdev, "security");
+- if (IS_ERR(gpio->secure)) {
+- gpio->secure = devm_platform_ioremap_resource(pdev, 0);
+- if (IS_ERR(gpio->secure))
+- return PTR_ERR(gpio->secure);
+- }
+-
+- gpio->base = devm_platform_ioremap_resource_byname(pdev, "gpio");
+- if (IS_ERR(gpio->base)) {
+- gpio->base = devm_platform_ioremap_resource(pdev, 1);
+- if (IS_ERR(gpio->base))
+- return PTR_ERR(gpio->base);
+- }
++ res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "security");
++ if (!res)
++ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
++ gpio->secure = devm_ioremap_resource(&pdev->dev, res);
++ if (IS_ERR(gpio->secure))
++ return PTR_ERR(gpio->secure);
++
++ res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "gpio");
++ if (!res)
++ res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
++ gpio->base = devm_ioremap_resource(&pdev->dev, res);
++ if (IS_ERR(gpio->base))
++ return PTR_ERR(gpio->base);
+
+ err = platform_irq_count(pdev);
+ if (err < 0)
+diff --git a/drivers/gpio/gpio-zynq.c b/drivers/gpio/gpio-zynq.c
+index be81fa2b17abc6..3dae63f3ea2177 100644
+--- a/drivers/gpio/gpio-zynq.c
++++ b/drivers/gpio/gpio-zynq.c
+@@ -1011,6 +1011,7 @@ static void zynq_gpio_remove(struct platform_device *pdev)
+ ret = pm_runtime_get_sync(&pdev->dev);
+ if (ret < 0)
+ dev_warn(&pdev->dev, "pm_runtime_get_sync() Failed\n");
++ device_init_wakeup(&pdev->dev, 0);
+ gpiochip_remove(&gpio->chip);
+ device_set_wakeup_capable(&pdev->dev, 0);
+ pm_runtime_disable(&pdev->dev);
+diff --git a/drivers/gpio/gpiolib-of.c b/drivers/gpio/gpiolib-of.c
+index 2e537ee979f3e2..176e9142fd8f85 100644
+--- a/drivers/gpio/gpiolib-of.c
++++ b/drivers/gpio/gpiolib-of.c
+@@ -193,6 +193,8 @@ static void of_gpio_try_fixup_polarity(const struct device_node *np,
+ */
+ { "himax,hx8357", "gpios-reset", false },
+ { "himax,hx8369", "gpios-reset", false },
++#endif
++#if IS_ENABLED(CONFIG_MTD_NAND_JZ4780)
+ /*
+ * The rb-gpios semantics was undocumented and qi,lb60 (along with
+ * the ingenic driver) got it wrong. The active state encodes the
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index f5909977eed4b7..9a8f6cb2b8360e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -6851,18 +6851,26 @@ struct dma_fence *amdgpu_device_switch_gang(struct amdgpu_device *adev,
+ {
+ struct dma_fence *old = NULL;
+
++ dma_fence_get(gang);
+ do {
+ dma_fence_put(old);
+ old = amdgpu_device_get_gang(adev);
+ if (old == gang)
+ break;
+
+- if (!dma_fence_is_signaled(old))
++ if (!dma_fence_is_signaled(old)) {
++ dma_fence_put(gang);
+ return old;
++ }
+
+ } while (cmpxchg((struct dma_fence __force **)&adev->gang_submit,
+ old, gang) != old);
+
++ /*
++ * Drop it once for the exchanged reference in adev and once for the
++ * thread local reference acquired in amdgpu_device_get_gang().
++ */
++ dma_fence_put(old);
+ dma_fence_put(old);
+ return NULL;
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+index 5c07777d3239e4..22aa4a8f11891b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+@@ -2534,8 +2534,6 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct amdgpu_vm *vm,
+ spin_lock_init(&vm->status_lock);
+ INIT_LIST_HEAD(&vm->freed);
+ INIT_LIST_HEAD(&vm->done);
+- INIT_LIST_HEAD(&vm->pt_freed);
+- INIT_WORK(&vm->pt_free_work, amdgpu_vm_pt_free_work);
+ INIT_KFIFO(vm->faults);
+
+ r = amdgpu_vm_init_entities(adev, vm);
+@@ -2717,8 +2715,6 @@ void amdgpu_vm_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm)
+
+ amdgpu_amdkfd_gpuvm_destroy_cb(adev, vm);
+
+- flush_work(&vm->pt_free_work);
+-
+ root = amdgpu_bo_ref(vm->root.bo);
+ amdgpu_bo_reserve(root, true);
+ amdgpu_vm_set_pasid(adev, vm, 0);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
+index a3e128e373bc62..5010a3107bf892 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
+@@ -374,10 +374,6 @@ struct amdgpu_vm {
+ /* BOs which are invalidated, has been updated in the PTs */
+ struct list_head done;
+
+- /* PT BOs scheduled to free and fill with zero if vm_resv is not hold */
+- struct list_head pt_freed;
+- struct work_struct pt_free_work;
+-
+ /* contains the page directory */
+ struct amdgpu_vm_bo_base root;
+ struct dma_fence *last_update;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_pt.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_pt.c
+index b0bf216821152e..30022123b0bf6d 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_pt.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_pt.c
+@@ -547,27 +547,6 @@ static void amdgpu_vm_pt_free(struct amdgpu_vm_bo_base *entry)
+ amdgpu_bo_unref(&entry->bo);
+ }
+
+-void amdgpu_vm_pt_free_work(struct work_struct *work)
+-{
+- struct amdgpu_vm_bo_base *entry, *next;
+- struct amdgpu_vm *vm;
+- LIST_HEAD(pt_freed);
+-
+- vm = container_of(work, struct amdgpu_vm, pt_free_work);
+-
+- spin_lock(&vm->status_lock);
+- list_splice_init(&vm->pt_freed, &pt_freed);
+- spin_unlock(&vm->status_lock);
+-
+- /* flush_work in amdgpu_vm_fini ensure vm->root.bo is valid. */
+- amdgpu_bo_reserve(vm->root.bo, true);
+-
+- list_for_each_entry_safe(entry, next, &pt_freed, vm_status)
+- amdgpu_vm_pt_free(entry);
+-
+- amdgpu_bo_unreserve(vm->root.bo);
+-}
+-
+ /**
+ * amdgpu_vm_pt_free_list - free PD/PT levels
+ *
+@@ -580,19 +559,15 @@ void amdgpu_vm_pt_free_list(struct amdgpu_device *adev,
+ struct amdgpu_vm_update_params *params)
+ {
+ struct amdgpu_vm_bo_base *entry, *next;
+- struct amdgpu_vm *vm = params->vm;
+ bool unlocked = params->unlocked;
+
+ if (list_empty(&params->tlb_flush_waitlist))
+ return;
+
+- if (unlocked) {
+- spin_lock(&vm->status_lock);
+- list_splice_init(&params->tlb_flush_waitlist, &vm->pt_freed);
+- spin_unlock(&vm->status_lock);
+- schedule_work(&vm->pt_free_work);
+- return;
+- }
++ /*
++ * unlocked unmap clear page table leaves, warning to free the page entry.
++ */
++ WARN_ON(unlocked);
+
+ list_for_each_entry_safe(entry, next, &params->tlb_flush_waitlist, vm_status)
+ amdgpu_vm_pt_free(entry);
+@@ -900,7 +875,15 @@ int amdgpu_vm_ptes_update(struct amdgpu_vm_update_params *params,
+ incr = (uint64_t)AMDGPU_GPU_PAGE_SIZE << shift;
+ mask = amdgpu_vm_pt_entries_mask(adev, cursor.level);
+ pe_start = ((cursor.pfn >> shift) & mask) * 8;
+- entry_end = ((uint64_t)mask + 1) << shift;
++
++ if (cursor.level < AMDGPU_VM_PTB && params->unlocked)
++ /*
++ * MMU notifier callback unlocked unmap huge page, leave is PDE entry,
++ * only clear one entry. Next entry search again for PDE or PTE leave.
++ */
++ entry_end = 1ULL << shift;
++ else
++ entry_end = ((uint64_t)mask + 1) << shift;
+ entry_end += cursor.pfn & ~(entry_end - 1);
+ entry_end = min(entry_end, end);
+
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
+index 065d8784145918..33df35cab46791 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
+@@ -212,6 +212,11 @@ static int set_queue_properties_from_user(struct queue_properties *q_properties,
+ return -EINVAL;
+ }
+
++ if (args->ring_size < KFD_MIN_QUEUE_RING_SIZE) {
++ args->ring_size = KFD_MIN_QUEUE_RING_SIZE;
++ pr_debug("Size lower. clamped to KFD_MIN_QUEUE_RING_SIZE");
++ }
++
+ if (!access_ok((const void __user *) args->read_pointer_address,
+ sizeof(uint32_t))) {
+ pr_err("Can't access read pointer\n");
+@@ -461,6 +466,11 @@ static int kfd_ioctl_update_queue(struct file *filp, struct kfd_process *p,
+ return -EINVAL;
+ }
+
++ if (args->ring_size < KFD_MIN_QUEUE_RING_SIZE) {
++ args->ring_size = KFD_MIN_QUEUE_RING_SIZE;
++ pr_debug("Size lower. clamped to KFD_MIN_QUEUE_RING_SIZE");
++ }
++
+ properties.queue_address = args->ring_base_address;
+ properties.queue_size = args->ring_size;
+ properties.queue_percent = args->queue_percentage & 0xFF;
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device.c b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+index a29374c8640565..6cefd338f23de0 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_device.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+@@ -1593,6 +1593,11 @@ int kfd_debugfs_hang_hws(struct kfd_node *dev)
+ return -EINVAL;
+ }
+
++ if (dev->kfd->shared_resources.enable_mes) {
++ dev_err(dev->adev->dev, "Inducing MES hang is not supported\n");
++ return -EINVAL;
++ }
++
+ return dqm_debugfs_hang_hws(dev->dqm);
+ }
+
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process.c b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
+index 083f83c9453184..c3f2c0428e013b 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_process.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
+@@ -35,6 +35,7 @@
+ #include <linux/pm_runtime.h>
+ #include "amdgpu_amdkfd.h"
+ #include "amdgpu.h"
++#include "amdgpu_reset.h"
+
+ struct mm_struct;
+
+@@ -1140,6 +1141,17 @@ static void kfd_process_remove_sysfs(struct kfd_process *p)
+ p->kobj = NULL;
+ }
+
++/*
++ * If any GPU is ongoing reset, wait for reset complete.
++ */
++static void kfd_process_wait_gpu_reset_complete(struct kfd_process *p)
++{
++ int i;
++
++ for (i = 0; i < p->n_pdds; i++)
++ flush_workqueue(p->pdds[i]->dev->adev->reset_domain->wq);
++}
++
+ /* No process locking is needed in this function, because the process
+ * is not findable any more. We must assume that no other thread is
+ * using it any more, otherwise we couldn't safely free the process
+@@ -1154,6 +1166,11 @@ static void kfd_process_wq_release(struct work_struct *work)
+ kfd_process_dequeue_from_all_devices(p);
+ pqm_uninit(&p->pqm);
+
++ /*
++ * If GPU in reset, user queues may still running, wait for reset complete.
++ */
++ kfd_process_wait_gpu_reset_complete(p);
++
+ /* Signal the eviction fence after user mode queues are
+ * destroyed. This allows any BOs to be freed without
+ * triggering pointless evictions or waiting for fences.
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
+index 6c02bc36d63446..d79caa1a68676d 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
+@@ -548,7 +548,7 @@ int pqm_destroy_queue(struct process_queue_manager *pqm, unsigned int qid)
+ pr_err("Pasid 0x%x destroy queue %d failed, ret %d\n",
+ pqm->process->pasid,
+ pqn->q->properties.queue_id, retval);
+- if (retval != -ETIME)
++ if (retval != -ETIME && retval != -EIO)
+ goto err_destroy_queue;
+ }
+ kfd_procfs_del_queue(pqn->q);
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
+index 9477a4adcd36d6..d1cf9dd352904c 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
+@@ -3002,19 +3002,6 @@ svm_range_restore_pages(struct amdgpu_device *adev, unsigned int pasid,
+ goto out;
+ }
+
+- /* check if this page fault time stamp is before svms->checkpoint_ts */
+- if (svms->checkpoint_ts[gpuidx] != 0) {
+- if (amdgpu_ih_ts_after(ts, svms->checkpoint_ts[gpuidx])) {
+- pr_debug("draining retry fault, drop fault 0x%llx\n", addr);
+- r = 0;
+- goto out;
+- } else
+- /* ts is after svms->checkpoint_ts now, reset svms->checkpoint_ts
+- * to zero to avoid following ts wrap around give wrong comparing
+- */
+- svms->checkpoint_ts[gpuidx] = 0;
+- }
+-
+ if (!p->xnack_enabled) {
+ pr_debug("XNACK not enabled for pasid 0x%x\n", pasid);
+ r = -EFAULT;
+@@ -3034,6 +3021,21 @@ svm_range_restore_pages(struct amdgpu_device *adev, unsigned int pasid,
+ mmap_read_lock(mm);
+ retry_write_locked:
+ mutex_lock(&svms->lock);
++
++ /* check if this page fault time stamp is before svms->checkpoint_ts */
++ if (svms->checkpoint_ts[gpuidx] != 0) {
++ if (amdgpu_ih_ts_after(ts, svms->checkpoint_ts[gpuidx])) {
++ pr_debug("draining retry fault, drop fault 0x%llx\n", addr);
++ r = -EAGAIN;
++ goto out_unlock_svms;
++ } else {
++ /* ts is after svms->checkpoint_ts now, reset svms->checkpoint_ts
++ * to zero to avoid following ts wrap around give wrong comparing
++ */
++ svms->checkpoint_ts[gpuidx] = 0;
++ }
++ }
++
+ prange = svm_range_from_addr(svms, addr, NULL);
+ if (!prange) {
+ pr_debug("failed to find prange svms 0x%p address [0x%llx]\n",
+@@ -3159,7 +3161,8 @@ svm_range_restore_pages(struct amdgpu_device *adev, unsigned int pasid,
+ mutex_unlock(&svms->lock);
+ mmap_read_unlock(mm);
+
+- svm_range_count_fault(node, p, gpuidx);
++ if (r != -EAGAIN)
++ svm_range_count_fault(node, p, gpuidx);
+
+ mmput(mm);
+ out:
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index f84e795e35f586..4683c7ef4507f5 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -5549,9 +5549,11 @@ void dc_allow_idle_optimizations_internal(struct dc *dc, bool allow, char const
+ if (dc->clk_mgr != NULL && dc->clk_mgr->funcs->get_hard_min_memclk)
+ idle_dramclk_khz = dc->clk_mgr->funcs->get_hard_min_memclk(dc->clk_mgr);
+
+- for (i = 0; i < dc->res_pool->pipe_count; i++) {
+- pipe = &context->res_ctx.pipe_ctx[i];
+- subvp_pipe_type[i] = dc_state_get_pipe_subvp_type(context, pipe);
++ if (dc->res_pool && context) {
++ for (i = 0; i < dc->res_pool->pipe_count; i++) {
++ pipe = &context->res_ctx.pipe_ctx[i];
++ subvp_pipe_type[i] = dc_state_get_pipe_subvp_type(context, pipe);
++ }
+ }
+
+ DC_LOG_DC("%s: allow_idle=%d\n HardMinUClk_Khz=%d HardMinDramclk_Khz=%d\n Pipe_0=%d Pipe_1=%d Pipe_2=%d Pipe_3=%d Pipe_4=%d Pipe_5=%d (caller=%s)\n",
+diff --git a/drivers/gpu/drm/amd/display/dc/dc.h b/drivers/gpu/drm/amd/display/dc/dc.h
+index 053481ab69efbe..ab77dcbc105844 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc.h
++++ b/drivers/gpu/drm/amd/display/dc/dc.h
+@@ -1788,7 +1788,9 @@ struct dc_link {
+ bool dongle_mode_timing_override;
+ bool blank_stream_on_ocs_change;
+ bool read_dpcd204h_on_irq_hpd;
++ bool force_dp_ffe_preset;
+ } wa_flags;
++ union dc_dp_ffe_preset forced_dp_ffe_preset;
+ struct link_mst_stream_allocation_table mst_stream_alloc_table;
+
+ struct dc_link_status link_status;
+diff --git a/drivers/gpu/drm/amd/display/dc/dc_dp_types.h b/drivers/gpu/drm/amd/display/dc/dc_dp_types.h
+index 94ce8fe7448105..cc005da75ce4ce 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc_dp_types.h
++++ b/drivers/gpu/drm/amd/display/dc/dc_dp_types.h
+@@ -1119,6 +1119,8 @@ struct dc_lttpr_caps {
+ union dp_main_link_channel_coding_lttpr_cap main_link_channel_coding;
+ union dp_128b_132b_supported_lttpr_link_rates supported_128b_132b_rates;
+ uint8_t aux_rd_interval[MAX_REPEATER_CNT - 1];
++ uint8_t lttpr_ieee_oui[3];
++ uint8_t lttpr_device_id[6];
+ };
+
+ struct dc_dongle_dfp_cap_ext {
+@@ -1379,6 +1381,12 @@ struct dp_trace {
+ #ifndef DP_BRANCH_VENDOR_SPECIFIC_START
+ #define DP_BRANCH_VENDOR_SPECIFIC_START 0x50C
+ #endif
++#ifndef DP_LTTPR_IEEE_OUI
++#define DP_LTTPR_IEEE_OUI 0xF003D
++#endif
++#ifndef DP_LTTPR_DEVICE_ID
++#define DP_LTTPR_DEVICE_ID 0xF0040
++#endif
+ /** USB4 DPCD BW Allocation Registers Chapter 10.7 **/
+ #ifndef DP_TUNNELING_CAPABILITIES
+ #define DP_TUNNELING_CAPABILITIES 0xE000D /* 1.4a */
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_core/dml2_core_dcn4_calcs.c b/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_core/dml2_core_dcn4_calcs.c
+index 8ed49a9df3780e..c1ff869512f27c 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_core/dml2_core_dcn4_calcs.c
++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_core/dml2_core_dcn4_calcs.c
+@@ -15,6 +15,7 @@
+ //#define DML_MODE_SUPPORT_USE_DPM_DRAM_BW
+ //#define DML_GLOBAL_PREFETCH_CHECK
+ #define ALLOW_SDPIF_RATE_LIMIT_PRE_CSTATE
++#define DML_MAX_VSTARTUP_START 1023
+
+ const char *dml2_core_internal_bw_type_str(enum dml2_core_internal_bw_type bw_type)
+ {
+@@ -3726,6 +3727,7 @@ static unsigned int CalculateMaxVStartup(
+ dml2_printf("DML::%s: vblank_avail = %u\n", __func__, vblank_avail);
+ dml2_printf("DML::%s: max_vstartup_lines = %u\n", __func__, max_vstartup_lines);
+ #endif
++ max_vstartup_lines = (unsigned int)math_min2(max_vstartup_lines, DML_MAX_VSTARTUP_START);
+ return max_vstartup_lines;
+ }
+
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml2_dc_resource_mgmt.c b/drivers/gpu/drm/amd/display/dc/dml2/dml2_dc_resource_mgmt.c
+index 1ed21c1b86a5bb..a966abd4078810 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/dml2_dc_resource_mgmt.c
++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml2_dc_resource_mgmt.c
+@@ -532,26 +532,6 @@ static void calculate_odm_slices(const struct dc_stream_state *stream, unsigned
+ odm_slice_end_x[odm_factor - 1] = stream->src.width - 1;
+ }
+
+-static bool is_plane_in_odm_slice(const struct dc_plane_state *plane, unsigned int slice_index, unsigned int *odm_slice_end_x, unsigned int num_slices)
+-{
+- unsigned int slice_start_x, slice_end_x;
+-
+- if (slice_index == 0)
+- slice_start_x = 0;
+- else
+- slice_start_x = odm_slice_end_x[slice_index - 1] + 1;
+-
+- slice_end_x = odm_slice_end_x[slice_index];
+-
+- if (plane->clip_rect.x + plane->clip_rect.width < slice_start_x)
+- return false;
+-
+- if (plane->clip_rect.x > slice_end_x)
+- return false;
+-
+- return true;
+-}
+-
+ static void add_odm_slice_to_odm_tree(struct dml2_context *ctx,
+ struct dc_state *state,
+ struct dc_pipe_mapping_scratch *scratch,
+@@ -791,12 +771,6 @@ static void map_pipes_for_plane(struct dml2_context *ctx, struct dc_state *state
+ sort_pipes_for_splitting(&scratch->pipe_pool);
+
+ for (odm_slice_index = 0; odm_slice_index < scratch->odm_info.odm_factor; odm_slice_index++) {
+- // We build the tree for one ODM slice at a time.
+- // Each ODM slice shares a common OPP
+- if (!is_plane_in_odm_slice(plane, odm_slice_index, scratch->odm_info.odm_slice_end_x, scratch->odm_info.odm_factor)) {
+- continue;
+- }
+-
+ // Now we have a list of all pipes to be used for this plane/stream, now setup the tree.
+ scratch->odm_info.next_higher_pipe_for_odm_slice[odm_slice_index] = add_plane_to_blend_tree(ctx, state,
+ plane,
+diff --git a/drivers/gpu/drm/amd/display/dc/hubp/dcn31/dcn31_hubp.c b/drivers/gpu/drm/amd/display/dc/hubp/dcn31/dcn31_hubp.c
+index c2900c79a2d357..7fd582a8a4ba98 100644
+--- a/drivers/gpu/drm/amd/display/dc/hubp/dcn31/dcn31_hubp.c
++++ b/drivers/gpu/drm/amd/display/dc/hubp/dcn31/dcn31_hubp.c
+@@ -44,7 +44,7 @@ void hubp31_set_unbounded_requesting(struct hubp *hubp, bool enable)
+ struct dcn20_hubp *hubp2 = TO_DCN20_HUBP(hubp);
+
+ REG_UPDATE(DCHUBP_CNTL, HUBP_UNBOUNDED_REQ_MODE, enable);
+- REG_UPDATE(CURSOR_CONTROL, CURSOR_REQ_MODE, enable);
++ REG_UPDATE(CURSOR_CONTROL, CURSOR_REQ_MODE, 1);
+ }
+
+ void hubp31_soft_reset(struct hubp *hubp, bool reset)
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn10/dcn10_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn10/dcn10_hwseq.c
+index 44e405e9bc9715..13f9e9b439f6a5 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn10/dcn10_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn10/dcn10_hwseq.c
+@@ -1992,20 +1992,11 @@ static void delay_cursor_until_vupdate(struct dc *dc, struct pipe_ctx *pipe_ctx)
+ dc->hwss.get_position(&pipe_ctx, 1, &position);
+ vpos = position.vertical_count;
+
+- /* Avoid wraparound calculation issues */
+- vupdate_start += stream->timing.v_total;
+- vupdate_end += stream->timing.v_total;
+- vpos += stream->timing.v_total;
+-
+ if (vpos <= vupdate_start) {
+ /* VPOS is in VACTIVE or back porch. */
+ lines_to_vupdate = vupdate_start - vpos;
+- } else if (vpos > vupdate_end) {
+- /* VPOS is in the front porch. */
+- return;
+ } else {
+- /* VPOS is in VUPDATE. */
+- lines_to_vupdate = 0;
++ lines_to_vupdate = stream->timing.v_total - vpos + vupdate_start;
+ }
+
+ /* Calculate time until VUPDATE in microseconds. */
+@@ -2013,13 +2004,18 @@ static void delay_cursor_until_vupdate(struct dc *dc, struct pipe_ctx *pipe_ctx)
+ stream->timing.h_total * 10000u / stream->timing.pix_clk_100hz;
+ us_to_vupdate = lines_to_vupdate * us_per_line;
+
++ /* Stall out until the cursor update completes. */
++ if (vupdate_end < vupdate_start)
++ vupdate_end += stream->timing.v_total;
++
++ /* Position is in the range of vupdate start and end*/
++ if (lines_to_vupdate > stream->timing.v_total - vupdate_end + vupdate_start)
++ us_to_vupdate = 0;
++
+ /* 70 us is a conservative estimate of cursor update time*/
+ if (us_to_vupdate > 70)
+ return;
+
+- /* Stall out until the cursor update completes. */
+- if (vupdate_end < vupdate_start)
+- vupdate_end += stream->timing.v_total;
+ us_vupdate = (vupdate_end - vupdate_start + 1) * us_per_line;
+ udelay(us_to_vupdate + us_vupdate);
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_capability.c b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_capability.c
+index 44c3023a77318d..44f33e3bc1c599 100644
+--- a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_capability.c
++++ b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_capability.c
+@@ -1568,10 +1568,18 @@ enum dc_status dp_retrieve_lttpr_cap(struct dc_link *link)
+ /* Attempt to train in LTTPR transparent mode if repeater count exceeds 8. */
+ is_lttpr_present = dp_is_lttpr_present(link);
+
+- if (is_lttpr_present)
++ DC_LOG_DC("is_lttpr_present = %d\n", is_lttpr_present);
++
++ if (is_lttpr_present) {
+ CONN_DATA_DETECT(link, lttpr_dpcd_data, sizeof(lttpr_dpcd_data), "LTTPR Caps: ");
+
+- DC_LOG_DC("is_lttpr_present = %d\n", is_lttpr_present);
++ core_link_read_dpcd(link, DP_LTTPR_IEEE_OUI, link->dpcd_caps.lttpr_caps.lttpr_ieee_oui, sizeof(link->dpcd_caps.lttpr_caps.lttpr_ieee_oui));
++ CONN_DATA_DETECT(link, link->dpcd_caps.lttpr_caps.lttpr_ieee_oui, sizeof(link->dpcd_caps.lttpr_caps.lttpr_ieee_oui), "LTTPR IEEE OUI: ");
++
++ core_link_read_dpcd(link, DP_LTTPR_DEVICE_ID, link->dpcd_caps.lttpr_caps.lttpr_device_id, sizeof(link->dpcd_caps.lttpr_caps.lttpr_device_id));
++ CONN_DATA_DETECT(link, link->dpcd_caps.lttpr_caps.lttpr_device_id, sizeof(link->dpcd_caps.lttpr_caps.lttpr_device_id), "LTTPR Device ID: ");
++ }
++
+ return status;
+ }
+
+diff --git a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training.c b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training.c
+index 88d4288cde0f58..751c18e592ea5e 100644
+--- a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training.c
++++ b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training.c
+@@ -736,6 +736,8 @@ void override_training_settings(
+ lt_settings->pre_emphasis = overrides->pre_emphasis;
+ if (overrides->post_cursor2 != NULL)
+ lt_settings->post_cursor2 = overrides->post_cursor2;
++ if (link->wa_flags.force_dp_ffe_preset && !dp_is_lttpr_present(link))
++ lt_settings->ffe_preset = &link->forced_dp_ffe_preset;
+ if (overrides->ffe_preset != NULL)
+ lt_settings->ffe_preset = overrides->ffe_preset;
+ /* Override HW lane settings with BIOS forced values if present */
+diff --git a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training_fixed_vs_pe_retimer.c b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training_fixed_vs_pe_retimer.c
+index ccf8096dde2909..ce174ce5579c07 100644
+--- a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training_fixed_vs_pe_retimer.c
++++ b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training_fixed_vs_pe_retimer.c
+@@ -270,7 +270,8 @@ enum link_training_result dp_perform_fixed_vs_pe_training_sequence(
+
+ rate = get_dpcd_link_rate(&lt_settings->link_settings);
+
+- if (!link->dpcd_caps.lttpr_caps.main_link_channel_coding.bits.DP_128b_132b_SUPPORTED) {
++ // Only perform toggle if FIXED_VS LTTPR reports no IEEE OUI
++ if (memcmp("\x0,\x0,\x0", &link->dpcd_caps.lttpr_caps.lttpr_ieee_oui[0], 3) == 0) {
+ /* Vendor specific: Toggle link rate */
+ toggle_rate = (rate == 0x6) ? 0xA : 0x6;
+
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/amd_powerplay.c b/drivers/gpu/drm/amd/pm/powerplay/amd_powerplay.c
+index 686345f75f2645..6cd327fecebbc9 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/amd_powerplay.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/amd_powerplay.c
+@@ -51,6 +51,11 @@ static int amd_powerplay_create(struct amdgpu_device *adev)
+ hwmgr->adev = adev;
+ hwmgr->not_vf = !amdgpu_sriov_vf(adev);
+ hwmgr->device = amdgpu_cgs_create_device(adev);
++ if (!hwmgr->device) {
++ kfree(hwmgr);
++ return -ENOMEM;
++ }
++
+ mutex_init(&hwmgr->msg_lock);
+ hwmgr->chip_family = adev->family;
+ hwmgr->chip_id = adev->asic_type;
+diff --git a/drivers/gpu/drm/drm_atomic_helper.c b/drivers/gpu/drm/drm_atomic_helper.c
+index 5186d2114a5037..32902f77f00dd8 100644
+--- a/drivers/gpu/drm/drm_atomic_helper.c
++++ b/drivers/gpu/drm/drm_atomic_helper.c
+@@ -1376,7 +1376,7 @@ crtc_set_mode(struct drm_device *dev, struct drm_atomic_state *old_state)
+ mode = &new_crtc_state->mode;
+ adjusted_mode = &new_crtc_state->adjusted_mode;
+
+- if (!new_crtc_state->mode_changed)
++ if (!new_crtc_state->mode_changed && !new_crtc_state->connectors_changed)
+ continue;
+
+ drm_dbg_atomic(dev, "modeset on [ENCODER:%d:%s]\n",
+diff --git a/drivers/gpu/drm/drm_debugfs.c b/drivers/gpu/drm/drm_debugfs.c
+index 536409a35df406..6b2178864c7ee1 100644
+--- a/drivers/gpu/drm/drm_debugfs.c
++++ b/drivers/gpu/drm/drm_debugfs.c
+@@ -748,7 +748,7 @@ static int bridges_show(struct seq_file *m, void *data)
+ unsigned int idx = 0;
+
+ drm_for_each_bridge_in_chain(encoder, bridge) {
+- drm_printf(&p, "bridge[%d]: %ps\n", idx++, bridge->funcs);
++ drm_printf(&p, "bridge[%u]: %ps\n", idx++, bridge->funcs);
+ drm_printf(&p, "\ttype: [%d] %s\n",
+ bridge->type,
+ drm_get_connector_type_name(bridge->type));
+diff --git a/drivers/gpu/drm/drm_panel.c b/drivers/gpu/drm/drm_panel.c
+index 9940e96d35e302..c627e42a7ce704 100644
+--- a/drivers/gpu/drm/drm_panel.c
++++ b/drivers/gpu/drm/drm_panel.c
+@@ -50,7 +50,7 @@ static LIST_HEAD(panel_list);
+ * @dev: parent device of the panel
+ * @funcs: panel operations
+ * @connector_type: the connector type (DRM_MODE_CONNECTOR_*) corresponding to
+- * the panel interface
++ * the panel interface (must NOT be DRM_MODE_CONNECTOR_Unknown)
+ *
+ * Initialize the panel structure for subsequent registration with
+ * drm_panel_add().
+@@ -58,6 +58,9 @@ static LIST_HEAD(panel_list);
+ void drm_panel_init(struct drm_panel *panel, struct device *dev,
+ const struct drm_panel_funcs *funcs, int connector_type)
+ {
++ if (connector_type == DRM_MODE_CONNECTOR_Unknown)
++ DRM_WARN("%s: %s: a valid connector type is required!\n", __func__, dev_name(dev));
++
+ INIT_LIST_HEAD(&panel->list);
+ INIT_LIST_HEAD(&panel->followers);
+ mutex_init(&panel->follower_lock);
+diff --git a/drivers/gpu/drm/drm_panel_orientation_quirks.c b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+index 4a73821b81f6fd..c554ad8f246b65 100644
+--- a/drivers/gpu/drm/drm_panel_orientation_quirks.c
++++ b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+@@ -93,6 +93,12 @@ static const struct drm_dmi_panel_orientation_data onegx1_pro = {
+ .orientation = DRM_MODE_PANEL_ORIENTATION_RIGHT_UP,
+ };
+
++static const struct drm_dmi_panel_orientation_data lcd640x960_leftside_up = {
++ .width = 640,
++ .height = 960,
++ .orientation = DRM_MODE_PANEL_ORIENTATION_LEFT_UP,
++};
++
+ static const struct drm_dmi_panel_orientation_data lcd720x1280_rightside_up = {
+ .width = 720,
+ .height = 1280,
+@@ -123,6 +129,12 @@ static const struct drm_dmi_panel_orientation_data lcd1080x1920_rightside_up = {
+ .orientation = DRM_MODE_PANEL_ORIENTATION_RIGHT_UP,
+ };
+
++static const struct drm_dmi_panel_orientation_data lcd1200x1920_leftside_up = {
++ .width = 1200,
++ .height = 1920,
++ .orientation = DRM_MODE_PANEL_ORIENTATION_LEFT_UP,
++};
++
+ static const struct drm_dmi_panel_orientation_data lcd1200x1920_rightside_up = {
+ .width = 1200,
+ .height = 1920,
+@@ -184,10 +196,10 @@ static const struct dmi_system_id orientation_data[] = {
+ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "T103HAF"),
+ },
+ .driver_data = (void *)&lcd800x1280_rightside_up,
+- }, { /* AYA NEO AYANEO 2 */
++ }, { /* AYA NEO AYANEO 2/2S */
+ .matches = {
+ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "AYANEO"),
+- DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "AYANEO 2"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "AYANEO 2"),
+ },
+ .driver_data = (void *)&lcd1200x1920_rightside_up,
+ }, { /* AYA NEO 2021 */
+@@ -202,6 +214,18 @@ static const struct dmi_system_id orientation_data[] = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "AIR"),
+ },
+ .driver_data = (void *)&lcd1080x1920_leftside_up,
++ }, { /* AYA NEO Flip DS Bottom Screen */
++ .matches = {
++ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "AYANEO"),
++ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "FLIP DS"),
++ },
++ .driver_data = (void *)&lcd640x960_leftside_up,
++ }, { /* AYA NEO Flip KB/DS Top Screen */
++ .matches = {
++ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "AYANEO"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "FLIP"),
++ },
++ .driver_data = (void *)&lcd1080x1920_leftside_up,
+ }, { /* AYA NEO Founder */
+ .matches = {
+ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "AYA NEO"),
+@@ -226,6 +250,12 @@ static const struct dmi_system_id orientation_data[] = {
+ DMI_MATCH(DMI_BOARD_NAME, "KUN"),
+ },
+ .driver_data = (void *)&lcd1600x2560_rightside_up,
++ }, { /* AYA NEO SLIDE */
++ .matches = {
++ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "AYANEO"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "SLIDE"),
++ },
++ .driver_data = (void *)&lcd1080x1920_leftside_up,
+ }, { /* AYN Loki Max */
+ .matches = {
+ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "ayn"),
+@@ -315,6 +345,12 @@ static const struct dmi_system_id orientation_data[] = {
+ DMI_EXACT_MATCH(DMI_BOARD_NAME, "Default string"),
+ },
+ .driver_data = (void *)&gpd_win2,
++ }, { /* GPD Win 2 (correct DMI strings) */
++ .matches = {
++ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "GPD"),
++ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "WIN2")
++ },
++ .driver_data = (void *)&lcd720x1280_rightside_up,
+ }, { /* GPD Win 3 */
+ .matches = {
+ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "GPD"),
+@@ -443,6 +479,12 @@ static const struct dmi_system_id orientation_data[] = {
+ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "ONE XPLAYER"),
+ },
+ .driver_data = (void *)&lcd1600x2560_leftside_up,
++ }, { /* OneXPlayer Mini (Intel) */
++ .matches = {
++ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "ONE-NETBOOK TECHNOLOGY CO., LTD."),
++ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "ONE XPLAYER"),
++ },
++ .driver_data = (void *)&lcd1200x1920_leftside_up,
+ }, { /* OrangePi Neo */
+ .matches = {
+ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "OrangePi"),
+diff --git a/drivers/gpu/drm/i915/gt/intel_rc6.c b/drivers/gpu/drm/i915/gt/intel_rc6.c
+index 9378d5901c4939..9ca42589da4dad 100644
+--- a/drivers/gpu/drm/i915/gt/intel_rc6.c
++++ b/drivers/gpu/drm/i915/gt/intel_rc6.c
+@@ -117,21 +117,10 @@ static void gen11_rc6_enable(struct intel_rc6 *rc6)
+ GEN6_RC_CTL_RC6_ENABLE |
+ GEN6_RC_CTL_EI_MODE(1);
+
+- /*
+- * BSpec 52698 - Render powergating must be off.
+- * FIXME BSpec is outdated, disabling powergating for MTL is just
+- * temporary wa and should be removed after fixing real cause
+- * of forcewake timeouts.
+- */
+- if (IS_GFX_GT_IP_RANGE(gt, IP_VER(12, 70), IP_VER(12, 74)))
+- pg_enable =
+- GEN9_MEDIA_PG_ENABLE |
+- GEN11_MEDIA_SAMPLER_PG_ENABLE;
+- else
+- pg_enable =
+- GEN9_RENDER_PG_ENABLE |
+- GEN9_MEDIA_PG_ENABLE |
+- GEN11_MEDIA_SAMPLER_PG_ENABLE;
++ pg_enable =
++ GEN9_RENDER_PG_ENABLE |
++ GEN9_MEDIA_PG_ENABLE |
++ GEN11_MEDIA_SAMPLER_PG_ENABLE;
+
+ if (GRAPHICS_VER(gt->i915) >= 12 && !IS_DG1(gt->i915)) {
+ for (i = 0; i < I915_MAX_VCS; i++)
+diff --git a/drivers/gpu/drm/i915/gt/uc/intel_huc.c b/drivers/gpu/drm/i915/gt/uc/intel_huc.c
+index b3cbf85c00cbd5..eb59c1f2dccdc0 100644
+--- a/drivers/gpu/drm/i915/gt/uc/intel_huc.c
++++ b/drivers/gpu/drm/i915/gt/uc/intel_huc.c
+@@ -317,6 +317,11 @@ void intel_huc_init_early(struct intel_huc *huc)
+ }
+ }
+
++void intel_huc_fini_late(struct intel_huc *huc)
++{
++ delayed_huc_load_fini(huc);
++}
++
+ #define HUC_LOAD_MODE_STRING(x) (x ? "GSC" : "legacy")
+ static int check_huc_loading_mode(struct intel_huc *huc)
+ {
+@@ -414,12 +419,6 @@ int intel_huc_init(struct intel_huc *huc)
+
+ void intel_huc_fini(struct intel_huc *huc)
+ {
+- /*
+- * the fence is initialized in init_early, so we need to clean it up
+- * even if HuC loading is off.
+- */
+- delayed_huc_load_fini(huc);
+-
+ if (huc->heci_pkt)
+ i915_vma_unpin_and_release(&huc->heci_pkt, 0);
+
+diff --git a/drivers/gpu/drm/i915/gt/uc/intel_huc.h b/drivers/gpu/drm/i915/gt/uc/intel_huc.h
+index d5e441b9e08d63..921ad4b1687f0b 100644
+--- a/drivers/gpu/drm/i915/gt/uc/intel_huc.h
++++ b/drivers/gpu/drm/i915/gt/uc/intel_huc.h
+@@ -55,6 +55,7 @@ struct intel_huc {
+
+ int intel_huc_sanitize(struct intel_huc *huc);
+ void intel_huc_init_early(struct intel_huc *huc);
++void intel_huc_fini_late(struct intel_huc *huc);
+ int intel_huc_init(struct intel_huc *huc);
+ void intel_huc_fini(struct intel_huc *huc);
+ int intel_huc_auth(struct intel_huc *huc, enum intel_huc_authentication_type type);
+diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc.c b/drivers/gpu/drm/i915/gt/uc/intel_uc.c
+index 5b8080ec5315b6..4f751ce74214d4 100644
+--- a/drivers/gpu/drm/i915/gt/uc/intel_uc.c
++++ b/drivers/gpu/drm/i915/gt/uc/intel_uc.c
+@@ -136,6 +136,7 @@ void intel_uc_init_late(struct intel_uc *uc)
+
+ void intel_uc_driver_late_release(struct intel_uc *uc)
+ {
++ intel_huc_fini_late(&uc->huc);
+ }
+
+ /**
+diff --git a/drivers/gpu/drm/i915/selftests/i915_selftest.c b/drivers/gpu/drm/i915/selftests/i915_selftest.c
+index fee76c1d2f4500..889281819c5b13 100644
+--- a/drivers/gpu/drm/i915/selftests/i915_selftest.c
++++ b/drivers/gpu/drm/i915/selftests/i915_selftest.c
+@@ -23,7 +23,9 @@
+
+ #include <linux/random.h>
+
++#include "gt/intel_gt.h"
+ #include "gt/intel_gt_pm.h"
++#include "gt/intel_gt_regs.h"
+ #include "gt/uc/intel_gsc_fw.h"
+
+ #include "i915_driver.h"
+@@ -253,11 +255,27 @@ int i915_mock_selftests(void)
+ int i915_live_selftests(struct pci_dev *pdev)
+ {
+ struct drm_i915_private *i915 = pdev_to_i915(pdev);
++ struct intel_uncore *uncore = &i915->uncore;
+ int err;
++ u32 pg_enable;
++ intel_wakeref_t wakeref;
+
+ if (!i915_selftest.live)
+ return 0;
+
++ /*
++ * FIXME Disable render powergating, this is temporary wa and should be removed
++ * after fixing real cause of forcewake timeouts.
++ */
++ with_intel_runtime_pm(uncore->rpm, wakeref) {
++ if (IS_GFX_GT_IP_RANGE(to_gt(i915), IP_VER(12, 00), IP_VER(12, 74))) {
++ pg_enable = intel_uncore_read(uncore, GEN9_PG_ENABLE);
++ if (pg_enable & GEN9_RENDER_PG_ENABLE)
++ intel_uncore_write_fw(uncore, GEN9_PG_ENABLE,
++ pg_enable & ~GEN9_RENDER_PG_ENABLE);
++ }
++ }
++
+ __wait_gsc_proxy_completed(i915);
+ __wait_gsc_huc_load_completed(i915);
+
+diff --git a/drivers/gpu/drm/mediatek/mtk_dpi.c b/drivers/gpu/drm/mediatek/mtk_dpi.c
+index 1864eb02dbf50a..a12ef24c774234 100644
+--- a/drivers/gpu/drm/mediatek/mtk_dpi.c
++++ b/drivers/gpu/drm/mediatek/mtk_dpi.c
+@@ -127,14 +127,14 @@ struct mtk_dpi_yc_limit {
+ * @is_ck_de_pol: Support CK/DE polarity.
+ * @swap_input_support: Support input swap function.
+ * @support_direct_pin: IP supports direct connection to dpi panels.
+- * @input_2pixel: Input pixel of dp_intf is 2 pixel per round, so enable this
+- * config to enable this feature.
+ * @dimension_mask: Mask used for HWIDTH, HPORCH, VSYNC_WIDTH and VSYNC_PORCH
+ * (no shift).
+ * @hvsize_mask: Mask of HSIZE and VSIZE mask (no shift).
+ * @channel_swap_shift: Shift value of channel swap.
+ * @yuv422_en_bit: Enable bit of yuv422.
+ * @csc_enable_bit: Enable bit of CSC.
++ * @input_2p_en_bit: Enable bit for input two pixel per round feature.
++ * If present, implies that the feature must be enabled.
+ * @pixels_per_iter: Quantity of transferred pixels per iteration.
+ * @edge_cfg_in_mmsys: If the edge configuration for DPI's output needs to be set in MMSYS.
+ */
+@@ -148,12 +148,12 @@ struct mtk_dpi_conf {
+ bool is_ck_de_pol;
+ bool swap_input_support;
+ bool support_direct_pin;
+- bool input_2pixel;
+ u32 dimension_mask;
+ u32 hvsize_mask;
+ u32 channel_swap_shift;
+ u32 yuv422_en_bit;
+ u32 csc_enable_bit;
++ u32 input_2p_en_bit;
+ u32 pixels_per_iter;
+ bool edge_cfg_in_mmsys;
+ };
+@@ -471,6 +471,7 @@ static void mtk_dpi_power_off(struct mtk_dpi *dpi)
+
+ mtk_dpi_disable(dpi);
+ clk_disable_unprepare(dpi->pixel_clk);
++ clk_disable_unprepare(dpi->tvd_clk);
+ clk_disable_unprepare(dpi->engine_clk);
+ }
+
+@@ -487,6 +488,12 @@ static int mtk_dpi_power_on(struct mtk_dpi *dpi)
+ goto err_refcount;
+ }
+
++ ret = clk_prepare_enable(dpi->tvd_clk);
++ if (ret) {
++ dev_err(dpi->dev, "Failed to enable tvd pll: %d\n", ret);
++ goto err_engine;
++ }
++
+ ret = clk_prepare_enable(dpi->pixel_clk);
+ if (ret) {
+ dev_err(dpi->dev, "Failed to enable pixel clock: %d\n", ret);
+@@ -496,6 +503,8 @@ static int mtk_dpi_power_on(struct mtk_dpi *dpi)
+ return 0;
+
+ err_pixel:
++ clk_disable_unprepare(dpi->tvd_clk);
++err_engine:
+ clk_disable_unprepare(dpi->engine_clk);
+ err_refcount:
+ dpi->refcount--;
+@@ -610,9 +619,9 @@ static int mtk_dpi_set_display_mode(struct mtk_dpi *dpi,
+ mtk_dpi_dual_edge(dpi);
+ mtk_dpi_config_disable_edge(dpi);
+ }
+- if (dpi->conf->input_2pixel) {
+- mtk_dpi_mask(dpi, DPI_CON, DPINTF_INPUT_2P_EN,
+- DPINTF_INPUT_2P_EN);
++ if (dpi->conf->input_2p_en_bit) {
++ mtk_dpi_mask(dpi, DPI_CON, dpi->conf->input_2p_en_bit,
++ dpi->conf->input_2p_en_bit);
+ }
+ mtk_dpi_sw_reset(dpi, false);
+
+@@ -1006,12 +1015,12 @@ static const struct mtk_dpi_conf mt8195_dpintf_conf = {
+ .output_fmts = mt8195_output_fmts,
+ .num_output_fmts = ARRAY_SIZE(mt8195_output_fmts),
+ .pixels_per_iter = 4,
+- .input_2pixel = true,
+ .dimension_mask = DPINTF_HPW_MASK,
+ .hvsize_mask = DPINTF_HSIZE_MASK,
+ .channel_swap_shift = DPINTF_CH_SWAP,
+ .yuv422_en_bit = DPINTF_YUV422_EN,
+ .csc_enable_bit = DPINTF_CSC_ENABLE,
++ .input_2p_en_bit = DPINTF_INPUT_2P_EN,
+ };
+
+ static int mtk_dpi_probe(struct platform_device *pdev)
+diff --git a/drivers/gpu/drm/rockchip/dw_hdmi-rockchip.c b/drivers/gpu/drm/rockchip/dw_hdmi-rockchip.c
+index e7a6669c46b078..f737e7d46e667f 100644
+--- a/drivers/gpu/drm/rockchip/dw_hdmi-rockchip.c
++++ b/drivers/gpu/drm/rockchip/dw_hdmi-rockchip.c
+@@ -203,7 +203,7 @@ static int rockchip_hdmi_parse_dt(struct rockchip_hdmi *hdmi)
+
+ hdmi->regmap = syscon_regmap_lookup_by_phandle(np, "rockchip,grf");
+ if (IS_ERR(hdmi->regmap)) {
+- drm_err(hdmi, "Unable to get rockchip,grf\n");
++ dev_err(hdmi->dev, "Unable to get rockchip,grf\n");
+ return PTR_ERR(hdmi->regmap);
+ }
+
+@@ -214,7 +214,7 @@ static int rockchip_hdmi_parse_dt(struct rockchip_hdmi *hdmi)
+ if (IS_ERR(hdmi->ref_clk)) {
+ ret = PTR_ERR(hdmi->ref_clk);
+ if (ret != -EPROBE_DEFER)
+- drm_err(hdmi, "failed to get reference clock\n");
++ dev_err(hdmi->dev, "failed to get reference clock\n");
+ return ret;
+ }
+
+@@ -222,7 +222,7 @@ static int rockchip_hdmi_parse_dt(struct rockchip_hdmi *hdmi)
+ if (IS_ERR(hdmi->grf_clk)) {
+ ret = PTR_ERR(hdmi->grf_clk);
+ if (ret != -EPROBE_DEFER)
+- drm_err(hdmi, "failed to get grf clock\n");
++ dev_err(hdmi->dev, "failed to get grf clock\n");
+ return ret;
+ }
+
+@@ -302,16 +302,16 @@ static void dw_hdmi_rockchip_encoder_enable(struct drm_encoder *encoder)
+
+ ret = clk_prepare_enable(hdmi->grf_clk);
+ if (ret < 0) {
+- drm_err(hdmi, "failed to enable grfclk %d\n", ret);
++ dev_err(hdmi->dev, "failed to enable grfclk %d\n", ret);
+ return;
+ }
+
+ ret = regmap_write(hdmi->regmap, hdmi->chip_data->lcdsel_grf_reg, val);
+ if (ret != 0)
+- drm_err(hdmi, "Could not write to GRF: %d\n", ret);
++ dev_err(hdmi->dev, "Could not write to GRF: %d\n", ret);
+
+ clk_disable_unprepare(hdmi->grf_clk);
+- drm_dbg(hdmi, "vop %s output to hdmi\n", ret ? "LIT" : "BIG");
++ dev_dbg(hdmi->dev, "vop %s output to hdmi\n", ret ? "LIT" : "BIG");
+ }
+
+ static int
+@@ -574,7 +574,7 @@ static int dw_hdmi_rockchip_bind(struct device *dev, struct device *master,
+ ret = rockchip_hdmi_parse_dt(hdmi);
+ if (ret) {
+ if (ret != -EPROBE_DEFER)
+- drm_err(hdmi, "Unable to parse OF data\n");
++ dev_err(hdmi->dev, "Unable to parse OF data\n");
+ return ret;
+ }
+
+@@ -582,7 +582,7 @@ static int dw_hdmi_rockchip_bind(struct device *dev, struct device *master,
+ if (IS_ERR(hdmi->phy)) {
+ ret = PTR_ERR(hdmi->phy);
+ if (ret != -EPROBE_DEFER)
+- drm_err(hdmi, "failed to get phy\n");
++ dev_err(hdmi->dev, "failed to get phy\n");
+ return ret;
+ }
+
+diff --git a/drivers/gpu/drm/rockchip/dw_hdmi_qp-rockchip.c b/drivers/gpu/drm/rockchip/dw_hdmi_qp-rockchip.c
+index e498767a0a667d..6bbc84c5d716db 100644
+--- a/drivers/gpu/drm/rockchip/dw_hdmi_qp-rockchip.c
++++ b/drivers/gpu/drm/rockchip/dw_hdmi_qp-rockchip.c
+@@ -54,7 +54,6 @@ struct rockchip_hdmi_qp {
+ struct regmap *regmap;
+ struct regmap *vo_regmap;
+ struct rockchip_encoder encoder;
+- struct clk *ref_clk;
+ struct dw_hdmi_qp *hdmi;
+ struct phy *phy;
+ struct gpio_desc *enable_gpio;
+@@ -81,7 +80,6 @@ static void dw_hdmi_qp_rockchip_encoder_enable(struct drm_encoder *encoder)
+ if (crtc && crtc->state) {
+ rate = drm_hdmi_compute_mode_clock(&crtc->state->adjusted_mode,
+ 8, HDMI_COLORSPACE_RGB);
+- clk_set_rate(hdmi->ref_clk, rate);
+ /*
+ * FIXME: Temporary workaround to pass pixel clock rate
+ * to the PHY driver until phy_configure_opts_hdmi
+@@ -172,7 +170,7 @@ static void dw_hdmi_qp_rk3588_hpd_work(struct work_struct *work)
+ if (drm) {
+ changed = drm_helper_hpd_irq_event(drm);
+ if (changed)
+- drm_dbg(hdmi, "connector status changed\n");
++ dev_dbg(hdmi->dev, "connector status changed\n");
+ }
+ }
+
+@@ -289,7 +287,7 @@ static int dw_hdmi_qp_rockchip_bind(struct device *dev, struct device *master,
+ }
+ }
+ if (hdmi->port_id < 0) {
+- drm_err(hdmi, "Failed to match HDMI port ID\n");
++ dev_err(hdmi->dev, "Failed to match HDMI port ID\n");
+ return hdmi->port_id;
+ }
+
+@@ -313,39 +311,28 @@ static int dw_hdmi_qp_rockchip_bind(struct device *dev, struct device *master,
+ hdmi->regmap = syscon_regmap_lookup_by_phandle(dev->of_node,
+ "rockchip,grf");
+ if (IS_ERR(hdmi->regmap)) {
+- drm_err(hdmi, "Unable to get rockchip,grf\n");
++ dev_err(hdmi->dev, "Unable to get rockchip,grf\n");
+ return PTR_ERR(hdmi->regmap);
+ }
+
+ hdmi->vo_regmap = syscon_regmap_lookup_by_phandle(dev->of_node,
+ "rockchip,vo-grf");
+ if (IS_ERR(hdmi->vo_regmap)) {
+- drm_err(hdmi, "Unable to get rockchip,vo-grf\n");
++ dev_err(hdmi->dev, "Unable to get rockchip,vo-grf\n");
+ return PTR_ERR(hdmi->vo_regmap);
+ }
+
+ ret = devm_clk_bulk_get_all_enabled(hdmi->dev, &clks);
+ if (ret < 0) {
+- drm_err(hdmi, "Failed to get clocks: %d\n", ret);
++ dev_err(hdmi->dev, "Failed to get clocks: %d\n", ret);
+ return ret;
+ }
+
+- for (i = 0; i < ret; i++) {
+- if (!strcmp(clks[i].id, "ref")) {
+- hdmi->ref_clk = clks[1].clk;
+- break;
+- }
+- }
+- if (!hdmi->ref_clk) {
+- drm_err(hdmi, "Missing ref clock\n");
+- return -EINVAL;
+- }
+-
+ hdmi->enable_gpio = devm_gpiod_get_optional(hdmi->dev, "enable",
+ GPIOD_OUT_HIGH);
+ if (IS_ERR(hdmi->enable_gpio)) {
+ ret = PTR_ERR(hdmi->enable_gpio);
+- drm_err(hdmi, "Failed to request enable GPIO: %d\n", ret);
++ dev_err(hdmi->dev, "Failed to request enable GPIO: %d\n", ret);
+ return ret;
+ }
+
+@@ -353,7 +340,7 @@ static int dw_hdmi_qp_rockchip_bind(struct device *dev, struct device *master,
+ if (IS_ERR(hdmi->phy)) {
+ ret = PTR_ERR(hdmi->phy);
+ if (ret != -EPROBE_DEFER)
+- drm_err(hdmi, "failed to get phy: %d\n", ret);
++ dev_err(hdmi->dev, "failed to get phy: %d\n", ret);
+ return ret;
+ }
+
+@@ -416,7 +403,7 @@ static int dw_hdmi_qp_rockchip_bind(struct device *dev, struct device *master,
+ connector = drm_bridge_connector_init(drm, encoder);
+ if (IS_ERR(connector)) {
+ ret = PTR_ERR(connector);
+- drm_err(hdmi, "failed to init bridge connector: %d\n", ret);
++ dev_err(hdmi->dev, "failed to init bridge connector: %d\n", ret);
+ return ret;
+ }
+
+diff --git a/drivers/gpu/drm/tests/drm_client_modeset_test.c b/drivers/gpu/drm/tests/drm_client_modeset_test.c
+index 7516f6cb36e4e3..3e9518d7b8b7eb 100644
+--- a/drivers/gpu/drm/tests/drm_client_modeset_test.c
++++ b/drivers/gpu/drm/tests/drm_client_modeset_test.c
+@@ -95,6 +95,9 @@ static void drm_test_pick_cmdline_res_1920_1080_60(struct kunit *test)
+ expected_mode = drm_mode_find_dmt(priv->drm, 1920, 1080, 60, false);
+ KUNIT_ASSERT_NOT_NULL(test, expected_mode);
+
++ ret = drm_kunit_add_mode_destroy_action(test, expected_mode);
++ KUNIT_ASSERT_EQ(test, ret, 0);
++
+ KUNIT_ASSERT_TRUE(test,
+ drm_mode_parse_command_line_for_connector(cmdline,
+ connector,
+diff --git a/drivers/gpu/drm/tests/drm_cmdline_parser_test.c b/drivers/gpu/drm/tests/drm_cmdline_parser_test.c
+index 59c8408c453c2e..1cfcb597b088b4 100644
+--- a/drivers/gpu/drm/tests/drm_cmdline_parser_test.c
++++ b/drivers/gpu/drm/tests/drm_cmdline_parser_test.c
+@@ -7,6 +7,7 @@
+ #include <kunit/test.h>
+
+ #include <drm/drm_connector.h>
++#include <drm/drm_kunit_helpers.h>
+ #include <drm/drm_modes.h>
+
+ static const struct drm_connector no_connector = {};
+@@ -955,8 +956,15 @@ struct drm_cmdline_tv_option_test {
+ static void drm_test_cmdline_tv_options(struct kunit *test)
+ {
+ const struct drm_cmdline_tv_option_test *params = test->param_value;
+- const struct drm_display_mode *expected_mode = params->mode_fn(NULL);
++ struct drm_display_mode *expected_mode;
+ struct drm_cmdline_mode mode = { };
++ int ret;
++
++ expected_mode = params->mode_fn(NULL);
++ KUNIT_ASSERT_NOT_NULL(test, expected_mode);
++
++ ret = drm_kunit_add_mode_destroy_action(test, expected_mode);
++ KUNIT_ASSERT_EQ(test, ret, 0);
+
+ KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(params->cmdline,
+ &no_connector, &mode));
+diff --git a/drivers/gpu/drm/tests/drm_kunit_helpers.c b/drivers/gpu/drm/tests/drm_kunit_helpers.c
+index 3c0b7824c0be37..922c4b6ed1dc9b 100644
+--- a/drivers/gpu/drm/tests/drm_kunit_helpers.c
++++ b/drivers/gpu/drm/tests/drm_kunit_helpers.c
+@@ -319,6 +319,28 @@ static void kunit_action_drm_mode_destroy(void *ptr)
+ drm_mode_destroy(NULL, mode);
+ }
+
++/**
++ * drm_kunit_add_mode_destroy_action() - Add a drm_destroy_mode kunit action
++ * @test: The test context object
++ * @mode: The drm_display_mode to destroy eventually
++ *
++ * Registers a kunit action that will destroy the drm_display_mode at
++ * the end of the test.
++ *
++ * If an error occurs, the drm_display_mode will be destroyed.
++ *
++ * Returns:
++ * 0 on success, an error code otherwise.
++ */
++int drm_kunit_add_mode_destroy_action(struct kunit *test,
++ struct drm_display_mode *mode)
++{
++ return kunit_add_action_or_reset(test,
++ kunit_action_drm_mode_destroy,
++ mode);
++}
++EXPORT_SYMBOL_GPL(drm_kunit_add_mode_destroy_action);
++
+ /**
+ * drm_kunit_display_mode_from_cea_vic() - return a mode for CEA VIC for a KUnit test
+ * @test: The test context object
+diff --git a/drivers/gpu/drm/tests/drm_modes_test.c b/drivers/gpu/drm/tests/drm_modes_test.c
+index 6ed51f99e133c9..7ba646d87856f5 100644
+--- a/drivers/gpu/drm/tests/drm_modes_test.c
++++ b/drivers/gpu/drm/tests/drm_modes_test.c
+@@ -40,6 +40,7 @@ static void drm_test_modes_analog_tv_ntsc_480i(struct kunit *test)
+ {
+ struct drm_test_modes_priv *priv = test->priv;
+ struct drm_display_mode *mode;
++ int ret;
+
+ mode = drm_analog_tv_mode(priv->drm,
+ DRM_MODE_TV_MODE_NTSC,
+@@ -47,6 +48,9 @@ static void drm_test_modes_analog_tv_ntsc_480i(struct kunit *test)
+ true);
+ KUNIT_ASSERT_NOT_NULL(test, mode);
+
++ ret = drm_kunit_add_mode_destroy_action(test, mode);
++ KUNIT_ASSERT_EQ(test, ret, 0);
++
+ KUNIT_EXPECT_EQ(test, drm_mode_vrefresh(mode), 60);
+ KUNIT_EXPECT_EQ(test, mode->hdisplay, 720);
+
+@@ -70,6 +74,7 @@ static void drm_test_modes_analog_tv_ntsc_480i_inlined(struct kunit *test)
+ {
+ struct drm_test_modes_priv *priv = test->priv;
+ struct drm_display_mode *expected, *mode;
++ int ret;
+
+ expected = drm_analog_tv_mode(priv->drm,
+ DRM_MODE_TV_MODE_NTSC,
+@@ -77,9 +82,15 @@ static void drm_test_modes_analog_tv_ntsc_480i_inlined(struct kunit *test)
+ true);
+ KUNIT_ASSERT_NOT_NULL(test, expected);
+
++ ret = drm_kunit_add_mode_destroy_action(test, expected);
++ KUNIT_ASSERT_EQ(test, ret, 0);
++
+ mode = drm_mode_analog_ntsc_480i(priv->drm);
+ KUNIT_ASSERT_NOT_NULL(test, mode);
+
++ ret = drm_kunit_add_mode_destroy_action(test, mode);
++ KUNIT_ASSERT_EQ(test, ret, 0);
++
+ KUNIT_EXPECT_TRUE(test, drm_mode_equal(expected, mode));
+ }
+
+@@ -87,6 +98,7 @@ static void drm_test_modes_analog_tv_pal_576i(struct kunit *test)
+ {
+ struct drm_test_modes_priv *priv = test->priv;
+ struct drm_display_mode *mode;
++ int ret;
+
+ mode = drm_analog_tv_mode(priv->drm,
+ DRM_MODE_TV_MODE_PAL,
+@@ -94,6 +106,9 @@ static void drm_test_modes_analog_tv_pal_576i(struct kunit *test)
+ true);
+ KUNIT_ASSERT_NOT_NULL(test, mode);
+
++ ret = drm_kunit_add_mode_destroy_action(test, mode);
++ KUNIT_ASSERT_EQ(test, ret, 0);
++
+ KUNIT_EXPECT_EQ(test, drm_mode_vrefresh(mode), 50);
+ KUNIT_EXPECT_EQ(test, mode->hdisplay, 720);
+
+@@ -117,6 +132,7 @@ static void drm_test_modes_analog_tv_pal_576i_inlined(struct kunit *test)
+ {
+ struct drm_test_modes_priv *priv = test->priv;
+ struct drm_display_mode *expected, *mode;
++ int ret;
+
+ expected = drm_analog_tv_mode(priv->drm,
+ DRM_MODE_TV_MODE_PAL,
+@@ -124,9 +140,15 @@ static void drm_test_modes_analog_tv_pal_576i_inlined(struct kunit *test)
+ true);
+ KUNIT_ASSERT_NOT_NULL(test, expected);
+
++ ret = drm_kunit_add_mode_destroy_action(test, expected);
++ KUNIT_ASSERT_EQ(test, ret, 0);
++
+ mode = drm_mode_analog_pal_576i(priv->drm);
+ KUNIT_ASSERT_NOT_NULL(test, mode);
+
++ ret = drm_kunit_add_mode_destroy_action(test, mode);
++ KUNIT_ASSERT_EQ(test, ret, 0);
++
+ KUNIT_EXPECT_TRUE(test, drm_mode_equal(expected, mode));
+ }
+
+diff --git a/drivers/gpu/drm/tests/drm_probe_helper_test.c b/drivers/gpu/drm/tests/drm_probe_helper_test.c
+index bc09ff38aca18e..db0e4f5df275e8 100644
+--- a/drivers/gpu/drm/tests/drm_probe_helper_test.c
++++ b/drivers/gpu/drm/tests/drm_probe_helper_test.c
+@@ -98,7 +98,7 @@ drm_test_connector_helper_tv_get_modes_check(struct kunit *test)
+ struct drm_connector *connector = &priv->connector;
+ struct drm_cmdline_mode *cmdline = &connector->cmdline_mode;
+ struct drm_display_mode *mode;
+- const struct drm_display_mode *expected;
++ struct drm_display_mode *expected;
+ size_t len;
+ int ret;
+
+@@ -134,6 +134,9 @@ drm_test_connector_helper_tv_get_modes_check(struct kunit *test)
+
+ KUNIT_EXPECT_TRUE(test, drm_mode_equal(mode, expected));
+ KUNIT_EXPECT_TRUE(test, mode->type & DRM_MODE_TYPE_PREFERRED);
++
++ ret = drm_kunit_add_mode_destroy_action(test, expected);
++ KUNIT_ASSERT_EQ(test, ret, 0);
+ }
+
+ if (params->num_expected_modes >= 2) {
+@@ -145,6 +148,9 @@ drm_test_connector_helper_tv_get_modes_check(struct kunit *test)
+
+ KUNIT_EXPECT_TRUE(test, drm_mode_equal(mode, expected));
+ KUNIT_EXPECT_FALSE(test, mode->type & DRM_MODE_TYPE_PREFERRED);
++
++ ret = drm_kunit_add_mode_destroy_action(test, expected);
++ KUNIT_ASSERT_EQ(test, ret, 0);
+ }
+
+ mutex_unlock(&priv->drm->mode_config.mutex);
+diff --git a/drivers/gpu/drm/virtio/virtgpu_prime.c b/drivers/gpu/drm/virtio/virtgpu_prime.c
+index f92133a01195a9..58c9e22e9745c9 100644
+--- a/drivers/gpu/drm/virtio/virtgpu_prime.c
++++ b/drivers/gpu/drm/virtio/virtgpu_prime.c
+@@ -250,7 +250,6 @@ static int virtgpu_dma_buf_init_obj(struct drm_device *dev,
+ virtio_gpu_cmd_resource_create_blob(vgdev, bo, ¶ms,
+ ents, nents);
+ bo->guest_blob = true;
+- bo->attached = true;
+
+ dma_buf_unpin(attach);
+ dma_resv_unlock(resv);
+@@ -319,6 +318,7 @@ struct drm_gem_object *virtgpu_gem_prime_import(struct drm_device *dev,
+ return ERR_PTR(-ENOMEM);
+
+ obj = &bo->base.base;
++ obj->resv = buf->resv;
+ obj->funcs = &virtgpu_gem_dma_buf_funcs;
+ drm_gem_private_object_init(dev, obj, buf->size);
+
+diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c
+index ad91624df42dd9..062639250a4e93 100644
+--- a/drivers/gpu/drm/virtio/virtgpu_vq.c
++++ b/drivers/gpu/drm/virtio/virtgpu_vq.c
+@@ -1300,6 +1300,9 @@ virtio_gpu_cmd_resource_create_blob(struct virtio_gpu_device *vgdev,
+
+ virtio_gpu_queue_ctrl_buffer(vgdev, vbuf);
+ bo->created = true;
++
++ if (nents)
++ bo->attached = true;
+ }
+
+ void virtio_gpu_cmd_set_scanout_blob(struct virtio_gpu_device *vgdev,
+diff --git a/drivers/gpu/drm/xe/xe_gt.c b/drivers/gpu/drm/xe/xe_gt.c
+index 9f4f27d1ef4a95..8a20e6744836cb 100644
+--- a/drivers/gpu/drm/xe/xe_gt.c
++++ b/drivers/gpu/drm/xe/xe_gt.c
+@@ -32,6 +32,7 @@
+ #include "xe_gt_pagefault.h"
+ #include "xe_gt_printk.h"
+ #include "xe_gt_sriov_pf.h"
++#include "xe_gt_sriov_vf.h"
+ #include "xe_gt_sysfs.h"
+ #include "xe_gt_tlb_invalidation.h"
+ #include "xe_gt_topology.h"
+@@ -676,6 +677,9 @@ static int do_gt_reset(struct xe_gt *gt)
+ {
+ int err;
+
++ if (IS_SRIOV_VF(gt_to_xe(gt)))
++ return xe_gt_sriov_vf_reset(gt);
++
+ xe_gsc_wa_14015076503(gt, true);
+
+ xe_mmio_write32(>->mmio, GDRST, GRDOM_FULL);
+diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
+index 878e96281c0351..4bd255adfb401c 100644
+--- a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
++++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
+@@ -262,7 +262,7 @@ static u32 encode_config(u32 *cfg, const struct xe_gt_sriov_config *config, bool
+
+ n += encode_config_ggtt(cfg, config, details);
+
+- if (details) {
++ if (details && config->num_ctxs) {
+ cfg[n++] = PREP_GUC_KLV_TAG(VF_CFG_BEGIN_CONTEXT_ID);
+ cfg[n++] = config->begin_ctx;
+ }
+@@ -270,7 +270,7 @@ static u32 encode_config(u32 *cfg, const struct xe_gt_sriov_config *config, bool
+ cfg[n++] = PREP_GUC_KLV_TAG(VF_CFG_NUM_CONTEXTS);
+ cfg[n++] = config->num_ctxs;
+
+- if (details) {
++ if (details && config->num_dbs) {
+ cfg[n++] = PREP_GUC_KLV_TAG(VF_CFG_BEGIN_DOORBELL_ID);
+ cfg[n++] = config->begin_db;
+ }
+diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_vf.c b/drivers/gpu/drm/xe/xe_gt_sriov_vf.c
+index cca5d57328021a..9c30cbd9af6e18 100644
+--- a/drivers/gpu/drm/xe/xe_gt_sriov_vf.c
++++ b/drivers/gpu/drm/xe/xe_gt_sriov_vf.c
+@@ -58,6 +58,22 @@ static int vf_reset_guc_state(struct xe_gt *gt)
+ return err;
+ }
+
++/**
++ * xe_gt_sriov_vf_reset - Reset GuC VF internal state.
++ * @gt: the &xe_gt
++ *
++ * It requires functional `GuC MMIO based communication`_.
++ *
++ * Return: 0 on success or a negative error code on failure.
++ */
++int xe_gt_sriov_vf_reset(struct xe_gt *gt)
++{
++ if (!xe_device_uc_enabled(gt_to_xe(gt)))
++ return -ENODEV;
++
++ return vf_reset_guc_state(gt);
++}
++
+ static int guc_action_match_version(struct xe_guc *guc,
+ u32 wanted_branch, u32 wanted_major, u32 wanted_minor,
+ u32 *branch, u32 *major, u32 *minor, u32 *patch)
+diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_vf.h b/drivers/gpu/drm/xe/xe_gt_sriov_vf.h
+index 912d2081426163..ba6c5d74e326f4 100644
+--- a/drivers/gpu/drm/xe/xe_gt_sriov_vf.h
++++ b/drivers/gpu/drm/xe/xe_gt_sriov_vf.h
+@@ -12,6 +12,7 @@ struct drm_printer;
+ struct xe_gt;
+ struct xe_reg;
+
++int xe_gt_sriov_vf_reset(struct xe_gt *gt);
+ int xe_gt_sriov_vf_bootstrap(struct xe_gt *gt);
+ int xe_gt_sriov_vf_query_config(struct xe_gt *gt);
+ int xe_gt_sriov_vf_connect(struct xe_gt *gt);
+diff --git a/drivers/gpu/drm/xe/xe_guc_pc.c b/drivers/gpu/drm/xe/xe_guc_pc.c
+index b995d1d51aed04..f382f5d53ca8bc 100644
+--- a/drivers/gpu/drm/xe/xe_guc_pc.c
++++ b/drivers/gpu/drm/xe/xe_guc_pc.c
+@@ -1056,6 +1056,7 @@ int xe_guc_pc_start(struct xe_guc_pc *pc)
+ if (wait_for_pc_state(pc, SLPC_GLOBAL_STATE_RUNNING,
+ SLPC_RESET_EXTENDED_TIMEOUT_MS)) {
+ xe_gt_err(gt, "GuC PC Start failed: Dynamic GT frequency control and GT sleep states are now disabled.\n");
++ ret = -EIO;
+ goto out;
+ }
+
+diff --git a/drivers/gpu/drm/xe/xe_hw_engine_class_sysfs.c b/drivers/gpu/drm/xe/xe_hw_engine_class_sysfs.c
+index b53e8d2accdbd7..a440442b4d7270 100644
+--- a/drivers/gpu/drm/xe/xe_hw_engine_class_sysfs.c
++++ b/drivers/gpu/drm/xe/xe_hw_engine_class_sysfs.c
+@@ -32,14 +32,61 @@ bool xe_hw_engine_timeout_in_range(u64 timeout, u64 min, u64 max)
+ return timeout >= min && timeout <= max;
+ }
+
+-static void kobj_xe_hw_engine_release(struct kobject *kobj)
++static void xe_hw_engine_sysfs_kobj_release(struct kobject *kobj)
+ {
+ kfree(kobj);
+ }
+
++static ssize_t xe_hw_engine_class_sysfs_attr_show(struct kobject *kobj,
++ struct attribute *attr,
++ char *buf)
++{
++ struct xe_device *xe = kobj_to_xe(kobj);
++ struct kobj_attribute *kattr;
++ ssize_t ret = -EIO;
++
++ kattr = container_of(attr, struct kobj_attribute, attr);
++ if (kattr->show) {
++ xe_pm_runtime_get(xe);
++ ret = kattr->show(kobj, kattr, buf);
++ xe_pm_runtime_put(xe);
++ }
++
++ return ret;
++}
++
++static ssize_t xe_hw_engine_class_sysfs_attr_store(struct kobject *kobj,
++ struct attribute *attr,
++ const char *buf,
++ size_t count)
++{
++ struct xe_device *xe = kobj_to_xe(kobj);
++ struct kobj_attribute *kattr;
++ ssize_t ret = -EIO;
++
++ kattr = container_of(attr, struct kobj_attribute, attr);
++ if (kattr->store) {
++ xe_pm_runtime_get(xe);
++ ret = kattr->store(kobj, kattr, buf, count);
++ xe_pm_runtime_put(xe);
++ }
++
++ return ret;
++}
++
++static const struct sysfs_ops xe_hw_engine_class_sysfs_ops = {
++ .show = xe_hw_engine_class_sysfs_attr_show,
++ .store = xe_hw_engine_class_sysfs_attr_store,
++};
++
+ static const struct kobj_type kobj_xe_hw_engine_type = {
+- .release = kobj_xe_hw_engine_release,
+- .sysfs_ops = &kobj_sysfs_ops
++ .release = xe_hw_engine_sysfs_kobj_release,
++ .sysfs_ops = &xe_hw_engine_class_sysfs_ops,
++};
++
++static const struct kobj_type kobj_xe_hw_engine_type_def = {
++ .release = xe_hw_engine_sysfs_kobj_release,
++ .sysfs_ops = &kobj_sysfs_ops,
+ };
+
+ static ssize_t job_timeout_max_store(struct kobject *kobj,
+@@ -543,7 +590,7 @@ static int xe_add_hw_engine_class_defaults(struct xe_device *xe,
+ if (!kobj)
+ return -ENOMEM;
+
+- kobject_init(kobj, &kobj_xe_hw_engine_type);
++ kobject_init(kobj, &kobj_xe_hw_engine_type_def);
+ err = kobject_add(kobj, parent, "%s", ".defaults");
+ if (err)
+ goto err_object;
+@@ -559,57 +606,6 @@ static int xe_add_hw_engine_class_defaults(struct xe_device *xe,
+ return err;
+ }
+
+-static void xe_hw_engine_sysfs_kobj_release(struct kobject *kobj)
+-{
+- kfree(kobj);
+-}
+-
+-static ssize_t xe_hw_engine_class_sysfs_attr_show(struct kobject *kobj,
+- struct attribute *attr,
+- char *buf)
+-{
+- struct xe_device *xe = kobj_to_xe(kobj);
+- struct kobj_attribute *kattr;
+- ssize_t ret = -EIO;
+-
+- kattr = container_of(attr, struct kobj_attribute, attr);
+- if (kattr->show) {
+- xe_pm_runtime_get(xe);
+- ret = kattr->show(kobj, kattr, buf);
+- xe_pm_runtime_put(xe);
+- }
+-
+- return ret;
+-}
+-
+-static ssize_t xe_hw_engine_class_sysfs_attr_store(struct kobject *kobj,
+- struct attribute *attr,
+- const char *buf,
+- size_t count)
+-{
+- struct xe_device *xe = kobj_to_xe(kobj);
+- struct kobj_attribute *kattr;
+- ssize_t ret = -EIO;
+-
+- kattr = container_of(attr, struct kobj_attribute, attr);
+- if (kattr->store) {
+- xe_pm_runtime_get(xe);
+- ret = kattr->store(kobj, kattr, buf, count);
+- xe_pm_runtime_put(xe);
+- }
+-
+- return ret;
+-}
+-
+-static const struct sysfs_ops xe_hw_engine_class_sysfs_ops = {
+- .show = xe_hw_engine_class_sysfs_attr_show,
+- .store = xe_hw_engine_class_sysfs_attr_store,
+-};
+-
+-static const struct kobj_type xe_hw_engine_sysfs_kobj_type = {
+- .release = xe_hw_engine_sysfs_kobj_release,
+- .sysfs_ops = &xe_hw_engine_class_sysfs_ops,
+-};
+
+ static void hw_engine_class_sysfs_fini(void *arg)
+ {
+@@ -640,7 +636,7 @@ int xe_hw_engine_class_sysfs_init(struct xe_gt *gt)
+ if (!kobj)
+ return -ENOMEM;
+
+- kobject_init(kobj, &xe_hw_engine_sysfs_kobj_type);
++ kobject_init(kobj, &kobj_xe_hw_engine_type);
+
+ err = kobject_add(kobj, gt->sysfs, "engines");
+ if (err)
+diff --git a/drivers/gpu/drm/xe/xe_tuning.c b/drivers/gpu/drm/xe/xe_tuning.c
+index d449de0fb6ecb9..3c78f3d7155910 100644
+--- a/drivers/gpu/drm/xe/xe_tuning.c
++++ b/drivers/gpu/drm/xe/xe_tuning.c
+@@ -97,14 +97,6 @@ static const struct xe_rtp_entry_sr engine_tunings[] = {
+ };
+
+ static const struct xe_rtp_entry_sr lrc_tunings[] = {
+- { XE_RTP_NAME("Tuning: ganged timer, also known as 16011163337"),
+- XE_RTP_RULES(GRAPHICS_VERSION_RANGE(1200, 1210), ENGINE_CLASS(RENDER)),
+- /* read verification is ignored due to 1608008084. */
+- XE_RTP_ACTIONS(FIELD_SET_NO_READ_MASK(FF_MODE2,
+- FF_MODE2_GS_TIMER_MASK,
+- FF_MODE2_GS_TIMER_224))
+- },
+-
+ /* DG2 */
+
+ { XE_RTP_NAME("Tuning: L3 cache"),
+diff --git a/drivers/gpu/drm/xe/xe_wa.c b/drivers/gpu/drm/xe/xe_wa.c
+index 570fe03764025c..2553accf8c5176 100644
+--- a/drivers/gpu/drm/xe/xe_wa.c
++++ b/drivers/gpu/drm/xe/xe_wa.c
+@@ -618,6 +618,13 @@ static const struct xe_rtp_entry_sr engine_was[] = {
+ };
+
+ static const struct xe_rtp_entry_sr lrc_was[] = {
++ { XE_RTP_NAME("16011163337"),
++ XE_RTP_RULES(GRAPHICS_VERSION_RANGE(1200, 1210), ENGINE_CLASS(RENDER)),
++ /* read verification is ignored due to 1608008084. */
++ XE_RTP_ACTIONS(FIELD_SET_NO_READ_MASK(FF_MODE2,
++ FF_MODE2_GS_TIMER_MASK,
++ FF_MODE2_GS_TIMER_224))
++ },
+ { XE_RTP_NAME("1409342910, 14010698770, 14010443199, 1408979724, 1409178076, 1409207793, 1409217633, 1409252684, 1409347922, 1409142259"),
+ XE_RTP_RULES(GRAPHICS_VERSION_RANGE(1200, 1210)),
+ XE_RTP_ACTIONS(SET(COMMON_SLICE_CHICKEN3,
+diff --git a/drivers/hid/Kconfig b/drivers/hid/Kconfig
+index dfc245867a46ac..4cfea399ebab2d 100644
+--- a/drivers/hid/Kconfig
++++ b/drivers/hid/Kconfig
+@@ -1220,6 +1220,20 @@ config HID_U2FZERO
+ allow setting the brightness to anything but 1, which will
+ trigger a single blink and immediately reset back to 0.
+
++config HID_UNIVERSAL_PIDFF
++ tristate "universal-pidff: extended USB PID driver compatibility and usage"
++ depends on USB_HID
++ depends on HID_PID
++ help
++ Extended PID support for selected devices.
++
++ Contains report fixups, extended usable button range and
++ pidff quirk management to extend compatibility with slightly
++ non-compliant USB PID devices and better fuzz/flat values for
++ high precision direct drive devices.
++
++ Supports Moza Racing, Cammus, VRS, FFBeast and more.
++
+ config HID_WACOM
+ tristate "Wacom Intuos/Graphire tablet support (USB)"
+ depends on USB_HID
+diff --git a/drivers/hid/Makefile b/drivers/hid/Makefile
+index 0abfe51704a0b4..c7ecfbb3e2280c 100644
+--- a/drivers/hid/Makefile
++++ b/drivers/hid/Makefile
+@@ -140,6 +140,7 @@ hid-uclogic-objs := hid-uclogic-core.o \
+ hid-uclogic-params.o
+ obj-$(CONFIG_HID_UCLOGIC) += hid-uclogic.o
+ obj-$(CONFIG_HID_UDRAW_PS3) += hid-udraw-ps3.o
++obj-$(CONFIG_HID_UNIVERSAL_PIDFF) += hid-universal-pidff.o
+ obj-$(CONFIG_HID_LED) += hid-led.o
+ obj-$(CONFIG_HID_XIAOMI) += hid-xiaomi.o
+ obj-$(CONFIG_HID_XINMO) += hid-xinmo.o
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index 7e400624908e30..288a2b864cc41d 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -190,6 +190,12 @@
+ #define USB_DEVICE_ID_APPLE_TOUCHBAR_BACKLIGHT 0x8102
+ #define USB_DEVICE_ID_APPLE_TOUCHBAR_DISPLAY 0x8302
+
++#define USB_VENDOR_ID_ASETEK 0x2433
++#define USB_DEVICE_ID_ASETEK_INVICTA 0xf300
++#define USB_DEVICE_ID_ASETEK_FORTE 0xf301
++#define USB_DEVICE_ID_ASETEK_LA_PRIMA 0xf303
++#define USB_DEVICE_ID_ASETEK_TONY_KANAAN 0xf306
++
+ #define USB_VENDOR_ID_ASUS 0x0486
+ #define USB_DEVICE_ID_ASUS_T91MT 0x0185
+ #define USB_DEVICE_ID_ASUSTEK_MULTITOUCH_YFO 0x0186
+@@ -262,6 +268,10 @@
+ #define USB_DEVICE_ID_BTC_EMPREX_REMOTE 0x5578
+ #define USB_DEVICE_ID_BTC_EMPREX_REMOTE_2 0x5577
+
++#define USB_VENDOR_ID_CAMMUS 0x3416
++#define USB_DEVICE_ID_CAMMUS_C5 0x0301
++#define USB_DEVICE_ID_CAMMUS_C12 0x0302
++
+ #define USB_VENDOR_ID_CANDO 0x2087
+ #define USB_DEVICE_ID_CANDO_PIXCIR_MULTI_TOUCH 0x0703
+ #define USB_DEVICE_ID_CANDO_MULTI_TOUCH 0x0a01
+@@ -453,6 +463,11 @@
+ #define USB_VENDOR_ID_EVISION 0x320f
+ #define USB_DEVICE_ID_EVISION_ICL01 0x5041
+
++#define USB_VENDOR_ID_FFBEAST 0x045b
++#define USB_DEVICE_ID_FFBEAST_JOYSTICK 0x58f9
++#define USB_DEVICE_ID_FFBEAST_RUDDER 0x5968
++#define USB_DEVICE_ID_FFBEAST_WHEEL 0x59d7
++
+ #define USB_VENDOR_ID_FLATFROG 0x25b5
+ #define USB_DEVICE_ID_MULTITOUCH_3200 0x0002
+
+@@ -817,6 +832,13 @@
+ #define I2C_DEVICE_ID_LG_8001 0x8001
+ #define I2C_DEVICE_ID_LG_7010 0x7010
+
++#define USB_VENDOR_ID_LITE_STAR 0x11ff
++#define USB_DEVICE_ID_PXN_V10 0x3245
++#define USB_DEVICE_ID_PXN_V12 0x1212
++#define USB_DEVICE_ID_PXN_V12_LITE 0x1112
++#define USB_DEVICE_ID_PXN_V12_LITE_2 0x1211
++#define USB_DEVICE_LITE_STAR_GT987_FF 0x2141
++
+ #define USB_VENDOR_ID_LOGITECH 0x046d
+ #define USB_DEVICE_ID_LOGITECH_Z_10_SPK 0x0a07
+ #define USB_DEVICE_ID_LOGITECH_AUDIOHUB 0x0a0e
+@@ -964,6 +986,18 @@
+ #define USB_VENDOR_ID_MONTEREY 0x0566
+ #define USB_DEVICE_ID_GENIUS_KB29E 0x3004
+
++#define USB_VENDOR_ID_MOZA 0x346e
++#define USB_DEVICE_ID_MOZA_R3 0x0005
++#define USB_DEVICE_ID_MOZA_R3_2 0x0015
++#define USB_DEVICE_ID_MOZA_R5 0x0004
++#define USB_DEVICE_ID_MOZA_R5_2 0x0014
++#define USB_DEVICE_ID_MOZA_R9 0x0002
++#define USB_DEVICE_ID_MOZA_R9_2 0x0012
++#define USB_DEVICE_ID_MOZA_R12 0x0006
++#define USB_DEVICE_ID_MOZA_R12_2 0x0016
++#define USB_DEVICE_ID_MOZA_R16_R21 0x0000
++#define USB_DEVICE_ID_MOZA_R16_R21_2 0x0010
++
+ #define USB_VENDOR_ID_MSI 0x1770
+ #define USB_DEVICE_ID_MSI_GT683R_LED_PANEL 0xff00
+
+@@ -1377,6 +1411,9 @@
+ #define USB_DEVICE_ID_VELLEMAN_K8061_FIRST 0x8061
+ #define USB_DEVICE_ID_VELLEMAN_K8061_LAST 0x8068
+
++#define USB_VENDOR_ID_VRS 0x0483
++#define USB_DEVICE_ID_VRS_DFP 0xa355
++
+ #define USB_VENDOR_ID_VTL 0x0306
+ #define USB_DEVICE_ID_VTL_MULTITOUCH_FF3F 0xff3f
+
+diff --git a/drivers/hid/hid-lenovo.c b/drivers/hid/hid-lenovo.c
+index a7d9ca02779eaf..04508c36bdc823 100644
+--- a/drivers/hid/hid-lenovo.c
++++ b/drivers/hid/hid-lenovo.c
+@@ -778,7 +778,7 @@ static int lenovo_raw_event(struct hid_device *hdev,
+ if (unlikely((hdev->product == USB_DEVICE_ID_LENOVO_X12_TAB
+ || hdev->product == USB_DEVICE_ID_LENOVO_X12_TAB2)
+ && size >= 3 && report->id == 0x03))
+- return lenovo_raw_event_TP_X12_tab(hdev, le32_to_cpu(*(u32 *)data));
++ return lenovo_raw_event_TP_X12_tab(hdev, le32_to_cpu(*(__le32 *)data));
+
+ return 0;
+ }
+diff --git a/drivers/hid/hid-universal-pidff.c b/drivers/hid/hid-universal-pidff.c
+new file mode 100644
+index 00000000000000..5b89ec7b5c26c5
+--- /dev/null
++++ b/drivers/hid/hid-universal-pidff.c
+@@ -0,0 +1,202 @@
++// SPDX-License-Identifier: GPL-2.0-or-later
++/*
++ * HID UNIVERSAL PIDFF
++ * hid-pidff wrapper for PID-enabled devices
++ * Handles device reports, quirks and extends usable button range
++ *
++ * Copyright (c) 2024, 2025 Oleg Makarenko
++ * Copyright (c) 2024, 2025 Tomasz Pakuła
++ */
++
++#include <linux/device.h>
++#include <linux/hid.h>
++#include <linux/module.h>
++#include <linux/input-event-codes.h>
++#include "hid-ids.h"
++#include "usbhid/hid-pidff.h"
++
++#define JOY_RANGE (BTN_DEAD - BTN_JOYSTICK + 1)
++
++/*
++ * Map buttons manually to extend the default joystick button limit
++ */
++static int universal_pidff_input_mapping(struct hid_device *hdev,
++ struct hid_input *hi, struct hid_field *field, struct hid_usage *usage,
++ unsigned long **bit, int *max)
++{
++ if ((usage->hid & HID_USAGE_PAGE) != HID_UP_BUTTON)
++ return 0;
++
++ if (field->application != HID_GD_JOYSTICK)
++ return 0;
++
++ int button = ((usage->hid - 1) & HID_USAGE);
++ int code = button + BTN_JOYSTICK;
++
++ /* Detect the end of JOYSTICK buttons range */
++ if (code > BTN_DEAD)
++ code = button + KEY_NEXT_FAVORITE - JOY_RANGE;
++
++ /*
++ * Map overflowing buttons to KEY_RESERVED to not ignore
++ * them and let them still trigger MSC_SCAN
++ */
++ if (code > KEY_MAX)
++ code = KEY_RESERVED;
++
++ hid_map_usage(hi, usage, bit, max, EV_KEY, code);
++ hid_dbg(hdev, "Button %d: usage %d", button, code);
++ return 1;
++}
++
++/*
++ * Check if the device is PID and initialize it
++ * Add quirks after initialisation
++ */
++static int universal_pidff_probe(struct hid_device *hdev,
++ const struct hid_device_id *id)
++{
++ int i, error;
++ error = hid_parse(hdev);
++ if (error) {
++ hid_err(hdev, "HID parse failed\n");
++ goto err;
++ }
++
++ error = hid_hw_start(hdev, HID_CONNECT_DEFAULT & ~HID_CONNECT_FF);
++ if (error) {
++ hid_err(hdev, "HID hw start failed\n");
++ goto err;
++ }
++
++ /* Check if device contains PID usage page */
++ error = 1;
++ for (i = 0; i < hdev->collection_size; i++)
++ if ((hdev->collection[i].usage & HID_USAGE_PAGE) == HID_UP_PID) {
++ error = 0;
++ hid_dbg(hdev, "PID usage page found\n");
++ break;
++ }
++
++ /*
++ * Do not fail as this might be the second "device"
++ * just for additional buttons/axes. Exit cleanly if force
++ * feedback usage page wasn't found (included devices were
++ * tested and confirmed to be USB PID after all).
++ */
++ if (error) {
++ hid_dbg(hdev, "PID usage page not found in the descriptor\n");
++ return 0;
++ }
++
++ /* Check if HID_PID support is enabled */
++ int (*init_function)(struct hid_device *, u32);
++ init_function = hid_pidff_init_with_quirks;
++
++ if (!init_function) {
++ hid_warn(hdev, "HID_PID support not enabled!\n");
++ return 0;
++ }
++
++ error = init_function(hdev, id->driver_data);
++ if (error) {
++ hid_warn(hdev, "Error initialising force feedback\n");
++ goto err;
++ }
++
++	hid_info(hdev, "Universal pidff driver loaded successfully!");
++
++ return 0;
++err:
++ return error;
++}
++
++static int universal_pidff_input_configured(struct hid_device *hdev,
++ struct hid_input *hidinput)
++{
++ int axis;
++ struct input_dev *input = hidinput->input;
++
++ if (!input->absinfo)
++ return 0;
++
++ /* Decrease fuzz and deadzone on available axes */
++ for (axis = ABS_X; axis <= ABS_BRAKE; axis++) {
++ if (!test_bit(axis, input->absbit))
++ continue;
++
++ input_set_abs_params(input, axis,
++ input->absinfo[axis].minimum,
++ input->absinfo[axis].maximum,
++ axis == ABS_X ? 0 : 8, 0);
++ }
++
++ /* Remove fuzz and deadzone from the second joystick axis */
++ if (hdev->vendor == USB_VENDOR_ID_FFBEAST &&
++ hdev->product == USB_DEVICE_ID_FFBEAST_JOYSTICK)
++ input_set_abs_params(input, ABS_Y,
++ input->absinfo[ABS_Y].minimum,
++ input->absinfo[ABS_Y].maximum, 0, 0);
++
++ return 0;
++}
++
++static const struct hid_device_id universal_pidff_devices[] = {
++ { HID_USB_DEVICE(USB_VENDOR_ID_MOZA, USB_DEVICE_ID_MOZA_R3),
++ .driver_data = HID_PIDFF_QUIRK_FIX_WHEEL_DIRECTION },
++ { HID_USB_DEVICE(USB_VENDOR_ID_MOZA, USB_DEVICE_ID_MOZA_R3_2),
++ .driver_data = HID_PIDFF_QUIRK_FIX_WHEEL_DIRECTION },
++ { HID_USB_DEVICE(USB_VENDOR_ID_MOZA, USB_DEVICE_ID_MOZA_R5),
++ .driver_data = HID_PIDFF_QUIRK_FIX_WHEEL_DIRECTION },
++ { HID_USB_DEVICE(USB_VENDOR_ID_MOZA, USB_DEVICE_ID_MOZA_R5_2),
++ .driver_data = HID_PIDFF_QUIRK_FIX_WHEEL_DIRECTION },
++ { HID_USB_DEVICE(USB_VENDOR_ID_MOZA, USB_DEVICE_ID_MOZA_R9),
++ .driver_data = HID_PIDFF_QUIRK_FIX_WHEEL_DIRECTION },
++ { HID_USB_DEVICE(USB_VENDOR_ID_MOZA, USB_DEVICE_ID_MOZA_R9_2),
++ .driver_data = HID_PIDFF_QUIRK_FIX_WHEEL_DIRECTION },
++ { HID_USB_DEVICE(USB_VENDOR_ID_MOZA, USB_DEVICE_ID_MOZA_R12),
++ .driver_data = HID_PIDFF_QUIRK_FIX_WHEEL_DIRECTION },
++ { HID_USB_DEVICE(USB_VENDOR_ID_MOZA, USB_DEVICE_ID_MOZA_R12_2),
++ .driver_data = HID_PIDFF_QUIRK_FIX_WHEEL_DIRECTION },
++ { HID_USB_DEVICE(USB_VENDOR_ID_MOZA, USB_DEVICE_ID_MOZA_R16_R21),
++ .driver_data = HID_PIDFF_QUIRK_FIX_WHEEL_DIRECTION },
++ { HID_USB_DEVICE(USB_VENDOR_ID_MOZA, USB_DEVICE_ID_MOZA_R16_R21_2),
++ .driver_data = HID_PIDFF_QUIRK_FIX_WHEEL_DIRECTION },
++ { HID_USB_DEVICE(USB_VENDOR_ID_CAMMUS, USB_DEVICE_ID_CAMMUS_C5) },
++ { HID_USB_DEVICE(USB_VENDOR_ID_CAMMUS, USB_DEVICE_ID_CAMMUS_C12) },
++ { HID_USB_DEVICE(USB_VENDOR_ID_VRS, USB_DEVICE_ID_VRS_DFP),
++ .driver_data = HID_PIDFF_QUIRK_PERMISSIVE_CONTROL },
++ { HID_USB_DEVICE(USB_VENDOR_ID_FFBEAST, USB_DEVICE_ID_FFBEAST_JOYSTICK), },
++ { HID_USB_DEVICE(USB_VENDOR_ID_FFBEAST, USB_DEVICE_ID_FFBEAST_RUDDER), },
++ { HID_USB_DEVICE(USB_VENDOR_ID_FFBEAST, USB_DEVICE_ID_FFBEAST_WHEEL) },
++ { HID_USB_DEVICE(USB_VENDOR_ID_LITE_STAR, USB_DEVICE_ID_PXN_V10),
++ .driver_data = HID_PIDFF_QUIRK_PERIODIC_SINE_ONLY },
++ { HID_USB_DEVICE(USB_VENDOR_ID_LITE_STAR, USB_DEVICE_ID_PXN_V12),
++ .driver_data = HID_PIDFF_QUIRK_PERIODIC_SINE_ONLY },
++ { HID_USB_DEVICE(USB_VENDOR_ID_LITE_STAR, USB_DEVICE_ID_PXN_V12_LITE),
++ .driver_data = HID_PIDFF_QUIRK_PERIODIC_SINE_ONLY },
++ { HID_USB_DEVICE(USB_VENDOR_ID_LITE_STAR, USB_DEVICE_ID_PXN_V12_LITE_2),
++ .driver_data = HID_PIDFF_QUIRK_PERIODIC_SINE_ONLY },
++ { HID_USB_DEVICE(USB_VENDOR_ID_LITE_STAR, USB_DEVICE_LITE_STAR_GT987_FF),
++ .driver_data = HID_PIDFF_QUIRK_PERIODIC_SINE_ONLY },
++ { HID_USB_DEVICE(USB_VENDOR_ID_ASETEK, USB_DEVICE_ID_ASETEK_INVICTA) },
++ { HID_USB_DEVICE(USB_VENDOR_ID_ASETEK, USB_DEVICE_ID_ASETEK_FORTE) },
++ { HID_USB_DEVICE(USB_VENDOR_ID_ASETEK, USB_DEVICE_ID_ASETEK_LA_PRIMA) },
++ { HID_USB_DEVICE(USB_VENDOR_ID_ASETEK, USB_DEVICE_ID_ASETEK_TONY_KANAAN) },
++ { }
++};
++MODULE_DEVICE_TABLE(hid, universal_pidff_devices);
++
++static struct hid_driver universal_pidff = {
++ .name = "hid-universal-pidff",
++ .id_table = universal_pidff_devices,
++ .input_mapping = universal_pidff_input_mapping,
++ .probe = universal_pidff_probe,
++ .input_configured = universal_pidff_input_configured
++};
++module_hid_driver(universal_pidff);
++
++MODULE_DESCRIPTION("Universal driver for USB PID Force Feedback devices");
++MODULE_LICENSE("GPL");
++MODULE_AUTHOR("Oleg Makarenko <oleg@makarenk.ooo>");
++MODULE_AUTHOR("Tomasz Pakuła <tomasz.pakula.oficjalny@gmail.com>");
+diff --git a/drivers/hid/usbhid/hid-core.c b/drivers/hid/usbhid/hid-core.c
+index a6eb6fe6130d13..44c2351b870fa2 100644
+--- a/drivers/hid/usbhid/hid-core.c
++++ b/drivers/hid/usbhid/hid-core.c
+@@ -35,6 +35,7 @@
+ #include <linux/hid-debug.h>
+ #include <linux/hidraw.h>
+ #include "usbhid.h"
++#include "hid-pidff.h"
+
+ /*
+ * Version Information
+diff --git a/drivers/hid/usbhid/hid-pidff.c b/drivers/hid/usbhid/hid-pidff.c
+index 3b4ee21cd81119..8dfd2c554a2762 100644
+--- a/drivers/hid/usbhid/hid-pidff.c
++++ b/drivers/hid/usbhid/hid-pidff.c
+@@ -3,27 +3,27 @@
+ * Force feedback driver for USB HID PID compliant devices
+ *
+ * Copyright (c) 2005, 2006 Anssi Hannula <anssi.hannula@gmail.com>
++ * Upgraded 2025 by Oleg Makarenko and Tomasz Pakuła
+ */
+
+-/*
+- */
+-
+-/* #define DEBUG */
+-
+ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
++#include "hid-pidff.h"
+ #include <linux/input.h>
+ #include <linux/slab.h>
+ #include <linux/usb.h>
+-
+ #include <linux/hid.h>
++#include <linux/minmax.h>
+
+-#include "usbhid.h"
+
+ #define PID_EFFECTS_MAX 64
++#define PID_INFINITE U16_MAX
+
+-/* Report usage table used to put reports into an array */
++/* Linux Force Feedback API uses milliseconds as time unit */
++#define FF_TIME_EXPONENT -3
++#define FF_INFINITE 0
+
++/* Report usage table used to put reports into an array */
+ #define PID_SET_EFFECT 0
+ #define PID_EFFECT_OPERATION 1
+ #define PID_DEVICE_GAIN 2
+@@ -44,12 +44,19 @@ static const u8 pidff_reports[] = {
+ 0x21, 0x77, 0x7d, 0x7f, 0x89, 0x90, 0x96, 0xab,
+ 0x5a, 0x5f, 0x6e, 0x73, 0x74
+ };
++/*
++ * device_control is really 0x95, but 0x96 specified
++ * as it is the usage of the only field in that report.
++ */
+
+-/* device_control is really 0x95, but 0x96 specified as it is the usage of
+-the only field in that report */
++/* PID special fields */
++#define PID_EFFECT_TYPE 0x25
++#define PID_DIRECTION 0x57
++#define PID_EFFECT_OPERATION_ARRAY 0x78
++#define PID_BLOCK_LOAD_STATUS 0x8b
++#define PID_DEVICE_CONTROL_ARRAY 0x96
+
+ /* Value usage tables used to put fields and values into arrays */
+-
+ #define PID_EFFECT_BLOCK_INDEX 0
+
+ #define PID_DURATION 1
+@@ -107,10 +114,13 @@ static const u8 pidff_device_gain[] = { 0x7e };
+ static const u8 pidff_pool[] = { 0x80, 0x83, 0xa9 };
+
+ /* Special field key tables used to put special field keys into arrays */
+-
+ #define PID_ENABLE_ACTUATORS 0
+-#define PID_RESET 1
+-static const u8 pidff_device_control[] = { 0x97, 0x9a };
++#define PID_DISABLE_ACTUATORS 1
++#define PID_STOP_ALL_EFFECTS 2
++#define PID_RESET 3
++#define PID_PAUSE 4
++#define PID_CONTINUE 5
++static const u8 pidff_device_control[] = { 0x97, 0x98, 0x99, 0x9a, 0x9b, 0x9c };
+
+ #define PID_CONSTANT 0
+ #define PID_RAMP 1
+@@ -130,12 +140,16 @@ static const u8 pidff_effect_types[] = {
+
+ #define PID_BLOCK_LOAD_SUCCESS 0
+ #define PID_BLOCK_LOAD_FULL 1
+-static const u8 pidff_block_load_status[] = { 0x8c, 0x8d };
++#define PID_BLOCK_LOAD_ERROR 2
++static const u8 pidff_block_load_status[] = { 0x8c, 0x8d, 0x8e};
+
+ #define PID_EFFECT_START 0
+ #define PID_EFFECT_STOP 1
+ static const u8 pidff_effect_operation_status[] = { 0x79, 0x7b };
+
++/* Polar direction 90 degrees (East) */
++#define PIDFF_FIXED_WHEEL_DIRECTION 0x4000
++
+ struct pidff_usage {
+ struct hid_field *field;
+ s32 *value;
+@@ -159,8 +173,10 @@ struct pidff_device {
+ struct pidff_usage effect_operation[sizeof(pidff_effect_operation)];
+ struct pidff_usage block_free[sizeof(pidff_block_free)];
+
+- /* Special field is a field that is not composed of
+- usage<->value pairs that pidff_usage values are */
++ /*
++ * Special field is a field that is not composed of
++ * usage<->value pairs that pidff_usage values are
++ */
+
+ /* Special field in create_new_effect */
+ struct hid_field *create_new_effect_type;
+@@ -184,30 +200,61 @@ struct pidff_device {
+ int operation_id[sizeof(pidff_effect_operation_status)];
+
+ int pid_id[PID_EFFECTS_MAX];
++
++ u32 quirks;
++ u8 effect_count;
+ };
+
++/*
++ * Clamp value for a given field
++ */
++static s32 pidff_clamp(s32 i, struct hid_field *field)
++{
++ s32 clamped = clamp(i, field->logical_minimum, field->logical_maximum);
++ pr_debug("clamped from %d to %d", i, clamped);
++ return clamped;
++}
++
+ /*
+ * Scale an unsigned value with range 0..max for the given field
+ */
+ static int pidff_rescale(int i, int max, struct hid_field *field)
+ {
+ return i * (field->logical_maximum - field->logical_minimum) / max +
+- field->logical_minimum;
++ field->logical_minimum;
+ }
+
+ /*
+- * Scale a signed value in range -0x8000..0x7fff for the given field
++ * Scale a signed value in range S16_MIN..S16_MAX for the given field
+ */
+ static int pidff_rescale_signed(int i, struct hid_field *field)
+ {
+- return i == 0 ? 0 : i >
+- 0 ? i * field->logical_maximum / 0x7fff : i *
+- field->logical_minimum / -0x8000;
++ if (i > 0) return i * field->logical_maximum / S16_MAX;
++ if (i < 0) return i * field->logical_minimum / S16_MIN;
++ return 0;
++}
++
++/*
++ * Scale time value from Linux default (ms) to field units
++ */
++static u32 pidff_rescale_time(u16 time, struct hid_field *field)
++{
++ u32 scaled_time = time;
++ int exponent = field->unit_exponent;
++ pr_debug("time field exponent: %d\n", exponent);
++
++ for (;exponent < FF_TIME_EXPONENT; exponent++)
++ scaled_time *= 10;
++ for (;exponent > FF_TIME_EXPONENT; exponent--)
++ scaled_time /= 10;
++
++ pr_debug("time calculated from %d to %d\n", time, scaled_time);
++ return scaled_time;
+ }
+
+ static void pidff_set(struct pidff_usage *usage, u16 value)
+ {
+- usage->value[0] = pidff_rescale(value, 0xffff, usage->field);
++ usage->value[0] = pidff_rescale(value, U16_MAX, usage->field);
+ pr_debug("calculated from %d to %d\n", value, usage->value[0]);
+ }
+
+@@ -218,14 +265,35 @@ static void pidff_set_signed(struct pidff_usage *usage, s16 value)
+ else {
+ if (value < 0)
+ usage->value[0] =
+- pidff_rescale(-value, 0x8000, usage->field);
++ pidff_rescale(-value, -S16_MIN, usage->field);
+ else
+ usage->value[0] =
+- pidff_rescale(value, 0x7fff, usage->field);
++ pidff_rescale(value, S16_MAX, usage->field);
+ }
+ pr_debug("calculated from %d to %d\n", value, usage->value[0]);
+ }
+
++static void pidff_set_time(struct pidff_usage *usage, u16 time)
++{
++ u32 modified_time = pidff_rescale_time(time, usage->field);
++ usage->value[0] = pidff_clamp(modified_time, usage->field);
++}
++
++static void pidff_set_duration(struct pidff_usage *usage, u16 duration)
++{
++ /* Infinite value conversion from Linux API -> PID */
++ if (duration == FF_INFINITE)
++ duration = PID_INFINITE;
++
++ /* PID defines INFINITE as the max possible value for duration field */
++ if (duration == PID_INFINITE) {
++ usage->value[0] = (1U << usage->field->report_size) - 1;
++ return;
++ }
++
++ pidff_set_time(usage, duration);
++}
++
+ /*
+ * Send envelope report to the device
+ */
+@@ -233,19 +301,21 @@ static void pidff_set_envelope_report(struct pidff_device *pidff,
+ struct ff_envelope *envelope)
+ {
+ pidff->set_envelope[PID_EFFECT_BLOCK_INDEX].value[0] =
+- pidff->block_load[PID_EFFECT_BLOCK_INDEX].value[0];
++ pidff->block_load[PID_EFFECT_BLOCK_INDEX].value[0];
+
+ pidff->set_envelope[PID_ATTACK_LEVEL].value[0] =
+- pidff_rescale(envelope->attack_level >
+- 0x7fff ? 0x7fff : envelope->attack_level, 0x7fff,
+- pidff->set_envelope[PID_ATTACK_LEVEL].field);
++ pidff_rescale(envelope->attack_level >
++ S16_MAX ? S16_MAX : envelope->attack_level, S16_MAX,
++ pidff->set_envelope[PID_ATTACK_LEVEL].field);
+ pidff->set_envelope[PID_FADE_LEVEL].value[0] =
+- pidff_rescale(envelope->fade_level >
+- 0x7fff ? 0x7fff : envelope->fade_level, 0x7fff,
+- pidff->set_envelope[PID_FADE_LEVEL].field);
++ pidff_rescale(envelope->fade_level >
++ S16_MAX ? S16_MAX : envelope->fade_level, S16_MAX,
++ pidff->set_envelope[PID_FADE_LEVEL].field);
+
+- pidff->set_envelope[PID_ATTACK_TIME].value[0] = envelope->attack_length;
+- pidff->set_envelope[PID_FADE_TIME].value[0] = envelope->fade_length;
++ pidff_set_time(&pidff->set_envelope[PID_ATTACK_TIME],
++ envelope->attack_length);
++ pidff_set_time(&pidff->set_envelope[PID_FADE_TIME],
++ envelope->fade_length);
+
+ hid_dbg(pidff->hid, "attack %u => %d\n",
+ envelope->attack_level,
+@@ -261,10 +331,22 @@ static void pidff_set_envelope_report(struct pidff_device *pidff,
+ static int pidff_needs_set_envelope(struct ff_envelope *envelope,
+ struct ff_envelope *old)
+ {
+- return envelope->attack_level != old->attack_level ||
+- envelope->fade_level != old->fade_level ||
++ bool needs_new_envelope;
++ needs_new_envelope = envelope->attack_level != 0 ||
++ envelope->fade_level != 0 ||
++ envelope->attack_length != 0 ||
++ envelope->fade_length != 0;
++
++ if (!needs_new_envelope)
++ return false;
++
++ if (!old)
++ return needs_new_envelope;
++
++ return envelope->attack_level != old->attack_level ||
++ envelope->fade_level != old->fade_level ||
+ envelope->attack_length != old->attack_length ||
+- envelope->fade_length != old->fade_length;
++ envelope->fade_length != old->fade_length;
+ }
+
+ /*
+@@ -301,17 +383,27 @@ static void pidff_set_effect_report(struct pidff_device *pidff,
+ pidff->block_load[PID_EFFECT_BLOCK_INDEX].value[0];
+ pidff->set_effect_type->value[0] =
+ pidff->create_new_effect_type->value[0];
+- pidff->set_effect[PID_DURATION].value[0] = effect->replay.length;
++
++ pidff_set_duration(&pidff->set_effect[PID_DURATION],
++ effect->replay.length);
++
+ pidff->set_effect[PID_TRIGGER_BUTTON].value[0] = effect->trigger.button;
+- pidff->set_effect[PID_TRIGGER_REPEAT_INT].value[0] =
+- effect->trigger.interval;
++ pidff_set_time(&pidff->set_effect[PID_TRIGGER_REPEAT_INT],
++ effect->trigger.interval);
+ pidff->set_effect[PID_GAIN].value[0] =
+ pidff->set_effect[PID_GAIN].field->logical_maximum;
+ pidff->set_effect[PID_DIRECTION_ENABLE].value[0] = 1;
+- pidff->effect_direction->value[0] =
+- pidff_rescale(effect->direction, 0xffff,
+- pidff->effect_direction);
+- pidff->set_effect[PID_START_DELAY].value[0] = effect->replay.delay;
++
++ /* Use fixed direction if needed */
++ pidff->effect_direction->value[0] = pidff_rescale(
++ pidff->quirks & HID_PIDFF_QUIRK_FIX_WHEEL_DIRECTION ?
++ PIDFF_FIXED_WHEEL_DIRECTION : effect->direction,
++ U16_MAX, pidff->effect_direction);
++
++ /* Omit setting delay field if it's missing */
++ if (!(pidff->quirks & HID_PIDFF_QUIRK_MISSING_DELAY))
++ pidff_set_time(&pidff->set_effect[PID_START_DELAY],
++ effect->replay.delay);
+
+ hid_hw_request(pidff->hid, pidff->reports[PID_SET_EFFECT],
+ HID_REQ_SET_REPORT);
+@@ -343,11 +435,11 @@ static void pidff_set_periodic_report(struct pidff_device *pidff,
+ pidff_set_signed(&pidff->set_periodic[PID_OFFSET],
+ effect->u.periodic.offset);
+ pidff_set(&pidff->set_periodic[PID_PHASE], effect->u.periodic.phase);
+- pidff->set_periodic[PID_PERIOD].value[0] = effect->u.periodic.period;
++ pidff_set_time(&pidff->set_periodic[PID_PERIOD],
++ effect->u.periodic.period);
+
+ hid_hw_request(pidff->hid, pidff->reports[PID_SET_PERIODIC],
+ HID_REQ_SET_REPORT);
+-
+ }
+
+ /*
+@@ -368,13 +460,19 @@ static int pidff_needs_set_periodic(struct ff_effect *effect,
+ static void pidff_set_condition_report(struct pidff_device *pidff,
+ struct ff_effect *effect)
+ {
+- int i;
++ int i, max_axis;
++
++ /* Devices missing Parameter Block Offset can only have one axis */
++ max_axis = pidff->quirks & HID_PIDFF_QUIRK_MISSING_PBO ? 1 : 2;
+
+ pidff->set_condition[PID_EFFECT_BLOCK_INDEX].value[0] =
+ pidff->block_load[PID_EFFECT_BLOCK_INDEX].value[0];
+
+- for (i = 0; i < 2; i++) {
+- pidff->set_condition[PID_PARAM_BLOCK_OFFSET].value[0] = i;
++ for (i = 0; i < max_axis; i++) {
++ /* Omit Parameter Block Offset if missing */
++ if (!(pidff->quirks & HID_PIDFF_QUIRK_MISSING_PBO))
++ pidff->set_condition[PID_PARAM_BLOCK_OFFSET].value[0] = i;
++
+ pidff_set_signed(&pidff->set_condition[PID_CP_OFFSET],
+ effect->u.condition[i].center);
+ pidff_set_signed(&pidff->set_condition[PID_POS_COEFFICIENT],
+@@ -441,9 +539,104 @@ static int pidff_needs_set_ramp(struct ff_effect *effect, struct ff_effect *old)
+ effect->u.ramp.end_level != old->u.ramp.end_level;
+ }
+
++/*
++ * Set device gain
++ */
++static void pidff_set_gain_report(struct pidff_device *pidff, u16 gain)
++{
++ if (!pidff->device_gain[PID_DEVICE_GAIN_FIELD].field)
++ return;
++
++ pidff_set(&pidff->device_gain[PID_DEVICE_GAIN_FIELD], gain);
++ hid_hw_request(pidff->hid, pidff->reports[PID_DEVICE_GAIN],
++ HID_REQ_SET_REPORT);
++}
++
++/*
++ * Send device control report to the device
++ */
++static void pidff_set_device_control(struct pidff_device *pidff, int field)
++{
++ int i, index;
++ int field_index = pidff->control_id[field];
++
++ if (field_index < 1)
++ return;
++
++ /* Detect if the field is a bitmask variable or an array */
++ if (pidff->device_control->flags & HID_MAIN_ITEM_VARIABLE) {
++ hid_dbg(pidff->hid, "DEVICE_CONTROL is a bitmask\n");
++
++ /* Clear current bitmask */
++ for(i = 0; i < sizeof(pidff_device_control); i++) {
++ index = pidff->control_id[i];
++ if (index < 1)
++ continue;
++
++ pidff->device_control->value[index - 1] = 0;
++ }
++
++ pidff->device_control->value[field_index - 1] = 1;
++ } else {
++ hid_dbg(pidff->hid, "DEVICE_CONTROL is an array\n");
++ pidff->device_control->value[0] = field_index;
++ }
++
++ hid_hw_request(pidff->hid, pidff->reports[PID_DEVICE_CONTROL], HID_REQ_SET_REPORT);
++ hid_hw_wait(pidff->hid);
++}
++
++/*
++ * Modify actuators state
++ */
++static void pidff_set_actuators(struct pidff_device *pidff, bool enable)
++{
++ hid_dbg(pidff->hid, "%s actuators\n", enable ? "Enable" : "Disable");
++ pidff_set_device_control(pidff,
++ enable ? PID_ENABLE_ACTUATORS : PID_DISABLE_ACTUATORS);
++}
++
++/*
++ * Reset the device, stop all effects, enable actuators
++ */
++static void pidff_reset(struct pidff_device *pidff)
++{
++ /* We reset twice as sometimes hid_wait_io isn't waiting long enough */
++ pidff_set_device_control(pidff, PID_RESET);
++ pidff_set_device_control(pidff, PID_RESET);
++ pidff->effect_count = 0;
++
++ pidff_set_device_control(pidff, PID_STOP_ALL_EFFECTS);
++ pidff_set_actuators(pidff, 1);
++}
++
++/*
++ * Fetch pool report
++ */
++static void pidff_fetch_pool(struct pidff_device *pidff)
++{
++ int i;
++ struct hid_device *hid = pidff->hid;
++
++ /* Repeat if PID_SIMULTANEOUS_MAX < 2 to make sure it's correct */
++ for(i = 0; i < 20; i++) {
++ hid_hw_request(hid, pidff->reports[PID_POOL], HID_REQ_GET_REPORT);
++ hid_hw_wait(hid);
++
++ if (!pidff->pool[PID_SIMULTANEOUS_MAX].value)
++ return;
++ if (pidff->pool[PID_SIMULTANEOUS_MAX].value[0] >= 2)
++ return;
++ }
++ hid_warn(hid, "device reports %d simultaneous effects\n",
++ pidff->pool[PID_SIMULTANEOUS_MAX].value[0]);
++}
++
+ /*
+ * Send a request for effect upload to the device
+ *
++ * Reset and enable actuators if no effects were present on the device
++ *
+ * Returns 0 if device reported success, -ENOSPC if the device reported memory
+ * is full. Upon unknown response the function will retry for 60 times, if
+ * still unsuccessful -EIO is returned.
+@@ -452,6 +645,9 @@ static int pidff_request_effect_upload(struct pidff_device *pidff, int efnum)
+ {
+ int j;
+
++ if (!pidff->effect_count)
++ pidff_reset(pidff);
++
+ pidff->create_new_effect_type->value[0] = efnum;
+ hid_hw_request(pidff->hid, pidff->reports[PID_CREATE_NEW_EFFECT],
+ HID_REQ_SET_REPORT);
+@@ -471,6 +667,8 @@ static int pidff_request_effect_upload(struct pidff_device *pidff, int efnum)
+ hid_dbg(pidff->hid, "device reported free memory: %d bytes\n",
+ pidff->block_load[PID_RAM_POOL_AVAILABLE].value ?
+ pidff->block_load[PID_RAM_POOL_AVAILABLE].value[0] : -1);
++
++ pidff->effect_count++;
+ return 0;
+ }
+ if (pidff->block_load_status->value[0] ==
+@@ -480,6 +678,11 @@ static int pidff_request_effect_upload(struct pidff_device *pidff, int efnum)
+ pidff->block_load[PID_RAM_POOL_AVAILABLE].value[0] : -1);
+ return -ENOSPC;
+ }
++ if (pidff->block_load_status->value[0] ==
++ pidff->status_id[PID_BLOCK_LOAD_ERROR]) {
++ hid_dbg(pidff->hid, "device error during effect creation\n");
++ return -EREMOTEIO;
++ }
+ }
+ hid_err(pidff->hid, "pid_block_load failed 60 times\n");
+ return -EIO;
+@@ -498,7 +701,8 @@ static void pidff_playback_pid(struct pidff_device *pidff, int pid_id, int n)
+ } else {
+ pidff->effect_operation_status->value[0] =
+ pidff->operation_id[PID_EFFECT_START];
+- pidff->effect_operation[PID_LOOP_COUNT].value[0] = n;
++ pidff->effect_operation[PID_LOOP_COUNT].value[0] =
++ pidff_clamp(n, pidff->effect_operation[PID_LOOP_COUNT].field);
+ }
+
+ hid_hw_request(pidff->hid, pidff->reports[PID_EFFECT_OPERATION],
+@@ -511,20 +715,22 @@ static void pidff_playback_pid(struct pidff_device *pidff, int pid_id, int n)
+ static int pidff_playback(struct input_dev *dev, int effect_id, int value)
+ {
+ struct pidff_device *pidff = dev->ff->private;
+-
+ pidff_playback_pid(pidff, pidff->pid_id[effect_id], value);
+-
+ return 0;
+ }
+
+ /*
+ * Erase effect with PID id
++ * Decrease the device effect counter
+ */
+ static void pidff_erase_pid(struct pidff_device *pidff, int pid_id)
+ {
+ pidff->block_free[PID_EFFECT_BLOCK_INDEX].value[0] = pid_id;
+ hid_hw_request(pidff->hid, pidff->reports[PID_BLOCK_FREE],
+ HID_REQ_SET_REPORT);
++
++ if (pidff->effect_count > 0)
++ pidff->effect_count--;
+ }
+
+ /*
+@@ -537,8 +743,11 @@ static int pidff_erase_effect(struct input_dev *dev, int effect_id)
+
+ hid_dbg(pidff->hid, "starting to erase %d/%d\n",
+ effect_id, pidff->pid_id[effect_id]);
+- /* Wait for the queue to clear. We do not want a full fifo to
+- prevent the effect removal. */
++
++ /*
++ * Wait for the queue to clear. We do not want
++ * a full fifo to prevent the effect removal.
++ */
+ hid_hw_wait(pidff->hid);
+ pidff_playback_pid(pidff, pid_id, 0);
+ pidff_erase_pid(pidff, pid_id);
+@@ -574,11 +783,9 @@ static int pidff_upload_effect(struct input_dev *dev, struct ff_effect *effect,
+ pidff_set_effect_report(pidff, effect);
+ if (!old || pidff_needs_set_constant(effect, old))
+ pidff_set_constant_force_report(pidff, effect);
+- if (!old ||
+- pidff_needs_set_envelope(&effect->u.constant.envelope,
+- &old->u.constant.envelope))
+- pidff_set_envelope_report(pidff,
+- &effect->u.constant.envelope);
++ if (pidff_needs_set_envelope(&effect->u.constant.envelope,
++ old ? &old->u.constant.envelope : NULL))
++ pidff_set_envelope_report(pidff, &effect->u.constant.envelope);
+ break;
+
+ case FF_PERIODIC:
+@@ -604,6 +811,9 @@ static int pidff_upload_effect(struct input_dev *dev, struct ff_effect *effect,
+ return -EINVAL;
+ }
+
++ if (pidff->quirks & HID_PIDFF_QUIRK_PERIODIC_SINE_ONLY)
++ type_id = PID_SINE;
++
+ error = pidff_request_effect_upload(pidff,
+ pidff->type_id[type_id]);
+ if (error)
+@@ -613,11 +823,9 @@ static int pidff_upload_effect(struct input_dev *dev, struct ff_effect *effect,
+ pidff_set_effect_report(pidff, effect);
+ if (!old || pidff_needs_set_periodic(effect, old))
+ pidff_set_periodic_report(pidff, effect);
+- if (!old ||
+- pidff_needs_set_envelope(&effect->u.periodic.envelope,
+- &old->u.periodic.envelope))
+- pidff_set_envelope_report(pidff,
+- &effect->u.periodic.envelope);
++ if (pidff_needs_set_envelope(&effect->u.periodic.envelope,
++ old ? &old->u.periodic.envelope : NULL))
++ pidff_set_envelope_report(pidff, &effect->u.periodic.envelope);
+ break;
+
+ case FF_RAMP:
+@@ -631,56 +839,32 @@ static int pidff_upload_effect(struct input_dev *dev, struct ff_effect *effect,
+ pidff_set_effect_report(pidff, effect);
+ if (!old || pidff_needs_set_ramp(effect, old))
+ pidff_set_ramp_force_report(pidff, effect);
+- if (!old ||
+- pidff_needs_set_envelope(&effect->u.ramp.envelope,
+- &old->u.ramp.envelope))
+- pidff_set_envelope_report(pidff,
+- &effect->u.ramp.envelope);
++ if (pidff_needs_set_envelope(&effect->u.ramp.envelope,
++ old ? &old->u.ramp.envelope : NULL))
++ pidff_set_envelope_report(pidff, &effect->u.ramp.envelope);
+ break;
+
+ case FF_SPRING:
+- if (!old) {
+- error = pidff_request_effect_upload(pidff,
+- pidff->type_id[PID_SPRING]);
+- if (error)
+- return error;
+- }
+- if (!old || pidff_needs_set_effect(effect, old))
+- pidff_set_effect_report(pidff, effect);
+- if (!old || pidff_needs_set_condition(effect, old))
+- pidff_set_condition_report(pidff, effect);
+- break;
+-
+- case FF_FRICTION:
+- if (!old) {
+- error = pidff_request_effect_upload(pidff,
+- pidff->type_id[PID_FRICTION]);
+- if (error)
+- return error;
+- }
+- if (!old || pidff_needs_set_effect(effect, old))
+- pidff_set_effect_report(pidff, effect);
+- if (!old || pidff_needs_set_condition(effect, old))
+- pidff_set_condition_report(pidff, effect);
+- break;
+-
+ case FF_DAMPER:
+- if (!old) {
+- error = pidff_request_effect_upload(pidff,
+- pidff->type_id[PID_DAMPER]);
+- if (error)
+- return error;
+- }
+- if (!old || pidff_needs_set_effect(effect, old))
+- pidff_set_effect_report(pidff, effect);
+- if (!old || pidff_needs_set_condition(effect, old))
+- pidff_set_condition_report(pidff, effect);
+- break;
+-
+ case FF_INERTIA:
++ case FF_FRICTION:
+ if (!old) {
++ switch(effect->type) {
++ case FF_SPRING:
++ type_id = PID_SPRING;
++ break;
++ case FF_DAMPER:
++ type_id = PID_DAMPER;
++ break;
++ case FF_INERTIA:
++ type_id = PID_INERTIA;
++ break;
++ case FF_FRICTION:
++ type_id = PID_FRICTION;
++ break;
++ }
+ error = pidff_request_effect_upload(pidff,
+- pidff->type_id[PID_INERTIA]);
++ pidff->type_id[type_id]);
+ if (error)
+ return error;
+ }
+@@ -709,11 +893,7 @@ static int pidff_upload_effect(struct input_dev *dev, struct ff_effect *effect,
+ */
+ static void pidff_set_gain(struct input_dev *dev, u16 gain)
+ {
+- struct pidff_device *pidff = dev->ff->private;
+-
+- pidff_set(&pidff->device_gain[PID_DEVICE_GAIN_FIELD], gain);
+- hid_hw_request(pidff->hid, pidff->reports[PID_DEVICE_GAIN],
+- HID_REQ_SET_REPORT);
++ pidff_set_gain_report(dev->ff->private, gain);
+ }
+
+ static void pidff_autocenter(struct pidff_device *pidff, u16 magnitude)
+@@ -736,7 +916,10 @@ static void pidff_autocenter(struct pidff_device *pidff, u16 magnitude)
+ pidff->set_effect[PID_TRIGGER_REPEAT_INT].value[0] = 0;
+ pidff_set(&pidff->set_effect[PID_GAIN], magnitude);
+ pidff->set_effect[PID_DIRECTION_ENABLE].value[0] = 1;
+- pidff->set_effect[PID_START_DELAY].value[0] = 0;
++
++ /* Omit setting delay field if it's missing */
++ if (!(pidff->quirks & HID_PIDFF_QUIRK_MISSING_DELAY))
++ pidff->set_effect[PID_START_DELAY].value[0] = 0;
+
+ hid_hw_request(pidff->hid, pidff->reports[PID_SET_EFFECT],
+ HID_REQ_SET_REPORT);
+@@ -747,9 +930,7 @@ static void pidff_autocenter(struct pidff_device *pidff, u16 magnitude)
+ */
+ static void pidff_set_autocenter(struct input_dev *dev, u16 magnitude)
+ {
+- struct pidff_device *pidff = dev->ff->private;
+-
+- pidff_autocenter(pidff, magnitude);
++ pidff_autocenter(dev->ff->private, magnitude);
+ }
+
+ /*
+@@ -758,7 +939,13 @@ static void pidff_set_autocenter(struct input_dev *dev, u16 magnitude)
+ static int pidff_find_fields(struct pidff_usage *usage, const u8 *table,
+ struct hid_report *report, int count, int strict)
+ {
++ if (!report) {
++ pr_debug("pidff_find_fields, null report\n");
++ return -1;
++ }
++
+ int i, j, k, found;
++ int return_value = 0;
+
+ for (k = 0; k < count; k++) {
+ found = 0;
+@@ -783,12 +970,22 @@ static int pidff_find_fields(struct pidff_usage *usage, const u8 *table,
+ if (found)
+ break;
+ }
+- if (!found && strict) {
++ if (!found && table[k] == pidff_set_effect[PID_START_DELAY]) {
++ pr_debug("Delay field not found, but that's OK\n");
++ pr_debug("Setting MISSING_DELAY quirk\n");
++ return_value |= HID_PIDFF_QUIRK_MISSING_DELAY;
++ }
++ else if (!found && table[k] == pidff_set_condition[PID_PARAM_BLOCK_OFFSET]) {
++ pr_debug("PBO field not found, but that's OK\n");
++ pr_debug("Setting MISSING_PBO quirk\n");
++ return_value |= HID_PIDFF_QUIRK_MISSING_PBO;
++ }
++ else if (!found && strict) {
+ pr_debug("failed to locate %d\n", k);
+ return -1;
+ }
+ }
+- return 0;
++ return return_value;
+ }
+
+ /*
+@@ -871,6 +1068,11 @@ static int pidff_reports_ok(struct pidff_device *pidff)
+ static struct hid_field *pidff_find_special_field(struct hid_report *report,
+ int usage, int enforce_min)
+ {
++ if (!report) {
++ pr_debug("pidff_find_special_field, null report\n");
++ return NULL;
++ }
++
+ int i;
+
+ for (i = 0; i < report->maxfield; i++) {
+@@ -923,22 +1125,24 @@ static int pidff_find_special_fields(struct pidff_device *pidff)
+
+ pidff->create_new_effect_type =
+ pidff_find_special_field(pidff->reports[PID_CREATE_NEW_EFFECT],
+- 0x25, 1);
++ PID_EFFECT_TYPE, 1);
+ pidff->set_effect_type =
+ pidff_find_special_field(pidff->reports[PID_SET_EFFECT],
+- 0x25, 1);
++ PID_EFFECT_TYPE, 1);
+ pidff->effect_direction =
+ pidff_find_special_field(pidff->reports[PID_SET_EFFECT],
+- 0x57, 0);
++ PID_DIRECTION, 0);
+ pidff->device_control =
+ pidff_find_special_field(pidff->reports[PID_DEVICE_CONTROL],
+- 0x96, 1);
++ PID_DEVICE_CONTROL_ARRAY,
++ !(pidff->quirks & HID_PIDFF_QUIRK_PERMISSIVE_CONTROL));
++
+ pidff->block_load_status =
+ pidff_find_special_field(pidff->reports[PID_BLOCK_LOAD],
+- 0x8b, 1);
++ PID_BLOCK_LOAD_STATUS, 1);
+ pidff->effect_operation_status =
+ pidff_find_special_field(pidff->reports[PID_EFFECT_OPERATION],
+- 0x78, 1);
++ PID_EFFECT_OPERATION_ARRAY, 1);
+
+ hid_dbg(pidff->hid, "search done\n");
+
+@@ -967,10 +1171,6 @@ static int pidff_find_special_fields(struct pidff_device *pidff)
+ return -1;
+ }
+
+- pidff_find_special_keys(pidff->control_id, pidff->device_control,
+- pidff_device_control,
+- sizeof(pidff_device_control));
+-
+ PIDFF_FIND_SPECIAL_KEYS(control_id, device_control, device_control);
+
+ if (!PIDFF_FIND_SPECIAL_KEYS(type_id, create_new_effect_type,
+@@ -1049,7 +1249,6 @@ static int pidff_find_effects(struct pidff_device *pidff,
+ set_bit(FF_FRICTION, dev->ffbit);
+
+ return 0;
+-
+ }
+
+ #define PIDFF_FIND_FIELDS(name, report, strict) \
+@@ -1062,12 +1261,19 @@ static int pidff_find_effects(struct pidff_device *pidff,
+ */
+ static int pidff_init_fields(struct pidff_device *pidff, struct input_dev *dev)
+ {
+- int envelope_ok = 0;
++ int status = 0;
+
+- if (PIDFF_FIND_FIELDS(set_effect, PID_SET_EFFECT, 1)) {
++ /* Save info about the device not having the DELAY ffb field. */
++ status = PIDFF_FIND_FIELDS(set_effect, PID_SET_EFFECT, 1);
++ if (status == -1) {
+ hid_err(pidff->hid, "unknown set_effect report layout\n");
+ return -ENODEV;
+ }
++ pidff->quirks |= status;
++
++ if (status & HID_PIDFF_QUIRK_MISSING_DELAY)
++ hid_dbg(pidff->hid, "Adding MISSING_DELAY quirk\n");
++
+
+ PIDFF_FIND_FIELDS(block_load, PID_BLOCK_LOAD, 0);
+ if (!pidff->block_load[PID_EFFECT_BLOCK_INDEX].value) {
+@@ -1085,13 +1291,10 @@ static int pidff_init_fields(struct pidff_device *pidff, struct input_dev *dev)
+ return -ENODEV;
+ }
+
+- if (!PIDFF_FIND_FIELDS(set_envelope, PID_SET_ENVELOPE, 1))
+- envelope_ok = 1;
+-
+ if (pidff_find_special_fields(pidff) || pidff_find_effects(pidff, dev))
+ return -ENODEV;
+
+- if (!envelope_ok) {
++ if (PIDFF_FIND_FIELDS(set_envelope, PID_SET_ENVELOPE, 1)) {
+ if (test_and_clear_bit(FF_CONSTANT, dev->ffbit))
+ hid_warn(pidff->hid,
+ "has constant effect but no envelope\n");
+@@ -1116,16 +1319,20 @@ static int pidff_init_fields(struct pidff_device *pidff, struct input_dev *dev)
+ clear_bit(FF_RAMP, dev->ffbit);
+ }
+
+- if ((test_bit(FF_SPRING, dev->ffbit) ||
+- test_bit(FF_DAMPER, dev->ffbit) ||
+- test_bit(FF_FRICTION, dev->ffbit) ||
+- test_bit(FF_INERTIA, dev->ffbit)) &&
+- PIDFF_FIND_FIELDS(set_condition, PID_SET_CONDITION, 1)) {
+- hid_warn(pidff->hid, "unknown condition effect layout\n");
+- clear_bit(FF_SPRING, dev->ffbit);
+- clear_bit(FF_DAMPER, dev->ffbit);
+- clear_bit(FF_FRICTION, dev->ffbit);
+- clear_bit(FF_INERTIA, dev->ffbit);
++ if (test_bit(FF_SPRING, dev->ffbit) ||
++ test_bit(FF_DAMPER, dev->ffbit) ||
++ test_bit(FF_FRICTION, dev->ffbit) ||
++ test_bit(FF_INERTIA, dev->ffbit)) {
++ status = PIDFF_FIND_FIELDS(set_condition, PID_SET_CONDITION, 1);
++
++ if (status < 0) {
++ hid_warn(pidff->hid, "unknown condition effect layout\n");
++ clear_bit(FF_SPRING, dev->ffbit);
++ clear_bit(FF_DAMPER, dev->ffbit);
++ clear_bit(FF_FRICTION, dev->ffbit);
++ clear_bit(FF_INERTIA, dev->ffbit);
++ }
++ pidff->quirks |= status;
+ }
+
+ if (test_bit(FF_PERIODIC, dev->ffbit) &&
+@@ -1142,46 +1349,6 @@ static int pidff_init_fields(struct pidff_device *pidff, struct input_dev *dev)
+ return 0;
+ }
+
+-/*
+- * Reset the device
+- */
+-static void pidff_reset(struct pidff_device *pidff)
+-{
+- struct hid_device *hid = pidff->hid;
+- int i = 0;
+-
+- pidff->device_control->value[0] = pidff->control_id[PID_RESET];
+- /* We reset twice as sometimes hid_wait_io isn't waiting long enough */
+- hid_hw_request(hid, pidff->reports[PID_DEVICE_CONTROL], HID_REQ_SET_REPORT);
+- hid_hw_wait(hid);
+- hid_hw_request(hid, pidff->reports[PID_DEVICE_CONTROL], HID_REQ_SET_REPORT);
+- hid_hw_wait(hid);
+-
+- pidff->device_control->value[0] =
+- pidff->control_id[PID_ENABLE_ACTUATORS];
+- hid_hw_request(hid, pidff->reports[PID_DEVICE_CONTROL], HID_REQ_SET_REPORT);
+- hid_hw_wait(hid);
+-
+- /* pool report is sometimes messed up, refetch it */
+- hid_hw_request(hid, pidff->reports[PID_POOL], HID_REQ_GET_REPORT);
+- hid_hw_wait(hid);
+-
+- if (pidff->pool[PID_SIMULTANEOUS_MAX].value) {
+- while (pidff->pool[PID_SIMULTANEOUS_MAX].value[0] < 2) {
+- if (i++ > 20) {
+- hid_warn(pidff->hid,
+- "device reports %d simultaneous effects\n",
+- pidff->pool[PID_SIMULTANEOUS_MAX].value[0]);
+- break;
+- }
+- hid_dbg(pidff->hid, "pid_pool requested again\n");
+- hid_hw_request(hid, pidff->reports[PID_POOL],
+- HID_REQ_GET_REPORT);
+- hid_hw_wait(hid);
+- }
+- }
+-}
+-
+ /*
+ * Test if autocenter modification is using the supported method
+ */
+@@ -1206,24 +1373,23 @@ static int pidff_check_autocenter(struct pidff_device *pidff,
+
+ if (pidff->block_load[PID_EFFECT_BLOCK_INDEX].value[0] ==
+ pidff->block_load[PID_EFFECT_BLOCK_INDEX].field->logical_minimum + 1) {
+- pidff_autocenter(pidff, 0xffff);
++ pidff_autocenter(pidff, U16_MAX);
+ set_bit(FF_AUTOCENTER, dev->ffbit);
+ } else {
+ hid_notice(pidff->hid,
+ "device has unknown autocenter control method\n");
+ }
+-
+ pidff_erase_pid(pidff,
+ pidff->block_load[PID_EFFECT_BLOCK_INDEX].value[0]);
+
+ return 0;
+-
+ }
+
+ /*
+ * Check if the device is PID and initialize it
++ * Set initial quirks
+ */
+-int hid_pidff_init(struct hid_device *hid)
++int hid_pidff_init_with_quirks(struct hid_device *hid, u32 initial_quirks)
+ {
+ struct pidff_device *pidff;
+ struct hid_input *hidinput = list_entry(hid->inputs.next,
+@@ -1245,6 +1411,8 @@ int hid_pidff_init(struct hid_device *hid)
+ return -ENOMEM;
+
+ pidff->hid = hid;
++ pidff->quirks = initial_quirks;
++ pidff->effect_count = 0;
+
+ hid_device_io_start(hid);
+
+@@ -1261,14 +1429,9 @@ int hid_pidff_init(struct hid_device *hid)
+ if (error)
+ goto fail;
+
+- pidff_reset(pidff);
+-
+- if (test_bit(FF_GAIN, dev->ffbit)) {
+- pidff_set(&pidff->device_gain[PID_DEVICE_GAIN_FIELD], 0xffff);
+- hid_hw_request(hid, pidff->reports[PID_DEVICE_GAIN],
+- HID_REQ_SET_REPORT);
+- }
+-
++ /* pool report is sometimes messed up, refetch it */
++ pidff_fetch_pool(pidff);
++ pidff_set_gain_report(pidff, U16_MAX);
+ error = pidff_check_autocenter(pidff, dev);
+ if (error)
+ goto fail;
+@@ -1311,6 +1474,7 @@ int hid_pidff_init(struct hid_device *hid)
+ ff->playback = pidff_playback;
+
+ hid_info(dev, "Force feedback for USB HID PID devices by Anssi Hannula <anssi.hannula@gmail.com>\n");
++ hid_dbg(dev, "Active quirks mask: 0x%x\n", pidff->quirks);
+
+ hid_device_io_stop(hid);
+
+@@ -1322,3 +1486,14 @@ int hid_pidff_init(struct hid_device *hid)
+ kfree(pidff);
+ return error;
+ }
++EXPORT_SYMBOL_GPL(hid_pidff_init_with_quirks);
++
++/*
++ * Check if the device is PID and initialize it
++ * Wrapper kept for compatibility with the old
++ * init function
++ */
++int hid_pidff_init(struct hid_device *hid)
++{
++ return hid_pidff_init_with_quirks(hid, 0);
++}
+diff --git a/drivers/hid/usbhid/hid-pidff.h b/drivers/hid/usbhid/hid-pidff.h
+new file mode 100644
+index 00000000000000..dda571e0a5bd38
+--- /dev/null
++++ b/drivers/hid/usbhid/hid-pidff.h
+@@ -0,0 +1,33 @@
++/* SPDX-License-Identifier: GPL-2.0-or-later */
++#ifndef __HID_PIDFF_H
++#define __HID_PIDFF_H
++
++#include <linux/hid.h>
++
++/* HID PIDFF quirks */
++
++/* Delay field (0xA7) missing. Skip it during set effect report upload */
++#define HID_PIDFF_QUIRK_MISSING_DELAY BIT(0)
++
++/* Missing Parameter block offset (0x23). Skip it during SET_CONDITION
++ report upload */
++#define HID_PIDFF_QUIRK_MISSING_PBO BIT(1)
++
++/* Initialise device control field even if logical_minimum != 1 */
++#define HID_PIDFF_QUIRK_PERMISSIVE_CONTROL BIT(2)
++
++/* Use fixed 0x4000 direction during SET_EFFECT report upload */
++#define HID_PIDFF_QUIRK_FIX_WHEEL_DIRECTION BIT(3)
++
++/* Force all periodic effects to be uploaded as SINE */
++#define HID_PIDFF_QUIRK_PERIODIC_SINE_ONLY BIT(4)
++
++#ifdef CONFIG_HID_PID
++int hid_pidff_init(struct hid_device *hid);
++int hid_pidff_init_with_quirks(struct hid_device *hid, u32 initial_quirks);
++#else
++#define hid_pidff_init NULL
++#define hid_pidff_init_with_quirks NULL
++#endif
++
++#endif
+diff --git a/drivers/hsi/clients/ssi_protocol.c b/drivers/hsi/clients/ssi_protocol.c
+index afe470f3661c77..6105ea9a6c6aa2 100644
+--- a/drivers/hsi/clients/ssi_protocol.c
++++ b/drivers/hsi/clients/ssi_protocol.c
+@@ -401,6 +401,7 @@ static void ssip_reset(struct hsi_client *cl)
+ del_timer(&ssi->rx_wd);
+ del_timer(&ssi->tx_wd);
+ del_timer(&ssi->keep_alive);
++ cancel_work_sync(&ssi->work);
+ ssi->main_state = 0;
+ ssi->send_state = 0;
+ ssi->recv_state = 0;
+diff --git a/drivers/i3c/master.c b/drivers/i3c/master.c
+index d5dc4180afbcfc..c65006aa0684f7 100644
+--- a/drivers/i3c/master.c
++++ b/drivers/i3c/master.c
+@@ -2561,6 +2561,9 @@ static void i3c_master_unregister_i3c_devs(struct i3c_master_controller *master)
+ */
+ void i3c_master_queue_ibi(struct i3c_dev_desc *dev, struct i3c_ibi_slot *slot)
+ {
++ if (!dev->ibi || !slot)
++ return;
++
+ atomic_inc(&dev->ibi->pending_ibis);
+ queue_work(dev->ibi->wq, &slot->work);
+ }
+diff --git a/drivers/i3c/master/svc-i3c-master.c b/drivers/i3c/master/svc-i3c-master.c
+index ecc07c17f4c798..ed7b9d7f688cc6 100644
+--- a/drivers/i3c/master/svc-i3c-master.c
++++ b/drivers/i3c/master/svc-i3c-master.c
+@@ -378,7 +378,7 @@ static int svc_i3c_master_handle_ibi(struct svc_i3c_master *master,
+ slot->len < SVC_I3C_FIFO_SIZE) {
+ mdatactrl = readl(master->regs + SVC_I3C_MDATACTRL);
+ count = SVC_I3C_MDATACTRL_RXCOUNT(mdatactrl);
+- readsl(master->regs + SVC_I3C_MRDATAB, buf, count);
++ readsb(master->regs + SVC_I3C_MRDATAB, buf, count);
+ slot->len += count;
+ buf += count;
+ }
+diff --git a/drivers/idle/Makefile b/drivers/idle/Makefile
+index 0a3c3751007979..a34af1ba09bdba 100644
+--- a/drivers/idle/Makefile
++++ b/drivers/idle/Makefile
+@@ -1,3 +1,6 @@
+ # SPDX-License-Identifier: GPL-2.0-only
+-obj-$(CONFIG_INTEL_IDLE) += intel_idle.o
+
++# Branch profiling isn't noinstr-safe
++ccflags-$(CONFIG_TRACE_BRANCH_PROFILING) += -DDISABLE_BRANCH_PROFILING
++
++obj-$(CONFIG_INTEL_IDLE) += intel_idle.o
+diff --git a/drivers/iommu/arm/arm-smmu-v3/tegra241-cmdqv.c b/drivers/iommu/arm/arm-smmu-v3/tegra241-cmdqv.c
+index d525ab43a4aebf..dd7d030d2e8909 100644
+--- a/drivers/iommu/arm/arm-smmu-v3/tegra241-cmdqv.c
++++ b/drivers/iommu/arm/arm-smmu-v3/tegra241-cmdqv.c
+@@ -487,17 +487,6 @@ static int tegra241_cmdqv_hw_reset(struct arm_smmu_device *smmu)
+
+ /* VCMDQ Resource Helpers */
+
+-static void tegra241_vcmdq_free_smmu_cmdq(struct tegra241_vcmdq *vcmdq)
+-{
+- struct arm_smmu_queue *q = &vcmdq->cmdq.q;
+- size_t nents = 1 << q->llq.max_n_shift;
+- size_t qsz = nents << CMDQ_ENT_SZ_SHIFT;
+-
+- if (!q->base)
+- return;
+- dmam_free_coherent(vcmdq->cmdqv->smmu.dev, qsz, q->base, q->base_dma);
+-}
+-
+ static int tegra241_vcmdq_alloc_smmu_cmdq(struct tegra241_vcmdq *vcmdq)
+ {
+ struct arm_smmu_device *smmu = &vcmdq->cmdqv->smmu;
+@@ -560,7 +549,8 @@ static void tegra241_vintf_free_lvcmdq(struct tegra241_vintf *vintf, u16 lidx)
+ struct tegra241_vcmdq *vcmdq = vintf->lvcmdqs[lidx];
+ char header[64];
+
+- tegra241_vcmdq_free_smmu_cmdq(vcmdq);
++ /* Note that the lvcmdq queue memory space is managed by devres */
++
+ tegra241_vintf_deinit_lvcmdq(vintf, lidx);
+
+ dev_dbg(vintf->cmdqv->dev,
+@@ -768,13 +758,13 @@ static int tegra241_cmdqv_init_structures(struct arm_smmu_device *smmu)
+
+ vintf = kzalloc(sizeof(*vintf), GFP_KERNEL);
+ if (!vintf)
+- goto out_fallback;
++ return -ENOMEM;
+
+ /* Init VINTF0 for in-kernel use */
+ ret = tegra241_cmdqv_init_vintf(cmdqv, 0, vintf);
+ if (ret) {
+ dev_err(cmdqv->dev, "failed to init vintf0: %d\n", ret);
+- goto free_vintf;
++ return ret;
+ }
+
+ /* Preallocate logical VCMDQs to VINTF0 */
+@@ -783,24 +773,12 @@ static int tegra241_cmdqv_init_structures(struct arm_smmu_device *smmu)
+
+ vcmdq = tegra241_vintf_alloc_lvcmdq(vintf, lidx);
+ if (IS_ERR(vcmdq))
+- goto free_lvcmdq;
++ return PTR_ERR(vcmdq);
+ }
+
+ /* Now, we are ready to run all the impl ops */
+ smmu->impl_ops = &tegra241_cmdqv_impl_ops;
+ return 0;
+-
+-free_lvcmdq:
+- for (lidx--; lidx >= 0; lidx--)
+- tegra241_vintf_free_lvcmdq(vintf, lidx);
+- tegra241_cmdqv_deinit_vintf(cmdqv, vintf->idx);
+-free_vintf:
+- kfree(vintf);
+-out_fallback:
+- dev_info(smmu->impl_dev, "Falling back to standard SMMU CMDQ\n");
+- smmu->options &= ~ARM_SMMU_OPT_TEGRA241_CMDQV;
+- tegra241_cmdqv_remove(smmu);
+- return 0;
+ }
+
+ #ifdef CONFIG_IOMMU_DEBUGFS
+diff --git a/drivers/iommu/exynos-iommu.c b/drivers/iommu/exynos-iommu.c
+index 69e23e017d9e5f..317266aca6e28e 100644
+--- a/drivers/iommu/exynos-iommu.c
++++ b/drivers/iommu/exynos-iommu.c
+@@ -832,7 +832,7 @@ static int __maybe_unused exynos_sysmmu_suspend(struct device *dev)
+ struct exynos_iommu_owner *owner = dev_iommu_priv_get(master);
+
+ mutex_lock(&owner->rpm_lock);
+- if (&data->domain->domain != &exynos_identity_domain) {
++ if (data->domain) {
+ dev_dbg(data->sysmmu, "saving state\n");
+ __sysmmu_disable(data);
+ }
+@@ -850,7 +850,7 @@ static int __maybe_unused exynos_sysmmu_resume(struct device *dev)
+ struct exynos_iommu_owner *owner = dev_iommu_priv_get(master);
+
+ mutex_lock(&owner->rpm_lock);
+- if (&data->domain->domain != &exynos_identity_domain) {
++ if (data->domain) {
+ dev_dbg(data->sysmmu, "restoring state\n");
+ __sysmmu_enable(data);
+ }
+diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
+index 25d31f8c129a68..76417bd5e926e0 100644
+--- a/drivers/iommu/intel/iommu.c
++++ b/drivers/iommu/intel/iommu.c
+@@ -3016,6 +3016,7 @@ static int __init probe_acpi_namespace_devices(void)
+ if (dev->bus != &acpi_bus_type)
+ continue;
+
++ up_read(&dmar_global_lock);
+ adev = to_acpi_device(dev);
+ mutex_lock(&adev->physical_node_lock);
+ list_for_each_entry(pn,
+@@ -3025,6 +3026,7 @@ static int __init probe_acpi_namespace_devices(void)
+ break;
+ }
+ mutex_unlock(&adev->physical_node_lock);
++ down_read(&dmar_global_lock);
+
+ if (ret)
+ return ret;
+diff --git a/drivers/iommu/intel/irq_remapping.c b/drivers/iommu/intel/irq_remapping.c
+index ad795c772f21b5..3bc2a03ccecaae 100644
+--- a/drivers/iommu/intel/irq_remapping.c
++++ b/drivers/iommu/intel/irq_remapping.c
+@@ -25,11 +25,6 @@
+ #include "../irq_remapping.h"
+ #include "../iommu-pages.h"
+
+-enum irq_mode {
+- IRQ_REMAPPING,
+- IRQ_POSTING,
+-};
+-
+ struct ioapic_scope {
+ struct intel_iommu *iommu;
+ unsigned int id;
+@@ -49,8 +44,8 @@ struct irq_2_iommu {
+ u16 irte_index;
+ u16 sub_handle;
+ u8 irte_mask;
+- enum irq_mode mode;
+ bool posted_msi;
++ bool posted_vcpu;
+ };
+
+ struct intel_ir_data {
+@@ -138,7 +133,6 @@ static int alloc_irte(struct intel_iommu *iommu,
+ irq_iommu->irte_index = index;
+ irq_iommu->sub_handle = 0;
+ irq_iommu->irte_mask = mask;
+- irq_iommu->mode = IRQ_REMAPPING;
+ }
+ raw_spin_unlock_irqrestore(&irq_2_ir_lock, flags);
+
+@@ -193,8 +187,6 @@ static int modify_irte(struct irq_2_iommu *irq_iommu,
+
+ rc = qi_flush_iec(iommu, index, 0);
+
+- /* Update iommu mode according to the IRTE mode */
+- irq_iommu->mode = irte->pst ? IRQ_POSTING : IRQ_REMAPPING;
+ raw_spin_unlock_irqrestore(&irq_2_ir_lock, flags);
+
+ return rc;
+@@ -1169,7 +1161,26 @@ static void intel_ir_reconfigure_irte_posted(struct irq_data *irqd)
+ static inline void intel_ir_reconfigure_irte_posted(struct irq_data *irqd) {}
+ #endif
+
+-static void intel_ir_reconfigure_irte(struct irq_data *irqd, bool force)
++static void __intel_ir_reconfigure_irte(struct irq_data *irqd, bool force_host)
++{
++ struct intel_ir_data *ir_data = irqd->chip_data;
++
++ /*
++ * Don't modify IRTEs for IRQs that are being posted to vCPUs if the
++ * host CPU affinity changes.
++ */
++ if (ir_data->irq_2_iommu.posted_vcpu && !force_host)
++ return;
++
++ ir_data->irq_2_iommu.posted_vcpu = false;
++
++ if (ir_data->irq_2_iommu.posted_msi)
++ intel_ir_reconfigure_irte_posted(irqd);
++ else
++ modify_irte(&ir_data->irq_2_iommu, &ir_data->irte_entry);
++}
++
++static void intel_ir_reconfigure_irte(struct irq_data *irqd, bool force_host)
+ {
+ struct intel_ir_data *ir_data = irqd->chip_data;
+ struct irte *irte = &ir_data->irte_entry;
+@@ -1182,10 +1193,7 @@ static void intel_ir_reconfigure_irte(struct irq_data *irqd, bool force)
+ irte->vector = cfg->vector;
+ irte->dest_id = IRTE_DEST(cfg->dest_apicid);
+
+- if (ir_data->irq_2_iommu.posted_msi)
+- intel_ir_reconfigure_irte_posted(irqd);
+- else if (force || ir_data->irq_2_iommu.mode == IRQ_REMAPPING)
+- modify_irte(&ir_data->irq_2_iommu, irte);
++ __intel_ir_reconfigure_irte(irqd, force_host);
+ }
+
+ /*
+@@ -1240,7 +1248,7 @@ static int intel_ir_set_vcpu_affinity(struct irq_data *data, void *info)
+
+ /* stop posting interrupts, back to the default mode */
+ if (!vcpu_pi_info) {
+- modify_irte(&ir_data->irq_2_iommu, &ir_data->irte_entry);
++ __intel_ir_reconfigure_irte(data, true);
+ } else {
+ struct irte irte_pi;
+
+@@ -1263,6 +1271,7 @@ static int intel_ir_set_vcpu_affinity(struct irq_data *data, void *info)
+ irte_pi.pda_h = (vcpu_pi_info->pi_desc_addr >> 32) &
+ ~(-1UL << PDA_HIGH_BIT);
+
++ ir_data->irq_2_iommu.posted_vcpu = true;
+ modify_irte(&ir_data->irq_2_iommu, &irte_pi);
+ }
+
+@@ -1278,43 +1287,44 @@ static struct irq_chip intel_ir_chip = {
+ };
+
+ /*
+- * With posted MSIs, all vectors are multiplexed into a single notification
+- * vector. Devices MSIs are then dispatched in a demux loop where
+- * EOIs can be coalesced as well.
++ * With posted MSIs, the MSI vectors are multiplexed into a single notification
++ * vector, and only the notification vector is sent to the APIC IRR. Device
++ * MSIs are then dispatched in a demux loop that harvests the MSIs from the
++ * CPU's Posted Interrupt Request bitmap. I.e. Posted MSIs never get sent to
++ * the APIC IRR, and thus do not need an EOI. The notification handler instead
++ * performs a single EOI after processing the PIR.
+ *
+- * "INTEL-IR-POST" IRQ chip does not do EOI on ACK, thus the dummy irq_ack()
+- * function. Instead EOI is performed by the posted interrupt notification
+- * handler.
++ * Note! Pending SMP/CPU affinity changes, which are per MSI, must still be
++ * honored, only the APIC EOI is omitted.
+ *
+ * For the example below, 3 MSIs are coalesced into one CPU notification. Only
+- * one apic_eoi() is needed.
++ * one apic_eoi() is needed, but each MSI needs to process pending changes to
++ * its CPU affinity.
+ *
+ * __sysvec_posted_msi_notification()
+ * irq_enter();
+ * handle_edge_irq()
+ * irq_chip_ack_parent()
+- * dummy(); // No EOI
++ * irq_move_irq(); // No EOI
+ * handle_irq_event()
+ * driver_handler()
+ * handle_edge_irq()
+ * irq_chip_ack_parent()
+- * dummy(); // No EOI
++ * irq_move_irq(); // No EOI
+ * handle_irq_event()
+ * driver_handler()
+ * handle_edge_irq()
+ * irq_chip_ack_parent()
+- * dummy(); // No EOI
++ * irq_move_irq(); // No EOI
+ * handle_irq_event()
+ * driver_handler()
+ * apic_eoi()
+ * irq_exit()
++ *
+ */
+-
+-static void dummy_ack(struct irq_data *d) { }
+-
+ static struct irq_chip intel_ir_chip_post_msi = {
+ .name = "INTEL-IR-POST",
+- .irq_ack = dummy_ack,
++ .irq_ack = irq_move_irq,
+ .irq_set_affinity = intel_ir_set_affinity,
+ .irq_compose_msi_msg = intel_ir_compose_msi_msg,
+ .irq_set_vcpu_affinity = intel_ir_set_vcpu_affinity,
+@@ -1489,6 +1499,9 @@ static void intel_irq_remapping_deactivate(struct irq_domain *domain,
+ struct intel_ir_data *data = irq_data->chip_data;
+ struct irte entry;
+
++ WARN_ON_ONCE(data->irq_2_iommu.posted_vcpu);
++ data->irq_2_iommu.posted_vcpu = false;
++
+ memset(&entry, 0, sizeof(entry));
+ modify_irte(&data->irq_2_iommu, &entry);
+ }
+diff --git a/drivers/iommu/iommufd/device.c b/drivers/iommu/iommufd/device.c
+index dfd0898fb6c157..3c7800d4ab622d 100644
+--- a/drivers/iommu/iommufd/device.c
++++ b/drivers/iommu/iommufd/device.c
+@@ -352,6 +352,122 @@ iommufd_device_attach_reserved_iova(struct iommufd_device *idev,
+ return 0;
+ }
+
++/* The device attach/detach/replace helpers for attach_handle */
++
++/* Check if idev is attached to igroup->hwpt */
++static bool iommufd_device_is_attached(struct iommufd_device *idev)
++{
++ struct iommufd_device *cur;
++
++ list_for_each_entry(cur, &idev->igroup->device_list, group_item)
++ if (cur == idev)
++ return true;
++ return false;
++}
++
++static int iommufd_hwpt_attach_device(struct iommufd_hw_pagetable *hwpt,
++ struct iommufd_device *idev)
++{
++ struct iommufd_attach_handle *handle;
++ int rc;
++
++ lockdep_assert_held(&idev->igroup->lock);
++
++ handle = kzalloc(sizeof(*handle), GFP_KERNEL);
++ if (!handle)
++ return -ENOMEM;
++
++ if (hwpt->fault) {
++ rc = iommufd_fault_iopf_enable(idev);
++ if (rc)
++ goto out_free_handle;
++ }
++
++ handle->idev = idev;
++ rc = iommu_attach_group_handle(hwpt->domain, idev->igroup->group,
++ &handle->handle);
++ if (rc)
++ goto out_disable_iopf;
++
++ return 0;
++
++out_disable_iopf:
++ if (hwpt->fault)
++ iommufd_fault_iopf_disable(idev);
++out_free_handle:
++ kfree(handle);
++ return rc;
++}
++
++static struct iommufd_attach_handle *
++iommufd_device_get_attach_handle(struct iommufd_device *idev)
++{
++ struct iommu_attach_handle *handle;
++
++ lockdep_assert_held(&idev->igroup->lock);
++
++ handle =
++ iommu_attach_handle_get(idev->igroup->group, IOMMU_NO_PASID, 0);
++ if (IS_ERR(handle))
++ return NULL;
++ return to_iommufd_handle(handle);
++}
++
++static void iommufd_hwpt_detach_device(struct iommufd_hw_pagetable *hwpt,
++ struct iommufd_device *idev)
++{
++ struct iommufd_attach_handle *handle;
++
++ handle = iommufd_device_get_attach_handle(idev);
++ iommu_detach_group_handle(hwpt->domain, idev->igroup->group);
++ if (hwpt->fault) {
++ iommufd_auto_response_faults(hwpt, handle);
++ iommufd_fault_iopf_disable(idev);
++ }
++ kfree(handle);
++}
++
++static int iommufd_hwpt_replace_device(struct iommufd_device *idev,
++ struct iommufd_hw_pagetable *hwpt,
++ struct iommufd_hw_pagetable *old)
++{
++ struct iommufd_attach_handle *handle, *old_handle =
++ iommufd_device_get_attach_handle(idev);
++ int rc;
++
++ handle = kzalloc(sizeof(*handle), GFP_KERNEL);
++ if (!handle)
++ return -ENOMEM;
++
++ if (hwpt->fault && !old->fault) {
++ rc = iommufd_fault_iopf_enable(idev);
++ if (rc)
++ goto out_free_handle;
++ }
++
++ handle->idev = idev;
++ rc = iommu_replace_group_handle(idev->igroup->group, hwpt->domain,
++ &handle->handle);
++ if (rc)
++ goto out_disable_iopf;
++
++ if (old->fault) {
++ iommufd_auto_response_faults(hwpt, old_handle);
++ if (!hwpt->fault)
++ iommufd_fault_iopf_disable(idev);
++ }
++ kfree(old_handle);
++
++ return 0;
++
++out_disable_iopf:
++ if (hwpt->fault && !old->fault)
++ iommufd_fault_iopf_disable(idev);
++out_free_handle:
++ kfree(handle);
++ return rc;
++}
++
+ int iommufd_hw_pagetable_attach(struct iommufd_hw_pagetable *hwpt,
+ struct iommufd_device *idev)
+ {
+@@ -488,6 +604,11 @@ iommufd_device_do_replace(struct iommufd_device *idev,
+ goto err_unlock;
+ }
+
++ if (!iommufd_device_is_attached(idev)) {
++ rc = -EINVAL;
++ goto err_unlock;
++ }
++
+ if (hwpt == igroup->hwpt) {
+ mutex_unlock(&idev->igroup->lock);
+ return NULL;
+@@ -1127,7 +1248,7 @@ int iommufd_access_rw(struct iommufd_access *access, unsigned long iova,
+ struct io_pagetable *iopt;
+ struct iopt_area *area;
+ unsigned long last_iova;
+- int rc;
++ int rc = -EINVAL;
+
+ if (!length)
+ return -EINVAL;
+diff --git a/drivers/iommu/iommufd/fault.c b/drivers/iommu/iommufd/fault.c
+index d9a937450e5526..cb844e6799d4f8 100644
+--- a/drivers/iommu/iommufd/fault.c
++++ b/drivers/iommu/iommufd/fault.c
+@@ -17,7 +17,7 @@
+ #include "../iommu-priv.h"
+ #include "iommufd_private.h"
+
+-static int iommufd_fault_iopf_enable(struct iommufd_device *idev)
++int iommufd_fault_iopf_enable(struct iommufd_device *idev)
+ {
+ struct device *dev = idev->dev;
+ int ret;
+@@ -50,7 +50,7 @@ static int iommufd_fault_iopf_enable(struct iommufd_device *idev)
+ return ret;
+ }
+
+-static void iommufd_fault_iopf_disable(struct iommufd_device *idev)
++void iommufd_fault_iopf_disable(struct iommufd_device *idev)
+ {
+ mutex_lock(&idev->iopf_lock);
+ if (!WARN_ON(idev->iopf_enabled == 0)) {
+@@ -98,8 +98,8 @@ int iommufd_fault_domain_attach_dev(struct iommufd_hw_pagetable *hwpt,
+ return ret;
+ }
+
+-static void iommufd_auto_response_faults(struct iommufd_hw_pagetable *hwpt,
+- struct iommufd_attach_handle *handle)
++void iommufd_auto_response_faults(struct iommufd_hw_pagetable *hwpt,
++ struct iommufd_attach_handle *handle)
+ {
+ struct iommufd_fault *fault = hwpt->fault;
+ struct iopf_group *group, *next;
+diff --git a/drivers/iommu/iommufd/iommufd_private.h b/drivers/iommu/iommufd/iommufd_private.h
+index 0b1bafc7fd9940..02fe1ada97cc79 100644
+--- a/drivers/iommu/iommufd/iommufd_private.h
++++ b/drivers/iommu/iommufd/iommufd_private.h
+@@ -504,35 +504,10 @@ int iommufd_fault_domain_replace_dev(struct iommufd_device *idev,
+ struct iommufd_hw_pagetable *hwpt,
+ struct iommufd_hw_pagetable *old);
+
+-static inline int iommufd_hwpt_attach_device(struct iommufd_hw_pagetable *hwpt,
+- struct iommufd_device *idev)
+-{
+- if (hwpt->fault)
+- return iommufd_fault_domain_attach_dev(hwpt, idev);
+-
+- return iommu_attach_group(hwpt->domain, idev->igroup->group);
+-}
+-
+-static inline void iommufd_hwpt_detach_device(struct iommufd_hw_pagetable *hwpt,
+- struct iommufd_device *idev)
+-{
+- if (hwpt->fault) {
+- iommufd_fault_domain_detach_dev(hwpt, idev);
+- return;
+- }
+-
+- iommu_detach_group(hwpt->domain, idev->igroup->group);
+-}
+-
+-static inline int iommufd_hwpt_replace_device(struct iommufd_device *idev,
+- struct iommufd_hw_pagetable *hwpt,
+- struct iommufd_hw_pagetable *old)
+-{
+- if (old->fault || hwpt->fault)
+- return iommufd_fault_domain_replace_dev(idev, hwpt, old);
+-
+- return iommu_group_replace_domain(idev->igroup->group, hwpt->domain);
+-}
++int iommufd_fault_iopf_enable(struct iommufd_device *idev);
++void iommufd_fault_iopf_disable(struct iommufd_device *idev);
++void iommufd_auto_response_faults(struct iommufd_hw_pagetable *hwpt,
++ struct iommufd_attach_handle *handle);
+
+ static inline struct iommufd_viommu *
+ iommufd_get_viommu(struct iommufd_ucmd *ucmd, u32 id)
+diff --git a/drivers/iommu/mtk_iommu.c b/drivers/iommu/mtk_iommu.c
+index 034b0e670384a2..df98d0c65f5469 100644
+--- a/drivers/iommu/mtk_iommu.c
++++ b/drivers/iommu/mtk_iommu.c
+@@ -1372,15 +1372,6 @@ static int mtk_iommu_probe(struct platform_device *pdev)
+ platform_set_drvdata(pdev, data);
+ mutex_init(&data->mutex);
+
+- ret = iommu_device_sysfs_add(&data->iommu, dev, NULL,
+- "mtk-iommu.%pa", &ioaddr);
+- if (ret)
+- goto out_link_remove;
+-
+- ret = iommu_device_register(&data->iommu, &mtk_iommu_ops, dev);
+- if (ret)
+- goto out_sysfs_remove;
+-
+ if (MTK_IOMMU_HAS_FLAG(data->plat_data, SHARE_PGTABLE)) {
+ list_add_tail(&data->list, data->plat_data->hw_list);
+ data->hw_list = data->plat_data->hw_list;
+@@ -1390,19 +1381,28 @@ static int mtk_iommu_probe(struct platform_device *pdev)
+ data->hw_list = &data->hw_list_head;
+ }
+
++ ret = iommu_device_sysfs_add(&data->iommu, dev, NULL,
++ "mtk-iommu.%pa", &ioaddr);
++ if (ret)
++ goto out_list_del;
++
++ ret = iommu_device_register(&data->iommu, &mtk_iommu_ops, dev);
++ if (ret)
++ goto out_sysfs_remove;
++
+ if (MTK_IOMMU_IS_TYPE(data->plat_data, MTK_IOMMU_TYPE_MM)) {
+ ret = component_master_add_with_match(dev, &mtk_iommu_com_ops, match);
+ if (ret)
+- goto out_list_del;
++ goto out_device_unregister;
+ }
+ return ret;
+
+-out_list_del:
+- list_del(&data->list);
++out_device_unregister:
+ iommu_device_unregister(&data->iommu);
+ out_sysfs_remove:
+ iommu_device_sysfs_remove(&data->iommu);
+-out_link_remove:
++out_list_del:
++ list_del(&data->list);
+ if (MTK_IOMMU_IS_TYPE(data->plat_data, MTK_IOMMU_TYPE_MM))
+ device_link_remove(data->smicomm_dev, dev);
+ out_runtime_disable:
+diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
+index 8c3ec5734f1ef4..f30ed281882ff8 100644
+--- a/drivers/irqchip/irq-gic-v3-its.c
++++ b/drivers/irqchip/irq-gic-v3-its.c
+@@ -205,13 +205,15 @@ static DEFINE_IDA(its_vpeid_ida);
+ #define gic_data_rdist_rd_base() (gic_data_rdist()->rd_base)
+ #define gic_data_rdist_vlpi_base() (gic_data_rdist_rd_base() + SZ_128K)
+
++static gfp_t gfp_flags_quirk;
++
+ static struct page *its_alloc_pages_node(int node, gfp_t gfp,
+ unsigned int order)
+ {
+ struct page *page;
+ int ret = 0;
+
+- page = alloc_pages_node(node, gfp, order);
++ page = alloc_pages_node(node, gfp | gfp_flags_quirk, order);
+
+ if (!page)
+ return NULL;
+@@ -4887,6 +4889,17 @@ static bool __maybe_unused its_enable_quirk_hip09_162100801(void *data)
+ return true;
+ }
+
++static bool __maybe_unused its_enable_rk3568002(void *data)
++{
++ if (!of_machine_is_compatible("rockchip,rk3566") &&
++ !of_machine_is_compatible("rockchip,rk3568"))
++ return false;
++
++ gfp_flags_quirk |= GFP_DMA32;
++
++ return true;
++}
++
+ static const struct gic_quirk its_quirks[] = {
+ #ifdef CONFIG_CAVIUM_ERRATUM_22375
+ {
+@@ -4954,6 +4967,14 @@ static const struct gic_quirk its_quirks[] = {
+ .property = "dma-noncoherent",
+ .init = its_set_non_coherent,
+ },
++#ifdef CONFIG_ROCKCHIP_ERRATUM_3568002
++ {
++ .desc = "ITS: Rockchip erratum RK3568002",
++ .iidr = 0x0201743b,
++ .mask = 0xffffffff,
++ .init = its_enable_rk3568002,
++ },
++#endif
+ {
+ }
+ };
+diff --git a/drivers/irqchip/irq-renesas-rzv2h.c b/drivers/irqchip/irq-renesas-rzv2h.c
+index fe2d29e910261b..f6363246a71a0b 100644
+--- a/drivers/irqchip/irq-renesas-rzv2h.c
++++ b/drivers/irqchip/irq-renesas-rzv2h.c
+@@ -301,10 +301,10 @@ static int rzv2h_tint_set_type(struct irq_data *d, unsigned int type)
+
+ tssr_k = ICU_TSSR_K(tint_nr);
+ tssel_n = ICU_TSSR_TSSEL_N(tint_nr);
++ tien = ICU_TSSR_TIEN(tssel_n);
+
+ titsr_k = ICU_TITSR_K(tint_nr);
+ titsel_n = ICU_TITSR_TITSEL_N(tint_nr);
+- tien = ICU_TSSR_TIEN(titsel_n);
+
+ guard(raw_spinlock)(&priv->lock);
+
+diff --git a/drivers/leds/rgb/leds-qcom-lpg.c b/drivers/leds/rgb/leds-qcom-lpg.c
+index f3c9ef2bfa572f..5d8e27e2e7ae71 100644
+--- a/drivers/leds/rgb/leds-qcom-lpg.c
++++ b/drivers/leds/rgb/leds-qcom-lpg.c
+@@ -461,7 +461,7 @@ static int lpg_calc_freq(struct lpg_channel *chan, uint64_t period)
+ max_res = LPG_RESOLUTION_9BIT;
+ }
+
+- min_period = div64_u64((u64)NSEC_PER_SEC * (1 << pwm_resolution_arr[0]),
++ min_period = div64_u64((u64)NSEC_PER_SEC * ((1 << pwm_resolution_arr[0]) - 1),
+ clk_rate_arr[clk_len - 1]);
+ if (period <= min_period)
+ return -EINVAL;
+@@ -482,7 +482,7 @@ static int lpg_calc_freq(struct lpg_channel *chan, uint64_t period)
+ */
+
+ for (i = 0; i < pwm_resolution_count; i++) {
+- resolution = 1 << pwm_resolution_arr[i];
++ resolution = (1 << pwm_resolution_arr[i]) - 1;
+ for (clk_sel = 1; clk_sel < clk_len; clk_sel++) {
+ u64 numerator = period * clk_rate_arr[clk_sel];
+
+@@ -529,7 +529,7 @@ static void lpg_calc_duty(struct lpg_channel *chan, uint64_t duty)
+ unsigned int clk_rate;
+
+ if (chan->subtype == LPG_SUBTYPE_HI_RES_PWM) {
+- max = LPG_RESOLUTION_15BIT - 1;
++ max = BIT(lpg_pwm_resolution_hi_res[chan->pwm_resolution_sel]) - 1;
+ clk_rate = lpg_clk_rates_hi_res[chan->clk_sel];
+ } else {
+ max = LPG_RESOLUTION_9BIT - 1;
+@@ -1291,7 +1291,7 @@ static int lpg_pwm_get_state(struct pwm_chip *chip, struct pwm_device *pwm,
+ if (ret)
+ return ret;
+
+- state->period = DIV_ROUND_UP_ULL((u64)NSEC_PER_SEC * (1 << resolution) *
++ state->period = DIV_ROUND_UP_ULL((u64)NSEC_PER_SEC * ((1 << resolution) - 1) *
+ pre_div * (1 << m), refclk);
+ state->duty_cycle = DIV_ROUND_UP_ULL((u64)NSEC_PER_SEC * pwm_value * pre_div * (1 << m), refclk);
+ } else {
+diff --git a/drivers/mailbox/tegra-hsp.c b/drivers/mailbox/tegra-hsp.c
+index c1981f091bd1bb..ed9a0bb2bcd844 100644
+--- a/drivers/mailbox/tegra-hsp.c
++++ b/drivers/mailbox/tegra-hsp.c
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0-only
+ /*
+- * Copyright (c) 2016-2023, NVIDIA CORPORATION. All rights reserved.
++ * Copyright (c) 2016-2025, NVIDIA CORPORATION. All rights reserved.
+ */
+
+ #include <linux/delay.h>
+@@ -28,12 +28,6 @@
+ #define HSP_INT_FULL_MASK 0xff
+
+ #define HSP_INT_DIMENSIONING 0x380
+-#define HSP_nSM_SHIFT 0
+-#define HSP_nSS_SHIFT 4
+-#define HSP_nAS_SHIFT 8
+-#define HSP_nDB_SHIFT 12
+-#define HSP_nSI_SHIFT 16
+-#define HSP_nINT_MASK 0xf
+
+ #define HSP_DB_TRIGGER 0x0
+ #define HSP_DB_ENABLE 0x4
+@@ -97,6 +91,20 @@ struct tegra_hsp_soc {
+ bool has_per_mb_ie;
+ bool has_128_bit_mb;
+ unsigned int reg_stride;
++
++ /* Shifts for dimensioning register. */
++ unsigned int si_shift;
++ unsigned int db_shift;
++ unsigned int as_shift;
++ unsigned int ss_shift;
++ unsigned int sm_shift;
++
++ /* Masks for dimensioning register. */
++ unsigned int si_mask;
++ unsigned int db_mask;
++ unsigned int as_mask;
++ unsigned int ss_mask;
++ unsigned int sm_mask;
+ };
+
+ struct tegra_hsp {
+@@ -747,11 +755,11 @@ static int tegra_hsp_probe(struct platform_device *pdev)
+ return PTR_ERR(hsp->regs);
+
+ value = tegra_hsp_readl(hsp, HSP_INT_DIMENSIONING);
+- hsp->num_sm = (value >> HSP_nSM_SHIFT) & HSP_nINT_MASK;
+- hsp->num_ss = (value >> HSP_nSS_SHIFT) & HSP_nINT_MASK;
+- hsp->num_as = (value >> HSP_nAS_SHIFT) & HSP_nINT_MASK;
+- hsp->num_db = (value >> HSP_nDB_SHIFT) & HSP_nINT_MASK;
+- hsp->num_si = (value >> HSP_nSI_SHIFT) & HSP_nINT_MASK;
++ hsp->num_sm = (value >> hsp->soc->sm_shift) & hsp->soc->sm_mask;
++ hsp->num_ss = (value >> hsp->soc->ss_shift) & hsp->soc->ss_mask;
++ hsp->num_as = (value >> hsp->soc->as_shift) & hsp->soc->as_mask;
++ hsp->num_db = (value >> hsp->soc->db_shift) & hsp->soc->db_mask;
++ hsp->num_si = (value >> hsp->soc->si_shift) & hsp->soc->si_mask;
+
+ err = platform_get_irq_byname_optional(pdev, "doorbell");
+ if (err >= 0)
+@@ -915,6 +923,16 @@ static const struct tegra_hsp_soc tegra186_hsp_soc = {
+ .has_per_mb_ie = false,
+ .has_128_bit_mb = false,
+ .reg_stride = 0x100,
++ .si_shift = 16,
++ .db_shift = 12,
++ .as_shift = 8,
++ .ss_shift = 4,
++ .sm_shift = 0,
++ .si_mask = 0xf,
++ .db_mask = 0xf,
++ .as_mask = 0xf,
++ .ss_mask = 0xf,
++ .sm_mask = 0xf,
+ };
+
+ static const struct tegra_hsp_soc tegra194_hsp_soc = {
+@@ -922,6 +940,16 @@ static const struct tegra_hsp_soc tegra194_hsp_soc = {
+ .has_per_mb_ie = true,
+ .has_128_bit_mb = false,
+ .reg_stride = 0x100,
++ .si_shift = 16,
++ .db_shift = 12,
++ .as_shift = 8,
++ .ss_shift = 4,
++ .sm_shift = 0,
++ .si_mask = 0xf,
++ .db_mask = 0xf,
++ .as_mask = 0xf,
++ .ss_mask = 0xf,
++ .sm_mask = 0xf,
+ };
+
+ static const struct tegra_hsp_soc tegra234_hsp_soc = {
+@@ -929,6 +957,16 @@ static const struct tegra_hsp_soc tegra234_hsp_soc = {
+ .has_per_mb_ie = false,
+ .has_128_bit_mb = true,
+ .reg_stride = 0x100,
++ .si_shift = 16,
++ .db_shift = 12,
++ .as_shift = 8,
++ .ss_shift = 4,
++ .sm_shift = 0,
++ .si_mask = 0xf,
++ .db_mask = 0xf,
++ .as_mask = 0xf,
++ .ss_mask = 0xf,
++ .sm_mask = 0xf,
+ };
+
+ static const struct tegra_hsp_soc tegra264_hsp_soc = {
+@@ -936,6 +974,16 @@ static const struct tegra_hsp_soc tegra264_hsp_soc = {
+ .has_per_mb_ie = false,
+ .has_128_bit_mb = true,
+ .reg_stride = 0x1000,
++ .si_shift = 17,
++ .db_shift = 12,
++ .as_shift = 8,
++ .ss_shift = 4,
++ .sm_shift = 0,
++ .si_mask = 0x1f,
++ .db_mask = 0x1f,
++ .as_mask = 0xf,
++ .ss_mask = 0xf,
++ .sm_mask = 0xf,
+ };
+
+ static const struct of_device_id tegra_hsp_match[] = {
+diff --git a/drivers/md/dm-ebs-target.c b/drivers/md/dm-ebs-target.c
+index 18ae45dcbfb28b..b19b0142a690a3 100644
+--- a/drivers/md/dm-ebs-target.c
++++ b/drivers/md/dm-ebs-target.c
+@@ -390,6 +390,12 @@ static int ebs_map(struct dm_target *ti, struct bio *bio)
+ return DM_MAPIO_REMAPPED;
+ }
+
++static void ebs_postsuspend(struct dm_target *ti)
++{
++ struct ebs_c *ec = ti->private;
++ dm_bufio_client_reset(ec->bufio);
++}
++
+ static void ebs_status(struct dm_target *ti, status_type_t type,
+ unsigned int status_flags, char *result, unsigned int maxlen)
+ {
+@@ -447,6 +453,7 @@ static struct target_type ebs_target = {
+ .ctr = ebs_ctr,
+ .dtr = ebs_dtr,
+ .map = ebs_map,
++ .postsuspend = ebs_postsuspend,
+ .status = ebs_status,
+ .io_hints = ebs_io_hints,
+ .prepare_ioctl = ebs_prepare_ioctl,
+diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
+index c45464b6576aaf..65ab609ac0cb3e 100644
+--- a/drivers/md/dm-integrity.c
++++ b/drivers/md/dm-integrity.c
+@@ -21,6 +21,7 @@
+ #include <linux/reboot.h>
+ #include <crypto/hash.h>
+ #include <crypto/skcipher.h>
++#include <crypto/utils.h>
+ #include <linux/async_tx.h>
+ #include <linux/dm-bufio.h>
+
+@@ -516,7 +517,7 @@ static int sb_mac(struct dm_integrity_c *ic, bool wr)
+ dm_integrity_io_error(ic, "crypto_shash_digest", r);
+ return r;
+ }
+- if (memcmp(mac, actual_mac, mac_size)) {
++ if (crypto_memneq(mac, actual_mac, mac_size)) {
+ dm_integrity_io_error(ic, "superblock mac", -EILSEQ);
+ dm_audit_log_target(DM_MSG_PREFIX, "mac-superblock", ic->ti, 0);
+ return -EILSEQ;
+@@ -859,7 +860,7 @@ static void rw_section_mac(struct dm_integrity_c *ic, unsigned int section, bool
+ if (likely(wr))
+ memcpy(&js->mac, result + (j * JOURNAL_MAC_PER_SECTOR), JOURNAL_MAC_PER_SECTOR);
+ else {
+- if (memcmp(&js->mac, result + (j * JOURNAL_MAC_PER_SECTOR), JOURNAL_MAC_PER_SECTOR)) {
++ if (crypto_memneq(&js->mac, result + (j * JOURNAL_MAC_PER_SECTOR), JOURNAL_MAC_PER_SECTOR)) {
+ dm_integrity_io_error(ic, "journal mac", -EILSEQ);
+ dm_audit_log_target(DM_MSG_PREFIX, "mac-journal", ic->ti, 0);
+ }
+@@ -1401,10 +1402,9 @@ static bool find_newer_committed_node(struct dm_integrity_c *ic, struct journal_
+ static int dm_integrity_rw_tag(struct dm_integrity_c *ic, unsigned char *tag, sector_t *metadata_block,
+ unsigned int *metadata_offset, unsigned int total_size, int op)
+ {
+-#define MAY_BE_FILLER 1
+-#define MAY_BE_HASH 2
+ unsigned int hash_offset = 0;
+- unsigned int may_be = MAY_BE_HASH | (ic->discard ? MAY_BE_FILLER : 0);
++ unsigned char mismatch_hash = 0;
++ unsigned char mismatch_filler = !ic->discard;
+
+ do {
+ unsigned char *data, *dp;
+@@ -1425,7 +1425,7 @@ static int dm_integrity_rw_tag(struct dm_integrity_c *ic, unsigned char *tag, se
+ if (op == TAG_READ) {
+ memcpy(tag, dp, to_copy);
+ } else if (op == TAG_WRITE) {
+- if (memcmp(dp, tag, to_copy)) {
++ if (crypto_memneq(dp, tag, to_copy)) {
+ memcpy(dp, tag, to_copy);
+ dm_bufio_mark_partial_buffer_dirty(b, *metadata_offset, *metadata_offset + to_copy);
+ }
+@@ -1433,29 +1433,30 @@ static int dm_integrity_rw_tag(struct dm_integrity_c *ic, unsigned char *tag, se
+ /* e.g.: op == TAG_CMP */
+
+ if (likely(is_power_of_2(ic->tag_size))) {
+- if (unlikely(memcmp(dp, tag, to_copy)))
+- if (unlikely(!ic->discard) ||
+- unlikely(memchr_inv(dp, DISCARD_FILLER, to_copy) != NULL)) {
+- goto thorough_test;
+- }
++ if (unlikely(crypto_memneq(dp, tag, to_copy)))
++ goto thorough_test;
+ } else {
+ unsigned int i, ts;
+ thorough_test:
+ ts = total_size;
+
+ for (i = 0; i < to_copy; i++, ts--) {
+- if (unlikely(dp[i] != tag[i]))
+- may_be &= ~MAY_BE_HASH;
+- if (likely(dp[i] != DISCARD_FILLER))
+- may_be &= ~MAY_BE_FILLER;
++ /*
++ * Warning: the control flow must not be
++ * dependent on match/mismatch of
++ * individual bytes.
++ */
++ mismatch_hash |= dp[i] ^ tag[i];
++ mismatch_filler |= dp[i] ^ DISCARD_FILLER;
+ hash_offset++;
+ if (unlikely(hash_offset == ic->tag_size)) {
+- if (unlikely(!may_be)) {
++ if (unlikely(mismatch_hash) && unlikely(mismatch_filler)) {
+ dm_bufio_release(b);
+ return ts;
+ }
+ hash_offset = 0;
+- may_be = MAY_BE_HASH | (ic->discard ? MAY_BE_FILLER : 0);
++ mismatch_hash = 0;
++ mismatch_filler = !ic->discard;
+ }
+ }
+ }
+@@ -1476,8 +1477,6 @@ static int dm_integrity_rw_tag(struct dm_integrity_c *ic, unsigned char *tag, se
+ } while (unlikely(total_size));
+
+ return 0;
+-#undef MAY_BE_FILLER
+-#undef MAY_BE_HASH
+ }
+
+ struct flush_request {
+@@ -2076,7 +2075,7 @@ static bool __journal_read_write(struct dm_integrity_io *dio, struct bio *bio,
+ char checksums_onstack[MAX_T(size_t, HASH_MAX_DIGESTSIZE, MAX_TAG_SIZE)];
+
+ integrity_sector_checksum(ic, logical_sector, mem + bv.bv_offset, checksums_onstack);
+- if (unlikely(memcmp(checksums_onstack, journal_entry_tag(ic, je), ic->tag_size))) {
++ if (unlikely(crypto_memneq(checksums_onstack, journal_entry_tag(ic, je), ic->tag_size))) {
+ DMERR_LIMIT("Checksum failed when reading from journal, at sector 0x%llx",
+ logical_sector);
+ dm_audit_log_bio(DM_MSG_PREFIX, "journal-checksum",
+@@ -2595,7 +2594,7 @@ static void dm_integrity_inline_recheck(struct work_struct *w)
+ bio_put(outgoing_bio);
+
+ integrity_sector_checksum(ic, dio->bio_details.bi_iter.bi_sector, outgoing_data, digest);
+- if (unlikely(memcmp(digest, dio->integrity_payload, min(crypto_shash_digestsize(ic->internal_hash), ic->tag_size)))) {
++ if (unlikely(crypto_memneq(digest, dio->integrity_payload, min(crypto_shash_digestsize(ic->internal_hash), ic->tag_size)))) {
+ DMERR_LIMIT("%pg: Checksum failed at sector 0x%llx",
+ ic->dev->bdev, dio->bio_details.bi_iter.bi_sector);
+ atomic64_inc(&ic->number_of_mismatches);
+@@ -2634,7 +2633,7 @@ static int dm_integrity_end_io(struct dm_target *ti, struct bio *bio, blk_status
+ char *mem = bvec_kmap_local(&bv);
+ //memset(mem, 0xff, ic->sectors_per_block << SECTOR_SHIFT);
+ integrity_sector_checksum(ic, dio->bio_details.bi_iter.bi_sector, mem, digest);
+- if (unlikely(memcmp(digest, dio->integrity_payload + pos,
++ if (unlikely(crypto_memneq(digest, dio->integrity_payload + pos,
+ min(crypto_shash_digestsize(ic->internal_hash), ic->tag_size)))) {
+ kunmap_local(mem);
+ dm_integrity_free_payload(dio);
+@@ -2911,7 +2910,7 @@ static void do_journal_write(struct dm_integrity_c *ic, unsigned int write_start
+
+ integrity_sector_checksum(ic, sec + ((l - j) << ic->sb->log2_sectors_per_block),
+ (char *)access_journal_data(ic, i, l), test_tag);
+- if (unlikely(memcmp(test_tag, journal_entry_tag(ic, je2), ic->tag_size))) {
++ if (unlikely(crypto_memneq(test_tag, journal_entry_tag(ic, je2), ic->tag_size))) {
+ dm_integrity_io_error(ic, "tag mismatch when replaying journal", -EILSEQ);
+ dm_audit_log_target(DM_MSG_PREFIX, "integrity-replay-journal", ic->ti, 0);
+ }
+@@ -5084,16 +5083,19 @@ static int dm_integrity_ctr(struct dm_target *ti, unsigned int argc, char **argv
+
+ ic->recalc_bitmap = dm_integrity_alloc_page_list(n_bitmap_pages);
+ if (!ic->recalc_bitmap) {
++ ti->error = "Could not allocate memory for bitmap";
+ r = -ENOMEM;
+ goto bad;
+ }
+ ic->may_write_bitmap = dm_integrity_alloc_page_list(n_bitmap_pages);
+ if (!ic->may_write_bitmap) {
++ ti->error = "Could not allocate memory for bitmap";
+ r = -ENOMEM;
+ goto bad;
+ }
+ ic->bbs = kvmalloc_array(ic->n_bitmap_blocks, sizeof(struct bitmap_block_status), GFP_KERNEL);
+ if (!ic->bbs) {
++ ti->error = "Could not allocate memory for bitmap";
+ r = -ENOMEM;
+ goto bad;
+ }
+diff --git a/drivers/md/dm-verity-target.c b/drivers/md/dm-verity-target.c
+index e86c1431b108f8..24b167f71c5f09 100644
+--- a/drivers/md/dm-verity-target.c
++++ b/drivers/md/dm-verity-target.c
+@@ -796,6 +796,13 @@ static int verity_map(struct dm_target *ti, struct bio *bio)
+ return DM_MAPIO_SUBMITTED;
+ }
+
++static void verity_postsuspend(struct dm_target *ti)
++{
++ struct dm_verity *v = ti->private;
++ flush_workqueue(v->verify_wq);
++ dm_bufio_client_reset(v->bufio);
++}
++
+ /*
+ * Status: V (valid) or C (corruption found)
+ */
+@@ -1766,6 +1773,7 @@ static struct target_type verity_target = {
+ .ctr = verity_ctr,
+ .dtr = verity_dtr,
+ .map = verity_map,
++ .postsuspend = verity_postsuspend,
+ .status = verity_status,
+ .prepare_ioctl = verity_prepare_ioctl,
+ .iterate_devices = verity_iterate_devices,
+diff --git a/drivers/media/common/siano/smsdvb-main.c b/drivers/media/common/siano/smsdvb-main.c
+index 44d8fe8b220e79..9b1a650ed055c9 100644
+--- a/drivers/media/common/siano/smsdvb-main.c
++++ b/drivers/media/common/siano/smsdvb-main.c
+@@ -1243,6 +1243,8 @@ static int __init smsdvb_module_init(void)
+ smsdvb_debugfs_register();
+
+ rc = smscore_register_hotplug(smsdvb_hotplug);
++ if (rc)
++ smsdvb_debugfs_unregister();
+
+ pr_debug("\n");
+
+diff --git a/drivers/media/i2c/adv748x/adv748x.h b/drivers/media/i2c/adv748x/adv748x.h
+index 9bc0121d0eff39..2c1db5968af8e7 100644
+--- a/drivers/media/i2c/adv748x/adv748x.h
++++ b/drivers/media/i2c/adv748x/adv748x.h
+@@ -320,7 +320,7 @@ struct adv748x_state {
+
+ /* Free run pattern select */
+ #define ADV748X_SDP_FRP 0x14
+-#define ADV748X_SDP_FRP_MASK GENMASK(3, 1)
++#define ADV748X_SDP_FRP_MASK GENMASK(2, 0)
+
+ /* Saturation */
+ #define ADV748X_SDP_SD_SAT_U 0xe3 /* user_map_rw_reg_e3 */
+diff --git a/drivers/media/i2c/ccs/ccs-core.c b/drivers/media/i2c/ccs/ccs-core.c
+index 2cdab2f3d9dc3d..004d28c3328757 100644
+--- a/drivers/media/i2c/ccs/ccs-core.c
++++ b/drivers/media/i2c/ccs/ccs-core.c
+@@ -3566,6 +3566,7 @@ static int ccs_probe(struct i2c_client *client)
+ out_disable_runtime_pm:
+ pm_runtime_put_noidle(&client->dev);
+ pm_runtime_disable(&client->dev);
++ pm_runtime_set_suspended(&client->dev);
+
+ out_cleanup:
+ ccs_cleanup(sensor);
+@@ -3595,9 +3596,10 @@ static void ccs_remove(struct i2c_client *client)
+ v4l2_async_unregister_subdev(subdev);
+
+ pm_runtime_disable(&client->dev);
+- if (!pm_runtime_status_suspended(&client->dev))
++ if (!pm_runtime_status_suspended(&client->dev)) {
+ ccs_power_off(&client->dev);
+- pm_runtime_set_suspended(&client->dev);
++ pm_runtime_set_suspended(&client->dev);
++ }
+
+ for (i = 0; i < sensor->ssds_used; i++)
+ v4l2_device_unregister_subdev(&sensor->ssds[i].sd);
+diff --git a/drivers/media/i2c/hi556.c b/drivers/media/i2c/hi556.c
+index 3ac42d1ab8b437..c28eca2f86f601 100644
+--- a/drivers/media/i2c/hi556.c
++++ b/drivers/media/i2c/hi556.c
+@@ -1230,12 +1230,13 @@ static int hi556_check_hwcfg(struct device *dev)
+ ret = fwnode_property_read_u32(fwnode, "clock-frequency", &mclk);
+ if (ret) {
+ dev_err(dev, "can't get clock frequency");
+- return ret;
++ goto check_hwcfg_error;
+ }
+
+ if (mclk != HI556_MCLK) {
+ dev_err(dev, "external clock %d is not supported", mclk);
+- return -EINVAL;
++ ret = -EINVAL;
++ goto check_hwcfg_error;
+ }
+
+ if (bus_cfg.bus.mipi_csi2.num_data_lanes != 2) {
+diff --git a/drivers/media/i2c/imx214.c b/drivers/media/i2c/imx214.c
+index 4962cfe7c83d62..6a393e18267f42 100644
+--- a/drivers/media/i2c/imx214.c
++++ b/drivers/media/i2c/imx214.c
+@@ -1075,10 +1075,6 @@ static int imx214_probe(struct i2c_client *client)
+ */
+ imx214_power_on(imx214->dev);
+
+- pm_runtime_set_active(imx214->dev);
+- pm_runtime_enable(imx214->dev);
+- pm_runtime_idle(imx214->dev);
+-
+ ret = imx214_ctrls_init(imx214);
+ if (ret < 0)
+ goto error_power_off;
+@@ -1099,22 +1095,30 @@ static int imx214_probe(struct i2c_client *client)
+
+ imx214_entity_init_state(&imx214->sd, NULL);
+
++ pm_runtime_set_active(imx214->dev);
++ pm_runtime_enable(imx214->dev);
++
+ ret = v4l2_async_register_subdev_sensor(&imx214->sd);
+ if (ret < 0) {
+ dev_err(dev, "could not register v4l2 device\n");
+ goto free_entity;
+ }
+
++ pm_runtime_idle(imx214->dev);
++
+ return 0;
+
+ free_entity:
++ pm_runtime_disable(imx214->dev);
++ pm_runtime_set_suspended(&client->dev);
+ media_entity_cleanup(&imx214->sd.entity);
++
+ free_ctrl:
+ mutex_destroy(&imx214->mutex);
+ v4l2_ctrl_handler_free(&imx214->ctrls);
++
+ error_power_off:
+- pm_runtime_disable(imx214->dev);
+- regulator_bulk_disable(IMX214_NUM_SUPPLIES, imx214->supplies);
++ imx214_power_off(imx214->dev);
+
+ return ret;
+ }
+@@ -1127,11 +1131,12 @@ static void imx214_remove(struct i2c_client *client)
+ v4l2_async_unregister_subdev(&imx214->sd);
+ media_entity_cleanup(&imx214->sd.entity);
+ v4l2_ctrl_handler_free(&imx214->ctrls);
+-
+- pm_runtime_disable(&client->dev);
+- pm_runtime_set_suspended(&client->dev);
+-
+ mutex_destroy(&imx214->mutex);
++ pm_runtime_disable(&client->dev);
++ if (!pm_runtime_status_suspended(&client->dev)) {
++ imx214_power_off(imx214->dev);
++ pm_runtime_set_suspended(&client->dev);
++ }
+ }
+
+ static const struct of_device_id imx214_of_match[] = {
+diff --git a/drivers/media/i2c/imx219.c b/drivers/media/i2c/imx219.c
+index 2d54cea113e19f..64227eb423d431 100644
+--- a/drivers/media/i2c/imx219.c
++++ b/drivers/media/i2c/imx219.c
+@@ -133,10 +133,11 @@
+
+ /* Pixel rate is fixed for all the modes */
+ #define IMX219_PIXEL_RATE 182400000
+-#define IMX219_PIXEL_RATE_4LANE 280800000
++#define IMX219_PIXEL_RATE_4LANE 281600000
+
+ #define IMX219_DEFAULT_LINK_FREQ 456000000
+-#define IMX219_DEFAULT_LINK_FREQ_4LANE 363000000
++#define IMX219_DEFAULT_LINK_FREQ_4LANE_UNSUPPORTED 363000000
++#define IMX219_DEFAULT_LINK_FREQ_4LANE 364000000
+
+ /* IMX219 native and active pixel array size. */
+ #define IMX219_NATIVE_WIDTH 3296U
+@@ -168,15 +169,6 @@ static const struct cci_reg_sequence imx219_common_regs[] = {
+ { CCI_REG8(0x30eb), 0x05 },
+ { CCI_REG8(0x30eb), 0x09 },
+
+- /* PLL Clock Table */
+- { IMX219_REG_VTPXCK_DIV, 5 },
+- { IMX219_REG_VTSYCK_DIV, 1 },
+- { IMX219_REG_PREPLLCK_VT_DIV, 3 }, /* 0x03 = AUTO set */
+- { IMX219_REG_PREPLLCK_OP_DIV, 3 }, /* 0x03 = AUTO set */
+- { IMX219_REG_PLL_VT_MPY, 57 },
+- { IMX219_REG_OPSYCK_DIV, 1 },
+- { IMX219_REG_PLL_OP_MPY, 114 },
+-
+ /* Undocumented registers */
+ { CCI_REG8(0x455e), 0x00 },
+ { CCI_REG8(0x471e), 0x4b },
+@@ -201,12 +193,45 @@ static const struct cci_reg_sequence imx219_common_regs[] = {
+ { IMX219_REG_EXCK_FREQ, IMX219_EXCK_FREQ(IMX219_XCLK_FREQ / 1000000) },
+ };
+
++static const struct cci_reg_sequence imx219_2lane_regs[] = {
++ /* PLL Clock Table */
++ { IMX219_REG_VTPXCK_DIV, 5 },
++ { IMX219_REG_VTSYCK_DIV, 1 },
++ { IMX219_REG_PREPLLCK_VT_DIV, 3 }, /* 0x03 = AUTO set */
++ { IMX219_REG_PREPLLCK_OP_DIV, 3 }, /* 0x03 = AUTO set */
++ { IMX219_REG_PLL_VT_MPY, 57 },
++ { IMX219_REG_OPSYCK_DIV, 1 },
++ { IMX219_REG_PLL_OP_MPY, 114 },
++
++ /* 2-Lane CSI Mode */
++ { IMX219_REG_CSI_LANE_MODE, IMX219_CSI_2_LANE_MODE },
++};
++
++static const struct cci_reg_sequence imx219_4lane_regs[] = {
++ /* PLL Clock Table */
++ { IMX219_REG_VTPXCK_DIV, 5 },
++ { IMX219_REG_VTSYCK_DIV, 1 },
++ { IMX219_REG_PREPLLCK_VT_DIV, 3 }, /* 0x03 = AUTO set */
++ { IMX219_REG_PREPLLCK_OP_DIV, 3 }, /* 0x03 = AUTO set */
++ { IMX219_REG_PLL_VT_MPY, 88 },
++ { IMX219_REG_OPSYCK_DIV, 1 },
++ { IMX219_REG_PLL_OP_MPY, 91 },
++
++ /* 4-Lane CSI Mode */
++ { IMX219_REG_CSI_LANE_MODE, IMX219_CSI_4_LANE_MODE },
++};
++
+ static const s64 imx219_link_freq_menu[] = {
+ IMX219_DEFAULT_LINK_FREQ,
+ };
+
+ static const s64 imx219_link_freq_4lane_menu[] = {
+ IMX219_DEFAULT_LINK_FREQ_4LANE,
++ /*
++ * This will never be advertised to userspace, but will be used for
++ * v4l2_link_freq_to_bitmap
++ */
++ IMX219_DEFAULT_LINK_FREQ_4LANE_UNSUPPORTED,
+ };
+
+ static const char * const imx219_test_pattern_menu[] = {
+@@ -662,9 +687,11 @@ static int imx219_set_framefmt(struct imx219 *imx219,
+
+ static int imx219_configure_lanes(struct imx219 *imx219)
+ {
+- return cci_write(imx219->regmap, IMX219_REG_CSI_LANE_MODE,
+- imx219->lanes == 2 ? IMX219_CSI_2_LANE_MODE :
+- IMX219_CSI_4_LANE_MODE, NULL);
++ /* Write the appropriate PLL settings for the number of MIPI lanes */
++ return cci_multi_reg_write(imx219->regmap,
++ imx219->lanes == 2 ? imx219_2lane_regs : imx219_4lane_regs,
++ imx219->lanes == 2 ? ARRAY_SIZE(imx219_2lane_regs) :
++ ARRAY_SIZE(imx219_4lane_regs), NULL);
+ };
+
+ static int imx219_start_streaming(struct imx219 *imx219,
+@@ -1035,6 +1062,7 @@ static int imx219_check_hwcfg(struct device *dev, struct imx219 *imx219)
+ struct v4l2_fwnode_endpoint ep_cfg = {
+ .bus_type = V4L2_MBUS_CSI2_DPHY
+ };
++ unsigned long link_freq_bitmap;
+ int ret = -EINVAL;
+
+ endpoint = fwnode_graph_get_next_endpoint(dev_fwnode(dev), NULL);
+@@ -1056,23 +1084,40 @@ static int imx219_check_hwcfg(struct device *dev, struct imx219 *imx219)
+ imx219->lanes = ep_cfg.bus.mipi_csi2.num_data_lanes;
+
+ /* Check the link frequency set in device tree */
+- if (!ep_cfg.nr_of_link_frequencies) {
+- dev_err_probe(dev, -EINVAL,
+- "link-frequency property not found in DT\n");
+- goto error_out;
++ switch (imx219->lanes) {
++ case 2:
++ ret = v4l2_link_freq_to_bitmap(dev,
++ ep_cfg.link_frequencies,
++ ep_cfg.nr_of_link_frequencies,
++ imx219_link_freq_menu,
++ ARRAY_SIZE(imx219_link_freq_menu),
++ &link_freq_bitmap);
++ break;
++ case 4:
++ ret = v4l2_link_freq_to_bitmap(dev,
++ ep_cfg.link_frequencies,
++ ep_cfg.nr_of_link_frequencies,
++ imx219_link_freq_4lane_menu,
++ ARRAY_SIZE(imx219_link_freq_4lane_menu),
++ &link_freq_bitmap);
++
++ if (!ret && (link_freq_bitmap & BIT(1))) {
++ dev_warn(dev, "Link frequency of %d not supported, but has been incorrectly advertised previously\n",
++ IMX219_DEFAULT_LINK_FREQ_4LANE_UNSUPPORTED);
++ dev_warn(dev, "Using link frequency of %d\n",
++ IMX219_DEFAULT_LINK_FREQ_4LANE);
++ link_freq_bitmap |= BIT(0);
++ }
++ break;
+ }
+
+- if (ep_cfg.nr_of_link_frequencies != 1 ||
+- (ep_cfg.link_frequencies[0] != ((imx219->lanes == 2) ?
+- IMX219_DEFAULT_LINK_FREQ : IMX219_DEFAULT_LINK_FREQ_4LANE))) {
++ if (ret || !(link_freq_bitmap & BIT(0))) {
++ ret = -EINVAL;
+ dev_err_probe(dev, -EINVAL,
+ "Link frequency not supported: %lld\n",
+ ep_cfg.link_frequencies[0]);
+- goto error_out;
+ }
+
+- ret = 0;
+-
+ error_out:
+ v4l2_fwnode_endpoint_free(&ep_cfg);
+ fwnode_handle_put(endpoint);
+@@ -1178,6 +1223,9 @@ static int imx219_probe(struct i2c_client *client)
+ goto error_media_entity;
+ }
+
++ pm_runtime_set_active(dev);
++ pm_runtime_enable(dev);
++
+ ret = v4l2_async_register_subdev_sensor(&imx219->sd);
+ if (ret < 0) {
+ dev_err_probe(dev, ret,
+@@ -1185,15 +1233,14 @@ static int imx219_probe(struct i2c_client *client)
+ goto error_subdev_cleanup;
+ }
+
+- /* Enable runtime PM and turn off the device */
+- pm_runtime_set_active(dev);
+- pm_runtime_enable(dev);
+ pm_runtime_idle(dev);
+
+ return 0;
+
+ error_subdev_cleanup:
+ v4l2_subdev_cleanup(&imx219->sd);
++ pm_runtime_disable(dev);
++ pm_runtime_set_suspended(dev);
+
+ error_media_entity:
+ media_entity_cleanup(&imx219->sd.entity);
+@@ -1218,9 +1265,10 @@ static void imx219_remove(struct i2c_client *client)
+ imx219_free_controls(imx219);
+
+ pm_runtime_disable(&client->dev);
+- if (!pm_runtime_status_suspended(&client->dev))
++ if (!pm_runtime_status_suspended(&client->dev)) {
+ imx219_power_off(&client->dev);
+- pm_runtime_set_suspended(&client->dev);
++ pm_runtime_set_suspended(&client->dev);
++ }
+ }
+
+ static const struct of_device_id imx219_dt_ids[] = {
+diff --git a/drivers/media/i2c/imx319.c b/drivers/media/i2c/imx319.c
+index dd1b4ff983dcb1..701840f4a5cc00 100644
+--- a/drivers/media/i2c/imx319.c
++++ b/drivers/media/i2c/imx319.c
+@@ -2442,17 +2442,19 @@ static int imx319_probe(struct i2c_client *client)
+ if (full_power)
+ pm_runtime_set_active(&client->dev);
+ pm_runtime_enable(&client->dev);
+- pm_runtime_idle(&client->dev);
+
+ ret = v4l2_async_register_subdev_sensor(&imx319->sd);
+ if (ret < 0)
+ goto error_media_entity_pm;
+
++ pm_runtime_idle(&client->dev);
++
+ return 0;
+
+ error_media_entity_pm:
+ pm_runtime_disable(&client->dev);
+- pm_runtime_set_suspended(&client->dev);
++ if (full_power)
++ pm_runtime_set_suspended(&client->dev);
+ media_entity_cleanup(&imx319->sd.entity);
+
+ error_handler_free:
+@@ -2474,7 +2476,8 @@ static void imx319_remove(struct i2c_client *client)
+ v4l2_ctrl_handler_free(sd->ctrl_handler);
+
+ pm_runtime_disable(&client->dev);
+- pm_runtime_set_suspended(&client->dev);
++ if (!pm_runtime_status_suspended(&client->dev))
++ pm_runtime_set_suspended(&client->dev);
+
+ mutex_destroy(&imx319->mutex);
+ }
+diff --git a/drivers/media/i2c/ov08x40.c b/drivers/media/i2c/ov08x40.c
+index b9682264e2f53d..83b49cf114acc7 100644
+--- a/drivers/media/i2c/ov08x40.c
++++ b/drivers/media/i2c/ov08x40.c
+@@ -2324,11 +2324,14 @@ static void ov08x40_remove(struct i2c_client *client)
+ ov08x40_free_controls(ov08x);
+
+ pm_runtime_disable(&client->dev);
++ if (!pm_runtime_status_suspended(&client->dev))
++ ov08x40_power_off(&client->dev);
+ pm_runtime_set_suspended(&client->dev);
+-
+- ov08x40_power_off(&client->dev);
+ }
+
++static DEFINE_RUNTIME_DEV_PM_OPS(ov08x40_pm_ops, ov08x40_power_off,
++ ov08x40_power_on, NULL);
++
+ #ifdef CONFIG_ACPI
+ static const struct acpi_device_id ov08x40_acpi_ids[] = {
+ {"OVTI08F4"},
+@@ -2349,6 +2352,7 @@ static struct i2c_driver ov08x40_i2c_driver = {
+ .name = "ov08x40",
+ .acpi_match_table = ACPI_PTR(ov08x40_acpi_ids),
+ .of_match_table = ov08x40_of_match,
++ .pm = pm_sleep_ptr(&ov08x40_pm_ops),
+ },
+ .probe = ov08x40_probe,
+ .remove = ov08x40_remove,
+diff --git a/drivers/media/i2c/ov7251.c b/drivers/media/i2c/ov7251.c
+index 30f61e04ecaf51..3226888d77e9c7 100644
+--- a/drivers/media/i2c/ov7251.c
++++ b/drivers/media/i2c/ov7251.c
+@@ -922,6 +922,8 @@ static int ov7251_set_power_on(struct device *dev)
+ return ret;
+ }
+
++ usleep_range(1000, 1100);
++
+ gpiod_set_value_cansleep(ov7251->enable_gpio, 1);
+
+ /* wait at least 65536 external clock cycles */
+@@ -1696,7 +1698,7 @@ static int ov7251_probe(struct i2c_client *client)
+ return PTR_ERR(ov7251->analog_regulator);
+ }
+
+- ov7251->enable_gpio = devm_gpiod_get(dev, "enable", GPIOD_OUT_HIGH);
++ ov7251->enable_gpio = devm_gpiod_get(dev, "enable", GPIOD_OUT_LOW);
+ if (IS_ERR(ov7251->enable_gpio)) {
+ dev_err(dev, "cannot get enable gpio\n");
+ return PTR_ERR(ov7251->enable_gpio);
+diff --git a/drivers/media/pci/intel/ipu6/ipu6-isys-video.c b/drivers/media/pci/intel/ipu6/ipu6-isys-video.c
+index 387963529adb56..959869a885564a 100644
+--- a/drivers/media/pci/intel/ipu6/ipu6-isys-video.c
++++ b/drivers/media/pci/intel/ipu6/ipu6-isys-video.c
+@@ -1296,6 +1296,7 @@ int ipu6_isys_video_init(struct ipu6_isys_video *av)
+ av->vdev.release = video_device_release_empty;
+ av->vdev.fops = &isys_fops;
+ av->vdev.v4l2_dev = &av->isys->v4l2_dev;
++ av->vdev.dev_parent = &av->isys->adev->isp->pdev->dev;
+ if (!av->vdev.ioctl_ops)
+ av->vdev.ioctl_ops = &ipu6_v4l2_ioctl_ops;
+ av->vdev.queue = &av->aq.vbq;
+diff --git a/drivers/media/pci/mgb4/mgb4_cmt.c b/drivers/media/pci/mgb4/mgb4_cmt.c
+index a25b68403bc608..c22ef51436ed5d 100644
+--- a/drivers/media/pci/mgb4/mgb4_cmt.c
++++ b/drivers/media/pci/mgb4/mgb4_cmt.c
+@@ -135,8 +135,8 @@ static const u16 cmt_vals_out[][15] = {
+ };
+
+ static const u16 cmt_vals_in[][13] = {
+- {0x1082, 0x0000, 0x5104, 0x0000, 0x11C7, 0x0000, 0x1041, 0x02BC, 0x7C01, 0xFFE9, 0x9900, 0x9908, 0x8100},
+ {0x1104, 0x0000, 0x9208, 0x0000, 0x138E, 0x0000, 0x1041, 0x015E, 0x7C01, 0xFFE9, 0x0100, 0x0908, 0x1000},
++ {0x1082, 0x0000, 0x5104, 0x0000, 0x11C7, 0x0000, 0x1041, 0x02BC, 0x7C01, 0xFFE9, 0x9900, 0x9908, 0x8100},
+ };
+
+ static const u32 cmt_addrs_out[][15] = {
+@@ -206,10 +206,11 @@ u32 mgb4_cmt_set_vout_freq(struct mgb4_vout_dev *voutdev, unsigned int freq)
+
+ mgb4_write_reg(video, regs->config, 0x1 | (config & ~0x3));
+
++ mgb4_mask_reg(video, regs->config, 0x100, 0x100);
++
+ for (i = 0; i < ARRAY_SIZE(cmt_addrs_out[0]); i++)
+ mgb4_write_reg(&voutdev->mgbdev->cmt, addr[i], reg_set[i]);
+
+- mgb4_mask_reg(video, regs->config, 0x100, 0x100);
+ mgb4_mask_reg(video, regs->config, 0x100, 0x0);
+
+ mgb4_write_reg(video, regs->config, config & ~0x1);
+@@ -236,10 +237,11 @@ void mgb4_cmt_set_vin_freq_range(struct mgb4_vin_dev *vindev,
+
+ mgb4_write_reg(video, regs->config, 0x1 | (config & ~0x3));
+
++ mgb4_mask_reg(video, regs->config, 0x1000, 0x1000);
++
+ for (i = 0; i < ARRAY_SIZE(cmt_addrs_in[0]); i++)
+ mgb4_write_reg(&vindev->mgbdev->cmt, addr[i], reg_set[i]);
+
+- mgb4_mask_reg(video, regs->config, 0x1000, 0x1000);
+ mgb4_mask_reg(video, regs->config, 0x1000, 0x0);
+
+ mgb4_write_reg(video, regs->config, config & ~0x1);
+diff --git a/drivers/media/platform/chips-media/wave5/wave5-hw.c b/drivers/media/platform/chips-media/wave5/wave5-hw.c
+index c8a90599410980..d94cf84c3ee5fc 100644
+--- a/drivers/media/platform/chips-media/wave5/wave5-hw.c
++++ b/drivers/media/platform/chips-media/wave5/wave5-hw.c
+@@ -585,7 +585,7 @@ int wave5_vpu_build_up_dec_param(struct vpu_instance *inst,
+ vpu_write_reg(inst->dev, W5_CMD_NUM_CQ_DEPTH_M1,
+ WAVE521_COMMAND_QUEUE_DEPTH - 1);
+ }
+-
++ vpu_write_reg(inst->dev, W5_CMD_ERR_CONCEAL, 0);
+ ret = send_firmware_command(inst, W5_CREATE_INSTANCE, true, NULL, NULL);
+ if (ret) {
+ wave5_vdi_free_dma_memory(vpu_dev, &p_dec_info->vb_work);
+diff --git a/drivers/media/platform/chips-media/wave5/wave5-vpu-dec.c b/drivers/media/platform/chips-media/wave5/wave5-vpu-dec.c
+index d3ff420c52ce1c..fd71f0c43ac37a 100644
+--- a/drivers/media/platform/chips-media/wave5/wave5-vpu-dec.c
++++ b/drivers/media/platform/chips-media/wave5/wave5-vpu-dec.c
+@@ -1345,10 +1345,24 @@ static int wave5_vpu_dec_start_streaming(struct vb2_queue *q, unsigned int count
+ if (ret)
+ goto free_bitstream_vbuf;
+ } else if (q->type == V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE) {
++ struct dec_initial_info *initial_info =
++ &inst->codec_info->dec_info.initial_info;
++
+ if (inst->state == VPU_INST_STATE_STOP)
+ ret = switch_state(inst, VPU_INST_STATE_INIT_SEQ);
+ if (ret)
+ goto return_buffers;
++
++ if (inst->state == VPU_INST_STATE_INIT_SEQ &&
++ inst->dev->product_code == WAVE521C_CODE) {
++ if (initial_info->luma_bitdepth != 8) {
++ dev_info(inst->dev->dev, "%s: no support for %d bit depth",
++ __func__, initial_info->luma_bitdepth);
++ ret = -EINVAL;
++ goto return_buffers;
++ }
++ }
++
+ }
+ pm_runtime_mark_last_busy(inst->dev->dev);
+ pm_runtime_put_autosuspend(inst->dev->dev);
+@@ -1369,6 +1383,16 @@ static int streamoff_output(struct vb2_queue *q)
+ struct vb2_v4l2_buffer *buf;
+ int ret;
+ dma_addr_t new_rd_ptr;
++ struct dec_output_info dec_info;
++ unsigned int i;
++
++ for (i = 0; i < v4l2_m2m_num_dst_bufs_ready(m2m_ctx); i++) {
++ ret = wave5_vpu_dec_set_disp_flag(inst, i);
++ if (ret)
++ dev_dbg(inst->dev->dev,
++ "%s: Setting display flag of buf index: %u, fail: %d\n",
++ __func__, i, ret);
++ }
+
+ while ((buf = v4l2_m2m_src_buf_remove(m2m_ctx))) {
+ dev_dbg(inst->dev->dev, "%s: (Multiplanar) buf type %4u | index %4u\n",
+@@ -1376,6 +1400,11 @@ static int streamoff_output(struct vb2_queue *q)
+ v4l2_m2m_buf_done(buf, VB2_BUF_STATE_ERROR);
+ }
+
++ while (wave5_vpu_dec_get_output_info(inst, &dec_info) == 0) {
++ if (dec_info.index_frame_display >= 0)
++ wave5_vpu_dec_set_disp_flag(inst, dec_info.index_frame_display);
++ }
++
+ ret = wave5_vpu_flush_instance(inst);
+ if (ret)
+ return ret;
+@@ -1459,7 +1488,7 @@ static void wave5_vpu_dec_stop_streaming(struct vb2_queue *q)
+ break;
+
+ if (wave5_vpu_dec_get_output_info(inst, &dec_output_info))
+- dev_dbg(inst->dev->dev, "Getting decoding results from fw, fail\n");
++ dev_dbg(inst->dev->dev, "there is no output info\n");
+ }
+
+ v4l2_m2m_update_stop_streaming_state(m2m_ctx, q);
+diff --git a/drivers/media/platform/chips-media/wave5/wave5-vpu.c b/drivers/media/platform/chips-media/wave5/wave5-vpu.c
+index d1320298a0f767..5948a18958d11e 100644
+--- a/drivers/media/platform/chips-media/wave5/wave5-vpu.c
++++ b/drivers/media/platform/chips-media/wave5/wave5-vpu.c
+@@ -55,12 +55,12 @@ static void wave5_vpu_handle_irq(void *dev_id)
+ struct vpu_device *dev = dev_id;
+
+ irq_reason = wave5_vdi_read_register(dev, W5_VPU_VINT_REASON);
++ seq_done = wave5_vdi_read_register(dev, W5_RET_SEQ_DONE_INSTANCE_INFO);
++ cmd_done = wave5_vdi_read_register(dev, W5_RET_QUEUE_CMD_DONE_INST);
+ wave5_vdi_write_register(dev, W5_VPU_VINT_REASON_CLR, irq_reason);
+ wave5_vdi_write_register(dev, W5_VPU_VINT_CLEAR, 0x1);
+
+ list_for_each_entry(inst, &dev->instances, list) {
+- seq_done = wave5_vdi_read_register(dev, W5_RET_SEQ_DONE_INSTANCE_INFO);
+- cmd_done = wave5_vdi_read_register(dev, W5_RET_QUEUE_CMD_DONE_INST);
+
+ if (irq_reason & BIT(INT_WAVE5_INIT_SEQ) ||
+ irq_reason & BIT(INT_WAVE5_ENC_SET_PARAM)) {
+diff --git a/drivers/media/platform/chips-media/wave5/wave5-vpuapi.c b/drivers/media/platform/chips-media/wave5/wave5-vpuapi.c
+index e16b990041c2e4..e5e879a13e8b89 100644
+--- a/drivers/media/platform/chips-media/wave5/wave5-vpuapi.c
++++ b/drivers/media/platform/chips-media/wave5/wave5-vpuapi.c
+@@ -75,6 +75,16 @@ int wave5_vpu_flush_instance(struct vpu_instance *inst)
+ inst->type == VPU_INST_TYPE_DEC ? "DECODER" : "ENCODER", inst->id);
+ mutex_unlock(&inst->dev->hw_lock);
+ return -ETIMEDOUT;
++ } else if (ret == -EBUSY) {
++ struct dec_output_info dec_info;
++
++ mutex_unlock(&inst->dev->hw_lock);
++ wave5_vpu_dec_get_output_info(inst, &dec_info);
++ ret = mutex_lock_interruptible(&inst->dev->hw_lock);
++ if (ret)
++ return ret;
++ if (dec_info.index_frame_display > 0)
++ wave5_vpu_dec_set_disp_flag(inst, dec_info.index_frame_display);
+ }
+ } while (ret != 0);
+ mutex_unlock(&inst->dev->hw_lock);
+diff --git a/drivers/media/platform/mediatek/vcodec/common/mtk_vcodec_fw_scp.c b/drivers/media/platform/mediatek/vcodec/common/mtk_vcodec_fw_scp.c
+index ff23b225db705a..1b0bc47355c05f 100644
+--- a/drivers/media/platform/mediatek/vcodec/common/mtk_vcodec_fw_scp.c
++++ b/drivers/media/platform/mediatek/vcodec/common/mtk_vcodec_fw_scp.c
+@@ -79,8 +79,11 @@ struct mtk_vcodec_fw *mtk_vcodec_fw_scp_init(void *priv, enum mtk_vcodec_fw_use
+ }
+
+ fw = devm_kzalloc(&plat_dev->dev, sizeof(*fw), GFP_KERNEL);
+- if (!fw)
++ if (!fw) {
++ scp_put(scp);
+ return ERR_PTR(-ENOMEM);
++ }
++
+ fw->type = SCP;
+ fw->ops = &mtk_vcodec_rproc_msg;
+ fw->scp = scp;
+diff --git a/drivers/media/platform/mediatek/vcodec/encoder/venc/venc_h264_if.c b/drivers/media/platform/mediatek/vcodec/encoder/venc/venc_h264_if.c
+index f8145998fcaf78..8522f71fc901d5 100644
+--- a/drivers/media/platform/mediatek/vcodec/encoder/venc/venc_h264_if.c
++++ b/drivers/media/platform/mediatek/vcodec/encoder/venc/venc_h264_if.c
+@@ -594,7 +594,11 @@ static int h264_enc_init(struct mtk_vcodec_enc_ctx *ctx)
+
+ inst->ctx = ctx;
+ inst->vpu_inst.ctx = ctx;
+- inst->vpu_inst.id = is_ext ? SCP_IPI_VENC_H264 : IPI_VENC_H264;
++ if (is_ext)
++ inst->vpu_inst.id = SCP_IPI_VENC_H264;
++ else
++ inst->vpu_inst.id = IPI_VENC_H264;
++
+ inst->hw_base = mtk_vcodec_get_reg_addr(inst->ctx->dev->reg_base, VENC_SYS);
+
+ ret = vpu_enc_init(&inst->vpu_inst);
+diff --git a/drivers/media/platform/nuvoton/npcm-video.c b/drivers/media/platform/nuvoton/npcm-video.c
+index 024cd8ee17098d..7a9d8928ae4019 100644
+--- a/drivers/media/platform/nuvoton/npcm-video.c
++++ b/drivers/media/platform/nuvoton/npcm-video.c
+@@ -1648,8 +1648,8 @@ static int npcm_video_setup_video(struct npcm_video *video)
+
+ static int npcm_video_ece_init(struct npcm_video *video)
+ {
++ struct device_node *ece_node __free(device_node) = NULL;
+ struct device *dev = video->dev;
+- struct device_node *ece_node;
+ struct platform_device *ece_pdev;
+ void __iomem *regs;
+
+@@ -1669,7 +1669,7 @@ static int npcm_video_ece_init(struct npcm_video *video)
+ dev_err(dev, "Failed to find ECE device\n");
+ return -ENODEV;
+ }
+- of_node_put(ece_node);
++ struct device *ece_dev __free(put_device) = &ece_pdev->dev;
+
+ regs = devm_platform_ioremap_resource(ece_pdev, 0);
+ if (IS_ERR(regs)) {
+@@ -1684,7 +1684,7 @@ static int npcm_video_ece_init(struct npcm_video *video)
+ return PTR_ERR(video->ece.regmap);
+ }
+
+- video->ece.reset = devm_reset_control_get(&ece_pdev->dev, NULL);
++ video->ece.reset = devm_reset_control_get(ece_dev, NULL);
+ if (IS_ERR(video->ece.reset)) {
+ dev_err(dev, "Failed to get ECE reset control in DTS\n");
+ return PTR_ERR(video->ece.reset);
+diff --git a/drivers/media/platform/qcom/venus/hfi_parser.c b/drivers/media/platform/qcom/venus/hfi_parser.c
+index 3df241dc3a118b..1b3db2caa99fe4 100644
+--- a/drivers/media/platform/qcom/venus/hfi_parser.c
++++ b/drivers/media/platform/qcom/venus/hfi_parser.c
+@@ -19,6 +19,8 @@ static void init_codecs(struct venus_core *core)
+ struct hfi_plat_caps *caps = core->caps, *cap;
+ unsigned long bit;
+
++ core->codecs_count = 0;
++
+ if (hweight_long(core->dec_codecs) + hweight_long(core->enc_codecs) > MAX_CODEC_NUM)
+ return;
+
+@@ -62,7 +64,7 @@ fill_buf_mode(struct hfi_plat_caps *cap, const void *data, unsigned int num)
+ cap->cap_bufs_mode_dynamic = true;
+ }
+
+-static void
++static int
+ parse_alloc_mode(struct venus_core *core, u32 codecs, u32 domain, void *data)
+ {
+ struct hfi_buffer_alloc_mode_supported *mode = data;
+@@ -70,7 +72,7 @@ parse_alloc_mode(struct venus_core *core, u32 codecs, u32 domain, void *data)
+ u32 *type;
+
+ if (num_entries > MAX_ALLOC_MODE_ENTRIES)
+- return;
++ return -EINVAL;
+
+ type = mode->data;
+
+@@ -82,6 +84,8 @@ parse_alloc_mode(struct venus_core *core, u32 codecs, u32 domain, void *data)
+
+ type++;
+ }
++
++ return sizeof(*mode);
+ }
+
+ static void fill_profile_level(struct hfi_plat_caps *cap, const void *data,
+@@ -96,7 +100,7 @@ static void fill_profile_level(struct hfi_plat_caps *cap, const void *data,
+ cap->num_pl += num;
+ }
+
+-static void
++static int
+ parse_profile_level(struct venus_core *core, u32 codecs, u32 domain, void *data)
+ {
+ struct hfi_profile_level_supported *pl = data;
+@@ -104,12 +108,14 @@ parse_profile_level(struct venus_core *core, u32 codecs, u32 domain, void *data)
+ struct hfi_profile_level pl_arr[HFI_MAX_PROFILE_COUNT] = {};
+
+ if (pl->profile_count > HFI_MAX_PROFILE_COUNT)
+- return;
++ return -EINVAL;
+
+ memcpy(pl_arr, proflevel, pl->profile_count * sizeof(*proflevel));
+
+ for_each_codec(core->caps, ARRAY_SIZE(core->caps), codecs, domain,
+ fill_profile_level, pl_arr, pl->profile_count);
++
++ return pl->profile_count * sizeof(*proflevel) + sizeof(u32);
+ }
+
+ static void
+@@ -124,7 +130,7 @@ fill_caps(struct hfi_plat_caps *cap, const void *data, unsigned int num)
+ cap->num_caps += num;
+ }
+
+-static void
++static int
+ parse_caps(struct venus_core *core, u32 codecs, u32 domain, void *data)
+ {
+ struct hfi_capabilities *caps = data;
+@@ -133,12 +139,14 @@ parse_caps(struct venus_core *core, u32 codecs, u32 domain, void *data)
+ struct hfi_capability caps_arr[MAX_CAP_ENTRIES] = {};
+
+ if (num_caps > MAX_CAP_ENTRIES)
+- return;
++ return -EINVAL;
+
+ memcpy(caps_arr, cap, num_caps * sizeof(*cap));
+
+ for_each_codec(core->caps, ARRAY_SIZE(core->caps), codecs, domain,
+ fill_caps, caps_arr, num_caps);
++
++ return sizeof(*caps);
+ }
+
+ static void fill_raw_fmts(struct hfi_plat_caps *cap, const void *fmts,
+@@ -153,7 +161,7 @@ static void fill_raw_fmts(struct hfi_plat_caps *cap, const void *fmts,
+ cap->num_fmts += num_fmts;
+ }
+
+-static void
++static int
+ parse_raw_formats(struct venus_core *core, u32 codecs, u32 domain, void *data)
+ {
+ struct hfi_uncompressed_format_supported *fmt = data;
+@@ -162,7 +170,8 @@ parse_raw_formats(struct venus_core *core, u32 codecs, u32 domain, void *data)
+ struct raw_formats rawfmts[MAX_FMT_ENTRIES] = {};
+ u32 entries = fmt->format_entries;
+ unsigned int i = 0;
+- u32 num_planes;
++ u32 num_planes = 0;
++ u32 size;
+
+ while (entries) {
+ num_planes = pinfo->num_planes;
+@@ -172,7 +181,7 @@ parse_raw_formats(struct venus_core *core, u32 codecs, u32 domain, void *data)
+ i++;
+
+ if (i >= MAX_FMT_ENTRIES)
+- return;
++ return -EINVAL;
+
+ if (pinfo->num_planes > MAX_PLANES)
+ break;
+@@ -184,9 +193,13 @@ parse_raw_formats(struct venus_core *core, u32 codecs, u32 domain, void *data)
+
+ for_each_codec(core->caps, ARRAY_SIZE(core->caps), codecs, domain,
+ fill_raw_fmts, rawfmts, i);
++ size = fmt->format_entries * (sizeof(*constr) * num_planes + 2 * sizeof(u32))
++ + 2 * sizeof(u32);
++
++ return size;
+ }
+
+-static void parse_codecs(struct venus_core *core, void *data)
++static int parse_codecs(struct venus_core *core, void *data)
+ {
+ struct hfi_codec_supported *codecs = data;
+
+@@ -198,21 +211,27 @@ static void parse_codecs(struct venus_core *core, void *data)
+ core->dec_codecs &= ~HFI_VIDEO_CODEC_SPARK;
+ core->enc_codecs &= ~HFI_VIDEO_CODEC_HEVC;
+ }
++
++ return sizeof(*codecs);
+ }
+
+-static void parse_max_sessions(struct venus_core *core, const void *data)
++static int parse_max_sessions(struct venus_core *core, const void *data)
+ {
+ const struct hfi_max_sessions_supported *sessions = data;
+
+ core->max_sessions_supported = sessions->max_sessions;
++
++ return sizeof(*sessions);
+ }
+
+-static void parse_codecs_mask(u32 *codecs, u32 *domain, void *data)
++static int parse_codecs_mask(u32 *codecs, u32 *domain, void *data)
+ {
+ struct hfi_codec_mask_supported *mask = data;
+
+ *codecs = mask->codecs;
+ *domain = mask->video_domains;
++
++ return sizeof(*mask);
+ }
+
+ static void parser_init(struct venus_inst *inst, u32 *codecs, u32 *domain)
+@@ -281,8 +300,9 @@ static int hfi_platform_parser(struct venus_core *core, struct venus_inst *inst)
+ u32 hfi_parser(struct venus_core *core, struct venus_inst *inst, void *buf,
+ u32 size)
+ {
+- unsigned int words_count = size >> 2;
+- u32 *word = buf, *data, codecs = 0, domain = 0;
++ u32 *words = buf, *payload, codecs = 0, domain = 0;
++ u32 *frame_size = buf + size;
++ u32 rem_bytes = size;
+ int ret;
+
+ ret = hfi_platform_parser(core, inst);
+@@ -299,38 +319,66 @@ u32 hfi_parser(struct venus_core *core, struct venus_inst *inst, void *buf,
+ memset(core->caps, 0, sizeof(core->caps));
+ }
+
+- while (words_count) {
+- data = word + 1;
++ while (words < frame_size) {
++ payload = words + 1;
+
+- switch (*word) {
++ switch (*words) {
+ case HFI_PROPERTY_PARAM_CODEC_SUPPORTED:
+- parse_codecs(core, data);
++ if (rem_bytes <= sizeof(struct hfi_codec_supported))
++ return HFI_ERR_SYS_INSUFFICIENT_RESOURCES;
++
++ ret = parse_codecs(core, payload);
++ if (ret < 0)
++ return HFI_ERR_SYS_INSUFFICIENT_RESOURCES;
++
+ init_codecs(core);
+ break;
+ case HFI_PROPERTY_PARAM_MAX_SESSIONS_SUPPORTED:
+- parse_max_sessions(core, data);
++ if (rem_bytes <= sizeof(struct hfi_max_sessions_supported))
++ return HFI_ERR_SYS_INSUFFICIENT_RESOURCES;
++
++ ret = parse_max_sessions(core, payload);
+ break;
+ case HFI_PROPERTY_PARAM_CODEC_MASK_SUPPORTED:
+- parse_codecs_mask(&codecs, &domain, data);
++ if (rem_bytes <= sizeof(struct hfi_codec_mask_supported))
++ return HFI_ERR_SYS_INSUFFICIENT_RESOURCES;
++
++ ret = parse_codecs_mask(&codecs, &domain, payload);
+ break;
+ case HFI_PROPERTY_PARAM_UNCOMPRESSED_FORMAT_SUPPORTED:
+- parse_raw_formats(core, codecs, domain, data);
++ if (rem_bytes <= sizeof(struct hfi_uncompressed_format_supported))
++ return HFI_ERR_SYS_INSUFFICIENT_RESOURCES;
++
++ ret = parse_raw_formats(core, codecs, domain, payload);
+ break;
+ case HFI_PROPERTY_PARAM_CAPABILITY_SUPPORTED:
+- parse_caps(core, codecs, domain, data);
++ if (rem_bytes <= sizeof(struct hfi_capabilities))
++ return HFI_ERR_SYS_INSUFFICIENT_RESOURCES;
++
++ ret = parse_caps(core, codecs, domain, payload);
+ break;
+ case HFI_PROPERTY_PARAM_PROFILE_LEVEL_SUPPORTED:
+- parse_profile_level(core, codecs, domain, data);
++ if (rem_bytes <= sizeof(struct hfi_profile_level_supported))
++ return HFI_ERR_SYS_INSUFFICIENT_RESOURCES;
++
++ ret = parse_profile_level(core, codecs, domain, payload);
+ break;
+ case HFI_PROPERTY_PARAM_BUFFER_ALLOC_MODE_SUPPORTED:
+- parse_alloc_mode(core, codecs, domain, data);
++ if (rem_bytes <= sizeof(struct hfi_buffer_alloc_mode_supported))
++ return HFI_ERR_SYS_INSUFFICIENT_RESOURCES;
++
++ ret = parse_alloc_mode(core, codecs, domain, payload);
+ break;
+ default:
++ ret = sizeof(u32);
+ break;
+ }
+
+- word++;
+- words_count--;
++ if (ret < 0)
++ return HFI_ERR_SYS_INSUFFICIENT_RESOURCES;
++
++ words += ret / sizeof(u32);
++ rem_bytes -= ret;
+ }
+
+ if (!core->max_sessions_supported)
+diff --git a/drivers/media/platform/qcom/venus/hfi_venus.c b/drivers/media/platform/qcom/venus/hfi_venus.c
+index a9167867063c41..b5f2ea8799507f 100644
+--- a/drivers/media/platform/qcom/venus/hfi_venus.c
++++ b/drivers/media/platform/qcom/venus/hfi_venus.c
+@@ -187,6 +187,9 @@ static int venus_write_queue(struct venus_hfi_device *hdev,
+ /* ensure rd/wr indices's are read from memory */
+ rmb();
+
++ if (qsize > IFACEQ_QUEUE_SIZE / 4)
++ return -EINVAL;
++
+ if (wr_idx >= rd_idx)
+ empty_space = qsize - (wr_idx - rd_idx);
+ else
+@@ -255,6 +258,9 @@ static int venus_read_queue(struct venus_hfi_device *hdev,
+ wr_idx = qhdr->write_idx;
+ qsize = qhdr->q_size;
+
++ if (qsize > IFACEQ_QUEUE_SIZE / 4)
++ return -EINVAL;
++
+ /* make sure data is valid before using it */
+ rmb();
+
+@@ -1035,18 +1041,26 @@ static void venus_sfr_print(struct venus_hfi_device *hdev)
+ {
+ struct device *dev = hdev->core->dev;
+ struct hfi_sfr *sfr = hdev->sfr.kva;
++ u32 size;
+ void *p;
+
+ if (!sfr)
+ return;
+
+- p = memchr(sfr->data, '\0', sfr->buf_size);
++ size = sfr->buf_size;
++ if (!size)
++ return;
++
++ if (size > ALIGNED_SFR_SIZE)
++ size = ALIGNED_SFR_SIZE;
++
++ p = memchr(sfr->data, '\0', size);
+ /*
+ * SFR isn't guaranteed to be NULL terminated since SYS_ERROR indicates
+ * that Venus is in the process of crashing.
+ */
+ if (!p)
+- sfr->data[sfr->buf_size - 1] = '\0';
++ sfr->data[size - 1] = '\0';
+
+ dev_err_ratelimited(dev, "SFR message from FW: %s\n", sfr->data);
+ }
+diff --git a/drivers/media/platform/rockchip/rga/rga-hw.c b/drivers/media/platform/rockchip/rga/rga-hw.c
+index bf55beec0fac7a..43ed742a164929 100644
+--- a/drivers/media/platform/rockchip/rga/rga-hw.c
++++ b/drivers/media/platform/rockchip/rga/rga-hw.c
+@@ -376,7 +376,7 @@ static void rga_cmd_set_dst_info(struct rga_ctx *ctx,
+ * Configure the dest framebuffer base address with pixel offset.
+ */
+ offsets = rga_get_addr_offset(&ctx->out, offset, dst_x, dst_y, dst_w, dst_h);
+- dst_offset = rga_lookup_draw_pos(&offsets, mir_mode, rot_mode);
++ dst_offset = rga_lookup_draw_pos(&offsets, rot_mode, mir_mode);
+
+ dest[(RGA_DST_Y_RGB_BASE_ADDR - RGA_MODE_BASE_REG) >> 2] =
+ dst_offset->y_off;
+diff --git a/drivers/media/platform/samsung/s5p-mfc/s5p_mfc_opr_v6.c b/drivers/media/platform/samsung/s5p-mfc/s5p_mfc_opr_v6.c
+index 73f7af674c01bd..0c636090d723de 100644
+--- a/drivers/media/platform/samsung/s5p-mfc/s5p_mfc_opr_v6.c
++++ b/drivers/media/platform/samsung/s5p-mfc/s5p_mfc_opr_v6.c
+@@ -549,8 +549,9 @@ static void s5p_mfc_enc_calc_src_size_v6(struct s5p_mfc_ctx *ctx)
+ case V4L2_PIX_FMT_NV21M:
+ ctx->stride[0] = ALIGN(ctx->img_width, S5P_FIMV_NV12M_HALIGN_V6);
+ ctx->stride[1] = ALIGN(ctx->img_width, S5P_FIMV_NV12M_HALIGN_V6);
+- ctx->luma_size = ctx->stride[0] * ALIGN(ctx->img_height, 16);
+- ctx->chroma_size = ctx->stride[0] * ALIGN(ctx->img_height / 2, 16);
++ ctx->luma_size = ALIGN(ctx->stride[0] * ALIGN(ctx->img_height, 16), 256);
++ ctx->chroma_size = ALIGN(ctx->stride[0] * ALIGN(ctx->img_height / 2, 16),
++ 256);
+ break;
+ case V4L2_PIX_FMT_YUV420M:
+ case V4L2_PIX_FMT_YVU420M:
+diff --git a/drivers/media/platform/st/stm32/dma2d/dma2d.c b/drivers/media/platform/st/stm32/dma2d/dma2d.c
+index b6c8400fb92da9..48fa781aab06c1 100644
+--- a/drivers/media/platform/st/stm32/dma2d/dma2d.c
++++ b/drivers/media/platform/st/stm32/dma2d/dma2d.c
+@@ -490,7 +490,8 @@ static void device_run(void *prv)
+ dst->sequence = frm_cap->sequence++;
+ v4l2_m2m_buf_copy_metadata(src, dst, true);
+
+- clk_enable(dev->gate);
++ if (clk_enable(dev->gate))
++ goto end;
+
+ dma2d_config_fg(dev, frm_out,
+ vb2_dma_contig_plane_dma_addr(&src->vb2_buf, 0));
+diff --git a/drivers/media/platform/xilinx/xilinx-tpg.c b/drivers/media/platform/xilinx/xilinx-tpg.c
+index cb93711ea3e356..7deec6e37edc19 100644
+--- a/drivers/media/platform/xilinx/xilinx-tpg.c
++++ b/drivers/media/platform/xilinx/xilinx-tpg.c
+@@ -722,7 +722,6 @@ static int xtpg_parse_of(struct xtpg_device *xtpg)
+ format = xvip_of_get_format(port);
+ if (IS_ERR(format)) {
+ dev_err(dev, "invalid format in DT");
+- of_node_put(port);
+ return PTR_ERR(format);
+ }
+
+@@ -731,7 +730,6 @@ static int xtpg_parse_of(struct xtpg_device *xtpg)
+ xtpg->vip_format = format;
+ } else if (xtpg->vip_format != format) {
+ dev_err(dev, "in/out format mismatch in DT");
+- of_node_put(port);
+ return -EINVAL;
+ }
+
+diff --git a/drivers/media/rc/streamzap.c b/drivers/media/rc/streamzap.c
+index 2ce62fe5d60f5a..d3b48a0dd1f474 100644
+--- a/drivers/media/rc/streamzap.c
++++ b/drivers/media/rc/streamzap.c
+@@ -138,39 +138,10 @@ static void sz_push_half_space(struct streamzap_ir *sz,
+ sz_push_full_space(sz, value & SZ_SPACE_MASK);
+ }
+
+-/*
+- * streamzap_callback - usb IRQ handler callback
+- *
+- * This procedure is invoked on reception of data from
+- * the usb remote.
+- */
+-static void streamzap_callback(struct urb *urb)
++static void sz_process_ir_data(struct streamzap_ir *sz, int len)
+ {
+- struct streamzap_ir *sz;
+ unsigned int i;
+- int len;
+-
+- if (!urb)
+- return;
+-
+- sz = urb->context;
+- len = urb->actual_length;
+-
+- switch (urb->status) {
+- case -ECONNRESET:
+- case -ENOENT:
+- case -ESHUTDOWN:
+- /*
+- * this urb is terminated, clean up.
+- * sz might already be invalid at this point
+- */
+- dev_err(sz->dev, "urb terminated, status: %d\n", urb->status);
+- return;
+- default:
+- break;
+- }
+
+- dev_dbg(sz->dev, "%s: received urb, len %d\n", __func__, len);
+ for (i = 0; i < len; i++) {
+ dev_dbg(sz->dev, "sz->buf_in[%d]: %x\n",
+ i, (unsigned char)sz->buf_in[i]);
+@@ -219,6 +190,43 @@ static void streamzap_callback(struct urb *urb)
+ }
+
+ ir_raw_event_handle(sz->rdev);
++}
++
++/*
++ * streamzap_callback - usb IRQ handler callback
++ *
++ * This procedure is invoked on reception of data from
++ * the usb remote.
++ */
++static void streamzap_callback(struct urb *urb)
++{
++ struct streamzap_ir *sz;
++ int len;
++
++ if (!urb)
++ return;
++
++ sz = urb->context;
++ len = urb->actual_length;
++
++ switch (urb->status) {
++ case 0:
++ dev_dbg(sz->dev, "%s: received urb, len %d\n", __func__, len);
++ sz_process_ir_data(sz, len);
++ break;
++ case -ECONNRESET:
++ case -ENOENT:
++ case -ESHUTDOWN:
++ /*
++ * this urb is terminated, clean up.
++ * sz might already be invalid at this point
++ */
++ dev_err(sz->dev, "urb terminated, status: %d\n", urb->status);
++ return;
++ default:
++ break;
++ }
++
+ usb_submit_urb(urb, GFP_ATOMIC);
+ }
+
+diff --git a/drivers/media/test-drivers/vim2m.c b/drivers/media/test-drivers/vim2m.c
+index 6c24dcf27eb078..0fe97e208c02c0 100644
+--- a/drivers/media/test-drivers/vim2m.c
++++ b/drivers/media/test-drivers/vim2m.c
+@@ -1314,9 +1314,6 @@ static int vim2m_probe(struct platform_device *pdev)
+ vfd->v4l2_dev = &dev->v4l2_dev;
+
+ video_set_drvdata(vfd, dev);
+- v4l2_info(&dev->v4l2_dev,
+- "Device registered as /dev/video%d\n", vfd->num);
+-
+ platform_set_drvdata(pdev, dev);
+
+ dev->m2m_dev = v4l2_m2m_init(&m2m_ops);
+@@ -1343,6 +1340,9 @@ static int vim2m_probe(struct platform_device *pdev)
+ goto error_m2m;
+ }
+
++ v4l2_info(&dev->v4l2_dev,
++ "Device registered as /dev/video%d\n", vfd->num);
++
+ #ifdef CONFIG_MEDIA_CONTROLLER
+ ret = v4l2_m2m_register_media_controller(dev->m2m_dev, vfd,
+ MEDIA_ENT_F_PROC_VIDEO_SCALER);
+diff --git a/drivers/media/test-drivers/visl/visl-core.c b/drivers/media/test-drivers/visl/visl-core.c
+index 01c964ea6f7675..5bf3136b36eb30 100644
+--- a/drivers/media/test-drivers/visl/visl-core.c
++++ b/drivers/media/test-drivers/visl/visl-core.c
+@@ -161,9 +161,15 @@ static const struct visl_ctrl_desc visl_h264_ctrl_descs[] = {
+ },
+ {
+ .cfg.id = V4L2_CID_STATELESS_H264_DECODE_MODE,
++ .cfg.min = V4L2_STATELESS_H264_DECODE_MODE_SLICE_BASED,
++ .cfg.max = V4L2_STATELESS_H264_DECODE_MODE_FRAME_BASED,
++ .cfg.def = V4L2_STATELESS_H264_DECODE_MODE_SLICE_BASED,
+ },
+ {
+ .cfg.id = V4L2_CID_STATELESS_H264_START_CODE,
++ .cfg.min = V4L2_STATELESS_H264_START_CODE_NONE,
++ .cfg.max = V4L2_STATELESS_H264_START_CODE_ANNEX_B,
++ .cfg.def = V4L2_STATELESS_H264_START_CODE_NONE,
+ },
+ {
+ .cfg.id = V4L2_CID_STATELESS_H264_SLICE_PARAMS,
+@@ -198,9 +204,15 @@ static const struct visl_ctrl_desc visl_hevc_ctrl_descs[] = {
+ },
+ {
+ .cfg.id = V4L2_CID_STATELESS_HEVC_DECODE_MODE,
++ .cfg.min = V4L2_STATELESS_HEVC_DECODE_MODE_SLICE_BASED,
++ .cfg.max = V4L2_STATELESS_HEVC_DECODE_MODE_FRAME_BASED,
++ .cfg.def = V4L2_STATELESS_HEVC_DECODE_MODE_SLICE_BASED,
+ },
+ {
+ .cfg.id = V4L2_CID_STATELESS_HEVC_START_CODE,
++ .cfg.min = V4L2_STATELESS_HEVC_START_CODE_NONE,
++ .cfg.max = V4L2_STATELESS_HEVC_START_CODE_ANNEX_B,
++ .cfg.def = V4L2_STATELESS_HEVC_START_CODE_NONE,
+ },
+ {
+ .cfg.id = V4L2_CID_STATELESS_HEVC_ENTRY_POINT_OFFSETS,
+diff --git a/drivers/media/usb/uvc/uvc_driver.c b/drivers/media/usb/uvc/uvc_driver.c
+index deadbcea5e227c..11b04f6f60cd18 100644
+--- a/drivers/media/usb/uvc/uvc_driver.c
++++ b/drivers/media/usb/uvc/uvc_driver.c
+@@ -3062,6 +3062,15 @@ static const struct usb_device_id uvc_ids[] = {
+ .bInterfaceProtocol = 0,
+ .driver_info = UVC_INFO_QUIRK(UVC_QUIRK_PROBE_MINMAX
+ | UVC_QUIRK_IGNORE_SELECTOR_UNIT) },
++ /* Actions Microelectronics Co. Display capture-UVC05 */
++ { .match_flags = USB_DEVICE_ID_MATCH_DEVICE
++ | USB_DEVICE_ID_MATCH_INT_INFO,
++ .idVendor = 0x1de1,
++ .idProduct = 0xf105,
++ .bInterfaceClass = USB_CLASS_VIDEO,
++ .bInterfaceSubClass = 1,
++ .bInterfaceProtocol = 0,
++ .driver_info = UVC_INFO_QUIRK(UVC_QUIRK_DISABLE_AUTOSUSPEND) },
+ /* NXP Semiconductors IR VIDEO */
+ { .match_flags = USB_DEVICE_ID_MATCH_DEVICE
+ | USB_DEVICE_ID_MATCH_INT_INFO,
+diff --git a/drivers/media/v4l2-core/v4l2-dv-timings.c b/drivers/media/v4l2-core/v4l2-dv-timings.c
+index d26edf157e6400..32930956740d95 100644
+--- a/drivers/media/v4l2-core/v4l2-dv-timings.c
++++ b/drivers/media/v4l2-core/v4l2-dv-timings.c
+@@ -764,7 +764,7 @@ bool v4l2_detect_gtf(unsigned int frame_height,
+ u64 num;
+ u32 den;
+
+- num = ((image_width * GTF_D_C_PRIME * (u64)hfreq) -
++ num = (((u64)image_width * GTF_D_C_PRIME * hfreq) -
+ ((u64)image_width * GTF_D_M_PRIME * 1000));
+ den = (hfreq * (100 - GTF_D_C_PRIME) + GTF_D_M_PRIME * 1000) *
+ (2 * GTF_CELL_GRAN);
+@@ -774,7 +774,7 @@ bool v4l2_detect_gtf(unsigned int frame_height,
+ u64 num;
+ u32 den;
+
+- num = ((image_width * GTF_S_C_PRIME * (u64)hfreq) -
++ num = (((u64)image_width * GTF_S_C_PRIME * hfreq) -
+ ((u64)image_width * GTF_S_M_PRIME * 1000));
+ den = (hfreq * (100 - GTF_S_C_PRIME) + GTF_S_M_PRIME * 1000) *
+ (2 * GTF_CELL_GRAN);
+diff --git a/drivers/mfd/ene-kb3930.c b/drivers/mfd/ene-kb3930.c
+index fa0ad2f14a3961..9460a67acb0b5e 100644
+--- a/drivers/mfd/ene-kb3930.c
++++ b/drivers/mfd/ene-kb3930.c
+@@ -162,7 +162,7 @@ static int kb3930_probe(struct i2c_client *client)
+ devm_gpiod_get_array_optional(dev, "off", GPIOD_IN);
+ if (IS_ERR(ddata->off_gpios))
+ return PTR_ERR(ddata->off_gpios);
+- if (ddata->off_gpios->ndescs < 2) {
++ if (ddata->off_gpios && ddata->off_gpios->ndescs < 2) {
+ dev_err(dev, "invalid off-gpios property\n");
+ return -EINVAL;
+ }
+diff --git a/drivers/misc/pci_endpoint_test.c b/drivers/misc/pci_endpoint_test.c
+index 9dac7cbe8748cc..4c0f37ad0281b1 100644
+--- a/drivers/misc/pci_endpoint_test.c
++++ b/drivers/misc/pci_endpoint_test.c
+@@ -88,7 +88,6 @@
+ #define PCI_DEVICE_ID_RENESAS_R8A774E1 0x0025
+ #define PCI_DEVICE_ID_RENESAS_R8A779F0 0x0031
+
+-#define PCI_VENDOR_ID_ROCKCHIP 0x1d87
+ #define PCI_DEVICE_ID_ROCKCHIP_RK3588 0x3588
+
+ static DEFINE_IDA(pci_endpoint_test_ida);
+@@ -242,7 +241,7 @@ static int pci_endpoint_test_request_irq(struct pci_endpoint_test *test)
+ return 0;
+
+ fail:
+- switch (irq_type) {
++ switch (test->irq_type) {
+ case IRQ_TYPE_INTX:
+ dev_err(dev, "Failed to request IRQ %d for Legacy\n",
+ pci_irq_vector(pdev, i));
+@@ -259,6 +258,9 @@ static int pci_endpoint_test_request_irq(struct pci_endpoint_test *test)
+ break;
+ }
+
++ test->num_irqs = i;
++ pci_endpoint_test_release_irq(test);
++
+ return ret;
+ }
+
+@@ -828,6 +830,7 @@ static int pci_endpoint_test_set_irq(struct pci_endpoint_test *test,
+ return ret;
+ }
+
++ irq_type = test->irq_type;
+ return 0;
+ }
+
+diff --git a/drivers/mmc/host/dw_mmc.c b/drivers/mmc/host/dw_mmc.c
+index 3cbda98d08d287..74f224647bf1ed 100644
+--- a/drivers/mmc/host/dw_mmc.c
++++ b/drivers/mmc/host/dw_mmc.c
+@@ -2579,6 +2579,91 @@ static void dw_mci_pull_data64(struct dw_mci *host, void *buf, int cnt)
+ }
+ }
+
++static void dw_mci_push_data64_32(struct dw_mci *host, void *buf, int cnt)
++{
++ struct mmc_data *data = host->data;
++ int init_cnt = cnt;
++
++ /* try and push anything in the part_buf */
++ if (unlikely(host->part_buf_count)) {
++ int len = dw_mci_push_part_bytes(host, buf, cnt);
++
++ buf += len;
++ cnt -= len;
++
++ if (host->part_buf_count == 8) {
++ mci_fifo_l_writeq(host->fifo_reg, host->part_buf);
++ host->part_buf_count = 0;
++ }
++ }
++#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
++ if (unlikely((unsigned long)buf & 0x7)) {
++ while (cnt >= 8) {
++ u64 aligned_buf[16];
++ int len = min(cnt & -8, (int)sizeof(aligned_buf));
++ int items = len >> 3;
++ int i;
++ /* memcpy from input buffer into aligned buffer */
++ memcpy(aligned_buf, buf, len);
++ buf += len;
++ cnt -= len;
++ /* push data from aligned buffer into fifo */
++ for (i = 0; i < items; ++i)
++ mci_fifo_l_writeq(host->fifo_reg, aligned_buf[i]);
++ }
++ } else
++#endif
++ {
++ u64 *pdata = buf;
++
++ for (; cnt >= 8; cnt -= 8)
++ mci_fifo_l_writeq(host->fifo_reg, *pdata++);
++ buf = pdata;
++ }
++ /* put anything remaining in the part_buf */
++ if (cnt) {
++ dw_mci_set_part_bytes(host, buf, cnt);
++ /* Push data if we have reached the expected data length */
++ if ((data->bytes_xfered + init_cnt) ==
++ (data->blksz * data->blocks))
++ mci_fifo_l_writeq(host->fifo_reg, host->part_buf);
++ }
++}
++
++static void dw_mci_pull_data64_32(struct dw_mci *host, void *buf, int cnt)
++{
++#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
++ if (unlikely((unsigned long)buf & 0x7)) {
++ while (cnt >= 8) {
++ /* pull data from fifo into aligned buffer */
++ u64 aligned_buf[16];
++ int len = min(cnt & -8, (int)sizeof(aligned_buf));
++ int items = len >> 3;
++ int i;
++
++ for (i = 0; i < items; ++i)
++ aligned_buf[i] = mci_fifo_l_readq(host->fifo_reg);
++
++ /* memcpy from aligned buffer into output buffer */
++ memcpy(buf, aligned_buf, len);
++ buf += len;
++ cnt -= len;
++ }
++ } else
++#endif
++ {
++ u64 *pdata = buf;
++
++ for (; cnt >= 8; cnt -= 8)
++ *pdata++ = mci_fifo_l_readq(host->fifo_reg);
++ buf = pdata;
++ }
++ if (cnt) {
++ host->part_buf = mci_fifo_l_readq(host->fifo_reg);
++ dw_mci_pull_final_bytes(host, buf, cnt);
++ }
++}
++
+ static void dw_mci_pull_data(struct dw_mci *host, void *buf, int cnt)
+ {
+ int len;
+@@ -3379,8 +3464,13 @@ int dw_mci_probe(struct dw_mci *host)
+ width = 16;
+ host->data_shift = 1;
+ } else if (i == 2) {
+- host->push_data = dw_mci_push_data64;
+- host->pull_data = dw_mci_pull_data64;
++ if ((host->quirks & DW_MMC_QUIRK_FIFO64_32)) {
++ host->push_data = dw_mci_push_data64_32;
++ host->pull_data = dw_mci_pull_data64_32;
++ } else {
++ host->push_data = dw_mci_push_data64;
++ host->pull_data = dw_mci_pull_data64;
++ }
+ width = 64;
+ host->data_shift = 3;
+ } else {
+diff --git a/drivers/mmc/host/dw_mmc.h b/drivers/mmc/host/dw_mmc.h
+index 6447b916990dcd..5463392dc81105 100644
+--- a/drivers/mmc/host/dw_mmc.h
++++ b/drivers/mmc/host/dw_mmc.h
+@@ -281,6 +281,8 @@ struct dw_mci_board {
+
+ /* Support for longer data read timeout */
+ #define DW_MMC_QUIRK_EXTENDED_TMOUT BIT(0)
++/* Force 32-bit access to the FIFO */
++#define DW_MMC_QUIRK_FIFO64_32 BIT(1)
+
+ #define DW_MMC_240A 0x240a
+ #define DW_MMC_280A 0x280a
+@@ -472,6 +474,31 @@ struct dw_mci_board {
+ #define mci_fifo_writel(__value, __reg) __raw_writel(__reg, __value)
+ #define mci_fifo_writeq(__value, __reg) __raw_writeq(__reg, __value)
+
++/*
++ * Some dw_mmc devices have 64-bit FIFOs, but expect them to be
++ * accessed using two 32-bit accesses. If such controller is used
++ * with a 64-bit kernel, this has to be done explicitly.
++ */
++static inline u64 mci_fifo_l_readq(void __iomem *addr)
++{
++ u64 ans;
++ u32 proxy[2];
++
++ proxy[0] = mci_fifo_readl(addr);
++ proxy[1] = mci_fifo_readl(addr + 4);
++ memcpy(&ans, proxy, 8);
++ return ans;
++}
++
++static inline void mci_fifo_l_writeq(void __iomem *addr, u64 value)
++{
++ u32 proxy[2];
++
++ memcpy(proxy, &value, 8);
++ mci_fifo_writel(addr, proxy[0]);
++ mci_fifo_writel(addr + 4, proxy[1]);
++}
++
+ /* Register access macros */
+ #define mci_readl(dev, reg) \
+ readl_relaxed((dev)->regs + SDMMC_##reg)
+diff --git a/drivers/mtd/inftlcore.c b/drivers/mtd/inftlcore.c
+index 9739387cff8c91..58c6e1743f5c65 100644
+--- a/drivers/mtd/inftlcore.c
++++ b/drivers/mtd/inftlcore.c
+@@ -482,10 +482,11 @@ static inline u16 INFTL_findwriteunit(struct INFTLrecord *inftl, unsigned block)
+ silly = MAX_LOOPS;
+
+ while (thisEUN <= inftl->lastEUN) {
+- inftl_read_oob(mtd, (thisEUN * inftl->EraseSize) +
+- blockofs, 8, &retlen, (char *)&bci);
+-
+- status = bci.Status | bci.Status1;
++ if (inftl_read_oob(mtd, (thisEUN * inftl->EraseSize) +
++ blockofs, 8, &retlen, (char *)&bci) < 0)
++ status = SECTOR_IGNORE;
++ else
++ status = bci.Status | bci.Status1;
+ pr_debug("INFTL: status of block %d in EUN %d is %x\n",
+ block , writeEUN, status);
+
+diff --git a/drivers/mtd/mtdpstore.c b/drivers/mtd/mtdpstore.c
+index 7ac8ac90130685..9cf3872e37ae14 100644
+--- a/drivers/mtd/mtdpstore.c
++++ b/drivers/mtd/mtdpstore.c
+@@ -417,11 +417,14 @@ static void mtdpstore_notify_add(struct mtd_info *mtd)
+ }
+
+ longcnt = BITS_TO_LONGS(div_u64(mtd->size, info->kmsg_size));
+- cxt->rmmap = kcalloc(longcnt, sizeof(long), GFP_KERNEL);
+- cxt->usedmap = kcalloc(longcnt, sizeof(long), GFP_KERNEL);
++ cxt->rmmap = devm_kcalloc(&mtd->dev, longcnt, sizeof(long), GFP_KERNEL);
++ cxt->usedmap = devm_kcalloc(&mtd->dev, longcnt, sizeof(long), GFP_KERNEL);
+
+ longcnt = BITS_TO_LONGS(div_u64(mtd->size, mtd->erasesize));
+- cxt->badmap = kcalloc(longcnt, sizeof(long), GFP_KERNEL);
++ cxt->badmap = devm_kcalloc(&mtd->dev, longcnt, sizeof(long), GFP_KERNEL);
++
++ if (!cxt->rmmap || !cxt->usedmap || !cxt->badmap)
++ return;
+
+ /* just support dmesg right now */
+ cxt->dev.flags = PSTORE_FLAGS_DMESG;
+@@ -527,9 +530,6 @@ static void mtdpstore_notify_remove(struct mtd_info *mtd)
+ mtdpstore_flush_removed(cxt);
+
+ unregister_pstore_device(&cxt->dev);
+- kfree(cxt->badmap);
+- kfree(cxt->usedmap);
+- kfree(cxt->rmmap);
+ cxt->mtd = NULL;
+ cxt->index = -1;
+ }
+diff --git a/drivers/mtd/nand/raw/brcmnand/brcmnand.c b/drivers/mtd/nand/raw/brcmnand/brcmnand.c
+index fea5b611995635..17f6d9723df9f5 100644
+--- a/drivers/mtd/nand/raw/brcmnand/brcmnand.c
++++ b/drivers/mtd/nand/raw/brcmnand/brcmnand.c
+@@ -3008,7 +3008,7 @@ static int brcmnand_resume(struct device *dev)
+ brcmnand_save_restore_cs_config(host, 1);
+
+ /* Reset the chip, required by some chips after power-up */
+- nand_reset_op(chip);
++ nand_reset(chip, 0);
+ }
+
+ return 0;
+diff --git a/drivers/mtd/nand/raw/r852.c b/drivers/mtd/nand/raw/r852.c
+index b07c2f8b40350d..918974d088cf65 100644
+--- a/drivers/mtd/nand/raw/r852.c
++++ b/drivers/mtd/nand/raw/r852.c
+@@ -387,6 +387,9 @@ static int r852_wait(struct nand_chip *chip)
+ static int r852_ready(struct nand_chip *chip)
+ {
+ struct r852_device *dev = r852_get_dev(nand_to_mtd(chip));
++ if (dev->card_unstable)
++ return 0;
++
+ return !(r852_read_reg(dev, R852_CARD_STA) & R852_CARD_STA_BUSY);
+ }
+
+diff --git a/drivers/net/can/flexcan/flexcan-core.c b/drivers/net/can/flexcan/flexcan-core.c
+index b080740bcb104f..fca290afb5329a 100644
+--- a/drivers/net/can/flexcan/flexcan-core.c
++++ b/drivers/net/can/flexcan/flexcan-core.c
+@@ -386,6 +386,16 @@ static const struct flexcan_devtype_data fsl_lx2160a_r1_devtype_data = {
+ FLEXCAN_QUIRK_SUPPORT_RX_MAILBOX_RTR,
+ };
+
++static const struct flexcan_devtype_data nxp_s32g2_devtype_data = {
++ .quirks = FLEXCAN_QUIRK_DISABLE_RXFG | FLEXCAN_QUIRK_ENABLE_EACEN_RRS |
++ FLEXCAN_QUIRK_DISABLE_MECR | FLEXCAN_QUIRK_BROKEN_PERR_STATE |
++ FLEXCAN_QUIRK_USE_RX_MAILBOX | FLEXCAN_QUIRK_SUPPORT_FD |
++ FLEXCAN_QUIRK_SUPPORT_ECC | FLEXCAN_QUIRK_NR_IRQ_3 |
++ FLEXCAN_QUIRK_SUPPORT_RX_MAILBOX |
++ FLEXCAN_QUIRK_SUPPORT_RX_MAILBOX_RTR |
++ FLEXCAN_QUIRK_SECONDARY_MB_IRQ,
++};
++
+ static const struct can_bittiming_const flexcan_bittiming_const = {
+ .name = DRV_NAME,
+ .tseg1_min = 4,
+@@ -1762,14 +1772,25 @@ static int flexcan_open(struct net_device *dev)
+ goto out_free_irq_boff;
+ }
+
++ if (priv->devtype_data.quirks & FLEXCAN_QUIRK_SECONDARY_MB_IRQ) {
++ err = request_irq(priv->irq_secondary_mb,
++ flexcan_irq, IRQF_SHARED, dev->name, dev);
++ if (err)
++ goto out_free_irq_err;
++ }
++
+ flexcan_chip_interrupts_enable(dev);
+
+ netif_start_queue(dev);
+
+ return 0;
+
++ out_free_irq_err:
++ if (priv->devtype_data.quirks & FLEXCAN_QUIRK_NR_IRQ_3)
++ free_irq(priv->irq_err, dev);
+ out_free_irq_boff:
+- free_irq(priv->irq_boff, dev);
++ if (priv->devtype_data.quirks & FLEXCAN_QUIRK_NR_IRQ_3)
++ free_irq(priv->irq_boff, dev);
+ out_free_irq:
+ free_irq(dev->irq, dev);
+ out_can_rx_offload_disable:
+@@ -1794,6 +1815,9 @@ static int flexcan_close(struct net_device *dev)
+ netif_stop_queue(dev);
+ flexcan_chip_interrupts_disable(dev);
+
++ if (priv->devtype_data.quirks & FLEXCAN_QUIRK_SECONDARY_MB_IRQ)
++ free_irq(priv->irq_secondary_mb, dev);
++
+ if (priv->devtype_data.quirks & FLEXCAN_QUIRK_NR_IRQ_3) {
+ free_irq(priv->irq_err, dev);
+ free_irq(priv->irq_boff, dev);
+@@ -2041,6 +2065,7 @@ static const struct of_device_id flexcan_of_match[] = {
+ { .compatible = "fsl,vf610-flexcan", .data = &fsl_vf610_devtype_data, },
+ { .compatible = "fsl,ls1021ar2-flexcan", .data = &fsl_ls1021a_r2_devtype_data, },
+ { .compatible = "fsl,lx2160ar1-flexcan", .data = &fsl_lx2160a_r1_devtype_data, },
++ { .compatible = "nxp,s32g2-flexcan", .data = &nxp_s32g2_devtype_data, },
+ { /* sentinel */ },
+ };
+ MODULE_DEVICE_TABLE(of, flexcan_of_match);
+@@ -2187,6 +2212,14 @@ static int flexcan_probe(struct platform_device *pdev)
+ }
+ }
+
++ if (priv->devtype_data.quirks & FLEXCAN_QUIRK_SECONDARY_MB_IRQ) {
++ priv->irq_secondary_mb = platform_get_irq_byname(pdev, "mb-1");
++ if (priv->irq_secondary_mb < 0) {
++ err = priv->irq_secondary_mb;
++ goto failed_platform_get_irq;
++ }
++ }
++
+ if (priv->devtype_data.quirks & FLEXCAN_QUIRK_SUPPORT_FD) {
+ priv->can.ctrlmode_supported |= CAN_CTRLMODE_FD |
+ CAN_CTRLMODE_FD_NON_ISO;
+diff --git a/drivers/net/can/flexcan/flexcan.h b/drivers/net/can/flexcan/flexcan.h
+index 4933d8c7439e62..2cf886618c9621 100644
+--- a/drivers/net/can/flexcan/flexcan.h
++++ b/drivers/net/can/flexcan/flexcan.h
+@@ -70,6 +70,10 @@
+ #define FLEXCAN_QUIRK_SUPPORT_RX_FIFO BIT(16)
+ /* Setup stop mode with ATF SCMI protocol to support wakeup */
+ #define FLEXCAN_QUIRK_SETUP_STOP_MODE_SCMI BIT(17)
++/* Device has two separate interrupt lines for two mailbox ranges, which
++ * both need to have an interrupt handler registered.
++ */
++#define FLEXCAN_QUIRK_SECONDARY_MB_IRQ BIT(18)
+
+ struct flexcan_devtype_data {
+ u32 quirks; /* quirks needed for different IP cores */
+@@ -107,6 +111,7 @@ struct flexcan_priv {
+
+ int irq_boff;
+ int irq_err;
++ int irq_secondary_mb;
+
+ /* IPC handle when setup stop mode by System Controller firmware(scfw) */
+ struct imx_sc_ipc *sc_ipc_handle;
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
+index 4a9fbfa8db41a5..29a89ab4b78946 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.c
++++ b/drivers/net/dsa/mv88e6xxx/chip.c
+@@ -3674,6 +3674,21 @@ static int mv88e6xxx_stats_setup(struct mv88e6xxx_chip *chip)
+ return mv88e6xxx_g1_stats_clear(chip);
+ }
+
++static int mv88e6320_setup_errata(struct mv88e6xxx_chip *chip)
++{
++ u16 dummy;
++ int err;
++
++ /* Workaround for erratum
++ * 3.3 RGMII timing may be out of spec when transmit delay is enabled
++ */
++ err = mv88e6xxx_port_hidden_write(chip, 0, 0xf, 0x7, 0xe000);
++ if (err)
++ return err;
++
++ return mv88e6xxx_port_hidden_read(chip, 0, 0xf, 0x7, &dummy);
++}
++
+ /* Check if the errata has already been applied. */
+ static bool mv88e6390_setup_errata_applied(struct mv88e6xxx_chip *chip)
+ {
+@@ -5130,6 +5145,7 @@ static const struct mv88e6xxx_ops mv88e6290_ops = {
+
+ static const struct mv88e6xxx_ops mv88e6320_ops = {
+ /* MV88E6XXX_FAMILY_6320 */
++ .setup_errata = mv88e6320_setup_errata,
+ .ieee_pri_map = mv88e6085_g1_ieee_pri_map,
+ .ip_pri_map = mv88e6085_g1_ip_pri_map,
+ .irl_init_all = mv88e6352_g2_irl_init_all,
+@@ -5182,6 +5198,7 @@ static const struct mv88e6xxx_ops mv88e6320_ops = {
+
+ static const struct mv88e6xxx_ops mv88e6321_ops = {
+ /* MV88E6XXX_FAMILY_6320 */
++ .setup_errata = mv88e6320_setup_errata,
+ .ieee_pri_map = mv88e6085_g1_ieee_pri_map,
+ .ip_pri_map = mv88e6085_g1_ip_pri_map,
+ .irl_init_all = mv88e6352_g2_irl_init_all,
+@@ -6242,7 +6259,8 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
+ .num_databases = 4096,
+ .num_macs = 8192,
+ .num_ports = 7,
+- .num_internal_phys = 5,
++ .num_internal_phys = 2,
++ .internal_phys_offset = 3,
+ .num_gpio = 15,
+ .max_vid = 4095,
+ .max_sid = 63,
+@@ -6269,7 +6287,8 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
+ .num_databases = 4096,
+ .num_macs = 8192,
+ .num_ports = 7,
+- .num_internal_phys = 5,
++ .num_internal_phys = 2,
++ .internal_phys_offset = 3,
+ .num_gpio = 15,
+ .max_vid = 4095,
+ .max_sid = 63,
+diff --git a/drivers/net/ethernet/google/gve/gve_ethtool.c b/drivers/net/ethernet/google/gve/gve_ethtool.c
+index bdfc6e77b2af56..1f5db1096d4a40 100644
+--- a/drivers/net/ethernet/google/gve/gve_ethtool.c
++++ b/drivers/net/ethernet/google/gve/gve_ethtool.c
+@@ -392,7 +392,9 @@ gve_get_ethtool_stats(struct net_device *netdev,
+ */
+ data[i++] = 0;
+ data[i++] = 0;
+- data[i++] = tx->dqo_tx.tail - tx->dqo_tx.head;
++ data[i++] =
++ (tx->dqo_tx.tail - tx->dqo_tx.head) &
++ tx->mask;
+ }
+ do {
+ start =
+diff --git a/drivers/net/ethernet/google/gve/gve_rx_dqo.c b/drivers/net/ethernet/google/gve/gve_rx_dqo.c
+index f0674a44356708..b5be2c18858a0c 100644
+--- a/drivers/net/ethernet/google/gve/gve_rx_dqo.c
++++ b/drivers/net/ethernet/google/gve/gve_rx_dqo.c
+@@ -114,7 +114,8 @@ void gve_rx_stop_ring_dqo(struct gve_priv *priv, int idx)
+ if (!gve_rx_was_added_to_block(priv, idx))
+ return;
+
+- page_pool_disable_direct_recycling(rx->dqo.page_pool);
++ if (rx->dqo.page_pool)
++ page_pool_disable_direct_recycling(rx->dqo.page_pool);
+ gve_remove_napi(priv, ntfy_idx);
+ gve_rx_remove_from_block(priv, idx);
+ gve_rx_reset_ring_dqo(priv, idx);
+diff --git a/drivers/net/ethernet/intel/igc/igc.h b/drivers/net/ethernet/intel/igc/igc.h
+index cd1d7b6c178235..c35cc5cb118569 100644
+--- a/drivers/net/ethernet/intel/igc/igc.h
++++ b/drivers/net/ethernet/intel/igc/igc.h
+@@ -337,8 +337,6 @@ struct igc_adapter {
+ struct igc_led_classdev *leds;
+ };
+
+-void igc_set_queue_napi(struct igc_adapter *adapter, int q_idx,
+- struct napi_struct *napi);
+ void igc_up(struct igc_adapter *adapter);
+ void igc_down(struct igc_adapter *adapter);
+ int igc_open(struct net_device *netdev);
+diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
+index 706dd26d4dde26..daf2a24ead0370 100644
+--- a/drivers/net/ethernet/intel/igc/igc_main.c
++++ b/drivers/net/ethernet/intel/igc/igc_main.c
+@@ -5021,8 +5021,8 @@ static int igc_sw_init(struct igc_adapter *adapter)
+ return 0;
+ }
+
+-void igc_set_queue_napi(struct igc_adapter *adapter, int vector,
+- struct napi_struct *napi)
++static void igc_set_queue_napi(struct igc_adapter *adapter, int vector,
++ struct napi_struct *napi)
+ {
+ struct igc_q_vector *q_vector = adapter->q_vector[vector];
+
+diff --git a/drivers/net/ethernet/intel/igc/igc_xdp.c b/drivers/net/ethernet/intel/igc/igc_xdp.c
+index 13bbd3346e01f3..869815f48ac1d2 100644
+--- a/drivers/net/ethernet/intel/igc/igc_xdp.c
++++ b/drivers/net/ethernet/intel/igc/igc_xdp.c
+@@ -86,7 +86,6 @@ static int igc_xdp_enable_pool(struct igc_adapter *adapter,
+ napi_disable(napi);
+ }
+
+- igc_set_queue_napi(adapter, queue_id, NULL);
+ set_bit(IGC_RING_FLAG_AF_XDP_ZC, &rx_ring->flags);
+ set_bit(IGC_RING_FLAG_AF_XDP_ZC, &tx_ring->flags);
+
+@@ -136,7 +135,6 @@ static int igc_xdp_disable_pool(struct igc_adapter *adapter, u16 queue_id)
+ xsk_pool_dma_unmap(pool, IGC_RX_DMA_ATTR);
+ clear_bit(IGC_RING_FLAG_AF_XDP_ZC, &rx_ring->flags);
+ clear_bit(IGC_RING_FLAG_AF_XDP_ZC, &tx_ring->flags);
+- igc_set_queue_napi(adapter, queue_id, napi);
+
+ if (needs_reset) {
+ napi_enable(napi);
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/qos.c b/drivers/net/ethernet/marvell/octeontx2/nic/qos.c
+index 0f844c14485a0e..35acc07bd96489 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/qos.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/qos.c
+@@ -165,6 +165,11 @@ static void __otx2_qos_txschq_cfg(struct otx2_nic *pfvf,
+
+ otx2_config_sched_shaping(pfvf, node, cfg, &num_regs);
+ } else if (level == NIX_TXSCH_LVL_TL2) {
++ /* configure parent txschq */
++ cfg->reg[num_regs] = NIX_AF_TL2X_PARENT(node->schq);
++ cfg->regval[num_regs] = (u64)hw->tx_link << 16;
++ num_regs++;
++
+ /* configure link cfg */
+ if (level == pfvf->qos.link_cfg_lvl) {
+ cfg->reg[num_regs] = NIX_AF_TL3_TL2X_LINKX_CFG(node->schq, hw->tx_link);
+diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/ethernet/microsoft/mana/mana_en.c
+index ae76ecc7a5d36c..2e124f74df797f 100644
+--- a/drivers/net/ethernet/microsoft/mana/mana_en.c
++++ b/drivers/net/ethernet/microsoft/mana/mana_en.c
+@@ -652,30 +652,16 @@ int mana_pre_alloc_rxbufs(struct mana_port_context *mpc, int new_mtu, int num_qu
+ mpc->rxbpre_total = 0;
+
+ for (i = 0; i < num_rxb; i++) {
+- if (mpc->rxbpre_alloc_size > PAGE_SIZE) {
+- va = netdev_alloc_frag(mpc->rxbpre_alloc_size);
+- if (!va)
+- goto error;
+-
+- page = virt_to_head_page(va);
+- /* Check if the frag falls back to single page */
+- if (compound_order(page) <
+- get_order(mpc->rxbpre_alloc_size)) {
+- put_page(page);
+- goto error;
+- }
+- } else {
+- page = dev_alloc_page();
+- if (!page)
+- goto error;
++ page = dev_alloc_pages(get_order(mpc->rxbpre_alloc_size));
++ if (!page)
++ goto error;
+
+- va = page_to_virt(page);
+- }
++ va = page_to_virt(page);
+
+ da = dma_map_single(dev, va + mpc->rxbpre_headroom,
+ mpc->rxbpre_datasize, DMA_FROM_DEVICE);
+ if (dma_mapping_error(dev, da)) {
+- put_page(virt_to_head_page(va));
++ put_page(page);
+ goto error;
+ }
+
+@@ -1660,7 +1646,7 @@ static void mana_rx_skb(void *buf_va, bool from_pool,
+ }
+
+ static void *mana_get_rxfrag(struct mana_rxq *rxq, struct device *dev,
+- dma_addr_t *da, bool *from_pool, bool is_napi)
++ dma_addr_t *da, bool *from_pool)
+ {
+ struct page *page;
+ void *va;
+@@ -1671,21 +1657,6 @@ static void *mana_get_rxfrag(struct mana_rxq *rxq, struct device *dev,
+ if (rxq->xdp_save_va) {
+ va = rxq->xdp_save_va;
+ rxq->xdp_save_va = NULL;
+- } else if (rxq->alloc_size > PAGE_SIZE) {
+- if (is_napi)
+- va = napi_alloc_frag(rxq->alloc_size);
+- else
+- va = netdev_alloc_frag(rxq->alloc_size);
+-
+- if (!va)
+- return NULL;
+-
+- page = virt_to_head_page(va);
+- /* Check if the frag falls back to single page */
+- if (compound_order(page) < get_order(rxq->alloc_size)) {
+- put_page(page);
+- return NULL;
+- }
+ } else {
+ page = page_pool_dev_alloc_pages(rxq->page_pool);
+ if (!page)
+@@ -1718,7 +1689,7 @@ static void mana_refill_rx_oob(struct device *dev, struct mana_rxq *rxq,
+ dma_addr_t da;
+ void *va;
+
+- va = mana_get_rxfrag(rxq, dev, &da, &from_pool, true);
++ va = mana_get_rxfrag(rxq, dev, &da, &from_pool);
+ if (!va)
+ return;
+
+@@ -2158,7 +2129,7 @@ static int mana_fill_rx_oob(struct mana_recv_buf_oob *rx_oob, u32 mem_key,
+ if (mpc->rxbufs_pre)
+ va = mana_get_rxbuf_pre(rxq, &da);
+ else
+- va = mana_get_rxfrag(rxq, dev, &da, &from_pool, false);
++ va = mana_get_rxfrag(rxq, dev, &da, &from_pool);
+
+ if (!va)
+ return -ENOMEM;
+@@ -2244,6 +2215,7 @@ static int mana_create_page_pool(struct mana_rxq *rxq, struct gdma_context *gc)
+ pprm.nid = gc->numa_node;
+ pprm.napi = &rxq->rx_cq.napi;
+ pprm.netdev = rxq->ndev;
++ pprm.order = get_order(rxq->alloc_size);
+
+ rxq->page_pool = page_pool_create(&pprm);
+
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index c0ae7db96f46ff..b7c3bfdaa1802b 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -3640,7 +3640,6 @@ static int stmmac_request_irq_multi_msi(struct net_device *dev)
+ {
+ struct stmmac_priv *priv = netdev_priv(dev);
+ enum request_irq_err irq_err;
+- cpumask_t cpu_mask;
+ int irq_idx = 0;
+ char *int_name;
+ int ret;
+@@ -3769,9 +3768,8 @@ static int stmmac_request_irq_multi_msi(struct net_device *dev)
+ irq_idx = i;
+ goto irq_error;
+ }
+- cpumask_clear(&cpu_mask);
+- cpumask_set_cpu(i % num_online_cpus(), &cpu_mask);
+- irq_set_affinity_hint(priv->rx_irq[i], &cpu_mask);
++ irq_set_affinity_hint(priv->rx_irq[i],
++ cpumask_of(i % num_online_cpus()));
+ }
+
+ /* Request Tx MSI irq */
+@@ -3794,9 +3792,8 @@ static int stmmac_request_irq_multi_msi(struct net_device *dev)
+ irq_idx = i;
+ goto irq_error;
+ }
+- cpumask_clear(&cpu_mask);
+- cpumask_set_cpu(i % num_online_cpus(), &cpu_mask);
+- irq_set_affinity_hint(priv->tx_irq[i], &cpu_mask);
++ irq_set_affinity_hint(priv->tx_irq[i],
++ cpumask_of(i % num_online_cpus()));
+ }
+
+ return 0;
+diff --git a/drivers/net/ethernet/wangxun/libwx/wx_lib.c b/drivers/net/ethernet/wangxun/libwx/wx_lib.c
+index 497abf2723a5e4..5b113fd71fe2eb 100644
+--- a/drivers/net/ethernet/wangxun/libwx/wx_lib.c
++++ b/drivers/net/ethernet/wangxun/libwx/wx_lib.c
+@@ -309,7 +309,8 @@ static bool wx_alloc_mapped_page(struct wx_ring *rx_ring,
+ return true;
+
+ page = page_pool_dev_alloc_pages(rx_ring->page_pool);
+- WARN_ON(!page);
++ if (unlikely(!page))
++ return false;
+ dma = page_pool_get_dma_addr(page);
+
+ bi->page_dma = dma;
+@@ -545,7 +546,8 @@ static void wx_rx_checksum(struct wx_ring *ring,
+ return;
+
+ /* Hardware can't guarantee csum if IPv6 Dest Header found */
+- if (dptype.prot != WX_DEC_PTYPE_PROT_SCTP && WX_RXD_IPV6EX(rx_desc))
++ if (dptype.prot != WX_DEC_PTYPE_PROT_SCTP &&
++ wx_test_staterr(rx_desc, WX_RXD_STAT_IPV6EX))
+ return;
+
+ /* if L4 checksum error */
+diff --git a/drivers/net/ethernet/wangxun/libwx/wx_type.h b/drivers/net/ethernet/wangxun/libwx/wx_type.h
+index b54bffda027b40..1d9ed1cffd67c3 100644
+--- a/drivers/net/ethernet/wangxun/libwx/wx_type.h
++++ b/drivers/net/ethernet/wangxun/libwx/wx_type.h
+@@ -460,6 +460,7 @@ enum WX_MSCA_CMD_value {
+ #define WX_RXD_STAT_L4CS BIT(7) /* L4 xsum calculated */
+ #define WX_RXD_STAT_IPCS BIT(8) /* IP xsum calculated */
+ #define WX_RXD_STAT_OUTERIPCS BIT(10) /* Cloud IP xsum calculated*/
++#define WX_RXD_STAT_IPV6EX BIT(12) /* IPv6 Dest Header */
+
+ #define WX_RXD_ERR_OUTERIPER BIT(26) /* CRC IP Header error */
+ #define WX_RXD_ERR_RXE BIT(29) /* Any MAC Error */
+@@ -535,8 +536,6 @@ enum wx_l2_ptypes {
+
+ #define WX_RXD_PKTTYPE(_rxd) \
+ ((le32_to_cpu((_rxd)->wb.lower.lo_dword.data) >> 9) & 0xFF)
+-#define WX_RXD_IPV6EX(_rxd) \
+- ((le32_to_cpu((_rxd)->wb.lower.lo_dword.data) >> 6) & 0x1)
+ /*********************** Transmit Descriptor Config Masks ****************/
+ #define WX_TXD_STAT_DD BIT(0) /* Descriptor Done */
+ #define WX_TXD_DTYP_DATA 0 /* Adv Data Descriptor */
+diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c
+index 46713d27412b76..92161af788afd2 100644
+--- a/drivers/net/phy/phy_device.c
++++ b/drivers/net/phy/phy_device.c
+@@ -240,6 +240,46 @@ static bool phy_drv_wol_enabled(struct phy_device *phydev)
+ return wol.wolopts != 0;
+ }
+
++static void phy_link_change(struct phy_device *phydev, bool up)
++{
++ struct net_device *netdev = phydev->attached_dev;
++
++ if (up)
++ netif_carrier_on(netdev);
++ else
++ netif_carrier_off(netdev);
++ phydev->adjust_link(netdev);
++ if (phydev->mii_ts && phydev->mii_ts->link_state)
++ phydev->mii_ts->link_state(phydev->mii_ts, phydev);
++}
++
++/**
++ * phy_uses_state_machine - test whether consumer driver uses PAL state machine
++ * @phydev: the target PHY device structure
++ *
++ * Ultimately, this aims to indirectly determine whether the PHY is attached
++ * to a consumer which uses the state machine by calling phy_start() and
++ * phy_stop().
++ *
++ * When the PHY driver consumer uses phylib, it must have previously called
++ * phy_connect_direct() or one of its derivatives, so that phy_prepare_link()
++ * has set up a hook for monitoring state changes.
++ *
++ * When the PHY driver is used by the MAC driver consumer through phylink (the
++ * only other provider of a phy_link_change() method), using the PHY state
++ * machine is not optional.
++ *
++ * Return: true if consumer calls phy_start() and phy_stop(), false otherwise.
++ */
++static bool phy_uses_state_machine(struct phy_device *phydev)
++{
++ if (phydev->phy_link_change == phy_link_change)
++ return phydev->attached_dev && phydev->adjust_link;
++
++ /* phydev->phy_link_change is implicitly phylink_phy_change() */
++ return true;
++}
++
+ static bool mdio_bus_phy_may_suspend(struct phy_device *phydev)
+ {
+ struct device_driver *drv = phydev->mdio.dev.driver;
+@@ -306,7 +346,7 @@ static __maybe_unused int mdio_bus_phy_suspend(struct device *dev)
+ * may call phy routines that try to grab the same lock, and that may
+ * lead to a deadlock.
+ */
+- if (phydev->attached_dev && phydev->adjust_link)
++ if (phy_uses_state_machine(phydev))
+ phy_stop_machine(phydev);
+
+ if (!mdio_bus_phy_may_suspend(phydev))
+@@ -360,7 +400,7 @@ static __maybe_unused int mdio_bus_phy_resume(struct device *dev)
+ }
+ }
+
+- if (phydev->attached_dev && phydev->adjust_link)
++ if (phy_uses_state_machine(phydev))
+ phy_start_machine(phydev);
+
+ return 0;
+@@ -1052,19 +1092,6 @@ struct phy_device *phy_find_first(struct mii_bus *bus)
+ }
+ EXPORT_SYMBOL(phy_find_first);
+
+-static void phy_link_change(struct phy_device *phydev, bool up)
+-{
+- struct net_device *netdev = phydev->attached_dev;
+-
+- if (up)
+- netif_carrier_on(netdev);
+- else
+- netif_carrier_off(netdev);
+- phydev->adjust_link(netdev);
+- if (phydev->mii_ts && phydev->mii_ts->link_state)
+- phydev->mii_ts->link_state(phydev->mii_ts, phydev);
+-}
+-
+ /**
+ * phy_prepare_link - prepares the PHY layer to monitor link status
+ * @phydev: target phy_device struct
+diff --git a/drivers/net/phy/sfp.c b/drivers/net/phy/sfp.c
+index 7dbcbf0a4ee26a..c88217af44a144 100644
+--- a/drivers/net/phy/sfp.c
++++ b/drivers/net/phy/sfp.c
+@@ -385,7 +385,7 @@ static void sfp_fixup_rollball(struct sfp *sfp)
+ sfp->phy_t_retry = msecs_to_jiffies(1000);
+ }
+
+-static void sfp_fixup_fs_2_5gt(struct sfp *sfp)
++static void sfp_fixup_rollball_wait4s(struct sfp *sfp)
+ {
+ sfp_fixup_rollball(sfp);
+
+@@ -399,7 +399,7 @@ static void sfp_fixup_fs_2_5gt(struct sfp *sfp)
+ static void sfp_fixup_fs_10gt(struct sfp *sfp)
+ {
+ sfp_fixup_10gbaset_30m(sfp);
+- sfp_fixup_fs_2_5gt(sfp);
++ sfp_fixup_rollball_wait4s(sfp);
+ }
+
+ static void sfp_fixup_halny_gsfp(struct sfp *sfp)
+@@ -479,9 +479,10 @@ static const struct sfp_quirk sfp_quirks[] = {
+ // PHY.
+ SFP_QUIRK_F("FS", "SFP-10G-T", sfp_fixup_fs_10gt),
+
+- // Fiberstore SFP-2.5G-T uses Rollball protocol to talk to the PHY and
+- // needs 4 sec wait before probing the PHY.
+- SFP_QUIRK_F("FS", "SFP-2.5G-T", sfp_fixup_fs_2_5gt),
++ // Fiberstore SFP-2.5G-T and SFP-10GM-T uses Rollball protocol to talk
++ // to the PHY and needs 4 sec wait before probing the PHY.
++ SFP_QUIRK_F("FS", "SFP-2.5G-T", sfp_fixup_rollball_wait4s),
++ SFP_QUIRK_F("FS", "SFP-10GM-T", sfp_fixup_rollball_wait4s),
+
+ // Fiberstore GPON-ONU-34-20BI can operate at 2500base-X, but report 1.2GBd
+ // NRZ in their EEPROM
+@@ -515,6 +516,8 @@ static const struct sfp_quirk sfp_quirks[] = {
+
+ SFP_QUIRK_F("OEM", "SFP-10G-T", sfp_fixup_rollball_cc),
+ SFP_QUIRK_M("OEM", "SFP-2.5G-T", sfp_quirk_oem_2_5g),
++ SFP_QUIRK_M("OEM", "SFP-2.5G-BX10-D", sfp_quirk_2500basex),
++ SFP_QUIRK_M("OEM", "SFP-2.5G-BX10-U", sfp_quirk_2500basex),
+ SFP_QUIRK_F("OEM", "RTSFP-10", sfp_fixup_rollball_cc),
+ SFP_QUIRK_F("OEM", "RTSFP-10G", sfp_fixup_rollball_cc),
+ SFP_QUIRK_F("Turris", "RTSFP-2.5G", sfp_fixup_rollball),
+diff --git a/drivers/net/ppp/ppp_synctty.c b/drivers/net/ppp/ppp_synctty.c
+index 644e99fc3623f5..9c4932198931f3 100644
+--- a/drivers/net/ppp/ppp_synctty.c
++++ b/drivers/net/ppp/ppp_synctty.c
+@@ -506,6 +506,11 @@ ppp_sync_txmunge(struct syncppp *ap, struct sk_buff *skb)
+ unsigned char *data;
+ int islcp;
+
++ /* Ensure we can safely access protocol field and LCP code */
++ if (!pskb_may_pull(skb, 3)) {
++ kfree_skb(skb);
++ return NULL;
++ }
+ data = skb->data;
+ proto = get_unaligned_be16(data);
+
+diff --git a/drivers/net/usb/asix_devices.c b/drivers/net/usb/asix_devices.c
+index 57d6e5abc30e88..da24941a6e4446 100644
+--- a/drivers/net/usb/asix_devices.c
++++ b/drivers/net/usb/asix_devices.c
+@@ -1421,6 +1421,19 @@ static const struct driver_info hg20f9_info = {
+ .data = FLAG_EEPROM_MAC,
+ };
+
++static const struct driver_info lyconsys_fibergecko100_info = {
++ .description = "LyconSys FiberGecko 100 USB 2.0 to SFP Adapter",
++ .bind = ax88178_bind,
++ .status = asix_status,
++ .link_reset = ax88178_link_reset,
++ .reset = ax88178_link_reset,
++ .flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_LINK_INTR |
++ FLAG_MULTI_PACKET,
++ .rx_fixup = asix_rx_fixup_common,
++ .tx_fixup = asix_tx_fixup,
++ .data = 0x20061201,
++};
++
+ static const struct usb_device_id products [] = {
+ {
+ // Linksys USB200M
+@@ -1578,6 +1591,10 @@ static const struct usb_device_id products [] = {
+ // Linux Automation GmbH USB 10Base-T1L
+ USB_DEVICE(0x33f7, 0x0004),
+ .driver_info = (unsigned long) &lxausb_t1l_info,
++}, {
++ /* LyconSys FiberGecko 100 */
++ USB_DEVICE(0x1d2a, 0x0801),
++ .driver_info = (unsigned long) &lyconsys_fibergecko100_info,
+ },
+ { }, // END
+ };
+diff --git a/drivers/net/usb/cdc_ether.c b/drivers/net/usb/cdc_ether.c
+index a6469235d904e7..a032c1ded40634 100644
+--- a/drivers/net/usb/cdc_ether.c
++++ b/drivers/net/usb/cdc_ether.c
+@@ -783,6 +783,13 @@ static const struct usb_device_id products[] = {
+ .driver_info = 0,
+ },
+
++/* Lenovo ThinkPad Hybrid USB-C with USB-A Dock (40af0135eu, based on Realtek RTL8153) */
++{
++ USB_DEVICE_AND_INTERFACE_INFO(LENOVO_VENDOR_ID, 0xa359, USB_CLASS_COMM,
++ USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE),
++ .driver_info = 0,
++},
++
+ /* Aquantia AQtion USB to 5GbE Controller (based on AQC111U) */
+ {
+ USB_DEVICE_AND_INTERFACE_INFO(AQUANTIA_VENDOR_ID, 0xc101,
+diff --git a/drivers/net/usb/r8152.c b/drivers/net/usb/r8152.c
+index 468c739740463d..96fa3857d8e257 100644
+--- a/drivers/net/usb/r8152.c
++++ b/drivers/net/usb/r8152.c
+@@ -785,6 +785,7 @@ enum rtl8152_flags {
+ #define DEVICE_ID_THINKPAD_USB_C_DONGLE 0x720c
+ #define DEVICE_ID_THINKPAD_USB_C_DOCK_GEN2 0xa387
+ #define DEVICE_ID_THINKPAD_USB_C_DOCK_GEN3 0x3062
++#define DEVICE_ID_THINKPAD_HYBRID_USB_C_DOCK 0xa359
+
+ struct tally_counter {
+ __le64 tx_packets;
+@@ -9787,6 +9788,7 @@ static bool rtl8152_supports_lenovo_macpassthru(struct usb_device *udev)
+ case DEVICE_ID_THINKPAD_USB_C_DOCK_GEN2:
+ case DEVICE_ID_THINKPAD_USB_C_DOCK_GEN3:
+ case DEVICE_ID_THINKPAD_USB_C_DONGLE:
++ case DEVICE_ID_THINKPAD_HYBRID_USB_C_DOCK:
+ return 1;
+ }
+ } else if (vendor_id == VENDOR_ID_REALTEK && parent_vendor_id == VENDOR_ID_LENOVO) {
+@@ -10064,6 +10066,8 @@ static const struct usb_device_id rtl8152_table[] = {
+ { USB_DEVICE(VENDOR_ID_MICROSOFT, 0x0927) },
+ { USB_DEVICE(VENDOR_ID_MICROSOFT, 0x0c5e) },
+ { USB_DEVICE(VENDOR_ID_SAMSUNG, 0xa101) },
++
++ /* Lenovo */
+ { USB_DEVICE(VENDOR_ID_LENOVO, 0x304f) },
+ { USB_DEVICE(VENDOR_ID_LENOVO, 0x3054) },
+ { USB_DEVICE(VENDOR_ID_LENOVO, 0x3062) },
+@@ -10074,7 +10078,9 @@ static const struct usb_device_id rtl8152_table[] = {
+ { USB_DEVICE(VENDOR_ID_LENOVO, 0x720c) },
+ { USB_DEVICE(VENDOR_ID_LENOVO, 0x7214) },
+ { USB_DEVICE(VENDOR_ID_LENOVO, 0x721e) },
++ { USB_DEVICE(VENDOR_ID_LENOVO, 0xa359) },
+ { USB_DEVICE(VENDOR_ID_LENOVO, 0xa387) },
++
+ { USB_DEVICE(VENDOR_ID_LINKSYS, 0x0041) },
+ { USB_DEVICE(VENDOR_ID_NVIDIA, 0x09ff) },
+ { USB_DEVICE(VENDOR_ID_TPLINK, 0x0601) },
+diff --git a/drivers/net/usb/r8153_ecm.c b/drivers/net/usb/r8153_ecm.c
+index 20b2df8d74ae1b..8d860dacdf49b2 100644
+--- a/drivers/net/usb/r8153_ecm.c
++++ b/drivers/net/usb/r8153_ecm.c
+@@ -135,6 +135,12 @@ static const struct usb_device_id products[] = {
+ USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE),
+ .driver_info = (unsigned long)&r8153_info,
+ },
++/* Lenovo ThinkPad Hybrid USB-C with USB-A Dock (40af0135eu, based on Realtek RTL8153) */
++{
++ USB_DEVICE_AND_INTERFACE_INFO(VENDOR_ID_LENOVO, 0xa359, USB_CLASS_COMM,
++ USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE),
++ .driver_info = (unsigned long)&r8153_info,
++},
+
+ { }, /* END */
+ };
+diff --git a/drivers/net/wireless/ath/ath11k/ahb.c b/drivers/net/wireless/ath/ath11k/ahb.c
+index f2fc04596d4817..eedba3766ba244 100644
+--- a/drivers/net/wireless/ath/ath11k/ahb.c
++++ b/drivers/net/wireless/ath/ath11k/ahb.c
+@@ -1,7 +1,7 @@
+ // SPDX-License-Identifier: BSD-3-Clause-Clear
+ /*
+ * Copyright (c) 2018-2019 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2022-2024 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2022-2025 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+ #include <linux/module.h>
+@@ -1290,6 +1290,7 @@ static void ath11k_ahb_remove(struct platform_device *pdev)
+ ath11k_core_deinit(ab);
+
+ qmi_fail:
++ ath11k_fw_destroy(ab);
+ ath11k_ahb_free_resources(ab);
+ }
+
+@@ -1309,6 +1310,7 @@ static void ath11k_ahb_shutdown(struct platform_device *pdev)
+ ath11k_core_deinit(ab);
+
+ free_resources:
++ ath11k_fw_destroy(ab);
+ ath11k_ahb_free_resources(ab);
+ }
+
+diff --git a/drivers/net/wireless/ath/ath11k/core.c b/drivers/net/wireless/ath/ath11k/core.c
+index c576bbba52bf15..12dd37c2e90440 100644
+--- a/drivers/net/wireless/ath/ath11k/core.c
++++ b/drivers/net/wireless/ath/ath11k/core.c
+@@ -1,7 +1,7 @@
+ // SPDX-License-Identifier: BSD-3-Clause-Clear
+ /*
+ * Copyright (c) 2018-2019 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2024 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2025 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+ #include <linux/module.h>
+@@ -2056,6 +2056,7 @@ void ath11k_core_halt(struct ath11k *ar)
+ ath11k_mac_scan_finish(ar);
+ ath11k_mac_peer_cleanup_all(ar);
+ cancel_delayed_work_sync(&ar->scan.timeout);
++ cancel_work_sync(&ar->channel_update_work);
+ cancel_work_sync(&ar->regd_update_work);
+ cancel_work_sync(&ab->update_11d_work);
+
+@@ -2346,7 +2347,6 @@ void ath11k_core_deinit(struct ath11k_base *ab)
+ ath11k_hif_power_down(ab);
+ ath11k_mac_destroy(ab);
+ ath11k_core_soc_destroy(ab);
+- ath11k_fw_destroy(ab);
+ }
+ EXPORT_SYMBOL(ath11k_core_deinit);
+
+diff --git a/drivers/net/wireless/ath/ath11k/core.h b/drivers/net/wireless/ath/ath11k/core.h
+index a9dc7fe7765a14..c142b79ba543bb 100644
+--- a/drivers/net/wireless/ath/ath11k/core.h
++++ b/drivers/net/wireless/ath/ath11k/core.h
+@@ -685,7 +685,7 @@ struct ath11k {
+ struct mutex conf_mutex;
+ /* protects the radio specific data like debug stats, ppdu_stats_info stats,
+ * vdev_stop_status info, scan data, ath11k_sta info, ath11k_vif info,
+- * channel context data, survey info, test mode data.
++ * channel context data, survey info, test mode data, channel_update_queue.
+ */
+ spinlock_t data_lock;
+
+@@ -743,6 +743,9 @@ struct ath11k {
+ struct completion bss_survey_done;
+
+ struct work_struct regd_update_work;
++ struct work_struct channel_update_work;
++ /* protected with data_lock */
++ struct list_head channel_update_queue;
+
+ struct work_struct wmi_mgmt_tx_work;
+ struct sk_buff_head wmi_mgmt_tx_queue;
+diff --git a/drivers/net/wireless/ath/ath11k/dp.c b/drivers/net/wireless/ath/ath11k/dp.c
+index fbf666d0ecf1dc..f124b7329e1ac2 100644
+--- a/drivers/net/wireless/ath/ath11k/dp.c
++++ b/drivers/net/wireless/ath/ath11k/dp.c
+@@ -1,7 +1,7 @@
+ // SPDX-License-Identifier: BSD-3-Clause-Clear
+ /*
+ * Copyright (c) 2018-2019 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2024 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2025 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+ #include <crypto/hash.h>
+@@ -104,14 +104,12 @@ void ath11k_dp_srng_cleanup(struct ath11k_base *ab, struct dp_srng *ring)
+ if (!ring->vaddr_unaligned)
+ return;
+
+- if (ring->cached) {
+- dma_unmap_single(ab->dev, ring->paddr_unaligned, ring->size,
+- DMA_FROM_DEVICE);
+- kfree(ring->vaddr_unaligned);
+- } else {
++ if (ring->cached)
++ dma_free_noncoherent(ab->dev, ring->size, ring->vaddr_unaligned,
++ ring->paddr_unaligned, DMA_FROM_DEVICE);
++ else
+ dma_free_coherent(ab->dev, ring->size, ring->vaddr_unaligned,
+ ring->paddr_unaligned);
+- }
+
+ ring->vaddr_unaligned = NULL;
+ }
+@@ -249,25 +247,14 @@ int ath11k_dp_srng_setup(struct ath11k_base *ab, struct dp_srng *ring,
+ default:
+ cached = false;
+ }
+-
+- if (cached) {
+- ring->vaddr_unaligned = kzalloc(ring->size, GFP_KERNEL);
+- if (!ring->vaddr_unaligned)
+- return -ENOMEM;
+-
+- ring->paddr_unaligned = dma_map_single(ab->dev,
+- ring->vaddr_unaligned,
+- ring->size,
+- DMA_FROM_DEVICE);
+- if (dma_mapping_error(ab->dev, ring->paddr_unaligned)) {
+- kfree(ring->vaddr_unaligned);
+- ring->vaddr_unaligned = NULL;
+- return -ENOMEM;
+- }
+- }
+ }
+
+- if (!cached)
++ if (cached)
++ ring->vaddr_unaligned = dma_alloc_noncoherent(ab->dev, ring->size,
++ &ring->paddr_unaligned,
++ DMA_FROM_DEVICE,
++ GFP_KERNEL);
++ else
+ ring->vaddr_unaligned = dma_alloc_coherent(ab->dev, ring->size,
+ &ring->paddr_unaligned,
+ GFP_KERNEL);
+diff --git a/drivers/net/wireless/ath/ath11k/fw.c b/drivers/net/wireless/ath/ath11k/fw.c
+index 4e36292a79db89..cbbd8e57119f28 100644
+--- a/drivers/net/wireless/ath/ath11k/fw.c
++++ b/drivers/net/wireless/ath/ath11k/fw.c
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: BSD-3-Clause-Clear
+ /*
+- * Copyright (c) 2022-2023, Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2022-2025 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+ #include "core.h"
+@@ -166,3 +166,4 @@ void ath11k_fw_destroy(struct ath11k_base *ab)
+ {
+ release_firmware(ab->fw.fw);
+ }
++EXPORT_SYMBOL(ath11k_fw_destroy);
+diff --git a/drivers/net/wireless/ath/ath11k/mac.c b/drivers/net/wireless/ath/ath11k/mac.c
+index 1298a3190a3c5d..f04dd4a35376ec 100644
+--- a/drivers/net/wireless/ath/ath11k/mac.c
++++ b/drivers/net/wireless/ath/ath11k/mac.c
+@@ -6283,6 +6283,7 @@ static void ath11k_mac_op_stop(struct ieee80211_hw *hw, bool suspend)
+ {
+ struct ath11k *ar = hw->priv;
+ struct htt_ppdu_stats_info *ppdu_stats, *tmp;
++ struct scan_chan_list_params *params;
+ int ret;
+
+ ath11k_mac_drain_tx(ar);
+@@ -6298,6 +6299,7 @@ static void ath11k_mac_op_stop(struct ieee80211_hw *hw, bool suspend)
+ mutex_unlock(&ar->conf_mutex);
+
+ cancel_delayed_work_sync(&ar->scan.timeout);
++ cancel_work_sync(&ar->channel_update_work);
+ cancel_work_sync(&ar->regd_update_work);
+ cancel_work_sync(&ar->ab->update_11d_work);
+
+@@ -6307,10 +6309,19 @@ static void ath11k_mac_op_stop(struct ieee80211_hw *hw, bool suspend)
+ }
+
+ spin_lock_bh(&ar->data_lock);
++
+ list_for_each_entry_safe(ppdu_stats, tmp, &ar->ppdu_stats_info, list) {
+ list_del(&ppdu_stats->list);
+ kfree(ppdu_stats);
+ }
++
++ while ((params = list_first_entry_or_null(&ar->channel_update_queue,
++ struct scan_chan_list_params,
++ list))) {
++ list_del(&params->list);
++ kfree(params);
++ }
++
+ spin_unlock_bh(&ar->data_lock);
+
+ rcu_assign_pointer(ar->ab->pdevs_active[ar->pdev_idx], NULL);
+@@ -10014,6 +10025,7 @@ static const struct wiphy_iftype_ext_capab ath11k_iftypes_ext_capa[] = {
+
+ static void __ath11k_mac_unregister(struct ath11k *ar)
+ {
++ cancel_work_sync(&ar->channel_update_work);
+ cancel_work_sync(&ar->regd_update_work);
+
+ ieee80211_unregister_hw(ar->hw);
+@@ -10413,6 +10425,8 @@ int ath11k_mac_allocate(struct ath11k_base *ab)
+ init_completion(&ar->thermal.wmi_sync);
+
+ INIT_DELAYED_WORK(&ar->scan.timeout, ath11k_scan_timeout_work);
++ INIT_WORK(&ar->channel_update_work, ath11k_regd_update_chan_list_work);
++ INIT_LIST_HEAD(&ar->channel_update_queue);
+ INIT_WORK(&ar->regd_update_work, ath11k_regd_update_work);
+
+ INIT_WORK(&ar->wmi_mgmt_tx_work, ath11k_mgmt_over_wmi_tx_work);
+diff --git a/drivers/net/wireless/ath/ath11k/pci.c b/drivers/net/wireless/ath/ath11k/pci.c
+index eaac9eabcc70a6..4d96f838b5ae0a 100644
+--- a/drivers/net/wireless/ath/ath11k/pci.c
++++ b/drivers/net/wireless/ath/ath11k/pci.c
+@@ -1,7 +1,7 @@
+ // SPDX-License-Identifier: BSD-3-Clause-Clear
+ /*
+ * Copyright (c) 2019-2020 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2024 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2025 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+ #include <linux/module.h>
+@@ -986,6 +986,7 @@ static void ath11k_pci_remove(struct pci_dev *pdev)
+ ath11k_core_deinit(ab);
+
+ qmi_fail:
++ ath11k_fw_destroy(ab);
+ ath11k_mhi_unregister(ab_pci);
+
+ ath11k_pcic_free_irq(ab);
+diff --git a/drivers/net/wireless/ath/ath11k/reg.c b/drivers/net/wireless/ath/ath11k/reg.c
+index 7bfe47ad62a07f..d62a2014315a08 100644
+--- a/drivers/net/wireless/ath/ath11k/reg.c
++++ b/drivers/net/wireless/ath/ath11k/reg.c
+@@ -124,32 +124,7 @@ int ath11k_reg_update_chan_list(struct ath11k *ar, bool wait)
+ struct channel_param *ch;
+ enum nl80211_band band;
+ int num_channels = 0;
+- int i, ret, left;
+-
+- if (wait && ar->state_11d != ATH11K_11D_IDLE) {
+- left = wait_for_completion_timeout(&ar->completed_11d_scan,
+- ATH11K_SCAN_TIMEOUT_HZ);
+- if (!left) {
+- ath11k_dbg(ar->ab, ATH11K_DBG_REG,
+- "failed to receive 11d scan complete: timed out\n");
+- ar->state_11d = ATH11K_11D_IDLE;
+- }
+- ath11k_dbg(ar->ab, ATH11K_DBG_REG,
+- "11d scan wait left time %d\n", left);
+- }
+-
+- if (wait &&
+- (ar->scan.state == ATH11K_SCAN_STARTING ||
+- ar->scan.state == ATH11K_SCAN_RUNNING)) {
+- left = wait_for_completion_timeout(&ar->scan.completed,
+- ATH11K_SCAN_TIMEOUT_HZ);
+- if (!left)
+- ath11k_dbg(ar->ab, ATH11K_DBG_REG,
+- "failed to receive hw scan complete: timed out\n");
+-
+- ath11k_dbg(ar->ab, ATH11K_DBG_REG,
+- "hw scan wait left time %d\n", left);
+- }
++ int i, ret = 0;
+
+ if (ar->state == ATH11K_STATE_RESTARTING)
+ return 0;
+@@ -231,6 +206,16 @@ int ath11k_reg_update_chan_list(struct ath11k *ar, bool wait)
+ }
+ }
+
++ if (wait) {
++ spin_lock_bh(&ar->data_lock);
++ list_add_tail(&params->list, &ar->channel_update_queue);
++ spin_unlock_bh(&ar->data_lock);
++
++ queue_work(ar->ab->workqueue, &ar->channel_update_work);
++
++ return 0;
++ }
++
+ ret = ath11k_wmi_send_scan_chan_list_cmd(ar, params);
+ kfree(params);
+
+@@ -811,6 +796,54 @@ ath11k_reg_build_regd(struct ath11k_base *ab,
+ return new_regd;
+ }
+
++void ath11k_regd_update_chan_list_work(struct work_struct *work)
++{
++ struct ath11k *ar = container_of(work, struct ath11k,
++ channel_update_work);
++ struct scan_chan_list_params *params;
++ struct list_head local_update_list;
++ int left;
++
++ INIT_LIST_HEAD(&local_update_list);
++
++ spin_lock_bh(&ar->data_lock);
++ list_splice_tail_init(&ar->channel_update_queue, &local_update_list);
++ spin_unlock_bh(&ar->data_lock);
++
++ while ((params = list_first_entry_or_null(&local_update_list,
++ struct scan_chan_list_params,
++ list))) {
++ if (ar->state_11d != ATH11K_11D_IDLE) {
++ left = wait_for_completion_timeout(&ar->completed_11d_scan,
++ ATH11K_SCAN_TIMEOUT_HZ);
++ if (!left) {
++ ath11k_dbg(ar->ab, ATH11K_DBG_REG,
++ "failed to receive 11d scan complete: timed out\n");
++ ar->state_11d = ATH11K_11D_IDLE;
++ }
++
++ ath11k_dbg(ar->ab, ATH11K_DBG_REG,
++ "reg 11d scan wait left time %d\n", left);
++ }
++
++ if ((ar->scan.state == ATH11K_SCAN_STARTING ||
++ ar->scan.state == ATH11K_SCAN_RUNNING)) {
++ left = wait_for_completion_timeout(&ar->scan.completed,
++ ATH11K_SCAN_TIMEOUT_HZ);
++ if (!left)
++ ath11k_dbg(ar->ab, ATH11K_DBG_REG,
++ "failed to receive hw scan complete: timed out\n");
++
++ ath11k_dbg(ar->ab, ATH11K_DBG_REG,
++ "reg hw scan wait left time %d\n", left);
++ }
++
++ ath11k_wmi_send_scan_chan_list_cmd(ar, params);
++ list_del(&params->list);
++ kfree(params);
++ }
++}
++
+ static bool ath11k_reg_is_world_alpha(char *alpha)
+ {
+ if (alpha[0] == '0' && alpha[1] == '0')
+diff --git a/drivers/net/wireless/ath/ath11k/reg.h b/drivers/net/wireless/ath/ath11k/reg.h
+index 263ea90619483e..72b48359401576 100644
+--- a/drivers/net/wireless/ath/ath11k/reg.h
++++ b/drivers/net/wireless/ath/ath11k/reg.h
+@@ -1,7 +1,7 @@
+ /* SPDX-License-Identifier: BSD-3-Clause-Clear */
+ /*
+ * Copyright (c) 2019 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2022-2024 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2022-2025 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+ #ifndef ATH11K_REG_H
+@@ -33,6 +33,7 @@ void ath11k_reg_init(struct ath11k *ar);
+ void ath11k_reg_reset_info(struct cur_regulatory_info *reg_info);
+ void ath11k_reg_free(struct ath11k_base *ab);
+ void ath11k_regd_update_work(struct work_struct *work);
++void ath11k_regd_update_chan_list_work(struct work_struct *work);
+ struct ieee80211_regdomain *
+ ath11k_reg_build_regd(struct ath11k_base *ab,
+ struct cur_regulatory_info *reg_info, bool intersect,
+diff --git a/drivers/net/wireless/ath/ath11k/wmi.h b/drivers/net/wireless/ath/ath11k/wmi.h
+index 8982b909c821e6..30b4b0c1768269 100644
+--- a/drivers/net/wireless/ath/ath11k/wmi.h
++++ b/drivers/net/wireless/ath/ath11k/wmi.h
+@@ -3817,6 +3817,7 @@ struct wmi_stop_scan_cmd {
+ };
+
+ struct scan_chan_list_params {
++ struct list_head list;
+ u32 pdev_id;
+ u16 nallchans;
+ struct channel_param ch_param[];
+diff --git a/drivers/net/wireless/ath/ath12k/dp_mon.c b/drivers/net/wireless/ath/ath12k/dp_mon.c
+index 5a21961cfd4655..0b089389087d33 100644
+--- a/drivers/net/wireless/ath/ath12k/dp_mon.c
++++ b/drivers/net/wireless/ath/ath12k/dp_mon.c
+@@ -743,7 +743,6 @@ ath12k_dp_mon_rx_parse_status_tlv(struct ath12k_base *ab,
+ }
+ case HAL_RX_MPDU_START: {
+ const struct hal_rx_mpdu_start *mpdu_start = tlv_data;
+- struct dp_mon_mpdu *mon_mpdu = pmon->mon_mpdu;
+ u16 peer_id;
+
+ info[1] = __le32_to_cpu(mpdu_start->info1);
+@@ -760,65 +759,17 @@ ath12k_dp_mon_rx_parse_status_tlv(struct ath12k_base *ab,
+ u32_get_bits(info[0], HAL_RX_MPDU_START_INFO1_PEERID);
+ }
+
+- mon_mpdu = kzalloc(sizeof(*mon_mpdu), GFP_ATOMIC);
+- if (!mon_mpdu)
+- return HAL_RX_MON_STATUS_PPDU_NOT_DONE;
+-
+ break;
+ }
+ case HAL_RX_MSDU_START:
+ /* TODO: add msdu start parsing logic */
+ break;
+- case HAL_MON_BUF_ADDR: {
+- struct dp_rxdma_mon_ring *buf_ring = &ab->dp.rxdma_mon_buf_ring;
+- const struct dp_mon_packet_info *packet_info = tlv_data;
+- int buf_id = u32_get_bits(packet_info->cookie,
+- DP_RXDMA_BUF_COOKIE_BUF_ID);
+- struct sk_buff *msdu;
+- struct dp_mon_mpdu *mon_mpdu = pmon->mon_mpdu;
+- struct ath12k_skb_rxcb *rxcb;
+-
+- spin_lock_bh(&buf_ring->idr_lock);
+- msdu = idr_remove(&buf_ring->bufs_idr, buf_id);
+- spin_unlock_bh(&buf_ring->idr_lock);
+-
+- if (unlikely(!msdu)) {
+- ath12k_warn(ab, "monitor destination with invalid buf_id %d\n",
+- buf_id);
+- return HAL_RX_MON_STATUS_PPDU_NOT_DONE;
+- }
+-
+- rxcb = ATH12K_SKB_RXCB(msdu);
+- dma_unmap_single(ab->dev, rxcb->paddr,
+- msdu->len + skb_tailroom(msdu),
+- DMA_FROM_DEVICE);
+-
+- if (mon_mpdu->tail)
+- mon_mpdu->tail->next = msdu;
+- else
+- mon_mpdu->tail = msdu;
+-
+- ath12k_dp_mon_buf_replenish(ab, buf_ring, 1);
+-
+- break;
+- }
+- case HAL_RX_MSDU_END: {
+- const struct rx_msdu_end_qcn9274 *msdu_end = tlv_data;
+- bool is_first_msdu_in_mpdu;
+- u16 msdu_end_info;
+-
+- msdu_end_info = __le16_to_cpu(msdu_end->info5);
+- is_first_msdu_in_mpdu = u32_get_bits(msdu_end_info,
+- RX_MSDU_END_INFO5_FIRST_MSDU);
+- if (is_first_msdu_in_mpdu) {
+- pmon->mon_mpdu->head = pmon->mon_mpdu->tail;
+- pmon->mon_mpdu->tail = NULL;
+- }
+- break;
+- }
++ case HAL_MON_BUF_ADDR:
++ return HAL_RX_MON_STATUS_BUF_ADDR;
++ case HAL_RX_MSDU_END:
++ return HAL_RX_MON_STATUS_MSDU_END;
+ case HAL_RX_MPDU_END:
+- list_add_tail(&pmon->mon_mpdu->list, &pmon->dp_rx_mon_mpdu_list);
+- break;
++ return HAL_RX_MON_STATUS_MPDU_END;
+ case HAL_DUMMY:
+ return HAL_RX_MON_STATUS_BUF_DONE;
+ case HAL_RX_PPDU_END_STATUS_DONE:
+@@ -1216,7 +1167,10 @@ ath12k_dp_mon_parse_rx_dest(struct ath12k_base *ab, struct ath12k_mon_data *pmon
+ if ((ptr - skb->data) >= DP_RX_BUFFER_SIZE)
+ break;
+
+- } while (hal_status == HAL_RX_MON_STATUS_PPDU_NOT_DONE);
++ } while ((hal_status == HAL_RX_MON_STATUS_PPDU_NOT_DONE) ||
++ (hal_status == HAL_RX_MON_STATUS_BUF_ADDR) ||
++ (hal_status == HAL_RX_MON_STATUS_MPDU_END) ||
++ (hal_status == HAL_RX_MON_STATUS_MSDU_END));
+
+ return hal_status;
+ }
+@@ -2519,7 +2473,7 @@ int ath12k_dp_mon_rx_process_stats(struct ath12k *ar, int mac_id,
+ dest_idx = 0;
+ move_next:
+ ath12k_dp_mon_buf_replenish(ab, buf_ring, 1);
+- ath12k_hal_srng_src_get_next_entry(ab, srng);
++ ath12k_hal_srng_dst_get_next_entry(ab, srng);
+ num_buffs_reaped++;
+ }
+
+diff --git a/drivers/net/wireless/ath/ath12k/dp_rx.c b/drivers/net/wireless/ath/ath12k/dp_rx.c
+index 68d609f2ac60e1..ae6608b10bb570 100644
+--- a/drivers/net/wireless/ath/ath12k/dp_rx.c
++++ b/drivers/net/wireless/ath/ath12k/dp_rx.c
+@@ -2530,6 +2530,29 @@ static void ath12k_dp_rx_deliver_msdu(struct ath12k *ar, struct napi_struct *nap
+ ieee80211_rx_napi(ath12k_ar_to_hw(ar), pubsta, msdu, napi);
+ }
+
++static bool ath12k_dp_rx_check_nwifi_hdr_len_valid(struct ath12k_base *ab,
++ struct hal_rx_desc *rx_desc,
++ struct sk_buff *msdu)
++{
++ struct ieee80211_hdr *hdr;
++ u8 decap_type;
++ u32 hdr_len;
++
++ decap_type = ath12k_dp_rx_h_decap_type(ab, rx_desc);
++ if (decap_type != DP_RX_DECAP_TYPE_NATIVE_WIFI)
++ return true;
++
++ hdr = (struct ieee80211_hdr *)msdu->data;
++ hdr_len = ieee80211_hdrlen(hdr->frame_control);
++
++ if ((likely(hdr_len <= DP_MAX_NWIFI_HDR_LEN)))
++ return true;
++
++ ab->soc_stats.invalid_rbm++;
++ WARN_ON_ONCE(1);
++ return false;
++}
++
+ static int ath12k_dp_rx_process_msdu(struct ath12k *ar,
+ struct sk_buff *msdu,
+ struct sk_buff_head *msdu_list,
+@@ -2588,6 +2611,11 @@ static int ath12k_dp_rx_process_msdu(struct ath12k *ar,
+ }
+ }
+
++ if (unlikely(!ath12k_dp_rx_check_nwifi_hdr_len_valid(ab, rx_desc, msdu))) {
++ ret = -EINVAL;
++ goto free_out;
++ }
++
+ ath12k_dp_rx_h_ppdu(ar, rx_desc, rx_status);
+ ath12k_dp_rx_h_mpdu(ar, msdu, rx_desc, rx_status);
+
+@@ -2978,6 +3006,9 @@ static int ath12k_dp_rx_h_verify_tkip_mic(struct ath12k *ar, struct ath12k_peer
+ RX_FLAG_IV_STRIPPED | RX_FLAG_DECRYPTED;
+ skb_pull(msdu, hal_rx_desc_sz);
+
++ if (unlikely(!ath12k_dp_rx_check_nwifi_hdr_len_valid(ab, rx_desc, msdu)))
++ return -EINVAL;
++
+ ath12k_dp_rx_h_ppdu(ar, rx_desc, rxs);
+ ath12k_dp_rx_h_undecap(ar, msdu, rx_desc,
+ HAL_ENCRYPT_TYPE_TKIP_MIC, rxs, true);
+@@ -3720,6 +3751,9 @@ static int ath12k_dp_rx_h_null_q_desc(struct ath12k *ar, struct sk_buff *msdu,
+ skb_put(msdu, hal_rx_desc_sz + l3pad_bytes + msdu_len);
+ skb_pull(msdu, hal_rx_desc_sz + l3pad_bytes);
+ }
++ if (unlikely(!ath12k_dp_rx_check_nwifi_hdr_len_valid(ab, desc, msdu)))
++ return -EINVAL;
++
+ ath12k_dp_rx_h_ppdu(ar, desc, status);
+
+ ath12k_dp_rx_h_mpdu(ar, msdu, desc, status);
+@@ -3764,7 +3798,7 @@ static bool ath12k_dp_rx_h_reo_err(struct ath12k *ar, struct sk_buff *msdu,
+ return drop;
+ }
+
+-static void ath12k_dp_rx_h_tkip_mic_err(struct ath12k *ar, struct sk_buff *msdu,
++static bool ath12k_dp_rx_h_tkip_mic_err(struct ath12k *ar, struct sk_buff *msdu,
+ struct ieee80211_rx_status *status)
+ {
+ struct ath12k_base *ab = ar->ab;
+@@ -3782,6 +3816,9 @@ static void ath12k_dp_rx_h_tkip_mic_err(struct ath12k *ar, struct sk_buff *msdu,
+ skb_put(msdu, hal_rx_desc_sz + l3pad_bytes + msdu_len);
+ skb_pull(msdu, hal_rx_desc_sz + l3pad_bytes);
+
++ if (unlikely(!ath12k_dp_rx_check_nwifi_hdr_len_valid(ab, desc, msdu)))
++ return true;
++
+ ath12k_dp_rx_h_ppdu(ar, desc, status);
+
+ status->flag |= (RX_FLAG_MMIC_STRIPPED | RX_FLAG_MMIC_ERROR |
+@@ -3789,6 +3826,7 @@ static void ath12k_dp_rx_h_tkip_mic_err(struct ath12k *ar, struct sk_buff *msdu,
+
+ ath12k_dp_rx_h_undecap(ar, msdu, desc,
+ HAL_ENCRYPT_TYPE_TKIP_MIC, status, false);
++ return false;
+ }
+
+ static bool ath12k_dp_rx_h_rxdma_err(struct ath12k *ar, struct sk_buff *msdu,
+@@ -3807,7 +3845,7 @@ static bool ath12k_dp_rx_h_rxdma_err(struct ath12k *ar, struct sk_buff *msdu,
+ case HAL_REO_ENTR_RING_RXDMA_ECODE_TKIP_MIC_ERR:
+ err_bitmap = ath12k_dp_rx_h_mpdu_err(ab, rx_desc);
+ if (err_bitmap & HAL_RX_MPDU_ERR_TKIP_MIC) {
+- ath12k_dp_rx_h_tkip_mic_err(ar, msdu, status);
++ drop = ath12k_dp_rx_h_tkip_mic_err(ar, msdu, status);
+ break;
+ }
+ fallthrough;
+diff --git a/drivers/net/wireless/ath/ath12k/hal_rx.h b/drivers/net/wireless/ath/ath12k/hal_rx.h
+index b08aa2e79f4112..54f3eaeca8bb96 100644
+--- a/drivers/net/wireless/ath/ath12k/hal_rx.h
++++ b/drivers/net/wireless/ath/ath12k/hal_rx.h
+@@ -108,6 +108,9 @@ enum hal_rx_mon_status {
+ HAL_RX_MON_STATUS_PPDU_NOT_DONE,
+ HAL_RX_MON_STATUS_PPDU_DONE,
+ HAL_RX_MON_STATUS_BUF_DONE,
++ HAL_RX_MON_STATUS_BUF_ADDR,
++ HAL_RX_MON_STATUS_MPDU_END,
++ HAL_RX_MON_STATUS_MSDU_END,
+ };
+
+ #define HAL_RX_MAX_MPDU 256
+diff --git a/drivers/net/wireless/ath/ath12k/pci.c b/drivers/net/wireless/ath/ath12k/pci.c
+index 2851f6944b864b..ee14b848454879 100644
+--- a/drivers/net/wireless/ath/ath12k/pci.c
++++ b/drivers/net/wireless/ath/ath12k/pci.c
+@@ -1736,9 +1736,9 @@ static void ath12k_pci_remove(struct pci_dev *pdev)
+ cancel_work_sync(&ab->reset_work);
+ cancel_work_sync(&ab->dump_work);
+ ath12k_core_deinit(ab);
+- ath12k_fw_unmap(ab);
+
+ qmi_fail:
++ ath12k_fw_unmap(ab);
+ ath12k_mhi_unregister(ab_pci);
+
+ ath12k_pci_free_irq(ab);
+diff --git a/drivers/net/wireless/ath/ath9k/ath9k.h b/drivers/net/wireless/ath/ath9k/ath9k.h
+index a728cc0387df8f..cbcf37008556f4 100644
+--- a/drivers/net/wireless/ath/ath9k/ath9k.h
++++ b/drivers/net/wireless/ath/ath9k/ath9k.h
+@@ -1018,7 +1018,7 @@ struct ath_softc {
+
+ u8 gtt_cnt;
+ u32 intrstatus;
+- u32 rx_active_check_time;
++ unsigned long rx_active_check_time;
+ u32 rx_active_count;
+ u16 ps_flags; /* PS_* */
+ bool ps_enabled;
+diff --git a/drivers/net/wireless/mediatek/mt76/eeprom.c b/drivers/net/wireless/mediatek/mt76/eeprom.c
+index 0bc66cc19acd1e..443517d06c9fa9 100644
+--- a/drivers/net/wireless/mediatek/mt76/eeprom.c
++++ b/drivers/net/wireless/mediatek/mt76/eeprom.c
+@@ -95,6 +95,10 @@ int mt76_get_of_data_from_mtd(struct mt76_dev *dev, void *eep, int offset, int l
+
+ #ifdef CONFIG_NL80211_TESTMODE
+ dev->test_mtd.name = devm_kstrdup(dev->dev, part, GFP_KERNEL);
++ if (!dev->test_mtd.name) {
++ ret = -ENOMEM;
++ goto out_put_node;
++ }
+ dev->test_mtd.offset = offset;
+ #endif
+
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76.h b/drivers/net/wireless/mediatek/mt76/mt76.h
+index 132148f7b10701..05651efb549ecf 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76.h
++++ b/drivers/net/wireless/mediatek/mt76/mt76.h
+@@ -769,6 +769,7 @@ struct mt76_testmode_data {
+
+ struct mt76_vif_link {
+ u8 idx;
++ u8 link_idx;
+ u8 omac_idx;
+ u8 band_idx;
+ u8 wmm_idx;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
+index f30cf9e716105d..d0e49d68c5dbf0 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
+@@ -1168,7 +1168,7 @@ int mt76_connac_mcu_uni_add_dev(struct mt76_phy *phy,
+ .tag = cpu_to_le16(DEV_INFO_ACTIVE),
+ .len = cpu_to_le16(sizeof(struct req_tlv)),
+ .active = enable,
+- .link_idx = mvif->idx,
++ .link_idx = mvif->link_idx,
+ },
+ };
+ struct {
+@@ -1191,7 +1191,7 @@ int mt76_connac_mcu_uni_add_dev(struct mt76_phy *phy,
+ .bmc_tx_wlan_idx = cpu_to_le16(wcid->idx),
+ .sta_idx = cpu_to_le16(wcid->idx),
+ .conn_state = 1,
+- .link_idx = mvif->idx,
++ .link_idx = mvif->link_idx,
+ },
+ };
+ int err, idx, cmd, len;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x2/usb.c b/drivers/net/wireless/mediatek/mt76/mt76x2/usb.c
+index e832ad53e2393b..a4f4d12f904e7c 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x2/usb.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76x2/usb.c
+@@ -22,6 +22,7 @@ static const struct usb_device_id mt76x2u_device_table[] = {
+ { USB_DEVICE(0x0846, 0x9053) }, /* Netgear A6210 */
+ { USB_DEVICE(0x045e, 0x02e6) }, /* XBox One Wireless Adapter */
+ { USB_DEVICE(0x045e, 0x02fe) }, /* XBox One Wireless Adapter */
++ { USB_DEVICE(0x2357, 0x0137) }, /* TP-Link TL-WDN6200 */
+ { },
+ };
+
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/init.c b/drivers/net/wireless/mediatek/mt76/mt7925/init.c
+index f41ca42484978e..a2bb36dab2310e 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7925/init.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7925/init.c
+@@ -244,6 +244,7 @@ int mt7925_register_device(struct mt792x_dev *dev)
+ dev->mt76.tx_worker.fn = mt792x_tx_worker;
+
+ INIT_DELAYED_WORK(&dev->pm.ps_work, mt792x_pm_power_save_work);
++ INIT_DELAYED_WORK(&dev->mlo_pm_work, mt7925_mlo_pm_work);
+ INIT_WORK(&dev->pm.wake_work, mt792x_pm_wake_work);
+ spin_lock_init(&dev->pm.wake.lock);
+ mutex_init(&dev->pm.mutex);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/main.c b/drivers/net/wireless/mediatek/mt76/mt7925/main.c
+index 98daf80ac13136..61080d8b4b4648 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7925/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7925/main.c
+@@ -256,7 +256,7 @@ int mt7925_init_mlo_caps(struct mt792x_phy *phy)
+
+ ext_capab[0].eml_capabilities = phy->eml_cap;
+ ext_capab[0].mld_capa_and_ops =
+- u16_encode_bits(1, IEEE80211_MLD_CAP_OP_MAX_SIMUL_LINKS);
++ u16_encode_bits(0, IEEE80211_MLD_CAP_OP_MAX_SIMUL_LINKS);
+
+ wiphy->flags |= WIPHY_FLAG_SUPPORTS_MLO;
+ wiphy->iftype_ext_capab = ext_capab;
+@@ -360,10 +360,15 @@ static int mt7925_mac_link_bss_add(struct mt792x_dev *dev,
+ struct mt76_txq *mtxq;
+ int idx, ret = 0;
+
+- mconf->mt76.idx = __ffs64(~dev->mt76.vif_mask);
+- if (mconf->mt76.idx >= MT792x_MAX_INTERFACES) {
+- ret = -ENOSPC;
+- goto out;
++ if (vif->type == NL80211_IFTYPE_P2P_DEVICE) {
++ mconf->mt76.idx = MT792x_MAX_INTERFACES;
++ } else {
++ mconf->mt76.idx = __ffs64(~dev->mt76.vif_mask);
++
++ if (mconf->mt76.idx >= MT792x_MAX_INTERFACES) {
++ ret = -ENOSPC;
++ goto out;
++ }
+ }
+
+ mconf->mt76.omac_idx = ieee80211_vif_is_mld(vif) ?
+@@ -371,6 +376,7 @@ static int mt7925_mac_link_bss_add(struct mt792x_dev *dev,
+ mconf->mt76.band_idx = 0xff;
+ mconf->mt76.wmm_idx = ieee80211_vif_is_mld(vif) ?
+ 0 : mconf->mt76.idx % MT76_CONNAC_MAX_WMM_SETS;
++ mconf->mt76.link_idx = hweight16(mvif->valid_links);
+
+ if (mvif->phy->mt76->chandef.chan->band != NL80211_BAND_2GHZ)
+ mconf->mt76.basic_rates_idx = MT792x_BASIC_RATES_TBL + 4;
+@@ -421,6 +427,7 @@ mt7925_add_interface(struct ieee80211_hw *hw, struct ieee80211_vif *vif)
+ mvif->bss_conf.vif = mvif;
+ mvif->sta.vif = mvif;
+ mvif->deflink_id = IEEE80211_LINK_UNSPECIFIED;
++ mvif->mlo_pm_state = MT792x_MLO_LINK_DISASSOC;
+
+ ret = mt7925_mac_link_bss_add(dev, &vif->bss_conf, &mvif->sta.deflink);
+ if (ret < 0)
+@@ -1149,7 +1156,12 @@ static void mt7925_mac_link_sta_remove(struct mt76_dev *mdev,
+ struct mt792x_bss_conf *mconf;
+
+ mconf = mt792x_link_conf_to_mconf(link_conf);
+- mt792x_mac_link_bss_remove(dev, mconf, mlink);
++
++ if (ieee80211_vif_is_mld(vif))
++ mt792x_mac_link_bss_remove(dev, mconf, mlink);
++ else
++ mt7925_mcu_add_bss_info(&dev->phy, mconf->mt76.ctx, link_conf,
++ link_sta, false);
+ }
+
+ spin_lock_bh(&mdev->sta_poll_lock);
+@@ -1169,6 +1181,31 @@ mt7925_mac_sta_remove_links(struct mt792x_dev *dev, struct ieee80211_vif *vif,
+ struct mt76_wcid *wcid;
+ unsigned int link_id;
+
++ /* clean up bss before starec */
++ for_each_set_bit(link_id, &old_links, IEEE80211_MLD_MAX_NUM_LINKS) {
++ struct ieee80211_link_sta *link_sta;
++ struct ieee80211_bss_conf *link_conf;
++ struct mt792x_bss_conf *mconf;
++ struct mt792x_link_sta *mlink;
++
++ link_sta = mt792x_sta_to_link_sta(vif, sta, link_id);
++ if (!link_sta)
++ continue;
++
++ mlink = mt792x_sta_to_link(msta, link_id);
++ if (!mlink)
++ continue;
++
++ link_conf = mt792x_vif_to_bss_conf(vif, link_id);
++ if (!link_conf)
++ continue;
++
++ mconf = mt792x_link_conf_to_mconf(link_conf);
++
++ mt7925_mcu_add_bss_info(&dev->phy, mconf->mt76.ctx, link_conf,
++ link_sta, false);
++ }
++
+ for_each_set_bit(link_id, &old_links, IEEE80211_MLD_MAX_NUM_LINKS) {
+ struct ieee80211_link_sta *link_sta;
+ struct mt792x_link_sta *mlink;
+@@ -1206,51 +1243,22 @@ void mt7925_mac_sta_remove(struct mt76_dev *mdev, struct ieee80211_vif *vif,
+ {
+ struct mt792x_dev *dev = container_of(mdev, struct mt792x_dev, mt76);
+ struct mt792x_sta *msta = (struct mt792x_sta *)sta->drv_priv;
+- struct {
+- struct {
+- u8 omac_idx;
+- u8 band_idx;
+- __le16 pad;
+- } __packed hdr;
+- struct req_tlv {
+- __le16 tag;
+- __le16 len;
+- u8 active;
+- u8 link_idx; /* hw link idx */
+- u8 omac_addr[ETH_ALEN];
+- } __packed tlv;
+- } dev_req = {
+- .hdr = {
+- .omac_idx = 0,
+- .band_idx = 0,
+- },
+- .tlv = {
+- .tag = cpu_to_le16(DEV_INFO_ACTIVE),
+- .len = cpu_to_le16(sizeof(struct req_tlv)),
+- .active = true,
+- },
+- };
++ struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
+ unsigned long rem;
+
+ rem = ieee80211_vif_is_mld(vif) ? msta->valid_links : BIT(0);
+
+ mt7925_mac_sta_remove_links(dev, vif, sta, rem);
+
+- if (ieee80211_vif_is_mld(vif)) {
+- mt7925_mcu_set_dbdc(&dev->mphy, false);
+-
+- /* recovery omac address for the legacy interface */
+- memcpy(dev_req.tlv.omac_addr, vif->addr, ETH_ALEN);
+- mt76_mcu_send_msg(mdev, MCU_UNI_CMD(DEV_INFO_UPDATE),
+- &dev_req, sizeof(dev_req), true);
+- }
++ if (ieee80211_vif_is_mld(vif))
++ mt7925_mcu_del_dev(mdev, vif);
+
+ if (vif->type == NL80211_IFTYPE_STATION) {
+- struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
+-
+ mvif->wep_sta = NULL;
+ ewma_rssi_init(&mvif->bss_conf.rssi);
+ }
++
++ mvif->mlo_pm_state = MT792x_MLO_LINK_DISASSOC;
+ }
+ EXPORT_SYMBOL_GPL(mt7925_mac_sta_remove);
+
+@@ -1289,22 +1297,22 @@ mt7925_ampdu_action(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ case IEEE80211_AMPDU_RX_START:
+ mt76_rx_aggr_start(&dev->mt76, &msta->deflink.wcid, tid, ssn,
+ params->buf_size);
+- mt7925_mcu_uni_rx_ba(dev, vif, params, true);
++ mt7925_mcu_uni_rx_ba(dev, params, true);
+ break;
+ case IEEE80211_AMPDU_RX_STOP:
+ mt76_rx_aggr_stop(&dev->mt76, &msta->deflink.wcid, tid);
+- mt7925_mcu_uni_rx_ba(dev, vif, params, false);
++ mt7925_mcu_uni_rx_ba(dev, params, false);
+ break;
+ case IEEE80211_AMPDU_TX_OPERATIONAL:
+ mtxq->aggr = true;
+ mtxq->send_bar = false;
+- mt7925_mcu_uni_tx_ba(dev, vif, params, true);
++ mt7925_mcu_uni_tx_ba(dev, params, true);
+ break;
+ case IEEE80211_AMPDU_TX_STOP_FLUSH:
+ case IEEE80211_AMPDU_TX_STOP_FLUSH_CONT:
+ mtxq->aggr = false;
+ clear_bit(tid, &msta->deflink.wcid.ampdu_state);
+- mt7925_mcu_uni_tx_ba(dev, vif, params, false);
++ mt7925_mcu_uni_tx_ba(dev, params, false);
+ break;
+ case IEEE80211_AMPDU_TX_START:
+ set_bit(tid, &msta->deflink.wcid.ampdu_state);
+@@ -1313,7 +1321,7 @@ mt7925_ampdu_action(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ case IEEE80211_AMPDU_TX_STOP_CONT:
+ mtxq->aggr = false;
+ clear_bit(tid, &msta->deflink.wcid.ampdu_state);
+- mt7925_mcu_uni_tx_ba(dev, vif, params, false);
++ mt7925_mcu_uni_tx_ba(dev, params, false);
+ ieee80211_stop_tx_ba_cb_irqsafe(vif, sta->addr, tid);
+ break;
+ }
+@@ -1322,6 +1330,38 @@ mt7925_ampdu_action(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ return ret;
+ }
+
++static void
++mt7925_mlo_pm_iter(void *priv, u8 *mac, struct ieee80211_vif *vif)
++{
++ struct mt792x_dev *dev = priv;
++ struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
++ unsigned long valid = ieee80211_vif_is_mld(vif) ?
++ mvif->valid_links : BIT(0);
++ struct ieee80211_bss_conf *bss_conf;
++ int i;
++
++ if (mvif->mlo_pm_state != MT792x_MLO_CHANGED_PS)
++ return;
++
++ mt792x_mutex_acquire(dev);
++ for_each_set_bit(i, &valid, IEEE80211_MLD_MAX_NUM_LINKS) {
++ bss_conf = mt792x_vif_to_bss_conf(vif, i);
++ mt7925_mcu_uni_bss_ps(dev, bss_conf);
++ }
++ mt792x_mutex_release(dev);
++}
++
++void mt7925_mlo_pm_work(struct work_struct *work)
++{
++ struct mt792x_dev *dev = container_of(work, struct mt792x_dev,
++ mlo_pm_work.work);
++ struct ieee80211_hw *hw = mt76_hw(dev);
++
++ ieee80211_iterate_active_interfaces(hw,
++ IEEE80211_IFACE_ITER_RESUME_ALL,
++ mt7925_mlo_pm_iter, dev);
++}
++
+ static bool is_valid_alpha2(const char *alpha2)
+ {
+ if (!alpha2)
+@@ -1871,6 +1911,9 @@ static void mt7925_vif_cfg_changed(struct ieee80211_hw *hw,
+ mt7925_mcu_sta_update(dev, NULL, vif, true,
+ MT76_STA_INFO_STATE_ASSOC);
+ mt7925_mcu_set_beacon_filter(dev, vif, vif->cfg.assoc);
++
++ if (ieee80211_vif_is_mld(vif))
++ mvif->mlo_pm_state = MT792x_MLO_LINK_ASSOC;
+ }
+
+ if (changed & BSS_CHANGED_ARP_FILTER) {
+@@ -1881,9 +1924,19 @@ static void mt7925_vif_cfg_changed(struct ieee80211_hw *hw,
+ }
+
+ if (changed & BSS_CHANGED_PS) {
+- for_each_set_bit(i, &valid, IEEE80211_MLD_MAX_NUM_LINKS) {
+- bss_conf = mt792x_vif_to_bss_conf(vif, i);
++ if (hweight16(mvif->valid_links) < 2) {
++ /* legacy */
++ bss_conf = &vif->bss_conf;
+ mt7925_mcu_uni_bss_ps(dev, bss_conf);
++ } else {
++ if (mvif->mlo_pm_state == MT792x_MLO_LINK_ASSOC) {
++ mvif->mlo_pm_state = MT792x_MLO_CHANGED_PS_PENDING;
++ } else if (mvif->mlo_pm_state == MT792x_MLO_CHANGED_PS) {
++ for_each_set_bit(i, &valid, IEEE80211_MLD_MAX_NUM_LINKS) {
++ bss_conf = mt792x_vif_to_bss_conf(vif, i);
++ mt7925_mcu_uni_bss_ps(dev, bss_conf);
++ }
++ }
+ }
+ }
+
+@@ -1934,11 +1987,12 @@ static void mt7925_link_info_changed(struct ieee80211_hw *hw,
+ if (changed & (BSS_CHANGED_QOS | BSS_CHANGED_BEACON_ENABLED))
+ mt7925_mcu_set_tx(dev, info);
+
+- if (changed & BSS_CHANGED_BSSID) {
+- if (ieee80211_vif_is_mld(vif) &&
+- hweight16(mvif->valid_links) == 2)
+- /* Indicate the secondary setup done */
+- mt7925_mcu_uni_bss_bcnft(dev, info, true);
++ if (mvif->mlo_pm_state == MT792x_MLO_CHANGED_PS_PENDING) {
++ /* Indicate the secondary setup done */
++ mt7925_mcu_uni_bss_bcnft(dev, info, true);
++
++ ieee80211_queue_delayed_work(hw, &dev->mlo_pm_work, 5 * HZ);
++ mvif->mlo_pm_state = MT792x_MLO_CHANGED_PS;
+ }
+
+ mt792x_mutex_release(dev);
+@@ -2022,8 +2076,6 @@ mt7925_change_vif_links(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ goto free;
+
+ if (mconf != &mvif->bss_conf) {
+- mt7925_mcu_set_bss_pm(dev, link_conf, true);
+-
+ err = mt7925_set_mlo_roc(phy, &mvif->bss_conf,
+ vif->active_links);
+ if (err < 0)
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c
+index 9e192b7e1d2e08..b8cd7cd3d832b0 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c
+@@ -576,10 +576,10 @@ void mt7925_mcu_rx_event(struct mt792x_dev *dev, struct sk_buff *skb)
+
+ static int
+ mt7925_mcu_sta_ba(struct mt76_dev *dev, struct mt76_vif_link *mvif,
+- struct mt76_wcid *wcid,
+ struct ieee80211_ampdu_params *params,
+ bool enable, bool tx)
+ {
++ struct mt76_wcid *wcid = (struct mt76_wcid *)params->sta->drv_priv;
+ struct sta_rec_ba_uni *ba;
+ struct sk_buff *skb;
+ struct tlv *tlv;
+@@ -607,60 +607,28 @@ mt7925_mcu_sta_ba(struct mt76_dev *dev, struct mt76_vif_link *mvif,
+
+ /** starec & wtbl **/
+ int mt7925_mcu_uni_tx_ba(struct mt792x_dev *dev,
+- struct ieee80211_vif *vif,
+ struct ieee80211_ampdu_params *params,
+ bool enable)
+ {
+ struct mt792x_sta *msta = (struct mt792x_sta *)params->sta->drv_priv;
+- struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
+- struct mt792x_link_sta *mlink;
+- struct mt792x_bss_conf *mconf;
+- unsigned long usable_links = ieee80211_vif_usable_links(vif);
+- struct mt76_wcid *wcid;
+- u8 link_id, ret;
+-
+- for_each_set_bit(link_id, &usable_links, IEEE80211_MLD_MAX_NUM_LINKS) {
+- mconf = mt792x_vif_to_link(mvif, link_id);
+- mlink = mt792x_sta_to_link(msta, link_id);
+- wcid = &mlink->wcid;
+-
+- if (enable && !params->amsdu)
+- mlink->wcid.amsdu = false;
++ struct mt792x_vif *mvif = msta->vif;
+
+- ret = mt7925_mcu_sta_ba(&dev->mt76, &mconf->mt76, wcid, params,
+- enable, true);
+- if (ret < 0)
+- break;
+- }
++ if (enable && !params->amsdu)
++ msta->deflink.wcid.amsdu = false;
+
+- return ret;
++ return mt7925_mcu_sta_ba(&dev->mt76, &mvif->bss_conf.mt76, params,
++ enable, true);
+ }
+
+ int mt7925_mcu_uni_rx_ba(struct mt792x_dev *dev,
+- struct ieee80211_vif *vif,
+ struct ieee80211_ampdu_params *params,
+ bool enable)
+ {
+ struct mt792x_sta *msta = (struct mt792x_sta *)params->sta->drv_priv;
+- struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
+- struct mt792x_link_sta *mlink;
+- struct mt792x_bss_conf *mconf;
+- unsigned long usable_links = ieee80211_vif_usable_links(vif);
+- struct mt76_wcid *wcid;
+- u8 link_id, ret;
+-
+- for_each_set_bit(link_id, &usable_links, IEEE80211_MLD_MAX_NUM_LINKS) {
+- mconf = mt792x_vif_to_link(mvif, link_id);
+- mlink = mt792x_sta_to_link(msta, link_id);
+- wcid = &mlink->wcid;
+-
+- ret = mt7925_mcu_sta_ba(&dev->mt76, &mconf->mt76, wcid, params,
+- enable, false);
+- if (ret < 0)
+- break;
+- }
++ struct mt792x_vif *mvif = msta->vif;
+
+- return ret;
++ return mt7925_mcu_sta_ba(&dev->mt76, &mvif->bss_conf.mt76, params,
++ enable, false);
+ }
+
+ static int mt7925_load_clc(struct mt792x_dev *dev, const char *fw_name)
+@@ -1850,49 +1818,6 @@ mt7925_mcu_sta_mld_tlv(struct sk_buff *skb,
+ }
+ }
+
+-static int
+-mt7925_mcu_sta_cmd(struct mt76_phy *phy,
+- struct mt76_sta_cmd_info *info)
+-{
+- struct mt76_vif_link *mvif = (struct mt76_vif_link *)info->vif->drv_priv;
+- struct mt76_dev *dev = phy->dev;
+- struct sk_buff *skb;
+- int conn_state;
+-
+- skb = __mt76_connac_mcu_alloc_sta_req(dev, mvif, info->wcid,
+- MT7925_STA_UPDATE_MAX_SIZE);
+- if (IS_ERR(skb))
+- return PTR_ERR(skb);
+-
+- conn_state = info->enable ? CONN_STATE_PORT_SECURE :
+- CONN_STATE_DISCONNECT;
+- if (info->link_sta)
+- mt76_connac_mcu_sta_basic_tlv(dev, skb, info->link_conf,
+- info->link_sta,
+- conn_state, info->newly);
+- if (info->link_sta && info->enable) {
+- mt7925_mcu_sta_phy_tlv(skb, info->vif, info->link_sta);
+- mt7925_mcu_sta_ht_tlv(skb, info->link_sta);
+- mt7925_mcu_sta_vht_tlv(skb, info->link_sta);
+- mt76_connac_mcu_sta_uapsd(skb, info->vif, info->link_sta->sta);
+- mt7925_mcu_sta_amsdu_tlv(skb, info->vif, info->link_sta);
+- mt7925_mcu_sta_he_tlv(skb, info->link_sta);
+- mt7925_mcu_sta_he_6g_tlv(skb, info->link_sta);
+- mt7925_mcu_sta_eht_tlv(skb, info->link_sta);
+- mt7925_mcu_sta_rate_ctrl_tlv(skb, info->vif,
+- info->link_sta);
+- mt7925_mcu_sta_state_v2_tlv(phy, skb, info->link_sta,
+- info->vif, info->rcpi,
+- info->state);
+- mt7925_mcu_sta_mld_tlv(skb, info->vif, info->link_sta->sta);
+- }
+-
+- if (info->enable)
+- mt7925_mcu_sta_hdr_trans_tlv(skb, info->vif, info->link_sta);
+-
+- return mt76_mcu_skb_send_msg(dev, skb, info->cmd, true);
+-}
+-
+ static void
+ mt7925_mcu_sta_remove_tlv(struct sk_buff *skb)
+ {
+@@ -1905,8 +1830,8 @@ mt7925_mcu_sta_remove_tlv(struct sk_buff *skb)
+ }
+
+ static int
+-mt7925_mcu_mlo_sta_cmd(struct mt76_phy *phy,
+- struct mt76_sta_cmd_info *info)
++mt7925_mcu_sta_cmd(struct mt76_phy *phy,
++ struct mt76_sta_cmd_info *info)
+ {
+ struct mt792x_vif *mvif = (struct mt792x_vif *)info->vif->drv_priv;
+ struct mt76_dev *dev = phy->dev;
+@@ -1920,12 +1845,10 @@ mt7925_mcu_mlo_sta_cmd(struct mt76_phy *phy,
+ if (IS_ERR(skb))
+ return PTR_ERR(skb);
+
+- if (info->enable)
++ if (info->enable && info->link_sta) {
+ mt76_connac_mcu_sta_basic_tlv(dev, skb, info->link_conf,
+ info->link_sta,
+ info->enable, info->newly);
+-
+- if (info->enable && info->link_sta) {
+ mt7925_mcu_sta_phy_tlv(skb, info->vif, info->link_sta);
+ mt7925_mcu_sta_ht_tlv(skb, info->link_sta);
+ mt7925_mcu_sta_vht_tlv(skb, info->link_sta);
+@@ -1976,7 +1899,6 @@ int mt7925_mcu_sta_update(struct mt792x_dev *dev,
+ };
+ struct mt792x_sta *msta;
+ struct mt792x_link_sta *mlink;
+- int err;
+
+ if (link_sta) {
+ msta = (struct mt792x_sta *)link_sta->sta->drv_priv;
+@@ -1989,12 +1911,7 @@ int mt7925_mcu_sta_update(struct mt792x_dev *dev,
+ else
+ info.newly = state == MT76_STA_INFO_STATE_ASSOC ? false : true;
+
+- if (ieee80211_vif_is_mld(vif))
+- err = mt7925_mcu_mlo_sta_cmd(&dev->mphy, &info);
+- else
+- err = mt7925_mcu_sta_cmd(&dev->mphy, &info);
+-
+- return err;
++ return mt7925_mcu_sta_cmd(&dev->mphy, &info);
+ }
+
+ int mt7925_mcu_set_beacon_filter(struct mt792x_dev *dev,
+@@ -2668,6 +2585,62 @@ int mt7925_mcu_set_timing(struct mt792x_phy *phy,
+ MCU_UNI_CMD(BSS_INFO_UPDATE), true);
+ }
+
++void mt7925_mcu_del_dev(struct mt76_dev *mdev,
++ struct ieee80211_vif *vif)
++{
++ struct mt76_vif_link *mvif = (struct mt76_vif_link *)vif->drv_priv;
++ struct {
++ struct {
++ u8 omac_idx;
++ u8 band_idx;
++ __le16 pad;
++ } __packed hdr;
++ struct req_tlv {
++ __le16 tag;
++ __le16 len;
++ u8 active;
++ u8 link_idx; /* hw link idx */
++ u8 omac_addr[ETH_ALEN];
++ } __packed tlv;
++ } dev_req = {
++ .tlv = {
++ .tag = cpu_to_le16(DEV_INFO_ACTIVE),
++ .len = cpu_to_le16(sizeof(struct req_tlv)),
++ .active = true,
++ },
++ };
++ struct {
++ struct {
++ u8 bss_idx;
++ u8 pad[3];
++ } __packed hdr;
++ struct mt76_connac_bss_basic_tlv basic;
++ } basic_req = {
++ .basic = {
++ .tag = cpu_to_le16(UNI_BSS_INFO_BASIC),
++ .len = cpu_to_le16(sizeof(struct mt76_connac_bss_basic_tlv)),
++ .active = true,
++ .conn_state = 1,
++ },
++ };
++
++ dev_req.hdr.omac_idx = mvif->omac_idx;
++ dev_req.hdr.band_idx = mvif->band_idx;
++
++ basic_req.hdr.bss_idx = mvif->idx;
++ basic_req.basic.omac_idx = mvif->omac_idx;
++ basic_req.basic.band_idx = mvif->band_idx;
++ basic_req.basic.link_idx = mvif->link_idx;
++
++ mt76_mcu_send_msg(mdev, MCU_UNI_CMD(BSS_INFO_UPDATE),
++ &basic_req, sizeof(basic_req), true);
++
++ /* recovery omac address for the legacy interface */
++ memcpy(dev_req.tlv.omac_addr, vif->addr, ETH_ALEN);
++ mt76_mcu_send_msg(mdev, MCU_UNI_CMD(DEV_INFO_UPDATE),
++ &dev_req, sizeof(dev_req), true);
++}
++
+ int mt7925_mcu_add_bss_info(struct mt792x_phy *phy,
+ struct ieee80211_chanctx_conf *ctx,
+ struct ieee80211_bss_conf *link_conf,
+@@ -3157,13 +3130,14 @@ __mt7925_mcu_set_clc(struct mt792x_dev *dev, u8 *alpha2,
+ .env = env_cap,
+ };
+ int ret, valid_cnt = 0;
+- u8 i, *pos;
++ u8 *pos, *last_pos;
+
+ if (!clc)
+ return 0;
+
+ pos = clc->data + sizeof(*seg) * clc->nr_seg;
+- for (i = 0; i < clc->nr_country; i++) {
++ last_pos = clc->data + le32_to_cpu(*(__le32 *)(clc->data + 4));
++ while (pos < last_pos) {
+ struct mt7925_clc_rule *rule = (struct mt7925_clc_rule *)pos;
+
+ pos += sizeof(*rule);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/mcu.h b/drivers/net/wireless/mediatek/mt76/mt7925/mcu.h
+index 1e47d2c61b5453..8ac43feb26d64f 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7925/mcu.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7925/mcu.h
+@@ -566,8 +566,8 @@ struct mt7925_wow_pattern_tlv {
+ u8 offset;
+ u8 mask[MT76_CONNAC_WOW_MASK_MAX_LEN];
+ u8 pattern[MT76_CONNAC_WOW_PATTEN_MAX_LEN];
+- u8 rsv[7];
+-} __packed;
++ u8 rsv[4];
++};
+
+ struct roc_acquire_tlv {
+ __le16 tag;
+@@ -627,6 +627,8 @@ int mt7925_mcu_sched_scan_req(struct mt76_phy *phy,
+ int mt7925_mcu_sched_scan_enable(struct mt76_phy *phy,
+ struct ieee80211_vif *vif,
+ bool enable);
++void mt7925_mcu_del_dev(struct mt76_dev *mdev,
++ struct ieee80211_vif *vif);
+ int mt7925_mcu_add_bss_info(struct mt792x_phy *phy,
+ struct ieee80211_chanctx_conf *ctx,
+ struct ieee80211_bss_conf *link_conf,
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/mt7925.h b/drivers/net/wireless/mediatek/mt76/mt7925/mt7925.h
+index 8707b5d04743bd..cb7b1a49fbd14e 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7925/mt7925.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7925/mt7925.h
+@@ -263,13 +263,12 @@ int mt7925_mcu_set_beacon_filter(struct mt792x_dev *dev,
+ struct ieee80211_vif *vif,
+ bool enable);
+ int mt7925_mcu_uni_tx_ba(struct mt792x_dev *dev,
+- struct ieee80211_vif *vif,
+ struct ieee80211_ampdu_params *params,
+ bool enable);
+ int mt7925_mcu_uni_rx_ba(struct mt792x_dev *dev,
+- struct ieee80211_vif *vif,
+ struct ieee80211_ampdu_params *params,
+ bool enable);
++void mt7925_mlo_pm_work(struct work_struct *work);
+ void mt7925_scan_work(struct work_struct *work);
+ void mt7925_roc_work(struct work_struct *work);
+ int mt7925_mcu_uni_bss_ps(struct mt792x_dev *dev,
+diff --git a/drivers/net/wireless/mediatek/mt76/mt792x.h b/drivers/net/wireless/mediatek/mt76/mt792x.h
+index 32ed01a96bf7c1..6e25a4421e1237 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt792x.h
++++ b/drivers/net/wireless/mediatek/mt76/mt792x.h
+@@ -81,6 +81,13 @@ enum mt792x_reg_power_type {
+ MT_AP_VLP,
+ };
+
++enum mt792x_mlo_pm_state {
++ MT792x_MLO_LINK_DISASSOC,
++ MT792x_MLO_LINK_ASSOC,
++ MT792x_MLO_CHANGED_PS_PENDING,
++ MT792x_MLO_CHANGED_PS,
++};
++
+ DECLARE_EWMA(avg_signal, 10, 8)
+
+ struct mt792x_link_sta {
+@@ -134,6 +141,7 @@ struct mt792x_vif {
+ struct mt792x_phy *phy;
+ u16 valid_links;
+ u8 deflink_id;
++ enum mt792x_mlo_pm_state mlo_pm_state;
+
+ struct work_struct csa_work;
+ struct timer_list csa_timer;
+@@ -239,6 +247,7 @@ struct mt792x_dev {
+ const struct mt792x_irq_map *irq_map;
+
+ struct work_struct ipv6_ns_work;
++ struct delayed_work mlo_pm_work;
+ /* IPv6 addresses for WoWLAN */
+ struct sk_buff_head ipv6_ns_list;
+
+diff --git a/drivers/net/wireless/mediatek/mt76/mt792x_core.c b/drivers/net/wireless/mediatek/mt76/mt792x_core.c
+index 8799627f629269..0f7806f6338d0d 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt792x_core.c
++++ b/drivers/net/wireless/mediatek/mt76/mt792x_core.c
+@@ -665,7 +665,8 @@ int mt792x_init_wiphy(struct ieee80211_hw *hw)
+ ieee80211_hw_set(hw, SUPPORTS_DYNAMIC_PS);
+ ieee80211_hw_set(hw, SUPPORTS_VHT_EXT_NSS_BW);
+ ieee80211_hw_set(hw, CONNECTION_MONITOR);
+- ieee80211_hw_set(hw, CHANCTX_STA_CSA);
++ if (is_mt7921(&dev->mt76))
++ ieee80211_hw_set(hw, CHANCTX_STA_CSA);
+
+ if (dev->pm.enable)
+ ieee80211_hw_set(hw, CONNECTION_MONITOR);
+diff --git a/drivers/net/wireless/realtek/rtw88/rtw8822bu.c b/drivers/net/wireless/realtek/rtw88/rtw8822bu.c
+index 8883300fc6adb0..572d1f31832ee4 100644
+--- a/drivers/net/wireless/realtek/rtw88/rtw8822bu.c
++++ b/drivers/net/wireless/realtek/rtw88/rtw8822bu.c
+@@ -73,6 +73,10 @@ static const struct usb_device_id rtw_8822bu_id_table[] = {
+ .driver_info = (kernel_ulong_t)&(rtw8822b_hw_spec) }, /* ELECOM WDB-867DU3S */
+ { USB_DEVICE_AND_INTERFACE_INFO(0x2c4e, 0x0107, 0xff, 0xff, 0xff),
+ .driver_info = (kernel_ulong_t)&(rtw8822b_hw_spec) }, /* Mercusys MA30H */
++ { USB_DEVICE_AND_INTERFACE_INFO(0x2c4e, 0x010a, 0xff, 0xff, 0xff),
++ .driver_info = (kernel_ulong_t)&(rtw8822b_hw_spec) }, /* Mercusys MA30N */
++ { USB_DEVICE_AND_INTERFACE_INFO(0x2001, 0x3322, 0xff, 0xff, 0xff),
++ .driver_info = (kernel_ulong_t)&(rtw8822b_hw_spec) }, /* D-Link DWA-T185 rev. A1 */
+ {},
+ };
+ MODULE_DEVICE_TABLE(usb, rtw_8822bu_id_table);
+diff --git a/drivers/ntb/ntb_transport.c b/drivers/ntb/ntb_transport.c
+index a22ea4a4b202bd..4f775c3e218f45 100644
+--- a/drivers/ntb/ntb_transport.c
++++ b/drivers/ntb/ntb_transport.c
+@@ -1353,7 +1353,7 @@ static int ntb_transport_probe(struct ntb_client *self, struct ntb_dev *ndev)
+ qp_count = ilog2(qp_bitmap);
+ if (nt->use_msi) {
+ qp_count -= 1;
+- nt->msi_db_mask = 1 << qp_count;
++ nt->msi_db_mask = BIT_ULL(qp_count);
+ ntb_db_clear_mask(ndev, nt->msi_db_mask);
+ }
+
+diff --git a/drivers/nvme/target/fcloop.c b/drivers/nvme/target/fcloop.c
+index e1abb27927ff74..da195d61a9664c 100644
+--- a/drivers/nvme/target/fcloop.c
++++ b/drivers/nvme/target/fcloop.c
+@@ -478,7 +478,7 @@ fcloop_t2h_xmt_ls_rsp(struct nvme_fc_local_port *localport,
+ if (targetport) {
+ tport = targetport->private;
+ spin_lock(&tport->lock);
+- list_add_tail(&tport->ls_list, &tls_req->ls_list);
++ list_add_tail(&tls_req->ls_list, &tport->ls_list);
+ spin_unlock(&tport->lock);
+ queue_work(nvmet_wq, &tport->ls_work);
+ }
+diff --git a/drivers/of/irq.c b/drivers/of/irq.c
+index 6c843d54ebb116..f5459ad50f3674 100644
+--- a/drivers/of/irq.c
++++ b/drivers/of/irq.c
+@@ -16,6 +16,7 @@
+
+ #define pr_fmt(fmt) "OF: " fmt
+
++#include <linux/cleanup.h>
+ #include <linux/device.h>
+ #include <linux/errno.h>
+ #include <linux/list.h>
+@@ -38,11 +39,15 @@
+ unsigned int irq_of_parse_and_map(struct device_node *dev, int index)
+ {
+ struct of_phandle_args oirq;
++ unsigned int ret;
+
+ if (of_irq_parse_one(dev, index, &oirq))
+ return 0;
+
+- return irq_create_of_mapping(&oirq);
++ ret = irq_create_of_mapping(&oirq);
++ of_node_put(oirq.np);
++
++ return ret;
+ }
+ EXPORT_SYMBOL_GPL(irq_of_parse_and_map);
+
+@@ -165,6 +170,8 @@ const __be32 *of_irq_parse_imap_parent(const __be32 *imap, int len, struct of_ph
+ * the specifier for each map, and then returns the translated map.
+ *
+ * Return: 0 on success and a negative number on error
++ *
++ * Note: refcount of node @out_irq->np is increased by 1 on success.
+ */
+ int of_irq_parse_raw(const __be32 *addr, struct of_phandle_args *out_irq)
+ {
+@@ -310,6 +317,12 @@ int of_irq_parse_raw(const __be32 *addr, struct of_phandle_args *out_irq)
+ addrsize = (imap - match_array) - intsize;
+
+ if (ipar == newpar) {
++ /*
++ * We got @ipar's refcount, but the refcount was
++ * gotten again by of_irq_parse_imap_parent() via its
++ * alias @newpar.
++ */
++ of_node_put(ipar);
+ pr_debug("%pOF interrupt-map entry to self\n", ipar);
+ return 0;
+ }
+@@ -339,10 +352,12 @@ EXPORT_SYMBOL_GPL(of_irq_parse_raw);
+ * This function resolves an interrupt for a node by walking the interrupt tree,
+ * finding which interrupt controller node it is attached to, and returning the
+ * interrupt specifier that can be used to retrieve a Linux IRQ number.
++ *
++ * Note: refcount of node @out_irq->np is increased by 1 on success.
+ */
+ int of_irq_parse_one(struct device_node *device, int index, struct of_phandle_args *out_irq)
+ {
+- struct device_node *p;
++ struct device_node __free(device_node) *p = NULL;
+ const __be32 *addr;
+ u32 intsize;
+ int i, res, addr_len;
+@@ -367,41 +382,33 @@ int of_irq_parse_one(struct device_node *device, int index, struct of_phandle_ar
+ /* Try the new-style interrupts-extended first */
+ res = of_parse_phandle_with_args(device, "interrupts-extended",
+ "#interrupt-cells", index, out_irq);
+- if (!res)
+- return of_irq_parse_raw(addr_buf, out_irq);
+-
+- /* Look for the interrupt parent. */
+- p = of_irq_find_parent(device);
+- if (p == NULL)
+- return -EINVAL;
++ if (!res) {
++ p = out_irq->np;
++ } else {
++ /* Look for the interrupt parent. */
++ p = of_irq_find_parent(device);
++ /* Get size of interrupt specifier */
++ if (!p || of_property_read_u32(p, "#interrupt-cells", &intsize))
++ return -EINVAL;
++
++ pr_debug(" parent=%pOF, intsize=%d\n", p, intsize);
++
++ /* Copy intspec into irq structure */
++ out_irq->np = p;
++ out_irq->args_count = intsize;
++ for (i = 0; i < intsize; i++) {
++ res = of_property_read_u32_index(device, "interrupts",
++ (index * intsize) + i,
++ out_irq->args + i);
++ if (res)
++ return res;
++ }
+
+- /* Get size of interrupt specifier */
+- if (of_property_read_u32(p, "#interrupt-cells", &intsize)) {
+- res = -EINVAL;
+- goto out;
++ pr_debug(" intspec=%d\n", *out_irq->args);
+ }
+
+- pr_debug(" parent=%pOF, intsize=%d\n", p, intsize);
+-
+- /* Copy intspec into irq structure */
+- out_irq->np = p;
+- out_irq->args_count = intsize;
+- for (i = 0; i < intsize; i++) {
+- res = of_property_read_u32_index(device, "interrupts",
+- (index * intsize) + i,
+- out_irq->args + i);
+- if (res)
+- goto out;
+- }
+-
+- pr_debug(" intspec=%d\n", *out_irq->args);
+-
+-
+ /* Check if there are any interrupt-map translations to process */
+- res = of_irq_parse_raw(addr_buf, out_irq);
+- out:
+- of_node_put(p);
+- return res;
++ return of_irq_parse_raw(addr_buf, out_irq);
+ }
+ EXPORT_SYMBOL_GPL(of_irq_parse_one);
+
+@@ -505,8 +512,10 @@ int of_irq_count(struct device_node *dev)
+ struct of_phandle_args irq;
+ int nr = 0;
+
+- while (of_irq_parse_one(dev, nr, &irq) == 0)
++ while (of_irq_parse_one(dev, nr, &irq) == 0) {
++ of_node_put(irq.np);
+ nr++;
++ }
+
+ return nr;
+ }
+@@ -623,6 +632,8 @@ void __init of_irq_init(const struct of_device_id *matches)
+ __func__, desc->dev, desc->dev,
+ desc->interrupt_parent);
+ of_node_clear_flag(desc->dev, OF_POPULATED);
++ of_node_put(desc->interrupt_parent);
++ of_node_put(desc->dev);
+ kfree(desc);
+ continue;
+ }
+@@ -653,6 +664,7 @@ void __init of_irq_init(const struct of_device_id *matches)
+ err:
+ list_for_each_entry_safe(desc, temp_desc, &intc_desc_list, list) {
+ list_del(&desc->list);
++ of_node_put(desc->interrupt_parent);
+ of_node_put(desc->dev);
+ kfree(desc);
+ }
+diff --git a/drivers/pci/controller/cadence/pci-j721e.c b/drivers/pci/controller/cadence/pci-j721e.c
+index 0341d51d6aed12..ef1cfdae33bb40 100644
+--- a/drivers/pci/controller/cadence/pci-j721e.c
++++ b/drivers/pci/controller/cadence/pci-j721e.c
+@@ -355,6 +355,7 @@ static const struct j721e_pcie_data j7200_pcie_rc_data = {
+ static const struct j721e_pcie_data j7200_pcie_ep_data = {
+ .mode = PCI_MODE_EP,
+ .quirk_detect_quiet_flag = true,
++ .linkdown_irq_regfield = J7200_LINK_DOWN,
+ .quirk_disable_flr = true,
+ .max_lanes = 2,
+ };
+@@ -376,13 +377,13 @@ static const struct j721e_pcie_data j784s4_pcie_rc_data = {
+ .mode = PCI_MODE_RC,
+ .quirk_retrain_flag = true,
+ .byte_access_allowed = false,
+- .linkdown_irq_regfield = LINK_DOWN,
++ .linkdown_irq_regfield = J7200_LINK_DOWN,
+ .max_lanes = 4,
+ };
+
+ static const struct j721e_pcie_data j784s4_pcie_ep_data = {
+ .mode = PCI_MODE_EP,
+- .linkdown_irq_regfield = LINK_DOWN,
++ .linkdown_irq_regfield = J7200_LINK_DOWN,
+ .max_lanes = 4,
+ };
+
+diff --git a/drivers/pci/controller/dwc/pci-layerscape.c b/drivers/pci/controller/dwc/pci-layerscape.c
+index 239a05b36e8e62..a44b5c256d6e2a 100644
+--- a/drivers/pci/controller/dwc/pci-layerscape.c
++++ b/drivers/pci/controller/dwc/pci-layerscape.c
+@@ -356,7 +356,7 @@ static int ls_pcie_probe(struct platform_device *pdev)
+ if (pcie->drvdata->scfg_support) {
+ pcie->scfg =
+ syscon_regmap_lookup_by_phandle_args(dev->of_node,
+- "fsl,pcie-scfg", 2,
++ "fsl,pcie-scfg", 1,
+ index);
+ if (IS_ERR(pcie->scfg)) {
+ dev_err(dev, "No syscfg phandle specified\n");
+diff --git a/drivers/pci/controller/pcie-brcmstb.c b/drivers/pci/controller/pcie-brcmstb.c
+index 3d7dbfcd689e3e..1a3bdc01b0747c 100644
+--- a/drivers/pci/controller/pcie-brcmstb.c
++++ b/drivers/pci/controller/pcie-brcmstb.c
+@@ -1786,7 +1786,7 @@ static struct pci_ops brcm7425_pcie_ops = {
+
+ static int brcm_pcie_probe(struct platform_device *pdev)
+ {
+- struct device_node *np = pdev->dev.of_node, *msi_np;
++ struct device_node *np = pdev->dev.of_node;
+ struct pci_host_bridge *bridge;
+ const struct pcie_cfg_data *data;
+ struct brcm_pcie *pcie;
+@@ -1890,9 +1890,14 @@ static int brcm_pcie_probe(struct platform_device *pdev)
+ goto fail;
+ }
+
+- msi_np = of_parse_phandle(pcie->np, "msi-parent", 0);
+- if (pci_msi_enabled() && msi_np == pcie->np) {
+- ret = brcm_pcie_enable_msi(pcie);
++ if (pci_msi_enabled()) {
++ struct device_node *msi_np = of_parse_phandle(pcie->np, "msi-parent", 0);
++
++ if (msi_np == pcie->np)
++ ret = brcm_pcie_enable_msi(pcie);
++
++ of_node_put(msi_np);
++
+ if (ret) {
+ dev_err(pcie->dev, "probe of internal MSI failed");
+ goto fail;
+diff --git a/drivers/pci/controller/pcie-rockchip-host.c b/drivers/pci/controller/pcie-rockchip-host.c
+index 5adac6adc046f8..6a46be17aa91b1 100644
+--- a/drivers/pci/controller/pcie-rockchip-host.c
++++ b/drivers/pci/controller/pcie-rockchip-host.c
+@@ -367,7 +367,7 @@ static int rockchip_pcie_host_init_port(struct rockchip_pcie *rockchip)
+ }
+ }
+
+- rockchip_pcie_write(rockchip, ROCKCHIP_VENDOR_ID,
++ rockchip_pcie_write(rockchip, PCI_VENDOR_ID_ROCKCHIP,
+ PCIE_CORE_CONFIG_VENDOR);
+ rockchip_pcie_write(rockchip,
+ PCI_CLASS_BRIDGE_PCI_NORMAL << 8,
+diff --git a/drivers/pci/controller/pcie-rockchip.h b/drivers/pci/controller/pcie-rockchip.h
+index 11def598534b2f..14954f43e5e9af 100644
+--- a/drivers/pci/controller/pcie-rockchip.h
++++ b/drivers/pci/controller/pcie-rockchip.h
+@@ -200,7 +200,6 @@
+ #define AXI_WRAPPER_NOR_MSG 0xc
+
+ #define PCIE_RC_SEND_PME_OFF 0x11960
+-#define ROCKCHIP_VENDOR_ID 0x1d87
+ #define PCIE_LINK_IS_L2(x) \
+ (((x) & PCIE_CLIENT_DEBUG_LTSSM_MASK) == PCIE_CLIENT_DEBUG_LTSSM_L2)
+ #define PCIE_LINK_TRAINING_DONE(x) \
+diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
+index 9d9596947350f5..94ceec50a2b94c 100644
+--- a/drivers/pci/controller/vmd.c
++++ b/drivers/pci/controller/vmd.c
+@@ -125,7 +125,7 @@ struct vmd_irq_list {
+ struct vmd_dev {
+ struct pci_dev *dev;
+
+- spinlock_t cfg_lock;
++ raw_spinlock_t cfg_lock;
+ void __iomem *cfgbar;
+
+ int msix_count;
+@@ -391,7 +391,7 @@ static int vmd_pci_read(struct pci_bus *bus, unsigned int devfn, int reg,
+ if (!addr)
+ return -EFAULT;
+
+- spin_lock_irqsave(&vmd->cfg_lock, flags);
++ raw_spin_lock_irqsave(&vmd->cfg_lock, flags);
+ switch (len) {
+ case 1:
+ *value = readb(addr);
+@@ -406,7 +406,7 @@ static int vmd_pci_read(struct pci_bus *bus, unsigned int devfn, int reg,
+ ret = -EINVAL;
+ break;
+ }
+- spin_unlock_irqrestore(&vmd->cfg_lock, flags);
++ raw_spin_unlock_irqrestore(&vmd->cfg_lock, flags);
+ return ret;
+ }
+
+@@ -426,7 +426,7 @@ static int vmd_pci_write(struct pci_bus *bus, unsigned int devfn, int reg,
+ if (!addr)
+ return -EFAULT;
+
+- spin_lock_irqsave(&vmd->cfg_lock, flags);
++ raw_spin_lock_irqsave(&vmd->cfg_lock, flags);
+ switch (len) {
+ case 1:
+ writeb(value, addr);
+@@ -444,7 +444,7 @@ static int vmd_pci_write(struct pci_bus *bus, unsigned int devfn, int reg,
+ ret = -EINVAL;
+ break;
+ }
+- spin_unlock_irqrestore(&vmd->cfg_lock, flags);
++ raw_spin_unlock_irqrestore(&vmd->cfg_lock, flags);
+ return ret;
+ }
+
+@@ -1009,7 +1009,7 @@ static int vmd_probe(struct pci_dev *dev, const struct pci_device_id *id)
+ if (features & VMD_FEAT_OFFSET_FIRST_VECTOR)
+ vmd->first_vec = 1;
+
+- spin_lock_init(&vmd->cfg_lock);
++ raw_spin_lock_init(&vmd->cfg_lock);
+ pci_set_drvdata(dev, vmd);
+ err = vmd_enable_domain(vmd, features);
+ if (err)
+diff --git a/drivers/pci/devres.c b/drivers/pci/devres.c
+index 3431a7df3e0d9a..73047316889e50 100644
+--- a/drivers/pci/devres.c
++++ b/drivers/pci/devres.c
+@@ -40,7 +40,7 @@
+ * Legacy struct storing addresses to whole mapped BARs.
+ */
+ struct pcim_iomap_devres {
+- void __iomem *table[PCI_STD_NUM_BARS];
++ void __iomem *table[PCI_NUM_RESOURCES];
+ };
+
+ /* Used to restore the old INTx state on driver detach. */
+@@ -577,7 +577,7 @@ static int pcim_add_mapping_to_legacy_table(struct pci_dev *pdev,
+ {
+ void __iomem **legacy_iomap_table;
+
+- if (bar >= PCI_STD_NUM_BARS)
++ if (!pci_bar_index_is_valid(bar))
+ return -EINVAL;
+
+ legacy_iomap_table = (void __iomem **)pcim_iomap_table(pdev);
+@@ -622,7 +622,7 @@ static void pcim_remove_bar_from_legacy_table(struct pci_dev *pdev, int bar)
+ {
+ void __iomem **legacy_iomap_table;
+
+- if (bar >= PCI_STD_NUM_BARS)
++ if (!pci_bar_index_is_valid(bar))
+ return;
+
+ legacy_iomap_table = (void __iomem **)pcim_iomap_table(pdev);
+@@ -655,6 +655,9 @@ void __iomem *pcim_iomap(struct pci_dev *pdev, int bar, unsigned long maxlen)
+ void __iomem *mapping;
+ struct pcim_addr_devres *res;
+
++ if (!pci_bar_index_is_valid(bar))
++ return NULL;
++
+ res = pcim_addr_devres_alloc(pdev);
+ if (!res)
+ return NULL;
+@@ -722,6 +725,9 @@ void __iomem *pcim_iomap_region(struct pci_dev *pdev, int bar,
+ int ret;
+ struct pcim_addr_devres *res;
+
++ if (!pci_bar_index_is_valid(bar))
++ return IOMEM_ERR_PTR(-EINVAL);
++
+ res = pcim_addr_devres_alloc(pdev);
+ if (!res)
+ return IOMEM_ERR_PTR(-ENOMEM);
+@@ -823,6 +829,9 @@ static int _pcim_request_region(struct pci_dev *pdev, int bar, const char *name,
+ int ret;
+ struct pcim_addr_devres *res;
+
++ if (!pci_bar_index_is_valid(bar))
++ return -EINVAL;
++
+ res = pcim_addr_devres_alloc(pdev);
+ if (!res)
+ return -ENOMEM;
+@@ -991,6 +1000,9 @@ void __iomem *pcim_iomap_range(struct pci_dev *pdev, int bar,
+ void __iomem *mapping;
+ struct pcim_addr_devres *res;
+
++ if (!pci_bar_index_is_valid(bar))
++ return IOMEM_ERR_PTR(-EINVAL);
++
+ res = pcim_addr_devres_alloc(pdev);
+ if (!res)
+ return IOMEM_ERR_PTR(-ENOMEM);
+diff --git a/drivers/pci/hotplug/pciehp_core.c b/drivers/pci/hotplug/pciehp_core.c
+index ff458e692fedb3..997841c6989359 100644
+--- a/drivers/pci/hotplug/pciehp_core.c
++++ b/drivers/pci/hotplug/pciehp_core.c
+@@ -286,9 +286,12 @@ static int pciehp_suspend(struct pcie_device *dev)
+
+ static bool pciehp_device_replaced(struct controller *ctrl)
+ {
+- struct pci_dev *pdev __free(pci_dev_put);
++ struct pci_dev *pdev __free(pci_dev_put) = NULL;
+ u32 reg;
+
++ if (pci_dev_is_disconnected(ctrl->pcie->port))
++ return false;
++
+ pdev = pci_get_slot(ctrl->pcie->port->subordinate, PCI_DEVFN(0, 0));
+ if (!pdev)
+ return true;
+diff --git a/drivers/pci/iomap.c b/drivers/pci/iomap.c
+index 9fb7cacc15cdef..fe706ed946dfd2 100644
+--- a/drivers/pci/iomap.c
++++ b/drivers/pci/iomap.c
+@@ -9,6 +9,8 @@
+
+ #include <linux/export.h>
+
++#include "pci.h" /* for pci_bar_index_is_valid() */
++
+ /**
+ * pci_iomap_range - create a virtual mapping cookie for a PCI BAR
+ * @dev: PCI device that owns the BAR
+@@ -33,12 +35,19 @@ void __iomem *pci_iomap_range(struct pci_dev *dev,
+ unsigned long offset,
+ unsigned long maxlen)
+ {
+- resource_size_t start = pci_resource_start(dev, bar);
+- resource_size_t len = pci_resource_len(dev, bar);
+- unsigned long flags = pci_resource_flags(dev, bar);
++ resource_size_t start, len;
++ unsigned long flags;
++
++ if (!pci_bar_index_is_valid(bar))
++ return NULL;
++
++ start = pci_resource_start(dev, bar);
++ len = pci_resource_len(dev, bar);
++ flags = pci_resource_flags(dev, bar);
+
+ if (len <= offset || !start)
+ return NULL;
++
+ len -= offset;
+ start += offset;
+ if (maxlen && len > maxlen)
+@@ -77,16 +86,20 @@ void __iomem *pci_iomap_wc_range(struct pci_dev *dev,
+ unsigned long offset,
+ unsigned long maxlen)
+ {
+- resource_size_t start = pci_resource_start(dev, bar);
+- resource_size_t len = pci_resource_len(dev, bar);
+- unsigned long flags = pci_resource_flags(dev, bar);
++ resource_size_t start, len;
++ unsigned long flags;
+
+-
+- if (flags & IORESOURCE_IO)
++ if (!pci_bar_index_is_valid(bar))
+ return NULL;
+
++ start = pci_resource_start(dev, bar);
++ len = pci_resource_len(dev, bar);
++ flags = pci_resource_flags(dev, bar);
++
+ if (len <= offset || !start)
+ return NULL;
++ if (flags & IORESOURCE_IO)
++ return NULL;
+
+ len -= offset;
+ start += offset;
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index 3e78cf86ef03ba..3152750aab2fc8 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -3929,6 +3929,9 @@ EXPORT_SYMBOL(pci_enable_atomic_ops_to_root);
+ */
+ void pci_release_region(struct pci_dev *pdev, int bar)
+ {
++ if (!pci_bar_index_is_valid(bar))
++ return;
++
+ /*
+ * This is done for backwards compatibility, because the old PCI devres
+ * API had a mode in which the function became managed if it had been
+@@ -3973,6 +3976,9 @@ EXPORT_SYMBOL(pci_release_region);
+ static int __pci_request_region(struct pci_dev *pdev, int bar,
+ const char *name, int exclusive)
+ {
++ if (!pci_bar_index_is_valid(bar))
++ return -EINVAL;
++
+ if (pci_is_managed(pdev)) {
+ if (exclusive == IORESOURCE_EXCLUSIVE)
+ return pcim_request_region_exclusive(pdev, bar, name);
+diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
+index 01e51db8d285af..d22755de688b81 100644
+--- a/drivers/pci/pci.h
++++ b/drivers/pci/pci.h
+@@ -167,6 +167,22 @@ static inline void pci_wakeup_event(struct pci_dev *dev)
+ pm_wakeup_event(&dev->dev, 100);
+ }
+
++/**
++ * pci_bar_index_is_valid - Check whether a BAR index is within valid range
++ * @bar: BAR index
++ *
++ * Protects against overflowing &struct pci_dev.resource array.
++ *
++ * Return: true for valid index, false otherwise.
++ */
++static inline bool pci_bar_index_is_valid(int bar)
++{
++ if (bar >= 0 && bar < PCI_NUM_RESOURCES)
++ return true;
++
++ return false;
++}
++
+ static inline bool pci_has_subordinate(struct pci_dev *pci_dev)
+ {
+ return !!(pci_dev->subordinate);
+diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
+index 0154b48bfbd7b4..3da48c13d9cc7a 100644
+--- a/drivers/pci/probe.c
++++ b/drivers/pci/probe.c
+@@ -954,6 +954,7 @@ static int pci_register_host_bridge(struct pci_host_bridge *bridge)
+ resource_size_t offset, next_offset;
+ LIST_HEAD(resources);
+ struct resource *res, *next_res;
++ bool bus_registered = false;
+ char addr[64], *fmt;
+ const char *name;
+ int err;
+@@ -1017,6 +1018,7 @@ static int pci_register_host_bridge(struct pci_host_bridge *bridge)
+ name = dev_name(&bus->dev);
+
+ err = device_register(&bus->dev);
++ bus_registered = true;
+ if (err)
+ goto unregister;
+
+@@ -1103,12 +1105,15 @@ static int pci_register_host_bridge(struct pci_host_bridge *bridge)
+ unregister:
+ put_device(&bridge->dev);
+ device_del(&bridge->dev);
+-
+ free:
+ #ifdef CONFIG_PCI_DOMAINS_GENERIC
+ pci_bus_release_domain_nr(parent, bus->domain_nr);
+ #endif
+- kfree(bus);
++ if (bus_registered)
++ put_device(&bus->dev);
++ else
++ kfree(bus);
++
+ return err;
+ }
+
+@@ -1217,7 +1222,10 @@ static struct pci_bus *pci_alloc_child_bus(struct pci_bus *parent,
+ add_dev:
+ pci_set_bus_msi_domain(child);
+ ret = device_register(&child->dev);
+- WARN_ON(ret < 0);
++ if (WARN_ON(ret < 0)) {
++ put_device(&child->dev);
++ return NULL;
++ }
+
+ pcibios_add_bus(child);
+
+@@ -1373,8 +1381,6 @@ static int pci_scan_bridge_extend(struct pci_bus *bus, struct pci_dev *dev,
+ pci_write_config_word(dev, PCI_BRIDGE_CONTROL,
+ bctl & ~PCI_BRIDGE_CTL_MASTER_ABORT);
+
+- pci_enable_rrs_sv(dev);
+-
+ if ((secondary || subordinate) && !pcibios_assign_all_busses() &&
+ !is_cardbus && !broken) {
+ unsigned int cmax, buses;
+@@ -1615,6 +1621,11 @@ void set_pcie_port_type(struct pci_dev *pdev)
+ pdev->pcie_cap = pos;
+ pci_read_config_word(pdev, pos + PCI_EXP_FLAGS, &reg16);
+ pdev->pcie_flags_reg = reg16;
++
++ type = pci_pcie_type(pdev);
++ if (type == PCI_EXP_TYPE_ROOT_PORT)
++ pci_enable_rrs_sv(pdev);
++
+ pci_read_config_dword(pdev, pos + PCI_EXP_DEVCAP, &pdev->devcap);
+ pdev->pcie_mpss = FIELD_GET(PCI_EXP_DEVCAP_PAYLOAD, pdev->devcap);
+
+@@ -1631,7 +1642,6 @@ void set_pcie_port_type(struct pci_dev *pdev)
+ * correctly so detect impossible configurations here and correct
+ * the port type accordingly.
+ */
+- type = pci_pcie_type(pdev);
+ if (type == PCI_EXP_TYPE_DOWNSTREAM) {
+ /*
+ * If pdev claims to be downstream port but the parent
+diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
+index 398cce3d76fc44..2f33e69a8caf20 100644
+--- a/drivers/perf/arm_pmu.c
++++ b/drivers/perf/arm_pmu.c
+@@ -342,12 +342,10 @@ armpmu_add(struct perf_event *event, int flags)
+ if (idx < 0)
+ return idx;
+
+- /*
+- * If there is an event in the counter we are going to use then make
+- * sure it is disabled.
+- */
++ /* The newly-allocated counter should be empty */
++ WARN_ON_ONCE(hw_events->events[idx]);
++
+ event->hw.idx = idx;
+- armpmu->disable(event);
+ hw_events->events[idx] = event;
+
+ hwc->state = PERF_HES_STOPPED | PERF_HES_UPTODATE;
+diff --git a/drivers/perf/dwc_pcie_pmu.c b/drivers/perf/dwc_pcie_pmu.c
+index cccecae9823f6a..f851e070760c56 100644
+--- a/drivers/perf/dwc_pcie_pmu.c
++++ b/drivers/perf/dwc_pcie_pmu.c
+@@ -565,15 +565,15 @@ static int dwc_pcie_register_dev(struct pci_dev *pdev)
+ u32 sbdf;
+
+ sbdf = (pci_domain_nr(pdev->bus) << 16) | PCI_DEVID(pdev->bus->number, pdev->devfn);
+- plat_dev = platform_device_register_data(NULL, "dwc_pcie_pmu", sbdf,
+- pdev, sizeof(*pdev));
+-
++ plat_dev = platform_device_register_simple("dwc_pcie_pmu", sbdf, NULL, 0);
+ if (IS_ERR(plat_dev))
+ return PTR_ERR(plat_dev);
+
+ dev_info = kzalloc(sizeof(*dev_info), GFP_KERNEL);
+- if (!dev_info)
++ if (!dev_info) {
++ platform_device_unregister(plat_dev);
+ return -ENOMEM;
++ }
+
+ /* Cache platform device to handle pci device hotplug */
+ dev_info->plat_dev = plat_dev;
+@@ -614,18 +614,26 @@ static struct notifier_block dwc_pcie_pmu_nb = {
+
+ static int dwc_pcie_pmu_probe(struct platform_device *plat_dev)
+ {
+- struct pci_dev *pdev = plat_dev->dev.platform_data;
++ struct pci_dev *pdev;
+ struct dwc_pcie_pmu *pcie_pmu;
+ char *name;
+ u32 sbdf;
+ u16 vsec;
+ int ret;
+
++ sbdf = plat_dev->id;
++ pdev = pci_get_domain_bus_and_slot(sbdf >> 16, PCI_BUS_NUM(sbdf & 0xffff),
++ sbdf & 0xff);
++ if (!pdev) {
++ pr_err("No pdev found for the sbdf 0x%x\n", sbdf);
++ return -ENODEV;
++ }
++
+ vsec = dwc_pcie_des_cap(pdev);
+ if (!vsec)
+ return -ENODEV;
+
+- sbdf = plat_dev->id;
++ pci_dev_put(pdev);
+ name = devm_kasprintf(&plat_dev->dev, GFP_KERNEL, "dwc_rootport_%x", sbdf);
+ if (!name)
+ return -ENOMEM;
+@@ -640,7 +648,7 @@ static int dwc_pcie_pmu_probe(struct platform_device *plat_dev)
+ pcie_pmu->on_cpu = -1;
+ pcie_pmu->pmu = (struct pmu){
+ .name = name,
+- .parent = &pdev->dev,
++ .parent = &plat_dev->dev,
+ .module = THIS_MODULE,
+ .attr_groups = dwc_pcie_attr_groups,
+ .capabilities = PERF_PMU_CAP_NO_EXCLUDE,
+@@ -730,6 +738,15 @@ static struct platform_driver dwc_pcie_pmu_driver = {
+ .driver = {.name = "dwc_pcie_pmu",},
+ };
+
++static void dwc_pcie_cleanup_devices(void)
++{
++ struct dwc_pcie_dev_info *dev_info, *tmp;
++
++ list_for_each_entry_safe(dev_info, tmp, &dwc_pcie_dev_info_head, dev_node) {
++ dwc_pcie_unregister_dev(dev_info);
++ }
++}
++
+ static int __init dwc_pcie_pmu_init(void)
+ {
+ struct pci_dev *pdev = NULL;
+@@ -742,7 +759,7 @@ static int __init dwc_pcie_pmu_init(void)
+ ret = dwc_pcie_register_dev(pdev);
+ if (ret) {
+ pci_dev_put(pdev);
+- return ret;
++ goto err_cleanup;
+ }
+ }
+
+@@ -751,35 +768,35 @@ static int __init dwc_pcie_pmu_init(void)
+ dwc_pcie_pmu_online_cpu,
+ dwc_pcie_pmu_offline_cpu);
+ if (ret < 0)
+- return ret;
++ goto err_cleanup;
+
+ dwc_pcie_pmu_hp_state = ret;
+
+ ret = platform_driver_register(&dwc_pcie_pmu_driver);
+ if (ret)
+- goto platform_driver_register_err;
++ goto err_remove_cpuhp;
+
+ ret = bus_register_notifier(&pci_bus_type, &dwc_pcie_pmu_nb);
+ if (ret)
+- goto platform_driver_register_err;
++ goto err_unregister_driver;
+ notify = true;
+
+ return 0;
+
+-platform_driver_register_err:
++err_unregister_driver:
++ platform_driver_unregister(&dwc_pcie_pmu_driver);
++err_remove_cpuhp:
+ cpuhp_remove_multi_state(dwc_pcie_pmu_hp_state);
+-
++err_cleanup:
++ dwc_pcie_cleanup_devices();
+ return ret;
+ }
+
+ static void __exit dwc_pcie_pmu_exit(void)
+ {
+- struct dwc_pcie_dev_info *dev_info, *tmp;
+-
+ if (notify)
+ bus_unregister_notifier(&pci_bus_type, &dwc_pcie_pmu_nb);
+- list_for_each_entry_safe(dev_info, tmp, &dwc_pcie_dev_info_head, dev_node)
+- dwc_pcie_unregister_dev(dev_info);
++ dwc_pcie_cleanup_devices();
+ platform_driver_unregister(&dwc_pcie_pmu_driver);
+ cpuhp_remove_multi_state(dwc_pcie_pmu_hp_state);
+ }
+diff --git a/drivers/phy/freescale/phy-fsl-imx8m-pcie.c b/drivers/phy/freescale/phy-fsl-imx8m-pcie.c
+index e98361dcdeadfe..afd52392cd5301 100644
+--- a/drivers/phy/freescale/phy-fsl-imx8m-pcie.c
++++ b/drivers/phy/freescale/phy-fsl-imx8m-pcie.c
+@@ -162,6 +162,16 @@ static int imx8_pcie_phy_power_on(struct phy *phy)
+ return ret;
+ }
+
++static int imx8_pcie_phy_power_off(struct phy *phy)
++{
++ struct imx8_pcie_phy *imx8_phy = phy_get_drvdata(phy);
++
++ reset_control_assert(imx8_phy->reset);
++ reset_control_assert(imx8_phy->perst);
++
++ return 0;
++}
++
+ static int imx8_pcie_phy_init(struct phy *phy)
+ {
+ struct imx8_pcie_phy *imx8_phy = phy_get_drvdata(phy);
+@@ -182,6 +192,7 @@ static const struct phy_ops imx8_pcie_phy_ops = {
+ .init = imx8_pcie_phy_init,
+ .exit = imx8_pcie_phy_exit,
+ .power_on = imx8_pcie_phy_power_on,
++ .power_off = imx8_pcie_phy_power_off,
+ .owner = THIS_MODULE,
+ };
+
+diff --git a/drivers/pinctrl/qcom/pinctrl-msm.c b/drivers/pinctrl/qcom/pinctrl-msm.c
+index 47daa47153c970..82f0cc43bbf4f4 100644
+--- a/drivers/pinctrl/qcom/pinctrl-msm.c
++++ b/drivers/pinctrl/qcom/pinctrl-msm.c
+@@ -1045,8 +1045,7 @@ static int msm_gpio_irq_set_type(struct irq_data *d, unsigned int type)
+ const struct msm_pingroup *g;
+ u32 intr_target_mask = GENMASK(2, 0);
+ unsigned long flags;
+- bool was_enabled;
+- u32 val;
++ u32 val, oldval;
+
+ if (msm_gpio_needs_dual_edge_parent_workaround(d, type)) {
+ set_bit(d->hwirq, pctrl->dual_edge_irqs);
+@@ -1108,8 +1107,7 @@ static int msm_gpio_irq_set_type(struct irq_data *d, unsigned int type)
+ * internal circuitry of TLMM, toggling the RAW_STATUS
+ * could cause the INTR_STATUS to be set for EDGE interrupts.
+ */
+- val = msm_readl_intr_cfg(pctrl, g);
+- was_enabled = val & BIT(g->intr_raw_status_bit);
++ val = oldval = msm_readl_intr_cfg(pctrl, g);
+ val |= BIT(g->intr_raw_status_bit);
+ if (g->intr_detection_width == 2) {
+ val &= ~(3 << g->intr_detection_bit);
+@@ -1162,9 +1160,11 @@ static int msm_gpio_irq_set_type(struct irq_data *d, unsigned int type)
+ /*
+ * The first time we set RAW_STATUS_EN it could trigger an interrupt.
+ * Clear the interrupt. This is safe because we have
+- * IRQCHIP_SET_TYPE_MASKED.
++ * IRQCHIP_SET_TYPE_MASKED. When changing the interrupt type, we could
++ * also still have a non-matching interrupt latched, so clear whenever
++ * making changes to the interrupt configuration.
+ */
+- if (!was_enabled)
++ if (val != oldval)
+ msm_ack_intr_status(pctrl, g);
+
+ if (test_bit(d->hwirq, pctrl->dual_edge_irqs))
+diff --git a/drivers/pinctrl/samsung/pinctrl-exynos-arm64.c b/drivers/pinctrl/samsung/pinctrl-exynos-arm64.c
+index 3ea7106ce5eae3..e28fe81776466b 100644
+--- a/drivers/pinctrl/samsung/pinctrl-exynos-arm64.c
++++ b/drivers/pinctrl/samsung/pinctrl-exynos-arm64.c
+@@ -1370,83 +1370,83 @@ const struct samsung_pinctrl_of_match_data fsd_of_data __initconst = {
+
+ /* pin banks of gs101 pin-controller (ALIVE) */
+ static const struct samsung_pin_bank_data gs101_pin_alive[] = {
+- EXYNOS850_PIN_BANK_EINTW(8, 0x0, "gpa0", 0x00),
+- EXYNOS850_PIN_BANK_EINTW(7, 0x20, "gpa1", 0x04),
+- EXYNOS850_PIN_BANK_EINTW(5, 0x40, "gpa2", 0x08),
+- EXYNOS850_PIN_BANK_EINTW(4, 0x60, "gpa3", 0x0c),
+- EXYNOS850_PIN_BANK_EINTW(4, 0x80, "gpa4", 0x10),
+- EXYNOS850_PIN_BANK_EINTW(7, 0xa0, "gpa5", 0x14),
+- EXYNOS850_PIN_BANK_EINTW(8, 0xc0, "gpa9", 0x18),
+- EXYNOS850_PIN_BANK_EINTW(2, 0xe0, "gpa10", 0x1c),
++ GS101_PIN_BANK_EINTW(8, 0x0, "gpa0", 0x00, 0x00),
++ GS101_PIN_BANK_EINTW(7, 0x20, "gpa1", 0x04, 0x08),
++ GS101_PIN_BANK_EINTW(5, 0x40, "gpa2", 0x08, 0x10),
++ GS101_PIN_BANK_EINTW(4, 0x60, "gpa3", 0x0c, 0x18),
++ GS101_PIN_BANK_EINTW(4, 0x80, "gpa4", 0x10, 0x1c),
++ GS101_PIN_BANK_EINTW(7, 0xa0, "gpa5", 0x14, 0x20),
++ GS101_PIN_BANK_EINTW(8, 0xc0, "gpa9", 0x18, 0x28),
++ GS101_PIN_BANK_EINTW(2, 0xe0, "gpa10", 0x1c, 0x30),
+ };
+
+ /* pin banks of gs101 pin-controller (FAR_ALIVE) */
+ static const struct samsung_pin_bank_data gs101_pin_far_alive[] = {
+- EXYNOS850_PIN_BANK_EINTW(8, 0x0, "gpa6", 0x00),
+- EXYNOS850_PIN_BANK_EINTW(4, 0x20, "gpa7", 0x04),
+- EXYNOS850_PIN_BANK_EINTW(8, 0x40, "gpa8", 0x08),
+- EXYNOS850_PIN_BANK_EINTW(2, 0x60, "gpa11", 0x0c),
++ GS101_PIN_BANK_EINTW(8, 0x0, "gpa6", 0x00, 0x00),
++ GS101_PIN_BANK_EINTW(4, 0x20, "gpa7", 0x04, 0x08),
++ GS101_PIN_BANK_EINTW(8, 0x40, "gpa8", 0x08, 0x0c),
++ GS101_PIN_BANK_EINTW(2, 0x60, "gpa11", 0x0c, 0x14),
+ };
+
+ /* pin banks of gs101 pin-controller (GSACORE) */
+ static const struct samsung_pin_bank_data gs101_pin_gsacore[] = {
+- EXYNOS850_PIN_BANK_EINTG(2, 0x0, "gps0", 0x00),
+- EXYNOS850_PIN_BANK_EINTG(8, 0x20, "gps1", 0x04),
+- EXYNOS850_PIN_BANK_EINTG(3, 0x40, "gps2", 0x08),
++ GS101_PIN_BANK_EINTG(2, 0x0, "gps0", 0x00, 0x00),
++ GS101_PIN_BANK_EINTG(8, 0x20, "gps1", 0x04, 0x04),
++ GS101_PIN_BANK_EINTG(3, 0x40, "gps2", 0x08, 0x0c),
+ };
+
+ /* pin banks of gs101 pin-controller (GSACTRL) */
+ static const struct samsung_pin_bank_data gs101_pin_gsactrl[] = {
+- EXYNOS850_PIN_BANK_EINTW(6, 0x0, "gps3", 0x00),
++ GS101_PIN_BANK_EINTW(6, 0x0, "gps3", 0x00, 0x00),
+ };
+
+ /* pin banks of gs101 pin-controller (PERIC0) */
+ static const struct samsung_pin_bank_data gs101_pin_peric0[] = {
+- EXYNOS850_PIN_BANK_EINTG(5, 0x0, "gpp0", 0x00),
+- EXYNOS850_PIN_BANK_EINTG(4, 0x20, "gpp1", 0x04),
+- EXYNOS850_PIN_BANK_EINTG(4, 0x40, "gpp2", 0x08),
+- EXYNOS850_PIN_BANK_EINTG(2, 0x60, "gpp3", 0x0c),
+- EXYNOS850_PIN_BANK_EINTG(4, 0x80, "gpp4", 0x10),
+- EXYNOS850_PIN_BANK_EINTG(2, 0xa0, "gpp5", 0x14),
+- EXYNOS850_PIN_BANK_EINTG(4, 0xc0, "gpp6", 0x18),
+- EXYNOS850_PIN_BANK_EINTG(2, 0xe0, "gpp7", 0x1c),
+- EXYNOS850_PIN_BANK_EINTG(4, 0x100, "gpp8", 0x20),
+- EXYNOS850_PIN_BANK_EINTG(2, 0x120, "gpp9", 0x24),
+- EXYNOS850_PIN_BANK_EINTG(4, 0x140, "gpp10", 0x28),
+- EXYNOS850_PIN_BANK_EINTG(2, 0x160, "gpp11", 0x2c),
+- EXYNOS850_PIN_BANK_EINTG(4, 0x180, "gpp12", 0x30),
+- EXYNOS850_PIN_BANK_EINTG(2, 0x1a0, "gpp13", 0x34),
+- EXYNOS850_PIN_BANK_EINTG(4, 0x1c0, "gpp14", 0x38),
+- EXYNOS850_PIN_BANK_EINTG(2, 0x1e0, "gpp15", 0x3c),
+- EXYNOS850_PIN_BANK_EINTG(4, 0x200, "gpp16", 0x40),
+- EXYNOS850_PIN_BANK_EINTG(2, 0x220, "gpp17", 0x44),
+- EXYNOS850_PIN_BANK_EINTG(4, 0x240, "gpp18", 0x48),
+- EXYNOS850_PIN_BANK_EINTG(4, 0x260, "gpp19", 0x4c),
++ GS101_PIN_BANK_EINTG(5, 0x0, "gpp0", 0x00, 0x00),
++ GS101_PIN_BANK_EINTG(4, 0x20, "gpp1", 0x04, 0x08),
++ GS101_PIN_BANK_EINTG(4, 0x40, "gpp2", 0x08, 0x0c),
++ GS101_PIN_BANK_EINTG(2, 0x60, "gpp3", 0x0c, 0x10),
++ GS101_PIN_BANK_EINTG(4, 0x80, "gpp4", 0x10, 0x14),
++ GS101_PIN_BANK_EINTG(2, 0xa0, "gpp5", 0x14, 0x18),
++ GS101_PIN_BANK_EINTG(4, 0xc0, "gpp6", 0x18, 0x1c),
++ GS101_PIN_BANK_EINTG(2, 0xe0, "gpp7", 0x1c, 0x20),
++ GS101_PIN_BANK_EINTG(4, 0x100, "gpp8", 0x20, 0x24),
++ GS101_PIN_BANK_EINTG(2, 0x120, "gpp9", 0x24, 0x28),
++ GS101_PIN_BANK_EINTG(4, 0x140, "gpp10", 0x28, 0x2c),
++ GS101_PIN_BANK_EINTG(2, 0x160, "gpp11", 0x2c, 0x30),
++ GS101_PIN_BANK_EINTG(4, 0x180, "gpp12", 0x30, 0x34),
++ GS101_PIN_BANK_EINTG(2, 0x1a0, "gpp13", 0x34, 0x38),
++ GS101_PIN_BANK_EINTG(4, 0x1c0, "gpp14", 0x38, 0x3c),
++ GS101_PIN_BANK_EINTG(2, 0x1e0, "gpp15", 0x3c, 0x40),
++ GS101_PIN_BANK_EINTG(4, 0x200, "gpp16", 0x40, 0x44),
++ GS101_PIN_BANK_EINTG(2, 0x220, "gpp17", 0x44, 0x48),
++ GS101_PIN_BANK_EINTG(4, 0x240, "gpp18", 0x48, 0x4c),
++ GS101_PIN_BANK_EINTG(4, 0x260, "gpp19", 0x4c, 0x50),
+ };
+
+ /* pin banks of gs101 pin-controller (PERIC1) */
+ static const struct samsung_pin_bank_data gs101_pin_peric1[] = {
+- EXYNOS850_PIN_BANK_EINTG(8, 0x0, "gpp20", 0x00),
+- EXYNOS850_PIN_BANK_EINTG(4, 0x20, "gpp21", 0x04),
+- EXYNOS850_PIN_BANK_EINTG(2, 0x40, "gpp22", 0x08),
+- EXYNOS850_PIN_BANK_EINTG(8, 0x60, "gpp23", 0x0c),
+- EXYNOS850_PIN_BANK_EINTG(4, 0x80, "gpp24", 0x10),
+- EXYNOS850_PIN_BANK_EINTG(4, 0xa0, "gpp25", 0x14),
+- EXYNOS850_PIN_BANK_EINTG(5, 0xc0, "gpp26", 0x18),
+- EXYNOS850_PIN_BANK_EINTG(4, 0xe0, "gpp27", 0x1c),
++ GS101_PIN_BANK_EINTG(8, 0x0, "gpp20", 0x00, 0x00),
++ GS101_PIN_BANK_EINTG(4, 0x20, "gpp21", 0x04, 0x08),
++ GS101_PIN_BANK_EINTG(2, 0x40, "gpp22", 0x08, 0x0c),
++ GS101_PIN_BANK_EINTG(8, 0x60, "gpp23", 0x0c, 0x10),
++ GS101_PIN_BANK_EINTG(4, 0x80, "gpp24", 0x10, 0x18),
++ GS101_PIN_BANK_EINTG(4, 0xa0, "gpp25", 0x14, 0x1c),
++ GS101_PIN_BANK_EINTG(5, 0xc0, "gpp26", 0x18, 0x20),
++ GS101_PIN_BANK_EINTG(4, 0xe0, "gpp27", 0x1c, 0x28),
+ };
+
+ /* pin banks of gs101 pin-controller (HSI1) */
+ static const struct samsung_pin_bank_data gs101_pin_hsi1[] = {
+- EXYNOS850_PIN_BANK_EINTG(6, 0x0, "gph0", 0x00),
+- EXYNOS850_PIN_BANK_EINTG(7, 0x20, "gph1", 0x04),
++ GS101_PIN_BANK_EINTG(6, 0x0, "gph0", 0x00, 0x00),
++ GS101_PIN_BANK_EINTG(7, 0x20, "gph1", 0x04, 0x08),
+ };
+
+ /* pin banks of gs101 pin-controller (HSI2) */
+ static const struct samsung_pin_bank_data gs101_pin_hsi2[] = {
+- EXYNOS850_PIN_BANK_EINTG(6, 0x0, "gph2", 0x00),
+- EXYNOS850_PIN_BANK_EINTG(2, 0x20, "gph3", 0x04),
+- EXYNOS850_PIN_BANK_EINTG(6, 0x40, "gph4", 0x08),
++ GS101_PIN_BANK_EINTG(6, 0x0, "gph2", 0x00, 0x00),
++ GS101_PIN_BANK_EINTG(2, 0x20, "gph3", 0x04, 0x08),
++ GS101_PIN_BANK_EINTG(6, 0x40, "gph4", 0x08, 0x0c),
+ };
+
+ static const struct samsung_pin_ctrl gs101_pin_ctrl[] __initconst = {
+diff --git a/drivers/pinctrl/samsung/pinctrl-exynos.h b/drivers/pinctrl/samsung/pinctrl-exynos.h
+index 7b7ff7ffeb56bd..33df21d5c9d61e 100644
+--- a/drivers/pinctrl/samsung/pinctrl-exynos.h
++++ b/drivers/pinctrl/samsung/pinctrl-exynos.h
+@@ -175,6 +175,28 @@
+ .name = id \
+ }
+
++#define GS101_PIN_BANK_EINTG(pins, reg, id, offs, fltcon_offs) \
++ { \
++ .type = &exynos850_bank_type_off, \
++ .pctl_offset = reg, \
++ .nr_pins = pins, \
++ .eint_type = EINT_TYPE_GPIO, \
++ .eint_offset = offs, \
++ .eint_fltcon_offset = fltcon_offs, \
++ .name = id \
++ }
++
++#define GS101_PIN_BANK_EINTW(pins, reg, id, offs, fltcon_offs) \
++ { \
++ .type = &exynos850_bank_type_alive, \
++ .pctl_offset = reg, \
++ .nr_pins = pins, \
++ .eint_type = EINT_TYPE_WKUP, \
++ .eint_offset = offs, \
++ .eint_fltcon_offset = fltcon_offs, \
++ .name = id \
++ }
++
+ /**
+ * struct exynos_weint_data: irq specific data for all the wakeup interrupts
+ * generated by the external wakeup interrupt controller.
+diff --git a/drivers/pinctrl/samsung/pinctrl-samsung.c b/drivers/pinctrl/samsung/pinctrl-samsung.c
+index cfced7afd4ca6e..963060920301ec 100644
+--- a/drivers/pinctrl/samsung/pinctrl-samsung.c
++++ b/drivers/pinctrl/samsung/pinctrl-samsung.c
+@@ -1230,6 +1230,7 @@ samsung_pinctrl_get_soc_data(struct samsung_pinctrl_drv_data *d,
+ bank->eint_con_offset = bdata->eint_con_offset;
+ bank->eint_mask_offset = bdata->eint_mask_offset;
+ bank->eint_pend_offset = bdata->eint_pend_offset;
++ bank->eint_fltcon_offset = bdata->eint_fltcon_offset;
+ bank->name = bdata->name;
+
+ raw_spin_lock_init(&bank->slock);
+diff --git a/drivers/pinctrl/samsung/pinctrl-samsung.h b/drivers/pinctrl/samsung/pinctrl-samsung.h
+index bb0689d52ea0b4..371e4f02bbfb37 100644
+--- a/drivers/pinctrl/samsung/pinctrl-samsung.h
++++ b/drivers/pinctrl/samsung/pinctrl-samsung.h
+@@ -144,6 +144,7 @@ struct samsung_pin_bank_type {
+ * @eint_con_offset: ExynosAuto SoC-specific EINT control register offset of bank.
+ * @eint_mask_offset: ExynosAuto SoC-specific EINT mask register offset of bank.
+ * @eint_pend_offset: ExynosAuto SoC-specific EINT pend register offset of bank.
++ * @eint_fltcon_offset: GS101 SoC-specific EINT filter config register offset.
+ * @name: name to be prefixed for each pin in this pin bank.
+ */
+ struct samsung_pin_bank_data {
+@@ -158,6 +159,7 @@ struct samsung_pin_bank_data {
+ u32 eint_con_offset;
+ u32 eint_mask_offset;
+ u32 eint_pend_offset;
++ u32 eint_fltcon_offset;
+ const char *name;
+ };
+
+@@ -175,6 +177,7 @@ struct samsung_pin_bank_data {
+ * @eint_con_offset: ExynosAuto SoC-specific EINT register or interrupt offset of bank.
+ * @eint_mask_offset: ExynosAuto SoC-specific EINT mask register offset of bank.
+ * @eint_pend_offset: ExynosAuto SoC-specific EINT pend register offset of bank.
++ * @eint_fltcon_offset: GS101 SoC-specific EINT filter config register offset.
+ * @name: name to be prefixed for each pin in this pin bank.
+ * @id: id of the bank, propagated to the pin range.
+ * @pin_base: starting pin number of the bank.
+@@ -201,6 +204,7 @@ struct samsung_pin_bank {
+ u32 eint_con_offset;
+ u32 eint_mask_offset;
+ u32 eint_pend_offset;
++ u32 eint_fltcon_offset;
+ const char *name;
+ u32 id;
+
+diff --git a/drivers/platform/chrome/cros_ec_lpc.c b/drivers/platform/chrome/cros_ec_lpc.c
+index 5a2f1d98b3501b..be319949b94153 100644
+--- a/drivers/platform/chrome/cros_ec_lpc.c
++++ b/drivers/platform/chrome/cros_ec_lpc.c
+@@ -30,6 +30,7 @@
+
+ #define DRV_NAME "cros_ec_lpcs"
+ #define ACPI_DRV_NAME "GOOG0004"
++#define FRMW_ACPI_DRV_NAME "FRMWC004"
+
+ /* True if ACPI device is present */
+ static bool cros_ec_lpc_acpi_device_found;
+@@ -514,7 +515,7 @@ static int cros_ec_lpc_probe(struct platform_device *pdev)
+ acpi_status status;
+ struct cros_ec_device *ec_dev;
+ struct cros_ec_lpc *ec_lpc;
+- struct lpc_driver_data *driver_data;
++ const struct lpc_driver_data *driver_data;
+ u8 buf[2] = {};
+ int irq, ret;
+ u32 quirks;
+@@ -526,6 +527,9 @@ static int cros_ec_lpc_probe(struct platform_device *pdev)
+ ec_lpc->mmio_memory_base = EC_LPC_ADDR_MEMMAP;
+
+ driver_data = platform_get_drvdata(pdev);
++ if (!driver_data)
++ driver_data = acpi_device_get_match_data(dev);
++
+ if (driver_data) {
+ quirks = driver_data->quirks;
+
+@@ -696,12 +700,6 @@ static void cros_ec_lpc_remove(struct platform_device *pdev)
+ cros_ec_unregister(ec_dev);
+ }
+
+-static const struct acpi_device_id cros_ec_lpc_acpi_device_ids[] = {
+- { ACPI_DRV_NAME, 0 },
+- { }
+-};
+-MODULE_DEVICE_TABLE(acpi, cros_ec_lpc_acpi_device_ids);
+-
+ static const struct lpc_driver_data framework_laptop_npcx_lpc_driver_data __initconst = {
+ .quirks = CROS_EC_LPC_QUIRK_REMAP_MEMORY,
+ .quirk_mmio_memory_base = 0xE00,
+@@ -713,6 +711,13 @@ static const struct lpc_driver_data framework_laptop_mec_lpc_driver_data __initc
+ .quirk_aml_mutex_name = "ECMT",
+ };
+
++static const struct acpi_device_id cros_ec_lpc_acpi_device_ids[] = {
++ { ACPI_DRV_NAME, 0 },
++ { FRMW_ACPI_DRV_NAME, (kernel_ulong_t)&framework_laptop_npcx_lpc_driver_data },
++ { }
++};
++MODULE_DEVICE_TABLE(acpi, cros_ec_lpc_acpi_device_ids);
++
+ static const struct dmi_system_id cros_ec_lpc_dmi_table[] __initconst = {
+ {
+ /*
+@@ -866,7 +871,8 @@ static int __init cros_ec_lpc_init(void)
+ int ret;
+ const struct dmi_system_id *dmi_match;
+
+- cros_ec_lpc_acpi_device_found = !!cros_ec_lpc_get_device(ACPI_DRV_NAME);
++ cros_ec_lpc_acpi_device_found = !!cros_ec_lpc_get_device(ACPI_DRV_NAME) ||
++ !!cros_ec_lpc_get_device(FRMW_ACPI_DRV_NAME);
+
+ dmi_match = dmi_first_match(cros_ec_lpc_dmi_table);
+
+diff --git a/drivers/platform/x86/x86-android-tablets/Kconfig b/drivers/platform/x86/x86-android-tablets/Kconfig
+index a67bddc4300757..193da15ee01ca5 100644
+--- a/drivers/platform/x86/x86-android-tablets/Kconfig
++++ b/drivers/platform/x86/x86-android-tablets/Kconfig
+@@ -10,6 +10,7 @@ config X86_ANDROID_TABLETS
+ depends on ACPI && EFI && PCI
+ select NEW_LEDS
+ select LEDS_CLASS
++ select POWER_SUPPLY
+ help
+ X86 tablets which ship with Android as (part of) the factory image
+ typically have various problems with their DSDTs. The factory kernels
+diff --git a/drivers/pwm/pwm-fsl-ftm.c b/drivers/pwm/pwm-fsl-ftm.c
+index 2510c10ca47303..c45a5fca4cbbd2 100644
+--- a/drivers/pwm/pwm-fsl-ftm.c
++++ b/drivers/pwm/pwm-fsl-ftm.c
+@@ -118,6 +118,9 @@ static unsigned int fsl_pwm_ticks_to_ns(struct fsl_pwm_chip *fpc,
+ unsigned long long exval;
+
+ rate = clk_get_rate(fpc->clk[fpc->period.clk_select]);
++ if (rate >> fpc->period.clk_ps == 0)
++ return 0;
++
+ exval = ticks;
+ exval *= 1000000000UL;
+ do_div(exval, rate >> fpc->period.clk_ps);
+@@ -190,6 +193,9 @@ static unsigned int fsl_pwm_calculate_duty(struct fsl_pwm_chip *fpc,
+ unsigned int period = fpc->period.mod_period + 1;
+ unsigned int period_ns = fsl_pwm_ticks_to_ns(fpc, period);
+
++ if (!period_ns)
++ return 0;
++
+ duty = (unsigned long long)duty_ns * period;
+ do_div(duty, period_ns);
+
+diff --git a/drivers/pwm/pwm-mediatek.c b/drivers/pwm/pwm-mediatek.c
+index 01dfa0fab80a44..7eaab58314995c 100644
+--- a/drivers/pwm/pwm-mediatek.c
++++ b/drivers/pwm/pwm-mediatek.c
+@@ -121,21 +121,25 @@ static int pwm_mediatek_config(struct pwm_chip *chip, struct pwm_device *pwm,
+ struct pwm_mediatek_chip *pc = to_pwm_mediatek_chip(chip);
+ u32 clkdiv = 0, cnt_period, cnt_duty, reg_width = PWMDWIDTH,
+ reg_thres = PWMTHRES;
++ unsigned long clk_rate;
+ u64 resolution;
+ int ret;
+
+ ret = pwm_mediatek_clk_enable(chip, pwm);
+-
+ if (ret < 0)
+ return ret;
+
++ clk_rate = clk_get_rate(pc->clk_pwms[pwm->hwpwm]);
++ if (!clk_rate)
++ return -EINVAL;
++
+ /* Make sure we use the bus clock and not the 26MHz clock */
+ if (pc->soc->has_ck_26m_sel)
+ writel(0, pc->regs + PWM_CK_26M_SEL);
+
+ /* Using resolution in picosecond gets accuracy higher */
+ resolution = (u64)NSEC_PER_SEC * 1000;
+- do_div(resolution, clk_get_rate(pc->clk_pwms[pwm->hwpwm]));
++ do_div(resolution, clk_rate);
+
+ cnt_period = DIV_ROUND_CLOSEST_ULL((u64)period_ns * 1000, resolution);
+ while (cnt_period > 8191) {
+diff --git a/drivers/pwm/pwm-rcar.c b/drivers/pwm/pwm-rcar.c
+index 2261789cc27dae..578dbdd2d5a721 100644
+--- a/drivers/pwm/pwm-rcar.c
++++ b/drivers/pwm/pwm-rcar.c
+@@ -8,6 +8,7 @@
+ * - The hardware cannot generate a 0% duty cycle.
+ */
+
++#include <linux/bitfield.h>
+ #include <linux/clk.h>
+ #include <linux/err.h>
+ #include <linux/io.h>
+@@ -102,23 +103,24 @@ static void rcar_pwm_set_clock_control(struct rcar_pwm_chip *rp,
+ rcar_pwm_write(rp, value, RCAR_PWMCR);
+ }
+
+-static int rcar_pwm_set_counter(struct rcar_pwm_chip *rp, int div, int duty_ns,
+- int period_ns)
++static int rcar_pwm_set_counter(struct rcar_pwm_chip *rp, int div, u64 duty_ns,
++ u64 period_ns)
+ {
+- unsigned long long one_cycle, tmp; /* 0.01 nanoseconds */
++ unsigned long long tmp;
+ unsigned long clk_rate = clk_get_rate(rp->clk);
+ u32 cyc, ph;
+
+- one_cycle = NSEC_PER_SEC * 100ULL << div;
+- do_div(one_cycle, clk_rate);
++ /* div <= 24 == RCAR_PWM_MAX_DIVISION, so the shift doesn't overflow. */
++ tmp = mul_u64_u64_div_u64(period_ns, clk_rate, (u64)NSEC_PER_SEC << div);
++ if (tmp > FIELD_MAX(RCAR_PWMCNT_CYC0_MASK))
++ tmp = FIELD_MAX(RCAR_PWMCNT_CYC0_MASK);
+
+- tmp = period_ns * 100ULL;
+- do_div(tmp, one_cycle);
+- cyc = (tmp << RCAR_PWMCNT_CYC0_SHIFT) & RCAR_PWMCNT_CYC0_MASK;
++ cyc = FIELD_PREP(RCAR_PWMCNT_CYC0_MASK, tmp);
+
+- tmp = duty_ns * 100ULL;
+- do_div(tmp, one_cycle);
+- ph = tmp & RCAR_PWMCNT_PH0_MASK;
++ tmp = mul_u64_u64_div_u64(duty_ns, clk_rate, (u64)NSEC_PER_SEC << div);
++ if (tmp > FIELD_MAX(RCAR_PWMCNT_PH0_MASK))
++ tmp = FIELD_MAX(RCAR_PWMCNT_PH0_MASK);
++ ph = FIELD_PREP(RCAR_PWMCNT_PH0_MASK, tmp);
+
+ /* Avoid prohibited setting */
+ if (cyc == 0 || ph == 0)
+diff --git a/drivers/pwm/pwm-stm32.c b/drivers/pwm/pwm-stm32.c
+index a59de4de18b6e9..ec2c05c9ee7a67 100644
+--- a/drivers/pwm/pwm-stm32.c
++++ b/drivers/pwm/pwm-stm32.c
+@@ -103,22 +103,16 @@ static int stm32_pwm_round_waveform_tohw(struct pwm_chip *chip,
+ if (ret)
+ goto out;
+
+- /*
+- * calculate the best value for ARR for the given PSC, refuse if
+- * the resulting period gets bigger than the requested one.
+- */
+ arr = mul_u64_u64_div_u64(wf->period_length_ns, rate,
+ (u64)NSEC_PER_SEC * (wfhw->psc + 1));
+ if (arr <= wfhw->arr) {
+ /*
+- * requested period is small than the currently
++ * requested period is smaller than the currently
+ * configured and unchangable period, report back the smallest
+- * possible period, i.e. the current state; Initialize
+- * ccr to anything valid.
++ * possible period, i.e. the current state and return 1
++ * to indicate the wrong rounding direction.
+ */
+- wfhw->ccr = 0;
+ ret = 1;
+- goto out;
+ }
+
+ } else {
+diff --git a/drivers/s390/virtio/virtio_ccw.c b/drivers/s390/virtio/virtio_ccw.c
+index 21fa7ac849e5c3..4904b831c0a75f 100644
+--- a/drivers/s390/virtio/virtio_ccw.c
++++ b/drivers/s390/virtio/virtio_ccw.c
+@@ -302,11 +302,17 @@ static struct airq_info *new_airq_info(int index)
+ static unsigned long *get_airq_indicator(struct virtqueue *vqs[], int nvqs,
+ u64 *first, void **airq_info)
+ {
+- int i, j;
++ int i, j, queue_idx, highest_queue_idx = -1;
+ struct airq_info *info;
+ unsigned long *indicator_addr = NULL;
+ unsigned long bit, flags;
+
++ /* Array entries without an actual queue pointer must be ignored. */
++ for (i = 0; i < nvqs; i++) {
++ if (vqs[i])
++ highest_queue_idx++;
++ }
++
+ for (i = 0; i < MAX_AIRQ_AREAS && !indicator_addr; i++) {
+ mutex_lock(&airq_areas_lock);
+ if (!airq_areas[i])
+@@ -316,7 +322,7 @@ static unsigned long *get_airq_indicator(struct virtqueue *vqs[], int nvqs,
+ if (!info)
+ return NULL;
+ write_lock_irqsave(&info->lock, flags);
+- bit = airq_iv_alloc(info->aiv, nvqs);
++ bit = airq_iv_alloc(info->aiv, highest_queue_idx + 1);
+ if (bit == -1UL) {
+ /* Not enough vacancies. */
+ write_unlock_irqrestore(&info->lock, flags);
+@@ -325,8 +331,10 @@ static unsigned long *get_airq_indicator(struct virtqueue *vqs[], int nvqs,
+ *first = bit;
+ *airq_info = info;
+ indicator_addr = info->aiv->vector;
+- for (j = 0; j < nvqs; j++) {
+- airq_iv_set_ptr(info->aiv, bit + j,
++ for (j = 0, queue_idx = 0; j < nvqs; j++) {
++ if (!vqs[j])
++ continue;
++ airq_iv_set_ptr(info->aiv, bit + queue_idx++,
+ (unsigned long)vqs[j]);
+ }
+ write_unlock_irqrestore(&info->lock, flags);
+diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
+index 3fd9723cd271c8..92f3d442372907 100644
+--- a/drivers/scsi/lpfc/lpfc_sli.c
++++ b/drivers/scsi/lpfc/lpfc_sli.c
+@@ -2923,6 +2923,8 @@ lpfc_sli_def_mbox_cmpl(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
+ clear_bit(NLP_UNREG_INP, &ndlp->nlp_flag);
+ ndlp->nlp_defer_did = NLP_EVT_NOTHING_PENDING;
+ lpfc_issue_els_plogi(vport, ndlp->nlp_DID, 0);
++ } else {
++ clear_bit(NLP_UNREG_INP, &ndlp->nlp_flag);
+ }
+
+ /* The unreg_login mailbox is complete and had a
+diff --git a/drivers/scsi/mpi3mr/mpi3mr.h b/drivers/scsi/mpi3mr/mpi3mr.h
+index 0d72b5f1b69df1..6e3f337ace9f85 100644
+--- a/drivers/scsi/mpi3mr/mpi3mr.h
++++ b/drivers/scsi/mpi3mr/mpi3mr.h
+@@ -80,13 +80,14 @@ extern atomic64_t event_counter;
+
+ /* Admin queue management definitions */
+ #define MPI3MR_ADMIN_REQ_Q_SIZE (2 * MPI3MR_PAGE_SIZE_4K)
+-#define MPI3MR_ADMIN_REPLY_Q_SIZE (4 * MPI3MR_PAGE_SIZE_4K)
++#define MPI3MR_ADMIN_REPLY_Q_SIZE (8 * MPI3MR_PAGE_SIZE_4K)
+ #define MPI3MR_ADMIN_REQ_FRAME_SZ 128
+ #define MPI3MR_ADMIN_REPLY_FRAME_SZ 16
+
+ /* Operational queue management definitions */
+ #define MPI3MR_OP_REQ_Q_QD 512
+ #define MPI3MR_OP_REP_Q_QD 1024
++#define MPI3MR_OP_REP_Q_QD2K 2048
+ #define MPI3MR_OP_REP_Q_QD4K 4096
+ #define MPI3MR_OP_REQ_Q_SEG_SIZE 4096
+ #define MPI3MR_OP_REP_Q_SEG_SIZE 4096
+@@ -328,6 +329,7 @@ enum mpi3mr_reset_reason {
+ #define MPI3MR_RESET_REASON_OSTYPE_SHIFT 28
+ #define MPI3MR_RESET_REASON_IOCNUM_SHIFT 20
+
++
+ /* Queue type definitions */
+ enum queue_type {
+ MPI3MR_DEFAULT_QUEUE = 0,
+@@ -387,6 +389,7 @@ struct mpi3mr_ioc_facts {
+ u16 max_msix_vectors;
+ u8 personality;
+ u8 dma_mask;
++ bool max_req_limit;
+ u8 protocol_flags;
+ u8 sge_mod_mask;
+ u8 sge_mod_value;
+@@ -456,6 +459,8 @@ struct op_req_qinfo {
+ * @enable_irq_poll: Flag to indicate polling is enabled
+ * @in_use: Queue is handled by poll/ISR
+ * @qtype: Type of queue (types defined in enum queue_type)
++ * @qfull_watermark: Watermark defined in reply queue to avoid
++ * reply queue full
+ */
+ struct op_reply_qinfo {
+ u16 ci;
+@@ -471,6 +476,7 @@ struct op_reply_qinfo {
+ bool enable_irq_poll;
+ atomic_t in_use;
+ enum queue_type qtype;
++ u16 qfull_watermark;
+ };
+
+ /**
+@@ -1090,6 +1096,7 @@ struct scmd_priv {
+ * @ts_update_interval: Timestamp update interval
+ * @reset_in_progress: Reset in progress flag
+ * @unrecoverable: Controller unrecoverable flag
++ * @io_admin_reset_sync: Manage state of I/O ops during an admin reset process
+ * @prev_reset_result: Result of previous reset
+ * @reset_mutex: Controller reset mutex
+ * @reset_waitq: Controller reset wait queue
+@@ -1153,6 +1160,8 @@ struct scmd_priv {
+ * @snapdump_trigger_active: Snapdump trigger active flag
+ * @pci_err_recovery: PCI error recovery in progress
+ * @block_on_pci_err: Block IO during PCI error recovery
++ * @reply_qfull_count: Occurrences of reply queue full avoidance kicking-in
++ * @prevent_reply_qfull: Enable reply queue prevention
+ */
+ struct mpi3mr_ioc {
+ struct list_head list;
+@@ -1276,6 +1285,7 @@ struct mpi3mr_ioc {
+ u16 ts_update_interval;
+ u8 reset_in_progress;
+ u8 unrecoverable;
++ u8 io_admin_reset_sync;
+ int prev_reset_result;
+ struct mutex reset_mutex;
+ wait_queue_head_t reset_waitq;
+@@ -1351,6 +1361,8 @@ struct mpi3mr_ioc {
+ bool fw_release_trigger_active;
+ bool pci_err_recovery;
+ bool block_on_pci_err;
++ atomic_t reply_qfull_count;
++ bool prevent_reply_qfull;
+ };
+
+ /**
+diff --git a/drivers/scsi/mpi3mr/mpi3mr_app.c b/drivers/scsi/mpi3mr/mpi3mr_app.c
+index f4b5813e6fc4cf..db4b9f1b1d1b3a 100644
+--- a/drivers/scsi/mpi3mr/mpi3mr_app.c
++++ b/drivers/scsi/mpi3mr/mpi3mr_app.c
+@@ -3061,6 +3061,29 @@ reply_queue_count_show(struct device *dev, struct device_attribute *attr,
+
+ static DEVICE_ATTR_RO(reply_queue_count);
+
++/**
++ * reply_qfull_count_show - Show reply qfull count
++ * @dev: class device
++ * @attr: Device attributes
++ * @buf: Buffer to copy
++ *
++ * Retrieves the current value of the reply_qfull_count from the mrioc structure and
++ * formats it as a string for display.
++ *
++ * Return: sysfs_emit() return
++ */
++static ssize_t
++reply_qfull_count_show(struct device *dev, struct device_attribute *attr,
++ char *buf)
++{
++ struct Scsi_Host *shost = class_to_shost(dev);
++ struct mpi3mr_ioc *mrioc = shost_priv(shost);
++
++ return sysfs_emit(buf, "%u\n", atomic_read(&mrioc->reply_qfull_count));
++}
++
++static DEVICE_ATTR_RO(reply_qfull_count);
++
+ /**
+ * logging_level_show - Show controller debug level
+ * @dev: class device
+@@ -3153,6 +3176,7 @@ static struct attribute *mpi3mr_host_attrs[] = {
+ &dev_attr_fw_queue_depth.attr,
+ &dev_attr_op_req_q_count.attr,
+ &dev_attr_reply_queue_count.attr,
++ &dev_attr_reply_qfull_count.attr,
+ &dev_attr_logging_level.attr,
+ &dev_attr_adp_state.attr,
+ NULL,
+diff --git a/drivers/scsi/mpi3mr/mpi3mr_fw.c b/drivers/scsi/mpi3mr/mpi3mr_fw.c
+index 5ed31fe57474a3..ec5b1ab2871776 100644
+--- a/drivers/scsi/mpi3mr/mpi3mr_fw.c
++++ b/drivers/scsi/mpi3mr/mpi3mr_fw.c
+@@ -17,7 +17,7 @@ static void mpi3mr_process_factsdata(struct mpi3mr_ioc *mrioc,
+ struct mpi3_ioc_facts_data *facts_data);
+ static void mpi3mr_pel_wait_complete(struct mpi3mr_ioc *mrioc,
+ struct mpi3mr_drv_cmd *drv_cmd);
+-
++static int mpi3mr_check_op_admin_proc(struct mpi3mr_ioc *mrioc);
+ static int poll_queues;
+ module_param(poll_queues, int, 0444);
+ MODULE_PARM_DESC(poll_queues, "Number of queues for io_uring poll mode. (Range 1 - 126)");
+@@ -459,7 +459,7 @@ int mpi3mr_process_admin_reply_q(struct mpi3mr_ioc *mrioc)
+ }
+
+ do {
+- if (mrioc->unrecoverable)
++ if (mrioc->unrecoverable || mrioc->io_admin_reset_sync)
+ break;
+
+ mrioc->admin_req_ci = le16_to_cpu(reply_desc->request_queue_ci);
+@@ -554,7 +554,7 @@ int mpi3mr_process_op_reply_q(struct mpi3mr_ioc *mrioc,
+ }
+
+ do {
+- if (mrioc->unrecoverable)
++ if (mrioc->unrecoverable || mrioc->io_admin_reset_sync)
+ break;
+
+ req_q_idx = le16_to_cpu(reply_desc->request_queue_id) - 1;
+@@ -2104,15 +2104,22 @@ static int mpi3mr_create_op_reply_q(struct mpi3mr_ioc *mrioc, u16 qidx)
+ }
+
+ reply_qid = qidx + 1;
+- op_reply_q->num_replies = MPI3MR_OP_REP_Q_QD;
+- if ((mrioc->pdev->device == MPI3_MFGPAGE_DEVID_SAS4116) &&
+- !mrioc->pdev->revision)
+- op_reply_q->num_replies = MPI3MR_OP_REP_Q_QD4K;
++
++ if (mrioc->pdev->device == MPI3_MFGPAGE_DEVID_SAS4116) {
++ if (mrioc->pdev->revision)
++ op_reply_q->num_replies = MPI3MR_OP_REP_Q_QD;
++ else
++ op_reply_q->num_replies = MPI3MR_OP_REP_Q_QD4K;
++ } else
++ op_reply_q->num_replies = MPI3MR_OP_REP_Q_QD2K;
++
+ op_reply_q->ci = 0;
+ op_reply_q->ephase = 1;
+ atomic_set(&op_reply_q->pend_ios, 0);
+ atomic_set(&op_reply_q->in_use, 0);
+ op_reply_q->enable_irq_poll = false;
++ op_reply_q->qfull_watermark =
++ op_reply_q->num_replies - (MPI3MR_THRESHOLD_REPLY_COUNT * 2);
+
+ if (!op_reply_q->q_segments) {
+ retval = mpi3mr_alloc_op_reply_q_segments(mrioc, qidx);
+@@ -2416,8 +2423,10 @@ int mpi3mr_op_request_post(struct mpi3mr_ioc *mrioc,
+ void *segment_base_addr;
+ u16 req_sz = mrioc->facts.op_req_sz;
+ struct segments *segments = op_req_q->q_segments;
++ struct op_reply_qinfo *op_reply_q = NULL;
+
+ reply_qidx = op_req_q->reply_qid - 1;
++ op_reply_q = mrioc->op_reply_qinfo + reply_qidx;
+
+ if (mrioc->unrecoverable)
+ return -EFAULT;
+@@ -2448,6 +2457,15 @@ int mpi3mr_op_request_post(struct mpi3mr_ioc *mrioc,
+ goto out;
+ }
+
++ /* Reply queue is nearing full, push back IOs to SML */
++ if ((mrioc->prevent_reply_qfull == true) &&
++ (atomic_read(&op_reply_q->pend_ios) >
++ (op_reply_q->qfull_watermark))) {
++ atomic_inc(&mrioc->reply_qfull_count);
++ retval = -EAGAIN;
++ goto out;
++ }
++
+ segment_base_addr = segments[pi / op_req_q->segment_qd].segment;
+ req_entry = (u8 *)segment_base_addr +
+ ((pi % op_req_q->segment_qd) * req_sz);
+@@ -3091,6 +3109,9 @@ static void mpi3mr_process_factsdata(struct mpi3mr_ioc *mrioc,
+ mrioc->facts.dma_mask = (facts_flags &
+ MPI3_IOCFACTS_FLAGS_DMA_ADDRESS_WIDTH_MASK) >>
+ MPI3_IOCFACTS_FLAGS_DMA_ADDRESS_WIDTH_SHIFT;
++ mrioc->facts.dma_mask = (facts_flags &
++ MPI3_IOCFACTS_FLAGS_DMA_ADDRESS_WIDTH_MASK) >>
++ MPI3_IOCFACTS_FLAGS_DMA_ADDRESS_WIDTH_SHIFT;
+ mrioc->facts.protocol_flags = facts_data->protocol_flags;
+ mrioc->facts.mpi_version = le32_to_cpu(facts_data->mpi_version.word);
+ mrioc->facts.max_reqs = le16_to_cpu(facts_data->max_outstanding_requests);
+@@ -4214,6 +4235,9 @@ int mpi3mr_init_ioc(struct mpi3mr_ioc *mrioc)
+ mrioc->shost->transportt = mpi3mr_transport_template;
+ }
+
++ if (mrioc->facts.max_req_limit)
++ mrioc->prevent_reply_qfull = true;
++
+ mrioc->reply_sz = mrioc->facts.reply_sz;
+
+ retval = mpi3mr_check_reset_dma_mask(mrioc);
+@@ -4370,6 +4394,7 @@ int mpi3mr_reinit_ioc(struct mpi3mr_ioc *mrioc, u8 is_resume)
+ goto out_failed_noretry;
+ }
+
++ mrioc->io_admin_reset_sync = 0;
+ if (is_resume || mrioc->block_on_pci_err) {
+ dprint_reset(mrioc, "setting up single ISR\n");
+ retval = mpi3mr_setup_isr(mrioc, 1);
+@@ -5228,6 +5253,55 @@ void mpi3mr_pel_get_seqnum_complete(struct mpi3mr_ioc *mrioc,
+ drv_cmd->retry_count = 0;
+ }
+
++/**
++ * mpi3mr_check_op_admin_proc -
++ * @mrioc: Adapter instance reference
++ *
++ * Check if any of the operation reply queues
++ * or the admin reply queue are currently in use.
++ * If any queue is in use, this function waits for
++ * a maximum of 10 seconds for them to become available.
++ *
++ * Return: 0 on success, non-zero on failure.
++ */
++static int mpi3mr_check_op_admin_proc(struct mpi3mr_ioc *mrioc)
++{
++
++ u16 timeout = 10 * 10;
++ u16 elapsed_time = 0;
++ bool op_admin_in_use = false;
++
++ do {
++ op_admin_in_use = false;
++
++ /* Check admin_reply queue first to exit early */
++ if (atomic_read(&mrioc->admin_reply_q_in_use) == 1)
++ op_admin_in_use = true;
++ else {
++ /* Check op_reply queues */
++ int i;
++
++ for (i = 0; i < mrioc->num_queues; i++) {
++ if (atomic_read(&mrioc->op_reply_qinfo[i].in_use) == 1) {
++ op_admin_in_use = true;
++ break;
++ }
++ }
++ }
++
++ if (!op_admin_in_use)
++ break;
++
++ msleep(100);
++
++ } while (++elapsed_time < timeout);
++
++ if (op_admin_in_use)
++ return 1;
++
++ return 0;
++}
++
+ /**
+ * mpi3mr_soft_reset_handler - Reset the controller
+ * @mrioc: Adapter instance reference
+@@ -5308,6 +5382,7 @@ int mpi3mr_soft_reset_handler(struct mpi3mr_ioc *mrioc,
+ mpi3mr_wait_for_host_io(mrioc, MPI3MR_RESET_HOST_IOWAIT_TIMEOUT);
+
+ mpi3mr_ioc_disable_intr(mrioc);
++ mrioc->io_admin_reset_sync = 1;
+
+ if (snapdump) {
+ mpi3mr_set_diagsave(mrioc);
+@@ -5335,6 +5410,16 @@ int mpi3mr_soft_reset_handler(struct mpi3mr_ioc *mrioc,
+ ioc_err(mrioc, "Failed to issue soft reset to the ioc\n");
+ goto out;
+ }
++
++ retval = mpi3mr_check_op_admin_proc(mrioc);
++ if (retval) {
++ ioc_err(mrioc, "Soft reset failed due to an Admin or I/O queue polling\n"
++ "thread still processing replies even after a 10 second\n"
++ "timeout. Marking the controller as unrecoverable!\n");
++
++ goto out;
++ }
++
+ if (mrioc->num_io_throttle_group !=
+ mrioc->facts.max_io_throttle_group) {
+ ioc_err(mrioc,
+diff --git a/drivers/scsi/st.c b/drivers/scsi/st.c
+index ebbd50ec0cda51..344e4da336bb56 100644
+--- a/drivers/scsi/st.c
++++ b/drivers/scsi/st.c
+@@ -4122,7 +4122,7 @@ static void validate_options(void)
+ */
+ static int __init st_setup(char *str)
+ {
+- int i, len, ints[5];
++ int i, len, ints[ARRAY_SIZE(parms) + 1];
+ char *stp;
+
+ stp = get_options(str, ARRAY_SIZE(ints), ints);
+diff --git a/drivers/soc/samsung/exynos-chipid.c b/drivers/soc/samsung/exynos-chipid.c
+index e37dde1fb588ec..95294462ff2113 100644
+--- a/drivers/soc/samsung/exynos-chipid.c
++++ b/drivers/soc/samsung/exynos-chipid.c
+@@ -134,6 +134,8 @@ static int exynos_chipid_probe(struct platform_device *pdev)
+
+ soc_dev_attr->revision = devm_kasprintf(&pdev->dev, GFP_KERNEL,
+ "%x", soc_info.revision);
++ if (!soc_dev_attr->revision)
++ return -ENOMEM;
+ soc_dev_attr->soc_id = product_id_to_soc_id(soc_info.product_id);
+ if (!soc_dev_attr->soc_id) {
+ pr_err("Unknown SoC\n");
+diff --git a/drivers/spi/spi-cadence-quadspi.c b/drivers/spi/spi-cadence-quadspi.c
+index 0cd37a7436d539..c90462783b3f9f 100644
+--- a/drivers/spi/spi-cadence-quadspi.c
++++ b/drivers/spi/spi-cadence-quadspi.c
+@@ -1658,6 +1658,12 @@ static int cqspi_request_mmap_dma(struct cqspi_st *cqspi)
+ int ret = PTR_ERR(cqspi->rx_chan);
+
+ cqspi->rx_chan = NULL;
++ if (ret == -ENODEV) {
++ /* DMA support is not mandatory */
++ dev_info(&cqspi->pdev->dev, "No Rx DMA available\n");
++ return 0;
++ }
++
+ return dev_err_probe(&cqspi->pdev->dev, ret, "No Rx DMA available\n");
+ }
+ init_completion(&cqspi->rx_dma_complete);
+diff --git a/drivers/spi/spi-fsl-qspi.c b/drivers/spi/spi-fsl-qspi.c
+index 355e6a39fb4189..2f54dc09d11b1c 100644
+--- a/drivers/spi/spi-fsl-qspi.c
++++ b/drivers/spi/spi-fsl-qspi.c
+@@ -844,6 +844,19 @@ static const struct spi_controller_mem_caps fsl_qspi_mem_caps = {
+ .per_op_freq = true,
+ };
+
++static void fsl_qspi_cleanup(void *data)
++{
++ struct fsl_qspi *q = data;
++
++ /* disable the hardware */
++ qspi_writel(q, QUADSPI_MCR_MDIS_MASK, q->iobase + QUADSPI_MCR);
++ qspi_writel(q, 0x0, q->iobase + QUADSPI_RSER);
++
++ fsl_qspi_clk_disable_unprep(q);
++
++ mutex_destroy(&q->lock);
++}
++
+ static int fsl_qspi_probe(struct platform_device *pdev)
+ {
+ struct spi_controller *ctlr;
+@@ -934,15 +947,16 @@ static int fsl_qspi_probe(struct platform_device *pdev)
+
+ ctlr->dev.of_node = np;
+
++ ret = devm_add_action_or_reset(dev, fsl_qspi_cleanup, q);
++ if (ret)
++ goto err_put_ctrl;
++
+ ret = devm_spi_register_controller(dev, ctlr);
+ if (ret)
+- goto err_destroy_mutex;
++ goto err_put_ctrl;
+
+ return 0;
+
+-err_destroy_mutex:
+- mutex_destroy(&q->lock);
+-
+ err_disable_clk:
+ fsl_qspi_clk_disable_unprep(q);
+
+@@ -953,19 +967,6 @@ static int fsl_qspi_probe(struct platform_device *pdev)
+ return ret;
+ }
+
+-static void fsl_qspi_remove(struct platform_device *pdev)
+-{
+- struct fsl_qspi *q = platform_get_drvdata(pdev);
+-
+- /* disable the hardware */
+- qspi_writel(q, QUADSPI_MCR_MDIS_MASK, q->iobase + QUADSPI_MCR);
+- qspi_writel(q, 0x0, q->iobase + QUADSPI_RSER);
+-
+- fsl_qspi_clk_disable_unprep(q);
+-
+- mutex_destroy(&q->lock);
+-}
+-
+ static int fsl_qspi_suspend(struct device *dev)
+ {
+ return 0;
+@@ -1003,7 +1004,6 @@ static struct platform_driver fsl_qspi_driver = {
+ .pm = &fsl_qspi_pm_ops,
+ },
+ .probe = fsl_qspi_probe,
+- .remove = fsl_qspi_remove,
+ };
+ module_platform_driver(fsl_qspi_driver);
+
+diff --git a/drivers/target/target_core_spc.c b/drivers/target/target_core_spc.c
+index ea14a38356814d..61c065702350e0 100644
+--- a/drivers/target/target_core_spc.c
++++ b/drivers/target/target_core_spc.c
+@@ -2243,7 +2243,7 @@ spc_emulate_report_supp_op_codes(struct se_cmd *cmd)
+ response_length += spc_rsoc_encode_command_descriptor(
+ &buf[response_length], rctd, descr);
+ }
+- put_unaligned_be32(response_length - 3, buf);
++ put_unaligned_be32(response_length - 4, buf);
+ } else {
+ response_length = spc_rsoc_encode_one_command_descriptor(
+ &buf[response_length], rctd, descr,
+diff --git a/drivers/thermal/mediatek/lvts_thermal.c b/drivers/thermal/mediatek/lvts_thermal.c
+index 07f7f3b7a2fb56..0aaa44b734ca43 100644
+--- a/drivers/thermal/mediatek/lvts_thermal.c
++++ b/drivers/thermal/mediatek/lvts_thermal.c
+@@ -65,7 +65,7 @@
+ #define LVTS_HW_FILTER 0x0
+ #define LVTS_TSSEL_CONF 0x13121110
+ #define LVTS_CALSCALE_CONF 0x300
+-#define LVTS_MONINT_CONF 0x8300318C
++#define LVTS_MONINT_CONF 0x0300318C
+
+ #define LVTS_MONINT_OFFSET_SENSOR0 0xC
+ #define LVTS_MONINT_OFFSET_SENSOR1 0x180
+@@ -91,8 +91,6 @@
+ #define LVTS_MSR_READ_TIMEOUT_US 400
+ #define LVTS_MSR_READ_WAIT_US (LVTS_MSR_READ_TIMEOUT_US / 2)
+
+-#define LVTS_HW_TSHUT_TEMP 105000
+-
+ #define LVTS_MINIMUM_THRESHOLD 20000
+
+ static int golden_temp = LVTS_GOLDEN_TEMP_DEFAULT;
+@@ -145,7 +143,6 @@ struct lvts_ctrl {
+ struct lvts_sensor sensors[LVTS_SENSOR_MAX];
+ const struct lvts_data *lvts_data;
+ u32 calibration[LVTS_SENSOR_MAX];
+- u32 hw_tshut_raw_temp;
+ u8 valid_sensor_mask;
+ int mode;
+ void __iomem *base;
+@@ -837,14 +834,6 @@ static int lvts_ctrl_init(struct device *dev, struct lvts_domain *lvts_td,
+ */
+ lvts_ctrl[i].mode = lvts_data->lvts_ctrl[i].mode;
+
+- /*
+- * The temperature to raw temperature must be done
+- * after initializing the calibration.
+- */
+- lvts_ctrl[i].hw_tshut_raw_temp =
+- lvts_temp_to_raw(LVTS_HW_TSHUT_TEMP,
+- lvts_data->temp_factor);
+-
+ lvts_ctrl[i].low_thresh = INT_MIN;
+ lvts_ctrl[i].high_thresh = INT_MIN;
+ }
+@@ -860,6 +849,32 @@ static int lvts_ctrl_init(struct device *dev, struct lvts_domain *lvts_td,
+ return 0;
+ }
+
++static void lvts_ctrl_monitor_enable(struct device *dev, struct lvts_ctrl *lvts_ctrl, bool enable)
++{
++ /*
++ * Bitmaps to enable each sensor on filtered mode in the MONCTL0
++ * register.
++ */
++ static const u8 sensor_filt_bitmap[] = { BIT(0), BIT(1), BIT(2), BIT(3) };
++ u32 sensor_map = 0;
++ int i;
++
++ if (lvts_ctrl->mode != LVTS_MSR_FILTERED_MODE)
++ return;
++
++ if (enable) {
++ lvts_for_each_valid_sensor(i, lvts_ctrl)
++ sensor_map |= sensor_filt_bitmap[i];
++ }
++
++ /*
++ * Bits:
++ * 9: Single point access flow
++ * 0-3: Enable sensing point 0-3
++ */
++ writel(sensor_map | BIT(9), LVTS_MONCTL0(lvts_ctrl->base));
++}
++
+ /*
+ * At this point the configuration register is the only place in the
+ * driver where we write multiple values. Per hardware constraint,
+@@ -893,7 +908,6 @@ static int lvts_irq_init(struct lvts_ctrl *lvts_ctrl)
+ * 10 : Selected sensor with bits 19-18
+ * 11 : Reserved
+ */
+- writel(BIT(16), LVTS_PROTCTL(lvts_ctrl->base));
+
+ /*
+ * LVTS_PROTTA : Stage 1 temperature threshold
+@@ -906,8 +920,8 @@ static int lvts_irq_init(struct lvts_ctrl *lvts_ctrl)
+ *
+ * writel(0x0, LVTS_PROTTA(lvts_ctrl->base));
+ * writel(0x0, LVTS_PROTTB(lvts_ctrl->base));
++ * writel(0x0, LVTS_PROTTC(lvts_ctrl->base));
+ */
+- writel(lvts_ctrl->hw_tshut_raw_temp, LVTS_PROTTC(lvts_ctrl->base));
+
+ /*
+ * LVTS_MONINT : Interrupt configuration register
+@@ -1381,8 +1395,11 @@ static int lvts_suspend(struct device *dev)
+
+ lvts_td = dev_get_drvdata(dev);
+
+- for (i = 0; i < lvts_td->num_lvts_ctrl; i++)
++ for (i = 0; i < lvts_td->num_lvts_ctrl; i++) {
++ lvts_ctrl_monitor_enable(dev, &lvts_td->lvts_ctrl[i], false);
++ usleep_range(100, 200);
+ lvts_ctrl_set_enable(&lvts_td->lvts_ctrl[i], false);
++ }
+
+ clk_disable_unprepare(lvts_td->clk);
+
+@@ -1400,8 +1417,11 @@ static int lvts_resume(struct device *dev)
+ if (ret)
+ return ret;
+
+- for (i = 0; i < lvts_td->num_lvts_ctrl; i++)
++ for (i = 0; i < lvts_td->num_lvts_ctrl; i++) {
+ lvts_ctrl_set_enable(&lvts_td->lvts_ctrl[i], true);
++ usleep_range(100, 200);
++ lvts_ctrl_monitor_enable(dev, &lvts_td->lvts_ctrl[i], true);
++ }
+
+ return 0;
+ }
+diff --git a/drivers/thermal/rockchip_thermal.c b/drivers/thermal/rockchip_thermal.c
+index f551df48eef935..a8ad85feb68fbb 100644
+--- a/drivers/thermal/rockchip_thermal.c
++++ b/drivers/thermal/rockchip_thermal.c
+@@ -386,6 +386,7 @@ static const struct tsadc_table rk3328_code_table[] = {
+ {296, -40000},
+ {304, -35000},
+ {313, -30000},
++ {322, -25000},
+ {331, -20000},
+ {340, -15000},
+ {349, -10000},
+diff --git a/drivers/vdpa/mlx5/core/mr.c b/drivers/vdpa/mlx5/core/mr.c
+index 8455f08f5d4060..61424342c09641 100644
+--- a/drivers/vdpa/mlx5/core/mr.c
++++ b/drivers/vdpa/mlx5/core/mr.c
+@@ -190,9 +190,12 @@ static void fill_indir(struct mlx5_vdpa_dev *mvdev, struct mlx5_vdpa_mr *mkey, v
+ klm->bcount = cpu_to_be32(klm_bcount(dmr->end - dmr->start));
+ preve = dmr->end;
+ } else {
++ u64 bcount = min_t(u64, dmr->start - preve, MAX_KLM_SIZE);
++
+ klm->key = cpu_to_be32(mvdev->res.null_mkey);
+- klm->bcount = cpu_to_be32(klm_bcount(dmr->start - preve));
+- preve = dmr->start;
++ klm->bcount = cpu_to_be32(klm_bcount(bcount));
++ preve += bcount;
++
+ goto again;
+ }
+ }
+diff --git a/drivers/video/backlight/led_bl.c b/drivers/video/backlight/led_bl.c
+index ae34d1ecbfbef7..d2db157b2c290a 100644
+--- a/drivers/video/backlight/led_bl.c
++++ b/drivers/video/backlight/led_bl.c
+@@ -229,8 +229,11 @@ static void led_bl_remove(struct platform_device *pdev)
+ backlight_device_unregister(bl);
+
+ led_bl_power_off(priv);
+- for (i = 0; i < priv->nb_leds; i++)
++ for (i = 0; i < priv->nb_leds; i++) {
++ mutex_lock(&priv->leds[i]->led_access);
+ led_sysfs_enable(priv->leds[i]);
++ mutex_unlock(&priv->leds[i]->led_access);
++ }
+ }
+
+ static const struct of_device_id led_bl_of_match[] = {
+diff --git a/drivers/video/fbdev/omap2/omapfb/dss/dispc.c b/drivers/video/fbdev/omap2/omapfb/dss/dispc.c
+index ccb96a5be07e46..139476f9d91898 100644
+--- a/drivers/video/fbdev/omap2/omapfb/dss/dispc.c
++++ b/drivers/video/fbdev/omap2/omapfb/dss/dispc.c
+@@ -2738,9 +2738,13 @@ int dispc_ovl_setup(enum omap_plane plane, const struct omap_overlay_info *oi,
+ bool mem_to_mem)
+ {
+ int r;
+- enum omap_overlay_caps caps = dss_feat_get_overlay_caps(plane);
++ enum omap_overlay_caps caps;
+ enum omap_channel channel;
+
++ if (plane == OMAP_DSS_WB)
++ return -EINVAL;
++
++ caps = dss_feat_get_overlay_caps(plane);
+ channel = dispc_ovl_get_channel_out(plane);
+
+ DSSDBG("dispc_ovl_setup %d, pa %pad, pa_uv %pad, sw %d, %d,%d, %dx%d ->"
+diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
+index 163f7f1d70f1b1..ee165f4f7fe6c9 100644
+--- a/drivers/xen/balloon.c
++++ b/drivers/xen/balloon.c
+@@ -675,7 +675,7 @@ void xen_free_ballooned_pages(unsigned int nr_pages, struct page **pages)
+ }
+ EXPORT_SYMBOL(xen_free_ballooned_pages);
+
+-static void __init balloon_add_regions(void)
++static int __init balloon_add_regions(void)
+ {
+ unsigned long start_pfn, pages;
+ unsigned long pfn, extra_pfn_end;
+@@ -698,26 +698,38 @@ static void __init balloon_add_regions(void)
+ for (pfn = start_pfn; pfn < extra_pfn_end; pfn++)
+ balloon_append(pfn_to_page(pfn));
+
+- balloon_stats.total_pages += extra_pfn_end - start_pfn;
++ /*
++ * Extra regions are accounted for in the physmap, but need
++ * decreasing from current_pages to balloon down the initial
++ * allocation, because they are already accounted for in
++ * total_pages.
++ */
++ if (extra_pfn_end - start_pfn >= balloon_stats.current_pages) {
++ WARN(1, "Extra pages underflow current target");
++ return -ERANGE;
++ }
++ balloon_stats.current_pages -= extra_pfn_end - start_pfn;
+ }
++
++ return 0;
+ }
+
+ static int __init balloon_init(void)
+ {
+ struct task_struct *task;
++ int rc;
+
+ if (!xen_domain())
+ return -ENODEV;
+
+ pr_info("Initialising balloon driver\n");
+
+-#ifdef CONFIG_XEN_PV
+- balloon_stats.current_pages = xen_pv_domain()
+- ? min(xen_start_info->nr_pages - xen_released_pages, max_pfn)
+- : get_num_physpages();
+-#else
+- balloon_stats.current_pages = get_num_physpages();
+-#endif
++ if (xen_released_pages >= get_num_physpages()) {
++ WARN(1, "Released pages underflow current target");
++ return -ERANGE;
++ }
++
++ balloon_stats.current_pages = get_num_physpages() - xen_released_pages;
+ balloon_stats.target_pages = balloon_stats.current_pages;
+ balloon_stats.balloon_low = 0;
+ balloon_stats.balloon_high = 0;
+@@ -734,7 +746,9 @@ static int __init balloon_init(void)
+ register_sysctl_init("xen/balloon", balloon_table);
+ #endif
+
+- balloon_add_regions();
++ rc = balloon_add_regions();
++ if (rc)
++ return rc;
+
+ task = kthread_run(balloon_thread, NULL, "xen-balloon");
+ if (IS_ERR(task)) {
+diff --git a/drivers/xen/xenfs/xensyms.c b/drivers/xen/xenfs/xensyms.c
+index b799bc759c15f4..088b7f02c35866 100644
+--- a/drivers/xen/xenfs/xensyms.c
++++ b/drivers/xen/xenfs/xensyms.c
+@@ -48,7 +48,7 @@ static int xensyms_next_sym(struct xensyms *xs)
+ return -ENOMEM;
+
+ set_xen_guest_handle(symdata->name, xs->name);
+- symdata->symnum--; /* Rewind */
++ symdata->symnum = symnum; /* Rewind */
+
+ ret = HYPERVISOR_platform_op(&xs->op);
+ if (ret < 0)
+@@ -78,7 +78,7 @@ static void *xensyms_next(struct seq_file *m, void *p, loff_t *pos)
+ {
+ struct xensyms *xs = m->private;
+
+- xs->op.u.symdata.symnum = ++(*pos);
++ *pos = xs->op.u.symdata.symnum;
+
+ if (xensyms_next_sym(xs))
+ return NULL;
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index 70b61bc237e98e..ca821e5966bd3a 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -4348,6 +4348,18 @@ void __cold close_ctree(struct btrfs_fs_info *fs_info)
+ */
+ btrfs_flush_workqueue(fs_info->delalloc_workers);
+
++ /*
++ * When finishing a compressed write bio we schedule a work queue item
++ * to finish an ordered extent - btrfs_finish_compressed_write_work()
++ * calls btrfs_finish_ordered_extent() which in turns does a call to
++ * btrfs_queue_ordered_fn(), and that queues the ordered extent
++ * completion either in the endio_write_workers work queue or in the
++ * fs_info->endio_freespace_worker work queue. We flush those queues
++ * below, so before we flush them we must flush this queue for the
++ * workers of compressed writes.
++ */
++ flush_workqueue(fs_info->compressed_write_workers);
++
+ /*
+ * After we parked the cleaner kthread, ordered extents may have
+ * completed and created new delayed iputs. If one of the async reclaim
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index 3014a1a23efdbf..6d615711f04001 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -2874,7 +2874,15 @@ int btrfs_finish_extent_commit(struct btrfs_trans_handle *trans)
+ block_group->length,
+ &trimmed);
+
++ /*
++ * Not strictly necessary to lock, as the block_group should be
++ * read-only from btrfs_delete_unused_bgs().
++ */
++ ASSERT(block_group->ro);
++ spin_lock(&fs_info->unused_bgs_lock);
+ list_del_init(&block_group->bg_list);
++ spin_unlock(&fs_info->unused_bgs_lock);
++
+ btrfs_unfreeze_block_group(block_group);
+ btrfs_put_block_group(block_group);
+
+diff --git a/fs/btrfs/tests/extent-map-tests.c b/fs/btrfs/tests/extent-map-tests.c
+index 56e61ac1cc64c8..609bb6c9c0873f 100644
+--- a/fs/btrfs/tests/extent-map-tests.c
++++ b/fs/btrfs/tests/extent-map-tests.c
+@@ -1045,6 +1045,7 @@ static int test_rmap_block(struct btrfs_fs_info *fs_info,
+ ret = btrfs_add_chunk_map(fs_info, map);
+ if (ret) {
+ test_err("error adding chunk map to mapping tree");
++ btrfs_free_chunk_map(map);
+ goto out_free;
+ }
+
+diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c
+index aca83a98b75a24..c0e9d4bbe380df 100644
+--- a/fs/btrfs/transaction.c
++++ b/fs/btrfs/transaction.c
+@@ -160,7 +160,13 @@ void btrfs_put_transaction(struct btrfs_transaction *transaction)
+ cache = list_first_entry(&transaction->deleted_bgs,
+ struct btrfs_block_group,
+ bg_list);
++ /*
++ * Not strictly necessary to lock, as no other task will be using a
++ * block_group on the deleted_bgs list during a transaction abort.
++ */
++ spin_lock(&transaction->fs_info->unused_bgs_lock);
+ list_del_init(&cache->bg_list);
++ spin_unlock(&transaction->fs_info->unused_bgs_lock);
+ btrfs_unfreeze_block_group(cache);
+ btrfs_put_block_group(cache);
+ }
+@@ -2096,7 +2102,13 @@ static void btrfs_cleanup_pending_block_groups(struct btrfs_trans_handle *trans)
+
+ list_for_each_entry_safe(block_group, tmp, &trans->new_bgs, bg_list) {
+ btrfs_dec_delayed_refs_rsv_bg_inserts(fs_info);
++ /*
++ * Not strictly necessary to lock, as no other task will be using a
++ * block_group on the new_bgs list during a transaction abort.
++ */
++ spin_lock(&fs_info->unused_bgs_lock);
+ list_del_init(&block_group->bg_list);
++ spin_unlock(&fs_info->unused_bgs_lock);
+ }
+ }
+
+diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
+index 73e0aa9fc08a5d..aaf925897fdda3 100644
+--- a/fs/btrfs/zoned.c
++++ b/fs/btrfs/zoned.c
+@@ -2111,6 +2111,9 @@ bool btrfs_zone_activate(struct btrfs_block_group *block_group)
+ physical = map->stripes[i].physical;
+ zinfo = device->zone_info;
+
++ if (!device->bdev)
++ continue;
++
+ if (zinfo->max_active_zones == 0)
+ continue;
+
+@@ -2272,6 +2275,9 @@ static int do_zone_finish(struct btrfs_block_group *block_group, bool fully_writ
+ struct btrfs_zoned_device_info *zinfo = device->zone_info;
+ unsigned int nofs_flags;
+
++ if (!device->bdev)
++ continue;
++
+ if (zinfo->max_active_zones == 0)
+ continue;
+
+diff --git a/fs/dlm/lock.c b/fs/dlm/lock.c
+index c8ff88f1cdcf2c..e01d5f29f4d252 100644
+--- a/fs/dlm/lock.c
++++ b/fs/dlm/lock.c
+@@ -741,6 +741,7 @@ static int find_rsb_dir(struct dlm_ls *ls, const void *name, int len,
+ read_lock_bh(&ls->ls_rsbtbl_lock);
+ if (!rsb_flag(r, RSB_HASHED)) {
+ read_unlock_bh(&ls->ls_rsbtbl_lock);
++ error = -EBADR;
+ goto do_new;
+ }
+
+@@ -784,6 +785,7 @@ static int find_rsb_dir(struct dlm_ls *ls, const void *name, int len,
+ }
+ } else {
+ write_unlock_bh(&ls->ls_rsbtbl_lock);
++ error = -EBADR;
+ goto do_new;
+ }
+
+diff --git a/fs/erofs/fileio.c b/fs/erofs/fileio.c
+index 0ffd1c63beeb98..abb9c6d3b1aa2a 100644
+--- a/fs/erofs/fileio.c
++++ b/fs/erofs/fileio.c
+@@ -32,6 +32,8 @@ static void erofs_fileio_ki_complete(struct kiocb *iocb, long ret)
+ ret = 0;
+ }
+ if (rq->bio.bi_end_io) {
++ if (ret < 0 && !rq->bio.bi_status)
++ rq->bio.bi_status = errno_to_blk_status(ret);
+ rq->bio.bi_end_io(&rq->bio);
+ } else {
+ bio_for_each_folio_all(fi, &rq->bio) {
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 4009f9017a0e97..4108b7d1696fff 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -4710,22 +4710,43 @@ static inline void ext4_inode_set_iversion_queried(struct inode *inode, u64 val)
+ inode_set_iversion_queried(inode, val);
+ }
+
+-static const char *check_igot_inode(struct inode *inode, ext4_iget_flags flags)
+-
++static int check_igot_inode(struct inode *inode, ext4_iget_flags flags,
++ const char *function, unsigned int line)
+ {
++ const char *err_str;
++
+ if (flags & EXT4_IGET_EA_INODE) {
+- if (!(EXT4_I(inode)->i_flags & EXT4_EA_INODE_FL))
+- return "missing EA_INODE flag";
++ if (!(EXT4_I(inode)->i_flags & EXT4_EA_INODE_FL)) {
++ err_str = "missing EA_INODE flag";
++ goto error;
++ }
+ if (ext4_test_inode_state(inode, EXT4_STATE_XATTR) ||
+- EXT4_I(inode)->i_file_acl)
+- return "ea_inode with extended attributes";
++ EXT4_I(inode)->i_file_acl) {
++ err_str = "ea_inode with extended attributes";
++ goto error;
++ }
+ } else {
+- if ((EXT4_I(inode)->i_flags & EXT4_EA_INODE_FL))
+- return "unexpected EA_INODE flag";
++ if ((EXT4_I(inode)->i_flags & EXT4_EA_INODE_FL)) {
++ /*
++ * open_by_handle_at() could provide an old inode number
++ * that has since been reused for an ea_inode; this does
++ * not indicate filesystem corruption
++ */
++ if (flags & EXT4_IGET_HANDLE)
++ return -ESTALE;
++ err_str = "unexpected EA_INODE flag";
++ goto error;
++ }
++ }
++ if (is_bad_inode(inode) && !(flags & EXT4_IGET_BAD)) {
++ err_str = "unexpected bad inode w/o EXT4_IGET_BAD";
++ goto error;
+ }
+- if (is_bad_inode(inode) && !(flags & EXT4_IGET_BAD))
+- return "unexpected bad inode w/o EXT4_IGET_BAD";
+- return NULL;
++ return 0;
++
++error:
++ ext4_error_inode(inode, function, line, 0, err_str);
++ return -EFSCORRUPTED;
+ }
+
+ struct inode *__ext4_iget(struct super_block *sb, unsigned long ino,
+@@ -4737,7 +4758,6 @@ struct inode *__ext4_iget(struct super_block *sb, unsigned long ino,
+ struct ext4_inode_info *ei;
+ struct ext4_super_block *es = EXT4_SB(sb)->s_es;
+ struct inode *inode;
+- const char *err_str;
+ journal_t *journal = EXT4_SB(sb)->s_journal;
+ long ret;
+ loff_t size;
+@@ -4766,10 +4786,10 @@ struct inode *__ext4_iget(struct super_block *sb, unsigned long ino,
+ if (!inode)
+ return ERR_PTR(-ENOMEM);
+ if (!(inode->i_state & I_NEW)) {
+- if ((err_str = check_igot_inode(inode, flags)) != NULL) {
+- ext4_error_inode(inode, function, line, 0, err_str);
++ ret = check_igot_inode(inode, flags, function, line);
++ if (ret) {
+ iput(inode);
+- return ERR_PTR(-EFSCORRUPTED);
++ return ERR_PTR(ret);
+ }
+ return inode;
+ }
+@@ -5050,13 +5070,21 @@ struct inode *__ext4_iget(struct super_block *sb, unsigned long ino,
+ ret = -EFSCORRUPTED;
+ goto bad_inode;
+ }
+- if ((err_str = check_igot_inode(inode, flags)) != NULL) {
+- ext4_error_inode(inode, function, line, 0, err_str);
+- ret = -EFSCORRUPTED;
+- goto bad_inode;
++ ret = check_igot_inode(inode, flags, function, line);
++ /*
++ * -ESTALE here means there is nothing inherently wrong with the inode,
++ * it's just not an inode we can return for an fhandle lookup.
++ */
++ if (ret == -ESTALE) {
++ brelse(iloc.bh);
++ unlock_new_inode(inode);
++ iput(inode);
++ return ERR_PTR(-ESTALE);
+ }
+-
++ if (ret)
++ goto bad_inode;
+ brelse(iloc.bh);
++
+ unlock_new_inode(inode);
+ return inode;
+
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index 8e49cb7118581d..b998020c68193b 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -1995,7 +1995,7 @@ static struct ext4_dir_entry_2 *do_split(handle_t *handle, struct inode *dir,
+ * split it in half by count; each resulting block will have at least
+ * half the space free.
+ */
+- if (i > 0)
++ if (i >= 0)
+ split = count - move;
+ else
+ split = count/2;
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index dc46a7063f1e17..528979de0f7c1e 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -6938,12 +6938,25 @@ static int ext4_release_dquot(struct dquot *dquot)
+ {
+ int ret, err;
+ handle_t *handle;
++ bool freeze_protected = false;
++
++ /*
++ * Trying to sb_start_intwrite() in a running transaction
++ * can result in a deadlock. Further, running transactions
++ * are already protected from freezing.
++ */
++ if (!ext4_journal_current_handle()) {
++ sb_start_intwrite(dquot->dq_sb);
++ freeze_protected = true;
++ }
+
+ handle = ext4_journal_start(dquot_to_inode(dquot), EXT4_HT_QUOTA,
+ EXT4_QUOTA_DEL_BLOCKS(dquot->dq_sb));
+ if (IS_ERR(handle)) {
+ /* Release dquot anyway to avoid endless cycle in dqput() */
+ dquot_release(dquot);
++ if (freeze_protected)
++ sb_end_intwrite(dquot->dq_sb);
+ return PTR_ERR(handle);
+ }
+ ret = dquot_release(dquot);
+@@ -6954,6 +6967,10 @@ static int ext4_release_dquot(struct dquot *dquot)
+ err = ext4_journal_stop(handle);
+ if (!ret)
+ ret = err;
++
++ if (freeze_protected)
++ sb_end_intwrite(dquot->dq_sb);
++
+ return ret;
+ }
+
+diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c
+index a10fb8a9d02dc9..8ced9beba2f7ed 100644
+--- a/fs/ext4/xattr.c
++++ b/fs/ext4/xattr.c
+@@ -1159,15 +1159,24 @@ ext4_xattr_inode_dec_ref_all(handle_t *handle, struct inode *parent,
+ {
+ struct inode *ea_inode;
+ struct ext4_xattr_entry *entry;
++ struct ext4_iloc iloc;
+ bool dirty = false;
+ unsigned int ea_ino;
+ int err;
+ int credits;
++ void *end;
++
++ if (block_csum)
++ end = (void *)bh->b_data + bh->b_size;
++ else {
++ ext4_get_inode_loc(parent, &iloc);
++ end = (void *)ext4_raw_inode(&iloc) + EXT4_SB(parent->i_sb)->s_inode_size;
++ }
+
+ /* One credit for dec ref on ea_inode, one for orphan list addition, */
+ credits = 2 + extra_credits;
+
+- for (entry = first; !IS_LAST_ENTRY(entry);
++ for (entry = first; (void *)entry < end && !IS_LAST_ENTRY(entry);
+ entry = EXT4_XATTR_NEXT(entry)) {
+ if (!entry->e_value_inum)
+ continue;
+diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
+index bd890738b94d77..92be53a83744e7 100644
+--- a/fs/f2fs/checkpoint.c
++++ b/fs/f2fs/checkpoint.c
+@@ -1346,21 +1346,13 @@ static void update_ckpt_flags(struct f2fs_sb_info *sbi, struct cp_control *cpc)
+ struct f2fs_checkpoint *ckpt = F2FS_CKPT(sbi);
+ unsigned long flags;
+
+- if (cpc->reason & CP_UMOUNT) {
+- if (le32_to_cpu(ckpt->cp_pack_total_block_count) +
+- NM_I(sbi)->nat_bits_blocks > BLKS_PER_SEG(sbi)) {
+- clear_ckpt_flags(sbi, CP_NAT_BITS_FLAG);
+- f2fs_notice(sbi, "Disable nat_bits due to no space");
+- } else if (!is_set_ckpt_flags(sbi, CP_NAT_BITS_FLAG) &&
+- f2fs_nat_bitmap_enabled(sbi)) {
+- f2fs_enable_nat_bits(sbi);
+- set_ckpt_flags(sbi, CP_NAT_BITS_FLAG);
+- f2fs_notice(sbi, "Rebuild and enable nat_bits");
+- }
+- }
+-
+ spin_lock_irqsave(&sbi->cp_lock, flags);
+
++ if ((cpc->reason & CP_UMOUNT) &&
++ le32_to_cpu(ckpt->cp_pack_total_block_count) >
++ sbi->blocks_per_seg - NM_I(sbi)->nat_bits_blocks)
++ disable_nat_bits(sbi, false);
++
+ if (cpc->reason & CP_TRIMMED)
+ __set_ckpt_flags(ckpt, CP_TRIMMED_FLAG);
+ else
+@@ -1543,8 +1535,7 @@ static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
+ start_blk = __start_cp_next_addr(sbi);
+
+ /* write nat bits */
+- if ((cpc->reason & CP_UMOUNT) &&
+- is_set_ckpt_flags(sbi, CP_NAT_BITS_FLAG)) {
++ if (enabled_nat_bits(sbi, cpc)) {
+ __u64 cp_ver = cur_cp_version(ckpt);
+ block_t blk;
+
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index 493dda2d4b6631..02fc4e9d42120f 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -2220,6 +2220,36 @@ static inline void f2fs_up_write(struct f2fs_rwsem *sem)
+ #endif
+ }
+
++static inline void disable_nat_bits(struct f2fs_sb_info *sbi, bool lock)
++{
++ unsigned long flags;
++ unsigned char *nat_bits;
++
++ /*
++ * In order to re-enable nat_bits we need to call fsck.f2fs by
++ * set_sbi_flag(sbi, SBI_NEED_FSCK). But it may give huge cost,
++ * so let's rely on regular fsck or unclean shutdown.
++ */
++
++ if (lock)
++ spin_lock_irqsave(&sbi->cp_lock, flags);
++ __clear_ckpt_flags(F2FS_CKPT(sbi), CP_NAT_BITS_FLAG);
++ nat_bits = NM_I(sbi)->nat_bits;
++ NM_I(sbi)->nat_bits = NULL;
++ if (lock)
++ spin_unlock_irqrestore(&sbi->cp_lock, flags);
++
++ kvfree(nat_bits);
++}
++
++static inline bool enabled_nat_bits(struct f2fs_sb_info *sbi,
++ struct cp_control *cpc)
++{
++ bool set = is_set_ckpt_flags(sbi, CP_NAT_BITS_FLAG);
++
++ return (cpc) ? (cpc->reason & CP_UMOUNT) && set : set;
++}
++
+ static inline void f2fs_lock_op(struct f2fs_sb_info *sbi)
+ {
+ f2fs_down_read(&sbi->cp_rwsem);
+@@ -3663,7 +3693,6 @@ int f2fs_truncate_inode_blocks(struct inode *inode, pgoff_t from);
+ int f2fs_truncate_xattr_node(struct inode *inode);
+ int f2fs_wait_on_node_pages_writeback(struct f2fs_sb_info *sbi,
+ unsigned int seq_id);
+-bool f2fs_nat_bitmap_enabled(struct f2fs_sb_info *sbi);
+ int f2fs_remove_inode_page(struct inode *inode);
+ struct page *f2fs_new_inode_page(struct inode *inode);
+ struct page *f2fs_new_node_page(struct dnode_of_data *dn, unsigned int ofs);
+@@ -3688,7 +3717,6 @@ int f2fs_recover_xattr_data(struct inode *inode, struct page *page);
+ int f2fs_recover_inode_page(struct f2fs_sb_info *sbi, struct page *page);
+ int f2fs_restore_node_summary(struct f2fs_sb_info *sbi,
+ unsigned int segno, struct f2fs_summary_block *sum);
+-void f2fs_enable_nat_bits(struct f2fs_sb_info *sbi);
+ int f2fs_flush_nat_entries(struct f2fs_sb_info *sbi, struct cp_control *cpc);
+ int f2fs_build_node_manager(struct f2fs_sb_info *sbi);
+ void f2fs_destroy_node_manager(struct f2fs_sb_info *sbi);
+diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c
+index cd17d6f4c291f9..1221067d2da8ab 100644
+--- a/fs/f2fs/inode.c
++++ b/fs/f2fs/inode.c
+@@ -34,10 +34,8 @@ void f2fs_mark_inode_dirty_sync(struct inode *inode, bool sync)
+ if (f2fs_inode_dirtied(inode, sync))
+ return;
+
+- if (f2fs_is_atomic_file(inode)) {
+- set_inode_flag(inode, FI_ATOMIC_DIRTIED);
++ if (f2fs_is_atomic_file(inode))
+ return;
+- }
+
+ mark_inode_dirty_sync(inode);
+ }
+@@ -765,8 +763,12 @@ void f2fs_update_inode_page(struct inode *inode)
+ if (err == -ENOENT)
+ return;
+
++ if (err == -EFSCORRUPTED)
++ goto stop_checkpoint;
++
+ if (err == -ENOMEM || ++count <= DEFAULT_RETRY_IO_COUNT)
+ goto retry;
++stop_checkpoint:
+ f2fs_stop_checkpoint(sbi, false, STOP_CP_REASON_UPDATE_INODE);
+ return;
+ }
+diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
+index f88392fc4ba959..9f6cca183c6083 100644
+--- a/fs/f2fs/node.c
++++ b/fs/f2fs/node.c
+@@ -1135,7 +1135,14 @@ int f2fs_truncate_inode_blocks(struct inode *inode, pgoff_t from)
+ trace_f2fs_truncate_inode_blocks_enter(inode, from);
+
+ level = get_node_path(inode, from, offset, noffset);
+- if (level < 0) {
++ if (level <= 0) {
++ if (!level) {
++ level = -EFSCORRUPTED;
++ f2fs_err(sbi, "%s: inode ino=%lx has corrupted node block, from:%lu addrs:%u",
++ __func__, inode->i_ino,
++ from, ADDRS_PER_INODE(inode));
++ set_sbi_flag(sbi, SBI_NEED_FSCK);
++ }
+ trace_f2fs_truncate_inode_blocks_exit(inode, level);
+ return level;
+ }
+@@ -2269,24 +2276,6 @@ static void __move_free_nid(struct f2fs_sb_info *sbi, struct free_nid *i,
+ }
+ }
+
+-bool f2fs_nat_bitmap_enabled(struct f2fs_sb_info *sbi)
+-{
+- struct f2fs_nm_info *nm_i = NM_I(sbi);
+- unsigned int i;
+- bool ret = true;
+-
+- f2fs_down_read(&nm_i->nat_tree_lock);
+- for (i = 0; i < nm_i->nat_blocks; i++) {
+- if (!test_bit_le(i, nm_i->nat_block_bitmap)) {
+- ret = false;
+- break;
+- }
+- }
+- f2fs_up_read(&nm_i->nat_tree_lock);
+-
+- return ret;
+-}
+-
+ static void update_free_nid_bitmap(struct f2fs_sb_info *sbi, nid_t nid,
+ bool set, bool build)
+ {
+@@ -2965,23 +2954,7 @@ static void __adjust_nat_entry_set(struct nat_entry_set *nes,
+ list_add_tail(&nes->set_list, head);
+ }
+
+-static void __update_nat_bits(struct f2fs_nm_info *nm_i, unsigned int nat_ofs,
+- unsigned int valid)
+-{
+- if (valid == 0) {
+- __set_bit_le(nat_ofs, nm_i->empty_nat_bits);
+- __clear_bit_le(nat_ofs, nm_i->full_nat_bits);
+- return;
+- }
+-
+- __clear_bit_le(nat_ofs, nm_i->empty_nat_bits);
+- if (valid == NAT_ENTRY_PER_BLOCK)
+- __set_bit_le(nat_ofs, nm_i->full_nat_bits);
+- else
+- __clear_bit_le(nat_ofs, nm_i->full_nat_bits);
+-}
+-
+-static void update_nat_bits(struct f2fs_sb_info *sbi, nid_t start_nid,
++static void __update_nat_bits(struct f2fs_sb_info *sbi, nid_t start_nid,
+ struct page *page)
+ {
+ struct f2fs_nm_info *nm_i = NM_I(sbi);
+@@ -2990,7 +2963,7 @@ static void update_nat_bits(struct f2fs_sb_info *sbi, nid_t start_nid,
+ int valid = 0;
+ int i = 0;
+
+- if (!is_set_ckpt_flags(sbi, CP_NAT_BITS_FLAG))
++ if (!enabled_nat_bits(sbi, NULL))
+ return;
+
+ if (nat_index == 0) {
+@@ -3001,36 +2974,17 @@ static void update_nat_bits(struct f2fs_sb_info *sbi, nid_t start_nid,
+ if (le32_to_cpu(nat_blk->entries[i].block_addr) != NULL_ADDR)
+ valid++;
+ }
+-
+- __update_nat_bits(nm_i, nat_index, valid);
+-}
+-
+-void f2fs_enable_nat_bits(struct f2fs_sb_info *sbi)
+-{
+- struct f2fs_nm_info *nm_i = NM_I(sbi);
+- unsigned int nat_ofs;
+-
+- f2fs_down_read(&nm_i->nat_tree_lock);
+-
+- for (nat_ofs = 0; nat_ofs < nm_i->nat_blocks; nat_ofs++) {
+- unsigned int valid = 0, nid_ofs = 0;
+-
+- /* handle nid zero due to it should never be used */
+- if (unlikely(nat_ofs == 0)) {
+- valid = 1;
+- nid_ofs = 1;
+- }
+-
+- for (; nid_ofs < NAT_ENTRY_PER_BLOCK; nid_ofs++) {
+- if (!test_bit_le(nid_ofs,
+- nm_i->free_nid_bitmap[nat_ofs]))
+- valid++;
+- }
+-
+- __update_nat_bits(nm_i, nat_ofs, valid);
++ if (valid == 0) {
++ __set_bit_le(nat_index, nm_i->empty_nat_bits);
++ __clear_bit_le(nat_index, nm_i->full_nat_bits);
++ return;
+ }
+
+- f2fs_up_read(&nm_i->nat_tree_lock);
++ __clear_bit_le(nat_index, nm_i->empty_nat_bits);
++ if (valid == NAT_ENTRY_PER_BLOCK)
++ __set_bit_le(nat_index, nm_i->full_nat_bits);
++ else
++ __clear_bit_le(nat_index, nm_i->full_nat_bits);
+ }
+
+ static int __flush_nat_entry_set(struct f2fs_sb_info *sbi,
+@@ -3049,7 +3003,7 @@ static int __flush_nat_entry_set(struct f2fs_sb_info *sbi,
+ * #1, flush nat entries to journal in current hot data summary block.
+ * #2, flush nat entries to nat page.
+ */
+- if ((cpc->reason & CP_UMOUNT) ||
++ if (enabled_nat_bits(sbi, cpc) ||
+ !__has_cursum_space(journal, set->entry_cnt, NAT_JOURNAL))
+ to_journal = false;
+
+@@ -3096,7 +3050,7 @@ static int __flush_nat_entry_set(struct f2fs_sb_info *sbi,
+ if (to_journal) {
+ up_write(&curseg->journal_rwsem);
+ } else {
+- update_nat_bits(sbi, start_nid, page);
++ __update_nat_bits(sbi, start_nid, page);
+ f2fs_put_page(page, 1);
+ }
+
+@@ -3127,7 +3081,7 @@ int f2fs_flush_nat_entries(struct f2fs_sb_info *sbi, struct cp_control *cpc)
+ * during unmount, let's flush nat_bits before checking
+ * nat_cnt[DIRTY_NAT].
+ */
+- if (cpc->reason & CP_UMOUNT) {
++ if (enabled_nat_bits(sbi, cpc)) {
+ f2fs_down_write(&nm_i->nat_tree_lock);
+ remove_nats_in_journal(sbi);
+ f2fs_up_write(&nm_i->nat_tree_lock);
+@@ -3143,7 +3097,7 @@ int f2fs_flush_nat_entries(struct f2fs_sb_info *sbi, struct cp_control *cpc)
+ * entries, remove all entries from journal and merge them
+ * into nat entry set.
+ */
+- if (cpc->reason & CP_UMOUNT ||
++ if (enabled_nat_bits(sbi, cpc) ||
+ !__has_cursum_space(journal,
+ nm_i->nat_cnt[DIRTY_NAT], NAT_JOURNAL))
+ remove_nats_in_journal(sbi);
+@@ -3180,18 +3134,15 @@ static int __get_nat_bitmaps(struct f2fs_sb_info *sbi)
+ __u64 cp_ver = cur_cp_version(ckpt);
+ block_t nat_bits_addr;
+
++ if (!enabled_nat_bits(sbi, NULL))
++ return 0;
++
+ nm_i->nat_bits_blocks = F2FS_BLK_ALIGN((nat_bits_bytes << 1) + 8);
+ nm_i->nat_bits = f2fs_kvzalloc(sbi,
+ F2FS_BLK_TO_BYTES(nm_i->nat_bits_blocks), GFP_KERNEL);
+ if (!nm_i->nat_bits)
+ return -ENOMEM;
+
+- nm_i->full_nat_bits = nm_i->nat_bits + 8;
+- nm_i->empty_nat_bits = nm_i->full_nat_bits + nat_bits_bytes;
+-
+- if (!is_set_ckpt_flags(sbi, CP_NAT_BITS_FLAG))
+- return 0;
+-
+ nat_bits_addr = __start_cp_addr(sbi) + BLKS_PER_SEG(sbi) -
+ nm_i->nat_bits_blocks;
+ for (i = 0; i < nm_i->nat_bits_blocks; i++) {
+@@ -3208,12 +3159,13 @@ static int __get_nat_bitmaps(struct f2fs_sb_info *sbi)
+
+ cp_ver |= (cur_cp_crc(ckpt) << 32);
+ if (cpu_to_le64(cp_ver) != *(__le64 *)nm_i->nat_bits) {
+- clear_ckpt_flags(sbi, CP_NAT_BITS_FLAG);
+- f2fs_notice(sbi, "Disable nat_bits due to incorrect cp_ver (%llu, %llu)",
+- cp_ver, le64_to_cpu(*(__le64 *)nm_i->nat_bits));
++ disable_nat_bits(sbi, true);
+ return 0;
+ }
+
++ nm_i->full_nat_bits = nm_i->nat_bits + 8;
++ nm_i->empty_nat_bits = nm_i->full_nat_bits + nat_bits_bytes;
++
+ f2fs_notice(sbi, "Found nat_bits in checkpoint");
+ return 0;
+ }
+@@ -3224,7 +3176,7 @@ static inline void load_free_nid_bitmap(struct f2fs_sb_info *sbi)
+ unsigned int i = 0;
+ nid_t nid, last_nid;
+
+- if (!is_set_ckpt_flags(sbi, CP_NAT_BITS_FLAG))
++ if (!enabled_nat_bits(sbi, NULL))
+ return;
+
+ for (i = 0; i < nm_i->nat_blocks; i++) {
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index 26b1021427ae05..b8a0e925a40119 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -1527,6 +1527,10 @@ int f2fs_inode_dirtied(struct inode *inode, bool sync)
+ inc_page_count(sbi, F2FS_DIRTY_IMETA);
+ }
+ spin_unlock(&sbi->inode_lock[DIRTY_META]);
++
++ if (!ret && f2fs_is_atomic_file(inode))
++ set_inode_flag(inode, FI_ATOMIC_DIRTIED);
++
+ return ret;
+ }
+
+@@ -4749,8 +4753,10 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent)
+ if (err)
+ goto free_meta;
+
+- if (unlikely(is_set_ckpt_flags(sbi, CP_DISABLED_FLAG)))
++ if (unlikely(is_set_ckpt_flags(sbi, CP_DISABLED_FLAG))) {
++ skip_recovery = true;
+ goto reset_checkpoint;
++ }
+
+ /* recover fsynced data */
+ if (!test_opt(sbi, DISABLE_ROLL_FORWARD) &&
+diff --git a/fs/file.c b/fs/file.c
+index d868cdb95d1e78..1ba03662ae66f4 100644
+--- a/fs/file.c
++++ b/fs/file.c
+@@ -418,17 +418,25 @@ struct files_struct *dup_fd(struct files_struct *oldf, struct fd_range *punch_ho
+ old_fds = old_fdt->fd;
+ new_fds = new_fdt->fd;
+
++ /*
++ * We may be racing against fd allocation from other threads using this
++ * files_struct, despite holding ->file_lock.
++ *
++ * alloc_fd() might have already claimed a slot, while fd_install()
++ * did not populate it yet. Note the latter operates locklessly, so
++ * the file can show up as we are walking the array below.
++ *
++ * At the same time we know no files will disappear as all other
++ * operations take the lock.
++ *
++ * Instead of trying to placate userspace racing with itself, we
++ * ref the file if we see it and mark the fd slot as unused otherwise.
++ */
+ for (i = open_files; i != 0; i--) {
+- struct file *f = *old_fds++;
++ struct file *f = rcu_dereference_raw(*old_fds++);
+ if (f) {
+ get_file(f);
+ } else {
+- /*
+- * The fd may be claimed in the fd bitmap but not yet
+- * instantiated in the files array if a sibling thread
+- * is partway through open(). So make sure that this
+- * fd is available to the new process.
+- */
+ __clear_open_fd(open_files - i, new_fdt);
+ }
+ rcu_assign_pointer(*new_fds++, f);
+@@ -679,7 +687,7 @@ struct file *file_close_fd_locked(struct files_struct *files, unsigned fd)
+ return NULL;
+
+ fd = array_index_nospec(fd, fdt->max_fds);
+- file = fdt->fd[fd];
++ file = rcu_dereference_raw(fdt->fd[fd]);
+ if (file) {
+ rcu_assign_pointer(fdt->fd[fd], NULL);
+ __put_unused_fd(files, fd);
+@@ -1237,7 +1245,7 @@ __releases(&files->file_lock)
+ */
+ fdt = files_fdtable(files);
+ fd = array_index_nospec(fd, fdt->max_fds);
+- tofree = fdt->fd[fd];
++ tofree = rcu_dereference_raw(fdt->fd[fd]);
+ if (!tofree && fd_is_open(fd, fdt))
+ goto Ebusy;
+ get_file(file);
+diff --git a/fs/fuse/dev.c b/fs/fuse/dev.c
+index 51e31df4c54613..d9bfdd774c92db 100644
+--- a/fs/fuse/dev.c
++++ b/fs/fuse/dev.c
+@@ -407,6 +407,24 @@ static int queue_interrupt(struct fuse_req *req)
+ return 0;
+ }
+
++bool fuse_remove_pending_req(struct fuse_req *req, spinlock_t *lock)
++{
++ spin_lock(lock);
++ if (test_bit(FR_PENDING, &req->flags)) {
++ /*
++ * FR_PENDING does not get cleared as the request will end
++ * up in destruction anyway.
++ */
++ list_del(&req->list);
++ spin_unlock(lock);
++ __fuse_put_request(req);
++ req->out.h.error = -EINTR;
++ return true;
++ }
++ spin_unlock(lock);
++ return false;
++}
++
+ static void request_wait_answer(struct fuse_req *req)
+ {
+ struct fuse_conn *fc = req->fm->fc;
+@@ -428,22 +446,20 @@ static void request_wait_answer(struct fuse_req *req)
+ }
+
+ if (!test_bit(FR_FORCE, &req->flags)) {
++ bool removed;
++
+ /* Only fatal signals may interrupt this */
+ err = wait_event_killable(req->waitq,
+ test_bit(FR_FINISHED, &req->flags));
+ if (!err)
+ return;
+
+- spin_lock(&fiq->lock);
+- /* Request is not yet in userspace, bail out */
+- if (test_bit(FR_PENDING, &req->flags)) {
+- list_del(&req->list);
+- spin_unlock(&fiq->lock);
+- __fuse_put_request(req);
+- req->out.h.error = -EINTR;
++ if (test_bit(FR_URING, &req->flags))
++ removed = fuse_uring_remove_pending_req(req);
++ else
++ removed = fuse_remove_pending_req(req, &fiq->lock);
++ if (removed)
+ return;
+- }
+- spin_unlock(&fiq->lock);
+ }
+
+ /*
+diff --git a/fs/fuse/dev_uring.c b/fs/fuse/dev_uring.c
+index 82bf458fa9db5b..5f1566c69ddcc6 100644
+--- a/fs/fuse/dev_uring.c
++++ b/fs/fuse/dev_uring.c
+@@ -726,8 +726,6 @@ static void fuse_uring_add_req_to_ring_ent(struct fuse_ring_ent *ent,
+ struct fuse_req *req)
+ {
+ struct fuse_ring_queue *queue = ent->queue;
+- struct fuse_conn *fc = req->fm->fc;
+- struct fuse_iqueue *fiq = &fc->iq;
+
+ lockdep_assert_held(&queue->lock);
+
+@@ -737,9 +735,7 @@ static void fuse_uring_add_req_to_ring_ent(struct fuse_ring_ent *ent,
+ ent->state);
+ }
+
+- spin_lock(&fiq->lock);
+ clear_bit(FR_PENDING, &req->flags);
+- spin_unlock(&fiq->lock);
+ ent->fuse_req = req;
+ ent->state = FRRS_FUSE_REQ;
+ list_move(&ent->list, &queue->ent_w_req_queue);
+@@ -1238,6 +1234,8 @@ void fuse_uring_queue_fuse_req(struct fuse_iqueue *fiq, struct fuse_req *req)
+ if (unlikely(queue->stopped))
+ goto err_unlock;
+
++ set_bit(FR_URING, &req->flags);
++ req->ring_queue = queue;
+ ent = list_first_entry_or_null(&queue->ent_avail_queue,
+ struct fuse_ring_ent, list);
+ if (ent)
+@@ -1276,6 +1274,8 @@ bool fuse_uring_queue_bq_req(struct fuse_req *req)
+ return false;
+ }
+
++ set_bit(FR_URING, &req->flags);
++ req->ring_queue = queue;
+ list_add_tail(&req->list, &queue->fuse_req_bg_queue);
+
+ ent = list_first_entry_or_null(&queue->ent_avail_queue,
+@@ -1306,6 +1306,13 @@ bool fuse_uring_queue_bq_req(struct fuse_req *req)
+ return true;
+ }
+
++bool fuse_uring_remove_pending_req(struct fuse_req *req)
++{
++ struct fuse_ring_queue *queue = req->ring_queue;
++
++ return fuse_remove_pending_req(req, &queue->lock);
++}
++
+ static const struct fuse_iqueue_ops fuse_io_uring_ops = {
+ /* should be send over io-uring as enhancement */
+ .send_forget = fuse_dev_queue_forget,
+diff --git a/fs/fuse/dev_uring_i.h b/fs/fuse/dev_uring_i.h
+index 2102b3d0c1aed1..e5b39a92b7ca0e 100644
+--- a/fs/fuse/dev_uring_i.h
++++ b/fs/fuse/dev_uring_i.h
+@@ -142,6 +142,7 @@ void fuse_uring_abort_end_requests(struct fuse_ring *ring);
+ int fuse_uring_cmd(struct io_uring_cmd *cmd, unsigned int issue_flags);
+ void fuse_uring_queue_fuse_req(struct fuse_iqueue *fiq, struct fuse_req *req);
+ bool fuse_uring_queue_bq_req(struct fuse_req *req);
++bool fuse_uring_remove_pending_req(struct fuse_req *req);
+
+ static inline void fuse_uring_abort(struct fuse_conn *fc)
+ {
+@@ -200,6 +201,11 @@ static inline bool fuse_uring_ready(struct fuse_conn *fc)
+ return false;
+ }
+
++static inline bool fuse_uring_remove_pending_req(struct fuse_req *req)
++{
++ return false;
++}
++
+ #endif /* CONFIG_FUSE_IO_URING */
+
+ #endif /* _FS_FUSE_DEV_URING_I_H */
+diff --git a/fs/fuse/fuse_dev_i.h b/fs/fuse/fuse_dev_i.h
+index 3b2bfe1248d357..2481da3388c5fe 100644
+--- a/fs/fuse/fuse_dev_i.h
++++ b/fs/fuse/fuse_dev_i.h
+@@ -61,6 +61,7 @@ int fuse_copy_out_args(struct fuse_copy_state *cs, struct fuse_args *args,
+ void fuse_dev_queue_forget(struct fuse_iqueue *fiq,
+ struct fuse_forget_link *forget);
+ void fuse_dev_queue_interrupt(struct fuse_iqueue *fiq, struct fuse_req *req);
++bool fuse_remove_pending_req(struct fuse_req *req, spinlock_t *lock);
+
+ #endif
+
+diff --git a/fs/fuse/fuse_i.h b/fs/fuse/fuse_i.h
+index fee96fe7887b30..2086dac7243ba8 100644
+--- a/fs/fuse/fuse_i.h
++++ b/fs/fuse/fuse_i.h
+@@ -378,6 +378,7 @@ struct fuse_io_priv {
+ * FR_FINISHED: request is finished
+ * FR_PRIVATE: request is on private list
+ * FR_ASYNC: request is asynchronous
++ * FR_URING: request is handled through fuse-io-uring
+ */
+ enum fuse_req_flag {
+ FR_ISREPLY,
+@@ -392,6 +393,7 @@ enum fuse_req_flag {
+ FR_FINISHED,
+ FR_PRIVATE,
+ FR_ASYNC,
++ FR_URING,
+ };
+
+ /**
+@@ -441,6 +443,7 @@ struct fuse_req {
+
+ #ifdef CONFIG_FUSE_IO_URING
+ void *ring_entry;
++ void *ring_queue;
+ #endif
+ };
+
+diff --git a/fs/jbd2/journal.c b/fs/jbd2/journal.c
+index a10e086a0165b1..b6a1bbc211efc2 100644
+--- a/fs/jbd2/journal.c
++++ b/fs/jbd2/journal.c
+@@ -1879,7 +1879,6 @@ int jbd2_journal_update_sb_log_tail(journal_t *journal, tid_t tail_tid,
+
+ /* Log is no longer empty */
+ write_lock(&journal->j_state_lock);
+- WARN_ON(!sb->s_sequence);
+ journal->j_flags &= ~JBD2_FLUSHED;
+ write_unlock(&journal->j_state_lock);
+
+diff --git a/fs/jfs/jfs_dmap.c b/fs/jfs/jfs_dmap.c
+index f9009e4f9ffd89..0e1019382cf519 100644
+--- a/fs/jfs/jfs_dmap.c
++++ b/fs/jfs/jfs_dmap.c
+@@ -204,6 +204,10 @@ int dbMount(struct inode *ipbmap)
+ bmp->db_aglevel = le32_to_cpu(dbmp_le->dn_aglevel);
+ bmp->db_agheight = le32_to_cpu(dbmp_le->dn_agheight);
+ bmp->db_agwidth = le32_to_cpu(dbmp_le->dn_agwidth);
++ if (!bmp->db_agwidth) {
++ err = -EINVAL;
++ goto err_release_metapage;
++ }
+ bmp->db_agstart = le32_to_cpu(dbmp_le->dn_agstart);
+ bmp->db_agl2size = le32_to_cpu(dbmp_le->dn_agl2size);
+ if (bmp->db_agl2size > L2MAXL2SIZE - L2MAXAG ||
+@@ -3403,7 +3407,7 @@ int dbExtendFS(struct inode *ipbmap, s64 blkno, s64 nblocks)
+ oldl2agsize = bmp->db_agl2size;
+
+ bmp->db_agl2size = l2agsize;
+- bmp->db_agsize = 1 << l2agsize;
++ bmp->db_agsize = (s64)1 << l2agsize;
+
+ /* compute new number of AG */
+ agno = bmp->db_numag;
+@@ -3666,8 +3670,8 @@ void dbFinalizeBmap(struct inode *ipbmap)
+ * system size is not a multiple of the group size).
+ */
+ inactfree = (inactags && ag_rem) ?
+- ((inactags - 1) << bmp->db_agl2size) + ag_rem
+- : inactags << bmp->db_agl2size;
++ (((s64)inactags - 1) << bmp->db_agl2size) + ag_rem
++ : ((s64)inactags << bmp->db_agl2size);
+
+ /* determine how many free blocks are in the active
+ * allocation groups plus the average number of free blocks
+diff --git a/fs/jfs/jfs_imap.c b/fs/jfs/jfs_imap.c
+index debfc1389cb3e8..ecb8e05b8b8481 100644
+--- a/fs/jfs/jfs_imap.c
++++ b/fs/jfs/jfs_imap.c
+@@ -102,7 +102,7 @@ int diMount(struct inode *ipimap)
+ * allocate/initialize the in-memory inode map control structure
+ */
+ /* allocate the in-memory inode map control structure. */
+- imap = kmalloc(sizeof(struct inomap), GFP_KERNEL);
++ imap = kzalloc(sizeof(struct inomap), GFP_KERNEL);
+ if (imap == NULL)
+ return -ENOMEM;
+
+@@ -456,7 +456,7 @@ struct inode *diReadSpecial(struct super_block *sb, ino_t inum, int secondary)
+ dp += inum % 8; /* 8 inodes per 4K page */
+
+ /* copy on-disk inode to in-memory inode */
+- if ((copy_from_dinode(dp, ip)) != 0) {
++ if ((copy_from_dinode(dp, ip) != 0) || (ip->i_nlink == 0)) {
+ /* handle bad return by returning NULL for ip */
+ set_nlink(ip, 1); /* Don't want iput() deleting it */
+ iput(ip);
+diff --git a/fs/namespace.c b/fs/namespace.c
+index 8f1000f9f3df16..d401486fe95d17 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -2026,6 +2026,7 @@ static void warn_mandlock(void)
+ static int can_umount(const struct path *path, int flags)
+ {
+ struct mount *mnt = real_mount(path->mnt);
++ struct super_block *sb = path->dentry->d_sb;
+
+ if (!may_mount())
+ return -EPERM;
+@@ -2035,7 +2036,7 @@ static int can_umount(const struct path *path, int flags)
+ return -EINVAL;
+ if (mnt->mnt.mnt_flags & MNT_LOCKED) /* Check optimistically */
+ return -EINVAL;
+- if (flags & MNT_FORCE && !capable(CAP_SYS_ADMIN))
++ if (flags & MNT_FORCE && !ns_capable(sb->s_user_ns, CAP_SYS_ADMIN))
+ return -EPERM;
+ return 0;
+ }
+diff --git a/fs/smb/client/cifsencrypt.c b/fs/smb/client/cifsencrypt.c
+index e69968e88fe724..35892df7335c75 100644
+--- a/fs/smb/client/cifsencrypt.c
++++ b/fs/smb/client/cifsencrypt.c
+@@ -704,18 +704,12 @@ cifs_crypto_secmech_release(struct TCP_Server_Info *server)
+ cifs_free_hash(&server->secmech.md5);
+ cifs_free_hash(&server->secmech.sha512);
+
+- if (!SERVER_IS_CHAN(server)) {
+- if (server->secmech.enc) {
+- crypto_free_aead(server->secmech.enc);
+- server->secmech.enc = NULL;
+- }
+-
+- if (server->secmech.dec) {
+- crypto_free_aead(server->secmech.dec);
+- server->secmech.dec = NULL;
+- }
+- } else {
++ if (server->secmech.enc) {
++ crypto_free_aead(server->secmech.enc);
+ server->secmech.enc = NULL;
++ }
++ if (server->secmech.dec) {
++ crypto_free_aead(server->secmech.dec);
+ server->secmech.dec = NULL;
+ }
+ }
+diff --git a/fs/smb/client/connect.c b/fs/smb/client/connect.c
+index cb14a6828c501c..e417052694f276 100644
+--- a/fs/smb/client/connect.c
++++ b/fs/smb/client/connect.c
+@@ -1677,6 +1677,7 @@ cifs_get_tcp_session(struct smb3_fs_context *ctx,
+ /* Grab netns reference for this server. */
+ cifs_set_net_ns(tcp_ses, get_net(current->nsproxy->net_ns));
+
++ tcp_ses->sign = ctx->sign;
+ tcp_ses->conn_id = atomic_inc_return(&tcpSesNextId);
+ tcp_ses->noblockcnt = ctx->rootfs;
+ tcp_ses->noblocksnd = ctx->noblocksnd || ctx->rootfs;
+@@ -2455,6 +2456,8 @@ static int match_tcon(struct cifs_tcon *tcon, struct smb3_fs_context *ctx)
+ return 0;
+ if (tcon->nodelete != ctx->nodelete)
+ return 0;
++ if (tcon->posix_extensions != ctx->linux_ext)
++ return 0;
+ return 1;
+ }
+
+diff --git a/fs/smb/client/fs_context.c b/fs/smb/client/fs_context.c
+index 8c73d4d60d1a74..e38521a713a6b3 100644
+--- a/fs/smb/client/fs_context.c
++++ b/fs/smb/client/fs_context.c
+@@ -1377,6 +1377,11 @@ static int smb3_fs_context_parse_param(struct fs_context *fc,
+ ctx->closetimeo = HZ * result.uint_32;
+ break;
+ case Opt_echo_interval:
++ if (result.uint_32 < SMB_ECHO_INTERVAL_MIN ||
++ result.uint_32 > SMB_ECHO_INTERVAL_MAX) {
++ cifs_errorf(fc, "echo interval is out of bounds\n");
++ goto cifs_parse_mount_err;
++ }
+ ctx->echo_interval = result.uint_32;
+ break;
+ case Opt_snapshot:
+diff --git a/fs/smb/client/inode.c b/fs/smb/client/inode.c
+index 616149c7f0a541..c88a70e6e13208 100644
+--- a/fs/smb/client/inode.c
++++ b/fs/smb/client/inode.c
+@@ -1228,6 +1228,16 @@ static int reparse_info_to_fattr(struct cifs_open_info_data *data,
+ cifs_create_junction_fattr(fattr, sb);
+ goto out;
+ }
++ /*
++ * If the reparse point is unsupported by the Linux SMB
++ * client then let it process by the SMB server. So mask
++ * the -EOPNOTSUPP error code. This will allow Linux SMB
++ * client to send SMB OPEN request to server. If server
++ * does not support this reparse point too then server
++ * will return error during open the path.
++ */
++ if (rc == -EOPNOTSUPP)
++ rc = 0;
+ }
+
+ if (data->reparse.tag == IO_REPARSE_TAG_SYMLINK && !rc) {
+diff --git a/fs/smb/client/reparse.c b/fs/smb/client/reparse.c
+index 2b9e9885dc4258..1416b9ffaca124 100644
+--- a/fs/smb/client/reparse.c
++++ b/fs/smb/client/reparse.c
+@@ -542,12 +542,12 @@ static int wsl_set_reparse_buf(struct reparse_data_buffer **buf,
+ kfree(symname_utf16);
+ return -ENOMEM;
+ }
+- /* Flag 0x02000000 is unknown, but all wsl symlinks have this value */
+- symlink_buf->Flags = cpu_to_le32(0x02000000);
+- /* PathBuffer is in UTF-8 but without trailing null-term byte */
++ /* Version field must be set to 2 (MS-FSCC 2.1.2.7) */
++ symlink_buf->Version = cpu_to_le32(2);
++ /* Target for Version 2 is in UTF-8 but without trailing null-term byte */
+ symname_utf8_len = utf16s_to_utf8s((wchar_t *)symname_utf16, symname_utf16_len/2,
+ UTF16_LITTLE_ENDIAN,
+- symlink_buf->PathBuffer,
++ symlink_buf->Target,
+ symname_utf8_maxlen);
+ *buf = (struct reparse_data_buffer *)symlink_buf;
+ buf_len = sizeof(struct reparse_wsl_symlink_data_buffer) + symname_utf8_len;
+@@ -1016,29 +1016,36 @@ static int parse_reparse_wsl_symlink(struct reparse_wsl_symlink_data_buffer *buf
+ struct cifs_open_info_data *data)
+ {
+ int len = le16_to_cpu(buf->ReparseDataLength);
++ int data_offset = offsetof(typeof(*buf), Target) - offsetof(typeof(*buf), Version);
+ int symname_utf8_len;
+ __le16 *symname_utf16;
+ int symname_utf16_len;
+
+- if (len <= sizeof(buf->Flags)) {
++ if (len <= data_offset) {
+ cifs_dbg(VFS, "srv returned malformed wsl symlink buffer\n");
+ return -EIO;
+ }
+
+- /* PathBuffer is in UTF-8 but without trailing null-term byte */
+- symname_utf8_len = len - sizeof(buf->Flags);
++ /* MS-FSCC 2.1.2.7 defines layout of the Target field only for Version 2. */
++ if (le32_to_cpu(buf->Version) != 2) {
++ cifs_dbg(VFS, "srv returned unsupported wsl symlink version %u\n", le32_to_cpu(buf->Version));
++ return -EIO;
++ }
++
++ /* Target for Version 2 is in UTF-8 but without trailing null-term byte */
++ symname_utf8_len = len - data_offset;
+ /*
+ * Check that buffer does not contain null byte
+ * because Linux cannot process symlink with null byte.
+ */
+- if (strnlen(buf->PathBuffer, symname_utf8_len) != symname_utf8_len) {
++ if (strnlen(buf->Target, symname_utf8_len) != symname_utf8_len) {
+ cifs_dbg(VFS, "srv returned null byte in wsl symlink target location\n");
+ return -EIO;
+ }
+ symname_utf16 = kzalloc(symname_utf8_len * 2, GFP_KERNEL);
+ if (!symname_utf16)
+ return -ENOMEM;
+- symname_utf16_len = utf8s_to_utf16s(buf->PathBuffer, symname_utf8_len,
++ symname_utf16_len = utf8s_to_utf16s(buf->Target, symname_utf8_len,
+ UTF16_LITTLE_ENDIAN,
+ (wchar_t *) symname_utf16, symname_utf8_len * 2);
+ if (symname_utf16_len < 0) {
+@@ -1062,8 +1069,6 @@ int parse_reparse_point(struct reparse_data_buffer *buf,
+ const char *full_path,
+ struct cifs_open_info_data *data)
+ {
+- struct cifs_tcon *tcon = cifs_sb_master_tcon(cifs_sb);
+-
+ data->reparse.buf = buf;
+
+ /* See MS-FSCC 2.1.2 */
+@@ -1090,8 +1095,6 @@ int parse_reparse_point(struct reparse_data_buffer *buf,
+ }
+ return 0;
+ default:
+- cifs_tcon_dbg(VFS | ONCE, "unhandled reparse tag: 0x%08x\n",
+- le32_to_cpu(buf->ReparseTag));
+ return -EOPNOTSUPP;
+ }
+ }
+diff --git a/fs/smb/client/sess.c b/fs/smb/client/sess.c
+index faa80e7d54a6e8..eb70ebf38464bc 100644
+--- a/fs/smb/client/sess.c
++++ b/fs/smb/client/sess.c
+@@ -522,6 +522,13 @@ cifs_ses_add_channel(struct cifs_ses *ses,
+ ctx->sockopt_tcp_nodelay = ses->server->tcp_nodelay;
+ ctx->echo_interval = ses->server->echo_interval / HZ;
+ ctx->max_credits = ses->server->max_credits;
++ ctx->min_offload = ses->server->min_offload;
++ ctx->compress = ses->server->compression.requested;
++ ctx->dfs_conn = ses->server->dfs_conn;
++ ctx->ignore_signature = ses->server->ignore_signature;
++ ctx->leaf_fullpath = ses->server->leaf_fullpath;
++ ctx->rootfs = ses->server->noblockcnt;
++ ctx->retrans = ses->server->retrans;
+
+ /*
+ * This will be used for encoding/decoding user/domain/pw
+diff --git a/fs/smb/client/smb2misc.c b/fs/smb/client/smb2misc.c
+index f3c4b70b77b94f..cddf273c14aed7 100644
+--- a/fs/smb/client/smb2misc.c
++++ b/fs/smb/client/smb2misc.c
+@@ -816,11 +816,12 @@ smb2_handle_cancelled_close(struct cifs_tcon *tcon, __u64 persistent_fid,
+ WARN_ONCE(tcon->tc_count < 0, "tcon refcount is negative");
+ spin_unlock(&cifs_tcp_ses_lock);
+
+- if (tcon->ses)
++ if (tcon->ses) {
+ server = tcon->ses->server;
+-
+- cifs_server_dbg(FYI, "tid=0x%x: tcon is closing, skipping async close retry of fid %llu %llu\n",
+- tcon->tid, persistent_fid, volatile_fid);
++ cifs_server_dbg(FYI,
++ "tid=0x%x: tcon is closing, skipping async close retry of fid %llu %llu\n",
++ tcon->tid, persistent_fid, volatile_fid);
++ }
+
+ return 0;
+ }
+diff --git a/fs/smb/client/smb2ops.c b/fs/smb/client/smb2ops.c
+index 4dd11eafb69d9c..7aeac8dd9a1d13 100644
+--- a/fs/smb/client/smb2ops.c
++++ b/fs/smb/client/smb2ops.c
+@@ -4549,9 +4549,9 @@ decrypt_raw_data(struct TCP_Server_Info *server, char *buf,
+ return rc;
+ }
+ } else {
+- if (unlikely(!server->secmech.dec))
+- return -EIO;
+-
++ rc = smb3_crypto_aead_allocate(server);
++ if (unlikely(rc))
++ return rc;
+ tfm = server->secmech.dec;
+ }
+
+diff --git a/fs/smb/client/smb2pdu.c b/fs/smb/client/smb2pdu.c
+index f9c521b3c65ee7..163b8fea47e8a0 100644
+--- a/fs/smb/client/smb2pdu.c
++++ b/fs/smb/client/smb2pdu.c
+@@ -1251,15 +1251,8 @@ SMB2_negotiate(const unsigned int xid,
+ cifs_server_dbg(VFS, "Missing expected negotiate contexts\n");
+ }
+
+- if (server->cipher_type && !rc) {
+- if (!SERVER_IS_CHAN(server)) {
+- rc = smb3_crypto_aead_allocate(server);
+- } else {
+- /* For channels, just reuse the primary server crypto secmech. */
+- server->secmech.enc = server->primary_server->secmech.enc;
+- server->secmech.dec = server->primary_server->secmech.dec;
+- }
+- }
++ if (server->cipher_type && !rc)
++ rc = smb3_crypto_aead_allocate(server);
+ neg_exit:
+ free_rsp_buf(resp_buftype, rsp);
+ return rc;
+diff --git a/fs/smb/common/smb2pdu.h b/fs/smb/common/smb2pdu.h
+index c7a0efda440367..12f0013334057e 100644
+--- a/fs/smb/common/smb2pdu.h
++++ b/fs/smb/common/smb2pdu.h
+@@ -1564,13 +1564,13 @@ struct reparse_nfs_data_buffer {
+ __u8 DataBuffer[];
+ } __packed;
+
+-/* For IO_REPARSE_TAG_LX_SYMLINK */
++/* For IO_REPARSE_TAG_LX_SYMLINK - see MS-FSCC 2.1.2.7 */
+ struct reparse_wsl_symlink_data_buffer {
+ __le32 ReparseTag;
+ __le16 ReparseDataLength;
+ __u16 Reserved;
+- __le32 Flags;
+- __u8 PathBuffer[]; /* Variable Length UTF-8 string without nul-term */
++ __le32 Version; /* Always 2 */
++ __u8 Target[]; /* Variable Length UTF-8 string without nul-term */
+ } __packed;
+
+ struct validate_negotiate_info_req {
+diff --git a/fs/udf/inode.c b/fs/udf/inode.c
+index 70c907fe8af9eb..4386dd845e4009 100644
+--- a/fs/udf/inode.c
++++ b/fs/udf/inode.c
+@@ -810,6 +810,7 @@ static int inode_getblk(struct inode *inode, struct udf_map_rq *map)
+ }
+ map->oflags = UDF_BLK_MAPPED;
+ map->pblk = udf_get_lb_pblock(inode->i_sb, &eloc, offset);
++ ret = 0;
+ goto out_free;
+ }
+
+diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
+index 97c4d71115d8a0..d80f943461992f 100644
+--- a/fs/userfaultfd.c
++++ b/fs/userfaultfd.c
+@@ -395,32 +395,6 @@ vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason)
+ if (!(vmf->flags & FAULT_FLAG_USER) && (ctx->flags & UFFD_USER_MODE_ONLY))
+ goto out;
+
+- /*
+- * If it's already released don't get it. This avoids to loop
+- * in __get_user_pages if userfaultfd_release waits on the
+- * caller of handle_userfault to release the mmap_lock.
+- */
+- if (unlikely(READ_ONCE(ctx->released))) {
+- /*
+- * Don't return VM_FAULT_SIGBUS in this case, so a non
+- * cooperative manager can close the uffd after the
+- * last UFFDIO_COPY, without risking to trigger an
+- * involuntary SIGBUS if the process was starting the
+- * userfaultfd while the userfaultfd was still armed
+- * (but after the last UFFDIO_COPY). If the uffd
+- * wasn't already closed when the userfault reached
+- * this point, that would normally be solved by
+- * userfaultfd_must_wait returning 'false'.
+- *
+- * If we were to return VM_FAULT_SIGBUS here, the non
+- * cooperative manager would be instead forced to
+- * always call UFFDIO_UNREGISTER before it can safely
+- * close the uffd.
+- */
+- ret = VM_FAULT_NOPAGE;
+- goto out;
+- }
+-
+ /*
+ * Check that we can return VM_FAULT_RETRY.
+ *
+@@ -457,6 +431,31 @@ vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason)
+ if (vmf->flags & FAULT_FLAG_RETRY_NOWAIT)
+ goto out;
+
++ if (unlikely(READ_ONCE(ctx->released))) {
++ /*
++ * If a concurrent release is detected, do not return
++ * VM_FAULT_SIGBUS or VM_FAULT_NOPAGE, but instead always
++ * return VM_FAULT_RETRY with lock released proactively.
++ *
++ * If we were to return VM_FAULT_SIGBUS here, the non
++ * cooperative manager would be instead forced to
++ * always call UFFDIO_UNREGISTER before it can safely
++ * close the uffd, to avoid involuntary SIGBUS triggered.
++ *
++ * If we were to return VM_FAULT_NOPAGE, it would work for
++ * the fault path, in which the lock will be released
++ * later. However for GUP, faultin_page() does nothing
++ * special on NOPAGE, so GUP would spin retrying without
++ * releasing the mmap read lock, causing possible livelock.
++ *
++ * Here only VM_FAULT_RETRY would make sure the mmap lock
++ * be released immediately, so that the thread concurrently
++ * releasing the userfault would always make progress.
++ */
++ release_fault_lock(vmf);
++ goto out;
++ }
++
+ /* take the reference before dropping the mmap_lock */
+ userfaultfd_ctx_get(ctx);
+
+diff --git a/include/drm/drm_kunit_helpers.h b/include/drm/drm_kunit_helpers.h
+index afdd46ef04f70d..c835f113055dc4 100644
+--- a/include/drm/drm_kunit_helpers.h
++++ b/include/drm/drm_kunit_helpers.h
+@@ -120,6 +120,9 @@ drm_kunit_helper_create_crtc(struct kunit *test,
+ const struct drm_crtc_funcs *funcs,
+ const struct drm_crtc_helper_funcs *helper_funcs);
+
++int drm_kunit_add_mode_destroy_action(struct kunit *test,
++ struct drm_display_mode *mode);
++
+ struct drm_display_mode *
+ drm_kunit_display_mode_from_cea_vic(struct kunit *test, struct drm_device *dev,
+ u8 video_code);
+diff --git a/include/drm/intel/pciids.h b/include/drm/intel/pciids.h
+index 77c826589ec118..f9d3e85142ea88 100644
+--- a/include/drm/intel/pciids.h
++++ b/include/drm/intel/pciids.h
+@@ -846,19 +846,20 @@
+ MACRO__(0xE20B, ## __VA_ARGS__), \
+ MACRO__(0xE20C, ## __VA_ARGS__), \
+ MACRO__(0xE20D, ## __VA_ARGS__), \
+- MACRO__(0xE212, ## __VA_ARGS__)
++ MACRO__(0xE210, ## __VA_ARGS__), \
++ MACRO__(0xE212, ## __VA_ARGS__), \
++ MACRO__(0xE215, ## __VA_ARGS__), \
++ MACRO__(0xE216, ## __VA_ARGS__)
+
+ /* PTL */
+ #define INTEL_PTL_IDS(MACRO__, ...) \
+ MACRO__(0xB080, ## __VA_ARGS__), \
+ MACRO__(0xB081, ## __VA_ARGS__), \
+ MACRO__(0xB082, ## __VA_ARGS__), \
++ MACRO__(0xB083, ## __VA_ARGS__), \
++ MACRO__(0xB08F, ## __VA_ARGS__), \
+ MACRO__(0xB090, ## __VA_ARGS__), \
+- MACRO__(0xB091, ## __VA_ARGS__), \
+- MACRO__(0xB092, ## __VA_ARGS__), \
+ MACRO__(0xB0A0, ## __VA_ARGS__), \
+- MACRO__(0xB0A1, ## __VA_ARGS__), \
+- MACRO__(0xB0A2, ## __VA_ARGS__), \
+ MACRO__(0xB0B0, ## __VA_ARGS__)
+
+ #endif /* __PCIIDS_H__ */
+diff --git a/include/linux/cgroup-defs.h b/include/linux/cgroup-defs.h
+index 17960a1e858dbe..d1aee2d3e189ee 100644
+--- a/include/linux/cgroup-defs.h
++++ b/include/linux/cgroup-defs.h
+@@ -711,6 +711,7 @@ struct cgroup_subsys {
+ void (*css_released)(struct cgroup_subsys_state *css);
+ void (*css_free)(struct cgroup_subsys_state *css);
+ void (*css_reset)(struct cgroup_subsys_state *css);
++ void (*css_killed)(struct cgroup_subsys_state *css);
+ void (*css_rstat_flush)(struct cgroup_subsys_state *css, int cpu);
+ int (*css_extra_stat_show)(struct seq_file *seq,
+ struct cgroup_subsys_state *css);
+diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h
+index f8ef47f8a634df..fc1324ed597d6b 100644
+--- a/include/linux/cgroup.h
++++ b/include/linux/cgroup.h
+@@ -343,7 +343,7 @@ static inline u64 cgroup_id(const struct cgroup *cgrp)
+ */
+ static inline bool css_is_dying(struct cgroup_subsys_state *css)
+ {
+- return !(css->flags & CSS_NO_REF) && percpu_ref_is_dying(&css->refcnt);
++ return css->flags & CSS_DYING;
+ }
+
+ static inline void cgroup_get(struct cgroup *cgrp)
+diff --git a/include/linux/damon.h b/include/linux/damon.h
+index c9074d569596a6..b4d37d9b92212e 100644
+--- a/include/linux/damon.h
++++ b/include/linux/damon.h
+@@ -432,6 +432,7 @@ struct damos_access_pattern {
+ * @wmarks: Watermarks for automated (in)activation of this scheme.
+ * @target_nid: Destination node if @action is "migrate_{hot,cold}".
+ * @filters: Additional set of &struct damos_filter for &action.
++ * @last_applied: Last @action applied ops-managing entity.
+ * @stat: Statistics of this scheme.
+ * @list: List head for siblings.
+ *
+@@ -454,6 +455,15 @@ struct damos_access_pattern {
+ * implementation could check pages of the region and skip &action to respect
+ * &filters
+ *
++ * The minimum entity that @action can be applied depends on the underlying
++ * &struct damon_operations. Since it may not be aligned with the core layer
++ * abstract, namely &struct damon_region, &struct damon_operations could apply
++ * @action to same entity multiple times. Large folios that underlying on
++ * multiple &struct damon region objects could be such examples. The &struct
++ * damon_operations can use @last_applied to avoid that. DAMOS core logic
++ * unsets @last_applied when each regions walking for applying the scheme is
++ * finished.
++ *
+ * After applying the &action to each region, &stat_count and &stat_sz is
+ * updated to reflect the number of regions and total size of regions that the
+ * &action is applied.
+@@ -482,6 +492,7 @@ struct damos {
+ int target_nid;
+ };
+ struct list_head filters;
++ void *last_applied;
+ struct damos_stat stat;
+ struct list_head list;
+ };
+diff --git a/include/linux/hid.h b/include/linux/hid.h
+index cdc0dc13c87fed..9ca7e26ac4e925 100644
+--- a/include/linux/hid.h
++++ b/include/linux/hid.h
+@@ -1222,12 +1222,6 @@ unsigned long hid_lookup_quirk(const struct hid_device *hdev);
+ int hid_quirks_init(char **quirks_param, __u16 bus, int count);
+ void hid_quirks_exit(__u16 bus);
+
+-#ifdef CONFIG_HID_PID
+-int hid_pidff_init(struct hid_device *hid);
+-#else
+-#define hid_pidff_init NULL
+-#endif
+-
+ #define dbg_hid(fmt, ...) pr_debug("%s: " fmt, __FILE__, ##__VA_ARGS__)
+
+ #define hid_err(hid, fmt, ...) \
+diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
+index 3def525a1da375..6d011ca634010c 100644
+--- a/include/linux/io_uring_types.h
++++ b/include/linux/io_uring_types.h
+@@ -470,6 +470,7 @@ enum {
+ REQ_F_SKIP_LINK_CQES_BIT,
+ REQ_F_SINGLE_POLL_BIT,
+ REQ_F_DOUBLE_POLL_BIT,
++ REQ_F_MULTISHOT_BIT,
+ REQ_F_APOLL_MULTISHOT_BIT,
+ REQ_F_CLEAR_POLLIN_BIT,
+ /* keep async read/write and isreg together and in order */
+@@ -546,6 +547,8 @@ enum {
+ REQ_F_SINGLE_POLL = IO_REQ_FLAG(REQ_F_SINGLE_POLL_BIT),
+ /* double poll may active */
+ REQ_F_DOUBLE_POLL = IO_REQ_FLAG(REQ_F_DOUBLE_POLL_BIT),
++ /* request posts multiple completions, should be set at prep time */
++ REQ_F_MULTISHOT = IO_REQ_FLAG(REQ_F_MULTISHOT_BIT),
+ /* fast poll multishot mode */
+ REQ_F_APOLL_MULTISHOT = IO_REQ_FLAG(REQ_F_APOLL_MULTISHOT_BIT),
+ /* recvmsg special flag, clear EPOLLIN */
+diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
+index f34f4cfaa51344..be7e1cd516d1b7 100644
+--- a/include/linux/kvm_host.h
++++ b/include/linux/kvm_host.h
+@@ -2382,7 +2382,7 @@ static inline bool kvm_is_visible_memslot(struct kvm_memory_slot *memslot)
+ struct kvm_vcpu *kvm_get_running_vcpu(void);
+ struct kvm_vcpu * __percpu *kvm_get_running_vcpus(void);
+
+-#ifdef CONFIG_HAVE_KVM_IRQ_BYPASS
++#if IS_ENABLED(CONFIG_HAVE_KVM_IRQ_BYPASS)
+ bool kvm_arch_has_irq_bypass(void);
+ int kvm_arch_irq_bypass_add_producer(struct irq_bypass_consumer *,
+ struct irq_bypass_producer *);
+diff --git a/include/linux/mtd/spinand.h b/include/linux/mtd/spinand.h
+index 0da8a1c7740ef5..b4bcade821a435 100644
+--- a/include/linux/mtd/spinand.h
++++ b/include/linux/mtd/spinand.h
+@@ -67,7 +67,7 @@
+ SPI_MEM_OP_ADDR(2, addr, 1), \
+ SPI_MEM_OP_DUMMY(ndummy, 1), \
+ SPI_MEM_OP_DATA_IN(len, buf, 1), \
+- __VA_OPT__(SPI_MEM_OP_MAX_FREQ(__VA_ARGS__)))
++ SPI_MEM_OP_MAX_FREQ(__VA_ARGS__ + 0))
+
+ #define SPINAND_PAGE_READ_FROM_CACHE_FAST_OP(addr, ndummy, buf, len) \
+ SPI_MEM_OP(SPI_MEM_OP_CMD(0x0b, 1), \
+diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
+index 36d283552f80e9..be2f0017a66739 100644
+--- a/include/linux/page-flags.h
++++ b/include/linux/page-flags.h
+@@ -1104,6 +1104,12 @@ static inline bool is_page_hwpoison(const struct page *page)
+ return folio_test_hugetlb(folio) && PageHWPoison(&folio->page);
+ }
+
++static inline bool folio_contain_hwpoisoned_page(struct folio *folio)
++{
++ return folio_test_hwpoison(folio) ||
++ (folio_test_large(folio) && folio_test_has_hwpoisoned(folio));
++}
++
+ bool is_free_buddy_page(const struct page *page);
+
+ PAGEFLAG(Isolated, isolated, PF_ANY);
+diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h
+index 1a2594a38199fe..2a9ca3dbaa0e9d 100644
+--- a/include/linux/pci_ids.h
++++ b/include/linux/pci_ids.h
+@@ -2609,6 +2609,8 @@
+
+ #define PCI_VENDOR_ID_ZHAOXIN 0x1d17
+
++#define PCI_VENDOR_ID_ROCKCHIP 0x1d87
++
+ #define PCI_VENDOR_ID_HYGON 0x1d94
+
+ #define PCI_VENDOR_ID_META 0x1d9b
+diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
+index bcb764c3a8034c..93ea9c6672f0e1 100644
+--- a/include/linux/perf_event.h
++++ b/include/linux/perf_event.h
+@@ -673,13 +673,15 @@ struct swevent_hlist {
+ struct rcu_head rcu_head;
+ };
+
+-#define PERF_ATTACH_CONTEXT 0x01
+-#define PERF_ATTACH_GROUP 0x02
+-#define PERF_ATTACH_TASK 0x04
+-#define PERF_ATTACH_TASK_DATA 0x08
+-#define PERF_ATTACH_ITRACE 0x10
+-#define PERF_ATTACH_SCHED_CB 0x20
+-#define PERF_ATTACH_CHILD 0x40
++#define PERF_ATTACH_CONTEXT 0x0001
++#define PERF_ATTACH_GROUP 0x0002
++#define PERF_ATTACH_TASK 0x0004
++#define PERF_ATTACH_TASK_DATA 0x0008
++#define PERF_ATTACH_ITRACE 0x0010
++#define PERF_ATTACH_SCHED_CB 0x0020
++#define PERF_ATTACH_CHILD 0x0040
++#define PERF_ATTACH_EXCLUSIVE 0x0080
++#define PERF_ATTACH_CALLCHAIN 0x0100
+
+ struct bpf_prog;
+ struct perf_cgroup;
+@@ -831,7 +833,6 @@ struct perf_event {
+ struct irq_work pending_disable_irq;
+ struct callback_head pending_task;
+ unsigned int pending_work;
+- struct rcuwait pending_work_wait;
+
+ atomic_t event_limit;
+
+diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
+index 4c107e17c547e5..e2b705c149454a 100644
+--- a/include/linux/pgtable.h
++++ b/include/linux/pgtable.h
+@@ -222,10 +222,14 @@ static inline int pmd_dirty(pmd_t pmd)
+ * hazard could result in the direct mode hypervisor case, since the actual
+ * write to the page tables may not yet have taken place, so reads though
+ * a raw PTE pointer after it has been modified are not guaranteed to be
+- * up to date. This mode can only be entered and left under the protection of
+- * the page table locks for all page tables which may be modified. In the UP
+- * case, this is required so that preemption is disabled, and in the SMP case,
+- * it must synchronize the delayed page table writes properly on other CPUs.
++ * up to date.
++ *
++ * In the general case, no lock is guaranteed to be held between entry and exit
++ * of the lazy mode. So the implementation must assume preemption may be enabled
++ * and cpu migration is possible; it must take steps to be robust against this.
++ * (In practice, for user PTE updates, the appropriate page table lock(s) are
++ * held, but for kernel PTE updates, no lock is held). Nesting is not permitted
++ * and the mode cannot be used in interrupt context.
+ */
+ #ifndef __HAVE_ARCH_ENTER_LAZY_MMU_MODE
+ #define arch_enter_lazy_mmu_mode() do {} while (0)
+@@ -287,7 +291,6 @@ static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+ {
+ page_table_check_ptes_set(mm, ptep, pte, nr);
+
+- arch_enter_lazy_mmu_mode();
+ for (;;) {
+ set_pte(ptep, pte);
+ if (--nr == 0)
+@@ -295,7 +298,6 @@ static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+ ptep++;
+ pte = pte_next_pfn(pte);
+ }
+- arch_leave_lazy_mmu_mode();
+ }
+ #endif
+ #define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)
+diff --git a/include/linux/printk.h b/include/linux/printk.h
+index 4217a9f412b265..5b462029d03c15 100644
+--- a/include/linux/printk.h
++++ b/include/linux/printk.h
+@@ -207,6 +207,7 @@ void printk_legacy_allow_panic_sync(void);
+ extern bool nbcon_device_try_acquire(struct console *con);
+ extern void nbcon_device_release(struct console *con);
+ void nbcon_atomic_flush_unsafe(void);
++bool pr_flush(int timeout_ms, bool reset_on_progress);
+ #else
+ static inline __printf(1, 0)
+ int vprintk(const char *s, va_list args)
+@@ -315,6 +316,11 @@ static inline void nbcon_atomic_flush_unsafe(void)
+ {
+ }
+
++static inline bool pr_flush(int timeout_ms, bool reset_on_progress)
++{
++ return true;
++}
++
+ #endif
+
+ bool this_cpu_in_panic(void);
+diff --git a/include/linux/tpm.h b/include/linux/tpm.h
+index 20a40ade803086..6c3125300c009a 100644
+--- a/include/linux/tpm.h
++++ b/include/linux/tpm.h
+@@ -335,6 +335,7 @@ enum tpm2_cc_attrs {
+ #define TPM_VID_WINBOND 0x1050
+ #define TPM_VID_STM 0x104A
+ #define TPM_VID_ATML 0x1114
++#define TPM_VID_IFX 0x15D1
+
+ enum tpm_chip_flags {
+ TPM_CHIP_FLAG_BOOTSTRAPPED = BIT(0),
+diff --git a/include/net/mac80211.h b/include/net/mac80211.h
+index c3ed2fcff8b798..dcbb2e54746c7f 100644
+--- a/include/net/mac80211.h
++++ b/include/net/mac80211.h
+@@ -2851,6 +2851,11 @@ struct ieee80211_txq {
+ * implements MLO, so operation can continue on other links when one
+ * link is switching.
+ *
++ * @IEEE80211_HW_STRICT: strictly enforce certain things mandated by the spec
++ * but otherwise ignored/worked around for interoperability. This is a
++ * HW flag so drivers can opt in according to their own control, e.g. in
++ * testing.
++ *
+ * @NUM_IEEE80211_HW_FLAGS: number of hardware flags, used for sizing arrays
+ */
+ enum ieee80211_hw_flags {
+@@ -2911,6 +2916,7 @@ enum ieee80211_hw_flags {
+ IEEE80211_HW_DISALLOW_PUNCTURING,
+ IEEE80211_HW_DISALLOW_PUNCTURING_5GHZ,
+ IEEE80211_HW_HANDLES_QUIET_CSA,
++ IEEE80211_HW_STRICT,
+
+ /* keep last, obviously */
+ NUM_IEEE80211_HW_FLAGS
+diff --git a/include/net/sctp/structs.h b/include/net/sctp/structs.h
+index 31248cfdfb235f..dcd288fa1bb6fb 100644
+--- a/include/net/sctp/structs.h
++++ b/include/net/sctp/structs.h
+@@ -775,6 +775,7 @@ struct sctp_transport {
+
+ /* Reference counting. */
+ refcount_t refcnt;
++ __u32 dead:1,
+ /* RTO-Pending : A flag used to track if one of the DATA
+ * chunks sent to this address is currently being
+ * used to compute a RTT. If this flag is 0,
+@@ -784,7 +785,7 @@ struct sctp_transport {
+ * calculation completes (i.e. the DATA chunk
+ * is SACK'd) clear this flag.
+ */
+- __u32 rto_pending:1,
++ rto_pending:1,
+
+ /*
+ * hb_sent : a flag that signals that we have a pending
+diff --git a/include/net/sock.h b/include/net/sock.h
+index 7ef728324e4e7e..93587308c38a6f 100644
+--- a/include/net/sock.h
++++ b/include/net/sock.h
+@@ -338,6 +338,8 @@ struct sk_filter;
+ * @sk_txtime_unused: unused txtime flags
+ * @ns_tracker: tracker for netns reference
+ * @sk_user_frags: xarray of pages the user is holding a reference on.
++ * @sk_owner: reference to the real owner of the socket that calls
++ * sock_lock_init_class_and_name().
+ */
+ struct sock {
+ /*
+@@ -544,6 +546,10 @@ struct sock {
+ struct rcu_head sk_rcu;
+ netns_tracker ns_tracker;
+ struct xarray sk_user_frags;
++
++#if IS_ENABLED(CONFIG_PROVE_LOCKING) && IS_ENABLED(CONFIG_MODULES)
++ struct module *sk_owner;
++#endif
+ };
+
+ struct sock_bh_locked {
+@@ -1592,6 +1598,35 @@ static inline void sk_mem_uncharge(struct sock *sk, int size)
+ sk_mem_reclaim(sk);
+ }
+
++#if IS_ENABLED(CONFIG_PROVE_LOCKING) && IS_ENABLED(CONFIG_MODULES)
++static inline void sk_owner_set(struct sock *sk, struct module *owner)
++{
++ __module_get(owner);
++ sk->sk_owner = owner;
++}
++
++static inline void sk_owner_clear(struct sock *sk)
++{
++ sk->sk_owner = NULL;
++}
++
++static inline void sk_owner_put(struct sock *sk)
++{
++ module_put(sk->sk_owner);
++}
++#else
++static inline void sk_owner_set(struct sock *sk, struct module *owner)
++{
++}
++
++static inline void sk_owner_clear(struct sock *sk)
++{
++}
++
++static inline void sk_owner_put(struct sock *sk)
++{
++}
++#endif
+ /*
+ * Macro so as to not evaluate some arguments when
+ * lockdep is not enabled.
+@@ -1601,13 +1636,14 @@ static inline void sk_mem_uncharge(struct sock *sk, int size)
+ */
+ #define sock_lock_init_class_and_name(sk, sname, skey, name, key) \
+ do { \
++ sk_owner_set(sk, THIS_MODULE); \
+ sk->sk_lock.owned = 0; \
+ init_waitqueue_head(&sk->sk_lock.wq); \
+ spin_lock_init(&(sk)->sk_lock.slock); \
+ debug_check_no_locks_freed((void *)&(sk)->sk_lock, \
+- sizeof((sk)->sk_lock)); \
++ sizeof((sk)->sk_lock)); \
+ lockdep_set_class_and_name(&(sk)->sk_lock.slock, \
+- (skey), (sname)); \
++ (skey), (sname)); \
+ lockdep_init_map(&(sk)->sk_lock.dep_map, (name), (key), 0); \
+ } while (0)
+
+diff --git a/include/uapi/linux/kfd_ioctl.h b/include/uapi/linux/kfd_ioctl.h
+index fa9f9846b88e4d..b0160b09987c1f 100644
+--- a/include/uapi/linux/kfd_ioctl.h
++++ b/include/uapi/linux/kfd_ioctl.h
+@@ -62,6 +62,8 @@ struct kfd_ioctl_get_version_args {
+ #define KFD_MAX_QUEUE_PERCENTAGE 100
+ #define KFD_MAX_QUEUE_PRIORITY 15
+
++#define KFD_MIN_QUEUE_RING_SIZE 1024
++
+ struct kfd_ioctl_create_queue_args {
+ __u64 ring_base_address; /* to KFD */
+ __u64 write_pointer_address; /* from KFD */
+diff --git a/include/uapi/linux/landlock.h b/include/uapi/linux/landlock.h
+index e1d2c27533b49f..8806a132d7b8e1 100644
+--- a/include/uapi/linux/landlock.h
++++ b/include/uapi/linux/landlock.h
+@@ -57,9 +57,11 @@ struct landlock_ruleset_attr {
+ *
+ * - %LANDLOCK_CREATE_RULESET_VERSION: Get the highest supported Landlock ABI
+ * version.
++ * - %LANDLOCK_CREATE_RULESET_ERRATA: Get a bitmask of fixed issues.
+ */
+ /* clang-format off */
+ #define LANDLOCK_CREATE_RULESET_VERSION (1U << 0)
++#define LANDLOCK_CREATE_RULESET_ERRATA (1U << 1)
+ /* clang-format on */
+
+ /**
+diff --git a/include/uapi/linux/psp-sev.h b/include/uapi/linux/psp-sev.h
+index 832c15d9155bdb..eeb20dfb1fdaa4 100644
+--- a/include/uapi/linux/psp-sev.h
++++ b/include/uapi/linux/psp-sev.h
+@@ -73,13 +73,20 @@ typedef enum {
+ SEV_RET_INVALID_PARAM,
+ SEV_RET_RESOURCE_LIMIT,
+ SEV_RET_SECURE_DATA_INVALID,
+- SEV_RET_INVALID_KEY = 0x27,
+- SEV_RET_INVALID_PAGE_SIZE,
+- SEV_RET_INVALID_PAGE_STATE,
+- SEV_RET_INVALID_MDATA_ENTRY,
+- SEV_RET_INVALID_PAGE_OWNER,
+- SEV_RET_INVALID_PAGE_AEAD_OFLOW,
+- SEV_RET_RMP_INIT_REQUIRED,
++ SEV_RET_INVALID_PAGE_SIZE = 0x0019,
++ SEV_RET_INVALID_PAGE_STATE = 0x001A,
++ SEV_RET_INVALID_MDATA_ENTRY = 0x001B,
++ SEV_RET_INVALID_PAGE_OWNER = 0x001C,
++ SEV_RET_AEAD_OFLOW = 0x001D,
++ SEV_RET_EXIT_RING_BUFFER = 0x001F,
++ SEV_RET_RMP_INIT_REQUIRED = 0x0020,
++ SEV_RET_BAD_SVN = 0x0021,
++ SEV_RET_BAD_VERSION = 0x0022,
++ SEV_RET_SHUTDOWN_REQUIRED = 0x0023,
++ SEV_RET_UPDATE_FAILED = 0x0024,
++ SEV_RET_RESTORE_REQUIRED = 0x0025,
++ SEV_RET_RMP_INITIALIZATION_FAILED = 0x0026,
++ SEV_RET_INVALID_KEY = 0x0027,
+ SEV_RET_MAX,
+ } sev_ret_code;
+
+diff --git a/include/uapi/linux/rkisp1-config.h b/include/uapi/linux/rkisp1-config.h
+index 430daceafac705..2d995f3c1ca378 100644
+--- a/include/uapi/linux/rkisp1-config.h
++++ b/include/uapi/linux/rkisp1-config.h
+@@ -1528,7 +1528,7 @@ enum rksip1_ext_param_buffer_version {
+ * The expected memory layout of the parameters buffer is::
+ *
+ * +-------------------- struct rkisp1_ext_params_cfg -------------------+
+- * | version = RKISP_EXT_PARAMS_BUFFER_V1; |
++ * | version = RKISP1_EXT_PARAM_BUFFER_V1; |
+ * | data_size = sizeof(struct rkisp1_ext_params_bls_config) |
+ * | + sizeof(struct rkisp1_ext_params_dpcc_config); |
+ * | +------------------------- data ---------------------------------+ |
+diff --git a/include/xen/interface/xen-mca.h b/include/xen/interface/xen-mca.h
+index 464aa6b3a5f928..1c9afbe8cc2600 100644
+--- a/include/xen/interface/xen-mca.h
++++ b/include/xen/interface/xen-mca.h
+@@ -372,7 +372,7 @@ struct xen_mce {
+ #define XEN_MCE_LOG_LEN 32
+
+ struct xen_mce_log {
+- char signature[12]; /* "MACHINECHECK" */
++ char signature[12] __nonstring; /* "MACHINECHECK" */
+ unsigned len; /* = XEN_MCE_LOG_LEN */
+ unsigned next;
+ unsigned flags;
+diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
+index 4910ee7ac18aad..7370f763346f45 100644
+--- a/io_uring/io_uring.c
++++ b/io_uring/io_uring.c
+@@ -1818,7 +1818,7 @@ void io_wq_submit_work(struct io_wq_work *work)
+ * Don't allow any multishot execution from io-wq. It's more restrictive
+ * than necessary and also cleaner.
+ */
+- if (req->flags & REQ_F_APOLL_MULTISHOT) {
++ if (req->flags & (REQ_F_MULTISHOT|REQ_F_APOLL_MULTISHOT)) {
+ err = -EBADFD;
+ if (!io_file_can_poll(req))
+ goto fail;
+@@ -1829,7 +1829,7 @@ void io_wq_submit_work(struct io_wq_work *work)
+ goto fail;
+ return;
+ } else {
+- req->flags &= ~REQ_F_APOLL_MULTISHOT;
++ req->flags &= ~(REQ_F_APOLL_MULTISHOT|REQ_F_MULTISHOT);
+ }
+ }
+
+diff --git a/io_uring/kbuf.c b/io_uring/kbuf.c
+index 8e72de7712ac97..b224c03056c554 100644
+--- a/io_uring/kbuf.c
++++ b/io_uring/kbuf.c
+@@ -480,6 +480,8 @@ int io_provide_buffers_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe
+ p->nbufs = tmp;
+ p->addr = READ_ONCE(sqe->addr);
+ p->len = READ_ONCE(sqe->len);
++ if (!p->len)
++ return -EINVAL;
+
+ if (check_mul_overflow((unsigned long)p->len, (unsigned long)p->nbufs,
+ &size))
+diff --git a/io_uring/net.c b/io_uring/net.c
+index 16d54cd4d53f38..8f965ec67b5767 100644
+--- a/io_uring/net.c
++++ b/io_uring/net.c
+@@ -429,6 +429,7 @@ int io_sendmsg_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ sr->msg_flags |= MSG_WAITALL;
+ sr->buf_group = req->buf_index;
+ req->buf_list = NULL;
++ req->flags |= REQ_F_MULTISHOT;
+ }
+
+ #ifdef CONFIG_COMPAT
+@@ -1650,6 +1651,8 @@ int io_accept(struct io_kiocb *req, unsigned int issue_flags)
+ }
+
+ io_req_set_res(req, ret, cflags);
++ if (!(issue_flags & IO_URING_F_MULTISHOT))
++ return IOU_OK;
+ return IOU_STOP_MULTISHOT;
+ }
+
+diff --git a/kernel/Makefile b/kernel/Makefile
+index 87866b037fbed3..434929de17ef27 100644
+--- a/kernel/Makefile
++++ b/kernel/Makefile
+@@ -21,6 +21,11 @@ ifdef CONFIG_FUNCTION_TRACER
+ CFLAGS_REMOVE_irq_work.o = $(CC_FLAGS_FTRACE)
+ endif
+
++# Branch profiling isn't noinstr-safe
++ifdef CONFIG_TRACE_BRANCH_PROFILING
++CFLAGS_context_tracking.o += -DDISABLE_BRANCH_PROFILING
++endif
++
+ # Prevents flicker of uninteresting __do_softirq()/__local_bh_disable_ip()
+ # in coverage traces.
+ KCOV_INSTRUMENT_softirq.o := n
+diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
+index afc665b7b1fe56..81f078c059e86d 100644
+--- a/kernel/cgroup/cgroup.c
++++ b/kernel/cgroup/cgroup.c
+@@ -5909,6 +5909,12 @@ static void kill_css(struct cgroup_subsys_state *css)
+ if (css->flags & CSS_DYING)
+ return;
+
++ /*
++ * Call css_killed(), if defined, before setting the CSS_DYING flag
++ */
++ if (css->ss->css_killed)
++ css->ss->css_killed(css);
++
+ css->flags |= CSS_DYING;
+
+ /*
+diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
+index 1892dc8cd21191..d72f843d9feebd 100644
+--- a/kernel/cgroup/cpuset.c
++++ b/kernel/cgroup/cpuset.c
+@@ -1416,6 +1416,7 @@ static int remote_partition_enable(struct cpuset *cs, int new_prs,
+ list_add(&cs->remote_sibling, &remote_children);
+ spin_unlock_irq(&callback_lock);
+ update_unbound_workqueue_cpumask(isolcpus_updated);
++ cs->prs_err = 0;
+
+ /*
+ * Propagate changes in top_cpuset's effective_cpus down the hierarchy.
+@@ -1446,9 +1447,11 @@ static void remote_partition_disable(struct cpuset *cs, struct tmpmasks *tmp)
+ list_del_init(&cs->remote_sibling);
+ isolcpus_updated = partition_xcpus_del(cs->partition_root_state,
+ NULL, tmp->new_cpus);
+- cs->partition_root_state = -cs->partition_root_state;
+- if (!cs->prs_err)
+- cs->prs_err = PERR_INVCPUS;
++ if (cs->prs_err)
++ cs->partition_root_state = -cs->partition_root_state;
++ else
++ cs->partition_root_state = PRS_MEMBER;
++
+ reset_partition_data(cs);
+ spin_unlock_irq(&callback_lock);
+ update_unbound_workqueue_cpumask(isolcpus_updated);
+@@ -1481,8 +1484,10 @@ static void remote_cpus_update(struct cpuset *cs, struct cpumask *newmask,
+
+ WARN_ON_ONCE(!cpumask_subset(cs->effective_xcpus, subpartitions_cpus));
+
+- if (cpumask_empty(newmask))
++ if (cpumask_empty(newmask)) {
++ cs->prs_err = PERR_CPUSEMPTY;
+ goto invalidate;
++ }
+
+ adding = cpumask_andnot(tmp->addmask, newmask, cs->effective_xcpus);
+ deleting = cpumask_andnot(tmp->delmask, cs->effective_xcpus, newmask);
+@@ -1492,10 +1497,15 @@ static void remote_cpus_update(struct cpuset *cs, struct cpumask *newmask,
+ * not allocated to other partitions and there are effective_cpus
+ * left in the top cpuset.
+ */
+- if (adding && (!capable(CAP_SYS_ADMIN) ||
+- cpumask_intersects(tmp->addmask, subpartitions_cpus) ||
+- cpumask_subset(top_cpuset.effective_cpus, tmp->addmask)))
+- goto invalidate;
++ if (adding) {
++ if (!capable(CAP_SYS_ADMIN))
++ cs->prs_err = PERR_ACCESS;
++ else if (cpumask_intersects(tmp->addmask, subpartitions_cpus) ||
++ cpumask_subset(top_cpuset.effective_cpus, tmp->addmask))
++ cs->prs_err = PERR_NOCPUS;
++ if (cs->prs_err)
++ goto invalidate;
++ }
+
+ spin_lock_irq(&callback_lock);
+ if (adding)
+@@ -1611,7 +1621,7 @@ static bool prstate_housekeeping_conflict(int prstate, struct cpumask *new_cpus)
+ * The partcmd_update command is used by update_cpumasks_hier() with newmask
+ * NULL and update_cpumask() with newmask set. The partcmd_invalidate is used
+ * by update_cpumask() with NULL newmask. In both cases, the callers won't
+- * check for error and so partition_root_state and prs_error will be updated
++ * check for error and so partition_root_state and prs_err will be updated
+ * directly.
+ */
+ static int update_parent_effective_cpumask(struct cpuset *cs, int cmd,
+@@ -1689,9 +1699,9 @@ static int update_parent_effective_cpumask(struct cpuset *cs, int cmd,
+ if (nocpu)
+ return PERR_NOCPUS;
+
+- cpumask_copy(tmp->delmask, xcpus);
+- deleting = true;
+- subparts_delta++;
++ deleting = cpumask_and(tmp->delmask, xcpus, parent->effective_xcpus);
++ if (deleting)
++ subparts_delta++;
+ new_prs = (cmd == partcmd_enable) ? PRS_ROOT : PRS_ISOLATED;
+ } else if (cmd == partcmd_disable) {
+ /*
+@@ -3485,9 +3495,6 @@ static void cpuset_css_offline(struct cgroup_subsys_state *css)
+ cpus_read_lock();
+ mutex_lock(&cpuset_mutex);
+
+- if (is_partition_valid(cs))
+- update_prstate(cs, 0);
+-
+ if (!cpuset_v2() && is_sched_load_balance(cs))
+ cpuset_update_flag(CS_SCHED_LOAD_BALANCE, cs, 0);
+
+@@ -3498,6 +3505,22 @@ static void cpuset_css_offline(struct cgroup_subsys_state *css)
+ cpus_read_unlock();
+ }
+
++static void cpuset_css_killed(struct cgroup_subsys_state *css)
++{
++ struct cpuset *cs = css_cs(css);
++
++ cpus_read_lock();
++ mutex_lock(&cpuset_mutex);
++
++ /* Reset valid partition back to member */
++ if (is_partition_valid(cs))
++ update_prstate(cs, PRS_MEMBER);
++
++ mutex_unlock(&cpuset_mutex);
++ cpus_read_unlock();
++
++}
++
+ static void cpuset_css_free(struct cgroup_subsys_state *css)
+ {
+ struct cpuset *cs = css_cs(css);
+@@ -3619,6 +3642,7 @@ struct cgroup_subsys cpuset_cgrp_subsys = {
+ .css_alloc = cpuset_css_alloc,
+ .css_online = cpuset_css_online,
+ .css_offline = cpuset_css_offline,
++ .css_killed = cpuset_css_killed,
+ .css_free = cpuset_css_free,
+ .can_attach = cpuset_can_attach,
+ .cancel_attach = cpuset_cancel_attach,
+@@ -3749,6 +3773,7 @@ static void cpuset_hotplug_update_tasks(struct cpuset *cs, struct tmpmasks *tmp)
+
+ if (remote && cpumask_empty(&new_cpus) &&
+ partition_is_populated(cs, NULL)) {
++ cs->prs_err = PERR_HOTPLUG;
+ remote_partition_disable(cs, tmp);
+ compute_effective_cpumask(&new_cpus, cs, parent);
+ remote = false;
+diff --git a/kernel/entry/Makefile b/kernel/entry/Makefile
+index 095c775e001e27..d4b8bd0af79b0d 100644
+--- a/kernel/entry/Makefile
++++ b/kernel/entry/Makefile
+@@ -6,6 +6,9 @@ KASAN_SANITIZE := n
+ UBSAN_SANITIZE := n
+ KCOV_INSTRUMENT := n
+
++# Branch profiling isn't noinstr-safe
++ccflags-$(CONFIG_TRACE_BRANCH_PROFILING) += -DDISABLE_BRANCH_PROFILING
++
+ CFLAGS_REMOVE_common.o = -fstack-protector -fstack-protector-strong
+ CFLAGS_common.o += -fno-stack-protector
+
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index f6cf17929bb983..ee6b7281a19943 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -5253,6 +5253,8 @@ static int exclusive_event_init(struct perf_event *event)
+ return -EBUSY;
+ }
+
++ event->attach_state |= PERF_ATTACH_EXCLUSIVE;
++
+ return 0;
+ }
+
+@@ -5260,14 +5262,13 @@ static void exclusive_event_destroy(struct perf_event *event)
+ {
+ struct pmu *pmu = event->pmu;
+
+- if (!is_exclusive_pmu(pmu))
+- return;
+-
+ /* see comment in exclusive_event_init() */
+ if (event->attach_state & PERF_ATTACH_TASK)
+ atomic_dec(&pmu->exclusive_cnt);
+ else
+ atomic_inc(&pmu->exclusive_cnt);
++
++ event->attach_state &= ~PERF_ATTACH_EXCLUSIVE;
+ }
+
+ static bool exclusive_event_match(struct perf_event *e1, struct perf_event *e2)
+@@ -5302,35 +5303,58 @@ static bool exclusive_event_installable(struct perf_event *event,
+ static void perf_addr_filters_splice(struct perf_event *event,
+ struct list_head *head);
+
+-static void perf_pending_task_sync(struct perf_event *event)
++/* vs perf_event_alloc() error */
++static void __free_event(struct perf_event *event)
+ {
+- struct callback_head *head = &event->pending_task;
++ if (event->attach_state & PERF_ATTACH_CALLCHAIN)
++ put_callchain_buffers();
++
++ kfree(event->addr_filter_ranges);
++
++ if (event->attach_state & PERF_ATTACH_EXCLUSIVE)
++ exclusive_event_destroy(event);
++
++ if (is_cgroup_event(event))
++ perf_detach_cgroup(event);
++
++ if (event->destroy)
++ event->destroy(event);
+
+- if (!event->pending_work)
+- return;
+ /*
+- * If the task is queued to the current task's queue, we
+- * obviously can't wait for it to complete. Simply cancel it.
++ * Must be after ->destroy(), due to uprobe_perf_close() using
++ * hw.target.
+ */
+- if (task_work_cancel(current, head)) {
+- event->pending_work = 0;
+- local_dec(&event->ctx->nr_no_switch_fast);
+- return;
++ if (event->hw.target)
++ put_task_struct(event->hw.target);
++
++ if (event->pmu_ctx) {
++ /*
++ * put_pmu_ctx() needs an event->ctx reference, because of
++ * epc->ctx.
++ */
++ WARN_ON_ONCE(!event->ctx);
++ WARN_ON_ONCE(event->pmu_ctx->ctx != event->ctx);
++ put_pmu_ctx(event->pmu_ctx);
+ }
+
+ /*
+- * All accesses related to the event are within the same RCU section in
+- * perf_pending_task(). The RCU grace period before the event is freed
+- * will make sure all those accesses are complete by then.
++ * perf_event_free_task() relies on put_ctx() being 'last', in
++ * particular all task references must be cleaned up.
+ */
+- rcuwait_wait_event(&event->pending_work_wait, !event->pending_work, TASK_UNINTERRUPTIBLE);
++ if (event->ctx)
++ put_ctx(event->ctx);
++
++ if (event->pmu)
++ module_put(event->pmu->module);
++
++ call_rcu(&event->rcu_head, free_event_rcu);
+ }
+
++/* vs perf_event_alloc() success */
+ static void _free_event(struct perf_event *event)
+ {
+ irq_work_sync(&event->pending_irq);
+ irq_work_sync(&event->pending_disable_irq);
+- perf_pending_task_sync(event);
+
+ unaccount_event(event);
+
+@@ -5348,42 +5372,10 @@ static void _free_event(struct perf_event *event)
+ mutex_unlock(&event->mmap_mutex);
+ }
+
+- if (is_cgroup_event(event))
+- perf_detach_cgroup(event);
+-
+- if (!event->parent) {
+- if (event->attr.sample_type & PERF_SAMPLE_CALLCHAIN)
+- put_callchain_buffers();
+- }
+-
+ perf_event_free_bpf_prog(event);
+ perf_addr_filters_splice(event, NULL);
+- kfree(event->addr_filter_ranges);
+-
+- if (event->destroy)
+- event->destroy(event);
+-
+- /*
+- * Must be after ->destroy(), due to uprobe_perf_close() using
+- * hw.target.
+- */
+- if (event->hw.target)
+- put_task_struct(event->hw.target);
+-
+- if (event->pmu_ctx)
+- put_pmu_ctx(event->pmu_ctx);
+-
+- /*
+- * perf_event_free_task() relies on put_ctx() being 'last', in particular
+- * all task references must be cleaned up.
+- */
+- if (event->ctx)
+- put_ctx(event->ctx);
+-
+- exclusive_event_destroy(event);
+- module_put(event->pmu->module);
+
+- call_rcu(&event->rcu_head, free_event_rcu);
++ __free_event(event);
+ }
+
+ /*
+@@ -5455,10 +5447,17 @@ static void perf_remove_from_owner(struct perf_event *event)
+
+ static void put_event(struct perf_event *event)
+ {
++ struct perf_event *parent;
++
+ if (!atomic_long_dec_and_test(&event->refcount))
+ return;
+
++ parent = event->parent;
+ _free_event(event);
++
++ /* Matches the refcount bump in inherit_event() */
++ if (parent)
++ put_event(parent);
+ }
+
+ /*
+@@ -5542,11 +5541,6 @@ int perf_event_release_kernel(struct perf_event *event)
+ if (tmp == child) {
+ perf_remove_from_context(child, DETACH_GROUP);
+ list_move(&child->child_list, &free_list);
+- /*
+- * This matches the refcount bump in inherit_event();
+- * this can't be the last reference.
+- */
+- put_event(event);
+ } else {
+ var = &ctx->refcount;
+ }
+@@ -5572,7 +5566,8 @@ int perf_event_release_kernel(struct perf_event *event)
+ void *var = &child->ctx->refcount;
+
+ list_del(&child->child_list);
+- free_event(child);
++ /* Last reference unless ->pending_task work is pending */
++ put_event(child);
+
+ /*
+ * Wake any perf_event_free_task() waiting for this event to be
+@@ -5583,7 +5578,11 @@ int perf_event_release_kernel(struct perf_event *event)
+ }
+
+ no_ctx:
+- put_event(event); /* Must be the 'last' reference */
++ /*
++ * Last reference unless ->pending_task work is pending on this event
++ * or any of its children.
++ */
++ put_event(event);
+ return 0;
+ }
+ EXPORT_SYMBOL_GPL(perf_event_release_kernel);
+@@ -6997,12 +6996,6 @@ static void perf_pending_task(struct callback_head *head)
+ struct perf_event *event = container_of(head, struct perf_event, pending_task);
+ int rctx;
+
+- /*
+- * All accesses to the event must belong to the same implicit RCU read-side
+- * critical section as the ->pending_work reset. See comment in
+- * perf_pending_task_sync().
+- */
+- rcu_read_lock();
+ /*
+ * If we 'fail' here, that's OK, it means recursion is already disabled
+ * and we won't recurse 'further'.
+@@ -7013,9 +7006,8 @@ static void perf_pending_task(struct callback_head *head)
+ event->pending_work = 0;
+ perf_sigtrap(event);
+ local_dec(&event->ctx->nr_no_switch_fast);
+- rcuwait_wake_up(&event->pending_work_wait);
+ }
+- rcu_read_unlock();
++ put_event(event);
+
+ if (rctx >= 0)
+ perf_swevent_put_recursion_context(rctx);
+@@ -9961,6 +9953,7 @@ static int __perf_event_overflow(struct perf_event *event,
+ !task_work_add(current, &event->pending_task, notify_mode)) {
+ event->pending_work = pending_id;
+ local_inc(&event->ctx->nr_no_switch_fast);
++ WARN_ON_ONCE(!atomic_long_inc_not_zero(&event->refcount));
+
+ event->pending_addr = 0;
+ if (valid_sample && (data->sample_flags & PERF_SAMPLE_ADDR))
+@@ -12056,8 +12049,10 @@ static int perf_try_init_event(struct pmu *pmu, struct perf_event *event)
+ event->destroy(event);
+ }
+
+- if (ret)
++ if (ret) {
++ event->pmu = NULL;
+ module_put(pmu->module);
++ }
+
+ return ret;
+ }
+@@ -12306,7 +12301,6 @@ perf_event_alloc(struct perf_event_attr *attr, int cpu,
+ init_irq_work(&event->pending_irq, perf_pending_irq);
+ event->pending_disable_irq = IRQ_WORK_INIT_HARD(perf_pending_disable);
+ init_task_work(&event->pending_task, perf_pending_task);
+- rcuwait_init(&event->pending_work_wait);
+
+ mutex_init(&event->mmap_mutex);
+ raw_spin_lock_init(&event->addr_filters.lock);
+@@ -12385,7 +12379,7 @@ perf_event_alloc(struct perf_event_attr *attr, int cpu,
+ * See perf_output_read().
+ */
+ if (has_inherit_and_sample_read(attr) && !(attr->sample_type & PERF_SAMPLE_TID))
+- goto err_ns;
++ goto err;
+
+ if (!has_branch_stack(event))
+ event->attr.branch_sample_type = 0;
+@@ -12393,7 +12387,7 @@ perf_event_alloc(struct perf_event_attr *attr, int cpu,
+ pmu = perf_init_event(event);
+ if (IS_ERR(pmu)) {
+ err = PTR_ERR(pmu);
+- goto err_ns;
++ goto err;
+ }
+
+ /*
+@@ -12403,25 +12397,25 @@ perf_event_alloc(struct perf_event_attr *attr, int cpu,
+ */
+ if (pmu->task_ctx_nr == perf_invalid_context && (task || cgroup_fd != -1)) {
+ err = -EINVAL;
+- goto err_pmu;
++ goto err;
+ }
+
+ if (event->attr.aux_output &&
+ (!(pmu->capabilities & PERF_PMU_CAP_AUX_OUTPUT) ||
+ event->attr.aux_pause || event->attr.aux_resume)) {
+ err = -EOPNOTSUPP;
+- goto err_pmu;
++ goto err;
+ }
+
+ if (event->attr.aux_pause && event->attr.aux_resume) {
+ err = -EINVAL;
+- goto err_pmu;
++ goto err;
+ }
+
+ if (event->attr.aux_start_paused) {
+ if (!(pmu->capabilities & PERF_PMU_CAP_AUX_PAUSE)) {
+ err = -EOPNOTSUPP;
+- goto err_pmu;
++ goto err;
+ }
+ event->hw.aux_paused = 1;
+ }
+@@ -12429,12 +12423,12 @@ perf_event_alloc(struct perf_event_attr *attr, int cpu,
+ if (cgroup_fd != -1) {
+ err = perf_cgroup_connect(cgroup_fd, event, attr, group_leader);
+ if (err)
+- goto err_pmu;
++ goto err;
+ }
+
+ err = exclusive_event_init(event);
+ if (err)
+- goto err_pmu;
++ goto err;
+
+ if (has_addr_filter(event)) {
+ event->addr_filter_ranges = kcalloc(pmu->nr_addr_filters,
+@@ -12442,7 +12436,7 @@ perf_event_alloc(struct perf_event_attr *attr, int cpu,
+ GFP_KERNEL);
+ if (!event->addr_filter_ranges) {
+ err = -ENOMEM;
+- goto err_per_task;
++ goto err;
+ }
+
+ /*
+@@ -12467,41 +12461,22 @@ perf_event_alloc(struct perf_event_attr *attr, int cpu,
+ if (event->attr.sample_type & PERF_SAMPLE_CALLCHAIN) {
+ err = get_callchain_buffers(attr->sample_max_stack);
+ if (err)
+- goto err_addr_filters;
++ goto err;
++ event->attach_state |= PERF_ATTACH_CALLCHAIN;
+ }
+ }
+
+ err = security_perf_event_alloc(event);
+ if (err)
+- goto err_callchain_buffer;
++ goto err;
+
+ /* symmetric to unaccount_event() in _free_event() */
+ account_event(event);
+
+ return event;
+
+-err_callchain_buffer:
+- if (!event->parent) {
+- if (event->attr.sample_type & PERF_SAMPLE_CALLCHAIN)
+- put_callchain_buffers();
+- }
+-err_addr_filters:
+- kfree(event->addr_filter_ranges);
+-
+-err_per_task:
+- exclusive_event_destroy(event);
+-
+-err_pmu:
+- if (is_cgroup_event(event))
+- perf_detach_cgroup(event);
+- if (event->destroy)
+- event->destroy(event);
+- module_put(pmu->module);
+-err_ns:
+- if (event->hw.target)
+- put_task_struct(event->hw.target);
+- call_rcu(&event->rcu_head, free_event_rcu);
+-
++err:
++ __free_event(event);
+ return ERR_PTR(err);
+ }
+
+@@ -13466,8 +13441,7 @@ perf_event_exit_event(struct perf_event *event, struct perf_event_context *ctx)
+ * Kick perf_poll() for is_event_hup();
+ */
+ perf_event_wakeup(parent_event);
+- free_event(event);
+- put_event(parent_event);
++ put_event(event);
+ return;
+ }
+
+@@ -13585,13 +13559,11 @@ static void perf_free_event(struct perf_event *event,
+ list_del_init(&event->child_list);
+ mutex_unlock(&parent->child_mutex);
+
+- put_event(parent);
+-
+ raw_spin_lock_irq(&ctx->lock);
+ perf_group_detach(event);
+ list_del_event(event, ctx);
+ raw_spin_unlock_irq(&ctx->lock);
+- free_event(event);
++ put_event(event);
+ }
+
+ /*
+diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
+index 7420a2a0d1f747..29c59599153dfe 100644
+--- a/kernel/events/uprobes.c
++++ b/kernel/events/uprobes.c
+@@ -1955,6 +1955,9 @@ static void free_ret_instance(struct uprobe_task *utask,
+ * to-be-reused return instances for future uretprobes. If ri_timer()
+ * happens to be running right now, though, we fallback to safety and
+ * just perform RCU-delated freeing of ri.
++ * Admittedly, this is a rather simple use of seqcount, but it nicely
++ * abstracts away all the necessary memory barriers, so we use
++ * a well-supported kernel primitive here.
+ */
+ if (raw_seqcount_try_begin(&utask->ri_seqcount, seq)) {
+ /* immediate reuse of ri without RCU GP is OK */
+@@ -2015,12 +2018,20 @@ static void ri_timer(struct timer_list *timer)
+ /* RCU protects return_instance from freeing. */
+ guard(rcu)();
+
+- write_seqcount_begin(&utask->ri_seqcount);
++ /*
++ * See free_ret_instance() for notes on seqcount use.
++ * We also employ raw API variants to avoid lockdep false-positive
++ * warning complaining about enabled preemption. The timer can only be
++ * invoked once for a uprobe_task. Therefore there can only be one
++ * writer. The reader does not require an even sequence count to make
++ * progress, so it is OK to remain preemptible on PREEMPT_RT.
++ */
++ raw_write_seqcount_begin(&utask->ri_seqcount);
+
+ for_each_ret_instance_rcu(ri, utask->return_instances)
+ hprobe_expire(&ri->hprobe, false);
+
+- write_seqcount_end(&utask->ri_seqcount);
++ raw_write_seqcount_end(&utask->ri_seqcount);
+ }
+
+ static struct uprobe_task *alloc_utask(void)
+diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
+index 4470680f022697..54fff13ed24947 100644
+--- a/kernel/locking/lockdep.c
++++ b/kernel/locking/lockdep.c
+@@ -6249,6 +6249,9 @@ static void zap_class(struct pending_free *pf, struct lock_class *class)
+ hlist_del_rcu(&class->hash_entry);
+ WRITE_ONCE(class->key, NULL);
+ WRITE_ONCE(class->name, NULL);
++ /* Class allocated but not used, -1 in nr_unused_locks */
++ if (class->usage_mask == 0)
++ debug_atomic_dec(nr_unused_locks);
+ nr_lock_classes--;
+ __clear_bit(class - lock_classes, lock_classes_in_use);
+ if (class - lock_classes == max_lock_class_idx)
+diff --git a/kernel/power/hibernate.c b/kernel/power/hibernate.c
+index 10a01af63a8079..b129ed1d25a8af 100644
+--- a/kernel/power/hibernate.c
++++ b/kernel/power/hibernate.c
+@@ -1446,10 +1446,10 @@ static const char * const comp_alg_enabled[] = {
+ static int hibernate_compressor_param_set(const char *compressor,
+ const struct kernel_param *kp)
+ {
+- unsigned int sleep_flags;
+ int index, ret;
+
+- sleep_flags = lock_system_sleep();
++ if (!mutex_trylock(&system_transition_mutex))
++ return -EBUSY;
+
+ index = sysfs_match_string(comp_alg_enabled, compressor);
+ if (index >= 0) {
+@@ -1461,7 +1461,7 @@ static int hibernate_compressor_param_set(const char *compressor,
+ ret = index;
+ }
+
+- unlock_system_sleep(sleep_flags);
++ mutex_unlock(&system_transition_mutex);
+
+ if (ret)
+ pr_debug("Cannot set specified compressor %s\n",
+diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
+index 07668433644b8a..057db78876cd98 100644
+--- a/kernel/printk/printk.c
++++ b/kernel/printk/printk.c
+@@ -2461,7 +2461,6 @@ asmlinkage __visible int _printk(const char *fmt, ...)
+ }
+ EXPORT_SYMBOL(_printk);
+
+-static bool pr_flush(int timeout_ms, bool reset_on_progress);
+ static bool __pr_flush(struct console *con, int timeout_ms, bool reset_on_progress);
+
+ #else /* CONFIG_PRINTK */
+@@ -2474,7 +2473,6 @@ static bool __pr_flush(struct console *con, int timeout_ms, bool reset_on_progre
+
+ static u64 syslog_seq;
+
+-static bool pr_flush(int timeout_ms, bool reset_on_progress) { return true; }
+ static bool __pr_flush(struct console *con, int timeout_ms, bool reset_on_progress) { return true; }
+
+ #endif /* CONFIG_PRINTK */
+@@ -4466,7 +4464,7 @@ static bool __pr_flush(struct console *con, int timeout_ms, bool reset_on_progre
+ * Context: Process context. May sleep while acquiring console lock.
+ * Return: true if all usable printers are caught up.
+ */
+-static bool pr_flush(int timeout_ms, bool reset_on_progress)
++bool pr_flush(int timeout_ms, bool reset_on_progress)
+ {
+ return __pr_flush(NULL, timeout_ms, reset_on_progress);
+ }
+diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
+index b83c74c4dcc0d9..2d8f3329023c50 100644
+--- a/kernel/rcu/srcutree.c
++++ b/kernel/rcu/srcutree.c
+@@ -647,6 +647,7 @@ static unsigned long srcu_get_delay(struct srcu_struct *ssp)
+ unsigned long jbase = SRCU_INTERVAL;
+ struct srcu_usage *sup = ssp->srcu_sup;
+
++ lockdep_assert_held(&ACCESS_PRIVATE(ssp->srcu_sup, lock));
+ if (srcu_gp_is_expedited(ssp))
+ jbase = 0;
+ if (rcu_seq_state(READ_ONCE(sup->srcu_gp_seq))) {
+@@ -674,9 +675,13 @@ static unsigned long srcu_get_delay(struct srcu_struct *ssp)
+ void cleanup_srcu_struct(struct srcu_struct *ssp)
+ {
+ int cpu;
++ unsigned long delay;
+ struct srcu_usage *sup = ssp->srcu_sup;
+
+- if (WARN_ON(!srcu_get_delay(ssp)))
++ spin_lock_irq_rcu_node(ssp->srcu_sup);
++ delay = srcu_get_delay(ssp);
++ spin_unlock_irq_rcu_node(ssp->srcu_sup);
++ if (WARN_ON(!delay))
+ return; /* Just leak it! */
+ if (WARN_ON(srcu_readers_active(ssp)))
+ return; /* Just leak it! */
+@@ -1102,7 +1107,9 @@ static bool try_check_zero(struct srcu_struct *ssp, int idx, int trycount)
+ {
+ unsigned long curdelay;
+
++ spin_lock_irq_rcu_node(ssp->srcu_sup);
+ curdelay = !srcu_get_delay(ssp);
++ spin_unlock_irq_rcu_node(ssp->srcu_sup);
+
+ for (;;) {
+ if (srcu_readers_active_idx_check(ssp, idx))
+@@ -1849,7 +1856,9 @@ static void process_srcu(struct work_struct *work)
+ ssp = sup->srcu_ssp;
+
+ srcu_advance_state(ssp);
++ spin_lock_irq_rcu_node(ssp->srcu_sup);
+ curdelay = srcu_get_delay(ssp);
++ spin_unlock_irq_rcu_node(ssp->srcu_sup);
+ if (curdelay) {
+ WRITE_ONCE(sup->reschedule_count, 0);
+ } else {
+diff --git a/kernel/reboot.c b/kernel/reboot.c
+index f348f1ba9e2267..9461b6b0baa3a2 100644
+--- a/kernel/reboot.c
++++ b/kernel/reboot.c
+@@ -704,6 +704,7 @@ void kernel_power_off(void)
+ migrate_to_reboot_cpu();
+ syscore_shutdown();
+ pr_emerg("Power down\n");
++ pr_flush(1000, true);
+ kmsg_dump(KMSG_DUMP_SHUTDOWN);
+ machine_power_off();
+ }
+diff --git a/kernel/sched/Makefile b/kernel/sched/Makefile
+index 976092b7bd4520..8ae86371ddcddf 100644
+--- a/kernel/sched/Makefile
++++ b/kernel/sched/Makefile
+@@ -22,6 +22,11 @@ ifneq ($(CONFIG_SCHED_OMIT_FRAME_POINTER),y)
+ CFLAGS_core.o := $(PROFILING) -fno-omit-frame-pointer
+ endif
+
++# Branch profiling isn't noinstr-safe
++ifdef CONFIG_TRACE_BRANCH_PROFILING
++CFLAGS_build_policy.o += -DDISABLE_BRANCH_PROFILING
++CFLAGS_build_utility.o += -DDISABLE_BRANCH_PROFILING
++endif
+ #
+ # Build efficiency:
+ #
+diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
+index 7b9dfee858e798..9688f1a5df8b8f 100644
+--- a/kernel/sched/ext.c
++++ b/kernel/sched/ext.c
+@@ -4523,8 +4523,8 @@ static struct scx_dispatch_q *create_dsq(u64 dsq_id, int node)
+
+ init_dsq(dsq, dsq_id);
+
+- ret = rhashtable_insert_fast(&dsq_hash, &dsq->hash_node,
+- dsq_hash_params);
++ ret = rhashtable_lookup_insert_fast(&dsq_hash, &dsq->hash_node,
++ dsq_hash_params);
+ if (ret) {
+ kfree(dsq);
+ return ERR_PTR(ret);
+diff --git a/kernel/time/Makefile b/kernel/time/Makefile
+index fe0ae82124fe7f..e6e9b85d4db5f8 100644
+--- a/kernel/time/Makefile
++++ b/kernel/time/Makefile
+@@ -1,4 +1,10 @@
+ # SPDX-License-Identifier: GPL-2.0
++
++# Branch profiling isn't noinstr-safe
++ifdef CONFIG_TRACE_BRANCH_PROFILING
++CFLAGS_sched_clock.o += -DDISABLE_BRANCH_PROFILING
++endif
++
+ obj-y += time.o timer.o hrtimer.o sleep_timeout.o
+ obj-y += timekeeping.o ntp.o clocksource.o jiffies.o timer_list.o
+ obj-y += timeconv.o timecounter.o alarmtimer.o
+diff --git a/kernel/trace/fprobe.c b/kernel/trace/fprobe.c
+index 33082c4e8154ea..95c6e3473a76b5 100644
+--- a/kernel/trace/fprobe.c
++++ b/kernel/trace/fprobe.c
+@@ -89,8 +89,11 @@ static bool delete_fprobe_node(struct fprobe_hlist_node *node)
+ {
+ lockdep_assert_held(&fprobe_mutex);
+
+- WRITE_ONCE(node->fp, NULL);
+- hlist_del_rcu(&node->hlist);
++ /* Avoid double deleting */
++ if (READ_ONCE(node->fp) != NULL) {
++ WRITE_ONCE(node->fp, NULL);
++ hlist_del_rcu(&node->hlist);
++ }
+ return !!find_first_fprobe_node(node->addr);
+ }
+
+@@ -411,6 +414,102 @@ static void fprobe_graph_remove_ips(unsigned long *addrs, int num)
+ ftrace_set_filter_ips(&fprobe_graph_ops.ops, addrs, num, 1, 0);
+ }
+
++#ifdef CONFIG_MODULES
++
++#define FPROBE_IPS_BATCH_INIT 8
++/* instruction pointer address list */
++struct fprobe_addr_list {
++ int index;
++ int size;
++ unsigned long *addrs;
++};
++
++static int fprobe_addr_list_add(struct fprobe_addr_list *alist, unsigned long addr)
++{
++ unsigned long *addrs;
++
++ if (alist->index >= alist->size)
++ return -ENOMEM;
++
++ alist->addrs[alist->index++] = addr;
++ if (alist->index < alist->size)
++ return 0;
++
++ /* Expand the address list */
++ addrs = kcalloc(alist->size * 2, sizeof(*addrs), GFP_KERNEL);
++ if (!addrs)
++ return -ENOMEM;
++
++ memcpy(addrs, alist->addrs, alist->size * sizeof(*addrs));
++ alist->size *= 2;
++ kfree(alist->addrs);
++ alist->addrs = addrs;
++
++ return 0;
++}
++
++static void fprobe_remove_node_in_module(struct module *mod, struct hlist_head *head,
++ struct fprobe_addr_list *alist)
++{
++ struct fprobe_hlist_node *node;
++ int ret = 0;
++
++ hlist_for_each_entry_rcu(node, head, hlist) {
++ if (!within_module(node->addr, mod))
++ continue;
++ if (delete_fprobe_node(node))
++ continue;
++ /*
++		 * If we fail to update alist, just continue to update hlist.
++		 * Therefore, at least the user handler will not be hit anymore.
++ */
++ if (!ret)
++ ret = fprobe_addr_list_add(alist, node->addr);
++ }
++}
++
++/* Handle module unloading to manage fprobe_ip_table. */
++static int fprobe_module_callback(struct notifier_block *nb,
++ unsigned long val, void *data)
++{
++ struct fprobe_addr_list alist = {.size = FPROBE_IPS_BATCH_INIT};
++ struct module *mod = data;
++ int i;
++
++ if (val != MODULE_STATE_GOING)
++ return NOTIFY_DONE;
++
++ alist.addrs = kcalloc(alist.size, sizeof(*alist.addrs), GFP_KERNEL);
++ /* If failed to alloc memory, we can not remove ips from hash. */
++ if (!alist.addrs)
++ return NOTIFY_DONE;
++
++ mutex_lock(&fprobe_mutex);
++ for (i = 0; i < FPROBE_IP_TABLE_SIZE; i++)
++ fprobe_remove_node_in_module(mod, &fprobe_ip_table[i], &alist);
++
++ if (alist.index < alist.size && alist.index > 0)
++ ftrace_set_filter_ips(&fprobe_graph_ops.ops,
++ alist.addrs, alist.index, 1, 0);
++ mutex_unlock(&fprobe_mutex);
++
++ kfree(alist.addrs);
++
++ return NOTIFY_DONE;
++}
++
++static struct notifier_block fprobe_module_nb = {
++ .notifier_call = fprobe_module_callback,
++ .priority = 0,
++};
++
++static int __init init_fprobe_module(void)
++{
++ return register_module_notifier(&fprobe_module_nb);
++}
++early_initcall(init_fprobe_module);
++#endif
++
+ static int symbols_cmp(const void *a, const void *b)
+ {
+ const char **str_a = (const char **) a;
+@@ -445,6 +544,7 @@ struct filter_match_data {
+ size_t index;
+ size_t size;
+ unsigned long *addrs;
++ struct module **mods;
+ };
+
+ static int filter_match_callback(void *data, const char *name, unsigned long addr)
+@@ -458,30 +558,47 @@ static int filter_match_callback(void *data, const char *name, unsigned long add
+ if (!ftrace_location(addr))
+ return 0;
+
+- if (match->addrs)
+- match->addrs[match->index] = addr;
++ if (match->addrs) {
++ struct module *mod = __module_text_address(addr);
++
++ if (mod && !try_module_get(mod))
++ return 0;
+
++ match->mods[match->index] = mod;
++ match->addrs[match->index] = addr;
++ }
+ match->index++;
+ return match->index == match->size;
+ }
+
+ /*
+ * Make IP list from the filter/no-filter glob patterns.
+- * Return the number of matched symbols, or -ENOENT.
++ * Return the number of matched symbols, or errno.
++ * If @addrs == NULL, this just counts the number of matched symbols. If @addrs
++ * is passed with an array, we need to pass an @mods array of the same size
++ * to increment the module refcount for each symbol.
++ * This means we also need to call `module_put` for each element of @mods after
++ * using the @addrs.
+ */
+-static int ip_list_from_filter(const char *filter, const char *notfilter,
+- unsigned long *addrs, size_t size)
++static int get_ips_from_filter(const char *filter, const char *notfilter,
++ unsigned long *addrs, struct module **mods,
++ size_t size)
+ {
+ struct filter_match_data match = { .filter = filter, .notfilter = notfilter,
+- .index = 0, .size = size, .addrs = addrs};
++ .index = 0, .size = size, .addrs = addrs, .mods = mods};
+ int ret;
+
++ if (addrs && !mods)
++ return -EINVAL;
++
+ ret = kallsyms_on_each_symbol(filter_match_callback, &match);
+ if (ret < 0)
+ return ret;
+- ret = module_kallsyms_on_each_symbol(NULL, filter_match_callback, &match);
+- if (ret < 0)
+- return ret;
++ if (IS_ENABLED(CONFIG_MODULES)) {
++ ret = module_kallsyms_on_each_symbol(NULL, filter_match_callback, &match);
++ if (ret < 0)
++ return ret;
++ }
+
+ return match.index ?: -ENOENT;
+ }
+@@ -543,24 +660,35 @@ static int fprobe_init(struct fprobe *fp, unsigned long *addrs, int num)
+ */
+ int register_fprobe(struct fprobe *fp, const char *filter, const char *notfilter)
+ {
+- unsigned long *addrs;
+- int ret;
++ unsigned long *addrs __free(kfree) = NULL;
++ struct module **mods __free(kfree) = NULL;
++ int ret, num;
+
+ if (!fp || !filter)
+ return -EINVAL;
+
+- ret = ip_list_from_filter(filter, notfilter, NULL, FPROBE_IPS_MAX);
+- if (ret < 0)
+- return ret;
++ num = get_ips_from_filter(filter, notfilter, NULL, NULL, FPROBE_IPS_MAX);
++ if (num < 0)
++ return num;
+
+- addrs = kcalloc(ret, sizeof(unsigned long), GFP_KERNEL);
++ addrs = kcalloc(num, sizeof(*addrs), GFP_KERNEL);
+ if (!addrs)
+ return -ENOMEM;
+- ret = ip_list_from_filter(filter, notfilter, addrs, ret);
+- if (ret > 0)
+- ret = register_fprobe_ips(fp, addrs, ret);
+
+- kfree(addrs);
++ mods = kcalloc(num, sizeof(*mods), GFP_KERNEL);
++ if (!mods)
++ return -ENOMEM;
++
++ ret = get_ips_from_filter(filter, notfilter, addrs, mods, num);
++ if (ret < 0)
++ return ret;
++
++ ret = register_fprobe_ips(fp, addrs, ret);
++
++ for (int i = 0; i < num; i++) {
++ if (mods[i])
++ module_put(mods[i]);
++ }
+ return ret;
+ }
+ EXPORT_SYMBOL_GPL(register_fprobe);
+diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
+index fc88e0688daf09..62d300eee7eb81 100644
+--- a/kernel/trace/ftrace.c
++++ b/kernel/trace/ftrace.c
+@@ -3524,16 +3524,16 @@ int ftrace_startup_subops(struct ftrace_ops *ops, struct ftrace_ops *subops, int
+ ftrace_hash_empty(subops->func_hash->notrace_hash)) {
+ notrace_hash = EMPTY_HASH;
+ } else {
+- size_bits = max(ops->func_hash->filter_hash->size_bits,
+- subops->func_hash->filter_hash->size_bits);
++ size_bits = max(ops->func_hash->notrace_hash->size_bits,
++ subops->func_hash->notrace_hash->size_bits);
+ notrace_hash = alloc_ftrace_hash(size_bits);
+ if (!notrace_hash) {
+ free_ftrace_hash(filter_hash);
+ return -ENOMEM;
+ }
+
+- ret = intersect_hash(¬race_hash, ops->func_hash->filter_hash,
+- subops->func_hash->filter_hash);
++ ret = intersect_hash(¬race_hash, ops->func_hash->notrace_hash,
++ subops->func_hash->notrace_hash);
+ if (ret < 0) {
+ free_ftrace_hash(filter_hash);
+ free_ftrace_hash(notrace_hash);
+@@ -6853,6 +6853,7 @@ ftrace_graph_set_hash(struct ftrace_hash *hash, char *buffer)
+ }
+ }
+ }
++ cond_resched();
+ } while_for_each_ftrace_rec();
+
+ return fail ? -EINVAL : 0;
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index 510409f979923b..9b8ce8f4ff9b38 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -5963,7 +5963,7 @@ static void rb_update_meta_page(struct ring_buffer_per_cpu *cpu_buffer)
+ meta->read = cpu_buffer->read;
+
+ /* Some archs do not have data cache coherency between kernel and user-space */
+- flush_dcache_folio(virt_to_folio(cpu_buffer->meta_page));
++ flush_kernel_vmap_range(cpu_buffer->meta_page, PAGE_SIZE);
+ }
+
+ static void
+@@ -7278,7 +7278,8 @@ int ring_buffer_map_get_reader(struct trace_buffer *buffer, int cpu)
+
+ out:
+ /* Some archs do not have data cache coherency between kernel and user-space */
+- flush_dcache_folio(virt_to_folio(cpu_buffer->reader_page->page));
++ flush_kernel_vmap_range(cpu_buffer->reader_page->page,
++ buffer->subbuf_size + BUF_PAGE_HDR_SIZE);
+
+ rb_update_meta_page(cpu_buffer);
+
+diff --git a/kernel/trace/trace_eprobe.c b/kernel/trace/trace_eprobe.c
+index 82fd637cfc19ea..af9fa0632b5740 100644
+--- a/kernel/trace/trace_eprobe.c
++++ b/kernel/trace/trace_eprobe.c
+@@ -913,6 +913,8 @@ static int __trace_eprobe_create(int argc, const char *argv[])
+ }
+
+ if (argc - 2 > MAX_TRACE_ARGS) {
++ trace_probe_log_set_index(2);
++ trace_probe_log_err(0, TOO_MANY_ARGS);
+ ret = -E2BIG;
+ goto error;
+ }
+diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
+index b1f6d04f9fe992..ceeedcb5940bdb 100644
+--- a/kernel/trace/trace_events.c
++++ b/kernel/trace/trace_events.c
+@@ -797,7 +797,9 @@ static int __ftrace_event_enable_disable(struct trace_event_file *file,
+ clear_bit(EVENT_FILE_FL_RECORDED_TGID_BIT, &file->flags);
+ }
+
+- call->class->reg(call, TRACE_REG_UNREGISTER, file);
++ ret = call->class->reg(call, TRACE_REG_UNREGISTER, file);
++
++ WARN_ON_ONCE(ret);
+ }
+ /* If in SOFT_MODE, just set the SOFT_DISABLE_BIT, else clear it */
+ if (file->flags & EVENT_FILE_FL_SOFT_MODE)
+diff --git a/kernel/trace/trace_events_synth.c b/kernel/trace/trace_events_synth.c
+index 0330ecdfb9f123..88b2795847c743 100644
+--- a/kernel/trace/trace_events_synth.c
++++ b/kernel/trace/trace_events_synth.c
+@@ -370,7 +370,6 @@ static enum print_line_t print_synth_event(struct trace_iterator *iter,
+ union trace_synth_field *data = &entry->fields[n_u64];
+
+ trace_seq_printf(s, print_fmt, se->fields[i]->name,
+- STR_VAR_LEN_MAX,
+ (char *)entry + data->as_dynamic.offset,
+ i == se->n_fields - 1 ? "" : " ");
+ n_u64++;
+diff --git a/kernel/trace/trace_fprobe.c b/kernel/trace/trace_fprobe.c
+index 985ff98272da8f..b40fa59159ac52 100644
+--- a/kernel/trace/trace_fprobe.c
++++ b/kernel/trace/trace_fprobe.c
+@@ -919,9 +919,15 @@ static void __find_tracepoint_module_cb(struct tracepoint *tp, struct module *mo
+ struct __find_tracepoint_cb_data *data = priv;
+
+ if (!data->tpoint && !strcmp(data->tp_name, tp->name)) {
+- data->tpoint = tp;
+- if (!data->mod)
++ /* If module is not specified, try getting module refcount. */
++ if (!data->mod && mod) {
++ /* If failed to get refcount, ignore this tracepoint. */
++ if (!try_module_get(mod))
++ return;
++
+ data->mod = mod;
++ }
++ data->tpoint = tp;
+ }
+ }
+
+@@ -933,7 +939,11 @@ static void __find_tracepoint_cb(struct tracepoint *tp, void *priv)
+ data->tpoint = tp;
+ }
+
+-/* Find a tracepoint from kernel and module. */
++/*
++ * Find a tracepoint from the kernel and modules. If the tracepoint is in a module,
++ * the module's refcount is incremented and returned as *@tp_mod. Thus, if it is
++ * not NULL, the caller must call module_put(*tp_mod) after using the tracepoint.
++ */
+ static struct tracepoint *find_tracepoint(const char *tp_name,
+ struct module **tp_mod)
+ {
+@@ -962,7 +972,10 @@ static void reenable_trace_fprobe(struct trace_fprobe *tf)
+ }
+ }
+
+-/* Find a tracepoint from specified module. */
++/*
++ * Find a tracepoint from the specified module. In this case, this does not get the
++ * module's refcount. The caller must ensure the module is not freed.
++ */
+ static struct tracepoint *find_tracepoint_in_module(struct module *mod,
+ const char *tp_name)
+ {
+@@ -1169,11 +1182,6 @@ static int trace_fprobe_create_internal(int argc, const char *argv[],
+ if (is_tracepoint) {
+ ctx->flags |= TPARG_FL_TPOINT;
+ tpoint = find_tracepoint(symbol, &tp_mod);
+- /* lock module until register this tprobe. */
+- if (tp_mod && !try_module_get(tp_mod)) {
+- tpoint = NULL;
+- tp_mod = NULL;
+- }
+ if (tpoint) {
+ ctx->funcname = kallsyms_lookup(
+ (unsigned long)tpoint->probestub,
+@@ -1199,8 +1207,11 @@ static int trace_fprobe_create_internal(int argc, const char *argv[],
+ argc = new_argc;
+ argv = new_argv;
+ }
+- if (argc > MAX_TRACE_ARGS)
++ if (argc > MAX_TRACE_ARGS) {
++ trace_probe_log_set_index(2);
++ trace_probe_log_err(0, TOO_MANY_ARGS);
+ return -E2BIG;
++ }
+
+ ret = traceprobe_expand_dentry_args(argc, argv, &dbuf);
+ if (ret)
+diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
+index d8d5f18a141adc..8287b175667f33 100644
+--- a/kernel/trace/trace_kprobe.c
++++ b/kernel/trace/trace_kprobe.c
+@@ -1007,8 +1007,11 @@ static int trace_kprobe_create_internal(int argc, const char *argv[],
+ argc = new_argc;
+ argv = new_argv;
+ }
+- if (argc > MAX_TRACE_ARGS)
++ if (argc > MAX_TRACE_ARGS) {
++ trace_probe_log_set_index(2);
++ trace_probe_log_err(0, TOO_MANY_ARGS);
+ return -E2BIG;
++ }
+
+ ret = traceprobe_expand_dentry_args(argc, argv, &dbuf);
+ if (ret)
+diff --git a/kernel/trace/trace_probe.c b/kernel/trace/trace_probe.c
+index 8f58ee1e8858af..2eeecb6c95eea5 100644
+--- a/kernel/trace/trace_probe.c
++++ b/kernel/trace/trace_probe.c
+@@ -770,6 +770,10 @@ static int check_prepare_btf_string_fetch(char *typename,
+
+ #ifdef CONFIG_HAVE_FUNCTION_ARG_ACCESS_API
+
++/*
++ * Add the entry code to store the 'argnum'th parameter and return the offset
++ * in the entry data buffer where the data will be stored.
++ */
+ static int __store_entry_arg(struct trace_probe *tp, int argnum)
+ {
+ struct probe_entry_arg *earg = tp->entry_arg;
+@@ -793,6 +797,20 @@ static int __store_entry_arg(struct trace_probe *tp, int argnum)
+ tp->entry_arg = earg;
+ }
+
++ /*
++ * The entry code array is repeating the pair of
++ * [FETCH_OP_ARG(argnum)][FETCH_OP_ST_EDATA(offset of entry data buffer)]
++ * and the rest of entries are filled with [FETCH_OP_END].
++ *
++ * To reduce the redundant function parameter fetching, we scan the entry
++ * code array to find the FETCH_OP_ARG which already fetches the 'argnum'
++ * parameter. If it doesn't match, update 'offset' to find the last
++ * offset.
++ * If we find the FETCH_OP_END without matching FETCH_OP_ARG entry, we
++ * will save the entry with FETCH_OP_ARG and FETCH_OP_ST_EDATA, and
++ * return data offset so that caller can find the data offset in the entry
++ * data buffer.
++ */
+ offset = 0;
+ for (i = 0; i < earg->size - 1; i++) {
+ switch (earg->code[i].op) {
+@@ -826,6 +844,16 @@ int traceprobe_get_entry_data_size(struct trace_probe *tp)
+ if (!earg)
+ return 0;
+
++ /*
++ * earg->code[] array has an operation sequence which is run in
++	 * The sequence is stopped by FETCH_OP_END and each data is stored in
++	 * the entry data buffer by FETCH_OP_ST_EDATA. The FETCH_OP_ST_EDATA
++ * the entry data buffer by FETCH_OP_ST_EDATA. The FETCH_OP_ST_EDATA
++ * stores the data at the data buffer + its offset, and all data are
++ * "unsigned long" size. The offset must be increased when a data is
++ * stored. Thus we need to find the last FETCH_OP_ST_EDATA in the
++ * code array.
++ */
+ for (i = 0; i < earg->size; i++) {
+ switch (earg->code[i].op) {
+ case FETCH_OP_END:
+diff --git a/kernel/trace/trace_probe.h b/kernel/trace/trace_probe.h
+index 96792bc4b09244..854e5668f5ee5a 100644
+--- a/kernel/trace/trace_probe.h
++++ b/kernel/trace/trace_probe.h
+@@ -545,6 +545,7 @@ extern int traceprobe_define_arg_fields(struct trace_event_call *event_call,
+ C(BAD_BTF_TID, "Failed to get BTF type info."),\
+ C(BAD_TYPE4STR, "This type does not fit for string."),\
+ C(NEED_STRING_TYPE, "$comm and immediate-string only accepts string type"),\
++ C(TOO_MANY_ARGS, "Too many arguments are specified"), \
+ C(TOO_MANY_EARGS, "Too many entry arguments specified"),
+
+ #undef C
+diff --git a/kernel/trace/trace_uprobe.c b/kernel/trace/trace_uprobe.c
+index ccc762fbb69cd1..3386439ec9f674 100644
+--- a/kernel/trace/trace_uprobe.c
++++ b/kernel/trace/trace_uprobe.c
+@@ -562,8 +562,14 @@ static int __trace_uprobe_create(int argc, const char **argv)
+
+ if (argc < 2)
+ return -ECANCELED;
+- if (argc - 2 > MAX_TRACE_ARGS)
++
++ trace_probe_log_init("trace_uprobe", argc, argv);
++
++ if (argc - 2 > MAX_TRACE_ARGS) {
++ trace_probe_log_set_index(2);
++ trace_probe_log_err(0, TOO_MANY_ARGS);
+ return -E2BIG;
++ }
+
+ if (argv[0][1] == ':')
+ event = &argv[0][2];
+@@ -582,7 +588,6 @@ static int __trace_uprobe_create(int argc, const char **argv)
+ return -ECANCELED;
+ }
+
+- trace_probe_log_init("trace_uprobe", argc, argv);
+ trace_probe_log_set_index(1); /* filename is the 2nd argument */
+
+ *arg++ = '\0';
+diff --git a/lib/Makefile b/lib/Makefile
+index d5cfc7afbbb821..4f3d00a2fd6592 100644
+--- a/lib/Makefile
++++ b/lib/Makefile
+@@ -5,6 +5,11 @@
+
+ ccflags-remove-$(CONFIG_FUNCTION_TRACER) += $(CC_FLAGS_FTRACE)
+
++# Branch profiling isn't noinstr-safe
++ifdef CONFIG_TRACE_BRANCH_PROFILING
++CFLAGS_smp_processor_id.o += -DDISABLE_BRANCH_PROFILING
++endif
++
+ # These files are disabled because they produce lots of non-interesting and/or
+ # flaky coverage that is not a function of syscall inputs. For example,
+ # rbtree can be global and individual rotations don't correlate with inputs.
+diff --git a/lib/sg_split.c b/lib/sg_split.c
+index 60a0babebf2efc..0f89aab5c6715b 100644
+--- a/lib/sg_split.c
++++ b/lib/sg_split.c
+@@ -88,8 +88,6 @@ static void sg_split_phys(struct sg_splitter *splitters, const int nb_splits)
+ if (!j) {
+ out_sg->offset += split->skip_sg0;
+ out_sg->length -= split->skip_sg0;
+- } else {
+- out_sg->offset = 0;
+ }
+ sg_dma_address(out_sg) = 0;
+ sg_dma_len(out_sg) = 0;
+diff --git a/lib/zstd/common/portability_macros.h b/lib/zstd/common/portability_macros.h
+index 0e3b2c0a527db7..0dde8bf56595ea 100644
+--- a/lib/zstd/common/portability_macros.h
++++ b/lib/zstd/common/portability_macros.h
+@@ -55,7 +55,7 @@
+ #ifndef DYNAMIC_BMI2
+ #if ((defined(__clang__) && __has_attribute(__target__)) \
+ || (defined(__GNUC__) \
+- && (__GNUC__ >= 5 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 8)))) \
++ && (__GNUC__ >= 11))) \
+ && (defined(__x86_64__) || defined(_M_X64)) \
+ && !defined(__BMI2__)
+ # define DYNAMIC_BMI2 1
+diff --git a/mm/damon/core.c b/mm/damon/core.c
+index 384935ef4e65e6..dc8f94fe7c3bc8 100644
+--- a/mm/damon/core.c
++++ b/mm/damon/core.c
+@@ -1856,6 +1856,7 @@ static void kdamond_apply_schemes(struct damon_ctx *c)
+ s->next_apply_sis = c->passed_sample_intervals +
+ (s->apply_interval_us ? s->apply_interval_us :
+ c->attrs.aggr_interval) / sample_interval;
++ s->last_applied = NULL;
+ }
+ }
+
+diff --git a/mm/damon/ops-common.c b/mm/damon/ops-common.c
+index d25d99cb5f2bb9..d511be201c4c9e 100644
+--- a/mm/damon/ops-common.c
++++ b/mm/damon/ops-common.c
+@@ -24,7 +24,7 @@ struct folio *damon_get_folio(unsigned long pfn)
+ struct page *page = pfn_to_online_page(pfn);
+ struct folio *folio;
+
+- if (!page || PageTail(page))
++ if (!page)
+ return NULL;
+
+ folio = page_folio(page);
+diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c
+index c834aa2178352d..69afe1933e2944 100644
+--- a/mm/damon/paddr.c
++++ b/mm/damon/paddr.c
+@@ -246,6 +246,17 @@ static bool damos_pa_filter_out(struct damos *scheme, struct folio *folio)
+ return false;
+ }
+
++static bool damon_pa_invalid_damos_folio(struct folio *folio, struct damos *s)
++{
++ if (!folio)
++ return true;
++ if (folio == s->last_applied) {
++ folio_put(folio);
++ return true;
++ }
++ return false;
++}
++
+ static unsigned long damon_pa_pageout(struct damon_region *r, struct damos *s,
+ unsigned long *sz_filter_passed)
+ {
+@@ -253,6 +264,7 @@ static unsigned long damon_pa_pageout(struct damon_region *r, struct damos *s,
+ LIST_HEAD(folio_list);
+ bool install_young_filter = true;
+ struct damos_filter *filter;
++ struct folio *folio;
+
+ /* check access in page level again by default */
+ damos_for_each_filter(filter, s) {
+@@ -269,11 +281,13 @@ static unsigned long damon_pa_pageout(struct damon_region *r, struct damos *s,
+ damos_add_filter(s, filter);
+ }
+
+- for (addr = r->ar.start; addr < r->ar.end; addr += PAGE_SIZE) {
+- struct folio *folio = damon_get_folio(PHYS_PFN(addr));
+-
+- if (!folio)
++ addr = r->ar.start;
++ while (addr < r->ar.end) {
++ folio = damon_get_folio(PHYS_PFN(addr));
++ if (damon_pa_invalid_damos_folio(folio, s)) {
++ addr += PAGE_SIZE;
+ continue;
++ }
+
+ if (damos_pa_filter_out(s, folio))
+ goto put_folio;
+@@ -289,12 +303,14 @@ static unsigned long damon_pa_pageout(struct damon_region *r, struct damos *s,
+ else
+ list_add(&folio->lru, &folio_list);
+ put_folio:
++ addr += folio_size(folio);
+ folio_put(folio);
+ }
+ if (install_young_filter)
+ damos_destroy_filter(filter);
+ applied = reclaim_pages(&folio_list);
+ cond_resched();
++ s->last_applied = folio;
+ return applied * PAGE_SIZE;
+ }
+
+@@ -303,12 +319,15 @@ static inline unsigned long damon_pa_mark_accessed_or_deactivate(
+ unsigned long *sz_filter_passed)
+ {
+ unsigned long addr, applied = 0;
++ struct folio *folio;
+
+- for (addr = r->ar.start; addr < r->ar.end; addr += PAGE_SIZE) {
+- struct folio *folio = damon_get_folio(PHYS_PFN(addr));
+-
+- if (!folio)
++ addr = r->ar.start;
++ while (addr < r->ar.end) {
++ folio = damon_get_folio(PHYS_PFN(addr));
++ if (damon_pa_invalid_damos_folio(folio, s)) {
++ addr += PAGE_SIZE;
+ continue;
++ }
+
+ if (damos_pa_filter_out(s, folio))
+ goto put_folio;
+@@ -321,8 +340,10 @@ static inline unsigned long damon_pa_mark_accessed_or_deactivate(
+ folio_deactivate(folio);
+ applied += folio_nr_pages(folio);
+ put_folio:
++ addr += folio_size(folio);
+ folio_put(folio);
+ }
++ s->last_applied = folio;
+ return applied * PAGE_SIZE;
+ }
+
+@@ -466,12 +487,15 @@ static unsigned long damon_pa_migrate(struct damon_region *r, struct damos *s,
+ {
+ unsigned long addr, applied;
+ LIST_HEAD(folio_list);
++ struct folio *folio;
+
+- for (addr = r->ar.start; addr < r->ar.end; addr += PAGE_SIZE) {
+- struct folio *folio = damon_get_folio(PHYS_PFN(addr));
+-
+- if (!folio)
++ addr = r->ar.start;
++ while (addr < r->ar.end) {
++ folio = damon_get_folio(PHYS_PFN(addr));
++ if (damon_pa_invalid_damos_folio(folio, s)) {
++ addr += PAGE_SIZE;
+ continue;
++ }
+
+ if (damos_pa_filter_out(s, folio))
+ goto put_folio;
+@@ -482,10 +506,12 @@ static unsigned long damon_pa_migrate(struct damon_region *r, struct damos *s,
+ goto put_folio;
+ list_add(&folio->lru, &folio_list);
+ put_folio:
++ addr += folio_size(folio);
+ folio_put(folio);
+ }
+ applied = damon_pa_migrate_pages(&folio_list, s->target_nid);
+ cond_resched();
++ s->last_applied = folio;
+ return applied * PAGE_SIZE;
+ }
+
+@@ -503,15 +529,15 @@ static unsigned long damon_pa_stat(struct damon_region *r, struct damos *s,
+ {
+ unsigned long addr;
+ LIST_HEAD(folio_list);
++ struct folio *folio;
+
+ if (!damon_pa_scheme_has_filter(s))
+ return 0;
+
+ addr = r->ar.start;
+ while (addr < r->ar.end) {
+- struct folio *folio = damon_get_folio(PHYS_PFN(addr));
+-
+- if (!folio) {
++ folio = damon_get_folio(PHYS_PFN(addr));
++ if (damon_pa_invalid_damos_folio(folio, s)) {
+ addr += PAGE_SIZE;
+ continue;
+ }
+@@ -521,6 +547,7 @@ static unsigned long damon_pa_stat(struct damon_region *r, struct damos *s,
+ addr += folio_size(folio);
+ folio_put(folio);
+ }
++ s->last_applied = folio;
+ return 0;
+ }
+
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index 318624c9658444..44b8feb83402b3 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -4912,7 +4912,7 @@ static const struct ctl_table hugetlb_table[] = {
+ },
+ };
+
+-static void hugetlb_sysctl_init(void)
++static void __init hugetlb_sysctl_init(void)
+ {
+ register_sysctl_init("vm", hugetlb_table);
+ }
+diff --git a/mm/memory-failure.c b/mm/memory-failure.c
+index 327e02fdc029da..b04fb434b6cf1f 100644
+--- a/mm/memory-failure.c
++++ b/mm/memory-failure.c
+@@ -881,12 +881,17 @@ static int kill_accessing_process(struct task_struct *p, unsigned long pfn,
+ mmap_read_lock(p->mm);
+ ret = walk_page_range(p->mm, 0, TASK_SIZE, &hwpoison_walk_ops,
+ (void *)&priv);
++ /*
++ * ret = 1 when CMCI wins, regardless of whether try_to_unmap()
++ * succeeds or fails, then kill the process with SIGBUS.
++ * ret = 0 when poison page is a clean page and it's dropped, no
++ * SIGBUS is needed.
++ */
+ if (ret == 1 && priv.tk.addr)
+ kill_proc(&priv.tk, pfn, flags);
+- else
+- ret = 0;
+ mmap_read_unlock(p->mm);
+- return ret > 0 ? -EHWPOISON : -EFAULT;
++
++ return ret > 0 ? -EHWPOISON : 0;
+ }
+
+ /*
+diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
+index 16cf9e17077e35..75401866fb760c 100644
+--- a/mm/memory_hotplug.c
++++ b/mm/memory_hotplug.c
+@@ -1828,8 +1828,7 @@ static void do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
+ if (unlikely(page_folio(page) != folio))
+ goto put_folio;
+
+- if (folio_test_hwpoison(folio) ||
+- (folio_test_large(folio) && folio_test_has_hwpoisoned(folio))) {
++ if (folio_contain_hwpoisoned_page(folio)) {
+ if (WARN_ON(folio_test_lru(folio)))
+ folio_isolate_lru(folio);
+ if (folio_mapped(folio)) {
+diff --git a/mm/mremap.c b/mm/mremap.c
+index cff7f552f90904..c3e4c86d0b8d28 100644
+--- a/mm/mremap.c
++++ b/mm/mremap.c
+@@ -705,8 +705,8 @@ static unsigned long move_vma(struct vm_area_struct *vma,
+ unsigned long vm_flags = vma->vm_flags;
+ unsigned long new_pgoff;
+ unsigned long moved_len;
+- unsigned long account_start = 0;
+- unsigned long account_end = 0;
++ bool account_start = false;
++ bool account_end = false;
+ unsigned long hiwater_vm;
+ int err = 0;
+ bool need_rmap_locks;
+@@ -790,9 +790,9 @@ static unsigned long move_vma(struct vm_area_struct *vma,
+ if (vm_flags & VM_ACCOUNT && !(flags & MREMAP_DONTUNMAP)) {
+ vm_flags_clear(vma, VM_ACCOUNT);
+ if (vma->vm_start < old_addr)
+- account_start = vma->vm_start;
++ account_start = true;
+ if (vma->vm_end > old_addr + old_len)
+- account_end = vma->vm_end;
++ account_end = true;
+ }
+
+ /*
+@@ -832,7 +832,7 @@ static unsigned long move_vma(struct vm_area_struct *vma,
+ /* OOM: unable to split vma, just get accounts right */
+ if (vm_flags & VM_ACCOUNT && !(flags & MREMAP_DONTUNMAP))
+ vm_acct_memory(old_len >> PAGE_SHIFT);
+- account_start = account_end = 0;
++ account_start = account_end = false;
+ }
+
+ if (vm_flags & VM_LOCKED) {
+diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
+index 81839a9e74f16d..367209e6583070 100644
+--- a/mm/page_vma_mapped.c
++++ b/mm/page_vma_mapped.c
+@@ -84,6 +84,7 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw, pmd_t *pmdvalp,
+ * mapped at the @pvmw->pte
+ * @pvmw: page_vma_mapped_walk struct, includes a pair pte and pfn range
+ * for checking
++ * @pte_nr: the number of small pages described by @pvmw->pte.
+ *
+ * page_vma_mapped_walk() found a place where pfn range is *potentially*
+ * mapped. check_pte() has to validate this.
+@@ -100,7 +101,7 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw, pmd_t *pmdvalp,
+ * Otherwise, return false.
+ *
+ */
+-static bool check_pte(struct page_vma_mapped_walk *pvmw)
++static bool check_pte(struct page_vma_mapped_walk *pvmw, unsigned long pte_nr)
+ {
+ unsigned long pfn;
+ pte_t ptent = ptep_get(pvmw->pte);
+@@ -133,7 +134,11 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw)
+ pfn = pte_pfn(ptent);
+ }
+
+- return (pfn - pvmw->pfn) < pvmw->nr_pages;
++ if ((pfn + pte_nr - 1) < pvmw->pfn)
++ return false;
++ if (pfn > (pvmw->pfn + pvmw->nr_pages - 1))
++ return false;
++ return true;
+ }
+
+ /* Returns true if the two ranges overlap. Careful to not overflow. */
+@@ -208,7 +213,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
+ return false;
+
+ pvmw->ptl = huge_pte_lock(hstate, mm, pvmw->pte);
+- if (!check_pte(pvmw))
++ if (!check_pte(pvmw, pages_per_huge_page(hstate)))
+ return not_found(pvmw);
+ return true;
+ }
+@@ -291,7 +296,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
+ goto next_pte;
+ }
+ this_pte:
+- if (check_pte(pvmw))
++ if (check_pte(pvmw, 1))
+ return true;
+ next_pte:
+ do {
+diff --git a/mm/rmap.c b/mm/rmap.c
+index c6c4d4ea29a7e3..17fbfa61f7efb7 100644
+--- a/mm/rmap.c
++++ b/mm/rmap.c
+@@ -2499,7 +2499,7 @@ static bool folio_make_device_exclusive(struct folio *folio,
+ * Restrict to anonymous folios for now to avoid potential writeback
+ * issues.
+ */
+- if (!folio_test_anon(folio))
++ if (!folio_test_anon(folio) || folio_test_hugetlb(folio))
+ return false;
+
+ rmap_walk(folio, &rwc);
+diff --git a/mm/shmem.c b/mm/shmem.c
+index 1ede0800e8461b..1dd513d82332d7 100644
+--- a/mm/shmem.c
++++ b/mm/shmem.c
+@@ -3302,8 +3302,7 @@ shmem_write_begin(struct file *file, struct address_space *mapping,
+ if (ret)
+ return ret;
+
+- if (folio_test_hwpoison(folio) ||
+- (folio_test_large(folio) && folio_test_has_hwpoisoned(folio))) {
++ if (folio_contain_hwpoisoned_page(folio)) {
+ folio_unlock(folio);
+ folio_put(folio);
+ return -EIO;
+diff --git a/mm/vmscan.c b/mm/vmscan.c
+index c767d71c43d7d2..fada3b35aff834 100644
+--- a/mm/vmscan.c
++++ b/mm/vmscan.c
+@@ -7580,7 +7580,7 @@ int node_reclaim(struct pglist_data *pgdat, gfp_t gfp_mask, unsigned int order)
+ return NODE_RECLAIM_NOSCAN;
+
+ ret = __node_reclaim(pgdat, gfp_mask, order);
+- clear_bit(PGDAT_RECLAIM_LOCKED, &pgdat->flags);
++ clear_bit_unlock(PGDAT_RECLAIM_LOCKED, &pgdat->flags);
+
+ if (ret)
+ count_vm_event(PGSCAN_ZONE_RECLAIM_SUCCESS);
+diff --git a/net/8021q/vlan_dev.c b/net/8021q/vlan_dev.c
+index 91d134961357c4..ee7186e4d353b1 100644
+--- a/net/8021q/vlan_dev.c
++++ b/net/8021q/vlan_dev.c
+@@ -273,17 +273,6 @@ static int vlan_dev_open(struct net_device *dev)
+ goto out;
+ }
+
+- if (dev->flags & IFF_ALLMULTI) {
+- err = dev_set_allmulti(real_dev, 1);
+- if (err < 0)
+- goto del_unicast;
+- }
+- if (dev->flags & IFF_PROMISC) {
+- err = dev_set_promiscuity(real_dev, 1);
+- if (err < 0)
+- goto clear_allmulti;
+- }
+-
+ ether_addr_copy(vlan->real_dev_addr, real_dev->dev_addr);
+
+ if (vlan->flags & VLAN_FLAG_GVRP)
+@@ -297,12 +286,6 @@ static int vlan_dev_open(struct net_device *dev)
+ netif_carrier_on(dev);
+ return 0;
+
+-clear_allmulti:
+- if (dev->flags & IFF_ALLMULTI)
+- dev_set_allmulti(real_dev, -1);
+-del_unicast:
+- if (!ether_addr_equal(dev->dev_addr, real_dev->dev_addr))
+- dev_uc_del(real_dev, dev->dev_addr);
+ out:
+ netif_carrier_off(dev);
+ return err;
+@@ -315,10 +298,6 @@ static int vlan_dev_stop(struct net_device *dev)
+
+ dev_mc_unsync(real_dev, dev);
+ dev_uc_unsync(real_dev, dev);
+- if (dev->flags & IFF_ALLMULTI)
+- dev_set_allmulti(real_dev, -1);
+- if (dev->flags & IFF_PROMISC)
+- dev_set_promiscuity(real_dev, -1);
+
+ if (!ether_addr_equal(dev->dev_addr, real_dev->dev_addr))
+ dev_uc_del(real_dev, dev->dev_addr);
+@@ -490,12 +469,10 @@ static void vlan_dev_change_rx_flags(struct net_device *dev, int change)
+ {
+ struct net_device *real_dev = vlan_dev_priv(dev)->real_dev;
+
+- if (dev->flags & IFF_UP) {
+- if (change & IFF_ALLMULTI)
+- dev_set_allmulti(real_dev, dev->flags & IFF_ALLMULTI ? 1 : -1);
+- if (change & IFF_PROMISC)
+- dev_set_promiscuity(real_dev, dev->flags & IFF_PROMISC ? 1 : -1);
+- }
++ if (change & IFF_ALLMULTI)
++ dev_set_allmulti(real_dev, dev->flags & IFF_ALLMULTI ? 1 : -1);
++ if (change & IFF_PROMISC)
++ dev_set_promiscuity(real_dev, dev->flags & IFF_PROMISC ? 1 : -1);
+ }
+
+ static void vlan_dev_set_rx_mode(struct net_device *vlan_dev)
+diff --git a/net/core/filter.c b/net/core/filter.c
+index 2ec162dd83c463..b0df9b7d16d3f3 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -218,24 +218,36 @@ BPF_CALL_3(bpf_skb_get_nlattr_nest, struct sk_buff *, skb, u32, a, u32, x)
+ return 0;
+ }
+
++static int bpf_skb_load_helper_convert_offset(const struct sk_buff *skb, int offset)
++{
++ if (likely(offset >= 0))
++ return offset;
++
++ if (offset >= SKF_NET_OFF)
++ return offset - SKF_NET_OFF + skb_network_offset(skb);
++
++ if (offset >= SKF_LL_OFF && skb_mac_header_was_set(skb))
++ return offset - SKF_LL_OFF + skb_mac_offset(skb);
++
++ return INT_MIN;
++}
++
+ BPF_CALL_4(bpf_skb_load_helper_8, const struct sk_buff *, skb, const void *,
+ data, int, headlen, int, offset)
+ {
+- u8 tmp, *ptr;
++ u8 tmp;
+ const int len = sizeof(tmp);
+
+- if (offset >= 0) {
+- if (headlen - offset >= len)
+- return *(u8 *)(data + offset);
+- if (!skb_copy_bits(skb, offset, &tmp, sizeof(tmp)))
+- return tmp;
+- } else {
+- ptr = bpf_internal_load_pointer_neg_helper(skb, offset, len);
+- if (likely(ptr))
+- return *(u8 *)ptr;
+- }
++ offset = bpf_skb_load_helper_convert_offset(skb, offset);
++ if (offset == INT_MIN)
++ return -EFAULT;
+
+- return -EFAULT;
++ if (headlen - offset >= len)
++ return *(u8 *)(data + offset);
++ if (!skb_copy_bits(skb, offset, &tmp, sizeof(tmp)))
++ return tmp;
++ else
++ return -EFAULT;
+ }
+
+ BPF_CALL_2(bpf_skb_load_helper_8_no_cache, const struct sk_buff *, skb,
+@@ -248,21 +260,19 @@ BPF_CALL_2(bpf_skb_load_helper_8_no_cache, const struct sk_buff *, skb,
+ BPF_CALL_4(bpf_skb_load_helper_16, const struct sk_buff *, skb, const void *,
+ data, int, headlen, int, offset)
+ {
+- __be16 tmp, *ptr;
++ __be16 tmp;
+ const int len = sizeof(tmp);
+
+- if (offset >= 0) {
+- if (headlen - offset >= len)
+- return get_unaligned_be16(data + offset);
+- if (!skb_copy_bits(skb, offset, &tmp, sizeof(tmp)))
+- return be16_to_cpu(tmp);
+- } else {
+- ptr = bpf_internal_load_pointer_neg_helper(skb, offset, len);
+- if (likely(ptr))
+- return get_unaligned_be16(ptr);
+- }
++ offset = bpf_skb_load_helper_convert_offset(skb, offset);
++ if (offset == INT_MIN)
++ return -EFAULT;
+
+- return -EFAULT;
++ if (headlen - offset >= len)
++ return get_unaligned_be16(data + offset);
++ if (!skb_copy_bits(skb, offset, &tmp, sizeof(tmp)))
++ return be16_to_cpu(tmp);
++ else
++ return -EFAULT;
+ }
+
+ BPF_CALL_2(bpf_skb_load_helper_16_no_cache, const struct sk_buff *, skb,
+@@ -275,21 +285,19 @@ BPF_CALL_2(bpf_skb_load_helper_16_no_cache, const struct sk_buff *, skb,
+ BPF_CALL_4(bpf_skb_load_helper_32, const struct sk_buff *, skb, const void *,
+ data, int, headlen, int, offset)
+ {
+- __be32 tmp, *ptr;
++ __be32 tmp;
+ const int len = sizeof(tmp);
+
+- if (likely(offset >= 0)) {
+- if (headlen - offset >= len)
+- return get_unaligned_be32(data + offset);
+- if (!skb_copy_bits(skb, offset, &tmp, sizeof(tmp)))
+- return be32_to_cpu(tmp);
+- } else {
+- ptr = bpf_internal_load_pointer_neg_helper(skb, offset, len);
+- if (likely(ptr))
+- return get_unaligned_be32(ptr);
+- }
++ offset = bpf_skb_load_helper_convert_offset(skb, offset);
++ if (offset == INT_MIN)
++ return -EFAULT;
+
+- return -EFAULT;
++ if (headlen - offset >= len)
++ return get_unaligned_be32(data + offset);
++ if (!skb_copy_bits(skb, offset, &tmp, sizeof(tmp)))
++ return be32_to_cpu(tmp);
++ else
++ return -EFAULT;
+ }
+
+ BPF_CALL_2(bpf_skb_load_helper_32_no_cache, const struct sk_buff *, skb,
+diff --git a/net/core/page_pool.c b/net/core/page_pool.c
+index f5e908c9e7ad8f..ede82c610936e5 100644
+--- a/net/core/page_pool.c
++++ b/net/core/page_pool.c
+@@ -1104,7 +1104,13 @@ static void page_pool_release_retry(struct work_struct *wq)
+ int inflight;
+
+ inflight = page_pool_release(pool);
+- if (!inflight)
++ /* In rare cases, a driver bug may cause inflight to go negative.
++ * Don't reschedule release if inflight is 0 or negative.
++ * - If 0, the page_pool has been destroyed
++ * - if negative, we will never recover
++ * in both cases no reschedule is necessary.
++ */
++ if (inflight <= 0)
+ return;
+
+ /* Periodic warning for page pools the user can't see */
+diff --git a/net/core/page_pool_user.c b/net/core/page_pool_user.c
+index 6677e0c2e25650..d5e214c30c310f 100644
+--- a/net/core/page_pool_user.c
++++ b/net/core/page_pool_user.c
+@@ -356,7 +356,7 @@ void page_pool_unlist(struct page_pool *pool)
+ int page_pool_check_memory_provider(struct net_device *dev,
+ struct netdev_rx_queue *rxq)
+ {
+- struct net_devmem_dmabuf_binding *binding = rxq->mp_params.mp_priv;
++ void *binding = rxq->mp_params.mp_priv;
+ struct page_pool *pool;
+ struct hlist_node *n;
+
+diff --git a/net/core/sock.c b/net/core/sock.c
+index 6c0e87f97fa4a7..45df7865521441 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -2115,6 +2115,8 @@ int sk_getsockopt(struct sock *sk, int level, int optname,
+ */
+ static inline void sock_lock_init(struct sock *sk)
+ {
++ sk_owner_clear(sk);
++
+ if (sk->sk_kern_sock)
+ sock_lock_init_class_and_name(
+ sk,
+@@ -2211,6 +2213,9 @@ static void sk_prot_free(struct proto *prot, struct sock *sk)
+ cgroup_sk_free(&sk->sk_cgrp_data);
+ mem_cgroup_sk_free(sk);
+ security_sk_free(sk);
++
++ sk_owner_put(sk);
++
+ if (slab != NULL)
+ kmem_cache_free(slab, sk);
+ else
+diff --git a/net/ethtool/cmis.h b/net/ethtool/cmis.h
+index 1e790413db0e8d..4a9a946cabf05d 100644
+--- a/net/ethtool/cmis.h
++++ b/net/ethtool/cmis.h
+@@ -101,7 +101,6 @@ struct ethtool_cmis_cdb_rpl {
+ };
+
+ u32 ethtool_cmis_get_max_lpl_size(u8 num_of_byte_octs);
+-u32 ethtool_cmis_get_max_epl_size(u8 num_of_byte_octs);
+
+ void ethtool_cmis_cdb_compose_args(struct ethtool_cmis_cdb_cmd_args *args,
+ enum ethtool_cmis_cdb_cmd_id cmd, u8 *lpl,
+diff --git a/net/ethtool/cmis_cdb.c b/net/ethtool/cmis_cdb.c
+index d159dc121bde58..0e2691ccb0df38 100644
+--- a/net/ethtool/cmis_cdb.c
++++ b/net/ethtool/cmis_cdb.c
+@@ -16,15 +16,6 @@ u32 ethtool_cmis_get_max_lpl_size(u8 num_of_byte_octs)
+ return 8 * (1 + min_t(u8, num_of_byte_octs, 15));
+ }
+
+-/* For accessing the EPL field on page 9Fh, the allowable length extension is
+- * min(i, 255) byte octets where i specifies the allowable additional number of
+- * byte octets in a READ or a WRITE.
+- */
+-u32 ethtool_cmis_get_max_epl_size(u8 num_of_byte_octs)
+-{
+- return 8 * (1 + min_t(u8, num_of_byte_octs, 255));
+-}
+-
+ void ethtool_cmis_cdb_compose_args(struct ethtool_cmis_cdb_cmd_args *args,
+ enum ethtool_cmis_cdb_cmd_id cmd, u8 *lpl,
+ u8 lpl_len, u8 *epl, u16 epl_len,
+@@ -33,19 +24,16 @@ void ethtool_cmis_cdb_compose_args(struct ethtool_cmis_cdb_cmd_args *args,
+ {
+ args->req.id = cpu_to_be16(cmd);
+ args->req.lpl_len = lpl_len;
+- if (lpl) {
++ if (lpl)
+ memcpy(args->req.payload, lpl, args->req.lpl_len);
+- args->read_write_len_ext =
+- ethtool_cmis_get_max_lpl_size(read_write_len_ext);
+- }
+ if (epl) {
+ args->req.epl_len = cpu_to_be16(epl_len);
+ args->req.epl = epl;
+- args->read_write_len_ext =
+- ethtool_cmis_get_max_epl_size(read_write_len_ext);
+ }
+
+ args->max_duration = max_duration;
++ args->read_write_len_ext =
++ ethtool_cmis_get_max_lpl_size(read_write_len_ext);
+ args->msleep_pre_rpl = msleep_pre_rpl;
+ args->rpl_exp_len = rpl_exp_len;
+ args->flags = flags;
+diff --git a/net/ethtool/common.c b/net/ethtool/common.c
+index b97374b508f672..e2f8a41cc10849 100644
+--- a/net/ethtool/common.c
++++ b/net/ethtool/common.c
+@@ -785,6 +785,7 @@ void ethtool_ringparam_get_cfg(struct net_device *dev,
+
+ /* Driver gives us current state, we want to return current config */
+ kparam->tcp_data_split = dev->cfg->hds_config;
++ kparam->hds_thresh = dev->cfg->hds_thresh;
+ }
+
+ static void ethtool_init_tsinfo(struct kernel_ethtool_ts_info *info)
+diff --git a/net/ethtool/netlink.c b/net/ethtool/netlink.c
+index 734849a573691d..e088a30d1dd264 100644
+--- a/net/ethtool/netlink.c
++++ b/net/ethtool/netlink.c
+@@ -493,7 +493,7 @@ static int ethnl_default_doit(struct sk_buff *skb, struct genl_info *info)
+ ret = ops->prepare_data(req_info, reply_data, info);
+ rtnl_unlock();
+ if (ret < 0)
+- goto err_cleanup;
++ goto err_dev;
+ ret = ops->reply_size(req_info, reply_data);
+ if (ret < 0)
+ goto err_cleanup;
+@@ -551,7 +551,7 @@ static int ethnl_default_dump_one(struct sk_buff *skb, struct net_device *dev,
+ ret = ctx->ops->prepare_data(ctx->req_info, ctx->reply_data, info);
+ rtnl_unlock();
+ if (ret < 0)
+- goto out;
++ goto out_cancel;
+ ret = ethnl_fill_reply_header(skb, dev, ctx->ops->hdr_attr);
+ if (ret < 0)
+ goto out;
+@@ -560,6 +560,7 @@ static int ethnl_default_dump_one(struct sk_buff *skb, struct net_device *dev,
+ out:
+ if (ctx->ops->cleanup_data)
+ ctx->ops->cleanup_data(ctx->reply_data);
++out_cancel:
+ ctx->reply_data->dev = NULL;
+ if (ret < 0)
+ genlmsg_cancel(skb, ehdr);
+@@ -780,7 +781,7 @@ static void ethnl_default_notify(struct net_device *dev, unsigned int cmd,
+ ethnl_init_reply_data(reply_data, ops, dev);
+ ret = ops->prepare_data(req_info, reply_data, &info);
+ if (ret < 0)
+- goto err_cleanup;
++ goto err_rep;
+ ret = ops->reply_size(req_info, reply_data);
+ if (ret < 0)
+ goto err_cleanup;
+@@ -815,6 +816,7 @@ static void ethnl_default_notify(struct net_device *dev, unsigned int cmd,
+ err_cleanup:
+ if (ops->cleanup_data)
+ ops->cleanup_data(reply_data);
++err_rep:
+ kfree(reply_data);
+ kfree(req_info);
+ return;
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index 169a7b9bc40ea1..08cee62e789e13 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -470,10 +470,10 @@ void fib6_select_path(const struct net *net, struct fib6_result *res,
+ goto out;
+
+ hash = fl6->mp_hash;
+- if (hash <= atomic_read(&first->fib6_nh->fib_nh_upper_bound) &&
+- rt6_score_route(first->fib6_nh, first->fib6_flags, oif,
+- strict) >= 0) {
+- match = first;
++ if (hash <= atomic_read(&first->fib6_nh->fib_nh_upper_bound)) {
++ if (rt6_score_route(first->fib6_nh, first->fib6_flags, oif,
++ strict) >= 0)
++ match = first;
+ goto out;
+ }
+
+diff --git a/net/mac80211/debugfs.c b/net/mac80211/debugfs.c
+index bf0a2902d93c6a..69e03630f64c9f 100644
+--- a/net/mac80211/debugfs.c
++++ b/net/mac80211/debugfs.c
+@@ -492,6 +492,7 @@ static const char *hw_flag_names[] = {
+ FLAG(DISALLOW_PUNCTURING),
+ FLAG(DISALLOW_PUNCTURING_5GHZ),
+ FLAG(HANDLES_QUIET_CSA),
++ FLAG(STRICT),
+ #undef FLAG
+ };
+
+@@ -524,6 +525,46 @@ static ssize_t hwflags_read(struct file *file, char __user *user_buf,
+ return rv;
+ }
+
++static ssize_t hwflags_write(struct file *file, const char __user *user_buf,
++ size_t count, loff_t *ppos)
++{
++ struct ieee80211_local *local = file->private_data;
++ char buf[100];
++ int val;
++
++ if (count >= sizeof(buf))
++ return -EINVAL;
++
++ if (copy_from_user(buf, user_buf, count))
++ return -EFAULT;
++
++ if (count && buf[count - 1] == '\n')
++ buf[count - 1] = '\0';
++ else
++ buf[count] = '\0';
++
++ if (sscanf(buf, "strict=%d", &val) == 1) {
++ switch (val) {
++ case 0:
++ ieee80211_hw_set(&local->hw, STRICT);
++ return count;
++ case 1:
++ __clear_bit(IEEE80211_HW_STRICT, local->hw.flags);
++ return count;
++ default:
++ return -EINVAL;
++ }
++ }
++
++ return -EINVAL;
++}
++
++static const struct file_operations hwflags_ops = {
++ .open = simple_open,
++ .read = hwflags_read,
++ .write = hwflags_write,
++};
++
+ static ssize_t misc_read(struct file *file, char __user *user_buf,
+ size_t count, loff_t *ppos)
+ {
+@@ -574,7 +615,6 @@ static ssize_t queues_read(struct file *file, char __user *user_buf,
+ return simple_read_from_buffer(user_buf, count, ppos, buf, res);
+ }
+
+-DEBUGFS_READONLY_FILE_OPS(hwflags);
+ DEBUGFS_READONLY_FILE_OPS(queues);
+ DEBUGFS_READONLY_FILE_OPS(misc);
+
+@@ -651,7 +691,7 @@ void debugfs_hw_add(struct ieee80211_local *local)
+ #ifdef CONFIG_PM
+ DEBUGFS_ADD_MODE(reset, 0200);
+ #endif
+- DEBUGFS_ADD(hwflags);
++ DEBUGFS_ADD_MODE(hwflags, 0600);
+ DEBUGFS_ADD(user_power);
+ DEBUGFS_ADD(power);
+ DEBUGFS_ADD(hw_conf);
+diff --git a/net/mac80211/iface.c b/net/mac80211/iface.c
+index 738de269e13f04..459fc391a4d932 100644
+--- a/net/mac80211/iface.c
++++ b/net/mac80211/iface.c
+@@ -8,7 +8,7 @@
+ * Copyright 2008, Johannes Berg <johannes@sipsolutions.net>
+ * Copyright 2013-2014 Intel Mobile Communications GmbH
+ * Copyright (c) 2016 Intel Deutschland GmbH
+- * Copyright (C) 2018-2024 Intel Corporation
++ * Copyright (C) 2018-2025 Intel Corporation
+ */
+ #include <linux/slab.h>
+ #include <linux/kernel.h>
+@@ -807,6 +807,9 @@ static void ieee80211_set_multicast_list(struct net_device *dev)
+ */
+ static void ieee80211_teardown_sdata(struct ieee80211_sub_if_data *sdata)
+ {
++ if (WARN_ON(!list_empty(&sdata->work.entry)))
++ wiphy_work_cancel(sdata->local->hw.wiphy, &sdata->work);
++
+ /* free extra data */
+ ieee80211_free_keys(sdata, false);
+
+diff --git a/net/mac80211/mesh_hwmp.c b/net/mac80211/mesh_hwmp.c
+index 4e9546e998b6d1..c94a9c7ca960ef 100644
+--- a/net/mac80211/mesh_hwmp.c
++++ b/net/mac80211/mesh_hwmp.c
+@@ -367,6 +367,12 @@ u32 airtime_link_metric_get(struct ieee80211_local *local,
+ return (u32)result;
+ }
+
++/* Check that the first metric is at least 10% better than the second one */
++static bool is_metric_better(u32 x, u32 y)
++{
++ return (x < y) && (x < (y - x / 10));
++}
++
+ /**
+ * hwmp_route_info_get - Update routing info to originator and transmitter
+ *
+@@ -458,8 +464,8 @@ static u32 hwmp_route_info_get(struct ieee80211_sub_if_data *sdata,
+ (mpath->sn == orig_sn &&
+ (rcu_access_pointer(mpath->next_hop) !=
+ sta ?
+- mult_frac(new_metric, 10, 9) :
+- new_metric) >= mpath->metric)) {
++ !is_metric_better(new_metric, mpath->metric) :
++ new_metric >= mpath->metric))) {
+ process = false;
+ fresh_info = false;
+ }
+@@ -533,8 +539,8 @@ static u32 hwmp_route_info_get(struct ieee80211_sub_if_data *sdata,
+ if ((mpath->flags & MESH_PATH_FIXED) ||
+ ((mpath->flags & MESH_PATH_ACTIVE) &&
+ ((rcu_access_pointer(mpath->next_hop) != sta ?
+- mult_frac(last_hop_metric, 10, 9) :
+- last_hop_metric) > mpath->metric)))
++ !is_metric_better(last_hop_metric, mpath->metric) :
++ last_hop_metric > mpath->metric))))
+ fresh_info = false;
+ } else {
+ mpath = mesh_path_add(sdata, ta);
+diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
+index da2c2e6035be8a..99e9b03d7fe193 100644
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -168,6 +168,9 @@ ieee80211_determine_ap_chan(struct ieee80211_sub_if_data *sdata,
+ bool no_vht = false;
+ u32 ht_cfreq;
+
++ if (ieee80211_hw_check(&sdata->local->hw, STRICT))
++ ignore_ht_channel_mismatch = false;
++
+ *chandef = (struct cfg80211_chan_def) {
+ .chan = channel,
+ .width = NL80211_CHAN_WIDTH_20_NOHT,
+@@ -388,7 +391,7 @@ ieee80211_verify_peer_he_mcs_support(struct ieee80211_sub_if_data *sdata,
+ * zeroes, which is nonsense, and completely inconsistent with itself
+ * (it doesn't have 8 streams). Accept the settings in this case anyway.
+ */
+- if (!ap_min_req_set)
++ if (!ieee80211_hw_check(&sdata->local->hw, STRICT) && !ap_min_req_set)
+ return true;
+
+ /* make sure the AP is consistent with itself
+@@ -448,7 +451,7 @@ ieee80211_verify_sta_he_mcs_support(struct ieee80211_sub_if_data *sdata,
+ * zeroes, which is nonsense, and completely inconsistent with itself
+ * (it doesn't have 8 streams). Accept the settings in this case anyway.
+ */
+- if (!ap_min_req_set)
++ if (!ieee80211_hw_check(&sdata->local->hw, STRICT) && !ap_min_req_set)
+ return true;
+
+ /* Need to go over for 80MHz, 160MHz and for 80+80 */
+@@ -1313,13 +1316,15 @@ static bool ieee80211_add_vht_ie(struct ieee80211_sub_if_data *sdata,
+ * Some APs apparently get confused if our capabilities are better
+ * than theirs, so restrict what we advertise in the assoc request.
+ */
+- if (!(ap_vht_cap->vht_cap_info &
+- cpu_to_le32(IEEE80211_VHT_CAP_SU_BEAMFORMER_CAPABLE)))
+- cap &= ~(IEEE80211_VHT_CAP_SU_BEAMFORMEE_CAPABLE |
+- IEEE80211_VHT_CAP_MU_BEAMFORMEE_CAPABLE);
+- else if (!(ap_vht_cap->vht_cap_info &
+- cpu_to_le32(IEEE80211_VHT_CAP_MU_BEAMFORMER_CAPABLE)))
+- cap &= ~IEEE80211_VHT_CAP_MU_BEAMFORMEE_CAPABLE;
++ if (!ieee80211_hw_check(&local->hw, STRICT)) {
++ if (!(ap_vht_cap->vht_cap_info &
++ cpu_to_le32(IEEE80211_VHT_CAP_SU_BEAMFORMER_CAPABLE)))
++ cap &= ~(IEEE80211_VHT_CAP_SU_BEAMFORMEE_CAPABLE |
++ IEEE80211_VHT_CAP_MU_BEAMFORMEE_CAPABLE);
++ else if (!(ap_vht_cap->vht_cap_info &
++ cpu_to_le32(IEEE80211_VHT_CAP_MU_BEAMFORMER_CAPABLE)))
++ cap &= ~IEEE80211_VHT_CAP_MU_BEAMFORMEE_CAPABLE;
++ }
+
+ /*
+ * If some other vif is using the MU-MIMO capability we cannot associate
+@@ -1361,14 +1366,16 @@ static bool ieee80211_add_vht_ie(struct ieee80211_sub_if_data *sdata,
+ return mu_mimo_owner;
+ }
+
+-static void ieee80211_assoc_add_rates(struct sk_buff *skb,
++static void ieee80211_assoc_add_rates(struct ieee80211_local *local,
++ struct sk_buff *skb,
+ enum nl80211_chan_width width,
+ struct ieee80211_supported_band *sband,
+ struct ieee80211_mgd_assoc_data *assoc_data)
+ {
+ u32 rates;
+
+- if (assoc_data->supp_rates_len) {
++ if (assoc_data->supp_rates_len &&
++ !ieee80211_hw_check(&local->hw, STRICT)) {
+ /*
+ * Get all rates supported by the device and the AP as
+ * some APs don't like getting a superset of their rates
+@@ -1584,7 +1591,7 @@ ieee80211_add_link_elems(struct ieee80211_sub_if_data *sdata,
+ *capab |= WLAN_CAPABILITY_SPECTRUM_MGMT;
+
+ if (sband->band != NL80211_BAND_S1GHZ)
+- ieee80211_assoc_add_rates(skb, width, sband, assoc_data);
++ ieee80211_assoc_add_rates(local, skb, width, sband, assoc_data);
+
+ if (*capab & WLAN_CAPABILITY_SPECTRUM_MGMT ||
+ *capab & WLAN_CAPABILITY_RADIO_MEASURE) {
+@@ -2051,7 +2058,8 @@ static int ieee80211_send_assoc(struct ieee80211_sub_if_data *sdata)
+ * for some reason check it and want it to be set, set the bit for all
+ * pre-EHT connections as we used to do.
+ */
+- if (link->u.mgd.conn.mode < IEEE80211_CONN_MODE_EHT)
++ if (link->u.mgd.conn.mode < IEEE80211_CONN_MODE_EHT &&
++ !ieee80211_hw_check(&local->hw, STRICT))
+ capab |= WLAN_CAPABILITY_ESS;
+
+ /* add the elements for the assoc (main) link */
+@@ -4936,7 +4944,7 @@ static bool ieee80211_assoc_config_link(struct ieee80211_link_data *link,
+ * 2G/3G/4G wifi routers, reported models include the "Onda PN51T",
+ * "Vodafone PocketWiFi 2", "ZTE MF60" and a similar T-Mobile device.
+ */
+- if (!is_6ghz &&
++ if (!ieee80211_hw_check(&local->hw, STRICT) && !is_6ghz &&
+ ((assoc_data->wmm && !elems->wmm_param) ||
+ (link->u.mgd.conn.mode >= IEEE80211_CONN_MODE_HT &&
+ (!elems->ht_cap_elem || !elems->ht_operation)) ||
+@@ -5072,6 +5080,15 @@ static bool ieee80211_assoc_config_link(struct ieee80211_link_data *link,
+ bss_vht_cap = (const void *)elem->data;
+ }
+
++ if (ieee80211_hw_check(&local->hw, STRICT) &&
++ (!bss_vht_cap || memcmp(bss_vht_cap, elems->vht_cap_elem,
++ sizeof(*bss_vht_cap)))) {
++ rcu_read_unlock();
++ ret = false;
++ link_info(link, "VHT capabilities mismatch\n");
++ goto out;
++ }
++
+ ieee80211_vht_cap_ie_to_sta_vht_cap(sdata, sband,
+ elems->vht_cap_elem,
+ bss_vht_cap, link_sta);
+@@ -9631,8 +9648,6 @@ EXPORT_SYMBOL(ieee80211_disable_rssi_reports);
+
+ static void ieee80211_ml_reconf_selectors(unsigned long *userspace_selectors)
+ {
+- *userspace_selectors = 0;
+-
+ /* these selectors are mandatory for ML reconfiguration */
+ set_bit(BSS_MEMBERSHIP_SELECTOR_SAE_H2E, userspace_selectors);
+ set_bit(BSS_MEMBERSHIP_SELECTOR_HE_PHY, userspace_selectors);
+@@ -9652,7 +9667,7 @@ void ieee80211_process_ml_reconf_resp(struct ieee80211_sub_if_data *sdata,
+ sdata->u.mgd.reconf.removed_links;
+ u16 link_mask, valid_links;
+ unsigned int link_id;
+- unsigned long userspace_selectors;
++ unsigned long userspace_selectors[BITS_TO_LONGS(128)] = {};
+ size_t orig_len = len;
+ u8 i, group_key_data_len;
+ u8 *pos;
+@@ -9760,7 +9775,7 @@ void ieee80211_process_ml_reconf_resp(struct ieee80211_sub_if_data *sdata,
+ }
+
+ ieee80211_vif_set_links(sdata, valid_links, sdata->vif.dormant_links);
+- ieee80211_ml_reconf_selectors(&userspace_selectors);
++ ieee80211_ml_reconf_selectors(userspace_selectors);
+ link_mask = 0;
+ for (link_id = 0; link_id < IEEE80211_MLD_MAX_NUM_LINKS; link_id++) {
+ struct cfg80211_bss *cbss = add_links_data->link[link_id].bss;
+@@ -9806,7 +9821,7 @@ void ieee80211_process_ml_reconf_resp(struct ieee80211_sub_if_data *sdata,
+ link->u.mgd.conn = add_links_data->link[link_id].conn;
+ if (ieee80211_prep_channel(sdata, link, link_id, cbss,
+ true, &link->u.mgd.conn,
+- &userspace_selectors)) {
++ userspace_selectors)) {
+ link_info(link, "mlo: reconf: prep_channel failed\n");
+ goto disconnect;
+ }
+@@ -10135,14 +10150,14 @@ int ieee80211_mgd_assoc_ml_reconf(struct ieee80211_sub_if_data *sdata,
+ */
+ if (added_links) {
+ bool uapsd_supported;
+- unsigned long userspace_selectors;
++ unsigned long userspace_selectors[BITS_TO_LONGS(128)] = {};
+
+ data = kzalloc(sizeof(*data), GFP_KERNEL);
+ if (!data)
+ return -ENOMEM;
+
+ uapsd_supported = true;
+- ieee80211_ml_reconf_selectors(&userspace_selectors);
++ ieee80211_ml_reconf_selectors(userspace_selectors);
+ for (link_id = 0; link_id < IEEE80211_MLD_MAX_NUM_LINKS;
+ link_id++) {
+ struct ieee80211_supported_band *sband;
+@@ -10218,7 +10233,7 @@ int ieee80211_mgd_assoc_ml_reconf(struct ieee80211_sub_if_data *sdata,
+ data->link[link_id].bss,
+ true,
+ &data->link[link_id].conn,
+- &userspace_selectors);
++ userspace_selectors);
+ if (err)
+ goto err_free;
+ }
+diff --git a/net/mptcp/sockopt.c b/net/mptcp/sockopt.c
+index 505445a9598faf..3caa0a9d3b3885 100644
+--- a/net/mptcp/sockopt.c
++++ b/net/mptcp/sockopt.c
+@@ -1419,6 +1419,12 @@ static int mptcp_getsockopt_v4(struct mptcp_sock *msk, int optname,
+ switch (optname) {
+ case IP_TOS:
+ return mptcp_put_int_option(msk, optval, optlen, READ_ONCE(inet_sk(sk)->tos));
++ case IP_FREEBIND:
++ return mptcp_put_int_option(msk, optval, optlen,
++ inet_test_bit(FREEBIND, sk));
++ case IP_TRANSPARENT:
++ return mptcp_put_int_option(msk, optval, optlen,
++ inet_test_bit(TRANSPARENT, sk));
+ case IP_BIND_ADDRESS_NO_PORT:
+ return mptcp_put_int_option(msk, optval, optlen,
+ inet_test_bit(BIND_ADDRESS_NO_PORT, sk));
+@@ -1430,6 +1436,26 @@ static int mptcp_getsockopt_v4(struct mptcp_sock *msk, int optname,
+ return -EOPNOTSUPP;
+ }
+
++static int mptcp_getsockopt_v6(struct mptcp_sock *msk, int optname,
++ char __user *optval, int __user *optlen)
++{
++ struct sock *sk = (void *)msk;
++
++ switch (optname) {
++ case IPV6_V6ONLY:
++ return mptcp_put_int_option(msk, optval, optlen,
++ sk->sk_ipv6only);
++ case IPV6_TRANSPARENT:
++ return mptcp_put_int_option(msk, optval, optlen,
++ inet_test_bit(TRANSPARENT, sk));
++ case IPV6_FREEBIND:
++ return mptcp_put_int_option(msk, optval, optlen,
++ inet_test_bit(FREEBIND, sk));
++ }
++
++ return -EOPNOTSUPP;
++}
++
+ static int mptcp_getsockopt_sol_mptcp(struct mptcp_sock *msk, int optname,
+ char __user *optval, int __user *optlen)
+ {
+@@ -1469,6 +1495,8 @@ int mptcp_getsockopt(struct sock *sk, int level, int optname,
+
+ if (level == SOL_IP)
+ return mptcp_getsockopt_v4(msk, optname, optval, option);
++ if (level == SOL_IPV6)
++ return mptcp_getsockopt_v6(msk, optname, optval, option);
+ if (level == SOL_TCP)
+ return mptcp_getsockopt_sol_tcp(msk, optname, optval, option);
+ if (level == SOL_MPTCP)
+diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
+index 9f18217dddc865..f2e82599dd0a36 100644
+--- a/net/mptcp/subflow.c
++++ b/net/mptcp/subflow.c
+@@ -754,8 +754,6 @@ static bool subflow_hmac_valid(const struct request_sock *req,
+
+ subflow_req = mptcp_subflow_rsk(req);
+ msk = subflow_req->msk;
+- if (!msk)
+- return false;
+
+ subflow_generate_hmac(READ_ONCE(msk->remote_key),
+ READ_ONCE(msk->local_key),
+@@ -853,12 +851,8 @@ static struct sock *subflow_syn_recv_sock(const struct sock *sk,
+
+ } else if (subflow_req->mp_join) {
+ mptcp_get_options(skb, &mp_opt);
+- if (!(mp_opt.suboptions & OPTION_MPTCP_MPJ_ACK) ||
+- !subflow_hmac_valid(req, &mp_opt) ||
+- !mptcp_can_accept_new_subflow(subflow_req->msk)) {
+- SUBFLOW_REQ_INC_STATS(req, MPTCP_MIB_JOINACKMAC);
++ if (!(mp_opt.suboptions & OPTION_MPTCP_MPJ_ACK))
+ fallback = true;
+- }
+ }
+
+ create_child:
+@@ -908,6 +902,17 @@ static struct sock *subflow_syn_recv_sock(const struct sock *sk,
+ goto dispose_child;
+ }
+
++ if (!subflow_hmac_valid(req, &mp_opt)) {
++ SUBFLOW_REQ_INC_STATS(req, MPTCP_MIB_JOINACKMAC);
++ subflow_add_reset_reason(skb, MPTCP_RST_EPROHIBIT);
++ goto dispose_child;
++ }
++
++ if (!mptcp_can_accept_new_subflow(owner)) {
++ subflow_add_reset_reason(skb, MPTCP_RST_EPROHIBIT);
++ goto dispose_child;
++ }
++
+ /* move the msk reference ownership to the subflow */
+ subflow_req->msk = NULL;
+ ctx->conn = (struct sock *)owner;
+diff --git a/net/netfilter/nft_set_pipapo_avx2.c b/net/netfilter/nft_set_pipapo_avx2.c
+index b8d3c3213efee5..c15db28c5ebc43 100644
+--- a/net/netfilter/nft_set_pipapo_avx2.c
++++ b/net/netfilter/nft_set_pipapo_avx2.c
+@@ -994,8 +994,9 @@ static int nft_pipapo_avx2_lookup_8b_16(unsigned long *map, unsigned long *fill,
+ NFT_PIPAPO_AVX2_BUCKET_LOAD8(5, lt, 8, pkt[8], bsize);
+
+ NFT_PIPAPO_AVX2_AND(6, 2, 3);
++ NFT_PIPAPO_AVX2_AND(3, 4, 7);
+ NFT_PIPAPO_AVX2_BUCKET_LOAD8(7, lt, 9, pkt[9], bsize);
+- NFT_PIPAPO_AVX2_AND(0, 4, 5);
++ NFT_PIPAPO_AVX2_AND(0, 3, 5);
+ NFT_PIPAPO_AVX2_BUCKET_LOAD8(1, lt, 10, pkt[10], bsize);
+ NFT_PIPAPO_AVX2_AND(2, 6, 7);
+ NFT_PIPAPO_AVX2_BUCKET_LOAD8(3, lt, 11, pkt[11], bsize);
+diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
+index 4f648af8cfaafe..ecec0a1e1c1a07 100644
+--- a/net/sched/cls_api.c
++++ b/net/sched/cls_api.c
+@@ -2057,6 +2057,7 @@ static int tcf_fill_node(struct net *net, struct sk_buff *skb,
+ struct tcmsg *tcm;
+ struct nlmsghdr *nlh;
+ unsigned char *b = skb_tail_pointer(skb);
++ int ret = -EMSGSIZE;
+
+ nlh = nlmsg_put(skb, portid, seq, event, sizeof(*tcm), flags);
+ if (!nlh)
+@@ -2101,11 +2102,45 @@ static int tcf_fill_node(struct net *net, struct sk_buff *skb,
+
+ return skb->len;
+
++cls_op_not_supp:
++ ret = -EOPNOTSUPP;
+ out_nlmsg_trim:
+ nla_put_failure:
+-cls_op_not_supp:
+ nlmsg_trim(skb, b);
+- return -1;
++ return ret;
++}
++
++static struct sk_buff *tfilter_notify_prep(struct net *net,
++ struct sk_buff *oskb,
++ struct nlmsghdr *n,
++ struct tcf_proto *tp,
++ struct tcf_block *block,
++ struct Qdisc *q, u32 parent,
++ void *fh, int event,
++ u32 portid, bool rtnl_held,
++ struct netlink_ext_ack *extack)
++{
++ unsigned int size = oskb ? max(NLMSG_GOODSIZE, oskb->len) : NLMSG_GOODSIZE;
++ struct sk_buff *skb;
++ int ret;
++
++retry:
++ skb = alloc_skb(size, GFP_KERNEL);
++ if (!skb)
++ return ERR_PTR(-ENOBUFS);
++
++ ret = tcf_fill_node(net, skb, tp, block, q, parent, fh, portid,
++ n->nlmsg_seq, n->nlmsg_flags, event, false,
++ rtnl_held, extack);
++ if (ret <= 0) {
++ kfree_skb(skb);
++ if (ret == -EMSGSIZE) {
++ size += NLMSG_GOODSIZE;
++ goto retry;
++ }
++ return ERR_PTR(-EINVAL);
++ }
++ return skb;
+ }
+
+ static int tfilter_notify(struct net *net, struct sk_buff *oskb,
+@@ -2121,16 +2156,10 @@ static int tfilter_notify(struct net *net, struct sk_buff *oskb,
+ if (!unicast && !rtnl_notify_needed(net, n->nlmsg_flags, RTNLGRP_TC))
+ return 0;
+
+- skb = alloc_skb(NLMSG_GOODSIZE, GFP_KERNEL);
+- if (!skb)
+- return -ENOBUFS;
+-
+- if (tcf_fill_node(net, skb, tp, block, q, parent, fh, portid,
+- n->nlmsg_seq, n->nlmsg_flags, event,
+- false, rtnl_held, extack) <= 0) {
+- kfree_skb(skb);
+- return -EINVAL;
+- }
++ skb = tfilter_notify_prep(net, oskb, n, tp, block, q, parent, fh, event,
++ portid, rtnl_held, extack);
++ if (IS_ERR(skb))
++ return PTR_ERR(skb);
+
+ if (unicast)
+ err = rtnl_unicast(skb, net, portid);
+@@ -2153,16 +2182,11 @@ static int tfilter_del_notify(struct net *net, struct sk_buff *oskb,
+ if (!rtnl_notify_needed(net, n->nlmsg_flags, RTNLGRP_TC))
+ return tp->ops->delete(tp, fh, last, rtnl_held, extack);
+
+- skb = alloc_skb(NLMSG_GOODSIZE, GFP_KERNEL);
+- if (!skb)
+- return -ENOBUFS;
+-
+- if (tcf_fill_node(net, skb, tp, block, q, parent, fh, portid,
+- n->nlmsg_seq, n->nlmsg_flags, RTM_DELTFILTER,
+- false, rtnl_held, extack) <= 0) {
++ skb = tfilter_notify_prep(net, oskb, n, tp, block, q, parent, fh,
++ RTM_DELTFILTER, portid, rtnl_held, extack);
++ if (IS_ERR(skb)) {
+ NL_SET_ERR_MSG(extack, "Failed to build del event notification");
+- kfree_skb(skb);
+- return -EINVAL;
++ return PTR_ERR(skb);
+ }
+
+ err = tp->ops->delete(tp, fh, last, rtnl_held, extack);
+diff --git a/net/sched/sch_codel.c b/net/sched/sch_codel.c
+index 81189d02fee761..12dd71139da396 100644
+--- a/net/sched/sch_codel.c
++++ b/net/sched/sch_codel.c
+@@ -65,10 +65,7 @@ static struct sk_buff *codel_qdisc_dequeue(struct Qdisc *sch)
+ &q->stats, qdisc_pkt_len, codel_get_enqueue_time,
+ drop_func, dequeue_func);
+
+- /* We cant call qdisc_tree_reduce_backlog() if our qlen is 0,
+- * or HTB crashes. Defer it for next round.
+- */
+- if (q->stats.drop_count && sch->q.qlen) {
++ if (q->stats.drop_count) {
+ qdisc_tree_reduce_backlog(sch, q->stats.drop_count, q->stats.drop_len);
+ q->stats.drop_count = 0;
+ q->stats.drop_len = 0;
+diff --git a/net/sched/sch_fq_codel.c b/net/sched/sch_fq_codel.c
+index 799f5397ad4c17..6c9029f71e88d3 100644
+--- a/net/sched/sch_fq_codel.c
++++ b/net/sched/sch_fq_codel.c
+@@ -315,10 +315,8 @@ static struct sk_buff *fq_codel_dequeue(struct Qdisc *sch)
+ }
+ qdisc_bstats_update(sch, skb);
+ flow->deficit -= qdisc_pkt_len(skb);
+- /* We cant call qdisc_tree_reduce_backlog() if our qlen is 0,
+- * or HTB crashes. Defer it for next round.
+- */
+- if (q->cstats.drop_count && sch->q.qlen) {
++
++ if (q->cstats.drop_count) {
+ qdisc_tree_reduce_backlog(sch, q->cstats.drop_count,
+ q->cstats.drop_len);
+ q->cstats.drop_count = 0;
+diff --git a/net/sched/sch_sfq.c b/net/sched/sch_sfq.c
+index 65d5b59da58303..58b42dcf8f2013 100644
+--- a/net/sched/sch_sfq.c
++++ b/net/sched/sch_sfq.c
+@@ -631,6 +631,15 @@ static int sfq_change(struct Qdisc *sch, struct nlattr *opt,
+ struct red_parms *p = NULL;
+ struct sk_buff *to_free = NULL;
+ struct sk_buff *tail = NULL;
++ unsigned int maxflows;
++ unsigned int quantum;
++ unsigned int divisor;
++ int perturb_period;
++ u8 headdrop;
++ u8 maxdepth;
++ int limit;
++ u8 flags;
++
+
+ if (opt->nla_len < nla_attr_size(sizeof(*ctl)))
+ return -EINVAL;
+@@ -652,39 +661,64 @@ static int sfq_change(struct Qdisc *sch, struct nlattr *opt,
+ if (!p)
+ return -ENOMEM;
+ }
+- if (ctl->limit == 1) {
+- NL_SET_ERR_MSG_MOD(extack, "invalid limit");
+- return -EINVAL;
+- }
++
+ sch_tree_lock(sch);
++
++ limit = q->limit;
++ divisor = q->divisor;
++ headdrop = q->headdrop;
++ maxdepth = q->maxdepth;
++ maxflows = q->maxflows;
++ perturb_period = q->perturb_period;
++ quantum = q->quantum;
++ flags = q->flags;
++
++ /* update and validate configuration */
+ if (ctl->quantum)
+- q->quantum = ctl->quantum;
+- WRITE_ONCE(q->perturb_period, ctl->perturb_period * HZ);
++ quantum = ctl->quantum;
++ perturb_period = ctl->perturb_period * HZ;
+ if (ctl->flows)
+- q->maxflows = min_t(u32, ctl->flows, SFQ_MAX_FLOWS);
++ maxflows = min_t(u32, ctl->flows, SFQ_MAX_FLOWS);
+ if (ctl->divisor) {
+- q->divisor = ctl->divisor;
+- q->maxflows = min_t(u32, q->maxflows, q->divisor);
++ divisor = ctl->divisor;
++ maxflows = min_t(u32, maxflows, divisor);
+ }
+ if (ctl_v1) {
+ if (ctl_v1->depth)
+- q->maxdepth = min_t(u32, ctl_v1->depth, SFQ_MAX_DEPTH);
++ maxdepth = min_t(u32, ctl_v1->depth, SFQ_MAX_DEPTH);
+ if (p) {
+- swap(q->red_parms, p);
+- red_set_parms(q->red_parms,
++ red_set_parms(p,
+ ctl_v1->qth_min, ctl_v1->qth_max,
+ ctl_v1->Wlog,
+ ctl_v1->Plog, ctl_v1->Scell_log,
+ NULL,
+ ctl_v1->max_P);
+ }
+- q->flags = ctl_v1->flags;
+- q->headdrop = ctl_v1->headdrop;
++ flags = ctl_v1->flags;
++ headdrop = ctl_v1->headdrop;
+ }
+ if (ctl->limit) {
+- q->limit = min_t(u32, ctl->limit, q->maxdepth * q->maxflows);
+- q->maxflows = min_t(u32, q->maxflows, q->limit);
++ limit = min_t(u32, ctl->limit, maxdepth * maxflows);
++ maxflows = min_t(u32, maxflows, limit);
+ }
++ if (limit == 1) {
++ sch_tree_unlock(sch);
++ kfree(p);
++ NL_SET_ERR_MSG_MOD(extack, "invalid limit");
++ return -EINVAL;
++ }
++
++ /* commit configuration */
++ q->limit = limit;
++ q->divisor = divisor;
++ q->headdrop = headdrop;
++ q->maxdepth = maxdepth;
++ q->maxflows = maxflows;
++ WRITE_ONCE(q->perturb_period, perturb_period);
++ q->quantum = quantum;
++ q->flags = flags;
++ if (p)
++ swap(q->red_parms, p);
+
+ qlen = sch->q.qlen;
+ while (sch->q.qlen > q->limit) {
+diff --git a/net/sctp/socket.c b/net/sctp/socket.c
+index 36ee34f483d703..53725ee7ba06d7 100644
+--- a/net/sctp/socket.c
++++ b/net/sctp/socket.c
+@@ -72,8 +72,9 @@
+ /* Forward declarations for internal helper functions. */
+ static bool sctp_writeable(const struct sock *sk);
+ static void sctp_wfree(struct sk_buff *skb);
+-static int sctp_wait_for_sndbuf(struct sctp_association *asoc, long *timeo_p,
+- size_t msg_len);
++static int sctp_wait_for_sndbuf(struct sctp_association *asoc,
++ struct sctp_transport *transport,
++ long *timeo_p, size_t msg_len);
+ static int sctp_wait_for_packet(struct sock *sk, int *err, long *timeo_p);
+ static int sctp_wait_for_connect(struct sctp_association *, long *timeo_p);
+ static int sctp_wait_for_accept(struct sock *sk, long timeo);
+@@ -1828,7 +1829,7 @@ static int sctp_sendmsg_to_asoc(struct sctp_association *asoc,
+
+ if (sctp_wspace(asoc) <= 0 || !sk_wmem_schedule(sk, msg_len)) {
+ timeo = sock_sndtimeo(sk, msg->msg_flags & MSG_DONTWAIT);
+- err = sctp_wait_for_sndbuf(asoc, &timeo, msg_len);
++ err = sctp_wait_for_sndbuf(asoc, transport, &timeo, msg_len);
+ if (err)
+ goto err;
+ if (unlikely(sinfo->sinfo_stream >= asoc->stream.outcnt)) {
+@@ -9214,8 +9215,9 @@ void sctp_sock_rfree(struct sk_buff *skb)
+
+
+ /* Helper function to wait for space in the sndbuf. */
+-static int sctp_wait_for_sndbuf(struct sctp_association *asoc, long *timeo_p,
+- size_t msg_len)
++static int sctp_wait_for_sndbuf(struct sctp_association *asoc,
++ struct sctp_transport *transport,
++ long *timeo_p, size_t msg_len)
+ {
+ struct sock *sk = asoc->base.sk;
+ long current_timeo = *timeo_p;
+@@ -9225,7 +9227,9 @@ static int sctp_wait_for_sndbuf(struct sctp_association *asoc, long *timeo_p,
+ pr_debug("%s: asoc:%p, timeo:%ld, msg_len:%zu\n", __func__, asoc,
+ *timeo_p, msg_len);
+
+- /* Increment the association's refcnt. */
++ /* Increment the transport and association's refcnt. */
++ if (transport)
++ sctp_transport_hold(transport);
+ sctp_association_hold(asoc);
+
+ /* Wait on the association specific sndbuf space. */
+@@ -9234,7 +9238,7 @@ static int sctp_wait_for_sndbuf(struct sctp_association *asoc, long *timeo_p,
+ TASK_INTERRUPTIBLE);
+ if (asoc->base.dead)
+ goto do_dead;
+- if (!*timeo_p)
++ if ((!*timeo_p) || (transport && transport->dead))
+ goto do_nonblock;
+ if (sk->sk_err || asoc->state >= SCTP_STATE_SHUTDOWN_PENDING)
+ goto do_error;
+@@ -9259,7 +9263,9 @@ static int sctp_wait_for_sndbuf(struct sctp_association *asoc, long *timeo_p,
+ out:
+ finish_wait(&asoc->wait, &wait);
+
+- /* Release the association's refcnt. */
++ /* Release the transport and association's refcnt. */
++ if (transport)
++ sctp_transport_put(transport);
+ sctp_association_put(asoc);
+
+ return err;
+diff --git a/net/sctp/transport.c b/net/sctp/transport.c
+index 2abe45af98e7c6..31eca29b6cfbfb 100644
+--- a/net/sctp/transport.c
++++ b/net/sctp/transport.c
+@@ -117,6 +117,8 @@ struct sctp_transport *sctp_transport_new(struct net *net,
+ */
+ void sctp_transport_free(struct sctp_transport *transport)
+ {
++ transport->dead = 1;
++
+ /* Try to delete the heartbeat timer. */
+ if (del_timer(&transport->hb_timer))
+ sctp_transport_put(transport);
+diff --git a/net/sunrpc/xprtrdma/svc_rdma_transport.c b/net/sunrpc/xprtrdma/svc_rdma_transport.c
+index c3fbf0779d4ab6..aca8bdf65d729f 100644
+--- a/net/sunrpc/xprtrdma/svc_rdma_transport.c
++++ b/net/sunrpc/xprtrdma/svc_rdma_transport.c
+@@ -621,7 +621,8 @@ static void __svc_rdma_free(struct work_struct *work)
+ /* Destroy the CM ID */
+ rdma_destroy_id(rdma->sc_cm_id);
+
+- rpcrdma_rn_unregister(device, &rdma->sc_rn);
++ if (!test_bit(XPT_LISTENER, &rdma->sc_xprt.xpt_flags))
++ rpcrdma_rn_unregister(device, &rdma->sc_rn);
+ kfree(rdma);
+ }
+
+diff --git a/net/tipc/link.c b/net/tipc/link.c
+index 5c2088a469cea1..5689e1f4854797 100644
+--- a/net/tipc/link.c
++++ b/net/tipc/link.c
+@@ -1046,6 +1046,7 @@ int tipc_link_xmit(struct tipc_link *l, struct sk_buff_head *list,
+ if (unlikely(l->backlog[imp].len >= l->backlog[imp].limit)) {
+ if (imp == TIPC_SYSTEM_IMPORTANCE) {
+ pr_warn("%s<%s>, link overflow", link_rst_msg, l->name);
++ __skb_queue_purge(list);
+ return -ENOBUFS;
+ }
+ rc = link_schedule_user(l, hdr);
+diff --git a/net/tls/tls_main.c b/net/tls/tls_main.c
+index 99ca4465f70216..4d7702ce17c063 100644
+--- a/net/tls/tls_main.c
++++ b/net/tls/tls_main.c
+@@ -852,6 +852,11 @@ static int tls_setsockopt(struct sock *sk, int level, int optname,
+ return do_tls_setsockopt(sk, optname, optval, optlen);
+ }
+
++static int tls_disconnect(struct sock *sk, int flags)
++{
++ return -EOPNOTSUPP;
++}
++
+ struct tls_context *tls_ctx_create(struct sock *sk)
+ {
+ struct inet_connection_sock *icsk = inet_csk(sk);
+@@ -947,6 +952,7 @@ static void build_protos(struct proto prot[TLS_NUM_CONFIG][TLS_NUM_CONFIG],
+ prot[TLS_BASE][TLS_BASE] = *base;
+ prot[TLS_BASE][TLS_BASE].setsockopt = tls_setsockopt;
+ prot[TLS_BASE][TLS_BASE].getsockopt = tls_getsockopt;
++ prot[TLS_BASE][TLS_BASE].disconnect = tls_disconnect;
+ prot[TLS_BASE][TLS_BASE].close = tls_sk_proto_close;
+
+ prot[TLS_SW][TLS_BASE] = prot[TLS_BASE][TLS_BASE];
+diff --git a/scripts/generate_builtin_ranges.awk b/scripts/generate_builtin_ranges.awk
+index b9ec761b3befc4..d4bd5c2b998ca2 100755
+--- a/scripts/generate_builtin_ranges.awk
++++ b/scripts/generate_builtin_ranges.awk
+@@ -282,6 +282,11 @@ ARGIND == 2 && !anchor && NF == 2 && $1 ~ /^0x/ && $2 !~ /^0x/ {
+ # section.
+ #
+ ARGIND == 2 && sect && NF == 4 && /^ [^ \*]/ && !($1 in sect_addend) {
++ # There are a few sections with constant data (without symbols) that
++ # can get resized during linking, so it is best to ignore them.
++ if ($1 ~ /^\.rodata\.(cst|str)[0-9]/)
++ next;
++
+ if (!($1 in sect_base)) {
+ sect_base[$1] = base;
+
+diff --git a/security/integrity/ima/ima.h b/security/integrity/ima/ima.h
+index a4f284bd846c15..e0489c6f7f5961 100644
+--- a/security/integrity/ima/ima.h
++++ b/security/integrity/ima/ima.h
+@@ -181,7 +181,8 @@ struct ima_kexec_hdr {
+ #define IMA_UPDATE_XATTR 1
+ #define IMA_CHANGE_ATTR 2
+ #define IMA_DIGSIG 3
+-#define IMA_MUST_MEASURE 4
++#define IMA_MAY_EMIT_TOMTOU 4
++#define IMA_EMITTED_OPENWRITERS 5
+
+ /* IMA integrity metadata associated with an inode */
+ struct ima_iint_cache {
+diff --git a/security/integrity/ima/ima_main.c b/security/integrity/ima/ima_main.c
+index 28b8b0db6f9bbc..f3e7ac513db3f5 100644
+--- a/security/integrity/ima/ima_main.c
++++ b/security/integrity/ima/ima_main.c
+@@ -129,16 +129,22 @@ static void ima_rdwr_violation_check(struct file *file,
+ if (atomic_read(&inode->i_readcount) && IS_IMA(inode)) {
+ if (!iint)
+ iint = ima_iint_find(inode);
++
+ /* IMA_MEASURE is set from reader side */
+- if (iint && test_bit(IMA_MUST_MEASURE,
+- &iint->atomic_flags))
++ if (iint && test_and_clear_bit(IMA_MAY_EMIT_TOMTOU,
++ &iint->atomic_flags))
+ send_tomtou = true;
+ }
+ } else {
+ if (must_measure)
+- set_bit(IMA_MUST_MEASURE, &iint->atomic_flags);
+- if (inode_is_open_for_write(inode) && must_measure)
+- send_writers = true;
++ set_bit(IMA_MAY_EMIT_TOMTOU, &iint->atomic_flags);
++
++ /* Limit number of open_writers violations */
++ if (inode_is_open_for_write(inode) && must_measure) {
++ if (!test_and_set_bit(IMA_EMITTED_OPENWRITERS,
++ &iint->atomic_flags))
++ send_writers = true;
++ }
+ }
+
+ if (!send_tomtou && !send_writers)
+@@ -167,6 +173,8 @@ static void ima_check_last_writer(struct ima_iint_cache *iint,
+ if (atomic_read(&inode->i_writecount) == 1) {
+ struct kstat stat;
+
++ clear_bit(IMA_EMITTED_OPENWRITERS, &iint->atomic_flags);
++
+ update = test_and_clear_bit(IMA_UPDATE_XATTR,
+ &iint->atomic_flags);
+ if ((iint->flags & IMA_NEW_FILE) ||
+diff --git a/security/landlock/errata.h b/security/landlock/errata.h
+new file mode 100644
+index 00000000000000..8e626accac1011
+--- /dev/null
++++ b/security/landlock/errata.h
+@@ -0,0 +1,99 @@
++/* SPDX-License-Identifier: GPL-2.0-only */
++/*
++ * Landlock - Errata information
++ *
++ * Copyright © 2025 Microsoft Corporation
++ */
++
++#ifndef _SECURITY_LANDLOCK_ERRATA_H
++#define _SECURITY_LANDLOCK_ERRATA_H
++
++#include <linux/init.h>
++
++struct landlock_erratum {
++ const int abi;
++ const u8 number;
++};
++
++/* clang-format off */
++#define LANDLOCK_ERRATUM(NUMBER) \
++ { \
++ .abi = LANDLOCK_ERRATA_ABI, \
++ .number = NUMBER, \
++ },
++/* clang-format on */
++
++/*
++ * Some fixes may require user space to check if they are applied on the running
++ * kernel before using a specific feature. For instance, this applies when a
++ * restriction was previously too restrictive and is now getting relaxed (for
++ * compatibility or semantic reasons). However, non-visible changes for
++ * legitimate use (e.g. security fixes) do not require an erratum.
++ */
++static const struct landlock_erratum landlock_errata_init[] __initconst = {
++
++/*
++ * Only Sparse may not implement __has_include. If a compiler does not
++ * implement __has_include, a warning will be printed at boot time (see
++ * setup.c).
++ */
++#ifdef __has_include
++
++#define LANDLOCK_ERRATA_ABI 1
++#if __has_include("errata/abi-1.h")
++#include "errata/abi-1.h"
++#endif
++#undef LANDLOCK_ERRATA_ABI
++
++#define LANDLOCK_ERRATA_ABI 2
++#if __has_include("errata/abi-2.h")
++#include "errata/abi-2.h"
++#endif
++#undef LANDLOCK_ERRATA_ABI
++
++#define LANDLOCK_ERRATA_ABI 3
++#if __has_include("errata/abi-3.h")
++#include "errata/abi-3.h"
++#endif
++#undef LANDLOCK_ERRATA_ABI
++
++#define LANDLOCK_ERRATA_ABI 4
++#if __has_include("errata/abi-4.h")
++#include "errata/abi-4.h"
++#endif
++#undef LANDLOCK_ERRATA_ABI
++
++#define LANDLOCK_ERRATA_ABI 5
++#if __has_include("errata/abi-5.h")
++#include "errata/abi-5.h"
++#endif
++#undef LANDLOCK_ERRATA_ABI
++
++#define LANDLOCK_ERRATA_ABI 6
++#if __has_include("errata/abi-6.h")
++#include "errata/abi-6.h"
++#endif
++#undef LANDLOCK_ERRATA_ABI
++
++/*
++ * For each new erratum, we need to include all the ABI files up to the impacted
++ * ABI to make all potential future intermediate errata easy to backport.
++ *
++ * If such change involves more than one ABI addition, then it must be in a
++ * dedicated commit with the same Fixes tag as used for the actual fix.
++ *
++ * Each commit creating a new security/landlock/errata/abi-*.h file must have a
++ * Depends-on tag to reference the commit that previously added the line to
++ * include this new file, except if the original Fixes tag is enough.
++ *
++ * Each erratum must be documented in its related ABI file, and a dedicated
++ * commit must update Documentation/userspace-api/landlock.rst to include this
++ * erratum. This commit will not be backported.
++ */
++
++#endif
++
++ {}
++};
++
++#endif /* _SECURITY_LANDLOCK_ERRATA_H */
+diff --git a/security/landlock/errata/abi-4.h b/security/landlock/errata/abi-4.h
+new file mode 100644
+index 00000000000000..c052ee54f89f60
+--- /dev/null
++++ b/security/landlock/errata/abi-4.h
+@@ -0,0 +1,15 @@
++/* SPDX-License-Identifier: GPL-2.0-only */
++
++/**
++ * DOC: erratum_1
++ *
++ * Erratum 1: TCP socket identification
++ * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
++ *
++ * This fix addresses an issue where IPv4 and IPv6 stream sockets (e.g., SMC,
++ * MPTCP, or SCTP) were incorrectly restricted by TCP access rights during
++ * :manpage:`bind(2)` and :manpage:`connect(2)` operations. This change ensures
++ * that only TCP sockets are subject to TCP access rights, allowing other
++ * protocols to operate without unnecessary restrictions.
++ */
++LANDLOCK_ERRATUM(1)
+diff --git a/security/landlock/errata/abi-6.h b/security/landlock/errata/abi-6.h
+new file mode 100644
+index 00000000000000..df7bc0e1fdf472
+--- /dev/null
++++ b/security/landlock/errata/abi-6.h
+@@ -0,0 +1,19 @@
++/* SPDX-License-Identifier: GPL-2.0-only */
++
++/**
++ * DOC: erratum_2
++ *
++ * Erratum 2: Scoped signal handling
++ * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
++ *
++ * This fix addresses an issue where signal scoping was overly restrictive,
++ * preventing sandboxed threads from signaling other threads within the same
++ * process if they belonged to different domains. Because threads are not
++ * security boundaries, user space might assume that any thread within the same
++ * process can send signals between themselves (see :manpage:`nptl(7)` and
++ * :manpage:`libpsx(3)`). Consistent with :manpage:`ptrace(2)` behavior, direct
++ * interaction between threads of the same process should always be allowed.
++ * This change ensures that any thread is allowed to send signals to any other
++ * thread within the same process, regardless of their domain.
++ */
++LANDLOCK_ERRATUM(2)
+diff --git a/security/landlock/fs.c b/security/landlock/fs.c
+index 71b9dc331aae87..c19aab87c4d2d7 100644
+--- a/security/landlock/fs.c
++++ b/security/landlock/fs.c
+@@ -27,7 +27,9 @@
+ #include <linux/mount.h>
+ #include <linux/namei.h>
+ #include <linux/path.h>
++#include <linux/pid.h>
+ #include <linux/rcupdate.h>
++#include <linux/sched/signal.h>
+ #include <linux/spinlock.h>
+ #include <linux/stat.h>
+ #include <linux/types.h>
+@@ -1628,21 +1630,46 @@ static int hook_file_ioctl_compat(struct file *file, unsigned int cmd,
+ return -EACCES;
+ }
+
+-static void hook_file_set_fowner(struct file *file)
++/*
++ * Always allow sending signals between threads of the same process. This
++ * ensures consistency with hook_task_kill().
++ */
++static bool control_current_fowner(struct fown_struct *const fown)
+ {
+- struct landlock_ruleset *new_dom, *prev_dom;
++ struct task_struct *p;
+
+ /*
+ * Lock already held by __f_setown(), see commit 26f204380a3c ("fs: Fix
+ * file_set_fowner LSM hook inconsistencies").
+ */
+- lockdep_assert_held(&file_f_owner(file)->lock);
+- new_dom = landlock_get_current_domain();
+- landlock_get_ruleset(new_dom);
++ lockdep_assert_held(&fown->lock);
++
++ /*
++ * Some callers (e.g. fcntl_dirnotify) may not be in an RCU read-side
++ * critical section.
++ */
++ guard(rcu)();
++ p = pid_task(fown->pid, fown->pid_type);
++ if (!p)
++ return true;
++
++ return !same_thread_group(p, current);
++}
++
++static void hook_file_set_fowner(struct file *file)
++{
++ struct landlock_ruleset *prev_dom;
++ struct landlock_ruleset *new_dom = NULL;
++
++ if (control_current_fowner(file_f_owner(file))) {
++ new_dom = landlock_get_current_domain();
++ landlock_get_ruleset(new_dom);
++ }
++
+ prev_dom = landlock_file(file)->fown_domain;
+ landlock_file(file)->fown_domain = new_dom;
+
+- /* Called in an RCU read-side critical section. */
++ /* May be called in an RCU read-side critical section. */
+ landlock_put_ruleset_deferred(prev_dom);
+ }
+
+diff --git a/security/landlock/setup.c b/security/landlock/setup.c
+index 28519a45b11ffb..0c85ea27e40990 100644
+--- a/security/landlock/setup.c
++++ b/security/landlock/setup.c
+@@ -6,12 +6,14 @@
+ * Copyright © 2018-2020 ANSSI
+ */
+
++#include <linux/bits.h>
+ #include <linux/init.h>
+ #include <linux/lsm_hooks.h>
+ #include <uapi/linux/lsm.h>
+
+ #include "common.h"
+ #include "cred.h"
++#include "errata.h"
+ #include "fs.h"
+ #include "net.h"
+ #include "setup.h"
+@@ -19,6 +21,11 @@
+
+ bool landlock_initialized __ro_after_init = false;
+
++const struct lsm_id landlock_lsmid = {
++ .name = LANDLOCK_NAME,
++ .id = LSM_ID_LANDLOCK,
++};
++
+ struct lsm_blob_sizes landlock_blob_sizes __ro_after_init = {
+ .lbs_cred = sizeof(struct landlock_cred_security),
+ .lbs_file = sizeof(struct landlock_file_security),
+@@ -26,13 +33,36 @@ struct lsm_blob_sizes landlock_blob_sizes __ro_after_init = {
+ .lbs_superblock = sizeof(struct landlock_superblock_security),
+ };
+
+-const struct lsm_id landlock_lsmid = {
+- .name = LANDLOCK_NAME,
+- .id = LSM_ID_LANDLOCK,
+-};
++int landlock_errata __ro_after_init;
++
++static void __init compute_errata(void)
++{
++ size_t i;
++
++#ifndef __has_include
++ /*
++ * This is a safeguard to make sure the compiler implements
++ * __has_include (see errata.h).
++ */
++ WARN_ON_ONCE(1);
++ return;
++#endif
++
++ for (i = 0; landlock_errata_init[i].number; i++) {
++ const int prev_errata = landlock_errata;
++
++ if (WARN_ON_ONCE(landlock_errata_init[i].abi >
++ landlock_abi_version))
++ continue;
++
++ landlock_errata |= BIT(landlock_errata_init[i].number - 1);
++ WARN_ON_ONCE(prev_errata == landlock_errata);
++ }
++}
+
+ static int __init landlock_init(void)
+ {
++ compute_errata();
+ landlock_add_cred_hooks();
+ landlock_add_task_hooks();
+ landlock_add_fs_hooks();
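For reference, compute_errata() above folds erratum numbers into a plain bitmask: with the two errata added by this patch (numbers 1 and 2, attached to ABI 4 and ABI 6), landlock_errata ends up as BIT(0) | BIT(1) = 0x3 on a kernel whose ABI version covers both, which is the value later returned by landlock_create_ruleset(NULL, 0, LANDLOCK_CREATE_RULESET_ERRATA).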
+diff --git a/security/landlock/setup.h b/security/landlock/setup.h
+index c4252d46d49d48..fca307c35fee5d 100644
+--- a/security/landlock/setup.h
++++ b/security/landlock/setup.h
+@@ -11,7 +11,10 @@
+
+ #include <linux/lsm_hooks.h>
+
++extern const int landlock_abi_version;
++
+ extern bool landlock_initialized;
++extern int landlock_errata;
+
+ extern struct lsm_blob_sizes landlock_blob_sizes;
+ extern const struct lsm_id landlock_lsmid;
+diff --git a/security/landlock/syscalls.c b/security/landlock/syscalls.c
+index a9760d252fc2dc..cf9e0483e5429a 100644
+--- a/security/landlock/syscalls.c
++++ b/security/landlock/syscalls.c
+@@ -160,7 +160,9 @@ static const struct file_operations ruleset_fops = {
+ * the new ruleset.
+ * @size: Size of the pointed &struct landlock_ruleset_attr (needed for
+ * backward and forward compatibility).
+- * @flags: Supported value: %LANDLOCK_CREATE_RULESET_VERSION.
++ * @flags: Supported value:
++ * - %LANDLOCK_CREATE_RULESET_VERSION
++ * - %LANDLOCK_CREATE_RULESET_ERRATA
+ *
+ * This system call enables to create a new Landlock ruleset, and returns the
+ * related file descriptor on success.
+@@ -169,6 +171,10 @@ static const struct file_operations ruleset_fops = {
+ * 0, then the returned value is the highest supported Landlock ABI version
+ * (starting at 1).
+ *
++ * If @flags is %LANDLOCK_CREATE_RULESET_ERRATA and @attr is NULL and @size is
++ * 0, then the returned value is a bitmask of fixed issues for the current
++ * Landlock ABI version.
++ *
+ * Possible returned errors are:
+ *
+ * - %EOPNOTSUPP: Landlock is supported by the kernel but disabled at boot time;
+@@ -192,9 +198,15 @@ SYSCALL_DEFINE3(landlock_create_ruleset,
+ return -EOPNOTSUPP;
+
+ if (flags) {
+- if ((flags == LANDLOCK_CREATE_RULESET_VERSION) && !attr &&
+- !size)
+- return LANDLOCK_ABI_VERSION;
++ if (attr || size)
++ return -EINVAL;
++
++ if (flags == LANDLOCK_CREATE_RULESET_VERSION)
++ return landlock_abi_version;
++
++ if (flags == LANDLOCK_CREATE_RULESET_ERRATA)
++ return landlock_errata;
++
+ return -EINVAL;
+ }
+
+@@ -235,6 +247,8 @@ SYSCALL_DEFINE3(landlock_create_ruleset,
+ return ruleset_fd;
+ }
+
++const int landlock_abi_version = LANDLOCK_ABI_VERSION;
++
+ /*
+ * Returns an owned ruleset from a FD. It is thus needed to call
+ * landlock_put_ruleset() on the return value.
+diff --git a/security/landlock/task.c b/security/landlock/task.c
+index dc7dab78392edc..4578ce6e319d83 100644
+--- a/security/landlock/task.c
++++ b/security/landlock/task.c
+@@ -13,6 +13,7 @@
+ #include <linux/lsm_hooks.h>
+ #include <linux/rcupdate.h>
+ #include <linux/sched.h>
++#include <linux/sched/signal.h>
+ #include <net/af_unix.h>
+ #include <net/sock.h>
+
+@@ -264,6 +265,17 @@ static int hook_task_kill(struct task_struct *const p,
+ /* Dealing with USB IO. */
+ dom = landlock_cred(cred)->domain;
+ } else {
++ /*
++ * Always allow sending signals between threads of the same process.
++ * This is required for process credential changes by the Native POSIX
++ * Threads Library and implemented by the set*id(2) wrappers and
++ * libcap(3) with tgkill(2). See nptl(7) and libpsx(3).
++ *
++ * This exception is similar to the __ptrace_may_access() one.
++ */
++ if (same_thread_group(p, current))
++ return 0;
++
+ dom = landlock_get_current_domain();
+ }
+ dom = landlock_get_applicable_domain(dom, signal_scope);
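The set*id() case named in the comment above is the practical motivation: glibc's NPTL implements setuid() in a multi-threaded process by signalling every thread so that each one switches credentials, which the previous scoping behaviour could block. A rough user-space sketch of that pattern (an illustration under those assumptions, not code from the patch; compile with -pthread):

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *worker(void *arg)
{
	(void)arg;
	sleep(1);
	/* glibc must have propagated the credential change to this thread. */
	printf("worker sees uid %d\n", (int)getuid());
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, worker, NULL);
	/*
	 * Broadcast internally to all threads via tgkill() (see nptl(7));
	 * with this fix, Landlock signal scoping no longer blocks it.
	 */
	if (setuid(getuid()))
		perror("setuid");
	pthread_join(t, NULL);
	return 0;
}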
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index e67c22c59f02b1..1ae26bdbe756ad 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -37,6 +37,7 @@
+ #include <linux/completion.h>
+ #include <linux/acpi.h>
+ #include <linux/pgtable.h>
++#include <linux/dmi.h>
+
+ #ifdef CONFIG_X86
+ /* for snoop control */
+@@ -1352,8 +1353,21 @@ static void azx_free(struct azx *chip)
+ if (use_vga_switcheroo(hda)) {
+ if (chip->disabled && hda->probe_continued)
+ snd_hda_unlock_devices(&chip->bus);
+- if (hda->vga_switcheroo_registered)
++ if (hda->vga_switcheroo_registered) {
+ vga_switcheroo_unregister_client(chip->pci);
++
++ /* Some GPUs don't have sound, and azx_first_init fails,
++ * leaving the device probed but non-functional. As long
++ * as it's probed, the PCI subsystem keeps its runtime
++ * PM status as active. Force it to suspended (as we
++ * actually stop the chip) to allow GPU to suspend via
++ * vga_switcheroo, and print a warning.
++ */
++ dev_warn(&pci->dev, "GPU sound probed, but not operational: please add a quirk to driver_denylist\n");
++ pm_runtime_disable(&pci->dev);
++ pm_runtime_set_suspended(&pci->dev);
++ pm_runtime_enable(&pci->dev);
++ }
+ }
+
+ if (bus->chip_init) {
+@@ -2061,6 +2075,27 @@ static const struct pci_device_id driver_denylist[] = {
+ {}
+ };
+
++static struct pci_device_id driver_denylist_ideapad_z570[] = {
++ { PCI_DEVICE_SUB(0x10de, 0x0bea, 0x0000, 0x0000) }, /* NVIDIA GF108 HDA */
++ {}
++};
++
++/* DMI-based denylist, to be used when:
++ * - PCI subsystem IDs are zero, impossible to distinguish from valid sound cards.
++ * - Different modifications of the same laptop use different GPU models.
++ */
++static const struct dmi_system_id driver_denylist_dmi[] = {
++ {
++ /* No HDA in NVIDIA DGPU. BIOS disables it, but quirk_nvidia_hda() reenables. */
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++ DMI_MATCH(DMI_PRODUCT_VERSION, "Ideapad Z570"),
++ },
++ .driver_data = &driver_denylist_ideapad_z570,
++ },
++ {}
++};
++
+ static const struct hda_controller_ops pci_hda_ops = {
+ .disable_msi_reset_irq = disable_msi_reset_irq,
+ .position_check = azx_position_check,
+@@ -2071,6 +2106,7 @@ static DECLARE_BITMAP(probed_devs, SNDRV_CARDS);
+ static int azx_probe(struct pci_dev *pci,
+ const struct pci_device_id *pci_id)
+ {
++ const struct dmi_system_id *dmi;
+ struct snd_card *card;
+ struct hda_intel *hda;
+ struct azx *chip;
+@@ -2083,6 +2119,12 @@ static int azx_probe(struct pci_dev *pci,
+ return -ENODEV;
+ }
+
++ dmi = dmi_first_match(driver_denylist_dmi);
++ if (dmi && pci_match_id(dmi->driver_data, pci)) {
++ dev_info(&pci->dev, "Skipping the device on the DMI denylist\n");
++ return -ENODEV;
++ }
++
+ dev = find_first_zero_bit(probed_devs, SNDRV_CARDS);
+ if (dev >= SNDRV_CARDS)
+ return -ENODEV;
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 65ece19a6dd7d3..8e482f6ecafea9 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -4742,6 +4742,22 @@ static void alc245_fixup_hp_mute_led_coefbit(struct hda_codec *codec,
+ }
+ }
+
++static void alc245_fixup_hp_mute_led_v1_coefbit(struct hda_codec *codec,
++ const struct hda_fixup *fix,
++ int action)
++{
++ struct alc_spec *spec = codec->spec;
++
++ if (action == HDA_FIXUP_ACT_PRE_PROBE) {
++ spec->mute_led_polarity = 0;
++ spec->mute_led_coef.idx = 0x0b;
++ spec->mute_led_coef.mask = 1 << 3;
++ spec->mute_led_coef.on = 1 << 3;
++ spec->mute_led_coef.off = 0;
++ snd_hda_gen_add_mute_led_cdev(codec, coef_mute_led_set);
++ }
++}
++
+ /* turn on/off mic-mute LED per capture hook by coef bit */
+ static int coef_micmute_led_set(struct led_classdev *led_cdev,
+ enum led_brightness brightness)
+@@ -7885,6 +7901,7 @@ enum {
+ ALC245_FIXUP_TAS2781_SPI_2,
+ ALC287_FIXUP_YOGA7_14ARB7_I2C,
+ ALC245_FIXUP_HP_MUTE_LED_COEFBIT,
++ ALC245_FIXUP_HP_MUTE_LED_V1_COEFBIT,
+ ALC245_FIXUP_HP_X360_MUTE_LEDS,
+ ALC287_FIXUP_THINKPAD_I2S_SPK,
+ ALC287_FIXUP_MG_RTKC_CSAMP_CS35L41_I2C_THINKPAD,
+@@ -10132,6 +10149,10 @@ static const struct hda_fixup alc269_fixups[] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc245_fixup_hp_mute_led_coefbit,
+ },
++ [ALC245_FIXUP_HP_MUTE_LED_V1_COEFBIT] = {
++ .type = HDA_FIXUP_FUNC,
++ .v.func = alc245_fixup_hp_mute_led_v1_coefbit,
++ },
+ [ALC245_FIXUP_HP_X360_MUTE_LEDS] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc245_fixup_hp_mute_led_coefbit,
+@@ -10626,6 +10647,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x103c, 0x8b97, "HP", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
+ SND_PCI_QUIRK(0x103c, 0x8bb3, "HP Slim OMEN", ALC287_FIXUP_CS35L41_I2C_2),
+ SND_PCI_QUIRK(0x103c, 0x8bb4, "HP Slim OMEN", ALC287_FIXUP_CS35L41_I2C_2),
++ SND_PCI_QUIRK(0x103c, 0x8bcd, "HP Omen 16-xd0xxx", ALC245_FIXUP_HP_MUTE_LED_V1_COEFBIT),
+ SND_PCI_QUIRK(0x103c, 0x8bdd, "HP Envy 17", ALC287_FIXUP_CS35L41_I2C_2),
+ SND_PCI_QUIRK(0x103c, 0x8bde, "HP Envy 17", ALC287_FIXUP_CS35L41_I2C_2),
+ SND_PCI_QUIRK(0x103c, 0x8bdf, "HP Envy 15", ALC287_FIXUP_CS35L41_I2C_2),
+@@ -10691,13 +10713,32 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x103c, 0x8cf5, "HP ZBook Studio 16", ALC245_FIXUP_CS35L41_SPI_4_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x8d01, "HP ZBook Power 14 G12", ALC285_FIXUP_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x8d84, "HP EliteBook X G1i", ALC285_FIXUP_HP_GPIO_LED),
++ SND_PCI_QUIRK(0x103c, 0x8d85, "HP EliteBook 14 G12", ALC285_FIXUP_HP_GPIO_LED),
++ SND_PCI_QUIRK(0x103c, 0x8d86, "HP Elite X360 14 G12", ALC285_FIXUP_HP_GPIO_LED),
++ SND_PCI_QUIRK(0x103c, 0x8d8c, "HP EliteBook 13 G12", ALC285_FIXUP_HP_GPIO_LED),
++ SND_PCI_QUIRK(0x103c, 0x8d8d, "HP Elite X360 13 G12", ALC285_FIXUP_HP_GPIO_LED),
++ SND_PCI_QUIRK(0x103c, 0x8d8e, "HP EliteBook 14 G12", ALC285_FIXUP_HP_GPIO_LED),
++ SND_PCI_QUIRK(0x103c, 0x8d8f, "HP EliteBook 14 G12", ALC285_FIXUP_HP_GPIO_LED),
++ SND_PCI_QUIRK(0x103c, 0x8d90, "HP EliteBook 16 G12", ALC285_FIXUP_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x8d91, "HP ZBook Firefly 14 G12", ALC285_FIXUP_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x8d92, "HP ZBook Firefly 16 G12", ALC285_FIXUP_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x8de8, "HP Gemtree", ALC245_FIXUP_TAS2781_SPI_2),
+ SND_PCI_QUIRK(0x103c, 0x8de9, "HP Gemtree", ALC245_FIXUP_TAS2781_SPI_2),
++ SND_PCI_QUIRK(0x103c, 0x8dec, "HP EliteBook 640 G12", ALC236_FIXUP_HP_GPIO_LED),
++ SND_PCI_QUIRK(0x103c, 0x8dee, "HP EliteBook 660 G12", ALC236_FIXUP_HP_GPIO_LED),
++ SND_PCI_QUIRK(0x103c, 0x8df0, "HP EliteBook 630 G12", ALC236_FIXUP_HP_GPIO_LED),
++ SND_PCI_QUIRK(0x103c, 0x8dfc, "HP EliteBook 645 G12", ALC236_FIXUP_HP_GPIO_LED),
++ SND_PCI_QUIRK(0x103c, 0x8dfe, "HP EliteBook 665 G12", ALC236_FIXUP_HP_GPIO_LED),
++ SND_PCI_QUIRK(0x103c, 0x8e14, "HP ZBook Firefly 14 G12", ALC285_FIXUP_HP_GPIO_LED),
++ SND_PCI_QUIRK(0x103c, 0x8e15, "HP ZBook Firefly 14 G12", ALC285_FIXUP_HP_GPIO_LED),
++ SND_PCI_QUIRK(0x103c, 0x8e16, "HP ZBook Firefly 14 G12", ALC285_FIXUP_HP_GPIO_LED),
++ SND_PCI_QUIRK(0x103c, 0x8e17, "HP ZBook Firefly 14 G12", ALC285_FIXUP_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x8e18, "HP ZBook Firefly 14 G12A", ALC285_FIXUP_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x8e19, "HP ZBook Firefly 14 G12A", ALC285_FIXUP_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x8e1a, "HP ZBook Firefly 14 G12A", ALC285_FIXUP_HP_GPIO_LED),
++ SND_PCI_QUIRK(0x103c, 0x8e1b, "HP EliteBook G12", ALC285_FIXUP_HP_GPIO_LED),
++ SND_PCI_QUIRK(0x103c, 0x8e1c, "HP EliteBook G12", ALC285_FIXUP_HP_GPIO_LED),
++ SND_PCI_QUIRK(0x103c, 0x8e2c, "HP EliteBook 16 G12", ALC285_FIXUP_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x1043, 0x103e, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC),
+ SND_PCI_QUIRK(0x1043, 0x103f, "ASUS TX300", ALC282_FIXUP_ASUS_TX300),
+ SND_PCI_QUIRK(0x1043, 0x1054, "ASUS G614FH/FM/FP", ALC287_FIXUP_CS35L41_I2C_2),
+diff --git a/sound/soc/amd/acp/acp-sdw-legacy-mach.c b/sound/soc/amd/acp/acp-sdw-legacy-mach.c
+index 9280cd30d19cf2..a0defa5d15f732 100644
+--- a/sound/soc/amd/acp/acp-sdw-legacy-mach.c
++++ b/sound/soc/amd/acp/acp-sdw-legacy-mach.c
+@@ -28,6 +28,8 @@ static void log_quirks(struct device *dev)
+ SOC_JACK_JDSRC(soc_sdw_quirk));
+ if (soc_sdw_quirk & ASOC_SDW_ACP_DMIC)
+ dev_dbg(dev, "quirk SOC_SDW_ACP_DMIC enabled\n");
++ if (soc_sdw_quirk & ASOC_SDW_CODEC_SPKR)
++ dev_dbg(dev, "quirk ASOC_SDW_CODEC_SPKR enabled\n");
+ }
+
+ static int soc_sdw_quirk_cb(const struct dmi_system_id *id)
+@@ -45,6 +47,38 @@ static const struct dmi_system_id soc_sdw_quirk_table[] = {
+ },
+ .driver_data = (void *)RT711_JD2,
+ },
++ {
++ .callback = soc_sdw_quirk_cb,
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc"),
++ DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "0D80"),
++ },
++ .driver_data = (void *)(ASOC_SDW_CODEC_SPKR),
++ },
++ {
++ .callback = soc_sdw_quirk_cb,
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc"),
++ DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "0D81"),
++ },
++ .driver_data = (void *)(ASOC_SDW_CODEC_SPKR),
++ },
++ {
++ .callback = soc_sdw_quirk_cb,
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc"),
++ DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "0D82"),
++ },
++ .driver_data = (void *)(ASOC_SDW_CODEC_SPKR),
++ },
++ {
++ .callback = soc_sdw_quirk_cb,
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc"),
++ DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "0D83"),
++ },
++ .driver_data = (void *)(ASOC_SDW_CODEC_SPKR),
++ },
+ {}
+ };
+
+diff --git a/sound/soc/amd/acp/soc_amd_sdw_common.h b/sound/soc/amd/acp/soc_amd_sdw_common.h
+index b7bae107c13e4f..ed5aec9c014588 100644
+--- a/sound/soc/amd/acp/soc_amd_sdw_common.h
++++ b/sound/soc/amd/acp/soc_amd_sdw_common.h
+@@ -22,6 +22,7 @@
+ #define SOC_JACK_JDSRC(quirk) ((quirk) & GENMASK(3, 0))
+ #define ASOC_SDW_FOUR_SPK BIT(4)
+ #define ASOC_SDW_ACP_DMIC BIT(5)
++#define ASOC_SDW_CODEC_SPKR BIT(15)
+
+ #define AMD_SDW0 0
+ #define AMD_SDW1 1
+diff --git a/sound/soc/amd/ps/acp63.h b/sound/soc/amd/ps/acp63.h
+index e54eabaa4d3e16..28d3959a416b3f 100644
+--- a/sound/soc/amd/ps/acp63.h
++++ b/sound/soc/amd/ps/acp63.h
+@@ -11,6 +11,7 @@
+ #define ACP_DEVICE_ID 0x15E2
+ #define ACP63_REG_START 0x1240000
+ #define ACP63_REG_END 0x125C000
++#define ACP63_PCI_REV 0x63
+
+ #define ACP_SOFT_RESET_SOFTRESET_AUDDONE_MASK 0x00010001
+ #define ACP_PGFSM_CNTL_POWER_ON_MASK 1
+diff --git a/sound/soc/amd/ps/pci-ps.c b/sound/soc/amd/ps/pci-ps.c
+index 8b556950b855a9..6015dd5270731f 100644
+--- a/sound/soc/amd/ps/pci-ps.c
++++ b/sound/soc/amd/ps/pci-ps.c
+@@ -562,7 +562,7 @@ static int snd_acp63_probe(struct pci_dev *pci,
+
+ /* Pink Sardine device check */
+ switch (pci->revision) {
+- case 0x63:
++ case ACP63_PCI_REV:
+ break;
+ default:
+ dev_dbg(&pci->dev, "acp63 pci device not found\n");
+diff --git a/sound/soc/amd/yc/acp6x-mach.c b/sound/soc/amd/yc/acp6x-mach.c
+index a7637056972aab..e632f16c910250 100644
+--- a/sound/soc/amd/yc/acp6x-mach.c
++++ b/sound/soc/amd/yc/acp6x-mach.c
+@@ -339,6 +339,13 @@ static const struct dmi_system_id yc_acp_quirk_table[] = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "83Q3"),
+ }
+ },
++ {
++ .driver_data = &acp6x_card,
++ .matches = {
++ DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "83J2"),
++ }
++ },
+ {
+ .driver_data = &acp6x_card,
+ .matches = {
+@@ -584,6 +591,13 @@ static const struct dmi_system_id yc_acp_quirk_table[] = {
+ DMI_MATCH(DMI_PRODUCT_VERSION, "pang13"),
+ }
+ },
++ {
++ .driver_data = &acp6x_card,
++ .matches = {
++ DMI_MATCH(DMI_BOARD_VENDOR, "Micro-Star International Co., Ltd."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "Bravo 15 C7UCX"),
++ }
++ },
+ {}
+ };
+
+diff --git a/sound/soc/codecs/wcd937x.c b/sound/soc/codecs/wcd937x.c
+index c9d5e67bf66e4e..951fd1caf84776 100644
+--- a/sound/soc/codecs/wcd937x.c
++++ b/sound/soc/codecs/wcd937x.c
+@@ -2563,6 +2563,7 @@ static int wcd937x_soc_codec_probe(struct snd_soc_component *component)
+ ARRAY_SIZE(wcd9375_dapm_widgets));
+ if (ret < 0) {
+ dev_err(component->dev, "Failed to add snd_ctls\n");
++ wcd_clsh_ctrl_free(wcd937x->clsh_info);
+ return ret;
+ }
+
+@@ -2570,6 +2571,7 @@ static int wcd937x_soc_codec_probe(struct snd_soc_component *component)
+ ARRAY_SIZE(wcd9375_audio_map));
+ if (ret < 0) {
+ dev_err(component->dev, "Failed to add routes\n");
++ wcd_clsh_ctrl_free(wcd937x->clsh_info);
+ return ret;
+ }
+ }
+diff --git a/sound/soc/fsl/fsl_audmix.c b/sound/soc/fsl/fsl_audmix.c
+index 3cd9a66b70a157..7981d598ba139b 100644
+--- a/sound/soc/fsl/fsl_audmix.c
++++ b/sound/soc/fsl/fsl_audmix.c
+@@ -488,11 +488,17 @@ static int fsl_audmix_probe(struct platform_device *pdev)
+ goto err_disable_pm;
+ }
+
+- priv->pdev = platform_device_register_data(dev, "imx-audmix", 0, NULL, 0);
+- if (IS_ERR(priv->pdev)) {
+- ret = PTR_ERR(priv->pdev);
+- dev_err(dev, "failed to register platform: %d\n", ret);
+- goto err_disable_pm;
++ /*
++	 * If the dais property exists, register the imx-audmix card driver.
++	 * Otherwise, it should be linked by the audio graph card.
++ */
++ if (of_find_property(pdev->dev.of_node, "dais", NULL)) {
++ priv->pdev = platform_device_register_data(dev, "imx-audmix", 0, NULL, 0);
++ if (IS_ERR(priv->pdev)) {
++ ret = PTR_ERR(priv->pdev);
++ dev_err(dev, "failed to register platform: %d\n", ret);
++ goto err_disable_pm;
++ }
+ }
+
+ return 0;
+diff --git a/sound/soc/intel/common/soc-acpi-intel-adl-match.c b/sound/soc/intel/common/soc-acpi-intel-adl-match.c
+index bb1324fb588e97..a68efbe98948f4 100644
+--- a/sound/soc/intel/common/soc-acpi-intel-adl-match.c
++++ b/sound/soc/intel/common/soc-acpi-intel-adl-match.c
+@@ -214,6 +214,15 @@ static const struct snd_soc_acpi_adr_device rt1316_1_group2_adr[] = {
+ }
+ };
+
++static const struct snd_soc_acpi_adr_device rt1316_2_group2_adr[] = {
++ {
++ .adr = 0x000232025D131601ull,
++ .num_endpoints = 1,
++ .endpoints = &spk_r_endpoint,
++ .name_prefix = "rt1316-2"
++ }
++};
++
+ static const struct snd_soc_acpi_adr_device rt1316_1_single_adr[] = {
+ {
+ .adr = 0x000130025D131601ull,
+@@ -547,6 +556,20 @@ static const struct snd_soc_acpi_link_adr adl_chromebook_base[] = {
+ {}
+ };
+
++static const struct snd_soc_acpi_link_adr adl_sdw_rt1316_link02[] = {
++ {
++ .mask = BIT(0),
++ .num_adr = ARRAY_SIZE(rt1316_0_group2_adr),
++ .adr_d = rt1316_0_group2_adr,
++ },
++ {
++ .mask = BIT(2),
++ .num_adr = ARRAY_SIZE(rt1316_2_group2_adr),
++ .adr_d = rt1316_2_group2_adr,
++ },
++ {}
++};
++
+ static const struct snd_soc_acpi_codecs adl_max98357a_amp = {
+ .num_codecs = 1,
+ .codecs = {"MX98357A"}
+@@ -749,6 +772,12 @@ struct snd_soc_acpi_mach snd_soc_acpi_intel_adl_sdw_machines[] = {
+ .drv_name = "sof_sdw",
+ .sof_tplg_filename = "sof-adl-sdw-max98373-rt5682.tplg",
+ },
++ {
++ .link_mask = BIT(0) | BIT(2),
++ .links = adl_sdw_rt1316_link02,
++ .drv_name = "sof_sdw",
++ .sof_tplg_filename = "sof-adl-rt1316-l02.tplg",
++ },
+ {},
+ };
+ EXPORT_SYMBOL_GPL(snd_soc_acpi_intel_adl_sdw_machines);
+diff --git a/sound/soc/qcom/qdsp6/q6apm-dai.c b/sound/soc/qcom/qdsp6/q6apm-dai.c
+index c9404b5934c7e6..2cd522108221a2 100644
+--- a/sound/soc/qcom/qdsp6/q6apm-dai.c
++++ b/sound/soc/qcom/qdsp6/q6apm-dai.c
+@@ -24,8 +24,8 @@
+ #define PLAYBACK_MIN_PERIOD_SIZE 128
+ #define CAPTURE_MIN_NUM_PERIODS 2
+ #define CAPTURE_MAX_NUM_PERIODS 8
+-#define CAPTURE_MAX_PERIOD_SIZE 4096
+-#define CAPTURE_MIN_PERIOD_SIZE 320
++#define CAPTURE_MAX_PERIOD_SIZE 65536
++#define CAPTURE_MIN_PERIOD_SIZE 6144
+ #define BUFFER_BYTES_MAX (PLAYBACK_MAX_NUM_PERIODS * PLAYBACK_MAX_PERIOD_SIZE)
+ #define BUFFER_BYTES_MIN (PLAYBACK_MIN_NUM_PERIODS * PLAYBACK_MIN_PERIOD_SIZE)
+ #define COMPR_PLAYBACK_MAX_FRAGMENT_SIZE (128 * 1024)
+@@ -64,12 +64,12 @@ struct q6apm_dai_rtd {
+ phys_addr_t phys;
+ unsigned int pcm_size;
+ unsigned int pcm_count;
+- unsigned int pos; /* Buffer position */
+ unsigned int periods;
+ unsigned int bytes_sent;
+ unsigned int bytes_received;
+ unsigned int copied_total;
+ uint16_t bits_per_sample;
++ snd_pcm_uframes_t queue_ptr;
+ bool next_track;
+ enum stream_state state;
+ struct q6apm_graph *graph;
+@@ -123,25 +123,16 @@ static void event_handler(uint32_t opcode, uint32_t token, void *payload, void *
+ {
+ struct q6apm_dai_rtd *prtd = priv;
+ struct snd_pcm_substream *substream = prtd->substream;
+- unsigned long flags;
+
+ switch (opcode) {
+ case APM_CLIENT_EVENT_CMD_EOS_DONE:
+ prtd->state = Q6APM_STREAM_STOPPED;
+ break;
+ case APM_CLIENT_EVENT_DATA_WRITE_DONE:
+- spin_lock_irqsave(&prtd->lock, flags);
+- prtd->pos += prtd->pcm_count;
+- spin_unlock_irqrestore(&prtd->lock, flags);
+ snd_pcm_period_elapsed(substream);
+- if (prtd->state == Q6APM_STREAM_RUNNING)
+- q6apm_write_async(prtd->graph, prtd->pcm_count, 0, 0, 0);
+
+ break;
+ case APM_CLIENT_EVENT_DATA_READ_DONE:
+- spin_lock_irqsave(&prtd->lock, flags);
+- prtd->pos += prtd->pcm_count;
+- spin_unlock_irqrestore(&prtd->lock, flags);
+ snd_pcm_period_elapsed(substream);
+ if (prtd->state == Q6APM_STREAM_RUNNING)
+ q6apm_read(prtd->graph);
+@@ -248,7 +239,6 @@ static int q6apm_dai_prepare(struct snd_soc_component *component,
+ }
+
+ prtd->pcm_count = snd_pcm_lib_period_bytes(substream);
+- prtd->pos = 0;
+ /* rate and channels are sent to audio driver */
+ ret = q6apm_graph_media_format_shmem(prtd->graph, &cfg);
+ if (ret < 0) {
+@@ -294,6 +284,27 @@ static int q6apm_dai_prepare(struct snd_soc_component *component,
+ return 0;
+ }
+
++static int q6apm_dai_ack(struct snd_soc_component *component, struct snd_pcm_substream *substream)
++{
++ struct snd_pcm_runtime *runtime = substream->runtime;
++ struct q6apm_dai_rtd *prtd = runtime->private_data;
++ int i, ret = 0, avail_periods;
++
++ if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) {
++ avail_periods = (runtime->control->appl_ptr - prtd->queue_ptr)/runtime->period_size;
++ for (i = 0; i < avail_periods; i++) {
++ ret = q6apm_write_async(prtd->graph, prtd->pcm_count, 0, 0, NO_TIMESTAMP);
++ if (ret < 0) {
++ dev_err(component->dev, "Error queuing playback buffer %d\n", ret);
++ return ret;
++ }
++ prtd->queue_ptr += runtime->period_size;
++ }
++ }
++
++ return ret;
++}
++
+ static int q6apm_dai_trigger(struct snd_soc_component *component,
+ struct snd_pcm_substream *substream, int cmd)
+ {
+@@ -305,9 +316,6 @@ static int q6apm_dai_trigger(struct snd_soc_component *component,
+ case SNDRV_PCM_TRIGGER_START:
+ case SNDRV_PCM_TRIGGER_RESUME:
+ case SNDRV_PCM_TRIGGER_PAUSE_RELEASE:
+- /* start writing buffers for playback only as we already queued capture buffers */
+- if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK)
+- ret = q6apm_write_async(prtd->graph, prtd->pcm_count, 0, 0, 0);
+ break;
+ case SNDRV_PCM_TRIGGER_STOP:
+ /* TODO support be handled via SoftPause Module */
+@@ -377,13 +385,14 @@ static int q6apm_dai_open(struct snd_soc_component *component,
+ }
+ }
+
+- ret = snd_pcm_hw_constraint_step(runtime, 0, SNDRV_PCM_HW_PARAM_PERIOD_BYTES, 32);
++ /* setup 10ms latency to accommodate DSP restrictions */
++ ret = snd_pcm_hw_constraint_step(runtime, 0, SNDRV_PCM_HW_PARAM_PERIOD_SIZE, 480);
+ if (ret < 0) {
+ dev_err(dev, "constraint for period bytes step ret = %d\n", ret);
+ goto err;
+ }
+
+- ret = snd_pcm_hw_constraint_step(runtime, 0, SNDRV_PCM_HW_PARAM_BUFFER_BYTES, 32);
++ ret = snd_pcm_hw_constraint_step(runtime, 0, SNDRV_PCM_HW_PARAM_BUFFER_SIZE, 480);
+ if (ret < 0) {
+ dev_err(dev, "constraint for buffer bytes step ret = %d\n", ret);
+ goto err;
+@@ -428,16 +437,12 @@ static snd_pcm_uframes_t q6apm_dai_pointer(struct snd_soc_component *component,
+ struct snd_pcm_runtime *runtime = substream->runtime;
+ struct q6apm_dai_rtd *prtd = runtime->private_data;
+ snd_pcm_uframes_t ptr;
+- unsigned long flags;
+
+- spin_lock_irqsave(&prtd->lock, flags);
+- if (prtd->pos == prtd->pcm_size)
+- prtd->pos = 0;
+-
+- ptr = bytes_to_frames(runtime, prtd->pos);
+- spin_unlock_irqrestore(&prtd->lock, flags);
++ ptr = q6apm_get_hw_pointer(prtd->graph, substream->stream) * runtime->period_size;
++ if (ptr)
++ return ptr - 1;
+
+- return ptr;
++ return 0;
+ }
+
+ static int q6apm_dai_hw_params(struct snd_soc_component *component,
+@@ -652,8 +657,6 @@ static int q6apm_dai_compr_set_params(struct snd_soc_component *component,
+ prtd->pcm_size = runtime->fragments * runtime->fragment_size;
+ prtd->bits_per_sample = 16;
+
+- prtd->pos = 0;
+-
+ if (prtd->next_track != true) {
+ memcpy(&prtd->codec, codec, sizeof(*codec));
+
+@@ -836,6 +839,7 @@ static const struct snd_soc_component_driver q6apm_fe_dai_component = {
+ .hw_params = q6apm_dai_hw_params,
+ .pointer = q6apm_dai_pointer,
+ .trigger = q6apm_dai_trigger,
++ .ack = q6apm_dai_ack,
+ .compress_ops = &q6apm_dai_compress_ops,
+ .use_dai_pcm_id = true,
+ };
+diff --git a/sound/soc/qcom/qdsp6/q6apm.c b/sound/soc/qcom/qdsp6/q6apm.c
+index 2a2a5bd98110bc..ca57413cb7847a 100644
+--- a/sound/soc/qcom/qdsp6/q6apm.c
++++ b/sound/soc/qcom/qdsp6/q6apm.c
+@@ -494,6 +494,19 @@ int q6apm_read(struct q6apm_graph *graph)
+ }
+ EXPORT_SYMBOL_GPL(q6apm_read);
+
++int q6apm_get_hw_pointer(struct q6apm_graph *graph, int dir)
++{
++ struct audioreach_graph_data *data;
++
++ if (dir == SNDRV_PCM_STREAM_PLAYBACK)
++ data = &graph->rx_data;
++ else
++ data = &graph->tx_data;
++
++ return (int)atomic_read(&data->hw_ptr);
++}
++EXPORT_SYMBOL_GPL(q6apm_get_hw_pointer);
++
+ static int graph_callback(struct gpr_resp_pkt *data, void *priv, int op)
+ {
+ struct data_cmd_rsp_rd_sh_mem_ep_data_buffer_done_v2 *rd_done;
+@@ -520,7 +533,8 @@ static int graph_callback(struct gpr_resp_pkt *data, void *priv, int op)
+ done = data->payload;
+ phys = graph->rx_data.buf[token].phys;
+ mutex_unlock(&graph->lock);
+-
++ /* token numbering starts at 0 */
++ atomic_set(&graph->rx_data.hw_ptr, token + 1);
+ if (lower_32_bits(phys) == done->buf_addr_lsw &&
+ upper_32_bits(phys) == done->buf_addr_msw) {
+ graph->result.opcode = hdr->opcode;
+@@ -553,6 +567,8 @@ static int graph_callback(struct gpr_resp_pkt *data, void *priv, int op)
+ rd_done = data->payload;
+ phys = graph->tx_data.buf[hdr->token].phys;
+ mutex_unlock(&graph->lock);
++ /* token numbering starts at 0 */
++ atomic_set(&graph->tx_data.hw_ptr, hdr->token + 1);
+
+ if (upper_32_bits(phys) == rd_done->buf_addr_msw &&
+ lower_32_bits(phys) == rd_done->buf_addr_lsw) {
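To illustrate the pointer accounting introduced here: the DSP completion token counts buffers from 0, so after the Nth period completes hw_ptr is set to N and q6apm_dai_pointer() reports N * period_size - 1 frames (the -1 presumably keeps the reported position just inside the completed period rather than exactly on the boundary), while a zero hw_ptr simply reports 0.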
+diff --git a/sound/soc/qcom/qdsp6/q6apm.h b/sound/soc/qcom/qdsp6/q6apm.h
+index c248c8d2b1ab7f..7ce08b401e3102 100644
+--- a/sound/soc/qcom/qdsp6/q6apm.h
++++ b/sound/soc/qcom/qdsp6/q6apm.h
+@@ -2,6 +2,7 @@
+ #ifndef __Q6APM_H__
+ #define __Q6APM_H__
+ #include <linux/types.h>
++#include <linux/atomic.h>
+ #include <linux/slab.h>
+ #include <linux/wait.h>
+ #include <linux/kernel.h>
+@@ -77,6 +78,7 @@ struct audioreach_graph_data {
+ uint32_t num_periods;
+ uint32_t dsp_buf;
+ uint32_t mem_map_handle;
++ atomic_t hw_ptr;
+ };
+
+ struct audioreach_graph {
+@@ -150,4 +152,5 @@ int q6apm_enable_compress_module(struct device *dev, struct q6apm_graph *graph,
+ int q6apm_remove_initial_silence(struct device *dev, struct q6apm_graph *graph, uint32_t samples);
+ int q6apm_remove_trailing_silence(struct device *dev, struct q6apm_graph *graph, uint32_t samples);
+ int q6apm_set_real_module_id(struct device *dev, struct q6apm_graph *graph, uint32_t codec_id);
++int q6apm_get_hw_pointer(struct q6apm_graph *graph, int dir);
+ #endif /* __APM_GRAPH_ */
+diff --git a/sound/soc/qcom/qdsp6/q6asm-dai.c b/sound/soc/qcom/qdsp6/q6asm-dai.c
+index 045100c9435271..a400c9a31fead5 100644
+--- a/sound/soc/qcom/qdsp6/q6asm-dai.c
++++ b/sound/soc/qcom/qdsp6/q6asm-dai.c
+@@ -892,9 +892,7 @@ static int q6asm_dai_compr_set_params(struct snd_soc_component *component,
+
+ if (ret < 0) {
+ dev_err(dev, "q6asm_open_write failed\n");
+- q6asm_audio_client_free(prtd->audio_client);
+- prtd->audio_client = NULL;
+- return ret;
++ goto open_err;
+ }
+ }
+
+@@ -903,7 +901,7 @@ static int q6asm_dai_compr_set_params(struct snd_soc_component *component,
+ prtd->session_id, dir);
+ if (ret) {
+ dev_err(dev, "Stream reg failed ret:%d\n", ret);
+- return ret;
++ goto q6_err;
+ }
+
+ ret = __q6asm_dai_compr_set_codec_params(component, stream,
+@@ -911,7 +909,7 @@ static int q6asm_dai_compr_set_params(struct snd_soc_component *component,
+ prtd->stream_id);
+ if (ret) {
+ dev_err(dev, "codec param setup failed ret:%d\n", ret);
+- return ret;
++ goto q6_err;
+ }
+
+ ret = q6asm_map_memory_regions(dir, prtd->audio_client, prtd->phys,
+@@ -920,12 +918,21 @@ static int q6asm_dai_compr_set_params(struct snd_soc_component *component,
+
+ if (ret < 0) {
+ dev_err(dev, "Buffer Mapping failed ret:%d\n", ret);
+- return -ENOMEM;
++ ret = -ENOMEM;
++ goto q6_err;
+ }
+
+ prtd->state = Q6ASM_STREAM_RUNNING;
+
+ return 0;
++
++q6_err:
++ q6asm_cmd(prtd->audio_client, prtd->stream_id, CMD_CLOSE);
++
++open_err:
++ q6asm_audio_client_free(prtd->audio_client);
++ prtd->audio_client = NULL;
++ return ret;
+ }
+
+ static int q6asm_dai_compr_set_metadata(struct snd_soc_component *component,
+diff --git a/sound/soc/sof/topology.c b/sound/soc/sof/topology.c
+index 688cc7ac17148a..dc9cb832406783 100644
+--- a/sound/soc/sof/topology.c
++++ b/sound/soc/sof/topology.c
+@@ -1273,8 +1273,8 @@ static int sof_widget_parse_tokens(struct snd_soc_component *scomp, struct snd_s
+ struct snd_sof_tuple *new_tuples;
+
+ num_tuples += token_list[object_token_list[i]].count * (num_sets - 1);
+- new_tuples = krealloc(swidget->tuples,
+- sizeof(*new_tuples) * num_tuples, GFP_KERNEL);
++ new_tuples = krealloc_array(swidget->tuples,
++ num_tuples, sizeof(*new_tuples), GFP_KERNEL);
+ if (!new_tuples) {
+ ret = -ENOMEM;
+ goto err;
+diff --git a/sound/usb/midi.c b/sound/usb/midi.c
+index 779d97d31f170e..826ac870f24690 100644
+--- a/sound/usb/midi.c
++++ b/sound/usb/midi.c
+@@ -489,16 +489,84 @@ static void ch345_broken_sysex_input(struct snd_usb_midi_in_endpoint *ep,
+
+ /*
+ * CME protocol: like the standard protocol, but SysEx commands are sent as a
+- * single USB packet preceded by a 0x0F byte.
++ * single USB packet preceded by a 0x0F byte, as are system realtime
++ * messages and MIDI Active Sensing.
++ * Also, multiple messages can be sent in the same packet.
+ */
+ static void snd_usbmidi_cme_input(struct snd_usb_midi_in_endpoint *ep,
+ uint8_t *buffer, int buffer_length)
+ {
+- if (buffer_length < 2 || (buffer[0] & 0x0f) != 0x0f)
+- snd_usbmidi_standard_input(ep, buffer, buffer_length);
+- else
+- snd_usbmidi_input_data(ep, buffer[0] >> 4,
+- &buffer[1], buffer_length - 1);
++ int remaining = buffer_length;
++
++ /*
++	 * CME devices send sysex, song position pointer, system realtime
++ * and active sensing using CIN 0x0f, which in the standard
++ * is only intended for single byte unparsed data.
++ * So we need to interpret these here before sending them on.
++ * By default, we assume single byte data, which is true
++ * for system realtime (midi clock, start, stop and continue)
++ * and active sensing, and handle the other (known) cases
++ * separately.
++ * In contrast to the standard, CME does not split sysex
++ * into multiple 4-byte packets, but lumps everything together
++ * into one. In addition, CME can string multiple messages
++ * together in the same packet; pressing the Record button
++	 * on a UF6 sends a sysex message directly followed
++ * by a song position pointer in the same packet.
++ * For it to have any reasonable meaning, a sysex message
++ * needs to be at least 3 bytes in length (0xf0, id, 0xf7),
++ * corresponding to a packet size of 4 bytes, and the ones sent
++ * by CME devices are 6 or 7 bytes, making the packet fragments
++ * 7 or 8 bytes long (six or seven bytes plus preceding CN+CIN byte).
++ * For the other types, the packet size is always 4 bytes,
++ * as per the standard, with the data size being 3 for SPP
++ * and 1 for the others.
++ * Thus all packet fragments are at least 4 bytes long, so we can
++	 * skip anything that is shorter; this also conveniently skips
++ * packets with size 0, which CME devices continuously send when
++ * they have nothing better to do.
++ * Another quirk is that sometimes multiple messages are sent
++ * in the same packet. This has been observed for midi clock
++ * and active sensing i.e. 0x0f 0xf8 0x00 0x00 0x0f 0xfe 0x00 0x00,
++ * but also multiple note ons/offs, and control change together
++ * with MIDI clock. Similarly, some sysex messages are followed by
++ * the song position pointer in the same packet, and occasionally
++ * additionally by a midi clock or active sensing.
++ * We handle this by looping over all data and parsing it along the way.
++ */
++ while (remaining >= 4) {
++ int source_length = 4; /* default */
++
++ if ((buffer[0] & 0x0f) == 0x0f) {
++ int data_length = 1; /* default */
++
++ if (buffer[1] == 0xf0) {
++ /* Sysex: Find EOX and send on whole message. */
++ /* To kick off the search, skip the first
++ * two bytes (CN+CIN and SYSEX (0xf0).
++ */
++ uint8_t *tmp_buf = buffer + 2;
++ int tmp_length = remaining - 2;
++
++ while (tmp_length > 1 && *tmp_buf != 0xf7) {
++ tmp_buf++;
++ tmp_length--;
++ }
++ data_length = tmp_buf - buffer;
++ source_length = data_length + 1;
++ } else if (buffer[1] == 0xf2) {
++ /* Three byte song position pointer */
++ data_length = 3;
++ }
++ snd_usbmidi_input_data(ep, buffer[0] >> 4,
++ &buffer[1], data_length);
++ } else {
++ /* normal channel events */
++ snd_usbmidi_standard_input(ep, buffer, source_length);
++ }
++ buffer += source_length;
++ remaining -= source_length;
++ }
+ }
+
+ /*
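As a worked example of the parser above, using the packet quoted in its comment: 0x0f 0xf8 0x00 0x00 0x0f 0xfe 0x00 0x00 is consumed in two 4-byte steps; each fragment has CIN 0x0f and a first data byte that is neither 0xf0 nor 0xf2, so data_length stays 1 and snd_usbmidi_input_data() is called twice, once with 0xf8 (MIDI clock) and once with 0xfe (Active Sensing). A hypothetical sysex fragment such as 0x0f 0xf0 0x42 0x10 0xf7 (bytes 0x42 and 0x10 invented for illustration) would instead scan forward to the 0xf7 terminator, hand the four bytes 0xf0 0x42 0x10 0xf7 to snd_usbmidi_input_data() in one call, and advance by five bytes.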
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index 159fb130e28270..9f4c54fe6f56f5 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -3846,6 +3846,11 @@ static int validate_unret(struct objtool_file *file, struct instruction *insn)
+ WARN_INSN(insn, "RET before UNTRAIN");
+ return 1;
+
++ case INSN_CONTEXT_SWITCH:
++ if (insn_func(insn))
++ break;
++ return 0;
++
+ case INSN_NOP:
+ if (insn->retpoline_safe)
+ return 0;
+diff --git a/tools/power/cpupower/bench/parse.c b/tools/power/cpupower/bench/parse.c
+index 080678d9d74e25..bd67c758b33ac3 100644
+--- a/tools/power/cpupower/bench/parse.c
++++ b/tools/power/cpupower/bench/parse.c
+@@ -121,6 +121,10 @@ FILE *prepare_output(const char *dirname)
+ struct config *prepare_default_config()
+ {
+ struct config *config = malloc(sizeof(struct config));
++ if (!config) {
++ perror("malloc");
++ return NULL;
++ }
+
+ dprintf("loading defaults\n");
+
+diff --git a/tools/testing/ktest/ktest.pl b/tools/testing/ktest/ktest.pl
+index 8c8da966c641bc..a5f7fdd0c1fbbb 100755
+--- a/tools/testing/ktest/ktest.pl
++++ b/tools/testing/ktest/ktest.pl
+@@ -4303,6 +4303,14 @@ if (defined($opt{"LOG_FILE"})) {
+ if ($opt{"CLEAR_LOG"}) {
+ unlink $opt{"LOG_FILE"};
+ }
++
++ if (! -e $opt{"LOG_FILE"} && $opt{"LOG_FILE"} =~ m,^(.*/),) {
++ my $dir = $1;
++ if (! -d $dir) {
++ mkpath($dir) or die "Failed to create directories '$dir': $!";
++ print "\nThe log directory $dir did not exist, so it was created.\n";
++ }
++ }
+ open(LOG, ">> $opt{LOG_FILE}") or die "Can't write to $opt{LOG_FILE}";
+ LOG->autoflush(1);
+ }
+diff --git a/tools/testing/selftests/futex/functional/futex_wait_wouldblock.c b/tools/testing/selftests/futex/functional/futex_wait_wouldblock.c
+index 7d7a6a06cdb75b..2d8230da906429 100644
+--- a/tools/testing/selftests/futex/functional/futex_wait_wouldblock.c
++++ b/tools/testing/selftests/futex/functional/futex_wait_wouldblock.c
+@@ -98,7 +98,7 @@ int main(int argc, char *argv[])
+ info("Calling futex_waitv on f1: %u @ %p with val=%u\n", f1, &f1, f1+1);
+ res = futex_waitv(&waitv, 1, 0, &to, CLOCK_MONOTONIC);
+ if (!res || errno != EWOULDBLOCK) {
+- ksft_test_result_pass("futex_waitv returned: %d %s\n",
++ ksft_test_result_fail("futex_waitv returned: %d %s\n",
+ res ? errno : res,
+ res ? strerror(errno) : "");
+ ret = RET_FAIL;
+diff --git a/tools/testing/selftests/landlock/base_test.c b/tools/testing/selftests/landlock/base_test.c
+index 1bc16fde2e8aea..4766f8fec9f605 100644
+--- a/tools/testing/selftests/landlock/base_test.c
++++ b/tools/testing/selftests/landlock/base_test.c
+@@ -98,10 +98,54 @@ TEST(abi_version)
+ ASSERT_EQ(EINVAL, errno);
+ }
+
++/*
++ * Old source trees might not have the set of Kselftest fixes related to kernel
++ * UAPI headers.
++ */
++#ifndef LANDLOCK_CREATE_RULESET_ERRATA
++#define LANDLOCK_CREATE_RULESET_ERRATA (1U << 1)
++#endif
++
++TEST(errata)
++{
++ const struct landlock_ruleset_attr ruleset_attr = {
++ .handled_access_fs = LANDLOCK_ACCESS_FS_READ_FILE,
++ };
++ int errata;
++
++ errata = landlock_create_ruleset(NULL, 0,
++ LANDLOCK_CREATE_RULESET_ERRATA);
++ /* The errata bitmask will not be backported to tests. */
++ ASSERT_LE(0, errata);
++ TH_LOG("errata: 0x%x", errata);
++
++ ASSERT_EQ(-1, landlock_create_ruleset(&ruleset_attr, 0,
++ LANDLOCK_CREATE_RULESET_ERRATA));
++ ASSERT_EQ(EINVAL, errno);
++
++ ASSERT_EQ(-1, landlock_create_ruleset(NULL, sizeof(ruleset_attr),
++ LANDLOCK_CREATE_RULESET_ERRATA));
++ ASSERT_EQ(EINVAL, errno);
++
++ ASSERT_EQ(-1,
++ landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr),
++ LANDLOCK_CREATE_RULESET_ERRATA));
++ ASSERT_EQ(EINVAL, errno);
++
++ ASSERT_EQ(-1, landlock_create_ruleset(
++ NULL, 0,
++ LANDLOCK_CREATE_RULESET_VERSION |
++ LANDLOCK_CREATE_RULESET_ERRATA));
++ ASSERT_EQ(-1, landlock_create_ruleset(NULL, 0,
++ LANDLOCK_CREATE_RULESET_ERRATA |
++ 1 << 31));
++ ASSERT_EQ(EINVAL, errno);
++}
++
+ /* Tests ordering of syscall argument checks. */
+ TEST(create_ruleset_checks_ordering)
+ {
+- const int last_flag = LANDLOCK_CREATE_RULESET_VERSION;
++ const int last_flag = LANDLOCK_CREATE_RULESET_ERRATA;
+ const int invalid_flag = last_flag << 1;
+ int ruleset_fd;
+ const struct landlock_ruleset_attr ruleset_attr = {
+diff --git a/tools/testing/selftests/landlock/common.h b/tools/testing/selftests/landlock/common.h
+index 6064c9ac05329d..076a9a625c98d1 100644
+--- a/tools/testing/selftests/landlock/common.h
++++ b/tools/testing/selftests/landlock/common.h
+@@ -41,6 +41,7 @@ static void _init_caps(struct __test_metadata *const _metadata, bool drop_all)
+ CAP_MKNOD,
+ CAP_NET_ADMIN,
+ CAP_NET_BIND_SERVICE,
++ CAP_SETUID,
+ CAP_SYS_ADMIN,
+ CAP_SYS_CHROOT,
+ /* clang-format on */
+diff --git a/tools/testing/selftests/landlock/scoped_signal_test.c b/tools/testing/selftests/landlock/scoped_signal_test.c
+index 475ee62a832d6d..d8bf33417619f6 100644
+--- a/tools/testing/selftests/landlock/scoped_signal_test.c
++++ b/tools/testing/selftests/landlock/scoped_signal_test.c
+@@ -249,47 +249,67 @@ TEST_F(scoped_domains, check_access_signal)
+ _metadata->exit_code = KSFT_FAIL;
+ }
+
+-static int thread_pipe[2];
+-
+ enum thread_return {
+ THREAD_INVALID = 0,
+ THREAD_SUCCESS = 1,
+ THREAD_ERROR = 2,
++ THREAD_TEST_FAILED = 3,
+ };
+
+-void *thread_func(void *arg)
++static void *thread_sync(void *arg)
+ {
++ const int pipe_read = *(int *)arg;
+ char buf;
+
+- if (read(thread_pipe[0], &buf, 1) != 1)
++ if (read(pipe_read, &buf, 1) != 1)
+ return (void *)THREAD_ERROR;
+
+ return (void *)THREAD_SUCCESS;
+ }
+
+-TEST(signal_scoping_threads)
++TEST(signal_scoping_thread_before)
+ {
+- pthread_t no_sandbox_thread, scoped_thread;
++ pthread_t no_sandbox_thread;
+ enum thread_return ret = THREAD_INVALID;
++ int thread_pipe[2];
+
+ drop_caps(_metadata);
+ ASSERT_EQ(0, pipe2(thread_pipe, O_CLOEXEC));
+
+- ASSERT_EQ(0,
+- pthread_create(&no_sandbox_thread, NULL, thread_func, NULL));
++ ASSERT_EQ(0, pthread_create(&no_sandbox_thread, NULL, thread_sync,
++ &thread_pipe[0]));
+
+- /* Restricts the domain after creating the first thread. */
++ /* Enforces restriction after creating the thread. */
+ create_scoped_domain(_metadata, LANDLOCK_SCOPE_SIGNAL);
+
+- ASSERT_EQ(EPERM, pthread_kill(no_sandbox_thread, 0));
+- ASSERT_EQ(1, write(thread_pipe[1], ".", 1));
+-
+- ASSERT_EQ(0, pthread_create(&scoped_thread, NULL, thread_func, NULL));
+- ASSERT_EQ(0, pthread_kill(scoped_thread, 0));
+- ASSERT_EQ(1, write(thread_pipe[1], ".", 1));
++ EXPECT_EQ(0, pthread_kill(no_sandbox_thread, 0));
++ EXPECT_EQ(1, write(thread_pipe[1], ".", 1));
+
+ EXPECT_EQ(0, pthread_join(no_sandbox_thread, (void **)&ret));
+ EXPECT_EQ(THREAD_SUCCESS, ret);
++
++ EXPECT_EQ(0, close(thread_pipe[0]));
++ EXPECT_EQ(0, close(thread_pipe[1]));
++}
++
++TEST(signal_scoping_thread_after)
++{
++ pthread_t scoped_thread;
++ enum thread_return ret = THREAD_INVALID;
++ int thread_pipe[2];
++
++ drop_caps(_metadata);
++ ASSERT_EQ(0, pipe2(thread_pipe, O_CLOEXEC));
++
++ /* Enforces restriction before creating the thread. */
++ create_scoped_domain(_metadata, LANDLOCK_SCOPE_SIGNAL);
++
++ ASSERT_EQ(0, pthread_create(&scoped_thread, NULL, thread_sync,
++ &thread_pipe[0]));
++
++ EXPECT_EQ(0, pthread_kill(scoped_thread, 0));
++ EXPECT_EQ(1, write(thread_pipe[1], ".", 1));
++
+ EXPECT_EQ(0, pthread_join(scoped_thread, (void **)&ret));
+ EXPECT_EQ(THREAD_SUCCESS, ret);
+
+@@ -297,6 +317,64 @@ TEST(signal_scoping_threads)
+ EXPECT_EQ(0, close(thread_pipe[1]));
+ }
+
++struct thread_setuid_args {
++ int pipe_read, new_uid;
++};
++
++void *thread_setuid(void *ptr)
++{
++ const struct thread_setuid_args *arg = ptr;
++ char buf;
++
++ if (read(arg->pipe_read, &buf, 1) != 1)
++ return (void *)THREAD_ERROR;
++
++	/* libc's setuid() should update all threads' credentials. */
++ if (getuid() != arg->new_uid)
++ return (void *)THREAD_TEST_FAILED;
++
++ return (void *)THREAD_SUCCESS;
++}
++
++TEST(signal_scoping_thread_setuid)
++{
++ struct thread_setuid_args arg;
++ pthread_t no_sandbox_thread;
++ enum thread_return ret = THREAD_INVALID;
++ int pipe_parent[2];
++ int prev_uid;
++
++ disable_caps(_metadata);
++
++ /* This test does not need to be run as root. */
++ prev_uid = getuid();
++ arg.new_uid = prev_uid + 1;
++ EXPECT_LT(0, arg.new_uid);
++
++ ASSERT_EQ(0, pipe2(pipe_parent, O_CLOEXEC));
++ arg.pipe_read = pipe_parent[0];
++
++ /* Capabilities must be set before creating a new thread. */
++ set_cap(_metadata, CAP_SETUID);
++ ASSERT_EQ(0, pthread_create(&no_sandbox_thread, NULL, thread_setuid,
++ &arg));
++
++ /* Enforces restriction after creating the thread. */
++ create_scoped_domain(_metadata, LANDLOCK_SCOPE_SIGNAL);
++
++ EXPECT_NE(arg.new_uid, getuid());
++ EXPECT_EQ(0, setuid(arg.new_uid));
++ EXPECT_EQ(arg.new_uid, getuid());
++ EXPECT_EQ(1, write(pipe_parent[1], ".", 1));
++
++ EXPECT_EQ(0, pthread_join(no_sandbox_thread, (void **)&ret));
++ EXPECT_EQ(THREAD_SUCCESS, ret);
++
++ clear_cap(_metadata, CAP_SETUID);
++ EXPECT_EQ(0, close(pipe_parent[0]));
++ EXPECT_EQ(0, close(pipe_parent[1]));
++}
++
+ const short backlog = 10;
+
+ static volatile sig_atomic_t signal_received;
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_connect.c b/tools/testing/selftests/net/mptcp/mptcp_connect.c
+index d240d02fa443a1..c83a8b47bbdfa5 100644
+--- a/tools/testing/selftests/net/mptcp/mptcp_connect.c
++++ b/tools/testing/selftests/net/mptcp/mptcp_connect.c
+@@ -1270,7 +1270,7 @@ int main_loop(void)
+
+ if (cfg_input && cfg_sockopt_types.mptfo) {
+ fd_in = open(cfg_input, O_RDONLY);
+- if (fd < 0)
++ if (fd_in < 0)
+ xerror("can't open %s:%d", cfg_input, errno);
+ }
+
+@@ -1293,13 +1293,13 @@ int main_loop(void)
+
+ if (cfg_input && !cfg_sockopt_types.mptfo) {
+ fd_in = open(cfg_input, O_RDONLY);
+- if (fd < 0)
++ if (fd_in < 0)
+ xerror("can't open %s:%d", cfg_input, errno);
+ }
+
+ ret = copyfd_io(fd_in, fd, 1, 0, &winfo);
+ if (ret)
+- return ret;
++ goto out;
+
+ if (cfg_truncate > 0) {
+ shutdown(fd, SHUT_WR);
+@@ -1320,7 +1320,10 @@ int main_loop(void)
+ close(fd);
+ }
+
+- return 0;
++out:
++ if (cfg_input)
++ close(fd_in);
++ return ret;
+ }
+
+ int parse_proto(const char *proto)
+diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
+index 54e959e7d68fb4..570938f0455c29 100644
+--- a/virt/kvm/Kconfig
++++ b/virt/kvm/Kconfig
+@@ -75,7 +75,7 @@ config KVM_COMPAT
+ depends on KVM && COMPAT && !(S390 || ARM64 || RISCV)
+
+ config HAVE_KVM_IRQ_BYPASS
+- bool
++ tristate
+ select IRQ_BYPASS_MANAGER
+
+ config HAVE_KVM_VCPU_ASYNC_IOCTL
+diff --git a/virt/kvm/eventfd.c b/virt/kvm/eventfd.c
+index 249ba5b72e9b09..11e5d1e3f12eae 100644
+--- a/virt/kvm/eventfd.c
++++ b/virt/kvm/eventfd.c
+@@ -149,7 +149,7 @@ irqfd_shutdown(struct work_struct *work)
+ /*
+ * It is now safe to release the object's resources
+ */
+-#ifdef CONFIG_HAVE_KVM_IRQ_BYPASS
++#if IS_ENABLED(CONFIG_HAVE_KVM_IRQ_BYPASS)
+ irq_bypass_unregister_consumer(&irqfd->consumer);
+ #endif
+ eventfd_ctx_put(irqfd->eventfd);
+@@ -274,7 +274,7 @@ static void irqfd_update(struct kvm *kvm, struct kvm_kernel_irqfd *irqfd)
+ write_seqcount_end(&irqfd->irq_entry_sc);
+ }
+
+-#ifdef CONFIG_HAVE_KVM_IRQ_BYPASS
++#if IS_ENABLED(CONFIG_HAVE_KVM_IRQ_BYPASS)
+ void __attribute__((weak)) kvm_arch_irq_bypass_stop(
+ struct irq_bypass_consumer *cons)
+ {
+@@ -424,7 +424,7 @@ kvm_irqfd_assign(struct kvm *kvm, struct kvm_irqfd *args)
+ if (events & EPOLLIN)
+ schedule_work(&irqfd->inject);
+
+-#ifdef CONFIG_HAVE_KVM_IRQ_BYPASS
++#if IS_ENABLED(CONFIG_HAVE_KVM_IRQ_BYPASS)
+ if (kvm_arch_has_irq_bypass()) {
+ irqfd->consumer.token = (void *)irqfd->eventfd;
+ irqfd->consumer.add_producer = kvm_arch_irq_bypass_add_producer;
+@@ -609,14 +609,14 @@ void kvm_irq_routing_update(struct kvm *kvm)
+ spin_lock_irq(&kvm->irqfds.lock);
+
+ list_for_each_entry(irqfd, &kvm->irqfds.items, list) {
+-#ifdef CONFIG_HAVE_KVM_IRQ_BYPASS
++#if IS_ENABLED(CONFIG_HAVE_KVM_IRQ_BYPASS)
+ /* Under irqfds.lock, so can read irq_entry safely */
+ struct kvm_kernel_irq_routing_entry old = irqfd->irq_entry;
+ #endif
+
+ irqfd_update(kvm, irqfd);
+
+-#ifdef CONFIG_HAVE_KVM_IRQ_BYPASS
++#if IS_ENABLED(CONFIG_HAVE_KVM_IRQ_BYPASS)
+ if (irqfd->producer &&
+ kvm_arch_irqfd_route_changed(&old, &irqfd->irq_entry)) {
+ int ret = kvm_arch_update_irqfd_routing(
* [gentoo-commits] proj/linux-patches:6.14 commit in: /
@ 2025-04-25 11:46 Mike Pagano
0 siblings, 0 replies; 21+ messages in thread
From: Mike Pagano @ 2025-04-25 11:46 UTC (permalink / raw
To: gentoo-commits
commit: 1fcb559b7f413319247e04c5dd68208b89eba3d4
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Apr 25 11:46:44 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Apr 25 11:46:44 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=1fcb559b
Linux patch 6.14.4
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1003_linux-6.14.4.patch | 8688 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 8692 insertions(+)
diff --git a/0000_README b/0000_README
index 83dda9ca..21d8b648 100644
--- a/0000_README
+++ b/0000_README
@@ -54,6 +54,10 @@ Patch: 1002_linux-6.14.3.patch
From: https://www.kernel.org
Desc: Linux 6.14.3
+Patch: 1003_linux-6.14.4.patch
+From: https://www.kernel.org
+Desc: Linux 6.14.4
+
Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
Desc: Enable link security restrictions by default.
diff --git a/1003_linux-6.14.4.patch b/1003_linux-6.14.4.patch
new file mode 100644
index 00000000..19c4136a
--- /dev/null
+++ b/1003_linux-6.14.4.patch
@@ -0,0 +1,8688 @@
+diff --git a/Documentation/arch/arm64/booting.rst b/Documentation/arch/arm64/booting.rst
+index cad6fdc96b98bf..dee7b6de864fcf 100644
+--- a/Documentation/arch/arm64/booting.rst
++++ b/Documentation/arch/arm64/booting.rst
+@@ -288,6 +288,12 @@ Before jumping into the kernel, the following conditions must be met:
+
+ - SCR_EL3.FGTEn (bit 27) must be initialised to 0b1.
+
++ For CPUs with the Fine Grained Traps 2 (FEAT_FGT2) extension present:
++
++ - If EL3 is present and the kernel is entered at EL2:
++
++ - SCR_EL3.FGTEn2 (bit 59) must be initialised to 0b1.
++
+ For CPUs with support for HCRX_EL2 (FEAT_HCX) present:
+
+ - If EL3 is present and the kernel is entered at EL2:
+@@ -382,6 +388,22 @@ Before jumping into the kernel, the following conditions must be met:
+
+ - SMCR_EL2.EZT0 (bit 30) must be initialised to 0b1.
+
++ For CPUs with the Performance Monitors Extension (FEAT_PMUv3p9):
++
++ - If EL3 is present:
++
++ - MDCR_EL3.EnPM2 (bit 7) must be initialised to 0b1.
++
++ - If the kernel is entered at EL1 and EL2 is present:
++
++ - HDFGRTR2_EL2.nPMICNTR_EL0 (bit 2) must be initialised to 0b1.
++ - HDFGRTR2_EL2.nPMICFILTR_EL0 (bit 3) must be initialised to 0b1.
++ - HDFGRTR2_EL2.nPMUACR_EL1 (bit 4) must be initialised to 0b1.
++
++ - HDFGWTR2_EL2.nPMICNTR_EL0 (bit 2) must be initialised to 0b1.
++ - HDFGWTR2_EL2.nPMICFILTR_EL0 (bit 3) must be initialised to 0b1.
++ - HDFGWTR2_EL2.nPMUACR_EL1 (bit 4) must be initialised to 0b1.
++
+ For CPUs with Memory Copy and Memory Set instructions (FEAT_MOPS):
+
+ - If the kernel is entered at EL1 and EL2 is present:
+diff --git a/Documentation/devicetree/bindings/soc/fsl/fsl,ls1028a-reset.yaml b/Documentation/devicetree/bindings/soc/fsl/fsl,ls1028a-reset.yaml
+index 31295be910130c..234089b5954ddb 100644
+--- a/Documentation/devicetree/bindings/soc/fsl/fsl,ls1028a-reset.yaml
++++ b/Documentation/devicetree/bindings/soc/fsl/fsl,ls1028a-reset.yaml
+@@ -7,7 +7,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
+ title: Freescale Layerscape Reset Registers Module
+
+ maintainers:
+- - Frank Li
++ - Frank Li <Frank.Li@nxp.com>
+
+ description:
+ Reset Module includes chip reset, service processor control and Reset Control
+diff --git a/Documentation/netlink/specs/ovs_vport.yaml b/Documentation/netlink/specs/ovs_vport.yaml
+index 86ba9ac2a52103..b538bb99ee9b5f 100644
+--- a/Documentation/netlink/specs/ovs_vport.yaml
++++ b/Documentation/netlink/specs/ovs_vport.yaml
+@@ -123,12 +123,12 @@ attribute-sets:
+
+ operations:
+ name-prefix: ovs-vport-cmd-
++ fixed-header: ovs-header
+ list:
+ -
+ name: new
+ doc: Create a new OVS vport
+ attribute-set: vport
+- fixed-header: ovs-header
+ do:
+ request:
+ attributes:
+@@ -141,7 +141,6 @@ operations:
+ name: del
+ doc: Delete existing OVS vport from a data path
+ attribute-set: vport
+- fixed-header: ovs-header
+ do:
+ request:
+ attributes:
+@@ -152,7 +151,6 @@ operations:
+ name: get
+ doc: Get / dump OVS vport configuration and state
+ attribute-set: vport
+- fixed-header: ovs-header
+ do: &vport-get-op
+ request:
+ attributes:
+diff --git a/Documentation/netlink/specs/rt_link.yaml b/Documentation/netlink/specs/rt_link.yaml
+index 0d492500c7e57d..78d3bec72f0e78 100644
+--- a/Documentation/netlink/specs/rt_link.yaml
++++ b/Documentation/netlink/specs/rt_link.yaml
+@@ -1101,11 +1101,10 @@ attribute-sets:
+ -
+ name: prop-list
+ type: nest
+- nested-attributes: link-attrs
++ nested-attributes: prop-list-link-attrs
+ -
+ name: alt-ifname
+ type: string
+- multi-attr: true
+ -
+ name: perm-address
+ type: binary
+@@ -1148,6 +1147,13 @@ attribute-sets:
+ name: max-pacing-offload-horizon
+ type: uint
+ doc: EDT offload horizon supported by the device (in nsec).
++ -
++ name: prop-list-link-attrs
++ subset-of: link-attrs
++ attributes:
++ -
++ name: alt-ifname
++ multi-attr: true
+ -
+ name: af-spec-attrs
+ attributes:
+@@ -1570,7 +1576,7 @@ attribute-sets:
+ name: nf-call-iptables
+ type: u8
+ -
+- name: nf-call-ip6-tables
++ name: nf-call-ip6tables
+ type: u8
+ -
+ name: nf-call-arptables
+@@ -2058,7 +2064,7 @@ attribute-sets:
+ name: id
+ type: u16
+ -
+- name: flag
++ name: flags
+ type: binary
+ struct: ifla-vlan-flags
+ -
+@@ -2146,7 +2152,7 @@ attribute-sets:
+ type: binary
+ struct: ifla-cacheinfo
+ -
+- name: icmp6-stats
++ name: icmp6stats
+ type: binary
+ struct: ifla-icmp6-stats
+ -
+@@ -2160,9 +2166,10 @@ attribute-sets:
+ type: u32
+ -
+ name: mctp-attrs
++ name-prefix: ifla-mctp-
+ attributes:
+ -
+- name: mctp-net
++ name: net
+ type: u32
+ -
+ name: phys-binding
+@@ -2434,7 +2441,6 @@ operations:
+ - min-mtu
+ - max-mtu
+ - prop-list
+- - alt-ifname
+ - perm-address
+ - proto-down-reason
+ - parent-dev-name
+diff --git a/Documentation/netlink/specs/rt_neigh.yaml b/Documentation/netlink/specs/rt_neigh.yaml
+index e670b6dc07be4f..a843caa72259e1 100644
+--- a/Documentation/netlink/specs/rt_neigh.yaml
++++ b/Documentation/netlink/specs/rt_neigh.yaml
+@@ -13,25 +13,25 @@ definitions:
+ type: struct
+ members:
+ -
+- name: family
++ name: ndm-family
+ type: u8
+ -
+- name: pad
++ name: ndm-pad
+ type: pad
+ len: 3
+ -
+- name: ifindex
++ name: ndm-ifindex
+ type: s32
+ -
+- name: state
++ name: ndm-state
+ type: u16
+ enum: nud-state
+ -
+- name: flags
++ name: ndm-flags
+ type: u8
+ enum: ntf-flags
+ -
+- name: type
++ name: ndm-type
+ type: u8
+ enum: rtm-type
+ -
+@@ -189,7 +189,7 @@ attribute-sets:
+ type: binary
+ display-hint: ipv4
+ -
+- name: lladr
++ name: lladdr
+ type: binary
+ display-hint: mac
+ -
+diff --git a/Documentation/wmi/devices/msi-wmi-platform.rst b/Documentation/wmi/devices/msi-wmi-platform.rst
+index 31a13694289238..73197b31926a57 100644
+--- a/Documentation/wmi/devices/msi-wmi-platform.rst
++++ b/Documentation/wmi/devices/msi-wmi-platform.rst
+@@ -138,6 +138,10 @@ input data, the meaning of which depends on the subfeature being accessed.
+ The output buffer contains a single byte which signals success or failure (``0x00`` on failure)
+ and 31 bytes of output data, the meaning if which depends on the subfeature being accessed.
+
++.. note::
++ The ACPI control method responsible for handling the WMI method calls is not thread-safe.
++ This is a firmware bug that needs to be handled inside the driver itself.
++
+ WMI method Get_EC()
+ -------------------
+
+diff --git a/Makefile b/Makefile
+index 93870f58505f51..0c1b99da2c1f2a 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 14
+-SUBLEVEL = 3
++SUBLEVEL = 4
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+@@ -480,7 +480,6 @@ export rust_common_flags := --edition=2021 \
+ -Wclippy::ignored_unit_patterns \
+ -Wclippy::mut_mut \
+ -Wclippy::needless_bitwise_bool \
+- -Wclippy::needless_continue \
+ -Aclippy::needless_lifetimes \
+ -Wclippy::no_mangle_with_rust_abi \
+ -Wclippy::undocumented_unsafe_blocks \
+diff --git a/arch/arm64/include/asm/el2_setup.h b/arch/arm64/include/asm/el2_setup.h
+index 555c613fd2324a..ebceaae3c749b8 100644
+--- a/arch/arm64/include/asm/el2_setup.h
++++ b/arch/arm64/include/asm/el2_setup.h
+@@ -259,6 +259,30 @@
+ .Lskip_fgt_\@:
+ .endm
+
++.macro __init_el2_fgt2
++ mrs x1, id_aa64mmfr0_el1
++ ubfx x1, x1, #ID_AA64MMFR0_EL1_FGT_SHIFT, #4
++ cmp x1, #ID_AA64MMFR0_EL1_FGT_FGT2
++ b.lt .Lskip_fgt2_\@
++
++ mov x0, xzr
++ mrs x1, id_aa64dfr0_el1
++ ubfx x1, x1, #ID_AA64DFR0_EL1_PMUVer_SHIFT, #4
++ cmp x1, #ID_AA64DFR0_EL1_PMUVer_V3P9
++ b.lt .Lskip_pmuv3p9_\@
++
++ orr x0, x0, #HDFGRTR2_EL2_nPMICNTR_EL0
++ orr x0, x0, #HDFGRTR2_EL2_nPMICFILTR_EL0
++ orr x0, x0, #HDFGRTR2_EL2_nPMUACR_EL1
++.Lskip_pmuv3p9_\@:
++ msr_s SYS_HDFGRTR2_EL2, x0
++ msr_s SYS_HDFGWTR2_EL2, x0
++ msr_s SYS_HFGRTR2_EL2, xzr
++ msr_s SYS_HFGWTR2_EL2, xzr
++ msr_s SYS_HFGITR2_EL2, xzr
++.Lskip_fgt2_\@:
++.endm
++
+ .macro __init_el2_gcs
+ mrs_s x1, SYS_ID_AA64PFR1_EL1
+ ubfx x1, x1, #ID_AA64PFR1_EL1_GCS_SHIFT, #4
+@@ -304,6 +328,7 @@
+ __init_el2_nvhe_idregs
+ __init_el2_cptr
+ __init_el2_fgt
++ __init_el2_fgt2
+ __init_el2_gcs
+ .endm
+
+diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg
+index 762ee084b37c5b..891fe033e1b63a 100644
+--- a/arch/arm64/tools/sysreg
++++ b/arch/arm64/tools/sysreg
+@@ -1664,6 +1664,7 @@ EndEnum
+ UnsignedEnum 59:56 FGT
+ 0b0000 NI
+ 0b0001 IMP
++ 0b0010 FGT2
+ EndEnum
+ Res0 55:48
+ UnsignedEnum 47:44 EXS
+@@ -1725,6 +1726,7 @@ Enum 3:0 PARANGE
+ 0b0100 44
+ 0b0101 48
+ 0b0110 52
++ 0b0111 56
+ EndEnum
+ EndSysreg
+
+@@ -2641,6 +2643,101 @@ Field 0 E0HTRE
+ EndSysreg
+
+
++Sysreg HDFGRTR2_EL2 3 4 3 1 0
++Res0 63:25
++Field 24 nPMBMAR_EL1
++Field 23 nMDSTEPOP_EL1
++Field 22 nTRBMPAM_EL1
++Res0 21
++Field 20 nTRCITECR_EL1
++Field 19 nPMSDSFR_EL1
++Field 18 nSPMDEVAFF_EL1
++Field 17 nSPMID
++Field 16 nSPMSCR_EL1
++Field 15 nSPMACCESSR_EL1
++Field 14 nSPMCR_EL0
++Field 13 nSPMOVS
++Field 12 nSPMINTEN
++Field 11 nSPMCNTEN
++Field 10 nSPMSELR_EL0
++Field 9 nSPMEVTYPERn_EL0
++Field 8 nSPMEVCNTRn_EL0
++Field 7 nPMSSCR_EL1
++Field 6 nPMSSDATA
++Field 5 nMDSELR_EL1
++Field 4 nPMUACR_EL1
++Field 3 nPMICFILTR_EL0
++Field 2 nPMICNTR_EL0
++Field 1 nPMIAR_EL1
++Field 0 nPMECR_EL1
++EndSysreg
++
++Sysreg HDFGWTR2_EL2 3 4 3 1 1
++Res0 63:25
++Field 24 nPMBMAR_EL1
++Field 23 nMDSTEPOP_EL1
++Field 22 nTRBMPAM_EL1
++Field 21 nPMZR_EL0
++Field 20 nTRCITECR_EL1
++Field 19 nPMSDSFR_EL1
++Res0 18:17
++Field 16 nSPMSCR_EL1
++Field 15 nSPMACCESSR_EL1
++Field 14 nSPMCR_EL0
++Field 13 nSPMOVS
++Field 12 nSPMINTEN
++Field 11 nSPMCNTEN
++Field 10 nSPMSELR_EL0
++Field 9 nSPMEVTYPERn_EL0
++Field 8 nSPMEVCNTRn_EL0
++Field 7 nPMSSCR_EL1
++Res0 6
++Field 5 nMDSELR_EL1
++Field 4 nPMUACR_EL1
++Field 3 nPMICFILTR_EL0
++Field 2 nPMICNTR_EL0
++Field 1 nPMIAR_EL1
++Field 0 nPMECR_EL1
++EndSysreg
++
++Sysreg HFGRTR2_EL2 3 4 3 1 2
++Res0 63:15
++Field 14 nACTLRALIAS_EL1
++Field 13 nACTLRMASK_EL1
++Field 12 nTCR2ALIAS_EL1
++Field 11 nTCRALIAS_EL1
++Field 10 nSCTLRALIAS2_EL1
++Field 9 nSCTLRALIAS_EL1
++Field 8 nCPACRALIAS_EL1
++Field 7 nTCR2MASK_EL1
++Field 6 nTCRMASK_EL1
++Field 5 nSCTLR2MASK_EL1
++Field 4 nSCTLRMASK_EL1
++Field 3 nCPACRMASK_EL1
++Field 2 nRCWSMASK_EL1
++Field 1 nERXGSR_EL1
++Field 0 nPFAR_EL1
++EndSysreg
++
++Sysreg HFGWTR2_EL2 3 4 3 1 3
++Res0 63:15
++Field 14 nACTLRALIAS_EL1
++Field 13 nACTLRMASK_EL1
++Field 12 nTCR2ALIAS_EL1
++Field 11 nTCRALIAS_EL1
++Field 10 nSCTLRALIAS2_EL1
++Field 9 nSCTLRALIAS_EL1
++Field 8 nCPACRALIAS_EL1
++Field 7 nTCR2MASK_EL1
++Field 6 nTCRMASK_EL1
++Field 5 nSCTLR2MASK_EL1
++Field 4 nSCTLRMASK_EL1
++Field 3 nCPACRMASK_EL1
++Field 2 nRCWSMASK_EL1
++Res0 1
++Field 0 nPFAR_EL1
++EndSysreg
++
+ Sysreg HDFGRTR_EL2 3 4 3 1 4
+ Field 63 PMBIDR_EL1
+ Field 62 nPMSNEVFR_EL1
+@@ -2813,6 +2910,12 @@ Field 1 AMEVCNTR00_EL0
+ Field 0 AMCNTEN0
+ EndSysreg
+
++Sysreg HFGITR2_EL2 3 4 3 1 7
++Res0 63:2
++Field 1 nDCCIVAPS
++Field 0 TSBCSYNC
++EndSysreg
++
+ Sysreg ZCR_EL2 3 4 1 2 0
+ Fields ZCR_ELx
+ EndSysreg
+diff --git a/arch/mips/dec/prom/init.c b/arch/mips/dec/prom/init.c
+index cb12eb211a49e0..8d74d7d6c05b47 100644
+--- a/arch/mips/dec/prom/init.c
++++ b/arch/mips/dec/prom/init.c
+@@ -42,7 +42,7 @@ int (*__pmax_close)(int);
+ * Detect which PROM the DECSTATION has, and set the callback vectors
+ * appropriately.
+ */
+-void __init which_prom(s32 magic, s32 *prom_vec)
++static void __init which_prom(s32 magic, s32 *prom_vec)
+ {
+ /*
+ * No sign of the REX PROM's magic number means we assume a non-REX
+diff --git a/arch/mips/include/asm/ds1287.h b/arch/mips/include/asm/ds1287.h
+index 46cfb01f9a14e7..51cb61fd4c0330 100644
+--- a/arch/mips/include/asm/ds1287.h
++++ b/arch/mips/include/asm/ds1287.h
+@@ -8,7 +8,7 @@
+ #define __ASM_DS1287_H
+
+ extern int ds1287_timer_state(void);
+-extern void ds1287_set_base_clock(unsigned int clock);
++extern int ds1287_set_base_clock(unsigned int hz);
+ extern int ds1287_clockevent_init(int irq);
+
+ #endif
+diff --git a/arch/mips/kernel/cevt-ds1287.c b/arch/mips/kernel/cevt-ds1287.c
+index 9a47fbcd4638a6..de64d6bb7ba36c 100644
+--- a/arch/mips/kernel/cevt-ds1287.c
++++ b/arch/mips/kernel/cevt-ds1287.c
+@@ -10,6 +10,7 @@
+ #include <linux/mc146818rtc.h>
+ #include <linux/irq.h>
+
++#include <asm/ds1287.h>
+ #include <asm/time.h>
+
+ int ds1287_timer_state(void)
+diff --git a/arch/riscv/include/asm/kgdb.h b/arch/riscv/include/asm/kgdb.h
+index 46677daf708bd0..cc11c4544cffd1 100644
+--- a/arch/riscv/include/asm/kgdb.h
++++ b/arch/riscv/include/asm/kgdb.h
+@@ -19,16 +19,9 @@
+
+ #ifndef __ASSEMBLY__
+
++void arch_kgdb_breakpoint(void);
+ extern unsigned long kgdb_compiled_break;
+
+-static inline void arch_kgdb_breakpoint(void)
+-{
+- asm(".global kgdb_compiled_break\n"
+- ".option norvc\n"
+- "kgdb_compiled_break: ebreak\n"
+- ".option rvc\n");
+-}
+-
+ #endif /* !__ASSEMBLY__ */
+
+ #define DBG_REG_ZERO "zero"
+diff --git a/arch/riscv/include/asm/syscall.h b/arch/riscv/include/asm/syscall.h
+index 121fff429dce66..eceabf59ae482a 100644
+--- a/arch/riscv/include/asm/syscall.h
++++ b/arch/riscv/include/asm/syscall.h
+@@ -62,8 +62,11 @@ static inline void syscall_get_arguments(struct task_struct *task,
+ unsigned long *args)
+ {
+ args[0] = regs->orig_a0;
+- args++;
+- memcpy(args, &regs->a1, 5 * sizeof(args[0]));
++ args[1] = regs->a1;
++ args[2] = regs->a2;
++ args[3] = regs->a3;
++ args[4] = regs->a4;
++ args[5] = regs->a5;
+ }
+
+ static inline int syscall_get_arch(struct task_struct *task)
+diff --git a/arch/riscv/kernel/kgdb.c b/arch/riscv/kernel/kgdb.c
+index 2e0266ae6bd728..9f3db3503dabd6 100644
+--- a/arch/riscv/kernel/kgdb.c
++++ b/arch/riscv/kernel/kgdb.c
+@@ -254,6 +254,12 @@ void kgdb_arch_set_pc(struct pt_regs *regs, unsigned long pc)
+ regs->epc = pc;
+ }
+
++noinline void arch_kgdb_breakpoint(void)
++{
++ asm(".global kgdb_compiled_break\n"
++ "kgdb_compiled_break: ebreak\n");
++}
++
+ void kgdb_arch_handle_qxfer_pkt(char *remcom_in_buffer,
+ char *remcom_out_buffer)
+ {
+diff --git a/arch/riscv/kernel/module-sections.c b/arch/riscv/kernel/module-sections.c
+index e264e59e596e80..91d0b355ceeff6 100644
+--- a/arch/riscv/kernel/module-sections.c
++++ b/arch/riscv/kernel/module-sections.c
+@@ -73,16 +73,17 @@ static bool duplicate_rela(const Elf_Rela *rela, int idx)
+ static void count_max_entries(Elf_Rela *relas, int num,
+ unsigned int *plts, unsigned int *gots)
+ {
+- unsigned int type, i;
+-
+- for (i = 0; i < num; i++) {
+- type = ELF_RISCV_R_TYPE(relas[i].r_info);
+- if (type == R_RISCV_CALL_PLT) {
++ for (int i = 0; i < num; i++) {
++ switch (ELF_R_TYPE(relas[i].r_info)) {
++ case R_RISCV_CALL_PLT:
++ case R_RISCV_PLT32:
+ if (!duplicate_rela(relas, i))
+ (*plts)++;
+- } else if (type == R_RISCV_GOT_HI20) {
++ break;
++ case R_RISCV_GOT_HI20:
+ if (!duplicate_rela(relas, i))
+ (*gots)++;
++ break;
+ }
+ }
+ }
+diff --git a/arch/riscv/kernel/module.c b/arch/riscv/kernel/module.c
+index 47d0ebeec93c23..7f6147c18033b2 100644
+--- a/arch/riscv/kernel/module.c
++++ b/arch/riscv/kernel/module.c
+@@ -648,7 +648,7 @@ process_accumulated_relocations(struct module *me,
+ kfree(bucket_iter);
+ }
+
+- kfree(*relocation_hashtable);
++ kvfree(*relocation_hashtable);
+ }
+
+ static int add_relocation_to_accumulate(struct module *me, int type,
+@@ -752,9 +752,10 @@ initialize_relocation_hashtable(unsigned int num_relocations,
+
+ hashtable_size <<= should_double_size;
+
+- *relocation_hashtable = kmalloc_array(hashtable_size,
+- sizeof(**relocation_hashtable),
+- GFP_KERNEL);
++ /* Number of relocations may be large, so kvmalloc it */
++ *relocation_hashtable = kvmalloc_array(hashtable_size,
++ sizeof(**relocation_hashtable),
++ GFP_KERNEL);
+ if (!*relocation_hashtable)
+ return 0;
+
+@@ -859,7 +860,7 @@ int apply_relocate_add(Elf_Shdr *sechdrs, const char *strtab,
+ }
+
+ j++;
+- if (j > sechdrs[relsec].sh_size / sizeof(*rel))
++ if (j == num_relocations)
+ j = 0;
+
+ } while (j_idx != j);
+diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
+index 4fe45daa6281e9..b7d0bd4c0a81ac 100644
+--- a/arch/riscv/kernel/setup.c
++++ b/arch/riscv/kernel/setup.c
+@@ -66,6 +66,9 @@ static struct resource bss_res = { .name = "Kernel bss", };
+ static struct resource elfcorehdr_res = { .name = "ELF Core hdr", };
+ #endif
+
++static int num_standard_resources;
++static struct resource *standard_resources;
++
+ static int __init add_resource(struct resource *parent,
+ struct resource *res)
+ {
+@@ -139,7 +142,7 @@ static void __init init_resources(void)
+ struct resource *res = NULL;
+ struct resource *mem_res = NULL;
+ size_t mem_res_sz = 0;
+- int num_resources = 0, res_idx = 0;
++ int num_resources = 0, res_idx = 0, non_resv_res = 0;
+ int ret = 0;
+
+ /* + 1 as memblock_alloc() might increase memblock.reserved.cnt */
+@@ -193,6 +196,7 @@ static void __init init_resources(void)
+ /* Add /memory regions to the resource tree */
+ for_each_mem_region(region) {
+ res = &mem_res[res_idx--];
++ non_resv_res++;
+
+ if (unlikely(memblock_is_nomap(region))) {
+ res->name = "Reserved";
+@@ -210,6 +214,9 @@ static void __init init_resources(void)
+ goto error;
+ }
+
++ num_standard_resources = non_resv_res;
++ standard_resources = &mem_res[res_idx + 1];
++
+ /* Clean-up any unused pre-allocated resources */
+ if (res_idx >= 0)
+ memblock_free(mem_res, (res_idx + 1) * sizeof(*mem_res));
+@@ -221,6 +228,33 @@ static void __init init_resources(void)
+ memblock_free(mem_res, mem_res_sz);
+ }
+
++static int __init reserve_memblock_reserved_regions(void)
++{
++ u64 i, j;
++
++ for (i = 0; i < num_standard_resources; i++) {
++ struct resource *mem = &standard_resources[i];
++ phys_addr_t r_start, r_end, mem_size = resource_size(mem);
++
++ if (!memblock_is_region_reserved(mem->start, mem_size))
++ continue;
++
++ for_each_reserved_mem_range(j, &r_start, &r_end) {
++ resource_size_t start, end;
++
++ start = max(PFN_PHYS(PFN_DOWN(r_start)), mem->start);
++ end = min(PFN_PHYS(PFN_UP(r_end)) - 1, mem->end);
++
++ if (start > mem->end || end < mem->start)
++ continue;
++
++ reserve_region_with_split(mem, start, end, "Reserved");
++ }
++ }
++
++ return 0;
++}
++arch_initcall(reserve_memblock_reserved_regions);
+
+ static void __init parse_dtb(void)
+ {
+diff --git a/arch/x86/boot/compressed/mem.c b/arch/x86/boot/compressed/mem.c
+index dbba332e4a12d7..f676156d9f3db4 100644
+--- a/arch/x86/boot/compressed/mem.c
++++ b/arch/x86/boot/compressed/mem.c
+@@ -34,11 +34,14 @@ static bool early_is_tdx_guest(void)
+
+ void arch_accept_memory(phys_addr_t start, phys_addr_t end)
+ {
++ static bool sevsnp;
++
+ /* Platform-specific memory-acceptance call goes here */
+ if (early_is_tdx_guest()) {
+ if (!tdx_accept_memory(start, end))
+ panic("TDX: Failed to accept memory\n");
+- } else if (sev_snp_enabled()) {
++ } else if (sevsnp || (sev_get_status() & MSR_AMD64_SEV_SNP_ENABLED)) {
++ sevsnp = true;
+ snp_accept_memory(start, end);
+ } else {
+ error("Cannot accept memory: unknown platform\n");
+diff --git a/arch/x86/boot/compressed/sev.c b/arch/x86/boot/compressed/sev.c
+index bb55934c1cee70..89ba168f4f0f02 100644
+--- a/arch/x86/boot/compressed/sev.c
++++ b/arch/x86/boot/compressed/sev.c
+@@ -164,10 +164,7 @@ bool sev_snp_enabled(void)
+
+ static void __page_state_change(unsigned long paddr, enum psc_op op)
+ {
+- u64 val;
+-
+- if (!sev_snp_enabled())
+- return;
++ u64 val, msr;
+
+ /*
+ * If private -> shared then invalidate the page before requesting the
+@@ -176,6 +173,9 @@ static void __page_state_change(unsigned long paddr, enum psc_op op)
+ if (op == SNP_PAGE_STATE_SHARED)
+ pvalidate_4k_page(paddr, paddr, false);
+
++ /* Save the current GHCB MSR value */
++ msr = sev_es_rd_ghcb_msr();
++
+ /* Issue VMGEXIT to change the page state in RMP table. */
+ sev_es_wr_ghcb_msr(GHCB_MSR_PSC_REQ_GFN(paddr >> PAGE_SHIFT, op));
+ VMGEXIT();
+@@ -185,6 +185,9 @@ static void __page_state_change(unsigned long paddr, enum psc_op op)
+ if ((GHCB_RESP_CODE(val) != GHCB_MSR_PSC_RESP) || GHCB_MSR_PSC_RESP_VAL(val))
+ sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_PSC);
+
++ /* Restore the GHCB MSR value */
++ sev_es_wr_ghcb_msr(msr);
++
+ /*
+ * Now that page state is changed in the RMP table, validate it so that it is
+ * consistent with the RMP entry.
+@@ -195,11 +198,17 @@ static void __page_state_change(unsigned long paddr, enum psc_op op)
+
+ void snp_set_page_private(unsigned long paddr)
+ {
++ if (!sev_snp_enabled())
++ return;
++
+ __page_state_change(paddr, SNP_PAGE_STATE_PRIVATE);
+ }
+
+ void snp_set_page_shared(unsigned long paddr)
+ {
++ if (!sev_snp_enabled())
++ return;
++
+ __page_state_change(paddr, SNP_PAGE_STATE_SHARED);
+ }
+
+@@ -223,56 +232,10 @@ static bool early_setup_ghcb(void)
+ return true;
+ }
+
+-static phys_addr_t __snp_accept_memory(struct snp_psc_desc *desc,
+- phys_addr_t pa, phys_addr_t pa_end)
+-{
+- struct psc_hdr *hdr;
+- struct psc_entry *e;
+- unsigned int i;
+-
+- hdr = &desc->hdr;
+- memset(hdr, 0, sizeof(*hdr));
+-
+- e = desc->entries;
+-
+- i = 0;
+- while (pa < pa_end && i < VMGEXIT_PSC_MAX_ENTRY) {
+- hdr->end_entry = i;
+-
+- e->gfn = pa >> PAGE_SHIFT;
+- e->operation = SNP_PAGE_STATE_PRIVATE;
+- if (IS_ALIGNED(pa, PMD_SIZE) && (pa_end - pa) >= PMD_SIZE) {
+- e->pagesize = RMP_PG_SIZE_2M;
+- pa += PMD_SIZE;
+- } else {
+- e->pagesize = RMP_PG_SIZE_4K;
+- pa += PAGE_SIZE;
+- }
+-
+- e++;
+- i++;
+- }
+-
+- if (vmgexit_psc(boot_ghcb, desc))
+- sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_PSC);
+-
+- pvalidate_pages(desc);
+-
+- return pa;
+-}
+-
+ void snp_accept_memory(phys_addr_t start, phys_addr_t end)
+ {
+- struct snp_psc_desc desc = {};
+- unsigned int i;
+- phys_addr_t pa;
+-
+- if (!boot_ghcb && !early_setup_ghcb())
+- sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_PSC);
+-
+- pa = start;
+- while (pa < end)
+- pa = __snp_accept_memory(&desc, pa, end);
++ for (phys_addr_t pa = start; pa < end; pa += PAGE_SIZE)
++ __page_state_change(pa, SNP_PAGE_STATE_PRIVATE);
+ }
+
+ void sev_es_shutdown_ghcb(void)
+diff --git a/arch/x86/boot/compressed/sev.h b/arch/x86/boot/compressed/sev.h
+index fc725a981b093b..4e463f33186df4 100644
+--- a/arch/x86/boot/compressed/sev.h
++++ b/arch/x86/boot/compressed/sev.h
+@@ -12,11 +12,13 @@
+
+ bool sev_snp_enabled(void);
+ void snp_accept_memory(phys_addr_t start, phys_addr_t end);
++u64 sev_get_status(void);
+
+ #else
+
+ static inline bool sev_snp_enabled(void) { return false; }
+ static inline void snp_accept_memory(phys_addr_t start, phys_addr_t end) { }
++static inline u64 sev_get_status(void) { return 0; }
+
+ #endif
+
+diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
+index 33f4bb22fc0ee5..08de293bebad14 100644
+--- a/arch/x86/events/intel/ds.c
++++ b/arch/x86/events/intel/ds.c
+@@ -1338,8 +1338,10 @@ static u64 pebs_update_adaptive_cfg(struct perf_event *event)
+ * + precise_ip < 2 for the non event IP
+ * + For RTM TSX weight we need GPRs for the abort code.
+ */
+- gprs = (sample_type & PERF_SAMPLE_REGS_INTR) &&
+- (attr->sample_regs_intr & PEBS_GP_REGS);
++ gprs = ((sample_type & PERF_SAMPLE_REGS_INTR) &&
++ (attr->sample_regs_intr & PEBS_GP_REGS)) ||
++ ((sample_type & PERF_SAMPLE_REGS_USER) &&
++ (attr->sample_regs_user & PEBS_GP_REGS));
+
+ tsx_weight = (sample_type & PERF_SAMPLE_WEIGHT_TYPE) &&
+ ((attr->config & INTEL_ARCH_EVENT_MASK) ==
+@@ -1985,7 +1987,7 @@ static void setup_pebs_adaptive_sample_data(struct perf_event *event,
+ regs->flags &= ~PERF_EFLAGS_EXACT;
+ }
+
+- if (sample_type & PERF_SAMPLE_REGS_INTR)
++ if (sample_type & (PERF_SAMPLE_REGS_INTR | PERF_SAMPLE_REGS_USER))
+ adaptive_pebs_save_regs(regs, gprs);
+ }
+
+diff --git a/arch/x86/events/intel/uncore_snbep.c b/arch/x86/events/intel/uncore_snbep.c
+index 60973c209c0e64..76d96df1475a1c 100644
+--- a/arch/x86/events/intel/uncore_snbep.c
++++ b/arch/x86/events/intel/uncore_snbep.c
+@@ -4891,28 +4891,28 @@ static struct uncore_event_desc snr_uncore_iio_freerunning_events[] = {
+ INTEL_UNCORE_EVENT_DESC(ioclk, "event=0xff,umask=0x10"),
+ /* Free-Running IIO BANDWIDTH IN Counters */
+ INTEL_UNCORE_EVENT_DESC(bw_in_port0, "event=0xff,umask=0x20"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port0.scale, "3.814697266e-6"),
++ INTEL_UNCORE_EVENT_DESC(bw_in_port0.scale, "3.0517578125e-5"),
+ INTEL_UNCORE_EVENT_DESC(bw_in_port0.unit, "MiB"),
+ INTEL_UNCORE_EVENT_DESC(bw_in_port1, "event=0xff,umask=0x21"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port1.scale, "3.814697266e-6"),
++ INTEL_UNCORE_EVENT_DESC(bw_in_port1.scale, "3.0517578125e-5"),
+ INTEL_UNCORE_EVENT_DESC(bw_in_port1.unit, "MiB"),
+ INTEL_UNCORE_EVENT_DESC(bw_in_port2, "event=0xff,umask=0x22"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port2.scale, "3.814697266e-6"),
++ INTEL_UNCORE_EVENT_DESC(bw_in_port2.scale, "3.0517578125e-5"),
+ INTEL_UNCORE_EVENT_DESC(bw_in_port2.unit, "MiB"),
+ INTEL_UNCORE_EVENT_DESC(bw_in_port3, "event=0xff,umask=0x23"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port3.scale, "3.814697266e-6"),
++ INTEL_UNCORE_EVENT_DESC(bw_in_port3.scale, "3.0517578125e-5"),
+ INTEL_UNCORE_EVENT_DESC(bw_in_port3.unit, "MiB"),
+ INTEL_UNCORE_EVENT_DESC(bw_in_port4, "event=0xff,umask=0x24"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port4.scale, "3.814697266e-6"),
++ INTEL_UNCORE_EVENT_DESC(bw_in_port4.scale, "3.0517578125e-5"),
+ INTEL_UNCORE_EVENT_DESC(bw_in_port4.unit, "MiB"),
+ INTEL_UNCORE_EVENT_DESC(bw_in_port5, "event=0xff,umask=0x25"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port5.scale, "3.814697266e-6"),
++ INTEL_UNCORE_EVENT_DESC(bw_in_port5.scale, "3.0517578125e-5"),
+ INTEL_UNCORE_EVENT_DESC(bw_in_port5.unit, "MiB"),
+ INTEL_UNCORE_EVENT_DESC(bw_in_port6, "event=0xff,umask=0x26"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port6.scale, "3.814697266e-6"),
++ INTEL_UNCORE_EVENT_DESC(bw_in_port6.scale, "3.0517578125e-5"),
+ INTEL_UNCORE_EVENT_DESC(bw_in_port6.unit, "MiB"),
+ INTEL_UNCORE_EVENT_DESC(bw_in_port7, "event=0xff,umask=0x27"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port7.scale, "3.814697266e-6"),
++ INTEL_UNCORE_EVENT_DESC(bw_in_port7.scale, "3.0517578125e-5"),
+ INTEL_UNCORE_EVENT_DESC(bw_in_port7.unit, "MiB"),
+ { /* end: all zeroes */ },
+ };
+@@ -5485,37 +5485,6 @@ static struct freerunning_counters icx_iio_freerunning[] = {
+ [ICX_IIO_MSR_BW_IN] = { 0xaa0, 0x1, 0x10, 8, 48, icx_iio_bw_freerunning_box_offsets },
+ };
+
+-static struct uncore_event_desc icx_uncore_iio_freerunning_events[] = {
+- /* Free-Running IIO CLOCKS Counter */
+- INTEL_UNCORE_EVENT_DESC(ioclk, "event=0xff,umask=0x10"),
+- /* Free-Running IIO BANDWIDTH IN Counters */
+- INTEL_UNCORE_EVENT_DESC(bw_in_port0, "event=0xff,umask=0x20"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port0.scale, "3.814697266e-6"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port0.unit, "MiB"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port1, "event=0xff,umask=0x21"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port1.scale, "3.814697266e-6"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port1.unit, "MiB"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port2, "event=0xff,umask=0x22"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port2.scale, "3.814697266e-6"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port2.unit, "MiB"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port3, "event=0xff,umask=0x23"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port3.scale, "3.814697266e-6"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port3.unit, "MiB"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port4, "event=0xff,umask=0x24"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port4.scale, "3.814697266e-6"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port4.unit, "MiB"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port5, "event=0xff,umask=0x25"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port5.scale, "3.814697266e-6"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port5.unit, "MiB"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port6, "event=0xff,umask=0x26"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port6.scale, "3.814697266e-6"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port6.unit, "MiB"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port7, "event=0xff,umask=0x27"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port7.scale, "3.814697266e-6"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port7.unit, "MiB"),
+- { /* end: all zeroes */ },
+-};
+-
+ static struct intel_uncore_type icx_uncore_iio_free_running = {
+ .name = "iio_free_running",
+ .num_counters = 9,
+@@ -5523,7 +5492,7 @@ static struct intel_uncore_type icx_uncore_iio_free_running = {
+ .num_freerunning_types = ICX_IIO_FREERUNNING_TYPE_MAX,
+ .freerunning = icx_iio_freerunning,
+ .ops = &skx_uncore_iio_freerunning_ops,
+- .event_descs = icx_uncore_iio_freerunning_events,
++ .event_descs = snr_uncore_iio_freerunning_events,
+ .format_group = &skx_uncore_iio_freerunning_format_group,
+ };
+
+@@ -6320,69 +6289,13 @@ static struct freerunning_counters spr_iio_freerunning[] = {
+ [SPR_IIO_MSR_BW_OUT] = { 0x3808, 0x1, 0x10, 8, 48 },
+ };
+
+-static struct uncore_event_desc spr_uncore_iio_freerunning_events[] = {
+- /* Free-Running IIO CLOCKS Counter */
+- INTEL_UNCORE_EVENT_DESC(ioclk, "event=0xff,umask=0x10"),
+- /* Free-Running IIO BANDWIDTH IN Counters */
+- INTEL_UNCORE_EVENT_DESC(bw_in_port0, "event=0xff,umask=0x20"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port0.scale, "3.814697266e-6"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port0.unit, "MiB"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port1, "event=0xff,umask=0x21"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port1.scale, "3.814697266e-6"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port1.unit, "MiB"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port2, "event=0xff,umask=0x22"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port2.scale, "3.814697266e-6"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port2.unit, "MiB"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port3, "event=0xff,umask=0x23"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port3.scale, "3.814697266e-6"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port3.unit, "MiB"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port4, "event=0xff,umask=0x24"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port4.scale, "3.814697266e-6"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port4.unit, "MiB"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port5, "event=0xff,umask=0x25"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port5.scale, "3.814697266e-6"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port5.unit, "MiB"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port6, "event=0xff,umask=0x26"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port6.scale, "3.814697266e-6"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port6.unit, "MiB"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port7, "event=0xff,umask=0x27"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port7.scale, "3.814697266e-6"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port7.unit, "MiB"),
+- /* Free-Running IIO BANDWIDTH OUT Counters */
+- INTEL_UNCORE_EVENT_DESC(bw_out_port0, "event=0xff,umask=0x30"),
+- INTEL_UNCORE_EVENT_DESC(bw_out_port0.scale, "3.814697266e-6"),
+- INTEL_UNCORE_EVENT_DESC(bw_out_port0.unit, "MiB"),
+- INTEL_UNCORE_EVENT_DESC(bw_out_port1, "event=0xff,umask=0x31"),
+- INTEL_UNCORE_EVENT_DESC(bw_out_port1.scale, "3.814697266e-6"),
+- INTEL_UNCORE_EVENT_DESC(bw_out_port1.unit, "MiB"),
+- INTEL_UNCORE_EVENT_DESC(bw_out_port2, "event=0xff,umask=0x32"),
+- INTEL_UNCORE_EVENT_DESC(bw_out_port2.scale, "3.814697266e-6"),
+- INTEL_UNCORE_EVENT_DESC(bw_out_port2.unit, "MiB"),
+- INTEL_UNCORE_EVENT_DESC(bw_out_port3, "event=0xff,umask=0x33"),
+- INTEL_UNCORE_EVENT_DESC(bw_out_port3.scale, "3.814697266e-6"),
+- INTEL_UNCORE_EVENT_DESC(bw_out_port3.unit, "MiB"),
+- INTEL_UNCORE_EVENT_DESC(bw_out_port4, "event=0xff,umask=0x34"),
+- INTEL_UNCORE_EVENT_DESC(bw_out_port4.scale, "3.814697266e-6"),
+- INTEL_UNCORE_EVENT_DESC(bw_out_port4.unit, "MiB"),
+- INTEL_UNCORE_EVENT_DESC(bw_out_port5, "event=0xff,umask=0x35"),
+- INTEL_UNCORE_EVENT_DESC(bw_out_port5.scale, "3.814697266e-6"),
+- INTEL_UNCORE_EVENT_DESC(bw_out_port5.unit, "MiB"),
+- INTEL_UNCORE_EVENT_DESC(bw_out_port6, "event=0xff,umask=0x36"),
+- INTEL_UNCORE_EVENT_DESC(bw_out_port6.scale, "3.814697266e-6"),
+- INTEL_UNCORE_EVENT_DESC(bw_out_port6.unit, "MiB"),
+- INTEL_UNCORE_EVENT_DESC(bw_out_port7, "event=0xff,umask=0x37"),
+- INTEL_UNCORE_EVENT_DESC(bw_out_port7.scale, "3.814697266e-6"),
+- INTEL_UNCORE_EVENT_DESC(bw_out_port7.unit, "MiB"),
+- { /* end: all zeroes */ },
+-};
+-
+ static struct intel_uncore_type spr_uncore_iio_free_running = {
+ .name = "iio_free_running",
+ .num_counters = 17,
+ .num_freerunning_types = SPR_IIO_FREERUNNING_TYPE_MAX,
+ .freerunning = spr_iio_freerunning,
+ .ops = &skx_uncore_iio_freerunning_ops,
+- .event_descs = spr_uncore_iio_freerunning_events,
++ .event_descs = snr_uncore_iio_freerunning_events,
+ .format_group = &skx_uncore_iio_freerunning_format_group,
+ };
+
+diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
+index 4c9b20d028eb4c..70df203d814e02 100644
+--- a/arch/x86/kernel/cpu/amd.c
++++ b/arch/x86/kernel/cpu/amd.c
+@@ -867,6 +867,16 @@ static void init_amd_zen1(struct cpuinfo_x86 *c)
+
+ pr_notice_once("AMD Zen1 DIV0 bug detected. Disable SMT for full protection.\n");
+ setup_force_cpu_bug(X86_BUG_DIV0);
++
++ /*
++ * Turn off the Instructions Retired free counter on machines that are
++ * susceptible to erratum #1054 "Instructions Retired Performance
++ * Counter May Be Inaccurate".
++ */
++ if (c->x86_model < 0x30) {
++ msr_clear_bit(MSR_K7_HWCR, MSR_K7_HWCR_IRPERF_EN_BIT);
++ clear_cpu_cap(c, X86_FEATURE_IRPERF);
++ }
+ }
+
+ static bool cpu_has_zenbleed_microcode(void)
+@@ -1050,13 +1060,8 @@ static void init_amd(struct cpuinfo_x86 *c)
+ if (!cpu_feature_enabled(X86_FEATURE_XENPV))
+ set_cpu_bug(c, X86_BUG_SYSRET_SS_ATTRS);
+
+- /*
+- * Turn on the Instructions Retired free counter on machines not
+- * susceptible to erratum #1054 "Instructions Retired Performance
+- * Counter May Be Inaccurate".
+- */
+- if (cpu_has(c, X86_FEATURE_IRPERF) &&
+- (boot_cpu_has(X86_FEATURE_ZEN1) && c->x86_model > 0x2f))
++ /* Enable the Instructions Retired free counter */
++ if (cpu_has(c, X86_FEATURE_IRPERF))
+ msr_set_bit(MSR_K7_HWCR, MSR_K7_HWCR_IRPERF_EN_BIT);
+
+ check_null_seg_clears_base(c);
+diff --git a/arch/x86/kernel/cpu/microcode/amd.c b/arch/x86/kernel/cpu/microcode/amd.c
+index b61028cf5c8a3b..4a10d35e70aa54 100644
+--- a/arch/x86/kernel/cpu/microcode/amd.c
++++ b/arch/x86/kernel/cpu/microcode/amd.c
+@@ -199,6 +199,12 @@ static bool need_sha_check(u32 cur_rev)
+ case 0xa70c0: return cur_rev <= 0xa70C009; break;
+ case 0xaa001: return cur_rev <= 0xaa00116; break;
+ case 0xaa002: return cur_rev <= 0xaa00218; break;
++ case 0xb0021: return cur_rev <= 0xb002146; break;
++ case 0xb1010: return cur_rev <= 0xb101046; break;
++ case 0xb2040: return cur_rev <= 0xb204031; break;
++ case 0xb4040: return cur_rev <= 0xb404031; break;
++ case 0xb6000: return cur_rev <= 0xb600031; break;
++ case 0xb7000: return cur_rev <= 0xb700031; break;
+ default: break;
+ }
+
+@@ -214,8 +220,7 @@ static bool verify_sha256_digest(u32 patch_id, u32 cur_rev, const u8 *data, unsi
+ struct sha256_state s;
+ int i;
+
+- if (x86_family(bsp_cpuid_1_eax) < 0x17 ||
+- x86_family(bsp_cpuid_1_eax) > 0x19)
++ if (x86_family(bsp_cpuid_1_eax) < 0x17)
+ return true;
+
+ if (!need_sha_check(cur_rev))
+diff --git a/arch/x86/xen/multicalls.c b/arch/x86/xen/multicalls.c
+index 10c660fae8b300..7237d56a9d3f01 100644
+--- a/arch/x86/xen/multicalls.c
++++ b/arch/x86/xen/multicalls.c
+@@ -54,14 +54,20 @@ struct mc_debug_data {
+
+ static DEFINE_PER_CPU(struct mc_buffer, mc_buffer);
+ static struct mc_debug_data mc_debug_data_early __initdata;
+-static DEFINE_PER_CPU(struct mc_debug_data *, mc_debug_data) =
+- &mc_debug_data_early;
+ static struct mc_debug_data __percpu *mc_debug_data_ptr;
+ DEFINE_PER_CPU(unsigned long, xen_mc_irq_flags);
+
+ static struct static_key mc_debug __ro_after_init;
+ static bool mc_debug_enabled __initdata;
+
++static struct mc_debug_data * __ref get_mc_debug(void)
++{
++ if (!mc_debug_data_ptr)
++ return &mc_debug_data_early;
++
++ return this_cpu_ptr(mc_debug_data_ptr);
++}
++
+ static int __init xen_parse_mc_debug(char *arg)
+ {
+ mc_debug_enabled = true;
+@@ -71,20 +77,16 @@ static int __init xen_parse_mc_debug(char *arg)
+ }
+ early_param("xen_mc_debug", xen_parse_mc_debug);
+
+-void mc_percpu_init(unsigned int cpu)
+-{
+- per_cpu(mc_debug_data, cpu) = per_cpu_ptr(mc_debug_data_ptr, cpu);
+-}
+-
+ static int __init mc_debug_enable(void)
+ {
+ unsigned long flags;
++ struct mc_debug_data __percpu *mcdb;
+
+ if (!mc_debug_enabled)
+ return 0;
+
+- mc_debug_data_ptr = alloc_percpu(struct mc_debug_data);
+- if (!mc_debug_data_ptr) {
++ mcdb = alloc_percpu(struct mc_debug_data);
++ if (!mcdb) {
+ pr_err("xen_mc_debug inactive\n");
+ static_key_slow_dec(&mc_debug);
+ return -ENOMEM;
+@@ -93,7 +95,7 @@ static int __init mc_debug_enable(void)
+ /* Be careful when switching to percpu debug data. */
+ local_irq_save(flags);
+ xen_mc_flush();
+- mc_percpu_init(0);
++ mc_debug_data_ptr = mcdb;
+ local_irq_restore(flags);
+
+ pr_info("xen_mc_debug active\n");
+@@ -155,7 +157,7 @@ void xen_mc_flush(void)
+ trace_xen_mc_flush(b->mcidx, b->argidx, b->cbidx);
+
+ if (static_key_false(&mc_debug)) {
+- mcdb = __this_cpu_read(mc_debug_data);
++ mcdb = get_mc_debug();
+ memcpy(mcdb->entries, b->entries,
+ b->mcidx * sizeof(struct multicall_entry));
+ }
+@@ -235,7 +237,7 @@ struct multicall_space __xen_mc_entry(size_t args)
+
+ ret.mc = &b->entries[b->mcidx];
+ if (static_key_false(&mc_debug)) {
+- struct mc_debug_data *mcdb = __this_cpu_read(mc_debug_data);
++ struct mc_debug_data *mcdb = get_mc_debug();
+
+ mcdb->caller[b->mcidx] = __builtin_return_address(0);
+ mcdb->argsz[b->mcidx] = args;
+diff --git a/arch/x86/xen/smp_pv.c b/arch/x86/xen/smp_pv.c
+index 6863d3da7decfc..7ea57f728b89db 100644
+--- a/arch/x86/xen/smp_pv.c
++++ b/arch/x86/xen/smp_pv.c
+@@ -305,7 +305,6 @@ static int xen_pv_kick_ap(unsigned int cpu, struct task_struct *idle)
+ return rc;
+
+ xen_pmu_init(cpu);
+- mc_percpu_init(cpu);
+
+ /*
+ * Why is this a BUG? If the hypercall fails then everything can be
+diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
+index 63c13a2ccf556a..25e318ef27d6b0 100644
+--- a/arch/x86/xen/xen-ops.h
++++ b/arch/x86/xen/xen-ops.h
+@@ -261,9 +261,6 @@ void xen_mc_callback(void (*fn)(void *), void *data);
+ */
+ struct multicall_space xen_mc_extend_args(unsigned long op, size_t arg_size);
+
+-/* Do percpu data initialization for multicalls. */
+-void mc_percpu_init(unsigned int cpu);
+-
+ extern bool is_xen_pmu;
+
+ irqreturn_t xen_pmu_irq_handler(int irq, void *dev_id);
+diff --git a/block/bio-integrity.c b/block/bio-integrity.c
+index 5d81ad9a3d20a7..03e8fae173937a 100644
+--- a/block/bio-integrity.c
++++ b/block/bio-integrity.c
+@@ -104,16 +104,12 @@ struct bio_integrity_payload *bio_integrity_alloc(struct bio *bio,
+ }
+ EXPORT_SYMBOL(bio_integrity_alloc);
+
+-static void bio_integrity_unpin_bvec(struct bio_vec *bv, int nr_vecs,
+- bool dirty)
++static void bio_integrity_unpin_bvec(struct bio_vec *bv, int nr_vecs)
+ {
+ int i;
+
+- for (i = 0; i < nr_vecs; i++) {
+- if (dirty && !PageCompound(bv[i].bv_page))
+- set_page_dirty_lock(bv[i].bv_page);
++ for (i = 0; i < nr_vecs; i++)
+ unpin_user_page(bv[i].bv_page);
+- }
+ }
+
+ static void bio_integrity_uncopy_user(struct bio_integrity_payload *bip)
+@@ -129,7 +125,7 @@ static void bio_integrity_uncopy_user(struct bio_integrity_payload *bip)
+ ret = copy_to_iter(bvec_virt(bounce_bvec), bytes, &orig_iter);
+ WARN_ON_ONCE(ret != bytes);
+
+- bio_integrity_unpin_bvec(orig_bvecs, orig_nr_vecs, true);
++ bio_integrity_unpin_bvec(orig_bvecs, orig_nr_vecs);
+ }
+
+ /**
+@@ -149,8 +145,7 @@ void bio_integrity_unmap_user(struct bio *bio)
+ return;
+ }
+
+- bio_integrity_unpin_bvec(bip->bip_vec, bip->bip_max_vcnt,
+- bio_data_dir(bio) == READ);
++ bio_integrity_unpin_bvec(bip->bip_vec, bip->bip_max_vcnt);
+ }
+
+ /**
+@@ -236,7 +231,7 @@ static int bio_integrity_copy_user(struct bio *bio, struct bio_vec *bvec,
+ }
+
+ if (write)
+- bio_integrity_unpin_bvec(bvec, nr_vecs, false);
++ bio_integrity_unpin_bvec(bvec, nr_vecs);
+ else
+ memcpy(&bip->bip_vec[1], bvec, nr_vecs * sizeof(*bvec));
+
+@@ -357,7 +352,7 @@ int bio_integrity_map_user(struct bio *bio, struct iov_iter *iter)
+ return 0;
+
+ release_pages:
+- bio_integrity_unpin_bvec(bvec, nr_bvecs, false);
++ bio_integrity_unpin_bvec(bvec, nr_bvecs);
+ free_bvec:
+ if (bvec != stack_vec)
+ kfree(bvec);
+diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
+index 6f548a4376aa4a..7802186849074f 100644
+--- a/block/blk-sysfs.c
++++ b/block/blk-sysfs.c
+@@ -824,6 +824,8 @@ int blk_register_queue(struct gendisk *disk)
+ out_debugfs_remove:
+ blk_debugfs_remove(disk);
+ mutex_unlock(&q->sysfs_lock);
++ if (queue_is_mq(q))
++ blk_mq_sysfs_unregister(disk);
+ out_put_queue_kobj:
+ kobject_put(&disk->queue_kobj);
+ return ret;
+diff --git a/drivers/accel/ivpu/ivpu_drv.c b/drivers/accel/ivpu/ivpu_drv.c
+index 38cf1c342c7249..3e56ce8bc2c1d5 100644
+--- a/drivers/accel/ivpu/ivpu_drv.c
++++ b/drivers/accel/ivpu/ivpu_drv.c
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0-only
+ /*
+- * Copyright (C) 2020-2024 Intel Corporation
++ * Copyright (C) 2020-2025 Intel Corporation
+ */
+
+ #include <linux/firmware.h>
+@@ -167,7 +167,7 @@ static int ivpu_get_param_ioctl(struct drm_device *dev, void *data, struct drm_f
+ args->value = vdev->platform;
+ break;
+ case DRM_IVPU_PARAM_CORE_CLOCK_RATE:
+- args->value = ivpu_hw_ratio_to_freq(vdev, vdev->hw->pll.max_ratio);
++ args->value = ivpu_hw_dpu_max_freq_get(vdev);
+ break;
+ case DRM_IVPU_PARAM_NUM_CONTEXTS:
+ args->value = ivpu_get_context_count(vdev);
+diff --git a/drivers/accel/ivpu/ivpu_fw.c b/drivers/accel/ivpu/ivpu_fw.c
+index 6037ec0b309689..7db9a59640e73f 100644
+--- a/drivers/accel/ivpu/ivpu_fw.c
++++ b/drivers/accel/ivpu/ivpu_fw.c
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0-only
+ /*
+- * Copyright (C) 2020-2024 Intel Corporation
++ * Copyright (C) 2020-2025 Intel Corporation
+ */
+
+ #include <linux/firmware.h>
+@@ -561,7 +561,6 @@ void ivpu_fw_boot_params_setup(struct ivpu_device *vdev, struct vpu_boot_params
+
+ boot_params->magic = VPU_BOOT_PARAMS_MAGIC;
+ boot_params->vpu_id = to_pci_dev(vdev->drm.dev)->bus->number;
+- boot_params->frequency = ivpu_hw_pll_freq_get(vdev);
+
+ /*
+ * This param is a debug firmware feature. It switches default clock
+diff --git a/drivers/accel/ivpu/ivpu_hw.h b/drivers/accel/ivpu/ivpu_hw.h
+index fc4dbfc980c819..1e85306bcd0653 100644
+--- a/drivers/accel/ivpu/ivpu_hw.h
++++ b/drivers/accel/ivpu/ivpu_hw.h
+@@ -1,6 +1,6 @@
+ /* SPDX-License-Identifier: GPL-2.0-only */
+ /*
+- * Copyright (C) 2020-2024 Intel Corporation
++ * Copyright (C) 2020-2025 Intel Corporation
+ */
+
+ #ifndef __IVPU_HW_H__
+@@ -86,9 +86,9 @@ static inline u64 ivpu_hw_range_size(const struct ivpu_addr_range *range)
+ return range->end - range->start;
+ }
+
+-static inline u32 ivpu_hw_ratio_to_freq(struct ivpu_device *vdev, u32 ratio)
++static inline u32 ivpu_hw_dpu_max_freq_get(struct ivpu_device *vdev)
+ {
+- return ivpu_hw_btrs_ratio_to_freq(vdev, ratio);
++ return ivpu_hw_btrs_dpu_max_freq_get(vdev);
+ }
+
+ static inline void ivpu_hw_irq_clear(struct ivpu_device *vdev)
+@@ -96,11 +96,6 @@ static inline void ivpu_hw_irq_clear(struct ivpu_device *vdev)
+ ivpu_hw_ip_irq_clear(vdev);
+ }
+
+-static inline u32 ivpu_hw_pll_freq_get(struct ivpu_device *vdev)
+-{
+- return ivpu_hw_btrs_pll_freq_get(vdev);
+-}
+-
+ static inline u32 ivpu_hw_profiling_freq_get(struct ivpu_device *vdev)
+ {
+ return vdev->hw->pll.profiling_freq;
+diff --git a/drivers/accel/ivpu/ivpu_hw_btrs.c b/drivers/accel/ivpu/ivpu_hw_btrs.c
+index 3212c99f36823a..51b9581bb60aca 100644
+--- a/drivers/accel/ivpu/ivpu_hw_btrs.c
++++ b/drivers/accel/ivpu/ivpu_hw_btrs.c
+@@ -1,8 +1,10 @@
+ // SPDX-License-Identifier: GPL-2.0-only
+ /*
+- * Copyright (C) 2020-2024 Intel Corporation
++ * Copyright (C) 2020-2025 Intel Corporation
+ */
+
++#include <linux/units.h>
++
+ #include "ivpu_drv.h"
+ #include "ivpu_hw.h"
+ #include "ivpu_hw_btrs.h"
+@@ -28,17 +30,13 @@
+
+ #define BTRS_LNL_ALL_IRQ_MASK ((u32)-1)
+
+-#define BTRS_MTL_WP_CONFIG_1_TILE_5_3_RATIO WP_CONFIG(MTL_CONFIG_1_TILE, MTL_PLL_RATIO_5_3)
+-#define BTRS_MTL_WP_CONFIG_1_TILE_4_3_RATIO WP_CONFIG(MTL_CONFIG_1_TILE, MTL_PLL_RATIO_4_3)
+-#define BTRS_MTL_WP_CONFIG_2_TILE_5_3_RATIO WP_CONFIG(MTL_CONFIG_2_TILE, MTL_PLL_RATIO_5_3)
+-#define BTRS_MTL_WP_CONFIG_2_TILE_4_3_RATIO WP_CONFIG(MTL_CONFIG_2_TILE, MTL_PLL_RATIO_4_3)
+-#define BTRS_MTL_WP_CONFIG_0_TILE_PLL_OFF WP_CONFIG(0, 0)
+
+ #define PLL_CDYN_DEFAULT 0x80
+ #define PLL_EPP_DEFAULT 0x80
+ #define PLL_CONFIG_DEFAULT 0x0
+-#define PLL_SIMULATION_FREQ 10000000
+-#define PLL_REF_CLK_FREQ 50000000
++#define PLL_REF_CLK_FREQ 50000000ull
++#define PLL_RATIO_TO_FREQ(x) ((x) * PLL_REF_CLK_FREQ)
++
+ #define PLL_TIMEOUT_US (1500 * USEC_PER_MSEC)
+ #define IDLE_TIMEOUT_US (5 * USEC_PER_MSEC)
+ #define TIMEOUT_US (150 * USEC_PER_MSEC)
+@@ -62,6 +60,8 @@
+ #define DCT_ENABLE 0x1
+ #define DCT_DISABLE 0x0
+
++static u32 pll_ratio_to_dpu_freq(struct ivpu_device *vdev, u32 ratio);
++
+ int ivpu_hw_btrs_irqs_clear_with_0_mtl(struct ivpu_device *vdev)
+ {
+ REGB_WR32(VPU_HW_BTRS_MTL_INTERRUPT_STAT, BTRS_MTL_ALL_IRQ_MASK);
+@@ -156,7 +156,7 @@ static int info_init_mtl(struct ivpu_device *vdev)
+
+ hw->tile_fuse = BTRS_MTL_TILE_FUSE_ENABLE_BOTH;
+ hw->sku = BTRS_MTL_TILE_SKU_BOTH;
+- hw->config = BTRS_MTL_WP_CONFIG_2_TILE_4_3_RATIO;
++ hw->config = WP_CONFIG(MTL_CONFIG_2_TILE, MTL_PLL_RATIO_4_3);
+
+ return 0;
+ }
+@@ -334,8 +334,8 @@ int ivpu_hw_btrs_wp_drive(struct ivpu_device *vdev, bool enable)
+
+ prepare_wp_request(vdev, &wp, enable);
+
+- ivpu_dbg(vdev, PM, "PLL workpoint request: %u Hz, config: 0x%x, epp: 0x%x, cdyn: 0x%x\n",
+- PLL_RATIO_TO_FREQ(wp.target), wp.cfg, wp.epp, wp.cdyn);
++ ivpu_dbg(vdev, PM, "PLL workpoint request: %lu MHz, config: 0x%x, epp: 0x%x, cdyn: 0x%x\n",
++ pll_ratio_to_dpu_freq(vdev, wp.target) / HZ_PER_MHZ, wp.cfg, wp.epp, wp.cdyn);
+
+ ret = wp_request_send(vdev, &wp);
+ if (ret) {
+@@ -573,6 +573,39 @@ int ivpu_hw_btrs_wait_for_idle(struct ivpu_device *vdev)
+ return REGB_POLL_FLD(VPU_HW_BTRS_LNL_VPU_STATUS, IDLE, 0x1, IDLE_TIMEOUT_US);
+ }
+
++static u32 pll_config_get_mtl(struct ivpu_device *vdev)
++{
++ return REGB_RD32(VPU_HW_BTRS_MTL_CURRENT_PLL);
++}
++
++static u32 pll_config_get_lnl(struct ivpu_device *vdev)
++{
++ return REGB_RD32(VPU_HW_BTRS_LNL_PLL_FREQ);
++}
++
++static u32 pll_ratio_to_dpu_freq_mtl(u16 ratio)
++{
++ return (PLL_RATIO_TO_FREQ(ratio) * 2) / 3;
++}
++
++static u32 pll_ratio_to_dpu_freq_lnl(u16 ratio)
++{
++ return PLL_RATIO_TO_FREQ(ratio) / 2;
++}
++
++static u32 pll_ratio_to_dpu_freq(struct ivpu_device *vdev, u32 ratio)
++{
++ if (ivpu_hw_btrs_gen(vdev) == IVPU_HW_BTRS_MTL)
++ return pll_ratio_to_dpu_freq_mtl(ratio);
++ else
++ return pll_ratio_to_dpu_freq_lnl(ratio);
++}
++
++u32 ivpu_hw_btrs_dpu_max_freq_get(struct ivpu_device *vdev)
++{
++ return pll_ratio_to_dpu_freq(vdev, vdev->hw->pll.max_ratio);
++}
++
+ /* Handler for IRQs from Buttress core (irqB) */
+ bool ivpu_hw_btrs_irq_handler_mtl(struct ivpu_device *vdev, int irq)
+ {
+@@ -582,9 +615,12 @@ bool ivpu_hw_btrs_irq_handler_mtl(struct ivpu_device *vdev, int irq)
+ if (!status)
+ return false;
+
+- if (REG_TEST_FLD(VPU_HW_BTRS_MTL_INTERRUPT_STAT, FREQ_CHANGE, status))
+- ivpu_dbg(vdev, IRQ, "FREQ_CHANGE irq: %08x",
+- REGB_RD32(VPU_HW_BTRS_MTL_CURRENT_PLL));
++ if (REG_TEST_FLD(VPU_HW_BTRS_MTL_INTERRUPT_STAT, FREQ_CHANGE, status)) {
++ u32 pll = pll_config_get_mtl(vdev);
++
++ ivpu_dbg(vdev, IRQ, "FREQ_CHANGE irq, wp %08x, %lu MHz",
++ pll, pll_ratio_to_dpu_freq_mtl(pll) / HZ_PER_MHZ);
++ }
+
+ if (REG_TEST_FLD(VPU_HW_BTRS_MTL_INTERRUPT_STAT, ATS_ERR, status)) {
+ ivpu_err(vdev, "ATS_ERR irq 0x%016llx", REGB_RD64(VPU_HW_BTRS_MTL_ATS_ERR_LOG_0));
+@@ -634,8 +670,12 @@ bool ivpu_hw_btrs_irq_handler_lnl(struct ivpu_device *vdev, int irq)
+ ivpu_err_ratelimited(vdev, "IRQ FIFO full\n");
+ }
+
+- if (REG_TEST_FLD(VPU_HW_BTRS_LNL_INTERRUPT_STAT, FREQ_CHANGE, status))
+- ivpu_dbg(vdev, IRQ, "FREQ_CHANGE irq: %08x", REGB_RD32(VPU_HW_BTRS_LNL_PLL_FREQ));
++ if (REG_TEST_FLD(VPU_HW_BTRS_LNL_INTERRUPT_STAT, FREQ_CHANGE, status)) {
++ u32 pll = pll_config_get_lnl(vdev);
++
++ ivpu_dbg(vdev, IRQ, "FREQ_CHANGE irq, wp %08x, %lu MHz",
++ pll, pll_ratio_to_dpu_freq_lnl(pll) / HZ_PER_MHZ);
++ }
+
+ if (REG_TEST_FLD(VPU_HW_BTRS_LNL_INTERRUPT_STAT, ATS_ERR, status)) {
+ ivpu_err(vdev, "ATS_ERR LOG1 0x%08x ATS_ERR_LOG2 0x%08x\n",
+@@ -718,60 +758,6 @@ void ivpu_hw_btrs_dct_set_status(struct ivpu_device *vdev, bool enable, u32 acti
+ REGB_WR32(VPU_HW_BTRS_LNL_PCODE_MAILBOX_STATUS, val);
+ }
+
+-static u32 pll_ratio_to_freq_mtl(u32 ratio, u32 config)
+-{
+- u32 pll_clock = PLL_REF_CLK_FREQ * ratio;
+- u32 cpu_clock;
+-
+- if ((config & 0xff) == MTL_PLL_RATIO_4_3)
+- cpu_clock = pll_clock * 2 / 4;
+- else
+- cpu_clock = pll_clock * 2 / 5;
+-
+- return cpu_clock;
+-}
+-
+-u32 ivpu_hw_btrs_ratio_to_freq(struct ivpu_device *vdev, u32 ratio)
+-{
+- struct ivpu_hw_info *hw = vdev->hw;
+-
+- if (ivpu_hw_btrs_gen(vdev) == IVPU_HW_BTRS_MTL)
+- return pll_ratio_to_freq_mtl(ratio, hw->config);
+- else
+- return PLL_RATIO_TO_FREQ(ratio);
+-}
+-
+-static u32 pll_freq_get_mtl(struct ivpu_device *vdev)
+-{
+- u32 pll_curr_ratio;
+-
+- pll_curr_ratio = REGB_RD32(VPU_HW_BTRS_MTL_CURRENT_PLL);
+- pll_curr_ratio &= VPU_HW_BTRS_MTL_CURRENT_PLL_RATIO_MASK;
+-
+- if (!ivpu_is_silicon(vdev))
+- return PLL_SIMULATION_FREQ;
+-
+- return pll_ratio_to_freq_mtl(pll_curr_ratio, vdev->hw->config);
+-}
+-
+-static u32 pll_freq_get_lnl(struct ivpu_device *vdev)
+-{
+- u32 pll_curr_ratio;
+-
+- pll_curr_ratio = REGB_RD32(VPU_HW_BTRS_LNL_PLL_FREQ);
+- pll_curr_ratio &= VPU_HW_BTRS_LNL_PLL_FREQ_RATIO_MASK;
+-
+- return PLL_RATIO_TO_FREQ(pll_curr_ratio);
+-}
+-
+-u32 ivpu_hw_btrs_pll_freq_get(struct ivpu_device *vdev)
+-{
+- if (ivpu_hw_btrs_gen(vdev) == IVPU_HW_BTRS_MTL)
+- return pll_freq_get_mtl(vdev);
+- else
+- return pll_freq_get_lnl(vdev);
+-}
+-
+ u32 ivpu_hw_btrs_telemetry_offset_get(struct ivpu_device *vdev)
+ {
+ if (ivpu_hw_btrs_gen(vdev) == IVPU_HW_BTRS_MTL)
+diff --git a/drivers/accel/ivpu/ivpu_hw_btrs.h b/drivers/accel/ivpu/ivpu_hw_btrs.h
+index 04f14f50fed62e..71792dab3c2107 100644
+--- a/drivers/accel/ivpu/ivpu_hw_btrs.h
++++ b/drivers/accel/ivpu/ivpu_hw_btrs.h
+@@ -1,6 +1,6 @@
+ /* SPDX-License-Identifier: GPL-2.0-only */
+ /*
+- * Copyright (C) 2020-2024 Intel Corporation
++ * Copyright (C) 2020-2025 Intel Corporation
+ */
+
+ #ifndef __IVPU_HW_BTRS_H__
+@@ -13,7 +13,6 @@
+
+ #define PLL_PROFILING_FREQ_DEFAULT 38400000
+ #define PLL_PROFILING_FREQ_HIGH 400000000
+-#define PLL_RATIO_TO_FREQ(x) ((x) * PLL_REF_CLK_FREQ)
+
+ #define DCT_DEFAULT_ACTIVE_PERCENT 15u
+ #define DCT_PERIOD_US 35300u
+@@ -32,12 +31,11 @@ int ivpu_hw_btrs_ip_reset(struct ivpu_device *vdev);
+ void ivpu_hw_btrs_profiling_freq_reg_set_lnl(struct ivpu_device *vdev);
+ void ivpu_hw_btrs_ats_print_lnl(struct ivpu_device *vdev);
+ void ivpu_hw_btrs_clock_relinquish_disable_lnl(struct ivpu_device *vdev);
++u32 ivpu_hw_btrs_dpu_max_freq_get(struct ivpu_device *vdev);
+ bool ivpu_hw_btrs_irq_handler_mtl(struct ivpu_device *vdev, int irq);
+ bool ivpu_hw_btrs_irq_handler_lnl(struct ivpu_device *vdev, int irq);
+ int ivpu_hw_btrs_dct_get_request(struct ivpu_device *vdev, bool *enable);
+ void ivpu_hw_btrs_dct_set_status(struct ivpu_device *vdev, bool enable, u32 dct_percent);
+-u32 ivpu_hw_btrs_pll_freq_get(struct ivpu_device *vdev);
+-u32 ivpu_hw_btrs_ratio_to_freq(struct ivpu_device *vdev, u32 ratio);
+ u32 ivpu_hw_btrs_telemetry_offset_get(struct ivpu_device *vdev);
+ u32 ivpu_hw_btrs_telemetry_size_get(struct ivpu_device *vdev);
+ u32 ivpu_hw_btrs_telemetry_enable_get(struct ivpu_device *vdev);
+diff --git a/drivers/ata/libata-sata.c b/drivers/ata/libata-sata.c
+index ba300cc0a3a327..2e4463d3a3561f 100644
+--- a/drivers/ata/libata-sata.c
++++ b/drivers/ata/libata-sata.c
+@@ -1510,6 +1510,8 @@ int ata_eh_get_ncq_success_sense(struct ata_link *link)
+ unsigned int err_mask, tag;
+ u8 *sense, sk = 0, asc = 0, ascq = 0;
+ u64 sense_valid, val;
++ u16 extended_sense;
++ bool aux_icc_valid;
+ int ret = 0;
+
+ err_mask = ata_read_log_page(dev, ATA_LOG_SENSE_NCQ, 0, buf, 2);
+@@ -1529,6 +1531,8 @@ int ata_eh_get_ncq_success_sense(struct ata_link *link)
+
+ sense_valid = (u64)buf[8] | ((u64)buf[9] << 8) |
+ ((u64)buf[10] << 16) | ((u64)buf[11] << 24);
++ extended_sense = get_unaligned_le16(&buf[14]);
++ aux_icc_valid = extended_sense & BIT(15);
+
+ ata_qc_for_each_raw(ap, qc, tag) {
+ if (!(qc->flags & ATA_QCFLAG_EH) ||
+@@ -1556,6 +1560,17 @@ int ata_eh_get_ncq_success_sense(struct ata_link *link)
+ continue;
+ }
+
++ qc->result_tf.nsect = sense[6];
++ qc->result_tf.hob_nsect = sense[7];
++ qc->result_tf.lbal = sense[8];
++ qc->result_tf.lbam = sense[9];
++ qc->result_tf.lbah = sense[10];
++ qc->result_tf.hob_lbal = sense[11];
++ qc->result_tf.hob_lbam = sense[12];
++ qc->result_tf.hob_lbah = sense[13];
++ if (aux_icc_valid)
++ qc->result_tf.auxiliary = get_unaligned_le32(&sense[16]);
++
+ /* Set sense without also setting scsicmd->result */
+ scsi_build_sense_buffer(dev->flags & ATA_DFLAG_D_SENSE,
+ qc->scsicmd->sense_buffer, sk,
+diff --git a/drivers/block/loop.c b/drivers/block/loop.c
+index c05fe27a96b64f..7668b79d8b0a94 100644
+--- a/drivers/block/loop.c
++++ b/drivers/block/loop.c
+@@ -235,72 +235,6 @@ static void loop_set_size(struct loop_device *lo, loff_t size)
+ kobject_uevent(&disk_to_dev(lo->lo_disk)->kobj, KOBJ_CHANGE);
+ }
+
+-static int lo_write_bvec(struct file *file, struct bio_vec *bvec, loff_t *ppos)
+-{
+- struct iov_iter i;
+- ssize_t bw;
+-
+- iov_iter_bvec(&i, ITER_SOURCE, bvec, 1, bvec->bv_len);
+-
+- bw = vfs_iter_write(file, &i, ppos, 0);
+-
+- if (likely(bw == bvec->bv_len))
+- return 0;
+-
+- printk_ratelimited(KERN_ERR
+- "loop: Write error at byte offset %llu, length %i.\n",
+- (unsigned long long)*ppos, bvec->bv_len);
+- if (bw >= 0)
+- bw = -EIO;
+- return bw;
+-}
+-
+-static int lo_write_simple(struct loop_device *lo, struct request *rq,
+- loff_t pos)
+-{
+- struct bio_vec bvec;
+- struct req_iterator iter;
+- int ret = 0;
+-
+- rq_for_each_segment(bvec, rq, iter) {
+- ret = lo_write_bvec(lo->lo_backing_file, &bvec, &pos);
+- if (ret < 0)
+- break;
+- cond_resched();
+- }
+-
+- return ret;
+-}
+-
+-static int lo_read_simple(struct loop_device *lo, struct request *rq,
+- loff_t pos)
+-{
+- struct bio_vec bvec;
+- struct req_iterator iter;
+- struct iov_iter i;
+- ssize_t len;
+-
+- rq_for_each_segment(bvec, rq, iter) {
+- iov_iter_bvec(&i, ITER_DEST, &bvec, 1, bvec.bv_len);
+- len = vfs_iter_read(lo->lo_backing_file, &i, &pos, 0);
+- if (len < 0)
+- return len;
+-
+- flush_dcache_page(bvec.bv_page);
+-
+- if (len != bvec.bv_len) {
+- struct bio *bio;
+-
+- __rq_for_each_bio(bio, rq)
+- zero_fill_bio(bio);
+- break;
+- }
+- cond_resched();
+- }
+-
+- return 0;
+-}
+-
+ static void loop_clear_limits(struct loop_device *lo, int mode)
+ {
+ struct queue_limits lim = queue_limits_start_update(lo->lo_queue);
+@@ -366,7 +300,7 @@ static void lo_complete_rq(struct request *rq)
+ struct loop_cmd *cmd = blk_mq_rq_to_pdu(rq);
+ blk_status_t ret = BLK_STS_OK;
+
+- if (!cmd->use_aio || cmd->ret < 0 || cmd->ret == blk_rq_bytes(rq) ||
++ if (cmd->ret < 0 || cmd->ret == blk_rq_bytes(rq) ||
+ req_op(rq) != REQ_OP_READ) {
+ if (cmd->ret < 0)
+ ret = errno_to_blk_status(cmd->ret);
+@@ -382,14 +316,13 @@ static void lo_complete_rq(struct request *rq)
+ cmd->ret = 0;
+ blk_mq_requeue_request(rq, true);
+ } else {
+- if (cmd->use_aio) {
+- struct bio *bio = rq->bio;
++ struct bio *bio = rq->bio;
+
+- while (bio) {
+- zero_fill_bio(bio);
+- bio = bio->bi_next;
+- }
++ while (bio) {
++ zero_fill_bio(bio);
++ bio = bio->bi_next;
+ }
++
+ ret = BLK_STS_IOERR;
+ end_io:
+ blk_mq_end_request(rq, ret);
+@@ -469,9 +402,14 @@ static int lo_rw_aio(struct loop_device *lo, struct loop_cmd *cmd,
+
+ cmd->iocb.ki_pos = pos;
+ cmd->iocb.ki_filp = file;
+- cmd->iocb.ki_complete = lo_rw_aio_complete;
+- cmd->iocb.ki_flags = IOCB_DIRECT;
+- cmd->iocb.ki_ioprio = IOPRIO_PRIO_VALUE(IOPRIO_CLASS_NONE, 0);
++ cmd->iocb.ki_ioprio = req_get_ioprio(rq);
++ if (cmd->use_aio) {
++ cmd->iocb.ki_complete = lo_rw_aio_complete;
++ cmd->iocb.ki_flags = IOCB_DIRECT;
++ } else {
++ cmd->iocb.ki_complete = NULL;
++ cmd->iocb.ki_flags = 0;
++ }
+
+ if (rw == ITER_SOURCE)
+ ret = file->f_op->write_iter(&cmd->iocb, &iter);
+@@ -482,7 +420,7 @@ static int lo_rw_aio(struct loop_device *lo, struct loop_cmd *cmd,
+
+ if (ret != -EIOCBQUEUED)
+ lo_rw_aio_complete(&cmd->iocb, ret);
+- return 0;
++ return -EIOCBQUEUED;
+ }
+
+ static int do_req_filebacked(struct loop_device *lo, struct request *rq)
+@@ -490,15 +428,6 @@ static int do_req_filebacked(struct loop_device *lo, struct request *rq)
+ struct loop_cmd *cmd = blk_mq_rq_to_pdu(rq);
+ loff_t pos = ((loff_t) blk_rq_pos(rq) << 9) + lo->lo_offset;
+
+- /*
+- * lo_write_simple and lo_read_simple should have been covered
+- * by io submit style function like lo_rw_aio(), one blocker
+- * is that lo_read_simple() need to call flush_dcache_page after
+- * the page is written from kernel, and it isn't easy to handle
+- * this in io submit style function which submits all segments
+- * of the req at one time. And direct read IO doesn't need to
+- * run flush_dcache_page().
+- */
+ switch (req_op(rq)) {
+ case REQ_OP_FLUSH:
+ return lo_req_flush(lo, rq);
+@@ -514,15 +443,9 @@ static int do_req_filebacked(struct loop_device *lo, struct request *rq)
+ case REQ_OP_DISCARD:
+ return lo_fallocate(lo, rq, pos, FALLOC_FL_PUNCH_HOLE);
+ case REQ_OP_WRITE:
+- if (cmd->use_aio)
+- return lo_rw_aio(lo, cmd, pos, ITER_SOURCE);
+- else
+- return lo_write_simple(lo, rq, pos);
++ return lo_rw_aio(lo, cmd, pos, ITER_SOURCE);
+ case REQ_OP_READ:
+- if (cmd->use_aio)
+- return lo_rw_aio(lo, cmd, pos, ITER_DEST);
+- else
+- return lo_read_simple(lo, rq, pos);
++ return lo_rw_aio(lo, cmd, pos, ITER_DEST);
+ default:
+ WARN_ON_ONCE(1);
+ return -EIO;
+@@ -649,19 +572,20 @@ static int loop_change_fd(struct loop_device *lo, struct block_device *bdev,
+ * dependency.
+ */
+ fput(old_file);
++ dev_set_uevent_suppress(disk_to_dev(lo->lo_disk), 0);
+ if (partscan)
+ loop_reread_partitions(lo);
+
+ error = 0;
+ done:
+- /* enable and uncork uevent now that we are done */
+- dev_set_uevent_suppress(disk_to_dev(lo->lo_disk), 0);
++ kobject_uevent(&disk_to_dev(lo->lo_disk)->kobj, KOBJ_CHANGE);
+ return error;
+
+ out_err:
+ loop_global_unlock(lo, is_loop);
+ out_putf:
+ fput(file);
++ dev_set_uevent_suppress(disk_to_dev(lo->lo_disk), 0);
+ goto done;
+ }
+
+@@ -1115,8 +1039,8 @@ static int loop_configure(struct loop_device *lo, blk_mode_t mode,
+ if (partscan)
+ clear_bit(GD_SUPPRESS_PART_SCAN, &lo->lo_disk->state);
+
+- /* enable and uncork uevent now that we are done */
+ dev_set_uevent_suppress(disk_to_dev(lo->lo_disk), 0);
++ kobject_uevent(&disk_to_dev(lo->lo_disk)->kobj, KOBJ_CHANGE);
+
+ loop_global_unlock(lo, is_loop);
+ if (partscan)
+@@ -1907,7 +1831,6 @@ static void loop_handle_cmd(struct loop_cmd *cmd)
+ struct loop_device *lo = rq->q->queuedata;
+ int ret = 0;
+ struct mem_cgroup *old_memcg = NULL;
+- const bool use_aio = cmd->use_aio;
+
+ if (write && (lo->lo_flags & LO_FLAGS_READ_ONLY)) {
+ ret = -EIO;
+@@ -1937,7 +1860,7 @@ static void loop_handle_cmd(struct loop_cmd *cmd)
+ }
+ failed:
+ /* complete non-aio request */
+- if (!use_aio || ret) {
++ if (ret != -EIOCBQUEUED) {
+ if (ret == -EOPNOTSUPP)
+ cmd->ret = ret;
+ else
+diff --git a/drivers/bluetooth/btqca.c b/drivers/bluetooth/btqca.c
+index 3d6778b95e0058..edefb9dc76aa1a 100644
+--- a/drivers/bluetooth/btqca.c
++++ b/drivers/bluetooth/btqca.c
+@@ -889,7 +889,7 @@ int qca_uart_setup(struct hci_dev *hdev, uint8_t baudrate,
+ if (le32_to_cpu(ver.soc_id) == QCA_WCN3950_SOC_ID_T)
+ variant = "t";
+ else if (le32_to_cpu(ver.soc_id) == QCA_WCN3950_SOC_ID_S)
+- variant = "u";
++ variant = "s";
+
+ snprintf(config.fwname, sizeof(config.fwname),
+ "qca/cmnv%02x%s.bin", rom_ver, variant);
+diff --git a/drivers/bluetooth/btrtl.c b/drivers/bluetooth/btrtl.c
+index d3eba0d4a57d3b..7838c89e529e0c 100644
+--- a/drivers/bluetooth/btrtl.c
++++ b/drivers/bluetooth/btrtl.c
+@@ -1215,6 +1215,8 @@ struct btrtl_device_info *btrtl_initialize(struct hci_dev *hdev,
+ rtl_dev_err(hdev, "mandatory config file %s not found",
+ btrtl_dev->ic_info->cfg_name);
+ ret = btrtl_dev->cfg_len;
++ if (!ret)
++ ret = -EINVAL;
+ goto err_free;
+ }
+ }
+diff --git a/drivers/bluetooth/hci_vhci.c b/drivers/bluetooth/hci_vhci.c
+index 7651321d351ccd..9ac22e4a070bef 100644
+--- a/drivers/bluetooth/hci_vhci.c
++++ b/drivers/bluetooth/hci_vhci.c
+@@ -289,18 +289,18 @@ static void vhci_coredump(struct hci_dev *hdev)
+
+ static void vhci_coredump_hdr(struct hci_dev *hdev, struct sk_buff *skb)
+ {
+- char buf[80];
++ const char *buf;
+
+- snprintf(buf, sizeof(buf), "Controller Name: vhci_ctrl\n");
++ buf = "Controller Name: vhci_ctrl\n";
+ skb_put_data(skb, buf, strlen(buf));
+
+- snprintf(buf, sizeof(buf), "Firmware Version: vhci_fw\n");
++ buf = "Firmware Version: vhci_fw\n";
+ skb_put_data(skb, buf, strlen(buf));
+
+- snprintf(buf, sizeof(buf), "Driver: vhci_drv\n");
++ buf = "Driver: vhci_drv\n";
+ skb_put_data(skb, buf, strlen(buf));
+
+- snprintf(buf, sizeof(buf), "Vendor: vhci\n");
++ buf = "Vendor: vhci\n";
+ skb_put_data(skb, buf, strlen(buf));
+ }
+
+diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
+index 30ffbddc7ecec7..934e0e19824ce1 100644
+--- a/drivers/cpufreq/cpufreq.c
++++ b/drivers/cpufreq/cpufreq.c
+@@ -2762,10 +2762,18 @@ EXPORT_SYMBOL(cpufreq_update_policy);
+ */
+ void cpufreq_update_limits(unsigned int cpu)
+ {
++ struct cpufreq_policy *policy;
++
++ policy = cpufreq_cpu_get(cpu);
++ if (!policy)
++ return;
++
+ if (cpufreq_driver->update_limits)
+ cpufreq_driver->update_limits(cpu);
+ else
+ cpufreq_update_policy(cpu);
++
++ cpufreq_cpu_put(policy);
+ }
+ EXPORT_SYMBOL_GPL(cpufreq_update_limits);
+
+diff --git a/drivers/crypto/caam/qi.c b/drivers/crypto/caam/qi.c
+index 7701d00bcb3ac5..b6e7c0b29d4e6c 100644
+--- a/drivers/crypto/caam/qi.c
++++ b/drivers/crypto/caam/qi.c
+@@ -122,12 +122,12 @@ int caam_qi_enqueue(struct device *qidev, struct caam_drv_req *req)
+ qm_fd_addr_set64(&fd, addr);
+
+ do {
++ refcount_inc(&req->drv_ctx->refcnt);
+ ret = qman_enqueue(req->drv_ctx->req_fq, &fd);
+- if (likely(!ret)) {
+- refcount_inc(&req->drv_ctx->refcnt);
++ if (likely(!ret))
+ return 0;
+- }
+
++ refcount_dec(&req->drv_ctx->refcnt);
+ if (ret != -EBUSY)
+ break;
+ num_retries++;
+diff --git a/drivers/crypto/tegra/tegra-se-aes.c b/drivers/crypto/tegra/tegra-se-aes.c
+index ca9d0cca1f748e..0e07d0523291a5 100644
+--- a/drivers/crypto/tegra/tegra-se-aes.c
++++ b/drivers/crypto/tegra/tegra-se-aes.c
+@@ -269,7 +269,7 @@ static int tegra_aes_do_one_req(struct crypto_engine *engine, void *areq)
+ unsigned int cmdlen, key1_id, key2_id;
+ int ret;
+
+- rctx->iv = (u32 *)req->iv;
++ rctx->iv = (ctx->alg == SE_ALG_ECB) ? NULL : (u32 *)req->iv;
+ rctx->len = req->cryptlen;
+ key1_id = ctx->key1_id;
+ key2_id = ctx->key2_id;
+@@ -498,9 +498,6 @@ static int tegra_aes_crypt(struct skcipher_request *req, bool encrypt)
+ if (!req->cryptlen)
+ return 0;
+
+- if (ctx->alg == SE_ALG_ECB)
+- req->iv = NULL;
+-
+ rctx->encrypt = encrypt;
+
+ return crypto_transfer_skcipher_request_to_engine(ctx->se->engine, req);
+diff --git a/drivers/dma-buf/sw_sync.c b/drivers/dma-buf/sw_sync.c
+index f5905d67dedbbb..22a808995f106f 100644
+--- a/drivers/dma-buf/sw_sync.c
++++ b/drivers/dma-buf/sw_sync.c
+@@ -438,15 +438,17 @@ static int sw_sync_ioctl_get_deadline(struct sync_timeline *obj, unsigned long a
+ return -EINVAL;
+
+ pt = dma_fence_to_sync_pt(fence);
+- if (!pt)
+- return -EINVAL;
++ if (!pt) {
++ ret = -EINVAL;
++ goto put_fence;
++ }
+
+ spin_lock_irqsave(fence->lock, flags);
+- if (test_bit(SW_SYNC_HAS_DEADLINE_BIT, &fence->flags)) {
+- data.deadline_ns = ktime_to_ns(pt->deadline);
+- } else {
++ if (!test_bit(SW_SYNC_HAS_DEADLINE_BIT, &fence->flags)) {
+ ret = -ENOENT;
++ goto unlock;
+ }
++ data.deadline_ns = ktime_to_ns(pt->deadline);
+ spin_unlock_irqrestore(fence->lock, flags);
+
+ dma_fence_put(fence);
+@@ -458,6 +460,13 @@ static int sw_sync_ioctl_get_deadline(struct sync_timeline *obj, unsigned long a
+ return -EFAULT;
+
+ return 0;
++
++unlock:
++ spin_unlock_irqrestore(fence->lock, flags);
++put_fence:
++ dma_fence_put(fence);
++
++ return ret;
+ }
+
+ static long sw_sync_ioctl(struct file *file, unsigned int cmd,
+diff --git a/drivers/firmware/cirrus/test/cs_dsp_mock_mem_maps.c b/drivers/firmware/cirrus/test/cs_dsp_mock_mem_maps.c
+index 161272e47bdabc..73412bcef50c50 100644
+--- a/drivers/firmware/cirrus/test/cs_dsp_mock_mem_maps.c
++++ b/drivers/firmware/cirrus/test/cs_dsp_mock_mem_maps.c
+@@ -461,36 +461,6 @@ unsigned int cs_dsp_mock_xm_header_get_alg_base_in_words(struct cs_dsp_test *pri
+ }
+ EXPORT_SYMBOL_NS_GPL(cs_dsp_mock_xm_header_get_alg_base_in_words, "FW_CS_DSP_KUNIT_TEST_UTILS");
+
+-/**
+- * cs_dsp_mock_xm_header_get_fw_version_from_regmap() - Firmware version.
+- *
+- * @priv: Pointer to struct cs_dsp_test.
+- *
+- * Return: Firmware version word value.
+- */
+-unsigned int cs_dsp_mock_xm_header_get_fw_version_from_regmap(struct cs_dsp_test *priv)
+-{
+- unsigned int xm = cs_dsp_mock_base_addr_for_mem(priv, WMFW_ADSP2_XM);
+- union {
+- struct wmfw_id_hdr adsp2;
+- struct wmfw_v3_id_hdr halo;
+- } hdr;
+-
+- switch (priv->dsp->type) {
+- case WMFW_ADSP2:
+- regmap_raw_read(priv->dsp->regmap, xm, &hdr.adsp2, sizeof(hdr.adsp2));
+- return be32_to_cpu(hdr.adsp2.ver);
+- case WMFW_HALO:
+- regmap_raw_read(priv->dsp->regmap, xm, &hdr.halo, sizeof(hdr.halo));
+- return be32_to_cpu(hdr.halo.ver);
+- default:
+- KUNIT_FAIL(priv->test, NULL);
+- return 0;
+- }
+-}
+-EXPORT_SYMBOL_NS_GPL(cs_dsp_mock_xm_header_get_fw_version_from_regmap,
+- "FW_CS_DSP_KUNIT_TEST_UTILS");
+-
+ /**
+ * cs_dsp_mock_xm_header_get_fw_version() - Firmware version.
+ *
+diff --git a/drivers/firmware/cirrus/test/cs_dsp_test_bin.c b/drivers/firmware/cirrus/test/cs_dsp_test_bin.c
+index 1e161bbc5b4a46..163b7faecff466 100644
+--- a/drivers/firmware/cirrus/test/cs_dsp_test_bin.c
++++ b/drivers/firmware/cirrus/test/cs_dsp_test_bin.c
+@@ -2198,7 +2198,7 @@ static int cs_dsp_bin_test_common_init(struct kunit *test, struct cs_dsp *dsp)
+
+ priv->local->bin_builder =
+ cs_dsp_mock_bin_init(priv, 1,
+- cs_dsp_mock_xm_header_get_fw_version_from_regmap(priv));
++ cs_dsp_mock_xm_header_get_fw_version(xm_hdr));
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, priv->local->bin_builder);
+
+ /* We must provide a dummy wmfw to load */
+diff --git a/drivers/firmware/cirrus/test/cs_dsp_test_bin_error.c b/drivers/firmware/cirrus/test/cs_dsp_test_bin_error.c
+index 5dcf62f19fafd1..10761d8ff1f2ee 100644
+--- a/drivers/firmware/cirrus/test/cs_dsp_test_bin_error.c
++++ b/drivers/firmware/cirrus/test/cs_dsp_test_bin_error.c
+@@ -451,7 +451,7 @@ static int cs_dsp_bin_err_test_common_init(struct kunit *test, struct cs_dsp *ds
+
+ local->bin_builder =
+ cs_dsp_mock_bin_init(priv, 1,
+- cs_dsp_mock_xm_header_get_fw_version_from_regmap(priv));
++ cs_dsp_mock_xm_header_get_fw_version(local->xm_header));
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, local->bin_builder);
+
+ /* Init cs_dsp */
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_bios.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_bios.c
+index 423fd2eebe1e05..892a5315677fcc 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_bios.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_bios.c
+@@ -439,6 +439,13 @@ static bool amdgpu_get_bios_apu(struct amdgpu_device *adev)
+ return true;
+ }
+
++static bool amdgpu_prefer_rom_resource(struct amdgpu_device *adev)
++{
++ struct resource *res = &adev->pdev->resource[PCI_ROM_RESOURCE];
++
++ return (res->flags & IORESOURCE_ROM_SHADOW);
++}
++
+ static bool amdgpu_get_bios_dgpu(struct amdgpu_device *adev)
+ {
+ if (amdgpu_atrm_get_bios(adev)) {
+@@ -457,14 +464,27 @@ static bool amdgpu_get_bios_dgpu(struct amdgpu_device *adev)
+ goto success;
+ }
+
+- if (amdgpu_read_platform_bios(adev)) {
+- dev_info(adev->dev, "Fetched VBIOS from platform\n");
+- goto success;
+- }
++ if (amdgpu_prefer_rom_resource(adev)) {
++ if (amdgpu_read_bios(adev)) {
++ dev_info(adev->dev, "Fetched VBIOS from ROM BAR\n");
++ goto success;
++ }
+
+- if (amdgpu_read_bios(adev)) {
+- dev_info(adev->dev, "Fetched VBIOS from ROM BAR\n");
+- goto success;
++ if (amdgpu_read_platform_bios(adev)) {
++ dev_info(adev->dev, "Fetched VBIOS from platform\n");
++ goto success;
++ }
++
++ } else {
++ if (amdgpu_read_platform_bios(adev)) {
++ dev_info(adev->dev, "Fetched VBIOS from platform\n");
++ goto success;
++ }
++
++ if (amdgpu_read_bios(adev)) {
++ dev_info(adev->dev, "Fetched VBIOS from ROM BAR\n");
++ goto success;
++ }
+ }
+
+ if (amdgpu_read_bios_from_rom(adev)) {
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index 9a8f6cb2b8360e..71e8a76180ad6d 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -3442,6 +3442,7 @@ static int amdgpu_device_ip_fini(struct amdgpu_device *adev)
+ amdgpu_device_mem_scratch_fini(adev);
+ amdgpu_ib_pool_fini(adev);
+ amdgpu_seq64_fini(adev);
++ amdgpu_doorbell_fini(adev);
+ }
+ if (adev->ip_blocks[i].version->funcs->sw_fini) {
+ r = adev->ip_blocks[i].version->funcs->sw_fini(&adev->ip_blocks[i]);
+@@ -4770,7 +4771,6 @@ void amdgpu_device_fini_sw(struct amdgpu_device *adev)
+
+ iounmap(adev->rmmio);
+ adev->rmmio = NULL;
+- amdgpu_doorbell_fini(adev);
+ drm_dev_exit(idx);
+ }
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
+index 9f627caedc3f61..c9842a0e2a1cd4 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
+@@ -184,7 +184,7 @@ static void amdgpu_dma_buf_unmap(struct dma_buf_attachment *attach,
+ struct sg_table *sgt,
+ enum dma_data_direction dir)
+ {
+- if (sgt->sgl->page_link) {
++ if (sg_page(sgt->sgl)) {
+ dma_unmap_sgtable(attach->dev, sgt, dir, 0);
+ sg_free_table(sgt);
+ kfree(sgt);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+index c0ddbe7d6f0bc5..24c255e05079e0 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+@@ -1791,7 +1791,6 @@ static const u16 amdgpu_unsupported_pciidlist[] = {
+ };
+
+ static const struct pci_device_id pciidlist[] = {
+-#ifdef CONFIG_DRM_AMDGPU_SI
+ {0x1002, 0x6780, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_TAHITI},
+ {0x1002, 0x6784, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_TAHITI},
+ {0x1002, 0x6788, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_TAHITI},
+@@ -1864,8 +1863,6 @@ static const struct pci_device_id pciidlist[] = {
+ {0x1002, 0x6665, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_HAINAN|AMD_IS_MOBILITY},
+ {0x1002, 0x6667, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_HAINAN|AMD_IS_MOBILITY},
+ {0x1002, 0x666F, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_HAINAN|AMD_IS_MOBILITY},
+-#endif
+-#ifdef CONFIG_DRM_AMDGPU_CIK
+ /* Kaveri */
+ {0x1002, 0x1304, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_KAVERI|AMD_IS_MOBILITY|AMD_IS_APU},
+ {0x1002, 0x1305, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_KAVERI|AMD_IS_APU},
+@@ -1948,7 +1945,6 @@ static const struct pci_device_id pciidlist[] = {
+ {0x1002, 0x985D, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_MULLINS|AMD_IS_MOBILITY|AMD_IS_APU},
+ {0x1002, 0x985E, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_MULLINS|AMD_IS_MOBILITY|AMD_IS_APU},
+ {0x1002, 0x985F, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_MULLINS|AMD_IS_MOBILITY|AMD_IS_APU},
+-#endif
+ /* topaz */
+ {0x1002, 0x6900, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_TOPAZ},
+ {0x1002, 0x6901, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_TOPAZ},
+@@ -2280,14 +2276,14 @@ static int amdgpu_pci_probe(struct pci_dev *pdev,
+ return -ENOTSUPP;
+ }
+
++ switch (flags & AMD_ASIC_MASK) {
++ case CHIP_TAHITI:
++ case CHIP_PITCAIRN:
++ case CHIP_VERDE:
++ case CHIP_OLAND:
++ case CHIP_HAINAN:
+ #ifdef CONFIG_DRM_AMDGPU_SI
+- if (!amdgpu_si_support) {
+- switch (flags & AMD_ASIC_MASK) {
+- case CHIP_TAHITI:
+- case CHIP_PITCAIRN:
+- case CHIP_VERDE:
+- case CHIP_OLAND:
+- case CHIP_HAINAN:
++ if (!amdgpu_si_support) {
+ dev_info(&pdev->dev,
+ "SI support provided by radeon.\n");
+ dev_info(&pdev->dev,
+@@ -2295,16 +2291,18 @@ static int amdgpu_pci_probe(struct pci_dev *pdev,
+ );
+ return -ENODEV;
+ }
+- }
++ break;
++#else
++ dev_info(&pdev->dev, "amdgpu is built without SI support.\n");
++ return -ENODEV;
+ #endif
++ case CHIP_KAVERI:
++ case CHIP_BONAIRE:
++ case CHIP_HAWAII:
++ case CHIP_KABINI:
++ case CHIP_MULLINS:
+ #ifdef CONFIG_DRM_AMDGPU_CIK
+- if (!amdgpu_cik_support) {
+- switch (flags & AMD_ASIC_MASK) {
+- case CHIP_KAVERI:
+- case CHIP_BONAIRE:
+- case CHIP_HAWAII:
+- case CHIP_KABINI:
+- case CHIP_MULLINS:
++ if (!amdgpu_cik_support) {
+ dev_info(&pdev->dev,
+ "CIK support provided by radeon.\n");
+ dev_info(&pdev->dev,
+@@ -2312,8 +2310,14 @@ static int amdgpu_pci_probe(struct pci_dev *pdev,
+ );
+ return -ENODEV;
+ }
+- }
++ break;
++#else
++ dev_info(&pdev->dev, "amdgpu is built without CIK support.\n");
++ return -ENODEV;
+ #endif
++ default:
++ break;
++ }
+
+ adev = devm_drm_dev_alloc(&pdev->dev, &amdgpu_kms_driver, typeof(*adev), ddev);
+ if (IS_ERR(adev))
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+index 96f4b8904e9a6a..00752e3f9d8ab2 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+@@ -163,8 +163,8 @@ void amdgpu_bo_placement_from_domain(struct amdgpu_bo *abo, u32 domain)
+ * When GTT is just an alternative to VRAM make sure that we
+ * only use it as fallback and still try to fill up VRAM first.
+ */
+- if (domain & abo->preferred_domains & AMDGPU_GEM_DOMAIN_VRAM &&
+- !(adev->flags & AMD_IS_APU))
++ if (abo->tbo.resource && !(adev->flags & AMD_IS_APU) &&
++ domain & abo->preferred_domains & AMDGPU_GEM_DOMAIN_VRAM)
+ places[c].flags |= TTM_PL_FLAG_FALLBACK;
+ c++;
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c b/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
+index f9a4d08eef9259..0f808ffcab9433 100644
+--- a/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
+@@ -899,6 +899,10 @@ static void mes_v11_0_get_fw_version(struct amdgpu_device *adev)
+ {
+ int pipe;
+
++ /* return early if we have already fetched these */
++ if (adev->mes.sched_version && adev->mes.kiq_version)
++ return;
++
+ /* get MES scheduler/KIQ versions */
+ mutex_lock(&adev->srbm_mutex);
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/mes_v12_0.c b/drivers/gpu/drm/amd/amdgpu/mes_v12_0.c
+index 0fd0fa6ed51843..6b121c2723d66a 100644
+--- a/drivers/gpu/drm/amd/amdgpu/mes_v12_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/mes_v12_0.c
+@@ -1390,17 +1390,20 @@ static int mes_v12_0_queue_init(struct amdgpu_device *adev,
+ mes_v12_0_queue_init_register(ring);
+ }
+
+- /* get MES scheduler/KIQ versions */
+- mutex_lock(&adev->srbm_mutex);
+- soc21_grbm_select(adev, 3, pipe, 0, 0);
++ if (((pipe == AMDGPU_MES_SCHED_PIPE) && !adev->mes.sched_version) ||
++ ((pipe == AMDGPU_MES_KIQ_PIPE) && !adev->mes.kiq_version)) {
++ /* get MES scheduler/KIQ versions */
++ mutex_lock(&adev->srbm_mutex);
++ soc21_grbm_select(adev, 3, pipe, 0, 0);
+
+- if (pipe == AMDGPU_MES_SCHED_PIPE)
+- adev->mes.sched_version = RREG32_SOC15(GC, 0, regCP_MES_GP3_LO);
+- else if (pipe == AMDGPU_MES_KIQ_PIPE && adev->enable_mes_kiq)
+- adev->mes.kiq_version = RREG32_SOC15(GC, 0, regCP_MES_GP3_LO);
++ if (pipe == AMDGPU_MES_SCHED_PIPE)
++ adev->mes.sched_version = RREG32_SOC15(GC, 0, regCP_MES_GP3_LO);
++ else if (pipe == AMDGPU_MES_KIQ_PIPE && adev->enable_mes_kiq)
++ adev->mes.kiq_version = RREG32_SOC15(GC, 0, regCP_MES_GP3_LO);
+
+- soc21_grbm_select(adev, 0, 0, 0, 0);
+- mutex_unlock(&adev->srbm_mutex);
++ soc21_grbm_select(adev, 0, 0, 0, 0);
++ mutex_unlock(&adev->srbm_mutex);
++ }
+
+ return 0;
+ }
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 39df45f652b329..80a3cbd2cbe5d4 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -1715,6 +1715,13 @@ static const struct dmi_system_id dmi_quirk_table[] = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "HP Elite mt645 G8 Mobile Thin Client"),
+ },
+ },
++ {
++ .callback = edp0_on_dp1_callback,
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "HP"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "HP EliteBook 645 14 inch G11 Notebook PC"),
++ },
++ },
+ {
+ .callback = edp0_on_dp1_callback,
+ .matches = {
+@@ -1722,6 +1729,20 @@ static const struct dmi_system_id dmi_quirk_table[] = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "HP EliteBook 665 16 inch G11 Notebook PC"),
+ },
+ },
++ {
++ .callback = edp0_on_dp1_callback,
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "HP"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "HP ProBook 445 14 inch G11 Notebook PC"),
++ },
++ },
++ {
++ .callback = edp0_on_dp1_callback,
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "HP"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "HP ProBook 465 16 inch G11 Notebook PC"),
++ },
++ },
+ {}
+ /* TODO: refactor this from a fixed table to a dynamic option */
+ };
+@@ -8577,14 +8598,39 @@ static void manage_dm_interrupts(struct amdgpu_device *adev,
+ int offdelay;
+
+ if (acrtc_state) {
+- if (amdgpu_ip_version(adev, DCE_HWIP, 0) <
+- IP_VERSION(3, 5, 0) ||
+- acrtc_state->stream->link->psr_settings.psr_version <
+- DC_PSR_VERSION_UNSUPPORTED ||
+- !(adev->flags & AMD_IS_APU)) {
+- timing = &acrtc_state->stream->timing;
+-
+- /* at least 2 frames */
++ timing = &acrtc_state->stream->timing;
++
++ /*
++ * Depending on when the HW latching event of double-buffered
++ * registers happen relative to the PSR SDP deadline, and how
++ * bad the Panel clock has drifted since the last ALPM off
++ * event, there can be up to 3 frames of delay between sending
++ * the PSR exit cmd to DMUB fw, and when the panel starts
++ * displaying live frames.
++ *
++ * We can set:
++ *
++ * 20/100 * offdelay_ms = 3_frames_ms
++ * => offdelay_ms = 5 * 3_frames_ms
++ *
++ * This ensures that `3_frames_ms` will only be experienced as a
++ * 20% delay on top of how long the display has been static, and
++ * thus make the delay less perceivable.
++ */
++ if (acrtc_state->stream->link->psr_settings.psr_version <
++ DC_PSR_VERSION_UNSUPPORTED) {
++ offdelay = DIV64_U64_ROUND_UP((u64)5 * 3 * 10 *
++ timing->v_total *
++ timing->h_total,
++ timing->pix_clk_100hz);
++ config.offdelay_ms = offdelay ?: 30;
++ } else if (amdgpu_ip_version(adev, DCE_HWIP, 0) <
++ IP_VERSION(3, 5, 0) ||
++ !(adev->flags & AMD_IS_APU)) {
++ /*
++ * Older HW and DGPU have issues with instant off;
++ * use a 2 frame offdelay.
++ */
+ offdelay = DIV64_U64_ROUND_UP((u64)20 *
+ timing->v_total *
+ timing->h_total,
+@@ -8592,6 +8638,8 @@ static void manage_dm_interrupts(struct amdgpu_device *adev,
+
+ config.offdelay_ms = offdelay ?: 30;
+ } else {
++ /* offdelay_ms = 0 will never disable vblank */
++ config.offdelay_ms = 1;
+ config.disable_immediate = true;
+ }
+
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c
+index 36a830a7440f10..87058271b00cc4 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c
+@@ -113,6 +113,7 @@ bool amdgpu_dm_crtc_vrr_active(const struct dm_crtc_state *dm_state)
+ *
+ * Panel Replay and PSR SU
+ * - Enable when:
++ * - VRR is disabled
+ * - vblank counter is disabled
+ * - entry is allowed: usermode demonstrates an adequate number of fast
+ * commits)
+@@ -131,19 +132,20 @@ static void amdgpu_dm_crtc_set_panel_sr_feature(
+ bool is_sr_active = (link->replay_settings.replay_allow_active ||
+ link->psr_settings.psr_allow_active);
+ bool is_crc_window_active = false;
++ bool vrr_active = amdgpu_dm_crtc_vrr_active_irq(vblank_work->acrtc);
+
+ #ifdef CONFIG_DRM_AMD_SECURE_DISPLAY
+ is_crc_window_active =
+ amdgpu_dm_crc_window_is_activated(&vblank_work->acrtc->base);
+ #endif
+
+- if (link->replay_settings.replay_feature_enabled &&
++ if (link->replay_settings.replay_feature_enabled && !vrr_active &&
+ allow_sr_entry && !is_sr_active && !is_crc_window_active) {
+ amdgpu_dm_replay_enable(vblank_work->stream, true);
+ } else if (vblank_enabled) {
+ if (link->psr_settings.psr_version < DC_PSR_VERSION_SU_1 && is_sr_active)
+ amdgpu_dm_psr_disable(vblank_work->stream, false);
+- } else if (link->psr_settings.psr_feature_enabled &&
++ } else if (link->psr_settings.psr_feature_enabled && !vrr_active &&
+ allow_sr_entry && !is_sr_active && !is_crc_window_active) {
+
+ struct amdgpu_dm_connector *aconn =
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml21/dml21_wrapper.c b/drivers/gpu/drm/amd/display/dc/dml2/dml21/dml21_wrapper.c
+index fb80ba9287b660..d6fd13f43c08f7 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/dml21/dml21_wrapper.c
++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml21/dml21_wrapper.c
+@@ -2,6 +2,7 @@
+ //
+ // Copyright 2024 Advanced Micro Devices, Inc.
+
++#include <linux/vmalloc.h>
+
+ #include "dml2_internal_types.h"
+ #include "dml_top.h"
+@@ -13,11 +14,11 @@
+
+ static bool dml21_allocate_memory(struct dml2_context **dml_ctx)
+ {
+- *dml_ctx = kzalloc(sizeof(struct dml2_context), GFP_KERNEL);
++ *dml_ctx = vzalloc(sizeof(struct dml2_context));
+ if (!(*dml_ctx))
+ return false;
+
+- (*dml_ctx)->v21.dml_init.dml2_instance = kzalloc(sizeof(struct dml2_instance), GFP_KERNEL);
++ (*dml_ctx)->v21.dml_init.dml2_instance = vzalloc(sizeof(struct dml2_instance));
+ if (!((*dml_ctx)->v21.dml_init.dml2_instance))
+ return false;
+
+@@ -27,7 +28,7 @@ static bool dml21_allocate_memory(struct dml2_context **dml_ctx)
+ (*dml_ctx)->v21.mode_support.display_config = &(*dml_ctx)->v21.display_config;
+ (*dml_ctx)->v21.mode_programming.display_config = (*dml_ctx)->v21.mode_support.display_config;
+
+- (*dml_ctx)->v21.mode_programming.programming = kzalloc(sizeof(struct dml2_display_cfg_programming), GFP_KERNEL);
++ (*dml_ctx)->v21.mode_programming.programming = vzalloc(sizeof(struct dml2_display_cfg_programming));
+ if (!((*dml_ctx)->v21.mode_programming.programming))
+ return false;
+
+@@ -86,6 +87,8 @@ static void dml21_init(const struct dc *in_dc, struct dml2_context **dml_ctx, co
+ /* Store configuration options */
+ (*dml_ctx)->config = *config;
+
++ DC_FP_START();
++
+ /*Initialize SOCBB and DCNIP params */
+ dml21_initialize_soc_bb_params(&(*dml_ctx)->v21.dml_init, config, in_dc);
+ dml21_initialize_ip_params(&(*dml_ctx)->v21.dml_init, config, in_dc);
+@@ -96,6 +99,8 @@ static void dml21_init(const struct dc *in_dc, struct dml2_context **dml_ctx, co
+
+ /*Initialize DML21 instance */
+ dml2_initialize_instance(&(*dml_ctx)->v21.dml_init);
++
++ DC_FP_END();
+ }
+
+ bool dml21_create(const struct dc *in_dc, struct dml2_context **dml_ctx, const struct dml2_configuration_options *config)
+@@ -111,8 +116,8 @@ bool dml21_create(const struct dc *in_dc, struct dml2_context **dml_ctx, const s
+
+ void dml21_destroy(struct dml2_context *dml2)
+ {
+- kfree(dml2->v21.dml_init.dml2_instance);
+- kfree(dml2->v21.mode_programming.programming);
++ vfree(dml2->v21.dml_init.dml2_instance);
++ vfree(dml2->v21.mode_programming.programming);
+ }
+
+ static void dml21_calculate_rq_and_dlg_params(const struct dc *dc, struct dc_state *context, struct resource_context *out_new_hw_state,
+@@ -269,11 +274,16 @@ bool dml21_validate(const struct dc *in_dc, struct dc_state *context, struct dml
+ {
+ bool out = false;
+
++ DC_FP_START();
++
+ /* Use dml_validate_only for fast_validate path */
+- if (fast_validate) {
++ if (fast_validate)
+ out = dml21_check_mode_support(in_dc, context, dml_ctx);
+- } else
++ else
+ out = dml21_mode_check_and_programming(in_dc, context, dml_ctx);
++
++ DC_FP_END();
++
+ return out;
+ }
+
+@@ -412,8 +422,12 @@ void dml21_copy(struct dml2_context *dst_dml_ctx,
+
+ dst_dml_ctx->v21.mode_programming.programming = dst_dml2_programming;
+
++ DC_FP_START();
++
+ /* need to initialize copied instance for internal references to be correct */
+ dml2_initialize_instance(&dst_dml_ctx->v21.dml_init);
++
++ DC_FP_END();
+ }
+
+ bool dml21_create_copy(struct dml2_context **dst_dml_ctx,
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml2_wrapper.c b/drivers/gpu/drm/amd/display/dc/dml2/dml2_wrapper.c
+index 68b882d281959a..d0f9df2daeb416 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/dml2_wrapper.c
++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml2_wrapper.c
+@@ -24,6 +24,8 @@
+ *
+ */
+
++#include <linux/vmalloc.h>
++
+ #include "display_mode_core.h"
+ #include "dml2_internal_types.h"
+ #include "dml2_utils.h"
+@@ -732,17 +734,22 @@ bool dml2_validate(const struct dc *in_dc, struct dc_state *context, struct dml2
+ return out;
+ }
+
++ DC_FP_START();
++
+ /* Use dml_validate_only for fast_validate path */
+ if (fast_validate)
+ out = dml2_validate_only(context);
+ else
+ out = dml2_validate_and_build_resource(in_dc, context);
++
++ DC_FP_END();
++
+ return out;
+ }
+
+ static inline struct dml2_context *dml2_allocate_memory(void)
+ {
+- return (struct dml2_context *) kzalloc(sizeof(struct dml2_context), GFP_KERNEL);
++ return (struct dml2_context *) vzalloc(sizeof(struct dml2_context));
+ }
+
+ static void dml2_init(const struct dc *in_dc, const struct dml2_configuration_options *config, struct dml2_context **dml2)
+@@ -776,11 +783,15 @@ static void dml2_init(const struct dc *in_dc, const struct dml2_configuration_op
+ break;
+ }
+
++ DC_FP_START();
++
+ initialize_dml2_ip_params(*dml2, in_dc, &(*dml2)->v20.dml_core_ctx.ip);
+
+ initialize_dml2_soc_bbox(*dml2, in_dc, &(*dml2)->v20.dml_core_ctx.soc);
+
+ initialize_dml2_soc_states(*dml2, in_dc, &(*dml2)->v20.dml_core_ctx.soc, &(*dml2)->v20.dml_core_ctx.states);
++
++ DC_FP_END();
+ }
+
+ bool dml2_create(const struct dc *in_dc, const struct dml2_configuration_options *config, struct dml2_context **dml2)
+@@ -806,7 +817,7 @@ void dml2_destroy(struct dml2_context *dml2)
+
+ if (dml2->architecture == dml2_architecture_21)
+ dml21_destroy(dml2);
+- kfree(dml2);
++ vfree(dml2);
+ }
+
+ void dml2_extract_dram_and_fclk_change_support(struct dml2_context *dml2,
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
+index a5e18ab72394ae..b78096a7690eeb 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
+@@ -3027,7 +3027,11 @@ void dcn20_enable_stream(struct pipe_ctx *pipe_ctx)
+ dccg->funcs->set_dpstreamclk(dccg, DTBCLK0, tg->inst, dp_hpo_inst);
+
+ phyd32clk = get_phyd32clk_src(link);
+- dccg->funcs->enable_symclk32_se(dccg, dp_hpo_inst, phyd32clk);
++ if (link->cur_link_settings.link_rate == LINK_RATE_UNKNOWN) {
++ dccg->funcs->disable_symclk32_se(dccg, dp_hpo_inst);
++ } else {
++ dccg->funcs->enable_symclk32_se(dccg, dp_hpo_inst, phyd32clk);
++ }
+ } else {
+ if (dccg->funcs->enable_symclk_se)
+ dccg->funcs->enable_symclk_se(dccg, stream_enc->stream_enc_inst,
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_hwseq.c
+index 555a9f590cd75b..89af3e4afbc251 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_hwseq.c
+@@ -936,8 +936,11 @@ void dcn401_enable_stream(struct pipe_ctx *pipe_ctx)
+ if (dc_is_dp_signal(pipe_ctx->stream->signal) || dc_is_virtual_signal(pipe_ctx->stream->signal)) {
+ if (dc->link_srv->dp_is_128b_132b_signal(pipe_ctx)) {
+ dccg->funcs->set_dpstreamclk(dccg, DPREFCLK, tg->inst, dp_hpo_inst);
+-
+- dccg->funcs->enable_symclk32_se(dccg, dp_hpo_inst, phyd32clk);
++ if (link->cur_link_settings.link_rate == LINK_RATE_UNKNOWN) {
++ dccg->funcs->disable_symclk32_se(dccg, dp_hpo_inst);
++ } else {
++ dccg->funcs->enable_symclk32_se(dccg, dp_hpo_inst, phyd32clk);
++ }
+ } else {
+ dccg->funcs->enable_symclk_se(dccg, stream_enc->stream_enc_inst,
+ link_enc->transmitter - TRANSMITTER_UNIPHY_A);
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn31/dcn31_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn31/dcn31_resource.c
+index 911bd60d4fbcc6..3c42ba8566cf8b 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn31/dcn31_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn31/dcn31_resource.c
+@@ -890,7 +890,7 @@ static const struct dc_debug_options debug_defaults_drv = {
+ .disable_z10 = true,
+ .enable_legacy_fast_update = true,
+ .enable_z9_disable_interface = true, /* Allow support for the PMFW interface for disable Z9*/
+- .dml_hostvm_override = DML_HOSTVM_NO_OVERRIDE,
++ .dml_hostvm_override = DML_HOSTVM_OVERRIDE_FALSE,
+ .using_dml2 = false,
+ };
+
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_thermal.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_thermal.c
+index a8fc0fa44db69d..ba5c1237fcfe1a 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_thermal.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_thermal.c
+@@ -267,10 +267,10 @@ int smu7_fan_ctrl_set_fan_speed_rpm(struct pp_hwmgr *hwmgr, uint32_t speed)
+ if (hwmgr->thermal_controller.fanInfo.bNoFan ||
+ (hwmgr->thermal_controller.fanInfo.
+ ucTachometerPulsesPerRevolution == 0) ||
+- speed == 0 ||
++ (!speed || speed > UINT_MAX/8) ||
+ (speed < hwmgr->thermal_controller.fanInfo.ulMinRPM) ||
+ (speed > hwmgr->thermal_controller.fanInfo.ulMaxRPM))
+- return 0;
++ return -EINVAL;
+
+ if (PP_CAP(PHM_PlatformCaps_MicrocodeFanControl))
+ smu7_fan_ctrl_stop_smc_fan_control(hwmgr);
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_thermal.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_thermal.c
+index 379012494da57b..56423aedf3fa7c 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_thermal.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_thermal.c
+@@ -307,10 +307,10 @@ int vega10_fan_ctrl_set_fan_speed_rpm(struct pp_hwmgr *hwmgr, uint32_t speed)
+ int result = 0;
+
+ if (hwmgr->thermal_controller.fanInfo.bNoFan ||
+- speed == 0 ||
++ (!speed || speed > UINT_MAX/8) ||
+ (speed < hwmgr->thermal_controller.fanInfo.ulMinRPM) ||
+ (speed > hwmgr->thermal_controller.fanInfo.ulMaxRPM))
+- return -1;
++ return -EINVAL;
+
+ if (PP_CAP(PHM_PlatformCaps_MicrocodeFanControl))
+ result = vega10_fan_ctrl_stop_smc_fan_control(hwmgr);
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega20_thermal.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega20_thermal.c
+index a3331ffb2daf7f..1b1c88590156cd 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega20_thermal.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega20_thermal.c
+@@ -191,7 +191,7 @@ int vega20_fan_ctrl_set_fan_speed_rpm(struct pp_hwmgr *hwmgr, uint32_t speed)
+ uint32_t tach_period, crystal_clock_freq;
+ int result = 0;
+
+- if (!speed)
++ if (!speed || speed > UINT_MAX/8)
+ return -EINVAL;
+
+ if (PP_CAP(PHM_PlatformCaps_MicrocodeFanControl)) {
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
+index 8aa61a9f77782b..453952cdc353b1 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
+@@ -1267,6 +1267,9 @@ static int arcturus_set_fan_speed_rpm(struct smu_context *smu,
+ uint32_t crystal_clock_freq = 2500;
+ uint32_t tach_period;
+
++ if (!speed || speed > UINT_MAX/8)
++ return -EINVAL;
++
+ tach_period = 60 * crystal_clock_freq * 10000 / (8 * speed);
+ WREG32_SOC15(THM, 0, mmCG_TACH_CTRL_ARCT,
+ REG_SET_FIELD(RREG32_SOC15(THM, 0, mmCG_TACH_CTRL_ARCT),
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c
+index 189c6a32b6bdb4..54229b991858c2 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c
+@@ -1200,7 +1200,7 @@ int smu_v11_0_set_fan_speed_rpm(struct smu_context *smu,
+ uint32_t crystal_clock_freq = 2500;
+ uint32_t tach_period;
+
+- if (speed == 0)
++ if (!speed || speed > UINT_MAX/8)
+ return -EINVAL;
+ /*
+ * To prevent from possible overheat, some ASICs may have requirement
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
+index fbbdfa54f6a20f..b508d475d2e1f3 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
+@@ -1229,7 +1229,7 @@ int smu_v13_0_set_fan_speed_rpm(struct smu_context *smu,
+ uint32_t tach_period;
+ int ret;
+
+- if (!speed)
++ if (!speed || speed > UINT_MAX/8)
+ return -EINVAL;
+
+ ret = smu_v13_0_auto_fan_control(smu, 0);
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c
+index 3f1fcf8c4ee8e4..5b5971f750a6f6 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c
+@@ -79,6 +79,7 @@
+ #define PP_OD_FEATURE_FAN_ACOUSTIC_TARGET 8
+ #define PP_OD_FEATURE_FAN_TARGET_TEMPERATURE 9
+ #define PP_OD_FEATURE_FAN_MINIMUM_PWM 10
++#define PP_OD_FEATURE_FAN_ZERO_RPM_ENABLE 11
+
+ static struct cmn2asic_msg_mapping smu_v14_0_2_message_map[SMU_MSG_MAX_COUNT] = {
+ MSG_MAP(TestMessage, PPSMC_MSG_TestMessage, 1),
+@@ -1042,6 +1043,10 @@ static void smu_v14_0_2_get_od_setting_limits(struct smu_context *smu,
+ od_min_setting = overdrive_lowerlimits->FanMinimumPwm;
+ od_max_setting = overdrive_upperlimits->FanMinimumPwm;
+ break;
++ case PP_OD_FEATURE_FAN_ZERO_RPM_ENABLE:
++ od_min_setting = overdrive_lowerlimits->FanZeroRpmEnable;
++ od_max_setting = overdrive_upperlimits->FanZeroRpmEnable;
++ break;
+ default:
+ od_min_setting = od_max_setting = INT_MAX;
+ break;
+@@ -1320,6 +1325,24 @@ static int smu_v14_0_2_print_clk_levels(struct smu_context *smu,
+ min_value, max_value);
+ break;
+
++ case SMU_OD_FAN_ZERO_RPM_ENABLE:
++ if (!smu_v14_0_2_is_od_feature_supported(smu,
++ PP_OD_FEATURE_ZERO_FAN_BIT))
++ break;
++
++ size += sysfs_emit_at(buf, size, "FAN_ZERO_RPM_ENABLE:\n");
++ size += sysfs_emit_at(buf, size, "%d\n",
++ (int)od_table->OverDriveTable.FanZeroRpmEnable);
++
++ size += sysfs_emit_at(buf, size, "%s:\n", "OD_RANGE");
++ smu_v14_0_2_get_od_setting_limits(smu,
++ PP_OD_FEATURE_FAN_ZERO_RPM_ENABLE,
++ &min_value,
++ &max_value);
++ size += sysfs_emit_at(buf, size, "ZERO_RPM_ENABLE: %u %u\n",
++ min_value, max_value);
++ break;
++
+ case SMU_OD_RANGE:
+ if (!smu_v14_0_2_is_od_feature_supported(smu, PP_OD_FEATURE_GFXCLK_BIT) &&
+ !smu_v14_0_2_is_od_feature_supported(smu, PP_OD_FEATURE_UCLK_BIT) &&
+@@ -2260,7 +2283,9 @@ static void smu_v14_0_2_set_supported_od_feature_mask(struct smu_context *smu)
+ OD_OPS_SUPPORT_FAN_TARGET_TEMPERATURE_RETRIEVE |
+ OD_OPS_SUPPORT_FAN_TARGET_TEMPERATURE_SET |
+ OD_OPS_SUPPORT_FAN_MINIMUM_PWM_RETRIEVE |
+- OD_OPS_SUPPORT_FAN_MINIMUM_PWM_SET;
++ OD_OPS_SUPPORT_FAN_MINIMUM_PWM_SET |
++ OD_OPS_SUPPORT_FAN_ZERO_RPM_ENABLE_RETRIEVE |
++ OD_OPS_SUPPORT_FAN_ZERO_RPM_ENABLE_SET;
+ }
+
+ static int smu_v14_0_2_get_overdrive_table(struct smu_context *smu,
+@@ -2339,6 +2364,8 @@ static int smu_v14_0_2_set_default_od_settings(struct smu_context *smu)
+ user_od_table_bak.OverDriveTable.FanTargetTemperature;
+ user_od_table->OverDriveTable.FanMinimumPwm =
+ user_od_table_bak.OverDriveTable.FanMinimumPwm;
++ user_od_table->OverDriveTable.FanZeroRpmEnable =
++ user_od_table_bak.OverDriveTable.FanZeroRpmEnable;
+ }
+
+ smu_v14_0_2_set_supported_od_feature_mask(smu);
+@@ -2386,6 +2413,11 @@ static int smu_v14_0_2_od_restore_table_single(struct smu_context *smu, long inp
+ od_table->OverDriveTable.FanMode = FAN_MODE_AUTO;
+ od_table->OverDriveTable.FeatureCtrlMask |= BIT(PP_OD_FEATURE_FAN_CURVE_BIT);
+ break;
++ case PP_OD_EDIT_FAN_ZERO_RPM_ENABLE:
++ od_table->OverDriveTable.FanZeroRpmEnable =
++ boot_overdrive_table->OverDriveTable.FanZeroRpmEnable;
++ od_table->OverDriveTable.FeatureCtrlMask |= BIT(PP_OD_FEATURE_ZERO_FAN_BIT);
++ break;
+ case PP_OD_EDIT_ACOUSTIC_LIMIT:
+ od_table->OverDriveTable.AcousticLimitRpmThreshold =
+ boot_overdrive_table->OverDriveTable.AcousticLimitRpmThreshold;
+@@ -2668,6 +2700,27 @@ static int smu_v14_0_2_od_edit_dpm_table(struct smu_context *smu,
+ od_table->OverDriveTable.FeatureCtrlMask |= BIT(PP_OD_FEATURE_FAN_CURVE_BIT);
+ break;
+
++ case PP_OD_EDIT_FAN_ZERO_RPM_ENABLE:
++ if (!smu_v14_0_2_is_od_feature_supported(smu, PP_OD_FEATURE_ZERO_FAN_BIT)) {
++ dev_warn(adev->dev, "Zero RPM setting not supported!\n");
++ return -ENOTSUPP;
++ }
++
++ smu_v14_0_2_get_od_setting_limits(smu,
++ PP_OD_FEATURE_FAN_ZERO_RPM_ENABLE,
++ &minimum,
++ &maximum);
++ if (input[0] < minimum ||
++ input[0] > maximum) {
++ dev_info(adev->dev, "zero RPM enable setting(%ld) must be within [%d, %d]!\n",
++ input[0], minimum, maximum);
++ return -EINVAL;
++ }
++
++ od_table->OverDriveTable.FanZeroRpmEnable = input[0];
++ od_table->OverDriveTable.FeatureCtrlMask |= BIT(PP_OD_FEATURE_ZERO_FAN_BIT);
++ break;
++
+ case PP_OD_RESTORE_DEFAULT_TABLE:
+ if (size == 1) {
+ ret = smu_v14_0_2_od_restore_table_single(smu, input[0]);
+diff --git a/drivers/gpu/drm/ast/ast_dp.c b/drivers/gpu/drm/ast/ast_dp.c
+index b9eb67e3fa90e4..2d7482a65f62ab 100644
+--- a/drivers/gpu/drm/ast/ast_dp.c
++++ b/drivers/gpu/drm/ast/ast_dp.c
+@@ -17,6 +17,12 @@ static bool ast_astdp_is_connected(struct ast_device *ast)
+ {
+ if (!ast_get_index_reg_mask(ast, AST_IO_VGACRI, 0xDF, AST_IO_VGACRDF_HPD))
+ return false;
++ /*
++ * HPD might be set even if no monitor is connected, so also check that
++ * the link training was successful.
++ */
++ if (!ast_get_index_reg_mask(ast, AST_IO_VGACRI, 0xDC, AST_IO_VGACRDC_LINK_SUCCESS))
++ return false;
+ return true;
+ }
+
+diff --git a/drivers/gpu/drm/i915/display/intel_bw.c b/drivers/gpu/drm/i915/display/intel_bw.c
+index 23edc81741dee8..dd1cae3021af90 100644
+--- a/drivers/gpu/drm/i915/display/intel_bw.c
++++ b/drivers/gpu/drm/i915/display/intel_bw.c
+@@ -244,6 +244,7 @@ static int icl_get_qgv_points(struct drm_i915_private *dev_priv,
+ qi->deinterleave = 4;
+ break;
+ case INTEL_DRAM_GDDR:
++ case INTEL_DRAM_GDDR_ECC:
+ qi->channel_width = 32;
+ break;
+ default:
+@@ -398,6 +399,12 @@ static const struct intel_sa_info xe2_hpd_sa_info = {
+ /* Other values not used by simplified algorithm */
+ };
+
++static const struct intel_sa_info xe2_hpd_ecc_sa_info = {
++ .derating = 45,
++ .deprogbwlimit = 53,
++ /* Other values not used by simplified algorithm */
++};
++
+ static int icl_get_bw_info(struct drm_i915_private *dev_priv, const struct intel_sa_info *sa)
+ {
+ struct intel_qgv_info qi = {};
+@@ -740,10 +747,15 @@ static unsigned int icl_qgv_bw(struct drm_i915_private *i915,
+
+ void intel_bw_init_hw(struct drm_i915_private *dev_priv)
+ {
++ const struct dram_info *dram_info = &dev_priv->dram_info;
++
+ if (!HAS_DISPLAY(dev_priv))
+ return;
+
+- if (DISPLAY_VERx100(dev_priv) >= 1401 && IS_DGFX(dev_priv))
++ if (DISPLAY_VERx100(dev_priv) >= 1401 && IS_DGFX(dev_priv) &&
++ dram_info->type == INTEL_DRAM_GDDR_ECC)
++ xe2_hpd_get_bw_info(dev_priv, &xe2_hpd_ecc_sa_info);
++ else if (DISPLAY_VERx100(dev_priv) >= 1401 && IS_DGFX(dev_priv))
+ xe2_hpd_get_bw_info(dev_priv, &xe2_hpd_sa_info);
+ else if (DISPLAY_VER(dev_priv) >= 14)
+ tgl_get_bw_info(dev_priv, &mtl_sa_info);
+diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
+index c9dcf2bbd4c730..c974f86eaf93b6 100644
+--- a/drivers/gpu/drm/i915/display/intel_display.c
++++ b/drivers/gpu/drm/i915/display/intel_display.c
+@@ -1109,7 +1109,9 @@ static bool vrr_params_changed(const struct intel_crtc_state *old_crtc_state,
+ old_crtc_state->vrr.vmin != new_crtc_state->vrr.vmin ||
+ old_crtc_state->vrr.vmax != new_crtc_state->vrr.vmax ||
+ old_crtc_state->vrr.guardband != new_crtc_state->vrr.guardband ||
+- old_crtc_state->vrr.pipeline_full != new_crtc_state->vrr.pipeline_full;
++ old_crtc_state->vrr.pipeline_full != new_crtc_state->vrr.pipeline_full ||
++ old_crtc_state->vrr.vsync_start != new_crtc_state->vrr.vsync_start ||
++ old_crtc_state->vrr.vsync_end != new_crtc_state->vrr.vsync_end;
+ }
+
+ static bool cmrr_params_changed(const struct intel_crtc_state *old_crtc_state,
+diff --git a/drivers/gpu/drm/i915/display/intel_display_device.h b/drivers/gpu/drm/i915/display/intel_display_device.h
+index 9a333d9e660105..3b1e8e4c86e7e1 100644
+--- a/drivers/gpu/drm/i915/display/intel_display_device.h
++++ b/drivers/gpu/drm/i915/display/intel_display_device.h
+@@ -159,6 +159,7 @@ struct intel_display_platforms {
+ #define HAS_DPT(__display) (DISPLAY_VER(__display) >= 13)
+ #define HAS_DSB(__display) (DISPLAY_INFO(__display)->has_dsb)
+ #define HAS_DSC(__display) (DISPLAY_RUNTIME_INFO(__display)->has_dsc)
++#define HAS_DSC_3ENGINES(__display) (DISPLAY_VERx100(__display) == 1401 && HAS_DSC(__display))
+ #define HAS_DSC_MST(__display) (DISPLAY_VER(__display) >= 12 && HAS_DSC(__display))
+ #define HAS_FBC(__display) (DISPLAY_RUNTIME_INFO(__display)->fbc_mask != 0)
+ #define HAS_FPGA_DBG_UNCLAIMED(__display) (DISPLAY_INFO(__display)->has_fpga_dbg)
+diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
+index aa77ddcee42c86..219ee9b7f9769c 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp.c
++++ b/drivers/gpu/drm/i915/display/intel_dp.c
+@@ -172,10 +172,28 @@ int intel_dp_link_symbol_clock(int rate)
+
+ static int max_dprx_rate(struct intel_dp *intel_dp)
+ {
++ struct intel_display *display = to_intel_display(intel_dp);
++ struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
++ int max_rate;
++
+ if (intel_dp_tunnel_bw_alloc_is_enabled(intel_dp))
+- return drm_dp_tunnel_max_dprx_rate(intel_dp->tunnel);
++ max_rate = drm_dp_tunnel_max_dprx_rate(intel_dp->tunnel);
++ else
++ max_rate = drm_dp_bw_code_to_link_rate(intel_dp->dpcd[DP_MAX_LINK_RATE]);
+
+- return drm_dp_bw_code_to_link_rate(intel_dp->dpcd[DP_MAX_LINK_RATE]);
++ /*
++ * Some broken eDP sinks illegally declare support for
++ * HBR3 without TPS4, and are unable to produce a stable
++ * output. Reject HBR3 when TPS4 is not available.
++ */
++ if (max_rate >= 810000 && !drm_dp_tps4_supported(intel_dp->dpcd)) {
++ drm_dbg_kms(display->drm,
++ "[ENCODER:%d:%s] Rejecting HBR3 due to missing TPS4 support\n",
++ encoder->base.base.id, encoder->base.name);
++ max_rate = 540000;
++ }
++
++ return max_rate;
+ }
+
+ static int max_dprx_lane_count(struct intel_dp *intel_dp)
+@@ -1032,10 +1050,11 @@ u8 intel_dp_dsc_get_slice_count(const struct intel_connector *connector,
+ u8 test_slice_count = valid_dsc_slicecount[i] * num_joined_pipes;
+
+ /*
+- * 3 DSC Slices per pipe need 3 DSC engines,
+- * which is supported only with Ultrajoiner.
++ * 3 DSC Slices per pipe need 3 DSC engines, which is supported
++ * with Ultrajoiner only on some platforms.
+ */
+- if (valid_dsc_slicecount[i] == 3 && num_joined_pipes != 4)
++ if (valid_dsc_slicecount[i] == 3 &&
++ (!HAS_DSC_3ENGINES(display) || num_joined_pipes != 4))
+ continue;
+
+ if (test_slice_count >
+@@ -4188,6 +4207,9 @@ static void intel_edp_mso_init(struct intel_dp *intel_dp)
+ static void
+ intel_edp_set_sink_rates(struct intel_dp *intel_dp)
+ {
++ struct intel_display *display = to_intel_display(intel_dp);
++ struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
++
+ intel_dp->num_sink_rates = 0;
+
+ if (intel_dp->edp_dpcd[0] >= DP_EDP_14) {
+@@ -4198,10 +4220,7 @@ intel_edp_set_sink_rates(struct intel_dp *intel_dp)
+ sink_rates, sizeof(sink_rates));
+
+ for (i = 0; i < ARRAY_SIZE(sink_rates); i++) {
+- int val = le16_to_cpu(sink_rates[i]);
+-
+- if (val == 0)
+- break;
++ int rate;
+
+ /* Value read multiplied by 200kHz gives the per-lane
+ * link rate in kHz. The source rates are, however,
+@@ -4209,7 +4228,24 @@ intel_edp_set_sink_rates(struct intel_dp *intel_dp)
+ * back to symbols is
+ * (val * 200kHz)*(8/10 ch. encoding)*(1/8 bit to Byte)
+ */
+- intel_dp->sink_rates[i] = (val * 200) / 10;
++ rate = le16_to_cpu(sink_rates[i]) * 200 / 10;
++
++ if (rate == 0)
++ break;
++
++ /*
++ * Some broken eDP sinks illegally declare support for
++ * HBR3 without TPS4, and are unable to produce a stable
++ * output. Reject HBR3 when TPS4 is not available.
++ */
++ if (rate >= 810000 && !drm_dp_tps4_supported(intel_dp->dpcd)) {
++ drm_dbg_kms(display->drm,
++ "[ENCODER:%d:%s] Rejecting HBR3 due to missing TPS4 support\n",
++ encoder->base.base.id, encoder->base.name);
++ break;
++ }
++
++ intel_dp->sink_rates[i] = rate;
+ }
+ intel_dp->num_sink_rates = i;
+ }
+diff --git a/drivers/gpu/drm/i915/display/intel_vblank.c b/drivers/gpu/drm/i915/display/intel_vblank.c
+index a95fb3349eba75..9e024d153e83bf 100644
+--- a/drivers/gpu/drm/i915/display/intel_vblank.c
++++ b/drivers/gpu/drm/i915/display/intel_vblank.c
+@@ -222,7 +222,9 @@ int intel_crtc_scanline_offset(const struct intel_crtc_state *crtc_state)
+ * However if queried just before the start of vblank we'll get an
+ * answer that's slightly in the future.
+ */
+- if (DISPLAY_VER(display) == 2)
++ if (DISPLAY_VER(display) >= 20 || display->platform.battlemage)
++ return 1;
++ else if (DISPLAY_VER(display) == 2)
+ return -1;
+ else if (HAS_DDI(display) && intel_crtc_has_type(crtc_state, INTEL_OUTPUT_HDMI))
+ return 2;
+diff --git a/drivers/gpu/drm/i915/gvt/opregion.c b/drivers/gpu/drm/i915/gvt/opregion.c
+index 509f9ccae3a9f0..dbad4d853d3ade 100644
+--- a/drivers/gpu/drm/i915/gvt/opregion.c
++++ b/drivers/gpu/drm/i915/gvt/opregion.c
+@@ -222,7 +222,6 @@ int intel_vgpu_init_opregion(struct intel_vgpu *vgpu)
+ u8 *buf;
+ struct opregion_header *header;
+ struct vbt v;
+- const char opregion_signature[16] = OPREGION_SIGNATURE;
+
+ gvt_dbg_core("init vgpu%d opregion\n", vgpu->id);
+ vgpu_opregion(vgpu)->va = (void *)__get_free_pages(GFP_KERNEL |
+@@ -236,8 +235,10 @@ int intel_vgpu_init_opregion(struct intel_vgpu *vgpu)
+ /* emulated opregion with VBT mailbox only */
+ buf = (u8 *)vgpu_opregion(vgpu)->va;
+ header = (struct opregion_header *)buf;
+- memcpy(header->signature, opregion_signature,
+- sizeof(opregion_signature));
++
++ static_assert(sizeof(header->signature) == sizeof(OPREGION_SIGNATURE) - 1);
++ memcpy(header->signature, OPREGION_SIGNATURE, sizeof(header->signature));
++
+ header->size = 0x8;
+ header->opregion_ver = 0x02000000;
+ header->mboxes = MBOX_VBT;
+diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
+index b96b8de12756ec..def73ce0c3ba70 100644
+--- a/drivers/gpu/drm/i915/i915_drv.h
++++ b/drivers/gpu/drm/i915/i915_drv.h
+@@ -306,6 +306,7 @@ struct drm_i915_private {
+ INTEL_DRAM_DDR5,
+ INTEL_DRAM_LPDDR5,
+ INTEL_DRAM_GDDR,
++ INTEL_DRAM_GDDR_ECC,
+ } type;
+ u8 num_qgv_points;
+ u8 num_psf_gv_points;
+diff --git a/drivers/gpu/drm/i915/soc/intel_dram.c b/drivers/gpu/drm/i915/soc/intel_dram.c
+index 9e310f4099f423..f60eedb0e92cfe 100644
+--- a/drivers/gpu/drm/i915/soc/intel_dram.c
++++ b/drivers/gpu/drm/i915/soc/intel_dram.c
+@@ -687,6 +687,10 @@ static int xelpdp_get_dram_info(struct drm_i915_private *i915)
+ drm_WARN_ON(&i915->drm, !IS_DGFX(i915));
+ dram_info->type = INTEL_DRAM_GDDR;
+ break;
++ case 9:
++ drm_WARN_ON(&i915->drm, !IS_DGFX(i915));
++ dram_info->type = INTEL_DRAM_GDDR_ECC;
++ break;
+ default:
+ MISSING_CASE(val);
+ return -EINVAL;
+diff --git a/drivers/gpu/drm/imagination/pvr_fw.c b/drivers/gpu/drm/imagination/pvr_fw.c
+index 3debc9870a82ae..d09c4c68411627 100644
+--- a/drivers/gpu/drm/imagination/pvr_fw.c
++++ b/drivers/gpu/drm/imagination/pvr_fw.c
+@@ -732,7 +732,7 @@ pvr_fw_process(struct pvr_device *pvr_dev)
+ fw_mem->core_data, fw_mem->core_code_alloc_size);
+
+ if (err)
+- goto err_free_fw_core_data_obj;
++ goto err_free_kdata;
+
+ memcpy(fw_code_ptr, fw_mem->code, fw_mem->code_alloc_size);
+ memcpy(fw_data_ptr, fw_mem->data, fw_mem->data_alloc_size);
+@@ -742,10 +742,14 @@ pvr_fw_process(struct pvr_device *pvr_dev)
+ memcpy(fw_core_data_ptr, fw_mem->core_data, fw_mem->core_data_alloc_size);
+
+ /* We're finished with the firmware section memory on the CPU, unmap. */
+- if (fw_core_data_ptr)
++ if (fw_core_data_ptr) {
+ pvr_fw_object_vunmap(fw_mem->core_data_obj);
+- if (fw_core_code_ptr)
++ fw_core_data_ptr = NULL;
++ }
++ if (fw_core_code_ptr) {
+ pvr_fw_object_vunmap(fw_mem->core_code_obj);
++ fw_core_code_ptr = NULL;
++ }
+ pvr_fw_object_vunmap(fw_mem->data_obj);
+ fw_data_ptr = NULL;
+ pvr_fw_object_vunmap(fw_mem->code_obj);
+@@ -753,7 +757,7 @@ pvr_fw_process(struct pvr_device *pvr_dev)
+
+ err = pvr_fw_create_fwif_connection_ctl(pvr_dev);
+ if (err)
+- goto err_free_fw_core_data_obj;
++ goto err_free_kdata;
+
+ return 0;
+
+@@ -763,13 +767,16 @@ pvr_fw_process(struct pvr_device *pvr_dev)
+ kfree(fw_mem->data);
+ kfree(fw_mem->code);
+
+-err_free_fw_core_data_obj:
+ if (fw_core_data_ptr)
+- pvr_fw_object_unmap_and_destroy(fw_mem->core_data_obj);
++ pvr_fw_object_vunmap(fw_mem->core_data_obj);
++ if (fw_mem->core_data_obj)
++ pvr_fw_object_destroy(fw_mem->core_data_obj);
+
+ err_free_fw_core_code_obj:
+ if (fw_core_code_ptr)
+- pvr_fw_object_unmap_and_destroy(fw_mem->core_code_obj);
++ pvr_fw_object_vunmap(fw_mem->core_code_obj);
++ if (fw_mem->core_code_obj)
++ pvr_fw_object_destroy(fw_mem->core_code_obj);
+
+ err_free_fw_data_obj:
+ if (fw_data_ptr)
+@@ -836,6 +843,12 @@ pvr_fw_cleanup(struct pvr_device *pvr_dev)
+ struct pvr_fw_mem *fw_mem = &pvr_dev->fw_dev.mem;
+
+ pvr_fw_fini_fwif_connection_ctl(pvr_dev);
++
++ kfree(fw_mem->core_data);
++ kfree(fw_mem->core_code);
++ kfree(fw_mem->data);
++ kfree(fw_mem->code);
++
+ if (fw_mem->core_code_obj)
+ pvr_fw_object_destroy(fw_mem->core_code_obj);
+ if (fw_mem->core_data_obj)
+diff --git a/drivers/gpu/drm/imagination/pvr_job.c b/drivers/gpu/drm/imagination/pvr_job.c
+index 618503a212a7d3..aad183a5737183 100644
+--- a/drivers/gpu/drm/imagination/pvr_job.c
++++ b/drivers/gpu/drm/imagination/pvr_job.c
+@@ -677,6 +677,13 @@ pvr_jobs_link_geom_frag(struct pvr_job_data *job_data, u32 *job_count)
+ geom_job->paired_job = frag_job;
+ frag_job->paired_job = geom_job;
+
++ /* The geometry job pvr_job structure is used when the fragment
++ * job is being prepared by the GPU scheduler. Have the fragment
++ * job hold a reference on the geometry job to prevent it being
++ * freed until the fragment job has finished with it.
++ */
++ pvr_job_get(geom_job);
++
+ /* Skip the fragment job we just paired to the geometry job. */
+ i++;
+ }
+diff --git a/drivers/gpu/drm/imagination/pvr_queue.c b/drivers/gpu/drm/imagination/pvr_queue.c
+index 43411be930a214..fa1afe1193e1d3 100644
+--- a/drivers/gpu/drm/imagination/pvr_queue.c
++++ b/drivers/gpu/drm/imagination/pvr_queue.c
+@@ -866,6 +866,10 @@ static void pvr_queue_free_job(struct drm_sched_job *sched_job)
+ struct pvr_job *job = container_of(sched_job, struct pvr_job, base);
+
+ drm_sched_job_cleanup(sched_job);
++
++ if (job->type == DRM_PVR_JOB_TYPE_FRAGMENT && job->paired_job)
++ pvr_job_put(job->paired_job);
++
+ job->paired_job = NULL;
+ pvr_job_put(job);
+ }
+diff --git a/drivers/gpu/drm/mgag200/mgag200_mode.c b/drivers/gpu/drm/mgag200/mgag200_mode.c
+index fb71658c3117b2..6067d08aeee34b 100644
+--- a/drivers/gpu/drm/mgag200/mgag200_mode.c
++++ b/drivers/gpu/drm/mgag200/mgag200_mode.c
+@@ -223,7 +223,7 @@ void mgag200_set_mode_regs(struct mga_device *mdev, const struct drm_display_mod
+ vsyncstr = mode->crtc_vsync_start - 1;
+ vsyncend = mode->crtc_vsync_end - 1;
+ vtotal = mode->crtc_vtotal - 2;
+- vblkstr = mode->crtc_vblank_start;
++ vblkstr = mode->crtc_vblank_start - 1;
+ vblkend = vtotal + 1;
+
+ linecomp = vdispend;
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+index 699b0dd34b18f0..38c94915d4c9d6 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+@@ -1169,49 +1169,50 @@ static void a6xx_gmu_shutdown(struct a6xx_gmu *gmu)
+ struct a6xx_gpu *a6xx_gpu = container_of(gmu, struct a6xx_gpu, gmu);
+ struct adreno_gpu *adreno_gpu = &a6xx_gpu->base;
+ u32 val;
++ int ret;
+
+ /*
+- * The GMU may still be in slumber unless the GPU started so check and
+- * skip putting it back into slumber if so
++ * GMU firmware's internal power state gets messed up if we send "prepare_slumber" hfi when
++ * oob_gpu handshake wasn't done after the last wake up. So do a dummy handshake here when
++ * required
+ */
+- val = gmu_read(gmu, REG_A6XX_GPU_GMU_CX_GMU_RPMH_POWER_STATE);
++ if (adreno_gpu->base.needs_hw_init) {
++ if (a6xx_gmu_set_oob(&a6xx_gpu->gmu, GMU_OOB_GPU_SET))
++ goto force_off;
+
+- if (val != 0xf) {
+- int ret = a6xx_gmu_wait_for_idle(gmu);
++ a6xx_gmu_clear_oob(&a6xx_gpu->gmu, GMU_OOB_GPU_SET);
++ }
+
+- /* If the GMU isn't responding assume it is hung */
+- if (ret) {
+- a6xx_gmu_force_off(gmu);
+- return;
+- }
++ ret = a6xx_gmu_wait_for_idle(gmu);
+
+- a6xx_bus_clear_pending_transactions(adreno_gpu, a6xx_gpu->hung);
++ /* If the GMU isn't responding assume it is hung */
++ if (ret)
++ goto force_off;
+
+- /* tell the GMU we want to slumber */
+- ret = a6xx_gmu_notify_slumber(gmu);
+- if (ret) {
+- a6xx_gmu_force_off(gmu);
+- return;
+- }
++ a6xx_bus_clear_pending_transactions(adreno_gpu, a6xx_gpu->hung);
+
+- ret = gmu_poll_timeout(gmu,
+- REG_A6XX_GPU_GMU_AO_GPU_CX_BUSY_STATUS, val,
+- !(val & A6XX_GPU_GMU_AO_GPU_CX_BUSY_STATUS_GPUBUSYIGNAHB),
+- 100, 10000);
++ /* tell the GMU we want to slumber */
++ ret = a6xx_gmu_notify_slumber(gmu);
++ if (ret)
++ goto force_off;
+
+- /*
+- * Let the user know we failed to slumber but don't worry too
+- * much because we are powering down anyway
+- */
++ ret = gmu_poll_timeout(gmu,
++ REG_A6XX_GPU_GMU_AO_GPU_CX_BUSY_STATUS, val,
++ !(val & A6XX_GPU_GMU_AO_GPU_CX_BUSY_STATUS_GPUBUSYIGNAHB),
++ 100, 10000);
+
+- if (ret)
+- DRM_DEV_ERROR(gmu->dev,
+- "Unable to slumber GMU: status = 0%x/0%x\n",
+- gmu_read(gmu,
+- REG_A6XX_GPU_GMU_AO_GPU_CX_BUSY_STATUS),
+- gmu_read(gmu,
+- REG_A6XX_GPU_GMU_AO_GPU_CX_BUSY_STATUS2));
+- }
++ /*
++ * Let the user know we failed to slumber but don't worry too
++ * much because we are powering down anyway
++ */
++
++ if (ret)
++ DRM_DEV_ERROR(gmu->dev,
++ "Unable to slumber GMU: status = 0%x/0%x\n",
++ gmu_read(gmu,
++ REG_A6XX_GPU_GMU_AO_GPU_CX_BUSY_STATUS),
++ gmu_read(gmu,
++ REG_A6XX_GPU_GMU_AO_GPU_CX_BUSY_STATUS2));
+
+ /* Turn off HFI */
+ a6xx_hfi_stop(gmu);
+@@ -1221,6 +1222,11 @@ static void a6xx_gmu_shutdown(struct a6xx_gmu *gmu)
+
+ /* Tell RPMh to power off the GPU */
+ a6xx_rpmh_stop(gmu);
++
++ return;
++
++force_off:
++ a6xx_gmu_force_off(gmu);
+ }
+
+
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+index 0ae29a7c8a4d3f..2a317cdb8eaa1e 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+@@ -242,10 +242,10 @@ static void a6xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
+ break;
+ fallthrough;
+ case MSM_SUBMIT_CMD_BUF:
+- OUT_PKT7(ring, CP_INDIRECT_BUFFER_PFE, 3);
++ OUT_PKT7(ring, CP_INDIRECT_BUFFER, 3);
+ OUT_RING(ring, lower_32_bits(submit->cmd[i].iova));
+ OUT_RING(ring, upper_32_bits(submit->cmd[i].iova));
+- OUT_RING(ring, submit->cmd[i].size);
++ OUT_RING(ring, A5XX_CP_INDIRECT_BUFFER_2_IB_SIZE(submit->cmd[i].size));
+ ibs++;
+ break;
+ }
+@@ -377,10 +377,10 @@ static void a7xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
+ break;
+ fallthrough;
+ case MSM_SUBMIT_CMD_BUF:
+- OUT_PKT7(ring, CP_INDIRECT_BUFFER_PFE, 3);
++ OUT_PKT7(ring, CP_INDIRECT_BUFFER, 3);
+ OUT_RING(ring, lower_32_bits(submit->cmd[i].iova));
+ OUT_RING(ring, upper_32_bits(submit->cmd[i].iova));
+- OUT_RING(ring, submit->cmd[i].size);
++ OUT_RING(ring, A5XX_CP_INDIRECT_BUFFER_2_IB_SIZE(submit->cmd[i].size));
+ ibs++;
+ break;
+ }
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_1_14_msm8937.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_1_14_msm8937.h
+index ab3dfb0b374ead..cb89d8cf300f99 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_1_14_msm8937.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_1_14_msm8937.h
+@@ -132,7 +132,6 @@ static const struct dpu_intf_cfg msm8937_intf[] = {
+ .prog_fetch_lines_worst_case = 14,
+ .intr_underrun = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 26),
+ .intr_vsync = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 27),
+- .intr_tear_rd_ptr = -1,
+ }, {
+ .name = "intf_2", .id = INTF_2,
+ .base = 0x6b000, .len = 0x268,
+@@ -141,7 +140,6 @@ static const struct dpu_intf_cfg msm8937_intf[] = {
+ .prog_fetch_lines_worst_case = 14,
+ .intr_underrun = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 28),
+ .intr_vsync = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 29),
+- .intr_tear_rd_ptr = -1,
+ },
+ };
+
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_1_15_msm8917.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_1_15_msm8917.h
+index 6bdaecca676144..b2a38e62d3a936 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_1_15_msm8917.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_1_15_msm8917.h
+@@ -118,7 +118,6 @@ static const struct dpu_intf_cfg msm8917_intf[] = {
+ .prog_fetch_lines_worst_case = 14,
+ .intr_underrun = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 26),
+ .intr_vsync = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 27),
+- .intr_tear_rd_ptr = -1,
+ },
+ };
+
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_1_16_msm8953.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_1_16_msm8953.h
+index 14f36ea6ad0eb6..06859984d2a5c7 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_1_16_msm8953.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_1_16_msm8953.h
+@@ -131,7 +131,6 @@ static const struct dpu_intf_cfg msm8953_intf[] = {
+ .prog_fetch_lines_worst_case = 14,
+ .intr_underrun = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 24),
+ .intr_vsync = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 25),
+- .intr_tear_rd_ptr = -1,
+ }, {
+ .name = "intf_1", .id = INTF_1,
+ .base = 0x6a800, .len = 0x268,
+@@ -140,7 +139,6 @@ static const struct dpu_intf_cfg msm8953_intf[] = {
+ .prog_fetch_lines_worst_case = 14,
+ .intr_underrun = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 26),
+ .intr_vsync = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 27),
+- .intr_tear_rd_ptr = -1,
+ }, {
+ .name = "intf_2", .id = INTF_2,
+ .base = 0x6b000, .len = 0x268,
+@@ -149,7 +147,6 @@ static const struct dpu_intf_cfg msm8953_intf[] = {
+ .prog_fetch_lines_worst_case = 14,
+ .intr_underrun = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 28),
+ .intr_vsync = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 29),
+- .intr_tear_rd_ptr = -1,
+ },
+ };
+
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_1_7_msm8996.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_1_7_msm8996.h
+index 491f6f5827d151..153e275513c31b 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_1_7_msm8996.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_1_7_msm8996.h
+@@ -241,7 +241,6 @@ static const struct dpu_intf_cfg msm8996_intf[] = {
+ .prog_fetch_lines_worst_case = 25,
+ .intr_underrun = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 24),
+ .intr_vsync = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 25),
+- .intr_tear_rd_ptr = -1,
+ }, {
+ .name = "intf_1", .id = INTF_1,
+ .base = 0x6a800, .len = 0x268,
+@@ -250,7 +249,6 @@ static const struct dpu_intf_cfg msm8996_intf[] = {
+ .prog_fetch_lines_worst_case = 25,
+ .intr_underrun = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 26),
+ .intr_vsync = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 27),
+- .intr_tear_rd_ptr = -1,
+ }, {
+ .name = "intf_2", .id = INTF_2,
+ .base = 0x6b000, .len = 0x268,
+@@ -259,7 +257,6 @@ static const struct dpu_intf_cfg msm8996_intf[] = {
+ .prog_fetch_lines_worst_case = 25,
+ .intr_underrun = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 28),
+ .intr_vsync = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 29),
+- .intr_tear_rd_ptr = -1,
+ }, {
+ .name = "intf_3", .id = INTF_3,
+ .base = 0x6b800, .len = 0x268,
+@@ -267,7 +264,6 @@ static const struct dpu_intf_cfg msm8996_intf[] = {
+ .prog_fetch_lines_worst_case = 25,
+ .intr_underrun = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 30),
+ .intr_vsync = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 31),
+- .intr_tear_rd_ptr = -1,
+ },
+ };
+
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_2_sdm660.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_2_sdm660.h
+index 424815e7fb7dd8..f247625ed7e5b2 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_2_sdm660.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_2_sdm660.h
+@@ -202,7 +202,6 @@ static const struct dpu_intf_cfg sdm660_intf[] = {
+ .prog_fetch_lines_worst_case = 21,
+ .intr_underrun = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 24),
+ .intr_vsync = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 25),
+- .intr_tear_rd_ptr = -1,
+ }, {
+ .name = "intf_1", .id = INTF_1,
+ .base = 0x6a800, .len = 0x280,
+@@ -211,7 +210,6 @@ static const struct dpu_intf_cfg sdm660_intf[] = {
+ .prog_fetch_lines_worst_case = 21,
+ .intr_underrun = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 26),
+ .intr_vsync = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 27),
+- .intr_tear_rd_ptr = -1,
+ }, {
+ .name = "intf_2", .id = INTF_2,
+ .base = 0x6b000, .len = 0x280,
+@@ -220,7 +218,6 @@ static const struct dpu_intf_cfg sdm660_intf[] = {
+ .prog_fetch_lines_worst_case = 21,
+ .intr_underrun = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 28),
+ .intr_vsync = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 29),
+- .intr_tear_rd_ptr = -1,
+ },
+ };
+
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_3_sdm630.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_3_sdm630.h
+index df01227fc36468..5ddd2b3c0cd2e8 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_3_sdm630.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_3_sdm630.h
+@@ -147,7 +147,6 @@ static const struct dpu_intf_cfg sdm630_intf[] = {
+ .prog_fetch_lines_worst_case = 21,
+ .intr_underrun = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 24),
+ .intr_vsync = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 25),
+- .intr_tear_rd_ptr = -1,
+ }, {
+ .name = "intf_1", .id = INTF_1,
+ .base = 0x6a800, .len = 0x280,
+@@ -156,7 +155,6 @@ static const struct dpu_intf_cfg sdm630_intf[] = {
+ .prog_fetch_lines_worst_case = 21,
+ .intr_underrun = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 26),
+ .intr_vsync = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 27),
+- .intr_tear_rd_ptr = -1,
+ },
+ };
+
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
+index af3e541f60c303..b19193b02ab327 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
+@@ -1059,6 +1059,9 @@ static int dpu_plane_virtual_atomic_check(struct drm_plane *plane,
+ struct drm_crtc_state *crtc_state;
+ int ret;
+
++ if (IS_ERR(plane_state))
++ return PTR_ERR(plane_state);
++
+ if (plane_state->crtc)
+ crtc_state = drm_atomic_get_new_crtc_state(state,
+ plane_state->crtc);
+diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c b/drivers/gpu/drm/msm/dsi/dsi_host.c
+index 42e100a8adca09..e9b824ed43538c 100644
+--- a/drivers/gpu/drm/msm/dsi/dsi_host.c
++++ b/drivers/gpu/drm/msm/dsi/dsi_host.c
+@@ -1827,8 +1827,15 @@ static int dsi_host_parse_dt(struct msm_dsi_host *msm_host)
+ __func__, ret);
+ goto err;
+ }
+- if (!ret)
++ if (!ret) {
+ msm_dsi->te_source = devm_kstrdup(dev, te_source, GFP_KERNEL);
++ if (!msm_dsi->te_source) {
++ DRM_DEV_ERROR(dev, "%s: failed to allocate te_source\n",
++ __func__);
++ ret = -ENOMEM;
++ goto err;
++ }
++ }
+ ret = 0;
+
+ if (of_property_present(np, "syscon-sfpb")) {
+diff --git a/drivers/gpu/drm/msm/registers/adreno/adreno_pm4.xml b/drivers/gpu/drm/msm/registers/adreno/adreno_pm4.xml
+index 55a35182858cca..5a6ae9fc319451 100644
+--- a/drivers/gpu/drm/msm/registers/adreno/adreno_pm4.xml
++++ b/drivers/gpu/drm/msm/registers/adreno/adreno_pm4.xml
+@@ -2259,5 +2259,12 @@ opcode: CP_LOAD_STATE4 (30) (4 dwords)
+ </reg32>
+ </domain>
+
++<domain name="CP_INDIRECT_BUFFER" width="32" varset="chip" prefix="chip" variants="A5XX-">
++ <reg64 offset="0" name="IB_BASE" type="address"/>
++ <reg32 offset="2" name="2">
++ <bitfield name="IB_SIZE" low="0" high="19"/>
++ </reg32>
++</domain>
++
+ </database>
+
+diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c
+index db961eade2257f..2016c1e7242fe3 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_bo.c
++++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
+@@ -144,6 +144,9 @@ nouveau_bo_del_ttm(struct ttm_buffer_object *bo)
+ nouveau_bo_del_io_reserve_lru(bo);
+ nv10_bo_put_tile_region(dev, nvbo->tile, NULL);
+
++ if (bo->base.import_attach)
++ drm_prime_gem_destroy(&bo->base, bo->sg);
++
+ /*
+ * If nouveau_bo_new() allocated this buffer, the GEM object was never
+ * initialized, so don't attempt to release it.
+diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
+index 9ae2cee1c7c580..67e3c99de73ae6 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_gem.c
++++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
+@@ -87,9 +87,6 @@ nouveau_gem_object_del(struct drm_gem_object *gem)
+ return;
+ }
+
+- if (gem->import_attach)
+- drm_prime_gem_destroy(gem, nvbo->bo.sg);
+-
+ ttm_bo_put(&nvbo->bo);
+
+ pm_runtime_mark_last_busy(dev);
+diff --git a/drivers/gpu/drm/sti/Makefile b/drivers/gpu/drm/sti/Makefile
+index f203ac5514ae0b..f778a4eee7c9cf 100644
+--- a/drivers/gpu/drm/sti/Makefile
++++ b/drivers/gpu/drm/sti/Makefile
+@@ -7,8 +7,6 @@ sti-drm-y := \
+ sti_compositor.o \
+ sti_crtc.o \
+ sti_plane.o \
+- sti_crtc.o \
+- sti_plane.o \
+ sti_hdmi.o \
+ sti_hdmi_tx3g4c28phy.o \
+ sti_dvo.o \
+diff --git a/drivers/gpu/drm/tiny/repaper.c b/drivers/gpu/drm/tiny/repaper.c
+index 52ba6c699bc8fa..5c3b51eb0a97e1 100644
+--- a/drivers/gpu/drm/tiny/repaper.c
++++ b/drivers/gpu/drm/tiny/repaper.c
+@@ -456,7 +456,7 @@ static void repaper_frame_fixed_repeat(struct repaper_epd *epd, u8 fixed_value,
+ enum repaper_stage stage)
+ {
+ u64 start = local_clock();
+- u64 end = start + (epd->factored_stage_time * 1000 * 1000);
++ u64 end = start + ((u64)epd->factored_stage_time * 1000 * 1000);
+
+ do {
+ repaper_frame_fixed(epd, fixed_value, stage);
+@@ -467,7 +467,7 @@ static void repaper_frame_data_repeat(struct repaper_epd *epd, const u8 *image,
+ const u8 *mask, enum repaper_stage stage)
+ {
+ u64 start = local_clock();
+- u64 end = start + (epd->factored_stage_time * 1000 * 1000);
++ u64 end = start + ((u64)epd->factored_stage_time * 1000 * 1000);
+
+ do {
+ repaper_frame_data(epd, image, mask, stage);
+diff --git a/drivers/gpu/drm/v3d/v3d_sched.c b/drivers/gpu/drm/v3d/v3d_sched.c
+index 05608c894ed934..6db503a5691806 100644
+--- a/drivers/gpu/drm/v3d/v3d_sched.c
++++ b/drivers/gpu/drm/v3d/v3d_sched.c
+@@ -428,7 +428,8 @@ v3d_rewrite_csd_job_wg_counts_from_indirect(struct v3d_cpu_job *job)
+ struct v3d_bo *bo = to_v3d_bo(job->base.bo[0]);
+ struct v3d_bo *indirect = to_v3d_bo(indirect_csd->indirect);
+ struct drm_v3d_submit_csd *args = &indirect_csd->job->args;
+- u32 *wg_counts;
++ struct v3d_dev *v3d = job->base.v3d;
++ u32 num_batches, *wg_counts;
+
+ v3d_get_bo_vaddr(bo);
+ v3d_get_bo_vaddr(indirect);
+@@ -441,8 +442,17 @@ v3d_rewrite_csd_job_wg_counts_from_indirect(struct v3d_cpu_job *job)
+ args->cfg[0] = wg_counts[0] << V3D_CSD_CFG012_WG_COUNT_SHIFT;
+ args->cfg[1] = wg_counts[1] << V3D_CSD_CFG012_WG_COUNT_SHIFT;
+ args->cfg[2] = wg_counts[2] << V3D_CSD_CFG012_WG_COUNT_SHIFT;
+- args->cfg[4] = DIV_ROUND_UP(indirect_csd->wg_size, 16) *
+- (wg_counts[0] * wg_counts[1] * wg_counts[2]) - 1;
++
++ num_batches = DIV_ROUND_UP(indirect_csd->wg_size, 16) *
++ (wg_counts[0] * wg_counts[1] * wg_counts[2]);
++
++ /* V3D 7.1.6 and later don't subtract 1 from the number of batches */
++ if (v3d->ver < 71 || (v3d->ver == 71 && v3d->rev < 6))
++ args->cfg[4] = num_batches - 1;
++ else
++ args->cfg[4] = num_batches;
++
++ WARN_ON(args->cfg[4] == ~0);
+
+ for (int i = 0; i < 3; i++) {
+ /* 0xffffffff indicates that the uniform rewrite is not needed */
+diff --git a/drivers/gpu/drm/virtio/virtgpu_gem.c b/drivers/gpu/drm/virtio/virtgpu_gem.c
+index 5aab588fc400e7..3d6aa26fdb5343 100644
+--- a/drivers/gpu/drm/virtio/virtgpu_gem.c
++++ b/drivers/gpu/drm/virtio/virtgpu_gem.c
+@@ -115,13 +115,14 @@ int virtio_gpu_gem_object_open(struct drm_gem_object *obj,
+ if (!vgdev->has_context_init)
+ virtio_gpu_create_context(obj->dev, file);
+
+- objs = virtio_gpu_array_alloc(1);
+- if (!objs)
+- return -ENOMEM;
+- virtio_gpu_array_add_obj(objs, obj);
++ if (vfpriv->context_created) {
++ objs = virtio_gpu_array_alloc(1);
++ if (!objs)
++ return -ENOMEM;
++ virtio_gpu_array_add_obj(objs, obj);
+
+- if (vfpriv->ctx_id)
+ virtio_gpu_cmd_context_attach_resource(vgdev, vfpriv->ctx_id, objs);
++ }
+
+ out_notify:
+ virtio_gpu_notify(vgdev);
+diff --git a/drivers/gpu/drm/virtio/virtgpu_plane.c b/drivers/gpu/drm/virtio/virtgpu_plane.c
+index 42aa554eca9fef..26abe3d1b122b4 100644
+--- a/drivers/gpu/drm/virtio/virtgpu_plane.c
++++ b/drivers/gpu/drm/virtio/virtgpu_plane.c
+@@ -322,12 +322,6 @@ static int virtio_gpu_plane_prepare_fb(struct drm_plane *plane,
+ return 0;
+
+ obj = new_state->fb->obj[0];
+- if (obj->import_attach) {
+- ret = virtio_gpu_prepare_imported_obj(plane, new_state, obj);
+- if (ret)
+- return ret;
+- }
+-
+ if (bo->dumb || obj->import_attach) {
+ vgplane_st->fence = virtio_gpu_fence_alloc(vgdev,
+ vgdev->fence_drv.context,
+@@ -336,7 +330,21 @@ static int virtio_gpu_plane_prepare_fb(struct drm_plane *plane,
+ return -ENOMEM;
+ }
+
++ if (obj->import_attach) {
++ ret = virtio_gpu_prepare_imported_obj(plane, new_state, obj);
++ if (ret)
++ goto err_fence;
++ }
++
+ return 0;
++
++err_fence:
++ if (vgplane_st->fence) {
++ dma_fence_put(&vgplane_st->fence->f);
++ vgplane_st->fence = NULL;
++ }
++
++ return ret;
+ }
+
+ static void virtio_gpu_cleanup_imported_obj(struct drm_gem_object *obj)
+diff --git a/drivers/gpu/drm/xe/xe_device_types.h b/drivers/gpu/drm/xe/xe_device_types.h
+index 8a7b1597241355..eaf7a5bf0a38f3 100644
+--- a/drivers/gpu/drm/xe/xe_device_types.h
++++ b/drivers/gpu/drm/xe/xe_device_types.h
+@@ -559,6 +559,7 @@ struct xe_device {
+ INTEL_DRAM_DDR5,
+ INTEL_DRAM_LPDDR5,
+ INTEL_DRAM_GDDR,
++ INTEL_DRAM_GDDR_ECC,
+ } type;
+ u8 num_qgv_points;
+ u8 num_psf_gv_points;
+diff --git a/drivers/gpu/drm/xe/xe_dma_buf.c b/drivers/gpu/drm/xe/xe_dma_buf.c
+index f67803e15a0e66..f7a20264ea3305 100644
+--- a/drivers/gpu/drm/xe/xe_dma_buf.c
++++ b/drivers/gpu/drm/xe/xe_dma_buf.c
+@@ -145,10 +145,7 @@ static void xe_dma_buf_unmap(struct dma_buf_attachment *attach,
+ struct sg_table *sgt,
+ enum dma_data_direction dir)
+ {
+- struct dma_buf *dma_buf = attach->dmabuf;
+- struct xe_bo *bo = gem_to_xe_bo(dma_buf->priv);
+-
+- if (!xe_bo_is_vram(bo)) {
++ if (sg_page(sgt->sgl)) {
+ dma_unmap_sgtable(attach->dev, sgt, dir, 0);
+ sg_free_table(sgt);
+ kfree(sgt);
+diff --git a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
+index 0a93831c0a025e..9405d83d4db2ab 100644
+--- a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
++++ b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
+@@ -322,6 +322,13 @@ int xe_gt_tlb_invalidation_ggtt(struct xe_gt *gt)
+ return 0;
+ }
+
++/*
++ * Ensure that roundup_pow_of_two(length) doesn't overflow.
++ * Note that roundup_pow_of_two() operates on unsigned long,
++ * not on u64.
++ */
++#define MAX_RANGE_TLB_INVALIDATION_LENGTH (rounddown_pow_of_two(ULONG_MAX))
++
+ /**
+ * xe_gt_tlb_invalidation_range - Issue a TLB invalidation on this GT for an
+ * address range
+@@ -346,6 +353,7 @@ int xe_gt_tlb_invalidation_range(struct xe_gt *gt,
+ struct xe_device *xe = gt_to_xe(gt);
+ #define MAX_TLB_INVALIDATION_LEN 7
+ u32 action[MAX_TLB_INVALIDATION_LEN];
++ u64 length = end - start;
+ int len = 0;
+
+ xe_gt_assert(gt, fence);
+@@ -358,11 +366,11 @@ int xe_gt_tlb_invalidation_range(struct xe_gt *gt,
+
+ action[len++] = XE_GUC_ACTION_TLB_INVALIDATION;
+ action[len++] = 0; /* seqno, replaced in send_tlb_invalidation */
+- if (!xe->info.has_range_tlb_invalidation) {
++ if (!xe->info.has_range_tlb_invalidation ||
++ length > MAX_RANGE_TLB_INVALIDATION_LENGTH) {
+ action[len++] = MAKE_INVAL_OP(XE_GUC_TLB_INVAL_FULL);
+ } else {
+ u64 orig_start = start;
+- u64 length = end - start;
+ u64 align;
+
+ if (length < SZ_4K)
+diff --git a/drivers/gpu/drm/xe/xe_guc_ads.c b/drivers/gpu/drm/xe/xe_guc_ads.c
+index fab259adc380be..bcbbe0d99306fa 100644
+--- a/drivers/gpu/drm/xe/xe_guc_ads.c
++++ b/drivers/gpu/drm/xe/xe_guc_ads.c
+@@ -490,24 +490,52 @@ static void fill_engine_enable_masks(struct xe_gt *gt,
+ engine_enable_mask(gt, XE_ENGINE_CLASS_OTHER));
+ }
+
+-static void guc_prep_golden_lrc_null(struct xe_guc_ads *ads)
++/*
++ * Write the offsets corresponding to the golden LRCs. The actual data is
++ * populated later by guc_golden_lrc_populate()
++ */
++static void guc_golden_lrc_init(struct xe_guc_ads *ads)
+ {
+ struct xe_device *xe = ads_to_xe(ads);
++ struct xe_gt *gt = ads_to_gt(ads);
+ struct iosys_map info_map = IOSYS_MAP_INIT_OFFSET(ads_to_map(ads),
+ offsetof(struct __guc_ads_blob, system_info));
+- u8 guc_class;
++ size_t alloc_size, real_size;
++ u32 addr_ggtt, offset;
++ int class;
++
++ offset = guc_ads_golden_lrc_offset(ads);
++ addr_ggtt = xe_bo_ggtt_addr(ads->bo) + offset;
++
++ for (class = 0; class < XE_ENGINE_CLASS_MAX; ++class) {
++ u8 guc_class;
++
++ guc_class = xe_engine_class_to_guc_class(class);
+
+- for (guc_class = 0; guc_class <= GUC_MAX_ENGINE_CLASSES; ++guc_class) {
+ if (!info_map_read(xe, &info_map,
+ engine_enabled_masks[guc_class]))
+ continue;
+
++ real_size = xe_gt_lrc_size(gt, class);
++ alloc_size = PAGE_ALIGN(real_size);
++
++ /*
++ * This interface is slightly confusing. We need to pass the
++ * base address of the full golden context and the size of just
++ * the engine state, which is the section of the context image
++ * that starts after the execlists LRC registers. This is
++ * required to allow the GuC to restore just the engine state
++ * when a watchdog reset occurs.
++ * We calculate the engine state size by removing the size of
++ * what comes before it in the context image (which is identical
++ * on all engines).
++ */
+ ads_blob_write(ads, ads.eng_state_size[guc_class],
+- guc_ads_golden_lrc_size(ads) -
+- xe_lrc_skip_size(xe));
++ real_size - xe_lrc_skip_size(xe));
+ ads_blob_write(ads, ads.golden_context_lrca[guc_class],
+- xe_bo_ggtt_addr(ads->bo) +
+- guc_ads_golden_lrc_offset(ads));
++ addr_ggtt);
++
++ addr_ggtt += alloc_size;
+ }
+ }
+
+@@ -857,7 +885,7 @@ void xe_guc_ads_populate_minimal(struct xe_guc_ads *ads)
+
+ xe_map_memset(ads_to_xe(ads), ads_to_map(ads), 0, 0, ads->bo->size);
+ guc_policies_init(ads);
+- guc_prep_golden_lrc_null(ads);
++ guc_golden_lrc_init(ads);
+ guc_mapping_table_init_invalid(gt, &info_map);
+ guc_doorbell_init(ads);
+
+@@ -883,7 +911,7 @@ void xe_guc_ads_populate(struct xe_guc_ads *ads)
+ guc_policies_init(ads);
+ fill_engine_enable_masks(gt, &info_map);
+ guc_mmio_reg_state_init(ads);
+- guc_prep_golden_lrc_null(ads);
++ guc_golden_lrc_init(ads);
+ guc_mapping_table_init(gt, &info_map);
+ guc_capture_prep_lists(ads);
+ guc_doorbell_init(ads);
+@@ -903,18 +931,22 @@ void xe_guc_ads_populate(struct xe_guc_ads *ads)
+ guc_ads_private_data_offset(ads));
+ }
+
+-static void guc_populate_golden_lrc(struct xe_guc_ads *ads)
++/*
++ * After the golden LRC's are recorded for each engine class by the first
++ * submission, copy them to the ADS, as initialized earlier by
++ * guc_golden_lrc_init().
++ */
++static void guc_golden_lrc_populate(struct xe_guc_ads *ads)
+ {
+ struct xe_device *xe = ads_to_xe(ads);
+ struct xe_gt *gt = ads_to_gt(ads);
+ struct iosys_map info_map = IOSYS_MAP_INIT_OFFSET(ads_to_map(ads),
+ offsetof(struct __guc_ads_blob, system_info));
+ size_t total_size = 0, alloc_size, real_size;
+- u32 addr_ggtt, offset;
++ u32 offset;
+ int class;
+
+ offset = guc_ads_golden_lrc_offset(ads);
+- addr_ggtt = xe_bo_ggtt_addr(ads->bo) + offset;
+
+ for (class = 0; class < XE_ENGINE_CLASS_MAX; ++class) {
+ u8 guc_class;
+@@ -931,26 +963,9 @@ static void guc_populate_golden_lrc(struct xe_guc_ads *ads)
+ alloc_size = PAGE_ALIGN(real_size);
+ total_size += alloc_size;
+
+- /*
+- * This interface is slightly confusing. We need to pass the
+- * base address of the full golden context and the size of just
+- * the engine state, which is the section of the context image
+- * that starts after the execlists LRC registers. This is
+- * required to allow the GuC to restore just the engine state
+- * when a watchdog reset occurs.
+- * We calculate the engine state size by removing the size of
+- * what comes before it in the context image (which is identical
+- * on all engines).
+- */
+- ads_blob_write(ads, ads.eng_state_size[guc_class],
+- real_size - xe_lrc_skip_size(xe));
+- ads_blob_write(ads, ads.golden_context_lrca[guc_class],
+- addr_ggtt);
+-
+ xe_map_memcpy_to(xe, ads_to_map(ads), offset,
+ gt->default_lrc[class], real_size);
+
+- addr_ggtt += alloc_size;
+ offset += alloc_size;
+ }
+
+@@ -959,7 +974,7 @@ static void guc_populate_golden_lrc(struct xe_guc_ads *ads)
+
+ void xe_guc_ads_populate_post_load(struct xe_guc_ads *ads)
+ {
+- guc_populate_golden_lrc(ads);
++ guc_golden_lrc_populate(ads);
+ }
+
+ static int guc_ads_action_update_policies(struct xe_guc_ads *ads, u32 policy_offset)
+diff --git a/drivers/gpu/drm/xe/xe_hmm.c b/drivers/gpu/drm/xe/xe_hmm.c
+index c3cc0fa105e84a..57b71956ddf42a 100644
+--- a/drivers/gpu/drm/xe/xe_hmm.c
++++ b/drivers/gpu/drm/xe/xe_hmm.c
+@@ -19,29 +19,6 @@ static u64 xe_npages_in_range(unsigned long start, unsigned long end)
+ return (end - start) >> PAGE_SHIFT;
+ }
+
+-/**
+- * xe_mark_range_accessed() - mark a range is accessed, so core mm
+- * have such information for memory eviction or write back to
+- * hard disk
+- * @range: the range to mark
+- * @write: if write to this range, we mark pages in this range
+- * as dirty
+- */
+-static void xe_mark_range_accessed(struct hmm_range *range, bool write)
+-{
+- struct page *page;
+- u64 i, npages;
+-
+- npages = xe_npages_in_range(range->start, range->end);
+- for (i = 0; i < npages; i++) {
+- page = hmm_pfn_to_page(range->hmm_pfns[i]);
+- if (write)
+- set_page_dirty_lock(page);
+-
+- mark_page_accessed(page);
+- }
+-}
+-
+ static int xe_alloc_sg(struct xe_device *xe, struct sg_table *st,
+ struct hmm_range *range, struct rw_semaphore *notifier_sem)
+ {
+@@ -331,7 +308,6 @@ int xe_hmm_userptr_populate_range(struct xe_userptr_vma *uvma,
+ if (ret)
+ goto out_unlock;
+
+- xe_mark_range_accessed(&hmm_range, write);
+ userptr->sg = &userptr->sgt;
+ xe_hmm_userptr_set_mapped(uvma);
+ userptr->notifier_seq = hmm_range.notifier_seq;
+diff --git a/drivers/gpu/drm/xe/xe_migrate.c b/drivers/gpu/drm/xe/xe_migrate.c
+index 278bc96cf593d8..fb93345b377061 100644
+--- a/drivers/gpu/drm/xe/xe_migrate.c
++++ b/drivers/gpu/drm/xe/xe_migrate.c
+@@ -1177,7 +1177,7 @@ struct dma_fence *xe_migrate_clear(struct xe_migrate *m,
+ err_sync:
+ /* Sync partial copies if any. FIXME: job_mutex? */
+ if (fence) {
+- dma_fence_wait(m->fence, false);
++ dma_fence_wait(fence, false);
+ dma_fence_put(fence);
+ }
+
+diff --git a/drivers/i2c/busses/i2c-cros-ec-tunnel.c b/drivers/i2c/busses/i2c-cros-ec-tunnel.c
+index 43bf90d90eebab..208ce4f9e782cd 100644
+--- a/drivers/i2c/busses/i2c-cros-ec-tunnel.c
++++ b/drivers/i2c/busses/i2c-cros-ec-tunnel.c
+@@ -247,6 +247,9 @@ static int ec_i2c_probe(struct platform_device *pdev)
+ u32 remote_bus;
+ int err;
+
++ if (!ec)
++ return dev_err_probe(dev, -EPROBE_DEFER, "couldn't find parent EC device\n");
++
+ if (!ec->cmd_xfer) {
+ dev_err(dev, "Missing sendrecv\n");
+ return -EINVAL;
+diff --git a/drivers/i2c/i2c-atr.c b/drivers/i2c/i2c-atr.c
+index 8fe9ddff8e96f6..783fb8df2ebee9 100644
+--- a/drivers/i2c/i2c-atr.c
++++ b/drivers/i2c/i2c-atr.c
+@@ -8,12 +8,12 @@
+ * Originally based on i2c-mux.c
+ */
+
+-#include <linux/fwnode.h>
+ #include <linux/i2c-atr.h>
+ #include <linux/i2c.h>
+ #include <linux/kernel.h>
+ #include <linux/module.h>
+ #include <linux/mutex.h>
++#include <linux/property.h>
+ #include <linux/slab.h>
+ #include <linux/spinlock.h>
+
+diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
+index 91db10515d7472..176d0b3e448870 100644
+--- a/drivers/infiniband/core/cma.c
++++ b/drivers/infiniband/core/cma.c
+@@ -72,6 +72,8 @@ static const char * const cma_events[] = {
+ static void cma_iboe_set_mgid(struct sockaddr *addr, union ib_gid *mgid,
+ enum ib_gid_type gid_type);
+
++static void cma_netevent_work_handler(struct work_struct *_work);
++
+ const char *__attribute_const__ rdma_event_msg(enum rdma_cm_event_type event)
+ {
+ size_t index = event;
+@@ -1033,6 +1035,7 @@ __rdma_create_id(struct net *net, rdma_cm_event_handler event_handler,
+ get_random_bytes(&id_priv->seq_num, sizeof id_priv->seq_num);
+ id_priv->id.route.addr.dev_addr.net = get_net(net);
+ id_priv->seq_num &= 0x00ffffff;
++ INIT_WORK(&id_priv->id.net_work, cma_netevent_work_handler);
+
+ rdma_restrack_new(&id_priv->res, RDMA_RESTRACK_CM_ID);
+ if (parent)
+@@ -5227,7 +5230,6 @@ static int cma_netevent_callback(struct notifier_block *self,
+ if (!memcmp(current_id->id.route.addr.dev_addr.dst_dev_addr,
+ neigh->ha, ETH_ALEN))
+ continue;
+-		INIT_WORK(&current_id->id.net_work, cma_netevent_work_handler);
+ cma_id_get(current_id);
+		queue_work(cma_wq, &current_id->id.net_work);
+ }
+diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c
+index e9fa22d31c2332..c48ef608302055 100644
+--- a/drivers/infiniband/core/umem_odp.c
++++ b/drivers/infiniband/core/umem_odp.c
+@@ -76,12 +76,14 @@ static inline int ib_init_umem_odp(struct ib_umem_odp *umem_odp,
+
+ npfns = (end - start) >> PAGE_SHIFT;
+ umem_odp->pfn_list = kvcalloc(
+- npfns, sizeof(*umem_odp->pfn_list), GFP_KERNEL);
++ npfns, sizeof(*umem_odp->pfn_list),
++ GFP_KERNEL | __GFP_NOWARN);
+ if (!umem_odp->pfn_list)
+ return -ENOMEM;
+
+ umem_odp->dma_list = kvcalloc(
+- ndmas, sizeof(*umem_odp->dma_list), GFP_KERNEL);
++ ndmas, sizeof(*umem_odp->dma_list),
++ GFP_KERNEL | __GFP_NOWARN);
+ if (!umem_odp->dma_list) {
+ ret = -ENOMEM;
+ goto out_pfn_list;
+diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.c b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+index 6f5db32082dd78..02b21d484677ed 100644
+--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.c
++++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+@@ -1773,10 +1773,7 @@ int bnxt_re_destroy_srq(struct ib_srq *ib_srq, struct ib_udata *udata)
+ ib_srq);
+ struct bnxt_re_dev *rdev = srq->rdev;
+ struct bnxt_qplib_srq *qplib_srq = &srq->qplib_srq;
+- struct bnxt_qplib_nq *nq = NULL;
+
+- if (qplib_srq->cq)
+- nq = qplib_srq->cq->nq;
+ if (rdev->chip_ctx->modes.toggle_bits & BNXT_QPLIB_SRQ_TOGGLE_BIT) {
+ free_page((unsigned long)srq->uctx_srq_page);
+ hash_del(&srq->hash_entry);
+@@ -1784,8 +1781,6 @@ int bnxt_re_destroy_srq(struct ib_srq *ib_srq, struct ib_udata *udata)
+ bnxt_qplib_destroy_srq(&rdev->qplib_res, qplib_srq);
+ ib_umem_release(srq->umem);
+ atomic_dec(&rdev->stats.res.srq_count);
+- if (nq)
+- nq->budget--;
+ return 0;
+ }
+
+@@ -1826,7 +1821,6 @@ int bnxt_re_create_srq(struct ib_srq *ib_srq,
+ struct ib_udata *udata)
+ {
+ struct bnxt_qplib_dev_attr *dev_attr;
+- struct bnxt_qplib_nq *nq = NULL;
+ struct bnxt_re_ucontext *uctx;
+ struct bnxt_re_dev *rdev;
+ struct bnxt_re_srq *srq;
+@@ -1872,7 +1866,6 @@ int bnxt_re_create_srq(struct ib_srq *ib_srq,
+ srq->qplib_srq.eventq_hw_ring_id = rdev->nqr->nq[0].ring_id;
+ srq->qplib_srq.sg_info.pgsize = PAGE_SIZE;
+ srq->qplib_srq.sg_info.pgshft = PAGE_SHIFT;
+- nq = &rdev->nqr->nq[0];
+
+ if (udata) {
+ rc = bnxt_re_init_user_srq(rdev, pd, srq, udata);
+@@ -1907,8 +1900,6 @@ int bnxt_re_create_srq(struct ib_srq *ib_srq,
+ goto fail;
+ }
+ }
+- if (nq)
+- nq->budget++;
+ active_srqs = atomic_inc_return(&rdev->stats.res.srq_count);
+ if (active_srqs > rdev->stats.res.srq_watermark)
+ rdev->stats.res.srq_watermark = active_srqs;
+@@ -3078,7 +3069,6 @@ int bnxt_re_destroy_cq(struct ib_cq *ib_cq, struct ib_udata *udata)
+ ib_umem_release(cq->umem);
+
+ atomic_dec(&rdev->stats.res.cq_count);
+- nq->budget--;
+ kfree(cq->cql);
+ return 0;
+ }
+diff --git a/drivers/infiniband/hw/hns/hns_roce_main.c b/drivers/infiniband/hw/hns/hns_roce_main.c
+index cf89a8db4f64cd..8d0b63d4b50a6c 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_main.c
++++ b/drivers/infiniband/hw/hns/hns_roce_main.c
+@@ -763,7 +763,7 @@ static int hns_roce_register_device(struct hns_roce_dev *hr_dev)
+ if (ret)
+ return ret;
+ }
+- dma_set_max_seg_size(dev, UINT_MAX);
++ dma_set_max_seg_size(dev, SZ_2G);
+ ret = ib_register_device(ib_dev, "hns_%d", dev);
+ if (ret) {
+ dev_err(dev, "ib_register_device failed!\n");
+diff --git a/drivers/infiniband/hw/usnic/usnic_ib_main.c b/drivers/infiniband/hw/usnic/usnic_ib_main.c
+index 4ddcd5860e0fa4..11eca39b73a93e 100644
+--- a/drivers/infiniband/hw/usnic/usnic_ib_main.c
++++ b/drivers/infiniband/hw/usnic/usnic_ib_main.c
+@@ -397,7 +397,7 @@ static void *usnic_ib_device_add(struct pci_dev *dev)
+ if (!us_ibdev) {
+ usnic_err("Device %s context alloc failed\n",
+ netdev_name(pci_get_drvdata(dev)));
+- return ERR_PTR(-EFAULT);
++ return NULL;
+ }
+
+ us_ibdev->ufdev = usnic_fwd_dev_alloc(dev);
+@@ -517,8 +517,8 @@ static struct usnic_ib_dev *usnic_ib_discover_pf(struct usnic_vnic *vnic)
+ }
+
+ us_ibdev = usnic_ib_device_add(parent_pci);
+- if (IS_ERR_OR_NULL(us_ibdev)) {
+- us_ibdev = us_ibdev ? us_ibdev : ERR_PTR(-EFAULT);
++ if (!us_ibdev) {
++ us_ibdev = ERR_PTR(-EFAULT);
+ goto out;
+ }
+
+@@ -586,10 +586,10 @@ static int usnic_ib_pci_probe(struct pci_dev *pdev,
+ }
+
+ pf = usnic_ib_discover_pf(vf->vnic);
+- if (IS_ERR_OR_NULL(pf)) {
+- usnic_err("Failed to discover pf of vnic %s with err%ld\n",
+- pci_name(pdev), PTR_ERR(pf));
+- err = pf ? PTR_ERR(pf) : -EFAULT;
++ if (IS_ERR(pf)) {
++ err = PTR_ERR(pf);
++ usnic_err("Failed to discover pf of vnic %s with err%d\n",
++ pci_name(pdev), err);
+ goto out_clean_vnic;
+ }
+
+diff --git a/drivers/md/md-bitmap.c b/drivers/md/md-bitmap.c
+index 9ae6cc8e30cbdc..27409d05f05325 100644
+--- a/drivers/md/md-bitmap.c
++++ b/drivers/md/md-bitmap.c
+@@ -2355,9 +2355,8 @@ static int bitmap_get_stats(void *data, struct md_bitmap_stats *stats)
+
+ if (!bitmap)
+ return -ENOENT;
+- if (bitmap->mddev->bitmap_info.external)
+- return -ENOENT;
+- if (!bitmap->storage.sb_page) /* no superblock */
++ if (!bitmap->mddev->bitmap_info.external &&
++ !bitmap->storage.sb_page)
+ return -EINVAL;
+ sb = kmap_local_page(bitmap->storage.sb_page);
+ stats->sync_size = le64_to_cpu(sb->sync_size);
+diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
+index af010b64be63b3..76a75925b7138c 100644
+--- a/drivers/md/raid10.c
++++ b/drivers/md/raid10.c
+@@ -1734,6 +1734,7 @@ static int raid10_handle_discard(struct mddev *mddev, struct bio *bio)
+ * The discard bio returns only first r10bio finishes
+ */
+ if (first_copy) {
++ md_account_bio(mddev, &bio);
+ r10_bio->master_bio = bio;
+ set_bit(R10BIO_Discard, &r10_bio->state);
+ first_copy = false;
+diff --git a/drivers/net/can/rockchip/rockchip_canfd-core.c b/drivers/net/can/rockchip/rockchip_canfd-core.c
+index 46201c126703ce..7107a37da36c7f 100644
+--- a/drivers/net/can/rockchip/rockchip_canfd-core.c
++++ b/drivers/net/can/rockchip/rockchip_canfd-core.c
+@@ -902,15 +902,16 @@ static int rkcanfd_probe(struct platform_device *pdev)
+ priv->can.data_bittiming_const = &rkcanfd_data_bittiming_const;
+ priv->can.ctrlmode_supported = CAN_CTRLMODE_LOOPBACK |
+ CAN_CTRLMODE_BERR_REPORTING;
+- if (!(priv->devtype_data.quirks & RKCANFD_QUIRK_CANFD_BROKEN))
+- priv->can.ctrlmode_supported |= CAN_CTRLMODE_FD;
+ priv->can.do_set_mode = rkcanfd_set_mode;
+ priv->can.do_get_berr_counter = rkcanfd_get_berr_counter;
+ priv->ndev = ndev;
+
+ match = device_get_match_data(&pdev->dev);
+- if (match)
++ if (match) {
+ priv->devtype_data = *(struct rkcanfd_devtype_data *)match;
++ if (!(priv->devtype_data.quirks & RKCANFD_QUIRK_CANFD_BROKEN))
++ priv->can.ctrlmode_supported |= CAN_CTRLMODE_FD;
++ }
+
+ err = can_rx_offload_add_manual(ndev, &priv->offload,
+ RKCANFD_NAPI_WEIGHT);
+diff --git a/drivers/net/dsa/b53/b53_common.c b/drivers/net/dsa/b53/b53_common.c
+index 79dc77835681c8..3b49e87e8ef721 100644
+--- a/drivers/net/dsa/b53/b53_common.c
++++ b/drivers/net/dsa/b53/b53_common.c
+@@ -737,6 +737,15 @@ static void b53_enable_mib(struct b53_device *dev)
+ b53_write8(dev, B53_MGMT_PAGE, B53_GLOBAL_CONFIG, gc);
+ }
+
++static void b53_enable_stp(struct b53_device *dev)
++{
++ u8 gc;
++
++ b53_read8(dev, B53_MGMT_PAGE, B53_GLOBAL_CONFIG, &gc);
++ gc |= GC_RX_BPDU_EN;
++ b53_write8(dev, B53_MGMT_PAGE, B53_GLOBAL_CONFIG, gc);
++}
++
+ static u16 b53_default_pvid(struct b53_device *dev)
+ {
+ if (is5325(dev) || is5365(dev))
+@@ -876,6 +885,7 @@ static int b53_switch_reset(struct b53_device *dev)
+ }
+
+ b53_enable_mib(dev);
++ b53_enable_stp(dev);
+
+ return b53_flush_arl(dev, FAST_AGE_STATIC);
+ }
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
+index 29a89ab4b78946..08db846cda8dec 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.c
++++ b/drivers/net/dsa/mv88e6xxx/chip.c
+@@ -1852,6 +1852,8 @@ static int mv88e6xxx_vtu_get(struct mv88e6xxx_chip *chip, u16 vid,
+ if (!chip->info->ops->vtu_getnext)
+ return -EOPNOTSUPP;
+
++ memset(entry, 0, sizeof(*entry));
++
+ entry->vid = vid ? vid - 1 : mv88e6xxx_max_vid(chip);
+ entry->valid = false;
+
+@@ -1960,7 +1962,16 @@ static int mv88e6xxx_mst_put(struct mv88e6xxx_chip *chip, u8 sid)
+ struct mv88e6xxx_mst *mst, *tmp;
+ int err;
+
+- if (!sid)
++ /* If the SID is zero, it is for a VLAN mapped to the default MSTI,
++ * and mv88e6xxx_stu_setup() made sure it is always present, and thus,
++ * should not be removed here.
++ *
++ * If the chip lacks STU support, numerically the "sid" variable will
++ * happen to also be zero, but we don't want to rely on that fact, so
++ * we explicitly test that first. In that case, there is also nothing
++ * to do here.
++ */
++ if (!mv88e6xxx_has_stu(chip) || !sid)
+ return 0;
+
+ list_for_each_entry_safe(mst, tmp, &chip->msts, node) {
+diff --git a/drivers/net/dsa/mv88e6xxx/devlink.c b/drivers/net/dsa/mv88e6xxx/devlink.c
+index 795c8df7b6a743..195460a0a0d418 100644
+--- a/drivers/net/dsa/mv88e6xxx/devlink.c
++++ b/drivers/net/dsa/mv88e6xxx/devlink.c
+@@ -736,7 +736,8 @@ void mv88e6xxx_teardown_devlink_regions_global(struct dsa_switch *ds)
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(mv88e6xxx_regions); i++)
+- dsa_devlink_region_destroy(chip->regions[i]);
++ if (chip->regions[i])
++ dsa_devlink_region_destroy(chip->regions[i]);
+ }
+
+ void mv88e6xxx_teardown_devlink_regions_port(struct dsa_switch *ds, int port)
+diff --git a/drivers/net/ethernet/amd/pds_core/debugfs.c b/drivers/net/ethernet/amd/pds_core/debugfs.c
+index ac37a4e738ae7d..04c5e3abd8d706 100644
+--- a/drivers/net/ethernet/amd/pds_core/debugfs.c
++++ b/drivers/net/ethernet/amd/pds_core/debugfs.c
+@@ -154,8 +154,9 @@ void pdsc_debugfs_add_qcq(struct pdsc *pdsc, struct pdsc_qcq *qcq)
+ debugfs_create_u32("index", 0400, intr_dentry, &intr->index);
+ debugfs_create_u32("vector", 0400, intr_dentry, &intr->vector);
+
+- intr_ctrl_regset = kzalloc(sizeof(*intr_ctrl_regset),
+- GFP_KERNEL);
++ intr_ctrl_regset = devm_kzalloc(pdsc->dev,
++ sizeof(*intr_ctrl_regset),
++ GFP_KERNEL);
+ if (!intr_ctrl_regset)
+ return;
+ intr_ctrl_regset->regs = intr_ctrl_regs;
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index 2cd79b59cf0022..1b39574e3fa22d 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -783,7 +783,7 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
+ dev_kfree_skb_any(skb);
+ tx_kick_pending:
+ if (BNXT_TX_PTP_IS_SET(lflags)) {
+- txr->tx_buf_ring[txr->tx_prod].is_ts_pkt = 0;
++ txr->tx_buf_ring[RING_TX(bp, txr->tx_prod)].is_ts_pkt = 0;
+ atomic64_inc(&bp->ptp_cfg->stats.ts_err);
+ if (!(bp->fw_cap & BNXT_FW_CAP_TX_TS_CMP))
+ /* set SKB to err so PTP worker will clean up */
+@@ -791,7 +791,7 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
+ }
+ if (txr->kick_pending)
+ bnxt_txr_db_kick(bp, txr, txr->tx_prod);
+- txr->tx_buf_ring[txr->tx_prod].skb = NULL;
++ txr->tx_buf_ring[RING_TX(bp, txr->tx_prod)].skb = NULL;
+ dev_core_stats_tx_dropped_inc(dev);
+ return NETDEV_TX_OK;
+ }
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_ethtool.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_ethtool.c
+index 7f3f5afa864f4a..1546c3db08f093 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_ethtool.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_ethtool.c
+@@ -2270,6 +2270,7 @@ int cxgb4_init_ethtool_filters(struct adapter *adap)
+ eth_filter->port[i].bmap = bitmap_zalloc(nentries, GFP_KERNEL);
+ if (!eth_filter->port[i].bmap) {
+ ret = -ENOMEM;
++ kvfree(eth_filter->port[i].loc_array);
+ goto free_eth_finfo;
+ }
+ }
+diff --git a/drivers/net/ethernet/hisilicon/hibmcge/hbg_err.c b/drivers/net/ethernet/hisilicon/hibmcge/hbg_err.c
+index 4d1f4a33391a8b..5e288db06d6f6e 100644
+--- a/drivers/net/ethernet/hisilicon/hibmcge/hbg_err.c
++++ b/drivers/net/ethernet/hisilicon/hibmcge/hbg_err.c
+@@ -26,12 +26,15 @@ static void hbg_restore_mac_table(struct hbg_priv *priv)
+
+ static void hbg_restore_user_def_settings(struct hbg_priv *priv)
+ {
++ /* The index of host mac is always 0. */
++ u64 rx_pause_addr = ether_addr_to_u64(priv->filter.mac_table[0].addr);
+ struct ethtool_pauseparam *pause_param = &priv->user_def.pause_param;
+
+ hbg_restore_mac_table(priv);
+ hbg_hw_set_mtu(priv, priv->netdev->mtu);
+ hbg_hw_set_pause_enable(priv, pause_param->tx_pause,
+ pause_param->rx_pause);
++ hbg_hw_set_rx_pause_mac_addr(priv, rx_pause_addr);
+ }
+
+ int hbg_rebuild(struct hbg_priv *priv)
+diff --git a/drivers/net/ethernet/hisilicon/hibmcge/hbg_hw.c b/drivers/net/ethernet/hisilicon/hibmcge/hbg_hw.c
+index e7798f21364502..56089849753dc4 100644
+--- a/drivers/net/ethernet/hisilicon/hibmcge/hbg_hw.c
++++ b/drivers/net/ethernet/hisilicon/hibmcge/hbg_hw.c
+@@ -224,6 +224,10 @@ void hbg_hw_set_mac_filter_enable(struct hbg_priv *priv, u32 enable)
+ {
+ hbg_reg_write_field(priv, HBG_REG_REC_FILT_CTRL_ADDR,
+ HBG_REG_REC_FILT_CTRL_UC_MATCH_EN_B, enable);
++
++ /* only uc filter is supported, so set all bits of mc mask reg to 1 */
++ hbg_reg_write64(priv, HBG_REG_STATION_ADDR_LOW_MSK_0, U64_MAX);
++ hbg_reg_write64(priv, HBG_REG_STATION_ADDR_LOW_MSK_1, U64_MAX);
+ }
+
+ void hbg_hw_set_pause_enable(struct hbg_priv *priv, u32 tx_en, u32 rx_en)
+@@ -232,6 +236,9 @@ void hbg_hw_set_pause_enable(struct hbg_priv *priv, u32 tx_en, u32 rx_en)
+ HBG_REG_PAUSE_ENABLE_TX_B, tx_en);
+ hbg_reg_write_field(priv, HBG_REG_PAUSE_ENABLE_ADDR,
+ HBG_REG_PAUSE_ENABLE_RX_B, rx_en);
++
++ hbg_reg_write_field(priv, HBG_REG_REC_FILT_CTRL_ADDR,
++ HBG_REG_REC_FILT_CTRL_PAUSE_FRM_PASS_B, rx_en);
+ }
+
+ void hbg_hw_get_pause_enable(struct hbg_priv *priv, u32 *tx_en, u32 *rx_en)
+diff --git a/drivers/net/ethernet/hisilicon/hibmcge/hbg_main.c b/drivers/net/ethernet/hisilicon/hibmcge/hbg_main.c
+index bb0f25ac97600c..4d20679b2543a5 100644
+--- a/drivers/net/ethernet/hisilicon/hibmcge/hbg_main.c
++++ b/drivers/net/ethernet/hisilicon/hibmcge/hbg_main.c
+@@ -198,12 +198,12 @@ static int hbg_net_change_mtu(struct net_device *netdev, int new_mtu)
+ if (netif_running(netdev))
+ return -EBUSY;
+
+- hbg_hw_set_mtu(priv, new_mtu);
+- WRITE_ONCE(netdev->mtu, new_mtu);
+-
+ dev_dbg(&priv->pdev->dev,
+ "change mtu from %u to %u\n", netdev->mtu, new_mtu);
+
++ hbg_hw_set_mtu(priv, new_mtu);
++ WRITE_ONCE(netdev->mtu, new_mtu);
++
+ return 0;
+ }
+
+diff --git a/drivers/net/ethernet/hisilicon/hibmcge/hbg_reg.h b/drivers/net/ethernet/hisilicon/hibmcge/hbg_reg.h
+index f12efc12f3c54b..c254f4036329fb 100644
+--- a/drivers/net/ethernet/hisilicon/hibmcge/hbg_reg.h
++++ b/drivers/net/ethernet/hisilicon/hibmcge/hbg_reg.h
+@@ -60,6 +60,7 @@
+ #define HBG_REG_TRANSMIT_CTRL_AN_EN_B BIT(5)
+ #define HBG_REG_REC_FILT_CTRL_ADDR (HBG_REG_SGMII_BASE + 0x0064)
+ #define HBG_REG_REC_FILT_CTRL_UC_MATCH_EN_B BIT(0)
++#define HBG_REG_REC_FILT_CTRL_PAUSE_FRM_PASS_B BIT(4)
+ #define HBG_REG_LINE_LOOP_BACK_ADDR (HBG_REG_SGMII_BASE + 0x01A8)
+ #define HBG_REG_CF_CRC_STRIP_ADDR (HBG_REG_SGMII_BASE + 0x01B0)
+ #define HBG_REG_CF_CRC_STRIP_B BIT(0)
+@@ -81,6 +82,8 @@
+ #define HBG_REG_STATION_ADDR_HIGH_4_ADDR (HBG_REG_SGMII_BASE + 0x0224)
+ #define HBG_REG_STATION_ADDR_LOW_5_ADDR (HBG_REG_SGMII_BASE + 0x0228)
+ #define HBG_REG_STATION_ADDR_HIGH_5_ADDR (HBG_REG_SGMII_BASE + 0x022C)
++#define HBG_REG_STATION_ADDR_LOW_MSK_0 (HBG_REG_SGMII_BASE + 0x0230)
++#define HBG_REG_STATION_ADDR_LOW_MSK_1 (HBG_REG_SGMII_BASE + 0x0238)
+
+ /* PCU */
+ #define HBG_REG_TX_FIFO_THRSLD_ADDR (HBG_REG_SGMII_BASE + 0x0420)
+diff --git a/drivers/net/ethernet/intel/igc/igc.h b/drivers/net/ethernet/intel/igc/igc.h
+index c35cc5cb118569..2f265c0959c7a0 100644
+--- a/drivers/net/ethernet/intel/igc/igc.h
++++ b/drivers/net/ethernet/intel/igc/igc.h
+@@ -319,6 +319,7 @@ struct igc_adapter {
+ struct timespec64 prev_ptp_time; /* Pre-reset PTP clock */
+ ktime_t ptp_reset_start; /* Reset time in clock mono */
+ struct system_time_snapshot snapshot;
++ struct mutex ptm_lock; /* Only allow one PTM transaction at a time */
+
+ char fw_version[32];
+
+diff --git a/drivers/net/ethernet/intel/igc/igc_defines.h b/drivers/net/ethernet/intel/igc/igc_defines.h
+index 8e449904aa7dbd..d19325b0e6e0ba 100644
+--- a/drivers/net/ethernet/intel/igc/igc_defines.h
++++ b/drivers/net/ethernet/intel/igc/igc_defines.h
+@@ -574,7 +574,10 @@
+ #define IGC_PTM_CTRL_SHRT_CYC(usec) (((usec) & 0x3f) << 2)
+ #define IGC_PTM_CTRL_PTM_TO(usec) (((usec) & 0xff) << 8)
+
+-#define IGC_PTM_SHORT_CYC_DEFAULT 1 /* Default short cycle interval */
++/* A short cycle time of 1us theoretically should work, but appears to be too
++ * short in practice.
++ */
++#define IGC_PTM_SHORT_CYC_DEFAULT 4 /* Default short cycle interval */
+ #define IGC_PTM_CYC_TIME_DEFAULT 5 /* Default PTM cycle time */
+ #define IGC_PTM_TIMEOUT_DEFAULT 255 /* Default timeout for PTM errors */
+
+@@ -593,6 +596,7 @@
+ #define IGC_PTM_STAT_T4M1_OVFL BIT(3) /* T4 minus T1 overflow */
+ #define IGC_PTM_STAT_ADJUST_1ST BIT(4) /* 1588 timer adjusted during 1st PTM cycle */
+ #define IGC_PTM_STAT_ADJUST_CYC BIT(5) /* 1588 timer adjusted during non-1st PTM cycle */
++#define IGC_PTM_STAT_ALL GENMASK(5, 0) /* Used to clear all status */
+
+ /* PCIe PTM Cycle Control */
+ #define IGC_PTM_CYCLE_CTRL_CYC_TIME(msec) ((msec) & 0x3ff) /* PTM Cycle Time (msec) */
+diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
+index daf2a24ead0370..80831c57f75094 100644
+--- a/drivers/net/ethernet/intel/igc/igc_main.c
++++ b/drivers/net/ethernet/intel/igc/igc_main.c
+@@ -7230,6 +7230,7 @@ static int igc_probe(struct pci_dev *pdev,
+
+ err_register:
+ igc_release_hw_control(adapter);
++ igc_ptp_stop(adapter);
+ err_eeprom:
+ if (!igc_check_reset_block(hw))
+ igc_reset_phy(hw);
+diff --git a/drivers/net/ethernet/intel/igc/igc_ptp.c b/drivers/net/ethernet/intel/igc/igc_ptp.c
+index 946edbad43022c..612ed26a29c5d4 100644
+--- a/drivers/net/ethernet/intel/igc/igc_ptp.c
++++ b/drivers/net/ethernet/intel/igc/igc_ptp.c
+@@ -974,45 +974,62 @@ static void igc_ptm_log_error(struct igc_adapter *adapter, u32 ptm_stat)
+ }
+ }
+
++/* The PTM lock: adapter->ptm_lock must be held when calling igc_ptm_trigger() */
++static void igc_ptm_trigger(struct igc_hw *hw)
++{
++ u32 ctrl;
++
++ /* To "manually" start the PTM cycle we need to set the
++ * trigger (TRIG) bit
++ */
++ ctrl = rd32(IGC_PTM_CTRL);
++ ctrl |= IGC_PTM_CTRL_TRIG;
++ wr32(IGC_PTM_CTRL, ctrl);
++ /* Perform flush after write to CTRL register otherwise
++ * transaction may not start
++ */
++ wrfl();
++}
++
++/* The PTM lock: adapter->ptm_lock must be held when calling igc_ptm_reset() */
++static void igc_ptm_reset(struct igc_hw *hw)
++{
++ u32 ctrl;
++
++ ctrl = rd32(IGC_PTM_CTRL);
++ ctrl &= ~IGC_PTM_CTRL_TRIG;
++ wr32(IGC_PTM_CTRL, ctrl);
++ /* Write to clear all status */
++ wr32(IGC_PTM_STAT, IGC_PTM_STAT_ALL);
++}
++
+ static int igc_phc_get_syncdevicetime(ktime_t *device,
+ struct system_counterval_t *system,
+ void *ctx)
+ {
+- u32 stat, t2_curr_h, t2_curr_l, ctrl;
+ struct igc_adapter *adapter = ctx;
+ struct igc_hw *hw = &adapter->hw;
++ u32 stat, t2_curr_h, t2_curr_l;
+ int err, count = 100;
+ ktime_t t1, t2_curr;
+
+- /* Get a snapshot of system clocks to use as historic value. */
+- ktime_get_snapshot(&adapter->snapshot);
+-
++ /* Doing this in a loop because in the event of a
++ * badly timed (ha!) system clock adjustment, we may
++ * get PTM errors from the PCI root, but these errors
++ * are transitory. Repeating the process returns valid
++ * data eventually.
++ */
+ do {
+- /* Doing this in a loop because in the event of a
+- * badly timed (ha!) system clock adjustment, we may
+- * get PTM errors from the PCI root, but these errors
+- * are transitory. Repeating the process returns valid
+- * data eventually.
+- */
++ /* Get a snapshot of system clocks to use as historic value. */
++ ktime_get_snapshot(&adapter->snapshot);
+
+- /* To "manually" start the PTM cycle we need to clear and
+- * then set again the TRIG bit.
+- */
+- ctrl = rd32(IGC_PTM_CTRL);
+- ctrl &= ~IGC_PTM_CTRL_TRIG;
+- wr32(IGC_PTM_CTRL, ctrl);
+- ctrl |= IGC_PTM_CTRL_TRIG;
+- wr32(IGC_PTM_CTRL, ctrl);
+-
+- /* The cycle only starts "for real" when software notifies
+- * that it has read the registers, this is done by setting
+- * VALID bit.
+- */
+- wr32(IGC_PTM_STAT, IGC_PTM_STAT_VALID);
++ igc_ptm_trigger(hw);
+
+ err = readx_poll_timeout(rd32, IGC_PTM_STAT, stat,
+ stat, IGC_PTM_STAT_SLEEP,
+ IGC_PTM_STAT_TIMEOUT);
++ igc_ptm_reset(hw);
++
+ if (err < 0) {
+ netdev_err(adapter->netdev, "Timeout reading IGC_PTM_STAT register\n");
+ return err;
+@@ -1021,15 +1038,7 @@ static int igc_phc_get_syncdevicetime(ktime_t *device,
+ if ((stat & IGC_PTM_STAT_VALID) == IGC_PTM_STAT_VALID)
+ break;
+
+- if (stat & ~IGC_PTM_STAT_VALID) {
+- /* An error occurred, log it. */
+- igc_ptm_log_error(adapter, stat);
+- /* The STAT register is write-1-to-clear (W1C),
+- * so write the previous error status to clear it.
+- */
+- wr32(IGC_PTM_STAT, stat);
+- continue;
+- }
++ igc_ptm_log_error(adapter, stat);
+ } while (--count);
+
+ if (!count) {
+@@ -1061,9 +1070,16 @@ static int igc_ptp_getcrosststamp(struct ptp_clock_info *ptp,
+ {
+ struct igc_adapter *adapter = container_of(ptp, struct igc_adapter,
+ ptp_caps);
++ int ret;
++
++ /* This blocks until any in progress PTM transactions complete */
++ mutex_lock(&adapter->ptm_lock);
+
+- return get_device_system_crosststamp(igc_phc_get_syncdevicetime,
+- adapter, &adapter->snapshot, cts);
++ ret = get_device_system_crosststamp(igc_phc_get_syncdevicetime,
++ adapter, &adapter->snapshot, cts);
++ mutex_unlock(&adapter->ptm_lock);
++
++ return ret;
+ }
+
+ static int igc_ptp_getcyclesx64(struct ptp_clock_info *ptp,
+@@ -1162,6 +1178,7 @@ void igc_ptp_init(struct igc_adapter *adapter)
+ spin_lock_init(&adapter->ptp_tx_lock);
+ spin_lock_init(&adapter->free_timer_lock);
+ spin_lock_init(&adapter->tmreg_lock);
++ mutex_init(&adapter->ptm_lock);
+
+ adapter->tstamp_config.rx_filter = HWTSTAMP_FILTER_NONE;
+ adapter->tstamp_config.tx_type = HWTSTAMP_TX_OFF;
+@@ -1174,6 +1191,7 @@ void igc_ptp_init(struct igc_adapter *adapter)
+ if (IS_ERR(adapter->ptp_clock)) {
+ adapter->ptp_clock = NULL;
+ netdev_err(netdev, "ptp_clock_register failed\n");
++ mutex_destroy(&adapter->ptm_lock);
+ } else if (adapter->ptp_clock) {
+ netdev_info(netdev, "PHC added\n");
+ adapter->ptp_flags |= IGC_PTP_ENABLED;
+@@ -1203,10 +1221,12 @@ static void igc_ptm_stop(struct igc_adapter *adapter)
+ struct igc_hw *hw = &adapter->hw;
+ u32 ctrl;
+
++ mutex_lock(&adapter->ptm_lock);
+ ctrl = rd32(IGC_PTM_CTRL);
+ ctrl &= ~IGC_PTM_CTRL_EN;
+
+ wr32(IGC_PTM_CTRL, ctrl);
++ mutex_unlock(&adapter->ptm_lock);
+ }
+
+ /**
+@@ -1237,13 +1257,18 @@ void igc_ptp_suspend(struct igc_adapter *adapter)
+ **/
+ void igc_ptp_stop(struct igc_adapter *adapter)
+ {
++ if (!(adapter->ptp_flags & IGC_PTP_ENABLED))
++ return;
++
+ igc_ptp_suspend(adapter);
+
++ adapter->ptp_flags &= ~IGC_PTP_ENABLED;
+ if (adapter->ptp_clock) {
+ ptp_clock_unregister(adapter->ptp_clock);
+ netdev_info(adapter->netdev, "PHC removed\n");
+ adapter->ptp_flags &= ~IGC_PTP_ENABLED;
+ }
++ mutex_destroy(&adapter->ptm_lock);
+ }
+
+ /**
+@@ -1255,10 +1280,13 @@ void igc_ptp_stop(struct igc_adapter *adapter)
+ void igc_ptp_reset(struct igc_adapter *adapter)
+ {
+ struct igc_hw *hw = &adapter->hw;
+- u32 cycle_ctrl, ctrl;
++ u32 cycle_ctrl, ctrl, stat;
+ unsigned long flags;
+ u32 timadj;
+
++ if (!(adapter->ptp_flags & IGC_PTP_ENABLED))
++ return;
++
+ /* reset the tstamp_config */
+ igc_ptp_set_timestamp_mode(adapter, &adapter->tstamp_config);
+
+@@ -1280,6 +1308,7 @@ void igc_ptp_reset(struct igc_adapter *adapter)
+ if (!igc_is_crosststamp_supported(adapter))
+ break;
+
++ mutex_lock(&adapter->ptm_lock);
+ wr32(IGC_PCIE_DIG_DELAY, IGC_PCIE_DIG_DELAY_DEFAULT);
+ wr32(IGC_PCIE_PHY_DELAY, IGC_PCIE_PHY_DELAY_DEFAULT);
+
+@@ -1290,14 +1319,20 @@ void igc_ptp_reset(struct igc_adapter *adapter)
+ ctrl = IGC_PTM_CTRL_EN |
+ IGC_PTM_CTRL_START_NOW |
+ IGC_PTM_CTRL_SHRT_CYC(IGC_PTM_SHORT_CYC_DEFAULT) |
+- IGC_PTM_CTRL_PTM_TO(IGC_PTM_TIMEOUT_DEFAULT) |
+- IGC_PTM_CTRL_TRIG;
++ IGC_PTM_CTRL_PTM_TO(IGC_PTM_TIMEOUT_DEFAULT);
+
+ wr32(IGC_PTM_CTRL, ctrl);
+
+ /* Force the first cycle to run. */
+- wr32(IGC_PTM_STAT, IGC_PTM_STAT_VALID);
++ igc_ptm_trigger(hw);
++
++ if (readx_poll_timeout_atomic(rd32, IGC_PTM_STAT, stat,
++ stat, IGC_PTM_STAT_SLEEP,
++ IGC_PTM_STAT_TIMEOUT))
++ netdev_err(adapter->netdev, "Timeout reading IGC_PTM_STAT register\n");
+
++ igc_ptm_reset(hw);
++ mutex_unlock(&adapter->ptm_lock);
+ break;
+ default:
+ /* No work to do. */
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/rep.c b/drivers/net/ethernet/marvell/octeontx2/nic/rep.c
+index 04e08e06f30ff2..7153a71dfc860e 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/rep.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/rep.c
+@@ -67,6 +67,8 @@ static int rvu_rep_mcam_flow_init(struct rep_dev *rep)
+
+ rsp = (struct npc_mcam_alloc_entry_rsp *)otx2_mbox_get_rsp
+ (&priv->mbox.mbox, 0, &req->hdr);
++ if (IS_ERR(rsp))
++ goto exit;
+
+ for (ent = 0; ent < rsp->count; ent++)
+ rep->flow_cfg->flow_ent[ent + allocated] = rsp->entry_list[ent];
+diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+index 53485142938c47..0cd1ecacfd29f5 100644
+--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c
++++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+@@ -734,7 +734,7 @@ static void mtk_set_queue_speed(struct mtk_eth *eth, unsigned int idx,
+ case SPEED_100:
+ val |= MTK_QTX_SCH_MAX_RATE_EN |
+ FIELD_PREP(MTK_QTX_SCH_MAX_RATE_MAN, 103) |
+- FIELD_PREP(MTK_QTX_SCH_MAX_RATE_EXP, 3);
++ FIELD_PREP(MTK_QTX_SCH_MAX_RATE_EXP, 3) |
+ FIELD_PREP(MTK_QTX_SCH_MAX_RATE_WEIGHT, 1);
+ break;
+ case SPEED_1000:
+@@ -757,13 +757,13 @@ static void mtk_set_queue_speed(struct mtk_eth *eth, unsigned int idx,
+ case SPEED_100:
+ val |= MTK_QTX_SCH_MAX_RATE_EN |
+ FIELD_PREP(MTK_QTX_SCH_MAX_RATE_MAN, 1) |
+- FIELD_PREP(MTK_QTX_SCH_MAX_RATE_EXP, 5);
++ FIELD_PREP(MTK_QTX_SCH_MAX_RATE_EXP, 5) |
+ FIELD_PREP(MTK_QTX_SCH_MAX_RATE_WEIGHT, 1);
+ break;
+ case SPEED_1000:
+ val |= MTK_QTX_SCH_MAX_RATE_EN |
+- FIELD_PREP(MTK_QTX_SCH_MAX_RATE_MAN, 10) |
+- FIELD_PREP(MTK_QTX_SCH_MAX_RATE_EXP, 5) |
++ FIELD_PREP(MTK_QTX_SCH_MAX_RATE_MAN, 1) |
++ FIELD_PREP(MTK_QTX_SCH_MAX_RATE_EXP, 6) |
+ FIELD_PREP(MTK_QTX_SCH_MAX_RATE_WEIGHT, 10);
+ break;
+ default:
+@@ -823,9 +823,25 @@ static const struct phylink_mac_ops mtk_phylink_ops = {
+ .mac_link_up = mtk_mac_link_up,
+ };
+
++static void mtk_mdio_config(struct mtk_eth *eth)
++{
++ u32 val;
++
++ /* Configure MDC Divider */
++ val = FIELD_PREP(PPSC_MDC_CFG, eth->mdc_divider);
++
++ /* Configure MDC Turbo Mode */
++ if (mtk_is_netsys_v3_or_greater(eth))
++ mtk_m32(eth, 0, MISC_MDC_TURBO, MTK_MAC_MISC_V3);
++ else
++ val |= PPSC_MDC_TURBO;
++
++ mtk_m32(eth, PPSC_MDC_CFG, val, MTK_PPSC);
++}
++
+ static int mtk_mdio_init(struct mtk_eth *eth)
+ {
+- unsigned int max_clk = 2500000, divider;
++ unsigned int max_clk = 2500000;
+ struct device_node *mii_np;
+ int ret;
+ u32 val;
+@@ -865,20 +881,9 @@ static int mtk_mdio_init(struct mtk_eth *eth)
+ }
+ max_clk = val;
+ }
+- divider = min_t(unsigned int, DIV_ROUND_UP(MDC_MAX_FREQ, max_clk), 63);
+-
+- /* Configure MDC Turbo Mode */
+- if (mtk_is_netsys_v3_or_greater(eth))
+- mtk_m32(eth, 0, MISC_MDC_TURBO, MTK_MAC_MISC_V3);
+-
+- /* Configure MDC Divider */
+- val = FIELD_PREP(PPSC_MDC_CFG, divider);
+- if (!mtk_is_netsys_v3_or_greater(eth))
+- val |= PPSC_MDC_TURBO;
+- mtk_m32(eth, PPSC_MDC_CFG, val, MTK_PPSC);
+-
+- dev_dbg(eth->dev, "MDC is running on %d Hz\n", MDC_MAX_FREQ / divider);
+-
++ eth->mdc_divider = min_t(unsigned int, DIV_ROUND_UP(MDC_MAX_FREQ, max_clk), 63);
++ mtk_mdio_config(eth);
++ dev_dbg(eth->dev, "MDC is running on %d Hz\n", MDC_MAX_FREQ / eth->mdc_divider);
+ ret = of_mdiobus_register(eth->mii_bus, mii_np);
+
+ err_put_node:
+@@ -3269,7 +3274,7 @@ static int mtk_start_dma(struct mtk_eth *eth)
+ if (mtk_is_netsys_v2_or_greater(eth))
+ val |= MTK_MUTLI_CNT | MTK_RESV_BUF |
+ MTK_WCOMP_EN | MTK_DMAD_WR_WDONE |
+- MTK_CHK_DDONE_EN | MTK_LEAKY_BUCKET_EN;
++ MTK_CHK_DDONE_EN;
+ else
+ val |= MTK_RX_BT_32DWORDS;
+ mtk_w32(eth, val, reg_map->qdma.glo_cfg);
+@@ -3928,6 +3933,10 @@ static int mtk_hw_init(struct mtk_eth *eth, bool reset)
+ else
+ mtk_hw_reset(eth);
+
++ /* No MT7628/88 support yet */
++ if (reset && !MTK_HAS_CAPS(eth->soc->caps, MTK_SOC_MT7628))
++ mtk_mdio_config(eth);
++
+ if (mtk_is_netsys_v3_or_greater(eth)) {
+ /* Set FE to PDMAv2 if necessary */
+ val = mtk_r32(eth, MTK_FE_GLO_MISC);
+diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.h b/drivers/net/ethernet/mediatek/mtk_eth_soc.h
+index 0d5225f1d3eef6..8d7b6818d86012 100644
+--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.h
++++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.h
+@@ -1260,6 +1260,7 @@ struct mtk_eth {
+ struct clk *clks[MTK_CLK_MAX];
+
+ struct mii_bus *mii_bus;
++ unsigned int mdc_divider;
+ struct work_struct pending_work;
+ unsigned long state;
+
+diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+index bef734c6e5c2b3..afe8127fd32beb 100644
+--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
++++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+@@ -2756,7 +2756,7 @@ static int am65_cpsw_nuss_init_slave_ports(struct am65_cpsw_common *common)
+ of_property_read_bool(port_np, "ti,mac-only");
+
+ /* get phy/link info */
+- port->slave.port_np = port_np;
++ port->slave.port_np = of_node_get(port_np);
+ ret = of_get_phy_mode(port_np, &port->slave.phy_if);
+ if (ret) {
+ dev_err(dev, "%pOF read phy-mode err %d\n",
+@@ -2810,6 +2810,17 @@ static void am65_cpsw_nuss_phylink_cleanup(struct am65_cpsw_common *common)
+ }
+ }
+
++static void am65_cpsw_remove_dt(struct am65_cpsw_common *common)
++{
++ struct am65_cpsw_port *port;
++ int i;
++
++ for (i = 0; i < common->port_num; i++) {
++ port = &common->ports[i];
++ of_node_put(port->slave.port_np);
++ }
++}
++
+ static int
+ am65_cpsw_nuss_init_port_ndev(struct am65_cpsw_common *common, u32 port_idx)
+ {
+@@ -3708,6 +3719,7 @@ static int am65_cpsw_nuss_probe(struct platform_device *pdev)
+ am65_cpsw_nuss_cleanup_ndev(common);
+ am65_cpsw_nuss_phylink_cleanup(common);
+ am65_cpts_release(common->cpts);
++ am65_cpsw_remove_dt(common);
+ err_of_clear:
+ if (common->mdio_dev)
+ of_platform_device_destroy(common->mdio_dev, NULL);
+@@ -3747,6 +3759,7 @@ static void am65_cpsw_nuss_remove(struct platform_device *pdev)
+ am65_cpsw_nuss_phylink_cleanup(common);
+ am65_cpts_release(common->cpts);
+ am65_cpsw_disable_serdes_phy(common);
++ am65_cpsw_remove_dt(common);
+
+ if (common->mdio_dev)
+ of_platform_device_destroy(common->mdio_dev, NULL);
+diff --git a/drivers/net/ethernet/ti/icssg/icss_iep.c b/drivers/net/ethernet/ti/icssg/icss_iep.c
+index d59c1744840af2..2a1c43316f462b 100644
+--- a/drivers/net/ethernet/ti/icssg/icss_iep.c
++++ b/drivers/net/ethernet/ti/icssg/icss_iep.c
+@@ -406,66 +406,79 @@ static void icss_iep_update_to_next_boundary(struct icss_iep *iep, u64 start_ns)
+ static int icss_iep_perout_enable_hw(struct icss_iep *iep,
+ struct ptp_perout_request *req, int on)
+ {
++ struct timespec64 ts;
++ u64 ns_start;
++ u64 ns_width;
+ int ret;
+ u64 cmp;
+
++ if (!on) {
++ /* Disable CMP 1 */
++ regmap_update_bits(iep->map, ICSS_IEP_CMP_CFG_REG,
++ IEP_CMP_CFG_CMP_EN(1), 0);
++
++ /* clear CMP regs */
++ regmap_write(iep->map, ICSS_IEP_CMP1_REG0, 0);
++ if (iep->plat_data->flags & ICSS_IEP_64BIT_COUNTER_SUPPORT)
++ regmap_write(iep->map, ICSS_IEP_CMP1_REG1, 0);
++
++ /* Disable sync */
++ regmap_write(iep->map, ICSS_IEP_SYNC_CTRL_REG, 0);
++
++ return 0;
++ }
++
++ /* Calculate width of the signal for PPS/PEROUT handling */
++ ts.tv_sec = req->on.sec;
++ ts.tv_nsec = req->on.nsec;
++ ns_width = timespec64_to_ns(&ts);
++
++ if (req->flags & PTP_PEROUT_PHASE) {
++ ts.tv_sec = req->phase.sec;
++ ts.tv_nsec = req->phase.nsec;
++ ns_start = timespec64_to_ns(&ts);
++ } else {
++ ns_start = 0;
++ }
++
+ if (iep->ops && iep->ops->perout_enable) {
+ ret = iep->ops->perout_enable(iep->clockops_data, req, on, &cmp);
+ if (ret)
+ return ret;
+
+- if (on) {
+- /* Configure CMP */
+- regmap_write(iep->map, ICSS_IEP_CMP1_REG0, lower_32_bits(cmp));
+- if (iep->plat_data->flags & ICSS_IEP_64BIT_COUNTER_SUPPORT)
+- regmap_write(iep->map, ICSS_IEP_CMP1_REG1, upper_32_bits(cmp));
+- /* Configure SYNC, 1ms pulse width */
+- regmap_write(iep->map, ICSS_IEP_SYNC_PWIDTH_REG, 1000000);
+- regmap_write(iep->map, ICSS_IEP_SYNC0_PERIOD_REG, 0);
+- regmap_write(iep->map, ICSS_IEP_SYNC_START_REG, 0);
+- regmap_write(iep->map, ICSS_IEP_SYNC_CTRL_REG, 0); /* one-shot mode */
+- /* Enable CMP 1 */
+- regmap_update_bits(iep->map, ICSS_IEP_CMP_CFG_REG,
+- IEP_CMP_CFG_CMP_EN(1), IEP_CMP_CFG_CMP_EN(1));
+- } else {
+- /* Disable CMP 1 */
+- regmap_update_bits(iep->map, ICSS_IEP_CMP_CFG_REG,
+- IEP_CMP_CFG_CMP_EN(1), 0);
+-
+- /* clear regs */
+- regmap_write(iep->map, ICSS_IEP_CMP1_REG0, 0);
+- if (iep->plat_data->flags & ICSS_IEP_64BIT_COUNTER_SUPPORT)
+- regmap_write(iep->map, ICSS_IEP_CMP1_REG1, 0);
+- }
++ /* Configure CMP */
++ regmap_write(iep->map, ICSS_IEP_CMP1_REG0, lower_32_bits(cmp));
++ if (iep->plat_data->flags & ICSS_IEP_64BIT_COUNTER_SUPPORT)
++ regmap_write(iep->map, ICSS_IEP_CMP1_REG1, upper_32_bits(cmp));
++ /* Configure SYNC, based on req on width */
++ regmap_write(iep->map, ICSS_IEP_SYNC_PWIDTH_REG,
++ div_u64(ns_width, iep->def_inc));
++ regmap_write(iep->map, ICSS_IEP_SYNC0_PERIOD_REG, 0);
++ regmap_write(iep->map, ICSS_IEP_SYNC_START_REG,
++ div_u64(ns_start, iep->def_inc));
++ regmap_write(iep->map, ICSS_IEP_SYNC_CTRL_REG, 0); /* one-shot mode */
++ /* Enable CMP 1 */
++ regmap_update_bits(iep->map, ICSS_IEP_CMP_CFG_REG,
++ IEP_CMP_CFG_CMP_EN(1), IEP_CMP_CFG_CMP_EN(1));
+ } else {
+- if (on) {
+- u64 start_ns;
+-
+- iep->period = ((u64)req->period.sec * NSEC_PER_SEC) +
+- req->period.nsec;
+- start_ns = ((u64)req->period.sec * NSEC_PER_SEC)
+- + req->period.nsec;
+- icss_iep_update_to_next_boundary(iep, start_ns);
+-
+- /* Enable Sync in single shot mode */
+- regmap_write(iep->map, ICSS_IEP_SYNC_CTRL_REG,
+- IEP_SYNC_CTRL_SYNC_N_EN(0) | IEP_SYNC_CTRL_SYNC_EN);
+- /* Enable CMP 1 */
+- regmap_update_bits(iep->map, ICSS_IEP_CMP_CFG_REG,
+- IEP_CMP_CFG_CMP_EN(1), IEP_CMP_CFG_CMP_EN(1));
+- } else {
+- /* Disable CMP 1 */
+- regmap_update_bits(iep->map, ICSS_IEP_CMP_CFG_REG,
+- IEP_CMP_CFG_CMP_EN(1), 0);
+-
+- /* clear CMP regs */
+- regmap_write(iep->map, ICSS_IEP_CMP1_REG0, 0);
+- if (iep->plat_data->flags & ICSS_IEP_64BIT_COUNTER_SUPPORT)
+- regmap_write(iep->map, ICSS_IEP_CMP1_REG1, 0);
+-
+- /* Disable sync */
+- regmap_write(iep->map, ICSS_IEP_SYNC_CTRL_REG, 0);
+- }
++ u64 start_ns;
++
++ iep->period = ((u64)req->period.sec * NSEC_PER_SEC) +
++ req->period.nsec;
++ start_ns = ((u64)req->period.sec * NSEC_PER_SEC)
++ + req->period.nsec;
++ icss_iep_update_to_next_boundary(iep, start_ns);
++
++ regmap_write(iep->map, ICSS_IEP_SYNC_PWIDTH_REG,
++ div_u64(ns_width, iep->def_inc));
++ regmap_write(iep->map, ICSS_IEP_SYNC_START_REG,
++ div_u64(ns_start, iep->def_inc));
++ /* Enable Sync in single shot mode */
++ regmap_write(iep->map, ICSS_IEP_SYNC_CTRL_REG,
++ IEP_SYNC_CTRL_SYNC_N_EN(0) | IEP_SYNC_CTRL_SYNC_EN);
++ /* Enable CMP 1 */
++ regmap_update_bits(iep->map, ICSS_IEP_CMP_CFG_REG,
++ IEP_CMP_CFG_CMP_EN(1), IEP_CMP_CFG_CMP_EN(1));
+ }
+
+ return 0;
+@@ -474,7 +487,41 @@ static int icss_iep_perout_enable_hw(struct icss_iep *iep,
+ static int icss_iep_perout_enable(struct icss_iep *iep,
+ struct ptp_perout_request *req, int on)
+ {
+- return -EOPNOTSUPP;
++ int ret = 0;
++
++ if (!on)
++ goto disable;
++
++ /* Reject requests with unsupported flags */
++ if (req->flags & ~(PTP_PEROUT_DUTY_CYCLE |
++ PTP_PEROUT_PHASE))
++ return -EOPNOTSUPP;
++
++ /* Set default "on" time (1ms) for the signal if not passed by the app */
++ if (!(req->flags & PTP_PEROUT_DUTY_CYCLE)) {
++ req->on.sec = 0;
++ req->on.nsec = NSEC_PER_MSEC;
++ }
++
++disable:
++ mutex_lock(&iep->ptp_clk_mutex);
++
++ if (iep->pps_enabled) {
++ ret = -EBUSY;
++ goto exit;
++ }
++
++ if (iep->perout_enabled == !!on)
++ goto exit;
++
++ ret = icss_iep_perout_enable_hw(iep, req, on);
++ if (!ret)
++ iep->perout_enabled = !!on;
++
++exit:
++ mutex_unlock(&iep->ptp_clk_mutex);
++
++ return ret;
+ }
+
+ static void icss_iep_cap_cmp_work(struct work_struct *work)
+@@ -549,10 +596,13 @@ static int icss_iep_pps_enable(struct icss_iep *iep, int on)
+ if (on) {
+ ns = icss_iep_gettime(iep, NULL);
+ ts = ns_to_timespec64(ns);
++ rq.perout.flags = 0;
+ rq.perout.period.sec = 1;
+ rq.perout.period.nsec = 0;
+ rq.perout.start.sec = ts.tv_sec + 2;
+ rq.perout.start.nsec = 0;
++ rq.perout.on.sec = 0;
++ rq.perout.on.nsec = NSEC_PER_MSEC;
+ ret = icss_iep_perout_enable_hw(iep, &rq.perout, on);
+ } else {
+ ret = icss_iep_perout_enable_hw(iep, &rq.perout, on);
+diff --git a/drivers/net/ethernet/wangxun/ngbe/ngbe_main.c b/drivers/net/ethernet/wangxun/ngbe/ngbe_main.c
+index 53aeae2f884b01..1be2a5cc4a83c3 100644
+--- a/drivers/net/ethernet/wangxun/ngbe/ngbe_main.c
++++ b/drivers/net/ethernet/wangxun/ngbe/ngbe_main.c
+@@ -607,7 +607,7 @@ static int ngbe_probe(struct pci_dev *pdev,
+ /* setup the private structure */
+ err = ngbe_sw_init(wx);
+ if (err)
+- goto err_free_mac_table;
++ goto err_pci_release_regions;
+
+ /* check if flash load is done after hw power up */
+ err = wx_check_flash_load(wx, NGBE_SPI_ILDR_STATUS_PERST);
+@@ -701,6 +701,7 @@ static int ngbe_probe(struct pci_dev *pdev,
+ err_clear_interrupt_scheme:
+ wx_clear_interrupt_scheme(wx);
+ err_free_mac_table:
++ kfree(wx->rss_key);
+ kfree(wx->mac_table);
+ err_pci_release_regions:
+ pci_release_selected_regions(pdev,
+diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
+index f7745026803643..7e352837184fad 100644
+--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
++++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
+@@ -559,7 +559,7 @@ static int txgbe_probe(struct pci_dev *pdev,
+ /* setup the private structure */
+ err = txgbe_sw_init(wx);
+ if (err)
+- goto err_free_mac_table;
++ goto err_pci_release_regions;
+
+ /* check if flash load is done after hw power up */
+ err = wx_check_flash_load(wx, TXGBE_SPI_ILDR_STATUS_PERST);
+@@ -717,6 +717,7 @@ static int txgbe_probe(struct pci_dev *pdev,
+ wx_clear_interrupt_scheme(wx);
+ wx_control_hw(wx, false);
+ err_free_mac_table:
++ kfree(wx->rss_key);
+ kfree(wx->mac_table);
+ err_pci_release_regions:
+ pci_release_selected_regions(pdev,
+diff --git a/drivers/net/wireless/ath/ath12k/dp_mon.c b/drivers/net/wireless/ath/ath12k/dp_mon.c
+index 0b089389087d33..b952e79179d011 100644
+--- a/drivers/net/wireless/ath/ath12k/dp_mon.c
++++ b/drivers/net/wireless/ath/ath12k/dp_mon.c
+@@ -2054,7 +2054,7 @@ int ath12k_dp_mon_srng_process(struct ath12k *ar, int mac_id, int *budget,
+ dest_idx = 0;
+ move_next:
+ ath12k_dp_mon_buf_replenish(ab, buf_ring, 1);
+- ath12k_hal_srng_src_get_next_entry(ab, srng);
++ ath12k_hal_srng_dst_get_next_entry(ab, srng);
+ num_buffs_reaped++;
+ }
+
+@@ -2473,7 +2473,7 @@ int ath12k_dp_mon_rx_process_stats(struct ath12k *ar, int mac_id,
+ dest_idx = 0;
+ move_next:
+ ath12k_dp_mon_buf_replenish(ab, buf_ring, 1);
+- ath12k_hal_srng_dst_get_next_entry(ab, srng);
++ ath12k_hal_srng_src_get_next_entry(ab, srng);
+ num_buffs_reaped++;
+ }
+
+diff --git a/drivers/net/wireless/atmel/at76c50x-usb.c b/drivers/net/wireless/atmel/at76c50x-usb.c
+index 504e05ea30f298..97ea7ab0f49102 100644
+--- a/drivers/net/wireless/atmel/at76c50x-usb.c
++++ b/drivers/net/wireless/atmel/at76c50x-usb.c
+@@ -2552,7 +2552,7 @@ static void at76_disconnect(struct usb_interface *interface)
+
+ wiphy_info(priv->hw->wiphy, "disconnecting\n");
+ at76_delete_device(priv);
+- usb_put_dev(priv->udev);
++ usb_put_dev(interface_to_usbdev(interface));
+ dev_info(&interface->dev, "disconnected\n");
+ }
+
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/common.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/common.c
+index cfcf01eb0daa54..f26e4679e4ff02 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/common.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/common.c
+@@ -561,8 +561,10 @@ struct brcmf_mp_device *brcmf_get_module_param(struct device *dev,
+ if (!found) {
+ /* No platform data for this device, try OF and DMI data */
+ brcmf_dmi_probe(settings, chip, chiprev);
+- if (brcmf_of_probe(dev, bus_type, settings) == -EPROBE_DEFER)
++ if (brcmf_of_probe(dev, bus_type, settings) == -EPROBE_DEFER) {
++ kfree(settings);
+ return ERR_PTR(-EPROBE_DEFER);
++ }
+ brcmf_acpi_probe(dev, bus_type, settings);
+ }
+ return settings;
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c b/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c
+index 793514a1852a3d..e37fa5ae97f649 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c
+@@ -147,8 +147,14 @@ static void _iwl_trans_pcie_gen2_stop_device(struct iwl_trans *trans)
+ return;
+
+ if (trans->state >= IWL_TRANS_FW_STARTED &&
+- trans_pcie->fw_reset_handshake)
++ trans_pcie->fw_reset_handshake) {
++ /*
++ * Reset handshake can dump firmware on timeout, but that
++ * should assume that the firmware is already dead.
++ */
++ trans->state = IWL_TRANS_NO_FW;
+ iwl_trans_pcie_fw_reset_handshake(trans);
++ }
+
+ trans_pcie->is_down = true;
+
+diff --git a/drivers/net/wireless/ti/wl1251/tx.c b/drivers/net/wireless/ti/wl1251/tx.c
+index 474b603c121cba..adb4840b048932 100644
+--- a/drivers/net/wireless/ti/wl1251/tx.c
++++ b/drivers/net/wireless/ti/wl1251/tx.c
+@@ -342,8 +342,10 @@ void wl1251_tx_work(struct work_struct *work)
+ while ((skb = skb_dequeue(&wl->tx_queue))) {
+ if (!woken_up) {
+ ret = wl1251_ps_elp_wakeup(wl);
+- if (ret < 0)
++ if (ret < 0) {
++ skb_queue_head(&wl->tx_queue, skb);
+ goto out;
++ }
+ woken_up = true;
+ }
+
+diff --git a/drivers/nvme/target/fc.c b/drivers/nvme/target/fc.c
+index 3ef4beacde3257..7318b736d41417 100644
+--- a/drivers/nvme/target/fc.c
++++ b/drivers/nvme/target/fc.c
+@@ -172,20 +172,6 @@ struct nvmet_fc_tgt_assoc {
+ struct work_struct del_work;
+ };
+
+-
+-static inline int
+-nvmet_fc_iodnum(struct nvmet_fc_ls_iod *iodptr)
+-{
+- return (iodptr - iodptr->tgtport->iod);
+-}
+-
+-static inline int
+-nvmet_fc_fodnum(struct nvmet_fc_fcp_iod *fodptr)
+-{
+- return (fodptr - fodptr->queue->fod);
+-}
+-
+-
+ /*
+ * Association and Connection IDs:
+ *
+diff --git a/drivers/nvme/target/pci-epf.c b/drivers/nvme/target/pci-epf.c
+index 99563648c318f5..5c4c4c1f535d44 100644
+--- a/drivers/nvme/target/pci-epf.c
++++ b/drivers/nvme/target/pci-epf.c
+@@ -1655,16 +1655,17 @@ static int nvmet_pci_epf_process_sq(struct nvmet_pci_epf_ctrl *ctrl,
+ {
+ struct nvmet_pci_epf_iod *iod;
+ int ret, n = 0;
++ u16 head = sq->head;
+
+ sq->tail = nvmet_pci_epf_bar_read32(ctrl, sq->db);
+- while (sq->head != sq->tail && (!ctrl->sq_ab || n < ctrl->sq_ab)) {
++ while (head != sq->tail && (!ctrl->sq_ab || n < ctrl->sq_ab)) {
+ iod = nvmet_pci_epf_alloc_iod(sq);
+ if (!iod)
+ break;
+
+ /* Get the NVMe command submitted by the host. */
+ ret = nvmet_pci_epf_transfer(ctrl, &iod->cmd,
+- sq->pci_addr + sq->head * sq->qes,
++ sq->pci_addr + head * sq->qes,
+ sq->qes, DMA_FROM_DEVICE);
+ if (ret) {
+ /* Not much we can do... */
+@@ -1673,12 +1674,13 @@ static int nvmet_pci_epf_process_sq(struct nvmet_pci_epf_ctrl *ctrl,
+ }
+
+ dev_dbg(ctrl->dev, "SQ[%u]: head %u, tail %u, command %s\n",
+- sq->qid, sq->head, sq->tail,
++ sq->qid, head, sq->tail,
+ nvmet_pci_epf_iod_name(iod));
+
+- sq->head++;
+- if (sq->head == sq->depth)
+- sq->head = 0;
++ head++;
++ if (head == sq->depth)
++ head = 0;
++ WRITE_ONCE(sq->head, head);
+ n++;
+
+ queue_work_on(WORK_CPU_UNBOUND, sq->iod_wq, &iod->work);
+@@ -1772,8 +1774,17 @@ static void nvmet_pci_epf_cq_work(struct work_struct *work)
+ if (!iod)
+ break;
+
+- /* Post the IOD completion entry. */
++ /*
++ * Post the IOD completion entry. If the IOD request was
++ * executed (req->execute() called), the CQE is already
++ * initialized. However, the IOD may have been failed before
++ * that, leaving the CQE not properly initialized. So always
++ * initialize it here.
++ */
+ cqe = &iod->cqe;
++ cqe->sq_head = cpu_to_le16(READ_ONCE(iod->sq->head));
++ cqe->sq_id = cpu_to_le16(iod->sq->qid);
++ cqe->command_id = iod->cmd.common.command_id;
+ cqe->status = cpu_to_le16((iod->status << 1) | cq->phase);
+
+ dev_dbg(ctrl->dev,
+@@ -1814,6 +1825,21 @@ static void nvmet_pci_epf_cq_work(struct work_struct *work)
+ NVMET_PCI_EPF_CQ_RETRY_INTERVAL);
+ }
+
++static void nvmet_pci_epf_clear_ctrl_config(struct nvmet_pci_epf_ctrl *ctrl)
++{
++ struct nvmet_ctrl *tctrl = ctrl->tctrl;
++
++ /* Initialize controller status. */
++ tctrl->csts = 0;
++ ctrl->csts = 0;
++ nvmet_pci_epf_bar_write32(ctrl, NVME_REG_CSTS, ctrl->csts);
++
++ /* Initialize controller configuration and start polling. */
++ tctrl->cc = 0;
++ ctrl->cc = 0;
++ nvmet_pci_epf_bar_write32(ctrl, NVME_REG_CC, ctrl->cc);
++}
++
+ static int nvmet_pci_epf_enable_ctrl(struct nvmet_pci_epf_ctrl *ctrl)
+ {
+ u64 pci_addr, asq, acq;
+@@ -1879,18 +1905,20 @@ static int nvmet_pci_epf_enable_ctrl(struct nvmet_pci_epf_ctrl *ctrl)
+ return 0;
+
+ err:
+- ctrl->csts = 0;
++ nvmet_pci_epf_clear_ctrl_config(ctrl);
+ return -EINVAL;
+ }
+
+-static void nvmet_pci_epf_disable_ctrl(struct nvmet_pci_epf_ctrl *ctrl)
++static void nvmet_pci_epf_disable_ctrl(struct nvmet_pci_epf_ctrl *ctrl,
++ bool shutdown)
+ {
+ int qid;
+
+ if (!ctrl->enabled)
+ return;
+
+- dev_info(ctrl->dev, "Disabling controller\n");
++ dev_info(ctrl->dev, "%s controller\n",
++ shutdown ? "Shutting down" : "Disabling");
+
+ ctrl->enabled = false;
+ cancel_delayed_work_sync(&ctrl->poll_sqs);
+@@ -1907,6 +1935,11 @@ static void nvmet_pci_epf_disable_ctrl(struct nvmet_pci_epf_ctrl *ctrl)
+ nvmet_pci_epf_delete_cq(ctrl->tctrl, 0);
+
+ ctrl->csts &= ~NVME_CSTS_RDY;
++ if (shutdown) {
++ ctrl->csts |= NVME_CSTS_SHST_CMPLT;
++ ctrl->cc &= ~NVME_CC_ENABLE;
++ nvmet_pci_epf_bar_write32(ctrl, NVME_REG_CC, ctrl->cc);
++ }
+ }
+
+ static void nvmet_pci_epf_poll_cc_work(struct work_struct *work)
+@@ -1933,12 +1966,10 @@ static void nvmet_pci_epf_poll_cc_work(struct work_struct *work)
+ }
+
+ if (!nvmet_cc_en(new_cc) && nvmet_cc_en(old_cc))
+- nvmet_pci_epf_disable_ctrl(ctrl);
++ nvmet_pci_epf_disable_ctrl(ctrl, false);
+
+- if (nvmet_cc_shn(new_cc) && !nvmet_cc_shn(old_cc)) {
+- nvmet_pci_epf_disable_ctrl(ctrl);
+- ctrl->csts |= NVME_CSTS_SHST_CMPLT;
+- }
++ if (nvmet_cc_shn(new_cc) && !nvmet_cc_shn(old_cc))
++ nvmet_pci_epf_disable_ctrl(ctrl, true);
+
+ if (!nvmet_cc_shn(new_cc) && nvmet_cc_shn(old_cc))
+ ctrl->csts &= ~NVME_CSTS_SHST_CMPLT;
+@@ -1977,16 +2008,10 @@ static void nvmet_pci_epf_init_bar(struct nvmet_pci_epf_ctrl *ctrl)
+ /* Clear Controller Memory Buffer Supported (CMBS). */
+ ctrl->cap &= ~(0x1ULL << 57);
+
+- /* Controller configuration. */
+- ctrl->cc = tctrl->cc & (~NVME_CC_ENABLE);
+-
+- /* Controller status. */
+- ctrl->csts = ctrl->tctrl->csts;
+-
+ nvmet_pci_epf_bar_write64(ctrl, NVME_REG_CAP, ctrl->cap);
+ nvmet_pci_epf_bar_write32(ctrl, NVME_REG_VS, tctrl->subsys->ver);
+- nvmet_pci_epf_bar_write32(ctrl, NVME_REG_CSTS, ctrl->csts);
+- nvmet_pci_epf_bar_write32(ctrl, NVME_REG_CC, ctrl->cc);
++
++ nvmet_pci_epf_clear_ctrl_config(ctrl);
+ }
+
+ static int nvmet_pci_epf_create_ctrl(struct nvmet_pci_epf *nvme_epf,
+@@ -2091,7 +2116,8 @@ static void nvmet_pci_epf_stop_ctrl(struct nvmet_pci_epf_ctrl *ctrl)
+ {
+ cancel_delayed_work_sync(&ctrl->poll_cc);
+
+- nvmet_pci_epf_disable_ctrl(ctrl);
++ nvmet_pci_epf_disable_ctrl(ctrl, false);
++ nvmet_pci_epf_clear_ctrl_config(ctrl);
+ }
+
+ static void nvmet_pci_epf_destroy_ctrl(struct nvmet_pci_epf_ctrl *ctrl)
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index 3152750aab2fc8..f489256cd9deda 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -5419,8 +5419,6 @@ static bool pci_bus_resettable(struct pci_bus *bus)
+ return false;
+
+ list_for_each_entry(dev, &bus->devices, bus_list) {
+- if (!pci_reset_supported(dev))
+- return false;
+ if (dev->dev_flags & PCI_DEV_FLAGS_NO_BUS_RESET ||
+ (dev->subordinate && !pci_bus_resettable(dev->subordinate)))
+ return false;
+@@ -5497,8 +5495,6 @@ static bool pci_slot_resettable(struct pci_slot *slot)
+ list_for_each_entry(dev, &slot->bus->devices, bus_list) {
+ if (!dev->slot || dev->slot != slot)
+ continue;
+- if (!pci_reset_supported(dev))
+- return false;
+ if (dev->dev_flags & PCI_DEV_FLAGS_NO_BUS_RESET ||
+ (dev->subordinate && !pci_bus_resettable(dev->subordinate)))
+ return false;
+diff --git a/drivers/platform/mellanox/mlxbf-bootctl.c b/drivers/platform/mellanox/mlxbf-bootctl.c
+index 9cae07348d5eb4..49d4394e9f8a3c 100644
+--- a/drivers/platform/mellanox/mlxbf-bootctl.c
++++ b/drivers/platform/mellanox/mlxbf-bootctl.c
+@@ -332,9 +332,9 @@ static ssize_t secure_boot_fuse_state_show(struct device *dev,
+ else
+ status = valid ? "Invalid" : "Free";
+ }
+- buf_len += sysfs_emit(buf + buf_len, "%d:%s ", key, status);
++ buf_len += sysfs_emit_at(buf, buf_len, "%d:%s ", key, status);
+ }
+- buf_len += sysfs_emit(buf + buf_len, "\n");
++ buf_len += sysfs_emit_at(buf, buf_len, "\n");
+
+ return buf_len;
+ }
+diff --git a/drivers/platform/x86/amd/pmf/auto-mode.c b/drivers/platform/x86/amd/pmf/auto-mode.c
+index 02ff68be10d012..a184922bba8d65 100644
+--- a/drivers/platform/x86/amd/pmf/auto-mode.c
++++ b/drivers/platform/x86/amd/pmf/auto-mode.c
+@@ -120,9 +120,9 @@ static void amd_pmf_set_automode(struct amd_pmf_dev *dev, int idx,
+ amd_pmf_send_cmd(dev, SET_SPPT_APU_ONLY, false, pwr_ctrl->sppt_apu_only, NULL);
+ amd_pmf_send_cmd(dev, SET_STT_MIN_LIMIT, false, pwr_ctrl->stt_min, NULL);
+ amd_pmf_send_cmd(dev, SET_STT_LIMIT_APU, false,
+- pwr_ctrl->stt_skin_temp[STT_TEMP_APU], NULL);
++ fixp_q88_fromint(pwr_ctrl->stt_skin_temp[STT_TEMP_APU]), NULL);
+ amd_pmf_send_cmd(dev, SET_STT_LIMIT_HS2, false,
+- pwr_ctrl->stt_skin_temp[STT_TEMP_HS2], NULL);
++ fixp_q88_fromint(pwr_ctrl->stt_skin_temp[STT_TEMP_HS2]), NULL);
+
+ if (is_apmf_func_supported(dev, APMF_FUNC_SET_FAN_IDX))
+ apmf_update_fan_idx(dev, config_store.mode_set[idx].fan_control.manual,
+diff --git a/drivers/platform/x86/amd/pmf/cnqf.c b/drivers/platform/x86/amd/pmf/cnqf.c
+index bc8899e15c914b..207a0b33d8d368 100644
+--- a/drivers/platform/x86/amd/pmf/cnqf.c
++++ b/drivers/platform/x86/amd/pmf/cnqf.c
+@@ -81,10 +81,10 @@ static int amd_pmf_set_cnqf(struct amd_pmf_dev *dev, int src, int idx,
+ amd_pmf_send_cmd(dev, SET_SPPT, false, pc->sppt, NULL);
+ amd_pmf_send_cmd(dev, SET_SPPT_APU_ONLY, false, pc->sppt_apu_only, NULL);
+ amd_pmf_send_cmd(dev, SET_STT_MIN_LIMIT, false, pc->stt_min, NULL);
+- amd_pmf_send_cmd(dev, SET_STT_LIMIT_APU, false, pc->stt_skin_temp[STT_TEMP_APU],
+- NULL);
+- amd_pmf_send_cmd(dev, SET_STT_LIMIT_HS2, false, pc->stt_skin_temp[STT_TEMP_HS2],
+- NULL);
++ amd_pmf_send_cmd(dev, SET_STT_LIMIT_APU, false,
++ fixp_q88_fromint(pc->stt_skin_temp[STT_TEMP_APU]), NULL);
++ amd_pmf_send_cmd(dev, SET_STT_LIMIT_HS2, false,
++ fixp_q88_fromint(pc->stt_skin_temp[STT_TEMP_HS2]), NULL);
+
+ if (is_apmf_func_supported(dev, APMF_FUNC_SET_FAN_IDX))
+ apmf_update_fan_idx(dev,
+diff --git a/drivers/platform/x86/amd/pmf/core.c b/drivers/platform/x86/amd/pmf/core.c
+index a2cb2d5544f5b3..96821101ec773c 100644
+--- a/drivers/platform/x86/amd/pmf/core.c
++++ b/drivers/platform/x86/amd/pmf/core.c
+@@ -176,6 +176,20 @@ static void __maybe_unused amd_pmf_dump_registers(struct amd_pmf_dev *dev)
+ dev_dbg(dev->dev, "AMD_PMF_REGISTER_MESSAGE:%x\n", value);
+ }
+
++/**
++ * fixp_q88_fromint: Convert integer to Q8.8
++ * @val: input value
++ *
++ * Converts an integer into binary fixed point format where 8 bits
++ * are used for integer and 8 bits are used for the decimal.
++ *
++ * Return: unsigned integer converted to Q8.8 format
++ */
++u32 fixp_q88_fromint(u32 val)
++{
++ return val << 8;
++}
++
+ int amd_pmf_send_cmd(struct amd_pmf_dev *dev, u8 message, bool get, u32 arg, u32 *data)
+ {
+ int rc;
+diff --git a/drivers/platform/x86/amd/pmf/pmf.h b/drivers/platform/x86/amd/pmf/pmf.h
+index e6bdee68ccf347..45b60238d5277f 100644
+--- a/drivers/platform/x86/amd/pmf/pmf.h
++++ b/drivers/platform/x86/amd/pmf/pmf.h
+@@ -777,6 +777,7 @@ int apmf_install_handler(struct amd_pmf_dev *pmf_dev);
+ int apmf_os_power_slider_update(struct amd_pmf_dev *dev, u8 flag);
+ int amd_pmf_set_dram_addr(struct amd_pmf_dev *dev, bool alloc_buffer);
+ int amd_pmf_notify_sbios_heartbeat_event_v2(struct amd_pmf_dev *dev, u8 flag);
++u32 fixp_q88_fromint(u32 val);
+
+ /* SPS Layer */
+ int amd_pmf_get_pprof_modes(struct amd_pmf_dev *pmf);
+diff --git a/drivers/platform/x86/amd/pmf/sps.c b/drivers/platform/x86/amd/pmf/sps.c
+index d3083383f11fbf..49e14ca94a9e77 100644
+--- a/drivers/platform/x86/amd/pmf/sps.c
++++ b/drivers/platform/x86/amd/pmf/sps.c
+@@ -198,9 +198,11 @@ static void amd_pmf_update_slider_v2(struct amd_pmf_dev *dev, int idx)
+ amd_pmf_send_cmd(dev, SET_STT_MIN_LIMIT, false,
+ apts_config_store.val[idx].stt_min_limit, NULL);
+ amd_pmf_send_cmd(dev, SET_STT_LIMIT_APU, false,
+- apts_config_store.val[idx].stt_skin_temp_limit_apu, NULL);
++ fixp_q88_fromint(apts_config_store.val[idx].stt_skin_temp_limit_apu),
++ NULL);
+ amd_pmf_send_cmd(dev, SET_STT_LIMIT_HS2, false,
+- apts_config_store.val[idx].stt_skin_temp_limit_hs2, NULL);
++ fixp_q88_fromint(apts_config_store.val[idx].stt_skin_temp_limit_hs2),
++ NULL);
+ }
+
+ void amd_pmf_update_slider(struct amd_pmf_dev *dev, bool op, int idx,
+@@ -217,9 +219,11 @@ void amd_pmf_update_slider(struct amd_pmf_dev *dev, bool op, int idx,
+ amd_pmf_send_cmd(dev, SET_STT_MIN_LIMIT, false,
+ config_store.prop[src][idx].stt_min, NULL);
+ amd_pmf_send_cmd(dev, SET_STT_LIMIT_APU, false,
+- config_store.prop[src][idx].stt_skin_temp[STT_TEMP_APU], NULL);
++ fixp_q88_fromint(config_store.prop[src][idx].stt_skin_temp[STT_TEMP_APU]),
++ NULL);
+ amd_pmf_send_cmd(dev, SET_STT_LIMIT_HS2, false,
+- config_store.prop[src][idx].stt_skin_temp[STT_TEMP_HS2], NULL);
++ fixp_q88_fromint(config_store.prop[src][idx].stt_skin_temp[STT_TEMP_HS2]),
++ NULL);
+ } else if (op == SLIDER_OP_GET) {
+ amd_pmf_send_cmd(dev, GET_SPL, true, ARG_NONE, &table->prop[src][idx].spl);
+ amd_pmf_send_cmd(dev, GET_FPPT, true, ARG_NONE, &table->prop[src][idx].fppt);
+diff --git a/drivers/platform/x86/amd/pmf/tee-if.c b/drivers/platform/x86/amd/pmf/tee-if.c
+index a1e43873a07b08..14b99d8b63d2fc 100644
+--- a/drivers/platform/x86/amd/pmf/tee-if.c
++++ b/drivers/platform/x86/amd/pmf/tee-if.c
+@@ -123,7 +123,8 @@ static void amd_pmf_apply_policies(struct amd_pmf_dev *dev, struct ta_pmf_enact_
+
+ case PMF_POLICY_STT_SKINTEMP_APU:
+ if (dev->prev_data->stt_skintemp_apu != val) {
+- amd_pmf_send_cmd(dev, SET_STT_LIMIT_APU, false, val, NULL);
++ amd_pmf_send_cmd(dev, SET_STT_LIMIT_APU, false,
++ fixp_q88_fromint(val), NULL);
+ dev_dbg(dev->dev, "update STT_SKINTEMP_APU: %u\n", val);
+ dev->prev_data->stt_skintemp_apu = val;
+ }
+@@ -131,7 +132,8 @@ static void amd_pmf_apply_policies(struct amd_pmf_dev *dev, struct ta_pmf_enact_
+
+ case PMF_POLICY_STT_SKINTEMP_HS2:
+ if (dev->prev_data->stt_skintemp_hs2 != val) {
+- amd_pmf_send_cmd(dev, SET_STT_LIMIT_HS2, false, val, NULL);
++ amd_pmf_send_cmd(dev, SET_STT_LIMIT_HS2, false,
++ fixp_q88_fromint(val), NULL);
+ dev_dbg(dev->dev, "update STT_SKINTEMP_HS2: %u\n", val);
+ dev->prev_data->stt_skintemp_hs2 = val;
+ }
+diff --git a/drivers/platform/x86/asus-laptop.c b/drivers/platform/x86/asus-laptop.c
+index d460dd194f1965..a0a411b4f2d6d8 100644
+--- a/drivers/platform/x86/asus-laptop.c
++++ b/drivers/platform/x86/asus-laptop.c
+@@ -426,11 +426,14 @@ static int asus_pega_lucid_set(struct asus_laptop *asus, int unit, bool enable)
+
+ static int pega_acc_axis(struct asus_laptop *asus, int curr, char *method)
+ {
++ unsigned long long val = (unsigned long long)curr;
++ acpi_status status;
+ int i, delta;
+- unsigned long long val;
+- for (i = 0; i < PEGA_ACC_RETRIES; i++) {
+- acpi_evaluate_integer(asus->handle, method, NULL, &val);
+
++ for (i = 0; i < PEGA_ACC_RETRIES; i++) {
++ status = acpi_evaluate_integer(asus->handle, method, NULL, &val);
++ if (ACPI_FAILURE(status))
++ continue;
+ /* The output is noisy. From reading the ASL
+ * disassembly, timeout errors are returned with 1's
+ * in the high word, and the lack of locking around
+diff --git a/drivers/platform/x86/dell/alienware-wmi.c b/drivers/platform/x86/dell/alienware-wmi.c
+index e252e0cf47efee..1426ea8e4f1948 100644
+--- a/drivers/platform/x86/dell/alienware-wmi.c
++++ b/drivers/platform/x86/dell/alienware-wmi.c
+@@ -214,6 +214,15 @@ static int __init dmi_matched(const struct dmi_system_id *dmi)
+ }
+
+ static const struct dmi_system_id alienware_quirks[] __initconst = {
++ {
++ .callback = dmi_matched,
++ .ident = "Alienware Area-51m R2",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Alienware"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "Alienware Area-51m R2"),
++ },
++ .driver_data = &quirk_x_series,
++ },
+ {
+ .callback = dmi_matched,
+ .ident = "Alienware ASM100",
+@@ -241,6 +250,15 @@ static const struct dmi_system_id alienware_quirks[] __initconst = {
+ },
+ .driver_data = &quirk_asm201,
+ },
++ {
++ .callback = dmi_matched,
++ .ident = "Alienware m16 R1",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Alienware"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "Alienware m16 R1"),
++ },
++ .driver_data = &quirk_g_series,
++ },
+ {
+ .callback = dmi_matched,
+ .ident = "Alienware m16 R1 AMD",
+@@ -248,7 +266,7 @@ static const struct dmi_system_id alienware_quirks[] __initconst = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Alienware"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "Alienware m16 R1 AMD"),
+ },
+- .driver_data = &quirk_x_series,
++ .driver_data = &quirk_g_series,
+ },
+ {
+ .callback = dmi_matched,
+@@ -259,6 +277,15 @@ static const struct dmi_system_id alienware_quirks[] __initconst = {
+ },
+ .driver_data = &quirk_x_series,
+ },
++ {
++ .callback = dmi_matched,
++ .ident = "Alienware m16 R2",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Alienware"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "Alienware m16 R2"),
++ },
++ .driver_data = &quirk_x_series,
++ },
+ {
+ .callback = dmi_matched,
+ .ident = "Alienware m18 R2",
+@@ -277,6 +304,15 @@ static const struct dmi_system_id alienware_quirks[] __initconst = {
+ },
+ .driver_data = &quirk_x_series,
+ },
++ {
++ .callback = dmi_matched,
++ .ident = "Alienware x15 R2",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Alienware"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "Alienware x15 R2"),
++ },
++ .driver_data = &quirk_x_series,
++ },
+ {
+ .callback = dmi_matched,
+ .ident = "Alienware x17 R2",
+@@ -340,6 +376,15 @@ static const struct dmi_system_id alienware_quirks[] __initconst = {
+ },
+ .driver_data = &quirk_g_series,
+ },
++ {
++ .callback = dmi_matched,
++ .ident = "Dell Inc. G16 7630",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "Dell G16 7630"),
++ },
++ .driver_data = &quirk_g_series,
++ },
+ {
+ .callback = dmi_matched,
+ .ident = "Dell Inc. G3 3500",
+@@ -367,6 +412,15 @@ static const struct dmi_system_id alienware_quirks[] __initconst = {
+ },
+ .driver_data = &quirk_g_series,
+ },
++ {
++ .callback = dmi_matched,
++ .ident = "Dell Inc. G5 5505",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "G5 5505"),
++ },
++ .driver_data = &quirk_g_series,
++ },
+ {
+ .callback = dmi_matched,
+ .ident = "Dell Inc. Inspiron 5675",
+diff --git a/drivers/platform/x86/msi-wmi-platform.c b/drivers/platform/x86/msi-wmi-platform.c
+index 9b5c7f8c79b0dd..dc5e9878cb6822 100644
+--- a/drivers/platform/x86/msi-wmi-platform.c
++++ b/drivers/platform/x86/msi-wmi-platform.c
+@@ -10,6 +10,7 @@
+ #include <linux/acpi.h>
+ #include <linux/bits.h>
+ #include <linux/bitfield.h>
++#include <linux/cleanup.h>
+ #include <linux/debugfs.h>
+ #include <linux/device.h>
+ #include <linux/device/driver.h>
+@@ -17,6 +18,7 @@
+ #include <linux/hwmon.h>
+ #include <linux/kernel.h>
+ #include <linux/module.h>
++#include <linux/mutex.h>
+ #include <linux/printk.h>
+ #include <linux/rwsem.h>
+ #include <linux/types.h>
+@@ -76,8 +78,13 @@ enum msi_wmi_platform_method {
+ MSI_PLATFORM_GET_WMI = 0x1d,
+ };
+
+-struct msi_wmi_platform_debugfs_data {
++struct msi_wmi_platform_data {
+ struct wmi_device *wdev;
++ struct mutex wmi_lock; /* Necessary when calling WMI methods */
++};
++
++struct msi_wmi_platform_debugfs_data {
++ struct msi_wmi_platform_data *data;
+ enum msi_wmi_platform_method method;
+ struct rw_semaphore buffer_lock; /* Protects debugfs buffer */
+ size_t length;
+@@ -132,8 +139,9 @@ static int msi_wmi_platform_parse_buffer(union acpi_object *obj, u8 *output, siz
+ return 0;
+ }
+
+-static int msi_wmi_platform_query(struct wmi_device *wdev, enum msi_wmi_platform_method method,
+- u8 *input, size_t input_length, u8 *output, size_t output_length)
++static int msi_wmi_platform_query(struct msi_wmi_platform_data *data,
++ enum msi_wmi_platform_method method, u8 *input,
++ size_t input_length, u8 *output, size_t output_length)
+ {
+ struct acpi_buffer out = { ACPI_ALLOCATE_BUFFER, NULL };
+ struct acpi_buffer in = {
+@@ -147,9 +155,15 @@ static int msi_wmi_platform_query(struct wmi_device *wdev, enum msi_wmi_platform
+ if (!input_length || !output_length)
+ return -EINVAL;
+
+- status = wmidev_evaluate_method(wdev, 0x0, method, &in, &out);
+- if (ACPI_FAILURE(status))
+- return -EIO;
++ /*
++ * The ACPI control method responsible for handling the WMI method calls
++ * is not thread-safe. Because of this we have to do the locking ourselves.
++ */
++ scoped_guard(mutex, &data->wmi_lock) {
++ status = wmidev_evaluate_method(data->wdev, 0x0, method, &in, &out);
++ if (ACPI_FAILURE(status))
++ return -EIO;
++ }
+
+ obj = out.pointer;
+ if (!obj)
+@@ -170,22 +184,22 @@ static umode_t msi_wmi_platform_is_visible(const void *drvdata, enum hwmon_senso
+ static int msi_wmi_platform_read(struct device *dev, enum hwmon_sensor_types type, u32 attr,
+ int channel, long *val)
+ {
+- struct wmi_device *wdev = dev_get_drvdata(dev);
++ struct msi_wmi_platform_data *data = dev_get_drvdata(dev);
+ u8 input[32] = { 0 };
+ u8 output[32];
+- u16 data;
++ u16 value;
+ int ret;
+
+- ret = msi_wmi_platform_query(wdev, MSI_PLATFORM_GET_FAN, input, sizeof(input), output,
++ ret = msi_wmi_platform_query(data, MSI_PLATFORM_GET_FAN, input, sizeof(input), output,
+ sizeof(output));
+ if (ret < 0)
+ return ret;
+
+- data = get_unaligned_be16(&output[channel * 2 + 1]);
+- if (!data)
++ value = get_unaligned_be16(&output[channel * 2 + 1]);
++ if (!value)
+ *val = 0;
+ else
+- *val = 480000 / data;
++ *val = 480000 / value;
+
+ return 0;
+ }
+@@ -231,7 +245,7 @@ static ssize_t msi_wmi_platform_write(struct file *fp, const char __user *input,
+ return ret;
+
+ down_write(&data->buffer_lock);
+- ret = msi_wmi_platform_query(data->wdev, data->method, payload, data->length, data->buffer,
++ ret = msi_wmi_platform_query(data->data, data->method, payload, data->length, data->buffer,
+ data->length);
+ up_write(&data->buffer_lock);
+
+@@ -277,17 +291,17 @@ static void msi_wmi_platform_debugfs_remove(void *data)
+ debugfs_remove_recursive(dir);
+ }
+
+-static void msi_wmi_platform_debugfs_add(struct wmi_device *wdev, struct dentry *dir,
++static void msi_wmi_platform_debugfs_add(struct msi_wmi_platform_data *drvdata, struct dentry *dir,
+ const char *name, enum msi_wmi_platform_method method)
+ {
+ struct msi_wmi_platform_debugfs_data *data;
+ struct dentry *entry;
+
+- data = devm_kzalloc(&wdev->dev, sizeof(*data), GFP_KERNEL);
++ data = devm_kzalloc(&drvdata->wdev->dev, sizeof(*data), GFP_KERNEL);
+ if (!data)
+ return;
+
+- data->wdev = wdev;
++ data->data = drvdata;
+ data->method = method;
+ init_rwsem(&data->buffer_lock);
+
+@@ -298,82 +312,82 @@ static void msi_wmi_platform_debugfs_add(struct wmi_device *wdev, struct dentry
+
+ entry = debugfs_create_file(name, 0600, dir, data, &msi_wmi_platform_debugfs_fops);
+ if (IS_ERR(entry))
+- devm_kfree(&wdev->dev, data);
++ devm_kfree(&drvdata->wdev->dev, data);
+ }
+
+-static void msi_wmi_platform_debugfs_init(struct wmi_device *wdev)
++static void msi_wmi_platform_debugfs_init(struct msi_wmi_platform_data *data)
+ {
+ struct dentry *dir;
+ char dir_name[64];
+ int ret, method;
+
+- scnprintf(dir_name, ARRAY_SIZE(dir_name), "%s-%s", DRIVER_NAME, dev_name(&wdev->dev));
++ scnprintf(dir_name, ARRAY_SIZE(dir_name), "%s-%s", DRIVER_NAME, dev_name(&data->wdev->dev));
+
+ dir = debugfs_create_dir(dir_name, NULL);
+ if (IS_ERR(dir))
+ return;
+
+- ret = devm_add_action_or_reset(&wdev->dev, msi_wmi_platform_debugfs_remove, dir);
++ ret = devm_add_action_or_reset(&data->wdev->dev, msi_wmi_platform_debugfs_remove, dir);
+ if (ret < 0)
+ return;
+
+ for (method = MSI_PLATFORM_GET_PACKAGE; method <= MSI_PLATFORM_GET_WMI; method++)
+- msi_wmi_platform_debugfs_add(wdev, dir, msi_wmi_platform_debugfs_names[method - 1],
++ msi_wmi_platform_debugfs_add(data, dir, msi_wmi_platform_debugfs_names[method - 1],
+ method);
+ }
+
+-static int msi_wmi_platform_hwmon_init(struct wmi_device *wdev)
++static int msi_wmi_platform_hwmon_init(struct msi_wmi_platform_data *data)
+ {
+ struct device *hdev;
+
+- hdev = devm_hwmon_device_register_with_info(&wdev->dev, "msi_wmi_platform", wdev,
++ hdev = devm_hwmon_device_register_with_info(&data->wdev->dev, "msi_wmi_platform", data,
+ &msi_wmi_platform_chip_info, NULL);
+
+ return PTR_ERR_OR_ZERO(hdev);
+ }
+
+-static int msi_wmi_platform_ec_init(struct wmi_device *wdev)
++static int msi_wmi_platform_ec_init(struct msi_wmi_platform_data *data)
+ {
+ u8 input[32] = { 0 };
+ u8 output[32];
+ u8 flags;
+ int ret;
+
+- ret = msi_wmi_platform_query(wdev, MSI_PLATFORM_GET_EC, input, sizeof(input), output,
++ ret = msi_wmi_platform_query(data, MSI_PLATFORM_GET_EC, input, sizeof(input), output,
+ sizeof(output));
+ if (ret < 0)
+ return ret;
+
+ flags = output[MSI_PLATFORM_EC_FLAGS_OFFSET];
+
+- dev_dbg(&wdev->dev, "EC RAM version %lu.%lu\n",
++ dev_dbg(&data->wdev->dev, "EC RAM version %lu.%lu\n",
+ FIELD_GET(MSI_PLATFORM_EC_MAJOR_MASK, flags),
+ FIELD_GET(MSI_PLATFORM_EC_MINOR_MASK, flags));
+- dev_dbg(&wdev->dev, "EC firmware version %.28s\n",
++ dev_dbg(&data->wdev->dev, "EC firmware version %.28s\n",
+ &output[MSI_PLATFORM_EC_VERSION_OFFSET]);
+
+ if (!(flags & MSI_PLATFORM_EC_IS_TIGERLAKE)) {
+ if (!force)
+ return -ENODEV;
+
+- dev_warn(&wdev->dev, "Loading on a non-Tigerlake platform\n");
++ dev_warn(&data->wdev->dev, "Loading on a non-Tigerlake platform\n");
+ }
+
+ return 0;
+ }
+
+-static int msi_wmi_platform_init(struct wmi_device *wdev)
++static int msi_wmi_platform_init(struct msi_wmi_platform_data *data)
+ {
+ u8 input[32] = { 0 };
+ u8 output[32];
+ int ret;
+
+- ret = msi_wmi_platform_query(wdev, MSI_PLATFORM_GET_WMI, input, sizeof(input), output,
++ ret = msi_wmi_platform_query(data, MSI_PLATFORM_GET_WMI, input, sizeof(input), output,
+ sizeof(output));
+ if (ret < 0)
+ return ret;
+
+- dev_dbg(&wdev->dev, "WMI interface version %u.%u\n",
++ dev_dbg(&data->wdev->dev, "WMI interface version %u.%u\n",
+ output[MSI_PLATFORM_WMI_MAJOR_OFFSET],
+ output[MSI_PLATFORM_WMI_MINOR_OFFSET]);
+
+@@ -381,7 +395,8 @@ static int msi_wmi_platform_init(struct wmi_device *wdev)
+ if (!force)
+ return -ENODEV;
+
+- dev_warn(&wdev->dev, "Loading despite unsupported WMI interface version (%u.%u)\n",
++ dev_warn(&data->wdev->dev,
++ "Loading despite unsupported WMI interface version (%u.%u)\n",
+ output[MSI_PLATFORM_WMI_MAJOR_OFFSET],
+ output[MSI_PLATFORM_WMI_MINOR_OFFSET]);
+ }
+@@ -391,19 +406,31 @@ static int msi_wmi_platform_init(struct wmi_device *wdev)
+
+ static int msi_wmi_platform_probe(struct wmi_device *wdev, const void *context)
+ {
++ struct msi_wmi_platform_data *data;
+ int ret;
+
+- ret = msi_wmi_platform_init(wdev);
++ data = devm_kzalloc(&wdev->dev, sizeof(*data), GFP_KERNEL);
++ if (!data)
++ return -ENOMEM;
++
++ data->wdev = wdev;
++ dev_set_drvdata(&wdev->dev, data);
++
++ ret = devm_mutex_init(&wdev->dev, &data->wmi_lock);
++ if (ret < 0)
++ return ret;
++
++ ret = msi_wmi_platform_init(data);
+ if (ret < 0)
+ return ret;
+
+- ret = msi_wmi_platform_ec_init(wdev);
++ ret = msi_wmi_platform_ec_init(data);
+ if (ret < 0)
+ return ret;
+
+- msi_wmi_platform_debugfs_init(wdev);
++ msi_wmi_platform_debugfs_init(data);
+
+- return msi_wmi_platform_hwmon_init(wdev);
++ return msi_wmi_platform_hwmon_init(data);
+ }
+
+ static const struct wmi_device_id msi_wmi_platform_id_table[] = {
+diff --git a/drivers/ptp/ptp_ocp.c b/drivers/ptp/ptp_ocp.c
+index 4a87af0980d695..4b7344e1816e49 100644
+--- a/drivers/ptp/ptp_ocp.c
++++ b/drivers/ptp/ptp_ocp.c
+@@ -2067,6 +2067,7 @@ ptp_ocp_signal_set(struct ptp_ocp *bp, int gen, struct ptp_ocp_signal *s)
+ if (!s->start) {
+ /* roundup() does not work on 32-bit systems */
+ s->start = DIV64_U64_ROUND_UP(start_ns, s->period);
++ s->start *= s->period;
+ s->start = ktime_add(s->start, s->phase);
+ }
+
+diff --git a/drivers/ras/amd/atl/internal.h b/drivers/ras/amd/atl/internal.h
+index f9be26d2534846..d096b58cd0ae97 100644
+--- a/drivers/ras/amd/atl/internal.h
++++ b/drivers/ras/amd/atl/internal.h
+@@ -362,4 +362,7 @@ static inline void atl_debug_on_bad_intlv_mode(struct addr_ctx *ctx)
+ atl_debug(ctx, "Unrecognized interleave mode: %u", ctx->map.intlv_mode);
+ }
+
++#define MI300_UMC_MCA_COL GENMASK(5, 1)
++#define MI300_UMC_MCA_ROW13 BIT(23)
++
+ #endif /* __AMD_ATL_INTERNAL_H__ */
+diff --git a/drivers/ras/amd/atl/umc.c b/drivers/ras/amd/atl/umc.c
+index dc8aa12f63c811..6e072b7667e98b 100644
+--- a/drivers/ras/amd/atl/umc.c
++++ b/drivers/ras/amd/atl/umc.c
+@@ -229,7 +229,6 @@ int get_umc_info_mi300(void)
+ * Additionally, the PC and Bank bits may be hashed. This must be accounted for before
+ * reconstructing the normalized address.
+ */
+-#define MI300_UMC_MCA_COL GENMASK(5, 1)
+ #define MI300_UMC_MCA_BANK GENMASK(9, 6)
+ #define MI300_UMC_MCA_ROW GENMASK(24, 10)
+ #define MI300_UMC_MCA_PC BIT(25)
+@@ -320,7 +319,7 @@ static unsigned long convert_dram_to_norm_addr_mi300(unsigned long addr)
+ * See amd_atl::convert_dram_to_norm_addr_mi300() for MI300 address formats.
+ */
+ #define MI300_NUM_COL BIT(HWEIGHT(MI300_UMC_MCA_COL))
+-static void retire_row_mi300(struct atl_err *a_err)
++static void _retire_row_mi300(struct atl_err *a_err)
+ {
+ unsigned long addr;
+ struct page *p;
+@@ -351,6 +350,22 @@ static void retire_row_mi300(struct atl_err *a_err)
+ }
+ }
+
++/*
++ * In addition to the column bits, the row[13] bit should also be included when
++ * calculating addresses affected by a physical row.
++ *
++ * Instead of running through another loop over a single bit, just run through
++ * the column bits twice and flip the row[13] bit in-between.
++ *
++ * See MI300_UMC_MCA_ROW for the row bits in MCA_ADDR_UMC value.
++ */
++static void retire_row_mi300(struct atl_err *a_err)
++{
++ _retire_row_mi300(a_err);
++ a_err->addr ^= MI300_UMC_MCA_ROW13;
++ _retire_row_mi300(a_err);
++}
++
+ void amd_retire_dram_row(struct atl_err *a_err)
+ {
+ if (df_cfg.rev == DF4p5 && df_cfg.flags.heterogeneous)
+diff --git a/drivers/ras/amd/fmpm.c b/drivers/ras/amd/fmpm.c
+index 90de737fbc9097..8877c6ff64c468 100644
+--- a/drivers/ras/amd/fmpm.c
++++ b/drivers/ras/amd/fmpm.c
+@@ -250,6 +250,13 @@ static bool rec_has_valid_entries(struct fru_rec *rec)
+ return true;
+ }
+
++/*
++ * Row retirement is done on MI300 systems, and some bits are 'don't
++ * care' for comparing addresses with unique physical rows. This
++ * includes all column bits and the row[13] bit.
++ */
++#define MASK_ADDR(addr) ((addr) & ~(MI300_UMC_MCA_ROW13 | MI300_UMC_MCA_COL))
++
+ static bool fpds_equal(struct cper_fru_poison_desc *old, struct cper_fru_poison_desc *new)
+ {
+ /*
+@@ -258,7 +265,7 @@ static bool fpds_equal(struct cper_fru_poison_desc *old, struct cper_fru_poison_
+ *
+ * Also, order the checks from most->least likely to fail to shortcut the code.
+ */
+- if (old->addr != new->addr)
++ if (MASK_ADDR(old->addr) != MASK_ADDR(new->addr))
+ return false;
+
+ if (old->hw_id != new->hw_id)
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
+index 6e7f99fcc8247d..3af991cad07eb3 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
+@@ -2501,6 +2501,7 @@ static void prep_ata_v2_hw(struct hisi_hba *hisi_hba,
+ struct hisi_sas_port *port = to_hisi_sas_port(sas_port);
+ struct sas_ata_task *ata_task = &task->ata_task;
+ struct sas_tmf_task *tmf = slot->tmf;
++ int phy_id;
+ u8 *buf_cmd;
+ int has_data = 0, hdr_tag = 0;
+ u32 dw0, dw1 = 0, dw2 = 0;
+@@ -2508,10 +2509,14 @@ static void prep_ata_v2_hw(struct hisi_hba *hisi_hba,
+ /* create header */
+ /* dw0 */
+ dw0 = port->id << CMD_HDR_PORT_OFF;
+- if (parent_dev && dev_is_expander(parent_dev->dev_type))
++ if (parent_dev && dev_is_expander(parent_dev->dev_type)) {
+ dw0 |= 3 << CMD_HDR_CMD_OFF;
+- else
++ } else {
++ phy_id = device->phy->identify.phy_identifier;
++ dw0 |= (1U << phy_id) << CMD_HDR_PHY_ID_OFF;
++ dw0 |= CMD_HDR_FORCE_PHY_MSK;
+ dw0 |= 4 << CMD_HDR_CMD_OFF;
++ }
+
+ if (tmf && ata_task->force_phy) {
+ dw0 |= CMD_HDR_FORCE_PHY_MSK;
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+index 095bbf80c34efb..6a0656f3b596cc 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+@@ -359,6 +359,10 @@
+ #define CMD_HDR_RESP_REPORT_MSK (0x1 << CMD_HDR_RESP_REPORT_OFF)
+ #define CMD_HDR_TLR_CTRL_OFF 6
+ #define CMD_HDR_TLR_CTRL_MSK (0x3 << CMD_HDR_TLR_CTRL_OFF)
++#define CMD_HDR_PHY_ID_OFF 8
++#define CMD_HDR_PHY_ID_MSK (0x1ff << CMD_HDR_PHY_ID_OFF)
++#define CMD_HDR_FORCE_PHY_OFF 17
++#define CMD_HDR_FORCE_PHY_MSK (0x1U << CMD_HDR_FORCE_PHY_OFF)
+ #define CMD_HDR_PORT_OFF 18
+ #define CMD_HDR_PORT_MSK (0xf << CMD_HDR_PORT_OFF)
+ #define CMD_HDR_PRIORITY_OFF 27
+@@ -1429,15 +1433,21 @@ static void prep_ata_v3_hw(struct hisi_hba *hisi_hba,
+ struct hisi_sas_cmd_hdr *hdr = slot->cmd_hdr;
+ struct asd_sas_port *sas_port = device->port;
+ struct hisi_sas_port *port = to_hisi_sas_port(sas_port);
++ int phy_id;
+ u8 *buf_cmd;
+ int has_data = 0, hdr_tag = 0;
+ u32 dw1 = 0, dw2 = 0;
+
+ hdr->dw0 = cpu_to_le32(port->id << CMD_HDR_PORT_OFF);
+- if (parent_dev && dev_is_expander(parent_dev->dev_type))
++ if (parent_dev && dev_is_expander(parent_dev->dev_type)) {
+ hdr->dw0 |= cpu_to_le32(3 << CMD_HDR_CMD_OFF);
+- else
++ } else {
++ phy_id = device->phy->identify.phy_identifier;
++ hdr->dw0 |= cpu_to_le32((1U << phy_id)
++ << CMD_HDR_PHY_ID_OFF);
++ hdr->dw0 |= CMD_HDR_FORCE_PHY_MSK;
+ hdr->dw0 |= cpu_to_le32(4U << CMD_HDR_CMD_OFF);
++ }
+
+ switch (task->data_dir) {
+ case DMA_TO_DEVICE:
+diff --git a/drivers/scsi/megaraid/megaraid_sas_base.c b/drivers/scsi/megaraid/megaraid_sas_base.c
+index d85f990aec885a..6116799ddf9c0d 100644
+--- a/drivers/scsi/megaraid/megaraid_sas_base.c
++++ b/drivers/scsi/megaraid/megaraid_sas_base.c
+@@ -2103,6 +2103,9 @@ static int megasas_sdev_configure(struct scsi_device *sdev,
+ /* This sdev property may change post OCR */
+ megasas_set_dynamic_target_properties(sdev, lim, is_target_prop);
+
++ if (!MEGASAS_IS_LOGICAL(sdev))
++ sdev->no_vpd_size = 1;
++
+ mutex_unlock(&instance->reset_mutex);
+
+ return 0;
+@@ -3662,8 +3665,10 @@ megasas_complete_cmd(struct megasas_instance *instance, struct megasas_cmd *cmd,
+
+ case MFI_STAT_SCSI_IO_FAILED:
+ case MFI_STAT_LD_INIT_IN_PROGRESS:
+- cmd->scmd->result =
+- (DID_ERROR << 16) | hdr->scsi_status;
++ if (hdr->scsi_status == 0xf0)
++ cmd->scmd->result = (DID_ERROR << 16) | SAM_STAT_CHECK_CONDITION;
++ else
++ cmd->scmd->result = (DID_ERROR << 16) | hdr->scsi_status;
+ break;
+
+ case MFI_STAT_SCSI_DONE_WITH_ERROR:
+diff --git a/drivers/scsi/megaraid/megaraid_sas_fusion.c b/drivers/scsi/megaraid/megaraid_sas_fusion.c
+index 1eec23da28e2d6..1eea4df9e47d35 100644
+--- a/drivers/scsi/megaraid/megaraid_sas_fusion.c
++++ b/drivers/scsi/megaraid/megaraid_sas_fusion.c
+@@ -2043,7 +2043,10 @@ map_cmd_status(struct fusion_context *fusion,
+
+ case MFI_STAT_SCSI_IO_FAILED:
+ case MFI_STAT_LD_INIT_IN_PROGRESS:
+- scmd->result = (DID_ERROR << 16) | ext_status;
++ if (ext_status == 0xf0)
++ scmd->result = (DID_ERROR << 16) | SAM_STAT_CHECK_CONDITION;
++ else
++ scmd->result = (DID_ERROR << 16) | ext_status;
+ break;
+
+ case MFI_STAT_SCSI_DONE_WITH_ERROR:
+diff --git a/drivers/scsi/scsi_transport_iscsi.c b/drivers/scsi/scsi_transport_iscsi.c
+index 9c347c64c315f8..0b8c91bf793fcb 100644
+--- a/drivers/scsi/scsi_transport_iscsi.c
++++ b/drivers/scsi/scsi_transport_iscsi.c
+@@ -3182,11 +3182,14 @@ iscsi_set_host_param(struct iscsi_transport *transport,
+ }
+
+ /* see similar check in iscsi_if_set_param() */
+- if (strlen(data) > ev->u.set_host_param.len)
+- return -EINVAL;
++ if (strlen(data) > ev->u.set_host_param.len) {
++ err = -EINVAL;
++ goto out;
++ }
+
+ err = transport->set_host_param(shost, ev->u.set_host_param.param,
+ data, ev->u.set_host_param.len);
++out:
+ scsi_host_put(shost);
+ return err;
+ }
+diff --git a/drivers/scsi/smartpqi/smartpqi_init.c b/drivers/scsi/smartpqi/smartpqi_init.c
+index 0da7be40c92580..e790b5d4e3c70a 100644
+--- a/drivers/scsi/smartpqi/smartpqi_init.c
++++ b/drivers/scsi/smartpqi/smartpqi_init.c
+@@ -19,6 +19,7 @@
+ #include <linux/bcd.h>
+ #include <linux/reboot.h>
+ #include <linux/cciss_ioctl.h>
++#include <linux/crash_dump.h>
+ #include <scsi/scsi_host.h>
+ #include <scsi/scsi_cmnd.h>
+ #include <scsi/scsi_device.h>
+@@ -5246,7 +5247,7 @@ static void pqi_calculate_io_resources(struct pqi_ctrl_info *ctrl_info)
+ ctrl_info->error_buffer_length =
+ ctrl_info->max_io_slots * PQI_ERROR_BUFFER_ELEMENT_LENGTH;
+
+- if (reset_devices)
++ if (is_kdump_kernel())
+ max_transfer_size = min(ctrl_info->max_transfer_size,
+ PQI_MAX_TRANSFER_SIZE_KDUMP);
+ else
+@@ -5275,7 +5276,7 @@ static void pqi_calculate_queue_resources(struct pqi_ctrl_info *ctrl_info)
+ u16 num_elements_per_iq;
+ u16 num_elements_per_oq;
+
+- if (reset_devices) {
++ if (is_kdump_kernel()) {
+ num_queue_groups = 1;
+ } else {
+ int num_cpus;
+@@ -8288,12 +8289,12 @@ static int pqi_ctrl_init(struct pqi_ctrl_info *ctrl_info)
+ u32 product_id;
+
+ if (reset_devices) {
+- if (pqi_is_fw_triage_supported(ctrl_info)) {
++ if (is_kdump_kernel() && pqi_is_fw_triage_supported(ctrl_info)) {
+ rc = sis_wait_for_fw_triage_completion(ctrl_info);
+ if (rc)
+ return rc;
+ }
+- if (sis_is_ctrl_logging_supported(ctrl_info)) {
++ if (is_kdump_kernel() && sis_is_ctrl_logging_supported(ctrl_info)) {
+ sis_notify_kdump(ctrl_info);
+ rc = sis_wait_for_ctrl_logging_completion(ctrl_info);
+ if (rc)
+@@ -8344,7 +8345,7 @@ static int pqi_ctrl_init(struct pqi_ctrl_info *ctrl_info)
+ ctrl_info->product_id = (u8)product_id;
+ ctrl_info->product_revision = (u8)(product_id >> 8);
+
+- if (reset_devices) {
++ if (is_kdump_kernel()) {
+ if (ctrl_info->max_outstanding_requests >
+ PQI_MAX_OUTSTANDING_REQUESTS_KDUMP)
+ ctrl_info->max_outstanding_requests =
+@@ -8480,7 +8481,7 @@ static int pqi_ctrl_init(struct pqi_ctrl_info *ctrl_info)
+ if (rc)
+ return rc;
+
+- if (ctrl_info->ctrl_logging_supported && !reset_devices) {
++ if (ctrl_info->ctrl_logging_supported && !is_kdump_kernel()) {
+ pqi_host_setup_buffer(ctrl_info, &ctrl_info->ctrl_log_memory, PQI_CTRL_LOG_TOTAL_SIZE, PQI_CTRL_LOG_MIN_SIZE);
+ pqi_host_memory_update(ctrl_info, &ctrl_info->ctrl_log_memory, PQI_VENDOR_GENERAL_CTRL_LOG_MEMORY_UPDATE);
+ }
+diff --git a/drivers/thermal/intel/int340x_thermal/processor_thermal_rfim.c b/drivers/thermal/intel/int340x_thermal/processor_thermal_rfim.c
+index dad63f2d5f90fc..3a028b78d9afc0 100644
+--- a/drivers/thermal/intel/int340x_thermal/processor_thermal_rfim.c
++++ b/drivers/thermal/intel/int340x_thermal/processor_thermal_rfim.c
+@@ -166,15 +166,18 @@ static const struct mmio_reg adl_dvfs_mmio_regs[] = {
+ { 0, 0x5A40, 1, 0x1, 0}, /* rfi_disable */
+ };
+
++static const struct mapping_table *dlvr_mapping;
++static const struct mmio_reg *dlvr_mmio_regs_table;
++
+ #define RFIM_SHOW(suffix, table)\
+ static ssize_t suffix##_show(struct device *dev,\
+ struct device_attribute *attr,\
+ char *buf)\
+ {\
+- const struct mapping_table *mapping = NULL;\
++ const struct mmio_reg *mmio_regs = dlvr_mmio_regs_table;\
++ const struct mapping_table *mapping = dlvr_mapping;\
+ struct proc_thermal_device *proc_priv;\
+ struct pci_dev *pdev = to_pci_dev(dev);\
+- const struct mmio_reg *mmio_regs;\
+ const char **match_strs;\
+ int ret, err;\
+ u32 reg_val;\
+@@ -186,12 +189,6 @@ static ssize_t suffix##_show(struct device *dev,\
+ mmio_regs = adl_dvfs_mmio_regs;\
+ } else if (table == 2) { \
+ match_strs = (const char **)dlvr_strings;\
+- if (pdev->device == PCI_DEVICE_ID_INTEL_LNLM_THERMAL) {\
+- mmio_regs = lnl_dlvr_mmio_regs;\
+- mapping = lnl_dlvr_mapping;\
+- } else {\
+- mmio_regs = dlvr_mmio_regs;\
+- } \
+ } else {\
+ match_strs = (const char **)fivr_strings;\
+ mmio_regs = tgl_fivr_mmio_regs;\
+@@ -214,12 +211,12 @@ static ssize_t suffix##_store(struct device *dev,\
+ struct device_attribute *attr,\
+ const char *buf, size_t count)\
+ {\
+- const struct mapping_table *mapping = NULL;\
++ const struct mmio_reg *mmio_regs = dlvr_mmio_regs_table;\
++ const struct mapping_table *mapping = dlvr_mapping;\
+ struct proc_thermal_device *proc_priv;\
+ struct pci_dev *pdev = to_pci_dev(dev);\
+ unsigned int input;\
+ const char **match_strs;\
+- const struct mmio_reg *mmio_regs;\
+ int ret, err;\
+ u32 reg_val;\
+ u32 mask;\
+@@ -230,12 +227,6 @@ static ssize_t suffix##_store(struct device *dev,\
+ mmio_regs = adl_dvfs_mmio_regs;\
+ } else if (table == 2) { \
+ match_strs = (const char **)dlvr_strings;\
+- if (pdev->device == PCI_DEVICE_ID_INTEL_LNLM_THERMAL) {\
+- mmio_regs = lnl_dlvr_mmio_regs;\
+- mapping = lnl_dlvr_mapping;\
+- } else {\
+- mmio_regs = dlvr_mmio_regs;\
+- } \
+ } else {\
+ match_strs = (const char **)fivr_strings;\
+ mmio_regs = tgl_fivr_mmio_regs;\
+@@ -448,6 +439,16 @@ int proc_thermal_rfim_add(struct pci_dev *pdev, struct proc_thermal_device *proc
+ }
+
+ if (proc_priv->mmio_feature_mask & PROC_THERMAL_FEATURE_DLVR) {
++ switch (pdev->device) {
++ case PCI_DEVICE_ID_INTEL_LNLM_THERMAL:
++ case PCI_DEVICE_ID_INTEL_PTL_THERMAL:
++ dlvr_mmio_regs_table = lnl_dlvr_mmio_regs;
++ dlvr_mapping = lnl_dlvr_mapping;
++ break;
++ default:
++ dlvr_mmio_regs_table = dlvr_mmio_regs;
++ break;
++ }
+ ret = sysfs_create_group(&pdev->dev.kobj, &dlvr_attribute_group);
+ if (ret)
+ return ret;
+diff --git a/drivers/ufs/host/ufs-exynos.c b/drivers/ufs/host/ufs-exynos.c
+index 13dd5dfc03eb38..5ea3f9beb1bd9a 100644
+--- a/drivers/ufs/host/ufs-exynos.c
++++ b/drivers/ufs/host/ufs-exynos.c
+@@ -92,11 +92,16 @@
+ UIC_TRANSPORT_NO_CONNECTION_RX |\
+ UIC_TRANSPORT_BAD_TC)
+
+-/* FSYS UFS Shareability */
+-#define UFS_WR_SHARABLE BIT(2)
+-#define UFS_RD_SHARABLE BIT(1)
+-#define UFS_SHARABLE (UFS_WR_SHARABLE | UFS_RD_SHARABLE)
+-#define UFS_SHAREABILITY_OFFSET 0x710
++/* UFS Shareability */
++#define UFS_EXYNOSAUTO_WR_SHARABLE BIT(2)
++#define UFS_EXYNOSAUTO_RD_SHARABLE BIT(1)
++#define UFS_EXYNOSAUTO_SHARABLE (UFS_EXYNOSAUTO_WR_SHARABLE | \
++ UFS_EXYNOSAUTO_RD_SHARABLE)
++#define UFS_GS101_WR_SHARABLE BIT(1)
++#define UFS_GS101_RD_SHARABLE BIT(0)
++#define UFS_GS101_SHARABLE (UFS_GS101_WR_SHARABLE | \
++ UFS_GS101_RD_SHARABLE)
++#define UFS_SHAREABILITY_OFFSET 0x710
+
+ /* Multi-host registers */
+ #define MHCTRL 0xC4
+@@ -209,8 +214,8 @@ static int exynos_ufs_shareability(struct exynos_ufs *ufs)
+ /* IO Coherency setting */
+ if (ufs->sysreg) {
+ return regmap_update_bits(ufs->sysreg,
+- ufs->shareability_reg_offset,
+- UFS_SHARABLE, UFS_SHARABLE);
++ ufs->iocc_offset,
++ ufs->iocc_mask, ufs->iocc_val);
+ }
+
+ return 0;
+@@ -957,6 +962,12 @@ static int exynos_ufs_phy_init(struct exynos_ufs *ufs)
+ }
+
+ phy_set_bus_width(generic_phy, ufs->avail_ln_rx);
++
++ if (generic_phy->power_count) {
++ phy_power_off(generic_phy);
++ phy_exit(generic_phy);
++ }
++
+ ret = phy_init(generic_phy);
+ if (ret) {
+ dev_err(hba->dev, "%s: phy init failed, ret = %d\n",
+@@ -1168,12 +1179,22 @@ static int exynos_ufs_parse_dt(struct device *dev, struct exynos_ufs *ufs)
+ ufs->sysreg = NULL;
+ else {
+ if (of_property_read_u32_index(np, "samsung,sysreg", 1,
+- &ufs->shareability_reg_offset)) {
++ &ufs->iocc_offset)) {
+ dev_warn(dev, "can't get an offset from sysreg. Set to default value\n");
+- ufs->shareability_reg_offset = UFS_SHAREABILITY_OFFSET;
++ ufs->iocc_offset = UFS_SHAREABILITY_OFFSET;
+ }
+ }
+
++ ufs->iocc_mask = ufs->drv_data->iocc_mask;
++ /*
++ * no 'dma-coherent' property means the descriptors are
++ * non-cacheable so iocc shareability should be disabled.
++ */
++ if (of_dma_is_coherent(dev->of_node))
++ ufs->iocc_val = ufs->iocc_mask;
++ else
++ ufs->iocc_val = 0;
++
+ ufs->pclk_avail_min = PCLK_AVAIL_MIN;
+ ufs->pclk_avail_max = PCLK_AVAIL_MAX;
+
+@@ -2034,6 +2055,7 @@ static const struct exynos_ufs_drv_data exynosauto_ufs_drvs = {
+ .opts = EXYNOS_UFS_OPT_BROKEN_AUTO_CLK_CTRL |
+ EXYNOS_UFS_OPT_SKIP_CONFIG_PHY_ATTR |
+ EXYNOS_UFS_OPT_BROKEN_RX_SEL_IDX,
++ .iocc_mask = UFS_EXYNOSAUTO_SHARABLE,
+ .drv_init = exynosauto_ufs_drv_init,
+ .post_hce_enable = exynosauto_ufs_post_hce_enable,
+ .pre_link = exynosauto_ufs_pre_link,
+@@ -2135,6 +2157,7 @@ static const struct exynos_ufs_drv_data gs101_ufs_drvs = {
+ .opts = EXYNOS_UFS_OPT_SKIP_CONFIG_PHY_ATTR |
+ EXYNOS_UFS_OPT_UFSPR_SECURE |
+ EXYNOS_UFS_OPT_TIMER_TICK_SELECT,
++ .iocc_mask = UFS_GS101_SHARABLE,
+ .drv_init = gs101_ufs_drv_init,
+ .pre_link = gs101_ufs_pre_link,
+ .post_link = gs101_ufs_post_link,
+diff --git a/drivers/ufs/host/ufs-exynos.h b/drivers/ufs/host/ufs-exynos.h
+index 9670dc138d1e49..d0b3df221503c6 100644
+--- a/drivers/ufs/host/ufs-exynos.h
++++ b/drivers/ufs/host/ufs-exynos.h
+@@ -181,6 +181,7 @@ struct exynos_ufs_drv_data {
+ struct exynos_ufs_uic_attr *uic_attr;
+ unsigned int quirks;
+ unsigned int opts;
++ u32 iocc_mask;
+ /* SoC's specific operations */
+ int (*drv_init)(struct exynos_ufs *ufs);
+ int (*pre_link)(struct exynos_ufs *ufs);
+@@ -230,7 +231,9 @@ struct exynos_ufs {
+ ktime_t entry_hibern8_t;
+ const struct exynos_ufs_drv_data *drv_data;
+ struct regmap *sysreg;
+- u32 shareability_reg_offset;
++ u32 iocc_offset;
++ u32 iocc_mask;
++ u32 iocc_val;
+
+ u32 opts;
+ #define EXYNOS_UFS_OPT_HAS_APB_CLK_CTRL BIT(0)
+diff --git a/fs/Kconfig b/fs/Kconfig
+index 64d420e3c47580..8fd1011f7d628d 100644
+--- a/fs/Kconfig
++++ b/fs/Kconfig
+@@ -368,6 +368,7 @@ config GRACE_PERIOD
+ config LOCKD
+ tristate
+ depends on FILE_LOCKING
++ select CRC32
+ select GRACE_PERIOD
+
+ config LOCKD_V4
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index 6c18bad53cd3ea..e666c141cae0b0 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -4903,6 +4903,8 @@ static int btrfs_uring_encoded_read(struct io_uring_cmd *cmd, unsigned int issue
+
+ ret = btrfs_encoded_read(&kiocb, &data->iter, &data->args, &cached_state,
+ &disk_bytenr, &disk_io_size);
++ if (ret == -EAGAIN)
++ goto out_acct;
+ if (ret < 0 && ret != -EIOCBQUEUED)
+ goto out_free;
+
+diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
+index dc4fee519ca6c1..a5f29ff3fbc2e7 100644
+--- a/fs/btrfs/super.c
++++ b/fs/btrfs/super.c
+@@ -1139,8 +1139,7 @@ static int btrfs_show_options(struct seq_file *seq, struct dentry *dentry)
+ subvol_name = btrfs_get_subvol_name_from_objectid(info,
+ btrfs_root_id(BTRFS_I(d_inode(dentry))->root));
+ if (!IS_ERR(subvol_name)) {
+- seq_puts(seq, ",subvol=");
+- seq_escape(seq, subvol_name, " \t\n\\");
++ seq_show_option(seq, "subvol", subvol_name);
+ kfree(subvol_name);
+ }
+ return 0;
+diff --git a/fs/eventpoll.c b/fs/eventpoll.c
+index 7c0980db77b317..c01234bbac4989 100644
+--- a/fs/eventpoll.c
++++ b/fs/eventpoll.c
+@@ -1980,6 +1980,30 @@ static int ep_autoremove_wake_function(struct wait_queue_entry *wq_entry,
+ return ret;
+ }
+
++static int ep_try_send_events(struct eventpoll *ep,
++ struct epoll_event __user *events, int maxevents)
++{
++ int res;
++
++ /*
++ * Try to transfer events to user space. In case we get 0 events and
++ * there's still timeout left over, we go trying again in search of
++ * more luck.
++ */
++ res = ep_send_events(ep, events, maxevents);
++ if (res > 0)
++ ep_suspend_napi_irqs(ep);
++ return res;
++}
++
++static int ep_schedule_timeout(ktime_t *to)
++{
++ if (to)
++ return ktime_after(*to, ktime_get());
++ else
++ return 1;
++}
++
+ /**
+ * ep_poll - Retrieves ready events, and delivers them to the caller-supplied
+ * event buffer.
+@@ -2031,17 +2055,9 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
+
+ while (1) {
+ if (eavail) {
+- /*
+- * Try to transfer events to user space. In case we get
+- * 0 events and there's still timeout left over, we go
+- * trying again in search of more luck.
+- */
+- res = ep_send_events(ep, events, maxevents);
+- if (res) {
+- if (res > 0)
+- ep_suspend_napi_irqs(ep);
++ res = ep_try_send_events(ep, events, maxevents);
++ if (res)
+ return res;
+- }
+ }
+
+ if (timed_out)
+@@ -2095,7 +2111,7 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
+
+ write_unlock_irq(&ep->lock);
+
+- if (!eavail)
++ if (!eavail && ep_schedule_timeout(to))
+ timed_out = !schedule_hrtimeout_range(to, slack,
+ HRTIMER_MODE_ABS);
+ __set_current_state(TASK_RUNNING);
+diff --git a/fs/fuse/virtio_fs.c b/fs/fuse/virtio_fs.c
+index 82afe78ec54235..997636ed3d1b10 100644
+--- a/fs/fuse/virtio_fs.c
++++ b/fs/fuse/virtio_fs.c
+@@ -1670,6 +1670,9 @@ static int virtio_fs_get_tree(struct fs_context *fsc)
+ unsigned int virtqueue_size;
+ int err = -EIO;
+
++ if (!fsc->source)
++ return invalf(fsc, "No source specified");
++
+ /* This gets a reference on virtio_fs object. This ptr gets installed
+ * in fc->iq->priv. Once fuse_conn is going away, it calls ->put()
+ * to drop the reference to this object.
+diff --git a/fs/hfs/bnode.c b/fs/hfs/bnode.c
+index 6add6ebfef8967..cb823a8a6ba960 100644
+--- a/fs/hfs/bnode.c
++++ b/fs/hfs/bnode.c
+@@ -67,6 +67,12 @@ void hfs_bnode_read_key(struct hfs_bnode *node, void *key, int off)
+ else
+ key_len = tree->max_key_len + 1;
+
++ if (key_len > sizeof(hfs_btree_key) || key_len < 1) {
++ memset(key, 0, sizeof(hfs_btree_key));
++ pr_err("hfs: Invalid key length: %d\n", key_len);
++ return;
++ }
++
+ hfs_bnode_read(node, key, off, key_len);
+ }
+
+diff --git a/fs/hfsplus/bnode.c b/fs/hfsplus/bnode.c
+index 87974d5e679156..079ea80534f7de 100644
+--- a/fs/hfsplus/bnode.c
++++ b/fs/hfsplus/bnode.c
+@@ -67,6 +67,12 @@ void hfs_bnode_read_key(struct hfs_bnode *node, void *key, int off)
+ else
+ key_len = tree->max_key_len + 2;
+
++ if (key_len > sizeof(hfsplus_btree_key) || key_len < 1) {
++ memset(key, 0, sizeof(hfsplus_btree_key));
++ pr_err("hfsplus: Invalid key length: %d\n", key_len);
++ return;
++ }
++
+ hfs_bnode_read(node, key, off, key_len);
+ }
+
+diff --git a/fs/isofs/export.c b/fs/isofs/export.c
+index 35768a63fb1d23..421d247fae5230 100644
+--- a/fs/isofs/export.c
++++ b/fs/isofs/export.c
+@@ -180,7 +180,7 @@ static struct dentry *isofs_fh_to_parent(struct super_block *sb,
+ return NULL;
+
+ return isofs_export_iget(sb,
+- fh_len > 2 ? ifid->parent_block : 0,
++ fh_len > 3 ? ifid->parent_block : 0,
+ ifid->parent_offset,
+ fh_len > 4 ? ifid->parent_generation : 0);
+ }
+diff --git a/fs/nfs/Kconfig b/fs/nfs/Kconfig
+index d3f76101ad4b91..07932ce9246c17 100644
+--- a/fs/nfs/Kconfig
++++ b/fs/nfs/Kconfig
+@@ -2,6 +2,7 @@
+ config NFS_FS
+ tristate "NFS client support"
+ depends on INET && FILE_LOCKING && MULTIUSER
++ select CRC32
+ select LOCKD
+ select SUNRPC
+ select NFS_COMMON
+@@ -196,7 +197,6 @@ config NFS_USE_KERNEL_DNS
+ config NFS_DEBUG
+ bool
+ depends on NFS_FS && SUNRPC_DEBUG
+- select CRC32
+ default y
+
+ config NFS_DISABLE_UDP_SUPPORT
+diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h
+index fae2c7ae4acc28..59bb4d0338f39a 100644
+--- a/fs/nfs/internal.h
++++ b/fs/nfs/internal.h
+@@ -899,18 +899,11 @@ u64 nfs_timespec_to_change_attr(const struct timespec64 *ts)
+ return ((u64)ts->tv_sec << 30) + ts->tv_nsec;
+ }
+
+-#ifdef CONFIG_CRC32
+ static inline u32 nfs_stateid_hash(const nfs4_stateid *stateid)
+ {
+ return ~crc32_le(0xFFFFFFFF, &stateid->other[0],
+ NFS4_STATEID_OTHER_SIZE);
+ }
+-#else
+-static inline u32 nfs_stateid_hash(nfs4_stateid *stateid)
+-{
+- return 0;
+-}
+-#endif
+
+ static inline bool nfs_error_is_fatal(int err)
+ {
+diff --git a/fs/nfs/nfs4session.h b/fs/nfs/nfs4session.h
+index 351616c61df541..f9c291e2165cd8 100644
+--- a/fs/nfs/nfs4session.h
++++ b/fs/nfs/nfs4session.h
+@@ -148,16 +148,12 @@ static inline void nfs4_copy_sessionid(struct nfs4_sessionid *dst,
+ memcpy(dst->data, src->data, NFS4_MAX_SESSIONID_LEN);
+ }
+
+-#ifdef CONFIG_CRC32
+ /*
+ * nfs_session_id_hash - calculate the crc32 hash for the session id
+ * @session - pointer to session
+ */
+ #define nfs_session_id_hash(sess_id) \
+ (~crc32_le(0xFFFFFFFF, &(sess_id)->data[0], sizeof((sess_id)->data)))
+-#else
+-#define nfs_session_id_hash(session) (0)
+-#endif
+ #else /* defined(CONFIG_NFS_V4_1) */
+
+ static inline int nfs4_init_session(struct nfs_client *clp)
+diff --git a/fs/nfsd/Kconfig b/fs/nfsd/Kconfig
+index 792d3fed1b45fd..731a88f6313ebf 100644
+--- a/fs/nfsd/Kconfig
++++ b/fs/nfsd/Kconfig
+@@ -4,6 +4,7 @@ config NFSD
+ depends on INET
+ depends on FILE_LOCKING
+ depends on FSNOTIFY
++ select CRC32
+ select LOCKD
+ select SUNRPC
+ select EXPORTFS
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index 2de49e2d6ac487..613bee7edb81e8 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -5432,7 +5432,7 @@ static void nfsd_break_one_deleg(struct nfs4_delegation *dp)
+ queued = nfsd4_run_cb(&dp->dl_recall);
+ WARN_ON_ONCE(!queued);
+ if (!queued)
+- nfs4_put_stid(&dp->dl_stid);
++ refcount_dec(&dp->dl_stid.sc_count);
+ }
+
+ /* Called from break_lease() with flc_lock held. */
+diff --git a/fs/nfsd/nfsfh.h b/fs/nfsd/nfsfh.h
+index 876152a91f122f..5103c2f4d2253a 100644
+--- a/fs/nfsd/nfsfh.h
++++ b/fs/nfsd/nfsfh.h
+@@ -267,7 +267,6 @@ static inline bool fh_fsid_match(const struct knfsd_fh *fh1,
+ return true;
+ }
+
+-#ifdef CONFIG_CRC32
+ /**
+ * knfsd_fh_hash - calculate the crc32 hash for the filehandle
+ * @fh - pointer to filehandle
+@@ -279,12 +278,6 @@ static inline u32 knfsd_fh_hash(const struct knfsd_fh *fh)
+ {
+ return ~crc32_le(0xFFFFFFFF, fh->fh_raw, fh->fh_size);
+ }
+-#else
+-static inline u32 knfsd_fh_hash(const struct knfsd_fh *fh)
+-{
+- return 0;
+-}
+-#endif
+
+ /**
+ * fh_clear_pre_post_attrs - Reset pre/post attributes
+diff --git a/fs/overlayfs/overlayfs.h b/fs/overlayfs/overlayfs.h
+index 0021e202502026..be86d2ed71d655 100644
+--- a/fs/overlayfs/overlayfs.h
++++ b/fs/overlayfs/overlayfs.h
+@@ -540,8 +540,6 @@ int ovl_set_metacopy_xattr(struct ovl_fs *ofs, struct dentry *d,
+ bool ovl_is_metacopy_dentry(struct dentry *dentry);
+ char *ovl_get_redirect_xattr(struct ovl_fs *ofs, const struct path *path, int padding);
+ int ovl_ensure_verity_loaded(struct path *path);
+-int ovl_get_verity_xattr(struct ovl_fs *ofs, const struct path *path,
+- u8 *digest_buf, int *buf_length);
+ int ovl_validate_verity(struct ovl_fs *ofs,
+ struct path *metapath,
+ struct path *datapath);
+diff --git a/fs/overlayfs/super.c b/fs/overlayfs/super.c
+index 86ae6f6da36b68..b11094acdd8f37 100644
+--- a/fs/overlayfs/super.c
++++ b/fs/overlayfs/super.c
+@@ -1137,6 +1137,11 @@ static struct ovl_entry *ovl_get_lowerstack(struct super_block *sb,
+ return ERR_PTR(-EINVAL);
+ }
+
++ if (ctx->nr == ctx->nr_data) {
++ pr_err("at least one non-data lowerdir is required\n");
++ return ERR_PTR(-EINVAL);
++ }
++
+ err = -EINVAL;
+ for (i = 0; i < ctx->nr; i++) {
+ l = &ctx->lower[i];
+diff --git a/fs/smb/client/cifsproto.h b/fs/smb/client/cifsproto.h
+index 81680001944df6..278092a15f8903 100644
+--- a/fs/smb/client/cifsproto.h
++++ b/fs/smb/client/cifsproto.h
+@@ -160,6 +160,8 @@ extern int cifs_get_writable_path(struct cifs_tcon *tcon, const char *name,
+ extern struct cifsFileInfo *find_readable_file(struct cifsInodeInfo *, bool);
+ extern int cifs_get_readable_path(struct cifs_tcon *tcon, const char *name,
+ struct cifsFileInfo **ret_file);
++extern int cifs_get_hardlink_path(struct cifs_tcon *tcon, struct inode *inode,
++ struct file *file);
+ extern unsigned int smbCalcSize(void *buf);
+ extern int decode_negTokenInit(unsigned char *security_blob, int length,
+ struct TCP_Server_Info *server);
+diff --git a/fs/smb/client/connect.c b/fs/smb/client/connect.c
+index e417052694f276..cc9c912db8def9 100644
+--- a/fs/smb/client/connect.c
++++ b/fs/smb/client/connect.c
+@@ -300,7 +300,6 @@ cifs_abort_connection(struct TCP_Server_Info *server)
+ server->ssocket->flags);
+ sock_release(server->ssocket);
+ server->ssocket = NULL;
+- put_net(cifs_net_ns(server));
+ }
+ server->sequence_number = 0;
+ server->session_estab = false;
+@@ -973,13 +972,9 @@ clean_demultiplex_info(struct TCP_Server_Info *server)
+ msleep(125);
+ if (cifs_rdma_enabled(server))
+ smbd_destroy(server);
+-
+ if (server->ssocket) {
+ sock_release(server->ssocket);
+ server->ssocket = NULL;
+-
+- /* Release netns reference for the socket. */
+- put_net(cifs_net_ns(server));
+ }
+
+ if (!list_empty(&server->pending_mid_q)) {
+@@ -1027,7 +1022,6 @@ clean_demultiplex_info(struct TCP_Server_Info *server)
+ */
+ }
+
+- /* Release netns reference for this server. */
+ put_net(cifs_net_ns(server));
+ kfree(server->leaf_fullpath);
+ kfree(server->hostname);
+@@ -1673,8 +1667,6 @@ cifs_get_tcp_session(struct smb3_fs_context *ctx,
+
+ tcp_ses->ops = ctx->ops;
+ tcp_ses->vals = ctx->vals;
+-
+- /* Grab netns reference for this server. */
+ cifs_set_net_ns(tcp_ses, get_net(current->nsproxy->net_ns));
+
+ tcp_ses->sign = ctx->sign;
+@@ -1804,7 +1796,6 @@ cifs_get_tcp_session(struct smb3_fs_context *ctx,
+ out_err_crypto_release:
+ cifs_crypto_secmech_release(tcp_ses);
+
+- /* Release netns reference for this server. */
+ put_net(cifs_net_ns(tcp_ses));
+
+ out_err:
+@@ -1813,10 +1804,8 @@ cifs_get_tcp_session(struct smb3_fs_context *ctx,
+ cifs_put_tcp_session(tcp_ses->primary_server, false);
+ kfree(tcp_ses->hostname);
+ kfree(tcp_ses->leaf_fullpath);
+- if (tcp_ses->ssocket) {
++ if (tcp_ses->ssocket)
+ sock_release(tcp_ses->ssocket);
+- put_net(cifs_net_ns(tcp_ses));
+- }
+ kfree(tcp_ses);
+ }
+ return ERR_PTR(rc);
+@@ -3117,24 +3106,20 @@ generic_ip_connect(struct TCP_Server_Info *server)
+ socket = server->ssocket;
+ } else {
+ struct net *net = cifs_net_ns(server);
++ struct sock *sk;
+
+- rc = sock_create_kern(net, sfamily, SOCK_STREAM, IPPROTO_TCP, &server->ssocket);
++ rc = __sock_create(net, sfamily, SOCK_STREAM,
++ IPPROTO_TCP, &server->ssocket, 1);
+ if (rc < 0) {
+ cifs_server_dbg(VFS, "Error %d creating socket\n", rc);
+ return rc;
+ }
+
+- /*
+- * Grab netns reference for the socket.
+- *
+- * This reference will be released in several situations:
+- * - In the failure path before the cifsd thread is started.
+- * - In the all place where server->socket is released, it is
+- * also set to NULL.
+- * - Ultimately in clean_demultiplex_info(), during the final
+- * teardown.
+- */
+- get_net(net);
++ sk = server->ssocket->sk;
++ __netns_tracker_free(net, &sk->ns_tracker, false);
++ sk->sk_net_refcnt = 1;
++ get_net_track(net, &sk->ns_tracker, GFP_KERNEL);
++ sock_inuse_add(net, 1);
+
+ /* BB other socket options to set KEEPALIVE, NODELAY? */
+ cifs_dbg(FYI, "Socket created\n");
+@@ -3186,7 +3171,6 @@ generic_ip_connect(struct TCP_Server_Info *server)
+ if (rc < 0) {
+ cifs_dbg(FYI, "Error %d connecting to server\n", rc);
+ trace_smb3_connect_err(server->hostname, server->conn_id, &server->dstaddr, rc);
+- put_net(cifs_net_ns(server));
+ sock_release(socket);
+ server->ssocket = NULL;
+ return rc;
+diff --git a/fs/smb/client/file.c b/fs/smb/client/file.c
+index 8582cf61242c60..60103d5305d6eb 100644
+--- a/fs/smb/client/file.c
++++ b/fs/smb/client/file.c
+@@ -1007,6 +1007,11 @@ int cifs_open(struct inode *inode, struct file *file)
+ } else {
+ _cifsFileInfo_put(cfile, true, false);
+ }
++ } else {
++		/* hard link on the deferred close file */
++ rc = cifs_get_hardlink_path(tcon, inode, file);
++ if (rc)
++ cifs_close_deferred_file(CIFS_I(inode));
+ }
+
+ if (server->oplocks)
+@@ -2071,6 +2076,29 @@ cifs_move_llist(struct list_head *source, struct list_head *dest)
+ list_move(li, dest);
+ }
+
++int
++cifs_get_hardlink_path(struct cifs_tcon *tcon, struct inode *inode,
++ struct file *file)
++{
++ struct cifsFileInfo *open_file = NULL;
++ struct cifsInodeInfo *cinode = CIFS_I(inode);
++ int rc = 0;
++
++ spin_lock(&tcon->open_file_lock);
++ spin_lock(&cinode->open_file_lock);
++
++ list_for_each_entry(open_file, &cinode->openFileList, flist) {
++ if (file->f_flags == open_file->f_flags) {
++ rc = -EINVAL;
++ break;
++ }
++ }
++
++ spin_unlock(&cinode->open_file_lock);
++ spin_unlock(&tcon->open_file_lock);
++ return rc;
++}
++
+ void
+ cifs_free_llist(struct list_head *llist)
+ {
+diff --git a/fs/smb/server/connection.c b/fs/smb/server/connection.c
+index c1f22c12911179..83764c230e9d4c 100644
+--- a/fs/smb/server/connection.c
++++ b/fs/smb/server/connection.c
+@@ -39,8 +39,10 @@ void ksmbd_conn_free(struct ksmbd_conn *conn)
+ xa_destroy(&conn->sessions);
+ kvfree(conn->request_buf);
+ kfree(conn->preauth_info);
+- if (atomic_dec_and_test(&conn->refcnt))
++ if (atomic_dec_and_test(&conn->refcnt)) {
++ ksmbd_free_transport(conn->transport);
+ kfree(conn);
++ }
+ }
+
+ /**
+diff --git a/fs/smb/server/oplock.c b/fs/smb/server/oplock.c
+index f103b1bd040040..81a29857b1e32f 100644
+--- a/fs/smb/server/oplock.c
++++ b/fs/smb/server/oplock.c
+@@ -129,14 +129,6 @@ static void free_opinfo(struct oplock_info *opinfo)
+ kfree(opinfo);
+ }
+
+-static inline void opinfo_free_rcu(struct rcu_head *rcu_head)
+-{
+- struct oplock_info *opinfo;
+-
+- opinfo = container_of(rcu_head, struct oplock_info, rcu_head);
+- free_opinfo(opinfo);
+-}
+-
+ struct oplock_info *opinfo_get(struct ksmbd_file *fp)
+ {
+ struct oplock_info *opinfo;
+@@ -157,8 +149,8 @@ static struct oplock_info *opinfo_get_list(struct ksmbd_inode *ci)
+ if (list_empty(&ci->m_op_list))
+ return NULL;
+
+- rcu_read_lock();
+- opinfo = list_first_or_null_rcu(&ci->m_op_list, struct oplock_info,
++ down_read(&ci->m_lock);
++ opinfo = list_first_entry(&ci->m_op_list, struct oplock_info,
+ op_entry);
+ if (opinfo) {
+ if (opinfo->conn == NULL ||
+@@ -171,8 +163,7 @@ static struct oplock_info *opinfo_get_list(struct ksmbd_inode *ci)
+ }
+ }
+ }
+-
+- rcu_read_unlock();
++ up_read(&ci->m_lock);
+
+ return opinfo;
+ }
+@@ -185,7 +176,7 @@ void opinfo_put(struct oplock_info *opinfo)
+ if (!atomic_dec_and_test(&opinfo->refcount))
+ return;
+
+- call_rcu(&opinfo->rcu_head, opinfo_free_rcu);
++ free_opinfo(opinfo);
+ }
+
+ static void opinfo_add(struct oplock_info *opinfo)
+@@ -193,7 +184,7 @@ static void opinfo_add(struct oplock_info *opinfo)
+ struct ksmbd_inode *ci = opinfo->o_fp->f_ci;
+
+ down_write(&ci->m_lock);
+- list_add_rcu(&opinfo->op_entry, &ci->m_op_list);
++ list_add(&opinfo->op_entry, &ci->m_op_list);
+ up_write(&ci->m_lock);
+ }
+
+@@ -207,7 +198,7 @@ static void opinfo_del(struct oplock_info *opinfo)
+ write_unlock(&lease_list_lock);
+ }
+ down_write(&ci->m_lock);
+- list_del_rcu(&opinfo->op_entry);
++ list_del(&opinfo->op_entry);
+ up_write(&ci->m_lock);
+ }
+
+@@ -1347,8 +1338,8 @@ void smb_break_all_levII_oplock(struct ksmbd_work *work, struct ksmbd_file *fp,
+ ci = fp->f_ci;
+ op = opinfo_get(fp);
+
+- rcu_read_lock();
+- list_for_each_entry_rcu(brk_op, &ci->m_op_list, op_entry) {
++ down_read(&ci->m_lock);
++ list_for_each_entry(brk_op, &ci->m_op_list, op_entry) {
+ if (brk_op->conn == NULL)
+ continue;
+
+@@ -1358,7 +1349,6 @@ void smb_break_all_levII_oplock(struct ksmbd_work *work, struct ksmbd_file *fp,
+ if (ksmbd_conn_releasing(brk_op->conn))
+ continue;
+
+- rcu_read_unlock();
+ if (brk_op->is_lease && (brk_op->o_lease->state &
+ (~(SMB2_LEASE_READ_CACHING_LE |
+ SMB2_LEASE_HANDLE_CACHING_LE)))) {
+@@ -1388,9 +1378,8 @@ void smb_break_all_levII_oplock(struct ksmbd_work *work, struct ksmbd_file *fp,
+ oplock_break(brk_op, SMB2_OPLOCK_LEVEL_NONE, NULL);
+ next:
+ opinfo_put(brk_op);
+- rcu_read_lock();
+ }
+- rcu_read_unlock();
++ up_read(&ci->m_lock);
+
+ if (op)
+ opinfo_put(op);
+diff --git a/fs/smb/server/oplock.h b/fs/smb/server/oplock.h
+index 3f64f07872638e..9a56eaadd0dd8f 100644
+--- a/fs/smb/server/oplock.h
++++ b/fs/smb/server/oplock.h
+@@ -71,7 +71,6 @@ struct oplock_info {
+ struct list_head lease_entry;
+ wait_queue_head_t oplock_q; /* Other server threads */
+ wait_queue_head_t oplock_brk; /* oplock breaking wait */
+- struct rcu_head rcu_head;
+ };
+
+ struct lease_break_info {
+diff --git a/fs/smb/server/smb2pdu.c b/fs/smb/server/smb2pdu.c
+index d24d95d15d876b..57839f9708bb6c 100644
+--- a/fs/smb/server/smb2pdu.c
++++ b/fs/smb/server/smb2pdu.c
+@@ -1602,8 +1602,10 @@ static int krb5_authenticate(struct ksmbd_work *work,
+ if (prev_sess_id && prev_sess_id != sess->id)
+ destroy_previous_session(conn, sess->user, prev_sess_id);
+
+- if (sess->state == SMB2_SESSION_VALID)
++ if (sess->state == SMB2_SESSION_VALID) {
+ ksmbd_free_user(sess->user);
++ sess->user = NULL;
++ }
+
+ retval = ksmbd_krb5_authenticate(sess, in_blob, in_len,
+ out_blob, &out_len);
+diff --git a/fs/smb/server/transport_ipc.c b/fs/smb/server/transport_ipc.c
+index 3f185ae60dc514..2a3e2b0ce5570a 100644
+--- a/fs/smb/server/transport_ipc.c
++++ b/fs/smb/server/transport_ipc.c
+@@ -310,7 +310,11 @@ static int ipc_server_config_on_startup(struct ksmbd_startup_request *req)
+ server_conf.signing = req->signing;
+ server_conf.tcp_port = req->tcp_port;
+ server_conf.ipc_timeout = req->ipc_timeout * HZ;
+- server_conf.deadtime = req->deadtime * SMB_ECHO_INTERVAL;
++ if (check_mul_overflow(req->deadtime, SMB_ECHO_INTERVAL,
++ &server_conf.deadtime)) {
++ ret = -EINVAL;
++ goto out;
++ }
+ server_conf.share_fake_fscaps = req->share_fake_fscaps;
+ ksmbd_init_domain(req->sub_auth);
+
+@@ -337,6 +341,7 @@ static int ipc_server_config_on_startup(struct ksmbd_startup_request *req)
+ server_conf.bind_interfaces_only = req->bind_interfaces_only;
+ ret |= ksmbd_tcp_set_interfaces(KSMBD_STARTUP_CONFIG_INTERFACES(req),
+ req->ifc_list_sz);
++out:
+ if (ret) {
+ pr_err("Server configuration error: %s %s %s\n",
+ req->netbios_name, req->server_string,
+diff --git a/fs/smb/server/transport_tcp.c b/fs/smb/server/transport_tcp.c
+index 7f38a3c3f5bd69..abedf510899a74 100644
+--- a/fs/smb/server/transport_tcp.c
++++ b/fs/smb/server/transport_tcp.c
+@@ -93,17 +93,21 @@ static struct tcp_transport *alloc_transport(struct socket *client_sk)
+ return t;
+ }
+
+-static void free_transport(struct tcp_transport *t)
++void ksmbd_free_transport(struct ksmbd_transport *kt)
+ {
+- kernel_sock_shutdown(t->sock, SHUT_RDWR);
+- sock_release(t->sock);
+- t->sock = NULL;
++ struct tcp_transport *t = TCP_TRANS(kt);
+
+- ksmbd_conn_free(KSMBD_TRANS(t)->conn);
++ sock_release(t->sock);
+ kfree(t->iov);
+ kfree(t);
+ }
+
++static void free_transport(struct tcp_transport *t)
++{
++ kernel_sock_shutdown(t->sock, SHUT_RDWR);
++ ksmbd_conn_free(KSMBD_TRANS(t)->conn);
++}
++
+ /**
+ * kvec_array_init() - initialize a IO vector segment
+ * @new: IO vector to be initialized
+diff --git a/fs/smb/server/transport_tcp.h b/fs/smb/server/transport_tcp.h
+index 8c9aa624cfe3ca..1e51675ee1b209 100644
+--- a/fs/smb/server/transport_tcp.h
++++ b/fs/smb/server/transport_tcp.h
+@@ -8,6 +8,7 @@
+
+ int ksmbd_tcp_set_interfaces(char *ifc_list, int ifc_list_sz);
+ struct interface *ksmbd_find_netdev_name_iface_list(char *netdev_name);
++void ksmbd_free_transport(struct ksmbd_transport *kt);
+ int ksmbd_tcp_init(void);
+ void ksmbd_tcp_destroy(void);
+
+diff --git a/fs/smb/server/vfs.c b/fs/smb/server/vfs.c
+index 6890016e1923ed..9c765b97375170 100644
+--- a/fs/smb/server/vfs.c
++++ b/fs/smb/server/vfs.c
+@@ -496,7 +496,8 @@ int ksmbd_vfs_write(struct ksmbd_work *work, struct ksmbd_file *fp,
+ int err = 0;
+
+ if (work->conn->connection_type) {
+- if (!(fp->daccess & (FILE_WRITE_DATA_LE | FILE_APPEND_DATA_LE))) {
++ if (!(fp->daccess & (FILE_WRITE_DATA_LE | FILE_APPEND_DATA_LE)) ||
++ S_ISDIR(file_inode(fp->filp)->i_mode)) {
+ pr_err("no right to write(%pD)\n", fp->filp);
+ err = -EACCES;
+ goto out;
+diff --git a/include/drm/intel/pciids.h b/include/drm/intel/pciids.h
+index f9d3e85142ea88..750323e1947bda 100644
+--- a/include/drm/intel/pciids.h
++++ b/include/drm/intel/pciids.h
+@@ -847,6 +847,7 @@
+ MACRO__(0xE20C, ## __VA_ARGS__), \
+ MACRO__(0xE20D, ## __VA_ARGS__), \
+ MACRO__(0xE210, ## __VA_ARGS__), \
++ MACRO__(0xE211, ## __VA_ARGS__), \
+ MACRO__(0xE212, ## __VA_ARGS__), \
+ MACRO__(0xE215, ## __VA_ARGS__), \
+ MACRO__(0xE216, ## __VA_ARGS__)
+diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h
+index 8e7af9a03b41dd..e721148c95d07d 100644
+--- a/include/linux/backing-dev.h
++++ b/include/linux/backing-dev.h
+@@ -249,6 +249,7 @@ static inline struct bdi_writeback *inode_to_wb(const struct inode *inode)
+ {
+ #ifdef CONFIG_LOCKDEP
+ WARN_ON_ONCE(debug_locks &&
++ (inode->i_sb->s_iflags & SB_I_CGROUPWB) &&
+ (!lockdep_is_held(&inode->i_lock) &&
+ !lockdep_is_held(&inode->i_mapping->i_pages.xa_lock) &&
+ !lockdep_is_held(&inode->i_wb->list_lock)));
+diff --git a/include/linux/firmware/cirrus/cs_dsp_test_utils.h b/include/linux/firmware/cirrus/cs_dsp_test_utils.h
+index 4f87a908ab4f6b..ecd821ed8064f1 100644
+--- a/include/linux/firmware/cirrus/cs_dsp_test_utils.h
++++ b/include/linux/firmware/cirrus/cs_dsp_test_utils.h
+@@ -104,7 +104,6 @@ unsigned int cs_dsp_mock_num_dsp_words_to_num_packed_regs(unsigned int num_dsp_w
+ unsigned int cs_dsp_mock_xm_header_get_alg_base_in_words(struct cs_dsp_test *priv,
+ unsigned int alg_id,
+ int mem_type);
+-unsigned int cs_dsp_mock_xm_header_get_fw_version_from_regmap(struct cs_dsp_test *priv);
+ unsigned int cs_dsp_mock_xm_header_get_fw_version(struct cs_dsp_mock_xm_header *header);
+ void cs_dsp_mock_xm_header_drop_from_regmap_cache(struct cs_dsp_test *priv);
+ int cs_dsp_mock_xm_header_write_to_regmap(struct cs_dsp_mock_xm_header *header);
+diff --git a/include/linux/nfs.h b/include/linux/nfs.h
+index 9ad727ddfedb34..0906a0b40c6aa5 100644
+--- a/include/linux/nfs.h
++++ b/include/linux/nfs.h
+@@ -55,7 +55,6 @@ enum nfs3_stable_how {
+ NFS_INVALID_STABLE_HOW = -1
+ };
+
+-#ifdef CONFIG_CRC32
+ /**
+ * nfs_fhandle_hash - calculate the crc32 hash for the filehandle
+ * @fh - pointer to filehandle
+@@ -67,10 +66,4 @@ static inline u32 nfs_fhandle_hash(const struct nfs_fh *fh)
+ {
+ return ~crc32_le(0xFFFFFFFF, &fh->data[0], fh->size);
+ }
+-#else /* CONFIG_CRC32 */
+-static inline u32 nfs_fhandle_hash(const struct nfs_fh *fh)
+-{
+- return 0;
+-}
+-#endif /* CONFIG_CRC32 */
+ #endif /* _LINUX_NFS_H */
+diff --git a/include/uapi/drm/ivpu_accel.h b/include/uapi/drm/ivpu_accel.h
+index a35b97b097bf62..e778dc24cc3420 100644
+--- a/include/uapi/drm/ivpu_accel.h
++++ b/include/uapi/drm/ivpu_accel.h
+@@ -1,6 +1,6 @@
+ /* SPDX-License-Identifier: GPL-2.0-only WITH Linux-syscall-note */
+ /*
+- * Copyright (C) 2020-2024 Intel Corporation
++ * Copyright (C) 2020-2025 Intel Corporation
+ */
+
+ #ifndef __UAPI_IVPU_DRM_H__
+@@ -128,7 +128,7 @@ struct drm_ivpu_param {
+ * platform type when executing on a simulator or emulator (read-only)
+ *
+ * %DRM_IVPU_PARAM_CORE_CLOCK_RATE:
+- * Current PLL frequency (read-only)
++ * Maximum frequency of the NPU data processing unit clock (read-only)
+ *
+ * %DRM_IVPU_PARAM_NUM_CONTEXTS:
+ * Maximum number of simultaneously existing contexts (read-only)
+diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
+index af39b69eb4fde8..dbb94d93279be0 100644
+--- a/io_uring/rsrc.c
++++ b/io_uring/rsrc.c
+@@ -130,6 +130,18 @@ struct io_rsrc_node *io_rsrc_node_alloc(int type)
+ return node;
+ }
+
++static void io_clear_table_tags(struct io_rsrc_data *data)
++{
++ int i;
++
++ for (i = 0; i < data->nr; i++) {
++ struct io_rsrc_node *node = data->nodes[i];
++
++ if (node)
++ node->tag = 0;
++ }
++}
++
+ __cold void io_rsrc_data_free(struct io_ring_ctx *ctx, struct io_rsrc_data *data)
+ {
+ if (!data->nr)
+@@ -539,6 +551,7 @@ int io_sqe_files_register(struct io_ring_ctx *ctx, void __user *arg,
+ io_file_table_set_alloc_range(ctx, 0, ctx->file_table.data.nr);
+ return 0;
+ fail:
++ io_clear_table_tags(&ctx->file_table.data);
+ io_sqe_files_unregister(ctx);
+ return ret;
+ }
+@@ -855,8 +868,10 @@ int io_sqe_buffers_register(struct io_ring_ctx *ctx, void __user *arg,
+ }
+
+ ctx->buf_table = data;
+- if (ret)
++ if (ret) {
++ io_clear_table_tags(&ctx->buf_table);
+ io_sqe_buffers_unregister(ctx);
++ }
+ return ret;
+ }
+
+diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
+index 1a19d69b91ed3c..bcab867575bb5b 100644
+--- a/kernel/sched/cpufreq_schedutil.c
++++ b/kernel/sched/cpufreq_schedutil.c
+@@ -81,9 +81,20 @@ static bool sugov_should_update_freq(struct sugov_policy *sg_policy, u64 time)
+ if (!cpufreq_this_cpu_can_update(sg_policy->policy))
+ return false;
+
+- if (unlikely(sg_policy->limits_changed)) {
+- sg_policy->limits_changed = false;
+- sg_policy->need_freq_update = cpufreq_driver_test_flags(CPUFREQ_NEED_UPDATE_LIMITS);
++ if (unlikely(READ_ONCE(sg_policy->limits_changed))) {
++ WRITE_ONCE(sg_policy->limits_changed, false);
++ sg_policy->need_freq_update = true;
++
++ /*
++ * The above limits_changed update must occur before the reads
++ * of policy limits in cpufreq_driver_resolve_freq() or a policy
++ * limits update might be missed, so use a memory barrier to
++ * ensure it.
++ *
++ * This pairs with the write memory barrier in sugov_limits().
++ */
++ smp_mb();
++
+ return true;
+ }
+
+@@ -95,10 +106,22 @@ static bool sugov_should_update_freq(struct sugov_policy *sg_policy, u64 time)
+ static bool sugov_update_next_freq(struct sugov_policy *sg_policy, u64 time,
+ unsigned int next_freq)
+ {
+- if (sg_policy->need_freq_update)
++ if (sg_policy->need_freq_update) {
+ sg_policy->need_freq_update = false;
+- else if (sg_policy->next_freq == next_freq)
++ /*
++ * The policy limits have changed, but if the return value of
++ * cpufreq_driver_resolve_freq() after applying the new limits
++ * is still equal to the previously selected frequency, the
++ * driver callback need not be invoked unless the driver
++ * specifically wants that to happen on every update of the
++ * policy limits.
++ */
++ if (sg_policy->next_freq == next_freq &&
++ !cpufreq_driver_test_flags(CPUFREQ_NEED_UPDATE_LIMITS))
++ return false;
++ } else if (sg_policy->next_freq == next_freq) {
+ return false;
++ }
+
+ sg_policy->next_freq = next_freq;
+ sg_policy->last_freq_update_time = time;
+@@ -365,7 +388,7 @@ static inline bool sugov_hold_freq(struct sugov_cpu *sg_cpu) { return false; }
+ static inline void ignore_dl_rate_limit(struct sugov_cpu *sg_cpu)
+ {
+ if (cpu_bw_dl(cpu_rq(sg_cpu->cpu)) > sg_cpu->bw_min)
+- sg_cpu->sg_policy->limits_changed = true;
++ WRITE_ONCE(sg_cpu->sg_policy->limits_changed, true);
+ }
+
+ static inline bool sugov_update_single_common(struct sugov_cpu *sg_cpu,
+@@ -871,7 +894,16 @@ static void sugov_limits(struct cpufreq_policy *policy)
+ mutex_unlock(&sg_policy->work_lock);
+ }
+
+- sg_policy->limits_changed = true;
++ /*
++ * The limits_changed update below must take place before the updates
++ * of policy limits in cpufreq_set_policy() or a policy limits update
++ * might be missed, so use a memory barrier to ensure it.
++ *
++ * This pairs with the memory barrier in sugov_should_update_freq().
++ */
++ smp_wmb();
++
++ WRITE_ONCE(sg_policy->limits_changed, true);
+ }
+
+ struct cpufreq_governor schedutil_gov = {
+diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
+index 62d300eee7eb81..201e770185ddec 100644
+--- a/kernel/trace/ftrace.c
++++ b/kernel/trace/ftrace.c
+@@ -5912,9 +5912,10 @@ int register_ftrace_direct(struct ftrace_ops *ops, unsigned long addr)
+
+ /* Make a copy hash to place the new and the old entries in */
+ size = hash->count + direct_functions->count;
+- if (size > 32)
+- size = 32;
+- new_hash = alloc_ftrace_hash(fls(size));
++ size = fls(size);
++ if (size > FTRACE_HASH_MAX_BITS)
++ size = FTRACE_HASH_MAX_BITS;
++ new_hash = alloc_ftrace_hash(size);
+ if (!new_hash)
+ goto out_unlock;
+
+diff --git a/kernel/trace/trace_events_filter.c b/kernel/trace/trace_events_filter.c
+index 0993dfc1c5c165..2048560264bb48 100644
+--- a/kernel/trace/trace_events_filter.c
++++ b/kernel/trace/trace_events_filter.c
+@@ -808,7 +808,7 @@ static __always_inline char *test_string(char *str)
+ kstr = ubuf->buffer;
+
+ /* For safety, do not trust the string pointer */
+- if (!strncpy_from_kernel_nofault(kstr, str, USTRING_BUF_SIZE))
++ if (strncpy_from_kernel_nofault(kstr, str, USTRING_BUF_SIZE) < 0)
+ return NULL;
+ return kstr;
+ }
+@@ -827,7 +827,7 @@ static __always_inline char *test_ustring(char *str)
+
+ /* user space address? */
+ ustr = (char __user *)str;
+- if (!strncpy_from_user_nofault(kstr, ustr, USTRING_BUF_SIZE))
++ if (strncpy_from_user_nofault(kstr, ustr, USTRING_BUF_SIZE) < 0)
+ return NULL;
+
+ return kstr;
+diff --git a/lib/alloc_tag.c b/lib/alloc_tag.c
+index 19b45617bdcf69..513176e33242e1 100644
+--- a/lib/alloc_tag.c
++++ b/lib/alloc_tag.c
+@@ -422,11 +422,20 @@ static int vm_module_tags_populate(void)
+ unsigned long old_shadow_end = ALIGN(phys_end, MODULE_ALIGN);
+ unsigned long new_shadow_end = ALIGN(new_end, MODULE_ALIGN);
+ unsigned long more_pages;
+- unsigned long nr;
++ unsigned long nr = 0;
+
+ more_pages = ALIGN(new_end - phys_end, PAGE_SIZE) >> PAGE_SHIFT;
+- nr = alloc_pages_bulk_node(GFP_KERNEL | __GFP_NOWARN,
+- NUMA_NO_NODE, more_pages, next_page);
++ while (nr < more_pages) {
++ unsigned long allocated;
++
++ allocated = alloc_pages_bulk_node(GFP_KERNEL | __GFP_NOWARN,
++ NUMA_NO_NODE, more_pages - nr, next_page + nr);
++
++ if (!allocated)
++ break;
++ nr += allocated;
++ }
++
+ if (nr < more_pages ||
+ vmap_pages_range(phys_end, phys_end + (nr << PAGE_SHIFT), PAGE_KERNEL,
+ next_page, PAGE_SHIFT) < 0) {
+diff --git a/lib/iov_iter.c b/lib/iov_iter.c
+index 8c7fdb7d8c8fa3..bc9391e55d57ea 100644
+--- a/lib/iov_iter.c
++++ b/lib/iov_iter.c
+@@ -1191,7 +1191,7 @@ static ssize_t __iov_iter_get_pages_alloc(struct iov_iter *i,
+ return -ENOMEM;
+ p = *pages;
+ for (int k = 0; k < n; k++) {
+- struct folio *folio = page_folio(page);
++ struct folio *folio = page_folio(page + k);
+ p[k] = page + k;
+ if (!folio_test_slab(folio))
+ folio_get(folio);
+diff --git a/lib/string.c b/lib/string.c
+index eb4486ed40d259..b632c71df1a506 100644
+--- a/lib/string.c
++++ b/lib/string.c
+@@ -119,6 +119,7 @@ ssize_t sized_strscpy(char *dest, const char *src, size_t count)
+ if (count == 0 || WARN_ON_ONCE(count > INT_MAX))
+ return -E2BIG;
+
++#ifndef CONFIG_DCACHE_WORD_ACCESS
+ #ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
+ /*
+ * If src is unaligned, don't cross a page boundary,
+@@ -133,12 +134,14 @@ ssize_t sized_strscpy(char *dest, const char *src, size_t count)
+ /* If src or dest is unaligned, don't do word-at-a-time. */
+ if (((long) dest | (long) src) & (sizeof(long) - 1))
+ max = 0;
++#endif
+ #endif
+
+ /*
+- * read_word_at_a_time() below may read uninitialized bytes after the
+- * trailing zero and use them in comparisons. Disable this optimization
+- * under KMSAN to prevent false positive reports.
++ * load_unaligned_zeropad() or read_word_at_a_time() below may read
++ * uninitialized bytes after the trailing zero and use them in
++ * comparisons. Disable this optimization under KMSAN to prevent
++ * false positive reports.
+ */
+ if (IS_ENABLED(CONFIG_KMSAN))
+ max = 0;
+@@ -146,7 +149,11 @@ ssize_t sized_strscpy(char *dest, const char *src, size_t count)
+ while (max >= sizeof(unsigned long)) {
+ unsigned long c, data;
+
++#ifdef CONFIG_DCACHE_WORD_ACCESS
++ c = load_unaligned_zeropad(src+res);
++#else
+ c = read_word_at_a_time(src+res);
++#endif
+ if (has_zero(c, &data, &constants)) {
+ data = prep_zero_mask(c, data, &constants);
+ data = create_zero_mask(data);
+diff --git a/mm/compaction.c b/mm/compaction.c
+index a3203d97123ead..ae734de0038c45 100644
+--- a/mm/compaction.c
++++ b/mm/compaction.c
+@@ -981,13 +981,13 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
+ }
+
+ if (PageHuge(page)) {
++ const unsigned int order = compound_order(page);
+ /*
+ * skip hugetlbfs if we are not compacting for pages
+ * bigger than its order. THPs and other compound pages
+ * are handled below.
+ */
+ if (!cc->alloc_contig) {
+- const unsigned int order = compound_order(page);
+
+ if (order <= MAX_PAGE_ORDER) {
+ low_pfn += (1UL << order) - 1;
+@@ -1011,8 +1011,8 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
+ /* Do not report -EBUSY down the chain */
+ if (ret == -EBUSY)
+ ret = 0;
+- low_pfn += compound_nr(page) - 1;
+- nr_scanned += compound_nr(page) - 1;
++ low_pfn += (1UL << order) - 1;
++ nr_scanned += (1UL << order) - 1;
+ goto isolate_fail;
+ }
+
+diff --git a/mm/filemap.c b/mm/filemap.c
+index e9404290f2c638..cfcc98bc1aa257 100644
+--- a/mm/filemap.c
++++ b/mm/filemap.c
+@@ -2244,6 +2244,7 @@ unsigned filemap_get_folios_contig(struct address_space *mapping,
+ *start = folio->index + nr;
+ goto out;
+ }
++ xas_advance(&xas, folio_next_index(folio) - 1);
+ continue;
+ put_folio:
+ folio_put(folio);
+diff --git a/mm/gup.c b/mm/gup.c
+index 61e751baf862c5..4ededc1133583b 100644
+--- a/mm/gup.c
++++ b/mm/gup.c
+@@ -2210,8 +2210,8 @@ size_t fault_in_safe_writeable(const char __user *uaddr, size_t size)
+ } while (start != end);
+ mmap_read_unlock(mm);
+
+- if (size > (unsigned long)uaddr - start)
+- return size - ((unsigned long)uaddr - start);
++ if (size > start - (unsigned long)uaddr)
++ return size - (start - (unsigned long)uaddr);
+ return 0;
+ }
+ EXPORT_SYMBOL(fault_in_safe_writeable);
+diff --git a/mm/memory.c b/mm/memory.c
+index 53f7b0aaf2a332..21dea111ffb224 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -2904,11 +2904,11 @@ static int apply_to_pte_range(struct mm_struct *mm, pmd_t *pmd,
+ if (fn) {
+ do {
+ if (create || !pte_none(ptep_get(pte))) {
+- err = fn(pte++, addr, data);
++ err = fn(pte, addr, data);
+ if (err)
+ break;
+ }
+- } while (addr += PAGE_SIZE, addr != end);
++ } while (pte++, addr += PAGE_SIZE, addr != end);
+ }
+ *mask |= PGTBL_PTE_MODIFIED;
+
+diff --git a/mm/slub.c b/mm/slub.c
+index 1f50129dcfb3cd..96babca6b33036 100644
+--- a/mm/slub.c
++++ b/mm/slub.c
+@@ -1950,6 +1950,11 @@ static inline void handle_failed_objexts_alloc(unsigned long obj_exts,
+ #define OBJCGS_CLEAR_MASK (__GFP_DMA | __GFP_RECLAIMABLE | \
+ __GFP_ACCOUNT | __GFP_NOFAIL)
+
++static inline void init_slab_obj_exts(struct slab *slab)
++{
++ slab->obj_exts = 0;
++}
++
+ int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s,
+ gfp_t gfp, bool new_slab)
+ {
+@@ -2034,6 +2039,10 @@ static inline bool need_slab_obj_ext(void)
+
+ #else /* CONFIG_SLAB_OBJ_EXT */
+
++static inline void init_slab_obj_exts(struct slab *slab)
++{
++}
++
+ static int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s,
+ gfp_t gfp, bool new_slab)
+ {
+@@ -2601,6 +2610,7 @@ static struct slab *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
+ slab->objects = oo_objects(oo);
+ slab->inuse = 0;
+ slab->frozen = 0;
++ init_slab_obj_exts(slab);
+
+ account_slab(slab, oo_order(oo), s, flags);
+
+diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
+index d06453fa8abae6..4295a599d71494 100644
+--- a/mm/userfaultfd.c
++++ b/mm/userfaultfd.c
+@@ -1898,6 +1898,14 @@ struct vm_area_struct *userfaultfd_clear_vma(struct vma_iterator *vmi,
+ unsigned long end)
+ {
+ struct vm_area_struct *ret;
++ bool give_up_on_oom = false;
++
++ /*
++ * If we are modifying only and not splitting, just give up on the merge
++ * if OOM prevents us from merging successfully.
++ */
++ if (start == vma->vm_start && end == vma->vm_end)
++ give_up_on_oom = true;
+
+ /* Reset ptes for the whole vma range if wr-protected */
+ if (userfaultfd_wp(vma))
+@@ -1905,7 +1913,7 @@ struct vm_area_struct *userfaultfd_clear_vma(struct vma_iterator *vmi,
+
+ ret = vma_modify_flags_uffd(vmi, prev, vma, start, end,
+ vma->vm_flags & ~__VM_UFFD_FLAGS,
+- NULL_VM_UFFD_CTX);
++ NULL_VM_UFFD_CTX, give_up_on_oom);
+
+ /*
+ * In the vma_merge() successful mprotect-like case 8:
+@@ -1956,7 +1964,8 @@ int userfaultfd_register_range(struct userfaultfd_ctx *ctx,
+ new_flags = (vma->vm_flags & ~__VM_UFFD_FLAGS) | vm_flags;
+ vma = vma_modify_flags_uffd(&vmi, prev, vma, start, vma_end,
+ new_flags,
+- (struct vm_userfaultfd_ctx){ctx});
++ (struct vm_userfaultfd_ctx){ctx},
++ /* give_up_on_oom = */false);
+ if (IS_ERR(vma))
+ return PTR_ERR(vma);
+
+diff --git a/mm/vma.c b/mm/vma.c
+index 71ca012c616c99..b29323af68dd1c 100644
+--- a/mm/vma.c
++++ b/mm/vma.c
+@@ -903,7 +903,13 @@ static __must_check struct vm_area_struct *vma_merge_existing_range(
+ if (anon_dup)
+ unlink_anon_vmas(anon_dup);
+
+- vmg->state = VMA_MERGE_ERROR_NOMEM;
++ /*
++ * We've cleaned up any cloned anon_vma's, no VMAs have been
++ * modified, no harm no foul if the user requests that we not
++ * report this and just give up, leaving the VMAs unmerged.
++ */
++ if (!vmg->give_up_on_oom)
++ vmg->state = VMA_MERGE_ERROR_NOMEM;
+ return NULL;
+ }
+
+@@ -916,7 +922,15 @@ static __must_check struct vm_area_struct *vma_merge_existing_range(
+ abort:
+ vma_iter_set(vmg->vmi, start);
+ vma_iter_load(vmg->vmi);
+- vmg->state = VMA_MERGE_ERROR_NOMEM;
++
++ /*
++ * This means we have failed to clone anon_vma's correctly, but no
++ * actual changes to VMAs have occurred, so no harm no foul - if the
++ * user doesn't want this reported and instead just wants to give up on
++ * the merge, allow it.
++ */
++ if (!vmg->give_up_on_oom)
++ vmg->state = VMA_MERGE_ERROR_NOMEM;
+ return NULL;
+ }
+
+@@ -1076,9 +1090,15 @@ int vma_expand(struct vma_merge_struct *vmg)
+ return 0;
+
+ nomem:
+- vmg->state = VMA_MERGE_ERROR_NOMEM;
+ if (anon_dup)
+ unlink_anon_vmas(anon_dup);
++ /*
++	 * If the user requests that we just give up on OOM, we are safe to do so
++ * here, as commit merge provides this contract to us. Nothing has been
++ * changed - no harm no foul, just don't report it.
++ */
++ if (!vmg->give_up_on_oom)
++ vmg->state = VMA_MERGE_ERROR_NOMEM;
+ return -ENOMEM;
+ }
+
+@@ -1520,6 +1540,13 @@ static struct vm_area_struct *vma_modify(struct vma_merge_struct *vmg)
+ if (vmg_nomem(vmg))
+ return ERR_PTR(-ENOMEM);
+
++ /*
++ * Split can fail for reasons other than OOM, so if the user requests
++ * this it's probably a mistake.
++ */
++ VM_WARN_ON(vmg->give_up_on_oom &&
++ (vma->vm_start != start || vma->vm_end != end));
++
+ /* Split any preceding portion of the VMA. */
+ if (vma->vm_start < start) {
+ int err = split_vma(vmg->vmi, vma, start, 1);
+@@ -1588,12 +1615,15 @@ struct vm_area_struct
+ struct vm_area_struct *vma,
+ unsigned long start, unsigned long end,
+ unsigned long new_flags,
+- struct vm_userfaultfd_ctx new_ctx)
++ struct vm_userfaultfd_ctx new_ctx,
++ bool give_up_on_oom)
+ {
+ VMG_VMA_STATE(vmg, vmi, prev, vma, start, end);
+
+ vmg.flags = new_flags;
+ vmg.uffd_ctx = new_ctx;
++ if (give_up_on_oom)
++ vmg.give_up_on_oom = true;
+
+ return vma_modify(&vmg);
+ }
+diff --git a/mm/vma.h b/mm/vma.h
+index a2e8710b8c479e..df4793dac1b13f 100644
+--- a/mm/vma.h
++++ b/mm/vma.h
+@@ -87,6 +87,12 @@ struct vma_merge_struct {
+ struct anon_vma_name *anon_name;
+ enum vma_merge_flags merge_flags;
+ enum vma_merge_state state;
++
++ /*
++ * If a merge is possible, but an OOM error occurs, give up and don't
++ * execute the merge, returning NULL.
++ */
++ bool give_up_on_oom :1;
+ };
+
+ static inline bool vmg_nomem(struct vma_merge_struct *vmg)
+@@ -206,7 +212,8 @@ __must_check struct vm_area_struct
+ struct vm_area_struct *vma,
+ unsigned long start, unsigned long end,
+ unsigned long new_flags,
+- struct vm_userfaultfd_ctx new_ctx);
++ struct vm_userfaultfd_ctx new_ctx,
++ bool give_up_on_oom);
+
+ __must_check struct vm_area_struct
+ *vma_merge_new_range(struct vma_merge_struct *vmg);
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index e2bfbcee06a800..20d3cdcb14f6cd 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -6153,11 +6153,12 @@ static void process_adv_report(struct hci_dev *hdev, u8 type, bdaddr_t *bdaddr,
+ * event or send an immediate device found event if the data
+ * should not be stored for later.
+ */
+- if (!ext_adv && !has_pending_adv_report(hdev)) {
++ if (!has_pending_adv_report(hdev)) {
+ /* If the report will trigger a SCAN_REQ store it for
+ * later merging.
+ */
+- if (type == LE_ADV_IND || type == LE_ADV_SCAN_IND) {
++ if (!ext_adv && (type == LE_ADV_IND ||
++ type == LE_ADV_SCAN_IND)) {
+ store_pending_adv_report(hdev, bdaddr, bdaddr_type,
+ rssi, flags, data, len);
+ return;
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index c27ea70f71e1e1..a55388fbf07c84 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -3956,7 +3956,8 @@ static void l2cap_connect(struct l2cap_conn *conn, struct l2cap_cmd_hdr *cmd,
+
+ /* Check if the ACL is secure enough (if not SDP) */
+ if (psm != cpu_to_le16(L2CAP_PSM_SDP) &&
+- !hci_conn_check_link_mode(conn->hcon)) {
++ (!hci_conn_check_link_mode(conn->hcon) ||
++ !l2cap_check_enc_key_size(conn->hcon))) {
+ conn->disc_reason = HCI_ERROR_AUTH_FAILURE;
+ result = L2CAP_CR_SEC_BLOCK;
+ goto response;
+@@ -7503,8 +7504,24 @@ void l2cap_recv_acldata(struct hci_conn *hcon, struct sk_buff *skb, u16 flags)
+ if (skb->len > len) {
+ BT_ERR("Frame is too long (len %u, expected len %d)",
+ skb->len, len);
++ /* PTS test cases L2CAP/COS/CED/BI-14-C and BI-15-C
++ * (Multiple Signaling Command in one PDU, Data
++ * Truncated, BR/EDR) send a C-frame to the IUT with
++ * PDU Length set to 8 and Channel ID set to the
++ * correct signaling channel for the logical link.
++ * The Information payload contains one L2CAP_ECHO_REQ
++ * packet with Data Length set to 0 with 0 octets of
++ * echo data and one invalid command packet due to
++ * data truncated in PDU but present in HCI packet.
++ *
++	 * Shorten the socket buffer to the PDU length to
++ * allow to process valid commands from the PDU before
++ * setting the socket unreliable.
++ */
++ skb->len = len;
++ l2cap_recv_frame(conn, skb);
+ l2cap_conn_unreliable(conn, ECOMM);
+- goto drop;
++ goto unlock;
+ }
+
+ /* Append fragment into frame (with header) */
+diff --git a/net/bridge/br_vlan.c b/net/bridge/br_vlan.c
+index d9a69ec9affe59..939a3aa78d5c46 100644
+--- a/net/bridge/br_vlan.c
++++ b/net/bridge/br_vlan.c
+@@ -715,8 +715,8 @@ static int br_vlan_add_existing(struct net_bridge *br,
+ u16 flags, bool *changed,
+ struct netlink_ext_ack *extack)
+ {
+- bool would_change = __vlan_flags_would_change(vlan, flags);
+ bool becomes_brentry = false;
++ bool would_change = false;
+ int err;
+
+ if (!br_vlan_is_brentry(vlan)) {
+@@ -725,6 +725,8 @@ static int br_vlan_add_existing(struct net_bridge *br,
+ return -EINVAL;
+
+ becomes_brentry = true;
++ } else {
++ would_change = __vlan_flags_would_change(vlan, flags);
+ }
+
+ /* Master VLANs that aren't brentries weren't notified before,
+diff --git a/net/dsa/dsa.c b/net/dsa/dsa.c
+index e827775baf2ee1..436a7e1b412ade 100644
+--- a/net/dsa/dsa.c
++++ b/net/dsa/dsa.c
+@@ -862,6 +862,16 @@ static void dsa_tree_teardown_lags(struct dsa_switch_tree *dst)
+ kfree(dst->lags);
+ }
+
++static void dsa_tree_teardown_routing_table(struct dsa_switch_tree *dst)
++{
++ struct dsa_link *dl, *next;
++
++ list_for_each_entry_safe(dl, next, &dst->rtable, list) {
++ list_del(&dl->list);
++ kfree(dl);
++ }
++}
++
+ static int dsa_tree_setup(struct dsa_switch_tree *dst)
+ {
+ bool complete;
+@@ -879,7 +889,7 @@ static int dsa_tree_setup(struct dsa_switch_tree *dst)
+
+ err = dsa_tree_setup_cpu_ports(dst);
+ if (err)
+- return err;
++ goto teardown_rtable;
+
+ err = dsa_tree_setup_switches(dst);
+ if (err)
+@@ -911,14 +921,14 @@ static int dsa_tree_setup(struct dsa_switch_tree *dst)
+ dsa_tree_teardown_switches(dst);
+ teardown_cpu_ports:
+ dsa_tree_teardown_cpu_ports(dst);
++teardown_rtable:
++ dsa_tree_teardown_routing_table(dst);
+
+ return err;
+ }
+
+ static void dsa_tree_teardown(struct dsa_switch_tree *dst)
+ {
+- struct dsa_link *dl, *next;
+-
+ if (!dst->setup)
+ return;
+
+@@ -932,10 +942,7 @@ static void dsa_tree_teardown(struct dsa_switch_tree *dst)
+
+ dsa_tree_teardown_cpu_ports(dst);
+
+- list_for_each_entry_safe(dl, next, &dst->rtable, list) {
+- list_del(&dl->list);
+- kfree(dl);
+- }
++ dsa_tree_teardown_routing_table(dst);
+
+ pr_info("DSA: tree %d torn down\n", dst->index);
+
+@@ -1478,12 +1485,44 @@ static int dsa_switch_parse(struct dsa_switch *ds, struct dsa_chip_data *cd)
+
+ static void dsa_switch_release_ports(struct dsa_switch *ds)
+ {
++ struct dsa_mac_addr *a, *tmp;
+ struct dsa_port *dp, *next;
++ struct dsa_vlan *v, *n;
+
+ dsa_switch_for_each_port_safe(dp, next, ds) {
+- WARN_ON(!list_empty(&dp->fdbs));
+- WARN_ON(!list_empty(&dp->mdbs));
+- WARN_ON(!list_empty(&dp->vlans));
++ /* These are either entries that upper layers lost track of
++ * (probably due to bugs), or installed through interfaces
++ * where one does not necessarily have to remove them, like
++ * ndo_dflt_fdb_add().
++ */
++ list_for_each_entry_safe(a, tmp, &dp->fdbs, list) {
++ dev_info(ds->dev,
++ "Cleaning up unicast address %pM vid %u from port %d\n",
++ a->addr, a->vid, dp->index);
++ list_del(&a->list);
++ kfree(a);
++ }
++
++ list_for_each_entry_safe(a, tmp, &dp->mdbs, list) {
++ dev_info(ds->dev,
++ "Cleaning up multicast address %pM vid %u from port %d\n",
++ a->addr, a->vid, dp->index);
++ list_del(&a->list);
++ kfree(a);
++ }
++
++ /* These are entries that upper layers have lost track of,
++ * probably due to bugs, but also due to dsa_port_do_vlan_del()
++ * having failed and the VLAN entry still lingering on.
++ */
++ list_for_each_entry_safe(v, n, &dp->vlans, list) {
++ dev_info(ds->dev,
++ "Cleaning up vid %u from port %d\n",
++ v->vid, dp->index);
++ list_del(&v->list);
++ kfree(v);
++ }
++
+ list_del(&dp->list);
+ kfree(dp);
+ }
+diff --git a/net/dsa/tag_8021q.c b/net/dsa/tag_8021q.c
+index 3ee53e28ec2e9f..53e03fd8071b4a 100644
+--- a/net/dsa/tag_8021q.c
++++ b/net/dsa/tag_8021q.c
+@@ -197,7 +197,7 @@ static int dsa_port_do_tag_8021q_vlan_del(struct dsa_port *dp, u16 vid)
+
+ err = ds->ops->tag_8021q_vlan_del(ds, port, vid);
+ if (err) {
+- refcount_inc(&v->refcount);
++ refcount_set(&v->refcount, 1);
+ return err;
+ }
+
+diff --git a/net/ethtool/cmis_cdb.c b/net/ethtool/cmis_cdb.c
+index 0e2691ccb0df38..3057576bc81e3d 100644
+--- a/net/ethtool/cmis_cdb.c
++++ b/net/ethtool/cmis_cdb.c
+@@ -351,7 +351,7 @@ ethtool_cmis_module_poll(struct net_device *dev,
+ struct netlink_ext_ack extack = {};
+ int err;
+
+- ethtool_cmis_page_init(&page_data, 0, offset, sizeof(rpl));
++ ethtool_cmis_page_init(&page_data, 0, offset, sizeof(*rpl));
+ page_data.data = (u8 *)rpl;
+
+ err = ops->get_module_eeprom_by_page(dev, &page_data, &extack);
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index 08cee62e789e13..21eca985a1fd1e 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -1771,6 +1771,7 @@ static int rt6_insert_exception(struct rt6_info *nrt,
+ if (!err) {
+ spin_lock_bh(&f6i->fib6_table->tb6_lock);
+ fib6_update_sernum(net, f6i);
++ fib6_add_gc_list(f6i);
+ spin_unlock_bh(&f6i->fib6_table->tb6_lock);
+ fib6_force_start_gc(net);
+ }
+diff --git a/net/mac80211/iface.c b/net/mac80211/iface.c
+index 459fc391a4d932..d299bdbca6b3b4 100644
+--- a/net/mac80211/iface.c
++++ b/net/mac80211/iface.c
+@@ -660,6 +660,9 @@ static void ieee80211_do_stop(struct ieee80211_sub_if_data *sdata, bool going_do
+ if (sdata->vif.type == NL80211_IFTYPE_AP_VLAN)
+ ieee80211_txq_remove_vlan(local, sdata);
+
++ if (sdata->vif.txq)
++ ieee80211_txq_purge(sdata->local, to_txq_info(sdata->vif.txq));
++
+ sdata->bss = NULL;
+
+ if (local->open_count == 0)
+diff --git a/net/mctp/af_mctp.c b/net/mctp/af_mctp.c
+index f6de136008f6f9..57850d4dac5db9 100644
+--- a/net/mctp/af_mctp.c
++++ b/net/mctp/af_mctp.c
+@@ -630,6 +630,9 @@ static int mctp_sk_hash(struct sock *sk)
+ {
+ struct net *net = sock_net(sk);
+
++ /* Bind lookup runs under RCU, remain live during that. */
++ sock_set_flag(sk, SOCK_RCU_FREE);
++
+ mutex_lock(&net->mctp.bind_lock);
+ sk_add_node_rcu(sk, &net->mctp.binds);
+ mutex_unlock(&net->mctp.bind_lock);
+diff --git a/net/netfilter/nf_flow_table_core.c b/net/netfilter/nf_flow_table_core.c
+index 9d8361526f82ac..9441ac3d8c1a2e 100644
+--- a/net/netfilter/nf_flow_table_core.c
++++ b/net/netfilter/nf_flow_table_core.c
+@@ -383,8 +383,8 @@ static void flow_offload_del(struct nf_flowtable *flow_table,
+ void flow_offload_teardown(struct flow_offload *flow)
+ {
+ clear_bit(IPS_OFFLOAD_BIT, &flow->ct->status);
+- set_bit(NF_FLOW_TEARDOWN, &flow->flags);
+- flow_offload_fixup_ct(flow);
++ if (!test_and_set_bit(NF_FLOW_TEARDOWN, &flow->flags))
++ flow_offload_fixup_ct(flow);
+ }
+ EXPORT_SYMBOL_GPL(flow_offload_teardown);
+
+@@ -558,10 +558,12 @@ static void nf_flow_offload_gc_step(struct nf_flowtable *flow_table,
+
+ if (nf_flow_has_expired(flow) ||
+ nf_ct_is_dying(flow->ct) ||
+- nf_flow_custom_gc(flow_table, flow))
++ nf_flow_custom_gc(flow_table, flow)) {
+ flow_offload_teardown(flow);
+- else if (!teardown)
++ teardown = true;
++ } else if (!teardown) {
+ nf_flow_table_extend_ct_timeout(flow->ct);
++ }
+
+ if (teardown) {
+ if (test_bit(NF_FLOW_HW, &flow->flags)) {
+diff --git a/net/openvswitch/flow_netlink.c b/net/openvswitch/flow_netlink.c
+index 95e0dd14dc1a32..518be23e48ea93 100644
+--- a/net/openvswitch/flow_netlink.c
++++ b/net/openvswitch/flow_netlink.c
+@@ -2876,7 +2876,8 @@ static int validate_set(const struct nlattr *a,
+ size_t key_len;
+
+ /* There can be only one key in a action */
+- if (nla_total_size(nla_len(ovs_key)) != nla_len(a))
++ if (!nla_ok(ovs_key, nla_len(a)) ||
++ nla_total_size(nla_len(ovs_key)) != nla_len(a))
+ return -EINVAL;
+
+ key_len = nla_len(ovs_key);
+diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
+index 3e6cb35baf25af..3760131f148450 100644
+--- a/net/smc/af_smc.c
++++ b/net/smc/af_smc.c
+@@ -362,6 +362,9 @@ static void smc_destruct(struct sock *sk)
+ return;
+ }
+
++static struct lock_class_key smc_key;
++static struct lock_class_key smc_slock_key;
++
+ void smc_sk_init(struct net *net, struct sock *sk, int protocol)
+ {
+ struct smc_sock *smc = smc_sk(sk);
+@@ -375,6 +378,8 @@ void smc_sk_init(struct net *net, struct sock *sk, int protocol)
+ INIT_WORK(&smc->connect_work, smc_connect_work);
+ INIT_DELAYED_WORK(&smc->conn.tx_work, smc_tx_work);
+ INIT_LIST_HEAD(&smc->accept_q);
++ sock_lock_init_class_and_name(sk, "slock-AF_SMC", &smc_slock_key,
++ "sk_lock-AF_SMC", &smc_key);
+ spin_lock_init(&smc->accept_q_lock);
+ spin_lock_init(&smc->conn.send_lock);
+ sk->sk_prot->hash(sk);
+diff --git a/rust/Makefile b/rust/Makefile
+index 2c57c624fe7df0..c53a6959550196 100644
+--- a/rust/Makefile
++++ b/rust/Makefile
+@@ -334,7 +334,7 @@ $(obj)/bindings/bindings_helpers_generated.rs: private bindgen_target_extra = ;
+ $(obj)/bindings/bindings_helpers_generated.rs: $(src)/helpers/helpers.c FORCE
+ $(call if_changed_dep,bindgen)
+
+-rust_exports = $(NM) -p --defined-only $(1) | awk '$$2~/(T|R|D|B)/ && $$3!~/__cfi/ && $$3!~/__odr_asan/ { printf $(2),$$3 }'
++rust_exports = $(NM) -p --defined-only $(1) | awk '$$2~/(T|R|D|B)/ && $$3!~/__(pfx|cfi|odr_asan)/ { printf $(2),$$3 }'
+
+ quiet_cmd_exports = EXPORTS $@
+ cmd_exports = \
+diff --git a/rust/helpers/io.c b/rust/helpers/io.c
+index 4c2401ccd72078..15ea187c546625 100644
+--- a/rust/helpers/io.c
++++ b/rust/helpers/io.c
+@@ -7,94 +7,94 @@ void __iomem *rust_helper_ioremap(phys_addr_t offset, size_t size)
+ return ioremap(offset, size);
+ }
+
+-void rust_helper_iounmap(volatile void __iomem *addr)
++void rust_helper_iounmap(void __iomem *addr)
+ {
+ iounmap(addr);
+ }
+
+-u8 rust_helper_readb(const volatile void __iomem *addr)
++u8 rust_helper_readb(const void __iomem *addr)
+ {
+ return readb(addr);
+ }
+
+-u16 rust_helper_readw(const volatile void __iomem *addr)
++u16 rust_helper_readw(const void __iomem *addr)
+ {
+ return readw(addr);
+ }
+
+-u32 rust_helper_readl(const volatile void __iomem *addr)
++u32 rust_helper_readl(const void __iomem *addr)
+ {
+ return readl(addr);
+ }
+
+ #ifdef CONFIG_64BIT
+-u64 rust_helper_readq(const volatile void __iomem *addr)
++u64 rust_helper_readq(const void __iomem *addr)
+ {
+ return readq(addr);
+ }
+ #endif
+
+-void rust_helper_writeb(u8 value, volatile void __iomem *addr)
++void rust_helper_writeb(u8 value, void __iomem *addr)
+ {
+ writeb(value, addr);
+ }
+
+-void rust_helper_writew(u16 value, volatile void __iomem *addr)
++void rust_helper_writew(u16 value, void __iomem *addr)
+ {
+ writew(value, addr);
+ }
+
+-void rust_helper_writel(u32 value, volatile void __iomem *addr)
++void rust_helper_writel(u32 value, void __iomem *addr)
+ {
+ writel(value, addr);
+ }
+
+ #ifdef CONFIG_64BIT
+-void rust_helper_writeq(u64 value, volatile void __iomem *addr)
++void rust_helper_writeq(u64 value, void __iomem *addr)
+ {
+ writeq(value, addr);
+ }
+ #endif
+
+-u8 rust_helper_readb_relaxed(const volatile void __iomem *addr)
++u8 rust_helper_readb_relaxed(const void __iomem *addr)
+ {
+ return readb_relaxed(addr);
+ }
+
+-u16 rust_helper_readw_relaxed(const volatile void __iomem *addr)
++u16 rust_helper_readw_relaxed(const void __iomem *addr)
+ {
+ return readw_relaxed(addr);
+ }
+
+-u32 rust_helper_readl_relaxed(const volatile void __iomem *addr)
++u32 rust_helper_readl_relaxed(const void __iomem *addr)
+ {
+ return readl_relaxed(addr);
+ }
+
+ #ifdef CONFIG_64BIT
+-u64 rust_helper_readq_relaxed(const volatile void __iomem *addr)
++u64 rust_helper_readq_relaxed(const void __iomem *addr)
+ {
+ return readq_relaxed(addr);
+ }
+ #endif
+
+-void rust_helper_writeb_relaxed(u8 value, volatile void __iomem *addr)
++void rust_helper_writeb_relaxed(u8 value, void __iomem *addr)
+ {
+ writeb_relaxed(value, addr);
+ }
+
+-void rust_helper_writew_relaxed(u16 value, volatile void __iomem *addr)
++void rust_helper_writew_relaxed(u16 value, void __iomem *addr)
+ {
+ writew_relaxed(value, addr);
+ }
+
+-void rust_helper_writel_relaxed(u32 value, volatile void __iomem *addr)
++void rust_helper_writel_relaxed(u32 value, void __iomem *addr)
+ {
+ writel_relaxed(value, addr);
+ }
+
+ #ifdef CONFIG_64BIT
+-void rust_helper_writeq_relaxed(u64 value, volatile void __iomem *addr)
++void rust_helper_writeq_relaxed(u64 value, void __iomem *addr)
+ {
+ writeq_relaxed(value, addr);
+ }
+diff --git a/scripts/Makefile.compiler b/scripts/Makefile.compiler
+index 8c1029687e2e4f..75356d2acc0b56 100644
+--- a/scripts/Makefile.compiler
++++ b/scripts/Makefile.compiler
+@@ -75,8 +75,8 @@ ld-option = $(call try-run, $(LD) $(KBUILD_LDFLAGS) $(1) -v,$(1),$(2),$(3))
+ # Usage: MY_RUSTFLAGS += $(call __rustc-option,$(RUSTC),$(MY_RUSTFLAGS),-Cinstrument-coverage,-Zinstrument-coverage)
+ # TODO: remove RUSTC_BOOTSTRAP=1 when we raise the minimum GNU Make version to 4.4
+ __rustc-option = $(call try-run,\
+- echo '#![allow(missing_docs)]#![feature(no_core)]#![no_core]' | RUSTC_BOOTSTRAP=1\
+- $(1) --sysroot=/dev/null $(filter-out --sysroot=/dev/null,$(2)) $(3)\
++ echo '$(pound)![allow(missing_docs)]$(pound)![feature(no_core)]$(pound)![no_core]' | RUSTC_BOOTSTRAP=1\
++ $(1) --sysroot=/dev/null $(filter-out --sysroot=/dev/null --target=%,$(2)) $(3)\
+ --crate-type=rlib --out-dir=$(TMPOUT) --emit=obj=- - >/dev/null,$(3),$(4))
+
+ # rustc-option
+diff --git a/scripts/generate_rust_analyzer.py b/scripts/generate_rust_analyzer.py
+index adae71544cbd62..f2ff0f9542361c 100755
+--- a/scripts/generate_rust_analyzer.py
++++ b/scripts/generate_rust_analyzer.py
+@@ -97,6 +97,12 @@ def generate_crates(srctree, objtree, sysroot_src, external_src, cfgs):
+ ["core", "compiler_builtins"],
+ )
+
++ append_crate(
++ "ffi",
++ srctree / "rust" / "ffi.rs",
++ ["core", "compiler_builtins"],
++ )
++
+ def append_crate_with_generated(
+ display_name,
+ deps,
+@@ -116,9 +122,9 @@ def generate_crates(srctree, objtree, sysroot_src, external_src, cfgs):
+ "exclude_dirs": [],
+ }
+
+- append_crate_with_generated("bindings", ["core"])
+- append_crate_with_generated("uapi", ["core"])
+- append_crate_with_generated("kernel", ["core", "macros", "build_error", "bindings", "uapi"])
++ append_crate_with_generated("bindings", ["core", "ffi"])
++ append_crate_with_generated("uapi", ["core", "ffi"])
++ append_crate_with_generated("kernel", ["core", "macros", "build_error", "ffi", "bindings", "uapi"])
+
+ def is_root_crate(build_file, target):
+ try:
+diff --git a/sound/pci/hda/Kconfig b/sound/pci/hda/Kconfig
+index 84ebf19f288365..ddbbfd5b274f5c 100644
+--- a/sound/pci/hda/Kconfig
++++ b/sound/pci/hda/Kconfig
+@@ -96,9 +96,7 @@ config SND_HDA_CIRRUS_SCODEC
+
+ config SND_HDA_CIRRUS_SCODEC_KUNIT_TEST
+ tristate "KUnit test for Cirrus side-codec library" if !KUNIT_ALL_TESTS
+- select SND_HDA_CIRRUS_SCODEC
+- select GPIOLIB
+- depends on KUNIT
++ depends on SND_HDA_CIRRUS_SCODEC && GPIOLIB && KUNIT
+ default KUNIT_ALL_TESTS
+ help
+ This builds KUnit tests for the cirrus side-codec library.
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 8e482f6ecafea9..356df48c97309d 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -7924,6 +7924,7 @@ enum {
+ ALC233_FIXUP_MEDION_MTL_SPK,
+ ALC294_FIXUP_BASS_SPEAKER_15,
+ ALC283_FIXUP_DELL_HP_RESUME,
++ ALC294_FIXUP_ASUS_CS35L41_SPI_2,
+ };
+
+ /* A special fixup for Lenovo C940 and Yoga Duet 7;
+@@ -10278,6 +10279,12 @@ static const struct hda_fixup alc269_fixups[] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc283_fixup_dell_hp_resume,
+ },
++ [ALC294_FIXUP_ASUS_CS35L41_SPI_2] = {
++ .type = HDA_FIXUP_FUNC,
++ .v.func = cs35l41_fixup_spi_two,
++ .chained = true,
++ .chain_id = ALC294_FIXUP_ASUS_HEADSET_MIC,
++ },
+ };
+
+ static const struct hda_quirk alc269_fixup_tbl[] = {
+@@ -10763,7 +10770,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1043, 0x12a0, "ASUS X441UV", ALC233_FIXUP_EAPD_COEF_AND_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1043, 0x12a3, "Asus N7691ZM", ALC269_FIXUP_ASUS_N7601ZM),
+ SND_PCI_QUIRK(0x1043, 0x12af, "ASUS UX582ZS", ALC245_FIXUP_CS35L41_SPI_2),
+- SND_PCI_QUIRK(0x1043, 0x12b4, "ASUS B3405CCA / P3405CCA", ALC245_FIXUP_CS35L41_SPI_2),
++ SND_PCI_QUIRK(0x1043, 0x12b4, "ASUS B3405CCA / P3405CCA", ALC294_FIXUP_ASUS_CS35L41_SPI_2),
+ SND_PCI_QUIRK(0x1043, 0x12e0, "ASUS X541SA", ALC256_FIXUP_ASUS_MIC),
+ SND_PCI_QUIRK(0x1043, 0x12f0, "ASUS X541UV", ALC256_FIXUP_ASUS_MIC),
+ SND_PCI_QUIRK(0x1043, 0x1313, "Asus K42JZ", ALC269VB_FIXUP_ASUS_MIC_NO_PRESENCE),
+@@ -10853,14 +10860,14 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1043, 0x1fb3, "ASUS ROG Flow Z13 GZ302EA", ALC287_FIXUP_CS35L41_I2C_2),
+ SND_PCI_QUIRK(0x1043, 0x3011, "ASUS B5605CVA", ALC245_FIXUP_CS35L41_SPI_2),
+ SND_PCI_QUIRK(0x1043, 0x3030, "ASUS ZN270IE", ALC256_FIXUP_ASUS_AIO_GPIO2),
+- SND_PCI_QUIRK(0x1043, 0x3061, "ASUS B3405CCA", ALC245_FIXUP_CS35L41_SPI_2),
+- SND_PCI_QUIRK(0x1043, 0x3071, "ASUS B5405CCA", ALC245_FIXUP_CS35L41_SPI_2),
+- SND_PCI_QUIRK(0x1043, 0x30c1, "ASUS B3605CCA / P3605CCA", ALC245_FIXUP_CS35L41_SPI_2),
+- SND_PCI_QUIRK(0x1043, 0x30d1, "ASUS B5405CCA", ALC245_FIXUP_CS35L41_SPI_2),
+- SND_PCI_QUIRK(0x1043, 0x30e1, "ASUS B5605CCA", ALC245_FIXUP_CS35L41_SPI_2),
++ SND_PCI_QUIRK(0x1043, 0x3061, "ASUS B3405CCA", ALC294_FIXUP_ASUS_CS35L41_SPI_2),
++ SND_PCI_QUIRK(0x1043, 0x3071, "ASUS B5405CCA", ALC294_FIXUP_ASUS_CS35L41_SPI_2),
++ SND_PCI_QUIRK(0x1043, 0x30c1, "ASUS B3605CCA / P3605CCA", ALC294_FIXUP_ASUS_CS35L41_SPI_2),
++ SND_PCI_QUIRK(0x1043, 0x30d1, "ASUS B5405CCA", ALC294_FIXUP_ASUS_CS35L41_SPI_2),
++ SND_PCI_QUIRK(0x1043, 0x30e1, "ASUS B5605CCA", ALC294_FIXUP_ASUS_CS35L41_SPI_2),
+ SND_PCI_QUIRK(0x1043, 0x31d0, "ASUS Zen AIO 27 Z272SD_A272SD", ALC274_FIXUP_ASUS_ZEN_AIO_27),
+- SND_PCI_QUIRK(0x1043, 0x31e1, "ASUS B5605CCA", ALC245_FIXUP_CS35L41_SPI_2),
+- SND_PCI_QUIRK(0x1043, 0x31f1, "ASUS B3605CCA", ALC245_FIXUP_CS35L41_SPI_2),
++ SND_PCI_QUIRK(0x1043, 0x31e1, "ASUS B5605CCA", ALC294_FIXUP_ASUS_CS35L41_SPI_2),
++ SND_PCI_QUIRK(0x1043, 0x31f1, "ASUS B3605CCA", ALC294_FIXUP_ASUS_CS35L41_SPI_2),
+ SND_PCI_QUIRK(0x1043, 0x3a20, "ASUS G614JZR", ALC285_FIXUP_ASUS_SPI_REAR_SPEAKERS),
+ SND_PCI_QUIRK(0x1043, 0x3a30, "ASUS G814JVR/JIR", ALC285_FIXUP_ASUS_SPI_REAR_SPEAKERS),
+ SND_PCI_QUIRK(0x1043, 0x3a40, "ASUS G814JZR", ALC285_FIXUP_ASUS_SPI_REAR_SPEAKERS),
+diff --git a/sound/soc/codecs/cs42l43-jack.c b/sound/soc/codecs/cs42l43-jack.c
+index ac19a572fe70cb..20e6ab6f0d4ad7 100644
+--- a/sound/soc/codecs/cs42l43-jack.c
++++ b/sound/soc/codecs/cs42l43-jack.c
+@@ -702,6 +702,9 @@ static void cs42l43_clear_jack(struct cs42l43_codec *priv)
+ CS42L43_PGA_WIDESWING_MODE_EN_MASK, 0);
+ regmap_update_bits(cs42l43->regmap, CS42L43_STEREO_MIC_CTRL,
+ CS42L43_JACK_STEREO_CONFIG_MASK, 0);
++ regmap_update_bits(cs42l43->regmap, CS42L43_STEREO_MIC_CLAMP_CTRL,
++ CS42L43_SMIC_HPAMP_CLAMP_DIS_FRC_MASK,
++ CS42L43_SMIC_HPAMP_CLAMP_DIS_FRC_MASK);
+ regmap_update_bits(cs42l43->regmap, CS42L43_HS2,
+ CS42L43_HSDET_MODE_MASK | CS42L43_HSDET_MANUAL_MODE_MASK,
+ 0x2 << CS42L43_HSDET_MODE_SHIFT);
+diff --git a/sound/soc/codecs/lpass-wsa-macro.c b/sound/soc/codecs/lpass-wsa-macro.c
+index c989d82d1d3c17..81bab8299eae4b 100644
+--- a/sound/soc/codecs/lpass-wsa-macro.c
++++ b/sound/soc/codecs/lpass-wsa-macro.c
+@@ -63,6 +63,10 @@
+ #define CDC_WSA_TX_SPKR_PROT_CLK_DISABLE 0
+ #define CDC_WSA_TX_SPKR_PROT_PCM_RATE_MASK GENMASK(3, 0)
+ #define CDC_WSA_TX_SPKR_PROT_PCM_RATE_8K 0
++#define CDC_WSA_TX_SPKR_PROT_PCM_RATE_16K 1
++#define CDC_WSA_TX_SPKR_PROT_PCM_RATE_24K 2
++#define CDC_WSA_TX_SPKR_PROT_PCM_RATE_32K 3
++#define CDC_WSA_TX_SPKR_PROT_PCM_RATE_48K 4
+ #define CDC_WSA_TX0_SPKR_PROT_PATH_CFG0 (0x0248)
+ #define CDC_WSA_TX1_SPKR_PROT_PATH_CTL (0x0264)
+ #define CDC_WSA_TX1_SPKR_PROT_PATH_CFG0 (0x0268)
+@@ -407,6 +411,7 @@ struct wsa_macro {
+ int ear_spkr_gain;
+ int spkr_gain_offset;
+ int spkr_mode;
++ u32 pcm_rate_vi;
+ int is_softclip_on[WSA_MACRO_SOFTCLIP_MAX];
+ int softclip_clk_users[WSA_MACRO_SOFTCLIP_MAX];
+ struct regmap *regmap;
+@@ -1280,6 +1285,7 @@ static int wsa_macro_hw_params(struct snd_pcm_substream *substream,
+ struct snd_soc_dai *dai)
+ {
+ struct snd_soc_component *component = dai->component;
++ struct wsa_macro *wsa = snd_soc_component_get_drvdata(component);
+ int ret;
+
+ switch (substream->stream) {
+@@ -1291,6 +1297,11 @@ static int wsa_macro_hw_params(struct snd_pcm_substream *substream,
+ __func__, params_rate(params));
+ return ret;
+ }
++ break;
++ case SNDRV_PCM_STREAM_CAPTURE:
++ if (dai->id == WSA_MACRO_AIF_VI)
++ wsa->pcm_rate_vi = params_rate(params);
++
+ break;
+ default:
+ break;
+@@ -1448,35 +1459,11 @@ static void wsa_macro_mclk_enable(struct wsa_macro *wsa, bool mclk_enable)
+ }
+ }
+
+-static int wsa_macro_mclk_event(struct snd_soc_dapm_widget *w,
+- struct snd_kcontrol *kcontrol, int event)
++static void wsa_macro_enable_disable_vi_sense(struct snd_soc_component *component, bool enable,
++ u32 tx_reg0, u32 tx_reg1, u32 val)
+ {
+- struct snd_soc_component *component = snd_soc_dapm_to_component(w->dapm);
+- struct wsa_macro *wsa = snd_soc_component_get_drvdata(component);
+-
+- wsa_macro_mclk_enable(wsa, event == SND_SOC_DAPM_PRE_PMU);
+- return 0;
+-}
+-
+-static int wsa_macro_enable_vi_feedback(struct snd_soc_dapm_widget *w,
+- struct snd_kcontrol *kcontrol,
+- int event)
+-{
+- struct snd_soc_component *component = snd_soc_dapm_to_component(w->dapm);
+- struct wsa_macro *wsa = snd_soc_component_get_drvdata(component);
+- u32 tx_reg0, tx_reg1;
+-
+- if (test_bit(WSA_MACRO_TX0, &wsa->active_ch_mask[WSA_MACRO_AIF_VI])) {
+- tx_reg0 = CDC_WSA_TX0_SPKR_PROT_PATH_CTL;
+- tx_reg1 = CDC_WSA_TX1_SPKR_PROT_PATH_CTL;
+- } else if (test_bit(WSA_MACRO_TX1, &wsa->active_ch_mask[WSA_MACRO_AIF_VI])) {
+- tx_reg0 = CDC_WSA_TX2_SPKR_PROT_PATH_CTL;
+- tx_reg1 = CDC_WSA_TX3_SPKR_PROT_PATH_CTL;
+- }
+-
+- switch (event) {
+- case SND_SOC_DAPM_POST_PMU:
+- /* Enable V&I sensing */
++ if (enable) {
++ /* Enable V&I sensing */
+ snd_soc_component_update_bits(component, tx_reg0,
+ CDC_WSA_TX_SPKR_PROT_RESET_MASK,
+ CDC_WSA_TX_SPKR_PROT_RESET);
+@@ -1485,10 +1472,10 @@ static int wsa_macro_enable_vi_feedback(struct snd_soc_dapm_widget *w,
+ CDC_WSA_TX_SPKR_PROT_RESET);
+ snd_soc_component_update_bits(component, tx_reg0,
+ CDC_WSA_TX_SPKR_PROT_PCM_RATE_MASK,
+- CDC_WSA_TX_SPKR_PROT_PCM_RATE_8K);
++ val);
+ snd_soc_component_update_bits(component, tx_reg1,
+ CDC_WSA_TX_SPKR_PROT_PCM_RATE_MASK,
+- CDC_WSA_TX_SPKR_PROT_PCM_RATE_8K);
++ val);
+ snd_soc_component_update_bits(component, tx_reg0,
+ CDC_WSA_TX_SPKR_PROT_CLK_EN_MASK,
+ CDC_WSA_TX_SPKR_PROT_CLK_ENABLE);
+@@ -1501,9 +1488,7 @@ static int wsa_macro_enable_vi_feedback(struct snd_soc_dapm_widget *w,
+ snd_soc_component_update_bits(component, tx_reg1,
+ CDC_WSA_TX_SPKR_PROT_RESET_MASK,
+ CDC_WSA_TX_SPKR_PROT_NO_RESET);
+- break;
+- case SND_SOC_DAPM_POST_PMD:
+- /* Disable V&I sensing */
++ } else {
+ snd_soc_component_update_bits(component, tx_reg0,
+ CDC_WSA_TX_SPKR_PROT_RESET_MASK,
+ CDC_WSA_TX_SPKR_PROT_RESET);
+@@ -1516,6 +1501,72 @@ static int wsa_macro_enable_vi_feedback(struct snd_soc_dapm_widget *w,
+ snd_soc_component_update_bits(component, tx_reg1,
+ CDC_WSA_TX_SPKR_PROT_CLK_EN_MASK,
+ CDC_WSA_TX_SPKR_PROT_CLK_DISABLE);
++ }
++}
++
++static void wsa_macro_enable_disable_vi_feedback(struct snd_soc_component *component,
++ bool enable, u32 rate)
++{
++ struct wsa_macro *wsa = snd_soc_component_get_drvdata(component);
++
++ if (test_bit(WSA_MACRO_TX0, &wsa->active_ch_mask[WSA_MACRO_AIF_VI]))
++ wsa_macro_enable_disable_vi_sense(component, enable,
++ CDC_WSA_TX0_SPKR_PROT_PATH_CTL,
++ CDC_WSA_TX1_SPKR_PROT_PATH_CTL, rate);
++
++ if (test_bit(WSA_MACRO_TX1, &wsa->active_ch_mask[WSA_MACRO_AIF_VI]))
++ wsa_macro_enable_disable_vi_sense(component, enable,
++ CDC_WSA_TX2_SPKR_PROT_PATH_CTL,
++ CDC_WSA_TX3_SPKR_PROT_PATH_CTL, rate);
++}
++
++static int wsa_macro_mclk_event(struct snd_soc_dapm_widget *w,
++ struct snd_kcontrol *kcontrol, int event)
++{
++ struct snd_soc_component *component = snd_soc_dapm_to_component(w->dapm);
++ struct wsa_macro *wsa = snd_soc_component_get_drvdata(component);
++
++ wsa_macro_mclk_enable(wsa, event == SND_SOC_DAPM_PRE_PMU);
++ return 0;
++}
++
++static int wsa_macro_enable_vi_feedback(struct snd_soc_dapm_widget *w,
++ struct snd_kcontrol *kcontrol,
++ int event)
++{
++ struct snd_soc_component *component = snd_soc_dapm_to_component(w->dapm);
++ struct wsa_macro *wsa = snd_soc_component_get_drvdata(component);
++ u32 rate_val;
++
++ switch (wsa->pcm_rate_vi) {
++ case 8000:
++ rate_val = CDC_WSA_TX_SPKR_PROT_PCM_RATE_8K;
++ break;
++ case 16000:
++ rate_val = CDC_WSA_TX_SPKR_PROT_PCM_RATE_16K;
++ break;
++ case 24000:
++ rate_val = CDC_WSA_TX_SPKR_PROT_PCM_RATE_24K;
++ break;
++ case 32000:
++ rate_val = CDC_WSA_TX_SPKR_PROT_PCM_RATE_32K;
++ break;
++ case 48000:
++ rate_val = CDC_WSA_TX_SPKR_PROT_PCM_RATE_48K;
++ break;
++ default:
++ rate_val = CDC_WSA_TX_SPKR_PROT_PCM_RATE_8K;
++ break;
++ }
++
++ switch (event) {
++ case SND_SOC_DAPM_POST_PMU:
++ /* Enable V&I sensing */
++ wsa_macro_enable_disable_vi_feedback(component, true, rate_val);
++ break;
++ case SND_SOC_DAPM_POST_PMD:
++ /* Disable V&I sensing */
++ wsa_macro_enable_disable_vi_feedback(component, false, rate_val);
+ break;
+ }
+
+diff --git a/sound/soc/dwc/dwc-i2s.c b/sound/soc/dwc/dwc-i2s.c
+index 57b789d7fbedd4..5b4f20dbf7bba4 100644
+--- a/sound/soc/dwc/dwc-i2s.c
++++ b/sound/soc/dwc/dwc-i2s.c
+@@ -199,12 +199,10 @@ static void i2s_start(struct dw_i2s_dev *dev,
+ else
+ i2s_write_reg(dev->i2s_base, IRER, 1);
+
+- /* I2S needs to enable IRQ to make a handshake with DMAC on the JH7110 SoC */
+- if (dev->use_pio || dev->is_jh7110)
+- i2s_enable_irqs(dev, substream->stream, config->chan_nr);
+- else
++ if (!(dev->use_pio || dev->is_jh7110))
+ i2s_enable_dma(dev, substream->stream);
+
++ i2s_enable_irqs(dev, substream->stream, config->chan_nr);
+ i2s_write_reg(dev->i2s_base, CER, 1);
+ }
+
+@@ -218,11 +216,12 @@ static void i2s_stop(struct dw_i2s_dev *dev,
+ else
+ i2s_write_reg(dev->i2s_base, IRER, 0);
+
+- if (dev->use_pio || dev->is_jh7110)
+- i2s_disable_irqs(dev, substream->stream, 8);
+- else
++ if (!(dev->use_pio || dev->is_jh7110))
+ i2s_disable_dma(dev, substream->stream);
+
++ i2s_disable_irqs(dev, substream->stream, 8);
++
++
+ if (!dev->active) {
+ i2s_write_reg(dev->i2s_base, CER, 0);
+ i2s_write_reg(dev->i2s_base, IER, 0);
+diff --git a/sound/soc/fsl/fsl_qmc_audio.c b/sound/soc/fsl/fsl_qmc_audio.c
+index e257b8adafe095..ca67c5c5bef693 100644
+--- a/sound/soc/fsl/fsl_qmc_audio.c
++++ b/sound/soc/fsl/fsl_qmc_audio.c
+@@ -250,6 +250,9 @@ static int qmc_audio_pcm_trigger(struct snd_soc_component *component,
+ switch (cmd) {
+ case SNDRV_PCM_TRIGGER_START:
+ bitmap_zero(prtd->chans_pending, 64);
++ prtd->buffer_ended = 0;
++ prtd->ch_dma_addr_current = prtd->ch_dma_addr_start;
++
+ if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) {
+ for (i = 0; i < prtd->channels; i++)
+ prtd->qmc_dai->chans[i].prtd_tx = prtd;
+diff --git a/sound/soc/intel/avs/pcm.c b/sound/soc/intel/avs/pcm.c
+index 4bfbcb5a5ae8af..f7dd17849e97b8 100644
+--- a/sound/soc/intel/avs/pcm.c
++++ b/sound/soc/intel/avs/pcm.c
+@@ -927,7 +927,8 @@ static int avs_component_probe(struct snd_soc_component *component)
+ else
+ mach->tplg_filename = devm_kasprintf(adev->dev, GFP_KERNEL,
+ "hda-generic-tplg.bin");
+-
++ if (!mach->tplg_filename)
++ return -ENOMEM;
+ filename = kasprintf(GFP_KERNEL, "%s/%s", component->driver->topology_name_prefix,
+ mach->tplg_filename);
+ if (!filename)
+diff --git a/sound/soc/intel/boards/sof_sdw.c b/sound/soc/intel/boards/sof_sdw.c
+index 90dafa810b2ec0..095d08b3fc8249 100644
+--- a/sound/soc/intel/boards/sof_sdw.c
++++ b/sound/soc/intel/boards/sof_sdw.c
+@@ -764,6 +764,7 @@ static const struct dmi_system_id sof_sdw_quirk_table[] = {
+
+ static const struct snd_pci_quirk sof_sdw_ssid_quirk_table[] = {
+ SND_PCI_QUIRK(0x1043, 0x1e13, "ASUS Zenbook S14", SOC_SDW_CODEC_MIC),
++ SND_PCI_QUIRK(0x1043, 0x1f43, "ASUS Zenbook S16", SOC_SDW_CODEC_MIC),
+ {}
+ };
+
+diff --git a/sound/soc/qcom/lpass.h b/sound/soc/qcom/lpass.h
+index 27a2bf9a661393..de3ec6f594c11c 100644
+--- a/sound/soc/qcom/lpass.h
++++ b/sound/soc/qcom/lpass.h
+@@ -13,10 +13,11 @@
+ #include <linux/platform_device.h>
+ #include <linux/regmap.h>
+ #include <dt-bindings/sound/qcom,lpass.h>
++#include <dt-bindings/sound/qcom,q6afe.h>
+ #include "lpass-hdmi.h"
+
+ #define LPASS_AHBIX_CLOCK_FREQUENCY 131072000
+-#define LPASS_MAX_PORTS (LPASS_CDC_DMA_VA_TX8 + 1)
++#define LPASS_MAX_PORTS (DISPLAY_PORT_RX_7 + 1)
+ #define LPASS_MAX_MI2S_PORTS (8)
+ #define LPASS_MAX_DMA_CHANNELS (8)
+ #define LPASS_MAX_HDMI_DMA_CHANNELS (4)
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index 9f4c54fe6f56f5..7b535e119cafaa 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -226,6 +226,7 @@ static bool is_rust_noreturn(const struct symbol *func)
+ str_ends_with(func->name, "_4core9panicking14panic_nounwind") ||
+ str_ends_with(func->name, "_4core9panicking18panic_bounds_check") ||
+ str_ends_with(func->name, "_4core9panicking19assert_failed_inner") ||
++ str_ends_with(func->name, "_4core9panicking30panic_null_pointer_dereference") ||
+ str_ends_with(func->name, "_4core9panicking36panic_misaligned_pointer_dereference") ||
+ strstr(func->name, "_4core9panicking13assert_failed") ||
+ strstr(func->name, "_4core9panicking11panic_const24panic_const_") ||
+diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
+index 9cd78cdee6282f..c99eb9ff17ed6d 100644
+--- a/tools/perf/util/evsel.c
++++ b/tools/perf/util/evsel.c
+@@ -2556,25 +2556,6 @@ static bool evsel__detect_missing_features(struct evsel *evsel, struct perf_cpu
+ return false;
+ }
+
+-static bool evsel__handle_error_quirks(struct evsel *evsel, int error)
+-{
+- /*
+- * AMD core PMU tries to forward events with precise_ip to IBS PMU
+- * implicitly. But IBS PMU has more restrictions so it can fail with
+- * supported event attributes. Let's forward it back to the core PMU
+- * by clearing precise_ip only if it's from precise_max (:P).
+- */
+- if ((error == -EINVAL || error == -ENOENT) && x86__is_amd_cpu() &&
+- evsel->core.attr.precise_ip && evsel->precise_max) {
+- evsel->core.attr.precise_ip = 0;
+- pr_debug2_peo("removing precise_ip on AMD\n");
+- display_attr(&evsel->core.attr);
+- return true;
+- }
+-
+- return false;
+-}
+-
+ static int evsel__open_cpu(struct evsel *evsel, struct perf_cpu_map *cpus,
+ struct perf_thread_map *threads,
+ int start_cpu_map_idx, int end_cpu_map_idx)
+@@ -2720,9 +2701,6 @@ static int evsel__open_cpu(struct evsel *evsel, struct perf_cpu_map *cpus,
+ if (evsel__precise_ip_fallback(evsel))
+ goto retry_open;
+
+- if (evsel__handle_error_quirks(evsel, err))
+- goto retry_open;
+-
+ out_close:
+ if (err)
+ threads->err_thread = thread;
+diff --git a/tools/testing/kunit/qemu_configs/sh.py b/tools/testing/kunit/qemu_configs/sh.py
+index 78a474a5b95f3a..f00cb89fdef6aa 100644
+--- a/tools/testing/kunit/qemu_configs/sh.py
++++ b/tools/testing/kunit/qemu_configs/sh.py
+@@ -7,7 +7,9 @@ CONFIG_CPU_SUBTYPE_SH7751R=y
+ CONFIG_MEMORY_START=0x0c000000
+ CONFIG_SH_RTS7751R2D=y
+ CONFIG_RTS7751R2D_PLUS=y
+-CONFIG_SERIAL_SH_SCI=y''',
++CONFIG_SERIAL_SH_SCI=y
++CONFIG_CMDLINE_EXTEND=y
++''',
+ qemu_arch='sh4',
+ kernel_path='arch/sh/boot/zImage',
+ kernel_command_line='console=ttySC1',
+diff --git a/tools/testing/selftests/mincore/mincore_selftest.c b/tools/testing/selftests/mincore/mincore_selftest.c
+index e949a43a614508..0fd4b00bd345b5 100644
+--- a/tools/testing/selftests/mincore/mincore_selftest.c
++++ b/tools/testing/selftests/mincore/mincore_selftest.c
+@@ -286,8 +286,7 @@ TEST(check_file_mmap)
+
+ /*
+ * Test mincore() behavior on a page backed by a tmpfs file. This test
+- * performs the same steps as the previous one. However, we don't expect
+- * any readahead in this case.
++ * performs the same steps as the previous one.
+ */
+ TEST(check_tmpfs_mmap)
+ {
+@@ -298,7 +297,6 @@ TEST(check_tmpfs_mmap)
+ int page_size;
+ int fd;
+ int i;
+- int ra_pages = 0;
+
+ page_size = sysconf(_SC_PAGESIZE);
+ vec_size = FILE_SIZE / page_size;
+@@ -341,8 +339,7 @@ TEST(check_tmpfs_mmap)
+ }
+
+ /*
+- * Touch a page in the middle of the mapping. We expect only
+- * that page to be fetched into memory.
++ * Touch a page in the middle of the mapping.
+ */
+ addr[FILE_SIZE / 2] = 1;
+ retval = mincore(addr, FILE_SIZE, vec);
+@@ -351,15 +348,6 @@ TEST(check_tmpfs_mmap)
+ TH_LOG("Page not found in memory after use");
+ }
+
+- i = FILE_SIZE / 2 / page_size + 1;
+- while (i < vec_size && vec[i]) {
+- ra_pages++;
+- i++;
+- }
+- ASSERT_EQ(ra_pages, 0) {
+- TH_LOG("Read-ahead pages found in memory");
+- }
+-
+ munmap(addr, FILE_SIZE);
+ close(fd);
+ free(vec);
+diff --git a/tools/testing/selftests/mm/charge_reserved_hugetlb.sh b/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
+index 67df7b47087f03..e1fe16bcbbe880 100755
+--- a/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
++++ b/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
+@@ -29,7 +29,7 @@ fi
+ if [[ $cgroup2 ]]; then
+ cgroup_path=$(mount -t cgroup2 | head -1 | awk '{print $3}')
+ if [[ -z "$cgroup_path" ]]; then
+- cgroup_path=/dev/cgroup/memory
++ cgroup_path=$(mktemp -d)
+ mount -t cgroup2 none $cgroup_path
+ do_umount=1
+ fi
+@@ -37,7 +37,7 @@ if [[ $cgroup2 ]]; then
+ else
+ cgroup_path=$(mount -t cgroup | grep ",hugetlb" | awk '{print $3}')
+ if [[ -z "$cgroup_path" ]]; then
+- cgroup_path=/dev/cgroup/memory
++ cgroup_path=$(mktemp -d)
+ mount -t cgroup memory,hugetlb $cgroup_path
+ do_umount=1
+ fi
+diff --git a/tools/testing/selftests/mm/hugetlb_reparenting_test.sh b/tools/testing/selftests/mm/hugetlb_reparenting_test.sh
+index 11f9bbe7dc222b..0b0d4ba1af2771 100755
+--- a/tools/testing/selftests/mm/hugetlb_reparenting_test.sh
++++ b/tools/testing/selftests/mm/hugetlb_reparenting_test.sh
+@@ -23,7 +23,7 @@ fi
+ if [[ $cgroup2 ]]; then
+ CGROUP_ROOT=$(mount -t cgroup2 | head -1 | awk '{print $3}')
+ if [[ -z "$CGROUP_ROOT" ]]; then
+- CGROUP_ROOT=/dev/cgroup/memory
++ CGROUP_ROOT=$(mktemp -d)
+ mount -t cgroup2 none $CGROUP_ROOT
+ do_umount=1
+ fi
+diff --git a/tools/testing/shared/linux.c b/tools/testing/shared/linux.c
+index 66dbb362385f3c..0f97fb0d19e19c 100644
+--- a/tools/testing/shared/linux.c
++++ b/tools/testing/shared/linux.c
+@@ -150,7 +150,7 @@ void kmem_cache_free(struct kmem_cache *cachep, void *objp)
+ void kmem_cache_free_bulk(struct kmem_cache *cachep, size_t size, void **list)
+ {
+ if (kmalloc_verbose)
+- pr_debug("Bulk free %p[0-%lu]\n", list, size - 1);
++ pr_debug("Bulk free %p[0-%zu]\n", list, size - 1);
+
+ pthread_mutex_lock(&cachep->lock);
+ for (int i = 0; i < size; i++)
+@@ -168,7 +168,7 @@ int kmem_cache_alloc_bulk(struct kmem_cache *cachep, gfp_t gfp, size_t size,
+ size_t i;
+
+ if (kmalloc_verbose)
+- pr_debug("Bulk alloc %lu\n", size);
++ pr_debug("Bulk alloc %zu\n", size);
+
+ pthread_mutex_lock(&cachep->lock);
+ if (cachep->nr_objs >= size) {
* [gentoo-commits] proj/linux-patches:6.14 commit in: /
@ 2025-04-25 12:12 Mike Pagano
0 siblings, 0 replies; 21+ messages in thread
From: Mike Pagano @ 2025-04-25 12:12 UTC (permalink / raw
To: gentoo-commits
commit: 8e3b73428c77914a6e2bc8a4104b6627c4da2f60
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Apr 25 12:11:48 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Apr 25 12:11:48 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=8e3b7342
Adding BMQ 6.14-r0, thanks to holgerh for the fixup
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 8 +
5020_BMQ-and-PDS-io-scheduler-v6.14-r0.patch | 11514 +++++++++++++++++++++++++
5021_BMQ-and-PDS-gentoo-defaults.patch | 13 +
3 files changed, 11535 insertions(+)
diff --git a/0000_README b/0000_README
index 21d8b648..ec88ba02 100644
--- a/0000_README
+++ b/0000_README
@@ -105,3 +105,11 @@ Desc: Add Gentoo Linux support config settings and defaults.
Patch: 5010_enable-cpu-optimizations-universal.patch
From: https://github.com/graysky2/kernel_compiler_patch
Desc: Kernel >= 5.15 patch enables gcc = v11.1+ optimizations for additional CPUs.
+
+Patch: 5020_BMQ-and-PDS-io-scheduler-v6.14-r0.patch
+From: https://gitlab.com/alfredchen/projectc
+Desc: BMQ (BitMap Queue) Scheduler. A new CPU scheduler developed from PDS (included). Inspired by the scheduler in Zircon.
+
+Patch: 5021_BMQ-and-PDS-gentoo-defaults.patch
+From: https://gitweb.gentoo.org/proj/linux-patches.git/
+Desc: Set defaults for BMQ. Add archs as people test them; default to N.
diff --git a/5020_BMQ-and-PDS-io-scheduler-v6.14-r0.patch b/5020_BMQ-and-PDS-io-scheduler-v6.14-r0.patch
new file mode 100644
index 00000000..9060c27d
--- /dev/null
+++ b/5020_BMQ-and-PDS-io-scheduler-v6.14-r0.patch
@@ -0,0 +1,11514 @@
+diff --git a/Documentation/admin-guide/sysctl/kernel.rst b/Documentation/admin-guide/sysctl/kernel.rst
+index dd49a89a62d3..4118f8c92125 100644
+--- a/Documentation/admin-guide/sysctl/kernel.rst
++++ b/Documentation/admin-guide/sysctl/kernel.rst
+@@ -1700,3 +1700,12 @@ is 10 seconds.
+
+ The softlockup threshold is (``2 * watchdog_thresh``). Setting this
+ tunable to zero will disable lockup detection altogether.
++
++yield_type:
++===========
++
++BMQ/PDS CPU scheduler only. This determines what type of yield will be
++performed by calls to sched_yield().
++
++ 0 - No yield.
++ 1 - Requeue task. (default)
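As a quick, self-contained illustration, the sketch below flips this tunable from userspace. It assumes the conventional sysctl layout, i.e. that the entry is exposed as /proc/sys/kernel/yield_type (the path is inferred from the entry name, not stated in the patch), and it only applies to kernels built with CONFIG_SCHED_ALT:

    /* Hypothetical helper: read and set the BMQ/PDS yield_type sysctl.
     * The /proc/sys/kernel/yield_type path is an assumption based on the
     * documentation entry above; root privileges are needed to write it. */
    #include <stdio.h>

    int main(void)
    {
            FILE *f = fopen("/proc/sys/kernel/yield_type", "r+");
            int current;

            if (!f) {
                    perror("yield_type (is this a CONFIG_SCHED_ALT kernel?)");
                    return 1;
            }
            if (fscanf(f, "%d", &current) == 1)
                    printf("current yield_type: %d\n", current);

            rewind(f);
            fprintf(f, "1\n");      /* 1 = requeue task, the default */
            fclose(f);
            return 0;
    }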
+diff --git a/Documentation/scheduler/sched-BMQ.txt b/Documentation/scheduler/sched-BMQ.txt
+new file mode 100644
+index 000000000000..05c84eec0f31
+--- /dev/null
++++ b/Documentation/scheduler/sched-BMQ.txt
+@@ -0,0 +1,110 @@
++ BitMap queue CPU Scheduler
++ --------------------------
++
++CONTENT
++========
++
++ Background
++ Design
++ Overview
++ Task policy
++ Priority management
++ BitMap Queue
++ CPU Assignment and Migration
++
++
++Background
++==========
++
++BitMap Queue CPU scheduler, referred to as BMQ from here on, is an evolution
++of the previous Priority and Deadline based Skiplist multiple queue scheduler
++(PDS), and is inspired by the Zircon scheduler. Its goal is to keep the
++scheduler code simple while remaining efficient and scalable for interactive
++tasks such as desktop use, movie playback and gaming.
++
++Design
++======
++
++Overview
++--------
++
++BMQ uses a per-CPU run queue design: each (logical) CPU has its own run queue,
++and each CPU is responsible for scheduling the tasks that are put into its
++run queue.
++
++The run queue is a set of priority queues. In terms of data structure, these
++queues are FIFO queues for non-rt tasks and priority queues for rt tasks; see
++BitMap Queue below for details. BMQ is optimized for non-rt tasks, since most
++applications are non-rt tasks. Whether a queue is FIFO or priority based, each
++queue is an ordered list of runnable tasks awaiting execution, and the data
++structures are the same. When it is time for a new task to run, the scheduler
++simply looks for the lowest numbered queue that contains a task and runs the
++first task from the head of that queue. The per-CPU idle task is also kept in
++the run queue, so the scheduler can always find a task to run from its run
++queue.
++
++Each task is assigned the same timeslice (default 4 ms) when it is picked to
++start running. A task is reinserted at the end of the appropriate priority
++queue when it uses up its whole timeslice. When the scheduler selects a new
++task from the priority queue, it sets the CPU's preemption timer for the
++remainder of the previous timeslice. When that timer fires, the scheduler
++stops execution of that task, selects another task and starts over again.
++
++If a task blocks waiting for a shared resource, it is taken out of its
++priority queue and placed in a wait queue for the shared resource. When it is
++unblocked, it is reinserted into the appropriate priority queue of an eligible
++CPU.
++
++Task policy
++-----------
++
++BMQ supports the DEADLINE, FIFO, RR, NORMAL, BATCH and IDLE task policies like
++the mainline CFS scheduler. However, BMQ is heavily optimized for non-rt
++tasks, that is, NORMAL/BATCH/IDLE policy tasks. Below are the implementation
++details for each policy.
++
++DEADLINE
++ It is squashed into a priority 0 FIFO task.
++
++FIFO/RR
++ All RT tasks share a single priority queue in the BMQ run queue design. The
++complexity of the insert operation is O(n). BMQ is not designed for systems
++that mainly run rt policy tasks.
++
++NORMAL/BATCH/IDLE
++ BATCH and IDLE tasks are treated as the same policy. They compete for CPU
++with NORMAL policy tasks, but they just don't get boosted. To control the
++priority of NORMAL/BATCH/IDLE tasks, simply use nice levels.
++
++ISO
++ ISO policy is not supported in BMQ. Please use a nice level -20 NORMAL
++policy task instead.
++
++Priority management
++-------------------
++
++RT tasks have priorities from 0 to 99. For non-rt tasks, two factors are used
++to determine the effective priority of a task. The effective priority is what
++determines which queue the task will be in.
++
++The first factor is simply the task's static priority, which is assigned from
++the task's nice level: [-20, 19] from userland's point of view and [0, 39]
++internally.
++
++The second factor is the priority boost. This is a value bounded between
++[-MAX_PRIORITY_ADJ, MAX_PRIORITY_ADJ] used to offset the base priority; it is
++modified in the following cases:
++
++* When a thread has used up its entire timeslice, its boost value is always
++increased by one, deboosting it.
++* When a thread gives up CPU control (voluntarily or not) to reschedule, and
++its switch-in time (the time since it was last switched in and run) is below
++the threshold based on its priority boost, its boost value is decreased by
++one, boosting it, but the value stops at 0 (it won't go negative).
++
++The intent in this system is to ensure that interactive threads are serviced
++quickly. These are usually the threads that interact directly with the user
++and cause user-perceivable latency. These threads usually do little work and
++spend most of their time blocked awaiting another user event. So they get the
++priority boost from unblocking, while background threads that do most of the
++processing receive the priority penalty for using their entire timeslice.
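To make the model above concrete, here is a minimal, self-contained sketch of the pick path and boost handling that the document describes. The names, the single 32-bit bitmap and the boost bound are illustrative assumptions for this sketch only; the real implementation lives in kernel/sched/alt_core.c and uses the kernel's list_head, bitmap and per-runqueue locking primitives:

    /* One FIFO list per priority level plus a bitmap of non-empty levels;
     * picking the next task means finding the lowest set bit and taking
     * the head of that level's list. Illustrative model, not kernel code. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <strings.h>            /* ffs() stands in for find_first_bit() */

    #define NUM_LEVELS 32           /* illustrative; the real SCHED_LEVELS differs */
    #define MAX_ADJ 12              /* illustrative boost bound */

    struct demo_task {
            struct demo_task *next;
            int prio;               /* effective priority: static prio + boost */
    };

    struct demo_rq {
            unsigned int bitmap;    /* bit n set => level n has queued tasks */
            struct demo_task *heads[NUM_LEVELS];
    };

    /* Enqueue at the tail of the task's priority level, FIFO style. */
    static void demo_enqueue(struct demo_rq *rq, struct demo_task *p)
    {
            struct demo_task **pp = &rq->heads[p->prio];

            while (*pp)
                    pp = &(*pp)->next;
            p->next = NULL;
            *pp = p;
            rq->bitmap |= 1u << p->prio;
    }

    /* Pick: the lowest numbered non-empty level wins; take its head. */
    static struct demo_task *demo_pick_next(struct demo_rq *rq)
    {
            int level;
            struct demo_task *p;

            if (!rq->bitmap)
                    return NULL;    /* the real BMQ always finds the idle task */
            level = ffs(rq->bitmap) - 1;
            p = rq->heads[level];
            rq->heads[level] = p->next;
            if (!rq->heads[level])
                    rq->bitmap &= ~(1u << level);
            return p;
    }

    /* Boost handling as worded above: a full timeslice raises the boost
     * value (deboost), an early reschedule lowers it (boost), stopping at 0. */
    static int demo_adjust_boost(int boost, bool used_full_timeslice)
    {
            if (used_full_timeslice)
                    return boost < MAX_ADJ ? boost + 1 : boost;
            return boost > 0 ? boost - 1 : 0;
    }

In the real scheduler the same idea is expressed by the __SCHED_ENQUEUE_TASK/__SCHED_DEQUEUE_TASK macros and the find_first_bit() call in update_sched_preempt_mask() further down in this patch.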
+diff --git a/fs/proc/base.c b/fs/proc/base.c
+index cd89e956c322..ce367105a127 100644
+--- a/fs/proc/base.c
++++ b/fs/proc/base.c
+@@ -515,7 +515,7 @@ static int proc_pid_schedstat(struct seq_file *m, struct pid_namespace *ns,
+ seq_puts(m, "0 0 0\n");
+ else
+ seq_printf(m, "%llu %llu %lu\n",
+- (unsigned long long)task->se.sum_exec_runtime,
++ (unsigned long long)tsk_seruntime(task),
+ (unsigned long long)task->sched_info.run_delay,
+ task->sched_info.pcount);
+
+diff --git a/include/asm-generic/resource.h b/include/asm-generic/resource.h
+index 8874f681b056..59eb72bf7d5f 100644
+--- a/include/asm-generic/resource.h
++++ b/include/asm-generic/resource.h
+@@ -23,7 +23,7 @@
+ [RLIMIT_LOCKS] = { RLIM_INFINITY, RLIM_INFINITY }, \
+ [RLIMIT_SIGPENDING] = { 0, 0 }, \
+ [RLIMIT_MSGQUEUE] = { MQ_BYTES_MAX, MQ_BYTES_MAX }, \
+- [RLIMIT_NICE] = { 0, 0 }, \
++ [RLIMIT_NICE] = { 30, 30 }, \
+ [RLIMIT_RTPRIO] = { 0, 0 }, \
+ [RLIMIT_RTTIME] = { RLIM_INFINITY, RLIM_INFINITY }, \
+ }
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index 9c15365a30c0..8fb0a36b58b9 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -817,9 +817,13 @@ struct task_struct {
+ struct alloc_tag *alloc_tag;
+ #endif
+
+-#ifdef CONFIG_SMP
++#if defined(CONFIG_SMP) || defined(CONFIG_SCHED_ALT)
+ int on_cpu;
++#endif
++
++#ifdef CONFIG_SMP
+ struct __call_single_node wake_entry;
++#ifndef CONFIG_SCHED_ALT
+ unsigned int wakee_flips;
+ unsigned long wakee_flip_decay_ts;
+ struct task_struct *last_wakee;
+@@ -833,6 +837,7 @@ struct task_struct {
+ */
+ int recent_used_cpu;
+ int wake_cpu;
++#endif /* !CONFIG_SCHED_ALT */
+ #endif
+ int on_rq;
+
+@@ -841,6 +846,19 @@ struct task_struct {
+ int normal_prio;
+ unsigned int rt_priority;
+
++#ifdef CONFIG_SCHED_ALT
++ u64 last_ran;
++ s64 time_slice;
++ struct list_head sq_node;
++#ifdef CONFIG_SCHED_BMQ
++ int boost_prio;
++#endif /* CONFIG_SCHED_BMQ */
++#ifdef CONFIG_SCHED_PDS
++ u64 deadline;
++#endif /* CONFIG_SCHED_PDS */
++ /* sched_clock time spent running */
++ u64 sched_time;
++#else /* !CONFIG_SCHED_ALT */
+ struct sched_entity se;
+ struct sched_rt_entity rt;
+ struct sched_dl_entity dl;
+@@ -855,6 +873,7 @@ struct task_struct {
+ unsigned long core_cookie;
+ unsigned int core_occupation;
+ #endif
++#endif /* !CONFIG_SCHED_ALT */
+
+ #ifdef CONFIG_CGROUP_SCHED
+ struct task_group *sched_task_group;
+@@ -891,11 +910,15 @@ struct task_struct {
+ const cpumask_t *cpus_ptr;
+ cpumask_t *user_cpus_ptr;
+ cpumask_t cpus_mask;
++#ifndef CONFIG_SCHED_ALT
+ void *migration_pending;
++#endif
+ #ifdef CONFIG_SMP
+ unsigned short migration_disabled;
+ #endif
++#ifndef CONFIG_SCHED_ALT
+ unsigned short migration_flags;
++#endif
+
+ #ifdef CONFIG_PREEMPT_RCU
+ int rcu_read_lock_nesting;
+@@ -927,8 +950,10 @@ struct task_struct {
+
+ struct list_head tasks;
+ #ifdef CONFIG_SMP
++#ifndef CONFIG_SCHED_ALT
+ struct plist_node pushable_tasks;
+ struct rb_node pushable_dl_tasks;
++#endif
+ #endif
+
+ struct mm_struct *mm;
+@@ -1636,6 +1661,15 @@ struct task_struct {
+ */
+ };
+
++#ifdef CONFIG_SCHED_ALT
++#define tsk_seruntime(t) ((t)->sched_time)
++/* replace the uncertain rt_timeout with 0UL */
++#define tsk_rttimeout(t) (0UL)
++#else /* CFS */
++#define tsk_seruntime(t) ((t)->se.sum_exec_runtime)
++#define tsk_rttimeout(t) ((t)->rt.timeout)
++#endif /* !CONFIG_SCHED_ALT */
++
+ #define TASK_REPORT_IDLE (TASK_REPORT + 1)
+ #define TASK_REPORT_MAX (TASK_REPORT_IDLE << 1)
+
+@@ -2176,7 +2210,11 @@ static inline void set_task_cpu(struct task_struct *p, unsigned int cpu)
+
+ static inline bool task_is_runnable(struct task_struct *p)
+ {
++#ifdef CONFIG_SCHED_ALT
++ return p->on_rq;
++#else
+ return p->on_rq && !p->se.sched_delayed;
++#endif /* !CONFIG_SCHED_ALT */
+ }
+
+ extern bool sched_task_on_rq(struct task_struct *p);
+diff --git a/include/linux/sched/deadline.h b/include/linux/sched/deadline.h
+index 3a912ab42bb5..269a1513a153 100644
+--- a/include/linux/sched/deadline.h
++++ b/include/linux/sched/deadline.h
+@@ -2,6 +2,25 @@
+ #ifndef _LINUX_SCHED_DEADLINE_H
+ #define _LINUX_SCHED_DEADLINE_H
+
++#ifdef CONFIG_SCHED_ALT
++
++static inline int dl_task(struct task_struct *p)
++{
++ return 0;
++}
++
++#ifdef CONFIG_SCHED_BMQ
++#define __tsk_deadline(p) (0UL)
++#endif
++
++#ifdef CONFIG_SCHED_PDS
++#define __tsk_deadline(p) ((((u64) ((p)->prio))<<56) | (p)->deadline)
++#endif
++
++#else
++
++#define __tsk_deadline(p) ((p)->dl.deadline)
++
+ /*
+ * SCHED_DEADLINE tasks has negative priorities, reflecting
+ * the fact that any of them has higher prio than RT and
+@@ -23,6 +42,7 @@ static inline bool dl_task(struct task_struct *p)
+ {
+ return dl_prio(p->prio);
+ }
++#endif /* CONFIG_SCHED_ALT */
+
+ static inline bool dl_time_before(u64 a, u64 b)
+ {
+diff --git a/include/linux/sched/prio.h b/include/linux/sched/prio.h
+index 6ab43b4f72f9..ef1cff556c5e 100644
+--- a/include/linux/sched/prio.h
++++ b/include/linux/sched/prio.h
+@@ -19,6 +19,28 @@
+ #define MAX_PRIO (MAX_RT_PRIO + NICE_WIDTH)
+ #define DEFAULT_PRIO (MAX_RT_PRIO + NICE_WIDTH / 2)
+
++#ifdef CONFIG_SCHED_ALT
++
++/* Undefine MAX_PRIO and DEFAULT_PRIO */
++#undef MAX_PRIO
++#undef DEFAULT_PRIO
++
++/* +/- priority levels from the base priority */
++#ifdef CONFIG_SCHED_BMQ
++#define MAX_PRIORITY_ADJ (12)
++#endif
++
++#ifdef CONFIG_SCHED_PDS
++#define MAX_PRIORITY_ADJ (0)
++#endif
++
++#define MIN_NORMAL_PRIO (128)
++#define NORMAL_PRIO_NUM (64)
++#define MAX_PRIO (MIN_NORMAL_PRIO + NORMAL_PRIO_NUM)
++#define DEFAULT_PRIO (MAX_PRIO - MAX_PRIORITY_ADJ - NICE_WIDTH / 2)
++
++#endif /* CONFIG_SCHED_ALT */
++
+ /*
+ * Convert user-nice values [ -20 ... 0 ... 19 ]
+ * to static priority [ MAX_RT_PRIO..MAX_PRIO-1 ],
+diff --git a/include/linux/sched/rt.h b/include/linux/sched/rt.h
+index 4e3338103654..6dfef878fe3b 100644
+--- a/include/linux/sched/rt.h
++++ b/include/linux/sched/rt.h
+@@ -45,8 +45,10 @@ static inline bool rt_or_dl_task_policy(struct task_struct *tsk)
+
+ if (policy == SCHED_FIFO || policy == SCHED_RR)
+ return true;
++#ifndef CONFIG_SCHED_ALT
+ if (policy == SCHED_DEADLINE)
+ return true;
++#endif
+ return false;
+ }
+
+diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
+index 7f3dbafe1817..ad4de86601d1 100644
+--- a/include/linux/sched/topology.h
++++ b/include/linux/sched/topology.h
+@@ -239,7 +239,8 @@ static inline bool cpus_share_resources(int this_cpu, int that_cpu)
+
+ #endif /* !CONFIG_SMP */
+
+-#if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_CPU_FREQ_GOV_SCHEDUTIL)
++#if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_CPU_FREQ_GOV_SCHEDUTIL) && \
++ !defined(CONFIG_SCHED_ALT)
+ extern void rebuild_sched_domains_energy(void);
+ #else
+ static inline void rebuild_sched_domains_energy(void)
+diff --git a/init/Kconfig b/init/Kconfig
+index 324c2886b2ea..e3bfb3ef42b5 100644
+--- a/init/Kconfig
++++ b/init/Kconfig
+@@ -664,6 +664,7 @@ config TASK_IO_ACCOUNTING
+
+ config PSI
+ bool "Pressure stall information tracking"
++ depends on !SCHED_ALT
+ select KERNFS
+ help
+ Collect metrics that indicate how overcommitted the CPU, memory,
+@@ -875,6 +876,35 @@ config UCLAMP_BUCKETS_COUNT
+
+ If in doubt, use the default value.
+
++menuconfig SCHED_ALT
++ bool "Alternative CPU Schedulers"
++ default y
++ help
++	  This feature enables the alternative CPU schedulers.
++
++if SCHED_ALT
++
++choice
++ prompt "Alternative CPU Scheduler"
++ default SCHED_BMQ
++
++config SCHED_BMQ
++ bool "BMQ CPU scheduler"
++ help
++ The BitMap Queue CPU scheduler for excellent interactivity and
++ responsiveness on the desktop and solid scalability on normal
++ hardware and commodity servers.
++
++config SCHED_PDS
++ bool "PDS CPU scheduler"
++ help
++ The Priority and Deadline based Skip list multiple queue CPU
++ Scheduler.
++
++endchoice
++
++endif
++
+ endmenu
+
+ #
+@@ -940,6 +970,7 @@ config NUMA_BALANCING
+ depends on ARCH_SUPPORTS_NUMA_BALANCING
+ depends on !ARCH_WANT_NUMA_VARIABLE_LOCALITY
+ depends on SMP && NUMA && MIGRATION && !PREEMPT_RT
++ depends on !SCHED_ALT
+ help
+ This option adds support for automatic NUMA aware memory/task placement.
+ The mechanism is quite primitive and is based on migrating memory when
+@@ -1357,6 +1388,7 @@ config CHECKPOINT_RESTORE
+
+ config SCHED_AUTOGROUP
+ bool "Automatic process group scheduling"
++ depends on !SCHED_ALT
+ select CGROUPS
+ select CGROUP_SCHED
+ select FAIR_GROUP_SCHED
+diff --git a/init/init_task.c b/init/init_task.c
+index e557f622bd90..99e59c2082e0 100644
+--- a/init/init_task.c
++++ b/init/init_task.c
+@@ -72,9 +72,16 @@ struct task_struct init_task __aligned(L1_CACHE_BYTES) = {
+ .stack = init_stack,
+ .usage = REFCOUNT_INIT(2),
+ .flags = PF_KTHREAD,
++#ifdef CONFIG_SCHED_ALT
++ .on_cpu = 1,
++ .prio = DEFAULT_PRIO,
++ .static_prio = DEFAULT_PRIO,
++ .normal_prio = DEFAULT_PRIO,
++#else
+ .prio = MAX_PRIO - 20,
+ .static_prio = MAX_PRIO - 20,
+ .normal_prio = MAX_PRIO - 20,
++#endif
+ .policy = SCHED_NORMAL,
+ .cpus_ptr = &init_task.cpus_mask,
+ .user_cpus_ptr = NULL,
+@@ -87,6 +94,16 @@ struct task_struct init_task __aligned(L1_CACHE_BYTES) = {
+ .restart_block = {
+ .fn = do_no_restart_syscall,
+ },
++#ifdef CONFIG_SCHED_ALT
++ .sq_node = LIST_HEAD_INIT(init_task.sq_node),
++#ifdef CONFIG_SCHED_BMQ
++ .boost_prio = 0,
++#endif
++#ifdef CONFIG_SCHED_PDS
++ .deadline = 0,
++#endif
++ .time_slice = HZ,
++#else
+ .se = {
+ .group_node = LIST_HEAD_INIT(init_task.se.group_node),
+ },
+@@ -94,10 +111,13 @@ struct task_struct init_task __aligned(L1_CACHE_BYTES) = {
+ .run_list = LIST_HEAD_INIT(init_task.rt.run_list),
+ .time_slice = RR_TIMESLICE,
+ },
++#endif
+ .tasks = LIST_HEAD_INIT(init_task.tasks),
++#ifndef CONFIG_SCHED_ALT
+ #ifdef CONFIG_SMP
+ .pushable_tasks = PLIST_NODE_INIT(init_task.pushable_tasks, MAX_PRIO),
+ #endif
++#endif
+ #ifdef CONFIG_CGROUP_SCHED
+ .sched_task_group = &root_task_group,
+ #endif
+diff --git a/kernel/Kconfig.preempt b/kernel/Kconfig.preempt
+index 54ea59ff8fbe..a6d3560cef75 100644
+--- a/kernel/Kconfig.preempt
++++ b/kernel/Kconfig.preempt
+@@ -134,7 +134,7 @@ config PREEMPT_DYNAMIC
+
+ config SCHED_CORE
+ bool "Core Scheduling for SMT"
+- depends on SCHED_SMT
++ depends on SCHED_SMT && !SCHED_ALT
+ help
+ This option permits Core Scheduling, a means of coordinated task
+ selection across SMT siblings. When enabled -- see
+@@ -152,7 +152,7 @@ config SCHED_CORE
+
+ config SCHED_CLASS_EXT
+ bool "Extensible Scheduling Class"
+- depends on BPF_SYSCALL && BPF_JIT && DEBUG_INFO_BTF
++ depends on BPF_SYSCALL && BPF_JIT && DEBUG_INFO_BTF && !SCHED_ALT
+ select STACKTRACE if STACKTRACE_SUPPORT
+ help
+ This option enables a new scheduler class sched_ext (SCX), which
+diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
+index 0f910c828973..68f1a692eefc 100644
+--- a/kernel/cgroup/cpuset.c
++++ b/kernel/cgroup/cpuset.c
+@@ -643,7 +643,7 @@ static int validate_change(struct cpuset *cur, struct cpuset *trial)
+ return ret;
+ }
+
+-#ifdef CONFIG_SMP
++#if defined(CONFIG_SMP) && !defined(CONFIG_SCHED_ALT)
+ /*
+ * Helper routine for generate_sched_domains().
+ * Do cpusets a, b have overlapping effective cpus_allowed masks?
+@@ -1063,7 +1063,7 @@ void rebuild_sched_domains_locked(void)
+ /* Have scheduler rebuild the domains */
+ partition_and_rebuild_sched_domains(ndoms, doms, attr);
+ }
+-#else /* !CONFIG_SMP */
++#else /* !CONFIG_SMP || CONFIG_SCHED_ALT */
+ void rebuild_sched_domains_locked(void)
+ {
+ }
+@@ -2949,12 +2949,15 @@ static int cpuset_can_attach(struct cgroup_taskset *tset)
+ goto out_unlock;
+ }
+
++#ifndef CONFIG_SCHED_ALT
+ if (dl_task(task)) {
+ cs->nr_migrate_dl_tasks++;
+ cs->sum_migrate_dl_bw += task->dl.dl_bw;
+ }
++#endif
+ }
+
++#ifndef CONFIG_SCHED_ALT
+ if (!cs->nr_migrate_dl_tasks)
+ goto out_success;
+
+@@ -2975,6 +2978,7 @@ static int cpuset_can_attach(struct cgroup_taskset *tset)
+ }
+
+ out_success:
++#endif
+ /*
+ * Mark attach is in progress. This makes validate_change() fail
+ * changes which zero cpus/mems_allowed.
+@@ -2996,12 +3000,14 @@ static void cpuset_cancel_attach(struct cgroup_taskset *tset)
+ mutex_lock(&cpuset_mutex);
+ dec_attach_in_progress_locked(cs);
+
++#ifndef CONFIG_SCHED_ALT
+ if (cs->nr_migrate_dl_tasks) {
+ int cpu = cpumask_any(cs->effective_cpus);
+
+ dl_bw_free(cpu, cs->sum_migrate_dl_bw);
+ reset_migrate_dl_data(cs);
+ }
++#endif
+
+ mutex_unlock(&cpuset_mutex);
+ }
+diff --git a/kernel/delayacct.c b/kernel/delayacct.c
+index eb63a021ac04..950c053dfecb 100644
+--- a/kernel/delayacct.c
++++ b/kernel/delayacct.c
+@@ -155,7 +155,7 @@ int delayacct_add_tsk(struct taskstats *d, struct task_struct *tsk)
+ */
+ t1 = tsk->sched_info.pcount;
+ t2 = tsk->sched_info.run_delay;
+- t3 = tsk->se.sum_exec_runtime;
++ t3 = tsk_seruntime(tsk);
+
+ d->cpu_count += t1;
+
+diff --git a/kernel/exit.c b/kernel/exit.c
+index 3485e5fc499e..34193fe7ff67 100644
+--- a/kernel/exit.c
++++ b/kernel/exit.c
+@@ -174,7 +174,7 @@ static void __exit_signal(struct task_struct *tsk)
+ sig->curr_target = next_thread(tsk);
+ }
+
+- add_device_randomness((const void*) &tsk->se.sum_exec_runtime,
++ add_device_randomness((const void*) &tsk_seruntime(tsk),
+ sizeof(unsigned long long));
+
+ /*
+@@ -195,7 +195,7 @@ static void __exit_signal(struct task_struct *tsk)
+ sig->inblock += task_io_get_inblock(tsk);
+ sig->oublock += task_io_get_oublock(tsk);
+ task_io_accounting_add(&sig->ioac, &tsk->ioac);
+- sig->sum_sched_runtime += tsk->se.sum_exec_runtime;
++ sig->sum_sched_runtime += tsk_seruntime(tsk);
+ sig->nr_threads--;
+ __unhash_process(tsk, group_dead);
+ write_sequnlock(&sig->stats_lock);
+diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
+index 4a8df1800cbb..fec4af3f0c11 100644
+--- a/kernel/locking/rtmutex.c
++++ b/kernel/locking/rtmutex.c
+@@ -365,7 +365,7 @@ waiter_update_prio(struct rt_mutex_waiter *waiter, struct task_struct *task)
+ lockdep_assert(RB_EMPTY_NODE(&waiter->tree.entry));
+
+ waiter->tree.prio = __waiter_prio(task);
+- waiter->tree.deadline = task->dl.deadline;
++ waiter->tree.deadline = __tsk_deadline(task);
+ }
+
+ /*
+@@ -386,16 +386,20 @@ waiter_clone_prio(struct rt_mutex_waiter *waiter, struct task_struct *task)
+ * Only use with rt_waiter_node_{less,equal}()
+ */
+ #define task_to_waiter_node(p) \
+- &(struct rt_waiter_node){ .prio = __waiter_prio(p), .deadline = (p)->dl.deadline }
++ &(struct rt_waiter_node){ .prio = __waiter_prio(p), .deadline = __tsk_deadline(p) }
+ #define task_to_waiter(p) \
+ &(struct rt_mutex_waiter){ .tree = *task_to_waiter_node(p) }
+
+ static __always_inline int rt_waiter_node_less(struct rt_waiter_node *left,
+ struct rt_waiter_node *right)
+ {
++#ifdef CONFIG_SCHED_PDS
++ return (left->deadline < right->deadline);
++#else
+ if (left->prio < right->prio)
+ return 1;
+
++#ifndef CONFIG_SCHED_BMQ
+ /*
+ * If both waiters have dl_prio(), we check the deadlines of the
+ * associated tasks.
+@@ -404,16 +408,22 @@ static __always_inline int rt_waiter_node_less(struct rt_waiter_node *left,
+ */
+ if (dl_prio(left->prio))
+ return dl_time_before(left->deadline, right->deadline);
++#endif
+
+ return 0;
++#endif
+ }
+
+ static __always_inline int rt_waiter_node_equal(struct rt_waiter_node *left,
+ struct rt_waiter_node *right)
+ {
++#ifdef CONFIG_SCHED_PDS
++ return (left->deadline == right->deadline);
++#else
+ if (left->prio != right->prio)
+ return 0;
+
++#ifndef CONFIG_SCHED_BMQ
+ /*
+ * If both waiters have dl_prio(), we check the deadlines of the
+ * associated tasks.
+@@ -422,8 +432,10 @@ static __always_inline int rt_waiter_node_equal(struct rt_waiter_node *left,
+ */
+ if (dl_prio(left->prio))
+ return left->deadline == right->deadline;
++#endif
+
+ return 1;
++#endif
+ }
+
+ static inline bool rt_mutex_steal(struct rt_mutex_waiter *waiter,
+diff --git a/kernel/locking/ww_mutex.h b/kernel/locking/ww_mutex.h
+index 37f025a096c9..45ae7a6fd9ac 100644
+--- a/kernel/locking/ww_mutex.h
++++ b/kernel/locking/ww_mutex.h
+@@ -247,6 +247,7 @@ __ww_ctx_less(struct ww_acquire_ctx *a, struct ww_acquire_ctx *b)
+
+ /* equal static prio */
+
++#ifndef CONFIG_SCHED_ALT
+ if (dl_prio(a_prio)) {
+ if (dl_time_before(b->task->dl.deadline,
+ a->task->dl.deadline))
+@@ -256,6 +257,7 @@ __ww_ctx_less(struct ww_acquire_ctx *a, struct ww_acquire_ctx *b)
+ b->task->dl.deadline))
+ return false;
+ }
++#endif
+
+ /* equal prio */
+ }
+diff --git a/kernel/sched/Makefile b/kernel/sched/Makefile
+index 976092b7bd45..31d587c16ec1 100644
+--- a/kernel/sched/Makefile
++++ b/kernel/sched/Makefile
+@@ -28,7 +28,12 @@ endif
+ # These compilation units have roughly the same size and complexity - so their
+ # build parallelizes well and finishes roughly at once:
+ #
++ifdef CONFIG_SCHED_ALT
++obj-y += alt_core.o
++obj-$(CONFIG_SCHED_DEBUG) += alt_debug.o
++else
+ obj-y += core.o
+ obj-y += fair.o
++endif
+ obj-y += build_policy.o
+ obj-y += build_utility.o
+diff --git a/kernel/sched/alt_core.c b/kernel/sched/alt_core.c
+new file mode 100644
+index 000000000000..32afa2adfe8f
+--- /dev/null
++++ b/kernel/sched/alt_core.c
+@@ -0,0 +1,7654 @@
++/*
++ * kernel/sched/alt_core.c
++ *
++ * Core alternative kernel scheduler code and related syscalls
++ *
++ * Copyright (C) 1991-2002 Linus Torvalds
++ *
++ * 2009-08-13 Brainfuck deadline scheduling policy by Con Kolivas deletes
++ * a whole lot of those previous things.
++ * 2017-09-06 Priority and Deadline based Skip list multiple queue kernel
++ * scheduler by Alfred Chen.
++ * 2019-02-20 BMQ(BitMap Queue) kernel scheduler by Alfred Chen.
++ */
++#include <linux/sched/clock.h>
++#include <linux/sched/cputime.h>
++#include <linux/sched/debug.h>
++#include <linux/sched/hotplug.h>
++#include <linux/sched/init.h>
++#include <linux/sched/isolation.h>
++#include <linux/sched/loadavg.h>
++#include <linux/sched/mm.h>
++#include <linux/sched/nohz.h>
++#include <linux/sched/stat.h>
++#include <linux/sched/wake_q.h>
++
++#include <linux/blkdev.h>
++#include <linux/context_tracking.h>
++#include <linux/cpuset.h>
++#include <linux/delayacct.h>
++#include <linux/init_task.h>
++#include <linux/kcov.h>
++#include <linux/kprobes.h>
++#include <linux/nmi.h>
++#include <linux/rseq.h>
++#include <linux/scs.h>
++
++#include <uapi/linux/sched/types.h>
++
++#include <asm/irq_regs.h>
++#include <asm/switch_to.h>
++
++#define CREATE_TRACE_POINTS
++#include <trace/events/sched.h>
++#include <trace/events/ipi.h>
++#undef CREATE_TRACE_POINTS
++
++#include "sched.h"
++#include "smp.h"
++
++#include "pelt.h"
++
++#include "../../io_uring/io-wq.h"
++#include "../smpboot.h"
++
++EXPORT_TRACEPOINT_SYMBOL_GPL(ipi_send_cpu);
++EXPORT_TRACEPOINT_SYMBOL_GPL(ipi_send_cpumask);
++
++/*
++ * Export tracepoints that act as a bare tracehook (ie: have no trace event
++ * associated with them) to allow external modules to probe them.
++ */
++EXPORT_TRACEPOINT_SYMBOL_GPL(pelt_irq_tp);
++
++#ifdef CONFIG_SCHED_DEBUG
++#define sched_feat(x) (1)
++/*
++ * Print a warning if need_resched is set for the given duration (if
++ * LATENCY_WARN is enabled).
++ *
++ * If sysctl_resched_latency_warn_once is set, only one warning will be shown
++ * per boot.
++ */
++__read_mostly int sysctl_resched_latency_warn_ms = 100;
++__read_mostly int sysctl_resched_latency_warn_once = 1;
++#else
++#define sched_feat(x) (0)
++#endif /* CONFIG_SCHED_DEBUG */
++
++#define ALT_SCHED_VERSION "v6.14-r0"
++
++#define STOP_PRIO (MAX_RT_PRIO - 1)
++
++/*
++ * Time slice
++ * (default: 4 msec, units: nanoseconds)
++ */
++unsigned int sysctl_sched_base_slice __read_mostly = (4 << 20);
++
++#include "alt_core.h"
++#include "alt_topology.h"
++
++/* Reschedule if less than this much time is left (in ns, roughly 100 μs) */
++#define RESCHED_NS (100 << 10)
++
++/**
++ * sched_yield_type - Type of yield that sched_yield() will perform.
++ * 0: No yield.
++ * 1: Requeue task. (default)
++ */
++int sched_yield_type __read_mostly = 1;
++
++#ifdef CONFIG_SMP
++cpumask_t sched_rq_pending_mask ____cacheline_aligned_in_smp;
++
++DEFINE_PER_CPU_ALIGNED(cpumask_t [NR_CPU_AFFINITY_LEVELS], sched_cpu_topo_masks);
++DEFINE_PER_CPU_ALIGNED(cpumask_t *, sched_cpu_llc_mask);
++DEFINE_PER_CPU_ALIGNED(cpumask_t *, sched_cpu_topo_end_mask);
++
++#ifdef CONFIG_SCHED_SMT
++DEFINE_STATIC_KEY_FALSE(sched_smt_present);
++EXPORT_SYMBOL_GPL(sched_smt_present);
++
++cpumask_t sched_smt_mask ____cacheline_aligned_in_smp;
++#endif
++
++/*
++ * Keep a unique ID per domain (we use the first CPUs number in the cpumask of
++ * the domain), this allows us to quickly tell if two cpus are in the same cache
++ * domain, see cpus_share_cache().
++ */
++DEFINE_PER_CPU(int, sd_llc_id);
++#endif /* CONFIG_SMP */
++
++DEFINE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
++
++#ifndef prepare_arch_switch
++# define prepare_arch_switch(next) do { } while (0)
++#endif
++#ifndef finish_arch_post_lock_switch
++# define finish_arch_post_lock_switch() do { } while (0)
++#endif
++
++static cpumask_t sched_preempt_mask[SCHED_QUEUE_BITS + 2] ____cacheline_aligned_in_smp;
++
++cpumask_t *const sched_idle_mask = &sched_preempt_mask[SCHED_QUEUE_BITS - 1];
++cpumask_t *const sched_sg_idle_mask = &sched_preempt_mask[SCHED_QUEUE_BITS];
++cpumask_t *const sched_pcore_idle_mask = &sched_preempt_mask[SCHED_QUEUE_BITS];
++cpumask_t *const sched_ecore_idle_mask = &sched_preempt_mask[SCHED_QUEUE_BITS + 1];
++
++/* task function */
++static inline const struct cpumask *task_user_cpus(struct task_struct *p)
++{
++ if (!p->user_cpus_ptr)
++ return cpu_possible_mask; /* &init_task.cpus_mask */
++ return p->user_cpus_ptr;
++}
++
++/* sched_queue related functions */
++static inline void sched_queue_init(struct sched_queue *q)
++{
++ int i;
++
++ bitmap_zero(q->bitmap, SCHED_QUEUE_BITS);
++ for(i = 0; i < SCHED_LEVELS; i++)
++ INIT_LIST_HEAD(&q->heads[i]);
++}
++
++/*
++ * Init idle task and put into queue structure of rq
++ * IMPORTANT: may be called multiple times for a single cpu
++ */
++static inline void sched_queue_init_idle(struct sched_queue *q,
++ struct task_struct *idle)
++{
++ INIT_LIST_HEAD(&q->heads[IDLE_TASK_SCHED_PRIO]);
++ list_add_tail(&idle->sq_node, &q->heads[IDLE_TASK_SCHED_PRIO]);
++ idle->on_rq = TASK_ON_RQ_QUEUED;
++}
++
++#define CLEAR_CACHED_PREEMPT_MASK(pr, low, high, cpu) \
++ if (low < pr && pr <= high) \
++ cpumask_clear_cpu(cpu, sched_preempt_mask + pr);
++
++#define SET_CACHED_PREEMPT_MASK(pr, low, high, cpu) \
++ if (low < pr && pr <= high) \
++ cpumask_set_cpu(cpu, sched_preempt_mask + pr);
++
++static atomic_t sched_prio_record = ATOMIC_INIT(0);
++
++/* water mark related functions */
++static inline void update_sched_preempt_mask(struct rq *rq)
++{
++ int prio = find_first_bit(rq->queue.bitmap, SCHED_QUEUE_BITS);
++ int last_prio = rq->prio;
++ int cpu, pr;
++
++ if (prio == last_prio)
++ return;
++
++ rq->prio = prio;
++#ifdef CONFIG_SCHED_PDS
++ rq->prio_idx = sched_prio2idx(rq->prio, rq);
++#endif
++ cpu = cpu_of(rq);
++ pr = atomic_read(&sched_prio_record);
++
++ if (prio < last_prio) {
++ if (IDLE_TASK_SCHED_PRIO == last_prio) {
++ rq->clear_idle_mask_func(cpu, sched_idle_mask);
++ last_prio -= 2;
++ }
++ CLEAR_CACHED_PREEMPT_MASK(pr, prio, last_prio, cpu);
++
++ return;
++ }
++ /* last_prio < prio */
++ if (IDLE_TASK_SCHED_PRIO == prio) {
++ rq->set_idle_mask_func(cpu, sched_idle_mask);
++ prio -= 2;
++ }
++ SET_CACHED_PREEMPT_MASK(pr, last_prio, prio, cpu);
++}
++
++/*
++ * Serialization rules:
++ *
++ * Lock order:
++ *
++ * p->pi_lock
++ * rq->lock
++ * hrtimer_cpu_base->lock (hrtimer_start() for bandwidth controls)
++ *
++ * rq1->lock
++ * rq2->lock where: rq1 < rq2
++ *
++ * Regular state:
++ *
++ * Normal scheduling state is serialized by rq->lock. __schedule() takes the
++ * local CPU's rq->lock, it optionally removes the task from the runqueue and
++ * always looks at the local rq data structures to find the most eligible task
++ * to run next.
++ *
++ * Task enqueue is also under rq->lock, possibly taken from another CPU.
++ * Wakeups from another LLC domain might use an IPI to transfer the enqueue to
++ * the local CPU to avoid bouncing the runqueue state around [ see
++ * ttwu_queue_wakelist() ]
++ *
++ * Task wakeup, specifically wakeups that involve migration, are horribly
++ * complicated to avoid having to take two rq->locks.
++ *
++ * Special state:
++ *
++ * System-calls and anything external will use task_rq_lock() which acquires
++ * both p->pi_lock and rq->lock. As a consequence the state they change is
++ * stable while holding either lock:
++ *
++ * - sched_setaffinity()/
++ * set_cpus_allowed_ptr(): p->cpus_ptr, p->nr_cpus_allowed
++ * - set_user_nice(): p->se.load, p->*prio
++ * - __sched_setscheduler(): p->sched_class, p->policy, p->*prio,
++ * p->se.load, p->rt_priority,
++ * p->dl.dl_{runtime, deadline, period, flags, bw, density}
++ * - sched_setnuma(): p->numa_preferred_nid
++ * - sched_move_task(): p->sched_task_group
++ * - uclamp_update_active() p->uclamp*
++ *
++ * p->state <- TASK_*:
++ *
++ * is changed locklessly using set_current_state(), __set_current_state() or
++ * set_special_state(), see their respective comments, or by
++ * try_to_wake_up(). This latter uses p->pi_lock to serialize against
++ * concurrent self.
++ *
++ * p->on_rq <- { 0, 1 = TASK_ON_RQ_QUEUED, 2 = TASK_ON_RQ_MIGRATING }:
++ *
++ * is set by activate_task() and cleared by deactivate_task(), under
++ * rq->lock. Non-zero indicates the task is runnable, the special
++ * ON_RQ_MIGRATING state is used for migration without holding both
++ * rq->locks. It indicates task_cpu() is not stable, see task_rq_lock().
++ *
++ * Additionally it is possible to be ->on_rq but still be considered not
++ * runnable when p->se.sched_delayed is true. These tasks are on the runqueue
++ * but will be dequeued as soon as they get picked again. See the
++ * task_is_runnable() helper.
++ *
++ * p->on_cpu <- { 0, 1 }:
++ *
++ * is set by prepare_task() and cleared by finish_task() such that it will be
++ * set before p is scheduled-in and cleared after p is scheduled-out, both
++ * under rq->lock. Non-zero indicates the task is running on its CPU.
++ *
++ * [ The astute reader will observe that it is possible for two tasks on one
++ * CPU to have ->on_cpu = 1 at the same time. ]
++ *
++ * task_cpu(p): is changed by set_task_cpu(), the rules are:
++ *
++ * - Don't call set_task_cpu() on a blocked task:
++ *
++ * We don't care what CPU we're not running on, this simplifies hotplug,
++ * the CPU assignment of blocked tasks isn't required to be valid.
++ *
++ * - for try_to_wake_up(), called under p->pi_lock:
++ *
++ * This allows try_to_wake_up() to only take one rq->lock, see its comment.
++ *
++ * - for migration called under rq->lock:
++ * [ see task_on_rq_migrating() in task_rq_lock() ]
++ *
++ * o move_queued_task()
++ * o detach_task()
++ *
++ * - for migration called under double_rq_lock():
++ *
++ * o __migrate_swap_task()
++ * o push_rt_task() / pull_rt_task()
++ * o push_dl_task() / pull_dl_task()
++ * o dl_task_offline_migration()
++ *
++ */
++
++/*
++ * Context: p->pi_lock
++ */
++static inline struct rq *
++task_access_lock_irqsave(struct task_struct *p, raw_spinlock_t **plock, unsigned long *flags)
++{
++ struct rq *rq;
++ for (;;) {
++ rq = task_rq(p);
++ if (p->on_cpu || task_on_rq_queued(p)) {
++ raw_spin_lock_irqsave(&rq->lock, *flags);
++ if (likely((p->on_cpu || task_on_rq_queued(p)) && rq == task_rq(p))) {
++ *plock = &rq->lock;
++ return rq;
++ }
++ raw_spin_unlock_irqrestore(&rq->lock, *flags);
++ } else if (task_on_rq_migrating(p)) {
++ do {
++ cpu_relax();
++ } while (unlikely(task_on_rq_migrating(p)));
++ } else {
++ raw_spin_lock_irqsave(&p->pi_lock, *flags);
++ if (likely(!p->on_cpu && !p->on_rq && rq == task_rq(p))) {
++ *plock = &p->pi_lock;
++ return rq;
++ }
++ raw_spin_unlock_irqrestore(&p->pi_lock, *flags);
++ }
++ }
++}
++
++static inline void
++task_access_unlock_irqrestore(struct task_struct *p, raw_spinlock_t *lock, unsigned long *flags)
++{
++ raw_spin_unlock_irqrestore(lock, *flags);
++}
++
++/*
++ * __task_rq_lock - lock the rq @p resides on.
++ */
++struct rq *__task_rq_lock(struct task_struct *p, struct rq_flags *rf)
++ __acquires(rq->lock)
++{
++ struct rq *rq;
++
++ lockdep_assert_held(&p->pi_lock);
++
++ for (;;) {
++ rq = task_rq(p);
++ raw_spin_lock(&rq->lock);
++ if (likely(rq == task_rq(p) && !task_on_rq_migrating(p)))
++ return rq;
++ raw_spin_unlock(&rq->lock);
++
++ while (unlikely(task_on_rq_migrating(p)))
++ cpu_relax();
++ }
++}
++
++/*
++ * task_rq_lock - lock p->pi_lock and lock the rq @p resides on.
++ */
++struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf)
++ __acquires(p->pi_lock)
++ __acquires(rq->lock)
++{
++ struct rq *rq;
++
++ for (;;) {
++ raw_spin_lock_irqsave(&p->pi_lock, rf->flags);
++ rq = task_rq(p);
++ raw_spin_lock(&rq->lock);
++ /*
++ * move_queued_task() task_rq_lock()
++ *
++ * ACQUIRE (rq->lock)
++ * [S] ->on_rq = MIGRATING [L] rq = task_rq()
++ * WMB (__set_task_cpu()) ACQUIRE (rq->lock);
++ * [S] ->cpu = new_cpu [L] task_rq()
++ * [L] ->on_rq
++ * RELEASE (rq->lock)
++ *
++ * If we observe the old CPU in task_rq_lock(), the acquire of
++ * the old rq->lock will fully serialize against the stores.
++ *
++ * If we observe the new CPU in task_rq_lock(), the address
++ * dependency headed by '[L] rq = task_rq()' and the acquire
++ * will pair with the WMB to ensure we then also see migrating.
++ */
++ if (likely(rq == task_rq(p) && !task_on_rq_migrating(p))) {
++ return rq;
++ }
++ raw_spin_unlock(&rq->lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, rf->flags);
++
++ while (unlikely(task_on_rq_migrating(p)))
++ cpu_relax();
++ }
++}
++
++static inline void rq_lock_irqsave(struct rq *rq, struct rq_flags *rf)
++ __acquires(rq->lock)
++{
++ raw_spin_lock_irqsave(&rq->lock, rf->flags);
++}
++
++static inline void rq_unlock_irqrestore(struct rq *rq, struct rq_flags *rf)
++ __releases(rq->lock)
++{
++ raw_spin_unlock_irqrestore(&rq->lock, rf->flags);
++}
++
++DEFINE_LOCK_GUARD_1(rq_lock_irqsave, struct rq,
++ rq_lock_irqsave(_T->lock, &_T->rf),
++ rq_unlock_irqrestore(_T->lock, &_T->rf),
++ struct rq_flags rf)
++
++void raw_spin_rq_lock_nested(struct rq *rq, int subclass)
++{
++ raw_spinlock_t *lock;
++
++ /* Matches synchronize_rcu() in __sched_core_enable() */
++ preempt_disable();
++
++ for (;;) {
++ lock = __rq_lockp(rq);
++ raw_spin_lock_nested(lock, subclass);
++ if (likely(lock == __rq_lockp(rq))) {
++ /* preempt_count *MUST* be > 1 */
++ preempt_enable_no_resched();
++ return;
++ }
++ raw_spin_unlock(lock);
++ }
++}
++
++void raw_spin_rq_unlock(struct rq *rq)
++{
++ raw_spin_unlock(rq_lockp(rq));
++}
++
++/*
++ * RQ-clock updating methods:
++ */
++
++static void update_rq_clock_task(struct rq *rq, s64 delta)
++{
++/*
++ * In theory, the compile should just see 0 here, and optimize out the call
++ * to sched_rt_avg_update. But I don't trust it...
++ */
++ s64 __maybe_unused steal = 0, irq_delta = 0;
++
++#ifdef CONFIG_IRQ_TIME_ACCOUNTING
++ if (irqtime_enabled()) {
++ irq_delta = irq_time_read(cpu_of(rq)) - rq->prev_irq_time;
++
++ /*
++ * Since irq_time is only updated on {soft,}irq_exit, we might run into
++ * this case when a previous update_rq_clock() happened inside a
++ * {soft,}IRQ region.
++ *
++ * When this happens, we stop ->clock_task and only update the
++ * prev_irq_time stamp to account for the part that fit, so that a next
++ * update will consume the rest. This ensures ->clock_task is
++ * monotonic.
++ *
++ * It does however cause some slight miss-attribution of {soft,}IRQ
++ * time, a more accurate solution would be to update the irq_time using
++ * the current rq->clock timestamp, except that would require using
++ * atomic ops.
++ */
++ if (irq_delta > delta)
++ irq_delta = delta;
++
++ rq->prev_irq_time += irq_delta;
++ delta -= irq_delta;
++ delayacct_irq(rq->curr, irq_delta);
++ }
++#endif
++#ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
++ if (static_key_false((¶virt_steal_rq_enabled))) {
++ u64 prev_steal;
++
++ steal = prev_steal = paravirt_steal_clock(cpu_of(rq));
++ steal -= rq->prev_steal_time_rq;
++
++ if (unlikely(steal > delta))
++ steal = delta;
++
++ rq->prev_steal_time_rq = prev_steal;
++ delta -= steal;
++ }
++#endif
++
++ rq->clock_task += delta;
++
++#ifdef CONFIG_HAVE_SCHED_AVG_IRQ
++ if ((irq_delta + steal))
++ update_irq_load_avg(rq, irq_delta + steal);
++#endif
++}
++
++static inline void update_rq_clock(struct rq *rq)
++{
++ s64 delta = sched_clock_cpu(cpu_of(rq)) - rq->clock;
++
++ if (unlikely(delta <= 0))
++ return;
++ rq->clock += delta;
++ sched_update_rq_clock(rq);
++ update_rq_clock_task(rq, delta);
++}
++
++/*
++ * RQ Load update routine
++ */
++#define RQ_LOAD_HISTORY_BITS (sizeof(s32) * 8ULL)
++#define RQ_UTIL_SHIFT (8)
++#define RQ_LOAD_HISTORY_TO_UTIL(l) (((l) >> (RQ_LOAD_HISTORY_BITS - 1 - RQ_UTIL_SHIFT)) & 0xff)
++
++#define LOAD_BLOCK(t) ((t) >> 17)
++#define LOAD_HALF_BLOCK(t) ((t) >> 16)
++#define BLOCK_MASK(t) ((t) & ((0x01 << 18) - 1))
++#define LOAD_BLOCK_BIT(b) (1UL << (RQ_LOAD_HISTORY_BITS - 1 - (b)))
++#define CURRENT_LOAD_BIT LOAD_BLOCK_BIT(0)
++
++static inline void rq_load_update(struct rq *rq)
++{
++ u64 time = rq->clock;
++ u64 delta = min(LOAD_BLOCK(time) - LOAD_BLOCK(rq->load_stamp), RQ_LOAD_HISTORY_BITS - 1);
++ u64 prev = !!(rq->load_history & CURRENT_LOAD_BIT);
++ u64 curr = !!rq->nr_running;
++
++ if (delta) {
++ rq->load_history = rq->load_history >> delta;
++
++ if (delta < RQ_UTIL_SHIFT) {
++ rq->load_block += (~BLOCK_MASK(rq->load_stamp)) * prev;
++ if (!!LOAD_HALF_BLOCK(rq->load_block) ^ curr)
++ rq->load_history ^= LOAD_BLOCK_BIT(delta);
++ }
++
++ rq->load_block = BLOCK_MASK(time) * prev;
++ } else {
++ rq->load_block += (time - rq->load_stamp) * prev;
++ }
++ if (prev ^ curr)
++ rq->load_history ^= CURRENT_LOAD_BIT;
++ rq->load_stamp = time;
++}
++
++unsigned long rq_load_util(struct rq *rq, unsigned long max)
++{
++ return RQ_LOAD_HISTORY_TO_UTIL(rq->load_history) * (max >> RQ_UTIL_SHIFT);
++}
++
++#ifdef CONFIG_SMP
++unsigned long sched_cpu_util(int cpu)
++{
++ return rq_load_util(cpu_rq(cpu), arch_scale_cpu_capacity(cpu));
++}
++#endif /* CONFIG_SMP */
++
++#ifdef CONFIG_CPU_FREQ
++/**
++ * cpufreq_update_util - Take a note about CPU utilization changes.
++ * @rq: Runqueue to carry out the update for.
++ * @flags: Update reason flags.
++ *
++ * This function is called by the scheduler on the CPU whose utilization is
++ * being updated.
++ *
++ * It can only be called from RCU-sched read-side critical sections.
++ *
++ * The way cpufreq is currently arranged requires it to evaluate the CPU
++ * performance state (frequency/voltage) on a regular basis to prevent it from
++ * being stuck in a completely inadequate performance level for too long.
++ * That is not guaranteed to happen if the updates are only triggered from CFS
++ * and DL, though, because they may not be coming in if only RT tasks are
++ * active all the time (or there are RT tasks only).
++ *
++ * As a workaround for that issue, this function is called periodically by the
++ * RT sched class to trigger extra cpufreq updates to prevent it from stalling,
++ * but that really is a band-aid. Going forward it should be replaced with
++ * solutions targeted more specifically at RT tasks.
++ */
++static inline void cpufreq_update_util(struct rq *rq, unsigned int flags)
++{
++ struct update_util_data *data;
++
++#ifdef CONFIG_SMP
++ rq_load_update(rq);
++#endif
++ data = rcu_dereference_sched(*per_cpu_ptr(&cpufreq_update_util_data, cpu_of(rq)));
++ if (data)
++ data->func(data, rq_clock(rq), flags);
++}
++#else
++static inline void cpufreq_update_util(struct rq *rq, unsigned int flags)
++{
++#ifdef CONFIG_SMP
++ rq_load_update(rq);
++#endif
++}
++#endif /* CONFIG_CPU_FREQ */
++
++#ifdef CONFIG_NO_HZ_FULL
++/*
++ * Tick may be needed by tasks in the runqueue depending on their policy and
++ * requirements. If the tick is needed, let's send the target an IPI to kick it out
++ * of nohz mode if necessary.
++ */
++static inline void sched_update_tick_dependency(struct rq *rq)
++{
++ int cpu = cpu_of(rq);
++
++ if (!tick_nohz_full_cpu(cpu))
++ return;
++
++ if (rq->nr_running < 2)
++ tick_nohz_dep_clear_cpu(cpu, TICK_DEP_BIT_SCHED);
++ else
++ tick_nohz_dep_set_cpu(cpu, TICK_DEP_BIT_SCHED);
++}
++#else /* !CONFIG_NO_HZ_FULL */
++static inline void sched_update_tick_dependency(struct rq *rq) { }
++#endif
++
++bool sched_task_on_rq(struct task_struct *p)
++{
++ return task_on_rq_queued(p);
++}
++
++unsigned long get_wchan(struct task_struct *p)
++{
++ unsigned long ip = 0;
++ unsigned int state;
++
++ if (!p || p == current)
++ return 0;
++
++ /* Only get wchan if task is blocked and we can keep it that way. */
++ raw_spin_lock_irq(&p->pi_lock);
++ state = READ_ONCE(p->__state);
++ smp_rmb(); /* see try_to_wake_up() */
++ if (state != TASK_RUNNING && state != TASK_WAKING && !p->on_rq)
++ ip = __get_wchan(p);
++ raw_spin_unlock_irq(&p->pi_lock);
++
++ return ip;
++}
++
++/*
++ * Add/Remove/Requeue task to/from the runqueue routines
++ * Context: rq->lock
++ */
++#define __SCHED_DEQUEUE_TASK(p, rq, flags, func) \
++ sched_info_dequeue(rq, p); \
++ \
++ __list_del_entry(&p->sq_node); \
++ if (p->sq_node.prev == p->sq_node.next) { \
++ clear_bit(sched_idx2prio(p->sq_node.next - &rq->queue.heads[0], rq), \
++ rq->queue.bitmap); \
++ func; \
++ }
++
++#define __SCHED_ENQUEUE_TASK(p, rq, flags, func) \
++ sched_info_enqueue(rq, p); \
++ { \
++ int idx, prio; \
++ TASK_SCHED_PRIO_IDX(p, rq, idx, prio); \
++ list_add_tail(&p->sq_node, &rq->queue.heads[idx]); \
++ if (list_is_first(&p->sq_node, &rq->queue.heads[idx])) { \
++ set_bit(prio, rq->queue.bitmap); \
++ func; \
++ } \
++ }
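++
++/*
++ * Invariant kept by the two helpers above: a bit in rq->queue.bitmap is set
++ * iff the corresponding priority list in rq->queue.heads[] is non-empty.
++ * Dequeue clears the bit (and runs @func) only when it removes the last
++ * entry of a list; enqueue sets it (and runs @func) only when it inserts
++ * the first entry.
++ */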
++
++static inline void dequeue_task(struct task_struct *p, struct rq *rq, int flags)
++{
++#ifdef ALT_SCHED_DEBUG
++ lockdep_assert_held(&rq->lock);
++
++ /*printk(KERN_INFO "sched: dequeue(%d) %px %016llx\n", cpu_of(rq), p, p->deadline);*/
++ WARN_ONCE(task_rq(p) != rq, "sched: dequeue task reside on cpu%d from cpu%d\n",
++ task_cpu(p), cpu_of(rq));
++#endif
++
++ __SCHED_DEQUEUE_TASK(p, rq, flags, update_sched_preempt_mask(rq));
++ --rq->nr_running;
++#ifdef CONFIG_SMP
++ if (1 == rq->nr_running)
++ cpumask_clear_cpu(cpu_of(rq), &sched_rq_pending_mask);
++#endif
++
++ sched_update_tick_dependency(rq);
++}
++
++static inline void enqueue_task(struct task_struct *p, struct rq *rq, int flags)
++{
++#ifdef ALT_SCHED_DEBUG
++ lockdep_assert_held(&rq->lock);
++
++ /*printk(KERN_INFO "sched: enqueue(%d) %px %d\n", cpu_of(rq), p, p->prio);*/
++ WARN_ONCE(task_rq(p) != rq, "sched: enqueue task reside on cpu%d to cpu%d\n",
++ task_cpu(p), cpu_of(rq));
++#endif
++
++ __SCHED_ENQUEUE_TASK(p, rq, flags, update_sched_preempt_mask(rq));
++ ++rq->nr_running;
++#ifdef CONFIG_SMP
++ if (2 == rq->nr_running)
++ cpumask_set_cpu(cpu_of(rq), &sched_rq_pending_mask);
++#endif
++
++ sched_update_tick_dependency(rq);
++}
++
++void requeue_task(struct task_struct *p, struct rq *rq)
++{
++ struct list_head *node = &p->sq_node;
++ int deq_idx, idx, prio;
++
++ TASK_SCHED_PRIO_IDX(p, rq, idx, prio);
++#ifdef ALT_SCHED_DEBUG
++ lockdep_assert_held(&rq->lock);
++ /*printk(KERN_INFO "sched: requeue(%d) %px %016llx\n", cpu_of(rq), p, p->deadline);*/
++ WARN_ONCE(task_rq(p) != rq, "sched: cpu[%d] requeue task reside on cpu%d\n",
++ cpu_of(rq), task_cpu(p));
++#endif
++ if (list_is_last(node, &rq->queue.heads[idx]))
++ return;
++
++ __list_del_entry(node);
++ if (node->prev == node->next && (deq_idx = node->next - &rq->queue.heads[0]) != idx)
++ clear_bit(sched_idx2prio(deq_idx, rq), rq->queue.bitmap);
++
++ list_add_tail(node, &rq->queue.heads[idx]);
++ if (list_is_first(node, &rq->queue.heads[idx]))
++ set_bit(prio, rq->queue.bitmap);
++ update_sched_preempt_mask(rq);
++}
++
++/*
++ * try_cmpxchg based fetch_or() macro so it works for different integer types:
++ */
++#define fetch_or(ptr, mask) \
++ ({ \
++ typeof(ptr) _ptr = (ptr); \
++ typeof(mask) _mask = (mask); \
++ typeof(*_ptr) _val = *_ptr; \
++ \
++ do { \
++ } while (!try_cmpxchg(_ptr, &_val, _val | _mask)); \
++ _val; \
++})
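++
++/*
++ * Example: prev = fetch_or(&ti->flags, _TIF_NEED_RESCHED) atomically ORs the
++ * mask into *ptr and returns the value that was there before, which is what
++ * set_nr_and_not_polling() below inspects for _TIF_POLLING_NRFLAG.
++ */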
++
++#if defined(CONFIG_SMP) && defined(TIF_POLLING_NRFLAG)
++/*
++ * Atomically set TIF_NEED_RESCHED and test for TIF_POLLING_NRFLAG,
++ * this avoids any races wrt polling state changes and thereby avoids
++ * spurious IPIs.
++ */
++static inline bool set_nr_and_not_polling(struct thread_info *ti, int tif)
++{
++ return !(fetch_or(&ti->flags, 1 << tif) & _TIF_POLLING_NRFLAG);
++}
++
++/*
++ * Atomically set TIF_NEED_RESCHED if TIF_POLLING_NRFLAG is set.
++ *
++ * If this returns true, then the idle task promises to call
++ * sched_ttwu_pending() and reschedule soon.
++ */
++static bool set_nr_if_polling(struct task_struct *p)
++{
++ struct thread_info *ti = task_thread_info(p);
++ typeof(ti->flags) val = READ_ONCE(ti->flags);
++
++ do {
++ if (!(val & _TIF_POLLING_NRFLAG))
++ return false;
++ if (val & _TIF_NEED_RESCHED)
++ return true;
++ } while (!try_cmpxchg(&ti->flags, &val, val | _TIF_NEED_RESCHED));
++
++ return true;
++}
++
++#else
++static inline bool set_nr_and_not_polling(struct thread_info *ti, int tif)
++{
++ set_ti_thread_flag(ti, tif);
++ return true;
++}
++
++#ifdef CONFIG_SMP
++static inline bool set_nr_if_polling(struct task_struct *p)
++{
++ return false;
++}
++#endif
++#endif
++
++static bool __wake_q_add(struct wake_q_head *head, struct task_struct *task)
++{
++ struct wake_q_node *node = &task->wake_q;
++
++ /*
++ * Atomically grab the task, if ->wake_q is !nil already it means
++ * it's already queued (either by us or someone else) and will get the
++ * wakeup due to that.
++ *
++ * In order to ensure that a pending wakeup will observe our pending
++ * state, even in the failed case, an explicit smp_mb() must be used.
++ */
++ smp_mb__before_atomic();
++ if (unlikely(cmpxchg_relaxed(&node->next, NULL, WAKE_Q_TAIL)))
++ return false;
++
++ /*
++ * The head is context local, there can be no concurrency.
++ */
++ *head->lastp = node;
++ head->lastp = &node->next;
++ return true;
++}
++
++/**
++ * wake_q_add() - queue a wakeup for 'later' waking.
++ * @head: the wake_q_head to add @task to
++ * @task: the task to queue for 'later' wakeup
++ *
++ * Queue a task for later wakeup, most likely by the wake_up_q() call in the
++ * same context, _HOWEVER_ this is not guaranteed, the wakeup can come
++ * instantly.
++ *
++ * This function must be used as-if it were wake_up_process(); IOW the task
++ * must be ready to be woken at this location.
++ */
++void wake_q_add(struct wake_q_head *head, struct task_struct *task)
++{
++ if (__wake_q_add(head, task))
++ get_task_struct(task);
++}
++
++/**
++ * wake_q_add_safe() - safely queue a wakeup for 'later' waking.
++ * @head: the wake_q_head to add @task to
++ * @task: the task to queue for 'later' wakeup
++ *
++ * Queue a task for later wakeup, most likely by the wake_up_q() call in the
++ * same context, _HOWEVER_ this is not guaranteed, the wakeup can come
++ * instantly.
++ *
++ * This function must be used as-if it were wake_up_process(); IOW the task
++ * must be ready to be woken at this location.
++ *
++ * This function is essentially a task-safe equivalent to wake_q_add(). Callers
++ * that already hold a reference to @task can call the 'safe' version and trust
++ * wake_q to do the right thing depending on whether or not the @task is already
++ * queued for wakeup.
++ */
++void wake_q_add_safe(struct wake_q_head *head, struct task_struct *task)
++{
++ if (!__wake_q_add(head, task))
++ put_task_struct(task);
++}
++
++void wake_up_q(struct wake_q_head *head)
++{
++ struct wake_q_node *node = head->first;
++
++ while (node != WAKE_Q_TAIL) {
++ struct task_struct *task;
++
++ task = container_of(node, struct task_struct, wake_q);
++ node = node->next;
++ /* pairs with cmpxchg_relaxed() in __wake_q_add() */
++ WRITE_ONCE(task->wake_q.next, NULL);
++ /* Task can safely be re-inserted now. */
++
++ /*
++ * wake_up_process() executes a full barrier, which pairs with
++ * the queueing in wake_q_add() so as not to miss wakeups.
++ */
++ wake_up_process(task);
++ put_task_struct(task);
++ }
++}
++
++/*
++ * resched_curr - mark rq's current task 'to be rescheduled now'.
++ *
++ * On UP this means the setting of the need_resched flag, on SMP it
++ * might also involve a cross-CPU call to trigger the scheduler on
++ * the target CPU.
++ */
++static inline void __resched_curr(struct rq *rq, int tif)
++{
++ struct task_struct *curr = rq->curr;
++ struct thread_info *cti = task_thread_info(curr);
++ int cpu;
++
++ lockdep_assert_held(&rq->lock);
++
++ /*
++ * Always immediately preempt the idle task; no point in delaying doing
++ * actual work.
++ */
++ if (is_idle_task(curr) && tif == TIF_NEED_RESCHED_LAZY)
++ tif = TIF_NEED_RESCHED;
++
++ if (cti->flags & ((1 << tif) | _TIF_NEED_RESCHED))
++ return;
++
++ cpu = cpu_of(rq);
++ if (cpu == smp_processor_id()) {
++ set_ti_thread_flag(cti, tif);
++ if (tif == TIF_NEED_RESCHED)
++ set_preempt_need_resched();
++ return;
++ }
++
++ if (set_nr_and_not_polling(cti, tif)) {
++ if (tif == TIF_NEED_RESCHED)
++ smp_send_reschedule(cpu);
++ } else {
++ trace_sched_wake_idle_without_ipi(cpu);
++ }
++}
++
++static inline void resched_curr(struct rq *rq)
++{
++ __resched_curr(rq, TIF_NEED_RESCHED);
++}
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++static DEFINE_STATIC_KEY_FALSE(sk_dynamic_preempt_lazy);
++static __always_inline bool dynamic_preempt_lazy(void)
++{
++ return static_branch_unlikely(&sk_dynamic_preempt_lazy);
++}
++#else
++static __always_inline bool dynamic_preempt_lazy(void)
++{
++ return IS_ENABLED(CONFIG_PREEMPT_LAZY);
++}
++#endif
++
++static __always_inline int get_lazy_tif_bit(void)
++{
++ if (dynamic_preempt_lazy())
++ return TIF_NEED_RESCHED_LAZY;
++
++ return TIF_NEED_RESCHED;
++}
++
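++/*
++ * resched_curr_lazy() requests a reschedule with TIF_NEED_RESCHED_LAZY when
++ * lazy preemption is in effect (see get_lazy_tif_bit() above) and with the
++ * immediate TIF_NEED_RESCHED otherwise; __resched_curr() upgrades a lazy
++ * request to an immediate one when the current task is the idle task.
++ */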
++static inline void resched_curr_lazy(struct rq *rq)
++{
++ __resched_curr(rq, get_lazy_tif_bit());
++}
++
++void resched_cpu(int cpu)
++{
++ struct rq *rq = cpu_rq(cpu);
++ unsigned long flags;
++
++ raw_spin_lock_irqsave(&rq->lock, flags);
++ if (cpu_online(cpu) || cpu == smp_processor_id())
++ resched_curr(cpu_rq(cpu));
++ raw_spin_unlock_irqrestore(&rq->lock, flags);
++}
++
++#ifdef CONFIG_SMP
++#ifdef CONFIG_NO_HZ_COMMON
++/*
++ * This routine will record that the CPU is going idle with tick stopped.
++ * This info will be used in performing idle load balancing in the future.
++ */
++void nohz_balance_enter_idle(int cpu) {}
++
++/*
++ * In the semi idle case, use the nearest busy CPU for migrating timers
++ * from an idle CPU. This is good for power-savings.
++ *
++ * We don't do a similar optimization for a completely idle system, as
++ * selecting an idle CPU would add more delay to the timers than intended
++ * (as that CPU's timer base may not be up to date wrt jiffies etc.).
++ */
++int get_nohz_timer_target(void)
++{
++ int i, cpu = smp_processor_id(), default_cpu = -1;
++ struct cpumask *mask;
++ const struct cpumask *hk_mask;
++
++ if (housekeeping_cpu(cpu, HK_TYPE_KERNEL_NOISE)) {
++ if (!idle_cpu(cpu))
++ return cpu;
++ default_cpu = cpu;
++ }
++
++ hk_mask = housekeeping_cpumask(HK_TYPE_KERNEL_NOISE);
++
++ for (mask = per_cpu(sched_cpu_topo_masks, cpu);
++ mask < per_cpu(sched_cpu_topo_end_mask, cpu); mask++)
++ for_each_cpu_and(i, mask, hk_mask)
++ if (!idle_cpu(i))
++ return i;
++
++ if (default_cpu == -1)
++ default_cpu = housekeeping_any_cpu(HK_TYPE_KERNEL_NOISE);
++ cpu = default_cpu;
++
++ return cpu;
++}
++
++/*
++ * When add_timer_on() enqueues a timer into the timer wheel of an
++ * idle CPU then this timer might expire before the next timer event
++ * which is scheduled to wake up that CPU. In case of a completely
++ * idle system the next event might even be infinite time into the
++ * future. wake_up_idle_cpu() ensures that the CPU is woken up and
++ * leaves the inner idle loop so the newly added timer is taken into
++ * account when the CPU goes back to idle and evaluates the timer
++ * wheel for the next timer event.
++ */
++static inline void wake_up_idle_cpu(int cpu)
++{
++ struct rq *rq = cpu_rq(cpu);
++
++ if (cpu == smp_processor_id())
++ return;
++
++ /*
++ * Set TIF_NEED_RESCHED and send an IPI if in the non-polling
++ * part of the idle loop. This forces an exit from the idle loop
++ * and a round trip to schedule(). Now this could be optimized
++ * because a simple new idle loop iteration is enough to
++ * re-evaluate the next tick. Provided some re-ordering of tick
++ * nohz functions that would need to follow TIF_NR_POLLING
++ * clearing:
++ *
++ * - On most architectures, a simple fetch_or on ti::flags with a
++ * "0" value would be enough to know if an IPI needs to be sent.
++ *
++ * - x86 needs to perform a last need_resched() check between
++ * monitor and mwait which doesn't take timers into account.
++ * There a dedicated TIF_TIMER flag would be required to
++ * fetch_or here and be checked along with TIF_NEED_RESCHED
++ * before mwait().
++ *
++ * However, remote timer enqueue is not such a frequent event
++ * and testing of the above solutions didn't appear to report
++ * much benefit.
++ */
++ if (set_nr_and_not_polling(task_thread_info(rq->idle), TIF_NEED_RESCHED))
++ smp_send_reschedule(cpu);
++ else
++ trace_sched_wake_idle_without_ipi(cpu);
++}
++
++static inline bool wake_up_full_nohz_cpu(int cpu)
++{
++ /*
++ * We just need the target to call irq_exit() and re-evaluate
++ * the next tick. The nohz full kick at least implies that.
++ * If needed we can still optimize that later with an
++ * empty IRQ.
++ */
++ if (cpu_is_offline(cpu))
++ return true; /* Don't try to wake offline CPUs. */
++ if (tick_nohz_full_cpu(cpu)) {
++ if (cpu != smp_processor_id() ||
++ tick_nohz_tick_stopped())
++ tick_nohz_full_kick_cpu(cpu);
++ return true;
++ }
++
++ return false;
++}
++
++void wake_up_nohz_cpu(int cpu)
++{
++ if (!wake_up_full_nohz_cpu(cpu))
++ wake_up_idle_cpu(cpu);
++}
++
++static void nohz_csd_func(void *info)
++{
++ struct rq *rq = info;
++ int cpu = cpu_of(rq);
++ unsigned int flags;
++
++ /*
++ * Release the rq::nohz_csd.
++ */
++ flags = atomic_fetch_andnot(NOHZ_KICK_MASK, nohz_flags(cpu));
++ WARN_ON(!(flags & NOHZ_KICK_MASK));
++
++ rq->idle_balance = idle_cpu(cpu);
++ if (rq->idle_balance) {
++ rq->nohz_idle_balance = flags;
++ __raise_softirq_irqoff(SCHED_SOFTIRQ);
++ }
++}
++
++#endif /* CONFIG_NO_HZ_COMMON */
++#endif /* CONFIG_SMP */
++
++static inline void wakeup_preempt(struct rq *rq)
++{
++ if (sched_rq_first_task(rq) != rq->curr)
++ resched_curr(rq);
++}
++
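++/*
++ * __task_state_match() returns 1 when p->__state matches @state, -1 when
++ * only p->saved_state matches (the task is blocked on an rtlock or frozen
++ * and callers such as wait_task_inactive() treat it as still queued), and
++ * 0 when neither matches.
++ */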
++static __always_inline
++int __task_state_match(struct task_struct *p, unsigned int state)
++{
++ if (READ_ONCE(p->__state) & state)
++ return 1;
++
++ if (READ_ONCE(p->saved_state) & state)
++ return -1;
++
++ return 0;
++}
++
++static __always_inline
++int task_state_match(struct task_struct *p, unsigned int state)
++{
++ /*
++ * Serialize against current_save_and_set_rtlock_wait_state(),
++ * current_restore_rtlock_saved_state(), and __refrigerator().
++ */
++ guard(raw_spinlock_irq)(&p->pi_lock);
++
++ return __task_state_match(p, state);
++}
++
++/*
++ * wait_task_inactive - wait for a thread to unschedule.
++ *
++ * Wait for the thread to block in any of the states set in @match_state.
++ * If it changes, i.e. @p might have woken up, then return zero. When we
++ * succeed in waiting for @p to be off its CPU, we return a positive number
++ * (its total switch count). If a second call a short while later returns the
++ * same number, the caller can be sure that @p has remained unscheduled the
++ * whole time.
++ *
++ * The caller must ensure that the task *will* unschedule sometime soon,
++ * else this function might spin for a *long* time. This function can't
++ * be called with interrupts off, or it may introduce deadlock with
++ * smp_call_function() if an IPI is sent by the same process we are
++ * waiting to become inactive.
++ */
++unsigned long wait_task_inactive(struct task_struct *p, unsigned int match_state)
++{
++ unsigned long flags;
++ int running, queued, match;
++ unsigned long ncsw;
++ struct rq *rq;
++ raw_spinlock_t *lock;
++
++ for (;;) {
++ rq = task_rq(p);
++
++ /*
++ * If the task is actively running on another CPU
++ * still, just relax and busy-wait without holding
++ * any locks.
++ *
++ * NOTE! Since we don't hold any locks, it's not
++ * even sure that "rq" stays as the right runqueue!
++ * But we don't care, since this will return false
++ * if the runqueue has changed and p is actually now
++ * running somewhere else!
++ */
++ while (task_on_cpu(p)) {
++ if (!task_state_match(p, match_state))
++ return 0;
++ cpu_relax();
++ }
++
++ /*
++ * Ok, time to look more closely! We need the rq
++ * lock now, to be *sure*. If we're wrong, we'll
++ * just go back and repeat.
++ */
++ task_access_lock_irqsave(p, &lock, &flags);
++ trace_sched_wait_task(p);
++ running = task_on_cpu(p);
++ queued = p->on_rq;
++ ncsw = 0;
++ if ((match = __task_state_match(p, match_state))) {
++ /*
++ * When matching on p->saved_state, consider this task
++ * still queued so it will wait.
++ */
++ if (match < 0)
++ queued = 1;
++ ncsw = p->nvcsw | LONG_MIN; /* sets MSB */
++ }
++ task_access_unlock_irqrestore(p, lock, &flags);
++
++ /*
++ * If it changed from the expected state, bail out now.
++ */
++ if (unlikely(!ncsw))
++ break;
++
++ /*
++ * Was it really running after all now that we
++ * checked with the proper locks actually held?
++ *
++ * Oops. Go back and try again..
++ */
++ if (unlikely(running)) {
++ cpu_relax();
++ continue;
++ }
++
++ /*
++ * It's not enough that it's not actively running,
++ * it must be off the runqueue _entirely_, and not
++ * preempted!
++ *
++ * So if it was still runnable (but just not actively
++ * running right now), it's preempted, and we should
++ * yield - it could be a while.
++ */
++ if (unlikely(queued)) {
++ ktime_t to = NSEC_PER_SEC / HZ;
++
++ set_current_state(TASK_UNINTERRUPTIBLE);
++ schedule_hrtimeout(&to, HRTIMER_MODE_REL_HARD);
++ continue;
++ }
++
++ /*
++ * Ahh, all good. It wasn't running, and it wasn't
++ * runnable, which means that it will never become
++ * running in the future either. We're all done!
++ */
++ break;
++ }
++
++ return ncsw;
++}
++
++#ifdef CONFIG_SCHED_HRTICK
++/*
++ * Use HR-timers to deliver accurate preemption points.
++ */
++
++static void hrtick_clear(struct rq *rq)
++{
++ if (hrtimer_active(&rq->hrtick_timer))
++ hrtimer_cancel(&rq->hrtick_timer);
++}
++
++/*
++ * High-resolution timer tick.
++ * Runs from hardirq context with interrupts disabled.
++ */
++static enum hrtimer_restart hrtick(struct hrtimer *timer)
++{
++ struct rq *rq = container_of(timer, struct rq, hrtick_timer);
++
++ WARN_ON_ONCE(cpu_of(rq) != smp_processor_id());
++
++ raw_spin_lock(&rq->lock);
++ resched_curr(rq);
++ raw_spin_unlock(&rq->lock);
++
++ return HRTIMER_NORESTART;
++}
++
++/*
++ * Use hrtick when:
++ * - enabled by features
++ * - hrtimer is actually high res
++ */
++static inline int hrtick_enabled(struct rq *rq)
++{
++ /**
++ * Alt schedule FW doesn't support sched_feat yet
++ if (!sched_feat(HRTICK))
++ return 0;
++ */
++ if (!cpu_active(cpu_of(rq)))
++ return 0;
++ return hrtimer_is_hres_active(&rq->hrtick_timer);
++}
++
++#ifdef CONFIG_SMP
++
++static void __hrtick_restart(struct rq *rq)
++{
++ struct hrtimer *timer = &rq->hrtick_timer;
++ ktime_t time = rq->hrtick_time;
++
++ hrtimer_start(timer, time, HRTIMER_MODE_ABS_PINNED_HARD);
++}
++
++/*
++ * called from hardirq (IPI) context
++ */
++static void __hrtick_start(void *arg)
++{
++ struct rq *rq = arg;
++
++ raw_spin_lock(&rq->lock);
++ __hrtick_restart(rq);
++ raw_spin_unlock(&rq->lock);
++}
++
++/*
++ * Called to set the hrtick timer state.
++ *
++ * called with rq->lock held and IRQs disabled
++ */
++static inline void hrtick_start(struct rq *rq, u64 delay)
++{
++ struct hrtimer *timer = &rq->hrtick_timer;
++ s64 delta;
++
++ /*
++ * Don't schedule slices shorter than 10000ns, that just
++ * doesn't make sense and can cause timer DoS.
++ */
++ delta = max_t(s64, delay, 10000LL);
++
++ rq->hrtick_time = ktime_add_ns(timer->base->get_time(), delta);
++
++ if (rq == this_rq())
++ __hrtick_restart(rq);
++ else
++ smp_call_function_single_async(cpu_of(rq), &rq->hrtick_csd);
++}
++
++#else
++/*
++ * Called to set the hrtick timer state.
++ *
++ * called with rq->lock held and IRQs disabled
++ */
++static inline void hrtick_start(struct rq *rq, u64 delay)
++{
++ /*
++ * Don't schedule slices shorter than 10000ns, that just
++ * doesn't make sense. Rely on vruntime for fairness.
++ */
++ delay = max_t(u64, delay, 10000LL);
++ hrtimer_start(&rq->hrtick_timer, ns_to_ktime(delay),
++ HRTIMER_MODE_REL_PINNED_HARD);
++}
++#endif /* CONFIG_SMP */
++
++static void hrtick_rq_init(struct rq *rq)
++{
++#ifdef CONFIG_SMP
++ INIT_CSD(&rq->hrtick_csd, __hrtick_start, rq);
++#endif
++
++ hrtimer_init(&rq->hrtick_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
++ rq->hrtick_timer.function = hrtick;
++}
++#else /* CONFIG_SCHED_HRTICK */
++static inline int hrtick_enabled(struct rq *rq)
++{
++ return 0;
++}
++
++static inline void hrtick_clear(struct rq *rq)
++{
++}
++
++static inline void hrtick_rq_init(struct rq *rq)
++{
++}
++#endif /* CONFIG_SCHED_HRTICK */
++
++/*
++ * activate_task - move a task to the runqueue.
++ *
++ * Context: rq->lock
++ */
++static void activate_task(struct task_struct *p, struct rq *rq)
++{
++ enqueue_task(p, rq, ENQUEUE_WAKEUP);
++
++ WRITE_ONCE(p->on_rq, TASK_ON_RQ_QUEUED);
++ ASSERT_EXCLUSIVE_WRITER(p->on_rq);
++
++ /*
++ * If in_iowait is set, the code below may not trigger any cpufreq
++ * utilization updates, so do it here explicitly with the IOWAIT flag
++ * passed.
++ */
++ cpufreq_update_util(rq, SCHED_CPUFREQ_IOWAIT * p->in_iowait);
++}
++
++static void block_task(struct rq *rq, struct task_struct *p)
++{
++ dequeue_task(p, rq, DEQUEUE_SLEEP);
++
++ if (p->sched_contributes_to_load)
++ rq->nr_uninterruptible++;
++
++ if (p->in_iowait) {
++ atomic_inc(&rq->nr_iowait);
++ delayacct_blkio_start();
++ }
++
++ ASSERT_EXCLUSIVE_WRITER(p->on_rq);
++
++ /*
++ * The moment this write goes through, ttwu() can swoop in and migrate
++ * this task, rendering our rq->__lock ineffective.
++ *
++ * __schedule() try_to_wake_up()
++ * LOCK rq->__lock LOCK p->pi_lock
++ * pick_next_task()
++ * pick_next_task_fair()
++ * pick_next_entity()
++ * dequeue_entities()
++ * __block_task()
++ * RELEASE p->on_rq = 0 if (p->on_rq && ...)
++ * break;
++ *
++ * ACQUIRE (after ctrl-dep)
++ *
++ * cpu = select_task_rq();
++ * set_task_cpu(p, cpu);
++ * ttwu_queue()
++ * ttwu_do_activate()
++ * LOCK rq->__lock
++ * activate_task()
++ * STORE p->on_rq = 1
++ * UNLOCK rq->__lock
++ *
++ * Callers must ensure to not reference @p after this -- we no longer
++ * own it.
++ */
++ smp_store_release(&p->on_rq, 0);
++}
++
++static inline void __set_task_cpu(struct task_struct *p, unsigned int cpu)
++{
++#ifdef CONFIG_SMP
++ /*
++ * After ->cpu is set up to a new value, task_access_lock(p, ...) can be
++ * successfully executed on another CPU. We must ensure that updates of
++ * per-task data have been completed by this moment.
++ */
++ smp_wmb();
++
++ WRITE_ONCE(task_thread_info(p)->cpu, cpu);
++#endif
++}
++
++#ifdef CONFIG_SMP
++
++void set_task_cpu(struct task_struct *p, unsigned int new_cpu)
++{
++#ifdef CONFIG_SCHED_DEBUG
++ unsigned int state = READ_ONCE(p->__state);
++
++ /*
++ * We should never call set_task_cpu() on a blocked task,
++ * ttwu() will sort out the placement.
++ */
++ WARN_ON_ONCE(state != TASK_RUNNING && state != TASK_WAKING && !p->on_rq);
++
++#ifdef CONFIG_LOCKDEP
++ /*
++ * The caller should hold either p->pi_lock or rq->lock, when changing
++ * a task's CPU. ->pi_lock for waking tasks, rq->lock for runnable tasks.
++ *
++ * sched_move_task() holds both and thus holding either pins the cgroup,
++ * see task_group().
++ */
++ WARN_ON_ONCE(debug_locks && !(lockdep_is_held(&p->pi_lock) ||
++ lockdep_is_held(&task_rq(p)->lock)));
++#endif
++ /*
++ * Clearly, migrating tasks to offline CPUs is a fairly daft thing.
++ */
++ WARN_ON_ONCE(!cpu_online(new_cpu));
++
++ WARN_ON_ONCE(is_migration_disabled(p));
++#endif
++ trace_sched_migrate_task(p, new_cpu);
++
++ if (task_cpu(p) != new_cpu)
++ {
++ rseq_migrate(p);
++ sched_mm_cid_migrate_from(p);
++ perf_event_task_migrate(p);
++ }
++
++ __set_task_cpu(p, new_cpu);
++}
++
++static void
++__do_set_cpus_ptr(struct task_struct *p, const struct cpumask *new_mask)
++{
++ /*
++ * This here violates the locking rules for affinity, since we're only
++ * supposed to change these variables while holding both rq->lock and
++ * p->pi_lock.
++ *
++ * HOWEVER, it magically works, because ttwu() is the only code that
++ * accesses these variables under p->pi_lock and only does so after
++ * smp_cond_load_acquire(&p->on_cpu, !VAL), and we're in __schedule()
++ * before finish_task().
++ *
++ * XXX do further audits, this smells like something putrid.
++ */
++ SCHED_WARN_ON(!p->on_cpu);
++ p->cpus_ptr = new_mask;
++}
++
++void migrate_disable(void)
++{
++ struct task_struct *p = current;
++ int cpu;
++
++ if (p->migration_disabled) {
++#ifdef CONFIG_DEBUG_PREEMPT
++ /*
++ * Warn about overflow half-way through the range.
++ */
++ WARN_ON_ONCE((s16)p->migration_disabled < 0);
++#endif
++ p->migration_disabled++;
++ return;
++ }
++
++ guard(preempt)();
++ cpu = smp_processor_id();
++ if (cpumask_test_cpu(cpu, &p->cpus_mask)) {
++ cpu_rq(cpu)->nr_pinned++;
++ p->migration_disabled = 1;
++ /*
++ * Violates locking rules! see comment in __do_set_cpus_ptr().
++ */
++ if (p->cpus_ptr == &p->cpus_mask)
++ __do_set_cpus_ptr(p, cpumask_of(cpu));
++ }
++}
++EXPORT_SYMBOL_GPL(migrate_disable);
++
++void migrate_enable(void)
++{
++ struct task_struct *p = current;
++
++#ifdef CONFIG_DEBUG_PREEMPT
++ /*
++ * Check both overflow from migrate_disable() and superfluous
++ * migrate_enable().
++ */
++ if (WARN_ON_ONCE((s16)p->migration_disabled <= 0))
++ return;
++#endif
++
++ if (p->migration_disabled > 1) {
++ p->migration_disabled--;
++ return;
++ }
++
++ /*
++ * Ensure stop_task runs either before or after this, and that
++ * __set_cpus_allowed_ptr(SCA_MIGRATE_ENABLE) doesn't schedule().
++ */
++ guard(preempt)();
++ /*
++ * Assumption: current should be running on allowed cpu
++ */
++ WARN_ON_ONCE(!cpumask_test_cpu(smp_processor_id(), &p->cpus_mask));
++ if (p->cpus_ptr != &p->cpus_mask)
++ __do_set_cpus_ptr(p, &p->cpus_mask);
++ /*
++ * Mustn't clear migration_disabled() until cpus_ptr points back at the
++ * regular cpus_mask, otherwise things that race (eg.
++ * select_fallback_rq) get confused.
++ */
++ barrier();
++ p->migration_disabled = 0;
++ this_rq()->nr_pinned--;
++}
++EXPORT_SYMBOL_GPL(migrate_enable);
++
++static void __migrate_force_enable(struct task_struct *p, struct rq *rq)
++{
++ if (likely(p->cpus_ptr != &p->cpus_mask))
++ __do_set_cpus_ptr(p, &p->cpus_mask);
++ p->migration_disabled = 0;
++ /* When p is migrate_disabled, rq->lock should be held */
++ rq->nr_pinned--;
++}
++
++static inline bool rq_has_pinned_tasks(struct rq *rq)
++{
++ return rq->nr_pinned;
++}
++
++/*
++ * Per-CPU kthreads are allowed to run on !active && online CPUs, see
++ * __set_cpus_allowed_ptr() and select_fallback_rq().
++ */
++static inline bool is_cpu_allowed(struct task_struct *p, int cpu)
++{
++ /* When not in the task's cpumask, no point in looking further. */
++ if (!cpumask_test_cpu(cpu, p->cpus_ptr))
++ return false;
++
++ /* migrate_disabled() must be allowed to finish. */
++ if (is_migration_disabled(p))
++ return cpu_online(cpu);
++
++ /* Non-kernel threads are not allowed during either online or offline. */
++ if (!(p->flags & PF_KTHREAD))
++ return cpu_active(cpu) && task_cpu_possible(cpu, p);
++
++ /* KTHREAD_IS_PER_CPU is always allowed. */
++ if (kthread_is_per_cpu(p))
++ return cpu_online(cpu);
++
++ /* Regular kernel threads don't get to stay during offline. */
++ if (cpu_dying(cpu))
++ return false;
++
++ /* But are allowed during online. */
++ return cpu_online(cpu);
++}
++
++/*
++ * This is how migration works:
++ *
++ * 1) we invoke migration_cpu_stop() on the target CPU using
++ * stop_one_cpu().
++ * 2) stopper starts to run (implicitly forcing the migrated thread
++ * off the CPU)
++ * 3) it checks whether the migrated task is still in the wrong runqueue.
++ * 4) if it's in the wrong runqueue then the migration thread removes
++ * it and puts it into the right queue.
++ * 5) stopper completes and stop_one_cpu() returns and the migration
++ * is done.
++ */
++
++/*
++ * move_queued_task - move a queued task to new rq.
++ *
++ * Returns (locked) new rq. Old rq's lock is released.
++ */
++struct rq *move_queued_task(struct rq *rq, struct task_struct *p, int new_cpu)
++{
++ lockdep_assert_held(&rq->lock);
++
++ WRITE_ONCE(p->on_rq, TASK_ON_RQ_MIGRATING);
++ dequeue_task(p, rq, 0);
++ set_task_cpu(p, new_cpu);
++ raw_spin_unlock(&rq->lock);
++
++ rq = cpu_rq(new_cpu);
++
++ raw_spin_lock(&rq->lock);
++ WARN_ON_ONCE(task_cpu(p) != new_cpu);
++
++ sched_mm_cid_migrate_to(rq, p);
++
++ sched_task_sanity_check(p, rq);
++ enqueue_task(p, rq, 0);
++ WRITE_ONCE(p->on_rq, TASK_ON_RQ_QUEUED);
++ wakeup_preempt(rq);
++
++ return rq;
++}
++
++struct migration_arg {
++ struct task_struct *task;
++ int dest_cpu;
++};
++
++/*
++ * Move (not current) task off this CPU, onto the destination CPU. We're doing
++ * this because either it can't run here any more (set_cpus_allowed()
++ * away from this CPU, or CPU going down), or because we're
++ * attempting to rebalance this task on exec (sched_exec).
++ *
++ * So we race with normal scheduler movements, but that's OK, as long
++ * as the task is no longer on this CPU.
++ */
++static struct rq *__migrate_task(struct rq *rq, struct task_struct *p, int dest_cpu)
++{
++ /* Affinity changed (again). */
++ if (!is_cpu_allowed(p, dest_cpu))
++ return rq;
++
++ return move_queued_task(rq, p, dest_cpu);
++}
++
++/*
++ * migration_cpu_stop - this will be executed by a high-prio stopper thread
++ * and performs thread migration by bumping thread off CPU then
++ * 'pushing' onto another runqueue.
++ */
++static int migration_cpu_stop(void *data)
++{
++ struct migration_arg *arg = data;
++ struct task_struct *p = arg->task;
++ struct rq *rq = this_rq();
++ unsigned long flags;
++
++ /*
++ * The original target CPU might have gone down and we might
++ * be on another CPU but it doesn't matter.
++ */
++ local_irq_save(flags);
++ /*
++ * We need to explicitly wake pending tasks before running
++ * __migrate_task() such that we will not miss enforcing cpus_ptr
++ * during wakeups, see set_cpus_allowed_ptr()'s TASK_WAKING test.
++ */
++ flush_smp_call_function_queue();
++
++ raw_spin_lock(&p->pi_lock);
++ raw_spin_lock(&rq->lock);
++ /*
++ * If task_rq(p) != rq, it cannot be migrated here, because we're
++ * holding rq->lock, if p->on_rq == 0 it cannot get enqueued because
++ * we're holding p->pi_lock.
++ */
++ if (task_rq(p) == rq && task_on_rq_queued(p)) {
++ update_rq_clock(rq);
++ rq = __migrate_task(rq, p, arg->dest_cpu);
++ }
++ raw_spin_unlock(&rq->lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++
++ return 0;
++}
++
++static inline void
++set_cpus_allowed_common(struct task_struct *p, struct affinity_context *ctx)
++{
++ cpumask_copy(&p->cpus_mask, ctx->new_mask);
++ p->nr_cpus_allowed = cpumask_weight(ctx->new_mask);
++
++ /*
++ * Swap in a new user_cpus_ptr if SCA_USER flag set
++ */
++ if (ctx->flags & SCA_USER)
++ swap(p->user_cpus_ptr, ctx->user_mask);
++}
++
++static void
++__do_set_cpus_allowed(struct task_struct *p, struct affinity_context *ctx)
++{
++ lockdep_assert_held(&p->pi_lock);
++ set_cpus_allowed_common(p, ctx);
++ mm_set_cpus_allowed(p->mm, ctx->new_mask);
++}
++
++/*
++ * Used for kthread_bind() and select_fallback_rq(), in both cases the user
++ * affinity (if any) should be destroyed too.
++ */
++void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
++{
++ struct affinity_context ac = {
++ .new_mask = new_mask,
++ .user_mask = NULL,
++ .flags = SCA_USER, /* clear the user requested mask */
++ };
++ union cpumask_rcuhead {
++ cpumask_t cpumask;
++ struct rcu_head rcu;
++ };
++
++ __do_set_cpus_allowed(p, &ac);
++
++ if (is_migration_disabled(p) && !cpumask_test_cpu(task_cpu(p), &p->cpus_mask))
++ __migrate_force_enable(p, task_rq(p));
++
++ /*
++ * Because this is called with p->pi_lock held, it is not possible
++ * to use kfree() here (when PREEMPT_RT=y), therefore punt to using
++ * kfree_rcu().
++ */
++ kfree_rcu((union cpumask_rcuhead *)ac.user_mask, rcu);
++}
++
++int dup_user_cpus_ptr(struct task_struct *dst, struct task_struct *src,
++ int node)
++{
++ cpumask_t *user_mask;
++ unsigned long flags;
++
++ /*
++ * Always clear dst->user_cpus_ptr first as their user_cpus_ptr's
++ * may differ by now due to racing.
++ */
++ dst->user_cpus_ptr = NULL;
++
++ /*
++ * This check is racy and losing the race is a valid situation.
++ * It is not worth the extra overhead of taking the pi_lock on
++ * every fork/clone.
++ */
++ if (data_race(!src->user_cpus_ptr))
++ return 0;
++
++ user_mask = alloc_user_cpus_ptr(node);
++ if (!user_mask)
++ return -ENOMEM;
++
++ /*
++ * Use pi_lock to protect content of user_cpus_ptr
++ *
++ * Though unlikely, user_cpus_ptr can be reset to NULL by a concurrent
++ * do_set_cpus_allowed().
++ */
++ raw_spin_lock_irqsave(&src->pi_lock, flags);
++ if (src->user_cpus_ptr) {
++ swap(dst->user_cpus_ptr, user_mask);
++ cpumask_copy(dst->user_cpus_ptr, src->user_cpus_ptr);
++ }
++ raw_spin_unlock_irqrestore(&src->pi_lock, flags);
++
++ if (unlikely(user_mask))
++ kfree(user_mask);
++
++ return 0;
++}
++
++static inline struct cpumask *clear_user_cpus_ptr(struct task_struct *p)
++{
++ struct cpumask *user_mask = NULL;
++
++ swap(p->user_cpus_ptr, user_mask);
++
++ return user_mask;
++}
++
++void release_user_cpus_ptr(struct task_struct *p)
++{
++ kfree(clear_user_cpus_ptr(p));
++}
++
++#endif
++
++/**
++ * task_curr - is this task currently executing on a CPU?
++ * @p: the task in question.
++ *
++ * Return: 1 if the task is currently executing. 0 otherwise.
++ */
++inline int task_curr(const struct task_struct *p)
++{
++ return cpu_curr(task_cpu(p)) == p;
++}
++
++#ifdef CONFIG_SMP
++/***
++ * kick_process - kick a running thread to enter/exit the kernel
++ * @p: the to-be-kicked thread
++ *
++ * Cause a process which is running on another CPU to enter
++ * kernel-mode, without any delay. (to get signals handled.)
++ *
++ * NOTE: this function doesn't have to take the runqueue lock,
++ * because all it wants to ensure is that the remote task enters
++ * the kernel. If the IPI races and the task has been migrated
++ * to another CPU then no harm is done and the purpose has been
++ * achieved as well.
++ */
++void kick_process(struct task_struct *p)
++{
++ guard(preempt)();
++ int cpu = task_cpu(p);
++
++ if ((cpu != smp_processor_id()) && task_curr(p))
++ smp_send_reschedule(cpu);
++}
++EXPORT_SYMBOL_GPL(kick_process);
++
++/*
++ * ->cpus_ptr is protected by both rq->lock and p->pi_lock
++ *
++ * A few notes on cpu_active vs cpu_online:
++ *
++ * - cpu_active must be a subset of cpu_online
++ *
++ * - on CPU-up we allow per-CPU kthreads on the online && !active CPU,
++ * see __set_cpus_allowed_ptr(). At this point the newly online
++ * CPU isn't yet part of the sched domains, and balancing will not
++ * see it.
++ *
++ * - on cpu-down we clear cpu_active() to mask the sched domains and
++ * avoid the load balancer to place new tasks on the to be removed
++ * CPU. Existing tasks will remain running there and will be taken
++ * off.
++ *
++ * This means that fallback selection must not select !active CPUs.
++ * And can assume that any active CPU must be online. Conversely
++ * select_task_rq() below may allow selection of !active CPUs in order
++ * to satisfy the above rules.
++ */
++static int select_fallback_rq(int cpu, struct task_struct *p)
++{
++ int nid = cpu_to_node(cpu);
++ const struct cpumask *nodemask = NULL;
++ enum { cpuset, possible, fail } state = cpuset;
++ int dest_cpu;
++
++ /*
++ * If the node that the CPU is on has been offlined, cpu_to_node()
++ * will return -1. There is no CPU on the node, and we should
++ * select the CPU on the other node.
++ */
++ if (nid != -1) {
++ nodemask = cpumask_of_node(nid);
++
++ /* Look for allowed, online CPU in same node. */
++ for_each_cpu(dest_cpu, nodemask) {
++ if (is_cpu_allowed(p, dest_cpu))
++ return dest_cpu;
++ }
++ }
++
++ for (;;) {
++ /* Any allowed, online CPU? */
++ for_each_cpu(dest_cpu, p->cpus_ptr) {
++ if (!is_cpu_allowed(p, dest_cpu))
++ continue;
++ goto out;
++ }
++
++ /* No more Mr. Nice Guy. */
++ switch (state) {
++ case cpuset:
++ if (cpuset_cpus_allowed_fallback(p)) {
++ state = possible;
++ break;
++ }
++ fallthrough;
++ case possible:
++ /*
++ * XXX When called from select_task_rq() we only
++ * hold p->pi_lock and again violate locking order.
++ *
++ * More yuck to audit.
++ */
++ do_set_cpus_allowed(p, task_cpu_fallback_mask(p));
++ state = fail;
++ break;
++
++ case fail:
++ BUG();
++ break;
++ }
++ }
++
++out:
++ if (state != cpuset) {
++ /*
++ * Don't tell them about moving exiting tasks or
++ * kernel threads (both mm NULL), since they never
++ * leave the kernel.
++ */
++ if (p->mm && printk_ratelimit()) {
++ printk_deferred("process %d (%s) no longer affine to cpu%d\n",
++ task_pid_nr(p), p->comm, cpu);
++ }
++ }
++
++ return dest_cpu;
++}
++
++static inline void
++sched_preempt_mask_flush(cpumask_t *mask, int prio, int ref)
++{
++ int cpu;
++
++ cpumask_copy(mask, sched_preempt_mask + ref);
++ if (prio < ref) {
++ for_each_clear_bit(cpu, cpumask_bits(mask), nr_cpumask_bits) {
++ if (prio < cpu_rq(cpu)->prio)
++ cpumask_set_cpu(cpu, mask);
++ }
++ } else {
++ for_each_cpu_andnot(cpu, mask, sched_idle_mask) {
++ if (prio >= cpu_rq(cpu)->prio)
++ cpumask_clear_cpu(cpu, mask);
++ }
++ }
++}
++
++static inline int
++preempt_mask_check(cpumask_t *preempt_mask, cpumask_t *allow_mask, int prio)
++{
++ cpumask_t *mask = sched_preempt_mask + prio;
++ int pr = atomic_read(&sched_prio_record);
++
++ if (pr != prio && SCHED_QUEUE_BITS - 1 != prio) {
++ sched_preempt_mask_flush(mask, prio, pr);
++ atomic_set(&sched_prio_record, prio);
++ }
++
++ return cpumask_and(preempt_mask, allow_mask, mask);
++}
++
++__read_mostly idle_select_func_t idle_select_func ____cacheline_aligned_in_smp = cpumask_and;
++
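++/*
++ * select_task_rq() picks a target CPU for @p in order of preference: an
++ * allowed CPU that is currently idle (idle_select_func() defaults to a plain
++ * cpumask_and() with sched_idle_mask), then an allowed CPU whose current task
++ * runs at a lower priority than @p (preempt_mask_check()), and otherwise any
++ * allowed CPU, with best_mask_cpu() preferring CPUs close to @p's previous
++ * CPU in the topology.
++ */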
++static inline int select_task_rq(struct task_struct *p)
++{
++ cpumask_t allow_mask, mask;
++
++ if (unlikely(!cpumask_and(&allow_mask, p->cpus_ptr, cpu_active_mask)))
++ return select_fallback_rq(task_cpu(p), p);
++
++ if (idle_select_func(&mask, &allow_mask, sched_idle_mask) ||
++ preempt_mask_check(&mask, &allow_mask, task_sched_prio(p)))
++ return best_mask_cpu(task_cpu(p), &mask);
++
++ return best_mask_cpu(task_cpu(p), &allow_mask);
++}
++
++void sched_set_stop_task(int cpu, struct task_struct *stop)
++{
++ static struct lock_class_key stop_pi_lock;
++ struct sched_param stop_param = { .sched_priority = STOP_PRIO };
++ struct sched_param start_param = { .sched_priority = 0 };
++ struct task_struct *old_stop = cpu_rq(cpu)->stop;
++
++ if (stop) {
++ /*
++ * Make it appear like a SCHED_FIFO task; it's something
++ * userspace knows about and won't get confused about.
++ *
++ * Also, it will make PI more or less work without too
++ * much confusion -- but then, stop work should not
++ * rely on PI working anyway.
++ */
++ sched_setscheduler_nocheck(stop, SCHED_FIFO, &stop_param);
++
++ /*
++ * The PI code calls rt_mutex_setprio() with ->pi_lock held to
++ * adjust the effective priority of a task. As a result,
++ * rt_mutex_setprio() can trigger (RT) balancing operations,
++ * which can then trigger wakeups of the stop thread to push
++ * around the current task.
++ *
++ * The stop task itself will never be part of the PI-chain, it
++ * never blocks, therefore that ->pi_lock recursion is safe.
++ * Tell lockdep about this by placing the stop->pi_lock in its
++ * own class.
++ */
++ lockdep_set_class(&stop->pi_lock, &stop_pi_lock);
++ }
++
++ cpu_rq(cpu)->stop = stop;
++
++ if (old_stop) {
++ /*
++ * Reset it back to a normal scheduling policy so that
++ * it can die in pieces.
++ */
++ sched_setscheduler_nocheck(old_stop, SCHED_NORMAL, &start_param);
++ }
++}
++
++static int affine_move_task(struct rq *rq, struct task_struct *p, int dest_cpu,
++ raw_spinlock_t *lock, unsigned long irq_flags)
++ __releases(rq->lock)
++ __releases(p->pi_lock)
++{
++ /* Can the task run on the task's current CPU? If so, we're done */
++ if (!cpumask_test_cpu(task_cpu(p), &p->cpus_mask)) {
++ if (is_migration_disabled(p))
++ __migrate_force_enable(p, rq);
++
++ if (task_on_cpu(p) || READ_ONCE(p->__state) == TASK_WAKING) {
++ struct migration_arg arg = { p, dest_cpu };
++
++ /* Need help from migration thread: drop lock and wait. */
++ __task_access_unlock(p, lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
++ stop_one_cpu(cpu_of(rq), migration_cpu_stop, &arg);
++ return 0;
++ }
++ if (task_on_rq_queued(p)) {
++ /*
++ * OK, since we're going to drop the lock immediately
++ * afterwards anyway.
++ */
++ update_rq_clock(rq);
++ rq = move_queued_task(rq, p, dest_cpu);
++ lock = &rq->lock;
++ }
++ }
++ __task_access_unlock(p, lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
++ return 0;
++}
++
++static int __set_cpus_allowed_ptr_locked(struct task_struct *p,
++ struct affinity_context *ctx,
++ struct rq *rq,
++ raw_spinlock_t *lock,
++ unsigned long irq_flags)
++{
++ const struct cpumask *cpu_allowed_mask = task_cpu_possible_mask(p);
++ const struct cpumask *cpu_valid_mask = cpu_active_mask;
++ bool kthread = p->flags & PF_KTHREAD;
++ int dest_cpu;
++ int ret = 0;
++
++ if (kthread || is_migration_disabled(p)) {
++ /*
++ * Kernel threads are allowed on online && !active CPUs,
++ * however, during cpu-hot-unplug, even these might get pushed
++ * away if not KTHREAD_IS_PER_CPU.
++ *
++ * Specifically, migration_disabled() tasks must not fail the
++ * cpumask_any_and_distribute() pick below, esp. so on
++ * SCA_MIGRATE_ENABLE, otherwise we'll not call
++ * set_cpus_allowed_common() and actually reset p->cpus_ptr.
++ */
++ cpu_valid_mask = cpu_online_mask;
++ }
++
++ if (!kthread && !cpumask_subset(ctx->new_mask, cpu_allowed_mask)) {
++ ret = -EINVAL;
++ goto out;
++ }
++
++ /*
++ * Must re-check here, to close a race against __kthread_bind(),
++ * sched_setaffinity() is not guaranteed to observe the flag.
++ */
++ if ((ctx->flags & SCA_CHECK) && (p->flags & PF_NO_SETAFFINITY)) {
++ ret = -EINVAL;
++ goto out;
++ }
++
++ if (cpumask_equal(&p->cpus_mask, ctx->new_mask))
++ goto out;
++
++ dest_cpu = cpumask_any_and(cpu_valid_mask, ctx->new_mask);
++ if (dest_cpu >= nr_cpu_ids) {
++ ret = -EINVAL;
++ goto out;
++ }
++
++ __do_set_cpus_allowed(p, ctx);
++
++ return affine_move_task(rq, p, dest_cpu, lock, irq_flags);
++
++out:
++ __task_access_unlock(p, lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
++
++ return ret;
++}
++
++/*
++ * Change a given task's CPU affinity. Migrate the thread to a proper CPU
++ * and schedule it away if the CPU it's executing on is removed from the
++ * allowed bitmask.
++ *
++ * NOTE: the caller must have a valid reference to the task, the
++ * task must not exit() & deallocate itself prematurely. The
++ * call is not atomic; no spinlocks may be held.
++ */
++int __set_cpus_allowed_ptr(struct task_struct *p,
++ struct affinity_context *ctx)
++{
++ unsigned long irq_flags;
++ struct rq *rq;
++ raw_spinlock_t *lock;
++
++ raw_spin_lock_irqsave(&p->pi_lock, irq_flags);
++ rq = __task_access_lock(p, &lock);
++ /*
++ * Masking should be skipped if SCA_USER or any of the SCA_MIGRATE_*
++ * flags are set.
++ */
++ if (p->user_cpus_ptr &&
++ !(ctx->flags & SCA_USER) &&
++ cpumask_and(rq->scratch_mask, ctx->new_mask, p->user_cpus_ptr))
++ ctx->new_mask = rq->scratch_mask;
++
++
++ return __set_cpus_allowed_ptr_locked(p, ctx, rq, lock, irq_flags);
++}
++
++int set_cpus_allowed_ptr(struct task_struct *p, const struct cpumask *new_mask)
++{
++ struct affinity_context ac = {
++ .new_mask = new_mask,
++ .flags = 0,
++ };
++
++ return __set_cpus_allowed_ptr(p, &ac);
++}
++EXPORT_SYMBOL_GPL(set_cpus_allowed_ptr);
++
++/*
++ * Change a given task's CPU affinity to the intersection of its current
++ * affinity mask and @subset_mask, writing the resulting mask to @new_mask.
++ * If user_cpus_ptr is defined, use it as the basis for restricting CPU
++ * affinity or use cpu_online_mask instead.
++ *
++ * If the resulting mask is empty, leave the affinity unchanged and return
++ * -EINVAL.
++ */
++static int restrict_cpus_allowed_ptr(struct task_struct *p,
++ struct cpumask *new_mask,
++ const struct cpumask *subset_mask)
++{
++ struct affinity_context ac = {
++ .new_mask = new_mask,
++ .flags = 0,
++ };
++ unsigned long irq_flags;
++ raw_spinlock_t *lock;
++ struct rq *rq;
++ int err;
++
++ raw_spin_lock_irqsave(&p->pi_lock, irq_flags);
++ rq = __task_access_lock(p, &lock);
++
++ if (!cpumask_and(new_mask, task_user_cpus(p), subset_mask)) {
++ err = -EINVAL;
++ goto err_unlock;
++ }
++
++ return __set_cpus_allowed_ptr_locked(p, &ac, rq, lock, irq_flags);
++
++err_unlock:
++ __task_access_unlock(p, lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
++ return err;
++}
++
++/*
++ * Restrict the CPU affinity of task @p so that it is a subset of
++ * task_cpu_possible_mask() and point @p->user_cpus_ptr to a copy of the
++ * old affinity mask. If the resulting mask is empty, we warn and walk
++ * up the cpuset hierarchy until we find a suitable mask.
++ */
++void force_compatible_cpus_allowed_ptr(struct task_struct *p)
++{
++ cpumask_var_t new_mask;
++ const struct cpumask *override_mask = task_cpu_possible_mask(p);
++
++ alloc_cpumask_var(&new_mask, GFP_KERNEL);
++
++ /*
++ * __migrate_task() can fail silently in the face of concurrent
++ * offlining of the chosen destination CPU, so take the hotplug
++ * lock to ensure that the migration succeeds.
++ */
++ cpus_read_lock();
++ if (!cpumask_available(new_mask))
++ goto out_set_mask;
++
++ if (!restrict_cpus_allowed_ptr(p, new_mask, override_mask))
++ goto out_free_mask;
++
++ /*
++ * We failed to find a valid subset of the affinity mask for the
++ * task, so override it based on its cpuset hierarchy.
++ */
++ cpuset_cpus_allowed(p, new_mask);
++ override_mask = new_mask;
++
++out_set_mask:
++ if (printk_ratelimit()) {
++ printk_deferred("Overriding affinity for process %d (%s) to CPUs %*pbl\n",
++ task_pid_nr(p), p->comm,
++ cpumask_pr_args(override_mask));
++ }
++
++ WARN_ON(set_cpus_allowed_ptr(p, override_mask));
++out_free_mask:
++ cpus_read_unlock();
++ free_cpumask_var(new_mask);
++}
++
++/*
++ * Restore the affinity of a task @p which was previously restricted by a
++ * call to force_compatible_cpus_allowed_ptr().
++ *
++ * It is the caller's responsibility to serialise this with any calls to
++ * force_compatible_cpus_allowed_ptr(@p).
++ */
++void relax_compatible_cpus_allowed_ptr(struct task_struct *p)
++{
++ struct affinity_context ac = {
++ .new_mask = task_user_cpus(p),
++ .flags = 0,
++ };
++ int ret;
++
++ /*
++ * Try to restore the old affinity mask with __sched_setaffinity().
++ * Cpuset masking will be done there too.
++ */
++ ret = __sched_setaffinity(p, &ac);
++ WARN_ON_ONCE(ret);
++}
++
++#else /* CONFIG_SMP */
++
++static inline int select_task_rq(struct task_struct *p)
++{
++ return 0;
++}
++
++static inline bool rq_has_pinned_tasks(struct rq *rq)
++{
++ return false;
++}
++
++#endif /* !CONFIG_SMP */
++
++static void
++ttwu_stat(struct task_struct *p, int cpu, int wake_flags)
++{
++ struct rq *rq;
++
++ if (!schedstat_enabled())
++ return;
++
++ rq = this_rq();
++
++#ifdef CONFIG_SMP
++ if (cpu == rq->cpu) {
++ __schedstat_inc(rq->ttwu_local);
++ __schedstat_inc(p->stats.nr_wakeups_local);
++ } else {
++ /** Alt schedule FW ToDo:
++ * How to do ttwu_wake_remote
++ */
++ }
++#endif /* CONFIG_SMP */
++
++ __schedstat_inc(rq->ttwu_count);
++ __schedstat_inc(p->stats.nr_wakeups);
++}
++
++/*
++ * Mark the task runnable.
++ */
++static inline void ttwu_do_wakeup(struct task_struct *p)
++{
++ WRITE_ONCE(p->__state, TASK_RUNNING);
++ trace_sched_wakeup(p);
++}
++
++static inline void
++ttwu_do_activate(struct rq *rq, struct task_struct *p, int wake_flags)
++{
++ if (p->sched_contributes_to_load)
++ rq->nr_uninterruptible--;
++
++ if (
++#ifdef CONFIG_SMP
++ !(wake_flags & WF_MIGRATED) &&
++#endif
++ p->in_iowait) {
++ delayacct_blkio_end(p);
++ atomic_dec(&task_rq(p)->nr_iowait);
++ }
++
++ activate_task(p, rq);
++ wakeup_preempt(rq);
++
++ ttwu_do_wakeup(p);
++}
++
++/*
++ * Consider @p being inside a wait loop:
++ *
++ * for (;;) {
++ * set_current_state(TASK_UNINTERRUPTIBLE);
++ *
++ * if (CONDITION)
++ * break;
++ *
++ * schedule();
++ * }
++ * __set_current_state(TASK_RUNNING);
++ *
++ * between set_current_state() and schedule(). In this case @p is still
++ * runnable, so all that needs doing is change p->state back to TASK_RUNNING in
++ * an atomic manner.
++ *
++ * By taking task_rq(p)->lock we serialize against schedule(), if @p->on_rq
++ * then schedule() must still happen and p->state can be changed to
++ * TASK_RUNNING. Otherwise we lost the race, schedule() has happened, and we
++ * need to do a full wakeup with enqueue.
++ *
++ * Returns: %true when the wakeup is done,
++ * %false otherwise.
++ */
++static int ttwu_runnable(struct task_struct *p, int wake_flags)
++{
++ struct rq *rq;
++ raw_spinlock_t *lock;
++ int ret = 0;
++
++ rq = __task_access_lock(p, &lock);
++ if (task_on_rq_queued(p)) {
++ if (!task_on_cpu(p)) {
++ /*
++ * When on_rq && !on_cpu the task is preempted, see if
++ * it should preempt the task that is current now.
++ */
++ update_rq_clock(rq);
++ wakeup_preempt(rq);
++ }
++ ttwu_do_wakeup(p);
++ ret = 1;
++ }
++ __task_access_unlock(p, lock);
++
++ return ret;
++}
++
++#ifdef CONFIG_SMP
++void sched_ttwu_pending(void *arg)
++{
++ struct llist_node *llist = arg;
++ struct rq *rq = this_rq();
++ struct task_struct *p, *t;
++ struct rq_flags rf;
++
++ if (!llist)
++ return;
++
++ rq_lock_irqsave(rq, &rf);
++ update_rq_clock(rq);
++
++ llist_for_each_entry_safe(p, t, llist, wake_entry.llist) {
++ if (WARN_ON_ONCE(p->on_cpu))
++ smp_cond_load_acquire(&p->on_cpu, !VAL);
++
++ if (WARN_ON_ONCE(task_cpu(p) != cpu_of(rq)))
++ set_task_cpu(p, cpu_of(rq));
++
++ ttwu_do_activate(rq, p, p->sched_remote_wakeup ? WF_MIGRATED : 0);
++ }
++
++ /*
++ * Must be after enqueueing at least once task such that
++ * idle_cpu() does not observe a false-negative -- if it does,
++ * it is possible for select_idle_siblings() to stack a number
++ * of tasks on this CPU during that window.
++ *
++ * It is OK to clear ttwu_pending when another task is pending.
++ * We will receive an IPI after local IRQs are enabled and then enqueue it.
++ * Since nr_running > 0 now, idle_cpu() will always get the correct result.
++ */
++ WRITE_ONCE(rq->ttwu_pending, 0);
++ rq_unlock_irqrestore(rq, &rf);
++}
++
++/*
++ * Prepare the scene for sending an IPI for a remote smp_call
++ *
++ * Returns true if the caller can proceed with sending the IPI.
++ * Returns false otherwise.
++ */
++bool call_function_single_prep_ipi(int cpu)
++{
++ if (set_nr_if_polling(cpu_rq(cpu)->idle)) {
++ trace_sched_wake_idle_without_ipi(cpu);
++ return false;
++ }
++
++ return true;
++}
++
++/*
++ * Queue a task on the target CPUs wake_list and wake the CPU via IPI if
++ * necessary. The wakee CPU on receipt of the IPI will queue the task
++ * via sched_ttwu_wakeup() for activation so the wakee incurs the cost
++ * of the wakeup instead of the waker.
++ */
++static void __ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
++{
++ struct rq *rq = cpu_rq(cpu);
++
++ p->sched_remote_wakeup = !!(wake_flags & WF_MIGRATED);
++
++ WRITE_ONCE(rq->ttwu_pending, 1);
++ __smp_call_single_queue(cpu, &p->wake_entry.llist);
++}
++
++static inline bool ttwu_queue_cond(struct task_struct *p, int cpu)
++{
++ /*
++ * Do not complicate things with the async wake_list while the CPU is
++ * in hotplug state.
++ */
++ if (!cpu_active(cpu))
++ return false;
++
++ /* Ensure the task will still be allowed to run on the CPU. */
++ if (!cpumask_test_cpu(cpu, p->cpus_ptr))
++ return false;
++
++ /*
++ * If the CPU does not share cache, then queue the task on the
++ * remote rqs wakelist to avoid accessing remote data.
++ */
++ if (!cpus_share_cache(smp_processor_id(), cpu))
++ return true;
++
++ if (cpu == smp_processor_id())
++ return false;
++
++ /*
++ * If the wakee cpu is idle, or the task is descheduling and the
++ * only running task on the CPU, then use the wakelist to offload
++ * the task activation to the idle (or soon-to-be-idle) CPU as
++ * the current CPU is likely busy. nr_running is checked to
++ * avoid unnecessary task stacking.
++ *
++ * Note that we can only get here with (wakee) p->on_rq=0,
++ * p->on_cpu can be whatever, we've done the dequeue, so
++ * the wakee has been accounted out of ->nr_running.
++ */
++ if (!cpu_rq(cpu)->nr_running)
++ return true;
++
++ return false;
++}
++
++static bool ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
++{
++ if (__is_defined(ALT_SCHED_TTWU_QUEUE) && ttwu_queue_cond(p, cpu)) {
++ sched_clock_cpu(cpu); /* Sync clocks across CPUs */
++ __ttwu_queue_wakelist(p, cpu, wake_flags);
++ return true;
++ }
++
++ return false;
++}
++
++void wake_up_if_idle(int cpu)
++{
++ struct rq *rq = cpu_rq(cpu);
++
++ guard(rcu)();
++ if (is_idle_task(rcu_dereference(rq->curr))) {
++ guard(raw_spinlock_irqsave)(&rq->lock);
++ if (is_idle_task(rq->curr))
++ resched_curr(rq);
++ }
++}
++
++extern struct static_key_false sched_asym_cpucapacity;
++
++static __always_inline bool sched_asym_cpucap_active(void)
++{
++ return static_branch_unlikely(&sched_asym_cpucapacity);
++}
++
++bool cpus_equal_capacity(int this_cpu, int that_cpu)
++{
++ if (!sched_asym_cpucap_active())
++ return true;
++
++ if (this_cpu == that_cpu)
++ return true;
++
++ return arch_scale_cpu_capacity(this_cpu) == arch_scale_cpu_capacity(that_cpu);
++}
++
++bool cpus_share_cache(int this_cpu, int that_cpu)
++{
++ if (this_cpu == that_cpu)
++ return true;
++
++ return per_cpu(sd_llc_id, this_cpu) == per_cpu(sd_llc_id, that_cpu);
++}
++#else /* !CONFIG_SMP */
++
++static inline bool ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
++{
++ return false;
++}
++
++#endif /* CONFIG_SMP */
++
++static inline void ttwu_queue(struct task_struct *p, int cpu, int wake_flags)
++{
++ struct rq *rq = cpu_rq(cpu);
++
++ if (ttwu_queue_wakelist(p, cpu, wake_flags))
++ return;
++
++ raw_spin_lock(&rq->lock);
++ update_rq_clock(rq);
++ ttwu_do_activate(rq, p, wake_flags);
++ raw_spin_unlock(&rq->lock);
++}
++
++/*
++ * Invoked from try_to_wake_up() to check whether the task can be woken up.
++ *
++ * The caller holds p::pi_lock if p != current or has preemption
++ * disabled when p == current.
++ *
++ * The rules of saved_state:
++ *
++ * The related locking code always holds p::pi_lock when updating
++ * p::saved_state, which means the code is fully serialized in both cases.
++ *
++ * For PREEMPT_RT, the lock wait and lock wakeups happen via TASK_RTLOCK_WAIT.
++ * No other bits set. This allows to distinguish all wakeup scenarios.
++ *
++ * For FREEZER, the wakeup happens via TASK_FROZEN. No other bits set. This
++ * allows us to prevent early wakeup of tasks before they can be run on
++ * asymmetric ISA architectures (e.g. ARMv9).
++ */
++static __always_inline
++bool ttwu_state_match(struct task_struct *p, unsigned int state, int *success)
++{
++ int match;
++
++ if (IS_ENABLED(CONFIG_DEBUG_PREEMPT)) {
++ WARN_ON_ONCE((state & TASK_RTLOCK_WAIT) &&
++ state != TASK_RTLOCK_WAIT);
++ }
++
++ *success = !!(match = __task_state_match(p, state));
++
++ /*
++ * Saved state preserves the task state across blocking on
++ * an RT lock or TASK_FREEZABLE tasks. If the state matches,
++ * set p::saved_state to TASK_RUNNING, but do not wake the task
++ * because it waits for a lock wakeup or __thaw_task(). Also
++ * indicate success because from the regular waker's point of
++ * view this has succeeded.
++ *
++ * After acquiring the lock the task will restore p::__state
++ * from p::saved_state which ensures that the regular
++ * wakeup is not lost. The restore will also set
++ * p::saved_state to TASK_RUNNING so any further tests will
++ * not result in false positives vs. @success
++ */
++ if (match < 0)
++ p->saved_state = TASK_RUNNING;
++
++ return match > 0;
++}
++
++/*
++ * Notes on Program-Order guarantees on SMP systems.
++ *
++ * MIGRATION
++ *
++ * The basic program-order guarantee on SMP systems is that when a task [t]
++ * migrates, all its activity on its old CPU [c0] happens-before any subsequent
++ * execution on its new CPU [c1].
++ *
++ * For migration (of runnable tasks) this is provided by the following means:
++ *
++ * A) UNLOCK of the rq(c0)->lock scheduling out task t
++ * B) migration for t is required to synchronize *both* rq(c0)->lock and
++ * rq(c1)->lock (if not at the same time, then in that order).
++ * C) LOCK of the rq(c1)->lock scheduling in task
++ *
++ * Transitivity guarantees that B happens after A and C after B.
++ * Note: we only require RCpc transitivity.
++ * Note: the CPU doing B need not be c0 or c1
++ *
++ * Example:
++ *
++ * CPU0 CPU1 CPU2
++ *
++ * LOCK rq(0)->lock
++ * sched-out X
++ * sched-in Y
++ * UNLOCK rq(0)->lock
++ *
++ * LOCK rq(0)->lock // orders against CPU0
++ * dequeue X
++ * UNLOCK rq(0)->lock
++ *
++ * LOCK rq(1)->lock
++ * enqueue X
++ * UNLOCK rq(1)->lock
++ *
++ * LOCK rq(1)->lock // orders against CPU2
++ * sched-out Z
++ * sched-in X
++ * UNLOCK rq(1)->lock
++ *
++ *
++ * BLOCKING -- aka. SLEEP + WAKEUP
++ *
++ * For blocking we (obviously) need to provide the same guarantee as for
++ * migration. However the means are completely different as there is no lock
++ * chain to provide order. Instead we do:
++ *
++ * 1) smp_store_release(X->on_cpu, 0) -- finish_task()
++ * 2) smp_cond_load_acquire(!X->on_cpu) -- try_to_wake_up()
++ *
++ * Example:
++ *
++ * CPU0 (schedule) CPU1 (try_to_wake_up) CPU2 (schedule)
++ *
++ * LOCK rq(0)->lock LOCK X->pi_lock
++ * dequeue X
++ * sched-out X
++ * smp_store_release(X->on_cpu, 0);
++ *
++ * smp_cond_load_acquire(&X->on_cpu, !VAL);
++ * X->state = WAKING
++ * set_task_cpu(X,2)
++ *
++ * LOCK rq(2)->lock
++ * enqueue X
++ * X->state = RUNNING
++ * UNLOCK rq(2)->lock
++ *
++ * LOCK rq(2)->lock // orders against CPU1
++ * sched-out Z
++ * sched-in X
++ * UNLOCK rq(2)->lock
++ *
++ * UNLOCK X->pi_lock
++ * UNLOCK rq(0)->lock
++ *
++ *
++ * However, for wakeups there is a second guarantee we must provide, namely we
++ * must observe the state that led to our wakeup. That is, not only must our
++ * task observe its own prior state, it must also observe the stores prior to
++ * its wakeup.
++ *
++ * This means that any means of doing remote wakeups must order the CPU doing
++ * the wakeup against the CPU the task is going to end up running on. This,
++ * however, is already required for the regular Program-Order guarantee above,
++ * since the waking CPU is the one issuing the ACQUIRE (smp_cond_load_acquire).
++ *
++ */
++
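Illustrative sketch (not from the patch, helper names invented): the BLOCKING ordering above reduces to a release on the scheduling-out CPU pairing with an acquire spin on the waking CPU, so everything the task did before sleeping is visible wherever it runs next.

    static void finish_task_sketch(struct task_struct *prev)
    {
            /* Last access to @prev on this CPU; pairs with the acquire below. */
            smp_store_release(&prev->on_cpu, 0);
    }

    static void waker_sketch(struct task_struct *p, int new_cpu)
    {
            /* Wait until the old CPU is completely done with @p ... */
            smp_cond_load_acquire(&p->on_cpu, !VAL);
            /* ... only then is it safe to migrate it. */
            set_task_cpu(p, new_cpu);
    }
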
++/**
++ * try_to_wake_up - wake up a thread
++ * @p: the thread to be awakened
++ * @state: the mask of task states that can be woken
++ * @wake_flags: wake modifier flags (WF_*)
++ *
++ * Conceptually does:
++ *
++ * If (@state & @p->state) @p->state = TASK_RUNNING.
++ *
++ * If the task was not queued/runnable, also place it back on a runqueue.
++ *
++ * This function is atomic against schedule() which would dequeue the task.
++ *
++ * It issues a full memory barrier before accessing @p->state, see the comment
++ * with set_current_state().
++ *
++ * Uses p->pi_lock to serialize against concurrent wake-ups.
++ *
++ * Relies on p->pi_lock stabilizing:
++ * - p->sched_class
++ * - p->cpus_ptr
++ * - p->sched_task_group
++ * in order to do migration, see its use of select_task_rq()/set_task_cpu().
++ *
++ * Tries really hard to only take one task_rq(p)->lock for performance.
++ * Takes rq->lock in:
++ * - ttwu_runnable() -- old rq, unavoidable, see comment there;
++ * - ttwu_queue() -- new rq, for enqueue of the task;
++ * - psi_ttwu_dequeue() -- much sadness :-( accounting will kill us.
++ *
++ * As a consequence we race really badly with just about everything. See the
++ * many memory barriers and their comments for details.
++ *
++ * Return: %true if @p->state changes (an actual wakeup was done),
++ * %false otherwise.
++ */
++int try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
++{
++ guard(preempt)();
++ int cpu, success = 0;
++
++ if (p == current) {
++ /*
++ * We're waking current, this means 'p->on_rq' and 'task_cpu(p)
++ * == smp_processor_id()'. Together this means we can special
++ * case the whole 'p->on_rq && ttwu_runnable()' case below
++ * without taking any locks.
++ *
++ * In particular:
++ * - we rely on Program-Order guarantees for all the ordering,
++ * - we're serialized against set_special_state() by virtue of
++ * it disabling IRQs (this allows not taking ->pi_lock).
++ */
++ if (!ttwu_state_match(p, state, &success))
++ goto out;
++
++ trace_sched_waking(p);
++ ttwu_do_wakeup(p);
++ goto out;
++ }
++
++ /*
++ * If we are going to wake up a thread waiting for CONDITION we
++ * need to ensure that CONDITION=1 done by the caller can not be
++ * reordered with p->state check below. This pairs with smp_store_mb()
++ * in set_current_state() that the waiting thread does.
++ */
++ scoped_guard (raw_spinlock_irqsave, &p->pi_lock) {
++ smp_mb__after_spinlock();
++ if (!ttwu_state_match(p, state, &success))
++ break;
++
++ trace_sched_waking(p);
++
++ /*
++ * Ensure we load p->on_rq _after_ p->state, otherwise it would
++ * be possible to, falsely, observe p->on_rq == 0 and get stuck
++ * in smp_cond_load_acquire() below.
++ *
++ * sched_ttwu_pending() try_to_wake_up()
++ * STORE p->on_rq = 1 LOAD p->state
++ * UNLOCK rq->lock
++ *
++ * __schedule() (switch to task 'p')
++ * LOCK rq->lock smp_rmb();
++ * smp_mb__after_spinlock();
++ * UNLOCK rq->lock
++ *
++ * [task p]
++ * STORE p->state = UNINTERRUPTIBLE LOAD p->on_rq
++ *
++ * Pairs with the LOCK+smp_mb__after_spinlock() on rq->lock in
++ * __schedule(). See the comment for smp_mb__after_spinlock().
++ *
++ * A similar smp_rmb() lives in __task_needs_rq_lock().
++ */
++ smp_rmb();
++ if (READ_ONCE(p->on_rq) && ttwu_runnable(p, wake_flags))
++ break;
++
++#ifdef CONFIG_SMP
++ /*
++ * Ensure we load p->on_cpu _after_ p->on_rq, otherwise it would be
++ * possible to, falsely, observe p->on_cpu == 0.
++ *
++ * One must be running (->on_cpu == 1) in order to remove oneself
++ * from the runqueue.
++ *
++ * __schedule() (switch to task 'p') try_to_wake_up()
++ * STORE p->on_cpu = 1 LOAD p->on_rq
++ * UNLOCK rq->lock
++ *
++ * __schedule() (put 'p' to sleep)
++ * LOCK rq->lock smp_rmb();
++ * smp_mb__after_spinlock();
++ * STORE p->on_rq = 0 LOAD p->on_cpu
++ *
++ * Pairs with the LOCK+smp_mb__after_spinlock() on rq->lock in
++ * __schedule(). See the comment for smp_mb__after_spinlock().
++ *
++ * Form a control-dep-acquire with p->on_rq == 0 above, to ensure
++ * schedule()'s deactivate_task() has 'happened' and p will no longer
++ * care about its own p->state. See the comment in __schedule().
++ */
++ smp_acquire__after_ctrl_dep();
++
++ /*
++ * We're doing the wakeup (@success == 1), they did a dequeue (p->on_rq
++ * == 0), which means we need to do an enqueue, change p->state to
++ * TASK_WAKING such that we can unlock p->pi_lock before doing the
++ * enqueue, such as ttwu_queue_wakelist().
++ */
++ WRITE_ONCE(p->__state, TASK_WAKING);
++
++ /*
++ * If the owning (remote) CPU is still in the middle of schedule() with
++ * this task as prev, consider queueing p on the remote CPU's wake_list
++ * which potentially sends an IPI instead of spinning on p->on_cpu to
++ * let the waker make forward progress. This is safe because IRQs are
++ * disabled and the IPI will deliver after on_cpu is cleared.
++ *
++ * Ensure we load task_cpu(p) after p->on_cpu:
++ *
++ * set_task_cpu(p, cpu);
++ * STORE p->cpu = @cpu
++ * __schedule() (switch to task 'p')
++ * LOCK rq->lock
++ * smp_mb__after_spin_lock() smp_cond_load_acquire(&p->on_cpu)
++ * STORE p->on_cpu = 1 LOAD p->cpu
++ *
++ * to ensure we observe the correct CPU on which the task is currently
++ * scheduling.
++ */
++ if (smp_load_acquire(&p->on_cpu) &&
++ ttwu_queue_wakelist(p, task_cpu(p), wake_flags))
++ break;
++
++ /*
++ * If the owning (remote) CPU is still in the middle of schedule() with
++ * this task as prev, wait until it's done referencing the task.
++ *
++ * Pairs with the smp_store_release() in finish_task().
++ *
++ * This ensures that tasks getting woken will be fully ordered against
++ * their previous state and preserve Program Order.
++ */
++ smp_cond_load_acquire(&p->on_cpu, !VAL);
++
++ sched_task_ttwu(p);
++
++ if ((wake_flags & WF_CURRENT_CPU) &&
++ cpumask_test_cpu(smp_processor_id(), p->cpus_ptr))
++ cpu = smp_processor_id();
++ else
++ cpu = select_task_rq(p);
++
++ if (cpu != task_cpu(p)) {
++ if (p->in_iowait) {
++ delayacct_blkio_end(p);
++ atomic_dec(&task_rq(p)->nr_iowait);
++ }
++
++ wake_flags |= WF_MIGRATED;
++ set_task_cpu(p, cpu);
++ }
++#else
++ sched_task_ttwu(p);
++
++ cpu = task_cpu(p);
++#endif /* CONFIG_SMP */
++
++ ttwu_queue(p, cpu, wake_flags);
++ }
++out:
++ if (success)
++ ttwu_stat(p, task_cpu(p), wake_flags);
++
++ return success;
++}
++
++static bool __task_needs_rq_lock(struct task_struct *p)
++{
++ unsigned int state = READ_ONCE(p->__state);
++
++ /*
++ * Since pi->lock blocks try_to_wake_up(), we don't need rq->lock when
++ * the task is blocked. Make sure to check @state since ttwu() can drop
++ * locks at the end, see ttwu_queue_wakelist().
++ */
++ if (state == TASK_RUNNING || state == TASK_WAKING)
++ return true;
++
++ /*
++ * Ensure we load p->on_rq after p->__state, otherwise it would be
++ * possible to, falsely, observe p->on_rq == 0.
++ *
++ * See try_to_wake_up() for a longer comment.
++ */
++ smp_rmb();
++ if (p->on_rq)
++ return true;
++
++#ifdef CONFIG_SMP
++ /*
++ * Ensure the task has finished __schedule() and will not be referenced
++ * anymore. Again, see try_to_wake_up() for a longer comment.
++ */
++ smp_rmb();
++ smp_cond_load_acquire(&p->on_cpu, !VAL);
++#endif
++
++ return false;
++}
++
++/**
++ * task_call_func - Invoke a function on task in fixed state
++ * @p: Process for which the function is to be invoked, can be @current.
++ * @func: Function to invoke.
++ * @arg: Argument to function.
++ *
++ * Fix the task in its current state by avoiding wakeups and/or rq operations
++ * and call @func(@arg) on it. This function can use task_is_runnable() and
++ * task_curr() to work out what the state is, if required. Given that @func
++ * can be invoked with a runqueue lock held, it had better be quite
++ * lightweight.
++ *
++ * Returns:
++ * Whatever @func returns
++ */
++int task_call_func(struct task_struct *p, task_call_f func, void *arg)
++{
++ struct rq *rq = NULL;
++ struct rq_flags rf;
++ int ret;
++
++ raw_spin_lock_irqsave(&p->pi_lock, rf.flags);
++
++ if (__task_needs_rq_lock(p))
++ rq = __task_rq_lock(p, &rf);
++
++ /*
++ * At this point the task is pinned; either:
++ * - blocked and we're holding off wakeups (pi->lock)
++ * - woken, and we're holding off enqueue (rq->lock)
++ * - queued, and we're holding off schedule (rq->lock)
++ * - running, and we're holding off de-schedule (rq->lock)
++ *
++ * The called function (@func) can use: task_curr(), p->on_rq and
++ * p->__state to differentiate between these states.
++ */
++ ret = func(p, arg);
++
++ if (rq)
++ __task_rq_unlock(rq, &rf);
++
++ raw_spin_unlock_irqrestore(&p->pi_lock, rf.flags);
++ return ret;
++}
++
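Hypothetical usage sketch of task_call_func() (names below are invented, not from the patch): the callback runs with the task pinned as described above, so it may sample fields such as p->on_rq, but it must stay lightweight because a runqueue lock may be held.

    static int report_on_rq(struct task_struct *p, void *arg)
    {
            /* Safe: @p cannot be woken, enqueued or descheduled right now. */
            return p->on_rq ? 1 : 0;
    }

    static bool task_is_queued_sample(struct task_struct *p)
    {
            return task_call_func(p, report_on_rq, NULL) == 1;
    }
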
++/**
++ * cpu_curr_snapshot - Return a snapshot of the currently running task
++ * @cpu: The CPU on which to snapshot the task.
++ *
++ * Returns the task_struct pointer of the task "currently" running on
++ * the specified CPU. If the same task is running on that CPU throughout,
++ * the return value will be a pointer to that task's task_struct structure.
++ * If the CPU did any context switches even vaguely concurrently with the
++ * execution of this function, the return value will be a pointer to the
++ * task_struct structure of a randomly chosen task that was running on
++ * that CPU somewhere around the time that this function was executing.
++ *
++ * If the specified CPU was offline, the return value is whatever it
++ * is, perhaps a pointer to the task_struct structure of that CPU's idle
++ * task, but there is no guarantee. Callers wishing a useful return
++ * value must take some action to ensure that the specified CPU remains
++ * online throughout.
++ *
++ * This function executes full memory barriers before and after fetching
++ * the pointer, which permits the caller to confine this function's fetch
++ * with respect to the caller's accesses to other shared variables.
++ */
++struct task_struct *cpu_curr_snapshot(int cpu)
++{
++ struct task_struct *t;
++
++ smp_mb(); /* Pairing determined by caller's synchronization design. */
++ t = rcu_dereference(cpu_curr(cpu));
++ smp_mb(); /* Pairing determined by caller's synchronization design. */
++ return t;
++}
++
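Hypothetical caller of cpu_curr_snapshot() (not from the patch): per the comment above, the CPU is kept online and the snapshot is only dereferenced inside an RCU read-side section.

    static void print_cpu_curr_sketch(int cpu)
    {
            struct task_struct *t;

            cpus_read_lock();
            rcu_read_lock();
            t = cpu_curr_snapshot(cpu);
            pr_info("cpu%d is running %s/%d\n", cpu, t->comm, t->pid);
            rcu_read_unlock();
            cpus_read_unlock();
    }
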
++/**
++ * wake_up_process - Wake up a specific process
++ * @p: The process to be woken up.
++ *
++ * Attempt to wake up the nominated process and move it to the set of runnable
++ * processes.
++ *
++ * Return: 1 if the process was woken up, 0 if it was already running.
++ *
++ * This function executes a full memory barrier before accessing the task state.
++ */
++int wake_up_process(struct task_struct *p)
++{
++ return try_to_wake_up(p, TASK_NORMAL, 0);
++}
++EXPORT_SYMBOL(wake_up_process);
++
++int wake_up_state(struct task_struct *p, unsigned int state)
++{
++ return try_to_wake_up(p, state, 0);
++}
++
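For illustration only (not part of the patch): the canonical sleep/wake pattern these helpers serve. The sleeper publishes its state before testing the condition and the waker sets the condition before calling wake_up_process(), matching the barrier comments in try_to_wake_up(). my_condition and my_sleeper are invented names; my_sleeper is assumed to already point at the sleeping thread.

    static int my_condition;
    static struct task_struct *my_sleeper;

    static void wait_for_condition(void)           /* sleeper side */
    {
            for (;;) {
                    set_current_state(TASK_UNINTERRUPTIBLE);
                    if (READ_ONCE(my_condition))
                            break;
                    schedule();
            }
            __set_current_state(TASK_RUNNING);
    }

    static void fire_condition(void)                /* waker side */
    {
            WRITE_ONCE(my_condition, 1);
            wake_up_process(my_sleeper);            /* try_to_wake_up(p, TASK_NORMAL, 0) */
    }
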
++/*
++ * Perform scheduler related setup for a newly forked process p.
++ * p is forked by current.
++ *
++ * __sched_fork() is basic setup which is also used by sched_init() to
++ * initialize the boot CPU's idle task.
++ */
++static inline void __sched_fork(unsigned long clone_flags, struct task_struct *p)
++{
++ p->on_rq = 0;
++ p->on_cpu = 0;
++ p->utime = 0;
++ p->stime = 0;
++ p->sched_time = 0;
++
++#ifdef CONFIG_SCHEDSTATS
++ /* Even if schedstat is disabled, there should not be garbage */
++ memset(&p->stats, 0, sizeof(p->stats));
++#endif
++
++#ifdef CONFIG_PREEMPT_NOTIFIERS
++ INIT_HLIST_HEAD(&p->preempt_notifiers);
++#endif
++
++#ifdef CONFIG_COMPACTION
++ p->capture_control = NULL;
++#endif
++#ifdef CONFIG_SMP
++ p->wake_entry.u_flags = CSD_TYPE_TTWU;
++#endif
++ init_sched_mm_cid(p);
++}
++
++/*
++ * fork()/clone()-time setup:
++ */
++int sched_fork(unsigned long clone_flags, struct task_struct *p)
++{
++ __sched_fork(clone_flags, p);
++ /*
++ * We mark the process as NEW here. This guarantees that
++ * nobody will actually run it, and a signal or other external
++ * event cannot wake it up and insert it on the runqueue either.
++ */
++ p->__state = TASK_NEW;
++
++ /*
++ * Make sure we do not leak PI boosting priority to the child.
++ */
++ p->prio = current->normal_prio;
++
++ /*
++ * Revert to default priority/policy on fork if requested.
++ */
++ if (unlikely(p->sched_reset_on_fork)) {
++ if (task_has_rt_policy(p)) {
++ p->policy = SCHED_NORMAL;
++ p->static_prio = NICE_TO_PRIO(0);
++ p->rt_priority = 0;
++ } else if (PRIO_TO_NICE(p->static_prio) < 0)
++ p->static_prio = NICE_TO_PRIO(0);
++
++ p->prio = p->normal_prio = p->static_prio;
++
++ /*
++ * We don't need the reset flag anymore after the fork. It has
++ * fulfilled its duty:
++ */
++ p->sched_reset_on_fork = 0;
++ }
++
++#ifdef CONFIG_SCHED_INFO
++ if (unlikely(sched_info_on()))
++ memset(&p->sched_info, 0, sizeof(p->sched_info));
++#endif
++ init_task_preempt_count(p);
++
++ return 0;
++}
++
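The sched_reset_on_fork path above is driven from userspace: a realtime task can ask that its policy not be inherited across fork(). A minimal userspace sketch (assumes the usual glibc wrappers and the privilege to set SCHED_FIFO):

    #include <sched.h>
    #include <stdio.h>
    #include <unistd.h>

    #ifndef SCHED_RESET_ON_FORK
    #define SCHED_RESET_ON_FORK 0x40000000
    #endif

    int main(void)
    {
            struct sched_param sp = { .sched_priority = 10 };

            /* Run SCHED_FIFO ourselves, but let children revert to SCHED_NORMAL. */
            if (sched_setscheduler(0, SCHED_FIFO | SCHED_RESET_ON_FORK, &sp))
                    perror("sched_setscheduler");

            if (fork() == 0) {
                    /* sched_fork() has reset the child's policy and priority. */
                    printf("child policy: %d\n", sched_getscheduler(0));
                    _exit(0);
            }
            return 0;
    }
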
++int sched_cgroup_fork(struct task_struct *p, struct kernel_clone_args *kargs)
++{
++ unsigned long flags;
++ struct rq *rq;
++
++ /*
++ * Because we're not yet on the pid-hash, p->pi_lock isn't strictly
++ * required yet, but lockdep gets upset if rules are violated.
++ */
++ raw_spin_lock_irqsave(&p->pi_lock, flags);
++ /*
++ * Share the timeslice between parent and child, thus the
++ * total amount of pending timeslices in the system doesn't change,
++ * resulting in more scheduling fairness.
++ */
++ rq = this_rq();
++ raw_spin_lock(&rq->lock);
++
++ rq->curr->time_slice /= 2;
++ p->time_slice = rq->curr->time_slice;
++#ifdef CONFIG_SCHED_HRTICK
++ hrtick_start(rq, rq->curr->time_slice);
++#endif
++
++ if (p->time_slice < RESCHED_NS) {
++ p->time_slice = sysctl_sched_base_slice;
++ resched_curr(rq);
++ }
++ sched_task_fork(p, rq);
++ raw_spin_unlock(&rq->lock);
++
++ rseq_migrate(p);
++ /*
++ * We're setting the CPU for the first time, we don't migrate,
++ * so use __set_task_cpu().
++ */
++ __set_task_cpu(p, smp_processor_id());
++ raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++
++ return 0;
++}
++
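Worked example with made-up numbers (not from the patch) of the fork-time slice split above:

    /*
     * Parent has 4 ms of its slice left when it calls fork():
     *   rq->curr->time_slice /= 2;        parent keeps 2 ms
     *   p->time_slice = 2 ms;             child inherits the other half
     * The total outstanding slice is unchanged, preserving fairness.
     *
     * If the halved slice falls below RESCHED_NS, the child is instead
     * given a fresh sysctl_sched_base_slice and the parent is rescheduled.
     */
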
++void sched_cancel_fork(struct task_struct *p)
++{
++}
++
++void sched_post_fork(struct task_struct *p)
++{
++}
++
++#ifdef CONFIG_SCHEDSTATS
++
++DEFINE_STATIC_KEY_FALSE(sched_schedstats);
++
++static void set_schedstats(bool enabled)
++{
++ if (enabled)
++ static_branch_enable(&sched_schedstats);
++ else
++ static_branch_disable(&sched_schedstats);
++}
++
++void force_schedstat_enabled(void)
++{
++ if (!schedstat_enabled()) {
++ pr_info("kernel profiling enabled schedstats, disable via kernel.sched_schedstats.\n");
++ static_branch_enable(&sched_schedstats);
++ }
++}
++
++static int __init setup_schedstats(char *str)
++{
++ int ret = 0;
++ if (!str)
++ goto out;
++
++ if (!strcmp(str, "enable")) {
++ set_schedstats(true);
++ ret = 1;
++ } else if (!strcmp(str, "disable")) {
++ set_schedstats(false);
++ ret = 1;
++ }
++out:
++ if (!ret)
++ pr_warn("Unable to parse schedstats=\n");
++
++ return ret;
++}
++__setup("schedstats=", setup_schedstats);
++
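Not part of the patch: the sched_schedstats static key above follows the usual zero-overhead-when-off pattern. A hedged consumer sketch, using the real schedstat_inc() wrapper from kernel/sched/stats.h and a counter that already exists on the runqueue:

    static inline void account_idle_pick_sketch(struct rq *rq)
    {
            /*
             * schedstat_inc() expands to a check of the sched_schedstats
             * static branch, so this is a single patched jump (and no
             * increment) while schedstats are disabled.
             */
            schedstat_inc(rq->sched_goidle);
    }
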
++#ifdef CONFIG_PROC_SYSCTL
++static int sysctl_schedstats(const struct ctl_table *table, int write, void *buffer,
++ size_t *lenp, loff_t *ppos)
++{
++ struct ctl_table t;
++ int err;
++ int state = static_branch_likely(&sched_schedstats);
++
++ if (write && !capable(CAP_SYS_ADMIN))
++ return -EPERM;
++
++ t = *table;
++ t.data = &state;
++ err = proc_dointvec_minmax(&t, write, buffer, lenp, ppos);
++ if (err < 0)
++ return err;
++ if (write)
++ set_schedstats(state);
++ return err;
++}
++#endif /* CONFIG_PROC_SYSCTL */
++#endif /* CONFIG_SCHEDSTATS */
++
++#ifdef CONFIG_SYSCTL
++static const struct ctl_table sched_core_sysctls[] = {
++#ifdef CONFIG_SCHEDSTATS
++ {
++ .procname = "sched_schedstats",
++ .data = NULL,
++ .maxlen = sizeof(unsigned int),
++ .mode = 0644,
++ .proc_handler = sysctl_schedstats,
++ .extra1 = SYSCTL_ZERO,
++ .extra2 = SYSCTL_ONE,
++ },
++#endif /* CONFIG_SCHEDSTATS */
++};
++static int __init sched_core_sysctl_init(void)
++{
++ register_sysctl_init("kernel", sched_core_sysctls);
++ return 0;
++}
++late_initcall(sched_core_sysctl_init);
++#endif /* CONFIG_SYSCTL */
++
++/*
++ * wake_up_new_task - wake up a newly created task for the first time.
++ *
++ * This function will do some initial scheduler statistics housekeeping
++ * that must be done for every newly created context, then puts the task
++ * on the runqueue and wakes it.
++ */
++void wake_up_new_task(struct task_struct *p)
++{
++ unsigned long flags;
++ struct rq *rq;
++
++ raw_spin_lock_irqsave(&p->pi_lock, flags);
++ WRITE_ONCE(p->__state, TASK_RUNNING);
++ rq = cpu_rq(select_task_rq(p));
++#ifdef CONFIG_SMP
++ rseq_migrate(p);
++ /*
++ * Fork balancing, do it here and not earlier because:
++ * - cpus_ptr can change in the fork path
++ * - any previously selected CPU might disappear through hotplug
++ *
++ * Use __set_task_cpu() to avoid calling sched_class::migrate_task_rq,
++ * as we're not fully set-up yet.
++ */
++ __set_task_cpu(p, cpu_of(rq));
++#endif
++
++ raw_spin_lock(&rq->lock);
++ update_rq_clock(rq);
++
++ activate_task(p, rq);
++ trace_sched_wakeup_new(p);
++ wakeup_preempt(rq);
++
++ raw_spin_unlock(&rq->lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++}
++
++#ifdef CONFIG_PREEMPT_NOTIFIERS
++
++static DEFINE_STATIC_KEY_FALSE(preempt_notifier_key);
++
++void preempt_notifier_inc(void)
++{
++ static_branch_inc(&preempt_notifier_key);
++}
++EXPORT_SYMBOL_GPL(preempt_notifier_inc);
++
++void preempt_notifier_dec(void)
++{
++ static_branch_dec(&preempt_notifier_key);
++}
++EXPORT_SYMBOL_GPL(preempt_notifier_dec);
++
++/**
++ * preempt_notifier_register - tell me when current is being preempted & rescheduled
++ * @notifier: notifier struct to register
++ */
++void preempt_notifier_register(struct preempt_notifier *notifier)
++{
++ if (!static_branch_unlikely(&preempt_notifier_key))
++ WARN(1, "registering preempt_notifier while notifiers disabled\n");
++
++ hlist_add_head(&notifier->link, &current->preempt_notifiers);
++}
++EXPORT_SYMBOL_GPL(preempt_notifier_register);
++
++/**
++ * preempt_notifier_unregister - no longer interested in preemption notifications
++ * @notifier: notifier struct to unregister
++ *
++ * This is *not* safe to call from within a preemption notifier.
++ */
++void preempt_notifier_unregister(struct preempt_notifier *notifier)
++{
++ hlist_del(&notifier->link);
++}
++EXPORT_SYMBOL_GPL(preempt_notifier_unregister);
++
++static void __fire_sched_in_preempt_notifiers(struct task_struct *curr)
++{
++ struct preempt_notifier *notifier;
++
++ hlist_for_each_entry(notifier, &curr->preempt_notifiers, link)
++ notifier->ops->sched_in(notifier, raw_smp_processor_id());
++}
++
++static __always_inline void fire_sched_in_preempt_notifiers(struct task_struct *curr)
++{
++ if (static_branch_unlikely(&preempt_notifier_key))
++ __fire_sched_in_preempt_notifiers(curr);
++}
++
++static void
++__fire_sched_out_preempt_notifiers(struct task_struct *curr,
++ struct task_struct *next)
++{
++ struct preempt_notifier *notifier;
++
++ hlist_for_each_entry(notifier, &curr->preempt_notifiers, link)
++ notifier->ops->sched_out(notifier, next);
++}
++
++static __always_inline void
++fire_sched_out_preempt_notifiers(struct task_struct *curr,
++ struct task_struct *next)
++{
++ if (static_branch_unlikely(&preempt_notifier_key))
++ __fire_sched_out_preempt_notifiers(curr, next);
++}
++
++#else /* !CONFIG_PREEMPT_NOTIFIERS */
++
++static inline void fire_sched_in_preempt_notifiers(struct task_struct *curr)
++{
++}
++
++static inline void
++fire_sched_out_preempt_notifiers(struct task_struct *curr,
++ struct task_struct *next)
++{
++}
++
++#endif /* CONFIG_PREEMPT_NOTIFIERS */
++
++static inline void prepare_task(struct task_struct *next)
++{
++ /*
++ * Claim the task as running, we do this before switching to it
++ * such that any running task will have this set.
++ *
++ * See the smp_load_acquire(&p->on_cpu) case in ttwu() and
++ * its ordering comment.
++ */
++ WRITE_ONCE(next->on_cpu, 1);
++}
++
++static inline void finish_task(struct task_struct *prev)
++{
++#ifdef CONFIG_SMP
++ /*
++ * This must be the very last reference to @prev from this CPU. After
++ * p->on_cpu is cleared, the task can be moved to a different CPU. We
++ * must ensure this doesn't happen until the switch is completely
++ * finished.
++ *
++ * In particular, the load of prev->state in finish_task_switch() must
++ * happen before this.
++ *
++ * Pairs with the smp_cond_load_acquire() in try_to_wake_up().
++ */
++ smp_store_release(&prev->on_cpu, 0);
++#else
++ prev->on_cpu = 0;
++#endif
++}
++
++#ifdef CONFIG_SMP
++
++static void do_balance_callbacks(struct rq *rq, struct balance_callback *head)
++{
++ void (*func)(struct rq *rq);
++ struct balance_callback *next;
++
++ lockdep_assert_held(&rq->lock);
++
++ while (head) {
++ func = (void (*)(struct rq *))head->func;
++ next = head->next;
++ head->next = NULL;
++ head = next;
++
++ func(rq);
++ }
++}
++
++static void balance_push(struct rq *rq);
++
++/*
++ * balance_push_callback is a right abuse of the callback interface and plays
++ * by significantly different rules.
++ *
++ * Where the normal balance_callback's purpose is to be run in the same context
++ * that queued it (only later, when it's safe to drop rq->lock again),
++ * balance_push_callback is specifically targeted at __schedule().
++ *
++ * This abuse is tolerated because it places all the unlikely/odd cases behind
++ * a single test, namely: rq->balance_callback == NULL.
++ */
++struct balance_callback balance_push_callback = {
++ .next = NULL,
++ .func = balance_push,
++};
++
++static inline struct balance_callback *
++__splice_balance_callbacks(struct rq *rq, bool split)
++{
++ struct balance_callback *head = rq->balance_callback;
++
++ if (likely(!head))
++ return NULL;
++
++ lockdep_assert_rq_held(rq);
++ /*
++ * Must not take balance_push_callback off the list when
++ * splice_balance_callbacks() and balance_callbacks() are not
++ * in the same rq->lock section.
++ *
++ * In that case it would be possible for __schedule() to interleave
++ * and observe the list empty.
++ */
++ if (split && head == &balance_push_callback)
++ head = NULL;
++ else
++ rq->balance_callback = NULL;
++
++ return head;
++}
++
++struct balance_callback *splice_balance_callbacks(struct rq *rq)
++{
++ return __splice_balance_callbacks(rq, true);
++}
++
++static void __balance_callbacks(struct rq *rq)
++{
++ do_balance_callbacks(rq, __splice_balance_callbacks(rq, false));
++}
++
++void balance_callbacks(struct rq *rq, struct balance_callback *head)
++{
++ unsigned long flags;
++
++ if (unlikely(head)) {
++ raw_spin_lock_irqsave(&rq->lock, flags);
++ do_balance_callbacks(rq, head);
++ raw_spin_unlock_irqrestore(&rq->lock, flags);
++ }
++}
++
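Hedged sketch (invented helper, not from the patch; mainline keeps an equivalent in kernel/sched/sched.h) of how a callback is normally queued so that __balance_callbacks() above runs it just before rq->lock is finally dropped:

    static void queue_balance_callback_sketch(struct rq *rq,
                                              struct balance_callback *head,
                                              void (*func)(struct rq *rq))
    {
            lockdep_assert_held(&rq->lock);

            /* Already queued, or balance_push has taken over this rq. */
            if (unlikely(head->next || rq->balance_callback == &balance_push_callback))
                    return;

            head->func = func;
            head->next = rq->balance_callback;
            rq->balance_callback = head;
    }
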
++#else
++
++static inline void __balance_callbacks(struct rq *rq)
++{
++}
++#endif
++
++static inline void
++prepare_lock_switch(struct rq *rq, struct task_struct *next)
++{
++ /*
++ * The runqueue lock will be released by the next
++ * task (which is an invalid locking op but in the case
++ * of the scheduler it's an obvious special-case), so we
++ * do an early lockdep release here:
++ */
++ spin_release(&rq->lock.dep_map, _THIS_IP_);
++#ifdef CONFIG_DEBUG_SPINLOCK
++ /* this is a valid case when another task releases the spinlock */
++ rq->lock.owner = next;
++#endif
++}
++
++static inline void finish_lock_switch(struct rq *rq)
++{
++ /*
++ * If we are tracking spinlock dependencies then we have to
++ * fix up the runqueue lock - which gets 'carried over' from
++ * prev into current:
++ */
++ spin_acquire(&rq->lock.dep_map, 0, 0, _THIS_IP_);
++ __balance_callbacks(rq);
++ raw_spin_unlock_irq(&rq->lock);
++}
++
++/*
++ * NOP if the arch has not defined these:
++ */
++
++#ifndef prepare_arch_switch
++# define prepare_arch_switch(next) do { } while (0)
++#endif
++
++#ifndef finish_arch_post_lock_switch
++# define finish_arch_post_lock_switch() do { } while (0)
++#endif
++
++static inline void kmap_local_sched_out(void)
++{
++#ifdef CONFIG_KMAP_LOCAL
++ if (unlikely(current->kmap_ctrl.idx))
++ __kmap_local_sched_out();
++#endif
++}
++
++static inline void kmap_local_sched_in(void)
++{
++#ifdef CONFIG_KMAP_LOCAL
++ if (unlikely(current->kmap_ctrl.idx))
++ __kmap_local_sched_in();
++#endif
++}
++
++/**
++ * prepare_task_switch - prepare to switch tasks
++ * @rq: the runqueue preparing to switch
++ * @next: the task we are going to switch to.
++ *
++ * This is called with the rq lock held and interrupts off. It must
++ * be paired with a subsequent finish_task_switch after the context
++ * switch.
++ *
++ * prepare_task_switch sets up locking and calls architecture specific
++ * hooks.
++ */
++static inline void
++prepare_task_switch(struct rq *rq, struct task_struct *prev,
++ struct task_struct *next)
++{
++ kcov_prepare_switch(prev);
++ sched_info_switch(rq, prev, next);
++ perf_event_task_sched_out(prev, next);
++ rseq_preempt(prev);
++ fire_sched_out_preempt_notifiers(prev, next);
++ kmap_local_sched_out();
++ prepare_task(next);
++ prepare_arch_switch(next);
++}
++
++/**
++ * finish_task_switch - clean up after a task-switch
++ * @rq: runqueue associated with task-switch
++ * @prev: the thread we just switched away from.
++ *
++ * finish_task_switch must be called after the context switch, paired
++ * with a prepare_task_switch call before the context switch.
++ * finish_task_switch will reconcile locking set up by prepare_task_switch,
++ * and do any other architecture-specific cleanup actions.
++ *
++ * Note that we may have delayed dropping an mm in context_switch(). If
++ * so, we finish that here outside of the runqueue lock. (Doing it
++ * with the lock held can cause deadlocks; see schedule() for
++ * details.)
++ *
++ * The context switch has flipped the stack from under us and restored the
++ * local variables which were saved when this task called schedule() in the
++ * past. 'prev == current' is still correct but we need to recalculate this_rq
++ * because prev may have moved to another CPU.
++ */
++static struct rq *finish_task_switch(struct task_struct *prev)
++ __releases(rq->lock)
++{
++ struct rq *rq = this_rq();
++ struct mm_struct *mm = rq->prev_mm;
++ unsigned int prev_state;
++
++ /*
++ * The previous task will have left us with a preempt_count of 2
++ * because it left us after:
++ *
++ * schedule()
++ * preempt_disable(); // 1
++ * __schedule()
++ * raw_spin_lock_irq(&rq->lock) // 2
++ *
++ * Also, see FORK_PREEMPT_COUNT.
++ */
++ if (WARN_ONCE(preempt_count() != 2*PREEMPT_DISABLE_OFFSET,
++ "corrupted preempt_count: %s/%d/0x%x\n",
++ current->comm, current->pid, preempt_count()))
++ preempt_count_set(FORK_PREEMPT_COUNT);
++
++ rq->prev_mm = NULL;
++
++ /*
++ * A task struct has one reference for the use as "current".
++ * If a task dies, then it sets TASK_DEAD in tsk->state and calls
++ * schedule one last time. The schedule call will never return, and
++ * the scheduled task must drop that reference.
++ *
++ * We must observe prev->state before clearing prev->on_cpu (in
++ * finish_task), otherwise a concurrent wakeup can get prev
++ * running on another CPU and we could race with its RUNNING -> DEAD
++ * transition, resulting in a double drop.
++ */
++ prev_state = READ_ONCE(prev->__state);
++ vtime_task_switch(prev);
++ perf_event_task_sched_in(prev, current);
++ finish_task(prev);
++ tick_nohz_task_switch();
++ finish_lock_switch(rq);
++ finish_arch_post_lock_switch();
++ kcov_finish_switch(current);
++ /*
++ * kmap_local_sched_out() is invoked with rq::lock held and
++ * interrupts disabled. There is no requirement for that, but the
++ * sched out code does not have an interrupt enabled section.
++ * Restoring the maps on sched in does not require interrupts being
++ * disabled either.
++ */
++ kmap_local_sched_in();
++
++ fire_sched_in_preempt_notifiers(current);
++ /*
++ * When switching through a kernel thread, the loop in
++ * membarrier_{private,global}_expedited() may have observed that
++ * kernel thread and not issued an IPI. It is therefore possible to
++ * schedule between user->kernel->user threads without passing through
++ * switch_mm(). Membarrier requires a barrier after storing to
++ * rq->curr, before returning to userspace, so provide them here:
++ *
++ * - a full memory barrier for {PRIVATE,GLOBAL}_EXPEDITED, implicitly
++ * provided by mmdrop(),
++ * - a sync_core for SYNC_CORE.
++ */
++ if (mm) {
++ membarrier_mm_sync_core_before_usermode(mm);
++ mmdrop_sched(mm);
++ }
++ if (unlikely(prev_state == TASK_DEAD)) {
++ /* Task is done with its stack. */
++ put_task_stack(prev);
++
++ put_task_struct_rcu_user(prev);
++ }
++
++ return rq;
++}
++
++/**
++ * schedule_tail - first thing a freshly forked thread must call.
++ * @prev: the thread we just switched away from.
++ */
++asmlinkage __visible void schedule_tail(struct task_struct *prev)
++ __releases(rq->lock)
++{
++ /*
++ * New tasks start with FORK_PREEMPT_COUNT, see there and
++ * finish_task_switch() for details.
++ *
++ * finish_task_switch() will drop rq->lock() and lower preempt_count
++ * and the preempt_enable() will end up enabling preemption (on
++ * PREEMPT_COUNT kernels).
++ */
++
++ finish_task_switch(prev);
++ preempt_enable();
++
++ if (current->set_child_tid)
++ put_user(task_pid_vnr(current), current->set_child_tid);
++
++ calculate_sigpending();
++}
++
++/*
++ * context_switch - switch to the new MM and the new thread's register state.
++ */
++static __always_inline struct rq *
++context_switch(struct rq *rq, struct task_struct *prev,
++ struct task_struct *next)
++{
++ prepare_task_switch(rq, prev, next);
++
++ /*
++ * For paravirt, this is coupled with an exit in switch_to to
++ * combine the page table reload and the switch backend into
++ * one hypercall.
++ */
++ arch_start_context_switch(prev);
++
++ /*
++ * kernel -> kernel lazy + transfer active
++ * user -> kernel lazy + mmgrab() active
++ *
++ * kernel -> user switch + mmdrop() active
++ * user -> user switch
++ *
++ * switch_mm_cid() needs to be updated if the barriers provided
++ * by context_switch() are modified.
++ */
++ if (!next->mm) { // to kernel
++ enter_lazy_tlb(prev->active_mm, next);
++
++ next->active_mm = prev->active_mm;
++ if (prev->mm) // from user
++ mmgrab(prev->active_mm);
++ else
++ prev->active_mm = NULL;
++ } else { // to user
++ membarrier_switch_mm(rq, prev->active_mm, next->mm);
++ /*
++ * sys_membarrier() requires an smp_mb() between setting
++ * rq->curr / membarrier_switch_mm() and returning to userspace.
++ *
++ * The below provides this either through switch_mm(), or in
++ * case 'prev->active_mm == next->mm' through
++ * finish_task_switch()'s mmdrop().
++ */
++ switch_mm_irqs_off(prev->active_mm, next->mm, next);
++ lru_gen_use_mm(next->mm);
++
++ if (!prev->mm) { // from kernel
++ /* will mmdrop() in finish_task_switch(). */
++ rq->prev_mm = prev->active_mm;
++ prev->active_mm = NULL;
++ }
++ }
++
++ /* switch_mm_cid() requires the memory barriers above. */
++ switch_mm_cid(rq, prev, next);
++
++ prepare_lock_switch(rq, next);
++
++ /* Here we just switch the register state and the stack. */
++ switch_to(prev, next, prev);
++ barrier();
++
++ return finish_task_switch(prev);
++}
++
++/*
++ * nr_running, nr_uninterruptible and nr_context_switches:
++ *
++ * externally visible scheduler statistics: current number of runnable
++ * threads, total number of context switches performed since bootup.
++ */
++unsigned int nr_running(void)
++{
++ unsigned int i, sum = 0;
++
++ for_each_online_cpu(i)
++ sum += cpu_rq(i)->nr_running;
++
++ return sum;
++}
++
++/*
++ * Check if only the current task is running on the CPU.
++ *
++ * Caution: this function does not check that the caller has disabled
++ * preemption, thus the result might have a time-of-check-to-time-of-use
++ * race. The caller is responsible for using it correctly, for example:
++ *
++ * - from a non-preemptible section (of course)
++ *
++ * - from a thread that is bound to a single CPU
++ *
++ * - in a loop with very short iterations (e.g. a polling loop)
++ */
++bool single_task_running(void)
++{
++ return raw_rq()->nr_running == 1;
++}
++EXPORT_SYMBOL(single_task_running);
++
++unsigned long long nr_context_switches_cpu(int cpu)
++{
++ return cpu_rq(cpu)->nr_switches;
++}
++
++unsigned long long nr_context_switches(void)
++{
++ int i;
++ unsigned long long sum = 0;
++
++ for_each_possible_cpu(i)
++ sum += cpu_rq(i)->nr_switches;
++
++ return sum;
++}
++
++/*
++ * Consumers of these two interfaces, like for example the cpuidle menu
++ * governor, are using nonsensical data: preferring shallow idle state selection
++ * for a CPU that has IO-wait which might not even end up running the task when
++ * it does become runnable.
++ */
++
++unsigned int nr_iowait_cpu(int cpu)
++{
++ return atomic_read(&cpu_rq(cpu)->nr_iowait);
++}
++
++/*
++ * IO-wait accounting, and how it's mostly bollocks (on SMP).
++ *
++ * The idea behind IO-wait account is to account the idle time that we could
++ * have spend running if it were not for IO. That is, if we were to improve the
++ * storage performance, we'd have a proportional reduction in IO-wait time.
++ *
++ * This all works nicely on UP, where, when a task blocks on IO, we account
++ * idle time as IO-wait, because if the storage were faster, it could've been
++ * running and we'd not be idle.
++ *
++ * This has been extended to SMP, by doing the same for each CPU. This however
++ * is broken.
++ *
++ * Imagine for instance the case where two tasks block on one CPU, only the one
++ * CPU will have IO-wait accounted, while the other has regular idle. Even
++ * though, if the storage were faster, both could've run at the same time,
++ * utilising both CPUs.
++ *
++ * This means, that when looking globally, the current IO-wait accounting on
++ * SMP is a lower bound, by reason of under accounting.
++ *
++ * Worse, since the numbers are provided per CPU, they are sometimes
++ * interpreted per CPU, and that is nonsensical. A blocked task isn't strictly
++ * associated with any one particular CPU, it can wake to another CPU than it
++ * blocked on. This means the per CPU IO-wait number is meaningless.
++ *
++ * Task CPU affinities can make all that even more 'interesting'.
++ */
++
++unsigned int nr_iowait(void)
++{
++ unsigned int i, sum = 0;
++
++ for_each_possible_cpu(i)
++ sum += nr_iowait_cpu(i);
++
++ return sum;
++}
++
++#ifdef CONFIG_SMP
++
++/*
++ * sched_exec - execve() is a valuable balancing opportunity, because at
++ * this point the task has the smallest effective memory and cache
++ * footprint.
++ */
++void sched_exec(void)
++{
++}
++
++#endif
++
++DEFINE_PER_CPU(struct kernel_stat, kstat);
++DEFINE_PER_CPU(struct kernel_cpustat, kernel_cpustat);
++
++EXPORT_PER_CPU_SYMBOL(kstat);
++EXPORT_PER_CPU_SYMBOL(kernel_cpustat);
++
++static inline void update_curr(struct rq *rq, struct task_struct *p)
++{
++ s64 ns = rq->clock_task - p->last_ran;
++
++ p->sched_time += ns;
++ cgroup_account_cputime(p, ns);
++ account_group_exec_runtime(p, ns);
++
++ p->time_slice -= ns;
++ p->last_ran = rq->clock_task;
++}
++
++/*
++ * Return accounted runtime for the task.
++ * Return separately the current task's pending runtime that has not been
++ * accounted yet.
++ */
++unsigned long long task_sched_runtime(struct task_struct *p)
++{
++ unsigned long flags;
++ struct rq *rq;
++ raw_spinlock_t *lock;
++ u64 ns;
++
++#if defined(CONFIG_64BIT) && defined(CONFIG_SMP)
++ /*
++ * 64-bit doesn't need locks to atomically read a 64-bit value.
++ * So we have an optimization chance when the task's delta_exec is 0.
++ * Reading ->on_cpu is racy, but this is OK.
++ *
++ * If we race with it leaving CPU, we'll take a lock. So we're correct.
++ * If we race with it entering CPU, unaccounted time is 0. This is
++ * indistinguishable from the read occurring a few cycles earlier.
++ * If we see ->on_cpu without ->on_rq, the task is leaving, and has
++ * been accounted, so we're correct here as well.
++ */
++ if (!p->on_cpu || !task_on_rq_queued(p))
++ return tsk_seruntime(p);
++#endif
++
++ rq = task_access_lock_irqsave(p, &lock, &flags);
++ /*
++ * Must be ->curr _and_ ->on_rq. If dequeued, we would
++ * project cycles that may never be accounted to this
++ * thread, breaking clock_gettime().
++ */
++ if (p == rq->curr && task_on_rq_queued(p)) {
++ update_rq_clock(rq);
++ update_curr(rq, p);
++ }
++ ns = tsk_seruntime(p);
++ task_access_unlock_irqrestore(p, lock, &flags);
++
++ return ns;
++}
++
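task_sched_runtime() is what ultimately backs the per-thread CPU clock, which is why the comment above worries about breaking clock_gettime(). For illustration only, a userspace reader of that clock:

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
            struct timespec ts;

            /* Serviced by the thread's CPUCLOCK_SCHED, i.e. task_sched_runtime(). */
            if (clock_gettime(CLOCK_THREAD_CPUTIME_ID, &ts) == 0)
                    printf("thread cputime: %ld.%09ld s\n",
                           (long)ts.tv_sec, ts.tv_nsec);
            return 0;
    }
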
++/* This manages tasks that have run out of timeslice during a scheduler_tick */
++static inline void scheduler_task_tick(struct rq *rq)
++{
++ struct task_struct *p = rq->curr;
++
++ if (is_idle_task(p))
++ return;
++
++ update_curr(rq, p);
++ cpufreq_update_util(rq, 0);
++
++ /*
++ * Tasks that have less than RESCHED_NS of time slice left will be
++ * rescheduled.
++ */
++ if (p->time_slice >= RESCHED_NS)
++ return;
++ set_tsk_need_resched(p);
++ set_preempt_need_resched();
++}
++
++#ifdef CONFIG_SCHED_DEBUG
++static u64 cpu_resched_latency(struct rq *rq)
++{
++ int latency_warn_ms = READ_ONCE(sysctl_resched_latency_warn_ms);
++ u64 resched_latency, now = rq_clock(rq);
++ static bool warned_once;
++
++ if (sysctl_resched_latency_warn_once && warned_once)
++ return 0;
++
++ if (!need_resched() || !latency_warn_ms)
++ return 0;
++
++ if (system_state == SYSTEM_BOOTING)
++ return 0;
++
++ if (!rq->last_seen_need_resched_ns) {
++ rq->last_seen_need_resched_ns = now;
++ rq->ticks_without_resched = 0;
++ return 0;
++ }
++
++ rq->ticks_without_resched++;
++ resched_latency = now - rq->last_seen_need_resched_ns;
++ if (resched_latency <= latency_warn_ms * NSEC_PER_MSEC)
++ return 0;
++
++ warned_once = true;
++
++ return resched_latency;
++}
++
++static int __init setup_resched_latency_warn_ms(char *str)
++{
++ long val;
++
++ if ((kstrtol(str, 0, &val))) {
++ pr_warn("Unable to set resched_latency_warn_ms\n");
++ return 1;
++ }
++
++ sysctl_resched_latency_warn_ms = val;
++ return 1;
++}
++__setup("resched_latency_warn_ms=", setup_resched_latency_warn_ms);
++#else
++static inline u64 cpu_resched_latency(struct rq *rq) { return 0; }
++#endif /* CONFIG_SCHED_DEBUG */
++
++/*
++ * This function gets called by the timer code, with HZ frequency.
++ * We call it with interrupts disabled.
++ */
++void sched_tick(void)
++{
++ int cpu __maybe_unused = smp_processor_id();
++ struct rq *rq = cpu_rq(cpu);
++ struct task_struct *curr = rq->curr;
++ u64 resched_latency;
++
++ if (housekeeping_cpu(cpu, HK_TYPE_KERNEL_NOISE))
++ arch_scale_freq_tick();
++
++ sched_clock_tick();
++
++ raw_spin_lock(&rq->lock);
++ update_rq_clock(rq);
++
++ if (dynamic_preempt_lazy() && tif_test_bit(TIF_NEED_RESCHED_LAZY))
++ resched_curr(rq);
++
++ scheduler_task_tick(rq);
++ if (sched_feat(LATENCY_WARN))
++ resched_latency = cpu_resched_latency(rq);
++ calc_global_load_tick(rq);
++
++ task_tick_mm_cid(rq, rq->curr);
++
++ raw_spin_unlock(&rq->lock);
++
++ if (sched_feat(LATENCY_WARN) && resched_latency)
++ resched_latency_warn(cpu, resched_latency);
++
++ perf_event_task_tick();
++
++ if (curr->flags & PF_WQ_WORKER)
++ wq_worker_tick(curr);
++}
++
++#ifdef CONFIG_NO_HZ_FULL
++
++struct tick_work {
++ int cpu;
++ atomic_t state;
++ struct delayed_work work;
++};
++/* Values for ->state, see diagram below. */
++#define TICK_SCHED_REMOTE_OFFLINE 0
++#define TICK_SCHED_REMOTE_OFFLINING 1
++#define TICK_SCHED_REMOTE_RUNNING 2
++
++/*
++ * State diagram for ->state:
++ *
++ *
++ * TICK_SCHED_REMOTE_OFFLINE
++ * | ^
++ * | |
++ * | | sched_tick_remote()
++ * | |
++ * | |
++ * +--TICK_SCHED_REMOTE_OFFLINING
++ * | ^
++ * | |
++ * sched_tick_start() | | sched_tick_stop()
++ * | |
++ * V |
++ * TICK_SCHED_REMOTE_RUNNING
++ *
++ *
++ * Other transitions get WARN_ON_ONCE(), except that sched_tick_remote()
++ * and sched_tick_start() are happy to leave the state in RUNNING.
++ */
++
++static struct tick_work __percpu *tick_work_cpu;
++
++static void sched_tick_remote(struct work_struct *work)
++{
++ struct delayed_work *dwork = to_delayed_work(work);
++ struct tick_work *twork = container_of(dwork, struct tick_work, work);
++ int cpu = twork->cpu;
++ struct rq *rq = cpu_rq(cpu);
++ int os;
++
++ /*
++ * Handle the tick only if it appears the remote CPU is running in full
++ * dynticks mode. The check is racy by nature, but missing a tick or
++ * having one too many is no big deal because the scheduler tick updates
++ * statistics and checks timeslices in a time-independent way, regardless
++ * of when exactly it is running.
++ */
++ if (tick_nohz_tick_stopped_cpu(cpu)) {
++ guard(raw_spinlock_irqsave)(&rq->lock);
++ struct task_struct *curr = rq->curr;
++
++ if (cpu_online(cpu)) {
++ update_rq_clock(rq);
++
++ if (!is_idle_task(curr)) {
++ /*
++ * Make sure the next tick runs within a
++ * reasonable amount of time.
++ */
++ u64 delta = rq_clock_task(rq) - curr->last_ran;
++ WARN_ON_ONCE(delta > (u64)NSEC_PER_SEC * 3);
++ }
++ scheduler_task_tick(rq);
++
++ calc_load_nohz_remote(rq);
++ }
++ }
++
++ /*
++ * Run the remote tick once per second (1Hz). This arbitrary
++ * frequency is large enough to avoid overload but short enough
++ * to keep scheduler internal stats reasonably up to date. But
++ * first update state to reflect hotplug activity if required.
++ */
++ os = atomic_fetch_add_unless(&twork->state, -1, TICK_SCHED_REMOTE_RUNNING);
++ WARN_ON_ONCE(os == TICK_SCHED_REMOTE_OFFLINE);
++ if (os == TICK_SCHED_REMOTE_RUNNING)
++ queue_delayed_work(system_unbound_wq, dwork, HZ);
++}
++
++static void sched_tick_start(int cpu)
++{
++ int os;
++ struct tick_work *twork;
++
++ if (housekeeping_cpu(cpu, HK_TYPE_KERNEL_NOISE))
++ return;
++
++ WARN_ON_ONCE(!tick_work_cpu);
++
++ twork = per_cpu_ptr(tick_work_cpu, cpu);
++ os = atomic_xchg(&twork->state, TICK_SCHED_REMOTE_RUNNING);
++ WARN_ON_ONCE(os == TICK_SCHED_REMOTE_RUNNING);
++ if (os == TICK_SCHED_REMOTE_OFFLINE) {
++ twork->cpu = cpu;
++ INIT_DELAYED_WORK(&twork->work, sched_tick_remote);
++ queue_delayed_work(system_unbound_wq, &twork->work, HZ);
++ }
++}
++
++#ifdef CONFIG_HOTPLUG_CPU
++static void sched_tick_stop(int cpu)
++{
++ struct tick_work *twork;
++ int os;
++
++ if (housekeeping_cpu(cpu, HK_TYPE_KERNEL_NOISE))
++ return;
++
++ WARN_ON_ONCE(!tick_work_cpu);
++
++ twork = per_cpu_ptr(tick_work_cpu, cpu);
++ /* There cannot be competing actions, but don't rely on stop-machine. */
++ os = atomic_xchg(&twork->state, TICK_SCHED_REMOTE_OFFLINING);
++ WARN_ON_ONCE(os != TICK_SCHED_REMOTE_RUNNING);
++ /* Don't cancel, as this would mess up the state machine. */
++}
++#endif /* CONFIG_HOTPLUG_CPU */
++
++int __init sched_tick_offload_init(void)
++{
++ tick_work_cpu = alloc_percpu(struct tick_work);
++ BUG_ON(!tick_work_cpu);
++ return 0;
++}
++
++#else /* !CONFIG_NO_HZ_FULL */
++static inline void sched_tick_start(int cpu) { }
++static inline void sched_tick_stop(int cpu) { }
++#endif
++
++#if defined(CONFIG_PREEMPTION) && (defined(CONFIG_DEBUG_PREEMPT) || \
++ defined(CONFIG_PREEMPT_TRACER))
++/*
++ * If the value passed in is equal to the current preempt count
++ * then we just disabled preemption. Start timing the latency.
++ */
++static inline void preempt_latency_start(int val)
++{
++ if (preempt_count() == val) {
++ unsigned long ip = get_lock_parent_ip();
++#ifdef CONFIG_DEBUG_PREEMPT
++ current->preempt_disable_ip = ip;
++#endif
++ trace_preempt_off(CALLER_ADDR0, ip);
++ }
++}
++
++void preempt_count_add(int val)
++{
++#ifdef CONFIG_DEBUG_PREEMPT
++ /*
++ * Underflow?
++ */
++ if (DEBUG_LOCKS_WARN_ON((preempt_count() < 0)))
++ return;
++#endif
++ __preempt_count_add(val);
++#ifdef CONFIG_DEBUG_PREEMPT
++ /*
++ * Spinlock count overflowing soon?
++ */
++ DEBUG_LOCKS_WARN_ON((preempt_count() & PREEMPT_MASK) >=
++ PREEMPT_MASK - 10);
++#endif
++ preempt_latency_start(val);
++}
++EXPORT_SYMBOL(preempt_count_add);
++NOKPROBE_SYMBOL(preempt_count_add);
++
++/*
++ * If the value passed in is equal to the current preempt count
++ * then we just enabled preemption. Stop timing the latency.
++ */
++static inline void preempt_latency_stop(int val)
++{
++ if (preempt_count() == val)
++ trace_preempt_on(CALLER_ADDR0, get_lock_parent_ip());
++}
++
++void preempt_count_sub(int val)
++{
++#ifdef CONFIG_DEBUG_PREEMPT
++ /*
++ * Underflow?
++ */
++ if (DEBUG_LOCKS_WARN_ON(val > preempt_count()))
++ return;
++ /*
++ * Is the spinlock portion underflowing?
++ */
++ if (DEBUG_LOCKS_WARN_ON((val < PREEMPT_MASK) &&
++ !(preempt_count() & PREEMPT_MASK)))
++ return;
++#endif
++
++ preempt_latency_stop(val);
++ __preempt_count_sub(val);
++}
++EXPORT_SYMBOL(preempt_count_sub);
++NOKPROBE_SYMBOL(preempt_count_sub);
++
++#else
++static inline void preempt_latency_start(int val) { }
++static inline void preempt_latency_stop(int val) { }
++#endif
++
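Illustration (not from the patch): only the outermost disable/enable transition is traced, because preempt_latency_start()/stop() compare the count against the value being added or removed.

    static void preempt_nesting_sketch(void)
    {
            preempt_disable();      /* count 0 -> 1: preempt_latency_start() fires */
            preempt_disable();      /* count 1 -> 2: no trace, already non-preemptible */

            /* ... critical section ... */

            preempt_enable();       /* count 2 -> 1: no trace */
            preempt_enable();       /* count 1 -> 0: preempt_latency_stop() fires */
    }
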
++static inline unsigned long get_preempt_disable_ip(struct task_struct *p)
++{
++#ifdef CONFIG_DEBUG_PREEMPT
++ return p->preempt_disable_ip;
++#else
++ return 0;
++#endif
++}
++
++/*
++ * Print scheduling while atomic bug:
++ */
++static noinline void __schedule_bug(struct task_struct *prev)
++{
++ /* Save this before calling printk(), since that will clobber it */
++ unsigned long preempt_disable_ip = get_preempt_disable_ip(current);
++
++ if (oops_in_progress)
++ return;
++
++ printk(KERN_ERR "BUG: scheduling while atomic: %s/%d/0x%08x\n",
++ prev->comm, prev->pid, preempt_count());
++
++ debug_show_held_locks(prev);
++ print_modules();
++ if (irqs_disabled())
++ print_irqtrace_events(prev);
++ if (IS_ENABLED(CONFIG_DEBUG_PREEMPT)) {
++ pr_err("Preemption disabled at:");
++ print_ip_sym(KERN_ERR, preempt_disable_ip);
++ }
++ check_panic_on_warn("scheduling while atomic");
++
++ dump_stack();
++ add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++}
++
++/*
++ * Various schedule()-time debugging checks and statistics:
++ */
++static inline void schedule_debug(struct task_struct *prev, bool preempt)
++{
++#ifdef CONFIG_SCHED_STACK_END_CHECK
++ if (task_stack_end_corrupted(prev))
++ panic("corrupted stack end detected inside scheduler\n");
++
++ if (task_scs_end_corrupted(prev))
++ panic("corrupted shadow stack detected inside scheduler\n");
++#endif
++
++#ifdef CONFIG_DEBUG_ATOMIC_SLEEP
++ if (!preempt && READ_ONCE(prev->__state) && prev->non_block_count) {
++ printk(KERN_ERR "BUG: scheduling in a non-blocking section: %s/%d/%i\n",
++ prev->comm, prev->pid, prev->non_block_count);
++ dump_stack();
++ add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++ }
++#endif
++
++ if (unlikely(in_atomic_preempt_off())) {
++ __schedule_bug(prev);
++ preempt_count_set(PREEMPT_DISABLED);
++ }
++ rcu_sleep_check();
++ SCHED_WARN_ON(ct_state() == CT_STATE_USER);
++
++ profile_hit(SCHED_PROFILING, __builtin_return_address(0));
++
++ schedstat_inc(this_rq()->sched_count);
++}
++
++#ifdef ALT_SCHED_DEBUG
++void alt_sched_debug(void)
++{
++ printk(KERN_INFO "sched: pending: 0x%04lx, idle: 0x%04lx, sg_idle: 0x%04lx,"
++ " ecore_idle: 0x%04lx\n",
++ sched_rq_pending_mask.bits[0],
++ sched_idle_mask->bits[0],
++ sched_pcore_idle_mask->bits[0],
++ sched_ecore_idle_mask->bits[0]);
++}
++#endif
++
++#ifdef CONFIG_SMP
++
++#ifdef CONFIG_PREEMPT_RT
++#define SCHED_NR_MIGRATE_BREAK 8
++#else
++#define SCHED_NR_MIGRATE_BREAK 32
++#endif
++
++const_debug unsigned int sysctl_sched_nr_migrate = SCHED_NR_MIGRATE_BREAK;
++
++/*
++ * Migrate pending tasks in @rq to @dest_cpu
++ */
++static inline int
++migrate_pending_tasks(struct rq *rq, struct rq *dest_rq, const int dest_cpu)
++{
++ struct task_struct *p, *skip = rq->curr;
++ int nr_migrated = 0;
++ int nr_tries = min(rq->nr_running / 2, sysctl_sched_nr_migrate);
++
++ /* Workaround to check rq->curr is still on rq */
++ if (!task_on_rq_queued(skip))
++ return 0;
++
++ while (skip != rq->idle && nr_tries &&
++ (p = sched_rq_next_task(skip, rq)) != rq->idle) {
++ skip = sched_rq_next_task(p, rq);
++ if (cpumask_test_cpu(dest_cpu, p->cpus_ptr)) {
++ __SCHED_DEQUEUE_TASK(p, rq, 0, );
++ set_task_cpu(p, dest_cpu);
++ sched_task_sanity_check(p, dest_rq);
++ sched_mm_cid_migrate_to(dest_rq, p);
++ __SCHED_ENQUEUE_TASK(p, dest_rq, 0, );
++ nr_migrated++;
++ }
++ nr_tries--;
++ }
++
++ return nr_migrated;
++}
++
++static inline int take_other_rq_tasks(struct rq *rq, int cpu)
++{
++ cpumask_t *topo_mask, *end_mask, chk;
++
++ if (unlikely(!rq->online))
++ return 0;
++
++ if (cpumask_empty(&sched_rq_pending_mask))
++ return 0;
++
++ topo_mask = per_cpu(sched_cpu_topo_masks, cpu);
++ end_mask = per_cpu(sched_cpu_topo_end_mask, cpu);
++ do {
++ int i;
++
++ if (!cpumask_and(&chk, &sched_rq_pending_mask, topo_mask))
++ continue;
++
++ for_each_cpu_wrap(i, &chk, cpu) {
++ int nr_migrated;
++ struct rq *src_rq;
++
++ src_rq = cpu_rq(i);
++ if (!do_raw_spin_trylock(&src_rq->lock))
++ continue;
++ spin_acquire(&src_rq->lock.dep_map,
++ SINGLE_DEPTH_NESTING, 1, _RET_IP_);
++
++ if ((nr_migrated = migrate_pending_tasks(src_rq, rq, cpu))) {
++ src_rq->nr_running -= nr_migrated;
++ if (src_rq->nr_running < 2)
++ cpumask_clear_cpu(i, &sched_rq_pending_mask);
++
++ spin_release(&src_rq->lock.dep_map, _RET_IP_);
++ do_raw_spin_unlock(&src_rq->lock);
++
++ rq->nr_running += nr_migrated;
++ if (rq->nr_running > 1)
++ cpumask_set_cpu(cpu, &sched_rq_pending_mask);
++
++ update_sched_preempt_mask(rq);
++ cpufreq_update_util(rq, 0);
++
++ return 1;
++ }
++
++ spin_release(&src_rq->lock.dep_map, _RET_IP_);
++ do_raw_spin_unlock(&src_rq->lock);
++ }
++ } while (++topo_mask < end_mask);
++
++ return 0;
++}
++#endif
++
++static inline void time_slice_expired(struct task_struct *p, struct rq *rq)
++{
++ p->time_slice = sysctl_sched_base_slice;
++
++ sched_task_renew(p, rq);
++
++ if (SCHED_FIFO != p->policy && task_on_rq_queued(p))
++ requeue_task(p, rq);
++}
++
++/*
++ * Timeslices below RESCHED_NS are considered as good as expired as there's no
++ * point rescheduling when there's so little time left.
++ */
++static inline void check_curr(struct task_struct *p, struct rq *rq)
++{
++ if (unlikely(rq->idle == p))
++ return;
++
++ update_curr(rq, p);
++
++ if (p->time_slice < RESCHED_NS)
++ time_slice_expired(p, rq);
++}
++
++static inline struct task_struct *
++choose_next_task(struct rq *rq, int cpu)
++{
++ struct task_struct *next = sched_rq_first_task(rq);
++
++ if (next == rq->idle) {
++#ifdef CONFIG_SMP
++ if (!take_other_rq_tasks(rq, cpu)) {
++ if (likely(rq->balance_func && rq->online))
++ rq->balance_func(rq, cpu);
++#endif /* CONFIG_SMP */
++
++ schedstat_inc(rq->sched_goidle);
++ /*printk(KERN_INFO "sched: choose_next_task(%d) idle %px\n", cpu, next);*/
++ return next;
++#ifdef CONFIG_SMP
++ }
++ next = sched_rq_first_task(rq);
++#endif
++ }
++#ifdef CONFIG_HIGH_RES_TIMERS
++ hrtick_start(rq, next->time_slice);
++#endif
++ /*printk(KERN_INFO "sched: choose_next_task(%d) next %px\n", cpu, next);*/
++ return next;
++}
++
++/*
++ * Constants for the sched_mode argument of __schedule().
++ *
++ * The mode argument allows RT enabled kernels to differentiate a
++ * preemption from blocking on an 'sleeping' spin/rwlock.
++ */
++ #define SM_IDLE (-1)
++ #define SM_NONE 0
++ #define SM_PREEMPT 1
++ #define SM_RTLOCK_WAIT 2
++
++/*
++ * Helper function for __schedule()
++ *
++ * If a task does not have signals pending, deactivate it
++ * Otherwise marks the task's __state as RUNNING
++ */
++static bool try_to_block_task(struct rq *rq, struct task_struct *p,
++ unsigned long task_state)
++{
++ if (signal_pending_state(task_state, p)) {
++ WRITE_ONCE(p->__state, TASK_RUNNING);
++ return false;
++ }
++ p->sched_contributes_to_load =
++ (task_state & TASK_UNINTERRUPTIBLE) &&
++ !(task_state & TASK_NOLOAD) &&
++ !(task_state & TASK_FROZEN);
++
++ /*
++ * __schedule() ttwu()
++ * prev_state = prev->state; if (p->on_rq && ...)
++ * if (prev_state) goto out;
++ * p->on_rq = 0; smp_acquire__after_ctrl_dep();
++ * p->state = TASK_WAKING
++ *
++ * Where __schedule() and ttwu() have matching control dependencies.
++ *
++ * After this, schedule() must not care about p->state any more.
++ */
++ sched_task_deactivate(p, rq);
++ block_task(rq, p);
++ return true;
++}
++
++/*
++ * schedule() is the main scheduler function.
++ *
++ * The main means of driving the scheduler and thus entering this function are:
++ *
++ * 1. Explicit blocking: mutex, semaphore, waitqueue, etc.
++ *
++ * 2. TIF_NEED_RESCHED flag is checked on interrupt and userspace return
++ * paths. For example, see arch/x86/entry_64.S.
++ *
++ * To drive preemption between tasks, the scheduler sets the flag in timer
++ * interrupt handler sched_tick().
++ *
++ * 3. Wakeups don't really cause entry into schedule(). They add a
++ * task to the run-queue and that's it.
++ *
++ * Now, if the new task added to the run-queue preempts the current
++ * task, then the wakeup sets TIF_NEED_RESCHED and schedule() gets
++ * called on the nearest possible occasion:
++ *
++ * - If the kernel is preemptible (CONFIG_PREEMPTION=y):
++ *
++ * - in syscall or exception context, at the next outmost
++ * preempt_enable(). (this might be as soon as the wake_up()'s
++ * spin_unlock()!)
++ *
++ * - in IRQ context, return from interrupt-handler to
++ * preemptible context
++ *
++ * - If the kernel is not preemptible (CONFIG_PREEMPTION is not set)
++ * then at the next:
++ *
++ * - cond_resched() call
++ * - explicit schedule() call
++ * - return from syscall or exception to user-space
++ * - return from interrupt-handler to user-space
++ *
++ * WARNING: must be called with preemption disabled!
++ */
++static void __sched notrace __schedule(int sched_mode)
++{
++ struct task_struct *prev, *next;
++ /*
++ * On PREEMPT_RT kernel, SM_RTLOCK_WAIT is noted
++ * as a preemption by schedule_debug() and RCU.
++ */
++ bool preempt = sched_mode > SM_NONE;
++ unsigned long *switch_count;
++ unsigned long prev_state;
++ struct rq *rq;
++ int cpu;
++
++ cpu = smp_processor_id();
++ rq = cpu_rq(cpu);
++ prev = rq->curr;
++
++ schedule_debug(prev, preempt);
++
++ /* bypassing the sched_feat(HRTICK) check, which Alt schedule FW doesn't support */
++ hrtick_clear(rq);
++
++ local_irq_disable();
++ rcu_note_context_switch(preempt);
++
++ /*
++ * Make sure that signal_pending_state()->signal_pending() below
++ * can't be reordered with __set_current_state(TASK_INTERRUPTIBLE)
++ * done by the caller to avoid the race with signal_wake_up():
++ *
++ * __set_current_state(@state) signal_wake_up()
++ * schedule() set_tsk_thread_flag(p, TIF_SIGPENDING)
++ * wake_up_state(p, state)
++ * LOCK rq->lock LOCK p->pi_state
++ * smp_mb__after_spinlock() smp_mb__after_spinlock()
++ * if (signal_pending_state()) if (p->state & @state)
++ *
++ * Also, the membarrier system call requires a full memory barrier
++ * after coming from user-space, before storing to rq->curr; this
++ * barrier matches a full barrier in the proximity of the membarrier
++ * system call exit.
++ */
++ raw_spin_lock(&rq->lock);
++ smp_mb__after_spinlock();
++
++ update_rq_clock(rq);
++
++ switch_count = &prev->nivcsw;
++
++ /* Task state changes only considers SM_PREEMPT as preemption */
++ preempt = sched_mode == SM_PREEMPT;
++
++ /*
++ * We must load prev->state once (task_struct::state is volatile), such
++ * that we form a control dependency vs deactivate_task() below.
++ */
++ prev_state = READ_ONCE(prev->__state);
++ if (sched_mode == SM_IDLE) {
++ if (!rq->nr_running) {
++ next = prev;
++ goto picked;
++ }
++ } else if (!preempt && prev_state) {
++ try_to_block_task(rq, prev, prev_state);
++ switch_count = &prev->nvcsw;
++ }
++
++ check_curr(prev, rq);
++
++ next = choose_next_task(rq, cpu);
++picked:
++ clear_tsk_need_resched(prev);
++ clear_preempt_need_resched();
++#ifdef CONFIG_SCHED_DEBUG
++ rq->last_seen_need_resched_ns = 0;
++#endif
++
++ if (likely(prev != next)) {
++ next->last_ran = rq->clock_task;
++
++ /*printk(KERN_INFO "sched: %px -> %px\n", prev, next);*/
++ rq->nr_switches++;
++ /*
++ * RCU users of rcu_dereference(rq->curr) may not see
++ * changes to task_struct made by pick_next_task().
++ */
++ RCU_INIT_POINTER(rq->curr, next);
++ /*
++ * The membarrier system call requires each architecture
++ * to have a full memory barrier after updating
++ * rq->curr, before returning to user-space.
++ *
++ * Here are the schemes providing that barrier on the
++ * various architectures:
++ * - mm ? switch_mm() : mmdrop() for x86, s390, sparc, PowerPC,
++ * RISC-V. switch_mm() relies on membarrier_arch_switch_mm()
++ * on PowerPC and on RISC-V.
++ * - finish_lock_switch() for weakly-ordered
++ * architectures where spin_unlock is a full barrier,
++ * - switch_to() for arm64 (weakly-ordered, spin_unlock
++ * is a RELEASE barrier),
++ *
++ * The barrier matches a full barrier in the proximity of
++ * the membarrier system call entry.
++ *
++ * On RISC-V, this barrier pairing is also needed for the
++ * SYNC_CORE command when switching between processes, cf.
++ * the inline comments in membarrier_arch_switch_mm().
++ */
++ ++*switch_count;
++
++ trace_sched_switch(preempt, prev, next, prev_state);
++
++ /* Also unlocks the rq: */
++ rq = context_switch(rq, prev, next);
++
++ cpu = cpu_of(rq);
++ } else {
++ __balance_callbacks(rq);
++ raw_spin_unlock_irq(&rq->lock);
++ }
++}
++
++void __noreturn do_task_dead(void)
++{
++ /* Causes final put_task_struct in finish_task_switch(): */
++ set_special_state(TASK_DEAD);
++
++ /* Tell freezer to ignore us: */
++ current->flags |= PF_NOFREEZE;
++
++ __schedule(SM_NONE);
++ BUG();
++
++ /* Avoid "noreturn function does return" - but don't continue if BUG() is a NOP: */
++ for (;;)
++ cpu_relax();
++}
++
++static inline void sched_submit_work(struct task_struct *tsk)
++{
++ static DEFINE_WAIT_OVERRIDE_MAP(sched_map, LD_WAIT_CONFIG);
++ unsigned int task_flags;
++
++ /*
++ * Establish LD_WAIT_CONFIG context to ensure none of the code called
++ * will use a blocking primitive -- which would lead to recursion.
++ */
++ lock_map_acquire_try(&sched_map);
++
++ task_flags = tsk->flags;
++ /*
++ * If a worker goes to sleep, notify and ask workqueue whether it
++ * wants to wake up a task to maintain concurrency.
++ */
++ if (task_flags & PF_WQ_WORKER)
++ wq_worker_sleeping(tsk);
++ else if (task_flags & PF_IO_WORKER)
++ io_wq_worker_sleeping(tsk);
++
++ /*
++ * spinlock and rwlock must not flush block requests. This will
++ * deadlock if the callback attempts to acquire a lock which is
++ * already acquired.
++ */
++ SCHED_WARN_ON(current->__state & TASK_RTLOCK_WAIT);
++
++ /*
++ * If we are going to sleep and we have plugged IO queued,
++ * make sure to submit it to avoid deadlocks.
++ */
++ blk_flush_plug(tsk->plug, true);
++
++ lock_map_release(&sched_map);
++}
++
++static void sched_update_worker(struct task_struct *tsk)
++{
++ if (tsk->flags & (PF_WQ_WORKER | PF_IO_WORKER | PF_BLOCK_TS)) {
++ if (tsk->flags & PF_BLOCK_TS)
++ blk_plug_invalidate_ts(tsk);
++ if (tsk->flags & PF_WQ_WORKER)
++ wq_worker_running(tsk);
++ else if (tsk->flags & PF_IO_WORKER)
++ io_wq_worker_running(tsk);
++ }
++}
++
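++/*
++ * Keep calling __schedule() with preemption disabled until no reschedule
++ * is pending. Shared by schedule(), schedule_rtlock() and
++ * rt_mutex_schedule().
++ */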
++static __always_inline void __schedule_loop(int sched_mode)
++{
++ do {
++ preempt_disable();
++ __schedule(sched_mode);
++ sched_preempt_enable_no_resched();
++ } while (need_resched());
++}
++
++asmlinkage __visible void __sched schedule(void)
++{
++ struct task_struct *tsk = current;
++
++#ifdef CONFIG_RT_MUTEXES
++ lockdep_assert(!tsk->sched_rt_mutex);
++#endif
++
++ if (!task_is_running(tsk))
++ sched_submit_work(tsk);
++ __schedule_loop(SM_NONE);
++ sched_update_worker(tsk);
++}
++EXPORT_SYMBOL(schedule);
++
++/*
++ * synchronize_rcu_tasks() makes sure that no task is stuck in preempted
++ * state (have scheduled out non-voluntarily) by making sure that all
++ * tasks have either left the run queue or have gone into user space.
++ * As idle tasks do not do either, they must not ever be preempted
++ * (schedule out non-voluntarily).
++ *
++ * schedule_idle() is similar to schedule_preempt_disabled() except that it
++ * never enables preemption because it does not call sched_submit_work().
++ */
++void __sched schedule_idle(void)
++{
++ /*
++ * As this skips calling sched_submit_work(), which the idle task does
++ * regardless because that function is a NOP when the task is in a
++ * TASK_RUNNING state, make sure this isn't used someplace that the
++ * current task can be in any other state. Note, idle is always in the
++ * TASK_RUNNING state.
++ */
++ WARN_ON_ONCE(current->__state);
++ do {
++ __schedule(SM_IDLE);
++ } while (need_resched());
++}
++
++#if defined(CONFIG_CONTEXT_TRACKING_USER) && !defined(CONFIG_HAVE_CONTEXT_TRACKING_USER_OFFSTACK)
++asmlinkage __visible void __sched schedule_user(void)
++{
++ /*
++ * If we come here after a random call to set_need_resched(),
++ * or we have been woken up remotely but the IPI has not yet arrived,
++ * we haven't yet exited the RCU idle mode. Do it here manually until
++ * we find a better solution.
++ *
++ * NB: There are buggy callers of this function. Ideally we
++ * should warn if prev_state != CT_STATE_USER, but that will trigger
++ * too frequently to make sense yet.
++ */
++ enum ctx_state prev_state = exception_enter();
++ schedule();
++ exception_exit(prev_state);
++}
++#endif
++
++/**
++ * schedule_preempt_disabled - called with preemption disabled
++ *
++ * Returns with preemption disabled. Note: preempt_count must be 1
++ */
++void __sched schedule_preempt_disabled(void)
++{
++ sched_preempt_enable_no_resched();
++ schedule();
++ preempt_disable();
++}
++
++#ifdef CONFIG_PREEMPT_RT
++void __sched notrace schedule_rtlock(void)
++{
++ __schedule_loop(SM_RTLOCK_WAIT);
++}
++NOKPROBE_SYMBOL(schedule_rtlock);
++#endif
++
++static void __sched notrace preempt_schedule_common(void)
++{
++ do {
++ /*
++ * Because the function tracer can trace preempt_count_sub()
++ * and it also uses preempt_enable/disable_notrace(), if
++ * NEED_RESCHED is set, the preempt_enable_notrace() called
++ * by the function tracer will call this function again and
++ * cause infinite recursion.
++ *
++ * Preemption must be disabled here before the function
++ * tracer can trace. Break up preempt_disable() into two
++ * calls. One to disable preemption without fear of being
++ * traced. The other to still record the preemption latency,
++ * which can also be traced by the function tracer.
++ */
++ preempt_disable_notrace();
++ preempt_latency_start(1);
++ __schedule(SM_PREEMPT);
++ preempt_latency_stop(1);
++ preempt_enable_no_resched_notrace();
++
++ /*
++ * Check again in case we missed a preemption opportunity
++ * between schedule and now.
++ */
++ } while (need_resched());
++}
++
++#ifdef CONFIG_PREEMPTION
++/*
++ * This is the entry point to schedule() from in-kernel preemption
++ * off of preempt_enable.
++ */
++asmlinkage __visible void __sched notrace preempt_schedule(void)
++{
++ /*
++ * If there is a non-zero preempt_count or interrupts are disabled,
++ * we do not want to preempt the current task. Just return..
++ */
++ if (likely(!preemptible()))
++ return;
++
++ preempt_schedule_common();
++}
++NOKPROBE_SYMBOL(preempt_schedule);
++EXPORT_SYMBOL(preempt_schedule);
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
++#ifndef preempt_schedule_dynamic_enabled
++#define preempt_schedule_dynamic_enabled preempt_schedule
++#define preempt_schedule_dynamic_disabled NULL
++#endif
++DEFINE_STATIC_CALL(preempt_schedule, preempt_schedule_dynamic_enabled);
++EXPORT_STATIC_CALL_TRAMP(preempt_schedule);
++#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
++static DEFINE_STATIC_KEY_TRUE(sk_dynamic_preempt_schedule);
++void __sched notrace dynamic_preempt_schedule(void)
++{
++ if (!static_branch_unlikely(&sk_dynamic_preempt_schedule))
++ return;
++ preempt_schedule();
++}
++NOKPROBE_SYMBOL(dynamic_preempt_schedule);
++EXPORT_SYMBOL(dynamic_preempt_schedule);
++#endif
++#endif
++
++/**
++ * preempt_schedule_notrace - preempt_schedule called by tracing
++ *
++ * The tracing infrastructure uses preempt_enable_notrace to prevent
++ * recursion and tracing preempt enabling caused by the tracing
++ * infrastructure itself. But as tracing can happen in areas coming
++ * from userspace or just about to enter userspace, a preempt enable
++ * can occur before user_exit() is called. This will cause the scheduler
++ * to be called when the system is still in usermode.
++ *
++ * To prevent this, the preempt_enable_notrace will use this function
++ * instead of preempt_schedule() to exit user context if needed before
++ * calling the scheduler.
++ */
++asmlinkage __visible void __sched notrace preempt_schedule_notrace(void)
++{
++ enum ctx_state prev_ctx;
++
++ if (likely(!preemptible()))
++ return;
++
++ do {
++ /*
++ * Because the function tracer can trace preempt_count_sub()
++ * and it also uses preempt_enable/disable_notrace(), if
++ * NEED_RESCHED is set, the preempt_enable_notrace() called
++ * by the function tracer will call this function again and
++ * cause infinite recursion.
++ *
++ * Preemption must be disabled here before the function
++ * tracer can trace. Break up preempt_disable() into two
++ * calls. One to disable preemption without fear of being
++ * traced. The other to still record the preemption latency,
++ * which can also be traced by the function tracer.
++ */
++ preempt_disable_notrace();
++ preempt_latency_start(1);
++ /*
++ * Needs preempt disabled in case user_exit() is traced
++ * and the tracer calls preempt_enable_notrace() causing
++ * an infinite recursion.
++ */
++ prev_ctx = exception_enter();
++ __schedule(SM_PREEMPT);
++ exception_exit(prev_ctx);
++
++ preempt_latency_stop(1);
++ preempt_enable_no_resched_notrace();
++ } while (need_resched());
++}
++EXPORT_SYMBOL_GPL(preempt_schedule_notrace);
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
++#ifndef preempt_schedule_notrace_dynamic_enabled
++#define preempt_schedule_notrace_dynamic_enabled preempt_schedule_notrace
++#define preempt_schedule_notrace_dynamic_disabled NULL
++#endif
++DEFINE_STATIC_CALL(preempt_schedule_notrace, preempt_schedule_notrace_dynamic_enabled);
++EXPORT_STATIC_CALL_TRAMP(preempt_schedule_notrace);
++#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
++static DEFINE_STATIC_KEY_TRUE(sk_dynamic_preempt_schedule_notrace);
++void __sched notrace dynamic_preempt_schedule_notrace(void)
++{
++ if (!static_branch_unlikely(&sk_dynamic_preempt_schedule_notrace))
++ return;
++ preempt_schedule_notrace();
++}
++NOKPROBE_SYMBOL(dynamic_preempt_schedule_notrace);
++EXPORT_SYMBOL(dynamic_preempt_schedule_notrace);
++#endif
++#endif
++
++#endif /* CONFIG_PREEMPTION */
++
++/*
++ * This is the entry point to schedule() from kernel preemption
++ * off of IRQ context.
++ * Note that this is called and returns with IRQs disabled. This
++ * protects us against recursive calls from IRQ contexts.
++ */
++asmlinkage __visible void __sched preempt_schedule_irq(void)
++{
++ enum ctx_state prev_state;
++
++ /* Catch callers which need to be fixed */
++ BUG_ON(preempt_count() || !irqs_disabled());
++
++ prev_state = exception_enter();
++
++ do {
++ preempt_disable();
++ local_irq_enable();
++ __schedule(SM_PREEMPT);
++ local_irq_disable();
++ sched_preempt_enable_no_resched();
++ } while (need_resched());
++
++ exception_exit(prev_state);
++}
++
++int default_wake_function(wait_queue_entry_t *curr, unsigned mode, int wake_flags,
++ void *key)
++{
++ WARN_ON_ONCE(IS_ENABLED(CONFIG_SCHED_DEBUG) && wake_flags & ~(WF_SYNC|WF_CURRENT_CPU));
++ return try_to_wake_up(curr->private, mode, wake_flags);
++}
++EXPORT_SYMBOL(default_wake_function);
++
++void check_task_changed(struct task_struct *p, struct rq *rq)
++{
++ /* Trigger resched if task sched_prio has been modified. */
++ if (task_on_rq_queued(p)) {
++ update_rq_clock(rq);
++ requeue_task(p, rq);
++ wakeup_preempt(rq);
++ }
++}
++
++void __setscheduler_prio(struct task_struct *p, int prio)
++{
++ p->prio = prio;
++}
++
++#ifdef CONFIG_RT_MUTEXES
++
++/*
++ * Would be more useful with typeof()/auto_type but they don't mix with
++ * bit-fields. Since it's a local thing, use int. Keep the generic sounding
++ * name such that if someone were to implement this function we get to compare
++ * notes.
++ */
++#define fetch_and_set(x, v) ({ int _x = (x); (x) = (v); _x; })
++
++void rt_mutex_pre_schedule(void)
++{
++ lockdep_assert(!fetch_and_set(current->sched_rt_mutex, 1));
++ sched_submit_work(current);
++}
++
++void rt_mutex_schedule(void)
++{
++ lockdep_assert(current->sched_rt_mutex);
++ __schedule_loop(SM_NONE);
++}
++
++void rt_mutex_post_schedule(void)
++{
++ sched_update_worker(current);
++ lockdep_assert(fetch_and_set(current->sched_rt_mutex, 0));
++}
++
++/*
++ * rt_mutex_setprio - set the current priority of a task
++ * @p: task to boost
++ * @pi_task: donor task
++ *
++ * This function changes the 'effective' priority of a task. It does
++ * not touch ->normal_prio like __setscheduler().
++ *
++ * Used by the rt_mutex code to implement priority inheritance
++ * logic. Call site only calls if the priority of the task changed.
++ */
++void rt_mutex_setprio(struct task_struct *p, struct task_struct *pi_task)
++{
++ int prio;
++ struct rq *rq;
++ raw_spinlock_t *lock;
++
++ /* XXX used to be waiter->prio, not waiter->task->prio */
++ prio = __rt_effective_prio(pi_task, p->normal_prio);
++
++ /*
++ * If nothing changed; bail early.
++ */
++ if (p->pi_top_task == pi_task && prio == p->prio)
++ return;
++
++ rq = __task_access_lock(p, &lock);
++ /*
++ * Set under pi_lock && rq->lock, such that the value can be used under
++ * either lock.
++ *
++ * Note that a fair amount of trickery is needed to make this pointer cache
++ * work right. rt_mutex_slowunlock()+rt_mutex_postunlock() work together to
++ * ensure a task is de-boosted (pi_task is set to NULL) before the
++ * task is allowed to run again (and can exit). This ensures the pointer
++ * points to a blocked task -- which guarantees the task is present.
++ */
++ p->pi_top_task = pi_task;
++
++ /*
++ * For FIFO/RR we only need to set prio, if that matches we're done.
++ */
++ if (prio == p->prio)
++ goto out_unlock;
++
++ /*
++ * Idle task boosting is a no-no in general. There is one
++ * exception, when PREEMPT_RT and NOHZ is active:
++ *
++ * The idle task calls get_next_timer_interrupt() and holds
++ * the timer wheel base->lock on the CPU and another CPU wants
++ * to access the timer (probably to cancel it). We can safely
++ * ignore the boosting request, as the idle CPU runs this code
++ * with interrupts disabled and will complete the lock
++ * protected section without being interrupted. So there is no
++ * real need to boost.
++ */
++ if (unlikely(p == rq->idle)) {
++ WARN_ON(p != rq->curr);
++ WARN_ON(p->pi_blocked_on);
++ goto out_unlock;
++ }
++
++ trace_sched_pi_setprio(p, pi_task);
++
++ __setscheduler_prio(p, prio);
++
++ check_task_changed(p, rq);
++out_unlock:
++ /* Avoid rq from going away on us: */
++ preempt_disable();
++
++ if (task_on_rq_queued(p))
++ __balance_callbacks(rq);
++ __task_access_unlock(p, lock);
++
++ preempt_enable();
++}
++#endif
++
++#if !defined(CONFIG_PREEMPTION) || defined(CONFIG_PREEMPT_DYNAMIC)
++int __sched __cond_resched(void)
++{
++ if (should_resched(0) && !irqs_disabled()) {
++ preempt_schedule_common();
++ return 1;
++ }
++ /*
++ * In preemptible kernels, ->rcu_read_lock_nesting tells the tick
++ * whether the current CPU is in an RCU read-side critical section,
++ * so the tick can report quiescent states even for CPUs looping
++ * in kernel context. In contrast, in non-preemptible kernels,
++ * RCU readers leave no in-memory hints, which means that CPU-bound
++ * processes executing in kernel context might never report an
++ * RCU quiescent state. Therefore, the following code causes
++ * cond_resched() to report a quiescent state, but only when RCU
++ * is in urgent need of one.
++ */
++#ifndef CONFIG_PREEMPT_RCU
++ rcu_all_qs();
++#endif
++ return 0;
++}
++EXPORT_SYMBOL(__cond_resched);
++#endif
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
++#define cond_resched_dynamic_enabled __cond_resched
++#define cond_resched_dynamic_disabled ((void *)&__static_call_return0)
++DEFINE_STATIC_CALL_RET0(cond_resched, __cond_resched);
++EXPORT_STATIC_CALL_TRAMP(cond_resched);
++
++#define might_resched_dynamic_enabled __cond_resched
++#define might_resched_dynamic_disabled ((void *)&__static_call_return0)
++DEFINE_STATIC_CALL_RET0(might_resched, __cond_resched);
++EXPORT_STATIC_CALL_TRAMP(might_resched);
++#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
++static DEFINE_STATIC_KEY_FALSE(sk_dynamic_cond_resched);
++int __sched dynamic_cond_resched(void)
++{
++ klp_sched_try_switch();
++ if (!static_branch_unlikely(&sk_dynamic_cond_resched))
++ return 0;
++ return __cond_resched();
++}
++EXPORT_SYMBOL(dynamic_cond_resched);
++
++static DEFINE_STATIC_KEY_FALSE(sk_dynamic_might_resched);
++int __sched dynamic_might_resched(void)
++{
++ if (!static_branch_unlikely(&sk_dynamic_might_resched))
++ return 0;
++ return __cond_resched();
++}
++EXPORT_SYMBOL(dynamic_might_resched);
++#endif
++#endif
++
++/*
++ * __cond_resched_lock() - if a reschedule is pending, drop the given lock,
++ * call schedule, and on return reacquire the lock.
++ *
++ * This works OK both with and without CONFIG_PREEMPTION. We do strange low-level
++ * operations here to prevent schedule() from being called twice (once via
++ * spin_unlock(), once by hand).
++ */
++int __cond_resched_lock(spinlock_t *lock)
++{
++ int resched = should_resched(PREEMPT_LOCK_OFFSET);
++ int ret = 0;
++
++ lockdep_assert_held(lock);
++
++ if (spin_needbreak(lock) || resched) {
++ spin_unlock(lock);
++ if (!_cond_resched())
++ cpu_relax();
++ ret = 1;
++ spin_lock(lock);
++ }
++ return ret;
++}
++EXPORT_SYMBOL(__cond_resched_lock);
++
++int __cond_resched_rwlock_read(rwlock_t *lock)
++{
++ int resched = should_resched(PREEMPT_LOCK_OFFSET);
++ int ret = 0;
++
++ lockdep_assert_held_read(lock);
++
++ if (rwlock_needbreak(lock) || resched) {
++ read_unlock(lock);
++ if (!_cond_resched())
++ cpu_relax();
++ ret = 1;
++ read_lock(lock);
++ }
++ return ret;
++}
++EXPORT_SYMBOL(__cond_resched_rwlock_read);
++
++int __cond_resched_rwlock_write(rwlock_t *lock)
++{
++ int resched = should_resched(PREEMPT_LOCK_OFFSET);
++ int ret = 0;
++
++ lockdep_assert_held_write(lock);
++
++ if (rwlock_needbreak(lock) || resched) {
++ write_unlock(lock);
++ if (!_cond_resched())
++ cpu_relax();
++ ret = 1;
++ write_lock(lock);
++ }
++ return ret;
++}
++EXPORT_SYMBOL(__cond_resched_rwlock_write);
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++
++#ifdef CONFIG_GENERIC_ENTRY
++#include <linux/entry-common.h>
++#endif
++
++/*
++ * SC:cond_resched
++ * SC:might_resched
++ * SC:preempt_schedule
++ * SC:preempt_schedule_notrace
++ * SC:irqentry_exit_cond_resched
++ *
++ *
++ * NONE:
++ * cond_resched <- __cond_resched
++ * might_resched <- RET0
++ * preempt_schedule <- NOP
++ * preempt_schedule_notrace <- NOP
++ * irqentry_exit_cond_resched <- NOP
++ * dynamic_preempt_lazy <- false
++ *
++ * VOLUNTARY:
++ * cond_resched <- __cond_resched
++ * might_resched <- __cond_resched
++ * preempt_schedule <- NOP
++ * preempt_schedule_notrace <- NOP
++ * irqentry_exit_cond_resched <- NOP
++ * dynamic_preempt_lazy <- false
++ *
++ * FULL:
++ * cond_resched <- RET0
++ * might_resched <- RET0
++ * preempt_schedule <- preempt_schedule
++ * preempt_schedule_notrace <- preempt_schedule_notrace
++ * irqentry_exit_cond_resched <- irqentry_exit_cond_resched
++ * dynamic_preempt_lazy <- false
++ *
++ * LAZY:
++ * cond_resched <- RET0
++ * might_resched <- RET0
++ * preempt_schedule <- preempt_schedule
++ * preempt_schedule_notrace <- preempt_schedule_notrace
++ * irqentry_exit_cond_resched <- irqentry_exit_cond_resched
++ * dynamic_preempt_lazy <- true
++ */
++
++enum {
++ preempt_dynamic_undefined = -1,
++ preempt_dynamic_none,
++ preempt_dynamic_voluntary,
++ preempt_dynamic_full,
++ preempt_dynamic_lazy,
++};
++
++int preempt_dynamic_mode = preempt_dynamic_undefined;
++
++int sched_dynamic_mode(const char *str)
++{
++#ifndef CONFIG_PREEMPT_RT
++ if (!strcmp(str, "none"))
++ return preempt_dynamic_none;
++
++ if (!strcmp(str, "voluntary"))
++ return preempt_dynamic_voluntary;
++#endif
++
++ if (!strcmp(str, "full"))
++ return preempt_dynamic_full;
++
++#ifdef CONFIG_ARCH_HAS_PREEMPT_LAZY
++ if (!strcmp(str, "lazy"))
++ return preempt_dynamic_lazy;
++#endif
++
++ return -EINVAL;
++}
++
++#define preempt_dynamic_key_enable(f) static_key_enable(&sk_dynamic_##f.key)
++#define preempt_dynamic_key_disable(f) static_key_disable(&sk_dynamic_##f.key)
++
++#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
++#define preempt_dynamic_enable(f) static_call_update(f, f##_dynamic_enabled)
++#define preempt_dynamic_disable(f) static_call_update(f, f##_dynamic_disabled)
++#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
++#define preempt_dynamic_enable(f) preempt_dynamic_key_enable(f)
++#define preempt_dynamic_disable(f) preempt_dynamic_key_disable(f)
++#else
++#error "Unsupported PREEMPT_DYNAMIC mechanism"
++#endif
++
++static DEFINE_MUTEX(sched_dynamic_mutex);
++static bool klp_override;
++
++static void __sched_dynamic_update(int mode)
++{
++ /*
++ * Avoid {NONE,VOLUNTARY} -> FULL transitions from ever ending up in
++ * the ZERO state, which is invalid.
++ */
++ if (!klp_override)
++ preempt_dynamic_enable(cond_resched);
++ preempt_dynamic_enable(might_resched);
++ preempt_dynamic_enable(preempt_schedule);
++ preempt_dynamic_enable(preempt_schedule_notrace);
++ preempt_dynamic_enable(irqentry_exit_cond_resched);
++ preempt_dynamic_key_disable(preempt_lazy);
++
++ switch (mode) {
++ case preempt_dynamic_none:
++ if (!klp_override)
++ preempt_dynamic_enable(cond_resched);
++ preempt_dynamic_disable(might_resched);
++ preempt_dynamic_disable(preempt_schedule);
++ preempt_dynamic_disable(preempt_schedule_notrace);
++ preempt_dynamic_disable(irqentry_exit_cond_resched);
++ preempt_dynamic_key_disable(preempt_lazy);
++ if (mode != preempt_dynamic_mode)
++ pr_info("Dynamic Preempt: none\n");
++ break;
++
++ case preempt_dynamic_voluntary:
++ if (!klp_override)
++ preempt_dynamic_enable(cond_resched);
++ preempt_dynamic_enable(might_resched);
++ preempt_dynamic_disable(preempt_schedule);
++ preempt_dynamic_disable(preempt_schedule_notrace);
++ preempt_dynamic_disable(irqentry_exit_cond_resched);
++ preempt_dynamic_key_disable(preempt_lazy);
++ if (mode != preempt_dynamic_mode)
++ pr_info("Dynamic Preempt: voluntary\n");
++ break;
++
++ case preempt_dynamic_full:
++ if (!klp_override)
++ preempt_dynamic_enable(cond_resched);
++ preempt_dynamic_disable(might_resched);
++ preempt_dynamic_enable(preempt_schedule);
++ preempt_dynamic_enable(preempt_schedule_notrace);
++ preempt_dynamic_enable(irqentry_exit_cond_resched);
++ preempt_dynamic_key_disable(preempt_lazy);
++ if (mode != preempt_dynamic_mode)
++ pr_info("Dynamic Preempt: full\n");
++ break;
++
++ case preempt_dynamic_lazy:
++ if (!klp_override)
++ preempt_dynamic_disable(cond_resched);
++ preempt_dynamic_disable(might_resched);
++ preempt_dynamic_enable(preempt_schedule);
++ preempt_dynamic_enable(preempt_schedule_notrace);
++ preempt_dynamic_enable(irqentry_exit_cond_resched);
++ preempt_dynamic_key_enable(preempt_lazy);
++ if (mode != preempt_dynamic_mode)
++ pr_info("Dynamic Preempt: lazy\n");
++ break;
++ }
++
++ preempt_dynamic_mode = mode;
++}
++
++void sched_dynamic_update(int mode)
++{
++ mutex_lock(&sched_dynamic_mutex);
++ __sched_dynamic_update(mode);
++ mutex_unlock(&sched_dynamic_mutex);
++}
++
++#ifdef CONFIG_HAVE_PREEMPT_DYNAMIC_CALL
++
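++/*
++ * While a live-patch transition is pending, cond_resched() is redirected
++ * here so that each call also attempts to switch the current task to the
++ * new patch state before rescheduling.
++ */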
++static int klp_cond_resched(void)
++{
++ __klp_sched_try_switch();
++ return __cond_resched();
++}
++
++void sched_dynamic_klp_enable(void)
++{
++ mutex_lock(&sched_dynamic_mutex);
++
++ klp_override = true;
++ static_call_update(cond_resched, klp_cond_resched);
++
++ mutex_unlock(&sched_dynamic_mutex);
++}
++
++void sched_dynamic_klp_disable(void)
++{
++ mutex_lock(&sched_dynamic_mutex);
++
++ klp_override = false;
++ __sched_dynamic_update(preempt_dynamic_mode);
++
++ mutex_unlock(&sched_dynamic_mutex);
++}
++
++#endif /* CONFIG_HAVE_PREEMPT_DYNAMIC_CALL */
++
++
++static int __init setup_preempt_mode(char *str)
++{
++ int mode = sched_dynamic_mode(str);
++ if (mode < 0) {
++ pr_warn("Dynamic Preempt: unsupported mode: %s\n", str);
++ return 0;
++ }
++
++ sched_dynamic_update(mode);
++ return 1;
++}
++__setup("preempt=", setup_preempt_mode);
++
++static void __init preempt_dynamic_init(void)
++{
++ if (preempt_dynamic_mode == preempt_dynamic_undefined) {
++ if (IS_ENABLED(CONFIG_PREEMPT_NONE)) {
++ sched_dynamic_update(preempt_dynamic_none);
++ } else if (IS_ENABLED(CONFIG_PREEMPT_VOLUNTARY)) {
++ sched_dynamic_update(preempt_dynamic_voluntary);
++ } else if (IS_ENABLED(CONFIG_PREEMPT_LAZY)) {
++ sched_dynamic_update(preempt_dynamic_lazy);
++ } else {
++ /* Default static call setting, nothing to do */
++ WARN_ON_ONCE(!IS_ENABLED(CONFIG_PREEMPT));
++ preempt_dynamic_mode = preempt_dynamic_full;
++ pr_info("Dynamic Preempt: full\n");
++ }
++ }
++}
++
++#define PREEMPT_MODEL_ACCESSOR(mode) \
++ bool preempt_model_##mode(void) \
++ { \
++ WARN_ON_ONCE(preempt_dynamic_mode == preempt_dynamic_undefined); \
++ return preempt_dynamic_mode == preempt_dynamic_##mode; \
++ } \
++ EXPORT_SYMBOL_GPL(preempt_model_##mode)
++
++PREEMPT_MODEL_ACCESSOR(none);
++PREEMPT_MODEL_ACCESSOR(voluntary);
++PREEMPT_MODEL_ACCESSOR(full);
++PREEMPT_MODEL_ACCESSOR(lazy);
++
++#else /* !CONFIG_PREEMPT_DYNAMIC: */
++
++static inline void preempt_dynamic_init(void) { }
++
++#endif /* CONFIG_PREEMPT_DYNAMIC */
++
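++/*
++ * Mark the current task as waiting on IO and flush any plugged block
++ * requests before it sleeps; the previous in_iowait value is returned so
++ * io_schedule_finish() can restore it.
++ */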
++int io_schedule_prepare(void)
++{
++ int old_iowait = current->in_iowait;
++
++ current->in_iowait = 1;
++ blk_flush_plug(current->plug, true);
++ return old_iowait;
++}
++
++void io_schedule_finish(int token)
++{
++ current->in_iowait = token;
++}
++
++/*
++ * This task is about to go to sleep on IO. Increment rq->nr_iowait so
++ * that process accounting knows that this is a task in IO wait state.
++ *
++ * But don't do that if it is a deliberate, throttling IO wait (this task
++ * has set its backing_dev_info: the queue against which it should throttle)
++ */
++
++long __sched io_schedule_timeout(long timeout)
++{
++ int token;
++ long ret;
++
++ token = io_schedule_prepare();
++ ret = schedule_timeout(timeout);
++ io_schedule_finish(token);
++
++ return ret;
++}
++EXPORT_SYMBOL(io_schedule_timeout);
++
++void __sched io_schedule(void)
++{
++ int token;
++
++ token = io_schedule_prepare();
++ schedule();
++ io_schedule_finish(token);
++}
++EXPORT_SYMBOL(io_schedule);
++
++void sched_show_task(struct task_struct *p)
++{
++ unsigned long free;
++ int ppid;
++
++ if (!try_get_task_stack(p))
++ return;
++
++ pr_info("task:%-15.15s state:%c", p->comm, task_state_to_char(p));
++
++ if (task_is_running(p))
++ pr_cont(" running task ");
++ free = stack_not_used(p);
++ ppid = 0;
++ rcu_read_lock();
++ if (pid_alive(p))
++ ppid = task_pid_nr(rcu_dereference(p->real_parent));
++ rcu_read_unlock();
++ pr_cont(" stack:%-5lu pid:%-5d tgid:%-5d ppid:%-6d task_flags:0x%04x flags:0x%08lx\n",
++ free, task_pid_nr(p), task_tgid_nr(p),
++ ppid, p->flags, read_task_thread_flags(p));
++
++ print_worker_info(KERN_INFO, p);
++ print_stop_info(KERN_INFO, p);
++ show_stack(p, NULL, KERN_INFO);
++ put_task_stack(p);
++}
++EXPORT_SYMBOL_GPL(sched_show_task);
++
++static inline bool
++state_filter_match(unsigned long state_filter, struct task_struct *p)
++{
++ unsigned int state = READ_ONCE(p->__state);
++
++ /* no filter, everything matches */
++ if (!state_filter)
++ return true;
++
++ /* filter, but doesn't match */
++ if (!(state & state_filter))
++ return false;
++
++ /*
++ * When looking for TASK_UNINTERRUPTIBLE skip TASK_IDLE (allows
++ * TASK_KILLABLE).
++ */
++ if (state_filter == TASK_UNINTERRUPTIBLE && (state & TASK_NOLOAD))
++ return false;
++
++ return true;
++}
++
++
++void show_state_filter(unsigned int state_filter)
++{
++ struct task_struct *g, *p;
++
++ rcu_read_lock();
++ for_each_process_thread(g, p) {
++ /*
++ * reset the NMI-timeout, listing all files on a slow
++ * console might take a lot of time:
++ * Also, reset softlockup watchdogs on all CPUs, because
++ * another CPU might be blocked waiting for us to process
++ * an IPI.
++ */
++ touch_nmi_watchdog();
++ touch_all_softlockup_watchdogs();
++ if (state_filter_match(state_filter, p))
++ sched_show_task(p);
++ }
++
++#ifdef CONFIG_SCHED_DEBUG
++ /* TODO: Alt schedule FW should support this
++ if (!state_filter)
++ sysrq_sched_debug_show();
++ */
++#endif
++ rcu_read_unlock();
++ /*
++ * Only show locks if all tasks are dumped:
++ */
++ if (!state_filter)
++ debug_show_all_locks();
++}
++
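++/*
++ * Dump the task currently running on @cpu: prefer the interrupted register
++ * state when called from hard IRQ context on that CPU, then an NMI
++ * backtrace, and finally a plain sched_show_task() dump.
++ */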
++void dump_cpu_task(int cpu)
++{
++ if (in_hardirq() && cpu == smp_processor_id()) {
++ struct pt_regs *regs;
++
++ regs = get_irq_regs();
++ if (regs) {
++ show_regs(regs);
++ return;
++ }
++ }
++
++ if (trigger_single_cpu_backtrace(cpu))
++ return;
++
++ pr_info("Task dump for CPU %d:\n", cpu);
++ sched_show_task(cpu_curr(cpu));
++}
++
++/**
++ * init_idle - set up an idle thread for a given CPU
++ * @idle: task in question
++ * @cpu: CPU the idle task belongs to
++ *
++ * NOTE: this function does not set the idle thread's NEED_RESCHED
++ * flag, to make booting more robust.
++ */
++void __init init_idle(struct task_struct *idle, int cpu)
++{
++#ifdef CONFIG_SMP
++ struct affinity_context ac = (struct affinity_context) {
++ .new_mask = cpumask_of(cpu),
++ .flags = 0,
++ };
++#endif
++ struct rq *rq = cpu_rq(cpu);
++ unsigned long flags;
++
++ raw_spin_lock_irqsave(&idle->pi_lock, flags);
++ raw_spin_lock(&rq->lock);
++
++ idle->last_ran = rq->clock_task;
++ idle->__state = TASK_RUNNING;
++ /*
++ * PF_KTHREAD should already be set at this point; regardless, make it
++ * look like a proper per-CPU kthread.
++ */
++ idle->flags |= PF_KTHREAD | PF_NO_SETAFFINITY;
++ kthread_set_per_cpu(idle, cpu);
++
++ sched_queue_init_idle(&rq->queue, idle);
++
++#ifdef CONFIG_SMP
++ /*
++ * No validation and serialization required at boot time and for
++ * setting up the idle tasks of not yet online CPUs.
++ */
++ set_cpus_allowed_common(idle, &ac);
++#endif
++
++ /* Silence PROVE_RCU */
++ rcu_read_lock();
++ __set_task_cpu(idle, cpu);
++ rcu_read_unlock();
++
++ rq->idle = idle;
++ rcu_assign_pointer(rq->curr, idle);
++ idle->on_cpu = 1;
++
++ raw_spin_unlock(&rq->lock);
++ raw_spin_unlock_irqrestore(&idle->pi_lock, flags);
++
++ /* Set the preempt count _outside_ the spinlocks! */
++ init_idle_preempt_count(idle, cpu);
++
++ ftrace_graph_init_idle_task(idle, cpu);
++ vtime_init_idle(idle, cpu);
++#ifdef CONFIG_SMP
++ sprintf(idle->comm, "%s/%d", INIT_TASK_COMM, cpu);
++#endif
++}
++
++#ifdef CONFIG_SMP
++
++int cpuset_cpumask_can_shrink(const struct cpumask __maybe_unused *cur,
++ const struct cpumask __maybe_unused *trial)
++{
++ return 1;
++}
++
++int task_can_attach(struct task_struct *p)
++{
++ int ret = 0;
++
++ /*
++ * Kthreads which disallow setaffinity shouldn't be moved
++ * to a new cpuset; we don't want to change their CPU
++ * affinity and isolating such threads by their set of
++ * allowed nodes is unnecessary. Thus, cpusets are not
++ * applicable for such threads. This prevents checking for
++ * success of set_cpus_allowed_ptr() on all attached tasks
++ * before cpus_mask may be changed.
++ */
++ if (p->flags & PF_NO_SETAFFINITY)
++ ret = -EINVAL;
++
++ return ret;
++}
++
++bool sched_smp_initialized __read_mostly;
++
++#ifdef CONFIG_HOTPLUG_CPU
++/*
++ * Invoked on the outgoing CPU in context of the CPU hotplug thread
++ * after ensuring that there are no user space tasks left on the CPU.
++ *
++ * If there is a lazy mm in use on the hotplug thread, drop it and
++ * switch to init_mm.
++ *
++ * The reference count on init_mm is dropped in finish_cpu().
++ */
++static void sched_force_init_mm(void)
++{
++ struct mm_struct *mm = current->active_mm;
++
++ if (mm != &init_mm) {
++ mmgrab_lazy_tlb(&init_mm);
++ local_irq_disable();
++ current->active_mm = &init_mm;
++ switch_mm_irqs_off(mm, &init_mm, current);
++ local_irq_enable();
++ finish_arch_post_lock_switch();
++ mmdrop_lazy_tlb(mm);
++ }
++
++ /* finish_cpu(), as run on the BP, will clean up the active_mm state */
++}
++
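++/*
++ * Stopper callback used by balance_push(): migrate the given task off the
++ * outgoing CPU to a fallback run queue, then drop the reference taken by
++ * the caller.
++ */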
++static int __balance_push_cpu_stop(void *arg)
++{
++ struct task_struct *p = arg;
++ struct rq *rq = this_rq();
++ struct rq_flags rf;
++ int cpu;
++
++ raw_spin_lock_irq(&p->pi_lock);
++ rq_lock(rq, &rf);
++
++ update_rq_clock(rq);
++
++ if (task_rq(p) == rq && task_on_rq_queued(p)) {
++ cpu = select_fallback_rq(rq->cpu, p);
++ rq = __migrate_task(rq, p, cpu);
++ }
++
++ rq_unlock(rq, &rf);
++ raw_spin_unlock_irq(&p->pi_lock);
++
++ put_task_struct(p);
++
++ return 0;
++}
++
++static DEFINE_PER_CPU(struct cpu_stop_work, push_work);
++
++/*
++ * This is enabled below SCHED_AP_ACTIVE, i.e. when !cpu_active(), but it
++ * only takes effect while the CPU is on its way down (hotplug offline).
++ */
++static void balance_push(struct rq *rq)
++{
++ struct task_struct *push_task = rq->curr;
++
++ lockdep_assert_held(&rq->lock);
++
++ /*
++ * Ensure the thing is persistent until balance_push_set(.on = false);
++ */
++ rq->balance_callback = &balance_push_callback;
++
++ /*
++ * Only active while going offline and when invoked on the outgoing
++ * CPU.
++ */
++ if (!cpu_dying(rq->cpu) || rq != this_rq())
++ return;
++
++ /*
++ * Both the cpu-hotplug and stop task are in this case and are
++ * required to complete the hotplug process.
++ */
++ if (kthread_is_per_cpu(push_task) ||
++ is_migration_disabled(push_task)) {
++
++ /*
++ * If this is the idle task on the outgoing CPU try to wake
++ * up the hotplug control thread which might wait for the
++ * last task to vanish. The rcuwait_active() check is
++ * accurate here because the waiter is pinned on this CPU
++ * and can't obviously be running in parallel.
++ *
++ * On RT kernels this also has to check whether there are
++ * pinned and scheduled out tasks on the runqueue. They
++ * need to leave the migrate disabled section first.
++ */
++ if (!rq->nr_running && !rq_has_pinned_tasks(rq) &&
++ rcuwait_active(&rq->hotplug_wait)) {
++ raw_spin_unlock(&rq->lock);
++ rcuwait_wake_up(&rq->hotplug_wait);
++ raw_spin_lock(&rq->lock);
++ }
++ return;
++ }
++
++ get_task_struct(push_task);
++ /*
++ * Temporarily drop rq->lock such that we can wake-up the stop task.
++ * Both preemption and IRQs are still disabled.
++ */
++ preempt_disable();
++ raw_spin_unlock(&rq->lock);
++ stop_one_cpu_nowait(rq->cpu, __balance_push_cpu_stop, push_task,
++ this_cpu_ptr(&push_work));
++ preempt_enable();
++ /*
++ * At this point need_resched() is true and we'll take the loop in
++ * schedule(). The next pick is obviously going to be the stop task
++ * which kthread_is_per_cpu() and will push this task away.
++ */
++ raw_spin_lock(&rq->lock);
++}
++
++static void balance_push_set(int cpu, bool on)
++{
++ struct rq *rq = cpu_rq(cpu);
++ struct rq_flags rf;
++
++ rq_lock_irqsave(rq, &rf);
++ if (on) {
++ WARN_ON_ONCE(rq->balance_callback);
++ rq->balance_callback = &balance_push_callback;
++ } else if (rq->balance_callback == &balance_push_callback) {
++ rq->balance_callback = NULL;
++ }
++ rq_unlock_irqrestore(rq, &rf);
++}
++
++/*
++ * Invoked from a CPU's hotplug control thread after the CPU has been marked
++ * inactive. All tasks which are not per CPU kernel threads are either
++ * pushed off this CPU now via balance_push() or placed on a different CPU
++ * during wakeup. Wait until the CPU is quiescent.
++ */
++static void balance_hotplug_wait(void)
++{
++ struct rq *rq = this_rq();
++
++ rcuwait_wait_event(&rq->hotplug_wait,
++ rq->nr_running == 1 && !rq_has_pinned_tasks(rq),
++ TASK_UNINTERRUPTIBLE);
++}
++
++#else
++
++static void balance_push(struct rq *rq)
++{
++}
++
++static void balance_push_set(int cpu, bool on)
++{
++}
++
++static inline void balance_hotplug_wait(void)
++{
++}
++#endif /* CONFIG_HOTPLUG_CPU */
++
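++/*
++ * Track whether a run queue is usable: rq->online gates, among other
++ * things, the balance callback invoked from choose_next_task().
++ */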
++static void set_rq_offline(struct rq *rq)
++{
++ if (rq->online) {
++ update_rq_clock(rq);
++ rq->online = false;
++ }
++}
++
++static void set_rq_online(struct rq *rq)
++{
++ if (!rq->online)
++ rq->online = true;
++}
++
++static inline void sched_set_rq_online(struct rq *rq, int cpu)
++{
++ unsigned long flags;
++
++ raw_spin_lock_irqsave(&rq->lock, flags);
++ set_rq_online(rq);
++ raw_spin_unlock_irqrestore(&rq->lock, flags);
++}
++
++static inline void sched_set_rq_offline(struct rq *rq, int cpu)
++{
++ unsigned long flags;
++
++ raw_spin_lock_irqsave(&rq->lock, flags);
++ set_rq_offline(rq);
++ raw_spin_unlock_irqrestore(&rq->lock, flags);
++}
++
++/*
++ * used to mark begin/end of suspend/resume:
++ */
++static int num_cpus_frozen;
++
++/*
++ * Update cpusets according to cpu_active mask. If cpusets are
++ * disabled, cpuset_update_active_cpus() becomes a simple wrapper
++ * around partition_sched_domains().
++ *
++ * If we come here as part of a suspend/resume, don't touch cpusets because we
++ * want to restore it back to its original state upon resume anyway.
++ */
++static void cpuset_cpu_active(void)
++{
++ if (cpuhp_tasks_frozen) {
++ /*
++ * num_cpus_frozen tracks how many CPUs are involved in suspend
++ * resume sequence. As long as this is not the last online
++ * operation in the resume sequence, just build a single sched
++ * domain, ignoring cpusets.
++ */
++ partition_sched_domains(1, NULL, NULL);
++ if (--num_cpus_frozen)
++ return;
++ /*
++ * This is the last CPU online operation. So fall through and
++ * restore the original sched domains by considering the
++ * cpuset configurations.
++ */
++ cpuset_force_rebuild();
++ }
++
++ cpuset_update_active_cpus();
++}
++
++static void cpuset_cpu_inactive(unsigned int cpu)
++{
++ if (!cpuhp_tasks_frozen) {
++ cpuset_update_active_cpus();
++ } else {
++ num_cpus_frozen++;
++ partition_sched_domains(1, NULL, NULL);
++ }
++}
++
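++/*
++ * SMT hotplug bookkeeping: the sched_smt_present static key counts cores
++ * that have a second SMT sibling online, and sched_smt_mask mirrors the
++ * sibling masks of those cores.
++ */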
++static inline void sched_smt_present_inc(int cpu)
++{
++#ifdef CONFIG_SCHED_SMT
++ if (cpumask_weight(cpu_smt_mask(cpu)) == 2) {
++ static_branch_inc_cpuslocked(&sched_smt_present);
++ cpumask_or(&sched_smt_mask, &sched_smt_mask, cpu_smt_mask(cpu));
++ }
++#endif
++}
++
++static inline void sched_smt_present_dec(int cpu)
++{
++#ifdef CONFIG_SCHED_SMT
++ if (cpumask_weight(cpu_smt_mask(cpu)) == 2) {
++ static_branch_dec_cpuslocked(&sched_smt_present);
++ if (!static_branch_likely(&sched_smt_present))
++ cpumask_clear(sched_pcore_idle_mask);
++ cpumask_andnot(&sched_smt_mask, &sched_smt_mask, cpu_smt_mask(cpu));
++ }
++#endif
++}
++
++int sched_cpu_activate(unsigned int cpu)
++{
++ struct rq *rq = cpu_rq(cpu);
++
++ /*
++ * Clear the balance_push callback and prepare to schedule
++ * regular tasks.
++ */
++ balance_push_set(cpu, false);
++
++ set_cpu_active(cpu, true);
++
++ if (sched_smp_initialized)
++ cpuset_cpu_active();
++
++ /*
++ * Put the rq online, if not already. This happens:
++ *
++ * 1) In the early boot process, because we build the real domains
++ * after all cpus have been brought up.
++ *
++ * 2) At runtime, if cpuset_cpu_active() fails to rebuild the
++ * domains.
++ */
++ sched_set_rq_online(rq, cpu);
++
++ /*
++ * When going up, increment the number of cores with SMT present.
++ */
++ sched_smt_present_inc(cpu);
++
++ return 0;
++}
++
++int sched_cpu_deactivate(unsigned int cpu)
++{
++ struct rq *rq = cpu_rq(cpu);
++
++ set_cpu_active(cpu, false);
++
++ /*
++ * From this point forward, this CPU will refuse to run any task that
++ * is not: migrate_disable() or KTHREAD_IS_PER_CPU, and will actively
++ * push those tasks away until this gets cleared, see
++ * sched_cpu_dying().
++ */
++ balance_push_set(cpu, true);
++
++ /*
++ * We've cleared cpu_active_mask, wait for all preempt-disabled and RCU
++ * users of this state to go away such that all new such users will
++ * observe it.
++ *
++ * Specifically, we rely on ttwu to no longer target this CPU, see
++ * ttwu_queue_cond() and is_cpu_allowed().
++ *
++ * Do sync before park smpboot threads to take care the RCU boost case.
++ */
++ synchronize_rcu();
++
++ sched_set_rq_offline(rq, cpu);
++
++ /*
++ * When going down, decrement the number of cores with SMT present.
++ */
++ sched_smt_present_dec(cpu);
++
++ if (!sched_smp_initialized)
++ return 0;
++
++ cpuset_cpu_inactive(cpu);
++
++ return 0;
++}
++
++static void sched_rq_cpu_starting(unsigned int cpu)
++{
++ struct rq *rq = cpu_rq(cpu);
++
++ rq->calc_load_update = calc_load_update;
++}
++
++int sched_cpu_starting(unsigned int cpu)
++{
++ sched_rq_cpu_starting(cpu);
++ sched_tick_start(cpu);
++ return 0;
++}
++
++#ifdef CONFIG_HOTPLUG_CPU
++
++/*
++ * Invoked immediately before the stopper thread is invoked to bring the
++ * CPU down completely. At this point all per CPU kthreads except the
++ * hotplug thread (current) and the stopper thread (inactive) have been
++ * either parked or have been unbound from the outgoing CPU. Ensure that
++ * any of those which might be on the way out are gone.
++ *
++ * If after this point a bound task is being woken on this CPU then the
++ * responsible hotplug callback has failed to do its job.
++ * sched_cpu_dying() will catch it with the appropriate fireworks.
++ */
++int sched_cpu_wait_empty(unsigned int cpu)
++{
++ balance_hotplug_wait();
++ sched_force_init_mm();
++ return 0;
++}
++
++/*
++ * Since this CPU is going 'away' for a while, fold any nr_active delta we
++ * might have. Called from the CPU stopper task after ensuring that the
++ * stopper is the last running task on the CPU, so nr_active count is
++ * stable. We need to take the tear-down thread which is calling this into
++ * account, so we hand in adjust = 1 to the load calculation.
++ *
++ * Also see the comment "Global load-average calculations".
++ */
++static void calc_load_migrate(struct rq *rq)
++{
++ long delta = calc_load_fold_active(rq, 1);
++
++ if (delta)
++ atomic_long_add(delta, &calc_load_tasks);
++}
++
++static void dump_rq_tasks(struct rq *rq, const char *loglvl)
++{
++ struct task_struct *g, *p;
++ int cpu = cpu_of(rq);
++
++ lockdep_assert_held(&rq->lock);
++
++ printk("%sCPU%d enqueued tasks (%u total):\n", loglvl, cpu, rq->nr_running);
++ for_each_process_thread(g, p) {
++ if (task_cpu(p) != cpu)
++ continue;
++
++ if (!task_on_rq_queued(p))
++ continue;
++
++ printk("%s\tpid: %d, name: %s\n", loglvl, p->pid, p->comm);
++ }
++}
++
++int sched_cpu_dying(unsigned int cpu)
++{
++ struct rq *rq = cpu_rq(cpu);
++ unsigned long flags;
++
++ /* Handle pending wakeups and then migrate everything off */
++ sched_tick_stop(cpu);
++
++ raw_spin_lock_irqsave(&rq->lock, flags);
++ if (rq->nr_running != 1 || rq_has_pinned_tasks(rq)) {
++ WARN(true, "Dying CPU not properly vacated!");
++ dump_rq_tasks(rq, KERN_WARNING);
++ }
++ raw_spin_unlock_irqrestore(&rq->lock, flags);
++
++ calc_load_migrate(rq);
++ hrtick_clear(rq);
++ return 0;
++}
++#endif
++
++#ifdef CONFIG_SMP
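++/*
++ * Boot-time default: until the real topology is known, each CPU's LLC mask
++ * points at a single slot covering all possible CPUs.
++ */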
++static void sched_init_topology_cpumask_early(void)
++{
++ int cpu;
++ cpumask_t *tmp;
++
++ for_each_possible_cpu(cpu) {
++ /* init topo masks */
++ tmp = per_cpu(sched_cpu_topo_masks, cpu);
++
++ cpumask_copy(tmp, cpu_possible_mask);
++ per_cpu(sched_cpu_llc_mask, cpu) = tmp;
++ per_cpu(sched_cpu_topo_end_mask, cpu) = ++tmp;
++ }
++}
++
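++/*
++ * Helper for sched_init_topology_cpumask(): at each topology level, if the
++ * level's mask adds CPUs beyond the previous level (the slot at the cursor
++ * holds the complement of the last recorded mask), store the level's mask
++ * and advance the cursor; unless this is the last level, prime the slot at
++ * the cursor with the complement of this level's mask for the next step.
++ */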
++#define TOPOLOGY_CPUMASK(name, mask, last)\
++ if (cpumask_and(topo, topo, mask)) { \
++ cpumask_copy(topo, mask); \
++ printk(KERN_INFO "sched: cpu#%02d topo: 0x%08lx - "#name, \
++ cpu, (topo++)->bits[0]); \
++ } \
++ if (!last) \
++ bitmap_complement(cpumask_bits(topo), cpumask_bits(mask), \
++ nr_cpumask_bits);
++
++static void sched_init_topology_cpumask(void)
++{
++ int cpu;
++ cpumask_t *topo;
++
++ for_each_online_cpu(cpu) {
++ topo = per_cpu(sched_cpu_topo_masks, cpu);
++
++ bitmap_complement(cpumask_bits(topo), cpumask_bits(cpumask_of(cpu)),
++ nr_cpumask_bits);
++#ifdef CONFIG_SCHED_SMT
++ TOPOLOGY_CPUMASK(smt, topology_sibling_cpumask(cpu), false);
++#endif
++ TOPOLOGY_CPUMASK(cluster, topology_cluster_cpumask(cpu), false);
++
++ per_cpu(sd_llc_id, cpu) = cpumask_first(cpu_coregroup_mask(cpu));
++ per_cpu(sched_cpu_llc_mask, cpu) = topo;
++ TOPOLOGY_CPUMASK(coregroup, cpu_coregroup_mask(cpu), false);
++
++ TOPOLOGY_CPUMASK(core, topology_core_cpumask(cpu), false);
++
++ TOPOLOGY_CPUMASK(others, cpu_online_mask, true);
++
++ per_cpu(sched_cpu_topo_end_mask, cpu) = topo;
++ printk(KERN_INFO "sched: cpu#%02d llc_id = %d, llc_mask idx = %d\n",
++ cpu, per_cpu(sd_llc_id, cpu),
++ (int) (per_cpu(sched_cpu_llc_mask, cpu) -
++ per_cpu(sched_cpu_topo_masks, cpu)));
++ }
++}
++#endif
++
++void __init sched_init_smp(void)
++{
++ /* Move init over to a non-isolated CPU */
++ if (set_cpus_allowed_ptr(current, housekeeping_cpumask(HK_TYPE_DOMAIN)) < 0)
++ BUG();
++ current->flags &= ~PF_NO_SETAFFINITY;
++
++ sched_init_topology();
++ sched_init_topology_cpumask();
++
++ sched_smp_initialized = true;
++}
++
++static int __init migration_init(void)
++{
++ sched_cpu_starting(smp_processor_id());
++ return 0;
++}
++early_initcall(migration_init);
++
++#else
++void __init sched_init_smp(void)
++{
++ cpu_rq(0)->idle->time_slice = sysctl_sched_base_slice;
++}
++#endif /* CONFIG_SMP */
++
++int in_sched_functions(unsigned long addr)
++{
++ return in_lock_functions(addr) ||
++ (addr >= (unsigned long)__sched_text_start
++ && addr < (unsigned long)__sched_text_end);
++}
++
++#ifdef CONFIG_CGROUP_SCHED
++/*
++ * Default task group.
++ * Every task in system belongs to this group at bootup.
++ */
++struct task_group root_task_group;
++LIST_HEAD(task_groups);
++
++/* Cacheline aligned slab cache for task_group */
++static struct kmem_cache *task_group_cache __ro_after_init;
++#endif /* CONFIG_CGROUP_SCHED */
++
++void __init sched_init(void)
++{
++ int i;
++ struct rq *rq;
++
++ printk(KERN_INFO "sched/alt: "ALT_SCHED_NAME" CPU Scheduler "ALT_SCHED_VERSION\
++ " by Alfred Chen.\n");
++
++ wait_bit_init();
++
++#ifdef CONFIG_SMP
++ for (i = 0; i < SCHED_QUEUE_BITS; i++)
++ cpumask_copy(sched_preempt_mask + i, cpu_present_mask);
++#endif
++
++#ifdef CONFIG_CGROUP_SCHED
++ task_group_cache = KMEM_CACHE(task_group, 0);
++
++ list_add(&root_task_group.list, &task_groups);
++ INIT_LIST_HEAD(&root_task_group.children);
++ INIT_LIST_HEAD(&root_task_group.siblings);
++#endif /* CONFIG_CGROUP_SCHED */
++ for_each_possible_cpu(i) {
++ rq = cpu_rq(i);
++
++ sched_queue_init(&rq->queue);
++ rq->prio = IDLE_TASK_SCHED_PRIO;
++#ifdef CONFIG_SCHED_PDS
++ rq->prio_idx = rq->prio;
++#endif
++
++ raw_spin_lock_init(&rq->lock);
++ rq->nr_running = rq->nr_uninterruptible = 0;
++ rq->calc_load_active = 0;
++ rq->calc_load_update = jiffies + LOAD_FREQ;
++#ifdef CONFIG_SMP
++ rq->online = false;
++ rq->cpu = i;
++
++ rq->clear_idle_mask_func = cpumask_clear_cpu;
++ rq->set_idle_mask_func = cpumask_set_cpu;
++ rq->balance_func = NULL;
++ rq->active_balance_arg.active = 0;
++
++#ifdef CONFIG_NO_HZ_COMMON
++ INIT_CSD(&rq->nohz_csd, nohz_csd_func, rq);
++#endif
++ rq->balance_callback = &balance_push_callback;
++#ifdef CONFIG_HOTPLUG_CPU
++ rcuwait_init(&rq->hotplug_wait);
++#endif
++#endif /* CONFIG_SMP */
++ rq->nr_switches = 0;
++
++ hrtick_rq_init(rq);
++ atomic_set(&rq->nr_iowait, 0);
++
++ zalloc_cpumask_var_node(&rq->scratch_mask, GFP_KERNEL, cpu_to_node(i));
++ }
++#ifdef CONFIG_SMP
++ /* Set rq->online for cpu 0 */
++ cpu_rq(0)->online = true;
++#endif
++ /*
++ * The boot idle thread does lazy MMU switching as well:
++ */
++ mmgrab(&init_mm);
++ enter_lazy_tlb(&init_mm, current);
++
++ /*
++ * The idle task doesn't need the kthread struct to function, but it
++ * is dressed up as a per-CPU kthread and thus needs to play the part
++ * if we want to avoid special-casing it in code that deals with per-CPU
++ * kthreads.
++ */
++ WARN_ON(!set_kthread_struct(current));
++
++ /*
++ * Make us the idle thread. Technically, schedule() should not be
++ * called from this thread, however somewhere below it might be,
++ * but because we are the idle thread, we just pick up running again
++ * when this runqueue becomes "idle".
++ */
++ __sched_fork(0, current);
++ init_idle(current, smp_processor_id());
++
++ calc_load_update = jiffies + LOAD_FREQ;
++
++#ifdef CONFIG_SMP
++ idle_thread_set_boot_cpu();
++ balance_push_set(smp_processor_id(), false);
++
++ sched_init_topology_cpumask_early();
++#endif /* SMP */
++
++ preempt_dynamic_init();
++}
++
++#ifdef CONFIG_DEBUG_ATOMIC_SLEEP
++
++void __might_sleep(const char *file, int line)
++{
++ unsigned int state = get_current_state();
++ /*
++ * Blocking primitives will set (and therefore destroy) current->state,
++ * since we will exit with TASK_RUNNING make sure we enter with it,
++ * otherwise we will destroy state.
++ */
++ WARN_ONCE(state != TASK_RUNNING && current->task_state_change,
++ "do not call blocking ops when !TASK_RUNNING; "
++ "state=%x set at [<%p>] %pS\n", state,
++ (void *)current->task_state_change,
++ (void *)current->task_state_change);
++
++ __might_resched(file, line, 0);
++}
++EXPORT_SYMBOL(__might_sleep);
++
++static void print_preempt_disable_ip(int preempt_offset, unsigned long ip)
++{
++ if (!IS_ENABLED(CONFIG_DEBUG_PREEMPT))
++ return;
++
++ if (preempt_count() == preempt_offset)
++ return;
++
++ pr_err("Preemption disabled at:");
++ print_ip_sym(KERN_ERR, ip);
++}
++
++static inline bool resched_offsets_ok(unsigned int offsets)
++{
++ unsigned int nested = preempt_count();
++
++ nested += rcu_preempt_depth() << MIGHT_RESCHED_RCU_SHIFT;
++
++ return nested == offsets;
++}
++
++void __might_resched(const char *file, int line, unsigned int offsets)
++{
++ /* Ratelimiting timestamp: */
++ static unsigned long prev_jiffy;
++
++ unsigned long preempt_disable_ip;
++
++ /* WARN_ON_ONCE() by default, no rate limit required: */
++ rcu_sleep_check();
++
++ if ((resched_offsets_ok(offsets) && !irqs_disabled() &&
++ !is_idle_task(current) && !current->non_block_count) ||
++ system_state == SYSTEM_BOOTING || system_state > SYSTEM_RUNNING ||
++ oops_in_progress)
++ return;
++ if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
++ return;
++ prev_jiffy = jiffies;
++
++ /* Save this before calling printk(), since that will clobber it: */
++ preempt_disable_ip = get_preempt_disable_ip(current);
++
++ pr_err("BUG: sleeping function called from invalid context at %s:%d\n",
++ file, line);
++ pr_err("in_atomic(): %d, irqs_disabled(): %d, non_block: %d, pid: %d, name: %s\n",
++ in_atomic(), irqs_disabled(), current->non_block_count,
++ current->pid, current->comm);
++ pr_err("preempt_count: %x, expected: %x\n", preempt_count(),
++ offsets & MIGHT_RESCHED_PREEMPT_MASK);
++
++ if (IS_ENABLED(CONFIG_PREEMPT_RCU)) {
++ pr_err("RCU nest depth: %d, expected: %u\n",
++ rcu_preempt_depth(), offsets >> MIGHT_RESCHED_RCU_SHIFT);
++ }
++
++ if (task_stack_end_corrupted(current))
++ pr_emerg("Thread overran stack, or stack corrupted\n");
++
++ debug_show_held_locks(current);
++ if (irqs_disabled())
++ print_irqtrace_events(current);
++
++ print_preempt_disable_ip(offsets & MIGHT_RESCHED_PREEMPT_MASK,
++ preempt_disable_ip);
++
++ dump_stack();
++ add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++}
++EXPORT_SYMBOL(__might_resched);
++
++void __cant_sleep(const char *file, int line, int preempt_offset)
++{
++ static unsigned long prev_jiffy;
++
++ if (irqs_disabled())
++ return;
++
++ if (!IS_ENABLED(CONFIG_PREEMPT_COUNT))
++ return;
++
++ if (preempt_count() > preempt_offset)
++ return;
++
++ if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
++ return;
++ prev_jiffy = jiffies;
++
++ printk(KERN_ERR "BUG: assuming atomic context at %s:%d\n", file, line);
++ printk(KERN_ERR "in_atomic(): %d, irqs_disabled(): %d, pid: %d, name: %s\n",
++ in_atomic(), irqs_disabled(),
++ current->pid, current->comm);
++
++ debug_show_held_locks(current);
++ dump_stack();
++ add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++}
++EXPORT_SYMBOL_GPL(__cant_sleep);
++
++#ifdef CONFIG_SMP
++void __cant_migrate(const char *file, int line)
++{
++ static unsigned long prev_jiffy;
++
++ if (irqs_disabled())
++ return;
++
++ if (is_migration_disabled(current))
++ return;
++
++ if (!IS_ENABLED(CONFIG_PREEMPT_COUNT))
++ return;
++
++ if (preempt_count() > 0)
++ return;
++
++ if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
++ return;
++ prev_jiffy = jiffies;
++
++ pr_err("BUG: assuming non migratable context at %s:%d\n", file, line);
++ pr_err("in_atomic(): %d, irqs_disabled(): %d, migration_disabled() %u pid: %d, name: %s\n",
++ in_atomic(), irqs_disabled(), is_migration_disabled(current),
++ current->pid, current->comm);
++
++ debug_show_held_locks(current);
++ dump_stack();
++ add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++}
++EXPORT_SYMBOL_GPL(__cant_migrate);
++#endif
++#endif
++
++#ifdef CONFIG_MAGIC_SYSRQ
++void normalize_rt_tasks(void)
++{
++ struct task_struct *g, *p;
++ struct sched_attr attr = {
++ .sched_policy = SCHED_NORMAL,
++ };
++
++ read_lock(&tasklist_lock);
++ for_each_process_thread(g, p) {
++ /*
++ * Only normalize user tasks:
++ */
++ if (p->flags & PF_KTHREAD)
++ continue;
++
++ schedstat_set(p->stats.wait_start, 0);
++ schedstat_set(p->stats.sleep_start, 0);
++ schedstat_set(p->stats.block_start, 0);
++
++ if (!rt_or_dl_task(p)) {
++ /*
++ * Renice negative nice level userspace
++ * tasks back to 0:
++ */
++ if (task_nice(p) < 0)
++ set_user_nice(p, 0);
++ continue;
++ }
++
++ __sched_setscheduler(p, &attr, false, false);
++ }
++ read_unlock(&tasklist_lock);
++}
++#endif /* CONFIG_MAGIC_SYSRQ */
++
++#if defined(CONFIG_KGDB_KDB)
++/*
++ * These functions are only useful for KDB.
++ *
++ * They can only be called when the whole system has been
++ * stopped - every CPU needs to be quiescent, and no scheduling
++ * activity can take place. Using them for anything else would
++ * be a serious bug, and as a result, they aren't even visible
++ * under any other configuration.
++ */
++
++/**
++ * curr_task - return the current task for a given CPU.
++ * @cpu: the processor in question.
++ *
++ * ONLY VALID WHEN THE WHOLE SYSTEM IS STOPPED!
++ *
++ * Return: The current task for @cpu.
++ */
++struct task_struct *curr_task(int cpu)
++{
++ return cpu_curr(cpu);
++}
++
++#endif /* defined(CONFIG_KGDB_KDB) */
++
++#ifdef CONFIG_CGROUP_SCHED
++static void sched_free_group(struct task_group *tg)
++{
++ kmem_cache_free(task_group_cache, tg);
++}
++
++static void sched_free_group_rcu(struct rcu_head *rhp)
++{
++ sched_free_group(container_of(rhp, struct task_group, rcu));
++}
++
++static void sched_unregister_group(struct task_group *tg)
++{
++ /*
++ * We have to wait for yet another RCU grace period to expire, as
++ * print_cfs_stats() might run concurrently.
++ */
++ call_rcu(&tg->rcu, sched_free_group_rcu);
++}
++
++/* allocate runqueue etc for a new task group */
++struct task_group *sched_create_group(struct task_group *parent)
++{
++ struct task_group *tg;
++
++ tg = kmem_cache_alloc(task_group_cache, GFP_KERNEL | __GFP_ZERO);
++ if (!tg)
++ return ERR_PTR(-ENOMEM);
++
++ return tg;
++}
++
++void sched_online_group(struct task_group *tg, struct task_group *parent)
++{
++}
++
++/* RCU callback to free various structures associated with a task group */
++static void sched_unregister_group_rcu(struct rcu_head *rhp)
++{
++ /* Now it should be safe to free those cfs_rqs: */
++ sched_unregister_group(container_of(rhp, struct task_group, rcu));
++}
++
++void sched_destroy_group(struct task_group *tg)
++{
++ /* Wait for possible concurrent references to cfs_rqs complete: */
++ call_rcu(&tg->rcu, sched_unregister_group_rcu);
++}
++
++void sched_release_group(struct task_group *tg)
++{
++}
++
++static inline struct task_group *css_tg(struct cgroup_subsys_state *css)
++{
++ return css ? container_of(css, struct task_group, css) : NULL;
++}
++
++static struct cgroup_subsys_state *
++cpu_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
++{
++ struct task_group *parent = css_tg(parent_css);
++ struct task_group *tg;
++
++ if (!parent) {
++ /* This is early initialization for the top cgroup */
++ return &root_task_group.css;
++ }
++
++ tg = sched_create_group(parent);
++ if (IS_ERR(tg))
++ return ERR_PTR(-ENOMEM);
++ return &tg->css;
++}
++
++/* Expose task group only after completing cgroup initialization */
++static int cpu_cgroup_css_online(struct cgroup_subsys_state *css)
++{
++ struct task_group *tg = css_tg(css);
++ struct task_group *parent = css_tg(css->parent);
++
++ if (parent)
++ sched_online_group(tg, parent);
++ return 0;
++}
++
++static void cpu_cgroup_css_released(struct cgroup_subsys_state *css)
++{
++ struct task_group *tg = css_tg(css);
++
++ sched_release_group(tg);
++}
++
++static void cpu_cgroup_css_free(struct cgroup_subsys_state *css)
++{
++ struct task_group *tg = css_tg(css);
++
++ /*
++ * Relies on the RCU grace period between css_released() and this.
++ */
++ sched_unregister_group(tg);
++}
++
++#ifdef CONFIG_RT_GROUP_SCHED
++static int cpu_cgroup_can_attach(struct cgroup_taskset *tset)
++{
++ return 0;
++}
++#endif
++
++static void cpu_cgroup_attach(struct cgroup_taskset *tset)
++{
++}
++
++#ifdef CONFIG_GROUP_SCHED_WEIGHT
++static int sched_group_set_shares(struct task_group *tg, unsigned long shares)
++{
++ return 0;
++}
++
++static int sched_group_set_idle(struct task_group *tg, long idle)
++{
++ return 0;
++}
++
++static int cpu_shares_write_u64(struct cgroup_subsys_state *css,
++ struct cftype *cftype, u64 shareval)
++{
++ return sched_group_set_shares(css_tg(css), shareval);
++}
++
++static u64 cpu_shares_read_u64(struct cgroup_subsys_state *css,
++ struct cftype *cft)
++{
++ return 0;
++}
++
++static s64 cpu_idle_read_s64(struct cgroup_subsys_state *css,
++ struct cftype *cft)
++{
++ return 0;
++}
++
++static int cpu_idle_write_s64(struct cgroup_subsys_state *css,
++ struct cftype *cft, s64 idle)
++{
++ return sched_group_set_idle(css_tg(css), idle);
++}
++#endif
++
++#ifdef CONFIG_CFS_BANDWIDTH
++static s64 cpu_cfs_quota_read_s64(struct cgroup_subsys_state *css,
++ struct cftype *cft)
++{
++ return 0;
++}
++
++static int cpu_cfs_quota_write_s64(struct cgroup_subsys_state *css,
++ struct cftype *cftype, s64 cfs_quota_us)
++{
++ return 0;
++}
++
++static u64 cpu_cfs_period_read_u64(struct cgroup_subsys_state *css,
++ struct cftype *cft)
++{
++ return 0;
++}
++
++static int cpu_cfs_period_write_u64(struct cgroup_subsys_state *css,
++ struct cftype *cftype, u64 cfs_period_us)
++{
++ return 0;
++}
++
++static u64 cpu_cfs_burst_read_u64(struct cgroup_subsys_state *css,
++ struct cftype *cft)
++{
++ return 0;
++}
++
++static int cpu_cfs_burst_write_u64(struct cgroup_subsys_state *css,
++ struct cftype *cftype, u64 cfs_burst_us)
++{
++ return 0;
++}
++
++static int cpu_cfs_stat_show(struct seq_file *sf, void *v)
++{
++ return 0;
++}
++
++static int cpu_cfs_local_stat_show(struct seq_file *sf, void *v)
++{
++ return 0;
++}
++#endif
++
++#ifdef CONFIG_RT_GROUP_SCHED
++static int cpu_rt_runtime_write(struct cgroup_subsys_state *css,
++ struct cftype *cft, s64 val)
++{
++ return 0;
++}
++
++static s64 cpu_rt_runtime_read(struct cgroup_subsys_state *css,
++ struct cftype *cft)
++{
++ return 0;
++}
++
++static int cpu_rt_period_write_uint(struct cgroup_subsys_state *css,
++ struct cftype *cftype, u64 rt_period_us)
++{
++ return 0;
++}
++
++static u64 cpu_rt_period_read_uint(struct cgroup_subsys_state *css,
++ struct cftype *cft)
++{
++ return 0;
++}
++#endif
++
++#ifdef CONFIG_UCLAMP_TASK_GROUP
++static int cpu_uclamp_min_show(struct seq_file *sf, void *v)
++{
++ return 0;
++}
++
++static int cpu_uclamp_max_show(struct seq_file *sf, void *v)
++{
++ return 0;
++}
++
++static ssize_t cpu_uclamp_min_write(struct kernfs_open_file *of,
++ char *buf, size_t nbytes,
++ loff_t off)
++{
++ return nbytes;
++}
++
++static ssize_t cpu_uclamp_max_write(struct kernfs_open_file *of,
++ char *buf, size_t nbytes,
++ loff_t off)
++{
++ return nbytes;
++}
++#endif
++
++static struct cftype cpu_legacy_files[] = {
++#ifdef CONFIG_GROUP_SCHED_WEIGHT
++ {
++ .name = "shares",
++ .read_u64 = cpu_shares_read_u64,
++ .write_u64 = cpu_shares_write_u64,
++ },
++ {
++ .name = "idle",
++ .read_s64 = cpu_idle_read_s64,
++ .write_s64 = cpu_idle_write_s64,
++ },
++#endif
++#ifdef CONFIG_CFS_BANDWIDTH
++ {
++ .name = "cfs_quota_us",
++ .read_s64 = cpu_cfs_quota_read_s64,
++ .write_s64 = cpu_cfs_quota_write_s64,
++ },
++ {
++ .name = "cfs_period_us",
++ .read_u64 = cpu_cfs_period_read_u64,
++ .write_u64 = cpu_cfs_period_write_u64,
++ },
++ {
++ .name = "cfs_burst_us",
++ .read_u64 = cpu_cfs_burst_read_u64,
++ .write_u64 = cpu_cfs_burst_write_u64,
++ },
++ {
++ .name = "stat",
++ .seq_show = cpu_cfs_stat_show,
++ },
++ {
++ .name = "stat.local",
++ .seq_show = cpu_cfs_local_stat_show,
++ },
++#endif
++#ifdef CONFIG_RT_GROUP_SCHED
++ {
++ .name = "rt_runtime_us",
++ .read_s64 = cpu_rt_runtime_read,
++ .write_s64 = cpu_rt_runtime_write,
++ },
++ {
++ .name = "rt_period_us",
++ .read_u64 = cpu_rt_period_read_uint,
++ .write_u64 = cpu_rt_period_write_uint,
++ },
++#endif
++#ifdef CONFIG_UCLAMP_TASK_GROUP
++ {
++ .name = "uclamp.min",
++ .flags = CFTYPE_NOT_ON_ROOT,
++ .seq_show = cpu_uclamp_min_show,
++ .write = cpu_uclamp_min_write,
++ },
++ {
++ .name = "uclamp.max",
++ .flags = CFTYPE_NOT_ON_ROOT,
++ .seq_show = cpu_uclamp_max_show,
++ .write = cpu_uclamp_max_write,
++ },
++#endif
++ { } /* Terminate */
++};
++
++#ifdef CONFIG_GROUP_SCHED_WEIGHT
++static u64 cpu_weight_read_u64(struct cgroup_subsys_state *css,
++ struct cftype *cft)
++{
++ return 0;
++}
++
++static int cpu_weight_write_u64(struct cgroup_subsys_state *css,
++ struct cftype *cft, u64 weight)
++{
++ return 0;
++}
++
++static s64 cpu_weight_nice_read_s64(struct cgroup_subsys_state *css,
++ struct cftype *cft)
++{
++ return 0;
++}
++
++static int cpu_weight_nice_write_s64(struct cgroup_subsys_state *css,
++ struct cftype *cft, s64 nice)
++{
++ return 0;
++}
++#endif
++
++#ifdef CONFIG_CFS_BANDWIDTH
++static int cpu_max_show(struct seq_file *sf, void *v)
++{
++ return 0;
++}
++
++static ssize_t cpu_max_write(struct kernfs_open_file *of,
++ char *buf, size_t nbytes, loff_t off)
++{
++ return nbytes;
++}
++#endif
++
++static struct cftype cpu_files[] = {
++#ifdef CONFIG_GROUP_SCHED_WEIGHT
++ {
++ .name = "weight",
++ .flags = CFTYPE_NOT_ON_ROOT,
++ .read_u64 = cpu_weight_read_u64,
++ .write_u64 = cpu_weight_write_u64,
++ },
++ {
++ .name = "weight.nice",
++ .flags = CFTYPE_NOT_ON_ROOT,
++ .read_s64 = cpu_weight_nice_read_s64,
++ .write_s64 = cpu_weight_nice_write_s64,
++ },
++ {
++ .name = "idle",
++ .flags = CFTYPE_NOT_ON_ROOT,
++ .read_s64 = cpu_idle_read_s64,
++ .write_s64 = cpu_idle_write_s64,
++ },
++#endif
++#ifdef CONFIG_CFS_BANDWIDTH
++ {
++ .name = "max",
++ .flags = CFTYPE_NOT_ON_ROOT,
++ .seq_show = cpu_max_show,
++ .write = cpu_max_write,
++ },
++ {
++ .name = "max.burst",
++ .flags = CFTYPE_NOT_ON_ROOT,
++ .read_u64 = cpu_cfs_burst_read_u64,
++ .write_u64 = cpu_cfs_burst_write_u64,
++ },
++#endif
++#ifdef CONFIG_UCLAMP_TASK_GROUP
++ {
++ .name = "uclamp.min",
++ .flags = CFTYPE_NOT_ON_ROOT,
++ .seq_show = cpu_uclamp_min_show,
++ .write = cpu_uclamp_min_write,
++ },
++ {
++ .name = "uclamp.max",
++ .flags = CFTYPE_NOT_ON_ROOT,
++ .seq_show = cpu_uclamp_max_show,
++ .write = cpu_uclamp_max_write,
++ },
++#endif
++ { } /* terminate */
++};
++
++static int cpu_extra_stat_show(struct seq_file *sf,
++ struct cgroup_subsys_state *css)
++{
++ return 0;
++}
++
++static int cpu_local_stat_show(struct seq_file *sf,
++ struct cgroup_subsys_state *css)
++{
++ return 0;
++}
++
++struct cgroup_subsys cpu_cgrp_subsys = {
++ .css_alloc = cpu_cgroup_css_alloc,
++ .css_online = cpu_cgroup_css_online,
++ .css_released = cpu_cgroup_css_released,
++ .css_free = cpu_cgroup_css_free,
++ .css_extra_stat_show = cpu_extra_stat_show,
++ .css_local_stat_show = cpu_local_stat_show,
++#ifdef CONFIG_RT_GROUP_SCHED
++ .can_attach = cpu_cgroup_can_attach,
++#endif
++ .attach = cpu_cgroup_attach,
++ .legacy_cftypes = cpu_legacy_files,
++ .dfl_cftypes = cpu_files,
++ .early_init = true,
++ .threaded = true,
++};
++#endif /* CONFIG_CGROUP_SCHED */
++
++#undef CREATE_TRACE_POINTS
++
++#ifdef CONFIG_SCHED_MM_CID
++
++#
++/*
++ * @cid_lock: Guarantee forward-progress of cid allocation.
++ *
++ * Concurrency ID allocation within a bitmap is mostly lock-free. The cid_lock
++ * is only used when contention is detected by the lock-free allocation so
++ * forward progress can be guaranteed.
++ */
++DEFINE_RAW_SPINLOCK(cid_lock);
++
++/*
++ * @use_cid_lock: Select cid allocation behavior: lock-free vs spinlock.
++ *
++ * When @use_cid_lock is 0, the cid allocation is lock-free. When contention is
++ * detected, it is set to 1 to ensure that all newly coming allocations are
++ * serialized by @cid_lock until the allocation which detected contention
++ * completes and sets @use_cid_lock back to 0. This guarantees forward progress
++ * of a cid allocation.
++ */
++int use_cid_lock;
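A hypothetical user-space sketch of the pattern described above (a lock-free fast path, with allocations funneled through a lock only while detected contention is being resolved). try_alloc(), NR_IDS and the other names here are made up for illustration and are not part of this patch:

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define NR_IDS 4			/* made-up pool size */

static atomic_bool used[NR_IDS];
static pthread_mutex_t fallback_lock = PTHREAD_MUTEX_INITIALIZER;
static atomic_int use_fallback_lock;

/* Lock-free attempt: claim the first free slot, or -1 if none was won. */
static int try_alloc(void)
{
	for (int i = 0; i < NR_IDS; i++) {
		bool expected = false;

		if (atomic_compare_exchange_strong(&used[i], &expected, true))
			return i;
	}
	return -1;
}

static int alloc_id(void)
{
	int id;

	if (!atomic_load(&use_fallback_lock)) {
		id = try_alloc();
		if (id >= 0)
			return id;		/* uncontended fast path */
		pthread_mutex_lock(&fallback_lock);
	} else {
		pthread_mutex_lock(&fallback_lock);
		id = try_alloc();
		if (id >= 0)
			goto unlock;
	}
	/* Contention detected: serialize later callers until this one succeeds. */
	atomic_store(&use_fallback_lock, 1);
	do {
		id = try_alloc();
	} while (id < 0);
	atomic_store(&use_fallback_lock, 0);
unlock:
	pthread_mutex_unlock(&fallback_lock);
	return id;
}

int main(void)
{
	printf("got id %d\n", alloc_id());
	return 0;
}
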
++
++/*
++ * mm_cid remote-clear implements a lock-free algorithm to clear per-mm/cpu cid
++ * concurrently with respect to the execution of the source runqueue context
++ * switch.
++ *
++ * There is one basic property we want to guarantee here:
++ *
++ * (1) Remote-clear should _never_ mark a per-cpu cid UNSET when it is actively
++ * used by a task. That would lead to concurrent allocation of the cid and
++ * userspace corruption.
++ *
++ * Provide this guarantee by introducing a Dekker memory ordering to guarantee
++ * that a pair of loads observe at least one of a pair of stores, which can be
++ * shown as:
++ *
++ * X = Y = 0
++ *
++ * w[X]=1 w[Y]=1
++ * MB MB
++ * r[Y]=y r[X]=x
++ *
++ * Which guarantees that x==0 && y==0 is impossible. But rather than using
++ * values 0 and 1, this algorithm cares about specific state transitions of the
++ * runqueue current task (as updated by the scheduler context switch), and the
++ * per-mm/cpu cid value.
++ *
++ * Let's introduce task (Y) which has task->mm == mm and task (N) which has
++ * task->mm != mm for the rest of the discussion. There are two scheduler state
++ * transitions on context switch we care about:
++ *
++ * (TSA) Store to rq->curr with transition from (N) to (Y)
++ *
++ * (TSB) Store to rq->curr with transition from (Y) to (N)
++ *
++ * On the remote-clear side, there is one transition we care about:
++ *
++ * (TMA) cmpxchg to *pcpu_cid to set the LAZY flag
++ *
++ * There is also a transition to UNSET state which can be performed from all
++ * sides (scheduler, remote-clear). It is always performed with a cmpxchg which
++ * guarantees that only a single thread will succeed:
++ *
++ * (TMB) cmpxchg to *pcpu_cid to mark UNSET
++ *
++ * Just to be clear, what we do _not_ want to happen is a transition to UNSET
++ * when a thread is actively using the cid (property (1)).
++ *
++ * Let's look at the relevant combinations of TSA/TSB, and TMA transitions.
++ *
++ * Scenario A) (TSA)+(TMA) (from next task perspective)
++ *
++ * CPU0 CPU1
++ *
++ * Context switch CS-1 Remote-clear
++ * - store to rq->curr: (N)->(Y) (TSA) - cmpxchg to *pcpu_id to LAZY (TMA)
++ * (implied barrier after cmpxchg)
++ * - switch_mm_cid()
++ * - memory barrier (see switch_mm_cid()
++ * comment explaining how this barrier
++ * is combined with other scheduler
++ * barriers)
++ * - mm_cid_get (next)
++ * - READ_ONCE(*pcpu_cid) - rcu_dereference(src_rq->curr)
++ *
++ * This Dekker ensures that either task (Y) is observed by the
++ * rcu_dereference() or the LAZY flag is observed by READ_ONCE(), or both are
++ * observed.
++ *
++ * If task (Y) store is observed by rcu_dereference(), it means that there is
++ * still an active task on the cpu. Remote-clear will therefore not transition
++ * to UNSET, which fulfills property (1).
++ *
++ * If task (Y) is not observed, but the lazy flag is observed by READ_ONCE(),
++ * it will move its state to UNSET, which clears the percpu cid perhaps
++ * uselessly (which is not an issue for correctness). Because task (Y) is not
++ * observed, CPU1 can move ahead to set the state to UNSET. Because moving
++ * state to UNSET is done with a cmpxchg expecting that the old state has the
++ * LAZY flag set, only one thread will successfully UNSET.
++ *
++ * If both states (LAZY flag and task (Y)) are observed, the thread on CPU0
++ * will observe the LAZY flag and transition to UNSET (perhaps uselessly), and
++ * CPU1 will observe task (Y) and do nothing more, which is fine.
++ *
++ * What we are effectively preventing with this Dekker is a scenario where
++ * neither LAZY flag nor store (Y) are observed, which would fail property (1)
++ * because this would UNSET a cid which is actively used.
++ */
++
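The ordering argument above is the classic store-buffering (Dekker) litmus test. A minimal user-space rendering of it with C11 atomics, illustrative only and not part of this patch, where the two seq_cst fences stand in for the scheduler and cmpxchg barriers:

#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int X, Y;
static int r_x, r_y;

static void *cpu0(void *arg)
{
	atomic_store_explicit(&X, 1, memory_order_relaxed);	/* w[X]=1 */
	atomic_thread_fence(memory_order_seq_cst);		/* MB */
	r_y = atomic_load_explicit(&Y, memory_order_relaxed);	/* r[Y]=y */
	return NULL;
}

static void *cpu1(void *arg)
{
	atomic_store_explicit(&Y, 1, memory_order_relaxed);	/* w[Y]=1 */
	atomic_thread_fence(memory_order_seq_cst);		/* MB */
	r_x = atomic_load_explicit(&X, memory_order_relaxed);	/* r[X]=x */
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, cpu0, NULL);
	pthread_create(&b, NULL, cpu1, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	/* The fences guarantee at least one of the stores is observed. */
	assert(!(r_x == 0 && r_y == 0));
	printf("x=%d y=%d\n", r_x, r_y);
	return 0;
}
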
++void sched_mm_cid_migrate_from(struct task_struct *t)
++{
++ t->migrate_from_cpu = task_cpu(t);
++}
++
++static
++int __sched_mm_cid_migrate_from_fetch_cid(struct rq *src_rq,
++ struct task_struct *t,
++ struct mm_cid *src_pcpu_cid)
++{
++ struct mm_struct *mm = t->mm;
++ struct task_struct *src_task;
++ int src_cid, last_mm_cid;
++
++ if (!mm)
++ return -1;
++
++ last_mm_cid = t->last_mm_cid;
++ /*
++ * If the migrated task has no last cid, or if the current
++ * task on src rq uses the cid, it means the source cid does not need
++ * to be moved to the destination cpu.
++ */
++ if (last_mm_cid == -1)
++ return -1;
++ src_cid = READ_ONCE(src_pcpu_cid->cid);
++ if (!mm_cid_is_valid(src_cid) || last_mm_cid != src_cid)
++ return -1;
++
++ /*
++ * If we observe an active task using the mm on this rq, it means we
++ * are not the last task to be migrated from this cpu for this mm, so
++ * there is no need to move src_cid to the destination cpu.
++ */
++ guard(rcu)();
++ src_task = rcu_dereference(src_rq->curr);
++ if (READ_ONCE(src_task->mm_cid_active) && src_task->mm == mm) {
++ t->last_mm_cid = -1;
++ return -1;
++ }
++
++ return src_cid;
++}
++
++static
++int __sched_mm_cid_migrate_from_try_steal_cid(struct rq *src_rq,
++ struct task_struct *t,
++ struct mm_cid *src_pcpu_cid,
++ int src_cid)
++{
++ struct task_struct *src_task;
++ struct mm_struct *mm = t->mm;
++ int lazy_cid;
++
++ if (src_cid == -1)
++ return -1;
++
++ /*
++ * Attempt to clear the source cpu cid to move it to the destination
++ * cpu.
++ */
++ lazy_cid = mm_cid_set_lazy_put(src_cid);
++ if (!try_cmpxchg(&src_pcpu_cid->cid, &src_cid, lazy_cid))
++ return -1;
++
++ /*
++ * The implicit barrier after cmpxchg per-mm/cpu cid before loading
++ * rq->curr->mm matches the scheduler barrier in context_switch()
++ * between store to rq->curr and load of prev and next task's
++ * per-mm/cpu cid.
++ *
++ * The implicit barrier after cmpxchg per-mm/cpu cid before loading
++ * rq->curr->mm_cid_active matches the barrier in
++ * sched_mm_cid_exit_signals(), sched_mm_cid_before_execve(), and
++ * sched_mm_cid_after_execve() between store to t->mm_cid_active and
++ * load of per-mm/cpu cid.
++ */
++
++ /*
++ * If we observe an active task using the mm on this rq after setting
++ * the lazy-put flag, this task will be responsible for transitioning
++ * from lazy-put flag set to MM_CID_UNSET.
++ */
++ scoped_guard (rcu) {
++ src_task = rcu_dereference(src_rq->curr);
++ if (READ_ONCE(src_task->mm_cid_active) && src_task->mm == mm) {
++ /*
++ * We observed an active task for this mm, there is therefore
++ * no point in moving this cid to the destination cpu.
++ */
++ t->last_mm_cid = -1;
++ return -1;
++ }
++ }
++
++ /*
++ * The src_cid is unused, so it can be unset.
++ */
++ if (!try_cmpxchg(&src_pcpu_cid->cid, &lazy_cid, MM_CID_UNSET))
++ return -1;
++ WRITE_ONCE(src_pcpu_cid->recent_cid, MM_CID_UNSET);
++ return src_cid;
++}
++
++/*
++ * Migration to dst cpu. Called with dst_rq lock held.
++ * Interrupts are disabled, which keeps the window of cid ownership without the
++ * source rq lock held small.
++ */
++void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t)
++{
++ struct mm_cid *src_pcpu_cid, *dst_pcpu_cid;
++ struct mm_struct *mm = t->mm;
++ int src_cid, src_cpu;
++ bool dst_cid_is_set;
++ struct rq *src_rq;
++
++ lockdep_assert_rq_held(dst_rq);
++
++ if (!mm)
++ return;
++ src_cpu = t->migrate_from_cpu;
++ if (src_cpu == -1) {
++ t->last_mm_cid = -1;
++ return;
++ }
++ /*
++ * Move the src cid if the dst cid is unset. This keeps id
++ * allocation closest to 0 in cases where few threads migrate around
++ * many CPUs.
++ *
++ * If destination cid or recent cid is already set, we may have
++ * to just clear the src cid to ensure compactness in frequent
++ * migrations scenarios.
++ *
++ * It is not useful to clear the src cid when the number of threads is
++	 * greater than or equal to the number of allowed CPUs, because user-space
++ * can expect that the number of allowed cids can reach the number of
++ * allowed CPUs.
++ */
++ dst_pcpu_cid = per_cpu_ptr(mm->pcpu_cid, cpu_of(dst_rq));
++ dst_cid_is_set = !mm_cid_is_unset(READ_ONCE(dst_pcpu_cid->cid)) ||
++ !mm_cid_is_unset(READ_ONCE(dst_pcpu_cid->recent_cid));
++ if (dst_cid_is_set && atomic_read(&mm->mm_users) >= READ_ONCE(mm->nr_cpus_allowed))
++ return;
++ src_pcpu_cid = per_cpu_ptr(mm->pcpu_cid, src_cpu);
++ src_rq = cpu_rq(src_cpu);
++ src_cid = __sched_mm_cid_migrate_from_fetch_cid(src_rq, t, src_pcpu_cid);
++ if (src_cid == -1)
++ return;
++ src_cid = __sched_mm_cid_migrate_from_try_steal_cid(src_rq, t, src_pcpu_cid,
++ src_cid);
++ if (src_cid == -1)
++ return;
++ if (dst_cid_is_set) {
++ __mm_cid_put(mm, src_cid);
++ return;
++ }
++ /* Move src_cid to dst cpu. */
++ mm_cid_snapshot_time(dst_rq, mm);
++ WRITE_ONCE(dst_pcpu_cid->cid, src_cid);
++ WRITE_ONCE(dst_pcpu_cid->recent_cid, src_cid);
++}
++
++static void sched_mm_cid_remote_clear(struct mm_struct *mm, struct mm_cid *pcpu_cid,
++ int cpu)
++{
++ struct rq *rq = cpu_rq(cpu);
++ struct task_struct *t;
++ int cid, lazy_cid;
++
++ cid = READ_ONCE(pcpu_cid->cid);
++ if (!mm_cid_is_valid(cid))
++ return;
++
++ /*
++ * Clear the cpu cid if it is set to keep cid allocation compact. If
++ * there happens to be other tasks left on the source cpu using this
++ * mm, the next task using this mm will reallocate its cid on context
++ * switch.
++ */
++ lazy_cid = mm_cid_set_lazy_put(cid);
++ if (!try_cmpxchg(&pcpu_cid->cid, &cid, lazy_cid))
++ return;
++
++ /*
++ * The implicit barrier after cmpxchg per-mm/cpu cid before loading
++ * rq->curr->mm matches the scheduler barrier in context_switch()
++ * between store to rq->curr and load of prev and next task's
++ * per-mm/cpu cid.
++ *
++ * The implicit barrier after cmpxchg per-mm/cpu cid before loading
++ * rq->curr->mm_cid_active matches the barrier in
++ * sched_mm_cid_exit_signals(), sched_mm_cid_before_execve(), and
++ * sched_mm_cid_after_execve() between store to t->mm_cid_active and
++ * load of per-mm/cpu cid.
++ */
++
++ /*
++ * If we observe an active task using the mm on this rq after setting
++ * the lazy-put flag, that task will be responsible for transitioning
++ * from lazy-put flag set to MM_CID_UNSET.
++ */
++ scoped_guard (rcu) {
++ t = rcu_dereference(rq->curr);
++ if (READ_ONCE(t->mm_cid_active) && t->mm == mm)
++ return;
++ }
++
++ /*
++ * The cid is unused, so it can be unset.
++ * Disable interrupts to keep the window of cid ownership without rq
++ * lock small.
++ */
++ scoped_guard (irqsave) {
++ if (try_cmpxchg(&pcpu_cid->cid, &lazy_cid, MM_CID_UNSET))
++ __mm_cid_put(mm, cid);
++ }
++}
++
++static void sched_mm_cid_remote_clear_old(struct mm_struct *mm, int cpu)
++{
++ struct rq *rq = cpu_rq(cpu);
++ struct mm_cid *pcpu_cid;
++ struct task_struct *curr;
++ u64 rq_clock;
++
++ /*
++ * rq->clock load is racy on 32-bit but one spurious clear once in a
++ * while is irrelevant.
++ */
++ rq_clock = READ_ONCE(rq->clock);
++ pcpu_cid = per_cpu_ptr(mm->pcpu_cid, cpu);
++
++ /*
++ * In order to take care of infrequently scheduled tasks, bump the time
++ * snapshot associated with this cid if an active task using the mm is
++ * observed on this rq.
++ */
++ scoped_guard (rcu) {
++ curr = rcu_dereference(rq->curr);
++ if (READ_ONCE(curr->mm_cid_active) && curr->mm == mm) {
++ WRITE_ONCE(pcpu_cid->time, rq_clock);
++ return;
++ }
++ }
++
++ if (rq_clock < pcpu_cid->time + SCHED_MM_CID_PERIOD_NS)
++ return;
++ sched_mm_cid_remote_clear(mm, pcpu_cid, cpu);
++}
++
++static void sched_mm_cid_remote_clear_weight(struct mm_struct *mm, int cpu,
++ int weight)
++{
++ struct mm_cid *pcpu_cid;
++ int cid;
++
++ pcpu_cid = per_cpu_ptr(mm->pcpu_cid, cpu);
++ cid = READ_ONCE(pcpu_cid->cid);
++ if (!mm_cid_is_valid(cid) || cid < weight)
++ return;
++ sched_mm_cid_remote_clear(mm, pcpu_cid, cpu);
++}
++
++static void task_mm_cid_work(struct callback_head *work)
++{
++ unsigned long now = jiffies, old_scan, next_scan;
++ struct task_struct *t = current;
++ struct cpumask *cidmask;
++ struct mm_struct *mm;
++ int weight, cpu;
++
++ SCHED_WARN_ON(t != container_of(work, struct task_struct, cid_work));
++
++ work->next = work; /* Prevent double-add */
++ if (t->flags & PF_EXITING)
++ return;
++ mm = t->mm;
++ if (!mm)
++ return;
++ old_scan = READ_ONCE(mm->mm_cid_next_scan);
++ next_scan = now + msecs_to_jiffies(MM_CID_SCAN_DELAY);
++ if (!old_scan) {
++ unsigned long res;
++
++ res = cmpxchg(&mm->mm_cid_next_scan, old_scan, next_scan);
++ if (res != old_scan)
++ old_scan = res;
++ else
++ old_scan = next_scan;
++ }
++ if (time_before(now, old_scan))
++ return;
++ if (!try_cmpxchg(&mm->mm_cid_next_scan, &old_scan, next_scan))
++ return;
++ cidmask = mm_cidmask(mm);
++ /* Clear cids that were not recently used. */
++ for_each_possible_cpu(cpu)
++ sched_mm_cid_remote_clear_old(mm, cpu);
++ weight = cpumask_weight(cidmask);
++ /*
++	 * Clear cids that are greater than or equal to the cidmask weight to
++ * recompact it.
++ */
++ for_each_possible_cpu(cpu)
++ sched_mm_cid_remote_clear_weight(mm, cpu, weight);
++}
++
++void init_sched_mm_cid(struct task_struct *t)
++{
++ struct mm_struct *mm = t->mm;
++ int mm_users = 0;
++
++ if (mm) {
++ mm_users = atomic_read(&mm->mm_users);
++ if (mm_users == 1)
++ mm->mm_cid_next_scan = jiffies + msecs_to_jiffies(MM_CID_SCAN_DELAY);
++ }
++ t->cid_work.next = &t->cid_work; /* Protect against double add */
++ init_task_work(&t->cid_work, task_mm_cid_work);
++}
++
++void task_tick_mm_cid(struct rq *rq, struct task_struct *curr)
++{
++ struct callback_head *work = &curr->cid_work;
++ unsigned long now = jiffies;
++
++ if (!curr->mm || (curr->flags & (PF_EXITING | PF_KTHREAD)) ||
++ work->next != work)
++ return;
++ if (time_before(now, READ_ONCE(curr->mm->mm_cid_next_scan)))
++ return;
++
++ /* No page allocation under rq lock */
++ task_work_add(curr, work, TWA_RESUME);
++}
++
++void sched_mm_cid_exit_signals(struct task_struct *t)
++{
++ struct mm_struct *mm = t->mm;
++ struct rq *rq;
++
++ if (!mm)
++ return;
++
++ preempt_disable();
++ rq = this_rq();
++ guard(rq_lock_irqsave)(rq);
++ preempt_enable_no_resched(); /* holding spinlock */
++ WRITE_ONCE(t->mm_cid_active, 0);
++ /*
++ * Store t->mm_cid_active before loading per-mm/cpu cid.
++ * Matches barrier in sched_mm_cid_remote_clear_old().
++ */
++ smp_mb();
++ mm_cid_put(mm);
++ t->last_mm_cid = t->mm_cid = -1;
++}
++
++void sched_mm_cid_before_execve(struct task_struct *t)
++{
++ struct mm_struct *mm = t->mm;
++ struct rq *rq;
++
++ if (!mm)
++ return;
++
++ preempt_disable();
++ rq = this_rq();
++ guard(rq_lock_irqsave)(rq);
++ preempt_enable_no_resched(); /* holding spinlock */
++ WRITE_ONCE(t->mm_cid_active, 0);
++ /*
++ * Store t->mm_cid_active before loading per-mm/cpu cid.
++ * Matches barrier in sched_mm_cid_remote_clear_old().
++ */
++ smp_mb();
++ mm_cid_put(mm);
++ t->last_mm_cid = t->mm_cid = -1;
++}
++
++void sched_mm_cid_after_execve(struct task_struct *t)
++{
++ struct mm_struct *mm = t->mm;
++ struct rq *rq;
++
++ if (!mm)
++ return;
++
++ preempt_disable();
++ rq = this_rq();
++ scoped_guard (rq_lock_irqsave, rq) {
++ preempt_enable_no_resched(); /* holding spinlock */
++ WRITE_ONCE(t->mm_cid_active, 1);
++ /*
++ * Store t->mm_cid_active before loading per-mm/cpu cid.
++ * Matches barrier in sched_mm_cid_remote_clear_old().
++ */
++ smp_mb();
++ t->last_mm_cid = t->mm_cid = mm_cid_get(rq, t, mm);
++ }
++ rseq_set_notify_resume(t);
++}
++
++void sched_mm_cid_fork(struct task_struct *t)
++{
++ WARN_ON_ONCE(!t->mm || t->mm_cid != -1);
++ t->mm_cid_active = 1;
++}
++#endif
+diff --git a/kernel/sched/alt_core.h b/kernel/sched/alt_core.h
+new file mode 100644
+index 000000000000..12d76d9d290e
+--- /dev/null
++++ b/kernel/sched/alt_core.h
+@@ -0,0 +1,213 @@
++#ifndef _KERNEL_SCHED_ALT_CORE_H
++#define _KERNEL_SCHED_ALT_CORE_H
++
++/*
++ * Compile time debug macro
++ * #define ALT_SCHED_DEBUG
++ */
++
++/*
++ * Task related inlined functions
++ */
++static inline bool is_migration_disabled(struct task_struct *p)
++{
++#ifdef CONFIG_SMP
++ return p->migration_disabled;
++#else
++ return false;
++#endif
++}
++
++/* rt_prio(prio) defined in include/linux/sched/rt.h */
++#define rt_task(p) rt_prio((p)->prio)
++#define rt_policy(policy) ((policy) == SCHED_FIFO || (policy) == SCHED_RR)
++#define task_has_rt_policy(p) (rt_policy((p)->policy))
++
++struct affinity_context {
++ const struct cpumask *new_mask;
++ struct cpumask *user_mask;
++ unsigned int flags;
++};
++
++/* CONFIG_SCHED_CLASS_EXT is not supported */
++#define scx_switched_all() false
++
++#define SCA_CHECK 0x01
++#define SCA_MIGRATE_DISABLE 0x02
++#define SCA_MIGRATE_ENABLE 0x04
++#define SCA_USER 0x08
++
++#ifdef CONFIG_SMP
++
++extern int __set_cpus_allowed_ptr(struct task_struct *p, struct affinity_context *ctx);
++
++static inline cpumask_t *alloc_user_cpus_ptr(int node)
++{
++ /*
++ * See do_set_cpus_allowed() above for the rcu_head usage.
++ */
++ int size = max_t(int, cpumask_size(), sizeof(struct rcu_head));
++
++ return kmalloc_node(size, GFP_KERNEL, node);
++}
++
++#else /* !CONFIG_SMP: */
++
++static inline int __set_cpus_allowed_ptr(struct task_struct *p,
++ struct affinity_context *ctx)
++{
++ return set_cpus_allowed_ptr(p, ctx->new_mask);
++}
++
++static inline cpumask_t *alloc_user_cpus_ptr(int node)
++{
++ return NULL;
++}
++
++#endif /* !CONFIG_SMP */
++
++#ifdef CONFIG_RT_MUTEXES
++
++static inline int __rt_effective_prio(struct task_struct *pi_task, int prio)
++{
++ if (pi_task)
++ prio = min(prio, pi_task->prio);
++
++ return prio;
++}
++
++static inline int rt_effective_prio(struct task_struct *p, int prio)
++{
++ struct task_struct *pi_task = rt_mutex_get_top_task(p);
++
++ return __rt_effective_prio(pi_task, prio);
++}
++
++#else /* !CONFIG_RT_MUTEXES: */
++
++static inline int rt_effective_prio(struct task_struct *p, int prio)
++{
++ return prio;
++}
++
++#endif /* !CONFIG_RT_MUTEXES */
++
++extern int __sched_setscheduler(struct task_struct *p, const struct sched_attr *attr, bool user, bool pi);
++extern int __sched_setaffinity(struct task_struct *p, struct affinity_context *ctx);
++extern void __setscheduler_prio(struct task_struct *p, int prio);
++
++/*
++ * Context API
++ */
++static inline struct rq *__task_access_lock(struct task_struct *p, raw_spinlock_t **plock)
++{
++ struct rq *rq;
++ for (;;) {
++ rq = task_rq(p);
++ if (p->on_cpu || task_on_rq_queued(p)) {
++ raw_spin_lock(&rq->lock);
++ if (likely((p->on_cpu || task_on_rq_queued(p)) && rq == task_rq(p))) {
++ *plock = &rq->lock;
++ return rq;
++ }
++ raw_spin_unlock(&rq->lock);
++ } else if (task_on_rq_migrating(p)) {
++ do {
++ cpu_relax();
++ } while (unlikely(task_on_rq_migrating(p)));
++ } else {
++ *plock = NULL;
++ return rq;
++ }
++ }
++}
++
++static inline void __task_access_unlock(struct task_struct *p, raw_spinlock_t *lock)
++{
++ if (NULL != lock)
++ raw_spin_unlock(lock);
++}
++
++void check_task_changed(struct task_struct *p, struct rq *rq);
++
++/*
++ * RQ related inlined functions
++ */
++
++/*
++ * This routine assumes that the idle task is always in the queue
++ */
++static inline struct task_struct *sched_rq_first_task(struct rq *rq)
++{
++ const struct list_head *head = &rq->queue.heads[sched_rq_prio_idx(rq)];
++
++ return list_first_entry(head, struct task_struct, sq_node);
++}
++
++static inline struct task_struct * sched_rq_next_task(struct task_struct *p, struct rq *rq)
++{
++ struct list_head *next = p->sq_node.next;
++
++ if (&rq->queue.heads[0] <= next && next < &rq->queue.heads[SCHED_LEVELS]) {
++ struct list_head *head;
++ unsigned long idx = next - &rq->queue.heads[0];
++
++ idx = find_next_bit(rq->queue.bitmap, SCHED_QUEUE_BITS,
++ sched_idx2prio(idx, rq) + 1);
++ head = &rq->queue.heads[sched_prio2idx(idx, rq)];
++
++ return list_first_entry(head, struct task_struct, sq_node);
++ }
++
++ return list_next_entry(p, sq_node);
++}
++
++extern void requeue_task(struct task_struct *p, struct rq *rq);
++
++#ifdef ALT_SCHED_DEBUG
++extern void alt_sched_debug(void);
++#else
++static inline void alt_sched_debug(void) {}
++#endif
++
++extern int sched_yield_type;
++
++#ifdef CONFIG_SMP
++extern cpumask_t sched_rq_pending_mask ____cacheline_aligned_in_smp;
++
++DECLARE_STATIC_KEY_FALSE(sched_smt_present);
++DECLARE_PER_CPU_ALIGNED(cpumask_t *, sched_cpu_llc_mask);
++
++extern cpumask_t sched_smt_mask ____cacheline_aligned_in_smp;
++
++extern cpumask_t *const sched_idle_mask;
++extern cpumask_t *const sched_sg_idle_mask;
++extern cpumask_t *const sched_pcore_idle_mask;
++extern cpumask_t *const sched_ecore_idle_mask;
++
++extern struct rq *move_queued_task(struct rq *rq, struct task_struct *p, int new_cpu);
++
++typedef bool (*idle_select_func_t)(struct cpumask *dstp, const struct cpumask *src1p,
++ const struct cpumask *src2p);
++
++extern idle_select_func_t idle_select_func;
++#endif
++
++/* balance callback */
++#ifdef CONFIG_SMP
++extern struct balance_callback *splice_balance_callbacks(struct rq *rq);
++extern void balance_callbacks(struct rq *rq, struct balance_callback *head);
++#else
++
++static inline struct balance_callback *splice_balance_callbacks(struct rq *rq)
++{
++ return NULL;
++}
++
++static inline void balance_callbacks(struct rq *rq, struct balance_callback *head)
++{
++}
++
++#endif
++
++#endif /* _KERNEL_SCHED_ALT_CORE_H */
+diff --git a/kernel/sched/alt_debug.c b/kernel/sched/alt_debug.c
+new file mode 100644
+index 000000000000..1dbd7eb6a434
+--- /dev/null
++++ b/kernel/sched/alt_debug.c
+@@ -0,0 +1,32 @@
++/*
++ * kernel/sched/alt_debug.c
++ *
++ * Print the alt scheduler debugging details
++ *
++ * Author: Alfred Chen
++ * Date : 2020
++ */
++#include "sched.h"
++#include "linux/sched/debug.h"
++
++/*
++ * This allows printing both to /proc/sched_debug and
++ * to the console
++ */
++#define SEQ_printf(m, x...) \
++ do { \
++ if (m) \
++ seq_printf(m, x); \
++ else \
++ pr_cont(x); \
++ } while (0)
++
++void proc_sched_show_task(struct task_struct *p, struct pid_namespace *ns,
++ struct seq_file *m)
++{
++ SEQ_printf(m, "%s (%d, #threads: %d)\n", p->comm, task_pid_nr_ns(p, ns),
++ get_nr_threads(p));
++}
++
++void proc_sched_set_task(struct task_struct *p)
++{}
+diff --git a/kernel/sched/alt_sched.h b/kernel/sched/alt_sched.h
+new file mode 100644
+index 000000000000..c7d5272ca23a
+--- /dev/null
++++ b/kernel/sched/alt_sched.h
+@@ -0,0 +1,1051 @@
++#ifndef _KERNEL_SCHED_ALT_SCHED_H
++#define _KERNEL_SCHED_ALT_SCHED_H
++
++#include <linux/context_tracking.h>
++#include <linux/profile.h>
++#include <linux/stop_machine.h>
++#include <linux/syscalls.h>
++#include <linux/tick.h>
++
++#include <trace/events/power.h>
++#include <trace/events/sched.h>
++
++#include "../workqueue_internal.h"
++
++#include "cpupri.h"
++
++#ifdef CONFIG_CGROUP_SCHED
++/* task group related information */
++struct task_group {
++ struct cgroup_subsys_state css;
++
++ struct rcu_head rcu;
++ struct list_head list;
++
++ struct task_group *parent;
++ struct list_head siblings;
++ struct list_head children;
++};
++
++extern struct task_group *sched_create_group(struct task_group *parent);
++extern void sched_online_group(struct task_group *tg,
++ struct task_group *parent);
++extern void sched_destroy_group(struct task_group *tg);
++extern void sched_release_group(struct task_group *tg);
++#endif /* CONFIG_CGROUP_SCHED */
++
++#define MIN_SCHED_NORMAL_PRIO (32)
++/*
++ * levels: RT(0-24), reserved(25-31), NORMAL(32-63), cpu idle task(64)
++ *
++ * -- BMQ --
++ * NORMAL: (lower boost range 12, NICE_WIDTH 40, higher boost range 12) / 2
++ * -- PDS --
++ * NORMAL: SCHED_EDGE_DELTA + ((NICE_WIDTH 40) / 2)
++ */
++#define SCHED_LEVELS (64 + 1)
++
++#define IDLE_TASK_SCHED_PRIO (SCHED_LEVELS - 1)
++
++#ifdef CONFIG_SCHED_DEBUG
++# define SCHED_WARN_ON(x) WARN_ONCE(x, #x)
++extern void resched_latency_warn(int cpu, u64 latency);
++#else
++# define SCHED_WARN_ON(x) ({ (void)(x), 0; })
++static inline void resched_latency_warn(int cpu, u64 latency) {}
++#endif
++
++/*
++ * Increase resolution of nice-level calculations for 64-bit architectures.
++ * The extra resolution improves shares distribution and load balancing of
++ * low-weight task groups (eg. nice +19 on an autogroup), deeper taskgroup
++ * hierarchies, especially on larger systems. This is not a user-visible change
++ * and does not change the user-interface for setting shares/weights.
++ *
++ * We increase resolution only if we have enough bits to allow this increased
++ * resolution (i.e. 64-bit). The costs for increasing resolution when 32-bit
++ * are pretty high and the returns do not justify the increased costs.
++ *
++ * Really only required when CONFIG_FAIR_GROUP_SCHED=y is also set, but to
++ * increase coverage and consistency always enable it on 64-bit platforms.
++ */
++#ifdef CONFIG_64BIT
++# define NICE_0_LOAD_SHIFT (SCHED_FIXEDPOINT_SHIFT + SCHED_FIXEDPOINT_SHIFT)
++# define scale_load(w) ((w) << SCHED_FIXEDPOINT_SHIFT)
++# define scale_load_down(w) \
++({ \
++ unsigned long __w = (w); \
++ if (__w) \
++ __w = max(2UL, __w >> SCHED_FIXEDPOINT_SHIFT); \
++ __w; \
++})
++#else
++# define NICE_0_LOAD_SHIFT (SCHED_FIXEDPOINT_SHIFT)
++# define scale_load(w) (w)
++# define scale_load_down(w) (w)
++#endif
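The block above doubles the fixed-point resolution of task weights on 64-bit kernels. A minimal user-space sketch of the round trip, illustrative only and not part of this patch, assuming SCHED_FIXEDPOINT_SHIFT is 10 as in the mainline definition and a nice-0 weight of 1024:

#include <stdio.h>

#define SCHED_FIXEDPOINT_SHIFT 10	/* assumed mainline value */

static unsigned long scale_load(unsigned long w)
{
	return w << SCHED_FIXEDPOINT_SHIFT;
}

static unsigned long scale_load_down(unsigned long w)
{
	if (!w)
		return 0;
	w >>= SCHED_FIXEDPOINT_SHIFT;
	return w < 2 ? 2 : w;		/* the max(2UL, ...) clamp above */
}

int main(void)
{
	unsigned long nice_0 = 1024;	/* nice-0 weight before scaling */

	printf("%lu\n", scale_load(nice_0));			/* 1048576 */
	printf("%lu\n", scale_load_down(scale_load(nice_0)));	/* back to 1024 */
	printf("%lu\n", scale_load_down(1));			/* clamps to 2 */
	return 0;
}
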
++
++/*
++ * Tunables that become constants when CONFIG_SCHED_DEBUG is off:
++ */
++#ifdef CONFIG_SCHED_DEBUG
++# define const_debug __read_mostly
++#else
++# define const_debug const
++#endif
++
++/* task_struct::on_rq states: */
++#define TASK_ON_RQ_QUEUED 1
++#define TASK_ON_RQ_MIGRATING 2
++
++static inline int task_on_rq_queued(struct task_struct *p)
++{
++ return READ_ONCE(p->on_rq) == TASK_ON_RQ_QUEUED;
++}
++
++static inline int task_on_rq_migrating(struct task_struct *p)
++{
++ return READ_ONCE(p->on_rq) == TASK_ON_RQ_MIGRATING;
++}
++
++/* Wake flags. The first three directly map to some SD flag value */
++#define WF_EXEC 0x02 /* Wakeup after exec; maps to SD_BALANCE_EXEC */
++#define WF_FORK 0x04 /* Wakeup after fork; maps to SD_BALANCE_FORK */
++#define WF_TTWU 0x08 /* Wakeup; maps to SD_BALANCE_WAKE */
++
++#define WF_SYNC 0x10 /* Waker goes to sleep after wakeup */
++#define WF_MIGRATED 0x20 /* Internal use, task got migrated */
++#define WF_CURRENT_CPU 0x40 /* Prefer to move the wakee to the current CPU. */
++
++#ifdef CONFIG_SMP
++static_assert(WF_EXEC == SD_BALANCE_EXEC);
++static_assert(WF_FORK == SD_BALANCE_FORK);
++static_assert(WF_TTWU == SD_BALANCE_WAKE);
++#endif
++
++#define SCHED_QUEUE_BITS (SCHED_LEVELS - 1)
++
++struct sched_queue {
++ DECLARE_BITMAP(bitmap, SCHED_QUEUE_BITS);
++ struct list_head heads[SCHED_LEVELS];
++};
++
++struct rq;
++struct cpuidle_state;
++
++struct balance_callback {
++ struct balance_callback *next;
++ void (*func)(struct rq *rq);
++};
++
++typedef void (*balance_func_t)(struct rq *rq, int cpu);
++typedef void (*set_idle_mask_func_t)(unsigned int cpu, struct cpumask *dstp);
++typedef void (*clear_idle_mask_func_t)(int cpu, struct cpumask *dstp);
++
++struct balance_arg {
++ struct task_struct *task;
++ int active;
++ cpumask_t *cpumask;
++};
++
++/*
++ * This is the main, per-CPU runqueue data structure.
++ * This data should only be modified by the local cpu.
++ */
++struct rq {
++ /* runqueue lock: */
++ raw_spinlock_t lock;
++
++ struct task_struct __rcu *curr;
++ struct task_struct *idle;
++ struct task_struct *stop;
++ struct mm_struct *prev_mm;
++
++ struct sched_queue queue ____cacheline_aligned;
++
++ int prio;
++#ifdef CONFIG_SCHED_PDS
++ int prio_idx;
++ u64 time_edge;
++#endif
++
++ /* switch count */
++ u64 nr_switches;
++
++ atomic_t nr_iowait;
++
++#ifdef CONFIG_SCHED_DEBUG
++ u64 last_seen_need_resched_ns;
++ int ticks_without_resched;
++#endif
++
++#ifdef CONFIG_MEMBARRIER
++ int membarrier_state;
++#endif
++
++ set_idle_mask_func_t set_idle_mask_func;
++ clear_idle_mask_func_t clear_idle_mask_func;
++
++#ifdef CONFIG_SMP
++ int cpu; /* cpu of this runqueue */
++ bool online;
++
++ unsigned int ttwu_pending;
++ unsigned char nohz_idle_balance;
++ unsigned char idle_balance;
++
++#ifdef CONFIG_HAVE_SCHED_AVG_IRQ
++ struct sched_avg avg_irq;
++#endif
++
++ balance_func_t balance_func;
++ struct balance_arg active_balance_arg ____cacheline_aligned;
++ struct cpu_stop_work active_balance_work;
++
++ struct balance_callback *balance_callback;
++#ifdef CONFIG_HOTPLUG_CPU
++ struct rcuwait hotplug_wait;
++#endif
++ unsigned int nr_pinned;
++
++#endif /* CONFIG_SMP */
++#ifdef CONFIG_IRQ_TIME_ACCOUNTING
++ u64 prev_irq_time;
++#endif /* CONFIG_IRQ_TIME_ACCOUNTING */
++#ifdef CONFIG_PARAVIRT
++ u64 prev_steal_time;
++#endif /* CONFIG_PARAVIRT */
++#ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
++ u64 prev_steal_time_rq;
++#endif /* CONFIG_PARAVIRT_TIME_ACCOUNTING */
++
++	/* For general cpu load util */
++ s32 load_history;
++ u64 load_block;
++ u64 load_stamp;
++
++ /* calc_load related fields */
++ unsigned long calc_load_update;
++ long calc_load_active;
++
++ /* Ensure that all clocks are in the same cache line */
++ u64 clock ____cacheline_aligned;
++ u64 clock_task;
++
++ unsigned int nr_running;
++ unsigned long nr_uninterruptible;
++
++#ifdef CONFIG_SCHED_HRTICK
++#ifdef CONFIG_SMP
++ call_single_data_t hrtick_csd;
++#endif
++ struct hrtimer hrtick_timer;
++ ktime_t hrtick_time;
++#endif
++
++#ifdef CONFIG_SCHEDSTATS
++
++ /* latency stats */
++ struct sched_info rq_sched_info;
++ unsigned long long rq_cpu_time;
++ /* could above be rq->cfs_rq.exec_clock + rq->rt_rq.rt_runtime ? */
++
++ /* sys_sched_yield() stats */
++ unsigned int yld_count;
++
++ /* schedule() stats */
++ unsigned int sched_switch;
++ unsigned int sched_count;
++ unsigned int sched_goidle;
++
++ /* try_to_wake_up() stats */
++ unsigned int ttwu_count;
++ unsigned int ttwu_local;
++#endif /* CONFIG_SCHEDSTATS */
++
++#ifdef CONFIG_CPU_IDLE
++ /* Must be inspected within a rcu lock section */
++ struct cpuidle_state *idle_state;
++#endif
++
++#ifdef CONFIG_NO_HZ_COMMON
++#ifdef CONFIG_SMP
++ call_single_data_t nohz_csd;
++#endif
++ atomic_t nohz_flags;
++#endif /* CONFIG_NO_HZ_COMMON */
++
++ /* Scratch cpumask to be temporarily used under rq_lock */
++ cpumask_var_t scratch_mask;
++};
++
++extern unsigned int sysctl_sched_base_slice;
++
++extern unsigned long rq_load_util(struct rq *rq, unsigned long max);
++
++extern unsigned long calc_load_update;
++extern atomic_long_t calc_load_tasks;
++
++extern void calc_global_load_tick(struct rq *this_rq);
++extern long calc_load_fold_active(struct rq *this_rq, long adjust);
++
++DECLARE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
++#define cpu_rq(cpu) (&per_cpu(runqueues, (cpu)))
++#define this_rq() this_cpu_ptr(&runqueues)
++#define task_rq(p) cpu_rq(task_cpu(p))
++#define cpu_curr(cpu) (cpu_rq(cpu)->curr)
++#define raw_rq() raw_cpu_ptr(&runqueues)
++
++#ifdef CONFIG_SMP
++#if defined(CONFIG_SCHED_DEBUG) && defined(CONFIG_SYSCTL)
++void register_sched_domain_sysctl(void);
++void unregister_sched_domain_sysctl(void);
++#else
++static inline void register_sched_domain_sysctl(void)
++{
++}
++static inline void unregister_sched_domain_sysctl(void)
++{
++}
++#endif
++
++extern bool sched_smp_initialized;
++
++enum {
++#ifdef CONFIG_SCHED_SMT
++ SMT_LEVEL_SPACE_HOLDER,
++#endif
++ COREGROUP_LEVEL_SPACE_HOLDER,
++ CORE_LEVEL_SPACE_HOLDER,
++ OTHER_LEVEL_SPACE_HOLDER,
++ NR_CPU_AFFINITY_LEVELS
++};
++
++DECLARE_PER_CPU_ALIGNED(cpumask_t [NR_CPU_AFFINITY_LEVELS], sched_cpu_topo_masks);
++
++static inline int
++__best_mask_cpu(const cpumask_t *cpumask, const cpumask_t *mask)
++{
++ int cpu;
++
++ while ((cpu = cpumask_any_and(cpumask, mask)) >= nr_cpu_ids)
++ mask++;
++
++ return cpu;
++}
++
++static inline int best_mask_cpu(int cpu, const cpumask_t *mask)
++{
++ return __best_mask_cpu(mask, per_cpu(sched_cpu_topo_masks, cpu));
++}
++
++#endif
++
++#ifndef arch_scale_freq_tick
++static __always_inline
++void arch_scale_freq_tick(void)
++{
++}
++#endif
++
++#ifndef arch_scale_freq_capacity
++static __always_inline
++unsigned long arch_scale_freq_capacity(int cpu)
++{
++ return SCHED_CAPACITY_SCALE;
++}
++#endif
++
++static inline u64 __rq_clock_broken(struct rq *rq)
++{
++ return READ_ONCE(rq->clock);
++}
++
++static inline u64 rq_clock(struct rq *rq)
++{
++ /*
++	 * Relax lockdep_assert_held() checking as in VRQ, calls to
++	 * sched_info_xxxx() may not hold rq->lock
++ * lockdep_assert_held(&rq->lock);
++ */
++ return rq->clock;
++}
++
++static inline u64 rq_clock_task(struct rq *rq)
++{
++ /*
++	 * Relax lockdep_assert_held() checking as in VRQ, calls to
++	 * sched_info_xxxx() may not hold rq->lock
++ * lockdep_assert_held(&rq->lock);
++ */
++ return rq->clock_task;
++}
++
++/*
++ * {de,en}queue flags:
++ *
++ * DEQUEUE_SLEEP - task is no longer runnable
++ * ENQUEUE_WAKEUP - task just became runnable
++ *
++ */
++
++#define DEQUEUE_SLEEP 0x01
++
++#define ENQUEUE_WAKEUP 0x01
++
++
++/*
++ * Below are scheduler APIs used in other kernel code.
++ * They use the dummy rq_flags.
++ * ToDo : BMQ needs to support these APIs for compatibility with mainline
++ * scheduler code.
++ */
++struct rq_flags {
++ unsigned long flags;
++};
++
++struct rq *__task_rq_lock(struct task_struct *p, struct rq_flags *rf)
++ __acquires(rq->lock);
++
++struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf)
++ __acquires(p->pi_lock)
++ __acquires(rq->lock);
++
++static inline void __task_rq_unlock(struct rq *rq, struct rq_flags *rf)
++ __releases(rq->lock)
++{
++ raw_spin_unlock(&rq->lock);
++}
++
++static inline void
++task_rq_unlock(struct rq *rq, struct task_struct *p, struct rq_flags *rf)
++ __releases(rq->lock)
++ __releases(p->pi_lock)
++{
++ raw_spin_unlock(&rq->lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, rf->flags);
++}
++
++static inline void
++rq_lock(struct rq *rq, struct rq_flags *rf)
++ __acquires(rq->lock)
++{
++ raw_spin_lock(&rq->lock);
++}
++
++static inline void
++rq_unlock(struct rq *rq, struct rq_flags *rf)
++ __releases(rq->lock)
++{
++ raw_spin_unlock(&rq->lock);
++}
++
++static inline void
++rq_lock_irq(struct rq *rq, struct rq_flags *rf)
++ __acquires(rq->lock)
++{
++ raw_spin_lock_irq(&rq->lock);
++}
++
++static inline void
++rq_unlock_irq(struct rq *rq, struct rq_flags *rf)
++ __releases(rq->lock)
++{
++ raw_spin_unlock_irq(&rq->lock);
++}
++
++static inline struct rq *
++this_rq_lock_irq(struct rq_flags *rf)
++ __acquires(rq->lock)
++{
++ struct rq *rq;
++
++ local_irq_disable();
++ rq = this_rq();
++ raw_spin_lock(&rq->lock);
++
++ return rq;
++}
++
++static inline raw_spinlock_t *__rq_lockp(struct rq *rq)
++{
++ return &rq->lock;
++}
++
++static inline raw_spinlock_t *rq_lockp(struct rq *rq)
++{
++ return __rq_lockp(rq);
++}
++
++static inline void lockdep_assert_rq_held(struct rq *rq)
++{
++ lockdep_assert_held(__rq_lockp(rq));
++}
++
++extern void raw_spin_rq_lock_nested(struct rq *rq, int subclass);
++extern void raw_spin_rq_unlock(struct rq *rq);
++
++static inline void raw_spin_rq_lock(struct rq *rq)
++{
++ raw_spin_rq_lock_nested(rq, 0);
++}
++
++static inline void raw_spin_rq_lock_irq(struct rq *rq)
++{
++ local_irq_disable();
++ raw_spin_rq_lock(rq);
++}
++
++static inline void raw_spin_rq_unlock_irq(struct rq *rq)
++{
++ raw_spin_rq_unlock(rq);
++ local_irq_enable();
++}
++
++static inline int task_current(struct rq *rq, struct task_struct *p)
++{
++ return rq->curr == p;
++}
++
++static inline bool task_on_cpu(struct task_struct *p)
++{
++ return p->on_cpu;
++}
++
++extern struct static_key_false sched_schedstats;
++
++#ifdef CONFIG_CPU_IDLE
++static inline void idle_set_state(struct rq *rq,
++ struct cpuidle_state *idle_state)
++{
++ rq->idle_state = idle_state;
++}
++
++static inline struct cpuidle_state *idle_get_state(struct rq *rq)
++{
++ WARN_ON(!rcu_read_lock_held());
++ return rq->idle_state;
++}
++#else
++static inline void idle_set_state(struct rq *rq,
++ struct cpuidle_state *idle_state)
++{
++}
++
++static inline struct cpuidle_state *idle_get_state(struct rq *rq)
++{
++ return NULL;
++}
++#endif
++
++static inline int cpu_of(const struct rq *rq)
++{
++#ifdef CONFIG_SMP
++ return rq->cpu;
++#else
++ return 0;
++#endif
++}
++
++extern void resched_cpu(int cpu);
++
++#include "stats.h"
++
++#ifdef CONFIG_NO_HZ_COMMON
++#define NOHZ_BALANCE_KICK_BIT 0
++#define NOHZ_STATS_KICK_BIT 1
++
++#define NOHZ_BALANCE_KICK BIT(NOHZ_BALANCE_KICK_BIT)
++#define NOHZ_STATS_KICK BIT(NOHZ_STATS_KICK_BIT)
++
++#define NOHZ_KICK_MASK (NOHZ_BALANCE_KICK | NOHZ_STATS_KICK)
++
++#define nohz_flags(cpu) (&cpu_rq(cpu)->nohz_flags)
++
++/* TODO: needed?
++extern void nohz_balance_exit_idle(struct rq *rq);
++#else
++static inline void nohz_balance_exit_idle(struct rq *rq) { }
++*/
++#endif
++
++#ifdef CONFIG_IRQ_TIME_ACCOUNTING
++struct irqtime {
++ u64 total;
++ u64 tick_delta;
++ u64 irq_start_time;
++ struct u64_stats_sync sync;
++};
++
++DECLARE_PER_CPU(struct irqtime, cpu_irqtime);
++extern int sched_clock_irqtime;
++
++static inline int irqtime_enabled(void)
++{
++ return sched_clock_irqtime;
++}
++
++/*
++ * Returns the irqtime minus the softirq time computed by ksoftirqd.
++ * Otherwise ksoftirqd's sum_exec_runtime would have its own runtime
++ * subtracted and would never move forward.
++ */
++static inline u64 irq_time_read(int cpu)
++{
++ struct irqtime *irqtime = &per_cpu(cpu_irqtime, cpu);
++ unsigned int seq;
++ u64 total;
++
++ do {
++ seq = __u64_stats_fetch_begin(&irqtime->sync);
++ total = irqtime->total;
++ } while (__u64_stats_fetch_retry(&irqtime->sync, seq));
++
++ return total;
++}
++#else
++
++static inline int irqtime_enabled(void)
++{
++ return 0;
++}
++
++#endif /* CONFIG_IRQ_TIME_ACCOUNTING */
++
++#ifdef CONFIG_CPU_FREQ
++DECLARE_PER_CPU(struct update_util_data __rcu *, cpufreq_update_util_data);
++#endif /* CONFIG_CPU_FREQ */
++
++#ifdef CONFIG_NO_HZ_FULL
++extern int __init sched_tick_offload_init(void);
++#else
++static inline int sched_tick_offload_init(void) { return 0; }
++#endif
++
++#ifdef arch_scale_freq_capacity
++#ifndef arch_scale_freq_invariant
++#define arch_scale_freq_invariant() (true)
++#endif
++#else /* arch_scale_freq_capacity */
++#define arch_scale_freq_invariant() (false)
++#endif
++
++#ifdef CONFIG_SMP
++unsigned long sugov_effective_cpu_perf(int cpu, unsigned long actual,
++ unsigned long min,
++ unsigned long max);
++#endif /* CONFIG_SMP */
++
++extern void schedule_idle(void);
++
++#define cap_scale(v, s) ((v)*(s) >> SCHED_CAPACITY_SHIFT)
++
++/*
++ * !! For sched_setattr_nocheck() (kernel) only !!
++ *
++ * This is actually gross. :(
++ *
++ * It is used to make schedutil kworker(s) higher priority than SCHED_DEADLINE
++ * tasks, but still be able to sleep. We need this on platforms that cannot
++ * atomically change clock frequency. Remove once fast switching will be
++ * available on such platforms.
++ *
++ * SUGOV stands for SchedUtil GOVernor.
++ */
++#define SCHED_FLAG_SUGOV 0x10000000
++
++#ifdef CONFIG_MEMBARRIER
++/*
++ * The scheduler provides memory barriers required by membarrier between:
++ * - prior user-space memory accesses and store to rq->membarrier_state,
++ * - store to rq->membarrier_state and following user-space memory accesses.
++ * In the same way it provides those guarantees around store to rq->curr.
++ */
++static inline void membarrier_switch_mm(struct rq *rq,
++ struct mm_struct *prev_mm,
++ struct mm_struct *next_mm)
++{
++ int membarrier_state;
++
++ if (prev_mm == next_mm)
++ return;
++
++ membarrier_state = atomic_read(&next_mm->membarrier_state);
++ if (READ_ONCE(rq->membarrier_state) == membarrier_state)
++ return;
++
++ WRITE_ONCE(rq->membarrier_state, membarrier_state);
++}
++#else
++static inline void membarrier_switch_mm(struct rq *rq,
++ struct mm_struct *prev_mm,
++ struct mm_struct *next_mm)
++{
++}
++#endif
++
++#ifdef CONFIG_NUMA
++extern int sched_numa_find_closest(const struct cpumask *cpus, int cpu);
++#else
++static inline int sched_numa_find_closest(const struct cpumask *cpus, int cpu)
++{
++ return nr_cpu_ids;
++}
++#endif
++
++extern void swake_up_all_locked(struct swait_queue_head *q);
++extern void __prepare_to_swait(struct swait_queue_head *q, struct swait_queue *wait);
++
++extern int try_to_wake_up(struct task_struct *tsk, unsigned int state, int wake_flags);
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++extern int preempt_dynamic_mode;
++extern int sched_dynamic_mode(const char *str);
++extern void sched_dynamic_update(int mode);
++#endif
++
++static inline void nohz_run_idle_balance(int cpu) { }
++
++static inline unsigned long
++uclamp_eff_value(struct task_struct *p, enum uclamp_id clamp_id)
++{
++ if (clamp_id == UCLAMP_MIN)
++ return 0;
++
++ return SCHED_CAPACITY_SCALE;
++}
++
++static inline bool uclamp_rq_is_capped(struct rq *rq) { return false; }
++
++static inline bool uclamp_is_used(void)
++{
++ return false;
++}
++
++static inline unsigned long
++uclamp_rq_get(struct rq *rq, enum uclamp_id clamp_id)
++{
++ if (clamp_id == UCLAMP_MIN)
++ return 0;
++
++ return SCHED_CAPACITY_SCALE;
++}
++
++static inline void
++uclamp_rq_set(struct rq *rq, enum uclamp_id clamp_id, unsigned int value)
++{
++}
++
++static inline bool uclamp_rq_is_idle(struct rq *rq)
++{
++ return false;
++}
++
++#ifdef CONFIG_SCHED_MM_CID
++
++#define SCHED_MM_CID_PERIOD_NS (100ULL * 1000000) /* 100ms */
++#define MM_CID_SCAN_DELAY 100 /* 100ms */
++
++extern raw_spinlock_t cid_lock;
++extern int use_cid_lock;
++
++extern void sched_mm_cid_migrate_from(struct task_struct *t);
++extern void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t);
++extern void task_tick_mm_cid(struct rq *rq, struct task_struct *curr);
++extern void init_sched_mm_cid(struct task_struct *t);
++
++static inline void __mm_cid_put(struct mm_struct *mm, int cid)
++{
++ if (cid < 0)
++ return;
++ cpumask_clear_cpu(cid, mm_cidmask(mm));
++}
++
++/*
++ * The per-mm/cpu cid can have the MM_CID_LAZY_PUT flag set or transition to
++ * the MM_CID_UNSET state without holding the rq lock, but the rq lock needs to
++ * be held to transition to other states.
++ *
++ * State transitions synchronized with cmpxchg or try_cmpxchg need to be
++ * consistent across cpus, which prevents use of this_cpu_cmpxchg.
++ */
++static inline void mm_cid_put_lazy(struct task_struct *t)
++{
++ struct mm_struct *mm = t->mm;
++ struct mm_cid __percpu *pcpu_cid = mm->pcpu_cid;
++ int cid;
++
++ lockdep_assert_irqs_disabled();
++ cid = __this_cpu_read(pcpu_cid->cid);
++ if (!mm_cid_is_lazy_put(cid) ||
++ !try_cmpxchg(&this_cpu_ptr(pcpu_cid)->cid, &cid, MM_CID_UNSET))
++ return;
++ __mm_cid_put(mm, mm_cid_clear_lazy_put(cid));
++}
++
++static inline int mm_cid_pcpu_unset(struct mm_struct *mm)
++{
++ struct mm_cid __percpu *pcpu_cid = mm->pcpu_cid;
++ int cid, res;
++
++ lockdep_assert_irqs_disabled();
++ cid = __this_cpu_read(pcpu_cid->cid);
++ for (;;) {
++ if (mm_cid_is_unset(cid))
++ return MM_CID_UNSET;
++ /*
++ * Attempt transition from valid or lazy-put to unset.
++ */
++ res = cmpxchg(&this_cpu_ptr(pcpu_cid)->cid, cid, MM_CID_UNSET);
++ if (res == cid)
++ break;
++ cid = res;
++ }
++ return cid;
++}
++
++static inline void mm_cid_put(struct mm_struct *mm)
++{
++ int cid;
++
++ lockdep_assert_irqs_disabled();
++ cid = mm_cid_pcpu_unset(mm);
++ if (cid == MM_CID_UNSET)
++ return;
++ __mm_cid_put(mm, mm_cid_clear_lazy_put(cid));
++}
++
++static inline int __mm_cid_try_get(struct task_struct *t, struct mm_struct *mm)
++{
++ struct cpumask *cidmask = mm_cidmask(mm);
++ struct mm_cid __percpu *pcpu_cid = mm->pcpu_cid;
++ int cid, max_nr_cid, allowed_max_nr_cid;
++
++ /*
++ * After shrinking the number of threads or reducing the number
++ * of allowed cpus, reduce the value of max_nr_cid so expansion
++ * of cid allocation will preserve cache locality if the number
++	 * of threads or allowed cpus increases again.
++ */
++ max_nr_cid = atomic_read(&mm->max_nr_cid);
++ while ((allowed_max_nr_cid = min_t(int, READ_ONCE(mm->nr_cpus_allowed),
++ atomic_read(&mm->mm_users))),
++ max_nr_cid > allowed_max_nr_cid) {
++ /* atomic_try_cmpxchg loads previous mm->max_nr_cid into max_nr_cid. */
++ if (atomic_try_cmpxchg(&mm->max_nr_cid, &max_nr_cid, allowed_max_nr_cid)) {
++ max_nr_cid = allowed_max_nr_cid;
++ break;
++ }
++ }
++ /* Try to re-use recent cid. This improves cache locality. */
++ cid = __this_cpu_read(pcpu_cid->recent_cid);
++ if (!mm_cid_is_unset(cid) && cid < max_nr_cid &&
++ !cpumask_test_and_set_cpu(cid, cidmask))
++ return cid;
++ /*
++ * Expand cid allocation if the maximum number of concurrency
++	 * IDs allocated (max_nr_cid) is below the number of allowed cpus
++	 * and the number of threads. Expanding cid allocation as much as
++ * possible improves cache locality.
++ */
++ cid = max_nr_cid;
++ while (cid < READ_ONCE(mm->nr_cpus_allowed) && cid < atomic_read(&mm->mm_users)) {
++ /* atomic_try_cmpxchg loads previous mm->max_nr_cid into cid. */
++ if (!atomic_try_cmpxchg(&mm->max_nr_cid, &cid, cid + 1))
++ continue;
++ if (!cpumask_test_and_set_cpu(cid, cidmask))
++ return cid;
++ }
++ /*
++ * Find the first available concurrency id.
++ * Retry finding first zero bit if the mask is temporarily
++ * filled. This only happens during concurrent remote-clear
++ * which owns a cid without holding a rq lock.
++ */
++ for (;;) {
++ cid = cpumask_first_zero(cidmask);
++ if (cid < READ_ONCE(mm->nr_cpus_allowed))
++ break;
++ cpu_relax();
++ }
++ if (cpumask_test_and_set_cpu(cid, cidmask))
++ return -1;
++
++ return cid;
++}
++
++/*
++ * Save a snapshot of the current runqueue time of this cpu
++ * with the per-cpu cid value, allowing an estimate of how recently it was used.
++ */
++static inline void mm_cid_snapshot_time(struct rq *rq, struct mm_struct *mm)
++{
++ struct mm_cid *pcpu_cid = per_cpu_ptr(mm->pcpu_cid, cpu_of(rq));
++
++ lockdep_assert_rq_held(rq);
++ WRITE_ONCE(pcpu_cid->time, rq->clock);
++}
++
++static inline int __mm_cid_get(struct rq *rq, struct task_struct *t,
++ struct mm_struct *mm)
++{
++ int cid;
++
++ /*
++ * All allocations (even those using the cid_lock) are lock-free. If
++ * use_cid_lock is set, hold the cid_lock to perform cid allocation to
++ * guarantee forward progress.
++ */
++ if (!READ_ONCE(use_cid_lock)) {
++ cid = __mm_cid_try_get(t, mm);
++ if (cid >= 0)
++ goto end;
++ raw_spin_lock(&cid_lock);
++ } else {
++ raw_spin_lock(&cid_lock);
++ cid = __mm_cid_try_get(t, mm);
++ if (cid >= 0)
++ goto unlock;
++ }
++
++ /*
++ * cid concurrently allocated. Retry while forcing following
++ * allocations to use the cid_lock to ensure forward progress.
++ */
++ WRITE_ONCE(use_cid_lock, 1);
++ /*
++ * Set use_cid_lock before allocation. Only care about program order
++ * because this is only required for forward progress.
++ */
++ barrier();
++ /*
++ * Retry until it succeeds. It is guaranteed to eventually succeed once
++ * all newcoming allocations observe the use_cid_lock flag set.
++ */
++ do {
++ cid = __mm_cid_try_get(t, mm);
++ cpu_relax();
++ } while (cid < 0);
++ /*
++ * Allocate before clearing use_cid_lock. Only care about
++ * program order because this is for forward progress.
++ */
++ barrier();
++ WRITE_ONCE(use_cid_lock, 0);
++unlock:
++ raw_spin_unlock(&cid_lock);
++end:
++ mm_cid_snapshot_time(rq, mm);
++ return cid;
++}
++
++static inline int mm_cid_get(struct rq *rq, struct task_struct *t,
++ struct mm_struct *mm)
++{
++ struct mm_cid __percpu *pcpu_cid = mm->pcpu_cid;
++ struct cpumask *cpumask;
++ int cid;
++
++ lockdep_assert_rq_held(rq);
++ cpumask = mm_cidmask(mm);
++ cid = __this_cpu_read(pcpu_cid->cid);
++ if (mm_cid_is_valid(cid)) {
++ mm_cid_snapshot_time(rq, mm);
++ return cid;
++ }
++ if (mm_cid_is_lazy_put(cid)) {
++ if (try_cmpxchg(&this_cpu_ptr(pcpu_cid)->cid, &cid, MM_CID_UNSET))
++ __mm_cid_put(mm, mm_cid_clear_lazy_put(cid));
++ }
++ cid = __mm_cid_get(rq, t, mm);
++ __this_cpu_write(pcpu_cid->cid, cid);
++ __this_cpu_write(pcpu_cid->recent_cid, cid);
++
++ return cid;
++}
++
++static inline void switch_mm_cid(struct rq *rq,
++ struct task_struct *prev,
++ struct task_struct *next)
++{
++ /*
++ * Provide a memory barrier between rq->curr store and load of
++ * {prev,next}->mm->pcpu_cid[cpu] on rq->curr->mm transition.
++ *
++ * Should be adapted if context_switch() is modified.
++ */
++ if (!next->mm) { // to kernel
++ /*
++ * user -> kernel transition does not guarantee a barrier, but
++ * we can use the fact that it performs an atomic operation in
++ * mmgrab().
++ */
++ if (prev->mm) // from user
++ smp_mb__after_mmgrab();
++ /*
++ * kernel -> kernel transition does not change rq->curr->mm
++ * state. It stays NULL.
++ */
++ } else { // to user
++ /*
++ * kernel -> user transition does not provide a barrier
++ * between rq->curr store and load of {prev,next}->mm->pcpu_cid[cpu].
++ * Provide it here.
++ */
++ if (!prev->mm) // from kernel
++ smp_mb();
++ /*
++ * user -> user transition guarantees a memory barrier through
++ * switch_mm() when current->mm changes. If current->mm is
++ * unchanged, no barrier is needed.
++ */
++ }
++ if (prev->mm_cid_active) {
++ mm_cid_snapshot_time(rq, prev->mm);
++ mm_cid_put_lazy(prev);
++ prev->mm_cid = -1;
++ }
++ if (next->mm_cid_active)
++ next->last_mm_cid = next->mm_cid = mm_cid_get(rq, next, next->mm);
++}
++
++#else
++static inline void switch_mm_cid(struct rq *rq, struct task_struct *prev, struct task_struct *next) { }
++static inline void sched_mm_cid_migrate_from(struct task_struct *t) { }
++static inline void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t) { }
++static inline void task_tick_mm_cid(struct rq *rq, struct task_struct *curr) { }
++static inline void init_sched_mm_cid(struct task_struct *t) { }
++#endif
++
++#ifdef CONFIG_SMP
++extern struct balance_callback balance_push_callback;
++
++static inline void
++queue_balance_callback(struct rq *rq,
++ struct balance_callback *head,
++ void (*func)(struct rq *rq))
++{
++ lockdep_assert_rq_held(rq);
++
++ /*
++ * Don't (re)queue an already queued item; nor queue anything when
++ * balance_push() is active, see the comment with
++ * balance_push_callback.
++ */
++ if (unlikely(head->next || rq->balance_callback == &balance_push_callback))
++ return;
++
++ head->func = func;
++ head->next = rq->balance_callback;
++ rq->balance_callback = head;
++}
++#endif /* CONFIG_SMP */
++
++#ifdef CONFIG_SCHED_BMQ
++#include "bmq.h"
++#endif
++#ifdef CONFIG_SCHED_PDS
++#include "pds.h"
++#endif
++
++#endif /* _KERNEL_SCHED_ALT_SCHED_H */
+diff --git a/kernel/sched/alt_topology.c b/kernel/sched/alt_topology.c
+new file mode 100644
+index 000000000000..2266138ee783
+--- /dev/null
++++ b/kernel/sched/alt_topology.c
+@@ -0,0 +1,350 @@
++#include "alt_core.h"
++#include "alt_topology.h"
++
++#ifdef CONFIG_SMP
++
++static cpumask_t sched_pcore_mask ____cacheline_aligned_in_smp;
++
++static int __init sched_pcore_mask_setup(char *str)
++{
++ if (cpulist_parse(str, &sched_pcore_mask))
++ pr_warn("sched/alt: pcore_cpus= incorrect CPU range\n");
++
++ return 0;
++}
++__setup("pcore_cpus=", sched_pcore_mask_setup);
++
++/*
++ * set/clear idle mask functions
++ */
++#ifdef CONFIG_SCHED_SMT
++static void set_idle_mask_smt(unsigned int cpu, struct cpumask *dstp)
++{
++ cpumask_set_cpu(cpu, dstp);
++ if (cpumask_subset(cpu_smt_mask(cpu), sched_idle_mask))
++ cpumask_or(sched_sg_idle_mask, sched_sg_idle_mask, cpu_smt_mask(cpu));
++}
++
++static void clear_idle_mask_smt(int cpu, struct cpumask *dstp)
++{
++ cpumask_clear_cpu(cpu, dstp);
++ cpumask_andnot(sched_sg_idle_mask, sched_sg_idle_mask, cpu_smt_mask(cpu));
++}
++#endif
++
++static void set_idle_mask_pcore(unsigned int cpu, struct cpumask *dstp)
++{
++ cpumask_set_cpu(cpu, dstp);
++ cpumask_set_cpu(cpu, sched_pcore_idle_mask);
++}
++
++static void clear_idle_mask_pcore(int cpu, struct cpumask *dstp)
++{
++ cpumask_clear_cpu(cpu, dstp);
++ cpumask_clear_cpu(cpu, sched_pcore_idle_mask);
++}
++
++static void set_idle_mask_ecore(unsigned int cpu, struct cpumask *dstp)
++{
++ cpumask_set_cpu(cpu, dstp);
++ cpumask_set_cpu(cpu, sched_ecore_idle_mask);
++}
++
++static void clear_idle_mask_ecore(int cpu, struct cpumask *dstp)
++{
++ cpumask_clear_cpu(cpu, dstp);
++ cpumask_clear_cpu(cpu, sched_ecore_idle_mask);
++}
++
++/*
++ * Idle cpu/rq selection functions
++ */
++#ifdef CONFIG_SCHED_SMT
++static bool p1_idle_select_func(struct cpumask *dstp, const struct cpumask *src1p,
++ const struct cpumask *src2p)
++{
++ return cpumask_and(dstp, src1p, src2p + 1) ||
++ cpumask_and(dstp, src1p, src2p);
++}
++#endif
++
++static bool p1p2_idle_select_func(struct cpumask *dstp, const struct cpumask *src1p,
++ const struct cpumask *src2p)
++{
++ return cpumask_and(dstp, src1p, src2p + 1) ||
++ cpumask_and(dstp, src1p, src2p + 2) ||
++ cpumask_and(dstp, src1p, src2p);
++}
++
++/* common balance functions */
++static int active_balance_cpu_stop(void *data)
++{
++ struct balance_arg *arg = data;
++ struct task_struct *p = arg->task;
++ struct rq *rq = this_rq();
++ unsigned long flags;
++ cpumask_t tmp;
++
++ local_irq_save(flags);
++
++ raw_spin_lock(&p->pi_lock);
++ raw_spin_lock(&rq->lock);
++
++ arg->active = 0;
++
++ if (task_on_rq_queued(p) && task_rq(p) == rq &&
++ cpumask_and(&tmp, p->cpus_ptr, arg->cpumask) &&
++ !is_migration_disabled(p)) {
++ int dcpu = __best_mask_cpu(&tmp, per_cpu(sched_cpu_llc_mask, cpu_of(rq)));
++ rq = move_queued_task(rq, p, dcpu);
++ }
++
++ raw_spin_unlock(&rq->lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++
++ return 0;
++}
++
++/* trigger_active_balance - for @rq */
++static inline int
++trigger_active_balance(struct rq *src_rq, struct rq *rq, cpumask_t *target_mask)
++{
++ struct balance_arg *arg;
++ unsigned long flags;
++ struct task_struct *p;
++ int res;
++
++ if (!raw_spin_trylock_irqsave(&rq->lock, flags))
++ return 0;
++
++ arg = &rq->active_balance_arg;
++ res = (1 == rq->nr_running) && \
++ !is_migration_disabled((p = sched_rq_first_task(rq))) && \
++ cpumask_intersects(p->cpus_ptr, target_mask) && \
++ !arg->active;
++ if (res) {
++ arg->task = p;
++ arg->cpumask = target_mask;
++
++ arg->active = 1;
++ }
++
++ raw_spin_unlock_irqrestore(&rq->lock, flags);
++
++ if (res) {
++ preempt_disable();
++ raw_spin_unlock(&src_rq->lock);
++
++ stop_one_cpu_nowait(cpu_of(rq), active_balance_cpu_stop, arg,
++ &rq->active_balance_work);
++
++ preempt_enable();
++ raw_spin_lock(&src_rq->lock);
++ }
++
++ return res;
++}
++
++static inline int
++ecore_source_balance(struct rq *rq, cpumask_t *single_task_mask, cpumask_t *target_mask)
++{
++ if (cpumask_andnot(single_task_mask, single_task_mask, &sched_pcore_mask)) {
++ int i, cpu = cpu_of(rq);
++
++ for_each_cpu_wrap(i, single_task_mask, cpu)
++ if (trigger_active_balance(rq, cpu_rq(i), target_mask))
++ return 1;
++ }
++
++ return 0;
++}
++
++static DEFINE_PER_CPU(struct balance_callback, active_balance_head);
++
++#ifdef CONFIG_SCHED_SMT
++static inline int
++smt_pcore_source_balance(struct rq *rq, cpumask_t *single_task_mask, cpumask_t *target_mask)
++{
++ cpumask_t smt_single_mask;
++
++ if (cpumask_and(&smt_single_mask, single_task_mask, &sched_smt_mask)) {
++ int i, cpu = cpu_of(rq);
++
++ for_each_cpu_wrap(i, &smt_single_mask, cpu) {
++ if (cpumask_subset(cpu_smt_mask(i), &smt_single_mask) &&
++ trigger_active_balance(rq, cpu_rq(i), target_mask))
++ return 1;
++ }
++ }
++
++ return 0;
++}
++
++/* smt p core balance functions */
++static inline void smt_pcore_balance(struct rq *rq)
++{
++ cpumask_t single_task_mask;
++
++ if (cpumask_andnot(&single_task_mask, cpu_active_mask, sched_idle_mask) &&
++ cpumask_andnot(&single_task_mask, &single_task_mask, &sched_rq_pending_mask) &&
++ (/* smt core group balance */
++ (static_key_count(&sched_smt_present.key) > 1 &&
++ smt_pcore_source_balance(rq, &single_task_mask, sched_sg_idle_mask)
++ ) ||
++ /* e core to idle smt core balance */
++ ecore_source_balance(rq, &single_task_mask, sched_sg_idle_mask)))
++ return;
++}
++
++static void smt_pcore_balance_func(struct rq *rq, const int cpu)
++{
++ if (cpumask_test_cpu(cpu, sched_sg_idle_mask))
++ queue_balance_callback(rq, &per_cpu(active_balance_head, cpu), smt_pcore_balance);
++}
++
++/* smt balance functions */
++static inline void smt_balance(struct rq *rq)
++{
++ cpumask_t single_task_mask;
++
++ if (cpumask_andnot(&single_task_mask, cpu_active_mask, sched_idle_mask) &&
++ cpumask_andnot(&single_task_mask, &single_task_mask, &sched_rq_pending_mask) &&
++ static_key_count(&sched_smt_present.key) > 1 &&
++ smt_pcore_source_balance(rq, &single_task_mask, sched_sg_idle_mask))
++ return;
++}
++
++static void smt_balance_func(struct rq *rq, const int cpu)
++{
++ if (cpumask_test_cpu(cpu, sched_sg_idle_mask))
++ queue_balance_callback(rq, &per_cpu(active_balance_head, cpu), smt_balance);
++}
++
++/* e core balance functions */
++static inline void ecore_balance(struct rq *rq)
++{
++ cpumask_t single_task_mask;
++
++ if (cpumask_andnot(&single_task_mask, cpu_active_mask, sched_idle_mask) &&
++ cpumask_andnot(&single_task_mask, &single_task_mask, &sched_rq_pending_mask) &&
++ /* smt occupied p core to idle e core balance */
++ smt_pcore_source_balance(rq, &single_task_mask, sched_ecore_idle_mask))
++ return;
++}
++
++static void ecore_balance_func(struct rq *rq, const int cpu)
++{
++ queue_balance_callback(rq, &per_cpu(active_balance_head, cpu), ecore_balance);
++}
++#endif /* CONFIG_SCHED_SMT */
++
++/* p core balance functions */
++static inline void pcore_balance(struct rq *rq)
++{
++ cpumask_t single_task_mask;
++
++ if (cpumask_andnot(&single_task_mask, cpu_active_mask, sched_idle_mask) &&
++ cpumask_andnot(&single_task_mask, &single_task_mask, &sched_rq_pending_mask) &&
++ /* idle e core to p core balance */
++ ecore_source_balance(rq, &single_task_mask, sched_pcore_idle_mask))
++ return;
++}
++
++static void pcore_balance_func(struct rq *rq, const int cpu)
++{
++ queue_balance_callback(rq, &per_cpu(active_balance_head, cpu), pcore_balance);
++}
++
++#ifdef ALT_SCHED_DEBUG
++#define SCHED_DEBUG_INFO(...) printk(KERN_INFO __VA_ARGS__)
++#else
++#define SCHED_DEBUG_INFO(...) do { } while(0)
++#endif
++
++#define SET_IDLE_SELECT_FUNC(func) \
++{ \
++ idle_select_func = func; \
++ printk(KERN_INFO "sched: "#func); \
++}
++
++#define SET_RQ_BALANCE_FUNC(rq, cpu, func) \
++{ \
++ rq->balance_func = func; \
++ SCHED_DEBUG_INFO("sched: cpu#%02d -> "#func, cpu); \
++}
++
++#define SET_RQ_IDLE_MASK_FUNC(rq, cpu, set_func, clear_func) \
++{ \
++ rq->set_idle_mask_func = set_func; \
++ rq->clear_idle_mask_func = clear_func; \
++ SCHED_DEBUG_INFO("sched: cpu#%02d -> "#set_func" "#clear_func, cpu); \
++}
++
++void sched_init_topology(void)
++{
++ int cpu;
++ struct rq *rq;
++ cpumask_t sched_ecore_mask = { CPU_BITS_NONE };
++ int ecore_present = 0;
++
++#ifdef CONFIG_SCHED_SMT
++ if (!cpumask_empty(&sched_smt_mask))
++ printk(KERN_INFO "sched: smt mask: 0x%08lx\n", sched_smt_mask.bits[0]);
++#endif
++
++ if (!cpumask_empty(&sched_pcore_mask)) {
++ cpumask_andnot(&sched_ecore_mask, cpu_online_mask, &sched_pcore_mask);
++ printk(KERN_INFO "sched: pcore mask: 0x%08lx, ecore mask: 0x%08lx\n",
++ sched_pcore_mask.bits[0], sched_ecore_mask.bits[0]);
++
++ ecore_present = !cpumask_empty(&sched_ecore_mask);
++ }
++
++#ifdef CONFIG_SCHED_SMT
++ /* idle select function */
++ if (cpumask_equal(&sched_smt_mask, cpu_online_mask)) {
++ SET_IDLE_SELECT_FUNC(p1_idle_select_func);
++ } else
++#endif
++ if (!cpumask_empty(&sched_pcore_mask)) {
++ SET_IDLE_SELECT_FUNC(p1p2_idle_select_func);
++ }
++
++ for_each_online_cpu(cpu) {
++ rq = cpu_rq(cpu);
++ /* take chance to reset time slice for idle tasks */
++ rq->idle->time_slice = sysctl_sched_base_slice;
++
++#ifdef CONFIG_SCHED_SMT
++ if (cpumask_weight(cpu_smt_mask(cpu)) > 1) {
++ SET_RQ_IDLE_MASK_FUNC(rq, cpu, set_idle_mask_smt, clear_idle_mask_smt);
++
++ if (cpumask_test_cpu(cpu, &sched_pcore_mask) &&
++ !cpumask_intersects(&sched_ecore_mask, &sched_smt_mask)) {
++ SET_RQ_BALANCE_FUNC(rq, cpu, smt_pcore_balance_func);
++ } else {
++ SET_RQ_BALANCE_FUNC(rq, cpu, smt_balance_func);
++ }
++
++ continue;
++ }
++#endif
++ /* !SMT or only one cpu in sg */
++ if (cpumask_test_cpu(cpu, &sched_pcore_mask)) {
++ SET_RQ_IDLE_MASK_FUNC(rq, cpu, set_idle_mask_pcore, clear_idle_mask_pcore);
++
++ if (ecore_present)
++ SET_RQ_BALANCE_FUNC(rq, cpu, pcore_balance_func);
++
++ continue;
++ }
++ if (cpumask_test_cpu(cpu, &sched_ecore_mask)) {
++ SET_RQ_IDLE_MASK_FUNC(rq, cpu, set_idle_mask_ecore, clear_idle_mask_ecore);
++#ifdef CONFIG_SCHED_SMT
++ if (cpumask_intersects(&sched_pcore_mask, &sched_smt_mask))
++ SET_RQ_BALANCE_FUNC(rq, cpu, ecore_balance_func);
++#endif
++ }
++ }
++}
++#endif /* CONFIG_SMP */
+diff --git a/kernel/sched/alt_topology.h b/kernel/sched/alt_topology.h
+new file mode 100644
+index 000000000000..076174cd2bc6
+--- /dev/null
++++ b/kernel/sched/alt_topology.h
+@@ -0,0 +1,6 @@
++#ifndef _KERNEL_SCHED_ALT_TOPOLOGY_H
++#define _KERNEL_SCHED_ALT_TOPOLOGY_H
++
++extern void sched_init_topology(void);
++
++#endif /* _KERNEL_SCHED_ALT_TOPOLOGY_H */
+diff --git a/kernel/sched/bmq.h b/kernel/sched/bmq.h
+new file mode 100644
+index 000000000000..5a7835246ec3
+--- /dev/null
++++ b/kernel/sched/bmq.h
+@@ -0,0 +1,103 @@
++#ifndef _KERNEL_SCHED_BMQ_H
++#define _KERNEL_SCHED_BMQ_H
++
++#define ALT_SCHED_NAME "BMQ"
++
++/*
++ * BMQ only routines
++ */
++static inline void boost_task(struct task_struct *p, int n)
++{
++ int limit;
++
++ switch (p->policy) {
++ case SCHED_NORMAL:
++ limit = -MAX_PRIORITY_ADJ;
++ break;
++ case SCHED_BATCH:
++ limit = 0;
++ break;
++ default:
++ return;
++ }
++
++ p->boost_prio = max(limit, p->boost_prio - n);
++}
++
++static inline void deboost_task(struct task_struct *p)
++{
++ if (p->boost_prio < MAX_PRIORITY_ADJ)
++ p->boost_prio++;
++}
++
++/*
++ * Common interfaces
++ */
++static inline void sched_timeslice_imp(const int timeslice_ms) {}
++
++/* This API is used in task_prio(), return value read by human users */
++static inline int
++task_sched_prio_normal(const struct task_struct *p, const struct rq *rq)
++{
++ return p->prio + p->boost_prio - MIN_NORMAL_PRIO;
++}
++
++static inline int task_sched_prio(const struct task_struct *p)
++{
++ return (p->prio < MIN_NORMAL_PRIO)? (p->prio >> 2) :
++ MIN_SCHED_NORMAL_PRIO + (p->prio + p->boost_prio - MIN_NORMAL_PRIO) / 2;
++}
++
++#define TASK_SCHED_PRIO_IDX(p, rq, idx, prio) \
++ prio = task_sched_prio(p); \
++ idx = prio;
++
++static inline int sched_prio2idx(int prio, struct rq *rq)
++{
++ return prio;
++}
++
++static inline int sched_idx2prio(int idx, struct rq *rq)
++{
++ return idx;
++}
++
++static inline int sched_rq_prio_idx(struct rq *rq)
++{
++ return rq->prio;
++}
++
++static inline int task_running_nice(struct task_struct *p)
++{
++ return (p->prio + p->boost_prio > DEFAULT_PRIO);
++}
++
++static inline void sched_update_rq_clock(struct rq *rq) {}
++
++static inline void sched_task_renew(struct task_struct *p, const struct rq *rq)
++{
++ deboost_task(p);
++}
++
++static inline void sched_task_sanity_check(struct task_struct *p, struct rq *rq) {}
++static inline void sched_task_fork(struct task_struct *p, struct rq *rq) {}
++
++static inline void do_sched_yield_type_1(struct task_struct *p, struct rq *rq)
++{
++ p->boost_prio = MAX_PRIORITY_ADJ;
++}
++
++static inline void sched_task_ttwu(struct task_struct *p)
++{
++ s64 delta = this_rq()->clock_task > p->last_ran;
++
++ if (likely(delta > 0))
++ boost_task(p, delta >> 22);
++}
++
++static inline void sched_task_deactivate(struct task_struct *p, struct rq *rq)
++{
++ boost_task(p, 1);
++}
++
++#endif /* _KERNEL_SCHED_BMQ_H */
+diff --git a/kernel/sched/build_policy.c b/kernel/sched/build_policy.c
+index fae1f5c921eb..1e06434b5b9b 100644
+--- a/kernel/sched/build_policy.c
++++ b/kernel/sched/build_policy.c
+@@ -49,15 +49,21 @@
+
+ #include "idle.c"
+
++#ifndef CONFIG_SCHED_ALT
+ #include "rt.c"
++#endif
+
+ #ifdef CONFIG_SMP
++#ifndef CONFIG_SCHED_ALT
+ # include "cpudeadline.c"
++#endif
+ # include "pelt.c"
+ #endif
+
+ #include "cputime.c"
++#ifndef CONFIG_SCHED_ALT
+ #include "deadline.c"
++#endif
+
+ #ifdef CONFIG_SCHED_CLASS_EXT
+ # include "ext.c"
+diff --git a/kernel/sched/build_utility.c b/kernel/sched/build_utility.c
+index 80a3df49ab47..58d04aa73634 100644
+--- a/kernel/sched/build_utility.c
++++ b/kernel/sched/build_utility.c
+@@ -56,6 +56,10 @@
+
+ #include "clock.c"
+
++#ifdef CONFIG_SCHED_ALT
++# include "alt_topology.c"
++#endif
++
+ #ifdef CONFIG_CGROUP_CPUACCT
+ # include "cpuacct.c"
+ #endif
+@@ -84,7 +88,9 @@
+
+ #ifdef CONFIG_SMP
+ # include "cpupri.c"
++#ifndef CONFIG_SCHED_ALT
+ # include "stop_task.c"
++#endif
+ # include "topology.c"
+ #endif
+
+diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
+index 1a19d69b91ed..1f3116b727cb 100644
+--- a/kernel/sched/cpufreq_schedutil.c
++++ b/kernel/sched/cpufreq_schedutil.c
+@@ -197,6 +197,7 @@ unsigned long sugov_effective_cpu_perf(int cpu, unsigned long actual,
+
+ static void sugov_get_util(struct sugov_cpu *sg_cpu, unsigned long boost)
+ {
++#ifndef CONFIG_SCHED_ALT
+ unsigned long min, max, util = scx_cpuperf_target(sg_cpu->cpu);
+
+ if (!scx_switched_all())
+@@ -205,6 +206,10 @@ static void sugov_get_util(struct sugov_cpu *sg_cpu, unsigned long boost)
+ util = max(util, boost);
+ sg_cpu->bw_min = min;
+ sg_cpu->util = sugov_effective_cpu_perf(sg_cpu->cpu, util, min, max);
++#else /* CONFIG_SCHED_ALT */
++ sg_cpu->bw_min = 0;
++ sg_cpu->util = rq_load_util(cpu_rq(sg_cpu->cpu), arch_scale_cpu_capacity(sg_cpu->cpu));
++#endif /* CONFIG_SCHED_ALT */
+ }
+
+ /**
+@@ -364,8 +369,10 @@ static inline bool sugov_hold_freq(struct sugov_cpu *sg_cpu) { return false; }
+ */
+ static inline void ignore_dl_rate_limit(struct sugov_cpu *sg_cpu)
+ {
++#ifndef CONFIG_SCHED_ALT
+ if (cpu_bw_dl(cpu_rq(sg_cpu->cpu)) > sg_cpu->bw_min)
+ WRITE_ONCE(sg_cpu->sg_policy->limits_changed, true);
++#endif
+ }
+
+ static inline bool sugov_update_single_common(struct sugov_cpu *sg_cpu,
+@@ -659,6 +666,7 @@ static int sugov_kthread_create(struct sugov_policy *sg_policy)
+ }
+
+ ret = sched_setattr_nocheck(thread, &attr);
++
+ if (ret) {
+ kthread_stop(thread);
+ pr_warn("%s: failed to set SCHED_DEADLINE\n", __func__);
+diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
+index 6dab4854c6c0..24705643a077 100644
+--- a/kernel/sched/cputime.c
++++ b/kernel/sched/cputime.c
+@@ -124,7 +124,7 @@ void account_user_time(struct task_struct *p, u64 cputime)
+ p->utime += cputime;
+ account_group_user_time(p, cputime);
+
+- index = (task_nice(p) > 0) ? CPUTIME_NICE : CPUTIME_USER;
++ index = task_running_nice(p) ? CPUTIME_NICE : CPUTIME_USER;
+
+ /* Add user time to cpustat. */
+ task_group_account_field(p, index, cputime);
+@@ -148,7 +148,7 @@ void account_guest_time(struct task_struct *p, u64 cputime)
+ p->gtime += cputime;
+
+ /* Add guest time to cpustat. */
+- if (task_nice(p) > 0) {
++ if (task_running_nice(p)) {
+ task_group_account_field(p, CPUTIME_NICE, cputime);
+ cpustat[CPUTIME_GUEST_NICE] += cputime;
+ } else {
+@@ -286,7 +286,7 @@ static inline u64 account_other_time(u64 max)
+ #ifdef CONFIG_64BIT
+ static inline u64 read_sum_exec_runtime(struct task_struct *t)
+ {
+- return t->se.sum_exec_runtime;
++ return tsk_seruntime(t);
+ }
+ #else
+ static u64 read_sum_exec_runtime(struct task_struct *t)
+@@ -296,7 +296,7 @@ static u64 read_sum_exec_runtime(struct task_struct *t)
+ struct rq *rq;
+
+ rq = task_rq_lock(t, &rf);
+- ns = t->se.sum_exec_runtime;
++ ns = tsk_seruntime(t);
+ task_rq_unlock(rq, t, &rf);
+
+ return ns;
+@@ -621,7 +621,7 @@ void cputime_adjust(struct task_cputime *curr, struct prev_cputime *prev,
+ void task_cputime_adjusted(struct task_struct *p, u64 *ut, u64 *st)
+ {
+ struct task_cputime cputime = {
+- .sum_exec_runtime = p->se.sum_exec_runtime,
++ .sum_exec_runtime = tsk_seruntime(p),
+ };
+
+ if (task_cputime(p, &cputime.utime, &cputime.stime))
+diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
+index ef047add7f9e..26d9e8010a43 100644
+--- a/kernel/sched/debug.c
++++ b/kernel/sched/debug.c
+@@ -7,6 +7,7 @@
+ * Copyright(C) 2007, Red Hat, Inc., Ingo Molnar
+ */
+
++#ifndef CONFIG_SCHED_ALT
+ /*
+ * This allows printing both to /sys/kernel/debug/sched/debug and
+ * to the console
+@@ -215,6 +216,7 @@ static const struct file_operations sched_scaling_fops = {
+ };
+
+ #endif /* SMP */
++#endif /* !CONFIG_SCHED_ALT */
+
+ #ifdef CONFIG_PREEMPT_DYNAMIC
+
+@@ -279,6 +281,7 @@ static const struct file_operations sched_dynamic_fops = {
+
+ #endif /* CONFIG_PREEMPT_DYNAMIC */
+
++#ifndef CONFIG_SCHED_ALT
+ __read_mostly bool sched_debug_verbose;
+
+ #ifdef CONFIG_SMP
+@@ -469,9 +472,11 @@ static const struct file_operations fair_server_period_fops = {
+ .llseek = seq_lseek,
+ .release = single_release,
+ };
++#endif /* !CONFIG_SCHED_ALT */
+
+ static struct dentry *debugfs_sched;
+
++#ifndef CONFIG_SCHED_ALT
+ static void debugfs_fair_server_init(void)
+ {
+ struct dentry *d_fair;
+@@ -492,6 +497,7 @@ static void debugfs_fair_server_init(void)
+ debugfs_create_file("period", 0644, d_cpu, (void *) cpu, &fair_server_period_fops);
+ }
+ }
++#endif /* !CONFIG_SCHED_ALT */
+
+ static __init int sched_init_debug(void)
+ {
+@@ -499,14 +505,17 @@ static __init int sched_init_debug(void)
+
+ debugfs_sched = debugfs_create_dir("sched", NULL);
+
++#ifndef CONFIG_SCHED_ALT
+ debugfs_create_file("features", 0644, debugfs_sched, NULL, &sched_feat_fops);
+ debugfs_create_file_unsafe("verbose", 0644, debugfs_sched, &sched_debug_verbose, &sched_verbose_fops);
++#endif /* !CONFIG_SCHED_ALT */
+ #ifdef CONFIG_PREEMPT_DYNAMIC
+ debugfs_create_file("preempt", 0644, debugfs_sched, NULL, &sched_dynamic_fops);
+ #endif
+
+ debugfs_create_u32("base_slice_ns", 0644, debugfs_sched, &sysctl_sched_base_slice);
+
++#ifndef CONFIG_SCHED_ALT
+ debugfs_create_u32("latency_warn_ms", 0644, debugfs_sched, &sysctl_resched_latency_warn_ms);
+ debugfs_create_u32("latency_warn_once", 0644, debugfs_sched, &sysctl_resched_latency_warn_once);
+
+@@ -531,13 +540,17 @@ static __init int sched_init_debug(void)
+ #endif
+
+ debugfs_create_file("debug", 0444, debugfs_sched, NULL, &sched_debug_fops);
++#endif /* !CONFIG_SCHED_ALT */
+
++#ifndef CONFIG_SCHED_ALT
+ debugfs_fair_server_init();
++#endif /* !CONFIG_SCHED_ALT */
+
+ return 0;
+ }
+ late_initcall(sched_init_debug);
+
++#ifndef CONFIG_SCHED_ALT
+ #ifdef CONFIG_SMP
+
+ static cpumask_var_t sd_sysctl_cpus;
+@@ -1289,6 +1302,7 @@ void proc_sched_set_task(struct task_struct *p)
+ memset(&p->stats, 0, sizeof(p->stats));
+ #endif
+ }
++#endif /* !CONFIG_SCHED_ALT */
+
+ void resched_latency_warn(int cpu, u64 latency)
+ {
+diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
+index 2c85c86b455f..4369a4b123c9 100644
+--- a/kernel/sched/idle.c
++++ b/kernel/sched/idle.c
+@@ -423,6 +423,7 @@ void cpu_startup_entry(enum cpuhp_state state)
+ do_idle();
+ }
+
++#ifndef CONFIG_SCHED_ALT
+ /*
+ * idle-task scheduling class.
+ */
+@@ -538,3 +539,4 @@ DEFINE_SCHED_CLASS(idle) = {
+ .switched_to = switched_to_idle,
+ .update_curr = update_curr_idle,
+ };
++#endif
+diff --git a/kernel/sched/pds.h b/kernel/sched/pds.h
+new file mode 100644
+index 000000000000..fe3099071eb7
+--- /dev/null
++++ b/kernel/sched/pds.h
+@@ -0,0 +1,139 @@
++#ifndef _KERNEL_SCHED_PDS_H
++#define _KERNEL_SCHED_PDS_H
++
++#define ALT_SCHED_NAME "PDS"
++
++static const u64 RT_MASK = ((1ULL << MIN_SCHED_NORMAL_PRIO) - 1);
++
++#define SCHED_NORMAL_PRIO_NUM (32)
++#define SCHED_EDGE_DELTA (SCHED_NORMAL_PRIO_NUM - NICE_WIDTH / 2)
++
++/* PDS assumes SCHED_NORMAL_PRIO_NUM is a power of 2 */
++#define SCHED_NORMAL_PRIO_MOD(x) ((x) & (SCHED_NORMAL_PRIO_NUM - 1))
++
++/* default time slice 4ms -> shift 22, 2 time slice slots -> shift 23 */
++static __read_mostly int sched_timeslice_shift = 23;
++
++/*
++ * Common interfaces
++ */
++static inline int
++task_sched_prio_normal(const struct task_struct *p, const struct rq *rq)
++{
++ u64 sched_dl = max(p->deadline, rq->time_edge);
++
++#ifdef ALT_SCHED_DEBUG
++ if (WARN_ONCE(sched_dl - rq->time_edge > NORMAL_PRIO_NUM - 1,
++ "pds: task_sched_prio_normal() delta %lld\n", sched_dl - rq->time_edge))
++ return SCHED_NORMAL_PRIO_NUM - 1;
++#endif
++
++ return sched_dl - rq->time_edge;
++}
++
++static inline int task_sched_prio(const struct task_struct *p)
++{
++ return (p->prio < MIN_NORMAL_PRIO) ? (p->prio >> 2) :
++ MIN_SCHED_NORMAL_PRIO + task_sched_prio_normal(p, task_rq(p));
++}
++
++#define TASK_SCHED_PRIO_IDX(p, rq, idx, prio) \
++ if (p->prio < MIN_NORMAL_PRIO) { \
++ prio = p->prio >> 2; \
++ idx = prio; \
++ } else { \
++ u64 sched_dl = max(p->deadline, rq->time_edge); \
++ prio = MIN_SCHED_NORMAL_PRIO + sched_dl - rq->time_edge; \
++ idx = MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(sched_dl); \
++ }
++
++static inline int sched_prio2idx(int sched_prio, struct rq *rq)
++{
++ return (IDLE_TASK_SCHED_PRIO == sched_prio || sched_prio < MIN_SCHED_NORMAL_PRIO) ?
++ sched_prio :
++ MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(sched_prio + rq->time_edge);
++}
++
++static inline int sched_idx2prio(int sched_idx, struct rq *rq)
++{
++ return (sched_idx < MIN_SCHED_NORMAL_PRIO) ?
++ sched_idx :
++ MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(sched_idx - rq->time_edge);
++}
++
++static inline int sched_rq_prio_idx(struct rq *rq)
++{
++ return rq->prio_idx;
++}
++
++static inline int task_running_nice(struct task_struct *p)
++{
++ return (p->prio > DEFAULT_PRIO);
++}
++
++static inline void sched_update_rq_clock(struct rq *rq)
++{
++ struct list_head head;
++ u64 old = rq->time_edge;
++ u64 now = rq->clock >> sched_timeslice_shift;
++ u64 prio, delta;
++ DECLARE_BITMAP(normal, SCHED_QUEUE_BITS);
++
++ if (now == old)
++ return;
++
++ rq->time_edge = now;
++ delta = min_t(u64, SCHED_NORMAL_PRIO_NUM, now - old);
++ INIT_LIST_HEAD(&head);
++
++ prio = MIN_SCHED_NORMAL_PRIO;
++ for_each_set_bit_from(prio, rq->queue.bitmap, MIN_SCHED_NORMAL_PRIO + delta)
++ list_splice_tail_init(rq->queue.heads + MIN_SCHED_NORMAL_PRIO +
++ SCHED_NORMAL_PRIO_MOD(prio + old), &head);
++
++ bitmap_shift_right(normal, rq->queue.bitmap, delta, SCHED_QUEUE_BITS);
++ if (!list_empty(&head)) {
++ u64 idx = MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(now);
++
++ __list_splice(&head, rq->queue.heads + idx, rq->queue.heads[idx].next);
++ set_bit(MIN_SCHED_NORMAL_PRIO, normal);
++ }
++ bitmap_replace(rq->queue.bitmap, normal, rq->queue.bitmap,
++ (const unsigned long *)&RT_MASK, SCHED_QUEUE_BITS);
++
++ if (rq->prio < MIN_SCHED_NORMAL_PRIO || IDLE_TASK_SCHED_PRIO == rq->prio)
++ return;
++
++ rq->prio = max_t(u64, MIN_SCHED_NORMAL_PRIO, rq->prio - delta);
++ rq->prio_idx = sched_prio2idx(rq->prio, rq);
++}
++
++static inline void sched_task_renew(struct task_struct *p, const struct rq *rq)
++{
++ if (p->prio >= MIN_NORMAL_PRIO)
++ p->deadline = rq->time_edge + SCHED_EDGE_DELTA +
++ (p->static_prio - (MAX_PRIO - NICE_WIDTH)) / 2;
++}
++
++static inline void sched_task_sanity_check(struct task_struct *p, struct rq *rq)
++{
++ u64 max_dl = rq->time_edge + SCHED_EDGE_DELTA + NICE_WIDTH / 2 - 1;
++ if (unlikely(p->deadline > max_dl))
++ p->deadline = max_dl;
++}
++
++static inline void sched_task_fork(struct task_struct *p, struct rq *rq)
++{
++ sched_task_renew(p, rq);
++}
++
++static inline void do_sched_yield_type_1(struct task_struct *p, struct rq *rq)
++{
++ p->time_slice = sysctl_sched_base_slice;
++ sched_task_renew(p, rq);
++}
++
++static inline void sched_task_ttwu(struct task_struct *p) {}
++static inline void sched_task_deactivate(struct task_struct *p, struct rq *rq) {}
++
++#endif /* _KERNEL_SCHED_PDS_H */
+diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
+index 7a8534a2deff..c57eb8f000d1 100644
+--- a/kernel/sched/pelt.c
++++ b/kernel/sched/pelt.c
+@@ -266,6 +266,7 @@ ___update_load_avg(struct sched_avg *sa, unsigned long load)
+ WRITE_ONCE(sa->util_avg, sa->util_sum / divider);
+ }
+
++#ifndef CONFIG_SCHED_ALT
+ /*
+ * sched_entity:
+ *
+@@ -383,8 +384,9 @@ int update_dl_rq_load_avg(u64 now, struct rq *rq, int running)
+
+ return 0;
+ }
++#endif
+
+-#ifdef CONFIG_SCHED_HW_PRESSURE
++#if defined(CONFIG_SCHED_HW_PRESSURE) && !defined(CONFIG_SCHED_ALT)
+ /*
+ * hardware:
+ *
+@@ -468,6 +470,7 @@ int update_irq_load_avg(struct rq *rq, u64 running)
+ }
+ #endif
+
++#ifndef CONFIG_SCHED_ALT
+ /*
+ * Load avg and utiliztion metrics need to be updated periodically and before
+ * consumption. This function updates the metrics for all subsystems except for
+@@ -487,3 +490,4 @@ bool update_other_load_avgs(struct rq *rq)
+ update_hw_load_avg(rq_clock_task(rq), rq, hw_pressure) |
+ update_irq_load_avg(rq, 0);
+ }
++#endif /* !CONFIG_SCHED_ALT */
+diff --git a/kernel/sched/pelt.h b/kernel/sched/pelt.h
+index f4f6a0875c66..ee780f2b6c17 100644
+--- a/kernel/sched/pelt.h
++++ b/kernel/sched/pelt.h
+@@ -1,14 +1,16 @@
+ #ifdef CONFIG_SMP
+ #include "sched-pelt.h"
+
++#ifndef CONFIG_SCHED_ALT
+ int __update_load_avg_blocked_se(u64 now, struct sched_entity *se);
+ int __update_load_avg_se(u64 now, struct cfs_rq *cfs_rq, struct sched_entity *se);
+ int __update_load_avg_cfs_rq(u64 now, struct cfs_rq *cfs_rq);
+ int update_rt_rq_load_avg(u64 now, struct rq *rq, int running);
+ int update_dl_rq_load_avg(u64 now, struct rq *rq, int running);
+ bool update_other_load_avgs(struct rq *rq);
++#endif
+
+-#ifdef CONFIG_SCHED_HW_PRESSURE
++#if defined(CONFIG_SCHED_HW_PRESSURE) && !defined(CONFIG_SCHED_ALT)
+ int update_hw_load_avg(u64 now, struct rq *rq, u64 capacity);
+
+ static inline u64 hw_load_avg(struct rq *rq)
+@@ -45,6 +47,7 @@ static inline u32 get_pelt_divider(struct sched_avg *avg)
+ return PELT_MIN_DIVIDER + avg->period_contrib;
+ }
+
++#ifndef CONFIG_SCHED_ALT
+ static inline void cfs_se_util_change(struct sched_avg *avg)
+ {
+ unsigned int enqueued;
+@@ -181,9 +184,11 @@ static inline u64 cfs_rq_clock_pelt(struct cfs_rq *cfs_rq)
+ return rq_clock_pelt(rq_of(cfs_rq));
+ }
+ #endif
++#endif /* CONFIG_SCHED_ALT */
+
+ #else
+
++#ifndef CONFIG_SCHED_ALT
+ static inline int
+ update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
+ {
+@@ -201,6 +206,7 @@ update_dl_rq_load_avg(u64 now, struct rq *rq, int running)
+ {
+ return 0;
+ }
++#endif
+
+ static inline int
+ update_hw_load_avg(u64 now, struct rq *rq, u64 capacity)
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index 023b844159c9..87f9def0add8 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -5,6 +5,10 @@
+ #ifndef _KERNEL_SCHED_SCHED_H
+ #define _KERNEL_SCHED_SCHED_H
+
++#ifdef CONFIG_SCHED_ALT
++#include "alt_sched.h"
++#else
++
+ #include <linux/sched/affinity.h>
+ #include <linux/sched/autogroup.h>
+ #include <linux/sched/cpufreq.h>
+@@ -4003,4 +4007,9 @@ void sched_enq_and_set_task(struct sched_enq_and_set_ctx *ctx);
+
+ #include "ext.h"
+
++static inline int task_running_nice(struct task_struct *p)
++{
++ return (task_nice(p) > 0);
++}
++#endif /* !CONFIG_SCHED_ALT */
+ #endif /* _KERNEL_SCHED_SCHED_H */
+diff --git a/kernel/sched/stats.c b/kernel/sched/stats.c
+index 4346fd81c31f..11f05554b538 100644
+--- a/kernel/sched/stats.c
++++ b/kernel/sched/stats.c
+@@ -115,8 +115,10 @@ static int show_schedstat(struct seq_file *seq, void *v)
+ } else {
+ struct rq *rq;
+ #ifdef CONFIG_SMP
++#ifndef CONFIG_SCHED_ALT
+ struct sched_domain *sd;
+ int dcount = 0;
++#endif
+ #endif
+ cpu = (unsigned long)(v - 2);
+ rq = cpu_rq(cpu);
+@@ -133,6 +135,7 @@ static int show_schedstat(struct seq_file *seq, void *v)
+ seq_printf(seq, "\n");
+
+ #ifdef CONFIG_SMP
++#ifndef CONFIG_SCHED_ALT
+ /* domain-specific stats */
+ rcu_read_lock();
+ for_each_domain(cpu, sd) {
+@@ -163,6 +166,7 @@ static int show_schedstat(struct seq_file *seq, void *v)
+ sd->ttwu_move_balance);
+ }
+ rcu_read_unlock();
++#endif
+ #endif
+ }
+ return 0;
+diff --git a/kernel/sched/stats.h b/kernel/sched/stats.h
+index 19cdbe96f93d..24fad39477ab 100644
+--- a/kernel/sched/stats.h
++++ b/kernel/sched/stats.h
+@@ -89,6 +89,7 @@ static inline void rq_sched_info_depart (struct rq *rq, unsigned long long delt
+
+ #endif /* CONFIG_SCHEDSTATS */
+
++#ifndef CONFIG_SCHED_ALT
+ #ifdef CONFIG_FAIR_GROUP_SCHED
+ struct sched_entity_stats {
+ struct sched_entity se;
+@@ -105,6 +106,7 @@ __schedstats_from_se(struct sched_entity *se)
+ #endif
+ return &task_of(se)->stats;
+ }
++#endif /* CONFIG_SCHED_ALT */
+
+ #ifdef CONFIG_PSI
+ void psi_task_change(struct task_struct *task, int clear, int set);
+diff --git a/kernel/sched/syscalls.c b/kernel/sched/syscalls.c
+index 456d339be98f..b04133740601 100644
+--- a/kernel/sched/syscalls.c
++++ b/kernel/sched/syscalls.c
+@@ -16,6 +16,14 @@
+ #include "sched.h"
+ #include "autogroup.h"
+
++#ifdef CONFIG_SCHED_ALT
++#include "alt_core.h"
++
++static inline int __normal_prio(int policy, int rt_prio, int static_prio)
++{
++ return rt_policy(policy) ? (MAX_RT_PRIO - 1 - rt_prio) : static_prio;
++}
++#else /* !CONFIG_SCHED_ALT */
+ static inline int __normal_prio(int policy, int rt_prio, int nice)
+ {
+ int prio;
+@@ -29,6 +37,7 @@ static inline int __normal_prio(int policy, int rt_prio, int nice)
+
+ return prio;
+ }
++#endif /* !CONFIG_SCHED_ALT */
+
+ /*
+ * Calculate the expected normal priority: i.e. priority
+@@ -39,7 +48,11 @@ static inline int __normal_prio(int policy, int rt_prio, int nice)
+ */
+ static inline int normal_prio(struct task_struct *p)
+ {
++#ifdef CONFIG_SCHED_ALT
++ return __normal_prio(p->policy, p->rt_priority, p->static_prio);
++#else /* !CONFIG_SCHED_ALT */
+ return __normal_prio(p->policy, p->rt_priority, PRIO_TO_NICE(p->static_prio));
++#endif /* !CONFIG_SCHED_ALT */
+ }
+
+ /*
+@@ -64,6 +77,37 @@ static int effective_prio(struct task_struct *p)
+
+ void set_user_nice(struct task_struct *p, long nice)
+ {
++#ifdef CONFIG_SCHED_ALT
++ unsigned long flags;
++ struct rq *rq;
++ raw_spinlock_t *lock;
++
++ if (task_nice(p) == nice || nice < MIN_NICE || nice > MAX_NICE)
++ return;
++ /*
++ * We have to be careful, if called from sys_setpriority(),
++ * the task might be in the middle of scheduling on another CPU.
++ */
++ raw_spin_lock_irqsave(&p->pi_lock, flags);
++ rq = __task_access_lock(p, &lock);
++
++ p->static_prio = NICE_TO_PRIO(nice);
++ /*
++ * The RT priorities are set via sched_setscheduler(), but we still
++ * allow the 'normal' nice value to be set - but as expected
++ * it won't have any effect on scheduling until the task is
++ * not SCHED_NORMAL/SCHED_BATCH:
++ */
++ if (task_has_rt_policy(p))
++ goto out_unlock;
++
++ p->prio = effective_prio(p);
++
++ check_task_changed(p, rq);
++out_unlock:
++ __task_access_unlock(p, lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++#else
+ bool queued, running;
+ struct rq *rq;
+ int old_prio;
+@@ -112,6 +156,7 @@ void set_user_nice(struct task_struct *p, long nice)
+ * lowered its priority, then reschedule its CPU:
+ */
+ p->sched_class->prio_changed(rq, p, old_prio);
++#endif /* !CONFIG_SCHED_ALT */
+ }
+ EXPORT_SYMBOL(set_user_nice);
+
+@@ -190,7 +235,19 @@ SYSCALL_DEFINE1(nice, int, increment)
+ */
+ int task_prio(const struct task_struct *p)
+ {
++#ifdef CONFIG_SCHED_ALT
++/*
++ * sched policy return value kernel prio user prio/nice
++ *
++ * (BMQ)normal, batch, idle[0 ... 53] [100 ... 139] 0/[-20 ... 19]/[-7 ... 7]
++ * (PDS)normal, batch, idle[0 ... 39] 100 0/[-20 ... 19]
++ * fifo, rr [-1 ... -100] [99 ... 0] [0 ... 99]
++ */
++ return (p->prio < MAX_RT_PRIO) ? p->prio - MAX_RT_PRIO :
++ task_sched_prio_normal(p, task_rq(p));
++#else
+ return p->prio - MAX_RT_PRIO;
++#endif /* !CONFIG_SCHED_ALT */
+ }
+
+ /**
+@@ -300,11 +357,16 @@ static void __setscheduler_params(struct task_struct *p,
+
+ p->policy = policy;
+
++#ifndef CONFIG_SCHED_ALT
+ if (dl_policy(policy))
+ __setparam_dl(p, attr);
+ else if (fair_policy(policy))
+ __setparam_fair(p, attr);
++#else /* !CONFIG_SCHED_ALT */
++ p->static_prio = NICE_TO_PRIO(attr->sched_nice);
++#endif /* CONFIG_SCHED_ALT */
+
++#ifndef CONFIG_SCHED_ALT
+ /* rt-policy tasks do not have a timerslack */
+ if (rt_or_dl_task_policy(p)) {
+ p->timer_slack_ns = 0;
+@@ -312,6 +374,7 @@ static void __setscheduler_params(struct task_struct *p,
+ /* when switching back to non-rt policy, restore timerslack */
+ p->timer_slack_ns = p->default_timer_slack_ns;
+ }
++#endif /* !CONFIG_SCHED_ALT */
+
+ /*
+ * __sched_setscheduler() ensures attr->sched_priority == 0 when
+@@ -320,7 +383,9 @@ static void __setscheduler_params(struct task_struct *p,
+ */
+ p->rt_priority = attr->sched_priority;
+ p->normal_prio = normal_prio(p);
++#ifndef CONFIG_SCHED_ALT
+ set_load_weight(p, true);
++#endif /* !CONFIG_SCHED_ALT */
+ }
+
+ /*
+@@ -336,6 +401,8 @@ static bool check_same_owner(struct task_struct *p)
+ uid_eq(cred->euid, pcred->uid));
+ }
+
++#ifndef CONFIG_SCHED_ALT
++
+ #ifdef CONFIG_UCLAMP_TASK
+
+ static int uclamp_validate(struct task_struct *p,
+@@ -449,6 +516,7 @@ static inline int uclamp_validate(struct task_struct *p,
+ static void __setscheduler_uclamp(struct task_struct *p,
+ const struct sched_attr *attr) { }
+ #endif
++#endif /* !CONFIG_SCHED_ALT */
+
+ /*
+ * Allow unprivileged RT tasks to decrease priority.
+@@ -459,11 +527,13 @@ static int user_check_sched_setscheduler(struct task_struct *p,
+ const struct sched_attr *attr,
+ int policy, int reset_on_fork)
+ {
++#ifndef CONFIG_SCHED_ALT
+ if (fair_policy(policy)) {
+ if (attr->sched_nice < task_nice(p) &&
+ !is_nice_reduction(p, attr->sched_nice))
+ goto req_priv;
+ }
++#endif /* !CONFIG_SCHED_ALT */
+
+ if (rt_policy(policy)) {
+ unsigned long rlim_rtprio = task_rlimit(p, RLIMIT_RTPRIO);
+@@ -478,6 +548,7 @@ static int user_check_sched_setscheduler(struct task_struct *p,
+ goto req_priv;
+ }
+
++#ifndef CONFIG_SCHED_ALT
+ /*
+ * Can't set/change SCHED_DEADLINE policy at all for now
+ * (safest behavior); in the future we would like to allow
+@@ -495,6 +566,7 @@ static int user_check_sched_setscheduler(struct task_struct *p,
+ if (!is_nice_reduction(p, task_nice(p)))
+ goto req_priv;
+ }
++#endif /* !CONFIG_SCHED_ALT */
+
+ /* Can't change other user's priorities: */
+ if (!check_same_owner(p))
+@@ -517,6 +589,158 @@ int __sched_setscheduler(struct task_struct *p,
+ const struct sched_attr *attr,
+ bool user, bool pi)
+ {
++#ifdef CONFIG_SCHED_ALT
++ const struct sched_attr dl_squash_attr = {
++ .size = sizeof(struct sched_attr),
++ .sched_policy = SCHED_FIFO,
++ .sched_nice = 0,
++ .sched_priority = 99,
++ };
++ int oldpolicy = -1, policy = attr->sched_policy;
++ int retval, newprio;
++ struct balance_callback *head;
++ unsigned long flags;
++ struct rq *rq;
++ int reset_on_fork;
++ raw_spinlock_t *lock;
++
++ /* The pi code expects interrupts enabled */
++ BUG_ON(pi && in_interrupt());
++
++ /*
++ * Alt schedule FW supports SCHED_DEADLINE by squashing it as prio 0 SCHED_FIFO
++ */
++ if (unlikely(SCHED_DEADLINE == policy)) {
++ attr = &dl_squash_attr;
++ policy = attr->sched_policy;
++ }
++recheck:
++ /* Double check policy once rq lock held */
++ if (policy < 0) {
++ reset_on_fork = p->sched_reset_on_fork;
++ policy = oldpolicy = p->policy;
++ } else {
++ reset_on_fork = !!(attr->sched_flags & SCHED_RESET_ON_FORK);
++
++ if (policy > SCHED_IDLE)
++ return -EINVAL;
++ }
++
++ if (attr->sched_flags & ~(SCHED_FLAG_ALL))
++ return -EINVAL;
++
++ /*
++ * Valid priorities for SCHED_FIFO and SCHED_RR are
++ * 1..MAX_RT_PRIO-1, valid priority for SCHED_NORMAL and
++ * SCHED_BATCH and SCHED_IDLE is 0.
++ */
++ if (attr->sched_priority < 0 ||
++ (p->mm && attr->sched_priority > MAX_RT_PRIO - 1) ||
++ (!p->mm && attr->sched_priority > MAX_RT_PRIO - 1))
++ return -EINVAL;
++ if ((SCHED_RR == policy || SCHED_FIFO == policy) !=
++ (attr->sched_priority != 0))
++ return -EINVAL;
++
++ if (user) {
++ retval = user_check_sched_setscheduler(p, attr, policy, reset_on_fork);
++ if (retval)
++ return retval;
++
++ retval = security_task_setscheduler(p);
++ if (retval)
++ return retval;
++ }
++
++ /*
++ * Make sure no PI-waiters arrive (or leave) while we are
++ * changing the priority of the task:
++ */
++ raw_spin_lock_irqsave(&p->pi_lock, flags);
++
++ /*
++ * To be able to change p->policy safely, task_access_lock()
++ * must be called.
++ * If task_access_lock() is used here:
++ * for the task p which is not running, reading rq->stop is
++ * racy but acceptable, as ->stop doesn't change much.
++ * An enhancement can be made to read rq->stop safely.
++ */
++ rq = __task_access_lock(p, &lock);
++
++ /*
++ * Changing the policy of the stop threads is a very bad idea
++ */
++ if (p == rq->stop) {
++ retval = -EINVAL;
++ goto unlock;
++ }
++
++ /*
++ * If not changing anything there's no need to proceed further:
++ */
++ if (unlikely(policy == p->policy)) {
++ if (rt_policy(policy) && attr->sched_priority != p->rt_priority)
++ goto change;
++ if (!rt_policy(policy) &&
++ NICE_TO_PRIO(attr->sched_nice) != p->static_prio)
++ goto change;
++
++ p->sched_reset_on_fork = reset_on_fork;
++ retval = 0;
++ goto unlock;
++ }
++change:
++
++ /* Re-check policy now with rq lock held */
++ if (unlikely(oldpolicy != -1 && oldpolicy != p->policy)) {
++ policy = oldpolicy = -1;
++ __task_access_unlock(p, lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++ goto recheck;
++ }
++
++ p->sched_reset_on_fork = reset_on_fork;
++
++ newprio = __normal_prio(policy, attr->sched_priority, NICE_TO_PRIO(attr->sched_nice));
++ if (pi) {
++ /*
++ * Take priority boosted tasks into account. If the new
++ * effective priority is unchanged, we just store the new
++ * normal parameters and do not touch the scheduler class and
++ * the runqueue. This will be done when the task deboost
++ * itself.
++ */
++ newprio = rt_effective_prio(p, newprio);
++ }
++
++ if (!(attr->sched_flags & SCHED_FLAG_KEEP_PARAMS)) {
++ __setscheduler_params(p, attr);
++ __setscheduler_prio(p, newprio);
++ }
++
++ check_task_changed(p, rq);
++
++ /* Avoid rq from going away on us: */
++ preempt_disable();
++ head = splice_balance_callbacks(rq);
++ __task_access_unlock(p, lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++
++ if (pi)
++ rt_mutex_adjust_pi(p);
++
++ /* Run balance callbacks after we've adjusted the PI chain: */
++ balance_callbacks(rq, head);
++ preempt_enable();
++
++ return 0;
++
++unlock:
++ __task_access_unlock(p, lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++ return retval;
++#else /* !CONFIG_SCHED_ALT */
+ int oldpolicy = -1, policy = attr->sched_policy;
+ int retval, oldprio, newprio, queued, running;
+ const struct sched_class *prev_class, *next_class;
+@@ -754,6 +978,7 @@ int __sched_setscheduler(struct task_struct *p,
+ if (cpuset_locked)
+ cpuset_unlock();
+ return retval;
++#endif /* !CONFIG_SCHED_ALT */
+ }
+
+ static int _sched_setscheduler(struct task_struct *p, int policy,
+@@ -765,8 +990,10 @@ static int _sched_setscheduler(struct task_struct *p, int policy,
+ .sched_nice = PRIO_TO_NICE(p->static_prio),
+ };
+
++#ifndef CONFIG_SCHED_ALT
+ if (p->se.custom_slice)
+ attr.sched_runtime = p->se.slice;
++#endif /* !CONFIG_SCHED_ALT */
+
+ /* Fixup the legacy SCHED_RESET_ON_FORK hack. */
+ if ((policy != SETPARAM_POLICY) && (policy & SCHED_RESET_ON_FORK)) {
+@@ -934,13 +1161,18 @@ static int sched_copy_attr(struct sched_attr __user *uattr, struct sched_attr *a
+
+ static void get_params(struct task_struct *p, struct sched_attr *attr)
+ {
+- if (task_has_dl_policy(p)) {
++#ifndef CONFIG_SCHED_ALT
++ if (task_has_dl_policy(p))
+ __getparam_dl(p, attr);
+- } else if (task_has_rt_policy(p)) {
++ else
++#endif
++ if (task_has_rt_policy(p)) {
+ attr->sched_priority = p->rt_priority;
+ } else {
+ attr->sched_nice = task_nice(p);
++#ifndef CONFIG_SCHED_ALT
+ attr->sched_runtime = p->se.slice;
++#endif
+ }
+ }
+
+@@ -1122,6 +1354,7 @@ SYSCALL_DEFINE4(sched_getattr, pid_t, pid, struct sched_attr __user *, uattr,
+ #ifdef CONFIG_SMP
+ int dl_task_check_affinity(struct task_struct *p, const struct cpumask *mask)
+ {
++#ifndef CONFIG_SCHED_ALT
+ /*
+ * If the task isn't a deadline task or admission control is
+ * disabled then we don't care about affinity changes.
+@@ -1145,6 +1378,7 @@ int dl_task_check_affinity(struct task_struct *p, const struct cpumask *mask)
+ guard(rcu)();
+ if (!cpumask_subset(task_rq(p)->rd->span, mask))
+ return -EBUSY;
++#endif
+
+ return 0;
+ }
+@@ -1169,9 +1403,11 @@ int __sched_setaffinity(struct task_struct *p, struct affinity_context *ctx)
+ ctx->new_mask = new_mask;
+ ctx->flags |= SCA_CHECK;
+
++#ifndef CONFIG_SCHED_ALT
+ retval = dl_task_check_affinity(p, new_mask);
+ if (retval)
+ goto out_free_new_mask;
++#endif
+
+ retval = __set_cpus_allowed_ptr(p, ctx);
+ if (retval)
+@@ -1351,13 +1587,34 @@ SYSCALL_DEFINE3(sched_getaffinity, pid_t, pid, unsigned int, len,
+
+ static void do_sched_yield(void)
+ {
+- struct rq_flags rf;
+ struct rq *rq;
++ struct rq_flags rf;
++
++#ifdef CONFIG_SCHED_ALT
++ struct task_struct *p;
++
++ if (!sched_yield_type)
++ return;
+
+ rq = this_rq_lock_irq(&rf);
+
++ schedstat_inc(rq->yld_count);
++
++ p = current;
++ if (rt_task(p)) {
++ if (task_on_rq_queued(p))
++ requeue_task(p, rq);
++ } else if (rq->nr_running > 1) {
++ do_sched_yield_type_1(p, rq);
++ if (task_on_rq_queued(p))
++ requeue_task(p, rq);
++ }
++#else /* !CONFIG_SCHED_ALT */
++ rq = this_rq_lock_irq(&rf);
++
+ schedstat_inc(rq->yld_count);
+ current->sched_class->yield_task(rq);
++#endif /* !CONFIG_SCHED_ALT */
+
+ preempt_disable();
+ rq_unlock_irq(rq, &rf);
+@@ -1426,6 +1683,9 @@ EXPORT_SYMBOL(yield);
+ */
+ int __sched yield_to(struct task_struct *p, bool preempt)
+ {
++#ifdef CONFIG_SCHED_ALT
++ return 0;
++#else /* !CONFIG_SCHED_ALT */
+ struct task_struct *curr = current;
+ struct rq *rq, *p_rq;
+ int yielded = 0;
+@@ -1471,6 +1731,7 @@ int __sched yield_to(struct task_struct *p, bool preempt)
+ schedule();
+
+ return yielded;
++#endif /* !CONFIG_SCHED_ALT */
+ }
+ EXPORT_SYMBOL_GPL(yield_to);
+
+@@ -1491,7 +1752,9 @@ SYSCALL_DEFINE1(sched_get_priority_max, int, policy)
+ case SCHED_RR:
+ ret = MAX_RT_PRIO-1;
+ break;
++#ifndef CONFIG_SCHED_ALT
+ case SCHED_DEADLINE:
++#endif
+ case SCHED_NORMAL:
+ case SCHED_BATCH:
+ case SCHED_IDLE:
+@@ -1519,7 +1782,9 @@ SYSCALL_DEFINE1(sched_get_priority_min, int, policy)
+ case SCHED_RR:
+ ret = 1;
+ break;
++#ifndef CONFIG_SCHED_ALT
+ case SCHED_DEADLINE:
++#endif
+ case SCHED_NORMAL:
+ case SCHED_BATCH:
+ case SCHED_IDLE:
+@@ -1531,7 +1796,9 @@ SYSCALL_DEFINE1(sched_get_priority_min, int, policy)
+
+ static int sched_rr_get_interval(pid_t pid, struct timespec64 *t)
+ {
++#ifndef CONFIG_SCHED_ALT
+ unsigned int time_slice = 0;
++#endif
+ int retval;
+
+ if (pid < 0)
+@@ -1546,6 +1813,7 @@ static int sched_rr_get_interval(pid_t pid, struct timespec64 *t)
+ if (retval)
+ return retval;
+
++#ifndef CONFIG_SCHED_ALT
+ scoped_guard (task_rq_lock, p) {
+ struct rq *rq = scope.rq;
+ if (p->sched_class->get_rr_interval)
+@@ -1554,6 +1822,13 @@ static int sched_rr_get_interval(pid_t pid, struct timespec64 *t)
+ }
+
+ jiffies_to_timespec64(time_slice, t);
++#else
++ }
++
++ alt_sched_debug();
++
++ *t = ns_to_timespec64(sysctl_sched_base_slice);
++#endif /* !CONFIG_SCHED_ALT */
+ return 0;
+ }
+
+diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
+index c49aea8c1025..01ce4f4e1ae7 100644
+--- a/kernel/sched/topology.c
++++ b/kernel/sched/topology.c
+@@ -3,6 +3,7 @@
+ * Scheduler topology setup/handling methods
+ */
+
++#ifndef CONFIG_SCHED_ALT
+ #include <linux/bsearch.h>
+
+ DEFINE_MUTEX(sched_domains_mutex);
+@@ -1459,8 +1460,10 @@ static void asym_cpu_capacity_scan(void)
+ */
+
+ static int default_relax_domain_level = -1;
++#endif /* CONFIG_SCHED_ALT */
+ int sched_domain_level_max;
+
++#ifndef CONFIG_SCHED_ALT
+ static int __init setup_relax_domain_level(char *str)
+ {
+ if (kstrtoint(str, 0, &default_relax_domain_level))
+@@ -1693,6 +1696,7 @@ sd_init(struct sched_domain_topology_level *tl,
+
+ return sd;
+ }
++#endif /* CONFIG_SCHED_ALT */
+
+ /*
+ * Topology list, bottom-up.
+@@ -1729,6 +1733,7 @@ void __init set_sched_topology(struct sched_domain_topology_level *tl)
+ sched_domain_topology_saved = NULL;
+ }
+
++#ifndef CONFIG_SCHED_ALT
+ #ifdef CONFIG_NUMA
+
+ static const struct cpumask *sd_numa_mask(int cpu)
+@@ -2795,3 +2800,28 @@ void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
+ partition_sched_domains_locked(ndoms_new, doms_new, dattr_new);
+ sched_domains_mutex_unlock();
+ }
++#else /* CONFIG_SCHED_ALT */
++DEFINE_STATIC_KEY_FALSE(sched_asym_cpucapacity);
++
++void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
++ struct sched_domain_attr *dattr_new)
++{}
++
++#ifdef CONFIG_NUMA
++int sched_numa_find_closest(const struct cpumask *cpus, int cpu)
++{
++ return best_mask_cpu(cpu, cpus);
++}
++
++int sched_numa_find_nth_cpu(const struct cpumask *cpus, int cpu, int node)
++{
++ return cpumask_nth(cpu, cpus);
++}
++
++const struct cpumask *sched_numa_hop_mask(unsigned int node, unsigned int hops)
++{
++ return ERR_PTR(-EOPNOTSUPP);
++}
++EXPORT_SYMBOL_GPL(sched_numa_hop_mask);
++#endif /* CONFIG_NUMA */
++#endif
+diff --git a/kernel/sysctl.c b/kernel/sysctl.c
+index cb57da499ebb..f623df5f772a 100644
+--- a/kernel/sysctl.c
++++ b/kernel/sysctl.c
+@@ -92,6 +92,10 @@ EXPORT_SYMBOL_GPL(sysctl_long_vals);
+
+ /* Constants used for minimum and maximum */
+
++#ifdef CONFIG_SCHED_ALT
++extern int sched_yield_type;
++#endif
++
+ #ifdef CONFIG_PERF_EVENTS
+ static const int six_hundred_forty_kb = 640 * 1024;
+ #endif
+@@ -1897,6 +1901,17 @@ static const struct ctl_table kern_table[] = {
+ .proc_handler = proc_dointvec,
+ },
+ #endif
++#ifdef CONFIG_SCHED_ALT
++ {
++ .procname = "yield_type",
++ .data = &sched_yield_type,
++ .maxlen = sizeof (int),
++ .mode = 0644,
++ .proc_handler = &proc_dointvec_minmax,
++ .extra1 = SYSCTL_ZERO,
++ .extra2 = SYSCTL_TWO,
++ },
++#endif
+ #if defined(CONFIG_S390) && defined(CONFIG_SMP)
+ {
+ .procname = "spin_retry",
+diff --git a/kernel/time/posix-cpu-timers.c b/kernel/time/posix-cpu-timers.c
+index 50e8d04ab661..0a761f9cd5e4 100644
+--- a/kernel/time/posix-cpu-timers.c
++++ b/kernel/time/posix-cpu-timers.c
+@@ -223,7 +223,7 @@ static void task_sample_cputime(struct task_struct *p, u64 *samples)
+ u64 stime, utime;
+
+ task_cputime(p, &utime, &stime);
+- store_samples(samples, stime, utime, p->se.sum_exec_runtime);
++ store_samples(samples, stime, utime, tsk_seruntime(p));
+ }
+
+ static void proc_sample_cputime_atomic(struct task_cputime_atomic *at,
+@@ -835,6 +835,7 @@ static void collect_posix_cputimers(struct posix_cputimers *pct, u64 *samples,
+ }
+ }
+
++#ifndef CONFIG_SCHED_ALT
+ static inline void check_dl_overrun(struct task_struct *tsk)
+ {
+ if (tsk->dl.dl_overrun) {
+@@ -842,6 +843,7 @@ static inline void check_dl_overrun(struct task_struct *tsk)
+ send_signal_locked(SIGXCPU, SEND_SIG_PRIV, tsk, PIDTYPE_TGID);
+ }
+ }
++#endif
+
+ static bool check_rlimit(u64 time, u64 limit, int signo, bool rt, bool hard)
+ {
+@@ -869,8 +871,10 @@ static void check_thread_timers(struct task_struct *tsk,
+ u64 samples[CPUCLOCK_MAX];
+ unsigned long soft;
+
++#ifndef CONFIG_SCHED_ALT
+ if (dl_task(tsk))
+ check_dl_overrun(tsk);
++#endif
+
+ if (expiry_cache_is_inactive(pct))
+ return;
+@@ -884,7 +888,7 @@ static void check_thread_timers(struct task_struct *tsk,
+ soft = task_rlimit(tsk, RLIMIT_RTTIME);
+ if (soft != RLIM_INFINITY) {
+ /* Task RT timeout is accounted in jiffies. RTTIME is usec */
+- unsigned long rttime = tsk->rt.timeout * (USEC_PER_SEC / HZ);
++ unsigned long rttime = tsk_rttimeout(tsk) * (USEC_PER_SEC / HZ);
+ unsigned long hard = task_rlimit_max(tsk, RLIMIT_RTTIME);
+
+ /* At the hard limit, send SIGKILL. No further action. */
+@@ -1120,8 +1124,10 @@ static inline bool fastpath_timer_check(struct task_struct *tsk)
+ return true;
+ }
+
++#ifndef CONFIG_SCHED_ALT
+ if (dl_task(tsk) && tsk->dl.dl_overrun)
+ return true;
++#endif
+
+ return false;
+ }
+diff --git a/kernel/time/timer.c b/kernel/time/timer.c
+index c8f776dc6ee0..76532cec84de 100644
+--- a/kernel/time/timer.c
++++ b/kernel/time/timer.c
+@@ -2495,7 +2495,11 @@ static void run_local_timers(void)
+ */
+ if (time_after_eq(jiffies, READ_ONCE(base->next_expiry)) ||
+ (i == BASE_DEF && tmigr_requires_handle_remote())) {
++#ifdef CONFIG_SCHED_BMQ
++ __raise_softirq_irqoff(TIMER_SOFTIRQ);
++#else
+ raise_timer_softirq(TIMER_SOFTIRQ);
++#endif
+ return;
+ }
+ }
+diff --git a/kernel/trace/trace_osnoise.c b/kernel/trace/trace_osnoise.c
+index f3a2722ee4c0..d6991e349c81 100644
+--- a/kernel/trace/trace_osnoise.c
++++ b/kernel/trace/trace_osnoise.c
+@@ -1670,6 +1670,9 @@ static void osnoise_sleep(bool skip_period)
+ */
+ static inline int osnoise_migration_pending(void)
+ {
++#ifdef CONFIG_SCHED_ALT
++ return 0;
++#else
+ if (!current->migration_pending)
+ return 0;
+
+@@ -1691,6 +1694,7 @@ static inline int osnoise_migration_pending(void)
+ mutex_unlock(&interface_lock);
+
+ return 1;
++#endif
+ }
+
+ /*
+diff --git a/kernel/trace/trace_selftest.c b/kernel/trace/trace_selftest.c
+index d88c44f1dfa5..4af3cbbdcccb 100644
+--- a/kernel/trace/trace_selftest.c
++++ b/kernel/trace/trace_selftest.c
+@@ -1423,10 +1423,15 @@ static int trace_wakeup_test_thread(void *data)
+ {
+ /* Make this a -deadline thread */
+ static const struct sched_attr attr = {
++#ifdef CONFIG_SCHED_ALT
++ /* No deadline on BMQ/PDS, use RR */
++ .sched_policy = SCHED_RR,
++#else
+ .sched_policy = SCHED_DEADLINE,
+ .sched_runtime = 100000ULL,
+ .sched_deadline = 10000000ULL,
+ .sched_period = 10000000ULL
++#endif
+ };
+ struct wakeup_test_data *x = data;
+
+diff --git a/kernel/workqueue.c b/kernel/workqueue.c
+index bfe030b443e2..31337359e58f 100644
+--- a/kernel/workqueue.c
++++ b/kernel/workqueue.c
+@@ -1247,6 +1247,7 @@ static bool kick_pool(struct worker_pool *pool)
+
+ p = worker->task;
+
++#ifndef CONFIG_SCHED_ALT
+ #ifdef CONFIG_SMP
+ /*
+ * Idle @worker is about to execute @work and waking up provides an
+@@ -1276,6 +1277,8 @@ static bool kick_pool(struct worker_pool *pool)
+ }
+ }
+ #endif
++#endif /* !CONFIG_SCHED_ALT */
++
+ wake_up_process(p);
+ return true;
+ }
+@@ -1404,7 +1407,11 @@ void wq_worker_running(struct task_struct *task)
+ * CPU intensive auto-detection cares about how long a work item hogged
+ * CPU without sleeping. Reset the starting timestamp on wakeup.
+ */
++#ifdef CONFIG_SCHED_ALT
++ worker->current_at = worker->task->sched_time;
++#else
+ worker->current_at = worker->task->se.sum_exec_runtime;
++#endif
+
+ WRITE_ONCE(worker->sleeping, 0);
+ }
+@@ -1489,7 +1496,11 @@ void wq_worker_tick(struct task_struct *task)
+ * We probably want to make this prettier in the future.
+ */
+ if ((worker->flags & WORKER_NOT_RUNNING) || READ_ONCE(worker->sleeping) ||
++#ifdef CONFIG_SCHED_ALT
++ worker->task->sched_time - worker->current_at <
++#else
+ worker->task->se.sum_exec_runtime - worker->current_at <
++#endif
+ wq_cpu_intensive_thresh_us * NSEC_PER_USEC)
+ return;
+
+@@ -3166,7 +3177,11 @@ __acquires(&pool->lock)
+ worker->current_func = work->func;
+ worker->current_pwq = pwq;
+ if (worker->task)
++#ifdef CONFIG_SCHED_ALT
++ worker->current_at = worker->task->sched_time;
++#else
+ worker->current_at = worker->task->se.sum_exec_runtime;
++#endif
+ work_data = *work_data_bits(work);
+ worker->current_color = get_work_color(work_data);
+
diff --git a/5021_BMQ-and-PDS-gentoo-defaults.patch b/5021_BMQ-and-PDS-gentoo-defaults.patch
new file mode 100644
index 00000000..6dc48eec
--- /dev/null
+++ b/5021_BMQ-and-PDS-gentoo-defaults.patch
@@ -0,0 +1,13 @@
+--- a/init/Kconfig 2023-02-13 08:16:09.534315265 -0500
++++ b/init/Kconfig 2023-02-13 08:17:24.130237204 -0500
+@@ -867,8 +867,9 @@ config UCLAMP_BUCKETS_COUNT
+ If in doubt, use the default value.
+
+ menuconfig SCHED_ALT
++ depends on X86_64
+ bool "Alternative CPU Schedulers"
+- default y
++ default n
+ help
+ This feature enable alternative CPU scheduler"
+
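
As a quick usage sketch for the tunables introduced by the scheduler patch above
(assumptions: a kernel built with the SCHED_ALT/BMQ/PDS patches from this commit;
the CPU list 0-7 is only an example and must match the real topology; the meaning
of yield_type values 1 and 2 is defined elsewhere in the patch):

    # boot parameter: declare which CPUs are performance cores
    # (parsed with cpulist_parse() by sched_pcore_mask_setup() above)
    pcore_cpus=0-7

    # runtime: select the sched_yield() behaviour; 0 makes yield a no-op
    # in do_sched_yield(), and the sysctl table bounds the value to 0-2
    sysctl -w kernel.yield_type=2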
^ permalink raw reply related [flat|nested] 21+ messages in thread
* [gentoo-commits] proj/linux-patches:6.14 commit in: /
@ 2025-04-29 17:26 Mike Pagano
0 siblings, 0 replies; 21+ messages in thread
From: Mike Pagano @ 2025-04-29 17:26 UTC (permalink / raw
To: gentoo-commits
commit: b9f90f2007fe034ccf58b3e0f0319d2f6bb8137e
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Apr 29 17:26:05 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Apr 29 17:26:05 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=b9f90f20
eventpoll: Prevent hang in epoll_wait
Bug: https://bugs.gentoo.org/954806
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 ++
1900_eventpoll-Prevent-hang-in-epoll-wait.patch | 51 +++++++++++++++++++++++++
2 files changed, 55 insertions(+)
diff --git a/0000_README b/0000_README
index ec88ba02..623e5a70 100644
--- a/0000_README
+++ b/0000_README
@@ -74,6 +74,10 @@ Patch: 1740_x86-insn-decoder-test-allow-longer-symbol-names.patch
From: https://gitlab.com/cki-project/kernel-ark/-/commit/8d4a52c3921d278f27241fc0c6949d8fdc13a7f5
Desc: x86/insn_decoder_test: allow longer symbol-names
+Patch: 1900_eventpoll-Prevent-hang-in-epoll-wait.patch
+From: https://lore.kernel.org/linux-fsdevel/20250429153419.94723-1-jdamato@fastly.com/T/#u
+Desc: eventpoll: Prevent hang in epoll_wait
+
Patch: 2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
From: https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
Desc: Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
diff --git a/1900_eventpoll-Prevent-hang-in-epoll-wait.patch b/1900_eventpoll-Prevent-hang-in-epoll-wait.patch
new file mode 100644
index 00000000..7f1e543a
--- /dev/null
+++ b/1900_eventpoll-Prevent-hang-in-epoll-wait.patch
@@ -0,0 +1,51 @@
+From git@z Thu Jan 1 00:00:00 1970
+Subject: [PATCH] eventpoll: Prevent hang in epoll_wait
+From: Joe Damato <jdamato@fastly.com>
+Date: Tue, 29 Apr 2025 15:34:19 +0000
+Message-Id: <20250429153419.94723-1-jdamato@fastly.com>
+MIME-Version: 1.0
+Content-Type: text/plain; charset="utf-8"
+Content-Transfer-Encoding: 7bit
+
+In commit 0a65bc27bd64 ("eventpoll: Set epoll timeout if it's in the
+future"), a bug was introduced causing the loop in ep_poll to hang under
+certain circumstances.
+
+When the timeout is non-NULL and ep_schedule_timeout returns false, the
+flag timed_out was not set to true. This causes a hang.
+
+Adjust the logic and set timed_out, if needed, fixing the original code.
+
+Reported-by: Christian Brauner <brauner@kernel.org>
+Closes: https://lore.kernel.org/linux-fsdevel/20250426-haben-redeverbot-0b58878ac722@brauner/
+Reported-by: Mike Pagano <mpagano@gentoo.org>
+Closes: https://bugs.gentoo.org/954806
+Reported-by: Carlos Llamas <cmllamas@google.com>
+Closes: https://lore.kernel.org/linux-fsdevel/aBAB_4gQ6O_haAjp@google.com/
+Fixes: 0a65bc27bd64 ("eventpoll: Set epoll timeout if it's in the future")
+Tested-by: Carlos Llamas <cmllamas@google.com>
+Signed-off-by: Joe Damato <jdamato@fastly.com>
+---
+ fs/eventpoll.c | 4 +++-
+ 1 file changed, 3 insertions(+), 1 deletion(-)
+
+diff --git a/fs/eventpoll.c b/fs/eventpoll.c
+index 4bc264b854c4..1a5d1147f082 100644
+--- a/fs/eventpoll.c
++++ b/fs/eventpoll.c
+@@ -2111,7 +2111,9 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
+
+ write_unlock_irq(&ep->lock);
+
+- if (!eavail && ep_schedule_timeout(to))
++ if (!ep_schedule_timeout(to))
++ timed_out = 1;
++ else if (!eavail)
+ timed_out = !schedule_hrtimeout_range(to, slack,
+ HRTIMER_MODE_ABS);
+ __set_current_state(TASK_RUNNING);
+
+base-commit: f520bed25d17bb31c2d2d72b0a785b593a4e3179
+--
+2.43.0
+
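For context on the hang: before this fix, when the timeout was non-NULL and ep_schedule_timeout(to) returned false (no timer was armed because the expiry had already passed), neither branch set timed_out, so ep_poll kept looping with no events and no recorded timeout. A simplified sketch of the corrected step, taken from the hunk above (not the full ep_poll loop):

  /* Simplified from the fix above: an unarmed timer now counts as a timeout */
  if (!ep_schedule_timeout(to))
  	timed_out = 1;
  else if (!eavail)
  	timed_out = !schedule_hrtimeout_range(to, slack, HRTIMER_MODE_ABS);
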
^ permalink raw reply related [flat|nested] 21+ messages in thread
* [gentoo-commits] proj/linux-patches:6.14 commit in: /
@ 2025-05-02 10:54 Mike Pagano
0 siblings, 0 replies; 21+ messages in thread
From: Mike Pagano @ 2025-05-02 10:54 UTC (permalink / raw
To: gentoo-commits
commit: 09d231a9fd30c5654060da6386985030ef5f2c2b
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri May 2 10:53:48 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri May 2 10:53:48 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=09d231a9
Linux patch 6.14.5
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1004_linux-6.14.5.patch | 13677 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 13681 insertions(+)
diff --git a/0000_README b/0000_README
index 623e5a70..5de58938 100644
--- a/0000_README
+++ b/0000_README
@@ -58,6 +58,10 @@ Patch: 1003_linux-6.14.4.patch
From: https://www.kernel.org
Desc: Linux 6.14.4
+Patch: 1004_linux-6.14.5.patch
+From: https://www.kernel.org
+Desc: Linux 6.14.5
+
Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
Desc: Enable link security restrictions by default.
diff --git a/1004_linux-6.14.5.patch b/1004_linux-6.14.5.patch
new file mode 100644
index 00000000..829d3e22
--- /dev/null
+++ b/1004_linux-6.14.5.patch
@@ -0,0 +1,13677 @@
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index aa7447f8837cb7..56be1fc99bdd44 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -7243,6 +7243,8 @@
+ This is just one of many ways that can clear memory. Make sure your system
+ keeps the content of memory across reboots before relying on this option.
+
++ NB: Both the mapped address and size must be page aligned for the architecture.
++
+ See also Documentation/trace/debugging.rst
+
+
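+The alignment note added above applies to the boot-time mapping of a persistent trace buffer. Purely as a hedged illustration (the label and instance names below are made up; the exact syntax is described in the two documents this patch touches), a command line that reserves page-aligned memory and maps a trace instance to it looks roughly like:
+
+	# illustrative values: 12M region, 4096-byte (page) alignment, label "trace"
+	reserve_mem=12M:4096:trace trace_instance=boot_map@trace
+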
+diff --git a/Documentation/bpf/bpf_devel_QA.rst b/Documentation/bpf/bpf_devel_QA.rst
+index de27e1620821c4..0acb4c9b8d90f3 100644
+--- a/Documentation/bpf/bpf_devel_QA.rst
++++ b/Documentation/bpf/bpf_devel_QA.rst
+@@ -382,6 +382,14 @@ In case of new BPF instructions, once the changes have been accepted
+ into the Linux kernel, please implement support into LLVM's BPF back
+ end. See LLVM_ section below for further information.
+
++Q: What "BPF_INTERNAL" symbol namespace is for?
++-----------------------------------------------
++A: Symbols exported as BPF_INTERNAL can only be used by BPF infrastructure
++like preload kernel modules with light skeleton. Most symbols outside
++of BPF_INTERNAL are not expected to be used by code outside of BPF either.
++Symbols may lack the designation because they predate the namespaces,
++or due to an oversight.
++
+ Stable submission
+ =================
+
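+The Q&A entry added above refers to the kernel's symbol-namespace mechanism. As a rough, hedged illustration of how a restricted namespace is used in general (the helper name below is hypothetical, and whether the namespace argument is an identifier or a string literal has varied across kernel releases):
+
+	/* Export a symbol into the restricted namespace */
+	EXPORT_SYMBOL_NS_GPL(bpf_some_internal_helper, "BPF_INTERNAL");
+
+	/* A module entitled to use it must declare the dependency */
+	MODULE_IMPORT_NS("BPF_INTERNAL");
+
+Modules that reference such a symbol without declaring the import are flagged by modpost, which is the enforcement the answer above relies on.
+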
+diff --git a/Documentation/trace/debugging.rst b/Documentation/trace/debugging.rst
+index 54fb16239d7033..d54bc500af80ba 100644
+--- a/Documentation/trace/debugging.rst
++++ b/Documentation/trace/debugging.rst
+@@ -136,6 +136,8 @@ kernel, so only the same kernel is guaranteed to work if the mapping is
+ preserved. Switching to a different kernel version may find a different
+ layout and mark the buffer as invalid.
+
++NB: Both the mapped address and size must be page aligned for the architecture.
++
+ Using trace_printk() in the boot instance
+ -----------------------------------------
+ By default, the content of trace_printk() goes into the top level tracing
+diff --git a/Makefile b/Makefile
+index 0c1b99da2c1f2a..87835d7abbceb2 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 14
+-SUBLEVEL = 4
++SUBLEVEL = 5
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+@@ -1070,6 +1070,7 @@ KBUILD_CFLAGS += -fno-builtin-wcslen
+ # change __FILE__ to the relative path to the source directory
+ ifdef building_out_of_srctree
+ KBUILD_CPPFLAGS += $(call cc-option,-fmacro-prefix-map=$(srcroot)/=)
++KBUILD_RUSTFLAGS += --remap-path-prefix=$(srcroot)/=
+ endif
+
+ # include additional Makefiles when needed
+diff --git a/arch/arm/crypto/Kconfig b/arch/arm/crypto/Kconfig
+index 32650c8431d988..23e4ea067ddbb0 100644
+--- a/arch/arm/crypto/Kconfig
++++ b/arch/arm/crypto/Kconfig
+@@ -3,10 +3,12 @@
+ menu "Accelerated Cryptographic Algorithms for CPU (arm)"
+
+ config CRYPTO_CURVE25519_NEON
+- tristate "Public key crypto: Curve25519 (NEON)"
++ tristate
+ depends on KERNEL_MODE_NEON
++ select CRYPTO_KPP
+ select CRYPTO_LIB_CURVE25519_GENERIC
+ select CRYPTO_ARCH_HAVE_LIB_CURVE25519
++ default CRYPTO_LIB_CURVE25519_INTERNAL
+ help
+ Curve25519 algorithm
+
+@@ -45,9 +47,10 @@ config CRYPTO_NHPOLY1305_NEON
+ - NEON (Advanced SIMD) extensions
+
+ config CRYPTO_POLY1305_ARM
+- tristate "Hash functions: Poly1305 (NEON)"
++ tristate
+ select CRYPTO_HASH
+ select CRYPTO_ARCH_HAVE_LIB_POLY1305
++ default CRYPTO_LIB_POLY1305_INTERNAL
+ help
+ Poly1305 authenticator algorithm (RFC7539)
+
+@@ -212,9 +215,10 @@ config CRYPTO_AES_ARM_CE
+ - ARMv8 Crypto Extensions
+
+ config CRYPTO_CHACHA20_NEON
+- tristate "Ciphers: ChaCha20, XChaCha20, XChaCha12 (NEON)"
++ tristate
+ select CRYPTO_SKCIPHER
+ select CRYPTO_ARCH_HAVE_LIB_CHACHA
++ default CRYPTO_LIB_CHACHA_INTERNAL
+ help
+ Length-preserving ciphers: ChaCha20, XChaCha20, and XChaCha12
+ stream cipher algorithms
+diff --git a/arch/arm64/crypto/Kconfig b/arch/arm64/crypto/Kconfig
+index 5636ab83f22aee..3418c8d3c78d41 100644
+--- a/arch/arm64/crypto/Kconfig
++++ b/arch/arm64/crypto/Kconfig
+@@ -26,10 +26,11 @@ config CRYPTO_NHPOLY1305_NEON
+ - NEON (Advanced SIMD) extensions
+
+ config CRYPTO_POLY1305_NEON
+- tristate "Hash functions: Poly1305 (NEON)"
++ tristate
+ depends on KERNEL_MODE_NEON
+ select CRYPTO_HASH
+ select CRYPTO_ARCH_HAVE_LIB_POLY1305
++ default CRYPTO_LIB_POLY1305_INTERNAL
+ help
+ Poly1305 authenticator algorithm (RFC7539)
+
+@@ -186,11 +187,12 @@ config CRYPTO_AES_ARM64_NEON_BLK
+ - NEON (Advanced SIMD) extensions
+
+ config CRYPTO_CHACHA20_NEON
+- tristate "Ciphers: ChaCha (NEON)"
++ tristate
+ depends on KERNEL_MODE_NEON
+ select CRYPTO_SKCIPHER
+ select CRYPTO_LIB_CHACHA_GENERIC
+ select CRYPTO_ARCH_HAVE_LIB_CHACHA
++ default CRYPTO_LIB_CHACHA_INTERNAL
+ help
+ Length-preserving ciphers: ChaCha20, XChaCha20, and XChaCha12
+ stream cipher algorithms
+diff --git a/arch/loongarch/Kconfig b/arch/loongarch/Kconfig
+index bdb989c49c094c..b744bd73f08ee0 100644
+--- a/arch/loongarch/Kconfig
++++ b/arch/loongarch/Kconfig
+@@ -71,6 +71,7 @@ config LOONGARCH
+ select ARCH_SUPPORTS_RT
+ select ARCH_USE_BUILTIN_BSWAP
+ select ARCH_USE_CMPXCHG_LOCKREF
++ select ARCH_USE_MEMTEST
+ select ARCH_USE_QUEUED_RWLOCKS
+ select ARCH_USE_QUEUED_SPINLOCKS
+ select ARCH_WANT_DEFAULT_BPF_JIT
+diff --git a/arch/loongarch/include/asm/fpu.h b/arch/loongarch/include/asm/fpu.h
+index 3177674228f896..45514f314664d8 100644
+--- a/arch/loongarch/include/asm/fpu.h
++++ b/arch/loongarch/include/asm/fpu.h
+@@ -22,22 +22,29 @@
+ struct sigcontext;
+
+ #define kernel_fpu_available() cpu_has_fpu
+-extern void kernel_fpu_begin(void);
+-extern void kernel_fpu_end(void);
+-
+-extern void _init_fpu(unsigned int);
+-extern void _save_fp(struct loongarch_fpu *);
+-extern void _restore_fp(struct loongarch_fpu *);
+-
+-extern void _save_lsx(struct loongarch_fpu *fpu);
+-extern void _restore_lsx(struct loongarch_fpu *fpu);
+-extern void _init_lsx_upper(void);
+-extern void _restore_lsx_upper(struct loongarch_fpu *fpu);
+-
+-extern void _save_lasx(struct loongarch_fpu *fpu);
+-extern void _restore_lasx(struct loongarch_fpu *fpu);
+-extern void _init_lasx_upper(void);
+-extern void _restore_lasx_upper(struct loongarch_fpu *fpu);
++
++void kernel_fpu_begin(void);
++void kernel_fpu_end(void);
++
++asmlinkage void _init_fpu(unsigned int);
++asmlinkage void _save_fp(struct loongarch_fpu *);
++asmlinkage void _restore_fp(struct loongarch_fpu *);
++asmlinkage int _save_fp_context(void __user *fpregs, void __user *fcc, void __user *csr);
++asmlinkage int _restore_fp_context(void __user *fpregs, void __user *fcc, void __user *csr);
++
++asmlinkage void _save_lsx(struct loongarch_fpu *fpu);
++asmlinkage void _restore_lsx(struct loongarch_fpu *fpu);
++asmlinkage void _init_lsx_upper(void);
++asmlinkage void _restore_lsx_upper(struct loongarch_fpu *fpu);
++asmlinkage int _save_lsx_context(void __user *fpregs, void __user *fcc, void __user *fcsr);
++asmlinkage int _restore_lsx_context(void __user *fpregs, void __user *fcc, void __user *fcsr);
++
++asmlinkage void _save_lasx(struct loongarch_fpu *fpu);
++asmlinkage void _restore_lasx(struct loongarch_fpu *fpu);
++asmlinkage void _init_lasx_upper(void);
++asmlinkage void _restore_lasx_upper(struct loongarch_fpu *fpu);
++asmlinkage int _save_lasx_context(void __user *fpregs, void __user *fcc, void __user *fcsr);
++asmlinkage int _restore_lasx_context(void __user *fpregs, void __user *fcc, void __user *fcsr);
+
+ static inline void enable_lsx(void);
+ static inline void disable_lsx(void);
+diff --git a/arch/loongarch/include/asm/lbt.h b/arch/loongarch/include/asm/lbt.h
+index e671978bf5523f..38566574e56214 100644
+--- a/arch/loongarch/include/asm/lbt.h
++++ b/arch/loongarch/include/asm/lbt.h
+@@ -12,9 +12,13 @@
+ #include <asm/loongarch.h>
+ #include <asm/processor.h>
+
+-extern void _init_lbt(void);
+-extern void _save_lbt(struct loongarch_lbt *);
+-extern void _restore_lbt(struct loongarch_lbt *);
++asmlinkage void _init_lbt(void);
++asmlinkage void _save_lbt(struct loongarch_lbt *);
++asmlinkage void _restore_lbt(struct loongarch_lbt *);
++asmlinkage int _save_lbt_context(void __user *regs, void __user *eflags);
++asmlinkage int _restore_lbt_context(void __user *regs, void __user *eflags);
++asmlinkage int _save_ftop_context(void __user *ftop);
++asmlinkage int _restore_ftop_context(void __user *ftop);
+
+ static inline int is_lbt_enabled(void)
+ {
+diff --git a/arch/loongarch/include/asm/ptrace.h b/arch/loongarch/include/asm/ptrace.h
+index f3ddaed9ef7f08..a5b63c84f8541a 100644
+--- a/arch/loongarch/include/asm/ptrace.h
++++ b/arch/loongarch/include/asm/ptrace.h
+@@ -33,9 +33,9 @@ struct pt_regs {
+ unsigned long __last[];
+ } __aligned(8);
+
+-static inline int regs_irqs_disabled(struct pt_regs *regs)
++static __always_inline bool regs_irqs_disabled(struct pt_regs *regs)
+ {
+- return arch_irqs_disabled_flags(regs->csr_prmd);
++ return !(regs->csr_prmd & CSR_PRMD_PIE);
+ }
+
+ static inline unsigned long kernel_stack_pointer(struct pt_regs *regs)
+diff --git a/arch/loongarch/kernel/fpu.S b/arch/loongarch/kernel/fpu.S
+index 6ab640101457cc..28caf416ae36e6 100644
+--- a/arch/loongarch/kernel/fpu.S
++++ b/arch/loongarch/kernel/fpu.S
+@@ -458,6 +458,7 @@ SYM_FUNC_START(_save_fp_context)
+ li.w a0, 0 # success
+ jr ra
+ SYM_FUNC_END(_save_fp_context)
++EXPORT_SYMBOL_GPL(_save_fp_context)
+
+ /*
+ * a0: fpregs
+@@ -471,6 +472,7 @@ SYM_FUNC_START(_restore_fp_context)
+ li.w a0, 0 # success
+ jr ra
+ SYM_FUNC_END(_restore_fp_context)
++EXPORT_SYMBOL_GPL(_restore_fp_context)
+
+ /*
+ * a0: fpregs
+@@ -484,6 +486,7 @@ SYM_FUNC_START(_save_lsx_context)
+ li.w a0, 0 # success
+ jr ra
+ SYM_FUNC_END(_save_lsx_context)
++EXPORT_SYMBOL_GPL(_save_lsx_context)
+
+ /*
+ * a0: fpregs
+@@ -497,6 +500,7 @@ SYM_FUNC_START(_restore_lsx_context)
+ li.w a0, 0 # success
+ jr ra
+ SYM_FUNC_END(_restore_lsx_context)
++EXPORT_SYMBOL_GPL(_restore_lsx_context)
+
+ /*
+ * a0: fpregs
+@@ -510,6 +514,7 @@ SYM_FUNC_START(_save_lasx_context)
+ li.w a0, 0 # success
+ jr ra
+ SYM_FUNC_END(_save_lasx_context)
++EXPORT_SYMBOL_GPL(_save_lasx_context)
+
+ /*
+ * a0: fpregs
+@@ -523,6 +528,7 @@ SYM_FUNC_START(_restore_lasx_context)
+ li.w a0, 0 # success
+ jr ra
+ SYM_FUNC_END(_restore_lasx_context)
++EXPORT_SYMBOL_GPL(_restore_lasx_context)
+
+ .L_fpu_fault:
+ li.w a0, -EFAULT # failure
+diff --git a/arch/loongarch/kernel/lbt.S b/arch/loongarch/kernel/lbt.S
+index 001f061d226ab5..71678912d24ce2 100644
+--- a/arch/loongarch/kernel/lbt.S
++++ b/arch/loongarch/kernel/lbt.S
+@@ -90,6 +90,7 @@ SYM_FUNC_START(_save_lbt_context)
+ li.w a0, 0 # success
+ jr ra
+ SYM_FUNC_END(_save_lbt_context)
++EXPORT_SYMBOL_GPL(_save_lbt_context)
+
+ /*
+ * a0: scr
+@@ -110,6 +111,7 @@ SYM_FUNC_START(_restore_lbt_context)
+ li.w a0, 0 # success
+ jr ra
+ SYM_FUNC_END(_restore_lbt_context)
++EXPORT_SYMBOL_GPL(_restore_lbt_context)
+
+ /*
+ * a0: ftop
+@@ -120,6 +122,7 @@ SYM_FUNC_START(_save_ftop_context)
+ li.w a0, 0 # success
+ jr ra
+ SYM_FUNC_END(_save_ftop_context)
++EXPORT_SYMBOL_GPL(_save_ftop_context)
+
+ /*
+ * a0: ftop
+@@ -150,6 +153,7 @@ SYM_FUNC_START(_restore_ftop_context)
+ li.w a0, 0 # success
+ jr ra
+ SYM_FUNC_END(_restore_ftop_context)
++EXPORT_SYMBOL_GPL(_restore_ftop_context)
+
+ .L_lbt_fault:
+ li.w a0, -EFAULT # failure
+diff --git a/arch/loongarch/kernel/signal.c b/arch/loongarch/kernel/signal.c
+index 7a555b60017193..4740cb5b238898 100644
+--- a/arch/loongarch/kernel/signal.c
++++ b/arch/loongarch/kernel/signal.c
+@@ -51,27 +51,6 @@
+ #define lock_lbt_owner() ({ preempt_disable(); pagefault_disable(); })
+ #define unlock_lbt_owner() ({ pagefault_enable(); preempt_enable(); })
+
+-/* Assembly functions to move context to/from the FPU */
+-extern asmlinkage int
+-_save_fp_context(void __user *fpregs, void __user *fcc, void __user *csr);
+-extern asmlinkage int
+-_restore_fp_context(void __user *fpregs, void __user *fcc, void __user *csr);
+-extern asmlinkage int
+-_save_lsx_context(void __user *fpregs, void __user *fcc, void __user *fcsr);
+-extern asmlinkage int
+-_restore_lsx_context(void __user *fpregs, void __user *fcc, void __user *fcsr);
+-extern asmlinkage int
+-_save_lasx_context(void __user *fpregs, void __user *fcc, void __user *fcsr);
+-extern asmlinkage int
+-_restore_lasx_context(void __user *fpregs, void __user *fcc, void __user *fcsr);
+-
+-#ifdef CONFIG_CPU_HAS_LBT
+-extern asmlinkage int _save_lbt_context(void __user *regs, void __user *eflags);
+-extern asmlinkage int _restore_lbt_context(void __user *regs, void __user *eflags);
+-extern asmlinkage int _save_ftop_context(void __user *ftop);
+-extern asmlinkage int _restore_ftop_context(void __user *ftop);
+-#endif
+-
+ struct rt_sigframe {
+ struct siginfo rs_info;
+ struct ucontext rs_uctx;
+diff --git a/arch/loongarch/kernel/traps.c b/arch/loongarch/kernel/traps.c
+index 2ec3106c0da3d1..47fc2de6d15018 100644
+--- a/arch/loongarch/kernel/traps.c
++++ b/arch/loongarch/kernel/traps.c
+@@ -553,9 +553,10 @@ asmlinkage void noinstr do_ale(struct pt_regs *regs)
+ die_if_kernel("Kernel ale access", regs);
+ force_sig_fault(SIGBUS, BUS_ADRALN, (void __user *)regs->csr_badvaddr);
+ #else
++ bool pie = regs_irqs_disabled(regs);
+ unsigned int *pc;
+
+- if (regs->csr_prmd & CSR_PRMD_PIE)
++ if (!pie)
+ local_irq_enable();
+
+ perf_sw_event(PERF_COUNT_SW_ALIGNMENT_FAULTS, 1, regs, regs->csr_badvaddr);
+@@ -582,7 +583,7 @@ asmlinkage void noinstr do_ale(struct pt_regs *regs)
+ die_if_kernel("Kernel ale access", regs);
+ force_sig_fault(SIGBUS, BUS_ADRALN, (void __user *)regs->csr_badvaddr);
+ out:
+- if (regs->csr_prmd & CSR_PRMD_PIE)
++ if (!pie)
+ local_irq_disable();
+ #endif
+ irqentry_exit(regs, state);
+@@ -621,12 +622,13 @@ static void bug_handler(struct pt_regs *regs)
+ asmlinkage void noinstr do_bce(struct pt_regs *regs)
+ {
+ bool user = user_mode(regs);
++ bool pie = regs_irqs_disabled(regs);
+ unsigned long era = exception_era(regs);
+ u64 badv = 0, lower = 0, upper = ULONG_MAX;
+ union loongarch_instruction insn;
+ irqentry_state_t state = irqentry_enter(regs);
+
+- if (regs->csr_prmd & CSR_PRMD_PIE)
++ if (!pie)
+ local_irq_enable();
+
+ current->thread.trap_nr = read_csr_excode();
+@@ -692,7 +694,7 @@ asmlinkage void noinstr do_bce(struct pt_regs *regs)
+ force_sig_bnderr((void __user *)badv, (void __user *)lower, (void __user *)upper);
+
+ out:
+- if (regs->csr_prmd & CSR_PRMD_PIE)
++ if (!pie)
+ local_irq_disable();
+
+ irqentry_exit(regs, state);
+@@ -710,11 +712,12 @@ asmlinkage void noinstr do_bce(struct pt_regs *regs)
+ asmlinkage void noinstr do_bp(struct pt_regs *regs)
+ {
+ bool user = user_mode(regs);
++ bool pie = regs_irqs_disabled(regs);
+ unsigned int opcode, bcode;
+ unsigned long era = exception_era(regs);
+ irqentry_state_t state = irqentry_enter(regs);
+
+- if (regs->csr_prmd & CSR_PRMD_PIE)
++ if (!pie)
+ local_irq_enable();
+
+ if (__get_inst(&opcode, (u32 *)era, user))
+@@ -780,7 +783,7 @@ asmlinkage void noinstr do_bp(struct pt_regs *regs)
+ }
+
+ out:
+- if (regs->csr_prmd & CSR_PRMD_PIE)
++ if (!pie)
+ local_irq_disable();
+
+ irqentry_exit(regs, state);
+@@ -1015,6 +1018,7 @@ static void init_restore_lbt(void)
+
+ asmlinkage void noinstr do_lbt(struct pt_regs *regs)
+ {
++ bool pie = regs_irqs_disabled(regs);
+ irqentry_state_t state = irqentry_enter(regs);
+
+ /*
+@@ -1024,7 +1028,7 @@ asmlinkage void noinstr do_lbt(struct pt_regs *regs)
+ * (including the user using 'MOVGR2GCSR' to turn on TM, which
+ * will not trigger the BTE), we need to check PRMD first.
+ */
+- if (regs->csr_prmd & CSR_PRMD_PIE)
++ if (!pie)
+ local_irq_enable();
+
+ if (!cpu_has_lbt) {
+@@ -1038,7 +1042,7 @@ asmlinkage void noinstr do_lbt(struct pt_regs *regs)
+ preempt_enable();
+
+ out:
+- if (regs->csr_prmd & CSR_PRMD_PIE)
++ if (!pie)
+ local_irq_disable();
+
+ irqentry_exit(regs, state);
+diff --git a/arch/loongarch/kvm/intc/ipi.c b/arch/loongarch/kvm/intc/ipi.c
+index 93f4acd445236e..fe734dc062ed47 100644
+--- a/arch/loongarch/kvm/intc/ipi.c
++++ b/arch/loongarch/kvm/intc/ipi.c
+@@ -111,7 +111,7 @@ static int send_ipi_data(struct kvm_vcpu *vcpu, gpa_t addr, uint64_t data)
+ ret = kvm_io_bus_read(vcpu, KVM_IOCSR_BUS, addr, sizeof(val), &val);
+ srcu_read_unlock(&vcpu->kvm->srcu, idx);
+ if (unlikely(ret)) {
+- kvm_err("%s: : read date from addr %llx failed\n", __func__, addr);
++ kvm_err("%s: : read data from addr %llx failed\n", __func__, addr);
+ return ret;
+ }
+ /* Construct the mask by scanning the bit 27-30 */
+@@ -127,7 +127,7 @@ static int send_ipi_data(struct kvm_vcpu *vcpu, gpa_t addr, uint64_t data)
+ ret = kvm_io_bus_write(vcpu, KVM_IOCSR_BUS, addr, sizeof(val), &val);
+ srcu_read_unlock(&vcpu->kvm->srcu, idx);
+ if (unlikely(ret))
+- kvm_err("%s: : write date to addr %llx failed\n", __func__, addr);
++ kvm_err("%s: : write data to addr %llx failed\n", __func__, addr);
+
+ return ret;
+ }
+diff --git a/arch/loongarch/kvm/main.c b/arch/loongarch/kvm/main.c
+index b6864d6e5ec8d2..f5d6317814825b 100644
+--- a/arch/loongarch/kvm/main.c
++++ b/arch/loongarch/kvm/main.c
+@@ -296,10 +296,10 @@ int kvm_arch_enable_virtualization_cpu(void)
+ /*
+ * Enable virtualization features granting guest direct control of
+ * certain features:
+- * GCI=2: Trap on init or unimplement cache instruction.
++ * GCI=2: Trap on init or unimplemented cache instruction.
+ * TORU=0: Trap on Root Unimplement.
+ * CACTRL=1: Root control cache.
+- * TOP=0: Trap on Previlege.
++ * TOP=0: Trap on Privilege.
+ * TOE=0: Trap on Exception.
+ * TIT=0: Trap on Timer.
+ */
+diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c
+index 9e1a9b4aa4c6a9..92b8177f193bd5 100644
+--- a/arch/loongarch/kvm/vcpu.c
++++ b/arch/loongarch/kvm/vcpu.c
+@@ -294,6 +294,7 @@ static int kvm_pre_enter_guest(struct kvm_vcpu *vcpu)
+ vcpu->arch.aux_inuse &= ~KVM_LARCH_SWCSR_LATEST;
+
+ if (kvm_request_pending(vcpu) || xfer_to_guest_mode_work_pending()) {
++ kvm_lose_pmu(vcpu);
+ /* make sure the vcpu mode has been written */
+ smp_store_mb(vcpu->mode, OUTSIDE_GUEST_MODE);
+ local_irq_enable();
+@@ -874,6 +875,13 @@ static int kvm_set_one_reg(struct kvm_vcpu *vcpu,
+ vcpu->arch.st.guest_addr = 0;
+ memset(&vcpu->arch.irq_pending, 0, sizeof(vcpu->arch.irq_pending));
+ memset(&vcpu->arch.irq_clear, 0, sizeof(vcpu->arch.irq_clear));
++
++ /*
++ * When vCPU reset, clear the ESTAT and GINTC registers
++ * Other CSR registers are cleared with function _kvm_setcsr().
++ */
++ kvm_write_sw_gcsr(vcpu->arch.csr, LOONGARCH_CSR_GINTC, 0);
++ kvm_write_sw_gcsr(vcpu->arch.csr, LOONGARCH_CSR_ESTAT, 0);
+ break;
+ default:
+ ret = -EINVAL;
+diff --git a/arch/loongarch/mm/hugetlbpage.c b/arch/loongarch/mm/hugetlbpage.c
+index e4068906143b33..cea84d7f2b91a1 100644
+--- a/arch/loongarch/mm/hugetlbpage.c
++++ b/arch/loongarch/mm/hugetlbpage.c
+@@ -47,7 +47,7 @@ pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr,
+ pmd = pmd_offset(pud, addr);
+ }
+ }
+- return (pte_t *) pmd;
++ return pmd_none(pmdp_get(pmd)) ? NULL : (pte_t *) pmd;
+ }
+
+ uint64_t pmd_to_entrylo(unsigned long pmd_val)
+diff --git a/arch/loongarch/mm/init.c b/arch/loongarch/mm/init.c
+index ca5aa5f46a9fe5..7fab370efa74dd 100644
+--- a/arch/loongarch/mm/init.c
++++ b/arch/loongarch/mm/init.c
+@@ -65,9 +65,6 @@ void __init paging_init(void)
+ {
+ unsigned long max_zone_pfns[MAX_NR_ZONES];
+
+-#ifdef CONFIG_ZONE_DMA
+- max_zone_pfns[ZONE_DMA] = MAX_DMA_PFN;
+-#endif
+ #ifdef CONFIG_ZONE_DMA32
+ max_zone_pfns[ZONE_DMA32] = MAX_DMA32_PFN;
+ #endif
+diff --git a/arch/mips/crypto/Kconfig b/arch/mips/crypto/Kconfig
+index 7decd40c4e204f..545fc0e12422d8 100644
+--- a/arch/mips/crypto/Kconfig
++++ b/arch/mips/crypto/Kconfig
+@@ -3,9 +3,11 @@
+ menu "Accelerated Cryptographic Algorithms for CPU (mips)"
+
+ config CRYPTO_POLY1305_MIPS
+- tristate "Hash functions: Poly1305"
++ tristate
+ depends on MIPS
++ select CRYPTO_HASH
+ select CRYPTO_ARCH_HAVE_LIB_POLY1305
++ default CRYPTO_LIB_POLY1305_INTERNAL
+ help
+ Poly1305 authenticator algorithm (RFC7539)
+
+@@ -52,10 +54,11 @@ config CRYPTO_SHA512_OCTEON
+ Architecture: mips OCTEON using crypto instructions, when available
+
+ config CRYPTO_CHACHA_MIPS
+- tristate "Ciphers: ChaCha20, XChaCha20, XChaCha12 (MIPS32r2)"
++ tristate
+ depends on CPU_MIPS32_R2
+ select CRYPTO_SKCIPHER
+ select CRYPTO_ARCH_HAVE_LIB_CHACHA
++ default CRYPTO_LIB_CHACHA_INTERNAL
+ help
+ Length-preserving ciphers: ChaCha20, XChaCha20, and XChaCha12
+ stream cipher algorithms
+diff --git a/arch/mips/include/asm/mips-cm.h b/arch/mips/include/asm/mips-cm.h
+index 23ce951f445bb0..e4b34ea54c28fa 100644
+--- a/arch/mips/include/asm/mips-cm.h
++++ b/arch/mips/include/asm/mips-cm.h
+@@ -59,6 +59,16 @@ extern phys_addr_t mips_cm_l2sync_phys_base(void);
+ */
+ extern int mips_cm_is64;
+
++/*
++ * mips_cm_is_l2_hci_broken - determine if HCI is broken
++ *
++ * Some CM reports show that Hardware Cache Initialization is
++ * complete, but in reality it's not the case. They also incorrectly
++ * indicate that Hardware Cache Initialization is supported. This
++ * flags allows warning about this broken feature.
++ */
++extern bool mips_cm_is_l2_hci_broken;
++
+ /**
+ * mips_cm_error_report - Report CM cache errors
+ */
+@@ -97,6 +107,18 @@ static inline bool mips_cm_present(void)
+ #endif
+ }
+
++/**
++ * mips_cm_update_property - update property from the device tree
++ *
++ * Retrieve the properties from the device tree if a CM node exist and
++ * update the internal variable based on this.
++ */
++#ifdef CONFIG_MIPS_CM
++extern void mips_cm_update_property(void);
++#else
++static inline void mips_cm_update_property(void) {}
++#endif
++
+ /**
+ * mips_cm_has_l2sync - determine whether an L2-only sync region is present
+ *
+diff --git a/arch/mips/kernel/mips-cm.c b/arch/mips/kernel/mips-cm.c
+index 3eb2cfb893e19c..9cfabaa94d010f 100644
+--- a/arch/mips/kernel/mips-cm.c
++++ b/arch/mips/kernel/mips-cm.c
+@@ -5,6 +5,7 @@
+ */
+
+ #include <linux/errno.h>
++#include <linux/of.h>
+ #include <linux/percpu.h>
+ #include <linux/spinlock.h>
+
+@@ -14,6 +15,7 @@
+ void __iomem *mips_gcr_base;
+ void __iomem *mips_cm_l2sync_base;
+ int mips_cm_is64;
++bool mips_cm_is_l2_hci_broken;
+
+ static char *cm2_tr[8] = {
+ "mem", "gcr", "gic", "mmio",
+@@ -237,6 +239,18 @@ static void mips_cm_probe_l2sync(void)
+ mips_cm_l2sync_base = ioremap(addr, MIPS_CM_L2SYNC_SIZE);
+ }
+
++void mips_cm_update_property(void)
++{
++ struct device_node *cm_node;
++
++ cm_node = of_find_compatible_node(of_root, NULL, "mobileye,eyeq6-cm");
++ if (!cm_node)
++ return;
++ pr_info("HCI (Hardware Cache Init for the L2 cache) in GCR_L2_RAM_CONFIG from the CM3 is broken");
++ mips_cm_is_l2_hci_broken = true;
++ of_node_put(cm_node);
++}
++
+ int mips_cm_probe(void)
+ {
+ phys_addr_t addr;
+diff --git a/arch/parisc/kernel/pdt.c b/arch/parisc/kernel/pdt.c
+index 0f9b3b5914cf69..b70b67adb855f6 100644
+--- a/arch/parisc/kernel/pdt.c
++++ b/arch/parisc/kernel/pdt.c
+@@ -63,6 +63,7 @@ static unsigned long pdt_entry[MAX_PDT_ENTRIES] __page_aligned_bss;
+ #define PDT_ADDR_PERM_ERR (pdt_type != PDT_PDC ? 2UL : 0UL)
+ #define PDT_ADDR_SINGLE_ERR 1UL
+
++#ifdef CONFIG_PROC_FS
+ /* report PDT entries via /proc/meminfo */
+ void arch_report_meminfo(struct seq_file *m)
+ {
+@@ -74,6 +75,7 @@ void arch_report_meminfo(struct seq_file *m)
+ seq_printf(m, "PDT_cur_entries: %7lu\n",
+ pdt_status.pdt_entries);
+ }
++#endif
+
+ static int get_info_pat_new(void)
+ {
+diff --git a/arch/powerpc/crypto/Kconfig b/arch/powerpc/crypto/Kconfig
+index 5b315e9756b3fd..370db8192ce62d 100644
+--- a/arch/powerpc/crypto/Kconfig
++++ b/arch/powerpc/crypto/Kconfig
+@@ -3,10 +3,12 @@
+ menu "Accelerated Cryptographic Algorithms for CPU (powerpc)"
+
+ config CRYPTO_CURVE25519_PPC64
+- tristate "Public key crypto: Curve25519 (PowerPC64)"
++ tristate
+ depends on PPC64 && CPU_LITTLE_ENDIAN
++ select CRYPTO_KPP
+ select CRYPTO_LIB_CURVE25519_GENERIC
+ select CRYPTO_ARCH_HAVE_LIB_CURVE25519
++ default CRYPTO_LIB_CURVE25519_INTERNAL
+ help
+ Curve25519 algorithm
+
+@@ -91,11 +93,12 @@ config CRYPTO_AES_GCM_P10
+ later CPU. This module supports stitched acceleration for AES/GCM.
+
+ config CRYPTO_CHACHA20_P10
+- tristate "Ciphers: ChaCha20, XChacha20, XChacha12 (P10 or later)"
++ tristate
+ depends on PPC64 && CPU_LITTLE_ENDIAN && VSX
+ select CRYPTO_SKCIPHER
+ select CRYPTO_LIB_CHACHA_GENERIC
+ select CRYPTO_ARCH_HAVE_LIB_CHACHA
++ default CRYPTO_LIB_CHACHA_INTERNAL
+ help
+ Length-preserving ciphers: ChaCha20, XChaCha20, and XChaCha12
+ stream cipher algorithms
+diff --git a/arch/riscv/crypto/Kconfig b/arch/riscv/crypto/Kconfig
+index ad58dad9a58076..c67095a3d66907 100644
+--- a/arch/riscv/crypto/Kconfig
++++ b/arch/riscv/crypto/Kconfig
+@@ -22,7 +22,6 @@ config CRYPTO_CHACHA_RISCV64
+ tristate "Ciphers: ChaCha"
+ depends on 64BIT && RISCV_ISA_V && TOOLCHAIN_HAS_VECTOR_CRYPTO
+ select CRYPTO_SKCIPHER
+- select CRYPTO_LIB_CHACHA_GENERIC
+ help
+ Length-preserving ciphers: ChaCha20 stream cipher algorithm
+
+diff --git a/arch/riscv/include/asm/alternative-macros.h b/arch/riscv/include/asm/alternative-macros.h
+index 721ec275ce57e3..231d777d936c2d 100644
+--- a/arch/riscv/include/asm/alternative-macros.h
++++ b/arch/riscv/include/asm/alternative-macros.h
+@@ -115,24 +115,19 @@
+ \old_c
+ .endm
+
+-#define _ALTERNATIVE_CFG(old_c, ...) \
+- ALTERNATIVE_CFG old_c
+-
+-#define _ALTERNATIVE_CFG_2(old_c, ...) \
+- ALTERNATIVE_CFG old_c
++#define __ALTERNATIVE_CFG(old_c, ...) ALTERNATIVE_CFG old_c
++#define __ALTERNATIVE_CFG_2(old_c, ...) ALTERNATIVE_CFG old_c
+
+ #else /* !__ASSEMBLY__ */
+
+-#define __ALTERNATIVE_CFG(old_c) \
+- old_c "\n"
++#define __ALTERNATIVE_CFG(old_c, ...) old_c "\n"
++#define __ALTERNATIVE_CFG_2(old_c, ...) old_c "\n"
+
+-#define _ALTERNATIVE_CFG(old_c, ...) \
+- __ALTERNATIVE_CFG(old_c)
++#endif /* __ASSEMBLY__ */
+
+-#define _ALTERNATIVE_CFG_2(old_c, ...) \
+- __ALTERNATIVE_CFG(old_c)
++#define _ALTERNATIVE_CFG(old_c, ...) __ALTERNATIVE_CFG(old_c)
++#define _ALTERNATIVE_CFG_2(old_c, ...) __ALTERNATIVE_CFG_2(old_c)
+
+-#endif /* __ASSEMBLY__ */
+ #endif /* CONFIG_RISCV_ALTERNATIVE */
+
+ /*
+diff --git a/arch/riscv/include/asm/cacheflush.h b/arch/riscv/include/asm/cacheflush.h
+index 8de73f91bfa371..b59ffeb668d6a5 100644
+--- a/arch/riscv/include/asm/cacheflush.h
++++ b/arch/riscv/include/asm/cacheflush.h
+@@ -34,11 +34,6 @@ static inline void flush_dcache_page(struct page *page)
+ flush_dcache_folio(page_folio(page));
+ }
+
+-/*
+- * RISC-V doesn't have an instruction to flush parts of the instruction cache,
+- * so instead we just flush the whole thing.
+- */
+-#define flush_icache_range(start, end) flush_icache_all()
+ #define flush_icache_user_page(vma, pg, addr, len) \
+ do { \
+ if (vma->vm_flags & VM_EXEC) \
+@@ -78,6 +73,16 @@ void flush_icache_mm(struct mm_struct *mm, bool local);
+
+ #endif /* CONFIG_SMP */
+
++/*
++ * RISC-V doesn't have an instruction to flush parts of the instruction cache,
++ * so instead we just flush the whole thing.
++ */
++#define flush_icache_range flush_icache_range
++static inline void flush_icache_range(unsigned long start, unsigned long end)
++{
++ flush_icache_all();
++}
++
+ extern unsigned int riscv_cbom_block_size;
+ extern unsigned int riscv_cboz_block_size;
+ void riscv_init_cbo_blocksizes(void);
+diff --git a/arch/riscv/include/asm/ftrace.h b/arch/riscv/include/asm/ftrace.h
+index 2636ee00ccf0fd..9704a73e515a5d 100644
+--- a/arch/riscv/include/asm/ftrace.h
++++ b/arch/riscv/include/asm/ftrace.h
+@@ -207,7 +207,7 @@ ftrace_partial_regs(const struct ftrace_regs *fregs, struct pt_regs *regs)
+ {
+ struct __arch_ftrace_regs *afregs = arch_ftrace_regs(fregs);
+
+- memcpy(&regs->a0, afregs->args, sizeof(afregs->args));
++ memcpy(&regs->a_regs, afregs->args, sizeof(afregs->args));
+ regs->epc = afregs->epc;
+ regs->ra = afregs->ra;
+ regs->sp = afregs->sp;
+diff --git a/arch/riscv/include/asm/ptrace.h b/arch/riscv/include/asm/ptrace.h
+index b5b0adcc85c18e..2910231977cb71 100644
+--- a/arch/riscv/include/asm/ptrace.h
++++ b/arch/riscv/include/asm/ptrace.h
+@@ -23,14 +23,16 @@ struct pt_regs {
+ unsigned long t2;
+ unsigned long s0;
+ unsigned long s1;
+- unsigned long a0;
+- unsigned long a1;
+- unsigned long a2;
+- unsigned long a3;
+- unsigned long a4;
+- unsigned long a5;
+- unsigned long a6;
+- unsigned long a7;
++ struct_group(a_regs,
++ unsigned long a0;
++ unsigned long a1;
++ unsigned long a2;
++ unsigned long a3;
++ unsigned long a4;
++ unsigned long a5;
++ unsigned long a6;
++ unsigned long a7;
++ );
+ unsigned long s2;
+ unsigned long s3;
+ unsigned long s4;
+diff --git a/arch/riscv/kernel/probes/uprobes.c b/arch/riscv/kernel/probes/uprobes.c
+index 4b3dc8beaf77d3..cc15f7ca6cc17b 100644
+--- a/arch/riscv/kernel/probes/uprobes.c
++++ b/arch/riscv/kernel/probes/uprobes.c
+@@ -167,6 +167,7 @@ void arch_uprobe_copy_ixol(struct page *page, unsigned long vaddr,
+ /* Initialize the slot */
+ void *kaddr = kmap_atomic(page);
+ void *dst = kaddr + (vaddr & ~PAGE_MASK);
++ unsigned long start = (unsigned long)dst;
+
+ memcpy(dst, src, len);
+
+@@ -176,13 +177,6 @@ void arch_uprobe_copy_ixol(struct page *page, unsigned long vaddr,
+ *(uprobe_opcode_t *)dst = __BUG_INSN_32;
+ }
+
++ flush_icache_range(start, start + len);
+ kunmap_atomic(kaddr);
+-
+- /*
+- * We probably need flush_icache_user_page() but it needs vma.
+- * This should work on most of architectures by default. If
+- * architecture needs to do something different it can define
+- * its own version of the function.
+- */
+- flush_dcache_page(page);
+ }
+diff --git a/arch/s390/crypto/Kconfig b/arch/s390/crypto/Kconfig
+index b760232537f1c6..8c4db8b64fa21a 100644
+--- a/arch/s390/crypto/Kconfig
++++ b/arch/s390/crypto/Kconfig
+@@ -108,11 +108,12 @@ config CRYPTO_DES_S390
+ As of z196 the CTR mode is hardware accelerated.
+
+ config CRYPTO_CHACHA_S390
+- tristate "Ciphers: ChaCha20"
++ tristate
+ depends on S390
+ select CRYPTO_SKCIPHER
+ select CRYPTO_LIB_CHACHA_GENERIC
+ select CRYPTO_ARCH_HAVE_LIB_CHACHA
++ default CRYPTO_LIB_CHACHA_INTERNAL
+ help
+ Length-preserving cipher: ChaCha20 stream cipher (RFC 7539)
+
+diff --git a/arch/s390/kvm/intercept.c b/arch/s390/kvm/intercept.c
+index 610dd44a948b22..a06a000f196ce0 100644
+--- a/arch/s390/kvm/intercept.c
++++ b/arch/s390/kvm/intercept.c
+@@ -95,7 +95,7 @@ static int handle_validity(struct kvm_vcpu *vcpu)
+
+ vcpu->stat.exit_validity++;
+ trace_kvm_s390_intercept_validity(vcpu, viwhy);
+- KVM_EVENT(3, "validity intercept 0x%x for pid %u (kvm 0x%pK)", viwhy,
++ KVM_EVENT(3, "validity intercept 0x%x for pid %u (kvm 0x%p)", viwhy,
+ current->pid, vcpu->kvm);
+
+ /* do not warn on invalid runtime instrumentation mode */
+diff --git a/arch/s390/kvm/interrupt.c b/arch/s390/kvm/interrupt.c
+index 07ff0e10cb7f5c..c0558f05400732 100644
+--- a/arch/s390/kvm/interrupt.c
++++ b/arch/s390/kvm/interrupt.c
+@@ -3161,7 +3161,7 @@ void kvm_s390_gisa_clear(struct kvm *kvm)
+ if (!gi->origin)
+ return;
+ gisa_clear_ipm(gi->origin);
+- VM_EVENT(kvm, 3, "gisa 0x%pK cleared", gi->origin);
++ VM_EVENT(kvm, 3, "gisa 0x%p cleared", gi->origin);
+ }
+
+ void kvm_s390_gisa_init(struct kvm *kvm)
+@@ -3178,7 +3178,7 @@ void kvm_s390_gisa_init(struct kvm *kvm)
+ gi->timer.function = gisa_vcpu_kicker;
+ memset(gi->origin, 0, sizeof(struct kvm_s390_gisa));
+ gi->origin->next_alert = (u32)virt_to_phys(gi->origin);
+- VM_EVENT(kvm, 3, "gisa 0x%pK initialized", gi->origin);
++ VM_EVENT(kvm, 3, "gisa 0x%p initialized", gi->origin);
+ }
+
+ void kvm_s390_gisa_enable(struct kvm *kvm)
+@@ -3219,7 +3219,7 @@ void kvm_s390_gisa_destroy(struct kvm *kvm)
+ process_gib_alert_list();
+ hrtimer_cancel(&gi->timer);
+ gi->origin = NULL;
+- VM_EVENT(kvm, 3, "gisa 0x%pK destroyed", gisa);
++ VM_EVENT(kvm, 3, "gisa 0x%p destroyed", gisa);
+ }
+
+ void kvm_s390_gisa_disable(struct kvm *kvm)
+@@ -3468,7 +3468,7 @@ int __init kvm_s390_gib_init(u8 nisc)
+ }
+ }
+
+- KVM_EVENT(3, "gib 0x%pK (nisc=%d) initialized", gib, gib->nisc);
++ KVM_EVENT(3, "gib 0x%p (nisc=%d) initialized", gib, gib->nisc);
+ goto out;
+
+ out_unreg_gal:
+diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
+index 020502af7dc98c..8b8e2173fe8296 100644
+--- a/arch/s390/kvm/kvm-s390.c
++++ b/arch/s390/kvm/kvm-s390.c
+@@ -1020,7 +1020,7 @@ static int kvm_s390_set_mem_control(struct kvm *kvm, struct kvm_device_attr *att
+ }
+ mutex_unlock(&kvm->lock);
+ VM_EVENT(kvm, 3, "SET: max guest address: %lu", new_limit);
+- VM_EVENT(kvm, 3, "New guest asce: 0x%pK",
++ VM_EVENT(kvm, 3, "New guest asce: 0x%p",
+ (void *) kvm->arch.gmap->asce);
+ break;
+ }
+@@ -3464,7 +3464,7 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
+ kvm_s390_gisa_init(kvm);
+ INIT_LIST_HEAD(&kvm->arch.pv.need_cleanup);
+ kvm->arch.pv.set_aside = NULL;
+- KVM_EVENT(3, "vm 0x%pK created by pid %u", kvm, current->pid);
++ KVM_EVENT(3, "vm 0x%p created by pid %u", kvm, current->pid);
+
+ return 0;
+ out_err:
+@@ -3527,7 +3527,7 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
+ kvm_s390_destroy_adapters(kvm);
+ kvm_s390_clear_float_irqs(kvm);
+ kvm_s390_vsie_destroy(kvm);
+- KVM_EVENT(3, "vm 0x%pK destroyed", kvm);
++ KVM_EVENT(3, "vm 0x%p destroyed", kvm);
+ }
+
+ /* Section: vcpu related */
+@@ -3648,7 +3648,7 @@ static int sca_switch_to_extended(struct kvm *kvm)
+
+ free_page((unsigned long)old_sca);
+
+- VM_EVENT(kvm, 2, "Switched to ESCA (0x%pK -> 0x%pK)",
++ VM_EVENT(kvm, 2, "Switched to ESCA (0x%p -> 0x%p)",
+ old_sca, kvm->arch.sca);
+ return 0;
+ }
+@@ -4025,7 +4025,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
+ goto out_free_sie_block;
+ }
+
+- VM_EVENT(vcpu->kvm, 3, "create cpu %d at 0x%pK, sie block at 0x%pK",
++ VM_EVENT(vcpu->kvm, 3, "create cpu %d at 0x%p, sie block at 0x%p",
+ vcpu->vcpu_id, vcpu, vcpu->arch.sie_block);
+ trace_kvm_s390_create_vcpu(vcpu->vcpu_id, vcpu, vcpu->arch.sie_block);
+
+diff --git a/arch/s390/kvm/trace-s390.h b/arch/s390/kvm/trace-s390.h
+index 9ac92dbf680dbb..9e28f165c114ca 100644
+--- a/arch/s390/kvm/trace-s390.h
++++ b/arch/s390/kvm/trace-s390.h
+@@ -56,7 +56,7 @@ TRACE_EVENT(kvm_s390_create_vcpu,
+ __entry->sie_block = sie_block;
+ ),
+
+- TP_printk("create cpu %d at 0x%pK, sie block at 0x%pK",
++ TP_printk("create cpu %d at 0x%p, sie block at 0x%p",
+ __entry->id, __entry->vcpu, __entry->sie_block)
+ );
+
+@@ -255,7 +255,7 @@ TRACE_EVENT(kvm_s390_enable_css,
+ __entry->kvm = kvm;
+ ),
+
+- TP_printk("enabling channel I/O support (kvm @ %pK)\n",
++ TP_printk("enabling channel I/O support (kvm @ %p)\n",
+ __entry->kvm)
+ );
+
+diff --git a/arch/um/include/linux/time-internal.h b/arch/um/include/linux/time-internal.h
+index b22226634ff609..138908b999d76c 100644
+--- a/arch/um/include/linux/time-internal.h
++++ b/arch/um/include/linux/time-internal.h
+@@ -83,6 +83,8 @@ extern void time_travel_not_configured(void);
+ #define time_travel_del_event(...) time_travel_not_configured()
+ #endif /* CONFIG_UML_TIME_TRAVEL_SUPPORT */
+
++extern unsigned long tt_extra_sched_jiffies;
++
+ /*
+ * Without CONFIG_UML_TIME_TRAVEL_SUPPORT this is a linker error if used,
+ * which is intentional since we really shouldn't link it in that case.
+diff --git a/arch/um/kernel/skas/syscall.c b/arch/um/kernel/skas/syscall.c
+index b09e85279d2b8c..a5beaea2967ec3 100644
+--- a/arch/um/kernel/skas/syscall.c
++++ b/arch/um/kernel/skas/syscall.c
+@@ -31,6 +31,17 @@ void handle_syscall(struct uml_pt_regs *r)
+ goto out;
+
+ syscall = UPT_SYSCALL_NR(r);
++
++ /*
++ * If no time passes, then sched_yield may not actually yield, causing
++ * broken spinlock implementations in userspace (ASAN) to hang for long
++ * periods of time.
++ */
++ if ((time_travel_mode == TT_MODE_INFCPU ||
++ time_travel_mode == TT_MODE_EXTERNAL) &&
++ syscall == __NR_sched_yield)
++ tt_extra_sched_jiffies += 1;
++
+ if (syscall >= 0 && syscall < __NR_syscalls) {
+ unsigned long ret = EXECUTE_SYSCALL(syscall, regs);
+
+diff --git a/arch/x86/crypto/Kconfig b/arch/x86/crypto/Kconfig
+index 4757bf922075b6..3d948f10c94cd7 100644
+--- a/arch/x86/crypto/Kconfig
++++ b/arch/x86/crypto/Kconfig
+@@ -3,10 +3,12 @@
+ menu "Accelerated Cryptographic Algorithms for CPU (x86)"
+
+ config CRYPTO_CURVE25519_X86
+- tristate "Public key crypto: Curve25519 (ADX)"
++ tristate
+ depends on X86 && 64BIT
++ select CRYPTO_KPP
+ select CRYPTO_LIB_CURVE25519_GENERIC
+ select CRYPTO_ARCH_HAVE_LIB_CURVE25519
++ default CRYPTO_LIB_CURVE25519_INTERNAL
+ help
+ Curve25519 algorithm
+
+@@ -348,11 +350,12 @@ config CRYPTO_ARIA_GFNI_AVX512_X86_64
+ Processes 64 blocks in parallel.
+
+ config CRYPTO_CHACHA20_X86_64
+- tristate "Ciphers: ChaCha20, XChaCha20, XChaCha12 (SSSE3/AVX2/AVX-512VL)"
++ tristate
+ depends on X86 && 64BIT
+ select CRYPTO_SKCIPHER
+ select CRYPTO_LIB_CHACHA_GENERIC
+ select CRYPTO_ARCH_HAVE_LIB_CHACHA
++ default CRYPTO_LIB_CHACHA_INTERNAL
+ help
+ Length-preserving ciphers: ChaCha20, XChaCha20, and XChaCha12
+ stream cipher algorithms
+@@ -417,10 +420,12 @@ config CRYPTO_POLYVAL_CLMUL_NI
+ - CLMUL-NI (carry-less multiplication new instructions)
+
+ config CRYPTO_POLY1305_X86_64
+- tristate "Hash functions: Poly1305 (SSE2/AVX2)"
++ tristate
+ depends on X86 && 64BIT
++ select CRYPTO_HASH
+ select CRYPTO_LIB_POLY1305_GENERIC
+ select CRYPTO_ARCH_HAVE_LIB_POLY1305
++ default CRYPTO_LIB_POLY1305_INTERNAL
+ help
+ Poly1305 authenticator algorithm (RFC7539)
+
+diff --git a/arch/x86/entry/entry.S b/arch/x86/entry/entry.S
+index b7ea3e8e9eccd5..58e3124ee2b420 100644
+--- a/arch/x86/entry/entry.S
++++ b/arch/x86/entry/entry.S
+@@ -18,7 +18,7 @@
+
+ SYM_FUNC_START(entry_ibpb)
+ movl $MSR_IA32_PRED_CMD, %ecx
+- movl $PRED_CMD_IBPB, %eax
++ movl _ASM_RIP(x86_pred_cmd), %eax
+ xorl %edx, %edx
+ wrmsr
+
+diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
+index 3a27c50080f4fe..ce8d4fdf54fbb0 100644
+--- a/arch/x86/events/core.c
++++ b/arch/x86/events/core.c
+@@ -628,7 +628,7 @@ int x86_pmu_hw_config(struct perf_event *event)
+ if (event->attr.type == event->pmu->type)
+ event->hw.config |= x86_pmu_get_event_config(event);
+
+- if (!event->attr.freq && x86_pmu.limit_period) {
++ if (is_sampling_event(event) && !event->attr.freq && x86_pmu.limit_period) {
+ s64 left = event->attr.sample_period;
+ x86_pmu.limit_period(event, &left);
+ if (left > event->attr.sample_period)
+diff --git a/arch/x86/include/asm/intel-family.h b/arch/x86/include/asm/intel-family.h
+index 6d7b04ffc5fd0e..ef5a06ddf02877 100644
+--- a/arch/x86/include/asm/intel-family.h
++++ b/arch/x86/include/asm/intel-family.h
+@@ -115,6 +115,8 @@
+ #define INTEL_GRANITERAPIDS_X IFM(6, 0xAD)
+ #define INTEL_GRANITERAPIDS_D IFM(6, 0xAE)
+
++#define INTEL_BARTLETTLAKE IFM(6, 0xD7) /* Raptor Cove */
++
+ /* "Hybrid" Processors (P-Core/E-Core) */
+
+ #define INTEL_LAKEFIELD IFM(6, 0x8A) /* Sunny Cove / Tremont */
+diff --git a/arch/x86/include/asm/pgalloc.h b/arch/x86/include/asm/pgalloc.h
+index dd4841231bb9f4..3ad786d06c088c 100644
+--- a/arch/x86/include/asm/pgalloc.h
++++ b/arch/x86/include/asm/pgalloc.h
+@@ -6,6 +6,8 @@
+ #include <linux/mm.h> /* for struct page */
+ #include <linux/pagemap.h>
+
++#include <asm/cpufeature.h>
++
+ #define __HAVE_ARCH_PTE_ALLOC_ONE
+ #define __HAVE_ARCH_PGD_FREE
+ #include <asm-generic/pgalloc.h>
+@@ -34,16 +36,17 @@ static inline void paravirt_release_p4d(unsigned long pfn) {}
+ */
+ extern gfp_t __userpte_alloc_gfp;
+
+-#ifdef CONFIG_MITIGATION_PAGE_TABLE_ISOLATION
+ /*
+- * Instead of one PGD, we acquire two PGDs. Being order-1, it is
+- * both 8k in size and 8k-aligned. That lets us just flip bit 12
+- * in a pointer to swap between the two 4k halves.
++ * In case of Page Table Isolation active, we acquire two PGDs instead of one.
++ * Being order-1, it is both 8k in size and 8k-aligned. That lets us just
++ * flip bit 12 in a pointer to swap between the two 4k halves.
+ */
+-#define PGD_ALLOCATION_ORDER 1
+-#else
+-#define PGD_ALLOCATION_ORDER 0
+-#endif
++static inline unsigned int pgd_allocation_order(void)
++{
++ if (cpu_feature_enabled(X86_FEATURE_PTI))
++ return 1;
++ return 0;
++}
+
+ /*
+ * Allocate and free page tables.
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index a5d0998d760499..9152285aaaf961 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -1578,7 +1578,7 @@ static void __init spec_ctrl_disable_kernel_rrsba(void)
+ rrsba_disabled = true;
+ }
+
+-static void __init spectre_v2_determine_rsb_fill_type_at_vmexit(enum spectre_v2_mitigation mode)
++static void __init spectre_v2_select_rsb_mitigation(enum spectre_v2_mitigation mode)
+ {
+ /*
+ * Similar to context switches, there are two types of RSB attacks
+@@ -1602,27 +1602,30 @@ static void __init spectre_v2_determine_rsb_fill_type_at_vmexit(enum spectre_v2_
+ */
+ switch (mode) {
+ case SPECTRE_V2_NONE:
+- return;
++ break;
+
+- case SPECTRE_V2_EIBRS_LFENCE:
+ case SPECTRE_V2_EIBRS:
++ case SPECTRE_V2_EIBRS_LFENCE:
++ case SPECTRE_V2_EIBRS_RETPOLINE:
+ if (boot_cpu_has_bug(X86_BUG_EIBRS_PBRSB)) {
+- setup_force_cpu_cap(X86_FEATURE_RSB_VMEXIT_LITE);
+ pr_info("Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT\n");
++ setup_force_cpu_cap(X86_FEATURE_RSB_VMEXIT_LITE);
+ }
+- return;
++ break;
+
+- case SPECTRE_V2_EIBRS_RETPOLINE:
+ case SPECTRE_V2_RETPOLINE:
+ case SPECTRE_V2_LFENCE:
+ case SPECTRE_V2_IBRS:
++ pr_info("Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT\n");
++ setup_force_cpu_cap(X86_FEATURE_RSB_CTXSW);
+ setup_force_cpu_cap(X86_FEATURE_RSB_VMEXIT);
+- pr_info("Spectre v2 / SpectreRSB : Filling RSB on VMEXIT\n");
+- return;
+- }
++ break;
+
+- pr_warn_once("Unknown Spectre v2 mode, disabling RSB mitigation at VM exit");
+- dump_stack();
++ default:
++ pr_warn_once("Unknown Spectre v2 mode, disabling RSB mitigation\n");
++ dump_stack();
++ break;
++ }
+ }
+
+ /*
+@@ -1854,10 +1857,7 @@ static void __init spectre_v2_select_mitigation(void)
+ *
+ * FIXME: Is this pointless for retbleed-affected AMD?
+ */
+- setup_force_cpu_cap(X86_FEATURE_RSB_CTXSW);
+- pr_info("Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch\n");
+-
+- spectre_v2_determine_rsb_fill_type_at_vmexit(mode);
++ spectre_v2_select_rsb_mitigation(mode);
+
+ /*
+ * Retpoline protects the kernel, but doesn't protect firmware. IBRS
+diff --git a/arch/x86/kernel/i8253.c b/arch/x86/kernel/i8253.c
+index 80e262bb627fe1..cb9852ad609893 100644
+--- a/arch/x86/kernel/i8253.c
++++ b/arch/x86/kernel/i8253.c
+@@ -46,7 +46,8 @@ bool __init pit_timer_init(void)
+ * VMMs otherwise steal CPU time just to pointlessly waggle
+ * the (masked) IRQ.
+ */
+- clockevent_i8253_disable();
++ scoped_guard(irq)
++ clockevent_i8253_disable();
+ return false;
+ }
+ clockevent_i8253_init(true);
+diff --git a/arch/x86/kernel/machine_kexec_32.c b/arch/x86/kernel/machine_kexec_32.c
+index 80265162aefff9..1f325304c4a842 100644
+--- a/arch/x86/kernel/machine_kexec_32.c
++++ b/arch/x86/kernel/machine_kexec_32.c
+@@ -42,7 +42,7 @@ static void load_segments(void)
+
+ static void machine_kexec_free_page_tables(struct kimage *image)
+ {
+- free_pages((unsigned long)image->arch.pgd, PGD_ALLOCATION_ORDER);
++ free_pages((unsigned long)image->arch.pgd, pgd_allocation_order());
+ image->arch.pgd = NULL;
+ #ifdef CONFIG_X86_PAE
+ free_page((unsigned long)image->arch.pmd0);
+@@ -59,7 +59,7 @@ static void machine_kexec_free_page_tables(struct kimage *image)
+ static int machine_kexec_alloc_page_tables(struct kimage *image)
+ {
+ image->arch.pgd = (pgd_t *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
+- PGD_ALLOCATION_ORDER);
++ pgd_allocation_order());
+ #ifdef CONFIG_X86_PAE
+ image->arch.pmd0 = (pmd_t *)get_zeroed_page(GFP_KERNEL);
+ image->arch.pmd1 = (pmd_t *)get_zeroed_page(GFP_KERNEL);
+diff --git a/arch/x86/kvm/svm/avic.c b/arch/x86/kvm/svm/avic.c
+index 65fd245a9953ce..63dea8ecd7efc3 100644
+--- a/arch/x86/kvm/svm/avic.c
++++ b/arch/x86/kvm/svm/avic.c
+@@ -820,7 +820,7 @@ static int svm_ir_list_add(struct vcpu_svm *svm, struct amd_iommu_pi_data *pi)
+ * Allocating new amd_iommu_pi_data, which will get
+ * add to the per-vcpu ir_list.
+ */
+- ir = kzalloc(sizeof(struct amd_svm_iommu_ir), GFP_KERNEL_ACCOUNT);
++ ir = kzalloc(sizeof(struct amd_svm_iommu_ir), GFP_ATOMIC | __GFP_ACCOUNT);
+ if (!ir) {
+ ret = -ENOMEM;
+ goto out;
+@@ -896,6 +896,7 @@ int avic_pi_update_irte(struct kvm *kvm, unsigned int host_irq,
+ {
+ struct kvm_kernel_irq_routing_entry *e;
+ struct kvm_irq_routing_table *irq_rt;
++ bool enable_remapped_mode = true;
+ int idx, ret = 0;
+
+ if (!kvm_arch_has_assigned_device(kvm) ||
+@@ -933,6 +934,8 @@ int avic_pi_update_irte(struct kvm *kvm, unsigned int host_irq,
+ kvm_vcpu_apicv_active(&svm->vcpu)) {
+ struct amd_iommu_pi_data pi;
+
++ enable_remapped_mode = false;
++
+ /* Try to enable guest_mode in IRTE */
+ pi.base = __sme_set(page_to_phys(svm->avic_backing_page) &
+ AVIC_HPA_MASK);
+@@ -951,33 +954,6 @@ int avic_pi_update_irte(struct kvm *kvm, unsigned int host_irq,
+ */
+ if (!ret && pi.is_guest_mode)
+ svm_ir_list_add(svm, &pi);
+- } else {
+- /* Use legacy mode in IRTE */
+- struct amd_iommu_pi_data pi;
+-
+- /**
+- * Here, pi is used to:
+- * - Tell IOMMU to use legacy mode for this interrupt.
+- * - Retrieve ga_tag of prior interrupt remapping data.
+- */
+- pi.prev_ga_tag = 0;
+- pi.is_guest_mode = false;
+- ret = irq_set_vcpu_affinity(host_irq, &pi);
+-
+- /**
+- * Check if the posted interrupt was previously
+- * setup with the guest_mode by checking if the ga_tag
+- * was cached. If so, we need to clean up the per-vcpu
+- * ir_list.
+- */
+- if (!ret && pi.prev_ga_tag) {
+- int id = AVIC_GATAG_TO_VCPUID(pi.prev_ga_tag);
+- struct kvm_vcpu *vcpu;
+-
+- vcpu = kvm_get_vcpu_by_id(kvm, id);
+- if (vcpu)
+- svm_ir_list_del(to_svm(vcpu), &pi);
+- }
+ }
+
+ if (!ret && svm) {
+@@ -993,6 +969,34 @@ int avic_pi_update_irte(struct kvm *kvm, unsigned int host_irq,
+ }
+
+ ret = 0;
++ if (enable_remapped_mode) {
++ /* Use legacy mode in IRTE */
++ struct amd_iommu_pi_data pi;
++
++ /**
++ * Here, pi is used to:
++ * - Tell IOMMU to use legacy mode for this interrupt.
++ * - Retrieve ga_tag of prior interrupt remapping data.
++ */
++ pi.prev_ga_tag = 0;
++ pi.is_guest_mode = false;
++ ret = irq_set_vcpu_affinity(host_irq, &pi);
++
++ /**
++ * Check if the posted interrupt was previously
++ * setup with the guest_mode by checking if the ga_tag
++ * was cached. If so, we need to clean up the per-vcpu
++ * ir_list.
++ */
++ if (!ret && pi.prev_ga_tag) {
++ int id = AVIC_GATAG_TO_VCPUID(pi.prev_ga_tag);
++ struct kvm_vcpu *vcpu;
++
++ vcpu = kvm_get_vcpu_by_id(kvm, id);
++ if (vcpu)
++ svm_ir_list_del(to_svm(vcpu), &pi);
++ }
++ }
+ out:
+ srcu_read_unlock(&kvm->irq_srcu, idx);
+ return ret;
+diff --git a/arch/x86/kvm/vmx/posted_intr.c b/arch/x86/kvm/vmx/posted_intr.c
+index ec08fa3caf43ce..6b803324a981a2 100644
+--- a/arch/x86/kvm/vmx/posted_intr.c
++++ b/arch/x86/kvm/vmx/posted_intr.c
+@@ -274,6 +274,7 @@ int vmx_pi_update_irte(struct kvm *kvm, unsigned int host_irq,
+ {
+ struct kvm_kernel_irq_routing_entry *e;
+ struct kvm_irq_routing_table *irq_rt;
++ bool enable_remapped_mode = true;
+ struct kvm_lapic_irq irq;
+ struct kvm_vcpu *vcpu;
+ struct vcpu_data vcpu_info;
+@@ -312,21 +313,8 @@ int vmx_pi_update_irte(struct kvm *kvm, unsigned int host_irq,
+
+ kvm_set_msi_irq(kvm, e, &irq);
+ if (!kvm_intr_is_single_vcpu(kvm, &irq, &vcpu) ||
+- !kvm_irq_is_postable(&irq)) {
+- /*
+- * Make sure the IRTE is in remapped mode if
+- * we don't handle it in posted mode.
+- */
+- ret = irq_set_vcpu_affinity(host_irq, NULL);
+- if (ret < 0) {
+- printk(KERN_INFO
+- "failed to back to remapped mode, irq: %u\n",
+- host_irq);
+- goto out;
+- }
+-
++ !kvm_irq_is_postable(&irq))
+ continue;
+- }
+
+ vcpu_info.pi_desc_addr = __pa(vcpu_to_pi_desc(vcpu));
+ vcpu_info.vector = irq.vector;
+@@ -334,11 +322,12 @@ int vmx_pi_update_irte(struct kvm *kvm, unsigned int host_irq,
+ trace_kvm_pi_irte_update(host_irq, vcpu->vcpu_id, e->gsi,
+ vcpu_info.vector, vcpu_info.pi_desc_addr, set);
+
+- if (set)
+- ret = irq_set_vcpu_affinity(host_irq, &vcpu_info);
+- else
+- ret = irq_set_vcpu_affinity(host_irq, NULL);
++ if (!set)
++ continue;
+
++ enable_remapped_mode = false;
++
++ ret = irq_set_vcpu_affinity(host_irq, &vcpu_info);
+ if (ret < 0) {
+ printk(KERN_INFO "%s: failed to update PI IRTE\n",
+ __func__);
+@@ -346,6 +335,9 @@ int vmx_pi_update_irte(struct kvm *kvm, unsigned int host_irq,
+ }
+ }
+
++ if (enable_remapped_mode)
++ ret = irq_set_vcpu_affinity(host_irq, NULL);
++
+ ret = 0;
+ out:
+ srcu_read_unlock(&kvm->irq_srcu, idx);
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 91f9590a8ddec6..c8dd29bccc71e5 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -13565,15 +13565,22 @@ int kvm_arch_irq_bypass_add_producer(struct irq_bypass_consumer *cons,
+ {
+ struct kvm_kernel_irqfd *irqfd =
+ container_of(cons, struct kvm_kernel_irqfd, consumer);
++ struct kvm *kvm = irqfd->kvm;
+ int ret;
+
+- irqfd->producer = prod;
+ kvm_arch_start_assignment(irqfd->kvm);
++
++ spin_lock_irq(&kvm->irqfds.lock);
++ irqfd->producer = prod;
++
+ ret = kvm_x86_call(pi_update_irte)(irqfd->kvm,
+ prod->irq, irqfd->gsi, 1);
+ if (ret)
+ kvm_arch_end_assignment(irqfd->kvm);
+
++ spin_unlock_irq(&kvm->irqfds.lock);
++
++
+ return ret;
+ }
+
+@@ -13583,9 +13590,9 @@ void kvm_arch_irq_bypass_del_producer(struct irq_bypass_consumer *cons,
+ int ret;
+ struct kvm_kernel_irqfd *irqfd =
+ container_of(cons, struct kvm_kernel_irqfd, consumer);
++ struct kvm *kvm = irqfd->kvm;
+
+ WARN_ON(irqfd->producer != prod);
+- irqfd->producer = NULL;
+
+ /*
+ * When producer of consumer is unregistered, we change back to
+@@ -13593,12 +13600,18 @@ void kvm_arch_irq_bypass_del_producer(struct irq_bypass_consumer *cons,
+ * when the irq is masked/disabled or the consumer side (KVM
+ * int this case doesn't want to receive the interrupts.
+ */
++ spin_lock_irq(&kvm->irqfds.lock);
++ irqfd->producer = NULL;
++
+ ret = kvm_x86_call(pi_update_irte)(irqfd->kvm,
+ prod->irq, irqfd->gsi, 0);
+ if (ret)
+ printk(KERN_INFO "irq bypass consumer (token %p) unregistration"
+ " fails: %d\n", irqfd->consumer.token, ret);
+
++ spin_unlock_irq(&kvm->irqfds.lock);
++
++
+ kvm_arch_end_assignment(irqfd->kvm);
+ }
+
+@@ -13611,7 +13624,8 @@ int kvm_arch_update_irqfd_routing(struct kvm *kvm, unsigned int host_irq,
+ bool kvm_arch_irqfd_route_changed(struct kvm_kernel_irq_routing_entry *old,
+ struct kvm_kernel_irq_routing_entry *new)
+ {
+- if (new->type != KVM_IRQ_ROUTING_MSI)
++ if (old->type != KVM_IRQ_ROUTING_MSI ||
++ new->type != KVM_IRQ_ROUTING_MSI)
+ return true;
+
+ return !!memcmp(&old->msi, &new->msi, sizeof(new->msi));
+diff --git a/arch/x86/lib/x86-opcode-map.txt b/arch/x86/lib/x86-opcode-map.txt
+index caedb3ef6688fc..f5dd84eb55dcda 100644
+--- a/arch/x86/lib/x86-opcode-map.txt
++++ b/arch/x86/lib/x86-opcode-map.txt
+@@ -996,8 +996,8 @@ AVXcode: 4
+ 83: Grp1 Ev,Ib (1A),(es)
+ # CTESTSCC instructions are: CTESTB, CTESTBE, CTESTF, CTESTL, CTESTLE, CTESTNB, CTESTNBE, CTESTNL,
+ # CTESTNLE, CTESTNO, CTESTNS, CTESTNZ, CTESTO, CTESTS, CTESTT, CTESTZ
+-84: CTESTSCC (ev)
+-85: CTESTSCC (es) | CTESTSCC (66),(es)
++84: CTESTSCC Eb,Gb (ev)
++85: CTESTSCC Ev,Gv (es) | CTESTSCC Ev,Gv (66),(es)
+ 88: POPCNT Gv,Ev (es) | POPCNT Gv,Ev (66),(es)
+ 8f: POP2 Bq,Rq (000),(11B),(ev)
+ a5: SHLD Ev,Gv,CL (es) | SHLD Ev,Gv,CL (66),(es)
+diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
+index 1fef5ad32d5a8b..9b0ee41b545c7e 100644
+--- a/arch/x86/mm/pgtable.c
++++ b/arch/x86/mm/pgtable.c
+@@ -404,7 +404,7 @@ static inline pgd_t *_pgd_alloc(struct mm_struct *mm)
+ * We allocate one page for pgd.
+ */
+ if (!SHARED_KERNEL_PMD)
+- return __pgd_alloc(mm, PGD_ALLOCATION_ORDER);
++ return __pgd_alloc(mm, pgd_allocation_order());
+
+ /*
+ * Now PAE kernel is not running as a Xen domain. We can allocate
+@@ -424,7 +424,7 @@ static inline void _pgd_free(struct mm_struct *mm, pgd_t *pgd)
+
+ static inline pgd_t *_pgd_alloc(struct mm_struct *mm)
+ {
+- return __pgd_alloc(mm, PGD_ALLOCATION_ORDER);
++ return __pgd_alloc(mm, pgd_allocation_order());
+ }
+
+ static inline void _pgd_free(struct mm_struct *mm, pgd_t *pgd)
+diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
+index 6cf881a942bbed..e491c75b2a6889 100644
+--- a/arch/x86/mm/tlb.c
++++ b/arch/x86/mm/tlb.c
+@@ -389,9 +389,9 @@ static void cond_mitigation(struct task_struct *next)
+ prev_mm = this_cpu_read(cpu_tlbstate.last_user_mm_spec);
+
+ /*
+- * Avoid user/user BTB poisoning by flushing the branch predictor
+- * when switching between processes. This stops one process from
+- * doing Spectre-v2 attacks on another.
++ * Avoid user->user BTB/RSB poisoning by flushing them when switching
++ * between processes. This stops one process from doing Spectre-v2
++ * attacks on another.
+ *
+ * Both, the conditional and the always IBPB mode use the mm
+ * pointer to avoid the IBPB when switching between tasks of the
+diff --git a/arch/x86/pci/xen.c b/arch/x86/pci/xen.c
+index 0f2fe524f60dcd..b8755cde241993 100644
+--- a/arch/x86/pci/xen.c
++++ b/arch/x86/pci/xen.c
+@@ -436,7 +436,8 @@ static struct msi_domain_ops xen_pci_msi_domain_ops = {
+ };
+
+ static struct msi_domain_info xen_pci_msi_domain_info = {
+- .flags = MSI_FLAG_PCI_MSIX | MSI_FLAG_FREE_MSI_DESCS | MSI_FLAG_DEV_SYSFS,
++ .flags = MSI_FLAG_PCI_MSIX | MSI_FLAG_FREE_MSI_DESCS |
++ MSI_FLAG_DEV_SYSFS | MSI_FLAG_NO_MASK,
+ .ops = &xen_pci_msi_domain_ops,
+ };
+
+@@ -484,11 +485,6 @@ static __init void xen_setup_pci_msi(void)
+ * in allocating the native domain and never use it.
+ */
+ x86_init.irqs.create_pci_msi_domain = xen_create_pci_msi_domain;
+- /*
+- * With XEN PIRQ/Eventchannels in use PCI/MSI[-X] masking is solely
+- * controlled by the hypervisor.
+- */
+- pci_msi_ignore_mask = 1;
+ }
+
+ #else /* CONFIG_PCI_MSI */
+diff --git a/arch/x86/platform/efi/efi_64.c b/arch/x86/platform/efi/efi_64.c
+index ac57259a432b8c..a4b4ebd41b8fab 100644
+--- a/arch/x86/platform/efi/efi_64.c
++++ b/arch/x86/platform/efi/efi_64.c
+@@ -73,7 +73,7 @@ int __init efi_alloc_page_tables(void)
+ gfp_t gfp_mask;
+
+ gfp_mask = GFP_KERNEL | __GFP_ZERO;
+- efi_pgd = (pgd_t *)__get_free_pages(gfp_mask, PGD_ALLOCATION_ORDER);
++ efi_pgd = (pgd_t *)__get_free_pages(gfp_mask, pgd_allocation_order());
+ if (!efi_pgd)
+ goto fail;
+
+@@ -96,7 +96,7 @@ int __init efi_alloc_page_tables(void)
+ if (pgtable_l5_enabled())
+ free_page((unsigned long)pgd_page_vaddr(*pgd));
+ free_pgd:
+- free_pages((unsigned long)efi_pgd, PGD_ALLOCATION_ORDER);
++ free_pages((unsigned long)efi_pgd, pgd_allocation_order());
+ fail:
+ return -ENOMEM;
+ }
+diff --git a/arch/x86/xen/enlighten_pvh.c b/arch/x86/xen/enlighten_pvh.c
+index 0e3d930bcb89e8..9d25d9373945cb 100644
+--- a/arch/x86/xen/enlighten_pvh.c
++++ b/arch/x86/xen/enlighten_pvh.c
+@@ -1,5 +1,7 @@
+ // SPDX-License-Identifier: GPL-2.0
+ #include <linux/acpi.h>
++#include <linux/cpufreq.h>
++#include <linux/cpuidle.h>
+ #include <linux/export.h>
+ #include <linux/mm.h>
+
+@@ -123,8 +125,23 @@ static void __init pvh_arch_setup(void)
+ {
+ pvh_reserve_extra_memory();
+
+- if (xen_initial_domain())
++ if (xen_initial_domain()) {
+ xen_add_preferred_consoles();
++
++ /*
++ * Disable usage of CPU idle and frequency drivers: when
++		 * running as the hardware domain, the exposed native ACPI
++		 * tables cause idle and/or frequency drivers to attach and
++		 * malfunction. Xen is the entity that controls the idle and
++ * frequency states.
++ *
++ * For unprivileged domains the exposed ACPI tables are
++ * fabricated and don't contain such data.
++ */
++ disable_cpuidle();
++ disable_cpufreq();
++ WARN_ON(xen_set_default_idle());
++ }
+ }
+
+ void __init xen_pvh_init(struct boot_params *boot_params)
+diff --git a/block/bdev.c b/block/bdev.c
+index 9d73a8fbf7f997..5aebcf437f17c7 100644
+--- a/block/bdev.c
++++ b/block/bdev.c
+@@ -773,13 +773,13 @@ static void blkdev_put_part(struct block_device *part)
+ blkdev_put_whole(whole);
+ }
+
+-struct block_device *blkdev_get_no_open(dev_t dev)
++struct block_device *blkdev_get_no_open(dev_t dev, bool autoload)
+ {
+ struct block_device *bdev;
+ struct inode *inode;
+
+ inode = ilookup(blockdev_superblock, dev);
+- if (!inode && IS_ENABLED(CONFIG_BLOCK_LEGACY_AUTOLOAD)) {
++ if (!inode && autoload && IS_ENABLED(CONFIG_BLOCK_LEGACY_AUTOLOAD)) {
+ blk_request_module(dev);
+ inode = ilookup(blockdev_superblock, dev);
+ if (inode)
+@@ -1001,7 +1001,7 @@ struct file *bdev_file_open_by_dev(dev_t dev, blk_mode_t mode, void *holder,
+ if (ret)
+ return ERR_PTR(ret);
+
+- bdev = blkdev_get_no_open(dev);
++ bdev = blkdev_get_no_open(dev, true);
+ if (!bdev)
+ return ERR_PTR(-ENXIO);
+
+@@ -1271,21 +1271,15 @@ void sync_bdevs(bool wait)
+ void bdev_statx(struct path *path, struct kstat *stat,
+ u32 request_mask)
+ {
+- struct inode *backing_inode;
+ struct block_device *bdev;
+
+- if (!(request_mask & (STATX_DIOALIGN | STATX_WRITE_ATOMIC)))
+- return;
+-
+- backing_inode = d_backing_inode(path->dentry);
+-
+ /*
+- * Note that backing_inode is the inode of a block device node file,
+- * not the block device's internal inode. Therefore it is *not* valid
+- * to use I_BDEV() here; the block device has to be looked up by i_rdev
++ * Note that d_backing_inode() returns the block device node inode, not
++ * the block device's internal inode. Therefore it is *not* valid to
++ * use I_BDEV() here; the block device has to be looked up by i_rdev
+ * instead.
+ */
+- bdev = blkdev_get_no_open(backing_inode->i_rdev);
++ bdev = blkdev_get_no_open(d_backing_inode(path->dentry)->i_rdev, false);
+ if (!bdev)
+ return;
+
+@@ -1303,6 +1297,8 @@ void bdev_statx(struct path *path, struct kstat *stat,
+ queue_atomic_write_unit_max_bytes(bd_queue));
+ }
+
++ stat->blksize = bdev_io_min(bdev);
++
+ blkdev_put_no_open(bdev);
+ }
+
+diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
+index 9ed93d91d754ab..c94efae5bcfaf3 100644
+--- a/block/blk-cgroup.c
++++ b/block/blk-cgroup.c
+@@ -796,7 +796,7 @@ int blkg_conf_open_bdev(struct blkg_conf_ctx *ctx)
+ return -EINVAL;
+ input = skip_spaces(input);
+
+- bdev = blkdev_get_no_open(MKDEV(major, minor));
++ bdev = blkdev_get_no_open(MKDEV(major, minor), true);
+ if (!bdev)
+ return -ENODEV;
+ if (bdev_is_partition(bdev)) {
+diff --git a/block/blk-settings.c b/block/blk-settings.c
+index 66721afeea5467..67b119ffa16890 100644
+--- a/block/blk-settings.c
++++ b/block/blk-settings.c
+@@ -61,8 +61,14 @@ void blk_apply_bdi_limits(struct backing_dev_info *bdi,
+ /*
+ * For read-ahead of large files to be effective, we need to read ahead
+ * at least twice the optimal I/O size.
++ *
++ * There is no hardware limitation for the read-ahead size and the user
++ * might have increased the read-ahead size through sysfs, so don't ever
++ * decrease it.
+ */
+- bdi->ra_pages = max(lim->io_opt * 2 / PAGE_SIZE, VM_READAHEAD_PAGES);
++ bdi->ra_pages = max3(bdi->ra_pages,
++ lim->io_opt * 2 / PAGE_SIZE,
++ VM_READAHEAD_PAGES);
+ bdi->io_pages = lim->max_sectors >> PAGE_SECTORS_SHIFT;
+ }
+
+diff --git a/block/blk.h b/block/blk.h
+index 9cf9a0099416dd..9dcc92c7f2b50d 100644
+--- a/block/blk.h
++++ b/block/blk.h
+@@ -94,6 +94,9 @@ static inline void blk_wait_io(struct completion *done)
+ wait_for_completion_io(done);
+ }
+
++struct block_device *blkdev_get_no_open(dev_t dev, bool autoload);
++void blkdev_put_no_open(struct block_device *bdev);
++
+ #define BIO_INLINE_VECS 4
+ struct bio_vec *bvec_alloc(mempool_t *pool, unsigned short *nr_vecs,
+ gfp_t gfp_mask);
+diff --git a/block/fops.c b/block/fops.c
+index be9f1dbea9ce0a..d23ddb2dc1138d 100644
+--- a/block/fops.c
++++ b/block/fops.c
+@@ -642,7 +642,7 @@ static int blkdev_open(struct inode *inode, struct file *filp)
+ if (ret)
+ return ret;
+
+- bdev = blkdev_get_no_open(inode->i_rdev);
++ bdev = blkdev_get_no_open(inode->i_rdev, true);
+ if (!bdev)
+ return -ENXIO;
+
+diff --git a/crypto/Kconfig b/crypto/Kconfig
+index 74ae5f52b78405..561ed11cd07439 100644
+--- a/crypto/Kconfig
++++ b/crypto/Kconfig
+@@ -318,6 +318,7 @@ config CRYPTO_CURVE25519
+ tristate "Curve25519"
+ select CRYPTO_KPP
+ select CRYPTO_LIB_CURVE25519_GENERIC
++ select CRYPTO_LIB_CURVE25519_INTERNAL
+ help
+ Curve25519 elliptic curve (RFC7748)
+
+@@ -616,6 +617,7 @@ config CRYPTO_ARC4
+ config CRYPTO_CHACHA20
+ tristate "ChaCha"
+ select CRYPTO_LIB_CHACHA_GENERIC
++ select CRYPTO_LIB_CHACHA_INTERNAL
+ select CRYPTO_SKCIPHER
+ help
+ The ChaCha20, XChaCha20, and XChaCha12 stream cipher algorithms
+@@ -937,6 +939,7 @@ config CRYPTO_POLY1305
+ tristate "Poly1305"
+ select CRYPTO_HASH
+ select CRYPTO_LIB_POLY1305_GENERIC
++ select CRYPTO_LIB_POLY1305_INTERNAL
+ help
+ Poly1305 authenticator algorithm (RFC7539)
+
+diff --git a/crypto/crypto_null.c b/crypto/crypto_null.c
+index 5b84b0f7cc178f..3378670286535a 100644
+--- a/crypto/crypto_null.c
++++ b/crypto/crypto_null.c
+@@ -17,10 +17,10 @@
+ #include <crypto/internal/skcipher.h>
+ #include <linux/init.h>
+ #include <linux/module.h>
+-#include <linux/mm.h>
++#include <linux/spinlock.h>
+ #include <linux/string.h>
+
+-static DEFINE_MUTEX(crypto_default_null_skcipher_lock);
++static DEFINE_SPINLOCK(crypto_default_null_skcipher_lock);
+ static struct crypto_sync_skcipher *crypto_default_null_skcipher;
+ static int crypto_default_null_skcipher_refcnt;
+
+@@ -152,23 +152,32 @@ MODULE_ALIAS_CRYPTO("cipher_null");
+
+ struct crypto_sync_skcipher *crypto_get_default_null_skcipher(void)
+ {
++ struct crypto_sync_skcipher *ntfm = NULL;
+ struct crypto_sync_skcipher *tfm;
+
+- mutex_lock(&crypto_default_null_skcipher_lock);
++ spin_lock_bh(&crypto_default_null_skcipher_lock);
+ tfm = crypto_default_null_skcipher;
+
+ if (!tfm) {
+- tfm = crypto_alloc_sync_skcipher("ecb(cipher_null)", 0, 0);
+- if (IS_ERR(tfm))
+- goto unlock;
+-
+- crypto_default_null_skcipher = tfm;
++ spin_unlock_bh(&crypto_default_null_skcipher_lock);
++
++ ntfm = crypto_alloc_sync_skcipher("ecb(cipher_null)", 0, 0);
++ if (IS_ERR(ntfm))
++ return ntfm;
++
++ spin_lock_bh(&crypto_default_null_skcipher_lock);
++ tfm = crypto_default_null_skcipher;
++ if (!tfm) {
++ tfm = ntfm;
++ ntfm = NULL;
++ crypto_default_null_skcipher = tfm;
++ }
+ }
+
+ crypto_default_null_skcipher_refcnt++;
++ spin_unlock_bh(&crypto_default_null_skcipher_lock);
+
+-unlock:
+- mutex_unlock(&crypto_default_null_skcipher_lock);
++ crypto_free_sync_skcipher(ntfm);
+
+ return tfm;
+ }
+@@ -176,12 +185,16 @@ EXPORT_SYMBOL_GPL(crypto_get_default_null_skcipher);
+
+ void crypto_put_default_null_skcipher(void)
+ {
+- mutex_lock(&crypto_default_null_skcipher_lock);
++ struct crypto_sync_skcipher *tfm = NULL;
++
++ spin_lock_bh(&crypto_default_null_skcipher_lock);
+ if (!--crypto_default_null_skcipher_refcnt) {
+- crypto_free_sync_skcipher(crypto_default_null_skcipher);
++ tfm = crypto_default_null_skcipher;
+ crypto_default_null_skcipher = NULL;
+ }
+- mutex_unlock(&crypto_default_null_skcipher_lock);
++ spin_unlock_bh(&crypto_default_null_skcipher_lock);
++
++ crypto_free_sync_skcipher(tfm);
+ }
+ EXPORT_SYMBOL_GPL(crypto_put_default_null_skcipher);
+
+diff --git a/crypto/ecc.c b/crypto/ecc.c
+index 50ad2d4ed672c5..6cf9a945fc6c28 100644
+--- a/crypto/ecc.c
++++ b/crypto/ecc.c
+@@ -71,7 +71,7 @@ EXPORT_SYMBOL(ecc_get_curve);
+ void ecc_digits_from_bytes(const u8 *in, unsigned int nbytes,
+ u64 *out, unsigned int ndigits)
+ {
+- int diff = ndigits - DIV_ROUND_UP(nbytes, sizeof(u64));
++ int diff = ndigits - DIV_ROUND_UP_POW2(nbytes, sizeof(u64));
+ unsigned int o = nbytes & 7;
+ __be64 msd = 0;
+
+diff --git a/crypto/ecdsa-p1363.c b/crypto/ecdsa-p1363.c
+index eaae7214d69bca..4454f1f8f33f58 100644
+--- a/crypto/ecdsa-p1363.c
++++ b/crypto/ecdsa-p1363.c
+@@ -22,7 +22,7 @@ static int ecdsa_p1363_verify(struct crypto_sig *tfm,
+ {
+ struct ecdsa_p1363_ctx *ctx = crypto_sig_ctx(tfm);
+ unsigned int keylen = crypto_sig_keysize(ctx->child);
+- unsigned int ndigits = DIV_ROUND_UP(keylen, sizeof(u64));
++ unsigned int ndigits = DIV_ROUND_UP_POW2(keylen, sizeof(u64));
+ struct ecdsa_raw_sig sig;
+
+ if (slen != 2 * keylen)
+diff --git a/crypto/ecdsa-x962.c b/crypto/ecdsa-x962.c
+index 6a77c13e192b1a..90a04f4b9a2f55 100644
+--- a/crypto/ecdsa-x962.c
++++ b/crypto/ecdsa-x962.c
+@@ -81,8 +81,8 @@ static int ecdsa_x962_verify(struct crypto_sig *tfm,
+ struct ecdsa_x962_signature_ctx sig_ctx;
+ int err;
+
+- sig_ctx.ndigits = DIV_ROUND_UP(crypto_sig_keysize(ctx->child),
+- sizeof(u64));
++ sig_ctx.ndigits = DIV_ROUND_UP_POW2(crypto_sig_keysize(ctx->child),
++ sizeof(u64));
+
+ err = asn1_ber_decoder(&ecdsasignature_decoder, &sig_ctx, src, slen);
+ if (err < 0)
+diff --git a/drivers/acpi/ec.c b/drivers/acpi/ec.c
+index 8db09d81918fbb..3c5f34892734e9 100644
+--- a/drivers/acpi/ec.c
++++ b/drivers/acpi/ec.c
+@@ -2301,6 +2301,34 @@ static const struct dmi_system_id acpi_ec_no_wakeup[] = {
+ DMI_MATCH(DMI_PRODUCT_FAMILY, "103C_5336AN HP ZHAN 66 Pro"),
+ },
+ },
++ /*
++ * Lenovo Legion Go S; touchscreen blocks HW sleep when woken up from EC
++ * https://gitlab.freedesktop.org/drm/amd/-/issues/3929
++ */
++ {
++ .matches = {
++ DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "83L3"),
++ }
++ },
++ {
++ .matches = {
++ DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "83N6"),
++ }
++ },
++ {
++ .matches = {
++ DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "83Q2"),
++ }
++ },
++ {
++ .matches = {
++ DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "83Q3"),
++ }
++ },
+ { },
+ };
+
+diff --git a/drivers/acpi/pptt.c b/drivers/acpi/pptt.c
+index a35dd0e41c2704..f73ce6e13065dd 100644
+--- a/drivers/acpi/pptt.c
++++ b/drivers/acpi/pptt.c
+@@ -229,7 +229,7 @@ static int acpi_pptt_leaf_node(struct acpi_table_header *table_hdr,
+ node_entry = ACPI_PTR_DIFF(node, table_hdr);
+ entry = ACPI_ADD_PTR(struct acpi_subtable_header, table_hdr,
+ sizeof(struct acpi_table_pptt));
+- proc_sz = sizeof(struct acpi_pptt_processor *);
++ proc_sz = sizeof(struct acpi_pptt_processor);
+
+ while ((unsigned long)entry + proc_sz < table_end) {
+ cpu_node = (struct acpi_pptt_processor *)entry;
+@@ -270,7 +270,7 @@ static struct acpi_pptt_processor *acpi_find_processor_node(struct acpi_table_he
+ table_end = (unsigned long)table_hdr + table_hdr->length;
+ entry = ACPI_ADD_PTR(struct acpi_subtable_header, table_hdr,
+ sizeof(struct acpi_table_pptt));
+- proc_sz = sizeof(struct acpi_pptt_processor *);
++ proc_sz = sizeof(struct acpi_pptt_processor);
+
+ /* find the processor structure associated with this cpuid */
+ while ((unsigned long)entry + proc_sz < table_end) {
+diff --git a/drivers/android/binder.c b/drivers/android/binder.c
+index 76052006bd8714..5fc2c8ee61b19b 100644
+--- a/drivers/android/binder.c
++++ b/drivers/android/binder.c
+@@ -6373,7 +6373,7 @@ static void print_binder_transaction_ilocked(struct seq_file *m,
+ seq_printf(m, " node %d", buffer->target_node->debug_id);
+ seq_printf(m, " size %zd:%zd offset %lx\n",
+ buffer->data_size, buffer->offsets_size,
+- proc->alloc.vm_start - buffer->user_data);
++ buffer->user_data - proc->alloc.vm_start);
+ }
+
+ static void print_binder_work_ilocked(struct seq_file *m,
+diff --git a/drivers/ata/libata-scsi.c b/drivers/ata/libata-scsi.c
+index 2796c0da82578a..c0eb8c67a9ff69 100644
+--- a/drivers/ata/libata-scsi.c
++++ b/drivers/ata/libata-scsi.c
+@@ -2453,8 +2453,8 @@ static unsigned int ata_msense_control_ata_feature(struct ata_device *dev,
+ */
+ put_unaligned_be16(ATA_FEATURE_SUB_MPAGE_LEN - 4, &buf[2]);
+
+- if (dev->flags & ATA_DFLAG_CDL)
+- buf[4] = 0x02; /* Support T2A and T2B pages */
++ if (dev->flags & ATA_DFLAG_CDL_ENABLED)
++ buf[4] = 0x02; /* T2A and T2B pages enabled */
+ else
+ buf[4] = 0;
+
+@@ -3886,12 +3886,11 @@ static int ata_mselect_control_spg0(struct ata_queued_cmd *qc,
+ }
+
+ /*
+- * Translate MODE SELECT control mode page, sub-pages f2h (ATA feature mode
++ * Translate MODE SELECT control mode page, sub-page f2h (ATA feature mode
+ * page) into a SET FEATURES command.
+ */
+-static unsigned int ata_mselect_control_ata_feature(struct ata_queued_cmd *qc,
+- const u8 *buf, int len,
+- u16 *fp)
++static int ata_mselect_control_ata_feature(struct ata_queued_cmd *qc,
++ const u8 *buf, int len, u16 *fp)
+ {
+ struct ata_device *dev = qc->dev;
+ struct ata_taskfile *tf = &qc->tf;
+@@ -3909,17 +3908,27 @@ static unsigned int ata_mselect_control_ata_feature(struct ata_queued_cmd *qc,
+ /* Check cdl_ctrl */
+ switch (buf[0] & 0x03) {
+ case 0:
+- /* Disable CDL */
++ /* Disable CDL if it is enabled */
++ if (!(dev->flags & ATA_DFLAG_CDL_ENABLED))
++ return 0;
++ ata_dev_dbg(dev, "Disabling CDL\n");
+ cdl_action = 0;
+ dev->flags &= ~ATA_DFLAG_CDL_ENABLED;
+ break;
+ case 0x02:
+- /* Enable CDL T2A/T2B: NCQ priority must be disabled */
++ /*
++ * Enable CDL if not already enabled. Since this is mutually
++ * exclusive with NCQ priority, allow this only if NCQ priority
++ * is disabled.
++ */
++ if (dev->flags & ATA_DFLAG_CDL_ENABLED)
++ return 0;
+ if (dev->flags & ATA_DFLAG_NCQ_PRIO_ENABLED) {
+ ata_dev_err(dev,
+ "NCQ priority must be disabled to enable CDL\n");
+ return -EINVAL;
+ }
++ ata_dev_dbg(dev, "Enabling CDL\n");
+ cdl_action = 1;
+ dev->flags |= ATA_DFLAG_CDL_ENABLED;
+ break;
+diff --git a/drivers/base/base.h b/drivers/base/base.h
+index 0042e4774b0ce7..123031a757d916 100644
+--- a/drivers/base/base.h
++++ b/drivers/base/base.h
+@@ -73,6 +73,7 @@ static inline void subsys_put(struct subsys_private *sp)
+ kset_put(&sp->subsys);
+ }
+
++struct subsys_private *bus_to_subsys(const struct bus_type *bus);
+ struct subsys_private *class_to_subsys(const struct class *class);
+
+ struct driver_private {
+@@ -180,6 +181,22 @@ int driver_add_groups(const struct device_driver *drv, const struct attribute_gr
+ void driver_remove_groups(const struct device_driver *drv, const struct attribute_group **groups);
+ void device_driver_detach(struct device *dev);
+
++static inline void device_set_driver(struct device *dev, const struct device_driver *drv)
++{
++ /*
++	 * The majority of (all?) read accesses to dev->driver happen either
++	 * while holding the device lock or in bus/driver code that is only
++	 * invoked when the device is bound to a driver, so there is no
++	 * concern about the pointer changing while it is being read.
++	 * However, when reading the device's uevent file we read the driver
++	 * pointer without taking the device lock (so we do not block there
++	 * for an arbitrary amount of time). We use WRITE_ONCE() here to
++	 * prevent tearing so that READ_ONCE() can safely be used in uevent code.
++ */
++ // FIXME - this cast should not be needed "soon"
++ WRITE_ONCE(dev->driver, (struct device_driver *)drv);
++}
++
+ int devres_release_all(struct device *dev);
+ void device_block_probing(void);
+ void device_unblock_probing(void);
+diff --git a/drivers/base/bus.c b/drivers/base/bus.c
+index 6b9e65a42cd2e1..c8c7e080402492 100644
+--- a/drivers/base/bus.c
++++ b/drivers/base/bus.c
+@@ -57,7 +57,7 @@ static int __must_check bus_rescan_devices_helper(struct device *dev,
+ * NULL. A call to subsys_put() must be done when finished with the pointer in
+ * order for it to be properly freed.
+ */
+-static struct subsys_private *bus_to_subsys(const struct bus_type *bus)
++struct subsys_private *bus_to_subsys(const struct bus_type *bus)
+ {
+ struct subsys_private *sp = NULL;
+ struct kobject *kobj;
+diff --git a/drivers/base/core.c b/drivers/base/core.c
+index 2fde698430dff9..93019bb6998ebf 100644
+--- a/drivers/base/core.c
++++ b/drivers/base/core.c
+@@ -2624,6 +2624,35 @@ static const char *dev_uevent_name(const struct kobject *kobj)
+ return NULL;
+ }
+
++/*
++ * Try filling "DRIVER=<name>" uevent variable for a device. Because this
++ * function may race with binding and unbinding the device from a driver,
++ * we need to be careful. Binding is generally safe, at worst we miss the
++ * fact that the device is already bound to a driver (but the driver
++ * information that is delivered through uevents is best-effort, it may
++ * become obsolete as soon as it is generated anyways). Unbinding is more
++ * risky as driver pointer is transitioning to NULL, so READ_ONCE() should
++ * be used to make sure we are dealing with the same pointer, and to
++ * ensure that driver structure is not going to disappear from under us
++ * we take the bus' drivers klist lock. The assumption is that only a
++ * registered driver can be bound to a device, and that to unregister a
++ * driver the bus code will take the same lock.
++ */
++static void dev_driver_uevent(const struct device *dev, struct kobj_uevent_env *env)
++{
++ struct subsys_private *sp = bus_to_subsys(dev->bus);
++
++ if (sp) {
++ scoped_guard(spinlock, &sp->klist_drivers.k_lock) {
++ struct device_driver *drv = READ_ONCE(dev->driver);
++ if (drv)
++ add_uevent_var(env, "DRIVER=%s", drv->name);
++ }
++
++ subsys_put(sp);
++ }
++}
++
+ static int dev_uevent(const struct kobject *kobj, struct kobj_uevent_env *env)
+ {
+ const struct device *dev = kobj_to_dev(kobj);
+@@ -2655,8 +2684,8 @@ static int dev_uevent(const struct kobject *kobj, struct kobj_uevent_env *env)
+ if (dev->type && dev->type->name)
+ add_uevent_var(env, "DEVTYPE=%s", dev->type->name);
+
+- if (dev->driver)
+- add_uevent_var(env, "DRIVER=%s", dev->driver->name);
++ /* Add "DRIVER=%s" variable if the device is bound to a driver */
++ dev_driver_uevent(dev, env);
+
+ /* Add common DT information about the device */
+ of_device_uevent(dev, env);
+@@ -2726,11 +2755,8 @@ static ssize_t uevent_show(struct device *dev, struct device_attribute *attr,
+ if (!env)
+ return -ENOMEM;
+
+- /* Synchronize with really_probe() */
+- device_lock(dev);
+ /* let the kset specific function add its keys */
+ retval = kset->uevent_ops->uevent(&dev->kobj, env);
+- device_unlock(dev);
+ if (retval)
+ goto out;
+
+@@ -3700,7 +3726,7 @@ int device_add(struct device *dev)
+ device_pm_remove(dev);
+ dpm_sysfs_remove(dev);
+ DPMError:
+- dev->driver = NULL;
++ device_set_driver(dev, NULL);
+ bus_remove_device(dev);
+ BusError:
+ device_remove_attrs(dev);
+diff --git a/drivers/base/dd.c b/drivers/base/dd.c
+index f0e4b4aba885c6..b526e0e0f52d79 100644
+--- a/drivers/base/dd.c
++++ b/drivers/base/dd.c
+@@ -550,7 +550,7 @@ static void device_unbind_cleanup(struct device *dev)
+ arch_teardown_dma_ops(dev);
+ kfree(dev->dma_range_map);
+ dev->dma_range_map = NULL;
+- dev->driver = NULL;
++ device_set_driver(dev, NULL);
+ dev_set_drvdata(dev, NULL);
+ if (dev->pm_domain && dev->pm_domain->dismiss)
+ dev->pm_domain->dismiss(dev);
+@@ -629,8 +629,7 @@ static int really_probe(struct device *dev, const struct device_driver *drv)
+ }
+
+ re_probe:
+- // FIXME - this cast should not be needed "soon"
+- dev->driver = (struct device_driver *)drv;
++ device_set_driver(dev, drv);
+
+ /* If using pinctrl, bind pins now before probing */
+ ret = pinctrl_bind_pins(dev);
+@@ -1014,7 +1013,7 @@ static int __device_attach(struct device *dev, bool allow_async)
+ if (ret == 0)
+ ret = 1;
+ else {
+- dev->driver = NULL;
++ device_set_driver(dev, NULL);
+ ret = 0;
+ }
+ } else {
+diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
+index 971b793dedd03a..ab06a7a064fbf8 100644
+--- a/drivers/block/ublk_drv.c
++++ b/drivers/block/ublk_drv.c
+@@ -73,12 +73,24 @@
+ UBLK_PARAM_TYPE_DEVT | UBLK_PARAM_TYPE_ZONED)
+
+ struct ublk_rq_data {
+- struct llist_node node;
+-
+ struct kref ref;
+ };
+
+ struct ublk_uring_cmd_pdu {
++ /*
++	 * Store requests in the same batch temporarily for queuing them to
++	 * the daemon context.
++	 *
++	 * This should have been stored in the request payload, but we want
++	 * to avoid extra pre-allocation, and the uring_cmd payload is always
++	 * free for us.
++ */
++ struct request *req_list;
++
++ /*
++	 * The following two fields are valid for this cmd's whole lifetime,
++	 * and are set up in the ublk uring_cmd handler.
++ */
+ struct ublk_queue *ubq;
+ u16 tag;
+ };
+@@ -141,8 +153,6 @@ struct ublk_queue {
+ struct task_struct *ubq_daemon;
+ char *io_cmd_buf;
+
+- struct llist_head io_cmds;
+-
+ unsigned long io_addr; /* mapped vm address */
+ unsigned int max_io_sz;
+ bool force_abort;
+@@ -1114,7 +1124,7 @@ static void ublk_fail_rq_fn(struct kref *ref)
+ }
+
+ /*
+- * Since __ublk_rq_task_work always fails requests immediately during
++ * Since ublk_rq_task_work_cb always fails requests immediately during
+ * exiting, __ublk_fail_req() is only called from abort context during
+ * exiting. So lock is unnecessary.
+ *
+@@ -1163,10 +1173,10 @@ static inline void __ublk_abort_rq(struct ublk_queue *ubq,
+ blk_mq_end_request(rq, BLK_STS_IOERR);
+ }
+
+-static inline void __ublk_rq_task_work(struct request *req,
+- unsigned issue_flags)
++static void ublk_dispatch_req(struct ublk_queue *ubq,
++ struct request *req,
++ unsigned int issue_flags)
+ {
+- struct ublk_queue *ubq = req->mq_hctx->driver_data;
+ int tag = req->tag;
+ struct ublk_io *io = &ubq->ios[tag];
+ unsigned int mapped_bytes;
+@@ -1242,36 +1252,52 @@ static inline void __ublk_rq_task_work(struct request *req,
+ ubq_complete_io_cmd(io, UBLK_IO_RES_OK, issue_flags);
+ }
+
+-static inline void ublk_forward_io_cmds(struct ublk_queue *ubq,
+- unsigned issue_flags)
+-{
+- struct llist_node *io_cmds = llist_del_all(&ubq->io_cmds);
+- struct ublk_rq_data *data, *tmp;
+-
+- io_cmds = llist_reverse_order(io_cmds);
+- llist_for_each_entry_safe(data, tmp, io_cmds, node)
+- __ublk_rq_task_work(blk_mq_rq_from_pdu(data), issue_flags);
+-}
+-
+-static void ublk_rq_task_work_cb(struct io_uring_cmd *cmd, unsigned issue_flags)
++static void ublk_rq_task_work_cb(struct io_uring_cmd *cmd,
++ unsigned int issue_flags)
+ {
+ struct ublk_uring_cmd_pdu *pdu = ublk_get_uring_cmd_pdu(cmd);
+ struct ublk_queue *ubq = pdu->ubq;
++ int tag = pdu->tag;
++ struct request *req = blk_mq_tag_to_rq(
++ ubq->dev->tag_set.tags[ubq->q_id], tag);
+
+- ublk_forward_io_cmds(ubq, issue_flags);
++ ublk_dispatch_req(ubq, req, issue_flags);
+ }
+
+ static void ublk_queue_cmd(struct ublk_queue *ubq, struct request *rq)
+ {
+- struct ublk_rq_data *data = blk_mq_rq_to_pdu(rq);
++ struct ublk_io *io = &ubq->ios[rq->tag];
++
++ io_uring_cmd_complete_in_task(io->cmd, ublk_rq_task_work_cb);
++}
+
+- if (llist_add(&data->node, &ubq->io_cmds)) {
+- struct ublk_io *io = &ubq->ios[rq->tag];
++static void ublk_cmd_list_tw_cb(struct io_uring_cmd *cmd,
++ unsigned int issue_flags)
++{
++ struct ublk_uring_cmd_pdu *pdu = ublk_get_uring_cmd_pdu(cmd);
++ struct request *rq = pdu->req_list;
++ struct ublk_queue *ubq = rq->mq_hctx->driver_data;
++ struct request *next;
+
+- io_uring_cmd_complete_in_task(io->cmd, ublk_rq_task_work_cb);
++ while (rq) {
++ next = rq->rq_next;
++ rq->rq_next = NULL;
++ ublk_dispatch_req(ubq, rq, issue_flags);
++ rq = next;
+ }
+ }
+
++static void ublk_queue_cmd_list(struct ublk_queue *ubq, struct rq_list *l)
++{
++ struct request *rq = rq_list_peek(l);
++ struct ublk_io *io = &ubq->ios[rq->tag];
++ struct ublk_uring_cmd_pdu *pdu = ublk_get_uring_cmd_pdu(io->cmd);
++
++ pdu->req_list = rq;
++ rq_list_init(l);
++ io_uring_cmd_complete_in_task(io->cmd, ublk_cmd_list_tw_cb);
++}
++
+ static enum blk_eh_timer_return ublk_timeout(struct request *rq)
+ {
+ struct ublk_queue *ubq = rq->mq_hctx->driver_data;
+@@ -1310,21 +1336,13 @@ static enum blk_eh_timer_return ublk_timeout(struct request *rq)
+ return BLK_EH_RESET_TIMER;
+ }
+
+-static blk_status_t ublk_queue_rq(struct blk_mq_hw_ctx *hctx,
+- const struct blk_mq_queue_data *bd)
++static blk_status_t ublk_prep_req(struct ublk_queue *ubq, struct request *rq,
++ bool check_cancel)
+ {
+- struct ublk_queue *ubq = hctx->driver_data;
+- struct request *rq = bd->rq;
+ blk_status_t res;
+
+- if (unlikely(ubq->fail_io)) {
++ if (unlikely(ubq->fail_io))
+ return BLK_STS_TARGET;
+- }
+-
+- /* fill iod to slot in io cmd buffer */
+- res = ublk_setup_iod(ubq, rq);
+- if (unlikely(res != BLK_STS_OK))
+- return BLK_STS_IOERR;
+
+ /* With recovery feature enabled, force_abort is set in
+ * ublk_stop_dev() before calling del_gendisk(). We have to
+@@ -1338,17 +1356,68 @@ static blk_status_t ublk_queue_rq(struct blk_mq_hw_ctx *hctx,
+ if (ublk_nosrv_should_queue_io(ubq) && unlikely(ubq->force_abort))
+ return BLK_STS_IOERR;
+
++ if (check_cancel && unlikely(ubq->canceling))
++ return BLK_STS_IOERR;
++
++ /* fill iod to slot in io cmd buffer */
++ res = ublk_setup_iod(ubq, rq);
++ if (unlikely(res != BLK_STS_OK))
++ return BLK_STS_IOERR;
++
++ blk_mq_start_request(rq);
++ return BLK_STS_OK;
++}
++
++static blk_status_t ublk_queue_rq(struct blk_mq_hw_ctx *hctx,
++ const struct blk_mq_queue_data *bd)
++{
++ struct ublk_queue *ubq = hctx->driver_data;
++ struct request *rq = bd->rq;
++ blk_status_t res;
++
++ res = ublk_prep_req(ubq, rq, false);
++ if (res != BLK_STS_OK)
++ return res;
++
++ /*
++ * ->canceling has to be handled after ->force_abort and ->fail_io
++	 * are dealt with, otherwise this request may not be failed in case
++	 * of recovery, and may cause a hang when deleting the disk
++ */
+ if (unlikely(ubq->canceling)) {
+ __ublk_abort_rq(ubq, rq);
+ return BLK_STS_OK;
+ }
+
+- blk_mq_start_request(bd->rq);
+ ublk_queue_cmd(ubq, rq);
+-
+ return BLK_STS_OK;
+ }
+
++static void ublk_queue_rqs(struct rq_list *rqlist)
++{
++ struct rq_list requeue_list = { };
++ struct rq_list submit_list = { };
++ struct ublk_queue *ubq = NULL;
++ struct request *req;
++
++ while ((req = rq_list_pop(rqlist))) {
++ struct ublk_queue *this_q = req->mq_hctx->driver_data;
++
++ if (ubq && ubq != this_q && !rq_list_empty(&submit_list))
++ ublk_queue_cmd_list(ubq, &submit_list);
++ ubq = this_q;
++
++ if (ublk_prep_req(ubq, req, true) == BLK_STS_OK)
++ rq_list_add_tail(&submit_list, req);
++ else
++ rq_list_add_tail(&requeue_list, req);
++ }
++
++ if (ubq && !rq_list_empty(&submit_list))
++ ublk_queue_cmd_list(ubq, &submit_list);
++ *rqlist = requeue_list;
++}
++
+ static int ublk_init_hctx(struct blk_mq_hw_ctx *hctx, void *driver_data,
+ unsigned int hctx_idx)
+ {
+@@ -1361,6 +1430,7 @@ static int ublk_init_hctx(struct blk_mq_hw_ctx *hctx, void *driver_data,
+
+ static const struct blk_mq_ops ublk_mq_ops = {
+ .queue_rq = ublk_queue_rq,
++ .queue_rqs = ublk_queue_rqs,
+ .init_hctx = ublk_init_hctx,
+ .timeout = ublk_timeout,
+ };
+@@ -1462,7 +1532,7 @@ static void ublk_abort_queue(struct ublk_device *ub, struct ublk_queue *ubq)
+ struct request *rq;
+
+ /*
+- * Either we fail the request or ublk_rq_task_work_fn
++ * Either we fail the request or ublk_rq_task_work_cb
+ * will do it
+ */
+ rq = blk_mq_tag_to_rq(ub->tag_set.tags[ubq->q_id], i);
+@@ -1629,31 +1699,35 @@ static void ublk_wait_tagset_rqs_idle(struct ublk_device *ub)
+
+ static void __ublk_quiesce_dev(struct ublk_device *ub)
+ {
++ int i;
++
+ pr_devel("%s: quiesce ub: dev_id %d state %s\n",
+ __func__, ub->dev_info.dev_id,
+ ub->dev_info.state == UBLK_S_DEV_LIVE ?
+ "LIVE" : "QUIESCED");
+ blk_mq_quiesce_queue(ub->ub_disk->queue);
++ /* mark every queue as canceling */
++ for (i = 0; i < ub->dev_info.nr_hw_queues; i++)
++ ublk_get_queue(ub, i)->canceling = true;
+ ublk_wait_tagset_rqs_idle(ub);
+ ub->dev_info.state = UBLK_S_DEV_QUIESCED;
++ blk_mq_unquiesce_queue(ub->ub_disk->queue);
+ }
+
+-static void ublk_unquiesce_dev(struct ublk_device *ub)
++static void ublk_force_abort_dev(struct ublk_device *ub)
+ {
+ int i;
+
+- pr_devel("%s: unquiesce ub: dev_id %d state %s\n",
++ pr_devel("%s: force abort ub: dev_id %d state %s\n",
+ __func__, ub->dev_info.dev_id,
+ ub->dev_info.state == UBLK_S_DEV_LIVE ?
+ "LIVE" : "QUIESCED");
+- /* quiesce_work has run. We let requeued rqs be aborted
+- * before running fallback_wq. "force_abort" must be seen
+- * after request queue is unqiuesced. Then del_gendisk()
+- * can move on.
+- */
++ blk_mq_quiesce_queue(ub->ub_disk->queue);
++ if (ub->dev_info.state == UBLK_S_DEV_LIVE)
++ ublk_wait_tagset_rqs_idle(ub);
++
+ for (i = 0; i < ub->dev_info.nr_hw_queues; i++)
+ ublk_get_queue(ub, i)->force_abort = true;
+-
+ blk_mq_unquiesce_queue(ub->ub_disk->queue);
+ /* We may have requeued some rqs in ublk_quiesce_queue() */
+ blk_mq_kick_requeue_list(ub->ub_disk->queue);
+@@ -1681,11 +1755,8 @@ static void ublk_stop_dev(struct ublk_device *ub)
+ mutex_lock(&ub->mutex);
+ if (ub->dev_info.state == UBLK_S_DEV_DEAD)
+ goto unlock;
+- if (ublk_nosrv_dev_should_queue_io(ub)) {
+- if (ub->dev_info.state == UBLK_S_DEV_LIVE)
+- __ublk_quiesce_dev(ub);
+- ublk_unquiesce_dev(ub);
+- }
++ if (ublk_nosrv_dev_should_queue_io(ub))
++ ublk_force_abort_dev(ub);
+ del_gendisk(ub->ub_disk);
+ disk = ublk_detach_disk(ub);
+ put_disk(disk);
+@@ -1743,15 +1814,6 @@ static void ublk_mark_io_ready(struct ublk_device *ub, struct ublk_queue *ubq)
+ mutex_unlock(&ub->mutex);
+ }
+
+-static void ublk_handle_need_get_data(struct ublk_device *ub, int q_id,
+- int tag)
+-{
+- struct ublk_queue *ubq = ublk_get_queue(ub, q_id);
+- struct request *req = blk_mq_tag_to_rq(ub->tag_set.tags[q_id], tag);
+-
+- ublk_queue_cmd(ubq, req);
+-}
+-
+ static inline int ublk_check_cmd_op(u32 cmd_op)
+ {
+ u32 ioc_type = _IOC_TYPE(cmd_op);
+@@ -1898,8 +1960,9 @@ static int __ublk_ch_uring_cmd(struct io_uring_cmd *cmd,
+ if (!(io->flags & UBLK_IO_FLAG_OWNED_BY_SRV))
+ goto out;
+ ublk_fill_io_cmd(io, cmd, ub_cmd->addr);
+- ublk_handle_need_get_data(ub, ub_cmd->q_id, ub_cmd->tag);
+- break;
++ req = blk_mq_tag_to_rq(ub->tag_set.tags[ub_cmd->q_id], tag);
++ ublk_dispatch_req(ubq, req, issue_flags);
++ return -EIOCBQUEUED;
+ default:
+ goto out;
+ }
+@@ -2790,7 +2853,6 @@ static void ublk_queue_reinit(struct ublk_device *ub, struct ublk_queue *ubq)
+ /* We have to reset it to NULL, otherwise ub won't accept new FETCH_REQ */
+ ubq->ubq_daemon = NULL;
+ ubq->timeout = false;
+- ubq->canceling = false;
+
+ for (i = 0; i < ubq->q_depth; i++) {
+ struct ublk_io *io = &ubq->ios[i];
+@@ -2879,20 +2941,18 @@ static int ublk_ctrl_end_recovery(struct ublk_device *ub,
+ pr_devel("%s: new ublksrv_pid %d, dev id %d\n",
+ __func__, ublksrv_pid, header->dev_id);
+
+- if (ublk_nosrv_dev_should_queue_io(ub)) {
+- ub->dev_info.state = UBLK_S_DEV_LIVE;
+- blk_mq_unquiesce_queue(ub->ub_disk->queue);
+- pr_devel("%s: queue unquiesced, dev id %d.\n",
+- __func__, header->dev_id);
+- blk_mq_kick_requeue_list(ub->ub_disk->queue);
+- } else {
+- blk_mq_quiesce_queue(ub->ub_disk->queue);
+- ub->dev_info.state = UBLK_S_DEV_LIVE;
+- for (i = 0; i < ub->dev_info.nr_hw_queues; i++) {
+- ublk_get_queue(ub, i)->fail_io = false;
+- }
+- blk_mq_unquiesce_queue(ub->ub_disk->queue);
++ blk_mq_quiesce_queue(ub->ub_disk->queue);
++ ub->dev_info.state = UBLK_S_DEV_LIVE;
++ for (i = 0; i < ub->dev_info.nr_hw_queues; i++) {
++ struct ublk_queue *ubq = ublk_get_queue(ub, i);
++
++ ubq->canceling = false;
++ ubq->fail_io = false;
+ }
++ blk_mq_unquiesce_queue(ub->ub_disk->queue);
++ pr_devel("%s: queue unquiesced, dev id %d.\n",
++ __func__, header->dev_id);
++ blk_mq_kick_requeue_list(ub->ub_disk->queue);
+
+ ret = 0;
+ out_unlock:
+diff --git a/drivers/char/misc.c b/drivers/char/misc.c
+index f7dd455dd0dd3c..dda466f9181acf 100644
+--- a/drivers/char/misc.c
++++ b/drivers/char/misc.c
+@@ -315,7 +315,7 @@ static int __init misc_init(void)
+ goto fail_remove;
+
+ err = -EIO;
+- if (register_chrdev(MISC_MAJOR, "misc", &misc_fops))
++ if (__register_chrdev(MISC_MAJOR, 0, MINORMASK + 1, "misc", &misc_fops))
+ goto fail_printk;
+ return 0;
+
+diff --git a/drivers/char/virtio_console.c b/drivers/char/virtio_console.c
+index 18f92dd44d456d..fc698e2b1da1e4 100644
+--- a/drivers/char/virtio_console.c
++++ b/drivers/char/virtio_console.c
+@@ -1579,8 +1579,8 @@ static void handle_control_message(struct virtio_device *vdev,
+ break;
+ case VIRTIO_CONSOLE_RESIZE: {
+ struct {
+- __u16 rows;
+- __u16 cols;
++ __virtio16 rows;
++ __virtio16 cols;
+ } size;
+
+ if (!is_console_port(port))
+@@ -1588,7 +1588,8 @@ static void handle_control_message(struct virtio_device *vdev,
+
+ memcpy(&size, buf->buf + buf->offset + sizeof(*cpkt),
+ sizeof(size));
+- set_console_size(port, size.rows, size.cols);
++ set_console_size(port, virtio16_to_cpu(vdev, size.rows),
++ virtio16_to_cpu(vdev, size.cols));
+
+ port->cons.hvc->irq_requested = 1;
+ resize_console(port);
+diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
+index cf7720b9172ff2..50faafbf5dda56 100644
+--- a/drivers/clk/clk.c
++++ b/drivers/clk/clk.c
+@@ -5258,6 +5258,10 @@ of_clk_get_hw_from_clkspec(struct of_phandle_args *clkspec)
+ if (!clkspec)
+ return ERR_PTR(-EINVAL);
+
++ /* Check if node in clkspec is in disabled/fail state */
++ if (!of_device_is_available(clkspec->np))
++ return ERR_PTR(-ENOENT);
++
+ mutex_lock(&of_clk_mutex);
+ list_for_each_entry(provider, &of_clk_providers, link) {
+ if (provider->node == clkspec->np) {
+diff --git a/drivers/clk/renesas/rzv2h-cpg.c b/drivers/clk/renesas/rzv2h-cpg.c
+index a4c1e92e1fd765..4e81a0bae02286 100644
+--- a/drivers/clk/renesas/rzv2h-cpg.c
++++ b/drivers/clk/renesas/rzv2h-cpg.c
+@@ -447,8 +447,7 @@ static void rzv2h_mod_clock_mstop_enable(struct rzv2h_cpg_priv *priv,
+ {
+ unsigned long mstop_mask = FIELD_GET(BUS_MSTOP_BITS_MASK, mstop_data);
+ u16 mstop_index = FIELD_GET(BUS_MSTOP_IDX_MASK, mstop_data);
+- unsigned int index = (mstop_index - 1) * 16;
+- atomic_t *mstop = &priv->mstop_count[index];
++ atomic_t *mstop = &priv->mstop_count[mstop_index * 16];
+ unsigned long flags;
+ unsigned int i;
+ u32 val = 0;
+@@ -469,8 +468,7 @@ static void rzv2h_mod_clock_mstop_disable(struct rzv2h_cpg_priv *priv,
+ {
+ unsigned long mstop_mask = FIELD_GET(BUS_MSTOP_BITS_MASK, mstop_data);
+ u16 mstop_index = FIELD_GET(BUS_MSTOP_IDX_MASK, mstop_data);
+- unsigned int index = (mstop_index - 1) * 16;
+- atomic_t *mstop = &priv->mstop_count[index];
++ atomic_t *mstop = &priv->mstop_count[mstop_index * 16];
+ unsigned long flags;
+ unsigned int i;
+ u32 val = 0;
+@@ -630,8 +628,7 @@ rzv2h_cpg_register_mod_clk(const struct rzv2h_mod_clk *mod,
+ } else if (clock->mstop_data != BUS_MSTOP_NONE && mod->critical) {
+ unsigned long mstop_mask = FIELD_GET(BUS_MSTOP_BITS_MASK, clock->mstop_data);
+ u16 mstop_index = FIELD_GET(BUS_MSTOP_IDX_MASK, clock->mstop_data);
+- unsigned int index = (mstop_index - 1) * 16;
+- atomic_t *mstop = &priv->mstop_count[index];
++ atomic_t *mstop = &priv->mstop_count[mstop_index * 16];
+ unsigned long flags;
+ unsigned int i;
+ u32 val = 0;
+@@ -926,6 +923,9 @@ static int __init rzv2h_cpg_probe(struct platform_device *pdev)
+ if (!priv->mstop_count)
+ return -ENOMEM;
+
++ /* Adjust for CPG_BUS_m_MSTOP starting from m = 1 */
++ priv->mstop_count -= 16;
++
+ priv->resets = devm_kmemdup(dev, info->resets, sizeof(*info->resets) *
+ info->num_resets, GFP_KERNEL);
+ if (!priv->resets)
+diff --git a/drivers/comedi/drivers/jr3_pci.c b/drivers/comedi/drivers/jr3_pci.c
+index 951c23fa0369ea..75dce1ff24193b 100644
+--- a/drivers/comedi/drivers/jr3_pci.c
++++ b/drivers/comedi/drivers/jr3_pci.c
+@@ -758,7 +758,7 @@ static void jr3_pci_detach(struct comedi_device *dev)
+ struct jr3_pci_dev_private *devpriv = dev->private;
+
+ if (devpriv)
+- del_timer_sync(&devpriv->timer);
++ timer_shutdown_sync(&devpriv->timer);
+
+ comedi_pci_detach(dev);
+ }
+diff --git a/drivers/cpufreq/Kconfig.arm b/drivers/cpufreq/Kconfig.arm
+index 4f9cb943d945c2..0d46402e30942e 100644
+--- a/drivers/cpufreq/Kconfig.arm
++++ b/drivers/cpufreq/Kconfig.arm
+@@ -76,7 +76,7 @@ config ARM_VEXPRESS_SPC_CPUFREQ
+ config ARM_BRCMSTB_AVS_CPUFREQ
+ tristate "Broadcom STB AVS CPUfreq driver"
+ depends on (ARCH_BRCMSTB && !ARM_SCMI_CPUFREQ) || COMPILE_TEST
+- default y
++ default y if ARCH_BRCMSTB && !ARM_SCMI_CPUFREQ
+ help
+ Some Broadcom STB SoCs use a co-processor running proprietary firmware
+ ("AVS") to handle voltage and frequency scaling. This driver provides
+@@ -88,7 +88,7 @@ config ARM_HIGHBANK_CPUFREQ
+ tristate "Calxeda Highbank-based"
+ depends on ARCH_HIGHBANK || COMPILE_TEST
+ depends on CPUFREQ_DT && REGULATOR && PL320_MBOX
+- default m
++ default m if ARCH_HIGHBANK
+ help
+ This adds the CPUFreq driver for Calxeda Highbank SoC
+ based boards.
+@@ -133,7 +133,7 @@ config ARM_MEDIATEK_CPUFREQ
+ config ARM_MEDIATEK_CPUFREQ_HW
+ tristate "MediaTek CPUFreq HW driver"
+ depends on ARCH_MEDIATEK || COMPILE_TEST
+- default m
++ default m if ARCH_MEDIATEK
+ help
+ Support for the CPUFreq HW driver.
+ Some MediaTek chipsets have a HW engine to offload the steps
+@@ -181,7 +181,7 @@ config ARM_RASPBERRYPI_CPUFREQ
+ config ARM_S3C64XX_CPUFREQ
+ bool "Samsung S3C64XX"
+ depends on CPU_S3C6410 || COMPILE_TEST
+- default y
++ default CPU_S3C6410
+ help
+ This adds the CPUFreq driver for Samsung S3C6410 SoC.
+
+@@ -190,7 +190,7 @@ config ARM_S3C64XX_CPUFREQ
+ config ARM_S5PV210_CPUFREQ
+ bool "Samsung S5PV210 and S5PC110"
+ depends on CPU_S5PV210 || COMPILE_TEST
+- default y
++ default CPU_S5PV210
+ help
+ This adds the CPUFreq driver for Samsung S5PV210 and
+ S5PC110 SoCs.
+@@ -214,7 +214,7 @@ config ARM_SCMI_CPUFREQ
+ config ARM_SPEAR_CPUFREQ
+ bool "SPEAr CPUFreq support"
+ depends on PLAT_SPEAR || COMPILE_TEST
+- default y
++ default PLAT_SPEAR
+ help
+ This adds the CPUFreq driver support for SPEAr SOCs.
+
+@@ -233,7 +233,7 @@ config ARM_TEGRA20_CPUFREQ
+ tristate "Tegra20/30 CPUFreq support"
+ depends on ARCH_TEGRA || COMPILE_TEST
+ depends on CPUFREQ_DT
+- default y
++ default ARCH_TEGRA
+ help
+ This adds the CPUFreq driver support for Tegra20/30 SOCs.
+
+@@ -241,7 +241,7 @@ config ARM_TEGRA124_CPUFREQ
+ bool "Tegra124 CPUFreq support"
+ depends on ARCH_TEGRA || COMPILE_TEST
+ depends on CPUFREQ_DT
+- default y
++ default ARCH_TEGRA
+ help
+ This adds the CPUFreq driver support for Tegra124 SOCs.
+
+@@ -256,14 +256,14 @@ config ARM_TEGRA194_CPUFREQ
+ tristate "Tegra194 CPUFreq support"
+ depends on ARCH_TEGRA_194_SOC || ARCH_TEGRA_234_SOC || (64BIT && COMPILE_TEST)
+ depends on TEGRA_BPMP
+- default y
++ default ARCH_TEGRA_194_SOC || ARCH_TEGRA_234_SOC
+ help
+ This adds CPU frequency driver support for Tegra194 SOCs.
+
+ config ARM_TI_CPUFREQ
+ bool "Texas Instruments CPUFreq support"
+ depends on ARCH_OMAP2PLUS || ARCH_K3 || COMPILE_TEST
+- default y
++ default ARCH_OMAP2PLUS || ARCH_K3
+ help
+ This driver enables valid OPPs on the running platform based on
+ values contained within the SoC in use. Enable this in order to
+diff --git a/drivers/cpufreq/apple-soc-cpufreq.c b/drivers/cpufreq/apple-soc-cpufreq.c
+index 269b18c62d0402..82007f6a24d2a8 100644
+--- a/drivers/cpufreq/apple-soc-cpufreq.c
++++ b/drivers/cpufreq/apple-soc-cpufreq.c
+@@ -134,11 +134,17 @@ static const struct of_device_id apple_soc_cpufreq_of_match[] __maybe_unused = {
+
+ static unsigned int apple_soc_cpufreq_get_rate(unsigned int cpu)
+ {
+- struct cpufreq_policy *policy = cpufreq_cpu_get_raw(cpu);
+- struct apple_cpu_priv *priv = policy->driver_data;
++ struct cpufreq_policy *policy;
++ struct apple_cpu_priv *priv;
+ struct cpufreq_frequency_table *p;
+ unsigned int pstate;
+
++ policy = cpufreq_cpu_get_raw(cpu);
++ if (unlikely(!policy))
++ return 0;
++
++ priv = policy->driver_data;
++
+ if (priv->info->cur_pstate_mask) {
+ u32 reg = readl_relaxed(priv->reg_base + APPLE_DVFS_STATUS);
+
+diff --git a/drivers/cpufreq/cppc_cpufreq.c b/drivers/cpufreq/cppc_cpufreq.c
+index 8f512448382f4e..ba7c16c0e47569 100644
+--- a/drivers/cpufreq/cppc_cpufreq.c
++++ b/drivers/cpufreq/cppc_cpufreq.c
+@@ -749,7 +749,7 @@ static unsigned int cppc_cpufreq_get_rate(unsigned int cpu)
+ int ret;
+
+ if (!policy)
+- return -ENODEV;
++ return 0;
+
+ cpu_data = policy->driver_data;
+
+diff --git a/drivers/cpufreq/scmi-cpufreq.c b/drivers/cpufreq/scmi-cpufreq.c
+index 914bf2c940a037..9c6eb1238f1be3 100644
+--- a/drivers/cpufreq/scmi-cpufreq.c
++++ b/drivers/cpufreq/scmi-cpufreq.c
+@@ -37,11 +37,17 @@ static struct cpufreq_driver scmi_cpufreq_driver;
+
+ static unsigned int scmi_cpufreq_get_rate(unsigned int cpu)
+ {
+- struct cpufreq_policy *policy = cpufreq_cpu_get_raw(cpu);
+- struct scmi_data *priv = policy->driver_data;
++ struct cpufreq_policy *policy;
++ struct scmi_data *priv;
+ unsigned long rate;
+ int ret;
+
++ policy = cpufreq_cpu_get_raw(cpu);
++ if (unlikely(!policy))
++ return 0;
++
++ priv = policy->driver_data;
++
+ ret = perf_ops->freq_get(ph, priv->domain_id, &rate, false);
+ if (ret)
+ return 0;
+diff --git a/drivers/cpufreq/scpi-cpufreq.c b/drivers/cpufreq/scpi-cpufreq.c
+index 1f97b949763fa7..9118856e17365e 100644
+--- a/drivers/cpufreq/scpi-cpufreq.c
++++ b/drivers/cpufreq/scpi-cpufreq.c
+@@ -29,9 +29,16 @@ static struct scpi_ops *scpi_ops;
+
+ static unsigned int scpi_cpufreq_get_rate(unsigned int cpu)
+ {
+- struct cpufreq_policy *policy = cpufreq_cpu_get_raw(cpu);
+- struct scpi_data *priv = policy->driver_data;
+- unsigned long rate = clk_get_rate(priv->clk);
++ struct cpufreq_policy *policy;
++ struct scpi_data *priv;
++ unsigned long rate;
++
++ policy = cpufreq_cpu_get_raw(cpu);
++ if (unlikely(!policy))
++ return 0;
++
++ priv = policy->driver_data;
++ rate = clk_get_rate(priv->clk);
+
+ return rate / 1000;
+ }
+diff --git a/drivers/cpufreq/sun50i-cpufreq-nvmem.c b/drivers/cpufreq/sun50i-cpufreq-nvmem.c
+index 47d6840b348994..744312a44279cb 100644
+--- a/drivers/cpufreq/sun50i-cpufreq-nvmem.c
++++ b/drivers/cpufreq/sun50i-cpufreq-nvmem.c
+@@ -194,7 +194,9 @@ static int sun50i_cpufreq_get_efuse(void)
+ struct nvmem_cell *speedbin_nvmem;
+ const struct of_device_id *match;
+ struct device *cpu_dev;
+- u32 *speedbin;
++ void *speedbin_ptr;
++ u32 speedbin = 0;
++ size_t len;
+ int ret;
+
+ cpu_dev = get_cpu_device(0);
+@@ -217,14 +219,18 @@ static int sun50i_cpufreq_get_efuse(void)
+ return dev_err_probe(cpu_dev, PTR_ERR(speedbin_nvmem),
+ "Could not get nvmem cell\n");
+
+- speedbin = nvmem_cell_read(speedbin_nvmem, NULL);
++ speedbin_ptr = nvmem_cell_read(speedbin_nvmem, &len);
+ nvmem_cell_put(speedbin_nvmem);
+- if (IS_ERR(speedbin))
+- return PTR_ERR(speedbin);
++ if (IS_ERR(speedbin_ptr))
++ return PTR_ERR(speedbin_ptr);
+
+- ret = opp_data->efuse_xlate(*speedbin);
++ if (len <= 4)
++ memcpy(&speedbin, speedbin_ptr, len);
++ speedbin = le32_to_cpu(speedbin);
+
+- kfree(speedbin);
++ ret = opp_data->efuse_xlate(speedbin);
++
++ kfree(speedbin_ptr);
+
+ return ret;
+ };
+diff --git a/drivers/crypto/atmel-sha204a.c b/drivers/crypto/atmel-sha204a.c
+index 75bebec2c757bf..0fcf4a39de279d 100644
+--- a/drivers/crypto/atmel-sha204a.c
++++ b/drivers/crypto/atmel-sha204a.c
+@@ -163,6 +163,12 @@ static int atmel_sha204a_probe(struct i2c_client *client)
+ i2c_priv->hwrng.name = dev_name(&client->dev);
+ i2c_priv->hwrng.read = atmel_sha204a_rng_read;
+
++ /*
++ * According to review by Bill Cox [1], this HWRNG has very low entropy.
++ * [1] https://www.metzdowd.com/pipermail/cryptography/2014-December/023858.html
++ */
++ i2c_priv->hwrng.quality = 1;
++
+ ret = devm_hwrng_register(&client->dev, &i2c_priv->hwrng);
+ if (ret)
+ dev_warn(&client->dev, "failed to register RNG (%d)\n", ret);
+diff --git a/drivers/crypto/ccp/sp-pci.c b/drivers/crypto/ccp/sp-pci.c
+index 157f9a9ed63616..2ebc878da16095 100644
+--- a/drivers/crypto/ccp/sp-pci.c
++++ b/drivers/crypto/ccp/sp-pci.c
+@@ -532,6 +532,7 @@ static const struct pci_device_id sp_pci_table[] = {
+ { PCI_VDEVICE(AMD, 0x14CA), (kernel_ulong_t)&dev_vdata[5] },
+ { PCI_VDEVICE(AMD, 0x15C7), (kernel_ulong_t)&dev_vdata[6] },
+ { PCI_VDEVICE(AMD, 0x1649), (kernel_ulong_t)&dev_vdata[6] },
++ { PCI_VDEVICE(AMD, 0x1134), (kernel_ulong_t)&dev_vdata[7] },
+ { PCI_VDEVICE(AMD, 0x17E0), (kernel_ulong_t)&dev_vdata[7] },
+ { PCI_VDEVICE(AMD, 0x156E), (kernel_ulong_t)&dev_vdata[8] },
+ /* Last entry must be zero */
+diff --git a/drivers/cxl/core/regs.c b/drivers/cxl/core/regs.c
+index 117c2e94c761d9..5ca7b0eed568b3 100644
+--- a/drivers/cxl/core/regs.c
++++ b/drivers/cxl/core/regs.c
+@@ -581,7 +581,6 @@ resource_size_t __rcrb_to_component(struct device *dev, struct cxl_rcrb_info *ri
+ resource_size_t rcrb = ri->base;
+ void __iomem *addr;
+ u32 bar0, bar1;
+- u16 cmd;
+ u32 id;
+
+ if (which == CXL_RCRB_UPSTREAM)
+@@ -603,7 +602,6 @@ resource_size_t __rcrb_to_component(struct device *dev, struct cxl_rcrb_info *ri
+ }
+
+ id = readl(addr + PCI_VENDOR_ID);
+- cmd = readw(addr + PCI_COMMAND);
+ bar0 = readl(addr + PCI_BASE_ADDRESS_0);
+ bar1 = readl(addr + PCI_BASE_ADDRESS_1);
+ iounmap(addr);
+@@ -618,8 +616,6 @@ resource_size_t __rcrb_to_component(struct device *dev, struct cxl_rcrb_info *ri
+ dev_err(dev, "Failed to access Downstream Port RCRB\n");
+ return CXL_RESOURCE_NONE;
+ }
+- if (!(cmd & PCI_COMMAND_MEMORY))
+- return CXL_RESOURCE_NONE;
+ /* The RCRB is a Memory Window, and the MEM_TYPE_1M bit is obsolete */
+ if (bar0 & (PCI_BASE_ADDRESS_MEM_TYPE_1M | PCI_BASE_ADDRESS_SPACE_IO))
+ return CXL_RESOURCE_NONE;
+diff --git a/drivers/dma-buf/udmabuf.c b/drivers/dma-buf/udmabuf.c
+index cc7398cc17d67f..e74e36a8ecda21 100644
+--- a/drivers/dma-buf/udmabuf.c
++++ b/drivers/dma-buf/udmabuf.c
+@@ -393,7 +393,7 @@ static long udmabuf_create(struct miscdevice *device,
+ if (!ubuf)
+ return -ENOMEM;
+
+- pglimit = (size_limit_mb * 1024 * 1024) >> PAGE_SHIFT;
++ pglimit = ((u64)size_limit_mb * 1024 * 1024) >> PAGE_SHIFT;
+ for (i = 0; i < head->count; i++) {
+ pgoff_t subpgcnt;
+
+diff --git a/drivers/dma/bcm2835-dma.c b/drivers/dma/bcm2835-dma.c
+index 20b10c15c69678..0117bb2e8591be 100644
+--- a/drivers/dma/bcm2835-dma.c
++++ b/drivers/dma/bcm2835-dma.c
+@@ -893,7 +893,7 @@ static int bcm2835_dma_suspend_late(struct device *dev)
+ }
+
+ static const struct dev_pm_ops bcm2835_dma_pm_ops = {
+- SET_LATE_SYSTEM_SLEEP_PM_OPS(bcm2835_dma_suspend_late, NULL)
++ LATE_SYSTEM_SLEEP_PM_OPS(bcm2835_dma_suspend_late, NULL)
+ };
+
+ static int bcm2835_dma_probe(struct platform_device *pdev)
+diff --git a/drivers/dma/dmatest.c b/drivers/dma/dmatest.c
+index 91b2fbc0b86471..d891dfca358e20 100644
+--- a/drivers/dma/dmatest.c
++++ b/drivers/dma/dmatest.c
+@@ -841,9 +841,9 @@ static int dmatest_func(void *data)
+ } else {
+ dma_async_issue_pending(chan);
+
+- wait_event_freezable_timeout(thread->done_wait,
+- done->done,
+- msecs_to_jiffies(params->timeout));
++ wait_event_timeout(thread->done_wait,
++ done->done,
++ msecs_to_jiffies(params->timeout));
+
+ status = dma_async_is_tx_complete(chan, cookie, NULL,
+ NULL);
+diff --git a/drivers/firmware/stratix10-svc.c b/drivers/firmware/stratix10-svc.c
+index 3c52cb73237a43..e3f990d888d718 100644
+--- a/drivers/firmware/stratix10-svc.c
++++ b/drivers/firmware/stratix10-svc.c
+@@ -1224,22 +1224,28 @@ static int stratix10_svc_drv_probe(struct platform_device *pdev)
+ if (!svc->intel_svc_fcs) {
+ dev_err(dev, "failed to allocate %s device\n", INTEL_FCS);
+ ret = -ENOMEM;
+- goto err_unregister_dev;
++ goto err_unregister_rsu_dev;
+ }
+
+ ret = platform_device_add(svc->intel_svc_fcs);
+ if (ret) {
+ platform_device_put(svc->intel_svc_fcs);
+- goto err_unregister_dev;
++ goto err_unregister_rsu_dev;
+ }
+
++ ret = of_platform_default_populate(dev_of_node(dev), NULL, dev);
++ if (ret)
++ goto err_unregister_fcs_dev;
++
+ dev_set_drvdata(dev, svc);
+
+ pr_info("Intel Service Layer Driver Initialized\n");
+
+ return 0;
+
+-err_unregister_dev:
++err_unregister_fcs_dev:
++ platform_device_unregister(svc->intel_svc_fcs);
++err_unregister_rsu_dev:
+ platform_device_unregister(svc->stratix10_svc_rsu);
+ err_free_kfifo:
+ kfifo_free(&controller->svc_fifo);
+@@ -1253,6 +1259,8 @@ static void stratix10_svc_drv_remove(struct platform_device *pdev)
+ struct stratix10_svc *svc = dev_get_drvdata(&pdev->dev);
+ struct stratix10_svc_controller *ctrl = platform_get_drvdata(pdev);
+
++ of_platform_depopulate(ctrl->dev);
++
+ platform_device_unregister(svc->intel_svc_fcs);
+ platform_device_unregister(svc->stratix10_svc_rsu);
+
+diff --git a/drivers/gpio/gpiolib-of.c b/drivers/gpio/gpiolib-of.c
+index 176e9142fd8f85..56f13e4fa3614d 100644
+--- a/drivers/gpio/gpiolib-of.c
++++ b/drivers/gpio/gpiolib-of.c
+@@ -259,6 +259,9 @@ static void of_gpio_set_polarity_by_property(const struct device_node *np,
+ { "fsl,imx8qm-fec", "phy-reset-gpios", "phy-reset-active-high" },
+ { "fsl,s32v234-fec", "phy-reset-gpios", "phy-reset-active-high" },
+ #endif
++#if IS_ENABLED(CONFIG_MMC_ATMELMCI)
++ { "atmel,hsmci", "cd-gpios", "cd-inverted" },
++#endif
+ #if IS_ENABLED(CONFIG_PCI_IMX6)
+ { "fsl,imx6q-pcie", "reset-gpio", "reset-gpio-active-high" },
+ { "fsl,imx6sx-pcie", "reset-gpio", "reset-gpio-active-high" },
+@@ -284,9 +287,6 @@ static void of_gpio_set_polarity_by_property(const struct device_node *np,
+ #if IS_ENABLED(CONFIG_REGULATOR_GPIO)
+ { "regulator-gpio", "enable-gpio", "enable-active-high" },
+ { "regulator-gpio", "enable-gpios", "enable-active-high" },
+-#endif
+-#if IS_ENABLED(CONFIG_MMC_ATMELMCI)
+- { "atmel,hsmci", "cd-gpios", "cd-inverted" },
+ #endif
+ };
+ unsigned int i;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+index 69895fccb474ae..98f0c12df12bc1 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+@@ -352,7 +352,6 @@ enum amdgpu_kiq_irq {
+ AMDGPU_CP_KIQ_IRQ_DRIVER0 = 0,
+ AMDGPU_CP_KIQ_IRQ_LAST
+ };
+-#define SRIOV_USEC_TIMEOUT 1200000 /* wait 12 * 100ms for SRIOV */
+ #define MAX_KIQ_REG_WAIT 5000 /* in usecs, 5ms */
+ #define MAX_KIQ_REG_BAILOUT_INTERVAL 5 /* in msecs, 5ms */
+ #define MAX_KIQ_REG_TRY 1000
+@@ -1119,6 +1118,7 @@ struct amdgpu_device {
+ bool in_s3;
+ bool in_s4;
+ bool in_s0ix;
++ suspend_state_t last_suspend_state;
+
+ enum pp_mp1_state mp1_state;
+ struct amdgpu_doorbell_index doorbell_index;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+index 24c255e05079e0..f2d77bc04e4a98 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+@@ -2515,8 +2515,20 @@ static int amdgpu_pmops_suspend(struct device *dev)
+ adev->in_s0ix = true;
+ else if (amdgpu_acpi_is_s3_active(adev))
+ adev->in_s3 = true;
+- if (!adev->in_s0ix && !adev->in_s3)
++ if (!adev->in_s0ix && !adev->in_s3) {
++		/* don't allow a deep suspend the first time followed by s2idle the next time */
++ if (adev->last_suspend_state != PM_SUSPEND_ON &&
++ adev->last_suspend_state != pm_suspend_target_state) {
++ drm_err_once(drm_dev, "Unsupported suspend state %d\n",
++ pm_suspend_target_state);
++ return -EINVAL;
++ }
+ return 0;
++ }
++
++ /* cache the state last used for suspend */
++ adev->last_suspend_state = pm_suspend_target_state;
++
+ return amdgpu_device_suspend(drm_dev, true);
+ }
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
+index c1f35ded684e81..506786784e32dc 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
+@@ -1411,9 +1411,11 @@ static int amdgpu_gfx_run_cleaner_shader_job(struct amdgpu_ring *ring)
+ struct amdgpu_device *adev = ring->adev;
+ struct drm_gpu_scheduler *sched = &ring->sched;
+ struct drm_sched_entity entity;
++ static atomic_t counter;
+ struct dma_fence *f;
+ struct amdgpu_job *job;
+ struct amdgpu_ib *ib;
++ void *owner;
+ int i, r;
+
+ /* Initialize the scheduler entity */
+@@ -1424,9 +1426,15 @@ static int amdgpu_gfx_run_cleaner_shader_job(struct amdgpu_ring *ring)
+ goto err;
+ }
+
+- r = amdgpu_job_alloc_with_ib(ring->adev, &entity, NULL,
+- 64, 0,
+- &job);
++ /*
++ * Use some unique dummy value as the owner to make sure we execute
++	 * the cleaner shader on each submission. The value just needs to change
++ * for each submission and is otherwise meaningless.
++ */
++ owner = (void *)(unsigned long)atomic_inc_return(&counter);
++
++ r = amdgpu_job_alloc_with_ib(ring->adev, &entity, owner,
++ 64, 0, &job);
+ if (r)
+ goto err;
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
+index 1c19a65e655337..ef74259c448d78 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
+@@ -678,12 +678,10 @@ int amdgpu_gmc_flush_gpu_tlb_pasid(struct amdgpu_device *adev, uint16_t pasid,
+ uint32_t flush_type, bool all_hub,
+ uint32_t inst)
+ {
+- u32 usec_timeout = amdgpu_sriov_vf(adev) ? SRIOV_USEC_TIMEOUT :
+- adev->usec_timeout;
+ struct amdgpu_ring *ring = &adev->gfx.kiq[inst].ring;
+ struct amdgpu_kiq *kiq = &adev->gfx.kiq[inst];
+ unsigned int ndw;
+- int r;
++ int r, cnt = 0;
+ uint32_t seq;
+
+ /*
+@@ -740,10 +738,21 @@ int amdgpu_gmc_flush_gpu_tlb_pasid(struct amdgpu_device *adev, uint16_t pasid,
+
+ amdgpu_ring_commit(ring);
+ spin_unlock(&adev->gfx.kiq[inst].ring_lock);
+- if (amdgpu_fence_wait_polling(ring, seq, usec_timeout) < 1) {
++
++ r = amdgpu_fence_wait_polling(ring, seq, MAX_KIQ_REG_WAIT);
++
++ might_sleep();
++ while (r < 1 && cnt++ < MAX_KIQ_REG_TRY &&
++ !amdgpu_reset_pending(adev->reset_domain)) {
++ msleep(MAX_KIQ_REG_BAILOUT_INTERVAL);
++ r = amdgpu_fence_wait_polling(ring, seq, MAX_KIQ_REG_WAIT);
++ }
++
++ if (cnt > MAX_KIQ_REG_TRY) {
+ dev_err(adev->dev, "timeout waiting for kiq fence\n");
+ r = -ETIME;
+- }
++ } else
++ r = 0;
+ }
+
+ error_unlock_reset:
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+index 5ba263fe551211..1f32c531f610e3 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+@@ -6044,7 +6044,7 @@ static int gfx_v10_0_cp_gfx_load_pfp_microcode(struct amdgpu_device *adev)
+ }
+
+ if (amdgpu_emu_mode == 1)
+- adev->hdp.funcs->flush_hdp(adev, NULL);
++ amdgpu_device_flush_hdp(adev, NULL);
+
+ tmp = RREG32_SOC15(GC, 0, mmCP_PFP_IC_BASE_CNTL);
+ tmp = REG_SET_FIELD(tmp, CP_PFP_IC_BASE_CNTL, VMID, 0);
+@@ -6122,7 +6122,7 @@ static int gfx_v10_0_cp_gfx_load_ce_microcode(struct amdgpu_device *adev)
+ }
+
+ if (amdgpu_emu_mode == 1)
+- adev->hdp.funcs->flush_hdp(adev, NULL);
++ amdgpu_device_flush_hdp(adev, NULL);
+
+ tmp = RREG32_SOC15(GC, 0, mmCP_CE_IC_BASE_CNTL);
+ tmp = REG_SET_FIELD(tmp, CP_CE_IC_BASE_CNTL, VMID, 0);
+@@ -6199,7 +6199,7 @@ static int gfx_v10_0_cp_gfx_load_me_microcode(struct amdgpu_device *adev)
+ }
+
+ if (amdgpu_emu_mode == 1)
+- adev->hdp.funcs->flush_hdp(adev, NULL);
++ amdgpu_device_flush_hdp(adev, NULL);
+
+ tmp = RREG32_SOC15(GC, 0, mmCP_ME_IC_BASE_CNTL);
+ tmp = REG_SET_FIELD(tmp, CP_ME_IC_BASE_CNTL, VMID, 0);
+@@ -6574,7 +6574,7 @@ static int gfx_v10_0_cp_compute_load_microcode(struct amdgpu_device *adev)
+ }
+
+ if (amdgpu_emu_mode == 1)
+- adev->hdp.funcs->flush_hdp(adev, NULL);
++ amdgpu_device_flush_hdp(adev, NULL);
+
+ tmp = RREG32_SOC15(GC, 0, mmCP_CPC_IC_BASE_CNTL);
+ tmp = REG_SET_FIELD(tmp, CP_CPC_IC_BASE_CNTL, CACHE_POLICY, 0);
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
+index cfb51baa581a13..f1f53c7687410e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
+@@ -2391,7 +2391,7 @@ static int gfx_v11_0_config_me_cache(struct amdgpu_device *adev, uint64_t addr)
+ }
+
+ if (amdgpu_emu_mode == 1)
+- adev->hdp.funcs->flush_hdp(adev, NULL);
++ amdgpu_device_flush_hdp(adev, NULL);
+
+ tmp = RREG32_SOC15(GC, 0, regCP_ME_IC_BASE_CNTL);
+ tmp = REG_SET_FIELD(tmp, CP_ME_IC_BASE_CNTL, VMID, 0);
+@@ -2435,7 +2435,7 @@ static int gfx_v11_0_config_pfp_cache(struct amdgpu_device *adev, uint64_t addr)
+ }
+
+ if (amdgpu_emu_mode == 1)
+- adev->hdp.funcs->flush_hdp(adev, NULL);
++ amdgpu_device_flush_hdp(adev, NULL);
+
+ tmp = RREG32_SOC15(GC, 0, regCP_PFP_IC_BASE_CNTL);
+ tmp = REG_SET_FIELD(tmp, CP_PFP_IC_BASE_CNTL, VMID, 0);
+@@ -2480,7 +2480,7 @@ static int gfx_v11_0_config_mec_cache(struct amdgpu_device *adev, uint64_t addr)
+ }
+
+ if (amdgpu_emu_mode == 1)
+- adev->hdp.funcs->flush_hdp(adev, NULL);
++ amdgpu_device_flush_hdp(adev, NULL);
+
+ tmp = RREG32_SOC15(GC, 0, regCP_CPC_IC_BASE_CNTL);
+ tmp = REG_SET_FIELD(tmp, CP_CPC_IC_BASE_CNTL, CACHE_POLICY, 0);
+@@ -3115,7 +3115,7 @@ static int gfx_v11_0_cp_gfx_load_pfp_microcode_rs64(struct amdgpu_device *adev)
+ amdgpu_bo_unreserve(adev->gfx.pfp.pfp_fw_data_obj);
+
+ if (amdgpu_emu_mode == 1)
+- adev->hdp.funcs->flush_hdp(adev, NULL);
++ amdgpu_device_flush_hdp(adev, NULL);
+
+ WREG32_SOC15(GC, 0, regCP_PFP_IC_BASE_LO,
+ lower_32_bits(adev->gfx.pfp.pfp_fw_gpu_addr));
+@@ -3333,7 +3333,7 @@ static int gfx_v11_0_cp_gfx_load_me_microcode_rs64(struct amdgpu_device *adev)
+ amdgpu_bo_unreserve(adev->gfx.me.me_fw_data_obj);
+
+ if (amdgpu_emu_mode == 1)
+- adev->hdp.funcs->flush_hdp(adev, NULL);
++ amdgpu_device_flush_hdp(adev, NULL);
+
+ WREG32_SOC15(GC, 0, regCP_ME_IC_BASE_LO,
+ lower_32_bits(adev->gfx.me.me_fw_gpu_addr));
+@@ -4549,7 +4549,7 @@ static int gfx_v11_0_gfxhub_enable(struct amdgpu_device *adev)
+ if (r)
+ return r;
+
+- adev->hdp.funcs->flush_hdp(adev, NULL);
++ amdgpu_device_flush_hdp(adev, NULL);
+
+ value = (amdgpu_vm_fault_stop == AMDGPU_VM_FAULT_STOP_ALWAYS) ?
+ false : true;
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
+index c21b168f75a754..0c08785099f320 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
+@@ -2306,7 +2306,7 @@ static int gfx_v12_0_cp_gfx_load_pfp_microcode_rs64(struct amdgpu_device *adev)
+ amdgpu_bo_unreserve(adev->gfx.pfp.pfp_fw_data_obj);
+
+ if (amdgpu_emu_mode == 1)
+- adev->hdp.funcs->flush_hdp(adev, NULL);
++ amdgpu_device_flush_hdp(adev, NULL);
+
+ WREG32_SOC15(GC, 0, regCP_PFP_IC_BASE_LO,
+ lower_32_bits(adev->gfx.pfp.pfp_fw_gpu_addr));
+@@ -2450,7 +2450,7 @@ static int gfx_v12_0_cp_gfx_load_me_microcode_rs64(struct amdgpu_device *adev)
+ amdgpu_bo_unreserve(adev->gfx.me.me_fw_data_obj);
+
+ if (amdgpu_emu_mode == 1)
+- adev->hdp.funcs->flush_hdp(adev, NULL);
++ amdgpu_device_flush_hdp(adev, NULL);
+
+ WREG32_SOC15(GC, 0, regCP_ME_IC_BASE_LO,
+ lower_32_bits(adev->gfx.me.me_fw_gpu_addr));
+@@ -3469,7 +3469,7 @@ static int gfx_v12_0_gfxhub_enable(struct amdgpu_device *adev)
+ if (r)
+ return r;
+
+- adev->hdp.funcs->flush_hdp(adev, NULL);
++ amdgpu_device_flush_hdp(adev, NULL);
+
+ value = (amdgpu_vm_fault_stop == AMDGPU_VM_FAULT_STOP_ALWAYS) ?
+ false : true;
+diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c
+index 9bedca9a79c63c..a88ad9951d3288 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c
+@@ -268,7 +268,7 @@ static void gmc_v10_0_flush_gpu_tlb(struct amdgpu_device *adev, uint32_t vmid,
+ ack = hub->vm_inv_eng0_ack + hub->eng_distance * eng;
+
+ /* flush hdp cache */
+- adev->hdp.funcs->flush_hdp(adev, NULL);
++ amdgpu_device_flush_hdp(adev, NULL);
+
+ /* This is necessary for SRIOV as well as for GFXOFF to function
+ * properly under bare metal
+@@ -969,7 +969,7 @@ static int gmc_v10_0_gart_enable(struct amdgpu_device *adev)
+ adev->hdp.funcs->init_registers(adev);
+
+ /* Flush HDP after it is initialized */
+- adev->hdp.funcs->flush_hdp(adev, NULL);
++ amdgpu_device_flush_hdp(adev, NULL);
+
+ value = (amdgpu_vm_fault_stop == AMDGPU_VM_FAULT_STOP_ALWAYS) ?
+ false : true;
+diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v11_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v11_0.c
+index 72751ab4c766ec..1eb97117fe7ae4 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gmc_v11_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gmc_v11_0.c
+@@ -229,7 +229,7 @@ static void gmc_v11_0_flush_gpu_tlb(struct amdgpu_device *adev, uint32_t vmid,
+ ack = hub->vm_inv_eng0_ack + hub->eng_distance * eng;
+
+ /* flush hdp cache */
+- adev->hdp.funcs->flush_hdp(adev, NULL);
++ amdgpu_device_flush_hdp(adev, NULL);
+
+ /* This is necessary for SRIOV as well as for GFXOFF to function
+ * properly under bare metal
+@@ -896,7 +896,7 @@ static int gmc_v11_0_gart_enable(struct amdgpu_device *adev)
+ return r;
+
+ /* Flush HDP after it is initialized */
+- adev->hdp.funcs->flush_hdp(adev, NULL);
++ amdgpu_device_flush_hdp(adev, NULL);
+
+ value = (amdgpu_vm_fault_stop == AMDGPU_VM_FAULT_STOP_ALWAYS) ?
+ false : true;
+diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v12_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v12_0.c
+index c3c144a4f45eb1..0f136d6bbdc9b5 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gmc_v12_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gmc_v12_0.c
+@@ -297,7 +297,7 @@ static void gmc_v12_0_flush_gpu_tlb(struct amdgpu_device *adev, uint32_t vmid,
+ return;
+
+ /* flush hdp cache */
+- adev->hdp.funcs->flush_hdp(adev, NULL);
++ amdgpu_device_flush_hdp(adev, NULL);
+
+ /* This is necessary for SRIOV as well as for GFXOFF to function
+ * properly under bare metal
+@@ -881,7 +881,7 @@ static int gmc_v12_0_gart_enable(struct amdgpu_device *adev)
+ return r;
+
+ /* Flush HDP after it is initialized */
+- adev->hdp.funcs->flush_hdp(adev, NULL);
++ amdgpu_device_flush_hdp(adev, NULL);
+
+ value = (amdgpu_vm_fault_stop == AMDGPU_VM_FAULT_STOP_ALWAYS) ?
+ false : true;
+diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+index 291549765c38c5..5250b470e5ef39 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+@@ -2434,7 +2434,7 @@ static int gmc_v9_0_hw_init(struct amdgpu_ip_block *ip_block)
+ adev->hdp.funcs->init_registers(adev);
+
+ /* After HDP is initialized, flush HDP.*/
+- adev->hdp.funcs->flush_hdp(adev, NULL);
++ amdgpu_device_flush_hdp(adev, NULL);
+
+ if (amdgpu_vm_fault_stop == AMDGPU_VM_FAULT_STOP_ALWAYS)
+ value = false;
+diff --git a/drivers/gpu/drm/amd/amdgpu/psp_v11_0.c b/drivers/gpu/drm/amd/amdgpu/psp_v11_0.c
+index 2395f1856962ad..e77a467af7ac31 100644
+--- a/drivers/gpu/drm/amd/amdgpu/psp_v11_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/psp_v11_0.c
+@@ -532,7 +532,7 @@ static int psp_v11_0_memory_training(struct psp_context *psp, uint32_t ops)
+ }
+
+ memcpy_toio(adev->mman.aper_base_kaddr, buf, sz);
+- adev->hdp.funcs->flush_hdp(adev, NULL);
++ amdgpu_device_flush_hdp(adev, NULL);
+ vfree(buf);
+ drm_dev_exit(idx);
+ } else {
+diff --git a/drivers/gpu/drm/amd/amdgpu/psp_v13_0.c b/drivers/gpu/drm/amd/amdgpu/psp_v13_0.c
+index cc621064610f1d..afdf8ce3b4c59e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/psp_v13_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/psp_v13_0.c
+@@ -610,7 +610,7 @@ static int psp_v13_0_memory_training(struct psp_context *psp, uint32_t ops)
+ }
+
+ memcpy_toio(adev->mman.aper_base_kaddr, buf, sz);
+- adev->hdp.funcs->flush_hdp(adev, NULL);
++ amdgpu_device_flush_hdp(adev, NULL);
+ vfree(buf);
+ drm_dev_exit(idx);
+ } else {
+diff --git a/drivers/gpu/drm/amd/amdgpu/psp_v14_0.c b/drivers/gpu/drm/amd/amdgpu/psp_v14_0.c
+index 4d33c95a511631..89f6c06946c51b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/psp_v14_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/psp_v14_0.c
+@@ -488,7 +488,7 @@ static int psp_v14_0_memory_training(struct psp_context *psp, uint32_t ops)
+ }
+
+ memcpy_toio(adev->mman.aper_base_kaddr, buf, sz);
+- adev->hdp.funcs->flush_hdp(adev, NULL);
++ amdgpu_device_flush_hdp(adev, NULL);
+ vfree(buf);
+ drm_dev_exit(idx);
+ } else {
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_topology.c b/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
+index ceb9fb475ef137..62a9a9ccf9bb63 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
+@@ -2000,7 +2000,8 @@ static void kfd_topology_set_capabilities(struct kfd_topology_device *dev)
+ dev->node_props.capability |=
+ HSA_CAP_TRAP_DEBUG_PRECISE_MEMORY_OPERATIONS_SUPPORTED;
+
+- dev->node_props.capability |= HSA_CAP_PER_QUEUE_RESET_SUPPORTED;
++ if (!amdgpu_sriov_vf(dev->gpu->adev))
++ dev->node_props.capability |= HSA_CAP_PER_QUEUE_RESET_SUPPORTED;
+ } else {
+ dev->node_props.debug_prop |= HSA_DBG_WATCH_ADDR_MASK_LO_BIT_GFX10 |
+ HSA_DBG_WATCH_ADDR_MASK_HI_BIT;
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 80a3cbd2cbe5d4..76c8e6457175f4 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -3293,16 +3293,16 @@ static void dm_gpureset_commit_state(struct dc_state *dc_state,
+ for (k = 0; k < dc_state->stream_count; k++) {
+ bundle->stream_update.stream = dc_state->streams[k];
+
+- for (m = 0; m < dc_state->stream_status->plane_count; m++) {
++ for (m = 0; m < dc_state->stream_status[k].plane_count; m++) {
+ bundle->surface_updates[m].surface =
+- dc_state->stream_status->plane_states[m];
++ dc_state->stream_status[k].plane_states[m];
+ bundle->surface_updates[m].surface->force_full_update =
+ true;
+ }
+
+ update_planes_and_stream_adapter(dm->dc,
+ UPDATE_TYPE_FULL,
+- dc_state->stream_status->plane_count,
++ dc_state->stream_status[k].plane_count,
+ dc_state->streams[k],
+ &bundle->stream_update,
+ bundle->surface_updates);
+@@ -10901,6 +10901,9 @@ static bool should_reset_plane(struct drm_atomic_state *state,
+ state->allow_modeset)
+ return true;
+
++ if (amdgpu_in_reset(adev) && state->allow_modeset)
++ return true;
++
+ /* Exit early if we know that we're adding or removing the plane. */
+ if (old_plane_state->crtc != new_plane_state->crtc)
+ return true;
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
+index fbd80d8545a823..a2532907c7be92 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
+@@ -912,7 +912,7 @@ dm_helpers_probe_acpi_edid(void *data, u8 *buf, unsigned int block, size_t len)
+ {
+ struct drm_connector *connector = data;
+ struct acpi_device *acpidev = ACPI_COMPANION(connector->dev->dev);
+- unsigned char start = block * EDID_LENGTH;
++ unsigned short start = block * EDID_LENGTH;
+ struct edid *edid;
+ int r;
+
+diff --git a/drivers/gpu/drm/meson/meson_drv.c b/drivers/gpu/drm/meson/meson_drv.c
+index 81d2ee37e7732d..49ff9f1f16d32a 100644
+--- a/drivers/gpu/drm/meson/meson_drv.c
++++ b/drivers/gpu/drm/meson/meson_drv.c
+@@ -169,7 +169,7 @@ static const struct meson_drm_soc_attr meson_drm_soc_attrs[] = {
+ /* S805X/S805Y HDMI PLL won't lock for HDMI PHY freq > 1,65GHz */
+ {
+ .limits = {
+- .max_hdmi_phy_freq = 1650000,
++ .max_hdmi_phy_freq = 1650000000,
+ },
+ .attrs = (const struct soc_device_attribute []) {
+ { .soc_id = "GXL (S805*)", },
+diff --git a/drivers/gpu/drm/meson/meson_drv.h b/drivers/gpu/drm/meson/meson_drv.h
+index 3f9345c14f31c1..be4b0e4df6e13e 100644
+--- a/drivers/gpu/drm/meson/meson_drv.h
++++ b/drivers/gpu/drm/meson/meson_drv.h
+@@ -37,7 +37,7 @@ struct meson_drm_match_data {
+ };
+
+ struct meson_drm_soc_limits {
+- unsigned int max_hdmi_phy_freq;
++ unsigned long long max_hdmi_phy_freq;
+ };
+
+ struct meson_drm {
+diff --git a/drivers/gpu/drm/meson/meson_encoder_hdmi.c b/drivers/gpu/drm/meson/meson_encoder_hdmi.c
+index 0593a1cde906ff..ce8cea5d3a56be 100644
+--- a/drivers/gpu/drm/meson/meson_encoder_hdmi.c
++++ b/drivers/gpu/drm/meson/meson_encoder_hdmi.c
+@@ -70,12 +70,12 @@ static void meson_encoder_hdmi_set_vclk(struct meson_encoder_hdmi *encoder_hdmi,
+ {
+ struct meson_drm *priv = encoder_hdmi->priv;
+ int vic = drm_match_cea_mode(mode);
+- unsigned int phy_freq;
+- unsigned int vclk_freq;
+- unsigned int venc_freq;
+- unsigned int hdmi_freq;
++ unsigned long long phy_freq;
++ unsigned long long vclk_freq;
++ unsigned long long venc_freq;
++ unsigned long long hdmi_freq;
+
+- vclk_freq = mode->clock;
++ vclk_freq = mode->clock * 1000;
+
+ /* For 420, pixel clock is half unlike venc clock */
+ if (encoder_hdmi->output_bus_fmt == MEDIA_BUS_FMT_UYYVYY8_0_5X24)
+@@ -107,7 +107,8 @@ static void meson_encoder_hdmi_set_vclk(struct meson_encoder_hdmi *encoder_hdmi,
+ if (mode->flags & DRM_MODE_FLAG_DBLCLK)
+ venc_freq /= 2;
+
+- dev_dbg(priv->dev, "vclk:%d phy=%d venc=%d hdmi=%d enci=%d\n",
++ dev_dbg(priv->dev,
++ "vclk:%lluHz phy=%lluHz venc=%lluHz hdmi=%lluHz enci=%d\n",
+ phy_freq, vclk_freq, venc_freq, hdmi_freq,
+ priv->venc.hdmi_use_enci);
+
+@@ -122,10 +123,11 @@ static enum drm_mode_status meson_encoder_hdmi_mode_valid(struct drm_bridge *bri
+ struct meson_encoder_hdmi *encoder_hdmi = bridge_to_meson_encoder_hdmi(bridge);
+ struct meson_drm *priv = encoder_hdmi->priv;
+ bool is_hdmi2_sink = display_info->hdmi.scdc.supported;
+- unsigned int phy_freq;
+- unsigned int vclk_freq;
+- unsigned int venc_freq;
+- unsigned int hdmi_freq;
++ unsigned long long clock = mode->clock * 1000;
++ unsigned long long phy_freq;
++ unsigned long long vclk_freq;
++ unsigned long long venc_freq;
++ unsigned long long hdmi_freq;
+ int vic = drm_match_cea_mode(mode);
+ enum drm_mode_status status;
+
+@@ -144,12 +146,12 @@ static enum drm_mode_status meson_encoder_hdmi_mode_valid(struct drm_bridge *bri
+ if (status != MODE_OK)
+ return status;
+
+- return meson_vclk_dmt_supported_freq(priv, mode->clock);
++ return meson_vclk_dmt_supported_freq(priv, clock);
+ /* Check against supported VIC modes */
+ } else if (!meson_venc_hdmi_supported_vic(vic))
+ return MODE_BAD;
+
+- vclk_freq = mode->clock;
++ vclk_freq = clock;
+
+ /* For 420, pixel clock is half unlike venc clock */
+ if (drm_mode_is_420_only(display_info, mode) ||
+@@ -179,7 +181,8 @@ static enum drm_mode_status meson_encoder_hdmi_mode_valid(struct drm_bridge *bri
+ if (mode->flags & DRM_MODE_FLAG_DBLCLK)
+ venc_freq /= 2;
+
+- dev_dbg(priv->dev, "%s: vclk:%d phy=%d venc=%d hdmi=%d\n",
++ dev_dbg(priv->dev,
++ "%s: vclk:%lluHz phy=%lluHz venc=%lluHz hdmi=%lluHz\n",
+ __func__, phy_freq, vclk_freq, venc_freq, hdmi_freq);
+
+ return meson_vclk_vic_supported_freq(priv, phy_freq, vclk_freq);
+diff --git a/drivers/gpu/drm/meson/meson_vclk.c b/drivers/gpu/drm/meson/meson_vclk.c
+index 2a942dc6a6dc23..3325580d885d0a 100644
+--- a/drivers/gpu/drm/meson/meson_vclk.c
++++ b/drivers/gpu/drm/meson/meson_vclk.c
+@@ -110,7 +110,10 @@
+ #define HDMI_PLL_LOCK BIT(31)
+ #define HDMI_PLL_LOCK_G12A (3 << 30)
+
+-#define FREQ_1000_1001(_freq) DIV_ROUND_CLOSEST(_freq * 1000, 1001)
++#define PIXEL_FREQ_1000_1001(_freq) \
++ DIV_ROUND_CLOSEST_ULL((_freq) * 1000ULL, 1001ULL)
++#define PHY_FREQ_1000_1001(_freq) \
++ (PIXEL_FREQ_1000_1001(DIV_ROUND_DOWN_ULL(_freq, 10ULL)) * 10)
+
+ /* VID PLL Dividers */
+ enum {
+@@ -360,11 +363,11 @@ enum {
+ };
+
+ struct meson_vclk_params {
+- unsigned int pll_freq;
+- unsigned int phy_freq;
+- unsigned int vclk_freq;
+- unsigned int venc_freq;
+- unsigned int pixel_freq;
++ unsigned long long pll_freq;
++ unsigned long long phy_freq;
++ unsigned long long vclk_freq;
++ unsigned long long venc_freq;
++ unsigned long long pixel_freq;
+ unsigned int pll_od1;
+ unsigned int pll_od2;
+ unsigned int pll_od3;
+@@ -372,11 +375,11 @@ struct meson_vclk_params {
+ unsigned int vclk_div;
+ } params[] = {
+ [MESON_VCLK_HDMI_ENCI_54000] = {
+- .pll_freq = 4320000,
+- .phy_freq = 270000,
+- .vclk_freq = 54000,
+- .venc_freq = 54000,
+- .pixel_freq = 54000,
++ .pll_freq = 4320000000,
++ .phy_freq = 270000000,
++ .vclk_freq = 54000000,
++ .venc_freq = 54000000,
++ .pixel_freq = 54000000,
+ .pll_od1 = 4,
+ .pll_od2 = 4,
+ .pll_od3 = 1,
+@@ -384,11 +387,11 @@ struct meson_vclk_params {
+ .vclk_div = 1,
+ },
+ [MESON_VCLK_HDMI_DDR_54000] = {
+- .pll_freq = 4320000,
+- .phy_freq = 270000,
+- .vclk_freq = 54000,
+- .venc_freq = 54000,
+- .pixel_freq = 27000,
++ .pll_freq = 4320000000,
++ .phy_freq = 270000000,
++ .vclk_freq = 54000000,
++ .venc_freq = 54000000,
++ .pixel_freq = 27000000,
+ .pll_od1 = 4,
+ .pll_od2 = 4,
+ .pll_od3 = 1,
+@@ -396,11 +399,11 @@ struct meson_vclk_params {
+ .vclk_div = 1,
+ },
+ [MESON_VCLK_HDMI_DDR_148500] = {
+- .pll_freq = 2970000,
+- .phy_freq = 742500,
+- .vclk_freq = 148500,
+- .venc_freq = 148500,
+- .pixel_freq = 74250,
++ .pll_freq = 2970000000,
++ .phy_freq = 742500000,
++ .vclk_freq = 148500000,
++ .venc_freq = 148500000,
++ .pixel_freq = 74250000,
+ .pll_od1 = 4,
+ .pll_od2 = 1,
+ .pll_od3 = 1,
+@@ -408,11 +411,11 @@ struct meson_vclk_params {
+ .vclk_div = 1,
+ },
+ [MESON_VCLK_HDMI_74250] = {
+- .pll_freq = 2970000,
+- .phy_freq = 742500,
+- .vclk_freq = 74250,
+- .venc_freq = 74250,
+- .pixel_freq = 74250,
++ .pll_freq = 2970000000,
++ .phy_freq = 742500000,
++ .vclk_freq = 74250000,
++ .venc_freq = 74250000,
++ .pixel_freq = 74250000,
+ .pll_od1 = 2,
+ .pll_od2 = 2,
+ .pll_od3 = 2,
+@@ -420,11 +423,11 @@ struct meson_vclk_params {
+ .vclk_div = 1,
+ },
+ [MESON_VCLK_HDMI_148500] = {
+- .pll_freq = 2970000,
+- .phy_freq = 1485000,
+- .vclk_freq = 148500,
+- .venc_freq = 148500,
+- .pixel_freq = 148500,
++ .pll_freq = 2970000000,
++ .phy_freq = 1485000000,
++ .vclk_freq = 148500000,
++ .venc_freq = 148500000,
++ .pixel_freq = 148500000,
+ .pll_od1 = 1,
+ .pll_od2 = 2,
+ .pll_od3 = 2,
+@@ -432,11 +435,11 @@ struct meson_vclk_params {
+ .vclk_div = 1,
+ },
+ [MESON_VCLK_HDMI_297000] = {
+- .pll_freq = 5940000,
+- .phy_freq = 2970000,
+- .venc_freq = 297000,
+- .vclk_freq = 297000,
+- .pixel_freq = 297000,
++ .pll_freq = 5940000000,
++ .phy_freq = 2970000000,
++ .venc_freq = 297000000,
++ .vclk_freq = 297000000,
++ .pixel_freq = 297000000,
+ .pll_od1 = 2,
+ .pll_od2 = 1,
+ .pll_od3 = 1,
+@@ -444,11 +447,11 @@ struct meson_vclk_params {
+ .vclk_div = 2,
+ },
+ [MESON_VCLK_HDMI_594000] = {
+- .pll_freq = 5940000,
+- .phy_freq = 5940000,
+- .venc_freq = 594000,
+- .vclk_freq = 594000,
+- .pixel_freq = 594000,
++ .pll_freq = 5940000000,
++ .phy_freq = 5940000000,
++ .venc_freq = 594000000,
++ .vclk_freq = 594000000,
++ .pixel_freq = 594000000,
+ .pll_od1 = 1,
+ .pll_od2 = 1,
+ .pll_od3 = 2,
+@@ -456,11 +459,11 @@ struct meson_vclk_params {
+ .vclk_div = 1,
+ },
+ [MESON_VCLK_HDMI_594000_YUV420] = {
+- .pll_freq = 5940000,
+- .phy_freq = 2970000,
+- .venc_freq = 594000,
+- .vclk_freq = 594000,
+- .pixel_freq = 297000,
++ .pll_freq = 5940000000,
++ .phy_freq = 2970000000,
++ .venc_freq = 594000000,
++ .vclk_freq = 594000000,
++ .pixel_freq = 297000000,
+ .pll_od1 = 2,
+ .pll_od2 = 1,
+ .pll_od3 = 1,
+@@ -617,16 +620,16 @@ static void meson_hdmi_pll_set_params(struct meson_drm *priv, unsigned int m,
+ 3 << 20, pll_od_to_reg(od3) << 20);
+ }
+
+-#define XTAL_FREQ 24000
++#define XTAL_FREQ (24 * 1000 * 1000)
+
+ static unsigned int meson_hdmi_pll_get_m(struct meson_drm *priv,
+- unsigned int pll_freq)
++ unsigned long long pll_freq)
+ {
+ /* The GXBB PLL has a /2 pre-multiplier */
+ if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_GXBB))
+- pll_freq /= 2;
++ pll_freq = DIV_ROUND_DOWN_ULL(pll_freq, 2);
+
+- return pll_freq / XTAL_FREQ;
++ return DIV_ROUND_DOWN_ULL(pll_freq, XTAL_FREQ);
+ }
+
+ #define HDMI_FRAC_MAX_GXBB 4096
+@@ -635,12 +638,13 @@ static unsigned int meson_hdmi_pll_get_m(struct meson_drm *priv,
+
+ static unsigned int meson_hdmi_pll_get_frac(struct meson_drm *priv,
+ unsigned int m,
+- unsigned int pll_freq)
++ unsigned long long pll_freq)
+ {
+- unsigned int parent_freq = XTAL_FREQ;
++ unsigned long long parent_freq = XTAL_FREQ;
+ unsigned int frac_max = HDMI_FRAC_MAX_GXL;
+ unsigned int frac_m;
+ unsigned int frac;
++ u32 remainder;
+
+ /* The GXBB PLL has a /2 pre-multiplier and a larger FRAC width */
+ if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_GXBB)) {
+@@ -652,11 +656,11 @@ static unsigned int meson_hdmi_pll_get_frac(struct meson_drm *priv,
+ frac_max = HDMI_FRAC_MAX_G12A;
+
+ /* We can have a perfect match !*/
+- if (pll_freq / m == parent_freq &&
+- pll_freq % m == 0)
++ if (div_u64_rem(pll_freq, m, &remainder) == parent_freq &&
++ remainder == 0)
+ return 0;
+
+- frac = div_u64((u64)pll_freq * (u64)frac_max, parent_freq);
++ frac = mul_u64_u64_div_u64(pll_freq, frac_max, parent_freq);
+ frac_m = m * frac_max;
+ if (frac_m > frac)
+ return frac_max;
+@@ -666,7 +670,7 @@ static unsigned int meson_hdmi_pll_get_frac(struct meson_drm *priv,
+ }
+
+ static bool meson_hdmi_pll_validate_params(struct meson_drm *priv,
+- unsigned int m,
++ unsigned long long m,
+ unsigned int frac)
+ {
+ if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_GXBB)) {
+@@ -694,7 +698,7 @@ static bool meson_hdmi_pll_validate_params(struct meson_drm *priv,
+ }
+
+ static bool meson_hdmi_pll_find_params(struct meson_drm *priv,
+- unsigned int freq,
++ unsigned long long freq,
+ unsigned int *m,
+ unsigned int *frac,
+ unsigned int *od)
+@@ -706,7 +710,7 @@ static bool meson_hdmi_pll_find_params(struct meson_drm *priv,
+ continue;
+ *frac = meson_hdmi_pll_get_frac(priv, *m, freq * *od);
+
+- DRM_DEBUG_DRIVER("PLL params for %dkHz: m=%x frac=%x od=%d\n",
++ DRM_DEBUG_DRIVER("PLL params for %lluHz: m=%x frac=%x od=%d\n",
+ freq, *m, *frac, *od);
+
+ if (meson_hdmi_pll_validate_params(priv, *m, *frac))
+@@ -718,7 +722,7 @@ static bool meson_hdmi_pll_find_params(struct meson_drm *priv,
+
+ /* pll_freq is the frequency after the OD dividers */
+ enum drm_mode_status
+-meson_vclk_dmt_supported_freq(struct meson_drm *priv, unsigned int freq)
++meson_vclk_dmt_supported_freq(struct meson_drm *priv, unsigned long long freq)
+ {
+ unsigned int od, m, frac;
+
+@@ -741,7 +745,7 @@ EXPORT_SYMBOL_GPL(meson_vclk_dmt_supported_freq);
+
+ /* pll_freq is the frequency after the OD dividers */
+ static void meson_hdmi_pll_generic_set(struct meson_drm *priv,
+- unsigned int pll_freq)
++ unsigned long long pll_freq)
+ {
+ unsigned int od, m, frac, od1, od2, od3;
+
+@@ -756,7 +760,7 @@ static void meson_hdmi_pll_generic_set(struct meson_drm *priv,
+ od1 = od / od2;
+ }
+
+- DRM_DEBUG_DRIVER("PLL params for %dkHz: m=%x frac=%x od=%d/%d/%d\n",
++ DRM_DEBUG_DRIVER("PLL params for %lluHz: m=%x frac=%x od=%d/%d/%d\n",
+ pll_freq, m, frac, od1, od2, od3);
+
+ meson_hdmi_pll_set_params(priv, m, frac, od1, od2, od3);
+@@ -764,17 +768,18 @@ static void meson_hdmi_pll_generic_set(struct meson_drm *priv,
+ return;
+ }
+
+- DRM_ERROR("Fatal, unable to find parameters for PLL freq %d\n",
++ DRM_ERROR("Fatal, unable to find parameters for PLL freq %lluHz\n",
+ pll_freq);
+ }
+
+ enum drm_mode_status
+-meson_vclk_vic_supported_freq(struct meson_drm *priv, unsigned int phy_freq,
+- unsigned int vclk_freq)
++meson_vclk_vic_supported_freq(struct meson_drm *priv,
++ unsigned long long phy_freq,
++ unsigned long long vclk_freq)
+ {
+ int i;
+
+- DRM_DEBUG_DRIVER("phy_freq = %d vclk_freq = %d\n",
++ DRM_DEBUG_DRIVER("phy_freq = %lluHz vclk_freq = %lluHz\n",
+ phy_freq, vclk_freq);
+
+ /* Check against soc revision/package limits */
+@@ -785,19 +790,19 @@ meson_vclk_vic_supported_freq(struct meson_drm *priv, unsigned int phy_freq,
+ }
+
+ for (i = 0 ; params[i].pixel_freq ; ++i) {
+- DRM_DEBUG_DRIVER("i = %d pixel_freq = %d alt = %d\n",
++ DRM_DEBUG_DRIVER("i = %d pixel_freq = %lluHz alt = %lluHz\n",
+ i, params[i].pixel_freq,
+- FREQ_1000_1001(params[i].pixel_freq));
+- DRM_DEBUG_DRIVER("i = %d phy_freq = %d alt = %d\n",
++ PIXEL_FREQ_1000_1001(params[i].pixel_freq));
++ DRM_DEBUG_DRIVER("i = %d phy_freq = %lluHz alt = %lluHz\n",
+ i, params[i].phy_freq,
+- FREQ_1000_1001(params[i].phy_freq/1000)*1000);
++ PHY_FREQ_1000_1001(params[i].phy_freq));
+ /* Match strict frequency */
+ if (phy_freq == params[i].phy_freq &&
+ vclk_freq == params[i].vclk_freq)
+ return MODE_OK;
+ /* Match 1000/1001 variant */
+- if (phy_freq == (FREQ_1000_1001(params[i].phy_freq/1000)*1000) &&
+- vclk_freq == FREQ_1000_1001(params[i].vclk_freq))
++ if (phy_freq == PHY_FREQ_1000_1001(params[i].phy_freq) &&
++ vclk_freq == PIXEL_FREQ_1000_1001(params[i].vclk_freq))
+ return MODE_OK;
+ }
+
+@@ -805,8 +810,9 @@ meson_vclk_vic_supported_freq(struct meson_drm *priv, unsigned int phy_freq,
+ }
+ EXPORT_SYMBOL_GPL(meson_vclk_vic_supported_freq);
+
+-static void meson_vclk_set(struct meson_drm *priv, unsigned int pll_base_freq,
+- unsigned int od1, unsigned int od2, unsigned int od3,
++static void meson_vclk_set(struct meson_drm *priv,
++ unsigned long long pll_base_freq, unsigned int od1,
++ unsigned int od2, unsigned int od3,
+ unsigned int vid_pll_div, unsigned int vclk_div,
+ unsigned int hdmi_tx_div, unsigned int venc_div,
+ bool hdmi_use_enci, bool vic_alternate_clock)
+@@ -826,15 +832,15 @@ static void meson_vclk_set(struct meson_drm *priv, unsigned int pll_base_freq,
+ meson_hdmi_pll_generic_set(priv, pll_base_freq);
+ } else if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_GXBB)) {
+ switch (pll_base_freq) {
+- case 2970000:
++ case 2970000000:
+ m = 0x3d;
+ frac = vic_alternate_clock ? 0xd02 : 0xe00;
+ break;
+- case 4320000:
++ case 4320000000:
+ m = vic_alternate_clock ? 0x59 : 0x5a;
+ frac = vic_alternate_clock ? 0xe8f : 0;
+ break;
+- case 5940000:
++ case 5940000000:
+ m = 0x7b;
+ frac = vic_alternate_clock ? 0xa05 : 0xc00;
+ break;
+@@ -844,15 +850,15 @@ static void meson_vclk_set(struct meson_drm *priv, unsigned int pll_base_freq,
+ } else if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_GXM) ||
+ meson_vpu_is_compatible(priv, VPU_COMPATIBLE_GXL)) {
+ switch (pll_base_freq) {
+- case 2970000:
++ case 2970000000:
+ m = 0x7b;
+ frac = vic_alternate_clock ? 0x281 : 0x300;
+ break;
+- case 4320000:
++ case 4320000000:
+ m = vic_alternate_clock ? 0xb3 : 0xb4;
+ frac = vic_alternate_clock ? 0x347 : 0;
+ break;
+- case 5940000:
++ case 5940000000:
+ m = 0xf7;
+ frac = vic_alternate_clock ? 0x102 : 0x200;
+ break;
+@@ -861,15 +867,15 @@ static void meson_vclk_set(struct meson_drm *priv, unsigned int pll_base_freq,
+ meson_hdmi_pll_set_params(priv, m, frac, od1, od2, od3);
+ } else if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_G12A)) {
+ switch (pll_base_freq) {
+- case 2970000:
++ case 2970000000:
+ m = 0x7b;
+ frac = vic_alternate_clock ? 0x140b4 : 0x18000;
+ break;
+- case 4320000:
++ case 4320000000:
+ m = vic_alternate_clock ? 0xb3 : 0xb4;
+ frac = vic_alternate_clock ? 0x1a3ee : 0;
+ break;
+- case 5940000:
++ case 5940000000:
+ m = 0xf7;
+ frac = vic_alternate_clock ? 0x8148 : 0x10000;
+ break;
+@@ -1025,14 +1031,14 @@ static void meson_vclk_set(struct meson_drm *priv, unsigned int pll_base_freq,
+ }
+
+ void meson_vclk_setup(struct meson_drm *priv, unsigned int target,
+- unsigned int phy_freq, unsigned int vclk_freq,
+- unsigned int venc_freq, unsigned int dac_freq,
++ unsigned long long phy_freq, unsigned long long vclk_freq,
++ unsigned long long venc_freq, unsigned long long dac_freq,
+ bool hdmi_use_enci)
+ {
+ bool vic_alternate_clock = false;
+- unsigned int freq;
+- unsigned int hdmi_tx_div;
+- unsigned int venc_div;
++ unsigned long long freq;
++ unsigned long long hdmi_tx_div;
++ unsigned long long venc_div;
+
+ if (target == MESON_VCLK_TARGET_CVBS) {
+ meson_venci_cvbs_clock_config(priv);
+@@ -1052,27 +1058,27 @@ void meson_vclk_setup(struct meson_drm *priv, unsigned int target,
+ return;
+ }
+
+- hdmi_tx_div = vclk_freq / dac_freq;
++ hdmi_tx_div = DIV_ROUND_DOWN_ULL(vclk_freq, dac_freq);
+
+ if (hdmi_tx_div == 0) {
+- pr_err("Fatal Error, invalid HDMI-TX freq %d\n",
++ pr_err("Fatal Error, invalid HDMI-TX freq %lluHz\n",
+ dac_freq);
+ return;
+ }
+
+- venc_div = vclk_freq / venc_freq;
++ venc_div = DIV_ROUND_DOWN_ULL(vclk_freq, venc_freq);
+
+ if (venc_div == 0) {
+- pr_err("Fatal Error, invalid HDMI venc freq %d\n",
++ pr_err("Fatal Error, invalid HDMI venc freq %lluHz\n",
+ venc_freq);
+ return;
+ }
+
+ for (freq = 0 ; params[freq].pixel_freq ; ++freq) {
+ if ((phy_freq == params[freq].phy_freq ||
+- phy_freq == FREQ_1000_1001(params[freq].phy_freq/1000)*1000) &&
++ phy_freq == PHY_FREQ_1000_1001(params[freq].phy_freq)) &&
+ (vclk_freq == params[freq].vclk_freq ||
+- vclk_freq == FREQ_1000_1001(params[freq].vclk_freq))) {
++ vclk_freq == PIXEL_FREQ_1000_1001(params[freq].vclk_freq))) {
+ if (vclk_freq != params[freq].vclk_freq)
+ vic_alternate_clock = true;
+ else
+@@ -1098,7 +1104,8 @@ void meson_vclk_setup(struct meson_drm *priv, unsigned int target,
+ }
+
+ if (!params[freq].pixel_freq) {
+- pr_err("Fatal Error, invalid HDMI vclk freq %d\n", vclk_freq);
++ pr_err("Fatal Error, invalid HDMI vclk freq %lluHz\n",
++ vclk_freq);
+ return;
+ }
+
+diff --git a/drivers/gpu/drm/meson/meson_vclk.h b/drivers/gpu/drm/meson/meson_vclk.h
+index 60617aaf18dd1c..7ac55744e57494 100644
+--- a/drivers/gpu/drm/meson/meson_vclk.h
++++ b/drivers/gpu/drm/meson/meson_vclk.h
+@@ -20,17 +20,18 @@ enum {
+ };
+
+ /* 27MHz is the CVBS Pixel Clock */
+-#define MESON_VCLK_CVBS 27000
++#define MESON_VCLK_CVBS (27 * 1000 * 1000)
+
+ enum drm_mode_status
+-meson_vclk_dmt_supported_freq(struct meson_drm *priv, unsigned int freq);
++meson_vclk_dmt_supported_freq(struct meson_drm *priv, unsigned long long freq);
+ enum drm_mode_status
+-meson_vclk_vic_supported_freq(struct meson_drm *priv, unsigned int phy_freq,
+- unsigned int vclk_freq);
++meson_vclk_vic_supported_freq(struct meson_drm *priv,
++ unsigned long long phy_freq,
++ unsigned long long vclk_freq);
+
+ void meson_vclk_setup(struct meson_drm *priv, unsigned int target,
+- unsigned int phy_freq, unsigned int vclk_freq,
+- unsigned int venc_freq, unsigned int dac_freq,
++ unsigned long long phy_freq, unsigned long long vclk_freq,
++ unsigned long long venc_freq, unsigned long long dac_freq,
+ bool hdmi_use_enci);
+
+ #endif /* __MESON_VCLK_H */
+diff --git a/drivers/gpu/drm/panel/panel-jadard-jd9365da-h3.c b/drivers/gpu/drm/panel/panel-jadard-jd9365da-h3.c
+index 7d68a8acfe2ea4..eb0f8373258c34 100644
+--- a/drivers/gpu/drm/panel/panel-jadard-jd9365da-h3.c
++++ b/drivers/gpu/drm/panel/panel-jadard-jd9365da-h3.c
+@@ -129,11 +129,11 @@ static int jadard_unprepare(struct drm_panel *panel)
+ {
+ struct jadard *jadard = panel_to_jadard(panel);
+
+- gpiod_set_value(jadard->reset, 1);
++ gpiod_set_value(jadard->reset, 0);
+ msleep(120);
+
+ if (jadard->desc->reset_before_power_off_vcioo) {
+- gpiod_set_value(jadard->reset, 0);
++ gpiod_set_value(jadard->reset, 1);
+
+ usleep_range(1000, 2000);
+ }
+diff --git a/drivers/gpu/drm/xe/regs/xe_gt_regs.h b/drivers/gpu/drm/xe/regs/xe_gt_regs.h
+index 162f18e975dae4..d0ea8a55fd9c22 100644
+--- a/drivers/gpu/drm/xe/regs/xe_gt_regs.h
++++ b/drivers/gpu/drm/xe/regs/xe_gt_regs.h
+@@ -475,6 +475,7 @@
+ #define TDL_TSL_CHICKEN XE_REG_MCR(0xe4c4, XE_REG_OPTION_MASKED)
+ #define STK_ID_RESTRICT REG_BIT(12)
+ #define SLM_WMTP_RESTORE REG_BIT(11)
++#define RES_CHK_SPR_DIS REG_BIT(6)
+
+ #define ROW_CHICKEN XE_REG_MCR(0xe4f0, XE_REG_OPTION_MASKED)
+ #define UGM_BACKUP_MODE REG_BIT(13)
+@@ -500,6 +501,9 @@
+ #define LSC_L1_FLUSH_CTL_3D_DATAPORT_FLUSH_EVENTS_MASK REG_GENMASK(13, 11)
+ #define DIS_ATOMIC_CHAINING_TYPED_WRITES REG_BIT(3)
+
++#define TDL_CHICKEN XE_REG_MCR(0xe5f4, XE_REG_OPTION_MASKED)
++#define QID_WAIT_FOR_THREAD_NOT_RUN_DISABLE REG_BIT(12)
++
+ #define LSC_CHICKEN_BIT_0 XE_REG_MCR(0xe7c8)
+ #define DISABLE_D8_D16_COASLESCE REG_BIT(30)
+ #define WR_REQ_CHAINING_DIS REG_BIT(26)
+diff --git a/drivers/gpu/drm/xe/tests/xe_rtp_test.c b/drivers/gpu/drm/xe/tests/xe_rtp_test.c
+index 36a3b5420fef6a..b0254b014fe450 100644
+--- a/drivers/gpu/drm/xe/tests/xe_rtp_test.c
++++ b/drivers/gpu/drm/xe/tests/xe_rtp_test.c
+@@ -320,7 +320,7 @@ static void xe_rtp_process_to_sr_tests(struct kunit *test)
+ count_rtp_entries++;
+
+ xe_rtp_process_ctx_enable_active_tracking(&ctx, &active, count_rtp_entries);
+- xe_rtp_process_to_sr(&ctx, param->entries, reg_sr);
++ xe_rtp_process_to_sr(&ctx, param->entries, count_rtp_entries, reg_sr);
+
+ xa_for_each(&reg_sr->xa, idx, sre) {
+ if (idx == param->expected_reg.addr)
+diff --git a/drivers/gpu/drm/xe/xe_gt.c b/drivers/gpu/drm/xe/xe_gt.c
+index 8a20e6744836cb..94eed1315b0f1b 100644
+--- a/drivers/gpu/drm/xe/xe_gt.c
++++ b/drivers/gpu/drm/xe/xe_gt.c
+@@ -381,6 +381,10 @@ int xe_gt_init_early(struct xe_gt *gt)
+ if (err)
+ return err;
+
++ err = xe_tuning_init(gt);
++ if (err)
++ return err;
++
+ xe_wa_process_oob(gt);
+
+ xe_force_wake_init_gt(gt, gt_to_fw(gt));
+diff --git a/drivers/gpu/drm/xe/xe_gt_debugfs.c b/drivers/gpu/drm/xe/xe_gt_debugfs.c
+index e7792858b1e466..2d63a69cbfa38e 100644
+--- a/drivers/gpu/drm/xe/xe_gt_debugfs.c
++++ b/drivers/gpu/drm/xe/xe_gt_debugfs.c
+@@ -30,6 +30,7 @@
+ #include "xe_reg_sr.h"
+ #include "xe_reg_whitelist.h"
+ #include "xe_sriov.h"
++#include "xe_tuning.h"
+ #include "xe_uc_debugfs.h"
+ #include "xe_wa.h"
+
+@@ -217,6 +218,15 @@ static int workarounds(struct xe_gt *gt, struct drm_printer *p)
+ return 0;
+ }
+
++static int tunings(struct xe_gt *gt, struct drm_printer *p)
++{
++ xe_pm_runtime_get(gt_to_xe(gt));
++ xe_tuning_dump(gt, p);
++ xe_pm_runtime_put(gt_to_xe(gt));
++
++ return 0;
++}
++
+ static int pat(struct xe_gt *gt, struct drm_printer *p)
+ {
+ xe_pm_runtime_get(gt_to_xe(gt));
+@@ -300,6 +310,7 @@ static const struct drm_info_list debugfs_list[] = {
+ {"powergate_info", .show = xe_gt_debugfs_simple_show, .data = powergate_info},
+ {"register-save-restore", .show = xe_gt_debugfs_simple_show, .data = register_save_restore},
+ {"workarounds", .show = xe_gt_debugfs_simple_show, .data = workarounds},
++ {"tunings", .show = xe_gt_debugfs_simple_show, .data = tunings},
+ {"pat", .show = xe_gt_debugfs_simple_show, .data = pat},
+ {"mocs", .show = xe_gt_debugfs_simple_show, .data = mocs},
+ {"default_lrc_rcs", .show = xe_gt_debugfs_simple_show, .data = rcs_default_lrc},
+diff --git a/drivers/gpu/drm/xe/xe_gt_types.h b/drivers/gpu/drm/xe/xe_gt_types.h
+index 6e66bf0e8b3f70..dd2969a1846dde 100644
+--- a/drivers/gpu/drm/xe/xe_gt_types.h
++++ b/drivers/gpu/drm/xe/xe_gt_types.h
+@@ -413,6 +413,16 @@ struct xe_gt {
+ bool oob_initialized;
+ } wa_active;
+
++ /** @tuning_active: keep track of active tunings */
++ struct {
++ /** @tuning_active.gt: bitmap with active GT tunings */
++ unsigned long *gt;
++ /** @tuning_active.engine: bitmap with active engine tunings */
++ unsigned long *engine;
++ /** @tuning_active.lrc: bitmap with active LRC tunings */
++ unsigned long *lrc;
++ } tuning_active;
++
+ /** @user_engines: engines present in GT and available to userspace */
+ struct {
+ /**
+diff --git a/drivers/gpu/drm/xe/xe_hw_engine.c b/drivers/gpu/drm/xe/xe_hw_engine.c
+index fc447751fe786c..b26b6fb5cdb5d4 100644
+--- a/drivers/gpu/drm/xe/xe_hw_engine.c
++++ b/drivers/gpu/drm/xe/xe_hw_engine.c
+@@ -386,12 +386,6 @@ xe_hw_engine_setup_default_lrc_state(struct xe_hw_engine *hwe)
+ blit_cctl_val,
+ XE_RTP_ACTION_FLAG(ENGINE_BASE)))
+ },
+- /* Use Fixed slice CCS mode */
+- { XE_RTP_NAME("RCU_MODE_FIXED_SLICE_CCS_MODE"),
+- XE_RTP_RULES(FUNC(xe_hw_engine_match_fixed_cslice_mode)),
+- XE_RTP_ACTIONS(FIELD_SET(RCU_MODE, RCU_MODE_FIXED_SLICE_CCS_MODE,
+- RCU_MODE_FIXED_SLICE_CCS_MODE))
+- },
+ /* Disable WMTP if HW doesn't support it */
+ { XE_RTP_NAME("DISABLE_WMTP_ON_UNSUPPORTED_HW"),
+ XE_RTP_RULES(FUNC(xe_rtp_cfeg_wmtp_disabled)),
+@@ -400,10 +394,9 @@ xe_hw_engine_setup_default_lrc_state(struct xe_hw_engine *hwe)
+ PREEMPT_GPGPU_THREAD_GROUP_LEVEL)),
+ XE_RTP_ENTRY_FLAG(FOREACH_ENGINE)
+ },
+- {}
+ };
+
+- xe_rtp_process_to_sr(&ctx, lrc_setup, &hwe->reg_lrc);
++ xe_rtp_process_to_sr(&ctx, lrc_setup, ARRAY_SIZE(lrc_setup), &hwe->reg_lrc);
+ }
+
+ static void
+@@ -459,10 +452,15 @@ hw_engine_setup_default_state(struct xe_hw_engine *hwe)
+ XE_RTP_ACTIONS(SET(CSFE_CHICKEN1(0), CS_PRIORITY_MEM_READ,
+ XE_RTP_ACTION_FLAG(ENGINE_BASE)))
+ },
+- {}
++ /* Use Fixed slice CCS mode */
++ { XE_RTP_NAME("RCU_MODE_FIXED_SLICE_CCS_MODE"),
++ XE_RTP_RULES(FUNC(xe_hw_engine_match_fixed_cslice_mode)),
++ XE_RTP_ACTIONS(FIELD_SET(RCU_MODE, RCU_MODE_FIXED_SLICE_CCS_MODE,
++ RCU_MODE_FIXED_SLICE_CCS_MODE))
++ },
+ };
+
+- xe_rtp_process_to_sr(&ctx, engine_entries, &hwe->reg_sr);
++ xe_rtp_process_to_sr(&ctx, engine_entries, ARRAY_SIZE(engine_entries), &hwe->reg_sr);
+ }
+
+ static const struct engine_info *find_engine_info(enum xe_engine_class class, int instance)
+diff --git a/drivers/gpu/drm/xe/xe_reg_whitelist.c b/drivers/gpu/drm/xe/xe_reg_whitelist.c
+index edab5d4e3ba5e7..23f6c81d99946f 100644
+--- a/drivers/gpu/drm/xe/xe_reg_whitelist.c
++++ b/drivers/gpu/drm/xe/xe_reg_whitelist.c
+@@ -88,7 +88,6 @@ static const struct xe_rtp_entry_sr register_whitelist[] = {
+ RING_FORCE_TO_NONPRIV_ACCESS_RD |
+ RING_FORCE_TO_NONPRIV_RANGE_4))
+ },
+- {}
+ };
+
+ static void whitelist_apply_to_hwe(struct xe_hw_engine *hwe)
+@@ -137,7 +136,8 @@ void xe_reg_whitelist_process_engine(struct xe_hw_engine *hwe)
+ {
+ struct xe_rtp_process_ctx ctx = XE_RTP_PROCESS_CTX_INITIALIZER(hwe);
+
+- xe_rtp_process_to_sr(&ctx, register_whitelist, &hwe->reg_whitelist);
++ xe_rtp_process_to_sr(&ctx, register_whitelist, ARRAY_SIZE(register_whitelist),
++ &hwe->reg_whitelist);
+ whitelist_apply_to_hwe(hwe);
+ }
+
+diff --git a/drivers/gpu/drm/xe/xe_rtp.c b/drivers/gpu/drm/xe/xe_rtp.c
+index 7a1c78fdfc92ee..13bb62d3e615e8 100644
+--- a/drivers/gpu/drm/xe/xe_rtp.c
++++ b/drivers/gpu/drm/xe/xe_rtp.c
+@@ -237,6 +237,7 @@ static void rtp_mark_active(struct xe_device *xe,
+ * the save-restore argument.
+ * @ctx: The context for processing the table, with one of device, gt or hwe
+ * @entries: Table with RTP definitions
++ * @n_entries: Number of entries to process, usually ARRAY_SIZE(entries)
+ * @sr: Save-restore struct where matching rules execute the action. This can be
+ * viewed as the "coalesced view" of multiple the tables. The bits for each
+ * register set are expected not to collide with previously added entries
+@@ -247,6 +248,7 @@ static void rtp_mark_active(struct xe_device *xe,
+ */
+ void xe_rtp_process_to_sr(struct xe_rtp_process_ctx *ctx,
+ const struct xe_rtp_entry_sr *entries,
++ size_t n_entries,
+ struct xe_reg_sr *sr)
+ {
+ const struct xe_rtp_entry_sr *entry;
+@@ -259,7 +261,9 @@ void xe_rtp_process_to_sr(struct xe_rtp_process_ctx *ctx,
+ if (IS_SRIOV_VF(xe))
+ return;
+
+- for (entry = entries; entry && entry->name; entry++) {
++ xe_assert(xe, entries);
++
++ for (entry = entries; entry - entries < n_entries; entry++) {
+ bool match = false;
+
+ if (entry->flags & XE_RTP_ENTRY_FLAG_FOREACH_ENGINE) {
+diff --git a/drivers/gpu/drm/xe/xe_rtp.h b/drivers/gpu/drm/xe/xe_rtp.h
+index 38b9f13bba5e54..4fe736a11c42b9 100644
+--- a/drivers/gpu/drm/xe/xe_rtp.h
++++ b/drivers/gpu/drm/xe/xe_rtp.h
+@@ -430,7 +430,7 @@ void xe_rtp_process_ctx_enable_active_tracking(struct xe_rtp_process_ctx *ctx,
+
+ void xe_rtp_process_to_sr(struct xe_rtp_process_ctx *ctx,
+ const struct xe_rtp_entry_sr *entries,
+- struct xe_reg_sr *sr);
++ size_t n_entries, struct xe_reg_sr *sr);
+
+ void xe_rtp_process(struct xe_rtp_process_ctx *ctx,
+ const struct xe_rtp_entry *entries);
+diff --git a/drivers/gpu/drm/xe/xe_tuning.c b/drivers/gpu/drm/xe/xe_tuning.c
+index 3c78f3d7155910..a61a2917590fee 100644
+--- a/drivers/gpu/drm/xe/xe_tuning.c
++++ b/drivers/gpu/drm/xe/xe_tuning.c
+@@ -7,6 +7,8 @@
+
+ #include <kunit/visibility.h>
+
++#include <drm/drm_managed.h>
++
+ #include "regs/xe_gt_regs.h"
+ #include "xe_gt_types.h"
+ #include "xe_platform_types.h"
+@@ -83,8 +85,6 @@ static const struct xe_rtp_entry_sr gt_tunings[] = {
+ XE_RTP_RULES(MEDIA_VERSION(2000)),
+ XE_RTP_ACTIONS(SET(XE2LPM_SCRATCH3_LBCF, RWFLUSHALLEN))
+ },
+-
+- {}
+ };
+
+ static const struct xe_rtp_entry_sr engine_tunings[] = {
+@@ -93,7 +93,6 @@ static const struct xe_rtp_entry_sr engine_tunings[] = {
+ ENGINE_CLASS(RENDER)),
+ XE_RTP_ACTIONS(SET(SAMPLER_MODE, INDIRECT_STATE_BASE_ADDR_OVERRIDE))
+ },
+- {}
+ };
+
+ static const struct xe_rtp_entry_sr lrc_tunings[] = {
+@@ -131,15 +130,47 @@ static const struct xe_rtp_entry_sr lrc_tunings[] = {
+ XE_RTP_ACTIONS(FIELD_SET(FF_MODE, VS_HIT_MAX_VALUE_MASK,
+ REG_FIELD_PREP(VS_HIT_MAX_VALUE_MASK, 0x3f)))
+ },
+-
+- {}
+ };
+
++/**
++ * xe_tuning_init - initialize gt with tunings bookkeeping
++ * @gt: GT instance to initialize
++ *
++ * Returns 0 for success, negative error code otherwise.
++ */
++int xe_tuning_init(struct xe_gt *gt)
++{
++ struct xe_device *xe = gt_to_xe(gt);
++ size_t n_lrc, n_engine, n_gt, total;
++ unsigned long *p;
++
++ n_gt = BITS_TO_LONGS(ARRAY_SIZE(gt_tunings));
++ n_engine = BITS_TO_LONGS(ARRAY_SIZE(engine_tunings));
++ n_lrc = BITS_TO_LONGS(ARRAY_SIZE(lrc_tunings));
++ total = n_gt + n_engine + n_lrc;
++
++ p = drmm_kzalloc(&xe->drm, sizeof(*p) * total, GFP_KERNEL);
++ if (!p)
++ return -ENOMEM;
++
++ gt->tuning_active.gt = p;
++ p += n_gt;
++ gt->tuning_active.engine = p;
++ p += n_engine;
++ gt->tuning_active.lrc = p;
++
++ return 0;
++}
++ALLOW_ERROR_INJECTION(xe_tuning_init, ERRNO); /* See xe_pci_probe() */
++
+ void xe_tuning_process_gt(struct xe_gt *gt)
+ {
+ struct xe_rtp_process_ctx ctx = XE_RTP_PROCESS_CTX_INITIALIZER(gt);
+
+- xe_rtp_process_to_sr(&ctx, gt_tunings, &gt->reg_sr);
++ xe_rtp_process_ctx_enable_active_tracking(&ctx,
++ gt->tuning_active.gt,
++ ARRAY_SIZE(gt_tunings));
++ xe_rtp_process_to_sr(&ctx, gt_tunings, ARRAY_SIZE(gt_tunings), &gt->reg_sr);
+ }
+ EXPORT_SYMBOL_IF_KUNIT(xe_tuning_process_gt);
+
+@@ -147,7 +178,11 @@ void xe_tuning_process_engine(struct xe_hw_engine *hwe)
+ {
+ struct xe_rtp_process_ctx ctx = XE_RTP_PROCESS_CTX_INITIALIZER(hwe);
+
+- xe_rtp_process_to_sr(&ctx, engine_tunings, &hwe->reg_sr);
++ xe_rtp_process_ctx_enable_active_tracking(&ctx,
++ hwe->gt->tuning_active.engine,
++ ARRAY_SIZE(engine_tunings));
++ xe_rtp_process_to_sr(&ctx, engine_tunings, ARRAY_SIZE(engine_tunings),
++ &hwe->reg_sr);
+ }
+ EXPORT_SYMBOL_IF_KUNIT(xe_tuning_process_engine);
+
+@@ -163,5 +198,25 @@ void xe_tuning_process_lrc(struct xe_hw_engine *hwe)
+ {
+ struct xe_rtp_process_ctx ctx = XE_RTP_PROCESS_CTX_INITIALIZER(hwe);
+
+- xe_rtp_process_to_sr(&ctx, lrc_tunings, &hwe->reg_lrc);
++ xe_rtp_process_ctx_enable_active_tracking(&ctx,
++ hwe->gt->tuning_active.lrc,
++ ARRAY_SIZE(lrc_tunings));
++ xe_rtp_process_to_sr(&ctx, lrc_tunings, ARRAY_SIZE(lrc_tunings), &hwe->reg_lrc);
++}
++
++void xe_tuning_dump(struct xe_gt *gt, struct drm_printer *p)
++{
++ size_t idx;
++
++ drm_printf(p, "GT Tunings\n");
++ for_each_set_bit(idx, gt->tuning_active.gt, ARRAY_SIZE(gt_tunings))
++ drm_printf_indent(p, 1, "%s\n", gt_tunings[idx].name);
++
++ drm_printf(p, "\nEngine Tunings\n");
++ for_each_set_bit(idx, gt->tuning_active.engine, ARRAY_SIZE(engine_tunings))
++ drm_printf_indent(p, 1, "%s\n", engine_tunings[idx].name);
++
++ drm_printf(p, "\nLRC Tunings\n");
++ for_each_set_bit(idx, gt->tuning_active.lrc, ARRAY_SIZE(lrc_tunings))
++ drm_printf_indent(p, 1, "%s\n", lrc_tunings[idx].name);
+ }
+diff --git a/drivers/gpu/drm/xe/xe_tuning.h b/drivers/gpu/drm/xe/xe_tuning.h
+index 4f9c3ac3b5162e..dd0d3ccc9c654c 100644
+--- a/drivers/gpu/drm/xe/xe_tuning.h
++++ b/drivers/gpu/drm/xe/xe_tuning.h
+@@ -6,11 +6,14 @@
+ #ifndef _XE_TUNING_
+ #define _XE_TUNING_
+
++struct drm_printer;
+ struct xe_gt;
+ struct xe_hw_engine;
+
++int xe_tuning_init(struct xe_gt *gt);
+ void xe_tuning_process_gt(struct xe_gt *gt);
+ void xe_tuning_process_engine(struct xe_hw_engine *hwe);
+ void xe_tuning_process_lrc(struct xe_hw_engine *hwe);
++void xe_tuning_dump(struct xe_gt *gt, struct drm_printer *p);
+
+ #endif
+diff --git a/drivers/gpu/drm/xe/xe_wa.c b/drivers/gpu/drm/xe/xe_wa.c
+index 2553accf8c5176..65bfb2f894d00e 100644
+--- a/drivers/gpu/drm/xe/xe_wa.c
++++ b/drivers/gpu/drm/xe/xe_wa.c
+@@ -279,8 +279,6 @@ static const struct xe_rtp_entry_sr gt_was[] = {
+ XE_RTP_ACTIONS(SET(VDBOX_CGCTL3F10(0), RAMDFTUNIT_CLKGATE_DIS)),
+ XE_RTP_ENTRY_FLAG(FOREACH_ENGINE),
+ },
+-
+- {}
+ };
+
+ static const struct xe_rtp_entry_sr engine_was[] = {
+@@ -613,8 +611,16 @@ static const struct xe_rtp_entry_sr engine_was[] = {
+ XE_RTP_ACTIONS(FIELD_SET(SAMPLER_MODE, SMP_WAIT_FETCH_MERGING_COUNTER,
+ SMP_FORCE_128B_OVERFETCH))
+ },
+-
+- {}
++ { XE_RTP_NAME("14023061436"),
++ XE_RTP_RULES(GRAPHICS_VERSION_RANGE(3000, 3001),
++ FUNC(xe_rtp_match_first_render_or_compute)),
++ XE_RTP_ACTIONS(SET(TDL_CHICKEN, QID_WAIT_FOR_THREAD_NOT_RUN_DISABLE))
++ },
++ { XE_RTP_NAME("13012615864"),
++ XE_RTP_RULES(GRAPHICS_VERSION_RANGE(3000, 3001),
++ FUNC(xe_rtp_match_first_render_or_compute)),
++ XE_RTP_ACTIONS(SET(TDL_TSL_CHICKEN, RES_CHK_SPR_DIS))
++ },
+ };
+
+ static const struct xe_rtp_entry_sr lrc_was[] = {
+@@ -807,8 +813,6 @@ static const struct xe_rtp_entry_sr lrc_was[] = {
+ DIS_PARTIAL_AUTOSTRIP |
+ DIS_AUTOSTRIP))
+ },
+-
+- {}
+ };
+
+ static __maybe_unused const struct xe_rtp_entry oob_was[] = {
+@@ -850,7 +854,7 @@ void xe_wa_process_gt(struct xe_gt *gt)
+
+ xe_rtp_process_ctx_enable_active_tracking(&ctx, gt->wa_active.gt,
+ ARRAY_SIZE(gt_was));
+- xe_rtp_process_to_sr(&ctx, gt_was, &gt->reg_sr);
++ xe_rtp_process_to_sr(&ctx, gt_was, ARRAY_SIZE(gt_was), &gt->reg_sr);
+ }
+ EXPORT_SYMBOL_IF_KUNIT(xe_wa_process_gt);
+
+@@ -868,7 +872,7 @@ void xe_wa_process_engine(struct xe_hw_engine *hwe)
+
+ xe_rtp_process_ctx_enable_active_tracking(&ctx, hwe->gt->wa_active.engine,
+ ARRAY_SIZE(engine_was));
+- xe_rtp_process_to_sr(&ctx, engine_was, &hwe->reg_sr);
++ xe_rtp_process_to_sr(&ctx, engine_was, ARRAY_SIZE(engine_was), &hwe->reg_sr);
+ }
+
+ /**
+@@ -885,7 +889,7 @@ void xe_wa_process_lrc(struct xe_hw_engine *hwe)
+
+ xe_rtp_process_ctx_enable_active_tracking(&ctx, hwe->gt->wa_active.lrc,
+ ARRAY_SIZE(lrc_was));
+- xe_rtp_process_to_sr(&ctx, lrc_was, &hwe->reg_lrc);
++ xe_rtp_process_to_sr(&ctx, lrc_was, ARRAY_SIZE(lrc_was), &hwe->reg_lrc);
+ }
+
+ /**
+diff --git a/drivers/gpu/drm/xe/xe_wa_oob.rules b/drivers/gpu/drm/xe/xe_wa_oob.rules
+index 40438c3d9b7238..32d3853b08ec86 100644
+--- a/drivers/gpu/drm/xe/xe_wa_oob.rules
++++ b/drivers/gpu/drm/xe/xe_wa_oob.rules
+@@ -30,8 +30,10 @@
+ 13011645652 GRAPHICS_VERSION(2004)
+ 14022293748 GRAPHICS_VERSION(2001)
+ GRAPHICS_VERSION(2004)
++ GRAPHICS_VERSION_RANGE(3000, 3001)
+ 22019794406 GRAPHICS_VERSION(2001)
+ GRAPHICS_VERSION(2004)
++ GRAPHICS_VERSION_RANGE(3000, 3001)
+ 22019338487 MEDIA_VERSION(2000)
+ GRAPHICS_VERSION(2001)
+ MEDIA_VERSION(3000), MEDIA_STEP(A0, B0), FUNC(xe_rtp_match_not_sriov_vf)
+diff --git a/drivers/i3c/master/svc-i3c-master.c b/drivers/i3c/master/svc-i3c-master.c
+index ed7b9d7f688cc6..0fc03bb5d0a6ee 100644
+--- a/drivers/i3c/master/svc-i3c-master.c
++++ b/drivers/i3c/master/svc-i3c-master.c
+@@ -158,6 +158,10 @@ struct svc_i3c_regs_save {
+ u32 mdynaddr;
+ };
+
++struct svc_i3c_drvdata {
++ u32 quirks;
++};
++
+ /**
+ * struct svc_i3c_master - Silvaco I3C Master structure
+ * @base: I3C master controller
+@@ -183,6 +187,7 @@ struct svc_i3c_regs_save {
+ * @ibi.tbq_slot: To be queued IBI slot
+ * @ibi.lock: IBI lock
+ * @lock: Transfer lock, protect between IBI work thread and callbacks from master
++ * @drvdata: Driver data
+ * @enabled_events: Bit masks for enable events (IBI, HotJoin).
+ * @mctrl_config: Configuration value in SVC_I3C_MCTRL for setting speed back.
+ */
+@@ -214,6 +219,7 @@ struct svc_i3c_master {
+ spinlock_t lock;
+ } ibi;
+ struct mutex lock;
++ const struct svc_i3c_drvdata *drvdata;
+ u32 enabled_events;
+ u32 mctrl_config;
+ };
+@@ -1817,6 +1823,10 @@ static int svc_i3c_master_probe(struct platform_device *pdev)
+ if (!master)
+ return -ENOMEM;
+
++ master->drvdata = of_device_get_match_data(dev);
++ if (!master->drvdata)
++ return -EINVAL;
++
+ master->regs = devm_platform_ioremap_resource(pdev, 0);
+ if (IS_ERR(master->regs))
+ return PTR_ERR(master->regs);
+@@ -1958,8 +1968,13 @@ static const struct dev_pm_ops svc_i3c_pm_ops = {
+ svc_i3c_runtime_resume, NULL)
+ };
+
++static const struct svc_i3c_drvdata npcm845_drvdata = {};
++
++static const struct svc_i3c_drvdata svc_default_drvdata = {};
++
+ static const struct of_device_id svc_i3c_master_of_match_tbl[] = {
+- { .compatible = "silvaco,i3c-master-v1"},
++ { .compatible = "nuvoton,npcm845-i3c", .data = &npcm845_drvdata },
++ { .compatible = "silvaco,i3c-master-v1", .data = &svc_default_drvdata },
+ { /* sentinel */ },
+ };
+ MODULE_DEVICE_TABLE(of, svc_i3c_master_of_match_tbl);
+diff --git a/drivers/iio/adc/ad4695.c b/drivers/iio/adc/ad4695.c
+index b79d135a54718a..22fdc454b0ceab 100644
+--- a/drivers/iio/adc/ad4695.c
++++ b/drivers/iio/adc/ad4695.c
+@@ -92,6 +92,8 @@
+ #define AD4695_T_REFBUF_MS 100
+ #define AD4695_T_REGCONFIG_NS 20
+ #define AD4695_T_SCK_CNV_DELAY_NS 80
++#define AD4695_T_CNVL_NS 80
++#define AD4695_T_CNVH_NS 10
+ #define AD4695_REG_ACCESS_SCLK_HZ (10 * MEGA)
+
+ /* Max number of voltage input channels. */
+@@ -364,11 +366,31 @@ static int ad4695_enter_advanced_sequencer_mode(struct ad4695_state *st, u32 n)
+ */
+ static int ad4695_exit_conversion_mode(struct ad4695_state *st)
+ {
+- struct spi_transfer xfer = {
+- .tx_buf = &st->cnv_cmd2,
+- .len = 1,
+- .delay.value = AD4695_T_REGCONFIG_NS,
+- .delay.unit = SPI_DELAY_UNIT_NSECS,
++ /*
++ * An extra transfer is needed to trigger a conversion here so
++ * that we can be 100% sure the command will be processed by the
++ * ADC, rather than relying on it to be in the correct state
++ * when this function is called (this chip has a quirk where the
++ * command only works when reading a conversion, and if the
++ * previous conversion was already read then it won't work). The
++ * actual conversion command is then run at the slower
++ * AD4695_REG_ACCESS_SCLK_HZ speed to guarantee this works.
++ */
++ struct spi_transfer xfers[] = {
++ {
++ .delay.value = AD4695_T_CNVL_NS,
++ .delay.unit = SPI_DELAY_UNIT_NSECS,
++ .cs_change = 1,
++ .cs_change_delay.value = AD4695_T_CNVH_NS,
++ .cs_change_delay.unit = SPI_DELAY_UNIT_NSECS,
++ },
++ {
++ .speed_hz = AD4695_REG_ACCESS_SCLK_HZ,
++ .tx_buf = &st->cnv_cmd2,
++ .len = 1,
++ .delay.value = AD4695_T_REGCONFIG_NS,
++ .delay.unit = SPI_DELAY_UNIT_NSECS,
++ },
+ };
+
+ /*
+@@ -377,7 +399,7 @@ static int ad4695_exit_conversion_mode(struct ad4695_state *st)
+ */
+ st->cnv_cmd2 = AD4695_CMD_EXIT_CNV_MODE << 3;
+
+- return spi_sync_transfer(st->spi, &xfer, 1);
++ return spi_sync_transfer(st->spi, xfers, ARRAY_SIZE(xfers));
+ }
+
+ static int ad4695_set_ref_voltage(struct ad4695_state *st, int vref_mv)
+diff --git a/drivers/iio/adc/ad7768-1.c b/drivers/iio/adc/ad7768-1.c
+index 6f8816483f1a02..157a0df97f971b 100644
+--- a/drivers/iio/adc/ad7768-1.c
++++ b/drivers/iio/adc/ad7768-1.c
+@@ -142,7 +142,7 @@ static const struct iio_chan_spec ad7768_channels[] = {
+ .channel = 0,
+ .scan_index = 0,
+ .scan_type = {
+- .sign = 'u',
++ .sign = 's',
+ .realbits = 24,
+ .storagebits = 32,
+ .shift = 8,
+@@ -370,12 +370,11 @@ static int ad7768_read_raw(struct iio_dev *indio_dev,
+ return ret;
+
+ ret = ad7768_scan_direct(indio_dev);
+- if (ret >= 0)
+- *val = ret;
+
+ iio_device_release_direct_mode(indio_dev);
+ if (ret < 0)
+ return ret;
++ *val = sign_extend32(ret, chan->scan_type.realbits - 1);
+
+ return IIO_VAL_INT;
+
+diff --git a/drivers/infiniband/hw/qib/qib_fs.c b/drivers/infiniband/hw/qib/qib_fs.c
+index b27791029fa934..b9f4a2937c3acc 100644
+--- a/drivers/infiniband/hw/qib/qib_fs.c
++++ b/drivers/infiniband/hw/qib/qib_fs.c
+@@ -55,6 +55,7 @@ static int qibfs_mknod(struct inode *dir, struct dentry *dentry,
+ struct inode *inode = new_inode(dir->i_sb);
+
+ if (!inode) {
++ dput(dentry);
+ error = -EPERM;
+ goto bail;
+ }
+diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
+index cd5116d8c3b283..b3a01b7757ee16 100644
+--- a/drivers/iommu/amd/iommu.c
++++ b/drivers/iommu/amd/iommu.c
+@@ -3850,7 +3850,7 @@ static int amd_ir_set_vcpu_affinity(struct irq_data *data, void *vcpu_info)
+ * we should not modify the IRTE
+ */
+ if (!dev_data || !dev_data->use_vapic)
+- return 0;
++ return -EINVAL;
+
+ ir_data->cfg = irqd_cfg(data);
+ pi_data->ir_data = ir_data;
+diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-iommufd.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-iommufd.c
+index 5aa2e7af58b47b..34a0be59cd9194 100644
+--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-iommufd.c
++++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-iommufd.c
+@@ -43,6 +43,8 @@ static void arm_smmu_make_nested_cd_table_ste(
+ target->data[0] |= nested_domain->ste[0] &
+ ~cpu_to_le64(STRTAB_STE_0_CFG);
+ target->data[1] |= nested_domain->ste[1];
++ /* Merge events for DoS mitigations on eventq */
++ target->data[1] |= cpu_to_le64(STRTAB_STE_1_MEV);
+ }
+
+ /*
+diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+index 358072b4e293e0..59749e8180afc4 100644
+--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
++++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+@@ -1052,7 +1052,7 @@ void arm_smmu_get_ste_used(const __le64 *ent, __le64 *used_bits)
+ cpu_to_le64(STRTAB_STE_1_S1DSS | STRTAB_STE_1_S1CIR |
+ STRTAB_STE_1_S1COR | STRTAB_STE_1_S1CSH |
+ STRTAB_STE_1_S1STALLD | STRTAB_STE_1_STRW |
+- STRTAB_STE_1_EATS);
++ STRTAB_STE_1_EATS | STRTAB_STE_1_MEV);
+ used_bits[2] |= cpu_to_le64(STRTAB_STE_2_S2VMID);
+
+ /*
+@@ -1068,7 +1068,7 @@ void arm_smmu_get_ste_used(const __le64 *ent, __le64 *used_bits)
+ if (cfg & BIT(1)) {
+ used_bits[1] |=
+ cpu_to_le64(STRTAB_STE_1_S2FWB | STRTAB_STE_1_EATS |
+- STRTAB_STE_1_SHCFG);
++ STRTAB_STE_1_SHCFG | STRTAB_STE_1_MEV);
+ used_bits[2] |=
+ cpu_to_le64(STRTAB_STE_2_S2VMID | STRTAB_STE_2_VTCR |
+ STRTAB_STE_2_S2AA64 | STRTAB_STE_2_S2ENDI |
+diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
+index bd9d7c85576a26..7290bd4c2bb0ae 100644
+--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
++++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
+@@ -266,6 +266,7 @@ static inline u32 arm_smmu_strtab_l2_idx(u32 sid)
+ #define STRTAB_STE_1_S1COR GENMASK_ULL(5, 4)
+ #define STRTAB_STE_1_S1CSH GENMASK_ULL(7, 6)
+
++#define STRTAB_STE_1_MEV (1UL << 19)
+ #define STRTAB_STE_1_S2FWB (1UL << 25)
+ #define STRTAB_STE_1_S1STALLD (1UL << 27)
+
+diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
+index e3df1f06afbeb3..1efe7cddb4fe33 100644
+--- a/drivers/iommu/iommu.c
++++ b/drivers/iommu/iommu.c
+@@ -508,6 +508,9 @@ static void iommu_deinit_device(struct device *dev)
+ dev->iommu_group = NULL;
+ module_put(ops->owner);
+ dev_iommu_free(dev);
++#ifdef CONFIG_IOMMU_DMA
++ dev->dma_iommu = false;
++#endif
+ }
+
+ DEFINE_MUTEX(iommu_probe_device_lock);
+diff --git a/drivers/irqchip/irq-gic-v2m.c b/drivers/irqchip/irq-gic-v2m.c
+index be35c5349986aa..a1e370d0200f15 100644
+--- a/drivers/irqchip/irq-gic-v2m.c
++++ b/drivers/irqchip/irq-gic-v2m.c
+@@ -423,7 +423,7 @@ static int __init gicv2m_of_init(struct fwnode_handle *parent_handle,
+ #ifdef CONFIG_ACPI
+ static int acpi_num_msi;
+
+-static __init struct fwnode_handle *gicv2m_get_fwnode(struct device *dev)
++static struct fwnode_handle *gicv2m_get_fwnode(struct device *dev)
+ {
+ struct v2m_data *data;
+
+diff --git a/drivers/irqchip/irq-renesas-rzv2h.c b/drivers/irqchip/irq-renesas-rzv2h.c
+index f6363246a71a0b..21d01ce2da5cfc 100644
+--- a/drivers/irqchip/irq-renesas-rzv2h.c
++++ b/drivers/irqchip/irq-renesas-rzv2h.c
+@@ -80,18 +80,28 @@
+ #define ICU_TINT_EXTRACT_GPIOINT(x) FIELD_GET(GENMASK(31, 16), (x))
+ #define ICU_PB5_TINT 0x55
+
++/**
++ * struct rzv2h_hw_info - Interrupt Control Unit controller hardware info structure.
++ * @t_offs: TINT offset
++ */
++struct rzv2h_hw_info {
++ u16 t_offs;
++};
++
+ /**
+ * struct rzv2h_icu_priv - Interrupt Control Unit controller private data structure.
+ * @base: Controller's base address
+ * @irqchip: Pointer to struct irq_chip
+ * @fwspec: IRQ firmware specific data
+ * @lock: Lock to serialize access to hardware registers
++ * @info: Pointer to struct rzv2h_hw_info
+ */
+ struct rzv2h_icu_priv {
+ void __iomem *base;
+ const struct irq_chip *irqchip;
+ struct irq_fwspec fwspec[ICU_NUM_IRQ];
+ raw_spinlock_t lock;
++ const struct rzv2h_hw_info *info;
+ };
+
+ static inline struct rzv2h_icu_priv *irq_data_to_priv(struct irq_data *data)
+@@ -111,7 +121,7 @@ static void rzv2h_icu_eoi(struct irq_data *d)
+ tintirq_nr = hw_irq - ICU_TINT_START;
+ bit = BIT(tintirq_nr);
+ if (!irqd_is_level_type(d))
+- writel_relaxed(bit, priv->base + ICU_TSCLR);
++ writel_relaxed(bit, priv->base + priv->info->t_offs + ICU_TSCLR);
+ } else if (hw_irq >= ICU_IRQ_START) {
+ tintirq_nr = hw_irq - ICU_IRQ_START;
+ bit = BIT(tintirq_nr);
+@@ -139,12 +149,20 @@ static void rzv2h_tint_irq_endisable(struct irq_data *d, bool enable)
+ tssel_n = ICU_TSSR_TSSEL_N(tint_nr);
+
+ guard(raw_spinlock)(&priv->lock);
+- tssr = readl_relaxed(priv->base + ICU_TSSR(k));
++ tssr = readl_relaxed(priv->base + priv->info->t_offs + ICU_TSSR(k));
+ if (enable)
+ tssr |= ICU_TSSR_TIEN(tssel_n);
+ else
+ tssr &= ~ICU_TSSR_TIEN(tssel_n);
+- writel_relaxed(tssr, priv->base + ICU_TSSR(k));
++ writel_relaxed(tssr, priv->base + priv->info->t_offs + ICU_TSSR(k));
++
++ /*
++ * A glitch in the edge detection circuit can cause a spurious
++ * interrupt. Clear the status flag after setting the ICU_TSSRk
++ * registers, which is recommended by the hardware manual as a
++ * countermeasure.
++ */
++ writel_relaxed(BIT(tint_nr), priv->base + priv->info->t_offs + ICU_TSCLR);
+ }
+
+ static void rzv2h_icu_irq_disable(struct irq_data *d)
+@@ -247,8 +265,8 @@ static void rzv2h_clear_tint_int(struct rzv2h_icu_priv *priv, unsigned int hwirq
+ u32 bit = BIT(tint_nr);
+ int k = tint_nr / 16;
+
+- tsctr = readl_relaxed(priv->base + ICU_TSCTR);
+- titsr = readl_relaxed(priv->base + ICU_TITSR(k));
++ tsctr = readl_relaxed(priv->base + priv->info->t_offs + ICU_TSCTR);
++ titsr = readl_relaxed(priv->base + priv->info->t_offs + ICU_TITSR(k));
+ titsel = ICU_TITSR_TITSEL_GET(titsr, titsel_n);
+
+ /*
+@@ -257,7 +275,7 @@ static void rzv2h_clear_tint_int(struct rzv2h_icu_priv *priv, unsigned int hwirq
+ */
+ if ((tsctr & bit) && ((titsel == ICU_TINT_EDGE_RISING) ||
+ (titsel == ICU_TINT_EDGE_FALLING)))
+- writel_relaxed(bit, priv->base + ICU_TSCLR);
++ writel_relaxed(bit, priv->base + priv->info->t_offs + ICU_TSCLR);
+ }
+
+ static int rzv2h_tint_set_type(struct irq_data *d, unsigned int type)
+@@ -308,21 +326,21 @@ static int rzv2h_tint_set_type(struct irq_data *d, unsigned int type)
+
+ guard(raw_spinlock)(&priv->lock);
+
+- tssr = readl_relaxed(priv->base + ICU_TSSR(tssr_k));
++ tssr = readl_relaxed(priv->base + priv->info->t_offs + ICU_TSSR(tssr_k));
+ tssr &= ~(ICU_TSSR_TSSEL_MASK(tssel_n) | tien);
+ tssr |= ICU_TSSR_TSSEL_PREP(tint, tssel_n);
+
+- writel_relaxed(tssr, priv->base + ICU_TSSR(tssr_k));
++ writel_relaxed(tssr, priv->base + priv->info->t_offs + ICU_TSSR(tssr_k));
+
+- titsr = readl_relaxed(priv->base + ICU_TITSR(titsr_k));
++ titsr = readl_relaxed(priv->base + priv->info->t_offs + ICU_TITSR(titsr_k));
+ titsr &= ~ICU_TITSR_TITSEL_MASK(titsel_n);
+ titsr |= ICU_TITSR_TITSEL_PREP(sense, titsel_n);
+
+- writel_relaxed(titsr, priv->base + ICU_TITSR(titsr_k));
++ writel_relaxed(titsr, priv->base + priv->info->t_offs + ICU_TITSR(titsr_k));
+
+ rzv2h_clear_tint_int(priv, hwirq);
+
+- writel_relaxed(tssr | tien, priv->base + ICU_TSSR(tssr_k));
++ writel_relaxed(tssr | tien, priv->base + priv->info->t_offs + ICU_TSSR(tssr_k));
+
+ return 0;
+ }
+@@ -421,7 +439,13 @@ static int rzv2h_icu_parse_interrupts(struct rzv2h_icu_priv *priv, struct device
+ return 0;
+ }
+
+-static int rzv2h_icu_init(struct device_node *node, struct device_node *parent)
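++/* devm action: drop the reference taken on the ICU platform device during init */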
++static void rzv2h_icu_put_device(void *data)
++{
++ put_device(data);
++}
++
++static int rzv2h_icu_init_common(struct device_node *node, struct device_node *parent,
++ const struct rzv2h_hw_info *hw_info)
+ {
+ struct irq_domain *irq_domain, *parent_domain;
+ struct rzv2h_icu_priv *rzv2h_icu_data;
+@@ -433,43 +457,41 @@ static int rzv2h_icu_init(struct device_node *node, struct device_node *parent)
+ if (!pdev)
+ return -ENODEV;
+
++ ret = devm_add_action_or_reset(&pdev->dev, rzv2h_icu_put_device,
++ &pdev->dev);
++ if (ret < 0)
++ return ret;
++
+ parent_domain = irq_find_host(parent);
+ if (!parent_domain) {
+ dev_err(&pdev->dev, "cannot find parent domain\n");
+- ret = -ENODEV;
+- goto put_dev;
++ return -ENODEV;
+ }
+
+ rzv2h_icu_data = devm_kzalloc(&pdev->dev, sizeof(*rzv2h_icu_data), GFP_KERNEL);
+- if (!rzv2h_icu_data) {
+- ret = -ENOMEM;
+- goto put_dev;
+- }
++ if (!rzv2h_icu_data)
++ return -ENOMEM;
+
+ rzv2h_icu_data->irqchip = &rzv2h_icu_chip;
+
+ rzv2h_icu_data->base = devm_of_iomap(&pdev->dev, pdev->dev.of_node, 0, NULL);
+- if (IS_ERR(rzv2h_icu_data->base)) {
+- ret = PTR_ERR(rzv2h_icu_data->base);
+- goto put_dev;
+- }
++ if (IS_ERR(rzv2h_icu_data->base))
++ return PTR_ERR(rzv2h_icu_data->base);
+
+ ret = rzv2h_icu_parse_interrupts(rzv2h_icu_data, node);
+ if (ret) {
+ dev_err(&pdev->dev, "cannot parse interrupts: %d\n", ret);
+- goto put_dev;
++ return ret;
+ }
+
+ resetn = devm_reset_control_get_exclusive(&pdev->dev, NULL);
+- if (IS_ERR(resetn)) {
+- ret = PTR_ERR(resetn);
+- goto put_dev;
+- }
++ if (IS_ERR(resetn))
++ return PTR_ERR(resetn);
+
+ ret = reset_control_deassert(resetn);
+ if (ret) {
+ dev_err(&pdev->dev, "failed to deassert resetn pin, %d\n", ret);
+- goto put_dev;
++ return ret;
+ }
+
+ pm_runtime_enable(&pdev->dev);
+@@ -489,6 +511,8 @@ static int rzv2h_icu_init(struct device_node *node, struct device_node *parent)
+ goto pm_put;
+ }
+
++ rzv2h_icu_data->info = hw_info;
++
+ /*
+ * coccicheck complains about a missing put_device call before returning, but it's a false
+ * positive. We still need &pdev->dev after successfully returning from this function.
+@@ -500,12 +524,19 @@ static int rzv2h_icu_init(struct device_node *node, struct device_node *parent)
+ pm_disable:
+ pm_runtime_disable(&pdev->dev);
+ reset_control_assert(resetn);
+-put_dev:
+- put_device(&pdev->dev);
+
+ return ret;
+ }
+
++static const struct rzv2h_hw_info rzv2h_hw_params = {
++ .t_offs = 0,
++};
++
++static int rzv2h_icu_init(struct device_node *node, struct device_node *parent)
++{
++ return rzv2h_icu_init_common(node, parent, &rzv2h_hw_params);
++}
++
+ IRQCHIP_PLATFORM_DRIVER_BEGIN(rzv2h_icu)
+ IRQCHIP_MATCH("renesas,r9a09g057-icu", rzv2h_icu_init)
+ IRQCHIP_PLATFORM_DRIVER_END(rzv2h_icu)
+diff --git a/drivers/mailbox/pcc.c b/drivers/mailbox/pcc.c
+index 82102a4c5d6883..f8215a8f656a46 100644
+--- a/drivers/mailbox/pcc.c
++++ b/drivers/mailbox/pcc.c
+@@ -313,6 +313,10 @@ static irqreturn_t pcc_mbox_irq(int irq, void *p)
+ int ret;
+
+ pchan = chan->con_priv;
++
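++	/* Acknowledge the platform interrupt register first; bail out if the update fails */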
++ if (pcc_chan_reg_read_modify_write(&pchan->plat_irq_ack))
++ return IRQ_NONE;
++
+ if (pchan->type == ACPI_PCCT_TYPE_EXT_PCC_MASTER_SUBSPACE &&
+ !pchan->chan_in_use)
+ return IRQ_NONE;
+@@ -330,13 +334,16 @@ static irqreturn_t pcc_mbox_irq(int irq, void *p)
+ return IRQ_NONE;
+ }
+
+- if (pcc_chan_reg_read_modify_write(&pchan->plat_irq_ack))
+- return IRQ_NONE;
+-
++ /*
++	 * Clear this flag after updating the interrupt ack register and just
++	 * before mbox_chan_received_data(), which might call pcc_send_data(),
++	 * where the flag is set again to start a new transfer. This is
++	 * required to avoid any possible race when updating this flag.
++ */
++ pchan->chan_in_use = false;
+ mbox_chan_received_data(chan, NULL);
+
+ check_and_ack(pchan, chan);
+- pchan->chan_in_use = false;
+
+ return IRQ_HANDLED;
+ }
+diff --git a/drivers/mcb/mcb-parse.c b/drivers/mcb/mcb-parse.c
+index 02a680c73979b9..bf0d7d58c8b014 100644
+--- a/drivers/mcb/mcb-parse.c
++++ b/drivers/mcb/mcb-parse.c
+@@ -96,7 +96,7 @@ static int chameleon_parse_gdd(struct mcb_bus *bus,
+
+ ret = mcb_device_register(bus, mdev);
+ if (ret < 0)
+- goto err;
++ return ret;
+
+ return 0;
+
+diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
+index 15829ab192d2b1..7373dff023d0fd 100644
+--- a/drivers/md/raid1.c
++++ b/drivers/md/raid1.c
+@@ -2199,14 +2199,9 @@ static int fix_sync_read_error(struct r1bio *r1_bio)
+ if (!rdev_set_badblocks(rdev, sect, s, 0))
+ abort = 1;
+ }
+- if (abort) {
+- conf->recovery_disabled =
+- mddev->recovery_disabled;
+- set_bit(MD_RECOVERY_INTR, &mddev->recovery);
+- md_done_sync(mddev, r1_bio->sectors, 0);
+- put_buf(r1_bio);
++ if (abort)
+ return 0;
+- }
++
+ /* Try next page */
+ sectors -= s;
+ sect += s;
+@@ -2345,10 +2340,21 @@ static void sync_request_write(struct mddev *mddev, struct r1bio *r1_bio)
+ int disks = conf->raid_disks * 2;
+ struct bio *wbio;
+
+- if (!test_bit(R1BIO_Uptodate, &r1_bio->state))
+- /* ouch - failed to read all of that. */
+- if (!fix_sync_read_error(r1_bio))
++ if (!test_bit(R1BIO_Uptodate, &r1_bio->state)) {
++ /*
++ * ouch - failed to read all of that.
++	 * No need to fix a read error during check/repair,
++	 * because all member disks are read anyway.
++ */
++ if (test_bit(MD_RECOVERY_REQUESTED, &mddev->recovery) ||
++ !fix_sync_read_error(r1_bio)) {
++ conf->recovery_disabled = mddev->recovery_disabled;
++ set_bit(MD_RECOVERY_INTR, &mddev->recovery);
++ md_done_sync(mddev, r1_bio->sectors, 0);
++ put_buf(r1_bio);
+ return;
++ }
++ }
+
+ if (test_bit(MD_RECOVERY_REQUESTED, &mddev->recovery))
+ process_checks(r1_bio);
+diff --git a/drivers/media/i2c/Kconfig b/drivers/media/i2c/Kconfig
+index 8ba096b8ebca24..85ecb2aeefdbff 100644
+--- a/drivers/media/i2c/Kconfig
++++ b/drivers/media/i2c/Kconfig
+@@ -140,6 +140,7 @@ config VIDEO_IMX214
+ tristate "Sony IMX214 sensor support"
+ depends on GPIOLIB
+ select REGMAP_I2C
++ select V4L2_CCI_I2C
+ help
+ This is a Video4Linux2 sensor driver for the Sony
+ IMX214 camera.
+diff --git a/drivers/media/i2c/imx214.c b/drivers/media/i2c/imx214.c
+index 6a393e18267f42..ea5e294327e7be 100644
+--- a/drivers/media/i2c/imx214.c
++++ b/drivers/media/i2c/imx214.c
+@@ -15,26 +15,152 @@
+ #include <linux/regmap.h>
+ #include <linux/regulator/consumer.h>
+ #include <media/media-entity.h>
++#include <media/v4l2-cci.h>
+ #include <media/v4l2-ctrls.h>
+ #include <media/v4l2-fwnode.h>
+ #include <media/v4l2-subdev.h>
+
+-#define IMX214_REG_MODE_SELECT 0x0100
++#define IMX214_REG_MODE_SELECT CCI_REG8(0x0100)
+ #define IMX214_MODE_STANDBY 0x00
+ #define IMX214_MODE_STREAMING 0x01
+
++#define IMX214_REG_FAST_STANDBY_CTRL CCI_REG8(0x0106)
++
+ #define IMX214_DEFAULT_CLK_FREQ 24000000
+-#define IMX214_DEFAULT_LINK_FREQ 480000000
++#define IMX214_DEFAULT_LINK_FREQ 600000000
++/* Keep the old, incorrect link frequency for backward compatibility */
++#define IMX214_DEFAULT_LINK_FREQ_LEGACY 480000000
+ #define IMX214_DEFAULT_PIXEL_RATE ((IMX214_DEFAULT_LINK_FREQ * 8LL) / 10)
+ #define IMX214_FPS 30
+ #define IMX214_MBUS_CODE MEDIA_BUS_FMT_SRGGB10_1X10
+
++/* V-TIMING internal */
++#define IMX214_REG_FRM_LENGTH_LINES CCI_REG16(0x0340)
++
+ /* Exposure control */
+-#define IMX214_REG_EXPOSURE 0x0202
++#define IMX214_REG_EXPOSURE CCI_REG16(0x0202)
+ #define IMX214_EXPOSURE_MIN 0
+ #define IMX214_EXPOSURE_MAX 3184
+ #define IMX214_EXPOSURE_STEP 1
+ #define IMX214_EXPOSURE_DEFAULT 3184
++#define IMX214_REG_EXPOSURE_RATIO CCI_REG8(0x0222)
++#define IMX214_REG_SHORT_EXPOSURE CCI_REG16(0x0224)
++
++/* Analog gain control */
++#define IMX214_REG_ANALOG_GAIN CCI_REG16(0x0204)
++#define IMX214_REG_SHORT_ANALOG_GAIN CCI_REG16(0x0216)
++
++/* Digital gain control */
++#define IMX214_REG_DIG_GAIN_GREENR CCI_REG16(0x020e)
++#define IMX214_REG_DIG_GAIN_RED CCI_REG16(0x0210)
++#define IMX214_REG_DIG_GAIN_BLUE CCI_REG16(0x0212)
++#define IMX214_REG_DIG_GAIN_GREENB CCI_REG16(0x0214)
++
++#define IMX214_REG_ORIENTATION CCI_REG8(0x0101)
++
++#define IMX214_REG_MASK_CORR_FRAMES CCI_REG8(0x0105)
++#define IMX214_CORR_FRAMES_TRANSMIT 0
++#define IMX214_CORR_FRAMES_MASK 1
++
++#define IMX214_REG_CSI_DATA_FORMAT CCI_REG16(0x0112)
++#define IMX214_CSI_DATA_FORMAT_RAW8 0x0808
++#define IMX214_CSI_DATA_FORMAT_RAW10 0x0A0A
++#define IMX214_CSI_DATA_FORMAT_COMP6 0x0A06
++#define IMX214_CSI_DATA_FORMAT_COMP8 0x0A08
++
++#define IMX214_REG_CSI_LANE_MODE CCI_REG8(0x0114)
++#define IMX214_CSI_2_LANE_MODE 1
++#define IMX214_CSI_4_LANE_MODE 3
++
++#define IMX214_REG_EXCK_FREQ CCI_REG16(0x0136)
++#define IMX214_EXCK_FREQ(n) ((n) * 256) /* n expressed in MHz */
++
++#define IMX214_REG_TEMP_SENSOR_CONTROL CCI_REG8(0x0138)
++
++#define IMX214_REG_HDR_MODE CCI_REG8(0x0220)
++#define IMX214_HDR_MODE_OFF 0
++#define IMX214_HDR_MODE_ON 1
++
++#define IMX214_REG_HDR_RES_REDUCTION CCI_REG8(0x0221)
++#define IMX214_HDR_RES_REDU_THROUGH 0x11
++#define IMX214_HDR_RES_REDU_2_BINNING 0x22
++
++/* PLL settings */
++#define IMX214_REG_VTPXCK_DIV CCI_REG8(0x0301)
++#define IMX214_REG_VTSYCK_DIV CCI_REG8(0x0303)
++#define IMX214_REG_PREPLLCK_VT_DIV CCI_REG8(0x0305)
++#define IMX214_REG_PLL_VT_MPY CCI_REG16(0x0306)
++#define IMX214_REG_OPPXCK_DIV CCI_REG8(0x0309)
++#define IMX214_REG_OPSYCK_DIV CCI_REG8(0x030b)
++#define IMX214_REG_PLL_MULT_DRIV CCI_REG8(0x0310)
++#define IMX214_PLL_SINGLE 0
++#define IMX214_PLL_DUAL 1
++
++#define IMX214_REG_LINE_LENGTH_PCK CCI_REG16(0x0342)
++#define IMX214_REG_X_ADD_STA CCI_REG16(0x0344)
++#define IMX214_REG_Y_ADD_STA CCI_REG16(0x0346)
++#define IMX214_REG_X_ADD_END CCI_REG16(0x0348)
++#define IMX214_REG_Y_ADD_END CCI_REG16(0x034a)
++#define IMX214_REG_X_OUTPUT_SIZE CCI_REG16(0x034c)
++#define IMX214_REG_Y_OUTPUT_SIZE CCI_REG16(0x034e)
++#define IMX214_REG_X_EVEN_INC CCI_REG8(0x0381)
++#define IMX214_REG_X_ODD_INC CCI_REG8(0x0383)
++#define IMX214_REG_Y_EVEN_INC CCI_REG8(0x0385)
++#define IMX214_REG_Y_ODD_INC CCI_REG8(0x0387)
++
++#define IMX214_REG_SCALE_MODE CCI_REG8(0x0401)
++#define IMX214_SCALE_NONE 0
++#define IMX214_SCALE_HORIZONTAL 1
++#define IMX214_SCALE_FULL 2
++#define IMX214_REG_SCALE_M CCI_REG16(0x0404)
++
++#define IMX214_REG_DIG_CROP_X_OFFSET CCI_REG16(0x0408)
++#define IMX214_REG_DIG_CROP_Y_OFFSET CCI_REG16(0x040a)
++#define IMX214_REG_DIG_CROP_WIDTH CCI_REG16(0x040c)
++#define IMX214_REG_DIG_CROP_HEIGHT CCI_REG16(0x040e)
++
++#define IMX214_REG_REQ_LINK_BIT_RATE CCI_REG32(0x0820)
++#define IMX214_LINK_BIT_RATE_MBPS(n) ((n) << 16)
++
++/* Binning mode */
++#define IMX214_REG_BINNING_MODE CCI_REG8(0x0900)
++#define IMX214_BINNING_NONE 0
++#define IMX214_BINNING_ENABLE 1
++#define IMX214_REG_BINNING_TYPE CCI_REG8(0x0901)
++#define IMX214_REG_BINNING_WEIGHTING CCI_REG8(0x0902)
++#define IMX214_BINNING_AVERAGE 0x00
++#define IMX214_BINNING_SUMMED 0x01
++#define IMX214_BINNING_BAYER 0x02
++
++#define IMX214_REG_SING_DEF_CORR_EN CCI_REG8(0x0b06)
++#define IMX214_SING_DEF_CORR_OFF 0
++#define IMX214_SING_DEF_CORR_ON 1
++
++/* AWB control */
++#define IMX214_REG_ABS_GAIN_GREENR CCI_REG16(0x0b8e)
++#define IMX214_REG_ABS_GAIN_RED CCI_REG16(0x0b90)
++#define IMX214_REG_ABS_GAIN_BLUE CCI_REG16(0x0b92)
++#define IMX214_REG_ABS_GAIN_GREENB CCI_REG16(0x0b94)
++
++#define IMX214_REG_RMSC_NR_MODE CCI_REG8(0x3001)
++#define IMX214_REG_STATS_OUT_EN CCI_REG8(0x3013)
++#define IMX214_STATS_OUT_OFF 0
++#define IMX214_STATS_OUT_ON 1
++
++/* Chroma noise reduction */
++#define IMX214_REG_NML_NR_EN CCI_REG8(0x30a2)
++#define IMX214_NML_NR_OFF 0
++#define IMX214_NML_NR_ON 1
++
++#define IMX214_REG_EBD_SIZE_V CCI_REG8(0x5041)
++#define IMX214_EBD_NO 0
++#define IMX214_EBD_4_LINE 4
++
++#define IMX214_REG_RG_STATS_LMT CCI_REG16(0x6d12)
++#define IMX214_RG_STATS_LMT_10_BIT 0x03FF
++#define IMX214_RG_STATS_LMT_14_BIT 0x3FFF
++
++#define IMX214_REG_ATR_FAST_MOVE CCI_REG8(0x9300)
+
+ /* IMX214 native and active pixel array size */
+ #define IMX214_NATIVE_WIDTH 4224U
+@@ -59,8 +185,6 @@ struct imx214 {
+
+ struct v4l2_subdev sd;
+ struct media_pad pad;
+- struct v4l2_mbus_framefmt fmt;
+- struct v4l2_rect crop;
+
+ struct v4l2_ctrl_handler ctrls;
+ struct v4l2_ctrl *pixel_rate;
+@@ -71,353 +195,266 @@ struct imx214 {
+ struct regulator_bulk_data supplies[IMX214_NUM_SUPPLIES];
+
+ struct gpio_desc *enable_gpio;
+-
+- /*
+- * Serialize control access, get/set format, get selection
+- * and start streaming.
+- */
+- struct mutex mutex;
+-};
+-
+-struct reg_8 {
+- u16 addr;
+- u8 val;
+-};
+-
+-enum {
+- IMX214_TABLE_WAIT_MS = 0,
+- IMX214_TABLE_END,
+- IMX214_MAX_RETRIES,
+- IMX214_WAIT_MS
+ };
+
+ /*From imx214_mode_tbls.h*/
+-static const struct reg_8 mode_4096x2304[] = {
+- {0x0114, 0x03},
+- {0x0220, 0x00},
+- {0x0221, 0x11},
+- {0x0222, 0x01},
+- {0x0340, 0x0C},
+- {0x0341, 0x7A},
+- {0x0342, 0x13},
+- {0x0343, 0x90},
+- {0x0344, 0x00},
+- {0x0345, 0x38},
+- {0x0346, 0x01},
+- {0x0347, 0x98},
+- {0x0348, 0x10},
+- {0x0349, 0x37},
+- {0x034A, 0x0A},
+- {0x034B, 0x97},
+- {0x0381, 0x01},
+- {0x0383, 0x01},
+- {0x0385, 0x01},
+- {0x0387, 0x01},
+- {0x0900, 0x00},
+- {0x0901, 0x00},
+- {0x0902, 0x00},
+- {0x3000, 0x35},
+- {0x3054, 0x01},
+- {0x305C, 0x11},
+-
+- {0x0112, 0x0A},
+- {0x0113, 0x0A},
+- {0x034C, 0x10},
+- {0x034D, 0x00},
+- {0x034E, 0x09},
+- {0x034F, 0x00},
+- {0x0401, 0x00},
+- {0x0404, 0x00},
+- {0x0405, 0x10},
+- {0x0408, 0x00},
+- {0x0409, 0x00},
+- {0x040A, 0x00},
+- {0x040B, 0x00},
+- {0x040C, 0x10},
+- {0x040D, 0x00},
+- {0x040E, 0x09},
+- {0x040F, 0x00},
+-
+- {0x0301, 0x05},
+- {0x0303, 0x02},
+- {0x0305, 0x03},
+- {0x0306, 0x00},
+- {0x0307, 0x96},
+- {0x0309, 0x0A},
+- {0x030B, 0x01},
+- {0x0310, 0x00},
+-
+- {0x0820, 0x12},
+- {0x0821, 0xC0},
+- {0x0822, 0x00},
+- {0x0823, 0x00},
+-
+- {0x3A03, 0x09},
+- {0x3A04, 0x50},
+- {0x3A05, 0x01},
+-
+- {0x0B06, 0x01},
+- {0x30A2, 0x00},
+-
+- {0x30B4, 0x00},
+-
+- {0x3A02, 0xFF},
+-
+- {0x3011, 0x00},
+- {0x3013, 0x01},
+-
+- {0x0202, 0x0C},
+- {0x0203, 0x70},
+- {0x0224, 0x01},
+- {0x0225, 0xF4},
+-
+- {0x0204, 0x00},
+- {0x0205, 0x00},
+- {0x020E, 0x01},
+- {0x020F, 0x00},
+- {0x0210, 0x01},
+- {0x0211, 0x00},
+- {0x0212, 0x01},
+- {0x0213, 0x00},
+- {0x0214, 0x01},
+- {0x0215, 0x00},
+- {0x0216, 0x00},
+- {0x0217, 0x00},
+-
+- {0x4170, 0x00},
+- {0x4171, 0x10},
+- {0x4176, 0x00},
+- {0x4177, 0x3C},
+- {0xAE20, 0x04},
+- {0xAE21, 0x5C},
+-
+- {IMX214_TABLE_WAIT_MS, 10},
+- {0x0138, 0x01},
+- {IMX214_TABLE_END, 0x00}
++static const struct cci_reg_sequence mode_4096x2304[] = {
++ { IMX214_REG_HDR_MODE, IMX214_HDR_MODE_OFF },
++ { IMX214_REG_HDR_RES_REDUCTION, IMX214_HDR_RES_REDU_THROUGH },
++ { IMX214_REG_EXPOSURE_RATIO, 1 },
++ { IMX214_REG_FRM_LENGTH_LINES, 3194 },
++ { IMX214_REG_LINE_LENGTH_PCK, 5008 },
++ { IMX214_REG_X_ADD_STA, 56 },
++ { IMX214_REG_Y_ADD_STA, 408 },
++ { IMX214_REG_X_ADD_END, 4151 },
++ { IMX214_REG_Y_ADD_END, 2711 },
++ { IMX214_REG_X_EVEN_INC, 1 },
++ { IMX214_REG_X_ODD_INC, 1 },
++ { IMX214_REG_Y_EVEN_INC, 1 },
++ { IMX214_REG_Y_ODD_INC, 1 },
++ { IMX214_REG_BINNING_MODE, IMX214_BINNING_NONE },
++ { IMX214_REG_BINNING_TYPE, 0 },
++ { IMX214_REG_BINNING_WEIGHTING, IMX214_BINNING_AVERAGE },
++ { CCI_REG8(0x3000), 0x35 },
++ { CCI_REG8(0x3054), 0x01 },
++ { CCI_REG8(0x305C), 0x11 },
++
++ { IMX214_REG_CSI_DATA_FORMAT, IMX214_CSI_DATA_FORMAT_RAW10 },
++ { IMX214_REG_X_OUTPUT_SIZE, 4096 },
++ { IMX214_REG_Y_OUTPUT_SIZE, 2304 },
++ { IMX214_REG_SCALE_MODE, IMX214_SCALE_NONE },
++ { IMX214_REG_SCALE_M, 2 },
++ { IMX214_REG_DIG_CROP_X_OFFSET, 0 },
++ { IMX214_REG_DIG_CROP_Y_OFFSET, 0 },
++ { IMX214_REG_DIG_CROP_WIDTH, 4096 },
++ { IMX214_REG_DIG_CROP_HEIGHT, 2304 },
++
++ { IMX214_REG_VTPXCK_DIV, 5 },
++ { IMX214_REG_VTSYCK_DIV, 2 },
++ { IMX214_REG_PREPLLCK_VT_DIV, 3 },
++ { IMX214_REG_PLL_VT_MPY, 150 },
++ { IMX214_REG_OPPXCK_DIV, 10 },
++ { IMX214_REG_OPSYCK_DIV, 1 },
++ { IMX214_REG_PLL_MULT_DRIV, IMX214_PLL_SINGLE },
++
++ { IMX214_REG_REQ_LINK_BIT_RATE, IMX214_LINK_BIT_RATE_MBPS(4800) },
++
++ { CCI_REG8(0x3A03), 0x09 },
++ { CCI_REG8(0x3A04), 0x50 },
++ { CCI_REG8(0x3A05), 0x01 },
++
++ { IMX214_REG_SING_DEF_CORR_EN, IMX214_SING_DEF_CORR_ON },
++ { IMX214_REG_NML_NR_EN, IMX214_NML_NR_OFF },
++
++ { CCI_REG8(0x30B4), 0x00 },
++
++ { CCI_REG8(0x3A02), 0xFF },
++
++ { CCI_REG8(0x3011), 0x00 },
++ { IMX214_REG_STATS_OUT_EN, IMX214_STATS_OUT_ON },
++
++ { IMX214_REG_EXPOSURE, IMX214_EXPOSURE_DEFAULT },
++ { IMX214_REG_SHORT_EXPOSURE, 500 },
++
++ { IMX214_REG_ANALOG_GAIN, 0 },
++ { IMX214_REG_DIG_GAIN_GREENR, 256 },
++ { IMX214_REG_DIG_GAIN_RED, 256 },
++ { IMX214_REG_DIG_GAIN_BLUE, 256 },
++ { IMX214_REG_DIG_GAIN_GREENB, 256 },
++ { IMX214_REG_SHORT_ANALOG_GAIN, 0 },
++
++ { CCI_REG8(0x4170), 0x00 },
++ { CCI_REG8(0x4171), 0x10 },
++ { CCI_REG8(0x4176), 0x00 },
++ { CCI_REG8(0x4177), 0x3C },
++ { CCI_REG8(0xAE20), 0x04 },
++ { CCI_REG8(0xAE21), 0x5C },
+ };
+
+-static const struct reg_8 mode_1920x1080[] = {
+- {0x0114, 0x03},
+- {0x0220, 0x00},
+- {0x0221, 0x11},
+- {0x0222, 0x01},
+- {0x0340, 0x0C},
+- {0x0341, 0x7A},
+- {0x0342, 0x13},
+- {0x0343, 0x90},
+- {0x0344, 0x04},
+- {0x0345, 0x78},
+- {0x0346, 0x03},
+- {0x0347, 0xFC},
+- {0x0348, 0x0B},
+- {0x0349, 0xF7},
+- {0x034A, 0x08},
+- {0x034B, 0x33},
+- {0x0381, 0x01},
+- {0x0383, 0x01},
+- {0x0385, 0x01},
+- {0x0387, 0x01},
+- {0x0900, 0x00},
+- {0x0901, 0x00},
+- {0x0902, 0x00},
+- {0x3000, 0x35},
+- {0x3054, 0x01},
+- {0x305C, 0x11},
+-
+- {0x0112, 0x0A},
+- {0x0113, 0x0A},
+- {0x034C, 0x07},
+- {0x034D, 0x80},
+- {0x034E, 0x04},
+- {0x034F, 0x38},
+- {0x0401, 0x00},
+- {0x0404, 0x00},
+- {0x0405, 0x10},
+- {0x0408, 0x00},
+- {0x0409, 0x00},
+- {0x040A, 0x00},
+- {0x040B, 0x00},
+- {0x040C, 0x07},
+- {0x040D, 0x80},
+- {0x040E, 0x04},
+- {0x040F, 0x38},
+-
+- {0x0301, 0x05},
+- {0x0303, 0x02},
+- {0x0305, 0x03},
+- {0x0306, 0x00},
+- {0x0307, 0x96},
+- {0x0309, 0x0A},
+- {0x030B, 0x01},
+- {0x0310, 0x00},
+-
+- {0x0820, 0x12},
+- {0x0821, 0xC0},
+- {0x0822, 0x00},
+- {0x0823, 0x00},
+-
+- {0x3A03, 0x04},
+- {0x3A04, 0xF8},
+- {0x3A05, 0x02},
+-
+- {0x0B06, 0x01},
+- {0x30A2, 0x00},
+-
+- {0x30B4, 0x00},
+-
+- {0x3A02, 0xFF},
+-
+- {0x3011, 0x00},
+- {0x3013, 0x01},
+-
+- {0x0202, 0x0C},
+- {0x0203, 0x70},
+- {0x0224, 0x01},
+- {0x0225, 0xF4},
+-
+- {0x0204, 0x00},
+- {0x0205, 0x00},
+- {0x020E, 0x01},
+- {0x020F, 0x00},
+- {0x0210, 0x01},
+- {0x0211, 0x00},
+- {0x0212, 0x01},
+- {0x0213, 0x00},
+- {0x0214, 0x01},
+- {0x0215, 0x00},
+- {0x0216, 0x00},
+- {0x0217, 0x00},
+-
+- {0x4170, 0x00},
+- {0x4171, 0x10},
+- {0x4176, 0x00},
+- {0x4177, 0x3C},
+- {0xAE20, 0x04},
+- {0xAE21, 0x5C},
+-
+- {IMX214_TABLE_WAIT_MS, 10},
+- {0x0138, 0x01},
+- {IMX214_TABLE_END, 0x00}
++static const struct cci_reg_sequence mode_1920x1080[] = {
++ { IMX214_REG_HDR_MODE, IMX214_HDR_MODE_OFF },
++ { IMX214_REG_HDR_RES_REDUCTION, IMX214_HDR_RES_REDU_THROUGH },
++ { IMX214_REG_EXPOSURE_RATIO, 1 },
++ { IMX214_REG_FRM_LENGTH_LINES, 3194 },
++ { IMX214_REG_LINE_LENGTH_PCK, 5008 },
++ { IMX214_REG_X_ADD_STA, 1144 },
++ { IMX214_REG_Y_ADD_STA, 1020 },
++ { IMX214_REG_X_ADD_END, 3063 },
++ { IMX214_REG_Y_ADD_END, 2099 },
++ { IMX214_REG_X_EVEN_INC, 1 },
++ { IMX214_REG_X_ODD_INC, 1 },
++ { IMX214_REG_Y_EVEN_INC, 1 },
++ { IMX214_REG_Y_ODD_INC, 1 },
++ { IMX214_REG_BINNING_MODE, IMX214_BINNING_NONE },
++ { IMX214_REG_BINNING_TYPE, 0 },
++ { IMX214_REG_BINNING_WEIGHTING, IMX214_BINNING_AVERAGE },
++ { CCI_REG8(0x3000), 0x35 },
++ { CCI_REG8(0x3054), 0x01 },
++ { CCI_REG8(0x305C), 0x11 },
++
++ { IMX214_REG_CSI_DATA_FORMAT, IMX214_CSI_DATA_FORMAT_RAW10 },
++ { IMX214_REG_X_OUTPUT_SIZE, 1920 },
++ { IMX214_REG_Y_OUTPUT_SIZE, 1080 },
++ { IMX214_REG_SCALE_MODE, IMX214_SCALE_NONE },
++ { IMX214_REG_SCALE_M, 2 },
++ { IMX214_REG_DIG_CROP_X_OFFSET, 0 },
++ { IMX214_REG_DIG_CROP_Y_OFFSET, 0 },
++ { IMX214_REG_DIG_CROP_WIDTH, 1920 },
++ { IMX214_REG_DIG_CROP_HEIGHT, 1080 },
++
++ { IMX214_REG_VTPXCK_DIV, 5 },
++ { IMX214_REG_VTSYCK_DIV, 2 },
++ { IMX214_REG_PREPLLCK_VT_DIV, 3 },
++ { IMX214_REG_PLL_VT_MPY, 150 },
++ { IMX214_REG_OPPXCK_DIV, 10 },
++ { IMX214_REG_OPSYCK_DIV, 1 },
++ { IMX214_REG_PLL_MULT_DRIV, IMX214_PLL_SINGLE },
++
++ { IMX214_REG_REQ_LINK_BIT_RATE, IMX214_LINK_BIT_RATE_MBPS(4800) },
++
++ { CCI_REG8(0x3A03), 0x04 },
++ { CCI_REG8(0x3A04), 0xF8 },
++ { CCI_REG8(0x3A05), 0x02 },
++
++ { IMX214_REG_SING_DEF_CORR_EN, IMX214_SING_DEF_CORR_ON },
++ { IMX214_REG_NML_NR_EN, IMX214_NML_NR_OFF },
++
++ { CCI_REG8(0x30B4), 0x00 },
++
++ { CCI_REG8(0x3A02), 0xFF },
++
++ { CCI_REG8(0x3011), 0x00 },
++ { IMX214_REG_STATS_OUT_EN, IMX214_STATS_OUT_ON },
++
++ { IMX214_REG_EXPOSURE, IMX214_EXPOSURE_DEFAULT },
++ { IMX214_REG_SHORT_EXPOSURE, 500 },
++
++ { IMX214_REG_ANALOG_GAIN, 0 },
++ { IMX214_REG_DIG_GAIN_GREENR, 256 },
++ { IMX214_REG_DIG_GAIN_RED, 256 },
++ { IMX214_REG_DIG_GAIN_BLUE, 256 },
++ { IMX214_REG_DIG_GAIN_GREENB, 256 },
++ { IMX214_REG_SHORT_ANALOG_GAIN, 0 },
++
++ { CCI_REG8(0x4170), 0x00 },
++ { CCI_REG8(0x4171), 0x10 },
++ { CCI_REG8(0x4176), 0x00 },
++ { CCI_REG8(0x4177), 0x3C },
++ { CCI_REG8(0xAE20), 0x04 },
++ { CCI_REG8(0xAE21), 0x5C },
+ };
+
+-static const struct reg_8 mode_table_common[] = {
++static const struct cci_reg_sequence mode_table_common[] = {
+ /* software reset */
+
+ /* software standby settings */
+- {0x0100, 0x00},
++ { IMX214_REG_MODE_SELECT, IMX214_MODE_STANDBY },
+
+ /* ATR setting */
+- {0x9300, 0x02},
++ { IMX214_REG_ATR_FAST_MOVE, 2 },
+
+ /* external clock setting */
+- {0x0136, 0x18},
+- {0x0137, 0x00},
++ { IMX214_REG_EXCK_FREQ, IMX214_EXCK_FREQ(IMX214_DEFAULT_CLK_FREQ / 1000000) },
+
+ /* global setting */
+ /* basic config */
+- {0x0101, 0x00},
+- {0x0105, 0x01},
+- {0x0106, 0x01},
+- {0x4550, 0x02},
+- {0x4601, 0x00},
+- {0x4642, 0x05},
+- {0x6227, 0x11},
+- {0x6276, 0x00},
+- {0x900E, 0x06},
+- {0xA802, 0x90},
+- {0xA803, 0x11},
+- {0xA804, 0x62},
+- {0xA805, 0x77},
+- {0xA806, 0xAE},
+- {0xA807, 0x34},
+- {0xA808, 0xAE},
+- {0xA809, 0x35},
+- {0xA80A, 0x62},
+- {0xA80B, 0x83},
+- {0xAE33, 0x00},
++ { IMX214_REG_ORIENTATION, 0 },
++ { IMX214_REG_MASK_CORR_FRAMES, IMX214_CORR_FRAMES_MASK },
++ { IMX214_REG_FAST_STANDBY_CTRL, 1 },
++ { CCI_REG8(0x4550), 0x02 },
++ { CCI_REG8(0x4601), 0x00 },
++ { CCI_REG8(0x4642), 0x05 },
++ { CCI_REG8(0x6227), 0x11 },
++ { CCI_REG8(0x6276), 0x00 },
++ { CCI_REG8(0x900E), 0x06 },
++ { CCI_REG8(0xA802), 0x90 },
++ { CCI_REG8(0xA803), 0x11 },
++ { CCI_REG8(0xA804), 0x62 },
++ { CCI_REG8(0xA805), 0x77 },
++ { CCI_REG8(0xA806), 0xAE },
++ { CCI_REG8(0xA807), 0x34 },
++ { CCI_REG8(0xA808), 0xAE },
++ { CCI_REG8(0xA809), 0x35 },
++ { CCI_REG8(0xA80A), 0x62 },
++ { CCI_REG8(0xA80B), 0x83 },
++ { CCI_REG8(0xAE33), 0x00 },
+
+ /* analog setting */
+- {0x4174, 0x00},
+- {0x4175, 0x11},
+- {0x4612, 0x29},
+- {0x461B, 0x12},
+- {0x461F, 0x06},
+- {0x4635, 0x07},
+- {0x4637, 0x30},
+- {0x463F, 0x18},
+- {0x4641, 0x0D},
+- {0x465B, 0x12},
+- {0x465F, 0x11},
+- {0x4663, 0x11},
+- {0x4667, 0x0F},
+- {0x466F, 0x0F},
+- {0x470E, 0x09},
+- {0x4909, 0xAB},
+- {0x490B, 0x95},
+- {0x4915, 0x5D},
+- {0x4A5F, 0xFF},
+- {0x4A61, 0xFF},
+- {0x4A73, 0x62},
+- {0x4A85, 0x00},
+- {0x4A87, 0xFF},
++ { CCI_REG8(0x4174), 0x00 },
++ { CCI_REG8(0x4175), 0x11 },
++ { CCI_REG8(0x4612), 0x29 },
++ { CCI_REG8(0x461B), 0x12 },
++ { CCI_REG8(0x461F), 0x06 },
++ { CCI_REG8(0x4635), 0x07 },
++ { CCI_REG8(0x4637), 0x30 },
++ { CCI_REG8(0x463F), 0x18 },
++ { CCI_REG8(0x4641), 0x0D },
++ { CCI_REG8(0x465B), 0x12 },
++ { CCI_REG8(0x465F), 0x11 },
++ { CCI_REG8(0x4663), 0x11 },
++ { CCI_REG8(0x4667), 0x0F },
++ { CCI_REG8(0x466F), 0x0F },
++ { CCI_REG8(0x470E), 0x09 },
++ { CCI_REG8(0x4909), 0xAB },
++ { CCI_REG8(0x490B), 0x95 },
++ { CCI_REG8(0x4915), 0x5D },
++ { CCI_REG8(0x4A5F), 0xFF },
++ { CCI_REG8(0x4A61), 0xFF },
++ { CCI_REG8(0x4A73), 0x62 },
++ { CCI_REG8(0x4A85), 0x00 },
++ { CCI_REG8(0x4A87), 0xFF },
+
+ /* embedded data */
+- {0x5041, 0x04},
+- {0x583C, 0x04},
+- {0x620E, 0x04},
+- {0x6EB2, 0x01},
+- {0x6EB3, 0x00},
+- {0x9300, 0x02},
++ { IMX214_REG_EBD_SIZE_V, IMX214_EBD_4_LINE },
++ { CCI_REG8(0x583C), 0x04 },
++ { CCI_REG8(0x620E), 0x04 },
++ { CCI_REG8(0x6EB2), 0x01 },
++ { CCI_REG8(0x6EB3), 0x00 },
++ { IMX214_REG_ATR_FAST_MOVE, 2 },
+
+ /* imagequality */
+ /* HDR setting */
+- {0x3001, 0x07},
+- {0x6D12, 0x3F},
+- {0x6D13, 0xFF},
+- {0x9344, 0x03},
+- {0x9706, 0x10},
+- {0x9707, 0x03},
+- {0x9708, 0x03},
+- {0x9E04, 0x01},
+- {0x9E05, 0x00},
+- {0x9E0C, 0x01},
+- {0x9E0D, 0x02},
+- {0x9E24, 0x00},
+- {0x9E25, 0x8C},
+- {0x9E26, 0x00},
+- {0x9E27, 0x94},
+- {0x9E28, 0x00},
+- {0x9E29, 0x96},
++ { IMX214_REG_RMSC_NR_MODE, 0x07 },
++ { IMX214_REG_RG_STATS_LMT, IMX214_RG_STATS_LMT_14_BIT },
++ { CCI_REG8(0x9344), 0x03 },
++ { CCI_REG8(0x9706), 0x10 },
++ { CCI_REG8(0x9707), 0x03 },
++ { CCI_REG8(0x9708), 0x03 },
++ { CCI_REG8(0x9E04), 0x01 },
++ { CCI_REG8(0x9E05), 0x00 },
++ { CCI_REG8(0x9E0C), 0x01 },
++ { CCI_REG8(0x9E0D), 0x02 },
++ { CCI_REG8(0x9E24), 0x00 },
++ { CCI_REG8(0x9E25), 0x8C },
++ { CCI_REG8(0x9E26), 0x00 },
++ { CCI_REG8(0x9E27), 0x94 },
++ { CCI_REG8(0x9E28), 0x00 },
++ { CCI_REG8(0x9E29), 0x96 },
+
+ /* CNR parameter setting */
+- {0x69DB, 0x01},
++ { CCI_REG8(0x69DB), 0x01 },
+
+ /* Moire reduction */
+- {0x6957, 0x01},
++ { CCI_REG8(0x6957), 0x01 },
+
+ /* image enhancement */
+- {0x6987, 0x17},
+- {0x698A, 0x03},
+- {0x698B, 0x03},
++ { CCI_REG8(0x6987), 0x17 },
++ { CCI_REG8(0x698A), 0x03 },
++ { CCI_REG8(0x698B), 0x03 },
+
+ /* white balanace */
+- {0x0B8E, 0x01},
+- {0x0B8F, 0x00},
+- {0x0B90, 0x01},
+- {0x0B91, 0x00},
+- {0x0B92, 0x01},
+- {0x0B93, 0x00},
+- {0x0B94, 0x01},
+- {0x0B95, 0x00},
++ { IMX214_REG_ABS_GAIN_GREENR, 0x0100 },
++ { IMX214_REG_ABS_GAIN_RED, 0x0100 },
++ { IMX214_REG_ABS_GAIN_BLUE, 0x0100 },
++ { IMX214_REG_ABS_GAIN_GREENB, 0x0100 },
+
+ /* ATR setting */
+- {0x6E50, 0x00},
+- {0x6E51, 0x32},
+- {0x9340, 0x00},
+- {0x9341, 0x3C},
+- {0x9342, 0x03},
+- {0x9343, 0xFF},
+- {IMX214_TABLE_END, 0x00}
++ { CCI_REG8(0x6E50), 0x00 },
++ { CCI_REG8(0x6E51), 0x32 },
++ { CCI_REG8(0x9340), 0x00 },
++ { CCI_REG8(0x9341), 0x3C },
++ { CCI_REG8(0x9342), 0x03 },
++ { CCI_REG8(0x9343), 0xFF },
+ };
+
+ /*
+@@ -427,16 +464,19 @@ static const struct reg_8 mode_table_common[] = {
+ static const struct imx214_mode {
+ u32 width;
+ u32 height;
+- const struct reg_8 *reg_table;
++ unsigned int num_of_regs;
++ const struct cci_reg_sequence *reg_table;
+ } imx214_modes[] = {
+ {
+ .width = 4096,
+ .height = 2304,
++ .num_of_regs = ARRAY_SIZE(mode_4096x2304),
+ .reg_table = mode_4096x2304,
+ },
+ {
+ .width = 1920,
+ .height = 1080,
++ .num_of_regs = ARRAY_SIZE(mode_1920x1080),
+ .reg_table = mode_1920x1080,
+ },
+ };
+@@ -490,6 +530,22 @@ static int __maybe_unused imx214_power_off(struct device *dev)
+ return 0;
+ }
+
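++/* Fill a pad format from the selected mode; the sensor only produces IMX214_MBUS_CODE */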
++static void imx214_update_pad_format(struct imx214 *imx214,
++ const struct imx214_mode *mode,
++ struct v4l2_mbus_framefmt *fmt, u32 code)
++{
++ fmt->code = IMX214_MBUS_CODE;
++ fmt->width = mode->width;
++ fmt->height = mode->height;
++ fmt->field = V4L2_FIELD_NONE;
++ fmt->colorspace = V4L2_COLORSPACE_SRGB;
++ fmt->ycbcr_enc = V4L2_MAP_YCBCR_ENC_DEFAULT(fmt->colorspace);
++ fmt->quantization = V4L2_MAP_QUANTIZATION_DEFAULT(true,
++ fmt->colorspace,
++ fmt->ycbcr_enc);
++ fmt->xfer_func = V4L2_MAP_XFER_FUNC_DEFAULT(fmt->colorspace);
++}
++
+ static int imx214_enum_mbus_code(struct v4l2_subdev *sd,
+ struct v4l2_subdev_state *sd_state,
+ struct v4l2_subdev_mbus_code_enum *code)
+@@ -549,52 +605,6 @@ static const struct v4l2_subdev_core_ops imx214_core_ops = {
+ #endif
+ };
+
+-static struct v4l2_mbus_framefmt *
+-__imx214_get_pad_format(struct imx214 *imx214,
+- struct v4l2_subdev_state *sd_state,
+- unsigned int pad,
+- enum v4l2_subdev_format_whence which)
+-{
+- switch (which) {
+- case V4L2_SUBDEV_FORMAT_TRY:
+- return v4l2_subdev_state_get_format(sd_state, pad);
+- case V4L2_SUBDEV_FORMAT_ACTIVE:
+- return &imx214->fmt;
+- default:
+- return NULL;
+- }
+-}
+-
+-static int imx214_get_format(struct v4l2_subdev *sd,
+- struct v4l2_subdev_state *sd_state,
+- struct v4l2_subdev_format *format)
+-{
+- struct imx214 *imx214 = to_imx214(sd);
+-
+- mutex_lock(&imx214->mutex);
+- format->format = *__imx214_get_pad_format(imx214, sd_state,
+- format->pad,
+- format->which);
+- mutex_unlock(&imx214->mutex);
+-
+- return 0;
+-}
+-
+-static struct v4l2_rect *
+-__imx214_get_pad_crop(struct imx214 *imx214,
+- struct v4l2_subdev_state *sd_state,
+- unsigned int pad, enum v4l2_subdev_format_whence which)
+-{
+- switch (which) {
+- case V4L2_SUBDEV_FORMAT_TRY:
+- return v4l2_subdev_state_get_crop(sd_state, pad);
+- case V4L2_SUBDEV_FORMAT_ACTIVE:
+- return &imx214->crop;
+- default:
+- return NULL;
+- }
+-}
+-
+ static int imx214_set_format(struct v4l2_subdev *sd,
+ struct v4l2_subdev_state *sd_state,
+ struct v4l2_subdev_format *format)
+@@ -604,34 +614,20 @@ static int imx214_set_format(struct v4l2_subdev *sd,
+ struct v4l2_rect *__crop;
+ const struct imx214_mode *mode;
+
+- mutex_lock(&imx214->mutex);
+-
+- __crop = __imx214_get_pad_crop(imx214, sd_state, format->pad,
+- format->which);
+-
+ mode = v4l2_find_nearest_size(imx214_modes,
+ ARRAY_SIZE(imx214_modes), width, height,
+ format->format.width,
+ format->format.height);
+
+- __crop->width = mode->width;
+- __crop->height = mode->height;
+-
+- __format = __imx214_get_pad_format(imx214, sd_state, format->pad,
+- format->which);
+- __format->width = __crop->width;
+- __format->height = __crop->height;
+- __format->code = IMX214_MBUS_CODE;
+- __format->field = V4L2_FIELD_NONE;
+- __format->colorspace = V4L2_COLORSPACE_SRGB;
+- __format->ycbcr_enc = V4L2_MAP_YCBCR_ENC_DEFAULT(__format->colorspace);
+- __format->quantization = V4L2_MAP_QUANTIZATION_DEFAULT(true,
+- __format->colorspace, __format->ycbcr_enc);
+- __format->xfer_func = V4L2_MAP_XFER_FUNC_DEFAULT(__format->colorspace);
++ imx214_update_pad_format(imx214, mode, &format->format,
++ format->format.code);
++ __format = v4l2_subdev_state_get_format(sd_state, 0);
+
+- format->format = *__format;
++ *__format = format->format;
+
+- mutex_unlock(&imx214->mutex);
++ __crop = v4l2_subdev_state_get_crop(sd_state, 0);
++ __crop->width = mode->width;
++ __crop->height = mode->height;
+
+ return 0;
+ }
+@@ -640,14 +636,9 @@ static int imx214_get_selection(struct v4l2_subdev *sd,
+ struct v4l2_subdev_state *sd_state,
+ struct v4l2_subdev_selection *sel)
+ {
+- struct imx214 *imx214 = to_imx214(sd);
+-
+ switch (sel->target) {
+ case V4L2_SEL_TGT_CROP:
+- mutex_lock(&imx214->mutex);
+- sel->r = *__imx214_get_pad_crop(imx214, sd_state, sel->pad,
+- sel->which);
+- mutex_unlock(&imx214->mutex);
++ sel->r = *v4l2_subdev_state_get_crop(sd_state, 0);
+ return 0;
+
+ case V4L2_SEL_TGT_NATIVE_SIZE:
+@@ -687,7 +678,6 @@ static int imx214_set_ctrl(struct v4l2_ctrl *ctrl)
+ {
+ struct imx214 *imx214 = container_of(ctrl->handler,
+ struct imx214, ctrls);
+- u8 vals[2];
+ int ret;
+
+ /*
+@@ -699,12 +689,7 @@ static int imx214_set_ctrl(struct v4l2_ctrl *ctrl)
+
+ switch (ctrl->id) {
+ case V4L2_CID_EXPOSURE:
+- vals[1] = ctrl->val;
+- vals[0] = ctrl->val >> 8;
+- ret = regmap_bulk_write(imx214->regmap, IMX214_REG_EXPOSURE, vals, 2);
+- if (ret < 0)
+- dev_err(imx214->dev, "Error %d\n", ret);
+- ret = 0;
++ cci_write(imx214->regmap, IMX214_REG_EXPOSURE, ctrl->val, &ret);
+ break;
+
+ default:
+@@ -790,76 +775,52 @@ static int imx214_ctrls_init(struct imx214 *imx214)
+ return 0;
+ };
+
+-#define MAX_CMD 4
+-static int imx214_write_table(struct imx214 *imx214,
+- const struct reg_8 table[])
+-{
+- u8 vals[MAX_CMD];
+- int i;
+- int ret;
+-
+- for (; table->addr != IMX214_TABLE_END ; table++) {
+- if (table->addr == IMX214_TABLE_WAIT_MS) {
+- usleep_range(table->val * 1000,
+- table->val * 1000 + 500);
+- continue;
+- }
+-
+- for (i = 0; i < MAX_CMD; i++) {
+- if (table[i].addr != (table[0].addr + i))
+- break;
+- vals[i] = table[i].val;
+- }
+-
+- ret = regmap_bulk_write(imx214->regmap, table->addr, vals, i);
+-
+- if (ret) {
+- dev_err(imx214->dev, "write_table error: %d\n", ret);
+- return ret;
+- }
+-
+- table += i - 1;
+- }
+-
+- return 0;
+-}
+-
+ static int imx214_start_streaming(struct imx214 *imx214)
+ {
++ const struct v4l2_mbus_framefmt *fmt;
++ struct v4l2_subdev_state *state;
+ const struct imx214_mode *mode;
+ int ret;
+
+- mutex_lock(&imx214->mutex);
+- ret = imx214_write_table(imx214, mode_table_common);
++ ret = cci_multi_reg_write(imx214->regmap, mode_table_common,
++ ARRAY_SIZE(mode_table_common), NULL);
+ if (ret < 0) {
+ dev_err(imx214->dev, "could not sent common table %d\n", ret);
+- goto error;
++ return ret;
+ }
+
+- mode = v4l2_find_nearest_size(imx214_modes,
+- ARRAY_SIZE(imx214_modes), width, height,
+- imx214->fmt.width, imx214->fmt.height);
+- ret = imx214_write_table(imx214, mode->reg_table);
++ ret = cci_write(imx214->regmap, IMX214_REG_CSI_LANE_MODE,
++ IMX214_CSI_4_LANE_MODE, NULL);
++ if (ret) {
++ dev_err(imx214->dev, "failed to configure lanes\n");
++ return ret;
++ }
++
++ state = v4l2_subdev_get_locked_active_state(&imx214->sd);
++ fmt = v4l2_subdev_state_get_format(state, 0);
++ mode = v4l2_find_nearest_size(imx214_modes, ARRAY_SIZE(imx214_modes),
++ width, height, fmt->width, fmt->height);
++ ret = cci_multi_reg_write(imx214->regmap, mode->reg_table,
++ mode->num_of_regs, NULL);
+ if (ret < 0) {
+ dev_err(imx214->dev, "could not sent mode table %d\n", ret);
+- goto error;
++ return ret;
+ }
++
++ usleep_range(10000, 10500);
++
++ cci_write(imx214->regmap, IMX214_REG_TEMP_SENSOR_CONTROL, 0x01, NULL);
++
+ ret = __v4l2_ctrl_handler_setup(&imx214->ctrls);
+ if (ret < 0) {
+ dev_err(imx214->dev, "could not sync v4l2 controls\n");
+- goto error;
++ return ret;
+ }
+- ret = regmap_write(imx214->regmap, IMX214_REG_MODE_SELECT, IMX214_MODE_STREAMING);
+- if (ret < 0) {
++ ret = cci_write(imx214->regmap, IMX214_REG_MODE_SELECT,
++ IMX214_MODE_STREAMING, NULL);
++ if (ret < 0)
+ dev_err(imx214->dev, "could not sent start table %d\n", ret);
+- goto error;
+- }
+
+- mutex_unlock(&imx214->mutex);
+- return 0;
+-
+-error:
+- mutex_unlock(&imx214->mutex);
+ return ret;
+ }
+
+@@ -867,7 +828,8 @@ static int imx214_stop_streaming(struct imx214 *imx214)
+ {
+ int ret;
+
+- ret = regmap_write(imx214->regmap, IMX214_REG_MODE_SELECT, IMX214_MODE_STANDBY);
++ ret = cci_write(imx214->regmap, IMX214_REG_MODE_SELECT,
++ IMX214_MODE_STANDBY, NULL);
+ if (ret < 0)
+ dev_err(imx214->dev, "could not sent stop table %d\n", ret);
+
+@@ -877,14 +839,17 @@ static int imx214_stop_streaming(struct imx214 *imx214)
+ static int imx214_s_stream(struct v4l2_subdev *subdev, int enable)
+ {
+ struct imx214 *imx214 = to_imx214(subdev);
+- int ret;
++ struct v4l2_subdev_state *state;
++ int ret = 0;
+
+ if (enable) {
+ ret = pm_runtime_resume_and_get(imx214->dev);
+ if (ret < 0)
+ return ret;
+
++ state = v4l2_subdev_lock_and_get_active_state(subdev);
+ ret = imx214_start_streaming(imx214);
++ v4l2_subdev_unlock_state(state);
+ if (ret < 0)
+ goto err_rpm_put;
+ } else {
+@@ -948,7 +913,7 @@ static const struct v4l2_subdev_pad_ops imx214_subdev_pad_ops = {
+ .enum_mbus_code = imx214_enum_mbus_code,
+ .enum_frame_size = imx214_enum_frame_size,
+ .enum_frame_interval = imx214_enum_frame_interval,
+- .get_fmt = imx214_get_format,
++ .get_fmt = v4l2_subdev_get_fmt,
+ .set_fmt = imx214_set_format,
+ .get_selection = imx214_get_selection,
+ .get_frame_interval = imx214_get_frame_interval,
+@@ -965,12 +930,6 @@ static const struct v4l2_subdev_internal_ops imx214_internal_ops = {
+ .init_state = imx214_entity_init_state,
+ };
+
+-static const struct regmap_config sensor_regmap_config = {
+- .reg_bits = 16,
+- .val_bits = 8,
+- .cache_type = REGCACHE_MAPLE,
+-};
+-
+ static int imx214_get_regulators(struct device *dev, struct imx214 *imx214)
+ {
+ unsigned int i;
+@@ -992,28 +951,42 @@ static int imx214_parse_fwnode(struct device *dev)
+ int ret;
+
+ endpoint = fwnode_graph_get_next_endpoint(dev_fwnode(dev), NULL);
+- if (!endpoint) {
+- dev_err(dev, "endpoint node not found\n");
+- return -EINVAL;
+- }
++ if (!endpoint)
++ return dev_err_probe(dev, -EINVAL, "endpoint node not found\n");
+
+ ret = v4l2_fwnode_endpoint_alloc_parse(endpoint, &bus_cfg);
+ if (ret) {
+- dev_err(dev, "parsing endpoint node failed\n");
++ dev_err_probe(dev, ret, "parsing endpoint node failed\n");
++ goto done;
++ }
++
++ /* Check the number of MIPI CSI2 data lanes */
++ if (bus_cfg.bus.mipi_csi2.num_data_lanes != 4) {
++ ret = dev_err_probe(dev, -EINVAL,
++ "only 4 data lanes are currently supported\n");
+ goto done;
+ }
+
+- for (i = 0; i < bus_cfg.nr_of_link_frequencies; i++)
++ if (bus_cfg.nr_of_link_frequencies != 1)
++ dev_warn(dev, "Only one link-frequency supported, please review your DT. Continuing anyway\n");
++
++ for (i = 0; i < bus_cfg.nr_of_link_frequencies; i++) {
+ if (bus_cfg.link_frequencies[i] == IMX214_DEFAULT_LINK_FREQ)
+ break;
+-
+- if (i == bus_cfg.nr_of_link_frequencies) {
+- dev_err(dev, "link-frequencies %d not supported, Please review your DT\n",
+- IMX214_DEFAULT_LINK_FREQ);
+- ret = -EINVAL;
+- goto done;
++ if (bus_cfg.link_frequencies[i] ==
++ IMX214_DEFAULT_LINK_FREQ_LEGACY) {
++ dev_warn(dev,
++ "link-frequencies %d not supported, please review your DT. Continuing anyway\n",
++ IMX214_DEFAULT_LINK_FREQ);
++ break;
++ }
+ }
+
++ if (i == bus_cfg.nr_of_link_frequencies)
++ ret = dev_err_probe(dev, -EINVAL,
++ "link-frequencies %d not supported, please review your DT\n",
++ IMX214_DEFAULT_LINK_FREQ);
++
+ done:
+ v4l2_fwnode_endpoint_free(&bus_cfg);
+ fwnode_handle_put(endpoint);
+@@ -1037,34 +1010,28 @@ static int imx214_probe(struct i2c_client *client)
+ imx214->dev = dev;
+
+ imx214->xclk = devm_clk_get(dev, NULL);
+- if (IS_ERR(imx214->xclk)) {
+- dev_err(dev, "could not get xclk");
+- return PTR_ERR(imx214->xclk);
+- }
++ if (IS_ERR(imx214->xclk))
++ return dev_err_probe(dev, PTR_ERR(imx214->xclk),
++ "failed to get xclk\n");
+
+ ret = clk_set_rate(imx214->xclk, IMX214_DEFAULT_CLK_FREQ);
+- if (ret) {
+- dev_err(dev, "could not set xclk frequency\n");
+- return ret;
+- }
++ if (ret)
++ return dev_err_probe(dev, ret,
++ "failed to set xclk frequency\n");
+
+ ret = imx214_get_regulators(dev, imx214);
+- if (ret < 0) {
+- dev_err(dev, "cannot get regulators\n");
+- return ret;
+- }
++ if (ret < 0)
++ return dev_err_probe(dev, ret, "failed to get regulators\n");
+
+ imx214->enable_gpio = devm_gpiod_get(dev, "enable", GPIOD_OUT_LOW);
+- if (IS_ERR(imx214->enable_gpio)) {
+- dev_err(dev, "cannot get enable gpio\n");
+- return PTR_ERR(imx214->enable_gpio);
+- }
++ if (IS_ERR(imx214->enable_gpio))
++ return dev_err_probe(dev, PTR_ERR(imx214->enable_gpio),
++ "failed to get enable gpio\n");
+
+- imx214->regmap = devm_regmap_init_i2c(client, &sensor_regmap_config);
+- if (IS_ERR(imx214->regmap)) {
+- dev_err(dev, "regmap init failed\n");
+- return PTR_ERR(imx214->regmap);
+- }
++ imx214->regmap = devm_cci_regmap_init_i2c(client, 16);
++ if (IS_ERR(imx214->regmap))
++ return dev_err_probe(dev, PTR_ERR(imx214->regmap),
++ "failed to initialize CCI\n");
+
+ v4l2_i2c_subdev_init(&imx214->sd, client, &imx214_subdev_ops);
+ imx214->sd.internal_ops = &imx214_internal_ops;
+@@ -1079,9 +1046,6 @@ static int imx214_probe(struct i2c_client *client)
+ if (ret < 0)
+ goto error_power_off;
+
+- mutex_init(&imx214->mutex);
+- imx214->ctrls.lock = &imx214->mutex;
+-
+ imx214->sd.flags |= V4L2_SUBDEV_FL_HAS_DEVNODE;
+ imx214->pad.flags = MEDIA_PAD_FL_SOURCE;
+ imx214->sd.dev = &client->dev;
+@@ -1089,32 +1053,40 @@ static int imx214_probe(struct i2c_client *client)
+
+ ret = media_entity_pads_init(&imx214->sd.entity, 1, &imx214->pad);
+ if (ret < 0) {
+- dev_err(dev, "could not register media entity\n");
++ dev_err_probe(dev, ret, "failed to init entity pads\n");
+ goto free_ctrl;
+ }
+
+- imx214_entity_init_state(&imx214->sd, NULL);
++ imx214->sd.state_lock = imx214->ctrls.lock;
++ ret = v4l2_subdev_init_finalize(&imx214->sd);
++ if (ret < 0) {
++ dev_err_probe(dev, ret, "subdev init error\n");
++ goto free_entity;
++ }
+
+ pm_runtime_set_active(imx214->dev);
+ pm_runtime_enable(imx214->dev);
+
+ ret = v4l2_async_register_subdev_sensor(&imx214->sd);
+ if (ret < 0) {
+- dev_err(dev, "could not register v4l2 device\n");
+- goto free_entity;
++ dev_err_probe(dev, ret,
++ "failed to register sensor sub-device\n");
++ goto error_subdev_cleanup;
+ }
+
+ pm_runtime_idle(imx214->dev);
+
+ return 0;
+
+-free_entity:
++error_subdev_cleanup:
+ pm_runtime_disable(imx214->dev);
+ pm_runtime_set_suspended(&client->dev);
++ v4l2_subdev_cleanup(&imx214->sd);
++
++free_entity:
+ media_entity_cleanup(&imx214->sd.entity);
+
+ free_ctrl:
+- mutex_destroy(&imx214->mutex);
+ v4l2_ctrl_handler_free(&imx214->ctrls);
+
+ error_power_off:
+@@ -1129,9 +1101,9 @@ static void imx214_remove(struct i2c_client *client)
+ struct imx214 *imx214 = to_imx214(sd);
+
+ v4l2_async_unregister_subdev(&imx214->sd);
++ v4l2_subdev_cleanup(sd);
+ media_entity_cleanup(&imx214->sd.entity);
+ v4l2_ctrl_handler_free(&imx214->ctrls);
+- mutex_destroy(&imx214->mutex);
+ pm_runtime_disable(&client->dev);
+ if (!pm_runtime_status_suspended(&client->dev)) {
+ imx214_power_off(imx214->dev);
+diff --git a/drivers/media/i2c/ov08x40.c b/drivers/media/i2c/ov08x40.c
+index 83b49cf114acc7..625fbcd39068e0 100644
+--- a/drivers/media/i2c/ov08x40.c
++++ b/drivers/media/i2c/ov08x40.c
+@@ -1937,6 +1937,32 @@ static int ov08x40_stop_streaming(struct ov08x40 *ov08x)
+ OV08X40_REG_VALUE_08BIT, OV08X40_MODE_STANDBY);
+ }
+
++/* Verify chip ID */
++static int ov08x40_identify_module(struct ov08x40 *ov08x)
++{
++ struct i2c_client *client = v4l2_get_subdevdata(&ov08x->sd);
++ int ret;
++ u32 val;
++
++ if (ov08x->identified)
++ return 0;
++
++ ret = ov08x40_read_reg(ov08x, OV08X40_REG_CHIP_ID,
++ OV08X40_REG_VALUE_24BIT, &val);
++ if (ret)
++ return ret;
++
++ if (val != OV08X40_CHIP_ID) {
++ dev_err(&client->dev, "chip id mismatch: %x!=%x\n",
++ OV08X40_CHIP_ID, val);
++ return -ENXIO;
++ }
++
++ ov08x->identified = true;
++
++ return 0;
++}
++
+ static int ov08x40_set_stream(struct v4l2_subdev *sd, int enable)
+ {
+ struct ov08x40 *ov08x = to_ov08x40(sd);
+@@ -1950,6 +1976,10 @@ static int ov08x40_set_stream(struct v4l2_subdev *sd, int enable)
+ if (ret < 0)
+ goto err_unlock;
+
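++	/* Verify the chip ID on the first stream start, now that the sensor is powered up */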
++ ret = ov08x40_identify_module(ov08x);
++ if (ret)
++ goto err_rpm_put;
++
+ /*
+ * Apply default & customized values
+ * and then start streaming.
+@@ -1974,32 +2004,6 @@ static int ov08x40_set_stream(struct v4l2_subdev *sd, int enable)
+ return ret;
+ }
+
+-/* Verify chip ID */
+-static int ov08x40_identify_module(struct ov08x40 *ov08x)
+-{
+- struct i2c_client *client = v4l2_get_subdevdata(&ov08x->sd);
+- int ret;
+- u32 val;
+-
+- if (ov08x->identified)
+- return 0;
+-
+- ret = ov08x40_read_reg(ov08x, OV08X40_REG_CHIP_ID,
+- OV08X40_REG_VALUE_24BIT, &val);
+- if (ret)
+- return ret;
+-
+- if (val != OV08X40_CHIP_ID) {
+- dev_err(&client->dev, "chip id mismatch: %x!=%x\n",
+- OV08X40_CHIP_ID, val);
+- return -ENXIO;
+- }
+-
+- ov08x->identified = true;
+-
+- return 0;
+-}
+-
+ static const struct v4l2_subdev_video_ops ov08x40_video_ops = {
+ .s_stream = ov08x40_set_stream,
+ };
+diff --git a/drivers/misc/lkdtm/perms.c b/drivers/misc/lkdtm/perms.c
+index 5b861dbff27e9a..6c24426104ba6f 100644
+--- a/drivers/misc/lkdtm/perms.c
++++ b/drivers/misc/lkdtm/perms.c
+@@ -28,6 +28,13 @@ static const unsigned long rodata = 0xAA55AA55;
+ /* This is marked __ro_after_init, so it should ultimately be .rodata. */
+ static unsigned long ro_after_init __ro_after_init = 0x55AA5500;
+
++/*
++ * This is a pointer to do_nothing() which is initialized at runtime rather
++ * than build time to avoid objtool IBT validation warnings caused by an
++ * inlined unrolled memcpy() in execute_location().
++ */
++static void __ro_after_init *do_nothing_ptr;
++
+ /*
+ * This just returns to the caller. It is designed to be copied into
+ * non-executable memory regions.
+@@ -65,13 +72,12 @@ static noinline __nocfi void execute_location(void *dst, bool write)
+ {
+ void (*func)(void);
+ func_desc_t fdesc;
+- void *do_nothing_text = dereference_function_descriptor(do_nothing);
+
+- pr_info("attempting ok execution at %px\n", do_nothing_text);
++ pr_info("attempting ok execution at %px\n", do_nothing_ptr);
+ do_nothing();
+
+ if (write == CODE_WRITE) {
+- memcpy(dst, do_nothing_text, EXEC_SIZE);
++ memcpy(dst, do_nothing_ptr, EXEC_SIZE);
+ flush_icache_range((unsigned long)dst,
+ (unsigned long)dst + EXEC_SIZE);
+ }
+@@ -267,6 +273,8 @@ static void lkdtm_ACCESS_NULL(void)
+
+ void __init lkdtm_perms_init(void)
+ {
++ do_nothing_ptr = dereference_function_descriptor(do_nothing);
++
+ /* Make sure we can write to __ro_after_init values during __init */
+ ro_after_init |= 0xAA;
+ }
+diff --git a/drivers/misc/mchp_pci1xxxx/mchp_pci1xxxx_gpio.c b/drivers/misc/mchp_pci1xxxx/mchp_pci1xxxx_gpio.c
+index 04756302b87805..98d3d123004c88 100644
+--- a/drivers/misc/mchp_pci1xxxx/mchp_pci1xxxx_gpio.c
++++ b/drivers/misc/mchp_pci1xxxx/mchp_pci1xxxx_gpio.c
+@@ -37,6 +37,7 @@
+ struct pci1xxxx_gpio {
+ struct auxiliary_device *aux_dev;
+ void __iomem *reg_base;
++ raw_spinlock_t wa_lock;
+ struct gpio_chip gpio;
+ spinlock_t lock;
+ int irq_base;
+@@ -167,7 +168,7 @@ static void pci1xxxx_gpio_irq_ack(struct irq_data *data)
+ unsigned long flags;
+
+ spin_lock_irqsave(&priv->lock, flags);
+- pci1xxx_assign_bit(priv->reg_base, INTR_STAT_OFFSET(gpio), (gpio % 32), true);
++ writel(BIT(gpio % 32), priv->reg_base + INTR_STAT_OFFSET(gpio));
+ spin_unlock_irqrestore(&priv->lock, flags);
+ }
+
+@@ -257,6 +258,7 @@ static irqreturn_t pci1xxxx_gpio_irq_handler(int irq, void *dev_id)
+ struct pci1xxxx_gpio *priv = dev_id;
+ struct gpio_chip *gc = &priv->gpio;
+ unsigned long int_status = 0;
++ unsigned long wa_flags;
+ unsigned long flags;
+ u8 pincount;
+ int bit;
+@@ -280,7 +282,9 @@ static irqreturn_t pci1xxxx_gpio_irq_handler(int irq, void *dev_id)
+ writel(BIT(bit), priv->reg_base + INTR_STATUS_OFFSET(gpiobank));
+ spin_unlock_irqrestore(&priv->lock, flags);
+ irq = irq_find_mapping(gc->irq.domain, (bit + (gpiobank * 32)));
+- handle_nested_irq(irq);
++ raw_spin_lock_irqsave(&priv->wa_lock, wa_flags);
++ generic_handle_irq(irq);
++ raw_spin_unlock_irqrestore(&priv->wa_lock, wa_flags);
+ }
+ }
+ spin_lock_irqsave(&priv->lock, flags);
+diff --git a/drivers/misc/mei/hw-me-regs.h b/drivers/misc/mei/hw-me-regs.h
+index a5f88ec97df753..bc40b940ae2145 100644
+--- a/drivers/misc/mei/hw-me-regs.h
++++ b/drivers/misc/mei/hw-me-regs.h
+@@ -117,6 +117,7 @@
+
+ #define MEI_DEV_ID_LNL_M 0xA870 /* Lunar Lake Point M */
+
++#define MEI_DEV_ID_PTL_H 0xE370 /* Panther Lake H */
+ #define MEI_DEV_ID_PTL_P 0xE470 /* Panther Lake P */
+
+ /*
+diff --git a/drivers/misc/mei/pci-me.c b/drivers/misc/mei/pci-me.c
+index d6ff9d82ae94b3..3f9c60b579ae48 100644
+--- a/drivers/misc/mei/pci-me.c
++++ b/drivers/misc/mei/pci-me.c
+@@ -124,6 +124,7 @@ static const struct pci_device_id mei_me_pci_tbl[] = {
+
+ {MEI_PCI_DEVICE(MEI_DEV_ID_LNL_M, MEI_ME_PCH15_CFG)},
+
++ {MEI_PCI_DEVICE(MEI_DEV_ID_PTL_H, MEI_ME_PCH15_CFG)},
+ {MEI_PCI_DEVICE(MEI_DEV_ID_PTL_P, MEI_ME_PCH15_CFG)},
+
+ /* required last entry */
+diff --git a/drivers/misc/mei/vsc-tp.c b/drivers/misc/mei/vsc-tp.c
+index 7be1649b19725c..fa553d4914b6ed 100644
+--- a/drivers/misc/mei/vsc-tp.c
++++ b/drivers/misc/mei/vsc-tp.c
+@@ -36,20 +36,24 @@
+ #define VSC_TP_XFER_TIMEOUT_BYTES 700
+ #define VSC_TP_PACKET_PADDING_SIZE 1
+ #define VSC_TP_PACKET_SIZE(pkt) \
+- (sizeof(struct vsc_tp_packet) + le16_to_cpu((pkt)->len) + VSC_TP_CRC_SIZE)
++ (sizeof(struct vsc_tp_packet_hdr) + le16_to_cpu((pkt)->hdr.len) + VSC_TP_CRC_SIZE)
+ #define VSC_TP_MAX_PACKET_SIZE \
+- (sizeof(struct vsc_tp_packet) + VSC_TP_MAX_MSG_SIZE + VSC_TP_CRC_SIZE)
++ (sizeof(struct vsc_tp_packet_hdr) + VSC_TP_MAX_MSG_SIZE + VSC_TP_CRC_SIZE)
+ #define VSC_TP_MAX_XFER_SIZE \
+ (VSC_TP_MAX_PACKET_SIZE + VSC_TP_XFER_TIMEOUT_BYTES)
+ #define VSC_TP_NEXT_XFER_LEN(len, offset) \
+- (len + sizeof(struct vsc_tp_packet) + VSC_TP_CRC_SIZE - offset + VSC_TP_PACKET_PADDING_SIZE)
++ (len + sizeof(struct vsc_tp_packet_hdr) + VSC_TP_CRC_SIZE - offset + VSC_TP_PACKET_PADDING_SIZE)
+
+-struct vsc_tp_packet {
++struct vsc_tp_packet_hdr {
+ __u8 sync;
+ __u8 cmd;
+ __le16 len;
+ __le32 seq;
+- __u8 buf[] __counted_by(len);
++};
++
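++/* Complete transport packet: fixed header followed by a maximally-sized payload buffer */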
++struct vsc_tp_packet {
++ struct vsc_tp_packet_hdr hdr;
++ __u8 buf[VSC_TP_MAX_XFER_SIZE - sizeof(struct vsc_tp_packet_hdr)];
+ };
+
+ struct vsc_tp {
+@@ -158,12 +162,12 @@ static int vsc_tp_dev_xfer(struct vsc_tp *tp, void *obuf, void *ibuf, size_t len
+ static int vsc_tp_xfer_helper(struct vsc_tp *tp, struct vsc_tp_packet *pkt,
+ void *ibuf, u16 ilen)
+ {
+- int ret, offset = 0, cpy_len, src_len, dst_len = sizeof(struct vsc_tp_packet);
++ int ret, offset = 0, cpy_len, src_len, dst_len = sizeof(struct vsc_tp_packet_hdr);
+ int next_xfer_len = VSC_TP_PACKET_SIZE(pkt) + VSC_TP_XFER_TIMEOUT_BYTES;
+ u8 *src, *crc_src, *rx_buf = tp->rx_buf;
+ int count_down = VSC_TP_MAX_XFER_COUNT;
+ u32 recv_crc = 0, crc = ~0;
+- struct vsc_tp_packet ack;
++ struct vsc_tp_packet_hdr ack;
+ u8 *dst = (u8 *)&ack;
+ bool synced = false;
+
+@@ -280,10 +284,10 @@ int vsc_tp_xfer(struct vsc_tp *tp, u8 cmd, const void *obuf, size_t olen,
+
+ guard(mutex)(&tp->mutex);
+
+- pkt->sync = VSC_TP_PACKET_SYNC;
+- pkt->cmd = cmd;
+- pkt->len = cpu_to_le16(olen);
+- pkt->seq = cpu_to_le32(++tp->seq);
++ pkt->hdr.sync = VSC_TP_PACKET_SYNC;
++ pkt->hdr.cmd = cmd;
++ pkt->hdr.len = cpu_to_le16(olen);
++ pkt->hdr.seq = cpu_to_le32(++tp->seq);
+ memcpy(pkt->buf, obuf, olen);
+
+ crc = ~crc32(~0, (u8 *)pkt, sizeof(pkt) + olen);
+diff --git a/drivers/mmc/host/sdhci-msm.c b/drivers/mmc/host/sdhci-msm.c
+index e3d39311fdc757..3fd898647237f5 100644
+--- a/drivers/mmc/host/sdhci-msm.c
++++ b/drivers/mmc/host/sdhci-msm.c
+@@ -1873,7 +1873,7 @@ static int sdhci_msm_ice_init(struct sdhci_msm_host *msm_host,
+ if (!(cqhci_readl(cq_host, CQHCI_CAP) & CQHCI_CAP_CS))
+ return 0;
+
+- ice = of_qcom_ice_get(dev);
++ ice = devm_of_qcom_ice_get(dev);
+ if (ice == ERR_PTR(-EOPNOTSUPP)) {
+ dev_warn(dev, "Disabling inline encryption support\n");
+ ice = NULL;
+diff --git a/drivers/net/dsa/mt7530.c b/drivers/net/dsa/mt7530.c
+index 5883eb93efb114..22513f3d56db15 100644
+--- a/drivers/net/dsa/mt7530.c
++++ b/drivers/net/dsa/mt7530.c
+@@ -2541,6 +2541,9 @@ mt7531_setup_common(struct dsa_switch *ds)
+ struct mt7530_priv *priv = ds->priv;
+ int ret, i;
+
++ ds->assisted_learning_on_cpu_port = true;
++ ds->mtu_enforcement_ingress = true;
++
+ mt753x_trap_frames(priv);
+
+ /* Enable and reset MIB counters */
+@@ -2688,9 +2691,6 @@ mt7531_setup(struct dsa_switch *ds)
+ if (ret)
+ return ret;
+
+- ds->assisted_learning_on_cpu_port = true;
+- ds->mtu_enforcement_ingress = true;
+-
+ return 0;
+ }
+
+diff --git a/drivers/net/ethernet/amd/pds_core/adminq.c b/drivers/net/ethernet/amd/pds_core/adminq.c
+index c83a0a80d5334e..506f682d15c10a 100644
+--- a/drivers/net/ethernet/amd/pds_core/adminq.c
++++ b/drivers/net/ethernet/amd/pds_core/adminq.c
+@@ -5,11 +5,6 @@
+
+ #include "core.h"
+
+-struct pdsc_wait_context {
+- struct pdsc_qcq *qcq;
+- struct completion wait_completion;
+-};
+-
+ static int pdsc_process_notifyq(struct pdsc_qcq *qcq)
+ {
+ union pds_core_notifyq_comp *comp;
+@@ -109,10 +104,10 @@ void pdsc_process_adminq(struct pdsc_qcq *qcq)
+ q_info = &q->info[q->tail_idx];
+ q->tail_idx = (q->tail_idx + 1) & (q->num_descs - 1);
+
+- /* Copy out the completion data */
+- memcpy(q_info->dest, comp, sizeof(*comp));
+-
+- complete_all(&q_info->wc->wait_completion);
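++	/* Deliver the completion only if the posting thread is still waiting (it completes on timeout) */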
++ if (!completion_done(&q_info->completion)) {
++ memcpy(q_info->dest, comp, sizeof(*comp));
++ complete(&q_info->completion);
++ }
+
+ if (cq->tail_idx == cq->num_descs - 1)
+ cq->done_color = !cq->done_color;
+@@ -162,8 +157,7 @@ irqreturn_t pdsc_adminq_isr(int irq, void *data)
+ static int __pdsc_adminq_post(struct pdsc *pdsc,
+ struct pdsc_qcq *qcq,
+ union pds_core_adminq_cmd *cmd,
+- union pds_core_adminq_comp *comp,
+- struct pdsc_wait_context *wc)
++ union pds_core_adminq_comp *comp)
+ {
+ struct pdsc_queue *q = &qcq->q;
+ struct pdsc_q_info *q_info;
+@@ -205,9 +199,9 @@ static int __pdsc_adminq_post(struct pdsc *pdsc,
+ /* Post the request */
+ index = q->head_idx;
+ q_info = &q->info[index];
+- q_info->wc = wc;
+ q_info->dest = comp;
+ memcpy(q_info->desc, cmd, sizeof(*cmd));
++ reinit_completion(&q_info->completion);
+
+ dev_dbg(pdsc->dev, "head_idx %d tail_idx %d\n",
+ q->head_idx, q->tail_idx);
+@@ -231,16 +225,13 @@ int pdsc_adminq_post(struct pdsc *pdsc,
+ union pds_core_adminq_comp *comp,
+ bool fast_poll)
+ {
+- struct pdsc_wait_context wc = {
+- .wait_completion =
+- COMPLETION_INITIALIZER_ONSTACK(wc.wait_completion),
+- };
+ unsigned long poll_interval = 1;
+ unsigned long poll_jiffies;
+ unsigned long time_limit;
+ unsigned long time_start;
+ unsigned long time_done;
+ unsigned long remaining;
++ struct completion *wc;
+ int err = 0;
+ int index;
+
+@@ -250,20 +241,19 @@ int pdsc_adminq_post(struct pdsc *pdsc,
+ return -ENXIO;
+ }
+
+- wc.qcq = &pdsc->adminqcq;
+- index = __pdsc_adminq_post(pdsc, &pdsc->adminqcq, cmd, comp, &wc);
++ index = __pdsc_adminq_post(pdsc, &pdsc->adminqcq, cmd, comp);
+ if (index < 0) {
+ err = index;
+ goto err_out;
+ }
+
++ wc = &pdsc->adminqcq.q.info[index].completion;
+ time_start = jiffies;
+ time_limit = time_start + HZ * pdsc->devcmd_timeout;
+ do {
+ /* Timeslice the actual wait to catch IO errors etc early */
+ poll_jiffies = msecs_to_jiffies(poll_interval);
+- remaining = wait_for_completion_timeout(&wc.wait_completion,
+- poll_jiffies);
++ remaining = wait_for_completion_timeout(wc, poll_jiffies);
+ if (remaining)
+ break;
+
+@@ -292,9 +282,11 @@ int pdsc_adminq_post(struct pdsc *pdsc,
+ dev_dbg(pdsc->dev, "%s: elapsed %d msecs\n",
+ __func__, jiffies_to_msecs(time_done - time_start));
+
+- /* Check the results */
+- if (time_after_eq(time_done, time_limit))
++ /* Check the results and clear an un-completed timeout */
++ if (time_after_eq(time_done, time_limit) && !completion_done(wc)) {
+ err = -ETIMEDOUT;
++ complete(wc);
++ }
+
+ dev_dbg(pdsc->dev, "read admin queue completion idx %d:\n", index);
+ dynamic_hex_dump("comp ", DUMP_PREFIX_OFFSET, 16, 1,
+diff --git a/drivers/net/ethernet/amd/pds_core/auxbus.c b/drivers/net/ethernet/amd/pds_core/auxbus.c
+index 2babea11099179..b76a9b7e0aed66 100644
+--- a/drivers/net/ethernet/amd/pds_core/auxbus.c
++++ b/drivers/net/ethernet/amd/pds_core/auxbus.c
+@@ -107,9 +107,6 @@ int pds_client_adminq_cmd(struct pds_auxiliary_dev *padev,
+ dev_dbg(pf->dev, "%s: %s opcode %d\n",
+ __func__, dev_name(&padev->aux_dev.dev), req->opcode);
+
+- if (pf->state)
+- return -ENXIO;
+-
+ /* Wrap the client's request */
+ cmd.client_request.opcode = PDS_AQ_CMD_CLIENT_CMD;
+ cmd.client_request.client_id = cpu_to_le16(padev->client_id);
+diff --git a/drivers/net/ethernet/amd/pds_core/core.c b/drivers/net/ethernet/amd/pds_core/core.c
+index 536635e5772799..3c60d4cf9d0e17 100644
+--- a/drivers/net/ethernet/amd/pds_core/core.c
++++ b/drivers/net/ethernet/amd/pds_core/core.c
+@@ -167,8 +167,10 @@ static void pdsc_q_map(struct pdsc_queue *q, void *base, dma_addr_t base_pa)
+ q->base = base;
+ q->base_pa = base_pa;
+
+- for (i = 0, cur = q->info; i < q->num_descs; i++, cur++)
++ for (i = 0, cur = q->info; i < q->num_descs; i++, cur++) {
+ cur->desc = base + (i * q->desc_size);
++ init_completion(&cur->completion);
++ }
+ }
+
+ static void pdsc_cq_map(struct pdsc_cq *cq, void *base, dma_addr_t base_pa)
+@@ -325,10 +327,7 @@ static int pdsc_core_init(struct pdsc *pdsc)
+ size_t sz;
+ int err;
+
+- /* Scale the descriptor ring length based on number of CPUs and VFs */
+- numdescs = max_t(int, PDSC_ADMINQ_MIN_LENGTH, num_online_cpus());
+- numdescs += 2 * pci_sriov_get_totalvfs(pdsc->pdev);
+- numdescs = roundup_pow_of_two(numdescs);
++ numdescs = PDSC_ADMINQ_MAX_LENGTH;
+ err = pdsc_qcq_alloc(pdsc, PDS_CORE_QTYPE_ADMINQ, 0, "adminq",
+ PDS_CORE_QCQ_F_CORE | PDS_CORE_QCQ_F_INTR,
+ numdescs,
+diff --git a/drivers/net/ethernet/amd/pds_core/core.h b/drivers/net/ethernet/amd/pds_core/core.h
+index 14522d6d5f86bb..ec637dc4327a5d 100644
+--- a/drivers/net/ethernet/amd/pds_core/core.h
++++ b/drivers/net/ethernet/amd/pds_core/core.h
+@@ -16,7 +16,7 @@
+
+ #define PDSC_WATCHDOG_SECS 5
+ #define PDSC_QUEUE_NAME_MAX_SZ 16
+-#define PDSC_ADMINQ_MIN_LENGTH 16 /* must be a power of two */
++#define PDSC_ADMINQ_MAX_LENGTH 16 /* must be a power of two */
+ #define PDSC_NOTIFYQ_LENGTH 64 /* must be a power of two */
+ #define PDSC_TEARDOWN_RECOVERY false
+ #define PDSC_TEARDOWN_REMOVING true
+@@ -96,7 +96,7 @@ struct pdsc_q_info {
+ unsigned int bytes;
+ unsigned int nbufs;
+ struct pdsc_buf_info bufs[PDS_CORE_MAX_FRAGS];
+- struct pdsc_wait_context *wc;
++ struct completion completion;
+ void *dest;
+ };
+
+diff --git a/drivers/net/ethernet/amd/pds_core/devlink.c b/drivers/net/ethernet/amd/pds_core/devlink.c
+index 44971e71991ff5..ca23cde385e67b 100644
+--- a/drivers/net/ethernet/amd/pds_core/devlink.c
++++ b/drivers/net/ethernet/amd/pds_core/devlink.c
+@@ -102,7 +102,7 @@ int pdsc_dl_info_get(struct devlink *dl, struct devlink_info_req *req,
+ .fw_control.opcode = PDS_CORE_CMD_FW_CONTROL,
+ .fw_control.oper = PDS_CORE_FW_GET_LIST,
+ };
+- struct pds_core_fw_list_info fw_list;
++ struct pds_core_fw_list_info fw_list = {};
+ struct pdsc *pdsc = devlink_priv(dl);
+ union pds_core_dev_comp comp;
+ char buf[32];
+@@ -115,8 +115,6 @@ int pdsc_dl_info_get(struct devlink *dl, struct devlink_info_req *req,
+ if (!err)
+ memcpy_fromio(&fw_list, pdsc->cmd_regs->data, sizeof(fw_list));
+ mutex_unlock(&pdsc->devcmd_lock);
+- if (err && err != -EIO)
+- return err;
+
+ listlen = min(fw_list.num_fw_slots, ARRAY_SIZE(fw_list.fw_names));
+ for (i = 0; i < listlen; i++) {
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc.c b/drivers/net/ethernet/freescale/enetc/enetc.c
+index 2106861463e40f..3ee52f4b11660a 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc.c
++++ b/drivers/net/ethernet/freescale/enetc/enetc.c
+@@ -1850,6 +1850,16 @@ static void enetc_xdp_drop(struct enetc_bdr *rx_ring, int rx_ring_first,
+ }
+ }
+
++static void enetc_bulk_flip_buff(struct enetc_bdr *rx_ring, int rx_ring_first,
++ int rx_ring_last)
++{
++ while (rx_ring_first != rx_ring_last) {
++ enetc_flip_rx_buff(rx_ring,
++ &rx_ring->rx_swbd[rx_ring_first]);
++ enetc_bdr_idx_inc(rx_ring, &rx_ring_first);
++ }
++}
++
+ static int enetc_clean_rx_ring_xdp(struct enetc_bdr *rx_ring,
+ struct napi_struct *napi, int work_limit,
+ struct bpf_prog *prog)
+@@ -1868,11 +1878,10 @@ static int enetc_clean_rx_ring_xdp(struct enetc_bdr *rx_ring,
+
+ while (likely(rx_frm_cnt < work_limit)) {
+ union enetc_rx_bd *rxbd, *orig_rxbd;
+- int orig_i, orig_cleaned_cnt;
+ struct xdp_buff xdp_buff;
+ struct sk_buff *skb;
++ int orig_i, err;
+ u32 bd_status;
+- int err;
+
+ rxbd = enetc_rxbd(rx_ring, i);
+ bd_status = le32_to_cpu(rxbd->r.lstatus);
+@@ -1887,7 +1896,6 @@ static int enetc_clean_rx_ring_xdp(struct enetc_bdr *rx_ring,
+ break;
+
+ orig_rxbd = rxbd;
+- orig_cleaned_cnt = cleaned_cnt;
+ orig_i = i;
+
+ enetc_build_xdp_buff(rx_ring, bd_status, &rxbd, &i,
+@@ -1915,15 +1923,21 @@ static int enetc_clean_rx_ring_xdp(struct enetc_bdr *rx_ring,
+ rx_ring->stats.xdp_drops++;
+ break;
+ case XDP_PASS:
+- rxbd = orig_rxbd;
+- cleaned_cnt = orig_cleaned_cnt;
+- i = orig_i;
+-
+- skb = enetc_build_skb(rx_ring, bd_status, &rxbd,
+- &i, &cleaned_cnt,
+- ENETC_RXB_DMA_SIZE_XDP);
+- if (unlikely(!skb))
++ skb = xdp_build_skb_from_buff(&xdp_buff);
++ /* Probably under memory pressure, stop NAPI */
++ if (unlikely(!skb)) {
++ enetc_xdp_drop(rx_ring, orig_i, i);
++ rx_ring->stats.xdp_drops++;
+ goto out;
++ }
++
++ enetc_get_offloads(rx_ring, orig_rxbd, skb);
++
++ /* These buffers are about to be owned by the stack.
++ * Update our buffer cache (the rx_swbd array elements)
++ * with their other page halves.
++ */
++ enetc_bulk_flip_buff(rx_ring, orig_i, i);
+
+ napi_gro_receive(napi, skb);
+ break;
+@@ -1965,11 +1979,7 @@ static int enetc_clean_rx_ring_xdp(struct enetc_bdr *rx_ring,
+ enetc_xdp_drop(rx_ring, orig_i, i);
+ rx_ring->stats.xdp_redirect_failures++;
+ } else {
+- while (orig_i != i) {
+- enetc_flip_rx_buff(rx_ring,
+- &rx_ring->rx_swbd[orig_i]);
+- enetc_bdr_idx_inc(rx_ring, &orig_i);
+- }
++ enetc_bulk_flip_buff(rx_ring, orig_i, i);
+ xdp_redirect_frm_cnt++;
+ rx_ring->stats.xdp_redirect++;
+ }
+@@ -3362,7 +3372,8 @@ static int enetc_int_vector_init(struct enetc_ndev_priv *priv, int i,
+ bdr->buffer_offset = ENETC_RXB_PAD;
+ priv->rx_ring[i] = bdr;
+
+- err = xdp_rxq_info_reg(&bdr->xdp.rxq, priv->ndev, i, 0);
++ err = __xdp_rxq_info_reg(&bdr->xdp.rxq, priv->ndev, i, 0,
++ ENETC_RXB_DMA_SIZE_XDP);
+ if (err)
+ goto free_vector;
+
+diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+index 0cd1ecacfd29f5..477b8732b86099 100644
+--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c
++++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+@@ -3997,11 +3997,27 @@ static int mtk_hw_init(struct mtk_eth *eth, bool reset)
+ mtk_w32(eth, 0x21021000, MTK_FE_INT_GRP);
+
+ if (mtk_is_netsys_v3_or_greater(eth)) {
+- /* PSE should not drop port1, port8 and port9 packets */
+- mtk_w32(eth, 0x00000302, PSE_DROP_CFG);
++ /* PSE dummy page mechanism */
++ mtk_w32(eth, PSE_DUMMY_WORK_GDM(1) | PSE_DUMMY_WORK_GDM(2) |
++ PSE_DUMMY_WORK_GDM(3) | DUMMY_PAGE_THR, PSE_DUMY_REQ);
++
++ /* PSE free buffer drop threshold */
++ mtk_w32(eth, 0x00600009, PSE_IQ_REV(8));
++
++ /* PSE should not drop port8, port9 and port13 packets from
++ * WDMA Tx
++ */
++ mtk_w32(eth, 0x00002300, PSE_DROP_CFG);
++
++ /* PSE should drop packets to port8, port9 and port13 on WDMA Rx
++ * ring full
++ */
++ mtk_w32(eth, 0x00002300, PSE_PPE_DROP(0));
++ mtk_w32(eth, 0x00002300, PSE_PPE_DROP(1));
++ mtk_w32(eth, 0x00002300, PSE_PPE_DROP(2));
+
+ /* GDM and CDM Threshold */
+- mtk_w32(eth, 0x00000707, MTK_CDMW0_THRES);
++ mtk_w32(eth, 0x08000707, MTK_CDMW0_THRES);
+ mtk_w32(eth, 0x00000077, MTK_CDMW1_THRES);
+
+ /* Disable GDM1 RX CRC stripping */
+@@ -4018,7 +4034,7 @@ static int mtk_hw_init(struct mtk_eth *eth, bool reset)
+ mtk_w32(eth, 0x00000300, PSE_DROP_CFG);
+
+ /* PSE should drop packets to port 8/9 on WDMA Rx ring full */
+- mtk_w32(eth, 0x00000300, PSE_PPE0_DROP);
++ mtk_w32(eth, 0x00000300, PSE_PPE_DROP(0));
+
+ /* PSE Free Queue Flow Control */
+ mtk_w32(eth, 0x01fa01f4, PSE_FQFC_CFG2);
+diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.h b/drivers/net/ethernet/mediatek/mtk_eth_soc.h
+index 8d7b6818d86012..0570623e569d5e 100644
+--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.h
++++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.h
+@@ -151,7 +151,15 @@
+ #define PSE_FQFC_CFG1 0x100
+ #define PSE_FQFC_CFG2 0x104
+ #define PSE_DROP_CFG 0x108
+-#define PSE_PPE0_DROP 0x110
++#define PSE_PPE_DROP(x) (0x110 + ((x) * 0x4))
++
++/* PSE Last FreeQ Page Request Control */
++#define PSE_DUMY_REQ 0x10C
++/* PSE_DUMY_REQ is not a typo but actually called like that also in
++ * MediaTek's datasheet
++ */
++#define PSE_DUMMY_WORK_GDM(x) BIT(16 + (x))
++#define DUMMY_PAGE_THR 0x1
+
+ /* PSE Input Queue Reservation Register*/
+ #define PSE_IQ_REV(x) (0x140 + (((x) - 1) << 2))
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_ttc.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_ttc.c
+index 9f13cea164465e..43b2216bc0a22b 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_ttc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_ttc.c
+@@ -618,10 +618,6 @@ struct mlx5_ttc_table *mlx5_create_inner_ttc_table(struct mlx5_core_dev *dev,
+ bool use_l4_type;
+ int err;
+
+- ttc = kvzalloc(sizeof(*ttc), GFP_KERNEL);
+- if (!ttc)
+- return ERR_PTR(-ENOMEM);
+-
+ switch (params->ns_type) {
+ case MLX5_FLOW_NAMESPACE_PORT_SEL:
+ use_l4_type = MLX5_CAP_GEN_2(dev, pcc_ifa2) &&
+@@ -635,7 +631,16 @@ struct mlx5_ttc_table *mlx5_create_inner_ttc_table(struct mlx5_core_dev *dev,
+ return ERR_PTR(-EINVAL);
+ }
+
++ ttc = kvzalloc(sizeof(*ttc), GFP_KERNEL);
++ if (!ttc)
++ return ERR_PTR(-ENOMEM);
++
+ ns = mlx5_get_flow_namespace(dev, params->ns_type);
++ if (!ns) {
++ kvfree(ttc);
++ return ERR_PTR(-EOPNOTSUPP);
++ }
++
+ groups = use_l4_type ? &inner_ttc_groups[TTC_GROUPS_USE_L4_TYPE] :
+ &inner_ttc_groups[TTC_GROUPS_DEFAULT];
+
+@@ -691,10 +696,6 @@ struct mlx5_ttc_table *mlx5_create_ttc_table(struct mlx5_core_dev *dev,
+ bool use_l4_type;
+ int err;
+
+- ttc = kvzalloc(sizeof(*ttc), GFP_KERNEL);
+- if (!ttc)
+- return ERR_PTR(-ENOMEM);
+-
+ switch (params->ns_type) {
+ case MLX5_FLOW_NAMESPACE_PORT_SEL:
+ use_l4_type = MLX5_CAP_GEN_2(dev, pcc_ifa2) &&
+@@ -708,7 +709,16 @@ struct mlx5_ttc_table *mlx5_create_ttc_table(struct mlx5_core_dev *dev,
+ return ERR_PTR(-EINVAL);
+ }
+
++ ttc = kvzalloc(sizeof(*ttc), GFP_KERNEL);
++ if (!ttc)
++ return ERR_PTR(-ENOMEM);
++
+ ns = mlx5_get_flow_namespace(dev, params->ns_type);
++ if (!ns) {
++ kvfree(ttc);
++ return ERR_PTR(-EOPNOTSUPP);
++ }
++
+ groups = use_l4_type ? &ttc_groups[TTC_GROUPS_USE_L4_TYPE] :
+ &ttc_groups[TTC_GROUPS_DEFAULT];
+
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c
+index 16020b72dec837..ece8588b3b17b7 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c
+@@ -523,24 +523,6 @@ static int socfpga_dwmac_resume(struct device *dev)
+
+ dwmac_priv->ops->set_phy_mode(priv->plat->bsp_priv);
+
+- /* Before the enet controller is suspended, the phy is suspended.
+- * This causes the phy clock to be gated. The enet controller is
+- * resumed before the phy, so the clock is still gated "off" when
+- * the enet controller is resumed. This code makes sure the phy
+- * is "resumed" before reinitializing the enet controller since
+- * the enet controller depends on an active phy clock to complete
+- * a DMA reset. A DMA reset will "time out" if executed
+- * with no phy clock input on the Synopsys enet controller.
+- * Verified through Synopsys Case #8000711656.
+- *
+- * Note that the phy clock is also gated when the phy is isolated.
+- * Phy "suspend" and "isolate" controls are located in phy basic
+- * control register 0, and can be modified by the phy driver
+- * framework.
+- */
+- if (ndev->phydev)
+- phy_resume(ndev->phydev);
+-
+ return stmmac_resume(dev);
+ }
+ #endif /* CONFIG_PM_SLEEP */
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac1000.h b/drivers/net/ethernet/stmicro/stmmac/dwmac1000.h
+index 600fea8f712fd6..2d5bf1de5d2e44 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac1000.h
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac1000.h
+@@ -331,8 +331,8 @@ enum rtc_control {
+
+ /* PTP and timestamping registers */
+
+-#define GMAC3_X_ATSNS GENMASK(19, 16)
+-#define GMAC3_X_ATSNS_SHIFT 16
++#define GMAC3_X_ATSNS GENMASK(29, 25)
++#define GMAC3_X_ATSNS_SHIFT 25
+
+ #define GMAC_PTP_TCR_ATSFC BIT(24)
+ #define GMAC_PTP_TCR_ATSEN0 BIT(25)
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac1000_core.c b/drivers/net/ethernet/stmicro/stmmac/dwmac1000_core.c
+index 96bcda0856ec62..11c525b8d26987 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac1000_core.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac1000_core.c
+@@ -560,7 +560,7 @@ void dwmac1000_get_ptptime(void __iomem *ptpaddr, u64 *ptp_time)
+ u64 ns;
+
+ ns = readl(ptpaddr + GMAC_PTP_ATNR);
+- ns += readl(ptpaddr + GMAC_PTP_ATSR) * NSEC_PER_SEC;
++ ns += (u64)readl(ptpaddr + GMAC_PTP_ATSR) * NSEC_PER_SEC;
+
+ *ptp_time = ns;
+ }
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_hwtstamp.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_hwtstamp.c
+index 0f59aa98260404..e2840fa241f291 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_hwtstamp.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_hwtstamp.c
+@@ -222,7 +222,7 @@ static void get_ptptime(void __iomem *ptpaddr, u64 *ptp_time)
+ u64 ns;
+
+ ns = readl(ptpaddr + PTP_ATNR);
+- ns += readl(ptpaddr + PTP_ATSR) * NSEC_PER_SEC;
++ ns += (u64)readl(ptpaddr + PTP_ATSR) * NSEC_PER_SEC;
+
+ *ptp_time = ns;
+ }
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index b7c3bfdaa1802b..b9340f8bd1828a 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -3448,9 +3448,18 @@ static int stmmac_hw_setup(struct net_device *dev, bool ptp_register)
+ if (priv->hw->phylink_pcs)
+ phylink_pcs_pre_init(priv->phylink, priv->hw->phylink_pcs);
+
++ /* Note that clk_rx_i must be running for reset to complete. This
++ * clock may also be required when setting the MAC address.
++ *
++ * Block the receive clock stop for LPI mode at the PHY in case
++ * the link is established with EEE mode active.
++ */
++ phylink_rx_clk_stop_block(priv->phylink);
++
+ /* DMA initialization and SW reset */
+ ret = stmmac_init_dma_engine(priv);
+ if (ret < 0) {
++ phylink_rx_clk_stop_unblock(priv->phylink);
+ netdev_err(priv->dev, "%s: DMA engine initialization failed\n",
+ __func__);
+ return ret;
+@@ -3458,6 +3467,7 @@ static int stmmac_hw_setup(struct net_device *dev, bool ptp_register)
+
+ /* Copy the MAC addr into the HW */
+ stmmac_set_umac_addr(priv, priv->hw, dev->dev_addr, 0);
++ phylink_rx_clk_stop_unblock(priv->phylink);
+
+ /* PS and related bits will be programmed according to the speed */
+ if (priv->hw->pcs) {
+@@ -3568,7 +3578,9 @@ static int stmmac_hw_setup(struct net_device *dev, bool ptp_register)
+ /* Start the ball rolling... */
+ stmmac_start_all_dma(priv);
+
++ phylink_rx_clk_stop_block(priv->phylink);
+ stmmac_set_hw_vlan_mode(priv, priv->hw);
++ phylink_rx_clk_stop_unblock(priv->phylink);
+
+ return 0;
+ }
+@@ -5853,6 +5865,9 @@ static void stmmac_tx_timeout(struct net_device *dev, unsigned int txqueue)
+ * whenever multicast addresses must be enabled/disabled.
+ * Return value:
+ * void.
++ *
++ * FIXME: This may need RXC to be running, but it may be called with BH
++ * disabled, which means we can't call phylink_rx_clk_stop*().
+ */
+ static void stmmac_set_rx_mode(struct net_device *dev)
+ {
+@@ -5985,7 +6000,9 @@ static int stmmac_set_features(struct net_device *netdev,
+ else
+ priv->hw->hw_vlan_en = false;
+
++ phylink_rx_clk_stop_block(priv->phylink);
+ stmmac_set_hw_vlan_mode(priv, priv->hw);
++ phylink_rx_clk_stop_unblock(priv->phylink);
+
+ return 0;
+ }
+@@ -6269,7 +6286,9 @@ static int stmmac_set_mac_address(struct net_device *ndev, void *addr)
+ if (ret)
+ goto set_mac_error;
+
++ phylink_rx_clk_stop_block(priv->phylink);
+ stmmac_set_umac_addr(priv, priv->hw, ndev->dev_addr, 0);
++ phylink_rx_clk_stop_unblock(priv->phylink);
+
+ set_mac_error:
+ pm_runtime_put(priv->device);
+@@ -6625,6 +6644,9 @@ static int stmmac_vlan_update(struct stmmac_priv *priv, bool is_double)
+ return stmmac_update_vlan_hash(priv, priv->hw, hash, pmatch, is_double);
+ }
+
++/* FIXME: This may need RXC to be running, but it may be called with BH
++ * disabled, which means we can't call phylink_rx_clk_stop*().
++ */
+ static int stmmac_vlan_rx_add_vid(struct net_device *ndev, __be16 proto, u16 vid)
+ {
+ struct stmmac_priv *priv = netdev_priv(ndev);
+@@ -6656,6 +6678,9 @@ static int stmmac_vlan_rx_add_vid(struct net_device *ndev, __be16 proto, u16 vid
+ return ret;
+ }
+
++/* FIXME: This may need RXC to be running, but it may be called with BH
++ * disabled, which means we can't call phylink_rx_clk_stop*().
++ */
+ static int stmmac_vlan_rx_kill_vid(struct net_device *ndev, __be16 proto, u16 vid)
+ {
+ struct stmmac_priv *priv = netdev_priv(ndev);
+@@ -7813,13 +7838,11 @@ int stmmac_suspend(struct device *dev)
+ mutex_unlock(&priv->lock);
+
+ rtnl_lock();
+- if (device_may_wakeup(priv->device) && priv->plat->pmt) {
+- phylink_suspend(priv->phylink, true);
+- } else {
+- if (device_may_wakeup(priv->device))
+- phylink_speed_down(priv->phylink, false);
+- phylink_suspend(priv->phylink, false);
+- }
++ if (device_may_wakeup(priv->device) && !priv->plat->pmt)
++ phylink_speed_down(priv->phylink, false);
++
++ phylink_suspend(priv->phylink,
++ device_may_wakeup(priv->device) && priv->plat->pmt);
+ rtnl_unlock();
+
+ if (stmmac_fpe_supported(priv))
+@@ -7909,16 +7932,12 @@ int stmmac_resume(struct device *dev)
+ }
+
+ rtnl_lock();
+- if (device_may_wakeup(priv->device) && priv->plat->pmt) {
+- phylink_resume(priv->phylink);
+- } else {
+- phylink_resume(priv->phylink);
+- if (device_may_wakeup(priv->device))
+- phylink_speed_up(priv->phylink);
+- }
+- rtnl_unlock();
+
+- rtnl_lock();
++ /* Prepare the PHY to resume, ensuring that its clocks which are
++ * necessary for the MAC DMA reset to complete are running
++ */
++ phylink_prepare_resume(priv->phylink);
++
+ mutex_lock(&priv->lock);
+
+ stmmac_reset_queues_param(priv);
+@@ -7928,14 +7947,25 @@ int stmmac_resume(struct device *dev)
+
+ stmmac_hw_setup(ndev, false);
+ stmmac_init_coalesce(priv);
++ phylink_rx_clk_stop_block(priv->phylink);
+ stmmac_set_rx_mode(ndev);
+
+ stmmac_restore_hw_vlan_rx_fltr(priv, ndev, priv->hw);
++ phylink_rx_clk_stop_unblock(priv->phylink);
+
+ stmmac_enable_all_queues(priv);
+ stmmac_enable_all_dma_irq(priv);
+
+ mutex_unlock(&priv->lock);
++
++ /* phylink_resume() must be called after the hardware has been
++ * initialised because it may bring the link up immediately in a
++ * workqueue thread, which will race with initialisation.
++ */
++ phylink_resume(priv->phylink);
++ if (device_may_wakeup(priv->device) && !priv->plat->pmt)
++ phylink_speed_up(priv->phylink);
++
+ rtnl_unlock();
+
+ netif_device_attach(ndev);
+diff --git a/drivers/net/ethernet/sun/niu.c b/drivers/net/ethernet/sun/niu.c
+index 72177fea1cfb3e..edc3165f007741 100644
+--- a/drivers/net/ethernet/sun/niu.c
++++ b/drivers/net/ethernet/sun/niu.c
+@@ -9064,6 +9064,8 @@ static void niu_try_msix(struct niu *np, u8 *ldg_num_map)
+ msi_vec[i].entry = i;
+ }
+
++ pdev->dev_flags |= PCI_DEV_FLAGS_MSIX_TOUCH_ENTRY_DATA_FIRST;
++
+ num_irqs = pci_enable_msix_range(pdev, msi_vec, 1, num_irqs);
+ if (num_irqs < 0) {
+ np->flags &= ~NIU_FLAGS_MSIX;
+diff --git a/drivers/net/phy/dp83822.c b/drivers/net/phy/dp83822.c
+index 6599feca1967d7..e32013eb0186ff 100644
+--- a/drivers/net/phy/dp83822.c
++++ b/drivers/net/phy/dp83822.c
+@@ -31,6 +31,7 @@
+ #define MII_DP83822_RCSR 0x17
+ #define MII_DP83822_RESET_CTRL 0x1f
+ #define MII_DP83822_MLEDCR 0x25
++#define MII_DP83822_LDCTRL 0x403
+ #define MII_DP83822_LEDCFG1 0x460
+ #define MII_DP83822_IOCTRL1 0x462
+ #define MII_DP83822_IOCTRL2 0x463
+@@ -123,6 +124,9 @@
+ #define DP83822_IOCTRL1_GPIO1_CTRL GENMASK(2, 0)
+ #define DP83822_IOCTRL1_GPIO1_CTRL_LED_1 BIT(0)
+
++/* LDCTRL bits */
++#define DP83822_100BASE_TX_LINE_DRIVER_SWING GENMASK(7, 4)
++
+ /* IOCTRL2 bits */
+ #define DP83822_IOCTRL2_GPIO2_CLK_SRC GENMASK(6, 4)
+ #define DP83822_IOCTRL2_GPIO2_CTRL GENMASK(2, 0)
+@@ -197,6 +201,7 @@ struct dp83822_private {
+ bool set_gpio2_clk_out;
+ u32 gpio2_clk_out;
+ bool led_pin_enable[DP83822_MAX_LED_PINS];
++ int tx_amplitude_100base_tx_index;
+ };
+
+ static int dp83822_config_wol(struct phy_device *phydev,
+@@ -522,6 +527,12 @@ static int dp83822_config_init(struct phy_device *phydev)
+ FIELD_PREP(DP83822_IOCTRL2_GPIO2_CLK_SRC,
+ dp83822->gpio2_clk_out));
+
++ if (dp83822->tx_amplitude_100base_tx_index >= 0)
++ phy_modify_mmd(phydev, MDIO_MMD_VEND2, MII_DP83822_LDCTRL,
++ DP83822_100BASE_TX_LINE_DRIVER_SWING,
++ FIELD_PREP(DP83822_100BASE_TX_LINE_DRIVER_SWING,
++ dp83822->tx_amplitude_100base_tx_index));
++
+ err = dp83822_config_init_leds(phydev);
+ if (err)
+ return err;
+@@ -719,7 +730,12 @@ static int dp83822_phy_reset(struct phy_device *phydev)
+ return phydev->drv->config_init(phydev);
+ }
+
+-#ifdef CONFIG_OF_MDIO
++#if IS_ENABLED(CONFIG_OF_MDIO)
++static const u32 tx_amplitude_100base_tx_gain[] = {
++ 80, 82, 83, 85, 87, 88, 90, 92,
++ 93, 95, 97, 98, 100, 102, 103, 105,
++};
++
+ static int dp83822_of_init_leds(struct phy_device *phydev)
+ {
+ struct device_node *node = phydev->mdio.dev.of_node;
+@@ -780,6 +796,8 @@ static int dp83822_of_init(struct phy_device *phydev)
+ struct dp83822_private *dp83822 = phydev->priv;
+ struct device *dev = &phydev->mdio.dev;
+ const char *of_val;
++ int i, ret;
++ u32 val;
+
+ /* Signal detection for the PHY is only enabled if the FX_EN and the
+ * SD_EN pins are strapped. Signal detection can only enabled if FX_EN
+@@ -815,6 +833,25 @@ static int dp83822_of_init(struct phy_device *phydev)
+ dp83822->set_gpio2_clk_out = true;
+ }
+
++ ret = phy_get_tx_amplitude_gain(phydev, dev,
++ ETHTOOL_LINK_MODE_100baseT_Full_BIT,
++ &val);
++ if (!ret) {
++ for (i = 0; i < ARRAY_SIZE(tx_amplitude_100base_tx_gain); i++) {
++ if (tx_amplitude_100base_tx_gain[i] == val) {
++ dp83822->tx_amplitude_100base_tx_index = i;
++ break;
++ }
++ }
++
++ if (dp83822->tx_amplitude_100base_tx_index < 0) {
++ phydev_err(phydev,
++ "Invalid value for tx-amplitude-100base-tx-percent property (%u)\n",
++ val);
++ return -EINVAL;
++ }
++ }
++
+ return dp83822_of_init_leds(phydev);
+ }
+
+@@ -893,6 +930,7 @@ static int dp8382x_probe(struct phy_device *phydev)
+ if (!dp83822)
+ return -ENOMEM;
+
++ dp83822->tx_amplitude_100base_tx_index = -1;
+ phydev->priv = dp83822;
+
+ return 0;
+diff --git a/drivers/net/phy/microchip.c b/drivers/net/phy/microchip.c
+index 0e17cc458efdc7..93de88c1c8fd58 100644
+--- a/drivers/net/phy/microchip.c
++++ b/drivers/net/phy/microchip.c
+@@ -37,47 +37,6 @@ static int lan88xx_write_page(struct phy_device *phydev, int page)
+ return __phy_write(phydev, LAN88XX_EXT_PAGE_ACCESS, page);
+ }
+
+-static int lan88xx_phy_config_intr(struct phy_device *phydev)
+-{
+- int rc;
+-
+- if (phydev->interrupts == PHY_INTERRUPT_ENABLED) {
+- /* unmask all source and clear them before enable */
+- rc = phy_write(phydev, LAN88XX_INT_MASK, 0x7FFF);
+- rc = phy_read(phydev, LAN88XX_INT_STS);
+- rc = phy_write(phydev, LAN88XX_INT_MASK,
+- LAN88XX_INT_MASK_MDINTPIN_EN_ |
+- LAN88XX_INT_MASK_LINK_CHANGE_);
+- } else {
+- rc = phy_write(phydev, LAN88XX_INT_MASK, 0);
+- if (rc)
+- return rc;
+-
+- /* Ack interrupts after they have been disabled */
+- rc = phy_read(phydev, LAN88XX_INT_STS);
+- }
+-
+- return rc < 0 ? rc : 0;
+-}
+-
+-static irqreturn_t lan88xx_handle_interrupt(struct phy_device *phydev)
+-{
+- int irq_status;
+-
+- irq_status = phy_read(phydev, LAN88XX_INT_STS);
+- if (irq_status < 0) {
+- phy_error(phydev);
+- return IRQ_NONE;
+- }
+-
+- if (!(irq_status & LAN88XX_INT_STS_LINK_CHANGE_))
+- return IRQ_NONE;
+-
+- phy_trigger_machine(phydev);
+-
+- return IRQ_HANDLED;
+-}
+-
+ static int lan88xx_suspend(struct phy_device *phydev)
+ {
+ struct lan88xx_priv *priv = phydev->priv;
+@@ -528,8 +487,9 @@ static struct phy_driver microchip_phy_driver[] = {
+ .config_aneg = lan88xx_config_aneg,
+ .link_change_notify = lan88xx_link_change_notify,
+
+- .config_intr = lan88xx_phy_config_intr,
+- .handle_interrupt = lan88xx_handle_interrupt,
++ /* Interrupt handling is broken, do not define related
++ * functions to force polling.
++ */
+
+ .suspend = lan88xx_suspend,
+ .resume = genphy_resume,
+diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c
+index 92161af788afd2..2a01887c5617e6 100644
+--- a/drivers/net/phy/phy_device.c
++++ b/drivers/net/phy/phy_device.c
+@@ -3123,19 +3123,12 @@ void phy_get_pause(struct phy_device *phydev, bool *tx_pause, bool *rx_pause)
+ EXPORT_SYMBOL(phy_get_pause);
+
+ #if IS_ENABLED(CONFIG_OF_MDIO)
+-static int phy_get_int_delay_property(struct device *dev, const char *name)
++static int phy_get_u32_property(struct device *dev, const char *name, u32 *val)
+ {
+- s32 int_delay;
+- int ret;
+-
+- ret = device_property_read_u32(dev, name, &int_delay);
+- if (ret)
+- return ret;
+-
+- return int_delay;
++ return device_property_read_u32(dev, name, val);
+ }
+ #else
+-static int phy_get_int_delay_property(struct device *dev, const char *name)
++static int phy_get_u32_property(struct device *dev, const char *name, u32 *val)
+ {
+ return -EINVAL;
+ }
+@@ -3160,12 +3153,12 @@ static int phy_get_int_delay_property(struct device *dev, const char *name)
+ s32 phy_get_internal_delay(struct phy_device *phydev, struct device *dev,
+ const int *delay_values, int size, bool is_rx)
+ {
+- s32 delay;
+- int i;
++ int i, ret;
++ u32 delay;
+
+ if (is_rx) {
+- delay = phy_get_int_delay_property(dev, "rx-internal-delay-ps");
+- if (delay < 0 && size == 0) {
++ ret = phy_get_u32_property(dev, "rx-internal-delay-ps", &delay);
++ if (ret < 0 && size == 0) {
+ if (phydev->interface == PHY_INTERFACE_MODE_RGMII_ID ||
+ phydev->interface == PHY_INTERFACE_MODE_RGMII_RXID)
+ return 1;
+@@ -3174,8 +3167,8 @@ s32 phy_get_internal_delay(struct phy_device *phydev, struct device *dev,
+ }
+
+ } else {
+- delay = phy_get_int_delay_property(dev, "tx-internal-delay-ps");
+- if (delay < 0 && size == 0) {
++ ret = phy_get_u32_property(dev, "tx-internal-delay-ps", &delay);
++ if (ret < 0 && size == 0) {
+ if (phydev->interface == PHY_INTERFACE_MODE_RGMII_ID ||
+ phydev->interface == PHY_INTERFACE_MODE_RGMII_TXID)
+ return 1;
+@@ -3184,8 +3177,8 @@ s32 phy_get_internal_delay(struct phy_device *phydev, struct device *dev,
+ }
+ }
+
+- if (delay < 0)
+- return delay;
++ if (ret < 0)
++ return ret;
+
+ if (size == 0)
+ return delay;
+@@ -3220,6 +3213,30 @@ s32 phy_get_internal_delay(struct phy_device *phydev, struct device *dev,
+ }
+ EXPORT_SYMBOL(phy_get_internal_delay);
+
++/**
++ * phy_get_tx_amplitude_gain - stores tx amplitude gain in @val
++ * @phydev: phy_device struct
++ * @dev: pointer to the devices device struct
++ * @linkmode: linkmode for which the tx amplitude gain should be retrieved
++ * @val: tx amplitude gain
++ *
++ * Returns: 0 on success, < 0 on failure
++ */
++int phy_get_tx_amplitude_gain(struct phy_device *phydev, struct device *dev,
++ enum ethtool_link_mode_bit_indices linkmode,
++ u32 *val)
++{
++ switch (linkmode) {
++ case ETHTOOL_LINK_MODE_100baseT_Full_BIT:
++ return phy_get_u32_property(dev,
++ "tx-amplitude-100base-tx-percent",
++ val);
++ default:
++ return -EINVAL;
++ }
++}
++EXPORT_SYMBOL_GPL(phy_get_tx_amplitude_gain);
++
+ static int phy_led_set_brightness(struct led_classdev *led_cdev,
+ enum led_brightness value)
+ {
+diff --git a/drivers/net/phy/phy_led_triggers.c b/drivers/net/phy/phy_led_triggers.c
+index f550576eb9dae7..6f9d8da76c4dfb 100644
+--- a/drivers/net/phy/phy_led_triggers.c
++++ b/drivers/net/phy/phy_led_triggers.c
+@@ -91,9 +91,8 @@ int phy_led_triggers_register(struct phy_device *phy)
+ if (!phy->phy_num_led_triggers)
+ return 0;
+
+- phy->led_link_trigger = devm_kzalloc(&phy->mdio.dev,
+- sizeof(*phy->led_link_trigger),
+- GFP_KERNEL);
++ phy->led_link_trigger = kzalloc(sizeof(*phy->led_link_trigger),
++ GFP_KERNEL);
+ if (!phy->led_link_trigger) {
+ err = -ENOMEM;
+ goto out_clear;
+@@ -103,10 +102,9 @@ int phy_led_triggers_register(struct phy_device *phy)
+ if (err)
+ goto out_free_link;
+
+- phy->phy_led_triggers = devm_kcalloc(&phy->mdio.dev,
+- phy->phy_num_led_triggers,
+- sizeof(struct phy_led_trigger),
+- GFP_KERNEL);
++ phy->phy_led_triggers = kcalloc(phy->phy_num_led_triggers,
++ sizeof(struct phy_led_trigger),
++ GFP_KERNEL);
+ if (!phy->phy_led_triggers) {
+ err = -ENOMEM;
+ goto out_unreg_link;
+@@ -127,11 +125,11 @@ int phy_led_triggers_register(struct phy_device *phy)
+ out_unreg:
+ while (i--)
+ phy_led_trigger_unregister(&phy->phy_led_triggers[i]);
+- devm_kfree(&phy->mdio.dev, phy->phy_led_triggers);
++ kfree(phy->phy_led_triggers);
+ out_unreg_link:
+ phy_led_trigger_unregister(phy->led_link_trigger);
+ out_free_link:
+- devm_kfree(&phy->mdio.dev, phy->led_link_trigger);
++ kfree(phy->led_link_trigger);
+ phy->led_link_trigger = NULL;
+ out_clear:
+ phy->phy_num_led_triggers = 0;
+@@ -145,8 +143,13 @@ void phy_led_triggers_unregister(struct phy_device *phy)
+
+ for (i = 0; i < phy->phy_num_led_triggers; i++)
+ phy_led_trigger_unregister(&phy->phy_led_triggers[i]);
++ kfree(phy->phy_led_triggers);
++ phy->phy_led_triggers = NULL;
+
+- if (phy->led_link_trigger)
++ if (phy->led_link_trigger) {
+ phy_led_trigger_unregister(phy->led_link_trigger);
++ kfree(phy->led_link_trigger);
++ phy->led_link_trigger = NULL;
++ }
+ }
+ EXPORT_SYMBOL_GPL(phy_led_triggers_unregister);
+diff --git a/drivers/net/phy/phylink.c b/drivers/net/phy/phylink.c
+index b00a315de06018..5be48eb810abb3 100644
+--- a/drivers/net/phy/phylink.c
++++ b/drivers/net/phy/phylink.c
+@@ -82,12 +82,15 @@ struct phylink {
+ unsigned int pcs_state;
+
+ bool link_failed;
++ bool suspend_link_up;
++ bool major_config_failed;
+ bool mac_supports_eee_ops;
+ bool mac_supports_eee;
+ bool phy_enable_tx_lpi;
+ bool mac_enable_tx_lpi;
+ bool mac_tx_clk_stop;
+ u32 mac_tx_lpi_timer;
++ u8 mac_rx_clk_stop_blocked;
+
+ struct sfp_bus *sfp_bus;
+ bool sfp_may_have_phy;
+@@ -1360,12 +1363,16 @@ static void phylink_major_config(struct phylink *pl, bool restart,
+ phylink_an_mode_str(pl->req_link_an_mode),
+ phy_modes(state->interface));
+
++ pl->major_config_failed = false;
++
+ if (pl->mac_ops->mac_select_pcs) {
+ pcs = pl->mac_ops->mac_select_pcs(pl->config, state->interface);
+ if (IS_ERR(pcs)) {
+ phylink_err(pl,
+ "mac_select_pcs unexpectedly failed: %pe\n",
+ pcs);
++
++ pl->major_config_failed = true;
+ return;
+ }
+
+@@ -1387,6 +1394,7 @@ static void phylink_major_config(struct phylink *pl, bool restart,
+ if (err < 0) {
+ phylink_err(pl, "mac_prepare failed: %pe\n",
+ ERR_PTR(err));
++ pl->major_config_failed = true;
+ return;
+ }
+ }
+@@ -1410,8 +1418,15 @@ static void phylink_major_config(struct phylink *pl, bool restart,
+
+ phylink_mac_config(pl, state);
+
+- if (pl->pcs)
+- phylink_pcs_post_config(pl->pcs, state->interface);
++ if (pl->pcs) {
++ err = phylink_pcs_post_config(pl->pcs, state->interface);
++ if (err < 0) {
++ phylink_err(pl, "pcs_post_config failed: %pe\n",
++ ERR_PTR(err));
++
++ pl->major_config_failed = true;
++ }
++ }
+
+ if (pl->pcs_state == PCS_STATE_STARTING || pcs_changed)
+ phylink_pcs_enable(pl->pcs);
+@@ -1422,11 +1437,12 @@ static void phylink_major_config(struct phylink *pl, bool restart,
+
+ err = phylink_pcs_config(pl->pcs, neg_mode, state,
+ !!(pl->link_config.pause & MLO_PAUSE_AN));
+- if (err < 0)
+- phylink_err(pl, "pcs_config failed: %pe\n",
+- ERR_PTR(err));
+- else if (err > 0)
++ if (err < 0) {
++ phylink_err(pl, "pcs_config failed: %pe\n", ERR_PTR(err));
++ pl->major_config_failed = true;
++ } else if (err > 0) {
+ restart = true;
++ }
+
+ if (restart)
+ phylink_pcs_an_restart(pl);
+@@ -1434,16 +1450,22 @@ static void phylink_major_config(struct phylink *pl, bool restart,
+ if (pl->mac_ops->mac_finish) {
+ err = pl->mac_ops->mac_finish(pl->config, pl->act_link_an_mode,
+ state->interface);
+- if (err < 0)
++ if (err < 0) {
+ phylink_err(pl, "mac_finish failed: %pe\n",
+ ERR_PTR(err));
++
++ pl->major_config_failed = true;
++ }
+ }
+
+ if (pl->phydev && pl->phy_ib_mode) {
+ err = phy_config_inband(pl->phydev, pl->phy_ib_mode);
+- if (err < 0)
++ if (err < 0) {
+ phylink_err(pl, "phy_config_inband: %pe\n",
+ ERR_PTR(err));
++
++ pl->major_config_failed = true;
++ }
+ }
+
+ if (pl->sfp_bus) {
+@@ -1795,6 +1817,12 @@ static void phylink_resolve(struct work_struct *w)
+ }
+ }
+
++ /* If configuration of the interface failed, force the link down
++ * until we get a successful configuration.
++ */
++ if (pl->major_config_failed)
++ link_state.link = false;
++
+ if (link_state.link != cur_link_state) {
+ pl->old_link_state = link_state.link;
+ if (!link_state.link)
+@@ -2594,6 +2622,64 @@ void phylink_stop(struct phylink *pl)
+ }
+ EXPORT_SYMBOL_GPL(phylink_stop);
+
++/**
++ * phylink_rx_clk_stop_block() - block PHY ability to stop receive clock in LPI
++ * @pl: a pointer to a &struct phylink returned from phylink_create()
++ *
++ * Disable the PHY's ability to stop the receive clock while the receive path
++ * is in EEE LPI state, until the number of calls to phylink_rx_clk_stop_block()
++ * are balanced by calls to phylink_rx_clk_stop_unblock().
++ */
++void phylink_rx_clk_stop_block(struct phylink *pl)
++{
++ ASSERT_RTNL();
++
++ if (pl->mac_rx_clk_stop_blocked == U8_MAX) {
++ phylink_warn(pl, "%s called too many times - ignoring\n",
++ __func__);
++ dump_stack();
++ return;
++ }
++
++ /* Disable PHY receive clock stop if this is the first time this
++ * function has been called and clock-stop was previously enabled.
++ */
++ if (pl->mac_rx_clk_stop_blocked++ == 0 &&
++ pl->mac_supports_eee_ops && pl->phydev &&
++ pl->config->eee_rx_clk_stop_enable)
++ phy_eee_rx_clock_stop(pl->phydev, false);
++}
++EXPORT_SYMBOL_GPL(phylink_rx_clk_stop_block);
++
++/**
++ * phylink_rx_clk_stop_unblock() - unblock PHY ability to stop receive clock
++ * @pl: a pointer to a &struct phylink returned from phylink_create()
++ *
++ * All calls to phylink_rx_clk_stop_block() must be balanced with a
++ * corresponding call to phylink_rx_clk_stop_unblock() to restore the PHYs
++ * ability to stop the receive clock when the receive path is in EEE LPI mode.
++ */
++void phylink_rx_clk_stop_unblock(struct phylink *pl)
++{
++ ASSERT_RTNL();
++
++ if (pl->mac_rx_clk_stop_blocked == 0) {
++ phylink_warn(pl, "%s called too many times - ignoring\n",
++ __func__);
++ dump_stack();
++ return;
++ }
++
++ /* Re-enable PHY receive clock stop if the number of unblocks matches
++ * the number of calls to the block function above.
++ */
++ if (--pl->mac_rx_clk_stop_blocked == 0 &&
++ pl->mac_supports_eee_ops && pl->phydev &&
++ pl->config->eee_rx_clk_stop_enable)
++ phy_eee_rx_clock_stop(pl->phydev, true);
++}
++EXPORT_SYMBOL_GPL(phylink_rx_clk_stop_unblock);
++
+ /**
+ * phylink_suspend() - handle a network device suspend event
+ * @pl: a pointer to a &struct phylink returned from phylink_create()
+@@ -2619,14 +2705,16 @@ void phylink_suspend(struct phylink *pl, bool mac_wol)
+ /* Stop the resolver bringing the link up */
+ __set_bit(PHYLINK_DISABLE_MAC_WOL, &pl->phylink_disable_state);
+
+- /* Disable the carrier, to prevent transmit timeouts,
+- * but one would hope all packets have been sent. This
+- * also means phylink_resolve() will do nothing.
+- */
+- if (pl->netdev)
+- netif_carrier_off(pl->netdev);
+- else
++ pl->suspend_link_up = phylink_link_is_up(pl);
++ if (pl->suspend_link_up) {
++ /* Disable the carrier, to prevent transmit timeouts,
++ * but one would hope all packets have been sent. This
++ * also means phylink_resolve() will do nothing.
++ */
++ if (pl->netdev)
++ netif_carrier_off(pl->netdev);
+ pl->old_link_state = false;
++ }
+
+ /* We do not call mac_link_down() here as we want the
+ * link to remain up to receive the WoL packets.
+@@ -2638,6 +2726,31 @@ void phylink_suspend(struct phylink *pl, bool mac_wol)
+ }
+ EXPORT_SYMBOL_GPL(phylink_suspend);
+
++/**
++ * phylink_prepare_resume() - prepare to resume a network device
++ * @pl: a pointer to a &struct phylink returned from phylink_create()
++ *
++ * Optional, but if called must be called prior to phylink_resume().
++ *
++ * Prepare to resume a network device, preparing the PHY as necessary.
++ */
++void phylink_prepare_resume(struct phylink *pl)
++{
++ struct phy_device *phydev = pl->phydev;
++
++ ASSERT_RTNL();
++
++ /* IEEE 802.3 22.2.4.1.5 allows PHYs to stop their receive clock
++ * when PDOWN is set. However, some MACs require RXC to be running
++ * in order to resume. If the MAC requires RXC, and we have a PHY,
++ * then resume the PHY. Note that 802.3 allows PHYs 500ms before
++ * the clock meets requirements. We do not implement this delay.
++ */
++ if (pl->config->mac_requires_rxc && phydev && phydev->suspended)
++ phy_resume(phydev);
++}
++EXPORT_SYMBOL_GPL(phylink_prepare_resume);
++
+ /**
+ * phylink_resume() - handle a network device resume event
+ * @pl: a pointer to a &struct phylink returned from phylink_create()
+@@ -2652,15 +2765,18 @@ void phylink_resume(struct phylink *pl)
+ if (test_bit(PHYLINK_DISABLE_MAC_WOL, &pl->phylink_disable_state)) {
+ /* Wake-on-Lan enabled, MAC handling */
+
+- /* Call mac_link_down() so we keep the overall state balanced.
+- * Do this under the state_mutex lock for consistency. This
+- * will cause a "Link Down" message to be printed during
+- * resume, which is harmless - the true link state will be
+- * printed when we run a resolve.
+- */
+- mutex_lock(&pl->state_mutex);
+- phylink_link_down(pl);
+- mutex_unlock(&pl->state_mutex);
++ if (pl->suspend_link_up) {
++ /* Call mac_link_down() so we keep the overall state
++ * balanced. Do this under the state_mutex lock for
++ * consistency. This will cause a "Link Down" message
++ * to be printed during resume, which is harmless -
++ * the true link state will be printed when we run a
++ * resolve.
++ */
++ mutex_lock(&pl->state_mutex);
++ phylink_link_down(pl);
++ mutex_unlock(&pl->state_mutex);
++ }
+
+ /* Re-apply the link parameters so that all the settings get
+ * restored to the MAC.
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index d1ed544ba03ac4..3e4896d9537eed 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -2789,7 +2789,8 @@ static void skb_recv_done(struct virtqueue *rvq)
+ virtqueue_napi_schedule(&rq->napi, rvq);
+ }
+
+-static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
++static void virtnet_napi_do_enable(struct virtqueue *vq,
++ struct napi_struct *napi)
+ {
+ napi_enable(napi);
+
+@@ -2802,10 +2803,16 @@ static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
+ local_bh_enable();
+ }
+
+-static void virtnet_napi_tx_enable(struct virtnet_info *vi,
+- struct virtqueue *vq,
+- struct napi_struct *napi)
++static void virtnet_napi_enable(struct receive_queue *rq)
+ {
++ virtnet_napi_do_enable(rq->vq, &rq->napi);
++}
++
++static void virtnet_napi_tx_enable(struct send_queue *sq)
++{
++ struct virtnet_info *vi = sq->vq->vdev->priv;
++ struct napi_struct *napi = &sq->napi;
++
+ if (!napi->weight)
+ return;
+
+@@ -2817,15 +2824,24 @@ static void virtnet_napi_tx_enable(struct virtnet_info *vi,
+ return;
+ }
+
+- return virtnet_napi_enable(vq, napi);
++ virtnet_napi_do_enable(sq->vq, napi);
+ }
+
+-static void virtnet_napi_tx_disable(struct napi_struct *napi)
++static void virtnet_napi_tx_disable(struct send_queue *sq)
+ {
++ struct napi_struct *napi = &sq->napi;
++
+ if (napi->weight)
+ napi_disable(napi);
+ }
+
++static void virtnet_napi_disable(struct receive_queue *rq)
++{
++ struct napi_struct *napi = &rq->napi;
++
++ napi_disable(napi);
++}
++
+ static void refill_work(struct work_struct *work)
+ {
+ struct virtnet_info *vi =
+@@ -2836,9 +2852,9 @@ static void refill_work(struct work_struct *work)
+ for (i = 0; i < vi->curr_queue_pairs; i++) {
+ struct receive_queue *rq = &vi->rq[i];
+
+- napi_disable(&rq->napi);
++ virtnet_napi_disable(rq);
+ still_empty = !try_fill_recv(vi, rq, GFP_KERNEL);
+- virtnet_napi_enable(rq->vq, &rq->napi);
++ virtnet_napi_enable(rq);
+
+ /* In theory, this can happen: if we don't get any buffers in
+ * we will *never* try to fill again.
+@@ -3035,8 +3051,8 @@ static int virtnet_poll(struct napi_struct *napi, int budget)
+
+ static void virtnet_disable_queue_pair(struct virtnet_info *vi, int qp_index)
+ {
+- virtnet_napi_tx_disable(&vi->sq[qp_index].napi);
+- napi_disable(&vi->rq[qp_index].napi);
++ virtnet_napi_tx_disable(&vi->sq[qp_index]);
++ virtnet_napi_disable(&vi->rq[qp_index]);
+ xdp_rxq_info_unreg(&vi->rq[qp_index].xdp_rxq);
+ }
+
+@@ -3055,8 +3071,8 @@ static int virtnet_enable_queue_pair(struct virtnet_info *vi, int qp_index)
+ if (err < 0)
+ goto err_xdp_reg_mem_model;
+
+- virtnet_napi_enable(vi->rq[qp_index].vq, &vi->rq[qp_index].napi);
+- virtnet_napi_tx_enable(vi, vi->sq[qp_index].vq, &vi->sq[qp_index].napi);
++ virtnet_napi_enable(&vi->rq[qp_index]);
++ virtnet_napi_tx_enable(&vi->sq[qp_index]);
+
+ return 0;
+
+@@ -3302,25 +3318,72 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
+ return NETDEV_TX_OK;
+ }
+
+-static void virtnet_rx_pause(struct virtnet_info *vi, struct receive_queue *rq)
++static void __virtnet_rx_pause(struct virtnet_info *vi,
++ struct receive_queue *rq)
+ {
+ bool running = netif_running(vi->dev);
+
+ if (running) {
+- napi_disable(&rq->napi);
++ virtnet_napi_disable(rq);
+ virtnet_cancel_dim(vi, &rq->dim);
+ }
+ }
+
+-static void virtnet_rx_resume(struct virtnet_info *vi, struct receive_queue *rq)
++static void virtnet_rx_pause_all(struct virtnet_info *vi)
++{
++ int i;
++
++ /*
++ * Make sure refill_work does not run concurrently to
++ * avoid napi_disable race which leads to deadlock.
++ */
++ disable_delayed_refill(vi);
++ cancel_delayed_work_sync(&vi->refill);
++ for (i = 0; i < vi->max_queue_pairs; i++)
++ __virtnet_rx_pause(vi, &vi->rq[i]);
++}
++
++static void virtnet_rx_pause(struct virtnet_info *vi, struct receive_queue *rq)
++{
++ /*
++ * Make sure refill_work does not run concurrently to
++ * avoid napi_disable race which leads to deadlock.
++ */
++ disable_delayed_refill(vi);
++ cancel_delayed_work_sync(&vi->refill);
++ __virtnet_rx_pause(vi, rq);
++}
++
++static void __virtnet_rx_resume(struct virtnet_info *vi,
++ struct receive_queue *rq,
++ bool refill)
+ {
+ bool running = netif_running(vi->dev);
+
+- if (!try_fill_recv(vi, rq, GFP_KERNEL))
++ if (refill && !try_fill_recv(vi, rq, GFP_KERNEL))
+ schedule_delayed_work(&vi->refill, 0);
+
+ if (running)
+- virtnet_napi_enable(rq->vq, &rq->napi);
++ virtnet_napi_enable(rq);
++}
++
++static void virtnet_rx_resume_all(struct virtnet_info *vi)
++{
++ int i;
++
++ enable_delayed_refill(vi);
++ for (i = 0; i < vi->max_queue_pairs; i++) {
++ if (i < vi->curr_queue_pairs)
++ __virtnet_rx_resume(vi, &vi->rq[i], true);
++ else
++ __virtnet_rx_resume(vi, &vi->rq[i], false);
++ }
++}
++
++static void virtnet_rx_resume(struct virtnet_info *vi, struct receive_queue *rq)
++{
++ enable_delayed_refill(vi);
++ __virtnet_rx_resume(vi, rq, true);
+ }
+
+ static int virtnet_rx_resize(struct virtnet_info *vi,
+@@ -3349,7 +3412,7 @@ static void virtnet_tx_pause(struct virtnet_info *vi, struct send_queue *sq)
+ qindex = sq - vi->sq;
+
+ if (running)
+- virtnet_napi_tx_disable(&sq->napi);
++ virtnet_napi_tx_disable(sq);
+
+ txq = netdev_get_tx_queue(vi->dev, qindex);
+
+@@ -3383,7 +3446,7 @@ static void virtnet_tx_resume(struct virtnet_info *vi, struct send_queue *sq)
+ __netif_tx_unlock_bh(txq);
+
+ if (running)
+- virtnet_napi_tx_enable(vi, sq->vq, &sq->napi);
++ virtnet_napi_tx_enable(sq);
+ }
+
+ static int virtnet_tx_resize(struct virtnet_info *vi, struct send_queue *sq,
+@@ -5923,12 +5986,12 @@ static int virtnet_xdp_set(struct net_device *dev, struct bpf_prog *prog,
+ if (prog)
+ bpf_prog_add(prog, vi->max_queue_pairs - 1);
+
++ virtnet_rx_pause_all(vi);
++
+ /* Make sure NAPI is not using any XDP TX queues for RX. */
+ if (netif_running(dev)) {
+- for (i = 0; i < vi->max_queue_pairs; i++) {
+- napi_disable(&vi->rq[i].napi);
+- virtnet_napi_tx_disable(&vi->sq[i].napi);
+- }
++ for (i = 0; i < vi->max_queue_pairs; i++)
++ virtnet_napi_tx_disable(&vi->sq[i]);
+ }
+
+ if (!prog) {
+@@ -5960,14 +6023,12 @@ static int virtnet_xdp_set(struct net_device *dev, struct bpf_prog *prog,
+ vi->xdp_enabled = false;
+ }
+
++ virtnet_rx_resume_all(vi);
+ for (i = 0; i < vi->max_queue_pairs; i++) {
+ if (old_prog)
+ bpf_prog_put(old_prog);
+- if (netif_running(dev)) {
+- virtnet_napi_enable(vi->rq[i].vq, &vi->rq[i].napi);
+- virtnet_napi_tx_enable(vi, vi->sq[i].vq,
+- &vi->sq[i].napi);
+- }
++ if (netif_running(dev))
++ virtnet_napi_tx_enable(&vi->sq[i]);
+ }
+
+ return 0;
+@@ -5979,12 +6040,10 @@ static int virtnet_xdp_set(struct net_device *dev, struct bpf_prog *prog,
+ rcu_assign_pointer(vi->rq[i].xdp_prog, old_prog);
+ }
+
++ virtnet_rx_resume_all(vi);
+ if (netif_running(dev)) {
+- for (i = 0; i < vi->max_queue_pairs; i++) {
+- virtnet_napi_enable(vi->rq[i].vq, &vi->rq[i].napi);
+- virtnet_napi_tx_enable(vi, vi->sq[i].vq,
+- &vi->sq[i].napi);
+- }
++ for (i = 0; i < vi->max_queue_pairs; i++)
++ virtnet_napi_tx_enable(&vi->sq[i]);
+ }
+ if (prog)
+ bpf_prog_sub(prog, vi->max_queue_pairs - 1);
+diff --git a/drivers/net/vmxnet3/vmxnet3_xdp.c b/drivers/net/vmxnet3/vmxnet3_xdp.c
+index 616ecc38d1726c..5f470499e60024 100644
+--- a/drivers/net/vmxnet3/vmxnet3_xdp.c
++++ b/drivers/net/vmxnet3/vmxnet3_xdp.c
+@@ -397,7 +397,7 @@ vmxnet3_process_xdp(struct vmxnet3_adapter *adapter,
+
+ xdp_init_buff(&xdp, PAGE_SIZE, &rq->xdp_rxq);
+ xdp_prepare_buff(&xdp, page_address(page), rq->page_pool->p.offset,
+- rbi->len, false);
++ rcd->len, false);
+ xdp_buff_clear_frags_flag(&xdp);
+
+ xdp_prog = rcu_dereference(rq->adapter->xdp_bpf_prog);
+diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
+index 63fe51d0e64db3..809b407cece15e 100644
+--- a/drivers/net/xen-netfront.c
++++ b/drivers/net/xen-netfront.c
+@@ -985,20 +985,27 @@ static u32 xennet_run_xdp(struct netfront_queue *queue, struct page *pdata,
+ act = bpf_prog_run_xdp(prog, xdp);
+ switch (act) {
+ case XDP_TX:
+- get_page(pdata);
+ xdpf = xdp_convert_buff_to_frame(xdp);
++ if (unlikely(!xdpf)) {
++ trace_xdp_exception(queue->info->netdev, prog, act);
++ break;
++ }
++ get_page(pdata);
+ err = xennet_xdp_xmit(queue->info->netdev, 1, &xdpf, 0);
+- if (unlikely(!err))
++ if (unlikely(err <= 0)) {
++ if (err < 0)
++ trace_xdp_exception(queue->info->netdev, prog, act);
+ xdp_return_frame_rx_napi(xdpf);
+- else if (unlikely(err < 0))
+- trace_xdp_exception(queue->info->netdev, prog, act);
++ }
+ break;
+ case XDP_REDIRECT:
+ get_page(pdata);
+ err = xdp_do_redirect(queue->info->netdev, xdp, prog);
+ *need_xdp_flush = true;
+- if (unlikely(err))
++ if (unlikely(err)) {
+ trace_xdp_exception(queue->info->netdev, prog, act);
++ xdp_return_buff(xdp);
++ }
+ break;
+ case XDP_PASS:
+ case XDP_DROP:
+diff --git a/drivers/ntb/hw/amd/ntb_hw_amd.c b/drivers/ntb/hw/amd/ntb_hw_amd.c
+index d687e8c2cc78dc..63ceed89b62ef9 100644
+--- a/drivers/ntb/hw/amd/ntb_hw_amd.c
++++ b/drivers/ntb/hw/amd/ntb_hw_amd.c
+@@ -1318,6 +1318,7 @@ static const struct pci_device_id amd_ntb_pci_tbl[] = {
+ { PCI_VDEVICE(AMD, 0x148b), (kernel_ulong_t)&dev_data[1] },
+ { PCI_VDEVICE(AMD, 0x14c0), (kernel_ulong_t)&dev_data[1] },
+ { PCI_VDEVICE(AMD, 0x14c3), (kernel_ulong_t)&dev_data[1] },
++ { PCI_VDEVICE(AMD, 0x155a), (kernel_ulong_t)&dev_data[1] },
+ { PCI_VDEVICE(HYGON, 0x145b), (kernel_ulong_t)&dev_data[0] },
+ { 0, }
+ };
+diff --git a/drivers/ntb/hw/idt/ntb_hw_idt.c b/drivers/ntb/hw/idt/ntb_hw_idt.c
+index 544d8a4d2af59d..f27df8d7f3b971 100644
+--- a/drivers/ntb/hw/idt/ntb_hw_idt.c
++++ b/drivers/ntb/hw/idt/ntb_hw_idt.c
+@@ -1041,7 +1041,7 @@ static inline char *idt_get_mw_name(enum idt_mw_type mw_type)
+ static struct idt_mw_cfg *idt_scan_mws(struct idt_ntb_dev *ndev, int port,
+ unsigned char *mw_cnt)
+ {
+- struct idt_mw_cfg mws[IDT_MAX_NR_MWS], *ret_mws;
++ struct idt_mw_cfg *mws;
+ const struct idt_ntb_bar *bars;
+ enum idt_mw_type mw_type;
+ unsigned char widx, bidx, en_cnt;
+@@ -1049,6 +1049,11 @@ static struct idt_mw_cfg *idt_scan_mws(struct idt_ntb_dev *ndev, int port,
+ int aprt_size;
+ u32 data;
+
++ mws = devm_kcalloc(&ndev->ntb.pdev->dev, IDT_MAX_NR_MWS,
++ sizeof(*mws), GFP_KERNEL);
++ if (!mws)
++ return ERR_PTR(-ENOMEM);
++
+ /* Retrieve the array of the BARs registers */
+ bars = portdata_tbl[port].bars;
+
+@@ -1103,16 +1108,7 @@ static struct idt_mw_cfg *idt_scan_mws(struct idt_ntb_dev *ndev, int port,
+ }
+ }
+
+- /* Allocate memory for memory window descriptors */
+- ret_mws = devm_kcalloc(&ndev->ntb.pdev->dev, *mw_cnt, sizeof(*ret_mws),
+- GFP_KERNEL);
+- if (!ret_mws)
+- return ERR_PTR(-ENOMEM);
+-
+- /* Copy the info of detected memory windows */
+- memcpy(ret_mws, mws, (*mw_cnt)*sizeof(*ret_mws));
+-
+- return ret_mws;
++ return mws;
+ }
+
+ /*
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 8359d0aa0e44b3..150de63b26b2cf 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -4292,6 +4292,15 @@ static void nvme_scan_work(struct work_struct *work)
+ nvme_scan_ns_sequential(ctrl);
+ }
+ mutex_unlock(&ctrl->scan_lock);
++
++ /* Requeue if we have missed AENs */
++ if (test_bit(NVME_AER_NOTICE_NS_CHANGED, &ctrl->events))
++ nvme_queue_scan(ctrl);
++#ifdef CONFIG_NVME_MULTIPATH
++ else if (ctrl->ana_log_buf)
++ /* Re-read the ANA log page to not miss updates */
++ queue_work(nvme_wq, &ctrl->ana_work);
++#endif
+ }
+
+ /*
+diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
+index 2a763556508304..f39823cde62c72 100644
+--- a/drivers/nvme/host/multipath.c
++++ b/drivers/nvme/host/multipath.c
+@@ -427,7 +427,7 @@ static bool nvme_available_path(struct nvme_ns_head *head)
+ struct nvme_ns *ns;
+
+ if (!test_bit(NVME_NSHEAD_DISK_LIVE, &head->flags))
+- return NULL;
++ return false;
+
+ list_for_each_entry_srcu(ns, &head->list, siblings,
+ srcu_read_lock_held(&head->srcu)) {
+diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
+index 2e741696f3712e..6ccce0ee515735 100644
+--- a/drivers/nvme/target/core.c
++++ b/drivers/nvme/target/core.c
+@@ -324,6 +324,9 @@ int nvmet_enable_port(struct nvmet_port *port)
+
+ lockdep_assert_held(&nvmet_config_sem);
+
++ if (port->disc_addr.trtype == NVMF_TRTYPE_MAX)
++ return -EINVAL;
++
+ ops = nvmet_transports[port->disc_addr.trtype];
+ if (!ops) {
+ up_write(&nvmet_config_sem);
+diff --git a/drivers/nvme/target/fc.c b/drivers/nvme/target/fc.c
+index 7318b736d41417..ef8c5961e10c89 100644
+--- a/drivers/nvme/target/fc.c
++++ b/drivers/nvme/target/fc.c
+@@ -1028,33 +1028,24 @@ nvmet_fc_alloc_hostport(struct nvmet_fc_tgtport *tgtport, void *hosthandle)
+ struct nvmet_fc_hostport *newhost, *match = NULL;
+ unsigned long flags;
+
++ /*
++ * Caller holds a reference on tgtport.
++ */
++
+ /* if LLDD not implemented, leave as NULL */
+ if (!hosthandle)
+ return NULL;
+
+- /*
+- * take reference for what will be the newly allocated hostport if
+- * we end up using a new allocation
+- */
+- if (!nvmet_fc_tgtport_get(tgtport))
+- return ERR_PTR(-EINVAL);
+-
+ spin_lock_irqsave(&tgtport->lock, flags);
+ match = nvmet_fc_match_hostport(tgtport, hosthandle);
+ spin_unlock_irqrestore(&tgtport->lock, flags);
+
+- if (match) {
+- /* no new allocation - release reference */
+- nvmet_fc_tgtport_put(tgtport);
++ if (match)
+ return match;
+- }
+
+ newhost = kzalloc(sizeof(*newhost), GFP_KERNEL);
+- if (!newhost) {
+- /* no new allocation - release reference */
+- nvmet_fc_tgtport_put(tgtport);
++ if (!newhost)
+ return ERR_PTR(-ENOMEM);
+- }
+
+ spin_lock_irqsave(&tgtport->lock, flags);
+ match = nvmet_fc_match_hostport(tgtport, hosthandle);
+@@ -1063,6 +1054,7 @@ nvmet_fc_alloc_hostport(struct nvmet_fc_tgtport *tgtport, void *hosthandle)
+ kfree(newhost);
+ newhost = match;
+ } else {
++ nvmet_fc_tgtport_get(tgtport);
+ newhost->tgtport = tgtport;
+ newhost->hosthandle = hosthandle;
+ INIT_LIST_HEAD(&newhost->host_list);
+@@ -1097,7 +1089,8 @@ static void
+ nvmet_fc_schedule_delete_assoc(struct nvmet_fc_tgt_assoc *assoc)
+ {
+ nvmet_fc_tgtport_get(assoc->tgtport);
+- queue_work(nvmet_wq, &assoc->del_work);
++ if (!queue_work(nvmet_wq, &assoc->del_work))
++ nvmet_fc_tgtport_put(assoc->tgtport);
+ }
+
+ static bool
+diff --git a/drivers/nvme/target/pci-epf.c b/drivers/nvme/target/pci-epf.c
+index 5c4c4c1f535d44..bc1daa9aede9d2 100644
+--- a/drivers/nvme/target/pci-epf.c
++++ b/drivers/nvme/target/pci-epf.c
+@@ -2109,11 +2109,18 @@ static int nvmet_pci_epf_create_ctrl(struct nvmet_pci_epf *nvme_epf,
+
+ static void nvmet_pci_epf_start_ctrl(struct nvmet_pci_epf_ctrl *ctrl)
+ {
++
++ dev_info(ctrl->dev, "PCI link up\n");
++ ctrl->link_up = true;
++
+ schedule_delayed_work(&ctrl->poll_cc, NVMET_PCI_EPF_CC_POLL_INTERVAL);
+ }
+
+ static void nvmet_pci_epf_stop_ctrl(struct nvmet_pci_epf_ctrl *ctrl)
+ {
++ dev_info(ctrl->dev, "PCI link down\n");
++ ctrl->link_up = false;
++
+ cancel_delayed_work_sync(&ctrl->poll_cc);
+
+ nvmet_pci_epf_disable_ctrl(ctrl, false);
+@@ -2340,10 +2347,8 @@ static int nvmet_pci_epf_epc_init(struct pci_epf *epf)
+ if (ret)
+ goto out_clear_bar;
+
+- if (!epc_features->linkup_notifier) {
+- ctrl->link_up = true;
++ if (!epc_features->linkup_notifier)
+ nvmet_pci_epf_start_ctrl(&nvme_epf->ctrl);
+- }
+
+ return 0;
+
+@@ -2359,7 +2364,6 @@ static void nvmet_pci_epf_epc_deinit(struct pci_epf *epf)
+ struct nvmet_pci_epf *nvme_epf = epf_get_drvdata(epf);
+ struct nvmet_pci_epf_ctrl *ctrl = &nvme_epf->ctrl;
+
+- ctrl->link_up = false;
+ nvmet_pci_epf_destroy_ctrl(ctrl);
+
+ nvmet_pci_epf_deinit_dma(nvme_epf);
+@@ -2371,7 +2375,6 @@ static int nvmet_pci_epf_link_up(struct pci_epf *epf)
+ struct nvmet_pci_epf *nvme_epf = epf_get_drvdata(epf);
+ struct nvmet_pci_epf_ctrl *ctrl = &nvme_epf->ctrl;
+
+- ctrl->link_up = true;
+ nvmet_pci_epf_start_ctrl(ctrl);
+
+ return 0;
+@@ -2382,7 +2385,6 @@ static int nvmet_pci_epf_link_down(struct pci_epf *epf)
+ struct nvmet_pci_epf *nvme_epf = epf_get_drvdata(epf);
+ struct nvmet_pci_epf_ctrl *ctrl = &nvme_epf->ctrl;
+
+- ctrl->link_up = false;
+ nvmet_pci_epf_stop_ctrl(ctrl);
+
+ return 0;
+diff --git a/drivers/of/resolver.c b/drivers/of/resolver.c
+index 779db058c42f5b..2caad365a665c3 100644
+--- a/drivers/of/resolver.c
++++ b/drivers/of/resolver.c
+@@ -249,25 +249,22 @@ static int adjust_local_phandle_references(const struct device_node *local_fixup
+ */
+ int of_resolve_phandles(struct device_node *overlay)
+ {
+- struct device_node *child, *local_fixups, *refnode;
+- struct device_node *tree_symbols, *overlay_fixups;
++ struct device_node *child, *refnode;
++ struct device_node *overlay_fixups;
++ struct device_node __free(device_node) *local_fixups = NULL;
+ struct property *prop;
+ const char *refpath;
+ phandle phandle, phandle_delta;
+ int err;
+
+- tree_symbols = NULL;
+-
+ if (!overlay) {
+ pr_err("null overlay\n");
+- err = -EINVAL;
+- goto out;
++ return -EINVAL;
+ }
+
+ if (!of_node_check_flag(overlay, OF_DETACHED)) {
+ pr_err("overlay not detached\n");
+- err = -EINVAL;
+- goto out;
++ return -EINVAL;
+ }
+
+ phandle_delta = live_tree_max_phandle() + 1;
+@@ -279,7 +276,7 @@ int of_resolve_phandles(struct device_node *overlay)
+
+ err = adjust_local_phandle_references(local_fixups, overlay, phandle_delta);
+ if (err)
+- goto out;
++ return err;
+
+ overlay_fixups = NULL;
+
+@@ -288,16 +285,13 @@ int of_resolve_phandles(struct device_node *overlay)
+ overlay_fixups = child;
+ }
+
+- if (!overlay_fixups) {
+- err = 0;
+- goto out;
+- }
++ if (!overlay_fixups)
++ return 0;
+
+- tree_symbols = of_find_node_by_path("/__symbols__");
++ struct device_node __free(device_node) *tree_symbols = of_find_node_by_path("/__symbols__");
+ if (!tree_symbols) {
+ pr_err("no symbols in root of device tree.\n");
+- err = -EINVAL;
+- goto out;
++ return -EINVAL;
+ }
+
+ for_each_property_of_node(overlay_fixups, prop) {
+@@ -311,14 +305,12 @@ int of_resolve_phandles(struct device_node *overlay)
+ if (err) {
+ pr_err("node label '%s' not found in live devicetree symbols table\n",
+ prop->name);
+- goto out;
++ return err;
+ }
+
+ refnode = of_find_node_by_path(refpath);
+- if (!refnode) {
+- err = -ENOENT;
+- goto out;
+- }
++ if (!refnode)
++ return -ENOENT;
+
+ phandle = refnode->phandle;
+ of_node_put(refnode);
+@@ -328,11 +320,8 @@ int of_resolve_phandles(struct device_node *overlay)
+ break;
+ }
+
+-out:
+ if (err)
+ pr_err("overlay phandle fixup failed: %d\n", err);
+- of_node_put(tree_symbols);
+-
+ return err;
+ }
+ EXPORT_SYMBOL_GPL(of_resolve_phandles);
+diff --git a/drivers/pci/msi/msi.c b/drivers/pci/msi/msi.c
+index 2f647cac4cae34..8b884878861842 100644
+--- a/drivers/pci/msi/msi.c
++++ b/drivers/pci/msi/msi.c
+@@ -10,12 +10,12 @@
+ #include <linux/err.h>
+ #include <linux/export.h>
+ #include <linux/irq.h>
++#include <linux/irqdomain.h>
+
+ #include "../pci.h"
+ #include "msi.h"
+
+ int pci_msi_enable = 1;
+-int pci_msi_ignore_mask;
+
+ /**
+ * pci_msi_supported - check whether MSI may be enabled on a device
+@@ -295,8 +295,7 @@ static int msi_setup_msi_desc(struct pci_dev *dev, int nvec,
+ /* Lies, damned lies, and MSIs */
+ if (dev->dev_flags & PCI_DEV_FLAGS_HAS_MSI_MASKING)
+ control |= PCI_MSI_FLAGS_MASKBIT;
+- /* Respect XEN's mask disabling */
+- if (pci_msi_ignore_mask)
++ if (pci_msi_domain_supports(dev, MSI_FLAG_NO_MASK, DENY_LEGACY))
+ control &= ~PCI_MSI_FLAGS_MASKBIT;
+
+ desc.nvec_used = nvec;
+@@ -609,12 +608,16 @@ void msix_prepare_msi_desc(struct pci_dev *dev, struct msi_desc *desc)
+ desc->pci.msi_attrib.is_64 = 1;
+ desc->pci.msi_attrib.default_irq = dev->irq;
+ desc->pci.mask_base = dev->msix_base;
+- desc->pci.msi_attrib.can_mask = !pci_msi_ignore_mask &&
+- !desc->pci.msi_attrib.is_virtual;
+
+- if (desc->pci.msi_attrib.can_mask) {
++
++ if (!pci_msi_domain_supports(dev, MSI_FLAG_NO_MASK, DENY_LEGACY) &&
++ !desc->pci.msi_attrib.is_virtual) {
+ void __iomem *addr = pci_msix_desc_addr(desc);
+
++ desc->pci.msi_attrib.can_mask = 1;
++ /* Workaround for SUN NIU insanity, which requires write before read */
++ if (dev->dev_flags & PCI_DEV_FLAGS_MSIX_TOUCH_ENTRY_DATA_FIRST)
++ writel(0, addr + PCI_MSIX_ENTRY_DATA);
+ desc->pci.msix_ctrl = readl(addr + PCI_MSIX_ENTRY_VECTOR_CTRL);
+ }
+ }
+@@ -659,9 +662,6 @@ static void msix_mask_all(void __iomem *base, int tsize)
+ u32 ctrl = PCI_MSIX_ENTRY_CTRL_MASKBIT;
+ int i;
+
+- if (pci_msi_ignore_mask)
+- return;
+-
+ for (i = 0; i < tsize; i++, base += PCI_MSIX_ENTRY_SIZE)
+ writel(ctrl, base + PCI_MSIX_ENTRY_VECTOR_CTRL);
+ }
+@@ -744,15 +744,17 @@ static int msix_capability_init(struct pci_dev *dev, struct msix_entry *entries,
+ /* Disable INTX */
+ pci_intx_for_msi(dev, 0);
+
+- /*
+- * Ensure that all table entries are masked to prevent
+- * stale entries from firing in a crash kernel.
+- *
+- * Done late to deal with a broken Marvell NVME device
+- * which takes the MSI-X mask bits into account even
+- * when MSI-X is disabled, which prevents MSI delivery.
+- */
+- msix_mask_all(dev->msix_base, tsize);
++ if (!pci_msi_domain_supports(dev, MSI_FLAG_NO_MASK, DENY_LEGACY)) {
++ /*
++ * Ensure that all table entries are masked to prevent
++ * stale entries from firing in a crash kernel.
++ *
++ * Done late to deal with a broken Marvell NVME device
++ * which takes the MSI-X mask bits into account even
++ * when MSI-X is disabled, which prevents MSI delivery.
++ */
++ msix_mask_all(dev->msix_base, tsize);
++ }
+ pci_msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_MASKALL, 0);
+
+ pcibios_free_irq(dev);
+diff --git a/drivers/phy/rockchip/phy-rockchip-usbdp.c b/drivers/phy/rockchip/phy-rockchip-usbdp.c
+index 5b1e8a3806ed4e..c04cf64f8a35db 100644
+--- a/drivers/phy/rockchip/phy-rockchip-usbdp.c
++++ b/drivers/phy/rockchip/phy-rockchip-usbdp.c
+@@ -1045,7 +1045,6 @@ static int rk_udphy_dp_phy_init(struct phy *phy)
+ mutex_lock(&udphy->mutex);
+
+ udphy->dp_in_use = true;
+- rk_udphy_dp_hpd_event_trigger(udphy, udphy->dp_sink_hpd_cfg);
+
+ mutex_unlock(&udphy->mutex);
+
+diff --git a/drivers/pinctrl/pinctrl-mcp23s08.c b/drivers/pinctrl/pinctrl-mcp23s08.c
+index b96e6368a9568f..4d1f41488017e4 100644
+--- a/drivers/pinctrl/pinctrl-mcp23s08.c
++++ b/drivers/pinctrl/pinctrl-mcp23s08.c
+@@ -382,6 +382,7 @@ static irqreturn_t mcp23s08_irq(int irq, void *data)
+ {
+ struct mcp23s08 *mcp = data;
+ int intcap, intcon, intf, i, gpio, gpio_orig, intcap_mask, defval, gpinten;
++ bool need_unmask = false;
+ unsigned long int enabled_interrupts;
+ unsigned int child_irq;
+ bool intf_set, intcap_changed, gpio_bit_changed,
+@@ -396,9 +397,6 @@ static irqreturn_t mcp23s08_irq(int irq, void *data)
+ goto unlock;
+ }
+
+- if (mcp_read(mcp, MCP_INTCAP, &intcap))
+- goto unlock;
+-
+ if (mcp_read(mcp, MCP_INTCON, &intcon))
+ goto unlock;
+
+@@ -408,6 +406,16 @@ static irqreturn_t mcp23s08_irq(int irq, void *data)
+ if (mcp_read(mcp, MCP_DEFVAL, &defval))
+ goto unlock;
+
++ /* Mask level interrupts to avoid their immediate reactivation after clearing */
++ if (intcon) {
++ need_unmask = true;
++ if (mcp_write(mcp, MCP_GPINTEN, gpinten & ~intcon))
++ goto unlock;
++ }
++
++ if (mcp_read(mcp, MCP_INTCAP, &intcap))
++ goto unlock;
++
+ /* This clears the interrupt(configurable on S18) */
+ if (mcp_read(mcp, MCP_GPIO, &gpio))
+ goto unlock;
+@@ -470,9 +478,18 @@ static irqreturn_t mcp23s08_irq(int irq, void *data)
+ }
+ }
+
++ if (need_unmask) {
++ mutex_lock(&mcp->lock);
++ goto unlock;
++ }
++
+ return IRQ_HANDLED;
+
+ unlock:
++ if (need_unmask)
++ if (mcp_write(mcp, MCP_GPINTEN, gpinten))
++ dev_err(mcp->chip.parent, "can't unmask GPINTEN\n");
++
+ mutex_unlock(&mcp->lock);
+ return IRQ_HANDLED;
+ }
+diff --git a/drivers/pinctrl/renesas/pinctrl-rza2.c b/drivers/pinctrl/renesas/pinctrl-rza2.c
+index 8b36161c7c5022..3b58129638500e 100644
+--- a/drivers/pinctrl/renesas/pinctrl-rza2.c
++++ b/drivers/pinctrl/renesas/pinctrl-rza2.c
+@@ -246,6 +246,9 @@ static int rza2_gpio_register(struct rza2_pinctrl_priv *priv)
+ int ret;
+
+ chip.label = devm_kasprintf(priv->dev, GFP_KERNEL, "%pOFn", np);
++ if (!chip.label)
++ return -ENOMEM;
++
+ chip.parent = priv->dev;
+ chip.ngpio = priv->npins;
+
+diff --git a/drivers/platform/x86/x86-android-tablets/dmi.c b/drivers/platform/x86/x86-android-tablets/dmi.c
+index 3e5fa3b6e2fdfe..278c6d151dc492 100644
+--- a/drivers/platform/x86/x86-android-tablets/dmi.c
++++ b/drivers/platform/x86/x86-android-tablets/dmi.c
+@@ -179,6 +179,18 @@ const struct dmi_system_id x86_android_tablet_ids[] __initconst = {
+ },
+ .driver_data = (void *)&peaq_c1010_info,
+ },
++ {
++ /* Vexia Edu Atla 10 tablet 5V version */
++ .matches = {
++ /* Having all 3 of these not set is somewhat unique */
++ DMI_MATCH(DMI_SYS_VENDOR, "To be filled by O.E.M."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "To be filled by O.E.M."),
++ DMI_MATCH(DMI_BOARD_NAME, "To be filled by O.E.M."),
++ /* Above strings are too generic, also match on BIOS date */
++ DMI_MATCH(DMI_BIOS_DATE, "05/14/2015"),
++ },
++ .driver_data = (void *)&vexia_edu_atla10_5v_info,
++ },
+ {
+ /* Vexia Edu Atla 10 tablet 9V version */
+ .matches = {
+@@ -187,7 +199,7 @@ const struct dmi_system_id x86_android_tablet_ids[] __initconst = {
+ /* Above strings are too generic, also match on BIOS date */
+ DMI_MATCH(DMI_BIOS_DATE, "08/25/2014"),
+ },
+- .driver_data = (void *)&vexia_edu_atla10_info,
++ .driver_data = (void *)&vexia_edu_atla10_9v_info,
+ },
+ {
+ /* Whitelabel (sold as various brands) TM800A550L */
+diff --git a/drivers/platform/x86/x86-android-tablets/other.c b/drivers/platform/x86/x86-android-tablets/other.c
+index 1d93d9edb23f48..f7bd9f863c85ed 100644
+--- a/drivers/platform/x86/x86-android-tablets/other.c
++++ b/drivers/platform/x86/x86-android-tablets/other.c
+@@ -599,62 +599,122 @@ const struct x86_dev_info whitelabel_tm800a550l_info __initconst = {
+ };
+
+ /*
+- * Vexia EDU ATLA 10 tablet, Android 4.2 / 4.4 + Guadalinex Ubuntu tablet
++ * Vexia EDU ATLA 10 tablet 5V, Android 4.4 + Guadalinex Ubuntu tablet
++ * distributed to schools in the Spanish Andalucía region.
++ */
++static const struct property_entry vexia_edu_atla10_5v_touchscreen_props[] = {
++ PROPERTY_ENTRY_U32("hid-descr-addr", 0x0000),
++ PROPERTY_ENTRY_U32("post-reset-deassert-delay-ms", 120),
++ { }
++};
++
++static const struct software_node vexia_edu_atla10_5v_touchscreen_node = {
++ .properties = vexia_edu_atla10_5v_touchscreen_props,
++};
++
++static const struct x86_i2c_client_info vexia_edu_atla10_5v_i2c_clients[] __initconst = {
++ {
++ /* kxcjk1013 accelerometer */
++ .board_info = {
++ .type = "kxcjk1013",
++ .addr = 0x0f,
++ .dev_name = "kxcjk1013",
++ },
++ .adapter_path = "\\_SB_.I2C3",
++ }, {
++ /* touchscreen controller */
++ .board_info = {
++ .type = "hid-over-i2c",
++ .addr = 0x38,
++ .dev_name = "FTSC1000",
++ .swnode = &vexia_edu_atla10_5v_touchscreen_node,
++ },
++ .adapter_path = "\\_SB_.I2C4",
++ .irq_data = {
++ .type = X86_ACPI_IRQ_TYPE_APIC,
++ .index = 0x44,
++ .trigger = ACPI_LEVEL_SENSITIVE,
++ .polarity = ACPI_ACTIVE_HIGH,
++ },
++ }
++};
++
++static struct gpiod_lookup_table vexia_edu_atla10_5v_ft5416_gpios = {
++ .dev_id = "i2c-FTSC1000",
++ .table = {
++ GPIO_LOOKUP("INT33FC:01", 26, "reset", GPIO_ACTIVE_LOW),
++ { }
++ },
++};
++
++static struct gpiod_lookup_table * const vexia_edu_atla10_5v_gpios[] = {
++ &vexia_edu_atla10_5v_ft5416_gpios,
++ NULL
++};
++
++const struct x86_dev_info vexia_edu_atla10_5v_info __initconst = {
++ .i2c_client_info = vexia_edu_atla10_5v_i2c_clients,
++ .i2c_client_count = ARRAY_SIZE(vexia_edu_atla10_5v_i2c_clients),
++ .gpiod_lookup_tables = vexia_edu_atla10_5v_gpios,
++};
++
++/*
++ * Vexia EDU ATLA 10 tablet 9V, Android 4.2 + Guadalinex Ubuntu tablet
+ * distributed to schools in the Spanish Andalucía region.
+ */
+ static const char * const crystal_cove_pwrsrc_psy[] = { "crystal_cove_pwrsrc" };
+
+-static const struct property_entry vexia_edu_atla10_ulpmc_props[] = {
++static const struct property_entry vexia_edu_atla10_9v_ulpmc_props[] = {
+ PROPERTY_ENTRY_STRING_ARRAY("supplied-from", crystal_cove_pwrsrc_psy),
+ { }
+ };
+
+-static const struct software_node vexia_edu_atla10_ulpmc_node = {
+- .properties = vexia_edu_atla10_ulpmc_props,
++static const struct software_node vexia_edu_atla10_9v_ulpmc_node = {
++ .properties = vexia_edu_atla10_9v_ulpmc_props,
+ };
+
+-static const char * const vexia_edu_atla10_accel_mount_matrix[] = {
++static const char * const vexia_edu_atla10_9v_accel_mount_matrix[] = {
+ "0", "-1", "0",
+ "1", "0", "0",
+ "0", "0", "1"
+ };
+
+-static const struct property_entry vexia_edu_atla10_accel_props[] = {
+- PROPERTY_ENTRY_STRING_ARRAY("mount-matrix", vexia_edu_atla10_accel_mount_matrix),
++static const struct property_entry vexia_edu_atla10_9v_accel_props[] = {
++ PROPERTY_ENTRY_STRING_ARRAY("mount-matrix", vexia_edu_atla10_9v_accel_mount_matrix),
+ { }
+ };
+
+-static const struct software_node vexia_edu_atla10_accel_node = {
+- .properties = vexia_edu_atla10_accel_props,
++static const struct software_node vexia_edu_atla10_9v_accel_node = {
++ .properties = vexia_edu_atla10_9v_accel_props,
+ };
+
+-static const struct property_entry vexia_edu_atla10_touchscreen_props[] = {
++static const struct property_entry vexia_edu_atla10_9v_touchscreen_props[] = {
+ PROPERTY_ENTRY_U32("hid-descr-addr", 0x0000),
+ PROPERTY_ENTRY_U32("post-reset-deassert-delay-ms", 120),
+ { }
+ };
+
+-static const struct software_node vexia_edu_atla10_touchscreen_node = {
+- .properties = vexia_edu_atla10_touchscreen_props,
++static const struct software_node vexia_edu_atla10_9v_touchscreen_node = {
++ .properties = vexia_edu_atla10_9v_touchscreen_props,
+ };
+
+-static const struct property_entry vexia_edu_atla10_pmic_props[] = {
++static const struct property_entry vexia_edu_atla10_9v_pmic_props[] = {
+ PROPERTY_ENTRY_BOOL("linux,register-pwrsrc-power_supply"),
+ { }
+ };
+
+-static const struct software_node vexia_edu_atla10_pmic_node = {
+- .properties = vexia_edu_atla10_pmic_props,
++static const struct software_node vexia_edu_atla10_9v_pmic_node = {
++ .properties = vexia_edu_atla10_9v_pmic_props,
+ };
+
+-static const struct x86_i2c_client_info vexia_edu_atla10_i2c_clients[] __initconst = {
++static const struct x86_i2c_client_info vexia_edu_atla10_9v_i2c_clients[] __initconst = {
+ {
+ /* I2C attached embedded controller, used to access fuel-gauge */
+ .board_info = {
+ .type = "vexia_atla10_ec",
+ .addr = 0x76,
+ .dev_name = "ulpmc",
+- .swnode = &vexia_edu_atla10_ulpmc_node,
++ .swnode = &vexia_edu_atla10_9v_ulpmc_node,
+ },
+ .adapter_path = "0000:00:18.1",
+ }, {
+@@ -679,7 +739,7 @@ static const struct x86_i2c_client_info vexia_edu_atla10_i2c_clients[] __initcon
+ .type = "kxtj21009",
+ .addr = 0x0f,
+ .dev_name = "kxtj21009",
+- .swnode = &vexia_edu_atla10_accel_node,
++ .swnode = &vexia_edu_atla10_9v_accel_node,
+ },
+ .adapter_path = "0000:00:18.5",
+ }, {
+@@ -688,7 +748,7 @@ static const struct x86_i2c_client_info vexia_edu_atla10_i2c_clients[] __initcon
+ .type = "hid-over-i2c",
+ .addr = 0x38,
+ .dev_name = "FTSC1000",
+- .swnode = &vexia_edu_atla10_touchscreen_node,
++ .swnode = &vexia_edu_atla10_9v_touchscreen_node,
+ },
+ .adapter_path = "0000:00:18.6",
+ .irq_data = {
+@@ -703,7 +763,7 @@ static const struct x86_i2c_client_info vexia_edu_atla10_i2c_clients[] __initcon
+ .type = "intel_soc_pmic_crc",
+ .addr = 0x6e,
+ .dev_name = "intel_soc_pmic_crc",
+- .swnode = &vexia_edu_atla10_pmic_node,
++ .swnode = &vexia_edu_atla10_9v_pmic_node,
+ },
+ .adapter_path = "0000:00:18.7",
+ .irq_data = {
+@@ -715,7 +775,7 @@ static const struct x86_i2c_client_info vexia_edu_atla10_i2c_clients[] __initcon
+ }
+ };
+
+-static const struct x86_serdev_info vexia_edu_atla10_serdevs[] __initconst = {
++static const struct x86_serdev_info vexia_edu_atla10_9v_serdevs[] __initconst = {
+ {
+ .ctrl.pci.devfn = PCI_DEVFN(0x1e, 3),
+ .ctrl_devname = "serial0",
+@@ -723,7 +783,7 @@ static const struct x86_serdev_info vexia_edu_atla10_serdevs[] __initconst = {
+ },
+ };
+
+-static struct gpiod_lookup_table vexia_edu_atla10_ft5416_gpios = {
++static struct gpiod_lookup_table vexia_edu_atla10_9v_ft5416_gpios = {
+ .dev_id = "i2c-FTSC1000",
+ .table = {
+ GPIO_LOOKUP("INT33FC:00", 60, "reset", GPIO_ACTIVE_LOW),
+@@ -731,12 +791,12 @@ static struct gpiod_lookup_table vexia_edu_atla10_ft5416_gpios = {
+ },
+ };
+
+-static struct gpiod_lookup_table * const vexia_edu_atla10_gpios[] = {
+- &vexia_edu_atla10_ft5416_gpios,
++static struct gpiod_lookup_table * const vexia_edu_atla10_9v_gpios[] = {
++ &vexia_edu_atla10_9v_ft5416_gpios,
+ NULL
+ };
+
+-static int __init vexia_edu_atla10_init(struct device *dev)
++static int __init vexia_edu_atla10_9v_init(struct device *dev)
+ {
+ struct pci_dev *pdev;
+ int ret;
+@@ -760,13 +820,13 @@ static int __init vexia_edu_atla10_init(struct device *dev)
+ return 0;
+ }
+
+-const struct x86_dev_info vexia_edu_atla10_info __initconst = {
+- .i2c_client_info = vexia_edu_atla10_i2c_clients,
+- .i2c_client_count = ARRAY_SIZE(vexia_edu_atla10_i2c_clients),
+- .serdev_info = vexia_edu_atla10_serdevs,
+- .serdev_count = ARRAY_SIZE(vexia_edu_atla10_serdevs),
+- .gpiod_lookup_tables = vexia_edu_atla10_gpios,
+- .init = vexia_edu_atla10_init,
++const struct x86_dev_info vexia_edu_atla10_9v_info __initconst = {
++ .i2c_client_info = vexia_edu_atla10_9v_i2c_clients,
++ .i2c_client_count = ARRAY_SIZE(vexia_edu_atla10_9v_i2c_clients),
++ .serdev_info = vexia_edu_atla10_9v_serdevs,
++ .serdev_count = ARRAY_SIZE(vexia_edu_atla10_9v_serdevs),
++ .gpiod_lookup_tables = vexia_edu_atla10_9v_gpios,
++ .init = vexia_edu_atla10_9v_init,
+ .use_pci = true,
+ };
+
+diff --git a/drivers/platform/x86/x86-android-tablets/x86-android-tablets.h b/drivers/platform/x86/x86-android-tablets/x86-android-tablets.h
+index 63a38a0069bae6..dcf8d49e3b5f48 100644
+--- a/drivers/platform/x86/x86-android-tablets/x86-android-tablets.h
++++ b/drivers/platform/x86/x86-android-tablets/x86-android-tablets.h
+@@ -127,7 +127,8 @@ extern const struct x86_dev_info nextbook_ares8_info;
+ extern const struct x86_dev_info nextbook_ares8a_info;
+ extern const struct x86_dev_info peaq_c1010_info;
+ extern const struct x86_dev_info whitelabel_tm800a550l_info;
+-extern const struct x86_dev_info vexia_edu_atla10_info;
++extern const struct x86_dev_info vexia_edu_atla10_5v_info;
++extern const struct x86_dev_info vexia_edu_atla10_9v_info;
+ extern const struct x86_dev_info xiaomi_mipad2_info;
+ extern const struct dmi_system_id x86_android_tablet_ids[];
+
+diff --git a/drivers/pwm/core.c b/drivers/pwm/core.c
+index ccd54c089bab8b..0e4df71c2cef1b 100644
+--- a/drivers/pwm/core.c
++++ b/drivers/pwm/core.c
+@@ -322,7 +322,7 @@ static int __pwm_set_waveform(struct pwm_device *pwm,
+ const struct pwm_ops *ops = chip->ops;
+ char wfhw[WFHWSIZE];
+ struct pwm_waveform wf_rounded;
+- int err;
++ int err, ret_tohw;
+
+ BUG_ON(WFHWSIZE < ops->sizeof_wfhw);
+
+@@ -332,16 +332,16 @@ static int __pwm_set_waveform(struct pwm_device *pwm,
+ if (!pwm_wf_valid(wf))
+ return -EINVAL;
+
+- err = __pwm_round_waveform_tohw(chip, pwm, wf, &wfhw);
+- if (err)
+- return err;
++ ret_tohw = __pwm_round_waveform_tohw(chip, pwm, wf, &wfhw);
++ if (ret_tohw < 0)
++ return ret_tohw;
+
+ if ((IS_ENABLED(CONFIG_PWM_DEBUG) || exact) && wf->period_length_ns) {
+ err = __pwm_round_waveform_fromhw(chip, pwm, &wfhw, &wf_rounded);
+ if (err)
+ return err;
+
+- if (IS_ENABLED(CONFIG_PWM_DEBUG) && !pwm_check_rounding(wf, &wf_rounded))
++ if (IS_ENABLED(CONFIG_PWM_DEBUG) && ret_tohw == 0 && !pwm_check_rounding(wf, &wf_rounded))
+ dev_err(&chip->dev, "Wrong rounding: requested %llu/%llu [+%llu], result %llu/%llu [+%llu]\n",
+ wf->duty_length_ns, wf->period_length_ns, wf->duty_offset_ns,
+ wf_rounded.duty_length_ns, wf_rounded.period_length_ns, wf_rounded.duty_offset_ns);
+@@ -382,7 +382,8 @@ static int __pwm_set_waveform(struct pwm_device *pwm,
+ wf_rounded.duty_length_ns, wf_rounded.period_length_ns, wf_rounded.duty_offset_ns,
+ wf_set.duty_length_ns, wf_set.period_length_ns, wf_set.duty_offset_ns);
+ }
+- return 0;
++
++ return ret_tohw;
+ }
+
+ /**
+diff --git a/drivers/pwm/pwm-axi-pwmgen.c b/drivers/pwm/pwm-axi-pwmgen.c
+index 4259a0db9ff458..4337c8f5acf055 100644
+--- a/drivers/pwm/pwm-axi-pwmgen.c
++++ b/drivers/pwm/pwm-axi-pwmgen.c
+@@ -75,6 +75,7 @@ static int axi_pwmgen_round_waveform_tohw(struct pwm_chip *chip,
+ {
+ struct axi_pwmgen_waveform *wfhw = _wfhw;
+ struct axi_pwmgen_ddata *ddata = axi_pwmgen_ddata_from_chip(chip);
++ int ret = 0;
+
+ if (wf->period_length_ns == 0) {
+ *wfhw = (struct axi_pwmgen_waveform){
+@@ -91,12 +92,15 @@ static int axi_pwmgen_round_waveform_tohw(struct pwm_chip *chip,
+ if (wfhw->period_cnt == 0) {
+ /*
+ * The specified period is too short for the hardware.
+- * Let's round .duty_cycle down to 0 to get a (somewhat)
+- * valid result.
++ * So round up .period_cnt to 1 (i.e. the smallest
++ * possible period). With .duty_cycle and .duty_offset
++ * being less than or equal to .period, their rounded
++ * value must be 0.
+ */
+ wfhw->period_cnt = 1;
+ wfhw->duty_cycle_cnt = 0;
+ wfhw->duty_offset_cnt = 0;
++ ret = 1;
+ } else {
+ wfhw->duty_cycle_cnt = min_t(u64,
+ mul_u64_u32_div(wf->duty_length_ns, ddata->clk_rate_hz, NSEC_PER_SEC),
+@@ -111,7 +115,7 @@ static int axi_pwmgen_round_waveform_tohw(struct pwm_chip *chip,
+ pwm->hwpwm, wf->duty_length_ns, wf->period_length_ns, wf->duty_offset_ns,
+ ddata->clk_rate_hz, wfhw->period_cnt, wfhw->duty_cycle_cnt, wfhw->duty_offset_cnt);
+
+- return 0;
++ return ret;
+ }
+
+ static int axi_pwmgen_round_waveform_fromhw(struct pwm_chip *chip, struct pwm_device *pwm,
+diff --git a/drivers/regulator/rk808-regulator.c b/drivers/regulator/rk808-regulator.c
+index 7d82bd1b36dfcf..1e8142479656a0 100644
+--- a/drivers/regulator/rk808-regulator.c
++++ b/drivers/regulator/rk808-regulator.c
+@@ -270,8 +270,8 @@ static const unsigned int rk817_buck1_4_ramp_table[] = {
+
+ static int rk806_set_mode_dcdc(struct regulator_dev *rdev, unsigned int mode)
+ {
+- int rid = rdev_get_id(rdev);
+- int ctr_bit, reg;
++ unsigned int rid = rdev_get_id(rdev);
++ unsigned int ctr_bit, reg;
+
+ reg = RK806_POWER_FPWM_EN0 + rid / 8;
+ ctr_bit = rid % 8;
+diff --git a/drivers/rtc/rtc-pcf85063.c b/drivers/rtc/rtc-pcf85063.c
+index 905986c616559b..73848f764559b4 100644
+--- a/drivers/rtc/rtc-pcf85063.c
++++ b/drivers/rtc/rtc-pcf85063.c
+@@ -35,6 +35,7 @@
+ #define PCF85063_REG_CTRL1_CAP_SEL BIT(0)
+ #define PCF85063_REG_CTRL1_STOP BIT(5)
+ #define PCF85063_REG_CTRL1_EXT_TEST BIT(7)
++#define PCF85063_REG_CTRL1_SWR 0x58
+
+ #define PCF85063_REG_CTRL2 0x01
+ #define PCF85063_CTRL2_AF BIT(6)
+@@ -589,7 +590,7 @@ static int pcf85063_probe(struct i2c_client *client)
+
+ i2c_set_clientdata(client, pcf85063);
+
+- err = regmap_read(pcf85063->regmap, PCF85063_REG_CTRL1, &tmp);
++ err = regmap_read(pcf85063->regmap, PCF85063_REG_SC, &tmp);
+ if (err) {
+ dev_err(&client->dev, "RTC chip is not present\n");
+ return err;
+@@ -599,6 +600,22 @@ static int pcf85063_probe(struct i2c_client *client)
+ if (IS_ERR(pcf85063->rtc))
+ return PTR_ERR(pcf85063->rtc);
+
++ /*
++ * If a Power loss is detected, SW reset the device.
++ * From PCF85063A datasheet:
++ * There is a low probability that some devices will have corruption
++ * of the registers after the automatic power-on reset...
++ */
++ if (tmp & PCF85063_REG_SC_OS) {
++ dev_warn(&client->dev,
++ "POR issue detected, sending a SW reset\n");
++ err = regmap_write(pcf85063->regmap, PCF85063_REG_CTRL1,
++ PCF85063_REG_CTRL1_SWR);
++ if (err < 0)
++ dev_warn(&client->dev,
++ "SW reset failed, trying to continue\n");
++ }
++
+ err = pcf85063_load_capacitance(pcf85063, client->dev.of_node,
+ config->force_cap_7000 ? 7000 : 0);
+ if (err < 0)
+diff --git a/drivers/s390/char/sclp_con.c b/drivers/s390/char/sclp_con.c
+index e5d947c763ea5d..6a030ba38bf360 100644
+--- a/drivers/s390/char/sclp_con.c
++++ b/drivers/s390/char/sclp_con.c
+@@ -263,6 +263,19 @@ static struct console sclp_console =
+ .index = 0 /* ttyS0 */
+ };
+
++/*
++ * Release allocated pages.
++ */
++static void __init __sclp_console_free_pages(void)
++{
++ struct list_head *page, *p;
++
++ list_for_each_safe(page, p, &sclp_con_pages) {
++ list_del(page);
++ free_page((unsigned long)page);
++ }
++}
++
+ /*
+ * called by console_init() in drivers/char/tty_io.c at boot-time.
+ */
+@@ -282,6 +295,10 @@ sclp_console_init(void)
+ /* Allocate pages for output buffering */
+ for (i = 0; i < sclp_console_pages; i++) {
+ page = (void *) get_zeroed_page(GFP_KERNEL | GFP_DMA);
++ if (!page) {
++ __sclp_console_free_pages();
++ return -ENOMEM;
++ }
+ list_add_tail(page, &sclp_con_pages);
+ }
+ sclp_conbuf = NULL;
+diff --git a/drivers/s390/char/sclp_tty.c b/drivers/s390/char/sclp_tty.c
+index 892c18d2f87e90..d3edacb6ee148b 100644
+--- a/drivers/s390/char/sclp_tty.c
++++ b/drivers/s390/char/sclp_tty.c
+@@ -490,6 +490,17 @@ static const struct tty_operations sclp_ops = {
+ .flush_buffer = sclp_tty_flush_buffer,
+ };
+
++/* Release allocated pages. */
++static void __init __sclp_tty_free_pages(void)
++{
++ struct list_head *page, *p;
++
++ list_for_each_safe(page, p, &sclp_tty_pages) {
++ list_del(page);
++ free_page((unsigned long)page);
++ }
++}
++
+ static int __init
+ sclp_tty_init(void)
+ {
+@@ -516,6 +527,7 @@ sclp_tty_init(void)
+ for (i = 0; i < MAX_KMEM_PAGES; i++) {
+ page = (void *) get_zeroed_page(GFP_KERNEL | GFP_DMA);
+ if (page == NULL) {
++ __sclp_tty_free_pages();
+ tty_driver_kref_put(driver);
+ return -ENOMEM;
+ }
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_main.c b/drivers/scsi/hisi_sas/hisi_sas_main.c
+index 3596414d970b24..7a484ad0f9abe6 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_main.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_main.c
+@@ -935,8 +935,28 @@ static void hisi_sas_phyup_work_common(struct work_struct *work,
+ container_of(work, typeof(*phy), works[event]);
+ struct hisi_hba *hisi_hba = phy->hisi_hba;
+ struct asd_sas_phy *sas_phy = &phy->sas_phy;
++ struct asd_sas_port *sas_port = sas_phy->port;
++ struct hisi_sas_port *port = phy->port;
++ struct device *dev = hisi_hba->dev;
++ struct domain_device *port_dev;
+ int phy_no = sas_phy->id;
+
++ if (!test_bit(HISI_SAS_RESETTING_BIT, &hisi_hba->flags) &&
++ sas_port && port && (port->id != phy->port_id)) {
++ dev_info(dev, "phy%d's hw port id changed from %d to %llu\n",
++ phy_no, port->id, phy->port_id);
++ port_dev = sas_port->port_dev;
++ if (port_dev && !dev_is_expander(port_dev->dev_type)) {
++ /*
++ * Set the device state to gone to block
++ * sending IO to the device.
++ */
++ set_bit(SAS_DEV_GONE, &port_dev->state);
++ hisi_sas_notify_phy_event(phy, HISI_PHYE_LINK_RESET);
++ return;
++ }
++ }
++
+ phy->wait_phyup_cnt = 0;
+ if (phy->identify.target_port_protocols == SAS_PROTOCOL_SSP)
+ hisi_hba->hw->sl_notify_ssp(hisi_hba, phy_no);
+diff --git a/drivers/scsi/mpi3mr/mpi3mr_fw.c b/drivers/scsi/mpi3mr/mpi3mr_fw.c
+index ec5b1ab2871776..c0a372868e1d7f 100644
+--- a/drivers/scsi/mpi3mr/mpi3mr_fw.c
++++ b/drivers/scsi/mpi3mr/mpi3mr_fw.c
+@@ -563,7 +563,7 @@ int mpi3mr_process_op_reply_q(struct mpi3mr_ioc *mrioc,
+ WRITE_ONCE(op_req_q->ci, le16_to_cpu(reply_desc->request_queue_ci));
+ mpi3mr_process_op_reply_desc(mrioc, reply_desc, &reply_dma,
+ reply_qidx);
+- atomic_dec(&op_reply_q->pend_ios);
++
+ if (reply_dma)
+ mpi3mr_repost_reply_buf(mrioc, reply_dma);
+ num_op_reply++;
+diff --git a/drivers/scsi/pm8001/pm8001_sas.c b/drivers/scsi/pm8001/pm8001_sas.c
+index 183ce00aa671ea..f7067878b34f3b 100644
+--- a/drivers/scsi/pm8001/pm8001_sas.c
++++ b/drivers/scsi/pm8001/pm8001_sas.c
+@@ -766,6 +766,7 @@ static void pm8001_dev_gone_notify(struct domain_device *dev)
+ spin_lock_irqsave(&pm8001_ha->lock, flags);
+ }
+ PM8001_CHIP_DISP->dereg_dev_req(pm8001_ha, device_id);
++ pm8001_ha->phy[pm8001_dev->attached_phy].phy_attached = 0;
+ pm8001_free_dev(pm8001_dev);
+ } else {
+ pm8001_dbg(pm8001_ha, DISC, "Found dev has gone.\n");
+diff --git a/drivers/scsi/scsi.c b/drivers/scsi/scsi.c
+index a77e0499b738a6..9d2db5bc8ee7ad 100644
+--- a/drivers/scsi/scsi.c
++++ b/drivers/scsi/scsi.c
+@@ -695,26 +695,23 @@ void scsi_cdl_check(struct scsi_device *sdev)
+ */
+ int scsi_cdl_enable(struct scsi_device *sdev, bool enable)
+ {
+- struct scsi_mode_data data;
+- struct scsi_sense_hdr sshdr;
+- struct scsi_vpd *vpd;
+- bool is_ata = false;
+ char buf[64];
++ bool is_ata;
+ int ret;
+
+ if (!sdev->cdl_supported)
+ return -EOPNOTSUPP;
+
+ rcu_read_lock();
+- vpd = rcu_dereference(sdev->vpd_pg89);
+- if (vpd)
+- is_ata = true;
++ is_ata = rcu_dereference(sdev->vpd_pg89);
+ rcu_read_unlock();
+
+ /*
+ * For ATA devices, CDL needs to be enabled with a SET FEATURES command.
+ */
+ if (is_ata) {
++ struct scsi_mode_data data;
++ struct scsi_sense_hdr sshdr;
+ char *buf_data;
+ int len;
+
+@@ -723,16 +720,30 @@ int scsi_cdl_enable(struct scsi_device *sdev, bool enable)
+ if (ret)
+ return -EINVAL;
+
+- /* Enable CDL using the ATA feature page */
++ /* Enable or disable CDL using the ATA feature page */
+ len = min_t(size_t, sizeof(buf),
+ data.length - data.header_length -
+ data.block_descriptor_length);
+ buf_data = buf + data.header_length +
+ data.block_descriptor_length;
+- if (enable)
+- buf_data[4] = 0x02;
+- else
+- buf_data[4] = 0;
++
++ /*
++ * If we want to enable CDL and CDL is already enabled on the
++ * device, do nothing. This avoids needlessly resetting the CDL
++ * statistics on the device as that is implied by the CDL enable
++ * action. Similar to this, there is no need to do anything if
++ * we want to disable CDL and CDL is already disabled.
++ */
++ if (enable) {
++ if ((buf_data[4] & 0x03) == 0x02)
++ goto out;
++ buf_data[4] &= ~0x03;
++ buf_data[4] |= 0x02;
++ } else {
++ if ((buf_data[4] & 0x03) == 0x00)
++ goto out;
++ buf_data[4] &= ~0x03;
++ }
+
+ ret = scsi_mode_select(sdev, 1, 0, buf_data, len, 5 * HZ, 3,
+ &data, &sshdr);
+@@ -744,6 +755,7 @@ int scsi_cdl_enable(struct scsi_device *sdev, bool enable)
+ }
+ }
+
++out:
+ sdev->cdl_enable = enable;
+
+ return 0;
+diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
+index f1cfe0bb89b20c..7a31dae9aa82de 100644
+--- a/drivers/scsi/scsi_lib.c
++++ b/drivers/scsi/scsi_lib.c
+@@ -1253,8 +1253,12 @@ EXPORT_SYMBOL_GPL(scsi_alloc_request);
+ */
+ static void scsi_cleanup_rq(struct request *rq)
+ {
++ struct scsi_cmnd *cmd = blk_mq_rq_to_pdu(rq);
++
++ cmd->flags = 0;
++
+ if (rq->rq_flags & RQF_DONTPREP) {
+- scsi_mq_uninit_cmd(blk_mq_rq_to_pdu(rq));
++ scsi_mq_uninit_cmd(cmd);
+ rq->rq_flags &= ~RQF_DONTPREP;
+ }
+ }
+diff --git a/drivers/soc/qcom/ice.c b/drivers/soc/qcom/ice.c
+index 393d2d1d275f18..79e04bff3e331b 100644
+--- a/drivers/soc/qcom/ice.c
++++ b/drivers/soc/qcom/ice.c
+@@ -11,6 +11,7 @@
+ #include <linux/cleanup.h>
+ #include <linux/clk.h>
+ #include <linux/delay.h>
++#include <linux/device.h>
+ #include <linux/iopoll.h>
+ #include <linux/of.h>
+ #include <linux/of_platform.h>
+@@ -324,6 +325,53 @@ struct qcom_ice *of_qcom_ice_get(struct device *dev)
+ }
+ EXPORT_SYMBOL_GPL(of_qcom_ice_get);
+
++static void qcom_ice_put(const struct qcom_ice *ice)
++{
++ struct platform_device *pdev = to_platform_device(ice->dev);
++
++ if (!platform_get_resource_byname(pdev, IORESOURCE_MEM, "ice"))
++ platform_device_put(pdev);
++}
++
++static void devm_of_qcom_ice_put(struct device *dev, void *res)
++{
++ qcom_ice_put(*(struct qcom_ice **)res);
++}
++
++/**
++ * devm_of_qcom_ice_get() - Devres managed helper to get an ICE instance from
++ * a DT node.
++ * @dev: device pointer for the consumer device.
++ *
++ * This function will provide an ICE instance either by creating one for the
++ * consumer device if its DT node provides the 'ice' reg range and the 'ice'
++ * clock (for legacy DT style). On the other hand, if consumer provides a
++ * phandle via 'qcom,ice' property to an ICE DT, the ICE instance will already
++ * be created and so this function will return that instead.
++ *
++ * Return: ICE pointer on success, NULL if there is no ICE data provided by the
++ * consumer or ERR_PTR() on error.
++ */
++struct qcom_ice *devm_of_qcom_ice_get(struct device *dev)
++{
++ struct qcom_ice *ice, **dr;
++
++ dr = devres_alloc(devm_of_qcom_ice_put, sizeof(*dr), GFP_KERNEL);
++ if (!dr)
++ return ERR_PTR(-ENOMEM);
++
++ ice = of_qcom_ice_get(dev);
++ if (!IS_ERR_OR_NULL(ice)) {
++ *dr = ice;
++ devres_add(dev, dr);
++ } else {
++ devres_free(dr);
++ }
++
++ return ice;
++}
++EXPORT_SYMBOL_GPL(devm_of_qcom_ice_get);
++
+ static int qcom_ice_probe(struct platform_device *pdev)
+ {
+ struct qcom_ice *engine;
+diff --git a/drivers/spi/spi-imx.c b/drivers/spi/spi-imx.c
+index eeb7d082c2472f..c43fb496da956b 100644
+--- a/drivers/spi/spi-imx.c
++++ b/drivers/spi/spi-imx.c
+@@ -1695,9 +1695,12 @@ static int spi_imx_transfer_one(struct spi_controller *controller,
+ struct spi_device *spi,
+ struct spi_transfer *transfer)
+ {
++ int ret;
+ struct spi_imx_data *spi_imx = spi_controller_get_devdata(spi->controller);
+
+- spi_imx_setupxfer(spi, transfer);
++ ret = spi_imx_setupxfer(spi, transfer);
++ if (ret < 0)
++ return ret;
+ transfer->effective_speed_hz = spi_imx->spi_bus_clk;
+
+ /* flush rxfifo before transfer */
+diff --git a/drivers/spi/spi-tegra210-quad.c b/drivers/spi/spi-tegra210-quad.c
+index 08e49a8768943c..64e1b2f8a0001c 100644
+--- a/drivers/spi/spi-tegra210-quad.c
++++ b/drivers/spi/spi-tegra210-quad.c
+@@ -1117,9 +1117,9 @@ static int tegra_qspi_combined_seq_xfer(struct tegra_qspi *tqspi,
+ (&tqspi->xfer_completion,
+ QSPI_DMA_TIMEOUT);
+
+- if (WARN_ON(ret == 0)) {
+- dev_err(tqspi->dev, "QSPI Transfer failed with timeout: %d\n",
+- ret);
++ if (WARN_ON_ONCE(ret == 0)) {
++ dev_err_ratelimited(tqspi->dev,
++ "QSPI Transfer failed with timeout\n");
+ if (tqspi->is_curr_dma_xfer &&
+ (tqspi->cur_direction & DATA_DIR_TX))
+ dmaengine_terminate_all
+diff --git a/drivers/staging/gpib/agilent_82350b/agilent_82350b.c b/drivers/staging/gpib/agilent_82350b/agilent_82350b.c
+index c62407077d37f1..cd7fe7d814cea5 100644
+--- a/drivers/staging/gpib/agilent_82350b/agilent_82350b.c
++++ b/drivers/staging/gpib/agilent_82350b/agilent_82350b.c
+@@ -66,10 +66,7 @@ int agilent_82350b_accel_read(gpib_board_t *board, uint8_t *buffer, size_t lengt
+ int j;
+ int count;
+
+- if (num_fifo_bytes - i < agilent_82350b_fifo_size)
+- block_size = num_fifo_bytes - i;
+- else
+- block_size = agilent_82350b_fifo_size;
++ block_size = min(num_fifo_bytes - i, agilent_82350b_fifo_size);
+ set_transfer_counter(a_priv, block_size);
+ writeb(ENABLE_TI_TO_SRAM | DIRECTION_GPIB_TO_HOST,
+ a_priv->gpib_base + SRAM_ACCESS_CONTROL_REG);
+@@ -200,10 +197,7 @@ int agilent_82350b_accel_write(gpib_board_t *board, uint8_t *buffer, size_t leng
+ for (i = 1; i < fifotransferlength;) {
+ clear_bit(WRITE_READY_BN, &tms_priv->state);
+
+- if (fifotransferlength - i < agilent_82350b_fifo_size)
+- block_size = fifotransferlength - i;
+- else
+- block_size = agilent_82350b_fifo_size;
++ block_size = min(fifotransferlength - i, agilent_82350b_fifo_size);
+ set_transfer_counter(a_priv, block_size);
+ for (j = 0; j < block_size; ++j, ++i) {
+ // load data into board's sram
+diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
+index 390abcfe718824..8c527af989271c 100644
+--- a/drivers/thunderbolt/tb.c
++++ b/drivers/thunderbolt/tb.c
+@@ -1305,11 +1305,15 @@ static void tb_scan_port(struct tb_port *port)
+ goto out_rpm_put;
+ }
+
+- tb_retimer_scan(port, true);
+-
+ sw = tb_switch_alloc(port->sw->tb, &port->sw->dev,
+ tb_downstream_route(port));
+ if (IS_ERR(sw)) {
++ /*
++ * Make the downstream retimers available even if there
++ * is no router connected.
++ */
++ tb_retimer_scan(port, true);
++
+ /*
+ * If there is an error accessing the connected switch
+ * it may be connected to another domain. Also we allow
+@@ -1359,6 +1363,14 @@ static void tb_scan_port(struct tb_port *port)
+ upstream_port = tb_upstream_port(sw);
+ tb_configure_link(port, upstream_port, sw);
+
++ /*
++ * Scan for downstream retimers. We only scan them after the
++ * router has been enumerated to avoid issues with certain
++ * Pluggable devices that expect the host to enumerate them
++ * within certain timeout.
++ */
++ tb_retimer_scan(port, true);
++
+ /*
+ * CL0s and CL1 are enabled and supported together.
+ * Silently ignore CLx enabling in case CLx is not supported.
+diff --git a/drivers/tty/serial/msm_serial.c b/drivers/tty/serial/msm_serial.c
+index 1b137e06844425..3449945493ceb4 100644
+--- a/drivers/tty/serial/msm_serial.c
++++ b/drivers/tty/serial/msm_serial.c
+@@ -1746,6 +1746,12 @@ msm_serial_early_console_setup_dm(struct earlycon_device *device,
+ if (!device->port.membase)
+ return -ENODEV;
+
++ /* Disable DM / single-character modes */
++ msm_write(&device->port, 0, UARTDM_DMEN);
++ msm_write(&device->port, MSM_UART_CR_CMD_RESET_RX, MSM_UART_CR);
++ msm_write(&device->port, MSM_UART_CR_CMD_RESET_TX, MSM_UART_CR);
++ msm_write(&device->port, MSM_UART_CR_TX_ENABLE, MSM_UART_CR);
++
+ device->con->write = msm_serial_early_write_dm;
+ return 0;
+ }
+diff --git a/drivers/tty/serial/sifive.c b/drivers/tty/serial/sifive.c
+index 5904a2d4cefa71..054a8e630aceac 100644
+--- a/drivers/tty/serial/sifive.c
++++ b/drivers/tty/serial/sifive.c
+@@ -563,8 +563,11 @@ static void sifive_serial_break_ctl(struct uart_port *port, int break_state)
+ static int sifive_serial_startup(struct uart_port *port)
+ {
+ struct sifive_serial_port *ssp = port_to_sifive_serial_port(port);
++ unsigned long flags;
+
++ uart_port_lock_irqsave(&ssp->port, &flags);
+ __ssp_enable_rxwm(ssp);
++ uart_port_unlock_irqrestore(&ssp->port, flags);
+
+ return 0;
+ }
+@@ -572,9 +575,12 @@ static int sifive_serial_startup(struct uart_port *port)
+ static void sifive_serial_shutdown(struct uart_port *port)
+ {
+ struct sifive_serial_port *ssp = port_to_sifive_serial_port(port);
++ unsigned long flags;
+
++ uart_port_lock_irqsave(&ssp->port, &flags);
+ __ssp_disable_rxwm(ssp);
+ __ssp_disable_txwm(ssp);
++ uart_port_unlock_irqrestore(&ssp->port, flags);
+ }
+
+ /**
+diff --git a/drivers/tty/vt/selection.c b/drivers/tty/vt/selection.c
+index 0bd6544e30a6b3..791e2f1f7c0b65 100644
+--- a/drivers/tty/vt/selection.c
++++ b/drivers/tty/vt/selection.c
+@@ -193,13 +193,12 @@ int set_selection_user(const struct tiocl_selection __user *sel,
+ return -EFAULT;
+
+ /*
+- * TIOCL_SELCLEAR, TIOCL_SELPOINTER and TIOCL_SELMOUSEREPORT are OK to
+- * use without CAP_SYS_ADMIN as they do not modify the selection.
++ * TIOCL_SELCLEAR and TIOCL_SELPOINTER are OK to use without
++ * CAP_SYS_ADMIN as they do not modify the selection.
+ */
+ switch (v.sel_mode) {
+ case TIOCL_SELCLEAR:
+ case TIOCL_SELPOINTER:
+- case TIOCL_SELMOUSEREPORT:
+ break;
+ default:
+ if (!capable(CAP_SYS_ADMIN))
+diff --git a/drivers/ufs/core/ufs-mcq.c b/drivers/ufs/core/ufs-mcq.c
+index 240ce135bbfbc3..f1294c29f48491 100644
+--- a/drivers/ufs/core/ufs-mcq.c
++++ b/drivers/ufs/core/ufs-mcq.c
+@@ -677,13 +677,6 @@ int ufshcd_mcq_abort(struct scsi_cmnd *cmd)
+ unsigned long flags;
+ int err;
+
+- if (!ufshcd_cmd_inflight(lrbp->cmd)) {
+- dev_err(hba->dev,
+- "%s: skip abort. cmd at tag %d already completed.\n",
+- __func__, tag);
+- return FAILED;
+- }
+-
+ /* Skip task abort in case previous aborts failed and report failure */
+ if (lrbp->req_abort_skip) {
+ dev_err(hba->dev, "%s: skip abort. tag %d failed earlier\n",
+@@ -692,6 +685,11 @@ int ufshcd_mcq_abort(struct scsi_cmnd *cmd)
+ }
+
+ hwq = ufshcd_mcq_req_to_hwq(hba, scsi_cmd_to_rq(cmd));
++ if (!hwq) {
++ dev_err(hba->dev, "%s: skip abort. cmd at tag %d already completed.\n",
++ __func__, tag);
++ return FAILED;
++ }
+
+ if (ufshcd_mcq_sqe_search(hba, hwq, tag)) {
+ /*
+diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
+index 464f13da259aa0..128e35a848b7b2 100644
+--- a/drivers/ufs/core/ufshcd.c
++++ b/drivers/ufs/core/ufshcd.c
+@@ -5658,6 +5658,8 @@ static void ufshcd_mcq_compl_pending_transfer(struct ufs_hba *hba,
+ continue;
+
+ hwq = ufshcd_mcq_req_to_hwq(hba, scsi_cmd_to_rq(cmd));
++ if (!hwq)
++ continue;
+
+ if (force_compl) {
+ ufshcd_mcq_compl_all_cqes_lock(hba, hwq);
+diff --git a/drivers/ufs/host/ufs-exynos.c b/drivers/ufs/host/ufs-exynos.c
+index 5ea3f9beb1bd9a..2436b9454480b0 100644
+--- a/drivers/ufs/host/ufs-exynos.c
++++ b/drivers/ufs/host/ufs-exynos.c
+@@ -34,7 +34,7 @@
+ * Exynos's Vendor specific registers for UFSHCI
+ */
+ #define HCI_TXPRDT_ENTRY_SIZE 0x00
+-#define PRDT_PREFECT_EN BIT(31)
++#define PRDT_PREFETCH_EN BIT(31)
+ #define HCI_RXPRDT_ENTRY_SIZE 0x04
+ #define HCI_1US_TO_CNT_VAL 0x0C
+ #define CNT_VAL_1US_MASK 0x3FF
+@@ -1060,9 +1060,14 @@ static int exynos_ufs_pre_link(struct ufs_hba *hba)
+ exynos_ufs_config_intr(ufs, DFES_DEF_L4_ERRS, UNIPRO_L4);
+ exynos_ufs_set_unipro_pclk_div(ufs);
+
++ exynos_ufs_setup_clocks(hba, true, PRE_CHANGE);
++
+ /* unipro */
+ exynos_ufs_config_unipro(ufs);
+
++ if (ufs->drv_data->pre_link)
++ ufs->drv_data->pre_link(ufs);
++
+ /* m-phy */
+ exynos_ufs_phy_init(ufs);
+ if (!(ufs->opts & EXYNOS_UFS_OPT_SKIP_CONFIG_PHY_ATTR)) {
+@@ -1070,11 +1075,6 @@ static int exynos_ufs_pre_link(struct ufs_hba *hba)
+ exynos_ufs_config_phy_cap_attr(ufs);
+ }
+
+- exynos_ufs_setup_clocks(hba, true, PRE_CHANGE);
+-
+- if (ufs->drv_data->pre_link)
+- ufs->drv_data->pre_link(ufs);
+-
+ return 0;
+ }
+
+@@ -1098,12 +1098,17 @@ static int exynos_ufs_post_link(struct ufs_hba *hba)
+ struct exynos_ufs *ufs = ufshcd_get_variant(hba);
+ struct phy *generic_phy = ufs->phy;
+ struct exynos_ufs_uic_attr *attr = ufs->drv_data->uic_attr;
++ u32 val = ilog2(DATA_UNIT_SIZE);
+
+ exynos_ufs_establish_connt(ufs);
+ exynos_ufs_fit_aggr_timeout(ufs);
+
+ hci_writel(ufs, 0xa, HCI_DATA_REORDER);
+- hci_writel(ufs, ilog2(DATA_UNIT_SIZE), HCI_TXPRDT_ENTRY_SIZE);
++
++ if (hba->caps & UFSHCD_CAP_CRYPTO)
++ val |= PRDT_PREFETCH_EN;
++ hci_writel(ufs, val, HCI_TXPRDT_ENTRY_SIZE);
++
+ hci_writel(ufs, ilog2(DATA_UNIT_SIZE), HCI_RXPRDT_ENTRY_SIZE);
+ hci_writel(ufs, (1 << hba->nutrs) - 1, HCI_UTRL_NEXUS_TYPE);
+ hci_writel(ufs, (1 << hba->nutmrs) - 1, HCI_UTMRL_NEXUS_TYPE);
+@@ -1517,6 +1522,14 @@ static int exynos_ufs_init(struct ufs_hba *hba)
+ return ret;
+ }
+
++static void exynos_ufs_exit(struct ufs_hba *hba)
++{
++ struct exynos_ufs *ufs = ufshcd_get_variant(hba);
++
++ phy_power_off(ufs->phy);
++ phy_exit(ufs->phy);
++}
++
+ static int exynos_ufs_host_reset(struct ufs_hba *hba)
+ {
+ struct exynos_ufs *ufs = ufshcd_get_variant(hba);
+@@ -1687,6 +1700,12 @@ static void exynos_ufs_hibern8_notify(struct ufs_hba *hba,
+ }
+ }
+
++static int gs101_ufs_suspend(struct exynos_ufs *ufs)
++{
++ hci_writel(ufs, 0 << 0, HCI_GPIO_OUT);
++ return 0;
++}
++
+ static int exynos_ufs_suspend(struct ufs_hba *hba, enum ufs_pm_op pm_op,
+ enum ufs_notify_change_status status)
+ {
+@@ -1695,6 +1714,9 @@ static int exynos_ufs_suspend(struct ufs_hba *hba, enum ufs_pm_op pm_op,
+ if (status == PRE_CHANGE)
+ return 0;
+
++ if (ufs->drv_data->suspend)
++ ufs->drv_data->suspend(ufs);
++
+ if (!ufshcd_is_link_active(hba))
+ phy_power_off(ufs->phy);
+
+@@ -1972,6 +1994,7 @@ static int gs101_ufs_pre_pwr_change(struct exynos_ufs *ufs,
+ static const struct ufs_hba_variant_ops ufs_hba_exynos_ops = {
+ .name = "exynos_ufs",
+ .init = exynos_ufs_init,
++ .exit = exynos_ufs_exit,
+ .hce_enable_notify = exynos_ufs_hce_enable_notify,
+ .link_startup_notify = exynos_ufs_link_startup_notify,
+ .pwr_change_notify = exynos_ufs_pwr_change_notify,
+@@ -2010,13 +2033,7 @@ static int exynos_ufs_probe(struct platform_device *pdev)
+
+ static void exynos_ufs_remove(struct platform_device *pdev)
+ {
+- struct ufs_hba *hba = platform_get_drvdata(pdev);
+- struct exynos_ufs *ufs = ufshcd_get_variant(hba);
+-
+ ufshcd_pltfrm_remove(pdev);
+-
+- phy_power_off(ufs->phy);
+- phy_exit(ufs->phy);
+ }
+
+ static struct exynos_ufs_uic_attr exynos7_uic_attr = {
+@@ -2162,6 +2179,7 @@ static const struct exynos_ufs_drv_data gs101_ufs_drvs = {
+ .pre_link = gs101_ufs_pre_link,
+ .post_link = gs101_ufs_post_link,
+ .pre_pwr_change = gs101_ufs_pre_pwr_change,
++ .suspend = gs101_ufs_suspend,
+ };
+
+ static const struct of_device_id exynos_ufs_of_match[] = {
+diff --git a/drivers/ufs/host/ufs-exynos.h b/drivers/ufs/host/ufs-exynos.h
+index d0b3df221503c6..3c6fe5132190ab 100644
+--- a/drivers/ufs/host/ufs-exynos.h
++++ b/drivers/ufs/host/ufs-exynos.h
+@@ -192,6 +192,7 @@ struct exynos_ufs_drv_data {
+ struct ufs_pa_layer_attr *pwr);
+ int (*pre_hce_enable)(struct exynos_ufs *ufs);
+ int (*post_hce_enable)(struct exynos_ufs *ufs);
++ int (*suspend)(struct exynos_ufs *ufs);
+ };
+
+ struct ufs_phy_time_cfg {
+diff --git a/drivers/ufs/host/ufs-qcom.c b/drivers/ufs/host/ufs-qcom.c
+index 23b9f6efa0475e..a455a95f65fc67 100644
+--- a/drivers/ufs/host/ufs-qcom.c
++++ b/drivers/ufs/host/ufs-qcom.c
+@@ -125,7 +125,7 @@ static int ufs_qcom_ice_init(struct ufs_qcom_host *host)
+ int err;
+ int i;
+
+- ice = of_qcom_ice_get(dev);
++ ice = devm_of_qcom_ice_get(dev);
+ if (ice == ERR_PTR(-EOPNOTSUPP)) {
+ dev_warn(dev, "Disabling inline encryption support\n");
+ ice = NULL;
+diff --git a/drivers/usb/cdns3/cdns3-gadget.c b/drivers/usb/cdns3/cdns3-gadget.c
+index fd1beb10bba726..19101ff1cf1bd7 100644
+--- a/drivers/usb/cdns3/cdns3-gadget.c
++++ b/drivers/usb/cdns3/cdns3-gadget.c
+@@ -1963,6 +1963,7 @@ static irqreturn_t cdns3_device_thread_irq_handler(int irq, void *data)
+ unsigned int bit;
+ unsigned long reg;
+
++ local_bh_disable();
+ spin_lock_irqsave(&priv_dev->lock, flags);
+
+ reg = readl(&priv_dev->regs->usb_ists);
+@@ -2004,6 +2005,7 @@ static irqreturn_t cdns3_device_thread_irq_handler(int irq, void *data)
+ irqend:
+ writel(~0, &priv_dev->regs->ep_ien);
+ spin_unlock_irqrestore(&priv_dev->lock, flags);
++ local_bh_enable();
+
+ return ret;
+ }
+diff --git a/drivers/usb/chipidea/ci_hdrc_imx.c b/drivers/usb/chipidea/ci_hdrc_imx.c
+index 1a7fc638213eba..4f8bfd242b5953 100644
+--- a/drivers/usb/chipidea/ci_hdrc_imx.c
++++ b/drivers/usb/chipidea/ci_hdrc_imx.c
+@@ -336,6 +336,13 @@ static int ci_hdrc_imx_notify_event(struct ci_hdrc *ci, unsigned int event)
+ return ret;
+ }
+
++static void ci_hdrc_imx_disable_regulator(void *arg)
++{
++ struct ci_hdrc_imx_data *data = arg;
++
++ regulator_disable(data->hsic_pad_regulator);
++}
++
+ static int ci_hdrc_imx_probe(struct platform_device *pdev)
+ {
+ struct ci_hdrc_imx_data *data;
+@@ -394,6 +401,13 @@ static int ci_hdrc_imx_probe(struct platform_device *pdev)
+ "Failed to enable HSIC pad regulator\n");
+ goto err_put;
+ }
++ ret = devm_add_action_or_reset(dev,
++ ci_hdrc_imx_disable_regulator, data);
++ if (ret) {
++ dev_err(dev,
++ "Failed to add regulator devm action\n");
++ goto err_put;
++ }
+ }
+ }
+
+@@ -432,11 +446,11 @@ static int ci_hdrc_imx_probe(struct platform_device *pdev)
+
+ ret = imx_get_clks(dev);
+ if (ret)
+- goto disable_hsic_regulator;
++ goto qos_remove_request;
+
+ ret = imx_prepare_enable_clks(dev);
+ if (ret)
+- goto disable_hsic_regulator;
++ goto qos_remove_request;
+
+ ret = clk_prepare_enable(data->clk_wakeup);
+ if (ret)
+@@ -470,7 +484,11 @@ static int ci_hdrc_imx_probe(struct platform_device *pdev)
+ of_usb_get_phy_mode(np) == USBPHY_INTERFACE_MODE_ULPI) {
+ pdata.flags |= CI_HDRC_OVERRIDE_PHY_CONTROL;
+ data->override_phy_control = true;
+- usb_phy_init(pdata.usb_phy);
++ ret = usb_phy_init(pdata.usb_phy);
++ if (ret) {
++ dev_err(dev, "Failed to init phy\n");
++ goto err_clk;
++ }
+ }
+
+ if (pdata.flags & CI_HDRC_SUPPORTS_RUNTIME_PM)
+@@ -479,7 +497,7 @@ static int ci_hdrc_imx_probe(struct platform_device *pdev)
+ ret = imx_usbmisc_init(data->usbmisc_data);
+ if (ret) {
+ dev_err(dev, "usbmisc init failed, ret=%d\n", ret);
+- goto err_clk;
++ goto phy_shutdown;
+ }
+
+ data->ci_pdev = ci_hdrc_add_device(dev,
+@@ -488,7 +506,7 @@ static int ci_hdrc_imx_probe(struct platform_device *pdev)
+ if (IS_ERR(data->ci_pdev)) {
+ ret = PTR_ERR(data->ci_pdev);
+ dev_err_probe(dev, ret, "ci_hdrc_add_device failed\n");
+- goto err_clk;
++ goto phy_shutdown;
+ }
+
+ if (data->usbmisc_data) {
+@@ -522,19 +540,20 @@ static int ci_hdrc_imx_probe(struct platform_device *pdev)
+
+ disable_device:
+ ci_hdrc_remove_device(data->ci_pdev);
++phy_shutdown:
++ if (data->override_phy_control)
++ usb_phy_shutdown(data->phy);
+ err_clk:
+ clk_disable_unprepare(data->clk_wakeup);
+ err_wakeup_clk:
+ imx_disable_unprepare_clks(dev);
+-disable_hsic_regulator:
+- if (data->hsic_pad_regulator)
+- /* don't overwrite original ret (cf. EPROBE_DEFER) */
+- regulator_disable(data->hsic_pad_regulator);
++qos_remove_request:
+ if (pdata.flags & CI_HDRC_PMQOS)
+ cpu_latency_qos_remove_request(&data->pm_qos_req);
+ data->ci_pdev = NULL;
+ err_put:
+- put_device(data->usbmisc_data->dev);
++ if (data->usbmisc_data)
++ put_device(data->usbmisc_data->dev);
+ return ret;
+ }
+
+@@ -556,10 +575,9 @@ static void ci_hdrc_imx_remove(struct platform_device *pdev)
+ clk_disable_unprepare(data->clk_wakeup);
+ if (data->plat_data->flags & CI_HDRC_PMQOS)
+ cpu_latency_qos_remove_request(&data->pm_qos_req);
+- if (data->hsic_pad_regulator)
+- regulator_disable(data->hsic_pad_regulator);
+ }
+- put_device(data->usbmisc_data->dev);
++ if (data->usbmisc_data)
++ put_device(data->usbmisc_data->dev);
+ }
+
+ static void ci_hdrc_imx_shutdown(struct platform_device *pdev)
+diff --git a/drivers/usb/class/cdc-wdm.c b/drivers/usb/class/cdc-wdm.c
+index 86ee39db013f39..16e7fa4d488d37 100644
+--- a/drivers/usb/class/cdc-wdm.c
++++ b/drivers/usb/class/cdc-wdm.c
+@@ -726,7 +726,7 @@ static int wdm_open(struct inode *inode, struct file *file)
+ rv = -EBUSY;
+ goto out;
+ }
+-
++ smp_rmb(); /* ordered against wdm_wwan_port_stop() */
+ rv = usb_autopm_get_interface(desc->intf);
+ if (rv < 0) {
+ dev_err(&desc->intf->dev, "Error autopm - %d\n", rv);
+@@ -829,6 +829,7 @@ static struct usb_class_driver wdm_class = {
+ static int wdm_wwan_port_start(struct wwan_port *port)
+ {
+ struct wdm_device *desc = wwan_port_get_drvdata(port);
++ int rv;
+
+ /* The interface is both exposed via the WWAN framework and as a
+ * legacy usbmisc chardev. If chardev is already open, just fail
+@@ -848,7 +849,15 @@ static int wdm_wwan_port_start(struct wwan_port *port)
+ wwan_port_txon(port);
+
+ /* Start getting events */
+- return usb_submit_urb(desc->validity, GFP_KERNEL);
++ rv = usb_submit_urb(desc->validity, GFP_KERNEL);
++ if (rv < 0) {
++ wwan_port_txoff(port);
++ desc->manage_power(desc->intf, 0);
++ /* this must be last lest we race with chardev open */
++ clear_bit(WDM_WWAN_IN_USE, &desc->flags);
++ }
++
++ return rv;
+ }
+
+ static void wdm_wwan_port_stop(struct wwan_port *port)
+@@ -859,8 +868,10 @@ static void wdm_wwan_port_stop(struct wwan_port *port)
+ poison_urbs(desc);
+ desc->manage_power(desc->intf, 0);
+ clear_bit(WDM_READ, &desc->flags);
+- clear_bit(WDM_WWAN_IN_USE, &desc->flags);
+ unpoison_urbs(desc);
++ smp_wmb(); /* ordered against wdm_open() */
++ /* this must be last lest we open a poisoned device */
++ clear_bit(WDM_WWAN_IN_USE, &desc->flags);
+ }
+
+ static void wdm_wwan_port_tx_complete(struct urb *urb)
+@@ -868,7 +879,7 @@ static void wdm_wwan_port_tx_complete(struct urb *urb)
+ struct sk_buff *skb = urb->context;
+ struct wdm_device *desc = skb_shinfo(skb)->destructor_arg;
+
+- usb_autopm_put_interface(desc->intf);
++ usb_autopm_put_interface_async(desc->intf);
+ wwan_port_txon(desc->wwanp);
+ kfree_skb(skb);
+ }
+@@ -898,7 +909,7 @@ static int wdm_wwan_port_tx(struct wwan_port *port, struct sk_buff *skb)
+ req->bRequestType = (USB_DIR_OUT | USB_TYPE_CLASS | USB_RECIP_INTERFACE);
+ req->bRequest = USB_CDC_SEND_ENCAPSULATED_COMMAND;
+ req->wValue = 0;
+- req->wIndex = desc->inum;
++ req->wIndex = desc->inum; /* already converted */
+ req->wLength = cpu_to_le16(skb->len);
+
+ skb_shinfo(skb)->destructor_arg = desc;
+diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
+index 8efbacc5bc3411..36d3df7d040c63 100644
+--- a/drivers/usb/core/quirks.c
++++ b/drivers/usb/core/quirks.c
+@@ -369,6 +369,9 @@ static const struct usb_device_id usb_quirk_list[] = {
+ { USB_DEVICE(0x0781, 0x5583), .driver_info = USB_QUIRK_NO_LPM },
+ { USB_DEVICE(0x0781, 0x5591), .driver_info = USB_QUIRK_NO_LPM },
+
++ /* SanDisk Corp. SanDisk 3.2Gen1 */
++ { USB_DEVICE(0x0781, 0x55a3), .driver_info = USB_QUIRK_DELAY_INIT },
++
+ /* Realforce 87U Keyboard */
+ { USB_DEVICE(0x0853, 0x011b), .driver_info = USB_QUIRK_NO_LPM },
+
+@@ -383,6 +386,9 @@ static const struct usb_device_id usb_quirk_list[] = {
+ { USB_DEVICE(0x0904, 0x6103), .driver_info =
+ USB_QUIRK_LINEAR_FRAME_INTR_BINTERVAL },
+
++ /* Silicon Motion Flash Drive */
++ { USB_DEVICE(0x090c, 0x1000), .driver_info = USB_QUIRK_DELAY_INIT },
++
+ /* Sound Devices USBPre2 */
+ { USB_DEVICE(0x0926, 0x0202), .driver_info =
+ USB_QUIRK_ENDPOINT_IGNORE },
+@@ -539,6 +545,9 @@ static const struct usb_device_id usb_quirk_list[] = {
+ { USB_DEVICE(0x2040, 0x7200), .driver_info =
+ USB_QUIRK_CONFIG_INTF_STRINGS },
+
++ /* VLI disk */
++ { USB_DEVICE(0x2109, 0x0711), .driver_info = USB_QUIRK_NO_LPM },
++
+ /* Raydium Touchscreen */
+ { USB_DEVICE(0x2386, 0x3114), .driver_info = USB_QUIRK_NO_LPM },
+
+diff --git a/drivers/usb/dwc3/dwc3-pci.c b/drivers/usb/dwc3/dwc3-pci.c
+index 052852f8014676..54a4ee2b90b7f4 100644
+--- a/drivers/usb/dwc3/dwc3-pci.c
++++ b/drivers/usb/dwc3/dwc3-pci.c
+@@ -148,11 +148,21 @@ static const struct property_entry dwc3_pci_intel_byt_properties[] = {
+ {}
+ };
+
++/*
++ * Intel Merrifield SoC uses these endpoints for tracing and they cannot
++ * be re-allocated if being used because the side band flow control signals
++ * are hard wired to certain endpoints:
++ * - 1 High BW Bulk IN (IN#1) (RTIT)
++ * - 1 1KB BW Bulk IN (IN#8) + 1 1KB BW Bulk OUT (Run Control) (OUT#8)
++ */
++static const u8 dwc3_pci_mrfld_reserved_endpoints[] = { 3, 16, 17 };
++
+ static const struct property_entry dwc3_pci_mrfld_properties[] = {
+ PROPERTY_ENTRY_STRING("dr_mode", "otg"),
+ PROPERTY_ENTRY_STRING("linux,extcon-name", "mrfld_bcove_pwrsrc"),
+ PROPERTY_ENTRY_BOOL("snps,dis_u3_susphy_quirk"),
+ PROPERTY_ENTRY_BOOL("snps,dis_u2_susphy_quirk"),
++ PROPERTY_ENTRY_U8_ARRAY("snps,reserved-endpoints", dwc3_pci_mrfld_reserved_endpoints),
+ PROPERTY_ENTRY_BOOL("snps,usb2-gadget-lpm-disable"),
+ PROPERTY_ENTRY_BOOL("linux,sysdev_is_parent"),
+ {}
+diff --git a/drivers/usb/dwc3/dwc3-xilinx.c b/drivers/usb/dwc3/dwc3-xilinx.c
+index a33a42ba0249ab..4ca7f6240d07df 100644
+--- a/drivers/usb/dwc3/dwc3-xilinx.c
++++ b/drivers/usb/dwc3/dwc3-xilinx.c
+@@ -207,15 +207,13 @@ static int dwc3_xlnx_init_zynqmp(struct dwc3_xlnx *priv_data)
+
+ skip_usb3_phy:
+ /* ulpi reset via gpio-modepin or gpio-framework driver */
+- reset_gpio = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_LOW);
++ reset_gpio = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_HIGH);
+ if (IS_ERR(reset_gpio)) {
+ return dev_err_probe(dev, PTR_ERR(reset_gpio),
+ "Failed to request reset GPIO\n");
+ }
+
+ if (reset_gpio) {
+- /* Toggle ulpi to reset the phy. */
+- gpiod_set_value_cansleep(reset_gpio, 1);
+ usleep_range(5000, 10000);
+ gpiod_set_value_cansleep(reset_gpio, 0);
+ usleep_range(5000, 10000);
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 89a4dc8ebf9482..c6761fe89cfaeb 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -547,6 +547,7 @@ static int dwc3_gadget_set_xfer_resource(struct dwc3_ep *dep)
+ int dwc3_gadget_start_config(struct dwc3 *dwc, unsigned int resource_index)
+ {
+ struct dwc3_gadget_ep_cmd_params params;
++ struct dwc3_ep *dep;
+ u32 cmd;
+ int i;
+ int ret;
+@@ -563,8 +564,13 @@ int dwc3_gadget_start_config(struct dwc3 *dwc, unsigned int resource_index)
+ return ret;
+
+ /* Reset resource allocation flags */
+- for (i = resource_index; i < dwc->num_eps && dwc->eps[i]; i++)
+- dwc->eps[i]->flags &= ~DWC3_EP_RESOURCE_ALLOCATED;
++ for (i = resource_index; i < dwc->num_eps; i++) {
++ dep = dwc->eps[i];
++ if (!dep)
++ continue;
++
++ dep->flags &= ~DWC3_EP_RESOURCE_ALLOCATED;
++ }
+
+ return 0;
+ }
+@@ -751,9 +757,11 @@ void dwc3_gadget_clear_tx_fifos(struct dwc3 *dwc)
+
+ dwc->last_fifo_depth = fifo_depth;
+ /* Clear existing TXFIFO for all IN eps except ep0 */
+- for (num = 3; num < min_t(int, dwc->num_eps, DWC3_ENDPOINTS_NUM);
+- num += 2) {
++ for (num = 3; num < min_t(int, dwc->num_eps, DWC3_ENDPOINTS_NUM); num += 2) {
+ dep = dwc->eps[num];
++ if (!dep)
++ continue;
++
+ /* Don't change TXFRAMNUM on usb31 version */
+ size = DWC3_IP_IS(DWC3) ? 0 :
+ dwc3_readl(dwc->regs, DWC3_GTXFIFOSIZ(num >> 1)) &
+@@ -3703,6 +3711,8 @@ static bool dwc3_gadget_endpoint_trbs_complete(struct dwc3_ep *dep,
+
+ for (i = 0; i < DWC3_ENDPOINTS_NUM; i++) {
+ dep = dwc->eps[i];
++ if (!dep)
++ continue;
+
+ if (!(dep->flags & DWC3_EP_ENABLED))
+ continue;
+@@ -3852,6 +3862,10 @@ static void dwc3_endpoint_interrupt(struct dwc3 *dwc,
+ u8 epnum = event->endpoint_number;
+
+ dep = dwc->eps[epnum];
++ if (!dep) {
++ dev_warn(dwc->dev, "spurious event, endpoint %u is not allocated\n", epnum);
++ return;
++ }
+
+ if (!(dep->flags & DWC3_EP_ENABLED)) {
+ if ((epnum > 1) && !(dep->flags & DWC3_EP_TRANSFER_STARTED))
+@@ -4564,6 +4578,12 @@ static irqreturn_t dwc3_check_event_buf(struct dwc3_event_buffer *evt)
+ if (!count)
+ return IRQ_NONE;
+
++ if (count > evt->length) {
++ dev_err_ratelimited(dwc->dev, "invalid count(%u) > evt->length(%u)\n",
++ count, evt->length);
++ return IRQ_NONE;
++ }
++
+ evt->count = count;
+ evt->flags |= DWC3_EVENT_PENDING;
+
+diff --git a/drivers/usb/gadget/udc/aspeed-vhub/dev.c b/drivers/usb/gadget/udc/aspeed-vhub/dev.c
+index 573109ca5b7990..a09f72772e6e95 100644
+--- a/drivers/usb/gadget/udc/aspeed-vhub/dev.c
++++ b/drivers/usb/gadget/udc/aspeed-vhub/dev.c
+@@ -548,6 +548,9 @@ int ast_vhub_init_dev(struct ast_vhub *vhub, unsigned int idx)
+ d->vhub = vhub;
+ d->index = idx;
+ d->name = devm_kasprintf(parent, GFP_KERNEL, "port%d", idx+1);
++ if (!d->name)
++ return -ENOMEM;
++
+ d->regs = vhub->regs + 0x100 + 0x10 * idx;
+
+ ast_vhub_init_ep0(vhub, &d->ep0, d);
+diff --git a/drivers/usb/host/max3421-hcd.c b/drivers/usb/host/max3421-hcd.c
+index 0881fdd1823e0b..dcf31a592f5d11 100644
+--- a/drivers/usb/host/max3421-hcd.c
++++ b/drivers/usb/host/max3421-hcd.c
+@@ -1946,6 +1946,12 @@ max3421_remove(struct spi_device *spi)
+ usb_put_hcd(hcd);
+ }
+
++static const struct spi_device_id max3421_spi_ids[] = {
++ { "max3421" },
++ { },
++};
++MODULE_DEVICE_TABLE(spi, max3421_spi_ids);
++
+ static const struct of_device_id max3421_of_match_table[] = {
+ { .compatible = "maxim,max3421", },
+ {},
+@@ -1955,6 +1961,7 @@ MODULE_DEVICE_TABLE(of, max3421_of_match_table);
+ static struct spi_driver max3421_driver = {
+ .probe = max3421_probe,
+ .remove = max3421_remove,
++ .id_table = max3421_spi_ids,
+ .driver = {
+ .name = "max3421-hcd",
+ .of_match_table = max3421_of_match_table,
+diff --git a/drivers/usb/host/ohci-pci.c b/drivers/usb/host/ohci-pci.c
+index 900ea0d368e034..9f0a6b27e47cb6 100644
+--- a/drivers/usb/host/ohci-pci.c
++++ b/drivers/usb/host/ohci-pci.c
+@@ -165,6 +165,25 @@ static int ohci_quirk_amd700(struct usb_hcd *hcd)
+ return 0;
+ }
+
++static int ohci_quirk_loongson(struct usb_hcd *hcd)
++{
++ struct pci_dev *pdev = to_pci_dev(hcd->self.controller);
++
++ /*
++ * Loongson's LS7A OHCI controller (rev 0x02) has a
++ * flaw. MMIO register with offset 0x60/64 is treated
++ * as legacy PS2-compatible keyboard/mouse interface.
++ * Since OHCI only use 4KB BAR resource, LS7A OHCI's
++ * 32KB BAR is wrapped around (the 2nd 4KB BAR space
++ * is the same as the 1st 4KB internally). So add 4KB
++ * offset (0x1000) to the OHCI registers as a quirk.
++ */
++ if (pdev->revision == 0x2)
++ hcd->regs += SZ_4K; /* SZ_4K = 0x1000 */
++
++ return 0;
++}
++
+ static int ohci_quirk_qemu(struct usb_hcd *hcd)
+ {
+ struct ohci_hcd *ohci = hcd_to_ohci(hcd);
+@@ -224,6 +243,10 @@ static const struct pci_device_id ohci_pci_quirks[] = {
+ PCI_DEVICE(PCI_VENDOR_ID_ATI, 0x4399),
+ .driver_data = (unsigned long)ohci_quirk_amd700,
+ },
++ {
++ PCI_DEVICE(PCI_VENDOR_ID_LOONGSON, 0x7a24),
++ .driver_data = (unsigned long)ohci_quirk_loongson,
++ },
+ {
+ .vendor = PCI_VENDOR_ID_APPLE,
+ .device = 0x003f,
+diff --git a/drivers/usb/host/xhci-hub.c b/drivers/usb/host/xhci-hub.c
+index 69c278b64084b2..71e4c4ca6ad59b 100644
+--- a/drivers/usb/host/xhci-hub.c
++++ b/drivers/usb/host/xhci-hub.c
+@@ -1878,9 +1878,10 @@ int xhci_bus_resume(struct usb_hcd *hcd)
+ int max_ports, port_index;
+ int sret;
+ u32 next_state;
+- u32 temp, portsc;
++ u32 portsc;
+ struct xhci_hub *rhub;
+ struct xhci_port **ports;
++ bool disabled_irq = false;
+
+ rhub = xhci_get_rhub(hcd);
+ ports = rhub->ports;
+@@ -1896,17 +1897,20 @@ int xhci_bus_resume(struct usb_hcd *hcd)
+ return -ESHUTDOWN;
+ }
+
+- /* delay the irqs */
+- temp = readl(&xhci->op_regs->command);
+- temp &= ~CMD_EIE;
+- writel(temp, &xhci->op_regs->command);
+-
+ /* bus specific resume for ports we suspended at bus_suspend */
+- if (hcd->speed >= HCD_USB3)
++ if (hcd->speed >= HCD_USB3) {
+ next_state = XDEV_U0;
+- else
++ } else {
+ next_state = XDEV_RESUME;
+-
++ if (bus_state->bus_suspended) {
++ /*
++ * prevent port event interrupts from interfering
++ * with usb2 port resume process
++ */
++ xhci_disable_interrupter(xhci->interrupters[0]);
++ disabled_irq = true;
++ }
++ }
+ port_index = max_ports;
+ while (port_index--) {
+ portsc = readl(ports[port_index]->addr);
+@@ -1974,11 +1978,9 @@ int xhci_bus_resume(struct usb_hcd *hcd)
+ (void) readl(&xhci->op_regs->command);
+
+ bus_state->next_statechange = jiffies + msecs_to_jiffies(5);
+- /* re-enable irqs */
+- temp = readl(&xhci->op_regs->command);
+- temp |= CMD_EIE;
+- writel(temp, &xhci->op_regs->command);
+- temp = readl(&xhci->op_regs->command);
++ /* re-enable interrupter */
++ if (disabled_irq)
++ xhci_enable_interrupter(xhci->interrupters[0]);
+
+ spin_unlock_irqrestore(&xhci->lock, flags);
+ return 0;
+diff --git a/drivers/usb/host/xhci-mvebu.c b/drivers/usb/host/xhci-mvebu.c
+index 87f1597a0e5ab7..257e4d79971fda 100644
+--- a/drivers/usb/host/xhci-mvebu.c
++++ b/drivers/usb/host/xhci-mvebu.c
+@@ -73,13 +73,3 @@ int xhci_mvebu_mbus_init_quirk(struct usb_hcd *hcd)
+
+ return 0;
+ }
+-
+-int xhci_mvebu_a3700_init_quirk(struct usb_hcd *hcd)
+-{
+- struct xhci_hcd *xhci = hcd_to_xhci(hcd);
+-
+- /* Without reset on resume, the HC won't work at all */
+- xhci->quirks |= XHCI_RESET_ON_RESUME;
+-
+- return 0;
+-}
+diff --git a/drivers/usb/host/xhci-mvebu.h b/drivers/usb/host/xhci-mvebu.h
+index 3be021793cc8b0..9d26e22c48422f 100644
+--- a/drivers/usb/host/xhci-mvebu.h
++++ b/drivers/usb/host/xhci-mvebu.h
+@@ -12,16 +12,10 @@ struct usb_hcd;
+
+ #if IS_ENABLED(CONFIG_USB_XHCI_MVEBU)
+ int xhci_mvebu_mbus_init_quirk(struct usb_hcd *hcd);
+-int xhci_mvebu_a3700_init_quirk(struct usb_hcd *hcd);
+ #else
+ static inline int xhci_mvebu_mbus_init_quirk(struct usb_hcd *hcd)
+ {
+ return 0;
+ }
+-
+-static inline int xhci_mvebu_a3700_init_quirk(struct usb_hcd *hcd)
+-{
+- return 0;
+-}
+ #endif
+ #endif /* __LINUX_XHCI_MVEBU_H */
+diff --git a/drivers/usb/host/xhci-plat.c b/drivers/usb/host/xhci-plat.c
+index d85ffa9ffaa70f..ff813dca2d1d30 100644
+--- a/drivers/usb/host/xhci-plat.c
++++ b/drivers/usb/host/xhci-plat.c
+@@ -106,7 +106,7 @@ static const struct xhci_plat_priv xhci_plat_marvell_armada = {
+ };
+
+ static const struct xhci_plat_priv xhci_plat_marvell_armada3700 = {
+- .init_quirk = xhci_mvebu_a3700_init_quirk,
++ .quirks = XHCI_RESET_ON_RESUME,
+ };
+
+ static const struct xhci_plat_priv xhci_plat_brcm = {
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index 5e89e9cdcec2e9..5a0e361818c27d 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -1198,16 +1198,19 @@ static void xhci_handle_cmd_stop_ep(struct xhci_hcd *xhci, int slot_id,
+ * Stopped state, but it will soon change to Running.
+ *
+ * Assume this bug on unexpected Stop Endpoint failures.
+- * Keep retrying until the EP starts and stops again, on
+- * chips where this is known to help. Wait for 100ms.
++ * Keep retrying until the EP starts and stops again.
+ */
+- if (time_is_before_jiffies(ep->stop_time + msecs_to_jiffies(100)))
+- break;
+ fallthrough;
+ case EP_STATE_RUNNING:
+ /* Race, HW handled stop ep cmd before ep was running */
+ xhci_dbg(xhci, "Stop ep completion ctx error, ctx_state %d\n",
+ GET_EP_CTX_STATE(ep_ctx));
++ /*
++ * Don't retry forever if we guessed wrong or a defective HC never starts
++ * the EP or says 'Running' but fails the command. We must give back TDs.
++ */
++ if (time_is_before_jiffies(ep->stop_time + msecs_to_jiffies(100)))
++ break;
+
+ command = xhci_alloc_command(xhci, false, GFP_ATOMIC);
+ if (!command) {
+@@ -2644,6 +2647,22 @@ static int handle_transferless_tx_event(struct xhci_hcd *xhci, struct xhci_virt_
+ return 0;
+ }
+
++static bool xhci_spurious_success_tx_event(struct xhci_hcd *xhci,
++ struct xhci_ring *ring)
++{
++ switch (ring->old_trb_comp_code) {
++ case COMP_SHORT_PACKET:
++ return xhci->quirks & XHCI_SPURIOUS_SUCCESS;
++ case COMP_USB_TRANSACTION_ERROR:
++ case COMP_BABBLE_DETECTED_ERROR:
++ case COMP_ISOCH_BUFFER_OVERRUN:
++ return xhci->quirks & XHCI_ETRON_HOST &&
++ ring->type == TYPE_ISOC;
++ default:
++ return false;
++ }
++}
++
+ /*
+ * If this function returns an error condition, it means it got a Transfer
+ * event with a corrupted Slot ID, Endpoint ID, or TRB DMA address.
+@@ -2664,6 +2683,7 @@ static int handle_tx_event(struct xhci_hcd *xhci,
+ int status = -EINPROGRESS;
+ struct xhci_ep_ctx *ep_ctx;
+ u32 trb_comp_code;
++ bool ring_xrun_event = false;
+
+ slot_id = TRB_TO_SLOT_ID(le32_to_cpu(event->flags));
+ ep_index = TRB_TO_EP_ID(le32_to_cpu(event->flags)) - 1;
+@@ -2697,8 +2717,8 @@ static int handle_tx_event(struct xhci_hcd *xhci,
+ case COMP_SUCCESS:
+ if (EVENT_TRB_LEN(le32_to_cpu(event->transfer_len)) != 0) {
+ trb_comp_code = COMP_SHORT_PACKET;
+- xhci_dbg(xhci, "Successful completion on short TX for slot %u ep %u with last td short %d\n",
+- slot_id, ep_index, ep_ring->last_td_was_short);
++ xhci_dbg(xhci, "Successful completion on short TX for slot %u ep %u with last td comp code %d\n",
++ slot_id, ep_index, ep_ring->old_trb_comp_code);
+ }
+ break;
+ case COMP_SHORT_PACKET:
+@@ -2770,14 +2790,12 @@ static int handle_tx_event(struct xhci_hcd *xhci,
+ * Underrun Event for OUT Isoch endpoint.
+ */
+ xhci_dbg(xhci, "Underrun event on slot %u ep %u\n", slot_id, ep_index);
+- if (ep->skip)
+- break;
+- return 0;
++ ring_xrun_event = true;
++ break;
+ case COMP_RING_OVERRUN:
+ xhci_dbg(xhci, "Overrun event on slot %u ep %u\n", slot_id, ep_index);
+- if (ep->skip)
+- break;
+- return 0;
++ ring_xrun_event = true;
++ break;
+ case COMP_MISSED_SERVICE_ERROR:
+ /*
+ * When encounter missed service error, one or more isoc tds
+@@ -2789,7 +2807,7 @@ static int handle_tx_event(struct xhci_hcd *xhci,
+ xhci_dbg(xhci,
+ "Miss service interval error for slot %u ep %u, set skip flag\n",
+ slot_id, ep_index);
+- return 0;
++ break;
+ case COMP_NO_PING_RESPONSE_ERROR:
+ ep->skip = true;
+ xhci_dbg(xhci,
+@@ -2837,6 +2855,10 @@ static int handle_tx_event(struct xhci_hcd *xhci,
+ xhci_dequeue_td(xhci, td, ep_ring, td->status);
+ }
+
++ /* Missed TDs will be skipped on the next event */
++ if (trb_comp_code == COMP_MISSED_SERVICE_ERROR)
++ return 0;
++
+ if (list_empty(&ep_ring->td_list)) {
+ /*
+ * Don't print wanings if ring is empty due to a stopped endpoint generating an
+@@ -2846,7 +2868,8 @@ static int handle_tx_event(struct xhci_hcd *xhci,
+ */
+ if (trb_comp_code != COMP_STOPPED &&
+ trb_comp_code != COMP_STOPPED_LENGTH_INVALID &&
+- !ep_ring->last_td_was_short) {
++ !ring_xrun_event &&
++ !xhci_spurious_success_tx_event(xhci, ep_ring)) {
+ xhci_warn(xhci, "Event TRB for slot %u ep %u with no TDs queued\n",
+ slot_id, ep_index);
+ }
+@@ -2880,6 +2903,10 @@ static int handle_tx_event(struct xhci_hcd *xhci,
+ goto check_endpoint_halted;
+ }
+
++ /* TD was queued after xrun, maybe xrun was on a link, don't panic yet */
++ if (ring_xrun_event)
++ return 0;
++
+ /*
+ * Skip the Force Stopped Event. The 'ep_trb' of FSE is not in the current
+ * TD pointed by 'ep_ring->dequeue' because that the hardware dequeue
+@@ -2894,11 +2921,12 @@ static int handle_tx_event(struct xhci_hcd *xhci,
+
+ /*
+ * Some hosts give a spurious success event after a short
+- * transfer. Ignore it.
++ * transfer or error on last TRB. Ignore it.
+ */
+- if ((xhci->quirks & XHCI_SPURIOUS_SUCCESS) &&
+- ep_ring->last_td_was_short) {
+- ep_ring->last_td_was_short = false;
++ if (xhci_spurious_success_tx_event(xhci, ep_ring)) {
++ xhci_dbg(xhci, "Spurious event dma %pad, comp_code %u after %u\n",
++ &ep_trb_dma, trb_comp_code, ep_ring->old_trb_comp_code);
++ ep_ring->old_trb_comp_code = 0;
+ return 0;
+ }
+
+@@ -2926,10 +2954,11 @@ static int handle_tx_event(struct xhci_hcd *xhci,
+ */
+ } while (ep->skip);
+
+- if (trb_comp_code == COMP_SHORT_PACKET)
+- ep_ring->last_td_was_short = true;
+- else
+- ep_ring->last_td_was_short = false;
++ ep_ring->old_trb_comp_code = trb_comp_code;
++
++ /* Get out if a TD was queued at enqueue after the xrun occurred */
++ if (ring_xrun_event)
++ return 0;
+
+ ep_trb = &ep_seg->trbs[(ep_trb_dma - ep_seg->dma) / sizeof(*ep_trb)];
+ trace_xhci_handle_transfer(ep_ring, (struct xhci_generic_trb *) ep_trb, ep_trb_dma);
+@@ -3780,7 +3809,7 @@ int xhci_queue_ctrl_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
+ * enqueue a No Op TRB, this can prevent the Setup and Data Stage
+ * TRB to be breaked by the Link TRB.
+ */
+- if (trb_is_link(ep_ring->enqueue + 1)) {
++ if (last_trb_on_seg(ep_ring->enq_seg, ep_ring->enqueue + 1)) {
+ field = TRB_TYPE(TRB_TR_NOOP) | ep_ring->cycle_state;
+ queue_trb(xhci, ep_ring, false, 0, 0,
+ TRB_INTR_TARGET(0), field);
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index 1a90ebc8a30ea5..72070f7e6a76f1 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -322,7 +322,7 @@ static void xhci_zero_64b_regs(struct xhci_hcd *xhci)
+ xhci_info(xhci, "Fault detected\n");
+ }
+
+-static int xhci_enable_interrupter(struct xhci_interrupter *ir)
++int xhci_enable_interrupter(struct xhci_interrupter *ir)
+ {
+ u32 iman;
+
+@@ -335,7 +335,7 @@ static int xhci_enable_interrupter(struct xhci_interrupter *ir)
+ return 0;
+ }
+
+-static int xhci_disable_interrupter(struct xhci_interrupter *ir)
++int xhci_disable_interrupter(struct xhci_interrupter *ir)
+ {
+ u32 iman;
+
+diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
+index 59c6c1c701b9c9..2c394cba120f15 100644
+--- a/drivers/usb/host/xhci.h
++++ b/drivers/usb/host/xhci.h
+@@ -1371,7 +1371,7 @@ struct xhci_ring {
+ unsigned int num_trbs_free; /* used only by xhci DbC */
+ unsigned int bounce_buf_len;
+ enum xhci_ring_type type;
+- bool last_td_was_short;
++ u32 old_trb_comp_code;
+ struct radix_tree_root *trb_address_map;
+ };
+
+@@ -1890,6 +1890,8 @@ int xhci_alloc_tt_info(struct xhci_hcd *xhci,
+ struct usb_tt *tt, gfp_t mem_flags);
+ int xhci_set_interrupter_moderation(struct xhci_interrupter *ir,
+ u32 imod_interval);
++int xhci_enable_interrupter(struct xhci_interrupter *ir);
++int xhci_disable_interrupter(struct xhci_interrupter *ir);
+
+ /* xHCI ring, segment, TRB, and TD functions */
+ dma_addr_t xhci_trb_virt_to_dma(struct xhci_segment *seg, union xhci_trb *trb);
+diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c
+index 9b34e23b70919f..6ac7a0a5cf074e 100644
+--- a/drivers/usb/serial/ftdi_sio.c
++++ b/drivers/usb/serial/ftdi_sio.c
+@@ -1093,6 +1093,8 @@ static const struct usb_device_id id_table_combined[] = {
+ { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_602E_PID, 1) },
+ { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_602E_PID, 2) },
+ { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_602E_PID, 3) },
++ /* Abacus Electrics */
++ { USB_DEVICE(FTDI_VID, ABACUS_OPTICAL_PROBE_PID) },
+ { } /* Terminating entry */
+ };
+
+diff --git a/drivers/usb/serial/ftdi_sio_ids.h b/drivers/usb/serial/ftdi_sio_ids.h
+index 52be47d684ea66..9acb6f83732763 100644
+--- a/drivers/usb/serial/ftdi_sio_ids.h
++++ b/drivers/usb/serial/ftdi_sio_ids.h
+@@ -442,6 +442,11 @@
+ #define LINX_FUTURE_1_PID 0xF44B /* Linx future device */
+ #define LINX_FUTURE_2_PID 0xF44C /* Linx future device */
+
++/*
++ * Abacus Electrics
++ */
++#define ABACUS_OPTICAL_PROBE_PID 0xf458 /* ABACUS ELECTRICS Optical Probe */
++
+ /*
+ * Oceanic product ids
+ */
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 5cd26dac2069fa..27879cc575365c 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -611,6 +611,7 @@ static void option_instat_callback(struct urb *urb);
+ /* Sierra Wireless products */
+ #define SIERRA_VENDOR_ID 0x1199
+ #define SIERRA_PRODUCT_EM9191 0x90d3
++#define SIERRA_PRODUCT_EM9291 0x90e3
+
+ /* UNISOC (Spreadtrum) products */
+ #define UNISOC_VENDOR_ID 0x1782
+@@ -2432,6 +2433,8 @@ static const struct usb_device_id option_ids[] = {
+ { USB_DEVICE_AND_INTERFACE_INFO(SIERRA_VENDOR_ID, SIERRA_PRODUCT_EM9191, 0xff, 0xff, 0x30) },
+ { USB_DEVICE_AND_INTERFACE_INFO(SIERRA_VENDOR_ID, SIERRA_PRODUCT_EM9191, 0xff, 0xff, 0x40) },
+ { USB_DEVICE_AND_INTERFACE_INFO(SIERRA_VENDOR_ID, SIERRA_PRODUCT_EM9191, 0xff, 0, 0) },
++ { USB_DEVICE_AND_INTERFACE_INFO(SIERRA_VENDOR_ID, SIERRA_PRODUCT_EM9291, 0xff, 0xff, 0x30) },
++ { USB_DEVICE_AND_INTERFACE_INFO(SIERRA_VENDOR_ID, SIERRA_PRODUCT_EM9291, 0xff, 0xff, 0x40) },
+ { USB_DEVICE_AND_INTERFACE_INFO(UNISOC_VENDOR_ID, TOZED_PRODUCT_LT70C, 0xff, 0, 0) },
+ { USB_DEVICE_AND_INTERFACE_INFO(UNISOC_VENDOR_ID, LUAT_PRODUCT_AIR720U, 0xff, 0, 0) },
+ { USB_DEVICE_INTERFACE_CLASS(0x1bbb, 0x0530, 0xff), /* TCL IK512 MBIM */
+diff --git a/drivers/usb/serial/usb-serial-simple.c b/drivers/usb/serial/usb-serial-simple.c
+index 2c12449ff60c51..a0afaf254d1229 100644
+--- a/drivers/usb/serial/usb-serial-simple.c
++++ b/drivers/usb/serial/usb-serial-simple.c
+@@ -100,6 +100,11 @@ DEVICE(nokia, NOKIA_IDS);
+ { USB_DEVICE(0x09d7, 0x0100) } /* NovAtel FlexPack GPS */
+ DEVICE_N(novatel_gps, NOVATEL_IDS, 3);
+
++/* OWON electronic test and measurement equipment driver */
++#define OWON_IDS() \
++ { USB_DEVICE(0x5345, 0x1234) } /* HDS200 oscilloscopes and others */
++DEVICE(owon, OWON_IDS);
++
+ /* Siemens USB/MPI adapter */
+ #define SIEMENS_IDS() \
+ { USB_DEVICE(0x908, 0x0004) }
+@@ -134,6 +139,7 @@ static struct usb_serial_driver * const serial_drivers[] = {
+ &motorola_tetra_device,
+ &nokia_device,
+ &novatel_gps_device,
++ &owon_device,
+ &siemens_mpi_device,
+ &suunto_device,
+ &vivopay_device,
+@@ -153,6 +159,7 @@ static const struct usb_device_id id_table[] = {
+ MOTOROLA_TETRA_IDS(),
+ NOKIA_IDS(),
+ NOVATEL_IDS(),
++ OWON_IDS(),
+ SIEMENS_IDS(),
+ SUUNTO_IDS(),
+ VIVOPAY_IDS(),
+diff --git a/drivers/usb/storage/unusual_uas.h b/drivers/usb/storage/unusual_uas.h
+index 1f8c9b16a0fb85..d460d71b425783 100644
+--- a/drivers/usb/storage/unusual_uas.h
++++ b/drivers/usb/storage/unusual_uas.h
+@@ -83,6 +83,13 @@ UNUSUAL_DEV(0x0bc2, 0x331a, 0x0000, 0x9999,
+ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+ US_FL_NO_REPORT_LUNS),
+
++/* Reported-by: Oliver Neukum <oneukum@suse.com> */
++UNUSUAL_DEV(0x125f, 0xa94a, 0x0160, 0x0160,
++ "ADATA",
++ "Portable HDD CH94",
++ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
++ US_FL_NO_ATA_1X),
++
+ /* Reported-by: Benjamin Tissoires <benjamin.tissoires@redhat.com> */
+ UNUSUAL_DEV(0x13fd, 0x3940, 0x0000, 0x9999,
+ "Initio Corporation",
+diff --git a/drivers/usb/typec/class.c b/drivers/usb/typec/class.c
+index 9c76c3d0c6cff9..67a533e3515064 100644
+--- a/drivers/usb/typec/class.c
++++ b/drivers/usb/typec/class.c
+@@ -1052,9 +1052,11 @@ struct typec_partner *typec_register_partner(struct typec_port *port,
+ partner->usb_mode = USB_MODE_USB3;
+ }
+
++ mutex_lock(&port->partner_link_lock);
+ ret = device_register(&partner->dev);
+ if (ret) {
+ dev_err(&port->dev, "failed to register partner (%d)\n", ret);
++ mutex_unlock(&port->partner_link_lock);
+ put_device(&partner->dev);
+ return ERR_PTR(ret);
+ }
+@@ -1063,6 +1065,7 @@ struct typec_partner *typec_register_partner(struct typec_port *port,
+ typec_partner_link_device(partner, port->usb2_dev);
+ if (port->usb3_dev)
+ typec_partner_link_device(partner, port->usb3_dev);
++ mutex_unlock(&port->partner_link_lock);
+
+ return partner;
+ }
+@@ -1083,12 +1086,18 @@ void typec_unregister_partner(struct typec_partner *partner)
+
+ port = to_typec_port(partner->dev.parent);
+
+- if (port->usb2_dev)
++ mutex_lock(&port->partner_link_lock);
++ if (port->usb2_dev) {
+ typec_partner_unlink_device(partner, port->usb2_dev);
+- if (port->usb3_dev)
++ port->usb2_dev = NULL;
++ }
++ if (port->usb3_dev) {
+ typec_partner_unlink_device(partner, port->usb3_dev);
++ port->usb3_dev = NULL;
++ }
+
+ device_unregister(&partner->dev);
++ mutex_unlock(&port->partner_link_lock);
+ }
+ EXPORT_SYMBOL_GPL(typec_unregister_partner);
+
+@@ -2041,10 +2050,11 @@ static struct typec_partner *typec_get_partner(struct typec_port *port)
+ static void typec_partner_attach(struct typec_connector *con, struct device *dev)
+ {
+ struct typec_port *port = container_of(con, struct typec_port, con);
+- struct typec_partner *partner = typec_get_partner(port);
++ struct typec_partner *partner;
+ struct usb_device *udev = to_usb_device(dev);
+ enum usb_mode usb_mode;
+
++ mutex_lock(&port->partner_link_lock);
+ if (udev->speed < USB_SPEED_SUPER) {
+ usb_mode = USB_MODE_USB2;
+ port->usb2_dev = dev;
+@@ -2053,18 +2063,22 @@ static void typec_partner_attach(struct typec_connector *con, struct device *dev
+ port->usb3_dev = dev;
+ }
+
++ partner = typec_get_partner(port);
+ if (partner) {
+ typec_partner_set_usb_mode(partner, usb_mode);
+ typec_partner_link_device(partner, dev);
+ put_device(&partner->dev);
+ }
++ mutex_unlock(&port->partner_link_lock);
+ }
+
+ static void typec_partner_deattach(struct typec_connector *con, struct device *dev)
+ {
+ struct typec_port *port = container_of(con, struct typec_port, con);
+- struct typec_partner *partner = typec_get_partner(port);
++ struct typec_partner *partner;
+
++ mutex_lock(&port->partner_link_lock);
++ partner = typec_get_partner(port);
+ if (partner) {
+ typec_partner_unlink_device(partner, dev);
+ put_device(&partner->dev);
+@@ -2074,6 +2088,7 @@ static void typec_partner_deattach(struct typec_connector *con, struct device *d
+ port->usb2_dev = NULL;
+ else if (port->usb3_dev == dev)
+ port->usb3_dev = NULL;
++ mutex_unlock(&port->partner_link_lock);
+ }
+
+ /**
+@@ -2614,6 +2629,7 @@ struct typec_port *typec_register_port(struct device *parent,
+
+ ida_init(&port->mode_ids);
+ mutex_init(&port->port_type_lock);
++ mutex_init(&port->partner_link_lock);
+
+ port->id = id;
+ port->ops = cap->ops;
+diff --git a/drivers/usb/typec/class.h b/drivers/usb/typec/class.h
+index b3076a24ad2eee..db2fe96c48ff0f 100644
+--- a/drivers/usb/typec/class.h
++++ b/drivers/usb/typec/class.h
+@@ -59,6 +59,7 @@ struct typec_port {
+ enum typec_port_type port_type;
+ enum usb_mode usb_mode;
+ struct mutex port_type_lock;
++ struct mutex partner_link_lock;
+
+ enum typec_orientation orientation;
+ struct typec_switch *sw;
+diff --git a/drivers/usb/typec/ucsi/cros_ec_ucsi.c b/drivers/usb/typec/ucsi/cros_ec_ucsi.c
+index c605c861672687..744f0709a40edd 100644
+--- a/drivers/usb/typec/ucsi/cros_ec_ucsi.c
++++ b/drivers/usb/typec/ucsi/cros_ec_ucsi.c
+@@ -105,12 +105,13 @@ static int cros_ucsi_async_control(struct ucsi *ucsi, u64 cmd)
+ return 0;
+ }
+
+-static int cros_ucsi_sync_control(struct ucsi *ucsi, u64 cmd)
++static int cros_ucsi_sync_control(struct ucsi *ucsi, u64 cmd, u32 *cci,
++ void *data, size_t size)
+ {
+ struct cros_ucsi_data *udata = ucsi_get_drvdata(ucsi);
+ int ret;
+
+- ret = ucsi_sync_control_common(ucsi, cmd);
++ ret = ucsi_sync_control_common(ucsi, cmd, cci, data, size);
+ switch (ret) {
+ case -EBUSY:
+ /* EC may return -EBUSY if CCI.busy is set.
+diff --git a/drivers/usb/typec/ucsi/ucsi.c b/drivers/usb/typec/ucsi/ucsi.c
+index 2a2915b0a645ff..e8c7e9dc49309c 100644
+--- a/drivers/usb/typec/ucsi/ucsi.c
++++ b/drivers/usb/typec/ucsi/ucsi.c
+@@ -55,7 +55,8 @@ void ucsi_notify_common(struct ucsi *ucsi, u32 cci)
+ }
+ EXPORT_SYMBOL_GPL(ucsi_notify_common);
+
+-int ucsi_sync_control_common(struct ucsi *ucsi, u64 command)
++int ucsi_sync_control_common(struct ucsi *ucsi, u64 command, u32 *cci,
++ void *data, size_t size)
+ {
+ bool ack = UCSI_COMMAND(command) == UCSI_ACK_CC_CI;
+ int ret;
+@@ -80,6 +81,13 @@ int ucsi_sync_control_common(struct ucsi *ucsi, u64 command)
+ else
+ clear_bit(COMMAND_PENDING, &ucsi->flags);
+
++ if (!ret && cci)
++ ret = ucsi->ops->read_cci(ucsi, cci);
++
++ if (!ret && data &&
++ (*cci & UCSI_CCI_COMMAND_COMPLETE))
++ ret = ucsi->ops->read_message_in(ucsi, data, size);
++
+ return ret;
+ }
+ EXPORT_SYMBOL_GPL(ucsi_sync_control_common);
+@@ -95,7 +103,7 @@ static int ucsi_acknowledge(struct ucsi *ucsi, bool conn_ack)
+ ctrl |= UCSI_ACK_CONNECTOR_CHANGE;
+ }
+
+- return ucsi->ops->sync_control(ucsi, ctrl);
++ return ucsi->ops->sync_control(ucsi, ctrl, NULL, NULL, 0);
+ }
+
+ static int ucsi_run_command(struct ucsi *ucsi, u64 command, u32 *cci,
+@@ -108,9 +116,7 @@ static int ucsi_run_command(struct ucsi *ucsi, u64 command, u32 *cci,
+ if (size > UCSI_MAX_DATA_LENGTH(ucsi))
+ return -EINVAL;
+
+- ret = ucsi->ops->sync_control(ucsi, command);
+- if (ucsi->ops->read_cci(ucsi, cci))
+- return -EIO;
++ ret = ucsi->ops->sync_control(ucsi, command, cci, data, size);
+
+ if (*cci & UCSI_CCI_BUSY)
+ return ucsi_run_command(ucsi, UCSI_CANCEL, cci, NULL, 0, false) ?: -EBUSY;
+@@ -127,9 +133,6 @@ static int ucsi_run_command(struct ucsi *ucsi, u64 command, u32 *cci,
+ else
+ err = 0;
+
+- if (!err && data && UCSI_CCI_LENGTH(*cci))
+- err = ucsi->ops->read_message_in(ucsi, data, size);
+-
+ /*
+ * Don't ACK connection change if there was an error.
+ */
+diff --git a/drivers/usb/typec/ucsi/ucsi.h b/drivers/usb/typec/ucsi/ucsi.h
+index 28780acc4af2e7..892bcf8dbcd50f 100644
+--- a/drivers/usb/typec/ucsi/ucsi.h
++++ b/drivers/usb/typec/ucsi/ucsi.h
+@@ -79,7 +79,8 @@ struct ucsi_operations {
+ int (*read_cci)(struct ucsi *ucsi, u32 *cci);
+ int (*poll_cci)(struct ucsi *ucsi, u32 *cci);
+ int (*read_message_in)(struct ucsi *ucsi, void *val, size_t val_len);
+- int (*sync_control)(struct ucsi *ucsi, u64 command);
++ int (*sync_control)(struct ucsi *ucsi, u64 command, u32 *cci,
++ void *data, size_t size);
+ int (*async_control)(struct ucsi *ucsi, u64 command);
+ bool (*update_altmodes)(struct ucsi *ucsi, struct ucsi_altmode *orig,
+ struct ucsi_altmode *updated);
+@@ -531,7 +532,8 @@ void ucsi_altmode_update_active(struct ucsi_connector *con);
+ int ucsi_resume(struct ucsi *ucsi);
+
+ void ucsi_notify_common(struct ucsi *ucsi, u32 cci);
+-int ucsi_sync_control_common(struct ucsi *ucsi, u64 command);
++int ucsi_sync_control_common(struct ucsi *ucsi, u64 command, u32 *cci,
++ void *data, size_t size);
+
+ #if IS_ENABLED(CONFIG_POWER_SUPPLY)
+ int ucsi_register_port_psy(struct ucsi_connector *con);
+diff --git a/drivers/usb/typec/ucsi/ucsi_acpi.c b/drivers/usb/typec/ucsi/ucsi_acpi.c
+index ac1ebb5d952720..0ac6e5ce4a288c 100644
+--- a/drivers/usb/typec/ucsi/ucsi_acpi.c
++++ b/drivers/usb/typec/ucsi/ucsi_acpi.c
+@@ -128,12 +128,13 @@ static int ucsi_gram_read_message_in(struct ucsi *ucsi, void *val, size_t val_le
+ return ret;
+ }
+
+-static int ucsi_gram_sync_control(struct ucsi *ucsi, u64 command)
++static int ucsi_gram_sync_control(struct ucsi *ucsi, u64 command, u32 *cci,
++ void *data, size_t size)
+ {
+ struct ucsi_acpi *ua = ucsi_get_drvdata(ucsi);
+ int ret;
+
+- ret = ucsi_sync_control_common(ucsi, command);
++ ret = ucsi_sync_control_common(ucsi, command, cci, data, size);
+ if (ret < 0)
+ return ret;
+
+diff --git a/drivers/usb/typec/ucsi/ucsi_ccg.c b/drivers/usb/typec/ucsi/ucsi_ccg.c
+index 511dd1b224ae51..c1d776c82fc2ee 100644
+--- a/drivers/usb/typec/ucsi/ucsi_ccg.c
++++ b/drivers/usb/typec/ucsi/ucsi_ccg.c
+@@ -222,7 +222,6 @@ struct ucsi_ccg {
+ u16 fw_build;
+ struct work_struct pm_work;
+
+- u64 last_cmd_sent;
+ bool has_multiple_dp;
+ struct ucsi_ccg_altmode orig[UCSI_MAX_ALTMODES];
+ struct ucsi_ccg_altmode updated[UCSI_MAX_ALTMODES];
+@@ -538,9 +537,10 @@ static void ucsi_ccg_update_set_new_cam_cmd(struct ucsi_ccg *uc,
+ * first and then vdo=0x3
+ */
+ static void ucsi_ccg_nvidia_altmode(struct ucsi_ccg *uc,
+- struct ucsi_altmode *alt)
++ struct ucsi_altmode *alt,
++ u64 command)
+ {
+- switch (UCSI_ALTMODE_OFFSET(uc->last_cmd_sent)) {
++ switch (UCSI_ALTMODE_OFFSET(command)) {
+ case NVIDIA_FTB_DP_OFFSET:
+ if (alt[0].mid == USB_TYPEC_NVIDIA_VLINK_DBG_VDO)
+ alt[0].mid = USB_TYPEC_NVIDIA_VLINK_DP_VDO |
+@@ -578,37 +578,11 @@ static int ucsi_ccg_read_cci(struct ucsi *ucsi, u32 *cci)
+ static int ucsi_ccg_read_message_in(struct ucsi *ucsi, void *val, size_t val_len)
+ {
+ struct ucsi_ccg *uc = ucsi_get_drvdata(ucsi);
+- struct ucsi_capability *cap;
+- struct ucsi_altmode *alt;
+
+ spin_lock(&uc->op_lock);
+ memcpy(val, uc->op_data.message_in, val_len);
+ spin_unlock(&uc->op_lock);
+
+- switch (UCSI_COMMAND(uc->last_cmd_sent)) {
+- case UCSI_GET_CURRENT_CAM:
+- if (uc->has_multiple_dp)
+- ucsi_ccg_update_get_current_cam_cmd(uc, (u8 *)val);
+- break;
+- case UCSI_GET_ALTERNATE_MODES:
+- if (UCSI_ALTMODE_RECIPIENT(uc->last_cmd_sent) ==
+- UCSI_RECIPIENT_SOP) {
+- alt = val;
+- if (alt[0].svid == USB_TYPEC_NVIDIA_VLINK_SID)
+- ucsi_ccg_nvidia_altmode(uc, alt);
+- }
+- break;
+- case UCSI_GET_CAPABILITY:
+- if (uc->fw_build == CCG_FW_BUILD_NVIDIA_TEGRA) {
+- cap = val;
+- cap->features &= ~UCSI_CAP_ALT_MODE_DETAILS;
+- }
+- break;
+- default:
+- break;
+- }
+- uc->last_cmd_sent = 0;
+-
+ return 0;
+ }
+
+@@ -628,7 +602,8 @@ static int ucsi_ccg_async_control(struct ucsi *ucsi, u64 command)
+ return ccg_write(uc, reg, (u8 *)&command, sizeof(command));
+ }
+
+-static int ucsi_ccg_sync_control(struct ucsi *ucsi, u64 command)
++static int ucsi_ccg_sync_control(struct ucsi *ucsi, u64 command, u32 *cci,
++ void *data, size_t size)
+ {
+ struct ucsi_ccg *uc = ucsi_get_drvdata(ucsi);
+ struct ucsi_connector *con;
+@@ -638,11 +613,9 @@ static int ucsi_ccg_sync_control(struct ucsi *ucsi, u64 command)
+ mutex_lock(&uc->lock);
+ pm_runtime_get_sync(uc->dev);
+
+- uc->last_cmd_sent = command;
+-
+- if (UCSI_COMMAND(uc->last_cmd_sent) == UCSI_SET_NEW_CAM &&
++ if (UCSI_COMMAND(command) == UCSI_SET_NEW_CAM &&
+ uc->has_multiple_dp) {
+- con_index = (uc->last_cmd_sent >> 16) &
++ con_index = (command >> 16) &
+ UCSI_CMD_CONNECTOR_MASK;
+ if (con_index == 0) {
+ ret = -EINVAL;
+@@ -652,7 +625,31 @@ static int ucsi_ccg_sync_control(struct ucsi *ucsi, u64 command)
+ ucsi_ccg_update_set_new_cam_cmd(uc, con, &command);
+ }
+
+- ret = ucsi_sync_control_common(ucsi, command);
++ ret = ucsi_sync_control_common(ucsi, command, cci, data, size);
++
++ switch (UCSI_COMMAND(command)) {
++ case UCSI_GET_CURRENT_CAM:
++ if (uc->has_multiple_dp)
++ ucsi_ccg_update_get_current_cam_cmd(uc, (u8 *)data);
++ break;
++ case UCSI_GET_ALTERNATE_MODES:
++ if (UCSI_ALTMODE_RECIPIENT(command) == UCSI_RECIPIENT_SOP) {
++ struct ucsi_altmode *alt = data;
++
++ if (alt[0].svid == USB_TYPEC_NVIDIA_VLINK_SID)
++ ucsi_ccg_nvidia_altmode(uc, alt, command);
++ }
++ break;
++ case UCSI_GET_CAPABILITY:
++ if (uc->fw_build == CCG_FW_BUILD_NVIDIA_TEGRA) {
++ struct ucsi_capability *cap = data;
++
++ cap->features &= ~UCSI_CAP_ALT_MODE_DETAILS;
++ }
++ break;
++ default:
++ break;
++ }
+
+ err_put:
+ pm_runtime_put_sync(uc->dev);
+diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
+index 7aeff435c1d873..35a03306d13454 100644
+--- a/drivers/vhost/scsi.c
++++ b/drivers/vhost/scsi.c
+@@ -630,7 +630,7 @@ vhost_scsi_get_cmd(struct vhost_virtqueue *vq, struct vhost_scsi_tpg *tpg,
+
+ tag = sbitmap_get(&svq->scsi_tags);
+ if (tag < 0) {
+- pr_err("Unable to obtain tag for vhost_scsi_cmd\n");
++ pr_warn_once("Guest sent too many cmds. Returning TASK_SET_FULL.\n");
+ return ERR_PTR(-ENOMEM);
+ }
+
+@@ -930,24 +930,69 @@ static void vhost_scsi_target_queue_cmd(struct vhost_scsi_cmd *cmd)
+ }
+
+ static void
+-vhost_scsi_send_bad_target(struct vhost_scsi *vs,
+- struct vhost_virtqueue *vq,
+- int head, unsigned out)
++vhost_scsi_send_status(struct vhost_scsi *vs, struct vhost_virtqueue *vq,
++ struct vhost_scsi_ctx *vc, u8 status)
+ {
+- struct virtio_scsi_cmd_resp __user *resp;
+ struct virtio_scsi_cmd_resp rsp;
++ struct iov_iter iov_iter;
+ int ret;
+
+ memset(&rsp, 0, sizeof(rsp));
+- rsp.response = VIRTIO_SCSI_S_BAD_TARGET;
+- resp = vq->iov[out].iov_base;
+- ret = __copy_to_user(resp, &rsp, sizeof(rsp));
+- if (!ret)
+- vhost_add_used_and_signal(&vs->dev, vq, head, 0);
++ rsp.status = status;
++
++ iov_iter_init(&iov_iter, ITER_DEST, &vq->iov[vc->out], vc->in,
++ sizeof(rsp));
++
++ ret = copy_to_iter(&rsp, sizeof(rsp), &iov_iter);
++
++ if (likely(ret == sizeof(rsp)))
++ vhost_add_used_and_signal(&vs->dev, vq, vc->head, 0);
+ else
+ pr_err("Faulted on virtio_scsi_cmd_resp\n");
+ }
+
++#define TYPE_IO_CMD 0
++#define TYPE_CTRL_TMF 1
++#define TYPE_CTRL_AN 2
++
++static void
++vhost_scsi_send_bad_target(struct vhost_scsi *vs,
++ struct vhost_virtqueue *vq,
++ struct vhost_scsi_ctx *vc, int type)
++{
++ union {
++ struct virtio_scsi_cmd_resp cmd;
++ struct virtio_scsi_ctrl_tmf_resp tmf;
++ struct virtio_scsi_ctrl_an_resp an;
++ } rsp;
++ struct iov_iter iov_iter;
++ size_t rsp_size;
++ int ret;
++
++ memset(&rsp, 0, sizeof(rsp));
++
++ if (type == TYPE_IO_CMD) {
++ rsp_size = sizeof(struct virtio_scsi_cmd_resp);
++ rsp.cmd.response = VIRTIO_SCSI_S_BAD_TARGET;
++ } else if (type == TYPE_CTRL_TMF) {
++ rsp_size = sizeof(struct virtio_scsi_ctrl_tmf_resp);
++ rsp.tmf.response = VIRTIO_SCSI_S_BAD_TARGET;
++ } else {
++ rsp_size = sizeof(struct virtio_scsi_ctrl_an_resp);
++ rsp.an.response = VIRTIO_SCSI_S_BAD_TARGET;
++ }
++
++ iov_iter_init(&iov_iter, ITER_DEST, &vq->iov[vc->out], vc->in,
++ rsp_size);
++
++ ret = copy_to_iter(&rsp, rsp_size, &iov_iter);
++
++ if (likely(ret == rsp_size))
++ vhost_add_used_and_signal(&vs->dev, vq, vc->head, 0);
++ else
++ pr_err("Faulted on virtio scsi type=%d\n", type);
++}
++
+ static int
+ vhost_scsi_get_desc(struct vhost_scsi *vs, struct vhost_virtqueue *vq,
+ struct vhost_scsi_ctx *vc)
+@@ -1216,8 +1261,8 @@ vhost_scsi_handle_vq(struct vhost_scsi *vs, struct vhost_virtqueue *vq)
+ exp_data_len + prot_bytes,
+ data_direction);
+ if (IS_ERR(cmd)) {
+- vq_err(vq, "vhost_scsi_get_cmd failed %ld\n",
+- PTR_ERR(cmd));
++ ret = PTR_ERR(cmd);
++ vq_err(vq, "vhost_scsi_get_tag failed %dd\n", ret);
+ goto err;
+ }
+ cmd->tvc_vhost = vs;
+@@ -1254,11 +1299,15 @@ vhost_scsi_handle_vq(struct vhost_scsi *vs, struct vhost_virtqueue *vq)
+ * EINVAL: Invalid response buffer, drop the request
+ * EIO: Respond with bad target
+ * EAGAIN: Pending request
++ * ENOMEM: Could not allocate resources for request
+ */
+ if (ret == -ENXIO)
+ break;
+ else if (ret == -EIO)
+- vhost_scsi_send_bad_target(vs, vq, vc.head, vc.out);
++ vhost_scsi_send_bad_target(vs, vq, &vc, TYPE_IO_CMD);
++ else if (ret == -ENOMEM)
++ vhost_scsi_send_status(vs, vq, &vc,
++ SAM_STAT_TASK_SET_FULL);
+ } while (likely(!vhost_exceeds_weight(vq, ++c, 0)));
+ out:
+ mutex_unlock(&vq->mutex);
+@@ -1488,7 +1537,10 @@ vhost_scsi_ctl_handle_vq(struct vhost_scsi *vs, struct vhost_virtqueue *vq)
+ if (ret == -ENXIO)
+ break;
+ else if (ret == -EIO)
+- vhost_scsi_send_bad_target(vs, vq, vc.head, vc.out);
++ vhost_scsi_send_bad_target(vs, vq, &vc,
++ v_req.type == VIRTIO_SCSI_T_TMF ?
++ TYPE_CTRL_TMF :
++ TYPE_CTRL_AN);
+ } while (likely(!vhost_exceeds_weight(vq, ++c, 0)));
+ out:
+ mutex_unlock(&vq->mutex);
+diff --git a/drivers/virtio/virtio_pci_modern.c b/drivers/virtio/virtio_pci_modern.c
+index 5eaade7578606e..d50fe030d82534 100644
+--- a/drivers/virtio/virtio_pci_modern.c
++++ b/drivers/virtio/virtio_pci_modern.c
+@@ -247,7 +247,7 @@ virtio_pci_admin_cmd_dev_parts_objects_enable(struct virtio_device *virtio_dev)
+ sg_init_one(&data_sg, get_data, sizeof(*get_data));
+ sg_init_one(&result_sg, result, sizeof(*result));
+ cmd.opcode = cpu_to_le16(VIRTIO_ADMIN_CMD_DEVICE_CAP_GET);
+- cmd.group_type = cpu_to_le16(VIRTIO_ADMIN_GROUP_TYPE_SRIOV);
++ cmd.group_type = cpu_to_le16(VIRTIO_ADMIN_GROUP_TYPE_SELF);
+ cmd.data_sg = &data_sg;
+ cmd.result_sg = &result_sg;
+ ret = vp_modern_admin_cmd_exec(virtio_dev, &cmd);
+@@ -305,7 +305,7 @@ static void virtio_pci_admin_cmd_cap_init(struct virtio_device *virtio_dev)
+
+ sg_init_one(&result_sg, data, sizeof(*data));
+ cmd.opcode = cpu_to_le16(VIRTIO_ADMIN_CMD_CAP_ID_LIST_QUERY);
+- cmd.group_type = cpu_to_le16(VIRTIO_ADMIN_GROUP_TYPE_SRIOV);
++ cmd.group_type = cpu_to_le16(VIRTIO_ADMIN_GROUP_TYPE_SELF);
+ cmd.result_sg = &result_sg;
+
+ ret = vp_modern_admin_cmd_exec(virtio_dev, &cmd);
+diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig
+index f7d6f47971fdf8..24f485827e0399 100644
+--- a/drivers/xen/Kconfig
++++ b/drivers/xen/Kconfig
+@@ -278,7 +278,7 @@ config XEN_PRIVCMD_EVENTFD
+
+ config XEN_ACPI_PROCESSOR
+ tristate "Xen ACPI processor"
+- depends on XEN && XEN_PV_DOM0 && X86 && ACPI_PROCESSOR && CPU_FREQ
++ depends on XEN && XEN_DOM0 && X86 && ACPI_PROCESSOR && CPU_FREQ
+ default m
+ help
+ This ACPI processor uploads Power Management information to the Xen
+diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
+index 0b568c8d24cbc5..a92997a583bd28 100644
+--- a/fs/btrfs/file.c
++++ b/fs/btrfs/file.c
+@@ -2104,15 +2104,20 @@ static void btrfs_punch_hole_lock_range(struct inode *inode,
+ * will always return true.
+ * So here we need to do extra page alignment for
+ * filemap_range_has_page().
++ *
++ * And do not decrease page_lockend right now, as it can be 0.
+ */
+ const u64 page_lockstart = round_up(lockstart, PAGE_SIZE);
+- const u64 page_lockend = round_down(lockend + 1, PAGE_SIZE) - 1;
++ const u64 page_lockend = round_down(lockend + 1, PAGE_SIZE);
+
+ while (1) {
+ truncate_pagecache_range(inode, lockstart, lockend);
+
+ lock_extent(&BTRFS_I(inode)->io_tree, lockstart, lockend,
+ cached_state);
++ /* The same page or adjacent pages. */
++ if (page_lockend <= page_lockstart)
++ break;
+ /*
+ * We can't have ordered extents in the range, nor dirty/writeback
+ * pages, because we have locked the inode's VFS lock in exclusive
+@@ -2124,7 +2129,7 @@ static void btrfs_punch_hole_lock_range(struct inode *inode,
+ * we do, unlock the range and retry.
+ */
+ if (!filemap_range_has_page(inode->i_mapping, page_lockstart,
+- page_lockend))
++ page_lockend - 1))
+ break;
+
+ unlock_extent(&BTRFS_I(inode)->io_tree, lockstart, lockend,
+diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
+index aaf925897fdda3..978a57da8b4f5b 100644
+--- a/fs/btrfs/zoned.c
++++ b/fs/btrfs/zoned.c
+@@ -1659,7 +1659,6 @@ int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache, bool new)
+ * stripe.
+ */
+ cache->alloc_offset = cache->zone_capacity;
+- ret = 0;
+ }
+
+ out:
+diff --git a/fs/ceph/inode.c b/fs/ceph/inode.c
+index 7dd6c2275085b9..e3ab07797c8508 100644
+--- a/fs/ceph/inode.c
++++ b/fs/ceph/inode.c
+@@ -2362,7 +2362,7 @@ static int fill_fscrypt_truncate(struct inode *inode,
+
+ /* Try to writeback the dirty pagecaches */
+ if (issued & (CEPH_CAP_FILE_BUFFER)) {
+- loff_t lend = orig_pos + CEPH_FSCRYPT_BLOCK_SHIFT - 1;
++ loff_t lend = orig_pos + CEPH_FSCRYPT_BLOCK_SIZE - 1;
+
+ ret = filemap_write_and_wait_range(inode->i_mapping,
+ orig_pos, lend);
+diff --git a/fs/ext4/block_validity.c b/fs/ext4/block_validity.c
+index 87ee3a17bd29c9..e8c5525afc67a2 100644
+--- a/fs/ext4/block_validity.c
++++ b/fs/ext4/block_validity.c
+@@ -351,10 +351,9 @@ int ext4_check_blockref(const char *function, unsigned int line,
+ {
+ __le32 *bref = p;
+ unsigned int blk;
++ journal_t *journal = EXT4_SB(inode->i_sb)->s_journal;
+
+- if (ext4_has_feature_journal(inode->i_sb) &&
+- (inode->i_ino ==
+- le32_to_cpu(EXT4_SB(inode->i_sb)->s_es->s_journal_inum)))
++ if (journal && inode == journal->j_inode)
+ return 0;
+
+ while (bref < p+max) {
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 4108b7d1696fff..74c5e2a381a6b0 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -383,10 +383,11 @@ static int __check_block_validity(struct inode *inode, const char *func,
+ unsigned int line,
+ struct ext4_map_blocks *map)
+ {
+- if (ext4_has_feature_journal(inode->i_sb) &&
+- (inode->i_ino ==
+- le32_to_cpu(EXT4_SB(inode->i_sb)->s_es->s_journal_inum)))
++ journal_t *journal = EXT4_SB(inode->i_sb)->s_journal;
++
++ if (journal && inode == journal->j_inode)
+ return 0;
++
+ if (!ext4_inode_block_valid(inode, map->m_pblk, map->m_len)) {
+ ext4_error_inode(inode, func, line, map->m_pblk,
+ "lblock %lu mapped to illegal pblock %llu "
+diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
+index d303e6c8900cd1..a47e3afd724caf 100644
+--- a/fs/iomap/buffered-io.c
++++ b/fs/iomap/buffered-io.c
+@@ -263,7 +263,7 @@ static void iomap_adjust_read_range(struct inode *inode, struct folio *folio,
+ }
+
+ /* truncate len if we find any trailing uptodate block(s) */
+- for ( ; i <= last; i++) {
++ while (++i <= last) {
+ if (ifs_block_is_uptodate(ifs, i)) {
+ plen -= (last - i + 1) * block_size;
+ last = i - 1;
+diff --git a/fs/namespace.c b/fs/namespace.c
+index d401486fe95d17..280a6ebc46d930 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -2640,56 +2640,62 @@ static struct mountpoint *do_lock_mount(struct path *path, bool beneath)
+ struct vfsmount *mnt = path->mnt;
+ struct dentry *dentry;
+ struct mountpoint *mp = ERR_PTR(-ENOENT);
++ struct path under = {};
+
+ for (;;) {
+- struct mount *m;
++ struct mount *m = real_mount(mnt);
+
+ if (beneath) {
+- m = real_mount(mnt);
++ path_put(&under);
+ read_seqlock_excl(&mount_lock);
+- dentry = dget(m->mnt_mountpoint);
++ under.mnt = mntget(&m->mnt_parent->mnt);
++ under.dentry = dget(m->mnt_mountpoint);
+ read_sequnlock_excl(&mount_lock);
++ dentry = under.dentry;
+ } else {
+ dentry = path->dentry;
+ }
+
+ inode_lock(dentry->d_inode);
+- if (unlikely(cant_mount(dentry))) {
+- inode_unlock(dentry->d_inode);
+- goto out;
+- }
+-
+ namespace_lock();
+
+- if (beneath && (!is_mounted(mnt) || m->mnt_mountpoint != dentry)) {
++ if (unlikely(cant_mount(dentry) || !is_mounted(mnt)))
++ break; // not to be mounted on
++
++ if (beneath && unlikely(m->mnt_mountpoint != dentry ||
++ &m->mnt_parent->mnt != under.mnt)) {
+ namespace_unlock();
+ inode_unlock(dentry->d_inode);
+- goto out;
++ continue; // got moved
+ }
+
+ mnt = lookup_mnt(path);
+- if (likely(!mnt))
++ if (unlikely(mnt)) {
++ namespace_unlock();
++ inode_unlock(dentry->d_inode);
++ path_put(path);
++ path->mnt = mnt;
++ path->dentry = dget(mnt->mnt_root);
++ continue; // got overmounted
++ }
++ mp = get_mountpoint(dentry);
++ if (IS_ERR(mp))
+ break;
+-
+- namespace_unlock();
+- inode_unlock(dentry->d_inode);
+- if (beneath)
+- dput(dentry);
+- path_put(path);
+- path->mnt = mnt;
+- path->dentry = dget(mnt->mnt_root);
+- }
+-
+- mp = get_mountpoint(dentry);
+- if (IS_ERR(mp)) {
+- namespace_unlock();
+- inode_unlock(dentry->d_inode);
++ if (beneath) {
++ /*
++ * @under duplicates the references that will stay
++ * at least until namespace_unlock(), so the path_put()
++ * below is safe (and OK to do under namespace_lock -
++ * we are not dropping the final references here).
++ */
++ path_put(&under);
++ }
++ return mp;
+ }
+-
+-out:
++ namespace_unlock();
++ inode_unlock(dentry->d_inode);
+ if (beneath)
+- dput(dentry);
+-
++ path_put(&under);
+ return mp;
+ }
+
+@@ -2700,14 +2706,11 @@ static inline struct mountpoint *lock_mount(struct path *path)
+
+ static void unlock_mount(struct mountpoint *where)
+ {
+- struct dentry *dentry = where->m_dentry;
+-
++ inode_unlock(where->m_dentry->d_inode);
+ read_seqlock_excl(&mount_lock);
+ put_mountpoint(where);
+ read_sequnlock_excl(&mount_lock);
+-
+ namespace_unlock();
+- inode_unlock(dentry->d_inode);
+ }
+
+ static int graft_tree(struct mount *mnt, struct mount *p, struct mountpoint *mp)
+diff --git a/fs/netfs/main.c b/fs/netfs/main.c
+index 4e3e6204083140..70ecc8f5f21034 100644
+--- a/fs/netfs/main.c
++++ b/fs/netfs/main.c
+@@ -127,11 +127,13 @@ static int __init netfs_init(void)
+ if (mempool_init_slab_pool(&netfs_subrequest_pool, 100, netfs_subrequest_slab) < 0)
+ goto error_subreqpool;
+
++#ifdef CONFIG_PROC_FS
+ if (!proc_mkdir("fs/netfs", NULL))
+ goto error_proc;
+ if (!proc_create_seq("fs/netfs/requests", S_IFREG | 0444, NULL,
+ &netfs_requests_seq_ops))
+ goto error_procfile;
++#endif
+ #ifdef CONFIG_FSCACHE_STATS
+ if (!proc_create_single("fs/netfs/stats", S_IFREG | 0444, NULL,
+ netfs_stats_show))
+@@ -144,9 +146,11 @@ static int __init netfs_init(void)
+ return 0;
+
+ error_fscache:
++#ifdef CONFIG_PROC_FS
+ error_procfile:
+ remove_proc_subtree("fs/netfs", NULL);
+ error_proc:
++#endif
+ mempool_exit(&netfs_subrequest_pool);
+ error_subreqpool:
+ kmem_cache_destroy(netfs_subrequest_slab);
+diff --git a/fs/ntfs3/file.c b/fs/ntfs3/file.c
+index e9f701f884e72c..9b6a3f8d2e7c5c 100644
+--- a/fs/ntfs3/file.c
++++ b/fs/ntfs3/file.c
+@@ -430,6 +430,7 @@ static int ntfs_extend(struct inode *inode, loff_t pos, size_t count,
+ }
+
+ if (extend_init && !is_compressed(ni)) {
++ WARN_ON(ni->i_valid >= pos);
+ err = ntfs_extend_initialized_size(file, ni, ni->i_valid, pos);
+ if (err)
+ goto out;
+@@ -1246,21 +1247,22 @@ static ssize_t ntfs_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
+ ssize_t ret;
+ int err;
+
+- err = check_write_restriction(inode);
+- if (err)
+- return err;
+-
+- if (is_compressed(ni) && (iocb->ki_flags & IOCB_DIRECT)) {
+- ntfs_inode_warn(inode, "direct i/o + compressed not supported");
+- return -EOPNOTSUPP;
+- }
+-
+ if (!inode_trylock(inode)) {
+ if (iocb->ki_flags & IOCB_NOWAIT)
+ return -EAGAIN;
+ inode_lock(inode);
+ }
+
++ ret = check_write_restriction(inode);
++ if (ret)
++ goto out;
++
++ if (is_compressed(ni) && (iocb->ki_flags & IOCB_DIRECT)) {
++ ntfs_inode_warn(inode, "direct i/o + compressed not supported");
++ ret = -EOPNOTSUPP;
++ goto out;
++ }
++
+ ret = generic_write_checks(iocb, from);
+ if (ret <= 0)
+ goto out;
+diff --git a/fs/smb/client/sess.c b/fs/smb/client/sess.c
+index eb70ebf38464bc..9d6b1a4704773a 100644
+--- a/fs/smb/client/sess.c
++++ b/fs/smb/client/sess.c
+@@ -679,6 +679,22 @@ unicode_oslm_strings(char **pbcc_area, const struct nls_table *nls_cp)
+ *pbcc_area = bcc_ptr;
+ }
+
++static void
++ascii_oslm_strings(char **pbcc_area, const struct nls_table *nls_cp)
++{
++ char *bcc_ptr = *pbcc_area;
++
++ strcpy(bcc_ptr, "Linux version ");
++ bcc_ptr += strlen("Linux version ");
++ strcpy(bcc_ptr, init_utsname()->release);
++ bcc_ptr += strlen(init_utsname()->release) + 1;
++
++ strcpy(bcc_ptr, CIFS_NETWORK_OPSYS);
++ bcc_ptr += strlen(CIFS_NETWORK_OPSYS) + 1;
++
++ *pbcc_area = bcc_ptr;
++}
++
+ static void unicode_domain_string(char **pbcc_area, struct cifs_ses *ses,
+ const struct nls_table *nls_cp)
+ {
+@@ -703,6 +719,25 @@ static void unicode_domain_string(char **pbcc_area, struct cifs_ses *ses,
+ *pbcc_area = bcc_ptr;
+ }
+
++static void ascii_domain_string(char **pbcc_area, struct cifs_ses *ses,
++ const struct nls_table *nls_cp)
++{
++ char *bcc_ptr = *pbcc_area;
++ int len;
++
++ /* copy domain */
++ if (ses->domainName != NULL) {
++ len = strscpy(bcc_ptr, ses->domainName, CIFS_MAX_DOMAINNAME_LEN);
++ if (WARN_ON_ONCE(len < 0))
++ len = CIFS_MAX_DOMAINNAME_LEN - 1;
++ bcc_ptr += len;
++ } /* else we send a null domain name so server will default to its own domain */
++ *bcc_ptr = 0;
++ bcc_ptr++;
++
++ *pbcc_area = bcc_ptr;
++}
++
+ static void unicode_ssetup_strings(char **pbcc_area, struct cifs_ses *ses,
+ const struct nls_table *nls_cp)
+ {
+@@ -748,25 +783,10 @@ static void ascii_ssetup_strings(char **pbcc_area, struct cifs_ses *ses,
+ *bcc_ptr = 0;
+ bcc_ptr++; /* account for null termination */
+
+- /* copy domain */
+- if (ses->domainName != NULL) {
+- len = strscpy(bcc_ptr, ses->domainName, CIFS_MAX_DOMAINNAME_LEN);
+- if (WARN_ON_ONCE(len < 0))
+- len = CIFS_MAX_DOMAINNAME_LEN - 1;
+- bcc_ptr += len;
+- } /* else we send a null domain name so server will default to its own domain */
+- *bcc_ptr = 0;
+- bcc_ptr++;
+-
+ /* BB check for overflow here */
+
+- strcpy(bcc_ptr, "Linux version ");
+- bcc_ptr += strlen("Linux version ");
+- strcpy(bcc_ptr, init_utsname()->release);
+- bcc_ptr += strlen(init_utsname()->release) + 1;
+-
+- strcpy(bcc_ptr, CIFS_NETWORK_OPSYS);
+- bcc_ptr += strlen(CIFS_NETWORK_OPSYS) + 1;
++ ascii_domain_string(&bcc_ptr, ses, nls_cp);
++ ascii_oslm_strings(&bcc_ptr, nls_cp);
+
+ *pbcc_area = bcc_ptr;
+ }
+@@ -1569,7 +1589,7 @@ sess_auth_kerberos(struct sess_data *sess_data)
+ sess_data->iov[1].iov_len = msg->secblob_len;
+ pSMB->req.SecurityBlobLength = cpu_to_le16(sess_data->iov[1].iov_len);
+
+- if (ses->capabilities & CAP_UNICODE) {
++ if (pSMB->req.hdr.Flags2 & SMBFLG2_UNICODE) {
+ /* unicode strings must be word aligned */
+ if (!IS_ALIGNED(sess_data->iov[0].iov_len + sess_data->iov[1].iov_len, 2)) {
+ *bcc_ptr = 0;
+@@ -1578,8 +1598,8 @@ sess_auth_kerberos(struct sess_data *sess_data)
+ unicode_oslm_strings(&bcc_ptr, sess_data->nls_cp);
+ unicode_domain_string(&bcc_ptr, ses, sess_data->nls_cp);
+ } else {
+- /* BB: is this right? */
+- ascii_ssetup_strings(&bcc_ptr, ses, sess_data->nls_cp);
++ ascii_oslm_strings(&bcc_ptr, sess_data->nls_cp);
++ ascii_domain_string(&bcc_ptr, ses, sess_data->nls_cp);
+ }
+
+ sess_data->iov[2].iov_len = (long) bcc_ptr -
+diff --git a/fs/smb/client/smb1ops.c b/fs/smb/client/smb1ops.c
+index d6e2fb669c401f..808970e4a7142f 100644
+--- a/fs/smb/client/smb1ops.c
++++ b/fs/smb/client/smb1ops.c
+@@ -573,6 +573,42 @@ static int cifs_query_path_info(const unsigned int xid,
+ data->reparse_point = le32_to_cpu(fi.Attributes) & ATTR_REPARSE;
+ }
+
++#ifdef CONFIG_CIFS_XATTR
++ /*
++ * For WSL CHR and BLK reparse points it is required to fetch
++ * EA $LXDEV which contains major and minor device numbers.
++ */
++ if (!rc && data->reparse_point) {
++ struct smb2_file_full_ea_info *ea;
++
++ ea = (struct smb2_file_full_ea_info *)data->wsl.eas;
++ rc = CIFSSMBQAllEAs(xid, tcon, full_path, SMB2_WSL_XATTR_DEV,
++ &ea->ea_data[SMB2_WSL_XATTR_NAME_LEN + 1],
++ SMB2_WSL_XATTR_DEV_SIZE, cifs_sb);
++ if (rc == SMB2_WSL_XATTR_DEV_SIZE) {
++ ea->next_entry_offset = cpu_to_le32(0);
++ ea->flags = 0;
++ ea->ea_name_length = SMB2_WSL_XATTR_NAME_LEN;
++ ea->ea_value_length = cpu_to_le16(SMB2_WSL_XATTR_DEV_SIZE);
++ memcpy(&ea->ea_data[0], SMB2_WSL_XATTR_DEV, SMB2_WSL_XATTR_NAME_LEN + 1);
++ data->wsl.eas_len = sizeof(*ea) + SMB2_WSL_XATTR_NAME_LEN + 1 +
++ SMB2_WSL_XATTR_DEV_SIZE;
++ rc = 0;
++ } else if (rc >= 0) {
++ /* It is an error if EA $LXDEV has wrong size. */
++ rc = -EINVAL;
++ } else {
++ /*
++ * In all other cases ignore error if fetching
++ * of EA $LXDEV failed. It is needed only for
++ * WSL CHR and BLK reparse points and wsl_to_fattr()
++ * handle the case when EA is missing.
++ */
++ rc = 0;
++ }
++ }
++#endif
++
+ return rc;
+ }
+
+diff --git a/fs/smb/server/vfs_cache.c b/fs/smb/server/vfs_cache.c
+index 8d1f30dcba7e8e..1f8fa3468173ab 100644
+--- a/fs/smb/server/vfs_cache.c
++++ b/fs/smb/server/vfs_cache.c
+@@ -713,12 +713,8 @@ static bool tree_conn_fd_check(struct ksmbd_tree_connect *tcon,
+
+ static bool ksmbd_durable_scavenger_alive(void)
+ {
+- mutex_lock(&durable_scavenger_lock);
+- if (!durable_scavenger_running) {
+- mutex_unlock(&durable_scavenger_lock);
++ if (!durable_scavenger_running)
+ return false;
+- }
+- mutex_unlock(&durable_scavenger_lock);
+
+ if (kthread_should_stop())
+ return false;
+@@ -799,9 +795,7 @@ static int ksmbd_durable_scavenger(void *dummy)
+ break;
+ }
+
+- mutex_lock(&durable_scavenger_lock);
+ durable_scavenger_running = false;
+- mutex_unlock(&durable_scavenger_lock);
+
+ module_put(THIS_MODULE);
+
+diff --git a/fs/splice.c b/fs/splice.c
+index 23fa5561b94419..bd6e889133f5ce 100644
+--- a/fs/splice.c
++++ b/fs/splice.c
+@@ -45,7 +45,7 @@
+ * here if set to avoid blocking other users of this pipe if splice is
+ * being done on it.
+ */
+-static noinline void noinline pipe_clear_nowait(struct file *file)
++static noinline void pipe_clear_nowait(struct file *file)
+ {
+ fmode_t fmode = READ_ONCE(file->f_mode);
+
+diff --git a/fs/xattr.c b/fs/xattr.c
+index 02bee149ad9674..fabb2a04501ee7 100644
+--- a/fs/xattr.c
++++ b/fs/xattr.c
+@@ -703,7 +703,7 @@ static int path_setxattrat(int dfd, const char __user *pathname,
+ return error;
+
+ filename = getname_maybe_null(pathname, at_flags);
+- if (!filename) {
++ if (!filename && dfd >= 0) {
+ CLASS(fd, f)(dfd);
+ if (fd_empty(f))
+ error = -EBADF;
+@@ -847,7 +847,7 @@ static ssize_t path_getxattrat(int dfd, const char __user *pathname,
+ return error;
+
+ filename = getname_maybe_null(pathname, at_flags);
+- if (!filename) {
++ if (!filename && dfd >= 0) {
+ CLASS(fd, f)(dfd);
+ if (fd_empty(f))
+ return -EBADF;
+diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
+index d37751789bf58c..6aa67e9b2ec081 100644
+--- a/include/linux/blkdev.h
++++ b/include/linux/blkdev.h
+@@ -1649,10 +1649,6 @@ int bd_prepare_to_claim(struct block_device *bdev, void *holder,
+ const struct blk_holder_ops *hops);
+ void bd_abort_claiming(struct block_device *bdev, void *holder);
+
+-/* just for blk-cgroup, don't use elsewhere */
+-struct block_device *blkdev_get_no_open(dev_t dev);
+-void blkdev_put_no_open(struct block_device *bdev);
+-
+ struct block_device *I_BDEV(struct inode *inode);
+ struct block_device *file_bdev(struct file *bdev_file);
+ bool disk_live(struct gendisk *disk);
+diff --git a/include/linux/energy_model.h b/include/linux/energy_model.h
+index 78318d49276dc0..b3b8389e9480da 100644
+--- a/include/linux/energy_model.h
++++ b/include/linux/energy_model.h
+@@ -167,13 +167,13 @@ struct em_data_callback {
+ struct em_perf_domain *em_cpu_get(int cpu);
+ struct em_perf_domain *em_pd_get(struct device *dev);
+ int em_dev_update_perf_domain(struct device *dev,
+- struct em_perf_table __rcu *new_table);
++ struct em_perf_table *new_table);
+ int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states,
+ struct em_data_callback *cb, cpumask_t *span,
+ bool microwatts);
+ void em_dev_unregister_perf_domain(struct device *dev);
+-struct em_perf_table __rcu *em_table_alloc(struct em_perf_domain *pd);
+-void em_table_free(struct em_perf_table __rcu *table);
++struct em_perf_table *em_table_alloc(struct em_perf_domain *pd);
++void em_table_free(struct em_perf_table *table);
+ int em_dev_compute_costs(struct device *dev, struct em_perf_state *table,
+ int nr_states);
+ int em_dev_update_chip_binning(struct device *dev);
+@@ -373,14 +373,14 @@ static inline int em_pd_nr_perf_states(struct em_perf_domain *pd)
+ return 0;
+ }
+ static inline
+-struct em_perf_table __rcu *em_table_alloc(struct em_perf_domain *pd)
++struct em_perf_table *em_table_alloc(struct em_perf_domain *pd)
+ {
+ return NULL;
+ }
+-static inline void em_table_free(struct em_perf_table __rcu *table) {}
++static inline void em_table_free(struct em_perf_table *table) {}
+ static inline
+ int em_dev_update_perf_domain(struct device *dev,
+- struct em_perf_table __rcu *new_table)
++ struct em_perf_table *new_table)
+ {
+ return -EINVAL;
+ }
+diff --git a/include/linux/math.h b/include/linux/math.h
+index f5f18dc3616b01..0198c92cbe3ef5 100644
+--- a/include/linux/math.h
++++ b/include/linux/math.h
+@@ -34,6 +34,18 @@
+ */
+ #define round_down(x, y) ((x) & ~__round_mask(x, y))
+
++/**
++ * DIV_ROUND_UP_POW2 - divide and round up
++ * @n: numerator
++ * @d: denominator (must be a power of 2)
++ *
++ * Divides @n by @d and rounds up to next multiple of @d (which must be a power
++ * of 2). Avoids integer overflows that may occur with __KERNEL_DIV_ROUND_UP().
++ * Performance is roughly equivalent to __KERNEL_DIV_ROUND_UP().
++ */
++#define DIV_ROUND_UP_POW2(n, d) \
++ ((n) / (d) + !!((n) & ((d) - 1)))
++
+ #define DIV_ROUND_UP __KERNEL_DIV_ROUND_UP
+
+ #define DIV_ROUND_DOWN_ULL(ll, d) \
+diff --git a/include/linux/msi.h b/include/linux/msi.h
+index b10093c4d00ea5..59a421fc42bf07 100644
+--- a/include/linux/msi.h
++++ b/include/linux/msi.h
+@@ -73,7 +73,6 @@ struct msi_msg {
+ };
+ };
+
+-extern int pci_msi_ignore_mask;
+ /* Helper functions */
+ struct msi_desc;
+ struct pci_dev;
+@@ -556,6 +555,8 @@ enum {
+ MSI_FLAG_PCI_MSIX_ALLOC_DYN = (1 << 20),
+ /* PCI MSIs cannot be steered separately to CPU cores */
+ MSI_FLAG_NO_AFFINITY = (1 << 21),
++ /* Inhibit usage of entry masking */
++ MSI_FLAG_NO_MASK = (1 << 22),
+ };
+
+ /**
+diff --git a/include/linux/pci.h b/include/linux/pci.h
+index 47b31ad724fa5b..8e028620642f38 100644
+--- a/include/linux/pci.h
++++ b/include/linux/pci.h
+@@ -245,6 +245,8 @@ enum pci_dev_flags {
+ PCI_DEV_FLAGS_NO_RELAXED_ORDERING = (__force pci_dev_flags_t) (1 << 11),
+ /* Device does honor MSI masking despite saying otherwise */
+ PCI_DEV_FLAGS_HAS_MSI_MASKING = (__force pci_dev_flags_t) (1 << 12),
++ /* Device requires write to PCI_MSIX_ENTRY_DATA before any MSIX reads */
++ PCI_DEV_FLAGS_MSIX_TOUCH_ENTRY_DATA_FIRST = (__force pci_dev_flags_t) (1 << 13),
+ };
+
+ enum pci_irq_reroute_variant {
+diff --git a/include/linux/phy.h b/include/linux/phy.h
+index 19f076a71f9462..7c9da26145d30e 100644
+--- a/include/linux/phy.h
++++ b/include/linux/phy.h
+@@ -2114,6 +2114,10 @@ void phy_get_pause(struct phy_device *phydev, bool *tx_pause, bool *rx_pause);
+ s32 phy_get_internal_delay(struct phy_device *phydev, struct device *dev,
+ const int *delay_values, int size, bool is_rx);
+
++int phy_get_tx_amplitude_gain(struct phy_device *phydev, struct device *dev,
++ enum ethtool_link_mode_bit_indices linkmode,
++ u32 *val);
++
+ void phy_resolve_pause(unsigned long *local_adv, unsigned long *partner_adv,
+ bool *tx_pause, bool *rx_pause);
+
+diff --git a/include/linux/phylink.h b/include/linux/phylink.h
+index 898b00451bbfd2..5069cf155cf405 100644
+--- a/include/linux/phylink.h
++++ b/include/linux/phylink.h
+@@ -678,7 +678,11 @@ int phylink_pcs_pre_init(struct phylink *pl, struct phylink_pcs *pcs);
+ void phylink_start(struct phylink *);
+ void phylink_stop(struct phylink *);
+
++void phylink_rx_clk_stop_block(struct phylink *);
++void phylink_rx_clk_stop_unblock(struct phylink *);
++
+ void phylink_suspend(struct phylink *pl, bool mac_wol);
++void phylink_prepare_resume(struct phylink *pl);
+ void phylink_resume(struct phylink *pl);
+
+ void phylink_ethtool_get_wol(struct phylink *, struct ethtool_wolinfo *);
+diff --git a/include/net/netfilter/nft_fib.h b/include/net/netfilter/nft_fib.h
+index 38cae7113de462..6e202ed5e63f3c 100644
+--- a/include/net/netfilter/nft_fib.h
++++ b/include/net/netfilter/nft_fib.h
+@@ -18,6 +18,27 @@ nft_fib_is_loopback(const struct sk_buff *skb, const struct net_device *in)
+ return skb->pkt_type == PACKET_LOOPBACK || in->flags & IFF_LOOPBACK;
+ }
+
++static inline bool nft_fib_can_skip(const struct nft_pktinfo *pkt)
++{
++ const struct net_device *indev = nft_in(pkt);
++ const struct sock *sk;
++
++ switch (nft_hook(pkt)) {
++ case NF_INET_PRE_ROUTING:
++ case NF_INET_INGRESS:
++ case NF_INET_LOCAL_IN:
++ break;
++ default:
++ return false;
++ }
++
++ sk = pkt->skb->sk;
++ if (sk && sk_fullsock(sk))
++ return sk->sk_rx_dst_ifindex == indev->ifindex;
++
++ return nft_fib_is_loopback(pkt->skb, indev);
++}
++
+ int nft_fib_dump(struct sk_buff *skb, const struct nft_expr *expr, bool reset);
+ int nft_fib_init(const struct nft_ctx *ctx, const struct nft_expr *expr,
+ const struct nlattr * const tb[]);
+diff --git a/include/soc/qcom/ice.h b/include/soc/qcom/ice.h
+index 5870a94599a258..d5f6a228df6594 100644
+--- a/include/soc/qcom/ice.h
++++ b/include/soc/qcom/ice.h
+@@ -34,4 +34,6 @@ int qcom_ice_program_key(struct qcom_ice *ice,
+ int slot);
+ int qcom_ice_evict_key(struct qcom_ice *ice, int slot);
+ struct qcom_ice *of_qcom_ice_get(struct device *dev);
++struct qcom_ice *devm_of_qcom_ice_get(struct device *dev);
++
+ #endif /* __QCOM_ICE_H__ */
+diff --git a/include/uapi/linux/virtio_pci.h b/include/uapi/linux/virtio_pci.h
+index 8549d457125714..c691ac210ce2ef 100644
+--- a/include/uapi/linux/virtio_pci.h
++++ b/include/uapi/linux/virtio_pci.h
+@@ -246,6 +246,7 @@ struct virtio_pci_cfg_cap {
+ #define VIRTIO_ADMIN_CMD_LIST_USE 0x1
+
+ /* Admin command group type. */
++#define VIRTIO_ADMIN_GROUP_TYPE_SELF 0x0
+ #define VIRTIO_ADMIN_GROUP_TYPE_SRIOV 0x1
+
+ /* Transitional device admin command. */
+diff --git a/init/Kconfig b/init/Kconfig
+index 5ab47c346ef93a..dc7b10a1fad2b7 100644
+--- a/init/Kconfig
++++ b/init/Kconfig
+@@ -711,7 +711,7 @@ endmenu # "CPU/Task time and stats accounting"
+
+ config CPU_ISOLATION
+ bool "CPU isolation"
+- depends on SMP || COMPILE_TEST
++ depends on SMP
+ default y
+ help
+ Make sure that CPUs running critical tasks are not disturbed by
+diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
+index 7370f763346f45..24b9e9a5105d46 100644
+--- a/io_uring/io_uring.c
++++ b/io_uring/io_uring.c
+@@ -1080,21 +1080,22 @@ static __cold void __io_fallback_tw(struct llist_node *node, bool sync)
+ while (node) {
+ req = container_of(node, struct io_kiocb, io_task_work.node);
+ node = node->next;
+- if (sync && last_ctx != req->ctx) {
++ if (last_ctx != req->ctx) {
+ if (last_ctx) {
+- flush_delayed_work(&last_ctx->fallback_work);
++ if (sync)
++ flush_delayed_work(&last_ctx->fallback_work);
+ percpu_ref_put(&last_ctx->refs);
+ }
+ last_ctx = req->ctx;
+ percpu_ref_get(&last_ctx->refs);
+ }
+- if (llist_add(&req->io_task_work.node,
+- &req->ctx->fallback_llist))
+- schedule_delayed_work(&req->ctx->fallback_work, 1);
++ if (llist_add(&req->io_task_work.node, &last_ctx->fallback_llist))
++ schedule_delayed_work(&last_ctx->fallback_work, 1);
+ }
+
+ if (last_ctx) {
+- flush_delayed_work(&last_ctx->fallback_work);
++ if (sync)
++ flush_delayed_work(&last_ctx->fallback_work);
+ percpu_ref_put(&last_ctx->refs);
+ }
+ }
+@@ -1774,7 +1775,7 @@ struct io_wq_work *io_wq_free_work(struct io_wq_work *work)
+ struct io_kiocb *req = container_of(work, struct io_kiocb, work);
+ struct io_kiocb *nxt = NULL;
+
+- if (req_ref_put_and_test(req)) {
++ if (req_ref_put_and_test_atomic(req)) {
+ if (req->flags & IO_REQ_LINK_FLAGS)
+ nxt = io_req_find_next(req);
+ io_free_req(req);
+diff --git a/io_uring/refs.h b/io_uring/refs.h
+index 63982ead9f7dab..0d928d87c4ed13 100644
+--- a/io_uring/refs.h
++++ b/io_uring/refs.h
+@@ -17,6 +17,13 @@ static inline bool req_ref_inc_not_zero(struct io_kiocb *req)
+ return atomic_inc_not_zero(&req->refs);
+ }
+
++static inline bool req_ref_put_and_test_atomic(struct io_kiocb *req)
++{
++ WARN_ON_ONCE(!(data_race(req->flags) & REQ_F_REFCOUNT));
++ WARN_ON_ONCE(req_ref_zero_or_close_to_overflow(req));
++ return atomic_dec_and_test(&req->refs);
++}
++
+ static inline bool req_ref_put_and_test(struct io_kiocb *req)
+ {
+ if (likely(!(req->flags & REQ_F_REFCOUNT)))
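The new req_ref_put_and_test_atomic() helper above always performs the atomic decrement, unlike req_ref_put_and_test() just below it, which first tests REQ_F_REFCOUNT with a plain read. The drop-and-report-last pattern itself, as a small C11 sketch (the obj type and names are invented for illustration):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct obj {
	atomic_int refs;
};

static bool obj_put_and_test(struct obj *o)
{
	/* atomic_fetch_sub() returns the value before the decrement */
	return atomic_fetch_sub(&o->refs, 1) == 1;
}

int main(void)
{
	struct obj o = { .refs = 2 };

	printf("%d\n", obj_put_and_test(&o));	/* 0, a reference remains */
	printf("%d\n", obj_put_and_test(&o));	/* 1, that was the last one */
	return 0;
}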
+diff --git a/kernel/bpf/bpf_cgrp_storage.c b/kernel/bpf/bpf_cgrp_storage.c
+index 54ff2a85d4c022..148da8f7ff3685 100644
+--- a/kernel/bpf/bpf_cgrp_storage.c
++++ b/kernel/bpf/bpf_cgrp_storage.c
+@@ -161,6 +161,7 @@ BPF_CALL_5(bpf_cgrp_storage_get, struct bpf_map *, map, struct cgroup *, cgroup,
+ void *, value, u64, flags, gfp_t, gfp_flags)
+ {
+ struct bpf_local_storage_data *sdata;
++ bool nobusy;
+
+ WARN_ON_ONCE(!bpf_rcu_lock_held());
+ if (flags & ~(BPF_LOCAL_STORAGE_GET_F_CREATE))
+@@ -169,21 +170,21 @@ BPF_CALL_5(bpf_cgrp_storage_get, struct bpf_map *, map, struct cgroup *, cgroup,
+ if (!cgroup)
+ return (unsigned long)NULL;
+
+- if (!bpf_cgrp_storage_trylock())
+- return (unsigned long)NULL;
++ nobusy = bpf_cgrp_storage_trylock();
+
+- sdata = cgroup_storage_lookup(cgroup, map, true);
++ sdata = cgroup_storage_lookup(cgroup, map, nobusy);
+ if (sdata)
+ goto unlock;
+
+ /* only allocate new storage, when the cgroup is refcounted */
+ if (!percpu_ref_is_dying(&cgroup->self.refcnt) &&
+- (flags & BPF_LOCAL_STORAGE_GET_F_CREATE))
++ (flags & BPF_LOCAL_STORAGE_GET_F_CREATE) && nobusy)
+ sdata = bpf_local_storage_update(cgroup, (struct bpf_local_storage_map *)map,
+ value, BPF_NOEXIST, false, gfp_flags);
+
+ unlock:
+- bpf_cgrp_storage_unlock();
++ if (nobusy)
++ bpf_cgrp_storage_unlock();
+ return IS_ERR_OR_NULL(sdata) ? (unsigned long)NULL : (unsigned long)sdata->data;
+ }
+
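The bpf_cgrp_storage_get() change above turns lock contention from a hard failure into a degraded path: the lookup still runs, only the optional update is skipped, and the unlock happens only if the trylock succeeded. A single-threaded userspace sketch of that control flow (pthread_mutex_trylock stands in for bpf_cgrp_storage_trylock(); the cache names are invented):

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t cache_lock = PTHREAD_MUTEX_INITIALIZER;
static int cached_value = 42;

static int cache_get(bool create, int new_value)
{
	bool nobusy = pthread_mutex_trylock(&cache_lock) == 0;
	int val = cached_value;			/* the lookup happens either way */

	if (create && nobusy)			/* the update only when uncontended */
		cached_value = val = new_value;

	if (nobusy)				/* unlock only what we actually locked */
		pthread_mutex_unlock(&cache_lock);
	return val;
}

int main(void)
{
	printf("%d\n", cache_get(false, 0));	/* 42 */
	printf("%d\n", cache_get(true, 7));	/* 7 */
	return 0;
}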
+diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
+index 4a9eeb7aef8556..c308300fc72f67 100644
+--- a/kernel/bpf/hashtab.c
++++ b/kernel/bpf/hashtab.c
+@@ -198,12 +198,12 @@ static bool htab_is_percpu(const struct bpf_htab *htab)
+ static inline void htab_elem_set_ptr(struct htab_elem *l, u32 key_size,
+ void __percpu *pptr)
+ {
+- *(void __percpu **)(l->key + key_size) = pptr;
++ *(void __percpu **)(l->key + roundup(key_size, 8)) = pptr;
+ }
+
+ static inline void __percpu *htab_elem_get_ptr(struct htab_elem *l, u32 key_size)
+ {
+- return *(void __percpu **)(l->key + key_size);
++ return *(void __percpu **)(l->key + roundup(key_size, 8));
+ }
+
+ static void *fd_htab_map_get_ptr(const struct bpf_map *map, struct htab_elem *l)
+@@ -2354,7 +2354,7 @@ static int htab_percpu_map_gen_lookup(struct bpf_map *map, struct bpf_insn *insn
+ *insn++ = BPF_EMIT_CALL(__htab_map_lookup_elem);
+ *insn++ = BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3);
+ *insn++ = BPF_ALU64_IMM(BPF_ADD, BPF_REG_0,
+- offsetof(struct htab_elem, key) + map->key_size);
++ offsetof(struct htab_elem, key) + roundup(map->key_size, 8));
+ *insn++ = BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0);
+ *insn++ = BPF_MOV64_PERCPU_REG(BPF_REG_0, BPF_REG_0);
+
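Both the getter/setter and the generated lookup above now place the per-CPU pointer at roundup(key_size, 8) past the key, so the two agree on the element layout and the pointer slot stays 8-byte aligned. The offset arithmetic on its own, in a standalone sketch with an assumed 5-byte key:

#include <stdio.h>
#include <stddef.h>

#define ROUNDUP(x, y)	((((x) + (y) - 1) / (y)) * (y))	/* same result as the kernel's roundup() */

int main(void)
{
	size_t key_size = 5;	/* hypothetical key size that is not a multiple of 8 */

	printf("raw offset:     %zu\n", key_size);		/* 5, misaligned for a pointer */
	printf("rounded offset: %zu\n", ROUNDUP(key_size, 8));	/* 8 */
	return 0;
}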
+diff --git a/kernel/bpf/preload/bpf_preload_kern.c b/kernel/bpf/preload/bpf_preload_kern.c
+index 0c63bc2cd895a2..56a81df7a9d7c1 100644
+--- a/kernel/bpf/preload/bpf_preload_kern.c
++++ b/kernel/bpf/preload/bpf_preload_kern.c
+@@ -89,4 +89,5 @@ static void __exit fini(void)
+ }
+ late_initcall(load);
+ module_exit(fini);
++MODULE_IMPORT_NS("BPF_INTERNAL");
+ MODULE_LICENSE("GPL");
+diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
+index e1e42e918ba7f6..1c2caae0d89460 100644
+--- a/kernel/bpf/syscall.c
++++ b/kernel/bpf/syscall.c
+@@ -1562,7 +1562,7 @@ struct bpf_map *bpf_map_get(u32 ufd)
+
+ return map;
+ }
+-EXPORT_SYMBOL(bpf_map_get);
++EXPORT_SYMBOL_NS(bpf_map_get, "BPF_INTERNAL");
+
+ struct bpf_map *bpf_map_get_with_uref(u32 ufd)
+ {
+@@ -3345,7 +3345,7 @@ struct bpf_link *bpf_link_get_from_fd(u32 ufd)
+ bpf_link_inc(link);
+ return link;
+ }
+-EXPORT_SYMBOL(bpf_link_get_from_fd);
++EXPORT_SYMBOL_NS(bpf_link_get_from_fd, "BPF_INTERNAL");
+
+ static void bpf_tracing_link_release(struct bpf_link *link)
+ {
+@@ -5981,7 +5981,7 @@ int kern_sys_bpf(int cmd, union bpf_attr *attr, unsigned int size)
+ return ____bpf_sys_bpf(cmd, attr, size);
+ }
+ }
+-EXPORT_SYMBOL(kern_sys_bpf);
++EXPORT_SYMBOL_NS(kern_sys_bpf, "BPF_INTERNAL");
+
+ static const struct bpf_func_proto bpf_sys_bpf_proto = {
+ .func = bpf_sys_bpf,
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index c6f3b5f4ff2beb..db95b76f5c1397 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -22863,6 +22863,33 @@ BTF_ID(func, __rcu_read_unlock)
+ #endif
+ BTF_SET_END(btf_id_deny)
+
++/* fexit and fmod_ret can't be used to attach to __noreturn functions.
++ * Currently, we must manually list all __noreturn functions here. Once a more
++ * robust solution is implemented, this workaround can be removed.
++ */
++BTF_SET_START(noreturn_deny)
++#ifdef CONFIG_IA32_EMULATION
++BTF_ID(func, __ia32_sys_exit)
++BTF_ID(func, __ia32_sys_exit_group)
++#endif
++#ifdef CONFIG_KUNIT
++BTF_ID(func, __kunit_abort)
++BTF_ID(func, kunit_try_catch_throw)
++#endif
++#ifdef CONFIG_MODULES
++BTF_ID(func, __module_put_and_kthread_exit)
++#endif
++#ifdef CONFIG_X86_64
++BTF_ID(func, __x64_sys_exit)
++BTF_ID(func, __x64_sys_exit_group)
++#endif
++BTF_ID(func, do_exit)
++BTF_ID(func, do_group_exit)
++BTF_ID(func, kthread_complete_and_exit)
++BTF_ID(func, kthread_exit)
++BTF_ID(func, make_task_dead)
++BTF_SET_END(noreturn_deny)
++
+ static bool can_be_sleepable(struct bpf_prog *prog)
+ {
+ if (prog->type == BPF_PROG_TYPE_TRACING) {
+@@ -22951,6 +22978,11 @@ static int check_attach_btf_id(struct bpf_verifier_env *env)
+ } else if (prog->type == BPF_PROG_TYPE_TRACING &&
+ btf_id_set_contains(&btf_id_deny, btf_id)) {
+ return -EINVAL;
++ } else if ((prog->expected_attach_type == BPF_TRACE_FEXIT ||
++ prog->expected_attach_type == BPF_MODIFY_RETURN) &&
++ btf_id_set_contains(&noreturn_deny, btf_id)) {
++ verbose(env, "Attaching fexit/fmod_ret to __noreturn functions is rejected.\n");
++ return -EINVAL;
+ }
+
+ key = bpf_trampoline_compute_key(tgt_prog, prog->aux->attach_btf, btf_id);
+diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
+index 81f078c059e86d..68d58753c75c3c 100644
+--- a/kernel/cgroup/cgroup.c
++++ b/kernel/cgroup/cgroup.c
+@@ -2339,9 +2339,37 @@ static struct file_system_type cgroup2_fs_type = {
+ };
+
+ #ifdef CONFIG_CPUSETS_V1
++enum cpuset_param {
++ Opt_cpuset_v2_mode,
++};
++
++static const struct fs_parameter_spec cpuset_fs_parameters[] = {
++ fsparam_flag ("cpuset_v2_mode", Opt_cpuset_v2_mode),
++ {}
++};
++
++static int cpuset_parse_param(struct fs_context *fc, struct fs_parameter *param)
++{
++ struct cgroup_fs_context *ctx = cgroup_fc2context(fc);
++ struct fs_parse_result result;
++ int opt;
++
++ opt = fs_parse(fc, cpuset_fs_parameters, param, &result);
++ if (opt < 0)
++ return opt;
++
++ switch (opt) {
++ case Opt_cpuset_v2_mode:
++ ctx->flags |= CGRP_ROOT_CPUSET_V2_MODE;
++ return 0;
++ }
++ return -EINVAL;
++}
++
+ static const struct fs_context_operations cpuset_fs_context_ops = {
+ .get_tree = cgroup1_get_tree,
+ .free = cgroup_fs_context_free,
++ .parse_param = cpuset_parse_param,
+ };
+
+ /*
+@@ -2378,6 +2406,7 @@ static int cpuset_init_fs_context(struct fs_context *fc)
+ static struct file_system_type cpuset_fs_type = {
+ .name = "cpuset",
+ .init_fs_context = cpuset_init_fs_context,
++ .parameters = cpuset_fs_parameters,
+ .fs_flags = FS_USERNS_MOUNT,
+ };
+ #endif
+diff --git a/kernel/cgroup/cpuset-internal.h b/kernel/cgroup/cpuset-internal.h
+index 976a8bc3ff6031..383963e28ac69c 100644
+--- a/kernel/cgroup/cpuset-internal.h
++++ b/kernel/cgroup/cpuset-internal.h
+@@ -33,6 +33,7 @@ enum prs_errcode {
+ PERR_CPUSEMPTY,
+ PERR_HKEEPING,
+ PERR_ACCESS,
++ PERR_REMOTE,
+ };
+
+ /* bits in struct cpuset flags field */
+diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
+index d72f843d9feebd..1287274ae1ce9a 100644
+--- a/kernel/cgroup/cpuset.c
++++ b/kernel/cgroup/cpuset.c
+@@ -62,6 +62,7 @@ static const char * const perr_strings[] = {
+ [PERR_CPUSEMPTY] = "cpuset.cpus and cpuset.cpus.exclusive are empty",
+ [PERR_HKEEPING] = "partition config conflicts with housekeeping setup",
+ [PERR_ACCESS] = "Enable partition not permitted",
++ [PERR_REMOTE] = "Have remote partition underneath",
+ };
+
+ /*
+@@ -2840,6 +2841,19 @@ static int update_prstate(struct cpuset *cs, int new_prs)
+ goto out;
+ }
+
++ /*
++ * We don't support the creation of a new local partition with
++ * a remote partition underneath it. This unsupported
++ * setting can happen only if parent is the top_cpuset because
++ * a remote partition cannot be created underneath an existing
++ * local or remote partition.
++ */
++ if ((parent == &top_cpuset) &&
++ cpumask_intersects(cs->exclusive_cpus, subpartitions_cpus)) {
++ err = PERR_REMOTE;
++ goto out;
++ }
++
+ /*
+ * If parent is valid partition, enable local partiion.
+ * Otherwise, enable a remote partition.
+diff --git a/kernel/dma/contiguous.c b/kernel/dma/contiguous.c
+index 055da410ac71d6..8df0dfaaca18ee 100644
+--- a/kernel/dma/contiguous.c
++++ b/kernel/dma/contiguous.c
+@@ -64,8 +64,7 @@ struct cma *dma_contiguous_default_area;
+ * Users, who want to set the size of global CMA area for their system
+ * should use cma= kernel parameter.
+ */
+-static const phys_addr_t size_bytes __initconst =
+- (phys_addr_t)CMA_SIZE_MBYTES * SZ_1M;
++#define size_bytes ((phys_addr_t)CMA_SIZE_MBYTES * SZ_1M)
+ static phys_addr_t size_cmdline __initdata = -1;
+ static phys_addr_t base_cmdline __initdata;
+ static phys_addr_t limit_cmdline __initdata;
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index ee6b7281a19943..93ce810384c92c 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -13701,6 +13701,9 @@ inherit_event(struct perf_event *parent_event,
+ if (IS_ERR(child_event))
+ return child_event;
+
++ get_ctx(child_ctx);
++ child_event->ctx = child_ctx;
++
+ pmu_ctx = find_get_pmu_context(child_event->pmu, child_ctx, child_event);
+ if (IS_ERR(pmu_ctx)) {
+ free_event(child_event);
+@@ -13723,8 +13726,6 @@ inherit_event(struct perf_event *parent_event,
+ return NULL;
+ }
+
+- get_ctx(child_ctx);
+-
+ /*
+ * Make the child state follow the state of the parent event,
+ * not its attr.disabled bit. We hold the parent's mutex,
+@@ -13745,7 +13746,6 @@ inherit_event(struct perf_event *parent_event,
+ local64_set(&hwc->period_left, sample_period);
+ }
+
+- child_event->ctx = child_ctx;
+ child_event->overflow_handler = parent_event->overflow_handler;
+ child_event->overflow_handler_context
+ = parent_event->overflow_handler_context;
+diff --git a/kernel/irq/msi.c b/kernel/irq/msi.c
+index 396a067a8a56b5..7682c36cbccc63 100644
+--- a/kernel/irq/msi.c
++++ b/kernel/irq/msi.c
+@@ -1143,7 +1143,7 @@ static bool msi_check_reservation_mode(struct irq_domain *domain,
+ if (!(info->flags & MSI_FLAG_MUST_REACTIVATE))
+ return false;
+
+- if (IS_ENABLED(CONFIG_PCI_MSI) && pci_msi_ignore_mask)
++ if (info->flags & MSI_FLAG_NO_MASK)
+ return false;
+
+ /*
+diff --git a/kernel/panic.c b/kernel/panic.c
+index d8635d5cecb250..f9f0c5148f6aa7 100644
+--- a/kernel/panic.c
++++ b/kernel/panic.c
+@@ -832,9 +832,15 @@ device_initcall(register_warn_debugfs);
+ */
+ __visible noinstr void __stack_chk_fail(void)
+ {
++ unsigned long flags;
++
+ instrumentation_begin();
++ flags = user_access_save();
++
+ panic("stack-protector: Kernel stack is corrupted in: %pB",
+ __builtin_return_address(0));
++
++ user_access_restore(flags);
+ instrumentation_end();
+ }
+ EXPORT_SYMBOL(__stack_chk_fail);
+diff --git a/kernel/power/energy_model.c b/kernel/power/energy_model.c
+index 3874f0e97651e8..76c4343796893f 100644
+--- a/kernel/power/energy_model.c
++++ b/kernel/power/energy_model.c
+@@ -161,22 +161,10 @@ static void em_debug_create_pd(struct device *dev) {}
+ static void em_debug_remove_pd(struct device *dev) {}
+ #endif
+
+-static void em_destroy_table_rcu(struct rcu_head *rp)
+-{
+- struct em_perf_table __rcu *table;
+-
+- table = container_of(rp, struct em_perf_table, rcu);
+- kfree(table);
+-}
+-
+ static void em_release_table_kref(struct kref *kref)
+ {
+- struct em_perf_table __rcu *table;
+-
+ /* It was the last owner of this table so we can free */
+- table = container_of(kref, struct em_perf_table, kref);
+-
+- call_rcu(&table->rcu, em_destroy_table_rcu);
++ kfree_rcu(container_of(kref, struct em_perf_table, kref), rcu);
+ }
+
+ /**
+@@ -185,7 +173,7 @@ static void em_release_table_kref(struct kref *kref)
+ *
+ * No return values.
+ */
+-void em_table_free(struct em_perf_table __rcu *table)
++void em_table_free(struct em_perf_table *table)
+ {
+ kref_put(&table->kref, em_release_table_kref);
+ }
+@@ -198,9 +186,9 @@ void em_table_free(struct em_perf_table __rcu *table)
+ * has a user.
+ * Returns allocated table or NULL.
+ */
+-struct em_perf_table __rcu *em_table_alloc(struct em_perf_domain *pd)
++struct em_perf_table *em_table_alloc(struct em_perf_domain *pd)
+ {
+- struct em_perf_table __rcu *table;
++ struct em_perf_table *table;
+ int table_size;
+
+ table_size = sizeof(struct em_perf_state) * pd->nr_perf_states;
+@@ -308,9 +296,9 @@ int em_dev_compute_costs(struct device *dev, struct em_perf_state *table,
+ * Return 0 on success or an error code on failure.
+ */
+ int em_dev_update_perf_domain(struct device *dev,
+- struct em_perf_table __rcu *new_table)
++ struct em_perf_table *new_table)
+ {
+- struct em_perf_table __rcu *old_table;
++ struct em_perf_table *old_table;
+ struct em_perf_domain *pd;
+
+ if (!dev)
+@@ -327,7 +315,8 @@ int em_dev_update_perf_domain(struct device *dev,
+
+ kref_get(&new_table->kref);
+
+- old_table = pd->em_table;
++ old_table = rcu_dereference_protected(pd->em_table,
++ lockdep_is_held(&em_pd_mutex));
+ rcu_assign_pointer(pd->em_table, new_table);
+
+ em_cpufreq_update_efficiencies(dev, new_table->state);
+@@ -399,7 +388,7 @@ static int em_create_pd(struct device *dev, int nr_states,
+ struct em_data_callback *cb, cpumask_t *cpus,
+ unsigned long flags)
+ {
+- struct em_perf_table __rcu *em_table;
++ struct em_perf_table *em_table;
+ struct em_perf_domain *pd;
+ struct device *cpu_dev;
+ int cpu, ret, num_cpus;
+@@ -559,6 +548,7 @@ int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states,
+ struct em_data_callback *cb, cpumask_t *cpus,
+ bool microwatts)
+ {
++ struct em_perf_table *em_table;
+ unsigned long cap, prev_cap = 0;
+ unsigned long flags = 0;
+ int cpu, ret;
+@@ -631,7 +621,9 @@ int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states,
+ dev->em_pd->min_perf_state = 0;
+ dev->em_pd->max_perf_state = nr_states - 1;
+
+- em_cpufreq_update_efficiencies(dev, dev->em_pd->em_table->state);
++ em_table = rcu_dereference_protected(dev->em_pd->em_table,
++ lockdep_is_held(&em_pd_mutex));
++ em_cpufreq_update_efficiencies(dev, em_table->state);
+
+ em_debug_create_pd(dev);
+ dev_info(dev, "EM: created perf domain\n");
+@@ -668,7 +660,8 @@ void em_dev_unregister_perf_domain(struct device *dev)
+ mutex_lock(&em_pd_mutex);
+ em_debug_remove_pd(dev);
+
+- em_table_free(dev->em_pd->em_table);
++ em_table_free(rcu_dereference_protected(dev->em_pd->em_table,
++ lockdep_is_held(&em_pd_mutex)));
+
+ kfree(dev->em_pd);
+ dev->em_pd = NULL;
+@@ -676,9 +669,9 @@ void em_dev_unregister_perf_domain(struct device *dev)
+ }
+ EXPORT_SYMBOL_GPL(em_dev_unregister_perf_domain);
+
+-static struct em_perf_table __rcu *em_table_dup(struct em_perf_domain *pd)
++static struct em_perf_table *em_table_dup(struct em_perf_domain *pd)
+ {
+- struct em_perf_table __rcu *em_table;
++ struct em_perf_table *em_table;
+ struct em_perf_state *ps, *new_ps;
+ int ps_size;
+
+@@ -700,7 +693,7 @@ static struct em_perf_table __rcu *em_table_dup(struct em_perf_domain *pd)
+ }
+
+ static int em_recalc_and_update(struct device *dev, struct em_perf_domain *pd,
+- struct em_perf_table __rcu *em_table)
++ struct em_perf_table *em_table)
+ {
+ int ret;
+
+@@ -731,7 +724,7 @@ static void em_adjust_new_capacity(struct device *dev,
+ struct em_perf_domain *pd,
+ u64 max_cap)
+ {
+- struct em_perf_table __rcu *em_table;
++ struct em_perf_table *em_table;
+
+ em_table = em_table_dup(pd);
+ if (!em_table) {
+@@ -822,7 +815,7 @@ static void em_update_workfn(struct work_struct *work)
+ */
+ int em_dev_update_chip_binning(struct device *dev)
+ {
+- struct em_perf_table __rcu *em_table;
++ struct em_perf_table *em_table;
+ struct em_perf_domain *pd;
+ int i, ret;
+
+diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
+index 9688f1a5df8b8f..77cdff0d9f3488 100644
+--- a/kernel/sched/ext.c
++++ b/kernel/sched/ext.c
+@@ -4944,7 +4944,7 @@ static void scx_ops_bypass(bool bypass)
+
+ static void free_exit_info(struct scx_exit_info *ei)
+ {
+- kfree(ei->dump);
++ kvfree(ei->dump);
+ kfree(ei->msg);
+ kfree(ei->bt);
+ kfree(ei);
+@@ -4960,7 +4960,7 @@ static struct scx_exit_info *alloc_exit_info(size_t exit_dump_len)
+
+ ei->bt = kcalloc(SCX_EXIT_BT_LEN, sizeof(ei->bt[0]), GFP_KERNEL);
+ ei->msg = kzalloc(SCX_EXIT_MSG_LEN, GFP_KERNEL);
+- ei->dump = kzalloc(exit_dump_len, GFP_KERNEL);
++ ei->dump = kvzalloc(exit_dump_len, GFP_KERNEL);
+
+ if (!ei->bt || !ei->msg || !ei->dump) {
+ free_exit_info(ei);
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 89c7260103e18b..3d9b68a347b764 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -7083,9 +7083,6 @@ static int dequeue_entities(struct rq *rq, struct sched_entity *se, int flags)
+ h_nr_idle = task_has_idle_policy(p);
+ if (task_sleep || task_delayed || !se->sched_delayed)
+ h_nr_runnable = 1;
+- } else {
+- cfs_rq = group_cfs_rq(se);
+- slice = cfs_rq_min_slice(cfs_rq);
+ }
+
+ for_each_sched_entity(se) {
+@@ -7095,6 +7092,7 @@ static int dequeue_entities(struct rq *rq, struct sched_entity *se, int flags)
+ if (p && &p->se == se)
+ return -1;
+
++ slice = cfs_rq_min_slice(cfs_rq);
+ break;
+ }
+
+diff --git a/kernel/time/tick-common.c b/kernel/time/tick-common.c
+index a47bcf71defcf5..9a3859443c042c 100644
+--- a/kernel/time/tick-common.c
++++ b/kernel/time/tick-common.c
+@@ -509,6 +509,7 @@ void tick_resume(void)
+
+ #ifdef CONFIG_SUSPEND
+ static DEFINE_RAW_SPINLOCK(tick_freeze_lock);
++static DEFINE_WAIT_OVERRIDE_MAP(tick_freeze_map, LD_WAIT_SLEEP);
+ static unsigned int tick_freeze_depth;
+
+ /**
+@@ -528,9 +529,22 @@ void tick_freeze(void)
+ if (tick_freeze_depth == num_online_cpus()) {
+ trace_suspend_resume(TPS("timekeeping_freeze"),
+ smp_processor_id(), true);
++ /*
++ * All other CPUs have their interrupts disabled and are
++ * suspended to idle. Other tasks have been frozen so there
++ * is no scheduling happening. This means that there is no
++ * concurrency in the system at this point. Therefore it is
++ * okay to acquire a sleeping lock on PREEMPT_RT, such as a
++ * spinlock, because the lock cannot be held by other CPUs
++ * or threads and acquiring it cannot block.
++ *
++ * Inform lockdep about the situation.
++ */
++ lock_map_acquire_try(&tick_freeze_map);
+ system_state = SYSTEM_SUSPEND;
+ sched_clock_suspend();
+ timekeeping_suspend();
++ lock_map_release(&tick_freeze_map);
+ } else {
+ tick_suspend_local();
+ }
+@@ -552,8 +566,16 @@ void tick_unfreeze(void)
+ raw_spin_lock(&tick_freeze_lock);
+
+ if (tick_freeze_depth == num_online_cpus()) {
++ /*
++ * Similar to tick_freeze(). On resumption the first CPU may
++ * acquire uncontended sleeping locks while other CPUs block on
++ * tick_freeze_lock.
++ */
++ lock_map_acquire_try(&tick_freeze_map);
+ timekeeping_resume();
+ sched_clock_resume();
++ lock_map_release(&tick_freeze_map);
++
+ system_state = SYSTEM_RUNNING;
+ trace_suspend_resume(TPS("timekeeping_freeze"),
+ smp_processor_id(), false);
+diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
+index a612f6f182e511..13bef2462e94b0 100644
+--- a/kernel/trace/bpf_trace.c
++++ b/kernel/trace/bpf_trace.c
+@@ -392,7 +392,7 @@ static const struct bpf_func_proto bpf_trace_printk_proto = {
+ .arg2_type = ARG_CONST_SIZE,
+ };
+
+-static void __set_printk_clr_event(void)
++static void __set_printk_clr_event(struct work_struct *work)
+ {
+ /*
+ * This program might be calling bpf_trace_printk,
+@@ -405,10 +405,11 @@ static void __set_printk_clr_event(void)
+ if (trace_set_clr_event("bpf_trace", "bpf_trace_printk", 1))
+ pr_warn_ratelimited("could not enable bpf_trace_printk events");
+ }
++static DECLARE_WORK(set_printk_work, __set_printk_clr_event);
+
+ const struct bpf_func_proto *bpf_get_trace_printk_proto(void)
+ {
+- __set_printk_clr_event();
++ schedule_work(&set_printk_work);
+ return &bpf_trace_printk_proto;
+ }
+
+@@ -451,7 +452,7 @@ static const struct bpf_func_proto bpf_trace_vprintk_proto = {
+
+ const struct bpf_func_proto *bpf_get_trace_vprintk_proto(void)
+ {
+- __set_printk_clr_event();
++ schedule_work(&set_printk_work);
+ return &bpf_trace_vprintk_proto;
+ }
+
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 0e6d517e74e0fd..50aa6d59083292 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -10427,6 +10427,16 @@ __init static void enable_instances(void)
+ }
+
+ if (start) {
++ /* Start and size must be page aligned */
++ if (start & ~PAGE_MASK) {
++ pr_warn("Tracing: mapping start addr %pa is not page aligned\n", &start);
++ continue;
++ }
++ if (size & ~PAGE_MASK) {
++ pr_warn("Tracing: mapping size %pa is not page aligned\n", &size);
++ continue;
++ }
++
+ addr = map_pages(start, size);
+ if (addr) {
+ pr_info("Tracing: mapped boot instance %s at physical memory %pa of size 0x%lx\n",
+diff --git a/lib/Kconfig.ubsan b/lib/Kconfig.ubsan
+index 1d4aa7a83b3a55..37655f58b8554e 100644
+--- a/lib/Kconfig.ubsan
++++ b/lib/Kconfig.ubsan
+@@ -118,7 +118,6 @@ config UBSAN_UNREACHABLE
+
+ config UBSAN_SIGNED_WRAP
+ bool "Perform checking for signed arithmetic wrap-around"
+- default UBSAN
+ depends on !COMPILE_TEST
+ # The no_sanitize attribute was introduced in GCC with version 8.
+ depends on !CC_IS_GCC || GCC_VERSION >= 80000
+diff --git a/lib/crypto/Kconfig b/lib/crypto/Kconfig
+index b01253cac70a74..b09e78da959ac3 100644
+--- a/lib/crypto/Kconfig
++++ b/lib/crypto/Kconfig
+@@ -42,7 +42,7 @@ config CRYPTO_LIB_BLAKE2S_GENERIC
+ of CRYPTO_LIB_BLAKE2S.
+
+ config CRYPTO_ARCH_HAVE_LIB_CHACHA
+- tristate
++ bool
+ help
+ Declares whether the architecture provides an arch-specific
+ accelerated implementation of the ChaCha library interface,
+@@ -58,17 +58,21 @@ config CRYPTO_LIB_CHACHA_GENERIC
+ implementation is enabled, this implementation serves the users
+ of CRYPTO_LIB_CHACHA.
+
++config CRYPTO_LIB_CHACHA_INTERNAL
++ tristate
++ select CRYPTO_LIB_CHACHA_GENERIC if CRYPTO_ARCH_HAVE_LIB_CHACHA=n
++
+ config CRYPTO_LIB_CHACHA
+ tristate "ChaCha library interface"
+- depends on CRYPTO_ARCH_HAVE_LIB_CHACHA || !CRYPTO_ARCH_HAVE_LIB_CHACHA
+- select CRYPTO_LIB_CHACHA_GENERIC if CRYPTO_ARCH_HAVE_LIB_CHACHA=n
++ select CRYPTO
++ select CRYPTO_LIB_CHACHA_INTERNAL
+ help
+ Enable the ChaCha library interface. This interface may be fulfilled
+ by either the generic implementation or an arch-specific one, if one
+ is available and enabled.
+
+ config CRYPTO_ARCH_HAVE_LIB_CURVE25519
+- tristate
++ bool
+ help
+ Declares whether the architecture provides an arch-specific
+ accelerated implementation of the Curve25519 library interface,
+@@ -76,6 +80,7 @@ config CRYPTO_ARCH_HAVE_LIB_CURVE25519
+
+ config CRYPTO_LIB_CURVE25519_GENERIC
+ tristate
++ select CRYPTO_LIB_UTILS
+ help
+ This symbol can be depended upon by arch implementations of the
+ Curve25519 library interface that require the generic code as a
+@@ -83,11 +88,14 @@ config CRYPTO_LIB_CURVE25519_GENERIC
+ implementation is enabled, this implementation serves the users
+ of CRYPTO_LIB_CURVE25519.
+
++config CRYPTO_LIB_CURVE25519_INTERNAL
++ tristate
++ select CRYPTO_LIB_CURVE25519_GENERIC if CRYPTO_ARCH_HAVE_LIB_CURVE25519=n
++
+ config CRYPTO_LIB_CURVE25519
+ tristate "Curve25519 scalar multiplication library"
+- depends on CRYPTO_ARCH_HAVE_LIB_CURVE25519 || !CRYPTO_ARCH_HAVE_LIB_CURVE25519
+- select CRYPTO_LIB_CURVE25519_GENERIC if CRYPTO_ARCH_HAVE_LIB_CURVE25519=n
+- select CRYPTO_LIB_UTILS
++ select CRYPTO
++ select CRYPTO_LIB_CURVE25519_INTERNAL
+ help
+ Enable the Curve25519 library interface. This interface may be
+ fulfilled by either the generic implementation or an arch-specific
+@@ -104,7 +112,7 @@ config CRYPTO_LIB_POLY1305_RSIZE
+ default 1
+
+ config CRYPTO_ARCH_HAVE_LIB_POLY1305
+- tristate
++ bool
+ help
+ Declares whether the architecture provides an arch-specific
+ accelerated implementation of the Poly1305 library interface,
+@@ -119,10 +127,14 @@ config CRYPTO_LIB_POLY1305_GENERIC
+ implementation is enabled, this implementation serves the users
+ of CRYPTO_LIB_POLY1305.
+
++config CRYPTO_LIB_POLY1305_INTERNAL
++ tristate
++ select CRYPTO_LIB_POLY1305_GENERIC if CRYPTO_ARCH_HAVE_LIB_POLY1305=n
++
+ config CRYPTO_LIB_POLY1305
+ tristate "Poly1305 library interface"
+- depends on CRYPTO_ARCH_HAVE_LIB_POLY1305 || !CRYPTO_ARCH_HAVE_LIB_POLY1305
+- select CRYPTO_LIB_POLY1305_GENERIC if CRYPTO_ARCH_HAVE_LIB_POLY1305=n
++ select CRYPTO
++ select CRYPTO_LIB_POLY1305_INTERNAL
+ help
+ Enable the Poly1305 library interface. This interface may be fulfilled
+ by either the generic implementation or an arch-specific one, if one
+@@ -130,11 +142,10 @@ config CRYPTO_LIB_POLY1305
+
+ config CRYPTO_LIB_CHACHA20POLY1305
+ tristate "ChaCha20-Poly1305 AEAD support (8-byte nonce library version)"
+- depends on CRYPTO_ARCH_HAVE_LIB_CHACHA || !CRYPTO_ARCH_HAVE_LIB_CHACHA
+- depends on CRYPTO_ARCH_HAVE_LIB_POLY1305 || !CRYPTO_ARCH_HAVE_LIB_POLY1305
+- depends on CRYPTO
++ select CRYPTO
+ select CRYPTO_LIB_CHACHA
+ select CRYPTO_LIB_POLY1305
++ select CRYPTO_LIB_UTILS
+ select CRYPTO_ALGAPI
+
+ config CRYPTO_LIB_SHA1
+diff --git a/lib/test_ubsan.c b/lib/test_ubsan.c
+index 5d7b10e9861070..63b7566e78639e 100644
+--- a/lib/test_ubsan.c
++++ b/lib/test_ubsan.c
+@@ -68,18 +68,22 @@ static void test_ubsan_shift_out_of_bounds(void)
+
+ static void test_ubsan_out_of_bounds(void)
+ {
+- volatile int i = 4, j = 5, k = -1;
+- volatile char above[4] = { }; /* Protect surrounding memory. */
+- volatile int arr[4];
+- volatile char below[4] = { }; /* Protect surrounding memory. */
++ int i = 4, j = 4, k = -1;
++ volatile struct {
++ char above[4]; /* Protect surrounding memory. */
++ int arr[4];
++ char below[4]; /* Protect surrounding memory. */
++ } data;
+
+- above[0] = below[0];
++ OPTIMIZER_HIDE_VAR(i);
++ OPTIMIZER_HIDE_VAR(j);
++ OPTIMIZER_HIDE_VAR(k);
+
+ UBSAN_TEST(CONFIG_UBSAN_BOUNDS, "above");
+- arr[j] = i;
++ data.arr[j] = i;
+
+ UBSAN_TEST(CONFIG_UBSAN_BOUNDS, "below");
+- arr[k] = i;
++ data.arr[k] = i;
+ }
+
+ enum ubsan_test_enum {
+diff --git a/mm/vmscan.c b/mm/vmscan.c
+index fada3b35aff834..e9f3ae8158a660 100644
+--- a/mm/vmscan.c
++++ b/mm/vmscan.c
+@@ -1112,6 +1112,13 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
+ if (!folio_trylock(folio))
+ goto keep;
+
++ if (folio_contain_hwpoisoned_page(folio)) {
++ unmap_poisoned_folio(folio, folio_pfn(folio), false);
++ folio_unlock(folio);
++ folio_put(folio);
++ continue;
++ }
++
+ VM_BUG_ON_FOLIO(folio_test_active(folio), folio);
+
+ nr_pages = folio_nr_pages(folio);
+diff --git a/net/9p/client.c b/net/9p/client.c
+index 09f8ced9f8bb7f..52a5497cfca794 100644
+--- a/net/9p/client.c
++++ b/net/9p/client.c
+@@ -1548,7 +1548,8 @@ p9_client_read_once(struct p9_fid *fid, u64 offset, struct iov_iter *to,
+ struct p9_client *clnt = fid->clnt;
+ struct p9_req_t *req;
+ int count = iov_iter_count(to);
+- int rsize, received, non_zc = 0;
++ u32 rsize, received;
++ bool non_zc = false;
+ char *dataptr;
+
+ *err = 0;
+@@ -1571,7 +1572,7 @@ p9_client_read_once(struct p9_fid *fid, u64 offset, struct iov_iter *to,
+ 0, 11, "dqd", fid->fid,
+ offset, rsize);
+ } else {
+- non_zc = 1;
++ non_zc = true;
+ req = p9_client_rpc(clnt, P9_TREAD, "dqd", fid->fid, offset,
+ rsize);
+ }
+@@ -1592,11 +1593,11 @@ p9_client_read_once(struct p9_fid *fid, u64 offset, struct iov_iter *to,
+ return 0;
+ }
+ if (rsize < received) {
+- pr_err("bogus RREAD count (%d > %d)\n", received, rsize);
++ pr_err("bogus RREAD count (%u > %u)\n", received, rsize);
+ received = rsize;
+ }
+
+- p9_debug(P9_DEBUG_9P, "<<< RREAD count %d\n", received);
++ p9_debug(P9_DEBUG_9P, "<<< RREAD count %u\n", received);
+
+ if (non_zc) {
+ int n = copy_to_iter(dataptr, received, to);
+@@ -1623,9 +1624,9 @@ p9_client_write(struct p9_fid *fid, u64 offset, struct iov_iter *from, int *err)
+ *err = 0;
+
+ while (iov_iter_count(from)) {
+- int count = iov_iter_count(from);
+- int rsize = fid->iounit;
+- int written;
++ size_t count = iov_iter_count(from);
++ u32 rsize = fid->iounit;
++ u32 written;
+
+ if (!rsize || rsize > clnt->msize - P9_IOHDRSZ)
+ rsize = clnt->msize - P9_IOHDRSZ;
+@@ -1633,7 +1634,7 @@ p9_client_write(struct p9_fid *fid, u64 offset, struct iov_iter *from, int *err)
+ if (count < rsize)
+ rsize = count;
+
+- p9_debug(P9_DEBUG_9P, ">>> TWRITE fid %d offset %llu count %d (/%d)\n",
++ p9_debug(P9_DEBUG_9P, ">>> TWRITE fid %d offset %llu count %u (/%zu)\n",
+ fid->fid, offset, rsize, count);
+
+ /* Don't bother zerocopy for small IO (< 1024) */
+@@ -1659,11 +1660,11 @@ p9_client_write(struct p9_fid *fid, u64 offset, struct iov_iter *from, int *err)
+ break;
+ }
+ if (rsize < written) {
+- pr_err("bogus RWRITE count (%d > %d)\n", written, rsize);
++ pr_err("bogus RWRITE count (%u > %u)\n", written, rsize);
+ written = rsize;
+ }
+
+- p9_debug(P9_DEBUG_9P, "<<< RWRITE count %d\n", written);
++ p9_debug(P9_DEBUG_9P, "<<< RWRITE count %u\n", written);
+
+ p9_req_put(clnt, req);
+ iov_iter_revert(from, count - written - iov_iter_count(from));
+@@ -2098,7 +2099,8 @@ EXPORT_SYMBOL_GPL(p9_client_xattrcreate);
+
+ int p9_client_readdir(struct p9_fid *fid, char *data, u32 count, u64 offset)
+ {
+- int err, rsize, non_zc = 0;
++ int err, non_zc = 0;
++ u32 rsize;
+ struct p9_client *clnt;
+ struct p9_req_t *req;
+ char *dataptr;
+@@ -2107,7 +2109,7 @@ int p9_client_readdir(struct p9_fid *fid, char *data, u32 count, u64 offset)
+
+ iov_iter_kvec(&to, ITER_DEST, &kv, 1, count);
+
+- p9_debug(P9_DEBUG_9P, ">>> TREADDIR fid %d offset %llu count %d\n",
++ p9_debug(P9_DEBUG_9P, ">>> TREADDIR fid %d offset %llu count %u\n",
+ fid->fid, offset, count);
+
+ clnt = fid->clnt;
+@@ -2142,11 +2144,11 @@ int p9_client_readdir(struct p9_fid *fid, char *data, u32 count, u64 offset)
+ goto free_and_error;
+ }
+ if (rsize < count) {
+- pr_err("bogus RREADDIR count (%d > %d)\n", count, rsize);
++ pr_err("bogus RREADDIR count (%u > %u)\n", count, rsize);
+ count = rsize;
+ }
+
+- p9_debug(P9_DEBUG_9P, "<<< RREADDIR count %d\n", count);
++ p9_debug(P9_DEBUG_9P, "<<< RREADDIR count %u\n", count);
+
+ if (non_zc)
+ memmove(data, dataptr, count);
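The 9p read/write/readdir counts above move from int to u32 (with matching %u format specifiers), presumably because they carry 32-bit protocol values; a count above INT_MAX stored in an int turns negative and misprints. A small sketch of the difference (the sample value is arbitrary):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint32_t wire_count = 3000000000u;	/* hypothetical on-the-wire count above INT_MAX */

	/* converting an out-of-range value to int is implementation-defined,
	 * and on common ABIs it comes out negative */
	printf("as int: %d\n", (int)wire_count);
	printf("as u32: %u\n", (unsigned)wire_count);	/* 3000000000 */
	return 0;
}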
+diff --git a/net/9p/trans_fd.c b/net/9p/trans_fd.c
+index 196060dc6138af..791e4868f2d4e1 100644
+--- a/net/9p/trans_fd.c
++++ b/net/9p/trans_fd.c
+@@ -191,12 +191,13 @@ static void p9_conn_cancel(struct p9_conn *m, int err)
+
+ spin_lock(&m->req_lock);
+
+- if (m->err) {
++ if (READ_ONCE(m->err)) {
+ spin_unlock(&m->req_lock);
+ return;
+ }
+
+- m->err = err;
++ WRITE_ONCE(m->err, err);
++ ASSERT_EXCLUSIVE_WRITER(m->err);
+
+ list_for_each_entry_safe(req, rtmp, &m->req_list, req_list) {
+ list_move(&req->req_list, &cancel_list);
+@@ -283,7 +284,7 @@ static void p9_read_work(struct work_struct *work)
+
+ m = container_of(work, struct p9_conn, rq);
+
+- if (m->err < 0)
++ if (READ_ONCE(m->err) < 0)
+ return;
+
+ p9_debug(P9_DEBUG_TRANS, "start mux %p pos %zd\n", m, m->rc.offset);
+@@ -450,7 +451,7 @@ static void p9_write_work(struct work_struct *work)
+
+ m = container_of(work, struct p9_conn, wq);
+
+- if (m->err < 0) {
++ if (READ_ONCE(m->err) < 0) {
+ clear_bit(Wworksched, &m->wsched);
+ return;
+ }
+@@ -622,7 +623,7 @@ static void p9_poll_mux(struct p9_conn *m)
+ __poll_t n;
+ int err = -ECONNRESET;
+
+- if (m->err < 0)
++ if (READ_ONCE(m->err) < 0)
+ return;
+
+ n = p9_fd_poll(m->client, NULL, &err);
+@@ -665,6 +666,7 @@ static void p9_poll_mux(struct p9_conn *m)
+ static int p9_fd_request(struct p9_client *client, struct p9_req_t *req)
+ {
+ __poll_t n;
++ int err;
+ struct p9_trans_fd *ts = client->trans;
+ struct p9_conn *m = &ts->conn;
+
+@@ -673,9 +675,10 @@ static int p9_fd_request(struct p9_client *client, struct p9_req_t *req)
+
+ spin_lock(&m->req_lock);
+
+- if (m->err < 0) {
++ err = READ_ONCE(m->err);
++ if (err < 0) {
+ spin_unlock(&m->req_lock);
+- return m->err;
++ return err;
+ }
+
+ WRITE_ONCE(req->status, REQ_STATUS_UNSENT);
+diff --git a/net/core/lwtunnel.c b/net/core/lwtunnel.c
+index 4417a18b3e951a..f63586c9ce0216 100644
+--- a/net/core/lwtunnel.c
++++ b/net/core/lwtunnel.c
+@@ -332,6 +332,8 @@ int lwtunnel_output(struct net *net, struct sock *sk, struct sk_buff *skb)
+ struct dst_entry *dst;
+ int ret;
+
++ local_bh_disable();
++
+ if (dev_xmit_recursion()) {
+ net_crit_ratelimited("%s(): recursion limit reached on datapath\n",
+ __func__);
+@@ -347,8 +349,10 @@ int lwtunnel_output(struct net *net, struct sock *sk, struct sk_buff *skb)
+ lwtstate = dst->lwtstate;
+
+ if (lwtstate->type == LWTUNNEL_ENCAP_NONE ||
+- lwtstate->type > LWTUNNEL_ENCAP_MAX)
+- return 0;
++ lwtstate->type > LWTUNNEL_ENCAP_MAX) {
++ ret = 0;
++ goto out;
++ }
+
+ ret = -EOPNOTSUPP;
+ rcu_read_lock();
+@@ -363,11 +367,13 @@ int lwtunnel_output(struct net *net, struct sock *sk, struct sk_buff *skb)
+ if (ret == -EOPNOTSUPP)
+ goto drop;
+
+- return ret;
++ goto out;
+
+ drop:
+ kfree_skb(skb);
+
++out:
++ local_bh_enable();
+ return ret;
+ }
+ EXPORT_SYMBOL_GPL(lwtunnel_output);
+@@ -379,6 +385,8 @@ int lwtunnel_xmit(struct sk_buff *skb)
+ struct dst_entry *dst;
+ int ret;
+
++ local_bh_disable();
++
+ if (dev_xmit_recursion()) {
+ net_crit_ratelimited("%s(): recursion limit reached on datapath\n",
+ __func__);
+@@ -395,8 +403,10 @@ int lwtunnel_xmit(struct sk_buff *skb)
+ lwtstate = dst->lwtstate;
+
+ if (lwtstate->type == LWTUNNEL_ENCAP_NONE ||
+- lwtstate->type > LWTUNNEL_ENCAP_MAX)
+- return 0;
++ lwtstate->type > LWTUNNEL_ENCAP_MAX) {
++ ret = 0;
++ goto out;
++ }
+
+ ret = -EOPNOTSUPP;
+ rcu_read_lock();
+@@ -411,11 +421,13 @@ int lwtunnel_xmit(struct sk_buff *skb)
+ if (ret == -EOPNOTSUPP)
+ goto drop;
+
+- return ret;
++ goto out;
+
+ drop:
+ kfree_skb(skb);
+
++out:
++ local_bh_enable();
+ return ret;
+ }
+ EXPORT_SYMBOL_GPL(lwtunnel_xmit);
+@@ -427,6 +439,8 @@ int lwtunnel_input(struct sk_buff *skb)
+ struct dst_entry *dst;
+ int ret;
+
++ DEBUG_NET_WARN_ON_ONCE(!in_softirq());
++
+ if (dev_xmit_recursion()) {
+ net_crit_ratelimited("%s(): recursion limit reached on datapath\n",
+ __func__);
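In lwtunnel_output()/lwtunnel_xmit() above, the early returns become goto out so the newly added local_bh_enable() runs on every exit path, paired with the local_bh_disable() at the top. The single-exit shape, sketched in plain C with a pthread mutex standing in for the BH-disabled section (the function and its values are invented):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static int process(int type)
{
	int ret;

	pthread_mutex_lock(&lock);		/* stands in for local_bh_disable() */

	if (type == 0) {			/* nothing to do */
		ret = 0;
		goto out;
	}
	if (type > 10) {			/* unsupported type */
		ret = -1;
		goto out;
	}

	ret = type * 2;				/* the actual work */
out:
	pthread_mutex_unlock(&lock);		/* runs on every path, like local_bh_enable() */
	return ret;
}

int main(void)
{
	printf("%d %d %d\n", process(0), process(3), process(42));	/* 0 6 -1 */
	return 0;
}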
+diff --git a/net/core/selftests.c b/net/core/selftests.c
+index 8f801e6e3b91b4..561653f9d71d44 100644
+--- a/net/core/selftests.c
++++ b/net/core/selftests.c
+@@ -100,10 +100,10 @@ static struct sk_buff *net_test_get_skb(struct net_device *ndev,
+ ehdr->h_proto = htons(ETH_P_IP);
+
+ if (attr->tcp) {
++ memset(thdr, 0, sizeof(*thdr));
+ thdr->source = htons(attr->sport);
+ thdr->dest = htons(attr->dport);
+ thdr->doff = sizeof(struct tcphdr) / 4;
+- thdr->check = 0;
+ } else {
+ uhdr->source = htons(attr->sport);
+ uhdr->dest = htons(attr->dport);
+@@ -144,10 +144,18 @@ static struct sk_buff *net_test_get_skb(struct net_device *ndev,
+ attr->id = net_test_next_id;
+ shdr->id = net_test_next_id++;
+
+- if (attr->size)
+- skb_put(skb, attr->size);
+- if (attr->max_size && attr->max_size > skb->len)
+- skb_put(skb, attr->max_size - skb->len);
++ if (attr->size) {
++ void *payload = skb_put(skb, attr->size);
++
++ memset(payload, 0, attr->size);
++ }
++
++ if (attr->max_size && attr->max_size > skb->len) {
++ size_t pad_len = attr->max_size - skb->len;
++ void *pad = skb_put(skb, pad_len);
++
++ memset(pad, 0, pad_len);
++ }
+
+ skb->csum = 0;
+ skb->ip_summed = CHECKSUM_PARTIAL;
+diff --git a/net/ipv4/netfilter/nft_fib_ipv4.c b/net/ipv4/netfilter/nft_fib_ipv4.c
+index 625adbc4203708..9082ca17e845cb 100644
+--- a/net/ipv4/netfilter/nft_fib_ipv4.c
++++ b/net/ipv4/netfilter/nft_fib_ipv4.c
+@@ -71,6 +71,11 @@ void nft_fib4_eval(const struct nft_expr *expr, struct nft_regs *regs,
+ const struct net_device *oif;
+ const struct net_device *found;
+
++ if (nft_fib_can_skip(pkt)) {
++ nft_fib_store_result(dest, priv, nft_in(pkt));
++ return;
++ }
++
+ /*
+ * Do not set flowi4_oif, it restricts results (for example, asking
+ * for oif 3 will get RTN_UNICAST result even if the daddr exits
+@@ -85,12 +90,6 @@ void nft_fib4_eval(const struct nft_expr *expr, struct nft_regs *regs,
+ else
+ oif = NULL;
+
+- if (nft_hook(pkt) == NF_INET_PRE_ROUTING &&
+- nft_fib_is_loopback(pkt->skb, nft_in(pkt))) {
+- nft_fib_store_result(dest, priv, nft_in(pkt));
+- return;
+- }
+-
+ iph = skb_header_pointer(pkt->skb, noff, sizeof(_iph), &_iph);
+ if (!iph) {
+ regs->verdict.code = NFT_BREAK;
+diff --git a/net/ipv6/netfilter/nft_fib_ipv6.c b/net/ipv6/netfilter/nft_fib_ipv6.c
+index c9f1634b3838ae..7fd9d7b21cd42d 100644
+--- a/net/ipv6/netfilter/nft_fib_ipv6.c
++++ b/net/ipv6/netfilter/nft_fib_ipv6.c
+@@ -170,6 +170,11 @@ void nft_fib6_eval(const struct nft_expr *expr, struct nft_regs *regs,
+ struct rt6_info *rt;
+ int lookup_flags;
+
++ if (nft_fib_can_skip(pkt)) {
++ nft_fib_store_result(dest, priv, nft_in(pkt));
++ return;
++ }
++
+ if (priv->flags & NFTA_FIB_F_IIF)
+ oif = nft_in(pkt);
+ else if (priv->flags & NFTA_FIB_F_OIF)
+@@ -181,17 +186,13 @@ void nft_fib6_eval(const struct nft_expr *expr, struct nft_regs *regs,
+ return;
+ }
+
+- lookup_flags = nft_fib6_flowi_init(&fl6, priv, pkt, oif, iph);
+-
+- if (nft_hook(pkt) == NF_INET_PRE_ROUTING ||
+- nft_hook(pkt) == NF_INET_INGRESS) {
+- if (nft_fib_is_loopback(pkt->skb, nft_in(pkt)) ||
+- nft_fib_v6_skip_icmpv6(pkt->skb, pkt->tprot, iph)) {
+- nft_fib_store_result(dest, priv, nft_in(pkt));
+- return;
+- }
++ if (nft_fib_v6_skip_icmpv6(pkt->skb, pkt->tprot, iph)) {
++ nft_fib_store_result(dest, priv, nft_in(pkt));
++ return;
+ }
+
++ lookup_flags = nft_fib6_flowi_init(&fl6, priv, pkt, oif, iph);
++
+ *dest = 0;
+ rt = (void *)ip6_route_lookup(nft_net(pkt), &fl6, pkt->skb,
+ lookup_flags);
+diff --git a/net/mptcp/pm_userspace.c b/net/mptcp/pm_userspace.c
+index a3d477059b11c3..940ca94c88634f 100644
+--- a/net/mptcp/pm_userspace.c
++++ b/net/mptcp/pm_userspace.c
+@@ -352,7 +352,11 @@ int mptcp_pm_nl_remove_doit(struct sk_buff *skb, struct genl_info *info)
+
+ release_sock(sk);
+
+- sock_kfree_s(sk, match, sizeof(*match));
++ kfree_rcu_mightsleep(match);
++ /* Adjust sk_omem_alloc like sock_kfree_s() does, to match
++ * with allocation of this memory by sock_kmemdup()
++ */
++ atomic_sub(sizeof(*match), &sk->sk_omem_alloc);
+
+ err = 0;
+ out:
+diff --git a/net/sched/sch_hfsc.c b/net/sched/sch_hfsc.c
+index c287bf8423b47b..5bb4ab9941d6e9 100644
+--- a/net/sched/sch_hfsc.c
++++ b/net/sched/sch_hfsc.c
+@@ -958,6 +958,7 @@ hfsc_change_class(struct Qdisc *sch, u32 classid, u32 parentid,
+
+ if (cl != NULL) {
+ int old_flags;
++ int len = 0;
+
+ if (parentid) {
+ if (cl->cl_parent &&
+@@ -988,9 +989,13 @@ hfsc_change_class(struct Qdisc *sch, u32 classid, u32 parentid,
+ if (usc != NULL)
+ hfsc_change_usc(cl, usc, cur_time);
+
++ if (cl->qdisc->q.qlen != 0)
++ len = qdisc_peek_len(cl->qdisc);
++ /* Check queue length again since some qdisc implementations
++ * (e.g., netem/codel) might empty the queue during the peek
++ * operation.
++ */
+ if (cl->qdisc->q.qlen != 0) {
+- int len = qdisc_peek_len(cl->qdisc);
+-
+ if (cl->cl_flags & HFSC_RSC) {
+ if (old_flags & HFSC_RSC)
+ update_ed(cl, len);
+@@ -1632,10 +1637,16 @@ hfsc_dequeue(struct Qdisc *sch)
+ if (cl->qdisc->q.qlen != 0) {
+ /* update ed */
+ next_len = qdisc_peek_len(cl->qdisc);
+- if (realtime)
+- update_ed(cl, next_len);
+- else
+- update_d(cl, next_len);
++ /* Check queue length again since some qdisc implementations
++ * (e.g., netem/codel) might empty the queue during the peek
++ * operation.
++ */
++ if (cl->qdisc->q.qlen != 0) {
++ if (realtime)
++ update_ed(cl, next_len);
++ else
++ update_d(cl, next_len);
++ }
+ } else {
+ /* the class becomes passive */
+ eltree_remove(cl);
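The extra q.qlen checks above account for qdisc_peek_len() on children like netem or codel dropping or dequeuing packets during the peek, as the added comments say, so a length read before the peek may no longer hold afterwards. A toy sketch of why such a re-check matters (the queue type and its peek behaviour are invented for illustration):

#include <stdio.h>

struct toy_queue {
	int qlen;
	int expired;	/* entries the peek will discard, netem/codel style */
};

static int toy_peek_len(struct toy_queue *q)
{
	q->qlen -= q->expired;		/* peeking may shrink the queue */
	q->expired = 0;
	return q->qlen ? 100 : 0;	/* pretend the head packet is 100 bytes */
}

int main(void)
{
	struct toy_queue q = { .qlen = 1, .expired = 1 };
	int next_len = 0;

	if (q.qlen != 0)
		next_len = toy_peek_len(&q);
	if (q.qlen != 0)		/* re-check: the peek may have emptied the queue */
		printf("reschedule with len %d\n", next_len);
	else
		printf("queue drained during the peek, class goes passive\n");
	return 0;
}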
+diff --git a/net/tipc/monitor.c b/net/tipc/monitor.c
+index e2f19627e43d52..b45c5b91bc7afb 100644
+--- a/net/tipc/monitor.c
++++ b/net/tipc/monitor.c
+@@ -716,7 +716,8 @@ void tipc_mon_reinit_self(struct net *net)
+ if (!mon)
+ continue;
+ write_lock_bh(&mon->lock);
+- mon->self->addr = tipc_own_addr(net);
++ if (mon->self)
++ mon->self->addr = tipc_own_addr(net);
+ write_unlock_bh(&mon->lock);
+ }
+ }
+diff --git a/rust/Makefile b/rust/Makefile
+index c53a6959550196..a84c6d4b6ca21d 100644
+--- a/rust/Makefile
++++ b/rust/Makefile
+@@ -57,10 +57,14 @@ endif
+ core-cfgs = \
+ --cfg no_fp_fmt_parse
+
++# `rustc` recognizes `--remap-path-prefix` since 1.26.0, but `rustdoc` only
++# since Rust 1.81.0. Moreover, `rustdoc` ICEs on out-of-tree builds since Rust
++# 1.82.0 (https://github.com/rust-lang/rust/issues/138520). Thus workaround both
++# issues skipping the flag. The former also applies to `RUSTDOC TK`.
+ quiet_cmd_rustdoc = RUSTDOC $(if $(rustdoc_host),H, ) $<
+ cmd_rustdoc = \
+ OBJTREE=$(abspath $(objtree)) \
+- $(RUSTDOC) $(filter-out $(skip_flags),$(if $(rustdoc_host),$(rust_common_flags),$(rust_flags))) \
++ $(RUSTDOC) $(filter-out $(skip_flags) --remap-path-prefix=%,$(if $(rustdoc_host),$(rust_common_flags),$(rust_flags))) \
+ $(rustc_target_flags) -L$(objtree)/$(obj) \
+ -Zunstable-options --generate-link-to-definition \
+ --output $(rustdoc_output) \
+@@ -171,7 +175,7 @@ quiet_cmd_rustdoc_test_kernel = RUSTDOC TK $<
+ rm -rf $(objtree)/$(obj)/test/doctests/kernel; \
+ mkdir -p $(objtree)/$(obj)/test/doctests/kernel; \
+ OBJTREE=$(abspath $(objtree)) \
+- $(RUSTDOC) --test $(rust_flags) \
++ $(RUSTDOC) --test $(filter-out --remap-path-prefix=%,$(rust_flags)) \
+ -L$(objtree)/$(obj) --extern ffi --extern kernel \
+ --extern build_error --extern macros \
+ --extern bindings --extern uapi \
+diff --git a/rust/kernel/firmware.rs b/rust/kernel/firmware.rs
+index c5162fdc95ff05..74c61bd61fbc8a 100644
+--- a/rust/kernel/firmware.rs
++++ b/rust/kernel/firmware.rs
+@@ -4,7 +4,7 @@
+ //!
+ //! C header: [`include/linux/firmware.h`](srctree/include/linux/firmware.h)
+
+-use crate::{bindings, device::Device, error::Error, error::Result, str::CStr};
++use crate::{bindings, device::Device, error::Error, error::Result, ffi, str::CStr};
+ use core::ptr::NonNull;
+
+ /// # Invariants
+@@ -12,7 +12,11 @@
+ /// One of the following: `bindings::request_firmware`, `bindings::firmware_request_nowarn`,
+ /// `bindings::firmware_request_platform`, `bindings::request_firmware_direct`.
+ struct FwFunc(
+- unsafe extern "C" fn(*mut *const bindings::firmware, *const u8, *mut bindings::device) -> i32,
++ unsafe extern "C" fn(
++ *mut *const bindings::firmware,
++ *const ffi::c_char,
++ *mut bindings::device,
++ ) -> i32,
+ );
+
+ impl FwFunc {
+diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
+index cad20f0e66ee9c..14b004fd35cf3b 100644
+--- a/scripts/Makefile.lib
++++ b/scripts/Makefile.lib
+@@ -275,7 +275,7 @@ objtool-args-$(CONFIG_MITIGATION_SLS) += --sls
+ objtool-args-$(CONFIG_STACK_VALIDATION) += --stackval
+ objtool-args-$(CONFIG_HAVE_STATIC_CALL_INLINE) += --static-call
+ objtool-args-$(CONFIG_HAVE_UACCESS_VALIDATION) += --uaccess
+-objtool-args-$(CONFIG_GCOV_KERNEL) += --no-unreachable
++objtool-args-$(or $(CONFIG_GCOV_KERNEL),$(CONFIG_KCOV)) += --no-unreachable
+ objtool-args-$(CONFIG_PREFIX_SYMBOLS) += --prefix=$(CONFIG_FUNCTION_PADDING_BYTES)
+
+ objtool-args = $(objtool-args-y) \
+diff --git a/scripts/Makefile.vmlinux b/scripts/Makefile.vmlinux
+index 873caaa553134e..fb79fd6b246543 100644
+--- a/scripts/Makefile.vmlinux
++++ b/scripts/Makefile.vmlinux
+@@ -79,6 +79,10 @@ ifdef CONFIG_DEBUG_INFO_BTF
+ vmlinux: $(RESOLVE_BTFIDS)
+ endif
+
++ifdef CONFIG_BUILDTIME_TABLE_SORT
++vmlinux: scripts/sorttable
++endif
++
+ # module.builtin.ranges
+ # ---------------------------------------------------------------------------
+ ifdef CONFIG_BUILTIN_MODULE_RANGES
+diff --git a/sound/soc/codecs/aw88081.c b/sound/soc/codecs/aw88081.c
+index ad16ab6812cd3f..3dd8428f08cce9 100644
+--- a/sound/soc/codecs/aw88081.c
++++ b/sound/soc/codecs/aw88081.c
+@@ -1295,9 +1295,19 @@ static int aw88081_i2c_probe(struct i2c_client *i2c)
+ aw88081_dai, ARRAY_SIZE(aw88081_dai));
+ }
+
++#if defined(CONFIG_OF)
++static const struct of_device_id aw88081_of_match[] = {
++ { .compatible = "awinic,aw88081" },
++ { .compatible = "awinic,aw88083" },
++ { }
++};
++MODULE_DEVICE_TABLE(of, aw88081_of_match);
++#endif
++
+ static struct i2c_driver aw88081_i2c_driver = {
+ .driver = {
+ .name = AW88081_I2C_NAME,
++ .of_match_table = of_match_ptr(aw88081_of_match),
+ },
+ .probe = aw88081_i2c_probe,
+ .id_table = aw88081_i2c_id,
+diff --git a/sound/soc/codecs/wcd934x.c b/sound/soc/codecs/wcd934x.c
+index 910852eb9698c1..c7f1b28f3b2302 100644
+--- a/sound/soc/codecs/wcd934x.c
++++ b/sound/soc/codecs/wcd934x.c
+@@ -2273,7 +2273,7 @@ static irqreturn_t wcd934x_slim_irq_handler(int irq, void *data)
+ {
+ struct wcd934x_codec *wcd = data;
+ unsigned long status = 0;
+- int i, j, port_id;
++ unsigned int i, j, port_id;
+ unsigned int val, int_val = 0;
+ irqreturn_t ret = IRQ_NONE;
+ bool tx;
+diff --git a/sound/soc/fsl/fsl_asrc_dma.c b/sound/soc/fsl/fsl_asrc_dma.c
+index f501f47242fb0b..1bba48318e2ddf 100644
+--- a/sound/soc/fsl/fsl_asrc_dma.c
++++ b/sound/soc/fsl/fsl_asrc_dma.c
+@@ -156,11 +156,24 @@ static int fsl_asrc_dma_hw_params(struct snd_soc_component *component,
+ for_each_dpcm_be(rtd, stream, dpcm) {
+ struct snd_soc_pcm_runtime *be = dpcm->be;
+ struct snd_pcm_substream *substream_be;
+- struct snd_soc_dai *dai = snd_soc_rtd_to_cpu(be, 0);
++ struct snd_soc_dai *dai_cpu = snd_soc_rtd_to_cpu(be, 0);
++ struct snd_soc_dai *dai_codec = snd_soc_rtd_to_codec(be, 0);
++ struct snd_soc_dai *dai;
+
+ if (dpcm->fe != rtd)
+ continue;
+
++ /*
++ * With audio graph card, original cpu dai is changed to codec
++ * device in backend, so if cpu dai is dummy device in backend,
++ * get the codec dai device, which is the real hardware device
++ * connected.
++ */
++ if (!snd_soc_dai_is_dummy(dai_cpu))
++ dai = dai_cpu;
++ else
++ dai = dai_codec;
++
+ substream_be = snd_soc_dpcm_get_substream(be, stream);
+ dma_params_be = snd_soc_dai_get_dma_data(dai, substream_be);
+ dev_be = dai->dev;
+diff --git a/sound/virtio/virtio_pcm.c b/sound/virtio/virtio_pcm.c
+index 967e4c45be9bb3..2f7c5e709f0755 100644
+--- a/sound/virtio/virtio_pcm.c
++++ b/sound/virtio/virtio_pcm.c
+@@ -339,6 +339,21 @@ int virtsnd_pcm_parse_cfg(struct virtio_snd *snd)
+ if (!snd->substreams)
+ return -ENOMEM;
+
++ /*
++ * Initialize critical substream fields early in case we hit an
++ * error path and end up trying to clean up uninitialized structures
++ * elsewhere.
++ */
++ for (i = 0; i < snd->nsubstreams; ++i) {
++ struct virtio_pcm_substream *vss = &snd->substreams[i];
++
++ vss->snd = snd;
++ vss->sid = i;
++ INIT_WORK(&vss->elapsed_period, virtsnd_pcm_period_elapsed);
++ init_waitqueue_head(&vss->msg_empty);
++ spin_lock_init(&vss->lock);
++ }
++
+ info = kcalloc(snd->nsubstreams, sizeof(*info), GFP_KERNEL);
+ if (!info)
+ return -ENOMEM;
+@@ -352,12 +367,6 @@ int virtsnd_pcm_parse_cfg(struct virtio_snd *snd)
+ struct virtio_pcm_substream *vss = &snd->substreams[i];
+ struct virtio_pcm *vpcm;
+
+- vss->snd = snd;
+- vss->sid = i;
+- INIT_WORK(&vss->elapsed_period, virtsnd_pcm_period_elapsed);
+- init_waitqueue_head(&vss->msg_empty);
+- spin_lock_init(&vss->lock);
+-
+ rc = virtsnd_pcm_build_hw(vss, &info[i]);
+ if (rc)
+ goto on_exit;
+diff --git a/tools/arch/x86/lib/x86-opcode-map.txt b/tools/arch/x86/lib/x86-opcode-map.txt
+index caedb3ef6688fc..f5dd84eb55dcda 100644
+--- a/tools/arch/x86/lib/x86-opcode-map.txt
++++ b/tools/arch/x86/lib/x86-opcode-map.txt
+@@ -996,8 +996,8 @@ AVXcode: 4
+ 83: Grp1 Ev,Ib (1A),(es)
+ # CTESTSCC instructions are: CTESTB, CTESTBE, CTESTF, CTESTL, CTESTLE, CTESTNB, CTESTNBE, CTESTNL,
+ # CTESTNLE, CTESTNO, CTESTNS, CTESTNZ, CTESTO, CTESTS, CTESTT, CTESTZ
+-84: CTESTSCC (ev)
+-85: CTESTSCC (es) | CTESTSCC (66),(es)
++84: CTESTSCC Eb,Gb (ev)
++85: CTESTSCC Ev,Gv (es) | CTESTSCC Ev,Gv (66),(es)
+ 88: POPCNT Gv,Ev (es) | POPCNT Gv,Ev (66),(es)
+ 8f: POP2 Bq,Rq (000),(11B),(ev)
+ a5: SHLD Ev,Gv,CL (es) | SHLD Ev,Gv,CL (66),(es)
+diff --git a/tools/bpf/bpftool/prog.c b/tools/bpf/bpftool/prog.c
+index e71be67f1d8658..52ffb74ae4e89a 100644
+--- a/tools/bpf/bpftool/prog.c
++++ b/tools/bpf/bpftool/prog.c
+@@ -1928,6 +1928,7 @@ static int do_loader(int argc, char **argv)
+
+ obj = bpf_object__open_file(file, &open_opts);
+ if (!obj) {
++ err = -1;
+ p_err("failed to open object file");
+ goto err_close_obj;
+ }
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index 7b535e119cafaa..c51be0f265ac60 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -1189,12 +1189,15 @@ static const char *uaccess_safe_builtin[] = {
+ "__ubsan_handle_load_invalid_value",
+ /* STACKLEAK */
+ "stackleak_track_stack",
++ /* TRACE_BRANCH_PROFILING */
++ "ftrace_likely_update",
++ /* STACKPROTECTOR */
++ "__stack_chk_fail",
+ /* misc */
+ "csum_partial_copy_generic",
+ "copy_mc_fragile",
+ "copy_mc_fragile_handle_tail",
+ "copy_mc_enhanced_fast_string",
+- "ftrace_likely_update", /* CONFIG_TRACE_BRANCH_PROFILING */
+ "rep_stos_alternative",
+ "rep_movs_alternative",
+ "__copy_user_nocache",
+@@ -1479,6 +1482,8 @@ static int add_jump_destinations(struct objtool_file *file)
+ unsigned long dest_off;
+
+ for_each_insn(file, insn) {
++ struct symbol *func = insn_func(insn);
++
+ if (insn->jump_dest) {
+ /*
+ * handle_group_alt() may have previously set
+@@ -1502,7 +1507,7 @@ static int add_jump_destinations(struct objtool_file *file)
+ } else if (reloc->sym->return_thunk) {
+ add_return_call(file, insn, true);
+ continue;
+- } else if (insn_func(insn)) {
++ } else if (func) {
+ /*
+ * External sibling call or internal sibling call with
+ * STT_FUNC reloc.
+@@ -1535,6 +1540,15 @@ static int add_jump_destinations(struct objtool_file *file)
+ continue;
+ }
+
++ /*
++ * GCOV/KCOV dead code can jump to the end of the
++ * function/section.
++ */
++ if (file->ignore_unreachables && func &&
++ dest_sec == insn->sec &&
++ dest_off == func->offset + func->len)
++ continue;
++
+ WARN_INSN(insn, "can't find jump dest instruction at %s+0x%lx",
+ dest_sec->name, dest_off);
+ return -1;
+@@ -1559,8 +1573,7 @@ static int add_jump_destinations(struct objtool_file *file)
+ /*
+ * Cross-function jump.
+ */
+- if (insn_func(insn) && insn_func(jump_dest) &&
+- insn_func(insn) != insn_func(jump_dest)) {
++ if (func && insn_func(jump_dest) && func != insn_func(jump_dest)) {
+
+ /*
+ * For GCC 8+, create parent/child links for any cold
+@@ -1577,10 +1590,10 @@ static int add_jump_destinations(struct objtool_file *file)
+ * case where the parent function's only reference to a
+ * subfunction is through a jump table.
+ */
+- if (!strstr(insn_func(insn)->name, ".cold") &&
++ if (!strstr(func->name, ".cold") &&
+ strstr(insn_func(jump_dest)->name, ".cold")) {
+- insn_func(insn)->cfunc = insn_func(jump_dest);
+- insn_func(jump_dest)->pfunc = insn_func(insn);
++ func->cfunc = insn_func(jump_dest);
++ insn_func(jump_dest)->pfunc = func;
+ }
+ }
+
+@@ -3490,6 +3503,9 @@ static int validate_branch(struct objtool_file *file, struct symbol *func,
+ !strncmp(func->name, "__pfx_", 6))
+ return 0;
+
++ if (file->ignore_unreachables)
++ return 0;
++
+ WARN("%s() falls through to next function %s()",
+ func->name, insn_func(insn)->name);
+ return 1;
+@@ -3709,6 +3725,9 @@ static int validate_branch(struct objtool_file *file, struct symbol *func,
+ if (!next_insn) {
+ if (state.cfi.cfa.base == CFI_UNDEFINED)
+ return 0;
++ if (file->ignore_unreachables)
++ return 0;
++
+ WARN("%s: unexpected end of section", sec->name);
+ return 1;
+ }
+@@ -3861,6 +3880,9 @@ static int validate_unret(struct objtool_file *file, struct instruction *insn)
+ break;
+ }
+
++ if (insn->dead_end)
++ return 0;
++
+ if (!next) {
+ WARN_INSN(insn, "teh end!");
+ return -1;
+diff --git a/tools/testing/selftests/bpf/cap_helpers.c b/tools/testing/selftests/bpf/cap_helpers.c
+index d5ac507401d7cd..98f840c3a38f7e 100644
+--- a/tools/testing/selftests/bpf/cap_helpers.c
++++ b/tools/testing/selftests/bpf/cap_helpers.c
+@@ -19,7 +19,7 @@ int cap_enable_effective(__u64 caps, __u64 *old_caps)
+
+ err = capget(&hdr, data);
+ if (err)
+- return err;
++ return -errno;
+
+ if (old_caps)
+ *old_caps = (__u64)(data[1].effective) << 32 | data[0].effective;
+@@ -32,7 +32,7 @@ int cap_enable_effective(__u64 caps, __u64 *old_caps)
+ data[1].effective |= cap1;
+ err = capset(&hdr, data);
+ if (err)
+- return err;
++ return -errno;
+
+ return 0;
+ }
+@@ -49,7 +49,7 @@ int cap_disable_effective(__u64 caps, __u64 *old_caps)
+
+ err = capget(&hdr, data);
+ if (err)
+- return err;
++ return -errno;
+
+ if (old_caps)
+ *old_caps = (__u64)(data[1].effective) << 32 | data[0].effective;
+@@ -61,7 +61,7 @@ int cap_disable_effective(__u64 caps, __u64 *old_caps)
+ data[1].effective &= ~cap1;
+ err = capset(&hdr, data);
+ if (err)
+- return err;
++ return -errno;
+
+ return 0;
+ }
+diff --git a/tools/testing/selftests/bpf/cap_helpers.h b/tools/testing/selftests/bpf/cap_helpers.h
+index 6d163530cb0fd1..8dcb28557f762d 100644
+--- a/tools/testing/selftests/bpf/cap_helpers.h
++++ b/tools/testing/selftests/bpf/cap_helpers.h
+@@ -4,6 +4,7 @@
+
+ #include <linux/types.h>
+ #include <linux/capability.h>
++#include <errno.h>
+
+ #ifndef CAP_PERFMON
+ #define CAP_PERFMON 38
+diff --git a/tools/testing/selftests/bpf/network_helpers.c b/tools/testing/selftests/bpf/network_helpers.c
+index 80844a5fb1feef..95e943270f3596 100644
+--- a/tools/testing/selftests/bpf/network_helpers.c
++++ b/tools/testing/selftests/bpf/network_helpers.c
+@@ -771,12 +771,13 @@ static const char *pkt_type_str(u16 pkt_type)
+ return "Unknown";
+ }
+
++#define MAX_FLAGS_STRLEN 21
+ /* Show the information of the transport layer in the packet */
+ static void show_transport(const u_char *packet, u16 len, u32 ifindex,
+ const char *src_addr, const char *dst_addr,
+ u16 proto, bool ipv6, u8 pkt_type)
+ {
+- char *ifname, _ifname[IF_NAMESIZE];
++ char *ifname, _ifname[IF_NAMESIZE], flags[MAX_FLAGS_STRLEN] = "";
+ const char *transport_str;
+ u16 src_port, dst_port;
+ struct udphdr *udp;
+@@ -817,29 +818,21 @@ static void show_transport(const u_char *packet, u16 len, u32 ifindex,
+
+ /* TCP or UDP*/
+
+- flockfile(stdout);
++ if (proto == IPPROTO_TCP)
++ snprintf(flags, MAX_FLAGS_STRLEN, "%s%s%s%s",
++ tcp->fin ? ", FIN" : "",
++ tcp->syn ? ", SYN" : "",
++ tcp->rst ? ", RST" : "",
++ tcp->ack ? ", ACK" : "");
++
+ if (ipv6)
+- printf("%-7s %-3s IPv6 %s.%d > %s.%d: %s, length %d",
++ printf("%-7s %-3s IPv6 %s.%d > %s.%d: %s, length %d%s\n",
+ ifname, pkt_type_str(pkt_type), src_addr, src_port,
+- dst_addr, dst_port, transport_str, len);
++ dst_addr, dst_port, transport_str, len, flags);
+ else
+- printf("%-7s %-3s IPv4 %s:%d > %s:%d: %s, length %d",
++ printf("%-7s %-3s IPv4 %s:%d > %s:%d: %s, length %d%s\n",
+ ifname, pkt_type_str(pkt_type), src_addr, src_port,
+- dst_addr, dst_port, transport_str, len);
+-
+- if (proto == IPPROTO_TCP) {
+- if (tcp->fin)
+- printf(", FIN");
+- if (tcp->syn)
+- printf(", SYN");
+- if (tcp->rst)
+- printf(", RST");
+- if (tcp->ack)
+- printf(", ACK");
+- }
+-
+- printf("\n");
+- funlockfile(stdout);
++ dst_addr, dst_port, transport_str, len, flags);
+ }
+
+ static void show_ipv6_packet(const u_char *packet, u32 ifindex, u8 pkt_type)
+diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c
+index 8a0e1ff8a2dc65..ecc320e0455131 100644
+--- a/tools/testing/selftests/bpf/prog_tests/verifier.c
++++ b/tools/testing/selftests/bpf/prog_tests/verifier.c
+@@ -121,7 +121,7 @@ static void run_tests_aux(const char *skel_name,
+ /* test_verifier tests are executed w/o CAP_SYS_ADMIN, do the same here */
+ err = cap_disable_effective(1ULL << CAP_SYS_ADMIN, &old_caps);
+ if (err) {
+- PRINT_FAIL("failed to drop CAP_SYS_ADMIN: %i, %s\n", err, strerror(err));
++ PRINT_FAIL("failed to drop CAP_SYS_ADMIN: %i, %s\n", err, strerror(-err));
+ return;
+ }
+
+@@ -131,7 +131,7 @@ static void run_tests_aux(const char *skel_name,
+
+ err = cap_enable_effective(old_caps, NULL);
+ if (err)
+- PRINT_FAIL("failed to restore CAP_SYS_ADMIN: %i, %s\n", err, strerror(err));
++ PRINT_FAIL("failed to restore CAP_SYS_ADMIN: %i, %s\n", err, strerror(-err));
+ }
+
+ #define RUN(skel) run_tests_aux(#skel, skel##__elf_bytes, NULL)
+diff --git a/tools/testing/selftests/bpf/test_loader.c b/tools/testing/selftests/bpf/test_loader.c
+index 53b06647cf57db..8a403e5aa31450 100644
+--- a/tools/testing/selftests/bpf/test_loader.c
++++ b/tools/testing/selftests/bpf/test_loader.c
+@@ -773,7 +773,7 @@ static int drop_capabilities(struct cap_state *caps)
+
+ err = cap_disable_effective(caps_to_drop, &caps->old_caps);
+ if (err) {
+- PRINT_FAIL("failed to drop capabilities: %i, %s\n", err, strerror(err));
++ PRINT_FAIL("failed to drop capabilities: %i, %s\n", err, strerror(-err));
+ return err;
+ }
+
+@@ -790,7 +790,7 @@ static int restore_capabilities(struct cap_state *caps)
+
+ err = cap_enable_effective(caps->old_caps, NULL);
+ if (err)
+- PRINT_FAIL("failed to restore capabilities: %i, %s\n", err, strerror(err));
++ PRINT_FAIL("failed to restore capabilities: %i, %s\n", err, strerror(-err));
+ caps->initialized = false;
+ return err;
+ }
+@@ -959,7 +959,7 @@ void run_subtest(struct test_loader *tester,
+ if (subspec->caps) {
+ err = cap_enable_effective(subspec->caps, NULL);
+ if (err) {
+- PRINT_FAIL("failed to set capabilities: %i, %s\n", err, strerror(err));
++ PRINT_FAIL("failed to set capabilities: %i, %s\n", err, strerror(-err));
+ goto subtest_cleanup;
+ }
+ }
+diff --git a/tools/testing/selftests/mincore/mincore_selftest.c b/tools/testing/selftests/mincore/mincore_selftest.c
+index 0fd4b00bd345b5..17ed3e9917ca17 100644
+--- a/tools/testing/selftests/mincore/mincore_selftest.c
++++ b/tools/testing/selftests/mincore/mincore_selftest.c
+@@ -261,9 +261,6 @@ TEST(check_file_mmap)
+ TH_LOG("No read-ahead pages found in memory");
+ }
+
+- EXPECT_LT(i, vec_size) {
+- TH_LOG("Read-ahead pages reached the end of the file");
+- }
+ /*
+ * End of the readahead window. The rest of the pages shouldn't
+ * be in memory.
+diff --git a/tools/testing/selftests/pcie_bwctrl/Makefile b/tools/testing/selftests/pcie_bwctrl/Makefile
+index 48ec048f47afda..277f92f9d7537a 100644
+--- a/tools/testing/selftests/pcie_bwctrl/Makefile
++++ b/tools/testing/selftests/pcie_bwctrl/Makefile
+@@ -1,2 +1,3 @@
+-TEST_PROGS = set_pcie_cooling_state.sh set_pcie_speed.sh
++TEST_PROGS = set_pcie_cooling_state.sh
++TEST_FILES = set_pcie_speed.sh
+ include ../lib.mk
+diff --git a/tools/testing/selftests/ublk/test_stripe_04.sh b/tools/testing/selftests/ublk/test_stripe_04.sh
+new file mode 100755
+index 00000000000000..1f2b642381d179
+--- /dev/null
++++ b/tools/testing/selftests/ublk/test_stripe_04.sh
+@@ -0,0 +1,24 @@
++#!/bin/bash
++# SPDX-License-Identifier: GPL-2.0
++
++. "$(cd "$(dirname "$0")" && pwd)"/test_common.sh
++
++TID="stripe_04"
++ERR_CODE=0
++
++_prep_test "stripe" "mkfs & mount & umount on zero copy"
++
++backfile_0=$(_create_backfile 256M)
++backfile_1=$(_create_backfile 256M)
++dev_id=$(_add_ublk_dev -t stripe -z -q 2 "$backfile_0" "$backfile_1")
++_check_add_dev $TID $? "$backfile_0" "$backfile_1"
++
++_mkfs_mount_test /dev/ublkb"${dev_id}"
++ERR_CODE=$?
++
++_cleanup_test "stripe"
++
++_remove_backfile "$backfile_0"
++_remove_backfile "$backfile_1"
++
++_show_result $TID $ERR_CODE
* [gentoo-commits] proj/linux-patches:6.14 commit in: /
@ 2025-05-09 10:55 Mike Pagano
0 siblings, 0 replies; 21+ messages in thread
From: Mike Pagano @ 2025-05-09 10:55 UTC (permalink / raw
To: gentoo-commits
commit: e98ab37582d0bc3ff92e990f38ff2ad7d034237c
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri May 9 10:55:40 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri May 9 10:55:40 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=e98ab375
Linux patch 6.14.6
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1005_linux-6.14.6.patch | 9318 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 9322 insertions(+)
diff --git a/0000_README b/0000_README
index 5de58938..cbaf7be5 100644
--- a/0000_README
+++ b/0000_README
@@ -62,6 +62,10 @@ Patch: 1004_linux-6.14.5.patch
From: https://www.kernel.org
Desc: Linux 6.14.5
+Patch: 1005_linux-6.14.6.patch
+From: https://www.kernel.org
+Desc: Linux 6.14.6
+
Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
Desc: Enable link security restrictions by default.
diff --git a/1005_linux-6.14.6.patch b/1005_linux-6.14.6.patch
new file mode 100644
index 00000000..f70cfd02
--- /dev/null
+++ b/1005_linux-6.14.6.patch
@@ -0,0 +1,9318 @@
+diff --git a/Makefile b/Makefile
+index 87835d7abbceb2..6c3233a21380ce 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 14
+-SUBLEVEL = 5
++SUBLEVEL = 6
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+diff --git a/arch/arm/boot/dts/nxp/imx/imx6ul-imx6ull-opos6ul.dtsi b/arch/arm/boot/dts/nxp/imx/imx6ul-imx6ull-opos6ul.dtsi
+index f2386dcb9ff2c0..dda4fa91b2f2cc 100644
+--- a/arch/arm/boot/dts/nxp/imx/imx6ul-imx6ull-opos6ul.dtsi
++++ b/arch/arm/boot/dts/nxp/imx/imx6ul-imx6ull-opos6ul.dtsi
+@@ -40,6 +40,9 @@ ethphy1: ethernet-phy@1 {
+ reg = <1>;
+ interrupt-parent = <&gpio4>;
+ interrupts = <16 IRQ_TYPE_LEVEL_LOW>;
++ micrel,led-mode = <1>;
++ clocks = <&clks IMX6UL_CLK_ENET_REF>;
++ clock-names = "rmii-ref";
+ status = "okay";
+ };
+ };
+diff --git a/arch/arm64/boot/dts/freescale/imx95.dtsi b/arch/arm64/boot/dts/freescale/imx95.dtsi
+index 6b8470cb3461a2..0e6a9e639d7696 100644
+--- a/arch/arm64/boot/dts/freescale/imx95.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx95.dtsi
+@@ -1542,7 +1542,7 @@ pcie0: pcie@4c300000 {
+ reg = <0 0x4c300000 0 0x10000>,
+ <0 0x60100000 0 0xfe00000>,
+ <0 0x4c360000 0 0x10000>,
+- <0 0x4c340000 0 0x2000>;
++ <0 0x4c340000 0 0x4000>;
+ reg-names = "dbi", "config", "atu", "app";
+ ranges = <0x81000000 0x0 0x00000000 0x0 0x6ff00000 0 0x00100000>,
+ <0x82000000 0x0 0x10000000 0x9 0x10000000 0 0x10000000>;
+@@ -1582,7 +1582,7 @@ pcie0_ep: pcie-ep@4c300000 {
+ reg = <0 0x4c300000 0 0x10000>,
+ <0 0x4c360000 0 0x1000>,
+ <0 0x4c320000 0 0x1000>,
+- <0 0x4c340000 0 0x2000>,
++ <0 0x4c340000 0 0x4000>,
+ <0 0x4c370000 0 0x10000>,
+ <0x9 0 1 0>;
+ reg-names = "dbi","atu", "dbi2", "app", "dma", "addr_space";
+@@ -1609,7 +1609,7 @@ pcie1: pcie@4c380000 {
+ reg = <0 0x4c380000 0 0x10000>,
+ <8 0x80100000 0 0xfe00000>,
+ <0 0x4c3e0000 0 0x10000>,
+- <0 0x4c3c0000 0 0x2000>;
++ <0 0x4c3c0000 0 0x4000>;
+ reg-names = "dbi", "config", "atu", "app";
+ ranges = <0x81000000 0 0x00000000 0x8 0x8ff00000 0 0x00100000>,
+ <0x82000000 0 0x10000000 0xa 0x10000000 0 0x10000000>;
+@@ -1649,7 +1649,7 @@ pcie1_ep: pcie-ep@4c380000 {
+ reg = <0 0x4c380000 0 0x10000>,
+ <0 0x4c3e0000 0 0x1000>,
+ <0 0x4c3a0000 0 0x1000>,
+- <0 0x4c3c0000 0 0x2000>,
++ <0 0x4c3c0000 0 0x4000>,
+ <0 0x4c3f0000 0 0x10000>,
+ <0xa 0 1 0>;
+ reg-names = "dbi", "atu", "dbi2", "app", "dma", "addr_space";
+diff --git a/arch/arm64/boot/dts/st/stm32mp251.dtsi b/arch/arm64/boot/dts/st/stm32mp251.dtsi
+index f3c6cdfd7008c5..87110f91e4895a 100644
+--- a/arch/arm64/boot/dts/st/stm32mp251.dtsi
++++ b/arch/arm64/boot/dts/st/stm32mp251.dtsi
+@@ -115,14 +115,13 @@ scmi_vdda18adc: regulator@7 {
+ };
+
+ intc: interrupt-controller@4ac00000 {
+- compatible = "arm,cortex-a7-gic";
++ compatible = "arm,gic-400";
+ #interrupt-cells = <3>;
+- #address-cells = <1>;
+ interrupt-controller;
+ reg = <0x0 0x4ac10000 0x0 0x1000>,
+- <0x0 0x4ac20000 0x0 0x2000>,
+- <0x0 0x4ac40000 0x0 0x2000>,
+- <0x0 0x4ac60000 0x0 0x2000>;
++ <0x0 0x4ac20000 0x0 0x20000>,
++ <0x0 0x4ac40000 0x0 0x20000>,
++ <0x0 0x4ac60000 0x0 0x20000>;
+ };
+
+ psci {
+diff --git a/arch/arm64/kernel/proton-pack.c b/arch/arm64/kernel/proton-pack.c
+index 0f51fd10b4b063..30e79f111b35e3 100644
+--- a/arch/arm64/kernel/proton-pack.c
++++ b/arch/arm64/kernel/proton-pack.c
+@@ -879,10 +879,12 @@ static u8 spectre_bhb_loop_affected(void)
+ static const struct midr_range spectre_bhb_k132_list[] = {
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_X3),
+ MIDR_ALL_VERSIONS(MIDR_NEOVERSE_V2),
++ {},
+ };
+ static const struct midr_range spectre_bhb_k38_list[] = {
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_A715),
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_A720),
++ {},
+ };
+ static const struct midr_range spectre_bhb_k32_list[] = {
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_A78),
+diff --git a/arch/parisc/math-emu/driver.c b/arch/parisc/math-emu/driver.c
+index 34495446e051c2..71829cb7bc812a 100644
+--- a/arch/parisc/math-emu/driver.c
++++ b/arch/parisc/math-emu/driver.c
+@@ -97,9 +97,19 @@ handle_fpe(struct pt_regs *regs)
+
+ memcpy(regs->fr, frcopy, sizeof regs->fr);
+ if (signalcode != 0) {
+- force_sig_fault(signalcode >> 24, signalcode & 0xffffff,
+- (void __user *) regs->iaoq[0]);
+- return -1;
++ int sig = signalcode >> 24;
++
++ if (sig == SIGFPE) {
++ /*
++ * Clear floating point trap bit to avoid trapping
++ * again on the first floating-point instruction in
++ * the userspace signal handler.
++ */
++ regs->fr[0] &= ~(1ULL << 38);
++ }
++ force_sig_fault(sig, signalcode & 0xffffff,
++ (void __user *) regs->iaoq[0]);
++ return -1;
+ }
+
+ return signalcode ? -1 : 0;
+diff --git a/arch/powerpc/boot/wrapper b/arch/powerpc/boot/wrapper
+index 1db60fe13802db..3d8dc822282ac8 100755
+--- a/arch/powerpc/boot/wrapper
++++ b/arch/powerpc/boot/wrapper
+@@ -234,10 +234,8 @@ fi
+
+ # suppress some warnings in recent ld versions
+ nowarn="-z noexecstack"
+-if ! ld_is_lld; then
+- if [ "$LD_VERSION" -ge "$(echo 2.39 | ld_version)" ]; then
+- nowarn="$nowarn --no-warn-rwx-segments"
+- fi
++if "${CROSS}ld" -v --no-warn-rwx-segments >/dev/null 2>&1; then
++ nowarn="$nowarn --no-warn-rwx-segments"
+ fi
+
+ platformo=$object/"$platform".o
+diff --git a/arch/powerpc/kernel/module_64.c b/arch/powerpc/kernel/module_64.c
+index 34a5aec4908fba..126bf3b06ab7e2 100644
+--- a/arch/powerpc/kernel/module_64.c
++++ b/arch/powerpc/kernel/module_64.c
+@@ -258,10 +258,6 @@ static unsigned long get_stubs_size(const Elf64_Ehdr *hdr,
+ break;
+ }
+ }
+- if (i == hdr->e_shnum) {
+- pr_err("%s: doesn't contain __patchable_function_entries.\n", me->name);
+- return -ENOEXEC;
+- }
+ #endif
+
+ pr_debug("Looks like a total of %lu stubs, max\n", relocs);
+diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
+index 311e2112d782ea..128c011afc4818 100644
+--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
++++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
+@@ -1120,6 +1120,19 @@ int __meminit radix__vmemmap_populate(unsigned long start, unsigned long end, in
+ pmd_t *pmd;
+ pte_t *pte;
+
++ /*
++ * Make sure we align the start vmemmap addr so that we calculate
++ * the correct start_pfn in altmap boundary check to decide whether
++ * we should use altmap or RAM based backing memory allocation. Also
++ * the address needs to be aligned for the set_pte operation.
++ *
++ * If the start addr is already PMD_SIZE aligned we will try to use
++ * a pmd mapping. We don't want to be too aggressive here because
++ * that will cause more allocations in RAM. So only if the namespace
++ * vmemmap start addr is PMD_SIZE aligned we will use PMD mapping.
++ */
++
++ start = ALIGN_DOWN(start, PAGE_SIZE);
+ for (addr = start; addr < end; addr = next) {
+ next = pmd_addr_end(addr, end);
+
+@@ -1145,8 +1158,8 @@ int __meminit radix__vmemmap_populate(unsigned long start, unsigned long end, in
+ * in altmap block allocation failures, in which case
+ * we fallback to RAM for vmemmap allocation.
+ */
+- if (altmap && (!IS_ALIGNED(addr, PMD_SIZE) ||
+- altmap_cross_boundary(altmap, addr, PMD_SIZE))) {
++ if (!IS_ALIGNED(addr, PMD_SIZE) || (altmap &&
++ altmap_cross_boundary(altmap, addr, PMD_SIZE))) {
+ /*
+ * make sure we don't create altmap mappings
+ * covering things outside the device.
+diff --git a/arch/x86/boot/compressed/mem.c b/arch/x86/boot/compressed/mem.c
+index f676156d9f3db4..0e9f84ab4bdcd1 100644
+--- a/arch/x86/boot/compressed/mem.c
++++ b/arch/x86/boot/compressed/mem.c
+@@ -34,14 +34,11 @@ static bool early_is_tdx_guest(void)
+
+ void arch_accept_memory(phys_addr_t start, phys_addr_t end)
+ {
+- static bool sevsnp;
+-
+ /* Platform-specific memory-acceptance call goes here */
+ if (early_is_tdx_guest()) {
+ if (!tdx_accept_memory(start, end))
+ panic("TDX: Failed to accept memory\n");
+- } else if (sevsnp || (sev_get_status() & MSR_AMD64_SEV_SNP_ENABLED)) {
+- sevsnp = true;
++ } else if (early_is_sevsnp_guest()) {
+ snp_accept_memory(start, end);
+ } else {
+ error("Cannot accept memory: unknown platform\n");
+diff --git a/arch/x86/boot/compressed/sev.c b/arch/x86/boot/compressed/sev.c
+index 89ba168f4f0f02..0003e4416efd16 100644
+--- a/arch/x86/boot/compressed/sev.c
++++ b/arch/x86/boot/compressed/sev.c
+@@ -645,3 +645,43 @@ void sev_prep_identity_maps(unsigned long top_level_pgt)
+
+ sev_verify_cbit(top_level_pgt);
+ }
++
++bool early_is_sevsnp_guest(void)
++{
++ static bool sevsnp;
++
++ if (sevsnp)
++ return true;
++
++ if (!(sev_get_status() & MSR_AMD64_SEV_SNP_ENABLED))
++ return false;
++
++ sevsnp = true;
++
++ if (!snp_vmpl) {
++ unsigned int eax, ebx, ecx, edx;
++
++ /*
++ * CPUID Fn8000_001F_EAX[28] - SVSM support
++ */
++ eax = 0x8000001f;
++ ecx = 0;
++ native_cpuid(&eax, &ebx, &ecx, &edx);
++ if (eax & BIT(28)) {
++ struct msr m;
++
++ /* Obtain the address of the calling area to use */
++ boot_rdmsr(MSR_SVSM_CAA, &m);
++ boot_svsm_caa = (void *)m.q;
++ boot_svsm_caa_pa = m.q;
++
++ /*
++ * The real VMPL level cannot be discovered, but the
++ * memory acceptance routines make no use of that so
++ * any non-zero value suffices here.
++ */
++ snp_vmpl = U8_MAX;
++ }
++ }
++ return true;
++}
+diff --git a/arch/x86/boot/compressed/sev.h b/arch/x86/boot/compressed/sev.h
+index 4e463f33186df4..d3900384b8abb5 100644
+--- a/arch/x86/boot/compressed/sev.h
++++ b/arch/x86/boot/compressed/sev.h
+@@ -13,12 +13,14 @@
+ bool sev_snp_enabled(void);
+ void snp_accept_memory(phys_addr_t start, phys_addr_t end);
+ u64 sev_get_status(void);
++bool early_is_sevsnp_guest(void);
+
+ #else
+
+ static inline bool sev_snp_enabled(void) { return false; }
+ static inline void snp_accept_memory(phys_addr_t start, phys_addr_t end) { }
+ static inline u64 sev_get_status(void) { return 0; }
++static inline bool early_is_sevsnp_guest(void) { return false; }
+
+ #endif
+
+diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
+index ce8d4fdf54fbb0..a46b792a171cbe 100644
+--- a/arch/x86/events/core.c
++++ b/arch/x86/events/core.c
+@@ -753,7 +753,7 @@ void x86_pmu_enable_all(int added)
+ }
+ }
+
+-static inline int is_x86_event(struct perf_event *event)
++int is_x86_event(struct perf_event *event)
+ {
+ int i;
+
+diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
+index 9e8de416d1f023..47741a0c6dd6b8 100644
+--- a/arch/x86/events/intel/core.c
++++ b/arch/x86/events/intel/core.c
+@@ -4341,7 +4341,7 @@ static struct perf_guest_switch_msr *intel_guest_get_msrs(int *nr, void *data)
+ arr[pebs_enable] = (struct perf_guest_switch_msr){
+ .msr = MSR_IA32_PEBS_ENABLE,
+ .host = cpuc->pebs_enabled & ~cpuc->intel_ctrl_guest_mask,
+- .guest = pebs_mask & ~cpuc->intel_ctrl_host_mask,
++ .guest = pebs_mask & ~cpuc->intel_ctrl_host_mask & kvm_pmu->pebs_enable,
+ };
+
+ if (arr[pebs_enable].host) {
+diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
+index 1dfa78a30266c9..7bb5cf514d77ab 100644
+--- a/arch/x86/events/perf_event.h
++++ b/arch/x86/events/perf_event.h
+@@ -110,9 +110,16 @@ static inline bool is_topdown_event(struct perf_event *event)
+ return is_metric_event(event) || is_slots_event(event);
+ }
+
++int is_x86_event(struct perf_event *event);
++
++static inline bool check_leader_group(struct perf_event *leader, int flags)
++{
++ return is_x86_event(leader) ? !!(leader->hw.flags & flags) : false;
++}
++
+ static inline bool is_branch_counters_group(struct perf_event *event)
+ {
+- return event->group_leader->hw.flags & PERF_X86_EVENT_BRANCH_COUNTERS;
++ return check_leader_group(event->group_leader, PERF_X86_EVENT_BRANCH_COUNTERS);
+ }
+
+ struct amd_nb {
+diff --git a/drivers/accel/ivpu/ivpu_drv.c b/drivers/accel/ivpu/ivpu_drv.c
+index 3e56ce8bc2c1d5..93c3687d30b795 100644
+--- a/drivers/accel/ivpu/ivpu_drv.c
++++ b/drivers/accel/ivpu/ivpu_drv.c
+@@ -36,8 +36,6 @@
+ #define DRIVER_VERSION_STR "1.0.0 " UTS_RELEASE
+ #endif
+
+-static struct lock_class_key submitted_jobs_xa_lock_class_key;
+-
+ int ivpu_dbg_mask;
+ module_param_named(dbg_mask, ivpu_dbg_mask, int, 0644);
+ MODULE_PARM_DESC(dbg_mask, "Driver debug mask. See IVPU_DBG_* macros.");
+@@ -465,26 +463,6 @@ static const struct drm_driver driver = {
+ .major = 1,
+ };
+
+-static void ivpu_context_abort_invalid(struct ivpu_device *vdev)
+-{
+- struct ivpu_file_priv *file_priv;
+- unsigned long ctx_id;
+-
+- mutex_lock(&vdev->context_list_lock);
+-
+- xa_for_each(&vdev->context_xa, ctx_id, file_priv) {
+- if (!file_priv->has_mmu_faults || file_priv->aborted)
+- continue;
+-
+- mutex_lock(&file_priv->lock);
+- ivpu_context_abort_locked(file_priv);
+- file_priv->aborted = true;
+- mutex_unlock(&file_priv->lock);
+- }
+-
+- mutex_unlock(&vdev->context_list_lock);
+-}
+-
+ static irqreturn_t ivpu_irq_thread_handler(int irq, void *arg)
+ {
+ struct ivpu_device *vdev = arg;
+@@ -498,9 +476,6 @@ static irqreturn_t ivpu_irq_thread_handler(int irq, void *arg)
+ case IVPU_HW_IRQ_SRC_IPC:
+ ivpu_ipc_irq_thread_handler(vdev);
+ break;
+- case IVPU_HW_IRQ_SRC_MMU_EVTQ:
+- ivpu_context_abort_invalid(vdev);
+- break;
+ case IVPU_HW_IRQ_SRC_DCT:
+ ivpu_pm_dct_irq_thread_handler(vdev);
+ break;
+@@ -617,16 +592,21 @@ static int ivpu_dev_init(struct ivpu_device *vdev)
+ xa_init_flags(&vdev->context_xa, XA_FLAGS_ALLOC | XA_FLAGS_LOCK_IRQ);
+ xa_init_flags(&vdev->submitted_jobs_xa, XA_FLAGS_ALLOC1);
+ xa_init_flags(&vdev->db_xa, XA_FLAGS_ALLOC1);
+- lockdep_set_class(&vdev->submitted_jobs_xa.xa_lock, &submitted_jobs_xa_lock_class_key);
+ INIT_LIST_HEAD(&vdev->bo_list);
+
+ vdev->db_limit.min = IVPU_MIN_DB;
+ vdev->db_limit.max = IVPU_MAX_DB;
+
++ INIT_WORK(&vdev->context_abort_work, ivpu_context_abort_thread_handler);
++
+ ret = drmm_mutex_init(&vdev->drm, &vdev->context_list_lock);
+ if (ret)
+ goto err_xa_destroy;
+
++ ret = drmm_mutex_init(&vdev->drm, &vdev->submitted_jobs_lock);
++ if (ret)
++ goto err_xa_destroy;
++
+ ret = drmm_mutex_init(&vdev->drm, &vdev->bo_list_lock);
+ if (ret)
+ goto err_xa_destroy;
+diff --git a/drivers/accel/ivpu/ivpu_drv.h b/drivers/accel/ivpu/ivpu_drv.h
+index 3fdff3f6cffd85..ebfcf3e42a3d93 100644
+--- a/drivers/accel/ivpu/ivpu_drv.h
++++ b/drivers/accel/ivpu/ivpu_drv.h
+@@ -137,6 +137,7 @@ struct ivpu_device {
+ struct mutex context_list_lock; /* Protects user context addition/removal */
+ struct xarray context_xa;
+ struct xa_limit context_xa_limit;
++ struct work_struct context_abort_work;
+
+ struct xarray db_xa;
+ struct xa_limit db_limit;
+@@ -145,6 +146,7 @@ struct ivpu_device {
+ struct mutex bo_list_lock; /* Protects bo_list */
+ struct list_head bo_list;
+
++ struct mutex submitted_jobs_lock; /* Protects submitted_jobs */
+ struct xarray submitted_jobs_xa;
+ struct ivpu_ipc_consumer job_done_consumer;
+
+diff --git a/drivers/accel/ivpu/ivpu_hw_btrs.h b/drivers/accel/ivpu/ivpu_hw_btrs.h
+index 71792dab3c2107..3855e2df1e0c83 100644
+--- a/drivers/accel/ivpu/ivpu_hw_btrs.h
++++ b/drivers/accel/ivpu/ivpu_hw_btrs.h
+@@ -14,7 +14,7 @@
+ #define PLL_PROFILING_FREQ_DEFAULT 38400000
+ #define PLL_PROFILING_FREQ_HIGH 400000000
+
+-#define DCT_DEFAULT_ACTIVE_PERCENT 15u
++#define DCT_DEFAULT_ACTIVE_PERCENT 30u
+ #define DCT_PERIOD_US 35300u
+
+ int ivpu_hw_btrs_info_init(struct ivpu_device *vdev);
+diff --git a/drivers/accel/ivpu/ivpu_job.c b/drivers/accel/ivpu/ivpu_job.c
+index 7149312f16e193..673801889c7b23 100644
+--- a/drivers/accel/ivpu/ivpu_job.c
++++ b/drivers/accel/ivpu/ivpu_job.c
+@@ -223,7 +223,8 @@ static int ivpu_cmdq_fini(struct ivpu_file_priv *file_priv, struct ivpu_cmdq *cm
+ if (vdev->fw->sched_mode == VPU_SCHEDULING_MODE_HW) {
+ ret = ivpu_jsm_hws_destroy_cmdq(vdev, file_priv->ctx.id, cmdq->id);
+ if (!ret)
+- ivpu_dbg(vdev, JOB, "Command queue %d destroyed\n", cmdq->id);
++ ivpu_dbg(vdev, JOB, "Command queue %d destroyed, ctx %d\n",
++ cmdq->id, file_priv->ctx.id);
+ }
+
+ ret = ivpu_jsm_unregister_db(vdev, cmdq->db_id);
+@@ -324,6 +325,8 @@ void ivpu_context_abort_locked(struct ivpu_file_priv *file_priv)
+
+ if (vdev->fw->sched_mode == VPU_SCHEDULING_MODE_OS)
+ ivpu_jsm_context_release(vdev, file_priv->ctx.id);
++
++ file_priv->aborted = true;
+ }
+
+ static int ivpu_cmdq_push_job(struct ivpu_cmdq *cmdq, struct ivpu_job *job)
+@@ -462,16 +465,14 @@ static struct ivpu_job *ivpu_job_remove_from_submitted_jobs(struct ivpu_device *
+ {
+ struct ivpu_job *job;
+
+- xa_lock(&vdev->submitted_jobs_xa);
+- job = __xa_erase(&vdev->submitted_jobs_xa, job_id);
++ lockdep_assert_held(&vdev->submitted_jobs_lock);
+
++ job = xa_erase(&vdev->submitted_jobs_xa, job_id);
+ if (xa_empty(&vdev->submitted_jobs_xa) && job) {
+ vdev->busy_time = ktime_add(ktime_sub(ktime_get(), vdev->busy_start_ts),
+ vdev->busy_time);
+ }
+
+- xa_unlock(&vdev->submitted_jobs_xa);
+-
+ return job;
+ }
+
+@@ -479,6 +480,28 @@ static int ivpu_job_signal_and_destroy(struct ivpu_device *vdev, u32 job_id, u32
+ {
+ struct ivpu_job *job;
+
++ lockdep_assert_held(&vdev->submitted_jobs_lock);
++
++ job = xa_load(&vdev->submitted_jobs_xa, job_id);
++ if (!job)
++ return -ENOENT;
++
++ if (job_status == VPU_JSM_STATUS_MVNCI_CONTEXT_VIOLATION_HW) {
++ guard(mutex)(&job->file_priv->lock);
++
++ if (job->file_priv->has_mmu_faults)
++ return 0;
++
++ /*
++ * Mark context as faulty and defer destruction of the job to jobs abort thread
++ * handler to synchronize between both faults and jobs returning context violation
++ * status and ensure both are handled in the same way
++ */
++ job->file_priv->has_mmu_faults = true;
++ queue_work(system_wq, &vdev->context_abort_work);
++ return 0;
++ }
++
+ job = ivpu_job_remove_from_submitted_jobs(vdev, job_id);
+ if (!job)
+ return -ENOENT;
+@@ -497,6 +520,10 @@ static int ivpu_job_signal_and_destroy(struct ivpu_device *vdev, u32 job_id, u32
+ ivpu_stop_job_timeout_detection(vdev);
+
+ ivpu_rpm_put(vdev);
++
++ if (!xa_empty(&vdev->submitted_jobs_xa))
++ ivpu_start_job_timeout_detection(vdev);
++
+ return 0;
+ }
+
+@@ -505,8 +532,12 @@ void ivpu_jobs_abort_all(struct ivpu_device *vdev)
+ struct ivpu_job *job;
+ unsigned long id;
+
++ mutex_lock(&vdev->submitted_jobs_lock);
++
+ xa_for_each(&vdev->submitted_jobs_xa, id, job)
+ ivpu_job_signal_and_destroy(vdev, id, DRM_IVPU_JOB_STATUS_ABORTED);
++
++ mutex_unlock(&vdev->submitted_jobs_lock);
+ }
+
+ static int ivpu_job_submit(struct ivpu_job *job, u8 priority)
+@@ -521,6 +552,7 @@ static int ivpu_job_submit(struct ivpu_job *job, u8 priority)
+ if (ret < 0)
+ return ret;
+
++ mutex_lock(&vdev->submitted_jobs_lock);
+ mutex_lock(&file_priv->lock);
+
+ cmdq = ivpu_cmdq_acquire(file_priv, priority);
+@@ -528,18 +560,17 @@ static int ivpu_job_submit(struct ivpu_job *job, u8 priority)
+ ivpu_warn_ratelimited(vdev, "Failed to get job queue, ctx %d engine %d prio %d\n",
+ file_priv->ctx.id, job->engine_idx, priority);
+ ret = -EINVAL;
+- goto err_unlock_file_priv;
++ goto err_unlock;
+ }
+
+- xa_lock(&vdev->submitted_jobs_xa);
+ is_first_job = xa_empty(&vdev->submitted_jobs_xa);
+- ret = __xa_alloc_cyclic(&vdev->submitted_jobs_xa, &job->job_id, job, file_priv->job_limit,
+- &file_priv->job_id_next, GFP_KERNEL);
++ ret = xa_alloc_cyclic(&vdev->submitted_jobs_xa, &job->job_id, job, file_priv->job_limit,
++ &file_priv->job_id_next, GFP_KERNEL);
+ if (ret < 0) {
+ ivpu_dbg(vdev, JOB, "Too many active jobs in ctx %d\n",
+ file_priv->ctx.id);
+ ret = -EBUSY;
+- goto err_unlock_submitted_jobs_xa;
++ goto err_unlock;
+ }
+
+ ret = ivpu_cmdq_push_job(cmdq, job);
+@@ -562,20 +593,20 @@ static int ivpu_job_submit(struct ivpu_job *job, u8 priority)
+ job->job_id, file_priv->ctx.id, job->engine_idx, priority,
+ job->cmd_buf_vpu_addr, cmdq->jobq->header.tail);
+
+- xa_unlock(&vdev->submitted_jobs_xa);
+-
+ mutex_unlock(&file_priv->lock);
+
+- if (unlikely(ivpu_test_mode & IVPU_TEST_MODE_NULL_HW))
++ if (unlikely(ivpu_test_mode & IVPU_TEST_MODE_NULL_HW)) {
+ ivpu_job_signal_and_destroy(vdev, job->job_id, VPU_JSM_STATUS_SUCCESS);
++ }
++
++ mutex_unlock(&vdev->submitted_jobs_lock);
+
+ return 0;
+
+ err_erase_xa:
+- __xa_erase(&vdev->submitted_jobs_xa, job->job_id);
+-err_unlock_submitted_jobs_xa:
+- xa_unlock(&vdev->submitted_jobs_xa);
+-err_unlock_file_priv:
++ xa_erase(&vdev->submitted_jobs_xa, job->job_id);
++err_unlock:
++ mutex_unlock(&vdev->submitted_jobs_lock);
+ mutex_unlock(&file_priv->lock);
+ ivpu_rpm_put(vdev);
+ return ret;
+@@ -745,7 +776,6 @@ ivpu_job_done_callback(struct ivpu_device *vdev, struct ivpu_ipc_hdr *ipc_hdr,
+ struct vpu_jsm_msg *jsm_msg)
+ {
+ struct vpu_ipc_msg_payload_job_done *payload;
+- int ret;
+
+ if (!jsm_msg) {
+ ivpu_err(vdev, "IPC message has no JSM payload\n");
+@@ -758,9 +788,10 @@ ivpu_job_done_callback(struct ivpu_device *vdev, struct ivpu_ipc_hdr *ipc_hdr,
+ }
+
+ payload = (struct vpu_ipc_msg_payload_job_done *)&jsm_msg->payload;
+- ret = ivpu_job_signal_and_destroy(vdev, payload->job_id, payload->job_status);
+- if (!ret && !xa_empty(&vdev->submitted_jobs_xa))
+- ivpu_start_job_timeout_detection(vdev);
++
++ mutex_lock(&vdev->submitted_jobs_lock);
++ ivpu_job_signal_and_destroy(vdev, payload->job_id, payload->job_status);
++ mutex_unlock(&vdev->submitted_jobs_lock);
+ }
+
+ void ivpu_job_done_consumer_init(struct ivpu_device *vdev)
+@@ -773,3 +804,41 @@ void ivpu_job_done_consumer_fini(struct ivpu_device *vdev)
+ {
+ ivpu_ipc_consumer_del(vdev, &vdev->job_done_consumer);
+ }
++
++void ivpu_context_abort_thread_handler(struct work_struct *work)
++{
++ struct ivpu_device *vdev = container_of(work, struct ivpu_device, context_abort_work);
++ struct ivpu_file_priv *file_priv;
++ unsigned long ctx_id;
++ struct ivpu_job *job;
++ unsigned long id;
++
++ if (vdev->fw->sched_mode == VPU_SCHEDULING_MODE_HW)
++ ivpu_jsm_reset_engine(vdev, 0);
++
++ mutex_lock(&vdev->context_list_lock);
++ xa_for_each(&vdev->context_xa, ctx_id, file_priv) {
++ if (!file_priv->has_mmu_faults || file_priv->aborted)
++ continue;
++
++ mutex_lock(&file_priv->lock);
++ ivpu_context_abort_locked(file_priv);
++ mutex_unlock(&file_priv->lock);
++ }
++ mutex_unlock(&vdev->context_list_lock);
++
++ if (vdev->fw->sched_mode != VPU_SCHEDULING_MODE_HW)
++ return;
++
++ ivpu_jsm_hws_resume_engine(vdev, 0);
++ /*
++ * In hardware scheduling mode the NPU has already stopped processing jobs
++ * and won't send us any further notifications, thus we have to free job related resources
++ * and notify userspace
++ */
++ mutex_lock(&vdev->submitted_jobs_lock);
++ xa_for_each(&vdev->submitted_jobs_xa, id, job)
++ if (job->file_priv->aborted)
++ ivpu_job_signal_and_destroy(vdev, job->job_id, DRM_IVPU_JOB_STATUS_ABORTED);
++ mutex_unlock(&vdev->submitted_jobs_lock);
++}
+diff --git a/drivers/accel/ivpu/ivpu_job.h b/drivers/accel/ivpu/ivpu_job.h
+index 8b19e3f8b4cfb3..af1ed039569cd6 100644
+--- a/drivers/accel/ivpu/ivpu_job.h
++++ b/drivers/accel/ivpu/ivpu_job.h
+@@ -66,6 +66,7 @@ void ivpu_cmdq_reset_all_contexts(struct ivpu_device *vdev);
+
+ void ivpu_job_done_consumer_init(struct ivpu_device *vdev);
+ void ivpu_job_done_consumer_fini(struct ivpu_device *vdev);
++void ivpu_context_abort_thread_handler(struct work_struct *work);
+
+ void ivpu_jobs_abort_all(struct ivpu_device *vdev);
+
+diff --git a/drivers/accel/ivpu/ivpu_mmu.c b/drivers/accel/ivpu/ivpu_mmu.c
+index 26ef52fbb93e53..21f820dd0c658a 100644
+--- a/drivers/accel/ivpu/ivpu_mmu.c
++++ b/drivers/accel/ivpu/ivpu_mmu.c
+@@ -890,8 +890,7 @@ void ivpu_mmu_irq_evtq_handler(struct ivpu_device *vdev)
+ REGV_WR32(IVPU_MMU_REG_EVTQ_CONS_SEC, vdev->mmu->evtq.cons);
+ }
+
+- if (!kfifo_put(&vdev->hw->irq.fifo, IVPU_HW_IRQ_SRC_MMU_EVTQ))
+- ivpu_err_ratelimited(vdev, "IRQ FIFO full\n");
++ queue_work(system_wq, &vdev->context_abort_work);
+ }
+
+ void ivpu_mmu_evtq_dump(struct ivpu_device *vdev)
+diff --git a/drivers/accel/ivpu/ivpu_pm.c b/drivers/accel/ivpu/ivpu_pm.c
+index 5060c5dd40d1fc..7acf78aeb38005 100644
+--- a/drivers/accel/ivpu/ivpu_pm.c
++++ b/drivers/accel/ivpu/ivpu_pm.c
+@@ -433,16 +433,17 @@ int ivpu_pm_dct_enable(struct ivpu_device *vdev, u8 active_percent)
+ active_us = (DCT_PERIOD_US * active_percent) / 100;
+ inactive_us = DCT_PERIOD_US - active_us;
+
++ vdev->pm->dct_active_percent = active_percent;
++
++ ivpu_dbg(vdev, PM, "DCT requested %u%% (D0: %uus, D0i2: %uus)\n",
++ active_percent, active_us, inactive_us);
++
+ ret = ivpu_jsm_dct_enable(vdev, active_us, inactive_us);
+ if (ret) {
+ ivpu_err_ratelimited(vdev, "Failed to enable DCT: %d\n", ret);
+ return ret;
+ }
+
+- vdev->pm->dct_active_percent = active_percent;
+-
+- ivpu_dbg(vdev, PM, "DCT set to %u%% (D0: %uus, D0i2: %uus)\n",
+- active_percent, active_us, inactive_us);
+ return 0;
+ }
+
+@@ -450,15 +451,16 @@ int ivpu_pm_dct_disable(struct ivpu_device *vdev)
+ {
+ int ret;
+
++ vdev->pm->dct_active_percent = 0;
++
++ ivpu_dbg(vdev, PM, "DCT requested to be disabled\n");
++
+ ret = ivpu_jsm_dct_disable(vdev);
+ if (ret) {
+ ivpu_err_ratelimited(vdev, "Failed to disable DCT: %d\n", ret);
+ return ret;
+ }
+
+- vdev->pm->dct_active_percent = 0;
+-
+- ivpu_dbg(vdev, PM, "DCT disabled\n");
+ return 0;
+ }
+
+@@ -470,7 +472,7 @@ void ivpu_pm_dct_irq_thread_handler(struct ivpu_device *vdev)
+ if (ivpu_hw_btrs_dct_get_request(vdev, &enable))
+ return;
+
+- if (vdev->pm->dct_active_percent)
++ if (enable)
+ ret = ivpu_pm_dct_enable(vdev, DCT_DEFAULT_ACTIVE_PERCENT);
+ else
+ ret = ivpu_pm_dct_disable(vdev);
+diff --git a/drivers/accel/ivpu/ivpu_sysfs.c b/drivers/accel/ivpu/ivpu_sysfs.c
+index 616477fc17fa07..8a616791c32f5e 100644
+--- a/drivers/accel/ivpu/ivpu_sysfs.c
++++ b/drivers/accel/ivpu/ivpu_sysfs.c
+@@ -30,11 +30,12 @@ npu_busy_time_us_show(struct device *dev, struct device_attribute *attr, char *b
+ struct ivpu_device *vdev = to_ivpu_device(drm);
+ ktime_t total, now = 0;
+
+- xa_lock(&vdev->submitted_jobs_xa);
++ mutex_lock(&vdev->submitted_jobs_lock);
++
+ total = vdev->busy_time;
+ if (!xa_empty(&vdev->submitted_jobs_xa))
+ now = ktime_sub(ktime_get(), vdev->busy_start_ts);
+- xa_unlock(&vdev->submitted_jobs_xa);
++ mutex_unlock(&vdev->submitted_jobs_lock);
+
+ return sysfs_emit(buf, "%lld\n", ktime_to_us(ktime_add(total, now)));
+ }
+diff --git a/drivers/base/module.c b/drivers/base/module.c
+index 5bc71bea883a06..218aaa0964552f 100644
+--- a/drivers/base/module.c
++++ b/drivers/base/module.c
+@@ -42,16 +42,13 @@ int module_add_driver(struct module *mod, const struct device_driver *drv)
+ if (mod)
+ mk = &mod->mkobj;
+ else if (drv->mod_name) {
+- struct kobject *mkobj;
+-
+- /* Lookup built-in module entry in /sys/modules */
+- mkobj = kset_find_obj(module_kset, drv->mod_name);
+- if (mkobj) {
+- mk = container_of(mkobj, struct module_kobject, kobj);
++ /* Lookup or create built-in module entry in /sys/modules */
++ mk = lookup_or_create_module_kobject(drv->mod_name);
++ if (mk) {
+ /* remember our module structure */
+ drv->p->mkobj = mk;
+- /* kset_find_obj took a reference */
+- kobject_put(mkobj);
++ /* lookup_or_create_module_kobject took a reference */
++ kobject_put(&mk->kobj);
+ }
+ }
+
+diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
+index ab06a7a064fbf8..348c4feb7a2df3 100644
+--- a/drivers/block/ublk_drv.c
++++ b/drivers/block/ublk_drv.c
+@@ -115,15 +115,6 @@ struct ublk_uring_cmd_pdu {
+ */
+ #define UBLK_IO_FLAG_OWNED_BY_SRV 0x02
+
+-/*
+- * IO command is aborted, so this flag is set in case of
+- * !UBLK_IO_FLAG_ACTIVE.
+- *
+- * After this flag is observed, any pending or new incoming request
+- * associated with this io command will be failed immediately
+- */
+-#define UBLK_IO_FLAG_ABORTED 0x04
+-
+ /*
+ * UBLK_IO_FLAG_NEED_GET_DATA is set because IO command requires
+ * get data buffer address from ublksrv.
+@@ -194,8 +185,6 @@ struct ublk_device {
+ struct completion completion;
+ unsigned int nr_queues_ready;
+ unsigned int nr_privileged_daemon;
+-
+- struct work_struct nosrv_work;
+ };
+
+ /* header of ublk_params */
+@@ -204,7 +193,9 @@ struct ublk_params_header {
+ __u32 types;
+ };
+
+-static bool ublk_abort_requests(struct ublk_device *ub, struct ublk_queue *ubq);
++
++static void ublk_stop_dev_unlocked(struct ublk_device *ub);
++static void ublk_abort_queue(struct ublk_device *ub, struct ublk_queue *ubq);
+
+ static inline unsigned int ublk_req_build_flags(struct request *req);
+ static inline struct ublksrv_io_desc *ublk_get_iod(struct ublk_queue *ubq,
+@@ -594,6 +585,11 @@ static inline bool ublk_support_user_copy(const struct ublk_queue *ubq)
+ return ubq->flags & UBLK_F_USER_COPY;
+ }
+
++static inline bool ublk_need_map_io(const struct ublk_queue *ubq)
++{
++ return !ublk_support_user_copy(ubq);
++}
++
+ static inline bool ublk_need_req_ref(const struct ublk_queue *ubq)
+ {
+ /*
+@@ -921,7 +917,7 @@ static int ublk_map_io(const struct ublk_queue *ubq, const struct request *req,
+ {
+ const unsigned int rq_bytes = blk_rq_bytes(req);
+
+- if (ublk_support_user_copy(ubq))
++ if (!ublk_need_map_io(ubq))
+ return rq_bytes;
+
+ /*
+@@ -945,7 +941,7 @@ static int ublk_unmap_io(const struct ublk_queue *ubq,
+ {
+ const unsigned int rq_bytes = blk_rq_bytes(req);
+
+- if (ublk_support_user_copy(ubq))
++ if (!ublk_need_map_io(ubq))
+ return rq_bytes;
+
+ if (ublk_need_unmap_req(req)) {
+@@ -1038,7 +1034,7 @@ static inline struct ublk_uring_cmd_pdu *ublk_get_uring_cmd_pdu(
+
+ static inline bool ubq_daemon_is_dying(struct ublk_queue *ubq)
+ {
+- return ubq->ubq_daemon->flags & PF_EXITING;
++ return !ubq->ubq_daemon || ubq->ubq_daemon->flags & PF_EXITING;
+ }
+
+ /* todo: handle partial completion */
+@@ -1049,12 +1045,6 @@ static inline void __ublk_complete_rq(struct request *req)
+ unsigned int unmapped_bytes;
+ blk_status_t res = BLK_STS_OK;
+
+- /* called from ublk_abort_queue() code path */
+- if (io->flags & UBLK_IO_FLAG_ABORTED) {
+- res = BLK_STS_IOERR;
+- goto exit;
+- }
+-
+ /* failed read IO if nothing is read */
+ if (!io->res && req_op(req) == REQ_OP_READ)
+ io->res = -EIO;
+@@ -1104,47 +1094,6 @@ static void ublk_complete_rq(struct kref *ref)
+ __ublk_complete_rq(req);
+ }
+
+-static void ublk_do_fail_rq(struct request *req)
+-{
+- struct ublk_queue *ubq = req->mq_hctx->driver_data;
+-
+- if (ublk_nosrv_should_reissue_outstanding(ubq->dev))
+- blk_mq_requeue_request(req, false);
+- else
+- __ublk_complete_rq(req);
+-}
+-
+-static void ublk_fail_rq_fn(struct kref *ref)
+-{
+- struct ublk_rq_data *data = container_of(ref, struct ublk_rq_data,
+- ref);
+- struct request *req = blk_mq_rq_from_pdu(data);
+-
+- ublk_do_fail_rq(req);
+-}
+-
+-/*
+- * Since ublk_rq_task_work_cb always fails requests immediately during
+- * exiting, __ublk_fail_req() is only called from abort context during
+- * exiting. So lock is unnecessary.
+- *
+- * Also aborting may not be started yet, keep in mind that one failed
+- * request may be issued by block layer again.
+- */
+-static void __ublk_fail_req(struct ublk_queue *ubq, struct ublk_io *io,
+- struct request *req)
+-{
+- WARN_ON_ONCE(io->flags & UBLK_IO_FLAG_ACTIVE);
+-
+- if (ublk_need_req_ref(ubq)) {
+- struct ublk_rq_data *data = blk_mq_rq_to_pdu(req);
+-
+- kref_put(&data->ref, ublk_fail_rq_fn);
+- } else {
+- ublk_do_fail_rq(req);
+- }
+-}
+-
+ static void ubq_complete_io_cmd(struct ublk_io *io, int res,
+ unsigned issue_flags)
+ {
+@@ -1301,8 +1250,6 @@ static void ublk_queue_cmd_list(struct ublk_queue *ubq, struct rq_list *l)
+ static enum blk_eh_timer_return ublk_timeout(struct request *rq)
+ {
+ struct ublk_queue *ubq = rq->mq_hctx->driver_data;
+- unsigned int nr_inflight = 0;
+- int i;
+
+ if (ubq->flags & UBLK_F_UNPRIVILEGED_DEV) {
+ if (!ubq->timeout) {
+@@ -1313,26 +1260,6 @@ static enum blk_eh_timer_return ublk_timeout(struct request *rq)
+ return BLK_EH_DONE;
+ }
+
+- if (!ubq_daemon_is_dying(ubq))
+- return BLK_EH_RESET_TIMER;
+-
+- for (i = 0; i < ubq->q_depth; i++) {
+- struct ublk_io *io = &ubq->ios[i];
+-
+- if (!(io->flags & UBLK_IO_FLAG_ACTIVE))
+- nr_inflight++;
+- }
+-
+- /* cancelable uring_cmd can't help us if all commands are in-flight */
+- if (nr_inflight == ubq->q_depth) {
+- struct ublk_device *ub = ubq->dev;
+-
+- if (ublk_abort_requests(ub, ubq)) {
+- schedule_work(&ub->nosrv_work);
+- }
+- return BLK_EH_DONE;
+- }
+-
+ return BLK_EH_RESET_TIMER;
+ }
+
+@@ -1435,6 +1362,37 @@ static const struct blk_mq_ops ublk_mq_ops = {
+ .timeout = ublk_timeout,
+ };
+
++static void ublk_queue_reinit(struct ublk_device *ub, struct ublk_queue *ubq)
++{
++ int i;
++
++ /* All old ioucmds have to be completed */
++ ubq->nr_io_ready = 0;
++
++ /*
++ * old daemon is PF_EXITING, put it now
++ *
++ * It could be NULL in case of closing a quiesced device.
++ */
++ if (ubq->ubq_daemon)
++ put_task_struct(ubq->ubq_daemon);
++ /* We have to reset it to NULL, otherwise ub won't accept new FETCH_REQ */
++ ubq->ubq_daemon = NULL;
++ ubq->timeout = false;
++
++ for (i = 0; i < ubq->q_depth; i++) {
++ struct ublk_io *io = &ubq->ios[i];
++
++ /*
++ * UBLK_IO_FLAG_CANCELED is kept to avoid touching
++ * io->cmd
++ */
++ io->flags &= UBLK_IO_FLAG_CANCELED;
++ io->cmd = NULL;
++ io->addr = 0;
++ }
++}
++
+ static int ublk_ch_open(struct inode *inode, struct file *filp)
+ {
+ struct ublk_device *ub = container_of(inode->i_cdev,
+@@ -1446,10 +1404,119 @@ static int ublk_ch_open(struct inode *inode, struct file *filp)
+ return 0;
+ }
+
++static void ublk_reset_ch_dev(struct ublk_device *ub)
++{
++ int i;
++
++ for (i = 0; i < ub->dev_info.nr_hw_queues; i++)
++ ublk_queue_reinit(ub, ublk_get_queue(ub, i));
++
++ /* set to NULL, otherwise new ubq_daemon cannot mmap the io_cmd_buf */
++ ub->mm = NULL;
++ ub->nr_queues_ready = 0;
++ ub->nr_privileged_daemon = 0;
++}
++
++static struct gendisk *ublk_get_disk(struct ublk_device *ub)
++{
++ struct gendisk *disk;
++
++ spin_lock(&ub->lock);
++ disk = ub->ub_disk;
++ if (disk)
++ get_device(disk_to_dev(disk));
++ spin_unlock(&ub->lock);
++
++ return disk;
++}
++
++static void ublk_put_disk(struct gendisk *disk)
++{
++ if (disk)
++ put_device(disk_to_dev(disk));
++}
++
+ static int ublk_ch_release(struct inode *inode, struct file *filp)
+ {
+ struct ublk_device *ub = filp->private_data;
++ struct gendisk *disk;
++ int i;
++
++ /*
++ * The disk isn't attached yet: either the device isn't live, or it
++ * has been removed already, so we don't need to do anything
++ */
++ disk = ublk_get_disk(ub);
++ if (!disk)
++ goto out;
+
++ /*
++ * All uring_cmd are done now, so abort any request outstanding to
++ * the ublk server
++ *
++ * This can be done in a lockless way because the ublk server is
++ * gone
++ *
++ * More importantly, we have to provide forward progress guarantee
++ * without holding ub->mutex, otherwise control task grabbing
++ * ub->mutex triggers deadlock
++ *
++ * All requests may be inflight, so ->canceling may not be set, set
++ * it now.
++ */
++ for (i = 0; i < ub->dev_info.nr_hw_queues; i++) {
++ struct ublk_queue *ubq = ublk_get_queue(ub, i);
++
++ ubq->canceling = true;
++ ublk_abort_queue(ub, ubq);
++ }
++ blk_mq_kick_requeue_list(disk->queue);
++
++ /*
++ * All inflight requests have been completed or requeued and any new
++ * request will be failed or requeued via `->canceling` now, so it is
++ * fine to grab ub->mutex now.
++ */
++ mutex_lock(&ub->mutex);
++
++ /* double check after grabbing lock */
++ if (!ub->ub_disk)
++ goto unlock;
++
++ /*
++ * Transition the device to the nosrv state. What exactly this
++ * means depends on the recovery flags
++ */
++ blk_mq_quiesce_queue(disk->queue);
++ if (ublk_nosrv_should_stop_dev(ub)) {
++ /*
++ * Allow any pending/future I/O to pass through quickly
++ * with an error. This is needed because del_gendisk
++ * waits for all pending I/O to complete
++ */
++ for (i = 0; i < ub->dev_info.nr_hw_queues; i++)
++ ublk_get_queue(ub, i)->force_abort = true;
++ blk_mq_unquiesce_queue(disk->queue);
++
++ ublk_stop_dev_unlocked(ub);
++ } else {
++ if (ublk_nosrv_dev_should_queue_io(ub)) {
++ /* ->canceling is set and all requests are aborted */
++ ub->dev_info.state = UBLK_S_DEV_QUIESCED;
++ } else {
++ ub->dev_info.state = UBLK_S_DEV_FAIL_IO;
++ for (i = 0; i < ub->dev_info.nr_hw_queues; i++)
++ ublk_get_queue(ub, i)->fail_io = true;
++ }
++ blk_mq_unquiesce_queue(disk->queue);
++ }
++unlock:
++ mutex_unlock(&ub->mutex);
++ ublk_put_disk(disk);
++
++ /* all uring_cmd has been done now, reset device & ubq */
++ ublk_reset_ch_dev(ub);
++out:
+ clear_bit(UB_STATE_OPEN, &ub->state);
+ return 0;
+ }
+@@ -1516,10 +1583,26 @@ static void ublk_commit_completion(struct ublk_device *ub,
+ ublk_put_req_ref(ubq, req);
+ }
+
++static void __ublk_fail_req(struct ublk_queue *ubq, struct ublk_io *io,
++ struct request *req)
++{
++ WARN_ON_ONCE(io->flags & UBLK_IO_FLAG_ACTIVE);
++
++ if (ublk_nosrv_should_reissue_outstanding(ubq->dev))
++ blk_mq_requeue_request(req, false);
++ else {
++ io->res = -EIO;
++ __ublk_complete_rq(req);
++ }
++}
++
+ /*
+- * Called from ubq_daemon context via cancel fn, meantime quiesce ublk
+- * blk-mq queue, so we are called exclusively with blk-mq and ubq_daemon
+- * context, so everything is serialized.
++ * Called from ublk char device release handler, when any uring_cmd is
++ * done, meantime request queue is "quiesced" since all inflight requests
++ * can't be completed because ublk server is dead.
++ *
++ * So no one can hold our request IO reference any more, simply ignore the
++ * reference, and complete the request immediately
+ */
+ static void ublk_abort_queue(struct ublk_device *ub, struct ublk_queue *ubq)
+ {
+@@ -1536,46 +1619,29 @@ static void ublk_abort_queue(struct ublk_device *ub, struct ublk_queue *ubq)
+ * will do it
+ */
+ rq = blk_mq_tag_to_rq(ub->tag_set.tags[ubq->q_id], i);
+- if (rq && blk_mq_request_started(rq)) {
+- io->flags |= UBLK_IO_FLAG_ABORTED;
++ if (rq && blk_mq_request_started(rq))
+ __ublk_fail_req(ubq, io, rq);
+- }
+ }
+ }
+ }
+
+ /* Must be called when queue is frozen */
+-static bool ublk_mark_queue_canceling(struct ublk_queue *ubq)
++static void ublk_mark_queue_canceling(struct ublk_queue *ubq)
+ {
+- bool canceled;
+-
+ spin_lock(&ubq->cancel_lock);
+- canceled = ubq->canceling;
+- if (!canceled)
++ if (!ubq->canceling)
+ ubq->canceling = true;
+ spin_unlock(&ubq->cancel_lock);
+-
+- return canceled;
+ }
+
+-static bool ublk_abort_requests(struct ublk_device *ub, struct ublk_queue *ubq)
++static void ublk_start_cancel(struct ublk_queue *ubq)
+ {
+- bool was_canceled = ubq->canceling;
+- struct gendisk *disk;
+-
+- if (was_canceled)
+- return false;
+-
+- spin_lock(&ub->lock);
+- disk = ub->ub_disk;
+- if (disk)
+- get_device(disk_to_dev(disk));
+- spin_unlock(&ub->lock);
++ struct ublk_device *ub = ubq->dev;
++ struct gendisk *disk = ublk_get_disk(ub);
+
+ /* Our disk has been dead */
+ if (!disk)
+- return false;
+-
++ return;
+ /*
+ * Now we are serialized with ublk_queue_rq()
+ *
+@@ -1584,25 +1650,36 @@ static bool ublk_abort_requests(struct ublk_device *ub, struct ublk_queue *ubq)
+ * touch completed uring_cmd
+ */
+ blk_mq_quiesce_queue(disk->queue);
+- was_canceled = ublk_mark_queue_canceling(ubq);
+- if (!was_canceled) {
+- /* abort queue is for making forward progress */
+- ublk_abort_queue(ub, ubq);
+- }
++ ublk_mark_queue_canceling(ubq);
+ blk_mq_unquiesce_queue(disk->queue);
+- put_device(disk_to_dev(disk));
+-
+- return !was_canceled;
++ ublk_put_disk(disk);
+ }
+
+-static void ublk_cancel_cmd(struct ublk_queue *ubq, struct ublk_io *io,
++static void ublk_cancel_cmd(struct ublk_queue *ubq, unsigned tag,
+ unsigned int issue_flags)
+ {
++ struct ublk_io *io = &ubq->ios[tag];
++ struct ublk_device *ub = ubq->dev;
++ struct request *req;
+ bool done;
+
+ if (!(io->flags & UBLK_IO_FLAG_ACTIVE))
+ return;
+
++ /*
++ * Don't try to cancel this command if the request has been started,
++ * to avoid a race between io_uring_cmd_done() and
++ * io_uring_cmd_complete_in_task().
++ *
++ * Either the started request will be aborted via __ublk_abort_rq(),
++ * then this uring_cmd is canceled next time, or it will be done in
++ * task work function ublk_dispatch_req() because io_uring guarantees
++ * that ublk_dispatch_req() is always called
++ */
++ req = blk_mq_tag_to_rq(ub->tag_set.tags[ubq->q_id], tag);
++ if (req && blk_mq_request_started(req))
++ return;
++
+ spin_lock(&ubq->cancel_lock);
+ done = !!(io->flags & UBLK_IO_FLAG_CANCELED);
+ if (!done)
+@@ -1616,6 +1693,17 @@ static void ublk_cancel_cmd(struct ublk_queue *ubq, struct ublk_io *io,
+ /*
+ * The ublk char device won't be closed when calling cancel fn, so both
+ * ublk device and queue are guaranteed to be live
++ *
++ * Two-stage cancel:
++ *
++ * - make every active uring_cmd done in ->cancel_fn()
++ *
++ * - aborting inflight ublk IO requests in ublk char device release handler,
++ * which depends on 1st stage because device can only be closed iff all
++ * uring_cmd are done
++ *
++ * Do _not_ try to acquire ub->mutex before all inflight requests are
++ * aborted, otherwise deadlock may be caused.
+ */
+ static void ublk_uring_cmd_cancel_fn(struct io_uring_cmd *cmd,
+ unsigned int issue_flags)
+@@ -1623,9 +1711,6 @@ static void ublk_uring_cmd_cancel_fn(struct io_uring_cmd *cmd,
+ struct ublk_uring_cmd_pdu *pdu = ublk_get_uring_cmd_pdu(cmd);
+ struct ublk_queue *ubq = pdu->ubq;
+ struct task_struct *task;
+- struct ublk_device *ub;
+- bool need_schedule;
+- struct ublk_io *io;
+
+ if (WARN_ON_ONCE(!ubq))
+ return;
+@@ -1637,16 +1722,11 @@ static void ublk_uring_cmd_cancel_fn(struct io_uring_cmd *cmd,
+ if (WARN_ON_ONCE(task && task != ubq->ubq_daemon))
+ return;
+
+- ub = ubq->dev;
+- need_schedule = ublk_abort_requests(ub, ubq);
+-
+- io = &ubq->ios[pdu->tag];
+- WARN_ON_ONCE(io->cmd != cmd);
+- ublk_cancel_cmd(ubq, io, issue_flags);
++ if (!ubq->canceling)
++ ublk_start_cancel(ubq);
+
+- if (need_schedule) {
+- schedule_work(&ub->nosrv_work);
+- }
++ WARN_ON_ONCE(ubq->ios[pdu->tag].cmd != cmd);
++ ublk_cancel_cmd(ubq, pdu->tag, issue_flags);
+ }
+
+ static inline bool ublk_queue_ready(struct ublk_queue *ubq)
+@@ -1659,7 +1739,7 @@ static void ublk_cancel_queue(struct ublk_queue *ubq)
+ int i;
+
+ for (i = 0; i < ubq->q_depth; i++)
+- ublk_cancel_cmd(ubq, &ubq->ios[i], IO_URING_F_UNLOCKED);
++ ublk_cancel_cmd(ubq, i, IO_URING_F_UNLOCKED);
+ }
+
+ /* Cancel all pending commands, must be called after del_gendisk() returns */
+@@ -1697,23 +1777,6 @@ static void ublk_wait_tagset_rqs_idle(struct ublk_device *ub)
+ }
+ }
+
+-static void __ublk_quiesce_dev(struct ublk_device *ub)
+-{
+- int i;
+-
+- pr_devel("%s: quiesce ub: dev_id %d state %s\n",
+- __func__, ub->dev_info.dev_id,
+- ub->dev_info.state == UBLK_S_DEV_LIVE ?
+- "LIVE" : "QUIESCED");
+- blk_mq_quiesce_queue(ub->ub_disk->queue);
+- /* mark every queue as canceling */
+- for (i = 0; i < ub->dev_info.nr_hw_queues; i++)
+- ublk_get_queue(ub, i)->canceling = true;
+- ublk_wait_tagset_rqs_idle(ub);
+- ub->dev_info.state = UBLK_S_DEV_QUIESCED;
+- blk_mq_unquiesce_queue(ub->ub_disk->queue);
+-}
+-
+ static void ublk_force_abort_dev(struct ublk_device *ub)
+ {
+ int i;
+@@ -1748,58 +1811,51 @@ static struct gendisk *ublk_detach_disk(struct ublk_device *ub)
+ return disk;
+ }
+
+-static void ublk_stop_dev(struct ublk_device *ub)
++static void ublk_stop_dev_unlocked(struct ublk_device *ub)
++ __must_hold(&ub->mutex)
+ {
+ struct gendisk *disk;
+
+- mutex_lock(&ub->mutex);
+ if (ub->dev_info.state == UBLK_S_DEV_DEAD)
+- goto unlock;
++ return;
++
+ if (ublk_nosrv_dev_should_queue_io(ub))
+ ublk_force_abort_dev(ub);
+ del_gendisk(ub->ub_disk);
+ disk = ublk_detach_disk(ub);
+ put_disk(disk);
+- unlock:
++}
++
++static void ublk_stop_dev(struct ublk_device *ub)
++{
++ mutex_lock(&ub->mutex);
++ ublk_stop_dev_unlocked(ub);
+ mutex_unlock(&ub->mutex);
+ ublk_cancel_dev(ub);
+ }
+
+-static void ublk_nosrv_work(struct work_struct *work)
++/* reset ublk io_uring queue & io flags */
++static void ublk_reset_io_flags(struct ublk_device *ub)
+ {
+- struct ublk_device *ub =
+- container_of(work, struct ublk_device, nosrv_work);
+- int i;
+-
+- if (ublk_nosrv_should_stop_dev(ub)) {
+- ublk_stop_dev(ub);
+- return;
+- }
++ int i, j;
+
+- mutex_lock(&ub->mutex);
+- if (ub->dev_info.state != UBLK_S_DEV_LIVE)
+- goto unlock;
++ for (i = 0; i < ub->dev_info.nr_hw_queues; i++) {
++ struct ublk_queue *ubq = ublk_get_queue(ub, i);
+
+- if (ublk_nosrv_dev_should_queue_io(ub)) {
+- __ublk_quiesce_dev(ub);
+- } else {
+- blk_mq_quiesce_queue(ub->ub_disk->queue);
+- ub->dev_info.state = UBLK_S_DEV_FAIL_IO;
+- for (i = 0; i < ub->dev_info.nr_hw_queues; i++) {
+- ublk_get_queue(ub, i)->fail_io = true;
+- }
+- blk_mq_unquiesce_queue(ub->ub_disk->queue);
++ /* UBLK_IO_FLAG_CANCELED can be cleared now */
++ spin_lock(&ubq->cancel_lock);
++ for (j = 0; j < ubq->q_depth; j++)
++ ubq->ios[j].flags &= ~UBLK_IO_FLAG_CANCELED;
++ spin_unlock(&ubq->cancel_lock);
++ ubq->canceling = false;
++ ubq->fail_io = false;
+ }
+-
+- unlock:
+- mutex_unlock(&ub->mutex);
+- ublk_cancel_dev(ub);
+ }
+
+ /* device can only be started after all IOs are ready */
+ static void ublk_mark_io_ready(struct ublk_device *ub, struct ublk_queue *ubq)
++ __must_hold(&ub->mutex)
+ {
+- mutex_lock(&ub->mutex);
+ ubq->nr_io_ready++;
+ if (ublk_queue_ready(ubq)) {
+ ubq->ubq_daemon = current;
+@@ -1809,9 +1865,12 @@ static void ublk_mark_io_ready(struct ublk_device *ub, struct ublk_queue *ubq)
+ if (capable(CAP_SYS_ADMIN))
+ ub->nr_privileged_daemon++;
+ }
+- if (ub->nr_queues_ready == ub->dev_info.nr_hw_queues)
++
++ if (ub->nr_queues_ready == ub->dev_info.nr_hw_queues) {
++ /* now we are ready for handling ublk io request */
++ ublk_reset_io_flags(ub);
+ complete_all(&ub->completion);
+- mutex_unlock(&ub->mutex);
++ }
+ }
+
+ static inline int ublk_check_cmd_op(u32 cmd_op)
+@@ -1850,6 +1909,52 @@ static inline void ublk_prep_cancel(struct io_uring_cmd *cmd,
+ io_uring_cmd_mark_cancelable(cmd, issue_flags);
+ }
+
++static int ublk_fetch(struct io_uring_cmd *cmd, struct ublk_queue *ubq,
++ struct ublk_io *io, __u64 buf_addr)
++{
++ struct ublk_device *ub = ubq->dev;
++ int ret = 0;
++
++ /*
++ * When handling FETCH command for setting up ublk uring queue,
++ * ub->mutex is the innermost lock, and we won't block for handling
++ * FETCH, so it is fine even for IO_URING_F_NONBLOCK.
++ */
++ mutex_lock(&ub->mutex);
++ /* UBLK_IO_FETCH_REQ is only allowed before queue is setup */
++ if (ublk_queue_ready(ubq)) {
++ ret = -EBUSY;
++ goto out;
++ }
++
++ /* allow each command to be FETCHed at most once */
++ if (io->flags & UBLK_IO_FLAG_ACTIVE) {
++ ret = -EINVAL;
++ goto out;
++ }
++
++ WARN_ON_ONCE(io->flags & UBLK_IO_FLAG_OWNED_BY_SRV);
++
++ if (ublk_need_map_io(ubq)) {
++ /*
++ * FETCH_RQ has to provide IO buffer if NEED GET
++ * DATA is not enabled
++ */
++ if (!buf_addr && !ublk_need_get_data(ubq))
++ goto out;
++ } else if (buf_addr) {
++ /* User copy requires addr to be unset */
++ ret = -EINVAL;
++ goto out;
++ }
++
++ ublk_fill_io_cmd(io, cmd, buf_addr);
++ ublk_mark_io_ready(ub, ubq);
++out:
++ mutex_unlock(&ub->mutex);
++ return ret;
++}
++
+ static int __ublk_ch_uring_cmd(struct io_uring_cmd *cmd,
+ unsigned int issue_flags,
+ const struct ublksrv_io_cmd *ub_cmd)
+@@ -1902,33 +2007,9 @@ static int __ublk_ch_uring_cmd(struct io_uring_cmd *cmd,
+ ret = -EINVAL;
+ switch (_IOC_NR(cmd_op)) {
+ case UBLK_IO_FETCH_REQ:
+- /* UBLK_IO_FETCH_REQ is only allowed before queue is setup */
+- if (ublk_queue_ready(ubq)) {
+- ret = -EBUSY;
+- goto out;
+- }
+- /*
+- * The io is being handled by server, so COMMIT_RQ is expected
+- * instead of FETCH_REQ
+- */
+- if (io->flags & UBLK_IO_FLAG_OWNED_BY_SRV)
+- goto out;
+-
+- if (!ublk_support_user_copy(ubq)) {
+- /*
+- * FETCH_RQ has to provide IO buffer if NEED GET
+- * DATA is not enabled
+- */
+- if (!ub_cmd->addr && !ublk_need_get_data(ubq))
+- goto out;
+- } else if (ub_cmd->addr) {
+- /* User copy requires addr to be unset */
+- ret = -EINVAL;
++ ret = ublk_fetch(cmd, ubq, io, ub_cmd->addr);
++ if (ret)
+ goto out;
+- }
+-
+- ublk_fill_io_cmd(io, cmd, ub_cmd->addr);
+- ublk_mark_io_ready(ub, ubq);
+ break;
+ case UBLK_IO_COMMIT_AND_FETCH_REQ:
+ req = blk_mq_tag_to_rq(ub->tag_set.tags[ub_cmd->q_id], tag);
+@@ -1936,7 +2017,7 @@ static int __ublk_ch_uring_cmd(struct io_uring_cmd *cmd,
+ if (!(io->flags & UBLK_IO_FLAG_OWNED_BY_SRV))
+ goto out;
+
+- if (!ublk_support_user_copy(ubq)) {
++ if (ublk_need_map_io(ubq)) {
+ /*
+ * COMMIT_AND_FETCH_REQ has to provide IO buffer if
+ * NEED GET DATA is not enabled or it is Read IO.
+@@ -2324,7 +2405,6 @@ static int ublk_add_tag_set(struct ublk_device *ub)
+ static void ublk_remove(struct ublk_device *ub)
+ {
+ ublk_stop_dev(ub);
+- cancel_work_sync(&ub->nosrv_work);
+ cdev_device_del(&ub->cdev, &ub->cdev_dev);
+ ublk_put_device(ub);
+ ublks_added--;
+@@ -2598,7 +2678,6 @@ static int ublk_ctrl_add_dev(struct io_uring_cmd *cmd)
+ goto out_unlock;
+ mutex_init(&ub->mutex);
+ spin_lock_init(&ub->lock);
+- INIT_WORK(&ub->nosrv_work, ublk_nosrv_work);
+
+ ret = ublk_alloc_dev_number(ub, header->dev_id);
+ if (ret < 0)
+@@ -2733,7 +2812,6 @@ static inline void ublk_ctrl_cmd_dump(struct io_uring_cmd *cmd)
+ static int ublk_ctrl_stop_dev(struct ublk_device *ub)
+ {
+ ublk_stop_dev(ub);
+- cancel_work_sync(&ub->nosrv_work);
+ return 0;
+ }
+
+@@ -2840,42 +2918,15 @@ static int ublk_ctrl_set_params(struct ublk_device *ub,
+ return ret;
+ }
+
+-static void ublk_queue_reinit(struct ublk_device *ub, struct ublk_queue *ubq)
+-{
+- int i;
+-
+- WARN_ON_ONCE(!(ubq->ubq_daemon && ubq_daemon_is_dying(ubq)));
+-
+- /* All old ioucmds have to be completed */
+- ubq->nr_io_ready = 0;
+- /* old daemon is PF_EXITING, put it now */
+- put_task_struct(ubq->ubq_daemon);
+- /* We have to reset it to NULL, otherwise ub won't accept new FETCH_REQ */
+- ubq->ubq_daemon = NULL;
+- ubq->timeout = false;
+-
+- for (i = 0; i < ubq->q_depth; i++) {
+- struct ublk_io *io = &ubq->ios[i];
+-
+- /* forget everything now and be ready for new FETCH_REQ */
+- io->flags = 0;
+- io->cmd = NULL;
+- io->addr = 0;
+- }
+-}
+-
+ static int ublk_ctrl_start_recovery(struct ublk_device *ub,
+ struct io_uring_cmd *cmd)
+ {
+ const struct ublksrv_ctrl_cmd *header = io_uring_sqe_cmd(cmd->sqe);
+ int ret = -EINVAL;
+- int i;
+
+ mutex_lock(&ub->mutex);
+ if (ublk_nosrv_should_stop_dev(ub))
+ goto out_unlock;
+- if (!ub->nr_queues_ready)
+- goto out_unlock;
+ /*
+ * START_RECOVERY is only allowed after:
+ *
+@@ -2899,12 +2950,6 @@ static int ublk_ctrl_start_recovery(struct ublk_device *ub,
+ goto out_unlock;
+ }
+ pr_devel("%s: start recovery for dev id %d.\n", __func__, header->dev_id);
+- for (i = 0; i < ub->dev_info.nr_hw_queues; i++)
+- ublk_queue_reinit(ub, ublk_get_queue(ub, i));
+- /* set to NULL, otherwise new ubq_daemon cannot mmap the io_cmd_buf */
+- ub->mm = NULL;
+- ub->nr_queues_ready = 0;
+- ub->nr_privileged_daemon = 0;
+ init_completion(&ub->completion);
+ ret = 0;
+ out_unlock:
+@@ -2918,7 +2963,6 @@ static int ublk_ctrl_end_recovery(struct ublk_device *ub,
+ const struct ublksrv_ctrl_cmd *header = io_uring_sqe_cmd(cmd->sqe);
+ int ublksrv_pid = (int)header->data[0];
+ int ret = -EINVAL;
+- int i;
+
+ pr_devel("%s: Waiting for new ubq_daemons(nr: %d) are ready, dev id %d...\n",
+ __func__, ub->dev_info.nr_hw_queues, header->dev_id);
+@@ -2938,22 +2982,10 @@ static int ublk_ctrl_end_recovery(struct ublk_device *ub,
+ goto out_unlock;
+ }
+ ub->dev_info.ublksrv_pid = ublksrv_pid;
++ ub->dev_info.state = UBLK_S_DEV_LIVE;
+ pr_devel("%s: new ublksrv_pid %d, dev id %d\n",
+ __func__, ublksrv_pid, header->dev_id);
+-
+- blk_mq_quiesce_queue(ub->ub_disk->queue);
+- ub->dev_info.state = UBLK_S_DEV_LIVE;
+- for (i = 0; i < ub->dev_info.nr_hw_queues; i++) {
+- struct ublk_queue *ubq = ublk_get_queue(ub, i);
+-
+- ubq->canceling = false;
+- ubq->fail_io = false;
+- }
+- blk_mq_unquiesce_queue(ub->ub_disk->queue);
+- pr_devel("%s: queue unquiesced, dev id %d.\n",
+- __func__, header->dev_id);
+ blk_mq_kick_requeue_list(ub->ub_disk->queue);
+-
+ ret = 0;
+ out_unlock:
+ mutex_unlock(&ub->mutex);
+diff --git a/drivers/bluetooth/btintel_pcie.c b/drivers/bluetooth/btintel_pcie.c
+index 6130854b6658ac..1636f636fbef0c 100644
+--- a/drivers/bluetooth/btintel_pcie.c
++++ b/drivers/bluetooth/btintel_pcie.c
+@@ -595,8 +595,10 @@ static int btintel_pcie_recv_event(struct hci_dev *hdev, struct sk_buff *skb)
+ /* This is a debug event that comes from IML and OP image when it
+ * starts execution. There is no need to pass this event to the stack.
+ */
+- if (skb->data[2] == 0x97)
++ if (skb->data[2] == 0x97) {
++ hci_recv_diag(hdev, skb);
+ return 0;
++ }
+ }
+
+ return hci_recv_frame(hdev, skb);
+@@ -612,7 +614,6 @@ static int btintel_pcie_recv_frame(struct btintel_pcie_data *data,
+ u8 pkt_type;
+ u16 plen;
+ u32 pcie_pkt_type;
+- struct sk_buff *new_skb;
+ void *pdata;
+ struct hci_dev *hdev = data->hdev;
+
+@@ -689,24 +690,20 @@ static int btintel_pcie_recv_frame(struct btintel_pcie_data *data,
+
+ bt_dev_dbg(hdev, "pkt_type: 0x%2.2x len: %u", pkt_type, plen);
+
+- new_skb = bt_skb_alloc(plen, GFP_ATOMIC);
+- if (!new_skb) {
+- bt_dev_err(hdev, "Failed to allocate memory for skb of len: %u",
+- skb->len);
+- ret = -ENOMEM;
+- goto exit_error;
+- }
+-
+- hci_skb_pkt_type(new_skb) = pkt_type;
+- skb_put_data(new_skb, skb->data, plen);
++ hci_skb_pkt_type(skb) = pkt_type;
+ hdev->stat.byte_rx += plen;
++ skb_trim(skb, plen);
+
+ if (pcie_pkt_type == BTINTEL_PCIE_HCI_EVT_PKT)
+- ret = btintel_pcie_recv_event(hdev, new_skb);
++ ret = btintel_pcie_recv_event(hdev, skb);
+ else
+- ret = hci_recv_frame(hdev, new_skb);
++ ret = hci_recv_frame(hdev, skb);
++ skb = NULL; /* skb is freed in the callee */
+
+ exit_error:
++ if (skb)
++ kfree_skb(skb);
++
+ if (ret)
+ hdev->stat.err_rx++;
+
+@@ -720,16 +717,10 @@ static void btintel_pcie_rx_work(struct work_struct *work)
+ struct btintel_pcie_data *data = container_of(work,
+ struct btintel_pcie_data, rx_work);
+ struct sk_buff *skb;
+- int err;
+- struct hci_dev *hdev = data->hdev;
+
+ /* Process the sk_buf in queue and send to the HCI layer */
+ while ((skb = skb_dequeue(&data->rx_skb_q))) {
+- err = btintel_pcie_recv_frame(data, skb);
+- if (err)
+- bt_dev_err(hdev, "Failed to send received frame: %d",
+- err);
+- kfree_skb(skb);
++ btintel_pcie_recv_frame(data, skb);
+ }
+ }
+
+@@ -782,10 +773,8 @@ static void btintel_pcie_msix_rx_handle(struct btintel_pcie_data *data)
+ bt_dev_dbg(hdev, "RXQ: cr_hia: %u cr_tia: %u", cr_hia, cr_tia);
+
+ /* Check CR_TIA and CR_HIA for change */
+- if (cr_tia == cr_hia) {
+- bt_dev_warn(hdev, "RXQ: no new CD found");
++ if (cr_tia == cr_hia)
+ return;
+- }
+
+ rxq = &data->rxq;
+
+@@ -821,6 +810,16 @@ static irqreturn_t btintel_pcie_msix_isr(int irq, void *data)
+ return IRQ_WAKE_THREAD;
+ }
+
++static inline bool btintel_pcie_is_rxq_empty(struct btintel_pcie_data *data)
++{
++ return data->ia.cr_hia[BTINTEL_PCIE_RXQ_NUM] == data->ia.cr_tia[BTINTEL_PCIE_RXQ_NUM];
++}
++
++static inline bool btintel_pcie_is_txackq_empty(struct btintel_pcie_data *data)
++{
++ return data->ia.cr_tia[BTINTEL_PCIE_TXQ_NUM] == data->ia.cr_hia[BTINTEL_PCIE_TXQ_NUM];
++}
++
+ static irqreturn_t btintel_pcie_irq_msix_handler(int irq, void *dev_id)
+ {
+ struct msix_entry *entry = dev_id;
+@@ -848,12 +847,18 @@ static irqreturn_t btintel_pcie_irq_msix_handler(int irq, void *dev_id)
+ btintel_pcie_msix_gp0_handler(data);
+
+ /* For TX */
+- if (intr_fh & BTINTEL_PCIE_MSIX_FH_INT_CAUSES_0)
++ if (intr_fh & BTINTEL_PCIE_MSIX_FH_INT_CAUSES_0) {
+ btintel_pcie_msix_tx_handle(data);
++ if (!btintel_pcie_is_rxq_empty(data))
++ btintel_pcie_msix_rx_handle(data);
++ }
+
+ /* For RX */
+- if (intr_fh & BTINTEL_PCIE_MSIX_FH_INT_CAUSES_1)
++ if (intr_fh & BTINTEL_PCIE_MSIX_FH_INT_CAUSES_1) {
+ btintel_pcie_msix_rx_handle(data);
++ if (!btintel_pcie_is_txackq_empty(data))
++ btintel_pcie_msix_tx_handle(data);
++ }
+
+ /*
+ * Before sending the interrupt the HW disables it to prevent a nested
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index bfd769f2026b30..ccd0a21da39554 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -3010,22 +3010,16 @@ static void btusb_coredump_qca(struct hci_dev *hdev)
+ bt_dev_err(hdev, "%s: triggle crash failed (%d)", __func__, err);
+ }
+
+-/*
+- * ==0: not a dump pkt.
+- * < 0: fails to handle a dump pkt
+- * > 0: otherwise.
+- */
++/* Return: 0 on success, negative errno on failure. */
+ static int handle_dump_pkt_qca(struct hci_dev *hdev, struct sk_buff *skb)
+ {
+- int ret = 1;
++ int ret = 0;
+ u8 pkt_type;
+ u8 *sk_ptr;
+ unsigned int sk_len;
+ u16 seqno;
+ u32 dump_size;
+
+- struct hci_event_hdr *event_hdr;
+- struct hci_acl_hdr *acl_hdr;
+ struct qca_dump_hdr *dump_hdr;
+ struct btusb_data *btdata = hci_get_drvdata(hdev);
+ struct usb_device *udev = btdata->udev;
+@@ -3035,30 +3029,14 @@ static int handle_dump_pkt_qca(struct hci_dev *hdev, struct sk_buff *skb)
+ sk_len = skb->len;
+
+ if (pkt_type == HCI_ACLDATA_PKT) {
+- acl_hdr = hci_acl_hdr(skb);
+- if (le16_to_cpu(acl_hdr->handle) != QCA_MEMDUMP_ACL_HANDLE)
+- return 0;
+ sk_ptr += HCI_ACL_HDR_SIZE;
+ sk_len -= HCI_ACL_HDR_SIZE;
+- event_hdr = (struct hci_event_hdr *)sk_ptr;
+- } else {
+- event_hdr = hci_event_hdr(skb);
+ }
+
+- if ((event_hdr->evt != HCI_VENDOR_PKT)
+- || (event_hdr->plen != (sk_len - HCI_EVENT_HDR_SIZE)))
+- return 0;
+-
+ sk_ptr += HCI_EVENT_HDR_SIZE;
+ sk_len -= HCI_EVENT_HDR_SIZE;
+
+ dump_hdr = (struct qca_dump_hdr *)sk_ptr;
+- if ((sk_len < offsetof(struct qca_dump_hdr, data))
+- || (dump_hdr->vse_class != QCA_MEMDUMP_VSE_CLASS)
+- || (dump_hdr->msg_type != QCA_MEMDUMP_MSG_TYPE))
+- return 0;
+-
+- /*it is dump pkt now*/
+ seqno = le16_to_cpu(dump_hdr->seqno);
+ if (seqno == 0) {
+ set_bit(BTUSB_HW_SSR_ACTIVE, &btdata->flags);
+@@ -3132,17 +3110,84 @@ static int handle_dump_pkt_qca(struct hci_dev *hdev, struct sk_buff *skb)
+ return ret;
+ }
+
++/* Return: true if the ACL packet is a dump packet, false otherwise. */
++static bool acl_pkt_is_dump_qca(struct hci_dev *hdev, struct sk_buff *skb)
++{
++ u8 *sk_ptr;
++ unsigned int sk_len;
++
++ struct hci_event_hdr *event_hdr;
++ struct hci_acl_hdr *acl_hdr;
++ struct qca_dump_hdr *dump_hdr;
++
++ sk_ptr = skb->data;
++ sk_len = skb->len;
++
++ acl_hdr = hci_acl_hdr(skb);
++ if (le16_to_cpu(acl_hdr->handle) != QCA_MEMDUMP_ACL_HANDLE)
++ return false;
++
++ sk_ptr += HCI_ACL_HDR_SIZE;
++ sk_len -= HCI_ACL_HDR_SIZE;
++ event_hdr = (struct hci_event_hdr *)sk_ptr;
++
++ if ((event_hdr->evt != HCI_VENDOR_PKT) ||
++ (event_hdr->plen != (sk_len - HCI_EVENT_HDR_SIZE)))
++ return false;
++
++ sk_ptr += HCI_EVENT_HDR_SIZE;
++ sk_len -= HCI_EVENT_HDR_SIZE;
++
++ dump_hdr = (struct qca_dump_hdr *)sk_ptr;
++ if ((sk_len < offsetof(struct qca_dump_hdr, data)) ||
++ (dump_hdr->vse_class != QCA_MEMDUMP_VSE_CLASS) ||
++ (dump_hdr->msg_type != QCA_MEMDUMP_MSG_TYPE))
++ return false;
++
++ return true;
++}
++
++/* Return: true if the event packet is a dump packet, false otherwise. */
++static bool evt_pkt_is_dump_qca(struct hci_dev *hdev, struct sk_buff *skb)
++{
++ u8 *sk_ptr;
++ unsigned int sk_len;
++
++ struct hci_event_hdr *event_hdr;
++ struct qca_dump_hdr *dump_hdr;
++
++ sk_ptr = skb->data;
++ sk_len = skb->len;
++
++ event_hdr = hci_event_hdr(skb);
++
++ if ((event_hdr->evt != HCI_VENDOR_PKT)
++ || (event_hdr->plen != (sk_len - HCI_EVENT_HDR_SIZE)))
++ return false;
++
++ sk_ptr += HCI_EVENT_HDR_SIZE;
++ sk_len -= HCI_EVENT_HDR_SIZE;
++
++ dump_hdr = (struct qca_dump_hdr *)sk_ptr;
++ if ((sk_len < offsetof(struct qca_dump_hdr, data)) ||
++ (dump_hdr->vse_class != QCA_MEMDUMP_VSE_CLASS) ||
++ (dump_hdr->msg_type != QCA_MEMDUMP_MSG_TYPE))
++ return false;
++
++ return true;
++}
++
+ static int btusb_recv_acl_qca(struct hci_dev *hdev, struct sk_buff *skb)
+ {
+- if (handle_dump_pkt_qca(hdev, skb))
+- return 0;
++ if (acl_pkt_is_dump_qca(hdev, skb))
++ return handle_dump_pkt_qca(hdev, skb);
+ return hci_recv_frame(hdev, skb);
+ }
+
+ static int btusb_recv_evt_qca(struct hci_dev *hdev, struct sk_buff *skb)
+ {
+- if (handle_dump_pkt_qca(hdev, skb))
+- return 0;
++ if (evt_pkt_is_dump_qca(hdev, skb))
++ return handle_dump_pkt_qca(hdev, skb);
+ return hci_recv_frame(hdev, skb);
+ }
+
+diff --git a/drivers/cpufreq/acpi-cpufreq.c b/drivers/cpufreq/acpi-cpufreq.c
+index 463b69a2dff52e..453b629d3de658 100644
+--- a/drivers/cpufreq/acpi-cpufreq.c
++++ b/drivers/cpufreq/acpi-cpufreq.c
+@@ -909,6 +909,20 @@ static int acpi_cpufreq_cpu_init(struct cpufreq_policy *policy)
+ if (perf->states[0].core_frequency * 1000 != freq_table[0].frequency)
+ pr_warn(FW_WARN "P-state 0 is not max freq\n");
+
++ if (acpi_cpufreq_driver.set_boost) {
++ if (policy->boost_supported) {
++ /*
++ * The firmware may have altered boost state while the
++ * CPU was offline (for example during a suspend-resume
++ * cycle).
++ */
++ if (policy->boost_enabled != boost_state(cpu))
++ set_boost(policy, policy->boost_enabled);
++ } else {
++ policy->boost_supported = true;
++ }
++ }
++
+ return result;
+
+ err_unreg:
+diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
+index 934e0e19824ce1..61cbfb56bf4edc 100644
+--- a/drivers/cpufreq/cpufreq.c
++++ b/drivers/cpufreq/cpufreq.c
+@@ -535,16 +535,18 @@ void cpufreq_disable_fast_switch(struct cpufreq_policy *policy)
+ EXPORT_SYMBOL_GPL(cpufreq_disable_fast_switch);
+
+ static unsigned int __resolve_freq(struct cpufreq_policy *policy,
+- unsigned int target_freq, unsigned int relation)
++ unsigned int target_freq,
++ unsigned int min, unsigned int max,
++ unsigned int relation)
+ {
+ unsigned int idx;
+
+- target_freq = clamp_val(target_freq, policy->min, policy->max);
++ target_freq = clamp_val(target_freq, min, max);
+
+ if (!policy->freq_table)
+ return target_freq;
+
+- idx = cpufreq_frequency_table_target(policy, target_freq, relation);
++ idx = cpufreq_frequency_table_target(policy, target_freq, min, max, relation);
+ policy->cached_resolved_idx = idx;
+ policy->cached_target_freq = target_freq;
+ return policy->freq_table[idx].frequency;
+@@ -564,7 +566,21 @@ static unsigned int __resolve_freq(struct cpufreq_policy *policy,
+ unsigned int cpufreq_driver_resolve_freq(struct cpufreq_policy *policy,
+ unsigned int target_freq)
+ {
+- return __resolve_freq(policy, target_freq, CPUFREQ_RELATION_LE);
++ unsigned int min = READ_ONCE(policy->min);
++ unsigned int max = READ_ONCE(policy->max);
++
++ /*
++ * If this function runs in parallel with cpufreq_set_policy(), it may
++ * read policy->min before the update and policy->max after the update
++ * or the other way around, so there is no ordering guarantee.
++ *
++ * Resolve this by always honoring the max (in case it comes from
++ * thermal throttling or similar).
++ */
++ if (unlikely(min > max))
++ min = max;
++
++ return __resolve_freq(policy, target_freq, min, max, CPUFREQ_RELATION_LE);
+ }
+ EXPORT_SYMBOL_GPL(cpufreq_driver_resolve_freq);
+
+@@ -2337,7 +2353,8 @@ int __cpufreq_driver_target(struct cpufreq_policy *policy,
+ if (cpufreq_disabled())
+ return -ENODEV;
+
+- target_freq = __resolve_freq(policy, target_freq, relation);
++ target_freq = __resolve_freq(policy, target_freq, policy->min,
++ policy->max, relation);
+
+ pr_debug("target for CPU %u: %u kHz, relation %u, requested %u kHz\n",
+ policy->cpu, target_freq, relation, old_target_freq);
+@@ -2661,11 +2678,18 @@ static int cpufreq_set_policy(struct cpufreq_policy *policy,
+ * Resolve policy min/max to available frequencies. It ensures that
+ * frequency resolution will neither overshoot the requested maximum
+ * nor undershoot the requested minimum.
++ *
++ * Avoid storing intermediate values in policy->max or policy->min and
++ * compiler optimizations around them because they may be accessed
++ * concurrently by cpufreq_driver_resolve_freq() during the update.
+ */
+- policy->min = new_data.min;
+- policy->max = new_data.max;
+- policy->min = __resolve_freq(policy, policy->min, CPUFREQ_RELATION_L);
+- policy->max = __resolve_freq(policy, policy->max, CPUFREQ_RELATION_H);
++ WRITE_ONCE(policy->max, __resolve_freq(policy, new_data.max,
++ new_data.min, new_data.max,
++ CPUFREQ_RELATION_H));
++ new_data.min = __resolve_freq(policy, new_data.min, new_data.min,
++ new_data.max, CPUFREQ_RELATION_L);
++ WRITE_ONCE(policy->min, new_data.min > policy->max ? policy->max : new_data.min);
++
+ trace_cpu_frequency_limits(policy);
+
+ cpufreq_update_pressure(policy);
+diff --git a/drivers/cpufreq/cpufreq_ondemand.c b/drivers/cpufreq/cpufreq_ondemand.c
+index a7c38b8b3e7890..0e65d37c923113 100644
+--- a/drivers/cpufreq/cpufreq_ondemand.c
++++ b/drivers/cpufreq/cpufreq_ondemand.c
+@@ -76,7 +76,8 @@ static unsigned int generic_powersave_bias_target(struct cpufreq_policy *policy,
+ return freq_next;
+ }
+
+- index = cpufreq_frequency_table_target(policy, freq_next, relation);
++ index = cpufreq_frequency_table_target(policy, freq_next, policy->min,
++ policy->max, relation);
+ freq_req = freq_table[index].frequency;
+ freq_reduc = freq_req * od_tuners->powersave_bias / 1000;
+ freq_avg = freq_req - freq_reduc;
+diff --git a/drivers/cpufreq/freq_table.c b/drivers/cpufreq/freq_table.c
+index 10e80d912b8d85..9db21ffc11979d 100644
+--- a/drivers/cpufreq/freq_table.c
++++ b/drivers/cpufreq/freq_table.c
+@@ -116,8 +116,8 @@ int cpufreq_generic_frequency_table_verify(struct cpufreq_policy_data *policy)
+ EXPORT_SYMBOL_GPL(cpufreq_generic_frequency_table_verify);
+
+ int cpufreq_table_index_unsorted(struct cpufreq_policy *policy,
+- unsigned int target_freq,
+- unsigned int relation)
++ unsigned int target_freq, unsigned int min,
++ unsigned int max, unsigned int relation)
+ {
+ struct cpufreq_frequency_table optimal = {
+ .driver_data = ~0,
+@@ -148,7 +148,7 @@ int cpufreq_table_index_unsorted(struct cpufreq_policy *policy,
+ cpufreq_for_each_valid_entry_idx(pos, table, i) {
+ freq = pos->frequency;
+
+- if ((freq < policy->min) || (freq > policy->max))
++ if (freq < min || freq > max)
+ continue;
+ if (freq == target_freq) {
+ optimal.driver_data = i;
+@@ -367,6 +367,10 @@ int cpufreq_table_validate_and_sort(struct cpufreq_policy *policy)
+ if (ret)
+ return ret;
+
++ /* Drivers may have set this field already */
++ if (policy_has_boost_freq(policy))
++ policy->boost_supported = true;
++
+ return set_freq_table_sorted(policy);
+ }
+
+diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
+index 9c4cc01fd51aad..43e847e9f741e3 100644
+--- a/drivers/cpufreq/intel_pstate.c
++++ b/drivers/cpufreq/intel_pstate.c
+@@ -598,6 +598,9 @@ static bool turbo_is_disabled(void)
+ {
+ u64 misc_en;
+
++ if (!cpu_feature_enabled(X86_FEATURE_IDA))
++ return true;
++
+ rdmsrl(MSR_IA32_MISC_ENABLE, misc_en);
+
+ return !!(misc_en & MSR_IA32_MISC_ENABLE_TURBO_DISABLE);
+diff --git a/drivers/edac/altera_edac.c b/drivers/edac/altera_edac.c
+index 3e971f90236330..dcd7008fe06b05 100644
+--- a/drivers/edac/altera_edac.c
++++ b/drivers/edac/altera_edac.c
+@@ -99,7 +99,7 @@ static irqreturn_t altr_sdram_mc_err_handler(int irq, void *dev_id)
+ if (status & priv->ecc_stat_ce_mask) {
+ regmap_read(drvdata->mc_vbase, priv->ecc_saddr_offset,
+ &err_addr);
+- if (priv->ecc_uecnt_offset)
++ if (priv->ecc_cecnt_offset)
+ regmap_read(drvdata->mc_vbase, priv->ecc_cecnt_offset,
+ &err_count);
+ edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci, err_count,
+@@ -1005,9 +1005,6 @@ altr_init_a10_ecc_block(struct device_node *np, u32 irq_mask,
+ }
+ }
+
+- /* Interrupt mode set to every SBERR */
+- regmap_write(ecc_mgr_map, ALTR_A10_ECC_INTMODE_OFST,
+- ALTR_A10_ECC_INTMODE);
+ /* Enable ECC */
+ ecc_set_bits(ecc_ctrl_en_mask, (ecc_block_base +
+ ALTR_A10_ECC_CTRL_OFST));
+@@ -2127,6 +2124,10 @@ static int altr_edac_a10_probe(struct platform_device *pdev)
+ return PTR_ERR(edac->ecc_mgr_map);
+ }
+
++ /* Set irq mask for DDR SBE to avoid any pending irq before registration */
++ regmap_write(edac->ecc_mgr_map, A10_SYSMGR_ECC_INTMASK_SET_OFST,
++ (A10_SYSMGR_ECC_INTMASK_SDMMCB | A10_SYSMGR_ECC_INTMASK_DDR0));
++
+ edac->irq_chip.name = pdev->dev.of_node->name;
+ edac->irq_chip.irq_mask = a10_eccmgr_irq_mask;
+ edac->irq_chip.irq_unmask = a10_eccmgr_irq_unmask;
+diff --git a/drivers/edac/altera_edac.h b/drivers/edac/altera_edac.h
+index 3727e72c8c2e70..7248d24c4908d7 100644
+--- a/drivers/edac/altera_edac.h
++++ b/drivers/edac/altera_edac.h
+@@ -249,6 +249,8 @@ struct altr_sdram_mc_data {
+ #define A10_SYSMGR_ECC_INTMASK_SET_OFST 0x94
+ #define A10_SYSMGR_ECC_INTMASK_CLR_OFST 0x98
+ #define A10_SYSMGR_ECC_INTMASK_OCRAM BIT(1)
++#define A10_SYSMGR_ECC_INTMASK_SDMMCB BIT(16)
++#define A10_SYSMGR_ECC_INTMASK_DDR0 BIT(17)
+
+ #define A10_SYSMGR_ECC_INTSTAT_SERR_OFST 0x9C
+ #define A10_SYSMGR_ECC_INTSTAT_DERR_OFST 0xA0
+diff --git a/drivers/firmware/arm_ffa/driver.c b/drivers/firmware/arm_ffa/driver.c
+index 655672a8809595..03d22cbb2ad470 100644
+--- a/drivers/firmware/arm_ffa/driver.c
++++ b/drivers/firmware/arm_ffa/driver.c
+@@ -280,7 +280,8 @@ __ffa_partition_info_get(u32 uuid0, u32 uuid1, u32 uuid2, u32 uuid3,
+ memcpy(buffer + idx, drv_info->rx_buffer + idx * sz,
+ buf_sz);
+
+- ffa_rx_release();
++ if (!(flags & PARTITION_INFO_GET_RETURN_COUNT_ONLY))
++ ffa_rx_release();
+
+ mutex_unlock(&drv_info->rx_lock);
+
+diff --git a/drivers/firmware/arm_scmi/bus.c b/drivers/firmware/arm_scmi/bus.c
+index a3386bf36de508..7d7af2262c0139 100644
+--- a/drivers/firmware/arm_scmi/bus.c
++++ b/drivers/firmware/arm_scmi/bus.c
+@@ -260,6 +260,9 @@ static struct scmi_device *scmi_child_dev_find(struct device *parent,
+ if (!dev)
+ return NULL;
+
++ /* Drop the refcnt bumped implicitly by device_find_child */
++ put_device(dev);
++
+ return to_scmi_dev(dev);
+ }
+
+diff --git a/drivers/firmware/cirrus/Kconfig b/drivers/firmware/cirrus/Kconfig
+index 0a883091259a2c..e3c2e38b746df9 100644
+--- a/drivers/firmware/cirrus/Kconfig
++++ b/drivers/firmware/cirrus/Kconfig
+@@ -6,14 +6,11 @@ config FW_CS_DSP
+
+ config FW_CS_DSP_KUNIT_TEST_UTILS
+ tristate
+- depends on KUNIT && REGMAP
+- select FW_CS_DSP
+
+ config FW_CS_DSP_KUNIT_TEST
+ tristate "KUnit tests for Cirrus Logic cs_dsp" if !KUNIT_ALL_TESTS
+- depends on KUNIT && REGMAP
++ depends on KUNIT && REGMAP && FW_CS_DSP
+ default KUNIT_ALL_TESTS
+- select FW_CS_DSP
+ select FW_CS_DSP_KUNIT_TEST_UTILS
+ help
+ This builds KUnit tests for cs_dsp.
+diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
+index fbef3f471bd0e5..bd228dc77e99b4 100644
+--- a/drivers/gpu/drm/Kconfig
++++ b/drivers/gpu/drm/Kconfig
+@@ -188,7 +188,7 @@ config DRM_DEBUG_DP_MST_TOPOLOGY_REFS
+ bool "Enable refcount backtrace history in the DP MST helpers"
+ depends on STACKTRACE_SUPPORT
+ select STACKDEPOT
+- depends on DRM_KMS_HELPER
++ select DRM_KMS_HELPER
+ depends on DEBUG_KERNEL
+ depends on EXPERT
+ help
+diff --git a/drivers/gpu/drm/amd/amdgpu/nbio_v7_11.c b/drivers/gpu/drm/amd/amdgpu/nbio_v7_11.c
+index 41421da63a0846..a11f556b3ff113 100644
+--- a/drivers/gpu/drm/amd/amdgpu/nbio_v7_11.c
++++ b/drivers/gpu/drm/amd/amdgpu/nbio_v7_11.c
+@@ -361,7 +361,7 @@ static void nbio_v7_11_get_clockgating_state(struct amdgpu_device *adev,
+ *flags |= AMD_CG_SUPPORT_BIF_LS;
+ }
+
+-#define MMIO_REG_HOLE_OFFSET (0x80000 - PAGE_SIZE)
++#define MMIO_REG_HOLE_OFFSET 0x44000
+
+ static void nbio_v7_11_set_reg_remap(struct amdgpu_device *adev)
+ {
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 76c8e6457175f4..3660e4a1a85f8c 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -1912,26 +1912,6 @@ static enum dmub_ips_disable_type dm_get_default_ips_mode(
+
+ switch (amdgpu_ip_version(adev, DCE_HWIP, 0)) {
+ case IP_VERSION(3, 5, 0):
+- /*
+- * On DCN35 systems with Z8 enabled, it's possible for IPS2 + Z8 to
+- * cause a hard hang. A fix exists for newer PMFW.
+- *
+- * As a workaround, for non-fixed PMFW, force IPS1+RCG as the deepest
+- * IPS state in all cases, except for s0ix and all displays off (DPMS),
+- * where IPS2 is allowed.
+- *
+- * When checking pmfw version, use the major and minor only.
+- */
+- if ((adev->pm.fw_version & 0x00FFFF00) < 0x005D6300)
+- ret = DMUB_IPS_RCG_IN_ACTIVE_IPS2_IN_OFF;
+- else if (amdgpu_ip_version(adev, GC_HWIP, 0) > IP_VERSION(11, 5, 0))
+- /*
+- * Other ASICs with DCN35 that have residency issues with
+- * IPS2 in idle.
+- * We want them to use IPS2 only in display off cases.
+- */
+- ret = DMUB_IPS_RCG_IN_ACTIVE_IPS2_IN_OFF;
+- break;
+ case IP_VERSION(3, 5, 1):
+ ret = DMUB_IPS_RCG_IN_ACTIVE_IPS2_IN_OFF;
+ break;
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c
+index c0dc2324404908..10ba4d7bf63254 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c
+@@ -172,7 +172,10 @@ void hdcp_update_display(struct hdcp_workqueue *hdcp_work,
+ struct mod_hdcp_display_adjustment display_adjust;
+ unsigned int conn_index = aconnector->base.index;
+
+- mutex_lock(&hdcp_w->mutex);
++ guard(mutex)(&hdcp_w->mutex);
++ drm_connector_get(&aconnector->base);
++ if (hdcp_w->aconnector[conn_index])
++ drm_connector_put(&hdcp_w->aconnector[conn_index]->base);
+ hdcp_w->aconnector[conn_index] = aconnector;
+
+ memset(&link_adjust, 0, sizeof(link_adjust));
+@@ -209,7 +212,6 @@ void hdcp_update_display(struct hdcp_workqueue *hdcp_work,
+ mod_hdcp_update_display(&hdcp_w->hdcp, conn_index, &link_adjust, &display_adjust, &hdcp_w->output);
+
+ process_output(hdcp_w);
+- mutex_unlock(&hdcp_w->mutex);
+ }
+
+ static void hdcp_remove_display(struct hdcp_workqueue *hdcp_work,
+@@ -220,8 +222,7 @@ static void hdcp_remove_display(struct hdcp_workqueue *hdcp_work,
+ struct drm_connector_state *conn_state = aconnector->base.state;
+ unsigned int conn_index = aconnector->base.index;
+
+- mutex_lock(&hdcp_w->mutex);
+- hdcp_w->aconnector[conn_index] = aconnector;
++ guard(mutex)(&hdcp_w->mutex);
+
+ /* the removal of display will invoke auth reset -> hdcp destroy and
+ * we'd expect the Content Protection (CP) property changed back to
+@@ -237,9 +238,11 @@ static void hdcp_remove_display(struct hdcp_workqueue *hdcp_work,
+ }
+
+ mod_hdcp_remove_display(&hdcp_w->hdcp, aconnector->base.index, &hdcp_w->output);
+-
++ if (hdcp_w->aconnector[conn_index]) {
++ drm_connector_put(&hdcp_w->aconnector[conn_index]->base);
++ hdcp_w->aconnector[conn_index] = NULL;
++ }
+ process_output(hdcp_w);
+- mutex_unlock(&hdcp_w->mutex);
+ }
+
+ void hdcp_reset_display(struct hdcp_workqueue *hdcp_work, unsigned int link_index)
+@@ -247,7 +250,7 @@ void hdcp_reset_display(struct hdcp_workqueue *hdcp_work, unsigned int link_inde
+ struct hdcp_workqueue *hdcp_w = &hdcp_work[link_index];
+ unsigned int conn_index;
+
+- mutex_lock(&hdcp_w->mutex);
++ guard(mutex)(&hdcp_w->mutex);
+
+ mod_hdcp_reset_connection(&hdcp_w->hdcp, &hdcp_w->output);
+
+@@ -256,11 +259,13 @@ void hdcp_reset_display(struct hdcp_workqueue *hdcp_work, unsigned int link_inde
+ for (conn_index = 0; conn_index < AMDGPU_DM_MAX_DISPLAY_INDEX; conn_index++) {
+ hdcp_w->encryption_status[conn_index] =
+ MOD_HDCP_ENCRYPTION_STATUS_HDCP_OFF;
++ if (hdcp_w->aconnector[conn_index]) {
++ drm_connector_put(&hdcp_w->aconnector[conn_index]->base);
++ hdcp_w->aconnector[conn_index] = NULL;
++ }
+ }
+
+ process_output(hdcp_w);
+-
+- mutex_unlock(&hdcp_w->mutex);
+ }
+
+ void hdcp_handle_cpirq(struct hdcp_workqueue *hdcp_work, unsigned int link_index)
+@@ -277,7 +282,7 @@ static void event_callback(struct work_struct *work)
+ hdcp_work = container_of(to_delayed_work(work), struct hdcp_workqueue,
+ callback_dwork);
+
+- mutex_lock(&hdcp_work->mutex);
++ guard(mutex)(&hdcp_work->mutex);
+
+ cancel_delayed_work(&hdcp_work->callback_dwork);
+
+@@ -285,8 +290,6 @@ static void event_callback(struct work_struct *work)
+ &hdcp_work->output);
+
+ process_output(hdcp_work);
+-
+- mutex_unlock(&hdcp_work->mutex);
+ }
+
+ static void event_property_update(struct work_struct *work)
+@@ -323,7 +326,7 @@ static void event_property_update(struct work_struct *work)
+ continue;
+
+ drm_modeset_lock(&dev->mode_config.connection_mutex, NULL);
+- mutex_lock(&hdcp_work->mutex);
++ guard(mutex)(&hdcp_work->mutex);
+
+ if (conn_state->commit) {
+ ret = wait_for_completion_interruptible_timeout(&conn_state->commit->hw_done,
+@@ -355,7 +358,6 @@ static void event_property_update(struct work_struct *work)
+ drm_hdcp_update_content_protection(connector,
+ DRM_MODE_CONTENT_PROTECTION_DESIRED);
+ }
+- mutex_unlock(&hdcp_work->mutex);
+ drm_modeset_unlock(&dev->mode_config.connection_mutex);
+ }
+ }
+@@ -368,7 +370,7 @@ static void event_property_validate(struct work_struct *work)
+ struct amdgpu_dm_connector *aconnector;
+ unsigned int conn_index;
+
+- mutex_lock(&hdcp_work->mutex);
++ guard(mutex)(&hdcp_work->mutex);
+
+ for (conn_index = 0; conn_index < AMDGPU_DM_MAX_DISPLAY_INDEX;
+ conn_index++) {
+@@ -408,8 +410,6 @@ static void event_property_validate(struct work_struct *work)
+ schedule_work(&hdcp_work->property_update_work);
+ }
+ }
+-
+- mutex_unlock(&hdcp_work->mutex);
+ }
+
+ static void event_watchdog_timer(struct work_struct *work)
+@@ -420,7 +420,7 @@ static void event_watchdog_timer(struct work_struct *work)
+ struct hdcp_workqueue,
+ watchdog_timer_dwork);
+
+- mutex_lock(&hdcp_work->mutex);
++ guard(mutex)(&hdcp_work->mutex);
+
+ cancel_delayed_work(&hdcp_work->watchdog_timer_dwork);
+
+@@ -429,8 +429,6 @@ static void event_watchdog_timer(struct work_struct *work)
+ &hdcp_work->output);
+
+ process_output(hdcp_work);
+-
+- mutex_unlock(&hdcp_work->mutex);
+ }
+
+ static void event_cpirq(struct work_struct *work)
+@@ -439,13 +437,11 @@ static void event_cpirq(struct work_struct *work)
+
+ hdcp_work = container_of(work, struct hdcp_workqueue, cpirq_work);
+
+- mutex_lock(&hdcp_work->mutex);
++ guard(mutex)(&hdcp_work->mutex);
+
+ mod_hdcp_process_event(&hdcp_work->hdcp, MOD_HDCP_EVENT_CPIRQ, &hdcp_work->output);
+
+ process_output(hdcp_work);
+-
+- mutex_unlock(&hdcp_work->mutex);
+ }
+
+ void hdcp_destroy(struct kobject *kobj, struct hdcp_workqueue *hdcp_work)
+@@ -479,7 +475,7 @@ static bool enable_assr(void *handle, struct dc_link *link)
+
+ dtm_cmd = (struct ta_dtm_shared_memory *)psp->dtm_context.context.mem_context.shared_buf;
+
+- mutex_lock(&psp->dtm_context.mutex);
++ guard(mutex)(&psp->dtm_context.mutex);
+ memset(dtm_cmd, 0, sizeof(struct ta_dtm_shared_memory));
+
+ dtm_cmd->cmd_id = TA_DTM_COMMAND__TOPOLOGY_ASSR_ENABLE;
+@@ -494,8 +490,6 @@ static bool enable_assr(void *handle, struct dc_link *link)
+ res = false;
+ }
+
+- mutex_unlock(&psp->dtm_context.mutex);
+-
+ return res;
+ }
+
+@@ -504,6 +498,7 @@ static void update_config(void *handle, struct cp_psp_stream_config *config)
+ struct hdcp_workqueue *hdcp_work = handle;
+ struct amdgpu_dm_connector *aconnector = config->dm_stream_ctx;
+ int link_index = aconnector->dc_link->link_index;
++ unsigned int conn_index = aconnector->base.index;
+ struct mod_hdcp_display *display = &hdcp_work[link_index].display;
+ struct mod_hdcp_link *link = &hdcp_work[link_index].link;
+ struct hdcp_workqueue *hdcp_w = &hdcp_work[link_index];
+@@ -557,13 +552,14 @@ static void update_config(void *handle, struct cp_psp_stream_config *config)
+ (!!aconnector->base.state) ?
+ aconnector->base.state->hdcp_content_type : -1);
+
+- mutex_lock(&hdcp_w->mutex);
++ guard(mutex)(&hdcp_w->mutex);
+
+ mod_hdcp_add_display(&hdcp_w->hdcp, link, display, &hdcp_w->output);
+-
++ drm_connector_get(&aconnector->base);
++ if (hdcp_w->aconnector[conn_index])
++ drm_connector_put(&hdcp_w->aconnector[conn_index]->base);
++ hdcp_w->aconnector[conn_index] = aconnector;
+ process_output(hdcp_w);
+- mutex_unlock(&hdcp_w->mutex);
+-
+ }
+
+ /**
+diff --git a/drivers/gpu/drm/drm_file.c b/drivers/gpu/drm/drm_file.c
+index c299cd94d3f78f..cf2463090d3acc 100644
+--- a/drivers/gpu/drm/drm_file.c
++++ b/drivers/gpu/drm/drm_file.c
+@@ -964,6 +964,10 @@ void drm_show_fdinfo(struct seq_file *m, struct file *f)
+ struct drm_file *file = f->private_data;
+ struct drm_device *dev = file->minor->dev;
+ struct drm_printer p = drm_seq_file_printer(m);
++ int idx;
++
++ if (!drm_dev_enter(dev, &idx))
++ return;
+
+ drm_printf(&p, "drm-driver:\t%s\n", dev->driver->name);
+ drm_printf(&p, "drm-client-id:\t%llu\n", file->client_id);
+@@ -983,6 +987,8 @@ void drm_show_fdinfo(struct seq_file *m, struct file *f)
+
+ if (dev->driver->show_fdinfo)
+ dev->driver->show_fdinfo(&p, file);
++
++ drm_dev_exit(idx);
+ }
+ EXPORT_SYMBOL(drm_show_fdinfo);
+
+diff --git a/drivers/gpu/drm/drm_mipi_dbi.c b/drivers/gpu/drm/drm_mipi_dbi.c
+index 34bca756757660..3ea9f23b4f67af 100644
+--- a/drivers/gpu/drm/drm_mipi_dbi.c
++++ b/drivers/gpu/drm/drm_mipi_dbi.c
+@@ -404,12 +404,16 @@ static void mipi_dbi_blank(struct mipi_dbi_dev *dbidev)
+ u16 height = drm->mode_config.min_height;
+ u16 width = drm->mode_config.min_width;
+ struct mipi_dbi *dbi = &dbidev->dbi;
+- size_t len = width * height * 2;
++ const struct drm_format_info *dst_format;
++ size_t len;
+ int idx;
+
+ if (!drm_dev_enter(drm, &idx))
+ return;
+
++ dst_format = drm_format_info(dbidev->pixel_format);
++ len = drm_format_info_min_pitch(dst_format, 0, width) * height;
++
+ memset(dbidev->tx_buf, 0, len);
+
+ mipi_dbi_set_window_address(dbidev, 0, width - 1, 0, height - 1);
+diff --git a/drivers/gpu/drm/i915/pxp/intel_pxp_gsccs.h b/drivers/gpu/drm/i915/pxp/intel_pxp_gsccs.h
+index 9aae779c4da318..4969d3de2bac3d 100644
+--- a/drivers/gpu/drm/i915/pxp/intel_pxp_gsccs.h
++++ b/drivers/gpu/drm/i915/pxp/intel_pxp_gsccs.h
+@@ -23,6 +23,7 @@ int intel_pxp_gsccs_init(struct intel_pxp *pxp);
+
+ int intel_pxp_gsccs_create_session(struct intel_pxp *pxp, int arb_session_id);
+ void intel_pxp_gsccs_end_arb_fw_session(struct intel_pxp *pxp, u32 arb_session_id);
++bool intel_pxp_gsccs_is_ready_for_sessions(struct intel_pxp *pxp);
+
+ #else
+ static inline void intel_pxp_gsccs_fini(struct intel_pxp *pxp)
+@@ -34,8 +35,11 @@ static inline int intel_pxp_gsccs_init(struct intel_pxp *pxp)
+ return 0;
+ }
+
+-#endif
++static inline bool intel_pxp_gsccs_is_ready_for_sessions(struct intel_pxp *pxp)
++{
++ return false;
++}
+
+-bool intel_pxp_gsccs_is_ready_for_sessions(struct intel_pxp *pxp);
++#endif
+
+ #endif /*__INTEL_PXP_GSCCS_H__ */
+diff --git a/drivers/gpu/drm/nouveau/nouveau_fence.c b/drivers/gpu/drm/nouveau/nouveau_fence.c
+index 7cc84472cecec2..edddfc036c6d1e 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_fence.c
++++ b/drivers/gpu/drm/nouveau/nouveau_fence.c
+@@ -90,7 +90,7 @@ nouveau_fence_context_kill(struct nouveau_fence_chan *fctx, int error)
+ while (!list_empty(&fctx->pending)) {
+ fence = list_entry(fctx->pending.next, typeof(*fence), head);
+
+- if (error)
++ if (error && !dma_fence_is_signaled_locked(&fence->base))
+ dma_fence_set_error(&fence->base, error);
+
+ if (nouveau_fence_signal(fence))
+diff --git a/drivers/gpu/drm/tests/drm_gem_shmem_test.c b/drivers/gpu/drm/tests/drm_gem_shmem_test.c
+index fd4215e2f982d2..925fbc2cda700a 100644
+--- a/drivers/gpu/drm/tests/drm_gem_shmem_test.c
++++ b/drivers/gpu/drm/tests/drm_gem_shmem_test.c
+@@ -216,6 +216,9 @@ static void drm_gem_shmem_test_get_pages_sgt(struct kunit *test)
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, sgt);
+ KUNIT_EXPECT_NULL(test, shmem->sgt);
+
++ ret = kunit_add_action_or_reset(test, kfree_wrapper, sgt);
++ KUNIT_ASSERT_EQ(test, ret, 0);
++
+ ret = kunit_add_action_or_reset(test, sg_free_table_wrapper, sgt);
+ KUNIT_ASSERT_EQ(test, ret, 0);
+
+diff --git a/drivers/gpu/drm/xe/instructions/xe_gpu_commands.h b/drivers/gpu/drm/xe/instructions/xe_gpu_commands.h
+index a255946b6f77e7..8cfcd3360896c2 100644
+--- a/drivers/gpu/drm/xe/instructions/xe_gpu_commands.h
++++ b/drivers/gpu/drm/xe/instructions/xe_gpu_commands.h
+@@ -41,6 +41,7 @@
+
+ #define GFX_OP_PIPE_CONTROL(len) ((0x3<<29)|(0x3<<27)|(0x2<<24)|((len)-2))
+
++#define PIPE_CONTROL0_L3_READ_ONLY_CACHE_INVALIDATE BIT(10) /* gen12 */
+ #define PIPE_CONTROL0_HDC_PIPELINE_FLUSH BIT(9) /* gen12 */
+
+ #define PIPE_CONTROL_COMMAND_CACHE_INVALIDATE (1<<29)
+diff --git a/drivers/gpu/drm/xe/xe_guc_capture.c b/drivers/gpu/drm/xe/xe_guc_capture.c
+index f6d523e4c5feb7..9095618648bcbc 100644
+--- a/drivers/gpu/drm/xe/xe_guc_capture.c
++++ b/drivers/gpu/drm/xe/xe_guc_capture.c
+@@ -359,7 +359,7 @@ static void __fill_ext_reg(struct __guc_mmio_reg_descr *ext,
+
+ ext->reg = XE_REG(extlist->reg.__reg.addr);
+ ext->flags = FIELD_PREP(GUC_REGSET_STEERING_NEEDED, 1);
+- ext->flags = FIELD_PREP(GUC_REGSET_STEERING_GROUP, slice_id);
++ ext->flags |= FIELD_PREP(GUC_REGSET_STEERING_GROUP, slice_id);
+ ext->flags |= FIELD_PREP(GUC_REGSET_STEERING_INSTANCE, subslice_id);
+ ext->regname = extlist->name;
+ }
+diff --git a/drivers/gpu/drm/xe/xe_ring_ops.c b/drivers/gpu/drm/xe/xe_ring_ops.c
+index 9f327f27c0726e..8d1fb33d923f4d 100644
+--- a/drivers/gpu/drm/xe/xe_ring_ops.c
++++ b/drivers/gpu/drm/xe/xe_ring_ops.c
+@@ -141,7 +141,8 @@ emit_pipe_control(u32 *dw, int i, u32 bit_group_0, u32 bit_group_1, u32 offset,
+ static int emit_pipe_invalidate(u32 mask_flags, bool invalidate_tlb, u32 *dw,
+ int i)
+ {
+- u32 flags = PIPE_CONTROL_CS_STALL |
++ u32 flags0 = 0;
++ u32 flags1 = PIPE_CONTROL_CS_STALL |
+ PIPE_CONTROL_COMMAND_CACHE_INVALIDATE |
+ PIPE_CONTROL_INSTRUCTION_CACHE_INVALIDATE |
+ PIPE_CONTROL_TEXTURE_CACHE_INVALIDATE |
+@@ -152,11 +153,15 @@ static int emit_pipe_invalidate(u32 mask_flags, bool invalidate_tlb, u32 *dw,
+ PIPE_CONTROL_STORE_DATA_INDEX;
+
+ if (invalidate_tlb)
+- flags |= PIPE_CONTROL_TLB_INVALIDATE;
++ flags1 |= PIPE_CONTROL_TLB_INVALIDATE;
+
+- flags &= ~mask_flags;
++ flags1 &= ~mask_flags;
+
+- return emit_pipe_control(dw, i, 0, flags, LRC_PPHWSP_SCRATCH_ADDR, 0);
++ if (flags1 & PIPE_CONTROL_VF_CACHE_INVALIDATE)
++ flags0 |= PIPE_CONTROL0_L3_READ_ONLY_CACHE_INVALIDATE;
++
++ return emit_pipe_control(dw, i, flags0, flags1,
++ LRC_PPHWSP_SCRATCH_ADDR, 0);
+ }
+
+ static int emit_store_imm_ppgtt_posted(u64 addr, u64 value,
+diff --git a/drivers/i2c/busses/i2c-imx-lpi2c.c b/drivers/i2c/busses/i2c-imx-lpi2c.c
+index 0d4b3935e68732..342d47e675869d 100644
+--- a/drivers/i2c/busses/i2c-imx-lpi2c.c
++++ b/drivers/i2c/busses/i2c-imx-lpi2c.c
+@@ -1380,9 +1380,9 @@ static int lpi2c_imx_probe(struct platform_device *pdev)
+ return 0;
+
+ rpm_disable:
+- pm_runtime_put(&pdev->dev);
+- pm_runtime_disable(&pdev->dev);
+ pm_runtime_dont_use_autosuspend(&pdev->dev);
++ pm_runtime_put_sync(&pdev->dev);
++ pm_runtime_disable(&pdev->dev);
+
+ return ret;
+ }
+diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c
+index cb536d372b12ef..fb82f8035c0f29 100644
+--- a/drivers/iommu/amd/init.c
++++ b/drivers/iommu/amd/init.c
+@@ -3677,6 +3677,14 @@ static int __init parse_ivrs_acpihid(char *str)
+ while (*uid == '0' && *(uid + 1))
+ uid++;
+
++ if (strlen(hid) >= ACPIHID_HID_LEN) {
++ pr_err("Invalid command line: hid is too long\n");
++ return 1;
++ } else if (strlen(uid) >= ACPIHID_UID_LEN) {
++ pr_err("Invalid command line: uid is too long\n");
++ return 1;
++ }
++
+ i = early_acpihid_map_size++;
+ memcpy(early_acpihid_map[i].hid, hid, strlen(hid));
+ memcpy(early_acpihid_map[i].uid, uid, strlen(uid));
+diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
+index 9ba596430e7cf9..980cc6b33c430f 100644
+--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
++++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
+@@ -411,6 +411,12 @@ struct iommu_domain *arm_smmu_sva_domain_alloc(struct device *dev,
+ return ERR_CAST(smmu_domain);
+ smmu_domain->domain.type = IOMMU_DOMAIN_SVA;
+ smmu_domain->domain.ops = &arm_smmu_sva_domain_ops;
++
++ /*
++ * Choose page_size as the leaf page size for invalidation when
++ * ARM_SMMU_FEAT_RANGE_INV is present
++ */
++ smmu_domain->domain.pgsize_bitmap = PAGE_SIZE;
+ smmu_domain->smmu = smmu;
+
+ ret = xa_alloc(&arm_smmu_asid_xa, &asid, smmu_domain,
+diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+index 59749e8180afc4..e495334d1c43a1 100644
+--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
++++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+@@ -3364,6 +3364,7 @@ static int arm_smmu_insert_master(struct arm_smmu_device *smmu,
+ mutex_lock(&smmu->streams_mutex);
+ for (i = 0; i < fwspec->num_ids; i++) {
+ struct arm_smmu_stream *new_stream = &master->streams[i];
++ struct rb_node *existing;
+ u32 sid = fwspec->ids[i];
+
+ new_stream->id = sid;
+@@ -3374,10 +3375,20 @@ static int arm_smmu_insert_master(struct arm_smmu_device *smmu,
+ break;
+
+ /* Insert into SID tree */
+- if (rb_find_add(&new_stream->node, &smmu->streams,
+- arm_smmu_streams_cmp_node)) {
+- dev_warn(master->dev, "stream %u already in tree\n",
+- sid);
++ existing = rb_find_add(&new_stream->node, &smmu->streams,
++ arm_smmu_streams_cmp_node);
++ if (existing) {
++ struct arm_smmu_master *existing_master =
++ rb_entry(existing, struct arm_smmu_stream, node)
++ ->master;
++
++ /* Bridged PCI devices may end up with duplicated IDs */
++ if (existing_master == master)
++ continue;
++
++ dev_warn(master->dev,
++ "stream %u already in tree from dev %s\n", sid,
++ dev_name(existing_master->dev));
+ ret = -EINVAL;
+ break;
+ }
+@@ -4405,6 +4416,8 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
+ reg = readl_relaxed(smmu->base + ARM_SMMU_IDR3);
+ if (FIELD_GET(IDR3_RIL, reg))
+ smmu->features |= ARM_SMMU_FEAT_RANGE_INV;
++ if (FIELD_GET(IDR3_FWB, reg))
++ smmu->features |= ARM_SMMU_FEAT_S2FWB;
+
+ /* IDR5 */
+ reg = readl_relaxed(smmu->base + ARM_SMMU_IDR5);
+diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
+index 76417bd5e926e0..07adf4ceeea061 100644
+--- a/drivers/iommu/intel/iommu.c
++++ b/drivers/iommu/intel/iommu.c
+@@ -4504,6 +4504,9 @@ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x2e30, quirk_iommu_igfx);
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x2e40, quirk_iommu_igfx);
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x2e90, quirk_iommu_igfx);
+
++/* QM57/QS57 integrated gfx malfunctions with dmar */
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x0044, quirk_iommu_igfx);
++
+ /* Broadwell igfx malfunctions with dmar */
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x1606, quirk_iommu_igfx);
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x160B, quirk_iommu_igfx);
+@@ -4581,7 +4584,6 @@ static void quirk_calpella_no_shadow_gtt(struct pci_dev *dev)
+ }
+ }
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x0040, quirk_calpella_no_shadow_gtt);
+-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x0044, quirk_calpella_no_shadow_gtt);
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x0062, quirk_calpella_no_shadow_gtt);
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x006a, quirk_calpella_no_shadow_gtt);
+
+diff --git a/drivers/irqchip/irq-qcom-mpm.c b/drivers/irqchip/irq-qcom-mpm.c
+index 7942d8eb3d00ea..f772deb9cba574 100644
+--- a/drivers/irqchip/irq-qcom-mpm.c
++++ b/drivers/irqchip/irq-qcom-mpm.c
+@@ -227,6 +227,9 @@ static int qcom_mpm_alloc(struct irq_domain *domain, unsigned int virq,
+ if (ret)
+ return ret;
+
++ if (pin == GPIO_NO_WAKE_IRQ)
++ return irq_domain_disconnect_hierarchy(domain, virq);
++
+ ret = irq_domain_set_hwirq_and_chip(domain, virq, pin,
+ &qcom_mpm_chip, priv);
+ if (ret)
+diff --git a/drivers/md/dm-bufio.c b/drivers/md/dm-bufio.c
+index aab8240429b0ba..debc533a036582 100644
+--- a/drivers/md/dm-bufio.c
++++ b/drivers/md/dm-bufio.c
+@@ -68,6 +68,8 @@
+ #define LIST_DIRTY 1
+ #define LIST_SIZE 2
+
++#define SCAN_RESCHED_CYCLE 16
++
+ /*--------------------------------------------------------------*/
+
+ /*
+@@ -2426,7 +2428,12 @@ static void __scan(struct dm_bufio_client *c)
+
+ atomic_long_dec(&c->need_shrink);
+ freed++;
+- cond_resched();
++
++ if (unlikely(freed % SCAN_RESCHED_CYCLE == 0)) {
++ dm_bufio_unlock(c);
++ cond_resched();
++ dm_bufio_lock(c);
++ }
+ }
+ }
+ }
+diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
+index 65ab609ac0cb3e..9947962e80f22c 100644
+--- a/drivers/md/dm-integrity.c
++++ b/drivers/md/dm-integrity.c
+@@ -5176,7 +5176,7 @@ static void dm_integrity_dtr(struct dm_target *ti)
+ BUG_ON(!RB_EMPTY_ROOT(&ic->in_progress));
+ BUG_ON(!list_empty(&ic->wait_list));
+
+- if (ic->mode == 'B')
++ if (ic->mode == 'B' && ic->bitmap_flush_work.work.func)
+ cancel_delayed_work_sync(&ic->bitmap_flush_work);
+ if (ic->metadata_wq)
+ destroy_workqueue(ic->metadata_wq);
+diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
+index 0ef5203387b26f..58febd1bc772a4 100644
+--- a/drivers/md/dm-table.c
++++ b/drivers/md/dm-table.c
+@@ -523,8 +523,9 @@ static char **realloc_argv(unsigned int *size, char **old_argv)
+ gfp = GFP_NOIO;
+ }
+ argv = kmalloc_array(new_size, sizeof(*argv), gfp);
+- if (argv && old_argv) {
+- memcpy(argv, old_argv, *size * sizeof(*argv));
++ if (argv) {
++ if (old_argv)
++ memcpy(argv, old_argv, *size * sizeof(*argv));
+ *size = new_size;
+ }
+
+diff --git a/drivers/mmc/host/renesas_sdhi_core.c b/drivers/mmc/host/renesas_sdhi_core.c
+index f73b84bae0c4c7..6ebb3d1eeb4d6f 100644
+--- a/drivers/mmc/host/renesas_sdhi_core.c
++++ b/drivers/mmc/host/renesas_sdhi_core.c
+@@ -1112,26 +1112,26 @@ int renesas_sdhi_probe(struct platform_device *pdev,
+ num_irqs = platform_irq_count(pdev);
+ if (num_irqs < 0) {
+ ret = num_irqs;
+- goto eirq;
++ goto edisclk;
+ }
+
+ /* There must be at least one IRQ source */
+ if (!num_irqs) {
+ ret = -ENXIO;
+- goto eirq;
++ goto edisclk;
+ }
+
+ for (i = 0; i < num_irqs; i++) {
+ irq = platform_get_irq(pdev, i);
+ if (irq < 0) {
+ ret = irq;
+- goto eirq;
++ goto edisclk;
+ }
+
+ ret = devm_request_irq(&pdev->dev, irq, tmio_mmc_irq, 0,
+ dev_name(&pdev->dev), host);
+ if (ret)
+- goto eirq;
++ goto edisclk;
+ }
+
+ ret = tmio_mmc_host_probe(host);
+@@ -1143,8 +1143,6 @@ int renesas_sdhi_probe(struct platform_device *pdev,
+
+ return ret;
+
+-eirq:
+- tmio_mmc_host_remove(host);
+ edisclk:
+ renesas_sdhi_clk_disable(host);
+ efree:
+diff --git a/drivers/net/dsa/ocelot/felix_vsc9959.c b/drivers/net/dsa/ocelot/felix_vsc9959.c
+index 940f1b71226d64..7b35d24c38d765 100644
+--- a/drivers/net/dsa/ocelot/felix_vsc9959.c
++++ b/drivers/net/dsa/ocelot/felix_vsc9959.c
+@@ -1543,7 +1543,7 @@ static void vsc9959_tas_clock_adjust(struct ocelot *ocelot)
+ struct tc_taprio_qopt_offload *taprio;
+ struct ocelot_port *ocelot_port;
+ struct timespec64 base_ts;
+- int port;
++ int i, port;
+ u32 val;
+
+ mutex_lock(&ocelot->fwd_domain_lock);
+@@ -1575,6 +1575,9 @@ static void vsc9959_tas_clock_adjust(struct ocelot *ocelot)
+ QSYS_PARAM_CFG_REG_3_BASE_TIME_SEC_MSB_M,
+ QSYS_PARAM_CFG_REG_3);
+
++ for (i = 0; i < taprio->num_entries; i++)
++ vsc9959_tas_gcl_set(ocelot, i, &taprio->entries[i]);
++
+ ocelot_rmw(ocelot, QSYS_TAS_PARAM_CFG_CTRL_CONFIG_CHANGE,
+ QSYS_TAS_PARAM_CFG_CTRL_CONFIG_CHANGE,
+ QSYS_TAS_PARAM_CFG_CTRL);
+diff --git a/drivers/net/ethernet/amd/pds_core/auxbus.c b/drivers/net/ethernet/amd/pds_core/auxbus.c
+index b76a9b7e0aed66..889a18962270aa 100644
+--- a/drivers/net/ethernet/amd/pds_core/auxbus.c
++++ b/drivers/net/ethernet/amd/pds_core/auxbus.c
+@@ -172,34 +172,31 @@ static struct pds_auxiliary_dev *pdsc_auxbus_dev_register(struct pdsc *cf,
+ return padev;
+ }
+
+-int pdsc_auxbus_dev_del(struct pdsc *cf, struct pdsc *pf)
++void pdsc_auxbus_dev_del(struct pdsc *cf, struct pdsc *pf,
++ struct pds_auxiliary_dev **pd_ptr)
+ {
+ struct pds_auxiliary_dev *padev;
+- int err = 0;
+
+- if (!cf)
+- return -ENODEV;
++ if (!*pd_ptr)
++ return;
+
+ mutex_lock(&pf->config_lock);
+
+- padev = pf->vfs[cf->vf_id].padev;
+- if (padev) {
+- pds_client_unregister(pf, padev->client_id);
+- auxiliary_device_delete(&padev->aux_dev);
+- auxiliary_device_uninit(&padev->aux_dev);
+- padev->client_id = 0;
+- }
+- pf->vfs[cf->vf_id].padev = NULL;
++ padev = *pd_ptr;
++ pds_client_unregister(pf, padev->client_id);
++ auxiliary_device_delete(&padev->aux_dev);
++ auxiliary_device_uninit(&padev->aux_dev);
++ *pd_ptr = NULL;
+
+ mutex_unlock(&pf->config_lock);
+- return err;
+ }
+
+-int pdsc_auxbus_dev_add(struct pdsc *cf, struct pdsc *pf)
++int pdsc_auxbus_dev_add(struct pdsc *cf, struct pdsc *pf,
++ enum pds_core_vif_types vt,
++ struct pds_auxiliary_dev **pd_ptr)
+ {
+ struct pds_auxiliary_dev *padev;
+ char devname[PDS_DEVNAME_LEN];
+- enum pds_core_vif_types vt;
+ unsigned long mask;
+ u16 vt_support;
+ int client_id;
+@@ -208,6 +205,9 @@ int pdsc_auxbus_dev_add(struct pdsc *cf, struct pdsc *pf)
+ if (!cf)
+ return -ENODEV;
+
++ if (vt >= PDS_DEV_TYPE_MAX)
++ return -EINVAL;
++
+ mutex_lock(&pf->config_lock);
+
+ mask = BIT_ULL(PDSC_S_FW_DEAD) |
+@@ -219,17 +219,10 @@ int pdsc_auxbus_dev_add(struct pdsc *cf, struct pdsc *pf)
+ goto out_unlock;
+ }
+
+- /* We only support vDPA so far, so it is the only one to
+- * be verified that it is available in the Core device and
+- * enabled in the devlink param. In the future this might
+- * become a loop for several VIF types.
+- */
+-
+ /* Verify that the type is supported and enabled. It is not
+ * an error if there is no auxbus device support for this
+ * VF, it just means something else needs to happen with it.
+ */
+- vt = PDS_DEV_TYPE_VDPA;
+ vt_support = !!le16_to_cpu(pf->dev_ident.vif_types[vt]);
+ if (!(vt_support &&
+ pf->viftype_status[vt].supported &&
+@@ -255,7 +248,7 @@ int pdsc_auxbus_dev_add(struct pdsc *cf, struct pdsc *pf)
+ err = PTR_ERR(padev);
+ goto out_unlock;
+ }
+- pf->vfs[cf->vf_id].padev = padev;
++ *pd_ptr = padev;
+
+ out_unlock:
+ mutex_unlock(&pf->config_lock);
+diff --git a/drivers/net/ethernet/amd/pds_core/core.h b/drivers/net/ethernet/amd/pds_core/core.h
+index ec637dc4327a5d..becd3104473c2e 100644
+--- a/drivers/net/ethernet/amd/pds_core/core.h
++++ b/drivers/net/ethernet/amd/pds_core/core.h
+@@ -303,8 +303,11 @@ void pdsc_health_thread(struct work_struct *work);
+ int pdsc_register_notify(struct notifier_block *nb);
+ void pdsc_unregister_notify(struct notifier_block *nb);
+ void pdsc_notify(unsigned long event, void *data);
+-int pdsc_auxbus_dev_add(struct pdsc *cf, struct pdsc *pf);
+-int pdsc_auxbus_dev_del(struct pdsc *cf, struct pdsc *pf);
++int pdsc_auxbus_dev_add(struct pdsc *cf, struct pdsc *pf,
++ enum pds_core_vif_types vt,
++ struct pds_auxiliary_dev **pd_ptr);
++void pdsc_auxbus_dev_del(struct pdsc *cf, struct pdsc *pf,
++ struct pds_auxiliary_dev **pd_ptr);
+
+ void pdsc_process_adminq(struct pdsc_qcq *qcq);
+ void pdsc_work_thread(struct work_struct *work);
+diff --git a/drivers/net/ethernet/amd/pds_core/devlink.c b/drivers/net/ethernet/amd/pds_core/devlink.c
+index ca23cde385e67b..d8dc39da4161fb 100644
+--- a/drivers/net/ethernet/amd/pds_core/devlink.c
++++ b/drivers/net/ethernet/amd/pds_core/devlink.c
+@@ -56,8 +56,11 @@ int pdsc_dl_enable_set(struct devlink *dl, u32 id,
+ for (vf_id = 0; vf_id < pdsc->num_vfs; vf_id++) {
+ struct pdsc *vf = pdsc->vfs[vf_id].vf;
+
+- err = ctx->val.vbool ? pdsc_auxbus_dev_add(vf, pdsc) :
+- pdsc_auxbus_dev_del(vf, pdsc);
++ if (ctx->val.vbool)
++ err = pdsc_auxbus_dev_add(vf, pdsc, vt_entry->vif_id,
++ &pdsc->vfs[vf_id].padev);
++ else
++ pdsc_auxbus_dev_del(vf, pdsc, &pdsc->vfs[vf_id].padev);
+ }
+
+ return err;
+diff --git a/drivers/net/ethernet/amd/pds_core/main.c b/drivers/net/ethernet/amd/pds_core/main.c
+index 660268ff95623f..a3a68889137b63 100644
+--- a/drivers/net/ethernet/amd/pds_core/main.c
++++ b/drivers/net/ethernet/amd/pds_core/main.c
+@@ -190,7 +190,8 @@ static int pdsc_init_vf(struct pdsc *vf)
+ devl_unlock(dl);
+
+ pf->vfs[vf->vf_id].vf = vf;
+- err = pdsc_auxbus_dev_add(vf, pf);
++ err = pdsc_auxbus_dev_add(vf, pf, PDS_DEV_TYPE_VDPA,
++ &pf->vfs[vf->vf_id].padev);
+ if (err) {
+ devl_lock(dl);
+ devl_unregister(dl);
+@@ -417,7 +418,7 @@ static void pdsc_remove(struct pci_dev *pdev)
+
+ pf = pdsc_get_pf_struct(pdsc->pdev);
+ if (!IS_ERR(pf)) {
+- pdsc_auxbus_dev_del(pdsc, pf);
++ pdsc_auxbus_dev_del(pdsc, pf, &pf->vfs[pdsc->vf_id].padev);
+ pf->vfs[pdsc->vf_id].vf = NULL;
+ }
+ } else {
+@@ -482,7 +483,8 @@ static void pdsc_reset_prepare(struct pci_dev *pdev)
+
+ pf = pdsc_get_pf_struct(pdsc->pdev);
+ if (!IS_ERR(pf))
+- pdsc_auxbus_dev_del(pdsc, pf);
++ pdsc_auxbus_dev_del(pdsc, pf,
++ &pf->vfs[pdsc->vf_id].padev);
+ }
+
+ pdsc_unmap_bars(pdsc);
+@@ -527,7 +529,8 @@ static void pdsc_reset_done(struct pci_dev *pdev)
+
+ pf = pdsc_get_pf_struct(pdsc->pdev);
+ if (!IS_ERR(pf))
+- pdsc_auxbus_dev_add(pdsc, pf);
++ pdsc_auxbus_dev_add(pdsc, pf, PDS_DEV_TYPE_VDPA,
++ &pf->vfs[pdsc->vf_id].padev);
+ }
+ }
+
+diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-desc.c b/drivers/net/ethernet/amd/xgbe/xgbe-desc.c
+index 230726d7b74f63..d41b58fad37bbf 100644
+--- a/drivers/net/ethernet/amd/xgbe/xgbe-desc.c
++++ b/drivers/net/ethernet/amd/xgbe/xgbe-desc.c
+@@ -373,8 +373,13 @@ static int xgbe_map_rx_buffer(struct xgbe_prv_data *pdata,
+ }
+
+ /* Set up the header page info */
+- xgbe_set_buffer_data(&rdata->rx.hdr, &ring->rx_hdr_pa,
+- XGBE_SKB_ALLOC_SIZE);
++ if (pdata->netdev->features & NETIF_F_RXCSUM) {
++ xgbe_set_buffer_data(&rdata->rx.hdr, &ring->rx_hdr_pa,
++ XGBE_SKB_ALLOC_SIZE);
++ } else {
++ xgbe_set_buffer_data(&rdata->rx.hdr, &ring->rx_hdr_pa,
++ pdata->rx_buf_size);
++ }
+
+ /* Set up the buffer page info */
+ xgbe_set_buffer_data(&rdata->rx.buf, &ring->rx_buf_pa,
+diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-dev.c b/drivers/net/ethernet/amd/xgbe/xgbe-dev.c
+index f393228d41c7be..f1b0fb02b3cd14 100644
+--- a/drivers/net/ethernet/amd/xgbe/xgbe-dev.c
++++ b/drivers/net/ethernet/amd/xgbe/xgbe-dev.c
+@@ -320,6 +320,18 @@ static void xgbe_config_sph_mode(struct xgbe_prv_data *pdata)
+ XGMAC_IOWRITE_BITS(pdata, MAC_RCR, HDSMS, XGBE_SPH_HDSMS_SIZE);
+ }
+
++static void xgbe_disable_sph_mode(struct xgbe_prv_data *pdata)
++{
++ unsigned int i;
++
++ for (i = 0; i < pdata->channel_count; i++) {
++ if (!pdata->channel[i]->rx_ring)
++ break;
++
++ XGMAC_DMA_IOWRITE_BITS(pdata->channel[i], DMA_CH_CR, SPH, 0);
++ }
++}
++
+ static int xgbe_write_rss_reg(struct xgbe_prv_data *pdata, unsigned int type,
+ unsigned int index, unsigned int val)
+ {
+@@ -3545,8 +3557,12 @@ static int xgbe_init(struct xgbe_prv_data *pdata)
+ xgbe_config_tx_coalesce(pdata);
+ xgbe_config_rx_buffer_size(pdata);
+ xgbe_config_tso_mode(pdata);
+- xgbe_config_sph_mode(pdata);
+- xgbe_config_rss(pdata);
++
++ if (pdata->netdev->features & NETIF_F_RXCSUM) {
++ xgbe_config_sph_mode(pdata);
++ xgbe_config_rss(pdata);
++ }
++
+ desc_if->wrapper_tx_desc_init(pdata);
+ desc_if->wrapper_rx_desc_init(pdata);
+ xgbe_enable_dma_interrupts(pdata);
+@@ -3702,5 +3718,9 @@ void xgbe_init_function_ptrs_dev(struct xgbe_hw_if *hw_if)
+ hw_if->disable_vxlan = xgbe_disable_vxlan;
+ hw_if->set_vxlan_id = xgbe_set_vxlan_id;
+
++ /* For Split Header */
++ hw_if->enable_sph = xgbe_config_sph_mode;
++ hw_if->disable_sph = xgbe_disable_sph_mode;
++
+ DBGPR("<--xgbe_init_function_ptrs\n");
+ }
+diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-drv.c b/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
+index 5475867708f426..8bc49259d71af1 100644
+--- a/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
++++ b/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
+@@ -2257,10 +2257,17 @@ static int xgbe_set_features(struct net_device *netdev,
+ if (ret)
+ return ret;
+
+- if ((features & NETIF_F_RXCSUM) && !rxcsum)
++ if ((features & NETIF_F_RXCSUM) && !rxcsum) {
++ hw_if->enable_sph(pdata);
++ hw_if->enable_vxlan(pdata);
+ hw_if->enable_rx_csum(pdata);
+- else if (!(features & NETIF_F_RXCSUM) && rxcsum)
++ schedule_work(&pdata->restart_work);
++ } else if (!(features & NETIF_F_RXCSUM) && rxcsum) {
++ hw_if->disable_sph(pdata);
++ hw_if->disable_vxlan(pdata);
+ hw_if->disable_rx_csum(pdata);
++ schedule_work(&pdata->restart_work);
++ }
+
+ if ((features & NETIF_F_HW_VLAN_CTAG_RX) && !rxvlan)
+ hw_if->enable_rx_vlan_stripping(pdata);
+diff --git a/drivers/net/ethernet/amd/xgbe/xgbe.h b/drivers/net/ethernet/amd/xgbe/xgbe.h
+index d85386cac8d166..ed5d43c16d0e23 100644
+--- a/drivers/net/ethernet/amd/xgbe/xgbe.h
++++ b/drivers/net/ethernet/amd/xgbe/xgbe.h
+@@ -865,6 +865,10 @@ struct xgbe_hw_if {
+ void (*enable_vxlan)(struct xgbe_prv_data *);
+ void (*disable_vxlan)(struct xgbe_prv_data *);
+ void (*set_vxlan_id)(struct xgbe_prv_data *);
++
++ /* For Split Header */
++ void (*enable_sph)(struct xgbe_prv_data *pdata);
++ void (*disable_sph)(struct xgbe_prv_data *pdata);
+ };
+
+ /* This structure represents implementation specific routines for an
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index 1b39574e3fa22d..bd8b9cb05ae988 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -2011,6 +2011,7 @@ static struct sk_buff *bnxt_rx_vlan(struct sk_buff *skb, u8 cmp_type,
+ }
+ return skb;
+ vlan_err:
++ skb_mark_for_recycle(skb);
+ dev_kfree_skb(skb);
+ return NULL;
+ }
+@@ -3403,6 +3404,9 @@ static void bnxt_free_tx_skbs(struct bnxt *bp)
+ }
+ netdev_tx_reset_queue(netdev_get_tx_queue(bp->dev, i));
+ }
++
++ if (bp->ptp_cfg && !(bp->fw_cap & BNXT_FW_CAP_TX_TS_CMP))
++ bnxt_ptp_free_txts_skbs(bp->ptp_cfg);
+ }
+
+ static void bnxt_free_one_rx_ring(struct bnxt *bp, struct bnxt_rx_ring_info *rxr)
+@@ -11376,6 +11380,9 @@ static void bnxt_init_napi(struct bnxt *bp)
+ poll_fn = bnxt_poll_p5;
+ else if (BNXT_CHIP_TYPE_NITRO_A0(bp))
+ cp_nr_rings--;
++
++ set_bit(BNXT_STATE_NAPI_DISABLED, &bp->state);
++
+ for (i = 0; i < cp_nr_rings; i++) {
+ bnapi = bp->bnapi[i];
+ netif_napi_add_config(bp->dev, &bnapi->napi, poll_fn,
+@@ -12165,13 +12172,8 @@ static int bnxt_hwrm_if_change(struct bnxt *bp, bool up)
+ set_bit(BNXT_STATE_ABORT_ERR, &bp->state);
+ return rc;
+ }
++ /* IRQ will be initialized later in bnxt_request_irq() */
+ bnxt_clear_int_mode(bp);
+- rc = bnxt_init_int_mode(bp);
+- if (rc) {
+- clear_bit(BNXT_STATE_FW_RESET_DET, &bp->state);
+- netdev_err(bp->dev, "init int mode failed\n");
+- return rc;
+- }
+ }
+ rc = bnxt_cancel_reservations(bp, fw_reset);
+ }
+@@ -12570,8 +12572,6 @@ static int __bnxt_open_nic(struct bnxt *bp, bool irq_re_init, bool link_re_init)
+ /* VF-reps may need to be re-opened after the PF is re-opened */
+ if (BNXT_PF(bp))
+ bnxt_vf_reps_open(bp);
+- if (bp->ptp_cfg && !(bp->fw_cap & BNXT_FW_CAP_TX_TS_CMP))
+- WRITE_ONCE(bp->ptp_cfg->tx_avail, BNXT_MAX_TX_TS);
+ bnxt_ptp_init_rtc(bp, true);
+ bnxt_ptp_cfg_tstamp_filters(bp);
+ if (BNXT_SUPPORTS_MULTI_RSS_CTX(bp))
+@@ -15731,8 +15731,8 @@ static void bnxt_remove_one(struct pci_dev *pdev)
+
+ bnxt_rdma_aux_device_del(bp);
+
+- bnxt_ptp_clear(bp);
+ unregister_netdev(dev);
++ bnxt_ptp_clear(bp);
+
+ bnxt_rdma_aux_device_uninit(bp);
+
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_coredump.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_coredump.c
+index 7236d8e548ab5d..a73398c4a3e981 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_coredump.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_coredump.c
+@@ -110,20 +110,30 @@ static int bnxt_hwrm_dbg_dma_data(struct bnxt *bp, void *msg,
+ }
+ }
+
++ if (cmn_req->req_type ==
++ cpu_to_le16(HWRM_DBG_COREDUMP_RETRIEVE))
++ info->dest_buf_size += len;
++
+ if (info->dest_buf) {
+ if ((info->seg_start + off + len) <=
+ BNXT_COREDUMP_BUF_LEN(info->buf_len)) {
+- memcpy(info->dest_buf + off, dma_buf, len);
++ u16 copylen = min_t(u16, len,
++ info->dest_buf_size - off);
++
++ memcpy(info->dest_buf + off, dma_buf, copylen);
++ if (copylen < len)
++ break;
+ } else {
+ rc = -ENOBUFS;
++ if (cmn_req->req_type ==
++ cpu_to_le16(HWRM_DBG_COREDUMP_LIST)) {
++ kfree(info->dest_buf);
++ info->dest_buf = NULL;
++ }
+ break;
+ }
+ }
+
+- if (cmn_req->req_type ==
+- cpu_to_le16(HWRM_DBG_COREDUMP_RETRIEVE))
+- info->dest_buf_size += len;
+-
+ if (!(cmn_resp->flags & HWRM_DBG_CMN_FLAGS_MORE))
+ break;
+
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+index 9c582083951448..54208e0495983a 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+@@ -2062,6 +2062,17 @@ static int bnxt_get_regs_len(struct net_device *dev)
+ return reg_len;
+ }
+
++#define BNXT_PCIE_32B_ENTRY(start, end) \
++ { offsetof(struct pcie_ctx_hw_stats, start), \
++ offsetof(struct pcie_ctx_hw_stats, end) }
++
++static const struct {
++ u16 start;
++ u16 end;
++} bnxt_pcie_32b_entries[] = {
++ BNXT_PCIE_32B_ENTRY(pcie_ltssm_histogram[0], pcie_ltssm_histogram[3]),
++};
++
+ static void bnxt_get_regs(struct net_device *dev, struct ethtool_regs *regs,
+ void *_p)
+ {
+@@ -2094,12 +2105,27 @@ static void bnxt_get_regs(struct net_device *dev, struct ethtool_regs *regs,
+ req->pcie_stat_host_addr = cpu_to_le64(hw_pcie_stats_addr);
+ rc = hwrm_req_send(bp, req);
+ if (!rc) {
+- __le64 *src = (__le64 *)hw_pcie_stats;
+- u64 *dst = (u64 *)(_p + BNXT_PXP_REG_LEN);
+- int i;
+-
+- for (i = 0; i < sizeof(*hw_pcie_stats) / sizeof(__le64); i++)
+- dst[i] = le64_to_cpu(src[i]);
++ u8 *dst = (u8 *)(_p + BNXT_PXP_REG_LEN);
++ u8 *src = (u8 *)hw_pcie_stats;
++ int i, j;
++
++ for (i = 0, j = 0; i < sizeof(*hw_pcie_stats); ) {
++ if (i >= bnxt_pcie_32b_entries[j].start &&
++ i <= bnxt_pcie_32b_entries[j].end) {
++ u32 *dst32 = (u32 *)(dst + i);
++
++ *dst32 = le32_to_cpu(*(__le32 *)(src + i));
++ i += 4;
++ if (i > bnxt_pcie_32b_entries[j].end &&
++ j < ARRAY_SIZE(bnxt_pcie_32b_entries) - 1)
++ j++;
++ } else {
++ u64 *dst64 = (u64 *)(dst + i);
++
++ *dst64 = le64_to_cpu(*(__le64 *)(src + i));
++ i += 8;
++ }
++ }
+ }
+ hwrm_req_drop(bp, req);
+ }
+@@ -4922,6 +4948,7 @@ static void bnxt_self_test(struct net_device *dev, struct ethtool_test *etest,
+ if (!bp->num_tests || !BNXT_PF(bp))
+ return;
+
++ memset(buf, 0, sizeof(u64) * bp->num_tests);
+ if (etest->flags & ETH_TEST_FL_OFFLINE &&
+ bnxt_ulp_registered(bp->edev)) {
+ etest->flags |= ETH_TEST_FL_FAILED;
+@@ -4929,7 +4956,6 @@ static void bnxt_self_test(struct net_device *dev, struct ethtool_test *etest,
+ return;
+ }
+
+- memset(buf, 0, sizeof(u64) * bp->num_tests);
+ if (!netif_running(dev)) {
+ etest->flags |= ETH_TEST_FL_FAILED;
+ return;
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c
+index 2d4e19b96ee744..0669d43472f51b 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c
+@@ -794,6 +794,27 @@ static long bnxt_ptp_ts_aux_work(struct ptp_clock_info *ptp_info)
+ return HZ;
+ }
+
++void bnxt_ptp_free_txts_skbs(struct bnxt_ptp_cfg *ptp)
++{
++ struct bnxt_ptp_tx_req *txts_req;
++ u16 cons = ptp->txts_cons;
++
++ /* make sure ptp aux worker finished with
++ * possible BNXT_STATE_OPEN set
++ */
++ ptp_cancel_worker_sync(ptp->ptp_clock);
++
++ ptp->tx_avail = BNXT_MAX_TX_TS;
++ while (cons != ptp->txts_prod) {
++ txts_req = &ptp->txts_req[cons];
++ if (!IS_ERR_OR_NULL(txts_req->tx_skb))
++ dev_kfree_skb_any(txts_req->tx_skb);
++ cons = NEXT_TXTS(cons);
++ }
++ ptp->txts_cons = cons;
++ ptp_schedule_worker(ptp->ptp_clock, 0);
++}
++
+ int bnxt_ptp_get_txts_prod(struct bnxt_ptp_cfg *ptp, u16 *prod)
+ {
+ spin_lock_bh(&ptp->ptp_tx_lock);
+@@ -1105,7 +1126,6 @@ int bnxt_ptp_init(struct bnxt *bp)
+ void bnxt_ptp_clear(struct bnxt *bp)
+ {
+ struct bnxt_ptp_cfg *ptp = bp->ptp_cfg;
+- int i;
+
+ if (!ptp)
+ return;
+@@ -1117,12 +1137,5 @@ void bnxt_ptp_clear(struct bnxt *bp)
+ kfree(ptp->ptp_info.pin_config);
+ ptp->ptp_info.pin_config = NULL;
+
+- for (i = 0; i < BNXT_MAX_TX_TS; i++) {
+- if (ptp->txts_req[i].tx_skb) {
+- dev_kfree_skb_any(ptp->txts_req[i].tx_skb);
+- ptp->txts_req[i].tx_skb = NULL;
+- }
+- }
+-
+ bnxt_unmap_ptp_regs(bp);
+ }
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.h b/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.h
+index a95f05e9c579b7..0481161d26ef5d 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.h
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.h
+@@ -162,6 +162,7 @@ int bnxt_ptp_cfg_tstamp_filters(struct bnxt *bp);
+ void bnxt_ptp_reapply_pps(struct bnxt *bp);
+ int bnxt_hwtstamp_set(struct net_device *dev, struct ifreq *ifr);
+ int bnxt_hwtstamp_get(struct net_device *dev, struct ifreq *ifr);
++void bnxt_ptp_free_txts_skbs(struct bnxt_ptp_cfg *ptp);
+ int bnxt_ptp_get_txts_prod(struct bnxt_ptp_cfg *ptp, u16 *prod);
+ void bnxt_get_tx_ts_p5(struct bnxt *bp, struct sk_buff *skb, u16 prod);
+ int bnxt_get_rx_ts_p5(struct bnxt *bp, u64 *ts, u32 pkt_ts);
+diff --git a/drivers/net/ethernet/dlink/dl2k.c b/drivers/net/ethernet/dlink/dl2k.c
+index d0ea9260787061..6bf8a7aeef9081 100644
+--- a/drivers/net/ethernet/dlink/dl2k.c
++++ b/drivers/net/ethernet/dlink/dl2k.c
+@@ -352,7 +352,7 @@ parse_eeprom (struct net_device *dev)
+ eth_hw_addr_set(dev, psrom->mac_addr);
+
+ if (np->chip_id == CHIP_IP1000A) {
+- np->led_mode = psrom->led_mode;
++ np->led_mode = le16_to_cpu(psrom->led_mode);
+ return 0;
+ }
+
+diff --git a/drivers/net/ethernet/dlink/dl2k.h b/drivers/net/ethernet/dlink/dl2k.h
+index 195dc6cfd8955c..0e33e2eaae9606 100644
+--- a/drivers/net/ethernet/dlink/dl2k.h
++++ b/drivers/net/ethernet/dlink/dl2k.h
+@@ -335,7 +335,7 @@ typedef struct t_SROM {
+ u16 sub_system_id; /* 0x06 */
+ u16 pci_base_1; /* 0x08 (IP1000A only) */
+ u16 pci_base_2; /* 0x0a (IP1000A only) */
+- u16 led_mode; /* 0x0c (IP1000A only) */
++ __le16 led_mode; /* 0x0c (IP1000A only) */
+ u16 reserved1[9]; /* 0x0e-0x1f */
+ u8 mac_addr[6]; /* 0x20-0x25 */
+ u8 reserved2[10]; /* 0x26-0x2f */
+diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
+index f7c4ce8e9a2655..c5d5fa8d7dfddc 100644
+--- a/drivers/net/ethernet/freescale/fec_main.c
++++ b/drivers/net/ethernet/freescale/fec_main.c
+@@ -714,7 +714,12 @@ static int fec_enet_txq_submit_skb(struct fec_enet_priv_tx_q *txq,
+ txq->bd.cur = bdp;
+
+ /* Trigger transmission start */
+- writel(0, txq->bd.reg_desc_active);
++ if (!(fep->quirks & FEC_QUIRK_ERR007885) ||
++ !readl(txq->bd.reg_desc_active) ||
++ !readl(txq->bd.reg_desc_active) ||
++ !readl(txq->bd.reg_desc_active) ||
++ !readl(txq->bd.reg_desc_active))
++ writel(0, txq->bd.reg_desc_active);
+
+ return 0;
+ }
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c b/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c
+index 9bbece25552b17..3d70c97a0bedf6 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c
+@@ -60,7 +60,7 @@ static struct hns3_dbg_cmd_info hns3_dbg_cmd[] = {
+ .name = "tm_qset",
+ .cmd = HNAE3_DBG_CMD_TM_QSET,
+ .dentry = HNS3_DBG_DENTRY_TM,
+- .buf_len = HNS3_DBG_READ_LEN,
++ .buf_len = HNS3_DBG_READ_LEN_1MB,
+ .init = hns3_dbg_common_file_init,
+ },
+ {
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+index 9ff797fb36c456..b03b8758c7774e 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+@@ -473,20 +473,14 @@ static void hns3_mask_vector_irq(struct hns3_enet_tqp_vector *tqp_vector,
+ writel(mask_en, tqp_vector->mask_addr);
+ }
+
+-static void hns3_vector_enable(struct hns3_enet_tqp_vector *tqp_vector)
++static void hns3_irq_enable(struct hns3_enet_tqp_vector *tqp_vector)
+ {
+ napi_enable(&tqp_vector->napi);
+ enable_irq(tqp_vector->vector_irq);
+-
+- /* enable vector */
+- hns3_mask_vector_irq(tqp_vector, 1);
+ }
+
+-static void hns3_vector_disable(struct hns3_enet_tqp_vector *tqp_vector)
++static void hns3_irq_disable(struct hns3_enet_tqp_vector *tqp_vector)
+ {
+- /* disable vector */
+- hns3_mask_vector_irq(tqp_vector, 0);
+-
+ disable_irq(tqp_vector->vector_irq);
+ napi_disable(&tqp_vector->napi);
+ cancel_work_sync(&tqp_vector->rx_group.dim.work);
+@@ -707,11 +701,42 @@ static int hns3_set_rx_cpu_rmap(struct net_device *netdev)
+ return 0;
+ }
+
++static void hns3_enable_irqs_and_tqps(struct net_device *netdev)
++{
++ struct hns3_nic_priv *priv = netdev_priv(netdev);
++ struct hnae3_handle *h = priv->ae_handle;
++ u16 i;
++
++ for (i = 0; i < priv->vector_num; i++)
++ hns3_irq_enable(&priv->tqp_vector[i]);
++
++ for (i = 0; i < priv->vector_num; i++)
++ hns3_mask_vector_irq(&priv->tqp_vector[i], 1);
++
++ for (i = 0; i < h->kinfo.num_tqps; i++)
++ hns3_tqp_enable(h->kinfo.tqp[i]);
++}
++
++static void hns3_disable_irqs_and_tqps(struct net_device *netdev)
++{
++ struct hns3_nic_priv *priv = netdev_priv(netdev);
++ struct hnae3_handle *h = priv->ae_handle;
++ u16 i;
++
++ for (i = 0; i < h->kinfo.num_tqps; i++)
++ hns3_tqp_disable(h->kinfo.tqp[i]);
++
++ for (i = 0; i < priv->vector_num; i++)
++ hns3_mask_vector_irq(&priv->tqp_vector[i], 0);
++
++ for (i = 0; i < priv->vector_num; i++)
++ hns3_irq_disable(&priv->tqp_vector[i]);
++}
++
+ static int hns3_nic_net_up(struct net_device *netdev)
+ {
+ struct hns3_nic_priv *priv = netdev_priv(netdev);
+ struct hnae3_handle *h = priv->ae_handle;
+- int i, j;
+ int ret;
+
+ ret = hns3_nic_reset_all_ring(h);
+@@ -720,23 +745,13 @@ static int hns3_nic_net_up(struct net_device *netdev)
+
+ clear_bit(HNS3_NIC_STATE_DOWN, &priv->state);
+
+- /* enable the vectors */
+- for (i = 0; i < priv->vector_num; i++)
+- hns3_vector_enable(&priv->tqp_vector[i]);
+-
+- /* enable rcb */
+- for (j = 0; j < h->kinfo.num_tqps; j++)
+- hns3_tqp_enable(h->kinfo.tqp[j]);
++ hns3_enable_irqs_and_tqps(netdev);
+
+ /* start the ae_dev */
+ ret = h->ae_algo->ops->start ? h->ae_algo->ops->start(h) : 0;
+ if (ret) {
+ set_bit(HNS3_NIC_STATE_DOWN, &priv->state);
+- while (j--)
+- hns3_tqp_disable(h->kinfo.tqp[j]);
+-
+- for (j = i - 1; j >= 0; j--)
+- hns3_vector_disable(&priv->tqp_vector[j]);
++ hns3_disable_irqs_and_tqps(netdev);
+ }
+
+ return ret;
+@@ -823,17 +838,9 @@ static void hns3_reset_tx_queue(struct hnae3_handle *h)
+ static void hns3_nic_net_down(struct net_device *netdev)
+ {
+ struct hns3_nic_priv *priv = netdev_priv(netdev);
+- struct hnae3_handle *h = hns3_get_handle(netdev);
+ const struct hnae3_ae_ops *ops;
+- int i;
+
+- /* disable vectors */
+- for (i = 0; i < priv->vector_num; i++)
+- hns3_vector_disable(&priv->tqp_vector[i]);
+-
+- /* disable rcb */
+- for (i = 0; i < h->kinfo.num_tqps; i++)
+- hns3_tqp_disable(h->kinfo.tqp[i]);
++ hns3_disable_irqs_and_tqps(netdev);
+
+ /* stop ae_dev */
+ ops = priv->ae_handle->ae_algo->ops;
+@@ -5864,8 +5871,6 @@ int hns3_set_channels(struct net_device *netdev,
+ void hns3_external_lb_prepare(struct net_device *ndev, bool if_running)
+ {
+ struct hns3_nic_priv *priv = netdev_priv(ndev);
+- struct hnae3_handle *h = priv->ae_handle;
+- int i;
+
+ if (!if_running)
+ return;
+@@ -5876,11 +5881,7 @@ void hns3_external_lb_prepare(struct net_device *ndev, bool if_running)
+ netif_carrier_off(ndev);
+ netif_tx_disable(ndev);
+
+- for (i = 0; i < priv->vector_num; i++)
+- hns3_vector_disable(&priv->tqp_vector[i]);
+-
+- for (i = 0; i < h->kinfo.num_tqps; i++)
+- hns3_tqp_disable(h->kinfo.tqp[i]);
++ hns3_disable_irqs_and_tqps(ndev);
+
+ /* delay ring buffer clearing to hns3_reset_notify_uninit_enet
+ * during reset process, because driver may not be able
+@@ -5896,7 +5897,6 @@ void hns3_external_lb_restore(struct net_device *ndev, bool if_running)
+ {
+ struct hns3_nic_priv *priv = netdev_priv(ndev);
+ struct hnae3_handle *h = priv->ae_handle;
+- int i;
+
+ if (!if_running)
+ return;
+@@ -5912,11 +5912,7 @@ void hns3_external_lb_restore(struct net_device *ndev, bool if_running)
+
+ clear_bit(HNS3_NIC_STATE_DOWN, &priv->state);
+
+- for (i = 0; i < priv->vector_num; i++)
+- hns3_vector_enable(&priv->tqp_vector[i]);
+-
+- for (i = 0; i < h->kinfo.num_tqps; i++)
+- hns3_tqp_enable(h->kinfo.tqp[i]);
++ hns3_enable_irqs_and_tqps(ndev);
+
+ netif_tx_wake_all_queues(ndev);
+
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_ptp.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_ptp.c
+index 181af419b878d5..0ffda5146bae58 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_ptp.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_ptp.c
+@@ -439,6 +439,13 @@ static int hclge_ptp_create_clock(struct hclge_dev *hdev)
+ ptp->info.settime64 = hclge_ptp_settime;
+
+ ptp->info.n_alarm = 0;
++
++ spin_lock_init(&ptp->lock);
++ ptp->io_base = hdev->hw.hw.io_base + HCLGE_PTP_REG_OFFSET;
++ ptp->ts_cfg.rx_filter = HWTSTAMP_FILTER_NONE;
++ ptp->ts_cfg.tx_type = HWTSTAMP_TX_OFF;
++ hdev->ptp = ptp;
++
+ ptp->clock = ptp_clock_register(&ptp->info, &hdev->pdev->dev);
+ if (IS_ERR(ptp->clock)) {
+ dev_err(&hdev->pdev->dev,
+@@ -450,12 +457,6 @@ static int hclge_ptp_create_clock(struct hclge_dev *hdev)
+ return -ENODEV;
+ }
+
+- spin_lock_init(&ptp->lock);
+- ptp->io_base = hdev->hw.hw.io_base + HCLGE_PTP_REG_OFFSET;
+- ptp->ts_cfg.rx_filter = HWTSTAMP_FILTER_NONE;
+- ptp->ts_cfg.tx_type = HWTSTAMP_TX_OFF;
+- hdev->ptp = ptp;
+-
+ return 0;
+ }
+
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+index 9ba767740a043f..dada42e7e0ec96 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+@@ -1292,9 +1292,8 @@ static void hclgevf_sync_vlan_filter(struct hclgevf_dev *hdev)
+ rtnl_unlock();
+ }
+
+-static int hclgevf_en_hw_strip_rxvtag(struct hnae3_handle *handle, bool enable)
++static int hclgevf_en_hw_strip_rxvtag_cmd(struct hclgevf_dev *hdev, bool enable)
+ {
+- struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+ struct hclge_vf_to_pf_msg send_msg;
+
+ hclgevf_build_send_msg(&send_msg, HCLGE_MBX_SET_VLAN,
+@@ -1303,6 +1302,19 @@ static int hclgevf_en_hw_strip_rxvtag(struct hnae3_handle *handle, bool enable)
+ return hclgevf_send_mbx_msg(hdev, &send_msg, false, NULL, 0);
+ }
+
++static int hclgevf_en_hw_strip_rxvtag(struct hnae3_handle *handle, bool enable)
++{
++ struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
++ int ret;
++
++ ret = hclgevf_en_hw_strip_rxvtag_cmd(hdev, enable);
++ if (ret)
++ return ret;
++
++ hdev->rxvtag_strip_en = enable;
++ return 0;
++}
++
+ static int hclgevf_reset_tqp(struct hnae3_handle *handle)
+ {
+ #define HCLGEVF_RESET_ALL_QUEUE_DONE 1U
+@@ -2204,12 +2216,13 @@ static int hclgevf_rss_init_hw(struct hclgevf_dev *hdev)
+ tc_valid, tc_size);
+ }
+
+-static int hclgevf_init_vlan_config(struct hclgevf_dev *hdev)
++static int hclgevf_init_vlan_config(struct hclgevf_dev *hdev,
++ bool rxvtag_strip_en)
+ {
+ struct hnae3_handle *nic = &hdev->nic;
+ int ret;
+
+- ret = hclgevf_en_hw_strip_rxvtag(nic, true);
++ ret = hclgevf_en_hw_strip_rxvtag(nic, rxvtag_strip_en);
+ if (ret) {
+ dev_err(&hdev->pdev->dev,
+ "failed to enable rx vlan offload, ret = %d\n", ret);
+@@ -2879,7 +2892,7 @@ static int hclgevf_reset_hdev(struct hclgevf_dev *hdev)
+ if (ret)
+ return ret;
+
+- ret = hclgevf_init_vlan_config(hdev);
++ ret = hclgevf_init_vlan_config(hdev, hdev->rxvtag_strip_en);
+ if (ret) {
+ dev_err(&hdev->pdev->dev,
+ "failed(%d) to initialize VLAN config\n", ret);
+@@ -2994,7 +3007,7 @@ static int hclgevf_init_hdev(struct hclgevf_dev *hdev)
+ goto err_config;
+ }
+
+- ret = hclgevf_init_vlan_config(hdev);
++ ret = hclgevf_init_vlan_config(hdev, true);
+ if (ret) {
+ dev_err(&hdev->pdev->dev,
+ "failed(%d) to initialize VLAN config\n", ret);
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h
+index cccef32284616b..0208425ab594f5 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h
+@@ -253,6 +253,7 @@ struct hclgevf_dev {
+ int *vector_irq;
+
+ bool gro_en;
++ bool rxvtag_strip_en;
+
+ unsigned long vlan_del_fail_bmap[BITS_TO_LONGS(VLAN_N_VID)];
+
+diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
+index 71e05d30f0fdac..d7b90a77bb49af 100644
+--- a/drivers/net/ethernet/intel/ice/ice.h
++++ b/drivers/net/ethernet/intel/ice/ice.h
+@@ -1047,10 +1047,5 @@ static inline void ice_clear_rdma_cap(struct ice_pf *pf)
+ clear_bit(ICE_FLAG_RDMA_ENA, pf->flags);
+ }
+
+-static inline enum ice_phy_model ice_get_phy_model(const struct ice_hw *hw)
+-{
+- return hw->ptp.phy_model;
+-}
+-
+ extern const struct xdp_metadata_ops ice_xdp_md_ops;
+ #endif /* _ICE_H_ */
+diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c
+index 1e801300310e9f..59df31c2c83f7e 100644
+--- a/drivers/net/ethernet/intel/ice/ice_common.c
++++ b/drivers/net/ethernet/intel/ice/ice_common.c
+@@ -186,7 +186,7 @@ static int ice_set_mac_type(struct ice_hw *hw)
+ * ice_is_generic_mac - check if device's mac_type is generic
+ * @hw: pointer to the hardware structure
+ *
+- * Return: true if mac_type is generic (with SBQ support), false if not
++ * Return: true if mac_type is ICE_MAC_GENERIC*, false otherwise.
+ */
+ bool ice_is_generic_mac(struct ice_hw *hw)
+ {
+@@ -194,120 +194,6 @@ bool ice_is_generic_mac(struct ice_hw *hw)
+ hw->mac_type == ICE_MAC_GENERIC_3K_E825);
+ }
+
+-/**
+- * ice_is_e810
+- * @hw: pointer to the hardware structure
+- *
+- * returns true if the device is E810 based, false if not.
+- */
+-bool ice_is_e810(struct ice_hw *hw)
+-{
+- return hw->mac_type == ICE_MAC_E810;
+-}
+-
+-/**
+- * ice_is_e810t
+- * @hw: pointer to the hardware structure
+- *
+- * returns true if the device is E810T based, false if not.
+- */
+-bool ice_is_e810t(struct ice_hw *hw)
+-{
+- switch (hw->device_id) {
+- case ICE_DEV_ID_E810C_SFP:
+- switch (hw->subsystem_device_id) {
+- case ICE_SUBDEV_ID_E810T:
+- case ICE_SUBDEV_ID_E810T2:
+- case ICE_SUBDEV_ID_E810T3:
+- case ICE_SUBDEV_ID_E810T4:
+- case ICE_SUBDEV_ID_E810T6:
+- case ICE_SUBDEV_ID_E810T7:
+- return true;
+- }
+- break;
+- case ICE_DEV_ID_E810C_QSFP:
+- switch (hw->subsystem_device_id) {
+- case ICE_SUBDEV_ID_E810T2:
+- case ICE_SUBDEV_ID_E810T3:
+- case ICE_SUBDEV_ID_E810T5:
+- return true;
+- }
+- break;
+- default:
+- break;
+- }
+-
+- return false;
+-}
+-
+-/**
+- * ice_is_e822 - Check if a device is E822 family device
+- * @hw: pointer to the hardware structure
+- *
+- * Return: true if the device is E822 based, false if not.
+- */
+-bool ice_is_e822(struct ice_hw *hw)
+-{
+- switch (hw->device_id) {
+- case ICE_DEV_ID_E822C_BACKPLANE:
+- case ICE_DEV_ID_E822C_QSFP:
+- case ICE_DEV_ID_E822C_SFP:
+- case ICE_DEV_ID_E822C_10G_BASE_T:
+- case ICE_DEV_ID_E822C_SGMII:
+- case ICE_DEV_ID_E822L_BACKPLANE:
+- case ICE_DEV_ID_E822L_SFP:
+- case ICE_DEV_ID_E822L_10G_BASE_T:
+- case ICE_DEV_ID_E822L_SGMII:
+- return true;
+- default:
+- return false;
+- }
+-}
+-
+-/**
+- * ice_is_e823
+- * @hw: pointer to the hardware structure
+- *
+- * returns true if the device is E823-L or E823-C based, false if not.
+- */
+-bool ice_is_e823(struct ice_hw *hw)
+-{
+- switch (hw->device_id) {
+- case ICE_DEV_ID_E823L_BACKPLANE:
+- case ICE_DEV_ID_E823L_SFP:
+- case ICE_DEV_ID_E823L_10G_BASE_T:
+- case ICE_DEV_ID_E823L_1GBE:
+- case ICE_DEV_ID_E823L_QSFP:
+- case ICE_DEV_ID_E823C_BACKPLANE:
+- case ICE_DEV_ID_E823C_QSFP:
+- case ICE_DEV_ID_E823C_SFP:
+- case ICE_DEV_ID_E823C_10G_BASE_T:
+- case ICE_DEV_ID_E823C_SGMII:
+- return true;
+- default:
+- return false;
+- }
+-}
+-
+-/**
+- * ice_is_e825c - Check if a device is E825C family device
+- * @hw: pointer to the hardware structure
+- *
+- * Return: true if the device is E825-C based, false if not.
+- */
+-bool ice_is_e825c(struct ice_hw *hw)
+-{
+- switch (hw->device_id) {
+- case ICE_DEV_ID_E825C_BACKPLANE:
+- case ICE_DEV_ID_E825C_QSFP:
+- case ICE_DEV_ID_E825C_SFP:
+- case ICE_DEV_ID_E825C_SGMII:
+- return true;
+- default:
+- return false;
+- }
+-}
+-
+ /**
+ * ice_is_pf_c827 - check if pf contains c827 phy
+ * @hw: pointer to the hw struct
+@@ -2409,7 +2295,7 @@ ice_parse_1588_func_caps(struct ice_hw *hw, struct ice_hw_func_caps *func_p,
+ info->tmr_index_owned = ((number & ICE_TS_TMR_IDX_OWND_M) != 0);
+ info->tmr_index_assoc = ((number & ICE_TS_TMR_IDX_ASSOC_M) != 0);
+
+- if (!ice_is_e825c(hw)) {
++ if (hw->mac_type != ICE_MAC_GENERIC_3K_E825) {
+ info->clk_freq = FIELD_GET(ICE_TS_CLK_FREQ_M, number);
+ info->clk_src = ((number & ICE_TS_CLK_SRC_M) != 0);
+ } else {
+@@ -5765,6 +5651,96 @@ ice_aq_write_i2c(struct ice_hw *hw, struct ice_aqc_link_topo_addr topo_addr,
+ return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+ }
+
++/**
++ * ice_get_pca9575_handle - find and return the PCA9575 controller
++ * @hw: pointer to the hw struct
++ * @pca9575_handle: GPIO controller's handle
++ *
++ * Find and return the GPIO controller's handle in the netlist.
++ * When found - the value will be cached in the hw structure and following calls
++ * will return cached value.
++ *
++ * Return: 0 on success, -ENXIO when there's no PCA9575 present.
++ */
++int ice_get_pca9575_handle(struct ice_hw *hw, u16 *pca9575_handle)
++{
++ struct ice_aqc_get_link_topo *cmd;
++ struct ice_aq_desc desc;
++ int err;
++ u8 idx;
++
++ /* If handle was read previously return cached value */
++ if (hw->io_expander_handle) {
++ *pca9575_handle = hw->io_expander_handle;
++ return 0;
++ }
++
++#define SW_PCA9575_SFP_TOPO_IDX 2
++#define SW_PCA9575_QSFP_TOPO_IDX 1
++
++ /* Check if the SW IO expander controlling SMA exists in the netlist. */
++ if (hw->device_id == ICE_DEV_ID_E810C_SFP)
++ idx = SW_PCA9575_SFP_TOPO_IDX;
++ else if (hw->device_id == ICE_DEV_ID_E810C_QSFP)
++ idx = SW_PCA9575_QSFP_TOPO_IDX;
++ else
++ return -ENXIO;
++
++ /* If handle was not detected read it from the netlist */
++ ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_link_topo);
++ cmd = &desc.params.get_link_topo;
++ cmd->addr.topo_params.node_type_ctx =
++ ICE_AQC_LINK_TOPO_NODE_TYPE_GPIO_CTRL;
++ cmd->addr.topo_params.index = idx;
++
++ err = ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
++ if (err)
++ return -ENXIO;
++
++ /* Verify if we found the right IO expander type */
++ if (desc.params.get_link_topo.node_part_num !=
++ ICE_AQC_GET_LINK_TOPO_NODE_NR_PCA9575)
++ return -ENXIO;
++
++ /* If present save the handle and return it */
++ hw->io_expander_handle =
++ le16_to_cpu(desc.params.get_link_topo.addr.handle);
++ *pca9575_handle = hw->io_expander_handle;
++
++ return 0;
++}
++
++/**
++ * ice_read_pca9575_reg - read the register from the PCA9575 controller
++ * @hw: pointer to the hw struct
++ * @offset: GPIO controller register offset
++ * @data: pointer to data to be read from the GPIO controller
++ *
++ * Return: 0 on success, negative error code otherwise.
++ */
++int ice_read_pca9575_reg(struct ice_hw *hw, u8 offset, u8 *data)
++{
++ struct ice_aqc_link_topo_addr link_topo;
++ __le16 addr;
++ u16 handle;
++ int err;
++
++ memset(&link_topo, 0, sizeof(link_topo));
++
++ err = ice_get_pca9575_handle(hw, &handle);
++ if (err)
++ return err;
++
++ link_topo.handle = cpu_to_le16(handle);
++ link_topo.topo_params.node_type_ctx =
++ FIELD_PREP(ICE_AQC_LINK_TOPO_NODE_CTX_M,
++ ICE_AQC_LINK_TOPO_NODE_CTX_PROVIDED);
++
++ addr = cpu_to_le16((u16)offset);
++
++ return ice_aq_read_i2c(hw, link_topo, 0, addr, 1, data, NULL);
++}
++
+ /**
+ * ice_aq_set_gpio
+ * @hw: pointer to the hw struct
+diff --git a/drivers/net/ethernet/intel/ice/ice_common.h b/drivers/net/ethernet/intel/ice/ice_common.h
+index 15ba385437389d..9b00aa0ddf10e6 100644
+--- a/drivers/net/ethernet/intel/ice/ice_common.h
++++ b/drivers/net/ethernet/intel/ice/ice_common.h
+@@ -131,7 +131,6 @@ int
+ ice_aq_manage_mac_write(struct ice_hw *hw, const u8 *mac_addr, u8 flags,
+ struct ice_sq_cd *cd);
+ bool ice_is_generic_mac(struct ice_hw *hw);
+-bool ice_is_e810(struct ice_hw *hw);
+ int ice_clear_pf_cfg(struct ice_hw *hw);
+ int
+ ice_aq_set_phy_cfg(struct ice_hw *hw, struct ice_port_info *pi,
+@@ -276,10 +275,6 @@ ice_stat_update40(struct ice_hw *hw, u32 reg, bool prev_stat_loaded,
+ void
+ ice_stat_update32(struct ice_hw *hw, u32 reg, bool prev_stat_loaded,
+ u64 *prev_stat, u64 *cur_stat);
+-bool ice_is_e810t(struct ice_hw *hw);
+-bool ice_is_e822(struct ice_hw *hw);
+-bool ice_is_e823(struct ice_hw *hw);
+-bool ice_is_e825c(struct ice_hw *hw);
+ int
+ ice_sched_query_elem(struct ice_hw *hw, u32 node_teid,
+ struct ice_aqc_txsched_elem_data *buf);
+@@ -306,5 +301,7 @@ int
+ ice_aq_write_i2c(struct ice_hw *hw, struct ice_aqc_link_topo_addr topo_addr,
+ u16 bus_addr, __le16 addr, u8 params, const u8 *data,
+ struct ice_sq_cd *cd);
++int ice_get_pca9575_handle(struct ice_hw *hw, u16 *pca9575_handle);
++int ice_read_pca9575_reg(struct ice_hw *hw, u8 offset, u8 *data);
+ bool ice_fw_supports_report_dflt_cfg(struct ice_hw *hw);
+ #endif /* _ICE_COMMON_H_ */
+diff --git a/drivers/net/ethernet/intel/ice/ice_ddp.c b/drivers/net/ethernet/intel/ice/ice_ddp.c
+index 03988be03729b7..59323c019544fc 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ddp.c
++++ b/drivers/net/ethernet/intel/ice/ice_ddp.c
+@@ -2345,15 +2345,15 @@ ice_get_set_tx_topo(struct ice_hw *hw, u8 *buf, u16 buf_size,
+ cmd->set_flags |= ICE_AQC_TX_TOPO_FLAGS_SRC_RAM |
+ ICE_AQC_TX_TOPO_FLAGS_LOAD_NEW;
+
+- if (ice_is_e825c(hw))
+- desc.flags |= cpu_to_le16(ICE_AQ_FLAG_RD);
++ desc.flags |= cpu_to_le16(ICE_AQ_FLAG_RD);
+ } else {
+ ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_tx_topo);
+ cmd->get_flags = ICE_AQC_TX_TOPO_GET_RAM;
+- }
+
+- if (!ice_is_e825c(hw))
+- desc.flags |= cpu_to_le16(ICE_AQ_FLAG_RD);
++ if (hw->mac_type == ICE_MAC_E810 ||
++ hw->mac_type == ICE_MAC_GENERIC)
++ desc.flags |= cpu_to_le16(ICE_AQ_FLAG_RD);
++ }
+
+ status = ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+ if (status)
+diff --git a/drivers/net/ethernet/intel/ice/ice_gnss.c b/drivers/net/ethernet/intel/ice/ice_gnss.c
+index b2148dbe49b284..6b26290452d48f 100644
+--- a/drivers/net/ethernet/intel/ice/ice_gnss.c
++++ b/drivers/net/ethernet/intel/ice/ice_gnss.c
+@@ -381,32 +381,23 @@ void ice_gnss_exit(struct ice_pf *pf)
+ }
+
+ /**
+- * ice_gnss_is_gps_present - Check if GPS HW is present
++ * ice_gnss_is_module_present - Check if GNSS HW is present
+ * @hw: pointer to HW struct
++ *
++ * Return: true when GNSS is present, false otherwise.
+ */
+-bool ice_gnss_is_gps_present(struct ice_hw *hw)
++bool ice_gnss_is_module_present(struct ice_hw *hw)
+ {
+- if (!hw->func_caps.ts_func_info.src_tmr_owned)
+- return false;
++ int err;
++ u8 data;
+
+- if (!ice_is_gps_in_netlist(hw))
++ if (!hw->func_caps.ts_func_info.src_tmr_owned ||
++ !ice_is_gps_in_netlist(hw))
+ return false;
+
+-#if IS_ENABLED(CONFIG_PTP_1588_CLOCK)
+- if (ice_is_e810t(hw)) {
+- int err;
+- u8 data;
+-
+- err = ice_read_pca9575_reg(hw, ICE_PCA9575_P0_IN, &data);
+- if (err || !!(data & ICE_P0_GNSS_PRSNT_N))
+- return false;
+- } else {
+- return false;
+- }
+-#else
+- if (!ice_is_e810t(hw))
++ err = ice_read_pca9575_reg(hw, ICE_PCA9575_P0_IN, &data);
++ if (err || !!(data & ICE_P0_GNSS_PRSNT_N))
+ return false;
+-#endif /* IS_ENABLED(CONFIG_PTP_1588_CLOCK) */
+
+ return true;
+ }
+diff --git a/drivers/net/ethernet/intel/ice/ice_gnss.h b/drivers/net/ethernet/intel/ice/ice_gnss.h
+index 75e567ad705945..15daf603ed7bf2 100644
+--- a/drivers/net/ethernet/intel/ice/ice_gnss.h
++++ b/drivers/net/ethernet/intel/ice/ice_gnss.h
+@@ -37,11 +37,11 @@ struct gnss_serial {
+ #if IS_ENABLED(CONFIG_GNSS)
+ void ice_gnss_init(struct ice_pf *pf);
+ void ice_gnss_exit(struct ice_pf *pf);
+-bool ice_gnss_is_gps_present(struct ice_hw *hw);
++bool ice_gnss_is_module_present(struct ice_hw *hw);
+ #else
+ static inline void ice_gnss_init(struct ice_pf *pf) { }
+ static inline void ice_gnss_exit(struct ice_pf *pf) { }
+-static inline bool ice_gnss_is_gps_present(struct ice_hw *hw)
++static inline bool ice_gnss_is_module_present(struct ice_hw *hw)
+ {
+ return false;
+ }
+diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
+index d0faa087793da7..e0785e820d6014 100644
+--- a/drivers/net/ethernet/intel/ice/ice_lib.c
++++ b/drivers/net/ethernet/intel/ice/ice_lib.c
+@@ -3882,7 +3882,7 @@ void ice_init_feature_support(struct ice_pf *pf)
+ ice_set_feature_support(pf, ICE_F_CGU);
+ if (ice_is_clock_mux_in_netlist(&pf->hw))
+ ice_set_feature_support(pf, ICE_F_SMA_CTRL);
+- if (ice_gnss_is_gps_present(&pf->hw))
++ if (ice_gnss_is_module_present(&pf->hw))
+ ice_set_feature_support(pf, ICE_F_GNSS);
+ break;
+ default:
+diff --git a/drivers/net/ethernet/intel/ice/ice_ptp.c b/drivers/net/ethernet/intel/ice/ice_ptp.c
+index a99e0fbd0b8b55..92ce419ff0bcb1 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ptp.c
++++ b/drivers/net/ethernet/intel/ice/ice_ptp.c
+@@ -1318,20 +1318,20 @@ ice_ptp_port_phy_stop(struct ice_ptp_port *ptp_port)
+ struct ice_hw *hw = &pf->hw;
+ int err;
+
+- if (ice_is_e810(hw))
+- return 0;
+-
+ mutex_lock(&ptp_port->ps_lock);
+
+- switch (ice_get_phy_model(hw)) {
+- case ICE_PHY_ETH56G:
+- err = ice_stop_phy_timer_eth56g(hw, port, true);
++ switch (hw->mac_type) {
++ case ICE_MAC_E810:
++ err = 0;
+ break;
+- case ICE_PHY_E82X:
++ case ICE_MAC_GENERIC:
+ kthread_cancel_delayed_work_sync(&ptp_port->ov_work);
+
+ err = ice_stop_phy_timer_e82x(hw, port, true);
+ break;
++ case ICE_MAC_GENERIC_3K_E825:
++ err = ice_stop_phy_timer_eth56g(hw, port, true);
++ break;
+ default:
+ err = -ENODEV;
+ }
+@@ -1361,19 +1361,16 @@ ice_ptp_port_phy_restart(struct ice_ptp_port *ptp_port)
+ unsigned long flags;
+ int err;
+
+- if (ice_is_e810(hw))
+- return 0;
+-
+ if (!ptp_port->link_up)
+ return ice_ptp_port_phy_stop(ptp_port);
+
+ mutex_lock(&ptp_port->ps_lock);
+
+- switch (ice_get_phy_model(hw)) {
+- case ICE_PHY_ETH56G:
+- err = ice_start_phy_timer_eth56g(hw, port);
++ switch (hw->mac_type) {
++ case ICE_MAC_E810:
++ err = 0;
+ break;
+- case ICE_PHY_E82X:
++ case ICE_MAC_GENERIC:
+ /* Start the PHY timer in Vernier mode */
+ kthread_cancel_delayed_work_sync(&ptp_port->ov_work);
+
+@@ -1398,6 +1395,9 @@ ice_ptp_port_phy_restart(struct ice_ptp_port *ptp_port)
+ kthread_queue_delayed_work(pf->ptp.kworker, &ptp_port->ov_work,
+ 0);
+ break;
++ case ICE_MAC_GENERIC_3K_E825:
++ err = ice_start_phy_timer_eth56g(hw, port);
++ break;
+ default:
+ err = -ENODEV;
+ }
+@@ -1432,12 +1432,13 @@ void ice_ptp_link_change(struct ice_pf *pf, bool linkup)
+ /* Skip HW writes if reset is in progress */
+ if (pf->hw.reset_ongoing)
+ return;
+- switch (ice_get_phy_model(hw)) {
+- case ICE_PHY_E810:
++
++ switch (hw->mac_type) {
++ case ICE_MAC_E810:
+ /* Do not reconfigure E810 PHY */
+ return;
+- case ICE_PHY_ETH56G:
+- case ICE_PHY_E82X:
++ case ICE_MAC_GENERIC:
++ case ICE_MAC_GENERIC_3K_E825:
+ ice_ptp_port_phy_restart(ptp_port);
+ return;
+ default:
+@@ -1465,46 +1466,44 @@ static int ice_ptp_cfg_phy_interrupt(struct ice_pf *pf, bool ena, u32 threshold)
+
+ ice_ptp_reset_ts_memory(hw);
+
+- switch (ice_get_phy_model(hw)) {
+- case ICE_PHY_ETH56G: {
+- int port;
++ switch (hw->mac_type) {
++ case ICE_MAC_E810:
++ return 0;
++ case ICE_MAC_GENERIC: {
++ int quad;
+
+- for (port = 0; port < hw->ptp.num_lports; port++) {
++ for (quad = 0; quad < ICE_GET_QUAD_NUM(hw->ptp.num_lports);
++ quad++) {
+ int err;
+
+- err = ice_phy_cfg_intr_eth56g(hw, port, ena, threshold);
++ err = ice_phy_cfg_intr_e82x(hw, quad, ena, threshold);
+ if (err) {
+- dev_err(dev, "Failed to configure PHY interrupt for port %d, err %d\n",
+- port, err);
++ dev_err(dev, "Failed to configure PHY interrupt for quad %d, err %d\n",
++ quad, err);
+ return err;
+ }
+ }
+
+ return 0;
+ }
+- case ICE_PHY_E82X: {
+- int quad;
++ case ICE_MAC_GENERIC_3K_E825: {
++ int port;
+
+- for (quad = 0; quad < ICE_GET_QUAD_NUM(hw->ptp.num_lports);
+- quad++) {
++ for (port = 0; port < hw->ptp.num_lports; port++) {
+ int err;
+
+- err = ice_phy_cfg_intr_e82x(hw, quad, ena, threshold);
++ err = ice_phy_cfg_intr_eth56g(hw, port, ena, threshold);
+ if (err) {
+- dev_err(dev, "Failed to configure PHY interrupt for quad %d, err %d\n",
+- quad, err);
++ dev_err(dev, "Failed to configure PHY interrupt for port %d, err %d\n",
++ port, err);
+ return err;
+ }
+ }
+
+ return 0;
+ }
+- case ICE_PHY_E810:
+- return 0;
+- case ICE_PHY_UNSUP:
++ case ICE_MAC_UNKNOWN:
+ default:
+- dev_warn(dev, "%s: Unexpected PHY model %d\n", __func__,
+- ice_get_phy_model(hw));
+ return -EOPNOTSUPP;
+ }
+ }
+@@ -1740,7 +1739,7 @@ static int ice_ptp_write_perout(struct ice_hw *hw, unsigned int chan,
+ /* 0. Reset mode & out_en in AUX_OUT */
+ wr32(hw, GLTSYN_AUX_OUT(chan, tmr_idx), 0);
+
+- if (ice_is_e825c(hw)) {
++ if (hw->mac_type == ICE_MAC_GENERIC_3K_E825) {
+ int err;
+
+ /* Enable/disable CGU 1PPS output for E825C */
+@@ -1825,7 +1824,7 @@ static int ice_ptp_cfg_perout(struct ice_pf *pf, struct ptp_perout_request *rq,
+ return ice_ptp_write_perout(hw, rq->index, gpio_pin, 0, 0);
+
+ if (strncmp(pf->ptp.pin_desc[pin_desc_idx].name, "1PPS", 64) == 0 &&
+- period != NSEC_PER_SEC && hw->ptp.phy_model == ICE_PHY_E82X) {
++ period != NSEC_PER_SEC && hw->mac_type == ICE_MAC_GENERIC) {
+ dev_err(ice_pf_to_dev(pf), "1PPS pin supports only 1 s period\n");
+ return -EOPNOTSUPP;
+ }
+@@ -2080,7 +2079,7 @@ ice_ptp_settime64(struct ptp_clock_info *info, const struct timespec64 *ts)
+ /* For Vernier mode on E82X, we need to recalibrate after new settime.
+ * Start with marking timestamps as invalid.
+ */
+- if (ice_get_phy_model(hw) == ICE_PHY_E82X) {
++ if (hw->mac_type == ICE_MAC_GENERIC) {
+ err = ice_ptp_clear_phy_offset_ready_e82x(hw);
+ if (err)
+ dev_warn(ice_pf_to_dev(pf), "Failed to mark timestamps as invalid before settime\n");
+@@ -2104,7 +2103,7 @@ ice_ptp_settime64(struct ptp_clock_info *info, const struct timespec64 *ts)
+ ice_ptp_enable_all_perout(pf);
+
+ /* Recalibrate and re-enable timestamp blocks for E822/E823 */
+- if (ice_get_phy_model(hw) == ICE_PHY_E82X)
++ if (hw->mac_type == ICE_MAC_GENERIC)
+ ice_ptp_restart_all_phy(pf);
+ exit:
+ if (err) {
+@@ -2558,7 +2557,7 @@ static void ice_ptp_set_funcs_e82x(struct ice_pf *pf)
+ pf->ptp.info.getcrosststamp = ice_ptp_getcrosststamp_e82x;
+
+ #endif /* CONFIG_ICE_HWTS */
+- if (ice_is_e825c(&pf->hw)) {
++ if (pf->hw.mac_type == ICE_MAC_GENERIC_3K_E825) {
+ pf->ptp.ice_pin_desc = ice_pin_desc_e825c;
+ pf->ptp.info.n_pins = ICE_PIN_DESC_ARR_LEN(ice_pin_desc_e825c);
+ } else {
+@@ -2646,10 +2645,17 @@ static void ice_ptp_set_caps(struct ice_pf *pf)
+ info->enable = ice_ptp_gpio_enable;
+ info->verify = ice_verify_pin;
+
+- if (ice_is_e810(&pf->hw))
++ switch (pf->hw.mac_type) {
++ case ICE_MAC_E810:
+ ice_ptp_set_funcs_e810(pf);
+- else
++ return;
++ case ICE_MAC_GENERIC:
++ case ICE_MAC_GENERIC_3K_E825:
+ ice_ptp_set_funcs_e82x(pf);
++ return;
++ default:
++ return;
++ }
+ }
+
+ /**
+@@ -2779,7 +2785,7 @@ static void ice_ptp_maybe_trigger_tx_interrupt(struct ice_pf *pf)
+ bool trigger_oicr = false;
+ unsigned int i;
+
+- if (ice_is_e810(hw))
++ if (!pf->ptp.port.tx.has_ready_bitmap)
+ return;
+
+ if (!ice_pf_src_tmr_owned(pf))
+@@ -2914,14 +2920,12 @@ static int ice_ptp_rebuild_owner(struct ice_pf *pf)
+ */
+ ice_ptp_flush_all_tx_tracker(pf);
+
+- if (!ice_is_e810(hw)) {
+- /* Enable quad interrupts */
+- err = ice_ptp_cfg_phy_interrupt(pf, true, 1);
+- if (err)
+- return err;
++ /* Enable quad interrupts */
++ err = ice_ptp_cfg_phy_interrupt(pf, true, 1);
++ if (err)
++ return err;
+
+- ice_ptp_restart_all_phy(pf);
+- }
++ ice_ptp_restart_all_phy(pf);
+
+ /* Re-enable all periodic outputs and external timestamp events */
+ ice_ptp_enable_all_perout(pf);
+@@ -2973,8 +2977,9 @@ void ice_ptp_rebuild(struct ice_pf *pf, enum ice_reset_req reset_type)
+
+ static bool ice_is_primary(struct ice_hw *hw)
+ {
+- return ice_is_e825c(hw) && ice_is_dual(hw) ?
+- !!(hw->dev_caps.nac_topo.mode & ICE_NAC_TOPO_PRIMARY_M) : true;
++ return hw->mac_type == ICE_MAC_GENERIC_3K_E825 && ice_is_dual(hw) ?
++ !!(hw->dev_caps.nac_topo.mode & ICE_NAC_TOPO_PRIMARY_M) :
++ true;
+ }
+
+ static int ice_ptp_setup_adapter(struct ice_pf *pf)
+@@ -2992,7 +2997,7 @@ static int ice_ptp_setup_pf(struct ice_pf *pf)
+ struct ice_ptp *ctrl_ptp = ice_get_ctrl_ptp(pf);
+ struct ice_ptp *ptp = &pf->ptp;
+
+- if (WARN_ON(!ctrl_ptp) || ice_get_phy_model(&pf->hw) == ICE_PHY_UNSUP)
++ if (WARN_ON(!ctrl_ptp) || pf->hw.mac_type == ICE_MAC_UNKNOWN)
+ return -ENODEV;
+
+ INIT_LIST_HEAD(&ptp->port.list_node);
+@@ -3009,7 +3014,7 @@ static void ice_ptp_cleanup_pf(struct ice_pf *pf)
+ {
+ struct ice_ptp *ptp = &pf->ptp;
+
+- if (ice_get_phy_model(&pf->hw) != ICE_PHY_UNSUP) {
++ if (pf->hw.mac_type != ICE_MAC_UNKNOWN) {
+ mutex_lock(&pf->adapter->ports.lock);
+ list_del(&ptp->port.list_node);
+ mutex_unlock(&pf->adapter->ports.lock);
+@@ -3136,18 +3141,18 @@ static int ice_ptp_init_port(struct ice_pf *pf, struct ice_ptp_port *ptp_port)
+
+ mutex_init(&ptp_port->ps_lock);
+
+- switch (ice_get_phy_model(hw)) {
+- case ICE_PHY_ETH56G:
+- return ice_ptp_init_tx_eth56g(pf, &ptp_port->tx,
+- ptp_port->port_num);
+- case ICE_PHY_E810:
++ switch (hw->mac_type) {
++ case ICE_MAC_E810:
+ return ice_ptp_init_tx_e810(pf, &ptp_port->tx);
+- case ICE_PHY_E82X:
++ case ICE_MAC_GENERIC:
+ kthread_init_delayed_work(&ptp_port->ov_work,
+ ice_ptp_wait_for_offsets);
+
+ return ice_ptp_init_tx_e82x(pf, &ptp_port->tx,
+ ptp_port->port_num);
++ case ICE_MAC_GENERIC_3K_E825:
++ return ice_ptp_init_tx_eth56g(pf, &ptp_port->tx,
++ ptp_port->port_num);
+ default:
+ return -ENODEV;
+ }
+@@ -3164,8 +3169,8 @@ static int ice_ptp_init_port(struct ice_pf *pf, struct ice_ptp_port *ptp_port)
+ */
+ static void ice_ptp_init_tx_interrupt_mode(struct ice_pf *pf)
+ {
+- switch (ice_get_phy_model(&pf->hw)) {
+- case ICE_PHY_E82X:
++ switch (pf->hw.mac_type) {
++ case ICE_MAC_GENERIC:
+ /* E822 based PHY has the clock owner process the interrupt
+ * for all ports.
+ */
+diff --git a/drivers/net/ethernet/intel/ice/ice_ptp_hw.c b/drivers/net/ethernet/intel/ice/ice_ptp_hw.c
+index ec91822e928066..8475d422f1ec41 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ptp_hw.c
++++ b/drivers/net/ethernet/intel/ice/ice_ptp_hw.c
+@@ -746,7 +746,7 @@ static int ice_init_cgu_e82x(struct ice_hw *hw)
+ int err;
+
+ /* Disable sticky lock detection so lock err reported is accurate */
+- if (ice_is_e825c(hw))
++ if (hw->mac_type == ICE_MAC_GENERIC_3K_E825)
+ err = ice_cfg_cgu_pll_dis_sticky_bits_e825c(hw);
+ else
+ err = ice_cfg_cgu_pll_dis_sticky_bits_e82x(hw);
+@@ -756,7 +756,7 @@ static int ice_init_cgu_e82x(struct ice_hw *hw)
+ /* Configure the CGU PLL using the parameters from the function
+ * capabilities.
+ */
+- if (ice_is_e825c(hw))
++ if (hw->mac_type == ICE_MAC_GENERIC_3K_E825)
+ err = ice_cfg_cgu_pll_e825c(hw, ts_info->time_ref,
+ (enum ice_clk_src)ts_info->clk_src);
+ else
+@@ -827,8 +827,8 @@ static u32 ice_ptp_tmr_cmd_to_port_reg(struct ice_hw *hw,
+ /* Certain hardware families share the same register values for the
+ * port register and source timer register.
+ */
+- switch (ice_get_phy_model(hw)) {
+- case ICE_PHY_E810:
++ switch (hw->mac_type) {
++ case ICE_MAC_E810:
+ return ice_ptp_tmr_cmd_to_src_reg(hw, cmd) & TS_CMD_MASK_E810;
+ default:
+ break;
+@@ -2729,10 +2729,7 @@ static void ice_ptp_init_phy_e825(struct ice_hw *hw)
+ {
+ struct ice_ptp_hw *ptp = &hw->ptp;
+ struct ice_eth56g_params *params;
+- u32 phy_rev;
+- int err;
+
+- ptp->phy_model = ICE_PHY_ETH56G;
+ params = &ptp->phy.eth56g;
+ params->onestep_ena = false;
+ params->peer_delay = 0;
+@@ -2742,9 +2739,6 @@ static void ice_ptp_init_phy_e825(struct ice_hw *hw)
+ ptp->num_lports = params->num_phys * ptp->ports_per_phy;
+
+ ice_sb_access_ena_eth56g(hw, true);
+- err = ice_read_phy_eth56g(hw, hw->pf_id, PHY_REG_REVISION, &phy_rev);
+- if (err || phy_rev != PHY_REVISION_ETH56G)
+- ptp->phy_model = ICE_PHY_UNSUP;
+ }
+
+ /* E822 family functions
+@@ -4792,7 +4786,6 @@ int ice_phy_cfg_intr_e82x(struct ice_hw *hw, u8 quad, bool ena, u8 threshold)
+ */
+ static void ice_ptp_init_phy_e82x(struct ice_ptp_hw *ptp)
+ {
+- ptp->phy_model = ICE_PHY_E82X;
+ ptp->num_lports = 8;
+ ptp->ports_per_phy = 8;
+ }
+@@ -5315,68 +5308,6 @@ ice_get_phy_tx_tstamp_ready_e810(struct ice_hw *hw, u8 port, u64 *tstamp_ready)
+ * to access the extended GPIOs available.
+ */
+
+-/**
+- * ice_get_pca9575_handle
+- * @hw: pointer to the hw struct
+- * @pca9575_handle: GPIO controller's handle
+- *
+- * Find and return the GPIO controller's handle in the netlist.
+- * When found - the value will be cached in the hw structure and following calls
+- * will return cached value
+- */
+-static int
+-ice_get_pca9575_handle(struct ice_hw *hw, u16 *pca9575_handle)
+-{
+- struct ice_aqc_get_link_topo *cmd;
+- struct ice_aq_desc desc;
+- int status;
+- u8 idx;
+-
+- /* If handle was read previously return cached value */
+- if (hw->io_expander_handle) {
+- *pca9575_handle = hw->io_expander_handle;
+- return 0;
+- }
+-
+- /* If handle was not detected read it from the netlist */
+- cmd = &desc.params.get_link_topo;
+- ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_link_topo);
+-
+- /* Set node type to GPIO controller */
+- cmd->addr.topo_params.node_type_ctx =
+- (ICE_AQC_LINK_TOPO_NODE_TYPE_M &
+- ICE_AQC_LINK_TOPO_NODE_TYPE_GPIO_CTRL);
+-
+-#define SW_PCA9575_SFP_TOPO_IDX 2
+-#define SW_PCA9575_QSFP_TOPO_IDX 1
+-
+- /* Check if the SW IO expander controlling SMA exists in the netlist. */
+- if (hw->device_id == ICE_DEV_ID_E810C_SFP)
+- idx = SW_PCA9575_SFP_TOPO_IDX;
+- else if (hw->device_id == ICE_DEV_ID_E810C_QSFP)
+- idx = SW_PCA9575_QSFP_TOPO_IDX;
+- else
+- return -EOPNOTSUPP;
+-
+- cmd->addr.topo_params.index = idx;
+-
+- status = ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
+- if (status)
+- return -EOPNOTSUPP;
+-
+- /* Verify if we found the right IO expander type */
+- if (desc.params.get_link_topo.node_part_num !=
+- ICE_AQC_GET_LINK_TOPO_NODE_NR_PCA9575)
+- return -EOPNOTSUPP;
+-
+- /* If present save the handle and return it */
+- hw->io_expander_handle =
+- le16_to_cpu(desc.params.get_link_topo.addr.handle);
+- *pca9575_handle = hw->io_expander_handle;
+-
+- return 0;
+-}
+-
+ /**
+ * ice_read_sma_ctrl
+ * @hw: pointer to the hw struct
+@@ -5441,37 +5372,6 @@ int ice_write_sma_ctrl(struct ice_hw *hw, u8 data)
+ return status;
+ }
+
+-/**
+- * ice_read_pca9575_reg
+- * @hw: pointer to the hw struct
+- * @offset: GPIO controller register offset
+- * @data: pointer to data to be read from the GPIO controller
+- *
+- * Read the register from the GPIO controller
+- */
+-int ice_read_pca9575_reg(struct ice_hw *hw, u8 offset, u8 *data)
+-{
+- struct ice_aqc_link_topo_addr link_topo;
+- __le16 addr;
+- u16 handle;
+- int err;
+-
+- memset(&link_topo, 0, sizeof(link_topo));
+-
+- err = ice_get_pca9575_handle(hw, &handle);
+- if (err)
+- return err;
+-
+- link_topo.handle = cpu_to_le16(handle);
+- link_topo.topo_params.node_type_ctx =
+- FIELD_PREP(ICE_AQC_LINK_TOPO_NODE_CTX_M,
+- ICE_AQC_LINK_TOPO_NODE_CTX_PROVIDED);
+-
+- addr = cpu_to_le16((u16)offset);
+-
+- return ice_aq_read_i2c(hw, link_topo, 0, addr, 1, data, NULL);
+-}
+-
+ /**
+ * ice_ptp_read_sdp_ac - read SDP available connections section from NVM
+ * @hw: pointer to the HW struct
+@@ -5538,7 +5438,6 @@ int ice_ptp_read_sdp_ac(struct ice_hw *hw, __le16 *entries, uint *num_entries)
+ */
+ static void ice_ptp_init_phy_e810(struct ice_ptp_hw *ptp)
+ {
+- ptp->phy_model = ICE_PHY_E810;
+ ptp->num_lports = 8;
+ ptp->ports_per_phy = 4;
+
+@@ -5547,9 +5446,8 @@ static void ice_ptp_init_phy_e810(struct ice_ptp_hw *ptp)
+
+ /* Device agnostic functions
+ *
+- * The following functions implement shared behavior common to both E822 and
+- * E810 devices, possibly calling a device specific implementation where
+- * necessary.
++ * The following functions implement shared behavior common to all devices,
++ * possibly calling a device specific implementation where necessary.
+ */
+
+ /**
+@@ -5612,14 +5510,19 @@ void ice_ptp_init_hw(struct ice_hw *hw)
+ {
+ struct ice_ptp_hw *ptp = &hw->ptp;
+
+- if (ice_is_e822(hw) || ice_is_e823(hw))
+- ice_ptp_init_phy_e82x(ptp);
+- else if (ice_is_e810(hw))
++ switch (hw->mac_type) {
++ case ICE_MAC_E810:
+ ice_ptp_init_phy_e810(ptp);
+- else if (ice_is_e825c(hw))
++ break;
++ case ICE_MAC_GENERIC:
++ ice_ptp_init_phy_e82x(ptp);
++ break;
++ case ICE_MAC_GENERIC_3K_E825:
+ ice_ptp_init_phy_e825(hw);
+- else
+- ptp->phy_model = ICE_PHY_UNSUP;
++ break;
++ default:
++ return;
++ }
+ }
+
+ /**
+@@ -5640,11 +5543,11 @@ void ice_ptp_init_hw(struct ice_hw *hw)
+ static int ice_ptp_write_port_cmd(struct ice_hw *hw, u8 port,
+ enum ice_ptp_tmr_cmd cmd)
+ {
+- switch (ice_get_phy_model(hw)) {
+- case ICE_PHY_ETH56G:
+- return ice_ptp_write_port_cmd_eth56g(hw, port, cmd);
+- case ICE_PHY_E82X:
++ switch (hw->mac_type) {
++ case ICE_MAC_GENERIC:
+ return ice_ptp_write_port_cmd_e82x(hw, port, cmd);
++ case ICE_MAC_GENERIC_3K_E825:
++ return ice_ptp_write_port_cmd_eth56g(hw, port, cmd);
+ default:
+ return -EOPNOTSUPP;
+ }
+@@ -5705,8 +5608,8 @@ static int ice_ptp_port_cmd(struct ice_hw *hw, enum ice_ptp_tmr_cmd cmd)
+ u32 port;
+
+ /* PHY models which can program all ports simultaneously */
+- switch (ice_get_phy_model(hw)) {
+- case ICE_PHY_E810:
++ switch (hw->mac_type) {
++ case ICE_MAC_E810:
+ return ice_ptp_port_cmd_e810(hw, cmd);
+ default:
+ break;
+@@ -5784,17 +5687,17 @@ int ice_ptp_init_time(struct ice_hw *hw, u64 time)
+
+ /* PHY timers */
+ /* Fill Rx and Tx ports and send msg to PHY */
+- switch (ice_get_phy_model(hw)) {
+- case ICE_PHY_ETH56G:
+- err = ice_ptp_prep_phy_time_eth56g(hw,
+- (u32)(time & 0xFFFFFFFF));
+- break;
+- case ICE_PHY_E810:
++ switch (hw->mac_type) {
++ case ICE_MAC_E810:
+ err = ice_ptp_prep_phy_time_e810(hw, time & 0xFFFFFFFF);
+ break;
+- case ICE_PHY_E82X:
++ case ICE_MAC_GENERIC:
+ err = ice_ptp_prep_phy_time_e82x(hw, time & 0xFFFFFFFF);
+ break;
++ case ICE_MAC_GENERIC_3K_E825:
++ err = ice_ptp_prep_phy_time_eth56g(hw,
++ (u32)(time & 0xFFFFFFFF));
++ break;
+ default:
+ err = -EOPNOTSUPP;
+ }
+@@ -5830,16 +5733,16 @@ int ice_ptp_write_incval(struct ice_hw *hw, u64 incval)
+ wr32(hw, GLTSYN_SHADJ_L(tmr_idx), lower_32_bits(incval));
+ wr32(hw, GLTSYN_SHADJ_H(tmr_idx), upper_32_bits(incval));
+
+- switch (ice_get_phy_model(hw)) {
+- case ICE_PHY_ETH56G:
+- err = ice_ptp_prep_phy_incval_eth56g(hw, incval);
+- break;
+- case ICE_PHY_E810:
++ switch (hw->mac_type) {
++ case ICE_MAC_E810:
+ err = ice_ptp_prep_phy_incval_e810(hw, incval);
+ break;
+- case ICE_PHY_E82X:
++ case ICE_MAC_GENERIC:
+ err = ice_ptp_prep_phy_incval_e82x(hw, incval);
+ break;
++ case ICE_MAC_GENERIC_3K_E825:
++ err = ice_ptp_prep_phy_incval_eth56g(hw, incval);
++ break;
+ default:
+ err = -EOPNOTSUPP;
+ }
+@@ -5899,16 +5802,16 @@ int ice_ptp_adj_clock(struct ice_hw *hw, s32 adj)
+ wr32(hw, GLTSYN_SHADJ_L(tmr_idx), 0);
+ wr32(hw, GLTSYN_SHADJ_H(tmr_idx), adj);
+
+- switch (ice_get_phy_model(hw)) {
+- case ICE_PHY_ETH56G:
+- err = ice_ptp_prep_phy_adj_eth56g(hw, adj);
+- break;
+- case ICE_PHY_E810:
++ switch (hw->mac_type) {
++ case ICE_MAC_E810:
+ err = ice_ptp_prep_phy_adj_e810(hw, adj);
+ break;
+- case ICE_PHY_E82X:
++ case ICE_MAC_GENERIC:
+ err = ice_ptp_prep_phy_adj_e82x(hw, adj);
+ break;
++ case ICE_MAC_GENERIC_3K_E825:
++ err = ice_ptp_prep_phy_adj_eth56g(hw, adj);
++ break;
+ default:
+ err = -EOPNOTSUPP;
+ }
+@@ -5932,13 +5835,13 @@ int ice_ptp_adj_clock(struct ice_hw *hw, s32 adj)
+ */
+ int ice_read_phy_tstamp(struct ice_hw *hw, u8 block, u8 idx, u64 *tstamp)
+ {
+- switch (ice_get_phy_model(hw)) {
+- case ICE_PHY_ETH56G:
+- return ice_read_ptp_tstamp_eth56g(hw, block, idx, tstamp);
+- case ICE_PHY_E810:
++ switch (hw->mac_type) {
++ case ICE_MAC_E810:
+ return ice_read_phy_tstamp_e810(hw, block, idx, tstamp);
+- case ICE_PHY_E82X:
++ case ICE_MAC_GENERIC:
+ return ice_read_phy_tstamp_e82x(hw, block, idx, tstamp);
++ case ICE_MAC_GENERIC_3K_E825:
++ return ice_read_ptp_tstamp_eth56g(hw, block, idx, tstamp);
+ default:
+ return -EOPNOTSUPP;
+ }
+@@ -5962,13 +5865,13 @@ int ice_read_phy_tstamp(struct ice_hw *hw, u8 block, u8 idx, u64 *tstamp)
+ */
+ int ice_clear_phy_tstamp(struct ice_hw *hw, u8 block, u8 idx)
+ {
+- switch (ice_get_phy_model(hw)) {
+- case ICE_PHY_ETH56G:
+- return ice_clear_ptp_tstamp_eth56g(hw, block, idx);
+- case ICE_PHY_E810:
++ switch (hw->mac_type) {
++ case ICE_MAC_E810:
+ return ice_clear_phy_tstamp_e810(hw, block, idx);
+- case ICE_PHY_E82X:
++ case ICE_MAC_GENERIC:
+ return ice_clear_phy_tstamp_e82x(hw, block, idx);
++ case ICE_MAC_GENERIC_3K_E825:
++ return ice_clear_ptp_tstamp_eth56g(hw, block, idx);
+ default:
+ return -EOPNOTSUPP;
+ }
+@@ -6025,14 +5928,14 @@ static int ice_get_pf_c827_idx(struct ice_hw *hw, u8 *idx)
+ */
+ void ice_ptp_reset_ts_memory(struct ice_hw *hw)
+ {
+- switch (ice_get_phy_model(hw)) {
+- case ICE_PHY_ETH56G:
+- ice_ptp_reset_ts_memory_eth56g(hw);
+- break;
+- case ICE_PHY_E82X:
++ switch (hw->mac_type) {
++ case ICE_MAC_GENERIC:
+ ice_ptp_reset_ts_memory_e82x(hw);
+ break;
+- case ICE_PHY_E810:
++ case ICE_MAC_GENERIC_3K_E825:
++ ice_ptp_reset_ts_memory_eth56g(hw);
++ break;
++ case ICE_MAC_E810:
+ default:
+ return;
+ }
+@@ -6054,13 +5957,13 @@ int ice_ptp_init_phc(struct ice_hw *hw)
+ /* Clear event err indications for auxiliary pins */
+ (void)rd32(hw, GLTSYN_STAT(src_idx));
+
+- switch (ice_get_phy_model(hw)) {
+- case ICE_PHY_ETH56G:
+- return ice_ptp_init_phc_eth56g(hw);
+- case ICE_PHY_E810:
++ switch (hw->mac_type) {
++ case ICE_MAC_E810:
+ return ice_ptp_init_phc_e810(hw);
+- case ICE_PHY_E82X:
++ case ICE_MAC_GENERIC:
+ return ice_ptp_init_phc_e82x(hw);
++ case ICE_MAC_GENERIC_3K_E825:
++ return ice_ptp_init_phc_eth56g(hw);
+ default:
+ return -EOPNOTSUPP;
+ }
+@@ -6079,16 +5982,16 @@ int ice_ptp_init_phc(struct ice_hw *hw)
+ */
+ int ice_get_phy_tx_tstamp_ready(struct ice_hw *hw, u8 block, u64 *tstamp_ready)
+ {
+- switch (ice_get_phy_model(hw)) {
+- case ICE_PHY_ETH56G:
+- return ice_get_phy_tx_tstamp_ready_eth56g(hw, block,
+- tstamp_ready);
+- case ICE_PHY_E810:
++ switch (hw->mac_type) {
++ case ICE_MAC_E810:
+ return ice_get_phy_tx_tstamp_ready_e810(hw, block,
+ tstamp_ready);
+- case ICE_PHY_E82X:
++ case ICE_MAC_GENERIC:
+ return ice_get_phy_tx_tstamp_ready_e82x(hw, block,
+ tstamp_ready);
++ case ICE_MAC_GENERIC_3K_E825:
++ return ice_get_phy_tx_tstamp_ready_eth56g(hw, block,
++ tstamp_ready);
+ break;
+ default:
+ return -EOPNOTSUPP;
+diff --git a/drivers/net/ethernet/intel/ice/ice_ptp_hw.h b/drivers/net/ethernet/intel/ice/ice_ptp_hw.h
+index 6779ce120515a2..6b467940755844 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ptp_hw.h
++++ b/drivers/net/ethernet/intel/ice/ice_ptp_hw.h
+@@ -395,7 +395,6 @@ int ice_phy_cfg_intr_e82x(struct ice_hw *hw, u8 quad, bool ena, u8 threshold);
+ /* E810 family functions */
+ int ice_read_sma_ctrl(struct ice_hw *hw, u8 *data);
+ int ice_write_sma_ctrl(struct ice_hw *hw, u8 data);
+-int ice_read_pca9575_reg(struct ice_hw *hw, u8 offset, u8 *data);
+ int ice_ptp_read_sdp_ac(struct ice_hw *hw, __le16 *entries, uint *num_entries);
+ int ice_cgu_get_num_pins(struct ice_hw *hw, bool input);
+ enum dpll_pin_type ice_cgu_get_pin_type(struct ice_hw *hw, u8 pin, bool input);
+@@ -431,13 +430,13 @@ int ice_phy_cfg_ptp_1step_eth56g(struct ice_hw *hw, u8 port);
+ */
+ static inline u64 ice_get_base_incval(struct ice_hw *hw)
+ {
+- switch (hw->ptp.phy_model) {
+- case ICE_PHY_ETH56G:
+- return ICE_ETH56G_NOMINAL_INCVAL;
+- case ICE_PHY_E810:
++ switch (hw->mac_type) {
++ case ICE_MAC_E810:
+ return ICE_PTP_NOMINAL_INCVAL_E810;
+- case ICE_PHY_E82X:
++ case ICE_MAC_GENERIC:
+ return ice_e82x_nominal_incval(ice_e82x_time_ref(hw));
++ case ICE_MAC_GENERIC_3K_E825:
++ return ICE_ETH56G_NOMINAL_INCVAL;
+ default:
+ return 0;
+ }
+diff --git a/drivers/net/ethernet/intel/ice/ice_type.h b/drivers/net/ethernet/intel/ice/ice_type.h
+index 33a1a5934c0d57..0aab21113cc435 100644
+--- a/drivers/net/ethernet/intel/ice/ice_type.h
++++ b/drivers/net/ethernet/intel/ice/ice_type.h
+@@ -871,14 +871,6 @@ union ice_phy_params {
+ struct ice_eth56g_params eth56g;
+ };
+
+-/* PHY model */
+-enum ice_phy_model {
+- ICE_PHY_UNSUP = -1,
+- ICE_PHY_E810 = 1,
+- ICE_PHY_E82X,
+- ICE_PHY_ETH56G,
+-};
+-
+ /* Global Link Topology */
+ enum ice_global_link_topo {
+ ICE_LINK_TOPO_UP_TO_2_LINKS,
+@@ -888,7 +880,6 @@ enum ice_global_link_topo {
+ };
+
+ struct ice_ptp_hw {
+- enum ice_phy_model phy_model;
+ union ice_phy_params phy;
+ u8 num_lports;
+ u8 ports_per_phy;
+diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.c
+index 9be4bd717512d0..f90f545b3144d3 100644
+--- a/drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.c
++++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.c
+@@ -2097,6 +2097,11 @@ int ice_vc_add_fdir_fltr(struct ice_vf *vf, u8 *msg)
+ pf = vf->pf;
+ dev = ice_pf_to_dev(pf);
+ vf_vsi = ice_get_vf_vsi(vf);
++ if (!vf_vsi) {
++ dev_err(dev, "Can not get FDIR vf_vsi for VF %u\n", vf->vf_id);
++ v_ret = VIRTCHNL_STATUS_ERR_PARAM;
++ goto err_exit;
++ }
+
+ #define ICE_VF_MAX_FDIR_FILTERS 128
+ if (!ice_fdir_num_avail_fltr(&pf->hw, vf_vsi) ||
+diff --git a/drivers/net/ethernet/intel/idpf/idpf.h b/drivers/net/ethernet/intel/idpf/idpf.h
+index 66544faab710aa..aef0e9775a3305 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf.h
++++ b/drivers/net/ethernet/intel/idpf/idpf.h
+@@ -629,13 +629,13 @@ bool idpf_is_capability_ena(struct idpf_adapter *adapter, bool all,
+ VIRTCHNL2_CAP_RX_HSPLIT_AT_L4V4 |\
+ VIRTCHNL2_CAP_RX_HSPLIT_AT_L4V6)
+
+-#define IDPF_CAP_RX_CSUM_L4V4 (\
+- VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_TCP |\
+- VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_UDP)
++#define IDPF_CAP_TX_CSUM_L4V4 (\
++ VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_TCP |\
++ VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_UDP)
+
+-#define IDPF_CAP_RX_CSUM_L4V6 (\
+- VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_TCP |\
+- VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_UDP)
++#define IDPF_CAP_TX_CSUM_L4V6 (\
++ VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_TCP |\
++ VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_UDP)
+
+ #define IDPF_CAP_RX_CSUM (\
+ VIRTCHNL2_CAP_RX_CSUM_L3_IPV4 |\
+@@ -644,11 +644,9 @@ bool idpf_is_capability_ena(struct idpf_adapter *adapter, bool all,
+ VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_TCP |\
+ VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_UDP)
+
+-#define IDPF_CAP_SCTP_CSUM (\
++#define IDPF_CAP_TX_SCTP_CSUM (\
+ VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_SCTP |\
+- VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_SCTP |\
+- VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_SCTP |\
+- VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_SCTP)
++ VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_SCTP)
+
+ #define IDPF_CAP_TUNNEL_TX_CSUM (\
+ VIRTCHNL2_CAP_TX_CSUM_L3_SINGLE_TUNNEL |\
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_lib.c b/drivers/net/ethernet/intel/idpf/idpf_lib.c
+index a055a47449f128..6e8a82dae16286 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_lib.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_lib.c
+@@ -703,8 +703,10 @@ static int idpf_cfg_netdev(struct idpf_vport *vport)
+ {
+ struct idpf_adapter *adapter = vport->adapter;
+ struct idpf_vport_config *vport_config;
++ netdev_features_t other_offloads = 0;
++ netdev_features_t csum_offloads = 0;
++ netdev_features_t tso_offloads = 0;
+ netdev_features_t dflt_features;
+- netdev_features_t offloads = 0;
+ struct idpf_netdev_priv *np;
+ struct net_device *netdev;
+ u16 idx = vport->idx;
+@@ -766,53 +768,32 @@ static int idpf_cfg_netdev(struct idpf_vport *vport)
+
+ if (idpf_is_cap_ena_all(adapter, IDPF_RSS_CAPS, IDPF_CAP_RSS))
+ dflt_features |= NETIF_F_RXHASH;
+- if (idpf_is_cap_ena_all(adapter, IDPF_CSUM_CAPS, IDPF_CAP_RX_CSUM_L4V4))
+- dflt_features |= NETIF_F_IP_CSUM;
+- if (idpf_is_cap_ena_all(adapter, IDPF_CSUM_CAPS, IDPF_CAP_RX_CSUM_L4V6))
+- dflt_features |= NETIF_F_IPV6_CSUM;
++ if (idpf_is_cap_ena_all(adapter, IDPF_CSUM_CAPS, IDPF_CAP_TX_CSUM_L4V4))
++ csum_offloads |= NETIF_F_IP_CSUM;
++ if (idpf_is_cap_ena_all(adapter, IDPF_CSUM_CAPS, IDPF_CAP_TX_CSUM_L4V6))
++ csum_offloads |= NETIF_F_IPV6_CSUM;
+ if (idpf_is_cap_ena(adapter, IDPF_CSUM_CAPS, IDPF_CAP_RX_CSUM))
+- dflt_features |= NETIF_F_RXCSUM;
+- if (idpf_is_cap_ena_all(adapter, IDPF_CSUM_CAPS, IDPF_CAP_SCTP_CSUM))
+- dflt_features |= NETIF_F_SCTP_CRC;
++ csum_offloads |= NETIF_F_RXCSUM;
++ if (idpf_is_cap_ena_all(adapter, IDPF_CSUM_CAPS, IDPF_CAP_TX_SCTP_CSUM))
++ csum_offloads |= NETIF_F_SCTP_CRC;
+
+ if (idpf_is_cap_ena(adapter, IDPF_SEG_CAPS, VIRTCHNL2_CAP_SEG_IPV4_TCP))
+- dflt_features |= NETIF_F_TSO;
++ tso_offloads |= NETIF_F_TSO;
+ if (idpf_is_cap_ena(adapter, IDPF_SEG_CAPS, VIRTCHNL2_CAP_SEG_IPV6_TCP))
+- dflt_features |= NETIF_F_TSO6;
++ tso_offloads |= NETIF_F_TSO6;
+ if (idpf_is_cap_ena_all(adapter, IDPF_SEG_CAPS,
+ VIRTCHNL2_CAP_SEG_IPV4_UDP |
+ VIRTCHNL2_CAP_SEG_IPV6_UDP))
+- dflt_features |= NETIF_F_GSO_UDP_L4;
++ tso_offloads |= NETIF_F_GSO_UDP_L4;
+ if (idpf_is_cap_ena_all(adapter, IDPF_RSC_CAPS, IDPF_CAP_RSC))
+- offloads |= NETIF_F_GRO_HW;
+- /* advertise to stack only if offloads for encapsulated packets is
+- * supported
+- */
+- if (idpf_is_cap_ena(vport->adapter, IDPF_SEG_CAPS,
+- VIRTCHNL2_CAP_SEG_TX_SINGLE_TUNNEL)) {
+- offloads |= NETIF_F_GSO_UDP_TUNNEL |
+- NETIF_F_GSO_GRE |
+- NETIF_F_GSO_GRE_CSUM |
+- NETIF_F_GSO_PARTIAL |
+- NETIF_F_GSO_UDP_TUNNEL_CSUM |
+- NETIF_F_GSO_IPXIP4 |
+- NETIF_F_GSO_IPXIP6 |
+- 0;
+-
+- if (!idpf_is_cap_ena_all(vport->adapter, IDPF_CSUM_CAPS,
+- IDPF_CAP_TUNNEL_TX_CSUM))
+- netdev->gso_partial_features |=
+- NETIF_F_GSO_UDP_TUNNEL_CSUM;
+-
+- netdev->gso_partial_features |= NETIF_F_GSO_GRE_CSUM;
+- offloads |= NETIF_F_TSO_MANGLEID;
+- }
++ other_offloads |= NETIF_F_GRO_HW;
+ if (idpf_is_cap_ena(adapter, IDPF_OTHER_CAPS, VIRTCHNL2_CAP_LOOPBACK))
+- offloads |= NETIF_F_LOOPBACK;
++ other_offloads |= NETIF_F_LOOPBACK;
+
+- netdev->features |= dflt_features;
+- netdev->hw_features |= dflt_features | offloads;
+- netdev->hw_enc_features |= dflt_features | offloads;
++ netdev->features |= dflt_features | csum_offloads | tso_offloads;
++ netdev->hw_features |= netdev->features | other_offloads;
++ netdev->vlan_features |= netdev->features | other_offloads;
++ netdev->hw_enc_features |= dflt_features | other_offloads;
+ idpf_set_ethtool_ops(netdev);
+ SET_NETDEV_DEV(netdev, &adapter->pdev->dev);
+
+@@ -1131,11 +1112,9 @@ static struct idpf_vport *idpf_vport_alloc(struct idpf_adapter *adapter,
+
+ num_max_q = max(max_q->max_txq, max_q->max_rxq);
+ vport->q_vector_idxs = kcalloc(num_max_q, sizeof(u16), GFP_KERNEL);
+- if (!vport->q_vector_idxs) {
+- kfree(vport);
++ if (!vport->q_vector_idxs)
++ goto free_vport;
+
+- return NULL;
+- }
+ idpf_vport_init(vport, max_q);
+
+ /* This alloc is done separate from the LUT because it's not strictly
+@@ -1145,11 +1124,9 @@ static struct idpf_vport *idpf_vport_alloc(struct idpf_adapter *adapter,
+ */
+ rss_data = &adapter->vport_config[idx]->user_config.rss_data;
+ rss_data->rss_key = kzalloc(rss_data->rss_key_size, GFP_KERNEL);
+- if (!rss_data->rss_key) {
+- kfree(vport);
++ if (!rss_data->rss_key)
++ goto free_vector_idxs;
+
+- return NULL;
+- }
+ /* Initialize default rss key */
+ netdev_rss_key_fill((void *)rss_data->rss_key, rss_data->rss_key_size);
+
+@@ -1162,6 +1139,13 @@ static struct idpf_vport *idpf_vport_alloc(struct idpf_adapter *adapter,
+ adapter->next_vport = idpf_get_free_slot(adapter);
+
+ return vport;
++
++free_vector_idxs:
++ kfree(vport->q_vector_idxs);
++free_vport:
++ kfree(vport);
++
++ return NULL;
+ }
+
+ /**
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_main.c b/drivers/net/ethernet/intel/idpf/idpf_main.c
+index bec4a02c53733e..b35713036a54ab 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_main.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_main.c
+@@ -89,6 +89,7 @@ static void idpf_shutdown(struct pci_dev *pdev)
+ {
+ struct idpf_adapter *adapter = pci_get_drvdata(pdev);
+
++ cancel_delayed_work_sync(&adapter->serv_task);
+ cancel_delayed_work_sync(&adapter->vc_event_task);
+ idpf_vc_core_deinit(adapter);
+ idpf_deinit_dflt_mbx(adapter);
+diff --git a/drivers/net/ethernet/intel/igc/igc_ptp.c b/drivers/net/ethernet/intel/igc/igc_ptp.c
+index 612ed26a29c5d4..efc7b30e421133 100644
+--- a/drivers/net/ethernet/intel/igc/igc_ptp.c
++++ b/drivers/net/ethernet/intel/igc/igc_ptp.c
+@@ -1290,6 +1290,8 @@ void igc_ptp_reset(struct igc_adapter *adapter)
+ /* reset the tstamp_config */
+ igc_ptp_set_timestamp_mode(adapter, &adapter->tstamp_config);
+
++ mutex_lock(&adapter->ptm_lock);
++
+ spin_lock_irqsave(&adapter->tmreg_lock, flags);
+
+ switch (adapter->hw.mac.type) {
+@@ -1308,7 +1310,6 @@ void igc_ptp_reset(struct igc_adapter *adapter)
+ if (!igc_is_crosststamp_supported(adapter))
+ break;
+
+- mutex_lock(&adapter->ptm_lock);
+ wr32(IGC_PCIE_DIG_DELAY, IGC_PCIE_DIG_DELAY_DEFAULT);
+ wr32(IGC_PCIE_PHY_DELAY, IGC_PCIE_PHY_DELAY_DEFAULT);
+
+@@ -1332,7 +1333,6 @@ void igc_ptp_reset(struct igc_adapter *adapter)
+ netdev_err(adapter->netdev, "Timeout reading IGC_PTM_STAT register\n");
+
+ igc_ptm_reset(hw);
+- mutex_unlock(&adapter->ptm_lock);
+ break;
+ default:
+ /* No work to do. */
+@@ -1349,5 +1349,7 @@ void igc_ptp_reset(struct igc_adapter *adapter)
+ out:
+ spin_unlock_irqrestore(&adapter->tmreg_lock, flags);
+
++ mutex_unlock(&adapter->ptm_lock);
++
+ wrfl();
+ }
+diff --git a/drivers/net/ethernet/marvell/octeon_ep/octep_main.c b/drivers/net/ethernet/marvell/octeon_ep/octep_main.c
+index 0a679e95196fed..24499bb36c0057 100644
+--- a/drivers/net/ethernet/marvell/octeon_ep/octep_main.c
++++ b/drivers/net/ethernet/marvell/octeon_ep/octep_main.c
+@@ -1223,7 +1223,7 @@ static void octep_hb_timeout_task(struct work_struct *work)
+ miss_cnt);
+ rtnl_lock();
+ if (netif_running(oct->netdev))
+- octep_stop(oct->netdev);
++ dev_close(oct->netdev);
+ rtnl_unlock();
+ }
+
+diff --git a/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_main.c b/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_main.c
+index 18c922dd5fc64d..ccb69bc5c95292 100644
+--- a/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_main.c
++++ b/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_main.c
+@@ -835,7 +835,9 @@ static void octep_vf_tx_timeout(struct net_device *netdev, unsigned int txqueue)
+ struct octep_vf_device *oct = netdev_priv(netdev);
+
+ netdev_hold(netdev, NULL, GFP_ATOMIC);
+- schedule_work(&oct->tx_timeout_task);
++ if (!schedule_work(&oct->tx_timeout_task))
++ netdev_put(netdev, NULL);
++
+ }
+
+ static int octep_vf_set_mac(struct net_device *netdev, void *p)
+diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+index 477b8732b86099..c6d60f1d4f77aa 100644
+--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c
++++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+@@ -269,12 +269,8 @@ static const char * const mtk_clks_source_name[] = {
+ "ethwarp_wocpu2",
+ "ethwarp_wocpu1",
+ "ethwarp_wocpu0",
+- "top_usxgmii0_sel",
+- "top_usxgmii1_sel",
+ "top_sgm0_sel",
+ "top_sgm1_sel",
+- "top_xfi_phy0_xtal_sel",
+- "top_xfi_phy1_xtal_sel",
+ "top_eth_gmii_sel",
+ "top_eth_refck_50m_sel",
+ "top_eth_sys_200m_sel",
+@@ -2206,14 +2202,18 @@ static int mtk_poll_rx(struct napi_struct *napi, int budget,
+ ring->data[idx] = new_data;
+ rxd->rxd1 = (unsigned int)dma_addr;
+ release_desc:
++ if (MTK_HAS_CAPS(eth->soc->caps, MTK_36BIT_DMA)) {
++ if (unlikely(dma_addr == DMA_MAPPING_ERROR))
++ addr64 = FIELD_GET(RX_DMA_ADDR64_MASK,
++ rxd->rxd2);
++ else
++ addr64 = RX_DMA_PREP_ADDR64(dma_addr);
++ }
++
+ if (MTK_HAS_CAPS(eth->soc->caps, MTK_SOC_MT7628))
+ rxd->rxd2 = RX_DMA_LSO;
+ else
+- rxd->rxd2 = RX_DMA_PREP_PLEN0(ring->buf_size);
+-
+- if (MTK_HAS_CAPS(eth->soc->caps, MTK_36BIT_DMA) &&
+- likely(dma_addr != DMA_MAPPING_ERROR))
+- rxd->rxd2 |= RX_DMA_PREP_ADDR64(dma_addr);
++ rxd->rxd2 = RX_DMA_PREP_PLEN0(ring->buf_size) | addr64;
+
+ ring->calc_idx = idx;
+ done++;
+diff --git a/drivers/net/ethernet/mediatek/mtk_star_emac.c b/drivers/net/ethernet/mediatek/mtk_star_emac.c
+index 25989c79c92e61..c2ab87828d8589 100644
+--- a/drivers/net/ethernet/mediatek/mtk_star_emac.c
++++ b/drivers/net/ethernet/mediatek/mtk_star_emac.c
+@@ -1163,6 +1163,7 @@ static int mtk_star_tx_poll(struct napi_struct *napi, int budget)
+ struct net_device *ndev = priv->ndev;
+ unsigned int head = ring->head;
+ unsigned int entry = ring->tail;
++ unsigned long flags;
+
+ while (entry != head && count < (MTK_STAR_RING_NUM_DESCS - 1)) {
+ ret = mtk_star_tx_complete_one(priv);
+@@ -1182,9 +1183,9 @@ static int mtk_star_tx_poll(struct napi_struct *napi, int budget)
+ netif_wake_queue(ndev);
+
+ if (napi_complete(napi)) {
+- spin_lock(&priv->lock);
++ spin_lock_irqsave(&priv->lock, flags);
+ mtk_star_enable_dma_irq(priv, false, true);
+- spin_unlock(&priv->lock);
++ spin_unlock_irqrestore(&priv->lock, flags);
+ }
+
+ return 0;
+@@ -1341,16 +1342,16 @@ static int mtk_star_rx(struct mtk_star_priv *priv, int budget)
+ static int mtk_star_rx_poll(struct napi_struct *napi, int budget)
+ {
+ struct mtk_star_priv *priv;
++ unsigned long flags;
+ int work_done = 0;
+
+ priv = container_of(napi, struct mtk_star_priv, rx_napi);
+
+ work_done = mtk_star_rx(priv, budget);
+- if (work_done < budget) {
+- napi_complete_done(napi, work_done);
+- spin_lock(&priv->lock);
++ if (work_done < budget && napi_complete_done(napi, work_done)) {
++ spin_lock_irqsave(&priv->lock, flags);
+ mtk_star_enable_dma_irq(priv, true, false);
+- spin_unlock(&priv->lock);
++ spin_unlock_irqrestore(&priv->lock, flags);
+ }
+
+ return work_done;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c
+index 09433b91be176f..c8adf309ecad04 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c
+@@ -177,6 +177,7 @@ static int mlx5e_tx_reporter_ptpsq_unhealthy_recover(void *ctx)
+
+ priv = ptpsq->txqsq.priv;
+
++ rtnl_lock();
+ mutex_lock(&priv->state_lock);
+ chs = &priv->channels;
+ netdev = priv->netdev;
+@@ -184,22 +185,19 @@ static int mlx5e_tx_reporter_ptpsq_unhealthy_recover(void *ctx)
+ carrier_ok = netif_carrier_ok(netdev);
+ netif_carrier_off(netdev);
+
+- rtnl_lock();
+ mlx5e_deactivate_priv_channels(priv);
+- rtnl_unlock();
+
+ mlx5e_ptp_close(chs->ptp);
+ err = mlx5e_ptp_open(priv, &chs->params, chs->c[0]->lag_port, &chs->ptp);
+
+- rtnl_lock();
+ mlx5e_activate_priv_channels(priv);
+- rtnl_unlock();
+
+ /* return carrier back if needed */
+ if (carrier_ok)
+ netif_carrier_on(netdev);
+
+ mutex_unlock(&priv->state_lock);
++ rtnl_unlock();
+
+ return err;
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_vxlan.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_vxlan.c
+index e4e487c8431b88..b9cf79e2712440 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_vxlan.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_vxlan.c
+@@ -165,9 +165,6 @@ static int mlx5e_tc_tun_parse_vxlan(struct mlx5e_priv *priv,
+ struct flow_match_enc_keyid enc_keyid;
+ void *misc_c, *misc_v;
+
+- misc_c = MLX5_ADDR_OF(fte_match_param, spec->match_criteria, misc_parameters);
+- misc_v = MLX5_ADDR_OF(fte_match_param, spec->match_value, misc_parameters);
+-
+ if (!flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_KEYID))
+ return 0;
+
+@@ -182,6 +179,30 @@ static int mlx5e_tc_tun_parse_vxlan(struct mlx5e_priv *priv,
+ err = mlx5e_tc_tun_parse_vxlan_gbp_option(priv, spec, f);
+ if (err)
+ return err;
++
++ /* We can't mix custom tunnel headers with symbolic ones and we
++ * don't have a symbolic field name for GBP, so we use custom
++ * tunnel headers in this case. We need hardware support to
++ * match on custom tunnel headers, but we already know it's
++ * supported because the previous call successfully checked for
++ * that.
++ */
++ misc_c = MLX5_ADDR_OF(fte_match_param, spec->match_criteria,
++ misc_parameters_5);
++ misc_v = MLX5_ADDR_OF(fte_match_param, spec->match_value,
++ misc_parameters_5);
++
++ /* Shift by 8 to account for the reserved bits in the vxlan
++ * header after the VNI.
++ */
++ MLX5_SET(fte_match_set_misc5, misc_c, tunnel_header_1,
++ be32_to_cpu(enc_keyid.mask->keyid) << 8);
++ MLX5_SET(fte_match_set_misc5, misc_v, tunnel_header_1,
++ be32_to_cpu(enc_keyid.key->keyid) << 8);
++
++ spec->match_criteria_enable |= MLX5_MATCH_MISC_PARAMETERS_5;
++
++ return 0;
+ }
+
+ /* match on VNI is required */
+@@ -195,6 +216,11 @@ static int mlx5e_tc_tun_parse_vxlan(struct mlx5e_priv *priv,
+ return -EOPNOTSUPP;
+ }
+
++ misc_c = MLX5_ADDR_OF(fte_match_param, spec->match_criteria,
++ misc_parameters);
++ misc_v = MLX5_ADDR_OF(fte_match_param, spec->match_value,
++ misc_parameters);
++
+ MLX5_SET(fte_match_set_misc, misc_c, vxlan_vni,
+ be32_to_cpu(enc_keyid.mask->keyid));
+ MLX5_SET(fte_match_set_misc, misc_v, vxlan_vni,
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+index 9ba99609999f4f..f1d908f611349f 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+@@ -1750,9 +1750,6 @@ extra_split_attr_dests_needed(struct mlx5e_tc_flow *flow, struct mlx5_flow_attr
+ !list_is_first(&attr->list, &flow->attrs))
+ return 0;
+
+- if (flow_flag_test(flow, SLOW))
+- return 0;
+-
+ esw_attr = attr->esw_attr;
+ if (!esw_attr->split_count ||
+ esw_attr->split_count == esw_attr->out_count - 1)
+@@ -1766,7 +1763,7 @@ extra_split_attr_dests_needed(struct mlx5e_tc_flow *flow, struct mlx5_flow_attr
+ for (i = esw_attr->split_count; i < esw_attr->out_count; i++) {
+ /* external dest with encap is considered as internal by firmware */
+ if (esw_attr->dests[i].vport == MLX5_VPORT_UPLINK &&
+- !(esw_attr->dests[i].flags & MLX5_ESW_DEST_ENCAP_VALID))
++ !(esw_attr->dests[i].flags & MLX5_ESW_DEST_ENCAP))
+ ext_dest = true;
+ else
+ int_dest = true;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+index 20cc01ceee8a94..2e0920199d4711 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+@@ -3532,7 +3532,9 @@ int esw_offloads_enable(struct mlx5_eswitch *esw)
+ int err;
+
+ mutex_init(&esw->offloads.termtbl_mutex);
+- mlx5_rdma_enable_roce(esw->dev);
++ err = mlx5_rdma_enable_roce(esw->dev);
++ if (err)
++ goto err_roce;
+
+ err = mlx5_esw_host_number_init(esw);
+ if (err)
+@@ -3593,6 +3595,7 @@ int esw_offloads_enable(struct mlx5_eswitch *esw)
+ esw_offloads_metadata_uninit(esw);
+ err_metadata:
+ mlx5_rdma_disable_roce(esw->dev);
++err_roce:
+ mutex_destroy(&esw->offloads.termtbl_mutex);
+ return err;
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/rdma.c b/drivers/net/ethernet/mellanox/mlx5/core/rdma.c
+index a42f6cd99b7448..5c552b71e371c5 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/rdma.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/rdma.c
+@@ -118,8 +118,8 @@ static void mlx5_rdma_make_default_gid(struct mlx5_core_dev *dev, union ib_gid *
+
+ static int mlx5_rdma_add_roce_addr(struct mlx5_core_dev *dev)
+ {
++ u8 mac[ETH_ALEN] = {};
+ union ib_gid gid;
+- u8 mac[ETH_ALEN];
+
+ mlx5_rdma_make_default_gid(dev, &gid);
+ return mlx5_core_roce_gid_set(dev, 0,
+@@ -140,17 +140,17 @@ void mlx5_rdma_disable_roce(struct mlx5_core_dev *dev)
+ mlx5_nic_vport_disable_roce(dev);
+ }
+
+-void mlx5_rdma_enable_roce(struct mlx5_core_dev *dev)
++int mlx5_rdma_enable_roce(struct mlx5_core_dev *dev)
+ {
+ int err;
+
+ if (!MLX5_CAP_GEN(dev, roce))
+- return;
++ return 0;
+
+ err = mlx5_nic_vport_enable_roce(dev);
+ if (err) {
+ mlx5_core_err(dev, "Failed to enable RoCE: %d\n", err);
+- return;
++ return err;
+ }
+
+ err = mlx5_rdma_add_roce_addr(dev);
+@@ -165,10 +165,11 @@ void mlx5_rdma_enable_roce(struct mlx5_core_dev *dev)
+ goto del_roce_addr;
+ }
+
+- return;
++ return err;
+
+ del_roce_addr:
+ mlx5_rdma_del_roce_addr(dev);
+ disable_roce:
+ mlx5_nic_vport_disable_roce(dev);
++ return err;
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/rdma.h b/drivers/net/ethernet/mellanox/mlx5/core/rdma.h
+index 750cff2a71a4bb..3d9e76c3d42fb1 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/rdma.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/rdma.h
+@@ -8,12 +8,12 @@
+
+ #ifdef CONFIG_MLX5_ESWITCH
+
+-void mlx5_rdma_enable_roce(struct mlx5_core_dev *dev);
++int mlx5_rdma_enable_roce(struct mlx5_core_dev *dev);
+ void mlx5_rdma_disable_roce(struct mlx5_core_dev *dev);
+
+ #else /* CONFIG_MLX5_ESWITCH */
+
+-static inline void mlx5_rdma_enable_roce(struct mlx5_core_dev *dev) {}
++static inline int mlx5_rdma_enable_roce(struct mlx5_core_dev *dev) { return 0; }
+ static inline void mlx5_rdma_disable_roce(struct mlx5_core_dev *dev) {}
+
+ #endif /* CONFIG_MLX5_ESWITCH */
+diff --git a/drivers/net/ethernet/microchip/lan743x_main.c b/drivers/net/ethernet/microchip/lan743x_main.c
+index 23760b613d3ecf..e2d6bfb5d69334 100644
+--- a/drivers/net/ethernet/microchip/lan743x_main.c
++++ b/drivers/net/ethernet/microchip/lan743x_main.c
+@@ -1815,6 +1815,7 @@ static void lan743x_tx_frame_add_lso(struct lan743x_tx *tx,
+ if (nr_frags <= 0) {
+ tx->frame_data0 |= TX_DESC_DATA0_LS_;
+ tx->frame_data0 |= TX_DESC_DATA0_IOC_;
++ tx->frame_last = tx->frame_first;
+ }
+ tx_descriptor = &tx->ring_cpu_ptr[tx->frame_tail];
+ tx_descriptor->data0 = cpu_to_le32(tx->frame_data0);
+@@ -1884,6 +1885,7 @@ static int lan743x_tx_frame_add_fragment(struct lan743x_tx *tx,
+ tx->frame_first = 0;
+ tx->frame_data0 = 0;
+ tx->frame_tail = 0;
++ tx->frame_last = 0;
+ return -ENOMEM;
+ }
+
+@@ -1924,16 +1926,18 @@ static void lan743x_tx_frame_end(struct lan743x_tx *tx,
+ TX_DESC_DATA0_DTYPE_DATA_) {
+ tx->frame_data0 |= TX_DESC_DATA0_LS_;
+ tx->frame_data0 |= TX_DESC_DATA0_IOC_;
++ tx->frame_last = tx->frame_tail;
+ }
+
+- tx_descriptor = &tx->ring_cpu_ptr[tx->frame_tail];
+- buffer_info = &tx->buffer_info[tx->frame_tail];
++ tx_descriptor = &tx->ring_cpu_ptr[tx->frame_last];
++ buffer_info = &tx->buffer_info[tx->frame_last];
+ buffer_info->skb = skb;
+ if (time_stamp)
+ buffer_info->flags |= TX_BUFFER_INFO_FLAG_TIMESTAMP_REQUESTED;
+ if (ignore_sync)
+ buffer_info->flags |= TX_BUFFER_INFO_FLAG_IGNORE_SYNC;
+
++ tx_descriptor = &tx->ring_cpu_ptr[tx->frame_tail];
+ tx_descriptor->data0 = cpu_to_le32(tx->frame_data0);
+ tx->frame_tail = lan743x_tx_next_index(tx, tx->frame_tail);
+ tx->last_tail = tx->frame_tail;
+diff --git a/drivers/net/ethernet/microchip/lan743x_main.h b/drivers/net/ethernet/microchip/lan743x_main.h
+index 7f73d66854bee4..db5fc73e41cca5 100644
+--- a/drivers/net/ethernet/microchip/lan743x_main.h
++++ b/drivers/net/ethernet/microchip/lan743x_main.h
+@@ -980,6 +980,7 @@ struct lan743x_tx {
+ u32 frame_first;
+ u32 frame_data0;
+ u32 frame_tail;
++ u32 frame_last;
+
+ struct lan743x_tx_buffer_info *buffer_info;
+
+diff --git a/drivers/net/ethernet/mscc/ocelot.c b/drivers/net/ethernet/mscc/ocelot.c
+index ef93df52088710..08bee56aea35f3 100644
+--- a/drivers/net/ethernet/mscc/ocelot.c
++++ b/drivers/net/ethernet/mscc/ocelot.c
+@@ -830,6 +830,7 @@ EXPORT_SYMBOL(ocelot_vlan_prepare);
+ int ocelot_vlan_add(struct ocelot *ocelot, int port, u16 vid, bool pvid,
+ bool untagged)
+ {
++ struct ocelot_port *ocelot_port = ocelot->ports[port];
+ int err;
+
+ /* Ignore VID 0 added to our RX filter by the 8021q module, since
+@@ -849,6 +850,11 @@ int ocelot_vlan_add(struct ocelot *ocelot, int port, u16 vid, bool pvid,
+ ocelot_bridge_vlan_find(ocelot, vid));
+ if (err)
+ return err;
++ } else if (ocelot_port->pvid_vlan &&
++ ocelot_bridge_vlan_find(ocelot, vid) == ocelot_port->pvid_vlan) {
++ err = ocelot_port_set_pvid(ocelot, port, NULL);
++ if (err)
++ return err;
+ }
+
+ /* Untagged egress vlan clasification */
+diff --git a/drivers/net/ethernet/realtek/rtase/rtase_main.c b/drivers/net/ethernet/realtek/rtase/rtase_main.c
+index 2aacc1996796db..55b8d36661530c 100644
+--- a/drivers/net/ethernet/realtek/rtase/rtase_main.c
++++ b/drivers/net/ethernet/realtek/rtase/rtase_main.c
+@@ -1925,8 +1925,8 @@ static u16 rtase_calc_time_mitigation(u32 time_us)
+
+ time_us = min_t(int, time_us, RTASE_MITI_MAX_TIME);
+
+- msb = fls(time_us);
+- if (msb >= RTASE_MITI_COUNT_BIT_NUM) {
++ if (time_us > RTASE_MITI_TIME_COUNT_MASK) {
++ msb = fls(time_us);
+ time_unit = msb - RTASE_MITI_COUNT_BIT_NUM;
+ time_count = time_us >> (msb - RTASE_MITI_COUNT_BIT_NUM);
+ } else {
+diff --git a/drivers/net/ethernet/vertexcom/mse102x.c b/drivers/net/ethernet/vertexcom/mse102x.c
+index 89dc4c401a8de4..e4d993f3137407 100644
+--- a/drivers/net/ethernet/vertexcom/mse102x.c
++++ b/drivers/net/ethernet/vertexcom/mse102x.c
+@@ -6,6 +6,7 @@
+
+ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
++#include <linux/if_vlan.h>
+ #include <linux/interrupt.h>
+ #include <linux/module.h>
+ #include <linux/kernel.h>
+@@ -33,7 +34,7 @@
+ #define CMD_CTR (0x2 << CMD_SHIFT)
+
+ #define CMD_MASK GENMASK(15, CMD_SHIFT)
+-#define LEN_MASK GENMASK(CMD_SHIFT - 1, 0)
++#define LEN_MASK GENMASK(CMD_SHIFT - 2, 0)
+
+ #define DET_CMD_LEN 4
+ #define DET_SOF_LEN 2
+@@ -262,7 +263,7 @@ static int mse102x_tx_frame_spi(struct mse102x_net *mse, struct sk_buff *txp,
+ }
+
+ static int mse102x_rx_frame_spi(struct mse102x_net *mse, u8 *buff,
+- unsigned int frame_len)
++ unsigned int frame_len, bool drop)
+ {
+ struct mse102x_net_spi *mses = to_mse102x_spi(mse);
+ struct spi_transfer *xfer = &mses->spi_xfer;
+@@ -280,6 +281,9 @@ static int mse102x_rx_frame_spi(struct mse102x_net *mse, u8 *buff,
+ netdev_err(mse->ndev, "%s: spi_sync() failed: %d\n",
+ __func__, ret);
+ mse->stats.xfer_err++;
++ } else if (drop) {
++ netdev_dbg(mse->ndev, "%s: Drop frame\n", __func__);
++ ret = -EINVAL;
+ } else if (*sof != cpu_to_be16(DET_SOF)) {
+ netdev_dbg(mse->ndev, "%s: SPI start of frame is invalid (0x%04x)\n",
+ __func__, *sof);
+@@ -307,6 +311,7 @@ static void mse102x_rx_pkt_spi(struct mse102x_net *mse)
+ struct sk_buff *skb;
+ unsigned int rxalign;
+ unsigned int rxlen;
++ bool drop = false;
+ __be16 rx = 0;
+ u16 cmd_resp;
+ u8 *rxpkt;
+@@ -329,7 +334,8 @@ static void mse102x_rx_pkt_spi(struct mse102x_net *mse)
+ net_dbg_ratelimited("%s: Unexpected response (0x%04x)\n",
+ __func__, cmd_resp);
+ mse->stats.invalid_rts++;
+- return;
++ drop = true;
++ goto drop;
+ }
+
+ net_dbg_ratelimited("%s: Unexpected response to first CMD\n",
+@@ -337,12 +343,20 @@ static void mse102x_rx_pkt_spi(struct mse102x_net *mse)
+ }
+
+ rxlen = cmd_resp & LEN_MASK;
+- if (!rxlen) {
+- net_dbg_ratelimited("%s: No frame length defined\n", __func__);
++ if (rxlen < ETH_ZLEN || rxlen > VLAN_ETH_FRAME_LEN) {
++ net_dbg_ratelimited("%s: Invalid frame length: %d\n", __func__,
++ rxlen);
+ mse->stats.invalid_len++;
+- return;
++ drop = true;
+ }
+
++ /* In case of a invalid CMD_RTS, the frame must be consumed anyway.
++ * So assume the maximum possible frame length.
++ */
++drop:
++ if (drop)
++ rxlen = VLAN_ETH_FRAME_LEN;
++
+ rxalign = ALIGN(rxlen + DET_SOF_LEN + DET_DFT_LEN, 4);
+ skb = netdev_alloc_skb_ip_align(mse->ndev, rxalign);
+ if (!skb)
+@@ -353,7 +367,7 @@ static void mse102x_rx_pkt_spi(struct mse102x_net *mse)
+ * They are copied, but ignored.
+ */
+ rxpkt = skb_put(skb, rxlen) - DET_SOF_LEN;
+- if (mse102x_rx_frame_spi(mse, rxpkt, rxlen)) {
++ if (mse102x_rx_frame_spi(mse, rxpkt, rxlen, drop)) {
+ mse->ndev->stats.rx_errors++;
+ dev_kfree_skb(skb);
+ return;
+@@ -509,6 +523,7 @@ static irqreturn_t mse102x_irq(int irq, void *_mse)
+ static int mse102x_net_open(struct net_device *ndev)
+ {
+ struct mse102x_net *mse = netdev_priv(ndev);
++ struct mse102x_net_spi *mses = to_mse102x_spi(mse);
+ int ret;
+
+ ret = request_threaded_irq(ndev->irq, NULL, mse102x_irq, IRQF_ONESHOT,
+@@ -524,6 +539,13 @@ static int mse102x_net_open(struct net_device *ndev)
+
+ netif_carrier_on(ndev);
+
++ /* The SPI interrupt can stuck in case of pending packet(s).
++ * So poll for possible packet(s) to re-arm the interrupt.
++ */
++ mutex_lock(&mses->lock);
++ mse102x_rx_pkt_spi(mse);
++ mutex_unlock(&mses->lock);
++
+ netif_dbg(mse, ifup, ndev, "network device up\n");
+
+ return 0;
+diff --git a/drivers/net/mdio/mdio-mux-meson-gxl.c b/drivers/net/mdio/mdio-mux-meson-gxl.c
+index 00c66240136b10..3dd12a8c8b03e9 100644
+--- a/drivers/net/mdio/mdio-mux-meson-gxl.c
++++ b/drivers/net/mdio/mdio-mux-meson-gxl.c
+@@ -17,6 +17,7 @@
+ #define REG2_LEDACT GENMASK(23, 22)
+ #define REG2_LEDLINK GENMASK(25, 24)
+ #define REG2_DIV4SEL BIT(27)
++#define REG2_REVERSED BIT(28)
+ #define REG2_ADCBYPASS BIT(30)
+ #define REG2_CLKINSEL BIT(31)
+ #define ETH_REG3 0x4
+@@ -65,7 +66,7 @@ static void gxl_enable_internal_mdio(struct gxl_mdio_mux *priv)
+ * The only constraint is that it must match the one in
+ * drivers/net/phy/meson-gxl.c to properly match the PHY.
+ */
+- writel(FIELD_PREP(REG2_PHYID, EPHY_GXL_ID),
++ writel(REG2_REVERSED | FIELD_PREP(REG2_PHYID, EPHY_GXL_ID),
+ priv->regs + ETH_REG2);
+
+ /* Enable the internal phy */
+diff --git a/drivers/net/usb/rndis_host.c b/drivers/net/usb/rndis_host.c
+index bb0bf141587274..7b3739b29c8f72 100644
+--- a/drivers/net/usb/rndis_host.c
++++ b/drivers/net/usb/rndis_host.c
+@@ -630,16 +630,6 @@ static const struct driver_info zte_rndis_info = {
+ .tx_fixup = rndis_tx_fixup,
+ };
+
+-static const struct driver_info wwan_rndis_info = {
+- .description = "Mobile Broadband RNDIS device",
+- .flags = FLAG_WWAN | FLAG_POINTTOPOINT | FLAG_FRAMING_RN | FLAG_NO_SETINT,
+- .bind = rndis_bind,
+- .unbind = rndis_unbind,
+- .status = rndis_status,
+- .rx_fixup = rndis_rx_fixup,
+- .tx_fixup = rndis_tx_fixup,
+-};
+-
+ /*-------------------------------------------------------------------------*/
+
+ static const struct usb_device_id products [] = {
+@@ -676,11 +666,9 @@ static const struct usb_device_id products [] = {
+ USB_INTERFACE_INFO(USB_CLASS_WIRELESS_CONTROLLER, 1, 3),
+ .driver_info = (unsigned long) &rndis_info,
+ }, {
+- /* Mobile Broadband Modem, seen in Novatel Verizon USB730L and
+- * Telit FN990A (RNDIS)
+- */
++ /* Novatel Verizon USB730L */
+ USB_INTERFACE_INFO(USB_CLASS_MISC, 4, 1),
+- .driver_info = (unsigned long)&wwan_rndis_info,
++ .driver_info = (unsigned long) &rndis_info,
+ },
+ { }, // END
+ };
+diff --git a/drivers/net/vxlan/vxlan_vnifilter.c b/drivers/net/vxlan/vxlan_vnifilter.c
+index 6e6e9f05509ab0..06d19e90eadb59 100644
+--- a/drivers/net/vxlan/vxlan_vnifilter.c
++++ b/drivers/net/vxlan/vxlan_vnifilter.c
+@@ -627,7 +627,11 @@ static void vxlan_vni_delete_group(struct vxlan_dev *vxlan,
+ * default dst remote_ip previously added for this vni
+ */
+ if (!vxlan_addr_any(&vninode->remote_ip) ||
+- !vxlan_addr_any(&dst->remote_ip))
++ !vxlan_addr_any(&dst->remote_ip)) {
++ u32 hash_index = fdb_head_index(vxlan, all_zeros_mac,
++ vninode->vni);
++
++ spin_lock_bh(&vxlan->hash_lock[hash_index]);
+ __vxlan_fdb_delete(vxlan, all_zeros_mac,
+ (vxlan_addr_any(&vninode->remote_ip) ?
+ dst->remote_ip : vninode->remote_ip),
+@@ -635,6 +639,8 @@ static void vxlan_vni_delete_group(struct vxlan_dev *vxlan,
+ vninode->vni, vninode->vni,
+ dst->remote_ifindex,
+ true);
++ spin_unlock_bh(&vxlan->hash_lock[hash_index]);
++ }
+
+ if (vxlan->dev->flags & IFF_UP) {
+ if (vxlan_addr_multicast(&vninode->remote_ip) &&
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/usb.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/usb.c
+index 2821c27f317ee0..d06c724f63d9c6 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/usb.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/usb.c
+@@ -896,14 +896,16 @@ brcmf_usb_dl_writeimage(struct brcmf_usbdev_info *devinfo, u8 *fw, int fwlen)
+ }
+
+ /* 1) Prepare USB boot loader for runtime image */
+- brcmf_usb_dl_cmd(devinfo, DL_START, &state, sizeof(state));
++ err = brcmf_usb_dl_cmd(devinfo, DL_START, &state, sizeof(state));
++ if (err)
++ goto fail;
+
+ rdlstate = le32_to_cpu(state.state);
+ rdlbytes = le32_to_cpu(state.bytes);
+
+ /* 2) Check we are in the Waiting state */
+ if (rdlstate != DL_WAITING) {
+- brcmf_err("Failed to DL_START\n");
++ brcmf_err("Invalid DL state: %u\n", rdlstate);
+ err = -EINVAL;
+ goto fail;
+ }
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-csr.h b/drivers/net/wireless/intel/iwlwifi/iwl-csr.h
+index be9e464c9b7b08..3ff493e920d284 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-csr.h
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-csr.h
+@@ -148,6 +148,7 @@
+ * during a error FW error.
+ */
+ #define CSR_FUNC_SCRATCH_INIT_VALUE (0x01010101)
++#define CSR_FUNC_SCRATCH_POWER_OFF_MASK 0xFFFF
+
+ /* Bits for CSR_HW_IF_CONFIG_REG */
+ #define CSR_HW_IF_CONFIG_REG_MSK_MAC_STEP_DASH (0x0000000F)
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-trans.c b/drivers/net/wireless/intel/iwlwifi/iwl-trans.c
+index 47854a36413e17..ced8261c725f8c 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-trans.c
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-trans.c
+@@ -21,6 +21,7 @@ struct iwl_trans_dev_restart_data {
+ struct list_head list;
+ unsigned int restart_count;
+ time64_t last_error;
++ bool backoff;
+ char name[];
+ };
+
+@@ -125,13 +126,20 @@ iwl_trans_determine_restart_mode(struct iwl_trans *trans)
+ if (!data)
+ return at_least;
+
+- if (ktime_get_boottime_seconds() - data->last_error >=
++ if (!data->backoff &&
++ ktime_get_boottime_seconds() - data->last_error >=
+ IWL_TRANS_RESET_OK_TIME)
+ data->restart_count = 0;
+
+ index = data->restart_count;
+- if (index >= ARRAY_SIZE(escalation_list))
++ if (index >= ARRAY_SIZE(escalation_list)) {
+ index = ARRAY_SIZE(escalation_list) - 1;
++ if (!data->backoff) {
++ data->backoff = true;
++ return IWL_RESET_MODE_BACKOFF;
++ }
++ data->backoff = false;
++ }
+
+ return max(at_least, escalation_list[index]);
+ }
+@@ -140,7 +148,8 @@ iwl_trans_determine_restart_mode(struct iwl_trans *trans)
+
+ static void iwl_trans_restart_wk(struct work_struct *wk)
+ {
+- struct iwl_trans *trans = container_of(wk, typeof(*trans), restart.wk);
++ struct iwl_trans *trans = container_of(wk, typeof(*trans),
++ restart.wk.work);
+ struct iwl_trans_reprobe *reprobe;
+ enum iwl_reset_mode mode;
+
+@@ -168,6 +177,12 @@ static void iwl_trans_restart_wk(struct work_struct *wk)
+ return;
+
+ mode = iwl_trans_determine_restart_mode(trans);
++ if (mode == IWL_RESET_MODE_BACKOFF) {
++ IWL_ERR(trans, "Too many device errors - delay next reset\n");
++ queue_delayed_work(system_unbound_wq, &trans->restart.wk,
++ IWL_TRANS_RESET_DELAY);
++ return;
++ }
+
+ iwl_trans_inc_restart_count(trans->dev);
+
+@@ -227,7 +242,7 @@ struct iwl_trans *iwl_trans_alloc(unsigned int priv_size,
+ trans->dev = dev;
+ trans->num_rx_queues = 1;
+
+- INIT_WORK(&trans->restart.wk, iwl_trans_restart_wk);
++ INIT_DELAYED_WORK(&trans->restart.wk, iwl_trans_restart_wk);
+
+ return trans;
+ }
+@@ -271,7 +286,7 @@ int iwl_trans_init(struct iwl_trans *trans)
+
+ void iwl_trans_free(struct iwl_trans *trans)
+ {
+- cancel_work_sync(&trans->restart.wk);
++ cancel_delayed_work_sync(&trans->restart.wk);
+ kmem_cache_destroy(trans->dev_cmd_pool);
+ }
+
+@@ -403,7 +418,7 @@ void iwl_trans_op_mode_leave(struct iwl_trans *trans)
+
+ iwl_trans_pcie_op_mode_leave(trans);
+
+- cancel_work_sync(&trans->restart.wk);
++ cancel_delayed_work_sync(&trans->restart.wk);
+
+ trans->op_mode = NULL;
+
+@@ -540,7 +555,6 @@ void __releases(nic_access)
+ iwl_trans_release_nic_access(struct iwl_trans *trans)
+ {
+ iwl_trans_pcie_release_nic_access(trans);
+- __release(nic_access);
+ }
+ IWL_EXPORT_SYMBOL(iwl_trans_release_nic_access);
+
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-trans.h b/drivers/net/wireless/intel/iwlwifi/iwl-trans.h
+index f6234065dbdde0..9c64e1fd4c096d 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-trans.h
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-trans.h
+@@ -958,7 +958,7 @@ struct iwl_trans {
+ struct iwl_dma_ptr invalid_tx_cmd;
+
+ struct {
+- struct work_struct wk;
++ struct delayed_work wk;
+ struct iwl_fw_error_dump_mode mode;
+ bool during_reset;
+ } restart;
+@@ -1159,7 +1159,7 @@ static inline void iwl_trans_schedule_reset(struct iwl_trans *trans,
+ */
+ trans->restart.during_reset = test_bit(STATUS_IN_SW_RESET,
+ &trans->status);
+- queue_work(system_unbound_wq, &trans->restart.wk);
++ queue_delayed_work(system_unbound_wq, &trans->restart.wk, 0);
+ }
+
+ static inline void iwl_trans_fw_error(struct iwl_trans *trans,
+@@ -1258,6 +1258,9 @@ enum iwl_reset_mode {
+ IWL_RESET_MODE_RESCAN,
+ IWL_RESET_MODE_FUNC_RESET,
+ IWL_RESET_MODE_PROD_RESET,
++
++ /* keep last - special backoff value */
++ IWL_RESET_MODE_BACKOFF,
+ };
+
+ void iwl_trans_pcie_reset(struct iwl_trans *trans, enum iwl_reset_mode mode);
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+index e0b657b2f74b06..d4c1bc20971fba 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+@@ -1702,11 +1702,27 @@ static int _iwl_pci_resume(struct device *device, bool restore)
+ * Scratch value was altered, this means the device was powered off, we
+ * need to reset it completely.
+ * Note: MAC (bits 0:7) will be cleared upon suspend even with wowlan,
+- * so assume that any bits there mean that the device is usable.
++ * but not bits [15:8]. So if we have bits set in lower word, assume
++ * the device is alive.
++ * For older devices, just try silently to grab the NIC.
+ */
+- if (trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_BZ &&
+- !iwl_read32(trans, CSR_FUNC_SCRATCH))
+- device_was_powered_off = true;
++ if (trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_BZ) {
++ if (!(iwl_read32(trans, CSR_FUNC_SCRATCH) &
++ CSR_FUNC_SCRATCH_POWER_OFF_MASK))
++ device_was_powered_off = true;
++ } else {
++ /*
++ * bh are re-enabled by iwl_trans_pcie_release_nic_access,
++ * so re-enable them if _iwl_trans_pcie_grab_nic_access fails.
++ */
++ local_bh_disable();
++ if (_iwl_trans_pcie_grab_nic_access(trans, true)) {
++ iwl_trans_pcie_release_nic_access(trans);
++ } else {
++ device_was_powered_off = true;
++ local_bh_enable();
++ }
++ }
+
+ if (restore || device_was_powered_off) {
+ trans->state = IWL_TRANS_NO_FW;
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/internal.h b/drivers/net/wireless/intel/iwlwifi/pcie/internal.h
+index 45460f93d24add..114a9195ad7f74 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/internal.h
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/internal.h
+@@ -558,10 +558,10 @@ void iwl_trans_pcie_free(struct iwl_trans *trans);
+ void iwl_trans_pcie_free_pnvm_dram_regions(struct iwl_dram_regions *dram_regions,
+ struct device *dev);
+
+-bool __iwl_trans_pcie_grab_nic_access(struct iwl_trans *trans);
+-#define _iwl_trans_pcie_grab_nic_access(trans) \
++bool __iwl_trans_pcie_grab_nic_access(struct iwl_trans *trans, bool silent);
++#define _iwl_trans_pcie_grab_nic_access(trans, silent) \
+ __cond_lock(nic_access_nobh, \
+- likely(__iwl_trans_pcie_grab_nic_access(trans)))
++ likely(__iwl_trans_pcie_grab_nic_access(trans, silent)))
+
+ void iwl_trans_pcie_check_product_reset_status(struct pci_dev *pdev);
+ void iwl_trans_pcie_check_product_reset_mode(struct pci_dev *pdev);
+@@ -1105,7 +1105,8 @@ void iwl_trans_pcie_set_bits_mask(struct iwl_trans *trans, u32 reg,
+ int iwl_trans_pcie_read_config32(struct iwl_trans *trans, u32 ofs,
+ u32 *val);
+ bool iwl_trans_pcie_grab_nic_access(struct iwl_trans *trans);
+-void iwl_trans_pcie_release_nic_access(struct iwl_trans *trans);
++void __releases(nic_access_nobh)
++iwl_trans_pcie_release_nic_access(struct iwl_trans *trans);
+
+ /* transport gen 1 exported functions */
+ void iwl_trans_pcie_fw_alive(struct iwl_trans *trans, u32 scd_addr);
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
+index c917ed4c19bcc3..102a6123bba0e4 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
+@@ -2351,7 +2351,8 @@ void iwl_trans_pcie_reset(struct iwl_trans *trans, enum iwl_reset_mode mode)
+ struct iwl_trans_pcie_removal *removal;
+ char _msg = 0, *msg = &_msg;
+
+- if (WARN_ON(mode < IWL_RESET_MODE_REMOVE_ONLY))
++ if (WARN_ON(mode < IWL_RESET_MODE_REMOVE_ONLY ||
++ mode == IWL_RESET_MODE_BACKOFF))
+ return;
+
+ if (test_bit(STATUS_TRANS_DEAD, &trans->status))
+@@ -2405,7 +2406,7 @@ EXPORT_SYMBOL(iwl_trans_pcie_reset);
+ * This version doesn't disable BHs but rather assumes they're
+ * already disabled.
+ */
+-bool __iwl_trans_pcie_grab_nic_access(struct iwl_trans *trans)
++bool __iwl_trans_pcie_grab_nic_access(struct iwl_trans *trans, bool silent)
+ {
+ int ret;
+ struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
+@@ -2457,6 +2458,11 @@ bool __iwl_trans_pcie_grab_nic_access(struct iwl_trans *trans)
+ if (unlikely(ret < 0)) {
+ u32 cntrl = iwl_read32(trans, CSR_GP_CNTRL);
+
++ if (silent) {
++ spin_unlock(&trans_pcie->reg_lock);
++ return false;
++ }
++
+ WARN_ONCE(1,
+ "Timeout waiting for hardware access (CSR_GP_CNTRL 0x%08x)\n",
+ cntrl);
+@@ -2488,7 +2494,7 @@ bool iwl_trans_pcie_grab_nic_access(struct iwl_trans *trans)
+ bool ret;
+
+ local_bh_disable();
+- ret = __iwl_trans_pcie_grab_nic_access(trans);
++ ret = __iwl_trans_pcie_grab_nic_access(trans, false);
+ if (ret) {
+ /* keep BHs disabled until iwl_trans_pcie_release_nic_access */
+ return ret;
+@@ -2497,7 +2503,8 @@ bool iwl_trans_pcie_grab_nic_access(struct iwl_trans *trans)
+ return false;
+ }
+
+-void iwl_trans_pcie_release_nic_access(struct iwl_trans *trans)
++void __releases(nic_access_nobh)
++iwl_trans_pcie_release_nic_access(struct iwl_trans *trans)
+ {
+ struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
+
+@@ -2524,6 +2531,7 @@ void iwl_trans_pcie_release_nic_access(struct iwl_trans *trans)
+ * scheduled on different CPUs (after we drop reg_lock).
+ */
+ out:
++ __release(nic_access_nobh);
+ spin_unlock_bh(&trans_pcie->reg_lock);
+ }
+
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/tx.c b/drivers/net/wireless/intel/iwlwifi/pcie/tx.c
+index 7c1dd5cc084ac1..83c6fcafcf1a4c 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/tx.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/tx.c
+@@ -1021,7 +1021,7 @@ static int iwl_pcie_set_cmd_in_flight(struct iwl_trans *trans,
+ * returned. This needs to be done only on NICs that have
+ * apmg_wake_up_wa set (see above.)
+ */
+- if (!_iwl_trans_pcie_grab_nic_access(trans))
++ if (!_iwl_trans_pcie_grab_nic_access(trans, false))
+ return -EIO;
+
+ /*
+diff --git a/drivers/net/wireless/purelifi/plfxlc/mac.c b/drivers/net/wireless/purelifi/plfxlc/mac.c
+index eae93efa615044..82d1bf7edba20d 100644
+--- a/drivers/net/wireless/purelifi/plfxlc/mac.c
++++ b/drivers/net/wireless/purelifi/plfxlc/mac.c
+@@ -102,7 +102,6 @@ int plfxlc_mac_init_hw(struct ieee80211_hw *hw)
+ void plfxlc_mac_release(struct plfxlc_mac *mac)
+ {
+ plfxlc_chip_release(&mac->chip);
+- lockdep_assert_held(&mac->lock);
+ }
+
+ int plfxlc_op_start(struct ieee80211_hw *hw)
+diff --git a/drivers/nvme/host/Kconfig b/drivers/nvme/host/Kconfig
+index 486afe59818454..09ed1f61c9a85a 100644
+--- a/drivers/nvme/host/Kconfig
++++ b/drivers/nvme/host/Kconfig
+@@ -97,6 +97,7 @@ config NVME_TCP_TLS
+ depends on NVME_TCP
+ select NET_HANDSHAKE
+ select KEYS
++ select TLS
+ help
+ Enables TLS encryption for NVMe TCP using the netlink handshake API.
+
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 1dc12784efafc6..d49b69565d04cc 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -3578,7 +3578,7 @@ static pci_ers_result_t nvme_slot_reset(struct pci_dev *pdev)
+
+ dev_info(dev->ctrl.device, "restart after slot reset\n");
+ pci_restore_state(pdev);
+- if (!nvme_try_sched_reset(&dev->ctrl))
++ if (nvme_try_sched_reset(&dev->ctrl))
+ nvme_unquiesce_io_queues(&dev->ctrl);
+ return PCI_ERS_RESULT_RECOVERED;
+ }
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index 327f3f2f5399c3..d991baa82a1c27 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -1944,7 +1944,7 @@ static void __nvme_tcp_stop_queue(struct nvme_tcp_queue *queue)
+ cancel_work_sync(&queue->io_work);
+ }
+
+-static void nvme_tcp_stop_queue(struct nvme_ctrl *nctrl, int qid)
++static void nvme_tcp_stop_queue_nowait(struct nvme_ctrl *nctrl, int qid)
+ {
+ struct nvme_tcp_ctrl *ctrl = to_tcp_ctrl(nctrl);
+ struct nvme_tcp_queue *queue = &ctrl->queues[qid];
+@@ -1963,6 +1963,31 @@ static void nvme_tcp_stop_queue(struct nvme_ctrl *nctrl, int qid)
+ mutex_unlock(&queue->queue_lock);
+ }
+
++static void nvme_tcp_wait_queue(struct nvme_ctrl *nctrl, int qid)
++{
++ struct nvme_tcp_ctrl *ctrl = to_tcp_ctrl(nctrl);
++ struct nvme_tcp_queue *queue = &ctrl->queues[qid];
++ int timeout = 100;
++
++ while (timeout > 0) {
++ if (!test_bit(NVME_TCP_Q_ALLOCATED, &queue->flags) ||
++ !sk_wmem_alloc_get(queue->sock->sk))
++ return;
++ msleep(2);
++ timeout -= 2;
++ }
++ dev_warn(nctrl->device,
++ "qid %d: timeout draining sock wmem allocation expired\n",
++ qid);
++}
++
++static void nvme_tcp_stop_queue(struct nvme_ctrl *nctrl, int qid)
++{
++ nvme_tcp_stop_queue_nowait(nctrl, qid);
++ nvme_tcp_wait_queue(nctrl, qid);
++}
++
++
+ static void nvme_tcp_setup_sock_ops(struct nvme_tcp_queue *queue)
+ {
+ write_lock_bh(&queue->sock->sk->sk_callback_lock);
+@@ -2030,7 +2055,9 @@ static void nvme_tcp_stop_io_queues(struct nvme_ctrl *ctrl)
+ int i;
+
+ for (i = 1; i < ctrl->queue_count; i++)
+- nvme_tcp_stop_queue(ctrl, i);
++ nvme_tcp_stop_queue_nowait(ctrl, i);
++ for (i = 1; i < ctrl->queue_count; i++)
++ nvme_tcp_wait_queue(ctrl, i);
+ }
+
+ static int nvme_tcp_start_io_queues(struct nvme_ctrl *ctrl,
+diff --git a/drivers/nvme/target/Kconfig b/drivers/nvme/target/Kconfig
+index fb7446d6d6829b..4c253b433bf78d 100644
+--- a/drivers/nvme/target/Kconfig
++++ b/drivers/nvme/target/Kconfig
+@@ -98,6 +98,7 @@ config NVME_TARGET_TCP_TLS
+ bool "NVMe over Fabrics TCP target TLS encryption support"
+ depends on NVME_TARGET_TCP
+ select NET_HANDSHAKE
++ select TLS
+ help
+ Enables TLS encryption for the NVMe TCP target using the netlink handshake API.
+
+diff --git a/drivers/pinctrl/freescale/pinctrl-imx.c b/drivers/pinctrl/freescale/pinctrl-imx.c
+index 842a1e6cbfc41a..18de3132854045 100644
+--- a/drivers/pinctrl/freescale/pinctrl-imx.c
++++ b/drivers/pinctrl/freescale/pinctrl-imx.c
+@@ -37,16 +37,16 @@ static inline const struct group_desc *imx_pinctrl_find_group_by_name(
+ struct pinctrl_dev *pctldev,
+ const char *name)
+ {
+- const struct group_desc *grp = NULL;
++ const struct group_desc *grp;
+ int i;
+
+ for (i = 0; i < pctldev->num_groups; i++) {
+ grp = pinctrl_generic_get_group(pctldev, i);
+ if (grp && !strcmp(grp->grp.name, name))
+- break;
++ return grp;
+ }
+
+- return grp;
++ return NULL;
+ }
+
+ static void imx_pin_dbg_show(struct pinctrl_dev *pctldev, struct seq_file *s,
+diff --git a/drivers/pinctrl/mediatek/pinctrl-airoha.c b/drivers/pinctrl/mediatek/pinctrl-airoha.c
+index 547a798b71c8ae..5d84a778683d05 100644
+--- a/drivers/pinctrl/mediatek/pinctrl-airoha.c
++++ b/drivers/pinctrl/mediatek/pinctrl-airoha.c
+@@ -6,6 +6,7 @@
+ */
+
+ #include <dt-bindings/pinctrl/mt65xx.h>
++#include <linux/bitfield.h>
+ #include <linux/bits.h>
+ #include <linux/cleanup.h>
+ #include <linux/gpio/driver.h>
+@@ -112,39 +113,19 @@
+ #define REG_LAN_LED1_MAPPING 0x0280
+
+ #define LAN4_LED_MAPPING_MASK GENMASK(18, 16)
+-#define LAN4_PHY4_LED_MAP BIT(18)
+-#define LAN4_PHY2_LED_MAP BIT(17)
+-#define LAN4_PHY1_LED_MAP BIT(16)
+-#define LAN4_PHY0_LED_MAP 0
+-#define LAN4_PHY3_LED_MAP GENMASK(17, 16)
++#define LAN4_PHY_LED_MAP(_n) FIELD_PREP_CONST(LAN4_LED_MAPPING_MASK, (_n))
+
+ #define LAN3_LED_MAPPING_MASK GENMASK(14, 12)
+-#define LAN3_PHY4_LED_MAP BIT(14)
+-#define LAN3_PHY2_LED_MAP BIT(13)
+-#define LAN3_PHY1_LED_MAP BIT(12)
+-#define LAN3_PHY0_LED_MAP 0
+-#define LAN3_PHY3_LED_MAP GENMASK(13, 12)
++#define LAN3_PHY_LED_MAP(_n) FIELD_PREP_CONST(LAN3_LED_MAPPING_MASK, (_n))
+
+ #define LAN2_LED_MAPPING_MASK GENMASK(10, 8)
+-#define LAN2_PHY4_LED_MAP BIT(12)
+-#define LAN2_PHY2_LED_MAP BIT(11)
+-#define LAN2_PHY1_LED_MAP BIT(10)
+-#define LAN2_PHY0_LED_MAP 0
+-#define LAN2_PHY3_LED_MAP GENMASK(11, 10)
++#define LAN2_PHY_LED_MAP(_n) FIELD_PREP_CONST(LAN2_LED_MAPPING_MASK, (_n))
+
+ #define LAN1_LED_MAPPING_MASK GENMASK(6, 4)
+-#define LAN1_PHY4_LED_MAP BIT(6)
+-#define LAN1_PHY2_LED_MAP BIT(5)
+-#define LAN1_PHY1_LED_MAP BIT(4)
+-#define LAN1_PHY0_LED_MAP 0
+-#define LAN1_PHY3_LED_MAP GENMASK(5, 4)
++#define LAN1_PHY_LED_MAP(_n) FIELD_PREP_CONST(LAN1_LED_MAPPING_MASK, (_n))
+
+ #define LAN0_LED_MAPPING_MASK GENMASK(2, 0)
+-#define LAN0_PHY4_LED_MAP BIT(3)
+-#define LAN0_PHY2_LED_MAP BIT(2)
+-#define LAN0_PHY1_LED_MAP BIT(1)
+-#define LAN0_PHY0_LED_MAP 0
+-#define LAN0_PHY3_LED_MAP GENMASK(2, 1)
++#define LAN0_PHY_LED_MAP(_n) FIELD_PREP_CONST(LAN0_LED_MAPPING_MASK, (_n))
+
+ /* CONF */
+ #define REG_I2C_SDA_E2 0x001c
+@@ -1476,8 +1457,8 @@ static const struct airoha_pinctrl_func_group phy1_led0_func_group[] = {
+ .regmap[1] = {
+ AIROHA_FUNC_MUX,
+ REG_LAN_LED0_MAPPING,
+- LAN1_LED_MAPPING_MASK,
+- LAN1_PHY1_LED_MAP
++ LAN0_LED_MAPPING_MASK,
++ LAN0_PHY_LED_MAP(0)
+ },
+ .regmap_size = 2,
+ }, {
+@@ -1491,8 +1472,8 @@ static const struct airoha_pinctrl_func_group phy1_led0_func_group[] = {
+ .regmap[1] = {
+ AIROHA_FUNC_MUX,
+ REG_LAN_LED0_MAPPING,
+- LAN2_LED_MAPPING_MASK,
+- LAN2_PHY1_LED_MAP
++ LAN1_LED_MAPPING_MASK,
++ LAN1_PHY_LED_MAP(0)
+ },
+ .regmap_size = 2,
+ }, {
+@@ -1506,8 +1487,8 @@ static const struct airoha_pinctrl_func_group phy1_led0_func_group[] = {
+ .regmap[1] = {
+ AIROHA_FUNC_MUX,
+ REG_LAN_LED0_MAPPING,
+- LAN3_LED_MAPPING_MASK,
+- LAN3_PHY1_LED_MAP
++ LAN2_LED_MAPPING_MASK,
++ LAN2_PHY_LED_MAP(0)
+ },
+ .regmap_size = 2,
+ }, {
+@@ -1521,8 +1502,8 @@ static const struct airoha_pinctrl_func_group phy1_led0_func_group[] = {
+ .regmap[1] = {
+ AIROHA_FUNC_MUX,
+ REG_LAN_LED0_MAPPING,
+- LAN4_LED_MAPPING_MASK,
+- LAN4_PHY1_LED_MAP
++ LAN3_LED_MAPPING_MASK,
++ LAN3_PHY_LED_MAP(0)
+ },
+ .regmap_size = 2,
+ },
+@@ -1540,8 +1521,8 @@ static const struct airoha_pinctrl_func_group phy2_led0_func_group[] = {
+ .regmap[1] = {
+ AIROHA_FUNC_MUX,
+ REG_LAN_LED0_MAPPING,
+- LAN1_LED_MAPPING_MASK,
+- LAN1_PHY2_LED_MAP
++ LAN0_LED_MAPPING_MASK,
++ LAN0_PHY_LED_MAP(1)
+ },
+ .regmap_size = 2,
+ }, {
+@@ -1555,8 +1536,8 @@ static const struct airoha_pinctrl_func_group phy2_led0_func_group[] = {
+ .regmap[1] = {
+ AIROHA_FUNC_MUX,
+ REG_LAN_LED0_MAPPING,
+- LAN2_LED_MAPPING_MASK,
+- LAN2_PHY2_LED_MAP
++ LAN1_LED_MAPPING_MASK,
++ LAN1_PHY_LED_MAP(1)
+ },
+ .regmap_size = 2,
+ }, {
+@@ -1570,8 +1551,8 @@ static const struct airoha_pinctrl_func_group phy2_led0_func_group[] = {
+ .regmap[1] = {
+ AIROHA_FUNC_MUX,
+ REG_LAN_LED0_MAPPING,
+- LAN3_LED_MAPPING_MASK,
+- LAN3_PHY2_LED_MAP
++ LAN2_LED_MAPPING_MASK,
++ LAN2_PHY_LED_MAP(1)
+ },
+ .regmap_size = 2,
+ }, {
+@@ -1585,8 +1566,8 @@ static const struct airoha_pinctrl_func_group phy2_led0_func_group[] = {
+ .regmap[1] = {
+ AIROHA_FUNC_MUX,
+ REG_LAN_LED0_MAPPING,
+- LAN4_LED_MAPPING_MASK,
+- LAN4_PHY2_LED_MAP
++ LAN3_LED_MAPPING_MASK,
++ LAN3_PHY_LED_MAP(1)
+ },
+ .regmap_size = 2,
+ },
+@@ -1604,8 +1585,8 @@ static const struct airoha_pinctrl_func_group phy3_led0_func_group[] = {
+ .regmap[1] = {
+ AIROHA_FUNC_MUX,
+ REG_LAN_LED0_MAPPING,
+- LAN1_LED_MAPPING_MASK,
+- LAN1_PHY3_LED_MAP
++ LAN0_LED_MAPPING_MASK,
++ LAN0_PHY_LED_MAP(2)
+ },
+ .regmap_size = 2,
+ }, {
+@@ -1619,8 +1600,8 @@ static const struct airoha_pinctrl_func_group phy3_led0_func_group[] = {
+ .regmap[1] = {
+ AIROHA_FUNC_MUX,
+ REG_LAN_LED0_MAPPING,
+- LAN2_LED_MAPPING_MASK,
+- LAN2_PHY3_LED_MAP
++ LAN1_LED_MAPPING_MASK,
++ LAN1_PHY_LED_MAP(2)
+ },
+ .regmap_size = 2,
+ }, {
+@@ -1634,8 +1615,8 @@ static const struct airoha_pinctrl_func_group phy3_led0_func_group[] = {
+ .regmap[1] = {
+ AIROHA_FUNC_MUX,
+ REG_LAN_LED0_MAPPING,
+- LAN3_LED_MAPPING_MASK,
+- LAN3_PHY3_LED_MAP
++ LAN2_LED_MAPPING_MASK,
++ LAN2_PHY_LED_MAP(2)
+ },
+ .regmap_size = 2,
+ }, {
+@@ -1649,8 +1630,8 @@ static const struct airoha_pinctrl_func_group phy3_led0_func_group[] = {
+ .regmap[1] = {
+ AIROHA_FUNC_MUX,
+ REG_LAN_LED0_MAPPING,
+- LAN4_LED_MAPPING_MASK,
+- LAN4_PHY3_LED_MAP
++ LAN3_LED_MAPPING_MASK,
++ LAN3_PHY_LED_MAP(2)
+ },
+ .regmap_size = 2,
+ },
+@@ -1668,8 +1649,8 @@ static const struct airoha_pinctrl_func_group phy4_led0_func_group[] = {
+ .regmap[1] = {
+ AIROHA_FUNC_MUX,
+ REG_LAN_LED0_MAPPING,
+- LAN1_LED_MAPPING_MASK,
+- LAN1_PHY4_LED_MAP
++ LAN0_LED_MAPPING_MASK,
++ LAN0_PHY_LED_MAP(3)
+ },
+ .regmap_size = 2,
+ }, {
+@@ -1683,8 +1664,8 @@ static const struct airoha_pinctrl_func_group phy4_led0_func_group[] = {
+ .regmap[1] = {
+ AIROHA_FUNC_MUX,
+ REG_LAN_LED0_MAPPING,
+- LAN2_LED_MAPPING_MASK,
+- LAN2_PHY4_LED_MAP
++ LAN1_LED_MAPPING_MASK,
++ LAN1_PHY_LED_MAP(3)
+ },
+ .regmap_size = 2,
+ }, {
+@@ -1698,8 +1679,8 @@ static const struct airoha_pinctrl_func_group phy4_led0_func_group[] = {
+ .regmap[1] = {
+ AIROHA_FUNC_MUX,
+ REG_LAN_LED0_MAPPING,
+- LAN3_LED_MAPPING_MASK,
+- LAN3_PHY4_LED_MAP
++ LAN2_LED_MAPPING_MASK,
++ LAN2_PHY_LED_MAP(3)
+ },
+ .regmap_size = 2,
+ }, {
+@@ -1713,8 +1694,8 @@ static const struct airoha_pinctrl_func_group phy4_led0_func_group[] = {
+ .regmap[1] = {
+ AIROHA_FUNC_MUX,
+ REG_LAN_LED0_MAPPING,
+- LAN4_LED_MAPPING_MASK,
+- LAN4_PHY4_LED_MAP
++ LAN3_LED_MAPPING_MASK,
++ LAN3_PHY_LED_MAP(3)
+ },
+ .regmap_size = 2,
+ },
+@@ -1732,8 +1713,8 @@ static const struct airoha_pinctrl_func_group phy1_led1_func_group[] = {
+ .regmap[1] = {
+ AIROHA_FUNC_MUX,
+ REG_LAN_LED1_MAPPING,
+- LAN1_LED_MAPPING_MASK,
+- LAN1_PHY1_LED_MAP
++ LAN0_LED_MAPPING_MASK,
++ LAN0_PHY_LED_MAP(0)
+ },
+ .regmap_size = 2,
+ }, {
+@@ -1747,8 +1728,8 @@ static const struct airoha_pinctrl_func_group phy1_led1_func_group[] = {
+ .regmap[1] = {
+ AIROHA_FUNC_MUX,
+ REG_LAN_LED1_MAPPING,
+- LAN2_LED_MAPPING_MASK,
+- LAN2_PHY1_LED_MAP
++ LAN1_LED_MAPPING_MASK,
++ LAN1_PHY_LED_MAP(0)
+ },
+ .regmap_size = 2,
+ }, {
+@@ -1762,8 +1743,8 @@ static const struct airoha_pinctrl_func_group phy1_led1_func_group[] = {
+ .regmap[1] = {
+ AIROHA_FUNC_MUX,
+ REG_LAN_LED1_MAPPING,
+- LAN3_LED_MAPPING_MASK,
+- LAN3_PHY1_LED_MAP
++ LAN2_LED_MAPPING_MASK,
++ LAN2_PHY_LED_MAP(0)
+ },
+ .regmap_size = 2,
+ }, {
+@@ -1777,8 +1758,8 @@ static const struct airoha_pinctrl_func_group phy1_led1_func_group[] = {
+ .regmap[1] = {
+ AIROHA_FUNC_MUX,
+ REG_LAN_LED1_MAPPING,
+- LAN4_LED_MAPPING_MASK,
+- LAN4_PHY1_LED_MAP
++ LAN3_LED_MAPPING_MASK,
++ LAN3_PHY_LED_MAP(0)
+ },
+ .regmap_size = 2,
+ },
+@@ -1796,8 +1777,8 @@ static const struct airoha_pinctrl_func_group phy2_led1_func_group[] = {
+ .regmap[1] = {
+ AIROHA_FUNC_MUX,
+ REG_LAN_LED1_MAPPING,
+- LAN1_LED_MAPPING_MASK,
+- LAN1_PHY2_LED_MAP
++ LAN0_LED_MAPPING_MASK,
++ LAN0_PHY_LED_MAP(1)
+ },
+ .regmap_size = 2,
+ }, {
+@@ -1811,8 +1792,8 @@ static const struct airoha_pinctrl_func_group phy2_led1_func_group[] = {
+ .regmap[1] = {
+ AIROHA_FUNC_MUX,
+ REG_LAN_LED1_MAPPING,
+- LAN2_LED_MAPPING_MASK,
+- LAN2_PHY2_LED_MAP
++ LAN1_LED_MAPPING_MASK,
++ LAN1_PHY_LED_MAP(1)
+ },
+ .regmap_size = 2,
+ }, {
+@@ -1826,8 +1807,8 @@ static const struct airoha_pinctrl_func_group phy2_led1_func_group[] = {
+ .regmap[1] = {
+ AIROHA_FUNC_MUX,
+ REG_LAN_LED1_MAPPING,
+- LAN3_LED_MAPPING_MASK,
+- LAN3_PHY2_LED_MAP
++ LAN2_LED_MAPPING_MASK,
++ LAN2_PHY_LED_MAP(1)
+ },
+ .regmap_size = 2,
+ }, {
+@@ -1841,8 +1822,8 @@ static const struct airoha_pinctrl_func_group phy2_led1_func_group[] = {
+ .regmap[1] = {
+ AIROHA_FUNC_MUX,
+ REG_LAN_LED1_MAPPING,
+- LAN4_LED_MAPPING_MASK,
+- LAN4_PHY2_LED_MAP
++ LAN3_LED_MAPPING_MASK,
++ LAN3_PHY_LED_MAP(1)
+ },
+ .regmap_size = 2,
+ },
+@@ -1860,8 +1841,8 @@ static const struct airoha_pinctrl_func_group phy3_led1_func_group[] = {
+ .regmap[1] = {
+ AIROHA_FUNC_MUX,
+ REG_LAN_LED1_MAPPING,
+- LAN1_LED_MAPPING_MASK,
+- LAN1_PHY3_LED_MAP
++ LAN0_LED_MAPPING_MASK,
++ LAN0_PHY_LED_MAP(2)
+ },
+ .regmap_size = 2,
+ }, {
+@@ -1875,8 +1856,8 @@ static const struct airoha_pinctrl_func_group phy3_led1_func_group[] = {
+ .regmap[1] = {
+ AIROHA_FUNC_MUX,
+ REG_LAN_LED1_MAPPING,
+- LAN2_LED_MAPPING_MASK,
+- LAN2_PHY3_LED_MAP
++ LAN1_LED_MAPPING_MASK,
++ LAN1_PHY_LED_MAP(2)
+ },
+ .regmap_size = 2,
+ }, {
+@@ -1890,8 +1871,8 @@ static const struct airoha_pinctrl_func_group phy3_led1_func_group[] = {
+ .regmap[1] = {
+ AIROHA_FUNC_MUX,
+ REG_LAN_LED1_MAPPING,
+- LAN3_LED_MAPPING_MASK,
+- LAN3_PHY3_LED_MAP
++ LAN2_LED_MAPPING_MASK,
++ LAN2_PHY_LED_MAP(2)
+ },
+ .regmap_size = 2,
+ }, {
+@@ -1905,8 +1886,8 @@ static const struct airoha_pinctrl_func_group phy3_led1_func_group[] = {
+ .regmap[1] = {
+ AIROHA_FUNC_MUX,
+ REG_LAN_LED1_MAPPING,
+- LAN4_LED_MAPPING_MASK,
+- LAN4_PHY3_LED_MAP
++ LAN3_LED_MAPPING_MASK,
++ LAN3_PHY_LED_MAP(2)
+ },
+ .regmap_size = 2,
+ },
+@@ -1924,8 +1905,8 @@ static const struct airoha_pinctrl_func_group phy4_led1_func_group[] = {
+ .regmap[1] = {
+ AIROHA_FUNC_MUX,
+ REG_LAN_LED1_MAPPING,
+- LAN1_LED_MAPPING_MASK,
+- LAN1_PHY4_LED_MAP
++ LAN0_LED_MAPPING_MASK,
++ LAN0_PHY_LED_MAP(3)
+ },
+ .regmap_size = 2,
+ }, {
+@@ -1939,8 +1920,8 @@ static const struct airoha_pinctrl_func_group phy4_led1_func_group[] = {
+ .regmap[1] = {
+ AIROHA_FUNC_MUX,
+ REG_LAN_LED1_MAPPING,
+- LAN2_LED_MAPPING_MASK,
+- LAN2_PHY4_LED_MAP
++ LAN1_LED_MAPPING_MASK,
++ LAN1_PHY_LED_MAP(3)
+ },
+ .regmap_size = 2,
+ }, {
+@@ -1954,8 +1935,8 @@ static const struct airoha_pinctrl_func_group phy4_led1_func_group[] = {
+ .regmap[1] = {
+ AIROHA_FUNC_MUX,
+ REG_LAN_LED1_MAPPING,
+- LAN3_LED_MAPPING_MASK,
+- LAN3_PHY4_LED_MAP
++ LAN2_LED_MAPPING_MASK,
++ LAN2_PHY_LED_MAP(3)
+ },
+ .regmap_size = 2,
+ }, {
+@@ -1969,8 +1950,8 @@ static const struct airoha_pinctrl_func_group phy4_led1_func_group[] = {
+ .regmap[1] = {
+ AIROHA_FUNC_MUX,
+ REG_LAN_LED1_MAPPING,
+- LAN4_LED_MAPPING_MASK,
+- LAN4_PHY4_LED_MAP
++ LAN3_LED_MAPPING_MASK,
++ LAN3_PHY_LED_MAP(3)
+ },
+ .regmap_size = 2,
+ },
+diff --git a/drivers/pinctrl/qcom/pinctrl-sm8750.c b/drivers/pinctrl/qcom/pinctrl-sm8750.c
+index 1af11cd95fb0e6..b94fb4ee0ec380 100644
+--- a/drivers/pinctrl/qcom/pinctrl-sm8750.c
++++ b/drivers/pinctrl/qcom/pinctrl-sm8750.c
+@@ -46,7 +46,9 @@
+ .out_bit = 1, \
+ .intr_enable_bit = 0, \
+ .intr_status_bit = 0, \
+- .intr_target_bit = 5, \
++ .intr_wakeup_present_bit = 6, \
++ .intr_wakeup_enable_bit = 7, \
++ .intr_target_bit = 8, \
+ .intr_target_kpss_val = 3, \
+ .intr_raw_status_bit = 4, \
+ .intr_polarity_bit = 1, \
+diff --git a/drivers/platform/x86/amd/pmc/pmc.c b/drivers/platform/x86/amd/pmc/pmc.c
+index e6124498b195f5..cfd1c37cf6b6f7 100644
+--- a/drivers/platform/x86/amd/pmc/pmc.c
++++ b/drivers/platform/x86/amd/pmc/pmc.c
+@@ -724,10 +724,9 @@ static void amd_pmc_s2idle_check(void)
+ struct smu_metrics table;
+ int rc;
+
+- /* CZN: Ensure that future s0i3 entry attempts at least 10ms passed */
+- if (pdev->cpu_id == AMD_CPU_ID_CZN && !get_metrics_table(pdev, &table) &&
+- table.s0i3_last_entry_status)
+- usleep_range(10000, 20000);
++ /* Avoid triggering OVP */
++ if (!get_metrics_table(pdev, &table) && table.s0i3_last_entry_status)
++ msleep(2500);
+
+ /* Dump the IdleMask before we add to the STB */
+ amd_pmc_idlemask_read(pdev, pdev->dev, NULL);
+diff --git a/drivers/platform/x86/dell/alienware-wmi.c b/drivers/platform/x86/dell/alienware-wmi.c
+index 1426ea8e4f1948..1a711d395d2da2 100644
+--- a/drivers/platform/x86/dell/alienware-wmi.c
++++ b/drivers/platform/x86/dell/alienware-wmi.c
+@@ -250,6 +250,15 @@ static const struct dmi_system_id alienware_quirks[] __initconst = {
+ },
+ .driver_data = &quirk_asm201,
+ },
++ {
++ .callback = dmi_matched,
++ .ident = "Alienware m15 R7",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Alienware"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "Alienware m15 R7"),
++ },
++ .driver_data = &quirk_x_series,
++ },
+ {
+ .callback = dmi_matched,
+ .ident = "Alienware m16 R1",
+diff --git a/drivers/platform/x86/intel/uncore-frequency/uncore-frequency.c b/drivers/platform/x86/intel/uncore-frequency/uncore-frequency.c
+index 40bbf8e45fa4bb..bdee5d00f30b80 100644
+--- a/drivers/platform/x86/intel/uncore-frequency/uncore-frequency.c
++++ b/drivers/platform/x86/intel/uncore-frequency/uncore-frequency.c
+@@ -146,15 +146,13 @@ static int uncore_event_cpu_online(unsigned int cpu)
+ {
+ struct uncore_data *data;
+ int target;
++ int ret;
+
+ /* Check if there is an online cpu in the package for uncore MSR */
+ target = cpumask_any_and(&uncore_cpu_mask, topology_die_cpumask(cpu));
+ if (target < nr_cpu_ids)
+ return 0;
+
+- /* Use this CPU on this die as a control CPU */
+- cpumask_set_cpu(cpu, &uncore_cpu_mask);
+-
+ data = uncore_get_instance(cpu);
+ if (!data)
+ return 0;
+@@ -163,7 +161,14 @@ static int uncore_event_cpu_online(unsigned int cpu)
+ data->die_id = topology_die_id(cpu);
+ data->domain_id = UNCORE_DOMAIN_ID_INVALID;
+
+- return uncore_freq_add_entry(data, cpu);
++ ret = uncore_freq_add_entry(data, cpu);
++ if (ret)
++ return ret;
++
++ /* Use this CPU on this die as a control CPU */
++ cpumask_set_cpu(cpu, &uncore_cpu_mask);
++
++ return 0;
+ }
+
+ static int uncore_event_cpu_offline(unsigned int cpu)
+diff --git a/drivers/ptp/ptp_ocp.c b/drivers/ptp/ptp_ocp.c
+index 4b7344e1816e49..605cce32a3d376 100644
+--- a/drivers/ptp/ptp_ocp.c
++++ b/drivers/ptp/ptp_ocp.c
+@@ -2578,12 +2578,60 @@ static const struct ocp_sma_op ocp_fb_sma_op = {
+ .set_output = ptp_ocp_sma_fb_set_output,
+ };
+
++static int
++ptp_ocp_sma_adva_set_output(struct ptp_ocp *bp, int sma_nr, u32 val)
++{
++ u32 reg, mask, shift;
++ unsigned long flags;
++ u32 __iomem *gpio;
++
++ gpio = sma_nr > 2 ? &bp->sma_map1->gpio2 : &bp->sma_map2->gpio2;
++ shift = sma_nr & 1 ? 0 : 16;
++
++ mask = 0xffff << (16 - shift);
++
++ spin_lock_irqsave(&bp->lock, flags);
++
++ reg = ioread32(gpio);
++ reg = (reg & mask) | (val << shift);
++
++ iowrite32(reg, gpio);
++
++ spin_unlock_irqrestore(&bp->lock, flags);
++
++ return 0;
++}
++
++static int
++ptp_ocp_sma_adva_set_inputs(struct ptp_ocp *bp, int sma_nr, u32 val)
++{
++ u32 reg, mask, shift;
++ unsigned long flags;
++ u32 __iomem *gpio;
++
++ gpio = sma_nr > 2 ? &bp->sma_map2->gpio1 : &bp->sma_map1->gpio1;
++ shift = sma_nr & 1 ? 0 : 16;
++
++ mask = 0xffff << (16 - shift);
++
++ spin_lock_irqsave(&bp->lock, flags);
++
++ reg = ioread32(gpio);
++ reg = (reg & mask) | (val << shift);
++
++ iowrite32(reg, gpio);
++
++ spin_unlock_irqrestore(&bp->lock, flags);
++
++ return 0;
++}
++
+ static const struct ocp_sma_op ocp_adva_sma_op = {
+ .tbl = { ptp_ocp_adva_sma_in, ptp_ocp_adva_sma_out },
+ .init = ptp_ocp_sma_fb_init,
+ .get = ptp_ocp_sma_fb_get,
+- .set_inputs = ptp_ocp_sma_fb_set_inputs,
+- .set_output = ptp_ocp_sma_fb_set_output,
++ .set_inputs = ptp_ocp_sma_adva_set_inputs,
++ .set_output = ptp_ocp_sma_adva_set_output,
+ };
+
+ static int
+diff --git a/drivers/spi/spi-mem.c b/drivers/spi/spi-mem.c
+index a9f0f47f4759b0..74b013c41601d8 100644
+--- a/drivers/spi/spi-mem.c
++++ b/drivers/spi/spi-mem.c
+@@ -585,7 +585,11 @@ u64 spi_mem_calc_op_duration(struct spi_mem_op *op)
+ ns_per_cycles = 1000000000 / op->max_freq;
+ ncycles += ((op->cmd.nbytes * 8) / op->cmd.buswidth) / (op->cmd.dtr ? 2 : 1);
+ ncycles += ((op->addr.nbytes * 8) / op->addr.buswidth) / (op->addr.dtr ? 2 : 1);
+- ncycles += ((op->dummy.nbytes * 8) / op->dummy.buswidth) / (op->dummy.dtr ? 2 : 1);
++
++ /* Dummy bytes are optional for some SPI flash memory operations */
++ if (op->dummy.nbytes)
++ ncycles += ((op->dummy.nbytes * 8) / op->dummy.buswidth) / (op->dummy.dtr ? 2 : 1);
++
+ ncycles += ((op->data.nbytes * 8) / op->data.buswidth) / (op->data.dtr ? 2 : 1);
+
+ return ncycles * ns_per_cycles;
+diff --git a/drivers/spi/spi-tegra114.c b/drivers/spi/spi-tegra114.c
+index 3822d7c8d8edb9..2a8bb798e95b95 100644
+--- a/drivers/spi/spi-tegra114.c
++++ b/drivers/spi/spi-tegra114.c
+@@ -728,9 +728,9 @@ static int tegra_spi_set_hw_cs_timing(struct spi_device *spi)
+ u32 inactive_cycles;
+ u8 cs_state;
+
+- if (setup->unit != SPI_DELAY_UNIT_SCK ||
+- hold->unit != SPI_DELAY_UNIT_SCK ||
+- inactive->unit != SPI_DELAY_UNIT_SCK) {
++ if ((setup->unit && setup->unit != SPI_DELAY_UNIT_SCK) ||
++ (hold->unit && hold->unit != SPI_DELAY_UNIT_SCK) ||
++ (inactive->unit && inactive->unit != SPI_DELAY_UNIT_SCK)) {
+ dev_err(&spi->dev,
+ "Invalid delay unit %d, should be SPI_DELAY_UNIT_SCK\n",
+ SPI_DELAY_UNIT_SCK);
+diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
+index 128e35a848b7b2..99e7e4a570f0e2 100644
+--- a/drivers/ufs/core/ufshcd.c
++++ b/drivers/ufs/core/ufshcd.c
+@@ -7201,8 +7201,6 @@ static int ufshcd_issue_devman_upiu_cmd(struct ufs_hba *hba,
+ err = -EINVAL;
+ }
+ }
+- ufshcd_add_query_upiu_trace(hba, err ? UFS_QUERY_ERR : UFS_QUERY_COMP,
+- (struct utp_upiu_req *)lrbp->ucd_rsp_ptr);
+
+ return err;
+ }
+diff --git a/fs/bcachefs/btree_update_interior.c b/fs/bcachefs/btree_update_interior.c
+index e4e7c804625e0e..e9be8b5571a40e 100644
+--- a/fs/bcachefs/btree_update_interior.c
++++ b/fs/bcachefs/btree_update_interior.c
+@@ -35,6 +35,8 @@ static const char * const bch2_btree_update_modes[] = {
+ NULL
+ };
+
++static void bch2_btree_update_to_text(struct printbuf *, struct btree_update *);
++
+ static int bch2_btree_insert_node(struct btree_update *, struct btree_trans *,
+ btree_path_idx_t, struct btree *, struct keylist *);
+ static void bch2_btree_update_add_new_node(struct btree_update *, struct btree *);
+@@ -1782,11 +1784,24 @@ static int bch2_btree_insert_node(struct btree_update *as, struct btree_trans *t
+ int ret;
+
+ lockdep_assert_held(&c->gc_lock);
+- BUG_ON(!btree_node_intent_locked(path, b->c.level));
+ BUG_ON(!b->c.level);
+ BUG_ON(!as || as->b);
+ bch2_verify_keylist_sorted(keys);
+
++ if (!btree_node_intent_locked(path, b->c.level)) {
++ struct printbuf buf = PRINTBUF;
++ bch2_log_msg_start(c, &buf);
++ prt_printf(&buf, "%s(): node not locked at level %u\n",
++ __func__, b->c.level);
++ bch2_btree_update_to_text(&buf, as);
++ bch2_btree_path_to_text(&buf, trans, path_idx);
++
++ bch2_print_string_as_lines(KERN_ERR, buf.buf);
++ printbuf_exit(&buf);
++ bch2_fs_emergency_read_only(c);
++ return -EIO;
++ }
++
+ ret = bch2_btree_node_lock_write(trans, path, &b->c);
+ if (ret)
+ return ret;
+diff --git a/fs/bcachefs/error.c b/fs/bcachefs/error.c
+index 038da6a61f6b55..6cbf4819e92330 100644
+--- a/fs/bcachefs/error.c
++++ b/fs/bcachefs/error.c
+@@ -11,6 +11,14 @@
+
+ #define FSCK_ERR_RATELIMIT_NR 10
+
++void bch2_log_msg_start(struct bch_fs *c, struct printbuf *out)
++{
++#ifdef BCACHEFS_LOG_PREFIX
++ prt_printf(out, bch2_log_msg(c, ""));
++#endif
++ printbuf_indent_add(out, 2);
++}
++
+ bool bch2_inconsistent_error(struct bch_fs *c)
+ {
+ set_bit(BCH_FS_error, &c->flags);
+diff --git a/fs/bcachefs/error.h b/fs/bcachefs/error.h
+index 7acf2a27ca281f..5730eb6b2f3817 100644
+--- a/fs/bcachefs/error.h
++++ b/fs/bcachefs/error.h
+@@ -18,6 +18,8 @@ struct work_struct;
+
+ /* Error messages: */
+
++void bch2_log_msg_start(struct bch_fs *, struct printbuf *);
++
+ /*
+ * Inconsistency errors: The on disk data is inconsistent. If these occur during
+ * initial recovery, they don't indicate a bug in the running code - we walk all
+diff --git a/fs/bcachefs/xattr_format.h b/fs/bcachefs/xattr_format.h
+index c7916011ef34d3..67426e33d04e56 100644
+--- a/fs/bcachefs/xattr_format.h
++++ b/fs/bcachefs/xattr_format.h
+@@ -13,7 +13,13 @@ struct bch_xattr {
+ __u8 x_type;
+ __u8 x_name_len;
+ __le16 x_val_len;
+- __u8 x_name[] __counted_by(x_name_len);
++ /*
++ * x_name contains the name and value counted by
++ * x_name_len + x_val_len. The introduction of
++ * __counted_by(x_name_len) caused a false positive
++ * detection of an out of bounds write.
++ */
++ __u8 x_name[];
+ } __packed __aligned(8);
+
+ #endif /* _BCACHEFS_XATTR_FORMAT_H */
+diff --git a/fs/btrfs/btrfs_inode.h b/fs/btrfs/btrfs_inode.h
+index b2fa33911c280a..029fba82b81dac 100644
+--- a/fs/btrfs/btrfs_inode.h
++++ b/fs/btrfs/btrfs_inode.h
+@@ -516,6 +516,14 @@ static inline void btrfs_assert_inode_locked(struct btrfs_inode *inode)
+ lockdep_assert_held(&inode->vfs_inode.i_rwsem);
+ }
+
++static inline void btrfs_update_inode_mapping_flags(struct btrfs_inode *inode)
++{
++ if (inode->flags & BTRFS_INODE_NODATASUM)
++ mapping_clear_stable_writes(inode->vfs_inode.i_mapping);
++ else
++ mapping_set_stable_writes(inode->vfs_inode.i_mapping);
++}
++
+ /* Array of bytes with variable length, hexadecimal format 0x1234 */
+ #define CSUM_FMT "0x%*phN"
+ #define CSUM_FMT_VALUE(size, bytes) size, bytes
+diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
+index b2fae67f8fa342..c021aae8875eb7 100644
+--- a/fs/btrfs/extent_io.c
++++ b/fs/btrfs/extent_io.c
+@@ -1871,7 +1871,7 @@ static int submit_eb_subpage(struct folio *folio, struct writeback_control *wbc)
+ subpage->bitmaps)) {
+ spin_unlock_irqrestore(&subpage->lock, flags);
+ spin_unlock(&folio->mapping->i_private_lock);
+- bit_start++;
++ bit_start += sectors_per_node;
+ continue;
+ }
+
+diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
+index a92997a583bd28..cd4e40a7191860 100644
+--- a/fs/btrfs/file.c
++++ b/fs/btrfs/file.c
+@@ -874,7 +874,6 @@ static noinline int prepare_one_folio(struct inode *inode, struct folio **folio_
+ ret = PTR_ERR(folio);
+ return ret;
+ }
+- folio_wait_writeback(folio);
+ /* Only support page sized folio yet. */
+ ASSERT(folio_order(folio) == 0);
+ ret = set_folio_extent_mapped(folio);
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 38756f8cef4630..3be6f8e8e157da 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -2083,12 +2083,13 @@ static noinline int run_delalloc_nocow(struct btrfs_inode *inode,
+
+ /*
+ * If the found extent starts after requested offset, then
+- * adjust extent_end to be right before this extent begins
++ * adjust cur_offset to be right before this extent begins.
+ */
+ if (found_key.offset > cur_offset) {
+- extent_end = found_key.offset;
+- extent_type = 0;
+- goto must_cow;
++ if (cow_start == (u64)-1)
++ cow_start = cur_offset;
++ cur_offset = found_key.offset;
++ goto next_slot;
+ }
+
+ /*
+@@ -3845,12 +3846,13 @@ static int btrfs_add_inode_to_root(struct btrfs_inode *inode, bool prealloc)
+ *
+ * On failure clean up the inode.
+ */
+-static int btrfs_read_locked_inode(struct inode *inode, struct btrfs_path *path)
++static int btrfs_read_locked_inode(struct btrfs_inode *inode, struct btrfs_path *path)
+ {
+- struct btrfs_fs_info *fs_info = inode_to_fs_info(inode);
++ struct btrfs_root *root = inode->root;
++ struct btrfs_fs_info *fs_info = root->fs_info;
+ struct extent_buffer *leaf;
+ struct btrfs_inode_item *inode_item;
+- struct btrfs_root *root = BTRFS_I(inode)->root;
++ struct inode *vfs_inode = &inode->vfs_inode;
+ struct btrfs_key location;
+ unsigned long ptr;
+ int maybe_acls;
+@@ -3859,17 +3861,17 @@ static int btrfs_read_locked_inode(struct inode *inode, struct btrfs_path *path)
+ bool filled = false;
+ int first_xattr_slot;
+
+- ret = btrfs_init_file_extent_tree(BTRFS_I(inode));
++ ret = btrfs_init_file_extent_tree(inode);
+ if (ret)
+ goto out;
+
+- ret = btrfs_fill_inode(inode, &rdev);
++ ret = btrfs_fill_inode(vfs_inode, &rdev);
+ if (!ret)
+ filled = true;
+
+ ASSERT(path);
+
+- btrfs_get_inode_key(BTRFS_I(inode), &location);
++ btrfs_get_inode_key(inode, &location);
+
+ ret = btrfs_lookup_inode(NULL, root, path, &location, 0);
+ if (ret) {
+@@ -3889,41 +3891,41 @@ static int btrfs_read_locked_inode(struct inode *inode, struct btrfs_path *path)
+
+ inode_item = btrfs_item_ptr(leaf, path->slots[0],
+ struct btrfs_inode_item);
+- inode->i_mode = btrfs_inode_mode(leaf, inode_item);
+- set_nlink(inode, btrfs_inode_nlink(leaf, inode_item));
+- i_uid_write(inode, btrfs_inode_uid(leaf, inode_item));
+- i_gid_write(inode, btrfs_inode_gid(leaf, inode_item));
+- btrfs_i_size_write(BTRFS_I(inode), btrfs_inode_size(leaf, inode_item));
+- btrfs_inode_set_file_extent_range(BTRFS_I(inode), 0,
+- round_up(i_size_read(inode), fs_info->sectorsize));
+-
+- inode_set_atime(inode, btrfs_timespec_sec(leaf, &inode_item->atime),
++ vfs_inode->i_mode = btrfs_inode_mode(leaf, inode_item);
++ set_nlink(vfs_inode, btrfs_inode_nlink(leaf, inode_item));
++ i_uid_write(vfs_inode, btrfs_inode_uid(leaf, inode_item));
++ i_gid_write(vfs_inode, btrfs_inode_gid(leaf, inode_item));
++ btrfs_i_size_write(inode, btrfs_inode_size(leaf, inode_item));
++ btrfs_inode_set_file_extent_range(inode, 0,
++ round_up(i_size_read(vfs_inode), fs_info->sectorsize));
++
++ inode_set_atime(vfs_inode, btrfs_timespec_sec(leaf, &inode_item->atime),
+ btrfs_timespec_nsec(leaf, &inode_item->atime));
+
+- inode_set_mtime(inode, btrfs_timespec_sec(leaf, &inode_item->mtime),
++ inode_set_mtime(vfs_inode, btrfs_timespec_sec(leaf, &inode_item->mtime),
+ btrfs_timespec_nsec(leaf, &inode_item->mtime));
+
+- inode_set_ctime(inode, btrfs_timespec_sec(leaf, &inode_item->ctime),
++ inode_set_ctime(vfs_inode, btrfs_timespec_sec(leaf, &inode_item->ctime),
+ btrfs_timespec_nsec(leaf, &inode_item->ctime));
+
+- BTRFS_I(inode)->i_otime_sec = btrfs_timespec_sec(leaf, &inode_item->otime);
+- BTRFS_I(inode)->i_otime_nsec = btrfs_timespec_nsec(leaf, &inode_item->otime);
++ inode->i_otime_sec = btrfs_timespec_sec(leaf, &inode_item->otime);
++ inode->i_otime_nsec = btrfs_timespec_nsec(leaf, &inode_item->otime);
+
+- inode_set_bytes(inode, btrfs_inode_nbytes(leaf, inode_item));
+- BTRFS_I(inode)->generation = btrfs_inode_generation(leaf, inode_item);
+- BTRFS_I(inode)->last_trans = btrfs_inode_transid(leaf, inode_item);
++ inode_set_bytes(vfs_inode, btrfs_inode_nbytes(leaf, inode_item));
++ inode->generation = btrfs_inode_generation(leaf, inode_item);
++ inode->last_trans = btrfs_inode_transid(leaf, inode_item);
+
+- inode_set_iversion_queried(inode,
+- btrfs_inode_sequence(leaf, inode_item));
+- inode->i_generation = BTRFS_I(inode)->generation;
+- inode->i_rdev = 0;
++ inode_set_iversion_queried(vfs_inode, btrfs_inode_sequence(leaf, inode_item));
++ vfs_inode->i_generation = inode->generation;
++ vfs_inode->i_rdev = 0;
+ rdev = btrfs_inode_rdev(leaf, inode_item);
+
+- if (S_ISDIR(inode->i_mode))
+- BTRFS_I(inode)->index_cnt = (u64)-1;
++ if (S_ISDIR(vfs_inode->i_mode))
++ inode->index_cnt = (u64)-1;
+
+ btrfs_inode_split_flags(btrfs_inode_flags(leaf, inode_item),
+- &BTRFS_I(inode)->flags, &BTRFS_I(inode)->ro_flags);
++ &inode->flags, &inode->ro_flags);
++ btrfs_update_inode_mapping_flags(inode);
+
+ cache_index:
+ /*
+@@ -3935,9 +3937,8 @@ static int btrfs_read_locked_inode(struct inode *inode, struct btrfs_path *path)
+ * This is required for both inode re-read from disk and delayed inode
+ * in the delayed_nodes xarray.
+ */
+- if (BTRFS_I(inode)->last_trans == btrfs_get_fs_generation(fs_info))
+- set_bit(BTRFS_INODE_NEEDS_FULL_SYNC,
+- &BTRFS_I(inode)->runtime_flags);
++ if (inode->last_trans == btrfs_get_fs_generation(fs_info))
++ set_bit(BTRFS_INODE_NEEDS_FULL_SYNC, &inode->runtime_flags);
+
+ /*
+ * We don't persist the id of the transaction where an unlink operation
+@@ -3966,7 +3967,7 @@ static int btrfs_read_locked_inode(struct inode *inode, struct btrfs_path *path)
+ * transaction commits on fsync if our inode is a directory, or if our
+ * inode is not a directory, logging its parent unnecessarily.
+ */
+- BTRFS_I(inode)->last_unlink_trans = BTRFS_I(inode)->last_trans;
++ inode->last_unlink_trans = inode->last_trans;
+
+ /*
+ * Same logic as for last_unlink_trans. We don't persist the generation
+@@ -3974,15 +3975,15 @@ static int btrfs_read_locked_inode(struct inode *inode, struct btrfs_path *path)
+ * operation, so after eviction and reloading the inode we must be
+ * pessimistic and assume the last transaction that modified the inode.
+ */
+- BTRFS_I(inode)->last_reflink_trans = BTRFS_I(inode)->last_trans;
++ inode->last_reflink_trans = inode->last_trans;
+
+ path->slots[0]++;
+- if (inode->i_nlink != 1 ||
++ if (vfs_inode->i_nlink != 1 ||
+ path->slots[0] >= btrfs_header_nritems(leaf))
+ goto cache_acl;
+
+ btrfs_item_key_to_cpu(leaf, &location, path->slots[0]);
+- if (location.objectid != btrfs_ino(BTRFS_I(inode)))
++ if (location.objectid != btrfs_ino(inode))
+ goto cache_acl;
+
+ ptr = btrfs_item_ptr_offset(leaf, path->slots[0]);
+@@ -3990,13 +3991,12 @@ static int btrfs_read_locked_inode(struct inode *inode, struct btrfs_path *path)
+ struct btrfs_inode_ref *ref;
+
+ ref = (struct btrfs_inode_ref *)ptr;
+- BTRFS_I(inode)->dir_index = btrfs_inode_ref_index(leaf, ref);
++ inode->dir_index = btrfs_inode_ref_index(leaf, ref);
+ } else if (location.type == BTRFS_INODE_EXTREF_KEY) {
+ struct btrfs_inode_extref *extref;
+
+ extref = (struct btrfs_inode_extref *)ptr;
+- BTRFS_I(inode)->dir_index = btrfs_inode_extref_index(leaf,
+- extref);
++ inode->dir_index = btrfs_inode_extref_index(leaf, extref);
+ }
+ cache_acl:
+ /*
+@@ -4004,50 +4004,49 @@ static int btrfs_read_locked_inode(struct inode *inode, struct btrfs_path *path)
+ * any xattrs or acls
+ */
+ maybe_acls = acls_after_inode_item(leaf, path->slots[0],
+- btrfs_ino(BTRFS_I(inode)), &first_xattr_slot);
++ btrfs_ino(inode), &first_xattr_slot);
+ if (first_xattr_slot != -1) {
+ path->slots[0] = first_xattr_slot;
+- ret = btrfs_load_inode_props(inode, path);
++ ret = btrfs_load_inode_props(vfs_inode, path);
+ if (ret)
+ btrfs_err(fs_info,
+ "error loading props for ino %llu (root %llu): %d",
+- btrfs_ino(BTRFS_I(inode)),
+- btrfs_root_id(root), ret);
++ btrfs_ino(inode), btrfs_root_id(root), ret);
+ }
+
+ if (!maybe_acls)
+- cache_no_acl(inode);
++ cache_no_acl(vfs_inode);
+
+- switch (inode->i_mode & S_IFMT) {
++ switch (vfs_inode->i_mode & S_IFMT) {
+ case S_IFREG:
+- inode->i_mapping->a_ops = &btrfs_aops;
+- inode->i_fop = &btrfs_file_operations;
+- inode->i_op = &btrfs_file_inode_operations;
++ vfs_inode->i_mapping->a_ops = &btrfs_aops;
++ vfs_inode->i_fop = &btrfs_file_operations;
++ vfs_inode->i_op = &btrfs_file_inode_operations;
+ break;
+ case S_IFDIR:
+- inode->i_fop = &btrfs_dir_file_operations;
+- inode->i_op = &btrfs_dir_inode_operations;
++ vfs_inode->i_fop = &btrfs_dir_file_operations;
++ vfs_inode->i_op = &btrfs_dir_inode_operations;
+ break;
+ case S_IFLNK:
+- inode->i_op = &btrfs_symlink_inode_operations;
+- inode_nohighmem(inode);
+- inode->i_mapping->a_ops = &btrfs_aops;
++ vfs_inode->i_op = &btrfs_symlink_inode_operations;
++ inode_nohighmem(vfs_inode);
++ vfs_inode->i_mapping->a_ops = &btrfs_aops;
+ break;
+ default:
+- inode->i_op = &btrfs_special_inode_operations;
+- init_special_inode(inode, inode->i_mode, rdev);
++ vfs_inode->i_op = &btrfs_special_inode_operations;
++ init_special_inode(vfs_inode, vfs_inode->i_mode, rdev);
+ break;
+ }
+
+- btrfs_sync_inode_flags_to_i_flags(inode);
++ btrfs_sync_inode_flags_to_i_flags(vfs_inode);
+
+- ret = btrfs_add_inode_to_root(BTRFS_I(inode), true);
++ ret = btrfs_add_inode_to_root(inode, true);
+ if (ret)
+ goto out;
+
+ return 0;
+ out:
+- iget_failed(inode);
++ iget_failed(vfs_inode);
+ return ret;
+ }
+
+@@ -5602,7 +5601,7 @@ static int btrfs_find_actor(struct inode *inode, void *opaque)
+ args->root == BTRFS_I(inode)->root;
+ }
+
+-static struct inode *btrfs_iget_locked(u64 ino, struct btrfs_root *root)
++static struct btrfs_inode *btrfs_iget_locked(u64 ino, struct btrfs_root *root)
+ {
+ struct inode *inode;
+ struct btrfs_iget_args args;
+@@ -5614,7 +5613,9 @@ static struct inode *btrfs_iget_locked(u64 ino, struct btrfs_root *root)
+ inode = iget5_locked_rcu(root->fs_info->sb, hashval, btrfs_find_actor,
+ btrfs_init_locked_inode,
+ (void *)&args);
+- return inode;
++ if (!inode)
++ return NULL;
++ return BTRFS_I(inode);
+ }
+
+ /*
+@@ -5624,22 +5625,22 @@ static struct inode *btrfs_iget_locked(u64 ino, struct btrfs_root *root)
+ struct inode *btrfs_iget_path(u64 ino, struct btrfs_root *root,
+ struct btrfs_path *path)
+ {
+- struct inode *inode;
++ struct btrfs_inode *inode;
+ int ret;
+
+ inode = btrfs_iget_locked(ino, root);
+ if (!inode)
+ return ERR_PTR(-ENOMEM);
+
+- if (!(inode->i_state & I_NEW))
+- return inode;
++ if (!(inode->vfs_inode.i_state & I_NEW))
++ return &inode->vfs_inode;
+
+ ret = btrfs_read_locked_inode(inode, path);
+ if (ret)
+ return ERR_PTR(ret);
+
+- unlock_new_inode(inode);
+- return inode;
++ unlock_new_inode(&inode->vfs_inode);
++ return &inode->vfs_inode;
+ }
+
+ /*
+@@ -5647,7 +5648,7 @@ struct inode *btrfs_iget_path(u64 ino, struct btrfs_root *root,
+ */
+ struct inode *btrfs_iget(u64 ino, struct btrfs_root *root)
+ {
+- struct inode *inode;
++ struct btrfs_inode *inode;
+ struct btrfs_path *path;
+ int ret;
+
+@@ -5655,20 +5656,22 @@ struct inode *btrfs_iget(u64 ino, struct btrfs_root *root)
+ if (!inode)
+ return ERR_PTR(-ENOMEM);
+
+- if (!(inode->i_state & I_NEW))
+- return inode;
++ if (!(inode->vfs_inode.i_state & I_NEW))
++ return &inode->vfs_inode;
+
+ path = btrfs_alloc_path();
+- if (!path)
++ if (!path) {
++ iget_failed(&inode->vfs_inode);
+ return ERR_PTR(-ENOMEM);
++ }
+
+ ret = btrfs_read_locked_inode(inode, path);
+ btrfs_free_path(path);
+ if (ret)
+ return ERR_PTR(ret);
+
+- unlock_new_inode(inode);
+- return inode;
++ unlock_new_inode(&inode->vfs_inode);
++ return &inode->vfs_inode;
+ }
+
+ static struct inode *new_simple_dir(struct inode *dir,
+@@ -6339,6 +6342,7 @@ int btrfs_create_new_inode(struct btrfs_trans_handle *trans,
+ if (btrfs_test_opt(fs_info, NODATACOW))
+ BTRFS_I(inode)->flags |= BTRFS_INODE_NODATACOW |
+ BTRFS_INODE_NODATASUM;
++ btrfs_update_inode_mapping_flags(BTRFS_I(inode));
+ }
+
+ ret = btrfs_insert_inode_locked(inode);
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index e666c141cae0b0..10a97f0af8d4b2 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -393,6 +393,7 @@ int btrfs_fileattr_set(struct mnt_idmap *idmap,
+
+ update_flags:
+ binode->flags = binode_flags;
++ btrfs_update_inode_mapping_flags(binode);
+ btrfs_sync_inode_flags_to_i_flags(inode);
+ inode_inc_iversion(inode);
+ inode_set_ctime_current(inode);
+diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
+index 978a57da8b4f5b..f39656668967c6 100644
+--- a/fs/btrfs/zoned.c
++++ b/fs/btrfs/zoned.c
+@@ -1277,7 +1277,7 @@ struct zone_info {
+
+ static int btrfs_load_zone_info(struct btrfs_fs_info *fs_info, int zone_idx,
+ struct zone_info *info, unsigned long *active,
+- struct btrfs_chunk_map *map)
++ struct btrfs_chunk_map *map, bool new)
+ {
+ struct btrfs_dev_replace *dev_replace = &fs_info->dev_replace;
+ struct btrfs_device *device;
+@@ -1307,6 +1307,8 @@ static int btrfs_load_zone_info(struct btrfs_fs_info *fs_info, int zone_idx,
+ return 0;
+ }
+
++ ASSERT(!new || btrfs_dev_is_empty_zone(device, info->physical));
++
+ /* This zone will be used for allocation, so mark this zone non-empty. */
+ btrfs_dev_clear_zone_empty(device, info->physical);
+
+@@ -1319,6 +1321,18 @@ static int btrfs_load_zone_info(struct btrfs_fs_info *fs_info, int zone_idx,
+ * to determine the allocation offset within the zone.
+ */
+ WARN_ON(!IS_ALIGNED(info->physical, fs_info->zone_size));
++
++ if (new) {
++ sector_t capacity;
++
++ capacity = bdev_zone_capacity(device->bdev, info->physical >> SECTOR_SHIFT);
++ up_read(&dev_replace->rwsem);
++ info->alloc_offset = 0;
++ info->capacity = capacity << SECTOR_SHIFT;
++
++ return 0;
++ }
++
+ nofs_flag = memalloc_nofs_save();
+ ret = btrfs_get_dev_zone(device, info->physical, &zone);
+ memalloc_nofs_restore(nofs_flag);
+@@ -1588,7 +1602,7 @@ int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache, bool new)
+ }
+
+ for (i = 0; i < map->num_stripes; i++) {
+- ret = btrfs_load_zone_info(fs_info, i, &zone_info[i], active, map);
++ ret = btrfs_load_zone_info(fs_info, i, &zone_info[i], active, map, new);
+ if (ret)
+ goto out;
+
+diff --git a/fs/smb/client/smb2pdu.c b/fs/smb/client/smb2pdu.c
+index 163b8fea47e8a0..e7118501fdcc64 100644
+--- a/fs/smb/client/smb2pdu.c
++++ b/fs/smb/client/smb2pdu.c
+@@ -2920,6 +2920,7 @@ int smb311_posix_mkdir(const unsigned int xid, struct inode *inode,
+ req->CreateContextsOffset = cpu_to_le32(
+ sizeof(struct smb2_create_req) +
+ iov[1].iov_len);
++ le32_add_cpu(&req->CreateContextsLength, iov[n_iov-1].iov_len);
+ pc_buf = iov[n_iov-1].iov_base;
+ }
+
+diff --git a/fs/smb/server/auth.c b/fs/smb/server/auth.c
+index 83caa384974932..b3d121052408cc 100644
+--- a/fs/smb/server/auth.c
++++ b/fs/smb/server/auth.c
+@@ -550,7 +550,19 @@ int ksmbd_krb5_authenticate(struct ksmbd_session *sess, char *in_blob,
+ retval = -ENOMEM;
+ goto out;
+ }
+- sess->user = user;
++
++ if (!sess->user) {
++ /* First successful authentication */
++ sess->user = user;
++ } else {
++ if (!ksmbd_compare_user(sess->user, user)) {
++ ksmbd_debug(AUTH, "different user tried to reuse session\n");
++ retval = -EPERM;
++ ksmbd_free_user(user);
++ goto out;
++ }
++ ksmbd_free_user(user);
++ }
+
+ memcpy(sess->sess_key, resp->payload, resp->session_key_len);
+ memcpy(out_blob, resp->payload + resp->session_key_len,
+diff --git a/fs/smb/server/mgmt/user_session.c b/fs/smb/server/mgmt/user_session.c
+index 3f45f28f6f0f8e..9dec4c2940bc04 100644
+--- a/fs/smb/server/mgmt/user_session.c
++++ b/fs/smb/server/mgmt/user_session.c
+@@ -59,10 +59,12 @@ static void ksmbd_session_rpc_clear_list(struct ksmbd_session *sess)
+ struct ksmbd_session_rpc *entry;
+ long index;
+
++ down_write(&sess->rpc_lock);
+ xa_for_each(&sess->rpc_handle_list, index, entry) {
+ xa_erase(&sess->rpc_handle_list, index);
+ __session_rpc_close(sess, entry);
+ }
++ up_write(&sess->rpc_lock);
+
+ xa_destroy(&sess->rpc_handle_list);
+ }
+@@ -92,7 +94,7 @@ int ksmbd_session_rpc_open(struct ksmbd_session *sess, char *rpc_name)
+ {
+ struct ksmbd_session_rpc *entry, *old;
+ struct ksmbd_rpc_command *resp;
+- int method;
++ int method, id;
+
+ method = __rpc_method(rpc_name);
+ if (!method)
+@@ -102,26 +104,29 @@ int ksmbd_session_rpc_open(struct ksmbd_session *sess, char *rpc_name)
+ if (!entry)
+ return -ENOMEM;
+
++ down_read(&sess->rpc_lock);
+ entry->method = method;
+- entry->id = ksmbd_ipc_id_alloc();
+- if (entry->id < 0)
++ entry->id = id = ksmbd_ipc_id_alloc();
++ if (id < 0)
+ goto free_entry;
+- old = xa_store(&sess->rpc_handle_list, entry->id, entry, KSMBD_DEFAULT_GFP);
++ old = xa_store(&sess->rpc_handle_list, id, entry, KSMBD_DEFAULT_GFP);
+ if (xa_is_err(old))
+ goto free_id;
+
+- resp = ksmbd_rpc_open(sess, entry->id);
++ resp = ksmbd_rpc_open(sess, id);
+ if (!resp)
+ goto erase_xa;
+
++ up_read(&sess->rpc_lock);
+ kvfree(resp);
+- return entry->id;
++ return id;
+ erase_xa:
+ xa_erase(&sess->rpc_handle_list, entry->id);
+ free_id:
+ ksmbd_rpc_id_free(entry->id);
+ free_entry:
+ kfree(entry);
++ up_read(&sess->rpc_lock);
+ return -EINVAL;
+ }
+
+@@ -129,9 +134,11 @@ void ksmbd_session_rpc_close(struct ksmbd_session *sess, int id)
+ {
+ struct ksmbd_session_rpc *entry;
+
++ down_write(&sess->rpc_lock);
+ entry = xa_erase(&sess->rpc_handle_list, id);
+ if (entry)
+ __session_rpc_close(sess, entry);
++ up_write(&sess->rpc_lock);
+ }
+
+ int ksmbd_session_rpc_method(struct ksmbd_session *sess, int id)
+@@ -439,6 +446,7 @@ static struct ksmbd_session *__session_create(int protocol)
+ sess->sequence_number = 1;
+ rwlock_init(&sess->tree_conns_lock);
+ atomic_set(&sess->refcnt, 2);
++ init_rwsem(&sess->rpc_lock);
+
+ ret = __init_smb2_session(sess);
+ if (ret)
+diff --git a/fs/smb/server/mgmt/user_session.h b/fs/smb/server/mgmt/user_session.h
+index f21348381d5984..c5749d6ec7151c 100644
+--- a/fs/smb/server/mgmt/user_session.h
++++ b/fs/smb/server/mgmt/user_session.h
+@@ -63,6 +63,7 @@ struct ksmbd_session {
+ rwlock_t tree_conns_lock;
+
+ atomic_t refcnt;
++ struct rw_semaphore rpc_lock;
+ };
+
+ static inline int test_session_flag(struct ksmbd_session *sess, int bit)
+diff --git a/fs/smb/server/smb2pdu.c b/fs/smb/server/smb2pdu.c
+index 57839f9708bb6c..58ede919675174 100644
+--- a/fs/smb/server/smb2pdu.c
++++ b/fs/smb/server/smb2pdu.c
+@@ -1602,11 +1602,6 @@ static int krb5_authenticate(struct ksmbd_work *work,
+ if (prev_sess_id && prev_sess_id != sess->id)
+ destroy_previous_session(conn, sess->user, prev_sess_id);
+
+- if (sess->state == SMB2_SESSION_VALID) {
+- ksmbd_free_user(sess->user);
+- sess->user = NULL;
+- }
+-
+ retval = ksmbd_krb5_authenticate(sess, in_blob, in_len,
+ out_blob, &out_len);
+ if (retval) {
+@@ -2249,10 +2244,6 @@ int smb2_session_logoff(struct ksmbd_work *work)
+ sess->state = SMB2_SESSION_EXPIRED;
+ up_write(&conn->session_lock);
+
+- if (sess->user) {
+- ksmbd_free_user(sess->user);
+- sess->user = NULL;
+- }
+ ksmbd_all_conn_set_status(sess_id, KSMBD_SESS_NEED_SETUP);
+
+ rsp->StructureSize = cpu_to_le16(4);
+diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
+index 6aa67e9b2ec081..0fec27d6b986c5 100644
+--- a/include/linux/blkdev.h
++++ b/include/linux/blkdev.h
+@@ -691,23 +691,6 @@ static inline bool blk_queue_is_zoned(struct request_queue *q)
+ (q->limits.features & BLK_FEAT_ZONED);
+ }
+
+-#ifdef CONFIG_BLK_DEV_ZONED
+-static inline unsigned int disk_nr_zones(struct gendisk *disk)
+-{
+- return disk->nr_zones;
+-}
+-bool blk_zone_plug_bio(struct bio *bio, unsigned int nr_segs);
+-#else /* CONFIG_BLK_DEV_ZONED */
+-static inline unsigned int disk_nr_zones(struct gendisk *disk)
+-{
+- return 0;
+-}
+-static inline bool blk_zone_plug_bio(struct bio *bio, unsigned int nr_segs)
+-{
+- return false;
+-}
+-#endif /* CONFIG_BLK_DEV_ZONED */
+-
+ static inline unsigned int disk_zone_no(struct gendisk *disk, sector_t sector)
+ {
+ if (!blk_queue_is_zoned(disk->queue))
+@@ -715,11 +698,6 @@ static inline unsigned int disk_zone_no(struct gendisk *disk, sector_t sector)
+ return sector >> ilog2(disk->queue->limits.chunk_sectors);
+ }
+
+-static inline unsigned int bdev_nr_zones(struct block_device *bdev)
+-{
+- return disk_nr_zones(bdev->bd_disk);
+-}
+-
+ static inline unsigned int bdev_max_open_zones(struct block_device *bdev)
+ {
+ return bdev->bd_disk->queue->limits.max_open_zones;
+@@ -826,6 +804,51 @@ static inline u64 sb_bdev_nr_blocks(struct super_block *sb)
+ (sb->s_blocksize_bits - SECTOR_SHIFT);
+ }
+
++#ifdef CONFIG_BLK_DEV_ZONED
++static inline unsigned int disk_nr_zones(struct gendisk *disk)
++{
++ return disk->nr_zones;
++}
++bool blk_zone_plug_bio(struct bio *bio, unsigned int nr_segs);
++
++/**
++ * disk_zone_capacity - returns the zone capacity of zone containing @sector
++ * @disk: disk to work with
++ * @sector: sector number within the querying zone
++ *
++ * Returns the zone capacity of a zone containing @sector. @sector can be any
++ * sector in the zone.
++ */
++static inline unsigned int disk_zone_capacity(struct gendisk *disk,
++ sector_t sector)
++{
++ sector_t zone_sectors = disk->queue->limits.chunk_sectors;
++
++ if (sector + zone_sectors >= get_capacity(disk))
++ return disk->last_zone_capacity;
++ return disk->zone_capacity;
++}
++static inline unsigned int bdev_zone_capacity(struct block_device *bdev,
++ sector_t pos)
++{
++ return disk_zone_capacity(bdev->bd_disk, pos);
++}
++#else /* CONFIG_BLK_DEV_ZONED */
++static inline unsigned int disk_nr_zones(struct gendisk *disk)
++{
++ return 0;
++}
++static inline bool blk_zone_plug_bio(struct bio *bio, unsigned int nr_segs)
++{
++ return false;
++}
++#endif /* CONFIG_BLK_DEV_ZONED */
++
++static inline unsigned int bdev_nr_zones(struct block_device *bdev)
++{
++ return disk_nr_zones(bdev->bd_disk);
++}
++
+ int bdev_disk_changed(struct gendisk *disk, bool invalidate);
+
+ void put_disk(struct gendisk *disk);
+diff --git a/include/linux/cpufreq.h b/include/linux/cpufreq.h
+index 7fe0981a7e4674..73024830bd7306 100644
+--- a/include/linux/cpufreq.h
++++ b/include/linux/cpufreq.h
+@@ -144,6 +144,9 @@ struct cpufreq_policy {
+ /* Per policy boost enabled flag. */
+ bool boost_enabled;
+
++ /* Per policy boost supported flag. */
++ bool boost_supported;
++
+ /* Cached frequency lookup from cpufreq_driver_resolve_freq. */
+ unsigned int cached_target_freq;
+ unsigned int cached_resolved_idx;
+@@ -770,8 +773,8 @@ int cpufreq_frequency_table_verify(struct cpufreq_policy_data *policy,
+ int cpufreq_generic_frequency_table_verify(struct cpufreq_policy_data *policy);
+
+ int cpufreq_table_index_unsorted(struct cpufreq_policy *policy,
+- unsigned int target_freq,
+- unsigned int relation);
++ unsigned int target_freq, unsigned int min,
++ unsigned int max, unsigned int relation);
+ int cpufreq_frequency_table_get_index(struct cpufreq_policy *policy,
+ unsigned int freq);
+
+@@ -836,12 +839,12 @@ static inline int cpufreq_table_find_index_dl(struct cpufreq_policy *policy,
+ return best;
+ }
+
+-/* Works only on sorted freq-tables */
+-static inline int cpufreq_table_find_index_l(struct cpufreq_policy *policy,
+- unsigned int target_freq,
+- bool efficiencies)
++static inline int find_index_l(struct cpufreq_policy *policy,
++ unsigned int target_freq,
++ unsigned int min, unsigned int max,
++ bool efficiencies)
+ {
+- target_freq = clamp_val(target_freq, policy->min, policy->max);
++ target_freq = clamp_val(target_freq, min, max);
+
+ if (policy->freq_table_sorted == CPUFREQ_TABLE_SORTED_ASCENDING)
+ return cpufreq_table_find_index_al(policy, target_freq,
+@@ -851,6 +854,14 @@ static inline int cpufreq_table_find_index_l(struct cpufreq_policy *policy,
+ efficiencies);
+ }
+
++/* Works only on sorted freq-tables */
++static inline int cpufreq_table_find_index_l(struct cpufreq_policy *policy,
++ unsigned int target_freq,
++ bool efficiencies)
++{
++ return find_index_l(policy, target_freq, policy->min, policy->max, efficiencies);
++}
++
+ /* Find highest freq at or below target in a table in ascending order */
+ static inline int cpufreq_table_find_index_ah(struct cpufreq_policy *policy,
+ unsigned int target_freq,
+@@ -904,12 +915,12 @@ static inline int cpufreq_table_find_index_dh(struct cpufreq_policy *policy,
+ return best;
+ }
+
+-/* Works only on sorted freq-tables */
+-static inline int cpufreq_table_find_index_h(struct cpufreq_policy *policy,
+- unsigned int target_freq,
+- bool efficiencies)
++static inline int find_index_h(struct cpufreq_policy *policy,
++ unsigned int target_freq,
++ unsigned int min, unsigned int max,
++ bool efficiencies)
+ {
+- target_freq = clamp_val(target_freq, policy->min, policy->max);
++ target_freq = clamp_val(target_freq, min, max);
+
+ if (policy->freq_table_sorted == CPUFREQ_TABLE_SORTED_ASCENDING)
+ return cpufreq_table_find_index_ah(policy, target_freq,
+@@ -919,6 +930,14 @@ static inline int cpufreq_table_find_index_h(struct cpufreq_policy *policy,
+ efficiencies);
+ }
+
++/* Works only on sorted freq-tables */
++static inline int cpufreq_table_find_index_h(struct cpufreq_policy *policy,
++ unsigned int target_freq,
++ bool efficiencies)
++{
++ return find_index_h(policy, target_freq, policy->min, policy->max, efficiencies);
++}
++
+ /* Find closest freq to target in a table in ascending order */
+ static inline int cpufreq_table_find_index_ac(struct cpufreq_policy *policy,
+ unsigned int target_freq,
+@@ -989,12 +1008,12 @@ static inline int cpufreq_table_find_index_dc(struct cpufreq_policy *policy,
+ return best;
+ }
+
+-/* Works only on sorted freq-tables */
+-static inline int cpufreq_table_find_index_c(struct cpufreq_policy *policy,
+- unsigned int target_freq,
+- bool efficiencies)
++static inline int find_index_c(struct cpufreq_policy *policy,
++ unsigned int target_freq,
++ unsigned int min, unsigned int max,
++ bool efficiencies)
+ {
+- target_freq = clamp_val(target_freq, policy->min, policy->max);
++ target_freq = clamp_val(target_freq, min, max);
+
+ if (policy->freq_table_sorted == CPUFREQ_TABLE_SORTED_ASCENDING)
+ return cpufreq_table_find_index_ac(policy, target_freq,
+@@ -1004,7 +1023,17 @@ static inline int cpufreq_table_find_index_c(struct cpufreq_policy *policy,
+ efficiencies);
+ }
+
+-static inline bool cpufreq_is_in_limits(struct cpufreq_policy *policy, int idx)
++/* Works only on sorted freq-tables */
++static inline int cpufreq_table_find_index_c(struct cpufreq_policy *policy,
++ unsigned int target_freq,
++ bool efficiencies)
++{
++ return find_index_c(policy, target_freq, policy->min, policy->max, efficiencies);
++}
++
++static inline bool cpufreq_is_in_limits(struct cpufreq_policy *policy,
++ unsigned int min, unsigned int max,
++ int idx)
+ {
+ unsigned int freq;
+
+@@ -1013,11 +1042,13 @@ static inline bool cpufreq_is_in_limits(struct cpufreq_policy *policy, int idx)
+
+ freq = policy->freq_table[idx].frequency;
+
+- return freq == clamp_val(freq, policy->min, policy->max);
++ return freq == clamp_val(freq, min, max);
+ }
+
+ static inline int cpufreq_frequency_table_target(struct cpufreq_policy *policy,
+ unsigned int target_freq,
++ unsigned int min,
++ unsigned int max,
+ unsigned int relation)
+ {
+ bool efficiencies = policy->efficiencies_available &&
+@@ -1028,29 +1059,26 @@ static inline int cpufreq_frequency_table_target(struct cpufreq_policy *policy,
+ relation &= ~CPUFREQ_RELATION_E;
+
+ if (unlikely(policy->freq_table_sorted == CPUFREQ_TABLE_UNSORTED))
+- return cpufreq_table_index_unsorted(policy, target_freq,
+- relation);
++ return cpufreq_table_index_unsorted(policy, target_freq, min,
++ max, relation);
+ retry:
+ switch (relation) {
+ case CPUFREQ_RELATION_L:
+- idx = cpufreq_table_find_index_l(policy, target_freq,
+- efficiencies);
++ idx = find_index_l(policy, target_freq, min, max, efficiencies);
+ break;
+ case CPUFREQ_RELATION_H:
+- idx = cpufreq_table_find_index_h(policy, target_freq,
+- efficiencies);
++ idx = find_index_h(policy, target_freq, min, max, efficiencies);
+ break;
+ case CPUFREQ_RELATION_C:
+- idx = cpufreq_table_find_index_c(policy, target_freq,
+- efficiencies);
++ idx = find_index_c(policy, target_freq, min, max, efficiencies);
+ break;
+ default:
+ WARN_ON_ONCE(1);
+ return 0;
+ }
+
+- /* Limit frequency index to honor policy->min/max */
+- if (!cpufreq_is_in_limits(policy, idx) && efficiencies) {
++ /* Limit frequency index to honor min and max */
++ if (!cpufreq_is_in_limits(policy, min, max, idx) && efficiencies) {
+ efficiencies = false;
+ goto retry;
+ }
+diff --git a/include/linux/iommu.h b/include/linux/iommu.h
+index 38c65e92ecd091..87cbe47b323e68 100644
+--- a/include/linux/iommu.h
++++ b/include/linux/iommu.h
+@@ -425,10 +425,10 @@ static inline int __iommu_copy_struct_from_user(
+ void *dst_data, const struct iommu_user_data *src_data,
+ unsigned int data_type, size_t data_len, size_t min_len)
+ {
+- if (src_data->type != data_type)
+- return -EINVAL;
+ if (WARN_ON(!dst_data || !src_data))
+ return -EINVAL;
++ if (src_data->type != data_type)
++ return -EINVAL;
+ if (src_data->len < min_len || data_len < src_data->len)
+ return -EINVAL;
+ return copy_struct_from_user(dst_data, data_len, src_data->uptr,
+@@ -441,8 +441,8 @@ static inline int __iommu_copy_struct_from_user(
+ * include/uapi/linux/iommufd.h
+ * @user_data: Pointer to a struct iommu_user_data for user space data info
+ * @data_type: The data type of the @kdst. Must match with @user_data->type
+- * @min_last: The last memember of the data structure @kdst points in the
+- * initial version.
++ * @min_last: The last member of the data structure @kdst points in the initial
++ * version.
+ * Return 0 for success, otherwise -error.
+ */
+ #define iommu_copy_struct_from_user(kdst, user_data, data_type, min_last) \
+diff --git a/include/linux/module.h b/include/linux/module.h
+index 30e5b19bafa983..ba33bba3cc7427 100644
+--- a/include/linux/module.h
++++ b/include/linux/module.h
+@@ -162,6 +162,8 @@ extern void cleanup_module(void);
+ #define __INITRODATA_OR_MODULE __INITRODATA
+ #endif /*CONFIG_MODULES*/
+
++struct module_kobject *lookup_or_create_module_kobject(const char *name);
++
+ /* Generic info of form tag = "info" */
+ #define MODULE_INFO(tag, info) __MODULE_INFO(tag, tag, info)
+
+diff --git a/include/net/bluetooth/hci.h b/include/net/bluetooth/hci.h
+index a8586c3058c7cd..797992019f9ee5 100644
+--- a/include/net/bluetooth/hci.h
++++ b/include/net/bluetooth/hci.h
+@@ -1931,6 +1931,8 @@ struct hci_cp_le_pa_create_sync {
+ __u8 sync_cte_type;
+ } __packed;
+
++#define HCI_OP_LE_PA_CREATE_SYNC_CANCEL 0x2045
++
+ #define HCI_OP_LE_PA_TERM_SYNC 0x2046
+ struct hci_cp_le_pa_term_sync {
+ __le16 handle;
+@@ -2830,7 +2832,7 @@ struct hci_evt_le_create_big_complete {
+ __le16 bis_handle[];
+ } __packed;
+
+-#define HCI_EVT_LE_BIG_SYNC_ESTABILISHED 0x1d
++#define HCI_EVT_LE_BIG_SYNC_ESTABLISHED 0x1d
+ struct hci_evt_le_big_sync_estabilished {
+ __u8 status;
+ __u8 handle;
+diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
+index f0b49aad519eb2..7d8bab892154eb 100644
+--- a/include/net/bluetooth/hci_core.h
++++ b/include/net/bluetooth/hci_core.h
+@@ -1105,10 +1105,8 @@ static inline struct hci_conn *hci_conn_hash_lookup_bis(struct hci_dev *hdev,
+ return NULL;
+ }
+
+-static inline struct hci_conn *hci_conn_hash_lookup_sid(struct hci_dev *hdev,
+- __u8 sid,
+- bdaddr_t *dst,
+- __u8 dst_type)
++static inline struct hci_conn *
++hci_conn_hash_lookup_create_pa_sync(struct hci_dev *hdev)
+ {
+ struct hci_conn_hash *h = &hdev->conn_hash;
+ struct hci_conn *c;
+@@ -1116,8 +1114,10 @@ static inline struct hci_conn *hci_conn_hash_lookup_sid(struct hci_dev *hdev,
+ rcu_read_lock();
+
+ list_for_each_entry_rcu(c, &h->list, list) {
+- if (c->type != ISO_LINK || bacmp(&c->dst, dst) ||
+- c->dst_type != dst_type || c->sid != sid)
++ if (c->type != ISO_LINK)
++ continue;
++
++ if (!test_bit(HCI_CONN_CREATE_PA_SYNC, &c->flags))
+ continue;
+
+ rcu_read_unlock();
+@@ -1516,8 +1516,6 @@ bool hci_setup_sync(struct hci_conn *conn, __u16 handle);
+ void hci_sco_setup(struct hci_conn *conn, __u8 status);
+ bool hci_iso_setup_path(struct hci_conn *conn);
+ int hci_le_create_cis_pending(struct hci_dev *hdev);
+-int hci_pa_create_sync_pending(struct hci_dev *hdev);
+-int hci_le_big_create_sync_pending(struct hci_dev *hdev);
+ int hci_conn_check_create_cis(struct hci_conn *conn);
+
+ struct hci_conn *hci_conn_add(struct hci_dev *hdev, int type, bdaddr_t *dst,
+@@ -1558,9 +1556,9 @@ struct hci_conn *hci_connect_bis(struct hci_dev *hdev, bdaddr_t *dst,
+ __u8 data_len, __u8 *data);
+ struct hci_conn *hci_pa_create_sync(struct hci_dev *hdev, bdaddr_t *dst,
+ __u8 dst_type, __u8 sid, struct bt_iso_qos *qos);
+-int hci_le_big_create_sync(struct hci_dev *hdev, struct hci_conn *hcon,
+- struct bt_iso_qos *qos,
+- __u16 sync_handle, __u8 num_bis, __u8 bis[]);
++int hci_conn_big_create_sync(struct hci_dev *hdev, struct hci_conn *hcon,
++ struct bt_iso_qos *qos, __u16 sync_handle,
++ __u8 num_bis, __u8 bis[]);
+ int hci_conn_check_link_mode(struct hci_conn *conn);
+ int hci_conn_check_secure(struct hci_conn *conn, __u8 sec_level);
+ int hci_conn_security(struct hci_conn *conn, __u8 sec_level, __u8 auth_type,
+diff --git a/include/net/bluetooth/hci_sync.h b/include/net/bluetooth/hci_sync.h
+index 7e2cf0cca939a1..72558c826aa1b4 100644
+--- a/include/net/bluetooth/hci_sync.h
++++ b/include/net/bluetooth/hci_sync.h
+@@ -185,3 +185,6 @@ int hci_connect_le_sync(struct hci_dev *hdev, struct hci_conn *conn);
+ int hci_cancel_connect_sync(struct hci_dev *hdev, struct hci_conn *conn);
+ int hci_le_conn_update_sync(struct hci_dev *hdev, struct hci_conn *conn,
+ struct hci_conn_params *params);
++
++int hci_connect_pa_sync(struct hci_dev *hdev, struct hci_conn *conn);
++int hci_connect_big_sync(struct hci_dev *hdev, struct hci_conn *conn);
+diff --git a/include/net/xdp_sock.h b/include/net/xdp_sock.h
+index a58ae7589d1212..e8bd6ddb7b1275 100644
+--- a/include/net/xdp_sock.h
++++ b/include/net/xdp_sock.h
+@@ -71,9 +71,6 @@ struct xdp_sock {
+ */
+ u32 tx_budget_spent;
+
+- /* Protects generic receive. */
+- spinlock_t rx_lock;
+-
+ /* Statistics */
+ u64 rx_dropped;
+ u64 rx_queue_full;
+diff --git a/include/net/xsk_buff_pool.h b/include/net/xsk_buff_pool.h
+index 50779406bc2d91..b3699a8488444d 100644
+--- a/include/net/xsk_buff_pool.h
++++ b/include/net/xsk_buff_pool.h
+@@ -53,6 +53,8 @@ struct xsk_buff_pool {
+ refcount_t users;
+ struct xdp_umem *umem;
+ struct work_struct work;
++ /* Protects generic receive in shared and non-shared umem mode. */
++ spinlock_t rx_lock;
+ struct list_head free_list;
+ struct list_head xskb_list;
+ u32 heads_cnt;
+@@ -230,8 +232,8 @@ static inline u64 xp_get_handle(struct xdp_buff_xsk *xskb,
+ return orig_addr;
+
+ offset = xskb->xdp.data - xskb->xdp.data_hard_start;
+- orig_addr -= offset;
+ offset += pool->headroom;
++ orig_addr -= offset;
+ return orig_addr + (offset << XSK_UNALIGNED_BUF_OFFSET_SHIFT);
+ }
+
+diff --git a/include/sound/ump_convert.h b/include/sound/ump_convert.h
+index d099ae27f8491a..682499b871eac4 100644
+--- a/include/sound/ump_convert.h
++++ b/include/sound/ump_convert.h
+@@ -19,7 +19,7 @@ struct ump_cvt_to_ump_bank {
+ /* context for converting from MIDI1 byte stream to UMP packet */
+ struct ump_cvt_to_ump {
+ /* MIDI1 intermediate buffer */
+- unsigned char buf[4];
++ unsigned char buf[6]; /* up to 6 bytes for SysEx */
+ int len;
+ int cmd_bytes;
+
+diff --git a/kernel/params.c b/kernel/params.c
+index 0074d29c9b80ce..c417d28bc1dfba 100644
+--- a/kernel/params.c
++++ b/kernel/params.c
+@@ -763,7 +763,7 @@ void destroy_params(const struct kernel_param *params, unsigned num)
+ params[i].ops->free(params[i].arg);
+ }
+
+-static struct module_kobject * __init locate_module_kobject(const char *name)
++struct module_kobject __modinit * lookup_or_create_module_kobject(const char *name)
+ {
+ struct module_kobject *mk;
+ struct kobject *kobj;
+@@ -805,7 +805,7 @@ static void __init kernel_add_sysfs_param(const char *name,
+ struct module_kobject *mk;
+ int err;
+
+- mk = locate_module_kobject(name);
++ mk = lookup_or_create_module_kobject(name);
+ if (!mk)
+ return;
+
+@@ -876,7 +876,7 @@ static void __init version_sysfs_builtin(void)
+ int err;
+
+ for (vattr = __start___modver; vattr < __stop___modver; vattr++) {
+- mk = locate_module_kobject(vattr->module_name);
++ mk = lookup_or_create_module_kobject(vattr->module_name);
+ if (mk) {
+ err = sysfs_create_file(&mk->kobj, &vattr->mattr.attr);
+ WARN_ON_ONCE(err);
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 50aa6d59083292..814626bb410b27 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -6682,13 +6682,14 @@ static ssize_t tracing_splice_read_pipe(struct file *filp,
+ /* Copy the data into the page, so we can start over. */
+ ret = trace_seq_to_buffer(&iter->seq,
+ page_address(spd.pages[i]),
+- trace_seq_used(&iter->seq));
++ min((size_t)trace_seq_used(&iter->seq),
++ PAGE_SIZE));
+ if (ret < 0) {
+ __free_page(spd.pages[i]);
+ break;
+ }
+ spd.partial[i].offset = 0;
+- spd.partial[i].len = trace_seq_used(&iter->seq);
++ spd.partial[i].len = ret;
+
+ trace_seq_init(&iter->seq);
+ }
+diff --git a/kernel/trace/trace_output.c b/kernel/trace/trace_output.c
+index 03d56f711ad14e..358bbebbab5021 100644
+--- a/kernel/trace/trace_output.c
++++ b/kernel/trace/trace_output.c
+@@ -961,11 +961,12 @@ enum print_line_t print_event_fields(struct trace_iterator *iter,
+ struct trace_event_call *call;
+ struct list_head *head;
+
++ lockdep_assert_held_read(&trace_event_sem);
++
+ /* ftrace defined events have separate call structures */
+ if (event->type <= __TRACE_LAST_TYPE) {
+ bool found = false;
+
+- down_read(&trace_event_sem);
+ list_for_each_entry(call, &ftrace_events, list) {
+ if (call->event.type == event->type) {
+ found = true;
+@@ -975,7 +976,6 @@ enum print_line_t print_event_fields(struct trace_iterator *iter,
+ if (call->event.type > __TRACE_LAST_TYPE)
+ break;
+ }
+- up_read(&trace_event_sem);
+ if (!found) {
+ trace_seq_printf(&iter->seq, "UNKNOWN TYPE %d\n", event->type);
+ goto out;
+diff --git a/mm/memblock.c b/mm/memblock.c
+index 95af35fd138935..9c2df1c609487b 100644
+--- a/mm/memblock.c
++++ b/mm/memblock.c
+@@ -2180,11 +2180,14 @@ static void __init memmap_init_reserved_pages(void)
+ struct memblock_region *region;
+ phys_addr_t start, end;
+ int nid;
++ unsigned long max_reserved;
+
+ /*
+ * set nid on all reserved pages and also treat struct
+ * pages for the NOMAP regions as PageReserved
+ */
++repeat:
++ max_reserved = memblock.reserved.max;
+ for_each_mem_region(region) {
+ nid = memblock_get_region_node(region);
+ start = region->base;
+@@ -2193,8 +2196,15 @@ static void __init memmap_init_reserved_pages(void)
+ if (memblock_is_nomap(region))
+ reserve_bootmem_region(start, end, nid);
+
+- memblock_set_node(start, end, &memblock.reserved, nid);
++ memblock_set_node(start, region->size, &memblock.reserved, nid);
+ }
++ /*
++ * 'max' is changed means memblock.reserved has been doubled its
++ * array, which may result a new reserved region before current
++ * 'start'. Now we should repeat the procedure to set its node id.
++ */
++ if (max_reserved != memblock.reserved.max)
++ goto repeat;
+
+ /*
+ * initialize struct pages for reserved regions that don't have
+diff --git a/mm/slub.c b/mm/slub.c
+index 96babca6b33036..87f3edf9acb8ef 100644
+--- a/mm/slub.c
++++ b/mm/slub.c
+@@ -2025,18 +2025,6 @@ static inline void free_slab_obj_exts(struct slab *slab)
+ slab->obj_exts = 0;
+ }
+
+-static inline bool need_slab_obj_ext(void)
+-{
+- if (mem_alloc_profiling_enabled())
+- return true;
+-
+- /*
+- * CONFIG_MEMCG creates vector of obj_cgroup objects conditionally
+- * inside memcg_slab_post_alloc_hook. No other users for now.
+- */
+- return false;
+-}
+-
+ #else /* CONFIG_SLAB_OBJ_EXT */
+
+ static inline void init_slab_obj_exts(struct slab *slab)
+@@ -2053,11 +2041,6 @@ static inline void free_slab_obj_exts(struct slab *slab)
+ {
+ }
+
+-static inline bool need_slab_obj_ext(void)
+-{
+- return false;
+-}
+-
+ #endif /* CONFIG_SLAB_OBJ_EXT */
+
+ #ifdef CONFIG_MEM_ALLOC_PROFILING
+@@ -2089,7 +2072,7 @@ prepare_slab_obj_exts_hook(struct kmem_cache *s, gfp_t flags, void *p)
+ static inline void
+ alloc_tagging_slab_alloc_hook(struct kmem_cache *s, void *object, gfp_t flags)
+ {
+- if (need_slab_obj_ext()) {
++ if (mem_alloc_profiling_enabled()) {
+ struct slabobj_ext *obj_exts;
+
+ obj_exts = prepare_slab_obj_exts_hook(s, flags, object);
+@@ -2565,8 +2548,12 @@ static __always_inline void account_slab(struct slab *slab, int order,
+ static __always_inline void unaccount_slab(struct slab *slab, int order,
+ struct kmem_cache *s)
+ {
+- if (memcg_kmem_online() || need_slab_obj_ext())
+- free_slab_obj_exts(slab);
++ /*
++ * The slab object extensions should now be freed regardless of
++ * whether mem_alloc_profiling_enabled() or not because profiling
++ * might have been disabled after slab->obj_exts got allocated.
++ */
++ free_slab_obj_exts(slab);
+
+ mod_node_page_state(slab_pgdat(slab), cache_vmstat_idx(s),
+ -(PAGE_SIZE << order));
+diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c
+index d097e308a7554f..ae66fa0a5fb584 100644
+--- a/net/bluetooth/hci_conn.c
++++ b/net/bluetooth/hci_conn.c
+@@ -2061,95 +2061,6 @@ static int create_big_sync(struct hci_dev *hdev, void *data)
+ return hci_le_create_big(conn, &conn->iso_qos);
+ }
+
+-static void create_pa_complete(struct hci_dev *hdev, void *data, int err)
+-{
+- bt_dev_dbg(hdev, "");
+-
+- if (err)
+- bt_dev_err(hdev, "Unable to create PA: %d", err);
+-}
+-
+-static bool hci_conn_check_create_pa_sync(struct hci_conn *conn)
+-{
+- if (conn->type != ISO_LINK || conn->sid == HCI_SID_INVALID)
+- return false;
+-
+- return true;
+-}
+-
+-static int create_pa_sync(struct hci_dev *hdev, void *data)
+-{
+- struct hci_cp_le_pa_create_sync cp = {0};
+- struct hci_conn *conn;
+- int err = 0;
+-
+- hci_dev_lock(hdev);
+-
+- rcu_read_lock();
+-
+- /* The spec allows only one pending LE Periodic Advertising Create
+- * Sync command at a time. If the command is pending now, don't do
+- * anything. We check for pending connections after each PA Sync
+- * Established event.
+- *
+- * BLUETOOTH CORE SPECIFICATION Version 5.3 | Vol 4, Part E
+- * page 2493:
+- *
+- * If the Host issues this command when another HCI_LE_Periodic_
+- * Advertising_Create_Sync command is pending, the Controller shall
+- * return the error code Command Disallowed (0x0C).
+- */
+- list_for_each_entry_rcu(conn, &hdev->conn_hash.list, list) {
+- if (test_bit(HCI_CONN_CREATE_PA_SYNC, &conn->flags))
+- goto unlock;
+- }
+-
+- list_for_each_entry_rcu(conn, &hdev->conn_hash.list, list) {
+- if (hci_conn_check_create_pa_sync(conn)) {
+- struct bt_iso_qos *qos = &conn->iso_qos;
+-
+- cp.options = qos->bcast.options;
+- cp.sid = conn->sid;
+- cp.addr_type = conn->dst_type;
+- bacpy(&cp.addr, &conn->dst);
+- cp.skip = cpu_to_le16(qos->bcast.skip);
+- cp.sync_timeout = cpu_to_le16(qos->bcast.sync_timeout);
+- cp.sync_cte_type = qos->bcast.sync_cte_type;
+-
+- break;
+- }
+- }
+-
+-unlock:
+- rcu_read_unlock();
+-
+- hci_dev_unlock(hdev);
+-
+- if (bacmp(&cp.addr, BDADDR_ANY)) {
+- hci_dev_set_flag(hdev, HCI_PA_SYNC);
+- set_bit(HCI_CONN_CREATE_PA_SYNC, &conn->flags);
+-
+- err = __hci_cmd_sync_status(hdev, HCI_OP_LE_PA_CREATE_SYNC,
+- sizeof(cp), &cp, HCI_CMD_TIMEOUT);
+- if (!err)
+- err = hci_update_passive_scan_sync(hdev);
+-
+- if (err) {
+- hci_dev_clear_flag(hdev, HCI_PA_SYNC);
+- clear_bit(HCI_CONN_CREATE_PA_SYNC, &conn->flags);
+- }
+- }
+-
+- return err;
+-}
+-
+-int hci_pa_create_sync_pending(struct hci_dev *hdev)
+-{
+- /* Queue start pa_create_sync and scan */
+- return hci_cmd_sync_queue(hdev, create_pa_sync,
+- NULL, create_pa_complete);
+-}
+-
+ struct hci_conn *hci_pa_create_sync(struct hci_dev *hdev, bdaddr_t *dst,
+ __u8 dst_type, __u8 sid,
+ struct bt_iso_qos *qos)
+@@ -2164,97 +2075,18 @@ struct hci_conn *hci_pa_create_sync(struct hci_dev *hdev, bdaddr_t *dst,
+ conn->dst_type = dst_type;
+ conn->sid = sid;
+ conn->state = BT_LISTEN;
++ conn->conn_timeout = msecs_to_jiffies(qos->bcast.sync_timeout * 10);
+
+ hci_conn_hold(conn);
+
+- hci_pa_create_sync_pending(hdev);
++ hci_connect_pa_sync(hdev, conn);
+
+ return conn;
+ }
+
+-static bool hci_conn_check_create_big_sync(struct hci_conn *conn)
+-{
+- if (!conn->num_bis)
+- return false;
+-
+- return true;
+-}
+-
+-static void big_create_sync_complete(struct hci_dev *hdev, void *data, int err)
+-{
+- bt_dev_dbg(hdev, "");
+-
+- if (err)
+- bt_dev_err(hdev, "Unable to create BIG sync: %d", err);
+-}
+-
+-static int big_create_sync(struct hci_dev *hdev, void *data)
+-{
+- DEFINE_FLEX(struct hci_cp_le_big_create_sync, pdu, bis, num_bis, 0x11);
+- struct hci_conn *conn;
+-
+- rcu_read_lock();
+-
+- pdu->num_bis = 0;
+-
+- /* The spec allows only one pending LE BIG Create Sync command at
+- * a time. If the command is pending now, don't do anything. We
+- * check for pending connections after each BIG Sync Established
+- * event.
+- *
+- * BLUETOOTH CORE SPECIFICATION Version 5.3 | Vol 4, Part E
+- * page 2586:
+- *
+- * If the Host sends this command when the Controller is in the
+- * process of synchronizing to any BIG, i.e. the HCI_LE_BIG_Sync_
+- * Established event has not been generated, the Controller shall
+- * return the error code Command Disallowed (0x0C).
+- */
+- list_for_each_entry_rcu(conn, &hdev->conn_hash.list, list) {
+- if (test_bit(HCI_CONN_CREATE_BIG_SYNC, &conn->flags))
+- goto unlock;
+- }
+-
+- list_for_each_entry_rcu(conn, &hdev->conn_hash.list, list) {
+- if (hci_conn_check_create_big_sync(conn)) {
+- struct bt_iso_qos *qos = &conn->iso_qos;
+-
+- set_bit(HCI_CONN_CREATE_BIG_SYNC, &conn->flags);
+-
+- pdu->handle = qos->bcast.big;
+- pdu->sync_handle = cpu_to_le16(conn->sync_handle);
+- pdu->encryption = qos->bcast.encryption;
+- memcpy(pdu->bcode, qos->bcast.bcode,
+- sizeof(pdu->bcode));
+- pdu->mse = qos->bcast.mse;
+- pdu->timeout = cpu_to_le16(qos->bcast.timeout);
+- pdu->num_bis = conn->num_bis;
+- memcpy(pdu->bis, conn->bis, conn->num_bis);
+-
+- break;
+- }
+- }
+-
+-unlock:
+- rcu_read_unlock();
+-
+- if (!pdu->num_bis)
+- return 0;
+-
+- return hci_send_cmd(hdev, HCI_OP_LE_BIG_CREATE_SYNC,
+- struct_size(pdu, bis, pdu->num_bis), pdu);
+-}
+-
+-int hci_le_big_create_sync_pending(struct hci_dev *hdev)
+-{
+- /* Queue big_create_sync */
+- return hci_cmd_sync_queue_once(hdev, big_create_sync,
+- NULL, big_create_sync_complete);
+-}
+-
+-int hci_le_big_create_sync(struct hci_dev *hdev, struct hci_conn *hcon,
+- struct bt_iso_qos *qos,
+- __u16 sync_handle, __u8 num_bis, __u8 bis[])
++int hci_conn_big_create_sync(struct hci_dev *hdev, struct hci_conn *hcon,
++ struct bt_iso_qos *qos, __u16 sync_handle,
++ __u8 num_bis, __u8 bis[])
+ {
+ int err;
+
+@@ -2271,9 +2103,10 @@ int hci_le_big_create_sync(struct hci_dev *hdev, struct hci_conn *hcon,
+
+ hcon->num_bis = num_bis;
+ memcpy(hcon->bis, bis, num_bis);
++ hcon->conn_timeout = msecs_to_jiffies(qos->bcast.timeout * 10);
+ }
+
+- return hci_le_big_create_sync_pending(hdev);
++ return hci_connect_big_sync(hdev, hcon);
+ }
+
+ static void create_big_complete(struct hci_dev *hdev, void *data, int err)
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index 20d3cdcb14f6cd..ab940ec698c0f5 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -6371,8 +6371,7 @@ static void hci_le_pa_sync_estabilished_evt(struct hci_dev *hdev, void *data,
+
+ hci_dev_clear_flag(hdev, HCI_PA_SYNC);
+
+- conn = hci_conn_hash_lookup_sid(hdev, ev->sid, &ev->bdaddr,
+- ev->bdaddr_type);
++ conn = hci_conn_hash_lookup_create_pa_sync(hdev);
+ if (!conn) {
+ bt_dev_err(hdev,
+ "Unable to find connection for dst %pMR sid 0x%2.2x",
+@@ -6411,9 +6410,6 @@ static void hci_le_pa_sync_estabilished_evt(struct hci_dev *hdev, void *data,
+ }
+
+ unlock:
+- /* Handle any other pending PA sync command */
+- hci_pa_create_sync_pending(hdev);
+-
+ hci_dev_unlock(hdev);
+ }
+
+@@ -6925,7 +6921,7 @@ static void hci_le_big_sync_established_evt(struct hci_dev *hdev, void *data,
+
+ bt_dev_dbg(hdev, "status 0x%2.2x", ev->status);
+
+- if (!hci_le_ev_skb_pull(hdev, skb, HCI_EVT_LE_BIG_SYNC_ESTABILISHED,
++ if (!hci_le_ev_skb_pull(hdev, skb, HCI_EVT_LE_BIG_SYNC_ESTABLISHED,
+ flex_array_size(ev, bis, ev->num_bis)))
+ return;
+
+@@ -6996,9 +6992,6 @@ static void hci_le_big_sync_established_evt(struct hci_dev *hdev, void *data,
+ }
+
+ unlock:
+- /* Handle any other pending BIG sync command */
+- hci_le_big_create_sync_pending(hdev);
+-
+ hci_dev_unlock(hdev);
+ }
+
+@@ -7120,8 +7113,8 @@ static const struct hci_le_ev {
+ hci_le_create_big_complete_evt,
+ sizeof(struct hci_evt_le_create_big_complete),
+ HCI_MAX_EVENT_SIZE),
+- /* [0x1d = HCI_EV_LE_BIG_SYNC_ESTABILISHED] */
+- HCI_LE_EV_VL(HCI_EVT_LE_BIG_SYNC_ESTABILISHED,
++ /* [0x1d = HCI_EV_LE_BIG_SYNC_ESTABLISHED] */
++ HCI_LE_EV_VL(HCI_EVT_LE_BIG_SYNC_ESTABLISHED,
+ hci_le_big_sync_established_evt,
+ sizeof(struct hci_evt_le_big_sync_estabilished),
+ HCI_MAX_EVENT_SIZE),
+diff --git a/net/bluetooth/hci_sync.c b/net/bluetooth/hci_sync.c
+index 14c3ee5c6a1e89..85c6ac082bfcda 100644
+--- a/net/bluetooth/hci_sync.c
++++ b/net/bluetooth/hci_sync.c
+@@ -2693,16 +2693,16 @@ static u8 hci_update_accept_list_sync(struct hci_dev *hdev)
+
+ /* Force address filtering if PA Sync is in progress */
+ if (hci_dev_test_flag(hdev, HCI_PA_SYNC)) {
+- struct hci_cp_le_pa_create_sync *sent;
++ struct hci_conn *conn;
+
+- sent = hci_sent_cmd_data(hdev, HCI_OP_LE_PA_CREATE_SYNC);
+- if (sent) {
++ conn = hci_conn_hash_lookup_create_pa_sync(hdev);
++ if (conn) {
+ struct conn_params pa;
+
+ memset(&pa, 0, sizeof(pa));
+
+- bacpy(&pa.addr, &sent->addr);
+- pa.addr_type = sent->addr_type;
++ bacpy(&pa.addr, &conn->dst);
++ pa.addr_type = conn->dst_type;
+
+ /* Clear first since there could be addresses left
+ * behind.
+@@ -6895,3 +6895,143 @@ int hci_le_conn_update_sync(struct hci_dev *hdev, struct hci_conn *conn,
+ return __hci_cmd_sync_status(hdev, HCI_OP_LE_CONN_UPDATE,
+ sizeof(cp), &cp, HCI_CMD_TIMEOUT);
+ }
++
++static void create_pa_complete(struct hci_dev *hdev, void *data, int err)
++{
++ bt_dev_dbg(hdev, "err %d", err);
++
++ if (!err)
++ return;
++
++ hci_dev_clear_flag(hdev, HCI_PA_SYNC);
++
++ if (err == -ECANCELED)
++ return;
++
++ hci_dev_lock(hdev);
++
++ hci_update_passive_scan_sync(hdev);
++
++ hci_dev_unlock(hdev);
++}
++
++static int hci_le_pa_create_sync(struct hci_dev *hdev, void *data)
++{
++ struct hci_cp_le_pa_create_sync cp;
++ struct hci_conn *conn = data;
++ struct bt_iso_qos *qos = &conn->iso_qos;
++ int err;
++
++ if (!hci_conn_valid(hdev, conn))
++ return -ECANCELED;
++
++ if (hci_dev_test_and_set_flag(hdev, HCI_PA_SYNC))
++ return -EBUSY;
++
++ /* Mark HCI_CONN_CREATE_PA_SYNC so hci_update_passive_scan_sync can
++ * program the address in the allow list so PA advertisements can be
++ * received.
++ */
++ set_bit(HCI_CONN_CREATE_PA_SYNC, &conn->flags);
++
++ hci_update_passive_scan_sync(hdev);
++
++ memset(&cp, 0, sizeof(cp));
++ cp.options = qos->bcast.options;
++ cp.sid = conn->sid;
++ cp.addr_type = conn->dst_type;
++ bacpy(&cp.addr, &conn->dst);
++ cp.skip = cpu_to_le16(qos->bcast.skip);
++ cp.sync_timeout = cpu_to_le16(qos->bcast.sync_timeout);
++ cp.sync_cte_type = qos->bcast.sync_cte_type;
++
++ /* The spec allows only one pending LE Periodic Advertising Create
++ * Sync command at a time so we forcefully wait for PA Sync Established
++ * event since cmd_work can only schedule one command at a time.
++ *
++ * BLUETOOTH CORE SPECIFICATION Version 5.3 | Vol 4, Part E
++ * page 2493:
++ *
++ * If the Host issues this command when another HCI_LE_Periodic_
++ * Advertising_Create_Sync command is pending, the Controller shall
++ * return the error code Command Disallowed (0x0C).
++ */
++ err = __hci_cmd_sync_status_sk(hdev, HCI_OP_LE_PA_CREATE_SYNC,
++ sizeof(cp), &cp,
++ HCI_EV_LE_PA_SYNC_ESTABLISHED,
++ conn->conn_timeout, NULL);
++ if (err == -ETIMEDOUT)
++ __hci_cmd_sync_status(hdev, HCI_OP_LE_PA_CREATE_SYNC_CANCEL,
++ 0, NULL, HCI_CMD_TIMEOUT);
++
++ return err;
++}
++
++int hci_connect_pa_sync(struct hci_dev *hdev, struct hci_conn *conn)
++{
++ return hci_cmd_sync_queue_once(hdev, hci_le_pa_create_sync, conn,
++ create_pa_complete);
++}
++
++static void create_big_complete(struct hci_dev *hdev, void *data, int err)
++{
++ struct hci_conn *conn = data;
++
++ bt_dev_dbg(hdev, "err %d", err);
++
++ if (err == -ECANCELED)
++ return;
++
++ if (hci_conn_valid(hdev, conn))
++ clear_bit(HCI_CONN_CREATE_BIG_SYNC, &conn->flags);
++}
++
++static int hci_le_big_create_sync(struct hci_dev *hdev, void *data)
++{
++ DEFINE_FLEX(struct hci_cp_le_big_create_sync, cp, bis, num_bis, 0x11);
++ struct hci_conn *conn = data;
++ struct bt_iso_qos *qos = &conn->iso_qos;
++ int err;
++
++ if (!hci_conn_valid(hdev, conn))
++ return -ECANCELED;
++
++ set_bit(HCI_CONN_CREATE_BIG_SYNC, &conn->flags);
++
++ memset(cp, 0, sizeof(*cp));
++ cp->handle = qos->bcast.big;
++ cp->sync_handle = cpu_to_le16(conn->sync_handle);
++ cp->encryption = qos->bcast.encryption;
++ memcpy(cp->bcode, qos->bcast.bcode, sizeof(cp->bcode));
++ cp->mse = qos->bcast.mse;
++ cp->timeout = cpu_to_le16(qos->bcast.timeout);
++ cp->num_bis = conn->num_bis;
++ memcpy(cp->bis, conn->bis, conn->num_bis);
++
++ /* The spec allows only one pending LE BIG Create Sync command at
++ * a time, so we forcefully wait for BIG Sync Established event since
++ * cmd_work can only schedule one command at a time.
++ *
++ * BLUETOOTH CORE SPECIFICATION Version 5.3 | Vol 4, Part E
++ * page 2586:
++ *
++ * If the Host sends this command when the Controller is in the
++ * process of synchronizing to any BIG, i.e. the HCI_LE_BIG_Sync_
++ * Established event has not been generated, the Controller shall
++ * return the error code Command Disallowed (0x0C).
++ */
++ err = __hci_cmd_sync_status_sk(hdev, HCI_OP_LE_BIG_CREATE_SYNC,
++ struct_size(cp, bis, cp->num_bis), cp,
++ HCI_EVT_LE_BIG_SYNC_ESTABLISHED,
++ conn->conn_timeout, NULL);
++ if (err == -ETIMEDOUT)
++ hci_le_big_terminate_sync(hdev, cp->handle);
++
++ return err;
++}
++
++int hci_connect_big_sync(struct hci_dev *hdev, struct hci_conn *conn)
++{
++ return hci_cmd_sync_queue_once(hdev, hci_le_big_create_sync, conn,
++ create_big_complete);
++}
+diff --git a/net/bluetooth/iso.c b/net/bluetooth/iso.c
+index 0cb52a3308bae4..491efb327b5b52 100644
+--- a/net/bluetooth/iso.c
++++ b/net/bluetooth/iso.c
+@@ -1450,14 +1450,13 @@ static void iso_conn_big_sync(struct sock *sk)
+ lock_sock(sk);
+
+ if (!test_and_set_bit(BT_SK_BIG_SYNC, &iso_pi(sk)->flags)) {
+- err = hci_le_big_create_sync(hdev, iso_pi(sk)->conn->hcon,
+- &iso_pi(sk)->qos,
+- iso_pi(sk)->sync_handle,
+- iso_pi(sk)->bc_num_bis,
+- iso_pi(sk)->bc_bis);
++ err = hci_conn_big_create_sync(hdev, iso_pi(sk)->conn->hcon,
++ &iso_pi(sk)->qos,
++ iso_pi(sk)->sync_handle,
++ iso_pi(sk)->bc_num_bis,
++ iso_pi(sk)->bc_bis);
+ if (err)
+- bt_dev_err(hdev, "hci_le_big_create_sync: %d",
+- err);
++ bt_dev_err(hdev, "hci_big_create_sync: %d", err);
+ }
+
+ release_sock(sk);
+@@ -1906,7 +1905,7 @@ static void iso_conn_ready(struct iso_conn *conn)
+ hcon);
+ } else if (test_bit(HCI_CONN_BIG_SYNC_FAILED, &hcon->flags)) {
+ ev = hci_recv_event_data(hcon->hdev,
+- HCI_EVT_LE_BIG_SYNC_ESTABILISHED);
++ HCI_EVT_LE_BIG_SYNC_ESTABLISHED);
+
+ /* Get reference to PA sync parent socket, if it exists */
+ parent = iso_get_sock(&hcon->src, &hcon->dst,
+@@ -2097,12 +2096,11 @@ int iso_connect_ind(struct hci_dev *hdev, bdaddr_t *bdaddr, __u8 *flags)
+
+ if (!test_bit(BT_SK_DEFER_SETUP, &bt_sk(sk)->flags) &&
+ !test_and_set_bit(BT_SK_BIG_SYNC, &iso_pi(sk)->flags)) {
+- err = hci_le_big_create_sync(hdev,
+- hcon,
+- &iso_pi(sk)->qos,
+- iso_pi(sk)->sync_handle,
+- iso_pi(sk)->bc_num_bis,
+- iso_pi(sk)->bc_bis);
++ err = hci_conn_big_create_sync(hdev, hcon,
++ &iso_pi(sk)->qos,
++ iso_pi(sk)->sync_handle,
++ iso_pi(sk)->bc_num_bis,
++ iso_pi(sk)->bc_bis);
+ if (err) {
+ bt_dev_err(hdev, "hci_le_big_create_sync: %d",
+ err);
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index a55388fbf07c84..c219a8c596d3e5 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -7380,6 +7380,9 @@ static int l2cap_recv_frag(struct l2cap_conn *conn, struct sk_buff *skb,
+ return -ENOMEM;
+ /* Init rx_len */
+ conn->rx_len = len;
++
++ skb_set_delivery_time(conn->rx_skb, skb->tstamp,
++ skb->tstamp_type);
+ }
+
+ /* Copy as much as the rx_skb can hold */
+diff --git a/net/ipv4/tcp_offload.c b/net/ipv4/tcp_offload.c
+index 2dfac79dc78b8b..e04ebe651c3347 100644
+--- a/net/ipv4/tcp_offload.c
++++ b/net/ipv4/tcp_offload.c
+@@ -435,7 +435,7 @@ static void tcp4_check_fraglist_gro(struct list_head *head, struct sk_buff *skb,
+ iif, sdif);
+ NAPI_GRO_CB(skb)->is_flist = !sk;
+ if (sk)
+- sock_put(sk);
++ sock_gen_put(sk);
+ }
+
+ INDIRECT_CALLABLE_SCOPE
+diff --git a/net/ipv4/udp_offload.c b/net/ipv4/udp_offload.c
+index ecfca59f31f13e..da5d4aea1b5915 100644
+--- a/net/ipv4/udp_offload.c
++++ b/net/ipv4/udp_offload.c
+@@ -247,6 +247,62 @@ static struct sk_buff *__udpv4_gso_segment_list_csum(struct sk_buff *segs)
+ return segs;
+ }
+
++static void __udpv6_gso_segment_csum(struct sk_buff *seg,
++ struct in6_addr *oldip,
++ const struct in6_addr *newip,
++ __be16 *oldport, __be16 newport)
++{
++ struct udphdr *uh = udp_hdr(seg);
++
++ if (ipv6_addr_equal(oldip, newip) && *oldport == newport)
++ return;
++
++ if (uh->check) {
++ inet_proto_csum_replace16(&uh->check, seg, oldip->s6_addr32,
++ newip->s6_addr32, true);
++
++ inet_proto_csum_replace2(&uh->check, seg, *oldport, newport,
++ false);
++ if (!uh->check)
++ uh->check = CSUM_MANGLED_0;
++ }
++
++ *oldip = *newip;
++ *oldport = newport;
++}
++
++static struct sk_buff *__udpv6_gso_segment_list_csum(struct sk_buff *segs)
++{
++ const struct ipv6hdr *iph;
++ const struct udphdr *uh;
++ struct ipv6hdr *iph2;
++ struct sk_buff *seg;
++ struct udphdr *uh2;
++
++ seg = segs;
++ uh = udp_hdr(seg);
++ iph = ipv6_hdr(seg);
++ uh2 = udp_hdr(seg->next);
++ iph2 = ipv6_hdr(seg->next);
++
++ if (!(*(const u32 *)&uh->source ^ *(const u32 *)&uh2->source) &&
++ ipv6_addr_equal(&iph->saddr, &iph2->saddr) &&
++ ipv6_addr_equal(&iph->daddr, &iph2->daddr))
++ return segs;
++
++ while ((seg = seg->next)) {
++ uh2 = udp_hdr(seg);
++ iph2 = ipv6_hdr(seg);
++
++ __udpv6_gso_segment_csum(seg, &iph2->saddr, &iph->saddr,
++ &uh2->source, uh->source);
++ __udpv6_gso_segment_csum(seg, &iph2->daddr, &iph->daddr,
++ &uh2->dest, uh->dest);
++ }
++
++ return segs;
++}
++
+ static struct sk_buff *__udp_gso_segment_list(struct sk_buff *skb,
+ netdev_features_t features,
+ bool is_ipv6)
+@@ -259,7 +315,10 @@ static struct sk_buff *__udp_gso_segment_list(struct sk_buff *skb,
+
+ udp_hdr(skb)->len = htons(sizeof(struct udphdr) + mss);
+
+- return is_ipv6 ? skb : __udpv4_gso_segment_list_csum(skb);
++ if (is_ipv6)
++ return __udpv6_gso_segment_list_csum(skb);
++ else
++ return __udpv4_gso_segment_list_csum(skb);
+ }
+
+ struct sk_buff *__udp_gso_segment(struct sk_buff *gso_skb,
+diff --git a/net/ipv6/tcpv6_offload.c b/net/ipv6/tcpv6_offload.c
+index ae2da28f9dfb1c..5ab509a5fbdfcf 100644
+--- a/net/ipv6/tcpv6_offload.c
++++ b/net/ipv6/tcpv6_offload.c
+@@ -42,7 +42,7 @@ static void tcp6_check_fraglist_gro(struct list_head *head, struct sk_buff *skb,
+ iif, sdif);
+ NAPI_GRO_CB(skb)->is_flist = !sk;
+ if (sk)
+- sock_put(sk);
++ sock_gen_put(sk);
+ #endif /* IS_ENABLED(CONFIG_IPV6) */
+ }
+
+diff --git a/net/sched/sch_drr.c b/net/sched/sch_drr.c
+index c69b999fae171c..9b6d79bd873712 100644
+--- a/net/sched/sch_drr.c
++++ b/net/sched/sch_drr.c
+@@ -35,6 +35,11 @@ struct drr_sched {
+ struct Qdisc_class_hash clhash;
+ };
+
++static bool cl_is_active(struct drr_class *cl)
++{
++ return !list_empty(&cl->alist);
++}
++
+ static struct drr_class *drr_find_class(struct Qdisc *sch, u32 classid)
+ {
+ struct drr_sched *q = qdisc_priv(sch);
+@@ -105,6 +110,7 @@ static int drr_change_class(struct Qdisc *sch, u32 classid, u32 parentid,
+ return -ENOBUFS;
+
+ gnet_stats_basic_sync_init(&cl->bstats);
++ INIT_LIST_HEAD(&cl->alist);
+ cl->common.classid = classid;
+ cl->quantum = quantum;
+ cl->qdisc = qdisc_create_dflt(sch->dev_queue,
+@@ -229,7 +235,7 @@ static void drr_qlen_notify(struct Qdisc *csh, unsigned long arg)
+ {
+ struct drr_class *cl = (struct drr_class *)arg;
+
+- list_del(&cl->alist);
++ list_del_init(&cl->alist);
+ }
+
+ static int drr_dump_class(struct Qdisc *sch, unsigned long arg,
+@@ -336,7 +342,6 @@ static int drr_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+ struct drr_sched *q = qdisc_priv(sch);
+ struct drr_class *cl;
+ int err = 0;
+- bool first;
+
+ cl = drr_classify(skb, sch, &err);
+ if (cl == NULL) {
+@@ -346,7 +351,6 @@ static int drr_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+ return err;
+ }
+
+- first = !cl->qdisc->q.qlen;
+ err = qdisc_enqueue(skb, cl->qdisc, to_free);
+ if (unlikely(err != NET_XMIT_SUCCESS)) {
+ if (net_xmit_drop_count(err)) {
+@@ -356,7 +360,7 @@ static int drr_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+ return err;
+ }
+
+- if (first) {
++ if (!cl_is_active(cl)) {
+ list_add_tail(&cl->alist, &q->active);
+ cl->deficit = cl->quantum;
+ }
+@@ -390,7 +394,7 @@ static struct sk_buff *drr_dequeue(struct Qdisc *sch)
+ if (unlikely(skb == NULL))
+ goto out;
+ if (cl->qdisc->q.qlen == 0)
+- list_del(&cl->alist);
++ list_del_init(&cl->alist);
+
+ bstats_update(&cl->bstats, skb);
+ qdisc_bstats_update(sch, skb);
+@@ -431,7 +435,7 @@ static void drr_reset_qdisc(struct Qdisc *sch)
+ for (i = 0; i < q->clhash.hashsize; i++) {
+ hlist_for_each_entry(cl, &q->clhash.hash[i], common.hnode) {
+ if (cl->qdisc->q.qlen)
+- list_del(&cl->alist);
++ list_del_init(&cl->alist);
+ qdisc_reset(cl->qdisc);
+ }
+ }
+diff --git a/net/sched/sch_ets.c b/net/sched/sch_ets.c
+index 516038a4416380..2c069f0181c62b 100644
+--- a/net/sched/sch_ets.c
++++ b/net/sched/sch_ets.c
+@@ -74,6 +74,11 @@ static const struct nla_policy ets_class_policy[TCA_ETS_MAX + 1] = {
+ [TCA_ETS_QUANTA_BAND] = { .type = NLA_U32 },
+ };
+
++static bool cl_is_active(struct ets_class *cl)
++{
++ return !list_empty(&cl->alist);
++}
++
+ static int ets_quantum_parse(struct Qdisc *sch, const struct nlattr *attr,
+ unsigned int *quantum,
+ struct netlink_ext_ack *extack)
+@@ -293,7 +298,7 @@ static void ets_class_qlen_notify(struct Qdisc *sch, unsigned long arg)
+ * to remove them.
+ */
+ if (!ets_class_is_strict(q, cl) && sch->q.qlen)
+- list_del(&cl->alist);
++ list_del_init(&cl->alist);
+ }
+
+ static int ets_class_dump(struct Qdisc *sch, unsigned long arg,
+@@ -416,7 +421,6 @@ static int ets_qdisc_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+ struct ets_sched *q = qdisc_priv(sch);
+ struct ets_class *cl;
+ int err = 0;
+- bool first;
+
+ cl = ets_classify(skb, sch, &err);
+ if (!cl) {
+@@ -426,7 +430,6 @@ static int ets_qdisc_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+ return err;
+ }
+
+- first = !cl->qdisc->q.qlen;
+ err = qdisc_enqueue(skb, cl->qdisc, to_free);
+ if (unlikely(err != NET_XMIT_SUCCESS)) {
+ if (net_xmit_drop_count(err)) {
+@@ -436,7 +439,7 @@ static int ets_qdisc_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+ return err;
+ }
+
+- if (first && !ets_class_is_strict(q, cl)) {
++ if (!cl_is_active(cl) && !ets_class_is_strict(q, cl)) {
+ list_add_tail(&cl->alist, &q->active);
+ cl->deficit = cl->quantum;
+ }
+@@ -488,7 +491,7 @@ static struct sk_buff *ets_qdisc_dequeue(struct Qdisc *sch)
+ if (unlikely(!skb))
+ goto out;
+ if (cl->qdisc->q.qlen == 0)
+- list_del(&cl->alist);
++ list_del_init(&cl->alist);
+ return ets_qdisc_dequeue_skb(sch, skb);
+ }
+
+@@ -657,7 +660,7 @@ static int ets_qdisc_change(struct Qdisc *sch, struct nlattr *opt,
+ }
+ for (i = q->nbands; i < oldbands; i++) {
+ if (i >= q->nstrict && q->classes[i].qdisc->q.qlen)
+- list_del(&q->classes[i].alist);
++ list_del_init(&q->classes[i].alist);
+ qdisc_tree_flush_backlog(q->classes[i].qdisc);
+ }
+ WRITE_ONCE(q->nstrict, nstrict);
+@@ -713,7 +716,7 @@ static void ets_qdisc_reset(struct Qdisc *sch)
+
+ for (band = q->nstrict; band < q->nbands; band++) {
+ if (q->classes[band].qdisc->q.qlen)
+- list_del(&q->classes[band].alist);
++ list_del_init(&q->classes[band].alist);
+ }
+ for (band = 0; band < q->nbands; band++)
+ qdisc_reset(q->classes[band].qdisc);
+diff --git a/net/sched/sch_hfsc.c b/net/sched/sch_hfsc.c
+index 5bb4ab9941d6e9..cb8c525ea20eab 100644
+--- a/net/sched/sch_hfsc.c
++++ b/net/sched/sch_hfsc.c
+@@ -203,7 +203,10 @@ eltree_insert(struct hfsc_class *cl)
+ static inline void
+ eltree_remove(struct hfsc_class *cl)
+ {
+- rb_erase(&cl->el_node, &cl->sched->eligible);
++ if (!RB_EMPTY_NODE(&cl->el_node)) {
++ rb_erase(&cl->el_node, &cl->sched->eligible);
++ RB_CLEAR_NODE(&cl->el_node);
++ }
+ }
+
+ static inline void
+@@ -1225,7 +1228,8 @@ hfsc_qlen_notify(struct Qdisc *sch, unsigned long arg)
+ /* vttree is now handled in update_vf() so that update_vf(cl, 0, 0)
+ * needs to be called explicitly to remove a class from vttree.
+ */
+- update_vf(cl, 0, 0);
++ if (cl->cl_nactive)
++ update_vf(cl, 0, 0);
+ if (cl->cl_flags & HFSC_RSC)
+ eltree_remove(cl);
+ }
+@@ -1565,7 +1569,7 @@ hfsc_enqueue(struct sk_buff *skb, struct Qdisc *sch, struct sk_buff **to_free)
+ return err;
+ }
+
+- if (first) {
++ if (first && !cl->cl_nactive) {
+ if (cl->cl_flags & HFSC_RSC)
+ init_ed(cl, len);
+ if (cl->cl_flags & HFSC_FSC)
+diff --git a/net/sched/sch_htb.c b/net/sched/sch_htb.c
+index c31bc5489bddc0..4b9a639b642e1e 100644
+--- a/net/sched/sch_htb.c
++++ b/net/sched/sch_htb.c
+@@ -1485,6 +1485,8 @@ static void htb_qlen_notify(struct Qdisc *sch, unsigned long arg)
+ {
+ struct htb_class *cl = (struct htb_class *)arg;
+
++ if (!cl->prio_activity)
++ return;
+ htb_deactivate(qdisc_priv(sch), cl);
+ }
+
+diff --git a/net/sched/sch_qfq.c b/net/sched/sch_qfq.c
+index 6a07cdbdb9e12e..42061d02c05295 100644
+--- a/net/sched/sch_qfq.c
++++ b/net/sched/sch_qfq.c
+@@ -202,6 +202,11 @@ struct qfq_sched {
+ */
+ enum update_reason {enqueue, requeue};
+
++static bool cl_is_active(struct qfq_class *cl)
++{
++ return !list_empty(&cl->alist);
++}
++
+ static struct qfq_class *qfq_find_class(struct Qdisc *sch, u32 classid)
+ {
+ struct qfq_sched *q = qdisc_priv(sch);
+@@ -347,7 +352,7 @@ static void qfq_deactivate_class(struct qfq_sched *q, struct qfq_class *cl)
+ struct qfq_aggregate *agg = cl->agg;
+
+
+- list_del(&cl->alist); /* remove from RR queue of the aggregate */
++ list_del_init(&cl->alist); /* remove from RR queue of the aggregate */
+ if (list_empty(&agg->active)) /* agg is now inactive */
+ qfq_deactivate_agg(q, agg);
+ }
+@@ -474,6 +479,7 @@ static int qfq_change_class(struct Qdisc *sch, u32 classid, u32 parentid,
+ gnet_stats_basic_sync_init(&cl->bstats);
+ cl->common.classid = classid;
+ cl->deficit = lmax;
++ INIT_LIST_HEAD(&cl->alist);
+
+ cl->qdisc = qdisc_create_dflt(sch->dev_queue, &pfifo_qdisc_ops,
+ classid, NULL);
+@@ -982,7 +988,7 @@ static struct sk_buff *agg_dequeue(struct qfq_aggregate *agg,
+ cl->deficit -= (int) len;
+
+ if (cl->qdisc->q.qlen == 0) /* no more packets, remove from list */
+- list_del(&cl->alist);
++ list_del_init(&cl->alist);
+ else if (cl->deficit < qdisc_pkt_len(cl->qdisc->ops->peek(cl->qdisc))) {
+ cl->deficit += agg->lmax;
+ list_move_tail(&cl->alist, &agg->active);
+@@ -1214,7 +1220,6 @@ static int qfq_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+ struct qfq_class *cl;
+ struct qfq_aggregate *agg;
+ int err = 0;
+- bool first;
+
+ cl = qfq_classify(skb, sch, &err);
+ if (cl == NULL) {
+@@ -1236,7 +1241,6 @@ static int qfq_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+ }
+
+ gso_segs = skb_is_gso(skb) ? skb_shinfo(skb)->gso_segs : 1;
+- first = !cl->qdisc->q.qlen;
+ err = qdisc_enqueue(skb, cl->qdisc, to_free);
+ if (unlikely(err != NET_XMIT_SUCCESS)) {
+ pr_debug("qfq_enqueue: enqueue failed %d\n", err);
+@@ -1252,8 +1256,8 @@ static int qfq_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+ ++sch->q.qlen;
+
+ agg = cl->agg;
+- /* if the queue was not empty, then done here */
+- if (!first) {
++ /* if the class is active, then done here */
++ if (cl_is_active(cl)) {
+ if (unlikely(skb == cl->qdisc->ops->peek(cl->qdisc)) &&
+ list_first_entry(&agg->active, struct qfq_class, alist)
+ == cl && cl->deficit < len)
+@@ -1415,6 +1419,8 @@ static void qfq_qlen_notify(struct Qdisc *sch, unsigned long arg)
+ struct qfq_sched *q = qdisc_priv(sch);
+ struct qfq_class *cl = (struct qfq_class *)arg;
+
++ if (list_empty(&cl->alist))
++ return;
+ qfq_deactivate_class(q, cl);
+ }
+
+diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
+index a373a7130d7572..c13e13fa79fc0b 100644
+--- a/net/xdp/xsk.c
++++ b/net/xdp/xsk.c
+@@ -337,13 +337,14 @@ int xsk_generic_rcv(struct xdp_sock *xs, struct xdp_buff *xdp)
+ u32 len = xdp_get_buff_len(xdp);
+ int err;
+
+- spin_lock_bh(&xs->rx_lock);
+ err = xsk_rcv_check(xs, xdp, len);
+ if (!err) {
++ spin_lock_bh(&xs->pool->rx_lock);
+ err = __xsk_rcv(xs, xdp, len);
+ xsk_flush(xs);
++ spin_unlock_bh(&xs->pool->rx_lock);
+ }
+- spin_unlock_bh(&xs->rx_lock);
++
+ return err;
+ }
+
+@@ -1730,7 +1731,6 @@ static int xsk_create(struct net *net, struct socket *sock, int protocol,
+ xs = xdp_sk(sk);
+ xs->state = XSK_READY;
+ mutex_init(&xs->mutex);
+- spin_lock_init(&xs->rx_lock);
+
+ INIT_LIST_HEAD(&xs->map_list);
+ spin_lock_init(&xs->map_list_lock);
+diff --git a/net/xdp/xsk_buff_pool.c b/net/xdp/xsk_buff_pool.c
+index d158cb6dd39193..63ae121d29e6c5 100644
+--- a/net/xdp/xsk_buff_pool.c
++++ b/net/xdp/xsk_buff_pool.c
+@@ -87,6 +87,7 @@ struct xsk_buff_pool *xp_create_and_assign_umem(struct xdp_sock *xs,
+ pool->addrs = umem->addrs;
+ pool->tx_metadata_len = umem->tx_metadata_len;
+ pool->tx_sw_csum = umem->flags & XDP_UMEM_TX_SW_CSUM;
++ spin_lock_init(&pool->rx_lock);
+ INIT_LIST_HEAD(&pool->free_list);
+ INIT_LIST_HEAD(&pool->xskb_list);
+ INIT_LIST_HEAD(&pool->xsk_tx_list);
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 356df48c97309d..2ff02fb6f7e948 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -440,6 +440,10 @@ static void alc_fill_eapd_coef(struct hda_codec *codec)
+ alc_update_coef_idx(codec, 0x67, 0xf000, 0x3000);
+ fallthrough;
+ case 0x10ec0215:
++ case 0x10ec0236:
++ case 0x10ec0245:
++ case 0x10ec0256:
++ case 0x10ec0257:
+ case 0x10ec0285:
+ case 0x10ec0289:
+ alc_update_coef_idx(codec, 0x36, 1<<13, 0);
+@@ -447,12 +451,8 @@ static void alc_fill_eapd_coef(struct hda_codec *codec)
+ case 0x10ec0230:
+ case 0x10ec0233:
+ case 0x10ec0235:
+- case 0x10ec0236:
+- case 0x10ec0245:
+ case 0x10ec0255:
+- case 0x10ec0256:
+ case 0x19e58326:
+- case 0x10ec0257:
+ case 0x10ec0282:
+ case 0x10ec0283:
+ case 0x10ec0286:
+@@ -10713,8 +10713,8 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x103c, 0x8ca7, "HP ZBook Fury", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x8caf, "HP Elite mt645 G8 Mobile Thin Client", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
+ SND_PCI_QUIRK(0x103c, 0x8cbd, "HP Pavilion Aero Laptop 13-bg0xxx", ALC245_FIXUP_HP_X360_MUTE_LEDS),
+- SND_PCI_QUIRK(0x103c, 0x8cdd, "HP Spectre", ALC287_FIXUP_CS35L41_I2C_2),
+- SND_PCI_QUIRK(0x103c, 0x8cde, "HP Spectre", ALC287_FIXUP_CS35L41_I2C_2),
++ SND_PCI_QUIRK(0x103c, 0x8cdd, "HP Spectre", ALC245_FIXUP_HP_SPECTRE_X360_EU0XXX),
++ SND_PCI_QUIRK(0x103c, 0x8cde, "HP OmniBook Ultra Flip Laptop 14t", ALC245_FIXUP_HP_SPECTRE_X360_EU0XXX),
+ SND_PCI_QUIRK(0x103c, 0x8cdf, "HP SnowWhite", ALC287_FIXUP_CS35L41_I2C_2_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x8ce0, "HP SnowWhite", ALC287_FIXUP_CS35L41_I2C_2_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x8cf5, "HP ZBook Studio 16", ALC245_FIXUP_CS35L41_SPI_4_HP_GPIO_LED),
+@@ -10732,8 +10732,11 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x103c, 0x8de8, "HP Gemtree", ALC245_FIXUP_TAS2781_SPI_2),
+ SND_PCI_QUIRK(0x103c, 0x8de9, "HP Gemtree", ALC245_FIXUP_TAS2781_SPI_2),
+ SND_PCI_QUIRK(0x103c, 0x8dec, "HP EliteBook 640 G12", ALC236_FIXUP_HP_GPIO_LED),
++ SND_PCI_QUIRK(0x103c, 0x8ded, "HP EliteBook 640 G12", ALC236_FIXUP_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x8dee, "HP EliteBook 660 G12", ALC236_FIXUP_HP_GPIO_LED),
++ SND_PCI_QUIRK(0x103c, 0x8def, "HP EliteBook 660 G12", ALC236_FIXUP_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x8df0, "HP EliteBook 630 G12", ALC236_FIXUP_HP_GPIO_LED),
++ SND_PCI_QUIRK(0x103c, 0x8df1, "HP EliteBook 630 G12", ALC236_FIXUP_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x8dfc, "HP EliteBook 645 G12", ALC236_FIXUP_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x8dfe, "HP EliteBook 665 G12", ALC236_FIXUP_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x8e14, "HP ZBook Firefly 14 G12", ALC285_FIXUP_HP_GPIO_LED),
+@@ -10771,10 +10774,10 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1043, 0x12a3, "Asus N7691ZM", ALC269_FIXUP_ASUS_N7601ZM),
+ SND_PCI_QUIRK(0x1043, 0x12af, "ASUS UX582ZS", ALC245_FIXUP_CS35L41_SPI_2),
+ SND_PCI_QUIRK(0x1043, 0x12b4, "ASUS B3405CCA / P3405CCA", ALC294_FIXUP_ASUS_CS35L41_SPI_2),
+- SND_PCI_QUIRK(0x1043, 0x12e0, "ASUS X541SA", ALC256_FIXUP_ASUS_MIC),
+- SND_PCI_QUIRK(0x1043, 0x12f0, "ASUS X541UV", ALC256_FIXUP_ASUS_MIC),
++ SND_PCI_QUIRK(0x1043, 0x12e0, "ASUS X541SA", ALC256_FIXUP_ASUS_MIC_NO_PRESENCE),
++ SND_PCI_QUIRK(0x1043, 0x12f0, "ASUS X541UV", ALC256_FIXUP_ASUS_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1043, 0x1313, "Asus K42JZ", ALC269VB_FIXUP_ASUS_MIC_NO_PRESENCE),
+- SND_PCI_QUIRK(0x1043, 0x13b0, "ASUS Z550SA", ALC256_FIXUP_ASUS_MIC),
++ SND_PCI_QUIRK(0x1043, 0x13b0, "ASUS Z550SA", ALC256_FIXUP_ASUS_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1043, 0x1427, "Asus Zenbook UX31E", ALC269VB_FIXUP_ASUS_ZENBOOK),
+ SND_PCI_QUIRK(0x1043, 0x1433, "ASUS GX650PY/PZ/PV/PU/PYV/PZV/PIV/PVV", ALC285_FIXUP_ASUS_I2C_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1043, 0x1460, "Asus VivoBook 15", ALC256_FIXUP_ASUS_MIC_NO_PRESENCE),
+@@ -10828,7 +10831,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1043, 0x1c92, "ASUS ROG Strix G15", ALC285_FIXUP_ASUS_G533Z_PINS),
+ SND_PCI_QUIRK(0x1043, 0x1c9f, "ASUS G614JU/JV/JI", ALC285_FIXUP_ASUS_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1043, 0x1caf, "ASUS G634JY/JZ/JI/JG", ALC285_FIXUP_ASUS_SPI_REAR_SPEAKERS),
+- SND_PCI_QUIRK(0x1043, 0x1ccd, "ASUS X555UB", ALC256_FIXUP_ASUS_MIC),
++ SND_PCI_QUIRK(0x1043, 0x1ccd, "ASUS X555UB", ALC256_FIXUP_ASUS_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1043, 0x1ccf, "ASUS G814JU/JV/JI", ALC245_FIXUP_CS35L41_SPI_2),
+ SND_PCI_QUIRK(0x1043, 0x1cdf, "ASUS G814JY/JZ/JG", ALC245_FIXUP_CS35L41_SPI_2),
+ SND_PCI_QUIRK(0x1043, 0x1cef, "ASUS G834JY/JZ/JI/JG", ALC285_FIXUP_ASUS_HEADSET_MIC),
+diff --git a/sound/soc/amd/acp/acp-i2s.c b/sound/soc/amd/acp/acp-i2s.c
+index 89e99ed4275a22..f631147fc63bdd 100644
+--- a/sound/soc/amd/acp/acp-i2s.c
++++ b/sound/soc/amd/acp/acp-i2s.c
+@@ -101,7 +101,7 @@ static int acp_i2s_set_tdm_slot(struct snd_soc_dai *dai, u32 tx_mask, u32 rx_mas
+ struct acp_stream *stream;
+ int slot_len, no_of_slots;
+
+- chip = dev_get_platdata(dev);
++ chip = dev_get_drvdata(dev->parent);
+ switch (slot_width) {
+ case SLOT_WIDTH_8:
+ slot_len = 8;
+diff --git a/sound/soc/codecs/Kconfig b/sound/soc/codecs/Kconfig
+index ee35f3aa552165..0138cfabbb038a 100644
+--- a/sound/soc/codecs/Kconfig
++++ b/sound/soc/codecs/Kconfig
+@@ -763,10 +763,9 @@ config SND_SOC_CS_AMP_LIB
+ tristate
+
+ config SND_SOC_CS_AMP_LIB_TEST
+- tristate "KUnit test for Cirrus Logic cs-amp-lib"
+- depends on KUNIT
++ tristate "KUnit test for Cirrus Logic cs-amp-lib" if !KUNIT_ALL_TESTS
++ depends on SND_SOC_CS_AMP_LIB && KUNIT
+ default KUNIT_ALL_TESTS
+- select SND_SOC_CS_AMP_LIB
+ help
+ This builds KUnit tests for the Cirrus Logic common
+ amplifier library.
+diff --git a/sound/soc/generic/simple-card-utils.c b/sound/soc/generic/simple-card-utils.c
+index 32efb30c55d695..8bd5b93f345768 100644
+--- a/sound/soc/generic/simple-card-utils.c
++++ b/sound/soc/generic/simple-card-utils.c
+@@ -1146,9 +1146,9 @@ void graph_util_parse_link_direction(struct device_node *np,
+ bool is_playback_only = of_property_read_bool(np, "playback-only");
+ bool is_capture_only = of_property_read_bool(np, "capture-only");
+
+- if (is_playback_only)
++ if (playback_only)
+ *playback_only = is_playback_only;
+- if (is_capture_only)
++ if (capture_only)
+ *capture_only = is_capture_only;
+ }
+ EXPORT_SYMBOL_GPL(graph_util_parse_link_direction);
+diff --git a/sound/soc/renesas/rz-ssi.c b/sound/soc/renesas/rz-ssi.c
+index 3a0af4ca7ab6c7..0f7458a4390198 100644
+--- a/sound/soc/renesas/rz-ssi.c
++++ b/sound/soc/renesas/rz-ssi.c
+@@ -1244,7 +1244,7 @@ static int rz_ssi_runtime_resume(struct device *dev)
+
+ static const struct dev_pm_ops rz_ssi_pm_ops = {
+ RUNTIME_PM_OPS(rz_ssi_runtime_suspend, rz_ssi_runtime_resume, NULL)
+- SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend, pm_runtime_force_resume)
++ NOIRQ_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend, pm_runtime_force_resume)
+ };
+
+ static struct platform_driver rz_ssi_driver = {
+diff --git a/sound/soc/sdw_utils/soc_sdw_rt_dmic.c b/sound/soc/sdw_utils/soc_sdw_rt_dmic.c
+index 46d917a99c51da..97be110a59b63a 100644
+--- a/sound/soc/sdw_utils/soc_sdw_rt_dmic.c
++++ b/sound/soc/sdw_utils/soc_sdw_rt_dmic.c
+@@ -29,6 +29,8 @@ int asoc_sdw_rt_dmic_rtd_init(struct snd_soc_pcm_runtime *rtd, struct snd_soc_da
+ mic_name = devm_kasprintf(card->dev, GFP_KERNEL, "rt715-sdca");
+ else
+ mic_name = devm_kasprintf(card->dev, GFP_KERNEL, "%s", component->name_prefix);
++ if (!mic_name)
++ return -ENOMEM;
+
+ card->components = devm_kasprintf(card->dev, GFP_KERNEL,
+ "%s mic:%s", card->components,
+diff --git a/sound/soc/soc-core.c b/sound/soc/soc-core.c
+index 3c6d8aef413090..26b34b68850839 100644
+--- a/sound/soc/soc-core.c
++++ b/sound/soc/soc-core.c
+@@ -3046,7 +3046,7 @@ int snd_soc_of_parse_pin_switches(struct snd_soc_card *card, const char *prop)
+ unsigned int i, nb_controls;
+ int ret;
+
+- if (!of_property_read_bool(dev->of_node, prop))
++ if (!of_property_present(dev->of_node, prop))
+ return 0;
+
+ strings = devm_kcalloc(dev, nb_controls_max,
+@@ -3120,23 +3120,17 @@ int snd_soc_of_parse_tdm_slot(struct device_node *np,
+ if (rx_mask)
+ snd_soc_of_get_slot_mask(np, "dai-tdm-slot-rx-mask", rx_mask);
+
+- if (of_property_read_bool(np, "dai-tdm-slot-num")) {
+- ret = of_property_read_u32(np, "dai-tdm-slot-num", &val);
+- if (ret)
+- return ret;
+-
+- if (slots)
+- *slots = val;
+- }
+-
+- if (of_property_read_bool(np, "dai-tdm-slot-width")) {
+- ret = of_property_read_u32(np, "dai-tdm-slot-width", &val);
+- if (ret)
+- return ret;
++ ret = of_property_read_u32(np, "dai-tdm-slot-num", &val);
++ if (ret && ret != -EINVAL)
++ return ret;
++ if (!ret && slots)
++ *slots = val;
+
+- if (slot_width)
+- *slot_width = val;
+- }
++ ret = of_property_read_u32(np, "dai-tdm-slot-width", &val);
++ if (ret && ret != -EINVAL)
++ return ret;
++ if (!ret && slot_width)
++ *slot_width = val;
+
+ return 0;
+ }
+@@ -3403,12 +3397,12 @@ unsigned int snd_soc_daifmt_parse_clock_provider_raw(struct device_node *np,
+ * check "[prefix]frame-master"
+ */
+ snprintf(prop, sizeof(prop), "%sbitclock-master", prefix);
+- bit = of_property_read_bool(np, prop);
++ bit = of_property_present(np, prop);
+ if (bit && bitclkmaster)
+ *bitclkmaster = of_parse_phandle(np, prop, 0);
+
+ snprintf(prop, sizeof(prop), "%sframe-master", prefix);
+- frame = of_property_read_bool(np, prop);
++ frame = of_property_present(np, prop);
+ if (frame && framemaster)
+ *framemaster = of_parse_phandle(np, prop, 0);
+
+diff --git a/sound/soc/soc-pcm.c b/sound/soc/soc-pcm.c
+index 88b3ad5a255205..53b0ea68b939f5 100644
+--- a/sound/soc/soc-pcm.c
++++ b/sound/soc/soc-pcm.c
+@@ -1618,10 +1618,13 @@ static int dpcm_add_paths(struct snd_soc_pcm_runtime *fe, int stream,
+ /*
+ * Filter for systems with 'component_chaining' enabled.
+ * This helps to avoid unnecessary re-configuration of an
+- * already active BE on such systems.
++ * already active BE on such systems and ensures the BE DAI
++ * widget is powered ON after hw_params() BE DAI callback.
+ */
+ if (fe->card->component_chaining &&
+ (be->dpcm[stream].state != SND_SOC_DPCM_STATE_NEW) &&
++ (be->dpcm[stream].state != SND_SOC_DPCM_STATE_OPEN) &&
++ (be->dpcm[stream].state != SND_SOC_DPCM_STATE_HW_PARAMS) &&
+ (be->dpcm[stream].state != SND_SOC_DPCM_STATE_CLOSE))
+ continue;
+
+diff --git a/sound/soc/stm/stm32_sai_sub.c b/sound/soc/stm/stm32_sai_sub.c
+index 3efbf4aaf96549..d9c4266c8150d9 100644
+--- a/sound/soc/stm/stm32_sai_sub.c
++++ b/sound/soc/stm/stm32_sai_sub.c
+@@ -409,11 +409,11 @@ static int stm32_sai_set_parent_rate(struct stm32_sai_sub_data *sai,
+ unsigned int rate)
+ {
+ struct platform_device *pdev = sai->pdev;
+- unsigned int sai_ck_rate, sai_ck_max_rate, sai_curr_rate, sai_new_rate;
++ unsigned int sai_ck_rate, sai_ck_max_rate, sai_ck_min_rate, sai_curr_rate, sai_new_rate;
+ int div, ret;
+
+ /*
+- * Set maximum expected kernel clock frequency
++ * Set minimum and maximum expected kernel clock frequency
+ * - mclk on or spdif:
+ * f_sai_ck = MCKDIV * mclk-fs * fs
+ * Here typical 256 ratio is assumed for mclk-fs
+@@ -423,13 +423,16 @@ static int stm32_sai_set_parent_rate(struct stm32_sai_sub_data *sai,
+ * Set constraint MCKDIV * FRL <= 256, to ensure MCKDIV is in available range
+ * f_sai_ck = sai_ck_max_rate * pow_of_two(FRL) / 256
+ */
++ sai_ck_min_rate = rate * 256;
+ if (!(rate % SAI_RATE_11K))
+ sai_ck_max_rate = SAI_MAX_SAMPLE_RATE_11K * 256;
+ else
+ sai_ck_max_rate = SAI_MAX_SAMPLE_RATE_8K * 256;
+
+- if (!sai->sai_mclk && !STM_SAI_PROTOCOL_IS_SPDIF(sai))
++ if (!sai->sai_mclk && !STM_SAI_PROTOCOL_IS_SPDIF(sai)) {
++ sai_ck_min_rate = rate * sai->fs_length;
+ sai_ck_max_rate /= DIV_ROUND_CLOSEST(256, roundup_pow_of_two(sai->fs_length));
++ }
+
+ /*
+ * Request exclusivity, as the clock is shared by SAI sub-blocks and by
+@@ -444,7 +447,10 @@ static int stm32_sai_set_parent_rate(struct stm32_sai_sub_data *sai,
+ * return immediately.
+ */
+ sai_curr_rate = clk_get_rate(sai->sai_ck);
+- if (stm32_sai_rate_accurate(sai_ck_max_rate, sai_curr_rate))
++ dev_dbg(&pdev->dev, "kernel clock rate: min [%u], max [%u], current [%u]",
++ sai_ck_min_rate, sai_ck_max_rate, sai_curr_rate);
++ if (stm32_sai_rate_accurate(sai_ck_max_rate, sai_curr_rate) &&
++ sai_curr_rate >= sai_ck_min_rate)
+ return 0;
+
+ /*
+@@ -472,7 +478,7 @@ static int stm32_sai_set_parent_rate(struct stm32_sai_sub_data *sai,
+ /* Try a lower frequency */
+ div++;
+ sai_ck_rate = sai_ck_max_rate / div;
+- } while (sai_ck_rate > rate);
++ } while (sai_ck_rate >= sai_ck_min_rate);
+
+ /* No accurate rate found */
+ dev_err(&pdev->dev, "Failed to find an accurate rate");
+diff --git a/sound/usb/endpoint.c b/sound/usb/endpoint.c
+index a29f28eb7d0c64..f36ec98da4601d 100644
+--- a/sound/usb/endpoint.c
++++ b/sound/usb/endpoint.c
+@@ -926,6 +926,8 @@ static int endpoint_set_interface(struct snd_usb_audio *chip,
+ {
+ int altset = set ? ep->altsetting : 0;
+ int err;
++ int retries = 0;
++ const int max_retries = 5;
+
+ if (ep->iface_ref->altset == altset)
+ return 0;
+@@ -935,8 +937,13 @@ static int endpoint_set_interface(struct snd_usb_audio *chip,
+
+ usb_audio_dbg(chip, "Setting usb interface %d:%d for EP 0x%x\n",
+ ep->iface, altset, ep->ep_num);
++retry:
+ err = usb_set_interface(chip->dev, ep->iface, altset);
+ if (err < 0) {
++ if (err == -EPROTO && ++retries <= max_retries) {
++ msleep(5 * (1 << (retries - 1)));
++ goto retry;
++ }
+ usb_audio_err_ratelimited(
+ chip, "%d:%d: usb_set_interface failed (%d)\n",
+ ep->iface, altset, err);
+diff --git a/sound/usb/format.c b/sound/usb/format.c
+index 6049d957694ca6..a9283b2bd2f4e5 100644
+--- a/sound/usb/format.c
++++ b/sound/usb/format.c
+@@ -260,7 +260,8 @@ static int parse_audio_format_rates_v1(struct snd_usb_audio *chip, struct audiof
+ }
+
+ /* Jabra Evolve 65 headset */
+- if (chip->usb_id == USB_ID(0x0b0e, 0x030b)) {
++ if (chip->usb_id == USB_ID(0x0b0e, 0x030b) ||
++ chip->usb_id == USB_ID(0x0b0e, 0x030c)) {
+ /* only 48kHz for playback while keeping 16kHz for capture */
+ if (fp->nr_rates != 1)
+ return set_fixed_rate(fp, 48000, SNDRV_PCM_RATE_48000);
* [gentoo-commits] proj/linux-patches:6.14 commit in: /
@ 2025-05-18 14:32 Mike Pagano
0 siblings, 0 replies; 21+ messages in thread
From: Mike Pagano @ 2025-05-18 14:32 UTC (permalink / raw
To: gentoo-commits
commit: 7d54e41d9083e139a4b84bb4b79bd400947e5e6f
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun May 18 14:32:04 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun May 18 14:32:04 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=7d54e41d
Linux patch 6.14.7
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1006_linux-6.14.7.patch | 10088 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 10092 insertions(+)
diff --git a/0000_README b/0000_README
index cbaf7be5..df3d8c2c 100644
--- a/0000_README
+++ b/0000_README
@@ -66,6 +66,10 @@ Patch: 1005_linux-6.14.6.patch
From: https://www.kernel.org
Desc: Linux 6.14.6
+Patch: 1006_linux-6.14.7.patch
+From: https://www.kernel.org
+Desc: Linux 6.14.7
+
Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
Desc: Enable link security restrictions by default.
diff --git a/1006_linux-6.14.7.patch b/1006_linux-6.14.7.patch
new file mode 100644
index 00000000..4f61e36b
--- /dev/null
+++ b/1006_linux-6.14.7.patch
@@ -0,0 +1,10088 @@
+diff --git a/.clippy.toml b/.clippy.toml
+index 815c94732ed785..137f41d203de37 100644
+--- a/.clippy.toml
++++ b/.clippy.toml
+@@ -7,5 +7,5 @@ check-private-items = true
+ disallowed-macros = [
+ # The `clippy::dbg_macro` lint only works with `std::dbg!`, thus we simulate
+ # it here, see: https://github.com/rust-lang/rust-clippy/issues/11303.
+- { path = "kernel::dbg", reason = "the `dbg!` macro is intended as a debugging tool" },
++ { path = "kernel::dbg", reason = "the `dbg!` macro is intended as a debugging tool", allow-invalid = true },
+ ]
+diff --git a/Documentation/ABI/testing/sysfs-devices-system-cpu b/Documentation/ABI/testing/sysfs-devices-system-cpu
+index 206079d3bd5b12..6a1acabb29d85f 100644
+--- a/Documentation/ABI/testing/sysfs-devices-system-cpu
++++ b/Documentation/ABI/testing/sysfs-devices-system-cpu
+@@ -511,6 +511,7 @@ Description: information about CPUs heterogeneity.
+
+ What: /sys/devices/system/cpu/vulnerabilities
+ /sys/devices/system/cpu/vulnerabilities/gather_data_sampling
++ /sys/devices/system/cpu/vulnerabilities/indirect_target_selection
+ /sys/devices/system/cpu/vulnerabilities/itlb_multihit
+ /sys/devices/system/cpu/vulnerabilities/l1tf
+ /sys/devices/system/cpu/vulnerabilities/mds
+diff --git a/Documentation/admin-guide/hw-vuln/index.rst b/Documentation/admin-guide/hw-vuln/index.rst
+index ff0b440ef2dc90..d2caa390395e5b 100644
+--- a/Documentation/admin-guide/hw-vuln/index.rst
++++ b/Documentation/admin-guide/hw-vuln/index.rst
+@@ -22,3 +22,4 @@ are configurable at compile, boot or run time.
+ srso
+ gather_data_sampling
+ reg-file-data-sampling
++ indirect-target-selection
+diff --git a/Documentation/admin-guide/hw-vuln/indirect-target-selection.rst b/Documentation/admin-guide/hw-vuln/indirect-target-selection.rst
+new file mode 100644
+index 00000000000000..d9ca64108d2332
+--- /dev/null
++++ b/Documentation/admin-guide/hw-vuln/indirect-target-selection.rst
+@@ -0,0 +1,168 @@
++.. SPDX-License-Identifier: GPL-2.0
++
++Indirect Target Selection (ITS)
++===============================
++
++ITS is a vulnerability in some Intel CPUs that support Enhanced IBRS and were
++released before Alder Lake. ITS may allow an attacker to control the prediction
++of indirect branches and RETs located in the lower half of a cacheline.
++
++ITS is assigned CVE-2024-28956 with a CVSS score of 4.7 (Medium).
++
++Scope of Impact
++---------------
++- **eIBRS Guest/Host Isolation**: Indirect branches in KVM/kernel may still be
++ predicted with unintended target corresponding to a branch in the guest.
++
++- **Intra-Mode BTI**: In-kernel training such as through cBPF or other native
++ gadgets.
++
++- **Indirect Branch Prediction Barrier (IBPB)**: After an IBPB, indirect
++ branches may still be predicted with targets corresponding to direct branches
++ executed prior to the IBPB. This is fixed by the IPU 2025.1 microcode, which
++ should be available via distro updates. Alternatively microcode can be
++ obtained from Intel's github repository [#f1]_.
++
++Affected CPUs
++-------------
++Below is the list of ITS affected CPUs [#f2]_ [#f3]_:
++
++ ======================== ============ ==================== ===============
++ Common name Family_Model eIBRS Intra-mode BTI
++ Guest/Host Isolation
++ ======================== ============ ==================== ===============
++ SKYLAKE_X (step >= 6) 06_55H Affected Affected
++ ICELAKE_X 06_6AH Not affected Affected
++ ICELAKE_D 06_6CH Not affected Affected
++ ICELAKE_L 06_7EH Not affected Affected
++ TIGERLAKE_L 06_8CH Not affected Affected
++ TIGERLAKE 06_8DH Not affected Affected
++ KABYLAKE_L (step >= 12) 06_8EH Affected Affected
++ KABYLAKE (step >= 13) 06_9EH Affected Affected
++ COMETLAKE 06_A5H Affected Affected
++ COMETLAKE_L 06_A6H Affected Affected
++ ROCKETLAKE 06_A7H Not affected Affected
++ ======================== ============ ==================== ===============
++
++- All affected CPUs enumerate Enhanced IBRS feature.
++- IBPB isolation is affected on all ITS affected CPUs, and need a microcode
++ update for mitigation.
++- None of the affected CPUs enumerate BHI_CTRL which was introduced in Golden
++ Cove (Alder Lake and Sapphire Rapids). This can help guests to determine the
++ host's affected status.
++- Intel Atom CPUs are not affected by ITS.
++
++Mitigation
++----------
++As only the indirect branches and RETs that have their last byte of instruction
++in the lower half of the cacheline are vulnerable to ITS, the basic idea behind
++the mitigation is to not allow indirect branches in the lower half.
++
++This is achieved by relying on existing retpoline support in the kernel, and in
++compilers. ITS-vulnerable retpoline sites are runtime patched to point to newly
++added ITS-safe thunks. These safe thunks consists of indirect branch in the
++second half of the cacheline. Not all retpoline sites are patched to thunks, if
++a retpoline site is evaluated to be ITS-safe, it is replaced with an inline
++indirect branch.
++
++Dynamic thunks
++~~~~~~~~~~~~~~
++From a dynamically allocated pool of safe-thunks, each vulnerable site is
++replaced with a new thunk, such that they get a unique address. This could
++improve the branch prediction accuracy. Also, it is a defense-in-depth measure
++against aliasing.
++
++Note, for simplicity, indirect branches in eBPF programs are always replaced
++with a jump to a static thunk in __x86_indirect_its_thunk_array. If required,
++in future this can be changed to use dynamic thunks.
++
++All vulnerable RETs are replaced with a static thunk, they do not use dynamic
++thunks. This is because RETs get their prediction from RSB mostly that does not
++depend on source address. RETs that underflow RSB may benefit from dynamic
++thunks. But, RETs significantly outnumber indirect branches, and any benefit
++from a unique source address could be outweighed by the increased icache
++footprint and iTLB pressure.
++
++Retpoline
++~~~~~~~~~
++Retpoline sequence also mitigates ITS-unsafe indirect branches. For this
++reason, when retpoline is enabled, ITS mitigation only relocates the RETs to
++safe thunks. Unless user requested the RSB-stuffing mitigation.
++
++RSB Stuffing
++~~~~~~~~~~~~
++RSB-stuffing via Call Depth Tracking is a mitigation for Retbleed RSB-underflow
++attacks. And it also mitigates RETs that are vulnerable to ITS.
++
++Mitigation in guests
++^^^^^^^^^^^^^^^^^^^^
++All guests deploy ITS mitigation by default, irrespective of eIBRS enumeration
++and Family/Model of the guest. This is because eIBRS feature could be hidden
++from a guest. One exception to this is when a guest enumerates BHI_DIS_S, which
++indicates that the guest is running on an unaffected host.
++
++To prevent guests from unnecessarily deploying the mitigation on unaffected
++platforms, Intel has defined ITS_NO bit(62) in MSR IA32_ARCH_CAPABILITIES. When
++a guest sees this bit set, it should not enumerate the ITS bug. Note, this bit
++is not set by any hardware, but is **intended for VMMs to synthesize** it for
++guests as per the host's affected status.
++
++Mitigation options
++^^^^^^^^^^^^^^^^^^
++The ITS mitigation can be controlled using the "indirect_target_selection"
++kernel parameter. The available options are:
++
++ ======== ===================================================================
++ on (default) Deploy the "Aligned branch/return thunks" mitigation.
++ If spectre_v2 mitigation enables retpoline, aligned-thunks are only
++ deployed for the affected RET instructions. Retpoline mitigates
++ indirect branches.
++
++ off Disable ITS mitigation.
++
++ vmexit Equivalent to "=on" if the CPU is affected by guest/host isolation
++ part of ITS. Otherwise, mitigation is not deployed. This option is
++ useful when host userspace is not in the threat model, and only
++ attacks from guest to host are considered.
++
++ stuff Deploy RSB-fill mitigation when retpoline is also deployed.
++ Otherwise, deploy the default mitigation. When retpoline mitigation
++ is enabled, RSB-stuffing via Call-Depth-Tracking also mitigates
++ ITS.
++
++ force Force the ITS bug and deploy the default mitigation.
++ ======== ===================================================================
++
++Sysfs reporting
++---------------
++
++The sysfs file showing ITS mitigation status is:
++
++ /sys/devices/system/cpu/vulnerabilities/indirect_target_selection
++
++Note, microcode mitigation status is not reported in this file.
++
++The possible values in this file are:
++
++.. list-table::
++
++ * - Not affected
++ - The processor is not vulnerable.
++ * - Vulnerable
++ - System is vulnerable and no mitigation has been applied.
++ * - Vulnerable, KVM: Not affected
++ - System is vulnerable to intra-mode BTI, but not affected by eIBRS
++ guest/host isolation.
++ * - Mitigation: Aligned branch/return thunks
++ - The mitigation is enabled, affected indirect branches and RETs are
++ relocated to safe thunks.
++ * - Mitigation: Retpolines, Stuffing RSB
++ - The mitigation is enabled using retpoline and RSB stuffing.
++
++References
++----------
++.. [#f1] Microcode repository - https://github.com/intel/Intel-Linux-Processor-Microcode-Data-Files
++
++.. [#f2] Affected Processors list - https://www.intel.com/content/www/us/en/developer/topic-technology/software-security-guidance/processors-affected-consolidated-product-cpu-model.html
++
++.. [#f3] Affected Processors list (machine readable) - https://github.com/intel/Intel-affected-processor-list
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index 56be1fc99bdd44..f9e11cebc598cb 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -2178,6 +2178,23 @@
+ different crypto accelerators. This option can be used
+ to achieve best performance for particular HW.
+
++ indirect_target_selection= [X86,Intel] Mitigation control for Indirect
++ Target Selection(ITS) bug in Intel CPUs. Updated
++ microcode is also required for a fix in IBPB.
++
++ on: Enable mitigation (default).
++ off: Disable mitigation.
++ force: Force the ITS bug and deploy default
++ mitigation.
++ vmexit: Only deploy mitigation if CPU is affected by
++ guest/host isolation part of ITS.
++ stuff: Deploy RSB-fill mitigation when retpoline is
++ also deployed. Otherwise, deploy the default
++ mitigation.
++
++ For details see:
++ Documentation/admin-guide/hw-vuln/indirect-target-selection.rst
++
+ init= [KNL]
+ Format: <full_path>
+ Run specified binary instead of /sbin/init as init
+@@ -3666,6 +3683,7 @@
+ expose users to several CPU vulnerabilities.
+ Equivalent to: if nokaslr then kpti=0 [ARM64]
+ gather_data_sampling=off [X86]
++ indirect_target_selection=off [X86]
+ kvm.nx_huge_pages=off [X86]
+ l1tf=off [X86]
+ mds=off [X86]
+diff --git a/Makefile b/Makefile
+index 6c3233a21380ce..70bd8847c8677a 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 14
+-SUBLEVEL = 6
++SUBLEVEL = 7
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+diff --git a/arch/arm64/boot/dts/freescale/imx8mm-verdin.dtsi b/arch/arm64/boot/dts/freescale/imx8mm-verdin.dtsi
+index c528594ac4428e..11eb601e144d23 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mm-verdin.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mm-verdin.dtsi
+@@ -165,6 +165,19 @@ reg_usdhc2_vmmc: regulator-usdhc2 {
+ startup-delay-us = <20000>;
+ };
+
++ reg_usdhc2_vqmmc: regulator-usdhc2-vqmmc {
++ compatible = "regulator-gpio";
++ pinctrl-names = "default";
++ pinctrl-0 = <&pinctrl_usdhc2_vsel>;
++ gpios = <&gpio1 4 GPIO_ACTIVE_HIGH>;
++ regulator-max-microvolt = <3300000>;
++ regulator-min-microvolt = <1800000>;
++ states = <1800000 0x1>,
++ <3300000 0x0>;
++ regulator-name = "PMIC_USDHC_VSELECT";
++ vin-supply = <®_nvcc_sd>;
++ };
++
+ reserved-memory {
+ #address-cells = <2>;
+ #size-cells = <2>;
+@@ -290,7 +303,7 @@ &gpio1 {
+ "SODIMM_19",
+ "",
+ "",
+- "",
++ "PMIC_USDHC_VSELECT",
+ "",
+ "",
+ "",
+@@ -806,6 +819,7 @@ &usdhc2 {
+ pinctrl-2 = <&pinctrl_usdhc2_200mhz>, <&pinctrl_usdhc2_cd>;
+ pinctrl-3 = <&pinctrl_usdhc2_sleep>, <&pinctrl_usdhc2_cd_sleep>;
+ vmmc-supply = <®_usdhc2_vmmc>;
++ vqmmc-supply = <®_usdhc2_vqmmc>;
+ };
+
+ &wdog1 {
+@@ -1227,13 +1241,17 @@ pinctrl_usdhc2_pwr_en: usdhc2pwrengrp {
+ <MX8MM_IOMUXC_NAND_CLE_GPIO3_IO5 0x6>; /* SODIMM 76 */
+ };
+
++ pinctrl_usdhc2_vsel: usdhc2vselgrp {
++ fsl,pins =
++ <MX8MM_IOMUXC_GPIO1_IO04_GPIO1_IO4 0x10>; /* PMIC_USDHC_VSELECT */
++ };
++
+ /*
+ * Note: Due to ERR050080 we use discrete external on-module resistors pulling-up to the
+ * on-module +V3.3_1.8_SD (LDO5) rail and explicitly disable the internal pull-ups here.
+ */
+ pinctrl_usdhc2: usdhc2grp {
+ fsl,pins =
+- <MX8MM_IOMUXC_GPIO1_IO04_USDHC2_VSELECT 0x10>,
+ <MX8MM_IOMUXC_SD2_CLK_USDHC2_CLK 0x90>, /* SODIMM 78 */
+ <MX8MM_IOMUXC_SD2_CMD_USDHC2_CMD 0x90>, /* SODIMM 74 */
+ <MX8MM_IOMUXC_SD2_DATA0_USDHC2_DATA0 0x90>, /* SODIMM 80 */
+@@ -1244,7 +1262,6 @@ pinctrl_usdhc2: usdhc2grp {
+
+ pinctrl_usdhc2_100mhz: usdhc2-100mhzgrp {
+ fsl,pins =
+- <MX8MM_IOMUXC_GPIO1_IO04_USDHC2_VSELECT 0x10>,
+ <MX8MM_IOMUXC_SD2_CLK_USDHC2_CLK 0x94>,
+ <MX8MM_IOMUXC_SD2_CMD_USDHC2_CMD 0x94>,
+ <MX8MM_IOMUXC_SD2_DATA0_USDHC2_DATA0 0x94>,
+@@ -1255,7 +1272,6 @@ pinctrl_usdhc2_100mhz: usdhc2-100mhzgrp {
+
+ pinctrl_usdhc2_200mhz: usdhc2-200mhzgrp {
+ fsl,pins =
+- <MX8MM_IOMUXC_GPIO1_IO04_USDHC2_VSELECT 0x10>,
+ <MX8MM_IOMUXC_SD2_CLK_USDHC2_CLK 0x96>,
+ <MX8MM_IOMUXC_SD2_CMD_USDHC2_CMD 0x96>,
+ <MX8MM_IOMUXC_SD2_DATA0_USDHC2_DATA0 0x96>,
+@@ -1267,7 +1283,6 @@ pinctrl_usdhc2_200mhz: usdhc2-200mhzgrp {
+ /* Avoid backfeeding with removed card power */
+ pinctrl_usdhc2_sleep: usdhc2slpgrp {
+ fsl,pins =
+- <MX8MM_IOMUXC_GPIO1_IO04_USDHC2_VSELECT 0x0>,
+ <MX8MM_IOMUXC_SD2_CLK_USDHC2_CLK 0x0>,
+ <MX8MM_IOMUXC_SD2_CMD_USDHC2_CMD 0x0>,
+ <MX8MM_IOMUXC_SD2_DATA0_USDHC2_DATA0 0x0>,
+diff --git a/arch/arm64/include/asm/cputype.h b/arch/arm64/include/asm/cputype.h
+index 41c21feaef4ad9..8c6bd9da3b1ba3 100644
+--- a/arch/arm64/include/asm/cputype.h
++++ b/arch/arm64/include/asm/cputype.h
+@@ -81,6 +81,7 @@
+ #define ARM_CPU_PART_CORTEX_A78AE 0xD42
+ #define ARM_CPU_PART_CORTEX_X1 0xD44
+ #define ARM_CPU_PART_CORTEX_A510 0xD46
++#define ARM_CPU_PART_CORTEX_X1C 0xD4C
+ #define ARM_CPU_PART_CORTEX_A520 0xD80
+ #define ARM_CPU_PART_CORTEX_A710 0xD47
+ #define ARM_CPU_PART_CORTEX_A715 0xD4D
+@@ -167,6 +168,7 @@
+ #define MIDR_CORTEX_A78AE MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A78AE)
+ #define MIDR_CORTEX_X1 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X1)
+ #define MIDR_CORTEX_A510 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A510)
++#define MIDR_CORTEX_X1C MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X1C)
+ #define MIDR_CORTEX_A520 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A520)
+ #define MIDR_CORTEX_A710 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A710)
+ #define MIDR_CORTEX_A715 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A715)
+diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h
+index e390c432f546e5..deb2ea84227880 100644
+--- a/arch/arm64/include/asm/insn.h
++++ b/arch/arm64/include/asm/insn.h
+@@ -698,6 +698,7 @@ u32 aarch64_insn_gen_cas(enum aarch64_insn_register result,
+ }
+ #endif
+ u32 aarch64_insn_gen_dmb(enum aarch64_insn_mb_type type);
++u32 aarch64_insn_gen_dsb(enum aarch64_insn_mb_type type);
+ u32 aarch64_insn_gen_mrs(enum aarch64_insn_register result,
+ enum aarch64_insn_system_register sysreg);
+
+diff --git a/arch/arm64/include/asm/spectre.h b/arch/arm64/include/asm/spectre.h
+index f1524cdeacf1c4..8fef1262609011 100644
+--- a/arch/arm64/include/asm/spectre.h
++++ b/arch/arm64/include/asm/spectre.h
+@@ -97,6 +97,9 @@ enum mitigation_state arm64_get_meltdown_state(void);
+
+ enum mitigation_state arm64_get_spectre_bhb_state(void);
+ bool is_spectre_bhb_affected(const struct arm64_cpu_capabilities *entry, int scope);
++extern bool __nospectre_bhb;
++u8 get_spectre_bhb_loop_value(void);
++bool is_spectre_bhb_fw_mitigated(void);
+ void spectre_bhb_enable_mitigation(const struct arm64_cpu_capabilities *__unused);
+ bool try_emulate_el1_ssbs(struct pt_regs *regs, u32 instr);
+
+diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
+index d561cf3b8ac7b1..59e9dca1595d3f 100644
+--- a/arch/arm64/kernel/cpufeature.c
++++ b/arch/arm64/kernel/cpufeature.c
+@@ -113,7 +113,14 @@ static struct arm64_cpu_capabilities const __ro_after_init *cpucap_ptrs[ARM64_NC
+
+ DECLARE_BITMAP(boot_cpucaps, ARM64_NCAPS);
+
+-bool arm64_use_ng_mappings = false;
++/*
++ * arm64_use_ng_mappings must be placed in the .data section, otherwise it
++ * ends up in the .bss section where it is initialized in early_map_kernel()
++ * after the MMU (with the idmap) was enabled. create_init_idmap() - which
++ * runs before early_map_kernel() and reads the variable via PTE_MAYBE_NG -
++ * may end up generating an incorrect idmap page table attributes.
++ */
++bool arm64_use_ng_mappings __read_mostly = false;
+ EXPORT_SYMBOL(arm64_use_ng_mappings);
+
+ DEFINE_PER_CPU_READ_MOSTLY(const char *, this_cpu_vector) = vectors;
+diff --git a/arch/arm64/kernel/proton-pack.c b/arch/arm64/kernel/proton-pack.c
+index 30e79f111b35e3..8ef3335ecff722 100644
+--- a/arch/arm64/kernel/proton-pack.c
++++ b/arch/arm64/kernel/proton-pack.c
+@@ -891,6 +891,7 @@ static u8 spectre_bhb_loop_affected(void)
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_A78AE),
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_A78C),
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_X1),
++ MIDR_ALL_VERSIONS(MIDR_CORTEX_X1C),
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_A710),
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_X2),
+ MIDR_ALL_VERSIONS(MIDR_NEOVERSE_N2),
+@@ -998,6 +999,11 @@ bool is_spectre_bhb_affected(const struct arm64_cpu_capabilities *entry,
+ return true;
+ }
+
++u8 get_spectre_bhb_loop_value(void)
++{
++ return max_bhb_k;
++}
++
+ static void this_cpu_set_vectors(enum arm64_bp_harden_el1_vectors slot)
+ {
+ const char *v = arm64_get_bp_hardening_vector(slot);
+@@ -1015,7 +1021,7 @@ static void this_cpu_set_vectors(enum arm64_bp_harden_el1_vectors slot)
+ isb();
+ }
+
+-static bool __read_mostly __nospectre_bhb;
++bool __read_mostly __nospectre_bhb;
+ static int __init parse_spectre_bhb_param(char *str)
+ {
+ __nospectre_bhb = true;
+@@ -1093,6 +1099,11 @@ void spectre_bhb_enable_mitigation(const struct arm64_cpu_capabilities *entry)
+ update_mitigation_state(&spectre_bhb_state, state);
+ }
+
++bool is_spectre_bhb_fw_mitigated(void)
++{
++ return test_bit(BHB_FW, &system_bhb_mitigations);
++}
++
+ /* Patched to NOP when enabled */
+ void noinstr spectre_bhb_patch_loop_mitigation_enable(struct alt_instr *alt,
+ __le32 *origptr,
+diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
+index 1f55b0c7b11d94..06f296d0180955 100644
+--- a/arch/arm64/kvm/mmu.c
++++ b/arch/arm64/kvm/mmu.c
+@@ -1489,6 +1489,11 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
+ return -EFAULT;
+ }
+
++ if (!is_protected_kvm_enabled())
++ memcache = &vcpu->arch.mmu_page_cache;
++ else
++ memcache = &vcpu->arch.pkvm_memcache;
++
+ /*
+ * Permission faults just need to update the existing leaf entry,
+ * and so normally don't require allocations from the memcache. The
+@@ -1498,13 +1503,11 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
+ if (!fault_is_perm || (logging_active && write_fault)) {
+ int min_pages = kvm_mmu_cache_min_pages(vcpu->arch.hw_mmu);
+
+- if (!is_protected_kvm_enabled()) {
+- memcache = &vcpu->arch.mmu_page_cache;
++ if (!is_protected_kvm_enabled())
+ ret = kvm_mmu_topup_memory_cache(memcache, min_pages);
+- } else {
+- memcache = &vcpu->arch.pkvm_memcache;
++ else
+ ret = topup_hyp_memcache(memcache, min_pages);
+- }
++
+ if (ret)
+ return ret;
+ }
+diff --git a/arch/arm64/lib/insn.c b/arch/arm64/lib/insn.c
+index b008a9b46a7ff4..36d33e064ea01b 100644
+--- a/arch/arm64/lib/insn.c
++++ b/arch/arm64/lib/insn.c
+@@ -5,6 +5,7 @@
+ *
+ * Copyright (C) 2014-2016 Zi Shen Lim <zlim.lnx@gmail.com>
+ */
++#include <linux/bitfield.h>
+ #include <linux/bitops.h>
+ #include <linux/bug.h>
+ #include <linux/printk.h>
+@@ -1471,43 +1472,41 @@ u32 aarch64_insn_gen_extr(enum aarch64_insn_variant variant,
+ return aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RM, insn, Rm);
+ }
+
+-u32 aarch64_insn_gen_dmb(enum aarch64_insn_mb_type type)
++static u32 __get_barrier_crm_val(enum aarch64_insn_mb_type type)
+ {
+- u32 opt;
+- u32 insn;
+-
+ switch (type) {
+ case AARCH64_INSN_MB_SY:
+- opt = 0xf;
+- break;
++ return 0xf;
+ case AARCH64_INSN_MB_ST:
+- opt = 0xe;
+- break;
++ return 0xe;
+ case AARCH64_INSN_MB_LD:
+- opt = 0xd;
+- break;
++ return 0xd;
+ case AARCH64_INSN_MB_ISH:
+- opt = 0xb;
+- break;
++ return 0xb;
+ case AARCH64_INSN_MB_ISHST:
+- opt = 0xa;
+- break;
++ return 0xa;
+ case AARCH64_INSN_MB_ISHLD:
+- opt = 0x9;
+- break;
++ return 0x9;
+ case AARCH64_INSN_MB_NSH:
+- opt = 0x7;
+- break;
++ return 0x7;
+ case AARCH64_INSN_MB_NSHST:
+- opt = 0x6;
+- break;
++ return 0x6;
+ case AARCH64_INSN_MB_NSHLD:
+- opt = 0x5;
+- break;
++ return 0x5;
+ default:
+- pr_err("%s: unknown dmb type %d\n", __func__, type);
++ pr_err("%s: unknown barrier type %d\n", __func__, type);
+ return AARCH64_BREAK_FAULT;
+ }
++}
++
++u32 aarch64_insn_gen_dmb(enum aarch64_insn_mb_type type)
++{
++ u32 opt;
++ u32 insn;
++
++ opt = __get_barrier_crm_val(type);
++ if (opt == AARCH64_BREAK_FAULT)
++ return AARCH64_BREAK_FAULT;
+
+ insn = aarch64_insn_get_dmb_value();
+ insn &= ~GENMASK(11, 8);
+@@ -1516,6 +1515,21 @@ u32 aarch64_insn_gen_dmb(enum aarch64_insn_mb_type type)
+ return insn;
+ }
+
++u32 aarch64_insn_gen_dsb(enum aarch64_insn_mb_type type)
++{
++ u32 opt, insn;
++
++ opt = __get_barrier_crm_val(type);
++ if (opt == AARCH64_BREAK_FAULT)
++ return AARCH64_BREAK_FAULT;
++
++ insn = aarch64_insn_get_dsb_base_value();
++ insn &= ~GENMASK(11, 8);
++ insn |= (opt << 8);
++
++ return insn;
++}
++
+ u32 aarch64_insn_gen_mrs(enum aarch64_insn_register result,
+ enum aarch64_insn_system_register sysreg)
+ {
+diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
+index 8446848edddb83..3126881fe67680 100644
+--- a/arch/arm64/net/bpf_jit_comp.c
++++ b/arch/arm64/net/bpf_jit_comp.c
+@@ -7,6 +7,7 @@
+
+ #define pr_fmt(fmt) "bpf_jit: " fmt
+
++#include <linux/arm-smccc.h>
+ #include <linux/bitfield.h>
+ #include <linux/bpf.h>
+ #include <linux/filter.h>
+@@ -17,6 +18,7 @@
+ #include <asm/asm-extable.h>
+ #include <asm/byteorder.h>
+ #include <asm/cacheflush.h>
++#include <asm/cpufeature.h>
+ #include <asm/debug-monitors.h>
+ #include <asm/insn.h>
+ #include <asm/text-patching.h>
+@@ -864,7 +866,51 @@ static void build_plt(struct jit_ctx *ctx)
+ plt->target = (u64)&dummy_tramp;
+ }
+
+-static void build_epilogue(struct jit_ctx *ctx)
++/* Clobbers BPF registers 1-4, aka x0-x3 */
++static void __maybe_unused build_bhb_mitigation(struct jit_ctx *ctx)
++{
++ const u8 r1 = bpf2a64[BPF_REG_1]; /* aka x0 */
++ u8 k = get_spectre_bhb_loop_value();
++
++ if (!IS_ENABLED(CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY) ||
++ cpu_mitigations_off() || __nospectre_bhb ||
++ arm64_get_spectre_v2_state() == SPECTRE_VULNERABLE)
++ return;
++
++ if (capable(CAP_SYS_ADMIN))
++ return;
++
++ if (supports_clearbhb(SCOPE_SYSTEM)) {
++ emit(aarch64_insn_gen_hint(AARCH64_INSN_HINT_CLEARBHB), ctx);
++ return;
++ }
++
++ if (k) {
++ emit_a64_mov_i64(r1, k, ctx);
++ emit(A64_B(1), ctx);
++ emit(A64_SUBS_I(true, r1, r1, 1), ctx);
++ emit(A64_B_(A64_COND_NE, -2), ctx);
++ emit(aarch64_insn_gen_dsb(AARCH64_INSN_MB_ISH), ctx);
++ emit(aarch64_insn_get_isb_value(), ctx);
++ }
++
++ if (is_spectre_bhb_fw_mitigated()) {
++ emit(A64_ORR_I(false, r1, AARCH64_INSN_REG_ZR,
++ ARM_SMCCC_ARCH_WORKAROUND_3), ctx);
++ switch (arm_smccc_1_1_get_conduit()) {
++ case SMCCC_CONDUIT_HVC:
++ emit(aarch64_insn_get_hvc_value(), ctx);
++ break;
++ case SMCCC_CONDUIT_SMC:
++ emit(aarch64_insn_get_smc_value(), ctx);
++ break;
++ default:
++ pr_err_once("Firmware mitigation enabled with unknown conduit\n");
++ }
++ }
++}
++
++static void build_epilogue(struct jit_ctx *ctx, bool was_classic)
+ {
+ const u8 r0 = bpf2a64[BPF_REG_0];
+ const u8 ptr = bpf2a64[TCCNT_PTR];
+@@ -877,10 +923,13 @@ static void build_epilogue(struct jit_ctx *ctx)
+
+ emit(A64_POP(A64_ZR, ptr, A64_SP), ctx);
+
++ if (was_classic)
++ build_bhb_mitigation(ctx);
++
+ /* Restore FP/LR registers */
+ emit(A64_POP(A64_FP, A64_LR, A64_SP), ctx);
+
+- /* Set return value */
++ /* Move the return value from bpf:r0 (aka x7) to x0 */
+ emit(A64_MOV(1, A64_R(0), r0), ctx);
+
+ /* Authenticate lr */
+@@ -1817,7 +1866,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
+ }
+
+ ctx.epilogue_offset = ctx.idx;
+- build_epilogue(&ctx);
++ build_epilogue(&ctx, was_classic);
+ build_plt(&ctx);
+
+ extable_align = __alignof__(struct exception_table_entry);
+@@ -1880,7 +1929,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
+ goto out_free_hdr;
+ }
+
+- build_epilogue(&ctx);
++ build_epilogue(&ctx, was_classic);
+ build_plt(&ctx);
+
+ /* Extra pass to validate JITed code. */
+diff --git a/arch/mips/include/asm/ptrace.h b/arch/mips/include/asm/ptrace.h
+index 85fa9962266a2b..ef72c46b556887 100644
+--- a/arch/mips/include/asm/ptrace.h
++++ b/arch/mips/include/asm/ptrace.h
+@@ -65,7 +65,8 @@ static inline void instruction_pointer_set(struct pt_regs *regs,
+
+ /* Query offset/name of register from its name/offset */
+ extern int regs_query_register_offset(const char *name);
+-#define MAX_REG_OFFSET (offsetof(struct pt_regs, __last))
++#define MAX_REG_OFFSET \
++ (offsetof(struct pt_regs, __last) - sizeof(unsigned long))
+
+ /**
+ * regs_get_register() - get register value from its offset
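
The MAX_REG_OFFSET change above matters because callers treat it as the offset of the last valid register: regs_get_register()-style helpers reject anything greater, so defining it as offsetof(struct pt_regs, __last) let a caller read one word past the saved registers. Below is a minimal userspace sketch of that bounds check, using a toy structure and helper names (not the kernel's pt_regs):

#include <assert.h>
#include <stddef.h>
#include <stdio.h>

/* Toy stand-in for struct pt_regs: saved registers followed by an
 * end-of-struct marker. Not the kernel layout. */
struct toy_regs {
	unsigned long regs[4];
	unsigned long __last[0];
};

/* Offset of the last valid register, mirroring the fixed definition. */
#define TOY_MAX_REG_OFFSET \
	(offsetof(struct toy_regs, __last) - sizeof(unsigned long))

/* Reject offsets beyond the last register, like regs_get_register(). */
static unsigned long toy_get_register(struct toy_regs *r, unsigned int off)
{
	if (off > TOY_MAX_REG_OFFSET)
		return 0;
	return *(unsigned long *)((char *)r + off);
}

int main(void)
{
	struct toy_regs r = { .regs = { 1, 2, 3, 4 } };

	/* offsetof(__last) is one word past the registers: must be rejected. */
	assert(toy_get_register(&r, offsetof(struct toy_regs, __last)) == 0);
	/* The last register itself is still reachable. */
	assert(toy_get_register(&r, TOY_MAX_REG_OFFSET) == 4);
	printf("MAX_REG_OFFSET bounds check ok\n");
	return 0;
}
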
+diff --git a/arch/riscv/kernel/process.c b/arch/riscv/kernel/process.c
+index 7c244de7718008..15d8f75902f858 100644
+--- a/arch/riscv/kernel/process.c
++++ b/arch/riscv/kernel/process.c
+@@ -275,6 +275,9 @@ long set_tagged_addr_ctrl(struct task_struct *task, unsigned long arg)
+ unsigned long pmm;
+ u8 pmlen;
+
++ if (!riscv_has_extension_unlikely(RISCV_ISA_EXT_SUPM))
++ return -EINVAL;
++
+ if (is_compat_thread(ti))
+ return -EINVAL;
+
+@@ -330,6 +333,9 @@ long get_tagged_addr_ctrl(struct task_struct *task)
+ struct thread_info *ti = task_thread_info(task);
+ long ret = 0;
+
++ if (!riscv_has_extension_unlikely(RISCV_ISA_EXT_SUPM))
++ return -EINVAL;
++
+ if (is_compat_thread(ti))
+ return -EINVAL;
+
+diff --git a/arch/riscv/kernel/traps.c b/arch/riscv/kernel/traps.c
+index 8ff8e8b36524b7..9c83848797a78b 100644
+--- a/arch/riscv/kernel/traps.c
++++ b/arch/riscv/kernel/traps.c
+@@ -198,47 +198,57 @@ asmlinkage __visible __trap_section void do_trap_insn_illegal(struct pt_regs *re
+ DO_ERROR_INFO(do_trap_load_fault,
+ SIGSEGV, SEGV_ACCERR, "load access fault");
+
+-asmlinkage __visible __trap_section void do_trap_load_misaligned(struct pt_regs *regs)
++enum misaligned_access_type {
++ MISALIGNED_STORE,
++ MISALIGNED_LOAD,
++};
++static const struct {
++ const char *type_str;
++ int (*handler)(struct pt_regs *regs);
++} misaligned_handler[] = {
++ [MISALIGNED_STORE] = {
++ .type_str = "Oops - store (or AMO) address misaligned",
++ .handler = handle_misaligned_store,
++ },
++ [MISALIGNED_LOAD] = {
++ .type_str = "Oops - load address misaligned",
++ .handler = handle_misaligned_load,
++ },
++};
++
++static void do_trap_misaligned(struct pt_regs *regs, enum misaligned_access_type type)
+ {
++ irqentry_state_t state;
++
+ if (user_mode(regs)) {
+ irqentry_enter_from_user_mode(regs);
++ local_irq_enable();
++ } else {
++ state = irqentry_nmi_enter(regs);
++ }
+
+- if (handle_misaligned_load(regs))
+- do_trap_error(regs, SIGBUS, BUS_ADRALN, regs->epc,
+- "Oops - load address misaligned");
++ if (misaligned_handler[type].handler(regs))
++ do_trap_error(regs, SIGBUS, BUS_ADRALN, regs->epc,
++ misaligned_handler[type].type_str);
+
++ if (user_mode(regs)) {
++ local_irq_disable();
+ irqentry_exit_to_user_mode(regs);
+ } else {
+- irqentry_state_t state = irqentry_nmi_enter(regs);
+-
+- if (handle_misaligned_load(regs))
+- do_trap_error(regs, SIGBUS, BUS_ADRALN, regs->epc,
+- "Oops - load address misaligned");
+-
+ irqentry_nmi_exit(regs, state);
+ }
+ }
+
+-asmlinkage __visible __trap_section void do_trap_store_misaligned(struct pt_regs *regs)
++asmlinkage __visible __trap_section void do_trap_load_misaligned(struct pt_regs *regs)
+ {
+- if (user_mode(regs)) {
+- irqentry_enter_from_user_mode(regs);
+-
+- if (handle_misaligned_store(regs))
+- do_trap_error(regs, SIGBUS, BUS_ADRALN, regs->epc,
+- "Oops - store (or AMO) address misaligned");
+-
+- irqentry_exit_to_user_mode(regs);
+- } else {
+- irqentry_state_t state = irqentry_nmi_enter(regs);
+-
+- if (handle_misaligned_store(regs))
+- do_trap_error(regs, SIGBUS, BUS_ADRALN, regs->epc,
+- "Oops - store (or AMO) address misaligned");
++ do_trap_misaligned(regs, MISALIGNED_LOAD);
++}
+
+- irqentry_nmi_exit(regs, state);
+- }
++asmlinkage __visible __trap_section void do_trap_store_misaligned(struct pt_regs *regs)
++{
++ do_trap_misaligned(regs, MISALIGNED_STORE);
+ }
++
+ DO_ERROR_INFO(do_trap_store_fault,
+ SIGSEGV, SEGV_ACCERR, "store (or AMO) access fault");
+ DO_ERROR_INFO(do_trap_ecall_s,
+diff --git a/arch/riscv/kernel/traps_misaligned.c b/arch/riscv/kernel/traps_misaligned.c
+index 4354c87c0376fd..dde5d11dc1b50d 100644
+--- a/arch/riscv/kernel/traps_misaligned.c
++++ b/arch/riscv/kernel/traps_misaligned.c
+@@ -88,6 +88,13 @@
+ #define INSN_MATCH_C_FSWSP 0xe002
+ #define INSN_MASK_C_FSWSP 0xe003
+
++#define INSN_MATCH_C_LHU 0x8400
++#define INSN_MASK_C_LHU 0xfc43
++#define INSN_MATCH_C_LH 0x8440
++#define INSN_MASK_C_LH 0xfc43
++#define INSN_MATCH_C_SH 0x8c00
++#define INSN_MASK_C_SH 0xfc43
++
+ #define INSN_LEN(insn) ((((insn) & 0x3) < 0x3) ? 2 : 4)
+
+ #if defined(CONFIG_64BIT)
+@@ -431,6 +438,13 @@ static int handle_scalar_misaligned_load(struct pt_regs *regs)
+ fp = 1;
+ len = 4;
+ #endif
++ } else if ((insn & INSN_MASK_C_LHU) == INSN_MATCH_C_LHU) {
++ len = 2;
++ insn = RVC_RS2S(insn) << SH_RD;
++ } else if ((insn & INSN_MASK_C_LH) == INSN_MATCH_C_LH) {
++ len = 2;
++ shift = 8 * (sizeof(ulong) - len);
++ insn = RVC_RS2S(insn) << SH_RD;
+ } else {
+ regs->epc = epc;
+ return -1;
+@@ -530,6 +544,9 @@ static int handle_scalar_misaligned_store(struct pt_regs *regs)
+ len = 4;
+ val.data_ulong = GET_F32_RS2C(insn, regs);
+ #endif
++ } else if ((insn & INSN_MASK_C_SH) == INSN_MATCH_C_SH) {
++ len = 2;
++ val.data_ulong = GET_RS2S(insn, regs);
+ } else {
+ regs->epc = epc;
+ return -1;
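
The new C.LH/C.LHU/C.SH decode above relies on the emulation's shift trick: for C.LH it sets shift = 8 * (sizeof(ulong) - len) so the loaded value can be sign-extended with (long)(val << shift) >> shift, while C.LHU leaves shift at zero and the halfword stays zero-extended. A small standalone sketch of that arithmetic follows (the helper name is illustrative):

#include <assert.h>
#include <stdio.h>

/* Extend the low `len` bytes of val: arithmetic shift pair for the signed
 * case (C.LH), no shift for the unsigned case (C.LHU). */
static unsigned long extend(unsigned long val, unsigned int len, int is_signed)
{
	unsigned int shift = is_signed ? 8 * (sizeof(unsigned long) - len) : 0;

	return (unsigned long)((long)(val << shift) >> shift);
}

int main(void)
{
	/* 0xFFEE is -18 as a signed 16-bit value: C.LH sign-extends it... */
	assert(extend(0xFFEE, 2, 1) == (unsigned long)-18);
	/* ...while C.LHU zero-extends the same halfword. */
	assert(extend(0xFFEE, 2, 0) == 0xFFEEUL);
	printf("halfword extension ok\n");
	return 0;
}
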
+diff --git a/arch/s390/kernel/entry.S b/arch/s390/kernel/entry.S
+index 88e09a650d2dfe..ce8bac77cbc1b5 100644
+--- a/arch/s390/kernel/entry.S
++++ b/arch/s390/kernel/entry.S
+@@ -601,7 +601,8 @@ SYM_CODE_START(stack_overflow)
+ stmg %r0,%r7,__PT_R0(%r11)
+ stmg %r8,%r9,__PT_PSW(%r11)
+ mvc __PT_R8(64,%r11),0(%r14)
+- stg %r10,__PT_ORIG_GPR2(%r11) # store last break to orig_gpr2
++ GET_LC %r2
++ mvc __PT_ORIG_GPR2(8,%r11),__LC_PGM_LAST_BREAK(%r2)
+ xc __SF_BACKCHAIN(8,%r15),__SF_BACKCHAIN(%r15)
+ lgr %r2,%r11 # pass pointer to pt_regs
+ jg kernel_stack_overflow
+diff --git a/arch/s390/pci/pci_clp.c b/arch/s390/pci/pci_clp.c
+index 14bf7e8d06b7a7..1f5b942e514d3b 100644
+--- a/arch/s390/pci/pci_clp.c
++++ b/arch/s390/pci/pci_clp.c
+@@ -427,6 +427,8 @@ static void __clp_add(struct clp_fh_list_entry *entry, void *data)
+ return;
+ }
+ zdev = zpci_create_device(entry->fid, entry->fh, entry->config_state);
++ if (IS_ERR(zdev))
++ return;
+ list_add_tail(&zdev->entry, scan_list);
+ }
+
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index aeb95b6e553691..f86e7072a5ba3b 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -2764,6 +2764,18 @@ config MITIGATION_SSB
+ of speculative execution in a similar way to the Meltdown and Spectre
+ security vulnerabilities.
+
++config MITIGATION_ITS
++ bool "Enable Indirect Target Selection mitigation"
++ depends on CPU_SUP_INTEL && X86_64
++ depends on MITIGATION_RETPOLINE && MITIGATION_RETHUNK
++ select EXECMEM
++ default y
++ help
++ Enable Indirect Target Selection (ITS) mitigation. ITS is a bug in
++ BPU on some Intel CPUs that may allow Spectre V2 style attacks. If
++ disabled, mitigation cannot be enabled via cmdline.
++ See <file:Documentation/admin-guide/hw-vuln/indirect-target-selection.rst>
++
+ endif
+
+ config ARCH_HAS_ADD_PAGES
+diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
+index f52dbe0ad93cd1..b4cd6ddde97471 100644
+--- a/arch/x86/entry/entry_64.S
++++ b/arch/x86/entry/entry_64.S
+@@ -1523,7 +1523,9 @@ SYM_CODE_END(rewind_stack_and_make_dead)
+ * ORC to unwind properly.
+ *
+ * The alignment is for performance and not for safety, and may be safely
+- * refactored in the future if needed.
++ * refactored in the future if needed. The .skips are for safety, to ensure
++ * that all RETs are in the second half of a cacheline to mitigate Indirect
++ * Target Selection, rather than taking the slowpath via its_return_thunk.
+ */
+ SYM_FUNC_START(clear_bhb_loop)
+ push %rbp
+@@ -1533,10 +1535,22 @@ SYM_FUNC_START(clear_bhb_loop)
+ call 1f
+ jmp 5f
+ .align 64, 0xcc
++ /*
++ * Shift instructions so that the RET is in the upper half of the
++ * cacheline and don't take the slowpath to its_return_thunk.
++ */
++ .skip 32 - (.Lret1 - 1f), 0xcc
+ ANNOTATE_INTRA_FUNCTION_CALL
+ 1: call 2f
+- RET
++.Lret1: RET
+ .align 64, 0xcc
++ /*
++	 * As above, shift instructions for RET at .Lret2 as well.
++	 *
++	 * This should ideally be: .skip 32 - (.Lret2 - 2f), 0xcc
++ * but some Clang versions (e.g. 18) don't like this.
++ */
++ .skip 32 - 18, 0xcc
+ 2: movl $5, %eax
+ 3: jmp 4f
+ nop
+@@ -1544,7 +1558,7 @@ SYM_FUNC_START(clear_bhb_loop)
+ jnz 3b
+ sub $1, %ecx
+ jnz 1b
+- RET
++.Lret2: RET
+ 5: lfence
+ pop %rbp
+ RET
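
The .skip padding above, the cpu_wants_rethunk_at()/cpu_wants_indirect_its_thunk_at() helpers in the following hunks, and the linker asserts later all reduce to one test: a RET or indirect branch is ITS-safe only when it sits in the upper 32 bytes of its 64-byte cacheline, i.e. bit 5 of its address is set. A small sketch of that predicate (helper name is illustrative):

#include <assert.h>
#include <stdio.h>

/* An address is ITS-safe when it falls in the upper half (bytes 32..63)
 * of its 64-byte cacheline, i.e. bit 5 of the address is set. */
static int in_upper_half_of_cacheline(unsigned long addr)
{
	return (addr & 0x20) != 0;
}

int main(void)
{
	assert(!in_upper_half_of_cacheline(0x1000));	/* byte 0 of a line */
	assert(!in_upper_half_of_cacheline(0x101f));	/* byte 31 */
	assert(in_upper_half_of_cacheline(0x1020));	/* byte 32 */
	assert(in_upper_half_of_cacheline(0x103f));	/* byte 63 */
	printf("cacheline-half predicate ok\n");
	return 0;
}
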
+diff --git a/arch/x86/include/asm/alternative.h b/arch/x86/include/asm/alternative.h
+index e3903b731305c9..9e01490220ece3 100644
+--- a/arch/x86/include/asm/alternative.h
++++ b/arch/x86/include/asm/alternative.h
+@@ -6,6 +6,7 @@
+ #include <linux/stringify.h>
+ #include <linux/objtool.h>
+ #include <asm/asm.h>
++#include <asm/bug.h>
+
+ #define ALT_FLAGS_SHIFT 16
+
+@@ -125,6 +126,37 @@ static __always_inline int x86_call_depth_emit_accounting(u8 **pprog,
+ }
+ #endif
+
++#ifdef CONFIG_MITIGATION_ITS
++extern void its_init_mod(struct module *mod);
++extern void its_fini_mod(struct module *mod);
++extern void its_free_mod(struct module *mod);
++extern u8 *its_static_thunk(int reg);
++#else /* CONFIG_MITIGATION_ITS */
++static inline void its_init_mod(struct module *mod) { }
++static inline void its_fini_mod(struct module *mod) { }
++static inline void its_free_mod(struct module *mod) { }
++static inline u8 *its_static_thunk(int reg)
++{
++ WARN_ONCE(1, "ITS not compiled in");
++
++ return NULL;
++}
++#endif
++
++#if defined(CONFIG_MITIGATION_RETHUNK) && defined(CONFIG_OBJTOOL)
++extern bool cpu_wants_rethunk(void);
++extern bool cpu_wants_rethunk_at(void *addr);
++#else
++static __always_inline bool cpu_wants_rethunk(void)
++{
++ return false;
++}
++static __always_inline bool cpu_wants_rethunk_at(void *addr)
++{
++ return false;
++}
++#endif
++
+ #ifdef CONFIG_SMP
+ extern void alternatives_smp_module_add(struct module *mod, char *name,
+ void *locks, void *locks_end,
+diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
+index 508c0dad116bc4..b8fbd847c34afd 100644
+--- a/arch/x86/include/asm/cpufeatures.h
++++ b/arch/x86/include/asm/cpufeatures.h
+@@ -483,6 +483,7 @@
+ #define X86_FEATURE_AMD_FAST_CPPC (21*32 + 5) /* Fast CPPC */
+ #define X86_FEATURE_AMD_HETEROGENEOUS_CORES (21*32 + 6) /* Heterogeneous Core Topology */
+ #define X86_FEATURE_AMD_WORKLOAD_CLASS (21*32 + 7) /* Workload Classification */
++#define X86_FEATURE_INDIRECT_THUNK_ITS (21*32 + 8) /* Use thunk for indirect branches in lower half of cacheline */
+
+ /*
+ * BUG word(s)
+@@ -534,4 +535,6 @@
+ #define X86_BUG_RFDS X86_BUG(1*32 + 2) /* "rfds" CPU is vulnerable to Register File Data Sampling */
+ #define X86_BUG_BHI X86_BUG(1*32 + 3) /* "bhi" CPU is affected by Branch History Injection */
+ #define X86_BUG_IBPB_NO_RET X86_BUG(1*32 + 4) /* "ibpb_no_ret" IBPB omits return target predictions */
++#define X86_BUG_ITS X86_BUG(1*32 + 5) /* "its" CPU is affected by Indirect Target Selection */
++#define X86_BUG_ITS_NATIVE_ONLY X86_BUG(1*32 + 6) /* "its_native_only" CPU is affected by ITS, VMX is not affected */
+ #endif /* _ASM_X86_CPUFEATURES_H */
+diff --git a/arch/x86/include/asm/microcode.h b/arch/x86/include/asm/microcode.h
+index 695e569159c1d1..be7cddc414e4fb 100644
+--- a/arch/x86/include/asm/microcode.h
++++ b/arch/x86/include/asm/microcode.h
+@@ -17,10 +17,12 @@ struct ucode_cpu_info {
+ void load_ucode_bsp(void);
+ void load_ucode_ap(void);
+ void microcode_bsp_resume(void);
++bool __init microcode_loader_disabled(void);
+ #else
+ static inline void load_ucode_bsp(void) { }
+ static inline void load_ucode_ap(void) { }
+ static inline void microcode_bsp_resume(void) { }
++static inline bool __init microcode_loader_disabled(void) { return false; }
+ #endif
+
+ extern unsigned long initrd_start_early;
+diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
+index 72765b2fe0d874..d4308e78a009a3 100644
+--- a/arch/x86/include/asm/msr-index.h
++++ b/arch/x86/include/asm/msr-index.h
+@@ -209,6 +209,14 @@
+ * VERW clears CPU Register
+ * File.
+ */
++#define ARCH_CAP_ITS_NO BIT_ULL(62) /*
++ * Not susceptible to
++ * Indirect Target Selection.
++ * This bit is not set by
++ * HW, but is synthesized by
++ * VMMs for guests to know
++ * their affected status.
++ */
+
+ #define MSR_IA32_FLUSH_CMD 0x0000010b
+ #define L1D_FLUSH BIT(0) /*
+diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
+index aee26bb8230f86..b1ac1d0d29ca89 100644
+--- a/arch/x86/include/asm/nospec-branch.h
++++ b/arch/x86/include/asm/nospec-branch.h
+@@ -337,10 +337,14 @@
+
+ #else /* __ASSEMBLY__ */
+
++#define ITS_THUNK_SIZE 64
++
+ typedef u8 retpoline_thunk_t[RETPOLINE_THUNK_SIZE];
++typedef u8 its_thunk_t[ITS_THUNK_SIZE];
+ extern retpoline_thunk_t __x86_indirect_thunk_array[];
+ extern retpoline_thunk_t __x86_indirect_call_thunk_array[];
+ extern retpoline_thunk_t __x86_indirect_jump_thunk_array[];
++extern its_thunk_t __x86_indirect_its_thunk_array[];
+
+ #ifdef CONFIG_MITIGATION_RETHUNK
+ extern void __x86_return_thunk(void);
+@@ -364,6 +368,12 @@ static inline void srso_return_thunk(void) {}
+ static inline void srso_alias_return_thunk(void) {}
+ #endif
+
++#ifdef CONFIG_MITIGATION_ITS
++extern void its_return_thunk(void);
++#else
++static inline void its_return_thunk(void) {}
++#endif
++
+ extern void retbleed_return_thunk(void);
+ extern void srso_return_thunk(void);
+ extern void srso_alias_return_thunk(void);
+diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
+index c71b575bf2292d..f843fd37cf9870 100644
+--- a/arch/x86/kernel/alternative.c
++++ b/arch/x86/kernel/alternative.c
+@@ -18,6 +18,7 @@
+ #include <linux/mmu_context.h>
+ #include <linux/bsearch.h>
+ #include <linux/sync_core.h>
++#include <linux/execmem.h>
+ #include <asm/text-patching.h>
+ #include <asm/alternative.h>
+ #include <asm/sections.h>
+@@ -31,6 +32,8 @@
+ #include <asm/paravirt.h>
+ #include <asm/asm-prototypes.h>
+ #include <asm/cfi.h>
++#include <asm/ibt.h>
++#include <asm/set_memory.h>
+
+ int __read_mostly alternatives_patched;
+
+@@ -124,6 +127,136 @@ const unsigned char * const x86_nops[ASM_NOP_MAX+1] =
+ #endif
+ };
+
++#ifdef CONFIG_MITIGATION_ITS
++
++#ifdef CONFIG_MODULES
++static struct module *its_mod;
++#endif
++static void *its_page;
++static unsigned int its_offset;
++
++/* Initialize a thunk with the "jmp *reg; int3" instructions. */
++static void *its_init_thunk(void *thunk, int reg)
++{
++ u8 *bytes = thunk;
++ int i = 0;
++
++ if (reg >= 8) {
++ bytes[i++] = 0x41; /* REX.B prefix */
++ reg -= 8;
++ }
++ bytes[i++] = 0xff;
++ bytes[i++] = 0xe0 + reg; /* jmp *reg */
++ bytes[i++] = 0xcc;
++
++ return thunk;
++}
++
++#ifdef CONFIG_MODULES
++void its_init_mod(struct module *mod)
++{
++ if (!cpu_feature_enabled(X86_FEATURE_INDIRECT_THUNK_ITS))
++ return;
++
++ mutex_lock(&text_mutex);
++ its_mod = mod;
++ its_page = NULL;
++}
++
++void its_fini_mod(struct module *mod)
++{
++ if (!cpu_feature_enabled(X86_FEATURE_INDIRECT_THUNK_ITS))
++ return;
++
++ WARN_ON_ONCE(its_mod != mod);
++
++ its_mod = NULL;
++ its_page = NULL;
++ mutex_unlock(&text_mutex);
++
++ for (int i = 0; i < mod->its_num_pages; i++) {
++ void *page = mod->its_page_array[i];
++ set_memory_rox((unsigned long)page, 1);
++ }
++}
++
++void its_free_mod(struct module *mod)
++{
++ if (!cpu_feature_enabled(X86_FEATURE_INDIRECT_THUNK_ITS))
++ return;
++
++ for (int i = 0; i < mod->its_num_pages; i++) {
++ void *page = mod->its_page_array[i];
++ execmem_free(page);
++ }
++ kfree(mod->its_page_array);
++}
++#endif /* CONFIG_MODULES */
++
++static void *its_alloc(void)
++{
++ void *page __free(execmem) = execmem_alloc(EXECMEM_MODULE_TEXT, PAGE_SIZE);
++
++ if (!page)
++ return NULL;
++
++#ifdef CONFIG_MODULES
++ if (its_mod) {
++ void *tmp = krealloc(its_mod->its_page_array,
++ (its_mod->its_num_pages+1) * sizeof(void *),
++ GFP_KERNEL);
++ if (!tmp)
++ return NULL;
++
++ its_mod->its_page_array = tmp;
++ its_mod->its_page_array[its_mod->its_num_pages++] = page;
++ }
++#endif /* CONFIG_MODULES */
++
++ return no_free_ptr(page);
++}
++
++static void *its_allocate_thunk(int reg)
++{
++ int size = 3 + (reg / 8);
++ void *thunk;
++
++ if (!its_page || (its_offset + size - 1) >= PAGE_SIZE) {
++ its_page = its_alloc();
++ if (!its_page) {
++ pr_err("ITS page allocation failed\n");
++ return NULL;
++ }
++ memset(its_page, INT3_INSN_OPCODE, PAGE_SIZE);
++ its_offset = 32;
++ }
++
++ /*
++ * If the indirect branch instruction will be in the lower half
++ * of a cacheline, then update the offset to reach the upper half.
++ */
++ if ((its_offset + size - 1) % 64 < 32)
++ its_offset = ((its_offset - 1) | 0x3F) + 33;
++
++ thunk = its_page + its_offset;
++ its_offset += size;
++
++ set_memory_rw((unsigned long)its_page, 1);
++ thunk = its_init_thunk(thunk, reg);
++ set_memory_rox((unsigned long)its_page, 1);
++
++ return thunk;
++}
++
++u8 *its_static_thunk(int reg)
++{
++ u8 *thunk = __x86_indirect_its_thunk_array[reg];
++
++ return thunk;
++}
++
++#endif
++
+ /*
+ * Nomenclature for variable names to simplify and clarify this code and ease
+ * any potential staring at it:
+@@ -590,7 +723,8 @@ static int emit_indirect(int op, int reg, u8 *bytes)
+ return i;
+ }
+
+-static int emit_call_track_retpoline(void *addr, struct insn *insn, int reg, u8 *bytes)
++static int __emit_trampoline(void *addr, struct insn *insn, u8 *bytes,
++ void *call_dest, void *jmp_dest)
+ {
+ u8 op = insn->opcode.bytes[0];
+ int i = 0;
+@@ -611,7 +745,7 @@ static int emit_call_track_retpoline(void *addr, struct insn *insn, int reg, u8
+ switch (op) {
+ case CALL_INSN_OPCODE:
+ __text_gen_insn(bytes+i, op, addr+i,
+- __x86_indirect_call_thunk_array[reg],
++ call_dest,
+ CALL_INSN_SIZE);
+ i += CALL_INSN_SIZE;
+ break;
+@@ -619,7 +753,7 @@ static int emit_call_track_retpoline(void *addr, struct insn *insn, int reg, u8
+ case JMP32_INSN_OPCODE:
+ clang_jcc:
+ __text_gen_insn(bytes+i, op, addr+i,
+- __x86_indirect_jump_thunk_array[reg],
++ jmp_dest,
+ JMP32_INSN_SIZE);
+ i += JMP32_INSN_SIZE;
+ break;
+@@ -634,6 +768,39 @@ static int emit_call_track_retpoline(void *addr, struct insn *insn, int reg, u8
+ return i;
+ }
+
++static int emit_call_track_retpoline(void *addr, struct insn *insn, int reg, u8 *bytes)
++{
++ return __emit_trampoline(addr, insn, bytes,
++ __x86_indirect_call_thunk_array[reg],
++ __x86_indirect_jump_thunk_array[reg]);
++}
++
++#ifdef CONFIG_MITIGATION_ITS
++static int emit_its_trampoline(void *addr, struct insn *insn, int reg, u8 *bytes)
++{
++ u8 *thunk = __x86_indirect_its_thunk_array[reg];
++ u8 *tmp = its_allocate_thunk(reg);
++
++ if (tmp)
++ thunk = tmp;
++
++ return __emit_trampoline(addr, insn, bytes, thunk, thunk);
++}
++
++/* Check if an indirect branch is at ITS-unsafe address */
++static bool cpu_wants_indirect_its_thunk_at(unsigned long addr, int reg)
++{
++ if (!cpu_feature_enabled(X86_FEATURE_INDIRECT_THUNK_ITS))
++ return false;
++
++ /* Indirect branch opcode is 2 or 3 bytes depending on reg */
++ addr += 1 + reg / 8;
++
++ /* Lower-half of the cacheline? */
++ return !(addr & 0x20);
++}
++#endif
++
+ /*
+ * Rewrite the compiler generated retpoline thunk calls.
+ *
+@@ -708,6 +875,15 @@ static int patch_retpoline(void *addr, struct insn *insn, u8 *bytes)
+ bytes[i++] = 0xe8; /* LFENCE */
+ }
+
++#ifdef CONFIG_MITIGATION_ITS
++ /*
++ * Check if the address of last byte of emitted-indirect is in
++ * lower-half of the cacheline. Such branches need ITS mitigation.
++ */
++ if (cpu_wants_indirect_its_thunk_at((unsigned long)addr + i, reg))
++ return emit_its_trampoline(addr, insn, reg, bytes);
++#endif
++
+ ret = emit_indirect(op, reg, bytes + i);
+ if (ret < 0)
+ return ret;
+@@ -781,6 +957,21 @@ void __init_or_module noinline apply_retpolines(s32 *start, s32 *end,
+
+ #ifdef CONFIG_MITIGATION_RETHUNK
+
++bool cpu_wants_rethunk(void)
++{
++ return cpu_feature_enabled(X86_FEATURE_RETHUNK);
++}
++
++bool cpu_wants_rethunk_at(void *addr)
++{
++ if (!cpu_feature_enabled(X86_FEATURE_RETHUNK))
++ return false;
++ if (x86_return_thunk != its_return_thunk)
++ return true;
++
++ return !((unsigned long)addr & 0x20);
++}
++
+ /*
+ * Rewrite the compiler generated return thunk tail-calls.
+ *
+@@ -797,7 +988,7 @@ static int patch_return(void *addr, struct insn *insn, u8 *bytes)
+ int i = 0;
+
+ /* Patch the custom return thunks... */
+- if (cpu_feature_enabled(X86_FEATURE_RETHUNK)) {
++ if (cpu_wants_rethunk_at(addr)) {
+ i = JMP32_INSN_SIZE;
+ __text_gen_insn(bytes, JMP32_INSN_OPCODE, addr, x86_return_thunk, i);
+ } else {
+@@ -815,7 +1006,7 @@ void __init_or_module noinline apply_returns(s32 *start, s32 *end,
+ {
+ s32 *s;
+
+- if (cpu_feature_enabled(X86_FEATURE_RETHUNK))
++ if (cpu_wants_rethunk())
+ static_call_force_reinit();
+
+ for (s = start; s < end; s++) {
+@@ -1694,6 +1885,8 @@ static noinline void __init alt_reloc_selftest(void)
+
+ void __init alternative_instructions(void)
+ {
++ u64 ibt;
++
+ int3_selftest();
+
+ /*
+@@ -1720,6 +1913,9 @@ void __init alternative_instructions(void)
+ */
+ paravirt_set_cap();
+
++ /* Keep CET-IBT disabled until caller/callee are patched */
++ ibt = ibt_save(/*disable*/ true);
++
+ __apply_fineibt(__retpoline_sites, __retpoline_sites_end,
+ __cfi_sites, __cfi_sites_end, NULL);
+
+@@ -1743,6 +1939,8 @@ void __init alternative_instructions(void)
+ */
+ apply_seal_endbr(__ibt_endbr_seal, __ibt_endbr_seal_end, NULL);
+
++ ibt_restore(ibt);
++
+ #ifdef CONFIG_SMP
+ /* Patch to UP if other cpus not imminent. */
+ if (!noreplace_smp && (num_present_cpus() == 1 || setup_max_cpus <= 1)) {
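
In its_allocate_thunk() above, the expression ((its_offset - 1) | 0x3F) + 33 bumps an offset whose thunk would end in the lower half of a cacheline to byte 32 of the next cacheline, so the emitted branch's final byte always lands in the upper half. A standalone sketch that exercises that rounding over a page worth of offsets (function name is illustrative):

#include <assert.h>
#include <stdio.h>

/* Mirror of the offset bump in its_allocate_thunk(): if the thunk's last
 * byte would land in the lower half of a 64-byte cacheline, move the start
 * to byte 32 of the next cacheline. */
static unsigned int its_round_offset(unsigned int offset, unsigned int size)
{
	if ((offset + size - 1) % 64 < 32)
		offset = ((offset - 1) | 0x3F) + 33;
	return offset;
}

int main(void)
{
	unsigned int off, size;

	for (off = 32; off < 4096 - 4; off++) {
		for (size = 3; size <= 4; size++) {  /* 2/3-byte jmp *reg + int3 */
			unsigned int o = its_round_offset(off, size);

			/* The thunk's last byte always lands in the upper half... */
			assert((o + size - 1) % 64 >= 32);
			/* ...and the offset never moves backwards. */
			assert(o >= off);
		}
	}
	printf("ITS offset rounding ok\n");
	return 0;
}
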
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index 9152285aaaf961..b6994993c39f71 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -49,6 +49,7 @@ static void __init srbds_select_mitigation(void);
+ static void __init l1d_flush_select_mitigation(void);
+ static void __init srso_select_mitigation(void);
+ static void __init gds_select_mitigation(void);
++static void __init its_select_mitigation(void);
+
+ /* The base value of the SPEC_CTRL MSR without task-specific bits set */
+ u64 x86_spec_ctrl_base;
+@@ -67,6 +68,14 @@ static DEFINE_MUTEX(spec_ctrl_mutex);
+
+ void (*x86_return_thunk)(void) __ro_after_init = __x86_return_thunk;
+
++static void __init set_return_thunk(void *thunk)
++{
++ if (x86_return_thunk != __x86_return_thunk)
++ pr_warn("x86/bugs: return thunk changed\n");
++
++ x86_return_thunk = thunk;
++}
++
+ /* Update SPEC_CTRL MSR and its cached copy unconditionally */
+ static void update_spec_ctrl(u64 val)
+ {
+@@ -175,6 +184,7 @@ void __init cpu_select_mitigations(void)
+ */
+ srso_select_mitigation();
+ gds_select_mitigation();
++ its_select_mitigation();
+ }
+
+ /*
+@@ -1104,7 +1114,7 @@ static void __init retbleed_select_mitigation(void)
+ setup_force_cpu_cap(X86_FEATURE_RETHUNK);
+ setup_force_cpu_cap(X86_FEATURE_UNRET);
+
+- x86_return_thunk = retbleed_return_thunk;
++ set_return_thunk(retbleed_return_thunk);
+
+ if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD &&
+ boot_cpu_data.x86_vendor != X86_VENDOR_HYGON)
+@@ -1139,7 +1149,7 @@ static void __init retbleed_select_mitigation(void)
+ setup_force_cpu_cap(X86_FEATURE_RETHUNK);
+ setup_force_cpu_cap(X86_FEATURE_CALL_DEPTH);
+
+- x86_return_thunk = call_depth_return_thunk;
++ set_return_thunk(call_depth_return_thunk);
+ break;
+
+ default:
+@@ -1173,6 +1183,145 @@ static void __init retbleed_select_mitigation(void)
+ pr_info("%s\n", retbleed_strings[retbleed_mitigation]);
+ }
+
++#undef pr_fmt
++#define pr_fmt(fmt) "ITS: " fmt
++
++enum its_mitigation_cmd {
++ ITS_CMD_OFF,
++ ITS_CMD_ON,
++ ITS_CMD_VMEXIT,
++ ITS_CMD_RSB_STUFF,
++};
++
++enum its_mitigation {
++ ITS_MITIGATION_OFF,
++ ITS_MITIGATION_VMEXIT_ONLY,
++ ITS_MITIGATION_ALIGNED_THUNKS,
++ ITS_MITIGATION_RETPOLINE_STUFF,
++};
++
++static const char * const its_strings[] = {
++ [ITS_MITIGATION_OFF] = "Vulnerable",
++ [ITS_MITIGATION_VMEXIT_ONLY] = "Mitigation: Vulnerable, KVM: Not affected",
++ [ITS_MITIGATION_ALIGNED_THUNKS] = "Mitigation: Aligned branch/return thunks",
++ [ITS_MITIGATION_RETPOLINE_STUFF] = "Mitigation: Retpolines, Stuffing RSB",
++};
++
++static enum its_mitigation its_mitigation __ro_after_init = ITS_MITIGATION_ALIGNED_THUNKS;
++
++static enum its_mitigation_cmd its_cmd __ro_after_init =
++ IS_ENABLED(CONFIG_MITIGATION_ITS) ? ITS_CMD_ON : ITS_CMD_OFF;
++
++static int __init its_parse_cmdline(char *str)
++{
++ if (!str)
++ return -EINVAL;
++
++ if (!IS_ENABLED(CONFIG_MITIGATION_ITS)) {
++ pr_err("Mitigation disabled at compile time, ignoring option (%s)", str);
++ return 0;
++ }
++
++ if (!strcmp(str, "off")) {
++ its_cmd = ITS_CMD_OFF;
++ } else if (!strcmp(str, "on")) {
++ its_cmd = ITS_CMD_ON;
++ } else if (!strcmp(str, "force")) {
++ its_cmd = ITS_CMD_ON;
++ setup_force_cpu_bug(X86_BUG_ITS);
++ } else if (!strcmp(str, "vmexit")) {
++ its_cmd = ITS_CMD_VMEXIT;
++ } else if (!strcmp(str, "stuff")) {
++ its_cmd = ITS_CMD_RSB_STUFF;
++ } else {
++ pr_err("Ignoring unknown indirect_target_selection option (%s).", str);
++ }
++
++ return 0;
++}
++early_param("indirect_target_selection", its_parse_cmdline);
++
++static void __init its_select_mitigation(void)
++{
++ enum its_mitigation_cmd cmd = its_cmd;
++
++ if (!boot_cpu_has_bug(X86_BUG_ITS) || cpu_mitigations_off()) {
++ its_mitigation = ITS_MITIGATION_OFF;
++ return;
++ }
++
++ /* Retpoline+CDT mitigates ITS, bail out */
++ if (boot_cpu_has(X86_FEATURE_RETPOLINE) &&
++ boot_cpu_has(X86_FEATURE_CALL_DEPTH)) {
++ its_mitigation = ITS_MITIGATION_RETPOLINE_STUFF;
++ goto out;
++ }
++
++ /* Exit early to avoid irrelevant warnings */
++ if (cmd == ITS_CMD_OFF) {
++ its_mitigation = ITS_MITIGATION_OFF;
++ goto out;
++ }
++ if (spectre_v2_enabled == SPECTRE_V2_NONE) {
++ pr_err("WARNING: Spectre-v2 mitigation is off, disabling ITS\n");
++ its_mitigation = ITS_MITIGATION_OFF;
++ goto out;
++ }
++ if (!IS_ENABLED(CONFIG_MITIGATION_RETPOLINE) ||
++ !IS_ENABLED(CONFIG_MITIGATION_RETHUNK)) {
++ pr_err("WARNING: ITS mitigation depends on retpoline and rethunk support\n");
++ its_mitigation = ITS_MITIGATION_OFF;
++ goto out;
++ }
++ if (IS_ENABLED(CONFIG_DEBUG_FORCE_FUNCTION_ALIGN_64B)) {
++ pr_err("WARNING: ITS mitigation is not compatible with CONFIG_DEBUG_FORCE_FUNCTION_ALIGN_64B\n");
++ its_mitigation = ITS_MITIGATION_OFF;
++ goto out;
++ }
++ if (boot_cpu_has(X86_FEATURE_RETPOLINE_LFENCE)) {
++ pr_err("WARNING: ITS mitigation is not compatible with lfence mitigation\n");
++ its_mitigation = ITS_MITIGATION_OFF;
++ goto out;
++ }
++
++ if (cmd == ITS_CMD_RSB_STUFF &&
++ (!boot_cpu_has(X86_FEATURE_RETPOLINE) || !IS_ENABLED(CONFIG_MITIGATION_CALL_DEPTH_TRACKING))) {
++ pr_err("RSB stuff mitigation not supported, using default\n");
++ cmd = ITS_CMD_ON;
++ }
++
++ switch (cmd) {
++ case ITS_CMD_OFF:
++ its_mitigation = ITS_MITIGATION_OFF;
++ break;
++ case ITS_CMD_VMEXIT:
++ if (boot_cpu_has_bug(X86_BUG_ITS_NATIVE_ONLY)) {
++ its_mitigation = ITS_MITIGATION_VMEXIT_ONLY;
++ goto out;
++ }
++ fallthrough;
++ case ITS_CMD_ON:
++ its_mitigation = ITS_MITIGATION_ALIGNED_THUNKS;
++ if (!boot_cpu_has(X86_FEATURE_RETPOLINE))
++ setup_force_cpu_cap(X86_FEATURE_INDIRECT_THUNK_ITS);
++ setup_force_cpu_cap(X86_FEATURE_RETHUNK);
++ set_return_thunk(its_return_thunk);
++ break;
++ case ITS_CMD_RSB_STUFF:
++ its_mitigation = ITS_MITIGATION_RETPOLINE_STUFF;
++ setup_force_cpu_cap(X86_FEATURE_RETHUNK);
++ setup_force_cpu_cap(X86_FEATURE_CALL_DEPTH);
++ set_return_thunk(call_depth_return_thunk);
++ if (retbleed_mitigation == RETBLEED_MITIGATION_NONE) {
++ retbleed_mitigation = RETBLEED_MITIGATION_STUFF;
++ pr_info("Retbleed mitigation updated to stuffing\n");
++ }
++ break;
++ }
++out:
++ pr_info("%s\n", its_strings[its_mitigation]);
++}
++
+ #undef pr_fmt
+ #define pr_fmt(fmt) "Spectre V2 : " fmt
+
+@@ -1684,11 +1833,11 @@ static void __init bhi_select_mitigation(void)
+ return;
+ }
+
+- /* Mitigate in hardware if supported */
+- if (spec_ctrl_bhi_dis())
++ if (!IS_ENABLED(CONFIG_X86_64))
+ return;
+
+- if (!IS_ENABLED(CONFIG_X86_64))
++ /* Mitigate in hardware if supported */
++ if (spec_ctrl_bhi_dis())
+ return;
+
+ if (bhi_mitigation == BHI_MITIGATION_VMEXIT_ONLY) {
+@@ -2627,10 +2776,10 @@ static void __init srso_select_mitigation(void)
+
+ if (boot_cpu_data.x86 == 0x19) {
+ setup_force_cpu_cap(X86_FEATURE_SRSO_ALIAS);
+- x86_return_thunk = srso_alias_return_thunk;
++ set_return_thunk(srso_alias_return_thunk);
+ } else {
+ setup_force_cpu_cap(X86_FEATURE_SRSO);
+- x86_return_thunk = srso_return_thunk;
++ set_return_thunk(srso_return_thunk);
+ }
+ if (has_microcode)
+ srso_mitigation = SRSO_MITIGATION_SAFE_RET;
+@@ -2806,6 +2955,11 @@ static ssize_t rfds_show_state(char *buf)
+ return sysfs_emit(buf, "%s\n", rfds_strings[rfds_mitigation]);
+ }
+
++static ssize_t its_show_state(char *buf)
++{
++ return sysfs_emit(buf, "%s\n", its_strings[its_mitigation]);
++}
++
+ static char *stibp_state(void)
+ {
+ if (spectre_v2_in_eibrs_mode(spectre_v2_enabled) &&
+@@ -2988,6 +3142,9 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
+ case X86_BUG_RFDS:
+ return rfds_show_state(buf);
+
++ case X86_BUG_ITS:
++ return its_show_state(buf);
++
+ default:
+ break;
+ }
+@@ -3067,6 +3224,11 @@ ssize_t cpu_show_reg_file_data_sampling(struct device *dev, struct device_attrib
+ {
+ return cpu_show_common(dev, attr, buf, X86_BUG_RFDS);
+ }
++
++ssize_t cpu_show_indirect_target_selection(struct device *dev, struct device_attribute *attr, char *buf)
++{
++ return cpu_show_common(dev, attr, buf, X86_BUG_ITS);
++}
+ #endif
+
+ void __warn_thunk(void)
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index 7cce91b19fb2c5..5e70a9984ccc62 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -1226,6 +1226,10 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
+ #define GDS BIT(6)
+ /* CPU is affected by Register File Data Sampling */
+ #define RFDS BIT(7)
++/* CPU is affected by Indirect Target Selection */
++#define ITS BIT(8)
++/* CPU is affected by Indirect Target Selection, but guest-host isolation is not affected */
++#define ITS_NATIVE_ONLY BIT(9)
+
+ static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
+ VULNBL_INTEL_STEPS(INTEL_IVYBRIDGE, X86_STEP_MAX, SRBDS),
+@@ -1237,22 +1241,25 @@ static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
+ VULNBL_INTEL_STEPS(INTEL_BROADWELL_G, X86_STEP_MAX, SRBDS),
+ VULNBL_INTEL_STEPS(INTEL_BROADWELL_X, X86_STEP_MAX, MMIO),
+ VULNBL_INTEL_STEPS(INTEL_BROADWELL, X86_STEP_MAX, SRBDS),
+- VULNBL_INTEL_STEPS(INTEL_SKYLAKE_X, X86_STEP_MAX, MMIO | RETBLEED | GDS),
++ VULNBL_INTEL_STEPS(INTEL_SKYLAKE_X, 0x5, MMIO | RETBLEED | GDS),
++ VULNBL_INTEL_STEPS(INTEL_SKYLAKE_X, X86_STEP_MAX, MMIO | RETBLEED | GDS | ITS),
+ VULNBL_INTEL_STEPS(INTEL_SKYLAKE_L, X86_STEP_MAX, MMIO | RETBLEED | GDS | SRBDS),
+ VULNBL_INTEL_STEPS(INTEL_SKYLAKE, X86_STEP_MAX, MMIO | RETBLEED | GDS | SRBDS),
+- VULNBL_INTEL_STEPS(INTEL_KABYLAKE_L, X86_STEP_MAX, MMIO | RETBLEED | GDS | SRBDS),
+- VULNBL_INTEL_STEPS(INTEL_KABYLAKE, X86_STEP_MAX, MMIO | RETBLEED | GDS | SRBDS),
++ VULNBL_INTEL_STEPS(INTEL_KABYLAKE_L, 0xb, MMIO | RETBLEED | GDS | SRBDS),
++ VULNBL_INTEL_STEPS(INTEL_KABYLAKE_L, X86_STEP_MAX, MMIO | RETBLEED | GDS | SRBDS | ITS),
++ VULNBL_INTEL_STEPS(INTEL_KABYLAKE, 0xc, MMIO | RETBLEED | GDS | SRBDS),
++ VULNBL_INTEL_STEPS(INTEL_KABYLAKE, X86_STEP_MAX, MMIO | RETBLEED | GDS | SRBDS | ITS),
+ VULNBL_INTEL_STEPS(INTEL_CANNONLAKE_L, X86_STEP_MAX, RETBLEED),
+- VULNBL_INTEL_STEPS(INTEL_ICELAKE_L, X86_STEP_MAX, MMIO | MMIO_SBDS | RETBLEED | GDS),
+- VULNBL_INTEL_STEPS(INTEL_ICELAKE_D, X86_STEP_MAX, MMIO | GDS),
+- VULNBL_INTEL_STEPS(INTEL_ICELAKE_X, X86_STEP_MAX, MMIO | GDS),
+- VULNBL_INTEL_STEPS(INTEL_COMETLAKE, X86_STEP_MAX, MMIO | MMIO_SBDS | RETBLEED | GDS),
+- VULNBL_INTEL_STEPS(INTEL_COMETLAKE_L, 0x0, MMIO | RETBLEED),
+- VULNBL_INTEL_STEPS(INTEL_COMETLAKE_L, X86_STEP_MAX, MMIO | MMIO_SBDS | RETBLEED | GDS),
+- VULNBL_INTEL_STEPS(INTEL_TIGERLAKE_L, X86_STEP_MAX, GDS),
+- VULNBL_INTEL_STEPS(INTEL_TIGERLAKE, X86_STEP_MAX, GDS),
++ VULNBL_INTEL_STEPS(INTEL_ICELAKE_L, X86_STEP_MAX, MMIO | MMIO_SBDS | RETBLEED | GDS | ITS | ITS_NATIVE_ONLY),
++ VULNBL_INTEL_STEPS(INTEL_ICELAKE_D, X86_STEP_MAX, MMIO | GDS | ITS | ITS_NATIVE_ONLY),
++ VULNBL_INTEL_STEPS(INTEL_ICELAKE_X, X86_STEP_MAX, MMIO | GDS | ITS | ITS_NATIVE_ONLY),
++ VULNBL_INTEL_STEPS(INTEL_COMETLAKE, X86_STEP_MAX, MMIO | MMIO_SBDS | RETBLEED | GDS | ITS),
++ VULNBL_INTEL_STEPS(INTEL_COMETLAKE_L, 0x0, MMIO | RETBLEED | ITS),
++ VULNBL_INTEL_STEPS(INTEL_COMETLAKE_L, X86_STEP_MAX, MMIO | MMIO_SBDS | RETBLEED | GDS | ITS),
++ VULNBL_INTEL_STEPS(INTEL_TIGERLAKE_L, X86_STEP_MAX, GDS | ITS | ITS_NATIVE_ONLY),
++ VULNBL_INTEL_STEPS(INTEL_TIGERLAKE, X86_STEP_MAX, GDS | ITS | ITS_NATIVE_ONLY),
+ VULNBL_INTEL_STEPS(INTEL_LAKEFIELD, X86_STEP_MAX, MMIO | MMIO_SBDS | RETBLEED),
+- VULNBL_INTEL_STEPS(INTEL_ROCKETLAKE, X86_STEP_MAX, MMIO | RETBLEED | GDS),
++ VULNBL_INTEL_STEPS(INTEL_ROCKETLAKE, X86_STEP_MAX, MMIO | RETBLEED | GDS | ITS | ITS_NATIVE_ONLY),
+ VULNBL_INTEL_STEPS(INTEL_ALDERLAKE, X86_STEP_MAX, RFDS),
+ VULNBL_INTEL_STEPS(INTEL_ALDERLAKE_L, X86_STEP_MAX, RFDS),
+ VULNBL_INTEL_STEPS(INTEL_RAPTORLAKE, X86_STEP_MAX, RFDS),
+@@ -1317,6 +1324,32 @@ static bool __init vulnerable_to_rfds(u64 x86_arch_cap_msr)
+ return cpu_matches(cpu_vuln_blacklist, RFDS);
+ }
+
++static bool __init vulnerable_to_its(u64 x86_arch_cap_msr)
++{
++ /* The "immunity" bit trumps everything else: */
++ if (x86_arch_cap_msr & ARCH_CAP_ITS_NO)
++ return false;
++ if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL)
++ return false;
++
++ /* None of the affected CPUs have BHI_CTRL */
++ if (boot_cpu_has(X86_FEATURE_BHI_CTRL))
++ return false;
++
++ /*
++ * If a VMM did not expose ITS_NO, assume that a guest could
++	 * be running on vulnerable hardware or may migrate to such
++ * hardware.
++ */
++ if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
++ return true;
++
++ if (cpu_matches(cpu_vuln_blacklist, ITS))
++ return true;
++
++ return false;
++}
++
+ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+ {
+ u64 x86_arch_cap_msr = x86_read_arch_cap_msr();
+@@ -1436,9 +1469,12 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+ if (vulnerable_to_rfds(x86_arch_cap_msr))
+ setup_force_cpu_bug(X86_BUG_RFDS);
+
+- /* When virtualized, eIBRS could be hidden, assume vulnerable */
+- if (!(x86_arch_cap_msr & ARCH_CAP_BHI_NO) &&
+- !cpu_matches(cpu_vuln_whitelist, NO_BHI) &&
++ /*
++ * Intel parts with eIBRS are vulnerable to BHI attacks. Parts with
++ * BHI_NO still need to use the BHI mitigation to prevent Intra-mode
++ * attacks. When virtualized, eIBRS could be hidden, assume vulnerable.
++ */
++ if (!cpu_matches(cpu_vuln_whitelist, NO_BHI) &&
+ (boot_cpu_has(X86_FEATURE_IBRS_ENHANCED) ||
+ boot_cpu_has(X86_FEATURE_HYPERVISOR)))
+ setup_force_cpu_bug(X86_BUG_BHI);
+@@ -1446,6 +1482,12 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+ if (cpu_has(c, X86_FEATURE_AMD_IBPB) && !cpu_has(c, X86_FEATURE_AMD_IBPB_RET))
+ setup_force_cpu_bug(X86_BUG_IBPB_NO_RET);
+
++ if (vulnerable_to_its(x86_arch_cap_msr)) {
++ setup_force_cpu_bug(X86_BUG_ITS);
++ if (cpu_matches(cpu_vuln_blacklist, ITS_NATIVE_ONLY))
++ setup_force_cpu_bug(X86_BUG_ITS_NATIVE_ONLY);
++ }
++
+ if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN))
+ return;
+
+diff --git a/arch/x86/kernel/cpu/microcode/amd.c b/arch/x86/kernel/cpu/microcode/amd.c
+index 4a10d35e70aa54..96cb992d50ef55 100644
+--- a/arch/x86/kernel/cpu/microcode/amd.c
++++ b/arch/x86/kernel/cpu/microcode/amd.c
+@@ -1098,15 +1098,17 @@ static enum ucode_state load_microcode_amd(u8 family, const u8 *data, size_t siz
+
+ static int __init save_microcode_in_initrd(void)
+ {
+- unsigned int cpuid_1_eax = native_cpuid_eax(1);
+ struct cpuinfo_x86 *c = &boot_cpu_data;
+ struct cont_desc desc = { 0 };
++ unsigned int cpuid_1_eax;
+ enum ucode_state ret;
+ struct cpio_data cp;
+
+- if (dis_ucode_ldr || c->x86_vendor != X86_VENDOR_AMD || c->x86 < 0x10)
++ if (microcode_loader_disabled() || c->x86_vendor != X86_VENDOR_AMD || c->x86 < 0x10)
+ return 0;
+
++ cpuid_1_eax = native_cpuid_eax(1);
++
+ if (!find_blobs_in_containers(&cp))
+ return -EINVAL;
+
+diff --git a/arch/x86/kernel/cpu/microcode/core.c b/arch/x86/kernel/cpu/microcode/core.c
+index b3658d11e7b692..079f046ee26d19 100644
+--- a/arch/x86/kernel/cpu/microcode/core.c
++++ b/arch/x86/kernel/cpu/microcode/core.c
+@@ -41,8 +41,8 @@
+
+ #include "internal.h"
+
+-static struct microcode_ops *microcode_ops;
+-bool dis_ucode_ldr = true;
++static struct microcode_ops *microcode_ops;
++static bool dis_ucode_ldr = false;
+
+ bool force_minrev = IS_ENABLED(CONFIG_MICROCODE_LATE_FORCE_MINREV);
+ module_param(force_minrev, bool, S_IRUSR | S_IWUSR);
+@@ -84,6 +84,9 @@ static bool amd_check_current_patch_level(void)
+ u32 lvl, dummy, i;
+ u32 *levels;
+
++ if (x86_cpuid_vendor() != X86_VENDOR_AMD)
++ return false;
++
+ native_rdmsr(MSR_AMD64_PATCH_LEVEL, lvl, dummy);
+
+ levels = final_levels;
+@@ -95,27 +98,29 @@ static bool amd_check_current_patch_level(void)
+ return false;
+ }
+
+-static bool __init check_loader_disabled_bsp(void)
++bool __init microcode_loader_disabled(void)
+ {
+- static const char *__dis_opt_str = "dis_ucode_ldr";
+- const char *cmdline = boot_command_line;
+- const char *option = __dis_opt_str;
++ if (dis_ucode_ldr)
++ return true;
+
+ /*
+- * CPUID(1).ECX[31]: reserved for hypervisor use. This is still not
+- * completely accurate as xen pv guests don't see that CPUID bit set but
+- * that's good enough as they don't land on the BSP path anyway.
++ * Disable when:
++ *
++ * 1) The CPU does not support CPUID.
++ *
++ * 2) Bit 31 in CPUID[1]:ECX is clear
++ * The bit is reserved for hypervisor use. This is still not
++ * completely accurate as XEN PV guests don't see that CPUID bit
++ * set, but that's good enough as they don't land on the BSP
++ * path anyway.
++ *
++ * 3) Certain AMD patch levels are not allowed to be
++ * overwritten.
+ */
+- if (native_cpuid_ecx(1) & BIT(31))
+- return true;
+-
+- if (x86_cpuid_vendor() == X86_VENDOR_AMD) {
+- if (amd_check_current_patch_level())
+- return true;
+- }
+-
+- if (cmdline_find_option_bool(cmdline, option) <= 0)
+- dis_ucode_ldr = false;
++ if (!have_cpuid_p() ||
++ native_cpuid_ecx(1) & BIT(31) ||
++ amd_check_current_patch_level())
++ dis_ucode_ldr = true;
+
+ return dis_ucode_ldr;
+ }
+@@ -125,7 +130,10 @@ void __init load_ucode_bsp(void)
+ unsigned int cpuid_1_eax;
+ bool intel = true;
+
+- if (!have_cpuid_p())
++ if (cmdline_find_option_bool(boot_command_line, "dis_ucode_ldr") > 0)
++ dis_ucode_ldr = true;
++
++ if (microcode_loader_disabled())
+ return;
+
+ cpuid_1_eax = native_cpuid_eax(1);
+@@ -146,9 +154,6 @@ void __init load_ucode_bsp(void)
+ return;
+ }
+
+- if (check_loader_disabled_bsp())
+- return;
+-
+ if (intel)
+ load_ucode_intel_bsp(&early_data);
+ else
+@@ -159,6 +164,11 @@ void load_ucode_ap(void)
+ {
+ unsigned int cpuid_1_eax;
+
++ /*
++ * Can't use microcode_loader_disabled() here - .init section
++ * hell. It doesn't have to either - the BSP variant must've
++ * parsed cmdline already anyway.
++ */
+ if (dis_ucode_ldr)
+ return;
+
+@@ -810,7 +820,7 @@ static int __init microcode_init(void)
+ struct cpuinfo_x86 *c = &boot_cpu_data;
+ int error;
+
+- if (dis_ucode_ldr)
++ if (microcode_loader_disabled())
+ return -EINVAL;
+
+ if (c->x86_vendor == X86_VENDOR_INTEL)
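
One of the conditions microcode_loader_disabled() now centralizes is the hypervisor check: bit 31 of CPUID(1).ECX is reserved for hypervisor use, and when it is set the loader stays off. A userspace sketch of that probe, using the compiler's cpuid.h rather than the kernel helpers (illustrative only):

#include <cpuid.h>
#include <stdio.h>

int main(void)
{
	unsigned int eax, ebx, ecx, edx;

	if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
		printf("CPUID leaf 1 not available\n");
		return 1;
	}

	/* Bit 31 of CPUID(1).ECX is reserved for hypervisor use; when set,
	 * the early loader keeps microcode loading disabled. */
	printf("hypervisor bit: %s\n", (ecx & (1u << 31)) ? "set" : "clear");
	return 0;
}
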
+diff --git a/arch/x86/kernel/cpu/microcode/intel.c b/arch/x86/kernel/cpu/microcode/intel.c
+index f3d534807d914a..9309468c8d2c12 100644
+--- a/arch/x86/kernel/cpu/microcode/intel.c
++++ b/arch/x86/kernel/cpu/microcode/intel.c
+@@ -389,7 +389,7 @@ static int __init save_builtin_microcode(void)
+ if (xchg(&ucode_patch_va, NULL) != UCODE_BSP_LOADED)
+ return 0;
+
+- if (dis_ucode_ldr || boot_cpu_data.x86_vendor != X86_VENDOR_INTEL)
++ if (microcode_loader_disabled() || boot_cpu_data.x86_vendor != X86_VENDOR_INTEL)
+ return 0;
+
+ uci.mc = get_microcode_blob(&uci, true);
+diff --git a/arch/x86/kernel/cpu/microcode/internal.h b/arch/x86/kernel/cpu/microcode/internal.h
+index 5df621752fefac..50a9702ae4e2b5 100644
+--- a/arch/x86/kernel/cpu/microcode/internal.h
++++ b/arch/x86/kernel/cpu/microcode/internal.h
+@@ -94,7 +94,6 @@ static inline unsigned int x86_cpuid_family(void)
+ return x86_family(eax);
+ }
+
+-extern bool dis_ucode_ldr;
+ extern bool force_minrev;
+
+ #ifdef CONFIG_CPU_SUP_AMD
+diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
+index 166bc0ea3bdff9..0a6595463faa6a 100644
+--- a/arch/x86/kernel/ftrace.c
++++ b/arch/x86/kernel/ftrace.c
+@@ -357,7 +357,7 @@ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size)
+ goto fail;
+
+ ip = trampoline + size;
+- if (cpu_feature_enabled(X86_FEATURE_RETHUNK))
++ if (cpu_wants_rethunk_at(ip))
+ __text_gen_insn(ip, JMP32_INSN_OPCODE, ip, x86_return_thunk, JMP32_INSN_SIZE);
+ else
+ text_poke_copy(ip, retq, sizeof(retq));
+diff --git a/arch/x86/kernel/head32.c b/arch/x86/kernel/head32.c
+index de001b2146abf3..375f2d7f1762d4 100644
+--- a/arch/x86/kernel/head32.c
++++ b/arch/x86/kernel/head32.c
+@@ -145,10 +145,6 @@ void __init __no_stack_protector mk_early_pgtbl_32(void)
+ *ptr = (unsigned long)ptep + PAGE_OFFSET;
+
+ #ifdef CONFIG_MICROCODE_INITRD32
+- /* Running on a hypervisor? */
+- if (native_cpuid_ecx(1) & BIT(31))
+- return;
+-
+ params = (struct boot_params *)__pa_nodebug(&boot_params);
+ if (!params->hdr.ramdisk_size || !params->hdr.ramdisk_image)
+ return;
+diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c
+index 8984abd91c001f..0207066f3caf00 100644
+--- a/arch/x86/kernel/module.c
++++ b/arch/x86/kernel/module.c
+@@ -252,6 +252,8 @@ int module_finalize(const Elf_Ehdr *hdr,
+ ibt_endbr = s;
+ }
+
++ its_init_mod(me);
++
+ if (retpolines || cfi) {
+ void *rseg = NULL, *cseg = NULL;
+ unsigned int rsize = 0, csize = 0;
+@@ -272,6 +274,9 @@ int module_finalize(const Elf_Ehdr *hdr,
+ void *rseg = (void *)retpolines->sh_addr;
+ apply_retpolines(rseg, rseg + retpolines->sh_size, me);
+ }
++
++ its_fini_mod(me);
++
+ if (returns) {
+ void *rseg = (void *)returns->sh_addr;
+ apply_returns(rseg, rseg + returns->sh_size, me);
+@@ -335,4 +340,5 @@ int module_post_finalize(const Elf_Ehdr *hdr,
+ void module_arch_cleanup(struct module *mod)
+ {
+ alternatives_smp_module_del(mod);
++ its_free_mod(mod);
+ }
+diff --git a/arch/x86/kernel/static_call.c b/arch/x86/kernel/static_call.c
+index 9e51242ed125ee..aae909d4ed7853 100644
+--- a/arch/x86/kernel/static_call.c
++++ b/arch/x86/kernel/static_call.c
+@@ -81,7 +81,7 @@ static void __ref __static_call_transform(void *insn, enum insn_type type,
+ break;
+
+ case RET:
+- if (cpu_feature_enabled(X86_FEATURE_RETHUNK))
++ if (cpu_wants_rethunk_at(insn))
+ code = text_gen_insn(JMP32_INSN_OPCODE, insn, x86_return_thunk);
+ else
+ code = &retinsn;
+@@ -90,7 +90,7 @@ static void __ref __static_call_transform(void *insn, enum insn_type type,
+ case JCC:
+ if (!func) {
+ func = __static_call_return;
+- if (cpu_feature_enabled(X86_FEATURE_RETHUNK))
++ if (cpu_wants_rethunk())
+ func = x86_return_thunk;
+ }
+
+diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
+index 0deb4887d6e96a..c329ff6f8d3a55 100644
+--- a/arch/x86/kernel/vmlinux.lds.S
++++ b/arch/x86/kernel/vmlinux.lds.S
+@@ -528,6 +528,16 @@ INIT_PER_CPU(irq_stack_backing_store);
+ "SRSO function pair won't alias");
+ #endif
+
++#if defined(CONFIG_MITIGATION_ITS) && !defined(CONFIG_DEBUG_FORCE_FUNCTION_ALIGN_64B)
++. = ASSERT(__x86_indirect_its_thunk_rax & 0x20, "__x86_indirect_thunk_rax not in second half of cacheline");
++. = ASSERT(((__x86_indirect_its_thunk_rcx - __x86_indirect_its_thunk_rax) % 64) == 0, "Indirect thunks are not cacheline apart");
++. = ASSERT(__x86_indirect_its_thunk_array == __x86_indirect_its_thunk_rax, "Gap in ITS thunk array");
++#endif
++
++#if defined(CONFIG_MITIGATION_ITS) && !defined(CONFIG_DEBUG_FORCE_FUNCTION_ALIGN_64B)
++. = ASSERT(its_return_thunk & 0x20, "its_return_thunk not in second half of cacheline");
++#endif
++
+ #endif /* CONFIG_X86_64 */
+
+ /*
+diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
+index 8160870398b904..6eb87b34b242de 100644
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -7496,9 +7496,30 @@ void kvm_mmu_pre_destroy_vm(struct kvm *kvm)
+ }
+
+ #ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES
++static bool hugepage_test_mixed(struct kvm_memory_slot *slot, gfn_t gfn,
++ int level)
++{
++ return lpage_info_slot(gfn, slot, level)->disallow_lpage & KVM_LPAGE_MIXED_FLAG;
++}
++
++static void hugepage_clear_mixed(struct kvm_memory_slot *slot, gfn_t gfn,
++ int level)
++{
++ lpage_info_slot(gfn, slot, level)->disallow_lpage &= ~KVM_LPAGE_MIXED_FLAG;
++}
++
++static void hugepage_set_mixed(struct kvm_memory_slot *slot, gfn_t gfn,
++ int level)
++{
++ lpage_info_slot(gfn, slot, level)->disallow_lpage |= KVM_LPAGE_MIXED_FLAG;
++}
++
+ bool kvm_arch_pre_set_memory_attributes(struct kvm *kvm,
+ struct kvm_gfn_range *range)
+ {
++ struct kvm_memory_slot *slot = range->slot;
++ int level;
++
+ /*
+ * Zap SPTEs even if the slot can't be mapped PRIVATE. KVM x86 only
+ * supports KVM_MEMORY_ATTRIBUTE_PRIVATE, and so it *seems* like KVM
+@@ -7513,6 +7534,38 @@ bool kvm_arch_pre_set_memory_attributes(struct kvm *kvm,
+ if (WARN_ON_ONCE(!kvm_arch_has_private_mem(kvm)))
+ return false;
+
++ if (WARN_ON_ONCE(range->end <= range->start))
++ return false;
++
++ /*
++ * If the head and tail pages of the range currently allow a hugepage,
++ * i.e. reside fully in the slot and don't have mixed attributes, then
++ * add each corresponding hugepage range to the ongoing invalidation,
++ * e.g. to prevent KVM from creating a hugepage in response to a fault
++ * for a gfn whose attributes aren't changing. Note, only the range
++ * of gfns whose attributes are being modified needs to be explicitly
++ * unmapped, as that will unmap any existing hugepages.
++ */
++ for (level = PG_LEVEL_2M; level <= KVM_MAX_HUGEPAGE_LEVEL; level++) {
++ gfn_t start = gfn_round_for_level(range->start, level);
++ gfn_t end = gfn_round_for_level(range->end - 1, level);
++ gfn_t nr_pages = KVM_PAGES_PER_HPAGE(level);
++
++ if ((start != range->start || start + nr_pages > range->end) &&
++ start >= slot->base_gfn &&
++ start + nr_pages <= slot->base_gfn + slot->npages &&
++ !hugepage_test_mixed(slot, start, level))
++ kvm_mmu_invalidate_range_add(kvm, start, start + nr_pages);
++
++ if (end == start)
++ continue;
++
++ if ((end + nr_pages) > range->end &&
++ (end + nr_pages) <= (slot->base_gfn + slot->npages) &&
++ !hugepage_test_mixed(slot, end, level))
++ kvm_mmu_invalidate_range_add(kvm, end, end + nr_pages);
++ }
++
+ /* Unmap the old attribute page. */
+ if (range->arg.attributes & KVM_MEMORY_ATTRIBUTE_PRIVATE)
+ range->attr_filter = KVM_FILTER_SHARED;
+@@ -7522,23 +7575,7 @@ bool kvm_arch_pre_set_memory_attributes(struct kvm *kvm,
+ return kvm_unmap_gfn_range(kvm, range);
+ }
+
+-static bool hugepage_test_mixed(struct kvm_memory_slot *slot, gfn_t gfn,
+- int level)
+-{
+- return lpage_info_slot(gfn, slot, level)->disallow_lpage & KVM_LPAGE_MIXED_FLAG;
+-}
+-
+-static void hugepage_clear_mixed(struct kvm_memory_slot *slot, gfn_t gfn,
+- int level)
+-{
+- lpage_info_slot(gfn, slot, level)->disallow_lpage &= ~KVM_LPAGE_MIXED_FLAG;
+-}
+
+-static void hugepage_set_mixed(struct kvm_memory_slot *slot, gfn_t gfn,
+- int level)
+-{
+- lpage_info_slot(gfn, slot, level)->disallow_lpage |= KVM_LPAGE_MIXED_FLAG;
+-}
+
+ static bool hugepage_has_attrs(struct kvm *kvm, struct kvm_memory_slot *slot,
+ gfn_t gfn, int level, unsigned long attrs)
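
The head/tail handling added to kvm_arch_pre_set_memory_attributes() above hinges on rounding the range ends down to hugepage boundaries and only widening the invalidation when the aligned hugepage straddles the range (and lies inside the slot with non-mixed attributes). A small sketch of that rounding for the x86 2M level, with illustrative names and constants:

#include <assert.h>
#include <stdio.h>

typedef unsigned long long gfn_t;

/* 2MiB pages cover 512 4KiB gfns on x86. */
#define GFNS_PER_2M 512ULL

/* Align a gfn down to the start of its 2M-level hugepage, like
 * gfn_round_for_level() does for PG_LEVEL_2M. */
static gfn_t gfn_round_2m(gfn_t gfn)
{
	return gfn & ~(GFNS_PER_2M - 1);
}

int main(void)
{
	gfn_t start = 0x12345, end = 0x12800;	/* attribute-change range */
	gfn_t head = gfn_round_2m(start);
	gfn_t tail = gfn_round_2m(end - 1);

	/* The head hugepage begins below range->start, so it straddles the
	 * boundary and is a candidate for the extra invalidation. */
	assert(head != start && head + GFNS_PER_2M > start);

	/* The tail hugepage ends exactly at range->end, so it is already
	 * covered and needs no widening. */
	assert(!(tail + GFNS_PER_2M > end));

	printf("head=0x%llx tail=0x%llx\n",
	       (unsigned long long)head, (unsigned long long)tail);
	return 0;
}
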
+diff --git a/arch/x86/kvm/smm.c b/arch/x86/kvm/smm.c
+index e0ab7df27b6663..c51e598684866b 100644
+--- a/arch/x86/kvm/smm.c
++++ b/arch/x86/kvm/smm.c
+@@ -131,6 +131,7 @@ void kvm_smm_changed(struct kvm_vcpu *vcpu, bool entering_smm)
+
+ kvm_mmu_reset_context(vcpu);
+ }
++EXPORT_SYMBOL_GPL(kvm_smm_changed);
+
+ void process_smi(struct kvm_vcpu *vcpu)
+ {
+diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
+index e67de787fc7143..282c91c6aa338c 100644
+--- a/arch/x86/kvm/svm/svm.c
++++ b/arch/x86/kvm/svm/svm.c
+@@ -2220,6 +2220,10 @@ static int shutdown_interception(struct kvm_vcpu *vcpu)
+ */
+ if (!sev_es_guest(vcpu->kvm)) {
+ clear_page(svm->vmcb);
++#ifdef CONFIG_KVM_SMM
++ if (is_smm(vcpu))
++ kvm_smm_changed(vcpu, false);
++#endif
+ kvm_vcpu_reset(vcpu, true);
+ }
+
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index c8dd29bccc71e5..9e57dd990a262c 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -1589,7 +1589,7 @@ EXPORT_SYMBOL_GPL(kvm_emulate_rdpmc);
+ ARCH_CAP_PSCHANGE_MC_NO | ARCH_CAP_TSX_CTRL_MSR | ARCH_CAP_TAA_NO | \
+ ARCH_CAP_SBDR_SSDP_NO | ARCH_CAP_FBSDP_NO | ARCH_CAP_PSDP_NO | \
+ ARCH_CAP_FB_CLEAR | ARCH_CAP_RRSBA | ARCH_CAP_PBRSB_NO | ARCH_CAP_GDS_NO | \
+- ARCH_CAP_RFDS_NO | ARCH_CAP_RFDS_CLEAR | ARCH_CAP_BHI_NO)
++ ARCH_CAP_RFDS_NO | ARCH_CAP_RFDS_CLEAR | ARCH_CAP_BHI_NO | ARCH_CAP_ITS_NO)
+
+ static u64 kvm_get_arch_capabilities(void)
+ {
+@@ -1623,6 +1623,8 @@ static u64 kvm_get_arch_capabilities(void)
+ data |= ARCH_CAP_MDS_NO;
+ if (!boot_cpu_has_bug(X86_BUG_RFDS))
+ data |= ARCH_CAP_RFDS_NO;
++ if (!boot_cpu_has_bug(X86_BUG_ITS))
++ data |= ARCH_CAP_ITS_NO;
+
+ if (!boot_cpu_has(X86_FEATURE_RTM)) {
+ /*
+diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S
+index 391059b2c6fbc4..614fb9aee2ff65 100644
+--- a/arch/x86/lib/retpoline.S
++++ b/arch/x86/lib/retpoline.S
+@@ -366,6 +366,45 @@ SYM_FUNC_END(call_depth_return_thunk)
+
+ #endif /* CONFIG_MITIGATION_CALL_DEPTH_TRACKING */
+
++#ifdef CONFIG_MITIGATION_ITS
++
++.macro ITS_THUNK reg
++
++SYM_INNER_LABEL(__x86_indirect_its_thunk_\reg, SYM_L_GLOBAL)
++ UNWIND_HINT_UNDEFINED
++ ANNOTATE_NOENDBR
++ ANNOTATE_RETPOLINE_SAFE
++ jmp *%\reg
++ int3
++ .align 32, 0xcc /* fill to the end of the line */
++ .skip 32, 0xcc /* skip to the next upper half */
++.endm
++
++/* ITS mitigation requires thunks be aligned to upper half of cacheline */
++.align 64, 0xcc
++.skip 32, 0xcc
++SYM_CODE_START(__x86_indirect_its_thunk_array)
++
++#define GEN(reg) ITS_THUNK reg
++#include <asm/GEN-for-each-reg.h>
++#undef GEN
++
++ .align 64, 0xcc
++SYM_CODE_END(__x86_indirect_its_thunk_array)
++
++.align 64, 0xcc
++.skip 32, 0xcc
++SYM_CODE_START(its_return_thunk)
++ UNWIND_HINT_FUNC
++ ANNOTATE_NOENDBR
++ ANNOTATE_UNRET_SAFE
++ ret
++ int3
++SYM_CODE_END(its_return_thunk)
++EXPORT_SYMBOL(its_return_thunk)
++
++#endif /* CONFIG_MITIGATION_ITS */
++
+ /*
+ * This function name is magical and is used by -mfunction-return=thunk-extern
+ * for the compiler to generate JMPs to it.
+diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
+index e491c75b2a6889..3c81edd54c5c40 100644
+--- a/arch/x86/mm/tlb.c
++++ b/arch/x86/mm/tlb.c
+@@ -621,7 +621,11 @@ void switch_mm_irqs_off(struct mm_struct *unused, struct mm_struct *next,
+
+ choose_new_asid(next, next_tlb_gen, &new_asid, &need_flush);
+
+- /* Let nmi_uaccess_okay() know that we're changing CR3. */
++ /*
++ * Indicate that CR3 is about to change. nmi_uaccess_okay()
++ * and others are sensitive to the window where mm_cpumask(),
++ * CR3 and cpu_tlbstate.loaded_mm are not all in sync.
++ */
+ this_cpu_write(cpu_tlbstate.loaded_mm, LOADED_MM_SWITCHING);
+ barrier();
+ }
+@@ -895,8 +899,16 @@ static void flush_tlb_func(void *info)
+
+ static bool should_flush_tlb(int cpu, void *data)
+ {
++ struct mm_struct *loaded_mm = per_cpu(cpu_tlbstate.loaded_mm, cpu);
+ struct flush_tlb_info *info = data;
+
++ /*
++ * Order the 'loaded_mm' and 'is_lazy' against their
++ * write ordering in switch_mm_irqs_off(). Ensure
++ * 'is_lazy' is at least as new as 'loaded_mm'.
++ */
++ smp_rmb();
++
+ /* Lazy TLB will get flushed at the next context switch. */
+ if (per_cpu(cpu_tlbstate_shared.is_lazy, cpu))
+ return false;
+@@ -905,8 +917,15 @@ static bool should_flush_tlb(int cpu, void *data)
+ if (!info->mm)
+ return true;
+
++ /*
++ * While switching, the remote CPU could have state from
++ * either the prev or next mm. Assume the worst and flush.
++ */
++ if (loaded_mm == LOADED_MM_SWITCHING)
++ return true;
++
+ /* The target mm is loaded, and the CPU is not lazy. */
+- if (per_cpu(cpu_tlbstate.loaded_mm, cpu) == info->mm)
++ if (loaded_mm == info->mm)
+ return true;
+
+ /* In cpumask, but not the loaded mm? Periodically remove by flushing. */
+diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
+index a43fc5af973d27..7d8ba3074e2d22 100644
+--- a/arch/x86/net/bpf_jit_comp.c
++++ b/arch/x86/net/bpf_jit_comp.c
+@@ -41,6 +41,8 @@ static u8 *emit_code(u8 *ptr, u32 bytes, unsigned int len)
+ #define EMIT2(b1, b2) EMIT((b1) + ((b2) << 8), 2)
+ #define EMIT3(b1, b2, b3) EMIT((b1) + ((b2) << 8) + ((b3) << 16), 3)
+ #define EMIT4(b1, b2, b3, b4) EMIT((b1) + ((b2) << 8) + ((b3) << 16) + ((b4) << 24), 4)
++#define EMIT5(b1, b2, b3, b4, b5) \
++ do { EMIT1(b1); EMIT4(b2, b3, b4, b5); } while (0)
+
+ #define EMIT1_off32(b1, off) \
+ do { EMIT1(b1); EMIT(off, 4); } while (0)
+@@ -653,7 +655,10 @@ static void emit_indirect_jump(u8 **pprog, int reg, u8 *ip)
+ {
+ u8 *prog = *pprog;
+
+- if (cpu_feature_enabled(X86_FEATURE_RETPOLINE_LFENCE)) {
++ if (cpu_feature_enabled(X86_FEATURE_INDIRECT_THUNK_ITS)) {
++ OPTIMIZER_HIDE_VAR(reg);
++ emit_jump(&prog, its_static_thunk(reg), ip);
++ } else if (cpu_feature_enabled(X86_FEATURE_RETPOLINE_LFENCE)) {
+ EMIT_LFENCE();
+ EMIT2(0xFF, 0xE0 + reg);
+ } else if (cpu_feature_enabled(X86_FEATURE_RETPOLINE)) {
+@@ -675,7 +680,7 @@ static void emit_return(u8 **pprog, u8 *ip)
+ {
+ u8 *prog = *pprog;
+
+- if (cpu_feature_enabled(X86_FEATURE_RETHUNK)) {
++ if (cpu_wants_rethunk()) {
+ emit_jump(&prog, x86_return_thunk, ip);
+ } else {
+ EMIT1(0xC3); /* ret */
+@@ -1450,6 +1455,48 @@ static void emit_priv_frame_ptr(u8 **pprog, void __percpu *priv_frame_ptr)
+ #define PRIV_STACK_GUARD_SZ 8
+ #define PRIV_STACK_GUARD_VAL 0xEB9F12345678eb9fULL
+
++static int emit_spectre_bhb_barrier(u8 **pprog, u8 *ip,
++ struct bpf_prog *bpf_prog)
++{
++ u8 *prog = *pprog;
++ u8 *func;
++
++ if (cpu_feature_enabled(X86_FEATURE_CLEAR_BHB_LOOP)) {
++ /* The clearing sequence clobbers eax and ecx. */
++ EMIT1(0x50); /* push rax */
++ EMIT1(0x51); /* push rcx */
++ ip += 2;
++
++ func = (u8 *)clear_bhb_loop;
++ ip += x86_call_depth_emit_accounting(&prog, func, ip);
++
++ if (emit_call(&prog, func, ip))
++ return -EINVAL;
++ EMIT1(0x59); /* pop rcx */
++ EMIT1(0x58); /* pop rax */
++ }
++ /* Insert IBHF instruction */
++ if ((cpu_feature_enabled(X86_FEATURE_CLEAR_BHB_LOOP) &&
++ cpu_feature_enabled(X86_FEATURE_HYPERVISOR)) ||
++ cpu_feature_enabled(X86_FEATURE_CLEAR_BHB_HW)) {
++ /*
++ * Add an Indirect Branch History Fence (IBHF). IBHF acts as a
++ * fence preventing branch history from before the fence from
++ * affecting indirect branches after the fence. This is
++ * specifically used in cBPF jitted code to prevent Intra-mode
++ * BHI attacks. The IBHF instruction is designed to be a NOP on
++ * hardware that doesn't need or support it. The REP and REX.W
++ * prefixes are required by the microcode, and they also ensure
++ * that the NOP is unlikely to be used in existing code.
++ *
++ * IBHF is not a valid instruction in 32-bit mode.
++ */
++ EMIT5(0xF3, 0x48, 0x0F, 0x1E, 0xF8); /* ibhf */
++ }
++ *pprog = prog;
++ return 0;
++}
++
+ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, u8 *rw_image,
+ int oldproglen, struct jit_context *ctx, bool jmp_padding)
+ {
+@@ -2467,6 +2514,13 @@ st: if (is_imm8(insn->off))
+ seen_exit = true;
+ /* Update cleanup_addr */
+ ctx->cleanup_addr = proglen;
++ if (bpf_prog_was_classic(bpf_prog) &&
++ !capable(CAP_SYS_ADMIN)) {
++ u8 *ip = image + addrs[i - 1];
++
++ if (emit_spectre_bhb_barrier(&prog, ip, bpf_prog))
++ return -EINVAL;
++ }
+ if (bpf_prog->aux->exception_boundary) {
+ pop_callee_regs(&prog, all_callee_regs_used);
+ pop_r12(&prog);
+diff --git a/drivers/accel/ivpu/ivpu_hw.c b/drivers/accel/ivpu/ivpu_hw.c
+index 4e1054f3466e80..65100576daf295 100644
+--- a/drivers/accel/ivpu/ivpu_hw.c
++++ b/drivers/accel/ivpu/ivpu_hw.c
+@@ -106,7 +106,7 @@ static void timeouts_init(struct ivpu_device *vdev)
+ else
+ vdev->timeout.autosuspend = 100;
+ vdev->timeout.d0i3_entry_msg = 5;
+- vdev->timeout.state_dump_msg = 10;
++ vdev->timeout.state_dump_msg = 100;
+ }
+ }
+
+diff --git a/drivers/accel/ivpu/ivpu_job.c b/drivers/accel/ivpu/ivpu_job.c
+index 673801889c7b23..79b77d8a35a772 100644
+--- a/drivers/accel/ivpu/ivpu_job.c
++++ b/drivers/accel/ivpu/ivpu_job.c
+@@ -83,23 +83,9 @@ static struct ivpu_cmdq *ivpu_cmdq_alloc(struct ivpu_file_priv *file_priv)
+ if (!cmdq)
+ return NULL;
+
+- ret = xa_alloc_cyclic(&vdev->db_xa, &cmdq->db_id, NULL, vdev->db_limit, &vdev->db_next,
+- GFP_KERNEL);
+- if (ret < 0) {
+- ivpu_err(vdev, "Failed to allocate doorbell id: %d\n", ret);
+- goto err_free_cmdq;
+- }
+-
+- ret = xa_alloc_cyclic(&file_priv->cmdq_xa, &cmdq->id, cmdq, file_priv->cmdq_limit,
+- &file_priv->cmdq_id_next, GFP_KERNEL);
+- if (ret < 0) {
+- ivpu_err(vdev, "Failed to allocate command queue id: %d\n", ret);
+- goto err_erase_db_xa;
+- }
+-
+ cmdq->mem = ivpu_bo_create_global(vdev, SZ_4K, DRM_IVPU_BO_WC | DRM_IVPU_BO_MAPPABLE);
+ if (!cmdq->mem)
+- goto err_erase_cmdq_xa;
++ goto err_free_cmdq;
+
+ ret = ivpu_preemption_buffers_create(vdev, file_priv, cmdq);
+ if (ret)
+@@ -107,10 +93,6 @@ static struct ivpu_cmdq *ivpu_cmdq_alloc(struct ivpu_file_priv *file_priv)
+
+ return cmdq;
+
+-err_erase_cmdq_xa:
+- xa_erase(&file_priv->cmdq_xa, cmdq->id);
+-err_erase_db_xa:
+- xa_erase(&vdev->db_xa, cmdq->db_id);
+ err_free_cmdq:
+ kfree(cmdq);
+ return NULL;
+@@ -234,30 +216,88 @@ static int ivpu_cmdq_fini(struct ivpu_file_priv *file_priv, struct ivpu_cmdq *cm
+ return 0;
+ }
+
++static int ivpu_db_id_alloc(struct ivpu_device *vdev, u32 *db_id)
++{
++ int ret;
++ u32 id;
++
++ ret = xa_alloc_cyclic(&vdev->db_xa, &id, NULL, vdev->db_limit, &vdev->db_next, GFP_KERNEL);
++ if (ret < 0)
++ return ret;
++
++ *db_id = id;
++ return 0;
++}
++
++static int ivpu_cmdq_id_alloc(struct ivpu_file_priv *file_priv, u32 *cmdq_id)
++{
++ int ret;
++ u32 id;
++
++ ret = xa_alloc_cyclic(&file_priv->cmdq_xa, &id, NULL, file_priv->cmdq_limit,
++ &file_priv->cmdq_id_next, GFP_KERNEL);
++ if (ret < 0)
++ return ret;
++
++ *cmdq_id = id;
++ return 0;
++}
++
+ static struct ivpu_cmdq *ivpu_cmdq_acquire(struct ivpu_file_priv *file_priv, u8 priority)
+ {
++ struct ivpu_device *vdev = file_priv->vdev;
+ struct ivpu_cmdq *cmdq;
+- unsigned long cmdq_id;
++ unsigned long id;
+ int ret;
+
+ lockdep_assert_held(&file_priv->lock);
+
+- xa_for_each(&file_priv->cmdq_xa, cmdq_id, cmdq)
++ xa_for_each(&file_priv->cmdq_xa, id, cmdq)
+ if (cmdq->priority == priority)
+ break;
+
+ if (!cmdq) {
+ cmdq = ivpu_cmdq_alloc(file_priv);
+- if (!cmdq)
++ if (!cmdq) {
++ ivpu_err(vdev, "Failed to allocate command queue\n");
+ return NULL;
++ }
++
++ ret = ivpu_db_id_alloc(vdev, &cmdq->db_id);
++ if (ret) {
++ ivpu_err(file_priv->vdev, "Failed to allocate doorbell ID: %d\n", ret);
++ goto err_free_cmdq;
++ }
++
++ ret = ivpu_cmdq_id_alloc(file_priv, &cmdq->id);
++ if (ret) {
++ ivpu_err(vdev, "Failed to allocate command queue ID: %d\n", ret);
++ goto err_erase_db_id;
++ }
++
+ cmdq->priority = priority;
++ ret = xa_err(xa_store(&file_priv->cmdq_xa, cmdq->id, cmdq, GFP_KERNEL));
++ if (ret) {
++ ivpu_err(vdev, "Failed to store command queue in cmdq_xa: %d\n", ret);
++ goto err_erase_cmdq_id;
++ }
+ }
+
+ ret = ivpu_cmdq_init(file_priv, cmdq, priority);
+- if (ret)
+- return NULL;
++ if (ret) {
++ ivpu_err(vdev, "Failed to initialize command queue: %d\n", ret);
++ goto err_free_cmdq;
++ }
+
+ return cmdq;
++
++err_erase_cmdq_id:
++ xa_erase(&file_priv->cmdq_xa, cmdq->id);
++err_erase_db_id:
++ xa_erase(&vdev->db_xa, cmdq->db_id);
++err_free_cmdq:
++ ivpu_cmdq_free(file_priv, cmdq);
++ return NULL;
+ }
+
+ void ivpu_cmdq_release_all_locked(struct ivpu_file_priv *file_priv)
+@@ -606,8 +646,8 @@ static int ivpu_job_submit(struct ivpu_job *job, u8 priority)
+ err_erase_xa:
+ xa_erase(&vdev->submitted_jobs_xa, job->job_id);
+ err_unlock:
+- mutex_unlock(&vdev->submitted_jobs_lock);
+ mutex_unlock(&file_priv->lock);
++ mutex_unlock(&vdev->submitted_jobs_lock);
+ ivpu_rpm_put(vdev);
+ return ret;
+ }
+diff --git a/drivers/base/cpu.c b/drivers/base/cpu.c
+index a7e5118498758e..50651435577c8f 100644
+--- a/drivers/base/cpu.c
++++ b/drivers/base/cpu.c
+@@ -600,6 +600,7 @@ CPU_SHOW_VULN_FALLBACK(spec_rstack_overflow);
+ CPU_SHOW_VULN_FALLBACK(gds);
+ CPU_SHOW_VULN_FALLBACK(reg_file_data_sampling);
+ CPU_SHOW_VULN_FALLBACK(ghostwrite);
++CPU_SHOW_VULN_FALLBACK(indirect_target_selection);
+
+ static DEVICE_ATTR(meltdown, 0444, cpu_show_meltdown, NULL);
+ static DEVICE_ATTR(spectre_v1, 0444, cpu_show_spectre_v1, NULL);
+@@ -616,6 +617,7 @@ static DEVICE_ATTR(spec_rstack_overflow, 0444, cpu_show_spec_rstack_overflow, NU
+ static DEVICE_ATTR(gather_data_sampling, 0444, cpu_show_gds, NULL);
+ static DEVICE_ATTR(reg_file_data_sampling, 0444, cpu_show_reg_file_data_sampling, NULL);
+ static DEVICE_ATTR(ghostwrite, 0444, cpu_show_ghostwrite, NULL);
++static DEVICE_ATTR(indirect_target_selection, 0444, cpu_show_indirect_target_selection, NULL);
+
+ static struct attribute *cpu_root_vulnerabilities_attrs[] = {
+ &dev_attr_meltdown.attr,
+@@ -633,6 +635,7 @@ static struct attribute *cpu_root_vulnerabilities_attrs[] = {
+ &dev_attr_gather_data_sampling.attr,
+ &dev_attr_reg_file_data_sampling.attr,
+ &dev_attr_ghostwrite.attr,
++ &dev_attr_indirect_target_selection.attr,
+ NULL
+ };
+
+diff --git a/drivers/block/loop.c b/drivers/block/loop.c
+index 7668b79d8b0a94..b378d2aa49f069 100644
+--- a/drivers/block/loop.c
++++ b/drivers/block/loop.c
+@@ -496,6 +496,25 @@ static int loop_validate_file(struct file *file, struct block_device *bdev)
+ return 0;
+ }
+
++static void loop_assign_backing_file(struct loop_device *lo, struct file *file)
++{
++ lo->lo_backing_file = file;
++ lo->old_gfp_mask = mapping_gfp_mask(file->f_mapping);
++ mapping_set_gfp_mask(file->f_mapping,
++ lo->old_gfp_mask & ~(__GFP_IO | __GFP_FS));
++}
++
++static int loop_check_backing_file(struct file *file)
++{
++ if (!file->f_op->read_iter)
++ return -EINVAL;
++
++ if ((file->f_mode & FMODE_WRITE) && !file->f_op->write_iter)
++ return -EINVAL;
++
++ return 0;
++}
++
+ /*
+ * loop_change_fd switched the backing store of a loopback device to
+ * a new file. This is useful for operating system installers to free up
+@@ -517,6 +536,10 @@ static int loop_change_fd(struct loop_device *lo, struct block_device *bdev,
+ if (!file)
+ return -EBADF;
+
++ error = loop_check_backing_file(file);
++ if (error)
++ return error;
++
+ /* suppress uevents while reconfiguring the device */
+ dev_set_uevent_suppress(disk_to_dev(lo->lo_disk), 1);
+
+@@ -549,10 +572,7 @@ static int loop_change_fd(struct loop_device *lo, struct block_device *bdev,
+ disk_force_media_change(lo->lo_disk);
+ memflags = blk_mq_freeze_queue(lo->lo_queue);
+ mapping_set_gfp_mask(old_file->f_mapping, lo->old_gfp_mask);
+- lo->lo_backing_file = file;
+- lo->old_gfp_mask = mapping_gfp_mask(file->f_mapping);
+- mapping_set_gfp_mask(file->f_mapping,
+- lo->old_gfp_mask & ~(__GFP_IO|__GFP_FS));
++ loop_assign_backing_file(lo, file);
+ loop_update_dio(lo);
+ blk_mq_unfreeze_queue(lo->lo_queue, memflags);
+ partscan = lo->lo_flags & LO_FLAGS_PARTSCAN;
+@@ -943,7 +963,6 @@ static int loop_configure(struct loop_device *lo, blk_mode_t mode,
+ const struct loop_config *config)
+ {
+ struct file *file = fget(config->fd);
+- struct address_space *mapping;
+ struct queue_limits lim;
+ int error;
+ loff_t size;
+@@ -952,6 +971,14 @@ static int loop_configure(struct loop_device *lo, blk_mode_t mode,
+
+ if (!file)
+ return -EBADF;
++
++ if ((mode & BLK_OPEN_WRITE) && !file->f_op->write_iter)
++ return -EINVAL;
++
++ error = loop_check_backing_file(file);
++ if (error)
++ return error;
++
+ is_loop = is_loop_device(file);
+
+ /* This is safe, since we have a reference from open(). */
+@@ -979,8 +1006,6 @@ static int loop_configure(struct loop_device *lo, blk_mode_t mode,
+ if (error)
+ goto out_unlock;
+
+- mapping = file->f_mapping;
+-
+ if ((config->info.lo_flags & ~LOOP_CONFIGURE_SETTABLE_FLAGS) != 0) {
+ error = -EINVAL;
+ goto out_unlock;
+@@ -1012,9 +1037,7 @@ static int loop_configure(struct loop_device *lo, blk_mode_t mode,
+ set_disk_ro(lo->lo_disk, (lo->lo_flags & LO_FLAGS_READ_ONLY) != 0);
+
+ lo->lo_device = bdev;
+- lo->lo_backing_file = file;
+- lo->old_gfp_mask = mapping_gfp_mask(mapping);
+- mapping_set_gfp_mask(mapping, lo->old_gfp_mask & ~(__GFP_IO|__GFP_FS));
++ loop_assign_backing_file(lo, file);
+
+ lim = queue_limits_start_update(lo->lo_queue);
+ loop_update_limits(lo, &lim, config->block_size);
+diff --git a/drivers/bluetooth/btmtk.c b/drivers/bluetooth/btmtk.c
+index 68846c5bd4f794..4390fd571dbd15 100644
+--- a/drivers/bluetooth/btmtk.c
++++ b/drivers/bluetooth/btmtk.c
+@@ -1330,13 +1330,6 @@ int btmtk_usb_setup(struct hci_dev *hdev)
+ break;
+ case 0x7922:
+ case 0x7925:
+- /* Reset the device to ensure it's in the initial state before
+- * downloading the firmware to ensure.
+- */
+-
+- if (!test_bit(BTMTK_FIRMWARE_LOADED, &btmtk_data->flags))
+- btmtk_usb_subsys_reset(hdev, dev_id);
+- fallthrough;
+ case 0x7961:
+ btmtk_fw_get_filename(fw_bin_name, sizeof(fw_bin_name), dev_id,
+ fw_version, fw_flavor);
+@@ -1345,12 +1338,9 @@ int btmtk_usb_setup(struct hci_dev *hdev)
+ btmtk_usb_hci_wmt_sync);
+ if (err < 0) {
+ bt_dev_err(hdev, "Failed to set up firmware (%d)", err);
+- clear_bit(BTMTK_FIRMWARE_LOADED, &btmtk_data->flags);
+ return err;
+ }
+
+- set_bit(BTMTK_FIRMWARE_LOADED, &btmtk_data->flags);
+-
+ /* It's Device EndPoint Reset Option Register */
+ err = btmtk_usb_uhw_reg_write(hdev, MTK_EP_RST_OPT,
+ MTK_EP_RST_IN_OUT_OPT);
+diff --git a/drivers/clocksource/i8253.c b/drivers/clocksource/i8253.c
+index 39f7c2d736d169..b603c25f3dfaac 100644
+--- a/drivers/clocksource/i8253.c
++++ b/drivers/clocksource/i8253.c
+@@ -103,7 +103,7 @@ int __init clocksource_i8253_init(void)
+ #ifdef CONFIG_CLKEVT_I8253
+ void clockevent_i8253_disable(void)
+ {
+- raw_spin_lock(&i8253_lock);
++ guard(raw_spinlock_irqsave)(&i8253_lock);
+
+ /*
+ * Writing the MODE register should stop the counter, according to
+@@ -132,8 +132,6 @@ void clockevent_i8253_disable(void)
+ outb_p(0, PIT_CH0);
+
+ outb_p(0x30, PIT_MODE);
+-
+- raw_spin_unlock(&i8253_lock);
+ }
+
+ static int pit_shutdown(struct clock_event_device *evt)
+diff --git a/drivers/firmware/arm_scmi/driver.c b/drivers/firmware/arm_scmi/driver.c
+index 1c75a4c9c37166..0390d5ff195ec0 100644
+--- a/drivers/firmware/arm_scmi/driver.c
++++ b/drivers/firmware/arm_scmi/driver.c
+@@ -1248,7 +1248,8 @@ static void xfer_put(const struct scmi_protocol_handle *ph,
+ }
+
+ static bool scmi_xfer_done_no_timeout(struct scmi_chan_info *cinfo,
+- struct scmi_xfer *xfer, ktime_t stop)
++ struct scmi_xfer *xfer, ktime_t stop,
++ bool *ooo)
+ {
+ struct scmi_info *info = handle_to_scmi_info(cinfo->handle);
+
+@@ -1257,7 +1258,7 @@ static bool scmi_xfer_done_no_timeout(struct scmi_chan_info *cinfo,
+ * in case of out-of-order receptions of delayed responses
+ */
+ return info->desc->ops->poll_done(cinfo, xfer) ||
+- try_wait_for_completion(&xfer->done) ||
++ (*ooo = try_wait_for_completion(&xfer->done)) ||
+ ktime_after(ktime_get(), stop);
+ }
+
+@@ -1274,15 +1275,17 @@ static int scmi_wait_for_reply(struct device *dev, const struct scmi_desc *desc,
+ * itself to support synchronous commands replies.
+ */
+ if (!desc->sync_cmds_completed_on_ret) {
++ bool ooo = false;
++
+ /*
+ * Poll on xfer using transport provided .poll_done();
+ * assumes no completion interrupt was available.
+ */
+ ktime_t stop = ktime_add_ms(ktime_get(), timeout_ms);
+
+- spin_until_cond(scmi_xfer_done_no_timeout(cinfo,
+- xfer, stop));
+- if (ktime_after(ktime_get(), stop)) {
++ spin_until_cond(scmi_xfer_done_no_timeout(cinfo, xfer,
++ stop, &ooo));
++ if (!ooo && !info->desc->ops->poll_done(cinfo, xfer)) {
+ dev_err(dev,
+ "timed out in resp(caller: %pS) - polling\n",
+ (void *)_RET_IP_);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+index 98f0c12df12bc1..416d2611fbf1c6 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+@@ -1593,11 +1593,9 @@ static inline void amdgpu_acpi_get_backlight_caps(struct amdgpu_dm_backlight_cap
+ #if defined(CONFIG_ACPI) && defined(CONFIG_SUSPEND)
+ bool amdgpu_acpi_is_s3_active(struct amdgpu_device *adev);
+ bool amdgpu_acpi_is_s0ix_active(struct amdgpu_device *adev);
+-void amdgpu_choose_low_power_state(struct amdgpu_device *adev);
+ #else
+ static inline bool amdgpu_acpi_is_s0ix_active(struct amdgpu_device *adev) { return false; }
+ static inline bool amdgpu_acpi_is_s3_active(struct amdgpu_device *adev) { return false; }
+-static inline void amdgpu_choose_low_power_state(struct amdgpu_device *adev) { }
+ #endif
+
+ void amdgpu_register_gpu_instance(struct amdgpu_device *adev);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
+index b8d4e07d2043ed..bebfbc1497d8e0 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
+@@ -1533,22 +1533,4 @@ bool amdgpu_acpi_is_s0ix_active(struct amdgpu_device *adev)
+ #endif /* CONFIG_AMD_PMC */
+ }
+
+-/**
+- * amdgpu_choose_low_power_state
+- *
+- * @adev: amdgpu_device_pointer
+- *
+- * Choose the target low power state for the GPU
+- */
+-void amdgpu_choose_low_power_state(struct amdgpu_device *adev)
+-{
+- if (adev->in_runpm)
+- return;
+-
+- if (amdgpu_acpi_is_s0ix_active(adev))
+- adev->in_s0ix = true;
+- else if (amdgpu_acpi_is_s3_active(adev))
+- adev->in_s3 = true;
+-}
+-
+ #endif /* CONFIG_SUSPEND */
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index 71e8a76180ad6d..34f0451b274c8a 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -4819,28 +4819,20 @@ static int amdgpu_device_evict_resources(struct amdgpu_device *adev)
+ * @data: data
+ *
+ * This function is called when the system is about to suspend or hibernate.
+- * It is used to evict resources from the device before the system goes to
+- * sleep while there is still access to swap.
++ * It is used to set the appropriate flags so that eviction can be optimized
++ * in the pm prepare callback.
+ */
+ static int amdgpu_device_pm_notifier(struct notifier_block *nb, unsigned long mode,
+ void *data)
+ {
+ struct amdgpu_device *adev = container_of(nb, struct amdgpu_device, pm_nb);
+- int r;
+
+ switch (mode) {
+ case PM_HIBERNATION_PREPARE:
+ adev->in_s4 = true;
+- fallthrough;
+- case PM_SUSPEND_PREPARE:
+- r = amdgpu_device_evict_resources(adev);
+- /*
+- * This is considered non-fatal at this time because
+- * amdgpu_device_prepare() will also fatally evict resources.
+- * See https://gitlab.freedesktop.org/drm/amd/-/issues/3781
+- */
+- if (r)
+- drm_warn(adev_to_drm(adev), "Failed to evict resources, freeze active processes if problems occur: %d\n", r);
++ break;
++ case PM_POST_HIBERNATION:
++ adev->in_s4 = false;
+ break;
+ }
+
+@@ -4861,15 +4853,13 @@ int amdgpu_device_prepare(struct drm_device *dev)
+ struct amdgpu_device *adev = drm_to_adev(dev);
+ int i, r;
+
+- amdgpu_choose_low_power_state(adev);
+-
+ if (dev->switch_power_state == DRM_SWITCH_POWER_OFF)
+ return 0;
+
+ /* Evict the majority of BOs before starting suspend sequence */
+ r = amdgpu_device_evict_resources(adev);
+ if (r)
+- goto unprepare;
++ return r;
+
+ flush_delayed_work(&adev->gfx.gfx_off_delay_work);
+
+@@ -4880,15 +4870,10 @@ int amdgpu_device_prepare(struct drm_device *dev)
+ continue;
+ r = adev->ip_blocks[i].version->funcs->prepare_suspend(&adev->ip_blocks[i]);
+ if (r)
+- goto unprepare;
++ return r;
+ }
+
+ return 0;
+-
+-unprepare:
+- adev->in_s0ix = adev->in_s3 = adev->in_s4 = false;
+-
+- return r;
+ }
+
+ /**
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+index f2d77bc04e4a98..bb8ab25ea76ad6 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+@@ -2582,13 +2582,8 @@ static int amdgpu_pmops_freeze(struct device *dev)
+ static int amdgpu_pmops_thaw(struct device *dev)
+ {
+ struct drm_device *drm_dev = dev_get_drvdata(dev);
+- struct amdgpu_device *adev = drm_to_adev(drm_dev);
+- int r;
+-
+- r = amdgpu_device_resume(drm_dev, true);
+- adev->in_s4 = false;
+
+- return r;
++ return amdgpu_device_resume(drm_dev, true);
+ }
+
+ static int amdgpu_pmops_poweroff(struct device *dev)
+@@ -2601,9 +2596,6 @@ static int amdgpu_pmops_poweroff(struct device *dev)
+ static int amdgpu_pmops_restore(struct device *dev)
+ {
+ struct drm_device *drm_dev = dev_get_drvdata(dev);
+- struct amdgpu_device *adev = drm_to_adev(drm_dev);
+-
+- adev->in_s4 = false;
+
+ return amdgpu_device_resume(drm_dev, true);
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
+index adaf4388ad2806..ce66a938f41a87 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
+@@ -66,7 +66,6 @@
+ #define VCN_ENC_CMD_REG_WAIT 0x0000000c
+
+ #define VCN_AON_SOC_ADDRESS_2_0 0x1f800
+-#define VCN1_AON_SOC_ADDRESS_3_0 0x48000
+ #define VCN_VID_IP_ADDRESS_2_0 0x0
+ #define VCN_AON_IP_ADDRESS_2_0 0x30000
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/hdp_v4_0.c b/drivers/gpu/drm/amd/amdgpu/hdp_v4_0.c
+index 194026e9be3331..1ca1bbe7784e50 100644
+--- a/drivers/gpu/drm/amd/amdgpu/hdp_v4_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/hdp_v4_0.c
+@@ -42,7 +42,12 @@ static void hdp_v4_0_flush_hdp(struct amdgpu_device *adev,
+ {
+ if (!ring || !ring->funcs->emit_wreg) {
+ WREG32((adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_MEM_FLUSH_CNTL) >> 2, 0);
+- RREG32((adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_MEM_FLUSH_CNTL) >> 2);
++ /* We just need to read back a register to post the write.
++ * Reading back the remapped register causes problems on
++ * some platforms so just read back the memory size register.
++ */
++ if (adev->nbio.funcs->get_memsize)
++ adev->nbio.funcs->get_memsize(adev);
+ } else {
+ amdgpu_ring_emit_wreg(ring, (adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_MEM_FLUSH_CNTL) >> 2, 0);
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/hdp_v5_0.c b/drivers/gpu/drm/amd/amdgpu/hdp_v5_0.c
+index d3962d46908811..40705e13ca567b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/hdp_v5_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/hdp_v5_0.c
+@@ -33,7 +33,12 @@ static void hdp_v5_0_flush_hdp(struct amdgpu_device *adev,
+ {
+ if (!ring || !ring->funcs->emit_wreg) {
+ WREG32((adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_MEM_FLUSH_CNTL) >> 2, 0);
+- RREG32((adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_MEM_FLUSH_CNTL) >> 2);
++ /* We just need to read back a register to post the write.
++ * Reading back the remapped register causes problems on
++ * some platforms so just read back the memory size register.
++ */
++ if (adev->nbio.funcs->get_memsize)
++ adev->nbio.funcs->get_memsize(adev);
+ } else {
+ amdgpu_ring_emit_wreg(ring, (adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_MEM_FLUSH_CNTL) >> 2, 0);
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/hdp_v5_2.c b/drivers/gpu/drm/amd/amdgpu/hdp_v5_2.c
+index f52552c5fa27b6..6b9f2e1d9d690d 100644
+--- a/drivers/gpu/drm/amd/amdgpu/hdp_v5_2.c
++++ b/drivers/gpu/drm/amd/amdgpu/hdp_v5_2.c
+@@ -34,7 +34,17 @@ static void hdp_v5_2_flush_hdp(struct amdgpu_device *adev,
+ if (!ring || !ring->funcs->emit_wreg) {
+ WREG32_NO_KIQ((adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_MEM_FLUSH_CNTL) >> 2,
+ 0);
+- RREG32_NO_KIQ((adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_MEM_FLUSH_CNTL) >> 2);
++ if (amdgpu_sriov_vf(adev)) {
++ /* this is fine because SR_IOV doesn't remap the register */
++ RREG32_NO_KIQ((adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_MEM_FLUSH_CNTL) >> 2);
++ } else {
++ /* We just need to read back a register to post the write.
++ * Reading back the remapped register causes problems on
++ * some platforms so just read back the memory size register.
++ */
++ if (adev->nbio.funcs->get_memsize)
++ adev->nbio.funcs->get_memsize(adev);
++ }
+ } else {
+ amdgpu_ring_emit_wreg(ring,
+ (adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_MEM_FLUSH_CNTL) >> 2,
+diff --git a/drivers/gpu/drm/amd/amdgpu/hdp_v6_0.c b/drivers/gpu/drm/amd/amdgpu/hdp_v6_0.c
+index 6948fe9956ce47..20da813299f04a 100644
+--- a/drivers/gpu/drm/amd/amdgpu/hdp_v6_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/hdp_v6_0.c
+@@ -36,7 +36,12 @@ static void hdp_v6_0_flush_hdp(struct amdgpu_device *adev,
+ {
+ if (!ring || !ring->funcs->emit_wreg) {
+ WREG32((adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_MEM_FLUSH_CNTL) >> 2, 0);
+- RREG32((adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_MEM_FLUSH_CNTL) >> 2);
++ /* We just need to read back a register to post the write.
++ * Reading back the remapped register causes problems on
++ * some platforms so just read back the memory size register.
++ */
++ if (adev->nbio.funcs->get_memsize)
++ adev->nbio.funcs->get_memsize(adev);
+ } else {
+ amdgpu_ring_emit_wreg(ring, (adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_MEM_FLUSH_CNTL) >> 2, 0);
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/hdp_v7_0.c b/drivers/gpu/drm/amd/amdgpu/hdp_v7_0.c
+index 63820329f67eb6..f7ecdd15d52827 100644
+--- a/drivers/gpu/drm/amd/amdgpu/hdp_v7_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/hdp_v7_0.c
+@@ -33,7 +33,12 @@ static void hdp_v7_0_flush_hdp(struct amdgpu_device *adev,
+ {
+ if (!ring || !ring->funcs->emit_wreg) {
+ WREG32((adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_MEM_FLUSH_CNTL) >> 2, 0);
+- RREG32((adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_MEM_FLUSH_CNTL) >> 2);
++ /* We just need to read back a register to post the write.
++ * Reading back the remapped register causes problems on
++ * some platforms so just read back the memory size register.
++ */
++ if (adev->nbio.funcs->get_memsize)
++ adev->nbio.funcs->get_memsize(adev);
+ } else {
+ amdgpu_ring_emit_wreg(ring, (adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_MEM_FLUSH_CNTL) >> 2, 0);
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c
+index e42cfc731ad8e2..f40737d27cb016 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c
+@@ -39,6 +39,7 @@
+
+ #define VCN_VID_SOC_ADDRESS_2_0 0x1fa00
+ #define VCN1_VID_SOC_ADDRESS_3_0 0x48200
++#define VCN1_AON_SOC_ADDRESS_3_0 0x48000
+
+ #define mmUVD_CONTEXT_ID_INTERNAL_OFFSET 0x1fd
+ #define mmUVD_GPCOM_VCPU_CMD_INTERNAL_OFFSET 0x503
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c b/drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c
+index b518202955cad6..2431e1914a8fe0 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c
+@@ -39,6 +39,7 @@
+
+ #define VCN_VID_SOC_ADDRESS_2_0 0x1fa00
+ #define VCN1_VID_SOC_ADDRESS_3_0 0x48200
++#define VCN1_AON_SOC_ADDRESS_3_0 0x48000
+
+ #define mmUVD_CONTEXT_ID_INTERNAL_OFFSET 0x27
+ #define mmUVD_GPCOM_VCPU_CMD_INTERNAL_OFFSET 0x0f
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
+index 63ddd4cca9109c..02c2defcf91edf 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
+@@ -40,6 +40,7 @@
+
+ #define VCN_VID_SOC_ADDRESS_2_0 0x1fa00
+ #define VCN1_VID_SOC_ADDRESS_3_0 0x48200
++#define VCN1_AON_SOC_ADDRESS_3_0 0x48000
+
+ #define mmUVD_CONTEXT_ID_INTERNAL_OFFSET 0x27
+ #define mmUVD_GPCOM_VCPU_CMD_INTERNAL_OFFSET 0x0f
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c
+index 00551d6f037019..090794457339da 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c
+@@ -46,6 +46,7 @@
+
+ #define VCN_VID_SOC_ADDRESS_2_0 0x1fb00
+ #define VCN1_VID_SOC_ADDRESS_3_0 0x48300
++#define VCN1_AON_SOC_ADDRESS_3_0 0x48000
+
+ #define VCN_HARVEST_MMSCH 0
+
+@@ -582,7 +583,8 @@ static void vcn_v4_0_mc_resume_dpg_mode(struct amdgpu_device *adev, int inst_idx
+
+ /* VCN global tiling registers */
+ WREG32_SOC15_DPG_MODE(inst_idx, SOC15_DPG_MODE_OFFSET(
+- VCN, 0, regUVD_GFX10_ADDR_CONFIG), adev->gfx.config.gb_addr_config, 0, indirect);
++ VCN, inst_idx, regUVD_GFX10_ADDR_CONFIG),
++ adev->gfx.config.gb_addr_config, 0, indirect);
+ }
+
+ /**
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c
+index ecdc027f822037..a2d1a4b2f03a59 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c
+@@ -44,6 +44,7 @@
+
+ #define VCN_VID_SOC_ADDRESS_2_0 0x1fb00
+ #define VCN1_VID_SOC_ADDRESS_3_0 0x48300
++#define VCN1_AON_SOC_ADDRESS_3_0 0x48000
+
+ static const struct amdgpu_hwip_reg_entry vcn_reg_list_4_0_3[] = {
+ SOC15_REG_ENTRY_STR(VCN, 0, regUVD_POWER_STATUS),
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c
+index 23d3c16c9d9f29..d2dfdb141b2456 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c
+@@ -46,6 +46,7 @@
+
+ #define VCN_VID_SOC_ADDRESS_2_0 0x1fb00
+ #define VCN1_VID_SOC_ADDRESS_3_0 (0x48300 + 0x38000)
++#define VCN1_AON_SOC_ADDRESS_3_0 (0x48000 + 0x38000)
+
+ #define VCN_HARVEST_MMSCH 0
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c
+index b6d78381ebfbc7..97fc3d5b194775 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c
+@@ -502,7 +502,8 @@ static void vcn_v5_0_0_mc_resume_dpg_mode(struct amdgpu_device *adev, int inst_i
+
+ /* VCN global tiling registers */
+ WREG32_SOC24_DPG_MODE(inst_idx, SOC24_DPG_MODE_OFFSET(
+- VCN, 0, regUVD_GFX10_ADDR_CONFIG), adev->gfx.config.gb_addr_config, 0, indirect);
++ VCN, inst_idx, regUVD_GFX10_ADDR_CONFIG),
++ adev->gfx.config.gb_addr_config, 0, indirect);
+
+ return;
+ }
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 3660e4a1a85f8c..2dbd71fbae28a5 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -668,15 +668,21 @@ static void dm_crtc_high_irq(void *interrupt_params)
+ spin_lock_irqsave(&adev_to_drm(adev)->event_lock, flags);
+
+ if (acrtc->dm_irq_params.stream &&
+- acrtc->dm_irq_params.vrr_params.supported &&
+- acrtc->dm_irq_params.freesync_config.state ==
+- VRR_STATE_ACTIVE_VARIABLE) {
++ acrtc->dm_irq_params.vrr_params.supported) {
++ bool replay_en = acrtc->dm_irq_params.stream->link->replay_settings.replay_feature_enabled;
++ bool psr_en = acrtc->dm_irq_params.stream->link->psr_settings.psr_feature_enabled;
++ bool fs_active_var_en = acrtc->dm_irq_params.freesync_config.state == VRR_STATE_ACTIVE_VARIABLE;
++
+ mod_freesync_handle_v_update(adev->dm.freesync_module,
+ acrtc->dm_irq_params.stream,
+ &acrtc->dm_irq_params.vrr_params);
+
+- dc_stream_adjust_vmin_vmax(adev->dm.dc, acrtc->dm_irq_params.stream,
+- &acrtc->dm_irq_params.vrr_params.adjust);
++ /* update vmin_vmax only if freesync is enabled, or only if PSR and REPLAY are disabled */
++ if (fs_active_var_en || (!fs_active_var_en && !replay_en && !psr_en)) {
++ dc_stream_adjust_vmin_vmax(adev->dm.dc,
++ acrtc->dm_irq_params.stream,
++ &acrtc->dm_irq_params.vrr_params.adjust);
++ }
+ }
+
+ /*
+@@ -12601,7 +12607,7 @@ int amdgpu_dm_process_dmub_aux_transfer_sync(
+ * Transient states before tunneling is enabled could
+ * lead to this error. We can ignore this for now.
+ */
+- if (p_notify->result != AUX_RET_ERROR_PROTOCOL_ERROR) {
++ if (p_notify->result == AUX_RET_ERROR_PROTOCOL_ERROR) {
+ DRM_WARN("DPIA AUX failed on 0x%x(%d), error %d\n",
+ payload->address, payload->length,
+ p_notify->result);
+@@ -12610,22 +12616,14 @@ int amdgpu_dm_process_dmub_aux_transfer_sync(
+ goto out;
+ }
+
++ payload->reply[0] = adev->dm.dmub_notify->aux_reply.command & 0xF;
++ if (adev->dm.dmub_notify->aux_reply.command & 0xF0)
++ /* The reply is stored in the top nibble of the command. */
++ payload->reply[0] = (adev->dm.dmub_notify->aux_reply.command >> 4) & 0xF;
+
+- payload->reply[0] = adev->dm.dmub_notify->aux_reply.command;
+- if (!payload->write && p_notify->aux_reply.length &&
+- (payload->reply[0] == AUX_TRANSACTION_REPLY_AUX_ACK)) {
+-
+- if (payload->length != p_notify->aux_reply.length) {
+- DRM_WARN("invalid read length %d from DPIA AUX 0x%x(%d)!\n",
+- p_notify->aux_reply.length,
+- payload->address, payload->length);
+- *operation_result = AUX_RET_ERROR_INVALID_REPLY;
+- goto out;
+- }
+-
++ if (!payload->write && p_notify->aux_reply.length)
+ memcpy(payload->data, p_notify->aux_reply.data,
+ p_notify->aux_reply.length);
+- }
+
+ /* success */
+ ret = p_notify->aux_reply.length;
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+index 07e744da7bf410..66df18b1d0af9f 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+@@ -51,6 +51,9 @@
+
+ #define PEAK_FACTOR_X1000 1006
+
++/*
++ * This function handles both native AUX and I2C-Over-AUX transactions.
++ */
+ static ssize_t dm_dp_aux_transfer(struct drm_dp_aux *aux,
+ struct drm_dp_aux_msg *msg)
+ {
+@@ -87,15 +90,25 @@ static ssize_t dm_dp_aux_transfer(struct drm_dp_aux *aux,
+ if (adev->dm.aux_hpd_discon_quirk) {
+ if (msg->address == DP_SIDEBAND_MSG_DOWN_REQ_BASE &&
+ operation_result == AUX_RET_ERROR_HPD_DISCON) {
+- result = 0;
++ result = msg->size;
+ operation_result = AUX_RET_SUCCESS;
+ }
+ }
+
+- if (payload.write && result >= 0)
+- result = msg->size;
++ /*
++	 * A result of 0 includes the AUX_DEFER/I2C_DEFER cases.
++ */
++ if (payload.write && result >= 0) {
++ if (result) {
++			/* one byte indicating partially written bytes. Force 0 to retry */
++ drm_info(adev_to_drm(adev), "amdgpu: AUX partially written\n");
++ result = 0;
++ } else if (!payload.reply[0])
++ /*I2C_ACK|AUX_ACK*/
++ result = msg->size;
++ }
+
+- if (result < 0)
++ if (result < 0) {
+ switch (operation_result) {
+ case AUX_RET_SUCCESS:
+ break;
+@@ -114,6 +127,13 @@ static ssize_t dm_dp_aux_transfer(struct drm_dp_aux *aux,
+ break;
+ }
+
++ drm_info(adev_to_drm(adev), "amdgpu: DP AUX transfer fail:%d\n", operation_result);
++ }
++
++ if (payload.reply[0])
++ drm_info(adev_to_drm(adev), "amdgpu: AUX reply command not ACK: 0x%02x.",
++ payload.reply[0]);
++
+ return result;
+ }
+
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml2_translation_helper.c b/drivers/gpu/drm/amd/display/dc/dml2/dml2_translation_helper.c
+index b8a34abaf519a5..aeb9fae83cacc2 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/dml2_translation_helper.c
++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml2_translation_helper.c
+@@ -969,7 +969,9 @@ static void populate_dml_surface_cfg_from_plane_state(enum dml_project_id dml2_p
+ }
+ }
+
+-static void get_scaler_data_for_plane(const struct dc_plane_state *in, struct dc_state *context, struct scaler_data *out)
++static struct scaler_data *get_scaler_data_for_plane(
++ const struct dc_plane_state *in,
++ struct dc_state *context)
+ {
+ int i;
+ struct pipe_ctx *temp_pipe = &context->res_ctx.temp_pipe;
+@@ -990,7 +992,7 @@ static void get_scaler_data_for_plane(const struct dc_plane_state *in, struct dc
+ }
+
+ ASSERT(i < MAX_PIPES);
+- memcpy(out, &temp_pipe->plane_res.scl_data, sizeof(*out));
++ return &temp_pipe->plane_res.scl_data;
+ }
+
+ static void populate_dummy_dml_plane_cfg(struct dml_plane_cfg_st *out, unsigned int location,
+@@ -1053,11 +1055,7 @@ static void populate_dml_plane_cfg_from_plane_state(struct dml_plane_cfg_st *out
+ const struct dc_plane_state *in, struct dc_state *context,
+ const struct soc_bounding_box_st *soc)
+ {
+- struct scaler_data *scaler_data = kzalloc(sizeof(*scaler_data), GFP_KERNEL);
+- if (!scaler_data)
+- return;
+-
+- get_scaler_data_for_plane(in, context, scaler_data);
++ struct scaler_data *scaler_data = get_scaler_data_for_plane(in, context);
+
+ out->CursorBPP[location] = dml_cur_32bit;
+ out->CursorWidth[location] = 256;
+@@ -1122,8 +1120,6 @@ static void populate_dml_plane_cfg_from_plane_state(struct dml_plane_cfg_st *out
+ out->DynamicMetadataTransmittedBytes[location] = 0;
+
+ out->NumberOfCursors[location] = 1;
+-
+- kfree(scaler_data);
+ }
+
+ static unsigned int map_stream_to_dml_display_cfg(const struct dml2_context *dml2,
+diff --git a/drivers/gpu/drm/panel/panel-simple.c b/drivers/gpu/drm/panel/panel-simple.c
+index 9b2f128fd3094b..cf9ab2d1f1d2a7 100644
+--- a/drivers/gpu/drm/panel/panel-simple.c
++++ b/drivers/gpu/drm/panel/panel-simple.c
+@@ -1027,27 +1027,28 @@ static const struct panel_desc auo_g070vvn01 = {
+ },
+ };
+
+-static const struct drm_display_mode auo_g101evn010_mode = {
+- .clock = 68930,
+- .hdisplay = 1280,
+- .hsync_start = 1280 + 82,
+- .hsync_end = 1280 + 82 + 2,
+- .htotal = 1280 + 82 + 2 + 84,
+- .vdisplay = 800,
+- .vsync_start = 800 + 8,
+- .vsync_end = 800 + 8 + 2,
+- .vtotal = 800 + 8 + 2 + 6,
++static const struct display_timing auo_g101evn010_timing = {
++ .pixelclock = { 64000000, 68930000, 85000000 },
++ .hactive = { 1280, 1280, 1280 },
++ .hfront_porch = { 8, 64, 256 },
++ .hback_porch = { 8, 64, 256 },
++ .hsync_len = { 40, 168, 767 },
++ .vactive = { 800, 800, 800 },
++ .vfront_porch = { 4, 8, 100 },
++ .vback_porch = { 4, 8, 100 },
++ .vsync_len = { 8, 16, 223 },
+ };
+
+ static const struct panel_desc auo_g101evn010 = {
+- .modes = &auo_g101evn010_mode,
+- .num_modes = 1,
++ .timings = &auo_g101evn010_timing,
++ .num_timings = 1,
+ .bpc = 6,
+ .size = {
+ .width = 216,
+ .height = 135,
+ },
+ .bus_format = MEDIA_BUS_FMT_RGB666_1X7X3_SPWG,
++ .bus_flags = DRM_BUS_FLAG_DE_HIGH,
+ .connector_type = DRM_MODE_CONNECTOR_LVDS,
+ };
+
+diff --git a/drivers/gpu/drm/v3d/v3d_sched.c b/drivers/gpu/drm/v3d/v3d_sched.c
+index 6db503a5691806..78ebc7d54a0ce3 100644
+--- a/drivers/gpu/drm/v3d/v3d_sched.c
++++ b/drivers/gpu/drm/v3d/v3d_sched.c
+@@ -746,11 +746,16 @@ v3d_gpu_reset_for_timeout(struct v3d_dev *v3d, struct drm_sched_job *sched_job)
+ return DRM_GPU_SCHED_STAT_NOMINAL;
+ }
+
+-/* If the current address or return address have changed, then the GPU
+- * has probably made progress and we should delay the reset. This
+- * could fail if the GPU got in an infinite loop in the CL, but that
+- * is pretty unlikely outside of an i-g-t testcase.
+- */
++static void
++v3d_sched_skip_reset(struct drm_sched_job *sched_job)
++{
++ struct drm_gpu_scheduler *sched = sched_job->sched;
++
++ spin_lock(&sched->job_list_lock);
++ list_add(&sched_job->list, &sched->pending_list);
++ spin_unlock(&sched->job_list_lock);
++}
++
+ static enum drm_gpu_sched_stat
+ v3d_cl_job_timedout(struct drm_sched_job *sched_job, enum v3d_queue q,
+ u32 *timedout_ctca, u32 *timedout_ctra)
+@@ -760,9 +765,16 @@ v3d_cl_job_timedout(struct drm_sched_job *sched_job, enum v3d_queue q,
+ u32 ctca = V3D_CORE_READ(0, V3D_CLE_CTNCA(q));
+ u32 ctra = V3D_CORE_READ(0, V3D_CLE_CTNRA(q));
+
++ /* If the current address or return address have changed, then the GPU
++ * has probably made progress and we should delay the reset. This
++ * could fail if the GPU got in an infinite loop in the CL, but that
++ * is pretty unlikely outside of an i-g-t testcase.
++ */
+ if (*timedout_ctca != ctca || *timedout_ctra != ctra) {
+ *timedout_ctca = ctca;
+ *timedout_ctra = ctra;
++
++ v3d_sched_skip_reset(sched_job);
+ return DRM_GPU_SCHED_STAT_NOMINAL;
+ }
+
+@@ -802,11 +814,13 @@ v3d_csd_job_timedout(struct drm_sched_job *sched_job)
+ struct v3d_dev *v3d = job->base.v3d;
+ u32 batches = V3D_CORE_READ(0, V3D_CSD_CURRENT_CFG4(v3d->ver));
+
+- /* If we've made progress, skip reset and let the timer get
+- * rearmed.
++ /* If we've made progress, skip reset, add the job to the pending
++ * list, and let the timer get rearmed.
+ */
+ if (job->timedout_batches != batches) {
+ job->timedout_batches = batches;
++
++ v3d_sched_skip_reset(sched_job);
+ return DRM_GPU_SCHED_STAT_NOMINAL;
+ }
+
+diff --git a/drivers/gpu/drm/xe/tests/xe_mocs.c b/drivers/gpu/drm/xe/tests/xe_mocs.c
+index ef1e5256c56a8a..0e502feaca8186 100644
+--- a/drivers/gpu/drm/xe/tests/xe_mocs.c
++++ b/drivers/gpu/drm/xe/tests/xe_mocs.c
+@@ -46,8 +46,11 @@ static void read_l3cc_table(struct xe_gt *gt,
+ unsigned int fw_ref, i;
+ u32 reg_val;
+
+- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
+- KUNIT_ASSERT_NE_MSG(test, fw_ref, 0, "Forcewake Failed.\n");
++ fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
++ if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL)) {
++ xe_force_wake_put(gt_to_fw(gt), fw_ref);
++ KUNIT_ASSERT_TRUE_MSG(test, true, "Forcewake Failed.\n");
++ }
+
+ for (i = 0; i < info->num_mocs_regs; i++) {
+ if (!(i & 1)) {
+diff --git a/drivers/gpu/drm/xe/xe_gt_debugfs.c b/drivers/gpu/drm/xe/xe_gt_debugfs.c
+index 2d63a69cbfa38e..f7005a3643e627 100644
+--- a/drivers/gpu/drm/xe/xe_gt_debugfs.c
++++ b/drivers/gpu/drm/xe/xe_gt_debugfs.c
+@@ -92,22 +92,23 @@ static int hw_engines(struct xe_gt *gt, struct drm_printer *p)
+ struct xe_hw_engine *hwe;
+ enum xe_hw_engine_id id;
+ unsigned int fw_ref;
++ int ret = 0;
+
+ xe_pm_runtime_get(xe);
+ fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
+ if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL)) {
+- xe_pm_runtime_put(xe);
+- xe_force_wake_put(gt_to_fw(gt), fw_ref);
+- return -ETIMEDOUT;
++ ret = -ETIMEDOUT;
++ goto fw_put;
+ }
+
+ for_each_hw_engine(hwe, gt, id)
+ xe_hw_engine_print(hwe, p);
+
++fw_put:
+ xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ xe_pm_runtime_put(xe);
+
+- return 0;
++ return ret;
+ }
+
+ static int powergate_info(struct xe_gt *gt, struct drm_printer *p)
+diff --git a/drivers/gpu/drm/xe/xe_gt_pagefault.c b/drivers/gpu/drm/xe/xe_gt_pagefault.c
+index 2606cd396df5c1..0d0207be93ed7f 100644
+--- a/drivers/gpu/drm/xe/xe_gt_pagefault.c
++++ b/drivers/gpu/drm/xe/xe_gt_pagefault.c
+@@ -422,9 +422,16 @@ static int xe_alloc_pf_queue(struct xe_gt *gt, struct pf_queue *pf_queue)
+ num_eus = bitmap_weight(gt->fuse_topo.eu_mask_per_dss,
+ XE_MAX_EU_FUSE_BITS) * num_dss;
+
+- /* user can issue separate page faults per EU and per CS */
++ /*
++ * user can issue separate page faults per EU and per CS
++ *
++	 * XXX: Multiplier required as compute UMDs are getting PF queue errors
++	 * without it. Follow up on why this multiplier is required.
++ */
++#define PF_MULTIPLIER 8
+ pf_queue->num_dw =
+- (num_eus + XE_NUM_HW_ENGINES) * PF_MSG_LEN_DW;
++ (num_eus + XE_NUM_HW_ENGINES) * PF_MSG_LEN_DW * PF_MULTIPLIER;
++#undef PF_MULTIPLIER
+
+ pf_queue->gt = gt;
+ pf_queue->data = devm_kcalloc(xe->drm.dev, pf_queue->num_dw,
+diff --git a/drivers/hv/hyperv_vmbus.h b/drivers/hv/hyperv_vmbus.h
+index 29780f3a747848..0b450e53161e51 100644
+--- a/drivers/hv/hyperv_vmbus.h
++++ b/drivers/hv/hyperv_vmbus.h
+@@ -477,4 +477,10 @@ static inline int hv_debug_add_dev_dir(struct hv_device *dev)
+
+ #endif /* CONFIG_HYPERV_TESTING */
+
++/* Create and remove sysfs entry for memory mapped ring buffers for a channel */
++int hv_create_ring_sysfs(struct vmbus_channel *channel,
++ int (*hv_mmap_ring_buffer)(struct vmbus_channel *channel,
++ struct vm_area_struct *vma));
++int hv_remove_ring_sysfs(struct vmbus_channel *channel);
++
+ #endif /* _HYPERV_VMBUS_H */
+diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
+index 6e55a1a2613d31..9a72101c6be9e4 100644
+--- a/drivers/hv/vmbus_drv.c
++++ b/drivers/hv/vmbus_drv.c
+@@ -1792,6 +1792,27 @@ static ssize_t subchannel_id_show(struct vmbus_channel *channel,
+ }
+ static VMBUS_CHAN_ATTR_RO(subchannel_id);
+
++static int hv_mmap_ring_buffer_wrapper(struct file *filp, struct kobject *kobj,
++ const struct bin_attribute *attr,
++ struct vm_area_struct *vma)
++{
++ struct vmbus_channel *channel = container_of(kobj, struct vmbus_channel, kobj);
++
++ /*
++ * hv_(create|remove)_ring_sysfs implementation ensures that mmap_ring_buffer
++ * is not NULL.
++ */
++ return channel->mmap_ring_buffer(channel, vma);
++}
++
++static struct bin_attribute chan_attr_ring_buffer = {
++ .attr = {
++ .name = "ring",
++ .mode = 0600,
++ },
++ .size = 2 * SZ_2M,
++ .mmap = hv_mmap_ring_buffer_wrapper,
++};
+ static struct attribute *vmbus_chan_attrs[] = {
+ &chan_attr_out_mask.attr,
+ &chan_attr_in_mask.attr,
+@@ -1811,6 +1832,11 @@ static struct attribute *vmbus_chan_attrs[] = {
+ NULL
+ };
+
++static struct bin_attribute *vmbus_chan_bin_attrs[] = {
++ &chan_attr_ring_buffer,
++ NULL
++};
++
+ /*
+ * Channel-level attribute_group callback function. Returns the permission for
+ * each attribute, and returns 0 if an attribute is not visible.
+@@ -1831,9 +1857,24 @@ static umode_t vmbus_chan_attr_is_visible(struct kobject *kobj,
+ return attr->mode;
+ }
+
++static umode_t vmbus_chan_bin_attr_is_visible(struct kobject *kobj,
++ const struct bin_attribute *attr, int idx)
++{
++ const struct vmbus_channel *channel =
++ container_of(kobj, struct vmbus_channel, kobj);
++
++ /* Hide ring attribute if channel's ring_sysfs_visible is set to false */
++ if (attr == &chan_attr_ring_buffer && !channel->ring_sysfs_visible)
++ return 0;
++
++ return attr->attr.mode;
++}
++
+ static const struct attribute_group vmbus_chan_group = {
+ .attrs = vmbus_chan_attrs,
+- .is_visible = vmbus_chan_attr_is_visible
++ .bin_attrs = vmbus_chan_bin_attrs,
++ .is_visible = vmbus_chan_attr_is_visible,
++ .is_bin_visible = vmbus_chan_bin_attr_is_visible,
+ };
+
+ static const struct kobj_type vmbus_chan_ktype = {
+@@ -1841,6 +1882,63 @@ static const struct kobj_type vmbus_chan_ktype = {
+ .release = vmbus_chan_release,
+ };
+
++/**
++ * hv_create_ring_sysfs() - create "ring" sysfs entry corresponding to ring buffers for a channel.
++ * @channel: Pointer to vmbus_channel structure
++ * @hv_mmap_ring_buffer: function pointer for initializing the function to be called on mmap of
++ * channel's "ring" sysfs node, which is for the ring buffer of that channel.
++ * Function pointer is of below type:
++ * int (*hv_mmap_ring_buffer)(struct vmbus_channel *channel,
++ * struct vm_area_struct *vma))
++ * This has a pointer to the channel and a pointer to vm_area_struct,
++ * used for mmap, as arguments.
++ *
++ * Sysfs node for ring buffer of a channel is created along with other fields, however its
++ * visibility is disabled by default. Sysfs creation needs to be controlled when the use-case
++ * is running.
++ * For example, HV_NIC device is used either by uio_hv_generic or hv_netvsc at any given point of
++ * time, and "ring" sysfs is needed only when uio_hv_generic is bound to that device. To avoid
++ * exposing the ring buffer by default, this function is responsible for enabling visibility of
++ * ring for userspace to use.
++ * Note: Race conditions can happen with userspace and it is not encouraged to create new
++ * use-cases for this. This was added to maintain backward compatibility, while solving
++ * one of the race conditions in uio_hv_generic while creating sysfs.
++ *
++ * Returns 0 on success or error code on failure.
++ */
++int hv_create_ring_sysfs(struct vmbus_channel *channel,
++ int (*hv_mmap_ring_buffer)(struct vmbus_channel *channel,
++ struct vm_area_struct *vma))
++{
++ struct kobject *kobj = &channel->kobj;
++
++ channel->mmap_ring_buffer = hv_mmap_ring_buffer;
++ channel->ring_sysfs_visible = true;
++
++ return sysfs_update_group(kobj, &vmbus_chan_group);
++}
++EXPORT_SYMBOL_GPL(hv_create_ring_sysfs);
++
++/**
++ * hv_remove_ring_sysfs() - remove ring sysfs entry corresponding to ring buffers for a channel.
++ * @channel: Pointer to vmbus_channel structure
++ *
++ * Hide "ring" sysfs for a channel by changing its is_visible attribute and updating sysfs group.
++ *
++ * Returns 0 on success or error code on failure.
++ */
++int hv_remove_ring_sysfs(struct vmbus_channel *channel)
++{
++ struct kobject *kobj = &channel->kobj;
++ int ret;
++
++ channel->ring_sysfs_visible = false;
++ ret = sysfs_update_group(kobj, &vmbus_chan_group);
++ channel->mmap_ring_buffer = NULL;
++ return ret;
++}
++EXPORT_SYMBOL_GPL(hv_remove_ring_sysfs);
++
+ /*
+ * vmbus_add_channel_kobj - setup a sub-directory under device/channels
+ */
+diff --git a/drivers/iio/accel/adis16201.c b/drivers/iio/accel/adis16201.c
+index 8601b9a8b8e75c..5127e58eebc7d9 100644
+--- a/drivers/iio/accel/adis16201.c
++++ b/drivers/iio/accel/adis16201.c
+@@ -211,9 +211,9 @@ static const struct iio_chan_spec adis16201_channels[] = {
+ BIT(IIO_CHAN_INFO_CALIBBIAS), 0, 14),
+ ADIS_AUX_ADC_CHAN(ADIS16201_AUX_ADC_REG, ADIS16201_SCAN_AUX_ADC, 0, 12),
+ ADIS_INCLI_CHAN(X, ADIS16201_XINCL_OUT_REG, ADIS16201_SCAN_INCLI_X,
+- BIT(IIO_CHAN_INFO_CALIBBIAS), 0, 14),
++ BIT(IIO_CHAN_INFO_CALIBBIAS), 0, 12),
+ ADIS_INCLI_CHAN(Y, ADIS16201_YINCL_OUT_REG, ADIS16201_SCAN_INCLI_Y,
+- BIT(IIO_CHAN_INFO_CALIBBIAS), 0, 14),
++ BIT(IIO_CHAN_INFO_CALIBBIAS), 0, 12),
+ IIO_CHAN_SOFT_TIMESTAMP(7)
+ };
+
+diff --git a/drivers/iio/accel/adxl355_core.c b/drivers/iio/accel/adxl355_core.c
+index e8cd21fa77a698..cbac622ef82117 100644
+--- a/drivers/iio/accel/adxl355_core.c
++++ b/drivers/iio/accel/adxl355_core.c
+@@ -231,7 +231,7 @@ struct adxl355_data {
+ u8 transf_buf[3];
+ struct {
+ u8 buf[14];
+- s64 ts;
++ aligned_s64 ts;
+ } buffer;
+ } __aligned(IIO_DMA_MINALIGN);
+ };
+diff --git a/drivers/iio/accel/adxl367.c b/drivers/iio/accel/adxl367.c
+index a48ac0d7bd96b1..2ba7d7de47e448 100644
+--- a/drivers/iio/accel/adxl367.c
++++ b/drivers/iio/accel/adxl367.c
+@@ -604,18 +604,14 @@ static int _adxl367_set_odr(struct adxl367_state *st, enum adxl367_odr odr)
+ if (ret)
+ return ret;
+
++ st->odr = odr;
++
+ /* Activity timers depend on ODR */
+ ret = _adxl367_set_act_time_ms(st, st->act_time_ms);
+ if (ret)
+ return ret;
+
+- ret = _adxl367_set_inact_time_ms(st, st->inact_time_ms);
+- if (ret)
+- return ret;
+-
+- st->odr = odr;
+-
+- return 0;
++ return _adxl367_set_inact_time_ms(st, st->inact_time_ms);
+ }
+
+ static int adxl367_set_odr(struct iio_dev *indio_dev, enum adxl367_odr odr)
+diff --git a/drivers/iio/adc/ad7266.c b/drivers/iio/adc/ad7266.c
+index 858c8be2ff1a09..44346f5a5aeea0 100644
+--- a/drivers/iio/adc/ad7266.c
++++ b/drivers/iio/adc/ad7266.c
+@@ -45,7 +45,7 @@ struct ad7266_state {
+ */
+ struct {
+ __be16 sample[2];
+- s64 timestamp;
++ aligned_s64 timestamp;
+ } data __aligned(IIO_DMA_MINALIGN);
+ };
+
+diff --git a/drivers/iio/adc/ad7606_spi.c b/drivers/iio/adc/ad7606_spi.c
+index e2c1475257065c..c8bc9e772dfc26 100644
+--- a/drivers/iio/adc/ad7606_spi.c
++++ b/drivers/iio/adc/ad7606_spi.c
+@@ -165,7 +165,7 @@ static int ad7606_spi_reg_read(struct ad7606_state *st, unsigned int addr)
+ {
+ .tx_buf = &st->d16[0],
+ .len = 2,
+- .cs_change = 0,
++ .cs_change = 1,
+ }, {
+ .rx_buf = &st->d16[1],
+ .len = 2,
+diff --git a/drivers/iio/adc/ad7768-1.c b/drivers/iio/adc/ad7768-1.c
+index 157a0df97f971b..a9248a85466ea3 100644
+--- a/drivers/iio/adc/ad7768-1.c
++++ b/drivers/iio/adc/ad7768-1.c
+@@ -169,7 +169,7 @@ struct ad7768_state {
+ union {
+ struct {
+ __be32 chan;
+- s64 timestamp;
++ aligned_s64 timestamp;
+ } scan;
+ __be32 d32;
+ u8 d8[2];
+diff --git a/drivers/iio/adc/dln2-adc.c b/drivers/iio/adc/dln2-adc.c
+index 221a5fdc1eaac8..e4165017708550 100644
+--- a/drivers/iio/adc/dln2-adc.c
++++ b/drivers/iio/adc/dln2-adc.c
+@@ -467,7 +467,7 @@ static irqreturn_t dln2_adc_trigger_h(int irq, void *p)
+ struct iio_dev *indio_dev = pf->indio_dev;
+ struct {
+ __le16 values[DLN2_ADC_MAX_CHANNELS];
+- int64_t timestamp_space;
++ aligned_s64 timestamp_space;
+ } data;
+ struct dln2_adc_get_all_vals dev_data;
+ struct dln2_adc *dln2 = iio_priv(indio_dev);
+diff --git a/drivers/iio/adc/rockchip_saradc.c b/drivers/iio/adc/rockchip_saradc.c
+index a29e54754c8fbb..ab4de67fb135e3 100644
+--- a/drivers/iio/adc/rockchip_saradc.c
++++ b/drivers/iio/adc/rockchip_saradc.c
+@@ -480,15 +480,6 @@ static int rockchip_saradc_probe(struct platform_device *pdev)
+ if (info->reset)
+ rockchip_saradc_reset_controller(info->reset);
+
+- /*
+- * Use a default value for the converter clock.
+- * This may become user-configurable in the future.
+- */
+- ret = clk_set_rate(info->clk, info->data->clk_rate);
+- if (ret < 0)
+- return dev_err_probe(&pdev->dev, ret,
+- "failed to set adc clk rate\n");
+-
+ ret = regulator_enable(info->vref);
+ if (ret < 0)
+ return dev_err_probe(&pdev->dev, ret,
+@@ -515,6 +506,14 @@ static int rockchip_saradc_probe(struct platform_device *pdev)
+ if (IS_ERR(info->clk))
+ return dev_err_probe(&pdev->dev, PTR_ERR(info->clk),
+ "failed to get adc clock\n");
++ /*
++ * Use a default value for the converter clock.
++ * This may become user-configurable in the future.
++ */
++ ret = clk_set_rate(info->clk, info->data->clk_rate);
++ if (ret < 0)
++ return dev_err_probe(&pdev->dev, ret,
++ "failed to set adc clk rate\n");
+
+ platform_set_drvdata(pdev, indio_dev);
+
+diff --git a/drivers/iio/chemical/pms7003.c b/drivers/iio/chemical/pms7003.c
+index d0bd94912e0a34..e05ce1f12065c6 100644
+--- a/drivers/iio/chemical/pms7003.c
++++ b/drivers/iio/chemical/pms7003.c
+@@ -5,7 +5,6 @@
+ * Copyright (c) Tomasz Duszynski <tduszyns@gmail.com>
+ */
+
+-#include <linux/unaligned.h>
+ #include <linux/completion.h>
+ #include <linux/device.h>
+ #include <linux/errno.h>
+@@ -19,6 +18,8 @@
+ #include <linux/module.h>
+ #include <linux/mutex.h>
+ #include <linux/serdev.h>
++#include <linux/types.h>
++#include <linux/unaligned.h>
+
+ #define PMS7003_DRIVER_NAME "pms7003"
+
+@@ -76,7 +77,7 @@ struct pms7003_state {
+ /* Used to construct scan to push to the IIO buffer */
+ struct {
+ u16 data[3]; /* PM1, PM2P5, PM10 */
+- s64 ts;
++ aligned_s64 ts;
+ } scan;
+ };
+
+diff --git a/drivers/iio/chemical/sps30.c b/drivers/iio/chemical/sps30.c
+index 6f4f2ba2c09d5e..a7888146188d09 100644
+--- a/drivers/iio/chemical/sps30.c
++++ b/drivers/iio/chemical/sps30.c
+@@ -108,7 +108,7 @@ static irqreturn_t sps30_trigger_handler(int irq, void *p)
+ int ret;
+ struct {
+ s32 data[4]; /* PM1, PM2P5, PM4, PM10 */
+- s64 ts;
++ aligned_s64 ts;
+ } scan;
+
+ mutex_lock(&state->lock);
+diff --git a/drivers/iio/common/hid-sensors/hid-sensor-attributes.c b/drivers/iio/common/hid-sensors/hid-sensor-attributes.c
+index ad1882f608c0a2..2055a03cbeb187 100644
+--- a/drivers/iio/common/hid-sensors/hid-sensor-attributes.c
++++ b/drivers/iio/common/hid-sensors/hid-sensor-attributes.c
+@@ -66,6 +66,10 @@ static struct {
+ {HID_USAGE_SENSOR_HUMIDITY, 0, 1000, 0},
+ {HID_USAGE_SENSOR_HINGE, 0, 0, 17453293},
+ {HID_USAGE_SENSOR_HINGE, HID_USAGE_SENSOR_UNITS_DEGREES, 0, 17453293},
++
++ {HID_USAGE_SENSOR_HUMAN_PRESENCE, 0, 1, 0},
++ {HID_USAGE_SENSOR_HUMAN_PROXIMITY, 0, 1, 0},
++ {HID_USAGE_SENSOR_HUMAN_ATTENTION, 0, 1, 0},
+ };
+
+ static void simple_div(int dividend, int divisor, int *whole,
+diff --git a/drivers/iio/imu/bmi270/bmi270_core.c b/drivers/iio/imu/bmi270/bmi270_core.c
+index 7fec52e0b48624..950fcacddd40d7 100644
+--- a/drivers/iio/imu/bmi270/bmi270_core.c
++++ b/drivers/iio/imu/bmi270/bmi270_core.c
+@@ -654,8 +654,7 @@ static int bmi270_configure_imu(struct bmi270_data *bmi270_device)
+ FIELD_PREP(BMI270_ACC_CONF_ODR_MSK,
+ BMI270_ACC_CONF_ODR_100HZ) |
+ FIELD_PREP(BMI270_ACC_CONF_BWP_MSK,
+- BMI270_ACC_CONF_BWP_NORMAL_MODE) |
+- BMI270_PWR_CONF_ADV_PWR_SAVE_MSK);
++ BMI270_ACC_CONF_BWP_NORMAL_MODE));
+ if (ret)
+ return dev_err_probe(dev, ret, "Failed to configure accelerometer");
+
+@@ -663,8 +662,7 @@ static int bmi270_configure_imu(struct bmi270_data *bmi270_device)
+ FIELD_PREP(BMI270_GYR_CONF_ODR_MSK,
+ BMI270_GYR_CONF_ODR_200HZ) |
+ FIELD_PREP(BMI270_GYR_CONF_BWP_MSK,
+- BMI270_GYR_CONF_BWP_NORMAL_MODE) |
+- BMI270_PWR_CONF_ADV_PWR_SAVE_MSK);
++ BMI270_GYR_CONF_BWP_NORMAL_MODE));
+ if (ret)
+ return dev_err_probe(dev, ret, "Failed to configure gyroscope");
+
+diff --git a/drivers/iio/imu/inv_mpu6050/inv_mpu_ring.c b/drivers/iio/imu/inv_mpu6050/inv_mpu_ring.c
+index 3d3b27f28c9d1c..273196e647a2b5 100644
+--- a/drivers/iio/imu/inv_mpu6050/inv_mpu_ring.c
++++ b/drivers/iio/imu/inv_mpu6050/inv_mpu_ring.c
+@@ -50,7 +50,7 @@ irqreturn_t inv_mpu6050_read_fifo(int irq, void *p)
+ u16 fifo_count;
+ u32 fifo_period;
+ s64 timestamp;
+- u8 data[INV_MPU6050_OUTPUT_DATA_SIZE];
++ u8 data[INV_MPU6050_OUTPUT_DATA_SIZE] __aligned(8);
+ size_t i, nb;
+
+ mutex_lock(&st->lock);
+diff --git a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c
+index 0a7cd8c1aa3313..8a9d2593576a2a 100644
+--- a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c
++++ b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c
+@@ -392,6 +392,9 @@ int st_lsm6dsx_read_fifo(struct st_lsm6dsx_hw *hw)
+ if (fifo_status & cpu_to_le16(ST_LSM6DSX_FIFO_EMPTY_MASK))
+ return 0;
+
++ if (!pattern_len)
++ pattern_len = ST_LSM6DSX_SAMPLE_SIZE;
++
+ fifo_len = (le16_to_cpu(fifo_status) & fifo_diff_mask) *
+ ST_LSM6DSX_CHAN_SIZE;
+ fifo_len = (fifo_len / pattern_len) * pattern_len;
+@@ -623,6 +626,9 @@ int st_lsm6dsx_read_tagged_fifo(struct st_lsm6dsx_hw *hw)
+ if (!fifo_len)
+ return 0;
+
++ if (!pattern_len)
++ pattern_len = ST_LSM6DSX_TAGGED_SAMPLE_SIZE;
++
+ for (read_len = 0; read_len < fifo_len; read_len += pattern_len) {
+ err = st_lsm6dsx_read_block(hw,
+ ST_LSM6DSX_REG_FIFO_OUT_TAG_ADDR,
+diff --git a/drivers/iio/light/hid-sensor-prox.c b/drivers/iio/light/hid-sensor-prox.c
+index 76b76d12b38822..4c65b32d34ce41 100644
+--- a/drivers/iio/light/hid-sensor-prox.c
++++ b/drivers/iio/light/hid-sensor-prox.c
+@@ -34,9 +34,9 @@ struct prox_state {
+ struct iio_chan_spec channels[MAX_CHANNELS];
+ u32 channel2usage[MAX_CHANNELS];
+ u32 human_presence[MAX_CHANNELS];
+- int scale_pre_decml;
+- int scale_post_decml;
+- int scale_precision;
++ int scale_pre_decml[MAX_CHANNELS];
++ int scale_post_decml[MAX_CHANNELS];
++ int scale_precision[MAX_CHANNELS];
+ unsigned long scan_mask[2]; /* One entry plus one terminator. */
+ int num_channels;
+ };
+@@ -116,13 +116,15 @@ static int prox_read_raw(struct iio_dev *indio_dev,
+ ret_type = IIO_VAL_INT;
+ break;
+ case IIO_CHAN_INFO_SCALE:
+- *val = prox_state->scale_pre_decml;
+- *val2 = prox_state->scale_post_decml;
+- ret_type = prox_state->scale_precision;
++ if (chan->scan_index >= prox_state->num_channels)
++ return -EINVAL;
++
++ *val = prox_state->scale_pre_decml[chan->scan_index];
++ *val2 = prox_state->scale_post_decml[chan->scan_index];
++ ret_type = prox_state->scale_precision[chan->scan_index];
+ break;
+ case IIO_CHAN_INFO_OFFSET:
+- *val = hid_sensor_convert_exponent(
+- prox_state->prox_attr[chan->scan_index].unit_expo);
++ *val = 0;
+ ret_type = IIO_VAL_INT;
+ break;
+ case IIO_CHAN_INFO_SAMP_FREQ:
+@@ -249,6 +251,10 @@ static int prox_parse_report(struct platform_device *pdev,
+ st->prox_attr[index].size);
+ dev_dbg(&pdev->dev, "prox %x:%x\n", st->prox_attr[index].index,
+ st->prox_attr[index].report_id);
++ st->scale_precision[index] =
++ hid_sensor_format_scale(usage_id, &st->prox_attr[index],
++ &st->scale_pre_decml[index],
++ &st->scale_post_decml[index]);
+ index++;
+ }
+
+diff --git a/drivers/iio/light/opt3001.c b/drivers/iio/light/opt3001.c
+index 65b295877b4158..393a3d2fbe1d73 100644
+--- a/drivers/iio/light/opt3001.c
++++ b/drivers/iio/light/opt3001.c
+@@ -788,8 +788,9 @@ static irqreturn_t opt3001_irq(int irq, void *_iio)
+ int ret;
+ bool wake_result_ready_queue = false;
+ enum iio_chan_type chan_type = opt->chip_info->chan_type;
++ bool ok_to_ignore_lock = opt->ok_to_ignore_lock;
+
+- if (!opt->ok_to_ignore_lock)
++ if (!ok_to_ignore_lock)
+ mutex_lock(&opt->lock);
+
+ ret = i2c_smbus_read_word_swapped(opt->client, OPT3001_CONFIGURATION);
+@@ -826,7 +827,7 @@ static irqreturn_t opt3001_irq(int irq, void *_iio)
+ }
+
+ out:
+- if (!opt->ok_to_ignore_lock)
++ if (!ok_to_ignore_lock)
+ mutex_unlock(&opt->lock);
+
+ if (wake_result_ready_queue)
+diff --git a/drivers/iio/pressure/mprls0025pa.h b/drivers/iio/pressure/mprls0025pa.h
+index 9d5c30afa9d69a..d62a018eaff32b 100644
+--- a/drivers/iio/pressure/mprls0025pa.h
++++ b/drivers/iio/pressure/mprls0025pa.h
+@@ -34,16 +34,6 @@ struct iio_dev;
+ struct mpr_data;
+ struct mpr_ops;
+
+-/**
+- * struct mpr_chan
+- * @pres: pressure value
+- * @ts: timestamp
+- */
+-struct mpr_chan {
+- s32 pres;
+- s64 ts;
+-};
+-
+ enum mpr_func_id {
+ MPR_FUNCTION_A,
+ MPR_FUNCTION_B,
+@@ -69,6 +59,8 @@ enum mpr_func_id {
+ * reading in a loop until data is ready
+ * @completion: handshake from irq to read
+ * @chan: channel values for buffered mode
++ * @chan.pres: pressure value
++ * @chan.ts: timestamp
+ * @buffer: raw conversion data
+ */
+ struct mpr_data {
+@@ -87,7 +79,10 @@ struct mpr_data {
+ struct gpio_desc *gpiod_reset;
+ int irq;
+ struct completion completion;
+- struct mpr_chan chan;
++ struct {
++ s32 pres;
++ aligned_s64 ts;
++ } chan;
+ u8 buffer[MPR_MEASUREMENT_RD_SIZE] __aligned(IIO_DMA_MINALIGN);
+ };
+
+diff --git a/drivers/iio/temperature/maxim_thermocouple.c b/drivers/iio/temperature/maxim_thermocouple.c
+index c28a7a6dea5f12..555a61e2f3fdd1 100644
+--- a/drivers/iio/temperature/maxim_thermocouple.c
++++ b/drivers/iio/temperature/maxim_thermocouple.c
+@@ -121,9 +121,9 @@ static const struct maxim_thermocouple_chip maxim_thermocouple_chips[] = {
+ struct maxim_thermocouple_data {
+ struct spi_device *spi;
+ const struct maxim_thermocouple_chip *chip;
++ char tc_type;
+
+ u8 buffer[16] __aligned(IIO_DMA_MINALIGN);
+- char tc_type;
+ };
+
+ static int maxim_thermocouple_read(struct maxim_thermocouple_data *data,
+diff --git a/drivers/input/joystick/xpad.c b/drivers/input/joystick/xpad.c
+index c33e6f33265ba0..8ee7d8e5d1c733 100644
+--- a/drivers/input/joystick/xpad.c
++++ b/drivers/input/joystick/xpad.c
+@@ -77,12 +77,13 @@
+ * xbox d-pads should map to buttons, as is required for DDR pads
+ * but we map them to axes when possible to simplify things
+ */
+-#define MAP_DPAD_TO_BUTTONS (1 << 0)
+-#define MAP_TRIGGERS_TO_BUTTONS (1 << 1)
+-#define MAP_STICKS_TO_NULL (1 << 2)
+-#define MAP_SELECT_BUTTON (1 << 3)
+-#define MAP_PADDLES (1 << 4)
+-#define MAP_PROFILE_BUTTON (1 << 5)
++#define MAP_DPAD_TO_BUTTONS BIT(0)
++#define MAP_TRIGGERS_TO_BUTTONS BIT(1)
++#define MAP_STICKS_TO_NULL BIT(2)
++#define MAP_SHARE_BUTTON BIT(3)
++#define MAP_PADDLES BIT(4)
++#define MAP_PROFILE_BUTTON BIT(5)
++#define MAP_SHARE_OFFSET BIT(6)
+
+ #define DANCEPAD_MAP_CONFIG (MAP_DPAD_TO_BUTTONS | \
+ MAP_TRIGGERS_TO_BUTTONS | MAP_STICKS_TO_NULL)
+@@ -135,7 +136,7 @@ static const struct xpad_device {
+ { 0x03f0, 0x048D, "HyperX Clutch", 0, XTYPE_XBOX360 }, /* wireless */
+ { 0x03f0, 0x0495, "HyperX Clutch Gladiate", 0, XTYPE_XBOXONE },
+ { 0x03f0, 0x07A0, "HyperX Clutch Gladiate RGB", 0, XTYPE_XBOXONE },
+- { 0x03f0, 0x08B6, "HyperX Clutch Gladiate", 0, XTYPE_XBOXONE }, /* v2 */
++ { 0x03f0, 0x08B6, "HyperX Clutch Gladiate", MAP_SHARE_BUTTON, XTYPE_XBOXONE }, /* v2 */
+ { 0x03f0, 0x09B4, "HyperX Clutch Tanto", 0, XTYPE_XBOXONE },
+ { 0x044f, 0x0f00, "Thrustmaster Wheel", 0, XTYPE_XBOX },
+ { 0x044f, 0x0f03, "Thrustmaster Wheel", 0, XTYPE_XBOX },
+@@ -159,7 +160,7 @@ static const struct xpad_device {
+ { 0x045e, 0x0719, "Xbox 360 Wireless Receiver", MAP_DPAD_TO_BUTTONS, XTYPE_XBOX360W },
+ { 0x045e, 0x0b00, "Microsoft X-Box One Elite 2 pad", MAP_PADDLES, XTYPE_XBOXONE },
+ { 0x045e, 0x0b0a, "Microsoft X-Box Adaptive Controller", MAP_PROFILE_BUTTON, XTYPE_XBOXONE },
+- { 0x045e, 0x0b12, "Microsoft Xbox Series S|X Controller", MAP_SELECT_BUTTON, XTYPE_XBOXONE },
++ { 0x045e, 0x0b12, "Microsoft Xbox Series S|X Controller", MAP_SHARE_BUTTON | MAP_SHARE_OFFSET, XTYPE_XBOXONE },
+ { 0x046d, 0xc21d, "Logitech Gamepad F310", 0, XTYPE_XBOX360 },
+ { 0x046d, 0xc21e, "Logitech Gamepad F510", 0, XTYPE_XBOX360 },
+ { 0x046d, 0xc21f, "Logitech Gamepad F710", 0, XTYPE_XBOX360 },
+@@ -205,13 +206,13 @@ static const struct xpad_device {
+ { 0x0738, 0x9871, "Mad Catz Portable Drum", 0, XTYPE_XBOX360 },
+ { 0x0738, 0xb726, "Mad Catz Xbox controller - MW2", 0, XTYPE_XBOX360 },
+ { 0x0738, 0xb738, "Mad Catz MVC2TE Stick 2", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOX360 },
+- { 0x0738, 0xbeef, "Mad Catz JOYTECH NEO SE Advanced GamePad", XTYPE_XBOX360 },
++ { 0x0738, 0xbeef, "Mad Catz JOYTECH NEO SE Advanced GamePad", 0, XTYPE_XBOX360 },
+ { 0x0738, 0xcb02, "Saitek Cyborg Rumble Pad - PC/Xbox 360", 0, XTYPE_XBOX360 },
+ { 0x0738, 0xcb03, "Saitek P3200 Rumble Pad - PC/Xbox 360", 0, XTYPE_XBOX360 },
+ { 0x0738, 0xcb29, "Saitek Aviator Stick AV8R02", 0, XTYPE_XBOX360 },
+ { 0x0738, 0xf738, "Super SFIV FightStick TE S", 0, XTYPE_XBOX360 },
+ { 0x07ff, 0xffff, "Mad Catz GamePad", 0, XTYPE_XBOX360 },
+- { 0x0b05, 0x1a38, "ASUS ROG RAIKIRI", 0, XTYPE_XBOXONE },
++ { 0x0b05, 0x1a38, "ASUS ROG RAIKIRI", MAP_SHARE_BUTTON, XTYPE_XBOXONE },
+ { 0x0b05, 0x1abb, "ASUS ROG RAIKIRI PRO", 0, XTYPE_XBOXONE },
+ { 0x0c12, 0x0005, "Intec wireless", 0, XTYPE_XBOX },
+ { 0x0c12, 0x8801, "Nyko Xbox Controller", 0, XTYPE_XBOX },
+@@ -240,7 +241,7 @@ static const struct xpad_device {
+ { 0x0e6f, 0x0146, "Rock Candy Wired Controller for Xbox One", 0, XTYPE_XBOXONE },
+ { 0x0e6f, 0x0147, "PDP Marvel Xbox One Controller", 0, XTYPE_XBOXONE },
+ { 0x0e6f, 0x015c, "PDP Xbox One Arcade Stick", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOXONE },
+- { 0x0e6f, 0x015d, "PDP Mirror's Edge Official Wired Controller for Xbox One", XTYPE_XBOXONE },
++ { 0x0e6f, 0x015d, "PDP Mirror's Edge Official Wired Controller for Xbox One", 0, XTYPE_XBOXONE },
+ { 0x0e6f, 0x0161, "PDP Xbox One Controller", 0, XTYPE_XBOXONE },
+ { 0x0e6f, 0x0162, "PDP Xbox One Controller", 0, XTYPE_XBOXONE },
+ { 0x0e6f, 0x0163, "PDP Xbox One Controller", 0, XTYPE_XBOXONE },
+@@ -387,10 +388,11 @@ static const struct xpad_device {
+ { 0x2dc8, 0x3106, "8BitDo Ultimate Wireless / Pro 2 Wired Controller", 0, XTYPE_XBOX360 },
+ { 0x2dc8, 0x3109, "8BitDo Ultimate Wireless Bluetooth", 0, XTYPE_XBOX360 },
+ { 0x2dc8, 0x310a, "8BitDo Ultimate 2C Wireless Controller", 0, XTYPE_XBOX360 },
++ { 0x2dc8, 0x310b, "8BitDo Ultimate 2 Wireless Controller", 0, XTYPE_XBOX360 },
+ { 0x2dc8, 0x6001, "8BitDo SN30 Pro", 0, XTYPE_XBOX360 },
+ { 0x2e24, 0x0652, "Hyperkin Duke X-Box One pad", 0, XTYPE_XBOXONE },
+ { 0x2e24, 0x1688, "Hyperkin X91 X-Box One pad", 0, XTYPE_XBOXONE },
+- { 0x2e95, 0x0504, "SCUF Gaming Controller", MAP_SELECT_BUTTON, XTYPE_XBOXONE },
++ { 0x2e95, 0x0504, "SCUF Gaming Controller", MAP_SHARE_BUTTON, XTYPE_XBOXONE },
+ { 0x31e3, 0x1100, "Wooting One", 0, XTYPE_XBOX360 },
+ { 0x31e3, 0x1200, "Wooting Two", 0, XTYPE_XBOX360 },
+ { 0x31e3, 0x1210, "Wooting Lekker", 0, XTYPE_XBOX360 },
+@@ -1027,7 +1029,7 @@ static void xpad360w_process_packet(struct usb_xpad *xpad, u16 cmd, unsigned cha
+ * The report format was gleaned from
+ * https://github.com/kylelemons/xbox/blob/master/xbox.go
+ */
+-static void xpadone_process_packet(struct usb_xpad *xpad, u16 cmd, unsigned char *data)
++static void xpadone_process_packet(struct usb_xpad *xpad, u16 cmd, unsigned char *data, u32 len)
+ {
+ struct input_dev *dev = xpad->dev;
+ bool do_sync = false;
+@@ -1068,8 +1070,12 @@ static void xpadone_process_packet(struct usb_xpad *xpad, u16 cmd, unsigned char
+ /* menu/view buttons */
+ input_report_key(dev, BTN_START, data[4] & BIT(2));
+ input_report_key(dev, BTN_SELECT, data[4] & BIT(3));
+- if (xpad->mapping & MAP_SELECT_BUTTON)
+- input_report_key(dev, KEY_RECORD, data[22] & BIT(0));
++ if (xpad->mapping & MAP_SHARE_BUTTON) {
++ if (xpad->mapping & MAP_SHARE_OFFSET)
++ input_report_key(dev, KEY_RECORD, data[len - 26] & BIT(0));
++ else
++ input_report_key(dev, KEY_RECORD, data[len - 18] & BIT(0));
++ }
+
+ /* buttons A,B,X,Y */
+ input_report_key(dev, BTN_A, data[4] & BIT(4));
+@@ -1217,7 +1223,7 @@ static void xpad_irq_in(struct urb *urb)
+ xpad360w_process_packet(xpad, 0, xpad->idata);
+ break;
+ case XTYPE_XBOXONE:
+- xpadone_process_packet(xpad, 0, xpad->idata);
++ xpadone_process_packet(xpad, 0, xpad->idata, urb->actual_length);
+ break;
+ default:
+ xpad_process_packet(xpad, 0, xpad->idata);
+@@ -1944,7 +1950,7 @@ static int xpad_init_input(struct usb_xpad *xpad)
+ xpad->xtype == XTYPE_XBOXONE) {
+ for (i = 0; xpad360_btn[i] >= 0; i++)
+ input_set_capability(input_dev, EV_KEY, xpad360_btn[i]);
+- if (xpad->mapping & MAP_SELECT_BUTTON)
++ if (xpad->mapping & MAP_SHARE_BUTTON)
+ input_set_capability(input_dev, EV_KEY, KEY_RECORD);
+ } else {
+ for (i = 0; xpad_btn[i] >= 0; i++)
+diff --git a/drivers/input/keyboard/mtk-pmic-keys.c b/drivers/input/keyboard/mtk-pmic-keys.c
+index 5ad6be9141603a..061d48350df661 100644
+--- a/drivers/input/keyboard/mtk-pmic-keys.c
++++ b/drivers/input/keyboard/mtk-pmic-keys.c
+@@ -147,8 +147,8 @@ static void mtk_pmic_keys_lp_reset_setup(struct mtk_pmic_keys *keys,
+ u32 value, mask;
+ int error;
+
+- kregs_home = keys->keys[MTK_PMIC_HOMEKEY_INDEX].regs;
+- kregs_pwr = keys->keys[MTK_PMIC_PWRKEY_INDEX].regs;
++	kregs_home = &regs->keys_regs[MTK_PMIC_HOMEKEY_INDEX];
++	kregs_pwr = &regs->keys_regs[MTK_PMIC_PWRKEY_INDEX];
+
+ error = of_property_read_u32(keys->dev->of_node, "power-off-time-sec",
+ &long_press_debounce);
+diff --git a/drivers/input/mouse/synaptics.c b/drivers/input/mouse/synaptics.c
+index aba57abe697882..b3a1f7a3acc3ca 100644
+--- a/drivers/input/mouse/synaptics.c
++++ b/drivers/input/mouse/synaptics.c
+@@ -163,6 +163,7 @@ static const char * const topbuttonpad_pnp_ids[] = {
+
+ static const char * const smbus_pnp_ids[] = {
+ /* all of the topbuttonpad_pnp_ids are valid, we just add some extras */
++ "DLL060d", /* Dell Precision M3800 */
+ "LEN0048", /* X1 Carbon 3 */
+ "LEN0046", /* X250 */
+ "LEN0049", /* Yoga 11e */
+@@ -189,11 +190,15 @@ static const char * const smbus_pnp_ids[] = {
+ "LEN2054", /* E480 */
+ "LEN2055", /* E580 */
+ "LEN2068", /* T14 Gen 1 */
++ "SYN1221", /* TUXEDO InfinityBook Pro 14 v5 */
++ "SYN3003", /* HP EliteBook 850 G1 */
+ "SYN3015", /* HP EliteBook 840 G2 */
+ "SYN3052", /* HP EliteBook 840 G4 */
+ "SYN3221", /* HP 15-ay000 */
+ "SYN323d", /* HP Spectre X360 13-w013dx */
+ "SYN3257", /* HP Envy 13-ad105ng */
++ "TOS01f6", /* Dynabook Portege X30L-G */
++ "TOS0213", /* Dynabook Portege X30-D */
+ NULL
+ };
+
+diff --git a/drivers/input/touchscreen/cyttsp5.c b/drivers/input/touchscreen/cyttsp5.c
+index eafe5a9b896484..071b7c9bf566eb 100644
+--- a/drivers/input/touchscreen/cyttsp5.c
++++ b/drivers/input/touchscreen/cyttsp5.c
+@@ -580,7 +580,7 @@ static int cyttsp5_power_control(struct cyttsp5 *ts, bool on)
+ int rc;
+
+ SET_CMD_REPORT_TYPE(cmd[0], 0);
+- SET_CMD_REPORT_ID(cmd[0], HID_POWER_SLEEP);
++ SET_CMD_REPORT_ID(cmd[0], state);
+ SET_CMD_OPCODE(cmd[1], HID_CMD_SET_POWER);
+
+ rc = cyttsp5_write(ts, HID_COMMAND_REG, cmd, sizeof(cmd));
+@@ -870,13 +870,16 @@ static int cyttsp5_probe(struct device *dev, struct regmap *regmap, int irq,
+ ts->input->phys = ts->phys;
+ input_set_drvdata(ts->input, ts);
+
+- /* Reset the gpio to be in a reset state */
++ /* Assert gpio to be in a reset state */
+ ts->reset_gpio = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_HIGH);
+ if (IS_ERR(ts->reset_gpio)) {
+ error = PTR_ERR(ts->reset_gpio);
+ dev_err(dev, "Failed to request reset gpio, error %d\n", error);
+ return error;
+ }
++
++ fsleep(10); /* Ensure long-enough reset pulse (minimum 10us). */
++
+ gpiod_set_value_cansleep(ts->reset_gpio, 0);
+
+ /* Need a delay to have device up */
+diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
+index 58febd1bc772a4..efc6ec25e0c5d1 100644
+--- a/drivers/md/dm-table.c
++++ b/drivers/md/dm-table.c
+@@ -1178,7 +1178,7 @@ static int dm_keyslot_evict(struct blk_crypto_profile *profile,
+
+ t = dm_get_live_table(md, &srcu_idx);
+ if (!t)
+- return 0;
++ goto put_live_table;
+
+ for (unsigned int i = 0; i < t->num_targets; i++) {
+ struct dm_target *ti = dm_table_get_target(t, i);
+@@ -1189,6 +1189,7 @@ static int dm_keyslot_evict(struct blk_crypto_profile *profile,
+ (void *)key);
+ }
+
++put_live_table:
+ dm_put_live_table(md, srcu_idx);
+ return 0;
+ }
+diff --git a/drivers/net/can/m_can/m_can.c b/drivers/net/can/m_can/m_can.c
+index d025d4163fd121..39ad4442cb813a 100644
+--- a/drivers/net/can/m_can/m_can.c
++++ b/drivers/net/can/m_can/m_can.c
+@@ -2379,6 +2379,7 @@ struct m_can_classdev *m_can_class_allocate_dev(struct device *dev,
+ SET_NETDEV_DEV(net_dev, dev);
+
+ m_can_of_parse_mram(class_dev, mram_config_vals);
++ spin_lock_init(&class_dev->tx_handling_spinlock);
+ out:
+ return class_dev;
+ }
+@@ -2463,9 +2464,9 @@ EXPORT_SYMBOL_GPL(m_can_class_register);
+
+ void m_can_class_unregister(struct m_can_classdev *cdev)
+ {
++ unregister_candev(cdev->net);
+ if (cdev->is_peripheral)
+ can_rx_offload_del(&cdev->offload);
+- unregister_candev(cdev->net);
+ }
+ EXPORT_SYMBOL_GPL(m_can_class_unregister);
+
+diff --git a/drivers/net/can/rockchip/rockchip_canfd-core.c b/drivers/net/can/rockchip/rockchip_canfd-core.c
+index 7107a37da36c7f..c3fb3176ce4221 100644
+--- a/drivers/net/can/rockchip/rockchip_canfd-core.c
++++ b/drivers/net/can/rockchip/rockchip_canfd-core.c
+@@ -937,8 +937,8 @@ static void rkcanfd_remove(struct platform_device *pdev)
+ struct rkcanfd_priv *priv = platform_get_drvdata(pdev);
+ struct net_device *ndev = priv->ndev;
+
+- can_rx_offload_del(&priv->offload);
+ rkcanfd_unregister(priv);
++ can_rx_offload_del(&priv->offload);
+ free_candev(ndev);
+ }
+
+diff --git a/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c b/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
+index 3bc56517fe7a99..c30b04f8fc0df8 100644
+--- a/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
++++ b/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
+@@ -75,6 +75,24 @@ static const struct can_bittiming_const mcp251xfd_data_bittiming_const = {
+ .brp_inc = 1,
+ };
+
++/* The datasheet of the mcp2518fd (DS20006027B) specifies a range of
++ * [-64,63] for TDCO, indicating a relative TDCO.
++ *
++ * Manual tests have shown, that using a relative TDCO configuration
++ * results in bus off, while an absolute configuration works.
++ *
++ * For TDCO use the max value (63) from the data sheet, but 0 as the
++ * minimum.
++ */
++static const struct can_tdc_const mcp251xfd_tdc_const = {
++ .tdcv_min = 0,
++ .tdcv_max = 63,
++ .tdco_min = 0,
++ .tdco_max = 63,
++ .tdcf_min = 0,
++ .tdcf_max = 0,
++};
++
+ static const char *__mcp251xfd_get_model_str(enum mcp251xfd_model model)
+ {
+ switch (model) {
+@@ -510,8 +528,7 @@ static int mcp251xfd_set_bittiming(const struct mcp251xfd_priv *priv)
+ {
+ const struct can_bittiming *bt = &priv->can.bittiming;
+ const struct can_bittiming *dbt = &priv->can.data_bittiming;
+- u32 val = 0;
+- s8 tdco;
++ u32 tdcmod, val = 0;
+ int err;
+
+ /* CAN Control Register
+@@ -575,11 +592,16 @@ static int mcp251xfd_set_bittiming(const struct mcp251xfd_priv *priv)
+ return err;
+
+ /* Transmitter Delay Compensation */
+- tdco = clamp_t(int, dbt->brp * (dbt->prop_seg + dbt->phase_seg1),
+- -64, 63);
+- val = FIELD_PREP(MCP251XFD_REG_TDC_TDCMOD_MASK,
+- MCP251XFD_REG_TDC_TDCMOD_AUTO) |
+- FIELD_PREP(MCP251XFD_REG_TDC_TDCO_MASK, tdco);
++ if (priv->can.ctrlmode & CAN_CTRLMODE_TDC_AUTO)
++ tdcmod = MCP251XFD_REG_TDC_TDCMOD_AUTO;
++ else if (priv->can.ctrlmode & CAN_CTRLMODE_TDC_MANUAL)
++ tdcmod = MCP251XFD_REG_TDC_TDCMOD_MANUAL;
++ else
++ tdcmod = MCP251XFD_REG_TDC_TDCMOD_DISABLED;
++
++ val = FIELD_PREP(MCP251XFD_REG_TDC_TDCMOD_MASK, tdcmod) |
++ FIELD_PREP(MCP251XFD_REG_TDC_TDCV_MASK, priv->can.tdc.tdcv) |
++ FIELD_PREP(MCP251XFD_REG_TDC_TDCO_MASK, priv->can.tdc.tdco);
+
+ return regmap_write(priv->map_reg, MCP251XFD_REG_TDC, val);
+ }
+@@ -2083,10 +2105,12 @@ static int mcp251xfd_probe(struct spi_device *spi)
+ priv->can.do_get_berr_counter = mcp251xfd_get_berr_counter;
+ priv->can.bittiming_const = &mcp251xfd_bittiming_const;
+ priv->can.data_bittiming_const = &mcp251xfd_data_bittiming_const;
++ priv->can.tdc_const = &mcp251xfd_tdc_const;
+ priv->can.ctrlmode_supported = CAN_CTRLMODE_LOOPBACK |
+ CAN_CTRLMODE_LISTENONLY | CAN_CTRLMODE_BERR_REPORTING |
+ CAN_CTRLMODE_FD | CAN_CTRLMODE_FD_NON_ISO |
+- CAN_CTRLMODE_CC_LEN8_DLC;
++ CAN_CTRLMODE_CC_LEN8_DLC | CAN_CTRLMODE_TDC_AUTO |
++ CAN_CTRLMODE_TDC_MANUAL;
+ set_bit(MCP251XFD_FLAGS_DOWN, priv->flags);
+ priv->ndev = ndev;
+ priv->spi = spi;
+@@ -2174,8 +2198,8 @@ static void mcp251xfd_remove(struct spi_device *spi)
+ struct mcp251xfd_priv *priv = spi_get_drvdata(spi);
+ struct net_device *ndev = priv->ndev;
+
+- can_rx_offload_del(&priv->offload);
+ mcp251xfd_unregister(priv);
++ can_rx_offload_del(&priv->offload);
+ spi->max_speed_hz = priv->spi_max_speed_hz_orig;
+ free_candev(ndev);
+ }
+diff --git a/drivers/net/dsa/b53/b53_common.c b/drivers/net/dsa/b53/b53_common.c
+index 3b49e87e8ef721..e3b5b450ee932b 100644
+--- a/drivers/net/dsa/b53/b53_common.c
++++ b/drivers/net/dsa/b53/b53_common.c
+@@ -373,15 +373,17 @@ static void b53_enable_vlan(struct b53_device *dev, int port, bool enable,
+ b53_read8(dev, B53_VLAN_PAGE, B53_VLAN_CTRL5, &vc5);
+ }
+
++ vc1 &= ~VC1_RX_MCST_FWD_EN;
++
+ if (enable) {
+ vc0 |= VC0_VLAN_EN | VC0_VID_CHK_EN | VC0_VID_HASH_VID;
+- vc1 |= VC1_RX_MCST_UNTAG_EN | VC1_RX_MCST_FWD_EN;
++ vc1 |= VC1_RX_MCST_UNTAG_EN;
+ vc4 &= ~VC4_ING_VID_CHECK_MASK;
+ if (enable_filtering) {
+ vc4 |= VC4_ING_VID_VIO_DROP << VC4_ING_VID_CHECK_S;
+ vc5 |= VC5_DROP_VTABLE_MISS;
+ } else {
+- vc4 |= VC4_ING_VID_VIO_FWD << VC4_ING_VID_CHECK_S;
++ vc4 |= VC4_NO_ING_VID_CHK << VC4_ING_VID_CHECK_S;
+ vc5 &= ~VC5_DROP_VTABLE_MISS;
+ }
+
+@@ -393,7 +395,7 @@ static void b53_enable_vlan(struct b53_device *dev, int port, bool enable,
+
+ } else {
+ vc0 &= ~(VC0_VLAN_EN | VC0_VID_CHK_EN | VC0_VID_HASH_VID);
+- vc1 &= ~(VC1_RX_MCST_UNTAG_EN | VC1_RX_MCST_FWD_EN);
++ vc1 &= ~VC1_RX_MCST_UNTAG_EN;
+ vc4 &= ~VC4_ING_VID_CHECK_MASK;
+ vc5 &= ~VC5_DROP_VTABLE_MISS;
+
+@@ -576,6 +578,18 @@ static void b53_eee_enable_set(struct dsa_switch *ds, int port, bool enable)
+ b53_write16(dev, B53_EEE_PAGE, B53_EEE_EN_CTRL, reg);
+ }
+
++int b53_setup_port(struct dsa_switch *ds, int port)
++{
++ struct b53_device *dev = ds->priv;
++
++ b53_port_set_ucast_flood(dev, port, true);
++ b53_port_set_mcast_flood(dev, port, true);
++ b53_port_set_learning(dev, port, false);
++
++ return 0;
++}
++EXPORT_SYMBOL(b53_setup_port);
++
+ int b53_enable_port(struct dsa_switch *ds, int port, struct phy_device *phy)
+ {
+ struct b53_device *dev = ds->priv;
+@@ -588,10 +602,6 @@ int b53_enable_port(struct dsa_switch *ds, int port, struct phy_device *phy)
+
+ cpu_port = dsa_to_port(ds, port)->cpu_dp->index;
+
+- b53_port_set_ucast_flood(dev, port, true);
+- b53_port_set_mcast_flood(dev, port, true);
+- b53_port_set_learning(dev, port, false);
+-
+ if (dev->ops->irq_enable)
+ ret = dev->ops->irq_enable(dev, port);
+ if (ret)
+@@ -722,10 +732,6 @@ static void b53_enable_cpu_port(struct b53_device *dev, int port)
+ b53_write8(dev, B53_CTRL_PAGE, B53_PORT_CTRL(port), port_ctrl);
+
+ b53_brcm_hdr_setup(dev->ds, port);
+-
+- b53_port_set_ucast_flood(dev, port, true);
+- b53_port_set_mcast_flood(dev, port, true);
+- b53_port_set_learning(dev, port, false);
+ }
+
+ static void b53_enable_mib(struct b53_device *dev)
+@@ -761,6 +767,22 @@ static bool b53_vlan_port_needs_forced_tagged(struct dsa_switch *ds, int port)
+ return dev->tag_protocol == DSA_TAG_PROTO_NONE && dsa_is_cpu_port(ds, port);
+ }
+
++static bool b53_vlan_port_may_join_untagged(struct dsa_switch *ds, int port)
++{
++ struct b53_device *dev = ds->priv;
++ struct dsa_port *dp;
++
++ if (!dev->vlan_filtering)
++ return true;
++
++ dp = dsa_to_port(ds, port);
++
++ if (dsa_port_is_cpu(dp))
++ return true;
++
++ return dp->bridge == NULL;
++}
++
+ int b53_configure_vlan(struct dsa_switch *ds)
+ {
+ struct b53_device *dev = ds->priv;
+@@ -779,7 +801,7 @@ int b53_configure_vlan(struct dsa_switch *ds)
+ b53_do_vlan_op(dev, VTA_CMD_CLEAR);
+ }
+
+- b53_enable_vlan(dev, -1, dev->vlan_enabled, ds->vlan_filtering);
++ b53_enable_vlan(dev, -1, dev->vlan_enabled, dev->vlan_filtering);
+
+ /* Create an untagged VLAN entry for the default PVID in case
+ * CONFIG_VLAN_8021Q is disabled and there are no calls to
+@@ -787,26 +809,39 @@ int b53_configure_vlan(struct dsa_switch *ds)
+ * entry. Do this only when the tagging protocol is not
+ * DSA_TAG_PROTO_NONE
+ */
++ v = &dev->vlans[def_vid];
+ b53_for_each_port(dev, i) {
+- v = &dev->vlans[def_vid];
+- v->members |= BIT(i);
++ if (!b53_vlan_port_may_join_untagged(ds, i))
++ continue;
++
++ vl.members |= BIT(i);
+ if (!b53_vlan_port_needs_forced_tagged(ds, i))
+- v->untag = v->members;
+- b53_write16(dev, B53_VLAN_PAGE,
+- B53_VLAN_PORT_DEF_TAG(i), def_vid);
++ vl.untag = vl.members;
++ b53_write16(dev, B53_VLAN_PAGE, B53_VLAN_PORT_DEF_TAG(i),
++ def_vid);
+ }
++ b53_set_vlan_entry(dev, def_vid, &vl);
+
+- /* Upon initial call we have not set-up any VLANs, but upon
+- * system resume, we need to restore all VLAN entries.
+- */
+- for (vid = def_vid; vid < dev->num_vlans; vid++) {
+- v = &dev->vlans[vid];
++ if (dev->vlan_filtering) {
++ /* Upon initial call we have not set-up any VLANs, but upon
++ * system resume, we need to restore all VLAN entries.
++ */
++ for (vid = def_vid + 1; vid < dev->num_vlans; vid++) {
++ v = &dev->vlans[vid];
+
+- if (!v->members)
+- continue;
++ if (!v->members)
++ continue;
++
++ b53_set_vlan_entry(dev, vid, v);
++ b53_fast_age_vlan(dev, vid);
++ }
+
+- b53_set_vlan_entry(dev, vid, v);
+- b53_fast_age_vlan(dev, vid);
++ b53_for_each_port(dev, i) {
++ if (!dsa_is_cpu_port(ds, i))
++ b53_write16(dev, B53_VLAN_PAGE,
++ B53_VLAN_PORT_DEF_TAG(i),
++ dev->ports[i].pvid);
++ }
+ }
+
+ return 0;
+@@ -1125,7 +1160,9 @@ EXPORT_SYMBOL(b53_setup_devlink_resources);
+ static int b53_setup(struct dsa_switch *ds)
+ {
+ struct b53_device *dev = ds->priv;
++ struct b53_vlan *vl;
+ unsigned int port;
++ u16 pvid;
+ int ret;
+
+ /* Request bridge PVID untagged when DSA_TAG_PROTO_NONE is set
+@@ -1133,12 +1170,26 @@ static int b53_setup(struct dsa_switch *ds)
+ */
+ ds->untag_bridge_pvid = dev->tag_protocol == DSA_TAG_PROTO_NONE;
+
++ /* The switch does not tell us the original VLAN for untagged
++ * packets, so keep the CPU port always tagged.
++ */
++ ds->untag_vlan_aware_bridge_pvid = true;
++
+ ret = b53_reset_switch(dev);
+ if (ret) {
+ dev_err(ds->dev, "failed to reset switch\n");
+ return ret;
+ }
+
++ /* setup default vlan for filtering mode */
++ pvid = b53_default_pvid(dev);
++ vl = &dev->vlans[pvid];
++ b53_for_each_port(dev, port) {
++ vl->members |= BIT(port);
++ if (!b53_vlan_port_needs_forced_tagged(ds, port))
++ vl->untag |= BIT(port);
++ }
++
+ b53_reset_mib(dev);
+
+ ret = b53_apply_config(dev);
+@@ -1492,7 +1543,10 @@ int b53_vlan_filtering(struct dsa_switch *ds, int port, bool vlan_filtering,
+ {
+ struct b53_device *dev = ds->priv;
+
+- b53_enable_vlan(dev, port, dev->vlan_enabled, vlan_filtering);
++ if (dev->vlan_filtering != vlan_filtering) {
++ dev->vlan_filtering = vlan_filtering;
++ b53_apply_config(dev);
++ }
+
+ return 0;
+ }
+@@ -1517,7 +1571,7 @@ static int b53_vlan_prepare(struct dsa_switch *ds, int port,
+ if (vlan->vid >= dev->num_vlans)
+ return -ERANGE;
+
+- b53_enable_vlan(dev, port, true, ds->vlan_filtering);
++ b53_enable_vlan(dev, port, true, dev->vlan_filtering);
+
+ return 0;
+ }
+@@ -1530,18 +1584,29 @@ int b53_vlan_add(struct dsa_switch *ds, int port,
+ bool untagged = vlan->flags & BRIDGE_VLAN_INFO_UNTAGGED;
+ bool pvid = vlan->flags & BRIDGE_VLAN_INFO_PVID;
+ struct b53_vlan *vl;
++ u16 old_pvid, new_pvid;
+ int err;
+
+ err = b53_vlan_prepare(ds, port, vlan);
+ if (err)
+ return err;
+
+- vl = &dev->vlans[vlan->vid];
++ if (vlan->vid == 0)
++ return 0;
+
+- b53_get_vlan_entry(dev, vlan->vid, vl);
++ old_pvid = dev->ports[port].pvid;
++ if (pvid)
++ new_pvid = vlan->vid;
++ else if (!pvid && vlan->vid == old_pvid)
++ new_pvid = b53_default_pvid(dev);
++ else
++ new_pvid = old_pvid;
++ dev->ports[port].pvid = new_pvid;
++
++ vl = &dev->vlans[vlan->vid];
+
+- if (vlan->vid == 0 && vlan->vid == b53_default_pvid(dev))
+- untagged = true;
++ if (dsa_is_cpu_port(ds, port))
++ untagged = false;
+
+ vl->members |= BIT(port);
+ if (untagged && !b53_vlan_port_needs_forced_tagged(ds, port))
+@@ -1549,13 +1614,16 @@ int b53_vlan_add(struct dsa_switch *ds, int port,
+ else
+ vl->untag &= ~BIT(port);
+
++ if (!dev->vlan_filtering)
++ return 0;
++
+ b53_set_vlan_entry(dev, vlan->vid, vl);
+ b53_fast_age_vlan(dev, vlan->vid);
+
+- if (pvid && !dsa_is_cpu_port(ds, port)) {
++ if (!dsa_is_cpu_port(ds, port) && new_pvid != old_pvid) {
+ b53_write16(dev, B53_VLAN_PAGE, B53_VLAN_PORT_DEF_TAG(port),
+- vlan->vid);
+- b53_fast_age_vlan(dev, vlan->vid);
++ new_pvid);
++ b53_fast_age_vlan(dev, old_pvid);
+ }
+
+ return 0;
+@@ -1570,20 +1638,25 @@ int b53_vlan_del(struct dsa_switch *ds, int port,
+ struct b53_vlan *vl;
+ u16 pvid;
+
+- b53_read16(dev, B53_VLAN_PAGE, B53_VLAN_PORT_DEF_TAG(port), &pvid);
++ if (vlan->vid == 0)
++ return 0;
+
+- vl = &dev->vlans[vlan->vid];
++ pvid = dev->ports[port].pvid;
+
+- b53_get_vlan_entry(dev, vlan->vid, vl);
++ vl = &dev->vlans[vlan->vid];
+
+ vl->members &= ~BIT(port);
+
+ if (pvid == vlan->vid)
+ pvid = b53_default_pvid(dev);
++ dev->ports[port].pvid = pvid;
+
+ if (untagged && !b53_vlan_port_needs_forced_tagged(ds, port))
+ vl->untag &= ~(BIT(port));
+
++ if (!dev->vlan_filtering)
++ return 0;
++
+ b53_set_vlan_entry(dev, vlan->vid, vl);
+ b53_fast_age_vlan(dev, vlan->vid);
+
+@@ -1916,8 +1989,9 @@ int b53_br_join(struct dsa_switch *ds, int port, struct dsa_bridge bridge,
+ bool *tx_fwd_offload, struct netlink_ext_ack *extack)
+ {
+ struct b53_device *dev = ds->priv;
++ struct b53_vlan *vl;
+ s8 cpu_port = dsa_to_port(ds, port)->cpu_dp->index;
+- u16 pvlan, reg;
++ u16 pvlan, reg, pvid;
+ unsigned int i;
+
+ /* On 7278, port 7 which connects to the ASP should only receive
+@@ -1926,15 +2000,29 @@ int b53_br_join(struct dsa_switch *ds, int port, struct dsa_bridge bridge,
+ if (dev->chip_id == BCM7278_DEVICE_ID && port == 7)
+ return -EINVAL;
+
+- /* Make this port leave the all VLANs join since we will have proper
+- * VLAN entries from now on
+- */
+- if (is58xx(dev)) {
+-		b53_read16(dev, B53_VLAN_PAGE, B53_JOIN_ALL_VLAN_EN, &reg);
+- reg &= ~BIT(port);
+- if ((reg & BIT(cpu_port)) == BIT(cpu_port))
+- reg &= ~BIT(cpu_port);
+- b53_write16(dev, B53_VLAN_PAGE, B53_JOIN_ALL_VLAN_EN, reg);
++ pvid = b53_default_pvid(dev);
++ vl = &dev->vlans[pvid];
++
++ if (dev->vlan_filtering) {
++ /* Make this port leave the all VLANs join since we will have
++ * proper VLAN entries from now on
++ */
++ if (is58xx(dev)) {
++ b53_read16(dev, B53_VLAN_PAGE, B53_JOIN_ALL_VLAN_EN,
++				   &reg);
++ reg &= ~BIT(port);
++ if ((reg & BIT(cpu_port)) == BIT(cpu_port))
++ reg &= ~BIT(cpu_port);
++ b53_write16(dev, B53_VLAN_PAGE, B53_JOIN_ALL_VLAN_EN,
++ reg);
++ }
++
++ b53_get_vlan_entry(dev, pvid, vl);
++ vl->members &= ~BIT(port);
++ if (vl->members == BIT(cpu_port))
++ vl->members &= ~BIT(cpu_port);
++ vl->untag = vl->members;
++ b53_set_vlan_entry(dev, pvid, vl);
+ }
+
+ b53_read16(dev, B53_PVLAN_PAGE, B53_PVLAN_PORT_MASK(port), &pvlan);
+@@ -1967,7 +2055,7 @@ EXPORT_SYMBOL(b53_br_join);
+ void b53_br_leave(struct dsa_switch *ds, int port, struct dsa_bridge bridge)
+ {
+ struct b53_device *dev = ds->priv;
+- struct b53_vlan *vl = &dev->vlans[0];
++ struct b53_vlan *vl;
+ s8 cpu_port = dsa_to_port(ds, port)->cpu_dp->index;
+ unsigned int i;
+ u16 pvlan, reg, pvid;
+@@ -1993,15 +2081,18 @@ void b53_br_leave(struct dsa_switch *ds, int port, struct dsa_bridge bridge)
+ dev->ports[port].vlan_ctl_mask = pvlan;
+
+ pvid = b53_default_pvid(dev);
++ vl = &dev->vlans[pvid];
++
++ if (dev->vlan_filtering) {
++ /* Make this port join all VLANs without VLAN entries */
++ if (is58xx(dev)) {
++			b53_read16(dev, B53_VLAN_PAGE, B53_JOIN_ALL_VLAN_EN, &reg);
++ reg |= BIT(port);
++ if (!(reg & BIT(cpu_port)))
++ reg |= BIT(cpu_port);
++ b53_write16(dev, B53_VLAN_PAGE, B53_JOIN_ALL_VLAN_EN, reg);
++ }
+
+- /* Make this port join all VLANs without VLAN entries */
+- if (is58xx(dev)) {
+-		b53_read16(dev, B53_VLAN_PAGE, B53_JOIN_ALL_VLAN_EN, &reg);
+- reg |= BIT(port);
+- if (!(reg & BIT(cpu_port)))
+- reg |= BIT(cpu_port);
+- b53_write16(dev, B53_VLAN_PAGE, B53_JOIN_ALL_VLAN_EN, reg);
+- } else {
+ b53_get_vlan_entry(dev, pvid, vl);
+ vl->members |= BIT(port) | BIT(cpu_port);
+ vl->untag |= BIT(port) | BIT(cpu_port);
+@@ -2300,6 +2391,7 @@ static const struct dsa_switch_ops b53_switch_ops = {
+ .phy_read = b53_phy_read16,
+ .phy_write = b53_phy_write16,
+ .phylink_get_caps = b53_phylink_get_caps,
++ .port_setup = b53_setup_port,
+ .port_enable = b53_enable_port,
+ .port_disable = b53_disable_port,
+ .support_eee = b53_support_eee,
+@@ -2744,6 +2836,7 @@ struct b53_device *b53_switch_alloc(struct device *base,
+ ds->ops = &b53_switch_ops;
+ ds->phylink_mac_ops = &b53_phylink_mac_ops;
+ dev->vlan_enabled = true;
++ dev->vlan_filtering = false;
+ /* Let DSA handle the case were multiple bridges span the same switch
+ * device and different VLAN awareness settings are requested, which
+ * would be breaking filtering semantics for any of the other bridge
+diff --git a/drivers/net/dsa/b53/b53_priv.h b/drivers/net/dsa/b53/b53_priv.h
+index 9e9b5bc0c5d6ab..cc86aa777df561 100644
+--- a/drivers/net/dsa/b53/b53_priv.h
++++ b/drivers/net/dsa/b53/b53_priv.h
+@@ -95,6 +95,7 @@ struct b53_pcs {
+
+ struct b53_port {
+ u16 vlan_ctl_mask;
++ u16 pvid;
+ struct ethtool_keee eee;
+ };
+
+@@ -146,6 +147,7 @@ struct b53_device {
+ unsigned int num_vlans;
+ struct b53_vlan *vlans;
+ bool vlan_enabled;
++ bool vlan_filtering;
+ unsigned int num_ports;
+ struct b53_port *ports;
+
+@@ -380,6 +382,7 @@ enum dsa_tag_protocol b53_get_tag_protocol(struct dsa_switch *ds, int port,
+ enum dsa_tag_protocol mprot);
+ void b53_mirror_del(struct dsa_switch *ds, int port,
+ struct dsa_mall_mirror_tc_entry *mirror);
++int b53_setup_port(struct dsa_switch *ds, int port);
+ int b53_enable_port(struct dsa_switch *ds, int port, struct phy_device *phy);
+ void b53_disable_port(struct dsa_switch *ds, int port);
+ void b53_brcm_hdr_setup(struct dsa_switch *ds, int port);
+diff --git a/drivers/net/dsa/bcm_sf2.c b/drivers/net/dsa/bcm_sf2.c
+index fa2bf3fa90191a..454a8c7fd7eea5 100644
+--- a/drivers/net/dsa/bcm_sf2.c
++++ b/drivers/net/dsa/bcm_sf2.c
+@@ -1230,6 +1230,7 @@ static const struct dsa_switch_ops bcm_sf2_ops = {
+ .resume = bcm_sf2_sw_resume,
+ .get_wol = bcm_sf2_sw_get_wol,
+ .set_wol = bcm_sf2_sw_set_wol,
++ .port_setup = b53_setup_port,
+ .port_enable = bcm_sf2_port_setup,
+ .port_disable = bcm_sf2_port_disable,
+ .support_eee = b53_support_eee,
+diff --git a/drivers/net/ethernet/intel/ice/ice_adapter.c b/drivers/net/ethernet/intel/ice/ice_adapter.c
+index 01a08cfd0090ac..66e070095d1bbe 100644
+--- a/drivers/net/ethernet/intel/ice/ice_adapter.c
++++ b/drivers/net/ethernet/intel/ice/ice_adapter.c
+@@ -1,7 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0-only
+ // SPDX-FileCopyrightText: Copyright Red Hat
+
+-#include <linux/bitfield.h>
+ #include <linux/cleanup.h>
+ #include <linux/mutex.h>
+ #include <linux/pci.h>
+@@ -14,32 +13,16 @@
+ static DEFINE_XARRAY(ice_adapters);
+ static DEFINE_MUTEX(ice_adapters_mutex);
+
+-/* PCI bus number is 8 bits. Slot is 5 bits. Domain can have the rest. */
+-#define INDEX_FIELD_DOMAIN GENMASK(BITS_PER_LONG - 1, 13)
+-#define INDEX_FIELD_DEV GENMASK(31, 16)
+-#define INDEX_FIELD_BUS GENMASK(12, 5)
+-#define INDEX_FIELD_SLOT GENMASK(4, 0)
+-
+-static unsigned long ice_adapter_index(const struct pci_dev *pdev)
++static unsigned long ice_adapter_index(u64 dsn)
+ {
+- unsigned int domain = pci_domain_nr(pdev->bus);
+-
+- WARN_ON(domain > FIELD_MAX(INDEX_FIELD_DOMAIN));
+-
+- switch (pdev->device) {
+- case ICE_DEV_ID_E825C_BACKPLANE:
+- case ICE_DEV_ID_E825C_QSFP:
+- case ICE_DEV_ID_E825C_SFP:
+- case ICE_DEV_ID_E825C_SGMII:
+- return FIELD_PREP(INDEX_FIELD_DEV, pdev->device);
+- default:
+- return FIELD_PREP(INDEX_FIELD_DOMAIN, domain) |
+- FIELD_PREP(INDEX_FIELD_BUS, pdev->bus->number) |
+- FIELD_PREP(INDEX_FIELD_SLOT, PCI_SLOT(pdev->devfn));
+- }
++#if BITS_PER_LONG == 64
++ return dsn;
++#else
++ return (u32)dsn ^ (u32)(dsn >> 32);
++#endif
+ }
+
+-static struct ice_adapter *ice_adapter_new(void)
++static struct ice_adapter *ice_adapter_new(u64 dsn)
+ {
+ struct ice_adapter *adapter;
+
+@@ -47,6 +30,7 @@ static struct ice_adapter *ice_adapter_new(void)
+ if (!adapter)
+ return NULL;
+
++ adapter->device_serial_number = dsn;
+ spin_lock_init(&adapter->ptp_gltsyn_time_lock);
+ refcount_set(&adapter->refcount, 1);
+
+@@ -77,23 +61,26 @@ static void ice_adapter_free(struct ice_adapter *adapter)
+ * Return: Pointer to ice_adapter on success.
+ * ERR_PTR() on error. -ENOMEM is the only possible error.
+ */
+-struct ice_adapter *ice_adapter_get(const struct pci_dev *pdev)
++struct ice_adapter *ice_adapter_get(struct pci_dev *pdev)
+ {
+- unsigned long index = ice_adapter_index(pdev);
++ u64 dsn = pci_get_dsn(pdev);
+ struct ice_adapter *adapter;
++ unsigned long index;
+ int err;
+
++ index = ice_adapter_index(dsn);
+ scoped_guard(mutex, &ice_adapters_mutex) {
+ err = xa_insert(&ice_adapters, index, NULL, GFP_KERNEL);
+ if (err == -EBUSY) {
+ adapter = xa_load(&ice_adapters, index);
+ refcount_inc(&adapter->refcount);
++ WARN_ON_ONCE(adapter->device_serial_number != dsn);
+ return adapter;
+ }
+ if (err)
+ return ERR_PTR(err);
+
+- adapter = ice_adapter_new();
++ adapter = ice_adapter_new(dsn);
+ if (!adapter)
+ return ERR_PTR(-ENOMEM);
+ xa_store(&ice_adapters, index, adapter, GFP_KERNEL);
+@@ -110,11 +97,13 @@ struct ice_adapter *ice_adapter_get(const struct pci_dev *pdev)
+ *
+ * Context: Process, may sleep.
+ */
+-void ice_adapter_put(const struct pci_dev *pdev)
++void ice_adapter_put(struct pci_dev *pdev)
+ {
+- unsigned long index = ice_adapter_index(pdev);
++ u64 dsn = pci_get_dsn(pdev);
+ struct ice_adapter *adapter;
++ unsigned long index;
+
++ index = ice_adapter_index(dsn);
+ scoped_guard(mutex, &ice_adapters_mutex) {
+ adapter = xa_load(&ice_adapters, index);
+ if (WARN_ON(!adapter))
+diff --git a/drivers/net/ethernet/intel/ice/ice_adapter.h b/drivers/net/ethernet/intel/ice/ice_adapter.h
+index e233225848b384..ac15c0d2bc1a47 100644
+--- a/drivers/net/ethernet/intel/ice/ice_adapter.h
++++ b/drivers/net/ethernet/intel/ice/ice_adapter.h
+@@ -32,6 +32,7 @@ struct ice_port_list {
+ * @refcount: Reference count. struct ice_pf objects hold the references.
+ * @ctrl_pf: Control PF of the adapter
+ * @ports: Ports list
++ * @device_serial_number: DSN cached for collision detection on 32bit systems
+ */
+ struct ice_adapter {
+ refcount_t refcount;
+@@ -40,9 +41,10 @@ struct ice_adapter {
+
+ struct ice_pf *ctrl_pf;
+ struct ice_port_list ports;
++ u64 device_serial_number;
+ };
+
+-struct ice_adapter *ice_adapter_get(const struct pci_dev *pdev);
+-void ice_adapter_put(const struct pci_dev *pdev);
++struct ice_adapter *ice_adapter_get(struct pci_dev *pdev);
++void ice_adapter_put(struct pci_dev *pdev);
+
+ #endif /* _ICE_ADAPTER_H */
+diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+index c6d60f1d4f77aa..341def2bf1d354 100644
+--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c
++++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+@@ -3140,11 +3140,19 @@ static int mtk_dma_init(struct mtk_eth *eth)
+ static void mtk_dma_free(struct mtk_eth *eth)
+ {
+ const struct mtk_soc_data *soc = eth->soc;
+- int i;
++ int i, j, txqs = 1;
++
++ if (MTK_HAS_CAPS(eth->soc->caps, MTK_QDMA))
++ txqs = MTK_QDMA_NUM_QUEUES;
++
++ for (i = 0; i < MTK_MAX_DEVS; i++) {
++ if (!eth->netdev[i])
++ continue;
++
++ for (j = 0; j < txqs; j++)
++ netdev_tx_reset_subqueue(eth->netdev[i], j);
++ }
+
+- for (i = 0; i < MTK_MAX_DEVS; i++)
+- if (eth->netdev[i])
+- netdev_reset_queue(eth->netdev[i]);
+ if (!MTK_HAS_CAPS(soc->caps, MTK_SRAM) && eth->scratch_ring) {
+ dma_free_coherent(eth->dma_dev,
+ MTK_QDMA_RING_SIZE * soc->tx.desc_size,
+@@ -3419,9 +3427,6 @@ static int mtk_open(struct net_device *dev)
+ }
+ mtk_gdm_config(eth, target_mac->id, gdm_config);
+ }
+- /* Reset and enable PSE */
+- mtk_w32(eth, RST_GL_PSE, MTK_RST_GL);
+- mtk_w32(eth, 0, MTK_RST_GL);
+
+ 	napi_enable(&eth->tx_napi);
+ 	napi_enable(&eth->rx_napi);
+diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_csr.h b/drivers/net/ethernet/meta/fbnic/fbnic_csr.h
+index 02bb81b3c50632..bf1655edeed2a3 100644
+--- a/drivers/net/ethernet/meta/fbnic/fbnic_csr.h
++++ b/drivers/net/ethernet/meta/fbnic/fbnic_csr.h
+@@ -785,8 +785,10 @@ enum {
+ /* PUL User Registers */
+ #define FBNIC_CSR_START_PUL_USER 0x31000 /* CSR section delimiter */
+ #define FBNIC_PUL_OB_TLP_HDR_AW_CFG 0x3103d /* 0xc40f4 */
++#define FBNIC_PUL_OB_TLP_HDR_AW_CFG_FLUSH CSR_BIT(19)
+ #define FBNIC_PUL_OB_TLP_HDR_AW_CFG_BME CSR_BIT(18)
+ #define FBNIC_PUL_OB_TLP_HDR_AR_CFG 0x3103e /* 0xc40f8 */
++#define FBNIC_PUL_OB_TLP_HDR_AR_CFG_FLUSH CSR_BIT(19)
+ #define FBNIC_PUL_OB_TLP_HDR_AR_CFG_BME CSR_BIT(18)
+ #define FBNIC_CSR_END_PUL_USER 0x31080 /* CSR section delimiter */
+
+diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_fw.c b/drivers/net/ethernet/meta/fbnic/fbnic_fw.c
+index bbc7c1c0c37ef9..9351a874689f83 100644
+--- a/drivers/net/ethernet/meta/fbnic/fbnic_fw.c
++++ b/drivers/net/ethernet/meta/fbnic/fbnic_fw.c
+@@ -17,11 +17,29 @@ static void __fbnic_mbx_wr_desc(struct fbnic_dev *fbd, int mbx_idx,
+ {
+ u32 desc_offset = FBNIC_IPC_MBX(mbx_idx, desc_idx);
+
++ /* Write the upper 32b and then the lower 32b. Doing this the
++ * FW can then read lower, upper, lower to verify that the state
++ * of the descriptor wasn't changed mid-transaction.
++ */
+ fw_wr32(fbd, desc_offset + 1, upper_32_bits(desc));
+ fw_wrfl(fbd);
+ fw_wr32(fbd, desc_offset, lower_32_bits(desc));
+ }
+
++static void __fbnic_mbx_invalidate_desc(struct fbnic_dev *fbd, int mbx_idx,
++ int desc_idx, u32 desc)
++{
++ u32 desc_offset = FBNIC_IPC_MBX(mbx_idx, desc_idx);
++
++ /* For initialization we write the lower 32b of the descriptor first.
++ * This way we can set the state to mark it invalid before we clear the
++ * upper 32b.
++ */
++ fw_wr32(fbd, desc_offset, desc);
++ fw_wrfl(fbd);
++ fw_wr32(fbd, desc_offset + 1, 0);
++}
++
+ static u64 __fbnic_mbx_rd_desc(struct fbnic_dev *fbd, int mbx_idx, int desc_idx)
+ {
+ u32 desc_offset = FBNIC_IPC_MBX(mbx_idx, desc_idx);
+@@ -33,29 +51,41 @@ static u64 __fbnic_mbx_rd_desc(struct fbnic_dev *fbd, int mbx_idx, int desc_idx)
+ return desc;
+ }
+
+-static void fbnic_mbx_init_desc_ring(struct fbnic_dev *fbd, int mbx_idx)
++static void fbnic_mbx_reset_desc_ring(struct fbnic_dev *fbd, int mbx_idx)
+ {
+ int desc_idx;
+
++ /* Disable DMA transactions from the device,
++ * and flush any transactions triggered during cleaning
++ */
++ switch (mbx_idx) {
++ case FBNIC_IPC_MBX_RX_IDX:
++ wr32(fbd, FBNIC_PUL_OB_TLP_HDR_AW_CFG,
++ FBNIC_PUL_OB_TLP_HDR_AW_CFG_FLUSH);
++ break;
++ case FBNIC_IPC_MBX_TX_IDX:
++ wr32(fbd, FBNIC_PUL_OB_TLP_HDR_AR_CFG,
++ FBNIC_PUL_OB_TLP_HDR_AR_CFG_FLUSH);
++ break;
++ }
++
++ wrfl(fbd);
++
+ /* Initialize first descriptor to all 0s. Doing this gives us a
+ * solid stop for the firmware to hit when it is done looping
+ * through the ring.
+ */
+- __fbnic_mbx_wr_desc(fbd, mbx_idx, 0, 0);
+-
+- fw_wrfl(fbd);
++ __fbnic_mbx_invalidate_desc(fbd, mbx_idx, 0, 0);
+
+ /* We then fill the rest of the ring starting at the end and moving
+ * back toward descriptor 0 with skip descriptors that have no
+ * length nor address, and tell the firmware that they can skip
+ * them and just move past them to the one we initialized to 0.
+ */
+- for (desc_idx = FBNIC_IPC_MBX_DESC_LEN; --desc_idx;) {
+- __fbnic_mbx_wr_desc(fbd, mbx_idx, desc_idx,
+- FBNIC_IPC_MBX_DESC_FW_CMPL |
+- FBNIC_IPC_MBX_DESC_HOST_CMPL);
+- fw_wrfl(fbd);
+- }
++ for (desc_idx = FBNIC_IPC_MBX_DESC_LEN; --desc_idx;)
++ __fbnic_mbx_invalidate_desc(fbd, mbx_idx, desc_idx,
++ FBNIC_IPC_MBX_DESC_FW_CMPL |
++ FBNIC_IPC_MBX_DESC_HOST_CMPL);
+ }
+
+ void fbnic_mbx_init(struct fbnic_dev *fbd)
+@@ -76,7 +106,7 @@ void fbnic_mbx_init(struct fbnic_dev *fbd)
+ wr32(fbd, FBNIC_INTR_CLEAR(0), 1u << FBNIC_FW_MSIX_ENTRY);
+
+ for (i = 0; i < FBNIC_IPC_MBX_INDICES; i++)
+- fbnic_mbx_init_desc_ring(fbd, i);
++ fbnic_mbx_reset_desc_ring(fbd, i);
+ }
+
+ static int fbnic_mbx_map_msg(struct fbnic_dev *fbd, int mbx_idx,
+@@ -141,7 +171,7 @@ static void fbnic_mbx_clean_desc_ring(struct fbnic_dev *fbd, int mbx_idx)
+ {
+ int i;
+
+- fbnic_mbx_init_desc_ring(fbd, mbx_idx);
++ fbnic_mbx_reset_desc_ring(fbd, mbx_idx);
+
+ for (i = FBNIC_IPC_MBX_DESC_LEN; i--;)
+ fbnic_mbx_unmap_and_free_msg(fbd, mbx_idx, i);
+@@ -322,67 +352,41 @@ static int fbnic_fw_xmit_simple_msg(struct fbnic_dev *fbd, u32 msg_type)
+ return err;
+ }
+
+-/**
+- * fbnic_fw_xmit_cap_msg - Allocate and populate a FW capabilities message
+- * @fbd: FBNIC device structure
+- *
+- * Return: NULL on failure to allocate, error pointer on error, or pointer
+- * to new TLV test message.
+- *
+- * Sends a single TLV header indicating the host wants the firmware to
+- * confirm the capabilities and version.
+- **/
+-static int fbnic_fw_xmit_cap_msg(struct fbnic_dev *fbd)
+-{
+- int err = fbnic_fw_xmit_simple_msg(fbd, FBNIC_TLV_MSG_ID_HOST_CAP_REQ);
+-
+- /* Return 0 if we are not calling this on ASIC */
+- return (err == -EOPNOTSUPP) ? 0 : err;
+-}
+-
+-static void fbnic_mbx_postinit_desc_ring(struct fbnic_dev *fbd, int mbx_idx)
++static void fbnic_mbx_init_desc_ring(struct fbnic_dev *fbd, int mbx_idx)
+ {
+ struct fbnic_fw_mbx *mbx = &fbd->mbx[mbx_idx];
+
+- /* This is a one time init, so just exit if it is completed */
+- if (mbx->ready)
+- return;
+-
+ mbx->ready = true;
+
+ switch (mbx_idx) {
+ case FBNIC_IPC_MBX_RX_IDX:
++ /* Enable DMA writes from the device */
++ wr32(fbd, FBNIC_PUL_OB_TLP_HDR_AW_CFG,
++ FBNIC_PUL_OB_TLP_HDR_AW_CFG_BME);
++
+ /* Make sure we have a page for the FW to write to */
+ fbnic_mbx_alloc_rx_msgs(fbd);
+ break;
+ case FBNIC_IPC_MBX_TX_IDX:
+- /* Force version to 1 if we successfully requested an update
+- * from the firmware. This should be overwritten once we get
+- * the actual version from the firmware in the capabilities
+- * request message.
+- */
+- if (!fbnic_fw_xmit_cap_msg(fbd) &&
+- !fbd->fw_cap.running.mgmt.version)
+- fbd->fw_cap.running.mgmt.version = 1;
++ /* Enable DMA reads from the device */
++ wr32(fbd, FBNIC_PUL_OB_TLP_HDR_AR_CFG,
++ FBNIC_PUL_OB_TLP_HDR_AR_CFG_BME);
+ break;
+ }
+ }
+
+-static void fbnic_mbx_postinit(struct fbnic_dev *fbd)
++static bool fbnic_mbx_event(struct fbnic_dev *fbd)
+ {
+- int i;
+-
+- /* We only need to do this on the first interrupt following init.
++ /* We only need to do this on the first interrupt following reset.
+ * this primes the mailbox so that we will have cleared all the
+ * skip descriptors.
+ */
+ if (!(rd32(fbd, FBNIC_INTR_STATUS(0)) & (1u << FBNIC_FW_MSIX_ENTRY)))
+- return;
++ return false;
+
+ wr32(fbd, FBNIC_INTR_CLEAR(0), 1u << FBNIC_FW_MSIX_ENTRY);
+
+- for (i = 0; i < FBNIC_IPC_MBX_INDICES; i++)
+- fbnic_mbx_postinit_desc_ring(fbd, i);
++ return true;
+ }
+
+ /**
+@@ -864,7 +868,7 @@ static void fbnic_mbx_process_rx_msgs(struct fbnic_dev *fbd)
+
+ void fbnic_mbx_poll(struct fbnic_dev *fbd)
+ {
+- fbnic_mbx_postinit(fbd);
++ fbnic_mbx_event(fbd);
+
+ fbnic_mbx_process_tx_msgs(fbd);
+ fbnic_mbx_process_rx_msgs(fbd);
+@@ -872,60 +876,97 @@ void fbnic_mbx_poll(struct fbnic_dev *fbd)
+
+ int fbnic_mbx_poll_tx_ready(struct fbnic_dev *fbd)
+ {
+- struct fbnic_fw_mbx *tx_mbx;
+- int attempts = 50;
++ unsigned long timeout = jiffies + 10 * HZ + 1;
++ int err, i;
+
+- /* Immediate fail if BAR4 isn't there */
+- if (!fbnic_fw_present(fbd))
+- return -ENODEV;
++ do {
++ if (!time_is_after_jiffies(timeout))
++ return -ETIMEDOUT;
+
+- tx_mbx = &fbd->mbx[FBNIC_IPC_MBX_TX_IDX];
+- while (!tx_mbx->ready && --attempts) {
+ /* Force the firmware to trigger an interrupt response to
+ * avoid the mailbox getting stuck closed if the interrupt
+ * is reset.
+ */
+- fbnic_mbx_init_desc_ring(fbd, FBNIC_IPC_MBX_TX_IDX);
++ fbnic_mbx_reset_desc_ring(fbd, FBNIC_IPC_MBX_TX_IDX);
+
+- msleep(200);
++ /* Immediate fail if BAR4 went away */
++ if (!fbnic_fw_present(fbd))
++ return -ENODEV;
+
+- fbnic_mbx_poll(fbd);
+- }
++ msleep(20);
++ } while (!fbnic_mbx_event(fbd));
++
++ /* FW has shown signs of life. Enable DMA and start Tx/Rx */
++ for (i = 0; i < FBNIC_IPC_MBX_INDICES; i++)
++ fbnic_mbx_init_desc_ring(fbd, i);
++
++ /* Request an update from the firmware. This should overwrite
++ * mgmt.version once we get the actual version from the firmware
++ * in the capabilities request message.
++ */
++ err = fbnic_fw_xmit_simple_msg(fbd, FBNIC_TLV_MSG_ID_HOST_CAP_REQ);
++ if (err)
++ goto clean_mbx;
++
++ /* Use "1" to indicate we entered the state waiting for a response */
++ fbd->fw_cap.running.mgmt.version = 1;
++
++ return 0;
++clean_mbx:
++ /* Cleanup Rx buffers and disable mailbox */
++ fbnic_mbx_clean(fbd);
++ return err;
++}
++
++static void __fbnic_fw_evict_cmpl(struct fbnic_fw_completion *cmpl_data)
++{
++ cmpl_data->result = -EPIPE;
++ complete(&cmpl_data->done);
++}
+
+- return attempts ? 0 : -ETIMEDOUT;
++static void fbnic_mbx_evict_all_cmpl(struct fbnic_dev *fbd)
++{
++ if (fbd->cmpl_data) {
++ __fbnic_fw_evict_cmpl(fbd->cmpl_data);
++ fbd->cmpl_data = NULL;
++ }
+ }
+
+ void fbnic_mbx_flush_tx(struct fbnic_dev *fbd)
+ {
++ unsigned long timeout = jiffies + 10 * HZ + 1;
+ struct fbnic_fw_mbx *tx_mbx;
+- int attempts = 50;
+- u8 count = 0;
+-
+- /* Nothing to do if there is no mailbox */
+- if (!fbnic_fw_present(fbd))
+- return;
++ u8 tail;
+
+ /* Record current Rx stats */
+ tx_mbx = &fbd->mbx[FBNIC_IPC_MBX_TX_IDX];
+
+- /* Nothing to do if mailbox never got to ready */
+- if (!tx_mbx->ready)
+- return;
++ spin_lock_irq(&fbd->fw_tx_lock);
++
++ /* Clear ready to prevent any further attempts to transmit */
++ tx_mbx->ready = false;
++
++ /* Read tail to determine the last tail state for the ring */
++ tail = tx_mbx->tail;
++
++ /* Flush any completions as we are no longer processing Rx */
++ fbnic_mbx_evict_all_cmpl(fbd);
++
++ spin_unlock_irq(&fbd->fw_tx_lock);
+
+ /* Give firmware time to process packet,
+- * we will wait up to 10 seconds which is 50 waits of 200ms.
++ * we will wait up to 10 seconds which is 500 waits of 20ms.
+ */
+ do {
+ u8 head = tx_mbx->head;
+
+- if (head == tx_mbx->tail)
++ /* Tx ring is empty once head == tail */
++ if (head == tail)
+ break;
+
+- msleep(200);
++ msleep(20);
+ fbnic_mbx_process_tx_msgs(fbd);
+-
+- count += (tx_mbx->head - head) % FBNIC_IPC_MBX_DESC_LEN;
+- } while (count < FBNIC_IPC_MBX_DESC_LEN && --attempts);
++ } while (time_is_after_jiffies(timeout));
+ }
+
+ void fbnic_get_fw_ver_commit_str(struct fbnic_dev *fbd, char *fw_version,
+diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_mac.c b/drivers/net/ethernet/meta/fbnic/fbnic_mac.c
+index 14291401f46321..dde4a37116e20e 100644
+--- a/drivers/net/ethernet/meta/fbnic/fbnic_mac.c
++++ b/drivers/net/ethernet/meta/fbnic/fbnic_mac.c
+@@ -79,12 +79,6 @@ static void fbnic_mac_init_axi(struct fbnic_dev *fbd)
+ fbnic_init_readrq(fbd, FBNIC_QM_RNI_RBP_CTL, cls, readrq);
+ fbnic_init_mps(fbd, FBNIC_QM_RNI_RDE_CTL, cls, mps);
+ fbnic_init_mps(fbd, FBNIC_QM_RNI_RCM_CTL, cls, mps);
+-
+- /* Enable XALI AR/AW outbound */
+- wr32(fbd, FBNIC_PUL_OB_TLP_HDR_AW_CFG,
+- FBNIC_PUL_OB_TLP_HDR_AW_CFG_BME);
+- wr32(fbd, FBNIC_PUL_OB_TLP_HDR_AR_CFG,
+- FBNIC_PUL_OB_TLP_HDR_AR_CFG_BME);
+ }
+
+ static void fbnic_mac_init_qm(struct fbnic_dev *fbd)
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index 3e4896d9537eed..8879af5292b491 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -3359,12 +3359,15 @@ static void __virtnet_rx_resume(struct virtnet_info *vi,
+ bool refill)
+ {
+ bool running = netif_running(vi->dev);
++ bool schedule_refill = false;
+
+ if (refill && !try_fill_recv(vi, rq, GFP_KERNEL))
+- schedule_delayed_work(&vi->refill, 0);
+-
++ schedule_refill = true;
+ if (running)
+ virtnet_napi_enable(rq);
++
++ if (schedule_refill)
++ schedule_delayed_work(&vi->refill, 0);
+ }
+
+ static void virtnet_rx_resume_all(struct virtnet_info *vi)
+@@ -3699,8 +3702,10 @@ static int virtnet_set_queues(struct virtnet_info *vi, u16 queue_pairs)
+ succ:
+ vi->curr_queue_pairs = queue_pairs;
+ /* virtnet_open() will refill when device is going to up. */
+- if (dev->flags & IFF_UP)
++ spin_lock_bh(&vi->refill_lock);
++ if (dev->flags & IFF_UP && vi->refill_enabled)
+ schedule_delayed_work(&vi->refill, 0);
++ spin_unlock_bh(&vi->refill_lock);
+
+ return 0;
+ }
+@@ -5658,6 +5663,10 @@ static void virtnet_get_base_stats(struct net_device *dev,
+
+ if (vi->device_stats_cap & VIRTIO_NET_STATS_TYPE_TX_SPEED)
+ tx->hw_drop_ratelimits = 0;
++
++ netdev_stat_queue_sum(dev,
++ dev->real_num_rx_queues, vi->max_queue_pairs, rx,
++ dev->real_num_tx_queues, vi->max_queue_pairs, tx);
+ }
+
+ static const struct netdev_stat_ops virtnet_stat_ops = {
+@@ -5865,8 +5874,10 @@ static int virtnet_xsk_pool_enable(struct net_device *dev,
+
+ hdr_dma = virtqueue_dma_map_single_attrs(sq->vq, &xsk_hdr, vi->hdr_len,
+ DMA_TO_DEVICE, 0);
+- if (virtqueue_dma_mapping_error(sq->vq, hdr_dma))
+- return -ENOMEM;
++ if (virtqueue_dma_mapping_error(sq->vq, hdr_dma)) {
++ err = -ENOMEM;
++ goto err_free_buffs;
++ }
+
+ err = xsk_pool_dma_map(pool, dma_dev, 0);
+ if (err)
+@@ -5894,6 +5905,8 @@ static int virtnet_xsk_pool_enable(struct net_device *dev,
+ err_xsk_map:
+ virtqueue_dma_unmap_single_attrs(rq->vq, hdr_dma, vi->hdr_len,
+ DMA_TO_DEVICE, 0);
++err_free_buffs:
++ kvfree(rq->xsk_buffs);
+ return err;
+ }
+
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 150de63b26b2cf..a27149e37a9881 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -4492,7 +4492,8 @@ static void nvme_fw_act_work(struct work_struct *work)
+ msleep(100);
+ }
+
+- if (!nvme_change_ctrl_state(ctrl, NVME_CTRL_LIVE))
++ if (!nvme_change_ctrl_state(ctrl, NVME_CTRL_CONNECTING) ||
++ !nvme_change_ctrl_state(ctrl, NVME_CTRL_LIVE))
+ return;
+
+ nvme_unquiesce_io_queues(ctrl);
+diff --git a/drivers/pci/hotplug/s390_pci_hpc.c b/drivers/pci/hotplug/s390_pci_hpc.c
+index 055518ee354dc9..e9e9aaa91770ae 100644
+--- a/drivers/pci/hotplug/s390_pci_hpc.c
++++ b/drivers/pci/hotplug/s390_pci_hpc.c
+@@ -59,7 +59,6 @@ static int disable_slot(struct hotplug_slot *hotplug_slot)
+
+ pdev = pci_get_slot(zdev->zbus->bus, zdev->devfn);
+ if (pdev && pci_num_vf(pdev)) {
+- pci_dev_put(pdev);
+ rc = -EBUSY;
+ goto out;
+ }
+diff --git a/drivers/staging/axis-fifo/axis-fifo.c b/drivers/staging/axis-fifo/axis-fifo.c
+index 7540c20090c78b..351f983ef9149b 100644
+--- a/drivers/staging/axis-fifo/axis-fifo.c
++++ b/drivers/staging/axis-fifo/axis-fifo.c
+@@ -393,16 +393,14 @@ static ssize_t axis_fifo_read(struct file *f, char __user *buf,
+
+ bytes_available = ioread32(fifo->base_addr + XLLF_RLR_OFFSET);
+ if (!bytes_available) {
+- dev_err(fifo->dt_device, "received a packet of length 0 - fifo core will be reset\n");
+- reset_ip_core(fifo);
++ dev_err(fifo->dt_device, "received a packet of length 0\n");
+ ret = -EIO;
+ goto end_unlock;
+ }
+
+ if (bytes_available > len) {
+- dev_err(fifo->dt_device, "user read buffer too small (available bytes=%zu user buffer bytes=%zu) - fifo core will be reset\n",
++ dev_err(fifo->dt_device, "user read buffer too small (available bytes=%zu user buffer bytes=%zu)\n",
+ bytes_available, len);
+- reset_ip_core(fifo);
+ ret = -EINVAL;
+ goto end_unlock;
+ }
+@@ -411,8 +409,7 @@ static ssize_t axis_fifo_read(struct file *f, char __user *buf,
+ /* this probably can't happen unless IP
+ * registers were previously mishandled
+ */
+- dev_err(fifo->dt_device, "received a packet that isn't word-aligned - fifo core will be reset\n");
+- reset_ip_core(fifo);
++ dev_err(fifo->dt_device, "received a packet that isn't word-aligned\n");
+ ret = -EIO;
+ goto end_unlock;
+ }
+@@ -433,7 +430,6 @@ static ssize_t axis_fifo_read(struct file *f, char __user *buf,
+
+ if (copy_to_user(buf + copied * sizeof(u32), tmp_buf,
+ copy * sizeof(u32))) {
+- reset_ip_core(fifo);
+ ret = -EFAULT;
+ goto end_unlock;
+ }
+@@ -542,7 +538,6 @@ static ssize_t axis_fifo_write(struct file *f, const char __user *buf,
+
+ if (copy_from_user(tmp_buf, buf + copied * sizeof(u32),
+ copy * sizeof(u32))) {
+- reset_ip_core(fifo);
+ ret = -EFAULT;
+ goto end_unlock;
+ }
+@@ -775,9 +770,6 @@ static int axis_fifo_parse_dt(struct axis_fifo *fifo)
+ goto end;
+ }
+
+- /* IP sets TDFV to fifo depth - 4 so we will do the same */
+- fifo->tx_fifo_depth -= 4;
+-
+ ret = get_dts_property(fifo, "xlnx,use-rx-data", &fifo->has_rx_fifo);
+ if (ret) {
+ dev_err(fifo->dt_device, "missing xlnx,use-rx-data property\n");
+diff --git a/drivers/staging/iio/adc/ad7816.c b/drivers/staging/iio/adc/ad7816.c
+index 6c14d7bcdd6750..081b17f498638b 100644
+--- a/drivers/staging/iio/adc/ad7816.c
++++ b/drivers/staging/iio/adc/ad7816.c
+@@ -136,7 +136,7 @@ static ssize_t ad7816_store_mode(struct device *dev,
+ struct iio_dev *indio_dev = dev_to_iio_dev(dev);
+ struct ad7816_chip_info *chip = iio_priv(indio_dev);
+
+- if (strcmp(buf, "full")) {
++ if (strcmp(buf, "full") == 0) {
+ gpiod_set_value(chip->rdwr_pin, 1);
+ chip->mode = AD7816_FULL;
+ } else {
+diff --git a/drivers/staging/vc04_services/bcm2835-camera/bcm2835-camera.c b/drivers/staging/vc04_services/bcm2835-camera/bcm2835-camera.c
+index deec33f63bcf82..e6724329356b92 100644
+--- a/drivers/staging/vc04_services/bcm2835-camera/bcm2835-camera.c
++++ b/drivers/staging/vc04_services/bcm2835-camera/bcm2835-camera.c
+@@ -1902,6 +1902,7 @@ static int bcm2835_mmal_probe(struct vchiq_device *device)
+ __func__, ret);
+ goto free_dev;
+ }
++ dev->v4l2_dev.dev = &device->dev;
+
+ /* setup v4l controls */
+ ret = bcm2835_mmal_init_controls(dev, &dev->ctrl_handler);
+diff --git a/drivers/uio/uio_hv_generic.c b/drivers/uio/uio_hv_generic.c
+index 1b19b56474950f..69c1df0f4ca541 100644
+--- a/drivers/uio/uio_hv_generic.c
++++ b/drivers/uio/uio_hv_generic.c
+@@ -131,15 +131,12 @@ static void hv_uio_rescind(struct vmbus_channel *channel)
+ vmbus_device_unregister(channel->device_obj);
+ }
+
+-/* Sysfs API to allow mmap of the ring buffers
++/* Function used for mmap of ring buffer sysfs interface.
+ * The ring buffer is allocated as contiguous memory by vmbus_open
+ */
+-static int hv_uio_ring_mmap(struct file *filp, struct kobject *kobj,
+- const struct bin_attribute *attr,
+- struct vm_area_struct *vma)
++static int
++hv_uio_ring_mmap(struct vmbus_channel *channel, struct vm_area_struct *vma)
+ {
+- struct vmbus_channel *channel
+- = container_of(kobj, struct vmbus_channel, kobj);
+ void *ring_buffer = page_address(channel->ringbuffer_page);
+
+ if (channel->state != CHANNEL_OPENED_STATE)
+@@ -149,15 +146,6 @@ static int hv_uio_ring_mmap(struct file *filp, struct kobject *kobj,
+ channel->ringbuffer_pagecount << PAGE_SHIFT);
+ }
+
+-static const struct bin_attribute ring_buffer_bin_attr = {
+- .attr = {
+- .name = "ring",
+- .mode = 0600,
+- },
+- .size = 2 * SZ_2M,
+- .mmap = hv_uio_ring_mmap,
+-};
+-
+ /* Callback from VMBUS subsystem when new channel created. */
+ static void
+ hv_uio_new_channel(struct vmbus_channel *new_sc)
+@@ -178,8 +166,7 @@ hv_uio_new_channel(struct vmbus_channel *new_sc)
+ /* Disable interrupts on sub channel */
+ new_sc->inbound.ring_buffer->interrupt_mask = 1;
+ set_channel_read_mode(new_sc, HV_CALL_ISR);
+-
+- ret = sysfs_create_bin_file(&new_sc->kobj, &ring_buffer_bin_attr);
++ ret = hv_create_ring_sysfs(new_sc, hv_uio_ring_mmap);
+ if (ret) {
+ dev_err(device, "sysfs create ring bin file failed; %d\n", ret);
+ vmbus_close(new_sc);
+@@ -350,10 +337,18 @@ hv_uio_probe(struct hv_device *dev,
+ goto fail_close;
+ }
+
+- ret = sysfs_create_bin_file(&channel->kobj, &ring_buffer_bin_attr);
+- if (ret)
+- dev_notice(&dev->device,
+- "sysfs create ring bin file failed; %d\n", ret);
++ /*
++ * This internally calls sysfs_update_group, which returns a non-zero value if it executes
++ * before sysfs_create_group. This is expected as the 'ring' will be created later in
++ * vmbus_device_register() -> vmbus_add_channel_kobj(). Thus, no need to check the return
++ * value and print warning.
++	 * value and print a warning.
++ * Creating/exposing sysfs in driver probe is not encouraged as it can lead to race
++ * conditions with userspace. For backward compatibility, "ring" sysfs could not be removed
++ * or decoupled from uio_hv_generic probe. Userspace programs can make use of inotify
++ * APIs to make sure that ring is created.
++ */
++ hv_create_ring_sysfs(channel, hv_uio_ring_mmap);
+
+ hv_set_drvdata(dev, pdata);
+
+@@ -375,7 +370,7 @@ hv_uio_remove(struct hv_device *dev)
+ if (!pdata)
+ return;
+
+- sysfs_remove_bin_file(&dev->channel->kobj, &ring_buffer_bin_attr);
++ hv_remove_ring_sysfs(dev->channel);
+ uio_unregister_device(&pdata->info);
+ hv_uio_cleanup(dev, pdata);
+
+diff --git a/drivers/usb/cdns3/cdnsp-gadget.c b/drivers/usb/cdns3/cdnsp-gadget.c
+index 97edf767ecee90..d471409eb66c22 100644
+--- a/drivers/usb/cdns3/cdnsp-gadget.c
++++ b/drivers/usb/cdns3/cdnsp-gadget.c
+@@ -139,6 +139,26 @@ static void cdnsp_clear_port_change_bit(struct cdnsp_device *pdev,
+ (portsc & PORT_CHANGE_BITS), port_regs);
+ }
+
++static void cdnsp_set_apb_timeout_value(struct cdnsp_device *pdev)
++{
++ struct cdns *cdns = dev_get_drvdata(pdev->dev);
++ __le32 __iomem *reg;
++ void __iomem *base;
++ u32 offset = 0;
++ u32 val;
++
++ if (!cdns->override_apb_timeout)
++ return;
++
++ base = &pdev->cap_regs->hc_capbase;
++ offset = cdnsp_find_next_ext_cap(base, offset, D_XEC_PRE_REGS_CAP);
++ reg = base + offset + REG_CHICKEN_BITS_3_OFFSET;
++
++ val = le32_to_cpu(readl(reg));
++ val = CHICKEN_APB_TIMEOUT_SET(val, cdns->override_apb_timeout);
++ writel(cpu_to_le32(val), reg);
++}
++
+ static void cdnsp_set_chicken_bits_2(struct cdnsp_device *pdev, u32 bit)
+ {
+ __le32 __iomem *reg;
+@@ -1773,6 +1793,8 @@ static void cdnsp_get_rev_cap(struct cdnsp_device *pdev)
+ reg += cdnsp_find_next_ext_cap(reg, 0, RTL_REV_CAP);
+ pdev->rev_cap = reg;
+
++ pdev->rtl_revision = readl(&pdev->rev_cap->rtl_revision);
++
+ dev_info(pdev->dev, "Rev: %08x/%08x, eps: %08x, buff: %08x/%08x\n",
+ readl(&pdev->rev_cap->ctrl_revision),
+ readl(&pdev->rev_cap->rtl_revision),
+@@ -1798,6 +1820,15 @@ static int cdnsp_gen_setup(struct cdnsp_device *pdev)
+ pdev->hci_version = HC_VERSION(pdev->hcc_params);
+ pdev->hcc_params = readl(&pdev->cap_regs->hcc_params);
+
++ /*
++ * Override the APB timeout value to give the controller more time for
++ * enabling UTMI clock and synchronizing APB and UTMI clock domains.
++	 * This fix is platform specific and is required to fix an issue with
++	 * reading an incorrect value from the PORTSC register after resuming
++ * from L1 state.
++ */
++ cdnsp_set_apb_timeout_value(pdev);
++
+ cdnsp_get_rev_cap(pdev);
+
+ /* Make sure the Device Controller is halted. */
+diff --git a/drivers/usb/cdns3/cdnsp-gadget.h b/drivers/usb/cdns3/cdnsp-gadget.h
+index 84887dfea7635b..12534be52f39df 100644
+--- a/drivers/usb/cdns3/cdnsp-gadget.h
++++ b/drivers/usb/cdns3/cdnsp-gadget.h
+@@ -520,6 +520,9 @@ struct cdnsp_rev_cap {
+ #define REG_CHICKEN_BITS_2_OFFSET 0x48
+ #define CHICKEN_XDMA_2_TP_CACHE_DIS BIT(28)
+
++#define REG_CHICKEN_BITS_3_OFFSET 0x4C
++#define CHICKEN_APB_TIMEOUT_SET(p, val) (((p) & ~GENMASK(21, 0)) | (val))
++
+ /* XBUF Extended Capability ID. */
+ #define XBUF_CAP_ID 0xCB
+ #define XBUF_RX_TAG_MASK_0_OFFSET 0x1C
+@@ -1357,6 +1360,7 @@ struct cdnsp_port {
+ * @rev_cap: Controller Capabilities Registers.
+ * @hcs_params1: Cached register copies of read-only HCSPARAMS1
+ * @hcc_params: Cached register copies of read-only HCCPARAMS1
++ * @rtl_revision: Cached controller rtl revision.
+ * @setup: Temporary buffer for setup packet.
+ * @ep0_preq: Internal allocated request used during enumeration.
+ * @ep0_stage: ep0 stage during enumeration process.
+@@ -1411,6 +1415,8 @@ struct cdnsp_device {
+ __u32 hcs_params1;
+ __u32 hcs_params3;
+ __u32 hcc_params;
++ #define RTL_REVISION_NEW_LPM 0x2700
++ __u32 rtl_revision;
+ /* Lock used in interrupt thread context. */
+ spinlock_t lock;
+ struct usb_ctrlrequest setup;
+diff --git a/drivers/usb/cdns3/cdnsp-pci.c b/drivers/usb/cdns3/cdnsp-pci.c
+index a51144504ff337..8c361b8394e959 100644
+--- a/drivers/usb/cdns3/cdnsp-pci.c
++++ b/drivers/usb/cdns3/cdnsp-pci.c
+@@ -28,6 +28,8 @@
+ #define PCI_DRIVER_NAME "cdns-pci-usbssp"
+ #define PLAT_DRIVER_NAME "cdns-usbssp"
+
++#define CHICKEN_APB_TIMEOUT_VALUE 0x1C20
++
+ static struct pci_dev *cdnsp_get_second_fun(struct pci_dev *pdev)
+ {
+ /*
+@@ -139,6 +141,14 @@ static int cdnsp_pci_probe(struct pci_dev *pdev,
+ cdnsp->otg_irq = pdev->irq;
+ }
+
++ /*
++	 * Cadence PCI based platforms require a longer APB timeout
++	 * to fix a domain clock synchronization issue after resuming the
++	 * controller from L1 state.
++ */
++ cdnsp->override_apb_timeout = CHICKEN_APB_TIMEOUT_VALUE;
++ pci_set_drvdata(pdev, cdnsp);
++
+ if (pci_is_enabled(func)) {
+ cdnsp->dev = dev;
+ cdnsp->gadget_init = cdnsp_gadget_init;
+@@ -148,8 +158,6 @@ static int cdnsp_pci_probe(struct pci_dev *pdev,
+ goto free_cdnsp;
+ }
+
+- pci_set_drvdata(pdev, cdnsp);
+-
+ device_wakeup_enable(&pdev->dev);
+ if (pci_dev_run_wake(pdev))
+ pm_runtime_put_noidle(&pdev->dev);
+diff --git a/drivers/usb/cdns3/cdnsp-ring.c b/drivers/usb/cdns3/cdnsp-ring.c
+index 46852529499d16..fd06cb85c4ea84 100644
+--- a/drivers/usb/cdns3/cdnsp-ring.c
++++ b/drivers/usb/cdns3/cdnsp-ring.c
+@@ -308,7 +308,8 @@ static bool cdnsp_ring_ep_doorbell(struct cdnsp_device *pdev,
+
+ writel(db_value, reg_addr);
+
+- cdnsp_force_l0_go(pdev);
++ if (pdev->rtl_revision < RTL_REVISION_NEW_LPM)
++ cdnsp_force_l0_go(pdev);
+
+ /* Doorbell was set. */
+ return true;
+diff --git a/drivers/usb/cdns3/core.h b/drivers/usb/cdns3/core.h
+index 57d47348dc193b..ac30ee21309d02 100644
+--- a/drivers/usb/cdns3/core.h
++++ b/drivers/usb/cdns3/core.h
+@@ -79,6 +79,8 @@ struct cdns3_platform_data {
+ * @pdata: platform data from glue layer
+ * @lock: spinlock structure
+ * @xhci_plat_data: xhci private data structure pointer
++ * @override_apb_timeout: holds the APB timeout value. For value 0 the default
++ * value in CHICKEN_BITS_3 will be preserved.
+ * @gadget_init: pointer to gadget initialization function
+ */
+ struct cdns {
+@@ -117,6 +119,7 @@ struct cdns {
+ struct cdns3_platform_data *pdata;
+ spinlock_t lock;
+ struct xhci_plat_priv *xhci_plat_data;
++ u32 override_apb_timeout;
+
+ int (*gadget_init)(struct cdns *cdns);
+ };
+diff --git a/drivers/usb/class/usbtmc.c b/drivers/usb/class/usbtmc.c
+index 34e46ef308abfd..740d2d2b19fbe0 100644
+--- a/drivers/usb/class/usbtmc.c
++++ b/drivers/usb/class/usbtmc.c
+@@ -482,6 +482,7 @@ static int usbtmc_get_stb(struct usbtmc_file_data *file_data, __u8 *stb)
+ u8 *buffer;
+ u8 tag;
+ int rv;
++ long wait_rv;
+
+ dev_dbg(dev, "Enter ioctl_read_stb iin_ep_present: %d\n",
+ data->iin_ep_present);
+@@ -511,16 +512,17 @@ static int usbtmc_get_stb(struct usbtmc_file_data *file_data, __u8 *stb)
+ }
+
+ if (data->iin_ep_present) {
+- rv = wait_event_interruptible_timeout(
++ wait_rv = wait_event_interruptible_timeout(
+ data->waitq,
+ atomic_read(&data->iin_data_valid) != 0,
+ file_data->timeout);
+- if (rv < 0) {
+- dev_dbg(dev, "wait interrupted %d\n", rv);
++ if (wait_rv < 0) {
++ dev_dbg(dev, "wait interrupted %ld\n", wait_rv);
++ rv = wait_rv;
+ goto exit;
+ }
+
+- if (rv == 0) {
++ if (wait_rv == 0) {
+ dev_dbg(dev, "wait timed out\n");
+ rv = -ETIMEDOUT;
+ goto exit;
+@@ -539,6 +541,8 @@ static int usbtmc_get_stb(struct usbtmc_file_data *file_data, __u8 *stb)
+
+ dev_dbg(dev, "stb:0x%02x received %d\n", (unsigned int)*stb, rv);
+
++ rv = 0;
++
+ exit:
+ /* bump interrupt bTag */
+ data->iin_bTag += 1;
+@@ -602,9 +606,9 @@ static int usbtmc488_ioctl_wait_srq(struct usbtmc_file_data *file_data,
+ {
+ struct usbtmc_device_data *data = file_data->data;
+ struct device *dev = &data->intf->dev;
+- int rv;
+ u32 timeout;
+ unsigned long expire;
++ long wait_rv;
+
+ if (!data->iin_ep_present) {
+ dev_dbg(dev, "no interrupt endpoint present\n");
+@@ -618,25 +622,24 @@ static int usbtmc488_ioctl_wait_srq(struct usbtmc_file_data *file_data,
+
+ mutex_unlock(&data->io_mutex);
+
+- rv = wait_event_interruptible_timeout(
+- data->waitq,
+- atomic_read(&file_data->srq_asserted) != 0 ||
+- atomic_read(&file_data->closing),
+- expire);
++ wait_rv = wait_event_interruptible_timeout(
++ data->waitq,
++ atomic_read(&file_data->srq_asserted) != 0 ||
++ atomic_read(&file_data->closing),
++ expire);
+
+ mutex_lock(&data->io_mutex);
+
+ /* Note! disconnect or close could be called in the meantime */
+ if (atomic_read(&file_data->closing) || data->zombie)
+- rv = -ENODEV;
++ return -ENODEV;
+
+- if (rv < 0) {
+- /* dev can be invalid now! */
+- pr_debug("%s - wait interrupted %d\n", __func__, rv);
+- return rv;
++ if (wait_rv < 0) {
++ dev_dbg(dev, "%s - wait interrupted %ld\n", __func__, wait_rv);
++ return wait_rv;
+ }
+
+- if (rv == 0) {
++ if (wait_rv == 0) {
+ dev_dbg(dev, "%s - wait timed out\n", __func__);
+ return -ETIMEDOUT;
+ }
+@@ -830,6 +833,7 @@ static ssize_t usbtmc_generic_read(struct usbtmc_file_data *file_data,
+ unsigned long expire;
+ int bufcount = 1;
+ int again = 0;
++ long wait_rv;
+
+ /* mutex already locked */
+
+@@ -942,19 +946,24 @@ static ssize_t usbtmc_generic_read(struct usbtmc_file_data *file_data,
+ if (!(flags & USBTMC_FLAG_ASYNC)) {
+ dev_dbg(dev, "%s: before wait time %lu\n",
+ __func__, expire);
+- retval = wait_event_interruptible_timeout(
++ wait_rv = wait_event_interruptible_timeout(
+ file_data->wait_bulk_in,
+ usbtmc_do_transfer(file_data),
+ expire);
+
+- dev_dbg(dev, "%s: wait returned %d\n",
+- __func__, retval);
++ dev_dbg(dev, "%s: wait returned %ld\n",
++ __func__, wait_rv);
++
++ if (wait_rv < 0) {
++ retval = wait_rv;
++ goto error;
++ }
+
+- if (retval <= 0) {
+- if (retval == 0)
+- retval = -ETIMEDOUT;
++ if (wait_rv == 0) {
++ retval = -ETIMEDOUT;
+ goto error;
+ }
++
+ }
+
+ urb = usb_get_from_anchor(&file_data->in_anchor);
+@@ -1380,7 +1389,10 @@ static ssize_t usbtmc_read(struct file *filp, char __user *buf,
+ if (!buffer)
+ return -ENOMEM;
+
+- mutex_lock(&data->io_mutex);
++ retval = mutex_lock_interruptible(&data->io_mutex);
++ if (retval < 0)
++ goto exit_nolock;
++
+ if (data->zombie) {
+ retval = -ENODEV;
+ goto exit;
+@@ -1503,6 +1515,7 @@ static ssize_t usbtmc_read(struct file *filp, char __user *buf,
+
+ exit:
+ mutex_unlock(&data->io_mutex);
++exit_nolock:
+ kfree(buffer);
+ return retval;
+ }
+diff --git a/drivers/usb/dwc3/core.h b/drivers/usb/dwc3/core.h
+index aaa39e663f60a5..27eae4cf223dfd 100644
+--- a/drivers/usb/dwc3/core.h
++++ b/drivers/usb/dwc3/core.h
+@@ -1164,6 +1164,9 @@ struct dwc3_scratchpad_array {
+ * @gsbuscfg0_reqinfo: store GSBUSCFG0.DATRDREQINFO, DESRDREQINFO,
+ * DATWRREQINFO, and DESWRREQINFO value passed from
+ * glue driver.
++ * @wakeup_pending_funcs: Indicates whether any interface has requested
++ *			   function wakeup, in bitmap format where the bit position
++ *			   represents the interface_id.
+ */
+ struct dwc3 {
+ struct work_struct drd_work;
+@@ -1394,6 +1397,7 @@ struct dwc3 {
+ int num_ep_resized;
+ struct dentry *debug_root;
+ u32 gsbuscfg0_reqinfo;
++ u32 wakeup_pending_funcs;
+ };
+
+ #define INCRX_BURST_MODE 0
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index c6761fe89cfaeb..36384a49618e8c 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -276,8 +276,6 @@ int dwc3_send_gadget_generic_command(struct dwc3 *dwc, unsigned int cmd,
+ return ret;
+ }
+
+-static int __dwc3_gadget_wakeup(struct dwc3 *dwc, bool async);
+-
+ /**
+ * dwc3_send_gadget_ep_cmd - issue an endpoint command
+ * @dep: the endpoint to which the command is going to be issued
+@@ -2359,10 +2357,8 @@ static int dwc3_gadget_get_frame(struct usb_gadget *g)
+ return __dwc3_gadget_get_frame(dwc);
+ }
+
+-static int __dwc3_gadget_wakeup(struct dwc3 *dwc, bool async)
++static int __dwc3_gadget_wakeup(struct dwc3 *dwc)
+ {
+- int retries;
+-
+ int ret;
+ u32 reg;
+
+@@ -2390,8 +2386,7 @@ static int __dwc3_gadget_wakeup(struct dwc3 *dwc, bool async)
+ return -EINVAL;
+ }
+
+- if (async)
+- dwc3_gadget_enable_linksts_evts(dwc, true);
++ dwc3_gadget_enable_linksts_evts(dwc, true);
+
+ ret = dwc3_gadget_set_link_state(dwc, DWC3_LINK_STATE_RECOV);
+ if (ret < 0) {
+@@ -2410,27 +2405,8 @@ static int __dwc3_gadget_wakeup(struct dwc3 *dwc, bool async)
+
+ /*
+ * Since link status change events are enabled we will receive
+- * an U0 event when wakeup is successful. So bail out.
++ * an U0 event when wakeup is successful.
+ */
+- if (async)
+- return 0;
+-
+- /* poll until Link State changes to ON */
+- retries = 20000;
+-
+- while (retries--) {
+- reg = dwc3_readl(dwc->regs, DWC3_DSTS);
+-
+- /* in HS, means ON */
+- if (DWC3_DSTS_USBLNKST(reg) == DWC3_LINK_STATE_U0)
+- break;
+- }
+-
+- if (DWC3_DSTS_USBLNKST(reg) != DWC3_LINK_STATE_U0) {
+- dev_err(dwc->dev, "failed to send remote wakeup\n");
+- return -EINVAL;
+- }
+-
+ return 0;
+ }
+
+@@ -2451,7 +2427,7 @@ static int dwc3_gadget_wakeup(struct usb_gadget *g)
+ spin_unlock_irqrestore(&dwc->lock, flags);
+ return -EINVAL;
+ }
+- ret = __dwc3_gadget_wakeup(dwc, true);
++ ret = __dwc3_gadget_wakeup(dwc);
+
+ spin_unlock_irqrestore(&dwc->lock, flags);
+
+@@ -2479,14 +2455,10 @@ static int dwc3_gadget_func_wakeup(struct usb_gadget *g, int intf_id)
+ */
+ link_state = dwc3_gadget_get_link_state(dwc);
+ if (link_state == DWC3_LINK_STATE_U3) {
+- ret = __dwc3_gadget_wakeup(dwc, false);
+- if (ret) {
+- spin_unlock_irqrestore(&dwc->lock, flags);
+- return -EINVAL;
+- }
+- dwc3_resume_gadget(dwc);
+- dwc->suspended = false;
+- dwc->link_state = DWC3_LINK_STATE_U0;
++ dwc->wakeup_pending_funcs |= BIT(intf_id);
++ ret = __dwc3_gadget_wakeup(dwc);
++ spin_unlock_irqrestore(&dwc->lock, flags);
++ return ret;
+ }
+
+ ret = dwc3_send_gadget_generic_command(dwc, DWC3_DGCMD_DEV_NOTIFICATION,
+@@ -4314,6 +4286,8 @@ static void dwc3_gadget_linksts_change_interrupt(struct dwc3 *dwc,
+ {
+ enum dwc3_link_state next = evtinfo & DWC3_LINK_STATE_MASK;
+ unsigned int pwropt;
++ int ret;
++ int intf_id;
+
+ /*
+ * WORKAROUND: DWC3 < 2.50a have an issue when configured without
+@@ -4389,7 +4363,7 @@ static void dwc3_gadget_linksts_change_interrupt(struct dwc3 *dwc,
+
+ switch (next) {
+ case DWC3_LINK_STATE_U0:
+- if (dwc->gadget->wakeup_armed) {
++ if (dwc->gadget->wakeup_armed || dwc->wakeup_pending_funcs) {
+ dwc3_gadget_enable_linksts_evts(dwc, false);
+ dwc3_resume_gadget(dwc);
+ dwc->suspended = false;
+@@ -4412,6 +4386,18 @@ static void dwc3_gadget_linksts_change_interrupt(struct dwc3 *dwc,
+ }
+
+ dwc->link_state = next;
++
++	/* Proceed with func wakeup for any interfaces that have requested it */
++ while (dwc->wakeup_pending_funcs && (next == DWC3_LINK_STATE_U0)) {
++ intf_id = ffs(dwc->wakeup_pending_funcs) - 1;
++ ret = dwc3_send_gadget_generic_command(dwc, DWC3_DGCMD_DEV_NOTIFICATION,
++ DWC3_DGCMDPAR_DN_FUNC_WAKE |
++ DWC3_DGCMDPAR_INTF_SEL(intf_id));
++ if (ret)
++ dev_err(dwc->dev, "Failed to send DN wake for intf %d\n", intf_id);
++
++ dwc->wakeup_pending_funcs &= ~BIT(intf_id);
++ }
+ }
+
+ static void dwc3_gadget_suspend_interrupt(struct dwc3 *dwc,
+diff --git a/drivers/usb/gadget/composite.c b/drivers/usb/gadget/composite.c
+index 869ad99afb48bb..8dbc132a505e39 100644
+--- a/drivers/usb/gadget/composite.c
++++ b/drivers/usb/gadget/composite.c
+@@ -2011,15 +2011,13 @@ composite_setup(struct usb_gadget *gadget, const struct usb_ctrlrequest *ctrl)
+
+ if (f->get_status) {
+ status = f->get_status(f);
++
+ if (status < 0)
+ break;
+- } else {
+- /* Set D0 and D1 bits based on func wakeup capability */
+- if (f->config->bmAttributes & USB_CONFIG_ATT_WAKEUP) {
+- status |= USB_INTRF_STAT_FUNC_RW_CAP;
+- if (f->func_wakeup_armed)
+- status |= USB_INTRF_STAT_FUNC_RW;
+- }
++
++ /* if D5 is not set, then device is not wakeup capable */
++ if (!(f->config->bmAttributes & USB_CONFIG_ATT_WAKEUP))
++ status &= ~(USB_INTRF_STAT_FUNC_RW_CAP | USB_INTRF_STAT_FUNC_RW);
+ }
+
+ put_unaligned_le16(status & 0x0000ffff, req->buf);
+diff --git a/drivers/usb/gadget/function/f_ecm.c b/drivers/usb/gadget/function/f_ecm.c
+index 80841de845b091..027226325039f0 100644
+--- a/drivers/usb/gadget/function/f_ecm.c
++++ b/drivers/usb/gadget/function/f_ecm.c
+@@ -892,6 +892,12 @@ static void ecm_resume(struct usb_function *f)
+ gether_resume(&ecm->port);
+ }
+
++static int ecm_get_status(struct usb_function *f)
++{
++ return (f->func_wakeup_armed ? USB_INTRF_STAT_FUNC_RW : 0) |
++ USB_INTRF_STAT_FUNC_RW_CAP;
++}
++
+ static void ecm_free(struct usb_function *f)
+ {
+ struct f_ecm *ecm;
+@@ -960,6 +966,7 @@ static struct usb_function *ecm_alloc(struct usb_function_instance *fi)
+ ecm->port.func.disable = ecm_disable;
+ ecm->port.func.free_func = ecm_free;
+ ecm->port.func.suspend = ecm_suspend;
++ ecm->port.func.get_status = ecm_get_status;
+ ecm->port.func.resume = ecm_resume;
+
+ return &ecm->port.func;
+diff --git a/drivers/usb/gadget/udc/tegra-xudc.c b/drivers/usb/gadget/udc/tegra-xudc.c
+index c7fdbc55fb0b97..2957316fd3d003 100644
+--- a/drivers/usb/gadget/udc/tegra-xudc.c
++++ b/drivers/usb/gadget/udc/tegra-xudc.c
+@@ -1749,6 +1749,10 @@ static int __tegra_xudc_ep_disable(struct tegra_xudc_ep *ep)
+ val = xudc_readl(xudc, CTRL);
+ val &= ~CTRL_RUN;
+ xudc_writel(xudc, val, CTRL);
++
++ val = xudc_readl(xudc, ST);
++ if (val & ST_RC)
++ xudc_writel(xudc, ST_RC, ST);
+ }
+
+ dev_info(xudc->dev, "ep %u disabled\n", ep->index);
+diff --git a/drivers/usb/host/uhci-platform.c b/drivers/usb/host/uhci-platform.c
+index a7c934404ebc7e..62318291f5664c 100644
+--- a/drivers/usb/host/uhci-platform.c
++++ b/drivers/usb/host/uhci-platform.c
+@@ -121,7 +121,7 @@ static int uhci_hcd_platform_probe(struct platform_device *pdev)
+ }
+
+ /* Get and enable clock if any specified */
+- uhci->clk = devm_clk_get(&pdev->dev, NULL);
++ uhci->clk = devm_clk_get_optional(&pdev->dev, NULL);
+ if (IS_ERR(uhci->clk)) {
+ ret = PTR_ERR(uhci->clk);
+ goto err_rmr;
+diff --git a/drivers/usb/host/xhci-dbgcap.c b/drivers/usb/host/xhci-dbgcap.c
+index fd7895b24367db..0d4ce5734165ed 100644
+--- a/drivers/usb/host/xhci-dbgcap.c
++++ b/drivers/usb/host/xhci-dbgcap.c
+@@ -823,6 +823,7 @@ static enum evtreturn xhci_dbc_do_handle_events(struct xhci_dbc *dbc)
+ {
+ dma_addr_t deq;
+ union xhci_trb *evt;
++ enum evtreturn ret = EVT_DONE;
+ u32 ctrl, portsc;
+ bool update_erdp = false;
+
+@@ -909,6 +910,7 @@ static enum evtreturn xhci_dbc_do_handle_events(struct xhci_dbc *dbc)
+ break;
+ case TRB_TYPE(TRB_TRANSFER):
+ dbc_handle_xfer_event(dbc, evt);
++ ret = EVT_XFER_DONE;
+ break;
+ default:
+ break;
+@@ -927,7 +929,7 @@ static enum evtreturn xhci_dbc_do_handle_events(struct xhci_dbc *dbc)
+ lo_hi_writeq(deq, &dbc->regs->erdp);
+ }
+
+- return EVT_DONE;
++ return ret;
+ }
+
+ static void xhci_dbc_handle_events(struct work_struct *work)
+@@ -936,6 +938,7 @@ static void xhci_dbc_handle_events(struct work_struct *work)
+ struct xhci_dbc *dbc;
+ unsigned long flags;
+ unsigned int poll_interval;
++ unsigned long busypoll_timelimit;
+
+ dbc = container_of(to_delayed_work(work), struct xhci_dbc, event_work);
+ poll_interval = dbc->poll_interval;
+@@ -954,11 +957,21 @@ static void xhci_dbc_handle_events(struct work_struct *work)
+ dbc->driver->disconnect(dbc);
+ break;
+ case EVT_DONE:
+- /* set fast poll rate if there are pending data transfers */
++ /*
++ * Set fast poll rate if there are pending out transfers, or
++ * a transfer was recently processed
++ */
++ busypoll_timelimit = dbc->xfer_timestamp +
++ msecs_to_jiffies(DBC_XFER_INACTIVITY_TIMEOUT);
++
+ if (!list_empty(&dbc->eps[BULK_OUT].list_pending) ||
+- !list_empty(&dbc->eps[BULK_IN].list_pending))
++ time_is_after_jiffies(busypoll_timelimit))
+ poll_interval = 0;
+ break;
++ case EVT_XFER_DONE:
++ dbc->xfer_timestamp = jiffies;
++ poll_interval = 0;
++ break;
+ default:
+ dev_info(dbc->dev, "stop handling dbc events\n");
+ return;
+diff --git a/drivers/usb/host/xhci-dbgcap.h b/drivers/usb/host/xhci-dbgcap.h
+index 9dc8f4d8077cc4..47ac72c2286d9a 100644
+--- a/drivers/usb/host/xhci-dbgcap.h
++++ b/drivers/usb/host/xhci-dbgcap.h
+@@ -96,6 +96,7 @@ struct dbc_ep {
+ #define DBC_WRITE_BUF_SIZE 8192
+ #define DBC_POLL_INTERVAL_DEFAULT 64 /* milliseconds */
+ #define DBC_POLL_INTERVAL_MAX 5000 /* milliseconds */
++#define DBC_XFER_INACTIVITY_TIMEOUT 10 /* milliseconds */
+ /*
+ * Private structure for DbC hardware state:
+ */
+@@ -142,6 +143,7 @@ struct xhci_dbc {
+ enum dbc_state state;
+ struct delayed_work event_work;
+ unsigned int poll_interval; /* ms */
++ unsigned long xfer_timestamp;
+ unsigned resume_required:1;
+ struct dbc_ep eps[2];
+
+@@ -187,6 +189,7 @@ struct dbc_request {
+ enum evtreturn {
+ EVT_ERR = -1,
+ EVT_DONE,
++ EVT_XFER_DONE,
+ EVT_GSER,
+ EVT_DISC,
+ };
+diff --git a/drivers/usb/host/xhci-tegra.c b/drivers/usb/host/xhci-tegra.c
+index 22dc86fb525473..70ec36e4ff5f5f 100644
+--- a/drivers/usb/host/xhci-tegra.c
++++ b/drivers/usb/host/xhci-tegra.c
+@@ -1364,6 +1364,7 @@ static void tegra_xhci_id_work(struct work_struct *work)
+ tegra->otg_usb3_port = tegra_xusb_padctl_get_usb3_companion(tegra->padctl,
+ tegra->otg_usb2_port);
+
++ pm_runtime_get_sync(tegra->dev);
+ if (tegra->host_mode) {
+ /* switch to host mode */
+ if (tegra->otg_usb3_port >= 0) {
+@@ -1393,6 +1394,7 @@ static void tegra_xhci_id_work(struct work_struct *work)
+ }
+
+ tegra_xhci_set_port_power(tegra, true, true);
++ pm_runtime_mark_last_busy(tegra->dev);
+
+ } else {
+ if (tegra->otg_usb3_port >= 0)
+@@ -1400,6 +1402,7 @@ static void tegra_xhci_id_work(struct work_struct *work)
+
+ tegra_xhci_set_port_power(tegra, true, false);
+ }
++ pm_runtime_put_autosuspend(tegra->dev);
+ }
+
+ #if IS_ENABLED(CONFIG_PM) || IS_ENABLED(CONFIG_PM_SLEEP)
+diff --git a/drivers/usb/misc/onboard_usb_dev.c b/drivers/usb/misc/onboard_usb_dev.c
+index 75ac3c6aa92d0d..f5372dfa241a9c 100644
+--- a/drivers/usb/misc/onboard_usb_dev.c
++++ b/drivers/usb/misc/onboard_usb_dev.c
+@@ -569,8 +569,14 @@ static void onboard_dev_usbdev_disconnect(struct usb_device *udev)
+ }
+
+ static const struct usb_device_id onboard_dev_id_table[] = {
+- { USB_DEVICE(VENDOR_ID_CYPRESS, 0x6504) }, /* CYUSB33{0,1,2}x/CYUSB230x 3.0 HUB */
+- { USB_DEVICE(VENDOR_ID_CYPRESS, 0x6506) }, /* CYUSB33{0,1,2}x/CYUSB230x 2.0 HUB */
++ { USB_DEVICE(VENDOR_ID_CYPRESS, 0x6500) }, /* CYUSB330x 3.0 HUB */
++ { USB_DEVICE(VENDOR_ID_CYPRESS, 0x6502) }, /* CYUSB330x 2.0 HUB */
++ { USB_DEVICE(VENDOR_ID_CYPRESS, 0x6503) }, /* CYUSB33{0,1}x 2.0 HUB, Vendor Mode */
++ { USB_DEVICE(VENDOR_ID_CYPRESS, 0x6504) }, /* CYUSB331x 3.0 HUB */
++ { USB_DEVICE(VENDOR_ID_CYPRESS, 0x6506) }, /* CYUSB331x 2.0 HUB */
++ { USB_DEVICE(VENDOR_ID_CYPRESS, 0x6507) }, /* CYUSB332x 2.0 HUB, Vendor Mode */
++ { USB_DEVICE(VENDOR_ID_CYPRESS, 0x6508) }, /* CYUSB332x 3.0 HUB */
++ { USB_DEVICE(VENDOR_ID_CYPRESS, 0x650a) }, /* CYUSB332x 2.0 HUB */
+ { USB_DEVICE(VENDOR_ID_CYPRESS, 0x6570) }, /* CY7C6563x 2.0 HUB */
+ { USB_DEVICE(VENDOR_ID_GENESYS, 0x0608) }, /* Genesys Logic GL850G USB 2.0 HUB */
+ { USB_DEVICE(VENDOR_ID_GENESYS, 0x0610) }, /* Genesys Logic GL852G USB 2.0 HUB */
+diff --git a/drivers/usb/typec/tcpm/tcpm.c b/drivers/usb/typec/tcpm/tcpm.c
+index 62ca4a0ec55bb1..65d2b975447909 100644
+--- a/drivers/usb/typec/tcpm/tcpm.c
++++ b/drivers/usb/typec/tcpm/tcpm.c
+@@ -5965,7 +5965,7 @@ static void _tcpm_cc_change(struct tcpm_port *port, enum typec_cc_status cc1,
+ case SNK_TRY_WAIT_DEBOUNCE:
+ if (!tcpm_port_is_sink(port)) {
+ port->max_wait = 0;
+- tcpm_set_state(port, SRC_TRYWAIT, 0);
++ tcpm_set_state(port, SRC_TRYWAIT, PD_T_PD_DEBOUNCE);
+ }
+ break;
+ case SRC_TRY_WAIT:
+diff --git a/drivers/usb/typec/ucsi/displayport.c b/drivers/usb/typec/ucsi/displayport.c
+index 420af5139c70a3..8aae80b457d74d 100644
+--- a/drivers/usb/typec/ucsi/displayport.c
++++ b/drivers/usb/typec/ucsi/displayport.c
+@@ -54,7 +54,8 @@ static int ucsi_displayport_enter(struct typec_altmode *alt, u32 *vdo)
+ u8 cur = 0;
+ int ret;
+
+- mutex_lock(&dp->con->lock);
++ if (!ucsi_con_mutex_lock(dp->con))
++ return -ENOTCONN;
+
+ if (!dp->override && dp->initialized) {
+ const struct typec_altmode *p = typec_altmode_get_partner(alt);
+@@ -100,7 +101,7 @@ static int ucsi_displayport_enter(struct typec_altmode *alt, u32 *vdo)
+ schedule_work(&dp->work);
+ ret = 0;
+ err_unlock:
+- mutex_unlock(&dp->con->lock);
++ ucsi_con_mutex_unlock(dp->con);
+
+ return ret;
+ }
+@@ -112,7 +113,8 @@ static int ucsi_displayport_exit(struct typec_altmode *alt)
+ u64 command;
+ int ret = 0;
+
+- mutex_lock(&dp->con->lock);
++ if (!ucsi_con_mutex_lock(dp->con))
++ return -ENOTCONN;
+
+ if (!dp->override) {
+ const struct typec_altmode *p = typec_altmode_get_partner(alt);
+@@ -144,7 +146,7 @@ static int ucsi_displayport_exit(struct typec_altmode *alt)
+ schedule_work(&dp->work);
+
+ out_unlock:
+- mutex_unlock(&dp->con->lock);
++ ucsi_con_mutex_unlock(dp->con);
+
+ return ret;
+ }
+@@ -202,20 +204,21 @@ static int ucsi_displayport_vdm(struct typec_altmode *alt,
+ int cmd = PD_VDO_CMD(header);
+ int svdm_version;
+
+- mutex_lock(&dp->con->lock);
++ if (!ucsi_con_mutex_lock(dp->con))
++ return -ENOTCONN;
+
+ if (!dp->override && dp->initialized) {
+ const struct typec_altmode *p = typec_altmode_get_partner(alt);
+
+ dev_warn(&p->dev,
+ "firmware doesn't support alternate mode overriding\n");
+- mutex_unlock(&dp->con->lock);
++ ucsi_con_mutex_unlock(dp->con);
+ return -EOPNOTSUPP;
+ }
+
+ svdm_version = typec_altmode_get_svdm_version(alt);
+ if (svdm_version < 0) {
+- mutex_unlock(&dp->con->lock);
++ ucsi_con_mutex_unlock(dp->con);
+ return svdm_version;
+ }
+
+@@ -259,7 +262,7 @@ static int ucsi_displayport_vdm(struct typec_altmode *alt,
+ break;
+ }
+
+- mutex_unlock(&dp->con->lock);
++ ucsi_con_mutex_unlock(dp->con);
+
+ return 0;
+ }
+@@ -296,6 +299,8 @@ void ucsi_displayport_remove_partner(struct typec_altmode *alt)
+ if (!dp)
+ return;
+
++ cancel_work_sync(&dp->work);
++
+ dp->data.conf = 0;
+ dp->data.status = 0;
+ dp->initialized = false;
+diff --git a/drivers/usb/typec/ucsi/ucsi.c b/drivers/usb/typec/ucsi/ucsi.c
+index e8c7e9dc49309c..01ce858a1a2b34 100644
+--- a/drivers/usb/typec/ucsi/ucsi.c
++++ b/drivers/usb/typec/ucsi/ucsi.c
+@@ -1922,6 +1922,40 @@ void ucsi_set_drvdata(struct ucsi *ucsi, void *data)
+ }
+ EXPORT_SYMBOL_GPL(ucsi_set_drvdata);
+
++/**
++ * ucsi_con_mutex_lock - Acquire the connector mutex
++ * @con: The connector interface to lock
++ *
++ * Returns true on success, false if the connector is disconnected
++ */
++bool ucsi_con_mutex_lock(struct ucsi_connector *con)
++{
++ bool mutex_locked = false;
++ bool connected = true;
++
++ while (connected && !mutex_locked) {
++ mutex_locked = mutex_trylock(&con->lock) != 0;
++ connected = UCSI_CONSTAT(con, CONNECTED);
++ if (connected && !mutex_locked)
++ msleep(20);
++ }
++
++ connected = connected && con->partner;
++ if (!connected && mutex_locked)
++ mutex_unlock(&con->lock);
++
++ return connected;
++}
++
++/**
++ * ucsi_con_mutex_unlock - Release the connector mutex
++ * @con: The connector interface to unlock
++ */
++void ucsi_con_mutex_unlock(struct ucsi_connector *con)
++{
++ mutex_unlock(&con->lock);
++}
++
+ /**
+ * ucsi_create - Allocate UCSI instance
+ * @dev: Device interface to the PPM (Platform Policy Manager)
+diff --git a/drivers/usb/typec/ucsi/ucsi.h b/drivers/usb/typec/ucsi/ucsi.h
+index 892bcf8dbcd50f..99d0d76f738eec 100644
+--- a/drivers/usb/typec/ucsi/ucsi.h
++++ b/drivers/usb/typec/ucsi/ucsi.h
+@@ -94,6 +94,8 @@ int ucsi_register(struct ucsi *ucsi);
+ void ucsi_unregister(struct ucsi *ucsi);
+ void *ucsi_get_drvdata(struct ucsi *ucsi);
+ void ucsi_set_drvdata(struct ucsi *ucsi, void *data);
++bool ucsi_con_mutex_lock(struct ucsi_connector *con);
++void ucsi_con_mutex_unlock(struct ucsi_connector *con);
+
+ void ucsi_connector_change(struct ucsi *ucsi, u8 num);
+
+diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c
+index 586e49efb81be3..a071f42511d3b0 100644
+--- a/drivers/vfio/pci/vfio_pci_core.c
++++ b/drivers/vfio/pci/vfio_pci_core.c
+@@ -1654,14 +1654,14 @@ static vm_fault_t vfio_pci_mmap_huge_fault(struct vm_fault *vmf,
+ {
+ struct vm_area_struct *vma = vmf->vma;
+ struct vfio_pci_core_device *vdev = vma->vm_private_data;
+- unsigned long pfn, pgoff = vmf->pgoff - vma->vm_pgoff;
++ unsigned long addr = vmf->address & ~((PAGE_SIZE << order) - 1);
++ unsigned long pgoff = (addr - vma->vm_start) >> PAGE_SHIFT;
++ unsigned long pfn = vma_to_pfn(vma) + pgoff;
+ vm_fault_t ret = VM_FAULT_SIGBUS;
+
+- pfn = vma_to_pfn(vma) + pgoff;
+-
+- if (order && (pfn & ((1 << order) - 1) ||
+- vmf->address & ((PAGE_SIZE << order) - 1) ||
+- vmf->address + (PAGE_SIZE << order) > vma->vm_end)) {
++ if (order && (addr < vma->vm_start ||
++ addr + (PAGE_SIZE << order) > vma->vm_end ||
++ pfn & ((1 << order) - 1))) {
+ ret = VM_FAULT_FALLBACK;
+ goto out;
+ }
+diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
+index 1f65795cf5d7a2..ef56a2500ed69a 100644
+--- a/drivers/xen/swiotlb-xen.c
++++ b/drivers/xen/swiotlb-xen.c
+@@ -217,6 +217,7 @@ static dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
+ * buffering it.
+ */
+ if (dma_capable(dev, dev_addr, size, true) &&
++ !dma_kmalloc_needs_bounce(dev, size, dir) &&
+ !range_straddles_page_boundary(phys, size) &&
+ !xen_arch_need_swiotlb(dev, phys, dev_addr) &&
+ !is_swiotlb_force_bounce(dev))
+diff --git a/drivers/xen/xenbus/xenbus.h b/drivers/xen/xenbus/xenbus.h
+index 13821e7e825efb..9ac0427724a301 100644
+--- a/drivers/xen/xenbus/xenbus.h
++++ b/drivers/xen/xenbus/xenbus.h
+@@ -77,6 +77,7 @@ enum xb_req_state {
+ struct xb_req_data {
+ struct list_head list;
+ wait_queue_head_t wq;
++ struct kref kref;
+ struct xsd_sockmsg msg;
+ uint32_t caller_req_id;
+ enum xsd_sockmsg_type type;
+@@ -103,6 +104,7 @@ int xb_init_comms(void);
+ void xb_deinit_comms(void);
+ int xs_watch_msg(struct xs_watch_event *event);
+ void xs_request_exit(struct xb_req_data *req);
++void xs_free_req(struct kref *kref);
+
+ int xenbus_match(struct device *_dev, const struct device_driver *_drv);
+ int xenbus_dev_probe(struct device *_dev);
+diff --git a/drivers/xen/xenbus/xenbus_comms.c b/drivers/xen/xenbus/xenbus_comms.c
+index e5fda0256feb3d..82df2da1b880b8 100644
+--- a/drivers/xen/xenbus/xenbus_comms.c
++++ b/drivers/xen/xenbus/xenbus_comms.c
+@@ -309,8 +309,8 @@ static int process_msg(void)
+ virt_wmb();
+ req->state = xb_req_state_got_reply;
+ req->cb(req);
+- } else
+- kfree(req);
++ }
++ kref_put(&req->kref, xs_free_req);
+ }
+
+ mutex_unlock(&xs_response_mutex);
+@@ -386,14 +386,13 @@ static int process_writes(void)
+ state.req->msg.type = XS_ERROR;
+ state.req->err = err;
+ list_del(&state.req->list);
+- if (state.req->state == xb_req_state_aborted)
+- kfree(state.req);
+- else {
++ if (state.req->state != xb_req_state_aborted) {
+ /* write err, then update state */
+ virt_wmb();
+ state.req->state = xb_req_state_got_reply;
+ wake_up(&state.req->wq);
+ }
++ kref_put(&state.req->kref, xs_free_req);
+
+ mutex_unlock(&xb_write_mutex);
+
+diff --git a/drivers/xen/xenbus/xenbus_dev_frontend.c b/drivers/xen/xenbus/xenbus_dev_frontend.c
+index 46f8916597e53d..f5c21ba64df571 100644
+--- a/drivers/xen/xenbus/xenbus_dev_frontend.c
++++ b/drivers/xen/xenbus/xenbus_dev_frontend.c
+@@ -406,7 +406,7 @@ void xenbus_dev_queue_reply(struct xb_req_data *req)
+ mutex_unlock(&u->reply_mutex);
+
+ kfree(req->body);
+- kfree(req);
++ kref_put(&req->kref, xs_free_req);
+
+ kref_put(&u->kref, xenbus_file_free);
+
+diff --git a/drivers/xen/xenbus/xenbus_xs.c b/drivers/xen/xenbus/xenbus_xs.c
+index d32c726f7a12d0..dcf9182c8451ad 100644
+--- a/drivers/xen/xenbus/xenbus_xs.c
++++ b/drivers/xen/xenbus/xenbus_xs.c
+@@ -112,6 +112,12 @@ static void xs_suspend_exit(void)
+ wake_up_all(&xs_state_enter_wq);
+ }
+
++void xs_free_req(struct kref *kref)
++{
++ struct xb_req_data *req = container_of(kref, struct xb_req_data, kref);
++ kfree(req);
++}
++
+ static uint32_t xs_request_enter(struct xb_req_data *req)
+ {
+ uint32_t rq_id;
+@@ -237,6 +243,12 @@ static void xs_send(struct xb_req_data *req, struct xsd_sockmsg *msg)
+ req->caller_req_id = req->msg.req_id;
+ req->msg.req_id = xs_request_enter(req);
+
++ /*
++ * Take 2nd ref. One for this thread, and the second for the
++ * xenbus_thread.
++ */
++ kref_get(&req->kref);
++
+ mutex_lock(&xb_write_mutex);
+ list_add_tail(&req->list, &xb_write_list);
+ notify = list_is_singular(&xb_write_list);
+@@ -261,8 +273,8 @@ static void *xs_wait_for_reply(struct xb_req_data *req, struct xsd_sockmsg *msg)
+ if (req->state == xb_req_state_queued ||
+ req->state == xb_req_state_wait_reply)
+ req->state = xb_req_state_aborted;
+- else
+- kfree(req);
++
++ kref_put(&req->kref, xs_free_req);
+ mutex_unlock(&xb_write_mutex);
+
+ return ret;
+@@ -291,6 +303,7 @@ int xenbus_dev_request_and_reply(struct xsd_sockmsg *msg, void *par)
+ req->cb = xenbus_dev_queue_reply;
+ req->par = par;
+ req->user_req = true;
++ kref_init(&req->kref);
+
+ xs_send(req, msg);
+
+@@ -319,6 +332,7 @@ static void *xs_talkv(struct xenbus_transaction t,
+ req->num_vecs = num_vecs;
+ req->cb = xs_wake_up;
+ req->user_req = false;
++ kref_init(&req->kref);
+
+ msg.req_id = 0;
+ msg.tx_id = t.id;
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index 3f8afbd1ebb552..b80110fe30fe8c 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -733,82 +733,6 @@ const u8 *btrfs_sb_fsid_ptr(const struct btrfs_super_block *sb)
+ return has_metadata_uuid ? sb->metadata_uuid : sb->fsid;
+ }
+
+-/*
+- * We can have very weird soft links passed in.
+- * One example is "/proc/self/fd/<fd>", which can be a soft link to
+- * a block device.
+- *
+- * But it's never a good idea to use those weird names.
+- * Here we check if the path (not following symlinks) is a good one inside
+- * "/dev/".
+- */
+-static bool is_good_dev_path(const char *dev_path)
+-{
+- struct path path = { .mnt = NULL, .dentry = NULL };
+- char *path_buf = NULL;
+- char *resolved_path;
+- bool is_good = false;
+- int ret;
+-
+- if (!dev_path)
+- goto out;
+-
+- path_buf = kmalloc(PATH_MAX, GFP_KERNEL);
+- if (!path_buf)
+- goto out;
+-
+- /*
+- * Do not follow soft link, just check if the original path is inside
+- * "/dev/".
+- */
+- ret = kern_path(dev_path, 0, &path);
+- if (ret)
+- goto out;
+- resolved_path = d_path(&path, path_buf, PATH_MAX);
+- if (IS_ERR(resolved_path))
+- goto out;
+- if (strncmp(resolved_path, "/dev/", strlen("/dev/")))
+- goto out;
+- is_good = true;
+-out:
+- kfree(path_buf);
+- path_put(&path);
+- return is_good;
+-}
+-
+-static int get_canonical_dev_path(const char *dev_path, char *canonical)
+-{
+- struct path path = { .mnt = NULL, .dentry = NULL };
+- char *path_buf = NULL;
+- char *resolved_path;
+- int ret;
+-
+- if (!dev_path) {
+- ret = -EINVAL;
+- goto out;
+- }
+-
+- path_buf = kmalloc(PATH_MAX, GFP_KERNEL);
+- if (!path_buf) {
+- ret = -ENOMEM;
+- goto out;
+- }
+-
+- ret = kern_path(dev_path, LOOKUP_FOLLOW, &path);
+- if (ret)
+- goto out;
+- resolved_path = d_path(&path, path_buf, PATH_MAX);
+- if (IS_ERR(resolved_path)) {
+- ret = PTR_ERR(resolved_path);
+- goto out;
+- }
+- ret = strscpy(canonical, resolved_path, PATH_MAX);
+-out:
+- kfree(path_buf);
+- path_put(&path);
+- return ret;
+-}
+-
+ static bool is_same_device(struct btrfs_device *device, const char *new_path)
+ {
+ struct path old = { .mnt = NULL, .dentry = NULL };
+@@ -1513,23 +1437,12 @@ struct btrfs_device *btrfs_scan_one_device(const char *path, blk_mode_t flags,
+ bool new_device_added = false;
+ struct btrfs_device *device = NULL;
+ struct file *bdev_file;
+- char *canonical_path = NULL;
+ u64 bytenr;
+ dev_t devt;
+ int ret;
+
+ lockdep_assert_held(&uuid_mutex);
+
+- if (!is_good_dev_path(path)) {
+- canonical_path = kmalloc(PATH_MAX, GFP_KERNEL);
+- if (canonical_path) {
+- ret = get_canonical_dev_path(path, canonical_path);
+- if (ret < 0) {
+- kfree(canonical_path);
+- canonical_path = NULL;
+- }
+- }
+- }
+ /*
+ * Avoid an exclusive open here, as the systemd-udev may initiate the
+ * device scan which may race with the user's mount or mkfs command,
+@@ -1574,8 +1487,7 @@ struct btrfs_device *btrfs_scan_one_device(const char *path, blk_mode_t flags,
+ goto free_disk_super;
+ }
+
+- device = device_list_add(canonical_path ? : path, disk_super,
+- &new_device_added);
++ device = device_list_add(path, disk_super, &new_device_added);
+ if (!IS_ERR(device) && new_device_added)
+ btrfs_free_stale_devices(device->devt, device);
+
+@@ -1584,7 +1496,6 @@ struct btrfs_device *btrfs_scan_one_device(const char *path, blk_mode_t flags,
+
+ error_bdev_put:
+ fput(bdev_file);
+- kfree(canonical_path);
+
+ return device;
+ }
+diff --git a/fs/erofs/fileio.c b/fs/erofs/fileio.c
+index abb9c6d3b1aa2a..f4bf1b5e3f5b83 100644
+--- a/fs/erofs/fileio.c
++++ b/fs/erofs/fileio.c
+@@ -150,10 +150,10 @@ static int erofs_fileio_scan_folio(struct erofs_fileio *io, struct folio *folio)
+ io->rq->bio.bi_iter.bi_sector = io->dev.m_pa >> 9;
+ attached = 0;
+ }
+- if (!attached++)
+- erofs_onlinefolio_split(folio);
+ if (!bio_add_folio(&io->rq->bio, folio, len, cur))
+ goto io_retry;
++ if (!attached++)
++ erofs_onlinefolio_split(folio);
+ io->dev.m_pa += len;
+ }
+ cur += len;
+diff --git a/fs/erofs/zdata.c b/fs/erofs/zdata.c
+index d771e06db73868..67acef591646c8 100644
+--- a/fs/erofs/zdata.c
++++ b/fs/erofs/zdata.c
+@@ -76,9 +76,6 @@ struct z_erofs_pcluster {
+ /* L: whether partial decompression or not */
+ bool partial;
+
+- /* L: indicate several pageofs_outs or not */
+- bool multibases;
+-
+ /* L: whether extra buffer allocations are best-effort */
+ bool besteffort;
+
+@@ -1050,8 +1047,6 @@ static int z_erofs_scan_folio(struct z_erofs_frontend *f,
+ break;
+
+ erofs_onlinefolio_split(folio);
+- if (f->pcl->pageofs_out != (map->m_la & ~PAGE_MASK))
+- f->pcl->multibases = true;
+ if (f->pcl->length < offset + end - map->m_la) {
+ f->pcl->length = offset + end - map->m_la;
+ f->pcl->pageofs_out = map->m_la & ~PAGE_MASK;
+@@ -1097,7 +1092,6 @@ struct z_erofs_backend {
+ struct page *onstack_pages[Z_EROFS_ONSTACK_PAGES];
+ struct super_block *sb;
+ struct z_erofs_pcluster *pcl;
+-
+ /* pages with the longest decompressed length for deduplication */
+ struct page **decompressed_pages;
+ /* pages to keep the compressed data */
+@@ -1106,6 +1100,8 @@ struct z_erofs_backend {
+ struct list_head decompressed_secondary_bvecs;
+ struct page **pagepool;
+ unsigned int onstack_used, nr_pages;
++ /* indicate if temporary copies should be preserved for later use */
++ bool keepxcpy;
+ };
+
+ struct z_erofs_bvec_item {
+@@ -1116,18 +1112,20 @@ struct z_erofs_bvec_item {
+ static void z_erofs_do_decompressed_bvec(struct z_erofs_backend *be,
+ struct z_erofs_bvec *bvec)
+ {
++ int poff = bvec->offset + be->pcl->pageofs_out;
+ struct z_erofs_bvec_item *item;
+- unsigned int pgnr;
+-
+- if (!((bvec->offset + be->pcl->pageofs_out) & ~PAGE_MASK) &&
+- (bvec->end == PAGE_SIZE ||
+- bvec->offset + bvec->end == be->pcl->length)) {
+- pgnr = (bvec->offset + be->pcl->pageofs_out) >> PAGE_SHIFT;
+- DBG_BUGON(pgnr >= be->nr_pages);
+- if (!be->decompressed_pages[pgnr]) {
+- be->decompressed_pages[pgnr] = bvec->page;
++ struct page **page;
++
++ if (!(poff & ~PAGE_MASK) && (bvec->end == PAGE_SIZE ||
++ bvec->offset + bvec->end == be->pcl->length)) {
++ DBG_BUGON((poff >> PAGE_SHIFT) >= be->nr_pages);
++ page = be->decompressed_pages + (poff >> PAGE_SHIFT);
++ if (!*page) {
++ *page = bvec->page;
+ return;
+ }
++ } else {
++ be->keepxcpy = true;
+ }
+
+ /* (cold path) one pcluster is requested multiple times */
+@@ -1291,7 +1289,7 @@ static int z_erofs_decompress_pcluster(struct z_erofs_backend *be, int err)
+ .alg = pcl->algorithmformat,
+ .inplace_io = overlapped,
+ .partial_decoding = pcl->partial,
+- .fillgaps = pcl->multibases,
++ .fillgaps = be->keepxcpy,
+ .gfp = pcl->besteffort ? GFP_KERNEL :
+ GFP_NOWAIT | __GFP_NORETRY
+ }, be->pagepool);
+@@ -1348,7 +1346,6 @@ static int z_erofs_decompress_pcluster(struct z_erofs_backend *be, int err)
+
+ pcl->length = 0;
+ pcl->partial = true;
+- pcl->multibases = false;
+ pcl->besteffort = false;
+ pcl->bvset.nextpage = NULL;
+ pcl->vcnt = 0;
+diff --git a/fs/namespace.c b/fs/namespace.c
+index 280a6ebc46d930..5b84e29613fe4d 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -778,7 +778,7 @@ int __legitimize_mnt(struct vfsmount *bastard, unsigned seq)
+ return 0;
+ mnt = real_mount(bastard);
+ mnt_add_count(mnt, 1);
+- smp_mb(); // see mntput_no_expire()
++ smp_mb(); // see mntput_no_expire() and do_umount()
+ if (likely(!read_seqretry(&mount_lock, seq)))
+ return 0;
+ if (bastard->mnt_flags & MNT_SYNC_UMOUNT) {
+@@ -1956,6 +1956,7 @@ static int do_umount(struct mount *mnt, int flags)
+ umount_tree(mnt, UMOUNT_PROPAGATE);
+ retval = 0;
+ } else {
++ smp_mb(); // paired with __legitimize_mnt()
+ shrink_submounts(mnt);
+ retval = -EBUSY;
+ if (!propagate_mount_busy(mnt, 2)) {
+diff --git a/fs/ocfs2/alloc.c b/fs/ocfs2/alloc.c
+index b8ac85b548c7e5..821cb7874685e1 100644
+--- a/fs/ocfs2/alloc.c
++++ b/fs/ocfs2/alloc.c
+@@ -6918,6 +6918,7 @@ static int ocfs2_grab_folios(struct inode *inode, loff_t start, loff_t end,
+ if (IS_ERR(folios[numfolios])) {
+ ret = PTR_ERR(folios[numfolios]);
+ mlog_errno(ret);
++ folios[numfolios] = NULL;
+ goto out;
+ }
+
+diff --git a/fs/ocfs2/journal.c b/fs/ocfs2/journal.c
+index f1b4b3e611cb9b..f37831d5f95a19 100644
+--- a/fs/ocfs2/journal.c
++++ b/fs/ocfs2/journal.c
+@@ -174,7 +174,7 @@ int ocfs2_recovery_init(struct ocfs2_super *osb)
+ struct ocfs2_recovery_map *rm;
+
+ mutex_init(&osb->recovery_lock);
+- osb->disable_recovery = 0;
++ osb->recovery_state = OCFS2_REC_ENABLED;
+ osb->recovery_thread_task = NULL;
+ init_waitqueue_head(&osb->recovery_event);
+
+@@ -190,31 +190,53 @@ int ocfs2_recovery_init(struct ocfs2_super *osb)
+ return 0;
+ }
+
+-/* we can't grab the goofy sem lock from inside wait_event, so we use
+- * memory barriers to make sure that we'll see the null task before
+- * being woken up */
+ static int ocfs2_recovery_thread_running(struct ocfs2_super *osb)
+ {
+- mb();
+ return osb->recovery_thread_task != NULL;
+ }
+
+-void ocfs2_recovery_exit(struct ocfs2_super *osb)
++static void ocfs2_recovery_disable(struct ocfs2_super *osb,
++ enum ocfs2_recovery_state state)
+ {
+- struct ocfs2_recovery_map *rm;
+-
+- /* disable any new recovery threads and wait for any currently
+- * running ones to exit. Do this before setting the vol_state. */
+ mutex_lock(&osb->recovery_lock);
+- osb->disable_recovery = 1;
++ /*
++ * If recovery thread is not running, we can directly transition to
++ * final state.
++ */
++ if (!ocfs2_recovery_thread_running(osb)) {
++ osb->recovery_state = state + 1;
++ goto out_lock;
++ }
++ osb->recovery_state = state;
++ /* Wait for recovery thread to acknowledge state transition */
++ wait_event_cmd(osb->recovery_event,
++ !ocfs2_recovery_thread_running(osb) ||
++ osb->recovery_state >= state + 1,
++ mutex_unlock(&osb->recovery_lock),
++ mutex_lock(&osb->recovery_lock));
++out_lock:
+ mutex_unlock(&osb->recovery_lock);
+- wait_event(osb->recovery_event, !ocfs2_recovery_thread_running(osb));
+
+- /* At this point, we know that no more recovery threads can be
+- * launched, so wait for any recovery completion work to
+- * complete. */
++ /*
++ * At this point we know that no more recovery work can be queued so
++ * wait for any recovery completion work to complete.
++ */
+ if (osb->ocfs2_wq)
+ flush_workqueue(osb->ocfs2_wq);
++}
++
++void ocfs2_recovery_disable_quota(struct ocfs2_super *osb)
++{
++ ocfs2_recovery_disable(osb, OCFS2_REC_QUOTA_WANT_DISABLE);
++}
++
++void ocfs2_recovery_exit(struct ocfs2_super *osb)
++{
++ struct ocfs2_recovery_map *rm;
++
++ /* disable any new recovery threads and wait for any currently
++ * running ones to exit. Do this before setting the vol_state. */
++ ocfs2_recovery_disable(osb, OCFS2_REC_WANT_DISABLE);
+
+ /*
+ * Now that recovery is shut down, and the osb is about to be
+@@ -1472,6 +1494,18 @@ static int __ocfs2_recovery_thread(void *arg)
+ }
+ }
+ restart:
++ if (quota_enabled) {
++ mutex_lock(&osb->recovery_lock);
++ /* Confirm that recovery thread will no longer recover quotas */
++ if (osb->recovery_state == OCFS2_REC_QUOTA_WANT_DISABLE) {
++ osb->recovery_state = OCFS2_REC_QUOTA_DISABLED;
++ wake_up(&osb->recovery_event);
++ }
++ if (osb->recovery_state >= OCFS2_REC_QUOTA_DISABLED)
++ quota_enabled = 0;
++ mutex_unlock(&osb->recovery_lock);
++ }
++
+ status = ocfs2_super_lock(osb, 1);
+ if (status < 0) {
+ mlog_errno(status);
+@@ -1569,27 +1603,29 @@ static int __ocfs2_recovery_thread(void *arg)
+
+ ocfs2_free_replay_slots(osb);
+ osb->recovery_thread_task = NULL;
+- mb(); /* sync with ocfs2_recovery_thread_running */
++ if (osb->recovery_state == OCFS2_REC_WANT_DISABLE)
++ osb->recovery_state = OCFS2_REC_DISABLED;
+ wake_up(&osb->recovery_event);
+
+ mutex_unlock(&osb->recovery_lock);
+
+- if (quota_enabled)
+- kfree(rm_quota);
++ kfree(rm_quota);
+
+ return status;
+ }
+
+ void ocfs2_recovery_thread(struct ocfs2_super *osb, int node_num)
+ {
++ int was_set = -1;
++
+ mutex_lock(&osb->recovery_lock);
++ if (osb->recovery_state < OCFS2_REC_WANT_DISABLE)
++ was_set = ocfs2_recovery_map_set(osb, node_num);
+
+ trace_ocfs2_recovery_thread(node_num, osb->node_num,
+- osb->disable_recovery, osb->recovery_thread_task,
+- osb->disable_recovery ?
+- -1 : ocfs2_recovery_map_set(osb, node_num));
++ osb->recovery_state, osb->recovery_thread_task, was_set);
+
+- if (osb->disable_recovery)
++ if (osb->recovery_state >= OCFS2_REC_WANT_DISABLE)
+ goto out;
+
+ if (osb->recovery_thread_task)
+diff --git a/fs/ocfs2/journal.h b/fs/ocfs2/journal.h
+index e3c3a35dc5e0e7..6397170f302f22 100644
+--- a/fs/ocfs2/journal.h
++++ b/fs/ocfs2/journal.h
+@@ -148,6 +148,7 @@ void ocfs2_wait_for_recovery(struct ocfs2_super *osb);
+
+ int ocfs2_recovery_init(struct ocfs2_super *osb);
+ void ocfs2_recovery_exit(struct ocfs2_super *osb);
++void ocfs2_recovery_disable_quota(struct ocfs2_super *osb);
+
+ int ocfs2_compute_replay_slots(struct ocfs2_super *osb);
+ void ocfs2_free_replay_slots(struct ocfs2_super *osb);
+diff --git a/fs/ocfs2/ocfs2.h b/fs/ocfs2/ocfs2.h
+index 51c52768132d70..6aaa94c554c12a 100644
+--- a/fs/ocfs2/ocfs2.h
++++ b/fs/ocfs2/ocfs2.h
+@@ -308,6 +308,21 @@ enum ocfs2_journal_trigger_type {
+ void ocfs2_initialize_journal_triggers(struct super_block *sb,
+ struct ocfs2_triggers triggers[]);
+
++enum ocfs2_recovery_state {
++ OCFS2_REC_ENABLED = 0,
++ OCFS2_REC_QUOTA_WANT_DISABLE,
++ /*
++ * Must be OCFS2_REC_QUOTA_WANT_DISABLE + 1 for
++ * ocfs2_recovery_disable_quota() to work.
++ */
++ OCFS2_REC_QUOTA_DISABLED,
++ OCFS2_REC_WANT_DISABLE,
++ /*
++ * Must be OCFS2_REC_WANT_DISABLE + 1 for ocfs2_recovery_exit() to work
++ */
++ OCFS2_REC_DISABLED,
++};
++
+ struct ocfs2_journal;
+ struct ocfs2_slot_info;
+ struct ocfs2_recovery_map;
+@@ -370,7 +385,7 @@ struct ocfs2_super
+ struct ocfs2_recovery_map *recovery_map;
+ struct ocfs2_replay_map *replay_map;
+ struct task_struct *recovery_thread_task;
+- int disable_recovery;
++ enum ocfs2_recovery_state recovery_state;
+ wait_queue_head_t checkpoint_event;
+ struct ocfs2_journal *journal;
+ unsigned long osb_commit_interval;
+diff --git a/fs/ocfs2/quota_local.c b/fs/ocfs2/quota_local.c
+index 2956d888c13145..e272429da3db34 100644
+--- a/fs/ocfs2/quota_local.c
++++ b/fs/ocfs2/quota_local.c
+@@ -453,8 +453,7 @@ struct ocfs2_quota_recovery *ocfs2_begin_quota_recovery(
+
+ /* Sync changes in local quota file into global quota file and
+ * reinitialize local quota file.
+- * The function expects local quota file to be already locked and
+- * s_umount locked in shared mode. */
++ * The function expects local quota file to be already locked. */
+ static int ocfs2_recover_local_quota_file(struct inode *lqinode,
+ int type,
+ struct ocfs2_quota_recovery *rec)
+@@ -588,7 +587,6 @@ int ocfs2_finish_quota_recovery(struct ocfs2_super *osb,
+ {
+ unsigned int ino[OCFS2_MAXQUOTAS] = { LOCAL_USER_QUOTA_SYSTEM_INODE,
+ LOCAL_GROUP_QUOTA_SYSTEM_INODE };
+- struct super_block *sb = osb->sb;
+ struct ocfs2_local_disk_dqinfo *ldinfo;
+ struct buffer_head *bh;
+ handle_t *handle;
+@@ -600,7 +598,6 @@ int ocfs2_finish_quota_recovery(struct ocfs2_super *osb,
+ printk(KERN_NOTICE "ocfs2: Finishing quota recovery on device (%s) for "
+ "slot %u\n", osb->dev_str, slot_num);
+
+- down_read(&sb->s_umount);
+ for (type = 0; type < OCFS2_MAXQUOTAS; type++) {
+ if (list_empty(&(rec->r_list[type])))
+ continue;
+@@ -677,7 +674,6 @@ int ocfs2_finish_quota_recovery(struct ocfs2_super *osb,
+ break;
+ }
+ out:
+- up_read(&sb->s_umount);
+ kfree(rec);
+ return status;
+ }
+@@ -843,8 +839,7 @@ static int ocfs2_local_free_info(struct super_block *sb, int type)
+ ocfs2_release_local_quota_bitmaps(&oinfo->dqi_chunk);
+
+ /*
+- * s_umount held in exclusive mode protects us against racing with
+- * recovery thread...
++ * ocfs2_dismount_volume() has already aborted quota recovery...
+ */
+ if (oinfo->dqi_rec) {
+ ocfs2_free_quota_recovery(oinfo->dqi_rec);
+diff --git a/fs/ocfs2/suballoc.c b/fs/ocfs2/suballoc.c
+index f7b483f0de2add..6ac4dcd54588cf 100644
+--- a/fs/ocfs2/suballoc.c
++++ b/fs/ocfs2/suballoc.c
+@@ -698,10 +698,12 @@ static int ocfs2_block_group_alloc(struct ocfs2_super *osb,
+
+ bg_bh = ocfs2_block_group_alloc_contig(osb, handle, alloc_inode,
+ ac, cl);
+- if (PTR_ERR(bg_bh) == -ENOSPC)
++ if (PTR_ERR(bg_bh) == -ENOSPC) {
++ ac->ac_which = OCFS2_AC_USE_MAIN_DISCONTIG;
+ bg_bh = ocfs2_block_group_alloc_discontig(handle,
+ alloc_inode,
+ ac, cl);
++ }
+ if (IS_ERR(bg_bh)) {
+ status = PTR_ERR(bg_bh);
+ bg_bh = NULL;
+@@ -1794,6 +1796,7 @@ static int ocfs2_search_chain(struct ocfs2_alloc_context *ac,
+ {
+ int status;
+ u16 chain;
++ u32 contig_bits;
+ u64 next_group;
+ struct inode *alloc_inode = ac->ac_inode;
+ struct buffer_head *group_bh = NULL;
+@@ -1819,10 +1822,21 @@ static int ocfs2_search_chain(struct ocfs2_alloc_context *ac,
+ status = -ENOSPC;
+ /* for now, the chain search is a bit simplistic. We just use
+ * the 1st group with any empty bits. */
+- while ((status = ac->ac_group_search(alloc_inode, group_bh,
+- bits_wanted, min_bits,
+- ac->ac_max_block,
+- res)) == -ENOSPC) {
++ while (1) {
++ if (ac->ac_which == OCFS2_AC_USE_MAIN_DISCONTIG) {
++ contig_bits = le16_to_cpu(bg->bg_contig_free_bits);
++ if (!contig_bits)
++ contig_bits = ocfs2_find_max_contig_free_bits(bg->bg_bitmap,
++ le16_to_cpu(bg->bg_bits), 0);
++ if (bits_wanted > contig_bits && contig_bits >= min_bits)
++ bits_wanted = contig_bits;
++ }
++
++ status = ac->ac_group_search(alloc_inode, group_bh,
++ bits_wanted, min_bits,
++ ac->ac_max_block, res);
++ if (status != -ENOSPC)
++ break;
+ if (!bg->bg_next_group)
+ break;
+
+@@ -1982,6 +1996,7 @@ static int ocfs2_claim_suballoc_bits(struct ocfs2_alloc_context *ac,
+ victim = ocfs2_find_victim_chain(cl);
+ ac->ac_chain = victim;
+
++search:
+ status = ocfs2_search_chain(ac, handle, bits_wanted, min_bits,
+ res, &bits_left);
+ if (!status) {
+@@ -2022,6 +2037,16 @@ static int ocfs2_claim_suballoc_bits(struct ocfs2_alloc_context *ac,
+ }
+ }
+
++ /* Chains can't supply the bits_wanted contiguous space.
++ * We should switch to using every single bit when allocating
++ * from the global bitmap. */
++ if (i == le16_to_cpu(cl->cl_next_free_rec) &&
++ status == -ENOSPC && ac->ac_which == OCFS2_AC_USE_MAIN) {
++ ac->ac_which = OCFS2_AC_USE_MAIN_DISCONTIG;
++ ac->ac_chain = victim;
++ goto search;
++ }
++
+ set_hint:
+ if (status != -ENOSPC) {
+ /* If the next search of this group is not likely to
+@@ -2365,7 +2390,8 @@ int __ocfs2_claim_clusters(handle_t *handle,
+ BUG_ON(ac->ac_bits_given >= ac->ac_bits_wanted);
+
+ BUG_ON(ac->ac_which != OCFS2_AC_USE_LOCAL
+- && ac->ac_which != OCFS2_AC_USE_MAIN);
++ && ac->ac_which != OCFS2_AC_USE_MAIN
++ && ac->ac_which != OCFS2_AC_USE_MAIN_DISCONTIG);
+
+ if (ac->ac_which == OCFS2_AC_USE_LOCAL) {
+ WARN_ON(min_clusters > 1);
+diff --git a/fs/ocfs2/suballoc.h b/fs/ocfs2/suballoc.h
+index b481b834857d33..bcf2ed4a86310b 100644
+--- a/fs/ocfs2/suballoc.h
++++ b/fs/ocfs2/suballoc.h
+@@ -29,6 +29,7 @@ struct ocfs2_alloc_context {
+ #define OCFS2_AC_USE_MAIN 2
+ #define OCFS2_AC_USE_INODE 3
+ #define OCFS2_AC_USE_META 4
++#define OCFS2_AC_USE_MAIN_DISCONTIG 5
+ u32 ac_which;
+
+ /* these are used by the chain search */
+diff --git a/fs/ocfs2/super.c b/fs/ocfs2/super.c
+index 8bb5022f30824b..3d2533950bae20 100644
+--- a/fs/ocfs2/super.c
++++ b/fs/ocfs2/super.c
+@@ -1812,6 +1812,9 @@ static void ocfs2_dismount_volume(struct super_block *sb, int mnt_err)
+ /* Orphan scan should be stopped as early as possible */
+ ocfs2_orphan_scan_stop(osb);
+
++ /* Stop quota recovery so that we can disable quotas */
++ ocfs2_recovery_disable_quota(osb);
++
+ ocfs2_disable_quotas(osb);
+
+ /* All dquots should be freed by now */
+diff --git a/fs/smb/client/cached_dir.c b/fs/smb/client/cached_dir.c
+index fe738623cf1ba9..240d82c6f90806 100644
+--- a/fs/smb/client/cached_dir.c
++++ b/fs/smb/client/cached_dir.c
+@@ -29,7 +29,6 @@ static struct cached_fid *find_or_create_cached_dir(struct cached_fids *cfids,
+ {
+ struct cached_fid *cfid;
+
+- spin_lock(&cfids->cfid_list_lock);
+ list_for_each_entry(cfid, &cfids->entries, entry) {
+ if (!strcmp(cfid->path, path)) {
+ /*
+@@ -38,25 +37,20 @@ static struct cached_fid *find_or_create_cached_dir(struct cached_fids *cfids,
+ * being deleted due to a lease break.
+ */
+ if (!cfid->time || !cfid->has_lease) {
+- spin_unlock(&cfids->cfid_list_lock);
+ return NULL;
+ }
+ kref_get(&cfid->refcount);
+- spin_unlock(&cfids->cfid_list_lock);
+ return cfid;
+ }
+ }
+ if (lookup_only) {
+- spin_unlock(&cfids->cfid_list_lock);
+ return NULL;
+ }
+ if (cfids->num_entries >= max_cached_dirs) {
+- spin_unlock(&cfids->cfid_list_lock);
+ return NULL;
+ }
+ cfid = init_cached_dir(path);
+ if (cfid == NULL) {
+- spin_unlock(&cfids->cfid_list_lock);
+ return NULL;
+ }
+ cfid->cfids = cfids;
+@@ -74,7 +68,6 @@ static struct cached_fid *find_or_create_cached_dir(struct cached_fids *cfids,
+ */
+ cfid->has_lease = true;
+
+- spin_unlock(&cfids->cfid_list_lock);
+ return cfid;
+ }
+
+@@ -187,8 +180,10 @@ int open_cached_dir(unsigned int xid, struct cifs_tcon *tcon,
+ if (!utf16_path)
+ return -ENOMEM;
+
++ spin_lock(&cfids->cfid_list_lock);
+ cfid = find_or_create_cached_dir(cfids, path, lookup_only, tcon->max_cached_dirs);
+ if (cfid == NULL) {
++ spin_unlock(&cfids->cfid_list_lock);
+ kfree(utf16_path);
+ return -ENOENT;
+ }
+@@ -197,7 +192,6 @@ int open_cached_dir(unsigned int xid, struct cifs_tcon *tcon,
+ * Otherwise, it is either a new entry or laundromat worker removed it
+ * from @cfids->entries. Caller will put last reference if the latter.
+ */
+- spin_lock(&cfids->cfid_list_lock);
+ if (cfid->has_lease && cfid->time) {
+ spin_unlock(&cfids->cfid_list_lock);
+ *ret_cfid = cfid;
+diff --git a/fs/smb/server/oplock.c b/fs/smb/server/oplock.c
+index 81a29857b1e32f..03f606afad93a0 100644
+--- a/fs/smb/server/oplock.c
++++ b/fs/smb/server/oplock.c
+@@ -1496,7 +1496,7 @@ struct lease_ctx_info *parse_lease_state(void *open_req)
+
+ if (le16_to_cpu(cc->DataOffset) + le32_to_cpu(cc->DataLength) <
+ sizeof(struct create_lease_v2) - 4)
+- return NULL;
++ goto err_out;
+
+ memcpy(lreq->lease_key, lc->lcontext.LeaseKey, SMB2_LEASE_KEY_SIZE);
+ lreq->req_state = lc->lcontext.LeaseState;
+@@ -1512,7 +1512,7 @@ struct lease_ctx_info *parse_lease_state(void *open_req)
+
+ if (le16_to_cpu(cc->DataOffset) + le32_to_cpu(cc->DataLength) <
+ sizeof(struct create_lease))
+- return NULL;
++ goto err_out;
+
+ memcpy(lreq->lease_key, lc->lcontext.LeaseKey, SMB2_LEASE_KEY_SIZE);
+ lreq->req_state = lc->lcontext.LeaseState;
+@@ -1521,6 +1521,9 @@ struct lease_ctx_info *parse_lease_state(void *open_req)
+ lreq->version = 1;
+ }
+ return lreq;
++err_out:
++ kfree(lreq);
++ return NULL;
+ }
+
+ /**
+diff --git a/fs/smb/server/smb2pdu.c b/fs/smb/server/smb2pdu.c
+index 58ede919675174..c2603c398a4674 100644
+--- a/fs/smb/server/smb2pdu.c
++++ b/fs/smb/server/smb2pdu.c
+@@ -633,6 +633,11 @@ smb2_get_name(const char *src, const int maxlen, struct nls_table *local_nls)
+ return name;
+ }
+
++ if (*name == '\0') {
++ kfree(name);
++ return ERR_PTR(-EINVAL);
++ }
++
+ if (*name == '\\') {
+ pr_err("not allow directory name included leading slash\n");
+ kfree(name);
+diff --git a/fs/smb/server/vfs.c b/fs/smb/server/vfs.c
+index 9c765b97375170..648efed5ff7de6 100644
+--- a/fs/smb/server/vfs.c
++++ b/fs/smb/server/vfs.c
+@@ -443,6 +443,13 @@ static int ksmbd_vfs_stream_write(struct ksmbd_file *fp, char *buf, loff_t *pos,
+ goto out;
+ }
+
++ if (v_len <= *pos) {
++ pr_err("stream write position %lld is out of bounds (stream length: %zd)\n",
++ *pos, v_len);
++ err = -EINVAL;
++ goto out;
++ }
++
+ if (v_len < size) {
+ wbuf = kvzalloc(size, KSMBD_DEFAULT_GFP);
+ if (!wbuf) {
+diff --git a/fs/smb/server/vfs_cache.c b/fs/smb/server/vfs_cache.c
+index 1f8fa3468173ab..dfed6fce890498 100644
+--- a/fs/smb/server/vfs_cache.c
++++ b/fs/smb/server/vfs_cache.c
+@@ -661,21 +661,40 @@ __close_file_table_ids(struct ksmbd_file_table *ft,
+ bool (*skip)(struct ksmbd_tree_connect *tcon,
+ struct ksmbd_file *fp))
+ {
+- unsigned int id;
+- struct ksmbd_file *fp;
+- int num = 0;
++ struct ksmbd_file *fp;
++ unsigned int id = 0;
++ int num = 0;
++
++ while (1) {
++ write_lock(&ft->lock);
++ fp = idr_get_next(ft->idr, &id);
++ if (!fp) {
++ write_unlock(&ft->lock);
++ break;
++ }
+
+- idr_for_each_entry(ft->idr, fp, id) {
+- if (skip(tcon, fp))
++ if (skip(tcon, fp) ||
++ !atomic_dec_and_test(&fp->refcount)) {
++ id++;
++ write_unlock(&ft->lock);
+ continue;
++ }
+
+ set_close_state_blocked_works(fp);
++ idr_remove(ft->idr, fp->volatile_id);
++ fp->volatile_id = KSMBD_NO_FID;
++ write_unlock(&ft->lock);
++
++ down_write(&fp->f_ci->m_lock);
++ list_del_init(&fp->node);
++ up_write(&fp->f_ci->m_lock);
+
+- if (!atomic_dec_and_test(&fp->refcount))
+- continue;
+ __ksmbd_close_fd(ft, fp);
++
+ num++;
++ id++;
+ }
++
+ return num;
+ }
+
+diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
+index d80f943461992f..22f4bf956ba1c4 100644
+--- a/fs/userfaultfd.c
++++ b/fs/userfaultfd.c
+@@ -1585,8 +1585,11 @@ static int userfaultfd_copy(struct userfaultfd_ctx *ctx,
+ user_uffdio_copy = (struct uffdio_copy __user *) arg;
+
+ ret = -EAGAIN;
+- if (atomic_read(&ctx->mmap_changing))
++ if (unlikely(atomic_read(&ctx->mmap_changing))) {
++ if (unlikely(put_user(ret, &user_uffdio_copy->copy)))
++ return -EFAULT;
+ goto out;
++ }
+
+ ret = -EFAULT;
+ if (copy_from_user(&uffdio_copy, user_uffdio_copy,
+@@ -1641,8 +1644,11 @@ static int userfaultfd_zeropage(struct userfaultfd_ctx *ctx,
+ user_uffdio_zeropage = (struct uffdio_zeropage __user *) arg;
+
+ ret = -EAGAIN;
+- if (atomic_read(&ctx->mmap_changing))
++ if (unlikely(atomic_read(&ctx->mmap_changing))) {
++ if (unlikely(put_user(ret, &user_uffdio_zeropage->zeropage)))
++ return -EFAULT;
+ goto out;
++ }
+
+ ret = -EFAULT;
+ if (copy_from_user(&uffdio_zeropage, user_uffdio_zeropage,
+@@ -1744,8 +1750,11 @@ static int userfaultfd_continue(struct userfaultfd_ctx *ctx, unsigned long arg)
+ user_uffdio_continue = (struct uffdio_continue __user *)arg;
+
+ ret = -EAGAIN;
+- if (atomic_read(&ctx->mmap_changing))
++ if (unlikely(atomic_read(&ctx->mmap_changing))) {
++ if (unlikely(put_user(ret, &user_uffdio_continue->mapped)))
++ return -EFAULT;
+ goto out;
++ }
+
+ ret = -EFAULT;
+ if (copy_from_user(&uffdio_continue, user_uffdio_continue,
+@@ -1801,8 +1810,11 @@ static inline int userfaultfd_poison(struct userfaultfd_ctx *ctx, unsigned long
+ user_uffdio_poison = (struct uffdio_poison __user *)arg;
+
+ ret = -EAGAIN;
+- if (atomic_read(&ctx->mmap_changing))
++ if (unlikely(atomic_read(&ctx->mmap_changing))) {
++ if (unlikely(put_user(ret, &user_uffdio_poison->updated)))
++ return -EFAULT;
+ goto out;
++ }
+
+ ret = -EFAULT;
+ if (copy_from_user(&uffdio_poison, user_uffdio_poison,
+@@ -1870,8 +1882,12 @@ static int userfaultfd_move(struct userfaultfd_ctx *ctx,
+
+ user_uffdio_move = (struct uffdio_move __user *) arg;
+
+- if (atomic_read(&ctx->mmap_changing))
+- return -EAGAIN;
++ ret = -EAGAIN;
++ if (unlikely(atomic_read(&ctx->mmap_changing))) {
++ if (unlikely(put_user(ret, &user_uffdio_move->move)))
++ return -EFAULT;
++ goto out;
++ }
+
+ if (copy_from_user(&uffdio_move, user_uffdio_move,
+ /* don't copy "move" last field */
+diff --git a/include/linux/cpu.h b/include/linux/cpu.h
+index 6a0a8f1c7c9035..7fdf9eb6b52d58 100644
+--- a/include/linux/cpu.h
++++ b/include/linux/cpu.h
+@@ -78,6 +78,8 @@ extern ssize_t cpu_show_gds(struct device *dev,
+ extern ssize_t cpu_show_reg_file_data_sampling(struct device *dev,
+ struct device_attribute *attr, char *buf);
+ extern ssize_t cpu_show_ghostwrite(struct device *dev, struct device_attribute *attr, char *buf);
++extern ssize_t cpu_show_indirect_target_selection(struct device *dev,
++ struct device_attribute *attr, char *buf);
+
+ extern __printf(4, 5)
+ struct device *cpu_device_create(struct device *parent, void *drvdata,
+diff --git a/include/linux/execmem.h b/include/linux/execmem.h
+index 64130ae19690a9..89b4035b9f4bd1 100644
+--- a/include/linux/execmem.h
++++ b/include/linux/execmem.h
+@@ -4,6 +4,7 @@
+
+ #include <linux/types.h>
+ #include <linux/moduleloader.h>
++#include <linux/cleanup.h>
+
+ #if (defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)) && \
+ !defined(CONFIG_KASAN_VMALLOC)
+@@ -139,6 +140,8 @@ void *execmem_alloc(enum execmem_type type, size_t size);
+ */
+ void execmem_free(void *ptr);
+
++DEFINE_FREE(execmem, void *, if (_T) execmem_free(_T));
++
+ #ifdef CONFIG_MMU
+ /**
+ * execmem_vmap - create virtual mapping for EXECMEM_MODULE_DATA memory
+diff --git a/include/linux/hyperv.h b/include/linux/hyperv.h
+index 4179add2864b41..6192bce9a9d68a 100644
+--- a/include/linux/hyperv.h
++++ b/include/linux/hyperv.h
+@@ -1058,6 +1058,12 @@ struct vmbus_channel {
+
+ /* The max size of a packet on this channel */
+ u32 max_pkt_size;
++
++ /* function to mmap ring buffer memory to the channel's sysfs ring attribute */
++ int (*mmap_ring_buffer)(struct vmbus_channel *channel, struct vm_area_struct *vma);
++
++ /* boolean to control visibility of sysfs for ring buffer */
++ bool ring_sysfs_visible;
+ };
+
+ #define lock_requestor(channel, flags) \
+diff --git a/include/linux/ieee80211.h b/include/linux/ieee80211.h
+index 16741e542e81c9..07dcd80f3310c5 100644
+--- a/include/linux/ieee80211.h
++++ b/include/linux/ieee80211.h
+@@ -1526,7 +1526,7 @@ struct ieee80211_mgmt {
+ struct {
+ u8 action_code;
+ u8 dialog_token;
+- u8 status_code;
++ __le16 status_code;
+ u8 variable[];
+ } __packed ttlm_res;
+ struct {
+diff --git a/include/linux/module.h b/include/linux/module.h
+index ba33bba3cc7427..7212fbb06933ca 100644
+--- a/include/linux/module.h
++++ b/include/linux/module.h
+@@ -587,6 +587,11 @@ struct module {
+ atomic_t refcnt;
+ #endif
+
++#ifdef CONFIG_MITIGATION_ITS
++ int its_num_pages;
++ void **its_page_array;
++#endif
++
+ #ifdef CONFIG_CONSTRUCTORS
+ /* Constructor functions. */
+ ctor_fn_t *ctors;
+diff --git a/include/linux/timekeeper_internal.h b/include/linux/timekeeper_internal.h
+index e39d4d563b1975..785048a3b3e604 100644
+--- a/include/linux/timekeeper_internal.h
++++ b/include/linux/timekeeper_internal.h
+@@ -51,7 +51,7 @@ struct tk_read_base {
+ * @offs_real: Offset clock monotonic -> clock realtime
+ * @offs_boot: Offset clock monotonic -> clock boottime
+ * @offs_tai: Offset clock monotonic -> clock tai
+- * @tai_offset: The current UTC to TAI offset in seconds
++ * @coarse_nsec: The nanoseconds part for coarse time getters
+ * @tkr_raw: The readout base structure for CLOCK_MONOTONIC_RAW
+ * @raw_sec: CLOCK_MONOTONIC_RAW time in seconds
+ * @clock_was_set_seq: The sequence number of clock was set events
+@@ -76,6 +76,7 @@ struct tk_read_base {
+ * ntp shifted nano seconds.
+ * @ntp_err_mult: Multiplication factor for scaled math conversion
+ * @skip_second_overflow: Flag used to avoid updating NTP twice with same second
++ * @tai_offset: The current UTC to TAI offset in seconds
+ *
+ * Note: For timespec(64) based interfaces wall_to_monotonic is what
+ * we need to add to xtime (or xtime corrected for sub jiffy times)
+@@ -100,7 +101,7 @@ struct tk_read_base {
+ * which results in the following cacheline layout:
+ *
+ * 0: seqcount, tkr_mono
+- * 1: xtime_sec ... tai_offset
++ * 1: xtime_sec ... coarse_nsec
+ * 2: tkr_raw, raw_sec
+ * 3,4: Internal variables
+ *
+@@ -121,7 +122,7 @@ struct timekeeper {
+ ktime_t offs_real;
+ ktime_t offs_boot;
+ ktime_t offs_tai;
+- s32 tai_offset;
++ u32 coarse_nsec;
+
+ /* Cacheline 2: */
+ struct tk_read_base tkr_raw;
+@@ -144,6 +145,7 @@ struct timekeeper {
+ u32 ntp_error_shift;
+ u32 ntp_err_mult;
+ u32 skip_second_overflow;
++ s32 tai_offset;
+ };
+
+ #ifdef CONFIG_GENERIC_TIME_VSYSCALL
+diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
+index 31e9ffd936e393..5ca8d4dd149d4e 100644
+--- a/include/linux/vmalloc.h
++++ b/include/linux/vmalloc.h
+@@ -61,6 +61,7 @@ struct vm_struct {
+ unsigned int nr_pages;
+ phys_addr_t phys_addr;
+ const void *caller;
++ unsigned long requested_size;
+ };
+
+ struct vmap_area {
+diff --git a/include/net/netdev_queues.h b/include/net/netdev_queues.h
+index b02bb9f109d5e3..88598e14ecfa4f 100644
+--- a/include/net/netdev_queues.h
++++ b/include/net/netdev_queues.h
+@@ -102,6 +102,12 @@ struct netdev_stat_ops {
+ struct netdev_queue_stats_tx *tx);
+ };
+
++void netdev_stat_queue_sum(struct net_device *netdev,
++ int rx_start, int rx_end,
++ struct netdev_queue_stats_rx *rx_sum,
++ int tx_start, int tx_end,
++ struct netdev_queue_stats_tx *tx_sum);
++
+ /**
+ * struct netdev_queue_mgmt_ops - netdev ops for queue management
+ *
+diff --git a/init/Kconfig b/init/Kconfig
+index dc7b10a1fad2b7..522fac29949adb 100644
+--- a/init/Kconfig
++++ b/init/Kconfig
+@@ -137,6 +137,9 @@ config LD_CAN_USE_KEEP_IN_OVERLAY
+ config RUSTC_HAS_COERCE_POINTEE
+ def_bool RUSTC_VERSION >= 108400
+
++config RUSTC_HAS_UNNECESSARY_TRANSMUTES
++ def_bool RUSTC_VERSION >= 108800
++
+ config PAHOLE_VERSION
+ int
+ default $(shell,$(srctree)/scripts/pahole-version.sh $(PAHOLE))
+diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
+index 24b9e9a5105d46..a60cb9d30cc0dc 100644
+--- a/io_uring/io_uring.c
++++ b/io_uring/io_uring.c
+@@ -443,24 +443,6 @@ static struct io_kiocb *__io_prep_linked_timeout(struct io_kiocb *req)
+ return req->link;
+ }
+
+-static inline struct io_kiocb *io_prep_linked_timeout(struct io_kiocb *req)
+-{
+- if (likely(!(req->flags & REQ_F_ARM_LTIMEOUT)))
+- return NULL;
+- return __io_prep_linked_timeout(req);
+-}
+-
+-static noinline void __io_arm_ltimeout(struct io_kiocb *req)
+-{
+- io_queue_linked_timeout(__io_prep_linked_timeout(req));
+-}
+-
+-static inline void io_arm_ltimeout(struct io_kiocb *req)
+-{
+- if (unlikely(req->flags & REQ_F_ARM_LTIMEOUT))
+- __io_arm_ltimeout(req);
+-}
+-
+ static void io_prep_async_work(struct io_kiocb *req)
+ {
+ const struct io_issue_def *def = &io_issue_defs[req->opcode];
+@@ -513,7 +495,6 @@ static void io_prep_async_link(struct io_kiocb *req)
+
+ static void io_queue_iowq(struct io_kiocb *req)
+ {
+- struct io_kiocb *link = io_prep_linked_timeout(req);
+ struct io_uring_task *tctx = req->tctx;
+
+ BUG_ON(!tctx);
+@@ -538,8 +519,6 @@ static void io_queue_iowq(struct io_kiocb *req)
+
+ trace_io_uring_queue_async_work(req, io_wq_is_hashed(&req->work));
+ io_wq_enqueue(tctx->io_wq, &req->work);
+- if (link)
+- io_queue_linked_timeout(link);
+ }
+
+ static void io_req_queue_iowq_tw(struct io_kiocb *req, struct io_tw_state *ts)
+@@ -874,6 +853,14 @@ bool io_req_post_cqe(struct io_kiocb *req, s32 res, u32 cflags)
+ struct io_ring_ctx *ctx = req->ctx;
+ bool posted;
+
++ /*
++ * If multishot has already posted deferred completions, ensure that
++ * those are flushed first before posting this one. If not, CQEs
++ * could get reordered.
++ */
++ if (!wq_list_empty(&ctx->submit_state.compl_reqs))
++ __io_submit_flush_completions(ctx);
++
+ lockdep_assert(!io_wq_current_is_worker());
+ lockdep_assert_held(&ctx->uring_lock);
+
+@@ -1720,17 +1707,24 @@ static bool io_assign_file(struct io_kiocb *req, const struct io_issue_def *def,
+ return !!req->file;
+ }
+
++#define REQ_ISSUE_SLOW_FLAGS (REQ_F_CREDS | REQ_F_ARM_LTIMEOUT)
++
+ static int io_issue_sqe(struct io_kiocb *req, unsigned int issue_flags)
+ {
+ const struct io_issue_def *def = &io_issue_defs[req->opcode];
+ const struct cred *creds = NULL;
++ struct io_kiocb *link = NULL;
+ int ret;
+
+ if (unlikely(!io_assign_file(req, def, issue_flags)))
+ return -EBADF;
+
+- if (unlikely((req->flags & REQ_F_CREDS) && req->creds != current_cred()))
+- creds = override_creds(req->creds);
++ if (unlikely(req->flags & REQ_ISSUE_SLOW_FLAGS)) {
++ if ((req->flags & REQ_F_CREDS) && req->creds != current_cred())
++ creds = override_creds(req->creds);
++ if (req->flags & REQ_F_ARM_LTIMEOUT)
++ link = __io_prep_linked_timeout(req);
++ }
+
+ if (!def->audit_skip)
+ audit_uring_entry(req->opcode);
+@@ -1740,8 +1734,12 @@ static int io_issue_sqe(struct io_kiocb *req, unsigned int issue_flags)
+ if (!def->audit_skip)
+ audit_uring_exit(!ret, ret);
+
+- if (creds)
+- revert_creds(creds);
++ if (unlikely(creds || link)) {
++ if (creds)
++ revert_creds(creds);
++ if (link)
++ io_queue_linked_timeout(link);
++ }
+
+ if (ret == IOU_OK) {
+ if (issue_flags & IO_URING_F_COMPLETE_DEFER)
+@@ -1754,7 +1752,6 @@ static int io_issue_sqe(struct io_kiocb *req, unsigned int issue_flags)
+
+ if (ret == IOU_ISSUE_SKIP_COMPLETE) {
+ ret = 0;
+- io_arm_ltimeout(req);
+
+ /* If the op doesn't have a file, we're not polling for it */
+ if ((req->ctx->flags & IORING_SETUP_IOPOLL) && def->iopoll_queue)
+@@ -1797,8 +1794,6 @@ void io_wq_submit_work(struct io_wq_work *work)
+ else
+ req_ref_get(req);
+
+- io_arm_ltimeout(req);
+-
+ /* either cancelled or io-wq is dying, so don't touch tctx->iowq */
+ if (atomic_read(&work->flags) & IO_WQ_WORK_CANCEL) {
+ fail:
+@@ -1914,15 +1909,11 @@ struct file *io_file_get_normal(struct io_kiocb *req, int fd)
+ static void io_queue_async(struct io_kiocb *req, int ret)
+ __must_hold(&req->ctx->uring_lock)
+ {
+- struct io_kiocb *linked_timeout;
+-
+ if (ret != -EAGAIN || (req->flags & REQ_F_NOWAIT)) {
+ io_req_defer_failed(req, ret);
+ return;
+ }
+
+- linked_timeout = io_prep_linked_timeout(req);
+-
+ switch (io_arm_poll_handler(req, 0)) {
+ case IO_APOLL_READY:
+ io_kbuf_recycle(req, 0);
+@@ -1935,9 +1926,6 @@ static void io_queue_async(struct io_kiocb *req, int ret)
+ case IO_APOLL_OK:
+ break;
+ }
+-
+- if (linked_timeout)
+- io_queue_linked_timeout(linked_timeout);
+ }
+
+ static inline void io_queue_sqe(struct io_kiocb *req)
+diff --git a/io_uring/sqpoll.c b/io_uring/sqpoll.c
+index d037cc68e9d3ea..03c699493b5ab6 100644
+--- a/io_uring/sqpoll.c
++++ b/io_uring/sqpoll.c
+@@ -20,7 +20,7 @@
+ #include "sqpoll.h"
+
+ #define IORING_SQPOLL_CAP_ENTRIES_VALUE 8
+-#define IORING_TW_CAP_ENTRIES_VALUE 8
++#define IORING_TW_CAP_ENTRIES_VALUE 32
+
+ enum {
+ IO_SQ_THREAD_SHOULD_STOP = 0,
+diff --git a/kernel/params.c b/kernel/params.c
+index c417d28bc1dfba..10cb194c2c36d8 100644
+--- a/kernel/params.c
++++ b/kernel/params.c
+@@ -949,7 +949,9 @@ struct kset *module_kset;
+ static void module_kobj_release(struct kobject *kobj)
+ {
+ struct module_kobject *mk = to_module_kobject(kobj);
+- complete(mk->kobj_completion);
++
++ if (mk->kobj_completion)
++ complete(mk->kobj_completion);
+ }
+
+ const struct kobj_type module_ktype = {
+diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
+index 1e67d076f1955a..a009c91f7b05fc 100644
+--- a/kernel/time/timekeeping.c
++++ b/kernel/time/timekeeping.c
+@@ -164,10 +164,34 @@ static inline struct timespec64 tk_xtime(const struct timekeeper *tk)
+ return ts;
+ }
+
++static inline struct timespec64 tk_xtime_coarse(const struct timekeeper *tk)
++{
++ struct timespec64 ts;
++
++ ts.tv_sec = tk->xtime_sec;
++ ts.tv_nsec = tk->coarse_nsec;
++ return ts;
++}
++
++/*
++ * Update the nanoseconds part for the coarse time keepers. They can't rely
++ * on xtime_nsec because xtime_nsec could be adjusted by a small negative
++ * amount when the multiplication factor of the clock is adjusted, which
++ * could cause the coarse clocks to go slightly backwards. See
++ * timekeeping_apply_adjustment(). Thus we keep a separate copy for the coarse
++ * clockids which only is updated when the clock has been set or we have
++ * accumulated time.
++ */
++static inline void tk_update_coarse_nsecs(struct timekeeper *tk)
++{
++ tk->coarse_nsec = tk->tkr_mono.xtime_nsec >> tk->tkr_mono.shift;
++}
++
+ static void tk_set_xtime(struct timekeeper *tk, const struct timespec64 *ts)
+ {
+ tk->xtime_sec = ts->tv_sec;
+ tk->tkr_mono.xtime_nsec = (u64)ts->tv_nsec << tk->tkr_mono.shift;
++ tk_update_coarse_nsecs(tk);
+ }
+
+ static void tk_xtime_add(struct timekeeper *tk, const struct timespec64 *ts)
+@@ -175,6 +199,7 @@ static void tk_xtime_add(struct timekeeper *tk, const struct timespec64 *ts)
+ tk->xtime_sec += ts->tv_sec;
+ tk->tkr_mono.xtime_nsec += (u64)ts->tv_nsec << tk->tkr_mono.shift;
+ tk_normalize_xtime(tk);
++ tk_update_coarse_nsecs(tk);
+ }
+
+ static void tk_set_wall_to_mono(struct timekeeper *tk, struct timespec64 wtm)
+@@ -708,6 +733,7 @@ static void timekeeping_forward_now(struct timekeeper *tk)
+ tk_normalize_xtime(tk);
+ delta -= incr;
+ }
++ tk_update_coarse_nsecs(tk);
+ }
+
+ /**
+@@ -804,8 +830,8 @@ EXPORT_SYMBOL_GPL(ktime_get_with_offset);
+ ktime_t ktime_get_coarse_with_offset(enum tk_offsets offs)
+ {
+ struct timekeeper *tk = &tk_core.timekeeper;
+- unsigned int seq;
+ ktime_t base, *offset = offsets[offs];
++ unsigned int seq;
+ u64 nsecs;
+
+ WARN_ON(timekeeping_suspended);
+@@ -813,7 +839,7 @@ ktime_t ktime_get_coarse_with_offset(enum tk_offsets offs)
+ do {
+ seq = read_seqcount_begin(&tk_core.seq);
+ base = ktime_add(tk->tkr_mono.base, *offset);
+- nsecs = tk->tkr_mono.xtime_nsec >> tk->tkr_mono.shift;
++ nsecs = tk->coarse_nsec;
+
+ } while (read_seqcount_retry(&tk_core.seq, seq));
+
+@@ -2161,7 +2187,7 @@ static bool timekeeping_advance(enum timekeeping_adv_mode mode)
+ struct timekeeper *real_tk = &tk_core.timekeeper;
+ unsigned int clock_set = 0;
+ int shift = 0, maxshift;
+- u64 offset;
++ u64 offset, orig_offset;
+
+ guard(raw_spinlock_irqsave)(&tk_core.lock);
+
+@@ -2172,7 +2198,7 @@ static bool timekeeping_advance(enum timekeeping_adv_mode mode)
+ offset = clocksource_delta(tk_clock_read(&tk->tkr_mono),
+ tk->tkr_mono.cycle_last, tk->tkr_mono.mask,
+ tk->tkr_mono.clock->max_raw_delta);
+-
++ orig_offset = offset;
+ /* Check if there's really nothing to do */
+ if (offset < real_tk->cycle_interval && mode == TK_ADV_TICK)
+ return false;
+@@ -2205,6 +2231,14 @@ static bool timekeeping_advance(enum timekeeping_adv_mode mode)
+ */
+ clock_set |= accumulate_nsecs_to_secs(tk);
+
++ /*
++ * To avoid inconsistencies caused adjtimex TK_ADV_FREQ calls
++ * making small negative adjustments to the base xtime_nsec
++ * value, only update the coarse clocks if we accumulated time
++ */
++ if (orig_offset != offset)
++ tk_update_coarse_nsecs(tk);
++
+ timekeeping_update_from_shadow(&tk_core, clock_set);
+
+ return !!clock_set;
+@@ -2248,7 +2282,7 @@ void ktime_get_coarse_real_ts64(struct timespec64 *ts)
+ do {
+ seq = read_seqcount_begin(&tk_core.seq);
+
+- *ts = tk_xtime(tk);
++ *ts = tk_xtime_coarse(tk);
+ } while (read_seqcount_retry(&tk_core.seq, seq));
+ }
+ EXPORT_SYMBOL(ktime_get_coarse_real_ts64);
+@@ -2271,7 +2305,7 @@ void ktime_get_coarse_real_ts64_mg(struct timespec64 *ts)
+
+ do {
+ seq = read_seqcount_begin(&tk_core.seq);
+- *ts = tk_xtime(tk);
++ *ts = tk_xtime_coarse(tk);
+ offset = tk_core.timekeeper.offs_real;
+ } while (read_seqcount_retry(&tk_core.seq, seq));
+
+@@ -2350,12 +2384,12 @@ void ktime_get_coarse_ts64(struct timespec64 *ts)
+ do {
+ seq = read_seqcount_begin(&tk_core.seq);
+
+- now = tk_xtime(tk);
++ now = tk_xtime_coarse(tk);
+ mono = tk->wall_to_monotonic;
+ } while (read_seqcount_retry(&tk_core.seq, seq));
+
+ set_normalized_timespec64(ts, now.tv_sec + mono.tv_sec,
+- now.tv_nsec + mono.tv_nsec);
++ now.tv_nsec + mono.tv_nsec);
+ }
+ EXPORT_SYMBOL(ktime_get_coarse_ts64);
+
+diff --git a/kernel/time/vsyscall.c b/kernel/time/vsyscall.c
+index 05d38314316582..c9d946b012d8bf 100644
+--- a/kernel/time/vsyscall.c
++++ b/kernel/time/vsyscall.c
+@@ -97,12 +97,12 @@ void update_vsyscall(struct timekeeper *tk)
+ /* CLOCK_REALTIME_COARSE */
+ vdso_ts = &vdata[CS_HRES_COARSE].basetime[CLOCK_REALTIME_COARSE];
+ vdso_ts->sec = tk->xtime_sec;
+- vdso_ts->nsec = tk->tkr_mono.xtime_nsec >> tk->tkr_mono.shift;
++ vdso_ts->nsec = tk->coarse_nsec;
+
+ /* CLOCK_MONOTONIC_COARSE */
+ vdso_ts = &vdata[CS_HRES_COARSE].basetime[CLOCK_MONOTONIC_COARSE];
+ vdso_ts->sec = tk->xtime_sec + tk->wall_to_monotonic.tv_sec;
+- nsec = tk->tkr_mono.xtime_nsec >> tk->tkr_mono.shift;
++ nsec = tk->coarse_nsec;
+ nsec = nsec + tk->wall_to_monotonic.tv_nsec;
+ vdso_ts->sec += __iter_div_u64_rem(nsec, NSEC_PER_SEC, &vdso_ts->nsec);
+
+diff --git a/mm/huge_memory.c b/mm/huge_memory.c
+index 373781b21e5ca5..224925201ca2ed 100644
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -2959,6 +2959,8 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
+ void split_huge_pmd_locked(struct vm_area_struct *vma, unsigned long address,
+ pmd_t *pmd, bool freeze, struct folio *folio)
+ {
++ bool pmd_migration = is_pmd_migration_entry(*pmd);
++
+ VM_WARN_ON_ONCE(folio && !folio_test_pmd_mappable(folio));
+ VM_WARN_ON_ONCE(!IS_ALIGNED(address, HPAGE_PMD_SIZE));
+ VM_WARN_ON_ONCE(folio && !folio_test_locked(folio));
+@@ -2969,9 +2971,12 @@ void split_huge_pmd_locked(struct vm_area_struct *vma, unsigned long address,
+ * require a folio to check the PMD against. Otherwise, there
+ * is a risk of replacing the wrong folio.
+ */
+- if (pmd_trans_huge(*pmd) || pmd_devmap(*pmd) ||
+- is_pmd_migration_entry(*pmd)) {
+- if (folio && folio != pmd_folio(*pmd))
++ if (pmd_trans_huge(*pmd) || pmd_devmap(*pmd) || pmd_migration) {
++ /*
++ * Do not apply pmd_folio() to a migration entry; and folio lock
++ * guarantees that it must be of the wrong folio anyway.
++ */
++ if (folio && (pmd_migration || folio != pmd_folio(*pmd)))
+ return;
+ __split_huge_pmd_locked(vma, pmd, address, freeze);
+ }
+diff --git a/mm/internal.h b/mm/internal.h
+index 20b3535935a31b..ed34773efe3eae 100644
+--- a/mm/internal.h
++++ b/mm/internal.h
+@@ -205,11 +205,9 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
+ pte_t *start_ptep, pte_t pte, int max_nr, fpb_t flags,
+ bool *any_writable, bool *any_young, bool *any_dirty)
+ {
+- unsigned long folio_end_pfn = folio_pfn(folio) + folio_nr_pages(folio);
+- const pte_t *end_ptep = start_ptep + max_nr;
+ pte_t expected_pte, *ptep;
+ bool writable, young, dirty;
+- int nr;
++ int nr, cur_nr;
+
+ if (any_writable)
+ *any_writable = false;
+@@ -222,11 +220,15 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
+ VM_WARN_ON_FOLIO(!folio_test_large(folio) || max_nr < 1, folio);
+ VM_WARN_ON_FOLIO(page_folio(pfn_to_page(pte_pfn(pte))) != folio, folio);
+
++ /* Limit max_nr to the actual remaining PFNs in the folio we could batch. */
++ max_nr = min_t(unsigned long, max_nr,
++ folio_pfn(folio) + folio_nr_pages(folio) - pte_pfn(pte));
++
+ nr = pte_batch_hint(start_ptep, pte);
+ expected_pte = __pte_batch_clear_ignored(pte_advance_pfn(pte, nr), flags);
+ ptep = start_ptep + nr;
+
+- while (ptep < end_ptep) {
++ while (nr < max_nr) {
+ pte = ptep_get(ptep);
+ if (any_writable)
+ writable = !!pte_write(pte);
+@@ -239,14 +241,6 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
+ if (!pte_same(pte, expected_pte))
+ break;
+
+- /*
+- * Stop immediately once we reached the end of the folio. In
+- * corner cases the next PFN might fall into a different
+- * folio.
+- */
+- if (pte_pfn(pte) >= folio_end_pfn)
+- break;
+-
+ if (any_writable)
+ *any_writable |= writable;
+ if (any_young)
+@@ -254,12 +248,13 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
+ if (any_dirty)
+ *any_dirty |= dirty;
+
+- nr = pte_batch_hint(ptep, pte);
+- expected_pte = pte_advance_pfn(expected_pte, nr);
+- ptep += nr;
++ cur_nr = pte_batch_hint(ptep, pte);
++ expected_pte = pte_advance_pfn(expected_pte, cur_nr);
++ ptep += cur_nr;
++ nr += cur_nr;
+ }
+
+- return min(ptep - start_ptep, max_nr);
++ return min(nr, max_nr);
+ }
+
+ /**
+diff --git a/mm/memblock.c b/mm/memblock.c
+index 9c2df1c609487b..58fc76f4d45dd3 100644
+--- a/mm/memblock.c
++++ b/mm/memblock.c
+@@ -456,7 +456,14 @@ static int __init_memblock memblock_double_array(struct memblock_type *type,
+ min(new_area_start, memblock.current_limit),
+ new_alloc_size, PAGE_SIZE);
+
+- new_array = addr ? __va(addr) : NULL;
++ if (addr) {
++ /* The memory may not have been accepted, yet. */
++ accept_memory(addr, new_alloc_size);
++
++ new_array = __va(addr);
++ } else {
++ new_array = NULL;
++ }
+ }
+ if (!addr) {
+ pr_err("memblock: Failed to double %s array from %ld to %ld entries !\n",
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index 542d25f77be803..74a996a3508e16 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -1908,13 +1908,12 @@ static inline bool boost_watermark(struct zone *zone)
+ * can claim the whole pageblock for the requested migratetype. If not, we check
+ * the pageblock for constituent pages; if at least half of the pages are free
+ * or compatible, we can still claim the whole block, so pages freed in the
+- * future will be put on the correct free list. Otherwise, we isolate exactly
+- * the order we need from the fallback block and leave its migratetype alone.
++ * future will be put on the correct free list.
+ */
+ static struct page *
+-steal_suitable_fallback(struct zone *zone, struct page *page,
+- int current_order, int order, int start_type,
+- unsigned int alloc_flags, bool whole_block)
++try_to_steal_block(struct zone *zone, struct page *page,
++ int current_order, int order, int start_type,
++ unsigned int alloc_flags)
+ {
+ int free_pages, movable_pages, alike_pages;
+ unsigned long start_pfn;
+@@ -1927,7 +1926,7 @@ steal_suitable_fallback(struct zone *zone, struct page *page,
+ * highatomic accounting.
+ */
+ if (is_migrate_highatomic(block_type))
+- goto single_page;
++ return NULL;
+
+ /* Take ownership for orders >= pageblock_order */
+ if (current_order >= pageblock_order) {
+@@ -1948,14 +1947,10 @@ steal_suitable_fallback(struct zone *zone, struct page *page,
+ if (boost_watermark(zone) && (alloc_flags & ALLOC_KSWAPD))
+ set_bit(ZONE_BOOSTED_WATERMARK, &zone->flags);
+
+- /* We are not allowed to try stealing from the whole block */
+- if (!whole_block)
+- goto single_page;
+-
+ /* moving whole block can fail due to zone boundary conditions */
+ if (!prep_move_freepages_block(zone, page, &start_pfn, &free_pages,
+ &movable_pages))
+- goto single_page;
++ return NULL;
+
+ /*
+ * Determine how many pages are compatible with our allocation.
+@@ -1988,9 +1983,7 @@ steal_suitable_fallback(struct zone *zone, struct page *page,
+ return __rmqueue_smallest(zone, order, start_type);
+ }
+
+-single_page:
+- page_del_and_expand(zone, page, order, current_order, block_type);
+- return page;
++ return NULL;
+ }
+
+ /*
+@@ -2172,17 +2165,15 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
+ }
+
+ /*
+- * Try finding a free buddy page on the fallback list and put it on the free
+- * list of requested migratetype, possibly along with other pages from the same
+- * block, depending on fragmentation avoidance heuristics. Returns true if
+- * fallback was found so that __rmqueue_smallest() can grab it.
++ * Try to allocate from some fallback migratetype by claiming the entire block,
++ * i.e. converting it to the allocation's start migratetype.
+ *
+ * The use of signed ints for order and current_order is a deliberate
+ * deviation from the rest of this file, to make the for loop
+ * condition simpler.
+ */
+ static __always_inline struct page *
+-__rmqueue_fallback(struct zone *zone, int order, int start_migratetype,
++__rmqueue_claim(struct zone *zone, int order, int start_migratetype,
+ unsigned int alloc_flags)
+ {
+ struct free_area *area;
+@@ -2213,58 +2204,66 @@ __rmqueue_fallback(struct zone *zone, int order, int start_migratetype,
+ if (fallback_mt == -1)
+ continue;
+
+- /*
+- * We cannot steal all free pages from the pageblock and the
+- * requested migratetype is movable. In that case it's better to
+- * steal and split the smallest available page instead of the
+- * largest available page, because even if the next movable
+- * allocation falls back into a different pageblock than this
+- * one, it won't cause permanent fragmentation.
+- */
+- if (!can_steal && start_migratetype == MIGRATE_MOVABLE
+- && current_order > order)
+- goto find_smallest;
++ if (!can_steal)
++ break;
+
+- goto do_steal;
++ page = get_page_from_free_area(area, fallback_mt);
++ page = try_to_steal_block(zone, page, current_order, order,
++ start_migratetype, alloc_flags);
++ if (page) {
++ trace_mm_page_alloc_extfrag(page, order, current_order,
++ start_migratetype, fallback_mt);
++ return page;
++ }
+ }
+
+ return NULL;
++}
++
++/*
++ * Try to steal a single page from some fallback migratetype. Leave the rest of
++ * the block as its current migratetype, potentially causing fragmentation.
++ */
++static __always_inline struct page *
++__rmqueue_steal(struct zone *zone, int order, int start_migratetype)
++{
++ struct free_area *area;
++ int current_order;
++ struct page *page;
++ int fallback_mt;
++ bool can_steal;
+
+-find_smallest:
+ for (current_order = order; current_order < NR_PAGE_ORDERS; current_order++) {
+ area = &(zone->free_area[current_order]);
+ fallback_mt = find_suitable_fallback(area, current_order,
+ start_migratetype, false, &can_steal);
+- if (fallback_mt != -1)
+- break;
+- }
+-
+- /*
+- * This should not happen - we already found a suitable fallback
+- * when looking for the largest page.
+- */
+- VM_BUG_ON(current_order > MAX_PAGE_ORDER);
+-
+-do_steal:
+- page = get_page_from_free_area(area, fallback_mt);
+-
+- /* take off list, maybe claim block, expand remainder */
+- page = steal_suitable_fallback(zone, page, current_order, order,
+- start_migratetype, alloc_flags, can_steal);
++ if (fallback_mt == -1)
++ continue;
+
+- trace_mm_page_alloc_extfrag(page, order, current_order,
+- start_migratetype, fallback_mt);
++ page = get_page_from_free_area(area, fallback_mt);
++ page_del_and_expand(zone, page, order, current_order, fallback_mt);
++ trace_mm_page_alloc_extfrag(page, order, current_order,
++ start_migratetype, fallback_mt);
++ return page;
++ }
+
+- return page;
++ return NULL;
+ }
+
++enum rmqueue_mode {
++ RMQUEUE_NORMAL,
++ RMQUEUE_CMA,
++ RMQUEUE_CLAIM,
++ RMQUEUE_STEAL,
++};
++
+ /*
+ * Do the hard work of removing an element from the buddy allocator.
+ * Call me with the zone->lock already held.
+ */
+ static __always_inline struct page *
+ __rmqueue(struct zone *zone, unsigned int order, int migratetype,
+- unsigned int alloc_flags)
++ unsigned int alloc_flags, enum rmqueue_mode *mode)
+ {
+ struct page *page;
+
+@@ -2283,16 +2282,49 @@ __rmqueue(struct zone *zone, unsigned int order, int migratetype,
+ }
+ }
+
+- page = __rmqueue_smallest(zone, order, migratetype);
+- if (unlikely(!page)) {
+- if (alloc_flags & ALLOC_CMA)
++ /*
++ * First try the freelists of the requested migratetype, then try
++ * fallbacks modes with increasing levels of fragmentation risk.
++ *
++ * The fallback logic is expensive and rmqueue_bulk() calls in
++ * a loop with the zone->lock held, meaning the freelists are
++ * not subject to any outside changes. Remember in *mode where
++ * we found pay dirt, to save us the search on the next call.
++ */
++ switch (*mode) {
++ case RMQUEUE_NORMAL:
++ page = __rmqueue_smallest(zone, order, migratetype);
++ if (page)
++ return page;
++ fallthrough;
++ case RMQUEUE_CMA:
++ if (alloc_flags & ALLOC_CMA) {
+ page = __rmqueue_cma_fallback(zone, order);
+-
+- if (!page)
+- page = __rmqueue_fallback(zone, order, migratetype,
+- alloc_flags);
+- }
+- return page;
++ if (page) {
++ *mode = RMQUEUE_CMA;
++ return page;
++ }
++ }
++ fallthrough;
++ case RMQUEUE_CLAIM:
++ page = __rmqueue_claim(zone, order, migratetype, alloc_flags);
++ if (page) {
++ /* Replenished preferred freelist, back to normal mode. */
++ *mode = RMQUEUE_NORMAL;
++ return page;
++ }
++ fallthrough;
++ case RMQUEUE_STEAL:
++ if (!(alloc_flags & ALLOC_NOFRAGMENT)) {
++ page = __rmqueue_steal(zone, order, migratetype);
++ if (page) {
++ *mode = RMQUEUE_STEAL;
++ return page;
++ }
++ }
++ }
++
++ return NULL;
+ }
+
+ /*
+@@ -2304,13 +2336,14 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
+ unsigned long count, struct list_head *list,
+ int migratetype, unsigned int alloc_flags)
+ {
++ enum rmqueue_mode rmqm = RMQUEUE_NORMAL;
+ unsigned long flags;
+ int i;
+
+ spin_lock_irqsave(&zone->lock, flags);
+ for (i = 0; i < count; ++i) {
+ struct page *page = __rmqueue(zone, order, migratetype,
+- alloc_flags);
++ alloc_flags, &rmqm);
+ if (unlikely(page == NULL))
+ break;
+
+@@ -2911,7 +2944,9 @@ struct page *rmqueue_buddy(struct zone *preferred_zone, struct zone *zone,
+ if (alloc_flags & ALLOC_HIGHATOMIC)
+ page = __rmqueue_smallest(zone, order, MIGRATE_HIGHATOMIC);
+ if (!page) {
+- page = __rmqueue(zone, order, migratetype, alloc_flags);
++ enum rmqueue_mode rmqm = RMQUEUE_NORMAL;
++
++ page = __rmqueue(zone, order, migratetype, alloc_flags, &rmqm);
+
+ /*
+ * If the allocation fails, allow OOM handling and
+diff --git a/mm/vmalloc.c b/mm/vmalloc.c
+index 61981ee1c9d2f7..8aa7eea9b26fb9 100644
+--- a/mm/vmalloc.c
++++ b/mm/vmalloc.c
+@@ -1940,7 +1940,7 @@ static inline void setup_vmalloc_vm(struct vm_struct *vm,
+ {
+ vm->flags = flags;
+ vm->addr = (void *)va->va_start;
+- vm->size = va_size(va);
++ vm->size = vm->requested_size = va_size(va);
+ vm->caller = caller;
+ va->vm = vm;
+ }
+@@ -3133,6 +3133,7 @@ struct vm_struct *__get_vm_area_node(unsigned long size,
+
+ area->flags = flags;
+ area->caller = caller;
++ area->requested_size = requested_size;
+
+ va = alloc_vmap_area(size, align, start, end, node, gfp_mask, 0, area);
+ if (IS_ERR(va)) {
+@@ -4067,6 +4068,8 @@ EXPORT_SYMBOL(vzalloc_node_noprof);
+ */
+ void *vrealloc_noprof(const void *p, size_t size, gfp_t flags)
+ {
++ struct vm_struct *vm = NULL;
++ size_t alloced_size = 0;
+ size_t old_size = 0;
+ void *n;
+
+@@ -4076,15 +4079,17 @@ void *vrealloc_noprof(const void *p, size_t size, gfp_t flags)
+ }
+
+ if (p) {
+- struct vm_struct *vm;
+-
+ vm = find_vm_area(p);
+ if (unlikely(!vm)) {
+ WARN(1, "Trying to vrealloc() nonexistent vm area (%p)\n", p);
+ return NULL;
+ }
+
+- old_size = get_vm_area_size(vm);
++ alloced_size = get_vm_area_size(vm);
++ old_size = vm->requested_size;
++ if (WARN(alloced_size < old_size,
++ "vrealloc() has mismatched area vs requested sizes (%p)\n", p))
++ return NULL;
+ }
+
+ /*
+@@ -4092,14 +4097,26 @@ void *vrealloc_noprof(const void *p, size_t size, gfp_t flags)
+ * would be a good heuristic for when to shrink the vm_area?
+ */
+ if (size <= old_size) {
+- /* Zero out spare memory. */
+- if (want_init_on_alloc(flags))
++ /* Zero out "freed" memory. */
++ if (want_init_on_free())
+ memset((void *)p + size, 0, old_size - size);
++ vm->requested_size = size;
+ kasan_poison_vmalloc(p + size, old_size - size);
+- kasan_unpoison_vmalloc(p, size, KASAN_VMALLOC_PROT_NORMAL);
+ return (void *)p;
+ }
+
++ /*
++ * We already have the bytes available in the allocation; use them.
++ */
++ if (size <= alloced_size) {
++ kasan_unpoison_vmalloc(p + old_size, size - old_size,
++ KASAN_VMALLOC_PROT_NORMAL);
++ /* Zero out "alloced" memory. */
++ if (want_init_on_alloc(flags))
++ memset((void *)p + old_size, 0, size - old_size);
++ vm->requested_size = size;
++ }
++
+ /* TODO: Grow the vm_area, i.e. allocate and map additional pages. */
+ n = __vmalloc_noprof(size, flags);
+ if (!n)
+diff --git a/net/can/gw.c b/net/can/gw.c
+index ef93293c1fae39..55eccb1c7620c0 100644
+--- a/net/can/gw.c
++++ b/net/can/gw.c
+@@ -130,7 +130,7 @@ struct cgw_job {
+ u32 handled_frames;
+ u32 dropped_frames;
+ u32 deleted_frames;
+- struct cf_mod mod;
++ struct cf_mod __rcu *cf_mod;
+ union {
+ /* CAN frame data source */
+ struct net_device *dev;
+@@ -459,6 +459,7 @@ static void can_can_gw_rcv(struct sk_buff *skb, void *data)
+ struct cgw_job *gwj = (struct cgw_job *)data;
+ struct canfd_frame *cf;
+ struct sk_buff *nskb;
++ struct cf_mod *mod;
+ int modidx = 0;
+
+ /* process strictly Classic CAN or CAN FD frames */
+@@ -506,7 +507,8 @@ static void can_can_gw_rcv(struct sk_buff *skb, void *data)
+ * When there is at least one modification function activated,
+ * we need to copy the skb as we want to modify skb->data.
+ */
+- if (gwj->mod.modfunc[0])
++ mod = rcu_dereference(gwj->cf_mod);
++ if (mod->modfunc[0])
+ nskb = skb_copy(skb, GFP_ATOMIC);
+ else
+ nskb = skb_clone(skb, GFP_ATOMIC);
+@@ -529,8 +531,8 @@ static void can_can_gw_rcv(struct sk_buff *skb, void *data)
+ cf = (struct canfd_frame *)nskb->data;
+
+ /* perform preprocessed modification functions if there are any */
+- while (modidx < MAX_MODFUNCTIONS && gwj->mod.modfunc[modidx])
+- (*gwj->mod.modfunc[modidx++])(cf, &gwj->mod);
++ while (modidx < MAX_MODFUNCTIONS && mod->modfunc[modidx])
++ (*mod->modfunc[modidx++])(cf, mod);
+
+ /* Has the CAN frame been modified? */
+ if (modidx) {
+@@ -546,11 +548,11 @@ static void can_can_gw_rcv(struct sk_buff *skb, void *data)
+ }
+
+ /* check for checksum updates */
+- if (gwj->mod.csumfunc.crc8)
+- (*gwj->mod.csumfunc.crc8)(cf, &gwj->mod.csum.crc8);
++ if (mod->csumfunc.crc8)
++ (*mod->csumfunc.crc8)(cf, &mod->csum.crc8);
+
+- if (gwj->mod.csumfunc.xor)
+- (*gwj->mod.csumfunc.xor)(cf, &gwj->mod.csum.xor);
++ if (mod->csumfunc.xor)
++ (*mod->csumfunc.xor)(cf, &mod->csum.xor);
+ }
+
+ /* clear the skb timestamp if not configured the other way */
+@@ -581,9 +583,20 @@ static void cgw_job_free_rcu(struct rcu_head *rcu_head)
+ {
+ struct cgw_job *gwj = container_of(rcu_head, struct cgw_job, rcu);
+
++ /* cgw_job::cf_mod is always accessed from the same cgw_job object within
++ * the same RCU read section. Once cgw_job is scheduled for removal,
++ * cf_mod can also be removed without mandating an additional grace period.
++ */
++ kfree(rcu_access_pointer(gwj->cf_mod));
+ kmem_cache_free(cgw_cache, gwj);
+ }
+
++/* Return cgw_job::cf_mod with RTNL protected section */
++static struct cf_mod *cgw_job_cf_mod(struct cgw_job *gwj)
++{
++ return rcu_dereference_protected(gwj->cf_mod, rtnl_is_locked());
++}
++
+ static int cgw_notifier(struct notifier_block *nb,
+ unsigned long msg, void *ptr)
+ {
+@@ -616,6 +629,7 @@ static int cgw_put_job(struct sk_buff *skb, struct cgw_job *gwj, int type,
+ {
+ struct rtcanmsg *rtcan;
+ struct nlmsghdr *nlh;
++ struct cf_mod *mod;
+
+ nlh = nlmsg_put(skb, pid, seq, type, sizeof(*rtcan), flags);
+ if (!nlh)
+@@ -650,82 +664,83 @@ static int cgw_put_job(struct sk_buff *skb, struct cgw_job *gwj, int type,
+ goto cancel;
+ }
+
++ mod = cgw_job_cf_mod(gwj);
+ if (gwj->flags & CGW_FLAGS_CAN_FD) {
+ struct cgw_fdframe_mod mb;
+
+- if (gwj->mod.modtype.and) {
+- memcpy(&mb.cf, &gwj->mod.modframe.and, sizeof(mb.cf));
+- mb.modtype = gwj->mod.modtype.and;
++ if (mod->modtype.and) {
++ memcpy(&mb.cf, &mod->modframe.and, sizeof(mb.cf));
++ mb.modtype = mod->modtype.and;
+ if (nla_put(skb, CGW_FDMOD_AND, sizeof(mb), &mb) < 0)
+ goto cancel;
+ }
+
+- if (gwj->mod.modtype.or) {
+- memcpy(&mb.cf, &gwj->mod.modframe.or, sizeof(mb.cf));
+- mb.modtype = gwj->mod.modtype.or;
++ if (mod->modtype.or) {
++ memcpy(&mb.cf, &mod->modframe.or, sizeof(mb.cf));
++ mb.modtype = mod->modtype.or;
+ if (nla_put(skb, CGW_FDMOD_OR, sizeof(mb), &mb) < 0)
+ goto cancel;
+ }
+
+- if (gwj->mod.modtype.xor) {
+- memcpy(&mb.cf, &gwj->mod.modframe.xor, sizeof(mb.cf));
+- mb.modtype = gwj->mod.modtype.xor;
++ if (mod->modtype.xor) {
++ memcpy(&mb.cf, &mod->modframe.xor, sizeof(mb.cf));
++ mb.modtype = mod->modtype.xor;
+ if (nla_put(skb, CGW_FDMOD_XOR, sizeof(mb), &mb) < 0)
+ goto cancel;
+ }
+
+- if (gwj->mod.modtype.set) {
+- memcpy(&mb.cf, &gwj->mod.modframe.set, sizeof(mb.cf));
+- mb.modtype = gwj->mod.modtype.set;
++ if (mod->modtype.set) {
++ memcpy(&mb.cf, &mod->modframe.set, sizeof(mb.cf));
++ mb.modtype = mod->modtype.set;
+ if (nla_put(skb, CGW_FDMOD_SET, sizeof(mb), &mb) < 0)
+ goto cancel;
+ }
+ } else {
+ struct cgw_frame_mod mb;
+
+- if (gwj->mod.modtype.and) {
+- memcpy(&mb.cf, &gwj->mod.modframe.and, sizeof(mb.cf));
+- mb.modtype = gwj->mod.modtype.and;
++ if (mod->modtype.and) {
++ memcpy(&mb.cf, &mod->modframe.and, sizeof(mb.cf));
++ mb.modtype = mod->modtype.and;
+ if (nla_put(skb, CGW_MOD_AND, sizeof(mb), &mb) < 0)
+ goto cancel;
+ }
+
+- if (gwj->mod.modtype.or) {
+- memcpy(&mb.cf, &gwj->mod.modframe.or, sizeof(mb.cf));
+- mb.modtype = gwj->mod.modtype.or;
++ if (mod->modtype.or) {
++ memcpy(&mb.cf, &mod->modframe.or, sizeof(mb.cf));
++ mb.modtype = mod->modtype.or;
+ if (nla_put(skb, CGW_MOD_OR, sizeof(mb), &mb) < 0)
+ goto cancel;
+ }
+
+- if (gwj->mod.modtype.xor) {
+- memcpy(&mb.cf, &gwj->mod.modframe.xor, sizeof(mb.cf));
+- mb.modtype = gwj->mod.modtype.xor;
++ if (mod->modtype.xor) {
++ memcpy(&mb.cf, &mod->modframe.xor, sizeof(mb.cf));
++ mb.modtype = mod->modtype.xor;
+ if (nla_put(skb, CGW_MOD_XOR, sizeof(mb), &mb) < 0)
+ goto cancel;
+ }
+
+- if (gwj->mod.modtype.set) {
+- memcpy(&mb.cf, &gwj->mod.modframe.set, sizeof(mb.cf));
+- mb.modtype = gwj->mod.modtype.set;
++ if (mod->modtype.set) {
++ memcpy(&mb.cf, &mod->modframe.set, sizeof(mb.cf));
++ mb.modtype = mod->modtype.set;
+ if (nla_put(skb, CGW_MOD_SET, sizeof(mb), &mb) < 0)
+ goto cancel;
+ }
+ }
+
+- if (gwj->mod.uid) {
+- if (nla_put_u32(skb, CGW_MOD_UID, gwj->mod.uid) < 0)
++ if (mod->uid) {
++ if (nla_put_u32(skb, CGW_MOD_UID, mod->uid) < 0)
+ goto cancel;
+ }
+
+- if (gwj->mod.csumfunc.crc8) {
++ if (mod->csumfunc.crc8) {
+ if (nla_put(skb, CGW_CS_CRC8, CGW_CS_CRC8_LEN,
+- &gwj->mod.csum.crc8) < 0)
++ &mod->csum.crc8) < 0)
+ goto cancel;
+ }
+
+- if (gwj->mod.csumfunc.xor) {
++ if (mod->csumfunc.xor) {
+ if (nla_put(skb, CGW_CS_XOR, CGW_CS_XOR_LEN,
+- &gwj->mod.csum.xor) < 0)
++ &mod->csum.xor) < 0)
+ goto cancel;
+ }
+
+@@ -1059,7 +1074,7 @@ static int cgw_create_job(struct sk_buff *skb, struct nlmsghdr *nlh,
+ struct net *net = sock_net(skb->sk);
+ struct rtcanmsg *r;
+ struct cgw_job *gwj;
+- struct cf_mod mod;
++ struct cf_mod *mod;
+ struct can_can_gw ccgw;
+ u8 limhops = 0;
+ int err = 0;
+@@ -1078,37 +1093,48 @@ static int cgw_create_job(struct sk_buff *skb, struct nlmsghdr *nlh,
+ if (r->gwtype != CGW_TYPE_CAN_CAN)
+ return -EINVAL;
+
+- err = cgw_parse_attr(nlh, &mod, CGW_TYPE_CAN_CAN, &ccgw, &limhops);
++ mod = kmalloc(sizeof(*mod), GFP_KERNEL);
++ if (!mod)
++ return -ENOMEM;
++
++ err = cgw_parse_attr(nlh, mod, CGW_TYPE_CAN_CAN, &ccgw, &limhops);
+ if (err < 0)
+- return err;
++ goto out_free_cf;
+
+- if (mod.uid) {
++ if (mod->uid) {
+ ASSERT_RTNL();
+
+ /* check for updating an existing job with identical uid */
+ hlist_for_each_entry(gwj, &net->can.cgw_list, list) {
+- if (gwj->mod.uid != mod.uid)
++ struct cf_mod *old_cf;
++
++ old_cf = cgw_job_cf_mod(gwj);
++ if (old_cf->uid != mod->uid)
+ continue;
+
+ /* interfaces & filters must be identical */
+- if (memcmp(&gwj->ccgw, &ccgw, sizeof(ccgw)))
+- return -EINVAL;
++ if (memcmp(&gwj->ccgw, &ccgw, sizeof(ccgw))) {
++ err = -EINVAL;
++ goto out_free_cf;
++ }
+
+- /* update modifications with disabled softirq & quit */
+- local_bh_disable();
+- memcpy(&gwj->mod, &mod, sizeof(mod));
+- local_bh_enable();
++ rcu_assign_pointer(gwj->cf_mod, mod);
++ kfree_rcu_mightsleep(old_cf);
+ return 0;
+ }
+ }
+
+ /* ifindex == 0 is not allowed for job creation */
+- if (!ccgw.src_idx || !ccgw.dst_idx)
+- return -ENODEV;
++ if (!ccgw.src_idx || !ccgw.dst_idx) {
++ err = -ENODEV;
++ goto out_free_cf;
++ }
+
+ gwj = kmem_cache_alloc(cgw_cache, GFP_KERNEL);
+- if (!gwj)
+- return -ENOMEM;
++ if (!gwj) {
++ err = -ENOMEM;
++ goto out_free_cf;
++ }
+
+ gwj->handled_frames = 0;
+ gwj->dropped_frames = 0;
+@@ -1118,7 +1144,7 @@ static int cgw_create_job(struct sk_buff *skb, struct nlmsghdr *nlh,
+ gwj->limit_hops = limhops;
+
+ /* insert already parsed information */
+- memcpy(&gwj->mod, &mod, sizeof(mod));
++ RCU_INIT_POINTER(gwj->cf_mod, mod);
+ memcpy(&gwj->ccgw, &ccgw, sizeof(ccgw));
+
+ err = -ENODEV;
+@@ -1152,9 +1178,11 @@ static int cgw_create_job(struct sk_buff *skb, struct nlmsghdr *nlh,
+ if (!err)
+ hlist_add_head_rcu(&gwj->list, &net->can.cgw_list);
+ out:
+- if (err)
++ if (err) {
+ kmem_cache_free(cgw_cache, gwj);
+-
++out_free_cf:
++ kfree(mod);
++ }
+ return err;
+ }
+
+@@ -1214,19 +1242,22 @@ static int cgw_remove_job(struct sk_buff *skb, struct nlmsghdr *nlh,
+
+ /* remove only the first matching entry */
+ hlist_for_each_entry_safe(gwj, nx, &net->can.cgw_list, list) {
++ struct cf_mod *cf_mod;
++
+ if (gwj->flags != r->flags)
+ continue;
+
+ if (gwj->limit_hops != limhops)
+ continue;
+
++ cf_mod = cgw_job_cf_mod(gwj);
+ /* we have a match when uid is enabled and identical */
+- if (gwj->mod.uid || mod.uid) {
+- if (gwj->mod.uid != mod.uid)
++ if (cf_mod->uid || mod.uid) {
++ if (cf_mod->uid != mod.uid)
+ continue;
+ } else {
+ /* no uid => check for identical modifications */
+- if (memcmp(&gwj->mod, &mod, sizeof(mod)))
++ if (memcmp(cf_mod, &mod, sizeof(mod)))
+ continue;
+ }
+
+diff --git a/net/core/filter.c b/net/core/filter.c
+index b0df9b7d16d3f3..6c8fbc96b14a3c 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -2509,6 +2509,7 @@ int skb_do_redirect(struct sk_buff *skb)
+ goto out_drop;
+ skb->dev = dev;
+ dev_sw_netstats_rx_add(dev, skb->len);
++ skb_scrub_packet(skb, false);
+ return -EAGAIN;
+ }
+ return flags & BPF_F_NEIGH ?
+diff --git a/net/core/netdev-genl.c b/net/core/netdev-genl.c
+index 7832abc5ca6e2f..9be2bdd2dca890 100644
+--- a/net/core/netdev-genl.c
++++ b/net/core/netdev-genl.c
+@@ -690,25 +690,66 @@ netdev_nl_stats_by_queue(struct net_device *netdev, struct sk_buff *rsp,
+ return 0;
+ }
+
++/**
++ * netdev_stat_queue_sum() - add up queue stats from range of queues
++ * @netdev: net_device
++ * @rx_start: index of the first Rx queue to query
++ * @rx_end: index after the last Rx queue (first *not* to query)
++ * @rx_sum: output Rx stats, should be already initialized
++ * @tx_start: index of the first Tx queue to query
++ * @tx_end: index after the last Tx queue (first *not* to query)
++ * @tx_sum: output Tx stats, should be already initialized
++ *
++ * Add stats from [start, end) range of queue IDs to *x_sum structs.
++ * The sum structs must be already initialized. Usually this
++ * helper is invoked from the .get_base_stats callbacks of drivers
++ * to account for stats of disabled queues. In that case the ranges
++ * are usually [netdev->real_num_*x_queues, netdev->num_*x_queues).
++ */
++void netdev_stat_queue_sum(struct net_device *netdev,
++ int rx_start, int rx_end,
++ struct netdev_queue_stats_rx *rx_sum,
++ int tx_start, int tx_end,
++ struct netdev_queue_stats_tx *tx_sum)
++{
++ const struct netdev_stat_ops *ops;
++ struct netdev_queue_stats_rx rx;
++ struct netdev_queue_stats_tx tx;
++ int i;
++
++ ops = netdev->stat_ops;
++
++ for (i = rx_start; i < rx_end; i++) {
++ memset(&rx, 0xff, sizeof(rx));
++ if (ops->get_queue_stats_rx)
++ ops->get_queue_stats_rx(netdev, i, &rx);
++ netdev_nl_stats_add(rx_sum, &rx, sizeof(rx));
++ }
++ for (i = tx_start; i < tx_end; i++) {
++ memset(&tx, 0xff, sizeof(tx));
++ if (ops->get_queue_stats_tx)
++ ops->get_queue_stats_tx(netdev, i, &tx);
++ netdev_nl_stats_add(tx_sum, &tx, sizeof(tx));
++ }
++}
++EXPORT_SYMBOL(netdev_stat_queue_sum);
++
+ static int
+ netdev_nl_stats_by_netdev(struct net_device *netdev, struct sk_buff *rsp,
+ const struct genl_info *info)
+ {
+- struct netdev_queue_stats_rx rx_sum, rx;
+- struct netdev_queue_stats_tx tx_sum, tx;
+- const struct netdev_stat_ops *ops;
++ struct netdev_queue_stats_rx rx_sum;
++ struct netdev_queue_stats_tx tx_sum;
+ void *hdr;
+- int i;
+
+- ops = netdev->stat_ops;
+ /* Netdev can't guarantee any complete counters */
+- if (!ops->get_base_stats)
++ if (!netdev->stat_ops->get_base_stats)
+ return 0;
+
+ memset(&rx_sum, 0xff, sizeof(rx_sum));
+ memset(&tx_sum, 0xff, sizeof(tx_sum));
+
+- ops->get_base_stats(netdev, &rx_sum, &tx_sum);
++ netdev->stat_ops->get_base_stats(netdev, &rx_sum, &tx_sum);
+
+ /* The op was there, but nothing reported, don't bother */
+ if (!memchr_inv(&rx_sum, 0xff, sizeof(rx_sum)) &&
+@@ -721,18 +762,8 @@ netdev_nl_stats_by_netdev(struct net_device *netdev, struct sk_buff *rsp,
+ if (nla_put_u32(rsp, NETDEV_A_QSTATS_IFINDEX, netdev->ifindex))
+ goto nla_put_failure;
+
+- for (i = 0; i < netdev->real_num_rx_queues; i++) {
+- memset(&rx, 0xff, sizeof(rx));
+- if (ops->get_queue_stats_rx)
+- ops->get_queue_stats_rx(netdev, i, &rx);
+- netdev_nl_stats_add(&rx_sum, &rx, sizeof(rx));
+- }
+- for (i = 0; i < netdev->real_num_tx_queues; i++) {
+- memset(&tx, 0xff, sizeof(tx));
+- if (ops->get_queue_stats_tx)
+- ops->get_queue_stats_tx(netdev, i, &tx);
+- netdev_nl_stats_add(&tx_sum, &tx, sizeof(tx));
+- }
++ netdev_stat_queue_sum(netdev, 0, netdev->real_num_rx_queues, &rx_sum,
++ 0, netdev->real_num_tx_queues, &tx_sum);
+
+ if (netdev_nl_stats_write_rx(rsp, &rx_sum) ||
+ netdev_nl_stats_write_tx(rsp, &tx_sum))
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index 54a8ea004da286..943ba80c9e4ff0 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -3209,16 +3209,13 @@ static void add_v4_addrs(struct inet6_dev *idev)
+ struct in6_addr addr;
+ struct net_device *dev;
+ struct net *net = dev_net(idev->dev);
+- int scope, plen, offset = 0;
++ int scope, plen;
+ u32 pflags = 0;
+
+ ASSERT_RTNL();
+
+ memset(&addr, 0, sizeof(struct in6_addr));
+- /* in case of IP6GRE the dev_addr is an IPv6 and therefore we use only the last 4 bytes */
+- if (idev->dev->addr_len == sizeof(struct in6_addr))
+- offset = sizeof(struct in6_addr) - 4;
+- memcpy(&addr.s6_addr32[3], idev->dev->dev_addr + offset, 4);
++ memcpy(&addr.s6_addr32[3], idev->dev->dev_addr, 4);
+
+ if (!(idev->dev->flags & IFF_POINTOPOINT) && idev->dev->type == ARPHRD_SIT) {
+ scope = IPV6_ADDR_COMPATv4;
+@@ -3529,7 +3526,13 @@ static void addrconf_gre_config(struct net_device *dev)
+ return;
+ }
+
+- if (dev->type == ARPHRD_ETHER) {
++ /* Generate the IPv6 link-local address using addrconf_addr_gen(),
++ * unless we have an IPv4 GRE device not bound to an IP address and
++ * which is in EUI64 mode (as __ipv6_isatap_ifid() would fail in this
++ * case). Such devices fall back to add_v4_addrs() instead.
++ */
++ if (!(dev->type == ARPHRD_IPGRE && *(__be32 *)dev->dev_addr == 0 &&
++ idev->cnf.addr_gen_mode == IN6_ADDR_GEN_MODE_EUI64)) {
+ addrconf_addr_gen(idev, true);
+ return;
+ }
+diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
+index 99e9b03d7fe193..e3deb89674b23d 100644
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -7412,6 +7412,7 @@ ieee80211_send_neg_ttlm_res(struct ieee80211_sub_if_data *sdata,
+ int hdr_len = offsetofend(struct ieee80211_mgmt, u.action.u.ttlm_res);
+ int ttlm_max_len = 2 + 1 + sizeof(struct ieee80211_ttlm_elem) + 1 +
+ 2 * 2 * IEEE80211_TTLM_NUM_TIDS;
++ u16 status_code;
+
+ skb = dev_alloc_skb(local->tx_headroom + hdr_len + ttlm_max_len);
+ if (!skb)
+@@ -7434,19 +7435,18 @@ ieee80211_send_neg_ttlm_res(struct ieee80211_sub_if_data *sdata,
+ WARN_ON(1);
+ fallthrough;
+ case NEG_TTLM_RES_REJECT:
+- mgmt->u.action.u.ttlm_res.status_code =
+- WLAN_STATUS_DENIED_TID_TO_LINK_MAPPING;
++ status_code = WLAN_STATUS_DENIED_TID_TO_LINK_MAPPING;
+ break;
+ case NEG_TTLM_RES_ACCEPT:
+- mgmt->u.action.u.ttlm_res.status_code = WLAN_STATUS_SUCCESS;
++ status_code = WLAN_STATUS_SUCCESS;
+ break;
+ case NEG_TTLM_RES_SUGGEST_PREFERRED:
+- mgmt->u.action.u.ttlm_res.status_code =
+- WLAN_STATUS_PREF_TID_TO_LINK_MAPPING_SUGGESTED;
++ status_code = WLAN_STATUS_PREF_TID_TO_LINK_MAPPING_SUGGESTED;
+ ieee80211_neg_ttlm_add_suggested_map(skb, neg_ttlm);
+ break;
+ }
+
++ mgmt->u.action.u.ttlm_res.status_code = cpu_to_le16(status_code);
+ ieee80211_tx_skb(sdata, skb);
+ }
+
+@@ -7612,7 +7612,7 @@ void ieee80211_process_neg_ttlm_res(struct ieee80211_sub_if_data *sdata,
+ * This can be better implemented in the future, to handle request
+ * rejections.
+ */
+- if (mgmt->u.action.u.ttlm_res.status_code != WLAN_STATUS_SUCCESS)
++ if (le16_to_cpu(mgmt->u.action.u.ttlm_res.status_code) != WLAN_STATUS_SUCCESS)
+ __ieee80211_disconnect(sdata);
+ }
+
+diff --git a/net/netfilter/ipset/ip_set_hash_gen.h b/net/netfilter/ipset/ip_set_hash_gen.h
+index cf3ce72c3de645..5251524b96afac 100644
+--- a/net/netfilter/ipset/ip_set_hash_gen.h
++++ b/net/netfilter/ipset/ip_set_hash_gen.h
+@@ -64,7 +64,7 @@ struct hbucket {
+ #define ahash_sizeof_regions(htable_bits) \
+ (ahash_numof_locks(htable_bits) * sizeof(struct ip_set_region))
+ #define ahash_region(n, htable_bits) \
+- ((n) % ahash_numof_locks(htable_bits))
++ ((n) / jhash_size(HTABLE_REGION_BITS))
+ #define ahash_bucket_start(h, htable_bits) \
+ ((htable_bits) < HTABLE_REGION_BITS ? 0 \
+ : (h) * jhash_size(HTABLE_REGION_BITS))
+diff --git a/net/netfilter/ipvs/ip_vs_xmit.c b/net/netfilter/ipvs/ip_vs_xmit.c
+index 3313bceb6cc99d..014f077403695f 100644
+--- a/net/netfilter/ipvs/ip_vs_xmit.c
++++ b/net/netfilter/ipvs/ip_vs_xmit.c
+@@ -119,13 +119,12 @@ __mtu_check_toobig_v6(const struct sk_buff *skb, u32 mtu)
+ return false;
+ }
+
+-/* Get route to daddr, update *saddr, optionally bind route to saddr */
++/* Get route to daddr, optionally bind route to saddr */
+ static struct rtable *do_output_route4(struct net *net, __be32 daddr,
+- int rt_mode, __be32 *saddr)
++ int rt_mode, __be32 *ret_saddr)
+ {
+ struct flowi4 fl4;
+ struct rtable *rt;
+- bool loop = false;
+
+ memset(&fl4, 0, sizeof(fl4));
+ fl4.daddr = daddr;
+@@ -135,23 +134,17 @@ static struct rtable *do_output_route4(struct net *net, __be32 daddr,
+ retry:
+ rt = ip_route_output_key(net, &fl4);
+ if (IS_ERR(rt)) {
+- /* Invalid saddr ? */
+- if (PTR_ERR(rt) == -EINVAL && *saddr &&
+- rt_mode & IP_VS_RT_MODE_CONNECT && !loop) {
+- *saddr = 0;
+- flowi4_update_output(&fl4, 0, daddr, 0);
+- goto retry;
+- }
+ IP_VS_DBG_RL("ip_route_output error, dest: %pI4\n", &daddr);
+ return NULL;
+- } else if (!*saddr && rt_mode & IP_VS_RT_MODE_CONNECT && fl4.saddr) {
++ }
++ if (rt_mode & IP_VS_RT_MODE_CONNECT && fl4.saddr) {
+ ip_rt_put(rt);
+- *saddr = fl4.saddr;
+ flowi4_update_output(&fl4, 0, daddr, fl4.saddr);
+- loop = true;
++ rt_mode = 0;
+ goto retry;
+ }
+- *saddr = fl4.saddr;
++ if (ret_saddr)
++ *ret_saddr = fl4.saddr;
+ return rt;
+ }
+
+@@ -344,19 +337,15 @@ __ip_vs_get_out_rt(struct netns_ipvs *ipvs, int skb_af, struct sk_buff *skb,
+ if (ret_saddr)
+ *ret_saddr = dest_dst->dst_saddr.ip;
+ } else {
+- __be32 saddr = htonl(INADDR_ANY);
+-
+ noref = 0;
+
+ /* For such unconfigured boxes avoid many route lookups
+ * for performance reasons because we do not remember saddr
+ */
+ rt_mode &= ~IP_VS_RT_MODE_CONNECT;
+- rt = do_output_route4(net, daddr, rt_mode, &saddr);
++ rt = do_output_route4(net, daddr, rt_mode, ret_saddr);
+ if (!rt)
+ goto err_unreach;
+- if (ret_saddr)
+- *ret_saddr = saddr;
+ }
+
+ local = (rt->rt_flags & RTCF_LOCAL) ? 1 : 0;
+diff --git a/net/openvswitch/actions.c b/net/openvswitch/actions.c
+index 61fea7baae5d5c..2f22ca59586f25 100644
+--- a/net/openvswitch/actions.c
++++ b/net/openvswitch/actions.c
+@@ -975,8 +975,7 @@ static int output_userspace(struct datapath *dp, struct sk_buff *skb,
+ upcall.cmd = OVS_PACKET_CMD_ACTION;
+ upcall.mru = OVS_CB(skb)->mru;
+
+- for (a = nla_data(attr), rem = nla_len(attr); rem > 0;
+- a = nla_next(a, &rem)) {
++ nla_for_each_nested(a, attr, rem) {
+ switch (nla_type(a)) {
+ case OVS_USERSPACE_ATTR_USERDATA:
+ upcall.userdata = a;
+diff --git a/net/sched/sch_htb.c b/net/sched/sch_htb.c
+index 4b9a639b642e1e..14bf71f570570f 100644
+--- a/net/sched/sch_htb.c
++++ b/net/sched/sch_htb.c
+@@ -348,7 +348,8 @@ static void htb_add_to_wait_tree(struct htb_sched *q,
+ */
+ static inline void htb_next_rb_node(struct rb_node **n)
+ {
+- *n = rb_next(*n);
++ if (*n)
++ *n = rb_next(*n);
+ }
+
+ /**
+@@ -609,8 +610,8 @@ static inline void htb_activate(struct htb_sched *q, struct htb_class *cl)
+ */
+ static inline void htb_deactivate(struct htb_sched *q, struct htb_class *cl)
+ {
+- WARN_ON(!cl->prio_activity);
+-
++ if (!cl->prio_activity)
++ return;
+ htb_deactivate_prios(q, cl);
+ cl->prio_activity = 0;
+ }
+@@ -1485,8 +1486,6 @@ static void htb_qlen_notify(struct Qdisc *sch, unsigned long arg)
+ {
+ struct htb_class *cl = (struct htb_class *)arg;
+
+- if (!cl->prio_activity)
+- return;
+ htb_deactivate(qdisc_priv(sch), cl);
+ }
+
+@@ -1740,8 +1739,7 @@ static int htb_delete(struct Qdisc *sch, unsigned long arg,
+ if (cl->parent)
+ cl->parent->children--;
+
+- if (cl->prio_activity)
+- htb_deactivate(q, cl);
++ htb_deactivate(q, cl);
+
+ if (cl->cmode != HTB_CAN_SEND)
+ htb_safe_rb_erase(&cl->pq_node,
+@@ -1949,8 +1947,7 @@ static int htb_change_class(struct Qdisc *sch, u32 classid,
+ /* turn parent into inner node */
+ qdisc_purge_queue(parent->leaf.q);
+ parent_qdisc = parent->leaf.q;
+- if (parent->prio_activity)
+- htb_deactivate(q, parent);
++ htb_deactivate(q, parent);
+
+ /* remove from evt list because of level change */
+ if (parent->cmode != HTB_CAN_SEND) {
+diff --git a/net/wireless/scan.c b/net/wireless/scan.c
+index cd212432952100..36dbd745838e78 100644
+--- a/net/wireless/scan.c
++++ b/net/wireless/scan.c
+@@ -2681,7 +2681,7 @@ cfg80211_defrag_mle(const struct element *mle, const u8 *ie, size_t ielen,
+ /* Required length for first defragmentation */
+ buf_len = mle->datalen - 1;
+ for_each_element(elem, mle->data + mle->datalen,
+- ielen - sizeof(*mle) + mle->datalen) {
++ ie + ielen - mle->data - mle->datalen) {
+ if (elem->id != WLAN_EID_FRAGMENT)
+ break;
+
+diff --git a/rust/bindings/lib.rs b/rust/bindings/lib.rs
+index 014af0d1fc70cb..a08eb5518cac5d 100644
+--- a/rust/bindings/lib.rs
++++ b/rust/bindings/lib.rs
+@@ -26,6 +26,7 @@
+
+ #[allow(dead_code)]
+ #[allow(clippy::undocumented_unsafe_blocks)]
++#[cfg_attr(CONFIG_RUSTC_HAS_UNNECESSARY_TRANSMUTES, allow(unnecessary_transmutes))]
+ mod bindings_raw {
+ // Manual definition for blocklisted types.
+ type __kernel_size_t = usize;
+diff --git a/rust/kernel/alloc/kvec.rs b/rust/kernel/alloc/kvec.rs
+index ae9d072741cedb..87a71fd40c3cad 100644
+--- a/rust/kernel/alloc/kvec.rs
++++ b/rust/kernel/alloc/kvec.rs
+@@ -2,6 +2,9 @@
+
+ //! Implementation of [`Vec`].
+
++// May not be needed in Rust 1.87.0 (pending beta backport).
++#![allow(clippy::ptr_eq)]
++
+ use super::{
+ allocator::{KVmalloc, Kmalloc, Vmalloc},
+ layout::ArrayLayout,
+diff --git a/rust/kernel/list.rs b/rust/kernel/list.rs
+index fb93330f4af48c..3841ba02ef7a38 100644
+--- a/rust/kernel/list.rs
++++ b/rust/kernel/list.rs
+@@ -4,6 +4,9 @@
+
+ //! A linked list implementation.
+
++// May not be needed in Rust 1.87.0 (pending beta backport).
++#![allow(clippy::ptr_eq)]
++
+ use crate::init::PinInit;
+ use crate::sync::ArcBorrow;
+ use crate::types::Opaque;
+diff --git a/rust/kernel/str.rs b/rust/kernel/str.rs
+index 28e2201604d678..474ddddd43e4d5 100644
+--- a/rust/kernel/str.rs
++++ b/rust/kernel/str.rs
+@@ -56,7 +56,7 @@ fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+ b'\r' => f.write_str("\\r")?,
+ // Printable characters.
+ 0x20..=0x7e => f.write_char(b as char)?,
+- _ => write!(f, "\\x{:02x}", b)?,
++ _ => write!(f, "\\x{b:02x}")?,
+ }
+ }
+ Ok(())
+@@ -92,7 +92,7 @@ fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+ b'\\' => f.write_str("\\\\")?,
+ // Printable characters.
+ 0x20..=0x7e => f.write_char(b as char)?,
+- _ => write!(f, "\\x{:02x}", b)?,
++ _ => write!(f, "\\x{b:02x}")?,
+ }
+ }
+ f.write_char('"')
+@@ -401,7 +401,7 @@ fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+ // Printable character.
+ f.write_char(c as char)?;
+ } else {
+- write!(f, "\\x{:02x}", c)?;
++ write!(f, "\\x{c:02x}")?;
+ }
+ }
+ Ok(())
+@@ -433,7 +433,7 @@ fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+ // Printable characters.
+ b'\"' => f.write_str("\\\"")?,
+ 0x20..=0x7e => f.write_char(c as char)?,
+- _ => write!(f, "\\x{:02x}", c)?,
++ _ => write!(f, "\\x{c:02x}")?,
+ }
+ }
+ f.write_str("\"")
+@@ -595,13 +595,13 @@ fn test_cstr_as_str_unchecked() {
+ #[test]
+ fn test_cstr_display() {
+ let hello_world = CStr::from_bytes_with_nul(b"hello, world!\0").unwrap();
+- assert_eq!(format!("{}", hello_world), "hello, world!");
++ assert_eq!(format!("{hello_world}"), "hello, world!");
+ let non_printables = CStr::from_bytes_with_nul(b"\x01\x09\x0a\0").unwrap();
+- assert_eq!(format!("{}", non_printables), "\\x01\\x09\\x0a");
++ assert_eq!(format!("{non_printables}"), "\\x01\\x09\\x0a");
+ let non_ascii = CStr::from_bytes_with_nul(b"d\xe9j\xe0 vu\0").unwrap();
+- assert_eq!(format!("{}", non_ascii), "d\\xe9j\\xe0 vu");
++ assert_eq!(format!("{non_ascii}"), "d\\xe9j\\xe0 vu");
+ let good_bytes = CStr::from_bytes_with_nul(b"\xf0\x9f\xa6\x80\0").unwrap();
+- assert_eq!(format!("{}", good_bytes), "\\xf0\\x9f\\xa6\\x80");
++ assert_eq!(format!("{good_bytes}"), "\\xf0\\x9f\\xa6\\x80");
+ }
+
+ #[test]
+@@ -612,47 +612,47 @@ fn test_cstr_display_all_bytes() {
+ bytes[i as usize] = i.wrapping_add(1);
+ }
+ let cstr = CStr::from_bytes_with_nul(&bytes).unwrap();
+- assert_eq!(format!("{}", cstr), ALL_ASCII_CHARS);
++ assert_eq!(format!("{cstr}"), ALL_ASCII_CHARS);
+ }
+
+ #[test]
+ fn test_cstr_debug() {
+ let hello_world = CStr::from_bytes_with_nul(b"hello, world!\0").unwrap();
+- assert_eq!(format!("{:?}", hello_world), "\"hello, world!\"");
++ assert_eq!(format!("{hello_world:?}"), "\"hello, world!\"");
+ let non_printables = CStr::from_bytes_with_nul(b"\x01\x09\x0a\0").unwrap();
+- assert_eq!(format!("{:?}", non_printables), "\"\\x01\\x09\\x0a\"");
++ assert_eq!(format!("{non_printables:?}"), "\"\\x01\\x09\\x0a\"");
+ let non_ascii = CStr::from_bytes_with_nul(b"d\xe9j\xe0 vu\0").unwrap();
+- assert_eq!(format!("{:?}", non_ascii), "\"d\\xe9j\\xe0 vu\"");
++ assert_eq!(format!("{non_ascii:?}"), "\"d\\xe9j\\xe0 vu\"");
+ let good_bytes = CStr::from_bytes_with_nul(b"\xf0\x9f\xa6\x80\0").unwrap();
+- assert_eq!(format!("{:?}", good_bytes), "\"\\xf0\\x9f\\xa6\\x80\"");
++ assert_eq!(format!("{good_bytes:?}"), "\"\\xf0\\x9f\\xa6\\x80\"");
+ }
+
+ #[test]
+ fn test_bstr_display() {
+ let hello_world = BStr::from_bytes(b"hello, world!");
+- assert_eq!(format!("{}", hello_world), "hello, world!");
++ assert_eq!(format!("{hello_world}"), "hello, world!");
+ let escapes = BStr::from_bytes(b"_\t_\n_\r_\\_\'_\"_");
+- assert_eq!(format!("{}", escapes), "_\\t_\\n_\\r_\\_'_\"_");
++ assert_eq!(format!("{escapes}"), "_\\t_\\n_\\r_\\_'_\"_");
+ let others = BStr::from_bytes(b"\x01");
+- assert_eq!(format!("{}", others), "\\x01");
++ assert_eq!(format!("{others}"), "\\x01");
+ let non_ascii = BStr::from_bytes(b"d\xe9j\xe0 vu");
+- assert_eq!(format!("{}", non_ascii), "d\\xe9j\\xe0 vu");
++ assert_eq!(format!("{non_ascii}"), "d\\xe9j\\xe0 vu");
+ let good_bytes = BStr::from_bytes(b"\xf0\x9f\xa6\x80");
+- assert_eq!(format!("{}", good_bytes), "\\xf0\\x9f\\xa6\\x80");
++ assert_eq!(format!("{good_bytes}"), "\\xf0\\x9f\\xa6\\x80");
+ }
+
+ #[test]
+ fn test_bstr_debug() {
+ let hello_world = BStr::from_bytes(b"hello, world!");
+- assert_eq!(format!("{:?}", hello_world), "\"hello, world!\"");
++ assert_eq!(format!("{hello_world:?}"), "\"hello, world!\"");
+ let escapes = BStr::from_bytes(b"_\t_\n_\r_\\_\'_\"_");
+- assert_eq!(format!("{:?}", escapes), "\"_\\t_\\n_\\r_\\\\_'_\\\"_\"");
++ assert_eq!(format!("{escapes:?}"), "\"_\\t_\\n_\\r_\\\\_'_\\\"_\"");
+ let others = BStr::from_bytes(b"\x01");
+- assert_eq!(format!("{:?}", others), "\"\\x01\"");
++ assert_eq!(format!("{others:?}"), "\"\\x01\"");
+ let non_ascii = BStr::from_bytes(b"d\xe9j\xe0 vu");
+- assert_eq!(format!("{:?}", non_ascii), "\"d\\xe9j\\xe0 vu\"");
++ assert_eq!(format!("{non_ascii:?}"), "\"d\\xe9j\\xe0 vu\"");
+ let good_bytes = BStr::from_bytes(b"\xf0\x9f\xa6\x80");
+- assert_eq!(format!("{:?}", good_bytes), "\"\\xf0\\x9f\\xa6\\x80\"");
++ assert_eq!(format!("{good_bytes:?}"), "\"\\xf0\\x9f\\xa6\\x80\"");
+ }
+ }
+
+diff --git a/rust/macros/module.rs b/rust/macros/module.rs
+index cdf94f4982dfc1..3f462e71ff0ef8 100644
+--- a/rust/macros/module.rs
++++ b/rust/macros/module.rs
+@@ -48,7 +48,7 @@ fn emit_base(&mut self, field: &str, content: &str, builtin: bool) {
+ )
+ } else {
+ // Loadable modules' modinfo strings go as-is.
+- format!("{field}={content}\0", field = field, content = content)
++ format!("{field}={content}\0")
+ };
+
+ write!(
+@@ -124,10 +124,7 @@ fn parse(it: &mut token_stream::IntoIter) -> Self {
+ };
+
+ if seen_keys.contains(&key) {
+- panic!(
+- "Duplicated key \"{}\". Keys can only be specified once.",
+- key
+- );
++ panic!("Duplicated key \"{key}\". Keys can only be specified once.");
+ }
+
+ assert_eq!(expect_punct(it), ':');
+@@ -140,10 +137,7 @@ fn parse(it: &mut token_stream::IntoIter) -> Self {
+ "license" => info.license = expect_string_ascii(it),
+ "alias" => info.alias = Some(expect_string_array(it)),
+ "firmware" => info.firmware = Some(expect_string_array(it)),
+- _ => panic!(
+- "Unknown key \"{}\". Valid keys are: {:?}.",
+- key, EXPECTED_KEYS
+- ),
++ _ => panic!("Unknown key \"{key}\". Valid keys are: {EXPECTED_KEYS:?}."),
+ }
+
+ assert_eq!(expect_punct(it), ',');
+@@ -155,7 +149,7 @@ fn parse(it: &mut token_stream::IntoIter) -> Self {
+
+ for key in REQUIRED_KEYS {
+ if !seen_keys.iter().any(|e| e == key) {
+- panic!("Missing required key \"{}\".", key);
++ panic!("Missing required key \"{key}\".");
+ }
+ }
+
+@@ -167,10 +161,7 @@ fn parse(it: &mut token_stream::IntoIter) -> Self {
+ }
+
+ if seen_keys != ordered_keys {
+- panic!(
+- "Keys are not ordered as expected. Order them like: {:?}.",
+- ordered_keys
+- );
++ panic!("Keys are not ordered as expected. Order them like: {ordered_keys:?}.");
+ }
+
+ info
+diff --git a/rust/macros/paste.rs b/rust/macros/paste.rs
+index 6529a387673fb5..cce712d19855b5 100644
+--- a/rust/macros/paste.rs
++++ b/rust/macros/paste.rs
+@@ -50,7 +50,7 @@ fn concat_helper(tokens: &[TokenTree]) -> Vec<(String, Span)> {
+ let tokens = group.stream().into_iter().collect::<Vec<TokenTree>>();
+ segments.append(&mut concat_helper(tokens.as_slice()));
+ }
+- token => panic!("unexpected token in paste segments: {:?}", token),
++ token => panic!("unexpected token in paste segments: {token:?}"),
+ };
+ }
+
+diff --git a/rust/macros/pinned_drop.rs b/rust/macros/pinned_drop.rs
+index 88fb72b2066047..79a52e254f719f 100644
+--- a/rust/macros/pinned_drop.rs
++++ b/rust/macros/pinned_drop.rs
+@@ -25,8 +25,7 @@ pub(crate) fn pinned_drop(_args: TokenStream, input: TokenStream) -> TokenStream
+ // Found the end of the generics, this should be `PinnedDrop`.
+ assert!(
+ matches!(tt, TokenTree::Ident(i) if i.to_string() == "PinnedDrop"),
+- "expected 'PinnedDrop', found: '{:?}'",
+- tt
++ "expected 'PinnedDrop', found: '{tt:?}'"
+ );
+ pinned_drop_idx = Some(i);
+ break;
+diff --git a/rust/uapi/lib.rs b/rust/uapi/lib.rs
+index 13495910271faf..c98d7a8cde77da 100644
+--- a/rust/uapi/lib.rs
++++ b/rust/uapi/lib.rs
+@@ -24,6 +24,7 @@
+ unreachable_pub,
+ unsafe_op_in_unsafe_fn
+ )]
++#![cfg_attr(CONFIG_RUSTC_HAS_UNNECESSARY_TRANSMUTES, allow(unnecessary_transmutes))]
+
+ // Manual definition of blocklisted types.
+ type __kernel_size_t = usize;
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index c51be0f265ac60..a7dcf2d00ab65a 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -228,6 +228,7 @@ static bool is_rust_noreturn(const struct symbol *func)
+ str_ends_with(func->name, "_4core9panicking19assert_failed_inner") ||
+ str_ends_with(func->name, "_4core9panicking30panic_null_pointer_dereference") ||
+ str_ends_with(func->name, "_4core9panicking36panic_misaligned_pointer_dereference") ||
++ str_ends_with(func->name, "_7___rustc17rust_begin_unwind") ||
+ strstr(func->name, "_4core9panicking13assert_failed") ||
+ strstr(func->name, "_4core9panicking11panic_const24panic_const_") ||
+ (strstr(func->name, "_4core5slice5index24slice_") &&
+diff --git a/tools/testing/selftests/Makefile b/tools/testing/selftests/Makefile
+index 8daac70c2f9d2c..f5ad219d7b4b1c 100644
+--- a/tools/testing/selftests/Makefile
++++ b/tools/testing/selftests/Makefile
+@@ -118,6 +118,7 @@ TARGETS += user_events
+ TARGETS += vDSO
+ TARGETS += mm
+ TARGETS += x86
++TARGETS += x86/bugs
+ TARGETS += zram
+ #Please keep the TARGETS list alphabetically sorted
+ # Run "make quicktest=1 run_tests" or
+diff --git a/tools/testing/selftests/mm/compaction_test.c b/tools/testing/selftests/mm/compaction_test.c
+index 2c3a0eb6b22d31..9bc4591c7b1699 100644
+--- a/tools/testing/selftests/mm/compaction_test.c
++++ b/tools/testing/selftests/mm/compaction_test.c
+@@ -90,6 +90,8 @@ int check_compaction(unsigned long mem_free, unsigned long hugepage_size,
+ int compaction_index = 0;
+ char nr_hugepages[20] = {0};
+ char init_nr_hugepages[24] = {0};
++ char target_nr_hugepages[24] = {0};
++ int slen;
+
+ snprintf(init_nr_hugepages, sizeof(init_nr_hugepages),
+ "%lu", initial_nr_hugepages);
+@@ -106,11 +108,18 @@ int check_compaction(unsigned long mem_free, unsigned long hugepage_size,
+ goto out;
+ }
+
+- /* Request a large number of huge pages. The Kernel will allocate
+- as much as it can */
+- if (write(fd, "100000", (6*sizeof(char))) != (6*sizeof(char))) {
+- ksft_print_msg("Failed to write 100000 to /proc/sys/vm/nr_hugepages: %s\n",
+- strerror(errno));
++ /*
++ * Request huge pages for about half of the free memory. The Kernel
++ * will allocate as much as it can, and we expect it will get at least 1/3
++ */
++ nr_hugepages_ul = mem_free / hugepage_size / 2;
++ snprintf(target_nr_hugepages, sizeof(target_nr_hugepages),
++ "%lu", nr_hugepages_ul);
++
++ slen = strlen(target_nr_hugepages);
++ if (write(fd, target_nr_hugepages, slen) != slen) {
++ ksft_print_msg("Failed to write %lu to /proc/sys/vm/nr_hugepages: %s\n",
++ nr_hugepages_ul, strerror(errno));
+ goto close_fd;
+ }
+
+diff --git a/tools/testing/selftests/mm/pkey-powerpc.h b/tools/testing/selftests/mm/pkey-powerpc.h
+index 1bad310d282ad6..17bf2d1b0192e0 100644
+--- a/tools/testing/selftests/mm/pkey-powerpc.h
++++ b/tools/testing/selftests/mm/pkey-powerpc.h
+@@ -3,6 +3,8 @@
+ #ifndef _PKEYS_POWERPC_H
+ #define _PKEYS_POWERPC_H
+
++#include <sys/stat.h>
++
+ #ifndef SYS_pkey_alloc
+ # define SYS_pkey_alloc 384
+ # define SYS_pkey_free 385
+@@ -102,8 +104,18 @@ static inline void expect_fault_on_read_execonly_key(void *p1, int pkey)
+ return;
+ }
+
++#define REPEAT_8(s) s s s s s s s s
++#define REPEAT_64(s) REPEAT_8(s) REPEAT_8(s) REPEAT_8(s) REPEAT_8(s) \
++ REPEAT_8(s) REPEAT_8(s) REPEAT_8(s) REPEAT_8(s)
++#define REPEAT_512(s) REPEAT_64(s) REPEAT_64(s) REPEAT_64(s) REPEAT_64(s) \
++ REPEAT_64(s) REPEAT_64(s) REPEAT_64(s) REPEAT_64(s)
++#define REPEAT_4096(s) REPEAT_512(s) REPEAT_512(s) REPEAT_512(s) REPEAT_512(s) \
++ REPEAT_512(s) REPEAT_512(s) REPEAT_512(s) REPEAT_512(s)
++#define REPEAT_16384(s) REPEAT_4096(s) REPEAT_4096(s) \
++ REPEAT_4096(s) REPEAT_4096(s)
++
+ /* 4-byte instructions * 16384 = 64K page */
+-#define __page_o_noops() asm(".rept 16384 ; nop; .endr")
++#define __page_o_noops() asm(REPEAT_16384("nop\n"))
+
+ static inline void *malloc_pkey_with_mprotect_subpage(long size, int prot, u16 pkey)
+ {
+diff --git a/tools/testing/selftests/mm/pkey_util.c b/tools/testing/selftests/mm/pkey_util.c
+index ca4ad0d44ab2e9..255b332f7a08b2 100644
+--- a/tools/testing/selftests/mm/pkey_util.c
++++ b/tools/testing/selftests/mm/pkey_util.c
+@@ -1,4 +1,5 @@
+ // SPDX-License-Identifier: GPL-2.0-only
++#define __SANE_USERSPACE_TYPES__
+ #include <sys/syscall.h>
+ #include <unistd.h>
+
+diff --git a/tools/testing/selftests/x86/bugs/Makefile b/tools/testing/selftests/x86/bugs/Makefile
+new file mode 100644
+index 00000000000000..8ff2d7226c7f3f
+--- /dev/null
++++ b/tools/testing/selftests/x86/bugs/Makefile
+@@ -0,0 +1,3 @@
++TEST_PROGS := its_sysfs.py its_permutations.py its_indirect_alignment.py its_ret_alignment.py
++TEST_FILES := common.py
++include ../../lib.mk
+diff --git a/tools/testing/selftests/x86/bugs/common.py b/tools/testing/selftests/x86/bugs/common.py
+new file mode 100644
+index 00000000000000..2f9664a80617a6
+--- /dev/null
++++ b/tools/testing/selftests/x86/bugs/common.py
+@@ -0,0 +1,164 @@
++#!/usr/bin/env python3
++# SPDX-License-Identifier: GPL-2.0
++#
++# Copyright (c) 2025 Intel Corporation
++#
++# This contains kselftest framework adapted common functions for testing
++# mitigation for x86 bugs.
++
++import os, sys, re, shutil
++
++sys.path.insert(0, '../../kselftest')
++import ksft
++
++def read_file(path):
++ if not os.path.exists(path):
++ return None
++ with open(path, 'r') as file:
++ return file.read().strip()
++
++def cpuinfo_has(arg):
++ cpuinfo = read_file('/proc/cpuinfo')
++ if arg in cpuinfo:
++ return True
++ return False
++
++def cmdline_has(arg):
++ cmdline = read_file('/proc/cmdline')
++ if arg in cmdline:
++ return True
++ return False
++
++def cmdline_has_either(args):
++ cmdline = read_file('/proc/cmdline')
++ for arg in args:
++ if arg in cmdline:
++ return True
++ return False
++
++def cmdline_has_none(args):
++ return not cmdline_has_either(args)
++
++def cmdline_has_all(args):
++ cmdline = read_file('/proc/cmdline')
++ for arg in args:
++ if arg not in cmdline:
++ return False
++ return True
++
++def get_sysfs(bug):
++ return read_file("/sys/devices/system/cpu/vulnerabilities/" + bug)
++
++def sysfs_has(bug, mitigation):
++ status = get_sysfs(bug)
++ if mitigation in status:
++ return True
++ return False
++
++def sysfs_has_either(bugs, mitigations):
++ for bug in bugs:
++ for mitigation in mitigations:
++ if sysfs_has(bug, mitigation):
++ return True
++ return False
++
++def sysfs_has_none(bugs, mitigations):
++ return not sysfs_has_either(bugs, mitigations)
++
++def sysfs_has_all(bugs, mitigations):
++ for bug in bugs:
++ for mitigation in mitigations:
++ if not sysfs_has(bug, mitigation):
++ return False
++ return True
++
++def bug_check_pass(bug, found):
++ ksft.print_msg(f"\nFound: {found}")
++ # ksft.print_msg(f"\ncmdline: {read_file('/proc/cmdline')}")
++ ksft.test_result_pass(f'{bug}: {found}')
++
++def bug_check_fail(bug, found, expected):
++ ksft.print_msg(f'\nFound:\t {found}')
++ ksft.print_msg(f'Expected:\t {expected}')
++ ksft.print_msg(f"\ncmdline: {read_file('/proc/cmdline')}")
++ ksft.test_result_fail(f'{bug}: {found}')
++
++def bug_status_unknown(bug, found):
++ ksft.print_msg(f'\nUnknown status: {found}')
++ ksft.print_msg(f"\ncmdline: {read_file('/proc/cmdline')}")
++ ksft.test_result_fail(f'{bug}: {found}')
++
++def basic_checks_sufficient(bug, mitigation):
++ if not mitigation:
++ bug_status_unknown(bug, "None")
++ return True
++ elif mitigation == "Not affected":
++ ksft.test_result_pass(bug)
++ return True
++ elif mitigation == "Vulnerable":
++ if cmdline_has_either([f'{bug}=off', 'mitigations=off']):
++ bug_check_pass(bug, mitigation)
++ return True
++ return False
++
++def get_section_info(vmlinux, section_name):
++ from elftools.elf.elffile import ELFFile
++ with open(vmlinux, 'rb') as f:
++ elffile = ELFFile(f)
++ section = elffile.get_section_by_name(section_name)
++ if section is None:
++ ksft.print_msg("Available sections in vmlinux:")
++ for sec in elffile.iter_sections():
++ ksft.print_msg(sec.name)
++ raise ValueError(f"Section {section_name} not found in {vmlinux}")
++ return section['sh_addr'], section['sh_offset'], section['sh_size']
++
++def get_patch_sites(vmlinux, offset, size):
++ import struct
++ output = []
++ with open(vmlinux, 'rb') as f:
++ f.seek(offset)
++ i = 0
++ while i < size:
++ data = f.read(4) # s32
++ if not data:
++ break
++ sym_offset = struct.unpack('<i', data)[0] + i
++ i += 4
++ output.append(sym_offset)
++ return output
++
++def get_instruction_from_vmlinux(elffile, section, virtual_address, target_address):
++ from capstone import Cs, CS_ARCH_X86, CS_MODE_64
++ section_start = section['sh_addr']
++ section_end = section_start + section['sh_size']
++
++ if not (section_start <= target_address < section_end):
++ return None
++
++ offset = target_address - section_start
++ code = section.data()[offset:offset + 16]
++
++ cap = init_capstone()
++ for instruction in cap.disasm(code, target_address):
++ if instruction.address == target_address:
++ return instruction
++ return None
++
++def init_capstone():
++ from capstone import Cs, CS_ARCH_X86, CS_MODE_64, CS_OPT_SYNTAX_ATT
++ cap = Cs(CS_ARCH_X86, CS_MODE_64)
++ cap.syntax = CS_OPT_SYNTAX_ATT
++ return cap
++
++def get_runtime_kernel():
++ import drgn
++ return drgn.program_from_kernel()
++
++def check_dependencies_or_skip(modules, script_name="unknown test"):
++ for mod in modules:
++ try:
++ __import__(mod)
++ except ImportError:
++ ksft.test_result_skip(f"Skipping {script_name}: missing module '{mod}'")
++ ksft.finished()
+diff --git a/tools/testing/selftests/x86/bugs/its_indirect_alignment.py b/tools/testing/selftests/x86/bugs/its_indirect_alignment.py
+new file mode 100644
+index 00000000000000..cdc33ae6a91c33
+--- /dev/null
++++ b/tools/testing/selftests/x86/bugs/its_indirect_alignment.py
+@@ -0,0 +1,150 @@
++#!/usr/bin/env python3
++# SPDX-License-Identifier: GPL-2.0
++#
++# Copyright (c) 2025 Intel Corporation
++#
++# Test for indirect target selection (ITS) mitigation.
++#
++# Test if indirect CALL/JMP are correctly patched by evaluating
++# the vmlinux .retpoline_sites in /proc/kcore.
++
++# Install dependencies
++# add-apt-repository ppa:michel-slm/kernel-utils
++# apt update
++# apt install -y python3-drgn python3-pyelftools python3-capstone
++#
++# Best to copy the vmlinux at a standard location:
++# mkdir -p /usr/lib/debug/lib/modules/$(uname -r)
++# cp $VMLINUX /usr/lib/debug/lib/modules/$(uname -r)/vmlinux
++#
++# Usage: ./its_indirect_alignment.py [vmlinux]
++
++import os, sys, argparse
++from pathlib import Path
++
++this_dir = os.path.dirname(os.path.realpath(__file__))
++sys.path.insert(0, this_dir + '/../../kselftest')
++import ksft
++import common as c
++
++bug = "indirect_target_selection"
++
++mitigation = c.get_sysfs(bug)
++if not mitigation or "Aligned branch/return thunks" not in mitigation:
++ ksft.test_result_skip("Skipping its_indirect_alignment.py: Aligned branch/return thunks not enabled")
++ ksft.finished()
++
++if c.sysfs_has("spectre_v2", "Retpolines"):
++ ksft.test_result_skip("Skipping its_indirect_alignment.py: Retpolines deployed")
++ ksft.finished()
++
++c.check_dependencies_or_skip(['drgn', 'elftools', 'capstone'], script_name="its_indirect_alignment.py")
++
++from elftools.elf.elffile import ELFFile
++from drgn.helpers.common.memory import identify_address
++
++cap = c.init_capstone()
++
++if len(os.sys.argv) > 1:
++ arg_vmlinux = os.sys.argv[1]
++ if not os.path.exists(arg_vmlinux):
++ ksft.test_result_fail(f"its_indirect_alignment.py: vmlinux not found at argument path: {arg_vmlinux}")
++ ksft.exit_fail()
++ os.makedirs(f"/usr/lib/debug/lib/modules/{os.uname().release}", exist_ok=True)
++ os.system(f'cp {arg_vmlinux} /usr/lib/debug/lib/modules/$(uname -r)/vmlinux')
++
++vmlinux = f"/usr/lib/debug/lib/modules/{os.uname().release}/vmlinux"
++if not os.path.exists(vmlinux):
++ ksft.test_result_fail(f"its_indirect_alignment.py: vmlinux not found at {vmlinux}")
++ ksft.exit_fail()
++
++ksft.print_msg(f"Using vmlinux: {vmlinux}")
++
++retpolines_start_vmlinux, retpolines_sec_offset, size = c.get_section_info(vmlinux, '.retpoline_sites')
++ksft.print_msg(f"vmlinux: Section .retpoline_sites (0x{retpolines_start_vmlinux:x}) found at 0x{retpolines_sec_offset:x} with size 0x{size:x}")
++
++sites_offset = c.get_patch_sites(vmlinux, retpolines_sec_offset, size)
++total_retpoline_tests = len(sites_offset)
++ksft.print_msg(f"Found {total_retpoline_tests} retpoline sites")
++
++prog = c.get_runtime_kernel()
++retpolines_start_kcore = prog.symbol('__retpoline_sites').address
++ksft.print_msg(f'kcore: __retpoline_sites: 0x{retpolines_start_kcore:x}')
++
++x86_indirect_its_thunk_r15 = prog.symbol('__x86_indirect_its_thunk_r15').address
++ksft.print_msg(f'kcore: __x86_indirect_its_thunk_r15: 0x{x86_indirect_its_thunk_r15:x}')
++
++tests_passed = 0
++tests_failed = 0
++tests_unknown = 0
++
++with open(vmlinux, 'rb') as f:
++ elffile = ELFFile(f)
++ text_section = elffile.get_section_by_name('.text')
++
++ for i in range(0, len(sites_offset)):
++ site = retpolines_start_kcore + sites_offset[i]
++ vmlinux_site = retpolines_start_vmlinux + sites_offset[i]
++ passed = unknown = failed = False
++ try:
++ vmlinux_insn = c.get_instruction_from_vmlinux(elffile, text_section, text_section['sh_addr'], vmlinux_site)
++ kcore_insn = list(cap.disasm(prog.read(site, 16), site))[0]
++ operand = kcore_insn.op_str
++ insn_end = site + kcore_insn.size - 1 # TODO handle Jcc.32 __x86_indirect_thunk_\reg
++ safe_site = insn_end & 0x20
++ site_status = "" if safe_site else "(unsafe)"
++
++ ksft.print_msg(f"\nSite {i}: {identify_address(prog, site)} <0x{site:x}> {site_status}")
++ ksft.print_msg(f"\tvmlinux: 0x{vmlinux_insn.address:x}:\t{vmlinux_insn.mnemonic}\t{vmlinux_insn.op_str}")
++ ksft.print_msg(f"\tkcore: 0x{kcore_insn.address:x}:\t{kcore_insn.mnemonic}\t{kcore_insn.op_str}")
++
++ if (site & 0x20) ^ (insn_end & 0x20):
++ ksft.print_msg(f"\tSite at safe/unsafe boundary: {str(kcore_insn.bytes)} {kcore_insn.mnemonic} {operand}")
++ if safe_site:
++ tests_passed += 1
++ passed = True
++ ksft.print_msg(f"\tPASSED: At safe address")
++ continue
++
++ if operand.startswith('0xffffffff'):
++ thunk = int(operand, 16)
++ if thunk > x86_indirect_its_thunk_r15:
++ insn_at_thunk = list(cap.disasm(prog.read(thunk, 16), thunk))[0]
++ operand += ' -> ' + insn_at_thunk.mnemonic + ' ' + insn_at_thunk.op_str + ' <dynamic-thunk?>'
++ if 'jmp' in insn_at_thunk.mnemonic and thunk & 0x20:
++ ksft.print_msg(f"\tPASSED: Found {operand} at safe address")
++ passed = True
++ if not passed:
++ if kcore_insn.operands[0].type == capstone.CS_OP_IMM:
++ operand += ' <' + prog.symbol(int(operand, 16)) + '>'
++ if '__x86_indirect_its_thunk_' in operand:
++ ksft.print_msg(f"\tPASSED: Found {operand}")
++ else:
++ ksft.print_msg(f"\tPASSED: Found direct branch: {kcore_insn}, ITS thunk not required.")
++ passed = True
++ else:
++ unknown = True
++ if passed:
++ tests_passed += 1
++ elif unknown:
++ ksft.print_msg(f"UNKNOWN: unexpected operand: {kcore_insn}")
++ tests_unknown += 1
++ else:
++ ksft.print_msg(f'\t************* FAILED *************')
++ ksft.print_msg(f"\tFound {kcore_insn.bytes} {kcore_insn.mnemonic} {operand}")
++ ksft.print_msg(f'\t**********************************')
++ tests_failed += 1
++ except Exception as e:
++ ksft.print_msg(f"UNKNOWN: An unexpected error occurred: {e}")
++ tests_unknown += 1
++
++ksft.print_msg(f"\n\nSummary:")
++ksft.print_msg(f"PASS: \t{tests_passed} \t/ {total_retpoline_tests}")
++ksft.print_msg(f"FAIL: \t{tests_failed} \t/ {total_retpoline_tests}")
++ksft.print_msg(f"UNKNOWN: \t{tests_unknown} \t/ {total_retpoline_tests}")
++
++if tests_failed == 0:
++ ksft.test_result_pass("All ITS return thunk sites passed")
++else:
++ ksft.test_result_fail(f"{tests_failed} ITS return thunk sites failed")
++ksft.finished()
+diff --git a/tools/testing/selftests/x86/bugs/its_permutations.py b/tools/testing/selftests/x86/bugs/its_permutations.py
+new file mode 100644
+index 00000000000000..3204f4728c62cc
+--- /dev/null
++++ b/tools/testing/selftests/x86/bugs/its_permutations.py
+@@ -0,0 +1,109 @@
++#!/usr/bin/env python3
++# SPDX-License-Identifier: GPL-2.0
++#
++# Copyright (c) 2025 Intel Corporation
++#
++# Test for indirect target selection (ITS) cmdline permutations with other bugs
++# like spectre_v2 and retbleed.
++
++import os, sys, subprocess, itertools, re, shutil
++
++test_dir = os.path.dirname(os.path.realpath(__file__))
++sys.path.insert(0, test_dir + '/../../kselftest')
++import ksft
++import common as c
++
++bug = "indirect_target_selection"
++mitigation = c.get_sysfs(bug)
++
++if not mitigation or "Not affected" in mitigation:
++ ksft.test_result_skip("Skipping its_permutations.py: not applicable")
++ ksft.finished()
++
++if shutil.which('vng') is None:
++ ksft.test_result_skip("Skipping its_permutations.py: virtme-ng ('vng') not found in PATH.")
++ ksft.finished()
++
++TEST = f"{test_dir}/its_sysfs.py"
++default_kparam = ['clearcpuid=hypervisor', 'panic=5', 'panic_on_warn=1', 'oops=panic', 'nmi_watchdog=1', 'hung_task_panic=1']
++
++DEBUG = " -v "
++
++# Install dependencies
++# https://github.com/arighi/virtme-ng
++# apt install virtme-ng
++BOOT_CMD = f"vng --run {test_dir}/../../../../../arch/x86/boot/bzImage "
++#BOOT_CMD += DEBUG
++
++bug = "indirect_target_selection"
++
++input_options = {
++ 'indirect_target_selection' : ['off', 'on', 'stuff', 'vmexit'],
++ 'retbleed' : ['off', 'stuff', 'auto'],
++ 'spectre_v2' : ['off', 'on', 'eibrs', 'retpoline', 'ibrs', 'eibrs,retpoline'],
++}
++
++def pretty_print(output):
++ OKBLUE = '\033[94m'
++ OKGREEN = '\033[92m'
++ WARNING = '\033[93m'
++ FAIL = '\033[91m'
++ ENDC = '\033[0m'
++ BOLD = '\033[1m'
++
++ # Define patterns and their corresponding colors
++ patterns = {
++ r"^ok \d+": OKGREEN,
++ r"^not ok \d+": FAIL,
++ r"^# Testing .*": OKBLUE,
++ r"^# Found: .*": WARNING,
++ r"^# Totals: .*": BOLD,
++ r"pass:([1-9]\d*)": OKGREEN,
++ r"fail:([1-9]\d*)": FAIL,
++ r"skip:([1-9]\d*)": WARNING,
++ }
++
++ # Apply colors based on patterns
++ for pattern, color in patterns.items():
++ output = re.sub(pattern, lambda match: f"{color}{match.group(0)}{ENDC}", output, flags=re.MULTILINE)
++
++ print(output)
++
++combinations = list(itertools.product(*input_options.values()))
++ksft.print_header()
++ksft.set_plan(len(combinations))
++
++logs = ""
++
++for combination in combinations:
++ append = ""
++ log = ""
++ for p in default_kparam:
++ append += f' --append={p}'
++ command = BOOT_CMD + append
++ test_params = ""
++ for i, key in enumerate(input_options.keys()):
++ param = f'{key}={combination[i]}'
++ test_params += f' {param}'
++ command += f" --append={param}"
++ command += f" -- {TEST}"
++ test_name = f"{bug} {test_params}"
++ pretty_print(f'# Testing {test_name}')
++ t = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
++ t.wait()
++ output, _ = t.communicate()
++ if t.returncode == 0:
++ ksft.test_result_pass(test_name)
++ else:
++ ksft.test_result_fail(test_name)
++ output = output.decode()
++ log += f" {output}"
++ pretty_print(log)
++ logs += output + "\n"
++
++# Optionally use tappy to parse the output
++# apt install python3-tappy
++with open("logs.txt", "w") as f:
++ f.write(logs)
++
++ksft.finished()
+diff --git a/tools/testing/selftests/x86/bugs/its_ret_alignment.py b/tools/testing/selftests/x86/bugs/its_ret_alignment.py
+new file mode 100644
+index 00000000000000..f40078d9f6ffc1
+--- /dev/null
++++ b/tools/testing/selftests/x86/bugs/its_ret_alignment.py
+@@ -0,0 +1,139 @@
++#!/usr/bin/env python3
++# SPDX-License-Identifier: GPL-2.0
++#
++# Copyright (c) 2025 Intel Corporation
++#
++# Test for indirect target selection (ITS) mitigation.
++#
++# Tests if the RETs are correctly patched by evaluating the
++# vmlinux .return_sites in /proc/kcore.
++#
++# Install dependencies
++# add-apt-repository ppa:michel-slm/kernel-utils
++# apt update
++# apt install -y python3-drgn python3-pyelftools python3-capstone
++#
++# Run on target machine
++# mkdir -p /usr/lib/debug/lib/modules/$(uname -r)
++# cp $VMLINUX /usr/lib/debug/lib/modules/$(uname -r)/vmlinux
++#
++# Usage: ./its_ret_alignment.py
++
++import os, sys, argparse
++from pathlib import Path
++
++this_dir = os.path.dirname(os.path.realpath(__file__))
++sys.path.insert(0, this_dir + '/../../kselftest')
++import ksft
++import common as c
++
++bug = "indirect_target_selection"
++mitigation = c.get_sysfs(bug)
++if not mitigation or "Aligned branch/return thunks" not in mitigation:
++ ksft.test_result_skip("Skipping its_ret_alignment.py: Aligned branch/return thunks not enabled")
++ ksft.finished()
++
++c.check_dependencies_or_skip(['drgn', 'elftools', 'capstone'], script_name="its_ret_alignment.py")
++
++from elftools.elf.elffile import ELFFile
++from drgn.helpers.common.memory import identify_address
++
++cap = c.init_capstone()
++
++if len(os.sys.argv) > 1:
++ arg_vmlinux = os.sys.argv[1]
++ if not os.path.exists(arg_vmlinux):
++ ksft.test_result_fail(f"its_ret_alignment.py: vmlinux not found at user-supplied path: {arg_vmlinux}")
++ ksft.exit_fail()
++ os.makedirs(f"/usr/lib/debug/lib/modules/{os.uname().release}", exist_ok=True)
++ os.system(f'cp {arg_vmlinux} /usr/lib/debug/lib/modules/$(uname -r)/vmlinux')
++
++vmlinux = f"/usr/lib/debug/lib/modules/{os.uname().release}/vmlinux"
++if not os.path.exists(vmlinux):
++ ksft.test_result_fail(f"its_ret_alignment.py: vmlinux not found at {vmlinux}")
++ ksft.exit_fail()
++
++ksft.print_msg(f"Using vmlinux: {vmlinux}")
++
++rethunks_start_vmlinux, rethunks_sec_offset, size = c.get_section_info(vmlinux, '.return_sites')
++ksft.print_msg(f"vmlinux: Section .return_sites (0x{rethunks_start_vmlinux:x}) found at 0x{rethunks_sec_offset:x} with size 0x{size:x}")
++
++sites_offset = c.get_patch_sites(vmlinux, rethunks_sec_offset, size)
++total_rethunk_tests = len(sites_offset)
++ksft.print_msg(f"Found {total_rethunk_tests} rethunk sites")
++
++prog = c.get_runtime_kernel()
++rethunks_start_kcore = prog.symbol('__return_sites').address
++ksft.print_msg(f'kcore: __rethunk_sites: 0x{rethunks_start_kcore:x}')
++
++its_return_thunk = prog.symbol('its_return_thunk').address
++ksft.print_msg(f'kcore: its_return_thunk: 0x{its_return_thunk:x}')
++
++tests_passed = 0
++tests_failed = 0
++tests_unknown = 0
++tests_skipped = 0
++
++with open(vmlinux, 'rb') as f:
++ elffile = ELFFile(f)
++ text_section = elffile.get_section_by_name('.text')
++
++ for i in range(len(sites_offset)):
++ site = rethunks_start_kcore + sites_offset[i]
++ vmlinux_site = rethunks_start_vmlinux + sites_offset[i]
++ try:
++ passed = unknown = failed = skipped = False
++
++ symbol = identify_address(prog, site)
++ vmlinux_insn = c.get_instruction_from_vmlinux(elffile, text_section, text_section['sh_addr'], vmlinux_site)
++ kcore_insn = list(cap.disasm(prog.read(site, 16), site))[0]
++
++ insn_end = site + kcore_insn.size - 1
++
++ safe_site = insn_end & 0x20
++ site_status = "" if safe_site else "(unsafe)"
++
++ ksft.print_msg(f"\nSite {i}: {symbol} <0x{site:x}> {site_status}")
++ ksft.print_msg(f"\tvmlinux: 0x{vmlinux_insn.address:x}:\t{vmlinux_insn.mnemonic}\t{vmlinux_insn.op_str}")
++ ksft.print_msg(f"\tkcore: 0x{kcore_insn.address:x}:\t{kcore_insn.mnemonic}\t{kcore_insn.op_str}")
++
++ if safe_site:
++ tests_passed += 1
++ passed = True
++ ksft.print_msg(f"\tPASSED: At safe address")
++ continue
++
++ if "jmp" in kcore_insn.mnemonic:
++ passed = True
++ elif "ret" not in kcore_insn.mnemonic:
++ skipped = True
++
++ if passed:
++ ksft.print_msg(f"\tPASSED: Found {kcore_insn.mnemonic} {kcore_insn.op_str}")
++ tests_passed += 1
++ elif skipped:
++ ksft.print_msg(f"\tSKIPPED: Found '{kcore_insn.mnemonic}'")
++ tests_skipped += 1
++ elif unknown:
++ ksft.print_msg(f"UNKNOWN: An unknown instruction: {kcore_insn}")
++ tests_unknown += 1
++ else:
++ ksft.print_msg(f'\t************* FAILED *************')
++ ksft.print_msg(f"\tFound {kcore_insn.mnemonic} {kcore_insn.op_str}")
++ ksft.print_msg(f'\t**********************************')
++ tests_failed += 1
++ except Exception as e:
++ ksft.print_msg(f"UNKNOWN: An unexpected error occurred: {e}")
++ tests_unknown += 1
++
++ksft.print_msg(f"\n\nSummary:")
++ksft.print_msg(f"PASSED: \t{tests_passed} \t/ {total_rethunk_tests}")
++ksft.print_msg(f"FAILED: \t{tests_failed} \t/ {total_rethunk_tests}")
++ksft.print_msg(f"SKIPPED: \t{tests_skipped} \t/ {total_rethunk_tests}")
++ksft.print_msg(f"UNKNOWN: \t{tests_unknown} \t/ {total_rethunk_tests}")
++
++if tests_failed == 0:
++ ksft.test_result_pass("All ITS return thunk sites passed.")
++else:
++ ksft.test_result_fail(f"{tests_failed} failed sites need ITS return thunks.")
++ksft.finished()
+diff --git a/tools/testing/selftests/x86/bugs/its_sysfs.py b/tools/testing/selftests/x86/bugs/its_sysfs.py
+new file mode 100644
+index 00000000000000..7bca81f2f6065b
+--- /dev/null
++++ b/tools/testing/selftests/x86/bugs/its_sysfs.py
+@@ -0,0 +1,65 @@
++#!/usr/bin/env python3
++# SPDX-License-Identifier: GPL-2.0
++#
++# Copyright (c) 2025 Intel Corporation
++#
++# Test for Indirect Target Selection(ITS) mitigation sysfs status.
++
++import sys, os, re
++this_dir = os.path.dirname(os.path.realpath(__file__))
++sys.path.insert(0, this_dir + '/../../kselftest')
++import ksft
++
++from common import *
++
++bug = "indirect_target_selection"
++mitigation = get_sysfs(bug)
++
++ITS_MITIGATION_ALIGNED_THUNKS = "Mitigation: Aligned branch/return thunks"
++ITS_MITIGATION_RETPOLINE_STUFF = "Mitigation: Retpolines, Stuffing RSB"
++ITS_MITIGATION_VMEXIT_ONLY = "Mitigation: Vulnerable, KVM: Not affected"
++ITS_MITIGATION_VULNERABLE = "Vulnerable"
++
++def check_mitigation():
++ if mitigation == ITS_MITIGATION_ALIGNED_THUNKS:
++ if cmdline_has(f'{bug}=stuff') and sysfs_has("spectre_v2", "Retpolines"):
++ bug_check_fail(bug, ITS_MITIGATION_ALIGNED_THUNKS, ITS_MITIGATION_RETPOLINE_STUFF)
++ return
++ if cmdline_has(f'{bug}=vmexit') and cpuinfo_has('its_native_only'):
++ bug_check_fail(bug, ITS_MITIGATION_ALIGNED_THUNKS, ITS_MITIGATION_VMEXIT_ONLY)
++ return
++ bug_check_pass(bug, ITS_MITIGATION_ALIGNED_THUNKS)
++ return
++
++ if mitigation == ITS_MITIGATION_RETPOLINE_STUFF:
++ if cmdline_has(f'{bug}=stuff') and sysfs_has("spectre_v2", "Retpolines"):
++ bug_check_pass(bug, ITS_MITIGATION_RETPOLINE_STUFF)
++ return
++ if sysfs_has('retbleed', 'Stuffing'):
++ bug_check_pass(bug, ITS_MITIGATION_RETPOLINE_STUFF)
++ return
++ bug_check_fail(bug, ITS_MITIGATION_RETPOLINE_STUFF, ITS_MITIGATION_ALIGNED_THUNKS)
++
++ if mitigation == ITS_MITIGATION_VMEXIT_ONLY:
++ if cmdline_has(f'{bug}=vmexit') and cpuinfo_has('its_native_only'):
++ bug_check_pass(bug, ITS_MITIGATION_VMEXIT_ONLY)
++ return
++ bug_check_fail(bug, ITS_MITIGATION_VMEXIT_ONLY, ITS_MITIGATION_ALIGNED_THUNKS)
++
++ if mitigation == ITS_MITIGATION_VULNERABLE:
++ if sysfs_has("spectre_v2", "Vulnerable"):
++ bug_check_pass(bug, ITS_MITIGATION_VULNERABLE)
++ else:
++ bug_check_fail(bug, "Mitigation", ITS_MITIGATION_VULNERABLE)
++
++ bug_status_unknown(bug, mitigation)
++ return
++
++ksft.print_header()
++ksft.set_plan(1)
++ksft.print_msg(f'{bug}: {mitigation} ...')
++
++if not basic_checks_sufficient(bug, mitigation):
++ check_mitigation()
++
++ksft.finished()
* [gentoo-commits] proj/linux-patches:6.14 commit in: /
@ 2025-05-22 13:36 Mike Pagano
0 siblings, 0 replies; 21+ messages in thread
From: Mike Pagano @ 2025-05-22 13:36 UTC (permalink / raw
To: gentoo-commits
commit: b934b3e168593072ab8dba8af9d7bdb5f89891b7
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu May 22 13:36:40 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu May 22 13:36:40 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=b934b3e1
Linux patch 6.14.8
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1007_linux-6.14.8.patch | 6376 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 6380 insertions(+)
diff --git a/0000_README b/0000_README
index df3d8c2c..aa5c3afa 100644
--- a/0000_README
+++ b/0000_README
@@ -70,6 +70,10 @@ Patch: 1006_linux-6.14.7.patch
From: https://www.kernel.org
Desc: Linux 6.14.7
+Patch: 1007_linux-6.14.8.patch
+From: https://www.kernel.org
+Desc: Linux 6.14.8
+
Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
Desc: Enable link security restrictions by default.
diff --git a/1007_linux-6.14.8.patch b/1007_linux-6.14.8.patch
new file mode 100644
index 00000000..ce546889
--- /dev/null
+++ b/1007_linux-6.14.8.patch
@@ -0,0 +1,6376 @@
+diff --git a/Documentation/netlink/specs/tc.yaml b/Documentation/netlink/specs/tc.yaml
+index aacccea5dfe42a..953aa837958b3f 100644
+--- a/Documentation/netlink/specs/tc.yaml
++++ b/Documentation/netlink/specs/tc.yaml
+@@ -2017,7 +2017,8 @@ attribute-sets:
+ attributes:
+ -
+ name: act
+- type: nest
++ type: indexed-array
++ sub-type: nest
+ nested-attributes: tc-act-attrs
+ -
+ name: police
+@@ -2250,7 +2251,8 @@ attribute-sets:
+ attributes:
+ -
+ name: act
+- type: nest
++ type: indexed-array
++ sub-type: nest
+ nested-attributes: tc-act-attrs
+ -
+ name: police
+@@ -2745,7 +2747,7 @@ attribute-sets:
+ type: u16
+ byte-order: big-endian
+ -
+- name: key-l2-tpv3-sid
++ name: key-l2tpv3-sid
+ type: u32
+ byte-order: big-endian
+ -
+@@ -3504,7 +3506,7 @@ attribute-sets:
+ name: rate64
+ type: u64
+ -
+- name: prate4
++ name: prate64
+ type: u64
+ -
+ name: burst
+diff --git a/MAINTAINERS b/MAINTAINERS
+index 00e94bec401e1b..c0d5232a473b8a 100644
+--- a/MAINTAINERS
++++ b/MAINTAINERS
+@@ -17954,7 +17954,7 @@ F: include/uapi/linux/ppdev.h
+ PARAVIRT_OPS INTERFACE
+ M: Juergen Gross <jgross@suse.com>
+ R: Ajay Kaher <ajay.kaher@broadcom.com>
+-R: Alexey Makhalov <alexey.amakhalov@broadcom.com>
++R: Alexey Makhalov <alexey.makhalov@broadcom.com>
+ R: Broadcom internal kernel review list <bcm-kernel-feedback-list@broadcom.com>
+ L: virtualization@lists.linux.dev
+ L: x86@kernel.org
+@@ -25350,7 +25350,7 @@ F: drivers/misc/vmw_balloon.c
+
+ VMWARE HYPERVISOR INTERFACE
+ M: Ajay Kaher <ajay.kaher@broadcom.com>
+-M: Alexey Makhalov <alexey.amakhalov@broadcom.com>
++M: Alexey Makhalov <alexey.makhalov@broadcom.com>
+ R: Broadcom internal kernel review list <bcm-kernel-feedback-list@broadcom.com>
+ L: virtualization@lists.linux.dev
+ L: x86@kernel.org
+@@ -25378,7 +25378,7 @@ F: drivers/scsi/vmw_pvscsi.h
+ VMWARE VIRTUAL PTP CLOCK DRIVER
+ M: Nick Shi <nick.shi@broadcom.com>
+ R: Ajay Kaher <ajay.kaher@broadcom.com>
+-R: Alexey Makhalov <alexey.amakhalov@broadcom.com>
++R: Alexey Makhalov <alexey.makhalov@broadcom.com>
+ R: Broadcom internal kernel review list <bcm-kernel-feedback-list@broadcom.com>
+ L: netdev@vger.kernel.org
+ S: Supported
+diff --git a/Makefile b/Makefile
+index 70bd8847c8677a..70011eb4745f1a 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 14
+-SUBLEVEL = 7
++SUBLEVEL = 8
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+@@ -1070,7 +1070,6 @@ KBUILD_CFLAGS += -fno-builtin-wcslen
+ # change __FILE__ to the relative path to the source directory
+ ifdef building_out_of_srctree
+ KBUILD_CPPFLAGS += $(call cc-option,-fmacro-prefix-map=$(srcroot)/=)
+-KBUILD_RUSTFLAGS += --remap-path-prefix=$(srcroot)/=
+ endif
+
+ # include additional Makefiles when needed
+diff --git a/arch/arm64/boot/dts/amlogic/meson-g12b-dreambox.dtsi b/arch/arm64/boot/dts/amlogic/meson-g12b-dreambox.dtsi
+index de35fa2d7a6de3..8e3e3354ed67a9 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-g12b-dreambox.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-g12b-dreambox.dtsi
+@@ -116,6 +116,10 @@ &arb {
+ status = "okay";
+ };
+
++&clkc_audio {
++ status = "okay";
++};
++
+ &frddr_a {
+ status = "okay";
+ };
+diff --git a/arch/arm64/boot/dts/freescale/imx8mp-var-som.dtsi b/arch/arm64/boot/dts/freescale/imx8mp-var-som.dtsi
+index b2ac2583a59292..b59da91fdd041f 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mp-var-som.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mp-var-som.dtsi
+@@ -35,7 +35,6 @@ memory@40000000 {
+ <0x1 0x00000000 0 0xc0000000>;
+ };
+
+-
+ reg_usdhc2_vmmc: regulator-usdhc2-vmmc {
+ compatible = "regulator-fixed";
+ regulator-name = "VSD_3V3";
+@@ -46,6 +45,16 @@ reg_usdhc2_vmmc: regulator-usdhc2-vmmc {
+ startup-delay-us = <100>;
+ off-on-delay-us = <12000>;
+ };
++
++ reg_usdhc2_vqmmc: regulator-usdhc2-vqmmc {
++ compatible = "regulator-gpio";
++ regulator-name = "VSD_VSEL";
++ regulator-min-microvolt = <1800000>;
++ regulator-max-microvolt = <3300000>;
++ gpios = <&gpio2 12 GPIO_ACTIVE_HIGH>;
++ states = <3300000 0x0 1800000 0x1>;
++ vin-supply = <&ldo5>;
++ };
+ };
+
+ &A53_0 {
+@@ -205,6 +214,7 @@ &usdhc2 {
+ pinctrl-2 = <&pinctrl_usdhc2_200mhz>, <&pinctrl_usdhc2_gpio>;
+ cd-gpios = <&gpio1 14 GPIO_ACTIVE_LOW>;
+ vmmc-supply = <®_usdhc2_vmmc>;
++ vqmmc-supply = <®_usdhc2_vqmmc>;
+ bus-width = <4>;
+ status = "okay";
+ };
+diff --git a/arch/arm64/boot/dts/rockchip/rk3576-armsom-sige5.dts b/arch/arm64/boot/dts/rockchip/rk3576-armsom-sige5.dts
+index a9b9db31d2a3e6..bab66b688e0110 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3576-armsom-sige5.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3576-armsom-sige5.dts
+@@ -578,7 +578,7 @@ hym8563: rtc@51 {
+ reg = <0x51>;
+ clock-output-names = "hym8563";
+ interrupt-parent = <&gpio0>;
+- interrupts = <RK_PB0 IRQ_TYPE_LEVEL_LOW>;
++ interrupts = <RK_PA0 IRQ_TYPE_LEVEL_LOW>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&hym8563_int>;
+ wakeup-source;
+diff --git a/arch/arm64/boot/dts/rockchip/rk3588-friendlyelec-cm3588.dtsi b/arch/arm64/boot/dts/rockchip/rk3588-friendlyelec-cm3588.dtsi
+index e3a9598b99fca8..cacffc851584fc 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3588-friendlyelec-cm3588.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3588-friendlyelec-cm3588.dtsi
+@@ -222,6 +222,10 @@ rt5616: audio-codec@1b {
+ compatible = "realtek,rt5616";
+ reg = <0x1b>;
+ #sound-dai-cells = <0>;
++ assigned-clocks = <&cru I2S0_8CH_MCLKOUT>;
++ assigned-clock-rates = <12288000>;
++ clocks = <&cru I2S0_8CH_MCLKOUT>;
++ clock-names = "mclk";
+ };
+ };
+
+diff --git a/arch/arm64/boot/dts/rockchip/rk3588-turing-rk1.dtsi b/arch/arm64/boot/dts/rockchip/rk3588-turing-rk1.dtsi
+index 6bc46734cc1407..0270bffce195cb 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3588-turing-rk1.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3588-turing-rk1.dtsi
+@@ -214,6 +214,8 @@ rgmii_phy: ethernet-phy@1 {
+ };
+
+ &package_thermal {
++ polling-delay = <1000>;
++
+ trips {
+ package_active1: trip-active1 {
+ temperature = <45000>;
+diff --git a/arch/arm64/boot/dts/rockchip/rk3588j.dtsi b/arch/arm64/boot/dts/rockchip/rk3588j.dtsi
+index bce72bac4503b5..3045cb3bd68c63 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3588j.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3588j.dtsi
+@@ -11,20 +11,15 @@ cluster0_opp_table: opp-table-cluster0 {
+ compatible = "operating-points-v2";
+ opp-shared;
+
+- opp-1416000000 {
+- opp-hz = /bits/ 64 <1416000000>;
++ opp-1200000000 {
++ opp-hz = /bits/ 64 <1200000000>;
+ opp-microvolt = <750000 750000 950000>;
+ clock-latency-ns = <40000>;
+ opp-suspend;
+ };
+- opp-1608000000 {
+- opp-hz = /bits/ 64 <1608000000>;
+- opp-microvolt = <887500 887500 950000>;
+- clock-latency-ns = <40000>;
+- };
+- opp-1704000000 {
+- opp-hz = /bits/ 64 <1704000000>;
+- opp-microvolt = <937500 937500 950000>;
++ opp-1296000000 {
++ opp-hz = /bits/ 64 <1296000000>;
++ opp-microvolt = <775000 775000 950000>;
+ clock-latency-ns = <40000>;
+ };
+ };
+@@ -33,9 +28,14 @@ cluster1_opp_table: opp-table-cluster1 {
+ compatible = "operating-points-v2";
+ opp-shared;
+
++ opp-1200000000{
++ opp-hz = /bits/ 64 <1200000000>;
++ opp-microvolt = <750000 750000 950000>;
++ clock-latency-ns = <40000>;
++ };
+ opp-1416000000 {
+ opp-hz = /bits/ 64 <1416000000>;
+- opp-microvolt = <750000 750000 950000>;
++ opp-microvolt = <762500 762500 950000>;
+ clock-latency-ns = <40000>;
+ };
+ opp-1608000000 {
+@@ -43,25 +43,20 @@ opp-1608000000 {
+ opp-microvolt = <787500 787500 950000>;
+ clock-latency-ns = <40000>;
+ };
+- opp-1800000000 {
+- opp-hz = /bits/ 64 <1800000000>;
+- opp-microvolt = <875000 875000 950000>;
+- clock-latency-ns = <40000>;
+- };
+- opp-2016000000 {
+- opp-hz = /bits/ 64 <2016000000>;
+- opp-microvolt = <950000 950000 950000>;
+- clock-latency-ns = <40000>;
+- };
+ };
+
+ cluster2_opp_table: opp-table-cluster2 {
+ compatible = "operating-points-v2";
+ opp-shared;
+
++ opp-1200000000{
++ opp-hz = /bits/ 64 <1200000000>;
++ opp-microvolt = <750000 750000 950000>;
++ clock-latency-ns = <40000>;
++ };
+ opp-1416000000 {
+ opp-hz = /bits/ 64 <1416000000>;
+- opp-microvolt = <750000 750000 950000>;
++ opp-microvolt = <762500 762500 950000>;
+ clock-latency-ns = <40000>;
+ };
+ opp-1608000000 {
+@@ -69,16 +64,6 @@ opp-1608000000 {
+ opp-microvolt = <787500 787500 950000>;
+ clock-latency-ns = <40000>;
+ };
+- opp-1800000000 {
+- opp-hz = /bits/ 64 <1800000000>;
+- opp-microvolt = <875000 875000 950000>;
+- clock-latency-ns = <40000>;
+- };
+- opp-2016000000 {
+- opp-hz = /bits/ 64 <2016000000>;
+- opp-microvolt = <950000 950000 950000>;
+- clock-latency-ns = <40000>;
+- };
+ };
+
+ gpu_opp_table: opp-table {
+@@ -104,10 +89,6 @@ opp-700000000 {
+ opp-hz = /bits/ 64 <700000000>;
+ opp-microvolt = <750000 750000 850000>;
+ };
+- opp-850000000 {
+- opp-hz = /bits/ 64 <800000000>;
+- opp-microvolt = <787500 787500 850000>;
+- };
+ };
+ };
+
+diff --git a/arch/loongarch/include/asm/ptrace.h b/arch/loongarch/include/asm/ptrace.h
+index a5b63c84f8541a..e5d21e836d993c 100644
+--- a/arch/loongarch/include/asm/ptrace.h
++++ b/arch/loongarch/include/asm/ptrace.h
+@@ -55,7 +55,7 @@ static inline void instruction_pointer_set(struct pt_regs *regs, unsigned long v
+
+ /* Query offset/name of register from its name/offset */
+ extern int regs_query_register_offset(const char *name);
+-#define MAX_REG_OFFSET (offsetof(struct pt_regs, __last))
++#define MAX_REG_OFFSET (offsetof(struct pt_regs, __last) - sizeof(unsigned long))
+
+ /**
+ * regs_get_register() - get register value from its offset
+diff --git a/arch/loongarch/include/asm/uprobes.h b/arch/loongarch/include/asm/uprobes.h
+index 99a0d198927f8b..025fc3f0a1028d 100644
+--- a/arch/loongarch/include/asm/uprobes.h
++++ b/arch/loongarch/include/asm/uprobes.h
+@@ -15,7 +15,6 @@ typedef u32 uprobe_opcode_t;
+ #define UPROBE_XOLBP_INSN __emit_break(BRK_UPROBE_XOLBP)
+
+ struct arch_uprobe {
+- unsigned long resume_era;
+ u32 insn[2];
+ u32 ixol[2];
+ bool simulate;
+diff --git a/arch/loongarch/kernel/genex.S b/arch/loongarch/kernel/genex.S
+index 4f09121417818d..733a7665e434dc 100644
+--- a/arch/loongarch/kernel/genex.S
++++ b/arch/loongarch/kernel/genex.S
+@@ -16,6 +16,7 @@
+ #include <asm/stackframe.h>
+ #include <asm/thread_info.h>
+
++ .section .cpuidle.text, "ax"
+ .align 5
+ SYM_FUNC_START(__arch_cpu_idle)
+ /* start of idle interrupt region */
+@@ -31,14 +32,16 @@ SYM_FUNC_START(__arch_cpu_idle)
+ */
+ idle 0
+ /* end of idle interrupt region */
+-1: jr ra
++idle_exit:
++ jr ra
+ SYM_FUNC_END(__arch_cpu_idle)
++ .previous
+
+ SYM_CODE_START(handle_vint)
+ UNWIND_HINT_UNDEFINED
+ BACKUP_T0T1
+ SAVE_ALL
+- la_abs t1, 1b
++ la_abs t1, idle_exit
+ LONG_L t0, sp, PT_ERA
+ /* 3 instructions idle interrupt region */
+ ori t0, t0, 0b1100
+diff --git a/arch/loongarch/kernel/kfpu.c b/arch/loongarch/kernel/kfpu.c
+index ec5b28e570c963..4c476904227f95 100644
+--- a/arch/loongarch/kernel/kfpu.c
++++ b/arch/loongarch/kernel/kfpu.c
+@@ -18,11 +18,28 @@ static unsigned int euen_mask = CSR_EUEN_FPEN;
+ static DEFINE_PER_CPU(bool, in_kernel_fpu);
+ static DEFINE_PER_CPU(unsigned int, euen_current);
+
++static inline void fpregs_lock(void)
++{
++ if (IS_ENABLED(CONFIG_PREEMPT_RT))
++ preempt_disable();
++ else
++ local_bh_disable();
++}
++
++static inline void fpregs_unlock(void)
++{
++ if (IS_ENABLED(CONFIG_PREEMPT_RT))
++ preempt_enable();
++ else
++ local_bh_enable();
++}
++
+ void kernel_fpu_begin(void)
+ {
+ unsigned int *euen_curr;
+
+- preempt_disable();
++ if (!irqs_disabled())
++ fpregs_lock();
+
+ WARN_ON(this_cpu_read(in_kernel_fpu));
+
+@@ -73,7 +90,8 @@ void kernel_fpu_end(void)
+
+ this_cpu_write(in_kernel_fpu, false);
+
+- preempt_enable();
++ if (!irqs_disabled())
++ fpregs_unlock();
+ }
+ EXPORT_SYMBOL_GPL(kernel_fpu_end);
+
+diff --git a/arch/loongarch/kernel/time.c b/arch/loongarch/kernel/time.c
+index e2d3bfeb636643..bc75a3a69fc8d5 100644
+--- a/arch/loongarch/kernel/time.c
++++ b/arch/loongarch/kernel/time.c
+@@ -111,7 +111,7 @@ static unsigned long __init get_loops_per_jiffy(void)
+ return lpj;
+ }
+
+-static long init_offset __nosavedata;
++static long init_offset;
+
+ void save_counter(void)
+ {
+diff --git a/arch/loongarch/kernel/uprobes.c b/arch/loongarch/kernel/uprobes.c
+index 87abc7137b738e..6022eb0f71dbce 100644
+--- a/arch/loongarch/kernel/uprobes.c
++++ b/arch/loongarch/kernel/uprobes.c
+@@ -42,7 +42,6 @@ int arch_uprobe_pre_xol(struct arch_uprobe *auprobe, struct pt_regs *regs)
+ utask->autask.saved_trap_nr = current->thread.trap_nr;
+ current->thread.trap_nr = UPROBE_TRAP_NR;
+ instruction_pointer_set(regs, utask->xol_vaddr);
+- user_enable_single_step(current);
+
+ return 0;
+ }
+@@ -53,13 +52,7 @@ int arch_uprobe_post_xol(struct arch_uprobe *auprobe, struct pt_regs *regs)
+
+ WARN_ON_ONCE(current->thread.trap_nr != UPROBE_TRAP_NR);
+ current->thread.trap_nr = utask->autask.saved_trap_nr;
+-
+- if (auprobe->simulate)
+- instruction_pointer_set(regs, auprobe->resume_era);
+- else
+- instruction_pointer_set(regs, utask->vaddr + LOONGARCH_INSN_SIZE);
+-
+- user_disable_single_step(current);
++ instruction_pointer_set(regs, utask->vaddr + LOONGARCH_INSN_SIZE);
+
+ return 0;
+ }
+@@ -70,7 +63,6 @@ void arch_uprobe_abort_xol(struct arch_uprobe *auprobe, struct pt_regs *regs)
+
+ current->thread.trap_nr = utask->autask.saved_trap_nr;
+ instruction_pointer_set(regs, utask->vaddr);
+- user_disable_single_step(current);
+ }
+
+ bool arch_uprobe_xol_was_trapped(struct task_struct *t)
+@@ -90,7 +82,6 @@ bool arch_uprobe_skip_sstep(struct arch_uprobe *auprobe, struct pt_regs *regs)
+
+ insn.word = auprobe->insn[0];
+ arch_simulate_insn(insn, regs);
+- auprobe->resume_era = regs->csr_era;
+
+ return true;
+ }
+diff --git a/arch/loongarch/power/hibernate.c b/arch/loongarch/power/hibernate.c
+index 1e0590542f987c..e7b7346592cb2a 100644
+--- a/arch/loongarch/power/hibernate.c
++++ b/arch/loongarch/power/hibernate.c
+@@ -2,6 +2,7 @@
+ #include <asm/fpu.h>
+ #include <asm/loongson.h>
+ #include <asm/sections.h>
++#include <asm/time.h>
+ #include <asm/tlbflush.h>
+ #include <linux/suspend.h>
+
+@@ -14,6 +15,7 @@ struct pt_regs saved_regs;
+
+ void save_processor_state(void)
+ {
++ save_counter();
+ saved_crmd = csr_read32(LOONGARCH_CSR_CRMD);
+ saved_prmd = csr_read32(LOONGARCH_CSR_PRMD);
+ saved_euen = csr_read32(LOONGARCH_CSR_EUEN);
+@@ -26,6 +28,7 @@ void save_processor_state(void)
+
+ void restore_processor_state(void)
+ {
++ sync_counter();
+ csr_write32(saved_crmd, LOONGARCH_CSR_CRMD);
+ csr_write32(saved_prmd, LOONGARCH_CSR_PRMD);
+ csr_write32(saved_euen, LOONGARCH_CSR_EUEN);
+diff --git a/arch/riscv/boot/dts/sophgo/cv18xx.dtsi b/arch/riscv/boot/dts/sophgo/cv18xx.dtsi
+index c18822ec849f35..58cd546392e056 100644
+--- a/arch/riscv/boot/dts/sophgo/cv18xx.dtsi
++++ b/arch/riscv/boot/dts/sophgo/cv18xx.dtsi
+@@ -341,7 +341,7 @@ dmac: dma-controller@4330000 {
+ 1024 1024 1024 1024>;
+ snps,priority = <0 1 2 3 4 5 6 7>;
+ snps,dma-masters = <2>;
+- snps,data-width = <4>;
++ snps,data-width = <2>;
+ status = "disabled";
+ };
+
+diff --git a/arch/x86/coco/sev/core.c b/arch/x86/coco/sev/core.c
+index d14bce0f82cc58..d010edfca21a91 100644
+--- a/arch/x86/coco/sev/core.c
++++ b/arch/x86/coco/sev/core.c
+@@ -959,6 +959,102 @@ void snp_accept_memory(phys_addr_t start, phys_addr_t end)
+ set_pages_state(vaddr, npages, SNP_PAGE_STATE_PRIVATE);
+ }
+
++static int vmgexit_ap_control(u64 event, struct sev_es_save_area *vmsa, u32 apic_id)
++{
++ bool create = event != SVM_VMGEXIT_AP_DESTROY;
++ struct ghcb_state state;
++ unsigned long flags;
++ struct ghcb *ghcb;
++ int ret = 0;
++
++ local_irq_save(flags);
++
++ ghcb = __sev_get_ghcb(&state);
++
++ vc_ghcb_invalidate(ghcb);
++
++ if (create)
++ ghcb_set_rax(ghcb, vmsa->sev_features);
++
++ ghcb_set_sw_exit_code(ghcb, SVM_VMGEXIT_AP_CREATION);
++ ghcb_set_sw_exit_info_1(ghcb,
++ ((u64)apic_id << 32) |
++ ((u64)snp_vmpl << 16) |
++ event);
++ ghcb_set_sw_exit_info_2(ghcb, __pa(vmsa));
++
++ sev_es_wr_ghcb_msr(__pa(ghcb));
++ VMGEXIT();
++
++ if (!ghcb_sw_exit_info_1_is_valid(ghcb) ||
++ lower_32_bits(ghcb->save.sw_exit_info_1)) {
++ pr_err("SNP AP %s error\n", (create ? "CREATE" : "DESTROY"));
++ ret = -EINVAL;
++ }
++
++ __sev_put_ghcb(&state);
++
++ local_irq_restore(flags);
++
++ return ret;
++}
++
++static int snp_set_vmsa(void *va, void *caa, int apic_id, bool make_vmsa)
++{
++ int ret;
++
++ if (snp_vmpl) {
++ struct svsm_call call = {};
++ unsigned long flags;
++
++ local_irq_save(flags);
++
++ call.caa = this_cpu_read(svsm_caa);
++ call.rcx = __pa(va);
++
++ if (make_vmsa) {
++ /* Protocol 0, Call ID 2 */
++ call.rax = SVSM_CORE_CALL(SVSM_CORE_CREATE_VCPU);
++ call.rdx = __pa(caa);
++ call.r8 = apic_id;
++ } else {
++ /* Protocol 0, Call ID 3 */
++ call.rax = SVSM_CORE_CALL(SVSM_CORE_DELETE_VCPU);
++ }
++
++ ret = svsm_perform_call_protocol(&call);
++
++ local_irq_restore(flags);
++ } else {
++ /*
++ * If the kernel runs at VMPL0, it can change the VMSA
++ * bit for a page using the RMPADJUST instruction.
++ * However, for the instruction to succeed it must
++ * target the permissions of a lesser privileged (higher
++ * numbered) VMPL level, so use VMPL1.
++ */
++ u64 attrs = 1;
++
++ if (make_vmsa)
++ attrs |= RMPADJUST_VMSA_PAGE_BIT;
++
++ ret = rmpadjust((unsigned long)va, RMP_PG_SIZE_4K, attrs);
++ }
++
++ return ret;
++}
++
++static void snp_cleanup_vmsa(struct sev_es_save_area *vmsa, int apic_id)
++{
++ int err;
++
++ err = snp_set_vmsa(vmsa, NULL, apic_id, false);
++ if (err)
++ pr_err("clear VMSA page failed (%u), leaking page\n", err);
++ else
++ free_page((unsigned long)vmsa);
++}
++
+ static void set_pte_enc(pte_t *kpte, int level, void *va)
+ {
+ struct pte_enc_desc d = {
+@@ -1005,7 +1101,8 @@ static void unshare_all_memory(void)
+ data = per_cpu(runtime_data, cpu);
+ ghcb = (unsigned long)&data->ghcb_page;
+
+- if (addr <= ghcb && ghcb <= addr + size) {
++ /* Handle the case of a huge page containing the GHCB page */
++ if (addr <= ghcb && ghcb < addr + size) {
+ skipped_addr = true;
+ break;
+ }
+@@ -1055,11 +1152,70 @@ void snp_kexec_begin(void)
+ pr_warn("Failed to stop shared<->private conversions\n");
+ }
+
++/*
++ * Shut down all APs except the one handling kexec/kdump and clear
++ * the VMSA tag on the APs' VMSA pages, as they are no longer used
++ * as VMSA pages.
++ */
++static void shutdown_all_aps(void)
++{
++ struct sev_es_save_area *vmsa;
++ int apic_id, this_cpu, cpu;
++
++ this_cpu = get_cpu();
++
++ /*
++ * APs are already in HLT loop when enc_kexec_finish() callback
++ * is invoked.
++ */
++ for_each_present_cpu(cpu) {
++ vmsa = per_cpu(sev_vmsa, cpu);
++
++ /*
++ * The BSP or offlined APs do not have a guest-allocated VMSA,
++ * and there is no need to clear the VMSA tag for this page.
++ */
++ if (!vmsa)
++ continue;
++
++ /*
++ * Cannot clear the VMSA tag for the currently running vCPU.
++ */
++ if (this_cpu == cpu) {
++ unsigned long pa;
++ struct page *p;
++
++ pa = __pa(vmsa);
++ /*
++ * Mark the VMSA page of the running vCPU as offline
++ * so that it is excluded and not touched by makedumpfile
++ * while generating vmcore during kdump.
++ */
++ p = pfn_to_online_page(pa >> PAGE_SHIFT);
++ if (p)
++ __SetPageOffline(p);
++ continue;
++ }
++
++ apic_id = cpuid_to_apicid[cpu];
++
++ /*
++ * Issue AP destroy to ensure AP gets kicked out of guest mode
++ * to allow using RMPADJUST to remove the VMSA tag on its
++ * VMSA page.
++ */
++ vmgexit_ap_control(SVM_VMGEXIT_AP_DESTROY, vmsa, apic_id);
++ snp_cleanup_vmsa(vmsa, apic_id);
++ }
++
++ put_cpu();
++}
++
+ void snp_kexec_finish(void)
+ {
+ struct sev_es_runtime_data *data;
++ unsigned long size, addr;
+ unsigned int level, cpu;
+- unsigned long size;
+ struct ghcb *ghcb;
+ pte_t *pte;
+
+@@ -1069,6 +1225,8 @@ void snp_kexec_finish(void)
+ if (!IS_ENABLED(CONFIG_KEXEC_CORE))
+ return;
+
++ shutdown_all_aps();
++
+ unshare_all_memory();
+
+ /*
+@@ -1085,54 +1243,11 @@ void snp_kexec_finish(void)
+ ghcb = &data->ghcb_page;
+ pte = lookup_address((unsigned long)ghcb, &level);
+ size = page_level_size(level);
+- set_pte_enc(pte, level, (void *)ghcb);
+- snp_set_memory_private((unsigned long)ghcb, (size / PAGE_SIZE));
+- }
+-}
+-
+-static int snp_set_vmsa(void *va, void *caa, int apic_id, bool make_vmsa)
+-{
+- int ret;
+-
+- if (snp_vmpl) {
+- struct svsm_call call = {};
+- unsigned long flags;
+-
+- local_irq_save(flags);
+-
+- call.caa = this_cpu_read(svsm_caa);
+- call.rcx = __pa(va);
+-
+- if (make_vmsa) {
+- /* Protocol 0, Call ID 2 */
+- call.rax = SVSM_CORE_CALL(SVSM_CORE_CREATE_VCPU);
+- call.rdx = __pa(caa);
+- call.r8 = apic_id;
+- } else {
+- /* Protocol 0, Call ID 3 */
+- call.rax = SVSM_CORE_CALL(SVSM_CORE_DELETE_VCPU);
+- }
+-
+- ret = svsm_perform_call_protocol(&call);
+-
+- local_irq_restore(flags);
+- } else {
+- /*
+- * If the kernel runs at VMPL0, it can change the VMSA
+- * bit for a page using the RMPADJUST instruction.
+- * However, for the instruction to succeed it must
+- * target the permissions of a lesser privileged (higher
+- * numbered) VMPL level, so use VMPL1.
+- */
+- u64 attrs = 1;
+-
+- if (make_vmsa)
+- attrs |= RMPADJUST_VMSA_PAGE_BIT;
+-
+- ret = rmpadjust((unsigned long)va, RMP_PG_SIZE_4K, attrs);
++ /* Handle the case of a huge page containing the GHCB page */
++ addr = (unsigned long)ghcb & page_level_mask(level);
++ set_pte_enc(pte, level, (void *)addr);
++ snp_set_memory_private(addr, (size / PAGE_SIZE));
+ }
+-
+- return ret;
+ }
+
+ #define __ATTR_BASE (SVM_SELECTOR_P_MASK | SVM_SELECTOR_S_MASK)
+@@ -1166,24 +1281,10 @@ static void *snp_alloc_vmsa_page(int cpu)
+ return page_address(p + 1);
+ }
+
+-static void snp_cleanup_vmsa(struct sev_es_save_area *vmsa, int apic_id)
+-{
+- int err;
+-
+- err = snp_set_vmsa(vmsa, NULL, apic_id, false);
+- if (err)
+- pr_err("clear VMSA page failed (%u), leaking page\n", err);
+- else
+- free_page((unsigned long)vmsa);
+-}
+-
+ static int wakeup_cpu_via_vmgexit(u32 apic_id, unsigned long start_ip)
+ {
+ struct sev_es_save_area *cur_vmsa, *vmsa;
+- struct ghcb_state state;
+ struct svsm_ca *caa;
+- unsigned long flags;
+- struct ghcb *ghcb;
+ u8 sipi_vector;
+ int cpu, ret;
+ u64 cr4;
+@@ -1297,33 +1398,7 @@ static int wakeup_cpu_via_vmgexit(u32 apic_id, unsigned long start_ip)
+ }
+
+ /* Issue VMGEXIT AP Creation NAE event */
+- local_irq_save(flags);
+-
+- ghcb = __sev_get_ghcb(&state);
+-
+- vc_ghcb_invalidate(ghcb);
+- ghcb_set_rax(ghcb, vmsa->sev_features);
+- ghcb_set_sw_exit_code(ghcb, SVM_VMGEXIT_AP_CREATION);
+- ghcb_set_sw_exit_info_1(ghcb,
+- ((u64)apic_id << 32) |
+- ((u64)snp_vmpl << 16) |
+- SVM_VMGEXIT_AP_CREATE);
+- ghcb_set_sw_exit_info_2(ghcb, __pa(vmsa));
+-
+- sev_es_wr_ghcb_msr(__pa(ghcb));
+- VMGEXIT();
+-
+- if (!ghcb_sw_exit_info_1_is_valid(ghcb) ||
+- lower_32_bits(ghcb->save.sw_exit_info_1)) {
+- pr_err("SNP AP Creation error\n");
+- ret = -EINVAL;
+- }
+-
+- __sev_put_ghcb(&state);
+-
+- local_irq_restore(flags);
+-
+- /* Perform cleanup if there was an error */
++ ret = vmgexit_ap_control(SVM_VMGEXIT_AP_CREATE, vmsa, apic_id);
+ if (ret) {
+ snp_cleanup_vmsa(vmsa, apic_id);
+ vmsa = NULL;
+diff --git a/arch/x86/include/asm/amd_nb.h b/arch/x86/include/asm/amd_nb.h
+index 4c4efb93045ed5..adfa0854cf2dad 100644
+--- a/arch/x86/include/asm/amd_nb.h
++++ b/arch/x86/include/asm/amd_nb.h
+@@ -27,7 +27,6 @@ struct amd_l3_cache {
+ };
+
+ struct amd_northbridge {
+- struct pci_dev *root;
+ struct pci_dev *misc;
+ struct pci_dev *link;
+ struct amd_l3_cache l3_cache;
+diff --git a/arch/x86/include/asm/amd_node.h b/arch/x86/include/asm/amd_node.h
+index 113ad3e8ee40ae..002c3afbd30f9b 100644
+--- a/arch/x86/include/asm/amd_node.h
++++ b/arch/x86/include/asm/amd_node.h
+@@ -30,7 +30,20 @@ static inline u16 amd_num_nodes(void)
+ return topology_amd_nodes_per_pkg() * topology_max_packages();
+ }
+
++#ifdef CONFIG_AMD_NODE
+ int __must_check amd_smn_read(u16 node, u32 address, u32 *value);
+ int __must_check amd_smn_write(u16 node, u32 address, u32 value);
+
++/* Should only be used by the HSMP driver. */
++int __must_check amd_smn_hsmp_rdwr(u16 node, u32 address, u32 *value, bool write);
++#else
++static inline int __must_check amd_smn_read(u16 node, u32 address, u32 *value) { return -ENODEV; }
++static inline int __must_check amd_smn_write(u16 node, u32 address, u32 value) { return -ENODEV; }
++
++static inline int __must_check amd_smn_hsmp_rdwr(u16 node, u32 address, u32 *value, bool write)
++{
++ return -ENODEV;
++}
++#endif /* CONFIG_AMD_NODE */
++
+ #endif /*_ASM_X86_AMD_NODE_H_*/
+diff --git a/arch/x86/kernel/amd_nb.c b/arch/x86/kernel/amd_nb.c
+index 67e773744edb22..6d12a9b694327b 100644
+--- a/arch/x86/kernel/amd_nb.c
++++ b/arch/x86/kernel/amd_nb.c
+@@ -73,7 +73,6 @@ static int amd_cache_northbridges(void)
+ amd_northbridges.nb = nb;
+
+ for (i = 0; i < amd_northbridges.num; i++) {
+- node_to_amd_nb(i)->root = amd_node_get_root(i);
+ node_to_amd_nb(i)->misc = amd_node_get_func(i, 3);
+
+ /*
+diff --git a/arch/x86/kernel/amd_node.c b/arch/x86/kernel/amd_node.c
+index d2ec7fd555c515..65045f223c10a2 100644
+--- a/arch/x86/kernel/amd_node.c
++++ b/arch/x86/kernel/amd_node.c
+@@ -97,6 +97,9 @@ static DEFINE_MUTEX(smn_mutex);
+ #define SMN_INDEX_OFFSET 0x60
+ #define SMN_DATA_OFFSET 0x64
+
++#define HSMP_INDEX_OFFSET 0xc4
++#define HSMP_DATA_OFFSET 0xc8
++
+ /*
+ * SMN accesses may fail in ways that are difficult to detect here in the called
+ * functions amd_smn_read() and amd_smn_write(). Therefore, callers must do
+@@ -179,6 +182,12 @@ int __must_check amd_smn_write(u16 node, u32 address, u32 value)
+ }
+ EXPORT_SYMBOL_GPL(amd_smn_write);
+
++int __must_check amd_smn_hsmp_rdwr(u16 node, u32 address, u32 *value, bool write)
++{
++ return __amd_smn_rw(HSMP_INDEX_OFFSET, HSMP_DATA_OFFSET, node, address, value, write);
++}
++EXPORT_SYMBOL_GPL(amd_smn_hsmp_rdwr);
++
+ static int amd_cache_roots(void)
+ {
+ u16 node, num_nodes = amd_num_nodes();
+diff --git a/block/bio.c b/block/bio.c
+index 6deea10b2cd3d6..ca51bb9d3793af 100644
+--- a/block/bio.c
++++ b/block/bio.c
+@@ -611,7 +611,7 @@ struct bio *bio_kmalloc(unsigned short nr_vecs, gfp_t gfp_mask)
+ {
+ struct bio *bio;
+
+- if (nr_vecs > UIO_MAXIOV)
++ if (nr_vecs > BIO_MAX_INLINE_VECS)
+ return NULL;
+ return kmalloc(struct_size(bio, bi_inline_vecs, nr_vecs), gfp_mask);
+ }
+diff --git a/drivers/accel/ivpu/ivpu_drv.c b/drivers/accel/ivpu/ivpu_drv.c
+index 93c3687d30b795..c74fc4958bb95c 100644
+--- a/drivers/accel/ivpu/ivpu_drv.c
++++ b/drivers/accel/ivpu/ivpu_drv.c
+@@ -7,6 +7,7 @@
+ #include <linux/module.h>
+ #include <linux/pci.h>
+ #include <linux/pm_runtime.h>
++#include <linux/workqueue.h>
+ #include <generated/utsrelease.h>
+
+ #include <drm/drm_accel.h>
+@@ -419,6 +420,9 @@ void ivpu_prepare_for_reset(struct ivpu_device *vdev)
+ {
+ ivpu_hw_irq_disable(vdev);
+ disable_irq(vdev->irq);
++ flush_work(&vdev->irq_ipc_work);
++ flush_work(&vdev->irq_dct_work);
++ flush_work(&vdev->context_abort_work);
+ ivpu_ipc_disable(vdev);
+ ivpu_mmu_disable(vdev);
+ }
+@@ -463,31 +467,6 @@ static const struct drm_driver driver = {
+ .major = 1,
+ };
+
+-static irqreturn_t ivpu_irq_thread_handler(int irq, void *arg)
+-{
+- struct ivpu_device *vdev = arg;
+- u8 irq_src;
+-
+- if (kfifo_is_empty(&vdev->hw->irq.fifo))
+- return IRQ_NONE;
+-
+- while (kfifo_get(&vdev->hw->irq.fifo, &irq_src)) {
+- switch (irq_src) {
+- case IVPU_HW_IRQ_SRC_IPC:
+- ivpu_ipc_irq_thread_handler(vdev);
+- break;
+- case IVPU_HW_IRQ_SRC_DCT:
+- ivpu_pm_dct_irq_thread_handler(vdev);
+- break;
+- default:
+- ivpu_err_ratelimited(vdev, "Unknown IRQ source: %u\n", irq_src);
+- break;
+- }
+- }
+-
+- return IRQ_HANDLED;
+-}
+-
+ static int ivpu_irq_init(struct ivpu_device *vdev)
+ {
+ struct pci_dev *pdev = to_pci_dev(vdev->drm.dev);
+@@ -499,12 +478,16 @@ static int ivpu_irq_init(struct ivpu_device *vdev)
+ return ret;
+ }
+
++ INIT_WORK(&vdev->irq_ipc_work, ivpu_ipc_irq_work_fn);
++ INIT_WORK(&vdev->irq_dct_work, ivpu_pm_irq_dct_work_fn);
++ INIT_WORK(&vdev->context_abort_work, ivpu_context_abort_work_fn);
++
+ ivpu_irq_handlers_init(vdev);
+
+ vdev->irq = pci_irq_vector(pdev, 0);
+
+- ret = devm_request_threaded_irq(vdev->drm.dev, vdev->irq, ivpu_hw_irq_handler,
+- ivpu_irq_thread_handler, IRQF_NO_AUTOEN, DRIVER_NAME, vdev);
++ ret = devm_request_irq(vdev->drm.dev, vdev->irq, ivpu_hw_irq_handler,
++ IRQF_NO_AUTOEN, DRIVER_NAME, vdev);
+ if (ret)
+ ivpu_err(vdev, "Failed to request an IRQ %d\n", ret);
+
+@@ -597,8 +580,6 @@ static int ivpu_dev_init(struct ivpu_device *vdev)
+ vdev->db_limit.min = IVPU_MIN_DB;
+ vdev->db_limit.max = IVPU_MAX_DB;
+
+- INIT_WORK(&vdev->context_abort_work, ivpu_context_abort_thread_handler);
+-
+ ret = drmm_mutex_init(&vdev->drm, &vdev->context_list_lock);
+ if (ret)
+ goto err_xa_destroy;
+diff --git a/drivers/accel/ivpu/ivpu_drv.h b/drivers/accel/ivpu/ivpu_drv.h
+index ebfcf3e42a3d93..b57d878f2fcd39 100644
+--- a/drivers/accel/ivpu/ivpu_drv.h
++++ b/drivers/accel/ivpu/ivpu_drv.h
+@@ -137,12 +137,15 @@ struct ivpu_device {
+ struct mutex context_list_lock; /* Protects user context addition/removal */
+ struct xarray context_xa;
+ struct xa_limit context_xa_limit;
+- struct work_struct context_abort_work;
+
+ struct xarray db_xa;
+ struct xa_limit db_limit;
+ u32 db_next;
+
++ struct work_struct irq_ipc_work;
++ struct work_struct irq_dct_work;
++ struct work_struct context_abort_work;
++
+ struct mutex bo_list_lock; /* Protects bo_list */
+ struct list_head bo_list;
+
+diff --git a/drivers/accel/ivpu/ivpu_hw.c b/drivers/accel/ivpu/ivpu_hw.c
+index 65100576daf295..d19e976893f8a7 100644
+--- a/drivers/accel/ivpu/ivpu_hw.c
++++ b/drivers/accel/ivpu/ivpu_hw.c
+@@ -285,8 +285,6 @@ void ivpu_hw_profiling_freq_drive(struct ivpu_device *vdev, bool enable)
+
+ void ivpu_irq_handlers_init(struct ivpu_device *vdev)
+ {
+- INIT_KFIFO(vdev->hw->irq.fifo);
+-
+ if (ivpu_hw_ip_gen(vdev) == IVPU_HW_IP_37XX)
+ vdev->hw->irq.ip_irq_handler = ivpu_hw_ip_irq_handler_37xx;
+ else
+@@ -300,7 +298,6 @@ void ivpu_irq_handlers_init(struct ivpu_device *vdev)
+
+ void ivpu_hw_irq_enable(struct ivpu_device *vdev)
+ {
+- kfifo_reset(&vdev->hw->irq.fifo);
+ ivpu_hw_ip_irq_enable(vdev);
+ ivpu_hw_btrs_irq_enable(vdev);
+ }
+@@ -327,8 +324,6 @@ irqreturn_t ivpu_hw_irq_handler(int irq, void *ptr)
+ /* Re-enable global interrupts to re-trigger MSI for pending interrupts */
+ ivpu_hw_btrs_global_int_enable(vdev);
+
+- if (!kfifo_is_empty(&vdev->hw->irq.fifo))
+- return IRQ_WAKE_THREAD;
+ if (ip_handled || btrs_handled)
+ return IRQ_HANDLED;
+ return IRQ_NONE;
+diff --git a/drivers/accel/ivpu/ivpu_hw.h b/drivers/accel/ivpu/ivpu_hw.h
+index 1e85306bcd0653..aa8fdbf2fc222c 100644
+--- a/drivers/accel/ivpu/ivpu_hw.h
++++ b/drivers/accel/ivpu/ivpu_hw.h
+@@ -6,18 +6,10 @@
+ #ifndef __IVPU_HW_H__
+ #define __IVPU_HW_H__
+
+-#include <linux/kfifo.h>
+-
+ #include "ivpu_drv.h"
+ #include "ivpu_hw_btrs.h"
+ #include "ivpu_hw_ip.h"
+
+-#define IVPU_HW_IRQ_FIFO_LENGTH 1024
+-
+-#define IVPU_HW_IRQ_SRC_IPC 1
+-#define IVPU_HW_IRQ_SRC_MMU_EVTQ 2
+-#define IVPU_HW_IRQ_SRC_DCT 3
+-
+ struct ivpu_addr_range {
+ resource_size_t start;
+ resource_size_t end;
+@@ -27,7 +19,6 @@ struct ivpu_hw_info {
+ struct {
+ bool (*btrs_irq_handler)(struct ivpu_device *vdev, int irq);
+ bool (*ip_irq_handler)(struct ivpu_device *vdev, int irq);
+- DECLARE_KFIFO(fifo, u8, IVPU_HW_IRQ_FIFO_LENGTH);
+ } irq;
+ struct {
+ struct ivpu_addr_range global;
+diff --git a/drivers/accel/ivpu/ivpu_hw_btrs.c b/drivers/accel/ivpu/ivpu_hw_btrs.c
+index 51b9581bb60aca..e17c69fe5a03a7 100644
+--- a/drivers/accel/ivpu/ivpu_hw_btrs.c
++++ b/drivers/accel/ivpu/ivpu_hw_btrs.c
+@@ -666,8 +666,7 @@ bool ivpu_hw_btrs_irq_handler_lnl(struct ivpu_device *vdev, int irq)
+
+ if (REG_TEST_FLD(VPU_HW_BTRS_LNL_INTERRUPT_STAT, SURV_ERR, status)) {
+ ivpu_dbg(vdev, IRQ, "Survivability IRQ\n");
+- if (!kfifo_put(&vdev->hw->irq.fifo, IVPU_HW_IRQ_SRC_DCT))
+- ivpu_err_ratelimited(vdev, "IRQ FIFO full\n");
++ queue_work(system_wq, &vdev->irq_dct_work);
+ }
+
+ if (REG_TEST_FLD(VPU_HW_BTRS_LNL_INTERRUPT_STAT, FREQ_CHANGE, status)) {
+diff --git a/drivers/accel/ivpu/ivpu_ipc.c b/drivers/accel/ivpu/ivpu_ipc.c
+index 5daaf07fc1a712..39f83225c1815a 100644
+--- a/drivers/accel/ivpu/ivpu_ipc.c
++++ b/drivers/accel/ivpu/ivpu_ipc.c
+@@ -460,13 +460,12 @@ void ivpu_ipc_irq_handler(struct ivpu_device *vdev)
+ }
+ }
+
+- if (!list_empty(&ipc->cb_msg_list))
+- if (!kfifo_put(&vdev->hw->irq.fifo, IVPU_HW_IRQ_SRC_IPC))
+- ivpu_err_ratelimited(vdev, "IRQ FIFO full\n");
++ queue_work(system_wq, &vdev->irq_ipc_work);
+ }
+
+-void ivpu_ipc_irq_thread_handler(struct ivpu_device *vdev)
++void ivpu_ipc_irq_work_fn(struct work_struct *work)
+ {
++ struct ivpu_device *vdev = container_of(work, struct ivpu_device, irq_ipc_work);
+ struct ivpu_ipc_info *ipc = vdev->ipc;
+ struct ivpu_ipc_rx_msg *rx_msg, *r;
+ struct list_head cb_msg_list;
+diff --git a/drivers/accel/ivpu/ivpu_ipc.h b/drivers/accel/ivpu/ivpu_ipc.h
+index b4dfb504679bac..b524a1985b9de8 100644
+--- a/drivers/accel/ivpu/ivpu_ipc.h
++++ b/drivers/accel/ivpu/ivpu_ipc.h
+@@ -90,7 +90,7 @@ void ivpu_ipc_disable(struct ivpu_device *vdev);
+ void ivpu_ipc_reset(struct ivpu_device *vdev);
+
+ void ivpu_ipc_irq_handler(struct ivpu_device *vdev);
+-void ivpu_ipc_irq_thread_handler(struct ivpu_device *vdev);
++void ivpu_ipc_irq_work_fn(struct work_struct *work);
+
+ void ivpu_ipc_consumer_add(struct ivpu_device *vdev, struct ivpu_ipc_consumer *cons,
+ u32 channel, ivpu_ipc_rx_callback_t callback);
+diff --git a/drivers/accel/ivpu/ivpu_job.c b/drivers/accel/ivpu/ivpu_job.c
+index 79b77d8a35a772..4a8013f669f912 100644
+--- a/drivers/accel/ivpu/ivpu_job.c
++++ b/drivers/accel/ivpu/ivpu_job.c
+@@ -17,6 +17,7 @@
+ #include "ivpu_ipc.h"
+ #include "ivpu_job.h"
+ #include "ivpu_jsm_msg.h"
++#include "ivpu_mmu.h"
+ #include "ivpu_pm.h"
+ #include "ivpu_trace.h"
+ #include "vpu_boot_api.h"
+@@ -360,12 +361,15 @@ void ivpu_context_abort_locked(struct ivpu_file_priv *file_priv)
+ struct ivpu_device *vdev = file_priv->vdev;
+
+ lockdep_assert_held(&file_priv->lock);
++ ivpu_dbg(vdev, JOB, "Context ID: %u abort\n", file_priv->ctx.id);
+
+ ivpu_cmdq_fini_all(file_priv);
+
+ if (vdev->fw->sched_mode == VPU_SCHEDULING_MODE_OS)
+ ivpu_jsm_context_release(vdev, file_priv->ctx.id);
+
++ ivpu_mmu_disable_ssid_events(vdev, file_priv->ctx.id);
++
+ file_priv->aborted = true;
+ }
+
+@@ -845,12 +849,12 @@ void ivpu_job_done_consumer_fini(struct ivpu_device *vdev)
+ ivpu_ipc_consumer_del(vdev, &vdev->job_done_consumer);
+ }
+
+-void ivpu_context_abort_thread_handler(struct work_struct *work)
++void ivpu_context_abort_work_fn(struct work_struct *work)
+ {
+ struct ivpu_device *vdev = container_of(work, struct ivpu_device, context_abort_work);
+ struct ivpu_file_priv *file_priv;
+- unsigned long ctx_id;
+ struct ivpu_job *job;
++ unsigned long ctx_id;
+ unsigned long id;
+
+ if (vdev->fw->sched_mode == VPU_SCHEDULING_MODE_HW)
+@@ -867,6 +871,13 @@ void ivpu_context_abort_thread_handler(struct work_struct *work)
+ }
+ mutex_unlock(&vdev->context_list_lock);
+
++ /*
++ * We will not receive new MMU event interrupts until existing events are discarded;
++ * however, we want to discard these events only after aborting the faulty context
++ * to avoid generating new faults from that context.
++ */
++ ivpu_mmu_discard_events(vdev);
++
+ if (vdev->fw->sched_mode != VPU_SCHEDULING_MODE_HW)
+ return;
+
+diff --git a/drivers/accel/ivpu/ivpu_job.h b/drivers/accel/ivpu/ivpu_job.h
+index af1ed039569cd6..8c0e1845e5dd60 100644
+--- a/drivers/accel/ivpu/ivpu_job.h
++++ b/drivers/accel/ivpu/ivpu_job.h
+@@ -66,7 +66,7 @@ void ivpu_cmdq_reset_all_contexts(struct ivpu_device *vdev);
+
+ void ivpu_job_done_consumer_init(struct ivpu_device *vdev);
+ void ivpu_job_done_consumer_fini(struct ivpu_device *vdev);
+-void ivpu_context_abort_thread_handler(struct work_struct *work);
++void ivpu_context_abort_work_fn(struct work_struct *work);
+
+ void ivpu_jobs_abort_all(struct ivpu_device *vdev);
+
+diff --git a/drivers/accel/ivpu/ivpu_mmu.c b/drivers/accel/ivpu/ivpu_mmu.c
+index 21f820dd0c658a..b80bdded9fd797 100644
+--- a/drivers/accel/ivpu/ivpu_mmu.c
++++ b/drivers/accel/ivpu/ivpu_mmu.c
+@@ -20,6 +20,12 @@
+ #define IVPU_MMU_REG_CR0 0x00200020u
+ #define IVPU_MMU_REG_CR0ACK 0x00200024u
+ #define IVPU_MMU_REG_CR0ACK_VAL_MASK GENMASK(31, 0)
++#define IVPU_MMU_REG_CR0_ATSCHK_MASK BIT(4)
++#define IVPU_MMU_REG_CR0_CMDQEN_MASK BIT(3)
++#define IVPU_MMU_REG_CR0_EVTQEN_MASK BIT(2)
++#define IVPU_MMU_REG_CR0_PRIQEN_MASK BIT(1)
++#define IVPU_MMU_REG_CR0_SMMUEN_MASK BIT(0)
++
+ #define IVPU_MMU_REG_CR1 0x00200028u
+ #define IVPU_MMU_REG_CR2 0x0020002cu
+ #define IVPU_MMU_REG_IRQ_CTRL 0x00200050u
+@@ -141,12 +147,6 @@
+ #define IVPU_MMU_IRQ_EVTQ_EN BIT(2)
+ #define IVPU_MMU_IRQ_GERROR_EN BIT(0)
+
+-#define IVPU_MMU_CR0_ATSCHK BIT(4)
+-#define IVPU_MMU_CR0_CMDQEN BIT(3)
+-#define IVPU_MMU_CR0_EVTQEN BIT(2)
+-#define IVPU_MMU_CR0_PRIQEN BIT(1)
+-#define IVPU_MMU_CR0_SMMUEN BIT(0)
+-
+ #define IVPU_MMU_CR1_TABLE_SH GENMASK(11, 10)
+ #define IVPU_MMU_CR1_TABLE_OC GENMASK(9, 8)
+ #define IVPU_MMU_CR1_TABLE_IC GENMASK(7, 6)
+@@ -596,7 +596,7 @@ static int ivpu_mmu_reset(struct ivpu_device *vdev)
+ REGV_WR32(IVPU_MMU_REG_CMDQ_PROD, 0);
+ REGV_WR32(IVPU_MMU_REG_CMDQ_CONS, 0);
+
+- val = IVPU_MMU_CR0_CMDQEN;
++ val = REG_SET_FLD(IVPU_MMU_REG_CR0, CMDQEN, 0);
+ ret = ivpu_mmu_reg_write_cr0(vdev, val);
+ if (ret)
+ return ret;
+@@ -617,12 +617,12 @@ static int ivpu_mmu_reset(struct ivpu_device *vdev)
+ REGV_WR32(IVPU_MMU_REG_EVTQ_PROD_SEC, 0);
+ REGV_WR32(IVPU_MMU_REG_EVTQ_CONS_SEC, 0);
+
+- val |= IVPU_MMU_CR0_EVTQEN;
++ val = REG_SET_FLD(IVPU_MMU_REG_CR0, EVTQEN, val);
+ ret = ivpu_mmu_reg_write_cr0(vdev, val);
+ if (ret)
+ return ret;
+
+- val |= IVPU_MMU_CR0_ATSCHK;
++ val = REG_SET_FLD(IVPU_MMU_REG_CR0, ATSCHK, val);
+ ret = ivpu_mmu_reg_write_cr0(vdev, val);
+ if (ret)
+ return ret;
+@@ -631,7 +631,7 @@ static int ivpu_mmu_reset(struct ivpu_device *vdev)
+ if (ret)
+ return ret;
+
+- val |= IVPU_MMU_CR0_SMMUEN;
++ val = REG_SET_FLD(IVPU_MMU_REG_CR0, SMMUEN, val);
+ return ivpu_mmu_reg_write_cr0(vdev, val);
+ }
+
+@@ -725,8 +725,8 @@ static int ivpu_mmu_cdtab_entry_set(struct ivpu_device *vdev, u32 ssid, u64 cd_d
+ cd[2] = 0;
+ cd[3] = 0x0000000000007444;
+
+- /* For global context generate memory fault on VPU */
+- if (ssid == IVPU_GLOBAL_CONTEXT_MMU_SSID)
++ /* For global and reserved contexts generate memory fault on VPU */
++ if (ssid == IVPU_GLOBAL_CONTEXT_MMU_SSID || ssid == IVPU_RESERVED_CONTEXT_MMU_SSID)
+ cd[0] |= IVPU_MMU_CD_0_A;
+
+ if (valid)
+@@ -870,24 +870,95 @@ static u32 *ivpu_mmu_get_event(struct ivpu_device *vdev)
+ return evt;
+ }
+
++static int ivpu_mmu_evtq_set(struct ivpu_device *vdev, bool enable)
++{
++ u32 val = REGV_RD32(IVPU_MMU_REG_CR0);
++
++ if (enable)
++ val = REG_SET_FLD(IVPU_MMU_REG_CR0, EVTQEN, val);
++ else
++ val = REG_CLR_FLD(IVPU_MMU_REG_CR0, EVTQEN, val);
++ REGV_WR32(IVPU_MMU_REG_CR0, val);
++
++ return REGV_POLL_FLD(IVPU_MMU_REG_CR0ACK, VAL, val, IVPU_MMU_REG_TIMEOUT_US);
++}
++
++static int ivpu_mmu_evtq_enable(struct ivpu_device *vdev)
++{
++ return ivpu_mmu_evtq_set(vdev, true);
++}
++
++static int ivpu_mmu_evtq_disable(struct ivpu_device *vdev)
++{
++ return ivpu_mmu_evtq_set(vdev, false);
++}
++
++void ivpu_mmu_discard_events(struct ivpu_device *vdev)
++{
++ /*
++ * Disable event queue (stop MMU from updating the producer)
++ * to allow synchronization of consumer and producer indexes
++ */
++ ivpu_mmu_evtq_disable(vdev);
++
++ vdev->mmu->evtq.cons = REGV_RD32(IVPU_MMU_REG_EVTQ_PROD_SEC);
++ REGV_WR32(IVPU_MMU_REG_EVTQ_CONS_SEC, vdev->mmu->evtq.cons);
++ vdev->mmu->evtq.prod = REGV_RD32(IVPU_MMU_REG_EVTQ_PROD_SEC);
++
++ ivpu_mmu_evtq_enable(vdev);
++
++ drm_WARN_ON_ONCE(&vdev->drm, vdev->mmu->evtq.cons != vdev->mmu->evtq.prod);
++}
++
++int ivpu_mmu_disable_ssid_events(struct ivpu_device *vdev, u32 ssid)
++{
++ struct ivpu_mmu_info *mmu = vdev->mmu;
++ struct ivpu_mmu_cdtab *cdtab = &mmu->cdtab;
++ u64 *entry;
++ u64 val;
++
++ if (ssid > IVPU_MMU_CDTAB_ENT_COUNT)
++ return -EINVAL;
++
++ entry = cdtab->base + (ssid * IVPU_MMU_CDTAB_ENT_SIZE);
++
++ val = READ_ONCE(entry[0]);
++ val &= ~IVPU_MMU_CD_0_R;
++ WRITE_ONCE(entry[0], val);
++
++ if (!ivpu_is_force_snoop_enabled(vdev))
++ clflush_cache_range(entry, IVPU_MMU_CDTAB_ENT_SIZE);
++
++ ivpu_mmu_cmdq_write_cfgi_all(vdev);
++ ivpu_mmu_cmdq_sync(vdev);
++
++ return 0;
++}
++
+ void ivpu_mmu_irq_evtq_handler(struct ivpu_device *vdev)
+ {
++ struct ivpu_file_priv *file_priv;
+ u32 *event;
+ u32 ssid;
+
+ ivpu_dbg(vdev, IRQ, "MMU event queue\n");
+
+- while ((event = ivpu_mmu_get_event(vdev)) != NULL) {
+- ivpu_mmu_dump_event(vdev, event);
+-
+- ssid = FIELD_GET(IVPU_MMU_EVT_SSID_MASK, event[0]);
+- if (ssid == IVPU_GLOBAL_CONTEXT_MMU_SSID) {
++ while ((event = ivpu_mmu_get_event(vdev))) {
++ ssid = FIELD_GET(IVPU_MMU_EVT_SSID_MASK, *event);
++ if (ssid == IVPU_GLOBAL_CONTEXT_MMU_SSID ||
++ ssid == IVPU_RESERVED_CONTEXT_MMU_SSID) {
++ ivpu_mmu_dump_event(vdev, event);
+ ivpu_pm_trigger_recovery(vdev, "MMU event");
+ return;
+ }
+
+- ivpu_mmu_user_context_mark_invalid(vdev, ssid);
+- REGV_WR32(IVPU_MMU_REG_EVTQ_CONS_SEC, vdev->mmu->evtq.cons);
++ file_priv = xa_load(&vdev->context_xa, ssid);
++ if (file_priv) {
++ if (!READ_ONCE(file_priv->has_mmu_faults)) {
++ ivpu_mmu_dump_event(vdev, event);
++ WRITE_ONCE(file_priv->has_mmu_faults, true);
++ }
++ }
+ }
+
+ queue_work(system_wq, &vdev->context_abort_work);
+diff --git a/drivers/accel/ivpu/ivpu_mmu.h b/drivers/accel/ivpu/ivpu_mmu.h
+index 7afea9cd8731d5..1ce7529746add0 100644
+--- a/drivers/accel/ivpu/ivpu_mmu.h
++++ b/drivers/accel/ivpu/ivpu_mmu.h
+@@ -47,5 +47,7 @@ int ivpu_mmu_invalidate_tlb(struct ivpu_device *vdev, u16 ssid);
+ void ivpu_mmu_irq_evtq_handler(struct ivpu_device *vdev);
+ void ivpu_mmu_irq_gerr_handler(struct ivpu_device *vdev);
+ void ivpu_mmu_evtq_dump(struct ivpu_device *vdev);
++void ivpu_mmu_discard_events(struct ivpu_device *vdev);
++int ivpu_mmu_disable_ssid_events(struct ivpu_device *vdev, u32 ssid);
+
+ #endif /* __IVPU_MMU_H__ */
+diff --git a/drivers/accel/ivpu/ivpu_mmu_context.c b/drivers/accel/ivpu/ivpu_mmu_context.c
+index 0af614dfb6f925..f0267efa55aa84 100644
+--- a/drivers/accel/ivpu/ivpu_mmu_context.c
++++ b/drivers/accel/ivpu/ivpu_mmu_context.c
+@@ -635,16 +635,3 @@ void ivpu_mmu_reserved_context_fini(struct ivpu_device *vdev)
+ ivpu_mmu_cd_clear(vdev, vdev->rctx.id);
+ ivpu_mmu_context_fini(vdev, &vdev->rctx);
+ }
+-
+-void ivpu_mmu_user_context_mark_invalid(struct ivpu_device *vdev, u32 ssid)
+-{
+- struct ivpu_file_priv *file_priv;
+-
+- xa_lock(&vdev->context_xa);
+-
+- file_priv = xa_load(&vdev->context_xa, ssid);
+- if (file_priv)
+- file_priv->has_mmu_faults = true;
+-
+- xa_unlock(&vdev->context_xa);
+-}
+diff --git a/drivers/accel/ivpu/ivpu_mmu_context.h b/drivers/accel/ivpu/ivpu_mmu_context.h
+index 8042fc0670622b..f255310968cfe1 100644
+--- a/drivers/accel/ivpu/ivpu_mmu_context.h
++++ b/drivers/accel/ivpu/ivpu_mmu_context.h
+@@ -37,8 +37,6 @@ void ivpu_mmu_global_context_fini(struct ivpu_device *vdev);
+ int ivpu_mmu_reserved_context_init(struct ivpu_device *vdev);
+ void ivpu_mmu_reserved_context_fini(struct ivpu_device *vdev);
+
+-void ivpu_mmu_user_context_mark_invalid(struct ivpu_device *vdev, u32 ssid);
+-
+ int ivpu_mmu_context_insert_node(struct ivpu_mmu_context *ctx, const struct ivpu_addr_range *range,
+ u64 size, struct drm_mm_node *node);
+ void ivpu_mmu_context_remove_node(struct ivpu_mmu_context *ctx, struct drm_mm_node *node);
+diff --git a/drivers/accel/ivpu/ivpu_pm.c b/drivers/accel/ivpu/ivpu_pm.c
+index 7acf78aeb38005..57228e190e7fc0 100644
+--- a/drivers/accel/ivpu/ivpu_pm.c
++++ b/drivers/accel/ivpu/ivpu_pm.c
+@@ -464,8 +464,9 @@ int ivpu_pm_dct_disable(struct ivpu_device *vdev)
+ return 0;
+ }
+
+-void ivpu_pm_dct_irq_thread_handler(struct ivpu_device *vdev)
++void ivpu_pm_irq_dct_work_fn(struct work_struct *work)
+ {
++ struct ivpu_device *vdev = container_of(work, struct ivpu_device, irq_dct_work);
+ bool enable;
+ int ret;
+
+diff --git a/drivers/accel/ivpu/ivpu_pm.h b/drivers/accel/ivpu/ivpu_pm.h
+index b70efe6c36e47f..89b264cc0e3e78 100644
+--- a/drivers/accel/ivpu/ivpu_pm.h
++++ b/drivers/accel/ivpu/ivpu_pm.h
+@@ -45,6 +45,6 @@ void ivpu_stop_job_timeout_detection(struct ivpu_device *vdev);
+ int ivpu_pm_dct_init(struct ivpu_device *vdev);
+ int ivpu_pm_dct_enable(struct ivpu_device *vdev, u8 active_percent);
+ int ivpu_pm_dct_disable(struct ivpu_device *vdev);
+-void ivpu_pm_dct_irq_thread_handler(struct ivpu_device *vdev);
++void ivpu_pm_irq_dct_work_fn(struct work_struct *work);
+
+ #endif /* __IVPU_PM_H__ */
+diff --git a/drivers/acpi/pptt.c b/drivers/acpi/pptt.c
+index f73ce6e13065dd..54676e3d82dd59 100644
+--- a/drivers/acpi/pptt.c
++++ b/drivers/acpi/pptt.c
+@@ -231,16 +231,18 @@ static int acpi_pptt_leaf_node(struct acpi_table_header *table_hdr,
+ sizeof(struct acpi_table_pptt));
+ proc_sz = sizeof(struct acpi_pptt_processor);
+
+- while ((unsigned long)entry + proc_sz < table_end) {
++ /* ignore subtable types that are smaller than a processor node */
++ while ((unsigned long)entry + proc_sz <= table_end) {
+ cpu_node = (struct acpi_pptt_processor *)entry;
++
+ if (entry->type == ACPI_PPTT_TYPE_PROCESSOR &&
+ cpu_node->parent == node_entry)
+ return 0;
+ if (entry->length == 0)
+ return 0;
++
+ entry = ACPI_ADD_PTR(struct acpi_subtable_header, entry,
+ entry->length);
+-
+ }
+ return 1;
+ }
+@@ -273,15 +275,18 @@ static struct acpi_pptt_processor *acpi_find_processor_node(struct acpi_table_he
+ proc_sz = sizeof(struct acpi_pptt_processor);
+
+ /* find the processor structure associated with this cpuid */
+- while ((unsigned long)entry + proc_sz < table_end) {
++ while ((unsigned long)entry + proc_sz <= table_end) {
+ cpu_node = (struct acpi_pptt_processor *)entry;
+
+ if (entry->length == 0) {
+ pr_warn("Invalid zero length subtable\n");
+ break;
+ }
++ /* entry->length may not equal proc_sz, revalidate the processor structure length */
+ if (entry->type == ACPI_PPTT_TYPE_PROCESSOR &&
+ acpi_cpu_id == cpu_node->acpi_processor_id &&
++ (unsigned long)entry + entry->length <= table_end &&
++ entry->length == proc_sz + cpu_node->number_of_priv_resources * sizeof(u32) &&
+ acpi_pptt_leaf_node(table_hdr, cpu_node)) {
+ return (struct acpi_pptt_processor *)entry;
+ }
+diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
+index 348c4feb7a2df3..7bbfc20f116a44 100644
+--- a/drivers/block/ublk_drv.c
++++ b/drivers/block/ublk_drv.c
+@@ -1677,7 +1677,7 @@ static void ublk_cancel_cmd(struct ublk_queue *ubq, unsigned tag,
+ * that ublk_dispatch_req() is always called
+ */
+ req = blk_mq_tag_to_rq(ub->tag_set.tags[ubq->q_id], tag);
+- if (req && blk_mq_request_started(req))
++ if (req && blk_mq_request_started(req) && req->tag == tag)
+ return;
+
+ spin_lock(&ubq->cancel_lock);
+diff --git a/drivers/char/tpm/tpm2-sessions.c b/drivers/char/tpm/tpm2-sessions.c
+index b70165b588eccd..a894dbc40e43b3 100644
+--- a/drivers/char/tpm/tpm2-sessions.c
++++ b/drivers/char/tpm/tpm2-sessions.c
+@@ -40,11 +40,6 @@
+ *
+ * These are the usage functions:
+ *
+- * tpm2_start_auth_session() which allocates the opaque auth structure
+- * and gets a session from the TPM. This must be called before
+- * any of the following functions. The session is protected by a
+- * session_key which is derived from a random salt value
+- * encrypted to the NULL seed.
+ * tpm2_end_auth_session() kills the session and frees the resources.
+ * Under normal operation this function is done by
+ * tpm_buf_check_hmac_response(), so this is only to be used on
+@@ -963,16 +958,13 @@ static int tpm2_load_null(struct tpm_chip *chip, u32 *null_key)
+ }
+
+ /**
+- * tpm2_start_auth_session() - create a HMAC authentication session with the TPM
+- * @chip: the TPM chip structure to create the session with
++ * tpm2_start_auth_session() - Create an HMAC authentication session
++ * @chip: A TPM chip
+ *
+- * This function loads the NULL seed from its saved context and starts
+- * an authentication session on the null seed, fills in the
+- * @chip->auth structure to contain all the session details necessary
+- * for performing the HMAC, encrypt and decrypt operations and
+- * returns. The NULL seed is flushed before this function returns.
++ * Loads the ephemeral key (null seed), and starts an HMAC authenticated
++ * session. The null seed is flushed before the return.
+ *
+- * Return: zero on success or actual error encountered.
++ * Returns zero on success, or a POSIX error code.
+ */
+ int tpm2_start_auth_session(struct tpm_chip *chip)
+ {
+@@ -1024,7 +1016,7 @@ int tpm2_start_auth_session(struct tpm_chip *chip)
+ /* hash algorithm for session */
+ tpm_buf_append_u16(&buf, TPM_ALG_SHA256);
+
+- rc = tpm_transmit_cmd(chip, &buf, 0, "start auth session");
++ rc = tpm_ret_to_err(tpm_transmit_cmd(chip, &buf, 0, "StartAuthSession"));
+ tpm2_flush_context(chip, null_key);
+
+ if (rc == TPM2_RC_SUCCESS)
+diff --git a/drivers/char/tpm/tpm_tis_core.h b/drivers/char/tpm/tpm_tis_core.h
+index 970d02c337c7f1..6c3aa480396b64 100644
+--- a/drivers/char/tpm/tpm_tis_core.h
++++ b/drivers/char/tpm/tpm_tis_core.h
+@@ -54,7 +54,7 @@ enum tis_int_flags {
+ enum tis_defaults {
+ TIS_MEM_LEN = 0x5000,
+ TIS_SHORT_TIMEOUT = 750, /* ms */
+- TIS_LONG_TIMEOUT = 2000, /* 2 sec */
++ TIS_LONG_TIMEOUT = 4000, /* 4 secs */
+ TIS_TIMEOUT_MIN_ATML = 14700, /* usecs */
+ TIS_TIMEOUT_MAX_ATML = 15000, /* usecs */
+ };
+diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
+index 5f8d010516f07f..b1ef4546346d44 100644
+--- a/drivers/dma-buf/dma-resv.c
++++ b/drivers/dma-buf/dma-resv.c
+@@ -320,8 +320,9 @@ void dma_resv_add_fence(struct dma_resv *obj, struct dma_fence *fence,
+ count++;
+
+ dma_resv_list_set(fobj, i, fence, usage);
+- /* pointer update must be visible before we extend the num_fences */
+- smp_store_mb(fobj->num_fences, count);
++ /* fence update must be visible before we extend the num_fences */
++ smp_wmb();
++ fobj->num_fences = count;
+ }
+ EXPORT_SYMBOL(dma_resv_add_fence);
+
+diff --git a/drivers/dma/dmatest.c b/drivers/dma/dmatest.c
+index d891dfca358e20..91b2fbc0b86471 100644
+--- a/drivers/dma/dmatest.c
++++ b/drivers/dma/dmatest.c
+@@ -841,9 +841,9 @@ static int dmatest_func(void *data)
+ } else {
+ dma_async_issue_pending(chan);
+
+- wait_event_timeout(thread->done_wait,
+- done->done,
+- msecs_to_jiffies(params->timeout));
++ wait_event_freezable_timeout(thread->done_wait,
++ done->done,
++ msecs_to_jiffies(params->timeout));
+
+ status = dma_async_is_tx_complete(chan, cookie, NULL,
+ NULL);
+diff --git a/drivers/dma/idxd/init.c b/drivers/dma/idxd/init.c
+index b946f78f85e17f..b908942fac7345 100644
+--- a/drivers/dma/idxd/init.c
++++ b/drivers/dma/idxd/init.c
+@@ -155,6 +155,25 @@ static void idxd_cleanup_interrupts(struct idxd_device *idxd)
+ pci_free_irq_vectors(pdev);
+ }
+
++static void idxd_clean_wqs(struct idxd_device *idxd)
++{
++ struct idxd_wq *wq;
++ struct device *conf_dev;
++ int i;
++
++ for (i = 0; i < idxd->max_wqs; i++) {
++ wq = idxd->wqs[i];
++ if (idxd->hw.wq_cap.op_config)
++ bitmap_free(wq->opcap_bmap);
++ kfree(wq->wqcfg);
++ conf_dev = wq_confdev(wq);
++ put_device(conf_dev);
++ kfree(wq);
++ }
++ bitmap_free(idxd->wq_enable_map);
++ kfree(idxd->wqs);
++}
++
+ static int idxd_setup_wqs(struct idxd_device *idxd)
+ {
+ struct device *dev = &idxd->pdev->dev;
+@@ -169,8 +188,8 @@ static int idxd_setup_wqs(struct idxd_device *idxd)
+
+ idxd->wq_enable_map = bitmap_zalloc_node(idxd->max_wqs, GFP_KERNEL, dev_to_node(dev));
+ if (!idxd->wq_enable_map) {
+- kfree(idxd->wqs);
+- return -ENOMEM;
++ rc = -ENOMEM;
++ goto err_bitmap;
+ }
+
+ for (i = 0; i < idxd->max_wqs; i++) {
+@@ -189,10 +208,8 @@ static int idxd_setup_wqs(struct idxd_device *idxd)
+ conf_dev->bus = &dsa_bus_type;
+ conf_dev->type = &idxd_wq_device_type;
+ rc = dev_set_name(conf_dev, "wq%d.%d", idxd->id, wq->id);
+- if (rc < 0) {
+- put_device(conf_dev);
++ if (rc < 0)
+ goto err;
+- }
+
+ mutex_init(&wq->wq_lock);
+ init_waitqueue_head(&wq->err_queue);
+@@ -203,7 +220,6 @@ static int idxd_setup_wqs(struct idxd_device *idxd)
+ wq->enqcmds_retries = IDXD_ENQCMDS_RETRIES;
+ wq->wqcfg = kzalloc_node(idxd->wqcfg_size, GFP_KERNEL, dev_to_node(dev));
+ if (!wq->wqcfg) {
+- put_device(conf_dev);
+ rc = -ENOMEM;
+ goto err;
+ }
+@@ -211,9 +227,8 @@ static int idxd_setup_wqs(struct idxd_device *idxd)
+ if (idxd->hw.wq_cap.op_config) {
+ wq->opcap_bmap = bitmap_zalloc(IDXD_MAX_OPCAP_BITS, GFP_KERNEL);
+ if (!wq->opcap_bmap) {
+- put_device(conf_dev);
+ rc = -ENOMEM;
+- goto err;
++ goto err_opcap_bmap;
+ }
+ bitmap_copy(wq->opcap_bmap, idxd->opcap_bmap, IDXD_MAX_OPCAP_BITS);
+ }
+@@ -224,15 +239,46 @@ static int idxd_setup_wqs(struct idxd_device *idxd)
+
+ return 0;
+
+- err:
++err_opcap_bmap:
++ kfree(wq->wqcfg);
++
++err:
++ put_device(conf_dev);
++ kfree(wq);
++
+ while (--i >= 0) {
+ wq = idxd->wqs[i];
++ if (idxd->hw.wq_cap.op_config)
++ bitmap_free(wq->opcap_bmap);
++ kfree(wq->wqcfg);
+ conf_dev = wq_confdev(wq);
+ put_device(conf_dev);
++ kfree(wq);
++
+ }
++ bitmap_free(idxd->wq_enable_map);
++
++err_bitmap:
++ kfree(idxd->wqs);
++
+ return rc;
+ }
+
++static void idxd_clean_engines(struct idxd_device *idxd)
++{
++ struct idxd_engine *engine;
++ struct device *conf_dev;
++ int i;
++
++ for (i = 0; i < idxd->max_engines; i++) {
++ engine = idxd->engines[i];
++ conf_dev = engine_confdev(engine);
++ put_device(conf_dev);
++ kfree(engine);
++ }
++ kfree(idxd->engines);
++}
++
+ static int idxd_setup_engines(struct idxd_device *idxd)
+ {
+ struct idxd_engine *engine;
+@@ -263,6 +309,7 @@ static int idxd_setup_engines(struct idxd_device *idxd)
+ rc = dev_set_name(conf_dev, "engine%d.%d", idxd->id, engine->id);
+ if (rc < 0) {
+ put_device(conf_dev);
++ kfree(engine);
+ goto err;
+ }
+
+@@ -276,10 +323,26 @@ static int idxd_setup_engines(struct idxd_device *idxd)
+ engine = idxd->engines[i];
+ conf_dev = engine_confdev(engine);
+ put_device(conf_dev);
++ kfree(engine);
+ }
++ kfree(idxd->engines);
++
+ return rc;
+ }
+
++static void idxd_clean_groups(struct idxd_device *idxd)
++{
++ struct idxd_group *group;
++ int i;
++
++ for (i = 0; i < idxd->max_groups; i++) {
++ group = idxd->groups[i];
++ put_device(group_confdev(group));
++ kfree(group);
++ }
++ kfree(idxd->groups);
++}
++
+ static int idxd_setup_groups(struct idxd_device *idxd)
+ {
+ struct device *dev = &idxd->pdev->dev;
+@@ -310,6 +373,7 @@ static int idxd_setup_groups(struct idxd_device *idxd)
+ rc = dev_set_name(conf_dev, "group%d.%d", idxd->id, group->id);
+ if (rc < 0) {
+ put_device(conf_dev);
++ kfree(group);
+ goto err;
+ }
+
+@@ -334,20 +398,18 @@ static int idxd_setup_groups(struct idxd_device *idxd)
+ while (--i >= 0) {
+ group = idxd->groups[i];
+ put_device(group_confdev(group));
++ kfree(group);
+ }
++ kfree(idxd->groups);
++
+ return rc;
+ }
+
+ static void idxd_cleanup_internals(struct idxd_device *idxd)
+ {
+- int i;
+-
+- for (i = 0; i < idxd->max_groups; i++)
+- put_device(group_confdev(idxd->groups[i]));
+- for (i = 0; i < idxd->max_engines; i++)
+- put_device(engine_confdev(idxd->engines[i]));
+- for (i = 0; i < idxd->max_wqs; i++)
+- put_device(wq_confdev(idxd->wqs[i]));
++ idxd_clean_groups(idxd);
++ idxd_clean_engines(idxd);
++ idxd_clean_wqs(idxd);
+ destroy_workqueue(idxd->wq);
+ }
+
+@@ -390,7 +452,7 @@ static int idxd_init_evl(struct idxd_device *idxd)
+ static int idxd_setup_internals(struct idxd_device *idxd)
+ {
+ struct device *dev = &idxd->pdev->dev;
+- int rc, i;
++ int rc;
+
+ init_waitqueue_head(&idxd->cmd_waitq);
+
+@@ -421,14 +483,11 @@ static int idxd_setup_internals(struct idxd_device *idxd)
+ err_evl:
+ destroy_workqueue(idxd->wq);
+ err_wkq_create:
+- for (i = 0; i < idxd->max_groups; i++)
+- put_device(group_confdev(idxd->groups[i]));
++ idxd_clean_groups(idxd);
+ err_group:
+- for (i = 0; i < idxd->max_engines; i++)
+- put_device(engine_confdev(idxd->engines[i]));
++ idxd_clean_engines(idxd);
+ err_engine:
+- for (i = 0; i < idxd->max_wqs; i++)
+- put_device(wq_confdev(idxd->wqs[i]));
++ idxd_clean_wqs(idxd);
+ err_wqs:
+ return rc;
+ }
+@@ -528,6 +587,17 @@ static void idxd_read_caps(struct idxd_device *idxd)
+ idxd->hw.iaa_cap.bits = ioread64(idxd->reg_base + IDXD_IAACAP_OFFSET);
+ }
+
++static void idxd_free(struct idxd_device *idxd)
++{
++ if (!idxd)
++ return;
++
++ put_device(idxd_confdev(idxd));
++ bitmap_free(idxd->opcap_bmap);
++ ida_free(&idxd_ida, idxd->id);
++ kfree(idxd);
++}
++
+ static struct idxd_device *idxd_alloc(struct pci_dev *pdev, struct idxd_driver_data *data)
+ {
+ struct device *dev = &pdev->dev;
+@@ -545,28 +615,34 @@ static struct idxd_device *idxd_alloc(struct pci_dev *pdev, struct idxd_driver_d
+ idxd_dev_set_type(&idxd->idxd_dev, idxd->data->type);
+ idxd->id = ida_alloc(&idxd_ida, GFP_KERNEL);
+ if (idxd->id < 0)
+- return NULL;
++ goto err_ida;
+
+ idxd->opcap_bmap = bitmap_zalloc_node(IDXD_MAX_OPCAP_BITS, GFP_KERNEL, dev_to_node(dev));
+- if (!idxd->opcap_bmap) {
+- ida_free(&idxd_ida, idxd->id);
+- return NULL;
+- }
++ if (!idxd->opcap_bmap)
++ goto err_opcap;
+
+ device_initialize(conf_dev);
+ conf_dev->parent = dev;
+ conf_dev->bus = &dsa_bus_type;
+ conf_dev->type = idxd->data->dev_type;
+ rc = dev_set_name(conf_dev, "%s%d", idxd->data->name_prefix, idxd->id);
+- if (rc < 0) {
+- put_device(conf_dev);
+- return NULL;
+- }
++ if (rc < 0)
++ goto err_name;
+
+ spin_lock_init(&idxd->dev_lock);
+ spin_lock_init(&idxd->cmd_lock);
+
+ return idxd;
++
++err_name:
++ put_device(conf_dev);
++ bitmap_free(idxd->opcap_bmap);
++err_opcap:
++ ida_free(&idxd_ida, idxd->id);
++err_ida:
++ kfree(idxd);
++
++ return NULL;
+ }
+
+ static int idxd_enable_system_pasid(struct idxd_device *idxd)
+@@ -1191,7 +1267,7 @@ int idxd_pci_probe_alloc(struct idxd_device *idxd, struct pci_dev *pdev,
+ err:
+ pci_iounmap(pdev, idxd->reg_base);
+ err_iomap:
+- put_device(idxd_confdev(idxd));
++ idxd_free(idxd);
+ err_idxd_alloc:
+ pci_disable_device(pdev);
+ return rc;
+@@ -1233,7 +1309,6 @@ static void idxd_shutdown(struct pci_dev *pdev)
+ static void idxd_remove(struct pci_dev *pdev)
+ {
+ struct idxd_device *idxd = pci_get_drvdata(pdev);
+- struct idxd_irq_entry *irq_entry;
+
+ idxd_unregister_devices(idxd);
+ /*
+@@ -1246,20 +1321,12 @@ static void idxd_remove(struct pci_dev *pdev)
+ get_device(idxd_confdev(idxd));
+ device_unregister(idxd_confdev(idxd));
+ idxd_shutdown(pdev);
+- if (device_pasid_enabled(idxd))
+- idxd_disable_system_pasid(idxd);
+ idxd_device_remove_debugfs(idxd);
+-
+- irq_entry = idxd_get_ie(idxd, 0);
+- free_irq(irq_entry->vector, irq_entry);
+- pci_free_irq_vectors(pdev);
++ idxd_cleanup(idxd);
+ pci_iounmap(pdev, idxd->reg_base);
+- if (device_user_pasid_enabled(idxd))
+- idxd_disable_sva(pdev);
+- pci_disable_device(pdev);
+- destroy_workqueue(idxd->wq);
+- perfmon_pmu_remove(idxd);
+ put_device(idxd_confdev(idxd));
++ idxd_free(idxd);
++ pci_disable_device(pdev);
+ }
+
+ static struct pci_driver idxd_pci_driver = {
+diff --git a/drivers/dma/ti/k3-udma.c b/drivers/dma/ti/k3-udma.c
+index 7ed1956b464290..d1b96f3d908f42 100644
+--- a/drivers/dma/ti/k3-udma.c
++++ b/drivers/dma/ti/k3-udma.c
+@@ -1091,8 +1091,11 @@ static void udma_check_tx_completion(struct work_struct *work)
+ u32 residue_diff;
+ ktime_t time_diff;
+ unsigned long delay;
++ unsigned long flags;
+
+ while (1) {
++ spin_lock_irqsave(&uc->vc.lock, flags);
++
+ if (uc->desc) {
+ /* Get previous residue and time stamp */
+ residue_diff = uc->tx_drain.residue;
+@@ -1127,6 +1130,8 @@ static void udma_check_tx_completion(struct work_struct *work)
+ break;
+ }
+
++ spin_unlock_irqrestore(&uc->vc.lock, flags);
++
+ usleep_range(ktime_to_us(delay),
+ ktime_to_us(delay) + 10);
+ continue;
+@@ -1143,6 +1148,8 @@ static void udma_check_tx_completion(struct work_struct *work)
+
+ break;
+ }
++
++ spin_unlock_irqrestore(&uc->vc.lock, flags);
+ }
+
+ static irqreturn_t udma_ring_irq_handler(int irq, void *data)
+@@ -4246,7 +4253,6 @@ static struct dma_chan *udma_of_xlate(struct of_phandle_args *dma_spec,
+ struct of_dma *ofdma)
+ {
+ struct udma_dev *ud = ofdma->of_dma_data;
+- dma_cap_mask_t mask = ud->ddev.cap_mask;
+ struct udma_filter_param filter_param;
+ struct dma_chan *chan;
+
+@@ -4278,7 +4284,7 @@ static struct dma_chan *udma_of_xlate(struct of_phandle_args *dma_spec,
+ }
+ }
+
+- chan = __dma_request_channel(&mask, udma_dma_filter_fn, &filter_param,
++ chan = __dma_request_channel(&ud->ddev.cap_mask, udma_dma_filter_fn, &filter_param,
+ ofdma->of_node);
+ if (!chan) {
+ dev_err(ud->dev, "get channel fail in %s.\n", __func__);
+diff --git a/drivers/gpio/gpio-pca953x.c b/drivers/gpio/gpio-pca953x.c
+index d63c1030e6ac0e..6121ee6beaa392 100644
+--- a/drivers/gpio/gpio-pca953x.c
++++ b/drivers/gpio/gpio-pca953x.c
+@@ -1203,6 +1203,8 @@ static int pca953x_restore_context(struct pca953x_chip *chip)
+
+ guard(mutex)(&chip->i2c_lock);
+
++ if (chip->client->irq > 0)
++ enable_irq(chip->client->irq);
+ regcache_cache_only(chip->regmap, false);
+ regcache_mark_dirty(chip->regmap);
+ ret = pca953x_regcache_sync(chip);
+@@ -1215,6 +1217,10 @@ static int pca953x_restore_context(struct pca953x_chip *chip)
+ static void pca953x_save_context(struct pca953x_chip *chip)
+ {
+ guard(mutex)(&chip->i2c_lock);
++
++ /* Disable IRQ to prevent early triggering while regmap "cache only" is on */
++ if (chip->client->irq > 0)
++ disable_irq(chip->client->irq);
+ regcache_cache_only(chip->regmap, true);
+ }
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_csa.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_csa.c
+index cfdf558b48b648..02138aa557935e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_csa.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_csa.c
+@@ -109,7 +109,7 @@ int amdgpu_unmap_static_csa(struct amdgpu_device *adev, struct amdgpu_vm *vm,
+ struct drm_exec exec;
+ int r;
+
+- drm_exec_init(&exec, DRM_EXEC_INTERRUPTIBLE_WAIT, 0);
++ drm_exec_init(&exec, 0, 0);
+ drm_exec_until_all_locked(&exec) {
+ r = amdgpu_vm_lock_pd(vm, &exec, 0);
+ if (likely(!r))
+diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v11_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v11_0.c
+index 1eb97117fe7ae4..0ba82eabca02c0 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gmc_v11_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gmc_v11_0.c
+@@ -750,6 +750,18 @@ static int gmc_v11_0_sw_init(struct amdgpu_ip_block *ip_block)
+ adev->gmc.vram_type = vram_type;
+ adev->gmc.vram_vendor = vram_vendor;
+
++ /* The mall_size is already calculated as mall_size_per_umc * num_umc.
++ * However, for gfx1151, which features a 2-to-1 UMC mapping,
++ * the result must be multiplied by 2 to determine the actual mall size.
++ */
++ switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
++ case IP_VERSION(11, 5, 1):
++ adev->gmc.mall_size *= 2;
++ break;
++ default:
++ break;
++ }
++
+ switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
+ case IP_VERSION(11, 0, 0):
+ case IP_VERSION(11, 0, 1):
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 2dbd71fbae28a5..4a8d76a4f3ce6e 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -12621,7 +12621,8 @@ int amdgpu_dm_process_dmub_aux_transfer_sync(
+ /* The reply is stored in the top nibble of the command. */
+ payload->reply[0] = (adev->dm.dmub_notify->aux_reply.command >> 4) & 0xF;
+
+- if (!payload->write && p_notify->aux_reply.length)
++ /* a write req may receive a byte indicating the number of partially written bytes as well */
++ if (p_notify->aux_reply.length)
+ memcpy(payload->data, p_notify->aux_reply.data,
+ p_notify->aux_reply.length);
+
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+index 66df18b1d0af9f..8497de360640a3 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+@@ -62,6 +62,7 @@ static ssize_t dm_dp_aux_transfer(struct drm_dp_aux *aux,
+ enum aux_return_code_type operation_result;
+ struct amdgpu_device *adev;
+ struct ddc_service *ddc;
++ uint8_t copy[16];
+
+ if (WARN_ON(msg->size > 16))
+ return -E2BIG;
+@@ -77,6 +78,11 @@ static ssize_t dm_dp_aux_transfer(struct drm_dp_aux *aux,
+ (msg->request & DP_AUX_I2C_WRITE_STATUS_UPDATE) != 0;
+ payload.defer_delay = 0;
+
++ if (payload.write) {
++ memcpy(copy, msg->buffer, msg->size);
++ payload.data = copy;
++ }
++
+ result = dc_link_aux_transfer_raw(TO_DM_AUX(aux)->ddc_service, &payload,
+ &operation_result);
+
+@@ -100,9 +106,9 @@ static ssize_t dm_dp_aux_transfer(struct drm_dp_aux *aux,
+ */
+ if (payload.write && result >= 0) {
+ if (result) {
+- /*one byte indicating partially written bytes. Force 0 to retry*/
+- drm_info(adev_to_drm(adev), "amdgpu: AUX partially written\n");
+- result = 0;
++ /*one byte indicating partially written bytes*/
++ drm_dbg_dp(adev_to_drm(adev), "amdgpu: AUX partially written\n");
++ result = payload.data[0];
+ } else if (!payload.reply[0])
+ /*I2C_ACK|AUX_ACK*/
+ result = msg->size;
+@@ -127,11 +133,11 @@ static ssize_t dm_dp_aux_transfer(struct drm_dp_aux *aux,
+ break;
+ }
+
+- drm_info(adev_to_drm(adev), "amdgpu: DP AUX transfer fail:%d\n", operation_result);
++ drm_dbg_dp(adev_to_drm(adev), "amdgpu: DP AUX transfer fail:%d\n", operation_result);
+ }
+
+ if (payload.reply[0])
+- drm_info(adev_to_drm(adev), "amdgpu: AUX reply command not ACK: 0x%02x.",
++ drm_dbg_dp(adev_to_drm(adev), "amdgpu: AUX reply command not ACK: 0x%02x.",
+ payload.reply[0]);
+
+ return result;
+diff --git a/drivers/gpu/drm/amd/display/dc/dpp/dcn401/dcn401_dpp_cm.c b/drivers/gpu/drm/amd/display/dc/dpp/dcn401/dcn401_dpp_cm.c
+index 1236e0f9a2560c..712aff7e17f7a0 100644
+--- a/drivers/gpu/drm/amd/display/dc/dpp/dcn401/dcn401_dpp_cm.c
++++ b/drivers/gpu/drm/amd/display/dc/dpp/dcn401/dcn401_dpp_cm.c
+@@ -120,10 +120,11 @@ void dpp401_set_cursor_attributes(
+ enum dc_cursor_color_format color_format = cursor_attributes->color_format;
+ int cur_rom_en = 0;
+
+- // DCN4 should always do Cursor degamma for Cursor Color modes
+ if (color_format == CURSOR_MODE_COLOR_PRE_MULTIPLIED_ALPHA ||
+ color_format == CURSOR_MODE_COLOR_UN_PRE_MULTIPLIED_ALPHA) {
+- cur_rom_en = 1;
++ if (cursor_attributes->attribute_flags.bits.ENABLE_CURSOR_DEGAMMA) {
++ cur_rom_en = 1;
++ }
+ }
+
+ REG_UPDATE_3(CURSOR0_CONTROL,
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_hwseq.c
+index 89af3e4afbc251..0d39d193dacfa0 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_hwseq.c
+@@ -2027,9 +2027,9 @@ static void dcn401_program_pipe(
+ dc->res_pool->hubbub, pipe_ctx->plane_res.hubp->inst, pipe_ctx->hubp_regs.det_size);
+ }
+
+- if (pipe_ctx->update_flags.raw ||
+- (pipe_ctx->plane_state && pipe_ctx->plane_state->update_flags.raw) ||
+- pipe_ctx->stream->update_flags.raw)
++ if (pipe_ctx->plane_state && (pipe_ctx->update_flags.raw ||
++ pipe_ctx->plane_state->update_flags.raw ||
++ pipe_ctx->stream->update_flags.raw))
+ dc->hwss.update_dchubp_dpp(dc, pipe_ctx, context);
+
+ if (pipe_ctx->plane_state && (pipe_ctx->update_flags.bits.enable ||
+diff --git a/drivers/gpu/drm/meson/meson_encoder_hdmi.c b/drivers/gpu/drm/meson/meson_encoder_hdmi.c
+index ce8cea5d3a56be..c61e6026adfae5 100644
+--- a/drivers/gpu/drm/meson/meson_encoder_hdmi.c
++++ b/drivers/gpu/drm/meson/meson_encoder_hdmi.c
+@@ -75,7 +75,7 @@ static void meson_encoder_hdmi_set_vclk(struct meson_encoder_hdmi *encoder_hdmi,
+ unsigned long long venc_freq;
+ unsigned long long hdmi_freq;
+
+- vclk_freq = mode->clock * 1000;
++ vclk_freq = mode->clock * 1000ULL;
+
+ /* For 420, pixel clock is half unlike venc clock */
+ if (encoder_hdmi->output_bus_fmt == MEDIA_BUS_FMT_UYYVYY8_0_5X24)
+@@ -123,7 +123,7 @@ static enum drm_mode_status meson_encoder_hdmi_mode_valid(struct drm_bridge *bri
+ struct meson_encoder_hdmi *encoder_hdmi = bridge_to_meson_encoder_hdmi(bridge);
+ struct meson_drm *priv = encoder_hdmi->priv;
+ bool is_hdmi2_sink = display_info->hdmi.scdc.supported;
+- unsigned long long clock = mode->clock * 1000;
++ unsigned long long clock = mode->clock * 1000ULL;
+ unsigned long long phy_freq;
+ unsigned long long vclk_freq;
+ unsigned long long venc_freq;
+diff --git a/drivers/gpu/drm/tiny/panel-mipi-dbi.c b/drivers/gpu/drm/tiny/panel-mipi-dbi.c
+index 0460ecaef4bd98..23914a9f7fd376 100644
+--- a/drivers/gpu/drm/tiny/panel-mipi-dbi.c
++++ b/drivers/gpu/drm/tiny/panel-mipi-dbi.c
+@@ -390,7 +390,10 @@ static int panel_mipi_dbi_spi_probe(struct spi_device *spi)
+
+ spi_set_drvdata(spi, drm);
+
+- drm_client_setup(drm, NULL);
++ if (bpp == 16)
++ drm_client_setup_with_fourcc(drm, DRM_FORMAT_RGB565);
++ else
++ drm_client_setup_with_fourcc(drm, DRM_FORMAT_RGB888);
+
+ return 0;
+ }
+diff --git a/drivers/gpu/drm/xe/instructions/xe_mi_commands.h b/drivers/gpu/drm/xe/instructions/xe_mi_commands.h
+index 10ec2920d31b34..d4033278be9fca 100644
+--- a/drivers/gpu/drm/xe/instructions/xe_mi_commands.h
++++ b/drivers/gpu/drm/xe/instructions/xe_mi_commands.h
+@@ -47,6 +47,10 @@
+ #define MI_LRI_FORCE_POSTED REG_BIT(12)
+ #define MI_LRI_LEN(x) (((x) & 0xff) + 1)
+
++#define MI_STORE_REGISTER_MEM (__MI_INSTR(0x24) | XE_INSTR_NUM_DW(4))
++#define MI_SRM_USE_GGTT REG_BIT(22)
++#define MI_SRM_ADD_CS_OFFSET REG_BIT(19)
++
+ #define MI_FLUSH_DW __MI_INSTR(0x26)
+ #define MI_FLUSH_DW_STORE_INDEX REG_BIT(21)
+ #define MI_INVALIDATE_TLB REG_BIT(18)
+diff --git a/drivers/gpu/drm/xe/xe_gsc.c b/drivers/gpu/drm/xe/xe_gsc.c
+index 1eb791ddc375c7..0f11b1c00960d5 100644
+--- a/drivers/gpu/drm/xe/xe_gsc.c
++++ b/drivers/gpu/drm/xe/xe_gsc.c
+@@ -564,6 +564,28 @@ void xe_gsc_remove(struct xe_gsc *gsc)
+ xe_gsc_proxy_remove(gsc);
+ }
+
++void xe_gsc_stop_prepare(struct xe_gsc *gsc)
++{
++ struct xe_gt *gt = gsc_to_gt(gsc);
++ int ret;
++
++ if (!xe_uc_fw_is_loadable(&gsc->fw) || xe_uc_fw_is_in_error_state(&gsc->fw))
++ return;
++
++ xe_force_wake_assert_held(gt_to_fw(gt), XE_FW_GSC);
++
++ /*
++ * If the GSC FW load or the proxy init is interrupted, the only way
++ * to recover it is to do an FLR and reload the GSC from scratch.
++ * Therefore, let's wait for the init to complete before stopping
++ * operations. The proxy init is the last step, so we can just wait on
++ * that.
++ */
++ ret = xe_gsc_wait_for_proxy_init_done(gsc);
++ if (ret)
++ xe_gt_err(gt, "failed to wait for GSC init completion before uc stop\n");
++}
++
+ /*
+ * wa_14015076503: if the GSC FW is loaded, we need to alert it before doing a
+ * GSC engine reset by writing a notification bit in the GS1 register and then
+diff --git a/drivers/gpu/drm/xe/xe_gsc.h b/drivers/gpu/drm/xe/xe_gsc.h
+index e282b9ef6ec4d5..c31fe24c4b663c 100644
+--- a/drivers/gpu/drm/xe/xe_gsc.h
++++ b/drivers/gpu/drm/xe/xe_gsc.h
+@@ -16,6 +16,7 @@ struct xe_hw_engine;
+ int xe_gsc_init(struct xe_gsc *gsc);
+ int xe_gsc_init_post_hwconfig(struct xe_gsc *gsc);
+ void xe_gsc_wait_for_worker_completion(struct xe_gsc *gsc);
++void xe_gsc_stop_prepare(struct xe_gsc *gsc);
+ void xe_gsc_load_start(struct xe_gsc *gsc);
+ void xe_gsc_remove(struct xe_gsc *gsc);
+ void xe_gsc_hwe_irq_handler(struct xe_hw_engine *hwe, u16 intr_vec);
+diff --git a/drivers/gpu/drm/xe/xe_gsc_proxy.c b/drivers/gpu/drm/xe/xe_gsc_proxy.c
+index 24cc6a4f9a96a2..76636da3d06cb4 100644
+--- a/drivers/gpu/drm/xe/xe_gsc_proxy.c
++++ b/drivers/gpu/drm/xe/xe_gsc_proxy.c
+@@ -71,6 +71,17 @@ bool xe_gsc_proxy_init_done(struct xe_gsc *gsc)
+ HECI1_FWSTS1_PROXY_STATE_NORMAL;
+ }
+
++int xe_gsc_wait_for_proxy_init_done(struct xe_gsc *gsc)
++{
++ struct xe_gt *gt = gsc_to_gt(gsc);
++
++ /* Proxy init can take up to 500ms, so wait double that for safety */
++	return xe_mmio_wait32(&gt->mmio, HECI_FWSTS1(MTL_GSC_HECI1_BASE),
++ HECI1_FWSTS1_CURRENT_STATE,
++ HECI1_FWSTS1_PROXY_STATE_NORMAL,
++ USEC_PER_SEC, NULL, false);
++}
++
+ static void __gsc_proxy_irq_rmw(struct xe_gsc *gsc, u32 clr, u32 set)
+ {
+ struct xe_gt *gt = gsc_to_gt(gsc);
+diff --git a/drivers/gpu/drm/xe/xe_gsc_proxy.h b/drivers/gpu/drm/xe/xe_gsc_proxy.h
+index c511ade6b86378..e2498aa6de1881 100644
+--- a/drivers/gpu/drm/xe/xe_gsc_proxy.h
++++ b/drivers/gpu/drm/xe/xe_gsc_proxy.h
+@@ -13,6 +13,7 @@ struct xe_gsc;
+ int xe_gsc_proxy_init(struct xe_gsc *gsc);
+ bool xe_gsc_proxy_init_done(struct xe_gsc *gsc);
+ void xe_gsc_proxy_remove(struct xe_gsc *gsc);
++int xe_gsc_wait_for_proxy_init_done(struct xe_gsc *gsc);
+ int xe_gsc_proxy_start(struct xe_gsc *gsc);
+
+ int xe_gsc_proxy_request_handler(struct xe_gsc *gsc);
+diff --git a/drivers/gpu/drm/xe/xe_gt.c b/drivers/gpu/drm/xe/xe_gt.c
+index 94eed1315b0f1b..150dca2f910335 100644
+--- a/drivers/gpu/drm/xe/xe_gt.c
++++ b/drivers/gpu/drm/xe/xe_gt.c
+@@ -862,7 +862,7 @@ void xe_gt_suspend_prepare(struct xe_gt *gt)
+
+ fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
+
+-	xe_uc_stop_prepare(&gt->uc);
++	xe_uc_suspend_prepare(&gt->uc);
+
+ xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ }
+diff --git a/drivers/gpu/drm/xe/xe_lrc.c b/drivers/gpu/drm/xe/xe_lrc.c
+index bbb9ffbf636726..2a953c4f7d5ddf 100644
+--- a/drivers/gpu/drm/xe/xe_lrc.c
++++ b/drivers/gpu/drm/xe/xe_lrc.c
+@@ -684,7 +684,7 @@ static inline u32 __xe_lrc_start_seqno_offset(struct xe_lrc *lrc)
+
+ static u32 __xe_lrc_ctx_job_timestamp_offset(struct xe_lrc *lrc)
+ {
+- /* The start seqno is stored in the driver-defined portion of PPHWSP */
++ /* This is stored in the driver-defined portion of PPHWSP */
+ return xe_lrc_pphwsp_offset(lrc) + LRC_CTX_JOB_TIMESTAMP_OFFSET;
+ }
+
+diff --git a/drivers/gpu/drm/xe/xe_ring_ops.c b/drivers/gpu/drm/xe/xe_ring_ops.c
+index 8d1fb33d923f4d..3493177947680c 100644
+--- a/drivers/gpu/drm/xe/xe_ring_ops.c
++++ b/drivers/gpu/drm/xe/xe_ring_ops.c
+@@ -234,13 +234,10 @@ static u32 get_ppgtt_flag(struct xe_sched_job *job)
+
+ static int emit_copy_timestamp(struct xe_lrc *lrc, u32 *dw, int i)
+ {
+- dw[i++] = MI_COPY_MEM_MEM | MI_COPY_MEM_MEM_SRC_GGTT |
+- MI_COPY_MEM_MEM_DST_GGTT;
++ dw[i++] = MI_STORE_REGISTER_MEM | MI_SRM_USE_GGTT | MI_SRM_ADD_CS_OFFSET;
++ dw[i++] = RING_CTX_TIMESTAMP(0).addr;
+ dw[i++] = xe_lrc_ctx_job_timestamp_ggtt_addr(lrc);
+ dw[i++] = 0;
+- dw[i++] = xe_lrc_ctx_timestamp_ggtt_addr(lrc);
+- dw[i++] = 0;
+- dw[i++] = MI_NOOP;
+
+ return i;
+ }
+diff --git a/drivers/gpu/drm/xe/xe_uc.c b/drivers/gpu/drm/xe/xe_uc.c
+index 0d073a9987c2e8..bb03c524613f2c 100644
+--- a/drivers/gpu/drm/xe/xe_uc.c
++++ b/drivers/gpu/drm/xe/xe_uc.c
+@@ -241,7 +241,7 @@ void xe_uc_gucrc_disable(struct xe_uc *uc)
+
+ void xe_uc_stop_prepare(struct xe_uc *uc)
+ {
+- xe_gsc_wait_for_worker_completion(&uc->gsc);
++ xe_gsc_stop_prepare(&uc->gsc);
+ xe_guc_stop_prepare(&uc->guc);
+ }
+
+@@ -275,6 +275,12 @@ static void uc_reset_wait(struct xe_uc *uc)
+ goto again;
+ }
+
++void xe_uc_suspend_prepare(struct xe_uc *uc)
++{
++ xe_gsc_wait_for_worker_completion(&uc->gsc);
++ xe_guc_stop_prepare(&uc->guc);
++}
++
+ int xe_uc_suspend(struct xe_uc *uc)
+ {
+ /* GuC submission not enabled, nothing to do */
+diff --git a/drivers/gpu/drm/xe/xe_uc.h b/drivers/gpu/drm/xe/xe_uc.h
+index 506517c1133397..ba2937ab94cf56 100644
+--- a/drivers/gpu/drm/xe/xe_uc.h
++++ b/drivers/gpu/drm/xe/xe_uc.h
+@@ -18,6 +18,7 @@ int xe_uc_reset_prepare(struct xe_uc *uc);
+ void xe_uc_stop_prepare(struct xe_uc *uc);
+ void xe_uc_stop(struct xe_uc *uc);
+ int xe_uc_start(struct xe_uc *uc);
++void xe_uc_suspend_prepare(struct xe_uc *uc);
+ int xe_uc_suspend(struct xe_uc *uc);
+ int xe_uc_sanitize_reset(struct xe_uc *uc);
+ void xe_uc_remove(struct xe_uc *uc);
+diff --git a/drivers/hid/amd-sfh-hid/sfh1_1/amd_sfh_init.c b/drivers/hid/amd-sfh-hid/sfh1_1/amd_sfh_init.c
+index e9929c4aa72eb4..a02969fd50686d 100644
+--- a/drivers/hid/amd-sfh-hid/sfh1_1/amd_sfh_init.c
++++ b/drivers/hid/amd-sfh-hid/sfh1_1/amd_sfh_init.c
+@@ -134,9 +134,6 @@ static int amd_sfh1_1_hid_client_init(struct amd_mp2_dev *privdata)
+ for (i = 0; i < cl_data->num_hid_devices; i++) {
+ cl_data->sensor_sts[i] = SENSOR_DISABLED;
+
+- if (cl_data->num_hid_devices == 1 && cl_data->sensor_idx[0] == SRA_IDX)
+- break;
+-
+ if (cl_data->sensor_idx[i] == SRA_IDX) {
+ info.sensor_idx = cl_data->sensor_idx[i];
+ writel(0, privdata->mmio + amd_get_p2c_val(privdata, 0));
+@@ -145,8 +142,10 @@ static int amd_sfh1_1_hid_client_init(struct amd_mp2_dev *privdata)
+ (privdata, cl_data->sensor_idx[i], ENABLE_SENSOR);
+
+ cl_data->sensor_sts[i] = (status == 0) ? SENSOR_ENABLED : SENSOR_DISABLED;
+- if (cl_data->sensor_sts[i] == SENSOR_ENABLED)
++ if (cl_data->sensor_sts[i] == SENSOR_ENABLED) {
++ cl_data->is_any_sensor_enabled = true;
+ privdata->dev_en.is_sra_present = true;
++ }
+ continue;
+ }
+
+diff --git a/drivers/hid/bpf/hid_bpf_dispatch.c b/drivers/hid/bpf/hid_bpf_dispatch.c
+index 2e96ec6a3073da..9a06f9b0e4ef33 100644
+--- a/drivers/hid/bpf/hid_bpf_dispatch.c
++++ b/drivers/hid/bpf/hid_bpf_dispatch.c
+@@ -38,6 +38,9 @@ dispatch_hid_bpf_device_event(struct hid_device *hdev, enum hid_report_type type
+ struct hid_bpf_ops *e;
+ int ret;
+
++ if (unlikely(hdev->bpf.destroyed))
++ return ERR_PTR(-ENODEV);
++
+ if (type >= HID_REPORT_TYPES)
+ return ERR_PTR(-EINVAL);
+
+@@ -93,6 +96,9 @@ int dispatch_hid_bpf_raw_requests(struct hid_device *hdev,
+ struct hid_bpf_ops *e;
+ int ret, idx;
+
++ if (unlikely(hdev->bpf.destroyed))
++ return -ENODEV;
++
+ if (rtype >= HID_REPORT_TYPES)
+ return -EINVAL;
+
+@@ -130,6 +136,9 @@ int dispatch_hid_bpf_output_report(struct hid_device *hdev,
+ struct hid_bpf_ops *e;
+ int ret, idx;
+
++ if (unlikely(hdev->bpf.destroyed))
++ return -ENODEV;
++
+ idx = srcu_read_lock(&hdev->bpf.srcu);
+ list_for_each_entry_srcu(e, &hdev->bpf.prog_list, list,
+ srcu_read_lock_held(&hdev->bpf.srcu)) {
+diff --git a/drivers/hid/hid-thrustmaster.c b/drivers/hid/hid-thrustmaster.c
+index 3b81468a1df297..0bf70664c35ee1 100644
+--- a/drivers/hid/hid-thrustmaster.c
++++ b/drivers/hid/hid-thrustmaster.c
+@@ -174,6 +174,7 @@ static void thrustmaster_interrupts(struct hid_device *hdev)
+ u8 ep_addr[2] = {b_ep, 0};
+
+ if (!usb_check_int_endpoints(usbif, ep_addr)) {
++ kfree(send_buf);
+ hid_err(hdev, "Unexpected non-int endpoint\n");
+ return;
+ }
+diff --git a/drivers/hid/hid-uclogic-core.c b/drivers/hid/hid-uclogic-core.c
+index d8008933c052f5..321c43fb06ae06 100644
+--- a/drivers/hid/hid-uclogic-core.c
++++ b/drivers/hid/hid-uclogic-core.c
+@@ -142,11 +142,12 @@ static int uclogic_input_configured(struct hid_device *hdev,
+ suffix = "System Control";
+ break;
+ }
+- }
+-
+- if (suffix)
++ } else {
+ hi->input->name = devm_kasprintf(&hdev->dev, GFP_KERNEL,
+ "%s %s", hdev->name, suffix);
++ if (!hi->input->name)
++ return -ENOMEM;
++ }
+
+ return 0;
+ }
+diff --git a/drivers/hv/channel.c b/drivers/hv/channel.c
+index fb8cd8469328ee..35f26fa1ffe76e 100644
+--- a/drivers/hv/channel.c
++++ b/drivers/hv/channel.c
+@@ -1077,68 +1077,10 @@ int vmbus_sendpacket(struct vmbus_channel *channel, void *buffer,
+ EXPORT_SYMBOL(vmbus_sendpacket);
+
+ /*
+- * vmbus_sendpacket_pagebuffer - Send a range of single-page buffer
+- * packets using a GPADL Direct packet type. This interface allows you
+- * to control notifying the host. This will be useful for sending
+- * batched data. Also the sender can control the send flags
+- * explicitly.
+- */
+-int vmbus_sendpacket_pagebuffer(struct vmbus_channel *channel,
+- struct hv_page_buffer pagebuffers[],
+- u32 pagecount, void *buffer, u32 bufferlen,
+- u64 requestid)
+-{
+- int i;
+- struct vmbus_channel_packet_page_buffer desc;
+- u32 descsize;
+- u32 packetlen;
+- u32 packetlen_aligned;
+- struct kvec bufferlist[3];
+- u64 aligned_data = 0;
+-
+- if (pagecount > MAX_PAGE_BUFFER_COUNT)
+- return -EINVAL;
+-
+- /*
+- * Adjust the size down since vmbus_channel_packet_page_buffer is the
+- * largest size we support
+- */
+- descsize = sizeof(struct vmbus_channel_packet_page_buffer) -
+- ((MAX_PAGE_BUFFER_COUNT - pagecount) *
+- sizeof(struct hv_page_buffer));
+- packetlen = descsize + bufferlen;
+- packetlen_aligned = ALIGN(packetlen, sizeof(u64));
+-
+- /* Setup the descriptor */
+- desc.type = VM_PKT_DATA_USING_GPA_DIRECT;
+- desc.flags = VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED;
+- desc.dataoffset8 = descsize >> 3; /* in 8-bytes granularity */
+- desc.length8 = (u16)(packetlen_aligned >> 3);
+- desc.transactionid = VMBUS_RQST_ERROR; /* will be updated in hv_ringbuffer_write() */
+- desc.reserved = 0;
+- desc.rangecount = pagecount;
+-
+- for (i = 0; i < pagecount; i++) {
+- desc.range[i].len = pagebuffers[i].len;
+- desc.range[i].offset = pagebuffers[i].offset;
+- desc.range[i].pfn = pagebuffers[i].pfn;
+- }
+-
+- bufferlist[0].iov_base = &desc;
+- bufferlist[0].iov_len = descsize;
+- bufferlist[1].iov_base = buffer;
+- bufferlist[1].iov_len = bufferlen;
+- bufferlist[2].iov_base = &aligned_data;
+- bufferlist[2].iov_len = (packetlen_aligned - packetlen);
+-
+- return hv_ringbuffer_write(channel, bufferlist, 3, requestid, NULL);
+-}
+-EXPORT_SYMBOL_GPL(vmbus_sendpacket_pagebuffer);
+-
+-/*
+- * vmbus_sendpacket_multipagebuffer - Send a multi-page buffer packet
++ * vmbus_sendpacket_mpb_desc - Send one or more multi-page buffer packets
+ * using a GPADL Direct packet type.
+- * The buffer includes the vmbus descriptor.
++ * The desc argument must include space for the VMBus descriptor. The
++ * rangecount field must already be set.
+ */
+ int vmbus_sendpacket_mpb_desc(struct vmbus_channel *channel,
+ struct vmbus_packet_mpb_array *desc,
+@@ -1160,7 +1102,6 @@ int vmbus_sendpacket_mpb_desc(struct vmbus_channel *channel,
+ desc->length8 = (u16)(packetlen_aligned >> 3);
+ desc->transactionid = VMBUS_RQST_ERROR; /* will be updated in hv_ringbuffer_write() */
+ desc->reserved = 0;
+- desc->rangecount = 1;
+
+ bufferlist[0].iov_base = desc;
+ bufferlist[0].iov_len = desc_size;
+diff --git a/drivers/i2c/busses/i2c-designware-pcidrv.c b/drivers/i2c/busses/i2c-designware-pcidrv.c
+index 8e0267c7cc294e..f21f9877c04047 100644
+--- a/drivers/i2c/busses/i2c-designware-pcidrv.c
++++ b/drivers/i2c/busses/i2c-designware-pcidrv.c
+@@ -278,9 +278,11 @@ static int i2c_dw_pci_probe(struct pci_dev *pdev,
+
+ if ((dev->flags & MODEL_MASK) == MODEL_AMD_NAVI_GPU) {
+ dev->slave = i2c_new_ccgx_ucsi(&dev->adapter, dev->irq, &dgpu_node);
+- if (IS_ERR(dev->slave))
++ if (IS_ERR(dev->slave)) {
++ i2c_del_adapter(&dev->adapter);
+ return dev_err_probe(device, PTR_ERR(dev->slave),
+ "register UCSI failed\n");
++ }
+ }
+
+ pm_runtime_set_autosuspend_delay(device, 1000);
+diff --git a/drivers/iio/adc/ad7606.c b/drivers/iio/adc/ad7606.c
+index d39354afd5394d..0339e27f92c323 100644
+--- a/drivers/iio/adc/ad7606.c
++++ b/drivers/iio/adc/ad7606.c
+@@ -85,6 +85,10 @@ static const unsigned int ad7606_oversampling_avail[7] = {
+ 1, 2, 4, 8, 16, 32, 64,
+ };
+
++static const unsigned int ad7606b_oversampling_avail[9] = {
++ 1, 2, 4, 8, 16, 32, 64, 128, 256,
++};
++
+ static const unsigned int ad7616_oversampling_avail[8] = {
+ 1, 2, 4, 8, 16, 32, 64, 128,
+ };
+@@ -187,6 +191,8 @@ static int ad7608_chan_scale_setup(struct iio_dev *indio_dev,
+ struct iio_chan_spec *chan, int ch);
+ static int ad7609_chan_scale_setup(struct iio_dev *indio_dev,
+ struct iio_chan_spec *chan, int ch);
++static int ad7616_sw_mode_setup(struct iio_dev *indio_dev);
++static int ad7606b_sw_mode_setup(struct iio_dev *indio_dev);
+
+ const struct ad7606_chip_info ad7605_4_info = {
+ .channels = ad7605_channels,
+@@ -239,6 +245,7 @@ const struct ad7606_chip_info ad7606b_info = {
+ .oversampling_avail = ad7606_oversampling_avail,
+ .oversampling_num = ARRAY_SIZE(ad7606_oversampling_avail),
+ .scale_setup_cb = ad7606_16bit_chan_scale_setup,
++ .sw_setup_cb = ad7606b_sw_mode_setup,
+ };
+ EXPORT_SYMBOL_NS_GPL(ad7606b_info, "IIO_AD7606");
+
+@@ -250,6 +257,7 @@ const struct ad7606_chip_info ad7606c_16_info = {
+ .oversampling_avail = ad7606_oversampling_avail,
+ .oversampling_num = ARRAY_SIZE(ad7606_oversampling_avail),
+ .scale_setup_cb = ad7606c_16bit_chan_scale_setup,
++ .sw_setup_cb = ad7606b_sw_mode_setup,
+ };
+ EXPORT_SYMBOL_NS_GPL(ad7606c_16_info, "IIO_AD7606");
+
+@@ -294,6 +302,7 @@ const struct ad7606_chip_info ad7606c_18_info = {
+ .oversampling_avail = ad7606_oversampling_avail,
+ .oversampling_num = ARRAY_SIZE(ad7606_oversampling_avail),
+ .scale_setup_cb = ad7606c_18bit_chan_scale_setup,
++ .sw_setup_cb = ad7606b_sw_mode_setup,
+ };
+ EXPORT_SYMBOL_NS_GPL(ad7606c_18_info, "IIO_AD7606");
+
+@@ -307,6 +316,7 @@ const struct ad7606_chip_info ad7616_info = {
+ .oversampling_num = ARRAY_SIZE(ad7616_oversampling_avail),
+ .os_req_reset = true,
+ .scale_setup_cb = ad7606_16bit_chan_scale_setup,
++ .sw_setup_cb = ad7616_sw_mode_setup,
+ };
+ EXPORT_SYMBOL_NS_GPL(ad7616_info, "IIO_AD7606");
+
+@@ -1138,16 +1148,123 @@ static const struct iio_trigger_ops ad7606_trigger_ops = {
+ .validate_device = iio_trigger_validate_own_device,
+ };
+
+-static int ad7606_sw_mode_setup(struct iio_dev *indio_dev)
++static int ad7606_write_mask(struct ad7606_state *st, unsigned int addr,
++ unsigned long mask, unsigned int val)
++{
++ int readval;
++
++ readval = st->bops->reg_read(st, addr);
++ if (readval < 0)
++ return readval;
++
++ readval &= ~mask;
++ readval |= val;
++
++ return st->bops->reg_write(st, addr, readval);
++}
++
++static int ad7616_write_scale_sw(struct iio_dev *indio_dev, int ch, int val)
+ {
+ struct ad7606_state *st = iio_priv(indio_dev);
++ unsigned int ch_addr, mode, ch_index;
+
+- st->sw_mode_en = st->bops->sw_mode_config &&
+- device_property_present(st->dev, "adi,sw-mode");
+- if (!st->sw_mode_en)
+- return 0;
++ /*
++ * Ad7616 has 16 channels divided in group A and group B.
++ * The range of channels from A are stored in registers with address 4
++ * while channels from B are stored in register with address 6.
++ * The last bit from channels determines if it is from group A or B
++ * because the order of channels in iio is 0A, 0B, 1A, 1B...
++ */
++ ch_index = ch >> 1;
++
++ ch_addr = AD7616_RANGE_CH_ADDR(ch_index);
++
++ if ((ch & 0x1) == 0) /* channel A */
++ ch_addr += AD7616_RANGE_CH_A_ADDR_OFF;
++ else /* channel B */
++ ch_addr += AD7616_RANGE_CH_B_ADDR_OFF;
++
++ /* 0b01 for 2.5v, 0b10 for 5v and 0b11 for 10v */
++ mode = AD7616_RANGE_CH_MODE(ch_index, ((val + 1) & 0b11));
++
++ return ad7606_write_mask(st, ch_addr, AD7616_RANGE_CH_MSK(ch_index),
++ mode);
++}
++
++static int ad7616_write_os_sw(struct iio_dev *indio_dev, int val)
++{
++ struct ad7606_state *st = iio_priv(indio_dev);
++
++ return ad7606_write_mask(st, AD7616_CONFIGURATION_REGISTER,
++ AD7616_OS_MASK, val << 2);
++}
++
++static int ad7606_write_scale_sw(struct iio_dev *indio_dev, int ch, int val)
++{
++ struct ad7606_state *st = iio_priv(indio_dev);
++
++ return ad7606_write_mask(st, AD7606_RANGE_CH_ADDR(ch),
++ AD7606_RANGE_CH_MSK(ch),
++ AD7606_RANGE_CH_MODE(ch, val));
++}
++
++static int ad7606_write_os_sw(struct iio_dev *indio_dev, int val)
++{
++ struct ad7606_state *st = iio_priv(indio_dev);
++
++ return st->bops->reg_write(st, AD7606_OS_MODE, val);
++}
++
++static int ad7616_sw_mode_setup(struct iio_dev *indio_dev)
++{
++ struct ad7606_state *st = iio_priv(indio_dev);
++ int ret;
++
++ /*
++ * Scale can be configured individually for each channel
++ * in software mode.
++ */
++
++ st->write_scale = ad7616_write_scale_sw;
++ st->write_os = &ad7616_write_os_sw;
++
++ if (st->bops->sw_mode_config) {
++ ret = st->bops->sw_mode_config(indio_dev);
++ if (ret)
++ return ret;
++ }
++
++ /* Activate Burst mode and SEQEN MODE */
++ return ad7606_write_mask(st, AD7616_CONFIGURATION_REGISTER,
++ AD7616_BURST_MODE | AD7616_SEQEN_MODE,
++ AD7616_BURST_MODE | AD7616_SEQEN_MODE);
++}
+
+- indio_dev->info = &ad7606_info_sw_mode;
++static int ad7606b_sw_mode_setup(struct iio_dev *indio_dev)
++{
++ struct ad7606_state *st = iio_priv(indio_dev);
++ DECLARE_BITMAP(os, 3);
++
++ bitmap_fill(os, 3);
++ /*
++ * Software mode is enabled when all three oversampling
++ * pins are set to high. If oversampling gpios are defined
++ * in the device tree, then they need to be set to high,
++ * otherwise, they must be hardwired to VDD
++ */
++ if (st->gpio_os) {
++ gpiod_set_array_value(st->gpio_os->ndescs, st->gpio_os->desc,
++ st->gpio_os->info, os);
++ }
++ /* OS of 128 and 256 are available only in software mode */
++ st->oversampling_avail = ad7606b_oversampling_avail;
++ st->num_os_ratios = ARRAY_SIZE(ad7606b_oversampling_avail);
++
++ st->write_scale = ad7606_write_scale_sw;
++ st->write_os = &ad7606_write_os_sw;
++
++ if (!st->bops->sw_mode_config)
++ return 0;
+
+ return st->bops->sw_mode_config(indio_dev);
+ }
+@@ -1246,17 +1363,6 @@ int ad7606_probe(struct device *dev, int irq, void __iomem *base_address,
+ return -ERESTARTSYS;
+ }
+
+- st->write_scale = ad7606_write_scale_hw;
+- st->write_os = ad7606_write_os_hw;
+-
+- ret = ad7606_sw_mode_setup(indio_dev);
+- if (ret)
+- return ret;
+-
+- ret = ad7606_chan_scales_setup(indio_dev);
+- if (ret)
+- return ret;
+-
+ /* If convst pin is not defined, setup PWM. */
+ if (!st->gpio_convst) {
+ st->cnvst_pwm = devm_pwm_get(dev, NULL);
+@@ -1334,6 +1440,20 @@ int ad7606_probe(struct device *dev, int irq, void __iomem *base_address,
+ return ret;
+ }
+
++ st->write_scale = ad7606_write_scale_hw;
++ st->write_os = ad7606_write_os_hw;
++
++ st->sw_mode_en = st->chip_info->sw_setup_cb &&
++ device_property_present(st->dev, "adi,sw-mode");
++ if (st->sw_mode_en) {
++ indio_dev->info = &ad7606_info_sw_mode;
++ st->chip_info->sw_setup_cb(indio_dev);
++ }
++
++ ret = ad7606_chan_scales_setup(indio_dev);
++ if (ret)
++ return ret;
++
+ return devm_iio_device_register(dev, indio_dev);
+ }
+ EXPORT_SYMBOL_NS_GPL(ad7606_probe, "IIO_AD7606");
+diff --git a/drivers/iio/adc/ad7606.h b/drivers/iio/adc/ad7606.h
+index 8778ffe515b30c..7a044b499cfe16 100644
+--- a/drivers/iio/adc/ad7606.h
++++ b/drivers/iio/adc/ad7606.h
+@@ -10,6 +10,36 @@
+
+ #define AD760X_MAX_CHANNELS 16
+
++#define AD7616_CONFIGURATION_REGISTER 0x02
++#define AD7616_OS_MASK GENMASK(4, 2)
++#define AD7616_BURST_MODE BIT(6)
++#define AD7616_SEQEN_MODE BIT(5)
++#define AD7616_RANGE_CH_A_ADDR_OFF 0x04
++#define AD7616_RANGE_CH_B_ADDR_OFF 0x06
++/*
++ * Range of channels from a group are stored in 2 registers.
++ * 0, 1, 2, 3 in a register followed by 4, 5, 6, 7 in second register.
++ * For channels from second group(8-15) the order is the same, only with
++ * an offset of 2 for register address.
++ */
++#define AD7616_RANGE_CH_ADDR(ch) ((ch) >> 2)
++/* The range of the channel is stored in 2 bits */
++#define AD7616_RANGE_CH_MSK(ch) (0b11 << (((ch) & 0b11) * 2))
++#define AD7616_RANGE_CH_MODE(ch, mode) ((mode) << ((((ch) & 0b11)) * 2))
++
++#define AD7606_CONFIGURATION_REGISTER 0x02
++#define AD7606_SINGLE_DOUT 0x00
++
++/*
++ * Range for AD7606B channels are stored in registers starting with address 0x3.
++ * Each register stores range for 2 channels(4 bits per channel).
++ */
++#define AD7606_RANGE_CH_MSK(ch) (GENMASK(3, 0) << (4 * ((ch) & 0x1)))
++#define AD7606_RANGE_CH_MODE(ch, mode) \
++ ((GENMASK(3, 0) & (mode)) << (4 * ((ch) & 0x1)))
++#define AD7606_RANGE_CH_ADDR(ch) (0x03 + ((ch) >> 1))
++#define AD7606_OS_MODE 0x08
++
+ #define AD760X_CHANNEL(num, mask_sep, mask_type, mask_all, bits) { \
+ .type = IIO_VOLTAGE, \
+ .indexed = 1, \
+@@ -71,6 +101,7 @@ struct ad7606_state;
+
+ typedef int (*ad7606_scale_setup_cb_t)(struct iio_dev *indio_dev,
+ struct iio_chan_spec *chan, int ch);
++typedef int (*ad7606_sw_setup_cb_t)(struct iio_dev *indio_dev);
+
+ /**
+ * struct ad7606_chip_info - chip specific information
+@@ -80,6 +111,7 @@ typedef int (*ad7606_scale_setup_cb_t)(struct iio_dev *indio_dev,
+ * @num_channels: number of channels
+ * @num_adc_channels the number of channels the ADC actually inputs.
+ * @scale_setup_cb: callback to setup the scales for each channel
++ * @sw_setup_cb: callback to setup the software mode if available.
+ * @oversampling_avail pointer to the array which stores the available
+ * oversampling ratios.
+ * @oversampling_num number of elements stored in oversampling_avail array
+@@ -94,6 +126,7 @@ struct ad7606_chip_info {
+ unsigned int num_adc_channels;
+ unsigned int num_channels;
+ ad7606_scale_setup_cb_t scale_setup_cb;
++ ad7606_sw_setup_cb_t sw_setup_cb;
+ const unsigned int *oversampling_avail;
+ unsigned int oversampling_num;
+ bool os_req_reset;
+@@ -206,10 +239,6 @@ struct ad7606_bus_ops {
+ int (*reg_write)(struct ad7606_state *st,
+ unsigned int addr,
+ unsigned int val);
+- int (*write_mask)(struct ad7606_state *st,
+- unsigned int addr,
+- unsigned long mask,
+- unsigned int val);
+ int (*update_scan_mode)(struct iio_dev *indio_dev, const unsigned long *scan_mask);
+ u16 (*rd_wr_cmd)(int addr, char isWriteOp);
+ };
+diff --git a/drivers/iio/adc/ad7606_spi.c b/drivers/iio/adc/ad7606_spi.c
+index c8bc9e772dfc26..179115e909888b 100644
+--- a/drivers/iio/adc/ad7606_spi.c
++++ b/drivers/iio/adc/ad7606_spi.c
+@@ -15,36 +15,6 @@
+
+ #define MAX_SPI_FREQ_HZ 23500000 /* VDRIVE above 4.75 V */
+
+-#define AD7616_CONFIGURATION_REGISTER 0x02
+-#define AD7616_OS_MASK GENMASK(4, 2)
+-#define AD7616_BURST_MODE BIT(6)
+-#define AD7616_SEQEN_MODE BIT(5)
+-#define AD7616_RANGE_CH_A_ADDR_OFF 0x04
+-#define AD7616_RANGE_CH_B_ADDR_OFF 0x06
+-/*
+- * Range of channels from a group are stored in 2 registers.
+- * 0, 1, 2, 3 in a register followed by 4, 5, 6, 7 in second register.
+- * For channels from second group(8-15) the order is the same, only with
+- * an offset of 2 for register address.
+- */
+-#define AD7616_RANGE_CH_ADDR(ch) ((ch) >> 2)
+-/* The range of the channel is stored in 2 bits */
+-#define AD7616_RANGE_CH_MSK(ch) (0b11 << (((ch) & 0b11) * 2))
+-#define AD7616_RANGE_CH_MODE(ch, mode) ((mode) << ((((ch) & 0b11)) * 2))
+-
+-#define AD7606_CONFIGURATION_REGISTER 0x02
+-#define AD7606_SINGLE_DOUT 0x00
+-
+-/*
+- * Range for AD7606B channels are stored in registers starting with address 0x3.
+- * Each register stores range for 2 channels(4 bits per channel).
+- */
+-#define AD7606_RANGE_CH_MSK(ch) (GENMASK(3, 0) << (4 * ((ch) & 0x1)))
+-#define AD7606_RANGE_CH_MODE(ch, mode) \
+- ((GENMASK(3, 0) & mode) << (4 * ((ch) & 0x1)))
+-#define AD7606_RANGE_CH_ADDR(ch) (0x03 + ((ch) >> 1))
+-#define AD7606_OS_MODE 0x08
+-
+ static const struct iio_chan_spec ad7616_sw_channels[] = {
+ IIO_CHAN_SOFT_TIMESTAMP(16),
+ AD7616_CHANNEL(0),
+@@ -89,10 +59,6 @@ static const struct iio_chan_spec ad7606c_18_sw_channels[] = {
+ AD7606_SW_CHANNEL(7, 18),
+ };
+
+-static const unsigned int ad7606B_oversampling_avail[9] = {
+- 1, 2, 4, 8, 16, 32, 64, 128, 256
+-};
+-
+ static u16 ad7616_spi_rd_wr_cmd(int addr, char isWriteOp)
+ {
+ /*
+@@ -194,118 +160,20 @@ static int ad7606_spi_reg_write(struct ad7606_state *st,
+ return spi_write(spi, &st->d16[0], sizeof(st->d16[0]));
+ }
+
+-static int ad7606_spi_write_mask(struct ad7606_state *st,
+- unsigned int addr,
+- unsigned long mask,
+- unsigned int val)
+-{
+- int readval;
+-
+- readval = st->bops->reg_read(st, addr);
+- if (readval < 0)
+- return readval;
+-
+- readval &= ~mask;
+- readval |= val;
+-
+- return st->bops->reg_write(st, addr, readval);
+-}
+-
+-static int ad7616_write_scale_sw(struct iio_dev *indio_dev, int ch, int val)
+-{
+- struct ad7606_state *st = iio_priv(indio_dev);
+- unsigned int ch_addr, mode, ch_index;
+-
+-
+- /*
+- * Ad7616 has 16 channels divided in group A and group B.
+- * The range of channels from A are stored in registers with address 4
+- * while channels from B are stored in register with address 6.
+- * The last bit from channels determines if it is from group A or B
+- * because the order of channels in iio is 0A, 0B, 1A, 1B...
+- */
+- ch_index = ch >> 1;
+-
+- ch_addr = AD7616_RANGE_CH_ADDR(ch_index);
+-
+- if ((ch & 0x1) == 0) /* channel A */
+- ch_addr += AD7616_RANGE_CH_A_ADDR_OFF;
+- else /* channel B */
+- ch_addr += AD7616_RANGE_CH_B_ADDR_OFF;
+-
+- /* 0b01 for 2.5v, 0b10 for 5v and 0b11 for 10v */
+- mode = AD7616_RANGE_CH_MODE(ch_index, ((val + 1) & 0b11));
+- return st->bops->write_mask(st, ch_addr, AD7616_RANGE_CH_MSK(ch_index),
+- mode);
+-}
+-
+-static int ad7616_write_os_sw(struct iio_dev *indio_dev, int val)
+-{
+- struct ad7606_state *st = iio_priv(indio_dev);
+-
+- return st->bops->write_mask(st, AD7616_CONFIGURATION_REGISTER,
+- AD7616_OS_MASK, val << 2);
+-}
+-
+-static int ad7606_write_scale_sw(struct iio_dev *indio_dev, int ch, int val)
+-{
+- struct ad7606_state *st = iio_priv(indio_dev);
+-
+- return ad7606_spi_write_mask(st,
+- AD7606_RANGE_CH_ADDR(ch),
+- AD7606_RANGE_CH_MSK(ch),
+- AD7606_RANGE_CH_MODE(ch, val));
+-}
+-
+-static int ad7606_write_os_sw(struct iio_dev *indio_dev, int val)
+-{
+- struct ad7606_state *st = iio_priv(indio_dev);
+-
+- return ad7606_spi_reg_write(st, AD7606_OS_MODE, val);
+-}
+-
+ static int ad7616_sw_mode_config(struct iio_dev *indio_dev)
+ {
+- struct ad7606_state *st = iio_priv(indio_dev);
+-
+ /*
+ * Scale can be configured individually for each channel
+ * in software mode.
+ */
+ indio_dev->channels = ad7616_sw_channels;
+
+- st->write_scale = ad7616_write_scale_sw;
+- st->write_os = &ad7616_write_os_sw;
+-
+- /* Activate Burst mode and SEQEN MODE */
+- return st->bops->write_mask(st,
+- AD7616_CONFIGURATION_REGISTER,
+- AD7616_BURST_MODE | AD7616_SEQEN_MODE,
+- AD7616_BURST_MODE | AD7616_SEQEN_MODE);
++ return 0;
+ }
+
+ static int ad7606B_sw_mode_config(struct iio_dev *indio_dev)
+ {
+ struct ad7606_state *st = iio_priv(indio_dev);
+- DECLARE_BITMAP(os, 3);
+-
+- bitmap_fill(os, 3);
+- /*
+- * Software mode is enabled when all three oversampling
+- * pins are set to high. If oversampling gpios are defined
+- * in the device tree, then they need to be set to high,
+- * otherwise, they must be hardwired to VDD
+- */
+- if (st->gpio_os) {
+- gpiod_set_array_value(st->gpio_os->ndescs,
+- st->gpio_os->desc, st->gpio_os->info, os);
+- }
+- /* OS of 128 and 256 are available only in software mode */
+- st->oversampling_avail = ad7606B_oversampling_avail;
+- st->num_os_ratios = ARRAY_SIZE(ad7606B_oversampling_avail);
+-
+- st->write_scale = ad7606_write_scale_sw;
+- st->write_os = &ad7606_write_os_sw;
+
+ /* Configure device spi to output on a single channel */
+ st->bops->reg_write(st,
+@@ -350,7 +218,6 @@ static const struct ad7606_bus_ops ad7616_spi_bops = {
+ .read_block = ad7606_spi_read_block,
+ .reg_read = ad7606_spi_reg_read,
+ .reg_write = ad7606_spi_reg_write,
+- .write_mask = ad7606_spi_write_mask,
+ .rd_wr_cmd = ad7616_spi_rd_wr_cmd,
+ .sw_mode_config = ad7616_sw_mode_config,
+ };
+@@ -359,7 +226,6 @@ static const struct ad7606_bus_ops ad7606b_spi_bops = {
+ .read_block = ad7606_spi_read_block,
+ .reg_read = ad7606_spi_reg_read,
+ .reg_write = ad7606_spi_reg_write,
+- .write_mask = ad7606_spi_write_mask,
+ .rd_wr_cmd = ad7606B_spi_rd_wr_cmd,
+ .sw_mode_config = ad7606B_sw_mode_config,
+ };
+@@ -368,7 +234,6 @@ static const struct ad7606_bus_ops ad7606c_18_spi_bops = {
+ .read_block = ad7606_spi_read_block18to32,
+ .reg_read = ad7606_spi_reg_read,
+ .reg_write = ad7606_spi_reg_write,
+- .write_mask = ad7606_spi_write_mask,
+ .rd_wr_cmd = ad7606B_spi_rd_wr_cmd,
+ .sw_mode_config = ad7606c_18_sw_mode_config,
+ };
+diff --git a/drivers/infiniband/core/device.c b/drivers/infiniband/core/device.c
+index ee75b99f84bcc2..baf12e3e2a7ea9 100644
+--- a/drivers/infiniband/core/device.c
++++ b/drivers/infiniband/core/device.c
+@@ -1352,6 +1352,9 @@ static void ib_device_notify_register(struct ib_device *device)
+
+ down_read(&devices_rwsem);
+
++ /* Mark for userspace that device is ready */
++ kobject_uevent(&device->dev.kobj, KOBJ_ADD);
++
+ ret = rdma_nl_notify_event(device, 0, RDMA_REGISTER_EVENT);
+ if (ret)
+ goto out;
+@@ -1468,10 +1471,9 @@ int ib_register_device(struct ib_device *device, const char *name,
+ return ret;
+ }
+ dev_set_uevent_suppress(&device->dev, false);
+- /* Mark for userspace that device is ready */
+- kobject_uevent(&device->dev.kobj, KOBJ_ADD);
+
+ ib_device_notify_register(device);
++
+ ib_device_put(device);
+
+ return 0;
+diff --git a/drivers/infiniband/sw/rxe/rxe_cq.c b/drivers/infiniband/sw/rxe/rxe_cq.c
+index fec87c9030abdc..fffd144d509eb0 100644
+--- a/drivers/infiniband/sw/rxe/rxe_cq.c
++++ b/drivers/infiniband/sw/rxe/rxe_cq.c
+@@ -56,11 +56,8 @@ int rxe_cq_from_init(struct rxe_dev *rxe, struct rxe_cq *cq, int cqe,
+
+ err = do_mmap_info(rxe, uresp ? &uresp->mi : NULL, udata,
+ cq->queue->buf, cq->queue->buf_size, &cq->queue->ip);
+- if (err) {
+- vfree(cq->queue->buf);
+- kfree(cq->queue);
++ if (err)
+ return err;
+- }
+
+ cq->is_user = uresp;
+
+diff --git a/drivers/net/dsa/b53/b53_common.c b/drivers/net/dsa/b53/b53_common.c
+index e3b5b450ee932b..10484d19eaac50 100644
+--- a/drivers/net/dsa/b53/b53_common.c
++++ b/drivers/net/dsa/b53/b53_common.c
+@@ -326,6 +326,26 @@ static void b53_get_vlan_entry(struct b53_device *dev, u16 vid,
+ }
+ }
+
++static void b53_set_eap_mode(struct b53_device *dev, int port, int mode)
++{
++ u64 eap_conf;
++
++ if (is5325(dev) || is5365(dev) || dev->chip_id == BCM5389_DEVICE_ID)
++ return;
++
++ b53_read64(dev, B53_EAP_PAGE, B53_PORT_EAP_CONF(port), &eap_conf);
++
++ if (is63xx(dev)) {
++ eap_conf &= ~EAP_MODE_MASK_63XX;
++ eap_conf |= (u64)mode << EAP_MODE_SHIFT_63XX;
++ } else {
++ eap_conf &= ~EAP_MODE_MASK;
++ eap_conf |= (u64)mode << EAP_MODE_SHIFT;
++ }
++
++ b53_write64(dev, B53_EAP_PAGE, B53_PORT_EAP_CONF(port), eap_conf);
++}
++
+ static void b53_set_forwarding(struct b53_device *dev, int enable)
+ {
+ u8 mgmt;
+@@ -586,6 +606,13 @@ int b53_setup_port(struct dsa_switch *ds, int port)
+ b53_port_set_mcast_flood(dev, port, true);
+ b53_port_set_learning(dev, port, false);
+
++ /* Force all traffic to go to the CPU port to prevent the ASIC from
++ * trying to forward to bridged ports on matching FDB entries, then
++ * dropping frames because it isn't allowed to forward there.
++ */
++ if (dsa_is_user_port(ds, port))
++ b53_set_eap_mode(dev, port, EAP_MODE_SIMPLIFIED);
++
+ return 0;
+ }
+ EXPORT_SYMBOL(b53_setup_port);
+@@ -2042,6 +2069,9 @@ int b53_br_join(struct dsa_switch *ds, int port, struct dsa_bridge bridge,
+ pvlan |= BIT(i);
+ }
+
++ /* Disable redirection of unknown SA to the CPU port */
++ b53_set_eap_mode(dev, port, EAP_MODE_BASIC);
++
+ /* Configure the local port VLAN control membership to include
+ * remote ports and update the local port bitmask
+ */
+@@ -2077,6 +2107,9 @@ void b53_br_leave(struct dsa_switch *ds, int port, struct dsa_bridge bridge)
+ pvlan &= ~BIT(i);
+ }
+
++ /* Enable redirection of unknown SA to the CPU port */
++ b53_set_eap_mode(dev, port, EAP_MODE_SIMPLIFIED);
++
+ b53_write16(dev, B53_PVLAN_PAGE, B53_PVLAN_PORT_MASK(port), pvlan);
+ dev->ports[port].vlan_ctl_mask = pvlan;
+
+diff --git a/drivers/net/dsa/b53/b53_regs.h b/drivers/net/dsa/b53/b53_regs.h
+index bfbcb66bef6626..5f7a0e5c5709d3 100644
+--- a/drivers/net/dsa/b53/b53_regs.h
++++ b/drivers/net/dsa/b53/b53_regs.h
+@@ -50,6 +50,9 @@
+ /* Jumbo Frame Registers */
+ #define B53_JUMBO_PAGE 0x40
+
++/* EAP Registers */
++#define B53_EAP_PAGE 0x42
++
+ /* EEE Control Registers Page */
+ #define B53_EEE_PAGE 0x92
+
+@@ -480,6 +483,17 @@
+ #define JMS_MIN_SIZE 1518
+ #define JMS_MAX_SIZE 9724
+
++/*************************************************************************
++ * EAP Page Registers
++ *************************************************************************/
++#define B53_PORT_EAP_CONF(i) (0x20 + 8 * (i))
++#define EAP_MODE_SHIFT 51
++#define EAP_MODE_SHIFT_63XX 50
++#define EAP_MODE_MASK (0x3ull << EAP_MODE_SHIFT)
++#define EAP_MODE_MASK_63XX (0x3ull << EAP_MODE_SHIFT_63XX)
++#define EAP_MODE_BASIC 0
++#define EAP_MODE_SIMPLIFIED 3
++
+ /*************************************************************************
+ * EEE Configuration Page Registers
+ *************************************************************************/
+diff --git a/drivers/net/dsa/microchip/ksz_common.c b/drivers/net/dsa/microchip/ksz_common.c
+index 89f0796894af66..f95a9aac56ee1b 100644
+--- a/drivers/net/dsa/microchip/ksz_common.c
++++ b/drivers/net/dsa/microchip/ksz_common.c
+@@ -265,16 +265,70 @@ static void ksz_phylink_mac_link_down(struct phylink_config *config,
+ unsigned int mode,
+ phy_interface_t interface);
+
++/**
++ * ksz_phylink_mac_disable_tx_lpi() - Callback to signal LPI support (Dummy)
++ * @config: phylink config structure
++ *
++ * This function is a dummy handler. See ksz_phylink_mac_enable_tx_lpi() for
++ * a detailed explanation of EEE/LPI handling in KSZ switches.
++ */
++static void ksz_phylink_mac_disable_tx_lpi(struct phylink_config *config)
++{
++}
++
++/**
++ * ksz_phylink_mac_enable_tx_lpi() - Callback to signal LPI support (Dummy)
++ * @config: phylink config structure
++ * @timer: timer value before entering LPI (unused)
++ * @tx_clock_stop: whether to stop the TX clock in LPI mode (unused)
++ *
++ * This function signals to phylink that the driver architecture supports
++ * LPI management, enabling phylink to control EEE advertisement during
++ * negotiation according to IEEE Std 802.3 (Clause 78).
++ *
++ * Hardware Management of EEE/LPI State:
++ * For KSZ switch ports with integrated PHYs (e.g., KSZ9893R ports 1-2),
++ * observation and testing suggest that the actual EEE / Low Power Idle (LPI)
++ * state transitions are managed autonomously by the hardware based on
++ * the auto-negotiation results. (Note: While the datasheet describes EEE
++ * operation based on negotiation, it doesn't explicitly detail the internal
++ * MAC/PHY interaction, so autonomous hardware management of the MAC state
++ * for LPI is inferred from observed behavior).
++ * This hardware control, consistent with the switch's ability to operate
++ * autonomously via strapping, means MAC-level software intervention is not
++ * required or exposed for managing the LPI state once EEE is negotiated.
++ * (Ref: KSZ9893R Data Sheet DS00002420D, primarily Section 4.7.5 explaining
++ * EEE, also Sections 4.1.7 on Auto-Negotiation and 3.2.1 on Configuration
++ * Straps).
++ *
++ * Additionally, ports configured as MAC interfaces (e.g., KSZ9893R port 3)
++ * lack documented MAC-level LPI control.
++ *
++ * Therefore, this callback performs no action and serves primarily to inform
++ * phylink of LPI awareness and to document the inferred hardware behavior.
++ *
++ * Returns: 0 (Always success)
++ */
++static int ksz_phylink_mac_enable_tx_lpi(struct phylink_config *config,
++ u32 timer, bool tx_clock_stop)
++{
++ return 0;
++}
++
+ static const struct phylink_mac_ops ksz88x3_phylink_mac_ops = {
+ .mac_config = ksz88x3_phylink_mac_config,
+ .mac_link_down = ksz_phylink_mac_link_down,
+ .mac_link_up = ksz8_phylink_mac_link_up,
++ .mac_disable_tx_lpi = ksz_phylink_mac_disable_tx_lpi,
++ .mac_enable_tx_lpi = ksz_phylink_mac_enable_tx_lpi,
+ };
+
+ static const struct phylink_mac_ops ksz8_phylink_mac_ops = {
+ .mac_config = ksz_phylink_mac_config,
+ .mac_link_down = ksz_phylink_mac_link_down,
+ .mac_link_up = ksz8_phylink_mac_link_up,
++ .mac_disable_tx_lpi = ksz_phylink_mac_disable_tx_lpi,
++ .mac_enable_tx_lpi = ksz_phylink_mac_enable_tx_lpi,
+ };
+
+ static const struct ksz_dev_ops ksz88xx_dev_ops = {
+@@ -358,6 +412,8 @@ static const struct phylink_mac_ops ksz9477_phylink_mac_ops = {
+ .mac_config = ksz_phylink_mac_config,
+ .mac_link_down = ksz_phylink_mac_link_down,
+ .mac_link_up = ksz9477_phylink_mac_link_up,
++ .mac_disable_tx_lpi = ksz_phylink_mac_disable_tx_lpi,
++ .mac_enable_tx_lpi = ksz_phylink_mac_enable_tx_lpi,
+ };
+
+ static const struct ksz_dev_ops ksz9477_dev_ops = {
+@@ -401,6 +457,8 @@ static const struct phylink_mac_ops lan937x_phylink_mac_ops = {
+ .mac_config = ksz_phylink_mac_config,
+ .mac_link_down = ksz_phylink_mac_link_down,
+ .mac_link_up = ksz9477_phylink_mac_link_up,
++ .mac_disable_tx_lpi = ksz_phylink_mac_disable_tx_lpi,
++ .mac_enable_tx_lpi = ksz_phylink_mac_enable_tx_lpi,
+ };
+
+ static const struct ksz_dev_ops lan937x_dev_ops = {
+@@ -2016,6 +2074,18 @@ static void ksz_phylink_get_caps(struct dsa_switch *ds, int port,
+
+ if (dev->dev_ops->get_caps)
+ dev->dev_ops->get_caps(dev, port, config);
++
++ if (ds->ops->support_eee && ds->ops->support_eee(ds, port)) {
++ memcpy(config->lpi_interfaces, config->supported_interfaces,
++ sizeof(config->lpi_interfaces));
++
++ config->lpi_capabilities = MAC_100FD;
++ if (dev->info->gbit_capable[port])
++ config->lpi_capabilities |= MAC_1000FD;
++
++ /* EEE is fully operational */
++ config->eee_enabled_default = true;
++ }
+ }
+
+ void ksz_r_mib_stats64(struct ksz_device *dev, int port)
+@@ -3008,31 +3078,6 @@ static u32 ksz_get_phy_flags(struct dsa_switch *ds, int port)
+ if (!port)
+ return MICREL_KSZ8_P1_ERRATA;
+ break;
+- case KSZ8567_CHIP_ID:
+- /* KSZ8567R Errata DS80000752C Module 4 */
+- case KSZ8765_CHIP_ID:
+- case KSZ8794_CHIP_ID:
+- case KSZ8795_CHIP_ID:
+- /* KSZ879x/KSZ877x/KSZ876x Errata DS80000687C Module 2 */
+- case KSZ9477_CHIP_ID:
+- /* KSZ9477S Errata DS80000754A Module 4 */
+- case KSZ9567_CHIP_ID:
+- /* KSZ9567S Errata DS80000756A Module 4 */
+- case KSZ9896_CHIP_ID:
+- /* KSZ9896C Errata DS80000757A Module 3 */
+- case KSZ9897_CHIP_ID:
+- case LAN9646_CHIP_ID:
+- /* KSZ9897R Errata DS80000758C Module 4 */
+- /* Energy Efficient Ethernet (EEE) feature select must be manually disabled
+- * The EEE feature is enabled by default, but it is not fully
+- * operational. It must be manually disabled through register
+- * controls. If not disabled, the PHY ports can auto-negotiate
+- * to enable EEE, and this feature can cause link drops when
+- * linked to another device supporting EEE.
+- *
+- * The same item appears in the errata for all switches above.
+- */
+- return MICREL_NO_EEE;
+ }
+
+ return 0;
+@@ -3466,6 +3511,20 @@ static int ksz_max_mtu(struct dsa_switch *ds, int port)
+ return -EOPNOTSUPP;
+ }
+
++/**
++ * ksz_support_eee - Determine Energy Efficient Ethernet (EEE) support for a
++ * port
++ * @ds: Pointer to the DSA switch structure
++ * @port: Port number to check
++ *
++ * This function also documents devices where EEE was initially advertised but
++ * later withdrawn due to reliability issues, as described in official errata
++ * documents. These devices are explicitly listed to record known limitations,
++ * even if there is no technical necessity for runtime checks.
++ *
++ * Returns: true if the internal PHY on the given port supports fully
++ * operational EEE, false otherwise.
++ */
+ static bool ksz_support_eee(struct dsa_switch *ds, int port)
+ {
+ struct ksz_device *dev = ds->priv;
+@@ -3475,15 +3534,35 @@ static bool ksz_support_eee(struct dsa_switch *ds, int port)
+
+ switch (dev->chip_id) {
+ case KSZ8563_CHIP_ID:
++ case KSZ9563_CHIP_ID:
++ case KSZ9893_CHIP_ID:
++ return true;
+ case KSZ8567_CHIP_ID:
++ /* KSZ8567R Errata DS80000752C Module 4 */
++ case KSZ8765_CHIP_ID:
++ case KSZ8794_CHIP_ID:
++ case KSZ8795_CHIP_ID:
++ /* KSZ879x/KSZ877x/KSZ876x Errata DS80000687C Module 2 */
+ case KSZ9477_CHIP_ID:
+- case KSZ9563_CHIP_ID:
++ /* KSZ9477S Errata DS80000754A Module 4 */
+ case KSZ9567_CHIP_ID:
+- case KSZ9893_CHIP_ID:
++ /* KSZ9567S Errata DS80000756A Module 4 */
+ case KSZ9896_CHIP_ID:
++ /* KSZ9896C Errata DS80000757A Module 3 */
+ case KSZ9897_CHIP_ID:
+ case LAN9646_CHIP_ID:
+- return true;
++ /* KSZ9897R Errata DS80000758C Module 4 */
++ /* Energy Efficient Ethernet (EEE) feature select must be
++ * manually disabled
++ * The EEE feature is enabled by default, but it is not fully
++ * operational. It must be manually disabled through register
++ * controls. If not disabled, the PHY ports can auto-negotiate
++ * to enable EEE, and this feature can cause link drops when
++ * linked to another device supporting EEE.
++ *
++ * The same item appears in the errata for all switches above.
++ */
++ break;
+ }
+
+ return false;
+diff --git a/drivers/net/dsa/sja1105/sja1105_main.c b/drivers/net/dsa/sja1105/sja1105_main.c
+index f8454f3b6f9c5d..f674c400f05b29 100644
+--- a/drivers/net/dsa/sja1105/sja1105_main.c
++++ b/drivers/net/dsa/sja1105/sja1105_main.c
+@@ -2081,6 +2081,7 @@ static void sja1105_bridge_stp_state_set(struct dsa_switch *ds, int port,
+ switch (state) {
+ case BR_STATE_DISABLED:
+ case BR_STATE_BLOCKING:
++ case BR_STATE_LISTENING:
+ /* From UM10944 description of DRPDTAG (why put this there?):
+ * "Management traffic flows to the port regardless of the state
+ * of the INGRESS flag". So BPDUs are still be allowed to pass.
+@@ -2090,11 +2091,6 @@ static void sja1105_bridge_stp_state_set(struct dsa_switch *ds, int port,
+ mac[port].egress = false;
+ mac[port].dyn_learn = false;
+ break;
+- case BR_STATE_LISTENING:
+- mac[port].ingress = true;
+- mac[port].egress = false;
+- mac[port].dyn_learn = false;
+- break;
+ case BR_STATE_LEARNING:
+ mac[port].ingress = true;
+ mac[port].egress = false;
+diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
+index c1f57d96e63fc4..e3cc26472c2f1c 100644
+--- a/drivers/net/ethernet/cadence/macb_main.c
++++ b/drivers/net/ethernet/cadence/macb_main.c
+@@ -1002,22 +1002,15 @@ static void macb_update_stats(struct macb *bp)
+
+ static int macb_halt_tx(struct macb *bp)
+ {
+- unsigned long halt_time, timeout;
+- u32 status;
++ u32 status;
+
+ macb_writel(bp, NCR, macb_readl(bp, NCR) | MACB_BIT(THALT));
+
+- timeout = jiffies + usecs_to_jiffies(MACB_HALT_TIMEOUT);
+- do {
+- halt_time = jiffies;
+- status = macb_readl(bp, TSR);
+- if (!(status & MACB_BIT(TGO)))
+- return 0;
+-
+- udelay(250);
+- } while (time_before(halt_time, timeout));
+-
+- return -ETIMEDOUT;
++ /* Poll TSR until TGO is cleared or timeout. */
++ return read_poll_timeout_atomic(macb_readl, status,
++ !(status & MACB_BIT(TGO)),
++ 250, MACB_HALT_TIMEOUT, false,
++ bp, TSR);
+ }
+
+ static void macb_tx_unmap(struct macb *bp, struct macb_tx_skb *tx_skb, int budget)
+diff --git a/drivers/net/ethernet/engleder/tsnep_main.c b/drivers/net/ethernet/engleder/tsnep_main.c
+index 0d030cb0b21c74..63aeb400051c4f 100644
+--- a/drivers/net/ethernet/engleder/tsnep_main.c
++++ b/drivers/net/ethernet/engleder/tsnep_main.c
+@@ -67,6 +67,8 @@
+ #define TSNEP_TX_TYPE_XDP_NDO_MAP_PAGE (TSNEP_TX_TYPE_XDP_NDO | TSNEP_TX_TYPE_MAP_PAGE)
+ #define TSNEP_TX_TYPE_XDP (TSNEP_TX_TYPE_XDP_TX | TSNEP_TX_TYPE_XDP_NDO)
+ #define TSNEP_TX_TYPE_XSK BIT(12)
++#define TSNEP_TX_TYPE_TSTAMP BIT(13)
++#define TSNEP_TX_TYPE_SKB_TSTAMP (TSNEP_TX_TYPE_SKB | TSNEP_TX_TYPE_TSTAMP)
+
+ #define TSNEP_XDP_TX BIT(0)
+ #define TSNEP_XDP_REDIRECT BIT(1)
+@@ -387,8 +389,7 @@ static void tsnep_tx_activate(struct tsnep_tx *tx, int index, int length,
+ if (entry->skb) {
+ entry->properties = length & TSNEP_DESC_LENGTH_MASK;
+ entry->properties |= TSNEP_DESC_INTERRUPT_FLAG;
+- if ((entry->type & TSNEP_TX_TYPE_SKB) &&
+- (skb_shinfo(entry->skb)->tx_flags & SKBTX_IN_PROGRESS))
++ if ((entry->type & TSNEP_TX_TYPE_SKB_TSTAMP) == TSNEP_TX_TYPE_SKB_TSTAMP)
+ entry->properties |= TSNEP_DESC_EXTENDED_WRITEBACK_FLAG;
+
+ /* toggle user flag to prevent false acknowledge
+@@ -480,7 +481,8 @@ static int tsnep_tx_map_frag(skb_frag_t *frag, struct tsnep_tx_entry *entry,
+ return mapped;
+ }
+
+-static int tsnep_tx_map(struct sk_buff *skb, struct tsnep_tx *tx, int count)
++static int tsnep_tx_map(struct sk_buff *skb, struct tsnep_tx *tx, int count,
++ bool do_tstamp)
+ {
+ struct device *dmadev = tx->adapter->dmadev;
+ struct tsnep_tx_entry *entry;
+@@ -506,6 +508,9 @@ static int tsnep_tx_map(struct sk_buff *skb, struct tsnep_tx *tx, int count)
+ entry->type = TSNEP_TX_TYPE_SKB_INLINE;
+ mapped = 0;
+ }
++
++ if (do_tstamp)
++ entry->type |= TSNEP_TX_TYPE_TSTAMP;
+ } else {
+ skb_frag_t *frag = &skb_shinfo(skb)->frags[i - 1];
+
+@@ -559,11 +564,12 @@ static int tsnep_tx_unmap(struct tsnep_tx *tx, int index, int count)
+ static netdev_tx_t tsnep_xmit_frame_ring(struct sk_buff *skb,
+ struct tsnep_tx *tx)
+ {
+- int count = 1;
+ struct tsnep_tx_entry *entry;
++ bool do_tstamp = false;
++ int count = 1;
+ int length;
+- int i;
+ int retval;
++ int i;
+
+ if (skb_shinfo(skb)->nr_frags > 0)
+ count += skb_shinfo(skb)->nr_frags;
+@@ -580,7 +586,13 @@ static netdev_tx_t tsnep_xmit_frame_ring(struct sk_buff *skb,
+ entry = &tx->entry[tx->write];
+ entry->skb = skb;
+
+- retval = tsnep_tx_map(skb, tx, count);
++ if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP) &&
++ tx->adapter->hwtstamp_config.tx_type == HWTSTAMP_TX_ON) {
++ skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;
++ do_tstamp = true;
++ }
++
++ retval = tsnep_tx_map(skb, tx, count, do_tstamp);
+ if (retval < 0) {
+ tsnep_tx_unmap(tx, tx->write, count);
+ dev_kfree_skb_any(entry->skb);
+@@ -592,9 +604,6 @@ static netdev_tx_t tsnep_xmit_frame_ring(struct sk_buff *skb,
+ }
+ length = retval;
+
+- if (skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP)
+- skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;
+-
+ for (i = 0; i < count; i++)
+ tsnep_tx_activate(tx, (tx->write + i) & TSNEP_RING_MASK, length,
+ i == count - 1);
+@@ -845,8 +854,7 @@ static bool tsnep_tx_poll(struct tsnep_tx *tx, int napi_budget)
+
+ length = tsnep_tx_unmap(tx, tx->read, count);
+
+- if ((entry->type & TSNEP_TX_TYPE_SKB) &&
+- (skb_shinfo(entry->skb)->tx_flags & SKBTX_IN_PROGRESS) &&
++ if (((entry->type & TSNEP_TX_TYPE_SKB_TSTAMP) == TSNEP_TX_TYPE_SKB_TSTAMP) &&
+ (__le32_to_cpu(entry->desc_wb->properties) &
+ TSNEP_DESC_EXTENDED_WRITEBACK_FLAG)) {
+ struct skb_shared_hwtstamps hwtstamps;
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
+index 8216f843a7cd5f..e43c4608d3ba33 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
+@@ -707,6 +707,11 @@ int cgx_get_rx_stats(void *cgxd, int lmac_id, int idx, u64 *rx_stat)
+
+ if (!is_lmac_valid(cgx, lmac_id))
+ return -ENODEV;
++
++ /* pass lmac as 0 for CGX_CMR_RX_STAT9-12 */
++ if (idx >= CGX_RX_STAT_GLOBAL_INDEX)
++ lmac_id = 0;
++
+ *rx_stat = cgx_read(cgx, lmac_id, CGXX_CMRX_RX_STAT0 + (idx * 8));
+ return 0;
+ }
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_macsec.c b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_macsec.c
+index f3b9daffaec3c2..4c7e0f345cb5ba 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_macsec.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_macsec.c
+@@ -531,7 +531,8 @@ static int cn10k_mcs_write_tx_secy(struct otx2_nic *pfvf,
+ if (sw_tx_sc->encrypt)
+ sectag_tci |= (MCS_TCI_E | MCS_TCI_C);
+
+- policy = FIELD_PREP(MCS_TX_SECY_PLCY_MTU, secy->netdev->mtu);
++ policy = FIELD_PREP(MCS_TX_SECY_PLCY_MTU,
++ pfvf->netdev->mtu + OTX2_ETH_HLEN);
+ /* Write SecTag excluding AN bits(1..0) */
+ policy |= FIELD_PREP(MCS_TX_SECY_PLCY_ST_TCI, sectag_tci >> 2);
+ policy |= FIELD_PREP(MCS_TX_SECY_PLCY_ST_OFFSET, tag_offset);
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
+index 65814e3dc93f59..7cc12f10e8a157 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
+@@ -349,6 +349,7 @@ struct otx2_flow_config {
+ struct list_head flow_list_tc;
+ u8 ucast_flt_cnt;
+ bool ntuple;
++ u16 ntuple_cnt;
+ };
+
+ struct dev_hw_ops {
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_devlink.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_devlink.c
+index 33ec9a7f7c0339..e13ae5484c19cb 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_devlink.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_devlink.c
+@@ -41,6 +41,7 @@ static int otx2_dl_mcam_count_set(struct devlink *devlink, u32 id,
+ if (!pfvf->flow_cfg)
+ return 0;
+
++ pfvf->flow_cfg->ntuple_cnt = ctx->val.vu16;
+ otx2_alloc_mcam_entries(pfvf, ctx->val.vu16);
+
+ return 0;
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c
+index 2d53dc77ef1eff..b3f616a7f2e96b 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c
+@@ -315,7 +315,7 @@ static void otx2_get_pauseparam(struct net_device *netdev,
+ struct otx2_nic *pfvf = netdev_priv(netdev);
+ struct cgx_pause_frm_cfg *req, *rsp;
+
+- if (is_otx2_lbkvf(pfvf->pdev))
++ if (is_otx2_lbkvf(pfvf->pdev) || is_otx2_sdp_rep(pfvf->pdev))
+ return;
+
+ mutex_lock(&pfvf->mbox.lock);
+@@ -347,7 +347,7 @@ static int otx2_set_pauseparam(struct net_device *netdev,
+ if (pause->autoneg)
+ return -EOPNOTSUPP;
+
+- if (is_otx2_lbkvf(pfvf->pdev))
++ if (is_otx2_lbkvf(pfvf->pdev) || is_otx2_sdp_rep(pfvf->pdev))
+ return -EOPNOTSUPP;
+
+ if (pause->rx_pause)
+@@ -937,8 +937,8 @@ static u32 otx2_get_link(struct net_device *netdev)
+ {
+ struct otx2_nic *pfvf = netdev_priv(netdev);
+
+- /* LBK link is internal and always UP */
+- if (is_otx2_lbkvf(pfvf->pdev))
++ /* LBK and SDP links are internal and always UP */
++ if (is_otx2_lbkvf(pfvf->pdev) || is_otx2_sdp_rep(pfvf->pdev))
+ return 1;
+ return pfvf->linfo.link_up;
+ }
+@@ -1409,7 +1409,7 @@ static int otx2vf_get_link_ksettings(struct net_device *netdev,
+ {
+ struct otx2_nic *pfvf = netdev_priv(netdev);
+
+- if (is_otx2_lbkvf(pfvf->pdev)) {
++ if (is_otx2_lbkvf(pfvf->pdev) || is_otx2_sdp_rep(pfvf->pdev)) {
+ cmd->base.duplex = DUPLEX_FULL;
+ cmd->base.speed = SPEED_100000;
+ } else {
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_flows.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_flows.c
+index 47bfd1fb37d4bc..64c6d9162ef644 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_flows.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_flows.c
+@@ -247,7 +247,7 @@ int otx2_mcam_entry_init(struct otx2_nic *pfvf)
+ mutex_unlock(&pfvf->mbox.lock);
+
+ /* Allocate entries for Ntuple filters */
+- count = otx2_alloc_mcam_entries(pfvf, OTX2_DEFAULT_FLOWCOUNT);
++ count = otx2_alloc_mcam_entries(pfvf, flow_cfg->ntuple_cnt);
+ if (count <= 0) {
+ otx2_clear_ntuple_flow_info(pfvf, flow_cfg);
+ return 0;
+@@ -307,6 +307,7 @@ int otx2_mcam_flow_init(struct otx2_nic *pf)
+ INIT_LIST_HEAD(&pf->flow_cfg->flow_list_tc);
+
+ pf->flow_cfg->ucast_flt_cnt = OTX2_DEFAULT_UNICAST_FLOWS;
++ pf->flow_cfg->ntuple_cnt = OTX2_DEFAULT_FLOWCOUNT;
+
+ /* Allocate bare minimum number of MCAM entries needed for
+ * unicast and ntuple filters.
+diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+index 341def2bf1d354..22a8b909dd80bc 100644
+--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c
++++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+@@ -4683,7 +4683,7 @@ static int mtk_add_mac(struct mtk_eth *eth, struct device_node *np)
+ }
+
+ if (mtk_is_netsys_v3_or_greater(mac->hw) &&
+- MTK_HAS_CAPS(mac->hw->soc->caps, MTK_ESW_BIT) &&
++ MTK_HAS_CAPS(mac->hw->soc->caps, MTK_ESW) &&
+ id == MTK_GMAC1_ID) {
+ mac->phylink_config.mac_capabilities = MAC_ASYM_PAUSE |
+ MAC_SYM_PAUSE |
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index 8fcaee381b0e09..01f6a60308cb7c 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -4373,6 +4373,10 @@ static netdev_features_t mlx5e_fix_uplink_rep_features(struct net_device *netdev
+ if (netdev->features & NETIF_F_HW_VLAN_CTAG_FILTER)
+ netdev_warn(netdev, "Disabling HW_VLAN CTAG FILTERING, not supported in switchdev mode\n");
+
++ features &= ~NETIF_F_HW_MACSEC;
++ if (netdev->features & NETIF_F_HW_MACSEC)
++ netdev_warn(netdev, "Disabling HW MACsec offload, not supported in switchdev mode\n");
++
+ return features;
+ }
+
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
+index 7d6d859cef3f9f..511cd92e0e3e7c 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
+@@ -3014,6 +3014,9 @@ static int mlxsw_sp_neigh_rif_made_sync(struct mlxsw_sp *mlxsw_sp,
+ .rif = rif,
+ };
+
++ if (!mlxsw_sp_dev_lower_is_port(mlxsw_sp_rif_dev(rif)))
++ return 0;
++
+ neigh_for_each(&arp_tbl, mlxsw_sp_neigh_rif_made_sync_each, &rms);
+ if (rms.err)
+ goto err_arp;
+diff --git a/drivers/net/ethernet/qlogic/qede/qede_main.c b/drivers/net/ethernet/qlogic/qede/qede_main.c
+index 99df00c30b8c6c..b5d744d2586f72 100644
+--- a/drivers/net/ethernet/qlogic/qede/qede_main.c
++++ b/drivers/net/ethernet/qlogic/qede/qede_main.c
+@@ -203,7 +203,7 @@ static struct pci_driver qede_pci_driver = {
+ };
+
+ static struct qed_eth_cb_ops qede_ll_ops = {
+- {
++ .common = {
+ #ifdef CONFIG_RFS_ACCEL
+ .arfs_filter_op = qede_arfs_filter_op,
+ #endif
+diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_common.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_common.c
+index 28d24d59efb84f..d57b976b904095 100644
+--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_common.c
++++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_common.c
+@@ -1484,8 +1484,11 @@ static int qlcnic_sriov_channel_cfg_cmd(struct qlcnic_adapter *adapter, u8 cmd_o
+ }
+
+ cmd_op = (cmd.rsp.arg[0] & 0xff);
+- if (cmd.rsp.arg[0] >> 25 == 2)
+- return 2;
++ if (cmd.rsp.arg[0] >> 25 == 2) {
++ ret = 2;
++ goto out;
++ }
++
+ if (cmd_op == QLCNIC_BC_CMD_CHANNEL_INIT)
+ set_bit(QLC_BC_VF_STATE, &vf->state);
+ else
+diff --git a/drivers/net/hyperv/hyperv_net.h b/drivers/net/hyperv/hyperv_net.h
+index 234db693cefa4c..3065f25777bbac 100644
+--- a/drivers/net/hyperv/hyperv_net.h
++++ b/drivers/net/hyperv/hyperv_net.h
+@@ -158,7 +158,6 @@ struct hv_netvsc_packet {
+ u8 cp_partial; /* partial copy into send buffer */
+
+ u8 rmsg_size; /* RNDIS header and PPI size */
+- u8 rmsg_pgcnt; /* page count of RNDIS header and PPI */
+ u8 page_buf_cnt;
+
+ u16 q_idx;
+@@ -893,6 +892,18 @@ struct nvsp_message {
+ sizeof(struct nvsp_message))
+ #define NETVSC_MIN_IN_MSG_SIZE sizeof(struct vmpacket_descriptor)
+
++/* Maximum # of contiguous data ranges that can make up a transmitted packet.
++ * Typically it's the max SKB fragments plus 2 for the rndis packet and the
++ * linear portion of the SKB. But if MAX_SKB_FRAGS is large, the value may
++ * need to be limited to MAX_PAGE_BUFFER_COUNT, which is the max # of entries
++ * in a GPA direct packet sent to netvsp over VMBus.
++ */
++#if MAX_SKB_FRAGS + 2 < MAX_PAGE_BUFFER_COUNT
++#define MAX_DATA_RANGES (MAX_SKB_FRAGS + 2)
++#else
++#define MAX_DATA_RANGES MAX_PAGE_BUFFER_COUNT
++#endif
++
+ /* Estimated requestor size:
+ * out_ring_size/min_out_msg_size + in_ring_size/min_in_msg_size
+ */
+diff --git a/drivers/net/hyperv/netvsc.c b/drivers/net/hyperv/netvsc.c
+index d6f5b9ea3109d2..720104661d7f24 100644
+--- a/drivers/net/hyperv/netvsc.c
++++ b/drivers/net/hyperv/netvsc.c
+@@ -953,8 +953,7 @@ static void netvsc_copy_to_send_buf(struct netvsc_device *net_device,
+ + pend_size;
+ int i;
+ u32 padding = 0;
+- u32 page_count = packet->cp_partial ? packet->rmsg_pgcnt :
+- packet->page_buf_cnt;
++ u32 page_count = packet->cp_partial ? 1 : packet->page_buf_cnt;
+ u32 remain;
+
+ /* Add padding */
+@@ -1055,6 +1054,42 @@ static int netvsc_dma_map(struct hv_device *hv_dev,
+ return 0;
+ }
+
++/* Build an "array" of mpb entries describing the data to be transferred
++ * over VMBus. After the desc header fields, each "array" entry is variable
++ * size, and each entry starts after the end of the previous entry. The
++ * "offset" and "len" fields for each entry imply the size of the entry.
++ *
++ * The pfns are in HV_HYP_PAGE_SIZE, because all communication with Hyper-V
++ * uses that granularity, even if the system page size of the guest is larger.
++ * Each entry in the input "pb" array must describe a contiguous range of
++ * guest physical memory so that the pfns are sequential if the range crosses
++ * a page boundary. The offset field must be < HV_HYP_PAGE_SIZE.
++ */
++static inline void netvsc_build_mpb_array(struct hv_page_buffer *pb,
++ u32 page_buffer_count,
++ struct vmbus_packet_mpb_array *desc,
++ u32 *desc_size)
++{
++ struct hv_mpb_array *mpb_entry = &desc->range;
++ int i, j;
++
++ for (i = 0; i < page_buffer_count; i++) {
++ u32 offset = pb[i].offset;
++ u32 len = pb[i].len;
++
++ mpb_entry->offset = offset;
++ mpb_entry->len = len;
++
++ for (j = 0; j < HVPFN_UP(offset + len); j++)
++ mpb_entry->pfn_array[j] = pb[i].pfn + j;
++
++ mpb_entry = (struct hv_mpb_array *)&mpb_entry->pfn_array[j];
++ }
++
++ desc->rangecount = page_buffer_count;
++ *desc_size = (char *)mpb_entry - (char *)desc;
++}
++
+ static inline int netvsc_send_pkt(
+ struct hv_device *device,
+ struct hv_netvsc_packet *packet,
+@@ -1097,8 +1132,11 @@ static inline int netvsc_send_pkt(
+
+ packet->dma_range = NULL;
+ if (packet->page_buf_cnt) {
++ struct vmbus_channel_packet_page_buffer desc;
++ u32 desc_size;
++
+ if (packet->cp_partial)
+- pb += packet->rmsg_pgcnt;
++ pb++;
+
+ ret = netvsc_dma_map(ndev_ctx->device_ctx, packet, pb);
+ if (ret) {
+@@ -1106,11 +1144,12 @@ static inline int netvsc_send_pkt(
+ goto exit;
+ }
+
+- ret = vmbus_sendpacket_pagebuffer(out_channel,
+- pb, packet->page_buf_cnt,
+- &nvmsg, sizeof(nvmsg),
+- req_id);
+-
++ netvsc_build_mpb_array(pb, packet->page_buf_cnt,
++ (struct vmbus_packet_mpb_array *)&desc,
++ &desc_size);
++ ret = vmbus_sendpacket_mpb_desc(out_channel,
++ (struct vmbus_packet_mpb_array *)&desc,
++ desc_size, &nvmsg, sizeof(nvmsg), req_id);
+ if (ret)
+ netvsc_dma_unmap(ndev_ctx->device_ctx, packet);
+ } else {
+@@ -1259,7 +1298,7 @@ int netvsc_send(struct net_device *ndev,
+ packet->send_buf_index = section_index;
+
+ if (packet->cp_partial) {
+- packet->page_buf_cnt -= packet->rmsg_pgcnt;
++ packet->page_buf_cnt--;
+ packet->total_data_buflen = msd_len + packet->rmsg_size;
+ } else {
+ packet->page_buf_cnt = 0;
+diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c
+index d6c4abfc3a28b0..d9ddbf08784573 100644
+--- a/drivers/net/hyperv/netvsc_drv.c
++++ b/drivers/net/hyperv/netvsc_drv.c
+@@ -325,43 +325,10 @@ static u16 netvsc_select_queue(struct net_device *ndev, struct sk_buff *skb,
+ return txq;
+ }
+
+-static u32 fill_pg_buf(unsigned long hvpfn, u32 offset, u32 len,
+- struct hv_page_buffer *pb)
+-{
+- int j = 0;
+-
+- hvpfn += offset >> HV_HYP_PAGE_SHIFT;
+- offset = offset & ~HV_HYP_PAGE_MASK;
+-
+- while (len > 0) {
+- unsigned long bytes;
+-
+- bytes = HV_HYP_PAGE_SIZE - offset;
+- if (bytes > len)
+- bytes = len;
+- pb[j].pfn = hvpfn;
+- pb[j].offset = offset;
+- pb[j].len = bytes;
+-
+- offset += bytes;
+- len -= bytes;
+-
+- if (offset == HV_HYP_PAGE_SIZE && len) {
+- hvpfn++;
+- offset = 0;
+- j++;
+- }
+- }
+-
+- return j + 1;
+-}
+-
+ static u32 init_page_array(void *hdr, u32 len, struct sk_buff *skb,
+ struct hv_netvsc_packet *packet,
+ struct hv_page_buffer *pb)
+ {
+- u32 slots_used = 0;
+- char *data = skb->data;
+ int frags = skb_shinfo(skb)->nr_frags;
+ int i;
+
+@@ -370,28 +337,27 @@ static u32 init_page_array(void *hdr, u32 len, struct sk_buff *skb,
+ * 2. skb linear data
+ * 3. skb fragment data
+ */
+- slots_used += fill_pg_buf(virt_to_hvpfn(hdr),
+- offset_in_hvpage(hdr),
+- len,
+- &pb[slots_used]);
+
++ pb[0].offset = offset_in_hvpage(hdr);
++ pb[0].len = len;
++ pb[0].pfn = virt_to_hvpfn(hdr);
+ packet->rmsg_size = len;
+- packet->rmsg_pgcnt = slots_used;
+
+- slots_used += fill_pg_buf(virt_to_hvpfn(data),
+- offset_in_hvpage(data),
+- skb_headlen(skb),
+- &pb[slots_used]);
++ pb[1].offset = offset_in_hvpage(skb->data);
++ pb[1].len = skb_headlen(skb);
++ pb[1].pfn = virt_to_hvpfn(skb->data);
+
+ for (i = 0; i < frags; i++) {
+ skb_frag_t *frag = skb_shinfo(skb)->frags + i;
++ struct hv_page_buffer *cur_pb = &pb[i + 2];
++ u64 pfn = page_to_hvpfn(skb_frag_page(frag));
++ u32 offset = skb_frag_off(frag);
+
+- slots_used += fill_pg_buf(page_to_hvpfn(skb_frag_page(frag)),
+- skb_frag_off(frag),
+- skb_frag_size(frag),
+- &pb[slots_used]);
++ cur_pb->offset = offset_in_hvpage(offset);
++ cur_pb->len = skb_frag_size(frag);
++ cur_pb->pfn = pfn + (offset >> HV_HYP_PAGE_SHIFT);
+ }
+- return slots_used;
++ return frags + 2;
+ }
+
+ static int count_skb_frag_slots(struct sk_buff *skb)
+@@ -482,7 +448,7 @@ static int netvsc_xmit(struct sk_buff *skb, struct net_device *net, bool xdp_tx)
+ struct net_device *vf_netdev;
+ u32 rndis_msg_size;
+ u32 hash;
+- struct hv_page_buffer pb[MAX_PAGE_BUFFER_COUNT];
++ struct hv_page_buffer pb[MAX_DATA_RANGES];
+
+ /* If VF is present and up then redirect packets to it.
+ * Skip the VF if it is marked down or has no carrier.
+diff --git a/drivers/net/hyperv/rndis_filter.c b/drivers/net/hyperv/rndis_filter.c
+index c0ceeef4fcd810..9b8769a8b77a12 100644
+--- a/drivers/net/hyperv/rndis_filter.c
++++ b/drivers/net/hyperv/rndis_filter.c
+@@ -225,8 +225,7 @@ static int rndis_filter_send_request(struct rndis_device *dev,
+ struct rndis_request *req)
+ {
+ struct hv_netvsc_packet *packet;
+- struct hv_page_buffer page_buf[2];
+- struct hv_page_buffer *pb = page_buf;
++ struct hv_page_buffer pb;
+ int ret;
+
+ /* Setup the packet to send it */
+@@ -235,27 +234,14 @@ static int rndis_filter_send_request(struct rndis_device *dev,
+ packet->total_data_buflen = req->request_msg.msg_len;
+ packet->page_buf_cnt = 1;
+
+- pb[0].pfn = virt_to_phys(&req->request_msg) >>
+- HV_HYP_PAGE_SHIFT;
+- pb[0].len = req->request_msg.msg_len;
+- pb[0].offset = offset_in_hvpage(&req->request_msg);
+-
+- /* Add one page_buf when request_msg crossing page boundary */
+- if (pb[0].offset + pb[0].len > HV_HYP_PAGE_SIZE) {
+- packet->page_buf_cnt++;
+- pb[0].len = HV_HYP_PAGE_SIZE -
+- pb[0].offset;
+- pb[1].pfn = virt_to_phys((void *)&req->request_msg
+- + pb[0].len) >> HV_HYP_PAGE_SHIFT;
+- pb[1].offset = 0;
+- pb[1].len = req->request_msg.msg_len -
+- pb[0].len;
+- }
++ pb.pfn = virt_to_phys(&req->request_msg) >> HV_HYP_PAGE_SHIFT;
++ pb.len = req->request_msg.msg_len;
++ pb.offset = offset_in_hvpage(&req->request_msg);
+
+ trace_rndis_send(dev->ndev, 0, &req->request_msg);
+
+ rcu_read_lock_bh();
+- ret = netvsc_send(dev->ndev, packet, NULL, pb, NULL, false);
++ ret = netvsc_send(dev->ndev, packet, NULL, &pb, NULL, false);
+ rcu_read_unlock_bh();
+
+ return ret;
+diff --git a/drivers/net/phy/micrel.c b/drivers/net/phy/micrel.c
+index 9c0b1c229af64a..4033e948f9533b 100644
+--- a/drivers/net/phy/micrel.c
++++ b/drivers/net/phy/micrel.c
+@@ -2002,12 +2002,6 @@ static int ksz9477_config_init(struct phy_device *phydev)
+ return err;
+ }
+
+- /* According to KSZ9477 Errata DS80000754C (Module 4) all EEE modes
+- * in this switch shall be regarded as broken.
+- */
+- if (phydev->dev_flags & MICREL_NO_EEE)
+- phy_disable_eee(phydev);
+-
+ return kszphy_config_init(phydev);
+ }
+
+@@ -5680,7 +5674,6 @@ static struct phy_driver ksphy_driver[] = {
+ .handle_interrupt = kszphy_handle_interrupt,
+ .suspend = genphy_suspend,
+ .resume = ksz9477_resume,
+- .get_features = ksz9477_get_features,
+ } };
+
+ module_phy_driver(ksphy_driver);
+diff --git a/drivers/net/wireless/mediatek/mt76/dma.c b/drivers/net/wireless/mediatek/mt76/dma.c
+index 844af16ee55131..35b4ec91979e6a 100644
+--- a/drivers/net/wireless/mediatek/mt76/dma.c
++++ b/drivers/net/wireless/mediatek/mt76/dma.c
+@@ -1011,6 +1011,7 @@ void mt76_dma_cleanup(struct mt76_dev *dev)
+ int i;
+
+ mt76_worker_disable(&dev->tx_worker);
++ napi_disable(&dev->tx_napi);
+ netif_napi_del(&dev->tx_napi);
+
+ for (i = 0; i < ARRAY_SIZE(dev->phys); i++) {
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c
+index b8cd7cd3d832b0..f8d45d43f7807f 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c
+@@ -1867,14 +1867,14 @@ mt7925_mcu_sta_cmd(struct mt76_phy *phy,
+ mt7925_mcu_sta_mld_tlv(skb, info->vif, info->link_sta->sta);
+ mt7925_mcu_sta_eht_mld_tlv(skb, info->vif, info->link_sta->sta);
+ }
+-
+- mt7925_mcu_sta_hdr_trans_tlv(skb, info->vif, info->link_sta);
+ }
+
+ if (!info->enable) {
+ mt7925_mcu_sta_remove_tlv(skb);
+ mt76_connac_mcu_add_tlv(skb, STA_REC_MLD_OFF,
+ sizeof(struct tlv));
++ } else {
++ mt7925_mcu_sta_hdr_trans_tlv(skb, info->vif, info->link_sta);
+ }
+
+ return mt76_mcu_skb_send_msg(dev, skb, info->cmd, true);
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index d49b69565d04cc..00bd21b5c641e3 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -390,7 +390,7 @@ static bool nvme_dbbuf_update_and_check_event(u16 value, __le32 *dbbuf_db,
+ * as it only leads to a small amount of wasted memory for the lifetime of
+ * the I/O.
+ */
+-static int nvme_pci_npages_prp(void)
++static __always_inline int nvme_pci_npages_prp(void)
+ {
+ unsigned max_bytes = (NVME_MAX_KB_SZ * 1024) + NVME_CTRL_PAGE_SIZE;
+ unsigned nprps = DIV_ROUND_UP(max_bytes, NVME_CTRL_PAGE_SIZE);
+@@ -1205,7 +1205,9 @@ static void nvme_poll_irqdisable(struct nvme_queue *nvmeq)
+ WARN_ON_ONCE(test_bit(NVMEQ_POLLED, &nvmeq->flags));
+
+ disable_irq(pci_irq_vector(pdev, nvmeq->cq_vector));
++ spin_lock(&nvmeq->cq_poll_lock);
+ nvme_poll_cq(nvmeq, NULL);
++ spin_unlock(&nvmeq->cq_poll_lock);
+ enable_irq(pci_irq_vector(pdev, nvmeq->cq_vector));
+ }
+
+diff --git a/drivers/phy/renesas/phy-rcar-gen3-usb2.c b/drivers/phy/renesas/phy-rcar-gen3-usb2.c
+index 775f4f973a6cc2..946dc2f184e877 100644
+--- a/drivers/phy/renesas/phy-rcar-gen3-usb2.c
++++ b/drivers/phy/renesas/phy-rcar-gen3-usb2.c
+@@ -107,7 +107,6 @@ struct rcar_gen3_phy {
+ struct rcar_gen3_chan *ch;
+ u32 int_enable_bits;
+ bool initialized;
+- bool otg_initialized;
+ bool powered;
+ };
+
+@@ -320,16 +319,15 @@ static bool rcar_gen3_is_any_rphy_initialized(struct rcar_gen3_chan *ch)
+ return false;
+ }
+
+-static bool rcar_gen3_needs_init_otg(struct rcar_gen3_chan *ch)
++static bool rcar_gen3_is_any_otg_rphy_initialized(struct rcar_gen3_chan *ch)
+ {
+- int i;
+-
+- for (i = 0; i < NUM_OF_PHYS; i++) {
+- if (ch->rphys[i].otg_initialized)
+- return false;
++ for (enum rcar_gen3_phy_index i = PHY_INDEX_BOTH_HC; i <= PHY_INDEX_EHCI;
++ i++) {
++ if (ch->rphys[i].initialized)
++ return true;
+ }
+
+- return true;
++ return false;
+ }
+
+ static bool rcar_gen3_are_all_rphys_power_off(struct rcar_gen3_chan *ch)
+@@ -351,7 +349,7 @@ static ssize_t role_store(struct device *dev, struct device_attribute *attr,
+ bool is_b_device;
+ enum phy_mode cur_mode, new_mode;
+
+- if (!ch->is_otg_channel || !rcar_gen3_is_any_rphy_initialized(ch))
++ if (!ch->is_otg_channel || !rcar_gen3_is_any_otg_rphy_initialized(ch))
+ return -EIO;
+
+ if (sysfs_streq(buf, "host"))
+@@ -389,7 +387,7 @@ static ssize_t role_show(struct device *dev, struct device_attribute *attr,
+ {
+ struct rcar_gen3_chan *ch = dev_get_drvdata(dev);
+
+- if (!ch->is_otg_channel || !rcar_gen3_is_any_rphy_initialized(ch))
++ if (!ch->is_otg_channel || !rcar_gen3_is_any_otg_rphy_initialized(ch))
+ return -EIO;
+
+ return sprintf(buf, "%s\n", rcar_gen3_is_host(ch) ? "host" :
+@@ -402,6 +400,9 @@ static void rcar_gen3_init_otg(struct rcar_gen3_chan *ch)
+ void __iomem *usb2_base = ch->base;
+ u32 val;
+
++ if (!ch->is_otg_channel || rcar_gen3_is_any_otg_rphy_initialized(ch))
++ return;
++
+ /* Should not use functions of read-modify-write a register */
+ val = readl(usb2_base + USB2_LINECTRL1);
+ val = (val & ~USB2_LINECTRL1_DP_RPD) | USB2_LINECTRL1_DPRPD_EN |
+@@ -462,16 +463,16 @@ static int rcar_gen3_phy_usb2_init(struct phy *p)
+ val = readl(usb2_base + USB2_INT_ENABLE);
+ val |= USB2_INT_ENABLE_UCOM_INTEN | rphy->int_enable_bits;
+ writel(val, usb2_base + USB2_INT_ENABLE);
+- writel(USB2_SPD_RSM_TIMSET_INIT, usb2_base + USB2_SPD_RSM_TIMSET);
+- writel(USB2_OC_TIMSET_INIT, usb2_base + USB2_OC_TIMSET);
+-
+- /* Initialize otg part */
+- if (channel->is_otg_channel) {
+- if (rcar_gen3_needs_init_otg(channel))
+- rcar_gen3_init_otg(channel);
+- rphy->otg_initialized = true;
++
++ if (!rcar_gen3_is_any_rphy_initialized(channel)) {
++ writel(USB2_SPD_RSM_TIMSET_INIT, usb2_base + USB2_SPD_RSM_TIMSET);
++ writel(USB2_OC_TIMSET_INIT, usb2_base + USB2_OC_TIMSET);
+ }
+
++ /* Initialize otg part (only if we initialize a PHY with IRQs). */
++ if (rphy->int_enable_bits)
++ rcar_gen3_init_otg(channel);
++
+ rphy->initialized = true;
+
+ return 0;
+@@ -486,9 +487,6 @@ static int rcar_gen3_phy_usb2_exit(struct phy *p)
+
+ rphy->initialized = false;
+
+- if (channel->is_otg_channel)
+- rphy->otg_initialized = false;
+-
+ val = readl(usb2_base + USB2_INT_ENABLE);
+ val &= ~rphy->int_enable_bits;
+ if (!rcar_gen3_is_any_rphy_initialized(channel))
+diff --git a/drivers/phy/tegra/xusb-tegra186.c b/drivers/phy/tegra/xusb-tegra186.c
+index fae6242aa730e0..23a23f2d64e586 100644
+--- a/drivers/phy/tegra/xusb-tegra186.c
++++ b/drivers/phy/tegra/xusb-tegra186.c
+@@ -237,6 +237,8 @@
+ #define DATA0_VAL_PD BIT(1)
+ #define USE_XUSB_AO BIT(4)
+
++#define TEGRA_UTMI_PAD_MAX 4
++
+ #define TEGRA186_LANE(_name, _offset, _shift, _mask, _type) \
+ { \
+ .name = _name, \
+@@ -269,7 +271,7 @@ struct tegra186_xusb_padctl {
+
+ /* UTMI bias and tracking */
+ struct clk *usb2_trk_clk;
+- unsigned int bias_pad_enable;
++ DECLARE_BITMAP(utmi_pad_enabled, TEGRA_UTMI_PAD_MAX);
+
+ /* padctl context */
+ struct tegra186_xusb_padctl_context context;
+@@ -603,12 +605,8 @@ static void tegra186_utmi_bias_pad_power_on(struct tegra_xusb_padctl *padctl)
+ u32 value;
+ int err;
+
+- mutex_lock(&padctl->lock);
+-
+- if (priv->bias_pad_enable++ > 0) {
+- mutex_unlock(&padctl->lock);
++ if (!bitmap_empty(priv->utmi_pad_enabled, TEGRA_UTMI_PAD_MAX))
+ return;
+- }
+
+ err = clk_prepare_enable(priv->usb2_trk_clk);
+ if (err < 0)
+@@ -658,8 +656,6 @@ static void tegra186_utmi_bias_pad_power_on(struct tegra_xusb_padctl *padctl)
+ } else {
+ clk_disable_unprepare(priv->usb2_trk_clk);
+ }
+-
+- mutex_unlock(&padctl->lock);
+ }
+
+ static void tegra186_utmi_bias_pad_power_off(struct tegra_xusb_padctl *padctl)
+@@ -667,17 +663,8 @@ static void tegra186_utmi_bias_pad_power_off(struct tegra_xusb_padctl *padctl)
+ struct tegra186_xusb_padctl *priv = to_tegra186_xusb_padctl(padctl);
+ u32 value;
+
+- mutex_lock(&padctl->lock);
+-
+- if (WARN_ON(priv->bias_pad_enable == 0)) {
+- mutex_unlock(&padctl->lock);
+- return;
+- }
+-
+- if (--priv->bias_pad_enable > 0) {
+- mutex_unlock(&padctl->lock);
++ if (!bitmap_empty(priv->utmi_pad_enabled, TEGRA_UTMI_PAD_MAX))
+ return;
+- }
+
+ value = padctl_readl(padctl, XUSB_PADCTL_USB2_BIAS_PAD_CTL1);
+ value |= USB2_PD_TRK;
+@@ -690,13 +677,13 @@ static void tegra186_utmi_bias_pad_power_off(struct tegra_xusb_padctl *padctl)
+ clk_disable_unprepare(priv->usb2_trk_clk);
+ }
+
+- mutex_unlock(&padctl->lock);
+ }
+
+ static void tegra186_utmi_pad_power_on(struct phy *phy)
+ {
+ struct tegra_xusb_lane *lane = phy_get_drvdata(phy);
+ struct tegra_xusb_padctl *padctl = lane->pad->padctl;
++ struct tegra186_xusb_padctl *priv = to_tegra186_xusb_padctl(padctl);
+ struct tegra_xusb_usb2_port *port;
+ struct device *dev = padctl->dev;
+ unsigned int index = lane->index;
+@@ -705,9 +692,16 @@ static void tegra186_utmi_pad_power_on(struct phy *phy)
+ if (!phy)
+ return;
+
++ mutex_lock(&padctl->lock);
++ if (test_bit(index, priv->utmi_pad_enabled)) {
++ mutex_unlock(&padctl->lock);
++ return;
++ }
++
+ port = tegra_xusb_find_usb2_port(padctl, index);
+ if (!port) {
+ dev_err(dev, "no port found for USB2 lane %u\n", index);
++ mutex_unlock(&padctl->lock);
+ return;
+ }
+
+@@ -724,18 +718,28 @@ static void tegra186_utmi_pad_power_on(struct phy *phy)
+ value = padctl_readl(padctl, XUSB_PADCTL_USB2_OTG_PADX_CTL1(index));
+ value &= ~USB2_OTG_PD_DR;
+ padctl_writel(padctl, value, XUSB_PADCTL_USB2_OTG_PADX_CTL1(index));
++
++ set_bit(index, priv->utmi_pad_enabled);
++ mutex_unlock(&padctl->lock);
+ }
+
+ static void tegra186_utmi_pad_power_down(struct phy *phy)
+ {
+ struct tegra_xusb_lane *lane = phy_get_drvdata(phy);
+ struct tegra_xusb_padctl *padctl = lane->pad->padctl;
++ struct tegra186_xusb_padctl *priv = to_tegra186_xusb_padctl(padctl);
+ unsigned int index = lane->index;
+ u32 value;
+
+ if (!phy)
+ return;
+
++ mutex_lock(&padctl->lock);
++ if (!test_bit(index, priv->utmi_pad_enabled)) {
++ mutex_unlock(&padctl->lock);
++ return;
++ }
++
+ dev_dbg(padctl->dev, "power down UTMI pad %u\n", index);
+
+ value = padctl_readl(padctl, XUSB_PADCTL_USB2_OTG_PADX_CTL0(index));
+@@ -748,7 +752,11 @@ static void tegra186_utmi_pad_power_down(struct phy *phy)
+
+ udelay(2);
+
++ clear_bit(index, priv->utmi_pad_enabled);
++
+ tegra186_utmi_bias_pad_power_off(padctl);
++
++ mutex_unlock(&padctl->lock);
+ }
+
+ static int tegra186_xusb_padctl_vbus_override(struct tegra_xusb_padctl *padctl,
+diff --git a/drivers/phy/tegra/xusb.c b/drivers/phy/tegra/xusb.c
+index 79d4814d758d5e..c89df95aa6ca98 100644
+--- a/drivers/phy/tegra/xusb.c
++++ b/drivers/phy/tegra/xusb.c
+@@ -548,16 +548,16 @@ static int tegra_xusb_port_init(struct tegra_xusb_port *port,
+
+ err = dev_set_name(&port->dev, "%s-%u", name, index);
+ if (err < 0)
+- goto unregister;
++ goto put_device;
+
+ err = device_add(&port->dev);
+ if (err < 0)
+- goto unregister;
++ goto put_device;
+
+ return 0;
+
+-unregister:
+- device_unregister(&port->dev);
++put_device:
++ put_device(&port->dev);
+ return err;
+ }
+
+diff --git a/drivers/platform/x86/amd/hsmp/Kconfig b/drivers/platform/x86/amd/hsmp/Kconfig
+index 7d10d4462a453d..d6f7a62d55b5e4 100644
+--- a/drivers/platform/x86/amd/hsmp/Kconfig
++++ b/drivers/platform/x86/amd/hsmp/Kconfig
+@@ -7,7 +7,7 @@ config AMD_HSMP
+ tristate
+
+ menu "AMD HSMP Driver"
+- depends on AMD_NB || COMPILE_TEST
++ depends on AMD_NODE || COMPILE_TEST
+
+ config AMD_HSMP_ACPI
+ tristate "AMD HSMP ACPI device driver"
+diff --git a/drivers/platform/x86/amd/hsmp/acpi.c b/drivers/platform/x86/amd/hsmp/acpi.c
+index 444b43be35a256..eaae044e4f824c 100644
+--- a/drivers/platform/x86/amd/hsmp/acpi.c
++++ b/drivers/platform/x86/amd/hsmp/acpi.c
+@@ -10,7 +10,6 @@
+ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+ #include <asm/amd_hsmp.h>
+-#include <asm/amd_nb.h>
+
+ #include <linux/acpi.h>
+ #include <linux/device.h>
+@@ -24,11 +23,12 @@
+
+ #include <uapi/asm-generic/errno-base.h>
+
++#include <asm/amd_node.h>
++
+ #include "hsmp.h"
+
+-#define DRIVER_NAME "amd_hsmp"
++#define DRIVER_NAME "hsmp_acpi"
+ #define DRIVER_VERSION "2.3"
+-#define ACPI_HSMP_DEVICE_HID "AMDI0097"
+
+ /* These are the strings specified in ACPI table */
+ #define MSG_IDOFF_STR "MsgIdOffset"
+@@ -321,8 +321,8 @@ static int hsmp_acpi_probe(struct platform_device *pdev)
+ return -ENOMEM;
+
+ if (!hsmp_pdev->is_probed) {
+- hsmp_pdev->num_sockets = amd_nb_num();
+- if (hsmp_pdev->num_sockets == 0 || hsmp_pdev->num_sockets > MAX_AMD_SOCKETS)
++ hsmp_pdev->num_sockets = amd_num_nodes();
++ if (hsmp_pdev->num_sockets == 0 || hsmp_pdev->num_sockets > MAX_AMD_NUM_NODES)
+ return -ENODEV;
+
+ hsmp_pdev->sock = devm_kcalloc(&pdev->dev, hsmp_pdev->num_sockets,
+diff --git a/drivers/platform/x86/amd/hsmp/hsmp.c b/drivers/platform/x86/amd/hsmp/hsmp.c
+index 03164e30b3a502..a3ac09a90de456 100644
+--- a/drivers/platform/x86/amd/hsmp/hsmp.c
++++ b/drivers/platform/x86/amd/hsmp/hsmp.c
+@@ -8,7 +8,6 @@
+ */
+
+ #include <asm/amd_hsmp.h>
+-#include <asm/amd_nb.h>
+
+ #include <linux/acpi.h>
+ #include <linux/delay.h>
+diff --git a/drivers/platform/x86/amd/hsmp/hsmp.h b/drivers/platform/x86/amd/hsmp/hsmp.h
+index e852f0a947e4f8..d58d4f0c20d552 100644
+--- a/drivers/platform/x86/amd/hsmp/hsmp.h
++++ b/drivers/platform/x86/amd/hsmp/hsmp.h
+@@ -21,10 +21,9 @@
+
+ #define HSMP_ATTR_GRP_NAME_SIZE 10
+
+-#define MAX_AMD_SOCKETS 8
+-
+ #define HSMP_CDEV_NAME "hsmp_cdev"
+ #define HSMP_DEVNODE_NAME "hsmp"
++#define ACPI_HSMP_DEVICE_HID "AMDI0097"
+
+ struct hsmp_mbaddr_info {
+ u32 base_addr;
+@@ -41,7 +40,6 @@ struct hsmp_socket {
+ void __iomem *virt_base_addr;
+ struct semaphore hsmp_sem;
+ char name[HSMP_ATTR_GRP_NAME_SIZE];
+- struct pci_dev *root;
+ struct device *dev;
+ u16 sock_ind;
+ int (*amd_hsmp_rdwr)(struct hsmp_socket *sock, u32 off, u32 *val, bool rw);
+diff --git a/drivers/platform/x86/amd/hsmp/plat.c b/drivers/platform/x86/amd/hsmp/plat.c
+index 02ca85762b6866..81931e808bbc81 100644
+--- a/drivers/platform/x86/amd/hsmp/plat.c
++++ b/drivers/platform/x86/amd/hsmp/plat.c
+@@ -10,14 +10,17 @@
+ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+ #include <asm/amd_hsmp.h>
+-#include <asm/amd_nb.h>
+
++#include <linux/acpi.h>
++#include <linux/build_bug.h>
+ #include <linux/device.h>
+ #include <linux/module.h>
+ #include <linux/pci.h>
+ #include <linux/platform_device.h>
+ #include <linux/sysfs.h>
+
++#include <asm/amd_node.h>
++
+ #include "hsmp.h"
+
+ #define DRIVER_NAME "amd_hsmp"
+@@ -34,28 +37,12 @@
+ #define SMN_HSMP_MSG_RESP 0x0010980
+ #define SMN_HSMP_MSG_DATA 0x00109E0
+
+-#define HSMP_INDEX_REG 0xc4
+-#define HSMP_DATA_REG 0xc8
+-
+ static struct hsmp_plat_device *hsmp_pdev;
+
+ static int amd_hsmp_pci_rdwr(struct hsmp_socket *sock, u32 offset,
+ u32 *value, bool write)
+ {
+- int ret;
+-
+- if (!sock->root)
+- return -ENODEV;
+-
+- ret = pci_write_config_dword(sock->root, HSMP_INDEX_REG,
+- sock->mbinfo.base_addr + offset);
+- if (ret)
+- return ret;
+-
+- ret = (write ? pci_write_config_dword(sock->root, HSMP_DATA_REG, *value)
+- : pci_read_config_dword(sock->root, HSMP_DATA_REG, value));
+-
+- return ret;
++ return amd_smn_hsmp_rdwr(sock->sock_ind, sock->mbinfo.base_addr + offset, value, write);
+ }
+
+ static ssize_t hsmp_metric_tbl_plat_read(struct file *filp, struct kobject *kobj,
+@@ -95,7 +82,12 @@ static umode_t hsmp_is_sock_attr_visible(struct kobject *kobj,
+ * Static array of 8 + 1(for NULL) elements is created below
+ * to create sysfs groups for sockets.
+ * is_bin_visible function is used to show / hide the necessary groups.
++ *
++ * Validate the maximum number against MAX_AMD_NUM_NODES. If this changes,
++ * then the attributes and groups below must be adjusted.
+ */
++static_assert(MAX_AMD_NUM_NODES == 8);
++
+ #define HSMP_BIN_ATTR(index, _list) \
+ static const struct bin_attribute attr##index = { \
+ .attr = { .name = HSMP_METRICS_TABLE_NAME, .mode = 0444}, \
+@@ -159,10 +151,7 @@ static int init_platform_device(struct device *dev)
+ int ret, i;
+
+ for (i = 0; i < hsmp_pdev->num_sockets; i++) {
+- if (!node_to_amd_nb(i))
+- return -ENODEV;
+ sock = &hsmp_pdev->sock[i];
+- sock->root = node_to_amd_nb(i)->root;
+ sock->sock_ind = i;
+ sock->dev = dev;
+ sock->mbinfo.base_addr = SMN_HSMP_BASE;
+@@ -278,7 +267,7 @@ static bool legacy_hsmp_support(void)
+ }
+ case 0x1A:
+ switch (boot_cpu_data.x86_model) {
+- case 0x00 ... 0x1F:
++ case 0x00 ... 0x0F:
+ return true;
+ default:
+ return false;
+@@ -300,16 +289,19 @@ static int __init hsmp_plt_init(void)
+ return ret;
+ }
+
++ if (acpi_dev_present(ACPI_HSMP_DEVICE_HID, NULL, -1))
++ return -ENODEV;
++
+ hsmp_pdev = get_hsmp_pdev();
+ if (!hsmp_pdev)
+ return -ENOMEM;
+
+ /*
+- * amd_nb_num() returns number of SMN/DF interfaces present in the system
++ * amd_num_nodes() returns number of SMN/DF interfaces present in the system
+ * if we have N SMN/DF interfaces that ideally means N sockets
+ */
+- hsmp_pdev->num_sockets = amd_nb_num();
+- if (hsmp_pdev->num_sockets == 0 || hsmp_pdev->num_sockets > MAX_AMD_SOCKETS)
++ hsmp_pdev->num_sockets = amd_num_nodes();
++ if (hsmp_pdev->num_sockets == 0 || hsmp_pdev->num_sockets > MAX_AMD_NUM_NODES)
+ return ret;
+
+ ret = platform_driver_register(&amd_hsmp_driver);
+diff --git a/drivers/platform/x86/amd/pmc/pmc-quirks.c b/drivers/platform/x86/amd/pmc/pmc-quirks.c
+index b4f49720c87f62..2e3f6fc67c568d 100644
+--- a/drivers/platform/x86/amd/pmc/pmc-quirks.c
++++ b/drivers/platform/x86/amd/pmc/pmc-quirks.c
+@@ -217,6 +217,13 @@ static const struct dmi_system_id fwbug_list[] = {
+ DMI_MATCH(DMI_BIOS_VERSION, "03.05"),
+ }
+ },
++ {
++ .ident = "MECHREVO Wujie 14X (GX4HRXL)",
++ .driver_data = &quirk_spurious_8042,
++ .matches = {
++ DMI_MATCH(DMI_BOARD_NAME, "WUJIE14-GX4HRXL"),
++ }
++ },
+ {}
+ };
+
+diff --git a/drivers/platform/x86/amd/pmf/tee-if.c b/drivers/platform/x86/amd/pmf/tee-if.c
+index 14b99d8b63d2fc..d3bd12ad036ae0 100644
+--- a/drivers/platform/x86/amd/pmf/tee-if.c
++++ b/drivers/platform/x86/amd/pmf/tee-if.c
+@@ -334,6 +334,11 @@ static int amd_pmf_start_policy_engine(struct amd_pmf_dev *dev)
+ return 0;
+ }
+
++static inline bool amd_pmf_pb_valid(struct amd_pmf_dev *dev)
++{
++ return memchr_inv(dev->policy_buf, 0xff, dev->policy_sz);
++}
++
+ #ifdef CONFIG_AMD_PMF_DEBUG
+ static void amd_pmf_hex_dump_pb(struct amd_pmf_dev *dev)
+ {
+@@ -361,12 +366,22 @@ static ssize_t amd_pmf_get_pb_data(struct file *filp, const char __user *buf,
+ dev->policy_buf = new_policy_buf;
+ dev->policy_sz = length;
+
++ if (!amd_pmf_pb_valid(dev)) {
++ ret = -EINVAL;
++ goto cleanup;
++ }
++
+ amd_pmf_hex_dump_pb(dev);
+ ret = amd_pmf_start_policy_engine(dev);
+ if (ret < 0)
+- return ret;
++ goto cleanup;
+
+ return length;
++
++cleanup:
++ kfree(dev->policy_buf);
++ dev->policy_buf = NULL;
++ return ret;
+ }
+
+ static const struct file_operations pb_fops = {
+@@ -528,6 +543,12 @@ int amd_pmf_init_smart_pc(struct amd_pmf_dev *dev)
+
+ memcpy_fromio(dev->policy_buf, dev->policy_base, dev->policy_sz);
+
++ if (!amd_pmf_pb_valid(dev)) {
++ dev_info(dev->dev, "No Smart PC policy present\n");
++ ret = -EINVAL;
++ goto err_free_policy;
++ }
++
+ amd_pmf_hex_dump_pb(dev);
+
+ dev->prev_data = kzalloc(sizeof(*dev->prev_data), GFP_KERNEL);
+diff --git a/drivers/platform/x86/asus-wmi.c b/drivers/platform/x86/asus-wmi.c
+index 38ef778e8c19b9..f66d152e265da5 100644
+--- a/drivers/platform/x86/asus-wmi.c
++++ b/drivers/platform/x86/asus-wmi.c
+@@ -4777,7 +4777,8 @@ static int asus_wmi_add(struct platform_device *pdev)
+ goto fail_leds;
+
+ asus_wmi_get_devstate(asus, ASUS_WMI_DEVID_WLAN, &result);
+- if (result & (ASUS_WMI_DSTS_PRESENCE_BIT | ASUS_WMI_DSTS_USER_BIT))
++ if ((result & (ASUS_WMI_DSTS_PRESENCE_BIT | ASUS_WMI_DSTS_USER_BIT)) ==
++ (ASUS_WMI_DSTS_PRESENCE_BIT | ASUS_WMI_DSTS_USER_BIT))
+ asus->driver->wlan_ctrl_by_user = 1;
+
+ if (!(asus->driver->wlan_ctrl_by_user && ashs_present())) {
+diff --git a/drivers/regulator/max20086-regulator.c b/drivers/regulator/max20086-regulator.c
+index 59eb23d467ec05..198d45f8e88493 100644
+--- a/drivers/regulator/max20086-regulator.c
++++ b/drivers/regulator/max20086-regulator.c
+@@ -132,7 +132,7 @@ static int max20086_regulators_register(struct max20086 *chip)
+
+ static int max20086_parse_regulators_dt(struct max20086 *chip, bool *boot_on)
+ {
+- struct of_regulator_match matches[MAX20086_MAX_REGULATORS] = { };
++ struct of_regulator_match *matches;
+ struct device_node *node;
+ unsigned int i;
+ int ret;
+@@ -143,6 +143,11 @@ static int max20086_parse_regulators_dt(struct max20086 *chip, bool *boot_on)
+ return -ENODEV;
+ }
+
++ matches = devm_kcalloc(chip->dev, chip->info->num_outputs,
++ sizeof(*matches), GFP_KERNEL);
++ if (!matches)
++ return -ENOMEM;
++
+ for (i = 0; i < chip->info->num_outputs; ++i)
+ matches[i].name = max20086_output_names[i];
+
+diff --git a/drivers/scsi/sd_zbc.c b/drivers/scsi/sd_zbc.c
+index 7a447ff600d276..a8db66428f80d4 100644
+--- a/drivers/scsi/sd_zbc.c
++++ b/drivers/scsi/sd_zbc.c
+@@ -169,6 +169,7 @@ static void *sd_zbc_alloc_report_buffer(struct scsi_disk *sdkp,
+ unsigned int nr_zones, size_t *buflen)
+ {
+ struct request_queue *q = sdkp->disk->queue;
++ unsigned int max_segments;
+ size_t bufsize;
+ void *buf;
+
+@@ -180,12 +181,15 @@ static void *sd_zbc_alloc_report_buffer(struct scsi_disk *sdkp,
+ * Furthermore, since the report zone command cannot be split, make
+ * sure that the allocated buffer can always be mapped by limiting the
+ * number of pages allocated to the HBA max segments limit.
++ * Since max segments can be larger than the max inline bio vectors,
++ * further limit the allocated buffer to BIO_MAX_INLINE_VECS.
+ */
+ nr_zones = min(nr_zones, sdkp->zone_info.nr_zones);
+ bufsize = roundup((nr_zones + 1) * 64, SECTOR_SIZE);
+ bufsize = min_t(size_t, bufsize,
+ queue_max_hw_sectors(q) << SECTOR_SHIFT);
+- bufsize = min_t(size_t, bufsize, queue_max_segments(q) << PAGE_SHIFT);
++ max_segments = min(BIO_MAX_INLINE_VECS, queue_max_segments(q));
++ bufsize = min_t(size_t, bufsize, max_segments << PAGE_SHIFT);
+
+ while (bufsize >= SECTOR_SIZE) {
+ buf = kvzalloc(bufsize, GFP_KERNEL | __GFP_NORETRY);
+diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
+index a8614e54544e59..909b02460209cc 100644
+--- a/drivers/scsi/storvsc_drv.c
++++ b/drivers/scsi/storvsc_drv.c
+@@ -1819,6 +1819,7 @@ static int storvsc_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *scmnd)
+ return SCSI_MLQUEUE_DEVICE_BUSY;
+ }
+
++ payload->rangecount = 1;
+ payload->range.len = length;
+ payload->range.offset = offset_in_hvpg;
+
+diff --git a/drivers/spi/spi-loopback-test.c b/drivers/spi/spi-loopback-test.c
+index 31a878d9458d95..7740f94847a883 100644
+--- a/drivers/spi/spi-loopback-test.c
++++ b/drivers/spi/spi-loopback-test.c
+@@ -420,7 +420,7 @@ MODULE_LICENSE("GPL");
+ static void spi_test_print_hex_dump(char *pre, const void *ptr, size_t len)
+ {
+ /* limit the hex_dump */
+- if (len < 1024) {
++ if (len <= 1024) {
+ print_hex_dump(KERN_INFO, pre,
+ DUMP_PREFIX_OFFSET, 16, 1,
+ ptr, len, 0);
+diff --git a/drivers/spi/spi-tegra114.c b/drivers/spi/spi-tegra114.c
+index 2a8bb798e95b95..795a8482c2c700 100644
+--- a/drivers/spi/spi-tegra114.c
++++ b/drivers/spi/spi-tegra114.c
+@@ -728,9 +728,9 @@ static int tegra_spi_set_hw_cs_timing(struct spi_device *spi)
+ u32 inactive_cycles;
+ u8 cs_state;
+
+- if ((setup->unit && setup->unit != SPI_DELAY_UNIT_SCK) ||
+- (hold->unit && hold->unit != SPI_DELAY_UNIT_SCK) ||
+- (inactive->unit && inactive->unit != SPI_DELAY_UNIT_SCK)) {
++ if ((setup->value && setup->unit != SPI_DELAY_UNIT_SCK) ||
++ (hold->value && hold->unit != SPI_DELAY_UNIT_SCK) ||
++ (inactive->value && inactive->unit != SPI_DELAY_UNIT_SCK)) {
+ dev_err(&spi->dev,
+ "Invalid delay unit %d, should be SPI_DELAY_UNIT_SCK\n",
+ SPI_DELAY_UNIT_SCK);
+diff --git a/drivers/usb/gadget/function/f_midi2.c b/drivers/usb/gadget/function/f_midi2.c
+index 12e866fb311d63..0a800ba53816a8 100644
+--- a/drivers/usb/gadget/function/f_midi2.c
++++ b/drivers/usb/gadget/function/f_midi2.c
+@@ -475,7 +475,7 @@ static void reply_ump_stream_ep_info(struct f_midi2_ep *ep)
+ /* reply a UMP EP device info */
+ static void reply_ump_stream_ep_device(struct f_midi2_ep *ep)
+ {
+- struct snd_ump_stream_msg_devince_info rep = {
++ struct snd_ump_stream_msg_device_info rep = {
+ .type = UMP_MSG_TYPE_STREAM,
+ .status = UMP_STREAM_MSG_STATUS_DEVICE_INFO,
+ .manufacture_id = ep->info.manufacturer,
+diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
+index 8054f44d39cf7c..855ef795610627 100644
+--- a/fs/binfmt_elf.c
++++ b/fs/binfmt_elf.c
+@@ -831,6 +831,7 @@ static int load_elf_binary(struct linux_binprm *bprm)
+ struct elf_phdr *elf_ppnt, *elf_phdata, *interp_elf_phdata = NULL;
+ struct elf_phdr *elf_property_phdata = NULL;
+ unsigned long elf_brk;
++ bool brk_moved = false;
+ int retval, i;
+ unsigned long elf_entry;
+ unsigned long e_entry;
+@@ -1098,15 +1099,19 @@ static int load_elf_binary(struct linux_binprm *bprm)
+ /* Calculate any requested alignment. */
+ alignment = maximum_alignment(elf_phdata, elf_ex->e_phnum);
+
+- /*
+- * There are effectively two types of ET_DYN
+- * binaries: programs (i.e. PIE: ET_DYN with PT_INTERP)
+- * and loaders (ET_DYN without PT_INTERP, since they
+- * _are_ the ELF interpreter). The loaders must
+- * be loaded away from programs since the program
+- * may otherwise collide with the loader (especially
+- * for ET_EXEC which does not have a randomized
+- * position). For example to handle invocations of
++ /**
++ * DOC: PIE handling
++ *
++ * There are effectively two types of ET_DYN ELF
++ * binaries: programs (i.e. PIE: ET_DYN with
++ * PT_INTERP) and loaders (i.e. static PIE: ET_DYN
++ * without PT_INTERP, usually the ELF interpreter
++ * itself). Loaders must be loaded away from programs
++ * since the program may otherwise collide with the
++ * loader (especially for ET_EXEC which does not have
++ * a randomized position).
++ *
++ * For example, to handle invocations of
+ * "./ld.so someprog" to test out a new version of
+ * the loader, the subsequent program that the
+ * loader loads must avoid the loader itself, so
+@@ -1119,6 +1124,9 @@ static int load_elf_binary(struct linux_binprm *bprm)
+ * ELF_ET_DYN_BASE and loaders are loaded into the
+ * independently randomized mmap region (0 load_bias
+ * without MAP_FIXED nor MAP_FIXED_NOREPLACE).
++ *
++ * See below for "brk" handling details, which is
++ * also affected by program vs loader and ASLR.
+ */
+ if (interpreter) {
+ /* On ET_DYN with PT_INTERP, we do the ASLR. */
+@@ -1235,8 +1243,6 @@ static int load_elf_binary(struct linux_binprm *bprm)
+ start_data += load_bias;
+ end_data += load_bias;
+
+- current->mm->start_brk = current->mm->brk = ELF_PAGEALIGN(elf_brk);
+-
+ if (interpreter) {
+ elf_entry = load_elf_interp(interp_elf_ex,
+ interpreter,
+@@ -1292,27 +1298,44 @@ static int load_elf_binary(struct linux_binprm *bprm)
+ mm->end_data = end_data;
+ mm->start_stack = bprm->p;
+
+- if ((current->flags & PF_RANDOMIZE) && (snapshot_randomize_va_space > 1)) {
++ /**
++ * DOC: "brk" handling
++ *
++ * For architectures with ELF randomization, when executing a
++ * loader directly (i.e. static PIE: ET_DYN without PT_INTERP),
++ * move the brk area out of the mmap region and into the unused
++ * ELF_ET_DYN_BASE region. Since "brk" grows up it may collide
++ * early with the stack growing down or other regions being put
++ * into the mmap region by the kernel (e.g. vdso).
++ *
++ * In the CONFIG_COMPAT_BRK case, though, everything is turned
++ * off because we're not allowed to move the brk at all.
++ */
++ if (!IS_ENABLED(CONFIG_COMPAT_BRK) &&
++ IS_ENABLED(CONFIG_ARCH_HAS_ELF_RANDOMIZE) &&
++ elf_ex->e_type == ET_DYN && !interpreter) {
++ elf_brk = ELF_ET_DYN_BASE;
++ /* This counts as moving the brk, so let brk(2) know. */
++ brk_moved = true;
++ }
++ mm->start_brk = mm->brk = ELF_PAGEALIGN(elf_brk);
++
++ if ((current->flags & PF_RANDOMIZE) && snapshot_randomize_va_space > 1) {
+ /*
+- * For architectures with ELF randomization, when executing
+- * a loader directly (i.e. no interpreter listed in ELF
+- * headers), move the brk area out of the mmap region
+- * (since it grows up, and may collide early with the stack
+- * growing down), and into the unused ELF_ET_DYN_BASE region.
++ * If we didn't move the brk to ELF_ET_DYN_BASE (above),
++ * leave a gap between .bss and brk.
+ */
+- if (IS_ENABLED(CONFIG_ARCH_HAS_ELF_RANDOMIZE) &&
+- elf_ex->e_type == ET_DYN && !interpreter) {
+- mm->brk = mm->start_brk = ELF_ET_DYN_BASE;
+- } else {
+- /* Otherwise leave a gap between .bss and brk. */
++ if (!brk_moved)
+ mm->brk = mm->start_brk = mm->brk + PAGE_SIZE;
+- }
+
+ mm->brk = mm->start_brk = arch_randomize_brk(mm);
++ brk_moved = true;
++ }
++
+ #ifdef compat_brk_randomized
++ if (brk_moved)
+ current->brk_randomized = 1;
+ #endif
+- }
+
+ if (current->personality & MMAP_PAGE_ZERO) {
+ /* Why this, you ask??? Well SVr4 maps page 0 as read-only,
+diff --git a/fs/btrfs/discard.c b/fs/btrfs/discard.c
+index e815d165cccc22..e9cdc1759dada8 100644
+--- a/fs/btrfs/discard.c
++++ b/fs/btrfs/discard.c
+@@ -94,8 +94,6 @@ static void __add_to_discard_list(struct btrfs_discard_ctl *discard_ctl,
+ struct btrfs_block_group *block_group)
+ {
+ lockdep_assert_held(&discard_ctl->lock);
+- if (!btrfs_run_discard_work(discard_ctl))
+- return;
+
+ if (list_empty(&block_group->discard_list) ||
+ block_group->discard_index == BTRFS_DISCARD_INDEX_UNUSED) {
+@@ -118,6 +116,9 @@ static void add_to_discard_list(struct btrfs_discard_ctl *discard_ctl,
+ if (!btrfs_is_block_group_data_only(block_group))
+ return;
+
++ if (!btrfs_run_discard_work(discard_ctl))
++ return;
++
+ spin_lock(&discard_ctl->lock);
+ __add_to_discard_list(discard_ctl, block_group);
+ spin_unlock(&discard_ctl->lock);
+@@ -250,6 +251,18 @@ static struct btrfs_block_group *peek_discard_list(
+ block_group->used != 0) {
+ if (btrfs_is_block_group_data_only(block_group)) {
+ __add_to_discard_list(discard_ctl, block_group);
++ /*
++ * The block group must have been moved to other
++ * discard list even if discard was disabled in
++ * the meantime or a transaction abort happened,
++ * otherwise we can end up in an infinite loop,
++ * always jumping into the 'again' label and
++ * keep getting this block group over and over
++ * in case there are no other block groups in
++ * the discard lists.
++ */
++ ASSERT(block_group->discard_index !=
++ BTRFS_DISCARD_INDEX_UNUSED);
+ } else {
+ list_del_init(&block_group->discard_list);
+ btrfs_put_block_group(block_group);
+diff --git a/fs/btrfs/fs.h b/fs/btrfs/fs.h
+index b572d6b9730b22..f46fba127caaa1 100644
+--- a/fs/btrfs/fs.h
++++ b/fs/btrfs/fs.h
+@@ -285,6 +285,7 @@ enum {
+ #define BTRFS_FEATURE_INCOMPAT_SAFE_CLEAR 0ULL
+
+ #define BTRFS_DEFAULT_COMMIT_INTERVAL (30)
++#define BTRFS_WARNING_COMMIT_INTERVAL (300)
+ #define BTRFS_DEFAULT_MAX_INLINE (2048)
+
+ struct btrfs_dev_replace {
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 3be6f8e8e157da..a06fca7934d558 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -1116,6 +1116,7 @@ static void submit_one_async_extent(struct async_chunk *async_chunk,
+ struct extent_state *cached = NULL;
+ struct extent_map *em;
+ int ret = 0;
++ bool free_pages = false;
+ u64 start = async_extent->start;
+ u64 end = async_extent->start + async_extent->ram_size - 1;
+
+@@ -1136,7 +1137,10 @@ static void submit_one_async_extent(struct async_chunk *async_chunk,
+ }
+
+ if (async_extent->compress_type == BTRFS_COMPRESS_NONE) {
++ ASSERT(!async_extent->folios);
++ ASSERT(async_extent->nr_folios == 0);
+ submit_uncompressed_range(inode, async_extent, locked_folio);
++ free_pages = true;
+ goto done;
+ }
+
+@@ -1152,6 +1156,7 @@ static void submit_one_async_extent(struct async_chunk *async_chunk,
+ * fall back to uncompressed.
+ */
+ submit_uncompressed_range(inode, async_extent, locked_folio);
++ free_pages = true;
+ goto done;
+ }
+
+@@ -1193,6 +1198,8 @@ static void submit_one_async_extent(struct async_chunk *async_chunk,
+ done:
+ if (async_chunk->blkcg_css)
+ kthread_associate_blkcg(NULL);
++ if (free_pages)
++ free_async_extent_pages(async_extent);
+ kfree(async_extent);
+ return;
+
+diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
+index a5f29ff3fbc2e7..8098f5cc9b7b02 100644
+--- a/fs/btrfs/super.c
++++ b/fs/btrfs/super.c
+@@ -569,6 +569,10 @@ static int btrfs_parse_param(struct fs_context *fc, struct fs_parameter *param)
+ break;
+ case Opt_commit_interval:
+ ctx->commit_interval = result.uint_32;
++ if (ctx->commit_interval > BTRFS_WARNING_COMMIT_INTERVAL) {
++ btrfs_warn(NULL, "excessive commit interval %u, use with care",
++ ctx->commit_interval);
++ }
+ if (ctx->commit_interval == 0)
+ ctx->commit_interval = BTRFS_DEFAULT_COMMIT_INTERVAL;
+ break;
+diff --git a/fs/eventpoll.c b/fs/eventpoll.c
+index c01234bbac4989..9a6b924958e2c5 100644
+--- a/fs/eventpoll.c
++++ b/fs/eventpoll.c
+@@ -2111,9 +2111,10 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
+
+ write_unlock_irq(&ep->lock);
+
+- if (!eavail && ep_schedule_timeout(to))
+- timed_out = !schedule_hrtimeout_range(to, slack,
+- HRTIMER_MODE_ABS);
++ if (!eavail)
++ timed_out = !ep_schedule_timeout(to) ||
++ !schedule_hrtimeout_range(to, slack,
++ HRTIMER_MODE_ABS);
+ __set_current_state(TASK_RUNNING);
+
+ /*
+diff --git a/fs/nfs/localio.c b/fs/nfs/localio.c
+index 5c21caeae075c7..4ec952f9f47dde 100644
+--- a/fs/nfs/localio.c
++++ b/fs/nfs/localio.c
+@@ -278,6 +278,7 @@ nfs_local_open_fh(struct nfs_client *clp, const struct cred *cred,
+ new = __nfs_local_open_fh(clp, cred, fh, nfl, mode);
+ if (IS_ERR(new))
+ return NULL;
++ rcu_read_lock();
+ /* try to swap in the pointer */
+ spin_lock(&clp->cl_uuid.lock);
+ nf = rcu_dereference_protected(*pnf, 1);
+@@ -287,7 +288,6 @@ nfs_local_open_fh(struct nfs_client *clp, const struct cred *cred,
+ rcu_assign_pointer(*pnf, nf);
+ }
+ spin_unlock(&clp->cl_uuid.lock);
+- rcu_read_lock();
+ }
+ nf = nfs_local_file_get(nf);
+ rcu_read_unlock();
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 6e95db6c17e928..810cfd9b7e533b 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -7049,10 +7049,18 @@ static struct nfs4_unlockdata *nfs4_alloc_unlockdata(struct file_lock *fl,
+ struct nfs4_unlockdata *p;
+ struct nfs4_state *state = lsp->ls_state;
+ struct inode *inode = state->inode;
++ struct nfs_lock_context *l_ctx;
+
+ p = kzalloc(sizeof(*p), GFP_KERNEL);
+ if (p == NULL)
+ return NULL;
++ l_ctx = nfs_get_lock_context(ctx);
++ if (!IS_ERR(l_ctx)) {
++ p->l_ctx = l_ctx;
++ } else {
++ kfree(p);
++ return NULL;
++ }
+ p->arg.fh = NFS_FH(inode);
+ p->arg.fl = &p->fl;
+ p->arg.seqid = seqid;
+@@ -7060,7 +7068,6 @@ static struct nfs4_unlockdata *nfs4_alloc_unlockdata(struct file_lock *fl,
+ p->lsp = lsp;
+ /* Ensure we don't close file until we're done freeing locks! */
+ p->ctx = get_nfs_open_context(ctx);
+- p->l_ctx = nfs_get_lock_context(ctx);
+ locks_init_lock(&p->fl);
+ locks_copy_lock(&p->fl, fl);
+ p->server = NFS_SERVER(inode);
+diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
+index 5f582713bf05eb..683e09be25adf3 100644
+--- a/fs/nfs/pnfs.c
++++ b/fs/nfs/pnfs.c
+@@ -745,6 +745,14 @@ pnfs_mark_matching_lsegs_invalid(struct pnfs_layout_hdr *lo,
+ return remaining;
+ }
+
++static void pnfs_reset_return_info(struct pnfs_layout_hdr *lo)
++{
++ struct pnfs_layout_segment *lseg;
++
++ list_for_each_entry(lseg, &lo->plh_return_segs, pls_list)
++ pnfs_set_plh_return_info(lo, lseg->pls_range.iomode, 0);
++}
++
+ static void
+ pnfs_free_returned_lsegs(struct pnfs_layout_hdr *lo,
+ struct list_head *free_me,
+@@ -1292,6 +1300,7 @@ void pnfs_layoutreturn_free_lsegs(struct pnfs_layout_hdr *lo,
+ pnfs_mark_matching_lsegs_invalid(lo, &freeme, range, seq);
+ pnfs_free_returned_lsegs(lo, &freeme, range, seq);
+ pnfs_set_layout_stateid(lo, stateid, NULL, true);
++ pnfs_reset_return_info(lo);
+ } else
+ pnfs_mark_layout_stateid_invalid(lo, &freeme);
+ out_unlock:
+diff --git a/fs/smb/client/smb2pdu.c b/fs/smb/client/smb2pdu.c
+index e7118501fdcc64..ed3ffcb80aef0e 100644
+--- a/fs/smb/client/smb2pdu.c
++++ b/fs/smb/client/smb2pdu.c
+@@ -2967,7 +2967,7 @@ int smb311_posix_mkdir(const unsigned int xid, struct inode *inode,
+ /* Eventually save off posix specific response info and timestamps */
+
+ err_free_rsp_buf:
+- free_rsp_buf(resp_buftype, rsp);
++ free_rsp_buf(resp_buftype, rsp_iov.iov_base);
+ kfree(pc_buf);
+ err_free_req:
+ cifs_small_buf_release(req);
+diff --git a/fs/udf/truncate.c b/fs/udf/truncate.c
+index 4f33a4a4888613..b4071c9cf8c951 100644
+--- a/fs/udf/truncate.c
++++ b/fs/udf/truncate.c
+@@ -115,7 +115,7 @@ void udf_truncate_tail_extent(struct inode *inode)
+ }
+ /* This inode entry is in-memory only and thus we don't have to mark
+ * the inode dirty */
+- if (ret == 0)
++ if (ret >= 0)
+ iinfo->i_lenExtents = inode->i_size;
+ brelse(epos.bh);
+ }
+diff --git a/fs/xattr.c b/fs/xattr.c
+index fabb2a04501ee7..8ec5b0204bfdc5 100644
+--- a/fs/xattr.c
++++ b/fs/xattr.c
+@@ -1428,6 +1428,15 @@ static bool xattr_is_trusted(const char *name)
+ return !strncmp(name, XATTR_TRUSTED_PREFIX, XATTR_TRUSTED_PREFIX_LEN);
+ }
+
++static bool xattr_is_maclabel(const char *name)
++{
++ const char *suffix = name + XATTR_SECURITY_PREFIX_LEN;
++
++ return !strncmp(name, XATTR_SECURITY_PREFIX,
++ XATTR_SECURITY_PREFIX_LEN) &&
++ security_ismaclabel(suffix);
++}
++
+ /**
+ * simple_xattr_list - list all xattr objects
+ * @inode: inode from which to get the xattrs
+@@ -1460,6 +1469,17 @@ ssize_t simple_xattr_list(struct inode *inode, struct simple_xattrs *xattrs,
+ if (err)
+ return err;
+
++ err = security_inode_listsecurity(inode, buffer, remaining_size);
++ if (err < 0)
++ return err;
++
++ if (buffer) {
++ if (remaining_size < err)
++ return -ERANGE;
++ buffer += err;
++ }
++ remaining_size -= err;
++
+ read_lock(&xattrs->lock);
+ for (rbp = rb_first(&xattrs->rb_root); rbp; rbp = rb_next(rbp)) {
+ xattr = rb_entry(rbp, struct simple_xattr, rb_node);
+@@ -1468,6 +1488,10 @@ ssize_t simple_xattr_list(struct inode *inode, struct simple_xattrs *xattrs,
+ if (!trusted && xattr_is_trusted(xattr->name))
+ continue;
+
++ /* skip MAC labels; these are provided by LSM above */
++ if (xattr_is_maclabel(xattr->name))
++ continue;
++
+ err = xattr_list_one(&buffer, &remaining_size, xattr->name);
+ if (err)
+ break;
+diff --git a/include/linux/bio.h b/include/linux/bio.h
+index 4b79bf50f4f0e5..47381e3f5a403b 100644
+--- a/include/linux/bio.h
++++ b/include/linux/bio.h
+@@ -11,6 +11,7 @@
+ #include <linux/uio.h>
+
+ #define BIO_MAX_VECS 256U
++#define BIO_MAX_INLINE_VECS UIO_MAXIOV
+
+ struct queue_limits;
+
+diff --git a/include/linux/hyperv.h b/include/linux/hyperv.h
+index 6192bce9a9d68a..68f53495329fa4 100644
+--- a/include/linux/hyperv.h
++++ b/include/linux/hyperv.h
+@@ -1223,13 +1223,6 @@ extern int vmbus_sendpacket(struct vmbus_channel *channel,
+ enum vmbus_packet_type type,
+ u32 flags);
+
+-extern int vmbus_sendpacket_pagebuffer(struct vmbus_channel *channel,
+- struct hv_page_buffer pagebuffers[],
+- u32 pagecount,
+- void *buffer,
+- u32 bufferlen,
+- u64 requestid);
+-
+ extern int vmbus_sendpacket_mpb_desc(struct vmbus_channel *channel,
+ struct vmbus_packet_mpb_array *mpb,
+ u32 desc_size,
+diff --git a/include/linux/micrel_phy.h b/include/linux/micrel_phy.h
+index 591bf5b5e8dc22..9af01bdd86d26d 100644
+--- a/include/linux/micrel_phy.h
++++ b/include/linux/micrel_phy.h
+@@ -44,7 +44,6 @@
+ #define MICREL_PHY_50MHZ_CLK BIT(0)
+ #define MICREL_PHY_FXEN BIT(1)
+ #define MICREL_KSZ8_P1_ERRATA BIT(2)
+-#define MICREL_NO_EEE BIT(3)
+
+ #define MICREL_KSZ9021_EXTREG_CTRL 0xB
+ #define MICREL_KSZ9021_EXTREG_DATA_WRITE 0xC
+diff --git a/include/linux/tpm.h b/include/linux/tpm.h
+index 6c3125300c009a..a3d8305e88a51e 100644
+--- a/include/linux/tpm.h
++++ b/include/linux/tpm.h
+@@ -224,7 +224,7 @@ enum tpm2_const {
+
+ enum tpm2_timeouts {
+ TPM2_TIMEOUT_A = 750,
+- TPM2_TIMEOUT_B = 2000,
++ TPM2_TIMEOUT_B = 4000,
+ TPM2_TIMEOUT_C = 200,
+ TPM2_TIMEOUT_D = 30,
+ TPM2_DURATION_SHORT = 20,
+@@ -257,6 +257,7 @@ enum tpm2_return_codes {
+ TPM2_RC_TESTING = 0x090A, /* RC_WARN */
+ TPM2_RC_REFERENCE_H0 = 0x0910,
+ TPM2_RC_RETRY = 0x0922,
++ TPM2_RC_SESSION_MEMORY = 0x0903,
+ };
+
+ enum tpm2_command_codes {
+@@ -437,6 +438,24 @@ static inline u32 tpm2_rc_value(u32 rc)
+ return (rc & BIT(7)) ? rc & 0xbf : rc;
+ }
+
++/*
++ * Convert a return value from tpm_transmit_cmd() to POSIX error code.
++ */
++static inline ssize_t tpm_ret_to_err(ssize_t ret)
++{
++ if (ret < 0)
++ return ret;
++
++ switch (tpm2_rc_value(ret)) {
++ case TPM2_RC_SUCCESS:
++ return 0;
++ case TPM2_RC_SESSION_MEMORY:
++ return -ENOMEM;
++ default:
++ return -EFAULT;
++ }
++}
++
+ #if defined(CONFIG_TCG_TPM) || defined(CONFIG_TCG_TPM_MODULE)
+
+ extern int tpm_is_tpm2(struct tpm_chip *chip);
+diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
+index d48c657191cd01..1c05fed05f2bc2 100644
+--- a/include/net/sch_generic.h
++++ b/include/net/sch_generic.h
+@@ -1031,6 +1031,21 @@ static inline struct sk_buff *__qdisc_dequeue_head(struct qdisc_skb_head *qh)
+ return skb;
+ }
+
++static inline struct sk_buff *qdisc_dequeue_internal(struct Qdisc *sch, bool direct)
++{
++ struct sk_buff *skb;
++
++ skb = __skb_dequeue(&sch->gso_skb);
++ if (skb) {
++ sch->q.qlen--;
++ return skb;
++ }
++ if (direct)
++ return __qdisc_dequeue_head(&sch->q);
++ else
++ return sch->dequeue(sch);
++}
++
+ static inline struct sk_buff *qdisc_dequeue_head(struct Qdisc *sch)
+ {
+ struct sk_buff *skb = __qdisc_dequeue_head(&sch->q);
+diff --git a/include/sound/ump_msg.h b/include/sound/ump_msg.h
+index 72f60ddfea7535..9556b4755a1ed8 100644
+--- a/include/sound/ump_msg.h
++++ b/include/sound/ump_msg.h
+@@ -604,7 +604,7 @@ struct snd_ump_stream_msg_ep_info {
+ } __packed;
+
+ /* UMP Stream Message: Device Info Notification (128bit) */
+-struct snd_ump_stream_msg_devince_info {
++struct snd_ump_stream_msg_device_info {
+ #ifdef __BIG_ENDIAN_BITFIELD
+ /* 0 */
+ u32 type:4;
+@@ -754,7 +754,7 @@ struct snd_ump_stream_msg_fb_name {
+ union snd_ump_stream_msg {
+ struct snd_ump_stream_msg_ep_discovery ep_discovery;
+ struct snd_ump_stream_msg_ep_info ep_info;
+- struct snd_ump_stream_msg_devince_info device_info;
++ struct snd_ump_stream_msg_device_info device_info;
+ struct snd_ump_stream_msg_stream_cfg stream_cfg;
+ struct snd_ump_stream_msg_fb_discovery fb_discovery;
+ struct snd_ump_stream_msg_fb_info fb_info;
+diff --git a/io_uring/fdinfo.c b/io_uring/fdinfo.c
+index f60d0a9d505e25..336aec7ea8c29f 100644
+--- a/io_uring/fdinfo.c
++++ b/io_uring/fdinfo.c
+@@ -86,13 +86,8 @@ static inline void napi_show_fdinfo(struct io_ring_ctx *ctx,
+ }
+ #endif
+
+-/*
+- * Caller holds a reference to the file already, we don't need to do
+- * anything else to get an extra reference.
+- */
+-__cold void io_uring_show_fdinfo(struct seq_file *m, struct file *file)
++static void __io_uring_show_fdinfo(struct io_ring_ctx *ctx, struct seq_file *m)
+ {
+- struct io_ring_ctx *ctx = file->private_data;
+ struct io_overflow_cqe *ocqe;
+ struct io_rings *r = ctx->rings;
+ struct rusage sq_usage;
+@@ -106,7 +101,6 @@ __cold void io_uring_show_fdinfo(struct seq_file *m, struct file *file)
+ unsigned int sq_entries, cq_entries;
+ int sq_pid = -1, sq_cpu = -1;
+ u64 sq_total_time = 0, sq_work_time = 0;
+- bool has_lock;
+ unsigned int i;
+
+ if (ctx->flags & IORING_SETUP_CQE32)
+@@ -176,15 +170,7 @@ __cold void io_uring_show_fdinfo(struct seq_file *m, struct file *file)
+ seq_printf(m, "\n");
+ }
+
+- /*
+- * Avoid ABBA deadlock between the seq lock and the io_uring mutex,
+- * since fdinfo case grabs it in the opposite direction of normal use
+- * cases. If we fail to get the lock, we just don't iterate any
+- * structures that could be going away outside the io_uring mutex.
+- */
+- has_lock = mutex_trylock(&ctx->uring_lock);
+-
+- if (has_lock && (ctx->flags & IORING_SETUP_SQPOLL)) {
++ if (ctx->flags & IORING_SETUP_SQPOLL) {
+ struct io_sq_data *sq = ctx->sq_data;
+
+ /*
+@@ -206,7 +192,7 @@ __cold void io_uring_show_fdinfo(struct seq_file *m, struct file *file)
+ seq_printf(m, "SqTotalTime:\t%llu\n", sq_total_time);
+ seq_printf(m, "SqWorkTime:\t%llu\n", sq_work_time);
+ seq_printf(m, "UserFiles:\t%u\n", ctx->file_table.data.nr);
+- for (i = 0; has_lock && i < ctx->file_table.data.nr; i++) {
++ for (i = 0; i < ctx->file_table.data.nr; i++) {
+ struct file *f = NULL;
+
+ if (ctx->file_table.data.nodes[i])
+@@ -218,7 +204,7 @@ __cold void io_uring_show_fdinfo(struct seq_file *m, struct file *file)
+ }
+ }
+ seq_printf(m, "UserBufs:\t%u\n", ctx->buf_table.nr);
+- for (i = 0; has_lock && i < ctx->buf_table.nr; i++) {
++ for (i = 0; i < ctx->buf_table.nr; i++) {
+ struct io_mapped_ubuf *buf = NULL;
+
+ if (ctx->buf_table.nodes[i])
+@@ -228,7 +214,7 @@ __cold void io_uring_show_fdinfo(struct seq_file *m, struct file *file)
+ else
+ seq_printf(m, "%5u: <none>\n", i);
+ }
+- if (has_lock && !xa_empty(&ctx->personalities)) {
++ if (!xa_empty(&ctx->personalities)) {
+ unsigned long index;
+ const struct cred *cred;
+
+@@ -238,7 +224,7 @@ __cold void io_uring_show_fdinfo(struct seq_file *m, struct file *file)
+ }
+
+ seq_puts(m, "PollList:\n");
+- for (i = 0; has_lock && i < (1U << ctx->cancel_table.hash_bits); i++) {
++ for (i = 0; i < (1U << ctx->cancel_table.hash_bits); i++) {
+ struct io_hash_bucket *hb = &ctx->cancel_table.hbs[i];
+ struct io_kiocb *req;
+
+@@ -247,9 +233,6 @@ __cold void io_uring_show_fdinfo(struct seq_file *m, struct file *file)
+ task_work_pending(req->tctx->task));
+ }
+
+- if (has_lock)
+- mutex_unlock(&ctx->uring_lock);
+-
+ seq_puts(m, "CqOverflowList:\n");
+ spin_lock(&ctx->completion_lock);
+ list_for_each_entry(ocqe, &ctx->cq_overflow_list, list) {
+@@ -262,4 +245,23 @@ __cold void io_uring_show_fdinfo(struct seq_file *m, struct file *file)
+ spin_unlock(&ctx->completion_lock);
+ napi_show_fdinfo(ctx, m);
+ }
++
++/*
++ * Caller holds a reference to the file already, we don't need to do
++ * anything else to get an extra reference.
++ */
++__cold void io_uring_show_fdinfo(struct seq_file *m, struct file *file)
++{
++ struct io_ring_ctx *ctx = file->private_data;
++
++ /*
++ * Avoid ABBA deadlock between the seq lock and the io_uring mutex,
++ * since fdinfo case grabs it in the opposite direction of normal use
++ * cases.
++ */
++ if (mutex_trylock(&ctx->uring_lock)) {
++ __io_uring_show_fdinfo(ctx, m);
++ mutex_unlock(&ctx->uring_lock);
++ }
++}
+ #endif
+diff --git a/io_uring/memmap.c b/io_uring/memmap.c
+index 36113454442744..0929361e0c4984 100644
+--- a/io_uring/memmap.c
++++ b/io_uring/memmap.c
+@@ -116,7 +116,7 @@ static int io_region_init_ptr(struct io_mapped_region *mr)
+ void *ptr;
+
+ if (io_check_coalesce_buffer(mr->pages, mr->nr_pages, &ifd)) {
+- if (ifd.nr_folios == 1) {
++ if (ifd.nr_folios == 1 && !PageHighMem(mr->pages[0])) {
+ mr->ptr = page_address(mr->pages[0]);
+ return 0;
+ }
+diff --git a/io_uring/uring_cmd.c b/io_uring/uring_cmd.c
+index e6701b7aa14743..678a2f7d14fff4 100644
+--- a/io_uring/uring_cmd.c
++++ b/io_uring/uring_cmd.c
+@@ -244,6 +244,11 @@ int io_uring_cmd(struct io_kiocb *req, unsigned int issue_flags)
+ return -EOPNOTSUPP;
+ issue_flags |= IO_URING_F_IOPOLL;
+ req->iopoll_completed = 0;
++ if (ctx->flags & IORING_SETUP_HYBRID_IOPOLL) {
++ /* make sure every req only blocks once */
++ req->flags &= ~REQ_F_IOPOLL_STATE;
++ req->iopoll_start = ktime_get_ns();
++ }
+ }
+
+ ret = file->f_op->uring_cmd(ioucmd, issue_flags);
+diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
+index 1287274ae1ce9a..6fbffdca0c741f 100644
+--- a/kernel/cgroup/cpuset.c
++++ b/kernel/cgroup/cpuset.c
+@@ -1116,9 +1116,11 @@ void cpuset_update_tasks_cpumask(struct cpuset *cs, struct cpumask *new_cpus)
+
+ if (top_cs) {
+ /*
+- * Percpu kthreads in top_cpuset are ignored
++ * PF_NO_SETAFFINITY tasks are ignored.
++ * All per cpu kthreads should have PF_NO_SETAFFINITY
++ * flag set, see kthread_set_per_cpu().
+ */
+- if (kthread_is_per_cpu(task))
++ if (task->flags & PF_NO_SETAFFINITY)
+ continue;
+ cpumask_andnot(new_cpus, possible_mask, subpartitions_cpus);
+ } else {
+diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
+index 77cdff0d9f3488..3cfbd654471008 100644
+--- a/kernel/sched/ext.c
++++ b/kernel/sched/ext.c
+@@ -7173,6 +7173,12 @@ __bpf_kfunc int bpf_iter_scx_dsq_new(struct bpf_iter_scx_dsq *it, u64 dsq_id,
+ BUILD_BUG_ON(__alignof__(struct bpf_iter_scx_dsq_kern) !=
+ __alignof__(struct bpf_iter_scx_dsq));
+
++ /*
++ * next() and destroy() will be called regardless of the return value.
++ * Always clear $kit->dsq.
++ */
++ kit->dsq = NULL;
++
+ if (flags & ~__SCX_DSQ_ITER_USER_FLAGS)
+ return -EINVAL;
+
+diff --git a/kernel/trace/fprobe.c b/kernel/trace/fprobe.c
+index 95c6e3473a76b5..ba7ff14f5339b5 100644
+--- a/kernel/trace/fprobe.c
++++ b/kernel/trace/fprobe.c
+@@ -454,7 +454,8 @@ static void fprobe_remove_node_in_module(struct module *mod, struct hlist_head *
+ struct fprobe_hlist_node *node;
+ int ret = 0;
+
+- hlist_for_each_entry_rcu(node, head, hlist) {
++ hlist_for_each_entry_rcu(node, head, hlist,
++ lockdep_is_held(&fprobe_mutex)) {
+ if (!within_module(node->addr, mod))
+ continue;
+ if (delete_fprobe_node(node))
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index 9b8ce8f4ff9b38..f76bee67a792ce 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -1832,10 +1832,12 @@ static void rb_meta_validate_events(struct ring_buffer_per_cpu *cpu_buffer)
+
+ head_page = cpu_buffer->head_page;
+
+- /* If both the head and commit are on the reader_page then we are done. */
+- if (head_page == cpu_buffer->reader_page &&
+- head_page == cpu_buffer->commit_page)
++ /* If the commit_buffer is the reader page, update the commit page */
++ if (meta->commit_buffer == (unsigned long)cpu_buffer->reader_page->page) {
++ cpu_buffer->commit_page = cpu_buffer->reader_page;
++ /* Nothing more to do, the only page is the reader page */
+ goto done;
++ }
+
+ /* Iterate until finding the commit page */
+ for (i = 0; i < meta->nr_subbufs + 1; i++, rb_inc_page(&head_page)) {
+diff --git a/kernel/trace/trace_dynevent.c b/kernel/trace/trace_dynevent.c
+index a322e4f249a503..5d64a18cacacc6 100644
+--- a/kernel/trace/trace_dynevent.c
++++ b/kernel/trace/trace_dynevent.c
+@@ -16,7 +16,7 @@
+ #include "trace_output.h" /* for trace_event_sem */
+ #include "trace_dynevent.h"
+
+-static DEFINE_MUTEX(dyn_event_ops_mutex);
++DEFINE_MUTEX(dyn_event_ops_mutex);
+ static LIST_HEAD(dyn_event_ops_list);
+
+ bool trace_event_dyn_try_get_ref(struct trace_event_call *dyn_call)
+@@ -116,6 +116,20 @@ int dyn_event_release(const char *raw_command, struct dyn_event_operations *type
+ return ret;
+ }
+
++/*
++ * Locked version of event creation. The event creation must be protected by
++ * dyn_event_ops_mutex because of protecting trace_probe_log.
++ */
++int dyn_event_create(const char *raw_command, struct dyn_event_operations *type)
++{
++ int ret;
++
++ mutex_lock(&dyn_event_ops_mutex);
++ ret = type->create(raw_command);
++ mutex_unlock(&dyn_event_ops_mutex);
++ return ret;
++}
++
+ static int create_dyn_event(const char *raw_command)
+ {
+ struct dyn_event_operations *ops;
+diff --git a/kernel/trace/trace_dynevent.h b/kernel/trace/trace_dynevent.h
+index 936477a111d3e7..beee3f8d754444 100644
+--- a/kernel/trace/trace_dynevent.h
++++ b/kernel/trace/trace_dynevent.h
+@@ -100,6 +100,7 @@ void *dyn_event_seq_next(struct seq_file *m, void *v, loff_t *pos);
+ void dyn_event_seq_stop(struct seq_file *m, void *v);
+ int dyn_events_release_all(struct dyn_event_operations *type);
+ int dyn_event_release(const char *raw_command, struct dyn_event_operations *type);
++int dyn_event_create(const char *raw_command, struct dyn_event_operations *type);
+
+ /*
+ * for_each_dyn_event - iterate over the dyn_event list
+diff --git a/kernel/trace/trace_events_trigger.c b/kernel/trace/trace_events_trigger.c
+index d4544894709464..03f7f6c7eab64f 100644
+--- a/kernel/trace/trace_events_trigger.c
++++ b/kernel/trace/trace_events_trigger.c
+@@ -1560,7 +1560,7 @@ stacktrace_trigger(struct event_trigger_data *data,
+ struct trace_event_file *file = data->private_data;
+
+ if (file)
+- __trace_stack(file->tr, tracing_gen_ctx(), STACK_SKIP);
++ __trace_stack(file->tr, tracing_gen_ctx_dec(), STACK_SKIP);
+ else
+ trace_dump_stack(STACK_SKIP);
+ }
+diff --git a/kernel/trace/trace_functions.c b/kernel/trace/trace_functions.c
+index df56f9b7601094..eb6bebf1f1c778 100644
+--- a/kernel/trace/trace_functions.c
++++ b/kernel/trace/trace_functions.c
+@@ -597,11 +597,7 @@ ftrace_traceoff(unsigned long ip, unsigned long parent_ip,
+
+ static __always_inline void trace_stack(struct trace_array *tr)
+ {
+- unsigned int trace_ctx;
+-
+- trace_ctx = tracing_gen_ctx();
+-
+- __trace_stack(tr, trace_ctx, FTRACE_STACK_SKIP);
++ __trace_stack(tr, tracing_gen_ctx_dec(), FTRACE_STACK_SKIP);
+ }
+
+ static void
+diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
+index 8287b175667f33..a5d46f109fccea 100644
+--- a/kernel/trace/trace_kprobe.c
++++ b/kernel/trace/trace_kprobe.c
+@@ -1092,7 +1092,7 @@ static int create_or_delete_trace_kprobe(const char *raw_command)
+ if (raw_command[0] == '-')
+ return dyn_event_release(raw_command, &trace_kprobe_ops);
+
+- ret = trace_kprobe_create(raw_command);
++ ret = dyn_event_create(raw_command, &trace_kprobe_ops);
+ return ret == -ECANCELED ? -EINVAL : ret;
+ }
+
+diff --git a/kernel/trace/trace_probe.c b/kernel/trace/trace_probe.c
+index 2eeecb6c95eea5..424751cdf31f9f 100644
+--- a/kernel/trace/trace_probe.c
++++ b/kernel/trace/trace_probe.c
+@@ -154,9 +154,12 @@ static const struct fetch_type *find_fetch_type(const char *type, unsigned long
+ }
+
+ static struct trace_probe_log trace_probe_log;
++extern struct mutex dyn_event_ops_mutex;
+
+ void trace_probe_log_init(const char *subsystem, int argc, const char **argv)
+ {
++ lockdep_assert_held(&dyn_event_ops_mutex);
++
+ trace_probe_log.subsystem = subsystem;
+ trace_probe_log.argc = argc;
+ trace_probe_log.argv = argv;
+@@ -165,11 +168,15 @@ void trace_probe_log_init(const char *subsystem, int argc, const char **argv)
+
+ void trace_probe_log_clear(void)
+ {
++ lockdep_assert_held(&dyn_event_ops_mutex);
++
+ memset(&trace_probe_log, 0, sizeof(trace_probe_log));
+ }
+
+ void trace_probe_log_set_index(int index)
+ {
++ lockdep_assert_held(&dyn_event_ops_mutex);
++
+ trace_probe_log.index = index;
+ }
+
+@@ -178,6 +185,8 @@ void __trace_probe_log_err(int offset, int err_type)
+ char *command, *p;
+ int i, len = 0, pos = 0;
+
++ lockdep_assert_held(&dyn_event_ops_mutex);
++
+ if (!trace_probe_log.argv)
+ return;
+
+diff --git a/kernel/trace/trace_uprobe.c b/kernel/trace/trace_uprobe.c
+index 3386439ec9f674..35cf76c75dd766 100644
+--- a/kernel/trace/trace_uprobe.c
++++ b/kernel/trace/trace_uprobe.c
+@@ -741,7 +741,7 @@ static int create_or_delete_trace_uprobe(const char *raw_command)
+ if (raw_command[0] == '-')
+ return dyn_event_release(raw_command, &trace_uprobe_ops);
+
+- ret = trace_uprobe_create(raw_command);
++ ret = dyn_event_create(raw_command, &trace_uprobe_ops);
+ return ret == -ECANCELED ? -EINVAL : ret;
+ }
+
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index 44b8feb83402b3..8acd95964ad15d 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -2987,7 +2987,7 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
+ struct hugepage_subpool *spool = subpool_vma(vma);
+ struct hstate *h = hstate_vma(vma);
+ struct folio *folio;
+- long retval, gbl_chg;
++ long retval, gbl_chg, gbl_reserve;
+ map_chg_state map_chg;
+ int ret, idx;
+ struct hugetlb_cgroup *h_cg = NULL;
+@@ -3140,8 +3140,16 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
+ hugetlb_cgroup_uncharge_cgroup_rsvd(idx, pages_per_huge_page(h),
+ h_cg);
+ out_subpool_put:
+- if (map_chg)
+- hugepage_subpool_put_pages(spool, 1);
++ /*
++ * put page to subpool iff the quota of subpool's rsv_hpages is used
++ * during hugepage_subpool_get_pages.
++ */
++ if (map_chg && !gbl_chg) {
++ gbl_reserve = hugepage_subpool_put_pages(spool, 1);
++ hugetlb_acct_memory(h, -gbl_reserve);
++ }
++
++
+ out_end_reservation:
+ if (map_chg != MAP_CHG_ENFORCED)
+ vma_end_reservation(h, vma, addr);
+@@ -6949,7 +6957,7 @@ bool hugetlb_reserve_pages(struct inode *inode,
+ struct vm_area_struct *vma,
+ vm_flags_t vm_flags)
+ {
+- long chg = -1, add = -1;
++ long chg = -1, add = -1, spool_resv, gbl_resv;
+ struct hstate *h = hstate_inode(inode);
+ struct hugepage_subpool *spool = subpool_inode(inode);
+ struct resv_map *resv_map;
+@@ -7084,8 +7092,16 @@ bool hugetlb_reserve_pages(struct inode *inode,
+ return true;
+
+ out_put_pages:
+- /* put back original number of pages, chg */
+- (void)hugepage_subpool_put_pages(spool, chg);
++ spool_resv = chg - gbl_reserve;
++ if (spool_resv) {
++ /* put sub pool's reservation back, chg - gbl_reserve */
++ gbl_resv = hugepage_subpool_put_pages(spool, spool_resv);
++ /*
++ * subpool's reserved pages can not be put back due to race,
++ * return to hstate.
++ */
++ hugetlb_acct_memory(h, -gbl_resv);
++ }
+ out_uncharge_cgroup:
+ hugetlb_cgroup_uncharge_cgroup_rsvd(hstate_index(h),
+ chg * pages_per_huge_page(h), h_cg);
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index 74a996a3508e16..2cc8b3e36dc942 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -6951,9 +6951,6 @@ bool has_managed_dma(void)
+
+ #ifdef CONFIG_UNACCEPTED_MEMORY
+
+-/* Counts number of zones with unaccepted pages. */
+-static DEFINE_STATIC_KEY_FALSE(zones_with_unaccepted_pages);
+-
+ static bool lazy_accept = true;
+
+ static int __init accept_memory_parse(char *p)
+@@ -6980,11 +6977,7 @@ static bool page_contains_unaccepted(struct page *page, unsigned int order)
+ static void __accept_page(struct zone *zone, unsigned long *flags,
+ struct page *page)
+ {
+- bool last;
+-
+ list_del(&page->lru);
+- last = list_empty(&zone->unaccepted_pages);
+-
+ account_freepages(zone, -MAX_ORDER_NR_PAGES, MIGRATE_MOVABLE);
+ __mod_zone_page_state(zone, NR_UNACCEPTED, -MAX_ORDER_NR_PAGES);
+ __ClearPageUnaccepted(page);
+@@ -6993,9 +6986,6 @@ static void __accept_page(struct zone *zone, unsigned long *flags,
+ accept_memory(page_to_phys(page), PAGE_SIZE << MAX_PAGE_ORDER);
+
+ __free_pages_ok(page, MAX_PAGE_ORDER, FPI_TO_TAIL);
+-
+- if (last)
+- static_branch_dec(&zones_with_unaccepted_pages);
+ }
+
+ void accept_page(struct page *page)
+@@ -7032,19 +7022,11 @@ static bool try_to_accept_memory_one(struct zone *zone)
+ return true;
+ }
+
+-static inline bool has_unaccepted_memory(void)
+-{
+- return static_branch_unlikely(&zones_with_unaccepted_pages);
+-}
+-
+ static bool cond_accept_memory(struct zone *zone, unsigned int order)
+ {
+ long to_accept, wmark;
+ bool ret = false;
+
+- if (!has_unaccepted_memory())
+- return false;
+-
+ if (list_empty(&zone->unaccepted_pages))
+ return false;
+
+@@ -7078,22 +7060,17 @@ static bool __free_unaccepted(struct page *page)
+ {
+ struct zone *zone = page_zone(page);
+ unsigned long flags;
+- bool first = false;
+
+ if (!lazy_accept)
+ return false;
+
+ spin_lock_irqsave(&zone->lock, flags);
+- first = list_empty(&zone->unaccepted_pages);
+ list_add_tail(&page->lru, &zone->unaccepted_pages);
+ account_freepages(zone, MAX_ORDER_NR_PAGES, MIGRATE_MOVABLE);
+ __mod_zone_page_state(zone, NR_UNACCEPTED, MAX_ORDER_NR_PAGES);
+ __SetPageUnaccepted(page);
+ spin_unlock_irqrestore(&zone->lock, flags);
+
+- if (first)
+- static_branch_inc(&zones_with_unaccepted_pages);
+-
+ return true;
+ }
+
+diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
+index 4295a599d71494..30f7c96b0de249 100644
+--- a/mm/userfaultfd.c
++++ b/mm/userfaultfd.c
+@@ -1068,8 +1068,13 @@ static int move_present_pte(struct mm_struct *mm,
+ src_folio->index = linear_page_index(dst_vma, dst_addr);
+
+ orig_dst_pte = mk_pte(&src_folio->page, dst_vma->vm_page_prot);
+- /* Follow mremap() behavior and treat the entry dirty after the move */
+- orig_dst_pte = pte_mkwrite(pte_mkdirty(orig_dst_pte), dst_vma);
++ /* Set soft dirty bit so userspace can notice the pte was moved */
++#ifdef CONFIG_MEM_SOFT_DIRTY
++ orig_dst_pte = pte_mksoft_dirty(orig_dst_pte);
++#endif
++ if (pte_dirty(orig_src_pte))
++ orig_dst_pte = pte_mkdirty(orig_dst_pte);
++ orig_dst_pte = pte_mkwrite(orig_dst_pte, dst_vma);
+
+ set_pte_at(mm, dst_addr, dst_pte, orig_dst_pte);
+ out:
+@@ -1104,6 +1109,9 @@ static int move_swap_pte(struct mm_struct *mm, struct vm_area_struct *dst_vma,
+ }
+
+ orig_src_pte = ptep_get_and_clear(mm, src_addr, src_pte);
++#ifdef CONFIG_MEM_SOFT_DIRTY
++ orig_src_pte = pte_swp_mksoft_dirty(orig_src_pte);
++#endif
+ set_pte_at(mm, dst_addr, dst_pte, orig_src_pte);
+ double_pt_unlock(dst_ptl, src_ptl);
+
+diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c
+index 621c555f639be3..181b1e070b82ec 100644
+--- a/net/bluetooth/mgmt.c
++++ b/net/bluetooth/mgmt.c
+@@ -7540,11 +7540,16 @@ static void add_device_complete(struct hci_dev *hdev, void *data, int err)
+ struct mgmt_cp_add_device *cp = cmd->param;
+
+ if (!err) {
++ struct hci_conn_params *params;
++
++ params = hci_conn_params_lookup(hdev, &cp->addr.bdaddr,
++ le_addr_type(cp->addr.type));
++
+ device_added(cmd->sk, hdev, &cp->addr.bdaddr, cp->addr.type,
+ cp->action);
+ device_flags_changed(NULL, hdev, &cp->addr.bdaddr,
+ cp->addr.type, hdev->conn_flags,
+- PTR_UINT(cmd->user_data));
++ params ? params->flags : 0);
+ }
+
+ mgmt_cmd_complete(cmd->sk, hdev->id, MGMT_OP_ADD_DEVICE,
+@@ -7647,8 +7652,6 @@ static int add_device(struct sock *sk, struct hci_dev *hdev,
+ goto unlock;
+ }
+
+- cmd->user_data = UINT_PTR(current_flags);
+-
+ err = hci_cmd_sync_queue(hdev, add_device_sync, cmd,
+ add_device_complete);
+ if (err < 0) {
+diff --git a/net/mac80211/main.c b/net/mac80211/main.c
+index 53e5aee4688569..2b6249d75a5d42 100644
+--- a/net/mac80211/main.c
++++ b/net/mac80211/main.c
+@@ -1354,10 +1354,12 @@ int ieee80211_register_hw(struct ieee80211_hw *hw)
+ hw->wiphy->software_iftypes |= BIT(NL80211_IFTYPE_MONITOR);
+
+
+- local->int_scan_req = kzalloc(sizeof(*local->int_scan_req) +
+- sizeof(void *) * channels, GFP_KERNEL);
++ local->int_scan_req = kzalloc(struct_size(local->int_scan_req,
++ channels, channels),
++ GFP_KERNEL);
+ if (!local->int_scan_req)
+ return -ENOMEM;
++ local->int_scan_req->n_channels = channels;
+
+ eth_broadcast_addr(local->int_scan_req->bssid);
+
+diff --git a/net/mctp/device.c b/net/mctp/device.c
+index 8e0724c56723de..7c0dcf3df31962 100644
+--- a/net/mctp/device.c
++++ b/net/mctp/device.c
+@@ -117,11 +117,18 @@ static int mctp_dump_addrinfo(struct sk_buff *skb, struct netlink_callback *cb)
+ struct net_device *dev;
+ struct ifaddrmsg *hdr;
+ struct mctp_dev *mdev;
+- int ifindex, rc;
+-
+- hdr = nlmsg_data(cb->nlh);
+- // filter by ifindex if requested
+- ifindex = hdr->ifa_index;
++ int ifindex = 0, rc;
++
++ /* Filter by ifindex if a header is provided */
++ if (cb->nlh->nlmsg_len >= nlmsg_msg_size(sizeof(*hdr))) {
++ hdr = nlmsg_data(cb->nlh);
++ ifindex = hdr->ifa_index;
++ } else {
++ if (cb->strict_check) {
++ NL_SET_ERR_MSG(cb->extack, "mctp: Invalid header for addr dump request");
++ return -EINVAL;
++ }
++ }
+
+ rcu_read_lock();
+ for_each_netdev_dump(net, dev, mcb->ifindex) {
+diff --git a/net/mctp/route.c b/net/mctp/route.c
+index 4c460160914f01..d9c8e5a5f9ce9a 100644
+--- a/net/mctp/route.c
++++ b/net/mctp/route.c
+@@ -313,8 +313,10 @@ static void mctp_flow_prepare_output(struct sk_buff *skb, struct mctp_dev *dev)
+
+ key = flow->key;
+
+- if (WARN_ON(key->dev && key->dev != dev))
++ if (key->dev) {
++ WARN_ON(key->dev != dev);
+ return;
++ }
+
+ mctp_dev_set_key(dev, key);
+ }
+diff --git a/net/sched/sch_codel.c b/net/sched/sch_codel.c
+index 12dd71139da396..c93761040c6e77 100644
+--- a/net/sched/sch_codel.c
++++ b/net/sched/sch_codel.c
+@@ -144,7 +144,7 @@ static int codel_change(struct Qdisc *sch, struct nlattr *opt,
+
+ qlen = sch->q.qlen;
+ while (sch->q.qlen > sch->limit) {
+- struct sk_buff *skb = __qdisc_dequeue_head(&sch->q);
++ struct sk_buff *skb = qdisc_dequeue_internal(sch, true);
+
+ dropped += qdisc_pkt_len(skb);
+ qdisc_qstats_backlog_dec(sch, skb);
+diff --git a/net/sched/sch_fq.c b/net/sched/sch_fq.c
+index 2ca5332cfcc5c5..902ff54706072b 100644
+--- a/net/sched/sch_fq.c
++++ b/net/sched/sch_fq.c
+@@ -1136,7 +1136,7 @@ static int fq_change(struct Qdisc *sch, struct nlattr *opt,
+ sch_tree_lock(sch);
+ }
+ while (sch->q.qlen > sch->limit) {
+- struct sk_buff *skb = fq_dequeue(sch);
++ struct sk_buff *skb = qdisc_dequeue_internal(sch, false);
+
+ if (!skb)
+ break;
+diff --git a/net/sched/sch_fq_codel.c b/net/sched/sch_fq_codel.c
+index 6c9029f71e88d3..2a0f3a513bfaa1 100644
+--- a/net/sched/sch_fq_codel.c
++++ b/net/sched/sch_fq_codel.c
+@@ -441,7 +441,7 @@ static int fq_codel_change(struct Qdisc *sch, struct nlattr *opt,
+
+ while (sch->q.qlen > sch->limit ||
+ q->memory_usage > q->memory_limit) {
+- struct sk_buff *skb = fq_codel_dequeue(sch);
++ struct sk_buff *skb = qdisc_dequeue_internal(sch, false);
+
+ q->cstats.drop_len += qdisc_pkt_len(skb);
+ rtnl_kfree_skbs(skb, skb);
+diff --git a/net/sched/sch_fq_pie.c b/net/sched/sch_fq_pie.c
+index 93c36afbf57624..67f437c1705826 100644
+--- a/net/sched/sch_fq_pie.c
++++ b/net/sched/sch_fq_pie.c
+@@ -366,7 +366,7 @@ static int fq_pie_change(struct Qdisc *sch, struct nlattr *opt,
+
+ /* Drop excess packets if new limit is lower */
+ while (sch->q.qlen > sch->limit) {
+- struct sk_buff *skb = fq_pie_qdisc_dequeue(sch);
++ struct sk_buff *skb = qdisc_dequeue_internal(sch, false);
+
+ len_dropped += qdisc_pkt_len(skb);
+ num_dropped += 1;
+diff --git a/net/sched/sch_hhf.c b/net/sched/sch_hhf.c
+index 44d9efe1a96a89..5aa434b4670738 100644
+--- a/net/sched/sch_hhf.c
++++ b/net/sched/sch_hhf.c
+@@ -564,7 +564,7 @@ static int hhf_change(struct Qdisc *sch, struct nlattr *opt,
+ qlen = sch->q.qlen;
+ prev_backlog = sch->qstats.backlog;
+ while (sch->q.qlen > sch->limit) {
+- struct sk_buff *skb = hhf_dequeue(sch);
++ struct sk_buff *skb = qdisc_dequeue_internal(sch, false);
+
+ rtnl_kfree_skbs(skb, skb);
+ }
+diff --git a/net/sched/sch_pie.c b/net/sched/sch_pie.c
+index bb1fa9aa530b27..97f71b6dbf5b54 100644
+--- a/net/sched/sch_pie.c
++++ b/net/sched/sch_pie.c
+@@ -195,7 +195,7 @@ static int pie_change(struct Qdisc *sch, struct nlattr *opt,
+ /* Drop excess packets if new limit is lower */
+ qlen = sch->q.qlen;
+ while (sch->q.qlen > sch->limit) {
+- struct sk_buff *skb = __qdisc_dequeue_head(&sch->q);
++ struct sk_buff *skb = qdisc_dequeue_internal(sch, true);
+
+ dropped += qdisc_pkt_len(skb);
+ qdisc_qstats_backlog_dec(sch, skb);
+diff --git a/net/tls/tls_strp.c b/net/tls/tls_strp.c
+index 77e33e1e340e31..65b0da6fdf6a79 100644
+--- a/net/tls/tls_strp.c
++++ b/net/tls/tls_strp.c
+@@ -396,7 +396,6 @@ static int tls_strp_read_copy(struct tls_strparser *strp, bool qshort)
+ return 0;
+
+ shinfo = skb_shinfo(strp->anchor);
+- shinfo->frag_list = NULL;
+
+ /* If we don't know the length go max plus page for cipher overhead */
+ need_spc = strp->stm.full_len ?: TLS_MAX_PAYLOAD_SIZE + PAGE_SIZE;
+@@ -412,6 +411,8 @@ static int tls_strp_read_copy(struct tls_strparser *strp, bool qshort)
+ page, 0, 0);
+ }
+
++ shinfo->frag_list = NULL;
++
+ strp->copy_mode = 1;
+ strp->stm.offset = 0;
+
+diff --git a/samples/ftrace/sample-trace-array.c b/samples/ftrace/sample-trace-array.c
+index d0ee9001c7b376..aaa8fa92e24d52 100644
+--- a/samples/ftrace/sample-trace-array.c
++++ b/samples/ftrace/sample-trace-array.c
+@@ -112,7 +112,7 @@ static int __init sample_trace_array_init(void)
+ /*
+ * If context specific per-cpu buffers havent already been allocated.
+ */
+- trace_printk_init_buffers();
++ trace_array_init_printk(tr);
+
+ simple_tsk = kthread_run(simple_thread, NULL, "sample-instance");
+ if (IS_ERR(simple_tsk)) {
+diff --git a/scripts/Makefile.extrawarn b/scripts/Makefile.extrawarn
+index dc081cf46d211c..686197407c3c61 100644
+--- a/scripts/Makefile.extrawarn
++++ b/scripts/Makefile.extrawarn
+@@ -36,6 +36,18 @@ KBUILD_CFLAGS += -Wno-gnu
+ # https://gcc.gnu.org/bugzilla/show_bug.cgi?id=111219
+ KBUILD_CFLAGS += $(call cc-disable-warning, format-overflow-non-kprintf)
+ KBUILD_CFLAGS += $(call cc-disable-warning, format-truncation-non-kprintf)
++
++# Clang may emit a warning when a const variable, such as the dummy variables
++# in typecheck(), or const member of an aggregate type are not initialized,
++# which can result in unexpected behavior. However, in many audited cases of
++# the "field" variant of the warning, this is intentional because the field is
++# never used within a particular call path, the field is within a union with
++# other non-const members, or the containing object is not const so the field
++# can be modified via memcpy() / memset(). While the variable warning also gets
++# disabled with this same switch, there should not be too much coverage lost
++# because -Wuninitialized will still flag when an uninitialized const variable
++# is used.
++KBUILD_CFLAGS += $(call cc-disable-warning, default-const-init-unsafe)
+ else
+
+ # gcc inanely warns about local variables called 'main'
+diff --git a/sound/core/seq/seq_clientmgr.c b/sound/core/seq/seq_clientmgr.c
+index 706f53e39b53c0..0ae01b85bb18cd 100644
+--- a/sound/core/seq/seq_clientmgr.c
++++ b/sound/core/seq/seq_clientmgr.c
+@@ -732,15 +732,21 @@ static int snd_seq_deliver_single_event(struct snd_seq_client *client,
+ */
+ static int __deliver_to_subscribers(struct snd_seq_client *client,
+ struct snd_seq_event *event,
+- struct snd_seq_client_port *src_port,
+- int atomic, int hop)
++ int port, int atomic, int hop)
+ {
++ struct snd_seq_client_port *src_port;
+ struct snd_seq_subscribers *subs;
+ int err, result = 0, num_ev = 0;
+ union __snd_seq_event event_saved;
+ size_t saved_size;
+ struct snd_seq_port_subs_info *grp;
+
++ if (port < 0)
++ return 0;
++ src_port = snd_seq_port_use_ptr(client, port);
++ if (!src_port)
++ return 0;
++
+ /* save original event record */
+ saved_size = snd_seq_event_packet_size(event);
+ memcpy(&event_saved, event, saved_size);
+@@ -775,6 +781,7 @@ static int __deliver_to_subscribers(struct snd_seq_client *client,
+ read_unlock(&grp->list_lock);
+ else
+ up_read(&grp->list_mutex);
++ snd_seq_port_unlock(src_port);
+ memcpy(event, &event_saved, saved_size);
+ return (result < 0) ? result : num_ev;
+ }
+@@ -783,25 +790,32 @@ static int deliver_to_subscribers(struct snd_seq_client *client,
+ struct snd_seq_event *event,
+ int atomic, int hop)
+ {
+- struct snd_seq_client_port *src_port;
+- int ret = 0, ret2;
+-
+- src_port = snd_seq_port_use_ptr(client, event->source.port);
+- if (src_port) {
+- ret = __deliver_to_subscribers(client, event, src_port, atomic, hop);
+- snd_seq_port_unlock(src_port);
+- }
+-
+- if (client->ump_endpoint_port < 0 ||
+- event->source.port == client->ump_endpoint_port)
+- return ret;
++ int ret;
++#if IS_ENABLED(CONFIG_SND_SEQ_UMP)
++ int ret2;
++#endif
+
+- src_port = snd_seq_port_use_ptr(client, client->ump_endpoint_port);
+- if (!src_port)
++ ret = __deliver_to_subscribers(client, event,
++ event->source.port, atomic, hop);
++#if IS_ENABLED(CONFIG_SND_SEQ_UMP)
++ if (!snd_seq_client_is_ump(client) || client->ump_endpoint_port < 0)
+ return ret;
+- ret2 = __deliver_to_subscribers(client, event, src_port, atomic, hop);
+- snd_seq_port_unlock(src_port);
+- return ret2 < 0 ? ret2 : ret;
++ /* If it's an event from EP port (and with a UMP group),
++ * deliver to subscribers of the corresponding UMP group port, too.
++ * Or, if it's from non-EP port, deliver to subscribers of EP port, too.
++ */
++ if (event->source.port == client->ump_endpoint_port)
++ ret2 = __deliver_to_subscribers(client, event,
++ snd_seq_ump_group_port(event),
++ atomic, hop);
++ else
++ ret2 = __deliver_to_subscribers(client, event,
++ client->ump_endpoint_port,
++ atomic, hop);
++ if (ret2 < 0)
++ return ret2;
++#endif
++ return ret;
+ }
+
+ /* deliver an event to the destination port(s).
+diff --git a/sound/core/seq/seq_ump_convert.c b/sound/core/seq/seq_ump_convert.c
+index ff7e558b4d51d0..db2f169cae11ea 100644
+--- a/sound/core/seq/seq_ump_convert.c
++++ b/sound/core/seq/seq_ump_convert.c
+@@ -1285,3 +1285,21 @@ int snd_seq_deliver_to_ump(struct snd_seq_client *source,
+ else
+ return cvt_to_ump_midi1(dest, dest_port, event, atomic, hop);
+ }
++
++/* return the UMP group-port number of the event;
++ * return -1 if groupless or non-UMP event
++ */
++int snd_seq_ump_group_port(const struct snd_seq_event *event)
++{
++ const struct snd_seq_ump_event *ump_ev =
++ (const struct snd_seq_ump_event *)event;
++ unsigned char type;
++
++ if (!snd_seq_ev_is_ump(event))
++ return -1;
++ type = ump_message_type(ump_ev->ump[0]);
++ if (ump_is_groupless_msg(type))
++ return -1;
++ /* group-port number starts from 1 */
++ return ump_message_group(ump_ev->ump[0]) + 1;
++}
+diff --git a/sound/core/seq/seq_ump_convert.h b/sound/core/seq/seq_ump_convert.h
+index 6c146d8032804f..4abf0a7637d701 100644
+--- a/sound/core/seq/seq_ump_convert.h
++++ b/sound/core/seq/seq_ump_convert.h
+@@ -18,5 +18,6 @@ int snd_seq_deliver_to_ump(struct snd_seq_client *source,
+ struct snd_seq_client_port *dest_port,
+ struct snd_seq_event *event,
+ int atomic, int hop);
++int snd_seq_ump_group_port(const struct snd_seq_event *event);
+
+ #endif /* __SEQ_UMP_CONVERT_H */
+diff --git a/sound/pci/es1968.c b/sound/pci/es1968.c
+index c6c018b40c69f9..4e0693f0ab0f89 100644
+--- a/sound/pci/es1968.c
++++ b/sound/pci/es1968.c
+@@ -1561,7 +1561,7 @@ static int snd_es1968_capture_open(struct snd_pcm_substream *substream)
+ struct snd_pcm_runtime *runtime = substream->runtime;
+ struct es1968 *chip = snd_pcm_substream_chip(substream);
+ struct esschan *es;
+- int apu1, apu2;
++ int err, apu1, apu2;
+
+ apu1 = snd_es1968_alloc_apu_pair(chip, ESM_APU_PCM_CAPTURE);
+ if (apu1 < 0)
+@@ -1605,7 +1605,9 @@ static int snd_es1968_capture_open(struct snd_pcm_substream *substream)
+ runtime->hw = snd_es1968_capture;
+ runtime->hw.buffer_bytes_max = runtime->hw.period_bytes_max =
+ calc_available_memory_size(chip) - 1024; /* keep MIXBUF size */
+- snd_pcm_hw_constraint_pow2(runtime, 0, SNDRV_PCM_HW_PARAM_BUFFER_BYTES);
++ err = snd_pcm_hw_constraint_pow2(runtime, 0, SNDRV_PCM_HW_PARAM_BUFFER_BYTES);
++ if (err < 0)
++ return err;
+
+ spin_lock_irq(&chip->substream_lock);
+ list_add(&es->list, &chip->substream_list);
+diff --git a/sound/sh/Kconfig b/sound/sh/Kconfig
+index b75fbb3236a7b9..f5fa09d740b4c9 100644
+--- a/sound/sh/Kconfig
++++ b/sound/sh/Kconfig
+@@ -14,7 +14,7 @@ if SND_SUPERH
+
+ config SND_AICA
+ tristate "Dreamcast Yamaha AICA sound"
+- depends on SH_DREAMCAST
++ depends on SH_DREAMCAST && SH_DMA_API
+ select SND_PCM
+ select G2_DMA
+ help
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index 09210fb4ac60c1..c7387081577cd3 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -2240,6 +2240,8 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
+ QUIRK_FLAG_CTL_MSG_DELAY_1M),
+ DEVICE_FLG(0x0c45, 0x6340, /* Sonix HD USB Camera */
+ QUIRK_FLAG_GET_SAMPLE_RATE),
++ DEVICE_FLG(0x0c45, 0x636b, /* Microdia JP001 USB Camera */
++ QUIRK_FLAG_GET_SAMPLE_RATE),
+ DEVICE_FLG(0x0d8c, 0x0014, /* USB Audio Device */
+ QUIRK_FLAG_CTL_MSG_DELAY_1M),
+ DEVICE_FLG(0x0ecb, 0x205c, /* JBL Quantum610 Wireless */
+@@ -2248,6 +2250,8 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
+ QUIRK_FLAG_FIXED_RATE),
+ DEVICE_FLG(0x0fd9, 0x0008, /* Hauppauge HVR-950Q */
+ QUIRK_FLAG_SHARE_MEDIA_DEVICE | QUIRK_FLAG_ALIGN_TRANSFER),
++ DEVICE_FLG(0x1101, 0x0003, /* Audioengine D1 */
++ QUIRK_FLAG_GET_SAMPLE_RATE),
+ DEVICE_FLG(0x1224, 0x2a25, /* Jieli Technology USB PHY 2.0 */
+ QUIRK_FLAG_GET_SAMPLE_RATE | QUIRK_FLAG_MIC_RES_16),
+ DEVICE_FLG(0x1395, 0x740a, /* Sennheiser DECT */
+diff --git a/tools/net/ynl/pyynl/ethtool.py b/tools/net/ynl/pyynl/ethtool.py
+index af7fddd7b085bf..cab6b576c8762e 100755
+--- a/tools/net/ynl/pyynl/ethtool.py
++++ b/tools/net/ynl/pyynl/ethtool.py
+@@ -338,16 +338,24 @@ def main():
+ print('Capabilities:')
+ [print(f'\t{v}') for v in bits_to_dict(tsinfo['timestamping'])]
+
+- print(f'PTP Hardware Clock: {tsinfo["phc-index"]}')
++ print(f'PTP Hardware Clock: {tsinfo.get("phc-index", "none")}')
+
+- print('Hardware Transmit Timestamp Modes:')
+- [print(f'\t{v}') for v in bits_to_dict(tsinfo['tx-types'])]
++ if 'tx-types' in tsinfo:
++ print('Hardware Transmit Timestamp Modes:')
++ [print(f'\t{v}') for v in bits_to_dict(tsinfo['tx-types'])]
++ else:
++ print('Hardware Transmit Timestamp Modes: none')
++
++ if 'rx-filters' in tsinfo:
++ print('Hardware Receive Filter Modes:')
++ [print(f'\t{v}') for v in bits_to_dict(tsinfo['rx-filters'])]
++ else:
++ print('Hardware Receive Filter Modes: none')
+
+- print('Hardware Receive Filter Modes:')
+- [print(f'\t{v}') for v in bits_to_dict(tsinfo['rx-filters'])]
++ if 'stats' in tsinfo and tsinfo['stats']:
++ print('Statistics:')
++ [print(f'\t{k}: {v}') for k, v in tsinfo['stats'].items()]
+
+- print('Statistics:')
+- [print(f'\t{k}: {v}') for k, v in tsinfo['stats'].items()]
+ return
+
+ print(f'Settings for {args.device}:')
+diff --git a/tools/perf/arch/loongarch/include/syscall_table.h b/tools/perf/arch/loongarch/include/syscall_table.h
+index 9d0646d3455cda..b53e31c1580531 100644
+--- a/tools/perf/arch/loongarch/include/syscall_table.h
++++ b/tools/perf/arch/loongarch/include/syscall_table.h
+@@ -1,2 +1,2 @@
+ /* SPDX-License-Identifier: GPL-2.0 */
+-#include <asm/syscall_table_64.h>
++#include <asm/syscalls_64.h>
+diff --git a/tools/testing/selftests/drivers/net/hw/ncdevmem.c b/tools/testing/selftests/drivers/net/hw/ncdevmem.c
+index 19a6969643f462..2d8ffe98700ea0 100644
+--- a/tools/testing/selftests/drivers/net/hw/ncdevmem.c
++++ b/tools/testing/selftests/drivers/net/hw/ncdevmem.c
+@@ -432,6 +432,22 @@ static int parse_address(const char *str, int port, struct sockaddr_in6 *sin6)
+ return 0;
+ }
+
++static struct netdev_queue_id *create_queues(void)
++{
++ struct netdev_queue_id *queues;
++ size_t i = 0;
++
++ queues = calloc(num_queues, sizeof(*queues));
++ for (i = 0; i < num_queues; i++) {
++ queues[i]._present.type = 1;
++ queues[i]._present.id = 1;
++ queues[i].type = NETDEV_QUEUE_TYPE_RX;
++ queues[i].id = start_queue + i;
++ }
++
++ return queues;
++}
++
+ int do_server(struct memory_buffer *mem)
+ {
+ char ctrl_data[sizeof(int) * 20000];
+@@ -449,7 +465,6 @@ int do_server(struct memory_buffer *mem)
+ char buffer[256];
+ int socket_fd;
+ int client_fd;
+- size_t i = 0;
+ int ret;
+
+ ret = parse_address(server_ip, atoi(port), &server_sin);
+@@ -472,16 +487,7 @@ int do_server(struct memory_buffer *mem)
+
+ sleep(1);
+
+- queues = malloc(sizeof(*queues) * num_queues);
+-
+- for (i = 0; i < num_queues; i++) {
+- queues[i]._present.type = 1;
+- queues[i]._present.id = 1;
+- queues[i].type = NETDEV_QUEUE_TYPE_RX;
+- queues[i].id = start_queue + i;
+- }
+-
+- if (bind_rx_queue(ifindex, mem->fd, queues, num_queues, &ys))
++ if (bind_rx_queue(ifindex, mem->fd, create_queues(), num_queues, &ys))
+ error(1, 0, "Failed to bind\n");
+
+ tmp_mem = malloc(mem->size);
+@@ -546,7 +552,6 @@ int do_server(struct memory_buffer *mem)
+ goto cleanup;
+ }
+
+- i++;
+ for (cm = CMSG_FIRSTHDR(&msg); cm; cm = CMSG_NXTHDR(&msg, cm)) {
+ if (cm->cmsg_level != SOL_SOCKET ||
+ (cm->cmsg_type != SCM_DEVMEM_DMABUF &&
+@@ -631,10 +636,8 @@ int do_server(struct memory_buffer *mem)
+
+ void run_devmem_tests(void)
+ {
+- struct netdev_queue_id *queues;
+ struct memory_buffer *mem;
+ struct ynl_sock *ys;
+- size_t i = 0;
+
+ mem = provider->alloc(getpagesize() * NUM_PAGES);
+
+@@ -642,38 +645,24 @@ void run_devmem_tests(void)
+ if (configure_rss())
+ error(1, 0, "rss error\n");
+
+- queues = calloc(num_queues, sizeof(*queues));
+-
+ if (configure_headersplit(1))
+ error(1, 0, "Failed to configure header split\n");
+
+- if (!bind_rx_queue(ifindex, mem->fd, queues, num_queues, &ys))
++ if (!bind_rx_queue(ifindex, mem->fd,
++ calloc(num_queues, sizeof(struct netdev_queue_id)),
++ num_queues, &ys))
+ error(1, 0, "Binding empty queues array should have failed\n");
+
+- for (i = 0; i < num_queues; i++) {
+- queues[i]._present.type = 1;
+- queues[i]._present.id = 1;
+- queues[i].type = NETDEV_QUEUE_TYPE_RX;
+- queues[i].id = start_queue + i;
+- }
+-
+ if (configure_headersplit(0))
+ error(1, 0, "Failed to configure header split\n");
+
+- if (!bind_rx_queue(ifindex, mem->fd, queues, num_queues, &ys))
++ if (!bind_rx_queue(ifindex, mem->fd, create_queues(), num_queues, &ys))
+ error(1, 0, "Configure dmabuf with header split off should have failed\n");
+
+ if (configure_headersplit(1))
+ error(1, 0, "Failed to configure header split\n");
+
+- for (i = 0; i < num_queues; i++) {
+- queues[i]._present.type = 1;
+- queues[i]._present.id = 1;
+- queues[i].type = NETDEV_QUEUE_TYPE_RX;
+- queues[i].id = start_queue + i;
+- }
+-
+- if (bind_rx_queue(ifindex, mem->fd, queues, num_queues, &ys))
++ if (bind_rx_queue(ifindex, mem->fd, create_queues(), num_queues, &ys))
+ error(1, 0, "Failed to bind\n");
+
+ /* Deactivating a bound queue should not be legal */
+diff --git a/tools/testing/vsock/vsock_test.c b/tools/testing/vsock/vsock_test.c
+index d0f6d253ac72d0..613551132a9663 100644
+--- a/tools/testing/vsock/vsock_test.c
++++ b/tools/testing/vsock/vsock_test.c
+@@ -1264,21 +1264,25 @@ static void test_unsent_bytes_client(const struct test_opts *opts, int type)
+ send_buf(fd, buf, sizeof(buf), 0, sizeof(buf));
+ control_expectln("RECEIVED");
+
+- ret = ioctl(fd, SIOCOUTQ, &sock_bytes_unsent);
+- if (ret < 0) {
+- if (errno == EOPNOTSUPP) {
+- fprintf(stderr, "Test skipped, SIOCOUTQ not supported.\n");
+- } else {
++ /* SIOCOUTQ isn't guaranteed to instantly track sent data. Even though
++ * the "RECEIVED" message means that the other side has received the
++ * data, there can be a delay in our kernel before updating the "unsent
++ * bytes" counter. Repeat SIOCOUTQ until it returns 0.
++ */
++ timeout_begin(TIMEOUT);
++ do {
++ ret = ioctl(fd, SIOCOUTQ, &sock_bytes_unsent);
++ if (ret < 0) {
++ if (errno == EOPNOTSUPP) {
++ fprintf(stderr, "Test skipped, SIOCOUTQ not supported.\n");
++ break;
++ }
+ perror("ioctl");
+ exit(EXIT_FAILURE);
+ }
+- } else if (ret == 0 && sock_bytes_unsent != 0) {
+- fprintf(stderr,
+- "Unexpected 'SIOCOUTQ' value, expected 0, got %i\n",
+- sock_bytes_unsent);
+- exit(EXIT_FAILURE);
+- }
+-
++ timeout_check("SIOCOUTQ");
++ } while (sock_bytes_unsent != 0);
++ timeout_end();
+ close(fd);
+ }
+
* [gentoo-commits] proj/linux-patches:6.14 commit in: /
@ 2025-05-22 13:50 Mike Pagano
0 siblings, 0 replies; 21+ messages in thread
From: Mike Pagano @ 2025-05-22 13:50 UTC (permalink / raw
To: gentoo-commits
commit: ee6b559c585bc258ddc2c55ed62a526a2a1cbb47
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu May 22 13:50:15 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu May 22 13:50:15 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=ee6b559c
Remove redundant patch
Removed:
1900_eventpoll-Prevent-hang-in-epoll-wait.patch
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 --
1900_eventpoll-Prevent-hang-in-epoll-wait.patch | 51 -------------------------
2 files changed, 55 deletions(-)
diff --git a/0000_README b/0000_README
index aa5c3afa..e3e2e7cb 100644
--- a/0000_README
+++ b/0000_README
@@ -90,10 +90,6 @@ Patch: 1740_x86-insn-decoder-test-allow-longer-symbol-names.patch
From: https://gitlab.com/cki-project/kernel-ark/-/commit/8d4a52c3921d278f27241fc0c6949d8fdc13a7f5
Desc: x86/insn_decoder_test: allow longer symbol-names
-Patch: 1900_eventpoll-Prevent-hang-in-epoll-wait.patch
-From: https://lore.kernel.org/linux-fsdevel/20250429153419.94723-1-jdamato@fastly.com/T/#u
-Desc: eventpoll: Prevent hang in epoll_wait
-
Patch: 2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
From: https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
Desc: Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
diff --git a/1900_eventpoll-Prevent-hang-in-epoll-wait.patch b/1900_eventpoll-Prevent-hang-in-epoll-wait.patch
deleted file mode 100644
index 7f1e543a..00000000
--- a/1900_eventpoll-Prevent-hang-in-epoll-wait.patch
+++ /dev/null
@@ -1,51 +0,0 @@
-From git@z Thu Jan 1 00:00:00 1970
-Subject: [PATCH] eventpoll: Prevent hang in epoll_wait
-From: Joe Damato <jdamato@fastly.com>
-Date: Tue, 29 Apr 2025 15:34:19 +0000
-Message-Id: <20250429153419.94723-1-jdamato@fastly.com>
-MIME-Version: 1.0
-Content-Type: text/plain; charset="utf-8"
-Content-Transfer-Encoding: 7bit
-
-In commit 0a65bc27bd64 ("eventpoll: Set epoll timeout if it's in the
-future"), a bug was introduced causing the loop in ep_poll to hang under
-certain circumstances.
-
-When the timeout is non-NULL and ep_schedule_timeout returns false, the
-flag timed_out was not set to true. This causes a hang.
-
-Adjust the logic and set timed_out, if needed, fixing the original code.
-
-Reported-by: Christian Brauner <brauner@kernel.org>
-Closes: https://lore.kernel.org/linux-fsdevel/20250426-haben-redeverbot-0b58878ac722@brauner/
-Reported-by: Mike Pagano <mpagano@gentoo.org>
-Closes: https://bugs.gentoo.org/954806
-Reported-by: Carlos Llamas <cmllamas@google.com>
-Closes: https://lore.kernel.org/linux-fsdevel/aBAB_4gQ6O_haAjp@google.com/
-Fixes: 0a65bc27bd64 ("eventpoll: Set epoll timeout if it's in the future")
-Tested-by: Carlos Llamas <cmllamas@google.com>
-Signed-off-by: Joe Damato <jdamato@fastly.com>
----
- fs/eventpoll.c | 4 +++-
- 1 file changed, 3 insertions(+), 1 deletion(-)
-
-diff --git a/fs/eventpoll.c b/fs/eventpoll.c
-index 4bc264b854c4..1a5d1147f082 100644
---- a/fs/eventpoll.c
-+++ b/fs/eventpoll.c
-@@ -2111,7 +2111,9 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
-
- write_unlock_irq(&ep->lock);
-
-- if (!eavail && ep_schedule_timeout(to))
-+ if (!ep_schedule_timeout(to))
-+ timed_out = 1;
-+ else if (!eavail)
- timed_out = !schedule_hrtimeout_range(to, slack,
- HRTIMER_MODE_ABS);
- __set_current_state(TASK_RUNNING);
-
-base-commit: f520bed25d17bb31c2d2d72b0a785b593a4e3179
---
-2.43.0
-
* [gentoo-commits] proj/linux-patches:6.14 commit in: /
@ 2025-05-28 14:02 Mike Pagano
0 siblings, 0 replies; 21+ messages in thread
From: Mike Pagano @ 2025-05-28 14:02 UTC (permalink / raw
To: gentoo-commits
commit: 268f211d274fd5f502d17d16f0e50e12a4ded631
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed May 28 14:02:20 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed May 28 14:02:20 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=268f211d
ftrace: x86: Fix a compile error about get_kernel_nofault()
Bug: https://bugs.gentoo.org/956059
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
...ce-x86-Fix-compile-err-get_kernel_nofault.patch | 119 +++++++++++++++++++++
2 files changed, 123 insertions(+)
diff --git a/0000_README b/0000_README
index e3e2e7cb..459563fb 100644
--- a/0000_README
+++ b/0000_README
@@ -82,6 +82,10 @@ Patch: 1700_sparc-address-warray-bound-warnings.patch
From: https://github.com/KSPP/linux/issues/109
Desc: Address -Warray-bounds warnings
+Patch: 1710_ftrace-x86-Fix-compile-err-get_kernel_nofault.patch
+From: https://lore.kernel.org/all/173881156244.211648.1242168038709680511.stgit@devnote2/
+Desc: ftrace: x86: Fix a compile error about get_kernel_nofault()
+
Patch: 1730_parisc-Disable-prctl.patch
From: https://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux.git
Desc: prctl: Temporarily disable prctl(PR_SET_MDWE) on parisc
diff --git a/1710_ftrace-x86-Fix-compile-err-get_kernel_nofault.patch b/1710_ftrace-x86-Fix-compile-err-get_kernel_nofault.patch
new file mode 100644
index 00000000..ad8cd545
--- /dev/null
+++ b/1710_ftrace-x86-Fix-compile-err-get_kernel_nofault.patch
@@ -0,0 +1,119 @@
+From git@z Thu Jan 1 00:00:00 1970
+Subject: [PATCH] ftrace: x86: Fix a compile error about
+ get_kernel_nofault()
+From: "Masami Hiramatsu (Google)" <mhiramat@kernel.org>
+Date: Thu, 06 Feb 2025 12:12:42 +0900
+Message-Id: <173881156244.211648.1242168038709680511.stgit@devnote2>
+MIME-Version: 1.0
+Content-Type: text/plain; charset="utf-8"
+Content-Transfer-Encoding: 7bit
+
+Fix a compile error about get_kernel_nofault() which is defined in the
+linux/uaccess.h. Since asm/ftrace.h is widely used, including
+linux/uaccess.h in asm/ftrace.h caused another error. Thus this
+moves arch_ftrace_get_symaddr() into arch/x86/kernel/ftrace.c.
+
+The original errors look like:
+
+In file included from ./arch/x86/include/asm/asm-prototypes.h:2,
+ from <stdin>:3:
+./arch/x86/include/asm/ftrace.h: In function 'arch_ftrace_get_symaddr':
+./arch/x86/include/asm/ftrace.h:46:21: error: implicit declaration of function 'get_kernel_nofault' [-Werror=implicit-function-declaration]
+ 46 | if (get_kernel_nofault(instr, (u32 *)(fentry_ip - ENDBR_INSN_SIZE)))
+ | ^~~~~~~~~~~~~~~~~~
+
+This also makes ftrace_get_symaddr() available only when
+CONFIG_HAVE_FENTRY=y on x86.
+
+Reported-by: Gabriel de Perthuis <g2p.code@gmail.com>
+Closes: https://lore.kernel.org/all/a87f98bf-45b1-4ef5-aa77-02f7e61203f4@gmail.com/
+Reported-by: Haiyue Wang <haiyuewa@163.com>
+Closes: https://lore.kernel.org/all/20250205180116.88644-1-haiyuewa@163.com/
+Fixes: 2bc56fdae1ba ("ftrace: Add ftrace_get_symaddr to convert fentry_ip to symaddr")
+Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
+Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Acked-by: Alexei Starovoitov <ast@kernel.org>
+Acked-by: Andrii Nakryiko <andrii@kernel.org>
+---
+ arch/x86/include/asm/ftrace.h | 23 ++++-------------------
+ arch/x86/kernel/ftrace.c | 26 +++++++++++++++++++++++++-
+ 2 files changed, 29 insertions(+), 20 deletions(-)
+
+diff --git a/arch/x86/include/asm/ftrace.h b/arch/x86/include/asm/ftrace.h
+index f9cb4d07df58..1ed08f2de366 100644
+--- a/arch/x86/include/asm/ftrace.h
++++ b/arch/x86/include/asm/ftrace.h
+@@ -34,26 +34,11 @@ static inline unsigned long ftrace_call_adjust(unsigned long addr)
+ return addr;
+ }
+
+-static inline unsigned long arch_ftrace_get_symaddr(unsigned long fentry_ip)
+-{
+-#ifdef CONFIG_X86_KERNEL_IBT
+- u32 instr;
+-
+- /* We want to be extra safe in case entry ip is on the page edge,
+- * but otherwise we need to avoid get_kernel_nofault()'s overhead.
+- */
+- if ((fentry_ip & ~PAGE_MASK) < ENDBR_INSN_SIZE) {
+- if (get_kernel_nofault(instr, (u32 *)(fentry_ip - ENDBR_INSN_SIZE)))
+- return fentry_ip;
+- } else {
+- instr = *(u32 *)(fentry_ip - ENDBR_INSN_SIZE);
+- }
+- if (is_endbr(instr))
+- fentry_ip -= ENDBR_INSN_SIZE;
+-#endif
+- return fentry_ip;
+-}
++/* This does not support mcount. */
++#ifdef CONFIG_HAVE_FENTRY
++unsigned long arch_ftrace_get_symaddr(unsigned long fentry_ip);
+ #define ftrace_get_symaddr(fentry_ip) arch_ftrace_get_symaddr(fentry_ip)
++#endif
+
+ #ifdef CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS
+
+diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
+index 166bc0ea3bdf..7250118005fc 100644
+--- a/arch/x86/kernel/ftrace.c
++++ b/arch/x86/kernel/ftrace.c
+@@ -29,11 +29,35 @@
+
+ #include <trace/syscall.h>
+
+-#include <asm/kprobes.h>
+ #include <asm/ftrace.h>
++#include <asm/ibt.h>
++#include <asm/kprobes.h>
+ #include <asm/nops.h>
+ #include <asm/text-patching.h>
+
++#ifdef CONFIG_HAVE_FENTRY
++/* Convert fentry address to the symbol address. */
++unsigned long arch_ftrace_get_symaddr(unsigned long fentry_ip)
++{
++#ifdef CONFIG_X86_KERNEL_IBT
++ u32 instr;
++
++ /* We want to be extra safe in case entry ip is on the page edge,
++ * but otherwise we need to avoid get_kernel_nofault()'s overhead.
++ */
++ if ((fentry_ip & ~PAGE_MASK) < ENDBR_INSN_SIZE) {
++ if (get_kernel_nofault(instr, (u32 *)(fentry_ip - ENDBR_INSN_SIZE)))
++ return fentry_ip;
++ } else {
++ instr = *(u32 *)(fentry_ip - ENDBR_INSN_SIZE);
++ }
++ if (is_endbr(instr))
++ fentry_ip -= ENDBR_INSN_SIZE;
++#endif
++ return fentry_ip;
++}
++#endif
++
+ #ifdef CONFIG_DYNAMIC_FTRACE
+
+ static int ftrace_poke_late = 0;
+
^ permalink raw reply related [flat|nested] 21+ messages in thread
* [gentoo-commits] proj/linux-patches:6.14 commit in: /
@ 2025-05-29 16:34 Mike Pagano
0 siblings, 0 replies; 21+ messages in thread
From: Mike Pagano @ 2025-05-29 16:34 UTC (permalink / raw
To: gentoo-commits
commit: ad207ef9b045d5dde1c5717b000ece8bc2b2f299
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu May 29 16:33:54 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu May 29 16:33:54 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=ad207ef9
Linux patch 6.14.9
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1008_linux-6.14.9.patch | 44722 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 44726 insertions(+)
diff --git a/0000_README b/0000_README
index 459563fb..a644f9dd 100644
--- a/0000_README
+++ b/0000_README
@@ -74,6 +74,10 @@ Patch: 1007_linux-6.14.8.patch
From: https://www.kernel.org
Desc: Linux 6.14.8
+Patch: 1008_linux-6.14.9.patch
+From: https://www.kernel.org
+Desc: Linux 6.14.9
+
Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
Desc: Enable link security restrictions by default.
diff --git a/1008_linux-6.14.9.patch b/1008_linux-6.14.9.patch
new file mode 100644
index 00000000..9fb91b16
--- /dev/null
+++ b/1008_linux-6.14.9.patch
@@ -0,0 +1,44722 @@
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index f9e11cebc598cb..a8e98f75b610a0 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -6602,6 +6602,8 @@
+
+ Selecting 'on' will also enable the mitigation
+ against user space to user space task attacks.
++ Selecting specific mitigation does not force enable
++ user mitigations.
+
+ Selecting 'off' will disable both the kernel and
+ the user space protections.
+diff --git a/Documentation/driver-api/pps.rst b/Documentation/driver-api/pps.rst
+index 71ad04c82d6cf5..04f1b88778fc55 100644
+--- a/Documentation/driver-api/pps.rst
++++ b/Documentation/driver-api/pps.rst
+@@ -206,8 +206,7 @@ To do so the class pps-gen has been added. PPS generators can be
+ registered in the kernel by defining a struct pps_gen_source_info as
+ follows::
+
+- static struct pps_gen_source_info pps_gen_dummy_info = {
+- .name = "dummy",
++ static const struct pps_gen_source_info pps_gen_dummy_info = {
+ .use_system_clock = true,
+ .get_time = pps_gen_dummy_get_time,
+ .enable = pps_gen_dummy_enable,
+diff --git a/Documentation/driver-api/serial/driver.rst b/Documentation/driver-api/serial/driver.rst
+index 84b43061c11be2..60434f2b028637 100644
+--- a/Documentation/driver-api/serial/driver.rst
++++ b/Documentation/driver-api/serial/driver.rst
+@@ -103,4 +103,4 @@ Some helpers are provided in order to set/get modem control lines via GPIO.
+ .. kernel-doc:: drivers/tty/serial/serial_mctrl_gpio.c
+ :identifiers: mctrl_gpio_init mctrl_gpio_free mctrl_gpio_to_gpiod
+ mctrl_gpio_set mctrl_gpio_get mctrl_gpio_enable_ms
+- mctrl_gpio_disable_ms
++ mctrl_gpio_disable_ms_sync mctrl_gpio_disable_ms_no_sync
+diff --git a/Documentation/hwmon/dell-smm-hwmon.rst b/Documentation/hwmon/dell-smm-hwmon.rst
+index 74905675d71f99..5a4edb6565cf95 100644
+--- a/Documentation/hwmon/dell-smm-hwmon.rst
++++ b/Documentation/hwmon/dell-smm-hwmon.rst
+@@ -32,12 +32,12 @@ Temperature sensors and fans can be queried and set via the standard
+ =============================== ======= =======================================
+ Name Perm Description
+ =============================== ======= =======================================
+-fan[1-3]_input RO Fan speed in RPM.
+-fan[1-3]_label RO Fan label.
+-fan[1-3]_min RO Minimal Fan speed in RPM
+-fan[1-3]_max RO Maximal Fan speed in RPM
+-fan[1-3]_target RO Expected Fan speed in RPM
+-pwm[1-3] RW Control the fan PWM duty-cycle.
++fan[1-4]_input RO Fan speed in RPM.
++fan[1-4]_label RO Fan label.
++fan[1-4]_min RO Minimal Fan speed in RPM
++fan[1-4]_max RO Maximal Fan speed in RPM
++fan[1-4]_target RO Expected Fan speed in RPM
++pwm[1-4] RW Control the fan PWM duty-cycle.
+ pwm1_enable WO Enable or disable automatic BIOS fan
+ control (not supported on all laptops,
+ see below for details).
+@@ -93,7 +93,7 @@ Again, when you find new codes, we'd be happy to have your patches!
+ ---------------------------
+
+ The driver also exports the fans as thermal cooling devices with
+-``type`` set to ``dell-smm-fan[1-3]``. This allows for easy fan control
++``type`` set to ``dell-smm-fan[1-4]``. This allows for easy fan control
+ using one of the thermal governors.
+
+ Module parameters
+diff --git a/Documentation/networking/net_cachelines/snmp.rst b/Documentation/networking/net_cachelines/snmp.rst
+index 90ca2d92547d44..bc96efc92cf5b8 100644
+--- a/Documentation/networking/net_cachelines/snmp.rst
++++ b/Documentation/networking/net_cachelines/snmp.rst
+@@ -36,6 +36,7 @@ unsigned_long LINUX_MIB_TIMEWAITRECYCLED
+ unsigned_long LINUX_MIB_TIMEWAITKILLED
+ unsigned_long LINUX_MIB_PAWSACTIVEREJECTED
+ unsigned_long LINUX_MIB_PAWSESTABREJECTED
++unsigned_long LINUX_MIB_TSECR_REJECTED
+ unsigned_long LINUX_MIB_DELAYEDACKLOST
+ unsigned_long LINUX_MIB_LISTENOVERFLOWS
+ unsigned_long LINUX_MIB_LISTENDROPS
+diff --git a/Makefile b/Makefile
+index 70011eb4745f1a..884279eb952d7a 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 14
+-SUBLEVEL = 8
++SUBLEVEL = 9
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+@@ -1049,10 +1049,6 @@ NOSTDINC_FLAGS += -nostdinc
+ # perform bounds checking.
+ KBUILD_CFLAGS += $(call cc-option, -fstrict-flex-arrays=3)
+
+-#Currently, disable -Wstringop-overflow for GCC 11, globally.
+-KBUILD_CFLAGS-$(CONFIG_CC_NO_STRINGOP_OVERFLOW) += $(call cc-option, -Wno-stringop-overflow)
+-KBUILD_CFLAGS-$(CONFIG_CC_STRINGOP_OVERFLOW) += $(call cc-option, -Wstringop-overflow)
+-
+ # disable invalid "can't wrap" optimizations for signed / pointers
+ KBUILD_CFLAGS += -fno-strict-overflow
+
+diff --git a/arch/arm/boot/dts/nvidia/tegra114.dtsi b/arch/arm/boot/dts/nvidia/tegra114.dtsi
+index 86f14e2fd29f3a..6c057b50695140 100644
+--- a/arch/arm/boot/dts/nvidia/tegra114.dtsi
++++ b/arch/arm/boot/dts/nvidia/tegra114.dtsi
+@@ -139,7 +139,7 @@ dsib: dsi@54400000 {
+ reg = <0x54400000 0x00040000>;
+ clocks = <&tegra_car TEGRA114_CLK_DSIB>,
+ <&tegra_car TEGRA114_CLK_DSIBLP>,
+- <&tegra_car TEGRA114_CLK_PLL_D2_OUT0>;
++ <&tegra_car TEGRA114_CLK_PLL_D_OUT0>;
+ clock-names = "dsi", "lp", "parent";
+ resets = <&tegra_car 82>;
+ reset-names = "dsi";
+diff --git a/arch/arm/mach-at91/pm.c b/arch/arm/mach-at91/pm.c
+index 05a1547642b60f..6c3e6aa22606f5 100644
+--- a/arch/arm/mach-at91/pm.c
++++ b/arch/arm/mach-at91/pm.c
+@@ -545,11 +545,12 @@ extern u32 at91_pm_suspend_in_sram_sz;
+
+ static int at91_suspend_finish(unsigned long val)
+ {
+- unsigned char modified_gray_code[] = {
+- 0x00, 0x01, 0x02, 0x03, 0x06, 0x07, 0x04, 0x05, 0x0c, 0x0d,
+- 0x0e, 0x0f, 0x0a, 0x0b, 0x08, 0x09, 0x18, 0x19, 0x1a, 0x1b,
+- 0x1e, 0x1f, 0x1c, 0x1d, 0x14, 0x15, 0x16, 0x17, 0x12, 0x13,
+- 0x10, 0x11,
++ /* SYNOPSYS workaround to fix a bug in the calibration logic */
++ unsigned char modified_fix_code[] = {
++ 0x00, 0x01, 0x01, 0x06, 0x07, 0x0c, 0x06, 0x07, 0x0b, 0x18,
++ 0x0a, 0x0b, 0x0c, 0x0d, 0x0d, 0x0a, 0x13, 0x13, 0x12, 0x13,
++ 0x14, 0x15, 0x15, 0x12, 0x18, 0x19, 0x19, 0x1e, 0x1f, 0x14,
++ 0x1e, 0x1f,
+ };
+ unsigned int tmp, index;
+ int i;
+@@ -560,25 +561,25 @@ static int at91_suspend_finish(unsigned long val)
+ * restore the ZQ0SR0 with the value saved here. But the
+ * calibration is buggy and restoring some values from ZQ0SR0
+ * is forbidden and risky thus we need to provide processed
+- * values for these (modified gray code values).
++ * values for these.
+ */
+ tmp = readl(soc_pm.data.ramc_phy + DDR3PHY_ZQ0SR0);
+
+ /* Store pull-down output impedance select. */
+ index = (tmp >> DDR3PHY_ZQ0SR0_PDO_OFF) & 0x1f;
+- soc_pm.bu->ddr_phy_calibration[0] = modified_gray_code[index];
++ soc_pm.bu->ddr_phy_calibration[0] = modified_fix_code[index] << DDR3PHY_ZQ0SR0_PDO_OFF;
+
+ /* Store pull-up output impedance select. */
+ index = (tmp >> DDR3PHY_ZQ0SR0_PUO_OFF) & 0x1f;
+- soc_pm.bu->ddr_phy_calibration[0] |= modified_gray_code[index];
++ soc_pm.bu->ddr_phy_calibration[0] |= modified_fix_code[index] << DDR3PHY_ZQ0SR0_PUO_OFF;
+
+ /* Store pull-down on-die termination impedance select. */
+ index = (tmp >> DDR3PHY_ZQ0SR0_PDODT_OFF) & 0x1f;
+- soc_pm.bu->ddr_phy_calibration[0] |= modified_gray_code[index];
++ soc_pm.bu->ddr_phy_calibration[0] |= modified_fix_code[index] << DDR3PHY_ZQ0SR0_PDODT_OFF;
+
+ /* Store pull-up on-die termination impedance select. */
+ index = (tmp >> DDR3PHY_ZQ0SRO_PUODT_OFF) & 0x1f;
+- soc_pm.bu->ddr_phy_calibration[0] |= modified_gray_code[index];
++ soc_pm.bu->ddr_phy_calibration[0] |= modified_fix_code[index] << DDR3PHY_ZQ0SRO_PUODT_OFF;
+
+ /*
+ * The 1st 8 words of memory might get corrupted in the process
+diff --git a/arch/arm64/boot/dts/allwinner/sun50i-h6-beelink-gs1.dts b/arch/arm64/boot/dts/allwinner/sun50i-h6-beelink-gs1.dts
+index 13a0e63afeaf3d..2c64d834a2c4f7 100644
+--- a/arch/arm64/boot/dts/allwinner/sun50i-h6-beelink-gs1.dts
++++ b/arch/arm64/boot/dts/allwinner/sun50i-h6-beelink-gs1.dts
+@@ -152,28 +152,12 @@ &pio {
+ vcc-pg-supply = <®_aldo1>;
+ };
+
+-&r_ir {
+- linux,rc-map-name = "rc-beelink-gs1";
+- status = "okay";
+-};
+-
+-&r_pio {
+- /*
+- * FIXME: We can't add that supply for now since it would
+- * create a circular dependency between pinctrl, the regulator
+- * and the RSB Bus.
+- *
+- * vcc-pl-supply = <®_aldo1>;
+- */
+- vcc-pm-supply = <®_aldo1>;
+-};
+-
+-&r_rsb {
++&r_i2c {
+ status = "okay";
+
+- axp805: pmic@745 {
++ axp805: pmic@36 {
+ compatible = "x-powers,axp805", "x-powers,axp806";
+- reg = <0x745>;
++ reg = <0x36>;
+ interrupt-parent = <&r_intc>;
+ interrupts = <GIC_SPI 96 IRQ_TYPE_LEVEL_LOW>;
+ interrupt-controller;
+@@ -291,6 +275,22 @@ sw {
+ };
+ };
+
++&r_ir {
++ linux,rc-map-name = "rc-beelink-gs1";
++ status = "okay";
++};
++
++&r_pio {
++ /*
++ * PL0 and PL1 are used for PMIC I2C
++ * don't enable the pl-supply else
++ * it will fail at boot
++ *
++ * vcc-pl-supply = <®_aldo1>;
++ */
++ vcc-pm-supply = <®_aldo1>;
++};
++
+ &spdif {
+ pinctrl-names = "default";
+ pinctrl-0 = <&spdif_tx_pin>;
+diff --git a/arch/arm64/boot/dts/allwinner/sun50i-h6-orangepi-3.dts b/arch/arm64/boot/dts/allwinner/sun50i-h6-orangepi-3.dts
+index ab87c3447cd782..f005072c68a167 100644
+--- a/arch/arm64/boot/dts/allwinner/sun50i-h6-orangepi-3.dts
++++ b/arch/arm64/boot/dts/allwinner/sun50i-h6-orangepi-3.dts
+@@ -176,16 +176,12 @@ &pio {
+ vcc-pg-supply = <®_vcc_wifi_io>;
+ };
+
+-&r_ir {
+- status = "okay";
+-};
+-
+-&r_rsb {
++&r_i2c {
+ status = "okay";
+
+- axp805: pmic@745 {
++ axp805: pmic@36 {
+ compatible = "x-powers,axp805", "x-powers,axp806";
+- reg = <0x745>;
++ reg = <0x36>;
+ interrupt-parent = <&r_intc>;
+ interrupts = <GIC_SPI 96 IRQ_TYPE_LEVEL_LOW>;
+ interrupt-controller;
+@@ -296,6 +292,10 @@ sw {
+ };
+ };
+
++&r_ir {
++ status = "okay";
++};
++
+ &rtc {
+ clocks = <&ext_osc32k>;
+ };
+diff --git a/arch/arm64/boot/dts/allwinner/sun50i-h6-orangepi.dtsi b/arch/arm64/boot/dts/allwinner/sun50i-h6-orangepi.dtsi
+index d05dc5d6e6b9f7..e34dbb9920216d 100644
+--- a/arch/arm64/boot/dts/allwinner/sun50i-h6-orangepi.dtsi
++++ b/arch/arm64/boot/dts/allwinner/sun50i-h6-orangepi.dtsi
+@@ -113,20 +113,12 @@ &pio {
+ vcc-pg-supply = <®_aldo1>;
+ };
+
+-&r_ir {
+- status = "okay";
+-};
+-
+-&r_pio {
+- vcc-pm-supply = <®_bldo3>;
+-};
+-
+-&r_rsb {
++&r_i2c {
+ status = "okay";
+
+- axp805: pmic@745 {
++ axp805: pmic@36 {
+ compatible = "x-powers,axp805", "x-powers,axp806";
+- reg = <0x745>;
++ reg = <0x36>;
+ interrupt-parent = <&r_intc>;
+ interrupts = <GIC_SPI 96 IRQ_TYPE_LEVEL_LOW>;
+ interrupt-controller;
+@@ -241,6 +233,14 @@ sw {
+ };
+ };
+
++&r_ir {
++ status = "okay";
++};
++
++&r_pio {
++ vcc-pm-supply = <®_bldo3>;
++};
++
+ &rtc {
+ clocks = <&ext_osc32k>;
+ };
+diff --git a/arch/arm64/boot/dts/marvell/armada-3720-uDPU.dtsi b/arch/arm64/boot/dts/marvell/armada-3720-uDPU.dtsi
+index 3a9b6907185d03..24282084570787 100644
+--- a/arch/arm64/boot/dts/marvell/armada-3720-uDPU.dtsi
++++ b/arch/arm64/boot/dts/marvell/armada-3720-uDPU.dtsi
+@@ -26,6 +26,8 @@ memory@0 {
+
+ leds {
+ compatible = "gpio-leds";
++ pinctrl-names = "default";
++ pinctrl-0 = <&spi_quad_pins>;
+
+ led-power1 {
+ label = "udpu:green:power";
+@@ -82,8 +84,6 @@ &sdhci0 {
+
+ &spi0 {
+ status = "okay";
+- pinctrl-names = "default";
+- pinctrl-0 = <&spi_quad_pins>;
+
+ flash@0 {
+ compatible = "jedec,spi-nor";
+@@ -108,6 +108,10 @@ partition@180000 {
+ };
+ };
+
++&spi_quad_pins {
++ function = "gpio";
++};
++
+ &pinctrl_nb {
+ i2c2_recovery_pins: i2c2-recovery-pins {
+ groups = "i2c2";
+diff --git a/arch/arm64/boot/dts/nvidia/tegra210-p2597.dtsi b/arch/arm64/boot/dts/nvidia/tegra210-p2597.dtsi
+index 63b94a04308e86..38d49d612c0c19 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra210-p2597.dtsi
++++ b/arch/arm64/boot/dts/nvidia/tegra210-p2597.dtsi
+@@ -1686,7 +1686,7 @@ vdd_1v8_dis: regulator-vdd-1v8-dis {
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ regulator-always-on;
+- gpio = <&exp1 14 GPIO_ACTIVE_HIGH>;
++ gpio = <&exp1 9 GPIO_ACTIVE_HIGH>;
+ enable-active-high;
+ vin-supply = <&vdd_1v8>;
+ };
+diff --git a/arch/arm64/boot/dts/nvidia/tegra234-p3740-0002+p3701-0008.dts b/arch/arm64/boot/dts/nvidia/tegra234-p3740-0002+p3701-0008.dts
+index 36e88805374606..9ce55b4d2de892 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra234-p3740-0002+p3701-0008.dts
++++ b/arch/arm64/boot/dts/nvidia/tegra234-p3740-0002+p3701-0008.dts
+@@ -302,6 +302,16 @@ pcie@14160000 {
+ };
+
+ pcie@141a0000 {
++ reg = <0x00 0x141a0000 0x0 0x00020000 /* appl registers (128K) */
++ 0x00 0x3a000000 0x0 0x00040000 /* configuration space (256K) */
++ 0x00 0x3a040000 0x0 0x00040000 /* iATU_DMA reg space (256K) */
++ 0x00 0x3a080000 0x0 0x00040000 /* DBI reg space (256K) */
++ 0x2e 0x20000000 0x0 0x10000000>; /* ECAM (256MB) */
++
++ ranges = <0x81000000 0x00 0x3a100000 0x00 0x3a100000 0x0 0x00100000 /* downstream I/O (1MB) */
++ 0x82000000 0x00 0x40000000 0x2e 0x30000000 0x0 0x08000000 /* non-prefetchable memory (128MB) */
++ 0xc3000000 0x28 0x00000000 0x28 0x00000000 0x6 0x20000000>; /* prefetchable memory (25088MB) */
++
+ status = "okay";
+ vddio-pex-ctl-supply = <&vdd_1v8_ls>;
+ phys = <&p2u_nvhs_0>, <&p2u_nvhs_1>, <&p2u_nvhs_2>,
+diff --git a/arch/arm64/boot/dts/xilinx/zynqmp-clk-ccf.dtsi b/arch/arm64/boot/dts/xilinx/zynqmp-clk-ccf.dtsi
+index 60d1b1acf9a030..385fed8a852afd 100644
+--- a/arch/arm64/boot/dts/xilinx/zynqmp-clk-ccf.dtsi
++++ b/arch/arm64/boot/dts/xilinx/zynqmp-clk-ccf.dtsi
+@@ -10,39 +10,44 @@
+
+ #include <dt-bindings/clock/xlnx-zynqmp-clk.h>
+ / {
+- pss_ref_clk: pss_ref_clk {
++ pss_ref_clk: pss-ref-clk {
+ bootph-all;
+ compatible = "fixed-clock";
+ #clock-cells = <0>;
+ clock-frequency = <33333333>;
++ clock-output-names = "pss_ref_clk";
+ };
+
+- video_clk: video_clk {
++ video_clk: video-clk {
+ bootph-all;
+ compatible = "fixed-clock";
+ #clock-cells = <0>;
+ clock-frequency = <27000000>;
++ clock-output-names = "video_clk";
+ };
+
+- pss_alt_ref_clk: pss_alt_ref_clk {
++ pss_alt_ref_clk: pss-alt-ref-clk {
+ bootph-all;
+ compatible = "fixed-clock";
+ #clock-cells = <0>;
+ clock-frequency = <0>;
++ clock-output-names = "pss_alt_ref_clk";
+ };
+
+- gt_crx_ref_clk: gt_crx_ref_clk {
++ gt_crx_ref_clk: gt-crx-ref-clk {
+ bootph-all;
+ compatible = "fixed-clock";
+ #clock-cells = <0>;
+ clock-frequency = <108000000>;
++ clock-output-names = "gt_crx_ref_clk";
+ };
+
+- aux_ref_clk: aux_ref_clk {
++ aux_ref_clk: aux-ref-clk {
+ bootph-all;
+ compatible = "fixed-clock";
+ #clock-cells = <0>;
+ clock-frequency = <27000000>;
++ clock-output-names = "aux_ref_clk";
+ };
+ };
+
+diff --git a/arch/arm64/include/asm/cputype.h b/arch/arm64/include/asm/cputype.h
+index 8c6bd9da3b1ba3..3381fdc081ad2a 100644
+--- a/arch/arm64/include/asm/cputype.h
++++ b/arch/arm64/include/asm/cputype.h
+@@ -133,6 +133,7 @@
+ #define FUJITSU_CPU_PART_A64FX 0x001
+
+ #define HISI_CPU_PART_TSV110 0xD01
++#define HISI_CPU_PART_HIP09 0xD02
+
+ #define APPLE_CPU_PART_M1_ICESTORM 0x022
+ #define APPLE_CPU_PART_M1_FIRESTORM 0x023
+@@ -210,6 +211,7 @@
+ #define MIDR_NVIDIA_CARMEL MIDR_CPU_MODEL(ARM_CPU_IMP_NVIDIA, NVIDIA_CPU_PART_CARMEL)
+ #define MIDR_FUJITSU_A64FX MIDR_CPU_MODEL(ARM_CPU_IMP_FUJITSU, FUJITSU_CPU_PART_A64FX)
+ #define MIDR_HISI_TSV110 MIDR_CPU_MODEL(ARM_CPU_IMP_HISI, HISI_CPU_PART_TSV110)
++#define MIDR_HISI_HIP09 MIDR_CPU_MODEL(ARM_CPU_IMP_HISI, HISI_CPU_PART_HIP09)
+ #define MIDR_APPLE_M1_ICESTORM MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M1_ICESTORM)
+ #define MIDR_APPLE_M1_FIRESTORM MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M1_FIRESTORM)
+ #define MIDR_APPLE_M1_ICESTORM_PRO MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M1_ICESTORM_PRO)
+diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
+index 0b2a2ad1b9e83b..665f90443f9e86 100644
+--- a/arch/arm64/include/asm/pgtable.h
++++ b/arch/arm64/include/asm/pgtable.h
+@@ -548,18 +548,6 @@ static inline int pmd_protnone(pmd_t pmd)
+ #endif
+
+ #define pmd_present(pmd) pte_present(pmd_pte(pmd))
+-
+-/*
+- * THP definitions.
+- */
+-
+-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+-static inline int pmd_trans_huge(pmd_t pmd)
+-{
+- return pmd_val(pmd) && pmd_present(pmd) && !(pmd_val(pmd) & PMD_TABLE_BIT);
+-}
+-#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
+-
+ #define pmd_dirty(pmd) pte_dirty(pmd_pte(pmd))
+ #define pmd_young(pmd) pte_young(pmd_pte(pmd))
+ #define pmd_valid(pmd) pte_valid(pmd_pte(pmd))
+@@ -724,6 +712,18 @@ extern pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
+ #define pmd_leaf_size(pmd) (pmd_cont(pmd) ? CONT_PMD_SIZE : PMD_SIZE)
+ #define pte_leaf_size(pte) (pte_cont(pte) ? CONT_PTE_SIZE : PAGE_SIZE)
+
++#ifdef CONFIG_TRANSPARENT_HUGEPAGE
++static inline int pmd_trans_huge(pmd_t pmd)
++{
++ /*
++ * If pmd is present-invalid, pmd_table() won't detect it
++ * as a table, so force the valid bit for the comparison.
++ */
++ return pmd_val(pmd) && pmd_present(pmd) &&
++ !pmd_table(__pmd(pmd_val(pmd) | PTE_VALID));
++}
++#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
++
+ #if defined(CONFIG_ARM64_64K_PAGES) || CONFIG_PGTABLE_LEVELS < 3
+ static inline bool pud_sect(pud_t pud) { return false; }
+ static inline bool pud_table(pud_t pud) { return true; }
+@@ -805,7 +805,8 @@ static inline unsigned long pmd_page_vaddr(pmd_t pmd)
+ pr_err("%s:%d: bad pmd %016llx.\n", __FILE__, __LINE__, pmd_val(e))
+
+ #define pud_none(pud) (!pud_val(pud))
+-#define pud_bad(pud) (!pud_table(pud))
++#define pud_bad(pud) ((pud_val(pud) & PUD_TYPE_MASK) != \
++ PUD_TYPE_TABLE)
+ #define pud_present(pud) pte_present(pud_pte(pud))
+ #ifndef __PAGETABLE_PMD_FOLDED
+ #define pud_leaf(pud) (pud_present(pud) && !pud_table(pud))
+diff --git a/arch/arm64/kernel/proton-pack.c b/arch/arm64/kernel/proton-pack.c
+index 8ef3335ecff722..31eaf15d2079a4 100644
+--- a/arch/arm64/kernel/proton-pack.c
++++ b/arch/arm64/kernel/proton-pack.c
+@@ -904,6 +904,7 @@ static u8 spectre_bhb_loop_affected(void)
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_A77),
+ MIDR_ALL_VERSIONS(MIDR_NEOVERSE_N1),
+ MIDR_ALL_VERSIONS(MIDR_QCOM_KRYO_4XX_GOLD),
++ MIDR_ALL_VERSIONS(MIDR_HISI_HIP09),
+ {},
+ };
+ static const struct midr_range spectre_bhb_k11_list[] = {
+diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
+index 3126881fe67680..970c49bb7ed47f 100644
+--- a/arch/arm64/net/bpf_jit_comp.c
++++ b/arch/arm64/net/bpf_jit_comp.c
+@@ -274,7 +274,7 @@ static inline void emit_a64_add_i(const bool is64, const int dst, const int src,
+ {
+ if (is_addsub_imm(imm)) {
+ emit(A64_ADD_I(is64, dst, src, imm), ctx);
+- } else if (is_addsub_imm(-imm)) {
++ } else if (is_addsub_imm(-(u32)imm)) {
+ emit(A64_SUB_I(is64, dst, src, -imm), ctx);
+ } else {
+ emit_a64_mov_i(is64, tmp, imm, ctx);
+@@ -1208,7 +1208,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
+ case BPF_ALU64 | BPF_SUB | BPF_K:
+ if (is_addsub_imm(imm)) {
+ emit(A64_SUB_I(is64, dst, dst, imm), ctx);
+- } else if (is_addsub_imm(-imm)) {
++ } else if (is_addsub_imm(-(u32)imm)) {
+ emit(A64_ADD_I(is64, dst, dst, -imm), ctx);
+ } else {
+ emit_a64_mov_i(is64, tmp, imm, ctx);
+@@ -1379,7 +1379,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
+ case BPF_JMP32 | BPF_JSLE | BPF_K:
+ if (is_addsub_imm(imm)) {
+ emit(A64_CMP_I(is64, dst, imm), ctx);
+- } else if (is_addsub_imm(-imm)) {
++ } else if (is_addsub_imm(-(u32)imm)) {
+ emit(A64_CMN_I(is64, dst, -imm), ctx);
+ } else {
+ emit_a64_mov_i(is64, tmp, imm, ctx);
+diff --git a/arch/loongarch/kernel/Makefile b/arch/loongarch/kernel/Makefile
+index 4853e8b04c6fbe..f9dcaa60033d95 100644
+--- a/arch/loongarch/kernel/Makefile
++++ b/arch/loongarch/kernel/Makefile
+@@ -21,10 +21,10 @@ obj-$(CONFIG_CPU_HAS_LBT) += lbt.o
+
+ obj-$(CONFIG_ARCH_STRICT_ALIGN) += unaligned.o
+
+-CFLAGS_module.o += $(call cc-option,-Wno-override-init,)
+-CFLAGS_syscall.o += $(call cc-option,-Wno-override-init,)
+-CFLAGS_traps.o += $(call cc-option,-Wno-override-init,)
+-CFLAGS_perf_event.o += $(call cc-option,-Wno-override-init,)
++CFLAGS_module.o += $(call cc-disable-warning, override-init)
++CFLAGS_syscall.o += $(call cc-disable-warning, override-init)
++CFLAGS_traps.o += $(call cc-disable-warning, override-init)
++CFLAGS_perf_event.o += $(call cc-disable-warning, override-init)
+
+ ifdef CONFIG_FUNCTION_TRACER
+ ifndef CONFIG_DYNAMIC_FTRACE
+diff --git a/arch/loongarch/kvm/Makefile b/arch/loongarch/kvm/Makefile
+index 3a01292f71cc2b..8e8f6bc87f89ce 100644
+--- a/arch/loongarch/kvm/Makefile
++++ b/arch/loongarch/kvm/Makefile
+@@ -23,4 +23,4 @@ kvm-y += intc/eiointc.o
+ kvm-y += intc/pch_pic.o
+ kvm-y += irqfd.o
+
+-CFLAGS_exit.o += $(call cc-option,-Wno-override-init,)
++CFLAGS_exit.o += $(call cc-disable-warning, override-init)
+diff --git a/arch/mips/include/asm/ftrace.h b/arch/mips/include/asm/ftrace.h
+index dc025888f6d289..b41fc104466888 100644
+--- a/arch/mips/include/asm/ftrace.h
++++ b/arch/mips/include/asm/ftrace.h
+@@ -91,4 +91,20 @@ void prepare_ftrace_return(unsigned long *parent_ra_addr, unsigned long self_ra,
+
+ #endif /* __ASSEMBLY__ */
+ #endif /* CONFIG_FUNCTION_TRACER */
++
++#ifdef CONFIG_FTRACE_SYSCALLS
++#ifndef __ASSEMBLY__
++/*
++ * Some syscall entry functions on mips start with "__sys_" (fork and clone,
++ * for instance). We should also match the sys_ variant with those.
++ */
++#define ARCH_HAS_SYSCALL_MATCH_SYM_NAME
++static inline bool arch_syscall_match_sym_name(const char *sym,
++ const char *name)
++{
++ return !strcmp(sym, name) ||
++ (!strncmp(sym, "__sys_", 6) && !strcmp(sym + 6, name + 4));
++}
++#endif /* __ASSEMBLY__ */
++#endif /* CONFIG_FTRACE_SYSCALLS */
+ #endif /* _ASM_MIPS_FTRACE_H */
+diff --git a/arch/mips/kernel/pm-cps.c b/arch/mips/kernel/pm-cps.c
+index d09ca77e624d76..9369a8dc385e26 100644
+--- a/arch/mips/kernel/pm-cps.c
++++ b/arch/mips/kernel/pm-cps.c
+@@ -57,10 +57,7 @@ static DEFINE_PER_CPU_ALIGNED(u32*, ready_count);
+ /* Indicates online CPUs coupled with the current CPU */
+ static DEFINE_PER_CPU_ALIGNED(cpumask_t, online_coupled);
+
+-/*
+- * Used to synchronize entry to deep idle states. Actually per-core rather
+- * than per-CPU.
+- */
++/* Used to synchronize entry to deep idle states */
+ static DEFINE_PER_CPU_ALIGNED(atomic_t, pm_barrier);
+
+ /* Saved CPU state across the CPS_PM_POWER_GATED state */
+@@ -112,9 +109,10 @@ int cps_pm_enter_state(enum cps_pm_state state)
+ cps_nc_entry_fn entry;
+ struct core_boot_config *core_cfg;
+ struct vpe_boot_config *vpe_cfg;
++ atomic_t *barrier;
+
+ /* Check that there is an entry function for this state */
+- entry = per_cpu(nc_asm_enter, core)[state];
++ entry = per_cpu(nc_asm_enter, cpu)[state];
+ if (!entry)
+ return -EINVAL;
+
+@@ -150,7 +148,7 @@ int cps_pm_enter_state(enum cps_pm_state state)
+ smp_mb__after_atomic();
+
+ /* Create a non-coherent mapping of the core ready_count */
+- core_ready_count = per_cpu(ready_count, core);
++ core_ready_count = per_cpu(ready_count, cpu);
+ nc_addr = kmap_noncoherent(virt_to_page(core_ready_count),
+ (unsigned long)core_ready_count);
+ nc_addr += ((unsigned long)core_ready_count & ~PAGE_MASK);
+@@ -158,7 +156,8 @@ int cps_pm_enter_state(enum cps_pm_state state)
+
+ /* Ensure ready_count is zero-initialised before the assembly runs */
+ WRITE_ONCE(*nc_core_ready_count, 0);
+- coupled_barrier(&per_cpu(pm_barrier, core), online);
++ barrier = &per_cpu(pm_barrier, cpumask_first(&cpu_sibling_map[cpu]));
++ coupled_barrier(barrier, online);
+
+ /* Run the generated entry code */
+ left = entry(online, nc_core_ready_count);
+@@ -629,12 +628,14 @@ static void *cps_gen_entry_code(unsigned cpu, enum cps_pm_state state)
+
+ static int cps_pm_online_cpu(unsigned int cpu)
+ {
+- enum cps_pm_state state;
+- unsigned core = cpu_core(&cpu_data[cpu]);
++ unsigned int sibling, core;
+ void *entry_fn, *core_rc;
++ enum cps_pm_state state;
++
++ core = cpu_core(&cpu_data[cpu]);
+
+ for (state = CPS_PM_NC_WAIT; state < CPS_PM_STATE_COUNT; state++) {
+- if (per_cpu(nc_asm_enter, core)[state])
++ if (per_cpu(nc_asm_enter, cpu)[state])
+ continue;
+ if (!test_bit(state, state_support))
+ continue;
+@@ -646,16 +647,19 @@ static int cps_pm_online_cpu(unsigned int cpu)
+ clear_bit(state, state_support);
+ }
+
+- per_cpu(nc_asm_enter, core)[state] = entry_fn;
++ for_each_cpu(sibling, &cpu_sibling_map[cpu])
++ per_cpu(nc_asm_enter, sibling)[state] = entry_fn;
+ }
+
+- if (!per_cpu(ready_count, core)) {
++ if (!per_cpu(ready_count, cpu)) {
+ core_rc = kmalloc(sizeof(u32), GFP_KERNEL);
+ if (!core_rc) {
+ pr_err("Failed allocate core %u ready_count\n", core);
+ return -ENOMEM;
+ }
+- per_cpu(ready_count, core) = core_rc;
++
++ for_each_cpu(sibling, &cpu_sibling_map[cpu])
++ per_cpu(ready_count, sibling) = core_rc;
+ }
+
+ return 0;
+diff --git a/arch/powerpc/include/asm/mmzone.h b/arch/powerpc/include/asm/mmzone.h
+index d99863cd6cde48..049152f8d597a6 100644
+--- a/arch/powerpc/include/asm/mmzone.h
++++ b/arch/powerpc/include/asm/mmzone.h
+@@ -29,6 +29,7 @@ extern cpumask_var_t node_to_cpumask_map[];
+ #ifdef CONFIG_MEMORY_HOTPLUG
+ extern unsigned long max_pfn;
+ u64 memory_hotplug_max(void);
++u64 hot_add_drconf_memory_max(void);
+ #else
+ #define memory_hotplug_max() memblock_end_of_DRAM()
+ #endif
+diff --git a/arch/powerpc/kernel/prom_init.c b/arch/powerpc/kernel/prom_init.c
+index 57082fac466870..fe4659ba8c22aa 100644
+--- a/arch/powerpc/kernel/prom_init.c
++++ b/arch/powerpc/kernel/prom_init.c
+@@ -2889,11 +2889,11 @@ static void __init fixup_device_tree_pmac(void)
+ char type[8];
+ phandle node;
+
+- // Some pmacs are missing #size-cells on escc nodes
++ // Some pmacs are missing #size-cells on escc or i2s nodes
+ for (node = 0; prom_next_node(&node); ) {
+ type[0] = '\0';
+ prom_getprop(node, "device_type", type, sizeof(type));
+- if (prom_strcmp(type, "escc"))
++ if (prom_strcmp(type, "escc") && prom_strcmp(type, "i2s"))
+ continue;
+
+ if (prom_getproplen(node, "#size-cells") != PROM_ERROR)
+diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
+index 128c011afc4818..9f764bc42b8cc8 100644
+--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
++++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
+@@ -976,7 +976,7 @@ int __meminit radix__vmemmap_create_mapping(unsigned long start,
+ return 0;
+ }
+
+-
++#ifdef CONFIG_ARCH_WANT_OPTIMIZE_DAX_VMEMMAP
+ bool vmemmap_can_optimize(struct vmem_altmap *altmap, struct dev_pagemap *pgmap)
+ {
+ if (radix_enabled())
+@@ -984,6 +984,7 @@ bool vmemmap_can_optimize(struct vmem_altmap *altmap, struct dev_pagemap *pgmap)
+
+ return false;
+ }
++#endif
+
+ int __meminit vmemmap_check_pmd(pmd_t *pmdp, int node,
+ unsigned long addr, unsigned long next)
+diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
+index 3c1da08304d032..603a0f652ba61c 100644
+--- a/arch/powerpc/mm/numa.c
++++ b/arch/powerpc/mm/numa.c
+@@ -1336,7 +1336,7 @@ int hot_add_scn_to_nid(unsigned long scn_addr)
+ return nid;
+ }
+
+-static u64 hot_add_drconf_memory_max(void)
++u64 hot_add_drconf_memory_max(void)
+ {
+ struct device_node *memory = NULL;
+ struct device_node *dn = NULL;
+diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c
+index f4e03aaabb4c36..b906d28f74fd4e 100644
+--- a/arch/powerpc/perf/core-book3s.c
++++ b/arch/powerpc/perf/core-book3s.c
+@@ -2226,6 +2226,10 @@ static struct pmu power_pmu = {
+ #define PERF_SAMPLE_ADDR_TYPE (PERF_SAMPLE_ADDR | \
+ PERF_SAMPLE_PHYS_ADDR | \
+ PERF_SAMPLE_DATA_PAGE_SIZE)
++
++#define SIER_TYPE_SHIFT 15
++#define SIER_TYPE_MASK (0x7ull << SIER_TYPE_SHIFT)
++
+ /*
+ * A counter has overflowed; update its count and record
+ * things if requested. Note that interrupts are hard-disabled
+@@ -2294,6 +2298,22 @@ static void record_and_restart(struct perf_event *event, unsigned long val,
+ is_kernel_addr(mfspr(SPRN_SIAR)))
+ record = 0;
+
++ /*
++ * SIER[46-48] presents instruction type of the sampled instruction.
++ * In ISA v3.0 and before values "0" and "7" are considered reserved.
++ * In ISA v3.1, value "7" has been used to indicate "larx/stcx".
++ * Drop the sample if "type" has reserved values for this field with a
++ * ISA version check.
++ */
++ if (event->attr.sample_type & PERF_SAMPLE_DATA_SRC &&
++ ppmu->get_mem_data_src) {
++ val = (regs->dar & SIER_TYPE_MASK) >> SIER_TYPE_SHIFT;
++ if (val == 0 || (val == 7 && !cpu_has_feature(CPU_FTR_ARCH_31))) {
++ record = 0;
++ atomic64_inc(&event->lost_samples);
++ }
++ }
++
+ /*
+ * Finally record data if requested.
+ */
+diff --git a/arch/powerpc/perf/isa207-common.c b/arch/powerpc/perf/isa207-common.c
+index 56301b2bc8ae87..031a2b63c171dc 100644
+--- a/arch/powerpc/perf/isa207-common.c
++++ b/arch/powerpc/perf/isa207-common.c
+@@ -321,8 +321,10 @@ void isa207_get_mem_data_src(union perf_mem_data_src *dsrc, u32 flags,
+
+ sier = mfspr(SPRN_SIER);
+ val = (sier & ISA207_SIER_TYPE_MASK) >> ISA207_SIER_TYPE_SHIFT;
+- if (val != 1 && val != 2 && !(val == 7 && cpu_has_feature(CPU_FTR_ARCH_31)))
++ if (val != 1 && val != 2 && !(val == 7 && cpu_has_feature(CPU_FTR_ARCH_31))) {
++ dsrc->val = 0;
+ return;
++ }
+
+ idx = (sier & ISA207_SIER_LDST_MASK) >> ISA207_SIER_LDST_SHIFT;
+ sub_idx = (sier & ISA207_SIER_DATA_SRC_MASK) >> ISA207_SIER_DATA_SRC_SHIFT;
+diff --git a/arch/powerpc/platforms/pseries/iommu.c b/arch/powerpc/platforms/pseries/iommu.c
+index ae6f7a235d8b24..d6ebc19fb99c51 100644
+--- a/arch/powerpc/platforms/pseries/iommu.c
++++ b/arch/powerpc/platforms/pseries/iommu.c
+@@ -52,7 +52,8 @@ enum {
+ enum {
+ DDW_EXT_SIZE = 0,
+ DDW_EXT_RESET_DMA_WIN = 1,
+- DDW_EXT_QUERY_OUT_SIZE = 2
++ DDW_EXT_QUERY_OUT_SIZE = 2,
++ DDW_EXT_LIMITED_ADDR_MODE = 3
+ };
+
+ static struct iommu_table *iommu_pseries_alloc_table(int node)
+@@ -1284,17 +1285,13 @@ static LIST_HEAD(failed_ddw_pdn_list);
+
+ static phys_addr_t ddw_memory_hotplug_max(void)
+ {
+- resource_size_t max_addr = memory_hotplug_max();
+- struct device_node *memory;
++ resource_size_t max_addr;
+
+- for_each_node_by_type(memory, "memory") {
+- struct resource res;
+-
+- if (of_address_to_resource(memory, 0, &res))
+- continue;
+-
+- max_addr = max_t(resource_size_t, max_addr, res.end + 1);
+- }
++#if defined(CONFIG_NUMA) && defined(CONFIG_MEMORY_HOTPLUG)
++ max_addr = hot_add_drconf_memory_max();
++#else
++ max_addr = memblock_end_of_DRAM();
++#endif
+
+ return max_addr;
+ }
+@@ -1331,6 +1328,54 @@ static void reset_dma_window(struct pci_dev *dev, struct device_node *par_dn)
+ ret);
+ }
+
++/*
++ * Platforms support placing PHB in limited address mode starting with LoPAR
++ * level 2.13 implement. In this mode, the DMA address returned by DDW is over
++ * 4GB but, less than 64-bits. This benefits IO adapters that don't support
++ * 64-bits for DMA addresses.
++ */
++static int limited_dma_window(struct pci_dev *dev, struct device_node *par_dn)
++{
++ int ret;
++ u32 cfg_addr, reset_dma_win, las_supported;
++ u64 buid;
++ struct device_node *dn;
++ struct pci_dn *pdn;
++
++ ret = ddw_read_ext(par_dn, DDW_EXT_RESET_DMA_WIN, &reset_dma_win);
++ if (ret)
++ goto out;
++
++ ret = ddw_read_ext(par_dn, DDW_EXT_LIMITED_ADDR_MODE, &las_supported);
++
++ /* Limited Address Space extension available on the platform but DDW in
++ * limited addressing mode not supported
++ */
++ if (!ret && !las_supported)
++ ret = -EPROTO;
++
++ if (ret) {
++ dev_info(&dev->dev, "Limited Address Space for DDW not Supported, err: %d", ret);
++ goto out;
++ }
++
++ dn = pci_device_to_OF_node(dev);
++ pdn = PCI_DN(dn);
++ buid = pdn->phb->buid;
++ cfg_addr = (pdn->busno << 16) | (pdn->devfn << 8);
++
++ ret = rtas_call(reset_dma_win, 4, 1, NULL, cfg_addr, BUID_HI(buid),
++ BUID_LO(buid), 1);
++ if (ret)
++ dev_info(&dev->dev,
++ "ibm,reset-pe-dma-windows(%x) for Limited Addr Support: %x %x %x returned %d ",
++ reset_dma_win, cfg_addr, BUID_HI(buid), BUID_LO(buid),
++ ret);
++
++out:
++ return ret;
++}
++
+ /* Return largest page shift based on "IO Page Sizes" output of ibm,query-pe-dma-window. */
+ static int iommu_get_page_shift(u32 query_page_size)
+ {
+@@ -1398,7 +1443,7 @@ static struct property *ddw_property_create(const char *propname, u32 liobn, u64
+ *
+ * returns true if can map all pages (direct mapping), false otherwise..
+ */
+-static bool enable_ddw(struct pci_dev *dev, struct device_node *pdn)
++static bool enable_ddw(struct pci_dev *dev, struct device_node *pdn, u64 dma_mask)
+ {
+ int len = 0, ret;
+ int max_ram_len = order_base_2(ddw_memory_hotplug_max());
+@@ -1417,6 +1462,9 @@ static bool enable_ddw(struct pci_dev *dev, struct device_node *pdn)
+ bool pmem_present;
+ struct pci_dn *pci = PCI_DN(pdn);
+ struct property *default_win = NULL;
++ bool limited_addr_req = false, limited_addr_enabled = false;
++ int dev_max_ddw;
++ int ddw_sz;
+
+ dn = of_find_node_by_type(NULL, "ibm,pmemory");
+ pmem_present = dn != NULL;
+@@ -1443,7 +1491,6 @@ static bool enable_ddw(struct pci_dev *dev, struct device_node *pdn)
+ * the ibm,ddw-applicable property holds the tokens for:
+ * ibm,query-pe-dma-window
+ * ibm,create-pe-dma-window
+- * ibm,remove-pe-dma-window
+ * for the given node in that order.
+ * the property is actually in the parent, not the PE
+ */
+@@ -1463,6 +1510,20 @@ static bool enable_ddw(struct pci_dev *dev, struct device_node *pdn)
+ if (ret != 0)
+ goto out_failed;
+
++ /* DMA Limited Addressing required? This is when the driver has
++ * requested to create DDW but supports mask which is less than 64-bits
++ */
++ limited_addr_req = (dma_mask != DMA_BIT_MASK(64));
++
++ /* place the PHB in Limited Addressing mode */
++ if (limited_addr_req) {
++ if (limited_dma_window(dev, pdn))
++ goto out_failed;
++
++ /* PHB is in Limited address mode */
++ limited_addr_enabled = true;
++ }
++
+ /*
+ * If there is no window available, remove the default DMA window,
+ * if it's present. This will make all the resources available to the
+@@ -1509,6 +1570,15 @@ static bool enable_ddw(struct pci_dev *dev, struct device_node *pdn)
+ goto out_failed;
+ }
+
++ /* Maximum DMA window size that the device can address (in log2) */
++ dev_max_ddw = fls64(dma_mask);
++
++ /* If the device DMA mask is less than 64-bits, make sure the DMA window
++ * size is not bigger than what the device can access
++ */
++ ddw_sz = min(order_base_2(query.largest_available_block << page_shift),
++ dev_max_ddw);
++
+ /*
+ * The "ibm,pmemory" can appear anywhere in the address space.
+ * Assuming it is still backed by page structs, try MAX_PHYSMEM_BITS
+@@ -1517,23 +1587,21 @@ static bool enable_ddw(struct pci_dev *dev, struct device_node *pdn)
+ */
+ len = max_ram_len;
+ if (pmem_present) {
+- if (query.largest_available_block >=
+- (1ULL << (MAX_PHYSMEM_BITS - page_shift)))
++ if (ddw_sz >= MAX_PHYSMEM_BITS)
+ len = MAX_PHYSMEM_BITS;
+ else
+ dev_info(&dev->dev, "Skipping ibm,pmemory");
+ }
+
+ /* check if the available block * number of ptes will map everything */
+- if (query.largest_available_block < (1ULL << (len - page_shift))) {
++ if (ddw_sz < len) {
+ dev_dbg(&dev->dev,
+ "can't map partition max 0x%llx with %llu %llu-sized pages\n",
+ 1ULL << len,
+ query.largest_available_block,
+ 1ULL << page_shift);
+
+- len = order_base_2(query.largest_available_block << page_shift);
+-
++ len = ddw_sz;
+ dynamic_mapping = true;
+ } else {
+ direct_mapping = !default_win_removed ||
+@@ -1547,8 +1615,9 @@ static bool enable_ddw(struct pci_dev *dev, struct device_node *pdn)
+ */
+ if (default_win_removed && pmem_present && !direct_mapping) {
+ /* DDW is big enough to be split */
+- if ((query.largest_available_block << page_shift) >=
+- MIN_DDW_VPMEM_DMA_WINDOW + (1ULL << max_ram_len)) {
++ if ((1ULL << ddw_sz) >=
++ MIN_DDW_VPMEM_DMA_WINDOW + (1ULL << max_ram_len)) {
++
+ direct_mapping = true;
+
+ /* offset of the Dynamic part of DDW */
+@@ -1559,8 +1628,7 @@ static bool enable_ddw(struct pci_dev *dev, struct device_node *pdn)
+ dynamic_mapping = true;
+
+ /* create max size DDW possible */
+- len = order_base_2(query.largest_available_block
+- << page_shift);
++ len = ddw_sz;
+ }
+ }
+
+@@ -1600,7 +1668,7 @@ static bool enable_ddw(struct pci_dev *dev, struct device_node *pdn)
+
+ if (direct_mapping) {
+ /* DDW maps the whole partition, so enable direct DMA mapping */
+- ret = walk_system_ram_range(0, memblock_end_of_DRAM() >> PAGE_SHIFT,
++ ret = walk_system_ram_range(0, ddw_memory_hotplug_max() >> PAGE_SHIFT,
+ win64->value, tce_setrange_multi_pSeriesLP_walk);
+ if (ret) {
+ dev_info(&dev->dev, "failed to map DMA window for %pOF: %d\n",
+@@ -1689,7 +1757,7 @@ static bool enable_ddw(struct pci_dev *dev, struct device_node *pdn)
+ __remove_dma_window(pdn, ddw_avail, create.liobn);
+
+ out_failed:
+- if (default_win_removed)
++ if (default_win_removed || limited_addr_enabled)
+ reset_dma_window(dev, pdn);
+
+ fpdn = kzalloc(sizeof(*fpdn), GFP_KERNEL);
+@@ -1708,6 +1776,9 @@ static bool enable_ddw(struct pci_dev *dev, struct device_node *pdn)
+ dev->dev.bus_dma_limit = dev->dev.archdata.dma_offset +
+ (1ULL << max_ram_len);
+
++ dev_info(&dev->dev, "lsa_required: %x, lsa_enabled: %x, direct mapping: %x\n",
++ limited_addr_req, limited_addr_enabled, direct_mapping);
++
+ return direct_mapping;
+ }
+
+@@ -1833,8 +1904,11 @@ static bool iommu_bypass_supported_pSeriesLP(struct pci_dev *pdev, u64 dma_mask)
+ {
+ struct device_node *dn = pci_device_to_OF_node(pdev), *pdn;
+
+- /* only attempt to use a new window if 64-bit DMA is requested */
+- if (dma_mask < DMA_BIT_MASK(64))
++ /* For DDW, DMA mask should be more than 32-bits. For mask more then
++ * 32-bits but less then 64-bits, DMA addressing is supported in
++ * Limited Addressing mode.
++ */
++ if (dma_mask <= DMA_BIT_MASK(32))
+ return false;
+
+ dev_dbg(&pdev->dev, "node is %pOF\n", dn);
+@@ -1847,7 +1921,7 @@ static bool iommu_bypass_supported_pSeriesLP(struct pci_dev *pdev, u64 dma_mask)
+ */
+ pdn = pci_dma_find(dn, NULL);
+ if (pdn && PCI_DN(pdn))
+- return enable_ddw(pdev, pdn);
++ return enable_ddw(pdev, pdn, dma_mask);
+
+ return false;
+ }
+@@ -2349,11 +2423,17 @@ static int iommu_mem_notifier(struct notifier_block *nb, unsigned long action,
+ struct memory_notify *arg = data;
+ int ret = 0;
+
++ /* This notifier can get called when onlining persistent memory as well.
++ * TCEs are not pre-mapped for persistent memory. Persistent memory will
++ * always be above ddw_memory_hotplug_max()
++ */
++
+ switch (action) {
+ case MEM_GOING_ONLINE:
+ spin_lock(&dma_win_list_lock);
+ list_for_each_entry(window, &dma_win_list, list) {
+- if (window->direct) {
++ if (window->direct && (arg->start_pfn << PAGE_SHIFT) <
++ ddw_memory_hotplug_max()) {
+ ret |= tce_setrange_multi_pSeriesLP(arg->start_pfn,
+ arg->nr_pages, window->prop);
+ }
+@@ -2365,7 +2445,8 @@ static int iommu_mem_notifier(struct notifier_block *nb, unsigned long action,
+ case MEM_OFFLINE:
+ spin_lock(&dma_win_list_lock);
+ list_for_each_entry(window, &dma_win_list, list) {
+- if (window->direct) {
++ if (window->direct && (arg->start_pfn << PAGE_SHIFT) <
++ ddw_memory_hotplug_max()) {
+ ret |= tce_clearrange_multi_pSeriesLP(arg->start_pfn,
+ arg->nr_pages, window->prop);
+ }
+diff --git a/arch/riscv/include/asm/cpufeature.h b/arch/riscv/include/asm/cpufeature.h
+index 19defdc2002d85..f56b409361fbe0 100644
+--- a/arch/riscv/include/asm/cpufeature.h
++++ b/arch/riscv/include/asm/cpufeature.h
+@@ -56,6 +56,9 @@ void __init riscv_user_isa_enable(void);
+ #define __RISCV_ISA_EXT_BUNDLE(_name, _bundled_exts) \
+ _RISCV_ISA_EXT_DATA(_name, RISCV_ISA_EXT_INVALID, _bundled_exts, \
+ ARRAY_SIZE(_bundled_exts), NULL)
++#define __RISCV_ISA_EXT_BUNDLE_VALIDATE(_name, _bundled_exts, _validate) \
++ _RISCV_ISA_EXT_DATA(_name, RISCV_ISA_EXT_INVALID, _bundled_exts, \
++ ARRAY_SIZE(_bundled_exts), _validate)
+
+ /* Used to declare extensions that are a superset of other extensions (Zvbb for instance) */
+ #define __RISCV_ISA_EXT_SUPERSET(_name, _id, _sub_exts) \
+diff --git a/arch/riscv/include/asm/page.h b/arch/riscv/include/asm/page.h
+index 125f5ecd956525..6409fd78ae6fa5 100644
+--- a/arch/riscv/include/asm/page.h
++++ b/arch/riscv/include/asm/page.h
+@@ -24,12 +24,9 @@
+ * When not using MMU this corresponds to the first free page in
+ * physical memory (aligned on a page boundary).
+ */
+-#ifdef CONFIG_64BIT
+ #ifdef CONFIG_MMU
++#ifdef CONFIG_64BIT
+ #define PAGE_OFFSET kernel_map.page_offset
+-#else
+-#define PAGE_OFFSET _AC(CONFIG_PAGE_OFFSET, UL)
+-#endif
+ /*
+ * By default, CONFIG_PAGE_OFFSET value corresponds to SV57 address space so
+ * define the PAGE_OFFSET value for SV48 and SV39.
+@@ -39,6 +36,9 @@
+ #else
+ #define PAGE_OFFSET _AC(CONFIG_PAGE_OFFSET, UL)
+ #endif /* CONFIG_64BIT */
++#else
++#define PAGE_OFFSET ((unsigned long)phys_ram_base)
++#endif /* CONFIG_MMU */
+
+ #ifndef __ASSEMBLY__
+
+@@ -95,11 +95,7 @@ typedef struct page *pgtable_t;
+ #define MIN_MEMBLOCK_ADDR 0
+ #endif
+
+-#ifdef CONFIG_MMU
+ #define ARCH_PFN_OFFSET (PFN_DOWN((unsigned long)phys_ram_base))
+-#else
+-#define ARCH_PFN_OFFSET (PAGE_OFFSET >> PAGE_SHIFT)
+-#endif /* CONFIG_MMU */
+
+ struct kernel_mapping {
+ unsigned long page_offset;
+diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
+index 050fdc49b5ad70..eb7b25ef556ecc 100644
+--- a/arch/riscv/include/asm/pgtable.h
++++ b/arch/riscv/include/asm/pgtable.h
+@@ -12,7 +12,7 @@
+ #include <asm/pgtable-bits.h>
+
+ #ifndef CONFIG_MMU
+-#define KERNEL_LINK_ADDR PAGE_OFFSET
++#define KERNEL_LINK_ADDR _AC(CONFIG_PAGE_OFFSET, UL)
+ #define KERN_VIRT_SIZE (UL(-1))
+ #else
+
+diff --git a/arch/riscv/kernel/Makefile b/arch/riscv/kernel/Makefile
+index 8d186bfced451e..f7480c9c6f8d73 100644
+--- a/arch/riscv/kernel/Makefile
++++ b/arch/riscv/kernel/Makefile
+@@ -9,8 +9,8 @@ CFLAGS_REMOVE_patch.o = $(CC_FLAGS_FTRACE)
+ CFLAGS_REMOVE_sbi.o = $(CC_FLAGS_FTRACE)
+ CFLAGS_REMOVE_return_address.o = $(CC_FLAGS_FTRACE)
+ endif
+-CFLAGS_syscall_table.o += $(call cc-option,-Wno-override-init,)
+-CFLAGS_compat_syscall_table.o += $(call cc-option,-Wno-override-init,)
++CFLAGS_syscall_table.o += $(call cc-disable-warning, override-init)
++CFLAGS_compat_syscall_table.o += $(call cc-disable-warning, override-init)
+
+ ifdef CONFIG_KEXEC_CORE
+ AFLAGS_kexec_relocate.o := -mcmodel=medany $(call cc-option,-mno-relax)
+diff --git a/arch/riscv/kernel/cpufeature.c b/arch/riscv/kernel/cpufeature.c
+index 40ac72e407b685..76a3b34d7a7072 100644
+--- a/arch/riscv/kernel/cpufeature.c
++++ b/arch/riscv/kernel/cpufeature.c
+@@ -109,6 +109,38 @@ static int riscv_ext_zicboz_validate(const struct riscv_isa_ext_data *data,
+ return 0;
+ }
+
++static int riscv_ext_vector_x_validate(const struct riscv_isa_ext_data *data,
++ const unsigned long *isa_bitmap)
++{
++ if (!IS_ENABLED(CONFIG_RISCV_ISA_V))
++ return -EINVAL;
++
++ return 0;
++}
++
++static int riscv_ext_vector_float_validate(const struct riscv_isa_ext_data *data,
++ const unsigned long *isa_bitmap)
++{
++ if (!IS_ENABLED(CONFIG_RISCV_ISA_V))
++ return -EINVAL;
++
++ if (!IS_ENABLED(CONFIG_FPU))
++ return -EINVAL;
++
++ /*
++ * The kernel doesn't support systems that don't implement both of
++ * F and D, so if any of the vector extensions that do floating point
++ * are to be usable, both floating point extensions need to be usable.
++ *
++ * Since this function validates vector only, and v/Zve* are probed
++ * after f/d, there's no need for a deferral here.
++ */
++ if (!__riscv_isa_extension_available(isa_bitmap, RISCV_ISA_EXT_d))
++ return -EINVAL;
++
++ return 0;
++}
++
+ static int riscv_ext_zca_depends(const struct riscv_isa_ext_data *data,
+ const unsigned long *isa_bitmap)
+ {
+@@ -326,12 +358,10 @@ const struct riscv_isa_ext_data riscv_isa_ext[] = {
+ __RISCV_ISA_EXT_DATA(d, RISCV_ISA_EXT_d),
+ __RISCV_ISA_EXT_DATA(q, RISCV_ISA_EXT_q),
+ __RISCV_ISA_EXT_SUPERSET(c, RISCV_ISA_EXT_c, riscv_c_exts),
+- __RISCV_ISA_EXT_SUPERSET(v, RISCV_ISA_EXT_v, riscv_v_exts),
++ __RISCV_ISA_EXT_SUPERSET_VALIDATE(v, RISCV_ISA_EXT_v, riscv_v_exts, riscv_ext_vector_float_validate),
+ __RISCV_ISA_EXT_DATA(h, RISCV_ISA_EXT_h),
+- __RISCV_ISA_EXT_SUPERSET_VALIDATE(zicbom, RISCV_ISA_EXT_ZICBOM, riscv_xlinuxenvcfg_exts,
+- riscv_ext_zicbom_validate),
+- __RISCV_ISA_EXT_SUPERSET_VALIDATE(zicboz, RISCV_ISA_EXT_ZICBOZ, riscv_xlinuxenvcfg_exts,
+- riscv_ext_zicboz_validate),
++ __RISCV_ISA_EXT_SUPERSET_VALIDATE(zicbom, RISCV_ISA_EXT_ZICBOM, riscv_xlinuxenvcfg_exts, riscv_ext_zicbom_validate),
++ __RISCV_ISA_EXT_SUPERSET_VALIDATE(zicboz, RISCV_ISA_EXT_ZICBOZ, riscv_xlinuxenvcfg_exts, riscv_ext_zicboz_validate),
+ __RISCV_ISA_EXT_DATA(ziccrse, RISCV_ISA_EXT_ZICCRSE),
+ __RISCV_ISA_EXT_DATA(zicntr, RISCV_ISA_EXT_ZICNTR),
+ __RISCV_ISA_EXT_DATA(zicond, RISCV_ISA_EXT_ZICOND),
+@@ -372,11 +402,11 @@ const struct riscv_isa_ext_data riscv_isa_ext[] = {
+ __RISCV_ISA_EXT_DATA(ztso, RISCV_ISA_EXT_ZTSO),
+ __RISCV_ISA_EXT_SUPERSET(zvbb, RISCV_ISA_EXT_ZVBB, riscv_zvbb_exts),
+ __RISCV_ISA_EXT_DATA(zvbc, RISCV_ISA_EXT_ZVBC),
+- __RISCV_ISA_EXT_SUPERSET(zve32f, RISCV_ISA_EXT_ZVE32F, riscv_zve32f_exts),
+- __RISCV_ISA_EXT_DATA(zve32x, RISCV_ISA_EXT_ZVE32X),
+- __RISCV_ISA_EXT_SUPERSET(zve64d, RISCV_ISA_EXT_ZVE64D, riscv_zve64d_exts),
+- __RISCV_ISA_EXT_SUPERSET(zve64f, RISCV_ISA_EXT_ZVE64F, riscv_zve64f_exts),
+- __RISCV_ISA_EXT_SUPERSET(zve64x, RISCV_ISA_EXT_ZVE64X, riscv_zve64x_exts),
++ __RISCV_ISA_EXT_SUPERSET_VALIDATE(zve32f, RISCV_ISA_EXT_ZVE32F, riscv_zve32f_exts, riscv_ext_vector_float_validate),
++ __RISCV_ISA_EXT_DATA_VALIDATE(zve32x, RISCV_ISA_EXT_ZVE32X, riscv_ext_vector_x_validate),
++ __RISCV_ISA_EXT_SUPERSET_VALIDATE(zve64d, RISCV_ISA_EXT_ZVE64D, riscv_zve64d_exts, riscv_ext_vector_float_validate),
++ __RISCV_ISA_EXT_SUPERSET_VALIDATE(zve64f, RISCV_ISA_EXT_ZVE64F, riscv_zve64f_exts, riscv_ext_vector_float_validate),
++ __RISCV_ISA_EXT_SUPERSET_VALIDATE(zve64x, RISCV_ISA_EXT_ZVE64X, riscv_zve64x_exts, riscv_ext_vector_x_validate),
+ __RISCV_ISA_EXT_DATA(zvfh, RISCV_ISA_EXT_ZVFH),
+ __RISCV_ISA_EXT_DATA(zvfhmin, RISCV_ISA_EXT_ZVFHMIN),
+ __RISCV_ISA_EXT_DATA(zvkb, RISCV_ISA_EXT_ZVKB),
+@@ -960,16 +990,6 @@ void __init riscv_fill_hwcap(void)
+ riscv_v_setup_vsize();
+ }
+
+- if (elf_hwcap & COMPAT_HWCAP_ISA_V) {
+- /*
+- * ISA string in device tree might have 'v' flag, but
+- * CONFIG_RISCV_ISA_V is disabled in kernel.
+- * Clear V flag in elf_hwcap if CONFIG_RISCV_ISA_V is disabled.
+- */
+- if (!IS_ENABLED(CONFIG_RISCV_ISA_V))
+- elf_hwcap &= ~COMPAT_HWCAP_ISA_V;
+- }
+-
+ memset(print_str, 0, sizeof(print_str));
+ for (i = 0, j = 0; i < NUM_ALPHA_EXTS; i++)
+ if (riscv_isa[0] & BIT_MASK(i))
+diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c
+index 9b6e86ce386744..bb77607c87aa2d 100644
+--- a/arch/riscv/mm/tlbflush.c
++++ b/arch/riscv/mm/tlbflush.c
+@@ -4,6 +4,7 @@
+ #include <linux/smp.h>
+ #include <linux/sched.h>
+ #include <linux/hugetlb.h>
++#include <linux/mmu_notifier.h>
+ #include <asm/sbi.h>
+ #include <asm/mmu_context.h>
+
+@@ -78,10 +79,17 @@ static void __ipi_flush_tlb_range_asid(void *info)
+ local_flush_tlb_range_asid(d->start, d->size, d->stride, d->asid);
+ }
+
+-static void __flush_tlb_range(const struct cpumask *cmask, unsigned long asid,
++static inline unsigned long get_mm_asid(struct mm_struct *mm)
++{
++ return mm ? cntx2asid(atomic_long_read(&mm->context.id)) : FLUSH_TLB_NO_ASID;
++}
++
++static void __flush_tlb_range(struct mm_struct *mm,
++ const struct cpumask *cmask,
+ unsigned long start, unsigned long size,
+ unsigned long stride)
+ {
++ unsigned long asid = get_mm_asid(mm);
+ unsigned int cpu;
+
+ if (cpumask_empty(cmask))
+@@ -105,30 +113,26 @@ static void __flush_tlb_range(const struct cpumask *cmask, unsigned long asid,
+ }
+
+ put_cpu();
+-}
+
+-static inline unsigned long get_mm_asid(struct mm_struct *mm)
+-{
+- return cntx2asid(atomic_long_read(&mm->context.id));
++ if (mm)
++ mmu_notifier_arch_invalidate_secondary_tlbs(mm, start, start + size);
+ }
+
+ void flush_tlb_mm(struct mm_struct *mm)
+ {
+- __flush_tlb_range(mm_cpumask(mm), get_mm_asid(mm),
+- 0, FLUSH_TLB_MAX_SIZE, PAGE_SIZE);
++ __flush_tlb_range(mm, mm_cpumask(mm), 0, FLUSH_TLB_MAX_SIZE, PAGE_SIZE);
+ }
+
+ void flush_tlb_mm_range(struct mm_struct *mm,
+ unsigned long start, unsigned long end,
+ unsigned int page_size)
+ {
+- __flush_tlb_range(mm_cpumask(mm), get_mm_asid(mm),
+- start, end - start, page_size);
++ __flush_tlb_range(mm, mm_cpumask(mm), start, end - start, page_size);
+ }
+
+ void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
+ {
+- __flush_tlb_range(mm_cpumask(vma->vm_mm), get_mm_asid(vma->vm_mm),
++ __flush_tlb_range(vma->vm_mm, mm_cpumask(vma->vm_mm),
+ addr, PAGE_SIZE, PAGE_SIZE);
+ }
+
+@@ -161,13 +165,13 @@ void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
+ }
+ }
+
+- __flush_tlb_range(mm_cpumask(vma->vm_mm), get_mm_asid(vma->vm_mm),
++ __flush_tlb_range(vma->vm_mm, mm_cpumask(vma->vm_mm),
+ start, end - start, stride_size);
+ }
+
+ void flush_tlb_kernel_range(unsigned long start, unsigned long end)
+ {
+- __flush_tlb_range(cpu_online_mask, FLUSH_TLB_NO_ASID,
++ __flush_tlb_range(NULL, cpu_online_mask,
+ start, end - start, PAGE_SIZE);
+ }
+
+@@ -175,7 +179,7 @@ void flush_tlb_kernel_range(unsigned long start, unsigned long end)
+ void flush_pmd_tlb_range(struct vm_area_struct *vma, unsigned long start,
+ unsigned long end)
+ {
+- __flush_tlb_range(mm_cpumask(vma->vm_mm), get_mm_asid(vma->vm_mm),
++ __flush_tlb_range(vma->vm_mm, mm_cpumask(vma->vm_mm),
+ start, end - start, PMD_SIZE);
+ }
+ #endif
+@@ -189,7 +193,10 @@ void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
+ struct mm_struct *mm,
+ unsigned long uaddr)
+ {
++ unsigned long start = uaddr & PAGE_MASK;
++
+ cpumask_or(&batch->cpumask, &batch->cpumask, mm_cpumask(mm));
++ mmu_notifier_arch_invalidate_secondary_tlbs(mm, start, start + PAGE_SIZE);
+ }
+
+ void arch_flush_tlb_batched_pending(struct mm_struct *mm)
+@@ -199,7 +206,7 @@ void arch_flush_tlb_batched_pending(struct mm_struct *mm)
+
+ void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
+ {
+- __flush_tlb_range(&batch->cpumask, FLUSH_TLB_NO_ASID, 0,
+- FLUSH_TLB_MAX_SIZE, PAGE_SIZE);
++ __flush_tlb_range(NULL, &batch->cpumask,
++ 0, FLUSH_TLB_MAX_SIZE, PAGE_SIZE);
+ cpumask_clear(&batch->cpumask);
+ }
+diff --git a/arch/s390/hypfs/hypfs_diag_fs.c b/arch/s390/hypfs/hypfs_diag_fs.c
+index 00a6d370a28032..280266a74f378d 100644
+--- a/arch/s390/hypfs/hypfs_diag_fs.c
++++ b/arch/s390/hypfs/hypfs_diag_fs.c
+@@ -208,6 +208,8 @@ static int hypfs_create_cpu_files(struct dentry *cpus_dir, void *cpu_info)
+ snprintf(buffer, TMP_SIZE, "%d", cpu_info__cpu_addr(diag204_get_info_type(),
+ cpu_info));
+ cpu_dir = hypfs_mkdir(cpus_dir, buffer);
++ if (IS_ERR(cpu_dir))
++ return PTR_ERR(cpu_dir);
+ rc = hypfs_create_u64(cpu_dir, "mgmtime",
+ cpu_info__acc_time(diag204_get_info_type(), cpu_info) -
+ cpu_info__lp_time(diag204_get_info_type(), cpu_info));
+diff --git a/arch/s390/include/asm/tlb.h b/arch/s390/include/asm/tlb.h
+index 72655fd2d867c5..f20601995bb0e6 100644
+--- a/arch/s390/include/asm/tlb.h
++++ b/arch/s390/include/asm/tlb.h
+@@ -84,7 +84,7 @@ static inline void pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte,
+ tlb->mm->context.flush_mm = 1;
+ tlb->freed_tables = 1;
+ tlb->cleared_pmds = 1;
+- if (mm_alloc_pgste(tlb->mm))
++ if (mm_has_pgste(tlb->mm))
+ gmap_unlink(tlb->mm, (unsigned long *)pte, address);
+ tlb_remove_ptdesc(tlb, virt_to_ptdesc(pte));
+ }
+diff --git a/arch/um/kernel/mem.c b/arch/um/kernel/mem.c
+index befed230aac28e..91b7710c0a5f49 100644
+--- a/arch/um/kernel/mem.c
++++ b/arch/um/kernel/mem.c
+@@ -66,6 +66,7 @@ void __init mem_init(void)
+ map_memory(brk_end, __pa(brk_end), uml_reserved - brk_end, 1, 1, 0);
+ memblock_free((void *)brk_end, uml_reserved - brk_end);
+ uml_reserved = brk_end;
++ min_low_pfn = PFN_UP(__pa(uml_reserved));
+
+ /* this will put all low memory onto the freelists */
+ memblock_free_all();
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index f86e7072a5ba3b..5439bff8850a9e 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -277,7 +277,7 @@ config X86
+ select HAVE_PCI
+ select HAVE_PERF_REGS
+ select HAVE_PERF_USER_STACK_DUMP
+- select MMU_GATHER_RCU_TABLE_FREE if PARAVIRT
++ select MMU_GATHER_RCU_TABLE_FREE
+ select MMU_GATHER_MERGE_VMAS
+ select HAVE_POSIX_CPU_TIMERS_TASK_WORK
+ select HAVE_REGS_AND_STACK_ACCESS_API
+@@ -2430,6 +2430,7 @@ config STRICT_SIGALTSTACK_SIZE
+ config CFI_AUTO_DEFAULT
+ bool "Attempt to use FineIBT by default at boot time"
+ depends on FINEIBT
++ depends on !RUST || RUSTC_VERSION >= 108800
+ default y
+ help
+ Attempt to use FineIBT by default at boot time. If enabled,
+diff --git a/arch/x86/boot/boot.h b/arch/x86/boot/boot.h
+index 0f24f7ebec9baa..38f17a1e1e367f 100644
+--- a/arch/x86/boot/boot.h
++++ b/arch/x86/boot/boot.h
+@@ -16,7 +16,7 @@
+
+ #define STACK_SIZE 1024 /* Minimum number of bytes for stack */
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+
+ #include <linux/stdarg.h>
+ #include <linux/types.h>
+@@ -327,6 +327,6 @@ void probe_cards(int unsafe);
+ /* video-vesa.c */
+ void vesa_store_edid(void);
+
+-#endif /* __ASSEMBLY__ */
++#endif /* __ASSEMBLER__ */
+
+ #endif /* BOOT_BOOT_H */
+diff --git a/arch/x86/boot/genimage.sh b/arch/x86/boot/genimage.sh
+index c9299aeb7333e6..3882ead513f742 100644
+--- a/arch/x86/boot/genimage.sh
++++ b/arch/x86/boot/genimage.sh
+@@ -22,6 +22,7 @@
+ # This script requires:
+ # bash
+ # syslinux
++# genisoimage
+ # mtools (for fdimage* and hdimage)
+ # edk2/OVMF (for hdimage)
+ #
+@@ -251,7 +252,9 @@ geniso() {
+ cp "$isolinux" "$ldlinux" "$tmp_dir"
+ cp "$FBZIMAGE" "$tmp_dir"/linux
+ echo default linux "$KCMDLINE" > "$tmp_dir"/isolinux.cfg
+- cp "${FDINITRDS[@]}" "$tmp_dir"/
++ if [ ${#FDINITRDS[@]} -gt 0 ]; then
++ cp "${FDINITRDS[@]}" "$tmp_dir"/
++ fi
+ genisoimage -J -r -appid 'LINUX_BOOT' -input-charset=utf-8 \
+ -quiet -o "$FIMAGE" -b isolinux.bin \
+ -c boot.cat -no-emul-boot -boot-load-size 4 \
+diff --git a/arch/x86/entry/entry.S b/arch/x86/entry/entry.S
+index 58e3124ee2b420..5b96249734ada1 100644
+--- a/arch/x86/entry/entry.S
++++ b/arch/x86/entry/entry.S
+@@ -63,7 +63,7 @@ THUNK warn_thunk_thunk, __warn_thunk
+ * entirely in the C code, and use an alias emitted by the linker script
+ * instead.
+ */
+-#ifdef CONFIG_STACKPROTECTOR
++#if defined(CONFIG_STACKPROTECTOR) && defined(CONFIG_SMP)
+ EXPORT_SYMBOL(__ref_stack_chk_guard);
+ #endif
+ #endif
+diff --git a/arch/x86/entry/vdso/extable.h b/arch/x86/entry/vdso/extable.h
+index b56f6b0129416e..baba612b832c34 100644
+--- a/arch/x86/entry/vdso/extable.h
++++ b/arch/x86/entry/vdso/extable.h
+@@ -7,7 +7,7 @@
+ * vDSO uses a dedicated handler the addresses are relative to the overall
+ * exception table, not each individual entry.
+ */
+-#ifdef __ASSEMBLY__
++#ifdef __ASSEMBLER__
+ #define _ASM_VDSO_EXTABLE_HANDLE(from, to) \
+ ASM_VDSO_EXTABLE_HANDLE from to
+
+diff --git a/arch/x86/events/amd/ibs.c b/arch/x86/events/amd/ibs.c
+index e36c9c63c97ccd..58ad23205d59d9 100644
+--- a/arch/x86/events/amd/ibs.c
++++ b/arch/x86/events/amd/ibs.c
+@@ -274,7 +274,7 @@ static int perf_ibs_init(struct perf_event *event)
+ {
+ struct hw_perf_event *hwc = &event->hw;
+ struct perf_ibs *perf_ibs;
+- u64 max_cnt, config;
++ u64 config;
+ int ret;
+
+ perf_ibs = get_ibs_pmu(event->attr.type);
+@@ -321,10 +321,19 @@ static int perf_ibs_init(struct perf_event *event)
+ if (!hwc->sample_period)
+ hwc->sample_period = 0x10;
+ } else {
+- max_cnt = config & perf_ibs->cnt_mask;
++ u64 period = 0;
++
++ if (perf_ibs == &perf_ibs_op) {
++ period = (config & IBS_OP_MAX_CNT) << 4;
++ if (ibs_caps & IBS_CAPS_OPCNTEXT)
++ period |= config & IBS_OP_MAX_CNT_EXT_MASK;
++ } else {
++ period = (config & IBS_FETCH_MAX_CNT) << 4;
++ }
++
+ config &= ~perf_ibs->cnt_mask;
+- event->attr.sample_period = max_cnt << 4;
+- hwc->sample_period = event->attr.sample_period;
++ event->attr.sample_period = period;
++ hwc->sample_period = period;
+ }
+
+ if (!hwc->sample_period)
+@@ -1318,7 +1327,8 @@ static __init int perf_ibs_op_init(void)
+ if (ibs_caps & IBS_CAPS_OPCNTEXT) {
+ perf_ibs_op.max_period |= IBS_OP_MAX_CNT_EXT_MASK;
+ perf_ibs_op.config_mask |= IBS_OP_MAX_CNT_EXT_MASK;
+- perf_ibs_op.cnt_mask |= IBS_OP_MAX_CNT_EXT_MASK;
++ perf_ibs_op.cnt_mask |= (IBS_OP_MAX_CNT_EXT_MASK |
++ IBS_OP_CUR_CNT_EXT_MASK);
+ }
+
+ if (ibs_caps & IBS_CAPS_ZEN4)
+diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
+index 08de293bebad14..97587d4c7befdb 100644
+--- a/arch/x86/events/intel/ds.c
++++ b/arch/x86/events/intel/ds.c
+@@ -2280,8 +2280,9 @@ static void intel_pmu_drain_pebs_core(struct pt_regs *iregs, struct perf_sample_
+ setup_pebs_fixed_sample_data);
+ }
+
+-static void intel_pmu_pebs_event_update_no_drain(struct cpu_hw_events *cpuc, int size)
++static void intel_pmu_pebs_event_update_no_drain(struct cpu_hw_events *cpuc, u64 mask)
+ {
++ u64 pebs_enabled = cpuc->pebs_enabled & mask;
+ struct perf_event *event;
+ int bit;
+
+@@ -2292,7 +2293,7 @@ static void intel_pmu_pebs_event_update_no_drain(struct cpu_hw_events *cpuc, int
+ * It needs to call intel_pmu_save_and_restart_reload() to
+ * update the event->count for this case.
+ */
+- for_each_set_bit(bit, (unsigned long *)&cpuc->pebs_enabled, size) {
++ for_each_set_bit(bit, (unsigned long *)&pebs_enabled, X86_PMC_IDX_MAX) {
+ event = cpuc->events[bit];
+ if (event->hw.flags & PERF_X86_EVENT_AUTO_RELOAD)
+ intel_pmu_save_and_restart_reload(event, 0);
+@@ -2327,7 +2328,7 @@ static void intel_pmu_drain_pebs_nhm(struct pt_regs *iregs, struct perf_sample_d
+ }
+
+ if (unlikely(base >= top)) {
+- intel_pmu_pebs_event_update_no_drain(cpuc, size);
++ intel_pmu_pebs_event_update_no_drain(cpuc, mask);
+ return;
+ }
+
+@@ -2441,7 +2442,7 @@ static void intel_pmu_drain_pebs_icl(struct pt_regs *iregs, struct perf_sample_d
+ (hybrid(cpuc->pmu, fixed_cntr_mask64) << INTEL_PMC_IDX_FIXED);
+
+ if (unlikely(base >= top)) {
+- intel_pmu_pebs_event_update_no_drain(cpuc, X86_PMC_IDX_MAX);
++ intel_pmu_pebs_event_update_no_drain(cpuc, mask);
+ return;
+ }
+
+diff --git a/arch/x86/include/asm/alternative.h b/arch/x86/include/asm/alternative.h
+index 9e01490220ece3..abfca03f47a79a 100644
+--- a/arch/x86/include/asm/alternative.h
++++ b/arch/x86/include/asm/alternative.h
+@@ -16,7 +16,7 @@
+ #define ALT_DIRECT_CALL(feature) ((ALT_FLAG_DIRECT_CALL << ALT_FLAGS_SHIFT) | (feature))
+ #define ALT_CALL_ALWAYS ALT_DIRECT_CALL(X86_FEATURE_ALWAYS)
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+
+ #include <linux/stddef.h>
+
+@@ -318,7 +318,7 @@ static inline int alternatives_text_reserved(void *start, void *end)
+ void BUG_func(void);
+ void nop_func(void);
+
+-#else /* __ASSEMBLY__ */
++#else /* __ASSEMBLER__ */
+
+ #ifdef CONFIG_SMP
+ .macro LOCK_PREFIX
+@@ -401,6 +401,6 @@ void nop_func(void);
+ ALTERNATIVE_2 oldinstr, newinstr_no, X86_FEATURE_ALWAYS, \
+ newinstr_yes, ft_flags
+
+-#endif /* __ASSEMBLY__ */
++#endif /* __ASSEMBLER__ */
+
+ #endif /* _ASM_X86_ALTERNATIVE_H */
+diff --git a/arch/x86/include/asm/asm.h b/arch/x86/include/asm/asm.h
+index 2bec0c89a95c27..e9653ee72813c9 100644
+--- a/arch/x86/include/asm/asm.h
++++ b/arch/x86/include/asm/asm.h
+@@ -2,7 +2,7 @@
+ #ifndef _ASM_X86_ASM_H
+ #define _ASM_X86_ASM_H
+
+-#ifdef __ASSEMBLY__
++#ifdef __ASSEMBLER__
+ # define __ASM_FORM(x, ...) x,## __VA_ARGS__
+ # define __ASM_FORM_RAW(x, ...) x,## __VA_ARGS__
+ # define __ASM_FORM_COMMA(x, ...) x,## __VA_ARGS__,
+@@ -113,7 +113,7 @@
+
+ #endif
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+ #ifndef __pic__
+ static __always_inline __pure void *rip_rel_ptr(void *p)
+ {
+@@ -144,7 +144,7 @@ static __always_inline __pure void *rip_rel_ptr(void *p)
+ # include <asm/extable_fixup_types.h>
+
+ /* Exception table entry */
+-#ifdef __ASSEMBLY__
++#ifdef __ASSEMBLER__
+
+ # define _ASM_EXTABLE_TYPE(from, to, type) \
+ .pushsection "__ex_table","a" ; \
+@@ -164,7 +164,7 @@ static __always_inline __pure void *rip_rel_ptr(void *p)
+ # define _ASM_NOKPROBE(entry)
+ # endif
+
+-#else /* ! __ASSEMBLY__ */
++#else /* ! __ASSEMBLER__ */
+
+ # define DEFINE_EXTABLE_TYPE_REG \
+ ".macro extable_type_reg type:req reg:req\n" \
+@@ -221,7 +221,7 @@ static __always_inline __pure void *rip_rel_ptr(void *p)
+ */
+ register unsigned long current_stack_pointer asm(_ASM_SP);
+ #define ASM_CALL_CONSTRAINT "+r" (current_stack_pointer)
+-#endif /* __ASSEMBLY__ */
++#endif /* __ASSEMBLER__ */
+
+ #define _ASM_EXTABLE(from, to) \
+ _ASM_EXTABLE_TYPE(from, to, EX_TYPE_DEFAULT)
+diff --git a/arch/x86/include/asm/boot.h b/arch/x86/include/asm/boot.h
+index 3e5b111e619d4e..3f02ff6d333d36 100644
+--- a/arch/x86/include/asm/boot.h
++++ b/arch/x86/include/asm/boot.h
+@@ -74,7 +74,7 @@
+ # define BOOT_STACK_SIZE 0x1000
+ #endif
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+ extern unsigned int output_len;
+ extern const unsigned long kernel_text_size;
+ extern const unsigned long kernel_total_size;
+diff --git a/arch/x86/include/asm/bug.h b/arch/x86/include/asm/bug.h
+index e85ac0c7c039ea..1a5e4b37269404 100644
+--- a/arch/x86/include/asm/bug.h
++++ b/arch/x86/include/asm/bug.h
+@@ -22,8 +22,9 @@
+ #define SECOND_BYTE_OPCODE_UD2 0x0b
+
+ #define BUG_NONE 0xffff
+-#define BUG_UD1 0xfffe
+-#define BUG_UD2 0xfffd
++#define BUG_UD2 0xfffe
++#define BUG_UD1 0xfffd
++#define BUG_UD1_UBSAN 0xfffc
+
+ #ifdef CONFIG_GENERIC_BUG
+
+diff --git a/arch/x86/include/asm/cfi.h b/arch/x86/include/asm/cfi.h
+index 31d19c815f992c..7dd5ab239c87bd 100644
+--- a/arch/x86/include/asm/cfi.h
++++ b/arch/x86/include/asm/cfi.h
+@@ -126,6 +126,17 @@ static inline int cfi_get_offset(void)
+
+ extern u32 cfi_get_func_hash(void *func);
+
++#ifdef CONFIG_FINEIBT
++extern bool decode_fineibt_insn(struct pt_regs *regs, unsigned long *target, u32 *type);
++#else
++static inline bool
++decode_fineibt_insn(struct pt_regs *regs, unsigned long *target, u32 *type)
++{
++ return false;
++}
++
++#endif
++
+ #else
+ static inline enum bug_trap_type handle_cfi_failure(struct pt_regs *regs)
+ {
+diff --git a/arch/x86/include/asm/cpufeature.h b/arch/x86/include/asm/cpufeature.h
+index de1ad09fe8d724..7e67bacf02f379 100644
+--- a/arch/x86/include/asm/cpufeature.h
++++ b/arch/x86/include/asm/cpufeature.h
+@@ -4,7 +4,7 @@
+
+ #include <asm/processor.h>
+
+-#if defined(__KERNEL__) && !defined(__ASSEMBLY__)
++#if defined(__KERNEL__) && !defined(__ASSEMBLER__)
+
+ #include <asm/asm.h>
+ #include <linux/bitops.h>
+@@ -208,5 +208,5 @@ static __always_inline bool _static_cpu_has(u16 bit)
+ #define CPU_FEATURE_TYPEVAL boot_cpu_data.x86_vendor, boot_cpu_data.x86, \
+ boot_cpu_data.x86_model
+
+-#endif /* defined(__KERNEL__) && !defined(__ASSEMBLY__) */
++#endif /* defined(__KERNEL__) && !defined(__ASSEMBLER__) */
+ #endif /* _ASM_X86_CPUFEATURE_H */
+diff --git a/arch/x86/include/asm/cpumask.h b/arch/x86/include/asm/cpumask.h
+index 4acfd57de8f1ce..70f6b60ad67b9b 100644
+--- a/arch/x86/include/asm/cpumask.h
++++ b/arch/x86/include/asm/cpumask.h
+@@ -1,7 +1,7 @@
+ /* SPDX-License-Identifier: GPL-2.0 */
+ #ifndef _ASM_X86_CPUMASK_H
+ #define _ASM_X86_CPUMASK_H
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+ #include <linux/cpumask.h>
+
+ extern void setup_cpu_local_masks(void);
+@@ -34,5 +34,5 @@ static __always_inline void arch_cpumask_clear_cpu(int cpu, struct cpumask *dstp
+
+ #define arch_cpu_is_offline(cpu) unlikely(!arch_cpu_online(cpu))
+
+-#endif /* __ASSEMBLY__ */
++#endif /* __ASSEMBLER__ */
+ #endif /* _ASM_X86_CPUMASK_H */
+diff --git a/arch/x86/include/asm/current.h b/arch/x86/include/asm/current.h
+index bf5953883ec365..f2d0b388798089 100644
+--- a/arch/x86/include/asm/current.h
++++ b/arch/x86/include/asm/current.h
+@@ -5,7 +5,7 @@
+ #include <linux/build_bug.h>
+ #include <linux/compiler.h>
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+
+ #include <linux/cache.h>
+ #include <asm/percpu.h>
+@@ -51,6 +51,6 @@ static __always_inline struct task_struct *get_current(void)
+
+ #define current get_current()
+
+-#endif /* __ASSEMBLY__ */
++#endif /* __ASSEMBLER__ */
+
+ #endif /* _ASM_X86_CURRENT_H */
+diff --git a/arch/x86/include/asm/desc_defs.h b/arch/x86/include/asm/desc_defs.h
+index d440a65af8f394..7e6b9314758a19 100644
+--- a/arch/x86/include/asm/desc_defs.h
++++ b/arch/x86/include/asm/desc_defs.h
+@@ -58,7 +58,7 @@
+
+ #define DESC_USER (_DESC_DPL(3))
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+
+ #include <linux/types.h>
+
+@@ -166,7 +166,7 @@ struct desc_ptr {
+ unsigned long address;
+ } __attribute__((packed)) ;
+
+-#endif /* !__ASSEMBLY__ */
++#endif /* !__ASSEMBLER__ */
+
+ /* Boot IDT definitions */
+ #define BOOT_IDT_ENTRIES 32
+diff --git a/arch/x86/include/asm/dwarf2.h b/arch/x86/include/asm/dwarf2.h
+index 430fca13bb5683..302e11b15da860 100644
+--- a/arch/x86/include/asm/dwarf2.h
++++ b/arch/x86/include/asm/dwarf2.h
+@@ -2,7 +2,7 @@
+ #ifndef _ASM_X86_DWARF2_H
+ #define _ASM_X86_DWARF2_H
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+ #warning "asm/dwarf2.h should be only included in pure assembly files"
+ #endif
+
+diff --git a/arch/x86/include/asm/fixmap.h b/arch/x86/include/asm/fixmap.h
+index d0dcefb5cc59d3..4519c9f35ba04e 100644
+--- a/arch/x86/include/asm/fixmap.h
++++ b/arch/x86/include/asm/fixmap.h
+@@ -31,7 +31,7 @@
+ /* fixmap starts downwards from the 507th entry in level2_fixmap_pgt */
+ #define FIXMAP_PMD_TOP 507
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+ #include <linux/kernel.h>
+ #include <asm/apicdef.h>
+ #include <asm/page.h>
+@@ -196,5 +196,5 @@ void __init *early_memremap_decrypted_wp(resource_size_t phys_addr,
+ void __early_set_fixmap(enum fixed_addresses idx,
+ phys_addr_t phys, pgprot_t flags);
+
+-#endif /* !__ASSEMBLY__ */
++#endif /* !__ASSEMBLER__ */
+ #endif /* _ASM_X86_FIXMAP_H */
+diff --git a/arch/x86/include/asm/frame.h b/arch/x86/include/asm/frame.h
+index fb42659f6e9886..0ab65073c1cc0b 100644
+--- a/arch/x86/include/asm/frame.h
++++ b/arch/x86/include/asm/frame.h
+@@ -11,7 +11,7 @@
+
+ #ifdef CONFIG_FRAME_POINTER
+
+-#ifdef __ASSEMBLY__
++#ifdef __ASSEMBLER__
+
+ .macro FRAME_BEGIN
+ push %_ASM_BP
+@@ -51,7 +51,7 @@
+ .endm
+ #endif /* CONFIG_X86_64 */
+
+-#else /* !__ASSEMBLY__ */
++#else /* !__ASSEMBLER__ */
+
+ #define FRAME_BEGIN \
+ "push %" _ASM_BP "\n" \
+@@ -82,18 +82,18 @@ static inline unsigned long encode_frame_pointer(struct pt_regs *regs)
+
+ #endif /* CONFIG_X86_64 */
+
+-#endif /* __ASSEMBLY__ */
++#endif /* __ASSEMBLER__ */
+
+ #define FRAME_OFFSET __ASM_SEL(4, 8)
+
+ #else /* !CONFIG_FRAME_POINTER */
+
+-#ifdef __ASSEMBLY__
++#ifdef __ASSEMBLER__
+
+ .macro ENCODE_FRAME_POINTER ptregs_offset=0
+ .endm
+
+-#else /* !__ASSEMBLY */
++#else /* !__ASSEMBLER__ */
+
+ #define ENCODE_FRAME_POINTER
+
+diff --git a/arch/x86/include/asm/fred.h b/arch/x86/include/asm/fred.h
+index 25ca00bd70e834..2a29e521688153 100644
+--- a/arch/x86/include/asm/fred.h
++++ b/arch/x86/include/asm/fred.h
+@@ -32,7 +32,7 @@
+ #define FRED_CONFIG_INT_STKLVL(l) (_AT(unsigned long, l) << 9)
+ #define FRED_CONFIG_ENTRYPOINT(p) _AT(unsigned long, (p))
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+
+ #ifdef CONFIG_X86_FRED
+ #include <linux/kernel.h>
+@@ -113,6 +113,6 @@ static inline void fred_entry_from_kvm(unsigned int type, unsigned int vector) {
+ static inline void fred_sync_rsp0(unsigned long rsp0) { }
+ static inline void fred_update_rsp0(void) { }
+ #endif /* CONFIG_X86_FRED */
+-#endif /* !__ASSEMBLY__ */
++#endif /* !__ASSEMBLER__ */
+
+ #endif /* ASM_X86_FRED_H */
+diff --git a/arch/x86/include/asm/fsgsbase.h b/arch/x86/include/asm/fsgsbase.h
+index 9e7e8ca8e29977..02f239569b93d3 100644
+--- a/arch/x86/include/asm/fsgsbase.h
++++ b/arch/x86/include/asm/fsgsbase.h
+@@ -2,7 +2,7 @@
+ #ifndef _ASM_FSGSBASE_H
+ #define _ASM_FSGSBASE_H
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+
+ #ifdef CONFIG_X86_64
+
+@@ -80,6 +80,6 @@ extern unsigned long x86_fsgsbase_read_task(struct task_struct *task,
+
+ #endif /* CONFIG_X86_64 */
+
+-#endif /* __ASSEMBLY__ */
++#endif /* __ASSEMBLER__ */
+
+ #endif /* _ASM_FSGSBASE_H */
+diff --git a/arch/x86/include/asm/ftrace.h b/arch/x86/include/asm/ftrace.h
+index f9cb4d07df58fd..2d02d5b0517c19 100644
+--- a/arch/x86/include/asm/ftrace.h
++++ b/arch/x86/include/asm/ftrace.h
+@@ -22,7 +22,7 @@
+ #define ARCH_SUPPORTS_FTRACE_OPS 1
+ #endif
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+ extern void __fentry__(void);
+
+ static inline unsigned long ftrace_call_adjust(unsigned long addr)
+@@ -118,11 +118,11 @@ struct dyn_arch_ftrace {
+ };
+
+ #endif /* CONFIG_DYNAMIC_FTRACE */
+-#endif /* __ASSEMBLY__ */
++#endif /* __ASSEMBLER__ */
+ #endif /* CONFIG_FUNCTION_TRACER */
+
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+
+ void prepare_ftrace_return(unsigned long ip, unsigned long *parent,
+ unsigned long frame_pointer);
+@@ -166,6 +166,6 @@ static inline bool arch_trace_is_compat_syscall(struct pt_regs *regs)
+ }
+ #endif /* CONFIG_FTRACE_SYSCALLS && CONFIG_IA32_EMULATION */
+ #endif /* !COMPILE_OFFSETS */
+-#endif /* !__ASSEMBLY__ */
++#endif /* !__ASSEMBLER__ */
+
+ #endif /* _ASM_X86_FTRACE_H */
+diff --git a/arch/x86/include/asm/hw_irq.h b/arch/x86/include/asm/hw_irq.h
+index edebf1020e0497..162ebd73a6981e 100644
+--- a/arch/x86/include/asm/hw_irq.h
++++ b/arch/x86/include/asm/hw_irq.h
+@@ -16,7 +16,7 @@
+
+ #include <asm/irq_vectors.h>
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+
+ #include <linux/percpu.h>
+ #include <linux/profile.h>
+@@ -128,6 +128,6 @@ extern char spurious_entries_start[];
+ typedef struct irq_desc* vector_irq_t[NR_VECTORS];
+ DECLARE_PER_CPU(vector_irq_t, vector_irq);
+
+-#endif /* !ASSEMBLY_ */
++#endif /* !__ASSEMBLER__ */
+
+ #endif /* _ASM_X86_HW_IRQ_H */
+diff --git a/arch/x86/include/asm/ibt.h b/arch/x86/include/asm/ibt.h
+index 1e59581d500ca9..b04bcbb1a14efd 100644
+--- a/arch/x86/include/asm/ibt.h
++++ b/arch/x86/include/asm/ibt.h
+@@ -21,7 +21,7 @@
+
+ #define HAS_KERNEL_IBT 1
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+
+ #ifdef CONFIG_X86_64
+ #define ASM_ENDBR "endbr64\n\t"
+@@ -41,7 +41,7 @@
+ _ASM_PTR fname "\n\t" \
+ ".popsection\n\t"
+
+-static inline __attribute_const__ u32 gen_endbr(void)
++static __always_inline __attribute_const__ u32 gen_endbr(void)
+ {
+ u32 endbr;
+
+@@ -56,7 +56,7 @@ static inline __attribute_const__ u32 gen_endbr(void)
+ return endbr;
+ }
+
+-static inline __attribute_const__ u32 gen_endbr_poison(void)
++static __always_inline __attribute_const__ u32 gen_endbr_poison(void)
+ {
+ /*
+ * 4 byte NOP that isn't NOP4 (in fact it is OSP NOP3), such that it
+@@ -77,7 +77,7 @@ static inline bool is_endbr(u32 val)
+ extern __noendbr u64 ibt_save(bool disable);
+ extern __noendbr void ibt_restore(u64 save);
+
+-#else /* __ASSEMBLY__ */
++#else /* __ASSEMBLER__ */
+
+ #ifdef CONFIG_X86_64
+ #define ENDBR endbr64
+@@ -85,13 +85,13 @@ extern __noendbr void ibt_restore(u64 save);
+ #define ENDBR endbr32
+ #endif
+
+-#endif /* __ASSEMBLY__ */
++#endif /* __ASSEMBLER__ */
+
+ #else /* !IBT */
+
+ #define HAS_KERNEL_IBT 0
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+
+ #define ASM_ENDBR
+ #define IBT_NOSEAL(name)
+@@ -103,11 +103,11 @@ static inline bool is_endbr(u32 val) { return false; }
+ static inline u64 ibt_save(bool disable) { return 0; }
+ static inline void ibt_restore(u64 save) { }
+
+-#else /* __ASSEMBLY__ */
++#else /* __ASSEMBLER__ */
+
+ #define ENDBR
+
+-#endif /* __ASSEMBLY__ */
++#endif /* __ASSEMBLER__ */
+
+ #endif /* CONFIG_X86_KERNEL_IBT */
+
+diff --git a/arch/x86/include/asm/idtentry.h b/arch/x86/include/asm/idtentry.h
+index ad5c68f0509d4d..a4ec27c6798875 100644
+--- a/arch/x86/include/asm/idtentry.h
++++ b/arch/x86/include/asm/idtentry.h
+@@ -7,7 +7,7 @@
+
+ #define IDT_ALIGN (8 * (1 + HAS_KERNEL_IBT))
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+ #include <linux/entry-common.h>
+ #include <linux/hardirq.h>
+
+@@ -474,7 +474,7 @@ static inline void fred_install_sysvec(unsigned int vector, const idtentry_t fun
+ idt_install_sysvec(vector, asm_##function); \
+ }
+
+-#else /* !__ASSEMBLY__ */
++#else /* !__ASSEMBLER__ */
+
+ /*
+ * The ASM variants for DECLARE_IDTENTRY*() which emit the ASM entry stubs.
+@@ -579,7 +579,7 @@ SYM_CODE_START(spurious_entries_start)
+ SYM_CODE_END(spurious_entries_start)
+ #endif
+
+-#endif /* __ASSEMBLY__ */
++#endif /* __ASSEMBLER__ */
+
+ /*
+ * The actual entry points. Note that DECLARE_IDTENTRY*() serves two
+diff --git a/arch/x86/include/asm/inst.h b/arch/x86/include/asm/inst.h
+index 438ccd4f3cc450..e48a00b3311d55 100644
+--- a/arch/x86/include/asm/inst.h
++++ b/arch/x86/include/asm/inst.h
+@@ -6,7 +6,7 @@
+ #ifndef X86_ASM_INST_H
+ #define X86_ASM_INST_H
+
+-#ifdef __ASSEMBLY__
++#ifdef __ASSEMBLER__
+
+ #define REG_NUM_INVALID 100
+
+diff --git a/arch/x86/include/asm/intel-family.h b/arch/x86/include/asm/intel-family.h
+index ef5a06ddf02877..44fe88d6cf5c04 100644
+--- a/arch/x86/include/asm/intel-family.h
++++ b/arch/x86/include/asm/intel-family.h
+@@ -46,6 +46,7 @@
+ #define INTEL_ANY IFM(X86_FAMILY_ANY, X86_MODEL_ANY)
+
+ #define INTEL_PENTIUM_PRO IFM(6, 0x01)
++#define INTEL_PENTIUM_III_DESCHUTES IFM(6, 0x05)
+
+ #define INTEL_CORE_YONAH IFM(6, 0x0E)
+
+diff --git a/arch/x86/include/asm/irqflags.h b/arch/x86/include/asm/irqflags.h
+index 1c2db11a2c3cb9..9a9b21b78905a6 100644
+--- a/arch/x86/include/asm/irqflags.h
++++ b/arch/x86/include/asm/irqflags.h
+@@ -4,7 +4,7 @@
+
+ #include <asm/processor-flags.h>
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+
+ #include <asm/nospec-branch.h>
+
+@@ -101,7 +101,7 @@ static __always_inline void halt(void)
+ #ifdef CONFIG_PARAVIRT_XXL
+ #include <asm/paravirt.h>
+ #else
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+ #include <linux/types.h>
+
+ static __always_inline unsigned long arch_local_save_flags(void)
+@@ -137,10 +137,10 @@ static __always_inline unsigned long arch_local_irq_save(void)
+
+ #endif
+
+-#endif /* __ASSEMBLY__ */
++#endif /* __ASSEMBLER__ */
+ #endif /* CONFIG_PARAVIRT_XXL */
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+ static __always_inline int arch_irqs_disabled_flags(unsigned long flags)
+ {
+ return !(flags & X86_EFLAGS_IF);
+@@ -158,6 +158,6 @@ static __always_inline void arch_local_irq_restore(unsigned long flags)
+ if (!arch_irqs_disabled_flags(flags))
+ arch_local_irq_enable();
+ }
+-#endif /* !__ASSEMBLY__ */
++#endif /* !__ASSEMBLER__ */
+
+ #endif
+diff --git a/arch/x86/include/asm/jump_label.h b/arch/x86/include/asm/jump_label.h
+index 3f1c1d6c0da12d..61dd1dee7812e7 100644
+--- a/arch/x86/include/asm/jump_label.h
++++ b/arch/x86/include/asm/jump_label.h
+@@ -7,7 +7,7 @@
+ #include <asm/asm.h>
+ #include <asm/nops.h>
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+
+ #include <linux/stringify.h>
+ #include <linux/types.h>
+@@ -55,6 +55,6 @@ static __always_inline bool arch_static_branch_jump(struct static_key * const ke
+
+ extern int arch_jump_entry_size(struct jump_entry *entry);
+
+-#endif /* __ASSEMBLY__ */
++#endif /* __ASSEMBLER__ */
+
+ #endif
+diff --git a/arch/x86/include/asm/kasan.h b/arch/x86/include/asm/kasan.h
+index de75306b932efd..d7e33c7f096b01 100644
+--- a/arch/x86/include/asm/kasan.h
++++ b/arch/x86/include/asm/kasan.h
+@@ -23,7 +23,7 @@
+ (1ULL << (__VIRTUAL_MASK_SHIFT - \
+ KASAN_SHADOW_SCALE_SHIFT)))
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+
+ #ifdef CONFIG_KASAN
+ void __init kasan_early_init(void);
+diff --git a/arch/x86/include/asm/kexec.h b/arch/x86/include/asm/kexec.h
+index 8ad187462b68e5..c75509241ff28a 100644
+--- a/arch/x86/include/asm/kexec.h
++++ b/arch/x86/include/asm/kexec.h
+@@ -13,7 +13,7 @@
+ # define KEXEC_CONTROL_PAGE_SIZE 4096
+ # define KEXEC_CONTROL_CODE_MAX_SIZE 2048
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+
+ #include <linux/string.h>
+ #include <linux/kernel.h>
+@@ -225,6 +225,6 @@ unsigned int arch_crash_get_elfcorehdr_size(void);
+ #define crash_get_elfcorehdr_size arch_crash_get_elfcorehdr_size
+ #endif
+
+-#endif /* __ASSEMBLY__ */
++#endif /* __ASSEMBLER__ */
+
+ #endif /* _ASM_X86_KEXEC_H */
+diff --git a/arch/x86/include/asm/linkage.h b/arch/x86/include/asm/linkage.h
+index dc31b13b87a0d1..c95dad65801d55 100644
+--- a/arch/x86/include/asm/linkage.h
++++ b/arch/x86/include/asm/linkage.h
+@@ -38,7 +38,7 @@
+ #define ASM_FUNC_ALIGN __stringify(__FUNC_ALIGN)
+ #define SYM_F_ALIGN __FUNC_ALIGN
+
+-#ifdef __ASSEMBLY__
++#ifdef __ASSEMBLER__
+
+ #if defined(CONFIG_MITIGATION_RETHUNK) && !defined(__DISABLE_EXPORTS) && !defined(BUILD_VDSO)
+ #define RET jmp __x86_return_thunk
+@@ -50,7 +50,7 @@
+ #endif
+ #endif /* CONFIG_MITIGATION_RETPOLINE */
+
+-#else /* __ASSEMBLY__ */
++#else /* __ASSEMBLER__ */
+
+ #if defined(CONFIG_MITIGATION_RETHUNK) && !defined(__DISABLE_EXPORTS) && !defined(BUILD_VDSO)
+ #define ASM_RET "jmp __x86_return_thunk\n\t"
+@@ -62,7 +62,7 @@
+ #endif
+ #endif /* CONFIG_MITIGATION_RETPOLINE */
+
+-#endif /* __ASSEMBLY__ */
++#endif /* __ASSEMBLER__ */
+
+ /*
+ * Depending on -fpatchable-function-entry=N,N usage (CONFIG_CALL_PADDING) the
+diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
+index f922b682b9b4c5..1530ee301dfeaf 100644
+--- a/arch/x86/include/asm/mem_encrypt.h
++++ b/arch/x86/include/asm/mem_encrypt.h
+@@ -10,7 +10,7 @@
+ #ifndef __X86_MEM_ENCRYPT_H__
+ #define __X86_MEM_ENCRYPT_H__
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+
+ #include <linux/init.h>
+ #include <linux/cc_platform.h>
+@@ -114,6 +114,6 @@ void add_encrypt_protection_map(void);
+
+ extern char __start_bss_decrypted[], __end_bss_decrypted[], __start_bss_decrypted_unused[];
+
+-#endif /* __ASSEMBLY__ */
++#endif /* __ASSEMBLER__ */
+
+ #endif /* __X86_MEM_ENCRYPT_H__ */
+diff --git a/arch/x86/include/asm/msr.h b/arch/x86/include/asm/msr.h
+index 001853541f1e8c..9397a319d165da 100644
+--- a/arch/x86/include/asm/msr.h
++++ b/arch/x86/include/asm/msr.h
+@@ -4,7 +4,7 @@
+
+ #include "msr-index.h"
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+
+ #include <asm/asm.h>
+ #include <asm/errno.h>
+@@ -397,5 +397,5 @@ static inline int wrmsr_safe_regs_on_cpu(unsigned int cpu, u32 regs[8])
+ return wrmsr_safe_regs(regs);
+ }
+ #endif /* CONFIG_SMP */
+-#endif /* __ASSEMBLY__ */
++#endif /* __ASSEMBLER__ */
+ #endif /* _ASM_X86_MSR_H */
+diff --git a/arch/x86/include/asm/nmi.h b/arch/x86/include/asm/nmi.h
+index 41a0ebb699ec64..f677382093f360 100644
+--- a/arch/x86/include/asm/nmi.h
++++ b/arch/x86/include/asm/nmi.h
+@@ -56,6 +56,8 @@ int __register_nmi_handler(unsigned int, struct nmiaction *);
+
+ void unregister_nmi_handler(unsigned int, const char *);
+
++void set_emergency_nmi_handler(unsigned int type, nmi_handler_t handler);
++
+ void stop_nmi(void);
+ void restart_nmi(void);
+ void local_touch_nmi(void);
+diff --git a/arch/x86/include/asm/nops.h b/arch/x86/include/asm/nops.h
+index 1c1b7550fa5508..cd94221d83358b 100644
+--- a/arch/x86/include/asm/nops.h
++++ b/arch/x86/include/asm/nops.h
+@@ -82,7 +82,7 @@
+ #define ASM_NOP7 _ASM_BYTES(BYTES_NOP7)
+ #define ASM_NOP8 _ASM_BYTES(BYTES_NOP8)
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+ extern const unsigned char * const x86_nops[];
+ #endif
+
+diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
+index b1ac1d0d29ca89..0cc2d535e5c5f6 100644
+--- a/arch/x86/include/asm/nospec-branch.h
++++ b/arch/x86/include/asm/nospec-branch.h
+@@ -177,7 +177,7 @@
+ add $(BITS_PER_LONG/8), %_ASM_SP; \
+ lfence;
+
+-#ifdef __ASSEMBLY__
++#ifdef __ASSEMBLER__
+
+ /*
+ * (ab)use RETPOLINE_SAFE on RET to annotate away 'bare' RET instructions
+@@ -335,7 +335,7 @@
+ #define CLEAR_BRANCH_HISTORY_VMEXIT
+ #endif
+
+-#else /* __ASSEMBLY__ */
++#else /* __ASSEMBLER__ */
+
+ #define ITS_THUNK_SIZE 64
+
+@@ -612,6 +612,6 @@ static __always_inline void mds_idle_clear_cpu_buffers(void)
+ mds_clear_cpu_buffers();
+ }
+
+-#endif /* __ASSEMBLY__ */
++#endif /* __ASSEMBLER__ */
+
+ #endif /* _ASM_X86_NOSPEC_BRANCH_H_ */
+diff --git a/arch/x86/include/asm/orc_types.h b/arch/x86/include/asm/orc_types.h
+index 46d7e06763c9f5..e0125afa53fb9d 100644
+--- a/arch/x86/include/asm/orc_types.h
++++ b/arch/x86/include/asm/orc_types.h
+@@ -45,7 +45,7 @@
+ #define ORC_TYPE_REGS 3
+ #define ORC_TYPE_REGS_PARTIAL 4
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+ #include <asm/byteorder.h>
+
+ /*
+@@ -73,6 +73,6 @@ struct orc_entry {
+ #endif
+ } __packed;
+
+-#endif /* __ASSEMBLY__ */
++#endif /* __ASSEMBLER__ */
+
+ #endif /* _ORC_TYPES_H */
+diff --git a/arch/x86/include/asm/page.h b/arch/x86/include/asm/page.h
+index c9fe207916f487..9265f2fca99ae1 100644
+--- a/arch/x86/include/asm/page.h
++++ b/arch/x86/include/asm/page.h
+@@ -14,7 +14,7 @@
+ #include <asm/page_32.h>
+ #endif /* CONFIG_X86_64 */
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+
+ struct page;
+
+@@ -84,7 +84,7 @@ static __always_inline u64 __is_canonical_address(u64 vaddr, u8 vaddr_bits)
+ return __canonical_address(vaddr, vaddr_bits) == vaddr;
+ }
+
+-#endif /* __ASSEMBLY__ */
++#endif /* __ASSEMBLER__ */
+
+ #include <asm-generic/memory_model.h>
+ #include <asm-generic/getorder.h>
+diff --git a/arch/x86/include/asm/page_32.h b/arch/x86/include/asm/page_32.h
+index 580d71aca65a40..0c623706cb7eff 100644
+--- a/arch/x86/include/asm/page_32.h
++++ b/arch/x86/include/asm/page_32.h
+@@ -4,7 +4,7 @@
+
+ #include <asm/page_32_types.h>
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+
+ #define __phys_addr_nodebug(x) ((x) - PAGE_OFFSET)
+ #ifdef CONFIG_DEBUG_VIRTUAL
+@@ -26,6 +26,6 @@ static inline void copy_page(void *to, void *from)
+ {
+ memcpy(to, from, PAGE_SIZE);
+ }
+-#endif /* !__ASSEMBLY__ */
++#endif /* !__ASSEMBLER__ */
+
+ #endif /* _ASM_X86_PAGE_32_H */
+diff --git a/arch/x86/include/asm/page_32_types.h b/arch/x86/include/asm/page_32_types.h
+index faf9cc1c14bb6d..88e3c8d582986e 100644
+--- a/arch/x86/include/asm/page_32_types.h
++++ b/arch/x86/include/asm/page_32_types.h
+@@ -63,7 +63,7 @@
+ */
+ #define KERNEL_IMAGE_SIZE (512 * 1024 * 1024)
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+
+ /*
+ * This much address space is reserved for vmalloc() and iomap()
+@@ -75,6 +75,6 @@ extern int sysctl_legacy_va_layout;
+ extern void find_low_pfn_range(void);
+ extern void setup_bootmem_allocator(void);
+
+-#endif /* !__ASSEMBLY__ */
++#endif /* !__ASSEMBLER__ */
+
+ #endif /* _ASM_X86_PAGE_32_DEFS_H */
+diff --git a/arch/x86/include/asm/page_64.h b/arch/x86/include/asm/page_64.h
+index d63576608ce765..442357defa1178 100644
+--- a/arch/x86/include/asm/page_64.h
++++ b/arch/x86/include/asm/page_64.h
+@@ -4,7 +4,7 @@
+
+ #include <asm/page_64_types.h>
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+ #include <asm/cpufeatures.h>
+ #include <asm/alternative.h>
+
+@@ -94,7 +94,7 @@ static __always_inline unsigned long task_size_max(void)
+ }
+ #endif /* CONFIG_X86_5LEVEL */
+
+-#endif /* !__ASSEMBLY__ */
++#endif /* !__ASSEMBLER__ */
+
+ #ifdef CONFIG_X86_VSYSCALL_EMULATION
+ # define __HAVE_ARCH_GATE_AREA 1
+diff --git a/arch/x86/include/asm/page_64_types.h b/arch/x86/include/asm/page_64_types.h
+index 06ef25411d622b..1faa8f88850ab5 100644
+--- a/arch/x86/include/asm/page_64_types.h
++++ b/arch/x86/include/asm/page_64_types.h
+@@ -2,7 +2,7 @@
+ #ifndef _ASM_X86_PAGE_64_DEFS_H
+ #define _ASM_X86_PAGE_64_DEFS_H
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+ #include <asm/kaslr.h>
+ #endif
+
+diff --git a/arch/x86/include/asm/page_types.h b/arch/x86/include/asm/page_types.h
+index 974688973cf6e0..9f77bf03d74725 100644
+--- a/arch/x86/include/asm/page_types.h
++++ b/arch/x86/include/asm/page_types.h
+@@ -43,7 +43,7 @@
+ #define IOREMAP_MAX_ORDER (PMD_SHIFT)
+ #endif /* CONFIG_X86_64 */
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+
+ #ifdef CONFIG_DYNAMIC_PHYSICAL_MASK
+ extern phys_addr_t physical_mask;
+@@ -66,6 +66,6 @@ bool pfn_range_is_mapped(unsigned long start_pfn, unsigned long end_pfn);
+
+ extern void initmem_init(void);
+
+-#endif /* !__ASSEMBLY__ */
++#endif /* !__ASSEMBLER__ */
+
+ #endif /* _ASM_X86_PAGE_DEFS_H */
+diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
+index 29e7331a0c98d4..0ace044d6f2cd5 100644
+--- a/arch/x86/include/asm/paravirt.h
++++ b/arch/x86/include/asm/paravirt.h
+@@ -6,7 +6,7 @@
+
+ #include <asm/paravirt_types.h>
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+ struct mm_struct;
+ #endif
+
+@@ -15,7 +15,7 @@ struct mm_struct;
+ #include <asm/asm.h>
+ #include <asm/nospec-branch.h>
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+ #include <linux/bug.h>
+ #include <linux/types.h>
+ #include <linux/cpumask.h>
+@@ -720,7 +720,7 @@ static __always_inline unsigned long arch_local_irq_save(void)
+ extern void default_banner(void);
+ void native_pv_lock_init(void) __init;
+
+-#else /* __ASSEMBLY__ */
++#else /* __ASSEMBLER__ */
+
+ #ifdef CONFIG_X86_64
+ #ifdef CONFIG_PARAVIRT_XXL
+@@ -740,18 +740,18 @@ void native_pv_lock_init(void) __init;
+ #endif /* CONFIG_PARAVIRT_XXL */
+ #endif /* CONFIG_X86_64 */
+
+-#endif /* __ASSEMBLY__ */
++#endif /* __ASSEMBLER__ */
+ #else /* CONFIG_PARAVIRT */
+ # define default_banner x86_init_noop
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+ static inline void native_pv_lock_init(void)
+ {
+ }
+ #endif
+ #endif /* !CONFIG_PARAVIRT */
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+ #ifndef CONFIG_PARAVIRT_XXL
+ static inline void paravirt_enter_mmap(struct mm_struct *mm)
+ {
+@@ -769,5 +769,5 @@ static inline void paravirt_set_cap(void)
+ {
+ }
+ #endif
+-#endif /* __ASSEMBLY__ */
++#endif /* __ASSEMBLER__ */
+ #endif /* _ASM_X86_PARAVIRT_H */
+diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
+index abccfccc2e3fa5..1fca6281988a99 100644
+--- a/arch/x86/include/asm/paravirt_types.h
++++ b/arch/x86/include/asm/paravirt_types.h
+@@ -4,7 +4,7 @@
+
+ #ifdef CONFIG_PARAVIRT
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+ #include <linux/types.h>
+
+ #include <asm/desc_defs.h>
+@@ -518,7 +518,7 @@ unsigned long pv_native_read_cr2(void);
+
+ #define paravirt_nop ((void *)nop_func)
+
+-#endif /* __ASSEMBLY__ */
++#endif /* __ASSEMBLER__ */
+
+ #define ALT_NOT_XEN ALT_NOT(X86_FEATURE_XENPV)
+
+diff --git a/arch/x86/include/asm/percpu.h b/arch/x86/include/asm/percpu.h
+index e525cd85f999fd..9c47da5b0a2a29 100644
+--- a/arch/x86/include/asm/percpu.h
++++ b/arch/x86/include/asm/percpu.h
+@@ -10,7 +10,7 @@
+ # define __percpu_rel
+ #endif
+
+-#ifdef __ASSEMBLY__
++#ifdef __ASSEMBLER__
+
+ #ifdef CONFIG_SMP
+ # define __percpu %__percpu_seg:
+@@ -350,9 +350,9 @@ do { \
+ \
+ asm qual (ALTERNATIVE("call this_cpu_cmpxchg8b_emu", \
+ "cmpxchg8b " __percpu_arg([var]), X86_FEATURE_CX8) \
+- : [var] "+m" (__my_cpu_var(_var)), \
+- "+a" (old__.low), \
+- "+d" (old__.high) \
++ : ALT_OUTPUT_SP([var] "+m" (__my_cpu_var(_var)), \
++ "+a" (old__.low), \
++ "+d" (old__.high)) \
+ : "b" (new__.low), \
+ "c" (new__.high), \
+ "S" (&(_var)) \
+@@ -381,10 +381,10 @@ do { \
+ asm qual (ALTERNATIVE("call this_cpu_cmpxchg8b_emu", \
+ "cmpxchg8b " __percpu_arg([var]), X86_FEATURE_CX8) \
+ CC_SET(z) \
+- : CC_OUT(z) (success), \
+- [var] "+m" (__my_cpu_var(_var)), \
+- "+a" (old__.low), \
+- "+d" (old__.high) \
++ : ALT_OUTPUT_SP(CC_OUT(z) (success), \
++ [var] "+m" (__my_cpu_var(_var)), \
++ "+a" (old__.low), \
++ "+d" (old__.high)) \
+ : "b" (new__.low), \
+ "c" (new__.high), \
+ "S" (&(_var)) \
+@@ -421,9 +421,9 @@ do { \
+ \
+ asm qual (ALTERNATIVE("call this_cpu_cmpxchg16b_emu", \
+ "cmpxchg16b " __percpu_arg([var]), X86_FEATURE_CX16) \
+- : [var] "+m" (__my_cpu_var(_var)), \
+- "+a" (old__.low), \
+- "+d" (old__.high) \
++ : ALT_OUTPUT_SP([var] "+m" (__my_cpu_var(_var)), \
++ "+a" (old__.low), \
++ "+d" (old__.high)) \
+ : "b" (new__.low), \
+ "c" (new__.high), \
+ "S" (&(_var)) \
+@@ -452,10 +452,10 @@ do { \
+ asm qual (ALTERNATIVE("call this_cpu_cmpxchg16b_emu", \
+ "cmpxchg16b " __percpu_arg([var]), X86_FEATURE_CX16) \
+ CC_SET(z) \
+- : CC_OUT(z) (success), \
+- [var] "+m" (__my_cpu_var(_var)), \
+- "+a" (old__.low), \
+- "+d" (old__.high) \
++ : ALT_OUTPUT_SP(CC_OUT(z) (success), \
++ [var] "+m" (__my_cpu_var(_var)), \
++ "+a" (old__.low), \
++ "+d" (old__.high)) \
+ : "b" (new__.low), \
+ "c" (new__.high), \
+ "S" (&(_var)) \
+@@ -619,7 +619,7 @@ do { \
+ /* We can use this directly for local CPU (faster). */
+ DECLARE_PER_CPU_READ_MOSTLY(unsigned long, this_cpu_off);
+
+-#endif /* !__ASSEMBLY__ */
++#endif /* !__ASSEMBLER__ */
+
+ #ifdef CONFIG_SMP
+
+diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
+index 0ba8d20f2d1d56..d37c1c32e74b9b 100644
+--- a/arch/x86/include/asm/perf_event.h
++++ b/arch/x86/include/asm/perf_event.h
+@@ -536,6 +536,7 @@ struct pebs_xmm {
+ */
+ #define IBS_OP_CUR_CNT (0xFFF80ULL<<32)
+ #define IBS_OP_CUR_CNT_RAND (0x0007FULL<<32)
++#define IBS_OP_CUR_CNT_EXT_MASK (0x7FULL<<52)
+ #define IBS_OP_CNT_CTL (1ULL<<19)
+ #define IBS_OP_VAL (1ULL<<18)
+ #define IBS_OP_ENABLE (1ULL<<17)
+diff --git a/arch/x86/include/asm/pgtable-2level_types.h b/arch/x86/include/asm/pgtable-2level_types.h
+index 4a12c276b1812c..66425424ce91a1 100644
+--- a/arch/x86/include/asm/pgtable-2level_types.h
++++ b/arch/x86/include/asm/pgtable-2level_types.h
+@@ -2,7 +2,7 @@
+ #ifndef _ASM_X86_PGTABLE_2LEVEL_DEFS_H
+ #define _ASM_X86_PGTABLE_2LEVEL_DEFS_H
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+ #include <linux/types.h>
+
+ typedef unsigned long pteval_t;
+@@ -16,7 +16,7 @@ typedef union {
+ pteval_t pte;
+ pteval_t pte_low;
+ } pte_t;
+-#endif /* !__ASSEMBLY__ */
++#endif /* !__ASSEMBLER__ */
+
+ #define SHARED_KERNEL_PMD 0
+
+diff --git a/arch/x86/include/asm/pgtable-3level_types.h b/arch/x86/include/asm/pgtable-3level_types.h
+index 80911349519e8e..9d5b257d44e3cb 100644
+--- a/arch/x86/include/asm/pgtable-3level_types.h
++++ b/arch/x86/include/asm/pgtable-3level_types.h
+@@ -2,7 +2,7 @@
+ #ifndef _ASM_X86_PGTABLE_3LEVEL_DEFS_H
+ #define _ASM_X86_PGTABLE_3LEVEL_DEFS_H
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+ #include <linux/types.h>
+
+ typedef u64 pteval_t;
+@@ -25,7 +25,7 @@ typedef union {
+ };
+ pmdval_t pmd;
+ } pmd_t;
+-#endif /* !__ASSEMBLY__ */
++#endif /* !__ASSEMBLER__ */
+
+ #define SHARED_KERNEL_PMD (!static_cpu_has(X86_FEATURE_PTI))
+
+diff --git a/arch/x86/include/asm/pgtable-invert.h b/arch/x86/include/asm/pgtable-invert.h
+index a0c1525f1b6f41..e12e52ae8083d2 100644
+--- a/arch/x86/include/asm/pgtable-invert.h
++++ b/arch/x86/include/asm/pgtable-invert.h
+@@ -2,7 +2,7 @@
+ #ifndef _ASM_PGTABLE_INVERT_H
+ #define _ASM_PGTABLE_INVERT_H 1
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+
+ /*
+ * A clear pte value is special, and doesn't get inverted.
+@@ -36,6 +36,6 @@ static inline u64 flip_protnone_guard(u64 oldval, u64 val, u64 mask)
+ return val;
+ }
+
+-#endif /* __ASSEMBLY__ */
++#endif /* __ASSEMBLER__ */
+
+ #endif
+diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
+index 593f10aabd45a6..7bd6bd6df4a112 100644
+--- a/arch/x86/include/asm/pgtable.h
++++ b/arch/x86/include/asm/pgtable.h
+@@ -15,7 +15,7 @@
+ cachemode2protval(_PAGE_CACHE_MODE_UC_MINUS))) \
+ : (prot))
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+ #include <linux/spinlock.h>
+ #include <asm/x86_init.h>
+ #include <asm/pkru.h>
+@@ -973,7 +973,7 @@ static inline pgd_t pti_set_user_pgtbl(pgd_t *pgdp, pgd_t pgd)
+ }
+ #endif /* CONFIG_MITIGATION_PAGE_TABLE_ISOLATION */
+
+-#endif /* __ASSEMBLY__ */
++#endif /* __ASSEMBLER__ */
+
+
+ #ifdef CONFIG_X86_32
+@@ -982,7 +982,7 @@ static inline pgd_t pti_set_user_pgtbl(pgd_t *pgdp, pgd_t pgd)
+ # include <asm/pgtable_64.h>
+ #endif
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+ #include <linux/mm_types.h>
+ #include <linux/mmdebug.h>
+ #include <linux/log2.h>
+@@ -1233,12 +1233,12 @@ static inline int pgd_none(pgd_t pgd)
+ }
+ #endif /* CONFIG_PGTABLE_LEVELS > 4 */
+
+-#endif /* __ASSEMBLY__ */
++#endif /* __ASSEMBLER__ */
+
+ #define KERNEL_PGD_BOUNDARY pgd_index(PAGE_OFFSET)
+ #define KERNEL_PGD_PTRS (PTRS_PER_PGD - KERNEL_PGD_BOUNDARY)
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+
+ extern int direct_gbpages;
+ void init_mem_mapping(void);
+@@ -1812,6 +1812,6 @@ bool arch_is_platform_page(u64 paddr);
+ WARN_ON_ONCE(pgd_present(*pgdp) && !pgd_same(*pgdp, pgd)); \
+ set_pgd(pgdp, pgd); \
+ })
+-#endif /* __ASSEMBLY__ */
++#endif /* __ASSEMBLER__ */
+
+ #endif /* _ASM_X86_PGTABLE_H */
+diff --git a/arch/x86/include/asm/pgtable_32.h b/arch/x86/include/asm/pgtable_32.h
+index 7d4ad8907297c3..b612cc57a4d349 100644
+--- a/arch/x86/include/asm/pgtable_32.h
++++ b/arch/x86/include/asm/pgtable_32.h
+@@ -13,7 +13,7 @@
+ * This file contains the functions and defines necessary to modify and use
+ * the i386 page table tree.
+ */
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+ #include <asm/processor.h>
+ #include <linux/threads.h>
+ #include <asm/paravirt.h>
+@@ -45,7 +45,7 @@ do { \
+ flush_tlb_one_kernel((vaddr)); \
+ } while (0)
+
+-#endif /* !__ASSEMBLY__ */
++#endif /* !__ASSEMBLER__ */
+
+ /*
+ * This is used to calculate the .brk reservation for initial pagetables.
+diff --git a/arch/x86/include/asm/pgtable_32_areas.h b/arch/x86/include/asm/pgtable_32_areas.h
+index b6355416a15a89..921148b4296761 100644
+--- a/arch/x86/include/asm/pgtable_32_areas.h
++++ b/arch/x86/include/asm/pgtable_32_areas.h
+@@ -13,7 +13,7 @@
+ */
+ #define VMALLOC_OFFSET (8 * 1024 * 1024)
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+ extern bool __vmalloc_start_set; /* set once high_memory is set */
+ #endif
+
+diff --git a/arch/x86/include/asm/pgtable_64.h b/arch/x86/include/asm/pgtable_64.h
+index d1426b64c1b971..b89f8f1194a9ff 100644
+--- a/arch/x86/include/asm/pgtable_64.h
++++ b/arch/x86/include/asm/pgtable_64.h
+@@ -5,7 +5,7 @@
+ #include <linux/const.h>
+ #include <asm/pgtable_64_types.h>
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+
+ /*
+ * This file contains the functions and defines necessary to modify and use
+@@ -270,7 +270,7 @@ static inline bool gup_fast_permitted(unsigned long start, unsigned long end)
+
+ #include <asm/pgtable-invert.h>
+
+-#else /* __ASSEMBLY__ */
++#else /* __ASSEMBLER__ */
+
+ #define l4_index(x) (((x) >> 39) & 511)
+ #define pud_index(x) (((x) >> PUD_SHIFT) & (PTRS_PER_PUD - 1))
+@@ -291,5 +291,5 @@ L3_START_KERNEL = pud_index(__START_KERNEL_map)
+ i = i + 1 ; \
+ .endr
+
+-#endif /* __ASSEMBLY__ */
++#endif /* __ASSEMBLER__ */
+ #endif /* _ASM_X86_PGTABLE_64_H */
+diff --git a/arch/x86/include/asm/pgtable_64_types.h b/arch/x86/include/asm/pgtable_64_types.h
+index ec68f8369bdca4..5bb782d856f2c4 100644
+--- a/arch/x86/include/asm/pgtable_64_types.h
++++ b/arch/x86/include/asm/pgtable_64_types.h
+@@ -4,7 +4,7 @@
+
+ #include <asm/sparsemem.h>
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+ #include <linux/types.h>
+ #include <asm/kaslr.h>
+
+@@ -44,7 +44,7 @@ static inline bool pgtable_l5_enabled(void)
+ extern unsigned int pgdir_shift;
+ extern unsigned int ptrs_per_p4d;
+
+-#endif /* !__ASSEMBLY__ */
++#endif /* !__ASSEMBLER__ */
+
+ #define SHARED_KERNEL_PMD 0
+
+diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
+index 4b804531b03c3c..ded7075c606343 100644
+--- a/arch/x86/include/asm/pgtable_types.h
++++ b/arch/x86/include/asm/pgtable_types.h
+@@ -164,7 +164,7 @@
+ * to have the WB mode at index 0 (all bits clear). This is the default
+ * right now and likely would break too much if changed.
+ */
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+ enum page_cache_mode {
+ _PAGE_CACHE_MODE_WB = 0,
+ _PAGE_CACHE_MODE_WC = 1,
+@@ -239,7 +239,7 @@ enum page_cache_mode {
+ #define __PAGE_KERNEL_IO_NOCACHE __PAGE_KERNEL_NOCACHE
+
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+
+ #define __PAGE_KERNEL_ENC (__PAGE_KERNEL | _ENC)
+ #define __PAGE_KERNEL_ENC_WP (__PAGE_KERNEL_WP | _ENC)
+@@ -262,7 +262,7 @@ enum page_cache_mode {
+ #define PAGE_KERNEL_IO __pgprot_mask(__PAGE_KERNEL_IO)
+ #define PAGE_KERNEL_IO_NOCACHE __pgprot_mask(__PAGE_KERNEL_IO_NOCACHE)
+
+-#endif /* __ASSEMBLY__ */
++#endif /* __ASSEMBLER__ */
+
+ /*
+ * early identity mapping pte attrib macros.
+@@ -281,7 +281,7 @@ enum page_cache_mode {
+ # include <asm/pgtable_64_types.h>
+ #endif
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+
+ #include <linux/types.h>
+
+@@ -580,6 +580,6 @@ extern int __init kernel_map_pages_in_pgd(pgd_t *pgd, u64 pfn,
+ unsigned long page_flags);
+ extern int __init kernel_unmap_pages_in_pgd(pgd_t *pgd, unsigned long address,
+ unsigned long numpages);
+-#endif /* !__ASSEMBLY__ */
++#endif /* !__ASSEMBLER__ */
+
+ #endif /* _ASM_X86_PGTABLE_DEFS_H */
+diff --git a/arch/x86/include/asm/prom.h b/arch/x86/include/asm/prom.h
+index 365798cb4408d9..5d0dbab8526405 100644
+--- a/arch/x86/include/asm/prom.h
++++ b/arch/x86/include/asm/prom.h
+@@ -8,7 +8,7 @@
+
+ #ifndef _ASM_X86_PROM_H
+ #define _ASM_X86_PROM_H
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+
+ #include <linux/of.h>
+ #include <linux/types.h>
+@@ -33,5 +33,5 @@ static inline void x86_flattree_get_config(void) { }
+
+ extern char cmd_line[COMMAND_LINE_SIZE];
+
+-#endif /* __ASSEMBLY__ */
++#endif /* __ASSEMBLER__ */
+ #endif
+diff --git a/arch/x86/include/asm/pti.h b/arch/x86/include/asm/pti.h
+index ab167c96b9ab47..88d0a1ab1f77ee 100644
+--- a/arch/x86/include/asm/pti.h
++++ b/arch/x86/include/asm/pti.h
+@@ -1,7 +1,7 @@
+ /* SPDX-License-Identifier: GPL-2.0 */
+ #ifndef _ASM_X86_PTI_H
+ #define _ASM_X86_PTI_H
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+
+ #ifdef CONFIG_MITIGATION_PAGE_TABLE_ISOLATION
+ extern void pti_init(void);
+@@ -11,5 +11,5 @@ extern void pti_finalize(void);
+ static inline void pti_check_boottime_disable(void) { }
+ #endif
+
+-#endif /* __ASSEMBLY__ */
++#endif /* __ASSEMBLER__ */
+ #endif /* _ASM_X86_PTI_H */
+diff --git a/arch/x86/include/asm/ptrace.h b/arch/x86/include/asm/ptrace.h
+index 5a83fbd9bc0b44..50f75467f73d0f 100644
+--- a/arch/x86/include/asm/ptrace.h
++++ b/arch/x86/include/asm/ptrace.h
+@@ -6,7 +6,7 @@
+ #include <asm/page_types.h>
+ #include <uapi/asm/ptrace.h>
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+ #ifdef __i386__
+
+ struct pt_regs {
+@@ -469,5 +469,5 @@ extern int do_set_thread_area(struct task_struct *p, int idx,
+ # define do_set_thread_area_64(p, s, t) (0)
+ #endif
+
+-#endif /* !__ASSEMBLY__ */
++#endif /* !__ASSEMBLER__ */
+ #endif /* _ASM_X86_PTRACE_H */
+diff --git a/arch/x86/include/asm/purgatory.h b/arch/x86/include/asm/purgatory.h
+index 5528e93250494c..2fee5e9f1ccc38 100644
+--- a/arch/x86/include/asm/purgatory.h
++++ b/arch/x86/include/asm/purgatory.h
+@@ -2,10 +2,10 @@
+ #ifndef _ASM_X86_PURGATORY_H
+ #define _ASM_X86_PURGATORY_H
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+ #include <linux/purgatory.h>
+
+ extern void purgatory(void);
+-#endif /* __ASSEMBLY__ */
++#endif /* __ASSEMBLER__ */
+
+ #endif /* _ASM_PURGATORY_H */
+diff --git a/arch/x86/include/asm/pvclock-abi.h b/arch/x86/include/asm/pvclock-abi.h
+index 1436226efe3ef8..b9fece5fc96d6f 100644
+--- a/arch/x86/include/asm/pvclock-abi.h
++++ b/arch/x86/include/asm/pvclock-abi.h
+@@ -1,7 +1,7 @@
+ /* SPDX-License-Identifier: GPL-2.0 */
+ #ifndef _ASM_X86_PVCLOCK_ABI_H
+ #define _ASM_X86_PVCLOCK_ABI_H
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+
+ /*
+ * These structs MUST NOT be changed.
+@@ -44,5 +44,5 @@ struct pvclock_wall_clock {
+ #define PVCLOCK_GUEST_STOPPED (1 << 1)
+ /* PVCLOCK_COUNTS_FROM_ZERO broke ABI and can't be used anymore. */
+ #define PVCLOCK_COUNTS_FROM_ZERO (1 << 2)
+-#endif /* __ASSEMBLY__ */
++#endif /* __ASSEMBLER__ */
+ #endif /* _ASM_X86_PVCLOCK_ABI_H */
+diff --git a/arch/x86/include/asm/realmode.h b/arch/x86/include/asm/realmode.h
+index 87e5482acd0dca..f607081a022abe 100644
+--- a/arch/x86/include/asm/realmode.h
++++ b/arch/x86/include/asm/realmode.h
+@@ -9,7 +9,7 @@
+ #define TH_FLAGS_SME_ACTIVE_BIT 0
+ #define TH_FLAGS_SME_ACTIVE BIT(TH_FLAGS_SME_ACTIVE_BIT)
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+
+ #include <linux/types.h>
+ #include <asm/io.h>
+@@ -95,6 +95,6 @@ void reserve_real_mode(void);
+ void load_trampoline_pgtable(void);
+ void init_real_mode(void);
+
+-#endif /* __ASSEMBLY__ */
++#endif /* __ASSEMBLER__ */
+
+ #endif /* _ARCH_X86_REALMODE_H */
+diff --git a/arch/x86/include/asm/segment.h b/arch/x86/include/asm/segment.h
+index 9d6411c6592050..77d8f49b92bdd0 100644
+--- a/arch/x86/include/asm/segment.h
++++ b/arch/x86/include/asm/segment.h
+@@ -233,7 +233,7 @@
+ #define VDSO_CPUNODE_BITS 12
+ #define VDSO_CPUNODE_MASK 0xfff
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+
+ /* Helper functions to store/load CPU and node numbers */
+
+@@ -265,7 +265,7 @@ static inline void vdso_read_cpunode(unsigned *cpu, unsigned *node)
+ *node = (p >> VDSO_CPUNODE_BITS);
+ }
+
+-#endif /* !__ASSEMBLY__ */
++#endif /* !__ASSEMBLER__ */
+
+ #ifdef __KERNEL__
+
+@@ -286,7 +286,7 @@ static inline void vdso_read_cpunode(unsigned *cpu, unsigned *node)
+ */
+ #define XEN_EARLY_IDT_HANDLER_SIZE (8 + ENDBR_INSN_SIZE)
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+
+ extern const char early_idt_handler_array[NUM_EXCEPTION_VECTORS][EARLY_IDT_HANDLER_SIZE];
+ extern void early_ignore_irq(void);
+@@ -350,7 +350,7 @@ static inline void __loadsegment_fs(unsigned short value)
+ #define savesegment(seg, value) \
+ asm("mov %%" #seg ",%0":"=r" (value) : : "memory")
+
+-#endif /* !__ASSEMBLY__ */
++#endif /* !__ASSEMBLER__ */
+ #endif /* __KERNEL__ */
+
+ #endif /* _ASM_X86_SEGMENT_H */
+diff --git a/arch/x86/include/asm/setup.h b/arch/x86/include/asm/setup.h
+index 85f4fde3515c43..09201d47c967b9 100644
+--- a/arch/x86/include/asm/setup.h
++++ b/arch/x86/include/asm/setup.h
+@@ -27,7 +27,7 @@
+ #define OLD_CL_ADDRESS 0x020 /* Relative to real mode data */
+ #define NEW_CL_POINTER 0x228 /* Relative to real mode data */
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+ #include <linux/cache.h>
+
+ #include <asm/bootparam.h>
+@@ -141,7 +141,7 @@ extern bool builtin_cmdline_added __ro_after_init;
+ #define builtin_cmdline_added 0
+ #endif
+
+-#else /* __ASSEMBLY */
++#else /* __ASSEMBLER__ */
+
+ .macro __RESERVE_BRK name, size
+ .pushsection .bss..brk, "aw"
+@@ -153,6 +153,6 @@ SYM_DATA_END(__brk_\name)
+
+ #define RESERVE_BRK(name, size) __RESERVE_BRK name, size
+
+-#endif /* __ASSEMBLY__ */
++#endif /* __ASSEMBLER__ */
+
+ #endif /* _ASM_X86_SETUP_H */
+diff --git a/arch/x86/include/asm/setup_data.h b/arch/x86/include/asm/setup_data.h
+index 77c51111a89394..7bb16f843c93d9 100644
+--- a/arch/x86/include/asm/setup_data.h
++++ b/arch/x86/include/asm/setup_data.h
+@@ -4,7 +4,7 @@
+
+ #include <uapi/asm/setup_data.h>
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+
+ struct pci_setup_rom {
+ struct setup_data data;
+@@ -27,6 +27,6 @@ struct efi_setup_data {
+ u64 reserved[8];
+ };
+
+-#endif /* __ASSEMBLY__ */
++#endif /* __ASSEMBLER__ */
+
+ #endif /* _ASM_X86_SETUP_DATA_H */
+diff --git a/arch/x86/include/asm/sev-common.h b/arch/x86/include/asm/sev-common.h
+index dcbccdb280f9e8..1385227125ab4f 100644
+--- a/arch/x86/include/asm/sev-common.h
++++ b/arch/x86/include/asm/sev-common.h
+@@ -116,7 +116,7 @@ enum psc_op {
+ #define GHCB_MSR_VMPL_REQ 0x016
+ #define GHCB_MSR_VMPL_REQ_LEVEL(v) \
+ /* GHCBData[39:32] */ \
+- (((u64)(v) & GENMASK_ULL(7, 0) << 32) | \
++ ((((u64)(v) & GENMASK_ULL(7, 0)) << 32) | \
+ /* GHCBDdata[11:0] */ \
+ GHCB_MSR_VMPL_REQ)
+
+diff --git a/arch/x86/include/asm/shared/tdx.h b/arch/x86/include/asm/shared/tdx.h
+index fcbbef484a78e5..a28ff6b141458b 100644
+--- a/arch/x86/include/asm/shared/tdx.h
++++ b/arch/x86/include/asm/shared/tdx.h
+@@ -106,7 +106,7 @@
+ #define TDX_PS_1G 2
+ #define TDX_PS_NR (TDX_PS_1G + 1)
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+
+ #include <linux/compiler_attributes.h>
+
+@@ -177,5 +177,5 @@ static __always_inline u64 hcall_func(u64 exit_reason)
+ return exit_reason;
+ }
+
+-#endif /* !__ASSEMBLY__ */
++#endif /* !__ASSEMBLER__ */
+ #endif /* _ASM_X86_SHARED_TDX_H */
+diff --git a/arch/x86/include/asm/shstk.h b/arch/x86/include/asm/shstk.h
+index 4cb77e004615df..ba6f2fe438488d 100644
+--- a/arch/x86/include/asm/shstk.h
++++ b/arch/x86/include/asm/shstk.h
+@@ -2,7 +2,7 @@
+ #ifndef _ASM_X86_SHSTK_H
+ #define _ASM_X86_SHSTK_H
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+ #include <linux/types.h>
+
+ struct task_struct;
+@@ -37,6 +37,6 @@ static inline int shstk_update_last_frame(unsigned long val) { return 0; }
+ static inline bool shstk_is_enabled(void) { return false; }
+ #endif /* CONFIG_X86_USER_SHADOW_STACK */
+
+-#endif /* __ASSEMBLY__ */
++#endif /* __ASSEMBLER__ */
+
+ #endif /* _ASM_X86_SHSTK_H */
+diff --git a/arch/x86/include/asm/signal.h b/arch/x86/include/asm/signal.h
+index 4a4043ca64934a..c72d461753742f 100644
+--- a/arch/x86/include/asm/signal.h
++++ b/arch/x86/include/asm/signal.h
+@@ -2,7 +2,7 @@
+ #ifndef _ASM_X86_SIGNAL_H
+ #define _ASM_X86_SIGNAL_H
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+ #include <linux/linkage.h>
+
+ /* Most things should be clean enough to redefine this at will, if care
+@@ -28,9 +28,9 @@ typedef struct {
+ #define SA_IA32_ABI 0x02000000u
+ #define SA_X32_ABI 0x01000000u
+
+-#endif /* __ASSEMBLY__ */
++#endif /* __ASSEMBLER__ */
+ #include <uapi/asm/signal.h>
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+
+ #define __ARCH_HAS_SA_RESTORER
+
+@@ -101,5 +101,5 @@ struct pt_regs;
+
+ #endif /* !__i386__ */
+
+-#endif /* __ASSEMBLY__ */
++#endif /* __ASSEMBLER__ */
+ #endif /* _ASM_X86_SIGNAL_H */
+diff --git a/arch/x86/include/asm/smap.h b/arch/x86/include/asm/smap.h
+index 2de1e5a75c5735..daea94c2993c5f 100644
+--- a/arch/x86/include/asm/smap.h
++++ b/arch/x86/include/asm/smap.h
+@@ -13,7 +13,7 @@
+ #include <asm/cpufeatures.h>
+ #include <asm/alternative.h>
+
+-#ifdef __ASSEMBLY__
++#ifdef __ASSEMBLER__
+
+ #define ASM_CLAC \
+ ALTERNATIVE "", "clac", X86_FEATURE_SMAP
+@@ -21,7 +21,7 @@
+ #define ASM_STAC \
+ ALTERNATIVE "", "stac", X86_FEATURE_SMAP
+
+-#else /* __ASSEMBLY__ */
++#else /* __ASSEMBLER__ */
+
+ static __always_inline void clac(void)
+ {
+@@ -61,6 +61,6 @@ static __always_inline void smap_restore(unsigned long flags)
+ #define ASM_STAC \
+ ALTERNATIVE("", "stac", X86_FEATURE_SMAP)
+
+-#endif /* __ASSEMBLY__ */
++#endif /* __ASSEMBLER__ */
+
+ #endif /* _ASM_X86_SMAP_H */
+diff --git a/arch/x86/include/asm/smp.h b/arch/x86/include/asm/smp.h
+index ca073f40698fad..d234a6321c189c 100644
+--- a/arch/x86/include/asm/smp.h
++++ b/arch/x86/include/asm/smp.h
+@@ -1,7 +1,7 @@
+ /* SPDX-License-Identifier: GPL-2.0 */
+ #ifndef _ASM_X86_SMP_H
+ #define _ASM_X86_SMP_H
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+ #include <linux/cpumask.h>
+
+ #include <asm/cpumask.h>
+@@ -175,7 +175,7 @@ extern void nmi_selftest(void);
+ extern unsigned int smpboot_control;
+ extern unsigned long apic_mmio_base;
+
+-#endif /* !__ASSEMBLY__ */
++#endif /* !__ASSEMBLER__ */
+
+ /* Control bits for startup_64 */
+ #define STARTUP_READ_APICID 0x80000000
+diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h
+index 40f9a97371a906..4a1922ec80cf76 100644
+--- a/arch/x86/include/asm/tdx.h
++++ b/arch/x86/include/asm/tdx.h
+@@ -30,7 +30,7 @@
+ #define TDX_SUCCESS 0ULL
+ #define TDX_RND_NO_ENTROPY 0x8000020300000000ULL
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+
+ #include <uapi/asm/mce.h>
+
+@@ -126,5 +126,5 @@ static inline int tdx_enable(void) { return -ENODEV; }
+ static inline const char *tdx_dump_mce_info(struct mce *m) { return NULL; }
+ #endif /* CONFIG_INTEL_TDX_HOST */
+
+-#endif /* !__ASSEMBLY__ */
++#endif /* !__ASSEMBLER__ */
+ #endif /* _ASM_X86_TDX_H */
+diff --git a/arch/x86/include/asm/thread_info.h b/arch/x86/include/asm/thread_info.h
+index a55c214f3ba64d..9282465eea21d3 100644
+--- a/arch/x86/include/asm/thread_info.h
++++ b/arch/x86/include/asm/thread_info.h
+@@ -54,7 +54,7 @@
+ * - this struct should fit entirely inside of one cache line
+ * - this struct shares the supervisor stack pages
+ */
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+ struct task_struct;
+ #include <asm/cpufeature.h>
+ #include <linux/atomic.h>
+@@ -73,7 +73,7 @@ struct thread_info {
+ .flags = 0, \
+ }
+
+-#else /* !__ASSEMBLY__ */
++#else /* !__ASSEMBLER__ */
+
+ #include <asm/asm-offsets.h>
+
+@@ -161,7 +161,7 @@ struct thread_info {
+ *
+ * preempt_count needs to be 1 initially, until the scheduler is functional.
+ */
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+
+ /*
+ * Walks up the stack frames to make sure that the specified object is
+@@ -213,7 +213,7 @@ static inline int arch_within_stack_frames(const void * const stack,
+ #endif
+ }
+
+-#endif /* !__ASSEMBLY__ */
++#endif /* !__ASSEMBLER__ */
+
+ /*
+ * Thread-synchronous status.
+@@ -224,7 +224,7 @@ static inline int arch_within_stack_frames(const void * const stack,
+ */
+ #define TS_COMPAT 0x0002 /* 32bit syscall active (64BIT)*/
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+ #ifdef CONFIG_COMPAT
+ #define TS_I386_REGS_POKED 0x0004 /* regs poked by 32-bit ptracer */
+
+@@ -242,6 +242,6 @@ static inline int arch_within_stack_frames(const void * const stack,
+
+ extern void arch_setup_new_exec(void);
+ #define arch_setup_new_exec arch_setup_new_exec
+-#endif /* !__ASSEMBLY__ */
++#endif /* !__ASSEMBLER__ */
+
+ #endif /* _ASM_X86_THREAD_INFO_H */
+diff --git a/arch/x86/include/asm/unwind_hints.h b/arch/x86/include/asm/unwind_hints.h
+index 85cc57cb65392e..8f4579c5a6f8b9 100644
+--- a/arch/x86/include/asm/unwind_hints.h
++++ b/arch/x86/include/asm/unwind_hints.h
+@@ -5,7 +5,7 @@
+
+ #include "orc_types.h"
+
+-#ifdef __ASSEMBLY__
++#ifdef __ASSEMBLER__
+
+ .macro UNWIND_HINT_END_OF_STACK
+ UNWIND_HINT type=UNWIND_HINT_TYPE_END_OF_STACK
+@@ -88,6 +88,6 @@
+ #define UNWIND_HINT_RESTORE \
+ UNWIND_HINT(UNWIND_HINT_TYPE_RESTORE, 0, 0, 0)
+
+-#endif /* __ASSEMBLY__ */
++#endif /* __ASSEMBLER__ */
+
+ #endif /* _ASM_X86_UNWIND_HINTS_H */
+diff --git a/arch/x86/include/asm/vdso/getrandom.h b/arch/x86/include/asm/vdso/getrandom.h
+index 2bf9c0e970c3e7..785f8edcb9c999 100644
+--- a/arch/x86/include/asm/vdso/getrandom.h
++++ b/arch/x86/include/asm/vdso/getrandom.h
+@@ -5,7 +5,7 @@
+ #ifndef __ASM_VDSO_GETRANDOM_H
+ #define __ASM_VDSO_GETRANDOM_H
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+
+ #include <asm/unistd.h>
+
+@@ -37,6 +37,6 @@ static __always_inline const struct vdso_rng_data *__arch_get_vdso_rng_data(void
+ return &vdso_rng_data;
+ }
+
+-#endif /* !__ASSEMBLY__ */
++#endif /* !__ASSEMBLER__ */
+
+ #endif /* __ASM_VDSO_GETRANDOM_H */
+diff --git a/arch/x86/include/asm/vdso/gettimeofday.h b/arch/x86/include/asm/vdso/gettimeofday.h
+index 375a34b0f36579..428f3f4c2235f4 100644
+--- a/arch/x86/include/asm/vdso/gettimeofday.h
++++ b/arch/x86/include/asm/vdso/gettimeofday.h
+@@ -10,7 +10,7 @@
+ #ifndef __ASM_VDSO_GETTIMEOFDAY_H
+ #define __ASM_VDSO_GETTIMEOFDAY_H
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+
+ #include <uapi/linux/time.h>
+ #include <asm/vgtod.h>
+@@ -350,6 +350,6 @@ static __always_inline u64 vdso_calc_ns(const struct vdso_data *vd, u64 cycles,
+ }
+ #define vdso_calc_ns vdso_calc_ns
+
+-#endif /* !__ASSEMBLY__ */
++#endif /* !__ASSEMBLER__ */
+
+ #endif /* __ASM_VDSO_GETTIMEOFDAY_H */
+diff --git a/arch/x86/include/asm/vdso/processor.h b/arch/x86/include/asm/vdso/processor.h
+index 2cbce97d29eaf0..c9b2ba7a9ec4c9 100644
+--- a/arch/x86/include/asm/vdso/processor.h
++++ b/arch/x86/include/asm/vdso/processor.h
+@@ -5,7 +5,7 @@
+ #ifndef __ASM_VDSO_PROCESSOR_H
+ #define __ASM_VDSO_PROCESSOR_H
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+
+ /* REP NOP (PAUSE) is a good thing to insert into busy-wait loops. */
+ static __always_inline void rep_nop(void)
+@@ -22,6 +22,6 @@ struct getcpu_cache;
+
+ notrace long __vdso_getcpu(unsigned *cpu, unsigned *node, struct getcpu_cache *unused);
+
+-#endif /* __ASSEMBLY__ */
++#endif /* __ASSEMBLER__ */
+
+ #endif /* __ASM_VDSO_PROCESSOR_H */
+diff --git a/arch/x86/include/asm/vdso/vsyscall.h b/arch/x86/include/asm/vdso/vsyscall.h
+index 88b31d4cdfaf33..6622e0103444ea 100644
+--- a/arch/x86/include/asm/vdso/vsyscall.h
++++ b/arch/x86/include/asm/vdso/vsyscall.h
+@@ -10,7 +10,7 @@
+ #define VDSO_PAGE_PVCLOCK_OFFSET 0
+ #define VDSO_PAGE_HVCLOCK_OFFSET 1
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+
+ #include <vdso/datapage.h>
+ #include <asm/vgtod.h>
+@@ -37,6 +37,6 @@ struct vdso_rng_data *__x86_get_k_vdso_rng_data(void)
+ /* The asm-generic header needs to be included after the definitions above */
+ #include <asm-generic/vdso/vsyscall.h>
+
+-#endif /* !__ASSEMBLY__ */
++#endif /* !__ASSEMBLER__ */
+
+ #endif /* __ASM_VDSO_VSYSCALL_H */
+diff --git a/arch/x86/include/asm/xen/interface.h b/arch/x86/include/asm/xen/interface.h
+index baca0b00ef7687..a078a2b0f032b4 100644
+--- a/arch/x86/include/asm/xen/interface.h
++++ b/arch/x86/include/asm/xen/interface.h
+@@ -72,7 +72,7 @@
+ #endif
+ #endif
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+ /* Explicitly size integers that represent pfns in the public interface
+ * with Xen so that on ARM we can have one ABI that works for 32 and 64
+ * bit guests. */
+@@ -137,7 +137,7 @@ DEFINE_GUEST_HANDLE(xen_ulong_t);
+ #define TI_SET_DPL(_ti, _dpl) ((_ti)->flags |= (_dpl))
+ #define TI_SET_IF(_ti, _if) ((_ti)->flags |= ((!!(_if))<<2))
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+ struct trap_info {
+ uint8_t vector; /* exception vector */
+ uint8_t flags; /* 0-3: privilege level; 4: clear event enable? */
+@@ -186,7 +186,7 @@ struct arch_shared_info {
+ uint32_t wc_sec_hi;
+ #endif
+ };
+-#endif /* !__ASSEMBLY__ */
++#endif /* !__ASSEMBLER__ */
+
+ #ifdef CONFIG_X86_32
+ #include <asm/xen/interface_32.h>
+@@ -196,7 +196,7 @@ struct arch_shared_info {
+
+ #include <asm/pvclock-abi.h>
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+ /*
+ * The following is all CPU context. Note that the fpu_ctxt block is filled
+ * in by FXSAVE if the CPU has feature FXSR; otherwise FSAVE is used.
+@@ -376,7 +376,7 @@ struct xen_pmu_arch {
+ } c;
+ };
+
+-#endif /* !__ASSEMBLY__ */
++#endif /* !__ASSEMBLER__ */
+
+ /*
+ * Prefix forces emulation of some non-trapping instructions.
+diff --git a/arch/x86/include/asm/xen/interface_32.h b/arch/x86/include/asm/xen/interface_32.h
+index dc40578abded7d..74d9768a9cf774 100644
+--- a/arch/x86/include/asm/xen/interface_32.h
++++ b/arch/x86/include/asm/xen/interface_32.h
+@@ -44,7 +44,7 @@
+ */
+ #define __HYPERVISOR_VIRT_START 0xF5800000
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+
+ struct cpu_user_regs {
+ uint32_t ebx;
+@@ -85,7 +85,7 @@ typedef struct xen_callback xen_callback_t;
+
+ #define XEN_CALLBACK(__cs, __eip) \
+ ((struct xen_callback){ .cs = (__cs), .eip = (unsigned long)(__eip) })
+-#endif /* !__ASSEMBLY__ */
++#endif /* !__ASSEMBLER__ */
+
+
+ /*
+diff --git a/arch/x86/include/asm/xen/interface_64.h b/arch/x86/include/asm/xen/interface_64.h
+index c10f279aae9361..38a19edb81a311 100644
+--- a/arch/x86/include/asm/xen/interface_64.h
++++ b/arch/x86/include/asm/xen/interface_64.h
+@@ -77,7 +77,7 @@
+ #define VGCF_in_syscall (1<<_VGCF_in_syscall)
+ #define VGCF_IN_SYSCALL VGCF_in_syscall
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+
+ struct iret_context {
+ /* Top of stack (%rsp at point of hypercall). */
+@@ -143,7 +143,7 @@ typedef unsigned long xen_callback_t;
+ #define XEN_CALLBACK(__cs, __rip) \
+ ((unsigned long)(__rip))
+
+-#endif /* !__ASSEMBLY__ */
++#endif /* !__ASSEMBLER__ */
+
+
+ #endif /* _ASM_X86_XEN_INTERFACE_64_H */
+diff --git a/arch/x86/include/uapi/asm/bootparam.h b/arch/x86/include/uapi/asm/bootparam.h
+index 9b82eebd7add55..dafbf581c515d0 100644
+--- a/arch/x86/include/uapi/asm/bootparam.h
++++ b/arch/x86/include/uapi/asm/bootparam.h
+@@ -26,7 +26,7 @@
+ #define XLF_5LEVEL_ENABLED (1<<6)
+ #define XLF_MEM_ENCRYPTION (1<<7)
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+
+ #include <linux/types.h>
+ #include <linux/screen_info.h>
+@@ -210,6 +210,6 @@ enum x86_hardware_subarch {
+ X86_NR_SUBARCHS,
+ };
+
+-#endif /* __ASSEMBLY__ */
++#endif /* __ASSEMBLER__ */
+
+ #endif /* _ASM_X86_BOOTPARAM_H */
+diff --git a/arch/x86/include/uapi/asm/e820.h b/arch/x86/include/uapi/asm/e820.h
+index 2f491efe3a1263..55bc6686715603 100644
+--- a/arch/x86/include/uapi/asm/e820.h
++++ b/arch/x86/include/uapi/asm/e820.h
+@@ -54,7 +54,7 @@
+ */
+ #define E820_RESERVED_KERN 128
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+ #include <linux/types.h>
+ struct e820entry {
+ __u64 addr; /* start of memory segment */
+@@ -76,7 +76,7 @@ struct e820map {
+ #define BIOS_ROM_BASE 0xffe00000
+ #define BIOS_ROM_END 0xffffffff
+
+-#endif /* __ASSEMBLY__ */
++#endif /* __ASSEMBLER__ */
+
+
+ #endif /* _UAPI_ASM_X86_E820_H */
+diff --git a/arch/x86/include/uapi/asm/ldt.h b/arch/x86/include/uapi/asm/ldt.h
+index d62ac5db093b49..a82c039d8e6a7e 100644
+--- a/arch/x86/include/uapi/asm/ldt.h
++++ b/arch/x86/include/uapi/asm/ldt.h
+@@ -12,7 +12,7 @@
+ /* The size of each LDT entry. */
+ #define LDT_ENTRY_SIZE 8
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+ /*
+ * Note on 64bit base and limit is ignored and you cannot set DS/ES/CS
+ * not to the default values if you still want to do syscalls. This
+@@ -44,5 +44,5 @@ struct user_desc {
+ #define MODIFY_LDT_CONTENTS_STACK 1
+ #define MODIFY_LDT_CONTENTS_CODE 2
+
+-#endif /* !__ASSEMBLY__ */
++#endif /* !__ASSEMBLER__ */
+ #endif /* _ASM_X86_LDT_H */
+diff --git a/arch/x86/include/uapi/asm/msr.h b/arch/x86/include/uapi/asm/msr.h
+index e7516b402a00f1..4b8917ca28fe76 100644
+--- a/arch/x86/include/uapi/asm/msr.h
++++ b/arch/x86/include/uapi/asm/msr.h
+@@ -2,7 +2,7 @@
+ #ifndef _UAPI_ASM_X86_MSR_H
+ #define _UAPI_ASM_X86_MSR_H
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+
+ #include <linux/types.h>
+ #include <linux/ioctl.h>
+@@ -10,5 +10,5 @@
+ #define X86_IOC_RDMSR_REGS _IOWR('c', 0xA0, __u32[8])
+ #define X86_IOC_WRMSR_REGS _IOWR('c', 0xA1, __u32[8])
+
+-#endif /* __ASSEMBLY__ */
++#endif /* __ASSEMBLER__ */
+ #endif /* _UAPI_ASM_X86_MSR_H */
+diff --git a/arch/x86/include/uapi/asm/ptrace-abi.h b/arch/x86/include/uapi/asm/ptrace-abi.h
+index 16074b9c93bb51..5823584dea132b 100644
+--- a/arch/x86/include/uapi/asm/ptrace-abi.h
++++ b/arch/x86/include/uapi/asm/ptrace-abi.h
+@@ -25,7 +25,7 @@
+
+ #else /* __i386__ */
+
+-#if defined(__ASSEMBLY__) || defined(__FRAME_OFFSETS)
++#if defined(__ASSEMBLER__) || defined(__FRAME_OFFSETS)
+ /*
+ * C ABI says these regs are callee-preserved. They aren't saved on kernel entry
+ * unless syscall needs a complete, fully filled "struct pt_regs".
+@@ -57,7 +57,7 @@
+ #define EFLAGS 144
+ #define RSP 152
+ #define SS 160
+-#endif /* __ASSEMBLY__ */
++#endif /* __ASSEMBLER__ */
+
+ /* top of stack page */
+ #define FRAME_SIZE 168
+@@ -87,7 +87,7 @@
+
+ #define PTRACE_SINGLEBLOCK 33 /* resume execution until next branch */
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+ #include <linux/types.h>
+ #endif
+
+diff --git a/arch/x86/include/uapi/asm/ptrace.h b/arch/x86/include/uapi/asm/ptrace.h
+index 85165c0edafc86..e0b5b4f6226b18 100644
+--- a/arch/x86/include/uapi/asm/ptrace.h
++++ b/arch/x86/include/uapi/asm/ptrace.h
+@@ -7,7 +7,7 @@
+ #include <asm/processor-flags.h>
+
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+
+ #ifdef __i386__
+ /* this struct defines the way the registers are stored on the
+@@ -81,6 +81,6 @@ struct pt_regs {
+
+
+
+-#endif /* !__ASSEMBLY__ */
++#endif /* !__ASSEMBLER__ */
+
+ #endif /* _UAPI_ASM_X86_PTRACE_H */
+diff --git a/arch/x86/include/uapi/asm/setup_data.h b/arch/x86/include/uapi/asm/setup_data.h
+index b111b0c1854491..50c45ead4e7c97 100644
+--- a/arch/x86/include/uapi/asm/setup_data.h
++++ b/arch/x86/include/uapi/asm/setup_data.h
+@@ -18,7 +18,7 @@
+ #define SETUP_INDIRECT (1<<31)
+ #define SETUP_TYPE_MAX (SETUP_ENUM_MAX | SETUP_INDIRECT)
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+
+ #include <linux/types.h>
+
+@@ -78,6 +78,6 @@ struct ima_setup_data {
+ __u64 size;
+ } __attribute__((packed));
+
+-#endif /* __ASSEMBLY__ */
++#endif /* __ASSEMBLER__ */
+
+ #endif /* _UAPI_ASM_X86_SETUP_DATA_H */
+diff --git a/arch/x86/include/uapi/asm/signal.h b/arch/x86/include/uapi/asm/signal.h
+index f777346450ec3d..1067efabf18b5b 100644
+--- a/arch/x86/include/uapi/asm/signal.h
++++ b/arch/x86/include/uapi/asm/signal.h
+@@ -2,7 +2,7 @@
+ #ifndef _UAPI_ASM_X86_SIGNAL_H
+ #define _UAPI_ASM_X86_SIGNAL_H
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+ #include <linux/types.h>
+ #include <linux/compiler.h>
+
+@@ -16,7 +16,7 @@ struct siginfo;
+ typedef unsigned long sigset_t;
+
+ #endif /* __KERNEL__ */
+-#endif /* __ASSEMBLY__ */
++#endif /* __ASSEMBLER__ */
+
+
+ #define SIGHUP 1
+@@ -68,7 +68,7 @@ typedef unsigned long sigset_t;
+
+ #include <asm-generic/signal-defs.h>
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+
+
+ # ifndef __KERNEL__
+@@ -106,6 +106,6 @@ typedef struct sigaltstack {
+ __kernel_size_t ss_size;
+ } stack_t;
+
+-#endif /* __ASSEMBLY__ */
++#endif /* __ASSEMBLER__ */
+
+ #endif /* _UAPI_ASM_X86_SIGNAL_H */
+diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
+index f843fd37cf9870..e1913bcb806e26 100644
+--- a/arch/x86/kernel/alternative.c
++++ b/arch/x86/kernel/alternative.c
+@@ -1271,6 +1271,7 @@ asm( ".pushsection .rodata \n"
+ " endbr64 \n"
+ " subl $0x12345678, %r10d \n"
+ " je fineibt_preamble_end \n"
++ "fineibt_preamble_ud2: \n"
+ " ud2 \n"
+ " nop \n"
+ "fineibt_preamble_end: \n"
+@@ -1278,9 +1279,11 @@ asm( ".pushsection .rodata \n"
+ );
+
+ extern u8 fineibt_preamble_start[];
++extern u8 fineibt_preamble_ud2[];
+ extern u8 fineibt_preamble_end[];
+
+ #define fineibt_preamble_size (fineibt_preamble_end - fineibt_preamble_start)
++#define fineibt_preamble_ud2 (fineibt_preamble_ud2 - fineibt_preamble_start)
+ #define fineibt_preamble_hash 7
+
+ asm( ".pushsection .rodata \n"
+@@ -1598,6 +1601,33 @@ static void poison_cfi(void *addr, void *wr_addr)
+ }
+ }
+
++/*
++ * regs->ip points to a UD2 instruction, return true and fill out target and
++ * type when this UD2 is from a FineIBT preamble.
++ *
++ * We check the preamble by checking for the ENDBR instruction relative to the
++ * UD2 instruction.
++ */
++bool decode_fineibt_insn(struct pt_regs *regs, unsigned long *target, u32 *type)
++{
++ unsigned long addr = regs->ip - fineibt_preamble_ud2;
++ u32 endbr, hash;
++
++ __get_kernel_nofault(&endbr, addr, u32, Efault);
++ if (endbr != gen_endbr())
++ return false;
++
++ *target = addr + fineibt_preamble_size;
++
++ __get_kernel_nofault(&hash, addr + fineibt_preamble_hash, u32, Efault);
++ *type = (u32)regs->r10 + hash;
++
++ return true;
++
++Efault:
++ return false;
++}
++
+ #else
+
+ static void __apply_fineibt(s32 *start_retpoline, s32 *end_retpoline,
+diff --git a/arch/x86/kernel/amd_node.c b/arch/x86/kernel/amd_node.c
+index 65045f223c10a2..ac571948cb353f 100644
+--- a/arch/x86/kernel/amd_node.c
++++ b/arch/x86/kernel/amd_node.c
+@@ -93,6 +93,7 @@ static struct pci_dev **amd_roots;
+
+ /* Protect the PCI config register pairs used for SMN. */
+ static DEFINE_MUTEX(smn_mutex);
++static bool smn_exclusive;
+
+ #define SMN_INDEX_OFFSET 0x60
+ #define SMN_DATA_OFFSET 0x64
+@@ -149,6 +150,9 @@ static int __amd_smn_rw(u8 i_off, u8 d_off, u16 node, u32 address, u32 *value, b
+ if (!root)
+ return err;
+
++ if (!smn_exclusive)
++ return err;
++
+ guard(mutex)(&smn_mutex);
+
+ err = pci_write_config_dword(root, i_off, address);
+@@ -202,6 +206,39 @@ static int amd_cache_roots(void)
+ return 0;
+ }
+
++static int reserve_root_config_spaces(void)
++{
++ struct pci_dev *root = NULL;
++ struct pci_bus *bus = NULL;
++
++ while ((bus = pci_find_next_bus(bus))) {
++ /* Root device is Device 0 Function 0 on each Primary Bus. */
++ root = pci_get_slot(bus, 0);
++ if (!root)
++ continue;
++
++ if (root->vendor != PCI_VENDOR_ID_AMD &&
++ root->vendor != PCI_VENDOR_ID_HYGON)
++ continue;
++
++ pci_dbg(root, "Reserving PCI config space\n");
++
++ /*
++ * There are a few SMN index/data pairs and other registers
++ * that shouldn't be accessed by user space.
++ * So reserve the entire PCI config space for simplicity rather
++ * than covering specific registers piecemeal.
++ */
++ if (!pci_request_config_region_exclusive(root, 0, PCI_CFG_SPACE_SIZE, NULL)) {
++ pci_err(root, "Failed to reserve config space\n");
++ return -EEXIST;
++ }
++ }
++
++ smn_exclusive = true;
++ return 0;
++}
++
+ static int __init amd_smn_init(void)
+ {
+ int err;
+@@ -218,6 +255,10 @@ static int __init amd_smn_init(void)
+ if (err)
+ return err;
+
++ err = reserve_root_config_spaces();
++ if (err)
++ return err;
++
+ return 0;
+ }
+
+diff --git a/arch/x86/kernel/cfi.c b/arch/x86/kernel/cfi.c
+index e6bf78fac14622..f6905bef0af844 100644
+--- a/arch/x86/kernel/cfi.c
++++ b/arch/x86/kernel/cfi.c
+@@ -70,11 +70,25 @@ enum bug_trap_type handle_cfi_failure(struct pt_regs *regs)
+ unsigned long target;
+ u32 type;
+
+- if (!is_cfi_trap(regs->ip))
+- return BUG_TRAP_TYPE_NONE;
++ switch (cfi_mode) {
++ case CFI_KCFI:
++ if (!is_cfi_trap(regs->ip))
++ return BUG_TRAP_TYPE_NONE;
++
++ if (!decode_cfi_insn(regs, &target, &type))
++ return report_cfi_failure_noaddr(regs, regs->ip);
++
++ break;
+
+- if (!decode_cfi_insn(regs, &target, &type))
+- return report_cfi_failure_noaddr(regs, regs->ip);
++ case CFI_FINEIBT:
++ if (!decode_fineibt_insn(regs, &target, &type))
++ return BUG_TRAP_TYPE_NONE;
++
++ break;
++
++ default:
++ return BUG_TRAP_TYPE_NONE;
++ }
+
+ return report_cfi_failure(regs, regs->ip, &target, type);
+ }
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index b6994993c39f71..e0e0ecc401947a 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -1442,9 +1442,13 @@ static __ro_after_init enum spectre_v2_mitigation_cmd spectre_v2_cmd;
+ static enum spectre_v2_user_cmd __init
+ spectre_v2_parse_user_cmdline(void)
+ {
++ enum spectre_v2_user_cmd mode;
+ char arg[20];
+ int ret, i;
+
++ mode = IS_ENABLED(CONFIG_MITIGATION_SPECTRE_V2) ?
++ SPECTRE_V2_USER_CMD_AUTO : SPECTRE_V2_USER_CMD_NONE;
++
+ switch (spectre_v2_cmd) {
+ case SPECTRE_V2_CMD_NONE:
+ return SPECTRE_V2_USER_CMD_NONE;
+@@ -1457,7 +1461,7 @@ spectre_v2_parse_user_cmdline(void)
+ ret = cmdline_find_option(boot_command_line, "spectre_v2_user",
+ arg, sizeof(arg));
+ if (ret < 0)
+- return SPECTRE_V2_USER_CMD_AUTO;
++ return mode;
+
+ for (i = 0; i < ARRAY_SIZE(v2_user_options); i++) {
+ if (match_option(arg, ret, v2_user_options[i].option)) {
+@@ -1467,8 +1471,8 @@ spectre_v2_parse_user_cmdline(void)
+ }
+ }
+
+- pr_err("Unknown user space protection option (%s). Switching to AUTO select\n", arg);
+- return SPECTRE_V2_USER_CMD_AUTO;
++ pr_err("Unknown user space protection option (%s). Switching to default\n", arg);
++ return mode;
+ }
+
+ static inline bool spectre_v2_in_ibrs_mode(enum spectre_v2_mitigation mode)
+diff --git a/arch/x86/kernel/cpu/microcode/intel.c b/arch/x86/kernel/cpu/microcode/intel.c
+index 9309468c8d2c12..2a397da43923ba 100644
+--- a/arch/x86/kernel/cpu/microcode/intel.c
++++ b/arch/x86/kernel/cpu/microcode/intel.c
+@@ -74,7 +74,7 @@ void intel_collect_cpu_info(struct cpu_signature *sig)
+ sig->pf = 0;
+ sig->rev = intel_get_microcode_revision();
+
+- if (x86_model(sig->sig) >= 5 || x86_family(sig->sig) > 6) {
++ if (IFM(x86_family(sig->sig), x86_model(sig->sig)) >= INTEL_PENTIUM_III_DESCHUTES) {
+ unsigned int val[2];
+
+ /* get processor flags from MSR 0x17 */
+diff --git a/arch/x86/kernel/nmi.c b/arch/x86/kernel/nmi.c
+index ed163c8c8604e3..9a95d00f142330 100644
+--- a/arch/x86/kernel/nmi.c
++++ b/arch/x86/kernel/nmi.c
+@@ -40,8 +40,12 @@
+ #define CREATE_TRACE_POINTS
+ #include <trace/events/nmi.h>
+
++/*
++ * An emergency handler can be set in any context including NMI
++ */
+ struct nmi_desc {
+ raw_spinlock_t lock;
++ nmi_handler_t emerg_handler;
+ struct list_head head;
+ };
+
+@@ -132,9 +136,22 @@ static void nmi_check_duration(struct nmiaction *action, u64 duration)
+ static int nmi_handle(unsigned int type, struct pt_regs *regs)
+ {
+ struct nmi_desc *desc = nmi_to_desc(type);
++ nmi_handler_t ehandler;
+ struct nmiaction *a;
+ int handled=0;
+
++ /*
++ * Call the emergency handler, if set
++ *
++ * In the case of crash_nmi_callback() emergency handler, it will
++ * return in the case of the crashing CPU to enable it to complete
++ * other necessary crashing actions ASAP. Other handlers in the
++ * linked list won't need to be run.
++ */
++ ehandler = desc->emerg_handler;
++ if (ehandler)
++ return ehandler(type, regs);
++
+ rcu_read_lock();
+
+ /*
+@@ -224,6 +241,31 @@ void unregister_nmi_handler(unsigned int type, const char *name)
+ }
+ EXPORT_SYMBOL_GPL(unregister_nmi_handler);
+
++/**
++ * set_emergency_nmi_handler - Set emergency handler
++ * @type: NMI type
++ * @handler: the emergency handler to be stored
++ *
++ * Set an emergency NMI handler which, if set, will preempt all the other
++ * handlers in the linked list. If a NULL handler is passed in, it will clear
++ * it. It is expected that concurrent calls to this function will not happen
++ * or the system is screwed beyond repair.
++ */
++void set_emergency_nmi_handler(unsigned int type, nmi_handler_t handler)
++{
++ struct nmi_desc *desc = nmi_to_desc(type);
++
++ if (WARN_ON_ONCE(desc->emerg_handler == handler))
++ return;
++ desc->emerg_handler = handler;
++
++ /*
++ * Ensure the emergency handler is visible to other CPUs before
++ * function return
++ */
++ smp_wmb();
++}
++
+ static void
+ pci_serr_error(unsigned char reason, struct pt_regs *regs)
+ {
+diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
+index c5bb980b8a6732..6669d251c4f75d 100644
+--- a/arch/x86/kernel/paravirt.c
++++ b/arch/x86/kernel/paravirt.c
+@@ -59,21 +59,6 @@ void __init native_pv_lock_init(void)
+ static_branch_enable(&virt_spin_lock_key);
+ }
+
+-#ifndef CONFIG_PT_RECLAIM
+-static void native_tlb_remove_table(struct mmu_gather *tlb, void *table)
+-{
+- struct ptdesc *ptdesc = (struct ptdesc *)table;
+-
+- pagetable_dtor(ptdesc);
+- tlb_remove_page(tlb, ptdesc_page(ptdesc));
+-}
+-#else
+-static void native_tlb_remove_table(struct mmu_gather *tlb, void *table)
+-{
+- tlb_remove_table(tlb, table);
+-}
+-#endif
+-
+ struct static_key paravirt_steal_enabled;
+ struct static_key paravirt_steal_rq_enabled;
+
+@@ -197,7 +182,7 @@ struct paravirt_patch_template pv_ops = {
+ .mmu.flush_tlb_kernel = native_flush_tlb_global,
+ .mmu.flush_tlb_one_user = native_flush_tlb_one_user,
+ .mmu.flush_tlb_multi = native_flush_tlb_multi,
+- .mmu.tlb_remove_table = native_tlb_remove_table,
++ .mmu.tlb_remove_table = tlb_remove_table,
+
+ .mmu.exit_mmap = paravirt_nop,
+ .mmu.notify_page_enc_status_changed = paravirt_nop,
+diff --git a/arch/x86/kernel/reboot.c b/arch/x86/kernel/reboot.c
+index dc1dd3f3e67fcd..9aaac1f9f45b57 100644
+--- a/arch/x86/kernel/reboot.c
++++ b/arch/x86/kernel/reboot.c
+@@ -926,15 +926,11 @@ void nmi_shootdown_cpus(nmi_shootdown_cb callback)
+ shootdown_callback = callback;
+
+ atomic_set(&waiting_for_crash_ipi, num_online_cpus() - 1);
+- /* Would it be better to replace the trap vector here? */
+- if (register_nmi_handler(NMI_LOCAL, crash_nmi_callback,
+- NMI_FLAG_FIRST, "crash"))
+- return; /* Return what? */
++
+ /*
+- * Ensure the new callback function is set before sending
+- * out the NMI
++ * Set emergency handler to preempt other handlers.
+ */
+- wmb();
++ set_emergency_nmi_handler(NMI_LOCAL, crash_nmi_callback);
+
+ apic_send_IPI_allbutself(NMI_VECTOR);
+
+diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
+index c10850ae6f094d..463634b138bbb8 100644
+--- a/arch/x86/kernel/smpboot.c
++++ b/arch/x86/kernel/smpboot.c
+@@ -229,7 +229,7 @@ static void ap_calibrate_delay(void)
+ /*
+ * Activate a secondary processor.
+ */
+-static void notrace start_secondary(void *unused)
++static void notrace __noendbr start_secondary(void *unused)
+ {
+ /*
+ * Don't put *anything* except direct CPU state initialization
+@@ -314,6 +314,7 @@ static void notrace start_secondary(void *unused)
+ wmb();
+ cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
+ }
++ANNOTATE_NOENDBR_SYM(start_secondary);
+
+ /*
+ * The bootstrap kernel entry code has set these up. Save them for
+@@ -676,9 +677,9 @@ static void __init smp_quirk_init_udelay(void)
+ return;
+
+ /* if modern processor, use no delay */
+- if (((boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) && (boot_cpu_data.x86 == 6)) ||
+- ((boot_cpu_data.x86_vendor == X86_VENDOR_HYGON) && (boot_cpu_data.x86 >= 0x18)) ||
+- ((boot_cpu_data.x86_vendor == X86_VENDOR_AMD) && (boot_cpu_data.x86 >= 0xF))) {
++ if ((boot_cpu_data.x86_vendor == X86_VENDOR_INTEL && boot_cpu_data.x86_vfm >= INTEL_PENTIUM_PRO) ||
++ (boot_cpu_data.x86_vendor == X86_VENDOR_HYGON && boot_cpu_data.x86 >= 0x18) ||
++ (boot_cpu_data.x86_vendor == X86_VENDOR_AMD && boot_cpu_data.x86 >= 0xF)) {
+ init_udelay = 0;
+ return;
+ }
+diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
+index 5e3e036e6e537f..b18fc7539b8d7b 100644
+--- a/arch/x86/kernel/traps.c
++++ b/arch/x86/kernel/traps.c
+@@ -94,10 +94,17 @@ __always_inline int is_valid_bugaddr(unsigned long addr)
+
+ /*
+ * Check for UD1 or UD2, accounting for Address Size Override Prefixes.
+- * If it's a UD1, get the ModRM byte to pass along to UBSan.
++ * If it's a UD1, further decode to determine its use:
++ *
++ * UBSan{0}: 67 0f b9 00 ud1 (%eax),%eax
++ * UBSan{10}: 67 0f b9 40 10 ud1 0x10(%eax),%eax
++ * static_call: 0f b9 cc ud1 %esp,%ecx
++ *
++ * Notably UBSAN uses EAX, static_call uses ECX.
+ */
+-__always_inline int decode_bug(unsigned long addr, u32 *imm)
++__always_inline int decode_bug(unsigned long addr, s32 *imm, int *len)
+ {
++ unsigned long start = addr;
+ u8 v;
+
+ if (addr < TASK_SIZE_MAX)
+@@ -110,24 +117,42 @@ __always_inline int decode_bug(unsigned long addr, u32 *imm)
+ return BUG_NONE;
+
+ v = *(u8 *)(addr++);
+- if (v == SECOND_BYTE_OPCODE_UD2)
++ if (v == SECOND_BYTE_OPCODE_UD2) {
++ *len = addr - start;
+ return BUG_UD2;
++ }
+
+- if (!IS_ENABLED(CONFIG_UBSAN_TRAP) || v != SECOND_BYTE_OPCODE_UD1)
++ if (v != SECOND_BYTE_OPCODE_UD1)
+ return BUG_NONE;
+
+- /* Retrieve the immediate (type value) for the UBSAN UD1 */
+- v = *(u8 *)(addr++);
+- if (X86_MODRM_RM(v) == 4)
+- addr++;
+-
+ *imm = 0;
+- if (X86_MODRM_MOD(v) == 1)
+- *imm = *(u8 *)addr;
+- else if (X86_MODRM_MOD(v) == 2)
+- *imm = *(u32 *)addr;
+- else
+- WARN_ONCE(1, "Unexpected MODRM_MOD: %u\n", X86_MODRM_MOD(v));
++ v = *(u8 *)(addr++); /* ModRM */
++
++ if (X86_MODRM_MOD(v) != 3 && X86_MODRM_RM(v) == 4)
++ addr++; /* SIB */
++
++ /* Decode immediate, if present */
++ switch (X86_MODRM_MOD(v)) {
++ case 0: if (X86_MODRM_RM(v) == 5)
++ addr += 4; /* RIP + disp32 */
++ break;
++
++ case 1: *imm = *(s8 *)addr;
++ addr += 1;
++ break;
++
++ case 2: *imm = *(s32 *)addr;
++ addr += 4;
++ break;
++
++ case 3: break;
++ }
++
++ /* record instruction length */
++ *len = addr - start;
++
++ if (X86_MODRM_REG(v) == 0) /* EAX */
++ return BUG_UD1_UBSAN;
+
+ return BUG_UD1;
+ }
+@@ -258,10 +283,10 @@ static inline void handle_invalid_op(struct pt_regs *regs)
+ static noinstr bool handle_bug(struct pt_regs *regs)
+ {
+ bool handled = false;
+- int ud_type;
+- u32 imm;
++ int ud_type, ud_len;
++ s32 ud_imm;
+
+- ud_type = decode_bug(regs->ip, &imm);
++ ud_type = decode_bug(regs->ip, &ud_imm, &ud_len);
+ if (ud_type == BUG_NONE)
+ return handled;
+
+@@ -281,15 +306,28 @@ static noinstr bool handle_bug(struct pt_regs *regs)
+ */
+ if (regs->flags & X86_EFLAGS_IF)
+ raw_local_irq_enable();
+- if (ud_type == BUG_UD2) {
++
++ switch (ud_type) {
++ case BUG_UD2:
+ if (report_bug(regs->ip, regs) == BUG_TRAP_TYPE_WARN ||
+ handle_cfi_failure(regs) == BUG_TRAP_TYPE_WARN) {
+- regs->ip += LEN_UD2;
++ regs->ip += ud_len;
+ handled = true;
+ }
+- } else if (IS_ENABLED(CONFIG_UBSAN_TRAP)) {
+- pr_crit("%s at %pS\n", report_ubsan_failure(regs, imm), (void *)regs->ip);
++ break;
++
++ case BUG_UD1_UBSAN:
++ if (IS_ENABLED(CONFIG_UBSAN_TRAP)) {
++ pr_crit("%s at %pS\n",
++ report_ubsan_failure(regs, ud_imm),
++ (void *)regs->ip);
++ }
++ break;
++
++ default:
++ break;
+ }
++
+ if (regs->flags & X86_EFLAGS_IF)
+ raw_local_irq_disable();
+ instrumentation_end();
+diff --git a/arch/x86/math-emu/control_w.h b/arch/x86/math-emu/control_w.h
+index 60f4dcc5edc3cb..93cbc89b34e25a 100644
+--- a/arch/x86/math-emu/control_w.h
++++ b/arch/x86/math-emu/control_w.h
+@@ -11,7 +11,7 @@
+ #ifndef _CONTROLW_H_
+ #define _CONTROLW_H_
+
+-#ifdef __ASSEMBLY__
++#ifdef __ASSEMBLER__
+ #define _Const_(x) $##x
+ #else
+ #define _Const_(x) x
+diff --git a/arch/x86/math-emu/exception.h b/arch/x86/math-emu/exception.h
+index 75230b9775777a..59961d350bc4d1 100644
+--- a/arch/x86/math-emu/exception.h
++++ b/arch/x86/math-emu/exception.h
+@@ -10,7 +10,7 @@
+ #ifndef _EXCEPTION_H_
+ #define _EXCEPTION_H_
+
+-#ifdef __ASSEMBLY__
++#ifdef __ASSEMBLER__
+ #define Const_(x) $##x
+ #else
+ #define Const_(x) x
+@@ -37,7 +37,7 @@
+ #define PRECISION_LOST_UP Const_((EX_Precision | SW_C1))
+ #define PRECISION_LOST_DOWN Const_(EX_Precision)
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+
+ #ifdef DEBUG
+ #define EXCEPTION(x) { printk("exception in %s at line %d\n", \
+@@ -46,6 +46,6 @@
+ #define EXCEPTION(x) FPU_exception(x)
+ #endif
+
+-#endif /* __ASSEMBLY__ */
++#endif /* __ASSEMBLER__ */
+
+ #endif /* _EXCEPTION_H_ */
+diff --git a/arch/x86/math-emu/fpu_emu.h b/arch/x86/math-emu/fpu_emu.h
+index 0c122226ca56f5..def569c50b7604 100644
+--- a/arch/x86/math-emu/fpu_emu.h
++++ b/arch/x86/math-emu/fpu_emu.h
+@@ -20,7 +20,7 @@
+ */
+ #define PECULIAR_486
+
+-#ifdef __ASSEMBLY__
++#ifdef __ASSEMBLER__
+ #include "fpu_asm.h"
+ #define Const(x) $##x
+ #else
+@@ -68,7 +68,7 @@
+
+ #define FPU_Exception Const(0x80000000) /* Added to tag returns. */
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+
+ #include "fpu_system.h"
+
+@@ -213,6 +213,6 @@ asmlinkage int FPU_round(FPU_REG *arg, unsigned int extent, int dummy,
+ #include "fpu_proto.h"
+ #endif
+
+-#endif /* __ASSEMBLY__ */
++#endif /* __ASSEMBLER__ */
+
+ #endif /* _FPU_EMU_H_ */
+diff --git a/arch/x86/math-emu/status_w.h b/arch/x86/math-emu/status_w.h
+index b77bafec95260b..f642957330efc4 100644
+--- a/arch/x86/math-emu/status_w.h
++++ b/arch/x86/math-emu/status_w.h
+@@ -13,7 +13,7 @@
+
+ #include "fpu_emu.h" /* for definition of PECULIAR_486 */
+
+-#ifdef __ASSEMBLY__
++#ifdef __ASSEMBLER__
+ #define Const__(x) $##x
+ #else
+ #define Const__(x) x
+@@ -37,7 +37,7 @@
+
+ #define SW_Exc_Mask Const__(0x27f) /* Status word exception bit mask */
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+
+ #define COMP_A_gt_B 1
+ #define COMP_A_eq_B 2
+@@ -63,6 +63,6 @@ static inline void setcc(int cc)
+ # define clear_C1()
+ #endif /* PECULIAR_486 */
+
+-#endif /* __ASSEMBLY__ */
++#endif /* __ASSEMBLER__ */
+
+ #endif /* _STATUS_H_ */
+diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
+index 62aa4d66a032d5..bfa444a7dbb048 100644
+--- a/arch/x86/mm/init.c
++++ b/arch/x86/mm/init.c
+@@ -645,8 +645,13 @@ static void __init memory_map_top_down(unsigned long map_start,
+ */
+ addr = memblock_phys_alloc_range(PMD_SIZE, PMD_SIZE, map_start,
+ map_end);
+- memblock_phys_free(addr, PMD_SIZE);
+- real_end = addr + PMD_SIZE;
++ if (!addr) {
++ pr_warn("Failed to release memory for alloc_low_pages()");
++ real_end = max(map_start, ALIGN_DOWN(map_end, PMD_SIZE));
++ } else {
++ memblock_phys_free(addr, PMD_SIZE);
++ real_end = addr + PMD_SIZE;
++ }
+
+ /* step_size need to be small so pgt_buf from BRK could cover it */
+ step_size = PMD_SIZE;
+diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
+index 01ea7c6df3036b..17c89dad4f7ff9 100644
+--- a/arch/x86/mm/init_64.c
++++ b/arch/x86/mm/init_64.c
+@@ -967,9 +967,18 @@ int add_pages(int nid, unsigned long start_pfn, unsigned long nr_pages,
+ ret = __add_pages(nid, start_pfn, nr_pages, params);
+ WARN_ON_ONCE(ret);
+
+- /* update max_pfn, max_low_pfn and high_memory */
+- update_end_of_memory_vars(start_pfn << PAGE_SHIFT,
+- nr_pages << PAGE_SHIFT);
++ /*
++ * Special case: add_pages() is called by memremap_pages() for adding device
++ * private pages. Do not bump up max_pfn in the device private path,
++ * because max_pfn changes affect dma_addressing_limited().
++ *
++ * dma_addressing_limited() returning true when max_pfn is the device's
++ * addressable memory can force device drivers to use bounce buffers
++ * and impact their performance negatively:
++ */
++ if (!params->pgmap)
++ /* update max_pfn, max_low_pfn and high_memory */
++ update_end_of_memory_vars(start_pfn << PAGE_SHIFT, nr_pages << PAGE_SHIFT);
+
+ return ret;
+ }
+diff --git a/arch/x86/mm/kaslr.c b/arch/x86/mm/kaslr.c
+index 11a93542d1983a..3c306de52fd4da 100644
+--- a/arch/x86/mm/kaslr.c
++++ b/arch/x86/mm/kaslr.c
+@@ -113,8 +113,14 @@ void __init kernel_randomize_memory(void)
+ memory_tb = DIV_ROUND_UP(max_pfn << PAGE_SHIFT, 1UL << TB_SHIFT) +
+ CONFIG_RANDOMIZE_MEMORY_PHYSICAL_PADDING;
+
+- /* Adapt physical memory region size based on available memory */
+- if (memory_tb < kaslr_regions[0].size_tb)
++ /*
++ * Adapt physical memory region size based on available memory,
++ * except when CONFIG_PCI_P2PDMA is enabled. P2PDMA exposes the
++ * device BAR space assuming the direct map space is large enough
++ * for creating a ZONE_DEVICE mapping in the direct map corresponding
++ * to the physical BAR address.
++ */
++ if (!IS_ENABLED(CONFIG_PCI_P2PDMA) && (memory_tb < kaslr_regions[0].size_tb))
+ kaslr_regions[0].size_tb = memory_tb;
+
+ /*
+diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
+index 9b0ee41b545c7e..1ddbd799acdf5e 100644
+--- a/arch/x86/mm/pgtable.c
++++ b/arch/x86/mm/pgtable.c
+@@ -18,25 +18,6 @@ EXPORT_SYMBOL(physical_mask);
+ #define PGTABLE_HIGHMEM 0
+ #endif
+
+-#ifndef CONFIG_PARAVIRT
+-#ifndef CONFIG_PT_RECLAIM
+-static inline
+-void paravirt_tlb_remove_table(struct mmu_gather *tlb, void *table)
+-{
+- struct ptdesc *ptdesc = (struct ptdesc *)table;
+-
+- pagetable_dtor(ptdesc);
+- tlb_remove_page(tlb, ptdesc_page(ptdesc));
+-}
+-#else
+-static inline
+-void paravirt_tlb_remove_table(struct mmu_gather *tlb, void *table)
+-{
+- tlb_remove_table(tlb, table);
+-}
+-#endif /* !CONFIG_PT_RECLAIM */
+-#endif /* !CONFIG_PARAVIRT */
+-
+ gfp_t __userpte_alloc_gfp = GFP_PGTABLE_USER | PGTABLE_HIGHMEM;
+
+ pgtable_t pte_alloc_one(struct mm_struct *mm)
+@@ -64,7 +45,7 @@ early_param("userpte", setup_userpte);
+ void ___pte_free_tlb(struct mmu_gather *tlb, struct page *pte)
+ {
+ paravirt_release_pte(page_to_pfn(pte));
+- paravirt_tlb_remove_table(tlb, page_ptdesc(pte));
++ tlb_remove_table(tlb, page_ptdesc(pte));
+ }
+
+ #if CONFIG_PGTABLE_LEVELS > 2
+@@ -78,21 +59,21 @@ void ___pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd)
+ #ifdef CONFIG_X86_PAE
+ tlb->need_flush_all = 1;
+ #endif
+- paravirt_tlb_remove_table(tlb, virt_to_ptdesc(pmd));
++ tlb_remove_table(tlb, virt_to_ptdesc(pmd));
+ }
+
+ #if CONFIG_PGTABLE_LEVELS > 3
+ void ___pud_free_tlb(struct mmu_gather *tlb, pud_t *pud)
+ {
+ paravirt_release_pud(__pa(pud) >> PAGE_SHIFT);
+- paravirt_tlb_remove_table(tlb, virt_to_ptdesc(pud));
++ tlb_remove_table(tlb, virt_to_ptdesc(pud));
+ }
+
+ #if CONFIG_PGTABLE_LEVELS > 4
+ void ___p4d_free_tlb(struct mmu_gather *tlb, p4d_t *p4d)
+ {
+ paravirt_release_p4d(__pa(p4d) >> PAGE_SHIFT);
+- paravirt_tlb_remove_table(tlb, virt_to_ptdesc(p4d));
++ tlb_remove_table(tlb, virt_to_ptdesc(p4d));
+ }
+ #endif /* CONFIG_PGTABLE_LEVELS > 4 */
+ #endif /* CONFIG_PGTABLE_LEVELS > 3 */
+diff --git a/arch/x86/power/cpu.c b/arch/x86/power/cpu.c
+index 63230ff8cf4f06..08e76a5ca1553d 100644
+--- a/arch/x86/power/cpu.c
++++ b/arch/x86/power/cpu.c
+@@ -27,6 +27,7 @@
+ #include <asm/mmu_context.h>
+ #include <asm/cpu_device_id.h>
+ #include <asm/microcode.h>
++#include <asm/fred.h>
+
+ #ifdef CONFIG_X86_32
+ __visible unsigned long saved_context_ebx;
+@@ -231,6 +232,19 @@ static void notrace __restore_processor_state(struct saved_context *ctxt)
+ */
+ #ifdef CONFIG_X86_64
+ wrmsrl(MSR_GS_BASE, ctxt->kernelmode_gs_base);
++
++ /*
++ * Reinitialize FRED to ensure the FRED MSRs contain the same values
++ * as before hibernation.
++ *
++ * Note, the setup of FRED RSPs requires access to percpu data
++ * structures. Therefore, FRED reinitialization can only occur after
++ * the percpu access pointer (i.e., MSR_GS_BASE) is restored.
++ */
++ if (ctxt->cr4 & X86_CR4_FRED) {
++ cpu_init_fred_exceptions();
++ cpu_init_fred_rsps();
++ }
+ #else
+ loadsegment(fs, __KERNEL_PERCPU);
+ #endif
+diff --git a/arch/x86/realmode/rm/realmode.h b/arch/x86/realmode/rm/realmode.h
+index c76041a353970d..867e55f1d6af44 100644
+--- a/arch/x86/realmode/rm/realmode.h
++++ b/arch/x86/realmode/rm/realmode.h
+@@ -2,7 +2,7 @@
+ #ifndef ARCH_X86_REALMODE_RM_REALMODE_H
+ #define ARCH_X86_REALMODE_RM_REALMODE_H
+
+-#ifdef __ASSEMBLY__
++#ifdef __ASSEMBLER__
+
+ /*
+ * 16-bit ljmpw to the real_mode_seg
+@@ -12,7 +12,7 @@
+ */
+ #define LJMPW_RM(to) .byte 0xea ; .word (to), real_mode_seg
+
+-#endif /* __ASSEMBLY__ */
++#endif /* __ASSEMBLER__ */
+
+ /*
+ * Signature at the end of the realmode region
+diff --git a/arch/x86/realmode/rm/wakeup.h b/arch/x86/realmode/rm/wakeup.h
+index 0e4fd08ae44713..3b6d8fa82d3e10 100644
+--- a/arch/x86/realmode/rm/wakeup.h
++++ b/arch/x86/realmode/rm/wakeup.h
+@@ -7,7 +7,7 @@
+ #ifndef ARCH_X86_KERNEL_ACPI_RM_WAKEUP_H
+ #define ARCH_X86_KERNEL_ACPI_RM_WAKEUP_H
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+ #include <linux/types.h>
+
+ /* This must match data at wakeup.S */
+diff --git a/arch/x86/um/os-Linux/mcontext.c b/arch/x86/um/os-Linux/mcontext.c
+index e80ab7d281177b..1b0d95328b2c72 100644
+--- a/arch/x86/um/os-Linux/mcontext.c
++++ b/arch/x86/um/os-Linux/mcontext.c
+@@ -27,7 +27,6 @@ void get_regs_from_mc(struct uml_pt_regs *regs, mcontext_t *mc)
+ COPY(RIP);
+ COPY2(EFLAGS, EFL);
+ COPY2(CS, CSGSFS);
+- regs->gp[CS / sizeof(unsigned long)] &= 0xffff;
+- regs->gp[CS / sizeof(unsigned long)] |= 3;
++ regs->gp[SS / sizeof(unsigned long)] = mc->gregs[REG_CSGSFS] >> 48;
+ #endif
+ }
+diff --git a/block/badblocks.c b/block/badblocks.c
+index dc147c0179612f..23acdf7c6f3633 100644
+--- a/block/badblocks.c
++++ b/block/badblocks.c
+@@ -1246,14 +1246,15 @@ static int _badblocks_check(struct badblocks *bb, sector_t s, sector_t sectors,
+ len = sectors;
+
+ update_sectors:
++ /* This situation should never happen */
++ WARN_ON(sectors < len);
++
+ s += len;
+ sectors -= len;
+
+ if (sectors > 0)
+ goto re_check;
+
+- WARN_ON(sectors < 0);
+-
+ if (unacked_badblocks > 0)
+ rv = -1;
+ else if (acked_badblocks > 0)
+diff --git a/block/bdev.c b/block/bdev.c
+index 5aebcf437f17c7..9bb7e041bd9ce0 100644
+--- a/block/bdev.c
++++ b/block/bdev.c
+@@ -150,26 +150,64 @@ static void set_init_blocksize(struct block_device *bdev)
+ BD_INODE(bdev)->i_blkbits = blksize_bits(bsize);
+ }
+
+-int set_blocksize(struct file *file, int size)
++/**
++ * bdev_validate_blocksize - check that this block size is acceptable
++ * @bdev: blockdevice to check
++ * @block_size: block size to check
++ *
++ * For block device users that do not use buffer heads or the block device
++ * page cache, make sure that this block size can be used with the device.
++ *
++ * Return: On success zero is returned, negative error code on failure.
++ */
++int bdev_validate_blocksize(struct block_device *bdev, int block_size)
+ {
+- struct inode *inode = file->f_mapping->host;
+- struct block_device *bdev = I_BDEV(inode);
+-
+- if (blk_validate_block_size(size))
++ if (blk_validate_block_size(block_size))
+ return -EINVAL;
+
+ /* Size cannot be smaller than the size supported by the device */
+- if (size < bdev_logical_block_size(bdev))
++ if (block_size < bdev_logical_block_size(bdev))
+ return -EINVAL;
+
++ return 0;
++}
++EXPORT_SYMBOL_GPL(bdev_validate_blocksize);
++
++int set_blocksize(struct file *file, int size)
++{
++ struct inode *inode = file->f_mapping->host;
++ struct block_device *bdev = I_BDEV(inode);
++ int ret;
++
++ ret = bdev_validate_blocksize(bdev, size);
++ if (ret)
++ return ret;
++
+ if (!file->private_data)
+ return -EINVAL;
+
+ /* Don't change the size if it is same as current */
+ if (inode->i_blkbits != blksize_bits(size)) {
++ /*
++ * Flush and truncate the pagecache before we reconfigure the
++ * mapping geometry because folio sizes are variable now. If a
++ * reader has already allocated a folio whose size is smaller
++ * than the new min_order but invokes readahead after the new
++ * min_order becomes visible, readahead will think there are
++ * "zero" blocks per folio and crash. Take the inode and
++ * invalidation locks to avoid racing with
++ * read/write/fallocate.
++ */
++ inode_lock(inode);
++ filemap_invalidate_lock(inode->i_mapping);
++
+ sync_blockdev(bdev);
++ kill_bdev(bdev);
++
+ inode->i_blkbits = blksize_bits(size);
+ kill_bdev(bdev);
++ filemap_invalidate_unlock(inode->i_mapping);
++ inode_unlock(inode);
+ }
+ return 0;
+ }
+diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
+index c94efae5bcfaf3..8b07015db819ab 100644
+--- a/block/blk-cgroup.c
++++ b/block/blk-cgroup.c
+@@ -1727,27 +1727,27 @@ int blkcg_policy_register(struct blkcg_policy *pol)
+ struct blkcg *blkcg;
+ int i, ret;
+
++ /*
++	 * Make sure cpd/pd_alloc_fn and cpd/pd_free_fn come in pairs, and policy
++ * without pd_alloc_fn/pd_free_fn can't be activated.
++ */
++ if ((!pol->cpd_alloc_fn ^ !pol->cpd_free_fn) ||
++ (!pol->pd_alloc_fn ^ !pol->pd_free_fn))
++ return -EINVAL;
++
+ mutex_lock(&blkcg_pol_register_mutex);
+ mutex_lock(&blkcg_pol_mutex);
+
+ /* find an empty slot */
+- ret = -ENOSPC;
+ for (i = 0; i < BLKCG_MAX_POLS; i++)
+ if (!blkcg_policy[i])
+ break;
+ if (i >= BLKCG_MAX_POLS) {
+ pr_warn("blkcg_policy_register: BLKCG_MAX_POLS too small\n");
++ ret = -ENOSPC;
+ goto err_unlock;
+ }
+
+- /*
+- * Make sure cpd/pd_alloc_fn and cpd/pd_free_fn in pairs, and policy
+- * without pd_alloc_fn/pd_free_fn can't be activated.
+- */
+- if ((!pol->cpd_alloc_fn ^ !pol->cpd_free_fn) ||
+- (!pol->pd_alloc_fn ^ !pol->pd_free_fn))
+- goto err_unlock;
+-
+ /* register @pol */
+ pol->plid = i;
+ blkcg_policy[pol->plid] = pol;
+@@ -1758,8 +1758,10 @@ int blkcg_policy_register(struct blkcg_policy *pol)
+ struct blkcg_policy_data *cpd;
+
+ cpd = pol->cpd_alloc_fn(GFP_KERNEL);
+- if (!cpd)
++ if (!cpd) {
++ ret = -ENOMEM;
+ goto err_free_cpds;
++ }
+
+ blkcg->cpd[pol->plid] = cpd;
+ cpd->blkcg = blkcg;
+diff --git a/block/blk-settings.c b/block/blk-settings.c
+index 67b119ffa16890..c430c10c864b69 100644
+--- a/block/blk-settings.c
++++ b/block/blk-settings.c
+@@ -124,6 +124,11 @@ static int blk_validate_integrity_limits(struct queue_limits *lim)
+ return 0;
+ }
+
++ if (lim->features & BLK_FEAT_BOUNCE_HIGH) {
++ pr_warn("no bounce buffer support for integrity metadata\n");
++ return -EINVAL;
++ }
++
+ if (!IS_ENABLED(CONFIG_BLK_DEV_INTEGRITY)) {
+ pr_warn("integrity support disabled.\n");
+ return -EINVAL;
+diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
+index 7802186849074f..dc4037e27e36e4 100644
+--- a/block/blk-sysfs.c
++++ b/block/blk-sysfs.c
+@@ -23,9 +23,12 @@
+ struct queue_sysfs_entry {
+ struct attribute attr;
+ ssize_t (*show)(struct gendisk *disk, char *page);
++ ssize_t (*show_limit)(struct gendisk *disk, char *page);
++
+ ssize_t (*store)(struct gendisk *disk, const char *page, size_t count);
+ int (*store_limit)(struct gendisk *disk, const char *page,
+ size_t count, struct queue_limits *lim);
++
+ void (*load_module)(struct gendisk *disk, const char *page, size_t count);
+ };
+
+@@ -412,10 +415,16 @@ static struct queue_sysfs_entry _prefix##_entry = { \
+ .store = _prefix##_store, \
+ };
+
++#define QUEUE_LIM_RO_ENTRY(_prefix, _name) \
++static struct queue_sysfs_entry _prefix##_entry = { \
++ .attr = { .name = _name, .mode = 0444 }, \
++ .show_limit = _prefix##_show, \
++}
++
+ #define QUEUE_LIM_RW_ENTRY(_prefix, _name) \
+ static struct queue_sysfs_entry _prefix##_entry = { \
+ .attr = { .name = _name, .mode = 0644 }, \
+- .show = _prefix##_show, \
++ .show_limit = _prefix##_show, \
+ .store_limit = _prefix##_store, \
+ }
+
+@@ -430,39 +439,39 @@ static struct queue_sysfs_entry _prefix##_entry = { \
+ QUEUE_RW_ENTRY(queue_requests, "nr_requests");
+ QUEUE_RW_ENTRY(queue_ra, "read_ahead_kb");
+ QUEUE_LIM_RW_ENTRY(queue_max_sectors, "max_sectors_kb");
+-QUEUE_RO_ENTRY(queue_max_hw_sectors, "max_hw_sectors_kb");
+-QUEUE_RO_ENTRY(queue_max_segments, "max_segments");
+-QUEUE_RO_ENTRY(queue_max_integrity_segments, "max_integrity_segments");
+-QUEUE_RO_ENTRY(queue_max_segment_size, "max_segment_size");
++QUEUE_LIM_RO_ENTRY(queue_max_hw_sectors, "max_hw_sectors_kb");
++QUEUE_LIM_RO_ENTRY(queue_max_segments, "max_segments");
++QUEUE_LIM_RO_ENTRY(queue_max_integrity_segments, "max_integrity_segments");
++QUEUE_LIM_RO_ENTRY(queue_max_segment_size, "max_segment_size");
+ QUEUE_RW_LOAD_MODULE_ENTRY(elv_iosched, "scheduler");
+
+-QUEUE_RO_ENTRY(queue_logical_block_size, "logical_block_size");
+-QUEUE_RO_ENTRY(queue_physical_block_size, "physical_block_size");
+-QUEUE_RO_ENTRY(queue_chunk_sectors, "chunk_sectors");
+-QUEUE_RO_ENTRY(queue_io_min, "minimum_io_size");
+-QUEUE_RO_ENTRY(queue_io_opt, "optimal_io_size");
++QUEUE_LIM_RO_ENTRY(queue_logical_block_size, "logical_block_size");
++QUEUE_LIM_RO_ENTRY(queue_physical_block_size, "physical_block_size");
++QUEUE_LIM_RO_ENTRY(queue_chunk_sectors, "chunk_sectors");
++QUEUE_LIM_RO_ENTRY(queue_io_min, "minimum_io_size");
++QUEUE_LIM_RO_ENTRY(queue_io_opt, "optimal_io_size");
+
+-QUEUE_RO_ENTRY(queue_max_discard_segments, "max_discard_segments");
+-QUEUE_RO_ENTRY(queue_discard_granularity, "discard_granularity");
+-QUEUE_RO_ENTRY(queue_max_hw_discard_sectors, "discard_max_hw_bytes");
++QUEUE_LIM_RO_ENTRY(queue_max_discard_segments, "max_discard_segments");
++QUEUE_LIM_RO_ENTRY(queue_discard_granularity, "discard_granularity");
++QUEUE_LIM_RO_ENTRY(queue_max_hw_discard_sectors, "discard_max_hw_bytes");
+ QUEUE_LIM_RW_ENTRY(queue_max_discard_sectors, "discard_max_bytes");
+ QUEUE_RO_ENTRY(queue_discard_zeroes_data, "discard_zeroes_data");
+
+-QUEUE_RO_ENTRY(queue_atomic_write_max_sectors, "atomic_write_max_bytes");
+-QUEUE_RO_ENTRY(queue_atomic_write_boundary_sectors,
++QUEUE_LIM_RO_ENTRY(queue_atomic_write_max_sectors, "atomic_write_max_bytes");
++QUEUE_LIM_RO_ENTRY(queue_atomic_write_boundary_sectors,
+ "atomic_write_boundary_bytes");
+-QUEUE_RO_ENTRY(queue_atomic_write_unit_max, "atomic_write_unit_max_bytes");
+-QUEUE_RO_ENTRY(queue_atomic_write_unit_min, "atomic_write_unit_min_bytes");
++QUEUE_LIM_RO_ENTRY(queue_atomic_write_unit_max, "atomic_write_unit_max_bytes");
++QUEUE_LIM_RO_ENTRY(queue_atomic_write_unit_min, "atomic_write_unit_min_bytes");
+
+ QUEUE_RO_ENTRY(queue_write_same_max, "write_same_max_bytes");
+-QUEUE_RO_ENTRY(queue_max_write_zeroes_sectors, "write_zeroes_max_bytes");
+-QUEUE_RO_ENTRY(queue_max_zone_append_sectors, "zone_append_max_bytes");
+-QUEUE_RO_ENTRY(queue_zone_write_granularity, "zone_write_granularity");
++QUEUE_LIM_RO_ENTRY(queue_max_write_zeroes_sectors, "write_zeroes_max_bytes");
++QUEUE_LIM_RO_ENTRY(queue_max_zone_append_sectors, "zone_append_max_bytes");
++QUEUE_LIM_RO_ENTRY(queue_zone_write_granularity, "zone_write_granularity");
+
+-QUEUE_RO_ENTRY(queue_zoned, "zoned");
++QUEUE_LIM_RO_ENTRY(queue_zoned, "zoned");
+ QUEUE_RO_ENTRY(queue_nr_zones, "nr_zones");
+-QUEUE_RO_ENTRY(queue_max_open_zones, "max_open_zones");
+-QUEUE_RO_ENTRY(queue_max_active_zones, "max_active_zones");
++QUEUE_LIM_RO_ENTRY(queue_max_open_zones, "max_open_zones");
++QUEUE_LIM_RO_ENTRY(queue_max_active_zones, "max_active_zones");
+
+ QUEUE_RW_ENTRY(queue_nomerges, "nomerges");
+ QUEUE_LIM_RW_ENTRY(queue_iostats_passthrough, "iostats_passthrough");
+@@ -470,16 +479,16 @@ QUEUE_RW_ENTRY(queue_rq_affinity, "rq_affinity");
+ QUEUE_RW_ENTRY(queue_poll, "io_poll");
+ QUEUE_RW_ENTRY(queue_poll_delay, "io_poll_delay");
+ QUEUE_LIM_RW_ENTRY(queue_wc, "write_cache");
+-QUEUE_RO_ENTRY(queue_fua, "fua");
+-QUEUE_RO_ENTRY(queue_dax, "dax");
++QUEUE_LIM_RO_ENTRY(queue_fua, "fua");
++QUEUE_LIM_RO_ENTRY(queue_dax, "dax");
+ QUEUE_RW_ENTRY(queue_io_timeout, "io_timeout");
+-QUEUE_RO_ENTRY(queue_virt_boundary_mask, "virt_boundary_mask");
+-QUEUE_RO_ENTRY(queue_dma_alignment, "dma_alignment");
++QUEUE_LIM_RO_ENTRY(queue_virt_boundary_mask, "virt_boundary_mask");
++QUEUE_LIM_RO_ENTRY(queue_dma_alignment, "dma_alignment");
+
+ /* legacy alias for logical_block_size: */
+ static struct queue_sysfs_entry queue_hw_sector_size_entry = {
+- .attr = {.name = "hw_sector_size", .mode = 0444 },
+- .show = queue_logical_block_size_show,
++ .attr = {.name = "hw_sector_size", .mode = 0444 },
++ .show_limit = queue_logical_block_size_show,
+ };
+
+ QUEUE_LIM_RW_ENTRY(queue_rotational, "rotational");
+@@ -561,7 +570,9 @@ QUEUE_RW_ENTRY(queue_wb_lat, "wbt_lat_usec");
+
+ /* Common attributes for bio-based and request-based queues. */
+ static struct attribute *queue_attrs[] = {
+- &queue_ra_entry.attr,
++ /*
++ * Attributes which are protected with q->limits_lock.
++ */
+ &queue_max_hw_sectors_entry.attr,
+ &queue_max_sectors_entry.attr,
+ &queue_max_segments_entry.attr,
+@@ -577,37 +588,46 @@ static struct attribute *queue_attrs[] = {
+ &queue_discard_granularity_entry.attr,
+ &queue_max_discard_sectors_entry.attr,
+ &queue_max_hw_discard_sectors_entry.attr,
+- &queue_discard_zeroes_data_entry.attr,
+ &queue_atomic_write_max_sectors_entry.attr,
+ &queue_atomic_write_boundary_sectors_entry.attr,
+ &queue_atomic_write_unit_min_entry.attr,
+ &queue_atomic_write_unit_max_entry.attr,
+- &queue_write_same_max_entry.attr,
+ &queue_max_write_zeroes_sectors_entry.attr,
+ &queue_max_zone_append_sectors_entry.attr,
+ &queue_zone_write_granularity_entry.attr,
+ &queue_rotational_entry.attr,
+ &queue_zoned_entry.attr,
+- &queue_nr_zones_entry.attr,
+ &queue_max_open_zones_entry.attr,
+ &queue_max_active_zones_entry.attr,
+- &queue_nomerges_entry.attr,
+ &queue_iostats_passthrough_entry.attr,
+ &queue_iostats_entry.attr,
+ &queue_stable_writes_entry.attr,
+ &queue_add_random_entry.attr,
+- &queue_poll_entry.attr,
+ &queue_wc_entry.attr,
+ &queue_fua_entry.attr,
+ &queue_dax_entry.attr,
+- &queue_poll_delay_entry.attr,
+ &queue_virt_boundary_mask_entry.attr,
+ &queue_dma_alignment_entry.attr,
++
++ /*
++ * Attributes which are protected with q->sysfs_lock.
++ */
++ &queue_ra_entry.attr,
++ &queue_discard_zeroes_data_entry.attr,
++ &queue_write_same_max_entry.attr,
++ &queue_nr_zones_entry.attr,
++ &queue_nomerges_entry.attr,
++ &queue_poll_entry.attr,
++ &queue_poll_delay_entry.attr,
++
+ NULL,
+ };
+
+ /* Request-based queue attributes that are not relevant for bio-based queues. */
+ static struct attribute *blk_mq_queue_attrs[] = {
++ /*
++ * Attributes which are protected with q->sysfs_lock.
++ */
+ &queue_requests_entry.attr,
+ &elv_iosched_entry.attr,
+ &queue_rq_affinity_entry.attr,
+@@ -666,8 +686,16 @@ queue_attr_show(struct kobject *kobj, struct attribute *attr, char *page)
+ struct gendisk *disk = container_of(kobj, struct gendisk, queue_kobj);
+ ssize_t res;
+
+- if (!entry->show)
++ if (!entry->show && !entry->show_limit)
+ return -EIO;
++
++ if (entry->show_limit) {
++ mutex_lock(&disk->queue->limits_lock);
++ res = entry->show_limit(disk, page);
++ mutex_unlock(&disk->queue->limits_lock);
++ return res;
++ }
++
+ mutex_lock(&disk->queue->sysfs_lock);
+ res = entry->show(disk, page);
+ mutex_unlock(&disk->queue->sysfs_lock);
+diff --git a/block/blk-throttle.c b/block/blk-throttle.c
+index a52f0d6b40ad4e..762fbbd388c876 100644
+--- a/block/blk-throttle.c
++++ b/block/blk-throttle.c
+@@ -1623,13 +1623,6 @@ static bool tg_within_limit(struct throtl_grp *tg, struct bio *bio, bool rw)
+ return tg_may_dispatch(tg, bio, NULL);
+ }
+
+-static void tg_dispatch_in_debt(struct throtl_grp *tg, struct bio *bio, bool rw)
+-{
+- if (!bio_flagged(bio, BIO_BPS_THROTTLED))
+- tg->carryover_bytes[rw] -= throtl_bio_data_size(bio);
+- tg->carryover_ios[rw]--;
+-}
+-
+ bool __blk_throtl_bio(struct bio *bio)
+ {
+ struct request_queue *q = bdev_get_queue(bio->bi_bdev);
+@@ -1666,10 +1659,12 @@ bool __blk_throtl_bio(struct bio *bio)
+ /*
+ * IOs which may cause priority inversions are
+ * dispatched directly, even if they're over limit.
+- * Debts are handled by carryover_bytes/ios while
+- * calculating wait time.
++ *
++	 * Charge and dispatch directly; the throttle
++	 * control algorithm is adaptive, and extra IO bytes
++	 * will be throttled to pay back the debt.
+ */
+- tg_dispatch_in_debt(tg, bio, rw);
++ throtl_charge_bio(tg, bio);
+ } else {
+ /* if above limits, break to queue */
+ break;
+diff --git a/block/blk-zoned.c b/block/blk-zoned.c
+index 0c77244a35c92e..8f15d1aa6eb89a 100644
+--- a/block/blk-zoned.c
++++ b/block/blk-zoned.c
+@@ -343,6 +343,7 @@ int blkdev_zone_mgmt_ioctl(struct block_device *bdev, blk_mode_t mode,
+ op = REQ_OP_ZONE_RESET;
+
+ /* Invalidate the page cache, including dirty pages. */
++ inode_lock(bdev->bd_mapping->host);
+ filemap_invalidate_lock(bdev->bd_mapping);
+ ret = blkdev_truncate_zone_range(bdev, mode, &zrange);
+ if (ret)
+@@ -364,8 +365,10 @@ int blkdev_zone_mgmt_ioctl(struct block_device *bdev, blk_mode_t mode,
+ ret = blkdev_zone_mgmt(bdev, op, zrange.sector, zrange.nr_sectors);
+
+ fail:
+- if (cmd == BLKRESETZONE)
++ if (cmd == BLKRESETZONE) {
+ filemap_invalidate_unlock(bdev->bd_mapping);
++ inode_unlock(bdev->bd_mapping->host);
++ }
+
+ return ret;
+ }
+diff --git a/block/blk.h b/block/blk.h
+index 9dcc92c7f2b50d..c14f415de5228e 100644
+--- a/block/blk.h
++++ b/block/blk.h
+@@ -480,7 +480,8 @@ static inline void blk_zone_update_request_bio(struct request *rq,
+ * the original BIO sector so that blk_zone_write_plug_bio_endio() can
+ * lookup the zone write plug.
+ */
+- if (req_op(rq) == REQ_OP_ZONE_APPEND || bio_zone_write_plugging(bio))
++ if (req_op(rq) == REQ_OP_ZONE_APPEND ||
++ bio_flagged(bio, BIO_EMULATES_ZONE_APPEND))
+ bio->bi_iter.bi_sector = rq->__sector;
+ }
+ void blk_zone_write_plug_bio_endio(struct bio *bio);
+diff --git a/block/bounce.c b/block/bounce.c
+index 0d898cd5ec497f..09a9616cf20944 100644
+--- a/block/bounce.c
++++ b/block/bounce.c
+@@ -41,8 +41,6 @@ static void init_bounce_bioset(void)
+
+ ret = bioset_init(&bounce_bio_set, BIO_POOL_SIZE, 0, BIOSET_NEED_BVECS);
+ BUG_ON(ret);
+- if (bioset_integrity_create(&bounce_bio_set, BIO_POOL_SIZE))
+- BUG_ON(1);
+
+ ret = bioset_init(&bounce_bio_split, BIO_POOL_SIZE, 0, 0);
+ BUG_ON(ret);
+diff --git a/block/fops.c b/block/fops.c
+index d23ddb2dc1138d..82b672d15ea4f8 100644
+--- a/block/fops.c
++++ b/block/fops.c
+@@ -746,7 +746,14 @@ static ssize_t blkdev_write_iter(struct kiocb *iocb, struct iov_iter *from)
+ ret = direct_write_fallback(iocb, from, ret,
+ blkdev_buffered_write(iocb, from));
+ } else {
++ /*
++ * Take i_rwsem and invalidate_lock to avoid racing with
++ * set_blocksize changing i_blkbits/folio order and punching
++ * out the pagecache.
++ */
++ inode_lock_shared(bd_inode);
+ ret = blkdev_buffered_write(iocb, from);
++ inode_unlock_shared(bd_inode);
+ }
+
+ if (ret > 0)
+@@ -757,6 +764,7 @@ static ssize_t blkdev_write_iter(struct kiocb *iocb, struct iov_iter *from)
+
+ static ssize_t blkdev_read_iter(struct kiocb *iocb, struct iov_iter *to)
+ {
++ struct inode *bd_inode = bdev_file_inode(iocb->ki_filp);
+ struct block_device *bdev = I_BDEV(iocb->ki_filp->f_mapping->host);
+ loff_t size = bdev_nr_bytes(bdev);
+ loff_t pos = iocb->ki_pos;
+@@ -793,7 +801,13 @@ static ssize_t blkdev_read_iter(struct kiocb *iocb, struct iov_iter *to)
+ goto reexpand;
+ }
+
++ /*
++ * Take i_rwsem and invalidate_lock to avoid racing with set_blocksize
++ * changing i_blkbits/folio order and punching out the pagecache.
++ */
++ inode_lock_shared(bd_inode);
+ ret = filemap_read(iocb, to, ret);
++ inode_unlock_shared(bd_inode);
+
+ reexpand:
+ if (unlikely(shorted))
+@@ -836,6 +850,7 @@ static long blkdev_fallocate(struct file *file, int mode, loff_t start,
+ if ((start | len) & (bdev_logical_block_size(bdev) - 1))
+ return -EINVAL;
+
++ inode_lock(inode);
+ filemap_invalidate_lock(inode->i_mapping);
+
+ /*
+@@ -868,6 +883,7 @@ static long blkdev_fallocate(struct file *file, int mode, loff_t start,
+
+ fail:
+ filemap_invalidate_unlock(inode->i_mapping);
++ inode_unlock(inode);
+ return error;
+ }
+
+diff --git a/block/ioctl.c b/block/ioctl.c
+index 6554b728bae6aa..919066b4bb49c8 100644
+--- a/block/ioctl.c
++++ b/block/ioctl.c
+@@ -141,6 +141,7 @@ static int blk_ioctl_discard(struct block_device *bdev, blk_mode_t mode,
+ if (err)
+ return err;
+
++ inode_lock(bdev->bd_mapping->host);
+ filemap_invalidate_lock(bdev->bd_mapping);
+ err = truncate_bdev_range(bdev, mode, start, start + len - 1);
+ if (err)
+@@ -173,6 +174,7 @@ static int blk_ioctl_discard(struct block_device *bdev, blk_mode_t mode,
+ blk_finish_plug(&plug);
+ fail:
+ filemap_invalidate_unlock(bdev->bd_mapping);
++ inode_unlock(bdev->bd_mapping->host);
+ return err;
+ }
+
+@@ -198,12 +200,14 @@ static int blk_ioctl_secure_erase(struct block_device *bdev, blk_mode_t mode,
+ end > bdev_nr_bytes(bdev))
+ return -EINVAL;
+
++ inode_lock(bdev->bd_mapping->host);
+ filemap_invalidate_lock(bdev->bd_mapping);
+ err = truncate_bdev_range(bdev, mode, start, end - 1);
+ if (!err)
+ err = blkdev_issue_secure_erase(bdev, start >> 9, len >> 9,
+ GFP_KERNEL);
+ filemap_invalidate_unlock(bdev->bd_mapping);
++ inode_unlock(bdev->bd_mapping->host);
+ return err;
+ }
+
+@@ -235,6 +239,7 @@ static int blk_ioctl_zeroout(struct block_device *bdev, blk_mode_t mode,
+ return -EINVAL;
+
+ /* Invalidate the page cache, including dirty pages */
++ inode_lock(bdev->bd_mapping->host);
+ filemap_invalidate_lock(bdev->bd_mapping);
+ err = truncate_bdev_range(bdev, mode, start, end);
+ if (err)
+@@ -245,6 +250,7 @@ static int blk_ioctl_zeroout(struct block_device *bdev, blk_mode_t mode,
+
+ fail:
+ filemap_invalidate_unlock(bdev->bd_mapping);
++ inode_unlock(bdev->bd_mapping->host);
+ return err;
+ }
+
+diff --git a/crypto/ahash.c b/crypto/ahash.c
+index b08b89ec26ec51..63960465eea175 100644
+--- a/crypto/ahash.c
++++ b/crypto/ahash.c
+@@ -489,6 +489,7 @@ static int crypto_ahash_init_tfm(struct crypto_tfm *tfm)
+ struct ahash_alg *alg = crypto_ahash_alg(hash);
+
+ crypto_ahash_set_statesize(hash, alg->halg.statesize);
++ crypto_ahash_set_reqsize(hash, alg->reqsize);
+
+ if (tfm->__crt_alg->cra_type == &crypto_shash_type)
+ return crypto_init_ahash_using_shash(tfm);
+@@ -654,6 +655,9 @@ static int ahash_prepare_alg(struct ahash_alg *alg)
+ if (alg->halg.statesize == 0)
+ return -EINVAL;
+
++ if (alg->reqsize && alg->reqsize < alg->halg.statesize)
++ return -EINVAL;
++
+ err = hash_prepare_alg(&alg->halg);
+ if (err)
+ return err;
+diff --git a/crypto/algif_hash.c b/crypto/algif_hash.c
+index 5498a87249d3e7..e3f1a4852737b0 100644
+--- a/crypto/algif_hash.c
++++ b/crypto/algif_hash.c
+@@ -265,10 +265,6 @@ static int hash_accept(struct socket *sock, struct socket *newsock,
+ goto out_free_state;
+
+ err = crypto_ahash_import(&ctx2->req, state);
+- if (err) {
+- sock_orphan(sk2);
+- sock_put(sk2);
+- }
+
+ out_free_state:
+ kfree_sensitive(state);
+diff --git a/crypto/lzo-rle.c b/crypto/lzo-rle.c
+index 0631d975bfac11..0abc2d87f04200 100644
+--- a/crypto/lzo-rle.c
++++ b/crypto/lzo-rle.c
+@@ -55,7 +55,7 @@ static int __lzorle_compress(const u8 *src, unsigned int slen,
+ size_t tmp_len = *dlen; /* size_t(ulong) <-> uint on 64 bit */
+ int err;
+
+- err = lzorle1x_1_compress(src, slen, dst, &tmp_len, ctx);
++ err = lzorle1x_1_compress_safe(src, slen, dst, &tmp_len, ctx);
+
+ if (err != LZO_E_OK)
+ return -EINVAL;
+diff --git a/crypto/lzo.c b/crypto/lzo.c
+index ebda132dd22bf5..8338851c7406a3 100644
+--- a/crypto/lzo.c
++++ b/crypto/lzo.c
+@@ -55,7 +55,7 @@ static int __lzo_compress(const u8 *src, unsigned int slen,
+ size_t tmp_len = *dlen; /* size_t(ulong) <-> uint on 64 bit */
+ int err;
+
+- err = lzo1x_1_compress(src, slen, dst, &tmp_len, ctx);
++ err = lzo1x_1_compress_safe(src, slen, dst, &tmp_len, ctx);
+
+ if (err != LZO_E_OK)
+ return -EINVAL;
+diff --git a/crypto/skcipher.c b/crypto/skcipher.c
+index a9eb2dcf289821..2d2b1589a00971 100644
+--- a/crypto/skcipher.c
++++ b/crypto/skcipher.c
+@@ -681,6 +681,7 @@ struct crypto_sync_skcipher *crypto_alloc_sync_skcipher(
+
+ /* Only sync algorithms allowed. */
+ mask |= CRYPTO_ALG_ASYNC | CRYPTO_ALG_SKCIPHER_REQSIZE_LARGE;
++ type &= ~(CRYPTO_ALG_ASYNC | CRYPTO_ALG_SKCIPHER_REQSIZE_LARGE);
+
+ tfm = crypto_alloc_tfm(alg_name, &crypto_skcipher_type, type, mask);
+
+diff --git a/drivers/accel/amdxdna/aie2_ctx.c b/drivers/accel/amdxdna/aie2_ctx.c
+index 5f43db02b24046..92c768b0c9a03d 100644
+--- a/drivers/accel/amdxdna/aie2_ctx.c
++++ b/drivers/accel/amdxdna/aie2_ctx.c
+@@ -34,6 +34,8 @@ static void aie2_job_release(struct kref *ref)
+
+ job = container_of(ref, struct amdxdna_sched_job, refcnt);
+ amdxdna_sched_job_cleanup(job);
++ atomic64_inc(&job->hwctx->job_free_cnt);
++ wake_up(&job->hwctx->priv->job_free_wq);
+ if (job->out_fence)
+ dma_fence_put(job->out_fence);
+ kfree(job);
+@@ -134,7 +136,8 @@ static void aie2_hwctx_wait_for_idle(struct amdxdna_hwctx *hwctx)
+ if (!fence)
+ return;
+
+- dma_fence_wait(fence, false);
++ /* Wait up to 2 seconds for fw to finish all pending requests */
++ dma_fence_wait_timeout(fence, false, msecs_to_jiffies(2000));
+ dma_fence_put(fence);
+ }
+
+@@ -616,6 +619,7 @@ int aie2_hwctx_init(struct amdxdna_hwctx *hwctx)
+ hwctx->status = HWCTX_STAT_INIT;
+ ndev = xdna->dev_handle;
+ ndev->hwctx_num++;
++ init_waitqueue_head(&priv->job_free_wq);
+
+ XDNA_DBG(xdna, "hwctx %s init completed", hwctx->name);
+
+@@ -652,25 +656,23 @@ void aie2_hwctx_fini(struct amdxdna_hwctx *hwctx)
+ xdna = hwctx->client->xdna;
+ ndev = xdna->dev_handle;
+ ndev->hwctx_num--;
+- drm_sched_wqueue_stop(&hwctx->priv->sched);
+
+- /* Now, scheduler will not send command to device. */
++ XDNA_DBG(xdna, "%s sequence number %lld", hwctx->name, hwctx->priv->seq);
++ drm_sched_entity_destroy(&hwctx->priv->entity);
++
++ aie2_hwctx_wait_for_idle(hwctx);
++
++	/* Request fw to destroy hwctx and cancel the remaining pending requests */
+ aie2_release_resource(hwctx);
+
+- /*
+- * All submitted commands are aborted.
+- * Restart scheduler queues to cleanup jobs. The amdxdna_sched_job_run()
+- * will return NODEV if it is called.
+- */
+- drm_sched_wqueue_start(&hwctx->priv->sched);
++ /* Wait for all submitted jobs to be completed or canceled */
++ wait_event(hwctx->priv->job_free_wq,
++ atomic64_read(&hwctx->job_submit_cnt) ==
++ atomic64_read(&hwctx->job_free_cnt));
+
+- aie2_hwctx_wait_for_idle(hwctx);
+- drm_sched_entity_destroy(&hwctx->priv->entity);
+ drm_sched_fini(&hwctx->priv->sched);
+ aie2_ctx_syncobj_destroy(hwctx);
+
+- XDNA_DBG(xdna, "%s sequence number %lld", hwctx->name, hwctx->priv->seq);
+-
+ for (idx = 0; idx < ARRAY_SIZE(hwctx->priv->cmd_buf); idx++)
+ drm_gem_object_put(to_gobj(hwctx->priv->cmd_buf[idx]));
+ amdxdna_gem_unpin(hwctx->priv->heap);
+@@ -879,6 +881,7 @@ int aie2_cmd_submit(struct amdxdna_hwctx *hwctx, struct amdxdna_sched_job *job,
+ drm_gem_unlock_reservations(job->bos, job->bo_cnt, &acquire_ctx);
+
+ aie2_job_put(job);
++ atomic64_inc(&hwctx->job_submit_cnt);
+
+ return 0;
+
+diff --git a/drivers/accel/amdxdna/amdxdna_ctx.c b/drivers/accel/amdxdna/amdxdna_ctx.c
+index d11b1c83d9c3bb..43442b9e273b34 100644
+--- a/drivers/accel/amdxdna/amdxdna_ctx.c
++++ b/drivers/accel/amdxdna/amdxdna_ctx.c
+@@ -220,6 +220,8 @@ int amdxdna_drm_create_hwctx_ioctl(struct drm_device *dev, void *data, struct dr
+ args->syncobj_handle = hwctx->syncobj_hdl;
+ mutex_unlock(&xdna->dev_lock);
+
++ atomic64_set(&hwctx->job_submit_cnt, 0);
++ atomic64_set(&hwctx->job_free_cnt, 0);
+ XDNA_DBG(xdna, "PID %d create HW context %d, ret %d", client->pid, args->handle, ret);
+ drm_dev_exit(idx);
+ return 0;
+diff --git a/drivers/accel/amdxdna/amdxdna_ctx.h b/drivers/accel/amdxdna/amdxdna_ctx.h
+index 80b0304193ec3f..f0a4a8586d858d 100644
+--- a/drivers/accel/amdxdna/amdxdna_ctx.h
++++ b/drivers/accel/amdxdna/amdxdna_ctx.h
+@@ -87,6 +87,9 @@ struct amdxdna_hwctx {
+ struct amdxdna_qos_info qos;
+ struct amdxdna_hwctx_param_config_cu *cus;
+ u32 syncobj_hdl;
++
++ atomic64_t job_submit_cnt;
++ atomic64_t job_free_cnt ____cacheline_aligned_in_smp;
+ };
+
+ #define drm_job_to_xdna_job(j) \
+diff --git a/drivers/accel/amdxdna/amdxdna_mailbox.c b/drivers/accel/amdxdna/amdxdna_mailbox.c
+index e5301fac139712..2879e4149c9379 100644
+--- a/drivers/accel/amdxdna/amdxdna_mailbox.c
++++ b/drivers/accel/amdxdna/amdxdna_mailbox.c
+@@ -349,8 +349,6 @@ static irqreturn_t mailbox_irq_handler(int irq, void *p)
+ trace_mbox_irq_handle(MAILBOX_NAME, irq);
+ /* Schedule a rx_work to call the callback functions */
+ queue_work(mb_chann->work_q, &mb_chann->rx_work);
+- /* Clear IOHUB register */
+- mailbox_reg_write(mb_chann, mb_chann->iohub_int_addr, 0);
+
+ return IRQ_HANDLED;
+ }
+@@ -367,6 +365,9 @@ static void mailbox_rx_worker(struct work_struct *rx_work)
+ return;
+ }
+
++again:
++ mailbox_reg_write(mb_chann, mb_chann->iohub_int_addr, 0);
++
+ while (1) {
+ /*
+ * If return is 0, keep consuming next message, until there is
+@@ -380,10 +381,18 @@ static void mailbox_rx_worker(struct work_struct *rx_work)
+ if (unlikely(ret)) {
+ MB_ERR(mb_chann, "Unexpected ret %d, disable irq", ret);
+ WRITE_ONCE(mb_chann->bad_state, true);
+- disable_irq(mb_chann->msix_irq);
+- break;
++ return;
+ }
+ }
++
++ /*
++	 * The hardware will not generate an interrupt if the firmware creates a
++	 * new response right after the driver clears the interrupt register.
++	 * Check the interrupt register to make sure there is no new response
++ * before exiting.
++ */
++ if (mailbox_reg_read(mb_chann, mb_chann->iohub_int_addr))
++ goto again;
+ }
+
+ int xdna_mailbox_send_msg(struct mailbox_channel *mb_chann,
+diff --git a/drivers/accel/qaic/qaic_drv.c b/drivers/accel/qaic/qaic_drv.c
+index 81819b9ef8d4fd..32f0e81d3e3048 100644
+--- a/drivers/accel/qaic/qaic_drv.c
++++ b/drivers/accel/qaic/qaic_drv.c
+@@ -431,7 +431,7 @@ static int init_pci(struct qaic_device *qdev, struct pci_dev *pdev)
+ int bars;
+ int ret;
+
+- bars = pci_select_bars(pdev, IORESOURCE_MEM);
++ bars = pci_select_bars(pdev, IORESOURCE_MEM) & 0x3f;
+
+ /* make sure the device has the expected BARs */
+ if (bars != (BIT(0) | BIT(2) | BIT(4))) {
+diff --git a/drivers/acpi/Kconfig b/drivers/acpi/Kconfig
+index d81b55f5068c40..7f10aa38269d21 100644
+--- a/drivers/acpi/Kconfig
++++ b/drivers/acpi/Kconfig
+@@ -452,7 +452,7 @@ config ACPI_SBS
+ the modules will be called sbs and sbshc.
+
+ config ACPI_HED
+- tristate "Hardware Error Device"
++ bool "Hardware Error Device"
+ help
+ This driver supports the Hardware Error Device (PNP0C33),
+ which is used to report some hardware errors notified via
+diff --git a/drivers/acpi/acpi_pnp.c b/drivers/acpi/acpi_pnp.c
+index 01abf26764b00c..3f5a1840f57330 100644
+--- a/drivers/acpi/acpi_pnp.c
++++ b/drivers/acpi/acpi_pnp.c
+@@ -355,8 +355,10 @@ static bool acpi_pnp_match(const char *idstr, const struct acpi_device_id **matc
+ * device represented by it.
+ */
+ static const struct acpi_device_id acpi_nonpnp_device_ids[] = {
++ {"INT3F0D"},
+ {"INTC1080"},
+ {"INTC1081"},
++ {"INTC1099"},
+ {""},
+ };
+
+diff --git a/drivers/acpi/hed.c b/drivers/acpi/hed.c
+index 7652515a6be1e3..3499f86c411e3b 100644
+--- a/drivers/acpi/hed.c
++++ b/drivers/acpi/hed.c
+@@ -80,7 +80,12 @@ static struct acpi_driver acpi_hed_driver = {
+ .remove = acpi_hed_remove,
+ },
+ };
+-module_acpi_driver(acpi_hed_driver);
++
++static int __init acpi_hed_driver_init(void)
++{
++ return acpi_bus_register_driver(&acpi_hed_driver);
++}
++subsys_initcall(acpi_hed_driver_init);
+
+ MODULE_AUTHOR("Huang Ying");
+ MODULE_DESCRIPTION("ACPI Hardware Error Device Driver");
+diff --git a/drivers/auxdisplay/charlcd.c b/drivers/auxdisplay/charlcd.c
+index 19b619376d48b9..09020bb8ad15fa 100644
+--- a/drivers/auxdisplay/charlcd.c
++++ b/drivers/auxdisplay/charlcd.c
+@@ -595,18 +595,19 @@ static int charlcd_init(struct charlcd *lcd)
+ return 0;
+ }
+
+-struct charlcd *charlcd_alloc(void)
++struct charlcd *charlcd_alloc(unsigned int drvdata_size)
+ {
+ struct charlcd_priv *priv;
+ struct charlcd *lcd;
+
+- priv = kzalloc(sizeof(*priv), GFP_KERNEL);
++ priv = kzalloc(sizeof(*priv) + drvdata_size, GFP_KERNEL);
+ if (!priv)
+ return NULL;
+
+ priv->esc_seq.len = -1;
+
+ lcd = &priv->lcd;
++ lcd->drvdata = priv->drvdata;
+
+ return lcd;
+ }
+diff --git a/drivers/auxdisplay/charlcd.h b/drivers/auxdisplay/charlcd.h
+index 4d4287209d04c4..d10b89740bcae7 100644
+--- a/drivers/auxdisplay/charlcd.h
++++ b/drivers/auxdisplay/charlcd.h
+@@ -51,7 +51,7 @@ struct charlcd {
+ unsigned long y;
+ } addr;
+
+- void *drvdata;
++ void *drvdata; /* Set by charlcd_alloc() */
+ };
+
+ /**
+@@ -95,7 +95,8 @@ struct charlcd_ops {
+ };
+
+ void charlcd_backlight(struct charlcd *lcd, enum charlcd_onoff on);
+-struct charlcd *charlcd_alloc(void);
++
++struct charlcd *charlcd_alloc(unsigned int drvdata_size);
+ void charlcd_free(struct charlcd *lcd);
+
+ int charlcd_register(struct charlcd *lcd);
+diff --git a/drivers/auxdisplay/hd44780.c b/drivers/auxdisplay/hd44780.c
+index 9d0ae9c02e9ba2..1d67fe32434124 100644
+--- a/drivers/auxdisplay/hd44780.c
++++ b/drivers/auxdisplay/hd44780.c
+@@ -226,7 +226,7 @@ static int hd44780_probe(struct platform_device *pdev)
+ if (!hdc)
+ return -ENOMEM;
+
+- lcd = charlcd_alloc();
++ lcd = charlcd_alloc(0);
+ if (!lcd)
+ goto fail1;
+
+diff --git a/drivers/auxdisplay/lcd2s.c b/drivers/auxdisplay/lcd2s.c
+index a28daa4ffbf752..c71ebb925971b6 100644
+--- a/drivers/auxdisplay/lcd2s.c
++++ b/drivers/auxdisplay/lcd2s.c
+@@ -307,7 +307,7 @@ static int lcd2s_i2c_probe(struct i2c_client *i2c)
+ if (err < 0)
+ return err;
+
+- lcd = charlcd_alloc();
++ lcd = charlcd_alloc(0);
+ if (!lcd)
+ return -ENOMEM;
+
+diff --git a/drivers/auxdisplay/panel.c b/drivers/auxdisplay/panel.c
+index 6dc8798d01f98c..4da142692d55f8 100644
+--- a/drivers/auxdisplay/panel.c
++++ b/drivers/auxdisplay/panel.c
+@@ -835,7 +835,7 @@ static void lcd_init(void)
+ if (!hdc)
+ return;
+
+- charlcd = charlcd_alloc();
++ charlcd = charlcd_alloc(0);
+ if (!charlcd) {
+ kfree(hdc);
+ return;
+diff --git a/drivers/base/faux.c b/drivers/base/faux.c
+index 531e9d789ee042..407c1d1aad50b7 100644
+--- a/drivers/base/faux.c
++++ b/drivers/base/faux.c
+@@ -102,7 +102,9 @@ static void faux_device_release(struct device *dev)
+ *
+ * Note, when this function is called, the functions specified in struct
+ * faux_ops can be called before the function returns, so be prepared for
+- * everything to be properly initialized before that point in time.
++ * everything to be properly initialized before that point in time. If the
++ * probe callback (if one is present) does NOT succeed, the creation of the
++ * device will fail and NULL will be returned.
+ *
+ * Return:
+ * * NULL if an error happened with creating the device
+@@ -147,6 +149,17 @@ struct faux_device *faux_device_create_with_groups(const char *name,
+ return NULL;
+ }
+
++ /*
++ * Verify that we did bind the driver to the device (i.e. probe worked),
++	 * if not, let's fail the creation as it is almost impossible for the
++	 * caller to determine whether probe was successful.
++ */
++ if (!dev->driver) {
++ dev_err(dev, "probe did not succeed, tearing down the device\n");
++ faux_device_destroy(faux_dev);
++ faux_dev = NULL;
++ }
++
+ return faux_dev;
+ }
+ EXPORT_SYMBOL_GPL(faux_device_create_with_groups);
+diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
+index 23be2d1b040798..37fe251b4c591f 100644
+--- a/drivers/base/power/main.c
++++ b/drivers/base/power/main.c
+@@ -933,6 +933,13 @@ static void device_resume(struct device *dev, pm_message_t state, bool async)
+ goto Complete;
+
+ if (dev->power.direct_complete) {
++ /*
++ * Allow new children to be added under the device after this
++ * point if it has no PM callbacks.
++ */
++ if (dev->power.no_pm_callbacks)
++ dev->power.is_prepared = false;
++
+ /* Match the pm_runtime_disable() in device_suspend(). */
+ pm_runtime_enable(dev);
+ goto Complete;
+diff --git a/drivers/block/loop.c b/drivers/block/loop.c
+index b378d2aa49f069..0b135d1ca25ea0 100644
+--- a/drivers/block/loop.c
++++ b/drivers/block/loop.c
+@@ -205,8 +205,6 @@ static bool lo_can_use_dio(struct loop_device *lo)
+ */
+ static inline void loop_update_dio(struct loop_device *lo)
+ {
+- bool dio_in_use = lo->lo_flags & LO_FLAGS_DIRECT_IO;
+-
+ lockdep_assert_held(&lo->lo_mutex);
+ WARN_ON_ONCE(lo->lo_state == Lo_bound &&
+ lo->lo_queue->mq_freeze_depth == 0);
+@@ -215,10 +213,6 @@ static inline void loop_update_dio(struct loop_device *lo)
+ lo->lo_flags |= LO_FLAGS_DIRECT_IO;
+ if ((lo->lo_flags & LO_FLAGS_DIRECT_IO) && !lo_can_use_dio(lo))
+ lo->lo_flags &= ~LO_FLAGS_DIRECT_IO;
+-
+- /* flush dirty pages before starting to issue direct I/O */
+- if ((lo->lo_flags & LO_FLAGS_DIRECT_IO) && !dio_in_use)
+- vfs_fsync(lo->lo_backing_file, 0);
+ }
+
+ /**
+@@ -568,6 +562,13 @@ static int loop_change_fd(struct loop_device *lo, struct block_device *bdev,
+ if (get_loop_size(lo, file) != get_loop_size(lo, old_file))
+ goto out_err;
+
++ /*
++ * We might switch to direct I/O mode for the loop device, write back
++	 * all dirty data in the page cache now so that the individual I/O
++ * operations don't have to do that.
++ */
++ vfs_fsync(file, 0);
++
+ /* and ... switch */
+ disk_force_media_change(lo->lo_disk);
+ memflags = blk_mq_freeze_queue(lo->lo_queue);
+@@ -919,7 +920,7 @@ static unsigned int loop_default_blocksize(struct loop_device *lo,
+ struct block_device *backing_bdev)
+ {
+ /* In case of direct I/O, match underlying block size */
+- if ((lo->lo_backing_file->f_flags & O_DIRECT) && backing_bdev)
++ if ((lo->lo_flags & LO_FLAGS_DIRECT_IO) && backing_bdev)
+ return bdev_logical_block_size(backing_bdev);
+ return SECTOR_SIZE;
+ }
+@@ -972,9 +973,6 @@ static int loop_configure(struct loop_device *lo, blk_mode_t mode,
+ if (!file)
+ return -EBADF;
+
+- if ((mode & BLK_OPEN_WRITE) && !file->f_op->write_iter)
+- return -EINVAL;
+-
+ error = loop_check_backing_file(file);
+ if (error)
+ return error;
+@@ -1046,6 +1044,13 @@ static int loop_configure(struct loop_device *lo, blk_mode_t mode,
+ if (error)
+ goto out_unlock;
+
++ /*
++ * We might switch to direct I/O mode for the loop device, write back
++	 * all dirty data in the page cache now so that the individual I/O
++ * operations don't have to do that.
++ */
++ vfs_fsync(file, 0);
++
+ loop_update_dio(lo);
+ loop_sysfs_init(lo);
+
+diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
+index 175566a71bb3f9..41c2cd5818b4a2 100644
+--- a/drivers/block/null_blk/main.c
++++ b/drivers/block/null_blk/main.c
+@@ -592,41 +592,41 @@ static ssize_t nullb_device_zone_offline_store(struct config_item *item,
+ CONFIGFS_ATTR_WO(nullb_device_, zone_offline);
+
+ static struct configfs_attribute *nullb_device_attrs[] = {
+- &nullb_device_attr_size,
++ &nullb_device_attr_badblocks,
++ &nullb_device_attr_blocking,
++ &nullb_device_attr_blocksize,
++ &nullb_device_attr_cache_size,
+ &nullb_device_attr_completion_nsec,
+- &nullb_device_attr_submit_queues,
+- &nullb_device_attr_poll_queues,
++ &nullb_device_attr_discard,
++ &nullb_device_attr_fua,
+ &nullb_device_attr_home_node,
+- &nullb_device_attr_queue_mode,
+- &nullb_device_attr_blocksize,
+- &nullb_device_attr_max_sectors,
+- &nullb_device_attr_irqmode,
+ &nullb_device_attr_hw_queue_depth,
+ &nullb_device_attr_index,
+- &nullb_device_attr_blocking,
+- &nullb_device_attr_use_per_node_hctx,
+- &nullb_device_attr_power,
+- &nullb_device_attr_memory_backed,
+- &nullb_device_attr_discard,
++ &nullb_device_attr_irqmode,
++ &nullb_device_attr_max_sectors,
+ &nullb_device_attr_mbps,
+- &nullb_device_attr_cache_size,
+- &nullb_device_attr_badblocks,
+- &nullb_device_attr_zoned,
+- &nullb_device_attr_zone_size,
++ &nullb_device_attr_memory_backed,
++ &nullb_device_attr_no_sched,
++ &nullb_device_attr_poll_queues,
++ &nullb_device_attr_power,
++ &nullb_device_attr_queue_mode,
++ &nullb_device_attr_rotational,
++ &nullb_device_attr_shared_tag_bitmap,
++ &nullb_device_attr_shared_tags,
++ &nullb_device_attr_size,
++ &nullb_device_attr_submit_queues,
++ &nullb_device_attr_use_per_node_hctx,
++ &nullb_device_attr_virt_boundary,
++ &nullb_device_attr_zone_append_max_sectors,
+ &nullb_device_attr_zone_capacity,
+- &nullb_device_attr_zone_nr_conv,
+- &nullb_device_attr_zone_max_open,
++ &nullb_device_attr_zone_full,
+ &nullb_device_attr_zone_max_active,
+- &nullb_device_attr_zone_append_max_sectors,
+- &nullb_device_attr_zone_readonly,
++ &nullb_device_attr_zone_max_open,
++ &nullb_device_attr_zone_nr_conv,
+ &nullb_device_attr_zone_offline,
+- &nullb_device_attr_zone_full,
+- &nullb_device_attr_virt_boundary,
+- &nullb_device_attr_no_sched,
+- &nullb_device_attr_shared_tags,
+- &nullb_device_attr_shared_tag_bitmap,
+- &nullb_device_attr_fua,
+- &nullb_device_attr_rotational,
++ &nullb_device_attr_zone_readonly,
++ &nullb_device_attr_zone_size,
++ &nullb_device_attr_zoned,
+ NULL,
+ };
+
+@@ -704,16 +704,28 @@ nullb_group_drop_item(struct config_group *group, struct config_item *item)
+
+ static ssize_t memb_group_features_show(struct config_item *item, char *page)
+ {
+- return snprintf(page, PAGE_SIZE,
+- "badblocks,blocking,blocksize,cache_size,fua,"
+- "completion_nsec,discard,home_node,hw_queue_depth,"
+- "irqmode,max_sectors,mbps,memory_backed,no_sched,"
+- "poll_queues,power,queue_mode,shared_tag_bitmap,"
+- "shared_tags,size,submit_queues,use_per_node_hctx,"
+- "virt_boundary,zoned,zone_capacity,zone_max_active,"
+- "zone_max_open,zone_nr_conv,zone_offline,zone_readonly,"
+- "zone_size,zone_append_max_sectors,zone_full,"
+- "rotational\n");
++
++ struct configfs_attribute **entry;
++ char delimiter = ',';
++ size_t left = PAGE_SIZE;
++ size_t written = 0;
++ int ret;
++
++ for (entry = &nullb_device_attrs[0]; *entry && left > 0; entry++) {
++ if (!*(entry + 1))
++ delimiter = '\n';
++ ret = snprintf(page + written, left, "%s%c", (*entry)->ca_name,
++ delimiter);
++ if (ret >= left) {
++ WARN_ONCE(1, "Too many null_blk features to print\n");
++ memzero_explicit(page, PAGE_SIZE);
++ return -ENOBUFS;
++ }
++ left -= ret;
++ written += ret;
++ }
++
++ return written;
+ }
+
+ CONFIGFS_ATTR_RO(memb_group_, features);
+diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
+index 7bbfc20f116a44..08694d7d18db49 100644
+--- a/drivers/block/ublk_drv.c
++++ b/drivers/block/ublk_drv.c
+@@ -490,15 +490,17 @@ static wait_queue_head_t ublk_idr_wq; /* wait until one idr is freed */
+
+ static DEFINE_MUTEX(ublk_ctl_mutex);
+
++
++#define UBLK_MAX_UBLKS UBLK_MINORS
++
+ /*
+- * Max ublk devices allowed to add
++ * Max unprivileged ublk devices allowed to add
+ *
+ * It can be extended to one per-user limit in future or even controlled
+ * by cgroup.
+ */
+-#define UBLK_MAX_UBLKS UBLK_MINORS
+-static unsigned int ublks_max = 64;
+-static unsigned int ublks_added; /* protected by ublk_ctl_mutex */
++static unsigned int unprivileged_ublks_max = 64;
++static unsigned int unprivileged_ublks_added; /* protected by ublk_ctl_mutex */
+
+ static struct miscdevice ublk_misc;
+
+@@ -2051,10 +2053,9 @@ static int __ublk_ch_uring_cmd(struct io_uring_cmd *cmd,
+ return -EIOCBQUEUED;
+
+ out:
+- io_uring_cmd_done(cmd, ret, 0, issue_flags);
+ pr_devel("%s: complete: cmd op %d, tag %d ret %x io_flags %x\n",
+ __func__, cmd_op, tag, ret, io->flags);
+- return -EIOCBQUEUED;
++ return ret;
+ }
+
+ static inline struct request *__ublk_check_and_get_req(struct ublk_device *ub,
+@@ -2110,7 +2111,10 @@ static inline int ublk_ch_uring_cmd_local(struct io_uring_cmd *cmd,
+ static void ublk_ch_uring_cmd_cb(struct io_uring_cmd *cmd,
+ unsigned int issue_flags)
+ {
+- ublk_ch_uring_cmd_local(cmd, issue_flags);
++ int ret = ublk_ch_uring_cmd_local(cmd, issue_flags);
++
++ if (ret != -EIOCBQUEUED)
++ io_uring_cmd_done(cmd, ret, 0, issue_flags);
+ }
+
+ static int ublk_ch_uring_cmd(struct io_uring_cmd *cmd, unsigned int issue_flags)
+@@ -2375,7 +2379,8 @@ static int ublk_add_chdev(struct ublk_device *ub)
+ if (ret)
+ goto fail;
+
+- ublks_added++;
++ if (ub->dev_info.flags & UBLK_F_UNPRIVILEGED_DEV)
++ unprivileged_ublks_added++;
+ return 0;
+ fail:
+ put_device(dev);
+@@ -2404,10 +2409,15 @@ static int ublk_add_tag_set(struct ublk_device *ub)
+
+ static void ublk_remove(struct ublk_device *ub)
+ {
++ bool unprivileged;
++
+ ublk_stop_dev(ub);
+ cdev_device_del(&ub->cdev, &ub->cdev_dev);
++ unprivileged = ub->dev_info.flags & UBLK_F_UNPRIVILEGED_DEV;
+ ublk_put_device(ub);
+- ublks_added--;
++
++ if (unprivileged)
++ unprivileged_ublks_added--;
+ }
+
+ static struct ublk_device *ublk_get_device_from_id(int idx)
+@@ -2669,7 +2679,8 @@ static int ublk_ctrl_add_dev(struct io_uring_cmd *cmd)
+ return ret;
+
+ ret = -EACCES;
+- if (ublks_added >= ublks_max)
++ if ((info.flags & UBLK_F_UNPRIVILEGED_DEV) &&
++ unprivileged_ublks_added >= unprivileged_ublks_max)
+ goto out_unlock;
+
+ ret = -ENOMEM;
+@@ -3192,10 +3203,9 @@ static int ublk_ctrl_uring_cmd(struct io_uring_cmd *cmd,
+ if (ub)
+ ublk_put_device(ub);
+ out:
+- io_uring_cmd_done(cmd, ret, 0, issue_flags);
+ pr_devel("%s: cmd done ret %d cmd_op %x, dev id %d qid %d\n",
+ __func__, ret, cmd->cmd_op, header->dev_id, header->queue_id);
+- return -EIOCBQUEUED;
++ return ret;
+ }
+
+ static const struct file_operations ublk_ctl_fops = {
+@@ -3259,23 +3269,26 @@ static void __exit ublk_exit(void)
+ module_init(ublk_init);
+ module_exit(ublk_exit);
+
+-static int ublk_set_max_ublks(const char *buf, const struct kernel_param *kp)
++static int ublk_set_max_unprivileged_ublks(const char *buf,
++ const struct kernel_param *kp)
+ {
+ return param_set_uint_minmax(buf, kp, 0, UBLK_MAX_UBLKS);
+ }
+
+-static int ublk_get_max_ublks(char *buf, const struct kernel_param *kp)
++static int ublk_get_max_unprivileged_ublks(char *buf,
++ const struct kernel_param *kp)
+ {
+- return sysfs_emit(buf, "%u\n", ublks_max);
++ return sysfs_emit(buf, "%u\n", unprivileged_ublks_max);
+ }
+
+-static const struct kernel_param_ops ublk_max_ublks_ops = {
+- .set = ublk_set_max_ublks,
+- .get = ublk_get_max_ublks,
++static const struct kernel_param_ops ublk_max_unprivileged_ublks_ops = {
++ .set = ublk_set_max_unprivileged_ublks,
++ .get = ublk_get_max_unprivileged_ublks,
+ };
+
+-module_param_cb(ublks_max, &ublk_max_ublks_ops, &ublks_max, 0644);
+-MODULE_PARM_DESC(ublks_max, "max number of ublk devices allowed to add(default: 64)");
++module_param_cb(ublks_max, &ublk_max_unprivileged_ublks_ops,
++ &unprivileged_ublks_max, 0644);
++MODULE_PARM_DESC(ublks_max, "max number of unprivileged ublk devices allowed to add (default: 64)");
+
+ MODULE_AUTHOR("Ming Lei <ming.lei@redhat.com>");
+ MODULE_DESCRIPTION("Userspace block device");
+diff --git a/drivers/bluetooth/btmtksdio.c b/drivers/bluetooth/btmtksdio.c
+index bd5464bde174fa..1d26207b2ba70a 100644
+--- a/drivers/bluetooth/btmtksdio.c
++++ b/drivers/bluetooth/btmtksdio.c
+@@ -610,7 +610,8 @@ static void btmtksdio_txrx_work(struct work_struct *work)
+ } while (int_status || time_is_before_jiffies(txrx_timeout));
+
+ /* Enable interrupt */
+- sdio_writel(bdev->func, C_INT_EN_SET, MTK_REG_CHLPCR, NULL);
++ if (bdev->func->irq_handler)
++ sdio_writel(bdev->func, C_INT_EN_SET, MTK_REG_CHLPCR, NULL);
+
+ sdio_release_host(bdev->func);
+
+@@ -722,6 +723,10 @@ static int btmtksdio_close(struct hci_dev *hdev)
+ {
+ struct btmtksdio_dev *bdev = hci_get_drvdata(hdev);
+
++ /* Skip btmtksdio_close if BTMTKSDIO_FUNC_ENABLED isn't set */
++ if (!test_bit(BTMTKSDIO_FUNC_ENABLED, &bdev->tx_state))
++ return 0;
++
+ sdio_claim_host(bdev->func);
+
+ /* Disable interrupt */
+@@ -1442,11 +1447,15 @@ static void btmtksdio_remove(struct sdio_func *func)
+ if (!bdev)
+ return;
+
++ hdev = bdev->hdev;
++
++ /* Make sure to call btmtksdio_close before removing sdio card */
++ if (test_bit(BTMTKSDIO_FUNC_ENABLED, &bdev->tx_state))
++ btmtksdio_close(hdev);
++
+ /* Be consistent the state in btmtksdio_probe */
+ pm_runtime_get_noresume(bdev->dev);
+
+- hdev = bdev->hdev;
+-
+ sdio_set_drvdata(func, NULL);
+ hci_unregister_dev(hdev);
+ hci_free_dev(hdev);
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index ccd0a21da39554..b15f3ed767c530 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -3014,9 +3014,8 @@ static void btusb_coredump_qca(struct hci_dev *hdev)
+ static int handle_dump_pkt_qca(struct hci_dev *hdev, struct sk_buff *skb)
+ {
+ int ret = 0;
++ unsigned int skip = 0;
+ u8 pkt_type;
+- u8 *sk_ptr;
+- unsigned int sk_len;
+ u16 seqno;
+ u32 dump_size;
+
+@@ -3025,18 +3024,13 @@ static int handle_dump_pkt_qca(struct hci_dev *hdev, struct sk_buff *skb)
+ struct usb_device *udev = btdata->udev;
+
+ pkt_type = hci_skb_pkt_type(skb);
+- sk_ptr = skb->data;
+- sk_len = skb->len;
++ skip = sizeof(struct hci_event_hdr);
++ if (pkt_type == HCI_ACLDATA_PKT)
++ skip += sizeof(struct hci_acl_hdr);
+
+- if (pkt_type == HCI_ACLDATA_PKT) {
+- sk_ptr += HCI_ACL_HDR_SIZE;
+- sk_len -= HCI_ACL_HDR_SIZE;
+- }
+-
+- sk_ptr += HCI_EVENT_HDR_SIZE;
+- sk_len -= HCI_EVENT_HDR_SIZE;
++ skb_pull(skb, skip);
++ dump_hdr = (struct qca_dump_hdr *)skb->data;
+
+- dump_hdr = (struct qca_dump_hdr *)sk_ptr;
+ seqno = le16_to_cpu(dump_hdr->seqno);
+ if (seqno == 0) {
+ set_bit(BTUSB_HW_SSR_ACTIVE, &btdata->flags);
+@@ -3056,16 +3050,15 @@ static int handle_dump_pkt_qca(struct hci_dev *hdev, struct sk_buff *skb)
+
+ btdata->qca_dump.ram_dump_size = dump_size;
+ btdata->qca_dump.ram_dump_seqno = 0;
+- sk_ptr += offsetof(struct qca_dump_hdr, data0);
+- sk_len -= offsetof(struct qca_dump_hdr, data0);
++
++ skb_pull(skb, offsetof(struct qca_dump_hdr, data0));
+
+ usb_disable_autosuspend(udev);
+ bt_dev_info(hdev, "%s memdump size(%u)\n",
+ (pkt_type == HCI_ACLDATA_PKT) ? "ACL" : "event",
+ dump_size);
+ } else {
+- sk_ptr += offsetof(struct qca_dump_hdr, data);
+- sk_len -= offsetof(struct qca_dump_hdr, data);
++ skb_pull(skb, offsetof(struct qca_dump_hdr, data));
+ }
+
+ if (!btdata->qca_dump.ram_dump_size) {
+@@ -3085,7 +3078,6 @@ static int handle_dump_pkt_qca(struct hci_dev *hdev, struct sk_buff *skb)
+ return ret;
+ }
+
+- skb_pull(skb, skb->len - sk_len);
+ hci_devcd_append(hdev, skb);
+ btdata->qca_dump.ram_dump_seqno++;
+ if (seqno == QCA_LAST_SEQUENCE_NUM) {
+@@ -3113,68 +3105,58 @@ static int handle_dump_pkt_qca(struct hci_dev *hdev, struct sk_buff *skb)
+ /* Return: true if the ACL packet is a dump packet, false otherwise. */
+ static bool acl_pkt_is_dump_qca(struct hci_dev *hdev, struct sk_buff *skb)
+ {
+- u8 *sk_ptr;
+- unsigned int sk_len;
+-
+ struct hci_event_hdr *event_hdr;
+ struct hci_acl_hdr *acl_hdr;
+ struct qca_dump_hdr *dump_hdr;
++ struct sk_buff *clone = skb_clone(skb, GFP_ATOMIC);
++ bool is_dump = false;
+
+- sk_ptr = skb->data;
+- sk_len = skb->len;
+-
+- acl_hdr = hci_acl_hdr(skb);
+- if (le16_to_cpu(acl_hdr->handle) != QCA_MEMDUMP_ACL_HANDLE)
++ if (!clone)
+ return false;
+
+- sk_ptr += HCI_ACL_HDR_SIZE;
+- sk_len -= HCI_ACL_HDR_SIZE;
+- event_hdr = (struct hci_event_hdr *)sk_ptr;
+-
+- if ((event_hdr->evt != HCI_VENDOR_PKT) ||
+- (event_hdr->plen != (sk_len - HCI_EVENT_HDR_SIZE)))
+- return false;
++ acl_hdr = skb_pull_data(clone, sizeof(*acl_hdr));
++ if (!acl_hdr || (le16_to_cpu(acl_hdr->handle) != QCA_MEMDUMP_ACL_HANDLE))
++ goto out;
+
+- sk_ptr += HCI_EVENT_HDR_SIZE;
+- sk_len -= HCI_EVENT_HDR_SIZE;
++ event_hdr = skb_pull_data(clone, sizeof(*event_hdr));
++ if (!event_hdr || (event_hdr->evt != HCI_VENDOR_PKT))
++ goto out;
+
+- dump_hdr = (struct qca_dump_hdr *)sk_ptr;
+- if ((sk_len < offsetof(struct qca_dump_hdr, data)) ||
+- (dump_hdr->vse_class != QCA_MEMDUMP_VSE_CLASS) ||
+- (dump_hdr->msg_type != QCA_MEMDUMP_MSG_TYPE))
+- return false;
++ dump_hdr = skb_pull_data(clone, sizeof(*dump_hdr));
++ if (!dump_hdr || (dump_hdr->vse_class != QCA_MEMDUMP_VSE_CLASS) ||
++ (dump_hdr->msg_type != QCA_MEMDUMP_MSG_TYPE))
++ goto out;
+
+- return true;
++ is_dump = true;
++out:
++ consume_skb(clone);
++ return is_dump;
+ }
+
+ /* Return: true if the event packet is a dump packet, false otherwise. */
+ static bool evt_pkt_is_dump_qca(struct hci_dev *hdev, struct sk_buff *skb)
+ {
+- u8 *sk_ptr;
+- unsigned int sk_len;
+-
+ struct hci_event_hdr *event_hdr;
+ struct qca_dump_hdr *dump_hdr;
++ struct sk_buff *clone = skb_clone(skb, GFP_ATOMIC);
++ bool is_dump = false;
+
+- sk_ptr = skb->data;
+- sk_len = skb->len;
+-
+- event_hdr = hci_event_hdr(skb);
+-
+- if ((event_hdr->evt != HCI_VENDOR_PKT)
+- || (event_hdr->plen != (sk_len - HCI_EVENT_HDR_SIZE)))
++ if (!clone)
+ return false;
+
+- sk_ptr += HCI_EVENT_HDR_SIZE;
+- sk_len -= HCI_EVENT_HDR_SIZE;
++ event_hdr = skb_pull_data(clone, sizeof(*event_hdr));
++ if (!event_hdr || (event_hdr->evt != HCI_VENDOR_PKT))
++ goto out;
+
+- dump_hdr = (struct qca_dump_hdr *)sk_ptr;
+- if ((sk_len < offsetof(struct qca_dump_hdr, data)) ||
+- (dump_hdr->vse_class != QCA_MEMDUMP_VSE_CLASS) ||
+- (dump_hdr->msg_type != QCA_MEMDUMP_MSG_TYPE))
+- return false;
++ dump_hdr = skb_pull_data(clone, sizeof(*dump_hdr));
++ if (!dump_hdr || (dump_hdr->vse_class != QCA_MEMDUMP_VSE_CLASS) ||
++ (dump_hdr->msg_type != QCA_MEMDUMP_MSG_TYPE))
++ goto out;
+
+- return true;
++ is_dump = true;
++out:
++ consume_skb(clone);
++ return is_dump;
+ }
+
+ static int btusb_recv_acl_qca(struct hci_dev *hdev, struct sk_buff *skb)
+diff --git a/drivers/char/tpm/tpm2-sessions.c b/drivers/char/tpm/tpm2-sessions.c
+index a894dbc40e43b3..7b5049b3d476ef 100644
+--- a/drivers/char/tpm/tpm2-sessions.c
++++ b/drivers/char/tpm/tpm2-sessions.c
+@@ -974,7 +974,7 @@ int tpm2_start_auth_session(struct tpm_chip *chip)
+ int rc;
+
+ if (chip->auth) {
+- dev_warn_once(&chip->dev, "auth session is active\n");
++ dev_dbg_once(&chip->dev, "auth session is active\n");
+ return 0;
+ }
+
+diff --git a/drivers/clk/clk-s2mps11.c b/drivers/clk/clk-s2mps11.c
+index 014db638662407..8ddf3a9a53dfd5 100644
+--- a/drivers/clk/clk-s2mps11.c
++++ b/drivers/clk/clk-s2mps11.c
+@@ -137,6 +137,8 @@ static int s2mps11_clk_probe(struct platform_device *pdev)
+ if (!clk_data)
+ return -ENOMEM;
+
++ clk_data->num = S2MPS11_CLKS_NUM;
++
+ switch (hwid) {
+ case S2MPS11X:
+ s2mps11_reg = S2MPS11_REG_RTC_CTRL;
+@@ -186,7 +188,6 @@ static int s2mps11_clk_probe(struct platform_device *pdev)
+ clk_data->hws[i] = &s2mps11_clks[i].hw;
+ }
+
+- clk_data->num = S2MPS11_CLKS_NUM;
+ of_clk_add_hw_provider(s2mps11_clks->clk_np, of_clk_hw_onecell_get,
+ clk_data);
+
+diff --git a/drivers/clk/imx/clk-imx8mp.c b/drivers/clk/imx/clk-imx8mp.c
+index fb18f507f12135..fe6dac70f1a15b 100644
+--- a/drivers/clk/imx/clk-imx8mp.c
++++ b/drivers/clk/imx/clk-imx8mp.c
+@@ -8,6 +8,7 @@
+ #include <linux/err.h>
+ #include <linux/io.h>
+ #include <linux/module.h>
++#include <linux/units.h>
+ #include <linux/of_address.h>
+ #include <linux/platform_device.h>
+ #include <linux/slab.h>
+@@ -406,11 +407,151 @@ static const char * const imx8mp_clkout_sels[] = {"audio_pll1_out", "audio_pll2_
+ static struct clk_hw **hws;
+ static struct clk_hw_onecell_data *clk_hw_data;
+
++struct imx8mp_clock_constraints {
++ unsigned int clkid;
++ u32 maxrate;
++};
++
++/*
++ * Below tables are taken from IMX8MPCEC Rev. 2.1, 07/2023
++ * Table 13. Maximum frequency of modules.
++ * Probable typos fixed are marked with a comment.
++ */
++static const struct imx8mp_clock_constraints imx8mp_clock_common_constraints[] = {
++ { IMX8MP_CLK_A53_DIV, 1000 * HZ_PER_MHZ },
++ { IMX8MP_CLK_ENET_AXI, 266666667 }, /* Datasheet claims 266MHz */
++ { IMX8MP_CLK_NAND_USDHC_BUS, 266666667 }, /* Datasheet claims 266MHz */
++ { IMX8MP_CLK_MEDIA_APB, 200 * HZ_PER_MHZ },
++ { IMX8MP_CLK_HDMI_APB, 133333333 }, /* Datasheet claims 133MHz */
++ { IMX8MP_CLK_ML_AXI, 800 * HZ_PER_MHZ },
++ { IMX8MP_CLK_AHB, 133333333 },
++ { IMX8MP_CLK_IPG_ROOT, 66666667 },
++ { IMX8MP_CLK_AUDIO_AHB, 400 * HZ_PER_MHZ },
++ { IMX8MP_CLK_MEDIA_DISP2_PIX, 170 * HZ_PER_MHZ },
++ { IMX8MP_CLK_DRAM_ALT, 666666667 },
++ { IMX8MP_CLK_DRAM_APB, 200 * HZ_PER_MHZ },
++ { IMX8MP_CLK_CAN1, 80 * HZ_PER_MHZ },
++ { IMX8MP_CLK_CAN2, 80 * HZ_PER_MHZ },
++ { IMX8MP_CLK_PCIE_AUX, 10 * HZ_PER_MHZ },
++ { IMX8MP_CLK_I2C5, 66666667 }, /* Datasheet claims 66MHz */
++ { IMX8MP_CLK_I2C6, 66666667 }, /* Datasheet claims 66MHz */
++ { IMX8MP_CLK_SAI1, 66666667 }, /* Datasheet claims 66MHz */
++ { IMX8MP_CLK_SAI2, 66666667 }, /* Datasheet claims 66MHz */
++ { IMX8MP_CLK_SAI3, 66666667 }, /* Datasheet claims 66MHz */
++ { IMX8MP_CLK_SAI5, 66666667 }, /* Datasheet claims 66MHz */
++ { IMX8MP_CLK_SAI6, 66666667 }, /* Datasheet claims 66MHz */
++ { IMX8MP_CLK_ENET_QOS, 125 * HZ_PER_MHZ },
++ { IMX8MP_CLK_ENET_QOS_TIMER, 200 * HZ_PER_MHZ },
++ { IMX8MP_CLK_ENET_REF, 125 * HZ_PER_MHZ },
++ { IMX8MP_CLK_ENET_TIMER, 125 * HZ_PER_MHZ },
++ { IMX8MP_CLK_ENET_PHY_REF, 125 * HZ_PER_MHZ },
++ { IMX8MP_CLK_NAND, 500 * HZ_PER_MHZ },
++ { IMX8MP_CLK_QSPI, 400 * HZ_PER_MHZ },
++ { IMX8MP_CLK_USDHC1, 400 * HZ_PER_MHZ },
++ { IMX8MP_CLK_USDHC2, 400 * HZ_PER_MHZ },
++ { IMX8MP_CLK_I2C1, 66666667 }, /* Datasheet claims 66MHz */
++ { IMX8MP_CLK_I2C2, 66666667 }, /* Datasheet claims 66MHz */
++ { IMX8MP_CLK_I2C3, 66666667 }, /* Datasheet claims 66MHz */
++ { IMX8MP_CLK_I2C4, 66666667 }, /* Datasheet claims 66MHz */
++ { IMX8MP_CLK_UART1, 80 * HZ_PER_MHZ },
++ { IMX8MP_CLK_UART2, 80 * HZ_PER_MHZ },
++ { IMX8MP_CLK_UART3, 80 * HZ_PER_MHZ },
++ { IMX8MP_CLK_UART4, 80 * HZ_PER_MHZ },
++ { IMX8MP_CLK_ECSPI1, 80 * HZ_PER_MHZ },
++ { IMX8MP_CLK_ECSPI2, 80 * HZ_PER_MHZ },
++ { IMX8MP_CLK_PWM1, 66666667 }, /* Datasheet claims 66MHz */
++ { IMX8MP_CLK_PWM2, 66666667 }, /* Datasheet claims 66MHz */
++ { IMX8MP_CLK_PWM3, 66666667 }, /* Datasheet claims 66MHz */
++ { IMX8MP_CLK_PWM4, 66666667 }, /* Datasheet claims 66MHz */
++ { IMX8MP_CLK_GPT1, 100 * HZ_PER_MHZ },
++ { IMX8MP_CLK_GPT2, 100 * HZ_PER_MHZ },
++ { IMX8MP_CLK_GPT3, 100 * HZ_PER_MHZ },
++ { IMX8MP_CLK_GPT4, 100 * HZ_PER_MHZ },
++ { IMX8MP_CLK_GPT5, 100 * HZ_PER_MHZ },
++ { IMX8MP_CLK_GPT6, 100 * HZ_PER_MHZ },
++ { IMX8MP_CLK_WDOG, 66666667 }, /* Datasheet claims 66MHz */
++ { IMX8MP_CLK_IPP_DO_CLKO1, 200 * HZ_PER_MHZ },
++ { IMX8MP_CLK_IPP_DO_CLKO2, 200 * HZ_PER_MHZ },
++ { IMX8MP_CLK_HDMI_REF_266M, 266 * HZ_PER_MHZ },
++ { IMX8MP_CLK_USDHC3, 400 * HZ_PER_MHZ },
++ { IMX8MP_CLK_MEDIA_MIPI_PHY1_REF, 300 * HZ_PER_MHZ },
++ { IMX8MP_CLK_MEDIA_DISP1_PIX, 250 * HZ_PER_MHZ },
++ { IMX8MP_CLK_MEDIA_CAM2_PIX, 277 * HZ_PER_MHZ },
++ { IMX8MP_CLK_MEDIA_LDB, 595 * HZ_PER_MHZ },
++ { IMX8MP_CLK_MEDIA_MIPI_TEST_BYTE, 200 * HZ_PER_MHZ },
++ { IMX8MP_CLK_ECSPI3, 80 * HZ_PER_MHZ },
++ { IMX8MP_CLK_PDM, 200 * HZ_PER_MHZ },
++ { IMX8MP_CLK_SAI7, 66666667 }, /* Datasheet claims 66MHz */
++ { IMX8MP_CLK_MAIN_AXI, 400 * HZ_PER_MHZ },
++ { /* Sentinel */ }
++};
++
++static const struct imx8mp_clock_constraints imx8mp_clock_nominal_constraints[] = {
++ { IMX8MP_CLK_M7_CORE, 600 * HZ_PER_MHZ },
++ { IMX8MP_CLK_ML_CORE, 800 * HZ_PER_MHZ },
++ { IMX8MP_CLK_GPU3D_CORE, 800 * HZ_PER_MHZ },
++ { IMX8MP_CLK_GPU3D_SHADER_CORE, 800 * HZ_PER_MHZ },
++ { IMX8MP_CLK_GPU2D_CORE, 800 * HZ_PER_MHZ },
++ { IMX8MP_CLK_AUDIO_AXI_SRC, 600 * HZ_PER_MHZ },
++ { IMX8MP_CLK_HSIO_AXI, 400 * HZ_PER_MHZ },
++ { IMX8MP_CLK_MEDIA_ISP, 400 * HZ_PER_MHZ },
++ { IMX8MP_CLK_VPU_BUS, 600 * HZ_PER_MHZ },
++ { IMX8MP_CLK_MEDIA_AXI, 400 * HZ_PER_MHZ },
++ { IMX8MP_CLK_HDMI_AXI, 400 * HZ_PER_MHZ },
++ { IMX8MP_CLK_GPU_AXI, 600 * HZ_PER_MHZ },
++ { IMX8MP_CLK_GPU_AHB, 300 * HZ_PER_MHZ },
++ { IMX8MP_CLK_NOC, 800 * HZ_PER_MHZ },
++ { IMX8MP_CLK_NOC_IO, 600 * HZ_PER_MHZ },
++ { IMX8MP_CLK_ML_AHB, 300 * HZ_PER_MHZ },
++ { IMX8MP_CLK_VPU_G1, 600 * HZ_PER_MHZ },
++ { IMX8MP_CLK_VPU_G2, 500 * HZ_PER_MHZ },
++ { IMX8MP_CLK_MEDIA_CAM1_PIX, 400 * HZ_PER_MHZ },
++ { IMX8MP_CLK_VPU_VC8000E, 400 * HZ_PER_MHZ }, /* Datasheet claims 500MHz */
++ { IMX8MP_CLK_DRAM_CORE, 800 * HZ_PER_MHZ },
++ { IMX8MP_CLK_GIC, 400 * HZ_PER_MHZ },
++ { /* Sentinel */ }
++};
++
++static const struct imx8mp_clock_constraints imx8mp_clock_overdrive_constraints[] = {
++ { IMX8MP_CLK_M7_CORE, 800 * HZ_PER_MHZ},
++ { IMX8MP_CLK_ML_CORE, 1000 * HZ_PER_MHZ },
++ { IMX8MP_CLK_GPU3D_CORE, 1000 * HZ_PER_MHZ },
++ { IMX8MP_CLK_GPU3D_SHADER_CORE, 1000 * HZ_PER_MHZ },
++ { IMX8MP_CLK_GPU2D_CORE, 1000 * HZ_PER_MHZ },
++ { IMX8MP_CLK_AUDIO_AXI_SRC, 800 * HZ_PER_MHZ },
++ { IMX8MP_CLK_HSIO_AXI, 500 * HZ_PER_MHZ },
++ { IMX8MP_CLK_MEDIA_ISP, 500 * HZ_PER_MHZ },
++ { IMX8MP_CLK_VPU_BUS, 800 * HZ_PER_MHZ },
++ { IMX8MP_CLK_MEDIA_AXI, 500 * HZ_PER_MHZ },
++ { IMX8MP_CLK_HDMI_AXI, 500 * HZ_PER_MHZ },
++ { IMX8MP_CLK_GPU_AXI, 800 * HZ_PER_MHZ },
++ { IMX8MP_CLK_GPU_AHB, 400 * HZ_PER_MHZ },
++ { IMX8MP_CLK_NOC, 1000 * HZ_PER_MHZ },
++ { IMX8MP_CLK_NOC_IO, 800 * HZ_PER_MHZ },
++ { IMX8MP_CLK_ML_AHB, 400 * HZ_PER_MHZ },
++ { IMX8MP_CLK_VPU_G1, 800 * HZ_PER_MHZ },
++ { IMX8MP_CLK_VPU_G2, 700 * HZ_PER_MHZ },
++ { IMX8MP_CLK_MEDIA_CAM1_PIX, 500 * HZ_PER_MHZ },
++ { IMX8MP_CLK_VPU_VC8000E, 500 * HZ_PER_MHZ }, /* Datasheet claims 400MHz */
++ { IMX8MP_CLK_DRAM_CORE, 1000 * HZ_PER_MHZ },
++ { IMX8MP_CLK_GIC, 500 * HZ_PER_MHZ },
++ { /* Sentinel */ }
++};
++
++static void imx8mp_clocks_apply_constraints(const struct imx8mp_clock_constraints constraints[])
++{
++ const struct imx8mp_clock_constraints *constr;
++
++ for (constr = constraints; constr->clkid; constr++)
++ clk_hw_set_rate_range(hws[constr->clkid], 0, constr->maxrate);
++}
++
+ static int imx8mp_clocks_probe(struct platform_device *pdev)
+ {
+ struct device *dev = &pdev->dev;
+ struct device_node *np;
+ void __iomem *anatop_base, *ccm_base;
++ const char *opmode;
+ int err;
+
+ np = of_find_compatible_node(NULL, NULL, "fsl,imx8mp-anatop");
+@@ -715,6 +856,16 @@ static int imx8mp_clocks_probe(struct platform_device *pdev)
+
+ imx_check_clk_hws(hws, IMX8MP_CLK_END);
+
++ imx8mp_clocks_apply_constraints(imx8mp_clock_common_constraints);
++
++ err = of_property_read_string(np, "fsl,operating-mode", &opmode);
++ if (!err) {
++ if (!strcmp(opmode, "nominal"))
++ imx8mp_clocks_apply_constraints(imx8mp_clock_nominal_constraints);
++ else if (!strcmp(opmode, "overdrive"))
++ imx8mp_clocks_apply_constraints(imx8mp_clock_overdrive_constraints);
++ }
++
+ err = of_clk_add_hw_provider(np, of_clk_hw_onecell_get, clk_hw_data);
+ if (err < 0) {
+ dev_err(dev, "failed to register hws for i.MX8MP\n");
+diff --git a/drivers/clk/qcom/Kconfig b/drivers/clk/qcom/Kconfig
+index 69bbf62ba3cd7b..d470ed007854c5 100644
+--- a/drivers/clk/qcom/Kconfig
++++ b/drivers/clk/qcom/Kconfig
+@@ -217,7 +217,7 @@ config IPQ_GCC_4019
+
+ config IPQ_GCC_5018
+ tristate "IPQ5018 Global Clock Controller"
+- depends on ARM64 || COMPILE_TEST
++ depends on ARM || ARM64 || COMPILE_TEST
+ help
+ Support for global clock controller on ipq5018 devices.
+ Say Y if you want to use peripheral devices such as UART, SPI,
+diff --git a/drivers/clk/qcom/camcc-sm8250.c b/drivers/clk/qcom/camcc-sm8250.c
+index 34d2f17520dcca..450ddbebd35f27 100644
+--- a/drivers/clk/qcom/camcc-sm8250.c
++++ b/drivers/clk/qcom/camcc-sm8250.c
+@@ -411,7 +411,7 @@ static struct clk_rcg2 cam_cc_bps_clk_src = {
+ .parent_data = cam_cc_parent_data_0,
+ .num_parents = ARRAY_SIZE(cam_cc_parent_data_0),
+ .flags = CLK_SET_RATE_PARENT,
+- .ops = &clk_rcg2_ops,
++ .ops = &clk_rcg2_shared_ops,
+ },
+ };
+
+@@ -433,7 +433,7 @@ static struct clk_rcg2 cam_cc_camnoc_axi_clk_src = {
+ .parent_data = cam_cc_parent_data_0,
+ .num_parents = ARRAY_SIZE(cam_cc_parent_data_0),
+ .flags = CLK_SET_RATE_PARENT,
+- .ops = &clk_rcg2_ops,
++ .ops = &clk_rcg2_shared_ops,
+ },
+ };
+
+@@ -454,7 +454,7 @@ static struct clk_rcg2 cam_cc_cci_0_clk_src = {
+ .parent_data = cam_cc_parent_data_0,
+ .num_parents = ARRAY_SIZE(cam_cc_parent_data_0),
+ .flags = CLK_SET_RATE_PARENT,
+- .ops = &clk_rcg2_ops,
++ .ops = &clk_rcg2_shared_ops,
+ },
+ };
+
+@@ -469,7 +469,7 @@ static struct clk_rcg2 cam_cc_cci_1_clk_src = {
+ .parent_data = cam_cc_parent_data_0,
+ .num_parents = ARRAY_SIZE(cam_cc_parent_data_0),
+ .flags = CLK_SET_RATE_PARENT,
+- .ops = &clk_rcg2_ops,
++ .ops = &clk_rcg2_shared_ops,
+ },
+ };
+
+@@ -490,7 +490,7 @@ static struct clk_rcg2 cam_cc_cphy_rx_clk_src = {
+ .parent_data = cam_cc_parent_data_0,
+ .num_parents = ARRAY_SIZE(cam_cc_parent_data_0),
+ .flags = CLK_SET_RATE_PARENT,
+- .ops = &clk_rcg2_ops,
++ .ops = &clk_rcg2_shared_ops,
+ },
+ };
+
+@@ -511,7 +511,7 @@ static struct clk_rcg2 cam_cc_csi0phytimer_clk_src = {
+ .parent_data = cam_cc_parent_data_0,
+ .num_parents = ARRAY_SIZE(cam_cc_parent_data_0),
+ .flags = CLK_SET_RATE_PARENT,
+- .ops = &clk_rcg2_ops,
++ .ops = &clk_rcg2_shared_ops,
+ },
+ };
+
+@@ -526,7 +526,7 @@ static struct clk_rcg2 cam_cc_csi1phytimer_clk_src = {
+ .parent_data = cam_cc_parent_data_0,
+ .num_parents = ARRAY_SIZE(cam_cc_parent_data_0),
+ .flags = CLK_SET_RATE_PARENT,
+- .ops = &clk_rcg2_ops,
++ .ops = &clk_rcg2_shared_ops,
+ },
+ };
+
+@@ -556,7 +556,7 @@ static struct clk_rcg2 cam_cc_csi3phytimer_clk_src = {
+ .parent_data = cam_cc_parent_data_0,
+ .num_parents = ARRAY_SIZE(cam_cc_parent_data_0),
+ .flags = CLK_SET_RATE_PARENT,
+- .ops = &clk_rcg2_ops,
++ .ops = &clk_rcg2_shared_ops,
+ },
+ };
+
+@@ -571,7 +571,7 @@ static struct clk_rcg2 cam_cc_csi4phytimer_clk_src = {
+ .parent_data = cam_cc_parent_data_0,
+ .num_parents = ARRAY_SIZE(cam_cc_parent_data_0),
+ .flags = CLK_SET_RATE_PARENT,
+- .ops = &clk_rcg2_ops,
++ .ops = &clk_rcg2_shared_ops,
+ },
+ };
+
+@@ -586,7 +586,7 @@ static struct clk_rcg2 cam_cc_csi5phytimer_clk_src = {
+ .parent_data = cam_cc_parent_data_0,
+ .num_parents = ARRAY_SIZE(cam_cc_parent_data_0),
+ .flags = CLK_SET_RATE_PARENT,
+- .ops = &clk_rcg2_ops,
++ .ops = &clk_rcg2_shared_ops,
+ },
+ };
+
+@@ -611,7 +611,7 @@ static struct clk_rcg2 cam_cc_fast_ahb_clk_src = {
+ .parent_data = cam_cc_parent_data_0,
+ .num_parents = ARRAY_SIZE(cam_cc_parent_data_0),
+ .flags = CLK_SET_RATE_PARENT,
+- .ops = &clk_rcg2_ops,
++ .ops = &clk_rcg2_shared_ops,
+ },
+ };
+
+@@ -634,7 +634,7 @@ static struct clk_rcg2 cam_cc_fd_core_clk_src = {
+ .parent_data = cam_cc_parent_data_0,
+ .num_parents = ARRAY_SIZE(cam_cc_parent_data_0),
+ .flags = CLK_SET_RATE_PARENT,
+- .ops = &clk_rcg2_ops,
++ .ops = &clk_rcg2_shared_ops,
+ },
+ };
+
+@@ -649,7 +649,7 @@ static struct clk_rcg2 cam_cc_icp_clk_src = {
+ .parent_data = cam_cc_parent_data_0,
+ .num_parents = ARRAY_SIZE(cam_cc_parent_data_0),
+ .flags = CLK_SET_RATE_PARENT,
+- .ops = &clk_rcg2_ops,
++ .ops = &clk_rcg2_shared_ops,
+ },
+ };
+
+@@ -673,7 +673,7 @@ static struct clk_rcg2 cam_cc_ife_0_clk_src = {
+ .parent_data = cam_cc_parent_data_2,
+ .num_parents = ARRAY_SIZE(cam_cc_parent_data_2),
+ .flags = CLK_SET_RATE_PARENT,
+- .ops = &clk_rcg2_ops,
++ .ops = &clk_rcg2_shared_ops,
+ },
+ };
+
+@@ -710,7 +710,7 @@ static struct clk_rcg2 cam_cc_ife_0_csid_clk_src = {
+ .parent_data = cam_cc_parent_data_0,
+ .num_parents = ARRAY_SIZE(cam_cc_parent_data_0),
+ .flags = CLK_SET_RATE_PARENT,
+- .ops = &clk_rcg2_ops,
++ .ops = &clk_rcg2_shared_ops,
+ },
+ };
+
+@@ -734,7 +734,7 @@ static struct clk_rcg2 cam_cc_ife_1_clk_src = {
+ .parent_data = cam_cc_parent_data_3,
+ .num_parents = ARRAY_SIZE(cam_cc_parent_data_3),
+ .flags = CLK_SET_RATE_PARENT,
+- .ops = &clk_rcg2_ops,
++ .ops = &clk_rcg2_shared_ops,
+ },
+ };
+
+@@ -749,7 +749,7 @@ static struct clk_rcg2 cam_cc_ife_1_csid_clk_src = {
+ .parent_data = cam_cc_parent_data_0,
+ .num_parents = ARRAY_SIZE(cam_cc_parent_data_0),
+ .flags = CLK_SET_RATE_PARENT,
+- .ops = &clk_rcg2_ops,
++ .ops = &clk_rcg2_shared_ops,
+ },
+ };
+
+@@ -771,7 +771,7 @@ static struct clk_rcg2 cam_cc_ife_lite_clk_src = {
+ .parent_data = cam_cc_parent_data_0,
+ .num_parents = ARRAY_SIZE(cam_cc_parent_data_0),
+ .flags = CLK_SET_RATE_PARENT,
+- .ops = &clk_rcg2_ops,
++ .ops = &clk_rcg2_shared_ops,
+ },
+ };
+
+@@ -786,7 +786,7 @@ static struct clk_rcg2 cam_cc_ife_lite_csid_clk_src = {
+ .parent_data = cam_cc_parent_data_0,
+ .num_parents = ARRAY_SIZE(cam_cc_parent_data_0),
+ .flags = CLK_SET_RATE_PARENT,
+- .ops = &clk_rcg2_ops,
++ .ops = &clk_rcg2_shared_ops,
+ },
+ };
+
+@@ -810,7 +810,7 @@ static struct clk_rcg2 cam_cc_ipe_0_clk_src = {
+ .parent_data = cam_cc_parent_data_4,
+ .num_parents = ARRAY_SIZE(cam_cc_parent_data_4),
+ .flags = CLK_SET_RATE_PARENT,
+- .ops = &clk_rcg2_ops,
++ .ops = &clk_rcg2_shared_ops,
+ },
+ };
+
+@@ -825,7 +825,7 @@ static struct clk_rcg2 cam_cc_jpeg_clk_src = {
+ .parent_data = cam_cc_parent_data_0,
+ .num_parents = ARRAY_SIZE(cam_cc_parent_data_0),
+ .flags = CLK_SET_RATE_PARENT,
+- .ops = &clk_rcg2_ops,
++ .ops = &clk_rcg2_shared_ops,
+ },
+ };
+
+@@ -847,7 +847,7 @@ static struct clk_rcg2 cam_cc_mclk0_clk_src = {
+ .parent_data = cam_cc_parent_data_1,
+ .num_parents = ARRAY_SIZE(cam_cc_parent_data_1),
+ .flags = CLK_SET_RATE_PARENT,
+- .ops = &clk_rcg2_ops,
++ .ops = &clk_rcg2_shared_ops,
+ },
+ };
+
+@@ -862,7 +862,7 @@ static struct clk_rcg2 cam_cc_mclk1_clk_src = {
+ .parent_data = cam_cc_parent_data_1,
+ .num_parents = ARRAY_SIZE(cam_cc_parent_data_1),
+ .flags = CLK_SET_RATE_PARENT,
+- .ops = &clk_rcg2_ops,
++ .ops = &clk_rcg2_shared_ops,
+ },
+ };
+
+@@ -877,7 +877,7 @@ static struct clk_rcg2 cam_cc_mclk2_clk_src = {
+ .parent_data = cam_cc_parent_data_1,
+ .num_parents = ARRAY_SIZE(cam_cc_parent_data_1),
+ .flags = CLK_SET_RATE_PARENT,
+- .ops = &clk_rcg2_ops,
++ .ops = &clk_rcg2_shared_ops,
+ },
+ };
+
+@@ -892,7 +892,7 @@ static struct clk_rcg2 cam_cc_mclk3_clk_src = {
+ .parent_data = cam_cc_parent_data_1,
+ .num_parents = ARRAY_SIZE(cam_cc_parent_data_1),
+ .flags = CLK_SET_RATE_PARENT,
+- .ops = &clk_rcg2_ops,
++ .ops = &clk_rcg2_shared_ops,
+ },
+ };
+
+@@ -907,7 +907,7 @@ static struct clk_rcg2 cam_cc_mclk4_clk_src = {
+ .parent_data = cam_cc_parent_data_1,
+ .num_parents = ARRAY_SIZE(cam_cc_parent_data_1),
+ .flags = CLK_SET_RATE_PARENT,
+- .ops = &clk_rcg2_ops,
++ .ops = &clk_rcg2_shared_ops,
+ },
+ };
+
+@@ -922,7 +922,7 @@ static struct clk_rcg2 cam_cc_mclk5_clk_src = {
+ .parent_data = cam_cc_parent_data_1,
+ .num_parents = ARRAY_SIZE(cam_cc_parent_data_1),
+ .flags = CLK_SET_RATE_PARENT,
+- .ops = &clk_rcg2_ops,
++ .ops = &clk_rcg2_shared_ops,
+ },
+ };
+
+@@ -993,7 +993,7 @@ static struct clk_rcg2 cam_cc_slow_ahb_clk_src = {
+ .parent_data = cam_cc_parent_data_0,
+ .num_parents = ARRAY_SIZE(cam_cc_parent_data_0),
+ .flags = CLK_SET_RATE_PARENT,
+- .ops = &clk_rcg2_ops,
++ .ops = &clk_rcg2_shared_ops,
+ },
+ };
+
+diff --git a/drivers/clk/qcom/clk-alpha-pll.c b/drivers/clk/qcom/clk-alpha-pll.c
+index 9a65d14acf71c9..cec0afea8e4460 100644
+--- a/drivers/clk/qcom/clk-alpha-pll.c
++++ b/drivers/clk/qcom/clk-alpha-pll.c
+@@ -709,14 +709,19 @@ clk_alpha_pll_recalc_rate(struct clk_hw *hw, unsigned long parent_rate)
+ struct clk_alpha_pll *pll = to_clk_alpha_pll(hw);
+ u32 alpha_width = pll_alpha_width(pll);
+
+- regmap_read(pll->clkr.regmap, PLL_L_VAL(pll), &l);
++ if (regmap_read(pll->clkr.regmap, PLL_L_VAL(pll), &l))
++ return 0;
++
++ if (regmap_read(pll->clkr.regmap, PLL_USER_CTL(pll), &ctl))
++ return 0;
+
+- regmap_read(pll->clkr.regmap, PLL_USER_CTL(pll), &ctl);
+ if (ctl & PLL_ALPHA_EN) {
+- regmap_read(pll->clkr.regmap, PLL_ALPHA_VAL(pll), &low);
++ if (regmap_read(pll->clkr.regmap, PLL_ALPHA_VAL(pll), &low))
++ return 0;
+ if (alpha_width > 32) {
+- regmap_read(pll->clkr.regmap, PLL_ALPHA_VAL_U(pll),
+- &high);
++ if (regmap_read(pll->clkr.regmap, PLL_ALPHA_VAL_U(pll),
++ &high))
++ return 0;
+ a = (u64)high << 32 | low;
+ } else {
+ a = low & GENMASK(alpha_width - 1, 0);
+@@ -942,8 +947,11 @@ alpha_pll_huayra_recalc_rate(struct clk_hw *hw, unsigned long parent_rate)
+ struct clk_alpha_pll *pll = to_clk_alpha_pll(hw);
+ u32 l, alpha = 0, ctl, alpha_m, alpha_n;
+
+- regmap_read(pll->clkr.regmap, PLL_L_VAL(pll), &l);
+- regmap_read(pll->clkr.regmap, PLL_USER_CTL(pll), &ctl);
++ if (regmap_read(pll->clkr.regmap, PLL_L_VAL(pll), &l))
++ return 0;
++
++ if (regmap_read(pll->clkr.regmap, PLL_USER_CTL(pll), &ctl))
++ return 0;
+
+ if (ctl & PLL_ALPHA_EN) {
+ regmap_read(pll->clkr.regmap, PLL_ALPHA_VAL(pll), &alpha);
+@@ -1137,8 +1145,11 @@ clk_trion_pll_recalc_rate(struct clk_hw *hw, unsigned long parent_rate)
+ struct clk_alpha_pll *pll = to_clk_alpha_pll(hw);
+ u32 l, frac, alpha_width = pll_alpha_width(pll);
+
+- regmap_read(pll->clkr.regmap, PLL_L_VAL(pll), &l);
+- regmap_read(pll->clkr.regmap, PLL_ALPHA_VAL(pll), &frac);
++ if (regmap_read(pll->clkr.regmap, PLL_L_VAL(pll), &l))
++ return 0;
++
++ if (regmap_read(pll->clkr.regmap, PLL_ALPHA_VAL(pll), &frac))
++ return 0;
+
+ return alpha_pll_calc_rate(parent_rate, l, frac, alpha_width);
+ }
+@@ -1196,7 +1207,8 @@ clk_alpha_pll_postdiv_recalc_rate(struct clk_hw *hw, unsigned long parent_rate)
+ struct clk_alpha_pll_postdiv *pll = to_clk_alpha_pll_postdiv(hw);
+ u32 ctl;
+
+- regmap_read(pll->clkr.regmap, PLL_USER_CTL(pll), &ctl);
++ if (regmap_read(pll->clkr.regmap, PLL_USER_CTL(pll), &ctl))
++ return 0;
+
+ ctl >>= PLL_POST_DIV_SHIFT;
+ ctl &= PLL_POST_DIV_MASK(pll);
+@@ -1412,8 +1424,11 @@ static unsigned long alpha_pll_fabia_recalc_rate(struct clk_hw *hw,
+ struct clk_alpha_pll *pll = to_clk_alpha_pll(hw);
+ u32 l, frac, alpha_width = pll_alpha_width(pll);
+
+- regmap_read(pll->clkr.regmap, PLL_L_VAL(pll), &l);
+- regmap_read(pll->clkr.regmap, PLL_FRAC(pll), &frac);
++ if (regmap_read(pll->clkr.regmap, PLL_L_VAL(pll), &l))
++ return 0;
++
++ if (regmap_read(pll->clkr.regmap, PLL_FRAC(pll), &frac))
++ return 0;
+
+ return alpha_pll_calc_rate(parent_rate, l, frac, alpha_width);
+ }
+@@ -1563,7 +1578,8 @@ clk_trion_pll_postdiv_recalc_rate(struct clk_hw *hw, unsigned long parent_rate)
+ struct regmap *regmap = pll->clkr.regmap;
+ u32 i, div = 1, val;
+
+- regmap_read(regmap, PLL_USER_CTL(pll), &val);
++ if (regmap_read(regmap, PLL_USER_CTL(pll), &val))
++ return 0;
+
+ val >>= pll->post_div_shift;
+ val &= PLL_POST_DIV_MASK(pll);
+@@ -2484,9 +2500,12 @@ static unsigned long alpha_pll_lucid_evo_recalc_rate(struct clk_hw *hw,
+ struct regmap *regmap = pll->clkr.regmap;
+ u32 l, frac;
+
+- regmap_read(regmap, PLL_L_VAL(pll), &l);
++ if (regmap_read(regmap, PLL_L_VAL(pll), &l))
++ return 0;
+ l &= LUCID_EVO_PLL_L_VAL_MASK;
+- regmap_read(regmap, PLL_ALPHA_VAL(pll), &frac);
++
++ if (regmap_read(regmap, PLL_ALPHA_VAL(pll), &frac))
++ return 0;
+
+ return alpha_pll_calc_rate(parent_rate, l, frac, pll_alpha_width(pll));
+ }
+@@ -2699,7 +2718,8 @@ static unsigned long clk_rivian_evo_pll_recalc_rate(struct clk_hw *hw,
+ struct clk_alpha_pll *pll = to_clk_alpha_pll(hw);
+ u32 l;
+
+- regmap_read(pll->clkr.regmap, PLL_L_VAL(pll), &l);
++ if (regmap_read(pll->clkr.regmap, PLL_L_VAL(pll), &l))
++ return 0;
+
+ return parent_rate * l;
+ }
+diff --git a/drivers/clk/qcom/lpassaudiocc-sc7280.c b/drivers/clk/qcom/lpassaudiocc-sc7280.c
+index 45e7264770866f..22169da08a51a0 100644
+--- a/drivers/clk/qcom/lpassaudiocc-sc7280.c
++++ b/drivers/clk/qcom/lpassaudiocc-sc7280.c
+@@ -1,6 +1,7 @@
+ // SPDX-License-Identifier: GPL-2.0-only
+ /*
+ * Copyright (c) 2021, The Linux Foundation. All rights reserved.
++ * Copyright (c) 2025, Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+ #include <linux/clk-provider.h>
+@@ -713,14 +714,24 @@ static const struct qcom_reset_map lpass_audio_cc_sc7280_resets[] = {
+ [LPASS_AUDIO_SWR_WSA_CGCR] = { 0xb0, 1 },
+ };
+
++static const struct regmap_config lpass_audio_cc_sc7280_reset_regmap_config = {
++ .name = "lpassaudio_cc_reset",
++ .reg_bits = 32,
++ .reg_stride = 4,
++ .val_bits = 32,
++ .fast_io = true,
++ .max_register = 0xc8,
++};
++
+ static const struct qcom_cc_desc lpass_audio_cc_reset_sc7280_desc = {
+- .config = &lpass_audio_cc_sc7280_regmap_config,
++ .config = &lpass_audio_cc_sc7280_reset_regmap_config,
+ .resets = lpass_audio_cc_sc7280_resets,
+ .num_resets = ARRAY_SIZE(lpass_audio_cc_sc7280_resets),
+ };
+
+ static const struct of_device_id lpass_audio_cc_sc7280_match_table[] = {
+- { .compatible = "qcom,sc7280-lpassaudiocc" },
++ { .compatible = "qcom,qcm6490-lpassaudiocc", .data = &lpass_audio_cc_reset_sc7280_desc },
++ { .compatible = "qcom,sc7280-lpassaudiocc", .data = &lpass_audio_cc_sc7280_desc },
+ { }
+ };
+ MODULE_DEVICE_TABLE(of, lpass_audio_cc_sc7280_match_table);
+@@ -752,13 +763,17 @@ static int lpass_audio_cc_sc7280_probe(struct platform_device *pdev)
+ struct regmap *regmap;
+ int ret;
+
++ desc = device_get_match_data(&pdev->dev);
++
++ if (of_device_is_compatible(pdev->dev.of_node, "qcom,qcm6490-lpassaudiocc"))
++ return qcom_cc_probe_by_index(pdev, 1, desc);
++
+ ret = lpass_audio_setup_runtime_pm(pdev);
+ if (ret)
+ return ret;
+
+ lpass_audio_cc_sc7280_regmap_config.name = "lpassaudio_cc";
+ lpass_audio_cc_sc7280_regmap_config.max_register = 0x2f000;
+- desc = &lpass_audio_cc_sc7280_desc;
+
+ regmap = qcom_cc_map(pdev, desc);
+ if (IS_ERR(regmap)) {
+@@ -772,7 +787,7 @@ static int lpass_audio_cc_sc7280_probe(struct platform_device *pdev)
+ regmap_write(regmap, 0x4, 0x3b);
+ regmap_write(regmap, 0x8, 0xff05);
+
+- ret = qcom_cc_really_probe(&pdev->dev, &lpass_audio_cc_sc7280_desc, regmap);
++ ret = qcom_cc_really_probe(&pdev->dev, desc, regmap);
+ if (ret) {
+ dev_err(&pdev->dev, "Failed to register LPASS AUDIO CC clocks\n");
+ goto exit;
+diff --git a/drivers/clk/renesas/rzg2l-cpg.c b/drivers/clk/renesas/rzg2l-cpg.c
+index 4bd8862dc82be8..91928db411dcd1 100644
+--- a/drivers/clk/renesas/rzg2l-cpg.c
++++ b/drivers/clk/renesas/rzg2l-cpg.c
+@@ -1549,28 +1549,6 @@ static int rzg2l_cpg_reset_controller_register(struct rzg2l_cpg_priv *priv)
+ return devm_reset_controller_register(priv->dev, &priv->rcdev);
+ }
+
+-static bool rzg2l_cpg_is_pm_clk(struct rzg2l_cpg_priv *priv,
+- const struct of_phandle_args *clkspec)
+-{
+- const struct rzg2l_cpg_info *info = priv->info;
+- unsigned int id;
+- unsigned int i;
+-
+- if (clkspec->args_count != 2)
+- return false;
+-
+- if (clkspec->args[0] != CPG_MOD)
+- return false;
+-
+- id = clkspec->args[1] + info->num_total_core_clks;
+- for (i = 0; i < info->num_no_pm_mod_clks; i++) {
+- if (info->no_pm_mod_clks[i] == id)
+- return false;
+- }
+-
+- return true;
+-}
+-
+ /**
+ * struct rzg2l_cpg_pm_domains - RZ/G2L PM domains data structure
+ * @onecell_data: cell data
+@@ -1595,45 +1573,73 @@ struct rzg2l_cpg_pd {
+ u16 id;
+ };
+
++static bool rzg2l_cpg_is_pm_clk(struct rzg2l_cpg_pd *pd,
++ const struct of_phandle_args *clkspec)
++{
++ if (clkspec->np != pd->genpd.dev.of_node || clkspec->args_count != 2)
++ return false;
++
++ switch (clkspec->args[0]) {
++ case CPG_MOD: {
++ struct rzg2l_cpg_priv *priv = pd->priv;
++ const struct rzg2l_cpg_info *info = priv->info;
++ unsigned int id = clkspec->args[1];
++
++ if (id >= priv->num_mod_clks)
++ return false;
++
++ id += info->num_total_core_clks;
++
++ for (unsigned int i = 0; i < info->num_no_pm_mod_clks; i++) {
++ if (info->no_pm_mod_clks[i] == id)
++ return false;
++ }
++
++ return true;
++ }
++
++ case CPG_CORE:
++ default:
++ return false;
++ }
++}
++
+ static int rzg2l_cpg_attach_dev(struct generic_pm_domain *domain, struct device *dev)
+ {
+ struct rzg2l_cpg_pd *pd = container_of(domain, struct rzg2l_cpg_pd, genpd);
+- struct rzg2l_cpg_priv *priv = pd->priv;
+ struct device_node *np = dev->of_node;
+ struct of_phandle_args clkspec;
+ bool once = true;
+ struct clk *clk;
++ unsigned int i;
+ int error;
+- int i = 0;
+-
+- while (!of_parse_phandle_with_args(np, "clocks", "#clock-cells", i,
+- &clkspec)) {
+- if (rzg2l_cpg_is_pm_clk(priv, &clkspec)) {
+- if (once) {
+- once = false;
+- error = pm_clk_create(dev);
+- if (error) {
+- of_node_put(clkspec.np);
+- goto err;
+- }
+- }
+- clk = of_clk_get_from_provider(&clkspec);
++
++ for (i = 0; !of_parse_phandle_with_args(np, "clocks", "#clock-cells", i, &clkspec); i++) {
++ if (!rzg2l_cpg_is_pm_clk(pd, &clkspec)) {
+ of_node_put(clkspec.np);
+- if (IS_ERR(clk)) {
+- error = PTR_ERR(clk);
+- goto fail_destroy;
+- }
++ continue;
++ }
+
+- error = pm_clk_add_clk(dev, clk);
++ if (once) {
++ once = false;
++ error = pm_clk_create(dev);
+ if (error) {
+- dev_err(dev, "pm_clk_add_clk failed %d\n",
+- error);
+- goto fail_put;
++ of_node_put(clkspec.np);
++ goto err;
+ }
+- } else {
+- of_node_put(clkspec.np);
+ }
+- i++;
++ clk = of_clk_get_from_provider(&clkspec);
++ of_node_put(clkspec.np);
++ if (IS_ERR(clk)) {
++ error = PTR_ERR(clk);
++ goto fail_destroy;
++ }
++
++ error = pm_clk_add_clk(dev, clk);
++ if (error) {
++ dev_err(dev, "pm_clk_add_clk failed %d\n", error);
++ goto fail_put;
++ }
+ }
+
+ return 0;
+diff --git a/drivers/clk/sunxi-ng/ccu-sun20i-d1.c b/drivers/clk/sunxi-ng/ccu-sun20i-d1.c
+index bb66c906ebbb62..e83d4fd40240fa 100644
+--- a/drivers/clk/sunxi-ng/ccu-sun20i-d1.c
++++ b/drivers/clk/sunxi-ng/ccu-sun20i-d1.c
+@@ -412,19 +412,23 @@ static const struct clk_parent_data mmc0_mmc1_parents[] = {
+ { .hw = &pll_periph0_2x_clk.common.hw },
+ { .hw = &pll_audio1_div2_clk.common.hw },
+ };
+-static SUNXI_CCU_MP_DATA_WITH_MUX_GATE(mmc0_clk, "mmc0", mmc0_mmc1_parents, 0x830,
+- 0, 4, /* M */
+- 8, 2, /* P */
+- 24, 3, /* mux */
+- BIT(31), /* gate */
+- 0);
+-
+-static SUNXI_CCU_MP_DATA_WITH_MUX_GATE(mmc1_clk, "mmc1", mmc0_mmc1_parents, 0x834,
+- 0, 4, /* M */
+- 8, 2, /* P */
+- 24, 3, /* mux */
+- BIT(31), /* gate */
+- 0);
++static SUNXI_CCU_MP_DATA_WITH_MUX_GATE_POSTDIV(mmc0_clk, "mmc0",
++ mmc0_mmc1_parents, 0x830,
++ 0, 4, /* M */
++ 8, 2, /* P */
++ 24, 3, /* mux */
++ BIT(31), /* gate */
++ 2, /* post-div */
++ 0);
++
++static SUNXI_CCU_MP_DATA_WITH_MUX_GATE_POSTDIV(mmc1_clk, "mmc1",
++ mmc0_mmc1_parents, 0x834,
++ 0, 4, /* M */
++ 8, 2, /* P */
++ 24, 3, /* mux */
++ BIT(31), /* gate */
++ 2, /* post-div */
++ 0);
+
+ static const struct clk_parent_data mmc2_parents[] = {
+ { .fw_name = "hosc" },
+@@ -433,12 +437,14 @@ static const struct clk_parent_data mmc2_parents[] = {
+ { .hw = &pll_periph0_800M_clk.common.hw },
+ { .hw = &pll_audio1_div2_clk.common.hw },
+ };
+-static SUNXI_CCU_MP_DATA_WITH_MUX_GATE(mmc2_clk, "mmc2", mmc2_parents, 0x838,
+- 0, 4, /* M */
+- 8, 2, /* P */
+- 24, 3, /* mux */
+- BIT(31), /* gate */
+- 0);
++static SUNXI_CCU_MP_DATA_WITH_MUX_GATE_POSTDIV(mmc2_clk, "mmc2", mmc2_parents,
++ 0x838,
++ 0, 4, /* M */
++ 8, 2, /* P */
++ 24, 3, /* mux */
++ BIT(31), /* gate */
++ 2, /* post-div */
++ 0);
+
+ static SUNXI_CCU_GATE_HWS(bus_mmc0_clk, "bus-mmc0", psi_ahb_hws,
+ 0x84c, BIT(0), 0);
+diff --git a/drivers/clk/sunxi-ng/ccu-sun50i-h616.c b/drivers/clk/sunxi-ng/ccu-sun50i-h616.c
+index 190816c35da9f5..6050cbfa922e2e 100644
+--- a/drivers/clk/sunxi-ng/ccu-sun50i-h616.c
++++ b/drivers/clk/sunxi-ng/ccu-sun50i-h616.c
+@@ -328,10 +328,16 @@ static SUNXI_CCU_M_WITH_MUX_GATE(gpu0_clk, "gpu0", gpu0_parents, 0x670,
+ 24, 1, /* mux */
+ BIT(31), /* gate */
+ CLK_SET_RATE_PARENT);
++
++/*
++ * This clk is needed as a temporary fallback during GPU PLL freq changes.
++ * Set the CLK_IS_CRITICAL flag to prevent it from being disabled.
++ */
++#define SUN50I_H616_GPU_CLK1_REG 0x674
+ static SUNXI_CCU_M_WITH_GATE(gpu1_clk, "gpu1", "pll-periph0-2x", 0x674,
+ 0, 2, /* M */
+ BIT(31),/* gate */
+- 0);
++ CLK_IS_CRITICAL);
+
+ static SUNXI_CCU_GATE(bus_gpu_clk, "bus-gpu", "psi-ahb1-ahb2",
+ 0x67c, BIT(0), 0);
+@@ -1120,6 +1126,19 @@ static struct ccu_pll_nb sun50i_h616_pll_cpu_nb = {
+ .lock = BIT(28),
+ };
+
++static struct ccu_mux_nb sun50i_h616_gpu_nb = {
++ .common = &gpu0_clk.common,
++ .cm = &gpu0_clk.mux,
++ .delay_us = 1, /* manual doesn't really say */
++ .bypass_index = 1, /* GPU_CLK1@400MHz */
++};
++
++static struct ccu_pll_nb sun50i_h616_pll_gpu_nb = {
++ .common = &pll_gpu_clk.common,
++ .enable = BIT(29), /* LOCK_ENABLE */
++ .lock = BIT(28),
++};
++
+ static int sun50i_h616_ccu_probe(struct platform_device *pdev)
+ {
+ void __iomem *reg;
+@@ -1170,6 +1189,14 @@ static int sun50i_h616_ccu_probe(struct platform_device *pdev)
+ val |= BIT(0);
+ writel(val, reg + SUN50I_H616_PLL_AUDIO_REG);
+
++ /*
++ * Set the input-divider for the gpu1 clock to 3, to reach a safe 400 MHz.
++ */
++ val = readl(reg + SUN50I_H616_GPU_CLK1_REG);
++ val &= ~GENMASK(1, 0);
++ val |= 2;
++ writel(val, reg + SUN50I_H616_GPU_CLK1_REG);
++
+ /*
+ * First clock parent (osc32K) is unusable for CEC. But since there
+ * is no good way to force parent switch (both run with same frequency),
+@@ -1190,6 +1217,13 @@ static int sun50i_h616_ccu_probe(struct platform_device *pdev)
+ /* Re-lock the CPU PLL after any rate changes */
+ ccu_pll_notifier_register(&sun50i_h616_pll_cpu_nb);
+
++ /* Reparent GPU during GPU PLL rate changes */
++ ccu_mux_notifier_register(pll_gpu_clk.common.hw.clk,
++ &sun50i_h616_gpu_nb);
++
++ /* Re-lock the GPU PLL after any rate changes */
++ ccu_pll_notifier_register(&sun50i_h616_pll_gpu_nb);
++
+ return 0;
+ }
+
+diff --git a/drivers/clk/sunxi-ng/ccu_mp.h b/drivers/clk/sunxi-ng/ccu_mp.h
+index 6e50f3728fb5f1..7d836a9fb3db34 100644
+--- a/drivers/clk/sunxi-ng/ccu_mp.h
++++ b/drivers/clk/sunxi-ng/ccu_mp.h
+@@ -52,6 +52,28 @@ struct ccu_mp {
+ } \
+ }
+
++#define SUNXI_CCU_MP_DATA_WITH_MUX_GATE_POSTDIV(_struct, _name, _parents, \
++ _reg, \
++ _mshift, _mwidth, \
++ _pshift, _pwidth, \
++ _muxshift, _muxwidth, \
++ _gate, _postdiv, _flags)\
++ struct ccu_mp _struct = { \
++ .enable = _gate, \
++ .m = _SUNXI_CCU_DIV(_mshift, _mwidth), \
++ .p = _SUNXI_CCU_DIV(_pshift, _pwidth), \
++ .mux = _SUNXI_CCU_MUX(_muxshift, _muxwidth), \
++ .fixed_post_div = _postdiv, \
++ .common = { \
++ .reg = _reg, \
++ .features = CCU_FEATURE_FIXED_POSTDIV, \
++ .hw.init = CLK_HW_INIT_PARENTS_DATA(_name, \
++ _parents, \
++ &ccu_mp_ops, \
++ _flags), \
++ } \
++ }
++
+ #define SUNXI_CCU_MP_WITH_MUX_GATE(_struct, _name, _parents, _reg, \
+ _mshift, _mwidth, \
+ _pshift, _pwidth, \
+diff --git a/drivers/clocksource/mips-gic-timer.c b/drivers/clocksource/mips-gic-timer.c
+index 7907b740497a5c..abb685a080a5bb 100644
+--- a/drivers/clocksource/mips-gic-timer.c
++++ b/drivers/clocksource/mips-gic-timer.c
+@@ -115,6 +115,9 @@ static void gic_update_frequency(void *data)
+
+ static int gic_starting_cpu(unsigned int cpu)
+ {
++ /* Ensure the GIC counter is running */
++ clear_gic_config(GIC_CONFIG_COUNTSTOP);
++
+ gic_clockevent_cpu_init(cpu, this_cpu_ptr(&gic_clockevent_device));
+ return 0;
+ }
+@@ -288,9 +291,6 @@ static int __init gic_clocksource_of_init(struct device_node *node)
+ pr_warn("Unable to register clock notifier\n");
+ }
+
+- /* And finally start the counter */
+- clear_gic_config(GIC_CONFIG_COUNTSTOP);
+-
+ /*
+ * It's safe to use the MIPS GIC timer as a sched clock source only if
+ * its ticks are stable, which is true on either the platforms with
+diff --git a/drivers/clocksource/timer-riscv.c b/drivers/clocksource/timer-riscv.c
+index 48ce50c5f5e68e..4d7cf338824a3b 100644
+--- a/drivers/clocksource/timer-riscv.c
++++ b/drivers/clocksource/timer-riscv.c
+@@ -126,7 +126,13 @@ static int riscv_timer_starting_cpu(unsigned int cpu)
+
+ static int riscv_timer_dying_cpu(unsigned int cpu)
+ {
++ /*
++	 * Stop the timer when the CPU is going offline; otherwise the timer
++	 * interrupt may be pending while performing power-down.
++ */
++ riscv_clock_event_stop();
+ disable_percpu_irq(riscv_clock_event_irq);
++
+ return 0;
+ }
+
+diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
+index 1b26845703f68c..a27749d948b464 100644
+--- a/drivers/cpufreq/amd-pstate.c
++++ b/drivers/cpufreq/amd-pstate.c
+@@ -746,7 +746,6 @@ static int amd_pstate_set_boost(struct cpufreq_policy *policy, int state)
+ pr_err("Boost mode is not supported by this processor or SBIOS\n");
+ return -EOPNOTSUPP;
+ }
+- guard(mutex)(&amd_pstate_driver_lock);
+
+ ret = amd_pstate_cpu_boost_update(policy, state);
+ refresh_frequency_limits(policy);
+diff --git a/drivers/cpufreq/cpufreq-dt-platdev.c b/drivers/cpufreq/cpufreq-dt-platdev.c
+index 2aa00769cf09da..a010da0f6337f8 100644
+--- a/drivers/cpufreq/cpufreq-dt-platdev.c
++++ b/drivers/cpufreq/cpufreq-dt-platdev.c
+@@ -175,6 +175,7 @@ static const struct of_device_id blocklist[] __initconst = {
+ { .compatible = "qcom,sm8350", },
+ { .compatible = "qcom,sm8450", },
+ { .compatible = "qcom,sm8550", },
++ { .compatible = "qcom,sm8650", },
+
+ { .compatible = "st,stih407", },
+ { .compatible = "st,stih410", },
+diff --git a/drivers/cpufreq/tegra186-cpufreq.c b/drivers/cpufreq/tegra186-cpufreq.c
+index c7761eb99f3ccc..92aa50f0166601 100644
+--- a/drivers/cpufreq/tegra186-cpufreq.c
++++ b/drivers/cpufreq/tegra186-cpufreq.c
+@@ -73,11 +73,18 @@ static int tegra186_cpufreq_init(struct cpufreq_policy *policy)
+ {
+ struct tegra186_cpufreq_data *data = cpufreq_get_driver_data();
+ unsigned int cluster = data->cpus[policy->cpu].bpmp_cluster_id;
++ u32 cpu;
+
+ policy->freq_table = data->clusters[cluster].table;
+ policy->cpuinfo.transition_latency = 300 * 1000;
+ policy->driver_data = NULL;
+
++ /* set same policy for all cpus in a cluster */
++ for (cpu = 0; cpu < ARRAY_SIZE(tegra186_cpus); cpu++) {
++ if (data->cpus[cpu].bpmp_cluster_id == cluster)
++ cpumask_set_cpu(cpu, policy->cpus);
++ }
++
+ return 0;
+ }
+
+diff --git a/drivers/cpuidle/governors/menu.c b/drivers/cpuidle/governors/menu.c
+index 28363bfa3e4c9f..42b77d820d0fb4 100644
+--- a/drivers/cpuidle/governors/menu.c
++++ b/drivers/cpuidle/governors/menu.c
+@@ -192,8 +192,19 @@ static unsigned int get_typical_interval(struct menu_device *data)
+ * This can deal with workloads that have long pauses interspersed
+ * with sporadic activity with a bunch of short pauses.
+ */
+- if ((divisor * 4) <= INTERVALS * 3)
++ if (divisor * 4 <= INTERVALS * 3) {
++ /*
++ * If there are sufficiently many data points still under
++ * consideration after the outliers have been eliminated,
++ * returning without a prediction would be a mistake because it
++ * is likely that the next interval will not exceed the current
++ * maximum, so return the latter in that case.
++ */
++ if (divisor >= INTERVALS / 2)
++ return max;
++
+ return UINT_MAX;
++ }
+
+ thresh = max - 1;
+ goto again;
+diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptvf_reqmgr.c b/drivers/crypto/marvell/octeontx2/otx2_cptvf_reqmgr.c
+index 5387c68f3c9df1..42624410703729 100644
+--- a/drivers/crypto/marvell/octeontx2/otx2_cptvf_reqmgr.c
++++ b/drivers/crypto/marvell/octeontx2/otx2_cptvf_reqmgr.c
+@@ -264,9 +264,10 @@ static int cpt_process_ccode(struct otx2_cptlfs_info *lfs,
+ break;
+ }
+
+- dev_err(&pdev->dev,
+- "Request failed with software error code 0x%x\n",
+- cpt_status->s.uc_compcode);
++ pr_debug("Request failed with software error code 0x%x: algo = %s driver = %s\n",
++ cpt_status->s.uc_compcode,
++ info->req->areq->tfm->__crt_alg->cra_name,
++ info->req->areq->tfm->__crt_alg->cra_driver_name);
+ otx2_cpt_dump_sg_list(pdev, info->req);
+ break;
+ }
+diff --git a/drivers/crypto/mxs-dcp.c b/drivers/crypto/mxs-dcp.c
+index d94a26c3541a08..133ebc99823626 100644
+--- a/drivers/crypto/mxs-dcp.c
++++ b/drivers/crypto/mxs-dcp.c
+@@ -265,12 +265,12 @@ static int mxs_dcp_run_aes(struct dcp_async_ctx *actx,
+ MXS_DCP_CONTROL0_INTERRUPT |
+ MXS_DCP_CONTROL0_ENABLE_CIPHER;
+
+- if (key_referenced)
+- /* Set OTP key bit to select the key via KEY_SELECT. */
+- desc->control0 |= MXS_DCP_CONTROL0_OTP_KEY;
+- else
++ if (!key_referenced)
+ /* Payload contains the key. */
+ desc->control0 |= MXS_DCP_CONTROL0_PAYLOAD_KEY;
++ else if (actx->key[0] == DCP_PAES_KEY_OTP)
++ /* Set OTP key bit to select the key via KEY_SELECT. */
++ desc->control0 |= MXS_DCP_CONTROL0_OTP_KEY;
+
+ if (rctx->enc)
+ desc->control0 |= MXS_DCP_CONTROL0_CIPHER_ENCRYPT;
+diff --git a/drivers/dma/fsl-edma-main.c b/drivers/dma/fsl-edma-main.c
+index e3e0e88a76d3c3..619403c6e51737 100644
+--- a/drivers/dma/fsl-edma-main.c
++++ b/drivers/dma/fsl-edma-main.c
+@@ -57,7 +57,7 @@ static irqreturn_t fsl_edma3_tx_handler(int irq, void *dev_id)
+
+ intr = edma_readl_chreg(fsl_chan, ch_int);
+ if (!intr)
+- return IRQ_HANDLED;
++ return IRQ_NONE;
+
+ edma_writel_chreg(fsl_chan, 1, ch_int);
+
+diff --git a/drivers/dma/idxd/cdev.c b/drivers/dma/idxd/cdev.c
+index ff94ee892339d5..cd57067e821802 100644
+--- a/drivers/dma/idxd/cdev.c
++++ b/drivers/dma/idxd/cdev.c
+@@ -407,6 +407,9 @@ static int idxd_cdev_mmap(struct file *filp, struct vm_area_struct *vma)
+ if (!idxd->user_submission_safe && !capable(CAP_SYS_RAWIO))
+ return -EPERM;
+
++ if (current->mm != ctx->mm)
++ return -EPERM;
++
+ rc = check_vma(wq, vma, __func__);
+ if (rc < 0)
+ return rc;
+@@ -473,6 +476,9 @@ static ssize_t idxd_cdev_write(struct file *filp, const char __user *buf, size_t
+ ssize_t written = 0;
+ int i;
+
++ if (current->mm != ctx->mm)
++ return -EPERM;
++
+ for (i = 0; i < len/sizeof(struct dsa_hw_desc); i++) {
+ int rc = idxd_submit_user_descriptor(ctx, udesc + i);
+
+@@ -493,6 +499,9 @@ static __poll_t idxd_cdev_poll(struct file *filp,
+ struct idxd_device *idxd = wq->idxd;
+ __poll_t out = 0;
+
++ if (current->mm != ctx->mm)
++ return POLLNVAL;
++
+ poll_wait(filp, &wq->err_queue, wait);
+ spin_lock(&idxd->dev_lock);
+ if (idxd->sw_err.valid)
+diff --git a/drivers/dma/ti/k3-udma-glue.c b/drivers/dma/ti/k3-udma-glue.c
+index 7c224c3ab7a071..f87d244cc2d671 100644
+--- a/drivers/dma/ti/k3-udma-glue.c
++++ b/drivers/dma/ti/k3-udma-glue.c
+@@ -84,6 +84,7 @@ struct k3_udma_glue_rx_channel {
+ struct k3_udma_glue_rx_flow *flows;
+ u32 flow_num;
+ u32 flows_ready;
++ bool single_fdq; /* one FDQ for all flows */
+ };
+
+ static void k3_udma_chan_dev_release(struct device *dev)
+@@ -970,10 +971,13 @@ k3_udma_glue_request_rx_chn_priv(struct device *dev, const char *name,
+
+ ep_cfg = rx_chn->common.ep_config;
+
+- if (xudma_is_pktdma(rx_chn->common.udmax))
++ if (xudma_is_pktdma(rx_chn->common.udmax)) {
+ rx_chn->udma_rchan_id = ep_cfg->mapped_channel_id;
+- else
++ rx_chn->single_fdq = false;
++ } else {
+ rx_chn->udma_rchan_id = -1;
++ rx_chn->single_fdq = true;
++ }
+
+ /* request and cfg UDMAP RX channel */
+ rx_chn->udma_rchanx = xudma_rchan_get(rx_chn->common.udmax,
+@@ -1103,6 +1107,9 @@ k3_udma_glue_request_remote_rx_chn_common(struct k3_udma_glue_rx_channel *rx_chn
+ rx_chn->common.chan_dev.dma_coherent = true;
+ dma_coerce_mask_and_coherent(&rx_chn->common.chan_dev,
+ DMA_BIT_MASK(48));
++ rx_chn->single_fdq = false;
++ } else {
++ rx_chn->single_fdq = true;
+ }
+
+ ret = k3_udma_glue_allocate_rx_flows(rx_chn, cfg);
+@@ -1453,7 +1460,7 @@ EXPORT_SYMBOL_GPL(k3_udma_glue_tdown_rx_chn);
+
+ void k3_udma_glue_reset_rx_chn(struct k3_udma_glue_rx_channel *rx_chn,
+ u32 flow_num, void *data,
+- void (*cleanup)(void *data, dma_addr_t desc_dma), bool skip_fdq)
++ void (*cleanup)(void *data, dma_addr_t desc_dma))
+ {
+ struct k3_udma_glue_rx_flow *flow = &rx_chn->flows[flow_num];
+ struct device *dev = rx_chn->common.dev;
+@@ -1465,7 +1472,7 @@ void k3_udma_glue_reset_rx_chn(struct k3_udma_glue_rx_channel *rx_chn,
+ dev_dbg(dev, "RX reset flow %u occ_rx %u\n", flow_num, occ_rx);
+
+ /* Skip RX FDQ in case one FDQ is used for the set of flows */
+- if (skip_fdq)
++ if (rx_chn->single_fdq && flow_num)
+ goto do_reset;
+
+ /*
+diff --git a/drivers/dpll/dpll_core.c b/drivers/dpll/dpll_core.c
+index 1877201d1aa9fe..20bdc52f63a503 100644
+--- a/drivers/dpll/dpll_core.c
++++ b/drivers/dpll/dpll_core.c
+@@ -443,8 +443,11 @@ static void dpll_pin_prop_free(struct dpll_pin_properties *prop)
+ static int dpll_pin_prop_dup(const struct dpll_pin_properties *src,
+ struct dpll_pin_properties *dst)
+ {
++ if (WARN_ON(src->freq_supported && !src->freq_supported_num))
++ return -EINVAL;
++
+ memcpy(dst, src, sizeof(*dst));
+- if (src->freq_supported && src->freq_supported_num) {
++ if (src->freq_supported) {
+ size_t freq_size = src->freq_supported_num *
+ sizeof(*src->freq_supported);
+ dst->freq_supported = kmemdup(src->freq_supported,
+diff --git a/drivers/edac/ie31200_edac.c b/drivers/edac/ie31200_edac.c
+index 9b02a6b43ab581..a8dd55ec52cea1 100644
+--- a/drivers/edac/ie31200_edac.c
++++ b/drivers/edac/ie31200_edac.c
+@@ -408,10 +408,9 @@ static int ie31200_probe1(struct pci_dev *pdev, int dev_idx)
+ int i, j, ret;
+ struct mem_ctl_info *mci = NULL;
+ struct edac_mc_layer layers[2];
+- struct dimm_data dimm_info[IE31200_CHANNELS][IE31200_DIMMS_PER_CHANNEL];
+ void __iomem *window;
+ struct ie31200_priv *priv;
+- u32 addr_decode, mad_offset;
++ u32 addr_decode[IE31200_CHANNELS], mad_offset;
+
+ /*
+ * Kaby Lake, Coffee Lake seem to work like Skylake. Please re-visit
+@@ -469,19 +468,10 @@ static int ie31200_probe1(struct pci_dev *pdev, int dev_idx)
+ mad_offset = IE31200_MAD_DIMM_0_OFFSET;
+ }
+
+- /* populate DIMM info */
+ for (i = 0; i < IE31200_CHANNELS; i++) {
+- addr_decode = readl(window + mad_offset +
++ addr_decode[i] = readl(window + mad_offset +
+ (i * 4));
+- edac_dbg(0, "addr_decode: 0x%x\n", addr_decode);
+- for (j = 0; j < IE31200_DIMMS_PER_CHANNEL; j++) {
+- populate_dimm_info(&dimm_info[i][j], addr_decode, j,
+- skl);
+- edac_dbg(0, "size: 0x%x, rank: %d, width: %d\n",
+- dimm_info[i][j].size,
+- dimm_info[i][j].dual_rank,
+- dimm_info[i][j].x16_width);
+- }
++ edac_dbg(0, "addr_decode: 0x%x\n", addr_decode[i]);
+ }
+
+ /*
+@@ -492,14 +482,22 @@ static int ie31200_probe1(struct pci_dev *pdev, int dev_idx)
+ */
+ for (i = 0; i < IE31200_DIMMS_PER_CHANNEL; i++) {
+ for (j = 0; j < IE31200_CHANNELS; j++) {
++ struct dimm_data dimm_info;
+ struct dimm_info *dimm;
+ unsigned long nr_pages;
+
+- nr_pages = IE31200_PAGES(dimm_info[j][i].size, skl);
++ populate_dimm_info(&dimm_info, addr_decode[j], i,
++ skl);
++ edac_dbg(0, "size: 0x%x, rank: %d, width: %d\n",
++ dimm_info.size,
++ dimm_info.dual_rank,
++ dimm_info.x16_width);
++
++ nr_pages = IE31200_PAGES(dimm_info.size, skl);
+ if (nr_pages == 0)
+ continue;
+
+- if (dimm_info[j][i].dual_rank) {
++ if (dimm_info.dual_rank) {
+ nr_pages = nr_pages / 2;
+ dimm = edac_get_dimm(mci, (i * 2) + 1, j, 0);
+ dimm->nr_pages = nr_pages;
+diff --git a/drivers/firmware/arm_ffa/bus.c b/drivers/firmware/arm_ffa/bus.c
+index fa09a82b44921e..eacde92802f583 100644
+--- a/drivers/firmware/arm_ffa/bus.c
++++ b/drivers/firmware/arm_ffa/bus.c
+@@ -213,6 +213,7 @@ ffa_device_register(const struct ffa_partition_info *part_info,
+ dev = &ffa_dev->dev;
+ dev->bus = &ffa_bus_type;
+ dev->release = ffa_release_device;
++ dev->dma_mask = &dev->coherent_dma_mask;
+ dev_set_name(&ffa_dev->dev, "arm-ffa-%d", id);
+
+ ffa_dev->id = id;
+diff --git a/drivers/firmware/arm_ffa/driver.c b/drivers/firmware/arm_ffa/driver.c
+index 03d22cbb2ad470..d545d251320517 100644
+--- a/drivers/firmware/arm_ffa/driver.c
++++ b/drivers/firmware/arm_ffa/driver.c
+@@ -150,6 +150,14 @@ static int ffa_version_check(u32 *version)
+ return -EOPNOTSUPP;
+ }
+
++ if (FFA_MAJOR_VERSION(ver.a0) > FFA_MAJOR_VERSION(FFA_DRIVER_VERSION)) {
++ pr_err("Incompatible v%d.%d! Latest supported v%d.%d\n",
++ FFA_MAJOR_VERSION(ver.a0), FFA_MINOR_VERSION(ver.a0),
++ FFA_MAJOR_VERSION(FFA_DRIVER_VERSION),
++ FFA_MINOR_VERSION(FFA_DRIVER_VERSION));
++ return -EINVAL;
++ }
++
+ if (ver.a0 < FFA_MIN_VERSION) {
+ pr_err("Incompatible v%d.%d! Earliest supported v%d.%d\n",
+ FFA_MAJOR_VERSION(ver.a0), FFA_MINOR_VERSION(ver.a0),
+@@ -1450,6 +1458,10 @@ static int ffa_setup_partitions(void)
+
+ kfree(pbuf);
+
++ /* Check if the host is already added as part of partition info */
++ if (xa_load(&drv_info->partition_info, drv_info->vm_id))
++ return 0;
++
+ /* Allocate for the host */
+ ret = ffa_xa_add_partition_info(drv_info->vm_id);
+ if (ret)
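For illustration (not part of the patch): the FF-A hunk above adds an upper bound to the version handshake, rejecting firmware whose major version is newer than the driver's in addition to the existing minimum-version check. A standalone sketch of that rule is below; the 16.16 major/minor packing and the helper names are assumptions for the example (the driver itself uses the macros from the FF-A headers).

/* Illustration only: accept fw versions in [min_supported, driver's major]. */
#include <stdint.h>
#include <stdio.h>

#define MAJOR(v)		((uint16_t)((v) >> 16))
#define MAKE_VERSION(maj, min)	(((uint32_t)(maj) << 16) | (uint16_t)(min))

static int version_compatible(uint32_t fw, uint32_t drv, uint32_t min_supported)
{
	if (MAJOR(fw) > MAJOR(drv))
		return 0;	/* firmware is newer than the driver understands */
	if (fw < min_supported)
		return 0;	/* firmware is older than the earliest supported */
	return 1;
}

int main(void)
{
	uint32_t drv = MAKE_VERSION(1, 1), min = MAKE_VERSION(1, 0);

	printf("%d\n", version_compatible(MAKE_VERSION(1, 0), drv, min)); /* 1 */
	printf("%d\n", version_compatible(MAKE_VERSION(2, 0), drv, min)); /* 0 */
	return 0;
}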
+diff --git a/drivers/firmware/arm_scmi/bus.c b/drivers/firmware/arm_scmi/bus.c
+index 7d7af2262c0139..5d799f6626963a 100644
+--- a/drivers/firmware/arm_scmi/bus.c
++++ b/drivers/firmware/arm_scmi/bus.c
+@@ -42,7 +42,7 @@ static atomic_t scmi_syspower_registered = ATOMIC_INIT(0);
+ * This helper let an SCMI driver request specific devices identified by the
+ * @id_table to be created for each active SCMI instance.
+ *
+- * The requested device name MUST NOT be already existent for any protocol;
++ * The requested device name MUST NOT be already existent for this protocol;
+ * at first the freshly requested @id_table is annotated in the IDR table
+ * @scmi_requested_devices and then the requested device is advertised to any
+ * registered party via the @scmi_requested_devices_nh notification chain.
+@@ -52,7 +52,6 @@ static atomic_t scmi_syspower_registered = ATOMIC_INIT(0);
+ static int scmi_protocol_device_request(const struct scmi_device_id *id_table)
+ {
+ int ret = 0;
+- unsigned int id = 0;
+ struct list_head *head, *phead = NULL;
+ struct scmi_requested_dev *rdev;
+
+@@ -67,19 +66,13 @@ static int scmi_protocol_device_request(const struct scmi_device_id *id_table)
+ }
+
+ /*
+- * Search for the matching protocol rdev list and then search
+- * of any existent equally named device...fails if any duplicate found.
++ * Find the matching protocol rdev list and then search it for any
++ * existing device with the same name...fails if any duplicate is found.
+ */
+ mutex_lock(&scmi_requested_devices_mtx);
+- idr_for_each_entry(&scmi_requested_devices, head, id) {
+- if (!phead) {
+- /* A list found registered in the IDR is never empty */
+- rdev = list_first_entry(head, struct scmi_requested_dev,
+- node);
+- if (rdev->id_table->protocol_id ==
+- id_table->protocol_id)
+- phead = head;
+- }
++ phead = idr_find(&scmi_requested_devices, id_table->protocol_id);
++ if (phead) {
++ head = phead;
+ list_for_each_entry(rdev, head, node) {
+ if (!strcmp(rdev->id_table->name, id_table->name)) {
+ pr_err("Ignoring duplicate request [%d] %s\n",
+diff --git a/drivers/firmware/xilinx/zynqmp.c b/drivers/firmware/xilinx/zynqmp.c
+index 720fa8b5d8e95d..7356e860e65ce2 100644
+--- a/drivers/firmware/xilinx/zynqmp.c
++++ b/drivers/firmware/xilinx/zynqmp.c
+@@ -1139,17 +1139,13 @@ EXPORT_SYMBOL_GPL(zynqmp_pm_fpga_get_status);
+ int zynqmp_pm_fpga_get_config_status(u32 *value)
+ {
+ u32 ret_payload[PAYLOAD_ARG_CNT];
+- u32 buf, lower_addr, upper_addr;
+ int ret;
+
+ if (!value)
+ return -EINVAL;
+
+- lower_addr = lower_32_bits((u64)&buf);
+- upper_addr = upper_32_bits((u64)&buf);
+-
+ ret = zynqmp_pm_invoke_fn(PM_FPGA_READ, ret_payload, 4,
+- XILINX_ZYNQMP_PM_FPGA_CONFIG_STAT_OFFSET, lower_addr, upper_addr,
++ XILINX_ZYNQMP_PM_FPGA_CONFIG_STAT_OFFSET, 0, 0,
+ XILINX_ZYNQMP_PM_FPGA_READ_CONFIG_REG);
+
+ *value = ret_payload[1];
+diff --git a/drivers/fpga/altera-cvp.c b/drivers/fpga/altera-cvp.c
+index 6b091443244530..5af0bd33890c0b 100644
+--- a/drivers/fpga/altera-cvp.c
++++ b/drivers/fpga/altera-cvp.c
+@@ -52,7 +52,7 @@
+ /* V2 Defines */
+ #define VSE_CVP_TX_CREDITS 0x49 /* 8bit */
+
+-#define V2_CREDIT_TIMEOUT_US 20000
++#define V2_CREDIT_TIMEOUT_US 40000
+ #define V2_CHECK_CREDIT_US 10
+ #define V2_POLL_TIMEOUT_US 1000000
+ #define V2_USER_TIMEOUT_US 500000
+diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
+index 0c00ed2ab43150..960ca0ad45fc87 100644
+--- a/drivers/gpio/gpiolib.c
++++ b/drivers/gpio/gpiolib.c
+@@ -2577,6 +2577,9 @@ int gpio_do_set_config(struct gpio_desc *desc, unsigned long config)
+ return -ENOTSUPP;
+
+ ret = guard.gc->set_config(guard.gc, gpio_chip_hwgpio(desc), config);
++ if (ret > 0)
++ ret = -EBADE;
++
+ #ifdef CONFIG_GPIO_CDEV
+ /*
+ * Special case - if we're setting debounce period, we need to store
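For illustration (not part of the patch): the gpiolib hunk above turns a positive return from a driver's set_config() callback into -EBADE, so callers only ever see zero or a negative errno. A tiny plain-C sketch of that normalization; the function names are made up and EBADE is a Linux-specific errno.

/* Illustration only: normalize positive callback returns to a fixed errno. */
#include <errno.h>
#include <stdio.h>

static int misbehaving_set_config(unsigned int offset, unsigned long config)
{
	(void)offset;
	(void)config;
	return 5;		/* driver breaks the 0-or-negative contract */
}

static int do_set_config(unsigned int offset, unsigned long config)
{
	int ret = misbehaving_set_config(offset, config);

	if (ret > 0)
		ret = -EBADE;	/* invalid exchange */
	return ret;
}

int main(void)
{
	printf("%d\n", do_set_config(0, 0));	/* prints a negative value */
	return 0;
}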
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+index 416d2611fbf1c6..90f688b3d9d369 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+@@ -1187,9 +1187,15 @@ struct amdgpu_device {
+ bool debug_enable_ras_aca;
+ bool debug_exp_resets;
+
+- bool enforce_isolation[MAX_XCP];
+- /* Added this mutex for cleaner shader isolation between GFX and compute processes */
++ /* Protection for the following isolation structure */
+ struct mutex enforce_isolation_mutex;
++ bool enforce_isolation[MAX_XCP];
++ struct amdgpu_isolation {
++ void *owner;
++ struct dma_fence *spearhead;
++ struct amdgpu_sync active;
++ struct amdgpu_sync prev;
++ } isolation[MAX_XCP];
+
+ struct amdgpu_init_level *init_lvl;
+ };
+@@ -1470,6 +1476,9 @@ void amdgpu_device_pcie_port_wreg(struct amdgpu_device *adev,
+ struct dma_fence *amdgpu_device_get_gang(struct amdgpu_device *adev);
+ struct dma_fence *amdgpu_device_switch_gang(struct amdgpu_device *adev,
+ struct dma_fence *gang);
++struct dma_fence *amdgpu_device_enforce_isolation(struct amdgpu_device *adev,
++ struct amdgpu_ring *ring,
++ struct amdgpu_job *job);
+ bool amdgpu_device_has_display_hardware(struct amdgpu_device *adev);
+ ssize_t amdgpu_get_soft_full_reset_mask(struct amdgpu_ring *ring);
+ ssize_t amdgpu_show_reset_mask(char *buf, uint32_t supported_reset);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
+index 8af67f18500a74..55d5399676951e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
+@@ -47,6 +47,7 @@ enum TLB_FLUSH_TYPE {
+ };
+
+ struct amdgpu_device;
++struct kfd_process_device;
+ struct amdgpu_reset_context;
+
+ enum kfd_mem_attachment_type {
+@@ -192,7 +193,7 @@ int kfd_debugfs_kfd_mem_limits(struct seq_file *m, void *data);
+ #if IS_ENABLED(CONFIG_HSA_AMD)
+ bool amdkfd_fence_check_mm(struct dma_fence *f, struct mm_struct *mm);
+ struct amdgpu_amdkfd_fence *to_amdgpu_amdkfd_fence(struct dma_fence *f);
+-int amdgpu_amdkfd_remove_fence_on_pt_pd_bos(struct amdgpu_bo *bo);
++void amdgpu_amdkfd_remove_all_eviction_fences(struct amdgpu_bo *bo);
+ int amdgpu_amdkfd_evict_userptr(struct mmu_interval_notifier *mni,
+ unsigned long cur_seq, struct kgd_mem *mem);
+ int amdgpu_amdkfd_bo_validate_and_fence(struct amdgpu_bo *bo,
+@@ -212,9 +213,8 @@ struct amdgpu_amdkfd_fence *to_amdgpu_amdkfd_fence(struct dma_fence *f)
+ }
+
+ static inline
+-int amdgpu_amdkfd_remove_fence_on_pt_pd_bos(struct amdgpu_bo *bo)
++void amdgpu_amdkfd_remove_all_eviction_fences(struct amdgpu_bo *bo)
+ {
+- return 0;
+ }
+
+ static inline
+@@ -299,14 +299,10 @@ bool amdgpu_amdkfd_compute_active(struct amdgpu_device *adev, uint32_t node_id);
+ (&((struct amdgpu_fpriv *) \
+ ((struct drm_file *)(drm_priv))->driver_priv)->vm)
+
+-int amdgpu_amdkfd_gpuvm_set_vm_pasid(struct amdgpu_device *adev,
+- struct amdgpu_vm *avm, u32 pasid);
+ int amdgpu_amdkfd_gpuvm_acquire_process_vm(struct amdgpu_device *adev,
+ struct amdgpu_vm *avm,
+ void **process_info,
+ struct dma_fence **ef);
+-void amdgpu_amdkfd_gpuvm_release_process_vm(struct amdgpu_device *adev,
+- void *drm_priv);
+ uint64_t amdgpu_amdkfd_gpuvm_get_process_page_dir(void *drm_priv);
+ size_t amdgpu_amdkfd_get_available_memory(struct amdgpu_device *adev,
+ uint8_t xcp_id);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
+index 1e998f972c308b..b3c8eae4604251 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
+@@ -370,40 +370,32 @@ static int amdgpu_amdkfd_remove_eviction_fence(struct amdgpu_bo *bo,
+ return 0;
+ }
+
+-int amdgpu_amdkfd_remove_fence_on_pt_pd_bos(struct amdgpu_bo *bo)
++/**
++ * amdgpu_amdkfd_remove_all_eviction_fences - Remove all eviction fences
++ * @bo: the BO to remove the eviction fences from.
++ *
++ * This function should only be used on release, when all references to the BO
++ * are already dropped. We remove the eviction fence from the private copy of
++ * the dma_resv object here since that is what is used during release to
++ * determine whether the BO is idle or not.
++ */
++void amdgpu_amdkfd_remove_all_eviction_fences(struct amdgpu_bo *bo)
+ {
+- struct amdgpu_bo *root = bo;
+- struct amdgpu_vm_bo_base *vm_bo;
+- struct amdgpu_vm *vm;
+- struct amdkfd_process_info *info;
+- struct amdgpu_amdkfd_fence *ef;
+- int ret;
++ struct dma_resv *resv = &bo->tbo.base._resv;
++ struct dma_fence *fence, *stub;
++ struct dma_resv_iter cursor;
+
+- /* we can always get vm_bo from root PD bo.*/
+- while (root->parent)
+- root = root->parent;
++ dma_resv_assert_held(resv);
+
+- vm_bo = root->vm_bo;
+- if (!vm_bo)
+- return 0;
+-
+- vm = vm_bo->vm;
+- if (!vm)
+- return 0;
+-
+- info = vm->process_info;
+- if (!info || !info->eviction_fence)
+- return 0;
+-
+- ef = container_of(dma_fence_get(&info->eviction_fence->base),
+- struct amdgpu_amdkfd_fence, base);
+-
+- BUG_ON(!dma_resv_trylock(bo->tbo.base.resv));
+- ret = amdgpu_amdkfd_remove_eviction_fence(bo, ef);
+- dma_resv_unlock(bo->tbo.base.resv);
++ stub = dma_fence_get_stub();
++ dma_resv_for_each_fence(&cursor, resv, DMA_RESV_USAGE_BOOKKEEP, fence) {
++ if (!to_amdgpu_amdkfd_fence(fence))
++ continue;
+
+- dma_fence_put(&ef->base);
+- return ret;
++ dma_resv_replace_fences(resv, fence->context, stub,
++ DMA_RESV_USAGE_BOOKKEEP);
++ }
++ dma_fence_put(stub);
+ }
+
+ static int amdgpu_amdkfd_bo_validate(struct amdgpu_bo *bo, uint32_t domain,
+@@ -499,7 +491,7 @@ static int vm_update_pds(struct amdgpu_vm *vm, struct amdgpu_sync *sync)
+ if (ret)
+ return ret;
+
+- return amdgpu_sync_fence(sync, vm->last_update);
++ return amdgpu_sync_fence(sync, vm->last_update, GFP_KERNEL);
+ }
+
+ static uint64_t get_pte_flags(struct amdgpu_device *adev, struct kgd_mem *mem)
+@@ -1263,7 +1255,7 @@ static int unmap_bo_from_gpuvm(struct kgd_mem *mem,
+
+ (void)amdgpu_vm_clear_freed(adev, vm, &bo_va->last_pt_update);
+
+- (void)amdgpu_sync_fence(sync, bo_va->last_pt_update);
++ (void)amdgpu_sync_fence(sync, bo_va->last_pt_update, GFP_KERNEL);
+
+ return 0;
+ }
+@@ -1287,7 +1279,7 @@ static int update_gpuvm_pte(struct kgd_mem *mem,
+ return ret;
+ }
+
+- return amdgpu_sync_fence(sync, bo_va->last_pt_update);
++ return amdgpu_sync_fence(sync, bo_va->last_pt_update, GFP_KERNEL);
+ }
+
+ static int map_bo_to_gpuvm(struct kgd_mem *mem,
+@@ -1529,27 +1521,6 @@ static void amdgpu_amdkfd_gpuvm_unpin_bo(struct amdgpu_bo *bo)
+ amdgpu_bo_unreserve(bo);
+ }
+
+-int amdgpu_amdkfd_gpuvm_set_vm_pasid(struct amdgpu_device *adev,
+- struct amdgpu_vm *avm, u32 pasid)
+-
+-{
+- int ret;
+-
+- /* Free the original amdgpu allocated pasid,
+- * will be replaced with kfd allocated pasid.
+- */
+- if (avm->pasid) {
+- amdgpu_pasid_free(avm->pasid);
+- amdgpu_vm_set_pasid(adev, avm, 0);
+- }
+-
+- ret = amdgpu_vm_set_pasid(adev, avm, pasid);
+- if (ret)
+- return ret;
+-
+- return 0;
+-}
+-
+ int amdgpu_amdkfd_gpuvm_acquire_process_vm(struct amdgpu_device *adev,
+ struct amdgpu_vm *avm,
+ void **process_info,
+@@ -1607,27 +1578,6 @@ void amdgpu_amdkfd_gpuvm_destroy_cb(struct amdgpu_device *adev,
+ }
+ }
+
+-void amdgpu_amdkfd_gpuvm_release_process_vm(struct amdgpu_device *adev,
+- void *drm_priv)
+-{
+- struct amdgpu_vm *avm;
+-
+- if (WARN_ON(!adev || !drm_priv))
+- return;
+-
+- avm = drm_priv_to_vm(drm_priv);
+-
+- pr_debug("Releasing process vm %p\n", avm);
+-
+- /* The original pasid of amdgpu vm has already been
+- * released during making a amdgpu vm to a compute vm
+- * The current pasid is managed by kfd and will be
+- * released on kfd process destroy. Set amdgpu pasid
+- * to 0 to avoid duplicate release.
+- */
+- amdgpu_vm_release_compute(adev, avm);
+-}
+-
+ uint64_t amdgpu_amdkfd_gpuvm_get_process_page_dir(void *drm_priv)
+ {
+ struct amdgpu_vm *avm = drm_priv_to_vm(drm_priv);
+@@ -2969,7 +2919,7 @@ int amdgpu_amdkfd_gpuvm_restore_process_bos(void *info, struct dma_fence __rcu *
+ }
+ dma_resv_for_each_fence(&cursor, bo->tbo.base.resv,
+ DMA_RESV_USAGE_KERNEL, fence) {
+- ret = amdgpu_sync_fence(&sync_obj, fence);
++ ret = amdgpu_sync_fence(&sync_obj, fence, GFP_KERNEL);
+ if (ret) {
+ pr_debug("Memory eviction: Sync BO fence failed. Try again\n");
+ goto validate_map_fail;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+index 5cc5f59e30184f..4a5b406601fa20 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+@@ -428,7 +428,7 @@ static int amdgpu_cs_p2_dependencies(struct amdgpu_cs_parser *p,
+ dma_fence_put(old);
+ }
+
+- r = amdgpu_sync_fence(&p->sync, fence);
++ r = amdgpu_sync_fence(&p->sync, fence, GFP_KERNEL);
+ dma_fence_put(fence);
+ if (r)
+ return r;
+@@ -450,7 +450,7 @@ static int amdgpu_syncobj_lookup_and_add(struct amdgpu_cs_parser *p,
+ return r;
+ }
+
+- r = amdgpu_sync_fence(&p->sync, fence);
++ r = amdgpu_sync_fence(&p->sync, fence, GFP_KERNEL);
+ dma_fence_put(fence);
+ return r;
+ }
+@@ -1124,7 +1124,8 @@ static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p)
+ if (r)
+ return r;
+
+- r = amdgpu_sync_fence(&p->sync, fpriv->prt_va->last_pt_update);
++ r = amdgpu_sync_fence(&p->sync, fpriv->prt_va->last_pt_update,
++ GFP_KERNEL);
+ if (r)
+ return r;
+
+@@ -1135,7 +1136,8 @@ static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p)
+ if (r)
+ return r;
+
+- r = amdgpu_sync_fence(&p->sync, bo_va->last_pt_update);
++ r = amdgpu_sync_fence(&p->sync, bo_va->last_pt_update,
++ GFP_KERNEL);
+ if (r)
+ return r;
+ }
+@@ -1154,7 +1156,8 @@ static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p)
+ if (r)
+ return r;
+
+- r = amdgpu_sync_fence(&p->sync, bo_va->last_pt_update);
++ r = amdgpu_sync_fence(&p->sync, bo_va->last_pt_update,
++ GFP_KERNEL);
+ if (r)
+ return r;
+ }
+@@ -1167,7 +1170,7 @@ static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p)
+ if (r)
+ return r;
+
+- r = amdgpu_sync_fence(&p->sync, vm->last_update);
++ r = amdgpu_sync_fence(&p->sync, vm->last_update, GFP_KERNEL);
+ if (r)
+ return r;
+
+@@ -1248,7 +1251,8 @@ static int amdgpu_cs_sync_rings(struct amdgpu_cs_parser *p)
+ continue;
+ }
+
+- r = amdgpu_sync_fence(&p->gang_leader->explicit_sync, fence);
++ r = amdgpu_sync_fence(&p->gang_leader->explicit_sync, fence,
++ GFP_KERNEL);
+ dma_fence_put(fence);
+ if (r)
+ return r;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index 34f0451b274c8a..28190b0ac5fcaf 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -224,6 +224,24 @@ static ssize_t amdgpu_device_get_pcie_replay_count(struct device *dev,
+ static DEVICE_ATTR(pcie_replay_count, 0444,
+ amdgpu_device_get_pcie_replay_count, NULL);
+
++static int amdgpu_device_attr_sysfs_init(struct amdgpu_device *adev)
++{
++ int ret = 0;
++
++ if (!amdgpu_sriov_vf(adev))
++ ret = sysfs_create_file(&adev->dev->kobj,
++ &dev_attr_pcie_replay_count.attr);
++
++ return ret;
++}
++
++static void amdgpu_device_attr_sysfs_fini(struct amdgpu_device *adev)
++{
++ if (!amdgpu_sriov_vf(adev))
++ sysfs_remove_file(&adev->dev->kobj,
++ &dev_attr_pcie_replay_count.attr);
++}
++
+ static ssize_t amdgpu_sysfs_reg_state_get(struct file *f, struct kobject *kobj,
+ struct bin_attribute *attr, char *buf,
+ loff_t ppos, size_t count)
+@@ -4123,11 +4141,6 @@ static bool amdgpu_device_check_iommu_remap(struct amdgpu_device *adev)
+ }
+ #endif
+
+-static const struct attribute *amdgpu_dev_attributes[] = {
+- &dev_attr_pcie_replay_count.attr,
+- NULL
+-};
+-
+ static void amdgpu_device_set_mcbp(struct amdgpu_device *adev)
+ {
+ if (amdgpu_mcbp == 1)
+@@ -4232,6 +4245,11 @@ int amdgpu_device_init(struct amdgpu_device *adev,
+ mutex_init(&adev->gfx.reset_sem_mutex);
+ /* Initialize the mutex for cleaner shader isolation between GFX and compute processes */
+ mutex_init(&adev->enforce_isolation_mutex);
++ for (i = 0; i < MAX_XCP; ++i) {
++ adev->isolation[i].spearhead = dma_fence_get_stub();
++ amdgpu_sync_create(&adev->isolation[i].active);
++ amdgpu_sync_create(&adev->isolation[i].prev);
++ }
+ mutex_init(&adev->gfx.kfd_sch_mutex);
+
+ amdgpu_device_init_apu_flags(adev);
+@@ -4352,10 +4370,17 @@ int amdgpu_device_init(struct amdgpu_device *adev,
+ if (r)
+ return r;
+
+- /* Get rid of things like offb */
+- r = aperture_remove_conflicting_pci_devices(adev->pdev, amdgpu_kms_driver.name);
+- if (r)
+- return r;
++ /*
++ * No need to remove conflicting FBs for non-display class devices.
++ * This prevents the sysfb from being freed accidentally.
++ */
++ if ((pdev->class >> 8) == PCI_CLASS_DISPLAY_VGA ||
++ (pdev->class >> 8) == PCI_CLASS_DISPLAY_OTHER) {
++ /* Get rid of things like offb */
++ r = aperture_remove_conflicting_pci_devices(adev->pdev, amdgpu_kms_driver.name);
++ if (r)
++ return r;
++ }
+
+ /* Enable TMZ based on IP_VERSION */
+ amdgpu_gmc_tmz_set(adev);
+@@ -4567,7 +4592,7 @@ int amdgpu_device_init(struct amdgpu_device *adev,
+ } else
+ adev->ucode_sysfs_en = true;
+
+- r = sysfs_create_files(&adev->dev->kobj, amdgpu_dev_attributes);
++ r = amdgpu_device_attr_sysfs_init(adev);
+ if (r)
+ dev_err(adev->dev, "Could not create amdgpu device attr\n");
+
+@@ -4704,7 +4729,7 @@ void amdgpu_device_fini_hw(struct amdgpu_device *adev)
+ amdgpu_pm_sysfs_fini(adev);
+ if (adev->ucode_sysfs_en)
+ amdgpu_ucode_sysfs_fini(adev);
+- sysfs_remove_files(&adev->dev->kobj, amdgpu_dev_attributes);
++ amdgpu_device_attr_sysfs_fini(adev);
+ amdgpu_fru_sysfs_fini(adev);
+
+ amdgpu_reg_state_sysfs_fini(adev);
+@@ -4731,7 +4756,7 @@ void amdgpu_device_fini_hw(struct amdgpu_device *adev)
+
+ void amdgpu_device_fini_sw(struct amdgpu_device *adev)
+ {
+- int idx;
++ int i, idx;
+ bool px;
+
+ amdgpu_device_ip_fini(adev);
+@@ -4739,6 +4764,11 @@ void amdgpu_device_fini_sw(struct amdgpu_device *adev)
+ amdgpu_ucode_release(&adev->firmware.gpu_info_fw);
+ adev->accel_working = false;
+ dma_fence_put(rcu_dereference_protected(adev->gang_submit, true));
++ for (i = 0; i < MAX_XCP; ++i) {
++ dma_fence_put(adev->isolation[i].spearhead);
++ amdgpu_sync_free(&adev->isolation[i].active);
++ amdgpu_sync_free(&adev->isolation[i].prev);
++ }
+
+ amdgpu_reset_fini(adev);
+
+@@ -4755,6 +4785,9 @@ void amdgpu_device_fini_sw(struct amdgpu_device *adev)
+ kfree(adev->fru_info);
+ adev->fru_info = NULL;
+
++ kfree(adev->xcp_mgr);
++ adev->xcp_mgr = NULL;
++
+ px = amdgpu_device_supports_px(adev_to_drm(adev));
+
+ if (px || (!dev_is_removable(&adev->pdev->dev) &&
+@@ -6860,6 +6893,92 @@ struct dma_fence *amdgpu_device_switch_gang(struct amdgpu_device *adev,
+ return NULL;
+ }
+
++/**
++ * amdgpu_device_enforce_isolation - enforce HW isolation
++ * @adev: the amdgpu device pointer
++ * @ring: the HW ring the job is supposed to run on
++ * @job: the job which is about to be pushed to the HW ring
++ *
++ * Makes sure that only one client at a time can use the GFX block.
++ * Returns: The dependency to wait on before the job can be pushed to the HW.
++ * The function is called multiple times until NULL is returned.
++ */
++struct dma_fence *amdgpu_device_enforce_isolation(struct amdgpu_device *adev,
++ struct amdgpu_ring *ring,
++ struct amdgpu_job *job)
++{
++ struct amdgpu_isolation *isolation = &adev->isolation[ring->xcp_id];
++ struct drm_sched_fence *f = job->base.s_fence;
++ struct dma_fence *dep;
++ void *owner;
++ int r;
++
++ /*
++ * For now enforce isolation only for the GFX block since we only need
++ * the cleaner shader on those rings.
++ */
++ if (ring->funcs->type != AMDGPU_RING_TYPE_GFX &&
++ ring->funcs->type != AMDGPU_RING_TYPE_COMPUTE)
++ return NULL;
++
++ /*
++ * All submissions where enforce isolation is false are handled as if
++ * All submissions where enforce isolation is false are handled as if
++ * they come from a single client. Use ~0l as the owner to distinguish them
++ * from kernel submissions where the owner is NULL.
++ owner = job->enforce_isolation ? f->owner : (void *)~0l;
++
++ mutex_lock(&adev->enforce_isolation_mutex);
++
++ /*
++ * The "spearhead" submission is the first one which changes the
++ * ownership to its client. We always need to wait for it to be
++ * pushed to the HW before proceeding with anything.
++ */
++ if (&f->scheduled != isolation->spearhead &&
++ !dma_fence_is_signaled(isolation->spearhead)) {
++ dep = isolation->spearhead;
++ goto out_grab_ref;
++ }
++
++ if (isolation->owner != owner) {
++
++ /*
++ * Wait for any gang to be assembled before switching to a
++ * different owner or otherwise we could deadlock the
++ * submissions.
++ */
++ if (!job->gang_submit) {
++ dep = amdgpu_device_get_gang(adev);
++ if (!dma_fence_is_signaled(dep))
++ goto out_return_dep;
++ dma_fence_put(dep);
++ }
++
++ dma_fence_put(isolation->spearhead);
++ isolation->spearhead = dma_fence_get(&f->scheduled);
++ amdgpu_sync_move(&isolation->active, &isolation->prev);
++ isolation->owner = owner;
++ }
++
++ /*
++ * Specifying the ring here helps to pipeline submissions even when
++ * isolation is enabled. If that is not desired for testing NULL can be
++ * used instead of the ring to enforce a CPU round trip while switching
++ * between clients.
++ */
++ dep = amdgpu_sync_peek_fence(&isolation->prev, ring);
++ r = amdgpu_sync_fence(&isolation->active, &f->finished, GFP_NOWAIT);
++ if (r)
++ DRM_WARN("OOM tracking isolation\n");
++
++out_grab_ref:
++ dma_fence_get(dep);
++out_return_dep:
++ mutex_unlock(&adev->enforce_isolation_mutex);
++ return dep;
++}
++
+ bool amdgpu_device_has_display_hardware(struct amdgpu_device *adev)
+ {
+ switch (adev->asic_type) {
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
+index 949d74eff29465..6a6dc15273dc7b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
+@@ -113,8 +113,7 @@
+ #include "amdgpu_isp.h"
+ #endif
+
+-#define FIRMWARE_IP_DISCOVERY "amdgpu/ip_discovery.bin"
+-MODULE_FIRMWARE(FIRMWARE_IP_DISCOVERY);
++MODULE_FIRMWARE("amdgpu/ip_discovery.bin");
+
+ #define mmIP_DISCOVERY_VERSION 0x16A00
+ #define mmRCC_CONFIG_MEMSIZE 0xde3
+@@ -297,21 +296,13 @@ static int amdgpu_discovery_read_binary_from_mem(struct amdgpu_device *adev,
+ return ret;
+ }
+
+-static int amdgpu_discovery_read_binary_from_file(struct amdgpu_device *adev, uint8_t *binary)
++static int amdgpu_discovery_read_binary_from_file(struct amdgpu_device *adev,
++ uint8_t *binary,
++ const char *fw_name)
+ {
+ const struct firmware *fw;
+- const char *fw_name;
+ int r;
+
+- switch (amdgpu_discovery) {
+- case 2:
+- fw_name = FIRMWARE_IP_DISCOVERY;
+- break;
+- default:
+- dev_warn(adev->dev, "amdgpu_discovery is not set properly\n");
+- return -EINVAL;
+- }
+-
+ r = request_firmware(&fw, fw_name, adev->dev);
+ if (r) {
+ dev_err(adev->dev, "can't load firmware \"%s\"\n",
+@@ -404,10 +395,19 @@ static int amdgpu_discovery_verify_npsinfo(struct amdgpu_device *adev,
+ return 0;
+ }
+
++static const char *amdgpu_discovery_get_fw_name(struct amdgpu_device *adev)
++{
++ if (amdgpu_discovery == 2)
++ return "amdgpu/ip_discovery.bin";
++
++ return NULL;
++}
++
+ static int amdgpu_discovery_init(struct amdgpu_device *adev)
+ {
+ struct table_info *info;
+ struct binary_header *bhdr;
++ const char *fw_name;
+ uint16_t offset;
+ uint16_t size;
+ uint16_t checksum;
+@@ -419,9 +419,10 @@ static int amdgpu_discovery_init(struct amdgpu_device *adev)
+ return -ENOMEM;
+
+ /* Read from file if it is the preferred option */
+- if (amdgpu_discovery == 2) {
++ fw_name = amdgpu_discovery_get_fw_name(adev);
++ if (fw_name != NULL) {
+ dev_info(adev->dev, "use ip discovery information from file");
+- r = amdgpu_discovery_read_binary_from_file(adev, adev->mman.discovery_bin);
++ r = amdgpu_discovery_read_binary_from_file(adev, adev->mman.discovery_bin, fw_name);
+
+ if (r) {
+ dev_err(adev->dev, "failed to read ip discovery binary from file\n");
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
+index c9842a0e2a1cd4..cb043296f9aecf 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
+@@ -43,6 +43,29 @@
+ #include <linux/dma-fence-array.h>
+ #include <linux/pci-p2pdma.h>
+
++static const struct dma_buf_attach_ops amdgpu_dma_buf_attach_ops;
++
++/**
++ * dma_buf_attach_adev - Helper to get adev of an attachment
++ *
++ * @attach: attachment
++ *
++ * Returns:
++ * A struct amdgpu_device * if the attaching device is an amdgpu device or
++ * partition, NULL otherwise.
++ */
++static struct amdgpu_device *dma_buf_attach_adev(struct dma_buf_attachment *attach)
++{
++ if (attach->importer_ops == &amdgpu_dma_buf_attach_ops) {
++ struct drm_gem_object *obj = attach->importer_priv;
++ struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
++
++ return amdgpu_ttm_adev(bo->tbo.bdev);
++ }
++
++ return NULL;
++}
++
+ /**
+ * amdgpu_dma_buf_attach - &dma_buf_ops.attach implementation
+ *
+@@ -54,11 +77,13 @@
+ static int amdgpu_dma_buf_attach(struct dma_buf *dmabuf,
+ struct dma_buf_attachment *attach)
+ {
++ struct amdgpu_device *attach_adev = dma_buf_attach_adev(attach);
+ struct drm_gem_object *obj = dmabuf->priv;
+ struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
+ struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
+
+- if (pci_p2pdma_distance(adev->pdev, attach->dev, false) < 0)
++ if (!amdgpu_dmabuf_is_xgmi_accessible(attach_adev, bo) &&
++ pci_p2pdma_distance(adev->pdev, attach->dev, false) < 0)
+ attach->peer2peer = false;
+
+ amdgpu_vm_bo_update_shared(bo);
+@@ -459,6 +484,9 @@ bool amdgpu_dmabuf_is_xgmi_accessible(struct amdgpu_device *adev,
+ struct drm_gem_object *obj = &bo->tbo.base;
+ struct drm_gem_object *gobj;
+
++ if (!adev)
++ return false;
++
+ if (obj->import_attach) {
+ struct dma_buf *dma_buf = obj->import_attach->dmabuf;
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+index bb8ab25ea76ad6..e4ce33e69a48b3 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+@@ -173,6 +173,7 @@ uint amdgpu_sdma_phase_quantum = 32;
+ char *amdgpu_disable_cu;
+ char *amdgpu_virtual_display;
+ bool enforce_isolation;
++int amdgpu_modeset = -1;
+
+ /* Specifies the default granularity for SVM, used in buffer
+ * migration and restoration of backing memory when handling
+@@ -1033,6 +1034,13 @@ module_param_named(user_partt_mode, amdgpu_user_partt_mode, uint, 0444);
+ module_param(enforce_isolation, bool, 0444);
+ MODULE_PARM_DESC(enforce_isolation, "enforce process isolation between graphics and compute . enforce_isolation = on");
+
++/**
++ * DOC: modeset (int)
++ * Override nomodeset (1 = override, -1 = auto). The default is -1 (auto).
++ */
++MODULE_PARM_DESC(modeset, "Override nomodeset (1 = enable, -1 = auto)");
++module_param_named(modeset, amdgpu_modeset, int, 0444);
++
+ /**
+ * DOC: seamless (int)
+ * Seamless boot will keep the image on the screen during the boot process.
+@@ -2244,6 +2252,12 @@ static int amdgpu_pci_probe(struct pci_dev *pdev,
+ int ret, retry = 0, i;
+ bool supports_atomic = false;
+
++ if ((pdev->class >> 8) == PCI_CLASS_DISPLAY_VGA ||
++ (pdev->class >> 8) == PCI_CLASS_DISPLAY_OTHER) {
++ if (drm_firmware_drivers_only() && amdgpu_modeset == -1)
++ return -EINVAL;
++ }
++
+ /* skip devices which are owned by radeon */
+ for (i = 0; i < ARRAY_SIZE(amdgpu_unsupported_pciidlist); i++) {
+ if (amdgpu_unsupported_pciidlist[i] == pdev->device)
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
+index 8e712a11aba5d2..92ab821afc06ae 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
+@@ -209,7 +209,7 @@ static int amdgpu_vmid_grab_idle(struct amdgpu_ring *ring,
+ return 0;
+ }
+
+- fences = kmalloc_array(id_mgr->num_ids, sizeof(void *), GFP_KERNEL);
++ fences = kmalloc_array(id_mgr->num_ids, sizeof(void *), GFP_NOWAIT);
+ if (!fences)
+ return -ENOMEM;
+
+@@ -287,46 +287,34 @@ static int amdgpu_vmid_grab_reserved(struct amdgpu_vm *vm,
+ (*id)->flushed_updates < updates ||
+ !(*id)->last_flush ||
+ ((*id)->last_flush->context != fence_context &&
+- !dma_fence_is_signaled((*id)->last_flush))) {
++ !dma_fence_is_signaled((*id)->last_flush)))
++ needs_flush = true;
++
++ if ((*id)->owner != vm->immediate.fence_context ||
++ (!adev->vm_manager.concurrent_flush && needs_flush)) {
+ struct dma_fence *tmp;
+
+- /* Wait for the gang to be assembled before using a
+- * reserved VMID or otherwise the gang could deadlock.
++ /* Don't use per engine and per process VMID at the
++ * same time
+ */
+- tmp = amdgpu_device_get_gang(adev);
+- if (!dma_fence_is_signaled(tmp) && tmp != job->gang_submit) {
++ if (adev->vm_manager.concurrent_flush)
++ ring = NULL;
++
++ /* to prevent one context starved by another context */
++ (*id)->pd_gpu_addr = 0;
++ tmp = amdgpu_sync_peek_fence(&(*id)->active, ring);
++ if (tmp) {
+ *id = NULL;
+- *fence = tmp;
++ *fence = dma_fence_get(tmp);
+ return 0;
+ }
+- dma_fence_put(tmp);
+-
+- /* Make sure the id is owned by the gang before proceeding */
+- if (!job->gang_submit ||
+- (*id)->owner != vm->immediate.fence_context) {
+-
+- /* Don't use per engine and per process VMID at the
+- * same time
+- */
+- if (adev->vm_manager.concurrent_flush)
+- ring = NULL;
+-
+- /* to prevent one context starved by another context */
+- (*id)->pd_gpu_addr = 0;
+- tmp = amdgpu_sync_peek_fence(&(*id)->active, ring);
+- if (tmp) {
+- *id = NULL;
+- *fence = dma_fence_get(tmp);
+- return 0;
+- }
+- }
+- needs_flush = true;
+ }
+
+ /* Good we can use this VMID. Remember this submission as
+ * user of the VMID.
+ */
+- r = amdgpu_sync_fence(&(*id)->active, &job->base.s_fence->finished);
++ r = amdgpu_sync_fence(&(*id)->active, &job->base.s_fence->finished,
++ GFP_NOWAIT);
+ if (r)
+ return r;
+
+@@ -385,7 +373,8 @@ static int amdgpu_vmid_grab_used(struct amdgpu_vm *vm,
+ * user of the VMID.
+ */
+ r = amdgpu_sync_fence(&(*id)->active,
+- &job->base.s_fence->finished);
++ &job->base.s_fence->finished,
++ GFP_NOWAIT);
+ if (r)
+ return r;
+
+@@ -437,7 +426,8 @@ int amdgpu_vmid_grab(struct amdgpu_vm *vm, struct amdgpu_ring *ring,
+
+ /* Remember this submission as user of the VMID */
+ r = amdgpu_sync_fence(&id->active,
+- &job->base.s_fence->finished);
++ &job->base.s_fence->finished,
++ GFP_NOWAIT);
+ if (r)
+ goto error;
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ih.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ih.h
+index 7d4395a5d8ac9f..b0a88f92cd821d 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ih.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ih.h
+@@ -78,6 +78,9 @@ struct amdgpu_ih_ring {
+ #define amdgpu_ih_ts_after(t1, t2) \
+ (((int64_t)((t2) << 16) - (int64_t)((t1) << 16)) > 0LL)
+
++#define amdgpu_ih_ts_after_or_equal(t1, t2) \
++ (((int64_t)((t2) << 16) - (int64_t)((t1) << 16)) >= 0LL)
++
+ /* provided by the ih block */
+ struct amdgpu_ih_funcs {
+ /* ring read/write ptr handling, called from interrupt context */
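For illustration (not part of the patch): amdgpu_ih_ts_after_or_equal() above orders 48-bit IH timestamps that can wrap around. Shifting both values into the top 48 bits of a 64-bit word and taking a signed difference makes the comparison wrap-safe. A standalone check of that idea in plain C, with an illustrative helper name:

/* Illustration only: wrap-safe ordering of 48-bit counters. */
#include <assert.h>
#include <stdint.h>

static int ts_after_or_equal(uint64_t t1, uint64_t t2)
{
	/* t1/t2 hold 48-bit timestamps; shift into the top of 64 bits so the
	 * signed difference handles wrap-around.
	 */
	return ((int64_t)(t2 << 16) - (int64_t)(t1 << 16)) >= 0;
}

int main(void)
{
	assert(ts_after_or_equal(100, 100));		/* equal */
	assert(ts_after_or_equal(100, 101));		/* later */
	assert(!ts_after_or_equal(101, 100));		/* earlier */
	/* t2 wrapped past the 48-bit limit but is still "after" t1 */
	assert(ts_after_or_equal(0xFFFFFFFFFFFFULL, 0x1ULL));
	return 0;
}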
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+index 100f044759435e..685c61a05af857 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+@@ -342,17 +342,24 @@ amdgpu_job_prepare_job(struct drm_sched_job *sched_job,
+ {
+ struct amdgpu_ring *ring = to_amdgpu_ring(s_entity->rq->sched);
+ struct amdgpu_job *job = to_amdgpu_job(sched_job);
+- struct dma_fence *fence = NULL;
++ struct dma_fence *fence;
+ int r;
+
+ r = drm_sched_entity_error(s_entity);
+ if (r)
+ goto error;
+
+- if (job->gang_submit)
++ if (job->gang_submit) {
+ fence = amdgpu_device_switch_gang(ring->adev, job->gang_submit);
++ if (fence)
++ return fence;
++ }
++
++ fence = amdgpu_device_enforce_isolation(ring->adev, ring, job);
++ if (fence)
++ return fence;
+
+- if (!fence && job->vm && !job->vmid) {
++ if (job->vm && !job->vmid) {
+ r = amdgpu_vmid_grab(job->vm, ring, job, &fence);
+ if (r) {
+ dev_err(ring->adev->dev, "Error getting VM ID (%d)\n", r);
+@@ -365,9 +372,10 @@ amdgpu_job_prepare_job(struct drm_sched_job *sched_job,
+ */
+ if (!fence)
+ job->vm = NULL;
++ return fence;
+ }
+
+- return fence;
++ return NULL;
+
+ error:
+ dma_fence_set_error(&job->base.s_fence->finished, r);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c
+index 6fa20980a0b15e..e4251d0691c9cb 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c
+@@ -1335,14 +1335,14 @@ int amdgpu_mes_ctx_map_meta_data(struct amdgpu_device *adev,
+ DRM_ERROR("failed to do vm_bo_update on meta data\n");
+ goto error_del_bo_va;
+ }
+- amdgpu_sync_fence(&sync, bo_va->last_pt_update);
++ amdgpu_sync_fence(&sync, bo_va->last_pt_update, GFP_KERNEL);
+
+ r = amdgpu_vm_update_pdes(adev, vm, false);
+ if (r) {
+ DRM_ERROR("failed to update pdes on meta data\n");
+ goto error_del_bo_va;
+ }
+- amdgpu_sync_fence(&sync, vm->last_update);
++ amdgpu_sync_fence(&sync, vm->last_update, GFP_KERNEL);
+
+ amdgpu_sync_wait(&sync, false);
+ drm_exec_fini(&exec);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+index 00752e3f9d8ab2..0b9987781f7622 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+@@ -1295,28 +1295,36 @@ void amdgpu_bo_release_notify(struct ttm_buffer_object *bo)
+ if (abo->kfd_bo)
+ amdgpu_amdkfd_release_notify(abo);
+
+- /* We only remove the fence if the resv has individualized. */
+- WARN_ON_ONCE(bo->type == ttm_bo_type_kernel
+- && bo->base.resv != &bo->base._resv);
+- if (bo->base.resv == &bo->base._resv)
+- amdgpu_amdkfd_remove_fence_on_pt_pd_bos(abo);
++ /*
++ * We lock the private dma_resv object here; since the BO is about to
++ * be released, nobody else should have a pointer to it.
++ * So if locking here fails, something is wrong with the reference
++ * counting.
++ */
++ if (WARN_ON_ONCE(!dma_resv_trylock(&bo->base._resv)))
++ return;
++
++ amdgpu_amdkfd_remove_all_eviction_fences(abo);
+
+ if (!bo->resource || bo->resource->mem_type != TTM_PL_VRAM ||
+ !(abo->flags & AMDGPU_GEM_CREATE_VRAM_WIPE_ON_RELEASE) ||
+ adev->in_suspend || drm_dev_is_unplugged(adev_to_drm(adev)))
+- return;
++ goto out;
+
+- if (WARN_ON_ONCE(!dma_resv_trylock(bo->base.resv)))
+- return;
++ r = dma_resv_reserve_fences(&bo->base._resv, 1);
++ if (r)
++ goto out;
+
+- r = amdgpu_fill_buffer(abo, 0, bo->base.resv, &fence, true);
+- if (!WARN_ON(r)) {
+- amdgpu_vram_mgr_set_cleared(bo->resource);
+- amdgpu_bo_fence(abo, fence, false);
+- dma_fence_put(fence);
+- }
++ r = amdgpu_fill_buffer(abo, 0, &bo->base._resv, &fence, true);
++ if (WARN_ON(r))
++ goto out;
++
++ amdgpu_vram_mgr_set_cleared(bo->resource);
++ dma_resv_add_fence(&bo->base._resv, fence, DMA_RESV_USAGE_KERNEL);
++ dma_fence_put(fence);
+
+- dma_resv_unlock(bo->base.resv);
++out:
++ dma_resv_unlock(&bo->base._resv);
+ }
+
+ /**
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+index e5fc80ed06eaea..6dded11a23acfd 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+@@ -44,7 +44,7 @@
+ #include "amdgpu_securedisplay.h"
+ #include "amdgpu_atomfirmware.h"
+
+-#define AMD_VBIOS_FILE_MAX_SIZE_B (1024*1024*3)
++#define AMD_VBIOS_FILE_MAX_SIZE_B (1024*1024*16)
+
+ static int psp_load_smu_fw(struct psp_context *psp);
+ static int psp_rap_terminate(struct psp_context *psp);
+@@ -533,7 +533,6 @@ static int psp_sw_fini(struct amdgpu_ip_block *ip_block)
+ {
+ struct amdgpu_device *adev = ip_block->adev;
+ struct psp_context *psp = &adev->psp;
+- struct psp_gfx_cmd_resp *cmd = psp->cmd;
+
+ psp_memory_training_fini(psp);
+
+@@ -543,8 +542,8 @@ static int psp_sw_fini(struct amdgpu_ip_block *ip_block)
+ amdgpu_ucode_release(&psp->cap_fw);
+ amdgpu_ucode_release(&psp->toc_fw);
+
+- kfree(cmd);
+- cmd = NULL;
++ kfree(psp->cmd);
++ psp->cmd = NULL;
+
+ psp_free_shared_bufs(psp);
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
+index f0924aa3f4e485..0c338dcdde48ae 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
+@@ -1864,6 +1864,9 @@ int amdgpu_ras_sysfs_create(struct amdgpu_device *adev,
+ if (!obj || obj->attr_inuse)
+ return -EINVAL;
+
++ if (amdgpu_sriov_vf(adev) && !amdgpu_virt_ras_telemetry_block_en(adev, head->block))
++ return 0;
++
+ get_obj(obj);
+
+ snprintf(obj->fs_data.sysfs_name, sizeof(obj->fs_data.sysfs_name),
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras_eeprom.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras_eeprom.c
+index 52c16bfeccaad8..12ffe4a963d317 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras_eeprom.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras_eeprom.c
+@@ -748,7 +748,7 @@ amdgpu_ras_eeprom_update_header(struct amdgpu_ras_eeprom_control *control)
+ /* Modify the header if it exceeds.
+ */
+ if (amdgpu_bad_page_threshold != 0 &&
+- control->ras_num_bad_pages >= ras->bad_page_cnt_threshold) {
++ control->ras_num_bad_pages > ras->bad_page_cnt_threshold) {
+ dev_warn(adev->dev,
+ "Saved bad pages %d reaches threshold value %d\n",
+ control->ras_num_bad_pages, ras->bad_page_cnt_threshold);
+@@ -806,7 +806,7 @@ amdgpu_ras_eeprom_update_header(struct amdgpu_ras_eeprom_control *control)
+ */
+ if (amdgpu_bad_page_threshold != 0 &&
+ control->tbl_hdr.version == RAS_TABLE_VER_V2_1 &&
+- control->ras_num_bad_pages < ras->bad_page_cnt_threshold)
++ control->ras_num_bad_pages <= ras->bad_page_cnt_threshold)
+ control->tbl_rai.health_percent = ((ras->bad_page_cnt_threshold -
+ control->ras_num_bad_pages) * 100) /
+ ras->bad_page_cnt_threshold;
+@@ -1451,7 +1451,7 @@ int amdgpu_ras_eeprom_check(struct amdgpu_ras_eeprom_control *control)
+ res);
+ return -EINVAL;
+ }
+- if (ras->bad_page_cnt_threshold > control->ras_num_bad_pages) {
++ if (ras->bad_page_cnt_threshold >= control->ras_num_bad_pages) {
+ /* This means that, the threshold was increased since
+ * the last time the system was booted, and now,
+ * ras->bad_page_cnt_threshold - control->num_recs > 0,
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c
+index c586ab4c911bfe..34fc742fda91d5 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c
+@@ -152,7 +152,8 @@ static bool amdgpu_sync_add_later(struct amdgpu_sync *sync, struct dma_fence *f)
+ *
+ * Add the fence to the sync object.
+ */
+-int amdgpu_sync_fence(struct amdgpu_sync *sync, struct dma_fence *f)
++int amdgpu_sync_fence(struct amdgpu_sync *sync, struct dma_fence *f,
++ gfp_t flags)
+ {
+ struct amdgpu_sync_entry *e;
+
+@@ -162,7 +163,7 @@ int amdgpu_sync_fence(struct amdgpu_sync *sync, struct dma_fence *f)
+ if (amdgpu_sync_add_later(sync, f))
+ return 0;
+
+- e = kmem_cache_alloc(amdgpu_sync_slab, GFP_KERNEL);
++ e = kmem_cache_alloc(amdgpu_sync_slab, flags);
+ if (!e)
+ return -ENOMEM;
+
+@@ -249,7 +250,7 @@ int amdgpu_sync_resv(struct amdgpu_device *adev, struct amdgpu_sync *sync,
+ struct dma_fence *tmp = dma_fence_chain_contained(f);
+
+ if (amdgpu_sync_test_fence(adev, mode, owner, tmp)) {
+- r = amdgpu_sync_fence(sync, f);
++ r = amdgpu_sync_fence(sync, f, GFP_KERNEL);
+ dma_fence_put(f);
+ if (r)
+ return r;
+@@ -281,7 +282,7 @@ int amdgpu_sync_kfd(struct amdgpu_sync *sync, struct dma_resv *resv)
+ if (fence_owner != AMDGPU_FENCE_OWNER_KFD)
+ continue;
+
+- r = amdgpu_sync_fence(sync, f);
++ r = amdgpu_sync_fence(sync, f, GFP_KERNEL);
+ if (r)
+ break;
+ }
+@@ -388,7 +389,7 @@ int amdgpu_sync_clone(struct amdgpu_sync *source, struct amdgpu_sync *clone)
+ hash_for_each_safe(source->fences, i, tmp, e, node) {
+ f = e->fence;
+ if (!dma_fence_is_signaled(f)) {
+- r = amdgpu_sync_fence(clone, f);
++ r = amdgpu_sync_fence(clone, f, GFP_KERNEL);
+ if (r)
+ return r;
+ } else {
+@@ -399,6 +400,25 @@ int amdgpu_sync_clone(struct amdgpu_sync *source, struct amdgpu_sync *clone)
+ return 0;
+ }
+
++/**
++ * amdgpu_sync_move - move all fences from src to dst
++ *
++ * @src: source of the fences, empty after function
++ * @dst: destination for the fences
++ *
++ * Moves all fences from source to destination. All fences in destination are
++ * freed and source is empty after the function call.
++ */
++void amdgpu_sync_move(struct amdgpu_sync *src, struct amdgpu_sync *dst)
++{
++ unsigned int i;
++
++ amdgpu_sync_free(dst);
++
++ for (i = 0; i < HASH_SIZE(src->fences); ++i)
++ hlist_move_list(&src->fences[i], &dst->fences[i]);
++}
++
+ /**
+ * amdgpu_sync_push_to_job - push fences into job
+ * @sync: sync object to get the fences from
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.h
+index e3272dce798d79..51eb4382c91ebd 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.h
+@@ -47,7 +47,8 @@ struct amdgpu_sync {
+ };
+
+ void amdgpu_sync_create(struct amdgpu_sync *sync);
+-int amdgpu_sync_fence(struct amdgpu_sync *sync, struct dma_fence *f);
++int amdgpu_sync_fence(struct amdgpu_sync *sync, struct dma_fence *f,
++ gfp_t flags);
+ int amdgpu_sync_resv(struct amdgpu_device *adev, struct amdgpu_sync *sync,
+ struct dma_resv *resv, enum amdgpu_sync_mode mode,
+ void *owner);
+@@ -56,6 +57,7 @@ struct dma_fence *amdgpu_sync_peek_fence(struct amdgpu_sync *sync,
+ struct amdgpu_ring *ring);
+ struct dma_fence *amdgpu_sync_get_fence(struct amdgpu_sync *sync);
+ int amdgpu_sync_clone(struct amdgpu_sync *source, struct amdgpu_sync *clone);
++void amdgpu_sync_move(struct amdgpu_sync *src, struct amdgpu_sync *dst);
+ int amdgpu_sync_push_to_job(struct amdgpu_sync *sync, struct amdgpu_job *job);
+ int amdgpu_sync_wait(struct amdgpu_sync *sync, bool intr);
+ void amdgpu_sync_free(struct amdgpu_sync *sync);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_umc.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_umc.c
+index eafe20d8fe0b65..0a1ef95b28668c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_umc.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_umc.c
+@@ -387,6 +387,45 @@ int amdgpu_umc_fill_error_record(struct ras_err_data *err_data,
+ return 0;
+ }
+
++static int amdgpu_umc_loop_all_aid(struct amdgpu_device *adev, umc_func func,
++ void *data)
++{
++ uint32_t umc_node_inst;
++ uint32_t node_inst;
++ uint32_t umc_inst;
++ uint32_t ch_inst;
++ int ret;
++
++ /*
++ * This loop is done based on the following -
++ * umc.active mask = mask of active umc instances across all nodes
++ * umc.umc_inst_num = maximum number of umc instances per node
++ * umc.node_inst_num = maximum number of node instances
++ * Channel instances are not assumed to be harvested.
++ */
++ dev_dbg(adev->dev, "active umcs :%lx umc_inst per node: %d",
++ adev->umc.active_mask, adev->umc.umc_inst_num);
++ for_each_set_bit(umc_node_inst, &(adev->umc.active_mask),
++ adev->umc.node_inst_num * adev->umc.umc_inst_num) {
++ node_inst = umc_node_inst / adev->umc.umc_inst_num;
++ umc_inst = umc_node_inst % adev->umc.umc_inst_num;
++ LOOP_UMC_CH_INST(ch_inst) {
++ dev_dbg(adev->dev,
++ "node_inst :%d umc_inst: %d ch_inst: %d",
++ node_inst, umc_inst, ch_inst);
++ ret = func(adev, node_inst, umc_inst, ch_inst, data);
++ if (ret) {
++ dev_err(adev->dev,
++ "Node %d umc %d ch %d func returns %d\n",
++ node_inst, umc_inst, ch_inst, ret);
++ return ret;
++ }
++ }
++ }
++
++ return 0;
++}
++
+ int amdgpu_umc_loop_channels(struct amdgpu_device *adev,
+ umc_func func, void *data)
+ {
+@@ -395,6 +434,9 @@ int amdgpu_umc_loop_channels(struct amdgpu_device *adev,
+ uint32_t ch_inst = 0;
+ int ret = 0;
+
++ if (adev->aid_mask)
++ return amdgpu_umc_loop_all_aid(adev, func, data);
++
+ if (adev->umc.node_inst_num) {
+ LOOP_UMC_EACH_NODE_INST_AND_CH(node_inst, umc_inst, ch_inst) {
+ ret = func(adev, node_inst, umc_inst, ch_inst, data);
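For illustration (not part of the patch): amdgpu_umc_loop_all_aid() above iterates a single active-instance bitmask spanning all nodes and recovers the node and UMC instance from the bit index with a division and a modulo. A plain-C sketch of that decomposition; the mask value and counts are made up for the example.

/* Illustration only: derive (node, instance) pairs from a flat bitmask. */
#include <stdio.h>

int main(void)
{
	const unsigned int umc_inst_per_node = 4;
	const unsigned int node_num = 2;
	const unsigned long active_mask = 0x3d;	/* bits 0, 2, 3, 4, 5 active */

	for (unsigned int bit = 0; bit < node_num * umc_inst_per_node; bit++) {
		if (!(active_mask & (1UL << bit)))
			continue;
		printf("bit %u -> node %u, umc %u\n",
		       bit, bit / umc_inst_per_node, bit % umc_inst_per_node);
	}
	return 0;
}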
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c
+index 13e5709ea1caa3..e6f0152e5b0872 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c
+@@ -1247,7 +1247,8 @@ amdgpu_ras_block_to_sriov(struct amdgpu_device *adev, enum amdgpu_ras_block bloc
+ case AMDGPU_RAS_BLOCK__MPIO:
+ return RAS_TELEMETRY_GPU_BLOCK_MPIO;
+ default:
+- dev_err(adev->dev, "Unsupported SRIOV RAS telemetry block 0x%x\n", block);
++ DRM_WARN_ONCE("Unsupported SRIOV RAS telemetry block 0x%x\n",
++ block);
+ return RAS_TELEMETRY_GPU_BLOCK_COUNT;
+ }
+ }
+@@ -1332,3 +1333,17 @@ int amdgpu_virt_ras_telemetry_post_reset(struct amdgpu_device *adev)
+
+ return 0;
+ }
++
++bool amdgpu_virt_ras_telemetry_block_en(struct amdgpu_device *adev,
++ enum amdgpu_ras_block block)
++{
++ enum amd_sriov_ras_telemetry_gpu_block sriov_block;
++
++ sriov_block = amdgpu_ras_block_to_sriov(adev, block);
++
++ if (sriov_block >= RAS_TELEMETRY_GPU_BLOCK_COUNT ||
++ !amdgpu_sriov_ras_telemetry_block_en(adev, sriov_block))
++ return false;
++
++ return true;
++}
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h
+index 0ca73343a76893..0f3ccae5c1ab3b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h
+@@ -407,4 +407,6 @@ bool amdgpu_virt_get_ras_capability(struct amdgpu_device *adev);
+ int amdgpu_virt_req_ras_err_count(struct amdgpu_device *adev, enum amdgpu_ras_block block,
+ struct ras_err_data *err_data);
+ int amdgpu_virt_ras_telemetry_post_reset(struct amdgpu_device *adev);
++bool amdgpu_virt_ras_telemetry_block_en(struct amdgpu_device *adev,
++ enum amdgpu_ras_block block);
+ #endif
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+index 22aa4a8f11891b..21be10d46cf9ce 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+@@ -754,6 +754,7 @@ int amdgpu_vm_flush(struct amdgpu_ring *ring, struct amdgpu_job *job,
+ bool need_pipe_sync)
+ {
+ struct amdgpu_device *adev = ring->adev;
++ struct amdgpu_isolation *isolation = &adev->isolation[ring->xcp_id];
+ unsigned vmhub = ring->vm_hub;
+ struct amdgpu_vmid_mgr *id_mgr = &adev->vm_manager.id_mgr[vmhub];
+ struct amdgpu_vmid *id = &id_mgr->ids[job->vmid];
+@@ -761,8 +762,9 @@ int amdgpu_vm_flush(struct amdgpu_ring *ring, struct amdgpu_job *job,
+ bool gds_switch_needed = ring->funcs->emit_gds_switch &&
+ job->gds_switch_needed;
+ bool vm_flush_needed = job->vm_needs_flush;
+- struct dma_fence *fence = NULL;
++ bool cleaner_shader_needed = false;
+ bool pasid_mapping_needed = false;
++ struct dma_fence *fence = NULL;
+ unsigned int patch;
+ int r;
+
+@@ -785,8 +787,12 @@ int amdgpu_vm_flush(struct amdgpu_ring *ring, struct amdgpu_job *job,
+ pasid_mapping_needed &= adev->gmc.gmc_funcs->emit_pasid_mapping &&
+ ring->funcs->emit_wreg;
+
++ cleaner_shader_needed = adev->gfx.enable_cleaner_shader &&
++ ring->funcs->emit_cleaner_shader && job->base.s_fence &&
++ &job->base.s_fence->scheduled == isolation->spearhead;
++
+ if (!vm_flush_needed && !gds_switch_needed && !need_pipe_sync &&
+- !(job->enforce_isolation && !job->vmid))
++ !cleaner_shader_needed)
+ return 0;
+
+ amdgpu_ring_ib_begin(ring);
+@@ -797,9 +803,7 @@ int amdgpu_vm_flush(struct amdgpu_ring *ring, struct amdgpu_job *job,
+ if (need_pipe_sync)
+ amdgpu_ring_emit_pipeline_sync(ring);
+
+- if (adev->gfx.enable_cleaner_shader &&
+- ring->funcs->emit_cleaner_shader &&
+- job->enforce_isolation)
++ if (cleaner_shader_needed)
+ ring->funcs->emit_cleaner_shader(ring);
+
+ if (vm_flush_needed) {
+@@ -821,7 +825,7 @@ int amdgpu_vm_flush(struct amdgpu_ring *ring, struct amdgpu_job *job,
+ job->oa_size);
+ }
+
+- if (vm_flush_needed || pasid_mapping_needed) {
++ if (vm_flush_needed || pasid_mapping_needed || cleaner_shader_needed) {
+ r = amdgpu_fence_emit(ring, &fence, NULL, 0);
+ if (r)
+ return r;
+@@ -843,6 +847,17 @@ int amdgpu_vm_flush(struct amdgpu_ring *ring, struct amdgpu_job *job,
+ id->pasid_mapping = dma_fence_get(fence);
+ mutex_unlock(&id_mgr->lock);
+ }
++
++ /*
++ * Make sure that all other submissions wait for the cleaner shader to
++ * finish before we push them to the HW.
++ */
++ if (cleaner_shader_needed) {
++ mutex_lock(&adev->enforce_isolation_mutex);
++ dma_fence_put(isolation->spearhead);
++ isolation->spearhead = dma_fence_get(fence);
++ mutex_unlock(&adev->enforce_isolation_mutex);
++ }
+ dma_fence_put(fence);
+
+ amdgpu_ring_patch_cond_exec(ring, patch);
+@@ -2672,20 +2687,6 @@ int amdgpu_vm_make_compute(struct amdgpu_device *adev, struct amdgpu_vm *vm)
+ return r;
+ }
+
+-/**
+- * amdgpu_vm_release_compute - release a compute vm
+- * @adev: amdgpu_device pointer
+- * @vm: a vm turned into compute vm by calling amdgpu_vm_make_compute
+- *
+- * This is a correspondant of amdgpu_vm_make_compute. It decouples compute
+- * pasid from vm. Compute should stop use of vm after this call.
+- */
+-void amdgpu_vm_release_compute(struct amdgpu_device *adev, struct amdgpu_vm *vm)
+-{
+- amdgpu_vm_set_pasid(adev, vm, 0);
+- vm->is_compute_context = false;
+-}
+-
+ static int amdgpu_vm_stats_is_zero(struct amdgpu_vm *vm)
+ {
+ for (int i = 0; i < __AMDGPU_PL_NUM; ++i) {
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
+index 5010a3107bf892..f3ad687125ad65 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
+@@ -489,7 +489,6 @@ int amdgpu_vm_set_pasid(struct amdgpu_device *adev, struct amdgpu_vm *vm,
+ long amdgpu_vm_wait_idle(struct amdgpu_vm *vm, long timeout);
+ int amdgpu_vm_init(struct amdgpu_device *adev, struct amdgpu_vm *vm, int32_t xcp_id);
+ int amdgpu_vm_make_compute(struct amdgpu_device *adev, struct amdgpu_vm *vm);
+-void amdgpu_vm_release_compute(struct amdgpu_device *adev, struct amdgpu_vm *vm);
+ void amdgpu_vm_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm);
+ int amdgpu_vm_lock_pd(struct amdgpu_vm *vm, struct drm_exec *exec,
+ unsigned int num_fences);
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+index 1f32c531f610e3..09178d56afbf61 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+@@ -4739,6 +4739,20 @@ static int gfx_v10_0_sw_init(struct amdgpu_ip_block *ip_block)
+ break;
+ }
+ switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
++ case IP_VERSION(10, 1, 10):
++ adev->gfx.cleaner_shader_ptr = gfx_10_1_10_cleaner_shader_hex;
++ adev->gfx.cleaner_shader_size = sizeof(gfx_10_1_10_cleaner_shader_hex);
++ if (adev->gfx.me_fw_version >= 101 &&
++ adev->gfx.pfp_fw_version >= 158 &&
++ adev->gfx.mec_fw_version >= 152) {
++ adev->gfx.enable_cleaner_shader = true;
++ r = amdgpu_gfx_cleaner_shader_sw_init(adev, adev->gfx.cleaner_shader_size);
++ if (r) {
++ adev->gfx.enable_cleaner_shader = false;
++ dev_err(adev->dev, "Failed to initialize cleaner shader\n");
++ }
++ }
++ break;
+ case IP_VERSION(10, 3, 0):
+ case IP_VERSION(10, 3, 2):
+ case IP_VERSION(10, 3, 4):
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0_cleaner_shader.h b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0_cleaner_shader.h
+index 663c2572d440a1..5255378af53c0a 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0_cleaner_shader.h
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0_cleaner_shader.h
+@@ -21,6 +21,41 @@
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
++/* Define the cleaner shader gfx_10_1_10 */
++static const u32 gfx_10_1_10_cleaner_shader_hex[] = {
++ 0xb0804004, 0xbf8a0000,
++ 0xbf068100, 0xbf840023,
++ 0xbe8203b8, 0xbefc0380,
++ 0x7e008480, 0x7e028480,
++ 0x7e048480, 0x7e068480,
++ 0x7e088480, 0x7e0a8480,
++ 0x7e0c8480, 0x7e0e8480,
++ 0xbefc0302, 0x80828802,
++ 0xbf84fff5, 0xbe8203ff,
++ 0x80000000, 0x87020102,
++ 0xbf840012, 0xbefe03c1,
++ 0xbeff03c1, 0xd7650001,
++ 0x0001007f, 0xd7660001,
++ 0x0002027e, 0x16020288,
++ 0xbe8203bf, 0xbefc03c1,
++ 0xd9382000, 0x00020201,
++ 0xd9386040, 0x00040401,
++ 0xd70f6a01, 0x000202ff,
++ 0x00000400, 0x80828102,
++ 0xbf84fff7, 0xbefc03ff,
++ 0x00000068, 0xbe803080,
++ 0xbe813080, 0xbe823080,
++ 0xbe833080, 0x80fc847c,
++ 0xbf84fffa, 0xbeea0480,
++ 0xbeec0480, 0xbeee0480,
++ 0xbef00480, 0xbef20480,
++ 0xbef40480, 0xbef60480,
++ 0xbef80480, 0xbefa0480,
++ 0xbf810000, 0xbf9f0000,
++ 0xbf9f0000, 0xbf9f0000,
++ 0xbf9f0000, 0xbf9f0000,
++};
++
+ /* Define the cleaner shader gfx_10_3_0 */
+ static const u32 gfx_10_3_0_cleaner_shader_hex[] = {
+ 0xb0804004, 0xbf8a0000,
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_1_10_cleaner_shader.asm b/drivers/gpu/drm/amd/amdgpu/gfx_v10_1_10_cleaner_shader.asm
+new file mode 100644
+index 00000000000000..9ba3359253c95d
+--- /dev/null
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_1_10_cleaner_shader.asm
+@@ -0,0 +1,126 @@
++/* SPDX-License-Identifier: MIT */
++/*
++ * Copyright 2025 Advanced Micro Devices, Inc.
++ *
++ * Permission is hereby granted, free of charge, to any person obtaining a
++ * copy of this software and associated documentation files (the "Software"),
++ * to deal in the Software without restriction, including without limitation
++ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
++ * and/or sell copies of the Software, and to permit persons to whom the
++ * Software is furnished to do so, subject to the following conditions:
++ *
++ * The above copyright notice and this permission notice shall be included in
++ * all copies or substantial portions of the Software.
++ *
++ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
++ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
++ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
++ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
++ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
++ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
++ * OTHER DEALINGS IN THE SOFTWARE.
++ */
++
++// This shader cleans LDS, SGPRs and VGPRs. It is the first 64 Dwords (256 bytes) of the 256-Dword cleaner shader.
++
++// GFX10.1 : Clear SGPRs, VGPRs and LDS
++// Launch 32 waves per CU (16 per SIMD) as a workgroup (threadgroup) to fill every wave slot
++// Waves are "wave32" and have 64 VGPRs each, which uses all 1024 VGPRs per SIMD
++// Waves are launched in "CU" mode, and the workgroup shares 64KB of LDS (half of the WGP's LDS)
++// It takes 2 workgroups to use all of LDS: one on each CU of the WGP
++// Each wave clears SGPRs 0 - 107
++// Each wave clears VGPRs 0 - 63
++// The first wave of the workgroup clears its 64KB of LDS
++// The shader starts with "S_BARRIER" to ensure SPI has launched all waves of the workgroup
++// before any wave in the workgroup could end. Without this, it is possible not all SGPRs get cleared.
++
++
++shader main
++ asic(GFX10.1)
++ type(CS)
++ wave_size(32)
++// Note: original source code from SQ team
++
++//
++// Create 32 waves in a threadgroup (CS waves)
++// Each allocates 64 VGPRs
++// The workgroup allocates all of LDS (64kbytes)
++//
++// Takes about 2500 clocks to run.
++// (theoretical fastest = 1024clks vgpr + 640lds = 1660 clks)
++//
++ S_BARRIER
++ s_cmp_eq_u32 s0, 1 // Bit0 is set, sgpr0 is set then clear VGPRS and LDS as FW set COMPUTE_USER_DATA_0
++ s_cbranch_scc0 label_0023 // Clean VGPRs and LDS if sgpr0 of wave is set, scc = (s0 == 1)
++
++ s_mov_b32 s2, 0x00000038 // Loop 64/8=8 times (loop unrolled for performance)
++ s_mov_b32 m0, 0
++ //
++ // CLEAR VGPRs
++ //
++label_0005:
++ v_movreld_b32 v0, 0
++ v_movreld_b32 v1, 0
++ v_movreld_b32 v2, 0
++ v_movreld_b32 v3, 0
++ v_movreld_b32 v4, 0
++ v_movreld_b32 v5, 0
++ v_movreld_b32 v6, 0
++ v_movreld_b32 v7, 0
++ s_mov_b32 m0, s2
++ s_sub_u32 s2, s2, 8
++ s_cbranch_scc0 label_0005
++ //
++ s_mov_b32 s2, 0x80000000 // Bit31 is first_wave
++ s_and_b32 s2, s2, s0 // sgpr0 has tg_size (first_wave) term as in ucode only COMPUTE_PGM_RSRC2.tg_size_en is set
++ s_cbranch_scc0 label_0023 // Clean LDS if its first wave of ThreadGroup/WorkGroup
++ // CLEAR LDS
++ //
++ s_mov_b32 exec_lo, 0xffffffff
++ s_mov_b32 exec_hi, 0xffffffff
++ v_mbcnt_lo_u32_b32 v1, exec_hi, 0 // Set V1 to thread-ID (0..63)
++ v_mbcnt_hi_u32_b32 v1, exec_lo, v1 // Set V1 to thread-ID (0..63)
++ v_mul_u32_u24 v1, 0x00000008, v1 // * 8, so each thread is a double-dword address (8byte)
++ s_mov_b32 s2, 0x00000003f // 64 loop iterations
++ s_mov_b32 m0, 0xffffffff
++ // Clear all of LDS space
++ // Each FirstWave of WorkGroup clears 64kbyte block
++
++label_001F:
++ ds_write2_b64 v1, v[2:3], v[2:3] offset1:32
++ ds_write2_b64 v1, v[4:5], v[4:5] offset0:64 offset1:96
++ v_add_co_u32 v1, vcc, 0x00000400, v1
++ s_sub_u32 s2, s2, 1
++ s_cbranch_scc0 label_001F
++
++ //
++ // CLEAR SGPRs
++ //
++label_0023:
++ s_mov_b32 m0, 0x00000068 // Loop 108/4=27 times (loop unrolled for performance)
++label_sgpr_loop:
++ s_movreld_b32 s0, 0
++ s_movreld_b32 s1, 0
++ s_movreld_b32 s2, 0
++ s_movreld_b32 s3, 0
++ s_sub_u32 m0, m0, 4
++ s_cbranch_scc0 label_sgpr_loop
++
++ //clear vcc
++ s_mov_b64 vcc, 0 //clear vcc
++ //s_setreg_imm32_b32 hw_reg_shader_flat_scratch_lo, 0 //clear flat scratch lo SGPR
++ //s_setreg_imm32_b32 hw_reg_shader_flat_scratch_hi, 0 //clear flat scratch hi SGPR
++ s_mov_b64 ttmp0, 0 //Clear ttmp0 and ttmp1
++ s_mov_b64 ttmp2, 0 //Clear ttmp2 and ttmp3
++ s_mov_b64 ttmp4, 0 //Clear ttmp4 and ttmp5
++ s_mov_b64 ttmp6, 0 //Clear ttmp6 and ttmp7
++ s_mov_b64 ttmp8, 0 //Clear ttmp8 and ttmp9
++ s_mov_b64 ttmp10, 0 //Clear ttmp10 and ttmp11
++ s_mov_b64 ttmp12, 0 //Clear ttmp12 and ttmp13
++ s_mov_b64 ttmp14, 0 //Clear ttmp14 and ttmp15
++
++ s_endpgm
++
++end
++
++
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
+index f1f53c7687410e..e050c2e4ea734c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
+@@ -64,6 +64,23 @@
+ #define regPC_CONFIG_CNTL_1 0x194d
+ #define regPC_CONFIG_CNTL_1_BASE_IDX 1
+
++#define regCP_GFX_MQD_CONTROL_DEFAULT 0x00000100
++#define regCP_GFX_HQD_VMID_DEFAULT 0x00000000
++#define regCP_GFX_HQD_QUEUE_PRIORITY_DEFAULT 0x00000000
++#define regCP_GFX_HQD_QUANTUM_DEFAULT 0x00000a01
++#define regCP_GFX_HQD_CNTL_DEFAULT 0x00a00000
++#define regCP_RB_DOORBELL_CONTROL_DEFAULT 0x00000000
++#define regCP_GFX_HQD_RPTR_DEFAULT 0x00000000
++
++#define regCP_HQD_EOP_CONTROL_DEFAULT 0x00000006
++#define regCP_HQD_PQ_DOORBELL_CONTROL_DEFAULT 0x00000000
++#define regCP_MQD_CONTROL_DEFAULT 0x00000100
++#define regCP_HQD_PQ_CONTROL_DEFAULT 0x00308509
++#define regCP_HQD_PQ_DOORBELL_CONTROL_DEFAULT 0x00000000
++#define regCP_HQD_PQ_RPTR_DEFAULT 0x00000000
++#define regCP_HQD_PERSISTENT_STATE_DEFAULT 0x0be05501
++#define regCP_HQD_IB_CONTROL_DEFAULT 0x00300000
++
+ MODULE_FIRMWARE("amdgpu/gc_11_0_0_pfp.bin");
+ MODULE_FIRMWARE("amdgpu/gc_11_0_0_me.bin");
+ MODULE_FIRMWARE("amdgpu/gc_11_0_0_mec.bin");
+@@ -3958,7 +3975,7 @@ static void gfx_v11_0_gfx_mqd_set_priority(struct amdgpu_device *adev,
+ if (prop->hqd_pipe_priority == AMDGPU_GFX_PIPE_PRIO_HIGH)
+ priority = 1;
+
+- tmp = RREG32_SOC15(GC, 0, regCP_GFX_HQD_QUEUE_PRIORITY);
++ tmp = regCP_GFX_HQD_QUEUE_PRIORITY_DEFAULT;
+ tmp = REG_SET_FIELD(tmp, CP_GFX_HQD_QUEUE_PRIORITY, PRIORITY_LEVEL, priority);
+ mqd->cp_gfx_hqd_queue_priority = tmp;
+ }
+@@ -3980,14 +3997,14 @@ static int gfx_v11_0_gfx_mqd_init(struct amdgpu_device *adev, void *m,
+ mqd->cp_mqd_base_addr_hi = upper_32_bits(prop->mqd_gpu_addr);
+
+ /* set up mqd control */
+- tmp = RREG32_SOC15(GC, 0, regCP_GFX_MQD_CONTROL);
++ tmp = regCP_GFX_MQD_CONTROL_DEFAULT;
+ tmp = REG_SET_FIELD(tmp, CP_GFX_MQD_CONTROL, VMID, 0);
+ tmp = REG_SET_FIELD(tmp, CP_GFX_MQD_CONTROL, PRIV_STATE, 1);
+ tmp = REG_SET_FIELD(tmp, CP_GFX_MQD_CONTROL, CACHE_POLICY, 0);
+ mqd->cp_gfx_mqd_control = tmp;
+
+ /* set up gfx_hqd_vimd with 0x0 to indicate the ring buffer's vmid */
+- tmp = RREG32_SOC15(GC, 0, regCP_GFX_HQD_VMID);
++ tmp = regCP_GFX_HQD_VMID_DEFAULT;
+ tmp = REG_SET_FIELD(tmp, CP_GFX_HQD_VMID, VMID, 0);
+ mqd->cp_gfx_hqd_vmid = 0;
+
+@@ -3995,7 +4012,7 @@ static int gfx_v11_0_gfx_mqd_init(struct amdgpu_device *adev, void *m,
+ gfx_v11_0_gfx_mqd_set_priority(adev, mqd, prop);
+
+ /* set up time quantum */
+- tmp = RREG32_SOC15(GC, 0, regCP_GFX_HQD_QUANTUM);
++ tmp = regCP_GFX_HQD_QUANTUM_DEFAULT;
+ tmp = REG_SET_FIELD(tmp, CP_GFX_HQD_QUANTUM, QUANTUM_EN, 1);
+ mqd->cp_gfx_hqd_quantum = tmp;
+
+@@ -4017,7 +4034,7 @@ static int gfx_v11_0_gfx_mqd_init(struct amdgpu_device *adev, void *m,
+
+ /* set up the gfx_hqd_control, similar as CP_RB0_CNTL */
+ rb_bufsz = order_base_2(prop->queue_size / 4) - 1;
+- tmp = RREG32_SOC15(GC, 0, regCP_GFX_HQD_CNTL);
++ tmp = regCP_GFX_HQD_CNTL_DEFAULT;
+ tmp = REG_SET_FIELD(tmp, CP_GFX_HQD_CNTL, RB_BUFSZ, rb_bufsz);
+ tmp = REG_SET_FIELD(tmp, CP_GFX_HQD_CNTL, RB_BLKSZ, rb_bufsz - 2);
+ #ifdef __BIG_ENDIAN
+@@ -4026,7 +4043,7 @@ static int gfx_v11_0_gfx_mqd_init(struct amdgpu_device *adev, void *m,
+ mqd->cp_gfx_hqd_cntl = tmp;
+
+ /* set up cp_doorbell_control */
+- tmp = RREG32_SOC15(GC, 0, regCP_RB_DOORBELL_CONTROL);
++ tmp = regCP_RB_DOORBELL_CONTROL_DEFAULT;
+ if (prop->use_doorbell) {
+ tmp = REG_SET_FIELD(tmp, CP_RB_DOORBELL_CONTROL,
+ DOORBELL_OFFSET, prop->doorbell_index);
+@@ -4038,7 +4055,7 @@ static int gfx_v11_0_gfx_mqd_init(struct amdgpu_device *adev, void *m,
+ mqd->cp_rb_doorbell_control = tmp;
+
+ /* reset read and write pointers, similar to CP_RB0_WPTR/_RPTR */
+- mqd->cp_gfx_hqd_rptr = RREG32_SOC15(GC, 0, regCP_GFX_HQD_RPTR);
++ mqd->cp_gfx_hqd_rptr = regCP_GFX_HQD_RPTR_DEFAULT;
+
+ /* active the queue */
+ mqd->cp_gfx_hqd_active = 1;
+@@ -4124,14 +4141,14 @@ static int gfx_v11_0_compute_mqd_init(struct amdgpu_device *adev, void *m,
+ mqd->cp_hqd_eop_base_addr_hi = upper_32_bits(eop_base_addr);
+
+ /* set the EOP size, register value is 2^(EOP_SIZE+1) dwords */
+- tmp = RREG32_SOC15(GC, 0, regCP_HQD_EOP_CONTROL);
++ tmp = regCP_HQD_EOP_CONTROL_DEFAULT;
+ tmp = REG_SET_FIELD(tmp, CP_HQD_EOP_CONTROL, EOP_SIZE,
+ (order_base_2(GFX11_MEC_HPD_SIZE / 4) - 1));
+
+ mqd->cp_hqd_eop_control = tmp;
+
+ /* enable doorbell? */
+- tmp = RREG32_SOC15(GC, 0, regCP_HQD_PQ_DOORBELL_CONTROL);
++ tmp = regCP_HQD_PQ_DOORBELL_CONTROL_DEFAULT;
+
+ if (prop->use_doorbell) {
+ tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
+@@ -4160,7 +4177,7 @@ static int gfx_v11_0_compute_mqd_init(struct amdgpu_device *adev, void *m,
+ mqd->cp_mqd_base_addr_hi = upper_32_bits(prop->mqd_gpu_addr);
+
+ /* set MQD vmid to 0 */
+- tmp = RREG32_SOC15(GC, 0, regCP_MQD_CONTROL);
++ tmp = regCP_MQD_CONTROL_DEFAULT;
+ tmp = REG_SET_FIELD(tmp, CP_MQD_CONTROL, VMID, 0);
+ mqd->cp_mqd_control = tmp;
+
+@@ -4170,7 +4187,7 @@ static int gfx_v11_0_compute_mqd_init(struct amdgpu_device *adev, void *m,
+ mqd->cp_hqd_pq_base_hi = upper_32_bits(hqd_gpu_addr);
+
+ /* set up the HQD, this is similar to CP_RB0_CNTL */
+- tmp = RREG32_SOC15(GC, 0, regCP_HQD_PQ_CONTROL);
++ tmp = regCP_HQD_PQ_CONTROL_DEFAULT;
+ tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, QUEUE_SIZE,
+ (order_base_2(prop->queue_size / 4) - 1));
+ tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, RPTR_BLOCK_SIZE,
+@@ -4196,7 +4213,7 @@ static int gfx_v11_0_compute_mqd_init(struct amdgpu_device *adev, void *m,
+ tmp = 0;
+ /* enable the doorbell if requested */
+ if (prop->use_doorbell) {
+- tmp = RREG32_SOC15(GC, 0, regCP_HQD_PQ_DOORBELL_CONTROL);
++ tmp = regCP_HQD_PQ_DOORBELL_CONTROL_DEFAULT;
+ tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
+ DOORBELL_OFFSET, prop->doorbell_index);
+
+@@ -4211,17 +4228,17 @@ static int gfx_v11_0_compute_mqd_init(struct amdgpu_device *adev, void *m,
+ mqd->cp_hqd_pq_doorbell_control = tmp;
+
+ /* reset read and write pointers, similar to CP_RB0_WPTR/_RPTR */
+- mqd->cp_hqd_pq_rptr = RREG32_SOC15(GC, 0, regCP_HQD_PQ_RPTR);
++ mqd->cp_hqd_pq_rptr = regCP_HQD_PQ_RPTR_DEFAULT;
+
+ /* set the vmid for the queue */
+ mqd->cp_hqd_vmid = 0;
+
+- tmp = RREG32_SOC15(GC, 0, regCP_HQD_PERSISTENT_STATE);
++ tmp = regCP_HQD_PERSISTENT_STATE_DEFAULT;
+ tmp = REG_SET_FIELD(tmp, CP_HQD_PERSISTENT_STATE, PRELOAD_SIZE, 0x55);
+ mqd->cp_hqd_persistent_state = tmp;
+
+ /* set MIN_IB_AVAIL_SIZE */
+- tmp = RREG32_SOC15(GC, 0, regCP_HQD_IB_CONTROL);
++ tmp = regCP_HQD_IB_CONTROL_DEFAULT;
+ tmp = REG_SET_FIELD(tmp, CP_HQD_IB_CONTROL, MIN_IB_AVAIL_SIZE, 3);
+ mqd->cp_hqd_ib_control = tmp;
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
+index 0c08785099f320..2ec900d50d7f84 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
+@@ -52,6 +52,24 @@
+
+ #define RLCG_UCODE_LOADING_START_ADDRESS 0x00002000L
+
++#define regCP_GFX_MQD_CONTROL_DEFAULT 0x00000100
++#define regCP_GFX_HQD_VMID_DEFAULT 0x00000000
++#define regCP_GFX_HQD_QUEUE_PRIORITY_DEFAULT 0x00000000
++#define regCP_GFX_HQD_QUANTUM_DEFAULT 0x00000a01
++#define regCP_GFX_HQD_CNTL_DEFAULT 0x00f00000
++#define regCP_RB_DOORBELL_CONTROL_DEFAULT 0x00000000
++#define regCP_GFX_HQD_RPTR_DEFAULT 0x00000000
++
++#define regCP_HQD_EOP_CONTROL_DEFAULT 0x00000006
++#define regCP_HQD_PQ_DOORBELL_CONTROL_DEFAULT 0x00000000
++#define regCP_MQD_CONTROL_DEFAULT 0x00000100
++#define regCP_HQD_PQ_CONTROL_DEFAULT 0x00308509
++#define regCP_HQD_PQ_DOORBELL_CONTROL_DEFAULT 0x00000000
++#define regCP_HQD_PQ_RPTR_DEFAULT 0x00000000
++#define regCP_HQD_PERSISTENT_STATE_DEFAULT 0x0be05501
++#define regCP_HQD_IB_CONTROL_DEFAULT 0x00300000
++
++
+ MODULE_FIRMWARE("amdgpu/gc_12_0_0_pfp.bin");
+ MODULE_FIRMWARE("amdgpu/gc_12_0_0_me.bin");
+ MODULE_FIRMWARE("amdgpu/gc_12_0_0_mec.bin");
+@@ -2891,25 +2909,25 @@ static int gfx_v12_0_gfx_mqd_init(struct amdgpu_device *adev, void *m,
+ mqd->cp_mqd_base_addr_hi = upper_32_bits(prop->mqd_gpu_addr);
+
+ /* set up mqd control */
+- tmp = RREG32_SOC15(GC, 0, regCP_GFX_MQD_CONTROL);
++ tmp = regCP_GFX_MQD_CONTROL_DEFAULT;
+ tmp = REG_SET_FIELD(tmp, CP_GFX_MQD_CONTROL, VMID, 0);
+ tmp = REG_SET_FIELD(tmp, CP_GFX_MQD_CONTROL, PRIV_STATE, 1);
+ tmp = REG_SET_FIELD(tmp, CP_GFX_MQD_CONTROL, CACHE_POLICY, 0);
+ mqd->cp_gfx_mqd_control = tmp;
+
+ /* set up gfx_hqd_vimd with 0x0 to indicate the ring buffer's vmid */
+- tmp = RREG32_SOC15(GC, 0, regCP_GFX_HQD_VMID);
++ tmp = regCP_GFX_HQD_VMID_DEFAULT;
+ tmp = REG_SET_FIELD(tmp, CP_GFX_HQD_VMID, VMID, 0);
+ mqd->cp_gfx_hqd_vmid = 0;
+
+ /* set up default queue priority level
+ * 0x0 = low priority, 0x1 = high priority */
+- tmp = RREG32_SOC15(GC, 0, regCP_GFX_HQD_QUEUE_PRIORITY);
++ tmp = regCP_GFX_HQD_QUEUE_PRIORITY_DEFAULT;
+ tmp = REG_SET_FIELD(tmp, CP_GFX_HQD_QUEUE_PRIORITY, PRIORITY_LEVEL, 0);
+ mqd->cp_gfx_hqd_queue_priority = tmp;
+
+ /* set up time quantum */
+- tmp = RREG32_SOC15(GC, 0, regCP_GFX_HQD_QUANTUM);
++ tmp = regCP_GFX_HQD_QUANTUM_DEFAULT;
+ tmp = REG_SET_FIELD(tmp, CP_GFX_HQD_QUANTUM, QUANTUM_EN, 1);
+ mqd->cp_gfx_hqd_quantum = tmp;
+
+@@ -2931,7 +2949,7 @@ static int gfx_v12_0_gfx_mqd_init(struct amdgpu_device *adev, void *m,
+
+ /* set up the gfx_hqd_control, similar as CP_RB0_CNTL */
+ rb_bufsz = order_base_2(prop->queue_size / 4) - 1;
+- tmp = RREG32_SOC15(GC, 0, regCP_GFX_HQD_CNTL);
++ tmp = regCP_GFX_HQD_CNTL_DEFAULT;
+ tmp = REG_SET_FIELD(tmp, CP_GFX_HQD_CNTL, RB_BUFSZ, rb_bufsz);
+ tmp = REG_SET_FIELD(tmp, CP_GFX_HQD_CNTL, RB_BLKSZ, rb_bufsz - 2);
+ #ifdef __BIG_ENDIAN
+@@ -2940,7 +2958,7 @@ static int gfx_v12_0_gfx_mqd_init(struct amdgpu_device *adev, void *m,
+ mqd->cp_gfx_hqd_cntl = tmp;
+
+ /* set up cp_doorbell_control */
+- tmp = RREG32_SOC15(GC, 0, regCP_RB_DOORBELL_CONTROL);
++ tmp = regCP_RB_DOORBELL_CONTROL_DEFAULT;
+ if (prop->use_doorbell) {
+ tmp = REG_SET_FIELD(tmp, CP_RB_DOORBELL_CONTROL,
+ DOORBELL_OFFSET, prop->doorbell_index);
+@@ -2952,7 +2970,7 @@ static int gfx_v12_0_gfx_mqd_init(struct amdgpu_device *adev, void *m,
+ mqd->cp_rb_doorbell_control = tmp;
+
+ /* reset read and write pointers, similar to CP_RB0_WPTR/_RPTR */
+- mqd->cp_gfx_hqd_rptr = RREG32_SOC15(GC, 0, regCP_GFX_HQD_RPTR);
++ mqd->cp_gfx_hqd_rptr = regCP_GFX_HQD_RPTR_DEFAULT;
+
+ /* active the queue */
+ mqd->cp_gfx_hqd_active = 1;
+@@ -3047,14 +3065,14 @@ static int gfx_v12_0_compute_mqd_init(struct amdgpu_device *adev, void *m,
+ mqd->cp_hqd_eop_base_addr_hi = upper_32_bits(eop_base_addr);
+
+ /* set the EOP size, register value is 2^(EOP_SIZE+1) dwords */
+- tmp = RREG32_SOC15(GC, 0, regCP_HQD_EOP_CONTROL);
++ tmp = regCP_HQD_EOP_CONTROL_DEFAULT;
+ tmp = REG_SET_FIELD(tmp, CP_HQD_EOP_CONTROL, EOP_SIZE,
+ (order_base_2(GFX12_MEC_HPD_SIZE / 4) - 1));
+
+ mqd->cp_hqd_eop_control = tmp;
+
+ /* enable doorbell? */
+- tmp = RREG32_SOC15(GC, 0, regCP_HQD_PQ_DOORBELL_CONTROL);
++ tmp = regCP_HQD_PQ_DOORBELL_CONTROL_DEFAULT;
+
+ if (prop->use_doorbell) {
+ tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
+@@ -3083,7 +3101,7 @@ static int gfx_v12_0_compute_mqd_init(struct amdgpu_device *adev, void *m,
+ mqd->cp_mqd_base_addr_hi = upper_32_bits(prop->mqd_gpu_addr);
+
+ /* set MQD vmid to 0 */
+- tmp = RREG32_SOC15(GC, 0, regCP_MQD_CONTROL);
++ tmp = regCP_MQD_CONTROL_DEFAULT;
+ tmp = REG_SET_FIELD(tmp, CP_MQD_CONTROL, VMID, 0);
+ mqd->cp_mqd_control = tmp;
+
+@@ -3093,7 +3111,7 @@ static int gfx_v12_0_compute_mqd_init(struct amdgpu_device *adev, void *m,
+ mqd->cp_hqd_pq_base_hi = upper_32_bits(hqd_gpu_addr);
+
+ /* set up the HQD, this is similar to CP_RB0_CNTL */
+- tmp = RREG32_SOC15(GC, 0, regCP_HQD_PQ_CONTROL);
++ tmp = regCP_HQD_PQ_CONTROL_DEFAULT;
+ tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, QUEUE_SIZE,
+ (order_base_2(prop->queue_size / 4) - 1));
+ tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, RPTR_BLOCK_SIZE,
+@@ -3118,7 +3136,7 @@ static int gfx_v12_0_compute_mqd_init(struct amdgpu_device *adev, void *m,
+ tmp = 0;
+ /* enable the doorbell if requested */
+ if (prop->use_doorbell) {
+- tmp = RREG32_SOC15(GC, 0, regCP_HQD_PQ_DOORBELL_CONTROL);
++ tmp = regCP_HQD_PQ_DOORBELL_CONTROL_DEFAULT;
+ tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
+ DOORBELL_OFFSET, prop->doorbell_index);
+
+@@ -3133,17 +3151,17 @@ static int gfx_v12_0_compute_mqd_init(struct amdgpu_device *adev, void *m,
+ mqd->cp_hqd_pq_doorbell_control = tmp;
+
+ /* reset read and write pointers, similar to CP_RB0_WPTR/_RPTR */
+- mqd->cp_hqd_pq_rptr = RREG32_SOC15(GC, 0, regCP_HQD_PQ_RPTR);
++ mqd->cp_hqd_pq_rptr = regCP_HQD_PQ_RPTR_DEFAULT;
+
+ /* set the vmid for the queue */
+ mqd->cp_hqd_vmid = 0;
+
+- tmp = RREG32_SOC15(GC, 0, regCP_HQD_PERSISTENT_STATE);
++ tmp = regCP_HQD_PERSISTENT_STATE_DEFAULT;
+ tmp = REG_SET_FIELD(tmp, CP_HQD_PERSISTENT_STATE, PRELOAD_SIZE, 0x55);
+ mqd->cp_hqd_persistent_state = tmp;
+
+ /* set MIN_IB_AVAIL_SIZE */
+- tmp = RREG32_SOC15(GC, 0, regCP_HQD_IB_CONTROL);
++ tmp = regCP_HQD_IB_CONTROL_DEFAULT;
+ tmp = REG_SET_FIELD(tmp, CP_HQD_IB_CONTROL, MIN_IB_AVAIL_SIZE, 3);
+ mqd->cp_hqd_ib_control = tmp;
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c b/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c
+index 0e3ddea7b8e0f8..a7bfc9f41d0e39 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c
+@@ -92,12 +92,12 @@ static void gfxhub_v1_0_init_system_aperture_regs(struct amdgpu_device *adev)
+ {
+ uint64_t value;
+
+- /* Program the AGP BAR */
+- WREG32_SOC15_RLC(GC, 0, mmMC_VM_AGP_BASE, 0);
+- WREG32_SOC15_RLC(GC, 0, mmMC_VM_AGP_BOT, adev->gmc.agp_start >> 24);
+- WREG32_SOC15_RLC(GC, 0, mmMC_VM_AGP_TOP, adev->gmc.agp_end >> 24);
+-
+ if (!amdgpu_sriov_vf(adev) || adev->asic_type <= CHIP_VEGA10) {
++ /* Program the AGP BAR */
++ WREG32_SOC15_RLC(GC, 0, mmMC_VM_AGP_BASE, 0);
++ WREG32_SOC15_RLC(GC, 0, mmMC_VM_AGP_BOT, adev->gmc.agp_start >> 24);
++ WREG32_SOC15_RLC(GC, 0, mmMC_VM_AGP_TOP, adev->gmc.agp_end >> 24);
++
+ /* Program the system aperture low logical page number. */
+ WREG32_SOC15_RLC(GC, 0, mmMC_VM_SYSTEM_APERTURE_LOW_ADDR,
+ min(adev->gmc.fb_start, adev->gmc.agp_start) >> 18);
+diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+index 5250b470e5ef39..f1dc9e50d67e71 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+@@ -1504,7 +1504,6 @@ static void gmc_v9_0_set_umc_funcs(struct amdgpu_device *adev)
+ adev->umc.umc_inst_num = UMC_V12_0_UMC_INSTANCE_NUM;
+ adev->umc.node_inst_num /= UMC_V12_0_UMC_INSTANCE_NUM;
+ adev->umc.channel_offs = UMC_V12_0_PER_CHANNEL_OFFSET;
+- adev->umc.active_mask = adev->aid_mask;
+ adev->umc.retire_unit = UMC_V12_0_BAD_PAGE_NUM_PER_CHANNEL;
+ if (!adev->gmc.xgmi.connected_to_cpu && !adev->gmc.is_app_apu)
+ adev->umc.ras = &umc_v12_0_ras;
+diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c
+index 88f9771c16869c..b2904ee494e048 100644
+--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c
++++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c
+@@ -637,7 +637,7 @@ static uint64_t jpeg_v4_0_3_dec_ring_get_wptr(struct amdgpu_ring *ring)
+ ring->pipe ? (0x40 * ring->pipe - 0xc80) : 0);
+ }
+
+-static void jpeg_v4_0_3_ring_emit_hdp_flush(struct amdgpu_ring *ring)
++void jpeg_v4_0_3_ring_emit_hdp_flush(struct amdgpu_ring *ring)
+ {
+ /* JPEG engine access for HDP flush doesn't work when RRMT is enabled.
+ * This is a workaround to avoid any HDP flush through JPEG ring.
+diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.h b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.h
+index 747a3e5f68564c..a90bf370a00259 100644
+--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.h
++++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.h
+@@ -56,6 +56,7 @@ void jpeg_v4_0_3_dec_ring_emit_fence(struct amdgpu_ring *ring, u64 addr, u64 seq
+ unsigned int flags);
+ void jpeg_v4_0_3_dec_ring_emit_vm_flush(struct amdgpu_ring *ring,
+ unsigned int vmid, uint64_t pd_addr);
++void jpeg_v4_0_3_ring_emit_hdp_flush(struct amdgpu_ring *ring);
+ void jpeg_v4_0_3_dec_ring_nop(struct amdgpu_ring *ring, uint32_t count);
+ void jpeg_v4_0_3_dec_ring_insert_start(struct amdgpu_ring *ring);
+ void jpeg_v4_0_3_dec_ring_insert_end(struct amdgpu_ring *ring);
+diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c
+index 40d4c32a8c2a6e..f2cc11b3fd68b9 100644
+--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c
++++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c
+@@ -655,6 +655,7 @@ static const struct amdgpu_ring_funcs jpeg_v5_0_1_dec_ring_vm_funcs = {
+ .emit_ib = jpeg_v4_0_3_dec_ring_emit_ib,
+ .emit_fence = jpeg_v4_0_3_dec_ring_emit_fence,
+ .emit_vm_flush = jpeg_v4_0_3_dec_ring_emit_vm_flush,
++ .emit_hdp_flush = jpeg_v4_0_3_ring_emit_hdp_flush,
+ .test_ring = amdgpu_jpeg_dec_ring_test_ring,
+ .test_ib = amdgpu_jpeg_dec_ring_test_ib,
+ .insert_nop = jpeg_v4_0_3_dec_ring_nop,
+diff --git a/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c b/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
+index 0f808ffcab9433..68bb334393bb62 100644
+--- a/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
+@@ -730,7 +730,7 @@ static int mes_v11_0_set_hw_resources(struct amdgpu_mes *mes)
+
+ static int mes_v11_0_set_hw_resources_1(struct amdgpu_mes *mes)
+ {
+- int size = 128 * PAGE_SIZE;
++ int size = 128 * AMDGPU_GPU_PAGE_SIZE;
+ int ret = 0;
+ struct amdgpu_device *adev = mes->adev;
+ union MESAPI_SET_HW_RESOURCES_1 mes_set_hw_res_pkt;
+diff --git a/drivers/gpu/drm/amd/amdgpu/mmhub_v1_7.c b/drivers/gpu/drm/amd/amdgpu/mmhub_v1_7.c
+index 9689e2b5d4e518..2adee2b94c37d6 100644
+--- a/drivers/gpu/drm/amd/amdgpu/mmhub_v1_7.c
++++ b/drivers/gpu/drm/amd/amdgpu/mmhub_v1_7.c
+@@ -172,6 +172,30 @@ static void mmhub_v1_7_init_tlb_regs(struct amdgpu_device *adev)
+ WREG32_SOC15(MMHUB, 0, regMC_VM_MX_L1_TLB_CNTL, tmp);
+ }
+
++/* Set snoop bit for SDMA so that SDMA writes probe-invalidates RW lines */
++static void mmhub_v1_7_init_snoop_override_regs(struct amdgpu_device *adev)
++{
++ uint32_t tmp;
++ int i;
++ uint32_t distance = regDAGB1_WRCLI_GPU_SNOOP_OVERRIDE -
++ regDAGB0_WRCLI_GPU_SNOOP_OVERRIDE;
++
++ for (i = 0; i < 5; i++) { /* DAGB instances */
++ tmp = RREG32_SOC15_OFFSET(MMHUB, 0,
++ regDAGB0_WRCLI_GPU_SNOOP_OVERRIDE, i * distance);
++ tmp |= (1 << 15); /* SDMA client is BIT15 */
++ WREG32_SOC15_OFFSET(MMHUB, 0,
++ regDAGB0_WRCLI_GPU_SNOOP_OVERRIDE, i * distance, tmp);
++
++ tmp = RREG32_SOC15_OFFSET(MMHUB, 0,
++ regDAGB0_WRCLI_GPU_SNOOP_OVERRIDE_VALUE, i * distance);
++ tmp |= (1 << 15);
++ WREG32_SOC15_OFFSET(MMHUB, 0,
++ regDAGB0_WRCLI_GPU_SNOOP_OVERRIDE_VALUE, i * distance, tmp);
++ }
++
++}
++
+ static void mmhub_v1_7_init_cache_regs(struct amdgpu_device *adev)
+ {
+ uint32_t tmp;
+@@ -337,6 +361,7 @@ static int mmhub_v1_7_gart_enable(struct amdgpu_device *adev)
+ mmhub_v1_7_init_system_aperture_regs(adev);
+ mmhub_v1_7_init_tlb_regs(adev);
+ mmhub_v1_7_init_cache_regs(adev);
++ mmhub_v1_7_init_snoop_override_regs(adev);
+
+ mmhub_v1_7_enable_system_domain(adev);
+ mmhub_v1_7_disable_identity_aperture(adev);
+diff --git a/drivers/gpu/drm/amd/amdgpu/mmhub_v1_8.c b/drivers/gpu/drm/amd/amdgpu/mmhub_v1_8.c
+index e646e5cef0a2e7..ce013a715b8644 100644
+--- a/drivers/gpu/drm/amd/amdgpu/mmhub_v1_8.c
++++ b/drivers/gpu/drm/amd/amdgpu/mmhub_v1_8.c
+@@ -213,6 +213,32 @@ static void mmhub_v1_8_init_tlb_regs(struct amdgpu_device *adev)
+ }
+ }
+
++/* Set snoop bit for SDMA so that SDMA writes probe-invalidates RW lines */
++static void mmhub_v1_8_init_snoop_override_regs(struct amdgpu_device *adev)
++{
++ uint32_t tmp, inst_mask;
++ int i, j;
++ uint32_t distance = regDAGB1_WRCLI_GPU_SNOOP_OVERRIDE -
++ regDAGB0_WRCLI_GPU_SNOOP_OVERRIDE;
++
++ inst_mask = adev->aid_mask;
++ for_each_inst(i, inst_mask) {
++ for (j = 0; j < 5; j++) { /* DAGB instances */
++ tmp = RREG32_SOC15_OFFSET(MMHUB, i,
++ regDAGB0_WRCLI_GPU_SNOOP_OVERRIDE, j * distance);
++ tmp |= (1 << 15); /* SDMA client is BIT15 */
++ WREG32_SOC15_OFFSET(MMHUB, i,
++ regDAGB0_WRCLI_GPU_SNOOP_OVERRIDE, j * distance, tmp);
++
++ tmp = RREG32_SOC15_OFFSET(MMHUB, i,
++ regDAGB0_WRCLI_GPU_SNOOP_OVERRIDE_VALUE, j * distance);
++ tmp |= (1 << 15);
++ WREG32_SOC15_OFFSET(MMHUB, i,
++ regDAGB0_WRCLI_GPU_SNOOP_OVERRIDE_VALUE, j * distance, tmp);
++ }
++ }
++}
++
+ static void mmhub_v1_8_init_cache_regs(struct amdgpu_device *adev)
+ {
+ uint32_t tmp, inst_mask;
+@@ -418,6 +444,7 @@ static int mmhub_v1_8_gart_enable(struct amdgpu_device *adev)
+ mmhub_v1_8_init_system_aperture_regs(adev);
+ mmhub_v1_8_init_tlb_regs(adev);
+ mmhub_v1_8_init_cache_regs(adev);
++ mmhub_v1_8_init_snoop_override_regs(adev);
+
+ mmhub_v1_8_enable_system_domain(adev);
+ mmhub_v1_8_disable_identity_aperture(adev);
+diff --git a/drivers/gpu/drm/amd/amdgpu/mmhub_v9_4.c b/drivers/gpu/drm/amd/amdgpu/mmhub_v9_4.c
+index ff1b58e446892a..fe0710b55c3ac7 100644
+--- a/drivers/gpu/drm/amd/amdgpu/mmhub_v9_4.c
++++ b/drivers/gpu/drm/amd/amdgpu/mmhub_v9_4.c
+@@ -198,6 +198,36 @@ static void mmhub_v9_4_init_tlb_regs(struct amdgpu_device *adev, int hubid)
+ hubid * MMHUB_INSTANCE_REGISTER_OFFSET, tmp);
+ }
+
++/* Set snoop bit for SDMA so that SDMA writes probe-invalidates RW lines */
++static void mmhub_v9_4_init_snoop_override_regs(struct amdgpu_device *adev, int hubid)
++{
++ uint32_t tmp;
++ int i;
++ uint32_t distance = mmDAGB1_WRCLI_GPU_SNOOP_OVERRIDE -
++ mmDAGB0_WRCLI_GPU_SNOOP_OVERRIDE;
++ uint32_t huboffset = hubid * MMHUB_INSTANCE_REGISTER_OFFSET;
++
++ for (i = 0; i < 5 - (2 * hubid); i++) {
++ /* DAGB instances 0 to 4 are in hub0 and 5 to 7 are in hub1 */
++ tmp = RREG32_SOC15_OFFSET(MMHUB, 0,
++ mmDAGB0_WRCLI_GPU_SNOOP_OVERRIDE,
++ huboffset + i * distance);
++ tmp |= (1 << 15); /* SDMA client is BIT15 */
++ WREG32_SOC15_OFFSET(MMHUB, 0,
++ mmDAGB0_WRCLI_GPU_SNOOP_OVERRIDE,
++ huboffset + i * distance, tmp);
++
++ tmp = RREG32_SOC15_OFFSET(MMHUB, 0,
++ mmDAGB0_WRCLI_GPU_SNOOP_OVERRIDE_VALUE,
++ huboffset + i * distance);
++ tmp |= (1 << 15);
++ WREG32_SOC15_OFFSET(MMHUB, 0,
++ mmDAGB0_WRCLI_GPU_SNOOP_OVERRIDE_VALUE,
++ huboffset + i * distance, tmp);
++ }
++
++}
++
+ static void mmhub_v9_4_init_cache_regs(struct amdgpu_device *adev, int hubid)
+ {
+ uint32_t tmp;
+@@ -392,6 +422,7 @@ static int mmhub_v9_4_gart_enable(struct amdgpu_device *adev)
+ if (!amdgpu_sriov_vf(adev))
+ mmhub_v9_4_init_cache_regs(adev, i);
+
++ mmhub_v9_4_init_snoop_override_regs(adev, i);
+ mmhub_v9_4_enable_system_domain(adev, i);
+ if (!amdgpu_sriov_vf(adev))
+ mmhub_v9_4_disable_identity_aperture(adev, i);
+diff --git a/drivers/gpu/drm/amd/amdgpu/nv.c b/drivers/gpu/drm/amd/amdgpu/nv.c
+index 95c609317a8d76..efca7dc27d68d9 100644
+--- a/drivers/gpu/drm/amd/amdgpu/nv.c
++++ b/drivers/gpu/drm/amd/amdgpu/nv.c
+@@ -141,23 +141,23 @@ static struct amdgpu_video_codec_info sriov_sc_video_codecs_encode_array[] = {
+ };
+
+ static struct amdgpu_video_codec_info sriov_sc_video_codecs_decode_array_vcn0[] = {
+- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG2, 4096, 4096, 3)},
+- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4, 4096, 4096, 5)},
++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG2, 1920, 1088, 3)},
++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4, 1920, 1088, 5)},
+ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 4096, 52)},
+- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_VC1, 4096, 4096, 4)},
++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_VC1, 1920, 1088, 4)},
+ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_HEVC, 8192, 4352, 186)},
+- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_JPEG, 4096, 4096, 0)},
++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_JPEG, 16384, 16384, 0)},
+ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_VP9, 8192, 4352, 0)},
+ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_AV1, 8192, 4352, 0)},
+ };
+
+ static struct amdgpu_video_codec_info sriov_sc_video_codecs_decode_array_vcn1[] = {
+- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG2, 4096, 4096, 3)},
+- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4, 4096, 4096, 5)},
++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG2, 1920, 1088, 3)},
++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4, 1920, 1088, 5)},
+ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 4096, 52)},
+- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_VC1, 4096, 4096, 4)},
++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_VC1, 1920, 1088, 4)},
+ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_HEVC, 8192, 4352, 186)},
+- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_JPEG, 4096, 4096, 0)},
++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_JPEG, 16384, 16384, 0)},
+ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_VP9, 8192, 4352, 0)},
+ };
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/soc15.c b/drivers/gpu/drm/amd/amdgpu/soc15.c
+index e98fb3fa36a88c..6e09613de8cd27 100644
+--- a/drivers/gpu/drm/amd/amdgpu/soc15.c
++++ b/drivers/gpu/drm/amd/amdgpu/soc15.c
+@@ -604,12 +604,10 @@ soc15_asic_reset_method(struct amdgpu_device *adev)
+ static bool soc15_need_reset_on_resume(struct amdgpu_device *adev)
+ {
+ /* Will reset for the following suspend abort cases.
+- * 1) Only reset on APU side, dGPU hasn't checked yet.
+- * 2) S3 suspend aborted in the normal S3 suspend or
+- * performing pm core test.
++ * 1) S3 suspend aborted in the normal S3 suspend
++ * 2) S3 suspend aborted in performing pm core test.
+ */
+- if (adev->flags & AMD_IS_APU && adev->in_s3 &&
+- !pm_resume_via_firmware())
++ if (adev->in_s3 && !pm_resume_via_firmware())
+ return true;
+ else
+ return false;
+diff --git a/drivers/gpu/drm/amd/amdgpu/soc21.c b/drivers/gpu/drm/amd/amdgpu/soc21.c
+index 62ad67d0b598f5..c66cff399f63de 100644
+--- a/drivers/gpu/drm/amd/amdgpu/soc21.c
++++ b/drivers/gpu/drm/amd/amdgpu/soc21.c
+@@ -117,23 +117,17 @@ static struct amdgpu_video_codecs sriov_vcn_4_0_0_video_codecs_encode_vcn1 = {
+ };
+
+ static struct amdgpu_video_codec_info sriov_vcn_4_0_0_video_codecs_decode_array_vcn0[] = {
+- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG2, 4096, 4096, 3)},
+- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4, 4096, 4096, 5)},
+ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 4096, 52)},
+- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_VC1, 4096, 4096, 4)},
+ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_HEVC, 8192, 4352, 186)},
+- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_JPEG, 4096, 4096, 0)},
++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_JPEG, 16384, 16384, 0)},
+ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_VP9, 8192, 4352, 0)},
+ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_AV1, 8192, 4352, 0)},
+ };
+
+ static struct amdgpu_video_codec_info sriov_vcn_4_0_0_video_codecs_decode_array_vcn1[] = {
+- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG2, 4096, 4096, 3)},
+- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4, 4096, 4096, 5)},
+ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 4096, 52)},
+- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_VC1, 4096, 4096, 4)},
+ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_HEVC, 8192, 4352, 186)},
+- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_JPEG, 4096, 4096, 0)},
++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_JPEG, 16384, 16384, 0)},
+ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_VP9, 8192, 4352, 0)},
+ };
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c
+index a2d1a4b2f03a59..855da1149c5c86 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c
+@@ -31,6 +31,7 @@
+ #include "soc15d.h"
+ #include "soc15_hw_ip.h"
+ #include "vcn_v2_0.h"
++#include "vcn_v4_0_3.h"
+ #include "mmsch_v4_0_3.h"
+
+ #include "vcn/vcn_4_0_3_offset.h"
+@@ -1462,8 +1463,8 @@ static uint64_t vcn_v4_0_3_unified_ring_get_wptr(struct amdgpu_ring *ring)
+ regUVD_RB_WPTR);
+ }
+
+-static void vcn_v4_0_3_enc_ring_emit_reg_wait(struct amdgpu_ring *ring, uint32_t reg,
+- uint32_t val, uint32_t mask)
++void vcn_v4_0_3_enc_ring_emit_reg_wait(struct amdgpu_ring *ring, uint32_t reg,
++ uint32_t val, uint32_t mask)
+ {
+ /* Use normalized offsets when required */
+ if (vcn_v4_0_3_normalizn_reqd(ring->adev))
+@@ -1475,7 +1476,8 @@ static void vcn_v4_0_3_enc_ring_emit_reg_wait(struct amdgpu_ring *ring, uint32_t
+ amdgpu_ring_write(ring, val);
+ }
+
+-static void vcn_v4_0_3_enc_ring_emit_wreg(struct amdgpu_ring *ring, uint32_t reg, uint32_t val)
++void vcn_v4_0_3_enc_ring_emit_wreg(struct amdgpu_ring *ring, uint32_t reg,
++ uint32_t val)
+ {
+ /* Use normalized offsets when required */
+ if (vcn_v4_0_3_normalizn_reqd(ring->adev))
+@@ -1486,8 +1488,8 @@ static void vcn_v4_0_3_enc_ring_emit_wreg(struct amdgpu_ring *ring, uint32_t reg
+ amdgpu_ring_write(ring, val);
+ }
+
+-static void vcn_v4_0_3_enc_ring_emit_vm_flush(struct amdgpu_ring *ring,
+- unsigned int vmid, uint64_t pd_addr)
++void vcn_v4_0_3_enc_ring_emit_vm_flush(struct amdgpu_ring *ring,
++ unsigned int vmid, uint64_t pd_addr)
+ {
+ struct amdgpu_vmhub *hub = &ring->adev->vmhub[ring->vm_hub];
+
+@@ -1499,7 +1501,7 @@ static void vcn_v4_0_3_enc_ring_emit_vm_flush(struct amdgpu_ring *ring,
+ lower_32_bits(pd_addr), 0xffffffff);
+ }
+
+-static void vcn_v4_0_3_ring_emit_hdp_flush(struct amdgpu_ring *ring)
++void vcn_v4_0_3_ring_emit_hdp_flush(struct amdgpu_ring *ring)
+ {
+ /* VCN engine access for HDP flush doesn't work when RRMT is enabled.
+ * This is a workaround to avoid any HDP flush through VCN ring.
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.h b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.h
+index 0b046114373ae2..03572a1d0c9cb7 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.h
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.h
+@@ -26,4 +26,13 @@
+
+ extern const struct amdgpu_ip_block_version vcn_v4_0_3_ip_block;
+
++void vcn_v4_0_3_enc_ring_emit_reg_wait(struct amdgpu_ring *ring, uint32_t reg,
++ uint32_t val, uint32_t mask);
++
++void vcn_v4_0_3_enc_ring_emit_wreg(struct amdgpu_ring *ring, uint32_t reg,
++ uint32_t val);
++void vcn_v4_0_3_enc_ring_emit_vm_flush(struct amdgpu_ring *ring,
++ unsigned int vmid, uint64_t pd_addr);
++void vcn_v4_0_3_ring_emit_hdp_flush(struct amdgpu_ring *ring);
++
+ #endif /* __VCN_V4_0_3_H__ */
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c
+index d2dfdb141b2456..c9761d27fd6127 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c
+@@ -983,6 +983,10 @@ static int vcn_v4_0_5_start_dpg_mode(struct amdgpu_device *adev, int inst_idx, b
+ ring->doorbell_index << VCN_RB1_DB_CTRL__OFFSET__SHIFT |
+ VCN_RB1_DB_CTRL__EN_MASK);
+
++ /* Keeping one read-back to ensure all register writes are done, otherwise
++ * it may introduce race conditions */
++ RREG32_SOC15(VCN, inst_idx, regVCN_RB1_DB_CTRL);
++
+ return 0;
+ }
+
+@@ -991,183 +995,182 @@ static int vcn_v4_0_5_start_dpg_mode(struct amdgpu_device *adev, int inst_idx, b
+ * vcn_v4_0_5_start - VCN start
+ *
+ * @adev: amdgpu_device pointer
++ * @i: instance to start
+ *
+ * Start VCN block
+ */
+-static int vcn_v4_0_5_start(struct amdgpu_device *adev)
++static int vcn_v4_0_5_start(struct amdgpu_device *adev, int i)
+ {
+ volatile struct amdgpu_vcn4_fw_shared *fw_shared;
+ struct amdgpu_ring *ring;
+ uint32_t tmp;
+- int i, j, k, r;
++ int j, k, r;
+
+- for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
+- if (adev->pm.dpm_enabled)
+- amdgpu_dpm_enable_vcn(adev, true, i);
+- }
++ if (adev->vcn.harvest_config & (1 << i))
++ return 0;
+
+- for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
+- if (adev->vcn.harvest_config & (1 << i))
+- continue;
++ if (adev->pm.dpm_enabled)
++ amdgpu_dpm_enable_vcn(adev, true, i);
+
+- fw_shared = adev->vcn.inst[i].fw_shared.cpu_addr;
++ fw_shared = adev->vcn.inst[i].fw_shared.cpu_addr;
+
+- if (adev->pg_flags & AMD_PG_SUPPORT_VCN_DPG) {
+- r = vcn_v4_0_5_start_dpg_mode(adev, i, adev->vcn.indirect_sram);
+- continue;
+- }
++ if (adev->pg_flags & AMD_PG_SUPPORT_VCN_DPG)
++ return vcn_v4_0_5_start_dpg_mode(adev, i, adev->vcn.indirect_sram);
+
+- /* disable VCN power gating */
+- vcn_v4_0_5_disable_static_power_gating(adev, i);
+-
+- /* set VCN status busy */
+- tmp = RREG32_SOC15(VCN, i, regUVD_STATUS) | UVD_STATUS__UVD_BUSY;
+- WREG32_SOC15(VCN, i, regUVD_STATUS, tmp);
+-
+- /*SW clock gating */
+- vcn_v4_0_5_disable_clock_gating(adev, i);
+-
+- /* enable VCPU clock */
+- WREG32_P(SOC15_REG_OFFSET(VCN, i, regUVD_VCPU_CNTL),
+- UVD_VCPU_CNTL__CLK_EN_MASK, ~UVD_VCPU_CNTL__CLK_EN_MASK);
+-
+- /* disable master interrupt */
+- WREG32_P(SOC15_REG_OFFSET(VCN, i, regUVD_MASTINT_EN), 0,
+- ~UVD_MASTINT_EN__VCPU_EN_MASK);
+-
+- /* enable LMI MC and UMC channels */
+- WREG32_P(SOC15_REG_OFFSET(VCN, i, regUVD_LMI_CTRL2), 0,
+- ~UVD_LMI_CTRL2__STALL_ARB_UMC_MASK);
+-
+- tmp = RREG32_SOC15(VCN, i, regUVD_SOFT_RESET);
+- tmp &= ~UVD_SOFT_RESET__LMI_SOFT_RESET_MASK;
+- tmp &= ~UVD_SOFT_RESET__LMI_UMC_SOFT_RESET_MASK;
+- WREG32_SOC15(VCN, i, regUVD_SOFT_RESET, tmp);
+-
+- /* setup regUVD_LMI_CTRL */
+- tmp = RREG32_SOC15(VCN, i, regUVD_LMI_CTRL);
+- WREG32_SOC15(VCN, i, regUVD_LMI_CTRL, tmp |
+- UVD_LMI_CTRL__WRITE_CLEAN_TIMER_EN_MASK |
+- UVD_LMI_CTRL__MASK_MC_URGENT_MASK |
+- UVD_LMI_CTRL__DATA_COHERENCY_EN_MASK |
+- UVD_LMI_CTRL__VCPU_DATA_COHERENCY_EN_MASK);
+-
+- /* setup regUVD_MPC_CNTL */
+- tmp = RREG32_SOC15(VCN, i, regUVD_MPC_CNTL);
+- tmp &= ~UVD_MPC_CNTL__REPLACEMENT_MODE_MASK;
+- tmp |= 0x2 << UVD_MPC_CNTL__REPLACEMENT_MODE__SHIFT;
+- WREG32_SOC15(VCN, i, regUVD_MPC_CNTL, tmp);
+-
+- /* setup UVD_MPC_SET_MUXA0 */
+- WREG32_SOC15(VCN, i, regUVD_MPC_SET_MUXA0,
+- ((0x1 << UVD_MPC_SET_MUXA0__VARA_1__SHIFT) |
+- (0x2 << UVD_MPC_SET_MUXA0__VARA_2__SHIFT) |
+- (0x3 << UVD_MPC_SET_MUXA0__VARA_3__SHIFT) |
+- (0x4 << UVD_MPC_SET_MUXA0__VARA_4__SHIFT)));
+-
+- /* setup UVD_MPC_SET_MUXB0 */
+- WREG32_SOC15(VCN, i, regUVD_MPC_SET_MUXB0,
+- ((0x1 << UVD_MPC_SET_MUXB0__VARB_1__SHIFT) |
+- (0x2 << UVD_MPC_SET_MUXB0__VARB_2__SHIFT) |
+- (0x3 << UVD_MPC_SET_MUXB0__VARB_3__SHIFT) |
+- (0x4 << UVD_MPC_SET_MUXB0__VARB_4__SHIFT)));
+-
+- /* setup UVD_MPC_SET_MUX */
+- WREG32_SOC15(VCN, i, regUVD_MPC_SET_MUX,
+- ((0x0 << UVD_MPC_SET_MUX__SET_0__SHIFT) |
+- (0x1 << UVD_MPC_SET_MUX__SET_1__SHIFT) |
+- (0x2 << UVD_MPC_SET_MUX__SET_2__SHIFT)));
+-
+- vcn_v4_0_5_mc_resume(adev, i);
+-
+- /* VCN global tiling registers */
+- WREG32_SOC15(VCN, i, regUVD_GFX10_ADDR_CONFIG,
+- adev->gfx.config.gb_addr_config);
+-
+- /* unblock VCPU register access */
+- WREG32_P(SOC15_REG_OFFSET(VCN, i, regUVD_RB_ARB_CTRL), 0,
+- ~UVD_RB_ARB_CTRL__VCPU_DIS_MASK);
+-
+- /* release VCPU reset to boot */
+- WREG32_P(SOC15_REG_OFFSET(VCN, i, regUVD_VCPU_CNTL), 0,
+- ~UVD_VCPU_CNTL__BLK_RST_MASK);
+-
+- for (j = 0; j < 10; ++j) {
+- uint32_t status;
+-
+- for (k = 0; k < 100; ++k) {
+- status = RREG32_SOC15(VCN, i, regUVD_STATUS);
+- if (status & 2)
+- break;
+- mdelay(10);
+- if (amdgpu_emu_mode == 1)
+- msleep(1);
+- }
++ /* disable VCN power gating */
++ vcn_v4_0_5_disable_static_power_gating(adev, i);
++
++ /* set VCN status busy */
++ tmp = RREG32_SOC15(VCN, i, regUVD_STATUS) | UVD_STATUS__UVD_BUSY;
++ WREG32_SOC15(VCN, i, regUVD_STATUS, tmp);
++
++ /* SW clock gating */
++ vcn_v4_0_5_disable_clock_gating(adev, i);
++
++ /* enable VCPU clock */
++ WREG32_P(SOC15_REG_OFFSET(VCN, i, regUVD_VCPU_CNTL),
++ UVD_VCPU_CNTL__CLK_EN_MASK, ~UVD_VCPU_CNTL__CLK_EN_MASK);
++
++ /* disable master interrupt */
++ WREG32_P(SOC15_REG_OFFSET(VCN, i, regUVD_MASTINT_EN), 0,
++ ~UVD_MASTINT_EN__VCPU_EN_MASK);
+
+- if (amdgpu_emu_mode == 1) {
+- r = -1;
+- if (status & 2) {
+- r = 0;
+- break;
+- }
+- } else {
++ /* enable LMI MC and UMC channels */
++ WREG32_P(SOC15_REG_OFFSET(VCN, i, regUVD_LMI_CTRL2), 0,
++ ~UVD_LMI_CTRL2__STALL_ARB_UMC_MASK);
++
++ tmp = RREG32_SOC15(VCN, i, regUVD_SOFT_RESET);
++ tmp &= ~UVD_SOFT_RESET__LMI_SOFT_RESET_MASK;
++ tmp &= ~UVD_SOFT_RESET__LMI_UMC_SOFT_RESET_MASK;
++ WREG32_SOC15(VCN, i, regUVD_SOFT_RESET, tmp);
++
++ /* setup regUVD_LMI_CTRL */
++ tmp = RREG32_SOC15(VCN, i, regUVD_LMI_CTRL);
++ WREG32_SOC15(VCN, i, regUVD_LMI_CTRL, tmp |
++ UVD_LMI_CTRL__WRITE_CLEAN_TIMER_EN_MASK |
++ UVD_LMI_CTRL__MASK_MC_URGENT_MASK |
++ UVD_LMI_CTRL__DATA_COHERENCY_EN_MASK |
++ UVD_LMI_CTRL__VCPU_DATA_COHERENCY_EN_MASK);
++
++ /* setup regUVD_MPC_CNTL */
++ tmp = RREG32_SOC15(VCN, i, regUVD_MPC_CNTL);
++ tmp &= ~UVD_MPC_CNTL__REPLACEMENT_MODE_MASK;
++ tmp |= 0x2 << UVD_MPC_CNTL__REPLACEMENT_MODE__SHIFT;
++ WREG32_SOC15(VCN, i, regUVD_MPC_CNTL, tmp);
++
++ /* setup UVD_MPC_SET_MUXA0 */
++ WREG32_SOC15(VCN, i, regUVD_MPC_SET_MUXA0,
++ ((0x1 << UVD_MPC_SET_MUXA0__VARA_1__SHIFT) |
++ (0x2 << UVD_MPC_SET_MUXA0__VARA_2__SHIFT) |
++ (0x3 << UVD_MPC_SET_MUXA0__VARA_3__SHIFT) |
++ (0x4 << UVD_MPC_SET_MUXA0__VARA_4__SHIFT)));
++
++ /* setup UVD_MPC_SET_MUXB0 */
++ WREG32_SOC15(VCN, i, regUVD_MPC_SET_MUXB0,
++ ((0x1 << UVD_MPC_SET_MUXB0__VARB_1__SHIFT) |
++ (0x2 << UVD_MPC_SET_MUXB0__VARB_2__SHIFT) |
++ (0x3 << UVD_MPC_SET_MUXB0__VARB_3__SHIFT) |
++ (0x4 << UVD_MPC_SET_MUXB0__VARB_4__SHIFT)));
++
++ /* setup UVD_MPC_SET_MUX */
++ WREG32_SOC15(VCN, i, regUVD_MPC_SET_MUX,
++ ((0x0 << UVD_MPC_SET_MUX__SET_0__SHIFT) |
++ (0x1 << UVD_MPC_SET_MUX__SET_1__SHIFT) |
++ (0x2 << UVD_MPC_SET_MUX__SET_2__SHIFT)));
++
++ vcn_v4_0_5_mc_resume(adev, i);
++
++ /* VCN global tiling registers */
++ WREG32_SOC15(VCN, i, regUVD_GFX10_ADDR_CONFIG,
++ adev->gfx.config.gb_addr_config);
++
++ /* unblock VCPU register access */
++ WREG32_P(SOC15_REG_OFFSET(VCN, i, regUVD_RB_ARB_CTRL), 0,
++ ~UVD_RB_ARB_CTRL__VCPU_DIS_MASK);
++
++ /* release VCPU reset to boot */
++ WREG32_P(SOC15_REG_OFFSET(VCN, i, regUVD_VCPU_CNTL), 0,
++ ~UVD_VCPU_CNTL__BLK_RST_MASK);
++
++ for (j = 0; j < 10; ++j) {
++ uint32_t status;
++
++ for (k = 0; k < 100; ++k) {
++ status = RREG32_SOC15(VCN, i, regUVD_STATUS);
++ if (status & 2)
++ break;
++ mdelay(10);
++ if (amdgpu_emu_mode == 1)
++ msleep(1);
++ }
++
++ if (amdgpu_emu_mode == 1) {
++ r = -1;
++ if (status & 2) {
+ r = 0;
+- if (status & 2)
+- break;
+-
+- dev_err(adev->dev,
+- "VCN[%d] is not responding, trying to reset VCPU!!!\n", i);
+- WREG32_P(SOC15_REG_OFFSET(VCN, i, regUVD_VCPU_CNTL),
+- UVD_VCPU_CNTL__BLK_RST_MASK,
+- ~UVD_VCPU_CNTL__BLK_RST_MASK);
+- mdelay(10);
+- WREG32_P(SOC15_REG_OFFSET(VCN, i, regUVD_VCPU_CNTL), 0,
+- ~UVD_VCPU_CNTL__BLK_RST_MASK);
+-
+- mdelay(10);
+- r = -1;
++ break;
+ }
++ } else {
++ r = 0;
++ if (status & 2)
++ break;
++
++ dev_err(adev->dev,
++ "VCN[%d] is not responding, trying to reset VCPU!!!\n", i);
++ WREG32_P(SOC15_REG_OFFSET(VCN, i, regUVD_VCPU_CNTL),
++ UVD_VCPU_CNTL__BLK_RST_MASK,
++ ~UVD_VCPU_CNTL__BLK_RST_MASK);
++ mdelay(10);
++ WREG32_P(SOC15_REG_OFFSET(VCN, i, regUVD_VCPU_CNTL), 0,
++ ~UVD_VCPU_CNTL__BLK_RST_MASK);
++
++ mdelay(10);
++ r = -1;
+ }
++ }
+
+- if (r) {
+- dev_err(adev->dev, "VCN[%d] is not responding, giving up!!!\n", i);
+- return r;
+- }
++ if (r) {
++ dev_err(adev->dev, "VCN[%d] is not responding, giving up!!!\n", i);
++ return r;
++ }
+
+- /* enable master interrupt */
+- WREG32_P(SOC15_REG_OFFSET(VCN, i, regUVD_MASTINT_EN),
+- UVD_MASTINT_EN__VCPU_EN_MASK,
+- ~UVD_MASTINT_EN__VCPU_EN_MASK);
++ /* enable master interrupt */
++ WREG32_P(SOC15_REG_OFFSET(VCN, i, regUVD_MASTINT_EN),
++ UVD_MASTINT_EN__VCPU_EN_MASK,
++ ~UVD_MASTINT_EN__VCPU_EN_MASK);
+
+- /* clear the busy bit of VCN_STATUS */
+- WREG32_P(SOC15_REG_OFFSET(VCN, i, regUVD_STATUS), 0,
+- ~(2 << UVD_STATUS__VCPU_REPORT__SHIFT));
++ /* clear the busy bit of VCN_STATUS */
++ WREG32_P(SOC15_REG_OFFSET(VCN, i, regUVD_STATUS), 0,
++ ~(2 << UVD_STATUS__VCPU_REPORT__SHIFT));
+
+- ring = &adev->vcn.inst[i].ring_enc[0];
+- WREG32_SOC15(VCN, i, regVCN_RB1_DB_CTRL,
+- ring->doorbell_index << VCN_RB1_DB_CTRL__OFFSET__SHIFT |
+- VCN_RB1_DB_CTRL__EN_MASK);
+-
+- WREG32_SOC15(VCN, i, regUVD_RB_BASE_LO, ring->gpu_addr);
+- WREG32_SOC15(VCN, i, regUVD_RB_BASE_HI, upper_32_bits(ring->gpu_addr));
+- WREG32_SOC15(VCN, i, regUVD_RB_SIZE, ring->ring_size / 4);
+-
+- tmp = RREG32_SOC15(VCN, i, regVCN_RB_ENABLE);
+- tmp &= ~(VCN_RB_ENABLE__RB1_EN_MASK);
+- WREG32_SOC15(VCN, i, regVCN_RB_ENABLE, tmp);
+- fw_shared->sq.queue_mode |= FW_QUEUE_RING_RESET;
+- WREG32_SOC15(VCN, i, regUVD_RB_RPTR, 0);
+- WREG32_SOC15(VCN, i, regUVD_RB_WPTR, 0);
+-
+- tmp = RREG32_SOC15(VCN, i, regUVD_RB_RPTR);
+- WREG32_SOC15(VCN, i, regUVD_RB_WPTR, tmp);
+- ring->wptr = RREG32_SOC15(VCN, i, regUVD_RB_WPTR);
+-
+- tmp = RREG32_SOC15(VCN, i, regVCN_RB_ENABLE);
+- tmp |= VCN_RB_ENABLE__RB1_EN_MASK;
+- WREG32_SOC15(VCN, i, regVCN_RB_ENABLE, tmp);
+- fw_shared->sq.queue_mode &= ~(FW_QUEUE_RING_RESET | FW_QUEUE_DPG_HOLD_OFF);
+- }
++ ring = &adev->vcn.inst[i].ring_enc[0];
++ WREG32_SOC15(VCN, i, regVCN_RB1_DB_CTRL,
++ ring->doorbell_index << VCN_RB1_DB_CTRL__OFFSET__SHIFT |
++ VCN_RB1_DB_CTRL__EN_MASK);
++
++ WREG32_SOC15(VCN, i, regUVD_RB_BASE_LO, ring->gpu_addr);
++ WREG32_SOC15(VCN, i, regUVD_RB_BASE_HI, upper_32_bits(ring->gpu_addr));
++ WREG32_SOC15(VCN, i, regUVD_RB_SIZE, ring->ring_size / 4);
++
++ tmp = RREG32_SOC15(VCN, i, regVCN_RB_ENABLE);
++ tmp &= ~(VCN_RB_ENABLE__RB1_EN_MASK);
++ WREG32_SOC15(VCN, i, regVCN_RB_ENABLE, tmp);
++ fw_shared->sq.queue_mode |= FW_QUEUE_RING_RESET;
++ WREG32_SOC15(VCN, i, regUVD_RB_RPTR, 0);
++ WREG32_SOC15(VCN, i, regUVD_RB_WPTR, 0);
++
++ tmp = RREG32_SOC15(VCN, i, regUVD_RB_RPTR);
++ WREG32_SOC15(VCN, i, regUVD_RB_WPTR, tmp);
++ ring->wptr = RREG32_SOC15(VCN, i, regUVD_RB_WPTR);
++
++ tmp = RREG32_SOC15(VCN, i, regVCN_RB_ENABLE);
++ tmp |= VCN_RB_ENABLE__RB1_EN_MASK;
++ WREG32_SOC15(VCN, i, regVCN_RB_ENABLE, tmp);
++ fw_shared->sq.queue_mode &= ~(FW_QUEUE_RING_RESET | FW_QUEUE_DPG_HOLD_OFF);
++
++ /* Keeping one read-back to ensure all register writes are done, otherwise
++ * it may introduce race conditions */
++ RREG32_SOC15(VCN, i, regVCN_RB_ENABLE);
+
+ return 0;
+ }
+@@ -1204,88 +1207,87 @@ static void vcn_v4_0_5_stop_dpg_mode(struct amdgpu_device *adev, int inst_idx)
+ * vcn_v4_0_5_stop - VCN stop
+ *
+ * @adev: amdgpu_device pointer
++ * @i: instance to stop
+ *
+ * Stop VCN block
+ */
+-static int vcn_v4_0_5_stop(struct amdgpu_device *adev)
++static int vcn_v4_0_5_stop(struct amdgpu_device *adev, int i)
+ {
+ volatile struct amdgpu_vcn4_fw_shared *fw_shared;
+ uint32_t tmp;
+- int i, r = 0;
+-
+- for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
+- if (adev->vcn.harvest_config & (1 << i))
+- continue;
++ int r = 0;
+
+- fw_shared = adev->vcn.inst[i].fw_shared.cpu_addr;
+- fw_shared->sq.queue_mode |= FW_QUEUE_DPG_HOLD_OFF;
++ if (adev->vcn.harvest_config & (1 << i))
++ return 0;
+
+- if (adev->pg_flags & AMD_PG_SUPPORT_VCN_DPG) {
+- vcn_v4_0_5_stop_dpg_mode(adev, i);
+- continue;
+- }
++ fw_shared = adev->vcn.inst[i].fw_shared.cpu_addr;
++ fw_shared->sq.queue_mode |= FW_QUEUE_DPG_HOLD_OFF;
+
+- /* wait for vcn idle */
+- r = SOC15_WAIT_ON_RREG(VCN, i, regUVD_STATUS, UVD_STATUS__IDLE, 0x7);
+- if (r)
+- return r;
++ if (adev->pg_flags & AMD_PG_SUPPORT_VCN_DPG) {
++ vcn_v4_0_5_stop_dpg_mode(adev, i);
++ r = 0;
++ goto done;
++ }
+
+- tmp = UVD_LMI_STATUS__VCPU_LMI_WRITE_CLEAN_MASK |
+- UVD_LMI_STATUS__READ_CLEAN_MASK |
+- UVD_LMI_STATUS__WRITE_CLEAN_MASK |
+- UVD_LMI_STATUS__WRITE_CLEAN_RAW_MASK;
+- r = SOC15_WAIT_ON_RREG(VCN, i, regUVD_LMI_STATUS, tmp, tmp);
+- if (r)
+- return r;
++ /* wait for vcn idle */
++ r = SOC15_WAIT_ON_RREG(VCN, i, regUVD_STATUS, UVD_STATUS__IDLE, 0x7);
++ if (r)
++ goto done;
+
+- /* disable LMI UMC channel */
+- tmp = RREG32_SOC15(VCN, i, regUVD_LMI_CTRL2);
+- tmp |= UVD_LMI_CTRL2__STALL_ARB_UMC_MASK;
+- WREG32_SOC15(VCN, i, regUVD_LMI_CTRL2, tmp);
+- tmp = UVD_LMI_STATUS__UMC_READ_CLEAN_RAW_MASK |
+- UVD_LMI_STATUS__UMC_WRITE_CLEAN_RAW_MASK;
+- r = SOC15_WAIT_ON_RREG(VCN, i, regUVD_LMI_STATUS, tmp, tmp);
+- if (r)
+- return r;
++ tmp = UVD_LMI_STATUS__VCPU_LMI_WRITE_CLEAN_MASK |
++ UVD_LMI_STATUS__READ_CLEAN_MASK |
++ UVD_LMI_STATUS__WRITE_CLEAN_MASK |
++ UVD_LMI_STATUS__WRITE_CLEAN_RAW_MASK;
++ r = SOC15_WAIT_ON_RREG(VCN, i, regUVD_LMI_STATUS, tmp, tmp);
++ if (r)
++ goto done;
++
++ /* disable LMI UMC channel */
++ tmp = RREG32_SOC15(VCN, i, regUVD_LMI_CTRL2);
++ tmp |= UVD_LMI_CTRL2__STALL_ARB_UMC_MASK;
++ WREG32_SOC15(VCN, i, regUVD_LMI_CTRL2, tmp);
++ tmp = UVD_LMI_STATUS__UMC_READ_CLEAN_RAW_MASK |
++ UVD_LMI_STATUS__UMC_WRITE_CLEAN_RAW_MASK;
++ r = SOC15_WAIT_ON_RREG(VCN, i, regUVD_LMI_STATUS, tmp, tmp);
++ if (r)
++ goto done;
+
+- /* block VCPU register access */
+- WREG32_P(SOC15_REG_OFFSET(VCN, i, regUVD_RB_ARB_CTRL),
+- UVD_RB_ARB_CTRL__VCPU_DIS_MASK,
+- ~UVD_RB_ARB_CTRL__VCPU_DIS_MASK);
++ /* block VCPU register access */
++ WREG32_P(SOC15_REG_OFFSET(VCN, i, regUVD_RB_ARB_CTRL),
++ UVD_RB_ARB_CTRL__VCPU_DIS_MASK,
++ ~UVD_RB_ARB_CTRL__VCPU_DIS_MASK);
+
+- /* reset VCPU */
+- WREG32_P(SOC15_REG_OFFSET(VCN, i, regUVD_VCPU_CNTL),
+- UVD_VCPU_CNTL__BLK_RST_MASK,
+- ~UVD_VCPU_CNTL__BLK_RST_MASK);
++ /* reset VCPU */
++ WREG32_P(SOC15_REG_OFFSET(VCN, i, regUVD_VCPU_CNTL),
++ UVD_VCPU_CNTL__BLK_RST_MASK,
++ ~UVD_VCPU_CNTL__BLK_RST_MASK);
+
+- /* disable VCPU clock */
+- WREG32_P(SOC15_REG_OFFSET(VCN, i, regUVD_VCPU_CNTL), 0,
+- ~(UVD_VCPU_CNTL__CLK_EN_MASK));
++ /* disable VCPU clock */
++ WREG32_P(SOC15_REG_OFFSET(VCN, i, regUVD_VCPU_CNTL), 0,
++ ~(UVD_VCPU_CNTL__CLK_EN_MASK));
+
+- /* apply soft reset */
+- tmp = RREG32_SOC15(VCN, i, regUVD_SOFT_RESET);
+- tmp |= UVD_SOFT_RESET__LMI_UMC_SOFT_RESET_MASK;
+- WREG32_SOC15(VCN, i, regUVD_SOFT_RESET, tmp);
+- tmp = RREG32_SOC15(VCN, i, regUVD_SOFT_RESET);
+- tmp |= UVD_SOFT_RESET__LMI_SOFT_RESET_MASK;
+- WREG32_SOC15(VCN, i, regUVD_SOFT_RESET, tmp);
++ /* apply soft reset */
++ tmp = RREG32_SOC15(VCN, i, regUVD_SOFT_RESET);
++ tmp |= UVD_SOFT_RESET__LMI_UMC_SOFT_RESET_MASK;
++ WREG32_SOC15(VCN, i, regUVD_SOFT_RESET, tmp);
++ tmp = RREG32_SOC15(VCN, i, regUVD_SOFT_RESET);
++ tmp |= UVD_SOFT_RESET__LMI_SOFT_RESET_MASK;
++ WREG32_SOC15(VCN, i, regUVD_SOFT_RESET, tmp);
+
+- /* clear status */
+- WREG32_SOC15(VCN, i, regUVD_STATUS, 0);
++ /* clear status */
++ WREG32_SOC15(VCN, i, regUVD_STATUS, 0);
+
+- /* apply HW clock gating */
+- vcn_v4_0_5_enable_clock_gating(adev, i);
++ /* apply HW clock gating */
++ vcn_v4_0_5_enable_clock_gating(adev, i);
+
+- /* enable VCN power gating */
+- vcn_v4_0_5_enable_static_power_gating(adev, i);
+- }
++ /* enable VCN power gating */
++ vcn_v4_0_5_enable_static_power_gating(adev, i);
+
+- for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
+- if (adev->pm.dpm_enabled)
+- amdgpu_dpm_enable_vcn(adev, false, i);
+- }
++done:
++ if (adev->pm.dpm_enabled)
++ amdgpu_dpm_enable_vcn(adev, false, i);
+
+- return 0;
++ return r;
+ }
+
+ /**
+@@ -1537,15 +1539,17 @@ static int vcn_v4_0_5_set_powergating_state(struct amdgpu_ip_block *ip_block,
+ enum amd_powergating_state state)
+ {
+ struct amdgpu_device *adev = ip_block->adev;
+- int ret;
++ int ret = 0, i;
+
+ if (state == adev->vcn.cur_state)
+ return 0;
+
+- if (state == AMD_PG_STATE_GATE)
+- ret = vcn_v4_0_5_stop(adev);
+- else
+- ret = vcn_v4_0_5_start(adev);
++ for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
++ if (state == AMD_PG_STATE_GATE)
++ ret |= vcn_v4_0_5_stop(adev, i);
++ else
++ ret |= vcn_v4_0_5_start(adev, i);
++ }
+
+ if (!ret)
+ adev->vcn.cur_state = state;
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_1.c b/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_1.c
+index 8b0b3739a53776..f893a84282832e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_1.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_1.c
+@@ -29,6 +29,7 @@
+ #include "soc15d.h"
+ #include "soc15_hw_ip.h"
+ #include "vcn_v2_0.h"
++#include "vcn_v4_0_3.h"
+
+ #include "vcn/vcn_5_0_0_offset.h"
+ #include "vcn/vcn_5_0_0_sh_mask.h"
+@@ -65,6 +66,22 @@ static int vcn_v5_0_1_early_init(struct amdgpu_ip_block *ip_block)
+ return amdgpu_vcn_early_init(adev);
+ }
+
++static void vcn_v5_0_1_fw_shared_init(struct amdgpu_device *adev, int inst_idx)
++{
++ struct amdgpu_vcn5_fw_shared *fw_shared;
++
++ fw_shared = adev->vcn.inst[inst_idx].fw_shared.cpu_addr;
++
++ if (fw_shared->sq.is_enabled)
++ return;
++ fw_shared->present_flag_0 =
++ cpu_to_le32(AMDGPU_FW_SHARED_FLAG_0_UNIFIED_QUEUE);
++ fw_shared->sq.is_enabled = 1;
++
++ if (amdgpu_vcnfw_log)
++ amdgpu_vcn_fwlog_init(&adev->vcn.inst[inst_idx]);
++}
++
+ /**
+ * vcn_v5_0_1_sw_init - sw init for VCN block
+ *
+@@ -95,8 +112,6 @@ static int vcn_v5_0_1_sw_init(struct amdgpu_ip_block *ip_block)
+ return r;
+
+ for (i = 0; i < adev->vcn.num_vcn_inst; i++) {
+- volatile struct amdgpu_vcn5_fw_shared *fw_shared;
+-
+ vcn_inst = GET_INST(VCN, i);
+
+ ring = &adev->vcn.inst[i].ring_enc[0];
+@@ -111,12 +126,7 @@ static int vcn_v5_0_1_sw_init(struct amdgpu_ip_block *ip_block)
+ if (r)
+ return r;
+
+- fw_shared = adev->vcn.inst[i].fw_shared.cpu_addr;
+- fw_shared->present_flag_0 = cpu_to_le32(AMDGPU_FW_SHARED_FLAG_0_UNIFIED_QUEUE);
+- fw_shared->sq.is_enabled = true;
+-
+- if (amdgpu_vcnfw_log)
+- amdgpu_vcn_fwlog_init(&adev->vcn.inst[i]);
++ vcn_v5_0_1_fw_shared_init(adev, i);
+ }
+
+ /* TODO: Add queue reset mask when FW fully supports it */
+@@ -188,6 +198,9 @@ static int vcn_v5_0_1_hw_init(struct amdgpu_ip_block *ip_block)
+ 9 * vcn_inst),
+ adev->vcn.inst[i].aid_id);
+
++ /* Re-init fw_shared, if required */
++ vcn_v5_0_1_fw_shared_init(adev, i);
++
+ r = amdgpu_ring_test_helper(ring);
+ if (r)
+ return r;
+@@ -893,16 +906,17 @@ static const struct amdgpu_ring_funcs vcn_v5_0_1_unified_ring_vm_funcs = {
+ .get_rptr = vcn_v5_0_1_unified_ring_get_rptr,
+ .get_wptr = vcn_v5_0_1_unified_ring_get_wptr,
+ .set_wptr = vcn_v5_0_1_unified_ring_set_wptr,
+- .emit_frame_size =
+- SOC15_FLUSH_GPU_TLB_NUM_WREG * 3 +
+- SOC15_FLUSH_GPU_TLB_NUM_REG_WAIT * 4 +
+- 4 + /* vcn_v2_0_enc_ring_emit_vm_flush */
+- 5 + 5 + /* vcn_v2_0_enc_ring_emit_fence x2 vm fence */
+- 1, /* vcn_v2_0_enc_ring_insert_end */
++ .emit_frame_size = SOC15_FLUSH_GPU_TLB_NUM_WREG * 3 +
++ SOC15_FLUSH_GPU_TLB_NUM_REG_WAIT * 4 +
++ 4 + /* vcn_v2_0_enc_ring_emit_vm_flush */
++ 5 +
++ 5 + /* vcn_v2_0_enc_ring_emit_fence x2 vm fence */
++ 1, /* vcn_v2_0_enc_ring_insert_end */
+ .emit_ib_size = 5, /* vcn_v2_0_enc_ring_emit_ib */
+ .emit_ib = vcn_v2_0_enc_ring_emit_ib,
+ .emit_fence = vcn_v2_0_enc_ring_emit_fence,
+- .emit_vm_flush = vcn_v2_0_enc_ring_emit_vm_flush,
++ .emit_vm_flush = vcn_v4_0_3_enc_ring_emit_vm_flush,
++ .emit_hdp_flush = vcn_v4_0_3_ring_emit_hdp_flush,
+ .test_ring = amdgpu_vcn_enc_ring_test_ring,
+ .test_ib = amdgpu_vcn_unified_ring_test_ib,
+ .insert_nop = amdgpu_ring_insert_nop,
+@@ -910,8 +924,8 @@ static const struct amdgpu_ring_funcs vcn_v5_0_1_unified_ring_vm_funcs = {
+ .pad_ib = amdgpu_ring_generic_pad_ib,
+ .begin_use = amdgpu_vcn_ring_begin_use,
+ .end_use = amdgpu_vcn_ring_end_use,
+- .emit_wreg = vcn_v2_0_enc_ring_emit_wreg,
+- .emit_reg_wait = vcn_v2_0_enc_ring_emit_reg_wait,
++ .emit_wreg = vcn_v4_0_3_enc_ring_emit_wreg,
++ .emit_reg_wait = vcn_v4_0_3_enc_ring_emit_reg_wait,
+ .emit_reg_write_reg_wait = amdgpu_ring_emit_reg_write_reg_wait_helper,
+ };
+
+diff --git a/drivers/gpu/drm/amd/amdkfd/cik_event_interrupt.c b/drivers/gpu/drm/amd/amdkfd/cik_event_interrupt.c
+index 795382b55e0a91..981d9adcc5e1d2 100644
+--- a/drivers/gpu/drm/amd/amdkfd/cik_event_interrupt.c
++++ b/drivers/gpu/drm/amd/amdkfd/cik_event_interrupt.c
+@@ -107,20 +107,30 @@ static void cik_event_interrupt_wq(struct kfd_node *dev,
+ kfd_signal_hw_exception_event(pasid);
+ else if (ihre->source_id == CIK_INTSRC_GFX_PAGE_INV_FAULT ||
+ ihre->source_id == CIK_INTSRC_GFX_MEM_PROT_FAULT) {
++ struct kfd_process_device *pdd = NULL;
+ struct kfd_vm_fault_info info;
++ struct kfd_process *p;
+
+ kfd_smi_event_update_vmfault(dev, pasid);
+- kfd_dqm_evict_pasid(dev->dqm, pasid);
++ p = kfd_lookup_process_by_pasid(pasid, &pdd);
++ if (!pdd)
++ return;
++
++ kfd_evict_process_device(pdd);
+
+ memset(&info, 0, sizeof(info));
+ amdgpu_amdkfd_gpuvm_get_vm_fault_info(dev->adev, &info);
+- if (!info.page_addr && !info.status)
++ if (!info.page_addr && !info.status) {
++ kfd_unref_process(p);
+ return;
++ }
+
+ if (info.vmid == vmid)
+- kfd_signal_vm_fault_event(dev, pasid, &info, NULL);
++ kfd_signal_vm_fault_event(pdd, &info, NULL);
+ else
+- kfd_signal_vm_fault_event(dev, pasid, NULL, NULL);
++ kfd_signal_vm_fault_event(pdd, &info, NULL);
++
++ kfd_unref_process(p);
+ }
+ }
+
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
+index 33df35cab46791..8c2e92378b491e 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
+@@ -155,8 +155,8 @@ static int kfd_open(struct inode *inode, struct file *filep)
+ /* filep now owns the reference returned by kfd_create_process */
+ filep->private_data = process;
+
+- dev_dbg(kfd_device, "process %d opened, compat mode (32 bit) - %d\n",
+- process->pasid, process->is_32bit_user_mode);
++ dev_dbg(kfd_device, "process pid %d opened kfd node, compat mode (32 bit) - %d\n",
++ process->lead_thread->pid, process->is_32bit_user_mode);
+
+ return 0;
+ }
+@@ -366,8 +366,8 @@ static int kfd_ioctl_create_queue(struct file *filep, struct kfd_process *p,
+ goto err_acquire_queue_buf;
+ }
+
+- pr_debug("Creating queue for PASID 0x%x on gpu 0x%x\n",
+- p->pasid,
++ pr_debug("Creating queue for process pid %d on gpu 0x%x\n",
++ p->lead_thread->pid,
+ dev->id);
+
+ err = pqm_create_queue(&p->pqm, dev, &q_properties, &queue_id,
+@@ -420,9 +420,9 @@ static int kfd_ioctl_destroy_queue(struct file *filp, struct kfd_process *p,
+ int retval;
+ struct kfd_ioctl_destroy_queue_args *args = data;
+
+- pr_debug("Destroying queue id %d for pasid 0x%x\n",
++ pr_debug("Destroying queue id %d for process pid %d\n",
+ args->queue_id,
+- p->pasid);
++ p->lead_thread->pid);
+
+ mutex_lock(&p->mutex);
+
+@@ -478,8 +478,8 @@ static int kfd_ioctl_update_queue(struct file *filp, struct kfd_process *p,
+ properties.pm4_target_xcc = (args->queue_percentage >> 8) & 0xFF;
+ properties.priority = args->queue_priority;
+
+- pr_debug("Updating queue id %d for pasid 0x%x\n",
+- args->queue_id, p->pasid);
++ pr_debug("Updating queue id %d for process pid %d\n",
++ args->queue_id, p->lead_thread->pid);
+
+ mutex_lock(&p->mutex);
+
+@@ -705,7 +705,7 @@ static int kfd_ioctl_get_process_apertures(struct file *filp,
+ struct kfd_process_device_apertures *pAperture;
+ int i;
+
+- dev_dbg(kfd_device, "get apertures for PASID 0x%x", p->pasid);
++ dev_dbg(kfd_device, "get apertures for process pid %d", p->lead_thread->pid);
+
+ args->num_of_nodes = 0;
+
+@@ -757,7 +757,8 @@ static int kfd_ioctl_get_process_apertures_new(struct file *filp,
+ int ret;
+ int i;
+
+- dev_dbg(kfd_device, "get apertures for PASID 0x%x", p->pasid);
++ dev_dbg(kfd_device, "get apertures for process pid %d",
++ p->lead_thread->pid);
+
+ if (args->num_of_nodes == 0) {
+ /* Return number of nodes, so that user space can alloacate
+@@ -3375,12 +3376,12 @@ static int kfd_mmio_mmap(struct kfd_node *dev, struct kfd_process *process,
+
+ vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
+
+- pr_debug("pasid 0x%x mapping mmio page\n"
++ pr_debug("process pid %d mapping mmio page\n"
+ " target user address == 0x%08llX\n"
+ " physical address == 0x%08llX\n"
+ " vm_flags == 0x%04lX\n"
+ " size == 0x%04lX\n",
+- process->pasid, (unsigned long long) vma->vm_start,
++ process->lead_thread->pid, (unsigned long long) vma->vm_start,
+ address, vma->vm_flags, PAGE_SIZE);
+
+ return io_remap_pfn_range(vma,
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_debug.c b/drivers/gpu/drm/amd/amdkfd/kfd_debug.c
+index a8abc309180137..12456c61ffa549 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_debug.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_debug.c
+@@ -204,11 +204,12 @@ bool kfd_set_dbg_ev_from_interrupt(struct kfd_node *dev,
+ size_t exception_data_size)
+ {
+ struct kfd_process *p;
++ struct kfd_process_device *pdd = NULL;
+ bool signaled_to_debugger_or_runtime = false;
+
+- p = kfd_lookup_process_by_pasid(pasid);
++ p = kfd_lookup_process_by_pasid(pasid, &pdd);
+
+- if (!p)
++ if (!pdd)
+ return false;
+
+ if (!kfd_dbg_ev_raise(trap_mask, p, dev, doorbell_id, true,
+@@ -238,9 +239,8 @@ bool kfd_set_dbg_ev_from_interrupt(struct kfd_node *dev,
+
+ mutex_unlock(&p->mutex);
+ } else if (trap_mask & KFD_EC_MASK(EC_DEVICE_MEMORY_VIOLATION)) {
+- kfd_dqm_evict_pasid(dev->dqm, p->pasid);
+- kfd_signal_vm_fault_event(dev, p->pasid, NULL,
+- exception_data);
++ kfd_evict_process_device(pdd);
++ kfd_signal_vm_fault_event(pdd, NULL, exception_data);
+
+ signaled_to_debugger_or_runtime = true;
+ }
+@@ -276,8 +276,8 @@ int kfd_dbg_send_exception_to_runtime(struct kfd_process *p,
+ data = (struct kfd_hsa_memory_exception_data *)
+ pdd->vm_fault_exc_data;
+
+- kfd_dqm_evict_pasid(pdd->dev->dqm, p->pasid);
+- kfd_signal_vm_fault_event(pdd->dev, p->pasid, NULL, data);
++ kfd_evict_process_device(pdd);
++ kfd_signal_vm_fault_event(pdd, NULL, data);
+ error_reason &= ~KFD_EC_MASK(EC_DEVICE_MEMORY_VIOLATION);
+ }
+
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device.c b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+index 6cefd338f23de0..bf978b368f6a5e 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_device.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+@@ -1558,7 +1558,7 @@ bool kgd2kfd_vmfault_fast_path(struct amdgpu_device *adev, struct amdgpu_iv_entr
+ u32 cam_index;
+
+ if (entry->ih == &adev->irq.ih_soft || entry->ih == &adev->irq.ih1) {
+- p = kfd_lookup_process_by_pasid(entry->pasid);
++ p = kfd_lookup_process_by_pasid(entry->pasid, NULL);
+ if (!p)
+ return true;
+
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+index ad9cb50a9fa38b..35ae3c55a97fa7 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+@@ -208,7 +208,7 @@ static int add_queue_mes(struct device_queue_manager *dqm, struct queue *q,
+ return -EIO;
+
+ memset(&queue_input, 0x0, sizeof(struct mes_add_queue_input));
+- queue_input.process_id = qpd->pqm->process->pasid;
++ queue_input.process_id = pdd->pasid;
+ queue_input.page_table_base_addr = qpd->page_table_base;
+ queue_input.process_va_start = 0;
+ queue_input.process_va_end = adev->vm_manager.max_pfn - 1;
+@@ -527,6 +527,7 @@ static int allocate_vmid(struct device_queue_manager *dqm,
+ struct qcm_process_device *qpd,
+ struct queue *q)
+ {
++ struct kfd_process_device *pdd = qpd_to_pdd(qpd);
+ struct device *dev = dqm->dev->adev->dev;
+ int allocated_vmid = -1, i;
+
+@@ -545,9 +546,9 @@ static int allocate_vmid(struct device_queue_manager *dqm,
+
+ pr_debug("vmid allocated: %d\n", allocated_vmid);
+
+- dqm->vmid_pasid[allocated_vmid] = q->process->pasid;
++ dqm->vmid_pasid[allocated_vmid] = pdd->pasid;
+
+- set_pasid_vmid_mapping(dqm, q->process->pasid, allocated_vmid);
++ set_pasid_vmid_mapping(dqm, pdd->pasid, allocated_vmid);
+
+ qpd->vmid = allocated_vmid;
+ q->properties.vmid = allocated_vmid;
+@@ -799,6 +800,11 @@ static int dbgdev_wave_reset_wavefronts(struct kfd_node *dev, struct kfd_process
+ return -EOPNOTSUPP;
+ }
+
++ /* taking the VMID for that process on the safe way using PDD */
++ pdd = kfd_get_process_device_data(dev, p);
++ if (!pdd)
++ return -EFAULT;
++
+ /* Scan all registers in the range ATC_VMID8_PASID_MAPPING ..
+ * ATC_VMID15_PASID_MAPPING
+ * to check which VMID the current process is mapped to.
+@@ -808,23 +814,19 @@ static int dbgdev_wave_reset_wavefronts(struct kfd_node *dev, struct kfd_process
+ status = dev->kfd2kgd->get_atc_vmid_pasid_mapping_info
+ (dev->adev, vmid, &queried_pasid);
+
+- if (status && queried_pasid == p->pasid) {
+- pr_debug("Killing wave fronts of vmid %d and pasid 0x%x\n",
+- vmid, p->pasid);
++ if (status && queried_pasid == pdd->pasid) {
++ pr_debug("Killing wave fronts of vmid %d and process pid %d\n",
++ vmid, p->lead_thread->pid);
+ break;
+ }
+ }
+
+ if (vmid > last_vmid_to_scan) {
+- dev_err(dev->adev->dev, "Didn't find vmid for pasid 0x%x\n", p->pasid);
++ dev_err(dev->adev->dev, "Didn't find vmid for process pid %d\n",
++ p->lead_thread->pid);
+ return -EFAULT;
+ }
+
+- /* taking the VMID for that process on the safe way using PDD */
+- pdd = kfd_get_process_device_data(dev, p);
+- if (!pdd)
+- return -EFAULT;
+-
+ reg_gfx_index.bits.sh_broadcast_writes = 1;
+ reg_gfx_index.bits.se_broadcast_writes = 1;
+ reg_gfx_index.bits.instance_broadcast_writes = 1;
+@@ -1060,8 +1062,8 @@ static int suspend_single_queue(struct device_queue_manager *dqm,
+ if (q->properties.is_suspended)
+ return 0;
+
+- pr_debug("Suspending PASID %u queue [%i]\n",
+- pdd->process->pasid,
++ pr_debug("Suspending process pid %d queue [%i]\n",
++ pdd->process->lead_thread->pid,
+ q->properties.queue_id);
+
+ is_new = q->properties.exception_status & KFD_EC_MASK(EC_QUEUE_NEW);
+@@ -1108,8 +1110,8 @@ static int resume_single_queue(struct device_queue_manager *dqm,
+
+ pdd = qpd_to_pdd(qpd);
+
+- pr_debug("Restoring from suspend PASID %u queue [%i]\n",
+- pdd->process->pasid,
++ pr_debug("Restoring from suspend process pid %d queue [%i]\n",
++ pdd->process->lead_thread->pid,
+ q->properties.queue_id);
+
+ q->properties.is_suspended = false;
+@@ -1142,8 +1144,8 @@ static int evict_process_queues_nocpsch(struct device_queue_manager *dqm,
+ goto out;
+
+ pdd = qpd_to_pdd(qpd);
+- pr_debug_ratelimited("Evicting PASID 0x%x queues\n",
+- pdd->process->pasid);
++ pr_debug_ratelimited("Evicting process pid %d queues\n",
++ pdd->process->lead_thread->pid);
+
+ pdd->last_evict_timestamp = get_jiffies_64();
+ /* Mark all queues as evicted. Deactivate all active queues on
+@@ -1200,8 +1202,8 @@ static int evict_process_queues_cpsch(struct device_queue_manager *dqm,
+ if (!pdd->drm_priv)
+ goto out;
+
+- pr_debug_ratelimited("Evicting PASID 0x%x queues\n",
+- pdd->process->pasid);
++ pr_debug_ratelimited("Evicting process pid %d queues\n",
++ pdd->process->lead_thread->pid);
+
+ /* Mark all queues as evicted. Deactivate all active queues on
+ * the qpd.
+@@ -1261,8 +1263,8 @@ static int restore_process_queues_nocpsch(struct device_queue_manager *dqm,
+ goto out;
+ }
+
+- pr_debug_ratelimited("Restoring PASID 0x%x queues\n",
+- pdd->process->pasid);
++ pr_debug_ratelimited("Restoring process pid %d queues\n",
++ pdd->process->lead_thread->pid);
+
+ /* Update PD Base in QPD */
+ qpd->page_table_base = pd_base;
+@@ -1345,8 +1347,8 @@ static int restore_process_queues_cpsch(struct device_queue_manager *dqm,
+ if (!pdd->drm_priv)
+ goto vm_not_acquired;
+
+- pr_debug_ratelimited("Restoring PASID 0x%x queues\n",
+- pdd->process->pasid);
++ pr_debug_ratelimited("Restoring process pid %d queues\n",
++ pdd->process->lead_thread->pid);
+
+ /* Update PD Base in QPD */
+ qpd->page_table_base = amdgpu_amdkfd_gpuvm_get_process_page_dir(pdd->drm_priv);
+@@ -2137,8 +2139,8 @@ static void set_queue_as_reset(struct device_queue_manager *dqm, struct queue *q
+ {
+ struct kfd_process_device *pdd = qpd_to_pdd(qpd);
+
+- dev_err(dqm->dev->adev->dev, "queue id 0x%0x at pasid 0x%0x is reset\n",
+- q->properties.queue_id, q->process->pasid);
++ dev_err(dqm->dev->adev->dev, "queue id 0x%0x at pasid %d is reset\n",
++ q->properties.queue_id, pdd->process->lead_thread->pid);
+
+ pdd->has_reset_queue = true;
+ if (q->properties.is_active) {
+@@ -2491,14 +2493,6 @@ static int destroy_queue_cpsch(struct device_queue_manager *dqm,
+ return retval;
+ }
+
+-/*
+- * Low bits must be 0000/FFFF as required by HW, high bits must be 0 to
+- * stay in user mode.
+- */
+-#define APE1_FIXED_BITS_MASK 0xFFFF80000000FFFFULL
+-/* APE1 limit is inclusive and 64K aligned. */
+-#define APE1_LIMIT_ALIGNMENT 0xFFFF
+-
+ static bool set_cache_memory_policy(struct device_queue_manager *dqm,
+ struct qcm_process_device *qpd,
+ enum cache_policy default_policy,
+@@ -2513,34 +2507,6 @@ static bool set_cache_memory_policy(struct device_queue_manager *dqm,
+
+ dqm_lock(dqm);
+
+- if (alternate_aperture_size == 0) {
+- /* base > limit disables APE1 */
+- qpd->sh_mem_ape1_base = 1;
+- qpd->sh_mem_ape1_limit = 0;
+- } else {
+- /*
+- * In FSA64, APE1_Base[63:0] = { 16{SH_MEM_APE1_BASE[31]},
+- * SH_MEM_APE1_BASE[31:0], 0x0000 }
+- * APE1_Limit[63:0] = { 16{SH_MEM_APE1_LIMIT[31]},
+- * SH_MEM_APE1_LIMIT[31:0], 0xFFFF }
+- * Verify that the base and size parameters can be
+- * represented in this format and convert them.
+- * Additionally restrict APE1 to user-mode addresses.
+- */
+-
+- uint64_t base = (uintptr_t)alternate_aperture_base;
+- uint64_t limit = base + alternate_aperture_size - 1;
+-
+- if (limit <= base || (base & APE1_FIXED_BITS_MASK) != 0 ||
+- (limit & APE1_FIXED_BITS_MASK) != APE1_LIMIT_ALIGNMENT) {
+- retval = false;
+- goto out;
+- }
+-
+- qpd->sh_mem_ape1_base = base >> 16;
+- qpd->sh_mem_ape1_limit = limit >> 16;
+- }
+-
+ retval = dqm->asic_ops.set_cache_memory_policy(
+ dqm,
+ qpd,
+@@ -2549,6 +2515,9 @@ static bool set_cache_memory_policy(struct device_queue_manager *dqm,
+ alternate_aperture_base,
+ alternate_aperture_size);
+
++ if (retval)
++ goto out;
++
+ if ((dqm->sched_policy == KFD_SCHED_POLICY_NO_HWS) && (qpd->vmid != 0))
+ program_sh_mem_settings(dqm, qpd);
+
+@@ -2974,20 +2943,19 @@ void device_queue_manager_uninit(struct device_queue_manager *dqm)
+
+ int kfd_dqm_suspend_bad_queue_mes(struct kfd_node *knode, u32 pasid, u32 doorbell_id)
+ {
+- struct kfd_process_device *pdd;
+- struct kfd_process *p = kfd_lookup_process_by_pasid(pasid);
++ struct kfd_process_device *pdd = NULL;
++ struct kfd_process *p = kfd_lookup_process_by_pasid(pasid, &pdd);
+ struct device_queue_manager *dqm = knode->dqm;
+ struct device *dev = dqm->dev->adev->dev;
+ struct qcm_process_device *qpd;
+ struct queue *q = NULL;
+ int ret = 0;
+
+- if (!p)
++ if (!pdd)
+ return -EINVAL;
+
+ dqm_lock(dqm);
+
+- pdd = kfd_get_process_device_data(dqm->dev, p);
+ if (pdd) {
+ qpd = &pdd->qpd;
+
+@@ -3020,6 +2988,7 @@ int kfd_dqm_suspend_bad_queue_mes(struct kfd_node *knode, u32 pasid, u32 doorbel
+
+ out:
+ dqm_unlock(dqm);
++ kfd_unref_process(p);
+ return ret;
+ }
+
+@@ -3061,24 +3030,21 @@ static int kfd_dqm_evict_pasid_mes(struct device_queue_manager *dqm,
+ return ret;
+ }
+
+-int kfd_dqm_evict_pasid(struct device_queue_manager *dqm, u32 pasid)
++int kfd_evict_process_device(struct kfd_process_device *pdd)
+ {
+- struct kfd_process_device *pdd;
+- struct kfd_process *p = kfd_lookup_process_by_pasid(pasid);
++ struct device_queue_manager *dqm;
++ struct kfd_process *p;
+ int ret = 0;
+
+- if (!p)
+- return -EINVAL;
++ p = pdd->process;
++ dqm = pdd->dev->dqm;
++
+ WARN(debug_evictions, "Evicting pid %d", p->lead_thread->pid);
+- pdd = kfd_get_process_device_data(dqm->dev, p);
+- if (pdd) {
+- if (dqm->dev->kfd->shared_resources.enable_mes)
+- ret = kfd_dqm_evict_pasid_mes(dqm, &pdd->qpd);
+- else
+- ret = dqm->ops.evict_process_queues(dqm, &pdd->qpd);
+- }
+
+- kfd_unref_process(p);
++ if (dqm->dev->kfd->shared_resources.enable_mes)
++ ret = kfd_dqm_evict_pasid_mes(dqm, &pdd->qpd);
++ else
++ ret = dqm->ops.evict_process_queues(dqm, &pdd->qpd);
+
+ return ret;
+ }
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager_cik.c b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager_cik.c
+index d4d95c7f2e5d40..32bedef912b3b2 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager_cik.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager_cik.c
+@@ -27,6 +27,14 @@
+ #include "oss/oss_2_4_sh_mask.h"
+ #include "gca/gfx_7_2_sh_mask.h"
+
++/*
++ * Low bits must be 0000/FFFF as required by HW, high bits must be 0 to
++ * stay in user mode.
++ */
++#define APE1_FIXED_BITS_MASK 0xFFFF80000000FFFFULL
++/* APE1 limit is inclusive and 64K aligned. */
++#define APE1_LIMIT_ALIGNMENT 0xFFFF
++
+ static bool set_cache_memory_policy_cik(struct device_queue_manager *dqm,
+ struct qcm_process_device *qpd,
+ enum cache_policy default_policy,
+@@ -84,6 +92,36 @@ static bool set_cache_memory_policy_cik(struct device_queue_manager *dqm,
+ {
+ uint32_t default_mtype;
+ uint32_t ape1_mtype;
++ unsigned int temp;
++ bool retval = true;
++
++ if (alternate_aperture_size == 0) {
++ /* base > limit disables APE1 */
++ qpd->sh_mem_ape1_base = 1;
++ qpd->sh_mem_ape1_limit = 0;
++ } else {
++ /*
++ * In FSA64, APE1_Base[63:0] = { 16{SH_MEM_APE1_BASE[31]},
++ * SH_MEM_APE1_BASE[31:0], 0x0000 }
++ * APE1_Limit[63:0] = { 16{SH_MEM_APE1_LIMIT[31]},
++ * SH_MEM_APE1_LIMIT[31:0], 0xFFFF }
++ * Verify that the base and size parameters can be
++ * represented in this format and convert them.
++ * Additionally restrict APE1 to user-mode addresses.
++ */
++
++ uint64_t base = (uintptr_t)alternate_aperture_base;
++ uint64_t limit = base + alternate_aperture_size - 1;
++
++ if (limit <= base || (base & APE1_FIXED_BITS_MASK) != 0 ||
++ (limit & APE1_FIXED_BITS_MASK) != APE1_LIMIT_ALIGNMENT) {
++ retval = false;
++ goto out;
++ }
++
++ qpd->sh_mem_ape1_base = base >> 16;
++ qpd->sh_mem_ape1_limit = limit >> 16;
++ }
+
+ default_mtype = (default_policy == cache_policy_coherent) ?
+ MTYPE_NONCACHED :
+@@ -97,37 +135,22 @@ static bool set_cache_memory_policy_cik(struct device_queue_manager *dqm,
+ | ALIGNMENT_MODE(SH_MEM_ALIGNMENT_MODE_UNALIGNED)
+ | DEFAULT_MTYPE(default_mtype)
+ | APE1_MTYPE(ape1_mtype);
+-
+- return true;
+-}
+-
+-static int update_qpd_cik(struct device_queue_manager *dqm,
+- struct qcm_process_device *qpd)
+-{
+- struct kfd_process_device *pdd;
+- unsigned int temp;
+-
+- pdd = qpd_to_pdd(qpd);
+-
+- /* check if sh_mem_config register already configured */
+- if (qpd->sh_mem_config == 0) {
+- qpd->sh_mem_config =
+- ALIGNMENT_MODE(SH_MEM_ALIGNMENT_MODE_UNALIGNED) |
+- DEFAULT_MTYPE(MTYPE_NONCACHED) |
+- APE1_MTYPE(MTYPE_NONCACHED);
+- qpd->sh_mem_ape1_limit = 0;
+- qpd->sh_mem_ape1_base = 0;
+- }
+-
+ /* On dGPU we're always in GPUVM64 addressing mode with 64-bit
+ * aperture addresses.
+ */
+- temp = get_sh_mem_bases_nybble_64(pdd);
++ temp = get_sh_mem_bases_nybble_64(qpd_to_pdd(qpd));
+ qpd->sh_mem_bases = compute_sh_mem_bases_64bit(temp);
+
+ pr_debug("is32bit process: %d sh_mem_bases nybble: 0x%X and register 0x%X\n",
+ qpd->pqm->process->is_32bit_user_mode, temp, qpd->sh_mem_bases);
+
++out:
++ return retval;
++}
++
++static int update_qpd_cik(struct device_queue_manager *dqm,
++ struct qcm_process_device *qpd)
++{
+ return 0;
+ }
+
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager_v10.c b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager_v10.c
+index 245a90dfc2f6b3..b5f5f141353b5f 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager_v10.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager_v10.c
+@@ -31,10 +31,17 @@ static int update_qpd_v10(struct device_queue_manager *dqm,
+ struct qcm_process_device *qpd);
+ static void init_sdma_vm_v10(struct device_queue_manager *dqm, struct queue *q,
+ struct qcm_process_device *qpd);
++static bool set_cache_memory_policy_v10(struct device_queue_manager *dqm,
++ struct qcm_process_device *qpd,
++ enum cache_policy default_policy,
++ enum cache_policy alternate_policy,
++ void __user *alternate_aperture_base,
++ uint64_t alternate_aperture_size);
+
+ void device_queue_manager_init_v10(
+ struct device_queue_manager_asic_ops *asic_ops)
+ {
++ asic_ops->set_cache_memory_policy = set_cache_memory_policy_v10;
+ asic_ops->update_qpd = update_qpd_v10;
+ asic_ops->init_sdma_vm = init_sdma_vm_v10;
+ asic_ops->mqd_manager_init = mqd_manager_init_v10;
+@@ -49,27 +56,27 @@ static uint32_t compute_sh_mem_bases_64bit(struct kfd_process_device *pdd)
+ private_base;
+ }
+
+-static int update_qpd_v10(struct device_queue_manager *dqm,
+- struct qcm_process_device *qpd)
++static bool set_cache_memory_policy_v10(struct device_queue_manager *dqm,
++ struct qcm_process_device *qpd,
++ enum cache_policy default_policy,
++ enum cache_policy alternate_policy,
++ void __user *alternate_aperture_base,
++ uint64_t alternate_aperture_size)
+ {
+- struct kfd_process_device *pdd;
+-
+- pdd = qpd_to_pdd(qpd);
+-
+- /* check if sh_mem_config register already configured */
+- if (qpd->sh_mem_config == 0) {
+- qpd->sh_mem_config =
+- (SH_MEM_ALIGNMENT_MODE_UNALIGNED <<
+- SH_MEM_CONFIG__ALIGNMENT_MODE__SHIFT) |
+- (3 << SH_MEM_CONFIG__INITIAL_INST_PREFETCH__SHIFT);
+- qpd->sh_mem_ape1_limit = 0;
+- qpd->sh_mem_ape1_base = 0;
+- }
+-
+- qpd->sh_mem_bases = compute_sh_mem_bases_64bit(pdd);
++ qpd->sh_mem_config = (SH_MEM_ALIGNMENT_MODE_UNALIGNED <<
++ SH_MEM_CONFIG__ALIGNMENT_MODE__SHIFT) |
++ (3 << SH_MEM_CONFIG__INITIAL_INST_PREFETCH__SHIFT);
++ qpd->sh_mem_ape1_limit = 0;
++ qpd->sh_mem_ape1_base = 0;
++ qpd->sh_mem_bases = compute_sh_mem_bases_64bit(qpd_to_pdd(qpd));
+
+ pr_debug("sh_mem_bases 0x%X\n", qpd->sh_mem_bases);
++ return true;
++}
+
++static int update_qpd_v10(struct device_queue_manager *dqm,
++ struct qcm_process_device *qpd)
++{
+ return 0;
+ }
+
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager_v11.c b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager_v11.c
+index 2e129da7acb43a..f436878d0d6218 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager_v11.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager_v11.c
+@@ -30,10 +30,17 @@ static int update_qpd_v11(struct device_queue_manager *dqm,
+ struct qcm_process_device *qpd);
+ static void init_sdma_vm_v11(struct device_queue_manager *dqm, struct queue *q,
+ struct qcm_process_device *qpd);
++static bool set_cache_memory_policy_v11(struct device_queue_manager *dqm,
++ struct qcm_process_device *qpd,
++ enum cache_policy default_policy,
++ enum cache_policy alternate_policy,
++ void __user *alternate_aperture_base,
++ uint64_t alternate_aperture_size);
+
+ void device_queue_manager_init_v11(
+ struct device_queue_manager_asic_ops *asic_ops)
+ {
++ asic_ops->set_cache_memory_policy = set_cache_memory_policy_v11;
+ asic_ops->update_qpd = update_qpd_v11;
+ asic_ops->init_sdma_vm = init_sdma_vm_v11;
+ asic_ops->mqd_manager_init = mqd_manager_init_v11;
+@@ -48,28 +55,28 @@ static uint32_t compute_sh_mem_bases_64bit(struct kfd_process_device *pdd)
+ private_base;
+ }
+
+-static int update_qpd_v11(struct device_queue_manager *dqm,
+- struct qcm_process_device *qpd)
++static bool set_cache_memory_policy_v11(struct device_queue_manager *dqm,
++ struct qcm_process_device *qpd,
++ enum cache_policy default_policy,
++ enum cache_policy alternate_policy,
++ void __user *alternate_aperture_base,
++ uint64_t alternate_aperture_size)
+ {
+- struct kfd_process_device *pdd;
+-
+- pdd = qpd_to_pdd(qpd);
+-
+- /* check if sh_mem_config register already configured */
+- if (qpd->sh_mem_config == 0) {
+- qpd->sh_mem_config =
+- (SH_MEM_ALIGNMENT_MODE_UNALIGNED <<
+- SH_MEM_CONFIG__ALIGNMENT_MODE__SHIFT) |
+- (3 << SH_MEM_CONFIG__INITIAL_INST_PREFETCH__SHIFT);
+-
+- qpd->sh_mem_ape1_limit = 0;
+- qpd->sh_mem_ape1_base = 0;
+- }
++ qpd->sh_mem_config = (SH_MEM_ALIGNMENT_MODE_UNALIGNED <<
++ SH_MEM_CONFIG__ALIGNMENT_MODE__SHIFT) |
++ (3 << SH_MEM_CONFIG__INITIAL_INST_PREFETCH__SHIFT);
+
+- qpd->sh_mem_bases = compute_sh_mem_bases_64bit(pdd);
++ qpd->sh_mem_ape1_limit = 0;
++ qpd->sh_mem_ape1_base = 0;
++ qpd->sh_mem_bases = compute_sh_mem_bases_64bit(qpd_to_pdd(qpd));
+
+ pr_debug("sh_mem_bases 0x%X\n", qpd->sh_mem_bases);
++ return true;
++}
+
++static int update_qpd_v11(struct device_queue_manager *dqm,
++ struct qcm_process_device *qpd)
++{
+ return 0;
+ }
+
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager_v12.c b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager_v12.c
+index 4f3295b29dfb1b..62ca1c8fcbaf9a 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager_v12.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager_v12.c
+@@ -30,10 +30,17 @@ static int update_qpd_v12(struct device_queue_manager *dqm,
+ struct qcm_process_device *qpd);
+ static void init_sdma_vm_v12(struct device_queue_manager *dqm, struct queue *q,
+ struct qcm_process_device *qpd);
++static bool set_cache_memory_policy_v12(struct device_queue_manager *dqm,
++ struct qcm_process_device *qpd,
++ enum cache_policy default_policy,
++ enum cache_policy alternate_policy,
++ void __user *alternate_aperture_base,
++ uint64_t alternate_aperture_size);
+
+ void device_queue_manager_init_v12(
+ struct device_queue_manager_asic_ops *asic_ops)
+ {
++ asic_ops->set_cache_memory_policy = set_cache_memory_policy_v12;
+ asic_ops->update_qpd = update_qpd_v12;
+ asic_ops->init_sdma_vm = init_sdma_vm_v12;
+ asic_ops->mqd_manager_init = mqd_manager_init_v12;
+@@ -48,28 +55,28 @@ static uint32_t compute_sh_mem_bases_64bit(struct kfd_process_device *pdd)
+ private_base;
+ }
+
+-static int update_qpd_v12(struct device_queue_manager *dqm,
+- struct qcm_process_device *qpd)
++static bool set_cache_memory_policy_v12(struct device_queue_manager *dqm,
++ struct qcm_process_device *qpd,
++ enum cache_policy default_policy,
++ enum cache_policy alternate_policy,
++ void __user *alternate_aperture_base,
++ uint64_t alternate_aperture_size)
+ {
+- struct kfd_process_device *pdd;
+-
+- pdd = qpd_to_pdd(qpd);
+-
+- /* check if sh_mem_config register already configured */
+- if (qpd->sh_mem_config == 0) {
+- qpd->sh_mem_config =
+- (SH_MEM_ALIGNMENT_MODE_UNALIGNED <<
+- SH_MEM_CONFIG__ALIGNMENT_MODE__SHIFT) |
+- (3 << SH_MEM_CONFIG__INITIAL_INST_PREFETCH__SHIFT);
+-
+- qpd->sh_mem_ape1_limit = 0;
+- qpd->sh_mem_ape1_base = 0;
+- }
++ qpd->sh_mem_config = (SH_MEM_ALIGNMENT_MODE_UNALIGNED <<
++ SH_MEM_CONFIG__ALIGNMENT_MODE__SHIFT) |
++ (3 << SH_MEM_CONFIG__INITIAL_INST_PREFETCH__SHIFT);
+
+- qpd->sh_mem_bases = compute_sh_mem_bases_64bit(pdd);
++ qpd->sh_mem_ape1_limit = 0;
++ qpd->sh_mem_ape1_base = 0;
++ qpd->sh_mem_bases = compute_sh_mem_bases_64bit(qpd_to_pdd(qpd));
+
+ pr_debug("sh_mem_bases 0x%X\n", qpd->sh_mem_bases);
++ return true;
++}
+
++static int update_qpd_v12(struct device_queue_manager *dqm,
++ struct qcm_process_device *qpd)
++{
+ return 0;
+ }
+
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager_v9.c b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager_v9.c
+index 67137e674f1d08..d85eadaa1e11bd 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager_v9.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager_v9.c
+@@ -30,10 +30,17 @@ static int update_qpd_v9(struct device_queue_manager *dqm,
+ struct qcm_process_device *qpd);
+ static void init_sdma_vm_v9(struct device_queue_manager *dqm, struct queue *q,
+ struct qcm_process_device *qpd);
++static bool set_cache_memory_policy_v9(struct device_queue_manager *dqm,
++ struct qcm_process_device *qpd,
++ enum cache_policy default_policy,
++ enum cache_policy alternate_policy,
++ void __user *alternate_aperture_base,
++ uint64_t alternate_aperture_size);
+
+ void device_queue_manager_init_v9(
+ struct device_queue_manager_asic_ops *asic_ops)
+ {
++ asic_ops->set_cache_memory_policy = set_cache_memory_policy_v9;
+ asic_ops->update_qpd = update_qpd_v9;
+ asic_ops->init_sdma_vm = init_sdma_vm_v9;
+ asic_ops->mqd_manager_init = mqd_manager_init_v9;
+@@ -48,10 +55,36 @@ static uint32_t compute_sh_mem_bases_64bit(struct kfd_process_device *pdd)
+ private_base;
+ }
+
++static bool set_cache_memory_policy_v9(struct device_queue_manager *dqm,
++ struct qcm_process_device *qpd,
++ enum cache_policy default_policy,
++ enum cache_policy alternate_policy,
++ void __user *alternate_aperture_base,
++ uint64_t alternate_aperture_size)
++{
++ qpd->sh_mem_config = SH_MEM_ALIGNMENT_MODE_UNALIGNED <<
++ SH_MEM_CONFIG__ALIGNMENT_MODE__SHIFT;
++
++ if (dqm->dev->kfd->noretry)
++ qpd->sh_mem_config |= 1 << SH_MEM_CONFIG__RETRY_DISABLE__SHIFT;
++
++ if (KFD_GC_VERSION(dqm->dev->kfd) == IP_VERSION(9, 4, 3) ||
++ KFD_GC_VERSION(dqm->dev->kfd) == IP_VERSION(9, 4, 4))
++ qpd->sh_mem_config |= (1 << SH_MEM_CONFIG__F8_MODE__SHIFT);
++
++ qpd->sh_mem_ape1_limit = 0;
++ qpd->sh_mem_ape1_base = 0;
++ qpd->sh_mem_bases = compute_sh_mem_bases_64bit(qpd_to_pdd(qpd));
++
++ pr_debug("sh_mem_bases 0x%X sh_mem_config 0x%X\n", qpd->sh_mem_bases,
++ qpd->sh_mem_config);
++ return true;
++}
++
+ static int update_qpd_v9(struct device_queue_manager *dqm,
+ struct qcm_process_device *qpd)
+ {
+- struct kfd_process_device *pdd;
++ struct kfd_process_device *pdd = qpd_to_pdd(qpd);
+
+ pdd = qpd_to_pdd(qpd);
+
+@@ -64,8 +97,7 @@ static int update_qpd_v9(struct device_queue_manager *dqm,
+ qpd->sh_mem_config |= 1 << SH_MEM_CONFIG__RETRY_DISABLE__SHIFT;
+
+ if (KFD_GC_VERSION(dqm->dev->kfd) == IP_VERSION(9, 4, 3) ||
+- KFD_GC_VERSION(dqm->dev->kfd) == IP_VERSION(9, 4, 4) ||
+- KFD_GC_VERSION(dqm->dev->kfd) == IP_VERSION(9, 5, 0))
++ KFD_GC_VERSION(dqm->dev->kfd) == IP_VERSION(9, 4, 4))
+ qpd->sh_mem_config |=
+ (1 << SH_MEM_CONFIG__F8_MODE__SHIFT);
+
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager_vi.c b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager_vi.c
+index b291ee0fab9439..320518f418903d 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager_vi.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager_vi.c
+@@ -27,6 +27,14 @@
+ #include "gca/gfx_8_0_sh_mask.h"
+ #include "oss/oss_3_0_sh_mask.h"
+
++/*
++ * Low bits must be 0000/FFFF as required by HW, high bits must be 0 to
++ * stay in user mode.
++ */
++#define APE1_FIXED_BITS_MASK 0xFFFF80000000FFFFULL
++/* APE1 limit is inclusive and 64K aligned. */
++#define APE1_LIMIT_ALIGNMENT 0xFFFF
++
+ static bool set_cache_memory_policy_vi(struct device_queue_manager *dqm,
+ struct qcm_process_device *qpd,
+ enum cache_policy default_policy,
+@@ -85,6 +93,36 @@ static bool set_cache_memory_policy_vi(struct device_queue_manager *dqm,
+ {
+ uint32_t default_mtype;
+ uint32_t ape1_mtype;
++ unsigned int temp;
++ bool retval = true;
++
++ if (alternate_aperture_size == 0) {
++ /* base > limit disables APE1 */
++ qpd->sh_mem_ape1_base = 1;
++ qpd->sh_mem_ape1_limit = 0;
++ } else {
++ /*
++ * In FSA64, APE1_Base[63:0] = { 16{SH_MEM_APE1_BASE[31]},
++ * SH_MEM_APE1_BASE[31:0], 0x0000 }
++ * APE1_Limit[63:0] = { 16{SH_MEM_APE1_LIMIT[31]},
++ * SH_MEM_APE1_LIMIT[31:0], 0xFFFF }
++ * Verify that the base and size parameters can be
++ * represented in this format and convert them.
++ * Additionally restrict APE1 to user-mode addresses.
++ */
++
++ uint64_t base = (uintptr_t)alternate_aperture_base;
++ uint64_t limit = base + alternate_aperture_size - 1;
++
++ if (limit <= base || (base & APE1_FIXED_BITS_MASK) != 0 ||
++ (limit & APE1_FIXED_BITS_MASK) != APE1_LIMIT_ALIGNMENT) {
++ retval = false;
++ goto out;
++ }
++
++ qpd->sh_mem_ape1_base = base >> 16;
++ qpd->sh_mem_ape1_limit = limit >> 16;
++ }
+
+ default_mtype = (default_policy == cache_policy_coherent) ?
+ MTYPE_UC :
+@@ -100,40 +138,21 @@ static bool set_cache_memory_policy_vi(struct device_queue_manager *dqm,
+ default_mtype << SH_MEM_CONFIG__DEFAULT_MTYPE__SHIFT |
+ ape1_mtype << SH_MEM_CONFIG__APE1_MTYPE__SHIFT;
+
+- return true;
+-}
+-
+-static int update_qpd_vi(struct device_queue_manager *dqm,
+- struct qcm_process_device *qpd)
+-{
+- struct kfd_process_device *pdd;
+- unsigned int temp;
+-
+- pdd = qpd_to_pdd(qpd);
+-
+- /* check if sh_mem_config register already configured */
+- if (qpd->sh_mem_config == 0) {
+- qpd->sh_mem_config =
+- SH_MEM_ALIGNMENT_MODE_UNALIGNED <<
+- SH_MEM_CONFIG__ALIGNMENT_MODE__SHIFT |
+- MTYPE_UC <<
+- SH_MEM_CONFIG__DEFAULT_MTYPE__SHIFT |
+- MTYPE_UC <<
+- SH_MEM_CONFIG__APE1_MTYPE__SHIFT;
+-
+- qpd->sh_mem_ape1_limit = 0;
+- qpd->sh_mem_ape1_base = 0;
+- }
+-
+ /* On dGPU we're always in GPUVM64 addressing mode with 64-bit
+ * aperture addresses.
+ */
+- temp = get_sh_mem_bases_nybble_64(pdd);
++ temp = get_sh_mem_bases_nybble_64(qpd_to_pdd(qpd));
+ qpd->sh_mem_bases = compute_sh_mem_bases_64bit(temp);
+
+ pr_debug("sh_mem_bases nybble: 0x%X and register 0x%X\n",
+ temp, qpd->sh_mem_bases);
++out:
++ return retval;
++}
+
++static int update_qpd_vi(struct device_queue_manager *dqm,
++ struct qcm_process_device *qpd)
++{
+ return 0;
+ }
+
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_events.c b/drivers/gpu/drm/amd/amdkfd/kfd_events.c
+index d075f24e5f9f39..fecdb679407503 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_events.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_events.c
+@@ -727,7 +727,7 @@ void kfd_signal_event_interrupt(u32 pasid, uint32_t partial_id,
+ * to process context, kfd_process could attempt to exit while we are
+ * running so the lookup function increments the process ref count.
+ */
+- struct kfd_process *p = kfd_lookup_process_by_pasid(pasid);
++ struct kfd_process *p = kfd_lookup_process_by_pasid(pasid, NULL);
+
+ if (!p)
+ return; /* Presumably process exited. */
+@@ -1139,8 +1139,8 @@ static void lookup_events_by_type_and_signal(struct kfd_process *p,
+
+ if (type == KFD_EVENT_TYPE_MEMORY) {
+ dev_warn(kfd_device,
+- "Sending SIGSEGV to process %d (pasid 0x%x)",
+- p->lead_thread->pid, p->pasid);
++ "Sending SIGSEGV to process pid %d",
++ p->lead_thread->pid);
+ send_sig(SIGSEGV, p->lead_thread, 0);
+ }
+
+@@ -1148,13 +1148,13 @@ static void lookup_events_by_type_and_signal(struct kfd_process *p,
+ if (send_signal) {
+ if (send_sigterm) {
+ dev_warn(kfd_device,
+- "Sending SIGTERM to process %d (pasid 0x%x)",
+- p->lead_thread->pid, p->pasid);
++ "Sending SIGTERM to process pid %d",
++ p->lead_thread->pid);
+ send_sig(SIGTERM, p->lead_thread, 0);
+ } else {
+ dev_err(kfd_device,
+- "Process %d (pasid 0x%x) got unhandled exception",
+- p->lead_thread->pid, p->pasid);
++ "Process pid %d got unhandled exception",
++ p->lead_thread->pid);
+ }
+ }
+
+@@ -1168,7 +1168,7 @@ void kfd_signal_hw_exception_event(u32 pasid)
+ * to process context, kfd_process could attempt to exit while we are
+ * running so the lookup function increments the process ref count.
+ */
+- struct kfd_process *p = kfd_lookup_process_by_pasid(pasid);
++ struct kfd_process *p = kfd_lookup_process_by_pasid(pasid, NULL);
+
+ if (!p)
+ return; /* Presumably process exited. */
+@@ -1177,22 +1177,20 @@ void kfd_signal_hw_exception_event(u32 pasid)
+ kfd_unref_process(p);
+ }
+
+-void kfd_signal_vm_fault_event(struct kfd_node *dev, u32 pasid,
++void kfd_signal_vm_fault_event(struct kfd_process_device *pdd,
+ struct kfd_vm_fault_info *info,
+ struct kfd_hsa_memory_exception_data *data)
+ {
+ struct kfd_event *ev;
+ uint32_t id;
+- struct kfd_process *p = kfd_lookup_process_by_pasid(pasid);
++ struct kfd_process *p = pdd->process;
+ struct kfd_hsa_memory_exception_data memory_exception_data;
+ int user_gpu_id;
+
+- if (!p)
+- return; /* Presumably process exited. */
+-
+- user_gpu_id = kfd_process_get_user_gpu_id(p, dev->id);
++ user_gpu_id = kfd_process_get_user_gpu_id(p, pdd->dev->id);
+ if (unlikely(user_gpu_id == -EINVAL)) {
+- WARN_ONCE(1, "Could not get user_gpu_id from dev->id:%x\n", dev->id);
++ WARN_ONCE(1, "Could not get user_gpu_id from dev->id:%x\n",
++ pdd->dev->id);
+ return;
+ }
+
+@@ -1229,7 +1227,6 @@ void kfd_signal_vm_fault_event(struct kfd_node *dev, u32 pasid,
+ }
+
+ rcu_read_unlock();
+- kfd_unref_process(p);
+ }
+
+ void kfd_signal_reset_event(struct kfd_node *dev)
+@@ -1264,7 +1261,8 @@ void kfd_signal_reset_event(struct kfd_node *dev)
+ }
+
+ if (unlikely(!pdd)) {
+- WARN_ONCE(1, "Could not get device data from pasid:0x%x\n", p->pasid);
++ WARN_ONCE(1, "Could not get device data from process pid:%d\n",
++ p->lead_thread->pid);
+ continue;
+ }
+
+@@ -1273,8 +1271,15 @@ void kfd_signal_reset_event(struct kfd_node *dev)
+
+ if (dev->dqm->detect_hang_count) {
+ struct amdgpu_task_info *ti;
++ struct amdgpu_fpriv *drv_priv;
++
++ if (unlikely(amdgpu_file_to_fpriv(pdd->drm_file, &drv_priv))) {
++ WARN_ONCE(1, "Could not get vm for device %x from pid:%d\n",
++ dev->id, p->lead_thread->pid);
++ continue;
++ }
+
+- ti = amdgpu_vm_get_task_info_pasid(dev->adev, p->pasid);
++ ti = amdgpu_vm_get_task_info_vm(&drv_priv->vm);
+ if (ti) {
+ dev_err(dev->adev->dev,
+ "Queues reset on process %s tid %d thread %s pid %d\n",
+@@ -1311,7 +1316,7 @@ void kfd_signal_reset_event(struct kfd_node *dev)
+
+ void kfd_signal_poison_consumed_event(struct kfd_node *dev, u32 pasid)
+ {
+- struct kfd_process *p = kfd_lookup_process_by_pasid(pasid);
++ struct kfd_process *p = kfd_lookup_process_by_pasid(pasid, NULL);
+ struct kfd_hsa_memory_exception_data memory_exception_data;
+ struct kfd_hsa_hw_exception_data hw_exception_data;
+ struct kfd_event *ev;
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_int_process_v11.c b/drivers/gpu/drm/amd/amdkfd/kfd_int_process_v11.c
+index b3f988b275a888..c5f97e6e36ff5e 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_int_process_v11.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_int_process_v11.c
+@@ -194,7 +194,7 @@ static void event_interrupt_poison_consumption_v11(struct kfd_node *dev,
+ enum amdgpu_ras_block block = 0;
+ int ret = -EINVAL;
+ uint32_t reset = 0;
+- struct kfd_process *p = kfd_lookup_process_by_pasid(pasid);
++ struct kfd_process *p = kfd_lookup_process_by_pasid(pasid, NULL);
+
+ if (!p)
+ return;
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_int_process_v9.c b/drivers/gpu/drm/amd/amdkfd/kfd_int_process_v9.c
+index 0cb5c582ce7dc4..b8a91bf4ef3070 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_int_process_v9.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_int_process_v9.c
+@@ -146,7 +146,7 @@ static void event_interrupt_poison_consumption_v9(struct kfd_node *dev,
+ {
+ enum amdgpu_ras_block block = 0;
+ uint32_t reset = 0;
+- struct kfd_process *p = kfd_lookup_process_by_pasid(pasid);
++ struct kfd_process *p = kfd_lookup_process_by_pasid(pasid, NULL);
+ enum ras_event_type type = RAS_EVENT_TYPE_POISON_CONSUMPTION;
+ u64 event_id;
+ int old_poison, ret;
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_packet_manager_v9.c b/drivers/gpu/drm/amd/amdkfd/kfd_packet_manager_v9.c
+index 1f9f5bfeaf8680..d56525201155af 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_packet_manager_v9.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_packet_manager_v9.c
+@@ -47,7 +47,7 @@ static int pm_map_process_v9(struct packet_manager *pm,
+ packet->bitfields2.exec_cleaner_shader = 1;
+ packet->bitfields2.diq_enable = (qpd->is_debug) ? 1 : 0;
+ packet->bitfields2.process_quantum = 10;
+- packet->bitfields2.pasid = qpd->pqm->process->pasid;
++ packet->bitfields2.pasid = pdd->pasid;
+ packet->bitfields14.gds_size = qpd->gds_size & 0x3F;
+ packet->bitfields14.gds_size_hi = (qpd->gds_size >> 6) & 0xF;
+ packet->bitfields14.num_gws = (qpd->mapped_gws_queue) ? qpd->num_gws : 0;
+@@ -106,7 +106,7 @@ static int pm_map_process_aldebaran(struct packet_manager *pm,
+ packet->bitfields2.exec_cleaner_shader = 1;
+ packet->bitfields2.diq_enable = (qpd->is_debug) ? 1 : 0;
+ packet->bitfields2.process_quantum = 10;
+- packet->bitfields2.pasid = qpd->pqm->process->pasid;
++ packet->bitfields2.pasid = pdd->pasid;
+ packet->bitfields14.gds_size = qpd->gds_size & 0x3F;
+ packet->bitfields14.gds_size_hi = (qpd->gds_size >> 6) & 0xF;
+ packet->bitfields14.num_gws = (qpd->mapped_gws_queue) ? qpd->num_gws : 0;
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_packet_manager_vi.c b/drivers/gpu/drm/amd/amdkfd/kfd_packet_manager_vi.c
+index c1199d06d131b6..347c86e1c378fc 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_packet_manager_vi.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_packet_manager_vi.c
+@@ -42,6 +42,7 @@ unsigned int pm_build_pm4_header(unsigned int opcode, size_t packet_size)
+ static int pm_map_process_vi(struct packet_manager *pm, uint32_t *buffer,
+ struct qcm_process_device *qpd)
+ {
++ struct kfd_process_device *pdd = qpd_to_pdd(qpd);
+ struct pm4_mes_map_process *packet;
+
+ packet = (struct pm4_mes_map_process *)buffer;
+@@ -52,7 +53,7 @@ static int pm_map_process_vi(struct packet_manager *pm, uint32_t *buffer,
+ sizeof(struct pm4_mes_map_process));
+ packet->bitfields2.diq_enable = (qpd->is_debug) ? 1 : 0;
+ packet->bitfields2.process_quantum = 10;
+- packet->bitfields2.pasid = qpd->pqm->process->pasid;
++ packet->bitfields2.pasid = pdd->pasid;
+ packet->bitfields3.page_table_base = qpd->page_table_base;
+ packet->bitfields10.gds_size = qpd->gds_size;
+ packet->bitfields10.num_gws = qpd->num_gws;
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
+index d8cd913aa772ba..0a99c5c9cadc03 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
+@@ -851,6 +851,8 @@ struct kfd_process_device {
+
+ /* Tracks queue reset status */
+ bool has_reset_queue;
++
++ u32 pasid;
+ };
+
+ #define qpd_to_pdd(x) container_of(x, struct kfd_process_device, qpd)
+@@ -910,8 +912,6 @@ struct kfd_process {
+ /* We want to receive a notification when the mm_struct is destroyed */
+ struct mmu_notifier mmu_notifier;
+
+- u32 pasid;
+-
+ /*
+ * Array of kfd_process_device pointers,
+ * one for each device the process is using.
+@@ -1039,7 +1039,8 @@ void kfd_process_destroy_wq(void);
+ void kfd_cleanup_processes(void);
+ struct kfd_process *kfd_create_process(struct task_struct *thread);
+ struct kfd_process *kfd_get_process(const struct task_struct *task);
+-struct kfd_process *kfd_lookup_process_by_pasid(u32 pasid);
++struct kfd_process *kfd_lookup_process_by_pasid(u32 pasid,
++ struct kfd_process_device **pdd);
+ struct kfd_process *kfd_lookup_process_by_mm(const struct mm_struct *mm);
+
+ int kfd_process_gpuidx_from_gpuid(struct kfd_process *p, uint32_t gpu_id);
+@@ -1337,7 +1338,7 @@ void device_queue_manager_uninit(struct device_queue_manager *dqm);
+ struct kernel_queue *kernel_queue_init(struct kfd_node *dev,
+ enum kfd_queue_type type);
+ void kernel_queue_uninit(struct kernel_queue *kq);
+-int kfd_dqm_evict_pasid(struct device_queue_manager *dqm, u32 pasid);
++int kfd_evict_process_device(struct kfd_process_device *pdd);
+ int kfd_dqm_suspend_bad_queue_mes(struct kfd_node *knode, u32 pasid, u32 doorbell_id);
+
+ /* Process Queue Manager */
+@@ -1492,7 +1493,7 @@ int kfd_event_create(struct file *devkfd, struct kfd_process *p,
+ int kfd_get_num_events(struct kfd_process *p);
+ int kfd_event_destroy(struct kfd_process *p, uint32_t event_id);
+
+-void kfd_signal_vm_fault_event(struct kfd_node *dev, u32 pasid,
++void kfd_signal_vm_fault_event(struct kfd_process_device *pdd,
+ struct kfd_vm_fault_info *info,
+ struct kfd_hsa_memory_exception_data *data);
+
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process.c b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
+index c3f2c0428e013b..7c0c24732481e9 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_process.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
+@@ -283,8 +283,8 @@ static int kfd_get_cu_occupancy(struct attribute *attr, char *buffer)
+ cu_cnt = 0;
+ proc = pdd->process;
+ if (pdd->qpd.queue_count == 0) {
+- pr_debug("Gpu-Id: %d has no active queues for process %d\n",
+- dev->id, proc->pasid);
++ pr_debug("Gpu-Id: %d has no active queues for process pid %d\n",
++ dev->id, (int)proc->lead_thread->pid);
+ return snprintf(buffer, PAGE_SIZE, "%d\n", cu_cnt);
+ }
+
+@@ -328,12 +328,9 @@ static int kfd_get_cu_occupancy(struct attribute *attr, char *buffer)
+ static ssize_t kfd_procfs_show(struct kobject *kobj, struct attribute *attr,
+ char *buffer)
+ {
+- if (strcmp(attr->name, "pasid") == 0) {
+- struct kfd_process *p = container_of(attr, struct kfd_process,
+- attr_pasid);
+-
+- return snprintf(buffer, PAGE_SIZE, "%d\n", p->pasid);
+- } else if (strncmp(attr->name, "vram_", 5) == 0) {
++ if (strcmp(attr->name, "pasid") == 0)
++ return snprintf(buffer, PAGE_SIZE, "%d\n", 0);
++ else if (strncmp(attr->name, "vram_", 5) == 0) {
+ struct kfd_process_device *pdd = container_of(attr, struct kfd_process_device,
+ attr_vram);
+ return snprintf(buffer, PAGE_SIZE, "%llu\n", atomic64_read(&pdd->vram_usage));
+@@ -842,6 +839,14 @@ struct kfd_process *kfd_create_process(struct task_struct *thread)
+ return ERR_PTR(-EINVAL);
+ }
+
++ /* If the process just called exec(3), it is possible that the
++ * cleanup of the kfd_process (following the release of the mm
++ * of the old process image) is still in the cleanup work queue.
++ * Make sure to drain any job before trying to recreate any
++ * resource for this process.
++ */
++ flush_workqueue(kfd_process_wq);
++
+ /*
+ * take kfd processes mutex before starting of process creation
+ * so there won't be a case where two threads of the same process
+@@ -862,14 +867,6 @@ struct kfd_process *kfd_create_process(struct task_struct *thread)
+ if (process) {
+ pr_debug("Process already found\n");
+ } else {
+- /* If the process just called exec(3), it is possible that the
+- * cleanup of the kfd_process (following the release of the mm
+- * of the old process image) is still in the cleanup work queue.
+- * Make sure to drain any job before trying to recreate any
+- * resource for this process.
+- */
+- flush_workqueue(kfd_process_wq);
+-
+ process = create_process(thread);
+ if (IS_ERR(process))
+ goto out;
+@@ -1057,17 +1054,13 @@ static void kfd_process_destroy_pdds(struct kfd_process *p)
+ for (i = 0; i < p->n_pdds; i++) {
+ struct kfd_process_device *pdd = p->pdds[i];
+
+- pr_debug("Releasing pdd (topology id %d) for process (pasid 0x%x)\n",
+- pdd->dev->id, p->pasid);
+-
++ pr_debug("Releasing pdd (topology id %d, for pid %d)\n",
++ pdd->dev->id, p->lead_thread->pid);
+ kfd_process_device_destroy_cwsr_dgpu(pdd);
+ kfd_process_device_destroy_ib_mem(pdd);
+
+- if (pdd->drm_file) {
+- amdgpu_amdkfd_gpuvm_release_process_vm(
+- pdd->dev->adev, pdd->drm_priv);
++ if (pdd->drm_file)
+ fput(pdd->drm_file);
+- }
+
+ if (pdd->qpd.cwsr_kaddr && !pdd->qpd.cwsr_base)
+ free_pages((unsigned long)pdd->qpd.cwsr_kaddr,
+@@ -1191,7 +1184,6 @@ static void kfd_process_wq_release(struct work_struct *work)
+
+ kfd_event_free_process(p);
+
+- kfd_pasid_free(p->pasid);
+ mutex_destroy(&p->mutex);
+
+ put_task_struct(p->lead_thread);
+@@ -1542,12 +1534,6 @@ static struct kfd_process *create_process(const struct task_struct *thread)
+ atomic_set(&process->debugged_process_count, 0);
+ sema_init(&process->runtime_enable_sema, 0);
+
+- process->pasid = kfd_pasid_alloc();
+- if (process->pasid == 0) {
+- err = -ENOSPC;
+- goto err_alloc_pasid;
+- }
+-
+ err = pqm_init(&process->pqm, process);
+ if (err != 0)
+ goto err_process_pqm_init;
+@@ -1601,8 +1587,6 @@ static struct kfd_process *create_process(const struct task_struct *thread)
+ err_init_apertures:
+ pqm_uninit(&process->pqm);
+ err_process_pqm_init:
+- kfd_pasid_free(process->pasid);
+-err_alloc_pasid:
+ kfd_event_free_process(process);
+ err_event_init:
+ mutex_destroy(&process->mutex);
+@@ -1721,15 +1705,19 @@ int kfd_process_device_init_vm(struct kfd_process_device *pdd,
+ if (ret)
+ goto err_init_cwsr;
+
+- ret = amdgpu_amdkfd_gpuvm_set_vm_pasid(dev->adev, avm, p->pasid);
+- if (ret)
+- goto err_set_pasid;
++ if (unlikely(!avm->pasid)) {
++ dev_warn(pdd->dev->adev->dev, "WARN: vm %p has no pasid associated",
++ avm);
++ ret = -EINVAL;
++ goto err_get_pasid;
++ }
+
++ pdd->pasid = avm->pasid;
+ pdd->drm_file = drm_file;
+
+ return 0;
+
+-err_set_pasid:
++err_get_pasid:
+ kfd_process_device_destroy_cwsr_dgpu(pdd);
+ err_init_cwsr:
+ kfd_process_device_destroy_ib_mem(pdd);
+@@ -1815,25 +1803,50 @@ void kfd_process_device_remove_obj_handle(struct kfd_process_device *pdd,
+ idr_remove(&pdd->alloc_idr, handle);
+ }
+
+-/* This increments the process->ref counter. */
+-struct kfd_process *kfd_lookup_process_by_pasid(u32 pasid)
++static struct kfd_process_device *kfd_lookup_process_device_by_pasid(u32 pasid)
+ {
+- struct kfd_process *p, *ret_p = NULL;
++ struct kfd_process_device *ret_p = NULL;
++ struct kfd_process *p;
+ unsigned int temp;
+-
+- int idx = srcu_read_lock(&kfd_processes_srcu);
++ int i;
+
+ hash_for_each_rcu(kfd_processes_table, temp, p, kfd_processes) {
+- if (p->pasid == pasid) {
+- kref_get(&p->ref);
+- ret_p = p;
+- break;
++ for (i = 0; i < p->n_pdds; i++) {
++ if (p->pdds[i]->pasid == pasid) {
++ ret_p = p->pdds[i];
++ break;
++ }
+ }
++ if (ret_p)
++ break;
++ }
++ return ret_p;
++}
++
++/* This increments the process->ref counter. */
++struct kfd_process *kfd_lookup_process_by_pasid(u32 pasid,
++ struct kfd_process_device **pdd)
++{
++ struct kfd_process_device *ret_p;
++
++ int idx = srcu_read_lock(&kfd_processes_srcu);
++
++ ret_p = kfd_lookup_process_device_by_pasid(pasid);
++ if (ret_p) {
++ if (pdd)
++ *pdd = ret_p;
++ kref_get(&ret_p->process->ref);
++
++ srcu_read_unlock(&kfd_processes_srcu, idx);
++ return ret_p->process;
+ }
+
+ srcu_read_unlock(&kfd_processes_srcu, idx);
+
+- return ret_p;
++ if (pdd)
++ *pdd = NULL;
++
++ return NULL;
+ }
+
+ /* This increments the process->ref counter. */
+@@ -1989,7 +2002,7 @@ static void evict_process_worker(struct work_struct *work)
+ */
+ p = container_of(dwork, struct kfd_process, eviction_work);
+
+- pr_debug("Started evicting pasid 0x%x\n", p->pasid);
++ pr_debug("Started evicting process pid %d\n", p->lead_thread->pid);
+ ret = kfd_process_evict_queues(p, KFD_QUEUE_EVICTION_TRIGGER_TTM);
+ if (!ret) {
+ /* If another thread already signaled the eviction fence,
+@@ -2001,9 +2014,9 @@ static void evict_process_worker(struct work_struct *work)
+ msecs_to_jiffies(PROCESS_RESTORE_TIME_MS)))
+ kfd_process_restore_queues(p);
+
+- pr_debug("Finished evicting pasid 0x%x\n", p->pasid);
++ pr_debug("Finished evicting process pid %d\n", p->lead_thread->pid);
+ } else
+- pr_err("Failed to evict queues of pasid 0x%x\n", p->pasid);
++ pr_err("Failed to evict queues of process pid %d\n", p->lead_thread->pid);
+ }
+
+ static int restore_process_helper(struct kfd_process *p)
+@@ -2020,9 +2033,11 @@ static int restore_process_helper(struct kfd_process *p)
+
+ ret = kfd_process_restore_queues(p);
+ if (!ret)
+- pr_debug("Finished restoring pasid 0x%x\n", p->pasid);
++ pr_debug("Finished restoring process pid %d\n",
++ p->lead_thread->pid);
+ else
+- pr_err("Failed to restore queues of pasid 0x%x\n", p->pasid);
++ pr_err("Failed to restore queues of process pid %d\n",
++ p->lead_thread->pid);
+
+ return ret;
+ }
+@@ -2039,7 +2054,7 @@ static void restore_process_worker(struct work_struct *work)
+ * lifetime of this thread, kfd_process p will be valid
+ */
+ p = container_of(dwork, struct kfd_process, restore_work);
+- pr_debug("Started restoring pasid 0x%x\n", p->pasid);
++ pr_debug("Started restoring process pasid %d\n", (int)p->lead_thread->pid);
+
+ /* Setting last_restore_timestamp before successful restoration.
+ * Otherwise this would have to be set by KGD (restore_process_bos)
+@@ -2055,8 +2070,8 @@ static void restore_process_worker(struct work_struct *work)
+
+ ret = restore_process_helper(p);
+ if (ret) {
+- pr_debug("Failed to restore BOs of pasid 0x%x, retry after %d ms\n",
+- p->pasid, PROCESS_BACK_OFF_TIME_MS);
++ pr_debug("Failed to restore BOs of process pid %d, retry after %d ms\n",
++ p->lead_thread->pid, PROCESS_BACK_OFF_TIME_MS);
+ if (mod_delayed_work(kfd_restore_wq, &p->restore_work,
+ msecs_to_jiffies(PROCESS_RESTORE_TIME_MS)))
+ kfd_process_restore_queues(p);
+@@ -2072,7 +2087,7 @@ void kfd_suspend_all_processes(void)
+ WARN(debug_evictions, "Evicting all processes");
+ hash_for_each_rcu(kfd_processes_table, temp, p, kfd_processes) {
+ if (kfd_process_evict_queues(p, KFD_QUEUE_EVICTION_TRIGGER_SUSPEND))
+- pr_err("Failed to suspend process 0x%x\n", p->pasid);
++ pr_err("Failed to suspend process pid %d\n", p->lead_thread->pid);
+ signal_eviction_fence(p);
+ }
+ srcu_read_unlock(&kfd_processes_srcu, idx);
+@@ -2086,8 +2101,8 @@ int kfd_resume_all_processes(void)
+
+ hash_for_each_rcu(kfd_processes_table, temp, p, kfd_processes) {
+ if (restore_process_helper(p)) {
+- pr_err("Restore process %d failed during resume\n",
+- p->pasid);
++ pr_err("Restore process pid %d failed during resume\n",
++ p->lead_thread->pid);
+ ret = -EFAULT;
+ }
+ }
+@@ -2142,7 +2157,7 @@ int kfd_process_drain_interrupts(struct kfd_process_device *pdd)
+ memset(irq_drain_fence, 0, sizeof(irq_drain_fence));
+ irq_drain_fence[0] = (KFD_IRQ_FENCE_SOURCEID << 8) |
+ KFD_IRQ_FENCE_CLIENTID;
+- irq_drain_fence[3] = pdd->process->pasid;
++ irq_drain_fence[3] = pdd->pasid;
+
+ /*
+ * For GFX 9.4.3/9.5.0, send the NodeId also in IH cookie DW[3]
+@@ -2173,7 +2188,7 @@ void kfd_process_close_interrupt_drain(unsigned int pasid)
+ {
+ struct kfd_process *p;
+
+- p = kfd_lookup_process_by_pasid(pasid);
++ p = kfd_lookup_process_by_pasid(pasid, NULL);
+
+ if (!p)
+ return;
+@@ -2294,8 +2309,8 @@ int kfd_debugfs_mqds_by_process(struct seq_file *m, void *data)
+ int idx = srcu_read_lock(&kfd_processes_srcu);
+
+ hash_for_each_rcu(kfd_processes_table, temp, p, kfd_processes) {
+- seq_printf(m, "Process %d PASID 0x%x:\n",
+- p->lead_thread->tgid, p->pasid);
++ seq_printf(m, "Process %d PASID %d:\n",
++ p->lead_thread->tgid, p->lead_thread->pid);
+
+ mutex_lock(&p->mutex);
+ r = pqm_debugfs_mqds(m, &p->pqm);
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
+index d79caa1a68676d..662c595ce7838f 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
+@@ -69,8 +69,8 @@ static int find_available_queue_slot(struct process_queue_manager *pqm,
+ pr_debug("The new slot id %lu\n", found);
+
+ if (found >= KFD_MAX_NUM_OF_QUEUES_PER_PROCESS) {
+- pr_info("Cannot open more queues for process with pasid 0x%x\n",
+- pqm->process->pasid);
++ pr_info("Cannot open more queues for process with pid %d\n",
++ pqm->process->lead_thread->pid);
+ return -ENOMEM;
+ }
+
+@@ -451,8 +451,8 @@ int pqm_create_queue(struct process_queue_manager *pqm,
+ }
+
+ if (retval != 0) {
+- pr_err("Pasid 0x%x DQM create queue type %d failed. ret %d\n",
+- pqm->process->pasid, type, retval);
++ pr_err("process pid %d DQM create queue type %d failed. ret %d\n",
++ pqm->process->lead_thread->pid, type, retval);
+ goto err_create_queue;
+ }
+
+@@ -546,7 +546,7 @@ int pqm_destroy_queue(struct process_queue_manager *pqm, unsigned int qid)
+ retval = dqm->ops.destroy_queue(dqm, &pdd->qpd, pqn->q);
+ if (retval) {
+ pr_err("Pasid 0x%x destroy queue %d failed, ret %d\n",
+- pqm->process->pasid,
++ pdd->pasid,
+ pqn->q->properties.queue_id, retval);
+ if (retval != -ETIME && retval != -EIO)
+ goto err_destroy_queue;
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
+index d1cf9dd352904c..116116a9f5781a 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
+@@ -563,7 +563,8 @@ svm_range_vram_node_new(struct kfd_node *node, struct svm_range *prange,
+ int r;
+
+ p = container_of(prange->svms, struct kfd_process, svms);
+- pr_debug("pasid: %x svms 0x%p [0x%lx 0x%lx]\n", p->pasid, prange->svms,
++ pr_debug("process pid: %d svms 0x%p [0x%lx 0x%lx]\n",
++ p->lead_thread->pid, prange->svms,
+ prange->start, prange->last);
+
+ if (svm_range_validate_svm_bo(node, prange))
+@@ -2973,7 +2974,7 @@ svm_range_restore_pages(struct amdgpu_device *adev, unsigned int pasid,
+ return -EFAULT;
+ }
+
+- p = kfd_lookup_process_by_pasid(pasid);
++ p = kfd_lookup_process_by_pasid(pasid, NULL);
+ if (!p) {
+ pr_debug("kfd process not founded pasid 0x%x\n", pasid);
+ return 0;
+@@ -3024,7 +3025,7 @@ svm_range_restore_pages(struct amdgpu_device *adev, unsigned int pasid,
+
+ /* check if this page fault time stamp is before svms->checkpoint_ts */
+ if (svms->checkpoint_ts[gpuidx] != 0) {
+- if (amdgpu_ih_ts_after(ts, svms->checkpoint_ts[gpuidx])) {
++ if (amdgpu_ih_ts_after_or_equal(ts, svms->checkpoint_ts[gpuidx])) {
+ pr_debug("draining retry fault, drop fault 0x%llx\n", addr);
+ r = -EAGAIN;
+ goto out_unlock_svms;
+@@ -3239,7 +3240,8 @@ void svm_range_list_fini(struct kfd_process *p)
+ struct svm_range *prange;
+ struct svm_range *next;
+
+- pr_debug("pasid 0x%x svms 0x%p\n", p->pasid, &p->svms);
++ pr_debug("process pid %d svms 0x%p\n", p->lead_thread->pid,
++ &p->svms);
+
+ cancel_delayed_work_sync(&p->svms.restore_work);
+
+@@ -3262,7 +3264,8 @@ void svm_range_list_fini(struct kfd_process *p)
+
+ mutex_destroy(&p->svms.lock);
+
+- pr_debug("pasid 0x%x svms 0x%p done\n", p->pasid, &p->svms);
++ pr_debug("process pid %d svms 0x%p done\n",
++ p->lead_thread->pid, &p->svms);
+ }
+
+ int svm_range_list_init(struct kfd_process *p)
+@@ -3625,8 +3628,8 @@ svm_range_set_attr(struct kfd_process *p, struct mm_struct *mm,
+ bool flush_tlb;
+ int r, ret = 0;
+
+- pr_debug("pasid 0x%x svms 0x%p [0x%llx 0x%llx] pages 0x%llx\n",
+- p->pasid, &p->svms, start, start + size - 1, size);
++ pr_debug("process pid %d svms 0x%p [0x%llx 0x%llx] pages 0x%llx\n",
++ p->lead_thread->pid, &p->svms, start, start + size - 1, size);
+
+ r = svm_range_check_attr(p, nattr, attrs);
+ if (r)
+@@ -3734,8 +3737,8 @@ svm_range_set_attr(struct kfd_process *p, struct mm_struct *mm,
+ out:
+ mutex_unlock(&process_info->lock);
+
+- pr_debug("pasid 0x%x svms 0x%p [0x%llx 0x%llx] done, r=%d\n", p->pasid,
+- &p->svms, start, start + size - 1, r);
++ pr_debug("process pid %d svms 0x%p [0x%llx 0x%llx] done, r=%d\n",
++ p->lead_thread->pid, &p->svms, start, start + size - 1, r);
+
+ return ret ? ret : r;
+ }
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_topology.c b/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
+index 62a9a9ccf9bb63..98317eda2cdb44 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
+@@ -1683,17 +1683,32 @@ static int fill_in_l2_l3_pcache(struct kfd_cache_properties **props_ext,
+ int cache_type, unsigned int cu_processor_id,
+ struct kfd_node *knode)
+ {
+- unsigned int cu_sibling_map_mask;
++ unsigned int cu_sibling_map_mask = 0;
+ int first_active_cu;
+ int i, j, k, xcc, start, end;
+ int num_xcc = NUM_XCC(knode->xcc_mask);
+ struct kfd_cache_properties *pcache = NULL;
+ enum amdgpu_memory_partition mode;
+ struct amdgpu_device *adev = knode->adev;
++ bool found = false;
+
+ start = ffs(knode->xcc_mask) - 1;
+ end = start + num_xcc;
+- cu_sibling_map_mask = cu_info->bitmap[start][0][0];
++
++ /* To find the bitmap in the first active cu in the first
++ * xcc, it is based on the assumption that evrey xcc must
++ * have at least one active cu.
++ */
++ for (i = 0; i < gfx_info->max_shader_engines && !found; i++) {
++ for (j = 0; j < gfx_info->max_sh_per_se && !found; j++) {
++ if (cu_info->bitmap[start][i % 4][j % 4]) {
++ cu_sibling_map_mask =
++ cu_info->bitmap[start][i % 4][j % 4];
++ found = true;
++ }
++ }
++ }
++
+ cu_sibling_map_mask &=
+ ((1 << pcache_info[cache_type].num_cu_shared) - 1);
+ first_active_cu = ffs(cu_sibling_map_mask);
+@@ -2006,10 +2021,6 @@ static void kfd_topology_set_capabilities(struct kfd_topology_device *dev)
+ dev->node_props.debug_prop |= HSA_DBG_WATCH_ADDR_MASK_LO_BIT_GFX10 |
+ HSA_DBG_WATCH_ADDR_MASK_HI_BIT;
+
+- if (KFD_GC_VERSION(dev->gpu) >= IP_VERSION(11, 0, 0))
+- dev->node_props.capability |=
+- HSA_CAP_TRAP_DEBUG_PRECISE_MEMORY_OPERATIONS_SUPPORTED;
+-
+ if (KFD_GC_VERSION(dev->gpu) >= IP_VERSION(12, 0, 0))
+ dev->node_props.capability |=
+ HSA_CAP_TRAP_DEBUG_PRECISE_ALU_OPERATIONS_SUPPORTED;
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 4a8d76a4f3ce6e..1a7bfc548d7025 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -367,6 +367,8 @@ get_crtc_by_otg_inst(struct amdgpu_device *adev,
+ static inline bool is_dc_timing_adjust_needed(struct dm_crtc_state *old_state,
+ struct dm_crtc_state *new_state)
+ {
++ if (new_state->stream->adjust.timing_adjust_pending)
++ return true;
+ if (new_state->freesync_config.state == VRR_STATE_ACTIVE_FIXED)
+ return true;
+ else if (amdgpu_dm_crtc_vrr_active(old_state) != amdgpu_dm_crtc_vrr_active(new_state))
+@@ -3390,11 +3392,6 @@ static int dm_resume(struct amdgpu_ip_block *ip_block)
+
+ return 0;
+ }
+-
+- /* leave display off for S4 sequence */
+- if (adev->in_s4)
+- return 0;
+-
+ /* Recreate dc_state - DC invalidates it when setting power state to S3. */
+ dc_state_release(dm_state->context);
+ dm_state->context = dc_state_create(dm->dc, NULL);
+@@ -5648,9 +5645,9 @@ fill_plane_color_attributes(const struct drm_plane_state *plane_state,
+
+ case DRM_COLOR_YCBCR_BT2020:
+ if (full_range)
+- *color_space = COLOR_SPACE_2020_YCBCR;
++ *color_space = COLOR_SPACE_2020_YCBCR_FULL;
+ else
+- return -EINVAL;
++ *color_space = COLOR_SPACE_2020_YCBCR_LIMITED;
+ break;
+
+ default:
+@@ -6146,7 +6143,7 @@ get_output_color_space(const struct dc_crtc_timing *dc_crtc_timing,
+ if (dc_crtc_timing->pixel_encoding == PIXEL_ENCODING_RGB)
+ color_space = COLOR_SPACE_2020_RGB_FULLRANGE;
+ else
+- color_space = COLOR_SPACE_2020_YCBCR;
++ color_space = COLOR_SPACE_2020_YCBCR_LIMITED;
+ break;
+ case DRM_MODE_COLORIMETRY_DEFAULT: // ITU601
+ default:
+@@ -7491,12 +7488,12 @@ static enum dc_status dm_validate_stream_and_context(struct dc *dc,
+ }
+
+ struct dc_stream_state *
+-create_validate_stream_for_sink(struct amdgpu_dm_connector *aconnector,
++create_validate_stream_for_sink(struct drm_connector *connector,
+ const struct drm_display_mode *drm_mode,
+ const struct dm_connector_state *dm_state,
+ const struct dc_stream_state *old_stream)
+ {
+- struct drm_connector *connector = &aconnector->base;
++ struct amdgpu_dm_connector *aconnector = NULL;
+ struct amdgpu_device *adev = drm_to_adev(connector->dev);
+ struct dc_stream_state *stream;
+ const struct drm_connector_state *drm_state = dm_state ? &dm_state->base : NULL;
+@@ -7507,8 +7504,12 @@ create_validate_stream_for_sink(struct amdgpu_dm_connector *aconnector,
+ if (!dm_state)
+ return NULL;
+
+- if (aconnector->dc_link->connector_signal == SIGNAL_TYPE_HDMI_TYPE_A ||
+- aconnector->dc_link->dpcd_caps.dongle_type == DISPLAY_DONGLE_DP_HDMI_CONVERTER)
++ if (connector->connector_type != DRM_MODE_CONNECTOR_WRITEBACK)
++ aconnector = to_amdgpu_dm_connector(connector);
++
++ if (aconnector &&
++ (aconnector->dc_link->connector_signal == SIGNAL_TYPE_HDMI_TYPE_A ||
++ aconnector->dc_link->dpcd_caps.dongle_type == DISPLAY_DONGLE_DP_HDMI_CONVERTER))
+ bpc_limit = 8;
+
+ do {
+@@ -7520,10 +7521,11 @@ create_validate_stream_for_sink(struct amdgpu_dm_connector *aconnector,
+ break;
+ }
+
+- if (aconnector->base.connector_type == DRM_MODE_CONNECTOR_WRITEBACK)
++ dc_result = dc_validate_stream(adev->dm.dc, stream);
++
++ if (!aconnector) /* writeback connector */
+ return stream;
+
+- dc_result = dc_validate_stream(adev->dm.dc, stream);
+ if (dc_result == DC_OK && stream->signal == SIGNAL_TYPE_DISPLAY_PORT_MST)
+ dc_result = dm_dp_mst_is_port_support_mode(aconnector, stream);
+
+@@ -7553,7 +7555,7 @@ create_validate_stream_for_sink(struct amdgpu_dm_connector *aconnector,
+ __func__, __LINE__);
+
+ aconnector->force_yuv420_output = true;
+- stream = create_validate_stream_for_sink(aconnector, drm_mode,
++ stream = create_validate_stream_for_sink(connector, drm_mode,
+ dm_state, old_stream);
+ aconnector->force_yuv420_output = false;
+ }
+@@ -7568,6 +7570,9 @@ enum drm_mode_status amdgpu_dm_connector_mode_valid(struct drm_connector *connec
+ struct dc_sink *dc_sink;
+ /* TODO: Unhardcode stream count */
+ struct dc_stream_state *stream;
++ /* we always have an amdgpu_dm_connector here since we got
++ * here via the amdgpu_dm_connector_helper_funcs
++ */
+ struct amdgpu_dm_connector *aconnector = to_amdgpu_dm_connector(connector);
+
+ if ((mode->flags & DRM_MODE_FLAG_INTERLACE) ||
+@@ -7592,7 +7597,7 @@ enum drm_mode_status amdgpu_dm_connector_mode_valid(struct drm_connector *connec
+
+ drm_mode_set_crtcinfo(mode, 0);
+
+- stream = create_validate_stream_for_sink(aconnector, mode,
++ stream = create_validate_stream_for_sink(connector, mode,
+ to_dm_connector_state(connector->state),
+ NULL);
+ if (stream) {
+@@ -8372,7 +8377,7 @@ static int amdgpu_dm_i2c_xfer(struct i2c_adapter *i2c_adap,
+ int i;
+ int result = -EIO;
+
+- if (!ddc_service->ddc_pin || !ddc_service->ddc_pin->hw_info.hw_supported)
++ if (!ddc_service->ddc_pin)
+ return result;
+
+ cmd.payloads = kcalloc(num, sizeof(struct i2c_payload), GFP_KERNEL);
+@@ -10644,7 +10649,7 @@ static int dm_update_crtc_state(struct amdgpu_display_manager *dm,
+ if (!drm_atomic_crtc_needs_modeset(new_crtc_state))
+ goto skip_modeset;
+
+- new_stream = create_validate_stream_for_sink(aconnector,
++ new_stream = create_validate_stream_for_sink(connector,
+ &new_crtc_state->mode,
+ dm_new_conn_state,
+ dm_old_crtc_state->stream);
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
+index d2703ca7dff31d..195fec9048df7b 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
+@@ -989,7 +989,7 @@ int amdgpu_dm_process_dmub_set_config_sync(struct dc_context *ctx, unsigned int
+ struct set_config_cmd_payload *payload, enum set_config_status *operation_result);
+
+ struct dc_stream_state *
+- create_validate_stream_for_sink(struct amdgpu_dm_connector *aconnector,
++ create_validate_stream_for_sink(struct drm_connector *connector,
+ const struct drm_display_mode *drm_mode,
+ const struct dm_connector_state *dm_state,
+ const struct dc_stream_state *old_stream);
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
+index 049046c6046269..c7d13e743e6c8c 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
+@@ -1169,7 +1169,7 @@ static int amdgpu_current_colorspace_show(struct seq_file *m, void *data)
+ case COLOR_SPACE_2020_RGB_FULLRANGE:
+ seq_puts(m, "BT2020_RGB");
+ break;
+- case COLOR_SPACE_2020_YCBCR:
++ case COLOR_SPACE_2020_YCBCR_LIMITED:
+ seq_puts(m, "BT2020_YCC");
+ break;
+ default:
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+index 8497de360640a3..c3759a1c32ceca 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+@@ -1651,7 +1651,6 @@ int pre_validate_dsc(struct drm_atomic_state *state,
+
+ if (ind >= 0) {
+ struct drm_connector *connector;
+- struct amdgpu_dm_connector *aconnector;
+ struct drm_connector_state *drm_new_conn_state;
+ struct dm_connector_state *dm_new_conn_state;
+ struct dm_crtc_state *dm_old_crtc_state;
+@@ -1659,15 +1658,14 @@ int pre_validate_dsc(struct drm_atomic_state *state,
+ connector =
+ amdgpu_dm_find_first_crtc_matching_connector(state,
+ state->crtcs[ind].ptr);
+- aconnector = to_amdgpu_dm_connector(connector);
+ drm_new_conn_state =
+ drm_atomic_get_new_connector_state(state,
+- &aconnector->base);
++ connector);
+ dm_new_conn_state = to_dm_connector_state(drm_new_conn_state);
+ dm_old_crtc_state = to_dm_crtc_state(state->crtcs[ind].old_state);
+
+ local_dc_state->streams[i] =
+- create_validate_stream_for_sink(aconnector,
++ create_validate_stream_for_sink(connector,
+ &state->crtcs[ind].new_state->mode,
+ dm_new_conn_state,
+ dm_old_crtc_state->stream);
+diff --git a/drivers/gpu/drm/amd/display/dc/basics/dc_common.c b/drivers/gpu/drm/amd/display/dc/basics/dc_common.c
+index b2fc4f8e648250..a51c2701da247f 100644
+--- a/drivers/gpu/drm/amd/display/dc/basics/dc_common.c
++++ b/drivers/gpu/drm/amd/display/dc/basics/dc_common.c
+@@ -40,7 +40,8 @@ bool is_rgb_cspace(enum dc_color_space output_color_space)
+ case COLOR_SPACE_YCBCR709:
+ case COLOR_SPACE_YCBCR601_LIMITED:
+ case COLOR_SPACE_YCBCR709_LIMITED:
+- case COLOR_SPACE_2020_YCBCR:
++ case COLOR_SPACE_2020_YCBCR_LIMITED:
++ case COLOR_SPACE_2020_YCBCR_FULL:
+ return false;
+ default:
+ /* Add a case to switch */
+diff --git a/drivers/gpu/drm/amd/display/dc/bios/command_table2.c b/drivers/gpu/drm/amd/display/dc/bios/command_table2.c
+index 7d18f372ce7ab4..6bc59b7ef007b9 100644
+--- a/drivers/gpu/drm/amd/display/dc/bios/command_table2.c
++++ b/drivers/gpu/drm/amd/display/dc/bios/command_table2.c
+@@ -101,7 +101,6 @@ static void init_dig_encoder_control(struct bios_parser *bp)
+ bp->cmd_tbl.dig_encoder_control = encoder_control_digx_v1_5;
+ break;
+ default:
+- dm_output_to_console("Don't have dig_encoder_control for v%d\n", version);
+ bp->cmd_tbl.dig_encoder_control = encoder_control_fallback;
+ break;
+ }
+@@ -238,7 +237,6 @@ static void init_transmitter_control(struct bios_parser *bp)
+ bp->cmd_tbl.transmitter_control = transmitter_control_v1_7;
+ break;
+ default:
+- dm_output_to_console("Don't have transmitter_control for v%d\n", crev);
+ bp->cmd_tbl.transmitter_control = transmitter_control_fallback;
+ break;
+ }
+@@ -408,8 +406,6 @@ static void init_set_pixel_clock(struct bios_parser *bp)
+ bp->cmd_tbl.set_pixel_clock = set_pixel_clock_v7;
+ break;
+ default:
+- dm_output_to_console("Don't have set_pixel_clock for v%d\n",
+- BIOS_CMD_TABLE_PARA_REVISION(setpixelclock));
+ bp->cmd_tbl.set_pixel_clock = set_pixel_clock_fallback;
+ break;
+ }
+@@ -554,7 +550,6 @@ static void init_set_crtc_timing(struct bios_parser *bp)
+ set_crtc_using_dtd_timing_v3;
+ break;
+ default:
+- dm_output_to_console("Don't have set_crtc_timing for v%d\n", dtd_version);
+ bp->cmd_tbl.set_crtc_timing = NULL;
+ break;
+ }
+@@ -671,8 +666,6 @@ static void init_enable_crtc(struct bios_parser *bp)
+ bp->cmd_tbl.enable_crtc = enable_crtc_v1;
+ break;
+ default:
+- dm_output_to_console("Don't have enable_crtc for v%d\n",
+- BIOS_CMD_TABLE_PARA_REVISION(enablecrtc));
+ bp->cmd_tbl.enable_crtc = NULL;
+ break;
+ }
+@@ -864,8 +857,6 @@ static void init_set_dce_clock(struct bios_parser *bp)
+ bp->cmd_tbl.set_dce_clock = set_dce_clock_v2_1;
+ break;
+ default:
+- dm_output_to_console("Don't have set_dce_clock for v%d\n",
+- BIOS_CMD_TABLE_PARA_REVISION(setdceclock));
+ bp->cmd_tbl.set_dce_clock = NULL;
+ break;
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/bios/command_table_helper2.c b/drivers/gpu/drm/amd/display/dc/bios/command_table_helper2.c
+index 73458e2951034a..df8139bda142bf 100644
+--- a/drivers/gpu/drm/amd/display/dc/bios/command_table_helper2.c
++++ b/drivers/gpu/drm/amd/display/dc/bios/command_table_helper2.c
+@@ -87,8 +87,7 @@ bool dal_bios_parser_init_cmd_tbl_helper2(
+ return true;
+
+ default:
+- /* Unsupported DCE */
+- BREAK_TO_DEBUGGER();
++ *h = dal_cmd_tbl_helper_dce112_get_table2();
+ return false;
+ }
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn315/dcn315_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn315/dcn315_clk_mgr.c
+index a0fb4481d2f1b1..e4d22f74f98691 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn315/dcn315_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn315/dcn315_clk_mgr.c
+@@ -130,7 +130,7 @@ static void dcn315_update_clocks(struct clk_mgr *clk_mgr_base,
+ struct clk_mgr_internal *clk_mgr = TO_CLK_MGR_INTERNAL(clk_mgr_base);
+ struct dc_clocks *new_clocks = &context->bw_ctx.bw.dcn.clk;
+ struct dc *dc = clk_mgr_base->ctx->dc;
+- int display_count;
++ int display_count = 0;
+ bool update_dppclk = false;
+ bool update_dispclk = false;
+ bool dpp_clock_lowered = false;
+@@ -194,8 +194,6 @@ static void dcn315_update_clocks(struct clk_mgr *clk_mgr_base,
+ // workaround: Limit dppclk to 100Mhz to avoid lower eDP panel switch to plus 4K monitor underflow.
+ if (new_clocks->dppclk_khz < MIN_DPP_DISP_CLK)
+ new_clocks->dppclk_khz = MIN_DPP_DISP_CLK;
+- if (new_clocks->dispclk_khz < MIN_DPP_DISP_CLK)
+- new_clocks->dispclk_khz = MIN_DPP_DISP_CLK;
+
+ if (should_set_clock(safe_to_lower, new_clocks->dppclk_khz, clk_mgr->base.clks.dppclk_khz)) {
+ if (clk_mgr->base.clks.dppclk_khz > new_clocks->dppclk_khz)
+@@ -204,15 +202,19 @@ static void dcn315_update_clocks(struct clk_mgr *clk_mgr_base,
+ update_dppclk = true;
+ }
+
+- if (should_set_clock(safe_to_lower, new_clocks->dispclk_khz, clk_mgr_base->clks.dispclk_khz)) {
+- /* No need to apply the w/a if we haven't taken over from bios yet */
+- if (clk_mgr_base->clks.dispclk_khz)
+- dcn315_disable_otg_wa(clk_mgr_base, context, true);
++ if (should_set_clock(safe_to_lower, new_clocks->dispclk_khz, clk_mgr_base->clks.dispclk_khz) &&
++ (new_clocks->dispclk_khz > 0 || (safe_to_lower && display_count == 0))) {
++ int requested_dispclk_khz = new_clocks->dispclk_khz;
+
++ dcn315_disable_otg_wa(clk_mgr_base, context, true);
++
++ /* Clamp the requested clock to the PMFW limit. */
++ if (dc->debug.min_disp_clk_khz > 0 && requested_dispclk_khz < dc->debug.min_disp_clk_khz)
++ requested_dispclk_khz = dc->debug.min_disp_clk_khz;
++
++ dcn315_smu_set_dispclk(clk_mgr, requested_dispclk_khz);
+ clk_mgr_base->clks.dispclk_khz = new_clocks->dispclk_khz;
+- dcn315_smu_set_dispclk(clk_mgr, clk_mgr_base->clks.dispclk_khz);
+- if (clk_mgr_base->clks.dispclk_khz)
+- dcn315_disable_otg_wa(clk_mgr_base, context, false);
++ dcn315_disable_otg_wa(clk_mgr_base, context, false);
+
+ update_dispclk = true;
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn316/dcn316_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn316/dcn316_clk_mgr.c
+index c3e50c3aaa609e..49efea0c8fcffa 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn316/dcn316_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn316/dcn316_clk_mgr.c
+@@ -140,7 +140,7 @@ static void dcn316_update_clocks(struct clk_mgr *clk_mgr_base,
+ struct clk_mgr_internal *clk_mgr = TO_CLK_MGR_INTERNAL(clk_mgr_base);
+ struct dc_clocks *new_clocks = &context->bw_ctx.bw.dcn.clk;
+ struct dc *dc = clk_mgr_base->ctx->dc;
+- int display_count;
++ int display_count = 0;
+ bool update_dppclk = false;
+ bool update_dispclk = false;
+ bool dpp_clock_lowered = false;
+@@ -201,8 +201,6 @@ static void dcn316_update_clocks(struct clk_mgr *clk_mgr_base,
+ // workaround: Limit dppclk to 100Mhz to avoid lower eDP panel switch to plus 4K monitor underflow.
+ if (new_clocks->dppclk_khz < 100000)
+ new_clocks->dppclk_khz = 100000;
+- if (new_clocks->dispclk_khz < 100000)
+- new_clocks->dispclk_khz = 100000;
+
+ if (should_set_clock(safe_to_lower, new_clocks->dppclk_khz, clk_mgr->base.clks.dppclk_khz)) {
+ if (clk_mgr->base.clks.dppclk_khz > new_clocks->dppclk_khz)
+@@ -211,11 +209,18 @@ static void dcn316_update_clocks(struct clk_mgr *clk_mgr_base,
+ update_dppclk = true;
+ }
+
+- if (should_set_clock(safe_to_lower, new_clocks->dispclk_khz, clk_mgr_base->clks.dispclk_khz)) {
++ if (should_set_clock(safe_to_lower, new_clocks->dispclk_khz, clk_mgr_base->clks.dispclk_khz) &&
++ (new_clocks->dispclk_khz > 0 || (safe_to_lower && display_count == 0))) {
++ int requested_dispclk_khz = new_clocks->dispclk_khz;
++
+ dcn316_disable_otg_wa(clk_mgr_base, context, safe_to_lower, true);
+
++ /* Clamp the requested clock to the PMFW limit. */
++ if (dc->debug.min_disp_clk_khz > 0 && requested_dispclk_khz < dc->debug.min_disp_clk_khz)
++ requested_dispclk_khz = dc->debug.min_disp_clk_khz;
++
++ dcn316_smu_set_dispclk(clk_mgr, requested_dispclk_khz);
+ clk_mgr_base->clks.dispclk_khz = new_clocks->dispclk_khz;
+- dcn316_smu_set_dispclk(clk_mgr, clk_mgr_base->clks.dispclk_khz);
+ dcn316_disable_otg_wa(clk_mgr_base, context, safe_to_lower, false);
+
+ update_dispclk = true;
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c
+index 1648226586e22c..1f47931c2dafcf 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c
+@@ -467,14 +467,19 @@ void dcn35_update_clocks(struct clk_mgr *clk_mgr_base,
+ update_dppclk = true;
+ }
+
+- if (should_set_clock(safe_to_lower, new_clocks->dispclk_khz, clk_mgr_base->clks.dispclk_khz)) {
++ if (should_set_clock(safe_to_lower, new_clocks->dispclk_khz, clk_mgr_base->clks.dispclk_khz) &&
++ (new_clocks->dispclk_khz > 0 || (safe_to_lower && display_count == 0))) {
++ int requested_dispclk_khz = new_clocks->dispclk_khz;
++
+ dcn35_disable_otg_wa(clk_mgr_base, context, safe_to_lower, true);
+
+- if (dc->debug.min_disp_clk_khz > 0 && new_clocks->dispclk_khz < dc->debug.min_disp_clk_khz)
+- new_clocks->dispclk_khz = dc->debug.min_disp_clk_khz;
++ /* Clamp the requested clock to the PMFW limit. */
++ if (dc->debug.min_disp_clk_khz > 0 && requested_dispclk_khz < dc->debug.min_disp_clk_khz)
++ requested_dispclk_khz = dc->debug.min_disp_clk_khz;
+
++ dcn35_smu_set_dispclk(clk_mgr, requested_dispclk_khz);
+ clk_mgr_base->clks.dispclk_khz = new_clocks->dispclk_khz;
+- dcn35_smu_set_dispclk(clk_mgr, clk_mgr_base->clks.dispclk_khz);
++
+ dcn35_disable_otg_wa(clk_mgr_base, context, safe_to_lower, false);
+
+ update_dispclk = true;
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn401/dcn401_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn401/dcn401_clk_mgr.c
+index 8082bb8776114a..a3b8e3d4a429e3 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn401/dcn401_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn401/dcn401_clk_mgr.c
+@@ -24,6 +24,8 @@
+
+ #include "dml/dcn401/dcn401_fpu.h"
+
++#define DCN_BASE__INST0_SEG1 0x000000C0
++
+ #define mmCLK01_CLK0_CLK_PLL_REQ 0x16E37
+ #define mmCLK01_CLK0_CLK0_DFS_CNTL 0x16E69
+ #define mmCLK01_CLK0_CLK1_DFS_CNTL 0x16E6C
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index 4683c7ef4507f5..0ce0ad7f983963 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -276,6 +276,7 @@ static bool create_links(
+ link->link_id.type = OBJECT_TYPE_CONNECTOR;
+ link->link_id.id = CONNECTOR_ID_VIRTUAL;
+ link->link_id.enum_id = ENUM_ID_1;
++ link->psr_settings.psr_version = DC_PSR_VERSION_UNSUPPORTED;
+ link->link_enc = kzalloc(sizeof(*link->link_enc), GFP_KERNEL);
+
+ if (!link->link_enc) {
+@@ -438,9 +439,12 @@ bool dc_stream_adjust_vmin_vmax(struct dc *dc,
+ * Don't adjust DRR while there's bandwidth optimizations pending to
+ * avoid conflicting with firmware updates.
+ */
+- if (dc->ctx->dce_version > DCE_VERSION_MAX)
+- if (dc->optimized_required || dc->wm_optimized_required)
++ if (dc->ctx->dce_version > DCE_VERSION_MAX) {
++ if (dc->optimized_required || dc->wm_optimized_required) {
++ stream->adjust.timing_adjust_pending = true;
+ return false;
++ }
++ }
+
+ dc_exit_ips_for_hw_access(dc);
+
+@@ -452,6 +456,7 @@ bool dc_stream_adjust_vmin_vmax(struct dc *dc,
+
+ if (dc->caps.max_v_total != 0 &&
+ (adjust->v_total_max > dc->caps.max_v_total || adjust->v_total_min > dc->caps.max_v_total)) {
++ stream->adjust.timing_adjust_pending = false;
+ if (adjust->allow_otg_v_count_halt)
+ return set_long_vtotal(dc, stream, adjust);
+ else
+@@ -465,7 +470,7 @@ bool dc_stream_adjust_vmin_vmax(struct dc *dc,
+ dc->hwss.set_drr(&pipe,
+ 1,
+ *adjust);
+-
++ stream->adjust.timing_adjust_pending = false;
+ return true;
+ }
+ }
+@@ -3127,8 +3132,14 @@ static void copy_stream_update_to_stream(struct dc *dc,
+ if (update->vrr_active_fixed)
+ stream->vrr_active_fixed = *update->vrr_active_fixed;
+
+- if (update->crtc_timing_adjust)
++ if (update->crtc_timing_adjust) {
++ if (stream->adjust.v_total_min != update->crtc_timing_adjust->v_total_min ||
++ stream->adjust.v_total_max != update->crtc_timing_adjust->v_total_max ||
++ stream->adjust.timing_adjust_pending)
++ update->crtc_timing_adjust->timing_adjust_pending = true;
+ stream->adjust = *update->crtc_timing_adjust;
++ update->crtc_timing_adjust->timing_adjust_pending = false;
++ }
+
+ if (update->dpms_off)
+ stream->dpms_off = *update->dpms_off;
+@@ -4902,7 +4913,8 @@ static bool full_update_required(struct dc *dc,
+ stream_update->lut3d_func ||
+ stream_update->pending_test_pattern ||
+ stream_update->crtc_timing_adjust ||
+- stream_update->scaler_sharpener_update))
++ stream_update->scaler_sharpener_update ||
++ stream_update->hw_cursor_req))
+ return true;
+
+ if (stream) {
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/core/dc_hw_sequencer.c
+index 6eb9bae3af9127..4f54e75a8f95be 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_hw_sequencer.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_hw_sequencer.c
+@@ -176,7 +176,7 @@ static bool is_ycbcr2020_type(
+ {
+ bool ret = false;
+
+- if (color_space == COLOR_SPACE_2020_YCBCR)
++ if (color_space == COLOR_SPACE_2020_YCBCR_LIMITED || color_space == COLOR_SPACE_2020_YCBCR_FULL)
+ ret = true;
+ return ret;
+ }
+@@ -247,7 +247,8 @@ void color_space_to_black_color(
+ case COLOR_SPACE_YCBCR709_BLACK:
+ case COLOR_SPACE_YCBCR601_LIMITED:
+ case COLOR_SPACE_YCBCR709_LIMITED:
+- case COLOR_SPACE_2020_YCBCR:
++ case COLOR_SPACE_2020_YCBCR_LIMITED:
++ case COLOR_SPACE_2020_YCBCR_FULL:
+ *black_color = black_color_format[BLACK_COLOR_FORMAT_YUV_CV];
+ break;
+
+@@ -563,6 +564,7 @@ void set_p_state_switch_method(
+ if (!dc->ctx || !dc->ctx->dmub_srv || !pipe_ctx || !vba)
+ return;
+
++ pipe_ctx->p_state_type = P_STATE_UNKNOWN;
+ if (vba->DRAMClockChangeSupport[vba->VoltageLevel][vba->maxMpcComb] !=
+ dm_dram_clock_change_unsupported) {
+ /* MCLK switching is supported */
+@@ -609,6 +611,21 @@ void set_p_state_switch_method(
+ }
+ }
+
++void set_drr_and_clear_adjust_pending(
++ struct pipe_ctx *pipe_ctx,
++ struct dc_stream_state *stream,
++ struct drr_params *params)
++{
++ /* params can be NULL. */
++ if (pipe_ctx && pipe_ctx->stream_res.tg &&
++ pipe_ctx->stream_res.tg->funcs->set_drr)
++ pipe_ctx->stream_res.tg->funcs->set_drr(
++ pipe_ctx->stream_res.tg, params);
++
++ if (stream)
++ stream->adjust.timing_adjust_pending = false;
++}
++
+ void get_fams2_visual_confirm_color(
+ struct dc *dc,
+ struct dc_state *context,
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+index 298668e9729c76..375b3b1d1d1828 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+@@ -941,6 +941,17 @@ static void calculate_adjust_recout_for_visual_confirm(struct pipe_ctx *pipe_ctx
+ *base_offset = VISUAL_CONFIRM_BASE_DEFAULT;
+ }
+
++static void reverse_adjust_recout_for_visual_confirm(struct rect *recout,
++ struct pipe_ctx *pipe_ctx)
++{
++ int dpp_offset, base_offset;
++
++ calculate_adjust_recout_for_visual_confirm(pipe_ctx, &base_offset,
++ &dpp_offset);
++ recout->height += base_offset;
++ recout->height += dpp_offset;
++}
++
+ static void adjust_recout_for_visual_confirm(struct rect *recout,
+ struct pipe_ctx *pipe_ctx)
+ {
+@@ -1642,6 +1653,62 @@ bool resource_build_scaling_params(struct pipe_ctx *pipe_ctx)
+ return res;
+ }
+
++bool resource_can_pipe_disable_cursor(struct pipe_ctx *pipe_ctx)
++{
++ struct pipe_ctx *test_pipe, *split_pipe;
++ struct rect r1 = pipe_ctx->plane_res.scl_data.recout;
++ int r1_right, r1_bottom;
++ int cur_layer = pipe_ctx->plane_state->layer_index;
++
++ reverse_adjust_recout_for_visual_confirm(&r1, pipe_ctx);
++ r1_right = r1.x + r1.width;
++ r1_bottom = r1.y + r1.height;
++
++ /**
++ * Disable the cursor if there's another pipe above this with a
++ * plane that contains this pipe's viewport to prevent double cursor
++ * and incorrect scaling artifacts.
++ */
++ for (test_pipe = pipe_ctx->top_pipe; test_pipe;
++ test_pipe = test_pipe->top_pipe) {
++ struct rect r2;
++ int r2_right, r2_bottom;
++ // Skip invisible layer and pipe-split plane on same layer
++ if (!test_pipe->plane_state ||
++ !test_pipe->plane_state->visible ||
++ test_pipe->plane_state->layer_index == cur_layer)
++ continue;
++
++ r2 = test_pipe->plane_res.scl_data.recout;
++ reverse_adjust_recout_for_visual_confirm(&r2, test_pipe);
++ r2_right = r2.x + r2.width;
++ r2_bottom = r2.y + r2.height;
++
++ /**
++ * There is another half plane on same layer because of
++ * pipe-split, merge together per same height.
++ */
++ for (split_pipe = pipe_ctx->top_pipe; split_pipe;
++ split_pipe = split_pipe->top_pipe)
++ if (split_pipe->plane_state->layer_index == test_pipe->plane_state->layer_index) {
++ struct rect r2_half;
++
++ r2_half = split_pipe->plane_res.scl_data.recout;
++ reverse_adjust_recout_for_visual_confirm(&r2_half, split_pipe);
++ r2.x = min(r2_half.x, r2.x);
++ r2.width = r2.width + r2_half.width;
++ r2_right = r2.x + r2.width;
++ r2_bottom = min(r2_bottom, r2_half.y + r2_half.height);
++ break;
++ }
++
++ if (r1.x >= r2.x && r1.y >= r2.y && r1_right <= r2_right && r1_bottom <= r2_bottom)
++ return true;
++ }
++
++ return false;
++}
++
+
+ enum dc_status resource_build_scaling_params_for_context(
+ const struct dc *dc,
+@@ -4247,7 +4314,7 @@ static void set_avi_info_frame(
+ break;
+ case COLOR_SPACE_2020_RGB_FULLRANGE:
+ case COLOR_SPACE_2020_RGB_LIMITEDRANGE:
+- case COLOR_SPACE_2020_YCBCR:
++ case COLOR_SPACE_2020_YCBCR_LIMITED:
+ hdmi_info.bits.EC0_EC2 = COLORIMETRYEX_BT2020RGBYCBCR;
+ hdmi_info.bits.C0_C1 = COLORIMETRY_EXTENDED;
+ break;
+@@ -4261,7 +4328,7 @@ static void set_avi_info_frame(
+ break;
+ }
+
+- if (pixel_encoding && color_space == COLOR_SPACE_2020_YCBCR &&
++ if (pixel_encoding && color_space == COLOR_SPACE_2020_YCBCR_LIMITED &&
+ stream->out_transfer_func.tf == TRANSFER_FUNCTION_GAMMA22) {
+ hdmi_info.bits.EC0_EC2 = 0;
+ hdmi_info.bits.C0_C1 = COLORIMETRY_ITU709;
+diff --git a/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c b/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c
+index 44ff9abe2880f3..87b4c2793df3c2 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c
++++ b/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c
+@@ -991,57 +991,11 @@ void dc_dmub_srv_log_diagnostic_data(struct dc_dmub_srv *dc_dmub_srv)
+ DC_LOG_DEBUG(" is_cw6_en : %d", diag_data.is_cw6_enabled);
+ }
+
+-static bool dc_can_pipe_disable_cursor(struct pipe_ctx *pipe_ctx)
+-{
+- struct pipe_ctx *test_pipe, *split_pipe;
+- const struct scaler_data *scl_data = &pipe_ctx->plane_res.scl_data;
+- struct rect r1 = scl_data->recout, r2, r2_half;
+- int r1_r = r1.x + r1.width, r1_b = r1.y + r1.height, r2_r, r2_b;
+- int cur_layer = pipe_ctx->plane_state->layer_index;
+-
+- /**
+- * Disable the cursor if there's another pipe above this with a
+- * plane that contains this pipe's viewport to prevent double cursor
+- * and incorrect scaling artifacts.
+- */
+- for (test_pipe = pipe_ctx->top_pipe; test_pipe;
+- test_pipe = test_pipe->top_pipe) {
+- // Skip invisible layer and pipe-split plane on same layer
+- if (!test_pipe->plane_state->visible || test_pipe->plane_state->layer_index == cur_layer)
+- continue;
+-
+- r2 = test_pipe->plane_res.scl_data.recout;
+- r2_r = r2.x + r2.width;
+- r2_b = r2.y + r2.height;
+-
+- /**
+- * There is another half plane on same layer because of
+- * pipe-split, merge together per same height.
+- */
+- for (split_pipe = pipe_ctx->top_pipe; split_pipe;
+- split_pipe = split_pipe->top_pipe)
+- if (split_pipe->plane_state->layer_index == test_pipe->plane_state->layer_index) {
+- r2_half = split_pipe->plane_res.scl_data.recout;
+- r2.x = (r2_half.x < r2.x) ? r2_half.x : r2.x;
+- r2.width = r2.width + r2_half.width;
+- r2_r = r2.x + r2.width;
+- break;
+- }
+-
+- if (r1.x >= r2.x && r1.y >= r2.y && r1_r <= r2_r && r1_b <= r2_b)
+- return true;
+- }
+-
+- return false;
+-}
+-
+ static bool dc_dmub_should_update_cursor_data(struct pipe_ctx *pipe_ctx)
+ {
+ if (pipe_ctx->plane_state != NULL) {
+- if (pipe_ctx->plane_state->address.type == PLN_ADDR_TYPE_VIDEO_PROGRESSIVE)
+- return false;
+-
+- if (dc_can_pipe_disable_cursor(pipe_ctx))
++ if (pipe_ctx->plane_state->address.type == PLN_ADDR_TYPE_VIDEO_PROGRESSIVE ||
++ resource_can_pipe_disable_cursor(pipe_ctx))
+ return false;
+ }
+
+diff --git a/drivers/gpu/drm/amd/display/dc/dc_dp_types.h b/drivers/gpu/drm/amd/display/dc/dc_dp_types.h
+index cc005da75ce4ce..8bb628ab78554e 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc_dp_types.h
++++ b/drivers/gpu/drm/amd/display/dc/dc_dp_types.h
+@@ -959,6 +959,14 @@ union dp_128b_132b_supported_lttpr_link_rates {
+ uint8_t raw;
+ };
+
++union dp_alpm_lttpr_cap {
++ struct {
++ uint8_t AUX_LESS_ALPM_SUPPORTED :1;
++ uint8_t RESERVED :7;
++ } bits;
++ uint8_t raw;
++};
++
+ union dp_sink_video_fallback_formats {
+ struct {
+ uint8_t dp_1024x768_60Hz_24bpp_support :1;
+@@ -1118,6 +1126,7 @@ struct dc_lttpr_caps {
+ uint8_t max_ext_timeout;
+ union dp_main_link_channel_coding_lttpr_cap main_link_channel_coding;
+ union dp_128b_132b_supported_lttpr_link_rates supported_128b_132b_rates;
++ union dp_alpm_lttpr_cap alpm;
+ uint8_t aux_rd_interval[MAX_REPEATER_CNT - 1];
+ uint8_t lttpr_ieee_oui[3];
+ uint8_t lttpr_device_id[6];
+@@ -1372,6 +1381,9 @@ struct dp_trace {
+ #ifndef DPCD_MAX_UNCOMPRESSED_PIXEL_RATE_CAP
+ #define DPCD_MAX_UNCOMPRESSED_PIXEL_RATE_CAP 0x221c
+ #endif
++#ifndef DP_LTTPR_ALPM_CAPABILITIES
++#define DP_LTTPR_ALPM_CAPABILITIES 0xF0009
++#endif
+ #ifndef DP_REPEATER_CONFIGURATION_AND_STATUS_SIZE
+ #define DP_REPEATER_CONFIGURATION_AND_STATUS_SIZE 0x50
+ #endif
+diff --git a/drivers/gpu/drm/amd/display/dc/dc_hw_types.h b/drivers/gpu/drm/amd/display/dc/dc_hw_types.h
+index 5ac55601a6da17..d562ddeca51260 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc_hw_types.h
++++ b/drivers/gpu/drm/amd/display/dc/dc_hw_types.h
+@@ -653,7 +653,8 @@ enum dc_color_space {
+ COLOR_SPACE_YCBCR709_LIMITED,
+ COLOR_SPACE_2020_RGB_FULLRANGE,
+ COLOR_SPACE_2020_RGB_LIMITEDRANGE,
+- COLOR_SPACE_2020_YCBCR,
++ COLOR_SPACE_2020_YCBCR_LIMITED,
++ COLOR_SPACE_2020_YCBCR_FULL,
+ COLOR_SPACE_ADOBERGB,
+ COLOR_SPACE_DCIP3,
+ COLOR_SPACE_DISPLAYNATIVE,
+@@ -661,6 +662,7 @@ enum dc_color_space {
+ COLOR_SPACE_APPCTRL,
+ COLOR_SPACE_CUSTOMPOINTS,
+ COLOR_SPACE_YCBCR709_BLACK,
++ COLOR_SPACE_2020_YCBCR = COLOR_SPACE_2020_YCBCR_LIMITED,
+ };
+
+ enum dc_dither_option {
+@@ -1015,6 +1017,7 @@ struct dc_crtc_timing_adjust {
+ uint32_t v_total_mid;
+ uint32_t v_total_mid_frame_num;
+ uint32_t allow_otg_v_count_halt;
++ uint8_t timing_adjust_pending;
+ };
+
+
+diff --git a/drivers/gpu/drm/amd/display/dc/dc_types.h b/drivers/gpu/drm/amd/display/dc/dc_types.h
+index 0c2aa91f0a1113..e60898c2df01a7 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc_types.h
++++ b/drivers/gpu/drm/amd/display/dc/dc_types.h
+@@ -1033,6 +1033,13 @@ struct psr_settings {
+ unsigned int psr_sdp_transmit_line_num_deadline;
+ uint8_t force_ffu_mode;
+ unsigned int psr_power_opt;
++
++ /**
++ * Some panels cannot handle idle pattern during PSR entry.
++ * Power down the PHY before disabling the stream to avoid
++ * sending the idle pattern.
++ */
++ uint8_t power_down_phy_before_disable_stream;
+ };
+
+ enum replay_coasting_vtotal_type {
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_stream_encoder.c b/drivers/gpu/drm/amd/display/dc/dce/dce_stream_encoder.c
+index d199e4ed2e59e6..1130d7619b2637 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dce_stream_encoder.c
++++ b/drivers/gpu/drm/amd/display/dc/dce/dce_stream_encoder.c
+@@ -418,7 +418,7 @@ static void dce110_stream_encoder_dp_set_stream_attribute(
+ dynamic_range_rgb = 1; /*limited range*/
+ break;
+ case COLOR_SPACE_2020_RGB_FULLRANGE:
+- case COLOR_SPACE_2020_YCBCR:
++ case COLOR_SPACE_2020_YCBCR_LIMITED:
+ case COLOR_SPACE_XR_RGB:
+ case COLOR_SPACE_MSREF_SCRGB:
+ case COLOR_SPACE_ADOBERGB:
+@@ -430,6 +430,7 @@ static void dce110_stream_encoder_dp_set_stream_attribute(
+ case COLOR_SPACE_APPCTRL:
+ case COLOR_SPACE_CUSTOMPOINTS:
+ case COLOR_SPACE_UNKNOWN:
++ default:
+ /* do nothing */
+ break;
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dmub_psr.c b/drivers/gpu/drm/amd/display/dc/dce/dmub_psr.c
+index 88c75c243bf8ae..ff3b8244ba3d0b 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dmub_psr.c
++++ b/drivers/gpu/drm/amd/display/dc/dce/dmub_psr.c
+@@ -418,6 +418,10 @@ static bool dmub_psr_copy_settings(struct dmub_psr *dmub,
+ copy_settings_data->relock_delay_frame_cnt = 0;
+ if (link->dpcd_caps.sink_dev_id == DP_BRANCH_DEVICE_ID_001CF8)
+ copy_settings_data->relock_delay_frame_cnt = 2;
++
++ copy_settings_data->power_down_phy_before_disable_stream =
++ link->psr_settings.power_down_phy_before_disable_stream;
++
+ copy_settings_data->dsc_slice_height = psr_context->dsc_slice_height;
+
+ dc_wake_and_execute_dmub_cmd(dc, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
+diff --git a/drivers/gpu/drm/amd/display/dc/dio/dcn10/dcn10_stream_encoder.c b/drivers/gpu/drm/amd/display/dc/dio/dcn10/dcn10_stream_encoder.c
+index d01a8b8f95954e..22e66b375a7fec 100644
+--- a/drivers/gpu/drm/amd/display/dc/dio/dcn10/dcn10_stream_encoder.c
++++ b/drivers/gpu/drm/amd/display/dc/dio/dcn10/dcn10_stream_encoder.c
+@@ -391,7 +391,7 @@ void enc1_stream_encoder_dp_set_stream_attribute(
+ break;
+ case COLOR_SPACE_2020_RGB_LIMITEDRANGE:
+ case COLOR_SPACE_2020_RGB_FULLRANGE:
+- case COLOR_SPACE_2020_YCBCR:
++ case COLOR_SPACE_2020_YCBCR_LIMITED:
+ case COLOR_SPACE_XR_RGB:
+ case COLOR_SPACE_MSREF_SCRGB:
+ case COLOR_SPACE_ADOBERGB:
+@@ -404,6 +404,7 @@ void enc1_stream_encoder_dp_set_stream_attribute(
+ case COLOR_SPACE_CUSTOMPOINTS:
+ case COLOR_SPACE_UNKNOWN:
+ case COLOR_SPACE_YCBCR709_BLACK:
++ default:
+ /* do nothing */
+ break;
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/dio/dcn401/dcn401_dio_stream_encoder.c b/drivers/gpu/drm/amd/display/dc/dio/dcn401/dcn401_dio_stream_encoder.c
+index 098c2a01a85099..9e5072627ec7b4 100644
+--- a/drivers/gpu/drm/amd/display/dc/dio/dcn401/dcn401_dio_stream_encoder.c
++++ b/drivers/gpu/drm/amd/display/dc/dio/dcn401/dcn401_dio_stream_encoder.c
+@@ -632,7 +632,7 @@ void enc401_stream_encoder_dp_set_stream_attribute(
+ break;
+ case COLOR_SPACE_2020_RGB_LIMITEDRANGE:
+ case COLOR_SPACE_2020_RGB_FULLRANGE:
+- case COLOR_SPACE_2020_YCBCR:
++ case COLOR_SPACE_2020_YCBCR_LIMITED:
+ case COLOR_SPACE_XR_RGB:
+ case COLOR_SPACE_MSREF_SCRGB:
+ case COLOR_SPACE_ADOBERGB:
+@@ -645,6 +645,7 @@ void enc401_stream_encoder_dp_set_stream_attribute(
+ case COLOR_SPACE_CUSTOMPOINTS:
+ case COLOR_SPACE_UNKNOWN:
+ case COLOR_SPACE_YCBCR709_BLACK:
++ default:
+ /* do nothing */
+ break;
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn35/dcn35_fpu.c b/drivers/gpu/drm/amd/display/dc/dml/dcn35/dcn35_fpu.c
+index 47d785204f29cb..c90dee4e9116ae 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn35/dcn35_fpu.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn35/dcn35_fpu.c
+@@ -195,9 +195,9 @@ struct _vcs_dpi_soc_bounding_box_st dcn3_5_soc = {
+ .dcn_downspread_percent = 0.5,
+ .gpuvm_min_page_size_bytes = 4096,
+ .hostvm_min_page_size_bytes = 4096,
+- .do_urgent_latency_adjustment = 0,
++ .do_urgent_latency_adjustment = 1,
+ .urgent_latency_adjustment_fabric_clock_component_us = 0,
+- .urgent_latency_adjustment_fabric_clock_reference_mhz = 0,
++ .urgent_latency_adjustment_fabric_clock_reference_mhz = 3000,
+ };
+
+ void dcn35_build_wm_range_table_fpu(struct clk_mgr *clk_mgr)
+@@ -367,6 +367,8 @@ void dcn35_update_bw_bounding_box_fpu(struct dc *dc,
+ clock_limits[i].socclk_mhz;
+ dc->dml2_options.bbox_overrides.clks_table.clk_entries[i].memclk_mhz =
+ clk_table->entries[i].memclk_mhz * clk_table->entries[i].wck_ratio;
++
++ dc->dml2_options.bbox_overrides.clks_table.clk_entries[i].dram_speed_mts = clock_limits[i].dram_speed_mts;
+ dc->dml2_options.bbox_overrides.clks_table.clk_entries[i].dtbclk_mhz =
+ clock_limits[i].dtbclk_mhz;
+ dc->dml2_options.bbox_overrides.clks_table.num_entries_per_clk.num_dcfclk_levels =
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn351/dcn351_fpu.c b/drivers/gpu/drm/amd/display/dc/dml/dcn351/dcn351_fpu.c
+index d9e63c4fdd95cd..17d0b4923b0cc4 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn351/dcn351_fpu.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn351/dcn351_fpu.c
+@@ -401,6 +401,7 @@ void dcn351_update_bw_bounding_box_fpu(struct dc *dc,
+ clock_limits[i].socclk_mhz;
+ dc->dml2_options.bbox_overrides.clks_table.clk_entries[i].memclk_mhz =
+ clk_table->entries[i].memclk_mhz * clk_table->entries[i].wck_ratio;
++ dc->dml2_options.bbox_overrides.clks_table.clk_entries[i].dram_speed_mts = clock_limits[i].dram_speed_mts;
+ dc->dml2_options.bbox_overrides.clks_table.clk_entries[i].dtbclk_mhz =
+ clock_limits[i].dtbclk_mhz;
+ dc->dml2_options.bbox_overrides.clks_table.num_entries_per_clk.num_dcfclk_levels =
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml21/dml21_utils.c b/drivers/gpu/drm/amd/display/dc/dml2/dml21/dml21_utils.c
+index 1e56d995cd0e76..930e86cdb88a2f 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/dml21/dml21_utils.c
++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml21/dml21_utils.c
+@@ -232,7 +232,6 @@ void dml21_program_dc_pipe(struct dml2_context *dml_ctx, struct dc_state *contex
+ context->bw_ctx.bw.dcn.clk.dppclk_khz = pipe_ctx->plane_res.bw.dppclk_khz;
+
+ dml21_populate_mall_allocation_size(context, dml_ctx, pln_prog, pipe_ctx);
+- memcpy(&context->bw_ctx.bw.dcn.mcache_allocations[pipe_ctx->pipe_idx], &pln_prog->mcache_allocation, sizeof(struct dml2_mcache_surface_allocation));
+
+ bool sub_vp_enabled = is_sub_vp_enabled(pipe_ctx->stream->ctx->dc, context);
+
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml21/dml21_wrapper.c b/drivers/gpu/drm/amd/display/dc/dml2/dml21/dml21_wrapper.c
+index d6fd13f43c08f7..ed6584535e898e 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/dml21/dml21_wrapper.c
++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml21/dml21_wrapper.c
+@@ -129,6 +129,7 @@ static void dml21_calculate_rq_and_dlg_params(const struct dc *dc, struct dc_sta
+ struct pipe_ctx *dc_main_pipes[__DML2_WRAPPER_MAX_STREAMS_PLANES__];
+ struct pipe_ctx *dc_phantom_pipes[__DML2_WRAPPER_MAX_STREAMS_PLANES__] = {0};
+ int num_pipes;
++ unsigned int dml_phantom_prog_idx;
+
+ context->bw_ctx.bw.dcn.clk.dppclk_khz = 0;
+
+@@ -142,6 +143,9 @@ static void dml21_calculate_rq_and_dlg_params(const struct dc *dc, struct dc_sta
+ context->bw_ctx.bw.dcn.mall_ss_psr_active_size_bytes = 0;
+ context->bw_ctx.bw.dcn.mall_subvp_size_bytes = 0;
+
++ /* phantoms start after main planes */
++ dml_phantom_prog_idx = in_ctx->v21.mode_programming.programming->display_config.num_planes;
++
+ for (dml_prog_idx = 0; dml_prog_idx < DML2_MAX_PLANES; dml_prog_idx++) {
+ pln_prog = &in_ctx->v21.mode_programming.programming->plane_programming[dml_prog_idx];
+
+@@ -167,6 +171,16 @@ static void dml21_calculate_rq_and_dlg_params(const struct dc *dc, struct dc_sta
+ dml21_program_dc_pipe(in_ctx, context, dc_phantom_pipes[dc_pipe_index], pln_prog, stream_prog);
+ }
+ }
++
++ /* copy per plane mcache allocation */
++ memcpy(&context->bw_ctx.bw.dcn.mcache_allocations[dml_prog_idx], &pln_prog->mcache_allocation, sizeof(struct dml2_mcache_surface_allocation));
++ if (pln_prog->phantom_plane.valid) {
++ memcpy(&context->bw_ctx.bw.dcn.mcache_allocations[dml_phantom_prog_idx],
++ &pln_prog->phantom_plane.mcache_allocation,
++ sizeof(struct dml2_mcache_surface_allocation));
++
++ dml_phantom_prog_idx++;
++ }
+ }
+
+ /* assign global clocks */
+@@ -220,7 +234,9 @@ static bool dml21_mode_check_and_programming(const struct dc *in_dc, struct dc_s
+ if (!result)
+ return false;
+
++ DC_FP_START();
+ result = dml2_build_mode_programming(mode_programming);
++ DC_FP_END();
+ if (!result)
+ return false;
+
+@@ -263,7 +279,9 @@ static bool dml21_check_mode_support(const struct dc *in_dc, struct dc_state *co
+ mode_support->dml2_instance = dml_init->dml2_instance;
+ dml21_map_dc_state_into_dml_display_cfg(in_dc, context, dml_ctx);
+ dml_ctx->v21.mode_programming.dml2_instance->scratch.build_mode_programming_locals.mode_programming_params.programming = dml_ctx->v21.mode_programming.programming;
++ DC_FP_START();
+ is_supported = dml2_check_mode_supported(mode_support);
++ DC_FP_END();
+ if (!is_supported)
+ return false;
+
+@@ -274,16 +292,12 @@ bool dml21_validate(const struct dc *in_dc, struct dc_state *context, struct dml
+ {
+ bool out = false;
+
+- DC_FP_START();
+-
+ /* Use dml_validate_only for fast_validate path */
+ if (fast_validate)
+ out = dml21_check_mode_support(in_dc, context, dml_ctx);
+ else
+ out = dml21_mode_check_and_programming(in_dc, context, dml_ctx);
+
+- DC_FP_END();
+-
+ return out;
+ }
+
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml21/inc/dml_top_types.h b/drivers/gpu/drm/amd/display/dc/dml2/dml21/inc/dml_top_types.h
+index d2d053f2354d00..0ab19cf4d24219 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/dml21/inc/dml_top_types.h
++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml21/inc/dml_top_types.h
+@@ -245,6 +245,7 @@ struct dml2_per_plane_programming {
+ struct {
+ bool valid;
+ struct dml2_plane_parameters descriptor;
++ struct dml2_mcache_surface_allocation mcache_allocation;
+ struct dml2_dchub_per_pipe_register_set *pipe_regs[DML2_MAX_PLANES];
+ } phantom_plane;
+ };
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_core/dml2_core_dcn4.c b/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_core/dml2_core_dcn4.c
+index 7216d25c783e68..44d2969a904ea7 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_core/dml2_core_dcn4.c
++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_core/dml2_core_dcn4.c
+@@ -253,7 +253,8 @@ static void expand_implict_subvp(const struct display_configuation_with_meta *di
+ static void pack_mode_programming_params_with_implicit_subvp(struct dml2_core_instance *core, const struct display_configuation_with_meta *display_cfg,
+ const struct dml2_display_cfg *svp_expanded_display_cfg, struct dml2_display_cfg_programming *programming, struct dml2_core_scratch *scratch)
+ {
+- unsigned int stream_index, plane_index, pipe_offset, stream_already_populated_mask, main_plane_index;
++ unsigned int stream_index, plane_index, pipe_offset, stream_already_populated_mask, main_plane_index, mcache_index;
++ unsigned int total_main_mcaches_required = 0;
+ int total_pipe_regs_copied = 0;
+ int dml_internal_pipe_index = 0;
+ const struct dml2_plane_parameters *main_plane;
+@@ -324,6 +325,13 @@ static void pack_mode_programming_params_with_implicit_subvp(struct dml2_core_in
+
+ dml2_core_calcs_get_mall_allocation(&core->clean_me_up.mode_lib, &programming->plane_programming[plane_index].surface_size_mall_bytes, dml_internal_pipe_index);
+
++ memcpy(&programming->plane_programming[plane_index].mcache_allocation,
++ &display_cfg->stage2.mcache_allocations[plane_index],
++ sizeof(struct dml2_mcache_surface_allocation));
++ total_main_mcaches_required += programming->plane_programming[plane_index].mcache_allocation.num_mcaches_plane0 +
++ programming->plane_programming[plane_index].mcache_allocation.num_mcaches_plane1 -
++ (programming->plane_programming[plane_index].mcache_allocation.last_slice_sharing.plane0_plane1 ? 1 : 0);
++
+ for (pipe_offset = 0; pipe_offset < programming->plane_programming[plane_index].num_dpps_required; pipe_offset++) {
+ // Assign storage for this pipe's register values
+ programming->plane_programming[plane_index].pipe_regs[pipe_offset] = &programming->pipe_regs[total_pipe_regs_copied];
+@@ -362,6 +370,22 @@ static void pack_mode_programming_params_with_implicit_subvp(struct dml2_core_in
+ memcpy(&programming->plane_programming[main_plane_index].phantom_plane.descriptor, phantom_plane, sizeof(struct dml2_plane_parameters));
+
+ dml2_core_calcs_get_mall_allocation(&core->clean_me_up.mode_lib, &programming->plane_programming[main_plane_index].svp_size_mall_bytes, dml_internal_pipe_index);
++
++ /* generate mcache allocation, phantoms use identical mcache configuration, but in the MALL set and unique mcache IDs beginning after all main IDs */
++ memcpy(&programming->plane_programming[main_plane_index].phantom_plane.mcache_allocation,
++ &programming->plane_programming[main_plane_index].mcache_allocation,
++ sizeof(struct dml2_mcache_surface_allocation));
++ for (mcache_index = 0; mcache_index < programming->plane_programming[main_plane_index].phantom_plane.mcache_allocation.num_mcaches_plane0; mcache_index++) {
++ programming->plane_programming[main_plane_index].phantom_plane.mcache_allocation.global_mcache_ids_plane0[mcache_index] += total_main_mcaches_required;
++ programming->plane_programming[main_plane_index].phantom_plane.mcache_allocation.global_mcache_ids_mall_plane0[mcache_index] =
++ programming->plane_programming[main_plane_index].phantom_plane.mcache_allocation.global_mcache_ids_plane0[mcache_index];
++ }
++ for (mcache_index = 0; mcache_index < programming->plane_programming[main_plane_index].phantom_plane.mcache_allocation.num_mcaches_plane1; mcache_index++) {
++ programming->plane_programming[main_plane_index].phantom_plane.mcache_allocation.global_mcache_ids_plane1[mcache_index] += total_main_mcaches_required;
++ programming->plane_programming[main_plane_index].phantom_plane.mcache_allocation.global_mcache_ids_mall_plane1[mcache_index] =
++ programming->plane_programming[main_plane_index].phantom_plane.mcache_allocation.global_mcache_ids_plane1[mcache_index];
++ }
++
+ for (pipe_offset = 0; pipe_offset < programming->plane_programming[main_plane_index].num_dpps_required; pipe_offset++) {
+ // Assign storage for this pipe's register values
+ programming->plane_programming[main_plane_index].phantom_plane.pipe_regs[pipe_offset] = &programming->pipe_regs[total_pipe_regs_copied];
+@@ -571,6 +595,10 @@ bool core_dcn4_mode_programming(struct dml2_core_mode_programming_in_out *in_out
+
+ dml2_core_calcs_get_mall_allocation(&core->clean_me_up.mode_lib, &in_out->programming->plane_programming[plane_index].surface_size_mall_bytes, dml_internal_pipe_index);
+
++ memcpy(&in_out->programming->plane_programming[plane_index].mcache_allocation,
++ &in_out->display_cfg->stage2.mcache_allocations[plane_index],
++ sizeof(struct dml2_mcache_surface_allocation));
++
+ for (pipe_offset = 0; pipe_offset < in_out->programming->plane_programming[plane_index].num_dpps_required; pipe_offset++) {
+ in_out->programming->plane_programming[plane_index].plane_descriptor = &in_out->programming->display_config.plane_descriptors[plane_index];
+
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_core/dml2_core_dcn4_calcs.c b/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_core/dml2_core_dcn4_calcs.c
+index c1ff869512f27c..a72b4c05e1fbf5 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_core/dml2_core_dcn4_calcs.c
++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_core/dml2_core_dcn4_calcs.c
+@@ -2638,6 +2638,9 @@ static void calculate_mcache_setting(
+ // Luma/Chroma combine in the last mcache
+ // In the case of Luma/Chroma combine-mCache (with lc_comb_mcache==1), all mCaches except the last segment are filled as much as possible, when stay aligned to mvmpg boundary
+ if (*p->lc_comb_mcache && l->is_dual_plane) {
++ /* if luma and chroma planes share an mcache, increase total chroma mcache count */
++ *p->num_mcaches_c = *p->num_mcaches_c + 1;
++
+ for (n = 0; n < *p->num_mcaches_l - 1; n++)
+ p->mcache_offsets_l[n] = (n + 1) * l->mvmpg_per_mcache_lb_l * l->mvmpg_access_width_l;
+ p->mcache_offsets_l[*p->num_mcaches_l - 1] = l->full_vp_access_width_l;
+@@ -3710,13 +3713,12 @@ static unsigned int CalculateMaxVStartup(
+ double line_time_us = (double)timing->h_total / ((double)timing->pixel_clock_khz / 1000);
+ unsigned int vblank_actual = timing->v_total - timing->v_active;
+ unsigned int vblank_nom_default_in_line = (unsigned int)math_floor2((double)vblank_nom_default_us / line_time_us, 1.0);
+- unsigned int vblank_nom_input = (unsigned int)math_min2(timing->vblank_nom, vblank_nom_default_in_line);
+- unsigned int vblank_avail = (vblank_nom_input == 0) ? vblank_nom_default_in_line : vblank_nom_input;
++ unsigned int vblank_avail = (timing->vblank_nom == 0) ? vblank_nom_default_in_line : (unsigned int)timing->vblank_nom;
+
+ vblank_size = (unsigned int)math_min2(vblank_actual, vblank_avail);
+
+ if (timing->interlaced && !ptoi_supported)
+- max_vstartup_lines = (unsigned int)(math_floor2(vblank_size / 2.0, 1.0));
++ max_vstartup_lines = (unsigned int)(math_floor2((vblank_size - 1) / 2.0, 1.0));
+ else
+ max_vstartup_lines = vblank_size - (unsigned int)math_max2(1.0, math_ceil2(write_back_delay_us / line_time_us, 1.0));
+ #ifdef __DML_VBA_DEBUG__
+@@ -4912,6 +4914,7 @@ static double get_urgent_bandwidth_required(
+ double ReadBandwidthChroma[],
+ double PrefetchBandwidthLuma[],
+ double PrefetchBandwidthChroma[],
++ double PrefetchBandwidthOto[],
+ double excess_vactive_fill_bw_l[],
+ double excess_vactive_fill_bw_c[],
+ double cursor_bw[],
+@@ -4975,8 +4978,9 @@ static double get_urgent_bandwidth_required(
+ l->vm_row_bw = NumberOfDPP[k] * prefetch_vmrow_bw[k];
+ l->flip_and_active_bw = l->per_plane_flip_bw[k] + ReadBandwidthLuma[k] * l->adj_factor_p0 + ReadBandwidthChroma[k] * l->adj_factor_p1 + cursor_bw[k] * l->adj_factor_cur;
+ l->flip_and_prefetch_bw = l->per_plane_flip_bw[k] + NumberOfDPP[k] * (PrefetchBandwidthLuma[k] * l->adj_factor_p0_pre + PrefetchBandwidthChroma[k] * l->adj_factor_p1_pre) + prefetch_cursor_bw[k] * l->adj_factor_cur_pre;
++ l->flip_and_prefetch_bw_oto = l->per_plane_flip_bw[k] + NumberOfDPP[k] * (PrefetchBandwidthOto[k] * l->adj_factor_p0_pre + PrefetchBandwidthChroma[k] * l->adj_factor_p1_pre) + prefetch_cursor_bw[k] * l->adj_factor_cur_pre;
+ l->active_and_excess_bw = (ReadBandwidthLuma[k] + excess_vactive_fill_bw_l[k]) * l->tmp_nom_adj_factor_p0 + (ReadBandwidthChroma[k] + excess_vactive_fill_bw_c[k]) * l->tmp_nom_adj_factor_p1 + dpte_row_bw[k] + meta_row_bw[k];
+- surface_required_bw[k] = math_max4(l->vm_row_bw, l->flip_and_active_bw, l->flip_and_prefetch_bw, l->active_and_excess_bw);
++ surface_required_bw[k] = math_max5(l->vm_row_bw, l->flip_and_active_bw, l->flip_and_prefetch_bw, l->active_and_excess_bw, l->flip_and_prefetch_bw_oto);
+
+ /* export peak required bandwidth for the surface */
+ surface_peak_required_bw[k] = math_max2(surface_required_bw[k], surface_peak_required_bw[k]);
+@@ -5174,6 +5178,7 @@ static bool CalculatePrefetchSchedule(struct dml2_core_internal_scratch *scratch
+ s->Tsw_est3 = 0.0;
+ s->cursor_prefetch_bytes = 0;
+ *p->prefetch_cursor_bw = 0;
++ *p->RequiredPrefetchBWOTO = 0.0;
+
+ dcc_mrq_enable = (p->dcc_enable && p->mrq_present);
+
+@@ -5387,6 +5392,9 @@ static bool CalculatePrefetchSchedule(struct dml2_core_internal_scratch *scratch
+ s->prefetch_bw_oto += (p->swath_width_chroma_ub * p->myPipe->BytePerPixelC) / s->LineTime;
+ }
+
++ /* oto prefetch bw should always be less than total vactive bw */
++ DML2_ASSERT(s->prefetch_bw_oto < s->per_pipe_vactive_sw_bw * p->myPipe->DPPPerSurface);
++
+ s->prefetch_bw_oto = math_max2(s->per_pipe_vactive_sw_bw, s->prefetch_bw_oto) * p->mall_prefetch_sdp_overhead_factor;
+
+ s->prefetch_bw_oto = math_min2(s->prefetch_bw_oto, *p->prefetch_sw_bytes/(s->min_Lsw_oto*s->LineTime));
+@@ -5397,6 +5405,12 @@ static bool CalculatePrefetchSchedule(struct dml2_core_internal_scratch *scratch
+ p->vm_bytes * p->HostVMInefficiencyFactor / (31 * s->LineTime) - *p->Tno_bw,
+ (p->PixelPTEBytesPerRow * p->HostVMInefficiencyFactor + p->meta_row_bytes + tdlut_row_bytes) / (15 * s->LineTime));
+
++ /* oto bw needs to be output even if the oto schedule isn't being used, to avoid an ms/mp mismatch.
++ * mp will fail if ms decides to use equ schedule and mp decides to use oto schedule
++ * and the required bandwidth increases when going from ms to mp
++ */
++ *p->RequiredPrefetchBWOTO = s->prefetch_bw_oto;
++
+ #ifdef __DML_VBA_DEBUG__
+ dml2_printf("DML::%s: vactive_sw_bw_l = %f\n", __func__, p->vactive_sw_bw_l);
+ dml2_printf("DML::%s: vactive_sw_bw_c = %f\n", __func__, p->vactive_sw_bw_c);
+@@ -6157,6 +6171,7 @@ static void calculate_peak_bandwidth_required(
+ p->surface_read_bandwidth_c,
+ l->zero_array, //PrefetchBandwidthLuma,
+ l->zero_array, //PrefetchBandwidthChroma,
++ l->zero_array, //PrefetchBWOTO
+ l->zero_array,
+ l->zero_array,
+ l->zero_array,
+@@ -6193,6 +6208,7 @@ static void calculate_peak_bandwidth_required(
+ p->surface_read_bandwidth_c,
+ l->zero_array, //PrefetchBandwidthLuma,
+ l->zero_array, //PrefetchBandwidthChroma,
++ l->zero_array, //PrefetchBWOTO
+ p->excess_vactive_fill_bw_l,
+ p->excess_vactive_fill_bw_c,
+ p->cursor_bw,
+@@ -6229,6 +6245,7 @@ static void calculate_peak_bandwidth_required(
+ p->surface_read_bandwidth_c,
+ p->prefetch_bandwidth_l,
+ p->prefetch_bandwidth_c,
++ p->prefetch_bandwidth_oto, // to prevent ms/mp mismatch when oto bw > total vactive bw
+ p->excess_vactive_fill_bw_l,
+ p->excess_vactive_fill_bw_c,
+ p->cursor_bw,
+@@ -6265,6 +6282,7 @@ static void calculate_peak_bandwidth_required(
+ p->surface_read_bandwidth_c,
+ p->prefetch_bandwidth_l,
+ p->prefetch_bandwidth_c,
++ p->prefetch_bandwidth_oto, // to prevent ms/mp mismatch when oto bw > total vactive bw
+ p->excess_vactive_fill_bw_l,
+ p->excess_vactive_fill_bw_c,
+ p->cursor_bw,
+@@ -6301,6 +6319,7 @@ static void calculate_peak_bandwidth_required(
+ p->surface_read_bandwidth_c,
+ p->prefetch_bandwidth_l,
+ p->prefetch_bandwidth_c,
++ p->prefetch_bandwidth_oto, // to prevent ms/mp mismatch when oto bw > total vactive bw
+ p->excess_vactive_fill_bw_l,
+ p->excess_vactive_fill_bw_c,
+ p->cursor_bw,
+@@ -9063,6 +9082,7 @@ static bool dml_core_mode_support(struct dml2_core_calcs_mode_support_ex *in_out
+ CalculatePrefetchSchedule_params->VRatioPrefetchC = &mode_lib->ms.VRatioPreC[k];
+ CalculatePrefetchSchedule_params->RequiredPrefetchPixelDataBWLuma = &mode_lib->ms.RequiredPrefetchPixelDataBWLuma[k]; // prefetch_sw_bw_l
+ CalculatePrefetchSchedule_params->RequiredPrefetchPixelDataBWChroma = &mode_lib->ms.RequiredPrefetchPixelDataBWChroma[k]; // prefetch_sw_bw_c
++ CalculatePrefetchSchedule_params->RequiredPrefetchBWOTO = &mode_lib->ms.RequiredPrefetchBWOTO[k];
+ CalculatePrefetchSchedule_params->NotEnoughTimeForDynamicMetadata = &mode_lib->ms.NoTimeForDynamicMetadata[k];
+ CalculatePrefetchSchedule_params->Tno_bw = &mode_lib->ms.Tno_bw[k];
+ CalculatePrefetchSchedule_params->Tno_bw_flip = &mode_lib->ms.Tno_bw_flip[k];
+@@ -9207,6 +9227,7 @@ static bool dml_core_mode_support(struct dml2_core_calcs_mode_support_ex *in_out
+ calculate_peak_bandwidth_params->surface_read_bandwidth_c = mode_lib->ms.vactive_sw_bw_c;
+ calculate_peak_bandwidth_params->prefetch_bandwidth_l = mode_lib->ms.RequiredPrefetchPixelDataBWLuma;
+ calculate_peak_bandwidth_params->prefetch_bandwidth_c = mode_lib->ms.RequiredPrefetchPixelDataBWChroma;
++ calculate_peak_bandwidth_params->prefetch_bandwidth_oto = mode_lib->ms.RequiredPrefetchBWOTO;
+ calculate_peak_bandwidth_params->excess_vactive_fill_bw_l = mode_lib->ms.excess_vactive_fill_bw_l;
+ calculate_peak_bandwidth_params->excess_vactive_fill_bw_c = mode_lib->ms.excess_vactive_fill_bw_c;
+ calculate_peak_bandwidth_params->cursor_bw = mode_lib->ms.cursor_bw;
+@@ -9373,6 +9394,7 @@ static bool dml_core_mode_support(struct dml2_core_calcs_mode_support_ex *in_out
+ calculate_peak_bandwidth_params->surface_read_bandwidth_c = mode_lib->ms.vactive_sw_bw_c;
+ calculate_peak_bandwidth_params->prefetch_bandwidth_l = mode_lib->ms.RequiredPrefetchPixelDataBWLuma;
+ calculate_peak_bandwidth_params->prefetch_bandwidth_c = mode_lib->ms.RequiredPrefetchPixelDataBWChroma;
++ calculate_peak_bandwidth_params->prefetch_bandwidth_oto = mode_lib->ms.RequiredPrefetchBWOTO;
+ calculate_peak_bandwidth_params->excess_vactive_fill_bw_l = mode_lib->ms.excess_vactive_fill_bw_l;
+ calculate_peak_bandwidth_params->excess_vactive_fill_bw_c = mode_lib->ms.excess_vactive_fill_bw_c;
+ calculate_peak_bandwidth_params->cursor_bw = mode_lib->ms.cursor_bw;
+@@ -11289,6 +11311,7 @@ static bool dml_core_mode_programming(struct dml2_core_calcs_mode_programming_ex
+ CalculatePrefetchSchedule_params->VRatioPrefetchC = &mode_lib->mp.VRatioPrefetchC[k];
+ CalculatePrefetchSchedule_params->RequiredPrefetchPixelDataBWLuma = &mode_lib->mp.RequiredPrefetchPixelDataBWLuma[k];
+ CalculatePrefetchSchedule_params->RequiredPrefetchPixelDataBWChroma = &mode_lib->mp.RequiredPrefetchPixelDataBWChroma[k];
++ CalculatePrefetchSchedule_params->RequiredPrefetchBWOTO = &s->dummy_single_array[0][k];
+ CalculatePrefetchSchedule_params->NotEnoughTimeForDynamicMetadata = &mode_lib->mp.NotEnoughTimeForDynamicMetadata[k];
+ CalculatePrefetchSchedule_params->Tno_bw = &mode_lib->mp.Tno_bw[k];
+ CalculatePrefetchSchedule_params->Tno_bw_flip = &mode_lib->mp.Tno_bw_flip[k];
+@@ -11431,6 +11454,7 @@ static bool dml_core_mode_programming(struct dml2_core_calcs_mode_programming_ex
+ calculate_peak_bandwidth_params->surface_read_bandwidth_c = mode_lib->mp.vactive_sw_bw_c;
+ calculate_peak_bandwidth_params->prefetch_bandwidth_l = mode_lib->mp.RequiredPrefetchPixelDataBWLuma;
+ calculate_peak_bandwidth_params->prefetch_bandwidth_c = mode_lib->mp.RequiredPrefetchPixelDataBWChroma;
++ calculate_peak_bandwidth_params->prefetch_bandwidth_oto = s->dummy_single_array[0];
+ calculate_peak_bandwidth_params->excess_vactive_fill_bw_l = mode_lib->mp.excess_vactive_fill_bw_l;
+ calculate_peak_bandwidth_params->excess_vactive_fill_bw_c = mode_lib->mp.excess_vactive_fill_bw_c;
+ calculate_peak_bandwidth_params->cursor_bw = mode_lib->mp.cursor_bw;
+@@ -11563,6 +11587,7 @@ static bool dml_core_mode_programming(struct dml2_core_calcs_mode_programming_ex
+ calculate_peak_bandwidth_params->surface_read_bandwidth_c = mode_lib->mp.vactive_sw_bw_c;
+ calculate_peak_bandwidth_params->prefetch_bandwidth_l = mode_lib->mp.RequiredPrefetchPixelDataBWLuma;
+ calculate_peak_bandwidth_params->prefetch_bandwidth_c = mode_lib->mp.RequiredPrefetchPixelDataBWChroma;
++ calculate_peak_bandwidth_params->prefetch_bandwidth_oto = s->dummy_single_array[k];
+ calculate_peak_bandwidth_params->excess_vactive_fill_bw_l = mode_lib->mp.excess_vactive_fill_bw_l;
+ calculate_peak_bandwidth_params->excess_vactive_fill_bw_c = mode_lib->mp.excess_vactive_fill_bw_c;
+ calculate_peak_bandwidth_params->cursor_bw = mode_lib->mp.cursor_bw;
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_core/dml2_core_shared_types.h b/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_core/dml2_core_shared_types.h
+index 23c0fca5515fef..b7cb017b59baaf 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_core/dml2_core_shared_types.h
++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_core/dml2_core_shared_types.h
+@@ -484,6 +484,8 @@ struct dml2_core_internal_mode_support {
+ double WriteBandwidth[DML2_MAX_PLANES][DML2_MAX_WRITEBACK];
+ double RequiredPrefetchPixelDataBWLuma[DML2_MAX_PLANES];
+ double RequiredPrefetchPixelDataBWChroma[DML2_MAX_PLANES];
++ /* oto bw should also be considered when calculating urgent bw to avoid situations oto/equ mismatches between ms and mp */
++ double RequiredPrefetchBWOTO[DML2_MAX_PLANES];
+ double cursor_bw[DML2_MAX_PLANES];
+ double prefetch_cursor_bw[DML2_MAX_PLANES];
+ double prefetch_vmrow_bw[DML2_MAX_PLANES];
+@@ -1381,6 +1383,7 @@ struct dml2_core_shared_get_urgent_bandwidth_required_locals {
+ double vm_row_bw;
+ double flip_and_active_bw;
+ double flip_and_prefetch_bw;
++ double flip_and_prefetch_bw_oto;
+ double active_and_excess_bw;
+ };
+
+@@ -1792,6 +1795,7 @@ struct dml2_core_calcs_CalculatePrefetchSchedule_params {
+ double *VRatioPrefetchC;
+ double *RequiredPrefetchPixelDataBWLuma;
+ double *RequiredPrefetchPixelDataBWChroma;
++ double *RequiredPrefetchBWOTO;
+ bool *NotEnoughTimeForDynamicMetadata;
+ double *Tno_bw;
+ double *Tno_bw_flip;
+@@ -2025,6 +2029,7 @@ struct dml2_core_calcs_calculate_peak_bandwidth_required_params {
+ double *surface_read_bandwidth_c;
+ double *prefetch_bandwidth_l;
+ double *prefetch_bandwidth_c;
++ double *prefetch_bandwidth_oto;
+ double *excess_vactive_fill_bw_l;
+ double *excess_vactive_fill_bw_c;
+ double *cursor_bw;
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_pmo/dml2_pmo_dcn4_fams2.c b/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_pmo/dml2_pmo_dcn4_fams2.c
+index a3324f7b9ba68b..15c906c42ec450 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_pmo/dml2_pmo_dcn4_fams2.c
++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_pmo/dml2_pmo_dcn4_fams2.c
+@@ -1082,12 +1082,21 @@ static bool all_timings_support_svp(const struct dml2_pmo_instance *pmo,
+ const struct dml2_fams2_meta *stream_fams2_meta;
+ unsigned int microschedule_vlines;
+ unsigned int i;
++ unsigned int mcaches_per_plane;
++ unsigned int total_mcaches_required = 0;
+
+ unsigned int num_planes_per_stream[DML2_MAX_PLANES] = { 0 };
+
+ /* confirm timing it is not a centered timing */
+ for (i = 0; i < display_config->display_config.num_planes; i++) {
+ plane_descriptor = &display_config->display_config.plane_descriptors[i];
++ mcaches_per_plane = 0;
++
++ if (plane_descriptor->surface.dcc.enable) {
++ mcaches_per_plane += display_config->stage2.mcache_allocations[i].num_mcaches_plane0 +
++ display_config->stage2.mcache_allocations[i].num_mcaches_plane1 -
++ (display_config->stage2.mcache_allocations[i].last_slice_sharing.plane0_plane1 ? 1 : 0);
++ }
+
+ if (is_bit_set_in_bitfield(mask, (unsigned char)plane_descriptor->stream_index)) {
+ num_planes_per_stream[plane_descriptor->stream_index]++;
+@@ -1098,7 +1107,19 @@ static bool all_timings_support_svp(const struct dml2_pmo_instance *pmo,
+ plane_descriptor->composition.rotation_angle != dml2_rotation_0) {
+ return false;
+ }
++
++ /* phantom requires same number of mcaches as main */
++ if (plane_descriptor->surface.dcc.enable) {
++ mcaches_per_plane *= 2;
++ }
+ }
++
++ total_mcaches_required += mcaches_per_plane;
++ }
++
++ if (total_mcaches_required > pmo->soc_bb->num_dcc_mcaches) {
++ /* too many mcaches required */
++ return false;
+ }
+
+ for (i = 0; i < DML2_MAX_PLANES; i++) {
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_top/dml2_top_soc15.c b/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_top/dml2_top_soc15.c
+index a8f58f8448e427..dc2ce5e77f5799 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_top/dml2_top_soc15.c
++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_top/dml2_top_soc15.c
+@@ -831,7 +831,6 @@ static bool dml2_top_soc15_build_mode_programming(struct dml2_build_mode_program
+ bool uclk_pstate_success = false;
+ bool vmin_success = false;
+ bool stutter_success = false;
+- unsigned int i;
+
+ memset(l, 0, sizeof(struct dml2_build_mode_programming_locals));
+ memset(in_out->programming, 0, sizeof(struct dml2_display_cfg_programming));
+@@ -976,13 +975,6 @@ static bool dml2_top_soc15_build_mode_programming(struct dml2_build_mode_program
+ l->base_display_config_with_meta.stage5.success = true;
+ }
+
+- /*
+- * Populate mcache programming
+- */
+- for (i = 0; i < in_out->display_config->num_planes; i++) {
+- in_out->programming->plane_programming[i].mcache_allocation = l->base_display_config_with_meta.stage2.mcache_allocations[i];
+- }
+-
+ /*
+ * Call DPMM to map all requirements to minimum clock state
+ */
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml2_wrapper.h b/drivers/gpu/drm/amd/display/dc/dml2/dml2_wrapper.h
+index 0f944fcfd5a5bb..785226945699dd 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/dml2_wrapper.h
++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml2_wrapper.h
+@@ -159,6 +159,7 @@ struct dml2_clks_table_entry {
+ unsigned int dtbclk_mhz;
+ unsigned int dispclk_mhz;
+ unsigned int dppclk_mhz;
++ unsigned int dram_speed_mts; /*which is based on wck_ratio*/
+ };
+
+ struct dml2_clks_num_entries {
+diff --git a/drivers/gpu/drm/amd/display/dc/dpp/dcn30/dcn30_dpp.c b/drivers/gpu/drm/amd/display/dc/dpp/dcn30/dcn30_dpp.c
+index 40acebd13e46dc..abf439e743f233 100644
+--- a/drivers/gpu/drm/amd/display/dc/dpp/dcn30/dcn30_dpp.c
++++ b/drivers/gpu/drm/amd/display/dc/dpp/dcn30/dcn30_dpp.c
+@@ -425,11 +425,6 @@ bool dpp3_get_optimal_number_of_taps(
+ int min_taps_y, min_taps_c;
+ enum lb_memory_config lb_config;
+
+- if (scl_data->viewport.width > scl_data->h_active &&
+- dpp->ctx->dc->debug.max_downscale_src_width != 0 &&
+- scl_data->viewport.width > dpp->ctx->dc->debug.max_downscale_src_width)
+- return false;
+-
+ /*
+ * Set default taps if none are provided
+ * From programming guide: taps = min{ ceil(2*H_RATIO,1), 8} for downscaling
+@@ -467,6 +462,12 @@ bool dpp3_get_optimal_number_of_taps(
+ else
+ scl_data->taps.h_taps_c = in_taps->h_taps_c;
+
++	// Avoid null data in the scl data with this early return, proceed with non-adaptive calculation first
++ if (scl_data->viewport.width > scl_data->h_active &&
++ dpp->ctx->dc->debug.max_downscale_src_width != 0 &&
++ scl_data->viewport.width > dpp->ctx->dc->debug.max_downscale_src_width)
++ return false;
++
+ /*Ensure we can support the requested number of vtaps*/
+ min_taps_y = dc_fixpt_ceil(scl_data->ratios.vert);
+ min_taps_c = dc_fixpt_ceil(scl_data->ratios.vert_c);
+diff --git a/drivers/gpu/drm/amd/display/dc/hpo/dcn31/dcn31_hpo_dp_stream_encoder.c b/drivers/gpu/drm/amd/display/dc/hpo/dcn31/dcn31_hpo_dp_stream_encoder.c
+index 678db949cfe3ce..759b453385c46b 100644
+--- a/drivers/gpu/drm/amd/display/dc/hpo/dcn31/dcn31_hpo_dp_stream_encoder.c
++++ b/drivers/gpu/drm/amd/display/dc/hpo/dcn31/dcn31_hpo_dp_stream_encoder.c
+@@ -323,7 +323,7 @@ static void dcn31_hpo_dp_stream_enc_set_stream_attribute(
+ break;
+ case COLOR_SPACE_2020_RGB_LIMITEDRANGE:
+ case COLOR_SPACE_2020_RGB_FULLRANGE:
+- case COLOR_SPACE_2020_YCBCR:
++ case COLOR_SPACE_2020_YCBCR_LIMITED:
+ case COLOR_SPACE_XR_RGB:
+ case COLOR_SPACE_MSREF_SCRGB:
+ case COLOR_SPACE_ADOBERGB:
+@@ -336,6 +336,7 @@ static void dcn31_hpo_dp_stream_enc_set_stream_attribute(
+ case COLOR_SPACE_CUSTOMPOINTS:
+ case COLOR_SPACE_UNKNOWN:
+ case COLOR_SPACE_YCBCR709_BLACK:
++ default:
+ /* do nothing */
+ break;
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.c
+index 81f4c386c28751..94ceccfc049824 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.c
+@@ -1065,7 +1065,8 @@ void dce110_edp_backlight_control(
+ DC_LOG_DC("edp_receiver_ready_T9 skipped\n");
+ }
+
+- if (!enable && link->dpcd_sink_ext_caps.bits.oled) {
++ if (!enable) {
++ /*follow oem panel config's requirement*/
+ pre_T11_delay += link->panel_config.pps.extra_pre_t11_ms;
+ msleep(pre_T11_delay);
+ }
+@@ -1654,9 +1655,7 @@ enum dc_status dce110_apply_single_controller_ctx_to_hw(
+
+ params.vertical_total_min = stream->adjust.v_total_min;
+ params.vertical_total_max = stream->adjust.v_total_max;
+- if (pipe_ctx->stream_res.tg->funcs->set_drr)
+- pipe_ctx->stream_res.tg->funcs->set_drr(
+-			pipe_ctx->stream_res.tg, &params);
++	set_drr_and_clear_adjust_pending(pipe_ctx, stream, &params);
+
+ // DRR should set trigger event to monitor surface update event
+ if (stream->adjust.v_total_min != 0 && stream->adjust.v_total_max != 0)
+@@ -2104,8 +2103,7 @@ static void set_drr(struct pipe_ctx **pipe_ctx,
+ struct timing_generator *tg = pipe_ctx[i]->stream_res.tg;
+
+ if ((tg != NULL) && tg->funcs) {
+- if (tg->funcs->set_drr)
+-				tg->funcs->set_drr(tg, &params);
++			set_drr_and_clear_adjust_pending(pipe_ctx[i], pipe_ctx[i]->stream, &params);
+ if (adjust.v_total_max != 0 && adjust.v_total_min != 0)
+ if (tg->funcs->set_static_screen_control)
+ tg->funcs->set_static_screen_control(
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn10/dcn10_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn10/dcn10_hwseq.c
+index 13f9e9b439f6a5..bbeaefe1ef0db0 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn10/dcn10_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn10/dcn10_hwseq.c
+@@ -1112,9 +1112,7 @@ static void dcn10_reset_back_end_for_pipe(
+ pipe_ctx->stream_res.tg->funcs->disable_crtc(pipe_ctx->stream_res.tg);
+
+ pipe_ctx->stream_res.tg->funcs->enable_optc_clock(pipe_ctx->stream_res.tg, false);
+- if (pipe_ctx->stream_res.tg->funcs->set_drr)
+- pipe_ctx->stream_res.tg->funcs->set_drr(
+- pipe_ctx->stream_res.tg, NULL);
++ set_drr_and_clear_adjust_pending(pipe_ctx, pipe_ctx->stream, NULL);
+ if (dc_is_hdmi_tmds_signal(pipe_ctx->stream->signal))
+ pipe_ctx->stream->link->phy_state.symclk_ref_cnts.otg = 0;
+ }
+@@ -3217,8 +3215,7 @@ void dcn10_set_drr(struct pipe_ctx **pipe_ctx,
+ struct timing_generator *tg = pipe_ctx[i]->stream_res.tg;
+
+ if ((tg != NULL) && tg->funcs) {
+- if (tg->funcs->set_drr)
+-				tg->funcs->set_drr(tg, &params);
++			set_drr_and_clear_adjust_pending(pipe_ctx[i], pipe_ctx[i]->stream, &params);
+ if (adjust.v_total_max != 0 && adjust.v_total_min != 0)
+ if (tg->funcs->set_static_screen_control)
+ tg->funcs->set_static_screen_control(
+@@ -3427,52 +3424,6 @@ void dcn10_update_dchub(struct dce_hwseq *hws, struct dchub_init_data *dh_data)
+ hubbub->funcs->update_dchub(hubbub, dh_data);
+ }
+
+-static bool dcn10_can_pipe_disable_cursor(struct pipe_ctx *pipe_ctx)
+-{
+- struct pipe_ctx *test_pipe, *split_pipe;
+- const struct scaler_data *scl_data = &pipe_ctx->plane_res.scl_data;
+- struct rect r1 = scl_data->recout, r2, r2_half;
+- int r1_r = r1.x + r1.width, r1_b = r1.y + r1.height, r2_r, r2_b;
+- int cur_layer = pipe_ctx->plane_state->layer_index;
+-
+- /**
+- * Disable the cursor if there's another pipe above this with a
+- * plane that contains this pipe's viewport to prevent double cursor
+- * and incorrect scaling artifacts.
+- */
+- for (test_pipe = pipe_ctx->top_pipe; test_pipe;
+- test_pipe = test_pipe->top_pipe) {
+- // Skip invisible layer and pipe-split plane on same layer
+- if (!test_pipe->plane_state ||
+- !test_pipe->plane_state->visible ||
+- test_pipe->plane_state->layer_index == cur_layer)
+- continue;
+-
+- r2 = test_pipe->plane_res.scl_data.recout;
+- r2_r = r2.x + r2.width;
+- r2_b = r2.y + r2.height;
+-
+- /**
+- * There is another half plane on same layer because of
+- * pipe-split, merge together per same height.
+- */
+- for (split_pipe = pipe_ctx->top_pipe; split_pipe;
+- split_pipe = split_pipe->top_pipe)
+- if (split_pipe->plane_state->layer_index == test_pipe->plane_state->layer_index) {
+- r2_half = split_pipe->plane_res.scl_data.recout;
+- r2.x = (r2_half.x < r2.x) ? r2_half.x : r2.x;
+- r2.width = r2.width + r2_half.width;
+- r2_r = r2.x + r2.width;
+- break;
+- }
+-
+- if (r1.x >= r2.x && r1.y >= r2.y && r1_r <= r2_r && r1_b <= r2_b)
+- return true;
+- }
+-
+- return false;
+-}
+-
+ void dcn10_set_cursor_position(struct pipe_ctx *pipe_ctx)
+ {
+ struct dc_cursor_position pos_cpy = pipe_ctx->stream->cursor_position;
+@@ -3572,7 +3523,7 @@ void dcn10_set_cursor_position(struct pipe_ctx *pipe_ctx)
+ == PLN_ADDR_TYPE_VIDEO_PROGRESSIVE)
+ pos_cpy.enable = false;
+
+- if (pos_cpy.enable && dcn10_can_pipe_disable_cursor(pipe_ctx))
++ if (pos_cpy.enable && resource_can_pipe_disable_cursor(pipe_ctx))
+ pos_cpy.enable = false;
+
+
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
+index b78096a7690eeb..1a07973ead4f57 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
+@@ -952,9 +952,7 @@ enum dc_status dcn20_enable_stream_timing(
+ params.vertical_total_max = stream->adjust.v_total_max;
+ params.vertical_total_mid = stream->adjust.v_total_mid;
+ params.vertical_total_mid_frame_num = stream->adjust.v_total_mid_frame_num;
+- if (pipe_ctx->stream_res.tg->funcs->set_drr)
+- pipe_ctx->stream_res.tg->funcs->set_drr(
+-			pipe_ctx->stream_res.tg, &params);
++	set_drr_and_clear_adjust_pending(pipe_ctx, stream, &params);
+
+ // DRR should set trigger event to monitor surface update event
+ if (stream->adjust.v_total_min != 0 && stream->adjust.v_total_max != 0)
+@@ -1266,14 +1264,18 @@ static void dcn20_power_on_plane_resources(
+ struct dce_hwseq *hws,
+ struct pipe_ctx *pipe_ctx)
+ {
++ uint32_t org_ip_request_cntl = 0;
++
+ DC_LOGGER_INIT(hws->ctx->logger);
+
+ if (hws->funcs.dpp_root_clock_control)
+ hws->funcs.dpp_root_clock_control(hws, pipe_ctx->plane_res.dpp->inst, true);
+
+ if (REG(DC_IP_REQUEST_CNTL)) {
+- REG_SET(DC_IP_REQUEST_CNTL, 0,
+- IP_REQUEST_EN, 1);
++ REG_GET(DC_IP_REQUEST_CNTL, IP_REQUEST_EN, &org_ip_request_cntl);
++ if (org_ip_request_cntl == 0)
++ REG_SET(DC_IP_REQUEST_CNTL, 0,
++ IP_REQUEST_EN, 1);
+
+ if (hws->funcs.dpp_pg_control)
+ hws->funcs.dpp_pg_control(hws, pipe_ctx->plane_res.dpp->inst, true);
+@@ -1281,8 +1283,10 @@ static void dcn20_power_on_plane_resources(
+ if (hws->funcs.hubp_pg_control)
+ hws->funcs.hubp_pg_control(hws, pipe_ctx->plane_res.hubp->inst, true);
+
+- REG_SET(DC_IP_REQUEST_CNTL, 0,
+- IP_REQUEST_EN, 0);
++ if (org_ip_request_cntl == 0)
++ REG_SET(DC_IP_REQUEST_CNTL, 0,
++ IP_REQUEST_EN, 0);
++
+ DC_LOG_DEBUG(
+ "Un-gated front end for pipe %d\n", pipe_ctx->plane_res.hubp->inst);
+ }
+@@ -2849,9 +2853,7 @@ void dcn20_reset_back_end_for_pipe(
+ pipe_ctx->stream_res.tg->funcs->set_odm_bypass(
+ pipe_ctx->stream_res.tg, &pipe_ctx->stream->timing);
+
+- if (pipe_ctx->stream_res.tg->funcs->set_drr)
+- pipe_ctx->stream_res.tg->funcs->set_drr(
+- pipe_ctx->stream_res.tg, NULL);
++ set_drr_and_clear_adjust_pending(pipe_ctx, pipe_ctx->stream, NULL);
+ /* TODO - convert symclk_ref_cnts for otg to a bit map to solve
+ * the case where the same symclk is shared across multiple otg
+ * instances
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn31/dcn31_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn31/dcn31_hwseq.c
+index 03ba01f4ace18a..38f8898266971c 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn31/dcn31_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn31/dcn31_hwseq.c
+@@ -538,9 +538,7 @@ static void dcn31_reset_back_end_for_pipe(
+ if (dc_is_hdmi_tmds_signal(pipe_ctx->stream->signal))
+ pipe_ctx->stream->link->phy_state.symclk_ref_cnts.otg = 0;
+
+- if (pipe_ctx->stream_res.tg->funcs->set_drr)
+- pipe_ctx->stream_res.tg->funcs->set_drr(
+- pipe_ctx->stream_res.tg, NULL);
++ set_drr_and_clear_adjust_pending(pipe_ctx, pipe_ctx->stream, NULL);
+
+ /* DPMS may already disable or */
+ /* dpms_off status is incorrect due to fastboot
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c
+index b907ad1acedd9e..922b8d71cf1aa5 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c
+@@ -1473,8 +1473,7 @@ void dcn35_set_drr(struct pipe_ctx **pipe_ctx,
+ num_frames = 2 * (frame_rate % 60);
+ }
+ }
+- if (tg->funcs->set_drr)
+-				tg->funcs->set_drr(tg, &params);
++			set_drr_and_clear_adjust_pending(pipe_ctx[i], pipe_ctx[i]->stream, &params);
+ if (adjust.v_total_max != 0 && adjust.v_total_min != 0)
+ if (tg->funcs->set_static_screen_control)
+ tg->funcs->set_static_screen_control(
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_hwseq.c
+index 0d39d193dacfa0..da8afb08b92018 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_hwseq.c
+@@ -830,10 +830,7 @@ enum dc_status dcn401_enable_stream_timing(
+ }
+
+ hws->funcs.wait_for_blank_complete(pipe_ctx->stream_res.opp);
+-
+- if (pipe_ctx->stream_res.tg->funcs->set_drr)
+- pipe_ctx->stream_res.tg->funcs->set_drr(
+-			pipe_ctx->stream_res.tg, &params);
++	set_drr_and_clear_adjust_pending(pipe_ctx, stream, &params);
+
+ /* Event triggers and num frames initialized for DRR, but can be
+ * later updated for PSR use. Note DRR trigger events are generated
+@@ -975,52 +972,6 @@ void dcn401_setup_hpo_hw_control(const struct dce_hwseq *hws, bool enable)
+ REG_UPDATE(HPO_TOP_HW_CONTROL, HPO_IO_EN, enable);
+ }
+
+-static bool dcn401_can_pipe_disable_cursor(struct pipe_ctx *pipe_ctx)
+-{
+- struct pipe_ctx *test_pipe, *split_pipe;
+- const struct scaler_data *scl_data = &pipe_ctx->plane_res.scl_data;
+- struct rect r1 = scl_data->recout, r2, r2_half;
+- int r1_r = r1.x + r1.width, r1_b = r1.y + r1.height, r2_r, r2_b;
+- int cur_layer = pipe_ctx->plane_state->layer_index;
+-
+- /**
+- * Disable the cursor if there's another pipe above this with a
+- * plane that contains this pipe's viewport to prevent double cursor
+- * and incorrect scaling artifacts.
+- */
+- for (test_pipe = pipe_ctx->top_pipe; test_pipe;
+- test_pipe = test_pipe->top_pipe) {
+- // Skip invisible layer and pipe-split plane on same layer
+- if (!test_pipe->plane_state ||
+- !test_pipe->plane_state->visible ||
+- test_pipe->plane_state->layer_index == cur_layer)
+- continue;
+-
+- r2 = test_pipe->plane_res.scl_data.recout;
+- r2_r = r2.x + r2.width;
+- r2_b = r2.y + r2.height;
+-
+- /**
+- * There is another half plane on same layer because of
+- * pipe-split, merge together per same height.
+- */
+- for (split_pipe = pipe_ctx->top_pipe; split_pipe;
+- split_pipe = split_pipe->top_pipe)
+- if (split_pipe->plane_state->layer_index == test_pipe->plane_state->layer_index) {
+- r2_half = split_pipe->plane_res.scl_data.recout;
+- r2.x = (r2_half.x < r2.x) ? r2_half.x : r2.x;
+- r2.width = r2.width + r2_half.width;
+- r2_r = r2.x + r2.width;
+- break;
+- }
+-
+- if (r1.x >= r2.x && r1.y >= r2.y && r1_r <= r2_r && r1_b <= r2_b)
+- return true;
+- }
+-
+- return false;
+-}
+-
+ void adjust_hotspot_between_slices_for_2x_magnify(uint32_t cursor_width, struct dc_cursor_position *pos_cpy)
+ {
+ if (cursor_width <= 128) {
+@@ -1211,7 +1162,7 @@ void dcn401_set_cursor_position(struct pipe_ctx *pipe_ctx)
+ pos_cpy.x = (uint32_t)x_pos;
+ pos_cpy.y = (uint32_t)y_pos;
+
+- if (pos_cpy.enable && dcn401_can_pipe_disable_cursor(pipe_ctx))
++ if (pos_cpy.enable && resource_can_pipe_disable_cursor(pipe_ctx))
+ pos_cpy.enable = false;
+
+ x_pos = pos_cpy.x - param.recout.x;
+@@ -1866,9 +1817,8 @@ void dcn401_reset_back_end_for_pipe(
+ pipe_ctx->stream_res.tg->funcs->set_odm_bypass(
+ pipe_ctx->stream_res.tg, &pipe_ctx->stream->timing);
+
+- if (pipe_ctx->stream_res.tg->funcs->set_drr)
+- pipe_ctx->stream_res.tg->funcs->set_drr(
+- pipe_ctx->stream_res.tg, NULL);
++ set_drr_and_clear_adjust_pending(pipe_ctx, pipe_ctx->stream, NULL);
++
+ /* TODO - convert symclk_ref_cnts for otg to a bit map to solve
+ * the case where the same symclk is shared across multiple otg
+ * instances
+@@ -2659,3 +2609,37 @@ void dcn401_detect_pipe_changes(struct dc_state *old_state,
+ new_pipe->update_flags.bits.test_pattern_changed = 1;
+ }
+ }
++
++void dcn401_plane_atomic_power_down(struct dc *dc,
++ struct dpp *dpp,
++ struct hubp *hubp)
++{
++ struct dce_hwseq *hws = dc->hwseq;
++ uint32_t org_ip_request_cntl = 0;
++
++ DC_LOGGER_INIT(dc->ctx->logger);
++
++ REG_GET(DC_IP_REQUEST_CNTL, IP_REQUEST_EN, &org_ip_request_cntl);
++ if (org_ip_request_cntl == 0)
++ REG_SET(DC_IP_REQUEST_CNTL, 0,
++ IP_REQUEST_EN, 1);
++
++ if (hws->funcs.dpp_pg_control)
++ hws->funcs.dpp_pg_control(hws, dpp->inst, false);
++
++ if (hws->funcs.hubp_pg_control)
++ hws->funcs.hubp_pg_control(hws, hubp->inst, false);
++
++ hubp->funcs->hubp_reset(hubp);
++ dpp->funcs->dpp_reset(dpp);
++
++ if (org_ip_request_cntl == 0)
++ REG_SET(DC_IP_REQUEST_CNTL, 0,
++ IP_REQUEST_EN, 0);
++
++ DC_LOG_DEBUG(
++ "Power gated front end %d\n", hubp->inst);
++
++ if (hws->funcs.dpp_root_clock_control)
++ hws->funcs.dpp_root_clock_control(hws, dpp->inst, false);
++}
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_hwseq.h b/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_hwseq.h
+index 17cea748789e18..dbd69d215b8bc0 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_hwseq.h
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_hwseq.h
+@@ -102,4 +102,7 @@ void dcn401_detect_pipe_changes(
+ struct dc_state *new_state,
+ struct pipe_ctx *old_pipe,
+ struct pipe_ctx *new_pipe);
++void dcn401_plane_atomic_power_down(struct dc *dc,
++ struct dpp *dpp,
++ struct hubp *hubp);
+ #endif /* __DC_HWSS_DCN401_H__ */
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_init.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_init.c
+index 44cb376f97c172..a4e3501fadbbe0 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_init.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_init.c
+@@ -123,7 +123,7 @@ static const struct hwseq_private_funcs dcn401_private_funcs = {
+ .disable_vga = dcn20_disable_vga,
+ .bios_golden_init = dcn10_bios_golden_init,
+ .plane_atomic_disable = dcn20_plane_atomic_disable,
+- .plane_atomic_power_down = dcn10_plane_atomic_power_down,
++ .plane_atomic_power_down = dcn401_plane_atomic_power_down,
+ .enable_power_gating_plane = dcn32_enable_power_gating_plane,
+ .hubp_pg_control = dcn32_hubp_pg_control,
+ .program_all_writeback_pipes_in_tree = dcn30_program_all_writeback_pipes_in_tree,
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/hw_sequencer.h b/drivers/gpu/drm/amd/display/dc/hwss/hw_sequencer.h
+index a7d66cfd93c911..16ef5250a02e13 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/hw_sequencer.h
++++ b/drivers/gpu/drm/amd/display/dc/hwss/hw_sequencer.h
+@@ -46,6 +46,7 @@ struct dce_hwseq;
+ struct link_resource;
+ struct dc_dmub_cmd;
+ struct pg_block_update;
++struct drr_params;
+
+ struct subvp_pipe_control_lock_fast_params {
+ struct dc *dc;
+@@ -521,6 +522,11 @@ void set_p_state_switch_method(
+ struct dc_state *context,
+ struct pipe_ctx *pipe_ctx);
+
++void set_drr_and_clear_adjust_pending(
++ struct pipe_ctx *pipe_ctx,
++ struct dc_stream_state *stream,
++ struct drr_params *params);
++
+ void hwss_execute_sequence(struct dc *dc,
+ struct block_sequence block_sequence[],
+ int num_steps);
+diff --git a/drivers/gpu/drm/amd/display/dc/inc/core_types.h b/drivers/gpu/drm/amd/display/dc/inc/core_types.h
+index d558efc6e12f9d..652d52040f4e61 100644
+--- a/drivers/gpu/drm/amd/display/dc/inc/core_types.h
++++ b/drivers/gpu/drm/amd/display/dc/inc/core_types.h
+@@ -627,7 +627,7 @@ struct dc_state {
+ */
+ struct bw_context bw_ctx;
+
+- struct block_sequence block_sequence[50];
++ struct block_sequence block_sequence[100];
+ unsigned int block_sequence_steps;
+ struct dc_dmub_cmd dc_dmub_cmd[10];
+ unsigned int dmub_cmd_count;
+diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/clk_mgr_internal.h b/drivers/gpu/drm/amd/display/dc/inc/hw/clk_mgr_internal.h
+index 7a1ca1e98059b0..221645c023b502 100644
+--- a/drivers/gpu/drm/amd/display/dc/inc/hw/clk_mgr_internal.h
++++ b/drivers/gpu/drm/amd/display/dc/inc/hw/clk_mgr_internal.h
+@@ -221,6 +221,7 @@ enum dentist_divider_range {
+ CLK_SF(CLK0_CLK_PLL_REQ, FbMult_frac, mask_sh)
+
+ #define CLK_REG_LIST_DCN401() \
++ SR(DENTIST_DISPCLK_CNTL), \
+ CLK_SR_DCN401(CLK0_CLK_PLL_REQ, CLK01, 0), \
+ CLK_SR_DCN401(CLK0_CLK0_DFS_CNTL, CLK01, 0), \
+ CLK_SR_DCN401(CLK0_CLK1_DFS_CNTL, CLK01, 0), \
+diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/dpp.h b/drivers/gpu/drm/amd/display/dc/inc/hw/dpp.h
+index 0150f2581ee4c5..0c5675d1c59368 100644
+--- a/drivers/gpu/drm/amd/display/dc/inc/hw/dpp.h
++++ b/drivers/gpu/drm/amd/display/dc/inc/hw/dpp.h
+@@ -119,10 +119,14 @@ static const struct dpp_input_csc_matrix __maybe_unused dpp_input_csc_matrix[] =
+ { 0x39a6, 0x2568, 0, 0xe0d6,
+ 0xeedd, 0x2568, 0xf925, 0x9a8,
+ 0, 0x2568, 0x43ee, 0xdbb2 } },
+- { COLOR_SPACE_2020_YCBCR,
++ { COLOR_SPACE_2020_YCBCR_FULL,
+ { 0x2F30, 0x2000, 0, 0xE869,
+ 0xEDB7, 0x2000, 0xFABC, 0xBC6,
+ 0, 0x2000, 0x3C34, 0xE1E6 } },
++ { COLOR_SPACE_2020_YCBCR_LIMITED,
++ { 0x35B9, 0x2543, 0, 0xE2B2,
++ 0xEB2F, 0x2543, 0xFA01, 0x0B1F,
++ 0, 0x2543, 0x4489, 0xDB42 } },
+ { COLOR_SPACE_2020_RGB_LIMITEDRANGE,
+ { 0x35E0, 0x255F, 0, 0xE2B3,
+ 0xEB20, 0x255F, 0xF9FD, 0xB1E,
+diff --git a/drivers/gpu/drm/amd/display/dc/inc/resource.h b/drivers/gpu/drm/amd/display/dc/inc/resource.h
+index cd1157d225abe7..b32d07ce0f0878 100644
+--- a/drivers/gpu/drm/amd/display/dc/inc/resource.h
++++ b/drivers/gpu/drm/amd/display/dc/inc/resource.h
+@@ -152,6 +152,8 @@ bool resource_attach_surfaces_to_context(
+ struct dc_state *context,
+ const struct resource_pool *pool);
+
++bool resource_can_pipe_disable_cursor(struct pipe_ctx *pipe_ctx);
++
+ #define FREE_PIPE_INDEX_NOT_FOUND -1
+
+ /*
+diff --git a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_capability.c b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_capability.c
+index 44f33e3bc1c599..64e4ae379e3464 100644
+--- a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_capability.c
++++ b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_capability.c
+@@ -250,21 +250,21 @@ static uint32_t intersect_frl_link_bw_support(
+ {
+ uint32_t supported_bw_in_kbps = max_supported_frl_bw_in_kbps;
+
+- // HDMI_ENCODED_LINK_BW bits are only valid if HDMI Link Configuration bit is 1 (FRL mode)
+- if (hdmi_encoded_link_bw.bits.FRL_MODE) {
+- if (hdmi_encoded_link_bw.bits.BW_48Gbps)
+- supported_bw_in_kbps = 48000000;
+- else if (hdmi_encoded_link_bw.bits.BW_40Gbps)
+- supported_bw_in_kbps = 40000000;
+- else if (hdmi_encoded_link_bw.bits.BW_32Gbps)
+- supported_bw_in_kbps = 32000000;
+- else if (hdmi_encoded_link_bw.bits.BW_24Gbps)
+- supported_bw_in_kbps = 24000000;
+- else if (hdmi_encoded_link_bw.bits.BW_18Gbps)
+- supported_bw_in_kbps = 18000000;
+- else if (hdmi_encoded_link_bw.bits.BW_9Gbps)
+- supported_bw_in_kbps = 9000000;
+- }
++ /* Skip checking FRL_MODE bit, as certain PCON will clear
++ * it despite supporting the link BW indicated in the other bits.
++ */
++ if (hdmi_encoded_link_bw.bits.BW_48Gbps)
++ supported_bw_in_kbps = 48000000;
++ else if (hdmi_encoded_link_bw.bits.BW_40Gbps)
++ supported_bw_in_kbps = 40000000;
++ else if (hdmi_encoded_link_bw.bits.BW_32Gbps)
++ supported_bw_in_kbps = 32000000;
++ else if (hdmi_encoded_link_bw.bits.BW_24Gbps)
++ supported_bw_in_kbps = 24000000;
++ else if (hdmi_encoded_link_bw.bits.BW_18Gbps)
++ supported_bw_in_kbps = 18000000;
++ else if (hdmi_encoded_link_bw.bits.BW_9Gbps)
++ supported_bw_in_kbps = 9000000;
+
+ return supported_bw_in_kbps;
+ }
+@@ -945,6 +945,9 @@ bool link_decide_link_settings(struct dc_stream_state *stream,
+ * TODO: add MST specific link training routine
+ */
+ decide_mst_link_settings(link, link_setting);
++ } else if (stream->signal == SIGNAL_TYPE_VIRTUAL) {
++ link_setting->lane_count = LANE_COUNT_FOUR;
++ link_setting->link_rate = LINK_RATE_HIGH3;
+ } else if (link->connector_signal == SIGNAL_TYPE_EDP) {
+ /* enable edp link optimization for DSC eDP case */
+ if (stream->timing.flags.DSC) {
+@@ -967,9 +970,6 @@ bool link_decide_link_settings(struct dc_stream_state *stream,
+ } else {
+ edp_decide_link_settings(link, link_setting, req_bw);
+ }
+- } else if (stream->signal == SIGNAL_TYPE_VIRTUAL) {
+- link_setting->lane_count = LANE_COUNT_FOUR;
+- link_setting->link_rate = LINK_RATE_HIGH3;
+ } else {
+ decide_dp_link_settings(link, link_setting, req_bw);
+ }
+@@ -1502,7 +1502,7 @@ static bool dpcd_read_sink_ext_caps(struct dc_link *link)
+
+ enum dc_status dp_retrieve_lttpr_cap(struct dc_link *link)
+ {
+- uint8_t lttpr_dpcd_data[8] = {0};
++ uint8_t lttpr_dpcd_data[10] = {0};
+ enum dc_status status;
+ bool is_lttpr_present;
+
+@@ -1552,6 +1552,10 @@ enum dc_status dp_retrieve_lttpr_cap(struct dc_link *link)
+ lttpr_dpcd_data[DP_PHY_REPEATER_128B132B_RATES -
+ DP_LT_TUNABLE_PHY_REPEATER_FIELD_DATA_STRUCTURE_REV];
+
++ link->dpcd_caps.lttpr_caps.alpm.raw =
++ lttpr_dpcd_data[DP_LTTPR_ALPM_CAPABILITIES -
++ DP_LT_TUNABLE_PHY_REPEATER_FIELD_DATA_STRUCTURE_REV];
++
+ /* If this chip cap is set, at least one retimer must exist in the chain
+ * Override count to 1 if we receive a known bad count (0 or an invalid value) */
+ if (((link->chip_caps & AMD_EXT_DISPLAY_PATH_CAPS__EXT_CHIP_MASK) == AMD_EXT_DISPLAY_PATH_CAPS__DP_FIXED_VS_EN) &&
+diff --git a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_phy.c b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_phy.c
+index 2c73ac87cd665c..c27ffec5d84fb2 100644
+--- a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_phy.c
++++ b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_phy.c
+@@ -75,7 +75,8 @@ void dp_disable_link_phy(struct dc_link *link,
+ struct dc *dc = link->ctx->dc;
+
+ if (!link->wa_flags.dp_keep_receiver_powered &&
+- !link->skip_implict_edp_power_control)
++ !link->skip_implict_edp_power_control &&
++ link->type != dc_connection_none)
+ dpcd_write_rx_power_ctrl(link, false);
+
+ dc->hwss.disable_link_output(link, link_res, signal);
+@@ -163,8 +164,9 @@ enum dc_status dp_set_fec_ready(struct dc_link *link, const struct link_resource
+ } else {
+ if (link->fec_state == dc_link_fec_ready) {
+ fec_config = 0;
+- core_link_write_dpcd(link, DP_FEC_CONFIGURATION,
+- &fec_config, sizeof(fec_config));
++ if (link->type != dc_connection_none)
++ core_link_write_dpcd(link, DP_FEC_CONFIGURATION,
++ &fec_config, sizeof(fec_config));
+
+ link_enc->funcs->fec_set_ready(link_enc, false);
+ link->fec_state = dc_link_fec_not_ready;
+diff --git a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training.c b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training.c
+index 751c18e592ea5e..7848ddb94456c5 100644
+--- a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training.c
++++ b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training.c
+@@ -1782,13 +1782,10 @@ bool perform_link_training_with_retries(
+ is_link_bw_min = ((cur_link_settings.link_rate <= LINK_RATE_LOW) &&
+ (cur_link_settings.lane_count <= LANE_COUNT_ONE));
+
+- if (is_link_bw_low) {
++ if (is_link_bw_low)
+ DC_LOG_WARNING(
+ "%s: Link(%d) bandwidth too low after fallback req_bw(%d) > link_bw(%d)\n",
+ __func__, link->link_index, req_bw, link_bw);
+-
+- return false;
+- }
+ }
+
+ msleep(delay_between_attempts);
+diff --git a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training_8b_10b.c b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training_8b_10b.c
+index 3bdce32a85e3c7..ae95ec48e57219 100644
+--- a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training_8b_10b.c
++++ b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training_8b_10b.c
+@@ -36,7 +36,8 @@
+ link->ctx->logger
+
+ static int32_t get_cr_training_aux_rd_interval(struct dc_link *link,
+- const struct dc_link_settings *link_settings)
++ const struct dc_link_settings *link_settings,
++ enum lttpr_mode lttpr_mode)
+ {
+ union training_aux_rd_interval training_rd_interval;
+ uint32_t wait_in_micro_secs = 100;
+@@ -49,6 +50,8 @@ static int32_t get_cr_training_aux_rd_interval(struct dc_link *link,
+ DP_TRAINING_AUX_RD_INTERVAL,
+ (uint8_t *)&training_rd_interval,
+ sizeof(training_rd_interval));
++ if (lttpr_mode != LTTPR_MODE_NON_TRANSPARENT)
++ wait_in_micro_secs = 400;
+ if (training_rd_interval.bits.TRAINIG_AUX_RD_INTERVAL)
+ wait_in_micro_secs = training_rd_interval.bits.TRAINIG_AUX_RD_INTERVAL * 4000;
+ }
+@@ -110,7 +113,6 @@ void decide_8b_10b_training_settings(
+ */
+ lt_settings->link_settings.link_spread = link->dp_ss_off ?
+ LINK_SPREAD_DISABLED : LINK_SPREAD_05_DOWNSPREAD_30KHZ;
+- lt_settings->cr_pattern_time = get_cr_training_aux_rd_interval(link, link_setting);
+ lt_settings->eq_pattern_time = get_eq_training_aux_rd_interval(link, link_setting);
+ lt_settings->pattern_for_cr = decide_cr_training_pattern(link_setting);
+ lt_settings->pattern_for_eq = decide_eq_training_pattern(link, link_setting);
+@@ -119,6 +121,7 @@ void decide_8b_10b_training_settings(
+ lt_settings->disallow_per_lane_settings = true;
+ lt_settings->always_match_dpcd_with_hw_lane_settings = true;
+ lt_settings->lttpr_mode = dp_decide_8b_10b_lttpr_mode(link);
++ lt_settings->cr_pattern_time = get_cr_training_aux_rd_interval(link, link_setting, lt_settings->lttpr_mode);
+ dp_hw_to_dpcd_lane_settings(lt_settings, lt_settings->hw_lane_settings, lt_settings->dpcd_lane_settings);
+ }
+
+diff --git a/drivers/gpu/drm/amd/display/dc/link/protocols/link_edp_panel_control.c b/drivers/gpu/drm/amd/display/dc/link/protocols/link_edp_panel_control.c
+index e0e3bb86535952..1e4adbc764ea69 100644
+--- a/drivers/gpu/drm/amd/display/dc/link/protocols/link_edp_panel_control.c
++++ b/drivers/gpu/drm/amd/display/dc/link/protocols/link_edp_panel_control.c
+@@ -675,6 +675,18 @@ bool edp_setup_psr(struct dc_link *link,
+ if (!link)
+ return false;
+
++ //Clear PSR cfg
++ memset(&psr_configuration, 0, sizeof(psr_configuration));
++ dm_helpers_dp_write_dpcd(
++ link->ctx,
++ link,
++ DP_PSR_EN_CFG,
++ &psr_configuration.raw,
++ sizeof(psr_configuration.raw));
++
++ if (link->psr_settings.psr_version == DC_PSR_VERSION_UNSUPPORTED)
++ return false;
++
+ dc = link->ctx->dc;
+ dmcu = dc->res_pool->dmcu;
+ psr = dc->res_pool->psr;
+@@ -685,9 +697,6 @@ bool edp_setup_psr(struct dc_link *link,
+ if (!dc_get_edp_link_panel_inst(dc, link, &panel_inst))
+ return false;
+
+-
+- memset(&psr_configuration, 0, sizeof(psr_configuration));
+-
+ psr_configuration.bits.ENABLE = 1;
+ psr_configuration.bits.CRC_VERIFICATION = 1;
+ psr_configuration.bits.FRAME_CAPTURE_INDICATION =
+@@ -950,6 +959,16 @@ bool edp_setup_replay(struct dc_link *link, const struct dc_stream_state *stream
+ if (!link)
+ return false;
+
++ //Clear Replay config
++ dm_helpers_dp_write_dpcd(link->ctx, link,
++ DP_SINK_PR_ENABLE_AND_CONFIGURATION,
++ (uint8_t *)&(replay_config.raw), sizeof(uint8_t));
++
++ if (!(link->replay_settings.config.replay_supported))
++ return false;
++
++ link->replay_settings.config.replay_error_status.raw = 0;
++
+ dc = link->ctx->dc;
+
+ replay = dc->res_pool->replay;
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn315/dcn315_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn315/dcn315_resource.c
+index 14acef036b5a01..6c2bb3f63be15e 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn315/dcn315_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn315/dcn315_resource.c
+@@ -1698,7 +1698,7 @@ static int dcn315_populate_dml_pipes_from_context(
+ pipes[pipe_cnt].dout.dsc_input_bpc = 0;
+ DC_FP_START();
+ dcn31_zero_pipe_dcc_fraction(pipes, pipe_cnt);
+- if (pixel_rate_crb && !pipe->top_pipe && !pipe->prev_odm_pipe) {
++ if (pixel_rate_crb) {
+ int bpp = source_format_to_bpp(pipes[pipe_cnt].pipe.src.source_format);
+ /* Ceil to crb segment size */
+ int approx_det_segs_required_for_pstate = dcn_get_approx_det_segs_required_for_pstate(
+@@ -1755,28 +1755,26 @@ static int dcn315_populate_dml_pipes_from_context(
+ continue;
+ }
+
+- if (!pipe->top_pipe && !pipe->prev_odm_pipe) {
+- bool split_required = pipe->stream->timing.pix_clk_100hz >= dcn_get_max_non_odm_pix_rate_100hz(&dc->dml.soc)
+- || (pipe->plane_state && pipe->plane_state->src_rect.width > 5120);
+-
+- if (remaining_det_segs > MIN_RESERVED_DET_SEGS && crb_pipes != 0)
+- pipes[pipe_cnt].pipe.src.det_size_override += (remaining_det_segs - MIN_RESERVED_DET_SEGS) / crb_pipes +
+- (crb_idx < (remaining_det_segs - MIN_RESERVED_DET_SEGS) % crb_pipes ? 1 : 0);
+- if (pipes[pipe_cnt].pipe.src.det_size_override > 2 * DCN3_15_MAX_DET_SEGS) {
+- /* Clamp to 2 pipe split max det segments */
+- remaining_det_segs += pipes[pipe_cnt].pipe.src.det_size_override - 2 * (DCN3_15_MAX_DET_SEGS);
+- pipes[pipe_cnt].pipe.src.det_size_override = 2 * DCN3_15_MAX_DET_SEGS;
+- }
+- if (pipes[pipe_cnt].pipe.src.det_size_override > DCN3_15_MAX_DET_SEGS || split_required) {
+- /* If we are splitting we must have an even number of segments */
+- remaining_det_segs += pipes[pipe_cnt].pipe.src.det_size_override % 2;
+- pipes[pipe_cnt].pipe.src.det_size_override -= pipes[pipe_cnt].pipe.src.det_size_override % 2;
+- }
+- /* Convert segments into size for DML use */
+- pipes[pipe_cnt].pipe.src.det_size_override *= DCN3_15_CRB_SEGMENT_SIZE_KB;
+-
+- crb_idx++;
++ bool split_required = pipe->stream->timing.pix_clk_100hz >= dcn_get_max_non_odm_pix_rate_100hz(&dc->dml.soc)
++ || (pipe->plane_state && pipe->plane_state->src_rect.width > 5120);
++
++ if (remaining_det_segs > MIN_RESERVED_DET_SEGS && crb_pipes != 0)
++ pipes[pipe_cnt].pipe.src.det_size_override += (remaining_det_segs - MIN_RESERVED_DET_SEGS) / crb_pipes +
++ (crb_idx < (remaining_det_segs - MIN_RESERVED_DET_SEGS) % crb_pipes ? 1 : 0);
++ if (pipes[pipe_cnt].pipe.src.det_size_override > 2 * DCN3_15_MAX_DET_SEGS) {
++ /* Clamp to 2 pipe split max det segments */
++ remaining_det_segs += pipes[pipe_cnt].pipe.src.det_size_override - 2 * (DCN3_15_MAX_DET_SEGS);
++ pipes[pipe_cnt].pipe.src.det_size_override = 2 * DCN3_15_MAX_DET_SEGS;
++ }
++ if (pipes[pipe_cnt].pipe.src.det_size_override > DCN3_15_MAX_DET_SEGS || split_required) {
++ /* If we are splitting we must have an even number of segments */
++ remaining_det_segs += pipes[pipe_cnt].pipe.src.det_size_override % 2;
++ pipes[pipe_cnt].pipe.src.det_size_override -= pipes[pipe_cnt].pipe.src.det_size_override % 2;
+ }
++ /* Convert segments into size for DML use */
++ pipes[pipe_cnt].pipe.src.det_size_override *= DCN3_15_CRB_SEGMENT_SIZE_KB;
++
++ crb_idx++;
+ pipe_cnt++;
+ }
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/spl/dc_spl.c b/drivers/gpu/drm/amd/display/dc/spl/dc_spl.c
+index 38a9a0d680581a..047f05ab018102 100644
+--- a/drivers/gpu/drm/amd/display/dc/spl/dc_spl.c
++++ b/drivers/gpu/drm/amd/display/dc/spl/dc_spl.c
+@@ -8,7 +8,7 @@
+ #include "dc_spl_isharp_filters.h"
+ #include "spl_debug.h"
+
+-#define IDENTITY_RATIO(ratio) (spl_fixpt_u2d19(ratio) == (1 << 19))
++#define IDENTITY_RATIO(ratio) (spl_fixpt_u3d19(ratio) == (1 << 19))
+ #define MIN_VIEWPORT_SIZE 12
+
+ static bool spl_is_yuv420(enum spl_pixel_format format)
+@@ -76,6 +76,21 @@ static struct spl_rect shift_rec(const struct spl_rect *rec_in, int x, int y)
+ return rec_out;
+ }
+
++static void spl_opp_adjust_rect(struct spl_rect *rec, const struct spl_opp_adjust *adjust)
++{
++ if ((rec->x + adjust->x) >= 0)
++ rec->x += adjust->x;
++
++ if ((rec->y + adjust->y) >= 0)
++ rec->y += adjust->y;
++
++ if ((rec->width + adjust->width) >= 1)
++ rec->width += adjust->width;
++
++ if ((rec->height + adjust->height) >= 1)
++ rec->height += adjust->height;
++}
++
+ static struct spl_rect calculate_plane_rec_in_timing_active(
+ struct spl_in *spl_in,
+ const struct spl_rect *rec_in)
+@@ -723,13 +738,15 @@ static void spl_handle_3d_recout(struct spl_in *spl_in, struct spl_rect *recout)
+ }
+ }
+
+-static void spl_clamp_viewport(struct spl_rect *viewport)
++static void spl_clamp_viewport(struct spl_rect *viewport, int min_viewport_size)
+ {
++ if (min_viewport_size == 0)
++ min_viewport_size = MIN_VIEWPORT_SIZE;
+ /* Clamp minimum viewport size */
+- if (viewport->height < MIN_VIEWPORT_SIZE)
+- viewport->height = MIN_VIEWPORT_SIZE;
+- if (viewport->width < MIN_VIEWPORT_SIZE)
+- viewport->width = MIN_VIEWPORT_SIZE;
++ if (viewport->height < min_viewport_size)
++ viewport->height = min_viewport_size;
++ if (viewport->width < min_viewport_size)
++ viewport->width = min_viewport_size;
+ }
+
+ static enum scl_mode spl_get_dscl_mode(const struct spl_in *spl_in,
+@@ -767,25 +784,13 @@ static enum scl_mode spl_get_dscl_mode(const struct spl_in *spl_in,
+ return SCL_MODE_SCALING_420_YCBCR_ENABLE;
+ }
+
+-static bool spl_choose_lls_policy(enum spl_pixel_format format,
+- enum spl_transfer_func_type tf_type,
+- enum spl_transfer_func_predefined tf_predefined_type,
++static void spl_choose_lls_policy(enum spl_pixel_format format,
+ enum linear_light_scaling *lls_pref)
+ {
+- if (spl_is_video_format(format)) {
++ if (spl_is_subsampled_format(format))
+ *lls_pref = LLS_PREF_NO;
+- if ((tf_type == SPL_TF_TYPE_PREDEFINED) ||
+- (tf_type == SPL_TF_TYPE_DISTRIBUTED_POINTS))
+- return true;
+- } else { /* RGB or YUV444 */
+- if ((tf_type == SPL_TF_TYPE_PREDEFINED) ||
+- (tf_type == SPL_TF_TYPE_BYPASS)) {
+- *lls_pref = LLS_PREF_YES;
+- return true;
+- }
+- }
+- *lls_pref = LLS_PREF_NO;
+- return false;
++ else /* RGB or YUV444 */
++ *lls_pref = LLS_PREF_YES;
+ }
+
+ /* Enable EASF ?*/
+@@ -794,7 +799,6 @@ static bool enable_easf(struct spl_in *spl_in, struct spl_scratch *spl_scratch)
+ int vratio = 0;
+ int hratio = 0;
+ bool skip_easf = false;
+- bool lls_enable_easf = true;
+
+ if (spl_in->disable_easf)
+ skip_easf = true;
+@@ -810,17 +814,13 @@ static bool enable_easf(struct spl_in *spl_in, struct spl_scratch *spl_scratch)
+ skip_easf = true;
+
+ /*
+- * If lls_pref is LLS_PREF_DONT_CARE, then use pixel format and transfer
+- * function to determine whether to use LINEAR or NONLINEAR scaling
++ * If lls_pref is LLS_PREF_DONT_CARE, then use pixel format
++ * to determine whether to use LINEAR or NONLINEAR scaling
+ */
+ if (spl_in->lls_pref == LLS_PREF_DONT_CARE)
+- lls_enable_easf = spl_choose_lls_policy(spl_in->basic_in.format,
+- spl_in->basic_in.tf_type, spl_in->basic_in.tf_predefined_type,
++ spl_choose_lls_policy(spl_in->basic_in.format,
+ &spl_in->lls_pref);
+
+- if (!lls_enable_easf)
+- skip_easf = true;
+-
+ /* Check for linear scaling or EASF preferred */
+ if (spl_in->lls_pref != LLS_PREF_YES && !spl_in->prefer_easf)
+ skip_easf = true;
+@@ -887,6 +887,8 @@ static bool spl_get_isharp_en(struct spl_in *spl_in,
+ static void spl_get_taps_non_adaptive_scaler(
+ struct spl_scratch *spl_scratch, const struct spl_taps *in_taps)
+ {
++ bool check_max_downscale = false;
++
+ if (in_taps->h_taps == 0) {
+ if (spl_fixpt_ceil(spl_scratch->scl_data.ratios.horz) > 1)
+ spl_scratch->scl_data.taps.h_taps = spl_min(2 * spl_fixpt_ceil(
+@@ -926,6 +928,23 @@ static void spl_get_taps_non_adaptive_scaler(
+ else
+ spl_scratch->scl_data.taps.h_taps_c = in_taps->h_taps_c;
+
++
++ /*
++ * Max downscale supported is 6.0x. Add ASSERT to catch if go beyond that
++ */
++ check_max_downscale = spl_fixpt_le(spl_scratch->scl_data.ratios.horz,
++ spl_fixpt_from_fraction(6, 1));
++ SPL_ASSERT(check_max_downscale);
++ check_max_downscale = spl_fixpt_le(spl_scratch->scl_data.ratios.vert,
++ spl_fixpt_from_fraction(6, 1));
++ SPL_ASSERT(check_max_downscale);
++ check_max_downscale = spl_fixpt_le(spl_scratch->scl_data.ratios.horz_c,
++ spl_fixpt_from_fraction(6, 1));
++ SPL_ASSERT(check_max_downscale);
++ check_max_downscale = spl_fixpt_le(spl_scratch->scl_data.ratios.vert_c,
++ spl_fixpt_from_fraction(6, 1));
++ SPL_ASSERT(check_max_downscale);
++
+ if (IDENTITY_RATIO(spl_scratch->scl_data.ratios.horz))
+ spl_scratch->scl_data.taps.h_taps = 1;
+ if (IDENTITY_RATIO(spl_scratch->scl_data.ratios.vert))
+@@ -944,8 +963,8 @@ static bool spl_get_optimal_number_of_taps(
+ bool *enable_isharp)
+ {
+ int num_part_y, num_part_c;
+- int max_taps_y, max_taps_c;
+- int min_taps_y, min_taps_c;
++ unsigned int max_taps_y, max_taps_c;
++ unsigned int min_taps_y, min_taps_c;
+ enum lb_memory_config lb_config;
+ bool skip_easf = false;
+ bool is_subsampled = spl_is_subsampled_format(spl_in->basic_in.format);
+@@ -1781,6 +1800,8 @@ static bool spl_calculate_number_of_taps(struct spl_in *spl_in, struct spl_scrat
+ spl_calculate_recout(spl_in, spl_scratch, spl_out);
+ /* depends on pixel format */
+ spl_calculate_scaling_ratios(spl_in, spl_scratch, spl_out);
++ /* Adjust recout for opp if needed */
++ spl_opp_adjust_rect(&spl_scratch->scl_data.recout, &spl_in->basic_in.opp_recout_adjust);
+ /* depends on scaling ratios and recout, does not calculate offset yet */
+ spl_calculate_viewport_size(spl_in, spl_scratch);
+
+@@ -1817,7 +1838,7 @@ bool spl_calculate_scaler_params(struct spl_in *spl_in, struct spl_out *spl_out)
+ // Handle 3d recout
+ spl_handle_3d_recout(spl_in, &spl_scratch.scl_data.recout);
+ // Clamp
+- spl_clamp_viewport(&spl_scratch.scl_data.viewport);
++ spl_clamp_viewport(&spl_scratch.scl_data.viewport, spl_in->min_viewport_size);
+
+ // Save all calculated parameters in dscl_prog_data structure to program hw registers
+ spl_set_dscl_prog_data(spl_in, &spl_scratch, spl_out, enable_easf_v, enable_easf_h, enable_isharp);
+diff --git a/drivers/gpu/drm/amd/display/dc/spl/dc_spl_types.h b/drivers/gpu/drm/amd/display/dc/spl/dc_spl_types.h
+index 467af9dd90ded1..1c3949b24611f6 100644
+--- a/drivers/gpu/drm/amd/display/dc/spl/dc_spl_types.h
++++ b/drivers/gpu/drm/amd/display/dc/spl/dc_spl_types.h
+@@ -427,6 +427,14 @@ struct spl_out {
+
+ // SPL inputs
+
++// opp extra adjustment for rect
++struct spl_opp_adjust {
++ int x;
++ int y;
++ int width;
++ int height;
++};
++
+ // Basic input information
+ struct basic_in {
+ enum spl_pixel_format format; // Pixel Format
+@@ -444,6 +452,7 @@ struct basic_in {
+ } num_slices_recout_width;
+ } num_h_slices_recout_width_align;
+ int mpc_h_slice_index; // previous mpc_combine_v - split_idx
++ struct spl_opp_adjust opp_recout_adjust;
+ // Inputs for adaptive scaler - TODO
+ enum spl_transfer_func_type tf_type; /* Transfer function type */
+ enum spl_transfer_func_predefined tf_predefined_type; /* Transfer function predefined type */
+@@ -484,7 +493,7 @@ struct spl_sharpness_range {
+ };
+ struct adaptive_sharpness {
+ bool enable;
+- int sharpness_level;
++ unsigned int sharpness_level;
+ struct spl_sharpness_range sharpness_range;
+ };
+ enum linear_light_scaling { // convert it in translation logic
+@@ -535,6 +544,7 @@ struct spl_in {
+ bool is_hdr_on;
+ int h_active;
+ int v_active;
++ int min_viewport_size;
+ int sdr_white_level_nits;
+ enum sharpen_policy sharpen_policy;
+ };
+diff --git a/drivers/gpu/drm/amd/display/dc/spl/spl_fixpt31_32.c b/drivers/gpu/drm/amd/display/dc/spl/spl_fixpt31_32.c
+index 131f1e3949d33f..52d97918a3bd21 100644
+--- a/drivers/gpu/drm/amd/display/dc/spl/spl_fixpt31_32.c
++++ b/drivers/gpu/drm/amd/display/dc/spl/spl_fixpt31_32.c
+@@ -346,7 +346,7 @@ struct spl_fixed31_32 spl_fixpt_exp(struct spl_fixed31_32 arg)
+ if (m > 0)
+ return spl_fixpt_shl(
+ spl_fixed31_32_exp_from_taylor_series(r),
+- (unsigned char)m);
++ (unsigned int)m);
+ else
+ return spl_fixpt_div_int(
+ spl_fixed31_32_exp_from_taylor_series(r),
+diff --git a/drivers/gpu/drm/amd/display/dc/spl/spl_fixpt31_32.h b/drivers/gpu/drm/amd/display/dc/spl/spl_fixpt31_32.h
+index ed2647f9a09997..9f349ffe91485b 100644
+--- a/drivers/gpu/drm/amd/display/dc/spl/spl_fixpt31_32.h
++++ b/drivers/gpu/drm/amd/display/dc/spl/spl_fixpt31_32.h
+@@ -189,7 +189,7 @@ static inline struct spl_fixed31_32 spl_fixpt_clamp(
+ * @brief
+ * result = arg << shift
+ */
+-static inline struct spl_fixed31_32 spl_fixpt_shl(struct spl_fixed31_32 arg, unsigned char shift)
++static inline struct spl_fixed31_32 spl_fixpt_shl(struct spl_fixed31_32 arg, unsigned int shift)
+ {
+ SPL_ASSERT(((arg.value >= 0) && (arg.value <= LLONG_MAX >> shift)) ||
+ ((arg.value < 0) && (arg.value >= ~(LLONG_MAX >> shift))));
+@@ -203,7 +203,7 @@ static inline struct spl_fixed31_32 spl_fixpt_shl(struct spl_fixed31_32 arg, uns
+ * @brief
+ * result = arg >> shift
+ */
+-static inline struct spl_fixed31_32 spl_fixpt_shr(struct spl_fixed31_32 arg, unsigned char shift)
++static inline struct spl_fixed31_32 spl_fixpt_shr(struct spl_fixed31_32 arg, unsigned int shift)
+ {
+ bool negative = arg.value < 0;
+
+diff --git a/drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd.h b/drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd.h
+index d0fe324cb5371e..8cf89aed024b76 100644
+--- a/drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd.h
++++ b/drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd.h
+@@ -3118,6 +3118,12 @@ struct dmub_cmd_psr_copy_settings_data {
+ * Some panels request main link off before xth vertical line
+ */
+ uint16_t poweroff_before_vertical_line;
++ /**
++ * Some panels cannot handle idle pattern during PSR entry.
++ * To power down phy before disable stream to avoid sending
++ * idle pattern.
++ */
++ uint8_t power_down_phy_before_disable_stream;
+ };
+
+ /**
+diff --git a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn31.c b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn31.c
+index d9f31b191c693d..1a68b5782cac6c 100644
+--- a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn31.c
++++ b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn31.c
+@@ -83,8 +83,8 @@ static inline void dmub_dcn31_translate_addr(const union dmub_addr *addr_in,
+ void dmub_dcn31_reset(struct dmub_srv *dmub)
+ {
+ union dmub_gpint_data_register cmd;
+- const uint32_t timeout = 100;
+- uint32_t in_reset, scratch, i, pwait_mode;
++ const uint32_t timeout = 100000;
++ uint32_t in_reset, is_enabled, scratch, i, pwait_mode;
+
+ REG_GET(DMCUB_CNTL2, DMCUB_SOFT_RESET, &in_reset);
+
+@@ -108,7 +108,7 @@ void dmub_dcn31_reset(struct dmub_srv *dmub)
+ }
+
+ for (i = 0; i < timeout; ++i) {
+- scratch = dmub->hw_funcs.get_gpint_response(dmub);
++ scratch = REG_READ(DMCUB_SCRATCH7);
+ if (scratch == DMUB_GPINT__STOP_FW_RESPONSE)
+ break;
+
+@@ -125,9 +125,14 @@ void dmub_dcn31_reset(struct dmub_srv *dmub)
+ /* Force reset in case we timed out, DMCUB is likely hung. */
+ }
+
+- REG_UPDATE(DMCUB_CNTL2, DMCUB_SOFT_RESET, 1);
+- REG_UPDATE(DMCUB_CNTL, DMCUB_ENABLE, 0);
+- REG_UPDATE(MMHUBBUB_SOFT_RESET, DMUIF_SOFT_RESET, 1);
++ REG_GET(DMCUB_CNTL, DMCUB_ENABLE, &is_enabled);
++
++ if (is_enabled) {
++ REG_UPDATE(DMCUB_CNTL2, DMCUB_SOFT_RESET, 1);
++ REG_UPDATE(MMHUBBUB_SOFT_RESET, DMUIF_SOFT_RESET, 1);
++ REG_UPDATE(DMCUB_CNTL, DMCUB_ENABLE, 0);
++ }
++
+ REG_WRITE(DMCUB_INBOX1_RPTR, 0);
+ REG_WRITE(DMCUB_INBOX1_WPTR, 0);
+ REG_WRITE(DMCUB_OUTBOX1_RPTR, 0);
+diff --git a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn35.c b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn35.c
+index e5e77bd3c31ea1..01d013a12b9476 100644
+--- a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn35.c
++++ b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn35.c
+@@ -88,7 +88,7 @@ static inline void dmub_dcn35_translate_addr(const union dmub_addr *addr_in,
+ void dmub_dcn35_reset(struct dmub_srv *dmub)
+ {
+ union dmub_gpint_data_register cmd;
+- const uint32_t timeout = 100;
++ const uint32_t timeout = 100000;
+ uint32_t in_reset, is_enabled, scratch, i, pwait_mode;
+
+ REG_GET(DMCUB_CNTL2, DMCUB_SOFT_RESET, &in_reset);
+@@ -113,7 +113,7 @@ void dmub_dcn35_reset(struct dmub_srv *dmub)
+ }
+
+ for (i = 0; i < timeout; ++i) {
+- scratch = dmub->hw_funcs.get_gpint_response(dmub);
++ scratch = REG_READ(DMCUB_SCRATCH7);
+ if (scratch == DMUB_GPINT__STOP_FW_RESPONSE)
+ break;
+
+diff --git a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn401.c b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn401.c
+index 39a8cb6d7523c3..e1c4fe1c6e3ee2 100644
+--- a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn401.c
++++ b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn401.c
+@@ -63,8 +63,10 @@ static inline void dmub_dcn401_translate_addr(const union dmub_addr *addr_in,
+ void dmub_dcn401_reset(struct dmub_srv *dmub)
+ {
+ union dmub_gpint_data_register cmd;
+- const uint32_t timeout = 30;
+- uint32_t in_reset, scratch, i;
++ const uint32_t timeout_us = 1 * 1000 * 1000; //1s
++ const uint32_t poll_delay_us = 1; //1us
++ uint32_t i = 0;
++ uint32_t in_reset, scratch, pwait_mode;
+
+ REG_GET(DMCUB_CNTL2, DMCUB_SOFT_RESET, &in_reset);
+
+@@ -75,32 +77,35 @@ void dmub_dcn401_reset(struct dmub_srv *dmub)
+
+ dmub->hw_funcs.set_gpint(dmub, cmd);
+
+- /**
+- * Timeout covers both the ACK and the wait
+- * for remaining work to finish.
+- *
+- * This is mostly bound by the PHY disable sequence.
+- * Each register check will be greater than 1us, so
+- * don't bother using udelay.
+- */
+-
+- for (i = 0; i < timeout; ++i) {
++ for (i = 0; i < timeout_us; i++) {
+ if (dmub->hw_funcs.is_gpint_acked(dmub, cmd))
+ break;
++
++ udelay(poll_delay_us);
+ }
+
+- for (i = 0; i < timeout; ++i) {
++ for (; i < timeout_us; i++) {
+ scratch = dmub->hw_funcs.get_gpint_response(dmub);
+ if (scratch == DMUB_GPINT__STOP_FW_RESPONSE)
+ break;
++
++ udelay(poll_delay_us);
+ }
+
+- /* Force reset in case we timed out, DMCUB is likely hung. */
++ for (; i < timeout_us; i++) {
++ REG_GET(DMCUB_CNTL, DMCUB_PWAIT_MODE_STATUS, &pwait_mode);
++ if (pwait_mode & (1 << 0))
++ break;
++
++ udelay(poll_delay_us);
++ }
++ }
++
++ if (i >= timeout_us) {
++ /* timeout should never occur */
++ BREAK_TO_DEBUGGER();
+ }
+
+- REG_UPDATE(DMCUB_CNTL2, DMCUB_SOFT_RESET, 1);
+- REG_UPDATE(DMCUB_CNTL, DMCUB_ENABLE, 0);
+- REG_UPDATE(MMHUBBUB_SOFT_RESET, DMUIF_SOFT_RESET, 1);
+ REG_WRITE(DMCUB_INBOX1_RPTR, 0);
+ REG_WRITE(DMCUB_INBOX1_WPTR, 0);
+ REG_WRITE(DMCUB_OUTBOX1_RPTR, 0);
+@@ -131,7 +136,10 @@ void dmub_dcn401_backdoor_load(struct dmub_srv *dmub,
+
+ dmub_dcn401_get_fb_base_offset(dmub, &fb_base, &fb_offset);
+
++ /* reset and disable DMCUB and MMHUBBUB DMUIF */
+ REG_UPDATE(DMCUB_SEC_CNTL, DMCUB_SEC_RESET, 1);
++ REG_UPDATE(MMHUBBUB_SOFT_RESET, DMUIF_SOFT_RESET, 1);
++ REG_UPDATE(DMCUB_CNTL, DMCUB_ENABLE, 0);
+
+ dmub_dcn401_translate_addr(&cw0->offset, fb_base, fb_offset, &offset);
+
+@@ -151,6 +159,7 @@ void dmub_dcn401_backdoor_load(struct dmub_srv *dmub,
+ DMCUB_REGION3_CW1_TOP_ADDRESS, cw1->region.top,
+ DMCUB_REGION3_CW1_ENABLE, 1);
+
++ /* release DMCUB reset only to prevent premature execution */
+ REG_UPDATE_2(DMCUB_SEC_CNTL, DMCUB_SEC_RESET, 0, DMCUB_MEM_UNIT_ID,
+ 0x20);
+ }
+@@ -161,7 +170,10 @@ void dmub_dcn401_backdoor_load_zfb_mode(struct dmub_srv *dmub,
+ {
+ union dmub_addr offset;
+
++ /* reset and disable DMCUB and MMHUBBUB DMUIF */
+ REG_UPDATE(DMCUB_SEC_CNTL, DMCUB_SEC_RESET, 1);
++ REG_UPDATE(MMHUBBUB_SOFT_RESET, DMUIF_SOFT_RESET, 1);
++ REG_UPDATE(DMCUB_CNTL, DMCUB_ENABLE, 0);
+
+ offset = cw0->offset;
+
+@@ -181,6 +193,7 @@ void dmub_dcn401_backdoor_load_zfb_mode(struct dmub_srv *dmub,
+ DMCUB_REGION3_CW1_TOP_ADDRESS, cw1->region.top,
+ DMCUB_REGION3_CW1_ENABLE, 1);
+
++ /* release DMCUB reset only to prevent premature execution */
+ REG_UPDATE_2(DMCUB_SEC_CNTL, DMCUB_SEC_RESET, 0, DMCUB_MEM_UNIT_ID,
+ 0x20);
+ }
+diff --git a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn401.h b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn401.h
+index 4c8843b796950b..31f95b27e227d6 100644
+--- a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn401.h
++++ b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn401.h
+@@ -169,7 +169,8 @@ struct dmub_srv;
+ DMUB_SF(HOST_INTERRUPT_CSR, HOST_REG_INBOX0_RSP_INT_EN) \
+ DMUB_SF(HOST_INTERRUPT_CSR, HOST_REG_OUTBOX0_RDY_INT_ACK) \
+ DMUB_SF(HOST_INTERRUPT_CSR, HOST_REG_OUTBOX0_RDY_INT_STAT) \
+- DMUB_SF(HOST_INTERRUPT_CSR, HOST_REG_OUTBOX0_RDY_INT_EN)
++ DMUB_SF(HOST_INTERRUPT_CSR, HOST_REG_OUTBOX0_RDY_INT_EN) \
++ DMUB_SF(DMCUB_CNTL, DMCUB_PWAIT_MODE_STATUS)
+
+ struct dmub_srv_dcn401_reg_offset {
+ #define DMUB_SR(reg) uint32_t reg;
+diff --git a/drivers/gpu/drm/amd/display/modules/info_packet/info_packet.c b/drivers/gpu/drm/amd/display/modules/info_packet/info_packet.c
+index a344e2e49b0eab..b3d55cac35694b 100644
+--- a/drivers/gpu/drm/amd/display/modules/info_packet/info_packet.c
++++ b/drivers/gpu/drm/amd/display/modules/info_packet/info_packet.c
+@@ -383,10 +383,10 @@ void mod_build_vsc_infopacket(const struct dc_stream_state *stream,
+ colorimetryFormat = ColorimetryYCC_DP_ITU709;
+ else if (cs == COLOR_SPACE_ADOBERGB)
+ colorimetryFormat = ColorimetryYCC_DP_AdobeYCC;
+- else if (cs == COLOR_SPACE_2020_YCBCR)
++ else if (cs == COLOR_SPACE_2020_YCBCR_LIMITED)
+ colorimetryFormat = ColorimetryYCC_DP_ITU2020YCbCr;
+
+- if (cs == COLOR_SPACE_2020_YCBCR && tf == TRANSFER_FUNC_GAMMA_22)
++ if (cs == COLOR_SPACE_2020_YCBCR_LIMITED && tf == TRANSFER_FUNC_GAMMA_22)
+ colorimetryFormat = ColorimetryYCC_DP_ITU709;
+ break;
+
+diff --git a/drivers/gpu/drm/amd/include/asic_reg/mmhub/mmhub_9_4_1_offset.h b/drivers/gpu/drm/amd/include/asic_reg/mmhub/mmhub_9_4_1_offset.h
+index c488d4a50cf46a..b2252deabc17a4 100644
+--- a/drivers/gpu/drm/amd/include/asic_reg/mmhub/mmhub_9_4_1_offset.h
++++ b/drivers/gpu/drm/amd/include/asic_reg/mmhub/mmhub_9_4_1_offset.h
+@@ -203,6 +203,10 @@
+ #define mmDAGB0_WR_DATA_CREDIT_BASE_IDX 1
+ #define mmDAGB0_WR_MISC_CREDIT 0x0058
+ #define mmDAGB0_WR_MISC_CREDIT_BASE_IDX 1
++#define mmDAGB0_WRCLI_GPU_SNOOP_OVERRIDE 0x005b
++#define mmDAGB0_WRCLI_GPU_SNOOP_OVERRIDE_BASE_IDX 1
++#define mmDAGB0_WRCLI_GPU_SNOOP_OVERRIDE_VALUE 0x005c
++#define mmDAGB0_WRCLI_GPU_SNOOP_OVERRIDE_VALUE_BASE_IDX 1
+ #define mmDAGB0_WRCLI_ASK_PENDING 0x005d
+ #define mmDAGB0_WRCLI_ASK_PENDING_BASE_IDX 1
+ #define mmDAGB0_WRCLI_GO_PENDING 0x005e
+@@ -455,6 +459,10 @@
+ #define mmDAGB1_WR_DATA_CREDIT_BASE_IDX 1
+ #define mmDAGB1_WR_MISC_CREDIT 0x00d8
+ #define mmDAGB1_WR_MISC_CREDIT_BASE_IDX 1
++#define mmDAGB1_WRCLI_GPU_SNOOP_OVERRIDE 0x00db
++#define mmDAGB1_WRCLI_GPU_SNOOP_OVERRIDE_BASE_IDX 1
++#define mmDAGB1_WRCLI_GPU_SNOOP_OVERRIDE_VALUE 0x00dc
++#define mmDAGB1_WRCLI_GPU_SNOOP_OVERRIDE_VALUE_BASE_IDX 1
+ #define mmDAGB1_WRCLI_ASK_PENDING 0x00dd
+ #define mmDAGB1_WRCLI_ASK_PENDING_BASE_IDX 1
+ #define mmDAGB1_WRCLI_GO_PENDING 0x00de
+@@ -707,6 +715,10 @@
+ #define mmDAGB2_WR_DATA_CREDIT_BASE_IDX 1
+ #define mmDAGB2_WR_MISC_CREDIT 0x0158
+ #define mmDAGB2_WR_MISC_CREDIT_BASE_IDX 1
++#define mmDAGB2_WRCLI_GPU_SNOOP_OVERRIDE 0x015b
++#define mmDAGB2_WRCLI_GPU_SNOOP_OVERRIDE_BASE_IDX 1
++#define mmDAGB2_WRCLI_GPU_SNOOP_OVERRIDE_VALUE 0x015c
++#define mmDAGB2_WRCLI_GPU_SNOOP_OVERRIDE_VALUE_BASE_IDX 1
+ #define mmDAGB2_WRCLI_ASK_PENDING 0x015d
+ #define mmDAGB2_WRCLI_ASK_PENDING_BASE_IDX 1
+ #define mmDAGB2_WRCLI_GO_PENDING 0x015e
+@@ -959,6 +971,10 @@
+ #define mmDAGB3_WR_DATA_CREDIT_BASE_IDX 1
+ #define mmDAGB3_WR_MISC_CREDIT 0x01d8
+ #define mmDAGB3_WR_MISC_CREDIT_BASE_IDX 1
++#define mmDAGB3_WRCLI_GPU_SNOOP_OVERRIDE 0x01db
++#define mmDAGB3_WRCLI_GPU_SNOOP_OVERRIDE_BASE_IDX 1
++#define mmDAGB3_WRCLI_GPU_SNOOP_OVERRIDE_VALUE 0x01dc
++#define mmDAGB3_WRCLI_GPU_SNOOP_OVERRIDE_VALUE_BASE_IDX 1
+ #define mmDAGB3_WRCLI_ASK_PENDING 0x01dd
+ #define mmDAGB3_WRCLI_ASK_PENDING_BASE_IDX 1
+ #define mmDAGB3_WRCLI_GO_PENDING 0x01de
+@@ -1211,6 +1227,10 @@
+ #define mmDAGB4_WR_DATA_CREDIT_BASE_IDX 1
+ #define mmDAGB4_WR_MISC_CREDIT 0x0258
+ #define mmDAGB4_WR_MISC_CREDIT_BASE_IDX 1
++#define mmDAGB4_WRCLI_GPU_SNOOP_OVERRIDE 0x025b
++#define mmDAGB4_WRCLI_GPU_SNOOP_OVERRIDE_BASE_IDX 1
++#define mmDAGB4_WRCLI_GPU_SNOOP_OVERRIDE_VALUE 0x025c
++#define mmDAGB4_WRCLI_GPU_SNOOP_OVERRIDE_VALUE_BASE_IDX 1
+ #define mmDAGB4_WRCLI_ASK_PENDING 0x025d
+ #define mmDAGB4_WRCLI_ASK_PENDING_BASE_IDX 1
+ #define mmDAGB4_WRCLI_GO_PENDING 0x025e
+@@ -4793,6 +4813,10 @@
+ #define mmDAGB5_WR_DATA_CREDIT_BASE_IDX 1
+ #define mmDAGB5_WR_MISC_CREDIT 0x3058
+ #define mmDAGB5_WR_MISC_CREDIT_BASE_IDX 1
++#define mmDAGB5_WRCLI_GPU_SNOOP_OVERRIDE 0x305b
++#define mmDAGB5_WRCLI_GPU_SNOOP_OVERRIDE_BASE_IDX 1
++#define mmDAGB5_WRCLI_GPU_SNOOP_OVERRIDE_VALUE 0x305c
++#define mmDAGB5_WRCLI_GPU_SNOOP_OVERRIDE_VALUE_BASE_IDX 1
+ #define mmDAGB5_WRCLI_ASK_PENDING 0x305d
+ #define mmDAGB5_WRCLI_ASK_PENDING_BASE_IDX 1
+ #define mmDAGB5_WRCLI_GO_PENDING 0x305e
+@@ -5045,6 +5069,10 @@
+ #define mmDAGB6_WR_DATA_CREDIT_BASE_IDX 1
+ #define mmDAGB6_WR_MISC_CREDIT 0x30d8
+ #define mmDAGB6_WR_MISC_CREDIT_BASE_IDX 1
++#define mmDAGB6_WRCLI_GPU_SNOOP_OVERRIDE 0x30db
++#define mmDAGB6_WRCLI_GPU_SNOOP_OVERRIDE_BASE_IDX 1
++#define mmDAGB6_WRCLI_GPU_SNOOP_OVERRIDE_VALUE 0x30dc
++#define mmDAGB6_WRCLI_GPU_SNOOP_OVERRIDE_VALUE_BASE_IDX 1
+ #define mmDAGB6_WRCLI_ASK_PENDING 0x30dd
+ #define mmDAGB6_WRCLI_ASK_PENDING_BASE_IDX 1
+ #define mmDAGB6_WRCLI_GO_PENDING 0x30de
+@@ -5297,6 +5325,10 @@
+ #define mmDAGB7_WR_DATA_CREDIT_BASE_IDX 1
+ #define mmDAGB7_WR_MISC_CREDIT 0x3158
+ #define mmDAGB7_WR_MISC_CREDIT_BASE_IDX 1
++#define mmDAGB7_WRCLI_GPU_SNOOP_OVERRIDE 0x315b
++#define mmDAGB7_WRCLI_GPU_SNOOP_OVERRIDE_BASE_IDX 1
++#define mmDAGB7_WRCLI_GPU_SNOOP_OVERRIDE_VALUE 0x315c
++#define mmDAGB7_WRCLI_GPU_SNOOP_OVERRIDE_VALUE_BASE_IDX 1
+ #define mmDAGB7_WRCLI_ASK_PENDING 0x315d
+ #define mmDAGB7_WRCLI_ASK_PENDING_BASE_IDX 1
+ #define mmDAGB7_WRCLI_GO_PENDING 0x315e
+diff --git a/drivers/gpu/drm/amd/include/asic_reg/mmhub/mmhub_9_4_1_sh_mask.h b/drivers/gpu/drm/amd/include/asic_reg/mmhub/mmhub_9_4_1_sh_mask.h
+index 2969fbf282b7d0..5069d2fd467f2b 100644
+--- a/drivers/gpu/drm/amd/include/asic_reg/mmhub/mmhub_9_4_1_sh_mask.h
++++ b/drivers/gpu/drm/amd/include/asic_reg/mmhub/mmhub_9_4_1_sh_mask.h
+@@ -1532,6 +1532,12 @@
+ //DAGB0_WRCLI_DBUS_GO_PENDING
+ #define DAGB0_WRCLI_DBUS_GO_PENDING__BUSY__SHIFT 0x0
+ #define DAGB0_WRCLI_DBUS_GO_PENDING__BUSY_MASK 0xFFFFFFFFL
++//DAGB0_WRCLI_GPU_SNOOP_OVERRIDE
++#define DAGB0_WRCLI_GPU_SNOOP_OVERRIDE__ENABLE__SHIFT 0x0
++#define DAGB0_WRCLI_GPU_SNOOP_OVERRIDE__ENABLE_MASK 0xFFFFFFFFL
++//DAGB0_WRCLI_GPU_SNOOP_OVERRIDE_VALUE
++#define DAGB0_WRCLI_GPU_SNOOP_OVERRIDE_VALUE__ENABLE__SHIFT 0x0
++#define DAGB0_WRCLI_GPU_SNOOP_OVERRIDE_VALUE__ENABLE_MASK 0xFFFFFFFFL
+ //DAGB0_DAGB_DLY
+ #define DAGB0_DAGB_DLY__DLY__SHIFT 0x0
+ #define DAGB0_DAGB_DLY__CLI__SHIFT 0x8
+@@ -3207,6 +3213,12 @@
+ //DAGB1_WRCLI_DBUS_GO_PENDING
+ #define DAGB1_WRCLI_DBUS_GO_PENDING__BUSY__SHIFT 0x0
+ #define DAGB1_WRCLI_DBUS_GO_PENDING__BUSY_MASK 0xFFFFFFFFL
++//DAGB1_WRCLI_GPU_SNOOP_OVERRIDE
++#define DAGB1_WRCLI_GPU_SNOOP_OVERRIDE__ENABLE__SHIFT 0x0
++#define DAGB1_WRCLI_GPU_SNOOP_OVERRIDE__ENABLE_MASK 0xFFFFFFFFL
++//DAGB1_WRCLI_GPU_SNOOP_OVERRIDE_VALUE
++#define DAGB1_WRCLI_GPU_SNOOP_OVERRIDE_VALUE__ENABLE__SHIFT 0x0
++#define DAGB1_WRCLI_GPU_SNOOP_OVERRIDE_VALUE__ENABLE_MASK 0xFFFFFFFFL
+ //DAGB1_DAGB_DLY
+ #define DAGB1_DAGB_DLY__DLY__SHIFT 0x0
+ #define DAGB1_DAGB_DLY__CLI__SHIFT 0x8
+@@ -4882,6 +4894,12 @@
+ //DAGB2_WRCLI_DBUS_GO_PENDING
+ #define DAGB2_WRCLI_DBUS_GO_PENDING__BUSY__SHIFT 0x0
+ #define DAGB2_WRCLI_DBUS_GO_PENDING__BUSY_MASK 0xFFFFFFFFL
++//DAGB2_WRCLI_GPU_SNOOP_OVERRIDE
++#define DAGB2_WRCLI_GPU_SNOOP_OVERRIDE__ENABLE__SHIFT 0x0
++#define DAGB2_WRCLI_GPU_SNOOP_OVERRIDE__ENABLE_MASK 0xFFFFFFFFL
++//DAGB2_WRCLI_GPU_SNOOP_OVERRIDE_VALUE
++#define DAGB2_WRCLI_GPU_SNOOP_OVERRIDE_VALUE__ENABLE__SHIFT 0x0
++#define DAGB2_WRCLI_GPU_SNOOP_OVERRIDE_VALUE__ENABLE_MASK 0xFFFFFFFFL
+ //DAGB2_DAGB_DLY
+ #define DAGB2_DAGB_DLY__DLY__SHIFT 0x0
+ #define DAGB2_DAGB_DLY__CLI__SHIFT 0x8
+@@ -6557,6 +6575,12 @@
+ //DAGB3_WRCLI_DBUS_GO_PENDING
+ #define DAGB3_WRCLI_DBUS_GO_PENDING__BUSY__SHIFT 0x0
+ #define DAGB3_WRCLI_DBUS_GO_PENDING__BUSY_MASK 0xFFFFFFFFL
++//DAGB3_WRCLI_GPU_SNOOP_OVERRIDE
++#define DAGB3_WRCLI_GPU_SNOOP_OVERRIDE__ENABLE__SHIFT 0x0
++#define DAGB3_WRCLI_GPU_SNOOP_OVERRIDE__ENABLE_MASK 0xFFFFFFFFL
++//DAGB3_WRCLI_GPU_SNOOP_OVERRIDE_VALUE
++#define DAGB3_WRCLI_GPU_SNOOP_OVERRIDE_VALUE__ENABLE__SHIFT 0x0
++#define DAGB3_WRCLI_GPU_SNOOP_OVERRIDE_VALUE__ENABLE_MASK 0xFFFFFFFFL
+ //DAGB3_DAGB_DLY
+ #define DAGB3_DAGB_DLY__DLY__SHIFT 0x0
+ #define DAGB3_DAGB_DLY__CLI__SHIFT 0x8
+@@ -8232,6 +8256,12 @@
+ //DAGB4_WRCLI_DBUS_GO_PENDING
+ #define DAGB4_WRCLI_DBUS_GO_PENDING__BUSY__SHIFT 0x0
+ #define DAGB4_WRCLI_DBUS_GO_PENDING__BUSY_MASK 0xFFFFFFFFL
++//DAGB4_WRCLI_GPU_SNOOP_OVERRIDE
++#define DAGB4_WRCLI_GPU_SNOOP_OVERRIDE__ENABLE__SHIFT 0x0
++#define DAGB4_WRCLI_GPU_SNOOP_OVERRIDE__ENABLE_MASK 0xFFFFFFFFL
++//DAGB4_WRCLI_GPU_SNOOP_OVERRIDE_VALUE
++#define DAGB4_WRCLI_GPU_SNOOP_OVERRIDE_VALUE__ENABLE__SHIFT 0x0
++#define DAGB4_WRCLI_GPU_SNOOP_OVERRIDE_VALUE__ENABLE_MASK 0xFFFFFFFFL
+ //DAGB4_DAGB_DLY
+ #define DAGB4_DAGB_DLY__DLY__SHIFT 0x0
+ #define DAGB4_DAGB_DLY__CLI__SHIFT 0x8
+@@ -28737,6 +28767,12 @@
+ //DAGB5_WRCLI_DBUS_GO_PENDING
+ #define DAGB5_WRCLI_DBUS_GO_PENDING__BUSY__SHIFT 0x0
+ #define DAGB5_WRCLI_DBUS_GO_PENDING__BUSY_MASK 0xFFFFFFFFL
++//DAGB5_WRCLI_GPU_SNOOP_OVERRIDE
++#define DAGB5_WRCLI_GPU_SNOOP_OVERRIDE__ENABLE__SHIFT 0x0
++#define DAGB5_WRCLI_GPU_SNOOP_OVERRIDE__ENABLE_MASK 0xFFFFFFFFL
++//DAGB5_WRCLI_GPU_SNOOP_OVERRIDE_VALUE
++#define DAGB5_WRCLI_GPU_SNOOP_OVERRIDE_VALUE__ENABLE__SHIFT 0x0
++#define DAGB5_WRCLI_GPU_SNOOP_OVERRIDE_VALUE__ENABLE_MASK 0xFFFFFFFFL
+ //DAGB5_DAGB_DLY
+ #define DAGB5_DAGB_DLY__DLY__SHIFT 0x0
+ #define DAGB5_DAGB_DLY__CLI__SHIFT 0x8
+@@ -30412,6 +30448,12 @@
+ //DAGB6_WRCLI_DBUS_GO_PENDING
+ #define DAGB6_WRCLI_DBUS_GO_PENDING__BUSY__SHIFT 0x0
+ #define DAGB6_WRCLI_DBUS_GO_PENDING__BUSY_MASK 0xFFFFFFFFL
++//DAGB6_WRCLI_GPU_SNOOP_OVERRIDE
++#define DAGB6_WRCLI_GPU_SNOOP_OVERRIDE__ENABLE__SHIFT 0x0
++#define DAGB6_WRCLI_GPU_SNOOP_OVERRIDE__ENABLE_MASK 0xFFFFFFFFL
++//DAGB6_WRCLI_GPU_SNOOP_OVERRIDE_VALUE
++#define DAGB6_WRCLI_GPU_SNOOP_OVERRIDE_VALUE__ENABLE__SHIFT 0x0
++#define DAGB6_WRCLI_GPU_SNOOP_OVERRIDE_VALUE__ENABLE_MASK 0xFFFFFFFFL
+ //DAGB6_DAGB_DLY
+ #define DAGB6_DAGB_DLY__DLY__SHIFT 0x0
+ #define DAGB6_DAGB_DLY__CLI__SHIFT 0x8
+@@ -32087,6 +32129,12 @@
+ //DAGB7_WRCLI_DBUS_GO_PENDING
+ #define DAGB7_WRCLI_DBUS_GO_PENDING__BUSY__SHIFT 0x0
+ #define DAGB7_WRCLI_DBUS_GO_PENDING__BUSY_MASK 0xFFFFFFFFL
++//DAGB7_WRCLI_GPU_SNOOP_OVERRIDE
++#define DAGB7_WRCLI_GPU_SNOOP_OVERRIDE__ENABLE__SHIFT 0x0
++#define DAGB7_WRCLI_GPU_SNOOP_OVERRIDE__ENABLE_MASK 0xFFFFFFFFL
++//DAGB7_WRCLI_GPU_SNOOP_OVERRIDE_VALUE
++#define DAGB7_WRCLI_GPU_SNOOP_OVERRIDE_VALUE__ENABLE__SHIFT 0x0
++#define DAGB7_WRCLI_GPU_SNOOP_OVERRIDE_VALUE__ENABLE_MASK 0xFFFFFFFFL
+ //DAGB7_DAGB_DLY
+ #define DAGB7_DAGB_DLY__DLY__SHIFT 0x0
+ #define DAGB7_DAGB_DLY__CLI__SHIFT 0x8
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+index ed9dac00ebfb18..f3f5b7dd15ccca 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+@@ -2802,6 +2802,7 @@ int smu_get_power_limit(void *handle,
+ switch (amdgpu_ip_version(adev, MP1_HWIP, 0)) {
+ case IP_VERSION(13, 0, 2):
+ case IP_VERSION(13, 0, 6):
++ case IP_VERSION(13, 0, 12):
+ case IP_VERSION(13, 0, 14):
+ case IP_VERSION(11, 0, 7):
+ case IP_VERSION(11, 0, 11):
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_6_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_6_ppt.c
+index da7bd9227afeb4..5f2a824918e3b3 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_6_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_6_ppt.c
+@@ -450,8 +450,9 @@ static int smu_v13_0_6_init_microcode(struct smu_context *smu)
+ int var = (adev->pdev->device & 0xF);
+ char ucode_prefix[15];
+
+- /* No need to load P2S tables in IOV mode */
+- if (amdgpu_sriov_vf(adev))
++ /* No need to load P2S tables in IOV mode or for smu v13.0.12 */
++ if (amdgpu_sriov_vf(adev) ||
++ (amdgpu_ip_version(smu->adev, MP1_HWIP, 0) == IP_VERSION(13, 0, 12)))
+ return 0;
+
+ if (!(adev->flags & AMD_IS_APU)) {
+diff --git a/drivers/gpu/drm/ast/ast_main.c b/drivers/gpu/drm/ast/ast_main.c
+index bc37c65305d486..96470fc8e6e53a 100644
+--- a/drivers/gpu/drm/ast/ast_main.c
++++ b/drivers/gpu/drm/ast/ast_main.c
+@@ -96,21 +96,21 @@ static void ast_detect_tx_chip(struct ast_device *ast, bool need_post)
+ /* Check 3rd Tx option (digital output afaik) */
+ ast->tx_chip = AST_TX_NONE;
+
+- /*
+- * VGACRA3 Enhanced Color Mode Register, check if DVO is already
+- * enabled, in that case, assume we have a SIL164 TMDS transmitter
+- *
+- * Don't make that assumption if we the chip wasn't enabled and
+- * is at power-on reset, otherwise we'll incorrectly "detect" a
+- * SIL164 when there is none.
+- */
+- if (!need_post) {
+- jreg = ast_get_index_reg_mask(ast, AST_IO_VGACRI, 0xa3, 0xff);
+- if (jreg & 0x80)
+- ast->tx_chip = AST_TX_SIL164;
+- }
+-
+- if (IS_AST_GEN4(ast) || IS_AST_GEN5(ast) || IS_AST_GEN6(ast)) {
++ if (AST_GEN(ast) <= 3) {
++ /*
++ * VGACRA3 Enhanced Color Mode Register, check if DVO is already
++ * enabled, in that case, assume we have a SIL164 TMDS transmitter
++ *
++ * Don't make that assumption if we the chip wasn't enabled and
++ * is at power-on reset, otherwise we'll incorrectly "detect" a
++ * SIL164 when there is none.
++ */
++ if (!need_post) {
++ jreg = ast_get_index_reg_mask(ast, AST_IO_VGACRI, 0xa3, 0xff);
++ if (jreg & 0x80)
++ ast->tx_chip = AST_TX_SIL164;
++ }
++ } else if (IS_AST_GEN4(ast) || IS_AST_GEN5(ast) || IS_AST_GEN6(ast)) {
+ /*
+ * On AST GEN4+, look the configuration set by the SoC in
+ * the SOC scratch register #1 bits 11:8 (interestingly marked
+diff --git a/drivers/gpu/drm/ast/ast_mode.c b/drivers/gpu/drm/ast/ast_mode.c
+index 9d5321c81e68d6..a29fe1ae803f15 100644
+--- a/drivers/gpu/drm/ast/ast_mode.c
++++ b/drivers/gpu/drm/ast/ast_mode.c
+@@ -131,7 +131,7 @@ static bool ast_get_vbios_mode_info(const struct drm_format_info *format,
+ return false;
+ }
+
+- switch (mode->crtc_hdisplay) {
++ switch (mode->hdisplay) {
+ case 640:
+ vbios_mode->enh_table = &res_640x480[refresh_rate_index];
+ break;
+@@ -145,7 +145,7 @@ static bool ast_get_vbios_mode_info(const struct drm_format_info *format,
+ vbios_mode->enh_table = &res_1152x864[refresh_rate_index];
+ break;
+ case 1280:
+- if (mode->crtc_vdisplay == 800)
++ if (mode->vdisplay == 800)
+ vbios_mode->enh_table = &res_1280x800[refresh_rate_index];
+ else
+ vbios_mode->enh_table = &res_1280x1024[refresh_rate_index];
+@@ -157,7 +157,7 @@ static bool ast_get_vbios_mode_info(const struct drm_format_info *format,
+ vbios_mode->enh_table = &res_1440x900[refresh_rate_index];
+ break;
+ case 1600:
+- if (mode->crtc_vdisplay == 900)
++ if (mode->vdisplay == 900)
+ vbios_mode->enh_table = &res_1600x900[refresh_rate_index];
+ else
+ vbios_mode->enh_table = &res_1600x1200[refresh_rate_index];
+@@ -166,7 +166,7 @@ static bool ast_get_vbios_mode_info(const struct drm_format_info *format,
+ vbios_mode->enh_table = &res_1680x1050[refresh_rate_index];
+ break;
+ case 1920:
+- if (mode->crtc_vdisplay == 1080)
++ if (mode->vdisplay == 1080)
+ vbios_mode->enh_table = &res_1920x1080[refresh_rate_index];
+ else
+ vbios_mode->enh_table = &res_1920x1200[refresh_rate_index];
+@@ -210,6 +210,7 @@ static bool ast_get_vbios_mode_info(const struct drm_format_info *format,
+ hborder = (vbios_mode->enh_table->flags & HBorder) ? 8 : 0;
+ vborder = (vbios_mode->enh_table->flags & VBorder) ? 8 : 0;
+
++ adjusted_mode->crtc_hdisplay = vbios_mode->enh_table->hde;
+ adjusted_mode->crtc_htotal = vbios_mode->enh_table->ht;
+ adjusted_mode->crtc_hblank_start = vbios_mode->enh_table->hde + hborder;
+ adjusted_mode->crtc_hblank_end = vbios_mode->enh_table->ht - hborder;
+@@ -219,6 +220,7 @@ static bool ast_get_vbios_mode_info(const struct drm_format_info *format,
+ vbios_mode->enh_table->hfp +
+ vbios_mode->enh_table->hsync);
+
++ adjusted_mode->crtc_vdisplay = vbios_mode->enh_table->vde;
+ adjusted_mode->crtc_vtotal = vbios_mode->enh_table->vt;
+ adjusted_mode->crtc_vblank_start = vbios_mode->enh_table->vde + vborder;
+ adjusted_mode->crtc_vblank_end = vbios_mode->enh_table->vt - vborder;
+diff --git a/drivers/gpu/drm/bridge/adv7511/adv7511_audio.c b/drivers/gpu/drm/bridge/adv7511/adv7511_audio.c
+index 657bc3dd18dff7..98030500a978ac 100644
+--- a/drivers/gpu/drm/bridge/adv7511/adv7511_audio.c
++++ b/drivers/gpu/drm/bridge/adv7511/adv7511_audio.c
+@@ -245,7 +245,9 @@ static const struct hdmi_codec_pdata codec_data = {
+ .ops = &adv7511_codec_ops,
+ .max_i2s_channels = 2,
+ .i2s = 1,
++ .no_i2s_capture = 1,
+ .spdif = 1,
++ .no_spdif_capture = 1,
+ };
+
+ int adv7511_audio_init(struct device *dev, struct adv7511 *adv7511)
+diff --git a/drivers/gpu/drm/drm_atomic_helper.c b/drivers/gpu/drm/drm_atomic_helper.c
+index 32902f77f00dd8..40e4e1b6c91106 100644
+--- a/drivers/gpu/drm/drm_atomic_helper.c
++++ b/drivers/gpu/drm/drm_atomic_helper.c
+@@ -574,6 +574,30 @@ mode_valid(struct drm_atomic_state *state)
+ return 0;
+ }
+
++static int drm_atomic_check_valid_clones(struct drm_atomic_state *state,
++ struct drm_crtc *crtc)
++{
++ struct drm_encoder *drm_enc;
++ struct drm_crtc_state *crtc_state = drm_atomic_get_new_crtc_state(state,
++ crtc);
++
++ drm_for_each_encoder_mask(drm_enc, crtc->dev, crtc_state->encoder_mask) {
++ if (!drm_enc->possible_clones) {
++ DRM_DEBUG("enc%d possible_clones is 0\n", drm_enc->base.id);
++ continue;
++ }
++
++ if ((crtc_state->encoder_mask & drm_enc->possible_clones) !=
++ crtc_state->encoder_mask) {
++ DRM_DEBUG("crtc%d failed valid clone check for mask 0x%x\n",
++ crtc->base.id, crtc_state->encoder_mask);
++ return -EINVAL;
++ }
++ }
++
++ return 0;
++}
++
+ /**
+ * drm_atomic_helper_check_modeset - validate state object for modeset changes
+ * @dev: DRM device
+@@ -745,6 +769,10 @@ drm_atomic_helper_check_modeset(struct drm_device *dev,
+ ret = drm_atomic_add_affected_planes(state, crtc);
+ if (ret != 0)
+ return ret;
++
++ ret = drm_atomic_check_valid_clones(state, crtc);
++ if (ret != 0)
++ return ret;
+ }
+
+ /*
+diff --git a/drivers/gpu/drm/drm_buddy.c b/drivers/gpu/drm/drm_buddy.c
+index 103c185bb1c8a7..ca42e6081d27c4 100644
+--- a/drivers/gpu/drm/drm_buddy.c
++++ b/drivers/gpu/drm/drm_buddy.c
+@@ -324,7 +324,7 @@ EXPORT_SYMBOL(drm_buddy_init);
+ */
+ void drm_buddy_fini(struct drm_buddy *mm)
+ {
+- u64 root_size, size;
++ u64 root_size, size, start;
+ unsigned int order;
+ int i;
+
+@@ -332,7 +332,8 @@ void drm_buddy_fini(struct drm_buddy *mm)
+
+ for (i = 0; i < mm->n_roots; ++i) {
+ order = ilog2(size) - ilog2(mm->chunk_size);
+- __force_merge(mm, 0, size, order);
++ start = drm_buddy_block_offset(mm->roots[i]);
++ __force_merge(mm, start, start + size, order);
+
+ WARN_ON(!drm_buddy_block_is_free(mm->roots[i]));
+ drm_block_free(mm, mm->roots[i]);
+diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
+index 13bc4c290b17d5..9edb3247c767b8 100644
+--- a/drivers/gpu/drm/drm_edid.c
++++ b/drivers/gpu/drm/drm_edid.c
+@@ -6596,6 +6596,7 @@ static void drm_reset_display_info(struct drm_connector *connector)
+ info->has_hdmi_infoframe = false;
+ info->rgb_quant_range_selectable = false;
+ memset(&info->hdmi, 0, sizeof(info->hdmi));
++ memset(&connector->hdr_sink_metadata, 0, sizeof(connector->hdr_sink_metadata));
+
+ info->edid_hdmi_rgb444_dc_modes = 0;
+ info->edid_hdmi_ycbcr444_dc_modes = 0;
+diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
+index ee811764c3df4b..c6240bab3fa558 100644
+--- a/drivers/gpu/drm/drm_gem.c
++++ b/drivers/gpu/drm/drm_gem.c
+@@ -348,7 +348,7 @@ int drm_gem_dumb_map_offset(struct drm_file *file, struct drm_device *dev,
+ return -ENOENT;
+
+ /* Don't allow imported objects to be mapped */
+- if (obj->import_attach) {
++ if (drm_gem_is_imported(obj)) {
+ ret = -EINVAL;
+ goto out;
+ }
+@@ -1178,7 +1178,7 @@ void drm_gem_print_info(struct drm_printer *p, unsigned int indent,
+ drm_vma_node_start(&obj->vma_node));
+ drm_printf_indent(p, indent, "size=%zu\n", obj->size);
+ drm_printf_indent(p, indent, "imported=%s\n",
+- str_yes_no(obj->import_attach));
++ str_yes_no(drm_gem_is_imported(obj)));
+
+ if (obj->funcs->print_info)
+ obj->funcs->print_info(p, indent, obj);
+diff --git a/drivers/gpu/drm/i915/display/intel_dp_mst.c b/drivers/gpu/drm/i915/display/intel_dp_mst.c
+index 86d6185fda50ad..8a6135b179d3b9 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp_mst.c
++++ b/drivers/gpu/drm/i915/display/intel_dp_mst.c
+@@ -221,6 +221,7 @@ int intel_dp_mtp_tu_compute_config(struct intel_dp *intel_dp,
+ to_intel_connector(conn_state->connector);
+ const struct drm_display_mode *adjusted_mode =
+ &crtc_state->hw.adjusted_mode;
++ bool is_mst = intel_crtc_has_type(crtc_state, INTEL_OUTPUT_DP_MST);
+ fixed20_12 pbn_div;
+ int bpp, slots = -EINVAL;
+ int dsc_slice_count = 0;
+@@ -271,7 +272,7 @@ int intel_dp_mtp_tu_compute_config(struct intel_dp *intel_dp,
+ link_bpp_x16,
+ &crtc_state->dp_m_n);
+
+- if (intel_dp->is_mst) {
++ if (is_mst) {
+ int remote_bw_overhead;
+ int remote_tu;
+ fixed20_12 pbn;
+diff --git a/drivers/gpu/drm/mediatek/mtk_dpi.c b/drivers/gpu/drm/mediatek/mtk_dpi.c
+index a12ef24c774234..b7d90574df9a65 100644
+--- a/drivers/gpu/drm/mediatek/mtk_dpi.c
++++ b/drivers/gpu/drm/mediatek/mtk_dpi.c
+@@ -410,12 +410,13 @@ static void mtk_dpi_config_swap_input(struct mtk_dpi *dpi, bool enable)
+
+ static void mtk_dpi_config_2n_h_fre(struct mtk_dpi *dpi)
+ {
+- mtk_dpi_mask(dpi, dpi->conf->reg_h_fre_con, H_FRE_2N, H_FRE_2N);
++ if (dpi->conf->reg_h_fre_con)
++ mtk_dpi_mask(dpi, dpi->conf->reg_h_fre_con, H_FRE_2N, H_FRE_2N);
+ }
+
+ static void mtk_dpi_config_disable_edge(struct mtk_dpi *dpi)
+ {
+- if (dpi->conf->edge_sel_en)
++ if (dpi->conf->edge_sel_en && dpi->conf->reg_h_fre_con)
+ mtk_dpi_mask(dpi, dpi->conf->reg_h_fre_con, 0, EDGE_SEL_EN);
+ }
+
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+index 7b56da24711e43..eca9c7d4ec6f5c 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+@@ -2539,6 +2539,38 @@ static int dpu_encoder_virt_add_phys_encs(
+ return 0;
+ }
+
++/**
++ * dpu_encoder_get_clones - Calculate the possible_clones for DPU encoder
++ * @drm_enc: DRM encoder pointer
++ * Returns: possible_clones mask
++ */
++uint32_t dpu_encoder_get_clones(struct drm_encoder *drm_enc)
++{
++ struct drm_encoder *curr;
++ int type = drm_enc->encoder_type;
++ uint32_t clone_mask = drm_encoder_mask(drm_enc);
++
++ /*
++ * Set writeback as possible clones of real-time DSI encoders and vice
++ * versa
++ *
++ * Writeback encoders can't be clones of each other and DSI
++ * encoders can't be clones of each other.
++ *
++ * TODO: Add DP encoders as valid possible clones for writeback encoders
++ * (and vice versa) once concurrent writeback has been validated for DP
++ */
++ drm_for_each_encoder(curr, drm_enc->dev) {
++ if ((type == DRM_MODE_ENCODER_VIRTUAL &&
++ curr->encoder_type == DRM_MODE_ENCODER_DSI) ||
++ (type == DRM_MODE_ENCODER_DSI &&
++ curr->encoder_type == DRM_MODE_ENCODER_VIRTUAL))
++ clone_mask |= drm_encoder_mask(curr);
++ }
++
++ return clone_mask;
++}
++
+ static int dpu_encoder_setup_display(struct dpu_encoder_virt *dpu_enc,
+ struct dpu_kms *dpu_kms,
+ struct msm_display_info *disp_info)
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.h
+index da133ee4701a32..751be231ee7b12 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.h
+@@ -60,6 +60,8 @@ enum dpu_intf_mode dpu_encoder_get_intf_mode(struct drm_encoder *encoder);
+
+ void dpu_encoder_virt_runtime_resume(struct drm_encoder *encoder);
+
++uint32_t dpu_encoder_get_clones(struct drm_encoder *drm_enc);
++
+ struct drm_encoder *dpu_encoder_init(struct drm_device *dev,
+ int drm_enc_mode,
+ struct msm_display_info *disp_info);
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
+index 8741dc6fc8ddc4..b8f4ebba8ac28a 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
+@@ -2,7 +2,7 @@
+ /*
+ * Copyright (C) 2013 Red Hat
+ * Copyright (c) 2014-2018, The Linux Foundation. All rights reserved.
+- * Copyright (c) 2022 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2022-2024 Qualcomm Innovation Center, Inc. All rights reserved.
+ *
+ * Author: Rob Clark <robdclark@gmail.com>
+ */
+@@ -834,8 +834,11 @@ static int _dpu_kms_drm_obj_init(struct dpu_kms *dpu_kms)
+ return ret;
+
+ num_encoders = 0;
+- drm_for_each_encoder(encoder, dev)
++ drm_for_each_encoder(encoder, dev) {
+ num_encoders++;
++ if (catalog->cwb_count > 0)
++ encoder->possible_clones = dpu_encoder_get_clones(encoder);
++ }
+
+ max_crtc_count = min(catalog->mixer_count, num_encoders);
+
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/r535.c b/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/r535.c
+index 58502102926b6b..bb86b6d4ca49eb 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/r535.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/r535.c
+@@ -61,7 +61,7 @@
+ extern struct dentry *nouveau_debugfs_root;
+
+ #define GSP_MSG_MIN_SIZE GSP_PAGE_SIZE
+-#define GSP_MSG_MAX_SIZE GSP_PAGE_MIN_SIZE * 16
++#define GSP_MSG_MAX_SIZE (GSP_MSG_MIN_SIZE * 16)
+
+ struct r535_gsp_msg {
+ u8 auth_tag_buffer[16];
+diff --git a/drivers/gpu/drm/panel/panel-edp.c b/drivers/gpu/drm/panel/panel-edp.c
+index f8511fe5fb0d65..b0315d3ba00a54 100644
+--- a/drivers/gpu/drm/panel/panel-edp.c
++++ b/drivers/gpu/drm/panel/panel-edp.c
+@@ -1993,6 +1993,7 @@ static const struct edp_panel_entry edp_panels[] = {
+ EDP_PANEL_ENTRY('S', 'H', 'P', 0x154c, &delay_200_500_p2e100, "LQ116M1JW10"),
+ EDP_PANEL_ENTRY('S', 'H', 'P', 0x1593, &delay_200_500_p2e100, "LQ134N1"),
+
++ EDP_PANEL_ENTRY('S', 'T', 'A', 0x0004, &delay_200_500_e200, "116KHD024006"),
+ EDP_PANEL_ENTRY('S', 'T', 'A', 0x0100, &delay_100_500_e200, "2081116HHD028001-51D"),
+
+ { /* sentinal */ }
+diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_vop2.c b/drivers/gpu/drm/rockchip/rockchip_drm_vop2.c
+index 17a98845fd31b5..bcbd4988239282 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_drm_vop2.c
++++ b/drivers/gpu/drm/rockchip/rockchip_drm_vop2.c
+@@ -159,6 +159,7 @@ struct vop2_video_port {
+ struct drm_crtc crtc;
+ struct vop2 *vop2;
+ struct clk *dclk;
++ struct clk *dclk_src;
+ unsigned int id;
+ const struct vop2_video_port_data *data;
+
+@@ -214,6 +215,7 @@ struct vop2 {
+ struct clk *hclk;
+ struct clk *aclk;
+ struct clk *pclk;
++ struct clk *pll_hdmiphy0;
+
+ /* optional internal rgb encoder */
+ struct rockchip_rgb *rgb;
+@@ -222,6 +224,8 @@ struct vop2 {
+ struct vop2_win win[];
+ };
+
++#define VOP2_MAX_DCLK_RATE 600000000
++
+ #define vop2_output_if_is_hdmi(x) ((x) == ROCKCHIP_VOP2_EP_HDMI0 || \
+ (x) == ROCKCHIP_VOP2_EP_HDMI1)
+
+@@ -1155,6 +1159,9 @@ static void vop2_crtc_atomic_disable(struct drm_crtc *crtc,
+
+ vop2_crtc_disable_irq(vp, VP_INT_DSP_HOLD_VALID);
+
++ if (vp->dclk_src)
++ clk_set_parent(vp->dclk, vp->dclk_src);
++
+ clk_disable_unprepare(vp->dclk);
+
+ vop2->enable_count--;
+@@ -1547,10 +1554,8 @@ static void vop2_plane_atomic_update(struct drm_plane *plane,
+
+ rb_swap = vop2_win_rb_swap(fb->format->format);
+ vop2_win_write(win, VOP2_WIN_RB_SWAP, rb_swap);
+- if (!vop2_cluster_window(win)) {
+- uv_swap = vop2_win_uv_swap(fb->format->format);
+- vop2_win_write(win, VOP2_WIN_UV_SWAP, uv_swap);
+- }
++ uv_swap = vop2_win_uv_swap(fb->format->format);
++ vop2_win_write(win, VOP2_WIN_UV_SWAP, uv_swap);
+
+ if (fb->format->is_yuv) {
+ vop2_win_write(win, VOP2_WIN_UV_VIR, DIV_ROUND_UP(fb->pitches[1], 4));
+@@ -2259,6 +2264,27 @@ static void vop2_crtc_atomic_enable(struct drm_crtc *crtc,
+
+ vop2_vp_write(vp, RK3568_VP_MIPI_CTRL, 0);
+
++ /*
++ * Switch to HDMI PHY PLL as DCLK source for display modes up
++ * to 4K@60Hz, if available, otherwise keep using the system CRU.
++ */
++ if (vop2->pll_hdmiphy0 && clock <= VOP2_MAX_DCLK_RATE) {
++ drm_for_each_encoder_mask(encoder, crtc->dev, crtc_state->encoder_mask) {
++ struct rockchip_encoder *rkencoder = to_rockchip_encoder(encoder);
++
++ if (rkencoder->crtc_endpoint_id == ROCKCHIP_VOP2_EP_HDMI0) {
++ if (!vp->dclk_src)
++ vp->dclk_src = clk_get_parent(vp->dclk);
++
++ ret = clk_set_parent(vp->dclk, vop2->pll_hdmiphy0);
++ if (ret < 0)
++ drm_warn(vop2->drm,
++ "Could not switch to HDMI0 PHY PLL: %d\n", ret);
++ break;
++ }
++ }
++ }
++
+ clk_set_rate(vp->dclk, clock);
+
+ vop2_post_config(crtc);
+@@ -3699,6 +3725,12 @@ static int vop2_bind(struct device *dev, struct device *master, void *data)
+ return PTR_ERR(vop2->pclk);
+ }
+
++ vop2->pll_hdmiphy0 = devm_clk_get_optional(vop2->dev, "pll_hdmiphy0");
++ if (IS_ERR(vop2->pll_hdmiphy0)) {
++ drm_err(vop2->drm, "failed to get pll_hdmiphy0\n");
++ return PTR_ERR(vop2->pll_hdmiphy0);
++ }
++
+ vop2->irq = platform_get_irq(pdev, 0);
+ if (vop2->irq < 0) {
+ drm_err(vop2->drm, "cannot find irq for vop2\n");
+diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
+index ea5e4985885738..72c675191a0226 100644
+--- a/drivers/gpu/drm/ttm/ttm_bo.c
++++ b/drivers/gpu/drm/ttm/ttm_bo.c
+@@ -1092,7 +1092,8 @@ struct ttm_bo_swapout_walk {
+ struct ttm_lru_walk walk;
+ /** @gfp_flags: The gfp flags to use for ttm_tt_swapout() */
+ gfp_t gfp_flags;
+-
++ /** @hit_low: Whether we should attempt to swap BO's with low watermark threshold */
++ /** @evict_low: If we cannot swap a bo when @try_low is false (first pass) */
+ bool hit_low, evict_low;
+ };
+
+diff --git a/drivers/gpu/drm/v3d/v3d_drv.c b/drivers/gpu/drm/v3d/v3d_drv.c
+index 930737a9347b63..852015214e971c 100644
+--- a/drivers/gpu/drm/v3d/v3d_drv.c
++++ b/drivers/gpu/drm/v3d/v3d_drv.c
+@@ -295,11 +295,21 @@ static int v3d_platform_drm_probe(struct platform_device *pdev)
+ if (ret)
+ return ret;
+
++ v3d->clk = devm_clk_get_optional(dev, NULL);
++ if (IS_ERR(v3d->clk))
++ return dev_err_probe(dev, PTR_ERR(v3d->clk), "Failed to get V3D clock\n");
++
++ ret = clk_prepare_enable(v3d->clk);
++ if (ret) {
++ dev_err(&pdev->dev, "Couldn't enable the V3D clock\n");
++ return ret;
++ }
++
+ mmu_debug = V3D_READ(V3D_MMU_DEBUG_INFO);
+ mask = DMA_BIT_MASK(30 + V3D_GET_FIELD(mmu_debug, V3D_MMU_PA_WIDTH));
+ ret = dma_set_mask_and_coherent(dev, mask);
+ if (ret)
+- return ret;
++ goto clk_disable;
+
+ v3d->va_width = 30 + V3D_GET_FIELD(mmu_debug, V3D_MMU_VA_WIDTH);
+
+@@ -319,28 +329,29 @@ static int v3d_platform_drm_probe(struct platform_device *pdev)
+ ret = PTR_ERR(v3d->reset);
+
+ if (ret == -EPROBE_DEFER)
+- return ret;
++ goto clk_disable;
+
+ v3d->reset = NULL;
+ ret = map_regs(v3d, &v3d->bridge_regs, "bridge");
+ if (ret) {
+ dev_err(dev,
+ "Failed to get reset control or bridge regs\n");
+- return ret;
++ goto clk_disable;
+ }
+ }
+
+ if (v3d->ver < 41) {
+ ret = map_regs(v3d, &v3d->gca_regs, "gca");
+ if (ret)
+- return ret;
++ goto clk_disable;
+ }
+
+ v3d->mmu_scratch = dma_alloc_wc(dev, 4096, &v3d->mmu_scratch_paddr,
+ GFP_KERNEL | __GFP_NOWARN | __GFP_ZERO);
+ if (!v3d->mmu_scratch) {
+ dev_err(dev, "Failed to allocate MMU scratch page\n");
+- return -ENOMEM;
++ ret = -ENOMEM;
++ goto clk_disable;
+ }
+
+ ret = v3d_gem_init(drm);
+@@ -369,6 +380,8 @@ static int v3d_platform_drm_probe(struct platform_device *pdev)
+ v3d_gem_destroy(drm);
+ dma_free:
+ dma_free_wc(dev, 4096, v3d->mmu_scratch, v3d->mmu_scratch_paddr);
++clk_disable:
++ clk_disable_unprepare(v3d->clk);
+ return ret;
+ }
+
+@@ -386,6 +399,8 @@ static void v3d_platform_drm_remove(struct platform_device *pdev)
+
+ dma_free_wc(v3d->drm.dev, 4096, v3d->mmu_scratch,
+ v3d->mmu_scratch_paddr);
++
++ clk_disable_unprepare(v3d->clk);
+ }
+
+ static struct platform_driver v3d_platform_driver = {
+diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.c b/drivers/gpu/drm/virtio/virtgpu_drv.c
+index 6a67c6297d5836..8719b778a1ff08 100644
+--- a/drivers/gpu/drm/virtio/virtgpu_drv.c
++++ b/drivers/gpu/drm/virtio/virtgpu_drv.c
+@@ -125,6 +125,14 @@ static void virtio_gpu_remove(struct virtio_device *vdev)
+ drm_dev_put(dev);
+ }
+
++static void virtio_gpu_shutdown(struct virtio_device *vdev)
++{
++ /*
++ * drm does its own synchronization on shutdown.
++ * Do nothing here, opt out of device reset.
++ */
++}
++
+ static void virtio_gpu_config_changed(struct virtio_device *vdev)
+ {
+ struct drm_device *dev = vdev->priv;
+@@ -159,6 +167,7 @@ static struct virtio_driver virtio_gpu_driver = {
+ .id_table = id_table,
+ .probe = virtio_gpu_probe,
+ .remove = virtio_gpu_remove,
++ .shutdown = virtio_gpu_shutdown,
+ .config_changed = virtio_gpu_config_changed
+ };
+
+diff --git a/drivers/gpu/drm/xe/display/xe_display.c b/drivers/gpu/drm/xe/display/xe_display.c
+index b3921dbc52ff67..b735e30953ceeb 100644
+--- a/drivers/gpu/drm/xe/display/xe_display.c
++++ b/drivers/gpu/drm/xe/display/xe_display.c
+@@ -346,7 +346,8 @@ static void __xe_display_pm_suspend(struct xe_device *xe, bool runtime)
+
+ xe_display_flush_cleanup_work(xe);
+
+- intel_hpd_cancel_work(xe);
++ if (!runtime)
++ intel_hpd_cancel_work(xe);
+
+ if (!runtime && has_display(xe)) {
+ intel_display_driver_suspend_access(display);
+diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
+index 3f5391d416d469..2070aa12059ce3 100644
+--- a/drivers/gpu/drm/xe/xe_bo.c
++++ b/drivers/gpu/drm/xe/xe_bo.c
+@@ -713,6 +713,21 @@ static int xe_bo_move(struct ttm_buffer_object *ttm_bo, bool evict,
+ goto out;
+ }
+
++ /* Reject BO eviction if BO is bound to current VM. */
++ if (evict && ctx->resv) {
++ struct drm_gpuvm_bo *vm_bo;
++
++ drm_gem_for_each_gpuvm_bo(vm_bo, &bo->ttm.base) {
++ struct xe_vm *vm = gpuvm_to_vm(vm_bo->vm);
++
++ if (xe_vm_resv(vm) == ctx->resv &&
++ xe_vm_in_preempt_fence_mode(vm)) {
++ ret = -EBUSY;
++ goto out;
++ }
++ }
++ }
++
+ /*
+ * Failed multi-hop where the old_mem is still marked as
+ * TTM_PL_FLAG_TEMPORARY, should just be a dummy move.
+@@ -2142,6 +2157,7 @@ int xe_gem_create_ioctl(struct drm_device *dev, void *data,
+ struct xe_file *xef = to_xe_file(file);
+ struct drm_xe_gem_create *args = data;
+ struct xe_vm *vm = NULL;
++ ktime_t end = 0;
+ struct xe_bo *bo;
+ unsigned int bo_flags;
+ u32 handle;
+@@ -2214,6 +2230,10 @@ int xe_gem_create_ioctl(struct drm_device *dev, void *data,
+ vm = xe_vm_lookup(xef, args->vm_id);
+ if (XE_IOCTL_DBG(xe, !vm))
+ return -ENOENT;
++ }
++
++retry:
++ if (vm) {
+ err = xe_vm_lock(vm, true);
+ if (err)
+ goto out_vm;
+@@ -2227,6 +2247,8 @@ int xe_gem_create_ioctl(struct drm_device *dev, void *data,
+
+ if (IS_ERR(bo)) {
+ err = PTR_ERR(bo);
++ if (xe_vm_validate_should_retry(NULL, err, &end))
++ goto retry;
+ goto out_vm;
+ }
+
+diff --git a/drivers/gpu/drm/xe/xe_debugfs.c b/drivers/gpu/drm/xe/xe_debugfs.c
+index 492b4877433f16..92e6fa8fe3a17c 100644
+--- a/drivers/gpu/drm/xe/xe_debugfs.c
++++ b/drivers/gpu/drm/xe/xe_debugfs.c
+@@ -166,7 +166,7 @@ static ssize_t wedged_mode_set(struct file *f, const char __user *ubuf,
+ return -EINVAL;
+
+ if (xe->wedged.mode == wedged_mode)
+- return 0;
++ return size;
+
+ xe->wedged.mode = wedged_mode;
+
+@@ -175,6 +175,7 @@ static ssize_t wedged_mode_set(struct file *f, const char __user *ubuf,
+ ret = xe_guc_ads_scheduler_policy_toggle_reset(>->uc.guc.ads);
+ if (ret) {
+ xe_gt_err(gt, "Failed to update GuC ADS scheduler policy. GuC may still cause engine reset even with wedged_mode=2\n");
++ xe_pm_runtime_put(xe);
+ return -EIO;
+ }
+ }
+diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
+index 4e1839b483a004..74516e73ba4e53 100644
+--- a/drivers/gpu/drm/xe/xe_device.c
++++ b/drivers/gpu/drm/xe/xe_device.c
+@@ -722,7 +722,9 @@ int xe_device_probe(struct xe_device *xe)
+ }
+
+ /* Allocate and map stolen after potential VRAM resize */
+- xe_ttm_stolen_mgr_init(xe);
++ err = xe_ttm_stolen_mgr_init(xe);
++ if (err)
++ return err;
+
+ /*
+ * Now that GT is initialized (TTM in particular),
+@@ -734,6 +736,12 @@ int xe_device_probe(struct xe_device *xe)
+ if (err)
+ goto err;
+
++ for_each_tile(tile, xe, id) {
++ err = xe_tile_init(tile);
++ if (err)
++ goto err;
++ }
++
+ for_each_gt(gt, xe, id) {
+ last_gt = id;
+
+diff --git a/drivers/gpu/drm/xe/xe_drm_client.c b/drivers/gpu/drm/xe/xe_drm_client.c
+index 2d4874d2b92253..31f688e953d7bf 100644
+--- a/drivers/gpu/drm/xe/xe_drm_client.c
++++ b/drivers/gpu/drm/xe/xe_drm_client.c
+@@ -324,6 +324,14 @@ static void show_run_ticks(struct drm_printer *p, struct drm_file *file)
+ u64 gpu_timestamp;
+ unsigned int fw_ref;
+
++ /*
++ * RING_TIMESTAMP registers are inaccessible in VF mode.
++ * Without drm-total-cycles-*, other keys provide little value.
++ * Show all or none of the optional "run_ticks" keys in this case.
++ */
++ if (IS_SRIOV_VF(xe))
++ return;
++
+ /*
+ * Wait for any exec queue going away: their cycles will get updated on
+ * context switch out, so wait for that to happen
+diff --git a/drivers/gpu/drm/xe/xe_gen_wa_oob.c b/drivers/gpu/drm/xe/xe_gen_wa_oob.c
+index 904cf47925aa1d..ed9183599e31cc 100644
+--- a/drivers/gpu/drm/xe/xe_gen_wa_oob.c
++++ b/drivers/gpu/drm/xe/xe_gen_wa_oob.c
+@@ -28,10 +28,10 @@
+ "\n" \
+ "#endif\n"
+
+-static void print_usage(FILE *f)
++static void print_usage(FILE *f, const char *progname)
+ {
+ fprintf(f, "usage: %s <input-rule-file> <generated-c-source-file> <generated-c-header-file>\n",
+- program_invocation_short_name);
++ progname);
+ }
+
+ static void print_parse_error(const char *err_msg, const char *line,
+@@ -144,7 +144,7 @@ int main(int argc, const char *argv[])
+
+ if (argc < 3) {
+ fprintf(stderr, "ERROR: wrong arguments\n");
+- print_usage(stderr);
++ print_usage(stderr, argv[0]);
+ return 1;
+ }
+
+diff --git a/drivers/gpu/drm/xe/xe_gt.c b/drivers/gpu/drm/xe/xe_gt.c
+index 150dca2f910335..6b4b9eca2c384a 100644
+--- a/drivers/gpu/drm/xe/xe_gt.c
++++ b/drivers/gpu/drm/xe/xe_gt.c
+@@ -650,6 +650,9 @@ void xe_gt_mmio_init(struct xe_gt *gt)
+ if (gt->info.type == XE_GT_TYPE_MEDIA) {
+ gt->mmio.adj_offset = MEDIA_GT_GSI_OFFSET;
+ gt->mmio.adj_limit = MEDIA_GT_GSI_LENGTH;
++ } else {
++ gt->mmio.adj_offset = 0;
++ gt->mmio.adj_limit = 0;
+ }
+
+ if (IS_SRIOV_VF(gt_to_xe(gt)))
+diff --git a/drivers/gpu/drm/xe/xe_gt_idle.c b/drivers/gpu/drm/xe/xe_gt_idle.c
+index ffd3ba7f665616..fbbace7b0b12a9 100644
+--- a/drivers/gpu/drm/xe/xe_gt_idle.c
++++ b/drivers/gpu/drm/xe/xe_gt_idle.c
+@@ -69,6 +69,8 @@ static u64 get_residency_ms(struct xe_gt_idle *gtidle, u64 cur_residency)
+ {
+ u64 delta, overflow_residency, prev_residency;
+
++ lockdep_assert_held(>idle->lock);
++
+ overflow_residency = BIT_ULL(32);
+
+ /*
+@@ -275,8 +277,21 @@ static ssize_t idle_status_show(struct device *dev,
+
+ return sysfs_emit(buff, "%s\n", gt_idle_state_to_string(state));
+ }
+-static DEVICE_ATTR_RO(idle_status);
+
++u64 xe_gt_idle_residency_msec(struct xe_gt_idle *gtidle)
++{
++ struct xe_guc_pc *pc = gtidle_to_pc(gtidle);
++ u64 residency;
++ unsigned long flags;
++
++ raw_spin_lock_irqsave(>idle->lock, flags);
++ residency = get_residency_ms(gtidle, gtidle->idle_residency(pc));
++ raw_spin_unlock_irqrestore(>idle->lock, flags);
++
++ return residency;
++}
++
++static DEVICE_ATTR_RO(idle_status);
+ static ssize_t idle_residency_ms_show(struct device *dev,
+ struct device_attribute *attr, char *buff)
+ {
+@@ -285,10 +300,10 @@ static ssize_t idle_residency_ms_show(struct device *dev,
+ u64 residency;
+
+ xe_pm_runtime_get(pc_to_xe(pc));
+- residency = gtidle->idle_residency(pc);
++ residency = xe_gt_idle_residency_msec(gtidle);
+ xe_pm_runtime_put(pc_to_xe(pc));
+
+- return sysfs_emit(buff, "%llu\n", get_residency_ms(gtidle, residency));
++ return sysfs_emit(buff, "%llu\n", residency);
+ }
+ static DEVICE_ATTR_RO(idle_residency_ms);
+
+@@ -331,6 +346,8 @@ int xe_gt_idle_init(struct xe_gt_idle *gtidle)
+ if (!kobj)
+ return -ENOMEM;
+
++ raw_spin_lock_init(>idle->lock);
++
+ if (xe_gt_is_media_type(gt)) {
+ snprintf(gtidle->name, sizeof(gtidle->name), "gt%d-mc", gt->info.id);
+ gtidle->idle_residency = xe_guc_pc_mc6_residency;
+diff --git a/drivers/gpu/drm/xe/xe_gt_idle.h b/drivers/gpu/drm/xe/xe_gt_idle.h
+index 4455a6501cb073..591a01e181bcc2 100644
+--- a/drivers/gpu/drm/xe/xe_gt_idle.h
++++ b/drivers/gpu/drm/xe/xe_gt_idle.h
+@@ -17,5 +17,6 @@ void xe_gt_idle_disable_c6(struct xe_gt *gt);
+ void xe_gt_idle_enable_pg(struct xe_gt *gt);
+ void xe_gt_idle_disable_pg(struct xe_gt *gt);
+ int xe_gt_idle_pg_print(struct xe_gt *gt, struct drm_printer *p);
++u64 xe_gt_idle_residency_msec(struct xe_gt_idle *gtidle);
+
+ #endif /* _XE_GT_IDLE_H_ */
+diff --git a/drivers/gpu/drm/xe/xe_gt_idle_types.h b/drivers/gpu/drm/xe/xe_gt_idle_types.h
+index b8b297a3f8848e..a3667c567f8a7d 100644
+--- a/drivers/gpu/drm/xe/xe_gt_idle_types.h
++++ b/drivers/gpu/drm/xe/xe_gt_idle_types.h
+@@ -6,6 +6,7 @@
+ #ifndef _XE_GT_IDLE_SYSFS_TYPES_H_
+ #define _XE_GT_IDLE_SYSFS_TYPES_H_
+
++#include <linux/spinlock.h>
+ #include <linux/types.h>
+
+ struct xe_guc_pc;
+@@ -31,6 +32,8 @@ struct xe_gt_idle {
+ u64 cur_residency;
+ /** @prev_residency: previous residency counter */
+ u64 prev_residency;
++ /** @lock: Lock protecting idle residency counters */
++ raw_spinlock_t lock;
+ /** @idle_status: get the current idle state */
+ enum xe_gt_idle_state (*idle_status)(struct xe_guc_pc *pc);
+ /** @idle_residency: get idle residency counter */
+diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf.c b/drivers/gpu/drm/xe/xe_gt_sriov_pf.c
+index 6f906c8e8108ba..c08efca6420e71 100644
+--- a/drivers/gpu/drm/xe/xe_gt_sriov_pf.c
++++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf.c
+@@ -15,7 +15,11 @@
+ #include "xe_gt_sriov_pf_helpers.h"
+ #include "xe_gt_sriov_pf_migration.h"
+ #include "xe_gt_sriov_pf_service.h"
++#include "xe_gt_sriov_printk.h"
+ #include "xe_mmio.h"
++#include "xe_pm.h"
++
++static void pf_worker_restart_func(struct work_struct *w);
+
+ /*
+ * VF's metadata is maintained in the flexible array where:
+@@ -41,6 +45,11 @@ static int pf_alloc_metadata(struct xe_gt *gt)
+ return 0;
+ }
+
++static void pf_init_workers(struct xe_gt *gt)
++{
++ INIT_WORK(>->sriov.pf.workers.restart, pf_worker_restart_func);
++}
++
+ /**
+ * xe_gt_sriov_pf_init_early - Prepare SR-IOV PF data structures on PF.
+ * @gt: the &xe_gt to initialize
+@@ -65,6 +74,8 @@ int xe_gt_sriov_pf_init_early(struct xe_gt *gt)
+ if (err)
+ return err;
+
++ pf_init_workers(gt);
++
+ return 0;
+ }
+
+@@ -78,6 +89,12 @@ int xe_gt_sriov_pf_init_early(struct xe_gt *gt)
+ */
+ int xe_gt_sriov_pf_init(struct xe_gt *gt)
+ {
++ int err;
++
++ err = xe_gt_sriov_pf_config_init(gt);
++ if (err)
++ return err;
++
+ return xe_gt_sriov_pf_migration_init(gt);
+ }
+
+@@ -155,6 +172,35 @@ void xe_gt_sriov_pf_sanitize_hw(struct xe_gt *gt, unsigned int vfid)
+ pf_clear_vf_scratch_regs(gt, vfid);
+ }
+
++static void pf_restart(struct xe_gt *gt)
++{
++ struct xe_device *xe = gt_to_xe(gt);
++
++ xe_pm_runtime_get(xe);
++ xe_gt_sriov_pf_config_restart(gt);
++ xe_gt_sriov_pf_control_restart(gt);
++ xe_pm_runtime_put(xe);
++
++ xe_gt_sriov_dbg(gt, "restart completed\n");
++}
++
++static void pf_worker_restart_func(struct work_struct *w)
++{
++ struct xe_gt *gt = container_of(w, typeof(*gt), sriov.pf.workers.restart);
++
++ pf_restart(gt);
++}
++
++static void pf_queue_restart(struct xe_gt *gt)
++{
++ struct xe_device *xe = gt_to_xe(gt);
++
++ xe_gt_assert(gt, IS_SRIOV_PF(xe));
++
++ if (!queue_work(xe->sriov.wq, >->sriov.pf.workers.restart))
++ xe_gt_sriov_dbg(gt, "restart already in queue!\n");
++}
++
+ /**
+ * xe_gt_sriov_pf_restart - Restart SR-IOV support after a GT reset.
+ * @gt: the &xe_gt
+@@ -163,6 +209,5 @@ void xe_gt_sriov_pf_sanitize_hw(struct xe_gt *gt, unsigned int vfid)
+ */
+ void xe_gt_sriov_pf_restart(struct xe_gt *gt)
+ {
+- xe_gt_sriov_pf_config_restart(gt);
+- xe_gt_sriov_pf_control_restart(gt);
++ pf_queue_restart(gt);
+ }
+diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
+index 4bd255adfb401c..27f309e3a76e69 100644
+--- a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
++++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
+@@ -336,6 +336,26 @@ static int pf_push_full_vf_config(struct xe_gt *gt, unsigned int vfid)
+ return err;
+ }
+
++static int pf_push_vf_cfg(struct xe_gt *gt, unsigned int vfid, bool reset)
++{
++ int err = 0;
++
++ xe_gt_assert(gt, vfid);
++ lockdep_assert_held(xe_gt_sriov_pf_master_mutex(gt));
++
++ if (reset)
++ err = pf_send_vf_cfg_reset(gt, vfid);
++ if (!err)
++ err = pf_push_full_vf_config(gt, vfid);
++
++ return err;
++}
++
++static int pf_refresh_vf_cfg(struct xe_gt *gt, unsigned int vfid)
++{
++ return pf_push_vf_cfg(gt, vfid, true);
++}
++
+ static u64 pf_get_ggtt_alignment(struct xe_gt *gt)
+ {
+ struct xe_device *xe = gt_to_xe(gt);
+@@ -432,6 +452,10 @@ static int pf_provision_vf_ggtt(struct xe_gt *gt, unsigned int vfid, u64 size)
+ return err;
+
+ pf_release_vf_config_ggtt(gt, config);
++
++ err = pf_refresh_vf_cfg(gt, vfid);
++ if (unlikely(err))
++ return err;
+ }
+ xe_gt_assert(gt, !xe_ggtt_node_allocated(config->ggtt_region));
+
+@@ -757,6 +781,10 @@ static int pf_provision_vf_ctxs(struct xe_gt *gt, unsigned int vfid, u32 num_ctx
+ return ret;
+
+ pf_release_config_ctxs(gt, config);
++
++ ret = pf_refresh_vf_cfg(gt, vfid);
++ if (unlikely(ret))
++ return ret;
+ }
+
+ if (!num_ctxs)
+@@ -1054,6 +1082,10 @@ static int pf_provision_vf_dbs(struct xe_gt *gt, unsigned int vfid, u32 num_dbs)
+ return ret;
+
+ pf_release_config_dbs(gt, config);
++
++ ret = pf_refresh_vf_cfg(gt, vfid);
++ if (unlikely(ret))
++ return ret;
+ }
+
+ if (!num_dbs)
+@@ -2085,10 +2117,7 @@ int xe_gt_sriov_pf_config_push(struct xe_gt *gt, unsigned int vfid, bool refresh
+ xe_gt_assert(gt, vfid);
+
+ mutex_lock(xe_gt_sriov_pf_master_mutex(gt));
+- if (refresh)
+- err = pf_send_vf_cfg_reset(gt, vfid);
+- if (!err)
+- err = pf_push_full_vf_config(gt, vfid);
++ err = pf_push_vf_cfg(gt, vfid, refresh);
+ mutex_unlock(xe_gt_sriov_pf_master_mutex(gt));
+
+ if (unlikely(err)) {
+@@ -2320,6 +2349,35 @@ int xe_gt_sriov_pf_config_restore(struct xe_gt *gt, unsigned int vfid,
+ return err;
+ }
+
++static void fini_config(void *arg)
++{
++ struct xe_gt *gt = arg;
++ struct xe_device *xe = gt_to_xe(gt);
++ unsigned int n, total_vfs = xe_sriov_pf_get_totalvfs(xe);
++
++ mutex_lock(xe_gt_sriov_pf_master_mutex(gt));
++ for (n = 1; n <= total_vfs; n++)
++ pf_release_vf_config(gt, n);
++ mutex_unlock(xe_gt_sriov_pf_master_mutex(gt));
++}
++
++/**
++ * xe_gt_sriov_pf_config_init - Initialize SR-IOV configuration data.
++ * @gt: the &xe_gt
++ *
++ * This function can only be called on PF.
++ *
++ * Return: 0 on success or a negative error code on failure.
++ */
++int xe_gt_sriov_pf_config_init(struct xe_gt *gt)
++{
++ struct xe_device *xe = gt_to_xe(gt);
++
++ xe_gt_assert(gt, IS_SRIOV_PF(xe));
++
++ return devm_add_action_or_reset(xe->drm.dev, fini_config, gt);
++}
++
+ /**
+ * xe_gt_sriov_pf_config_restart - Restart SR-IOV configurations after a GT reset.
+ * @gt: the &xe_gt
+diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h
+index f894e9d4abba28..513e6512a575b6 100644
+--- a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h
++++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h
+@@ -63,6 +63,7 @@ int xe_gt_sriov_pf_config_restore(struct xe_gt *gt, unsigned int vfid,
+
+ bool xe_gt_sriov_pf_config_is_empty(struct xe_gt *gt, unsigned int vfid);
+
++int xe_gt_sriov_pf_config_init(struct xe_gt *gt);
+ void xe_gt_sriov_pf_config_restart(struct xe_gt *gt);
+
+ int xe_gt_sriov_pf_config_print_ggtt(struct xe_gt *gt, struct drm_printer *p);
+diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf_types.h b/drivers/gpu/drm/xe/xe_gt_sriov_pf_types.h
+index 0426b1a77069ac..a64a6835ad6564 100644
+--- a/drivers/gpu/drm/xe/xe_gt_sriov_pf_types.h
++++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf_types.h
+@@ -35,8 +35,17 @@ struct xe_gt_sriov_metadata {
+ struct xe_gt_sriov_state_snapshot snapshot;
+ };
+
++/**
++ * struct xe_gt_sriov_pf_workers - GT level workers used by the PF.
++ */
++struct xe_gt_sriov_pf_workers {
++ /** @restart: worker that executes actions post GT reset */
++ struct work_struct restart;
++};
++
+ /**
+ * struct xe_gt_sriov_pf - GT level PF virtualization data.
++ * @workers: workers data.
+ * @service: service data.
+ * @control: control data.
+ * @policy: policy data.
+@@ -45,6 +54,7 @@ struct xe_gt_sriov_metadata {
+ * @vfs: metadata for all VFs.
+ */
+ struct xe_gt_sriov_pf {
++ struct xe_gt_sriov_pf_workers workers;
+ struct xe_gt_sriov_pf_service service;
+ struct xe_gt_sriov_pf_control control;
+ struct xe_gt_sriov_pf_policy policy;
+diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_vf.c b/drivers/gpu/drm/xe/xe_gt_sriov_vf.c
+index 9c30cbd9af6e18..a439261bf4d729 100644
+--- a/drivers/gpu/drm/xe/xe_gt_sriov_vf.c
++++ b/drivers/gpu/drm/xe/xe_gt_sriov_vf.c
+@@ -47,12 +47,19 @@ static int guc_action_vf_reset(struct xe_guc *guc)
+ return ret > 0 ? -EPROTO : ret;
+ }
+
++#define GUC_RESET_VF_STATE_RETRY_MAX 10
+ static int vf_reset_guc_state(struct xe_gt *gt)
+ {
++ unsigned int retry = GUC_RESET_VF_STATE_RETRY_MAX;
+ struct xe_guc *guc = >->uc.guc;
+ int err;
+
+- err = guc_action_vf_reset(guc);
++ do {
++ err = guc_action_vf_reset(guc);
++ if (!err || err != -ETIMEDOUT)
++ break;
++ } while (--retry);
++
+ if (unlikely(err))
+ xe_gt_sriov_err(gt, "Failed to reset GuC state (%pe)\n", ERR_PTR(err));
+ return err;
+@@ -229,6 +236,9 @@ int xe_gt_sriov_vf_bootstrap(struct xe_gt *gt)
+ {
+ int err;
+
++ if (!xe_device_uc_enabled(gt_to_xe(gt)))
++ return -ENODEV;
++
+ err = vf_reset_guc_state(gt);
+ if (unlikely(err))
+ return err;
+diff --git a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
+index 9405d83d4db2ab..084cbdeba8eaa5 100644
+--- a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
++++ b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
+@@ -418,6 +418,28 @@ int xe_gt_tlb_invalidation_range(struct xe_gt *gt,
+ return send_tlb_invalidation(>->uc.guc, fence, action, len);
+ }
+
++/**
++ * xe_gt_tlb_invalidation_vm - Issue a TLB invalidation on this GT for a VM
++ * @gt: graphics tile
++ * @vm: VM to invalidate
++ *
++ * Invalidate entire VM's address space
++ */
++void xe_gt_tlb_invalidation_vm(struct xe_gt *gt, struct xe_vm *vm)
++{
++ struct xe_gt_tlb_invalidation_fence fence;
++ u64 range = 1ull << vm->xe->info.va_bits;
++ int ret;
++
++ xe_gt_tlb_invalidation_fence_init(gt, &fence, true);
++
++ ret = xe_gt_tlb_invalidation_range(gt, &fence, 0, range, vm->usm.asid);
++ if (ret < 0)
++ return;
++
++ xe_gt_tlb_invalidation_fence_wait(&fence);
++}
++
+ /**
+ * xe_gt_tlb_invalidation_vma - Issue a TLB invalidation on this GT for a VMA
+ * @gt: GT structure
+diff --git a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.h b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.h
+index 672acfcdf0d70d..abe9b03d543e6e 100644
+--- a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.h
++++ b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.h
+@@ -12,6 +12,7 @@
+
+ struct xe_gt;
+ struct xe_guc;
++struct xe_vm;
+ struct xe_vma;
+
+ int xe_gt_tlb_invalidation_init_early(struct xe_gt *gt);
+@@ -21,6 +22,7 @@ int xe_gt_tlb_invalidation_ggtt(struct xe_gt *gt);
+ int xe_gt_tlb_invalidation_vma(struct xe_gt *gt,
+ struct xe_gt_tlb_invalidation_fence *fence,
+ struct xe_vma *vma);
++void xe_gt_tlb_invalidation_vm(struct xe_gt *gt, struct xe_vm *vm);
+ int xe_gt_tlb_invalidation_range(struct xe_gt *gt,
+ struct xe_gt_tlb_invalidation_fence *fence,
+ u64 start, u64 end, u32 asid);
+diff --git a/drivers/gpu/drm/xe/xe_guc_log.c b/drivers/gpu/drm/xe/xe_guc_log.c
+index 0ca3056d8bd3fa..80514a446ba287 100644
+--- a/drivers/gpu/drm/xe/xe_guc_log.c
++++ b/drivers/gpu/drm/xe/xe_guc_log.c
+@@ -149,16 +149,12 @@ struct xe_guc_log_snapshot *xe_guc_log_snapshot_capture(struct xe_guc_log *log,
+ size_t remain;
+ int i;
+
+- if (!log->bo) {
+- xe_gt_err(gt, "GuC log buffer not allocated\n");
++ if (!log->bo)
+ return NULL;
+- }
+
+ snapshot = xe_guc_log_snapshot_alloc(log, atomic);
+- if (!snapshot) {
+- xe_gt_err(gt, "GuC log snapshot not allocated\n");
++ if (!snapshot)
+ return NULL;
+- }
+
+ remain = snapshot->size;
+ for (i = 0; i < snapshot->num_chunks; i++) {
+diff --git a/drivers/gpu/drm/xe/xe_guc_pc.c b/drivers/gpu/drm/xe/xe_guc_pc.c
+index f382f5d53ca8bc..2276d85926fcb2 100644
+--- a/drivers/gpu/drm/xe/xe_guc_pc.c
++++ b/drivers/gpu/drm/xe/xe_guc_pc.c
+@@ -371,16 +371,17 @@ static void tgl_update_rpa_value(struct xe_guc_pc *pc)
+ u32 reg;
+
+ /*
+- * For PVC we still need to use fused RP1 as the approximation for RPe
+- * For other platforms than PVC we get the resolved RPe directly from
++ * For PVC we still need to use fused RP0 as the approximation for RPa
++ * For other platforms than PVC we get the resolved RPa directly from
+ * PCODE at a different register
+ */
+- if (xe->info.platform == XE_PVC)
++ if (xe->info.platform == XE_PVC) {
+ reg = xe_mmio_read32(>->mmio, PVC_RP_STATE_CAP);
+- else
++ pc->rpa_freq = REG_FIELD_GET(RP0_MASK, reg) * GT_FREQUENCY_MULTIPLIER;
++ } else {
+ reg = xe_mmio_read32(>->mmio, FREQ_INFO_REC);
+-
+- pc->rpa_freq = REG_FIELD_GET(RPA_MASK, reg) * GT_FREQUENCY_MULTIPLIER;
++ pc->rpa_freq = REG_FIELD_GET(RPA_MASK, reg) * GT_FREQUENCY_MULTIPLIER;
++ }
+ }
+
+ static void tgl_update_rpe_value(struct xe_guc_pc *pc)
+@@ -394,12 +395,13 @@ static void tgl_update_rpe_value(struct xe_guc_pc *pc)
+ * For other platforms than PVC we get the resolved RPe directly from
+ * PCODE at a different register
+ */
+- if (xe->info.platform == XE_PVC)
++ if (xe->info.platform == XE_PVC) {
+ reg = xe_mmio_read32(>->mmio, PVC_RP_STATE_CAP);
+- else
++ pc->rpe_freq = REG_FIELD_GET(RP1_MASK, reg) * GT_FREQUENCY_MULTIPLIER;
++ } else {
+ reg = xe_mmio_read32(>->mmio, FREQ_INFO_REC);
+-
+- pc->rpe_freq = REG_FIELD_GET(RPE_MASK, reg) * GT_FREQUENCY_MULTIPLIER;
++ pc->rpe_freq = REG_FIELD_GET(RPE_MASK, reg) * GT_FREQUENCY_MULTIPLIER;
++ }
+ }
+
+ static void pc_update_rp_values(struct xe_guc_pc *pc)
+diff --git a/drivers/gpu/drm/xe/xe_guc_relay.c b/drivers/gpu/drm/xe/xe_guc_relay.c
+index 8f62de026724cc..e5dc94f3e61810 100644
+--- a/drivers/gpu/drm/xe/xe_guc_relay.c
++++ b/drivers/gpu/drm/xe/xe_guc_relay.c
+@@ -225,7 +225,7 @@ __relay_get_transaction(struct xe_guc_relay *relay, bool incoming, u32 remote, u
+ * with CTB lock held which is marked as used in the reclaim path.
+ * Btw, that's one of the reason why we use mempool here!
+ */
+- txn = mempool_alloc(&relay->pool, incoming ? GFP_ATOMIC : GFP_KERNEL);
++ txn = mempool_alloc(&relay->pool, incoming ? GFP_ATOMIC : GFP_NOWAIT);
+ if (!txn)
+ return ERR_PTR(-ENOMEM);
+
+diff --git a/drivers/gpu/drm/xe/xe_mmio.c b/drivers/gpu/drm/xe/xe_mmio.c
+index a48f239cad1c59..9c2f60ce0c948c 100644
+--- a/drivers/gpu/drm/xe/xe_mmio.c
++++ b/drivers/gpu/drm/xe/xe_mmio.c
+@@ -76,12 +76,12 @@ static void mmio_multi_tile_setup(struct xe_device *xe, size_t tile_mmio_size)
+ * is fine as it's going to the root tile's mmio, that's
+ * guaranteed to be initialized earlier in xe_mmio_init()
+ */
+- mtcfg = xe_mmio_read64_2x32(mmio, XEHP_MTCFG_ADDR);
++ mtcfg = xe_mmio_read32(mmio, XEHP_MTCFG_ADDR);
+ tile_count = REG_FIELD_GET(TILE_COUNT, mtcfg) + 1;
+
+ if (tile_count < xe->info.tile_count) {
+ drm_info(&xe->drm, "tile_count: %d, reduced_tile_count %d\n",
+- xe->info.tile_count, tile_count);
++ xe->info.tile_count, tile_count);
+ xe->info.tile_count = tile_count;
+
+ /*
+@@ -173,7 +173,7 @@ int xe_mmio_init(struct xe_device *xe)
+ */
+ xe->mmio.size = pci_resource_len(pdev, GTTMMADR_BAR);
+ xe->mmio.regs = pci_iomap(pdev, GTTMMADR_BAR, 0);
+- if (xe->mmio.regs == NULL) {
++ if (!xe->mmio.regs) {
+ drm_err(&xe->drm, "failed to map registers\n");
+ return -EIO;
+ }
+@@ -338,8 +338,8 @@ u64 xe_mmio_read64_2x32(struct xe_mmio *mmio, struct xe_reg reg)
+ return (u64)udw << 32 | ldw;
+ }
+
+-static int __xe_mmio_wait32(struct xe_mmio *mmio, struct xe_reg reg, u32 mask, u32 val, u32 timeout_us,
+- u32 *out_val, bool atomic, bool expect_match)
++static int __xe_mmio_wait32(struct xe_mmio *mmio, struct xe_reg reg, u32 mask, u32 val,
++ u32 timeout_us, u32 *out_val, bool atomic, bool expect_match)
+ {
+ ktime_t cur = ktime_get_raw();
+ const ktime_t end = ktime_add_us(cur, timeout_us);
+diff --git a/drivers/gpu/drm/xe/xe_oa.c b/drivers/gpu/drm/xe/xe_oa.c
+index eb6cd91e1e2265..abf37d9ab22129 100644
+--- a/drivers/gpu/drm/xe/xe_oa.c
++++ b/drivers/gpu/drm/xe/xe_oa.c
+@@ -548,6 +548,7 @@ static ssize_t xe_oa_read(struct file *file, char __user *buf,
+ mutex_unlock(&stream->stream_lock);
+ } while (!offset && !ret);
+ } else {
++ xe_oa_buffer_check_unlocked(stream);
+ mutex_lock(&stream->stream_lock);
+ ret = __xe_oa_read(stream, buf, count, &offset);
+ mutex_unlock(&stream->stream_lock);
+diff --git a/drivers/gpu/drm/xe/xe_pci.c b/drivers/gpu/drm/xe/xe_pci.c
+index 39be74848e4472..d92b2e5885b986 100644
+--- a/drivers/gpu/drm/xe/xe_pci.c
++++ b/drivers/gpu/drm/xe/xe_pci.c
+@@ -150,7 +150,6 @@ static const struct xe_graphics_desc graphics_xehpc = {
+ };
+
+ static const struct xe_graphics_desc graphics_xelpg = {
+- .name = "Xe_LPG",
+ .hw_engine_mask =
+ BIT(XE_HW_ENGINE_RCS0) | BIT(XE_HW_ENGINE_BCS0) |
+ BIT(XE_HW_ENGINE_CCS0),
+@@ -174,8 +173,6 @@ static const struct xe_graphics_desc graphics_xelpg = {
+ GENMASK(XE_HW_ENGINE_CCS3, XE_HW_ENGINE_CCS0)
+
+ static const struct xe_graphics_desc graphics_xe2 = {
+- .name = "Xe2_LPG / Xe2_HPG / Xe3_LPG",
+-
+ XE2_GFX_FEATURES,
+ };
+
+@@ -200,15 +197,6 @@ static const struct xe_media_desc media_xehpm = {
+ };
+
+ static const struct xe_media_desc media_xelpmp = {
+- .name = "Xe_LPM+",
+- .hw_engine_mask =
+- GENMASK(XE_HW_ENGINE_VCS7, XE_HW_ENGINE_VCS0) |
+- GENMASK(XE_HW_ENGINE_VECS3, XE_HW_ENGINE_VECS0) |
+- BIT(XE_HW_ENGINE_GSCCS0)
+-};
+-
+-static const struct xe_media_desc media_xe2 = {
+- .name = "Xe2_LPM / Xe2_HPM / Xe3_LPM",
+ .hw_engine_mask =
+ GENMASK(XE_HW_ENGINE_VCS7, XE_HW_ENGINE_VCS0) |
+ GENMASK(XE_HW_ENGINE_VECS3, XE_HW_ENGINE_VECS0) |
+@@ -357,21 +345,21 @@ __diag_pop();
+
+ /* Map of GMD_ID values to graphics IP */
+ static const struct gmdid_map graphics_ip_map[] = {
+- { 1270, &graphics_xelpg },
+- { 1271, &graphics_xelpg },
+- { 1274, &graphics_xelpg }, /* Xe_LPG+ */
+- { 2001, &graphics_xe2 },
+- { 2004, &graphics_xe2 },
+- { 3000, &graphics_xe2 },
+- { 3001, &graphics_xe2 },
++ { 1270, "Xe_LPG", &graphics_xelpg },
++ { 1271, "Xe_LPG", &graphics_xelpg },
++ { 1274, "Xe_LPG+", &graphics_xelpg },
++ { 2001, "Xe2_HPG", &graphics_xe2 },
++ { 2004, "Xe2_LPG", &graphics_xe2 },
++ { 3000, "Xe3_LPG", &graphics_xe2 },
++ { 3001, "Xe3_LPG", &graphics_xe2 },
+ };
+
+ /* Map of GMD_ID values to media IP */
+ static const struct gmdid_map media_ip_map[] = {
+- { 1300, &media_xelpmp },
+- { 1301, &media_xe2 },
+- { 2000, &media_xe2 },
+- { 3000, &media_xe2 },
++ { 1300, "Xe_LPM+", &media_xelpmp },
++ { 1301, "Xe2_HPM", &media_xelpmp },
++ { 2000, "Xe2_LPM", &media_xelpmp },
++ { 3000, "Xe3_LPM", &media_xelpmp },
+ };
+
+ /*
+@@ -502,6 +490,7 @@ static void read_gmdid(struct xe_device *xe, enum xe_gmdid_type type, u32 *ver,
+ gt->info.type = XE_GT_TYPE_MAIN;
+ }
+
++ xe_gt_mmio_init(gt);
+ xe_guc_comm_init_early(>->uc.guc);
+
+ /* Don't bother with GMDID if failed to negotiate the GuC ABI */
+@@ -566,6 +555,7 @@ static void handle_gmdid(struct xe_device *xe,
+ for (int i = 0; i < ARRAY_SIZE(graphics_ip_map); i++) {
+ if (ver == graphics_ip_map[i].ver) {
+ xe->info.graphics_verx100 = ver;
++ xe->info.graphics_name = graphics_ip_map[i].name;
+ *graphics = graphics_ip_map[i].ip;
+
+ break;
+@@ -586,6 +576,7 @@ static void handle_gmdid(struct xe_device *xe,
+ for (int i = 0; i < ARRAY_SIZE(media_ip_map); i++) {
+ if (ver == media_ip_map[i].ver) {
+ xe->info.media_verx100 = ver;
++ xe->info.media_name = media_ip_map[i].name;
+ *media = media_ip_map[i].ip;
+
+ break;
+diff --git a/drivers/gpu/drm/xe/xe_pci_sriov.c b/drivers/gpu/drm/xe/xe_pci_sriov.c
+index aaceee748287ef..09ee8a06fe2ed3 100644
+--- a/drivers/gpu/drm/xe/xe_pci_sriov.c
++++ b/drivers/gpu/drm/xe/xe_pci_sriov.c
+@@ -62,6 +62,55 @@ static void pf_reset_vfs(struct xe_device *xe, unsigned int num_vfs)
+ xe_gt_sriov_pf_control_trigger_flr(gt, n);
+ }
+
++static struct pci_dev *xe_pci_pf_get_vf_dev(struct xe_device *xe, unsigned int vf_id)
++{
++ struct pci_dev *pdev = to_pci_dev(xe->drm.dev);
++
++ xe_assert(xe, IS_SRIOV_PF(xe));
++
++ /* caller must use pci_dev_put() */
++ return pci_get_domain_bus_and_slot(pci_domain_nr(pdev->bus),
++ pdev->bus->number,
++ pci_iov_virtfn_devfn(pdev, vf_id));
++}
++
++static void pf_link_vfs(struct xe_device *xe, int num_vfs)
++{
++ struct pci_dev *pdev_pf = to_pci_dev(xe->drm.dev);
++ struct device_link *link;
++ struct pci_dev *pdev_vf;
++ unsigned int n;
++
++ /*
++ * When both PF and VF devices are enabled on the host, during system
++ * resume they are resuming in parallel.
++ *
++ * But PF has to complete the provision of VF first to allow any VFs to
++ * successfully resume.
++ *
++ * Create a parent-child device link between PF and VF devices that will
++ * enforce correct resume order.
++ */
++ for (n = 1; n <= num_vfs; n++) {
++ pdev_vf = xe_pci_pf_get_vf_dev(xe, n - 1);
++
++ /* unlikely, something weird is happening, abort */
++ if (!pdev_vf) {
++ xe_sriov_err(xe, "Cannot find VF%u device, aborting link%s creation!\n",
++ n, str_plural(num_vfs));
++ break;
++ }
++
++ link = device_link_add(&pdev_vf->dev, &pdev_pf->dev,
++ DL_FLAG_AUTOREMOVE_CONSUMER);
++ /* unlikely and harmless, continue with other VFs */
++ if (!link)
++ xe_sriov_notice(xe, "Failed linking VF%u\n", n);
++
++ pci_dev_put(pdev_vf);
++ }
++}
++
+ static int pf_enable_vfs(struct xe_device *xe, int num_vfs)
+ {
+ struct pci_dev *pdev = to_pci_dev(xe->drm.dev);
+@@ -92,6 +141,8 @@ static int pf_enable_vfs(struct xe_device *xe, int num_vfs)
+ if (err < 0)
+ goto failed;
+
++ pf_link_vfs(xe, num_vfs);
++
+ xe_sriov_info(xe, "Enabled %u of %u VF%s\n",
+ num_vfs, total_vfs, str_plural(total_vfs));
+ return num_vfs;
+diff --git a/drivers/gpu/drm/xe/xe_pci_types.h b/drivers/gpu/drm/xe/xe_pci_types.h
+index 79b0f80376a4df..665b4447b2ebcb 100644
+--- a/drivers/gpu/drm/xe/xe_pci_types.h
++++ b/drivers/gpu/drm/xe/xe_pci_types.h
+@@ -44,6 +44,7 @@ struct xe_media_desc {
+
+ struct gmdid_map {
+ unsigned int ver;
++ const char *name;
+ const void *ip;
+ };
+
+diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c
+index dc24baa8409244..148d90611eebfe 100644
+--- a/drivers/gpu/drm/xe/xe_pt.c
++++ b/drivers/gpu/drm/xe/xe_pt.c
+@@ -218,6 +218,20 @@ void xe_pt_destroy(struct xe_pt *pt, u32 flags, struct llist_head *deferred)
+ xe_pt_free(pt);
+ }
+
++/**
++ * xe_pt_clear() - Clear a page-table.
++ * @xe: xe device.
++ * @pt: The page-table.
++ *
++ * Clears page-table by setting to zero.
++ */
++void xe_pt_clear(struct xe_device *xe, struct xe_pt *pt)
++{
++ struct iosys_map *map = &pt->bo->vmap;
++
++ xe_map_memset(xe, map, 0, 0, SZ_4K);
++}
++
+ /**
+ * DOC: Pagetable building
+ *
+diff --git a/drivers/gpu/drm/xe/xe_pt.h b/drivers/gpu/drm/xe/xe_pt.h
+index 9ab386431caddb..8e43912ae8e94c 100644
+--- a/drivers/gpu/drm/xe/xe_pt.h
++++ b/drivers/gpu/drm/xe/xe_pt.h
+@@ -13,6 +13,7 @@ struct dma_fence;
+ struct xe_bo;
+ struct xe_device;
+ struct xe_exec_queue;
++struct xe_svm_range;
+ struct xe_sync_entry;
+ struct xe_tile;
+ struct xe_vm;
+@@ -35,6 +36,8 @@ void xe_pt_populate_empty(struct xe_tile *tile, struct xe_vm *vm,
+
+ void xe_pt_destroy(struct xe_pt *pt, u32 flags, struct llist_head *deferred);
+
++void xe_pt_clear(struct xe_device *xe, struct xe_pt *pt);
++
+ int xe_pt_update_ops_prepare(struct xe_tile *tile, struct xe_vma_ops *vops);
+ struct dma_fence *xe_pt_update_ops_run(struct xe_tile *tile,
+ struct xe_vma_ops *vops);
+diff --git a/drivers/gpu/drm/xe/xe_sa.c b/drivers/gpu/drm/xe/xe_sa.c
+index e055bed7ae5555..4e7aba445ebc8a 100644
+--- a/drivers/gpu/drm/xe/xe_sa.c
++++ b/drivers/gpu/drm/xe/xe_sa.c
+@@ -57,8 +57,6 @@ struct xe_sa_manager *xe_sa_bo_manager_init(struct xe_tile *tile, u32 size, u32
+ }
+ sa_manager->bo = bo;
+ sa_manager->is_iomem = bo->vmap.is_iomem;
+-
+- drm_suballoc_manager_init(&sa_manager->base, managed_size, align);
+ sa_manager->gpu_addr = xe_bo_ggtt_addr(bo);
+
+ if (bo->vmap.is_iomem) {
+@@ -72,6 +70,7 @@ struct xe_sa_manager *xe_sa_bo_manager_init(struct xe_tile *tile, u32 size, u32
+ memset(sa_manager->cpu_ptr, 0, bo->ttm.base.size);
+ }
+
++ drm_suballoc_manager_init(&sa_manager->base, managed_size, align);
+ ret = drmm_add_action_or_reset(&xe->drm, xe_sa_bo_manager_fini,
+ sa_manager);
+ if (ret)
+diff --git a/drivers/gpu/drm/xe/xe_tile.c b/drivers/gpu/drm/xe/xe_tile.c
+index 07cf7cfe4abd5a..377438ea6b8384 100644
+--- a/drivers/gpu/drm/xe/xe_tile.c
++++ b/drivers/gpu/drm/xe/xe_tile.c
+@@ -170,17 +170,19 @@ int xe_tile_init_noalloc(struct xe_tile *tile)
+ if (err)
+ return err;
+
++ xe_wa_apply_tile_workarounds(tile);
++
++ return xe_tile_sysfs_init(tile);
++}
++
++int xe_tile_init(struct xe_tile *tile)
++{
+ tile->mem.kernel_bb_pool = xe_sa_bo_manager_init(tile, SZ_1M, 16);
+ if (IS_ERR(tile->mem.kernel_bb_pool))
+ return PTR_ERR(tile->mem.kernel_bb_pool);
+
+- xe_wa_apply_tile_workarounds(tile);
+-
+- err = xe_tile_sysfs_init(tile);
+-
+ return 0;
+ }
+-
+ void xe_tile_migrate_wait(struct xe_tile *tile)
+ {
+ xe_migrate_wait(tile->migrate);
+diff --git a/drivers/gpu/drm/xe/xe_tile.h b/drivers/gpu/drm/xe/xe_tile.h
+index 1c9e42ade6b05d..eb939316d55b05 100644
+--- a/drivers/gpu/drm/xe/xe_tile.h
++++ b/drivers/gpu/drm/xe/xe_tile.h
+@@ -12,6 +12,7 @@ struct xe_tile;
+
+ int xe_tile_init_early(struct xe_tile *tile, struct xe_device *xe, u8 id);
+ int xe_tile_init_noalloc(struct xe_tile *tile);
++int xe_tile_init(struct xe_tile *tile);
+
+ void xe_tile_migrate_wait(struct xe_tile *tile);
+
+diff --git a/drivers/gpu/drm/xe/xe_ttm_stolen_mgr.c b/drivers/gpu/drm/xe/xe_ttm_stolen_mgr.c
+index d414421f8c131e..d9c9d2547aadf5 100644
+--- a/drivers/gpu/drm/xe/xe_ttm_stolen_mgr.c
++++ b/drivers/gpu/drm/xe/xe_ttm_stolen_mgr.c
+@@ -207,17 +207,16 @@ static u64 detect_stolen(struct xe_device *xe, struct xe_ttm_stolen_mgr *mgr)
+ #endif
+ }
+
+-void xe_ttm_stolen_mgr_init(struct xe_device *xe)
++int xe_ttm_stolen_mgr_init(struct xe_device *xe)
+ {
+- struct xe_ttm_stolen_mgr *mgr = drmm_kzalloc(&xe->drm, sizeof(*mgr), GFP_KERNEL);
+ struct pci_dev *pdev = to_pci_dev(xe->drm.dev);
++ struct xe_ttm_stolen_mgr *mgr;
+ u64 stolen_size, io_size;
+ int err;
+
+- if (!mgr) {
+- drm_dbg_kms(&xe->drm, "Stolen mgr init failed\n");
+- return;
+- }
++ mgr = drmm_kzalloc(&xe->drm, sizeof(*mgr), GFP_KERNEL);
++ if (!mgr)
++ return -ENOMEM;
+
+ if (IS_SRIOV_VF(xe))
+ stolen_size = 0;
+@@ -230,7 +229,7 @@ void xe_ttm_stolen_mgr_init(struct xe_device *xe)
+
+ if (!stolen_size) {
+ drm_dbg_kms(&xe->drm, "No stolen memory support\n");
+- return;
++ return 0;
+ }
+
+ /*
+@@ -246,7 +245,7 @@ void xe_ttm_stolen_mgr_init(struct xe_device *xe)
+ io_size, PAGE_SIZE);
+ if (err) {
+ drm_dbg_kms(&xe->drm, "Stolen mgr init failed: %i\n", err);
+- return;
++ return err;
+ }
+
+ drm_dbg_kms(&xe->drm, "Initialized stolen memory support with %llu bytes\n",
+@@ -254,6 +253,8 @@ void xe_ttm_stolen_mgr_init(struct xe_device *xe)
+
+ if (io_size)
+ mgr->mapping = devm_ioremap_wc(&pdev->dev, mgr->io_base, io_size);
++
++ return 0;
+ }
+
+ u64 xe_ttm_stolen_io_offset(struct xe_bo *bo, u32 offset)
+diff --git a/drivers/gpu/drm/xe/xe_ttm_stolen_mgr.h b/drivers/gpu/drm/xe/xe_ttm_stolen_mgr.h
+index 1777245ff81011..8e877d1e839bd5 100644
+--- a/drivers/gpu/drm/xe/xe_ttm_stolen_mgr.h
++++ b/drivers/gpu/drm/xe/xe_ttm_stolen_mgr.h
+@@ -12,7 +12,7 @@ struct ttm_resource;
+ struct xe_bo;
+ struct xe_device;
+
+-void xe_ttm_stolen_mgr_init(struct xe_device *xe);
++int xe_ttm_stolen_mgr_init(struct xe_device *xe);
+ int xe_ttm_stolen_io_mem_reserve(struct xe_device *xe, struct ttm_resource *mem);
+ bool xe_ttm_stolen_cpu_access_needs_ggtt(struct xe_device *xe);
+ u64 xe_ttm_stolen_io_offset(struct xe_bo *bo, u32 offset);
+diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
+index 5956631c0d40a4..785b8960050bdb 100644
+--- a/drivers/gpu/drm/xe/xe_vm.c
++++ b/drivers/gpu/drm/xe/xe_vm.c
+@@ -8,6 +8,7 @@
+ #include <linux/dma-fence-array.h>
+ #include <linux/nospec.h>
+
++#include <drm/drm_drv.h>
+ #include <drm/drm_exec.h>
+ #include <drm/drm_print.h>
+ #include <drm/ttm/ttm_tt.h>
+@@ -1582,9 +1583,40 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags)
+
+ static void xe_vm_close(struct xe_vm *vm)
+ {
++ struct xe_device *xe = vm->xe;
++ bool bound;
++ int idx;
++
++ bound = drm_dev_enter(&xe->drm, &idx);
++
+ down_write(&vm->lock);
++
+ vm->size = 0;
++
++ if (!((vm->flags & XE_VM_FLAG_MIGRATION))) {
++ struct xe_tile *tile;
++ struct xe_gt *gt;
++ u8 id;
++
++ /* Wait for pending binds */
++ dma_resv_wait_timeout(xe_vm_resv(vm),
++ DMA_RESV_USAGE_BOOKKEEP,
++ false, MAX_SCHEDULE_TIMEOUT);
++
++ if (bound) {
++ for_each_tile(tile, xe, id)
++ if (vm->pt_root[id])
++ xe_pt_clear(xe, vm->pt_root[id]);
++
++ for_each_gt(gt, xe, id)
++ xe_gt_tlb_invalidation_vm(gt, vm);
++ }
++ }
++
+ up_write(&vm->lock);
++
++ if (bound)
++ drm_dev_exit(idx);
+ }
+
+ void xe_vm_close_and_put(struct xe_vm *vm)
+diff --git a/drivers/hid/Kconfig b/drivers/hid/Kconfig
+index 4cfea399ebab2d..76be97c5fc2ff6 100644
+--- a/drivers/hid/Kconfig
++++ b/drivers/hid/Kconfig
+@@ -603,6 +603,7 @@ config HID_LOGITECH
+ tristate "Logitech devices"
+ depends on USB_HID
+ depends on LEDS_CLASS
++ depends on LEDS_CLASS_MULTICOLOR
+ default !EXPERT
+ help
+ Support for Logitech devices that are not fully compliant with HID standard.
+diff --git a/drivers/hid/usbhid/usbkbd.c b/drivers/hid/usbhid/usbkbd.c
+index c439ed2f16dbca..af6bc76dbf6493 100644
+--- a/drivers/hid/usbhid/usbkbd.c
++++ b/drivers/hid/usbhid/usbkbd.c
+@@ -160,7 +160,7 @@ static int usb_kbd_event(struct input_dev *dev, unsigned int type,
+ return -1;
+
+ spin_lock_irqsave(&kbd->leds_lock, flags);
+- kbd->newleds = (!!test_bit(LED_KANA, dev->led) << 3) | (!!test_bit(LED_COMPOSE, dev->led) << 3) |
++ kbd->newleds = (!!test_bit(LED_KANA, dev->led) << 4) | (!!test_bit(LED_COMPOSE, dev->led) << 3) |
+ (!!test_bit(LED_SCROLLL, dev->led) << 2) | (!!test_bit(LED_CAPSL, dev->led) << 1) |
+ (!!test_bit(LED_NUML, dev->led));
+
+diff --git a/drivers/hwmon/acpi_power_meter.c b/drivers/hwmon/acpi_power_meter.c
+index 44afb07409a465..f05986e4f3792a 100644
+--- a/drivers/hwmon/acpi_power_meter.c
++++ b/drivers/hwmon/acpi_power_meter.c
+@@ -437,9 +437,13 @@ static ssize_t show_val(struct device *dev,
+ ret = update_cap(resource);
+ if (ret)
+ return ret;
++ resource->power_alarm = resource->power > resource->cap;
++ val = resource->power_alarm;
++ } else {
++ val = resource->power_alarm ||
++ resource->power > resource->cap;
++ resource->power_alarm = resource->power > resource->cap;
+ }
+- val = resource->power_alarm || resource->power > resource->cap;
+- resource->power_alarm = resource->power > resource->cap;
+ break;
+ case 7:
+ case 8:
+diff --git a/drivers/hwmon/dell-smm-hwmon.c b/drivers/hwmon/dell-smm-hwmon.c
+index cd00adaad1b414..79e5606e6d2f8f 100644
+--- a/drivers/hwmon/dell-smm-hwmon.c
++++ b/drivers/hwmon/dell-smm-hwmon.c
+@@ -73,7 +73,7 @@
+ #define DELL_SMM_LEGACY_EXECUTE 0x1
+
+ #define DELL_SMM_NO_TEMP 10
+-#define DELL_SMM_NO_FANS 3
++#define DELL_SMM_NO_FANS 4
+
+ struct smm_regs {
+ unsigned int eax;
+@@ -1074,11 +1074,14 @@ static const struct hwmon_channel_info * const dell_smm_info[] = {
+ HWMON_F_INPUT | HWMON_F_LABEL | HWMON_F_MIN | HWMON_F_MAX |
+ HWMON_F_TARGET,
+ HWMON_F_INPUT | HWMON_F_LABEL | HWMON_F_MIN | HWMON_F_MAX |
++ HWMON_F_TARGET,
++ HWMON_F_INPUT | HWMON_F_LABEL | HWMON_F_MIN | HWMON_F_MAX |
+ HWMON_F_TARGET
+ ),
+ HWMON_CHANNEL_INFO(pwm,
+ HWMON_PWM_INPUT | HWMON_PWM_ENABLE,
+ HWMON_PWM_INPUT,
++ HWMON_PWM_INPUT,
+ HWMON_PWM_INPUT
+ ),
+ NULL
+diff --git a/drivers/hwmon/gpio-fan.c b/drivers/hwmon/gpio-fan.c
+index d92c536be9af78..b779240328d59f 100644
+--- a/drivers/hwmon/gpio-fan.c
++++ b/drivers/hwmon/gpio-fan.c
+@@ -393,7 +393,12 @@ static int gpio_fan_set_cur_state(struct thermal_cooling_device *cdev,
+ if (state >= fan_data->num_speed)
+ return -EINVAL;
+
++ mutex_lock(&fan_data->lock);
++
+ set_fan_speed(fan_data, state);
++
++ mutex_unlock(&fan_data->lock);
++
+ return 0;
+ }
+
+@@ -489,7 +494,11 @@ MODULE_DEVICE_TABLE(of, of_gpio_fan_match);
+
+ static void gpio_fan_stop(void *data)
+ {
++ struct gpio_fan_data *fan_data = data;
++
++ mutex_lock(&fan_data->lock);
+ set_fan_speed(data, 0);
++ mutex_unlock(&fan_data->lock);
+ }
+
+ static int gpio_fan_probe(struct platform_device *pdev)
+@@ -562,7 +571,9 @@ static int gpio_fan_suspend(struct device *dev)
+
+ if (fan_data->gpios) {
+ fan_data->resume_speed = fan_data->speed_index;
++ mutex_lock(&fan_data->lock);
+ set_fan_speed(fan_data, 0);
++ mutex_unlock(&fan_data->lock);
+ }
+
+ return 0;
+@@ -572,8 +583,11 @@ static int gpio_fan_resume(struct device *dev)
+ {
+ struct gpio_fan_data *fan_data = dev_get_drvdata(dev);
+
+- if (fan_data->gpios)
++ if (fan_data->gpios) {
++ mutex_lock(&fan_data->lock);
+ set_fan_speed(fan_data, fan_data->resume_speed);
++ mutex_unlock(&fan_data->lock);
++ }
+
+ return 0;
+ }
+diff --git a/drivers/hwmon/xgene-hwmon.c b/drivers/hwmon/xgene-hwmon.c
+index 7087197383c96c..2cdbd5f107a2cc 100644
+--- a/drivers/hwmon/xgene-hwmon.c
++++ b/drivers/hwmon/xgene-hwmon.c
+@@ -105,7 +105,7 @@ struct xgene_hwmon_dev {
+
+ phys_addr_t comm_base_addr;
+ void *pcc_comm_addr;
+- u64 usecs_lat;
++ unsigned int usecs_lat;
+ };
+
+ /*
+diff --git a/drivers/hwtracing/coresight/coresight-core.c b/drivers/hwtracing/coresight/coresight-core.c
+index 4936dc2f7a56b1..4fe837b02e314b 100644
+--- a/drivers/hwtracing/coresight/coresight-core.c
++++ b/drivers/hwtracing/coresight/coresight-core.c
+@@ -1249,7 +1249,7 @@ struct coresight_device *coresight_register(struct coresight_desc *desc)
+
+ if (csdev->type == CORESIGHT_DEV_TYPE_SINK ||
+ csdev->type == CORESIGHT_DEV_TYPE_LINKSINK) {
+- spin_lock_init(&csdev->perf_sink_id_map.lock);
++ raw_spin_lock_init(&csdev->perf_sink_id_map.lock);
+ csdev->perf_sink_id_map.cpu_map = alloc_percpu(atomic_t);
+ if (!csdev->perf_sink_id_map.cpu_map) {
+ kfree(csdev);
+diff --git a/drivers/hwtracing/coresight/coresight-etb10.c b/drivers/hwtracing/coresight/coresight-etb10.c
+index aea9ac9c4bd069..7948597d483d2b 100644
+--- a/drivers/hwtracing/coresight/coresight-etb10.c
++++ b/drivers/hwtracing/coresight/coresight-etb10.c
+@@ -84,7 +84,7 @@ struct etb_drvdata {
+ struct clk *atclk;
+ struct coresight_device *csdev;
+ struct miscdevice miscdev;
+- spinlock_t spinlock;
++ raw_spinlock_t spinlock;
+ local_t reading;
+ pid_t pid;
+ u8 *buf;
+@@ -145,7 +145,7 @@ static int etb_enable_sysfs(struct coresight_device *csdev)
+ unsigned long flags;
+ struct etb_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
+
+- spin_lock_irqsave(&drvdata->spinlock, flags);
++ raw_spin_lock_irqsave(&drvdata->spinlock, flags);
+
+ /* Don't messup with perf sessions. */
+ if (coresight_get_mode(csdev) == CS_MODE_PERF) {
+@@ -163,7 +163,7 @@ static int etb_enable_sysfs(struct coresight_device *csdev)
+
+ csdev->refcnt++;
+ out:
+- spin_unlock_irqrestore(&drvdata->spinlock, flags);
++ raw_spin_unlock_irqrestore(&drvdata->spinlock, flags);
+ return ret;
+ }
+
+@@ -176,7 +176,7 @@ static int etb_enable_perf(struct coresight_device *csdev, void *data)
+ struct perf_output_handle *handle = data;
+ struct cs_buffers *buf = etm_perf_sink_config(handle);
+
+- spin_lock_irqsave(&drvdata->spinlock, flags);
++ raw_spin_lock_irqsave(&drvdata->spinlock, flags);
+
+ /* No need to continue if the component is already in used by sysFS. */
+ if (coresight_get_mode(drvdata->csdev) == CS_MODE_SYSFS) {
+@@ -219,7 +219,7 @@ static int etb_enable_perf(struct coresight_device *csdev, void *data)
+ }
+
+ out:
+- spin_unlock_irqrestore(&drvdata->spinlock, flags);
++ raw_spin_unlock_irqrestore(&drvdata->spinlock, flags);
+ return ret;
+ }
+
+@@ -352,11 +352,11 @@ static int etb_disable(struct coresight_device *csdev)
+ struct etb_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
+ unsigned long flags;
+
+- spin_lock_irqsave(&drvdata->spinlock, flags);
++ raw_spin_lock_irqsave(&drvdata->spinlock, flags);
+
+ csdev->refcnt--;
+ if (csdev->refcnt) {
+- spin_unlock_irqrestore(&drvdata->spinlock, flags);
++ raw_spin_unlock_irqrestore(&drvdata->spinlock, flags);
+ return -EBUSY;
+ }
+
+@@ -366,7 +366,7 @@ static int etb_disable(struct coresight_device *csdev)
+ /* Dissociate from monitored process. */
+ drvdata->pid = -1;
+ coresight_set_mode(csdev, CS_MODE_DISABLED);
+- spin_unlock_irqrestore(&drvdata->spinlock, flags);
++ raw_spin_unlock_irqrestore(&drvdata->spinlock, flags);
+
+ dev_dbg(&csdev->dev, "ETB disabled\n");
+ return 0;
+@@ -443,7 +443,7 @@ static unsigned long etb_update_buffer(struct coresight_device *csdev,
+
+ capacity = drvdata->buffer_depth * ETB_FRAME_SIZE_WORDS;
+
+- spin_lock_irqsave(&drvdata->spinlock, flags);
++ raw_spin_lock_irqsave(&drvdata->spinlock, flags);
+
+ /* Don't do anything if another tracer is using this sink */
+ if (csdev->refcnt != 1)
+@@ -566,7 +566,7 @@ static unsigned long etb_update_buffer(struct coresight_device *csdev,
+ __etb_enable_hw(drvdata);
+ CS_LOCK(drvdata->base);
+ out:
+- spin_unlock_irqrestore(&drvdata->spinlock, flags);
++ raw_spin_unlock_irqrestore(&drvdata->spinlock, flags);
+
+ return to_read;
+ }
+@@ -587,13 +587,13 @@ static void etb_dump(struct etb_drvdata *drvdata)
+ {
+ unsigned long flags;
+
+- spin_lock_irqsave(&drvdata->spinlock, flags);
++ raw_spin_lock_irqsave(&drvdata->spinlock, flags);
+ if (coresight_get_mode(drvdata->csdev) == CS_MODE_SYSFS) {
+ __etb_disable_hw(drvdata);
+ etb_dump_hw(drvdata);
+ __etb_enable_hw(drvdata);
+ }
+- spin_unlock_irqrestore(&drvdata->spinlock, flags);
++ raw_spin_unlock_irqrestore(&drvdata->spinlock, flags);
+
+ dev_dbg(&drvdata->csdev->dev, "ETB dumped\n");
+ }
+@@ -746,7 +746,7 @@ static int etb_probe(struct amba_device *adev, const struct amba_id *id)
+ drvdata->base = base;
+ desc.access = CSDEV_ACCESS_IOMEM(base);
+
+- spin_lock_init(&drvdata->spinlock);
++ raw_spin_lock_init(&drvdata->spinlock);
+
+ drvdata->buffer_depth = etb_get_buffer_depth(drvdata);
+
+diff --git a/drivers/hwtracing/coresight/coresight-trace-id.c b/drivers/hwtracing/coresight/coresight-trace-id.c
+index 378af743be4555..7ed337d54d3e3d 100644
+--- a/drivers/hwtracing/coresight/coresight-trace-id.c
++++ b/drivers/hwtracing/coresight/coresight-trace-id.c
+@@ -22,7 +22,7 @@ enum trace_id_flags {
+ static DEFINE_PER_CPU(atomic_t, id_map_default_cpu_ids) = ATOMIC_INIT(0);
+ static struct coresight_trace_id_map id_map_default = {
+ .cpu_map = &id_map_default_cpu_ids,
+- .lock = __SPIN_LOCK_UNLOCKED(id_map_default.lock)
++ .lock = __RAW_SPIN_LOCK_UNLOCKED(id_map_default.lock)
+ };
+
+ /* #define TRACE_ID_DEBUG 1 */
+@@ -131,11 +131,11 @@ static void coresight_trace_id_release_all(struct coresight_trace_id_map *id_map
+ unsigned long flags;
+ int cpu;
+
+- spin_lock_irqsave(&id_map->lock, flags);
++ raw_spin_lock_irqsave(&id_map->lock, flags);
+ bitmap_zero(id_map->used_ids, CORESIGHT_TRACE_IDS_MAX);
+ for_each_possible_cpu(cpu)
+ atomic_set(per_cpu_ptr(id_map->cpu_map, cpu), 0);
+- spin_unlock_irqrestore(&id_map->lock, flags);
++ raw_spin_unlock_irqrestore(&id_map->lock, flags);
+ DUMP_ID_MAP(id_map);
+ }
+
+@@ -144,7 +144,7 @@ static int _coresight_trace_id_get_cpu_id(int cpu, struct coresight_trace_id_map
+ unsigned long flags;
+ int id;
+
+- spin_lock_irqsave(&id_map->lock, flags);
++ raw_spin_lock_irqsave(&id_map->lock, flags);
+
+ /* check for existing allocation for this CPU */
+ id = _coresight_trace_id_read_cpu_id(cpu, id_map);
+@@ -171,7 +171,7 @@ static int _coresight_trace_id_get_cpu_id(int cpu, struct coresight_trace_id_map
+ atomic_set(per_cpu_ptr(id_map->cpu_map, cpu), id);
+
+ get_cpu_id_out_unlock:
+- spin_unlock_irqrestore(&id_map->lock, flags);
++ raw_spin_unlock_irqrestore(&id_map->lock, flags);
+
+ DUMP_ID_CPU(cpu, id);
+ DUMP_ID_MAP(id_map);
+@@ -188,12 +188,12 @@ static void _coresight_trace_id_put_cpu_id(int cpu, struct coresight_trace_id_ma
+ if (!id)
+ return;
+
+- spin_lock_irqsave(&id_map->lock, flags);
++ raw_spin_lock_irqsave(&id_map->lock, flags);
+
+ coresight_trace_id_free(id, id_map);
+ atomic_set(per_cpu_ptr(id_map->cpu_map, cpu), 0);
+
+- spin_unlock_irqrestore(&id_map->lock, flags);
++ raw_spin_unlock_irqrestore(&id_map->lock, flags);
+ DUMP_ID_CPU(cpu, id);
+ DUMP_ID_MAP(id_map);
+ }
+@@ -204,9 +204,9 @@ static int coresight_trace_id_map_get_system_id(struct coresight_trace_id_map *i
+ unsigned long flags;
+ int id;
+
+- spin_lock_irqsave(&id_map->lock, flags);
++ raw_spin_lock_irqsave(&id_map->lock, flags);
+ id = coresight_trace_id_alloc_new_id(id_map, preferred_id, traceid_flags);
+- spin_unlock_irqrestore(&id_map->lock, flags);
++ raw_spin_unlock_irqrestore(&id_map->lock, flags);
+
+ DUMP_ID(id);
+ DUMP_ID_MAP(id_map);
+@@ -217,9 +217,9 @@ static void coresight_trace_id_map_put_system_id(struct coresight_trace_id_map *
+ {
+ unsigned long flags;
+
+- spin_lock_irqsave(&id_map->lock, flags);
++ raw_spin_lock_irqsave(&id_map->lock, flags);
+ coresight_trace_id_free(id, id_map);
+- spin_unlock_irqrestore(&id_map->lock, flags);
++ raw_spin_unlock_irqrestore(&id_map->lock, flags);
+
+ DUMP_ID(id);
+ DUMP_ID_MAP(id_map);
+diff --git a/drivers/hwtracing/intel_th/Kconfig b/drivers/hwtracing/intel_th/Kconfig
+index 4b6359326ede99..4f7d2b6d79e294 100644
+--- a/drivers/hwtracing/intel_th/Kconfig
++++ b/drivers/hwtracing/intel_th/Kconfig
+@@ -60,6 +60,7 @@ config INTEL_TH_STH
+
+ config INTEL_TH_MSU
+ tristate "Intel(R) Trace Hub Memory Storage Unit"
++ depends on MMU
+ help
+ Memory Storage Unit (MSU) trace output device enables
+ storing STP traces to system memory. It supports single
+diff --git a/drivers/hwtracing/intel_th/msu.c b/drivers/hwtracing/intel_th/msu.c
+index bf99d79a419204..7163950eb3719c 100644
+--- a/drivers/hwtracing/intel_th/msu.c
++++ b/drivers/hwtracing/intel_th/msu.c
+@@ -19,6 +19,7 @@
+ #include <linux/io.h>
+ #include <linux/workqueue.h>
+ #include <linux/dma-mapping.h>
++#include <linux/pfn_t.h>
+
+ #ifdef CONFIG_X86
+ #include <asm/set_memory.h>
+@@ -976,7 +977,6 @@ static void msc_buffer_contig_free(struct msc *msc)
+ for (off = 0; off < msc->nr_pages << PAGE_SHIFT; off += PAGE_SIZE) {
+ struct page *page = virt_to_page(msc->base + off);
+
+- page->mapping = NULL;
+ __free_page(page);
+ }
+
+@@ -1158,9 +1158,6 @@ static void __msc_buffer_win_free(struct msc *msc, struct msc_window *win)
+ int i;
+
+ for_each_sg(win->sgt->sgl, sg, win->nr_segs, i) {
+- struct page *page = msc_sg_page(sg);
+-
+- page->mapping = NULL;
+ dma_free_coherent(msc_dev(win->msc)->parent->parent, PAGE_SIZE,
+ sg_virt(sg), sg_dma_address(sg));
+ }
+@@ -1601,22 +1598,10 @@ static void msc_mmap_close(struct vm_area_struct *vma)
+ {
+ struct msc_iter *iter = vma->vm_file->private_data;
+ struct msc *msc = iter->msc;
+- unsigned long pg;
+
+ if (!atomic_dec_and_mutex_lock(&msc->mmap_count, &msc->buf_mutex))
+ return;
+
+- /* drop page _refcounts */
+- for (pg = 0; pg < msc->nr_pages; pg++) {
+- struct page *page = msc_buffer_get_page(msc, pg);
+-
+- if (WARN_ON_ONCE(!page))
+- continue;
+-
+- if (page->mapping)
+- page->mapping = NULL;
+- }
+-
+ /* last mapping -- drop user_count */
+ atomic_dec(&msc->user_count);
+ mutex_unlock(&msc->buf_mutex);
+@@ -1626,16 +1611,14 @@ static vm_fault_t msc_mmap_fault(struct vm_fault *vmf)
+ {
+ struct msc_iter *iter = vmf->vma->vm_file->private_data;
+ struct msc *msc = iter->msc;
++ struct page *page;
+
+- vmf->page = msc_buffer_get_page(msc, vmf->pgoff);
+- if (!vmf->page)
++ page = msc_buffer_get_page(msc, vmf->pgoff);
++ if (!page)
+ return VM_FAULT_SIGBUS;
+
+- get_page(vmf->page);
+- vmf->page->mapping = vmf->vma->vm_file->f_mapping;
+- vmf->page->index = vmf->pgoff;
+-
+- return 0;
++ get_page(page);
++ return vmf_insert_mixed(vmf->vma, vmf->address, page_to_pfn_t(page));
+ }
+
+ static const struct vm_operations_struct msc_mmap_ops = {
+@@ -1676,7 +1659,7 @@ static int intel_th_msc_mmap(struct file *file, struct vm_area_struct *vma)
+ atomic_dec(&msc->user_count);
+
+ vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
+- vm_flags_set(vma, VM_DONTEXPAND | VM_DONTCOPY);
++ vm_flags_set(vma, VM_DONTEXPAND | VM_DONTCOPY | VM_MIXEDMAP);
+ vma->vm_ops = &msc_mmap_ops;
+ return ret;
+ }
+diff --git a/drivers/i2c/busses/i2c-amd-asf-plat.c b/drivers/i2c/busses/i2c-amd-asf-plat.c
+index 93ebec162c6ddb..61bc714c28d70b 100644
+--- a/drivers/i2c/busses/i2c-amd-asf-plat.c
++++ b/drivers/i2c/busses/i2c-amd-asf-plat.c
+@@ -69,7 +69,7 @@ static void amd_asf_process_target(struct work_struct *work)
+ /* Check if no error bits are set in target status register */
+ if (reg & ASF_ERROR_STATUS) {
+ /* Set bank as full */
+- cmd = 0;
++ cmd = 1;
+ reg |= GENMASK(3, 2);
+ outb_p(reg, ASFDATABNKSEL);
+ } else {
+diff --git a/drivers/i2c/busses/i2c-pxa.c b/drivers/i2c/busses/i2c-pxa.c
+index cb69884826739d..4415a29f749b92 100644
+--- a/drivers/i2c/busses/i2c-pxa.c
++++ b/drivers/i2c/busses/i2c-pxa.c
+@@ -1503,7 +1503,10 @@ static int i2c_pxa_probe(struct platform_device *dev)
+ i2c->adap.name);
+ }
+
+- clk_prepare_enable(i2c->clk);
++ ret = clk_prepare_enable(i2c->clk);
++ if (ret)
++ return dev_err_probe(&dev->dev, ret,
++ "failed to enable clock\n");
+
+ if (i2c->use_pio) {
+ i2c->adap.algo = &i2c_pxa_pio_algorithm;
+diff --git a/drivers/i2c/busses/i2c-qcom-geni.c b/drivers/i2c/busses/i2c-qcom-geni.c
+index 7bbd478171e02c..515a784c951ca6 100644
+--- a/drivers/i2c/busses/i2c-qcom-geni.c
++++ b/drivers/i2c/busses/i2c-qcom-geni.c
+@@ -148,9 +148,9 @@ struct geni_i2c_clk_fld {
+ * source_clock = 19.2 MHz
+ */
+ static const struct geni_i2c_clk_fld geni_i2c_clk_map_19p2mhz[] = {
+- {KHZ(100), 7, 10, 11, 26},
+- {KHZ(400), 2, 5, 12, 24},
+- {KHZ(1000), 1, 3, 9, 18},
++ {KHZ(100), 7, 10, 12, 26},
++ {KHZ(400), 2, 5, 11, 22},
++ {KHZ(1000), 1, 2, 8, 18},
+ {},
+ };
+
+diff --git a/drivers/i2c/busses/i2c-qup.c b/drivers/i2c/busses/i2c-qup.c
+index da20b4487c9a54..3a36d682ed5726 100644
+--- a/drivers/i2c/busses/i2c-qup.c
++++ b/drivers/i2c/busses/i2c-qup.c
+@@ -14,6 +14,7 @@
+ #include <linux/dma-mapping.h>
+ #include <linux/err.h>
+ #include <linux/i2c.h>
++#include <linux/interconnect.h>
+ #include <linux/interrupt.h>
+ #include <linux/io.h>
+ #include <linux/module.h>
+@@ -150,6 +151,8 @@
+ /* TAG length for DATA READ in RX FIFO */
+ #define READ_RX_TAGS_LEN 2
+
++#define QUP_BUS_WIDTH 8
++
+ static unsigned int scl_freq;
+ module_param_named(scl_freq, scl_freq, uint, 0444);
+ MODULE_PARM_DESC(scl_freq, "SCL frequency override");
+@@ -227,6 +230,7 @@ struct qup_i2c_dev {
+ int irq;
+ struct clk *clk;
+ struct clk *pclk;
++ struct icc_path *icc_path;
+ struct i2c_adapter adap;
+
+ int clk_ctl;
+@@ -255,6 +259,10 @@ struct qup_i2c_dev {
+ /* To configure when bus is in run state */
+ u32 config_run;
+
++ /* bandwidth votes */
++ u32 src_clk_freq;
++ u32 cur_bw_clk_freq;
++
+ /* dma parameters */
+ bool is_dma;
+ /* To check if the current transfer is using DMA */
+@@ -453,6 +461,23 @@ static int qup_i2c_bus_active(struct qup_i2c_dev *qup, int len)
+ return ret;
+ }
+
++static int qup_i2c_vote_bw(struct qup_i2c_dev *qup, u32 clk_freq)
++{
++ u32 needed_peak_bw;
++ int ret;
++
++ if (qup->cur_bw_clk_freq == clk_freq)
++ return 0;
++
++ needed_peak_bw = Bps_to_icc(clk_freq * QUP_BUS_WIDTH);
++ ret = icc_set_bw(qup->icc_path, 0, needed_peak_bw);
++ if (ret)
++ return ret;
++
++ qup->cur_bw_clk_freq = clk_freq;
++ return 0;
++}
++
+ static void qup_i2c_write_tx_fifo_v1(struct qup_i2c_dev *qup)
+ {
+ struct qup_i2c_block *blk = &qup->blk;
+@@ -838,6 +863,10 @@ static int qup_i2c_bam_xfer(struct i2c_adapter *adap, struct i2c_msg *msg,
+ int ret = 0;
+ int idx = 0;
+
++ ret = qup_i2c_vote_bw(qup, qup->src_clk_freq);
++ if (ret)
++ return ret;
++
+ enable_irq(qup->irq);
+ ret = qup_i2c_req_dma(qup);
+
+@@ -1643,6 +1672,7 @@ static void qup_i2c_disable_clocks(struct qup_i2c_dev *qup)
+ config = readl(qup->base + QUP_CONFIG);
+ config |= QUP_CLOCK_AUTO_GATE;
+ writel(config, qup->base + QUP_CONFIG);
++ qup_i2c_vote_bw(qup, 0);
+ clk_disable_unprepare(qup->pclk);
+ }
+
+@@ -1743,6 +1773,11 @@ static int qup_i2c_probe(struct platform_device *pdev)
+ goto fail_dma;
+ }
+ qup->is_dma = true;
++
++ qup->icc_path = devm_of_icc_get(&pdev->dev, NULL);
++ if (IS_ERR(qup->icc_path))
++ return dev_err_probe(&pdev->dev, PTR_ERR(qup->icc_path),
++ "failed to get interconnect path\n");
+ }
+
+ nodma:
+@@ -1791,6 +1826,7 @@ static int qup_i2c_probe(struct platform_device *pdev)
+ qup_i2c_enable_clocks(qup);
+ src_clk_freq = clk_get_rate(qup->clk);
+ }
++ qup->src_clk_freq = src_clk_freq;
+
+ /*
+ * Bootloaders might leave a pending interrupt on certain QUP's,
+diff --git a/drivers/i3c/master/svc-i3c-master.c b/drivers/i3c/master/svc-i3c-master.c
+index 0fc03bb5d0a6ee..75127b6c161f09 100644
+--- a/drivers/i3c/master/svc-i3c-master.c
++++ b/drivers/i3c/master/svc-i3c-master.c
+@@ -551,6 +551,8 @@ static void svc_i3c_master_ibi_work(struct work_struct *work)
+ queue_work(master->base.wq, &master->hj_work);
+ break;
+ case SVC_I3C_MSTATUS_IBITYPE_MASTER_REQUEST:
++ svc_i3c_master_emit_stop(master);
++ break;
+ default:
+ break;
+ }
+@@ -898,6 +900,8 @@ static int svc_i3c_master_do_daa_locked(struct svc_i3c_master *master,
+ u32 reg;
+ int ret, i;
+
++ svc_i3c_master_flush_fifo(master);
++
+ while (true) {
+ /* clean SVC_I3C_MINT_IBIWON w1c bits */
+ writel(SVC_I3C_MINT_IBIWON, master->regs + SVC_I3C_MSTATUS);
+diff --git a/drivers/iio/accel/fxls8962af-core.c b/drivers/iio/accel/fxls8962af-core.c
+index 987212a7c038ec..a0ae30c86687af 100644
+--- a/drivers/iio/accel/fxls8962af-core.c
++++ b/drivers/iio/accel/fxls8962af-core.c
+@@ -1229,8 +1229,11 @@ int fxls8962af_core_probe(struct device *dev, struct regmap *regmap, int irq)
+ if (ret)
+ return ret;
+
+- if (device_property_read_bool(dev, "wakeup-source"))
+- device_init_wakeup(dev, true);
++ if (device_property_read_bool(dev, "wakeup-source")) {
++ ret = devm_device_init_wakeup(dev);
++ if (ret)
++ return dev_err_probe(dev, ret, "Failed to init wakeup\n");
++ }
+
+ return devm_iio_device_register(dev, indio_dev);
+ }
+diff --git a/drivers/iio/adc/ad7606.c b/drivers/iio/adc/ad7606.c
+index 0339e27f92c323..1c547e8d52ac06 100644
+--- a/drivers/iio/adc/ad7606.c
++++ b/drivers/iio/adc/ad7606.c
+@@ -862,7 +862,12 @@ static int ad7606_write_raw(struct iio_dev *indio_dev,
+ }
+ val = (val * MICRO) + val2;
+ i = find_closest(val, scale_avail_uv, cs->num_scales);
++
++ ret = iio_device_claim_direct_mode(indio_dev);
++ if (ret < 0)
++ return ret;
+ ret = st->write_scale(indio_dev, ch, i + cs->reg_offset);
++ iio_device_release_direct_mode(indio_dev);
+ if (ret < 0)
+ return ret;
+ cs->range = i;
+@@ -873,7 +878,12 @@ static int ad7606_write_raw(struct iio_dev *indio_dev,
+ return -EINVAL;
+ i = find_closest(val, st->oversampling_avail,
+ st->num_os_ratios);
++
++ ret = iio_device_claim_direct_mode(indio_dev);
++ if (ret < 0)
++ return ret;
+ ret = st->write_os(indio_dev, i);
++ iio_device_release_direct_mode(indio_dev);
+ if (ret < 0)
+ return ret;
+ st->oversampling = st->oversampling_avail[i];
+diff --git a/drivers/iio/adc/ad7944.c b/drivers/iio/adc/ad7944.c
+index 0ec9cda10f5f8f..abfababcea1015 100644
+--- a/drivers/iio/adc/ad7944.c
++++ b/drivers/iio/adc/ad7944.c
+@@ -98,6 +98,9 @@ struct ad7944_chip_info {
+ const struct iio_chan_spec channels[2];
+ };
+
++/* get number of bytes for SPI xfer */
++#define AD7944_SPI_BYTES(scan_type) ((scan_type).realbits > 16 ? 4 : 2)
++
+ /*
+ * AD7944_DEFINE_CHIP_INFO - Define a chip info structure for a specific chip
+ * @_name: The name of the chip
+@@ -164,7 +167,7 @@ static int ad7944_3wire_cs_mode_init_msg(struct device *dev, struct ad7944_adc *
+
+ /* Then we can read the data during the acquisition phase */
+ xfers[2].rx_buf = &adc->sample.raw;
+- xfers[2].len = BITS_TO_BYTES(chan->scan_type.storagebits);
++ xfers[2].len = AD7944_SPI_BYTES(chan->scan_type);
+ xfers[2].bits_per_word = chan->scan_type.realbits;
+
+ spi_message_init_with_transfers(&adc->msg, xfers, 3);
+@@ -193,7 +196,7 @@ static int ad7944_4wire_mode_init_msg(struct device *dev, struct ad7944_adc *adc
+ xfers[0].delay.unit = SPI_DELAY_UNIT_NSECS;
+
+ xfers[1].rx_buf = &adc->sample.raw;
+- xfers[1].len = BITS_TO_BYTES(chan->scan_type.storagebits);
++ xfers[1].len = AD7944_SPI_BYTES(chan->scan_type);
+ xfers[1].bits_per_word = chan->scan_type.realbits;
+
+ spi_message_init_with_transfers(&adc->msg, xfers, 2);
+@@ -228,7 +231,7 @@ static int ad7944_chain_mode_init_msg(struct device *dev, struct ad7944_adc *adc
+ xfers[0].delay.unit = SPI_DELAY_UNIT_NSECS;
+
+ xfers[1].rx_buf = adc->chain_mode_buf;
+- xfers[1].len = BITS_TO_BYTES(chan->scan_type.storagebits) * n_chain_dev;
++ xfers[1].len = AD7944_SPI_BYTES(chan->scan_type) * n_chain_dev;
+ xfers[1].bits_per_word = chan->scan_type.realbits;
+
+ spi_message_init_with_transfers(&adc->msg, xfers, 2);
+@@ -274,12 +277,12 @@ static int ad7944_single_conversion(struct ad7944_adc *adc,
+ return ret;
+
+ if (adc->spi_mode == AD7944_SPI_MODE_CHAIN) {
+- if (chan->scan_type.storagebits > 16)
++ if (chan->scan_type.realbits > 16)
+ *val = ((u32 *)adc->chain_mode_buf)[chan->scan_index];
+ else
+ *val = ((u16 *)adc->chain_mode_buf)[chan->scan_index];
+ } else {
+- if (chan->scan_type.storagebits > 16)
++ if (chan->scan_type.realbits > 16)
+ *val = adc->sample.raw.u32;
+ else
+ *val = adc->sample.raw.u16;
+@@ -409,8 +412,7 @@ static int ad7944_chain_mode_alloc(struct device *dev,
+ /* 1 word for each voltage channel + aligned u64 for timestamp */
+
+ chain_mode_buf_size = ALIGN(n_chain_dev *
+- BITS_TO_BYTES(chan[0].scan_type.storagebits), sizeof(u64))
+- + sizeof(u64);
++ AD7944_SPI_BYTES(chan[0].scan_type), sizeof(u64)) + sizeof(u64);
+ buf = devm_kzalloc(dev, chain_mode_buf_size, GFP_KERNEL);
+ if (!buf)
+ return -ENOMEM;
+diff --git a/drivers/iio/adc/qcom-spmi-iadc.c b/drivers/iio/adc/qcom-spmi-iadc.c
+index 7fb8b2499a1d00..b64a8a407168bb 100644
+--- a/drivers/iio/adc/qcom-spmi-iadc.c
++++ b/drivers/iio/adc/qcom-spmi-iadc.c
+@@ -543,7 +543,9 @@ static int iadc_probe(struct platform_device *pdev)
+ else
+ return ret;
+ } else {
+- device_init_wakeup(iadc->dev, 1);
++ ret = devm_device_init_wakeup(iadc->dev);
++ if (ret)
++ return dev_err_probe(iadc->dev, ret, "Failed to init wakeup\n");
+ }
+
+ ret = iadc_update_offset(iadc);
+diff --git a/drivers/iio/dac/ad3552r-hs.c b/drivers/iio/dac/ad3552r-hs.c
+index 8974df62567081..67957fc21696ad 100644
+--- a/drivers/iio/dac/ad3552r-hs.c
++++ b/drivers/iio/dac/ad3552r-hs.c
+@@ -137,13 +137,20 @@ static int ad3552r_hs_buffer_postenable(struct iio_dev *indio_dev)
+ if (ret)
+ return ret;
+
++ /* Primary region access, set streaming mode (now in SPI + SDR). */
++ ret = ad3552r_qspi_update_reg_bits(st,
++ AD3552R_REG_ADDR_INTERFACE_CONFIG_B,
++ AD3552R_MASK_SINGLE_INST, 0, 1);
++ if (ret)
++ return ret;
++
+ /* Inform DAC chip to switch into DDR mode */
+ ret = ad3552r_qspi_update_reg_bits(st,
+ AD3552R_REG_ADDR_INTERFACE_CONFIG_D,
+ AD3552R_MASK_SPI_CONFIG_DDR,
+ AD3552R_MASK_SPI_CONFIG_DDR, 1);
+ if (ret)
+- return ret;
++ goto exit_err_ddr;
+
+ /* Inform DAC IP to go for DDR mode from now on */
+ ret = iio_backend_ddr_enable(st->back);
+@@ -174,6 +181,11 @@ static int ad3552r_hs_buffer_postenable(struct iio_dev *indio_dev)
+
+ iio_backend_ddr_disable(st->back);
+
++exit_err_ddr:
++ ad3552r_qspi_update_reg_bits(st, AD3552R_REG_ADDR_INTERFACE_CONFIG_B,
++ AD3552R_MASK_SINGLE_INST,
++ AD3552R_MASK_SINGLE_INST, 1);
++
+ return ret;
+ }
+
+@@ -198,6 +210,14 @@ static int ad3552r_hs_buffer_predisable(struct iio_dev *indio_dev)
+ if (ret)
+ return ret;
+
++ /* Back to single instruction mode, disabling loop. */
++ ret = ad3552r_qspi_update_reg_bits(st,
++ AD3552R_REG_ADDR_INTERFACE_CONFIG_B,
++ AD3552R_MASK_SINGLE_INST,
++ AD3552R_MASK_SINGLE_INST, 1);
++ if (ret)
++ return ret;
++
+ return 0;
+ }
+
+@@ -308,6 +328,13 @@ static int ad3552r_hs_setup(struct ad3552r_hs_state *st)
+ if (ret)
+ return ret;
+
++ ret = st->data->bus_reg_write(st->back,
++ AD3552R_REG_ADDR_INTERFACE_CONFIG_B,
++ AD3552R_MASK_SINGLE_INST |
++ AD3552R_MASK_SHORT_INSTRUCTION, 1);
++ if (ret)
++ return ret;
++
+ ret = ad3552r_hs_scratch_pad_test(st);
+ if (ret)
+ return ret;
+diff --git a/drivers/iio/dac/ad3552r-hs.h b/drivers/iio/dac/ad3552r-hs.h
+index 724261d38dea3f..4a9e3523412443 100644
+--- a/drivers/iio/dac/ad3552r-hs.h
++++ b/drivers/iio/dac/ad3552r-hs.h
+@@ -8,11 +8,19 @@
+
+ struct iio_backend;
+
++enum ad3552r_io_mode {
++ AD3552R_IO_MODE_SPI,
++ AD3552R_IO_MODE_DSPI,
++ AD3552R_IO_MODE_QSPI,
++};
++
+ struct ad3552r_hs_platform_data {
+ int (*bus_reg_read)(struct iio_backend *back, u32 reg, u32 *val,
+ size_t data_size);
+ int (*bus_reg_write)(struct iio_backend *back, u32 reg, u32 val,
+ size_t data_size);
++ int (*bus_set_io_mode)(struct iio_backend *back,
++ enum ad3552r_io_mode mode);
+ u32 bus_sample_data_clock_hz;
+ };
+
+diff --git a/drivers/iio/dac/adi-axi-dac.c b/drivers/iio/dac/adi-axi-dac.c
+index ac871deb8063cd..bcaf365feef428 100644
+--- a/drivers/iio/dac/adi-axi-dac.c
++++ b/drivers/iio/dac/adi-axi-dac.c
+@@ -64,7 +64,7 @@
+ #define AXI_DAC_UI_STATUS_IF_BUSY BIT(4)
+ #define AXI_DAC_CUSTOM_CTRL_REG 0x008C
+ #define AXI_DAC_CUSTOM_CTRL_ADDRESS GENMASK(31, 24)
+-#define AXI_DAC_CUSTOM_CTRL_SYNCED_TRANSFER BIT(2)
++#define AXI_DAC_CUSTOM_CTRL_MULTI_IO_MODE GENMASK(3, 2)
+ #define AXI_DAC_CUSTOM_CTRL_STREAM BIT(1)
+ #define AXI_DAC_CUSTOM_CTRL_TRANSFER_DATA BIT(0)
+
+@@ -722,6 +722,25 @@ static int axi_dac_bus_reg_read(struct iio_backend *back, u32 reg, u32 *val,
+ return regmap_read(st->regmap, AXI_DAC_CUSTOM_RD_REG, val);
+ }
+
++static int axi_dac_bus_set_io_mode(struct iio_backend *back,
++ enum ad3552r_io_mode mode)
++{
++ struct axi_dac_state *st = iio_backend_get_priv(back);
++ int ival, ret;
++
++ guard(mutex)(&st->lock);
++
++ ret = regmap_update_bits(st->regmap, AXI_DAC_CUSTOM_CTRL_REG,
++ AXI_DAC_CUSTOM_CTRL_MULTI_IO_MODE,
++ FIELD_PREP(AXI_DAC_CUSTOM_CTRL_MULTI_IO_MODE, mode));
++ if (ret)
++ return ret;
++
++ return regmap_read_poll_timeout(st->regmap, AXI_DAC_UI_STATUS_REG, ival,
++ FIELD_GET(AXI_DAC_UI_STATUS_IF_BUSY, ival) == 0, 10,
++ 100 * KILO);
++}
++
+ static void axi_dac_child_remove(void *data)
+ {
+ platform_device_unregister(data);
+@@ -733,6 +752,7 @@ static int axi_dac_create_platform_device(struct axi_dac_state *st,
+ struct ad3552r_hs_platform_data pdata = {
+ .bus_reg_read = axi_dac_bus_reg_read,
+ .bus_reg_write = axi_dac_bus_reg_write,
++ .bus_set_io_mode = axi_dac_bus_set_io_mode,
+ .bus_sample_data_clock_hz = st->dac_clk_rate,
+ };
+ struct platform_device_info pi = {
+diff --git a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c
+index 4fdcc2acc94ed0..96c6106b95eef6 100644
+--- a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c
++++ b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c
+@@ -2719,8 +2719,11 @@ int st_lsm6dsx_probe(struct device *dev, int irq, int hw_id,
+ }
+
+ if (device_property_read_bool(dev, "wakeup-source") ||
+- (pdata && pdata->wakeup_source))
+- device_init_wakeup(dev, true);
++ (pdata && pdata->wakeup_source)) {
++ err = devm_device_init_wakeup(dev);
++ if (err)
++ return dev_err_probe(dev, err, "Failed to init wakeup\n");
++ }
+
+ return 0;
+ }
+diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
+index 07c571c7b69992..c5b68639476058 100644
+--- a/drivers/infiniband/core/umem.c
++++ b/drivers/infiniband/core/umem.c
+@@ -80,9 +80,12 @@ unsigned long ib_umem_find_best_pgsz(struct ib_umem *umem,
+ unsigned long pgsz_bitmap,
+ unsigned long virt)
+ {
+- struct scatterlist *sg;
++ unsigned long curr_len = 0;
++ dma_addr_t curr_base = ~0;
+ unsigned long va, pgoff;
++ struct scatterlist *sg;
+ dma_addr_t mask;
++ dma_addr_t end;
+ int i;
+
+ umem->iova = va = virt;
+@@ -107,17 +110,30 @@ unsigned long ib_umem_find_best_pgsz(struct ib_umem *umem,
+ pgoff = umem->address & ~PAGE_MASK;
+
+ for_each_sgtable_dma_sg(&umem->sgt_append.sgt, sg, i) {
+- /* Walk SGL and reduce max page size if VA/PA bits differ
+- * for any address.
++ /* If the current entry is physically contiguous with the previous
++ * one, no need to take its start addresses into consideration.
+ */
+- mask |= (sg_dma_address(sg) + pgoff) ^ va;
++ if (check_add_overflow(curr_base, curr_len, &end) ||
++ end != sg_dma_address(sg)) {
++
++ curr_base = sg_dma_address(sg);
++ curr_len = 0;
++
++ /* Reduce max page size if VA/PA bits differ */
++ mask |= (curr_base + pgoff) ^ va;
++
++ /* The alignment of any VA matching a discontinuity point
++ * in the physical memory sets the maximum possible page
++ * size as this must be a starting point of a new page that
++ * needs to be aligned.
++ */
++ if (i != 0)
++ mask |= va;
++ }
++
++ curr_len += sg_dma_len(sg);
+ va += sg_dma_len(sg) - pgoff;
+- /* Except for the last entry, the ending iova alignment sets
+- * the maximum possible page size as the low bits of the iova
+- * must be zero when starting the next chunk.
+- */
+- if (i != (umem->sgt_append.sgt.nents - 1))
+- mask |= va;
++
+ pgoff = 0;
+ }
+
+diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c
+index 5ad14c39d48c98..de75dcc0947c7d 100644
+--- a/drivers/infiniband/core/uverbs_cmd.c
++++ b/drivers/infiniband/core/uverbs_cmd.c
+@@ -716,8 +716,8 @@ static int ib_uverbs_reg_mr(struct uverbs_attr_bundle *attrs)
+ goto err_free;
+
+ pd = uobj_get_obj_read(pd, UVERBS_OBJECT_PD, cmd.pd_handle, attrs);
+- if (!pd) {
+- ret = -EINVAL;
++ if (IS_ERR(pd)) {
++ ret = PTR_ERR(pd);
+ goto err_free;
+ }
+
+@@ -807,8 +807,8 @@ static int ib_uverbs_rereg_mr(struct uverbs_attr_bundle *attrs)
+ if (cmd.flags & IB_MR_REREG_PD) {
+ new_pd = uobj_get_obj_read(pd, UVERBS_OBJECT_PD, cmd.pd_handle,
+ attrs);
+- if (!new_pd) {
+- ret = -EINVAL;
++ if (IS_ERR(new_pd)) {
++ ret = PTR_ERR(new_pd);
+ goto put_uobjs;
+ }
+ } else {
+@@ -917,8 +917,8 @@ static int ib_uverbs_alloc_mw(struct uverbs_attr_bundle *attrs)
+ return PTR_ERR(uobj);
+
+ pd = uobj_get_obj_read(pd, UVERBS_OBJECT_PD, cmd.pd_handle, attrs);
+- if (!pd) {
+- ret = -EINVAL;
++ if (IS_ERR(pd)) {
++ ret = PTR_ERR(pd);
+ goto err_free;
+ }
+
+@@ -1125,8 +1125,8 @@ static int ib_uverbs_resize_cq(struct uverbs_attr_bundle *attrs)
+ return ret;
+
+ cq = uobj_get_obj_read(cq, UVERBS_OBJECT_CQ, cmd.cq_handle, attrs);
+- if (!cq)
+- return -EINVAL;
++ if (IS_ERR(cq))
++ return PTR_ERR(cq);
+
+ ret = cq->device->ops.resize_cq(cq, cmd.cqe, &attrs->driver_udata);
+ if (ret)
+@@ -1187,8 +1187,8 @@ static int ib_uverbs_poll_cq(struct uverbs_attr_bundle *attrs)
+ return ret;
+
+ cq = uobj_get_obj_read(cq, UVERBS_OBJECT_CQ, cmd.cq_handle, attrs);
+- if (!cq)
+- return -EINVAL;
++ if (IS_ERR(cq))
++ return PTR_ERR(cq);
+
+ /* we copy a struct ib_uverbs_poll_cq_resp to user space */
+ header_ptr = attrs->ucore.outbuf;
+@@ -1236,8 +1236,8 @@ static int ib_uverbs_req_notify_cq(struct uverbs_attr_bundle *attrs)
+ return ret;
+
+ cq = uobj_get_obj_read(cq, UVERBS_OBJECT_CQ, cmd.cq_handle, attrs);
+- if (!cq)
+- return -EINVAL;
++ if (IS_ERR(cq))
++ return PTR_ERR(cq);
+
+ ib_req_notify_cq(cq, cmd.solicited_only ?
+ IB_CQ_SOLICITED : IB_CQ_NEXT_COMP);
+@@ -1319,8 +1319,8 @@ static int create_qp(struct uverbs_attr_bundle *attrs,
+ ind_tbl = uobj_get_obj_read(rwq_ind_table,
+ UVERBS_OBJECT_RWQ_IND_TBL,
+ cmd->rwq_ind_tbl_handle, attrs);
+- if (!ind_tbl) {
+- ret = -EINVAL;
++ if (IS_ERR(ind_tbl)) {
++ ret = PTR_ERR(ind_tbl);
+ goto err_put;
+ }
+
+@@ -1358,8 +1358,10 @@ static int create_qp(struct uverbs_attr_bundle *attrs,
+ if (cmd->is_srq) {
+ srq = uobj_get_obj_read(srq, UVERBS_OBJECT_SRQ,
+ cmd->srq_handle, attrs);
+- if (!srq || srq->srq_type == IB_SRQT_XRC) {
+- ret = -EINVAL;
++ if (IS_ERR(srq) ||
++ srq->srq_type == IB_SRQT_XRC) {
++ ret = IS_ERR(srq) ? PTR_ERR(srq) :
++ -EINVAL;
+ goto err_put;
+ }
+ }
+@@ -1369,23 +1371,29 @@ static int create_qp(struct uverbs_attr_bundle *attrs,
+ rcq = uobj_get_obj_read(
+ cq, UVERBS_OBJECT_CQ,
+ cmd->recv_cq_handle, attrs);
+- if (!rcq) {
+- ret = -EINVAL;
++ if (IS_ERR(rcq)) {
++ ret = PTR_ERR(rcq);
+ goto err_put;
+ }
+ }
+ }
+ }
+
+- if (has_sq)
++ if (has_sq) {
+ scq = uobj_get_obj_read(cq, UVERBS_OBJECT_CQ,
+ cmd->send_cq_handle, attrs);
++ if (IS_ERR(scq)) {
++ ret = PTR_ERR(scq);
++ goto err_put;
++ }
++ }
++
+ if (!ind_tbl && cmd->qp_type != IB_QPT_XRC_INI)
+ rcq = rcq ?: scq;
+ pd = uobj_get_obj_read(pd, UVERBS_OBJECT_PD, cmd->pd_handle,
+ attrs);
+- if (!pd || (!scq && has_sq)) {
+- ret = -EINVAL;
++ if (IS_ERR(pd)) {
++ ret = PTR_ERR(pd);
+ goto err_put;
+ }
+
+@@ -1480,18 +1488,18 @@ static int create_qp(struct uverbs_attr_bundle *attrs,
+ err_put:
+ if (!IS_ERR(xrcd_uobj))
+ uobj_put_read(xrcd_uobj);
+- if (pd)
++ if (!IS_ERR_OR_NULL(pd))
+ uobj_put_obj_read(pd);
+- if (scq)
++ if (!IS_ERR_OR_NULL(scq))
+ rdma_lookup_put_uobject(&scq->uobject->uevent.uobject,
+ UVERBS_LOOKUP_READ);
+- if (rcq && rcq != scq)
++ if (!IS_ERR_OR_NULL(rcq) && rcq != scq)
+ rdma_lookup_put_uobject(&rcq->uobject->uevent.uobject,
+ UVERBS_LOOKUP_READ);
+- if (srq)
++ if (!IS_ERR_OR_NULL(srq))
+ rdma_lookup_put_uobject(&srq->uobject->uevent.uobject,
+ UVERBS_LOOKUP_READ);
+- if (ind_tbl)
++ if (!IS_ERR_OR_NULL(ind_tbl))
+ uobj_put_obj_read(ind_tbl);
+
+ uobj_alloc_abort(&obj->uevent.uobject, attrs);
+@@ -1653,8 +1661,8 @@ static int ib_uverbs_query_qp(struct uverbs_attr_bundle *attrs)
+ }
+
+ qp = uobj_get_obj_read(qp, UVERBS_OBJECT_QP, cmd.qp_handle, attrs);
+- if (!qp) {
+- ret = -EINVAL;
++ if (IS_ERR(qp)) {
++ ret = PTR_ERR(qp);
+ goto out;
+ }
+
+@@ -1759,8 +1767,8 @@ static int modify_qp(struct uverbs_attr_bundle *attrs,
+
+ qp = uobj_get_obj_read(qp, UVERBS_OBJECT_QP, cmd->base.qp_handle,
+ attrs);
+- if (!qp) {
+- ret = -EINVAL;
++ if (IS_ERR(qp)) {
++ ret = PTR_ERR(qp);
+ goto out;
+ }
+
+@@ -2026,8 +2034,8 @@ static int ib_uverbs_post_send(struct uverbs_attr_bundle *attrs)
+ return -ENOMEM;
+
+ qp = uobj_get_obj_read(qp, UVERBS_OBJECT_QP, cmd.qp_handle, attrs);
+- if (!qp) {
+- ret = -EINVAL;
++ if (IS_ERR(qp)) {
++ ret = PTR_ERR(qp);
+ goto out;
+ }
+
+@@ -2064,9 +2072,9 @@ static int ib_uverbs_post_send(struct uverbs_attr_bundle *attrs)
+
+ ud->ah = uobj_get_obj_read(ah, UVERBS_OBJECT_AH,
+ user_wr->wr.ud.ah, attrs);
+- if (!ud->ah) {
++ if (IS_ERR(ud->ah)) {
++ ret = PTR_ERR(ud->ah);
+ kfree(ud);
+- ret = -EINVAL;
+ goto out_put;
+ }
+ ud->remote_qpn = user_wr->wr.ud.remote_qpn;
+@@ -2303,8 +2311,8 @@ static int ib_uverbs_post_recv(struct uverbs_attr_bundle *attrs)
+ return PTR_ERR(wr);
+
+ qp = uobj_get_obj_read(qp, UVERBS_OBJECT_QP, cmd.qp_handle, attrs);
+- if (!qp) {
+- ret = -EINVAL;
++ if (IS_ERR(qp)) {
++ ret = PTR_ERR(qp);
+ goto out;
+ }
+
+@@ -2354,8 +2362,8 @@ static int ib_uverbs_post_srq_recv(struct uverbs_attr_bundle *attrs)
+ return PTR_ERR(wr);
+
+ srq = uobj_get_obj_read(srq, UVERBS_OBJECT_SRQ, cmd.srq_handle, attrs);
+- if (!srq) {
+- ret = -EINVAL;
++ if (IS_ERR(srq)) {
++ ret = PTR_ERR(srq);
+ goto out;
+ }
+
+@@ -2411,8 +2419,8 @@ static int ib_uverbs_create_ah(struct uverbs_attr_bundle *attrs)
+ }
+
+ pd = uobj_get_obj_read(pd, UVERBS_OBJECT_PD, cmd.pd_handle, attrs);
+- if (!pd) {
+- ret = -EINVAL;
++ if (IS_ERR(pd)) {
++ ret = PTR_ERR(pd);
+ goto err;
+ }
+
+@@ -2481,8 +2489,8 @@ static int ib_uverbs_attach_mcast(struct uverbs_attr_bundle *attrs)
+ return ret;
+
+ qp = uobj_get_obj_read(qp, UVERBS_OBJECT_QP, cmd.qp_handle, attrs);
+- if (!qp)
+- return -EINVAL;
++ if (IS_ERR(qp))
++ return PTR_ERR(qp);
+
+ obj = qp->uobject;
+
+@@ -2531,8 +2539,8 @@ static int ib_uverbs_detach_mcast(struct uverbs_attr_bundle *attrs)
+ return ret;
+
+ qp = uobj_get_obj_read(qp, UVERBS_OBJECT_QP, cmd.qp_handle, attrs);
+- if (!qp)
+- return -EINVAL;
++ if (IS_ERR(qp))
++ return PTR_ERR(qp);
+
+ obj = qp->uobject;
+ mutex_lock(&obj->mcast_lock);
+@@ -2666,8 +2674,8 @@ static int kern_spec_to_ib_spec_action(struct uverbs_attr_bundle *attrs,
+ UVERBS_OBJECT_FLOW_ACTION,
+ kern_spec->action.handle,
+ attrs);
+- if (!ib_spec->action.act)
+- return -EINVAL;
++ if (IS_ERR(ib_spec->action.act))
++ return PTR_ERR(ib_spec->action.act);
+ ib_spec->action.size =
+ sizeof(struct ib_flow_spec_action_handle);
+ flow_resources_add(uflow_res,
+@@ -2684,8 +2692,8 @@ static int kern_spec_to_ib_spec_action(struct uverbs_attr_bundle *attrs,
+ UVERBS_OBJECT_COUNTERS,
+ kern_spec->flow_count.handle,
+ attrs);
+- if (!ib_spec->flow_count.counters)
+- return -EINVAL;
++ if (IS_ERR(ib_spec->flow_count.counters))
++ return PTR_ERR(ib_spec->flow_count.counters);
+ ib_spec->flow_count.size =
+ sizeof(struct ib_flow_spec_action_count);
+ flow_resources_add(uflow_res,
+@@ -2903,14 +2911,14 @@ static int ib_uverbs_ex_create_wq(struct uverbs_attr_bundle *attrs)
+ return PTR_ERR(obj);
+
+ pd = uobj_get_obj_read(pd, UVERBS_OBJECT_PD, cmd.pd_handle, attrs);
+- if (!pd) {
+- err = -EINVAL;
++ if (IS_ERR(pd)) {
++ err = PTR_ERR(pd);
+ goto err_uobj;
+ }
+
+ cq = uobj_get_obj_read(cq, UVERBS_OBJECT_CQ, cmd.cq_handle, attrs);
+- if (!cq) {
+- err = -EINVAL;
++ if (IS_ERR(cq)) {
++ err = PTR_ERR(cq);
+ goto err_put_pd;
+ }
+
+@@ -3011,8 +3019,8 @@ static int ib_uverbs_ex_modify_wq(struct uverbs_attr_bundle *attrs)
+ return -EINVAL;
+
+ wq = uobj_get_obj_read(wq, UVERBS_OBJECT_WQ, cmd.wq_handle, attrs);
+- if (!wq)
+- return -EINVAL;
++ if (IS_ERR(wq))
++ return PTR_ERR(wq);
+
+ if (cmd.attr_mask & IB_WQ_FLAGS) {
+ wq_attr.flags = cmd.flags;
+@@ -3095,8 +3103,8 @@ static int ib_uverbs_ex_create_rwq_ind_table(struct uverbs_attr_bundle *attrs)
+ num_read_wqs++) {
+ wq = uobj_get_obj_read(wq, UVERBS_OBJECT_WQ,
+ wqs_handles[num_read_wqs], attrs);
+- if (!wq) {
+- err = -EINVAL;
++ if (IS_ERR(wq)) {
++ err = PTR_ERR(wq);
+ goto put_wqs;
+ }
+
+@@ -3251,8 +3259,8 @@ static int ib_uverbs_ex_create_flow(struct uverbs_attr_bundle *attrs)
+ }
+
+ qp = uobj_get_obj_read(qp, UVERBS_OBJECT_QP, cmd.qp_handle, attrs);
+- if (!qp) {
+- err = -EINVAL;
++ if (IS_ERR(qp)) {
++ err = PTR_ERR(qp);
+ goto err_uobj;
+ }
+
+@@ -3398,15 +3406,15 @@ static int __uverbs_create_xsrq(struct uverbs_attr_bundle *attrs,
+ if (ib_srq_has_cq(cmd->srq_type)) {
+ attr.ext.cq = uobj_get_obj_read(cq, UVERBS_OBJECT_CQ,
+ cmd->cq_handle, attrs);
+- if (!attr.ext.cq) {
+- ret = -EINVAL;
++ if (IS_ERR(attr.ext.cq)) {
++ ret = PTR_ERR(attr.ext.cq);
+ goto err_put_xrcd;
+ }
+ }
+
+ pd = uobj_get_obj_read(pd, UVERBS_OBJECT_PD, cmd->pd_handle, attrs);
+- if (!pd) {
+- ret = -EINVAL;
++ if (IS_ERR(pd)) {
++ ret = PTR_ERR(pd);
+ goto err_put_cq;
+ }
+
+@@ -3513,8 +3521,8 @@ static int ib_uverbs_modify_srq(struct uverbs_attr_bundle *attrs)
+ return ret;
+
+ srq = uobj_get_obj_read(srq, UVERBS_OBJECT_SRQ, cmd.srq_handle, attrs);
+- if (!srq)
+- return -EINVAL;
++ if (IS_ERR(srq))
++ return PTR_ERR(srq);
+
+ attr.max_wr = cmd.max_wr;
+ attr.srq_limit = cmd.srq_limit;
+@@ -3541,8 +3549,8 @@ static int ib_uverbs_query_srq(struct uverbs_attr_bundle *attrs)
+ return ret;
+
+ srq = uobj_get_obj_read(srq, UVERBS_OBJECT_SRQ, cmd.srq_handle, attrs);
+- if (!srq)
+- return -EINVAL;
++ if (IS_ERR(srq))
++ return PTR_ERR(srq);
+
+ ret = ib_query_srq(srq, &attr);
+
+@@ -3667,8 +3675,8 @@ static int ib_uverbs_ex_modify_cq(struct uverbs_attr_bundle *attrs)
+ return -EOPNOTSUPP;
+
+ cq = uobj_get_obj_read(cq, UVERBS_OBJECT_CQ, cmd.cq_handle, attrs);
+- if (!cq)
+- return -EINVAL;
++ if (IS_ERR(cq))
++ return PTR_ERR(cq);
+
+ ret = rdma_set_cq_moderation(cq, cmd.attr.cq_count, cmd.attr.cq_period);
+
+diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c
+index 473ee0831307c1..dc40001072a5ec 100644
+--- a/drivers/infiniband/core/verbs.c
++++ b/drivers/infiniband/core/verbs.c
+@@ -3109,22 +3109,23 @@ EXPORT_SYMBOL(__rdma_block_iter_start);
+ bool __rdma_block_iter_next(struct ib_block_iter *biter)
+ {
+ unsigned int block_offset;
+- unsigned int sg_delta;
++ unsigned int delta;
+
+ if (!biter->__sg_nents || !biter->__sg)
+ return false;
+
+ biter->__dma_addr = sg_dma_address(biter->__sg) + biter->__sg_advance;
+ block_offset = biter->__dma_addr & (BIT_ULL(biter->__pg_bit) - 1);
+- sg_delta = BIT_ULL(biter->__pg_bit) - block_offset;
++ delta = BIT_ULL(biter->__pg_bit) - block_offset;
+
+- if (sg_dma_len(biter->__sg) - biter->__sg_advance > sg_delta) {
+- biter->__sg_advance += sg_delta;
+- } else {
++ while (biter->__sg_nents && biter->__sg &&
++ sg_dma_len(biter->__sg) - biter->__sg_advance <= delta) {
++ delta -= sg_dma_len(biter->__sg) - biter->__sg_advance;
+ biter->__sg_advance = 0;
+ biter->__sg = sg_next(biter->__sg);
+ biter->__sg_nents--;
+ }
++ biter->__sg_advance += delta;
+
+ return true;
+ }
+diff --git a/drivers/input/joystick/xpad.c b/drivers/input/joystick/xpad.c
+index 8ee7d8e5d1c733..400e7aac6aaf67 100644
+--- a/drivers/input/joystick/xpad.c
++++ b/drivers/input/joystick/xpad.c
+@@ -289,6 +289,8 @@ static const struct xpad_device {
+ { 0x1038, 0x1430, "SteelSeries Stratus Duo", 0, XTYPE_XBOX360 },
+ { 0x1038, 0x1431, "SteelSeries Stratus Duo", 0, XTYPE_XBOX360 },
+ { 0x10f5, 0x7005, "Turtle Beach Recon Controller", 0, XTYPE_XBOXONE },
++ { 0x10f5, 0x7008, "Turtle Beach Recon Controller", MAP_SHARE_BUTTON, XTYPE_XBOXONE },
++ { 0x10f5, 0x7073, "Turtle Beach Stealth Ultra Controller", MAP_SHARE_BUTTON, XTYPE_XBOXONE },
+ { 0x11c9, 0x55f0, "Nacon GC-100XF", 0, XTYPE_XBOX360 },
+ { 0x11ff, 0x0511, "PXN V900", 0, XTYPE_XBOX360 },
+ { 0x1209, 0x2882, "Ardwiino Controller", 0, XTYPE_XBOX360 },
+@@ -353,6 +355,7 @@ static const struct xpad_device {
+ { 0x1ee9, 0x1590, "ZOTAC Gaming Zone", 0, XTYPE_XBOX360 },
+ { 0x20d6, 0x2001, "BDA Xbox Series X Wired Controller", 0, XTYPE_XBOXONE },
+ { 0x20d6, 0x2009, "PowerA Enhanced Wired Controller for Xbox Series X|S", 0, XTYPE_XBOXONE },
++ { 0x20d6, 0x2064, "PowerA Wired Controller for Xbox", MAP_SHARE_BUTTON, XTYPE_XBOXONE },
+ { 0x20d6, 0x281f, "PowerA Wired Controller For Xbox 360", 0, XTYPE_XBOX360 },
+ { 0x2345, 0xe00b, "Machenike G5 Pro Controller", 0, XTYPE_XBOX360 },
+ { 0x24c6, 0x5000, "Razer Atrox Arcade Stick", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOX360 },
+diff --git a/drivers/iommu/amd/io_pgtable_v2.c b/drivers/iommu/amd/io_pgtable_v2.c
+index c616de2c5926ec..a56a2739630591 100644
+--- a/drivers/iommu/amd/io_pgtable_v2.c
++++ b/drivers/iommu/amd/io_pgtable_v2.c
+@@ -254,7 +254,7 @@ static int iommu_v2_map_pages(struct io_pgtable_ops *ops, unsigned long iova,
+ pte = v2_alloc_pte(cfg->amd.nid, pgtable->pgd,
+ iova, map_size, gfp, &updated);
+ if (!pte) {
+- ret = -EINVAL;
++ ret = -ENOMEM;
+ goto out;
+ }
+
+diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
+index 2a9fa0c8cc00fe..0f0caf59023c79 100644
+--- a/drivers/iommu/dma-iommu.c
++++ b/drivers/iommu/dma-iommu.c
+@@ -1815,7 +1815,7 @@ int iommu_dma_prepare_msi(struct msi_desc *desc, phys_addr_t msi_addr)
+ static DEFINE_MUTEX(msi_prepare_lock); /* see below */
+
+ if (!domain || !domain->iova_cookie) {
+- desc->iommu_cookie = NULL;
++ msi_desc_set_iommu_msi_iova(desc, 0, 0);
+ return 0;
+ }
+
+@@ -1827,11 +1827,12 @@ int iommu_dma_prepare_msi(struct msi_desc *desc, phys_addr_t msi_addr)
+ mutex_lock(&msi_prepare_lock);
+ msi_page = iommu_dma_get_msi_page(dev, msi_addr, domain);
+ mutex_unlock(&msi_prepare_lock);
+-
+- msi_desc_set_iommu_cookie(desc, msi_page);
+-
+ if (!msi_page)
+ return -ENOMEM;
++
++ msi_desc_set_iommu_msi_iova(
++ desc, msi_page->iova,
++ ilog2(cookie_msi_granule(domain->iova_cookie)));
+ return 0;
+ }
+
+@@ -1842,18 +1843,15 @@ int iommu_dma_prepare_msi(struct msi_desc *desc, phys_addr_t msi_addr)
+ */
+ void iommu_dma_compose_msi_msg(struct msi_desc *desc, struct msi_msg *msg)
+ {
+- struct device *dev = msi_desc_to_dev(desc);
+- const struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
+- const struct iommu_dma_msi_page *msi_page;
++#ifdef CONFIG_IRQ_MSI_IOMMU
++ if (desc->iommu_msi_shift) {
++ u64 msi_iova = desc->iommu_msi_iova << desc->iommu_msi_shift;
+
+- msi_page = msi_desc_get_iommu_cookie(desc);
+-
+- if (!domain || !domain->iova_cookie || WARN_ON(!msi_page))
+- return;
+-
+- msg->address_hi = upper_32_bits(msi_page->iova);
+- msg->address_lo &= cookie_msi_granule(domain->iova_cookie) - 1;
+- msg->address_lo += lower_32_bits(msi_page->iova);
++ msg->address_hi = upper_32_bits(msi_iova);
++ msg->address_lo = lower_32_bits(msi_iova) |
++ (msg->address_lo & ((1 << desc->iommu_msi_shift) - 1));
++ }
++#endif
+ }
+
+ static int iommu_dma_init(void)
+diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
+index 07adf4ceeea061..580d24dd4edd90 100644
+--- a/drivers/iommu/intel/iommu.c
++++ b/drivers/iommu/intel/iommu.c
+@@ -3863,41 +3863,6 @@ static struct iommu_group *intel_iommu_device_group(struct device *dev)
+ return generic_device_group(dev);
+ }
+
+-static int intel_iommu_enable_sva(struct device *dev)
+-{
+- struct device_domain_info *info = dev_iommu_priv_get(dev);
+- struct intel_iommu *iommu;
+-
+- if (!info || dmar_disabled)
+- return -EINVAL;
+-
+- iommu = info->iommu;
+- if (!iommu)
+- return -EINVAL;
+-
+- if (!(iommu->flags & VTD_FLAG_SVM_CAPABLE))
+- return -ENODEV;
+-
+- if (!info->pasid_enabled || !info->ats_enabled)
+- return -EINVAL;
+-
+- /*
+- * Devices having device-specific I/O fault handling should not
+- * support PCI/PRI. The IOMMU side has no means to check the
+- * capability of device-specific IOPF. Therefore, IOMMU can only
+- * default that if the device driver enables SVA on a non-PRI
+- * device, it will handle IOPF in its own way.
+- */
+- if (!info->pri_supported)
+- return 0;
+-
+- /* Devices supporting PRI should have it enabled. */
+- if (!info->pri_enabled)
+- return -EINVAL;
+-
+- return 0;
+-}
+-
+ static int context_flip_pri(struct device_domain_info *info, bool enable)
+ {
+ struct intel_iommu *iommu = info->iommu;
+@@ -4018,7 +3983,7 @@ intel_iommu_dev_enable_feat(struct device *dev, enum iommu_dev_features feat)
+ return intel_iommu_enable_iopf(dev);
+
+ case IOMMU_DEV_FEAT_SVA:
+- return intel_iommu_enable_sva(dev);
++ return 0;
+
+ default:
+ return -ENODEV;
+diff --git a/drivers/iommu/intel/svm.c b/drivers/iommu/intel/svm.c
+index f5569347591f26..ba93123cb4ebad 100644
+--- a/drivers/iommu/intel/svm.c
++++ b/drivers/iommu/intel/svm.c
+@@ -110,6 +110,41 @@ static const struct mmu_notifier_ops intel_mmuops = {
+ .free_notifier = intel_mm_free_notifier,
+ };
+
++static int intel_iommu_sva_supported(struct device *dev)
++{
++ struct device_domain_info *info = dev_iommu_priv_get(dev);
++ struct intel_iommu *iommu;
++
++ if (!info || dmar_disabled)
++ return -EINVAL;
++
++ iommu = info->iommu;
++ if (!iommu)
++ return -EINVAL;
++
++ if (!(iommu->flags & VTD_FLAG_SVM_CAPABLE))
++ return -ENODEV;
++
++ if (!info->pasid_enabled || !info->ats_enabled)
++ return -EINVAL;
++
++ /*
++ * Devices having device-specific I/O fault handling should not
++ * support PCI/PRI. The IOMMU side has no means to check the
++ * capability of device-specific IOPF. Therefore, IOMMU can only
++ * default that if the device driver enables SVA on a non-PRI
++ * device, it will handle IOPF in its own way.
++ */
++ if (!info->pri_supported)
++ return 0;
++
++ /* Devices supporting PRI should have it enabled. */
++ if (!info->pri_enabled)
++ return -EINVAL;
++
++ return 0;
++}
++
+ static int intel_svm_set_dev_pasid(struct iommu_domain *domain,
+ struct device *dev, ioasid_t pasid,
+ struct iommu_domain *old)
+@@ -121,6 +156,10 @@ static int intel_svm_set_dev_pasid(struct iommu_domain *domain,
+ unsigned long sflags;
+ int ret = 0;
+
++ ret = intel_iommu_sva_supported(dev);
++ if (ret)
++ return ret;
++
+ dev_pasid = domain_add_dev_pasid(domain, dev, pasid);
+ if (IS_ERR(dev_pasid))
+ return PTR_ERR(dev_pasid);
+@@ -161,6 +200,10 @@ struct iommu_domain *intel_svm_domain_alloc(struct device *dev,
+ struct dmar_domain *domain;
+ int ret;
+
++ ret = intel_iommu_sva_supported(dev);
++ if (ret)
++ return ERR_PTR(ret);
++
+ domain = kzalloc(sizeof(*domain), GFP_KERNEL);
+ if (!domain)
+ return ERR_PTR(-ENOMEM);
+diff --git a/drivers/iommu/iommu-priv.h b/drivers/iommu/iommu-priv.h
+index de5b54eaa8bf1a..a5913c0b02a0a7 100644
+--- a/drivers/iommu/iommu-priv.h
++++ b/drivers/iommu/iommu-priv.h
+@@ -17,6 +17,8 @@ static inline const struct iommu_ops *dev_iommu_ops(struct device *dev)
+ return dev->iommu->iommu_dev->ops;
+ }
+
++void dev_iommu_free(struct device *dev);
++
+ const struct iommu_ops *iommu_ops_from_fwnode(const struct fwnode_handle *fwnode);
+
+ static inline const struct iommu_ops *iommu_fwspec_ops(struct iommu_fwspec *fwspec)
+diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
+index 1efe7cddb4fe33..3a2804a98203b5 100644
+--- a/drivers/iommu/iommu.c
++++ b/drivers/iommu/iommu.c
+@@ -352,7 +352,7 @@ static struct dev_iommu *dev_iommu_get(struct device *dev)
+ return param;
+ }
+
+-static void dev_iommu_free(struct device *dev)
++void dev_iommu_free(struct device *dev)
+ {
+ struct dev_iommu *param = dev->iommu;
+
+diff --git a/drivers/iommu/iommufd/device.c b/drivers/iommu/iommufd/device.c
+index 3c7800d4ab622d..66a6b7466820d8 100644
+--- a/drivers/iommu/iommufd/device.c
++++ b/drivers/iommu/iommufd/device.c
+@@ -3,6 +3,7 @@
+ */
+ #include <linux/iommu.h>
+ #include <linux/iommufd.h>
++#include <linux/pci-ats.h>
+ #include <linux/slab.h>
+ #include <uapi/linux/iommufd.h>
+
+@@ -1304,7 +1305,8 @@ int iommufd_get_hw_info(struct iommufd_ucmd *ucmd)
+ void *data;
+ int rc;
+
+- if (cmd->flags || cmd->__reserved)
++ if (cmd->flags || cmd->__reserved[0] || cmd->__reserved[1] ||
++ cmd->__reserved[2])
+ return -EOPNOTSUPP;
+
+ idev = iommufd_get_device(ucmd, cmd->dev_id);
+@@ -1361,6 +1363,36 @@ int iommufd_get_hw_info(struct iommufd_ucmd *ucmd)
+ if (device_iommu_capable(idev->dev, IOMMU_CAP_DIRTY_TRACKING))
+ cmd->out_capabilities |= IOMMU_HW_CAP_DIRTY_TRACKING;
+
++ cmd->out_max_pasid_log2 = 0;
++ /*
++ * Currently, all iommu drivers enable PASID in the probe_device()
++ * op if iommu and device supports it. So the max_pasids stored in
++ * dev->iommu indicates both PASID support and enable status. A
++ * non-zero dev->iommu->max_pasids means PASID is supported and
++ * enabled. The iommufd only reports PASID capability to userspace
++ * if it's enabled.
++ */
++ if (idev->dev->iommu->max_pasids) {
++ cmd->out_max_pasid_log2 = ilog2(idev->dev->iommu->max_pasids);
++
++ if (dev_is_pci(idev->dev)) {
++ struct pci_dev *pdev = to_pci_dev(idev->dev);
++ int ctrl;
++
++ ctrl = pci_pasid_status(pdev);
++
++ WARN_ON_ONCE(ctrl < 0 ||
++ !(ctrl & PCI_PASID_CTRL_ENABLE));
++
++ if (ctrl & PCI_PASID_CTRL_EXEC)
++ cmd->out_capabilities |=
++ IOMMU_HW_CAP_PCI_PASID_EXEC;
++ if (ctrl & PCI_PASID_CTRL_PRIV)
++ cmd->out_capabilities |=
++ IOMMU_HW_CAP_PCI_PASID_PRIV;
++ }
++ }
++
+ rc = iommufd_ucmd_respond(ucmd, sizeof(*cmd));
+ out_free:
+ kfree(data);
+diff --git a/drivers/iommu/iommufd/hw_pagetable.c b/drivers/iommu/iommufd/hw_pagetable.c
+index 598be26a14e28e..9b5b0b85222996 100644
+--- a/drivers/iommu/iommufd/hw_pagetable.c
++++ b/drivers/iommu/iommufd/hw_pagetable.c
+@@ -126,6 +126,9 @@ iommufd_hwpt_paging_alloc(struct iommufd_ctx *ictx, struct iommufd_ioas *ioas,
+ if ((flags & IOMMU_HWPT_ALLOC_DIRTY_TRACKING) &&
+ !device_iommu_capable(idev->dev, IOMMU_CAP_DIRTY_TRACKING))
+ return ERR_PTR(-EOPNOTSUPP);
++ if ((flags & IOMMU_HWPT_FAULT_ID_VALID) &&
++ (flags & IOMMU_HWPT_ALLOC_NEST_PARENT))
++ return ERR_PTR(-EOPNOTSUPP);
+
+ hwpt_paging = __iommufd_object_alloc(
+ ictx, hwpt_paging, IOMMUFD_OBJ_HWPT_PAGING, common.obj);
+diff --git a/drivers/iommu/of_iommu.c b/drivers/iommu/of_iommu.c
+index 97987cd78da934..e10a68b5ffde13 100644
+--- a/drivers/iommu/of_iommu.c
++++ b/drivers/iommu/of_iommu.c
+@@ -116,6 +116,7 @@ static void of_pci_check_device_ats(struct device *dev, struct device_node *np)
+ int of_iommu_configure(struct device *dev, struct device_node *master_np,
+ const u32 *id)
+ {
++ bool dev_iommu_present;
+ int err;
+
+ if (!master_np)
+@@ -127,6 +128,7 @@ int of_iommu_configure(struct device *dev, struct device_node *master_np,
+ mutex_unlock(&iommu_probe_device_lock);
+ return 0;
+ }
++ dev_iommu_present = dev->iommu;
+
+ /*
+ * We don't currently walk up the tree looking for a parent IOMMU.
+@@ -147,8 +149,10 @@ int of_iommu_configure(struct device *dev, struct device_node *master_np,
+ err = of_iommu_configure_device(master_np, dev, id);
+ }
+
+- if (err)
++ if (err && dev_iommu_present)
+ iommu_fwspec_free(dev);
++ else if (err && dev->iommu)
++ dev_iommu_free(dev);
+ mutex_unlock(&iommu_probe_device_lock);
+
+ if (!err && dev->bus)
+diff --git a/drivers/irqchip/irq-riscv-aplic-direct.c b/drivers/irqchip/irq-riscv-aplic-direct.c
+index 7cd6b646774b9a..205ad61d15e49f 100644
+--- a/drivers/irqchip/irq-riscv-aplic-direct.c
++++ b/drivers/irqchip/irq-riscv-aplic-direct.c
+@@ -31,7 +31,7 @@ struct aplic_direct {
+ };
+
+ struct aplic_idc {
+- unsigned int hart_index;
++ u32 hart_index;
+ void __iomem *regs;
+ struct aplic_direct *direct;
+ };
+@@ -219,6 +219,20 @@ static int aplic_direct_parse_parent_hwirq(struct device *dev, u32 index,
+ return 0;
+ }
+
++static int aplic_direct_get_hart_index(struct device *dev, u32 logical_index,
++ u32 *hart_index)
++{
++ const char *prop_hart_index = "riscv,hart-indexes";
++ struct device_node *np = to_of_node(dev->fwnode);
++
++ if (!np || !of_property_present(np, prop_hart_index)) {
++ *hart_index = logical_index;
++ return 0;
++ }
++
++ return of_property_read_u32_index(np, prop_hart_index, logical_index, hart_index);
++}
++
+ int aplic_direct_setup(struct device *dev, void __iomem *regs)
+ {
+ int i, j, rc, cpu, current_cpu, setup_count = 0;
+@@ -265,8 +279,12 @@ int aplic_direct_setup(struct device *dev, void __iomem *regs)
+ cpumask_set_cpu(cpu, &direct->lmask);
+
+ idc = per_cpu_ptr(&aplic_idcs, cpu);
+- idc->hart_index = i;
+- idc->regs = priv->regs + APLIC_IDC_BASE + i * APLIC_IDC_SIZE;
++ rc = aplic_direct_get_hart_index(dev, i, &idc->hart_index);
++ if (rc) {
++ dev_warn(dev, "hart index not found for IDC%d\n", i);
++ continue;
++ }
++ idc->regs = priv->regs + APLIC_IDC_BASE + idc->hart_index * APLIC_IDC_SIZE;
+ idc->direct = direct;
+
+ aplic_idc_set_delivery(idc, true);
+diff --git a/drivers/irqchip/irq-riscv-imsic-early.c b/drivers/irqchip/irq-riscv-imsic-early.c
+index 275df500570576..553650932c75fe 100644
+--- a/drivers/irqchip/irq-riscv-imsic-early.c
++++ b/drivers/irqchip/irq-riscv-imsic-early.c
+@@ -77,6 +77,12 @@ static void imsic_handle_irq(struct irq_desc *desc)
+ struct imsic_vector *vec;
+ unsigned long local_id;
+
++ /*
++ * Process pending local synchronization instead of waiting
++ * for per-CPU local timer to expire.
++ */
++ imsic_local_sync_all(false);
++
+ chained_irq_enter(chip, desc);
+
+ while ((local_id = csr_swap(CSR_TOPEI, 0))) {
+@@ -120,7 +126,7 @@ static int imsic_starting_cpu(unsigned int cpu)
+ * Interrupts identities might have been enabled/disabled while
+ * this CPU was not running so sync-up local enable/disable state.
+ */
+- imsic_local_sync_all();
++ imsic_local_sync_all(true);
+
+ /* Enable local interrupt delivery */
+ imsic_local_delivery(true);
+diff --git a/drivers/irqchip/irq-riscv-imsic-platform.c b/drivers/irqchip/irq-riscv-imsic-platform.c
+index c708780e8760f3..5d7c30ad8855b1 100644
+--- a/drivers/irqchip/irq-riscv-imsic-platform.c
++++ b/drivers/irqchip/irq-riscv-imsic-platform.c
+@@ -96,9 +96,8 @@ static int imsic_irq_set_affinity(struct irq_data *d, const struct cpumask *mask
+ bool force)
+ {
+ struct imsic_vector *old_vec, *new_vec;
+- struct irq_data *pd = d->parent_data;
+
+- old_vec = irq_data_get_irq_chip_data(pd);
++ old_vec = irq_data_get_irq_chip_data(d);
+ if (WARN_ON(!old_vec))
+ return -ENOENT;
+
+@@ -116,13 +115,13 @@ static int imsic_irq_set_affinity(struct irq_data *d, const struct cpumask *mask
+ return -ENOSPC;
+
+ /* Point device to the new vector */
+- imsic_msi_update_msg(d, new_vec);
++ imsic_msi_update_msg(irq_get_irq_data(d->irq), new_vec);
+
+ /* Update irq descriptors with the new vector */
+- pd->chip_data = new_vec;
++ d->chip_data = new_vec;
+
+- /* Update effective affinity of parent irq data */
+- irq_data_update_effective_affinity(pd, cpumask_of(new_vec->cpu));
++ /* Update effective affinity */
++ irq_data_update_effective_affinity(d, cpumask_of(new_vec->cpu));
+
+ /* Move state of the old vector to the new vector */
+ imsic_vector_move(old_vec, new_vec);
+@@ -135,6 +134,9 @@ static struct irq_chip imsic_irq_base_chip = {
+ .name = "IMSIC",
+ .irq_mask = imsic_irq_mask,
+ .irq_unmask = imsic_irq_unmask,
++#ifdef CONFIG_SMP
++ .irq_set_affinity = imsic_irq_set_affinity,
++#endif
+ .irq_retrigger = imsic_irq_retrigger,
+ .irq_compose_msi_msg = imsic_irq_compose_msg,
+ .flags = IRQCHIP_SKIP_SET_WAKE |
+@@ -245,7 +247,7 @@ static bool imsic_init_dev_msi_info(struct device *dev,
+ if (WARN_ON_ONCE(domain != real_parent))
+ return false;
+ #ifdef CONFIG_SMP
+- info->chip->irq_set_affinity = imsic_irq_set_affinity;
++ info->chip->irq_set_affinity = irq_chip_set_affinity_parent;
+ #endif
+ break;
+ default:
+diff --git a/drivers/irqchip/irq-riscv-imsic-state.c b/drivers/irqchip/irq-riscv-imsic-state.c
+index b97e6cd89ed742..06ff0e17c0c337 100644
+--- a/drivers/irqchip/irq-riscv-imsic-state.c
++++ b/drivers/irqchip/irq-riscv-imsic-state.c
+@@ -124,10 +124,11 @@ void __imsic_eix_update(unsigned long base_id, unsigned long num_id, bool pend,
+ }
+ }
+
+-static void __imsic_local_sync(struct imsic_local_priv *lpriv)
++static bool __imsic_local_sync(struct imsic_local_priv *lpriv)
+ {
+ struct imsic_local_config *mlocal;
+ struct imsic_vector *vec, *mvec;
++ bool ret = true;
+ int i;
+
+ lockdep_assert_held(&lpriv->lock);
+@@ -143,35 +144,75 @@ static void __imsic_local_sync(struct imsic_local_priv *lpriv)
+ __imsic_id_clear_enable(i);
+
+ /*
+- * If the ID was being moved to a new ID on some other CPU
+- * then we can get a MSI during the movement so check the
+- * ID pending bit and re-trigger the new ID on other CPU
+- * using MMIO write.
++ * Clear the previous vector pointer of the new vector only
++ * after the movement is complete on the old CPU.
+ */
+- mvec = READ_ONCE(vec->move);
+- WRITE_ONCE(vec->move, NULL);
+- if (mvec && mvec != vec) {
++ mvec = READ_ONCE(vec->move_prev);
++ if (mvec) {
++ /*
++ * If the old vector has not been updated then
++ * try again in the next sync-up call.
++ */
++ if (READ_ONCE(mvec->move_next)) {
++ ret = false;
++ continue;
++ }
++
++ WRITE_ONCE(vec->move_prev, NULL);
++ }
++
++ /*
++ * If a vector was being moved to a new vector on some other
++ * CPU then we can get a MSI during the movement so check the
++ * ID pending bit and re-trigger the new ID on other CPU using
++ * MMIO write.
++ */
++ mvec = READ_ONCE(vec->move_next);
++ if (mvec) {
+ if (__imsic_id_read_clear_pending(i)) {
+ mlocal = per_cpu_ptr(imsic->global.local, mvec->cpu);
+ writel_relaxed(mvec->local_id, mlocal->msi_va);
+ }
+
++ WRITE_ONCE(vec->move_next, NULL);
+ imsic_vector_free(&lpriv->vectors[i]);
+ }
+
+ skip:
+ bitmap_clear(lpriv->dirty_bitmap, i, 1);
+ }
++
++ return ret;
+ }
+
+-void imsic_local_sync_all(void)
++#ifdef CONFIG_SMP
++static void __imsic_local_timer_start(struct imsic_local_priv *lpriv, unsigned int cpu)
++{
++ lockdep_assert_held(&lpriv->lock);
++
++ if (!timer_pending(&lpriv->timer)) {
++ lpriv->timer.expires = jiffies + 1;
++ add_timer_on(&lpriv->timer, cpu);
++ }
++}
++#else
++static inline void __imsic_local_timer_start(struct imsic_local_priv *lpriv, unsigned int cpu)
++{
++}
++#endif
++
++void imsic_local_sync_all(bool force_all)
+ {
+ struct imsic_local_priv *lpriv = this_cpu_ptr(imsic->lpriv);
+ unsigned long flags;
+
+ raw_spin_lock_irqsave(&lpriv->lock, flags);
+- bitmap_fill(lpriv->dirty_bitmap, imsic->global.nr_ids + 1);
+- __imsic_local_sync(lpriv);
++
++ if (force_all)
++ bitmap_fill(lpriv->dirty_bitmap, imsic->global.nr_ids + 1);
++ if (!__imsic_local_sync(lpriv))
++ __imsic_local_timer_start(lpriv, smp_processor_id());
++
+ raw_spin_unlock_irqrestore(&lpriv->lock, flags);
+ }
+
+@@ -190,12 +231,7 @@ void imsic_local_delivery(bool enable)
+ #ifdef CONFIG_SMP
+ static void imsic_local_timer_callback(struct timer_list *timer)
+ {
+- struct imsic_local_priv *lpriv = this_cpu_ptr(imsic->lpriv);
+- unsigned long flags;
+-
+- raw_spin_lock_irqsave(&lpriv->lock, flags);
+- __imsic_local_sync(lpriv);
+- raw_spin_unlock_irqrestore(&lpriv->lock, flags);
++ imsic_local_sync_all(false);
+ }
+
+ static void __imsic_remote_sync(struct imsic_local_priv *lpriv, unsigned int cpu)
+@@ -216,14 +252,11 @@ static void __imsic_remote_sync(struct imsic_local_priv *lpriv, unsigned int cpu
+ */
+ if (cpu_online(cpu)) {
+ if (cpu == smp_processor_id()) {
+- __imsic_local_sync(lpriv);
+- return;
++ if (__imsic_local_sync(lpriv))
++ return;
+ }
+
+- if (!timer_pending(&lpriv->timer)) {
+- lpriv->timer.expires = jiffies + 1;
+- add_timer_on(&lpriv->timer, cpu);
+- }
++ __imsic_local_timer_start(lpriv, cpu);
+ }
+ }
+ #else
+@@ -278,8 +311,9 @@ void imsic_vector_unmask(struct imsic_vector *vec)
+ raw_spin_unlock(&lpriv->lock);
+ }
+
+-static bool imsic_vector_move_update(struct imsic_local_priv *lpriv, struct imsic_vector *vec,
+- bool new_enable, struct imsic_vector *new_move)
++static bool imsic_vector_move_update(struct imsic_local_priv *lpriv,
++ struct imsic_vector *vec, bool is_old_vec,
++ bool new_enable, struct imsic_vector *move_vec)
+ {
+ unsigned long flags;
+ bool enabled;
+@@ -289,7 +323,10 @@ static bool imsic_vector_move_update(struct imsic_local_priv *lpriv, struct imsi
+ /* Update enable and move details */
+ enabled = READ_ONCE(vec->enable);
+ WRITE_ONCE(vec->enable, new_enable);
+- WRITE_ONCE(vec->move, new_move);
++ if (is_old_vec)
++ WRITE_ONCE(vec->move_next, move_vec);
++ else
++ WRITE_ONCE(vec->move_prev, move_vec);
+
+ /* Mark the vector as dirty and synchronize */
+ bitmap_set(lpriv->dirty_bitmap, vec->local_id, 1);
+@@ -322,8 +359,8 @@ void imsic_vector_move(struct imsic_vector *old_vec, struct imsic_vector *new_ve
+ * interrupt on the old vector while device was being moved
+ * to the new vector.
+ */
+- enabled = imsic_vector_move_update(old_lpriv, old_vec, false, new_vec);
+- imsic_vector_move_update(new_lpriv, new_vec, enabled, new_vec);
++ enabled = imsic_vector_move_update(old_lpriv, old_vec, true, false, new_vec);
++ imsic_vector_move_update(new_lpriv, new_vec, false, enabled, old_vec);
+ }
+
+ #ifdef CONFIG_GENERIC_IRQ_DEBUGFS
+@@ -386,7 +423,8 @@ struct imsic_vector *imsic_vector_alloc(unsigned int hwirq, const struct cpumask
+ vec = &lpriv->vectors[local_id];
+ vec->hwirq = hwirq;
+ vec->enable = false;
+- vec->move = NULL;
++ vec->move_next = NULL;
++ vec->move_prev = NULL;
+
+ return vec;
+ }
+diff --git a/drivers/irqchip/irq-riscv-imsic-state.h b/drivers/irqchip/irq-riscv-imsic-state.h
+index 391e4428082757..f02842b84ed582 100644
+--- a/drivers/irqchip/irq-riscv-imsic-state.h
++++ b/drivers/irqchip/irq-riscv-imsic-state.h
+@@ -23,7 +23,8 @@ struct imsic_vector {
+ unsigned int hwirq;
+ /* Details accessed using local lock held */
+ bool enable;
+- struct imsic_vector *move;
++ struct imsic_vector *move_next;
++ struct imsic_vector *move_prev;
+ };
+
+ struct imsic_local_priv {
+@@ -74,7 +75,7 @@ static inline void __imsic_id_clear_enable(unsigned long id)
+ __imsic_eix_update(id, 1, false, false);
+ }
+
+-void imsic_local_sync_all(void);
++void imsic_local_sync_all(bool force_all);
+ void imsic_local_delivery(bool enable);
+
+ void imsic_vector_mask(struct imsic_vector *vec);
+@@ -87,7 +88,7 @@ static inline bool imsic_vector_isenabled(struct imsic_vector *vec)
+
+ static inline struct imsic_vector *imsic_vector_get_move(struct imsic_vector *vec)
+ {
+- return READ_ONCE(vec->move);
++ return READ_ONCE(vec->move_prev);
+ }
+
+ void imsic_vector_move(struct imsic_vector *old_vec, struct imsic_vector *new_vec);
+diff --git a/drivers/leds/Kconfig b/drivers/leds/Kconfig
+index 2b27d043921ce0..8859e8fe292a9c 100644
+--- a/drivers/leds/Kconfig
++++ b/drivers/leds/Kconfig
+@@ -971,6 +971,7 @@ config LEDS_ST1202
+ depends on I2C
+ depends on OF
+ select LEDS_TRIGGERS
++ select LEDS_TRIGGER_PATTERN
+ help
+ Say Y to enable support for LEDs connected to LED1202
+ LED driver chips accessed via the I2C bus.
+diff --git a/drivers/leds/leds-st1202.c b/drivers/leds/leds-st1202.c
+index 4cebc0203c227a..ccea216c11f9b6 100644
+--- a/drivers/leds/leds-st1202.c
++++ b/drivers/leds/leds-st1202.c
+@@ -350,11 +350,11 @@ static int st1202_probe(struct i2c_client *client)
+ return ret;
+ chip->client = client;
+
+- ret = st1202_dt_init(chip);
++ ret = st1202_setup(chip);
+ if (ret < 0)
+ return ret;
+
+- ret = st1202_setup(chip);
++ ret = st1202_dt_init(chip);
+ if (ret < 0)
+ return ret;
+
+diff --git a/drivers/leds/rgb/leds-pwm-multicolor.c b/drivers/leds/rgb/leds-pwm-multicolor.c
+index f80a06cc31f8a4..1c7705bafdfc75 100644
+--- a/drivers/leds/rgb/leds-pwm-multicolor.c
++++ b/drivers/leds/rgb/leds-pwm-multicolor.c
+@@ -141,8 +141,11 @@ static int led_pwm_mc_probe(struct platform_device *pdev)
+
+ /* init the multicolor's LED class device */
+ cdev = &priv->mc_cdev.led_cdev;
+- fwnode_property_read_u32(mcnode, "max-brightness",
++ ret = fwnode_property_read_u32(mcnode, "max-brightness",
+ &cdev->max_brightness);
++ if (ret)
++ goto release_mcnode;
++
+ cdev->flags = LED_CORE_SUSPENDRESUME;
+ cdev->brightness_set_blocking = led_pwm_mc_set;
+
+diff --git a/drivers/leds/trigger/ledtrig-netdev.c b/drivers/leds/trigger/ledtrig-netdev.c
+index c15efe3e50780f..4e048e08c4fdec 100644
+--- a/drivers/leds/trigger/ledtrig-netdev.c
++++ b/drivers/leds/trigger/ledtrig-netdev.c
+@@ -68,6 +68,7 @@ struct led_netdev_data {
+ unsigned int last_activity;
+
+ unsigned long mode;
++ unsigned long blink_delay;
+ int link_speed;
+ __ETHTOOL_DECLARE_LINK_MODE_MASK(supported_link_modes);
+ u8 duplex;
+@@ -86,6 +87,10 @@ static void set_baseline_state(struct led_netdev_data *trigger_data)
+ /* Already validated, hw control is possible with the requested mode */
+ if (trigger_data->hw_control) {
+ led_cdev->hw_control_set(led_cdev, trigger_data->mode);
++ if (led_cdev->blink_set) {
++ led_cdev->blink_set(led_cdev, &trigger_data->blink_delay,
++ &trigger_data->blink_delay);
++ }
+
+ return;
+ }
+@@ -454,10 +459,11 @@ static ssize_t interval_store(struct device *dev,
+ size_t size)
+ {
+ struct led_netdev_data *trigger_data = led_trigger_get_drvdata(dev);
++ struct led_classdev *led_cdev = trigger_data->led_cdev;
+ unsigned long value;
+ int ret;
+
+- if (trigger_data->hw_control)
++ if (trigger_data->hw_control && !led_cdev->blink_set)
+ return -EINVAL;
+
+ ret = kstrtoul(buf, 0, &value);
+@@ -466,9 +472,13 @@ static ssize_t interval_store(struct device *dev,
+
+ /* impose some basic bounds on the timer interval */
+ if (value >= 5 && value <= 10000) {
+- cancel_delayed_work_sync(&trigger_data->work);
++ if (trigger_data->hw_control) {
++ trigger_data->blink_delay = value;
++ } else {
++ cancel_delayed_work_sync(&trigger_data->work);
+
+- atomic_set(&trigger_data->interval, msecs_to_jiffies(value));
++ atomic_set(&trigger_data->interval, msecs_to_jiffies(value));
++ }
+ set_baseline_state(trigger_data); /* resets timer */
+ }
+
+diff --git a/drivers/mailbox/mailbox.c b/drivers/mailbox/mailbox.c
+index d3d26a2c98956c..cb174e788a96c2 100644
+--- a/drivers/mailbox/mailbox.c
++++ b/drivers/mailbox/mailbox.c
+@@ -415,11 +415,12 @@ struct mbox_chan *mbox_request_channel(struct mbox_client *cl, int index)
+
+ mutex_lock(&con_mutex);
+
+- if (of_parse_phandle_with_args(dev->of_node, "mboxes",
+- "#mbox-cells", index, &spec)) {
++ ret = of_parse_phandle_with_args(dev->of_node, "mboxes", "#mbox-cells",
++ index, &spec);
++ if (ret) {
+ dev_dbg(dev, "%s: can't parse \"mboxes\" property\n", __func__);
+ mutex_unlock(&con_mutex);
+- return ERR_PTR(-ENODEV);
++ return ERR_PTR(ret);
+ }
+
+ chan = ERR_PTR(-EPROBE_DEFER);
+diff --git a/drivers/mailbox/pcc.c b/drivers/mailbox/pcc.c
+index f8215a8f656a46..49254d99a8ad68 100644
+--- a/drivers/mailbox/pcc.c
++++ b/drivers/mailbox/pcc.c
+@@ -419,8 +419,12 @@ int pcc_mbox_ioremap(struct mbox_chan *chan)
+ return -1;
+ pchan_info = chan->con_priv;
+ pcc_mbox_chan = &pchan_info->chan;
+- pcc_mbox_chan->shmem = ioremap(pcc_mbox_chan->shmem_base_addr,
+- pcc_mbox_chan->shmem_size);
++
++ pcc_mbox_chan->shmem = acpi_os_ioremap(pcc_mbox_chan->shmem_base_addr,
++ pcc_mbox_chan->shmem_size);
++ if (!pcc_mbox_chan->shmem)
++ return -ENXIO;
++
+ return 0;
+ }
+ EXPORT_SYMBOL_GPL(pcc_mbox_ioremap);
+diff --git a/drivers/md/dm-cache-target.c b/drivers/md/dm-cache-target.c
+index 9cb797a561d6e3..6cee5eac8b9e2b 100644
+--- a/drivers/md/dm-cache-target.c
++++ b/drivers/md/dm-cache-target.c
+@@ -2899,6 +2899,27 @@ static dm_cblock_t get_cache_dev_size(struct cache *cache)
+ return to_cblock(size);
+ }
+
++static bool can_resume(struct cache *cache)
++{
++ /*
++ * Disallow retrying the resume operation for devices that failed the
++ * first resume attempt, as the failure leaves the policy object partially
++ * initialized. Retrying could trigger BUG_ON when loading cache mappings
++ * into the incomplete policy object.
++ */
++ if (cache->sized && !cache->loaded_mappings) {
++ if (get_cache_mode(cache) != CM_WRITE)
++ DMERR("%s: unable to resume a failed-loaded cache, please check metadata.",
++ cache_device_name(cache));
++ else
++ DMERR("%s: unable to resume cache due to missing proper cache table reload",
++ cache_device_name(cache));
++ return false;
++ }
++
++ return true;
++}
++
+ static bool can_resize(struct cache *cache, dm_cblock_t new_size)
+ {
+ if (from_cblock(new_size) > from_cblock(cache->cache_size)) {
+@@ -2947,6 +2968,9 @@ static int cache_preresume(struct dm_target *ti)
+ struct cache *cache = ti->private;
+ dm_cblock_t csize = get_cache_dev_size(cache);
+
++ if (!can_resume(cache))
++ return -EINVAL;
++
+ /*
+ * Check to see if the cache has resized.
+ */
+diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
+index efc6ec25e0c5d1..4752966fdb3f4d 100644
+--- a/drivers/md/dm-table.c
++++ b/drivers/md/dm-table.c
+@@ -698,6 +698,10 @@ int dm_table_add_target(struct dm_table *t, const char *type,
+ DMERR("%s: zero-length target", dm_device_name(t->md));
+ return -EINVAL;
+ }
++ if (start + len < start || start + len > LLONG_MAX >> SECTOR_SHIFT) {
++ DMERR("%s: too large device", dm_device_name(t->md));
++ return -EINVAL;
++ }
+
+ ti->type = dm_get_target_type(type);
+ if (!ti->type) {
+diff --git a/drivers/md/dm-vdo/indexer/index-layout.c b/drivers/md/dm-vdo/indexer/index-layout.c
+index af8fab83b0f3ec..61edf2b72427dd 100644
+--- a/drivers/md/dm-vdo/indexer/index-layout.c
++++ b/drivers/md/dm-vdo/indexer/index-layout.c
+@@ -54,7 +54,6 @@
+ * Each save also has a unique nonce.
+ */
+
+-#define MAGIC_SIZE 32
+ #define NONCE_INFO_SIZE 32
+ #define MAX_SAVES 2
+
+@@ -98,9 +97,11 @@ enum region_type {
+ #define SUPER_VERSION_CURRENT 3
+ #define SUPER_VERSION_MAXIMUM 7
+
+-static const u8 LAYOUT_MAGIC[MAGIC_SIZE] = "*ALBIREO*SINGLE*FILE*LAYOUT*001*";
++static const u8 LAYOUT_MAGIC[] = "*ALBIREO*SINGLE*FILE*LAYOUT*001*";
+ static const u64 REGION_MAGIC = 0x416c6252676e3031; /* 'AlbRgn01' */
+
++#define MAGIC_SIZE (sizeof(LAYOUT_MAGIC) - 1)
++
+ struct region_header {
+ u64 magic;
+ u64 region_blocks;
+diff --git a/drivers/md/dm-vdo/vdo.c b/drivers/md/dm-vdo/vdo.c
+index a7e32baab4afd3..80b60867402255 100644
+--- a/drivers/md/dm-vdo/vdo.c
++++ b/drivers/md/dm-vdo/vdo.c
+@@ -31,9 +31,7 @@
+
+ #include <linux/completion.h>
+ #include <linux/device-mapper.h>
+-#include <linux/kernel.h>
+ #include <linux/lz4.h>
+-#include <linux/module.h>
+ #include <linux/mutex.h>
+ #include <linux/spinlock.h>
+ #include <linux/types.h>
+@@ -142,12 +140,6 @@ static void finish_vdo_request_queue(void *ptr)
+ vdo_unregister_allocating_thread();
+ }
+
+-#ifdef MODULE
+-#define MODULE_NAME THIS_MODULE->name
+-#else
+-#define MODULE_NAME "dm-vdo"
+-#endif /* MODULE */
+-
+ static const struct vdo_work_queue_type default_queue_type = {
+ .start = start_vdo_request_queue,
+ .finish = finish_vdo_request_queue,
+@@ -559,8 +551,7 @@ int vdo_make(unsigned int instance, struct device_config *config, char **reason,
+ *vdo_ptr = vdo;
+
+ snprintf(vdo->thread_name_prefix, sizeof(vdo->thread_name_prefix),
+- "%s%u", MODULE_NAME, instance);
+- BUG_ON(vdo->thread_name_prefix[0] == '\0');
++ "vdo%u", instance);
+ result = vdo_allocate(vdo->thread_config.thread_count,
+ struct vdo_thread, __func__, &vdo->threads);
+ if (result != VDO_SUCCESS) {
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index 4d1e42891d2465..5ab7574c0c76ab 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -1540,14 +1540,18 @@ static void __send_empty_flush(struct clone_info *ci)
+ {
+ struct dm_table *t = ci->map;
+ struct bio flush_bio;
++ blk_opf_t opf = REQ_OP_WRITE | REQ_PREFLUSH | REQ_SYNC;
++
++ if ((ci->io->orig_bio->bi_opf & (REQ_IDLE | REQ_SYNC)) ==
++ (REQ_IDLE | REQ_SYNC))
++ opf |= REQ_IDLE;
+
+ /*
+ * Use an on-stack bio for this, it's safe since we don't
+ * need to reference it after submit. It's just used as
+ * the basis for the clone(s).
+ */
+- bio_init(&flush_bio, ci->io->md->disk->part0, NULL, 0,
+- REQ_OP_WRITE | REQ_PREFLUSH | REQ_SYNC);
++ bio_init(&flush_bio, ci->io->md->disk->part0, NULL, 0, opf);
+
+ ci->bio = &flush_bio;
+ ci->sector_count = 0;
+diff --git a/drivers/media/cec/core/cec-pin.c b/drivers/media/cec/core/cec-pin.c
+index a70451d99ebc96..f232c3df7ee165 100644
+--- a/drivers/media/cec/core/cec-pin.c
++++ b/drivers/media/cec/core/cec-pin.c
+@@ -873,19 +873,19 @@ static enum hrtimer_restart cec_pin_timer(struct hrtimer *timer)
+ if (pin->wait_usecs > 150) {
+ pin->wait_usecs -= 100;
+ pin->timer_ts = ktime_add_us(ts, 100);
+- hrtimer_forward_now(timer, ns_to_ktime(100000));
++ hrtimer_forward_now(timer, us_to_ktime(100));
+ return HRTIMER_RESTART;
+ }
+ if (pin->wait_usecs > 100) {
+ pin->wait_usecs /= 2;
+ pin->timer_ts = ktime_add_us(ts, pin->wait_usecs);
+ hrtimer_forward_now(timer,
+- ns_to_ktime(pin->wait_usecs * 1000));
++ us_to_ktime(pin->wait_usecs));
+ return HRTIMER_RESTART;
+ }
+ pin->timer_ts = ktime_add_us(ts, pin->wait_usecs);
+ hrtimer_forward_now(timer,
+- ns_to_ktime(pin->wait_usecs * 1000));
++ us_to_ktime(pin->wait_usecs));
+ pin->wait_usecs = 0;
+ return HRTIMER_RESTART;
+ }
+@@ -1020,13 +1020,12 @@ static enum hrtimer_restart cec_pin_timer(struct hrtimer *timer)
+ if (!adap->monitor_pin_cnt || usecs <= 150) {
+ pin->wait_usecs = 0;
+ pin->timer_ts = ktime_add_us(ts, usecs);
+- hrtimer_forward_now(timer,
+- ns_to_ktime(usecs * 1000));
++ hrtimer_forward_now(timer, us_to_ktime(usecs));
+ return HRTIMER_RESTART;
+ }
+ pin->wait_usecs = usecs - 100;
+ pin->timer_ts = ktime_add_us(ts, 100);
+- hrtimer_forward_now(timer, ns_to_ktime(100000));
++ hrtimer_forward_now(timer, us_to_ktime(100));
+ return HRTIMER_RESTART;
+ }
+
+diff --git a/drivers/media/i2c/adv7180.c b/drivers/media/i2c/adv7180.c
+index ff7dfa0278a7a0..6e50b14f888f1d 100644
+--- a/drivers/media/i2c/adv7180.c
++++ b/drivers/media/i2c/adv7180.c
+@@ -195,6 +195,7 @@ struct adv7180_state;
+ #define ADV7180_FLAG_V2 BIT(1)
+ #define ADV7180_FLAG_MIPI_CSI2 BIT(2)
+ #define ADV7180_FLAG_I2P BIT(3)
++#define ADV7180_FLAG_TEST_PATTERN BIT(4)
+
+ struct adv7180_chip_info {
+ unsigned int flags;
+@@ -682,11 +683,15 @@ static int adv7180_init_controls(struct adv7180_state *state)
+ ADV7180_HUE_MAX, 1, ADV7180_HUE_DEF);
+ v4l2_ctrl_new_custom(&state->ctrl_hdl, &adv7180_ctrl_fast_switch, NULL);
+
+- v4l2_ctrl_new_std_menu_items(&state->ctrl_hdl, &adv7180_ctrl_ops,
+- V4L2_CID_TEST_PATTERN,
+- ARRAY_SIZE(test_pattern_menu) - 1,
+- 0, ARRAY_SIZE(test_pattern_menu) - 1,
+- test_pattern_menu);
++ if (state->chip_info->flags & ADV7180_FLAG_TEST_PATTERN) {
++ v4l2_ctrl_new_std_menu_items(&state->ctrl_hdl,
++ &adv7180_ctrl_ops,
++ V4L2_CID_TEST_PATTERN,
++ ARRAY_SIZE(test_pattern_menu) - 1,
++ 0,
++ ARRAY_SIZE(test_pattern_menu) - 1,
++ test_pattern_menu);
++ }
+
+ state->sd.ctrl_handler = &state->ctrl_hdl;
+ if (state->ctrl_hdl.error) {
+@@ -1221,7 +1226,7 @@ static const struct adv7180_chip_info adv7182_info = {
+ };
+
+ static const struct adv7180_chip_info adv7280_info = {
+- .flags = ADV7180_FLAG_V2 | ADV7180_FLAG_I2P,
++ .flags = ADV7180_FLAG_V2 | ADV7180_FLAG_I2P | ADV7180_FLAG_TEST_PATTERN,
+ .valid_input_mask = BIT(ADV7182_INPUT_CVBS_AIN1) |
+ BIT(ADV7182_INPUT_CVBS_AIN2) |
+ BIT(ADV7182_INPUT_CVBS_AIN3) |
+@@ -1235,7 +1240,8 @@ static const struct adv7180_chip_info adv7280_info = {
+ };
+
+ static const struct adv7180_chip_info adv7280_m_info = {
+- .flags = ADV7180_FLAG_V2 | ADV7180_FLAG_MIPI_CSI2 | ADV7180_FLAG_I2P,
++ .flags = ADV7180_FLAG_V2 | ADV7180_FLAG_MIPI_CSI2 | ADV7180_FLAG_I2P |
++ ADV7180_FLAG_TEST_PATTERN,
+ .valid_input_mask = BIT(ADV7182_INPUT_CVBS_AIN1) |
+ BIT(ADV7182_INPUT_CVBS_AIN2) |
+ BIT(ADV7182_INPUT_CVBS_AIN3) |
+@@ -1256,7 +1262,8 @@ static const struct adv7180_chip_info adv7280_m_info = {
+ };
+
+ static const struct adv7180_chip_info adv7281_info = {
+- .flags = ADV7180_FLAG_V2 | ADV7180_FLAG_MIPI_CSI2,
++ .flags = ADV7180_FLAG_V2 | ADV7180_FLAG_MIPI_CSI2 |
++ ADV7180_FLAG_TEST_PATTERN,
+ .valid_input_mask = BIT(ADV7182_INPUT_CVBS_AIN1) |
+ BIT(ADV7182_INPUT_CVBS_AIN2) |
+ BIT(ADV7182_INPUT_CVBS_AIN7) |
+@@ -1271,7 +1278,8 @@ static const struct adv7180_chip_info adv7281_info = {
+ };
+
+ static const struct adv7180_chip_info adv7281_m_info = {
+- .flags = ADV7180_FLAG_V2 | ADV7180_FLAG_MIPI_CSI2,
++ .flags = ADV7180_FLAG_V2 | ADV7180_FLAG_MIPI_CSI2 |
++ ADV7180_FLAG_TEST_PATTERN,
+ .valid_input_mask = BIT(ADV7182_INPUT_CVBS_AIN1) |
+ BIT(ADV7182_INPUT_CVBS_AIN2) |
+ BIT(ADV7182_INPUT_CVBS_AIN3) |
+@@ -1291,7 +1299,8 @@ static const struct adv7180_chip_info adv7281_m_info = {
+ };
+
+ static const struct adv7180_chip_info adv7281_ma_info = {
+- .flags = ADV7180_FLAG_V2 | ADV7180_FLAG_MIPI_CSI2,
++ .flags = ADV7180_FLAG_V2 | ADV7180_FLAG_MIPI_CSI2 |
++ ADV7180_FLAG_TEST_PATTERN,
+ .valid_input_mask = BIT(ADV7182_INPUT_CVBS_AIN1) |
+ BIT(ADV7182_INPUT_CVBS_AIN2) |
+ BIT(ADV7182_INPUT_CVBS_AIN3) |
+@@ -1316,7 +1325,7 @@ static const struct adv7180_chip_info adv7281_ma_info = {
+ };
+
+ static const struct adv7180_chip_info adv7282_info = {
+- .flags = ADV7180_FLAG_V2 | ADV7180_FLAG_I2P,
++ .flags = ADV7180_FLAG_V2 | ADV7180_FLAG_I2P | ADV7180_FLAG_TEST_PATTERN,
+ .valid_input_mask = BIT(ADV7182_INPUT_CVBS_AIN1) |
+ BIT(ADV7182_INPUT_CVBS_AIN2) |
+ BIT(ADV7182_INPUT_CVBS_AIN7) |
+@@ -1331,7 +1340,8 @@ static const struct adv7180_chip_info adv7282_info = {
+ };
+
+ static const struct adv7180_chip_info adv7282_m_info = {
+- .flags = ADV7180_FLAG_V2 | ADV7180_FLAG_MIPI_CSI2 | ADV7180_FLAG_I2P,
++ .flags = ADV7180_FLAG_V2 | ADV7180_FLAG_MIPI_CSI2 | ADV7180_FLAG_I2P |
++ ADV7180_FLAG_TEST_PATTERN,
+ .valid_input_mask = BIT(ADV7182_INPUT_CVBS_AIN1) |
+ BIT(ADV7182_INPUT_CVBS_AIN2) |
+ BIT(ADV7182_INPUT_CVBS_AIN3) |
+diff --git a/drivers/media/i2c/imx219.c b/drivers/media/i2c/imx219.c
+index 64227eb423d431..dd5a8577d7479e 100644
+--- a/drivers/media/i2c/imx219.c
++++ b/drivers/media/i2c/imx219.c
+@@ -73,7 +73,7 @@
+ #define IMX219_REG_VTS CCI_REG16(0x0160)
+ #define IMX219_VTS_MAX 0xffff
+
+-#define IMX219_VBLANK_MIN 4
++#define IMX219_VBLANK_MIN 32
+
+ /* HBLANK control - read only */
+ #define IMX219_PPL_DEFAULT 3448
+diff --git a/drivers/media/i2c/imx335.c b/drivers/media/i2c/imx335.c
+index fcfd1d851bd4aa..0beb80b8c45815 100644
+--- a/drivers/media/i2c/imx335.c
++++ b/drivers/media/i2c/imx335.c
+@@ -559,12 +559,14 @@ static int imx335_set_ctrl(struct v4l2_ctrl *ctrl)
+ imx335->vblank,
+ imx335->vblank + imx335->cur_mode->height);
+
+- return __v4l2_ctrl_modify_range(imx335->exp_ctrl,
+- IMX335_EXPOSURE_MIN,
+- imx335->vblank +
+- imx335->cur_mode->height -
+- IMX335_EXPOSURE_OFFSET,
+- 1, IMX335_EXPOSURE_DEFAULT);
++ ret = __v4l2_ctrl_modify_range(imx335->exp_ctrl,
++ IMX335_EXPOSURE_MIN,
++ imx335->vblank +
++ imx335->cur_mode->height -
++ IMX335_EXPOSURE_OFFSET,
++ 1, IMX335_EXPOSURE_DEFAULT);
++ if (ret)
++ return ret;
+ }
+
+ /*
+@@ -575,6 +577,13 @@ static int imx335_set_ctrl(struct v4l2_ctrl *ctrl)
+ return 0;
+
+ switch (ctrl->id) {
++ case V4L2_CID_VBLANK:
++ exposure = imx335->exp_ctrl->val;
++ analog_gain = imx335->again_ctrl->val;
++
++ ret = imx335_update_exp_gain(imx335, exposure, analog_gain);
++
++ break;
+ case V4L2_CID_EXPOSURE:
+ exposure = ctrl->val;
+ analog_gain = imx335->again_ctrl->val;
+diff --git a/drivers/media/i2c/ov2740.c b/drivers/media/i2c/ov2740.c
+index 9a5d118b87b018..04e93618f408a7 100644
+--- a/drivers/media/i2c/ov2740.c
++++ b/drivers/media/i2c/ov2740.c
+@@ -828,8 +828,10 @@ static int ov2740_init_controls(struct ov2740 *ov2740)
+ 0, 0, ov2740_test_pattern_menu);
+
+ ret = v4l2_fwnode_device_parse(&client->dev, &props);
+- if (ret)
++ if (ret) {
++ v4l2_ctrl_handler_free(ctrl_hdlr);
+ return ret;
++ }
+
+ v4l2_ctrl_new_fwnode_properties(ctrl_hdlr, &ov2740_ctrl_ops, &props);
+
+diff --git a/drivers/media/i2c/tc358746.c b/drivers/media/i2c/tc358746.c
+index 389582420ba782..048a1a381b3331 100644
+--- a/drivers/media/i2c/tc358746.c
++++ b/drivers/media/i2c/tc358746.c
+@@ -460,24 +460,20 @@ static int tc358746_apply_misc_config(struct tc358746 *tc358746)
+ return err;
+ }
+
+-/* Use MHz as base so the div needs no u64 */
+-static u32 tc358746_cfg_to_cnt(unsigned int cfg_val,
+- unsigned int clk_mhz,
+- unsigned int time_base)
++static u32 tc358746_cfg_to_cnt(unsigned long cfg_val, unsigned long clk_hz,
++ unsigned long long time_base)
+ {
+- return DIV_ROUND_UP(cfg_val * clk_mhz, time_base);
++ return div64_u64((u64)cfg_val * clk_hz + time_base - 1, time_base);
+ }
+
+-static u32 tc358746_ps_to_cnt(unsigned int cfg_val,
+- unsigned int clk_mhz)
++static u32 tc358746_ps_to_cnt(unsigned long cfg_val, unsigned long clk_hz)
+ {
+- return tc358746_cfg_to_cnt(cfg_val, clk_mhz, USEC_PER_SEC);
++ return tc358746_cfg_to_cnt(cfg_val, clk_hz, PSEC_PER_SEC);
+ }
+
+-static u32 tc358746_us_to_cnt(unsigned int cfg_val,
+- unsigned int clk_mhz)
++static u32 tc358746_us_to_cnt(unsigned long cfg_val, unsigned long clk_hz)
+ {
+- return tc358746_cfg_to_cnt(cfg_val, clk_mhz, 1);
++ return tc358746_cfg_to_cnt(cfg_val, clk_hz, USEC_PER_SEC);
+ }
+
+ static int tc358746_apply_dphy_config(struct tc358746 *tc358746)
+@@ -492,7 +488,6 @@ static int tc358746_apply_dphy_config(struct tc358746 *tc358746)
+
+ /* The hs_byte_clk is also called SYSCLK in the excel sheet */
+ hs_byte_clk = cfg->hs_clk_rate / 8;
+- hs_byte_clk /= HZ_PER_MHZ;
+ hf_clk = hs_byte_clk / 2;
+
+ val = tc358746_us_to_cnt(cfg->init, hf_clk) - 1;
+diff --git a/drivers/media/platform/qcom/camss/camss-csid.c b/drivers/media/platform/qcom/camss/camss-csid.c
+index 858db5d4ca75c3..e51f2ed3f0315a 100644
+--- a/drivers/media/platform/qcom/camss/camss-csid.c
++++ b/drivers/media/platform/qcom/camss/camss-csid.c
+@@ -683,11 +683,13 @@ static int csid_set_stream(struct v4l2_subdev *sd, int enable)
+ int ret;
+
+ if (enable) {
+- ret = v4l2_ctrl_handler_setup(&csid->ctrls);
+- if (ret < 0) {
+- dev_err(csid->camss->dev,
+- "could not sync v4l2 controls: %d\n", ret);
+- return ret;
++ if (csid->testgen.nmodes != CSID_PAYLOAD_MODE_DISABLED) {
++ ret = v4l2_ctrl_handler_setup(&csid->ctrls);
++ if (ret < 0) {
++ dev_err(csid->camss->dev,
++ "could not sync v4l2 controls: %d\n", ret);
++ return ret;
++ }
+ }
+
+ if (!csid->testgen.enabled &&
+@@ -761,7 +763,8 @@ static void csid_try_format(struct csid_device *csid,
+ break;
+
+ case MSM_CSID_PAD_SRC:
+- if (csid->testgen_mode->cur.val == 0) {
++ if (csid->testgen.nmodes == CSID_PAYLOAD_MODE_DISABLED ||
++ csid->testgen_mode->cur.val == 0) {
+ /* Test generator is disabled, */
+ /* keep pad formats in sync */
+ u32 code = fmt->code;
+@@ -811,7 +814,8 @@ static int csid_enum_mbus_code(struct v4l2_subdev *sd,
+
+ code->code = csid->res->formats->formats[code->index].code;
+ } else {
+- if (csid->testgen_mode->cur.val == 0) {
++ if (csid->testgen.nmodes == CSID_PAYLOAD_MODE_DISABLED ||
++ csid->testgen_mode->cur.val == 0) {
+ struct v4l2_mbus_framefmt *sink_fmt;
+
+ sink_fmt = __csid_get_format(csid, sd_state,
+@@ -1190,7 +1194,8 @@ static int csid_link_setup(struct media_entity *entity,
+
+ /* If test generator is enabled */
+ /* do not allow a link from CSIPHY to CSID */
+- if (csid->testgen_mode->cur.val != 0)
++ if (csid->testgen.nmodes != CSID_PAYLOAD_MODE_DISABLED &&
++ csid->testgen_mode->cur.val != 0)
+ return -EBUSY;
+
+ sd = media_entity_to_v4l2_subdev(remote->entity);
+@@ -1283,24 +1288,27 @@ int msm_csid_register_entity(struct csid_device *csid,
+ MSM_CSID_NAME, csid->id);
+ v4l2_set_subdevdata(sd, csid);
+
+- ret = v4l2_ctrl_handler_init(&csid->ctrls, 1);
+- if (ret < 0) {
+- dev_err(dev, "Failed to init ctrl handler: %d\n", ret);
+- return ret;
+- }
++ if (csid->testgen.nmodes != CSID_PAYLOAD_MODE_DISABLED) {
++ ret = v4l2_ctrl_handler_init(&csid->ctrls, 1);
++ if (ret < 0) {
++ dev_err(dev, "Failed to init ctrl handler: %d\n", ret);
++ return ret;
++ }
+
+- csid->testgen_mode = v4l2_ctrl_new_std_menu_items(&csid->ctrls,
+- &csid_ctrl_ops, V4L2_CID_TEST_PATTERN,
+- csid->testgen.nmodes, 0, 0,
+- csid->testgen.modes);
++ csid->testgen_mode =
++ v4l2_ctrl_new_std_menu_items(&csid->ctrls,
++ &csid_ctrl_ops, V4L2_CID_TEST_PATTERN,
++ csid->testgen.nmodes, 0, 0,
++ csid->testgen.modes);
+
+- if (csid->ctrls.error) {
+- dev_err(dev, "Failed to init ctrl: %d\n", csid->ctrls.error);
+- ret = csid->ctrls.error;
+- goto free_ctrl;
+- }
++ if (csid->ctrls.error) {
++ dev_err(dev, "Failed to init ctrl: %d\n", csid->ctrls.error);
++ ret = csid->ctrls.error;
++ goto free_ctrl;
++ }
+
+- csid->subdev.ctrl_handler = &csid->ctrls;
++ csid->subdev.ctrl_handler = &csid->ctrls;
++ }
+
+ ret = csid_init_formats(sd, NULL);
+ if (ret < 0) {
+@@ -1331,7 +1339,8 @@ int msm_csid_register_entity(struct csid_device *csid,
+ media_cleanup:
+ media_entity_cleanup(&sd->entity);
+ free_ctrl:
+- v4l2_ctrl_handler_free(&csid->ctrls);
++ if (csid->testgen.nmodes != CSID_PAYLOAD_MODE_DISABLED)
++ v4l2_ctrl_handler_free(&csid->ctrls);
+
+ return ret;
+ }
+@@ -1344,7 +1353,8 @@ void msm_csid_unregister_entity(struct csid_device *csid)
+ {
+ v4l2_device_unregister_subdev(&csid->subdev);
+ media_entity_cleanup(&csid->subdev.entity);
+- v4l2_ctrl_handler_free(&csid->ctrls);
++ if (csid->testgen.nmodes != CSID_PAYLOAD_MODE_DISABLED)
++ v4l2_ctrl_handler_free(&csid->ctrls);
+ }
+
+ inline bool csid_is_lite(struct csid_device *csid)
+diff --git a/drivers/media/platform/qcom/camss/camss-vfe.c b/drivers/media/platform/qcom/camss/camss-vfe.c
+index 95f6a1ac7eaf53..3c5811c6c028e0 100644
+--- a/drivers/media/platform/qcom/camss/camss-vfe.c
++++ b/drivers/media/platform/qcom/camss/camss-vfe.c
+@@ -400,6 +400,10 @@ static u32 vfe_src_pad_code(struct vfe_line *line, u32 sink_code,
+ return sink_code;
+ }
+ break;
++ default:
++ WARN(1, "Unsupported HW version: %x\n",
++ vfe->camss->res->version);
++ break;
+ }
+ return 0;
+ }
+diff --git a/drivers/media/platform/st/sti/c8sectpfe/c8sectpfe-core.c b/drivers/media/platform/st/sti/c8sectpfe/c8sectpfe-core.c
+index 7b3a37957e3ae8..d151d2ed1f64bc 100644
+--- a/drivers/media/platform/st/sti/c8sectpfe/c8sectpfe-core.c
++++ b/drivers/media/platform/st/sti/c8sectpfe/c8sectpfe-core.c
+@@ -797,13 +797,12 @@ static int c8sectpfe_probe(struct platform_device *pdev)
+ }
+ tsin->i2c_adapter =
+ of_find_i2c_adapter_by_node(i2c_bus);
++ of_node_put(i2c_bus);
+ if (!tsin->i2c_adapter) {
+ dev_err(&pdev->dev, "No i2c adapter found\n");
+- of_node_put(i2c_bus);
+ ret = -ENODEV;
+ goto err_node_put;
+ }
+- of_node_put(i2c_bus);
+
+ /* Acquire reset GPIO and activate it */
+ tsin->rst_gpio = devm_fwnode_gpiod_get(dev,
+diff --git a/drivers/media/platform/st/stm32/stm32-csi.c b/drivers/media/platform/st/stm32/stm32-csi.c
+index 48941aae8c9b8d..0c776e4a7ce83d 100644
+--- a/drivers/media/platform/st/stm32/stm32-csi.c
++++ b/drivers/media/platform/st/stm32/stm32-csi.c
+@@ -325,7 +325,6 @@ static const struct stm32_csi_mbps_phy_reg snps_stm32mp25[] = {
+ { .mbps = 2400, .hsfreqrange = 0x47, .osc_freq_target = 442 },
+ { .mbps = 2450, .hsfreqrange = 0x48, .osc_freq_target = 451 },
+ { .mbps = 2500, .hsfreqrange = 0x49, .osc_freq_target = 460 },
+- { /* sentinel */ }
+ };
+
+ static const struct v4l2_mbus_framefmt fmt_default = {
+@@ -444,13 +443,13 @@ static void stm32_csi_phy_reg_write(struct stm32_csi_dev *csidev,
+ static int stm32_csi_start(struct stm32_csi_dev *csidev,
+ struct v4l2_subdev_state *state)
+ {
+- const struct stm32_csi_mbps_phy_reg *phy_regs;
++ const struct stm32_csi_mbps_phy_reg *phy_regs = NULL;
+ struct v4l2_mbus_framefmt *sink_fmt;
+ const struct stm32_csi_fmts *fmt;
+ unsigned long phy_clk_frate;
++ u32 lanes_ie, lanes_en;
+ unsigned int mbps;
+- u32 lanes_ie = 0;
+- u32 lanes_en = 0;
++ unsigned int i;
+ s64 link_freq;
+ int ret;
+ u32 ccfr;
+@@ -474,11 +473,14 @@ static int stm32_csi_start(struct stm32_csi_dev *csidev,
+ mbps = div_s64(link_freq, 500000);
+ dev_dbg(csidev->dev, "Computed Mbps: %u\n", mbps);
+
+- for (phy_regs = snps_stm32mp25; phy_regs->mbps != 0; phy_regs++)
+- if (phy_regs->mbps >= mbps)
++ for (i = 0; i < ARRAY_SIZE(snps_stm32mp25); i++) {
++ if (snps_stm32mp25[i].mbps >= mbps) {
++ phy_regs = &snps_stm32mp25[i];
+ break;
++ }
++ }
+
+- if (!phy_regs->mbps) {
++ if (!phy_regs) {
+ dev_err(csidev->dev, "Unsupported PHY speed (%u Mbps)", mbps);
+ return -ERANGE;
+ }
+@@ -488,8 +490,8 @@ static int stm32_csi_start(struct stm32_csi_dev *csidev,
+ phy_regs->osc_freq_target);
+
+ /* Prepare lanes related configuration bits */
+- lanes_ie |= STM32_CSI_SR1_DL0_ERRORS;
+- lanes_en |= STM32_CSI_PCR_DL0EN;
++ lanes_ie = STM32_CSI_SR1_DL0_ERRORS;
++ lanes_en = STM32_CSI_PCR_DL0EN;
+ if (csidev->num_lanes == 2) {
+ lanes_ie |= STM32_CSI_SR1_DL1_ERRORS;
+ lanes_en |= STM32_CSI_PCR_DL1EN;
+@@ -497,21 +499,19 @@ static int stm32_csi_start(struct stm32_csi_dev *csidev,
+
+ ret = pm_runtime_get_sync(csidev->dev);
+ if (ret < 0)
+- return ret;
++ goto error_put;
+
+ /* Retrieve CSI2PHY clock rate to compute CCFR value */
+ phy_clk_frate = clk_get_rate(csidev->clks[STM32_CSI_CLK_CSI2PHY].clk);
+ if (!phy_clk_frate) {
+- pm_runtime_put(csidev->dev);
+ dev_err(csidev->dev, "CSI2PHY clock rate invalid (0)\n");
+- return ret;
++ ret = -EINVAL;
++ goto error_put;
+ }
+
+ ret = stm32_csi_setup_lane_merger(csidev);
+- if (ret) {
+- pm_runtime_put(csidev->dev);
+- return ret;
+- }
++ if (ret)
++ goto error_put;
+
+ /* Enable the CSI */
+ writel_relaxed(STM32_CSI_CR_CSIEN, csidev->base + STM32_CSI_CR);
+@@ -567,6 +567,10 @@ static int stm32_csi_start(struct stm32_csi_dev *csidev,
+ writel_relaxed(0, csidev->base + STM32_CSI_PMCR);
+
+ return ret;
++
++error_put:
++ pm_runtime_put(csidev->dev);
++ return ret;
+ }
+
+ static void stm32_csi_stop(struct stm32_csi_dev *csidev)
+diff --git a/drivers/media/test-drivers/vivid/vivid-kthread-cap.c b/drivers/media/test-drivers/vivid/vivid-kthread-cap.c
+index 669bd96da4c795..273e8ed8c2a908 100644
+--- a/drivers/media/test-drivers/vivid/vivid-kthread-cap.c
++++ b/drivers/media/test-drivers/vivid/vivid-kthread-cap.c
+@@ -789,9 +789,14 @@ static int vivid_thread_vid_cap(void *data)
+ next_jiffies_since_start = jiffies_since_start;
+
+ wait_jiffies = next_jiffies_since_start - jiffies_since_start;
+- while (time_is_after_jiffies(cur_jiffies + wait_jiffies) &&
+- !kthread_should_stop())
+- schedule();
++ if (!time_is_after_jiffies(cur_jiffies + wait_jiffies))
++ continue;
++
++ wait_queue_head_t wait;
++
++ init_waitqueue_head(&wait);
++ wait_event_interruptible_timeout(wait, kthread_should_stop(),
++ cur_jiffies + wait_jiffies - jiffies);
+ }
+ dprintk(dev, 1, "Video Capture Thread End\n");
+ return 0;
+diff --git a/drivers/media/test-drivers/vivid/vivid-kthread-out.c b/drivers/media/test-drivers/vivid/vivid-kthread-out.c
+index fac6208b51da84..015a7b166a1e61 100644
+--- a/drivers/media/test-drivers/vivid/vivid-kthread-out.c
++++ b/drivers/media/test-drivers/vivid/vivid-kthread-out.c
+@@ -235,9 +235,14 @@ static int vivid_thread_vid_out(void *data)
+ next_jiffies_since_start = jiffies_since_start;
+
+ wait_jiffies = next_jiffies_since_start - jiffies_since_start;
+- while (time_is_after_jiffies(cur_jiffies + wait_jiffies) &&
+- !kthread_should_stop())
+- schedule();
++ if (!time_is_after_jiffies(cur_jiffies + wait_jiffies))
++ continue;
++
++ wait_queue_head_t wait;
++
++ init_waitqueue_head(&wait);
++ wait_event_interruptible_timeout(wait, kthread_should_stop(),
++ cur_jiffies + wait_jiffies - jiffies);
+ }
+ dprintk(dev, 1, "Video Output Thread End\n");
+ return 0;
+diff --git a/drivers/media/test-drivers/vivid/vivid-kthread-touch.c b/drivers/media/test-drivers/vivid/vivid-kthread-touch.c
+index fa711ee36a3fbc..c862689786b69c 100644
+--- a/drivers/media/test-drivers/vivid/vivid-kthread-touch.c
++++ b/drivers/media/test-drivers/vivid/vivid-kthread-touch.c
+@@ -135,9 +135,14 @@ static int vivid_thread_touch_cap(void *data)
+ next_jiffies_since_start = jiffies_since_start;
+
+ wait_jiffies = next_jiffies_since_start - jiffies_since_start;
+- while (time_is_after_jiffies(cur_jiffies + wait_jiffies) &&
+- !kthread_should_stop())
+- schedule();
++ if (!time_is_after_jiffies(cur_jiffies + wait_jiffies))
++ continue;
++
++ wait_queue_head_t wait;
++
++ init_waitqueue_head(&wait);
++ wait_event_interruptible_timeout(wait, kthread_should_stop(),
++ cur_jiffies + wait_jiffies - jiffies);
+ }
+ dprintk(dev, 1, "Touch Capture Thread End\n");
+ return 0;
+diff --git a/drivers/media/test-drivers/vivid/vivid-sdr-cap.c b/drivers/media/test-drivers/vivid/vivid-sdr-cap.c
+index 74a91d28c8be93..c633fc2ed664f5 100644
+--- a/drivers/media/test-drivers/vivid/vivid-sdr-cap.c
++++ b/drivers/media/test-drivers/vivid/vivid-sdr-cap.c
+@@ -206,9 +206,14 @@ static int vivid_thread_sdr_cap(void *data)
+ next_jiffies_since_start = jiffies_since_start;
+
+ wait_jiffies = next_jiffies_since_start - jiffies_since_start;
+- while (time_is_after_jiffies(cur_jiffies + wait_jiffies) &&
+- !kthread_should_stop())
+- schedule();
++ if (!time_is_after_jiffies(cur_jiffies + wait_jiffies))
++ continue;
++
++ wait_queue_head_t wait;
++
++ init_waitqueue_head(&wait);
++ wait_event_interruptible_timeout(wait, kthread_should_stop(),
++ cur_jiffies + wait_jiffies - jiffies);
+ }
+ dprintk(dev, 1, "SDR Capture Thread End\n");
+ return 0;
+diff --git a/drivers/media/usb/cx231xx/cx231xx-417.c b/drivers/media/usb/cx231xx/cx231xx-417.c
+index a4a9781328c50a..06d61e52f018c5 100644
+--- a/drivers/media/usb/cx231xx/cx231xx-417.c
++++ b/drivers/media/usb/cx231xx/cx231xx-417.c
+@@ -1720,6 +1720,8 @@ static void cx231xx_video_dev_init(
+ vfd->lock = &dev->lock;
+ vfd->release = video_device_release_empty;
+ vfd->ctrl_handler = &dev->mpeg_ctrl_handler.hdl;
++ vfd->device_caps = V4L2_CAP_READWRITE | V4L2_CAP_STREAMING |
++ V4L2_CAP_VIDEO_CAPTURE;
+ video_set_drvdata(vfd, dev);
+ if (dev->tuner_type == TUNER_ABSENT) {
+ v4l2_disable_ioctl(vfd, VIDIOC_G_FREQUENCY);
+diff --git a/drivers/media/usb/uvc/uvc_ctrl.c b/drivers/media/usb/uvc/uvc_ctrl.c
+index 4e58476d305efd..4a55cf78ec5261 100644
+--- a/drivers/media/usb/uvc/uvc_ctrl.c
++++ b/drivers/media/usb/uvc/uvc_ctrl.c
+@@ -862,6 +862,25 @@ static inline void uvc_clear_bit(u8 *data, int bit)
+ data[bit >> 3] &= ~(1 << (bit & 7));
+ }
+
++static s32 uvc_menu_to_v4l2_menu(struct uvc_control_mapping *mapping, s32 val)
++{
++ unsigned int i;
++
++ for (i = 0; BIT(i) <= mapping->menu_mask; ++i) {
++ u32 menu_value;
++
++ if (!test_bit(i, &mapping->menu_mask))
++ continue;
++
++ menu_value = uvc_mapping_get_menu_value(mapping, i);
++
++ if (menu_value == val)
++ return i;
++ }
++
++ return val;
++}
++
+ /*
+ * Extract the bit string specified by mapping->offset and mapping->size
+ * from the little-endian data stored at 'data' and return the result as
+@@ -896,6 +915,16 @@ static s32 uvc_get_le_value(struct uvc_control_mapping *mapping,
+ if (mapping->data_type == UVC_CTRL_DATA_TYPE_SIGNED)
+ value |= -(value & (1 << (mapping->size - 1)));
+
++ /* If it is a menu, convert from uvc to v4l2. */
++ if (mapping->v4l2_type != V4L2_CTRL_TYPE_MENU)
++ return value;
++
++ switch (query) {
++ case UVC_GET_CUR:
++ case UVC_GET_DEF:
++ return uvc_menu_to_v4l2_menu(mapping, value);
++ }
++
+ return value;
+ }
+
+@@ -1060,32 +1089,6 @@ static int uvc_ctrl_populate_cache(struct uvc_video_chain *chain,
+ return 0;
+ }
+
+-static s32 __uvc_ctrl_get_value(struct uvc_control_mapping *mapping,
+- const u8 *data)
+-{
+- s32 value = mapping->get(mapping, UVC_GET_CUR, data);
+-
+- if (mapping->v4l2_type == V4L2_CTRL_TYPE_MENU) {
+- unsigned int i;
+-
+- for (i = 0; BIT(i) <= mapping->menu_mask; ++i) {
+- u32 menu_value;
+-
+- if (!test_bit(i, &mapping->menu_mask))
+- continue;
+-
+- menu_value = uvc_mapping_get_menu_value(mapping, i);
+-
+- if (menu_value == value) {
+- value = i;
+- break;
+- }
+- }
+- }
+-
+- return value;
+-}
+-
+ static int __uvc_ctrl_load_cur(struct uvc_video_chain *chain,
+ struct uvc_control *ctrl)
+ {
+@@ -1136,8 +1139,8 @@ static int __uvc_ctrl_get(struct uvc_video_chain *chain,
+ if (ret < 0)
+ return ret;
+
+- *value = __uvc_ctrl_get_value(mapping,
+- uvc_ctrl_data(ctrl, UVC_CTRL_DATA_CURRENT));
++ *value = mapping->get(mapping, UVC_GET_CUR,
++ uvc_ctrl_data(ctrl, UVC_CTRL_DATA_CURRENT));
+
+ return 0;
+ }
+@@ -1287,7 +1290,6 @@ static int __uvc_query_v4l2_ctrl(struct uvc_video_chain *chain,
+ {
+ struct uvc_control_mapping *master_map = NULL;
+ struct uvc_control *master_ctrl = NULL;
+- unsigned int i;
+
+ memset(v4l2_ctrl, 0, sizeof(*v4l2_ctrl));
+ v4l2_ctrl->id = mapping->id;
+@@ -1330,21 +1332,6 @@ static int __uvc_query_v4l2_ctrl(struct uvc_video_chain *chain,
+ v4l2_ctrl->minimum = ffs(mapping->menu_mask) - 1;
+ v4l2_ctrl->maximum = fls(mapping->menu_mask) - 1;
+ v4l2_ctrl->step = 1;
+-
+- for (i = 0; BIT(i) <= mapping->menu_mask; ++i) {
+- u32 menu_value;
+-
+- if (!test_bit(i, &mapping->menu_mask))
+- continue;
+-
+- menu_value = uvc_mapping_get_menu_value(mapping, i);
+-
+- if (menu_value == v4l2_ctrl->default_value) {
+- v4l2_ctrl->default_value = i;
+- break;
+- }
+- }
+-
+ return 0;
+
+ case V4L2_CTRL_TYPE_BOOLEAN:
+@@ -1630,7 +1617,7 @@ void uvc_ctrl_status_event(struct uvc_video_chain *chain,
+ uvc_ctrl_set_handle(handle, ctrl, NULL);
+
+ list_for_each_entry(mapping, &ctrl->info.mappings, list) {
+- s32 value = __uvc_ctrl_get_value(mapping, data);
++ s32 value = mapping->get(mapping, UVC_GET_CUR, data);
+
+ /*
+ * handle may be NULL here if the device sends auto-update
+diff --git a/drivers/media/usb/uvc/uvc_v4l2.c b/drivers/media/usb/uvc/uvc_v4l2.c
+index 93c6cdb238812d..75216507cbdcf2 100644
+--- a/drivers/media/usb/uvc/uvc_v4l2.c
++++ b/drivers/media/usb/uvc/uvc_v4l2.c
+@@ -108,6 +108,12 @@ static int uvc_ioctl_xu_ctrl_map(struct uvc_video_chain *chain,
+ struct uvc_control_mapping *map;
+ int ret;
+
++ if (xmap->data_type > UVC_CTRL_DATA_TYPE_BITMASK) {
++ uvc_dbg(chain->dev, CONTROL,
++ "Unsupported UVC data type %u\n", xmap->data_type);
++ return -EINVAL;
++ }
++
+ map = kzalloc(sizeof(*map), GFP_KERNEL);
+ if (map == NULL)
+ return -ENOMEM;
+diff --git a/drivers/media/v4l2-core/v4l2-subdev.c b/drivers/media/v4l2-core/v4l2-subdev.c
+index cde1774c9098dd..a3074f469b1503 100644
+--- a/drivers/media/v4l2-core/v4l2-subdev.c
++++ b/drivers/media/v4l2-core/v4l2-subdev.c
+@@ -444,6 +444,8 @@ static int call_enum_dv_timings(struct v4l2_subdev *sd,
+ static int call_get_mbus_config(struct v4l2_subdev *sd, unsigned int pad,
+ struct v4l2_mbus_config *config)
+ {
++ memset(config, 0, sizeof(*config));
++
+ return check_pad(sd, pad) ? :
+ sd->ops->pad->get_mbus_config(sd, pad, config);
+ }
+diff --git a/drivers/message/fusion/mptscsih.c b/drivers/message/fusion/mptscsih.c
+index a9604ba3c80585..f0746db92ca61a 100644
+--- a/drivers/message/fusion/mptscsih.c
++++ b/drivers/message/fusion/mptscsih.c
+@@ -2915,14 +2915,14 @@ mptscsih_do_cmd(MPT_SCSI_HOST *hd, INTERNAL_CMD *io)
+ timeout = 10;
+ break;
+
+- case RESERVE:
++ case RESERVE_6:
+ cmdLen = 6;
+ dir = MPI_SCSIIO_CONTROL_READ;
+ CDB[0] = cmd;
+ timeout = 10;
+ break;
+
+- case RELEASE:
++ case RELEASE_6:
+ cmdLen = 6;
+ dir = MPI_SCSIIO_CONTROL_READ;
+ CDB[0] = cmd;
+diff --git a/drivers/mfd/axp20x.c b/drivers/mfd/axp20x.c
+index cff56deba24f03..e9914e8a29a33c 100644
+--- a/drivers/mfd/axp20x.c
++++ b/drivers/mfd/axp20x.c
+@@ -224,6 +224,7 @@ static const struct regmap_range axp717_writeable_ranges[] = {
+ regmap_reg_range(AXP717_VSYS_V_POWEROFF, AXP717_VSYS_V_POWEROFF),
+ regmap_reg_range(AXP717_IRQ0_EN, AXP717_IRQ4_EN),
+ regmap_reg_range(AXP717_IRQ0_STATE, AXP717_IRQ4_STATE),
++ regmap_reg_range(AXP717_TS_PIN_CFG, AXP717_TS_PIN_CFG),
+ regmap_reg_range(AXP717_ICC_CHG_SET, AXP717_CV_CHG_SET),
+ regmap_reg_range(AXP717_DCDC_OUTPUT_CONTROL, AXP717_CPUSLDO_CONTROL),
+ regmap_reg_range(AXP717_ADC_CH_EN_CONTROL, AXP717_ADC_CH_EN_CONTROL),
+diff --git a/drivers/mfd/syscon.c b/drivers/mfd/syscon.c
+index aa4a9940b569a3..ae71a2710bed8b 100644
+--- a/drivers/mfd/syscon.c
++++ b/drivers/mfd/syscon.c
+@@ -47,6 +47,7 @@ static struct syscon *of_syscon_register(struct device_node *np, bool check_res)
+ struct regmap_config syscon_config = syscon_regmap_config;
+ struct resource res;
+ struct reset_control *reset;
++ resource_size_t res_size;
+
+ WARN_ON(!mutex_is_locked(&syscon_list_lock));
+
+@@ -96,6 +97,12 @@ static struct syscon *of_syscon_register(struct device_node *np, bool check_res)
+ }
+ }
+
++ res_size = resource_size(&res);
++ if (res_size < reg_io_width) {
++ ret = -EFAULT;
++ goto err_regmap;
++ }
++
+ syscon_config.name = kasprintf(GFP_KERNEL, "%pOFn@%pa", np, &res.start);
+ if (!syscon_config.name) {
+ ret = -ENOMEM;
+@@ -103,7 +110,7 @@ static struct syscon *of_syscon_register(struct device_node *np, bool check_res)
+ }
+ syscon_config.reg_stride = reg_io_width;
+ syscon_config.val_bits = reg_io_width * 8;
+- syscon_config.max_register = resource_size(&res) - reg_io_width;
++ syscon_config.max_register = res_size - reg_io_width;
+ if (!syscon_config.max_register)
+ syscon_config.max_register_is_0 = true;
+
+diff --git a/drivers/mfd/tps65219.c b/drivers/mfd/tps65219.c
+index 081c5a30b04a25..4aca922658e34d 100644
+--- a/drivers/mfd/tps65219.c
++++ b/drivers/mfd/tps65219.c
+@@ -221,7 +221,6 @@ static const struct regmap_irq_chip tps65219_irq_chip = {
+ static int tps65219_probe(struct i2c_client *client)
+ {
+ struct tps65219 *tps;
+- unsigned int chipid;
+ bool pwr_button;
+ int ret;
+
+@@ -246,12 +245,6 @@ static int tps65219_probe(struct i2c_client *client)
+ if (ret)
+ return ret;
+
+- ret = regmap_read(tps->regmap, TPS65219_REG_TI_DEV_ID, &chipid);
+- if (ret) {
+- dev_err(tps->dev, "Failed to read device ID: %d\n", ret);
+- return ret;
+- }
+-
+ ret = devm_mfd_add_devices(tps->dev, PLATFORM_DEVID_AUTO,
+ tps65219_cells, ARRAY_SIZE(tps65219_cells),
+ NULL, 0, regmap_irq_get_domain(tps->irq_data));
+diff --git a/drivers/misc/eeprom/ee1004.c b/drivers/misc/eeprom/ee1004.c
+index 89224d4af4a201..e13f9fdd9d7b1c 100644
+--- a/drivers/misc/eeprom/ee1004.c
++++ b/drivers/misc/eeprom/ee1004.c
+@@ -304,6 +304,10 @@ static int ee1004_probe(struct i2c_client *client)
+ I2C_FUNC_SMBUS_BYTE | I2C_FUNC_SMBUS_READ_BYTE_DATA))
+ return -EPFNOSUPPORT;
+
++ err = i2c_smbus_read_byte(client);
++ if (err < 0)
++ return -ENODEV;
++
+ mutex_lock(&ee1004_bus_lock);
+
+ err = ee1004_init_bus_data(client);
+diff --git a/drivers/misc/mei/vsc-tp.c b/drivers/misc/mei/vsc-tp.c
+index fa553d4914b6ed..da26a080916c54 100644
+--- a/drivers/misc/mei/vsc-tp.c
++++ b/drivers/misc/mei/vsc-tp.c
+@@ -71,8 +71,8 @@ struct vsc_tp {
+ u32 seq;
+
+ /* command buffer */
+- void *tx_buf;
+- void *rx_buf;
++ struct vsc_tp_packet *tx_buf;
++ struct vsc_tp_packet *rx_buf;
+
+ atomic_t assert_cnt;
+ wait_queue_head_t xfer_wait;
+@@ -164,7 +164,7 @@ static int vsc_tp_xfer_helper(struct vsc_tp *tp, struct vsc_tp_packet *pkt,
+ {
+ int ret, offset = 0, cpy_len, src_len, dst_len = sizeof(struct vsc_tp_packet_hdr);
+ int next_xfer_len = VSC_TP_PACKET_SIZE(pkt) + VSC_TP_XFER_TIMEOUT_BYTES;
+- u8 *src, *crc_src, *rx_buf = tp->rx_buf;
++ u8 *src, *crc_src, *rx_buf = (u8 *)tp->rx_buf;
+ int count_down = VSC_TP_MAX_XFER_COUNT;
+ u32 recv_crc = 0, crc = ~0;
+ struct vsc_tp_packet_hdr ack;
+@@ -324,7 +324,7 @@ int vsc_tp_rom_xfer(struct vsc_tp *tp, const void *obuf, void *ibuf, size_t len)
+ guard(mutex)(&tp->mutex);
+
+ /* rom xfer is big endian */
+- cpu_to_be32_array(tp->tx_buf, obuf, words);
++ cpu_to_be32_array((u32 *)tp->tx_buf, obuf, words);
+
+ ret = read_poll_timeout(gpiod_get_value_cansleep, ret,
+ !ret, VSC_TP_ROM_XFER_POLL_DELAY_US,
+@@ -340,7 +340,7 @@ int vsc_tp_rom_xfer(struct vsc_tp *tp, const void *obuf, void *ibuf, size_t len)
+ return ret;
+
+ if (ibuf)
+- be32_to_cpu_array(ibuf, tp->rx_buf, words);
++ be32_to_cpu_array(ibuf, (u32 *)tp->rx_buf, words);
+
+ return ret;
+ }
+@@ -494,11 +494,11 @@ static int vsc_tp_probe(struct spi_device *spi)
+ if (!tp)
+ return -ENOMEM;
+
+- tp->tx_buf = devm_kzalloc(dev, VSC_TP_MAX_XFER_SIZE, GFP_KERNEL);
++ tp->tx_buf = devm_kzalloc(dev, sizeof(*tp->tx_buf), GFP_KERNEL);
+ if (!tp->tx_buf)
+ return -ENOMEM;
+
+- tp->rx_buf = devm_kzalloc(dev, VSC_TP_MAX_XFER_SIZE, GFP_KERNEL);
++ tp->rx_buf = devm_kzalloc(dev, sizeof(*tp->rx_buf), GFP_KERNEL);
+ if (!tp->rx_buf)
+ return -ENOMEM;
+
+diff --git a/drivers/misc/pci_endpoint_test.c b/drivers/misc/pci_endpoint_test.c
+index 4c0f37ad0281b1..8a7e860c068126 100644
+--- a/drivers/misc/pci_endpoint_test.c
++++ b/drivers/misc/pci_endpoint_test.c
+@@ -295,11 +295,13 @@ static int pci_endpoint_test_bar(struct pci_endpoint_test *test,
+ struct pci_dev *pdev = test->pdev;
+ int buf_size;
+
++ bar_size = pci_resource_len(pdev, barno);
++ if (!bar_size)
++ return -ENODATA;
++
+ if (!test->bar[barno])
+ return -ENOMEM;
+
+- bar_size = pci_resource_len(pdev, barno);
+-
+ if (barno == test->test_reg_bar)
+ bar_size = 0x4;
+
+diff --git a/drivers/mmc/host/dw_mmc-exynos.c b/drivers/mmc/host/dw_mmc-exynos.c
+index 53d32d0f2709e0..e3548408ca392c 100644
+--- a/drivers/mmc/host/dw_mmc-exynos.c
++++ b/drivers/mmc/host/dw_mmc-exynos.c
+@@ -27,6 +27,8 @@ enum dw_mci_exynos_type {
+ DW_MCI_TYPE_EXYNOS5420_SMU,
+ DW_MCI_TYPE_EXYNOS7,
+ DW_MCI_TYPE_EXYNOS7_SMU,
++ DW_MCI_TYPE_EXYNOS7870,
++ DW_MCI_TYPE_EXYNOS7870_SMU,
+ DW_MCI_TYPE_ARTPEC8,
+ };
+
+@@ -69,6 +71,12 @@ static struct dw_mci_exynos_compatible {
+ }, {
+ .compatible = "samsung,exynos7-dw-mshc-smu",
+ .ctrl_type = DW_MCI_TYPE_EXYNOS7_SMU,
++ }, {
++ .compatible = "samsung,exynos7870-dw-mshc",
++ .ctrl_type = DW_MCI_TYPE_EXYNOS7870,
++ }, {
++ .compatible = "samsung,exynos7870-dw-mshc-smu",
++ .ctrl_type = DW_MCI_TYPE_EXYNOS7870_SMU,
+ }, {
+ .compatible = "axis,artpec8-dw-mshc",
+ .ctrl_type = DW_MCI_TYPE_ARTPEC8,
+@@ -85,6 +93,8 @@ static inline u8 dw_mci_exynos_get_ciu_div(struct dw_mci *host)
+ return EXYNOS4210_FIXED_CIU_CLK_DIV;
+ else if (priv->ctrl_type == DW_MCI_TYPE_EXYNOS7 ||
+ priv->ctrl_type == DW_MCI_TYPE_EXYNOS7_SMU ||
++ priv->ctrl_type == DW_MCI_TYPE_EXYNOS7870 ||
++ priv->ctrl_type == DW_MCI_TYPE_EXYNOS7870_SMU ||
+ priv->ctrl_type == DW_MCI_TYPE_ARTPEC8)
+ return SDMMC_CLKSEL_GET_DIV(mci_readl(host, CLKSEL64)) + 1;
+ else
+@@ -100,7 +110,8 @@ static void dw_mci_exynos_config_smu(struct dw_mci *host)
+ * set for non-ecryption mode at this time.
+ */
+ if (priv->ctrl_type == DW_MCI_TYPE_EXYNOS5420_SMU ||
+- priv->ctrl_type == DW_MCI_TYPE_EXYNOS7_SMU) {
++ priv->ctrl_type == DW_MCI_TYPE_EXYNOS7_SMU ||
++ priv->ctrl_type == DW_MCI_TYPE_EXYNOS7870_SMU) {
+ mci_writel(host, MPSBEGIN0, 0);
+ mci_writel(host, MPSEND0, SDMMC_ENDING_SEC_NR_MAX);
+ mci_writel(host, MPSCTRL0, SDMMC_MPSCTRL_SECURE_WRITE_BIT |
+@@ -126,6 +137,12 @@ static int dw_mci_exynos_priv_init(struct dw_mci *host)
+ DQS_CTRL_GET_RD_DELAY(priv->saved_strobe_ctrl);
+ }
+
++ if (priv->ctrl_type == DW_MCI_TYPE_EXYNOS7870 ||
++ priv->ctrl_type == DW_MCI_TYPE_EXYNOS7870_SMU) {
++ /* Quirk needed for certain Exynos SoCs */
++ host->quirks |= DW_MMC_QUIRK_FIFO64_32;
++ }
++
+ if (priv->ctrl_type == DW_MCI_TYPE_ARTPEC8) {
+ /* Quirk needed for the ARTPEC-8 SoC */
+ host->quirks |= DW_MMC_QUIRK_EXTENDED_TMOUT;
+@@ -143,6 +160,8 @@ static void dw_mci_exynos_set_clksel_timing(struct dw_mci *host, u32 timing)
+
+ if (priv->ctrl_type == DW_MCI_TYPE_EXYNOS7 ||
+ priv->ctrl_type == DW_MCI_TYPE_EXYNOS7_SMU ||
++ priv->ctrl_type == DW_MCI_TYPE_EXYNOS7870 ||
++ priv->ctrl_type == DW_MCI_TYPE_EXYNOS7870_SMU ||
+ priv->ctrl_type == DW_MCI_TYPE_ARTPEC8)
+ clksel = mci_readl(host, CLKSEL64);
+ else
+@@ -152,6 +171,8 @@ static void dw_mci_exynos_set_clksel_timing(struct dw_mci *host, u32 timing)
+
+ if (priv->ctrl_type == DW_MCI_TYPE_EXYNOS7 ||
+ priv->ctrl_type == DW_MCI_TYPE_EXYNOS7_SMU ||
++ priv->ctrl_type == DW_MCI_TYPE_EXYNOS7870 ||
++ priv->ctrl_type == DW_MCI_TYPE_EXYNOS7870_SMU ||
+ priv->ctrl_type == DW_MCI_TYPE_ARTPEC8)
+ mci_writel(host, CLKSEL64, clksel);
+ else
+@@ -222,6 +243,8 @@ static int dw_mci_exynos_resume_noirq(struct device *dev)
+
+ if (priv->ctrl_type == DW_MCI_TYPE_EXYNOS7 ||
+ priv->ctrl_type == DW_MCI_TYPE_EXYNOS7_SMU ||
++ priv->ctrl_type == DW_MCI_TYPE_EXYNOS7870 ||
++ priv->ctrl_type == DW_MCI_TYPE_EXYNOS7870_SMU ||
+ priv->ctrl_type == DW_MCI_TYPE_ARTPEC8)
+ clksel = mci_readl(host, CLKSEL64);
+ else
+@@ -230,6 +253,8 @@ static int dw_mci_exynos_resume_noirq(struct device *dev)
+ if (clksel & SDMMC_CLKSEL_WAKEUP_INT) {
+ if (priv->ctrl_type == DW_MCI_TYPE_EXYNOS7 ||
+ priv->ctrl_type == DW_MCI_TYPE_EXYNOS7_SMU ||
++ priv->ctrl_type == DW_MCI_TYPE_EXYNOS7870 ||
++ priv->ctrl_type == DW_MCI_TYPE_EXYNOS7870_SMU ||
+ priv->ctrl_type == DW_MCI_TYPE_ARTPEC8)
+ mci_writel(host, CLKSEL64, clksel);
+ else
+@@ -409,6 +434,8 @@ static inline u8 dw_mci_exynos_get_clksmpl(struct dw_mci *host)
+
+ if (priv->ctrl_type == DW_MCI_TYPE_EXYNOS7 ||
+ priv->ctrl_type == DW_MCI_TYPE_EXYNOS7_SMU ||
++ priv->ctrl_type == DW_MCI_TYPE_EXYNOS7870 ||
++ priv->ctrl_type == DW_MCI_TYPE_EXYNOS7870_SMU ||
+ priv->ctrl_type == DW_MCI_TYPE_ARTPEC8)
+ return SDMMC_CLKSEL_CCLK_SAMPLE(mci_readl(host, CLKSEL64));
+ else
+@@ -422,6 +449,8 @@ static inline void dw_mci_exynos_set_clksmpl(struct dw_mci *host, u8 sample)
+
+ if (priv->ctrl_type == DW_MCI_TYPE_EXYNOS7 ||
+ priv->ctrl_type == DW_MCI_TYPE_EXYNOS7_SMU ||
++ priv->ctrl_type == DW_MCI_TYPE_EXYNOS7870 ||
++ priv->ctrl_type == DW_MCI_TYPE_EXYNOS7870_SMU ||
+ priv->ctrl_type == DW_MCI_TYPE_ARTPEC8)
+ clksel = mci_readl(host, CLKSEL64);
+ else
+@@ -429,6 +458,8 @@ static inline void dw_mci_exynos_set_clksmpl(struct dw_mci *host, u8 sample)
+ clksel = SDMMC_CLKSEL_UP_SAMPLE(clksel, sample);
+ if (priv->ctrl_type == DW_MCI_TYPE_EXYNOS7 ||
+ priv->ctrl_type == DW_MCI_TYPE_EXYNOS7_SMU ||
++ priv->ctrl_type == DW_MCI_TYPE_EXYNOS7870 ||
++ priv->ctrl_type == DW_MCI_TYPE_EXYNOS7870_SMU ||
+ priv->ctrl_type == DW_MCI_TYPE_ARTPEC8)
+ mci_writel(host, CLKSEL64, clksel);
+ else
+@@ -443,6 +474,8 @@ static inline u8 dw_mci_exynos_move_next_clksmpl(struct dw_mci *host)
+
+ if (priv->ctrl_type == DW_MCI_TYPE_EXYNOS7 ||
+ priv->ctrl_type == DW_MCI_TYPE_EXYNOS7_SMU ||
++ priv->ctrl_type == DW_MCI_TYPE_EXYNOS7870 ||
++ priv->ctrl_type == DW_MCI_TYPE_EXYNOS7870_SMU ||
+ priv->ctrl_type == DW_MCI_TYPE_ARTPEC8)
+ clksel = mci_readl(host, CLKSEL64);
+ else
+@@ -453,6 +486,8 @@ static inline u8 dw_mci_exynos_move_next_clksmpl(struct dw_mci *host)
+
+ if (priv->ctrl_type == DW_MCI_TYPE_EXYNOS7 ||
+ priv->ctrl_type == DW_MCI_TYPE_EXYNOS7_SMU ||
++ priv->ctrl_type == DW_MCI_TYPE_EXYNOS7870 ||
++ priv->ctrl_type == DW_MCI_TYPE_EXYNOS7870_SMU ||
+ priv->ctrl_type == DW_MCI_TYPE_ARTPEC8)
+ mci_writel(host, CLKSEL64, clksel);
+ else
+@@ -632,6 +667,10 @@ static const struct of_device_id dw_mci_exynos_match[] = {
+ .data = &exynos_drv_data, },
+ { .compatible = "samsung,exynos7-dw-mshc-smu",
+ .data = &exynos_drv_data, },
++ { .compatible = "samsung,exynos7870-dw-mshc",
++ .data = &exynos_drv_data, },
++ { .compatible = "samsung,exynos7870-dw-mshc-smu",
++ .data = &exynos_drv_data, },
+ { .compatible = "axis,artpec8-dw-mshc",
+ .data = &artpec_drv_data, },
+ {},
+diff --git a/drivers/mmc/host/sdhci-pci-core.c b/drivers/mmc/host/sdhci-pci-core.c
+index 1f0bd723f01124..13a84b9309e064 100644
+--- a/drivers/mmc/host/sdhci-pci-core.c
++++ b/drivers/mmc/host/sdhci-pci-core.c
+@@ -610,8 +610,12 @@ static void sdhci_intel_set_power(struct sdhci_host *host, unsigned char mode,
+
+ sdhci_set_power(host, mode, vdd);
+
+- if (mode == MMC_POWER_OFF)
++ if (mode == MMC_POWER_OFF) {
++ if (slot->chip->pdev->device == PCI_DEVICE_ID_INTEL_APL_SD ||
++ slot->chip->pdev->device == PCI_DEVICE_ID_INTEL_BYT_SD)
++ usleep_range(15000, 17500);
+ return;
++ }
+
+ /*
+ * Bus power might not enable after D3 -> D0 transition due to the
+diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
+index f4a7733a8ad22f..5f91b44891f9b1 100644
+--- a/drivers/mmc/host/sdhci.c
++++ b/drivers/mmc/host/sdhci.c
+@@ -2065,10 +2065,15 @@ void sdhci_set_clock(struct sdhci_host *host, unsigned int clock)
+
+ host->mmc->actual_clock = 0;
+
+- sdhci_writew(host, 0, SDHCI_CLOCK_CONTROL);
++ clk = sdhci_readw(host, SDHCI_CLOCK_CONTROL);
++ if (clk & SDHCI_CLOCK_CARD_EN)
++ sdhci_writew(host, clk & ~SDHCI_CLOCK_CARD_EN,
++ SDHCI_CLOCK_CONTROL);
+
+- if (clock == 0)
++ if (clock == 0) {
++ sdhci_writew(host, 0, SDHCI_CLOCK_CONTROL);
+ return;
++ }
+
+ clk = sdhci_calc_clk(host, clock, &host->mmc->actual_clock);
+ sdhci_enable_clk(host, clk);
+diff --git a/drivers/mmc/host/sdhci_am654.c b/drivers/mmc/host/sdhci_am654.c
+index f75c31815ab00d..73385ff4c0f30b 100644
+--- a/drivers/mmc/host/sdhci_am654.c
++++ b/drivers/mmc/host/sdhci_am654.c
+@@ -155,6 +155,7 @@ struct sdhci_am654_data {
+ u32 tuning_loop;
+
+ #define SDHCI_AM654_QUIRK_FORCE_CDTEST BIT(0)
++#define SDHCI_AM654_QUIRK_SUPPRESS_V1P8_ENA BIT(1)
+ };
+
+ struct window {
+@@ -166,6 +167,7 @@ struct window {
+ struct sdhci_am654_driver_data {
+ const struct sdhci_pltfm_data *pdata;
+ u32 flags;
++ u32 quirks;
+ #define IOMUX_PRESENT (1 << 0)
+ #define FREQSEL_2_BIT (1 << 1)
+ #define STRBSEL_4_BIT (1 << 2)
+@@ -356,6 +358,29 @@ static void sdhci_j721e_4bit_set_clock(struct sdhci_host *host,
+ sdhci_set_clock(host, clock);
+ }
+
++static int sdhci_am654_start_signal_voltage_switch(struct mmc_host *mmc, struct mmc_ios *ios)
++{
++ struct sdhci_host *host = mmc_priv(mmc);
++ struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
++ struct sdhci_am654_data *sdhci_am654 = sdhci_pltfm_priv(pltfm_host);
++ int ret;
++
++ if ((sdhci_am654->quirks & SDHCI_AM654_QUIRK_SUPPRESS_V1P8_ENA) &&
++ ios->signal_voltage == MMC_SIGNAL_VOLTAGE_180) {
++ if (!IS_ERR(mmc->supply.vqmmc)) {
++ ret = mmc_regulator_set_vqmmc(mmc, ios);
++ if (ret < 0) {
++ pr_err("%s: Switching to 1.8V signalling voltage failed,\n",
++ mmc_hostname(mmc));
++ return -EIO;
++ }
++ }
++ return 0;
++ }
++
++ return sdhci_start_signal_voltage_switch(mmc, ios);
++}
++
+ static u8 sdhci_am654_write_power_on(struct sdhci_host *host, u8 val, int reg)
+ {
+ writeb(val, host->ioaddr + reg);
+@@ -650,6 +675,12 @@ static const struct sdhci_am654_driver_data sdhci_j721e_4bit_drvdata = {
+ .flags = IOMUX_PRESENT,
+ };
+
++static const struct sdhci_am654_driver_data sdhci_am62_4bit_drvdata = {
++ .pdata = &sdhci_j721e_4bit_pdata,
++ .flags = IOMUX_PRESENT,
++ .quirks = SDHCI_AM654_QUIRK_SUPPRESS_V1P8_ENA,
++};
++
+ static const struct soc_device_attribute sdhci_am654_devices[] = {
+ { .family = "AM65X",
+ .revision = "SR1.0",
+@@ -872,7 +903,7 @@ static const struct of_device_id sdhci_am654_of_match[] = {
+ },
+ {
+ .compatible = "ti,am62-sdhci",
+- .data = &sdhci_j721e_4bit_drvdata,
++ .data = &sdhci_am62_4bit_drvdata,
+ },
+ { /* sentinel */ }
+ };
+@@ -906,6 +937,7 @@ static int sdhci_am654_probe(struct platform_device *pdev)
+ pltfm_host = sdhci_priv(host);
+ sdhci_am654 = sdhci_pltfm_priv(pltfm_host);
+ sdhci_am654->flags = drvdata->flags;
++ sdhci_am654->quirks = drvdata->quirks;
+
+ clk_xin = devm_clk_get(dev, "clk_xin");
+ if (IS_ERR(clk_xin)) {
+@@ -940,6 +972,7 @@ static int sdhci_am654_probe(struct platform_device *pdev)
+ goto err_pltfm_free;
+ }
+
++ host->mmc_host_ops.start_signal_voltage_switch = sdhci_am654_start_signal_voltage_switch;
+ host->mmc_host_ops.execute_tuning = sdhci_am654_execute_tuning;
+
+ pm_runtime_get_noresume(dev);
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index 4da5fcb7def47f..203d3467dcbcde 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -2551,7 +2551,7 @@ static int __bond_release_one(struct net_device *bond_dev,
+
+ RCU_INIT_POINTER(bond->current_arp_slave, NULL);
+
+- if (!all && (!bond->params.fail_over_mac ||
++ if (!all && (bond->params.fail_over_mac != BOND_FOM_ACTIVE ||
+ BOND_MODE(bond) != BOND_MODE_ACTIVEBACKUP)) {
+ if (ether_addr_equal_64bits(bond_dev->dev_addr, slave->perm_hwaddr) &&
+ bond_has_slaves(bond))
+diff --git a/drivers/net/can/c_can/c_can_platform.c b/drivers/net/can/c_can/c_can_platform.c
+index 399844809bbeaa..bb6071a758f365 100644
+--- a/drivers/net/can/c_can/c_can_platform.c
++++ b/drivers/net/can/c_can/c_can_platform.c
+@@ -324,7 +324,7 @@ static int c_can_plat_probe(struct platform_device *pdev)
+ /* Check if we need custom RAMINIT via syscon. Mostly for TI
+ * platforms. Only supported with DT boot.
+ */
+- if (np && of_property_read_bool(np, "syscon-raminit")) {
++ if (np && of_property_present(np, "syscon-raminit")) {
+ u32 id;
+ struct c_can_raminit *raminit = &priv->raminit_sys;
+
+diff --git a/drivers/net/can/kvaser_pciefd.c b/drivers/net/can/kvaser_pciefd.c
+index fa04a7ced02b8b..4f177ca1b998e8 100644
+--- a/drivers/net/can/kvaser_pciefd.c
++++ b/drivers/net/can/kvaser_pciefd.c
+@@ -16,6 +16,7 @@
+ #include <linux/netdevice.h>
+ #include <linux/pci.h>
+ #include <linux/timer.h>
++#include <net/netdev_queues.h>
+
+ MODULE_LICENSE("Dual BSD/GPL");
+ MODULE_AUTHOR("Kvaser AB <support@kvaser.com>");
+@@ -410,10 +411,13 @@ struct kvaser_pciefd_can {
+ void __iomem *reg_base;
+ struct can_berr_counter bec;
+ u8 cmd_seq;
++ u8 tx_max_count;
++ u8 tx_idx;
++ u8 ack_idx;
+ int err_rep_cnt;
+- int echo_idx;
++ unsigned int completed_tx_pkts;
++ unsigned int completed_tx_bytes;
+ spinlock_t lock; /* Locks sensitive registers (e.g. MODE) */
+- spinlock_t echo_lock; /* Locks the message echo buffer */
+ struct timer_list bec_poll_timer;
+ struct completion start_comp, flush_comp;
+ };
+@@ -714,6 +718,9 @@ static int kvaser_pciefd_open(struct net_device *netdev)
+ int ret;
+ struct kvaser_pciefd_can *can = netdev_priv(netdev);
+
++ can->tx_idx = 0;
++ can->ack_idx = 0;
++
+ ret = open_candev(netdev);
+ if (ret)
+ return ret;
+@@ -745,21 +752,26 @@ static int kvaser_pciefd_stop(struct net_device *netdev)
+ del_timer(&can->bec_poll_timer);
+ }
+ can->can.state = CAN_STATE_STOPPED;
++ netdev_reset_queue(netdev);
+ close_candev(netdev);
+
+ return ret;
+ }
+
++static unsigned int kvaser_pciefd_tx_avail(const struct kvaser_pciefd_can *can)
++{
++ return can->tx_max_count - (READ_ONCE(can->tx_idx) - READ_ONCE(can->ack_idx));
++}
++
+ static int kvaser_pciefd_prepare_tx_packet(struct kvaser_pciefd_tx_packet *p,
+- struct kvaser_pciefd_can *can,
++ struct can_priv *can, u8 seq,
+ struct sk_buff *skb)
+ {
+ struct canfd_frame *cf = (struct canfd_frame *)skb->data;
+ int packet_size;
+- int seq = can->echo_idx;
+
+ memset(p, 0, sizeof(*p));
+- if (can->can.ctrlmode & CAN_CTRLMODE_ONE_SHOT)
++ if (can->ctrlmode & CAN_CTRLMODE_ONE_SHOT)
+ p->header[1] |= KVASER_PCIEFD_TPACKET_SMS;
+
+ if (cf->can_id & CAN_RTR_FLAG)
+@@ -782,7 +794,7 @@ static int kvaser_pciefd_prepare_tx_packet(struct kvaser_pciefd_tx_packet *p,
+ } else {
+ p->header[1] |=
+ FIELD_PREP(KVASER_PCIEFD_RPACKET_DLC_MASK,
+- can_get_cc_dlc((struct can_frame *)cf, can->can.ctrlmode));
++ can_get_cc_dlc((struct can_frame *)cf, can->ctrlmode));
+ }
+
+ p->header[1] |= FIELD_PREP(KVASER_PCIEFD_PACKET_SEQ_MASK, seq);
+@@ -797,22 +809,24 @@ static netdev_tx_t kvaser_pciefd_start_xmit(struct sk_buff *skb,
+ struct net_device *netdev)
+ {
+ struct kvaser_pciefd_can *can = netdev_priv(netdev);
+- unsigned long irq_flags;
+ struct kvaser_pciefd_tx_packet packet;
++ unsigned int seq = can->tx_idx & (can->can.echo_skb_max - 1);
++ unsigned int frame_len;
+ int nr_words;
+- u8 count;
+
+ if (can_dev_dropped_skb(netdev, skb))
+ return NETDEV_TX_OK;
++ if (!netif_subqueue_maybe_stop(netdev, 0, kvaser_pciefd_tx_avail(can), 1, 1))
++ return NETDEV_TX_BUSY;
+
+- nr_words = kvaser_pciefd_prepare_tx_packet(&packet, can, skb);
++ nr_words = kvaser_pciefd_prepare_tx_packet(&packet, &can->can, seq, skb);
+
+- spin_lock_irqsave(&can->echo_lock, irq_flags);
+ /* Prepare and save echo skb in internal slot */
+- can_put_echo_skb(skb, netdev, can->echo_idx, 0);
+-
+- /* Move echo index to the next slot */
+- can->echo_idx = (can->echo_idx + 1) % can->can.echo_skb_max;
++ WRITE_ONCE(can->can.echo_skb[seq], NULL);
++ frame_len = can_skb_get_frame_len(skb);
++ can_put_echo_skb(skb, netdev, seq, frame_len);
++ netdev_sent_queue(netdev, frame_len);
++ WRITE_ONCE(can->tx_idx, can->tx_idx + 1);
+
+ /* Write header to fifo */
+ iowrite32(packet.header[0],
+@@ -836,14 +850,7 @@ static netdev_tx_t kvaser_pciefd_start_xmit(struct sk_buff *skb,
+ KVASER_PCIEFD_KCAN_FIFO_LAST_REG);
+ }
+
+- count = FIELD_GET(KVASER_PCIEFD_KCAN_TX_NR_PACKETS_CURRENT_MASK,
+- ioread32(can->reg_base + KVASER_PCIEFD_KCAN_TX_NR_PACKETS_REG));
+- /* No room for a new message, stop the queue until at least one
+- * successful transmit
+- */
+- if (count >= can->can.echo_skb_max || can->can.echo_skb[can->echo_idx])
+- netif_stop_queue(netdev);
+- spin_unlock_irqrestore(&can->echo_lock, irq_flags);
++ netif_subqueue_maybe_stop(netdev, 0, kvaser_pciefd_tx_avail(can), 1, 1);
+
+ return NETDEV_TX_OK;
+ }
+@@ -970,6 +977,8 @@ static int kvaser_pciefd_setup_can_ctrls(struct kvaser_pciefd *pcie)
+ can->kv_pcie = pcie;
+ can->cmd_seq = 0;
+ can->err_rep_cnt = 0;
++ can->completed_tx_pkts = 0;
++ can->completed_tx_bytes = 0;
+ can->bec.txerr = 0;
+ can->bec.rxerr = 0;
+
+@@ -983,11 +992,10 @@ static int kvaser_pciefd_setup_can_ctrls(struct kvaser_pciefd *pcie)
+ tx_nr_packets_max =
+ FIELD_GET(KVASER_PCIEFD_KCAN_TX_NR_PACKETS_MAX_MASK,
+ ioread32(can->reg_base + KVASER_PCIEFD_KCAN_TX_NR_PACKETS_REG));
++ can->tx_max_count = min(KVASER_PCIEFD_CAN_TX_MAX_COUNT, tx_nr_packets_max - 1);
+
+ can->can.clock.freq = pcie->freq;
+- can->can.echo_skb_max = min(KVASER_PCIEFD_CAN_TX_MAX_COUNT, tx_nr_packets_max - 1);
+- can->echo_idx = 0;
+- spin_lock_init(&can->echo_lock);
++ can->can.echo_skb_max = roundup_pow_of_two(can->tx_max_count);
+ spin_lock_init(&can->lock);
+
+ can->can.bittiming_const = &kvaser_pciefd_bittiming_const;
+@@ -1201,7 +1209,7 @@ static int kvaser_pciefd_handle_data_packet(struct kvaser_pciefd *pcie,
+ skb = alloc_canfd_skb(priv->dev, &cf);
+ if (!skb) {
+ priv->dev->stats.rx_dropped++;
+- return -ENOMEM;
++ return 0;
+ }
+
+ cf->len = can_fd_dlc2len(dlc);
+@@ -1213,7 +1221,7 @@ static int kvaser_pciefd_handle_data_packet(struct kvaser_pciefd *pcie,
+ skb = alloc_can_skb(priv->dev, (struct can_frame **)&cf);
+ if (!skb) {
+ priv->dev->stats.rx_dropped++;
+- return -ENOMEM;
++ return 0;
+ }
+ can_frame_set_cc_len((struct can_frame *)cf, dlc, priv->ctrlmode);
+ }
+@@ -1231,7 +1239,9 @@ static int kvaser_pciefd_handle_data_packet(struct kvaser_pciefd *pcie,
+ priv->dev->stats.rx_packets++;
+ kvaser_pciefd_set_skb_timestamp(pcie, skb, p->timestamp);
+
+- return netif_rx(skb);
++ netif_rx(skb);
++
++ return 0;
+ }
+
+ static void kvaser_pciefd_change_state(struct kvaser_pciefd_can *can,
+@@ -1510,19 +1520,21 @@ static int kvaser_pciefd_handle_ack_packet(struct kvaser_pciefd *pcie,
+ netdev_dbg(can->can.dev, "Packet was flushed\n");
+ } else {
+ int echo_idx = FIELD_GET(KVASER_PCIEFD_PACKET_SEQ_MASK, p->header[0]);
+- int len;
+- u8 count;
++ unsigned int len, frame_len = 0;
+ struct sk_buff *skb;
+
++ if (echo_idx != (can->ack_idx & (can->can.echo_skb_max - 1)))
++ return 0;
+ skb = can->can.echo_skb[echo_idx];
+- if (skb)
+- kvaser_pciefd_set_skb_timestamp(pcie, skb, p->timestamp);
+- len = can_get_echo_skb(can->can.dev, echo_idx, NULL);
+- count = FIELD_GET(KVASER_PCIEFD_KCAN_TX_NR_PACKETS_CURRENT_MASK,
+- ioread32(can->reg_base + KVASER_PCIEFD_KCAN_TX_NR_PACKETS_REG));
++ if (!skb)
++ return 0;
++ kvaser_pciefd_set_skb_timestamp(pcie, skb, p->timestamp);
++ len = can_get_echo_skb(can->can.dev, echo_idx, &frame_len);
+
+- if (count < can->can.echo_skb_max && netif_queue_stopped(can->can.dev))
+- netif_wake_queue(can->can.dev);
++ /* Pairs with barrier in kvaser_pciefd_start_xmit() */
++ smp_store_release(&can->ack_idx, can->ack_idx + 1);
++ can->completed_tx_pkts++;
++ can->completed_tx_bytes += frame_len;
+
+ if (!one_shot_fail) {
+ can->can.dev->stats.tx_bytes += len;
+@@ -1638,11 +1650,26 @@ static int kvaser_pciefd_read_buffer(struct kvaser_pciefd *pcie, int dma_buf)
+ {
+ int pos = 0;
+ int res = 0;
++ unsigned int i;
+
+ do {
+ res = kvaser_pciefd_read_packet(pcie, &pos, dma_buf);
+ } while (!res && pos > 0 && pos < KVASER_PCIEFD_DMA_SIZE);
+
++ /* Report ACKs in this buffer to BQL en masse for correct periods */
++ for (i = 0; i < pcie->nr_channels; ++i) {
++ struct kvaser_pciefd_can *can = pcie->can[i];
++
++ if (!can->completed_tx_pkts)
++ continue;
++ netif_subqueue_completed_wake(can->can.dev, 0,
++ can->completed_tx_pkts,
++ can->completed_tx_bytes,
++ kvaser_pciefd_tx_avail(can), 1);
++ can->completed_tx_pkts = 0;
++ can->completed_tx_bytes = 0;
++ }
++
+ return res;
+ }
+
+diff --git a/drivers/net/can/slcan/slcan-core.c b/drivers/net/can/slcan/slcan-core.c
+index 24c6622d36bd85..58ff2ec1d9757e 100644
+--- a/drivers/net/can/slcan/slcan-core.c
++++ b/drivers/net/can/slcan/slcan-core.c
+@@ -71,12 +71,21 @@ MODULE_AUTHOR("Dario Binacchi <dario.binacchi@amarulasolutions.com>");
+ #define SLCAN_CMD_LEN 1
+ #define SLCAN_SFF_ID_LEN 3
+ #define SLCAN_EFF_ID_LEN 8
++#define SLCAN_DATA_LENGTH_LEN 1
++#define SLCAN_ERROR_LEN 1
+ #define SLCAN_STATE_LEN 1
+ #define SLCAN_STATE_BE_RXCNT_LEN 3
+ #define SLCAN_STATE_BE_TXCNT_LEN 3
+-#define SLCAN_STATE_FRAME_LEN (1 + SLCAN_CMD_LEN + \
+- SLCAN_STATE_BE_RXCNT_LEN + \
+- SLCAN_STATE_BE_TXCNT_LEN)
++#define SLCAN_STATE_MSG_LEN (SLCAN_CMD_LEN + \
++ SLCAN_STATE_LEN + \
++ SLCAN_STATE_BE_RXCNT_LEN + \
++ SLCAN_STATE_BE_TXCNT_LEN)
++#define SLCAN_ERROR_MSG_LEN_MIN (SLCAN_CMD_LEN + \
++ SLCAN_ERROR_LEN + \
++ SLCAN_DATA_LENGTH_LEN)
++#define SLCAN_FRAME_MSG_LEN_MIN (SLCAN_CMD_LEN + \
++ SLCAN_SFF_ID_LEN + \
++ SLCAN_DATA_LENGTH_LEN)
+ struct slcan {
+ struct can_priv can;
+
+@@ -176,6 +185,9 @@ static void slcan_bump_frame(struct slcan *sl)
+ u32 tmpid;
+ char *cmd = sl->rbuff;
+
++ if (sl->rcount < SLCAN_FRAME_MSG_LEN_MIN)
++ return;
++
+ skb = alloc_can_skb(sl->dev, &cf);
+ if (unlikely(!skb)) {
+ sl->dev->stats.rx_dropped++;
+@@ -281,7 +293,7 @@ static void slcan_bump_state(struct slcan *sl)
+ return;
+ }
+
+- if (state == sl->can.state || sl->rcount < SLCAN_STATE_FRAME_LEN)
++ if (state == sl->can.state || sl->rcount != SLCAN_STATE_MSG_LEN)
+ return;
+
+ cmd += SLCAN_STATE_BE_RXCNT_LEN + SLCAN_CMD_LEN + 1;
+@@ -328,6 +340,9 @@ static void slcan_bump_err(struct slcan *sl)
+ bool rx_errors = false, tx_errors = false, rx_over_errors = false;
+ int i, len;
+
++ if (sl->rcount < SLCAN_ERROR_MSG_LEN_MIN)
++ return;
++
+ /* get len from sanitized ASCII value */
+ len = cmd[1];
+ if (len >= '0' && len < '9')
+@@ -456,8 +471,7 @@ static void slcan_bump(struct slcan *sl)
+ static void slcan_unesc(struct slcan *sl, unsigned char s)
+ {
+ if ((s == '\r') || (s == '\a')) { /* CR or BEL ends the pdu */
+- if (!test_and_clear_bit(SLF_ERROR, &sl->flags) &&
+- sl->rcount > 4)
++ if (!test_and_clear_bit(SLF_ERROR, &sl->flags))
+ slcan_bump(sl);
+
+ sl->rcount = 0;
+diff --git a/drivers/net/ethernet/apm/xgene-v2/main.c b/drivers/net/ethernet/apm/xgene-v2/main.c
+index 2a91c84aebdb04..d7ca847d44c7cc 100644
+--- a/drivers/net/ethernet/apm/xgene-v2/main.c
++++ b/drivers/net/ethernet/apm/xgene-v2/main.c
+@@ -9,8 +9,6 @@
+
+ #include "main.h"
+
+-static const struct acpi_device_id xge_acpi_match[];
+-
+ static int xge_get_resources(struct xge_pdata *pdata)
+ {
+ struct platform_device *pdev;
+@@ -731,7 +729,7 @@ MODULE_DEVICE_TABLE(acpi, xge_acpi_match);
+ static struct platform_driver xge_driver = {
+ .driver = {
+ .name = "xgene-enet-v2",
+- .acpi_match_table = ACPI_PTR(xge_acpi_match),
++ .acpi_match_table = xge_acpi_match,
+ },
+ .probe = xge_probe,
+ .remove = xge_remove,
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index bd8b9cb05ae988..d844cf621dd236 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -5589,6 +5589,8 @@ int bnxt_hwrm_func_drv_rgtr(struct bnxt *bp, unsigned long *bmap, int bmap_size,
+ if (bp->fw_cap & BNXT_FW_CAP_ERROR_RECOVERY)
+ flags |= FUNC_DRV_RGTR_REQ_FLAGS_ERROR_RECOVERY_SUPPORT |
+ FUNC_DRV_RGTR_REQ_FLAGS_MASTER_SUPPORT;
++ if (bp->fw_cap & BNXT_FW_CAP_NPAR_1_2)
++ flags |= FUNC_DRV_RGTR_REQ_FLAGS_NPAR_1_2_SUPPORT;
+ req->flags = cpu_to_le32(flags);
+ req->ver_maj_8b = DRV_VER_MAJ;
+ req->ver_min_8b = DRV_VER_MIN;
+@@ -8389,6 +8391,7 @@ static int bnxt_hwrm_func_qcfg(struct bnxt *bp)
+
+ switch (resp->port_partition_type) {
+ case FUNC_QCFG_RESP_PORT_PARTITION_TYPE_NPAR1_0:
++ case FUNC_QCFG_RESP_PORT_PARTITION_TYPE_NPAR1_2:
+ case FUNC_QCFG_RESP_PORT_PARTITION_TYPE_NPAR1_5:
+ case FUNC_QCFG_RESP_PORT_PARTITION_TYPE_NPAR2_0:
+ bp->port_partition_type = resp->port_partition_type;
+@@ -9553,6 +9556,8 @@ static int __bnxt_hwrm_func_qcaps(struct bnxt *bp)
+ bp->fw_cap |= BNXT_FW_CAP_HOT_RESET_IF;
+ if (BNXT_PF(bp) && (flags_ext & FUNC_QCAPS_RESP_FLAGS_EXT_FW_LIVEPATCH_SUPPORTED))
+ bp->fw_cap |= BNXT_FW_CAP_LIVEPATCH;
++ if (flags_ext & FUNC_QCAPS_RESP_FLAGS_EXT_NPAR_1_2_SUPPORTED)
++ bp->fw_cap |= BNXT_FW_CAP_NPAR_1_2;
+ if (BNXT_PF(bp) && (flags_ext & FUNC_QCAPS_RESP_FLAGS_EXT_DFLT_VLAN_TPID_PCP_SUPPORTED))
+ bp->fw_cap |= BNXT_FW_CAP_DFLT_VLAN_TPID_PCP;
+ if (flags_ext & FUNC_QCAPS_RESP_FLAGS_EXT_BS_V2_SUPPORTED)
+@@ -12104,6 +12109,7 @@ static int bnxt_hwrm_if_change(struct bnxt *bp, bool up)
+ struct hwrm_func_drv_if_change_input *req;
+ bool fw_reset = !bp->irq_tbl;
+ bool resc_reinit = false;
++ bool caps_change = false;
+ int rc, retry = 0;
+ u32 flags = 0;
+
+@@ -12159,8 +12165,11 @@ static int bnxt_hwrm_if_change(struct bnxt *bp, bool up)
+ set_bit(BNXT_STATE_ABORT_ERR, &bp->state);
+ return -ENODEV;
+ }
+- if (resc_reinit || fw_reset) {
+- if (fw_reset) {
++ if (flags & FUNC_DRV_IF_CHANGE_RESP_FLAGS_CAPS_CHANGE)
++ caps_change = true;
++
++ if (resc_reinit || fw_reset || caps_change) {
++ if (fw_reset || caps_change) {
+ set_bit(BNXT_STATE_FW_RESET_DET, &bp->state);
+ if (!test_bit(BNXT_STATE_IN_FW_RESET, &bp->state))
+ bnxt_ulp_irq_stop(bp);
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+index d621fb621f30c7..f91d9d8eacb972 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+@@ -2498,6 +2498,7 @@ struct bnxt {
+ #define BNXT_FW_CAP_CFA_RFS_RING_TBL_IDX_V3 BIT_ULL(39)
+ #define BNXT_FW_CAP_VNIC_RE_FLUSH BIT_ULL(40)
+ #define BNXT_FW_CAP_SW_MAX_RESOURCE_LIMITS BIT_ULL(41)
++ #define BNXT_FW_CAP_NPAR_1_2 BIT_ULL(42)
+
+ u32 fw_dbg_cap;
+
+diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
+index c5d5fa8d7dfddc..17e9bddb9ddd58 100644
+--- a/drivers/net/ethernet/freescale/fec_main.c
++++ b/drivers/net/ethernet/freescale/fec_main.c
+@@ -1098,6 +1098,29 @@ static void fec_enet_enable_ring(struct net_device *ndev)
+ }
+ }
+
++/* Whack a reset. We should wait for this.
++ * For i.MX6SX SOC, enet use AXI bus, we use disable MAC
++ * instead of reset MAC itself.
++ */
++static void fec_ctrl_reset(struct fec_enet_private *fep, bool allow_wol)
++{
++ u32 val;
++
++ if (!allow_wol || !(fep->wol_flag & FEC_WOL_FLAG_SLEEP_ON)) {
++ if (fep->quirks & FEC_QUIRK_HAS_MULTI_QUEUES ||
++ ((fep->quirks & FEC_QUIRK_NO_HARD_RESET) && fep->link)) {
++ writel(0, fep->hwp + FEC_ECNTRL);
++ } else {
++ writel(FEC_ECR_RESET, fep->hwp + FEC_ECNTRL);
++ udelay(10);
++ }
++ } else {
++ val = readl(fep->hwp + FEC_ECNTRL);
++ val |= (FEC_ECR_MAGICEN | FEC_ECR_SLEEP);
++ writel(val, fep->hwp + FEC_ECNTRL);
++ }
++}
++
+ /*
+ * This function is called to start or restart the FEC during a link
+ * change, transmit timeout, or to reconfigure the FEC. The network
+@@ -1114,17 +1137,7 @@ fec_restart(struct net_device *ndev)
+ if (fep->bufdesc_ex)
+ fec_ptp_save_state(fep);
+
+- /* Whack a reset. We should wait for this.
+- * For i.MX6SX SOC, enet use AXI bus, we use disable MAC
+- * instead of reset MAC itself.
+- */
+- if (fep->quirks & FEC_QUIRK_HAS_MULTI_QUEUES ||
+- ((fep->quirks & FEC_QUIRK_NO_HARD_RESET) && fep->link)) {
+- writel(0, fep->hwp + FEC_ECNTRL);
+- } else {
+- writel(1, fep->hwp + FEC_ECNTRL);
+- udelay(10);
+- }
++ fec_ctrl_reset(fep, false);
+
+ /*
+ * enet-mac reset will reset mac address registers too,
+@@ -1378,22 +1391,7 @@ fec_stop(struct net_device *ndev)
+ if (fep->bufdesc_ex)
+ fec_ptp_save_state(fep);
+
+- /* Whack a reset. We should wait for this.
+- * For i.MX6SX SOC, enet use AXI bus, we use disable MAC
+- * instead of reset MAC itself.
+- */
+- if (!(fep->wol_flag & FEC_WOL_FLAG_SLEEP_ON)) {
+- if (fep->quirks & FEC_QUIRK_HAS_MULTI_QUEUES) {
+- writel(0, fep->hwp + FEC_ECNTRL);
+- } else {
+- writel(FEC_ECR_RESET, fep->hwp + FEC_ECNTRL);
+- udelay(10);
+- }
+- } else {
+- val = readl(fep->hwp + FEC_ECNTRL);
+- val |= (FEC_ECR_MAGICEN | FEC_ECR_SLEEP);
+- writel(val, fep->hwp + FEC_ECNTRL);
+- }
++ fec_ctrl_reset(fep, true);
+ writel(fep->phy_speed, fep->hwp + FEC_MII_SPEED);
+ writel(FEC_DEFAULT_IMASK, fep->hwp + FEC_IMASK);
+
+diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c
+index f241493a6ac883..6bbb304ad9ab74 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ethtool.c
++++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c
+@@ -3817,8 +3817,7 @@ static u32 ice_get_combined_cnt(struct ice_vsi *vsi)
+ ice_for_each_q_vector(vsi, q_idx) {
+ struct ice_q_vector *q_vector = vsi->q_vectors[q_idx];
+
+- if (q_vector->rx.rx_ring && q_vector->tx.tx_ring)
+- combined++;
++ combined += min(q_vector->num_ring_tx, q_vector->num_ring_rx);
+ }
+
+ return combined;
+diff --git a/drivers/net/ethernet/intel/ice/ice_irq.c b/drivers/net/ethernet/intel/ice/ice_irq.c
+index ad82ff7d199570..09f9c7ba52795b 100644
+--- a/drivers/net/ethernet/intel/ice/ice_irq.c
++++ b/drivers/net/ethernet/intel/ice/ice_irq.c
+@@ -45,7 +45,7 @@ static void ice_free_irq_res(struct ice_pf *pf, u16 index)
+ /**
+ * ice_get_irq_res - get an interrupt resource
+ * @pf: board private structure
+- * @dyn_only: force entry to be dynamically allocated
++ * @dyn_allowed: allow entry to be dynamically allocated
+ *
+ * Allocate new irq entry in the free slot of the tracker. Since xarray
+ * is used, always allocate new entry at the lowest possible index. Set
+@@ -53,11 +53,12 @@ static void ice_free_irq_res(struct ice_pf *pf, u16 index)
+ *
+ * Returns allocated irq entry or NULL on failure.
+ */
+-static struct ice_irq_entry *ice_get_irq_res(struct ice_pf *pf, bool dyn_only)
++static struct ice_irq_entry *ice_get_irq_res(struct ice_pf *pf,
++ bool dyn_allowed)
+ {
+- struct xa_limit limit = { .max = pf->irq_tracker.num_entries,
++ struct xa_limit limit = { .max = pf->irq_tracker.num_entries - 1,
+ .min = 0 };
+- unsigned int num_static = pf->irq_tracker.num_static;
++ unsigned int num_static = pf->irq_tracker.num_static - 1;
+ struct ice_irq_entry *entry;
+ unsigned int index;
+ int ret;
+@@ -66,9 +67,9 @@ static struct ice_irq_entry *ice_get_irq_res(struct ice_pf *pf, bool dyn_only)
+ if (!entry)
+ return NULL;
+
+- /* skip preallocated entries if the caller says so */
+- if (dyn_only)
+- limit.min = num_static;
++ /* only already allocated if the caller says so */
++ if (!dyn_allowed)
++ limit.max = num_static;
+
+ ret = xa_alloc(&pf->irq_tracker.entries, &index, entry, limit,
+ GFP_KERNEL);
+@@ -78,7 +79,7 @@ static struct ice_irq_entry *ice_get_irq_res(struct ice_pf *pf, bool dyn_only)
+ entry = NULL;
+ } else {
+ entry->index = index;
+- entry->dynamic = index >= num_static;
++ entry->dynamic = index > num_static;
+ }
+
+ return entry;
+@@ -272,7 +273,7 @@ int ice_init_interrupt_scheme(struct ice_pf *pf)
+ /**
+ * ice_alloc_irq - Allocate new interrupt vector
+ * @pf: board private structure
+- * @dyn_only: force dynamic allocation of the interrupt
++ * @dyn_allowed: allow dynamic allocation of the interrupt
+ *
+ * Allocate new interrupt vector for a given owner id.
+ * return struct msi_map with interrupt details and track
+@@ -285,20 +286,20 @@ int ice_init_interrupt_scheme(struct ice_pf *pf)
+ * interrupt will be allocated with pci_msix_alloc_irq_at.
+ *
+ * Some callers may only support dynamically allocated interrupts.
+- * This is indicated with dyn_only flag.
++ * This is indicated with dyn_allowed flag.
+ *
+ * On failure, return map with negative .index. The caller
+ * is expected to check returned map index.
+ *
+ */
+-struct msi_map ice_alloc_irq(struct ice_pf *pf, bool dyn_only)
++struct msi_map ice_alloc_irq(struct ice_pf *pf, bool dyn_allowed)
+ {
+ int sriov_base_vector = pf->sriov_base_vector;
+ struct msi_map map = { .index = -ENOENT };
+ struct device *dev = ice_pf_to_dev(pf);
+ struct ice_irq_entry *entry;
+
+- entry = ice_get_irq_res(pf, dyn_only);
++ entry = ice_get_irq_res(pf, dyn_allowed);
+ if (!entry)
+ return map;
+
+diff --git a/drivers/net/ethernet/intel/ice/ice_lag.c b/drivers/net/ethernet/intel/ice/ice_lag.c
+index 22371011c24928..2410aee59fb2d5 100644
+--- a/drivers/net/ethernet/intel/ice/ice_lag.c
++++ b/drivers/net/ethernet/intel/ice/ice_lag.c
+@@ -1321,12 +1321,18 @@ static void ice_lag_changeupper_event(struct ice_lag *lag, void *ptr)
+ */
+ if (!primary_lag) {
+ lag->primary = true;
++ if (!ice_is_switchdev_running(lag->pf))
++ return;
++
+ /* Configure primary's SWID to be shared */
+ ice_lag_primary_swid(lag, true);
+ primary_lag = lag;
+ } else {
+ u16 swid;
+
++ if (!ice_is_switchdev_running(primary_lag->pf))
++ return;
++
+ swid = primary_lag->pf->hw.port_info->sw_id;
+ ice_lag_set_swid(swid, lag, true);
+ ice_lag_add_prune_list(primary_lag, lag->pf);
+diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
+index e0785e820d6014..021ed7451bb9f4 100644
+--- a/drivers/net/ethernet/intel/ice/ice_lib.c
++++ b/drivers/net/ethernet/intel/ice/ice_lib.c
+@@ -567,6 +567,8 @@ ice_vsi_alloc_def(struct ice_vsi *vsi, struct ice_channel *ch)
+ return -ENOMEM;
+ }
+
++ vsi->irq_dyn_alloc = pci_msix_can_alloc_dyn(vsi->back->pdev);
++
+ switch (vsi->type) {
+ case ICE_VSI_PF:
+ case ICE_VSI_SF:
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index e13bd5a6cb6c4e..d24d46b24e3711 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -5186,11 +5186,12 @@ int ice_load(struct ice_pf *pf)
+
+ ice_napi_add(vsi);
+
++ ice_init_features(pf);
++
+ err = ice_init_rdma(pf);
+ if (err)
+ goto err_init_rdma;
+
+- ice_init_features(pf);
+ ice_service_task_restart(pf);
+
+ clear_bit(ICE_DOWN, pf->state);
+@@ -5198,6 +5199,7 @@ int ice_load(struct ice_pf *pf)
+ return 0;
+
+ err_init_rdma:
++ ice_deinit_features(pf);
+ ice_tc_indir_block_unregister(vsi);
+ err_tc_indir_block_register:
+ ice_unregister_netdev(vsi);
+@@ -5221,8 +5223,8 @@ void ice_unload(struct ice_pf *pf)
+
+ devl_assert_locked(priv_to_devlink(pf));
+
+- ice_deinit_features(pf);
+ ice_deinit_rdma(pf);
++ ice_deinit_features(pf);
+ ice_tc_indir_block_unregister(vsi);
+ ice_unregister_netdev(vsi);
+ ice_devlink_destroy_pf_port(pf);
+diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl.c b/drivers/net/ethernet/intel/ice/ice_virtchnl.c
+index 1af51469f070b6..9be9ce300fa4aa 100644
+--- a/drivers/net/ethernet/intel/ice/ice_virtchnl.c
++++ b/drivers/net/ethernet/intel/ice/ice_virtchnl.c
+@@ -4211,7 +4211,6 @@ static int ice_vc_repr_add_mac(struct ice_vf *vf, u8 *msg)
+ }
+
+ ice_vfhw_mac_add(vf, &al->list[i]);
+- vf->num_mac++;
+ break;
+ }
+
+diff --git a/drivers/net/ethernet/intel/idpf/idpf.h b/drivers/net/ethernet/intel/idpf/idpf.h
+index aef0e9775a3305..70dbf80f3bb75b 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf.h
++++ b/drivers/net/ethernet/intel/idpf/idpf.h
+@@ -143,6 +143,7 @@ enum idpf_vport_state {
+ * @vport_id: Vport identifier
+ * @link_speed_mbps: Link speed in mbps
+ * @vport_idx: Relative vport index
++ * @max_tx_hdr_size: Max header length hardware can support
+ * @state: See enum idpf_vport_state
+ * @netstats: Packet and byte stats
+ * @stats_lock: Lock to protect stats update
+@@ -153,6 +154,7 @@ struct idpf_netdev_priv {
+ u32 vport_id;
+ u32 link_speed_mbps;
+ u16 vport_idx;
++ u16 max_tx_hdr_size;
+ enum idpf_vport_state state;
+ struct rtnl_link_stats64 netstats;
+ spinlock_t stats_lock;
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_lib.c b/drivers/net/ethernet/intel/idpf/idpf_lib.c
+index 6e8a82dae16286..df71e6ad651091 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_lib.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_lib.c
+@@ -723,6 +723,7 @@ static int idpf_cfg_netdev(struct idpf_vport *vport)
+ np->vport = vport;
+ np->vport_idx = vport->idx;
+ np->vport_id = vport->vport_id;
++ np->max_tx_hdr_size = idpf_get_max_tx_hdr_size(adapter);
+ vport->netdev = netdev;
+
+ return idpf_init_mac_addr(vport, netdev);
+@@ -740,6 +741,7 @@ static int idpf_cfg_netdev(struct idpf_vport *vport)
+ np->adapter = adapter;
+ np->vport_idx = vport->idx;
+ np->vport_id = vport->vport_id;
++ np->max_tx_hdr_size = idpf_get_max_tx_hdr_size(adapter);
+
+ spin_lock_init(&np->stats_lock);
+
+@@ -2202,8 +2204,8 @@ static netdev_features_t idpf_features_check(struct sk_buff *skb,
+ struct net_device *netdev,
+ netdev_features_t features)
+ {
+- struct idpf_vport *vport = idpf_netdev_to_vport(netdev);
+- struct idpf_adapter *adapter = vport->adapter;
++ struct idpf_netdev_priv *np = netdev_priv(netdev);
++ u16 max_tx_hdr_size = np->max_tx_hdr_size;
+ size_t len;
+
+ /* No point in doing any of this if neither checksum nor GSO are
+@@ -2226,7 +2228,7 @@ static netdev_features_t idpf_features_check(struct sk_buff *skb,
+ goto unsupported;
+
+ len = skb_network_header_len(skb);
+- if (unlikely(len > idpf_get_max_tx_hdr_size(adapter)))
++ if (unlikely(len > max_tx_hdr_size))
+ goto unsupported;
+
+ if (!skb->encapsulation)
+@@ -2239,7 +2241,7 @@ static netdev_features_t idpf_features_check(struct sk_buff *skb,
+
+ /* IPLEN can support at most 127 dwords */
+ len = skb_inner_network_header_len(skb);
+- if (unlikely(len > idpf_get_max_tx_hdr_size(adapter)))
++ if (unlikely(len > max_tx_hdr_size))
+ goto unsupported;
+
+ /* No need to validate L4LEN as TCP is the only protocol with a
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+index 977741c4149805..60b2e034c0348a 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+@@ -4031,6 +4031,14 @@ static int idpf_vport_splitq_napi_poll(struct napi_struct *napi, int budget)
+ return budget;
+ }
+
++ /* Switch to poll mode in the tear-down path after sending disable
++ * queues virtchnl message, as the interrupts will be disabled after
++ * that.
++ */
++ if (unlikely(q_vector->num_txq && idpf_queue_has(POLL_MODE,
++ q_vector->tx[0])))
++ return budget;
++
+ work_done = min_t(int, work_done, budget - 1);
+
+ /* Exit the polling mode, but don't re-enable interrupts if stack might
+@@ -4041,15 +4049,7 @@ static int idpf_vport_splitq_napi_poll(struct napi_struct *napi, int budget)
+ else
+ idpf_vport_intr_set_wb_on_itr(q_vector);
+
+- /* Switch to poll mode in the tear-down path after sending disable
+- * queues virtchnl message, as the interrupts will be disabled after
+- * that
+- */
+- if (unlikely(q_vector->num_txq && idpf_queue_has(POLL_MODE,
+- q_vector->tx[0])))
+- return budget;
+- else
+- return work_done;
++ return work_done;
+ }
+
+ /**
+diff --git a/drivers/net/ethernet/intel/igc/igc_xdp.c b/drivers/net/ethernet/intel/igc/igc_xdp.c
+index 869815f48ac1d2..9eb47b4beb0622 100644
+--- a/drivers/net/ethernet/intel/igc/igc_xdp.c
++++ b/drivers/net/ethernet/intel/igc/igc_xdp.c
+@@ -14,6 +14,7 @@ int igc_xdp_set_prog(struct igc_adapter *adapter, struct bpf_prog *prog,
+ bool if_running = netif_running(dev);
+ struct bpf_prog *old_prog;
+ bool need_update;
++ unsigned int i;
+
+ if (dev->mtu > ETH_DATA_LEN) {
+ /* For now, the driver doesn't support XDP functionality with
+@@ -24,8 +25,13 @@ int igc_xdp_set_prog(struct igc_adapter *adapter, struct bpf_prog *prog,
+ }
+
+ need_update = !!adapter->xdp_prog != !!prog;
+- if (if_running && need_update)
+- igc_close(dev);
++ if (if_running && need_update) {
++ for (i = 0; i < adapter->num_rx_queues; i++) {
++ igc_disable_rx_ring(adapter->rx_ring[i]);
++ igc_disable_tx_ring(adapter->tx_ring[i]);
++ napi_disable(&adapter->rx_ring[i]->q_vector->napi);
++ }
++ }
+
+ old_prog = xchg(&adapter->xdp_prog, prog);
+ if (old_prog)
+@@ -36,8 +42,13 @@ int igc_xdp_set_prog(struct igc_adapter *adapter, struct bpf_prog *prog,
+ else
+ xdp_features_clear_redirect_target(dev);
+
+- if (if_running && need_update)
+- igc_open(dev);
++ if (if_running && need_update) {
++ for (i = 0; i < adapter->num_rx_queues; i++) {
++ napi_enable(&adapter->rx_ring[i]->q_vector->napi);
++ igc_enable_tx_ring(adapter->tx_ring[i]);
++ igc_enable_rx_ring(adapter->rx_ring[i]);
++ }
++ }
+
+ return 0;
+ }
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+index 467f81239e12f9..481f917f7ed288 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+@@ -3185,6 +3185,10 @@ static void ixgbe_handle_fw_event(struct ixgbe_adapter *adapter)
+ case ixgbe_aci_opc_get_link_status:
+ ixgbe_handle_link_status_event(adapter, &event);
+ break;
++ case ixgbe_aci_opc_temp_tca_event:
++ e_crit(drv, "%s\n", ixgbe_overheat_msg);
++ ixgbe_down(adapter);
++ break;
+ default:
+ e_warn(hw, "unknown FW async event captured\n");
+ break;
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_type_e610.h b/drivers/net/ethernet/intel/ixgbe/ixgbe_type_e610.h
+index 8d06ade3c7cd97..617e07878e4f7f 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_type_e610.h
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_type_e610.h
+@@ -171,6 +171,9 @@ enum ixgbe_aci_opc {
+ ixgbe_aci_opc_done_alt_write = 0x0904,
+ ixgbe_aci_opc_clear_port_alt_write = 0x0906,
+
++ /* TCA Events */
++ ixgbe_aci_opc_temp_tca_event = 0x0C94,
++
+ /* debug commands */
+ ixgbe_aci_opc_debug_dump_internals = 0xFF08,
+
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
+index e43c4608d3ba33..971993586fb49d 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
+@@ -66,8 +66,18 @@ static int cgx_fwi_link_change(struct cgx *cgx, int lmac_id, bool en);
+ /* Supported devices */
+ static const struct pci_device_id cgx_id_table[] = {
+ { PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_OCTEONTX2_CGX) },
+- { PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_CN10K_RPM) },
+- { PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_CN10KB_RPM) },
++ { PCI_DEVICE_SUB(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_CN10K_RPM,
++ PCI_ANY_ID, PCI_SUBSYS_DEVID_CN10K_A) },
++ { PCI_DEVICE_SUB(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_CN10K_RPM,
++ PCI_ANY_ID, PCI_SUBSYS_DEVID_CNF10K_A) },
++ { PCI_DEVICE_SUB(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_CN10K_RPM,
++ PCI_ANY_ID, PCI_SUBSYS_DEVID_CNF10K_B) },
++ { PCI_DEVICE_SUB(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_CN10KB_RPM,
++ PCI_ANY_ID, PCI_SUBSYS_DEVID_CN10K_B) },
++ { PCI_DEVICE_SUB(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_CN10KB_RPM,
++ PCI_ANY_ID, PCI_SUBSYS_DEVID_CN20KA) },
++ { PCI_DEVICE_SUB(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_CN10KB_RPM,
++ PCI_ANY_ID, PCI_SUBSYS_DEVID_CNF20KA) },
+ { 0, } /* end of table */
+ };
+
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+index a383b5ef5b2d8d..60f085b00a8cc0 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+@@ -30,6 +30,8 @@
+ #define PCI_SUBSYS_DEVID_CNF10K_A 0xBA00
+ #define PCI_SUBSYS_DEVID_CNF10K_B 0xBC00
+ #define PCI_SUBSYS_DEVID_CN10K_B 0xBD00
++#define PCI_SUBSYS_DEVID_CN20KA 0xC220
++#define PCI_SUBSYS_DEVID_CNF20KA 0xC320
+
+ /* PCI BAR nos */
+ #define PCI_AF_REG_BAR_NUM 0
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c
+index 7fa98aeb3663c0..4a3370a40dd887 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c
+@@ -13,19 +13,26 @@
+ /* RVU LMTST */
+ #define LMT_TBL_OP_READ 0
+ #define LMT_TBL_OP_WRITE 1
+-#define LMT_MAP_TABLE_SIZE (128 * 1024)
+ #define LMT_MAPTBL_ENTRY_SIZE 16
++#define LMT_MAX_VFS 256
++
++#define LMT_MAP_ENTRY_ENA BIT_ULL(20)
++#define LMT_MAP_ENTRY_LINES GENMASK_ULL(18, 16)
+
+ /* Function to perform operations (read/write) on lmtst map table */
+ static int lmtst_map_table_ops(struct rvu *rvu, u32 index, u64 *val,
+ int lmt_tbl_op)
+ {
+ void __iomem *lmt_map_base;
+- u64 tbl_base;
++ u64 tbl_base, cfg;
++ int pfs, vfs;
+
+ tbl_base = rvu_read64(rvu, BLKADDR_APR, APR_AF_LMT_MAP_BASE);
++ cfg = rvu_read64(rvu, BLKADDR_APR, APR_AF_LMT_CFG);
++ vfs = 1 << (cfg & 0xF);
++ pfs = 1 << ((cfg >> 4) & 0x7);
+
+- lmt_map_base = ioremap_wc(tbl_base, LMT_MAP_TABLE_SIZE);
++ lmt_map_base = ioremap_wc(tbl_base, pfs * vfs * LMT_MAPTBL_ENTRY_SIZE);
+ if (!lmt_map_base) {
+ dev_err(rvu->dev, "Failed to setup lmt map table mapping!!\n");
+ return -ENOMEM;
+@@ -35,6 +42,13 @@ static int lmtst_map_table_ops(struct rvu *rvu, u32 index, u64 *val,
+ *val = readq(lmt_map_base + index);
+ } else {
+ writeq((*val), (lmt_map_base + index));
++
++ cfg = FIELD_PREP(LMT_MAP_ENTRY_ENA, 0x1);
++ /* 2048 LMTLINES */
++ cfg |= FIELD_PREP(LMT_MAP_ENTRY_LINES, 0x6);
++
++ writeq(cfg, (lmt_map_base + (index + 8)));
++
+ /* Flushing the AP interceptor cache to make APR_LMT_MAP_ENTRY_S
+ * changes effective. Write 1 for flush and read is being used as a
+ * barrier and sets up a data dependency. Write to 0 after a write
+@@ -52,7 +66,7 @@ static int lmtst_map_table_ops(struct rvu *rvu, u32 index, u64 *val,
+ #define LMT_MAP_TBL_W1_OFF 8
+ static u32 rvu_get_lmtst_tbl_index(struct rvu *rvu, u16 pcifunc)
+ {
+- return ((rvu_get_pf(pcifunc) * rvu->hw->total_vfs) +
++ return ((rvu_get_pf(pcifunc) * LMT_MAX_VFS) +
+ (pcifunc & RVU_PFVF_FUNC_MASK)) * LMT_MAPTBL_ENTRY_SIZE;
+ }
+
+@@ -69,7 +83,7 @@ static int rvu_get_lmtaddr(struct rvu *rvu, u16 pcifunc,
+
+ mutex_lock(&rvu->rsrc_lock);
+ rvu_write64(rvu, BLKADDR_RVUM, RVU_AF_SMMU_ADDR_REQ, iova);
+- pf = rvu_get_pf(pcifunc) & 0x1F;
++ pf = rvu_get_pf(pcifunc) & RVU_PFVF_PF_MASK;
+ val = BIT_ULL(63) | BIT_ULL(14) | BIT_ULL(13) | pf << 8 |
+ ((pcifunc & RVU_PFVF_FUNC_MASK) & 0xFF);
+ rvu_write64(rvu, BLKADDR_RVUM, RVU_AF_SMMU_TXN_REQ, val);
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c
+index a1f9ec03c2ce69..c827da62647126 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c
+@@ -553,6 +553,7 @@ static ssize_t rvu_dbg_lmtst_map_table_display(struct file *filp,
+ u64 lmt_addr, val, tbl_base;
+ int pf, vf, num_vfs, hw_vfs;
+ void __iomem *lmt_map_base;
++ int apr_pfs, apr_vfs;
+ int buf_size = 10240;
+ size_t off = 0;
+ int index = 0;
+@@ -568,8 +569,12 @@ static ssize_t rvu_dbg_lmtst_map_table_display(struct file *filp,
+ return -ENOMEM;
+
+ tbl_base = rvu_read64(rvu, BLKADDR_APR, APR_AF_LMT_MAP_BASE);
++ val = rvu_read64(rvu, BLKADDR_APR, APR_AF_LMT_CFG);
++ apr_vfs = 1 << (val & 0xF);
++ apr_pfs = 1 << ((val >> 4) & 0x7);
+
+- lmt_map_base = ioremap_wc(tbl_base, 128 * 1024);
++ lmt_map_base = ioremap_wc(tbl_base, apr_pfs * apr_vfs *
++ LMT_MAPTBL_ENTRY_SIZE);
+ if (!lmt_map_base) {
+ dev_err(rvu->dev, "Failed to setup lmt map table mapping!!\n");
+ kfree(buf);
+@@ -591,7 +596,7 @@ static ssize_t rvu_dbg_lmtst_map_table_display(struct file *filp,
+ off += scnprintf(&buf[off], buf_size - 1 - off, "PF%d \t\t\t",
+ pf);
+
+- index = pf * rvu->hw->total_vfs * LMT_MAPTBL_ENTRY_SIZE;
++ index = pf * apr_vfs * LMT_MAPTBL_ENTRY_SIZE;
+ off += scnprintf(&buf[off], buf_size - 1 - off, " 0x%llx\t\t",
+ (tbl_base + index));
+ lmt_addr = readq(lmt_map_base + index);
+@@ -604,7 +609,7 @@ static ssize_t rvu_dbg_lmtst_map_table_display(struct file *filp,
+ /* Reading num of VFs per PF */
+ rvu_get_pf_numvfs(rvu, pf, &num_vfs, &hw_vfs);
+ for (vf = 0; vf < num_vfs; vf++) {
+- index = (pf * rvu->hw->total_vfs * 16) +
++ index = (pf * apr_vfs * LMT_MAPTBL_ENTRY_SIZE) +
+ ((vf + 1) * LMT_MAPTBL_ENTRY_SIZE);
+ off += scnprintf(&buf[off], buf_size - 1 - off,
+ "PF%d:VF%d \t\t", pf, vf);
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/Makefile b/drivers/net/ethernet/marvell/octeontx2/nic/Makefile
+index cb6513ab35e74e..69e0778f9ac104 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/Makefile
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/Makefile
+@@ -9,7 +9,7 @@ obj-$(CONFIG_RVU_ESWITCH) += rvu_rep.o
+
+ rvu_nicpf-y := otx2_pf.o otx2_common.o otx2_txrx.o otx2_ethtool.o \
+ otx2_flows.o otx2_tc.o cn10k.o otx2_dmac_flt.o \
+- otx2_devlink.o qos_sq.o qos.o
++ otx2_devlink.o qos_sq.o qos.o otx2_xsk.o
+ rvu_nicvf-y := otx2_vf.o
+ rvu_rep-y := rep.o
+
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c
+index a15cc86635d66c..c3b6e0f60a7998 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c
+@@ -112,9 +112,12 @@ int cn10k_refill_pool_ptrs(void *dev, struct otx2_cq_queue *cq)
+ struct otx2_nic *pfvf = dev;
+ int cnt = cq->pool_ptrs;
+ u64 ptrs[NPA_MAX_BURST];
++ struct otx2_pool *pool;
+ dma_addr_t bufptr;
+ int num_ptrs = 1;
+
++ pool = &pfvf->qset.pool[cq->cq_idx];
++
+ /* Refill pool with new buffers */
+ while (cq->pool_ptrs) {
+ if (otx2_alloc_buffer(pfvf, cq, &bufptr)) {
+@@ -124,7 +127,9 @@ int cn10k_refill_pool_ptrs(void *dev, struct otx2_cq_queue *cq)
+ break;
+ }
+ cq->pool_ptrs--;
+- ptrs[num_ptrs] = (u64)bufptr + OTX2_HEAD_ROOM;
++ ptrs[num_ptrs] = pool->xsk_pool ?
++ (u64)bufptr : (u64)bufptr + OTX2_HEAD_ROOM;
++
+ num_ptrs++;
+ if (num_ptrs == NPA_MAX_BURST || cq->pool_ptrs == 0) {
+ __cn10k_aura_freeptr(pfvf, cq->cq_idx, ptrs,
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+index 2b49bfec78692c..92b0dba07853ad 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+@@ -17,6 +17,7 @@
+ #include "otx2_common.h"
+ #include "otx2_struct.h"
+ #include "cn10k.h"
++#include "otx2_xsk.h"
+
+ static bool otx2_is_pfc_enabled(struct otx2_nic *pfvf)
+ {
+@@ -549,10 +550,13 @@ static int otx2_alloc_pool_buf(struct otx2_nic *pfvf, struct otx2_pool *pool,
+ }
+
+ static int __otx2_alloc_rbuf(struct otx2_nic *pfvf, struct otx2_pool *pool,
+- dma_addr_t *dma)
++ dma_addr_t *dma, int qidx, int idx)
+ {
+ u8 *buf;
+
++ if (pool->xsk_pool)
++ return otx2_xsk_pool_alloc_buf(pfvf, pool, dma, idx);
++
+ if (pool->page_pool)
+ return otx2_alloc_pool_buf(pfvf, pool, dma);
+
+@@ -571,12 +575,12 @@ static int __otx2_alloc_rbuf(struct otx2_nic *pfvf, struct otx2_pool *pool,
+ }
+
+ int otx2_alloc_rbuf(struct otx2_nic *pfvf, struct otx2_pool *pool,
+- dma_addr_t *dma)
++ dma_addr_t *dma, int qidx, int idx)
+ {
+ int ret;
+
+ local_bh_disable();
+- ret = __otx2_alloc_rbuf(pfvf, pool, dma);
++ ret = __otx2_alloc_rbuf(pfvf, pool, dma, qidx, idx);
+ local_bh_enable();
+ return ret;
+ }
+@@ -584,7 +588,8 @@ int otx2_alloc_rbuf(struct otx2_nic *pfvf, struct otx2_pool *pool,
+ int otx2_alloc_buffer(struct otx2_nic *pfvf, struct otx2_cq_queue *cq,
+ dma_addr_t *dma)
+ {
+- if (unlikely(__otx2_alloc_rbuf(pfvf, cq->rbpool, dma)))
++ if (unlikely(__otx2_alloc_rbuf(pfvf, cq->rbpool, dma,
++ cq->cq_idx, cq->pool_ptrs - 1)))
+ return -ENOMEM;
+ return 0;
+ }
+@@ -884,7 +889,7 @@ void otx2_sqb_flush(struct otx2_nic *pfvf)
+ #define RQ_PASS_LVL_AURA (255 - ((95 * 256) / 100)) /* RED when 95% is full */
+ #define RQ_DROP_LVL_AURA (255 - ((99 * 256) / 100)) /* Drop when 99% is full */
+
+-static int otx2_rq_init(struct otx2_nic *pfvf, u16 qidx, u16 lpb_aura)
++int otx2_rq_init(struct otx2_nic *pfvf, u16 qidx, u16 lpb_aura)
+ {
+ struct otx2_qset *qset = &pfvf->qset;
+ struct nix_aq_enq_req *aq;
+@@ -1041,12 +1046,13 @@ int otx2_sq_init(struct otx2_nic *pfvf, u16 qidx, u16 sqb_aura)
+
+ }
+
+-static int otx2_cq_init(struct otx2_nic *pfvf, u16 qidx)
++int otx2_cq_init(struct otx2_nic *pfvf, u16 qidx)
+ {
+ struct otx2_qset *qset = &pfvf->qset;
+ int err, pool_id, non_xdp_queues;
+ struct nix_aq_enq_req *aq;
+ struct otx2_cq_queue *cq;
++ struct otx2_pool *pool;
+
+ cq = &qset->cq[qidx];
+ cq->cq_idx = qidx;
+@@ -1055,8 +1061,20 @@ static int otx2_cq_init(struct otx2_nic *pfvf, u16 qidx)
+ cq->cq_type = CQ_RX;
+ cq->cint_idx = qidx;
+ cq->cqe_cnt = qset->rqe_cnt;
+- if (pfvf->xdp_prog)
++ if (pfvf->xdp_prog) {
+ xdp_rxq_info_reg(&cq->xdp_rxq, pfvf->netdev, qidx, 0);
++ pool = &qset->pool[qidx];
++ if (pool->xsk_pool) {
++ xdp_rxq_info_reg_mem_model(&cq->xdp_rxq,
++ MEM_TYPE_XSK_BUFF_POOL,
++ NULL);
++ xsk_pool_set_rxq_info(pool->xsk_pool, &cq->xdp_rxq);
++ } else if (pool->page_pool) {
++ xdp_rxq_info_reg_mem_model(&cq->xdp_rxq,
++ MEM_TYPE_PAGE_POOL,
++ pool->page_pool);
++ }
++ }
+ } else if (qidx < non_xdp_queues) {
+ cq->cq_type = CQ_TX;
+ cq->cint_idx = qidx - pfvf->hw.rx_queues;
+@@ -1275,9 +1293,10 @@ void otx2_free_bufs(struct otx2_nic *pfvf, struct otx2_pool *pool,
+
+ pa = otx2_iova_to_phys(pfvf->iommu_domain, iova);
+ page = virt_to_head_page(phys_to_virt(pa));
+-
+ if (pool->page_pool) {
+ page_pool_put_full_page(pool->page_pool, page, true);
++ } else if (pool->xsk_pool) {
++ /* Note: No way of identifying xdp_buff */
+ } else {
+ dma_unmap_page_attrs(pfvf->dev, iova, size,
+ DMA_FROM_DEVICE,
+@@ -1292,6 +1311,7 @@ void otx2_free_aura_ptr(struct otx2_nic *pfvf, int type)
+ int pool_id, pool_start = 0, pool_end = 0, size = 0;
+ struct otx2_pool *pool;
+ u64 iova;
++ int idx;
+
+ if (type == AURA_NIX_SQ) {
+ pool_start = otx2_get_pool_idx(pfvf, type, 0);
+@@ -1306,16 +1326,21 @@ void otx2_free_aura_ptr(struct otx2_nic *pfvf, int type)
+
+ /* Free SQB and RQB pointers from the aura pool */
+ for (pool_id = pool_start; pool_id < pool_end; pool_id++) {
+- iova = otx2_aura_allocptr(pfvf, pool_id);
+ pool = &pfvf->qset.pool[pool_id];
++ iova = otx2_aura_allocptr(pfvf, pool_id);
+ while (iova) {
+ if (type == AURA_NIX_RQ)
+ iova -= OTX2_HEAD_ROOM;
+-
+ otx2_free_bufs(pfvf, pool, iova, size);
+-
+ iova = otx2_aura_allocptr(pfvf, pool_id);
+ }
++
++ for (idx = 0 ; idx < pool->xdp_cnt; idx++) {
++ if (!pool->xdp[idx])
++ continue;
++
++ xsk_buff_free(pool->xdp[idx]);
++ }
+ }
+ }
+
+@@ -1332,7 +1357,8 @@ void otx2_aura_pool_free(struct otx2_nic *pfvf)
+ qmem_free(pfvf->dev, pool->stack);
+ qmem_free(pfvf->dev, pool->fc_addr);
+ page_pool_destroy(pool->page_pool);
+- pool->page_pool = NULL;
++ devm_kfree(pfvf->dev, pool->xdp);
++ pool->xsk_pool = NULL;
+ }
+ devm_kfree(pfvf->dev, pfvf->qset.pool);
+ pfvf->qset.pool = NULL;
+@@ -1419,6 +1445,7 @@ int otx2_pool_init(struct otx2_nic *pfvf, u16 pool_id,
+ int stack_pages, int numptrs, int buf_size, int type)
+ {
+ struct page_pool_params pp_params = { 0 };
++ struct xsk_buff_pool *xsk_pool;
+ struct npa_aq_enq_req *aq;
+ struct otx2_pool *pool;
+ int err;
+@@ -1462,21 +1489,35 @@ int otx2_pool_init(struct otx2_nic *pfvf, u16 pool_id,
+ aq->ctype = NPA_AQ_CTYPE_POOL;
+ aq->op = NPA_AQ_INSTOP_INIT;
+
+- if (type != AURA_NIX_RQ) {
+- pool->page_pool = NULL;
++ if (type != AURA_NIX_RQ)
++ return 0;
++
++ if (!test_bit(pool_id, pfvf->af_xdp_zc_qidx)) {
++ pp_params.order = get_order(buf_size);
++ pp_params.flags = PP_FLAG_DMA_MAP;
++ pp_params.pool_size = min(OTX2_PAGE_POOL_SZ, numptrs);
++ pp_params.nid = NUMA_NO_NODE;
++ pp_params.dev = pfvf->dev;
++ pp_params.dma_dir = DMA_FROM_DEVICE;
++ pool->page_pool = page_pool_create(&pp_params);
++ if (IS_ERR(pool->page_pool)) {
++ netdev_err(pfvf->netdev, "Creation of page pool failed\n");
++ return PTR_ERR(pool->page_pool);
++ }
+ return 0;
+ }
+
+- pp_params.order = get_order(buf_size);
+- pp_params.flags = PP_FLAG_DMA_MAP;
+- pp_params.pool_size = min(OTX2_PAGE_POOL_SZ, numptrs);
+- pp_params.nid = NUMA_NO_NODE;
+- pp_params.dev = pfvf->dev;
+- pp_params.dma_dir = DMA_FROM_DEVICE;
+- pool->page_pool = page_pool_create(&pp_params);
+- if (IS_ERR(pool->page_pool)) {
+- netdev_err(pfvf->netdev, "Creation of page pool failed\n");
+- return PTR_ERR(pool->page_pool);
++ /* Set XSK pool to support AF_XDP zero-copy */
++ xsk_pool = xsk_get_pool_from_qid(pfvf->netdev, pool_id);
++ if (xsk_pool) {
++ pool->xsk_pool = xsk_pool;
++ pool->xdp_cnt = numptrs;
++ pool->xdp = devm_kcalloc(pfvf->dev,
++ numptrs, sizeof(struct xdp_buff *), GFP_KERNEL);
++ if (IS_ERR(pool->xdp)) {
++ netdev_err(pfvf->netdev, "Creation of xsk pool failed\n");
++ return PTR_ERR(pool->xdp);
++ }
+ }
+
+ return 0;
+@@ -1537,9 +1578,18 @@ int otx2_sq_aura_pool_init(struct otx2_nic *pfvf)
+ }
+
+ for (ptr = 0; ptr < num_sqbs; ptr++) {
+- err = otx2_alloc_rbuf(pfvf, pool, &bufptr);
+- if (err)
++ err = otx2_alloc_rbuf(pfvf, pool, &bufptr, pool_id, ptr);
++ if (err) {
++ if (pool->xsk_pool) {
++ ptr--;
++ while (ptr >= 0) {
++ xsk_buff_free(pool->xdp[ptr]);
++ ptr--;
++ }
++ }
+ goto err_mem;
++ }
++
+ pfvf->hw_ops->aura_freeptr(pfvf, pool_id, bufptr);
+ sq->sqb_ptrs[sq->sqb_count++] = (u64)bufptr;
+ }
+@@ -1589,11 +1639,19 @@ int otx2_rq_aura_pool_init(struct otx2_nic *pfvf)
+ /* Allocate pointers and free them to aura/pool */
+ for (pool_id = 0; pool_id < hw->rqpool_cnt; pool_id++) {
+ pool = &pfvf->qset.pool[pool_id];
++
+ for (ptr = 0; ptr < num_ptrs; ptr++) {
+- err = otx2_alloc_rbuf(pfvf, pool, &bufptr);
+- if (err)
++ err = otx2_alloc_rbuf(pfvf, pool, &bufptr, pool_id, ptr);
++ if (err) {
++ if (pool->xsk_pool) {
++ while (ptr)
++ xsk_buff_free(pool->xdp[--ptr]);
++ }
+ return -ENOMEM;
++ }
++
+ pfvf->hw_ops->aura_freeptr(pfvf, pool_id,
++ pool->xsk_pool ? bufptr :
+ bufptr + OTX2_HEAD_ROOM);
+ }
+ }
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
+index 7cc12f10e8a157..7477038d29e211 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
+@@ -21,6 +21,7 @@
+ #include <linux/time64.h>
+ #include <linux/dim.h>
+ #include <uapi/linux/if_macsec.h>
++#include <net/page_pool/helpers.h>
+
+ #include <mbox.h>
+ #include <npc.h>
+@@ -532,6 +533,8 @@ struct otx2_nic {
+
+ /* Inline ipsec */
+ struct cn10k_ipsec ipsec;
++ /* af_xdp zero-copy */
++ unsigned long *af_xdp_zc_qidx;
+ };
+
+ static inline bool is_otx2_lbkvf(struct pci_dev *pdev)
+@@ -1003,7 +1006,7 @@ void otx2_txschq_free_one(struct otx2_nic *pfvf, u16 lvl, u16 schq);
+ void otx2_free_pending_sqe(struct otx2_nic *pfvf);
+ void otx2_sqb_flush(struct otx2_nic *pfvf);
+ int otx2_alloc_rbuf(struct otx2_nic *pfvf, struct otx2_pool *pool,
+- dma_addr_t *dma);
++ dma_addr_t *dma, int qidx, int idx);
+ int otx2_rxtx_enable(struct otx2_nic *pfvf, bool enable);
+ void otx2_ctx_disable(struct mbox *mbox, int type, bool npa);
+ int otx2_nix_config_bp(struct otx2_nic *pfvf, bool enable);
+@@ -1033,6 +1036,8 @@ void otx2_pfaf_mbox_destroy(struct otx2_nic *pf);
+ void otx2_disable_mbox_intr(struct otx2_nic *pf);
+ void otx2_disable_napi(struct otx2_nic *pf);
+ irqreturn_t otx2_cq_intr_handler(int irq, void *cq_irq);
++int otx2_rq_init(struct otx2_nic *pfvf, u16 qidx, u16 lpb_aura);
++int otx2_cq_init(struct otx2_nic *pfvf, u16 qidx);
+
+ /* RSS configuration APIs*/
+ int otx2_rss_init(struct otx2_nic *pfvf);
+@@ -1095,7 +1100,8 @@ int otx2_del_macfilter(struct net_device *netdev, const u8 *mac);
+ int otx2_add_macfilter(struct net_device *netdev, const u8 *mac);
+ int otx2_enable_rxvlan(struct otx2_nic *pf, bool enable);
+ int otx2_install_rxvlan_offload_flow(struct otx2_nic *pfvf);
+-bool otx2_xdp_sq_append_pkt(struct otx2_nic *pfvf, u64 iova, int len, u16 qidx);
++bool otx2_xdp_sq_append_pkt(struct otx2_nic *pfvf, struct xdp_frame *xdpf,
++ u64 iova, int len, u16 qidx, u16 flags);
+ u16 otx2_get_max_mtu(struct otx2_nic *pfvf);
+ int otx2_handle_ntuple_tc_features(struct net_device *netdev,
+ netdev_features_t features);
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+index e1dde93e8af823..09a51970851ff9 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+@@ -27,6 +27,7 @@
+ #include "qos.h"
+ #include <rvu_trace.h>
+ #include "cn10k_ipsec.h"
++#include "otx2_xsk.h"
+
+ #define DRV_NAME "rvu_nicpf"
+ #define DRV_STRING "Marvell RVU NIC Physical Function Driver"
+@@ -1662,9 +1663,7 @@ void otx2_free_hw_resources(struct otx2_nic *pf)
+ struct nix_lf_free_req *free_req;
+ struct mbox *mbox = &pf->mbox;
+ struct otx2_cq_queue *cq;
+- struct otx2_pool *pool;
+ struct msg_req *req;
+- int pool_id;
+ int qidx;
+
+ /* Ensure all SQE are processed */
+@@ -1705,13 +1704,6 @@ void otx2_free_hw_resources(struct otx2_nic *pf)
+ /* Free RQ buffer pointers*/
+ otx2_free_aura_ptr(pf, AURA_NIX_RQ);
+
+- for (qidx = 0; qidx < pf->hw.rx_queues; qidx++) {
+- pool_id = otx2_get_pool_idx(pf, AURA_NIX_RQ, qidx);
+- pool = &pf->qset.pool[pool_id];
+- page_pool_destroy(pool->page_pool);
+- pool->page_pool = NULL;
+- }
+-
+ otx2_free_cq_res(pf);
+
+ /* Free all ingress bandwidth profiles allocated */
+@@ -2691,7 +2683,6 @@ static int otx2_get_vf_config(struct net_device *netdev, int vf,
+ static int otx2_xdp_xmit_tx(struct otx2_nic *pf, struct xdp_frame *xdpf,
+ int qidx)
+ {
+- struct page *page;
+ u64 dma_addr;
+ int err = 0;
+
+@@ -2701,11 +2692,11 @@ static int otx2_xdp_xmit_tx(struct otx2_nic *pf, struct xdp_frame *xdpf,
+ if (dma_mapping_error(pf->dev, dma_addr))
+ return -ENOMEM;
+
+- err = otx2_xdp_sq_append_pkt(pf, dma_addr, xdpf->len, qidx);
++ err = otx2_xdp_sq_append_pkt(pf, xdpf, dma_addr, xdpf->len,
++ qidx, XDP_REDIRECT);
+ if (!err) {
+ otx2_dma_unmap_page(pf, dma_addr, xdpf->len, DMA_TO_DEVICE);
+- page = virt_to_page(xdpf->data);
+- put_page(page);
++ xdp_return_frame(xdpf);
+ return -ENOMEM;
+ }
+ return 0;
+@@ -2789,6 +2780,8 @@ static int otx2_xdp(struct net_device *netdev, struct netdev_bpf *xdp)
+ switch (xdp->command) {
+ case XDP_SETUP_PROG:
+ return otx2_xdp_setup(pf, xdp->prog);
++ case XDP_SETUP_XSK_POOL:
++ return otx2_xsk_pool_setup(pf, xdp->xsk.pool, xdp->xsk.queue_id);
+ default:
+ return -EINVAL;
+ }
+@@ -2866,6 +2859,7 @@ static const struct net_device_ops otx2_netdev_ops = {
+ .ndo_set_vf_vlan = otx2_set_vf_vlan,
+ .ndo_get_vf_config = otx2_get_vf_config,
+ .ndo_bpf = otx2_xdp,
++ .ndo_xsk_wakeup = otx2_xsk_wakeup,
+ .ndo_xdp_xmit = otx2_xdp_xmit,
+ .ndo_setup_tc = otx2_setup_tc,
+ .ndo_set_vf_trust = otx2_ndo_set_vf_trust,
+@@ -3204,16 +3198,28 @@ static int otx2_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ /* Enable link notifications */
+ otx2_cgx_config_linkevents(pf, true);
+
++ pf->af_xdp_zc_qidx = bitmap_zalloc(qcount, GFP_KERNEL);
++ if (!pf->af_xdp_zc_qidx) {
++ err = -ENOMEM;
++ goto err_sriov_cleannup;
++ }
++
+ #ifdef CONFIG_DCB
+ err = otx2_dcbnl_set_ops(netdev);
+ if (err)
+- goto err_pf_sriov_init;
++ goto err_free_zc_bmap;
+ #endif
+
+ otx2_qos_init(pf, qos_txqs);
+
+ return 0;
+
++#ifdef CONFIG_DCB
++err_free_zc_bmap:
++ bitmap_free(pf->af_xdp_zc_qidx);
++#endif
++err_sriov_cleannup:
++ otx2_sriov_vfcfg_cleanup(pf);
+ err_pf_sriov_init:
+ otx2_shutdown_tc(pf);
+ err_mcam_flow_del:
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c
+index 224cef9389274d..00b6903ba250ca 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c
+@@ -12,6 +12,7 @@
+ #include <linux/bpf_trace.h>
+ #include <net/ip6_checksum.h>
+ #include <net/xfrm.h>
++#include <net/xdp.h>
+
+ #include "otx2_reg.h"
+ #include "otx2_common.h"
+@@ -96,20 +97,16 @@ static unsigned int frag_num(unsigned int i)
+
+ static void otx2_xdp_snd_pkt_handler(struct otx2_nic *pfvf,
+ struct otx2_snd_queue *sq,
+- struct nix_cqe_tx_s *cqe)
++ struct nix_cqe_tx_s *cqe)
+ {
+ struct nix_send_comp_s *snd_comp = &cqe->comp;
+ struct sg_list *sg;
+- struct page *page;
+- u64 pa;
+
+ sg = &sq->sg[snd_comp->sqe_id];
+-
+- pa = otx2_iova_to_phys(pfvf->iommu_domain, sg->dma_addr[0]);
+- otx2_dma_unmap_page(pfvf, sg->dma_addr[0],
+- sg->size[0], DMA_TO_DEVICE);
+- page = virt_to_page(phys_to_virt(pa));
+- put_page(page);
++ if (sg->flags & XDP_REDIRECT)
++ otx2_dma_unmap_page(pfvf, sg->dma_addr[0], sg->size[0], DMA_TO_DEVICE);
++ xdp_return_frame((struct xdp_frame *)sg->skb);
++ sg->skb = (u64)NULL;
+ }
+
+ static void otx2_snd_pkt_handler(struct otx2_nic *pfvf,
+@@ -527,9 +524,10 @@ static void otx2_adjust_adaptive_coalese(struct otx2_nic *pfvf, struct otx2_cq_p
+ int otx2_napi_handler(struct napi_struct *napi, int budget)
+ {
+ struct otx2_cq_queue *rx_cq = NULL;
++ struct otx2_cq_queue *cq = NULL;
++ struct otx2_pool *pool = NULL;
+ struct otx2_cq_poll *cq_poll;
+ int workdone = 0, cq_idx, i;
+- struct otx2_cq_queue *cq;
+ struct otx2_qset *qset;
+ struct otx2_nic *pfvf;
+ int filled_cnt = -1;
+@@ -554,6 +552,7 @@ int otx2_napi_handler(struct napi_struct *napi, int budget)
+
+ if (rx_cq && rx_cq->pool_ptrs)
+ filled_cnt = pfvf->hw_ops->refill_pool_ptrs(pfvf, rx_cq);
++
+ /* Clear the IRQ */
+ otx2_write64(pfvf, NIX_LF_CINTX_INT(cq_poll->cint_idx), BIT_ULL(0));
+
+@@ -566,20 +565,31 @@ int otx2_napi_handler(struct napi_struct *napi, int budget)
+ if (pfvf->flags & OTX2_FLAG_ADPTV_INT_COAL_ENABLED)
+ otx2_adjust_adaptive_coalese(pfvf, cq_poll);
+
++ if (likely(cq))
++ pool = &pfvf->qset.pool[cq->cq_idx];
++
+ if (unlikely(!filled_cnt)) {
+ struct refill_work *work;
+ struct delayed_work *dwork;
+
+- work = &pfvf->refill_wrk[cq->cq_idx];
+- dwork = &work->pool_refill_work;
+- /* Schedule a task if no other task is running */
+- if (!cq->refill_task_sched) {
+- work->napi = napi;
+- cq->refill_task_sched = true;
+- schedule_delayed_work(dwork,
+- msecs_to_jiffies(100));
++ if (likely(cq)) {
++ work = &pfvf->refill_wrk[cq->cq_idx];
++ dwork = &work->pool_refill_work;
++ /* Schedule a task if no other task is running */
++ if (!cq->refill_task_sched) {
++ work->napi = napi;
++ cq->refill_task_sched = true;
++ schedule_delayed_work(dwork,
++ msecs_to_jiffies(100));
++ }
++ /* Call wake-up for not able to fill buffers */
++ if (pool->xsk_pool)
++ xsk_set_rx_need_wakeup(pool->xsk_pool);
+ }
+ } else {
++ /* Clear wake-up, since buffers are filled successfully */
++ if (pool && pool->xsk_pool)
++ xsk_clear_rx_need_wakeup(pool->xsk_pool);
+ /* Re-enable interrupts */
+ otx2_write64(pfvf,
+ NIX_LF_CINTX_ENA_W1S(cq_poll->cint_idx),
+@@ -1230,15 +1240,19 @@ void otx2_cleanup_rx_cqes(struct otx2_nic *pfvf, struct otx2_cq_queue *cq, int q
+ u16 pool_id;
+ u64 iova;
+
+- if (pfvf->xdp_prog)
++ pool_id = otx2_get_pool_idx(pfvf, AURA_NIX_RQ, qidx);
++ pool = &pfvf->qset.pool[pool_id];
++
++ if (pfvf->xdp_prog) {
++ if (pool->page_pool)
++ xdp_rxq_info_unreg_mem_model(&cq->xdp_rxq);
++
+ xdp_rxq_info_unreg(&cq->xdp_rxq);
++ }
+
+ if (otx2_nix_cq_op_status(pfvf, cq) || !cq->pend_cqe)
+ return;
+
+- pool_id = otx2_get_pool_idx(pfvf, AURA_NIX_RQ, qidx);
+- pool = &pfvf->qset.pool[pool_id];
+-
+ while (cq->pend_cqe) {
+ cqe = (struct nix_cqe_rx_s *)otx2_get_next_cqe(cq);
+ processed_cqe++;
+@@ -1359,8 +1373,9 @@ void otx2_free_pending_sqe(struct otx2_nic *pfvf)
+ }
+ }
+
+-static void otx2_xdp_sqe_add_sg(struct otx2_snd_queue *sq, u64 dma_addr,
+- int len, int *offset)
++static void otx2_xdp_sqe_add_sg(struct otx2_snd_queue *sq,
++ struct xdp_frame *xdpf,
++ u64 dma_addr, int len, int *offset, u16 flags)
+ {
+ struct nix_sqe_sg_s *sg = NULL;
+ u64 *iova = NULL;
+@@ -1377,9 +1392,12 @@ static void otx2_xdp_sqe_add_sg(struct otx2_snd_queue *sq, u64 dma_addr,
+ sq->sg[sq->head].dma_addr[0] = dma_addr;
+ sq->sg[sq->head].size[0] = len;
+ sq->sg[sq->head].num_segs = 1;
++ sq->sg[sq->head].flags = flags;
++ sq->sg[sq->head].skb = (u64)xdpf;
+ }
+
+-bool otx2_xdp_sq_append_pkt(struct otx2_nic *pfvf, u64 iova, int len, u16 qidx)
++bool otx2_xdp_sq_append_pkt(struct otx2_nic *pfvf, struct xdp_frame *xdpf,
++ u64 iova, int len, u16 qidx, u16 flags)
+ {
+ struct nix_sqe_hdr_s *sqe_hdr;
+ struct otx2_snd_queue *sq;
+@@ -1405,7 +1423,7 @@ bool otx2_xdp_sq_append_pkt(struct otx2_nic *pfvf, u64 iova, int len, u16 qidx)
+
+ offset = sizeof(*sqe_hdr);
+
+- otx2_xdp_sqe_add_sg(sq, iova, len, &offset);
++ otx2_xdp_sqe_add_sg(sq, xdpf, iova, len, &offset, flags);
+ sqe_hdr->sizem1 = (offset / 16) - 1;
+ pfvf->hw_ops->sqe_flush(pfvf, sq, offset, qidx);
+
+@@ -1418,14 +1436,28 @@ static bool otx2_xdp_rcv_pkt_handler(struct otx2_nic *pfvf,
+ struct otx2_cq_queue *cq,
+ bool *need_xdp_flush)
+ {
++ struct xdp_buff xdp, *xsk_buff = NULL;
+ unsigned char *hard_start;
++ struct otx2_pool *pool;
++ struct xdp_frame *xdpf;
+ int qidx = cq->cq_idx;
+- struct xdp_buff xdp;
+ struct page *page;
+ u64 iova, pa;
+ u32 act;
+ int err;
+
++ pool = &pfvf->qset.pool[qidx];
++
++ if (pool->xsk_pool) {
++ xsk_buff = pool->xdp[--cq->rbpool->xdp_top];
++ if (!xsk_buff)
++ return false;
++
++ xsk_buff->data_end = xsk_buff->data + cqe->sg.seg_size;
++ act = bpf_prog_run_xdp(prog, xsk_buff);
++ goto handle_xdp_verdict;
++ }
++
+ iova = cqe->sg.seg_addr - OTX2_HEAD_ROOM;
+ pa = otx2_iova_to_phys(pfvf->iommu_domain, iova);
+ page = virt_to_page(phys_to_virt(pa));
+@@ -1438,37 +1470,57 @@ static bool otx2_xdp_rcv_pkt_handler(struct otx2_nic *pfvf,
+
+ act = bpf_prog_run_xdp(prog, &xdp);
+
++handle_xdp_verdict:
+ switch (act) {
+ case XDP_PASS:
+ break;
+ case XDP_TX:
+ qidx += pfvf->hw.tx_queues;
+ cq->pool_ptrs++;
+- return otx2_xdp_sq_append_pkt(pfvf, iova,
+- cqe->sg.seg_size, qidx);
++ xdpf = xdp_convert_buff_to_frame(&xdp);
++ return otx2_xdp_sq_append_pkt(pfvf, xdpf, cqe->sg.seg_addr,
++ cqe->sg.seg_size, qidx, XDP_TX);
+ case XDP_REDIRECT:
+ cq->pool_ptrs++;
+- err = xdp_do_redirect(pfvf->netdev, &xdp, prog);
++ if (xsk_buff) {
++ err = xdp_do_redirect(pfvf->netdev, xsk_buff, prog);
++ if (!err) {
++ *need_xdp_flush = true;
++ return true;
++ }
++ return false;
++ }
+
+- otx2_dma_unmap_page(pfvf, iova, pfvf->rbsize,
+- DMA_FROM_DEVICE);
++ err = xdp_do_redirect(pfvf->netdev, &xdp, prog);
+ if (!err) {
+ *need_xdp_flush = true;
+ return true;
+ }
+- put_page(page);
++
++ otx2_dma_unmap_page(pfvf, iova, pfvf->rbsize,
++ DMA_FROM_DEVICE);
++ xdpf = xdp_convert_buff_to_frame(&xdp);
++ xdp_return_frame(xdpf);
+ break;
+ default:
+ bpf_warn_invalid_xdp_action(pfvf->netdev, prog, act);
+ break;
+ case XDP_ABORTED:
++ if (xsk_buff)
++ xsk_buff_free(xsk_buff);
+ trace_xdp_exception(pfvf->netdev, prog, act);
+ break;
+ case XDP_DROP:
+- otx2_dma_unmap_page(pfvf, iova, pfvf->rbsize,
+- DMA_FROM_DEVICE);
+- put_page(page);
+ cq->pool_ptrs++;
++ if (xsk_buff) {
++ xsk_buff_free(xsk_buff);
++ } else if (page->pp) {
++ page_pool_recycle_direct(pool->page_pool, page);
++ } else {
++ otx2_dma_unmap_page(pfvf, iova, pfvf->rbsize,
++ DMA_FROM_DEVICE);
++ put_page(page);
++ }
+ return true;
+ }
+ return false;
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.h
+index d23810963fdbd5..8f346fbc8221fa 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.h
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.h
+@@ -12,6 +12,7 @@
+ #include <linux/iommu.h>
+ #include <linux/if_vlan.h>
+ #include <net/xdp.h>
++#include <net/xdp_sock_drv.h>
+
+ #define LBK_CHAN_BASE 0x000
+ #define SDP_CHAN_BASE 0x700
+@@ -76,6 +77,7 @@ struct otx2_rcv_queue {
+
+ struct sg_list {
+ u16 num_segs;
++ u16 flags;
+ u64 skb;
+ u64 size[OTX2_MAX_FRAGS_IN_SQE];
+ u64 dma_addr[OTX2_MAX_FRAGS_IN_SQE];
+@@ -127,7 +129,11 @@ struct otx2_pool {
+ struct qmem *stack;
+ struct qmem *fc_addr;
+ struct page_pool *page_pool;
++ struct xsk_buff_pool *xsk_pool;
++ struct xdp_buff **xdp;
++ u16 xdp_cnt;
+ u16 rbsize;
++ u16 xdp_top;
+ };
+
+ struct otx2_cq_queue {
+@@ -144,6 +150,7 @@ struct otx2_cq_queue {
+ void *cqe_base;
+ struct qmem *cqe;
+ struct otx2_pool *rbpool;
++ bool xsk_zc_en;
+ struct xdp_rxq_info xdp_rxq;
+ } ____cacheline_aligned_in_smp;
+
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
+index e926c6ce96cffa..9b28be4c4a5d6c 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
+@@ -722,15 +722,30 @@ static int otx2vf_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ if (err)
+ goto err_shutdown_tc;
+
++ vf->af_xdp_zc_qidx = bitmap_zalloc(qcount, GFP_KERNEL);
++ if (!vf->af_xdp_zc_qidx) {
++ err = -ENOMEM;
++ goto err_unreg_devlink;
++ }
++
+ #ifdef CONFIG_DCB
+- err = otx2_dcbnl_set_ops(netdev);
+- if (err)
+- goto err_shutdown_tc;
++ /* Priority flow control is not supported for LBK and SDP vf(s) */
++ if (!(is_otx2_lbkvf(vf->pdev) || is_otx2_sdp_rep(vf->pdev))) {
++ err = otx2_dcbnl_set_ops(netdev);
++ if (err)
++ goto err_free_zc_bmap;
++ }
+ #endif
+ otx2_qos_init(vf, qos_txqs);
+
+ return 0;
+
++#ifdef CONFIG_DCB
++err_free_zc_bmap:
++ bitmap_free(vf->af_xdp_zc_qidx);
++#endif
++err_unreg_devlink:
++ otx2_unregister_dl(vf);
+ err_shutdown_tc:
+ otx2_shutdown_tc(vf);
+ err_unreg_netdev:
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_xsk.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_xsk.c
+new file mode 100644
+index 00000000000000..894c1e0aea6f11
+--- /dev/null
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_xsk.c
+@@ -0,0 +1,182 @@
++// SPDX-License-Identifier: GPL-2.0
++/* Marvell RVU Ethernet driver
++ *
++ * Copyright (C) 2024 Marvell.
++ *
++ */
++
++#include <linux/bpf_trace.h>
++#include <linux/stringify.h>
++#include <net/xdp_sock_drv.h>
++#include <net/xdp.h>
++
++#include "otx2_common.h"
++#include "otx2_xsk.h"
++
++int otx2_xsk_pool_alloc_buf(struct otx2_nic *pfvf, struct otx2_pool *pool,
++ dma_addr_t *dma, int idx)
++{
++ struct xdp_buff *xdp;
++ int delta;
++
++ xdp = xsk_buff_alloc(pool->xsk_pool);
++ if (!xdp)
++ return -ENOMEM;
++
++ pool->xdp[pool->xdp_top++] = xdp;
++ *dma = OTX2_DATA_ALIGN(xsk_buff_xdp_get_dma(xdp));
++ /* Adjust xdp->data for unaligned addresses */
++ delta = *dma - xsk_buff_xdp_get_dma(xdp);
++ xdp->data += delta;
++
++ return 0;
++}
++
++static int otx2_xsk_ctx_disable(struct otx2_nic *pfvf, u16 qidx, int aura_id)
++{
++ struct nix_cn10k_aq_enq_req *cn10k_rq_aq;
++ struct npa_aq_enq_req *aura_aq;
++ struct npa_aq_enq_req *pool_aq;
++ struct nix_aq_enq_req *rq_aq;
++
++ if (test_bit(CN10K_LMTST, &pfvf->hw.cap_flag)) {
++ cn10k_rq_aq = otx2_mbox_alloc_msg_nix_cn10k_aq_enq(&pfvf->mbox);
++ if (!cn10k_rq_aq)
++ return -ENOMEM;
++ cn10k_rq_aq->qidx = qidx;
++ cn10k_rq_aq->rq.ena = 0;
++ cn10k_rq_aq->rq_mask.ena = 1;
++ cn10k_rq_aq->ctype = NIX_AQ_CTYPE_RQ;
++ cn10k_rq_aq->op = NIX_AQ_INSTOP_WRITE;
++ } else {
++ rq_aq = otx2_mbox_alloc_msg_nix_aq_enq(&pfvf->mbox);
++ if (!rq_aq)
++ return -ENOMEM;
++ rq_aq->qidx = qidx;
++ rq_aq->sq.ena = 0;
++ rq_aq->sq_mask.ena = 1;
++ rq_aq->ctype = NIX_AQ_CTYPE_RQ;
++ rq_aq->op = NIX_AQ_INSTOP_WRITE;
++ }
++
++ aura_aq = otx2_mbox_alloc_msg_npa_aq_enq(&pfvf->mbox);
++ if (!aura_aq)
++ goto fail;
++
++ aura_aq->aura_id = aura_id;
++ aura_aq->aura.ena = 0;
++ aura_aq->aura_mask.ena = 1;
++ aura_aq->ctype = NPA_AQ_CTYPE_AURA;
++ aura_aq->op = NPA_AQ_INSTOP_WRITE;
++
++ pool_aq = otx2_mbox_alloc_msg_npa_aq_enq(&pfvf->mbox);
++ if (!pool_aq)
++ goto fail;
++
++ pool_aq->aura_id = aura_id;
++ pool_aq->pool.ena = 0;
++ pool_aq->pool_mask.ena = 1;
++
++ pool_aq->ctype = NPA_AQ_CTYPE_POOL;
++ pool_aq->op = NPA_AQ_INSTOP_WRITE;
++
++ return otx2_sync_mbox_msg(&pfvf->mbox);
++
++fail:
++ otx2_mbox_reset(&pfvf->mbox.mbox, 0);
++ return -ENOMEM;
++}
++
++static void otx2_clean_up_rq(struct otx2_nic *pfvf, int qidx)
++{
++ struct otx2_qset *qset = &pfvf->qset;
++ struct otx2_cq_queue *cq;
++ struct otx2_pool *pool;
++ u64 iova;
++
++ /* If the DOWN flag is set SQs are already freed */
++ if (pfvf->flags & OTX2_FLAG_INTF_DOWN)
++ return;
++
++ cq = &qset->cq[qidx];
++ if (cq)
++ otx2_cleanup_rx_cqes(pfvf, cq, qidx);
++
++ pool = &pfvf->qset.pool[qidx];
++ iova = otx2_aura_allocptr(pfvf, qidx);
++ while (iova) {
++ iova -= OTX2_HEAD_ROOM;
++ otx2_free_bufs(pfvf, pool, iova, pfvf->rbsize);
++ iova = otx2_aura_allocptr(pfvf, qidx);
++ }
++
++ mutex_lock(&pfvf->mbox.lock);
++ otx2_xsk_ctx_disable(pfvf, qidx, qidx);
++ mutex_unlock(&pfvf->mbox.lock);
++}
++
++int otx2_xsk_pool_enable(struct otx2_nic *pf, struct xsk_buff_pool *pool, u16 qidx)
++{
++ u16 rx_queues = pf->hw.rx_queues;
++ u16 tx_queues = pf->hw.tx_queues;
++ int err;
++
++ if (qidx >= rx_queues || qidx >= tx_queues)
++ return -EINVAL;
++
++ err = xsk_pool_dma_map(pool, pf->dev, DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING);
++ if (err)
++ return err;
++
++ set_bit(qidx, pf->af_xdp_zc_qidx);
++ otx2_clean_up_rq(pf, qidx);
++ /* Kick start the NAPI context so that receiving will start */
++ return otx2_xsk_wakeup(pf->netdev, qidx, XDP_WAKEUP_RX);
++}
++
++int otx2_xsk_pool_disable(struct otx2_nic *pf, u16 qidx)
++{
++ struct net_device *netdev = pf->netdev;
++ struct xsk_buff_pool *pool;
++
++ pool = xsk_get_pool_from_qid(netdev, qidx);
++ if (!pool)
++ return -EINVAL;
++
++ otx2_clean_up_rq(pf, qidx);
++ clear_bit(qidx, pf->af_xdp_zc_qidx);
++ xsk_pool_dma_unmap(pool, DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING);
++
++ return 0;
++}
++
++int otx2_xsk_pool_setup(struct otx2_nic *pf, struct xsk_buff_pool *pool, u16 qidx)
++{
++ if (pool)
++ return otx2_xsk_pool_enable(pf, pool, qidx);
++
++ return otx2_xsk_pool_disable(pf, qidx);
++}
++
++int otx2_xsk_wakeup(struct net_device *dev, u32 queue_id, u32 flags)
++{
++ struct otx2_nic *pf = netdev_priv(dev);
++ struct otx2_cq_poll *cq_poll = NULL;
++ struct otx2_qset *qset = &pf->qset;
++
++ if (pf->flags & OTX2_FLAG_INTF_DOWN)
++ return -ENETDOWN;
++
++ if (queue_id >= pf->hw.rx_queues)
++ return -EINVAL;
++
++ cq_poll = &qset->napi[queue_id];
++ if (!cq_poll)
++ return -EINVAL;
++
++ /* Trigger interrupt */
++ if (!napi_if_scheduled_mark_missed(&cq_poll->napi))
++ otx2_write64(pf, NIX_LF_CINTX_ENA_W1S(cq_poll->cint_idx), BIT_ULL(0));
++
++ return 0;
++}
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_xsk.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_xsk.h
+new file mode 100644
+index 00000000000000..022b3433edbbb5
+--- /dev/null
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_xsk.h
+@@ -0,0 +1,21 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++/* Marvell RVU PF/VF Netdev Devlink
++ *
++ * Copyright (C) 2024 Marvell.
++ *
++ */
++
++#ifndef OTX2_XSK_H
++#define OTX2_XSK_H
++
++struct otx2_nic;
++struct xsk_buff_pool;
++
++int otx2_xsk_pool_setup(struct otx2_nic *pf, struct xsk_buff_pool *pool, u16 qid);
++int otx2_xsk_pool_enable(struct otx2_nic *pf, struct xsk_buff_pool *pool, u16 qid);
++int otx2_xsk_pool_disable(struct otx2_nic *pf, u16 qid);
++int otx2_xsk_pool_alloc_buf(struct otx2_nic *pfvf, struct otx2_pool *pool,
++ dma_addr_t *dma, int idx);
++int otx2_xsk_wakeup(struct net_device *dev, u32 queue_id, u32 flags);
++
++#endif /* OTX2_XSK_H */
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/qos_sq.c b/drivers/net/ethernet/marvell/octeontx2/nic/qos_sq.c
+index 9d887bfc31089c..c5dbae0e513b64 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/qos_sq.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/qos_sq.c
+@@ -82,7 +82,7 @@ static int otx2_qos_sq_aura_pool_init(struct otx2_nic *pfvf, int qidx)
+ }
+
+ for (ptr = 0; ptr < num_sqbs; ptr++) {
+- err = otx2_alloc_rbuf(pfvf, pool, &bufptr);
++ err = otx2_alloc_rbuf(pfvf, pool, &bufptr, pool_id, ptr);
+ if (err)
+ goto sqb_free;
+ pfvf->hw_ops->aura_freeptr(pfvf, pool_id, bufptr);
+diff --git a/drivers/net/ethernet/mediatek/mtk_ppe_offload.c b/drivers/net/ethernet/mediatek/mtk_ppe_offload.c
+index f20bb390df3add..c855fb799ce145 100644
+--- a/drivers/net/ethernet/mediatek/mtk_ppe_offload.c
++++ b/drivers/net/ethernet/mediatek/mtk_ppe_offload.c
+@@ -34,8 +34,10 @@ struct mtk_flow_data {
+ u16 vlan_in;
+
+ struct {
+- u16 id;
+- __be16 proto;
++ struct {
++ u16 id;
++ __be16 proto;
++ } vlans[2];
+ u8 num;
+ } vlan;
+ struct {
+@@ -349,18 +351,19 @@ mtk_flow_offload_replace(struct mtk_eth *eth, struct flow_cls_offload *f,
+ case FLOW_ACTION_CSUM:
+ break;
+ case FLOW_ACTION_VLAN_PUSH:
+- if (data.vlan.num == 1 ||
++ if (data.vlan.num + data.pppoe.num == 2 ||
+ act->vlan.proto != htons(ETH_P_8021Q))
+ return -EOPNOTSUPP;
+
+- data.vlan.id = act->vlan.vid;
+- data.vlan.proto = act->vlan.proto;
++ data.vlan.vlans[data.vlan.num].id = act->vlan.vid;
++ data.vlan.vlans[data.vlan.num].proto = act->vlan.proto;
+ data.vlan.num++;
+ break;
+ case FLOW_ACTION_VLAN_POP:
+ break;
+ case FLOW_ACTION_PPPOE_PUSH:
+- if (data.pppoe.num == 1)
++ if (data.pppoe.num == 1 ||
++ data.vlan.num == 2)
+ return -EOPNOTSUPP;
+
+ data.pppoe.sid = act->pppoe.sid;
+@@ -450,12 +453,9 @@ mtk_flow_offload_replace(struct mtk_eth *eth, struct flow_cls_offload *f,
+ if (offload_type == MTK_PPE_PKT_TYPE_BRIDGE)
+ foe.bridge.vlan = data.vlan_in;
+
+- if (data.vlan.num == 1) {
+- if (data.vlan.proto != htons(ETH_P_8021Q))
+- return -EOPNOTSUPP;
++ for (i = 0; i < data.vlan.num; i++)
++ mtk_foe_entry_set_vlan(eth, &foe, data.vlan.vlans[i].id);
+
+- mtk_foe_entry_set_vlan(eth, &foe, data.vlan.id);
+- }
+ if (data.pppoe.num == 1)
+ mtk_foe_entry_set_pppoe(eth, &foe, data.pppoe.sid);
+
+diff --git a/drivers/net/ethernet/mellanox/mlx4/alloc.c b/drivers/net/ethernet/mellanox/mlx4/alloc.c
+index b330020dc0d674..f2bded847e61d1 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/alloc.c
++++ b/drivers/net/ethernet/mellanox/mlx4/alloc.c
+@@ -682,9 +682,9 @@ static struct mlx4_db_pgdir *mlx4_alloc_db_pgdir(struct device *dma_device)
+ }
+
+ static int mlx4_alloc_db_from_pgdir(struct mlx4_db_pgdir *pgdir,
+- struct mlx4_db *db, int order)
++ struct mlx4_db *db, unsigned int order)
+ {
+- int o;
++ unsigned int o;
+ int i;
+
+ for (o = order; o <= 1; ++o) {
+@@ -712,7 +712,7 @@ static int mlx4_alloc_db_from_pgdir(struct mlx4_db_pgdir *pgdir,
+ return 0;
+ }
+
+-int mlx4_db_alloc(struct mlx4_dev *dev, struct mlx4_db *db, int order)
++int mlx4_db_alloc(struct mlx4_dev *dev, struct mlx4_db *db, unsigned int order)
+ {
+ struct mlx4_priv *priv = mlx4_priv(dev);
+ struct mlx4_db_pgdir *pgdir;
+diff --git a/drivers/net/ethernet/mellanox/mlx4/en_tx.c b/drivers/net/ethernet/mellanox/mlx4/en_tx.c
+index 1ddb11cb25f916..6e077d202827a2 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/en_tx.c
++++ b/drivers/net/ethernet/mellanox/mlx4/en_tx.c
+@@ -450,6 +450,8 @@ int mlx4_en_process_tx_cq(struct net_device *dev,
+
+ if (unlikely(!priv->port_up))
+ return 0;
++ if (unlikely(!napi_budget) && cq->type == TX_XDP)
++ return 0;
+
+ netdev_txq_bql_complete_prefetchw(ring->tx_queue);
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
+index 979fc56205e1fe..769e683f248836 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
+@@ -95,8 +95,6 @@ struct page_pool;
+ #define MLX5_MPWRQ_DEF_LOG_STRIDE_SZ(mdev) \
+ MLX5_MPWRQ_LOG_STRIDE_SZ(mdev, order_base_2(MLX5E_RX_MAX_HEAD))
+
+-#define MLX5_MPWRQ_MAX_LOG_WQE_SZ 18
+-
+ /* Keep in sync with mlx5e_mpwrq_log_wqe_sz.
+ * These are theoretical maximums, which can be further restricted by
+ * capabilities. These values are used for static resource allocations and
+@@ -386,7 +384,6 @@ enum {
+ MLX5E_SQ_STATE_VLAN_NEED_L2_INLINE,
+ MLX5E_SQ_STATE_PENDING_XSK_TX,
+ MLX5E_SQ_STATE_PENDING_TLS_RX_RESYNC,
+- MLX5E_SQ_STATE_XDP_MULTIBUF,
+ MLX5E_NUM_SQ_STATES, /* Must be kept last */
+ };
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/params.c b/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
+index 31eb99f09c63c1..58ec5e44aa7ada 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
+@@ -10,6 +10,9 @@
+ #include <net/page_pool/types.h>
+ #include <net/xdp_sock_drv.h>
+
++#define MLX5_MPWRQ_MAX_LOG_WQE_SZ 18
++#define MLX5_REP_MPWRQ_MAX_LOG_WQE_SZ 17
++
+ static u8 mlx5e_mpwrq_min_page_shift(struct mlx5_core_dev *mdev)
+ {
+ u8 min_page_shift = MLX5_CAP_GEN_2(mdev, log_min_mkey_entity_size);
+@@ -103,18 +106,22 @@ u8 mlx5e_mpwrq_log_wqe_sz(struct mlx5_core_dev *mdev, u8 page_shift,
+ enum mlx5e_mpwrq_umr_mode umr_mode)
+ {
+ u8 umr_entry_size = mlx5e_mpwrq_umr_entry_size(umr_mode);
+- u8 max_pages_per_wqe, max_log_mpwqe_size;
++ u8 max_pages_per_wqe, max_log_wqe_size_calc;
++ u8 max_log_wqe_size_cap;
+ u16 max_wqe_size;
+
+ /* Keep in sync with MLX5_MPWRQ_MAX_PAGES_PER_WQE. */
+ max_wqe_size = mlx5e_get_max_sq_aligned_wqebbs(mdev) * MLX5_SEND_WQE_BB;
+ max_pages_per_wqe = ALIGN_DOWN(max_wqe_size - sizeof(struct mlx5e_umr_wqe),
+ MLX5_UMR_FLEX_ALIGNMENT) / umr_entry_size;
+- max_log_mpwqe_size = ilog2(max_pages_per_wqe) + page_shift;
++ max_log_wqe_size_calc = ilog2(max_pages_per_wqe) + page_shift;
++
++ WARN_ON_ONCE(max_log_wqe_size_calc < MLX5E_ORDER2_MAX_PACKET_MTU);
+
+- WARN_ON_ONCE(max_log_mpwqe_size < MLX5E_ORDER2_MAX_PACKET_MTU);
++ max_log_wqe_size_cap = mlx5_core_is_ecpf(mdev) ?
++ MLX5_REP_MPWRQ_MAX_LOG_WQE_SZ : MLX5_MPWRQ_MAX_LOG_WQE_SZ;
+
+- return min_t(u8, max_log_mpwqe_size, MLX5_MPWRQ_MAX_LOG_WQE_SZ);
++ return min_t(u8, max_log_wqe_size_calc, max_log_wqe_size_cap);
+ }
+
+ u8 mlx5e_mpwrq_pages_per_wqe(struct mlx5_core_dev *mdev, u8 page_shift,
+@@ -1242,7 +1249,6 @@ void mlx5e_build_xdpsq_param(struct mlx5_core_dev *mdev,
+ mlx5e_build_sq_param_common(mdev, param);
+ MLX5_SET(wq, wq, log_wq_sz, params->log_sq_size);
+ param->is_mpw = MLX5E_GET_PFLAG(params, MLX5E_PFLAG_XDP_TX_MPWQE);
+- param->is_xdp_mb = !mlx5e_rx_is_linear_skb(mdev, params, xsk);
+ mlx5e_build_tx_cq_param(mdev, params, ¶m->cqp);
+ }
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/params.h b/drivers/net/ethernet/mellanox/mlx5/core/en/params.h
+index 3f8986f9d86291..bd5877acc5b1eb 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/params.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/params.h
+@@ -33,7 +33,6 @@ struct mlx5e_sq_param {
+ struct mlx5_wq_param wq;
+ bool is_mpw;
+ bool is_tls;
+- bool is_xdp_mb;
+ u16 stop_room;
+ };
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c
+index c8adf309ecad04..dbd9482359e1ec 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c
+@@ -16,7 +16,6 @@ static const char * const sq_sw_state_type_name[] = {
+ [MLX5E_SQ_STATE_VLAN_NEED_L2_INLINE] = "vlan_need_l2_inline",
+ [MLX5E_SQ_STATE_PENDING_XSK_TX] = "pending_xsk_tx",
+ [MLX5E_SQ_STATE_PENDING_TLS_RX_RESYNC] = "pending_tls_rx_resync",
+- [MLX5E_SQ_STATE_XDP_MULTIBUF] = "xdp_multibuf",
+ };
+
+ static int mlx5e_wait_for_sq_flush(struct mlx5e_txqsq *sq)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
+index 94b2916620873c..7a6cc0f4002eaa 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
+@@ -546,6 +546,7 @@ mlx5e_xmit_xdp_frame(struct mlx5e_xdpsq *sq, struct mlx5e_xmit_data *xdptxd,
+ bool inline_ok;
+ bool linear;
+ u16 pi;
++ int i;
+
+ struct mlx5e_xdpsq_stats *stats = sq->stats;
+
+@@ -612,41 +613,33 @@ mlx5e_xmit_xdp_frame(struct mlx5e_xdpsq *sq, struct mlx5e_xmit_data *xdptxd,
+
+ cseg->opmod_idx_opcode = cpu_to_be32((sq->pc << 8) | MLX5_OPCODE_SEND);
+
+- if (test_bit(MLX5E_SQ_STATE_XDP_MULTIBUF, &sq->state)) {
+- int i;
+-
+- memset(&cseg->trailer, 0, sizeof(cseg->trailer));
+- memset(eseg, 0, sizeof(*eseg) - sizeof(eseg->trailer));
+-
+- eseg->inline_hdr.sz = cpu_to_be16(inline_hdr_sz);
++ memset(&cseg->trailer, 0, sizeof(cseg->trailer));
++ memset(eseg, 0, sizeof(*eseg) - sizeof(eseg->trailer));
+
+- for (i = 0; i < num_frags; i++) {
+- skb_frag_t *frag = &xdptxdf->sinfo->frags[i];
+- dma_addr_t addr;
++ eseg->inline_hdr.sz = cpu_to_be16(inline_hdr_sz);
+
+- addr = xdptxdf->dma_arr ? xdptxdf->dma_arr[i] :
+- page_pool_get_dma_addr(skb_frag_page(frag)) +
+- skb_frag_off(frag);
++ for (i = 0; i < num_frags; i++) {
++ skb_frag_t *frag = &xdptxdf->sinfo->frags[i];
++ dma_addr_t addr;
+
+- dseg->addr = cpu_to_be64(addr);
+- dseg->byte_count = cpu_to_be32(skb_frag_size(frag));
+- dseg->lkey = sq->mkey_be;
+- dseg++;
+- }
++ addr = xdptxdf->dma_arr ? xdptxdf->dma_arr[i] :
++ page_pool_get_dma_addr(skb_frag_page(frag)) +
++ skb_frag_off(frag);
+
+- cseg->qpn_ds = cpu_to_be32((sq->sqn << 8) | ds_cnt);
++ dseg->addr = cpu_to_be64(addr);
++ dseg->byte_count = cpu_to_be32(skb_frag_size(frag));
++ dseg->lkey = sq->mkey_be;
++ dseg++;
++ }
+
+- sq->db.wqe_info[pi] = (struct mlx5e_xdp_wqe_info) {
+- .num_wqebbs = num_wqebbs,
+- .num_pkts = 1,
+- };
++ cseg->qpn_ds = cpu_to_be32((sq->sqn << 8) | ds_cnt);
+
+- sq->pc += num_wqebbs;
+- } else {
+- cseg->fm_ce_se = 0;
++ sq->db.wqe_info[pi] = (struct mlx5e_xdp_wqe_info) {
++ .num_wqebbs = num_wqebbs,
++ .num_pkts = 1,
++ };
+
+- sq->pc++;
+- }
++ sq->pc += num_wqebbs;
+
+ xsk_tx_metadata_request(meta, &mlx5e_xsk_tx_metadata_ops, eseg);
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c
+index e7b64679f12195..3cf44fbdf5ee69 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c
+@@ -165,6 +165,25 @@ static void ipsec_rx_status_pass_destroy(struct mlx5e_ipsec *ipsec,
+ #endif
+ }
+
++static void ipsec_rx_rule_add_match_obj(struct mlx5e_ipsec_sa_entry *sa_entry,
++ struct mlx5e_ipsec_rx *rx,
++ struct mlx5_flow_spec *spec)
++{
++ struct mlx5e_ipsec *ipsec = sa_entry->ipsec;
++
++ if (rx == ipsec->rx_esw) {
++ mlx5_esw_ipsec_rx_rule_add_match_obj(sa_entry, spec);
++ } else {
++ MLX5_SET_TO_ONES(fte_match_param, spec->match_criteria,
++ misc_parameters_2.metadata_reg_c_2);
++ MLX5_SET(fte_match_param, spec->match_value,
++ misc_parameters_2.metadata_reg_c_2,
++ sa_entry->ipsec_obj_id | BIT(31));
++
++ spec->match_criteria_enable |= MLX5_MATCH_MISC_PARAMETERS_2;
++ }
++}
++
+ static int rx_add_rule_drop_auth_trailer(struct mlx5e_ipsec_sa_entry *sa_entry,
+ struct mlx5e_ipsec_rx *rx)
+ {
+@@ -200,11 +219,8 @@ static int rx_add_rule_drop_auth_trailer(struct mlx5e_ipsec_sa_entry *sa_entry,
+
+ MLX5_SET_TO_ONES(fte_match_param, spec->match_criteria, misc_parameters_2.ipsec_syndrome);
+ MLX5_SET(fte_match_param, spec->match_value, misc_parameters_2.ipsec_syndrome, 1);
+- MLX5_SET_TO_ONES(fte_match_param, spec->match_criteria, misc_parameters_2.metadata_reg_c_2);
+- MLX5_SET(fte_match_param, spec->match_value,
+- misc_parameters_2.metadata_reg_c_2,
+- sa_entry->ipsec_obj_id | BIT(31));
+ spec->match_criteria_enable = MLX5_MATCH_MISC_PARAMETERS_2;
++ ipsec_rx_rule_add_match_obj(sa_entry, rx, spec);
+ rule = mlx5_add_flow_rules(ft, spec, &flow_act, &dest, 1);
+ if (IS_ERR(rule)) {
+ err = PTR_ERR(rule);
+@@ -281,10 +297,8 @@ static int rx_add_rule_drop_replay(struct mlx5e_ipsec_sa_entry *sa_entry, struct
+
+ MLX5_SET_TO_ONES(fte_match_param, spec->match_criteria, misc_parameters_2.metadata_reg_c_4);
+ MLX5_SET(fte_match_param, spec->match_value, misc_parameters_2.metadata_reg_c_4, 1);
+- MLX5_SET_TO_ONES(fte_match_param, spec->match_criteria, misc_parameters_2.metadata_reg_c_2);
+- MLX5_SET(fte_match_param, spec->match_value, misc_parameters_2.metadata_reg_c_2,
+- sa_entry->ipsec_obj_id | BIT(31));
+ spec->match_criteria_enable = MLX5_MATCH_MISC_PARAMETERS_2;
++ ipsec_rx_rule_add_match_obj(sa_entry, rx, spec);
+ rule = mlx5_add_flow_rules(ft, spec, &flow_act, &dest, 1);
+ if (IS_ERR(rule)) {
+ err = PTR_ERR(rule);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index 01f6a60308cb7c..ca56740f6d2f91 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -359,7 +359,7 @@ static int mlx5e_rq_shampo_hd_info_alloc(struct mlx5e_rq *rq, int node)
+ return 0;
+
+ err_nomem:
+- kvfree(shampo->bitmap);
++ bitmap_free(shampo->bitmap);
+ kvfree(shampo->pages);
+
+ return -ENOMEM;
+@@ -367,7 +367,7 @@ static int mlx5e_rq_shampo_hd_info_alloc(struct mlx5e_rq *rq, int node)
+
+ static void mlx5e_rq_shampo_hd_info_free(struct mlx5e_rq *rq)
+ {
+- kvfree(rq->mpwqe.shampo->bitmap);
++ bitmap_free(rq->mpwqe.shampo->bitmap);
+ kvfree(rq->mpwqe.shampo->pages);
+ }
+
+@@ -2023,41 +2023,12 @@ int mlx5e_open_xdpsq(struct mlx5e_channel *c, struct mlx5e_params *params,
+ csp.min_inline_mode = sq->min_inline_mode;
+ set_bit(MLX5E_SQ_STATE_ENABLED, &sq->state);
+
+- if (param->is_xdp_mb)
+- set_bit(MLX5E_SQ_STATE_XDP_MULTIBUF, &sq->state);
+-
+ err = mlx5e_create_sq_rdy(c->mdev, param, &csp, 0, &sq->sqn);
+ if (err)
+ goto err_free_xdpsq;
+
+ mlx5e_set_xmit_fp(sq, param->is_mpw);
+
+- if (!param->is_mpw && !test_bit(MLX5E_SQ_STATE_XDP_MULTIBUF, &sq->state)) {
+- unsigned int ds_cnt = MLX5E_TX_WQE_EMPTY_DS_COUNT + 1;
+- unsigned int inline_hdr_sz = 0;
+- int i;
+-
+- if (sq->min_inline_mode != MLX5_INLINE_MODE_NONE) {
+- inline_hdr_sz = MLX5E_XDP_MIN_INLINE;
+- ds_cnt++;
+- }
+-
+- /* Pre initialize fixed WQE fields */
+- for (i = 0; i < mlx5_wq_cyc_get_size(&sq->wq); i++) {
+- struct mlx5e_tx_wqe *wqe = mlx5_wq_cyc_get_wqe(&sq->wq, i);
+- struct mlx5_wqe_ctrl_seg *cseg = &wqe->ctrl;
+- struct mlx5_wqe_eth_seg *eseg = &wqe->eth;
+-
+- sq->db.wqe_info[i] = (struct mlx5e_xdp_wqe_info) {
+- .num_wqebbs = 1,
+- .num_pkts = 1,
+- };
+-
+- cseg->qpn_ds = cpu_to_be32((sq->sqn << 8) | ds_cnt);
+- eseg->inline_hdr.sz = cpu_to_be16(inline_hdr_sz);
+- }
+- }
+-
+ return 0;
+
+ err_free_xdpsq:
+@@ -3816,8 +3787,11 @@ static int mlx5e_setup_tc_mqprio(struct mlx5e_priv *priv,
+ /* MQPRIO is another toplevel qdisc that can't be attached
+ * simultaneously with the offloaded HTB.
+ */
+- if (WARN_ON(mlx5e_selq_is_htb_enabled(&priv->selq)))
+- return -EINVAL;
++ if (mlx5e_selq_is_htb_enabled(&priv->selq)) {
++ NL_SET_ERR_MSG_MOD(mqprio->extack,
++ "MQPRIO cannot be configured when HTB offload is enabled.");
++ return -EOPNOTSUPP;
++ }
+
+ switch (mqprio->mode) {
+ case TC_MQPRIO_MODE_DCB:
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+index fdff9fd8a89ec1..07f38f472a2796 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+@@ -65,6 +65,7 @@
+ #define MLX5E_REP_PARAMS_DEF_LOG_SQ_SIZE \
+ max(0x7, MLX5E_PARAMS_MINIMUM_LOG_SQ_SIZE)
+ #define MLX5E_REP_PARAMS_DEF_NUM_CHANNELS 1
++#define MLX5E_REP_PARAMS_DEF_LOG_RQ_SIZE 0x8
+
+ static const char mlx5e_rep_driver_name[] = "mlx5e_rep";
+
+@@ -855,6 +856,8 @@ static void mlx5e_build_rep_params(struct net_device *netdev)
+
+ /* RQ */
+ mlx5e_build_rq_params(mdev, params);
++ if (!mlx5e_is_uplink_rep(priv) && mlx5_core_is_ecpf(mdev))
++ params->log_rq_mtu_frames = MLX5E_REP_PARAMS_DEF_LOG_RQ_SIZE;
+
+ /* If netdev is already registered (e.g. move from nic profile to uplink,
+ * RTNL lock must be held before triggering netdev notifiers.
+@@ -886,6 +889,8 @@ static void mlx5e_build_rep_netdev(struct net_device *netdev,
+ netdev->ethtool_ops = &mlx5e_rep_ethtool_ops;
+
+ netdev->watchdog_timeo = 15 * HZ;
++ if (mlx5_core_is_ecpf(mdev))
++ netdev->tx_queue_len = 1 << MLX5E_REP_PARAMS_DEF_LOG_SQ_SIZE;
+
+ #if IS_ENABLED(CONFIG_MLX5_CLS_ACT)
+ netdev->hw_features |= NETIF_F_HW_TC;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_selftest.c b/drivers/net/ethernet/mellanox/mlx5/core/en_selftest.c
+index 1d60465cc2ca4f..2f7a543feca623 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_selftest.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_selftest.c
+@@ -166,6 +166,9 @@ mlx5e_test_loopback_validate(struct sk_buff *skb,
+ struct udphdr *udph;
+ struct iphdr *iph;
+
++ if (skb_linearize(skb))
++ goto out;
++
+ /* We are only going to peek, no need to clone the SKB */
+ if (MLX5E_TEST_PKT_SIZE - ETH_HLEN > skb_headlen(skb))
+ goto out;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/ipsec_fs.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/ipsec_fs.c
+index ed977ae75fab89..4bba2884c1c058 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/ipsec_fs.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/ipsec_fs.c
+@@ -85,6 +85,19 @@ int mlx5_esw_ipsec_rx_setup_modify_header(struct mlx5e_ipsec_sa_entry *sa_entry,
+ return err;
+ }
+
++void mlx5_esw_ipsec_rx_rule_add_match_obj(struct mlx5e_ipsec_sa_entry *sa_entry,
++ struct mlx5_flow_spec *spec)
++{
++ MLX5_SET(fte_match_param, spec->match_criteria,
++ misc_parameters_2.metadata_reg_c_1,
++ ESW_IPSEC_RX_MAPPED_ID_MATCH_MASK);
++ MLX5_SET(fte_match_param, spec->match_value,
++ misc_parameters_2.metadata_reg_c_1,
++ sa_entry->rx_mapped_id << ESW_ZONE_ID_BITS);
++
++ spec->match_criteria_enable |= MLX5_MATCH_MISC_PARAMETERS_2;
++}
++
+ void mlx5_esw_ipsec_rx_id_mapping_remove(struct mlx5e_ipsec_sa_entry *sa_entry)
+ {
+ struct mlx5e_ipsec *ipsec = sa_entry->ipsec;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/ipsec_fs.h b/drivers/net/ethernet/mellanox/mlx5/core/esw/ipsec_fs.h
+index ac9c65b89166e6..514c15258b1d13 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/ipsec_fs.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/ipsec_fs.h
+@@ -20,6 +20,8 @@ int mlx5_esw_ipsec_rx_ipsec_obj_id_search(struct mlx5e_priv *priv, u32 id,
+ void mlx5_esw_ipsec_tx_create_attr_set(struct mlx5e_ipsec *ipsec,
+ struct mlx5e_ipsec_tx_create_attr *attr);
+ void mlx5_esw_ipsec_restore_dest_uplink(struct mlx5_core_dev *mdev);
++void mlx5_esw_ipsec_rx_rule_add_match_obj(struct mlx5e_ipsec_sa_entry *sa_entry,
++ struct mlx5_flow_spec *spec);
+ #else
+ static inline void mlx5_esw_ipsec_rx_create_attr_set(struct mlx5e_ipsec *ipsec,
+ struct mlx5e_ipsec_rx_create_attr *attr) {}
+@@ -48,5 +50,8 @@ static inline void mlx5_esw_ipsec_tx_create_attr_set(struct mlx5e_ipsec *ipsec,
+ struct mlx5e_ipsec_tx_create_attr *attr) {}
+
+ static inline void mlx5_esw_ipsec_restore_dest_uplink(struct mlx5_core_dev *mdev) {}
++static inline void
++mlx5_esw_ipsec_rx_rule_add_match_obj(struct mlx5e_ipsec_sa_entry *sa_entry,
++ struct mlx5_flow_spec *spec) {}
+ #endif /* CONFIG_MLX5_ESWITCH */
+ #endif /* __MLX5_ESW_IPSEC_FS_H__ */
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/legacy.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/legacy.c
+index 45183de424f3dd..76382626ad41d3 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/legacy.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/legacy.c
+@@ -96,7 +96,7 @@ static int esw_create_legacy_fdb_table(struct mlx5_eswitch *esw)
+ if (!flow_group_in)
+ return -ENOMEM;
+
+- ft_attr.max_fte = POOL_NEXT_SIZE;
++ ft_attr.max_fte = MLX5_FS_MAX_POOL_SIZE;
+ ft_attr.prio = LEGACY_FDB_PRIO;
+ fdb = mlx5_create_flow_table(root_ns, &ft_attr);
+ if (IS_ERR(fdb)) {
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
+index 823c1ba456cd18..803bacf2a95e6c 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
+@@ -305,8 +305,9 @@ static int esw_qos_set_node_min_rate(struct mlx5_esw_sched_node *node,
+ return 0;
+ }
+
+-static int esw_qos_create_node_sched_elem(struct mlx5_core_dev *dev, u32 parent_element_id,
+- u32 *tsar_ix)
++static int
++esw_qos_create_node_sched_elem(struct mlx5_core_dev *dev, u32 parent_element_id,
++ u32 max_rate, u32 bw_share, u32 *tsar_ix)
+ {
+ u32 tsar_ctx[MLX5_ST_SZ_DW(scheduling_context)] = {};
+ void *attr;
+@@ -323,6 +324,8 @@ static int esw_qos_create_node_sched_elem(struct mlx5_core_dev *dev, u32 parent_
+ SCHEDULING_CONTEXT_ELEMENT_TYPE_TSAR);
+ MLX5_SET(scheduling_context, tsar_ctx, parent_element_id,
+ parent_element_id);
++ MLX5_SET(scheduling_context, tsar_ctx, max_average_bw, max_rate);
++ MLX5_SET(scheduling_context, tsar_ctx, bw_share, bw_share);
+ attr = MLX5_ADDR_OF(scheduling_context, tsar_ctx, element_attributes);
+ MLX5_SET(tsar_element, attr, tsar_type, TSAR_ELEMENT_TSAR_TYPE_DWRR);
+
+@@ -396,7 +399,8 @@ __esw_qos_create_vports_sched_node(struct mlx5_eswitch *esw, struct mlx5_esw_sch
+ u32 tsar_ix;
+ int err;
+
+- err = esw_qos_create_node_sched_elem(esw->dev, esw->qos.root_tsar_ix, &tsar_ix);
++ err = esw_qos_create_node_sched_elem(esw->dev, esw->qos.root_tsar_ix, 0,
++ 0, &tsar_ix);
+ if (err) {
+ NL_SET_ERR_MSG_MOD(extack, "E-Switch create TSAR for node failed");
+ return ERR_PTR(err);
+@@ -463,7 +467,8 @@ static int esw_qos_create(struct mlx5_eswitch *esw, struct netlink_ext_ack *exta
+ if (!MLX5_CAP_GEN(dev, qos) || !MLX5_CAP_QOS(dev, esw_scheduling))
+ return -EOPNOTSUPP;
+
+- err = esw_qos_create_node_sched_elem(esw->dev, 0, &esw->qos.root_tsar_ix);
++ err = esw_qos_create_node_sched_elem(esw->dev, 0, 0, 0,
++ &esw->qos.root_tsar_ix);
+ if (err) {
+ esw_warn(dev, "E-Switch create root TSAR failed (%d)\n", err);
+ return err;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/events.c b/drivers/net/ethernet/mellanox/mlx5/core/events.c
+index d91ea53eb394d1..fc6e56305cbbc8 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/events.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/events.c
+@@ -163,11 +163,16 @@ static int temp_warn(struct notifier_block *nb, unsigned long type, void *data)
+ u64 value_msb;
+
+ value_lsb = be64_to_cpu(eqe->data.temp_warning.sensor_warning_lsb);
++ /* bit 1-63 are not supported for NICs,
++ * hence read only bit 0 (asic) from lsb.
++ */
++ value_lsb &= 0x1;
+ value_msb = be64_to_cpu(eqe->data.temp_warning.sensor_warning_msb);
+
+- mlx5_core_warn(events->dev,
+- "High temperature on sensors with bit set %llx %llx",
+- value_msb, value_lsb);
++ if (net_ratelimit())
++ mlx5_core_warn(events->dev,
++ "High temperature on sensors with bit set %llx %llx",
++ value_msb, value_lsb);
+
+ return NOTIFY_OK;
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_ft_pool.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_ft_pool.c
+index c14590acc77260..f6abfd00d7e68c 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_ft_pool.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_ft_pool.c
+@@ -50,10 +50,12 @@ mlx5_ft_pool_get_avail_sz(struct mlx5_core_dev *dev, enum fs_flow_table_type tab
+ int i, found_i = -1;
+
+ for (i = ARRAY_SIZE(FT_POOLS) - 1; i >= 0; i--) {
+- if (dev->priv.ft_pool->ft_left[i] && FT_POOLS[i] >= desired_size &&
++ if (dev->priv.ft_pool->ft_left[i] &&
++ (FT_POOLS[i] >= desired_size ||
++ desired_size == MLX5_FS_MAX_POOL_SIZE) &&
+ FT_POOLS[i] <= max_ft_size) {
+ found_i = i;
+- if (desired_size != POOL_NEXT_SIZE)
++ if (desired_size != MLX5_FS_MAX_POOL_SIZE)
+ break;
+ }
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_ft_pool.h b/drivers/net/ethernet/mellanox/mlx5/core/fs_ft_pool.h
+index 25f4274b372b56..173e312db7204f 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_ft_pool.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_ft_pool.h
+@@ -7,8 +7,6 @@
+ #include <linux/mlx5/driver.h>
+ #include "fs_core.h"
+
+-#define POOL_NEXT_SIZE 0
+-
+ int mlx5_ft_pool_init(struct mlx5_core_dev *dev);
+ void mlx5_ft_pool_destroy(struct mlx5_core_dev *dev);
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/health.c b/drivers/net/ethernet/mellanox/mlx5/core/health.c
+index a6329ca2d9bffb..52c8035547be5c 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/health.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/health.c
+@@ -799,6 +799,7 @@ static void poll_health(struct timer_list *t)
+ health->prev = count;
+ if (health->miss_counter == MAX_MISSES) {
+ mlx5_core_err(dev, "device's health compromised - reached miss count\n");
++ health->synd = ioread8(&h->synd);
+ print_health_info(dev);
+ queue_work(health->wq, &health->report_work);
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c
+index 711d14dea2485f..d313cb7f0ed88c 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c
+@@ -161,7 +161,8 @@ mlx5_chains_create_table(struct mlx5_fs_chains *chains,
+ ft_attr.flags |= (MLX5_FLOW_TABLE_TUNNEL_EN_REFORMAT |
+ MLX5_FLOW_TABLE_TUNNEL_EN_DECAP);
+
+- sz = (chain == mlx5_chains_get_nf_ft_chain(chains)) ? FT_TBL_SZ : POOL_NEXT_SIZE;
++ sz = (chain == mlx5_chains_get_nf_ft_chain(chains)) ?
++ FT_TBL_SZ : MLX5_FS_MAX_POOL_SIZE;
+ ft_attr.max_fte = sz;
+
+ /* We use chains_default_ft(chains) as the table's next_ft till
+diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_fw.c b/drivers/net/ethernet/meta/fbnic/fbnic_fw.c
+index 9351a874689f83..cdd6c3a2109446 100644
+--- a/drivers/net/ethernet/meta/fbnic/fbnic_fw.c
++++ b/drivers/net/ethernet/meta/fbnic/fbnic_fw.c
+@@ -747,9 +747,9 @@ int fbnic_fw_xmit_tsene_read_msg(struct fbnic_dev *fbd,
+ }
+
+ static const struct fbnic_tlv_index fbnic_tsene_read_resp_index[] = {
+- FBNIC_TLV_ATTR_S32(FBNIC_TSENE_THERM),
+- FBNIC_TLV_ATTR_S32(FBNIC_TSENE_VOLT),
+- FBNIC_TLV_ATTR_S32(FBNIC_TSENE_ERROR),
++ FBNIC_TLV_ATTR_S32(FBNIC_FW_TSENE_THERM),
++ FBNIC_TLV_ATTR_S32(FBNIC_FW_TSENE_VOLT),
++ FBNIC_TLV_ATTR_S32(FBNIC_FW_TSENE_ERROR),
+ FBNIC_TLV_ATTR_LAST
+ };
+
+@@ -766,21 +766,21 @@ static int fbnic_fw_parse_tsene_read_resp(void *opaque,
+ if (!cmpl_data)
+ return -EINVAL;
+
+- if (results[FBNIC_TSENE_ERROR]) {
+- err = fbnic_tlv_attr_get_unsigned(results[FBNIC_TSENE_ERROR]);
++ if (results[FBNIC_FW_TSENE_ERROR]) {
++ err = fbnic_tlv_attr_get_unsigned(results[FBNIC_FW_TSENE_ERROR]);
+ if (err)
+ goto exit_complete;
+ }
+
+- if (!results[FBNIC_TSENE_THERM] || !results[FBNIC_TSENE_VOLT]) {
++ if (!results[FBNIC_FW_TSENE_THERM] || !results[FBNIC_FW_TSENE_VOLT]) {
+ err = -EINVAL;
+ goto exit_complete;
+ }
+
+ cmpl_data->u.tsene.millidegrees =
+- fbnic_tlv_attr_get_signed(results[FBNIC_TSENE_THERM]);
++ fbnic_tlv_attr_get_signed(results[FBNIC_FW_TSENE_THERM]);
+ cmpl_data->u.tsene.millivolts =
+- fbnic_tlv_attr_get_signed(results[FBNIC_TSENE_VOLT]);
++ fbnic_tlv_attr_get_signed(results[FBNIC_FW_TSENE_VOLT]);
+
+ exit_complete:
+ cmpl_data->result = err;
+diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_fw.h b/drivers/net/ethernet/meta/fbnic/fbnic_fw.h
+index fe68333d51b18f..a3618e7826c25a 100644
+--- a/drivers/net/ethernet/meta/fbnic/fbnic_fw.h
++++ b/drivers/net/ethernet/meta/fbnic/fbnic_fw.h
+@@ -139,10 +139,10 @@ enum {
+ };
+
+ enum {
+- FBNIC_TSENE_THERM = 0x0,
+- FBNIC_TSENE_VOLT = 0x1,
+- FBNIC_TSENE_ERROR = 0x2,
+- FBNIC_TSENE_MSG_MAX
++ FBNIC_FW_TSENE_THERM = 0x0,
++ FBNIC_FW_TSENE_VOLT = 0x1,
++ FBNIC_FW_TSENE_ERROR = 0x2,
++ FBNIC_FW_TSENE_MSG_MAX
+ };
+
+ enum {
+diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_netdev.c b/drivers/net/ethernet/meta/fbnic/fbnic_netdev.c
+index 7a96b6ee773f31..1db57c42333efa 100644
+--- a/drivers/net/ethernet/meta/fbnic/fbnic_netdev.c
++++ b/drivers/net/ethernet/meta/fbnic/fbnic_netdev.c
+@@ -628,6 +628,8 @@ struct net_device *fbnic_netdev_alloc(struct fbnic_dev *fbd)
+ fbnic_rss_key_fill(fbn->rss_key);
+ fbnic_rss_init_en_mask(fbn);
+
++ netdev->priv_flags |= IFF_UNICAST_FLT;
++
+ netdev->features |=
+ NETIF_F_RXHASH |
+ NETIF_F_SG |
+diff --git a/drivers/net/ethernet/microchip/lan743x_main.c b/drivers/net/ethernet/microchip/lan743x_main.c
+index e2d6bfb5d69334..a70b88037a208b 100644
+--- a/drivers/net/ethernet/microchip/lan743x_main.c
++++ b/drivers/net/ethernet/microchip/lan743x_main.c
+@@ -3495,6 +3495,7 @@ static int lan743x_hardware_init(struct lan743x_adapter *adapter,
+ struct pci_dev *pdev)
+ {
+ struct lan743x_tx *tx;
++ u32 sgmii_ctl;
+ int index;
+ int ret;
+
+@@ -3507,6 +3508,15 @@ static int lan743x_hardware_init(struct lan743x_adapter *adapter,
+ spin_lock_init(&adapter->eth_syslock_spinlock);
+ mutex_init(&adapter->sgmii_rw_lock);
+ pci11x1x_set_rfe_rd_fifo_threshold(adapter);
++ sgmii_ctl = lan743x_csr_read(adapter, SGMII_CTL);
++ if (adapter->is_sgmii_en) {
++ sgmii_ctl |= SGMII_CTL_SGMII_ENABLE_;
++ sgmii_ctl &= ~SGMII_CTL_SGMII_POWER_DN_;
++ } else {
++ sgmii_ctl &= ~SGMII_CTL_SGMII_ENABLE_;
++ sgmii_ctl |= SGMII_CTL_SGMII_POWER_DN_;
++ }
++ lan743x_csr_write(adapter, SGMII_CTL, sgmii_ctl);
+ } else {
+ adapter->max_tx_channels = LAN743X_MAX_TX_CHANNELS;
+ adapter->used_tx_channels = LAN743X_USED_TX_CHANNELS;
+@@ -3558,7 +3568,6 @@ static int lan743x_hardware_init(struct lan743x_adapter *adapter,
+
+ static int lan743x_mdiobus_init(struct lan743x_adapter *adapter)
+ {
+- u32 sgmii_ctl;
+ int ret;
+
+ adapter->mdiobus = devm_mdiobus_alloc(&adapter->pdev->dev);
+@@ -3570,10 +3579,6 @@ static int lan743x_mdiobus_init(struct lan743x_adapter *adapter)
+ adapter->mdiobus->priv = (void *)adapter;
+ if (adapter->is_pci11x1x) {
+ if (adapter->is_sgmii_en) {
+- sgmii_ctl = lan743x_csr_read(adapter, SGMII_CTL);
+- sgmii_ctl |= SGMII_CTL_SGMII_ENABLE_;
+- sgmii_ctl &= ~SGMII_CTL_SGMII_POWER_DN_;
+- lan743x_csr_write(adapter, SGMII_CTL, sgmii_ctl);
+ netif_dbg(adapter, drv, adapter->netdev,
+ "SGMII operation\n");
+ adapter->mdiobus->read = lan743x_mdiobus_read_c22;
+@@ -3584,10 +3589,6 @@ static int lan743x_mdiobus_init(struct lan743x_adapter *adapter)
+ netif_dbg(adapter, drv, adapter->netdev,
+ "lan743x-mdiobus-c45\n");
+ } else {
+- sgmii_ctl = lan743x_csr_read(adapter, SGMII_CTL);
+- sgmii_ctl &= ~SGMII_CTL_SGMII_ENABLE_;
+- sgmii_ctl |= SGMII_CTL_SGMII_POWER_DN_;
+- lan743x_csr_write(adapter, SGMII_CTL, sgmii_ctl);
+ netif_dbg(adapter, drv, adapter->netdev,
+ "RGMII operation\n");
+ // Only C22 support when RGMII I/F
+diff --git a/drivers/net/ethernet/microsoft/mana/gdma_main.c b/drivers/net/ethernet/microsoft/mana/gdma_main.c
+index 638ef64d639f3d..f412e17b0d505e 100644
+--- a/drivers/net/ethernet/microsoft/mana/gdma_main.c
++++ b/drivers/net/ethernet/microsoft/mana/gdma_main.c
+@@ -1047,7 +1047,7 @@ static u32 mana_gd_write_client_oob(const struct gdma_wqe_request *wqe_req,
+ header->inline_oob_size_div4 = client_oob_size / sizeof(u32);
+
+ if (oob_in_sgl) {
+- WARN_ON_ONCE(!pad_data || wqe_req->num_sge < 2);
++ WARN_ON_ONCE(wqe_req->num_sge < 2);
+
+ header->client_oob_in_sgl = 1;
+
+diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c
+index 5a5eba49c6515b..267105ba927442 100644
+--- a/drivers/net/ethernet/realtek/r8169_main.c
++++ b/drivers/net/ethernet/realtek/r8169_main.c
+@@ -89,6 +89,7 @@
+ #define JUMBO_6K (6 * SZ_1K - VLAN_ETH_HLEN - ETH_FCS_LEN)
+ #define JUMBO_7K (7 * SZ_1K - VLAN_ETH_HLEN - ETH_FCS_LEN)
+ #define JUMBO_9K (9 * SZ_1K - VLAN_ETH_HLEN - ETH_FCS_LEN)
++#define JUMBO_16K (SZ_16K - VLAN_ETH_HLEN - ETH_FCS_LEN)
+
+ static const struct {
+ const char *name;
+@@ -2850,6 +2851,32 @@ static u32 rtl_csi_read(struct rtl8169_private *tp, int addr)
+ RTL_R32(tp, CSIDR) : ~0;
+ }
+
++static void rtl_disable_zrxdc_timeout(struct rtl8169_private *tp)
++{
++ struct pci_dev *pdev = tp->pci_dev;
++ u32 csi;
++ int rc;
++ u8 val;
++
++#define RTL_GEN3_RELATED_OFF 0x0890
++#define RTL_GEN3_ZRXDC_NONCOMPL 0x1
++ if (pdev->cfg_size > RTL_GEN3_RELATED_OFF) {
++ rc = pci_read_config_byte(pdev, RTL_GEN3_RELATED_OFF, &val);
++ if (rc == PCIBIOS_SUCCESSFUL) {
++ val &= ~RTL_GEN3_ZRXDC_NONCOMPL;
++ rc = pci_write_config_byte(pdev, RTL_GEN3_RELATED_OFF,
++ val);
++ if (rc == PCIBIOS_SUCCESSFUL)
++ return;
++ }
++ }
++
++ netdev_notice_once(tp->dev,
++ "No native access to PCI extended config space, falling back to CSI\n");
++ csi = rtl_csi_read(tp, RTL_GEN3_RELATED_OFF);
++ rtl_csi_write(tp, RTL_GEN3_RELATED_OFF, csi & ~RTL_GEN3_ZRXDC_NONCOMPL);
++}
++
+ static void rtl_set_aspm_entry_latency(struct rtl8169_private *tp, u8 val)
+ {
+ struct pci_dev *pdev = tp->pci_dev;
+@@ -3822,6 +3849,7 @@ static void rtl_hw_start_8125d(struct rtl8169_private *tp)
+
+ static void rtl_hw_start_8126a(struct rtl8169_private *tp)
+ {
++ rtl_disable_zrxdc_timeout(tp);
+ rtl_set_def_aspm_entry_latency(tp);
+ rtl_hw_start_8125_common(tp);
+ }
+@@ -5222,6 +5250,7 @@ static int r8169_mdio_register(struct rtl8169_private *tp)
+ new_bus->priv = tp;
+ new_bus->parent = &pdev->dev;
+ new_bus->irq[0] = PHY_MAC_INTERRUPT;
++ new_bus->phy_mask = GENMASK(31, 1);
+ snprintf(new_bus->id, MII_BUS_ID_SIZE, "r8169-%x-%x",
+ pci_domain_nr(pdev->bus), pci_dev_id(pdev));
+
+@@ -5326,6 +5355,9 @@ static int rtl_jumbo_max(struct rtl8169_private *tp)
+ /* RTL8168c */
+ case RTL_GIGA_MAC_VER_18 ... RTL_GIGA_MAC_VER_24:
+ return JUMBO_6K;
++ /* RTL8125/8126 */
++ case RTL_GIGA_MAC_VER_61 ... RTL_GIGA_MAC_VER_71:
++ return JUMBO_16K;
+ default:
+ return JUMBO_9K;
+ }
+diff --git a/drivers/net/ethernet/stmicro/stmmac/common.h b/drivers/net/ethernet/stmicro/stmmac/common.h
+index e25db747a81a5b..c660eb933f24b8 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/common.h
++++ b/drivers/net/ethernet/stmicro/stmmac/common.h
+@@ -101,8 +101,8 @@ struct stmmac_rxq_stats {
+ /* Updates on each CPU protected by not allowing nested irqs. */
+ struct stmmac_pcpu_stats {
+ struct u64_stats_sync syncp;
+- u64_stats_t rx_normal_irq_n[MTL_MAX_TX_QUEUES];
+- u64_stats_t tx_normal_irq_n[MTL_MAX_RX_QUEUES];
++ u64_stats_t rx_normal_irq_n[MTL_MAX_RX_QUEUES];
++ u64_stats_t tx_normal_irq_n[MTL_MAX_TX_QUEUES];
+ };
+
+ /* Extra statistic and debug information exposed by ethtool */
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-loongson.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-loongson.c
+index ab7c2750c10425..702ea5a00b56d3 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-loongson.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-loongson.c
+@@ -590,6 +590,9 @@ static int loongson_dwmac_probe(struct pci_dev *pdev, const struct pci_device_id
+ if (ret)
+ goto err_disable_device;
+
++ plat->tx_fifo_size = SZ_16K * plat->tx_queues_to_use;
++ plat->rx_fifo_size = SZ_16K * plat->rx_queues_to_use;
++
+ if (dev_of_node(&pdev->dev))
+ ret = loongson_dwmac_dt_config(pdev, plat, &res);
+ else
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c
+index a4dc89e23a68e4..a33be23121b353 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c
+@@ -33,6 +33,7 @@ struct rk_gmac_ops {
+ void (*set_clock_selection)(struct rk_priv_data *bsp_priv, bool input,
+ bool enable);
+ void (*integrated_phy_powerup)(struct rk_priv_data *bsp_priv);
++ bool php_grf_required;
+ bool regs_valid;
+ u32 regs[];
+ };
+@@ -1254,6 +1255,7 @@ static const struct rk_gmac_ops rk3576_ops = {
+ .set_rgmii_speed = rk3576_set_gmac_speed,
+ .set_rmii_speed = rk3576_set_gmac_speed,
+ .set_clock_selection = rk3576_set_clock_selection,
++ .php_grf_required = true,
+ .regs_valid = true,
+ .regs = {
+ 0x2a220000, /* gmac0 */
+@@ -1401,6 +1403,7 @@ static const struct rk_gmac_ops rk3588_ops = {
+ .set_rgmii_speed = rk3588_set_gmac_speed,
+ .set_rmii_speed = rk3588_set_gmac_speed,
+ .set_clock_selection = rk3588_set_clock_selection,
++ .php_grf_required = true,
+ .regs_valid = true,
+ .regs = {
+ 0xfe1b0000, /* gmac0 */
+@@ -1812,8 +1815,22 @@ static struct rk_priv_data *rk_gmac_setup(struct platform_device *pdev,
+
+ bsp_priv->grf = syscon_regmap_lookup_by_phandle(dev->of_node,
+ "rockchip,grf");
+- bsp_priv->php_grf = syscon_regmap_lookup_by_phandle(dev->of_node,
+- "rockchip,php-grf");
++ if (IS_ERR(bsp_priv->grf)) {
++ dev_err_probe(dev, PTR_ERR(bsp_priv->grf),
++ "failed to lookup rockchip,grf\n");
++ return ERR_CAST(bsp_priv->grf);
++ }
++
++ if (ops->php_grf_required) {
++ bsp_priv->php_grf =
++ syscon_regmap_lookup_by_phandle(dev->of_node,
++ "rockchip,php-grf");
++ if (IS_ERR(bsp_priv->php_grf)) {
++ dev_err_probe(dev, PTR_ERR(bsp_priv->php_grf),
++ "failed to lookup rockchip,php-grf\n");
++ return ERR_CAST(bsp_priv->php_grf);
++ }
++ }
+
+ if (plat->phy_node) {
+ bsp_priv->integrated_phy = of_property_read_bool(plat->phy_node,
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
+index 4b7b2582a1201d..9d31fa5bbe15ee 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
+@@ -964,7 +964,7 @@ static int sun8i_dwmac_set_syscon(struct device *dev,
+ /* of_mdio_parse_addr returns a valid (0 ~ 31) PHY
+ * address. No need to mask it again.
+ */
+- reg |= 1 << H3_EPHY_ADDR_SHIFT;
++ reg |= ret << H3_EPHY_ADDR_SHIFT;
+ } else {
+ /* For SoCs without internal PHY the PHY selection bit should be
+ * set to 0 (external PHY).
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac.h b/drivers/net/ethernet/stmicro/stmmac/stmmac.h
+index f05cae103d836c..dae279ee2c2808 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac.h
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac.h
+@@ -257,7 +257,7 @@ struct stmmac_priv {
+ /* Frequently used values are kept adjacent for cache effect */
+ u32 tx_coal_frames[MTL_MAX_TX_QUEUES];
+ u32 tx_coal_timer[MTL_MAX_TX_QUEUES];
+- u32 rx_coal_frames[MTL_MAX_TX_QUEUES];
++ u32 rx_coal_frames[MTL_MAX_RX_QUEUES];
+
+ int hwts_tx_en;
+ bool tx_path_in_lpi_mode;
+@@ -265,8 +265,7 @@ struct stmmac_priv {
+ int sph;
+ int sph_cap;
+ u32 sarc_type;
+-
+- u32 rx_riwt[MTL_MAX_TX_QUEUES];
++ u32 rx_riwt[MTL_MAX_RX_QUEUES];
+ int hwts_rx_en;
+
+ void __iomem *ioaddr;
+@@ -343,7 +342,7 @@ struct stmmac_priv {
+ char int_name_sfty[IFNAMSIZ + 10];
+ char int_name_sfty_ce[IFNAMSIZ + 10];
+ char int_name_sfty_ue[IFNAMSIZ + 10];
+- char int_name_rx_irq[MTL_MAX_TX_QUEUES][IFNAMSIZ + 14];
++ char int_name_rx_irq[MTL_MAX_RX_QUEUES][IFNAMSIZ + 14];
+ char int_name_tx_irq[MTL_MAX_TX_QUEUES][IFNAMSIZ + 18];
+
+ #ifdef CONFIG_DEBUG_FS
+diff --git a/drivers/net/ethernet/tehuti/tn40.c b/drivers/net/ethernet/tehuti/tn40.c
+index 259bdac24cf211..558b791a97eddd 100644
+--- a/drivers/net/ethernet/tehuti/tn40.c
++++ b/drivers/net/ethernet/tehuti/tn40.c
+@@ -1778,7 +1778,7 @@ static int tn40_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ ret = tn40_phy_register(priv);
+ if (ret) {
+ dev_err(&pdev->dev, "failed to set up PHY.\n");
+- goto err_free_irq;
++ goto err_cleanup_swnodes;
+ }
+
+ ret = tn40_priv_init(priv);
+@@ -1795,6 +1795,8 @@ static int tn40_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ return 0;
+ err_unregister_phydev:
+ tn40_phy_unregister(priv);
++err_cleanup_swnodes:
++ tn40_swnodes_cleanup(priv);
+ err_free_irq:
+ pci_free_irq_vectors(pdev);
+ err_unset_drvdata:
+@@ -1816,6 +1818,7 @@ static void tn40_remove(struct pci_dev *pdev)
+ unregister_netdev(ndev);
+
+ tn40_phy_unregister(priv);
++ tn40_swnodes_cleanup(priv);
+ pci_free_irq_vectors(priv->pdev);
+ pci_set_drvdata(pdev, NULL);
+ iounmap(priv->regs);
+@@ -1832,6 +1835,10 @@ static const struct pci_device_id tn40_id_table[] = {
+ PCI_VENDOR_ID_ASUSTEK, 0x8709) },
+ { PCI_DEVICE_SUB(PCI_VENDOR_ID_TEHUTI, 0x4022,
+ PCI_VENDOR_ID_EDIMAX, 0x8103) },
++ { PCI_DEVICE_SUB(PCI_VENDOR_ID_TEHUTI, PCI_DEVICE_ID_TEHUTI_TN9510,
++ PCI_VENDOR_ID_TEHUTI, 0x3015) },
++ { PCI_DEVICE_SUB(PCI_VENDOR_ID_TEHUTI, PCI_DEVICE_ID_TEHUTI_TN9510,
++ PCI_VENDOR_ID_EDIMAX, 0x8102) },
+ { }
+ };
+
+diff --git a/drivers/net/ethernet/tehuti/tn40.h b/drivers/net/ethernet/tehuti/tn40.h
+index 490781fe512053..25da8686d4691d 100644
+--- a/drivers/net/ethernet/tehuti/tn40.h
++++ b/drivers/net/ethernet/tehuti/tn40.h
+@@ -4,10 +4,13 @@
+ #ifndef _TN40_H_
+ #define _TN40_H_
+
++#include <linux/property.h>
+ #include "tn40_regs.h"
+
+ #define TN40_DRV_NAME "tn40xx"
+
++#define PCI_DEVICE_ID_TEHUTI_TN9510 0x4025
++
+ #define TN40_MDIO_SPEED_1MHZ (1)
+ #define TN40_MDIO_SPEED_6MHZ (6)
+
+@@ -102,10 +105,39 @@ struct tn40_txdb {
+ int size; /* Number of elements in the db */
+ };
+
++#define NODE_PROP(_NAME, _PROP) ( \
++ (const struct software_node) { \
++ .name = _NAME, \
++ .properties = _PROP, \
++ })
++
++#define NODE_PAR_PROP(_NAME, _PAR, _PROP) ( \
++ (const struct software_node) { \
++ .name = _NAME, \
++ .parent = _PAR, \
++ .properties = _PROP, \
++ })
++
++enum tn40_swnodes {
++ SWNODE_MDIO,
++ SWNODE_PHY,
++ SWNODE_MAX
++};
++
++struct tn40_nodes {
++ char phy_name[32];
++ char mdio_name[32];
++ struct property_entry phy_props[3];
++ struct software_node swnodes[SWNODE_MAX];
++ const struct software_node *group[SWNODE_MAX + 1];
++};
++
+ struct tn40_priv {
+ struct net_device *ndev;
+ struct pci_dev *pdev;
+
++ struct tn40_nodes nodes;
++
+ struct napi_struct napi;
+ /* RX FIFOs: 1 for data (full) descs, and 2 for free descs */
+ struct tn40_rxd_fifo rxd_fifo0;
+@@ -225,6 +257,7 @@ static inline void tn40_write_reg(struct tn40_priv *priv, u32 reg, u32 val)
+
+ int tn40_set_link_speed(struct tn40_priv *priv, u32 speed);
+
++void tn40_swnodes_cleanup(struct tn40_priv *priv);
+ int tn40_mdiobus_init(struct tn40_priv *priv);
+
+ int tn40_phy_register(struct tn40_priv *priv);
+diff --git a/drivers/net/ethernet/tehuti/tn40_mdio.c b/drivers/net/ethernet/tehuti/tn40_mdio.c
+index af18615d64a8a2..5bb0cbc87d064e 100644
+--- a/drivers/net/ethernet/tehuti/tn40_mdio.c
++++ b/drivers/net/ethernet/tehuti/tn40_mdio.c
+@@ -14,6 +14,8 @@
+ (FIELD_PREP(TN40_MDIO_PRTAD_MASK, (port))))
+ #define TN40_MDIO_CMD_READ BIT(15)
+
++#define AQR105_FIRMWARE "tehuti/aqr105-tn40xx.cld"
++
+ static void tn40_mdio_set_speed(struct tn40_priv *priv, u32 speed)
+ {
+ void __iomem *regs = priv->regs;
+@@ -111,6 +113,56 @@ static int tn40_mdio_write_c45(struct mii_bus *mii_bus, int addr, int devnum,
+ return tn40_mdio_write(mii_bus->priv, addr, devnum, regnum, val);
+ }
+
++/* Registers an MDIO node and an AQR105 PHY at address 1:
++ * tn40_mdio-%id {
++ * ethernet-phy@1 {
++ * compatible = "ethernet-phy-id03a1.b4a3";
++ * reg = <1>;
++ * firmware-name = AQR105_FIRMWARE;
++ * };
++ * };
++ */
++static int tn40_swnodes_register(struct tn40_priv *priv)
++{
++ struct tn40_nodes *nodes = &priv->nodes;
++ struct pci_dev *pdev = priv->pdev;
++ struct software_node *swnodes;
++ u32 id;
++
++ id = pci_dev_id(pdev);
++
++ snprintf(nodes->phy_name, sizeof(nodes->phy_name), "ethernet-phy@1");
++ snprintf(nodes->mdio_name, sizeof(nodes->mdio_name), "tn40_mdio-%x",
++ id);
++
++ swnodes = nodes->swnodes;
++
++ swnodes[SWNODE_MDIO] = NODE_PROP(nodes->mdio_name, NULL);
++
++ nodes->phy_props[0] = PROPERTY_ENTRY_STRING("compatible",
++ "ethernet-phy-id03a1.b4a3");
++ nodes->phy_props[1] = PROPERTY_ENTRY_U32("reg", 1);
++ nodes->phy_props[2] = PROPERTY_ENTRY_STRING("firmware-name",
++ AQR105_FIRMWARE);
++ swnodes[SWNODE_PHY] = NODE_PAR_PROP(nodes->phy_name,
++ &swnodes[SWNODE_MDIO],
++ nodes->phy_props);
++
++ nodes->group[SWNODE_PHY] = &swnodes[SWNODE_PHY];
++ nodes->group[SWNODE_MDIO] = &swnodes[SWNODE_MDIO];
++ return software_node_register_node_group(nodes->group);
++}
++
++void tn40_swnodes_cleanup(struct tn40_priv *priv)
++{
++ /* cleanup of swnodes is only needed for AQR105-based cards */
++ if (priv->pdev->device == PCI_DEVICE_ID_TEHUTI_TN9510) {
++ fwnode_handle_put(dev_fwnode(&priv->mdio->dev));
++ device_remove_software_node(&priv->mdio->dev);
++ software_node_unregister_node_group(priv->nodes.group);
++ }
++}
++
+ int tn40_mdiobus_init(struct tn40_priv *priv)
+ {
+ struct pci_dev *pdev = priv->pdev;
+@@ -129,14 +181,40 @@ int tn40_mdiobus_init(struct tn40_priv *priv)
+
+ bus->read_c45 = tn40_mdio_read_c45;
+ bus->write_c45 = tn40_mdio_write_c45;
++ priv->mdio = bus;
++
++ /* provide swnodes for AQR105-based cards only */
++ if (pdev->device == PCI_DEVICE_ID_TEHUTI_TN9510) {
++ ret = tn40_swnodes_register(priv);
++ if (ret) {
++ pr_err("swnodes failed\n");
++ return ret;
++ }
++
++ ret = device_add_software_node(&bus->dev,
++ priv->nodes.group[SWNODE_MDIO]);
++ if (ret) {
++ dev_err(&pdev->dev,
++ "device_add_software_node failed: %d\n", ret);
++ goto err_swnodes_unregister;
++ }
++ }
+
+ ret = devm_mdiobus_register(&pdev->dev, bus);
+ if (ret) {
+ dev_err(&pdev->dev, "failed to register mdiobus %d %u %u\n",
+ ret, bus->state, MDIOBUS_UNREGISTERED);
+- return ret;
++ goto err_swnodes_cleanup;
+ }
+ tn40_mdio_set_speed(priv, TN40_MDIO_SPEED_6MHZ);
+- priv->mdio = bus;
+ return 0;
++
++err_swnodes_unregister:
++ software_node_unregister_node_group(priv->nodes.group);
++ return ret;
++err_swnodes_cleanup:
++ tn40_swnodes_cleanup(priv);
++ return ret;
+ }
++
++MODULE_FIRMWARE(AQR105_FIRMWARE);
+diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+index afe8127fd32beb..cac67babe45593 100644
+--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
++++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+@@ -515,7 +515,7 @@ static void am65_cpsw_destroy_rxq(struct am65_cpsw_common *common, int id)
+ napi_disable(&flow->napi_rx);
+ hrtimer_cancel(&flow->rx_hrtimer);
+ k3_udma_glue_reset_rx_chn(rx_chn->rx_chn, id, rx_chn,
+- am65_cpsw_nuss_rx_cleanup, !!id);
++ am65_cpsw_nuss_rx_cleanup);
+
+ for (port = 0; port < common->port_num; port++) {
+ if (!common->ports[port].ndev)
+@@ -3433,7 +3433,7 @@ static int am65_cpsw_nuss_register_ndevs(struct am65_cpsw_common *common)
+ for (i = 0; i < common->rx_ch_num_flows; i++)
+ k3_udma_glue_reset_rx_chn(rx_chan->rx_chn, i,
+ rx_chan,
+- am65_cpsw_nuss_rx_cleanup, !!i);
++ am65_cpsw_nuss_rx_cleanup);
+
+ k3_udma_glue_disable_rx_chn(rx_chan->rx_chn);
+
+diff --git a/drivers/net/ethernet/ti/cpsw_new.c b/drivers/net/ethernet/ti/cpsw_new.c
+index cec0a90659d944..66713bc931741f 100644
+--- a/drivers/net/ethernet/ti/cpsw_new.c
++++ b/drivers/net/ethernet/ti/cpsw_new.c
+@@ -1418,6 +1418,7 @@ static int cpsw_create_ports(struct cpsw_common *cpsw)
+ ndev->netdev_ops = &cpsw_netdev_ops;
+ ndev->ethtool_ops = &cpsw_ethtool_ops;
+ SET_NETDEV_DEV(ndev, dev);
++ ndev->dev.of_node = slave_data->slave_node;
+
+ if (!napi_ndev) {
+ /* CPSW Host port CPDMA interface is shared between
+diff --git a/drivers/net/ethernet/ti/icssg/icssg_common.c b/drivers/net/ethernet/ti/icssg/icssg_common.c
+index 74f0f200a89d4f..62065416e88614 100644
+--- a/drivers/net/ethernet/ti/icssg/icssg_common.c
++++ b/drivers/net/ethernet/ti/icssg/icssg_common.c
+@@ -955,7 +955,7 @@ void prueth_reset_rx_chan(struct prueth_rx_chn *chn,
+
+ for (i = 0; i < num_flows; i++)
+ k3_udma_glue_reset_rx_chn(chn->rx_chn, i, chn,
+- prueth_rx_cleanup, !!i);
++ prueth_rx_cleanup);
+ if (disable)
+ k3_udma_glue_disable_rx_chn(chn->rx_chn);
+ }
+diff --git a/drivers/net/ieee802154/ca8210.c b/drivers/net/ieee802154/ca8210.c
+index 753215ebc67c70..a036910f60828c 100644
+--- a/drivers/net/ieee802154/ca8210.c
++++ b/drivers/net/ieee802154/ca8210.c
+@@ -1446,8 +1446,7 @@ static u8 mcps_data_request(
+ command.pdata.data_req.src_addr_mode = src_addr_mode;
+ command.pdata.data_req.dst.mode = dst_address_mode;
+ if (dst_address_mode != MAC_MODE_NO_ADDR) {
+- command.pdata.data_req.dst.pan_id[0] = LS_BYTE(dst_pan_id);
+- command.pdata.data_req.dst.pan_id[1] = MS_BYTE(dst_pan_id);
++ put_unaligned_le16(dst_pan_id, command.pdata.data_req.dst.pan_id);
+ if (dst_address_mode == MAC_MODE_SHORT_ADDR) {
+ command.pdata.data_req.dst.address[0] = LS_BYTE(
+ dst_addr->short_address
+@@ -1795,12 +1794,12 @@ static int ca8210_skb_rx(
+ }
+ hdr.source.mode = data_ind[0];
+ dev_dbg(&priv->spi->dev, "srcAddrMode: %#03x\n", hdr.source.mode);
+- hdr.source.pan_id = *(u16 *)&data_ind[1];
++ hdr.source.pan_id = cpu_to_le16(get_unaligned_le16(&data_ind[1]));
+ dev_dbg(&priv->spi->dev, "srcPanId: %#06x\n", hdr.source.pan_id);
+ memcpy(&hdr.source.extended_addr, &data_ind[3], 8);
+ hdr.dest.mode = data_ind[11];
+ dev_dbg(&priv->spi->dev, "dstAddrMode: %#03x\n", hdr.dest.mode);
+- hdr.dest.pan_id = *(u16 *)&data_ind[12];
++ hdr.dest.pan_id = cpu_to_le16(get_unaligned_le16(&data_ind[12]));
+ dev_dbg(&priv->spi->dev, "dstPanId: %#06x\n", hdr.dest.pan_id);
+ memcpy(&hdr.dest.extended_addr, &data_ind[14], 8);
+
+@@ -1927,7 +1926,7 @@ static int ca8210_skb_tx(
+ status = mcps_data_request(
+ header.source.mode,
+ header.dest.mode,
+- header.dest.pan_id,
++ le16_to_cpu(header.dest.pan_id),
+ (union macaddr *)&header.dest.extended_addr,
+ skb->len - mac_len,
+ &skb->data[mac_len],
+diff --git a/drivers/net/mctp/mctp-i2c.c b/drivers/net/mctp/mctp-i2c.c
+index d74d47dd6e04dc..f782d93f826efc 100644
+--- a/drivers/net/mctp/mctp-i2c.c
++++ b/drivers/net/mctp/mctp-i2c.c
+@@ -537,7 +537,7 @@ static void mctp_i2c_xmit(struct mctp_i2c_dev *midev, struct sk_buff *skb)
+ rc = __i2c_transfer(midev->adapter, &msg, 1);
+
+ /* on tx errors, the flow can no longer be considered valid */
+- if (rc)
++ if (rc < 0)
+ mctp_i2c_invalidate_tx_flow(midev, skb);
+
+ break;
+diff --git a/drivers/net/netdevsim/netdev.c b/drivers/net/netdevsim/netdev.c
+index 42f247cbdceecb..a41dc79e9c2e08 100644
+--- a/drivers/net/netdevsim/netdev.c
++++ b/drivers/net/netdevsim/netdev.c
+@@ -87,7 +87,8 @@ static netdev_tx_t nsim_start_xmit(struct sk_buff *skb, struct net_device *dev)
+ if (unlikely(nsim_forward_skb(peer_dev, skb, rq) == NET_RX_DROP))
+ goto out_drop_cnt;
+
+- napi_schedule(&rq->napi);
++ if (!hrtimer_active(&rq->napi_timer))
++ hrtimer_start(&rq->napi_timer, us_to_ktime(5), HRTIMER_MODE_REL);
+
+ rcu_read_unlock();
+ u64_stats_update_begin(&ns->syncp);
+@@ -426,6 +427,22 @@ static int nsim_init_napi(struct netdevsim *ns)
+ return err;
+ }
+
++static enum hrtimer_restart nsim_napi_schedule(struct hrtimer *timer)
++{
++ struct nsim_rq *rq;
++
++ rq = container_of(timer, struct nsim_rq, napi_timer);
++ napi_schedule(&rq->napi);
++
++ return HRTIMER_NORESTART;
++}
++
++static void nsim_rq_timer_init(struct nsim_rq *rq)
++{
++ hrtimer_init(&rq->napi_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
++ rq->napi_timer.function = nsim_napi_schedule;
++}
++
+ static void nsim_enable_napi(struct netdevsim *ns)
+ {
+ struct net_device *dev = ns->netdev;
+@@ -615,11 +632,13 @@ static struct nsim_rq *nsim_queue_alloc(void)
+ return NULL;
+
+ skb_queue_head_init(&rq->skb_queue);
++ nsim_rq_timer_init(rq);
+ return rq;
+ }
+
+ static void nsim_queue_free(struct nsim_rq *rq)
+ {
++ hrtimer_cancel(&rq->napi_timer);
+ skb_queue_purge_reason(&rq->skb_queue, SKB_DROP_REASON_QUEUE_PURGE);
+ kfree(rq);
+ }
+@@ -645,8 +664,11 @@ nsim_queue_mem_alloc(struct net_device *dev, void *per_queue_mem, int idx)
+ if (ns->rq_reset_mode > 3)
+ return -EINVAL;
+
+- if (ns->rq_reset_mode == 1)
++ if (ns->rq_reset_mode == 1) {
++ if (!netif_running(ns->netdev))
++ return -ENETDOWN;
+ return nsim_create_page_pool(&qmem->pp, &ns->rq[idx]->napi);
++ }
+
+ qmem->rq = nsim_queue_alloc();
+ if (!qmem->rq)
+@@ -754,11 +776,6 @@ nsim_qreset_write(struct file *file, const char __user *data,
+ return -EINVAL;
+
+ rtnl_lock();
+- if (!netif_running(ns->netdev)) {
+- ret = -ENETDOWN;
+- goto exit_unlock;
+- }
+-
+ if (queue >= ns->netdev->real_num_rx_queues) {
+ ret = -EINVAL;
+ goto exit_unlock;
+diff --git a/drivers/net/netdevsim/netdevsim.h b/drivers/net/netdevsim/netdevsim.h
+index 96d54c08043d3a..e757f85ed8617b 100644
+--- a/drivers/net/netdevsim/netdevsim.h
++++ b/drivers/net/netdevsim/netdevsim.h
+@@ -97,6 +97,7 @@ struct nsim_rq {
+ struct napi_struct napi;
+ struct sk_buff_head skb_queue;
+ struct page_pool *page_pool;
++ struct hrtimer napi_timer;
+ };
+
+ struct netdevsim {
+diff --git a/drivers/net/phy/nxp-c45-tja11xx.c b/drivers/net/phy/nxp-c45-tja11xx.c
+index e9fc54517449c8..16e1c13ae2f8dc 100644
+--- a/drivers/net/phy/nxp-c45-tja11xx.c
++++ b/drivers/net/phy/nxp-c45-tja11xx.c
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0
+ /* NXP C45 PHY driver
+- * Copyright 2021-2023 NXP
++ * Copyright 2021-2025 NXP
+ * Author: Radu Pirea <radu-nicolae.pirea@oss.nxp.com>
+ */
+
+@@ -19,6 +19,8 @@
+
+ #include "nxp-c45-tja11xx.h"
+
++#define PHY_ID_MASK GENMASK(31, 4)
++/* Same id: TJA1103, TJA1104 */
+ #define PHY_ID_TJA_1103 0x001BB010
+ #define PHY_ID_TJA_1120 0x001BB031
+
+@@ -1956,6 +1958,30 @@ static void tja1120_nmi_handler(struct phy_device *phydev,
+ }
+ }
+
++static int nxp_c45_macsec_ability(struct phy_device *phydev)
++{
++ bool macsec_ability;
++ int phy_abilities;
++
++ phy_abilities = phy_read_mmd(phydev, MDIO_MMD_VEND1,
++ VEND1_PORT_ABILITIES);
++ macsec_ability = !!(phy_abilities & MACSEC_ABILITY);
++
++ return macsec_ability;
++}
++
++static int tja1103_match_phy_device(struct phy_device *phydev)
++{
++ return phy_id_compare(phydev->phy_id, PHY_ID_TJA_1103, PHY_ID_MASK) &&
++ !nxp_c45_macsec_ability(phydev);
++}
++
++static int tja1104_match_phy_device(struct phy_device *phydev)
++{
++ return phy_id_compare(phydev->phy_id, PHY_ID_TJA_1103, PHY_ID_MASK) &&
++ nxp_c45_macsec_ability(phydev);
++}
++
+ static const struct nxp_c45_regmap tja1120_regmap = {
+ .vend1_ptp_clk_period = 0x1020,
+ .vend1_event_msg_filt = 0x9010,
+@@ -2026,7 +2052,6 @@ static const struct nxp_c45_phy_data tja1120_phy_data = {
+
+ static struct phy_driver nxp_c45_driver[] = {
+ {
+- PHY_ID_MATCH_MODEL(PHY_ID_TJA_1103),
+ .name = "NXP C45 TJA1103",
+ .get_features = nxp_c45_get_features,
+ .driver_data = &tja1103_phy_data,
+@@ -2048,6 +2073,31 @@ static struct phy_driver nxp_c45_driver[] = {
+ .get_sqi = nxp_c45_get_sqi,
+ .get_sqi_max = nxp_c45_get_sqi_max,
+ .remove = nxp_c45_remove,
++ .match_phy_device = tja1103_match_phy_device,
++ },
++ {
++ .name = "NXP C45 TJA1104",
++ .get_features = nxp_c45_get_features,
++ .driver_data = &tja1103_phy_data,
++ .probe = nxp_c45_probe,
++ .soft_reset = nxp_c45_soft_reset,
++ .config_aneg = genphy_c45_config_aneg,
++ .config_init = nxp_c45_config_init,
++ .config_intr = tja1103_config_intr,
++ .handle_interrupt = nxp_c45_handle_interrupt,
++ .read_status = genphy_c45_read_status,
++ .suspend = genphy_c45_pma_suspend,
++ .resume = genphy_c45_pma_resume,
++ .get_sset_count = nxp_c45_get_sset_count,
++ .get_strings = nxp_c45_get_strings,
++ .get_stats = nxp_c45_get_stats,
++ .cable_test_start = nxp_c45_cable_test_start,
++ .cable_test_get_status = nxp_c45_cable_test_get_status,
++ .set_loopback = genphy_c45_loopback,
++ .get_sqi = nxp_c45_get_sqi,
++ .get_sqi_max = nxp_c45_get_sqi_max,
++ .remove = nxp_c45_remove,
++ .match_phy_device = tja1104_match_phy_device,
+ },
+ {
+ PHY_ID_MATCH_MODEL(PHY_ID_TJA_1120),
+diff --git a/drivers/net/phy/phylink.c b/drivers/net/phy/phylink.c
+index 5be48eb810abb3..8c4dfe9dc76507 100644
+--- a/drivers/net/phy/phylink.c
++++ b/drivers/net/phy/phylink.c
+@@ -2074,7 +2074,7 @@ bool phylink_expects_phy(struct phylink *pl)
+ {
+ if (pl->cfg_link_an_mode == MLO_AN_FIXED ||
+ (pl->cfg_link_an_mode == MLO_AN_INBAND &&
+- phy_interface_mode_is_8023z(pl->link_config.interface)))
++ phy_interface_mode_is_8023z(pl->link_interface)))
+ return false;
+ return true;
+ }
+diff --git a/drivers/net/usb/r8152.c b/drivers/net/usb/r8152.c
+index 96fa3857d8e257..2cab046749a922 100644
+--- a/drivers/net/usb/r8152.c
++++ b/drivers/net/usb/r8152.c
+@@ -10085,6 +10085,7 @@ static const struct usb_device_id rtl8152_table[] = {
+ { USB_DEVICE(VENDOR_ID_NVIDIA, 0x09ff) },
+ { USB_DEVICE(VENDOR_ID_TPLINK, 0x0601) },
+ { USB_DEVICE(VENDOR_ID_DLINK, 0xb301) },
++ { USB_DEVICE(VENDOR_ID_DELL, 0xb097) },
+ { USB_DEVICE(VENDOR_ID_ASUS, 0x1976) },
+ {}
+ };
+diff --git a/drivers/net/vmxnet3/vmxnet3_drv.c b/drivers/net/vmxnet3/vmxnet3_drv.c
+index 3df6aabc7e339e..c676979c7ab940 100644
+--- a/drivers/net/vmxnet3/vmxnet3_drv.c
++++ b/drivers/net/vmxnet3/vmxnet3_drv.c
+@@ -3607,8 +3607,6 @@ vmxnet3_change_mtu(struct net_device *netdev, int new_mtu)
+ struct vmxnet3_adapter *adapter = netdev_priv(netdev);
+ int err = 0;
+
+- WRITE_ONCE(netdev->mtu, new_mtu);
+-
+ /*
+ * Reset_work may be in the middle of resetting the device, wait for its
+ * completion.
+@@ -3622,6 +3620,7 @@ vmxnet3_change_mtu(struct net_device *netdev, int new_mtu)
+
+ /* we need to re-create the rx queue based on the new mtu */
+ vmxnet3_rq_destroy_all(adapter);
++ WRITE_ONCE(netdev->mtu, new_mtu);
+ vmxnet3_adjust_rx_ring_size(adapter);
+ err = vmxnet3_rq_create_all(adapter);
+ if (err) {
+@@ -3638,6 +3637,8 @@ vmxnet3_change_mtu(struct net_device *netdev, int new_mtu)
+ "Closing it\n", err);
+ goto out;
+ }
++ } else {
++ WRITE_ONCE(netdev->mtu, new_mtu);
+ }
+
+ out:
+diff --git a/drivers/net/vxlan/vxlan_core.c b/drivers/net/vxlan/vxlan_core.c
+index 92516189e792f8..cdd2a78badf55a 100644
+--- a/drivers/net/vxlan/vxlan_core.c
++++ b/drivers/net/vxlan/vxlan_core.c
+@@ -227,9 +227,9 @@ static int vxlan_fdb_info(struct sk_buff *skb, struct vxlan_dev *vxlan,
+ be32_to_cpu(fdb->vni)))
+ goto nla_put_failure;
+
+- ci.ndm_used = jiffies_to_clock_t(now - fdb->used);
++ ci.ndm_used = jiffies_to_clock_t(now - READ_ONCE(fdb->used));
+ ci.ndm_confirmed = 0;
+- ci.ndm_updated = jiffies_to_clock_t(now - fdb->updated);
++ ci.ndm_updated = jiffies_to_clock_t(now - READ_ONCE(fdb->updated));
+ ci.ndm_refcnt = 0;
+
+ if (nla_put(skb, NDA_CACHEINFO, sizeof(ci), &ci))
+@@ -434,8 +434,8 @@ static struct vxlan_fdb *vxlan_find_mac(struct vxlan_dev *vxlan,
+ struct vxlan_fdb *f;
+
+ f = __vxlan_find_mac(vxlan, mac, vni);
+- if (f && f->used != jiffies)
+- f->used = jiffies;
++ if (f && READ_ONCE(f->used) != jiffies)
++ WRITE_ONCE(f->used, jiffies);
+
+ return f;
+ }
+@@ -1009,12 +1009,12 @@ static int vxlan_fdb_update_existing(struct vxlan_dev *vxlan,
+ !(f->flags & NTF_VXLAN_ADDED_BY_USER)) {
+ if (f->state != state) {
+ f->state = state;
+- f->updated = jiffies;
++ WRITE_ONCE(f->updated, jiffies);
+ notify = 1;
+ }
+ if (f->flags != fdb_flags) {
+ f->flags = fdb_flags;
+- f->updated = jiffies;
++ WRITE_ONCE(f->updated, jiffies);
+ notify = 1;
+ }
+ }
+@@ -1048,7 +1048,7 @@ static int vxlan_fdb_update_existing(struct vxlan_dev *vxlan,
+ }
+
+ if (ndm_flags & NTF_USE)
+- f->used = jiffies;
++ WRITE_ONCE(f->used, jiffies);
+
+ if (notify) {
+ if (rd == NULL)
+@@ -1481,7 +1481,7 @@ static enum skb_drop_reason vxlan_snoop(struct net_device *dev,
+ src_mac, &rdst->remote_ip.sa, &src_ip->sa);
+
+ rdst->remote_ip = *src_ip;
+- f->updated = jiffies;
++ WRITE_ONCE(f->updated, jiffies);
+ vxlan_fdb_notify(vxlan, f, rdst, RTM_NEWNEIGH, true, NULL);
+ } else {
+ u32 hash_index = fdb_head_index(vxlan, src_mac, vni);
+@@ -2852,7 +2852,7 @@ static void vxlan_cleanup(struct timer_list *t)
+ if (f->flags & NTF_EXT_LEARNED)
+ continue;
+
+- timeout = f->used + vxlan->cfg.age_interval * HZ;
++ timeout = READ_ONCE(f->used) + vxlan->cfg.age_interval * HZ;
+ if (time_before_eq(timeout, jiffies)) {
+ netdev_dbg(vxlan->dev,
+ "garbage collect %pM\n",
+@@ -4415,6 +4415,7 @@ static int vxlan_changelink(struct net_device *dev, struct nlattr *tb[],
+ struct netlink_ext_ack *extack)
+ {
+ struct vxlan_dev *vxlan = netdev_priv(dev);
++ bool rem_ip_changed, change_igmp;
+ struct net_device *lowerdev;
+ struct vxlan_config conf;
+ struct vxlan_rdst *dst;
+@@ -4438,8 +4439,13 @@ static int vxlan_changelink(struct net_device *dev, struct nlattr *tb[],
+ if (err)
+ return err;
+
++ rem_ip_changed = !vxlan_addr_equal(&conf.remote_ip, &dst->remote_ip);
++ change_igmp = vxlan->dev->flags & IFF_UP &&
++ (rem_ip_changed ||
++ dst->remote_ifindex != conf.remote_ifindex);
++
+ /* handle default dst entry */
+- if (!vxlan_addr_equal(&conf.remote_ip, &dst->remote_ip)) {
++ if (rem_ip_changed) {
+ u32 hash_index = fdb_head_index(vxlan, all_zeros_mac, conf.vni);
+
+ spin_lock_bh(&vxlan->hash_lock[hash_index]);
+@@ -4483,6 +4489,9 @@ static int vxlan_changelink(struct net_device *dev, struct nlattr *tb[],
+ }
+ }
+
++ if (change_igmp && vxlan_addr_multicast(&dst->remote_ip))
++ err = vxlan_multicast_leave(vxlan);
++
+ if (conf.age_interval != vxlan->cfg.age_interval)
+ mod_timer(&vxlan->age_timer, jiffies);
+
+@@ -4490,7 +4499,12 @@ static int vxlan_changelink(struct net_device *dev, struct nlattr *tb[],
+ if (lowerdev && lowerdev != dst->remote_dev)
+ dst->remote_dev = lowerdev;
+ vxlan_config_apply(dev, &conf, lowerdev, vxlan->net, true);
+- return 0;
++
++ if (!err && change_igmp &&
++ vxlan_addr_multicast(&dst->remote_ip))
++ err = vxlan_multicast_join(vxlan);
++
++ return err;
+ }
+
+ static void vxlan_dellink(struct net_device *dev, struct list_head *head)
+diff --git a/drivers/net/wireless/ath/ath11k/dp.h b/drivers/net/wireless/ath/ath11k/dp.h
+index f777314db8b36f..7a55afd33be82e 100644
+--- a/drivers/net/wireless/ath/ath11k/dp.h
++++ b/drivers/net/wireless/ath/ath11k/dp.h
+@@ -1,7 +1,7 @@
+ /* SPDX-License-Identifier: BSD-3-Clause-Clear */
+ /*
+ * Copyright (c) 2018-2019 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2023 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2023, 2025 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+ #ifndef ATH11K_DP_H
+@@ -20,7 +20,6 @@ struct ath11k_ext_irq_grp;
+
+ struct dp_rx_tid {
+ u8 tid;
+- u32 *vaddr;
+ dma_addr_t paddr;
+ u32 size;
+ u32 ba_win_sz;
+@@ -37,6 +36,9 @@ struct dp_rx_tid {
+ /* Timer info related to fragments */
+ struct timer_list frag_timer;
+ struct ath11k_base *ab;
++ u32 *vaddr_unaligned;
++ dma_addr_t paddr_unaligned;
++ u32 unaligned_size;
+ };
+
+ #define DP_REO_DESC_FREE_THRESHOLD 64
+diff --git a/drivers/net/wireless/ath/ath11k/dp_rx.c b/drivers/net/wireless/ath/ath11k/dp_rx.c
+index b8b3dce9cdb53a..a7a484a9ba7fb3 100644
+--- a/drivers/net/wireless/ath/ath11k/dp_rx.c
++++ b/drivers/net/wireless/ath/ath11k/dp_rx.c
+@@ -1,7 +1,7 @@
+ // SPDX-License-Identifier: BSD-3-Clause-Clear
+ /*
+ * Copyright (c) 2018-2019 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2024 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2025 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+ #include <linux/ieee80211.h>
+@@ -675,11 +675,11 @@ void ath11k_dp_reo_cmd_list_cleanup(struct ath11k_base *ab)
+ list_for_each_entry_safe(cmd, tmp, &dp->reo_cmd_list, list) {
+ list_del(&cmd->list);
+ rx_tid = &cmd->data;
+- if (rx_tid->vaddr) {
+- dma_unmap_single(ab->dev, rx_tid->paddr,
+- rx_tid->size, DMA_BIDIRECTIONAL);
+- kfree(rx_tid->vaddr);
+- rx_tid->vaddr = NULL;
++ if (rx_tid->vaddr_unaligned) {
++ dma_free_noncoherent(ab->dev, rx_tid->unaligned_size,
++ rx_tid->vaddr_unaligned,
++ rx_tid->paddr_unaligned, DMA_BIDIRECTIONAL);
++ rx_tid->vaddr_unaligned = NULL;
+ }
+ kfree(cmd);
+ }
+@@ -689,11 +689,11 @@ void ath11k_dp_reo_cmd_list_cleanup(struct ath11k_base *ab)
+ list_del(&cmd_cache->list);
+ dp->reo_cmd_cache_flush_count--;
+ rx_tid = &cmd_cache->data;
+- if (rx_tid->vaddr) {
+- dma_unmap_single(ab->dev, rx_tid->paddr,
+- rx_tid->size, DMA_BIDIRECTIONAL);
+- kfree(rx_tid->vaddr);
+- rx_tid->vaddr = NULL;
++ if (rx_tid->vaddr_unaligned) {
++ dma_free_noncoherent(ab->dev, rx_tid->unaligned_size,
++ rx_tid->vaddr_unaligned,
++ rx_tid->paddr_unaligned, DMA_BIDIRECTIONAL);
++ rx_tid->vaddr_unaligned = NULL;
+ }
+ kfree(cmd_cache);
+ }
+@@ -708,11 +708,11 @@ static void ath11k_dp_reo_cmd_free(struct ath11k_dp *dp, void *ctx,
+ if (status != HAL_REO_CMD_SUCCESS)
+ ath11k_warn(dp->ab, "failed to flush rx tid hw desc, tid %d status %d\n",
+ rx_tid->tid, status);
+- if (rx_tid->vaddr) {
+- dma_unmap_single(dp->ab->dev, rx_tid->paddr, rx_tid->size,
+- DMA_BIDIRECTIONAL);
+- kfree(rx_tid->vaddr);
+- rx_tid->vaddr = NULL;
++ if (rx_tid->vaddr_unaligned) {
++ dma_free_noncoherent(dp->ab->dev, rx_tid->unaligned_size,
++ rx_tid->vaddr_unaligned,
++ rx_tid->paddr_unaligned, DMA_BIDIRECTIONAL);
++ rx_tid->vaddr_unaligned = NULL;
+ }
+ }
+
+@@ -749,10 +749,10 @@ static void ath11k_dp_reo_cache_flush(struct ath11k_base *ab,
+ if (ret) {
+ ath11k_err(ab, "failed to send HAL_REO_CMD_FLUSH_CACHE cmd, tid %d (%d)\n",
+ rx_tid->tid, ret);
+- dma_unmap_single(ab->dev, rx_tid->paddr, rx_tid->size,
+- DMA_BIDIRECTIONAL);
+- kfree(rx_tid->vaddr);
+- rx_tid->vaddr = NULL;
++ dma_free_noncoherent(ab->dev, rx_tid->unaligned_size,
++ rx_tid->vaddr_unaligned,
++ rx_tid->paddr_unaligned, DMA_BIDIRECTIONAL);
++ rx_tid->vaddr_unaligned = NULL;
+ }
+ }
+
+@@ -802,10 +802,10 @@ static void ath11k_dp_rx_tid_del_func(struct ath11k_dp *dp, void *ctx,
+
+ return;
+ free_desc:
+- dma_unmap_single(ab->dev, rx_tid->paddr, rx_tid->size,
+- DMA_BIDIRECTIONAL);
+- kfree(rx_tid->vaddr);
+- rx_tid->vaddr = NULL;
++ dma_free_noncoherent(ab->dev, rx_tid->unaligned_size,
++ rx_tid->vaddr_unaligned,
++ rx_tid->paddr_unaligned, DMA_BIDIRECTIONAL);
++ rx_tid->vaddr_unaligned = NULL;
+ }
+
+ void ath11k_peer_rx_tid_delete(struct ath11k *ar,
+@@ -831,14 +831,16 @@ void ath11k_peer_rx_tid_delete(struct ath11k *ar,
+ if (ret != -ESHUTDOWN)
+ ath11k_err(ar->ab, "failed to send HAL_REO_CMD_UPDATE_RX_QUEUE cmd, tid %d (%d)\n",
+ tid, ret);
+- dma_unmap_single(ar->ab->dev, rx_tid->paddr, rx_tid->size,
+- DMA_BIDIRECTIONAL);
+- kfree(rx_tid->vaddr);
+- rx_tid->vaddr = NULL;
++ dma_free_noncoherent(ar->ab->dev, rx_tid->unaligned_size,
++ rx_tid->vaddr_unaligned,
++ rx_tid->paddr_unaligned, DMA_BIDIRECTIONAL);
++ rx_tid->vaddr_unaligned = NULL;
+ }
+
+ rx_tid->paddr = 0;
++ rx_tid->paddr_unaligned = 0;
+ rx_tid->size = 0;
++ rx_tid->unaligned_size = 0;
+ }
+
+ static int ath11k_dp_rx_link_desc_return(struct ath11k_base *ab,
+@@ -982,10 +984,9 @@ static void ath11k_dp_rx_tid_mem_free(struct ath11k_base *ab,
+ if (!rx_tid->active)
+ goto unlock_exit;
+
+- dma_unmap_single(ab->dev, rx_tid->paddr, rx_tid->size,
+- DMA_BIDIRECTIONAL);
+- kfree(rx_tid->vaddr);
+- rx_tid->vaddr = NULL;
++ dma_free_noncoherent(ab->dev, rx_tid->unaligned_size, rx_tid->vaddr_unaligned,
++ rx_tid->paddr_unaligned, DMA_BIDIRECTIONAL);
++ rx_tid->vaddr_unaligned = NULL;
+
+ rx_tid->active = false;
+
+@@ -1000,9 +1001,8 @@ int ath11k_peer_rx_tid_setup(struct ath11k *ar, const u8 *peer_mac, int vdev_id,
+ struct ath11k_base *ab = ar->ab;
+ struct ath11k_peer *peer;
+ struct dp_rx_tid *rx_tid;
+- u32 hw_desc_sz;
+- u32 *addr_aligned;
+- void *vaddr;
++ u32 hw_desc_sz, *vaddr;
++ void *vaddr_unaligned;
+ dma_addr_t paddr;
+ int ret;
+
+@@ -1050,49 +1050,40 @@ int ath11k_peer_rx_tid_setup(struct ath11k *ar, const u8 *peer_mac, int vdev_id,
+ else
+ hw_desc_sz = ath11k_hal_reo_qdesc_size(DP_BA_WIN_SZ_MAX, tid);
+
+- vaddr = kzalloc(hw_desc_sz + HAL_LINK_DESC_ALIGN - 1, GFP_ATOMIC);
+- if (!vaddr) {
++ rx_tid->unaligned_size = hw_desc_sz + HAL_LINK_DESC_ALIGN - 1;
++ vaddr_unaligned = dma_alloc_noncoherent(ab->dev, rx_tid->unaligned_size, &paddr,
++ DMA_BIDIRECTIONAL, GFP_ATOMIC);
++ if (!vaddr_unaligned) {
+ spin_unlock_bh(&ab->base_lock);
+ return -ENOMEM;
+ }
+
+- addr_aligned = PTR_ALIGN(vaddr, HAL_LINK_DESC_ALIGN);
+-
+- ath11k_hal_reo_qdesc_setup(addr_aligned, tid, ba_win_sz,
+- ssn, pn_type);
+-
+- paddr = dma_map_single(ab->dev, addr_aligned, hw_desc_sz,
+- DMA_BIDIRECTIONAL);
+-
+- ret = dma_mapping_error(ab->dev, paddr);
+- if (ret) {
+- spin_unlock_bh(&ab->base_lock);
+- ath11k_warn(ab, "failed to setup dma map for peer %pM rx tid %d: %d\n",
+- peer_mac, tid, ret);
+- goto err_mem_free;
+- }
+-
+- rx_tid->vaddr = vaddr;
+- rx_tid->paddr = paddr;
++ rx_tid->vaddr_unaligned = vaddr_unaligned;
++ vaddr = PTR_ALIGN(vaddr_unaligned, HAL_LINK_DESC_ALIGN);
++ rx_tid->paddr_unaligned = paddr;
++ rx_tid->paddr = rx_tid->paddr_unaligned + ((unsigned long)vaddr -
++ (unsigned long)rx_tid->vaddr_unaligned);
++ ath11k_hal_reo_qdesc_setup(vaddr, tid, ba_win_sz, ssn, pn_type);
+ rx_tid->size = hw_desc_sz;
+ rx_tid->active = true;
+
++	/* After dma_alloc_noncoherent(), vaddr is modified for REO qdesc setup.
++	 * Since these changes are not visible to the device, the driver now needs
++	 * to explicitly call dma_sync_single_for_device().
++	 */
++ dma_sync_single_for_device(ab->dev, rx_tid->paddr,
++ rx_tid->size,
++ DMA_TO_DEVICE);
+ spin_unlock_bh(&ab->base_lock);
+
+- ret = ath11k_wmi_peer_rx_reorder_queue_setup(ar, vdev_id, peer_mac,
+- paddr, tid, 1, ba_win_sz);
++ ret = ath11k_wmi_peer_rx_reorder_queue_setup(ar, vdev_id, peer_mac, rx_tid->paddr,
++ tid, 1, ba_win_sz);
+ if (ret) {
+ ath11k_warn(ar->ab, "failed to setup rx reorder queue for peer %pM tid %d: %d\n",
+ peer_mac, tid, ret);
+ ath11k_dp_rx_tid_mem_free(ab, peer_mac, vdev_id, tid);
+ }
+
+- return ret;
+-
+-err_mem_free:
+- kfree(rx_tid->vaddr);
+- rx_tid->vaddr = NULL;
+-
+ return ret;
+ }
+
+diff --git a/drivers/net/wireless/ath/ath12k/core.c b/drivers/net/wireless/ath/ath12k/core.c
+index 212cd935e60a05..d0aed4c56050db 100644
+--- a/drivers/net/wireless/ath/ath12k/core.c
++++ b/drivers/net/wireless/ath/ath12k/core.c
+@@ -173,7 +173,7 @@ EXPORT_SYMBOL(ath12k_core_resume);
+
+ static int __ath12k_core_create_board_name(struct ath12k_base *ab, char *name,
+ size_t name_len, bool with_variant,
+- bool bus_type_mode)
++ bool bus_type_mode, bool with_default)
+ {
+ /* strlen(',variant=') + strlen(ab->qmi.target.bdf_ext) */
+ char variant[9 + ATH12K_QMI_BDF_EXT_STR_LENGTH] = { 0 };
+@@ -204,7 +204,9 @@ static int __ath12k_core_create_board_name(struct ath12k_base *ab, char *name,
+ "bus=%s,qmi-chip-id=%d,qmi-board-id=%d%s",
+ ath12k_bus_str(ab->hif.bus),
+ ab->qmi.target.chip_id,
+- ab->qmi.target.board_id, variant);
++ with_default ?
++ ATH12K_BOARD_ID_DEFAULT : ab->qmi.target.board_id,
++ variant);
+ break;
+ }
+
+@@ -216,19 +218,19 @@ static int __ath12k_core_create_board_name(struct ath12k_base *ab, char *name,
+ static int ath12k_core_create_board_name(struct ath12k_base *ab, char *name,
+ size_t name_len)
+ {
+- return __ath12k_core_create_board_name(ab, name, name_len, true, false);
++ return __ath12k_core_create_board_name(ab, name, name_len, true, false, false);
+ }
+
+ static int ath12k_core_create_fallback_board_name(struct ath12k_base *ab, char *name,
+ size_t name_len)
+ {
+- return __ath12k_core_create_board_name(ab, name, name_len, false, false);
++ return __ath12k_core_create_board_name(ab, name, name_len, false, false, true);
+ }
+
+ static int ath12k_core_create_bus_type_board_name(struct ath12k_base *ab, char *name,
+ size_t name_len)
+ {
+- return __ath12k_core_create_board_name(ab, name, name_len, false, true);
++ return __ath12k_core_create_board_name(ab, name, name_len, false, true, true);
+ }
+
+ const struct firmware *ath12k_core_firmware_request(struct ath12k_base *ab,
+@@ -887,10 +889,41 @@ static void ath12k_core_hw_group_stop(struct ath12k_hw_group *ag)
+ ath12k_mac_destroy(ag);
+ }
+
++u8 ath12k_get_num_partner_link(struct ath12k *ar)
++{
++ struct ath12k_base *partner_ab, *ab = ar->ab;
++ struct ath12k_hw_group *ag = ab->ag;
++ struct ath12k_pdev *pdev;
++ u8 num_link = 0;
++ int i, j;
++
++ lockdep_assert_held(&ag->mutex);
++
++ for (i = 0; i < ag->num_devices; i++) {
++ partner_ab = ag->ab[i];
++
++ for (j = 0; j < partner_ab->num_radios; j++) {
++ pdev = &partner_ab->pdevs[j];
++
++ /* Avoid the self link */
++ if (ar == pdev->ar)
++ continue;
++
++ num_link++;
++ }
++ }
++
++ return num_link;
++}
++
+ static int __ath12k_mac_mlo_ready(struct ath12k *ar)
+ {
++ u8 num_link = ath12k_get_num_partner_link(ar);
+ int ret;
+
++ if (num_link == 0)
++ return 0;
++
+ ret = ath12k_wmi_mlo_ready(ar);
+ if (ret) {
+ ath12k_err(ar->ab, "MLO ready failed for pdev %d: %d\n",
+@@ -932,7 +965,7 @@ static int ath12k_core_mlo_setup(struct ath12k_hw_group *ag)
+ {
+ int ret, i;
+
+- if (!ag->mlo_capable || ag->num_devices == 1)
++ if (!ag->mlo_capable)
+ return 0;
+
+ ret = ath12k_mac_mlo_setup(ag);
+diff --git a/drivers/net/wireless/ath/ath12k/core.h b/drivers/net/wireless/ath/ath12k/core.h
+index ee595794a7aee8..96b2830e891bd7 100644
+--- a/drivers/net/wireless/ath/ath12k/core.h
++++ b/drivers/net/wireless/ath/ath12k/core.h
+@@ -87,6 +87,7 @@ enum wme_ac {
+ #define ATH12K_HT_MCS_MAX 7
+ #define ATH12K_VHT_MCS_MAX 9
+ #define ATH12K_HE_MCS_MAX 11
++#define ATH12K_EHT_MCS_MAX 15
+
+ enum ath12k_crypt_mode {
+ /* Only use hardware crypto engine */
+@@ -166,6 +167,7 @@ struct ath12k_ext_irq_grp {
+ u32 num_irq;
+ u32 grp_id;
+ u64 timestamp;
++ bool napi_enabled;
+ struct napi_struct napi;
+ struct net_device *napi_ndev;
+ };
+@@ -500,6 +502,7 @@ struct ath12k_link_sta {
+ struct ath12k_rx_peer_stats *rx_stats;
+ struct ath12k_wbm_tx_stats *wbm_tx_stats;
+ u32 bw_prev;
++ u32 peer_nss;
+
+ /* For now the assoc link will be considered primary */
+ bool is_assoc_link;
+@@ -1084,6 +1087,7 @@ int ath12k_core_resume(struct ath12k_base *ab);
+ int ath12k_core_suspend(struct ath12k_base *ab);
+ int ath12k_core_suspend_late(struct ath12k_base *ab);
+ void ath12k_core_hw_group_unassign(struct ath12k_base *ab);
++u8 ath12k_get_num_partner_link(struct ath12k *ar);
+
+ const struct firmware *ath12k_core_firmware_request(struct ath12k_base *ab,
+ const char *filename);
+diff --git a/drivers/net/wireless/ath/ath12k/dp_mon.c b/drivers/net/wireless/ath/ath12k/dp_mon.c
+index b952e79179d011..8737dc8fea3548 100644
+--- a/drivers/net/wireless/ath/ath12k/dp_mon.c
++++ b/drivers/net/wireless/ath/ath12k/dp_mon.c
+@@ -1,7 +1,7 @@
+ // SPDX-License-Identifier: BSD-3-Clause-Clear
+ /*
+ * Copyright (c) 2019-2021 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2024 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2025 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+ #include "dp_mon.h"
+@@ -638,6 +638,9 @@ ath12k_dp_mon_rx_parse_status_tlv(struct ath12k_base *ab,
+ ppdu_info->num_mpdu_fcs_err =
+ u32_get_bits(info[0],
+ HAL_RX_PPDU_END_USER_STATS_INFO0_MPDU_CNT_FCS_ERR);
++ ppdu_info->peer_id =
++ u32_get_bits(info[0], HAL_RX_PPDU_END_USER_STATS_INFO0_PEER_ID);
++
+ switch (ppdu_info->preamble_type) {
+ case HAL_RX_PREAMBLE_11N:
+ ppdu_info->ht_flags = 1;
+@@ -655,6 +658,11 @@ ath12k_dp_mon_rx_parse_status_tlv(struct ath12k_base *ab,
+ if (userid < HAL_MAX_UL_MU_USERS) {
+ struct hal_rx_user_status *rxuser_stats =
+ &ppdu_info->userstats[userid];
++
++ if (ppdu_info->num_mpdu_fcs_ok > 1 ||
++ ppdu_info->num_mpdu_fcs_err > 1)
++ ppdu_info->userstats[userid].ampdu_present = true;
++
+ ppdu_info->num_users += 1;
+
+ ath12k_dp_mon_rx_handle_ofdma_info(eu_stats, rxuser_stats);
+@@ -755,8 +763,8 @@ ath12k_dp_mon_rx_parse_status_tlv(struct ath12k_base *ab,
+ if (userid < HAL_MAX_UL_MU_USERS) {
+ info[0] = __le32_to_cpu(mpdu_start->info0);
+ ppdu_info->userid = userid;
+- ppdu_info->ampdu_id[userid] =
+- u32_get_bits(info[0], HAL_RX_MPDU_START_INFO1_PEERID);
++ ppdu_info->userstats[userid].ampdu_id =
++ u32_get_bits(info[0], HAL_RX_MPDU_START_INFO0_PPDU_ID);
+ }
+
+ break;
+@@ -956,15 +964,14 @@ static void ath12k_dp_mon_update_radiotap(struct ath12k *ar,
+ {
+ struct ieee80211_supported_band *sband;
+ u8 *ptr = NULL;
+- u16 ampdu_id = ppduinfo->ampdu_id[ppduinfo->userid];
+
+ rxs->flag |= RX_FLAG_MACTIME_START;
+ rxs->signal = ppduinfo->rssi_comb + ATH12K_DEFAULT_NOISE_FLOOR;
+ rxs->nss = ppduinfo->nss + 1;
+
+- if (ampdu_id) {
++ if (ppduinfo->userstats[ppduinfo->userid].ampdu_present) {
+ rxs->flag |= RX_FLAG_AMPDU_DETAILS;
+- rxs->ampdu_reference = ampdu_id;
++ rxs->ampdu_reference = ppduinfo->userstats[ppduinfo->userid].ampdu_id;
+ }
+
+ if (ppduinfo->he_mu_flags) {
+diff --git a/drivers/net/wireless/ath/ath12k/dp_rx.c b/drivers/net/wireless/ath/ath12k/dp_rx.c
+index ae6608b10bb570..ff6a709b5042cf 100644
+--- a/drivers/net/wireless/ath/ath12k/dp_rx.c
++++ b/drivers/net/wireless/ath/ath12k/dp_rx.c
+@@ -1,7 +1,7 @@
+ // SPDX-License-Identifier: BSD-3-Clause-Clear
+ /*
+ * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2024 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2025 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+ #include <linux/ieee80211.h>
+@@ -2392,6 +2392,23 @@ static void ath12k_dp_rx_h_rate(struct ath12k *ar, struct hal_rx_desc *rx_desc,
+ rx_status->he_gi = ath12k_he_gi_to_nl80211_he_gi(sgi);
+ rx_status->bw = ath12k_mac_bw_to_mac80211_bw(bw);
+ break;
++ case RX_MSDU_START_PKT_TYPE_11BE:
++ rx_status->rate_idx = rate_mcs;
++
++ if (rate_mcs > ATH12K_EHT_MCS_MAX) {
++ ath12k_warn(ar->ab,
++ "Received with invalid mcs in EHT mode %d\n",
++ rate_mcs);
++ break;
++ }
++
++ rx_status->encoding = RX_ENC_EHT;
++ rx_status->nss = nss;
++ rx_status->eht.gi = ath12k_mac_eht_gi_to_nl80211_eht_gi(sgi);
++ rx_status->bw = ath12k_mac_bw_to_mac80211_bw(bw);
++ break;
++ default:
++ break;
+ }
+ }
+
+@@ -2486,7 +2503,7 @@ static void ath12k_dp_rx_deliver_msdu(struct ath12k *ar, struct napi_struct *nap
+ spin_unlock_bh(&ab->base_lock);
+
+ ath12k_dbg(ab, ATH12K_DBG_DATA,
+- "rx skb %p len %u peer %pM %d %s sn %u %s%s%s%s%s%s%s%s%s rate_idx %u vht_nss %u freq %u band %u flag 0x%x fcs-err %i mic-err %i amsdu-more %i\n",
++ "rx skb %p len %u peer %pM %d %s sn %u %s%s%s%s%s%s%s%s%s%s rate_idx %u vht_nss %u freq %u band %u flag 0x%x fcs-err %i mic-err %i amsdu-more %i\n",
+ msdu,
+ msdu->len,
+ peer ? peer->addr : NULL,
+@@ -2497,6 +2514,7 @@ static void ath12k_dp_rx_deliver_msdu(struct ath12k *ar, struct napi_struct *nap
+ (status->encoding == RX_ENC_HT) ? "ht" : "",
+ (status->encoding == RX_ENC_VHT) ? "vht" : "",
+ (status->encoding == RX_ENC_HE) ? "he" : "",
++ (status->encoding == RX_ENC_EHT) ? "eht" : "",
+ (status->bw == RATE_INFO_BW_40) ? "40" : "",
+ (status->bw == RATE_INFO_BW_80) ? "80" : "",
+ (status->bw == RATE_INFO_BW_160) ? "160" : "",
+diff --git a/drivers/net/wireless/ath/ath12k/dp_rx.h b/drivers/net/wireless/ath/ath12k/dp_rx.h
+index 1ce82088c95409..c0aa965f47e77e 100644
+--- a/drivers/net/wireless/ath/ath12k/dp_rx.h
++++ b/drivers/net/wireless/ath/ath12k/dp_rx.h
+@@ -1,7 +1,7 @@
+ /* SPDX-License-Identifier: BSD-3-Clause-Clear */
+ /*
+ * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2024 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2025 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+ #ifndef ATH12K_DP_RX_H
+ #define ATH12K_DP_RX_H
+@@ -79,6 +79,9 @@ static inline u32 ath12k_he_gi_to_nl80211_he_gi(u8 sgi)
+ case RX_MSDU_START_SGI_3_2_US:
+ ret = NL80211_RATE_INFO_HE_GI_3_2;
+ break;
++ default:
++ ret = NL80211_RATE_INFO_HE_GI_0_8;
++ break;
+ }
+
+ return ret;
+diff --git a/drivers/net/wireless/ath/ath12k/dp_tx.c b/drivers/net/wireless/ath/ath12k/dp_tx.c
+index 1fffabaca527a4..7dc35762878505 100644
+--- a/drivers/net/wireless/ath/ath12k/dp_tx.c
++++ b/drivers/net/wireless/ath/ath12k/dp_tx.c
+@@ -8,6 +8,8 @@
+ #include "dp_tx.h"
+ #include "debug.h"
+ #include "hw.h"
++#include "peer.h"
++#include "mac.h"
+
+ static enum hal_tcl_encap_type
+ ath12k_dp_tx_get_encap_type(struct ath12k_link_vif *arvif, struct sk_buff *skb)
+@@ -117,7 +119,7 @@ static void ath12k_hal_tx_cmd_ext_desc_setup(struct ath12k_base *ab,
+ le32_encode_bits(ti->data_len,
+ HAL_TX_MSDU_EXT_INFO1_BUF_LEN);
+
+- tcl_ext_cmd->info1 = le32_encode_bits(1, HAL_TX_MSDU_EXT_INFO1_EXTN_OVERRIDE) |
++ tcl_ext_cmd->info1 |= le32_encode_bits(1, HAL_TX_MSDU_EXT_INFO1_EXTN_OVERRIDE) |
+ le32_encode_bits(ti->encap_type,
+ HAL_TX_MSDU_EXT_INFO1_ENCAP_TYPE) |
+ le32_encode_bits(ti->encrypt_type,
+@@ -560,13 +562,13 @@ ath12k_dp_tx_process_htt_tx_complete(struct ath12k_base *ab,
+
+ switch (wbm_status) {
+ case HAL_WBM_REL_HTT_TX_COMP_STATUS_OK:
+- case HAL_WBM_REL_HTT_TX_COMP_STATUS_DROP:
+- case HAL_WBM_REL_HTT_TX_COMP_STATUS_TTL:
+ ts.acked = (wbm_status == HAL_WBM_REL_HTT_TX_COMP_STATUS_OK);
+ ts.ack_rssi = le32_get_bits(status_desc->info2,
+ HTT_TX_WBM_COMP_INFO2_ACK_RSSI);
+ ath12k_dp_tx_htt_tx_complete_buf(ab, msdu, tx_ring, &ts);
+ break;
++ case HAL_WBM_REL_HTT_TX_COMP_STATUS_DROP:
++ case HAL_WBM_REL_HTT_TX_COMP_STATUS_TTL:
+ case HAL_WBM_REL_HTT_TX_COMP_STATUS_REINJ:
+ case HAL_WBM_REL_HTT_TX_COMP_STATUS_INSPECT:
+ ath12k_dp_tx_free_txbuf(ab, msdu, mac_id, tx_ring);
+@@ -582,6 +584,124 @@ ath12k_dp_tx_process_htt_tx_complete(struct ath12k_base *ab,
+ }
+ }
+
++static void ath12k_dp_tx_update_txcompl(struct ath12k *ar, struct hal_tx_status *ts)
++{
++ struct ath12k_base *ab = ar->ab;
++ struct ath12k_peer *peer;
++ struct ieee80211_sta *sta;
++ struct ath12k_sta *ahsta;
++ struct ath12k_link_sta *arsta;
++ struct rate_info txrate = {0};
++ u16 rate, ru_tones;
++ u8 rate_idx = 0;
++ int ret;
++
++ spin_lock_bh(&ab->base_lock);
++ peer = ath12k_peer_find_by_id(ab, ts->peer_id);
++ if (!peer || !peer->sta) {
++ ath12k_dbg(ab, ATH12K_DBG_DP_TX,
++ "failed to find the peer by id %u\n", ts->peer_id);
++ spin_unlock_bh(&ab->base_lock);
++ return;
++ }
++ sta = peer->sta;
++ ahsta = ath12k_sta_to_ahsta(sta);
++ arsta = &ahsta->deflink;
++
++	/* Prefer the real NSS value arsta->last_txrate.nss; if it is
++	 * invalid, fall back to the NSS value negotiated at assoc.
++	 */
++ if (arsta->last_txrate.nss)
++ txrate.nss = arsta->last_txrate.nss;
++ else
++ txrate.nss = arsta->peer_nss;
++ spin_unlock_bh(&ab->base_lock);
++
++ switch (ts->pkt_type) {
++ case HAL_TX_RATE_STATS_PKT_TYPE_11A:
++ case HAL_TX_RATE_STATS_PKT_TYPE_11B:
++ ret = ath12k_mac_hw_ratecode_to_legacy_rate(ts->mcs,
++ ts->pkt_type,
++ &rate_idx,
++ &rate);
++ if (ret < 0) {
++ ath12k_warn(ab, "Invalid tx legacy rate %d\n", ret);
++ return;
++ }
++
++ txrate.legacy = rate;
++ break;
++ case HAL_TX_RATE_STATS_PKT_TYPE_11N:
++ if (ts->mcs > ATH12K_HT_MCS_MAX) {
++ ath12k_warn(ab, "Invalid HT mcs index %d\n", ts->mcs);
++ return;
++ }
++
++ if (txrate.nss != 0)
++ txrate.mcs = ts->mcs + 8 * (txrate.nss - 1);
++
++ txrate.flags = RATE_INFO_FLAGS_MCS;
++
++ if (ts->sgi)
++ txrate.flags |= RATE_INFO_FLAGS_SHORT_GI;
++ break;
++ case HAL_TX_RATE_STATS_PKT_TYPE_11AC:
++ if (ts->mcs > ATH12K_VHT_MCS_MAX) {
++ ath12k_warn(ab, "Invalid VHT mcs index %d\n", ts->mcs);
++ return;
++ }
++
++ txrate.mcs = ts->mcs;
++ txrate.flags = RATE_INFO_FLAGS_VHT_MCS;
++
++ if (ts->sgi)
++ txrate.flags |= RATE_INFO_FLAGS_SHORT_GI;
++ break;
++ case HAL_TX_RATE_STATS_PKT_TYPE_11AX:
++ if (ts->mcs > ATH12K_HE_MCS_MAX) {
++ ath12k_warn(ab, "Invalid HE mcs index %d\n", ts->mcs);
++ return;
++ }
++
++ txrate.mcs = ts->mcs;
++ txrate.flags = RATE_INFO_FLAGS_HE_MCS;
++ txrate.he_gi = ath12k_he_gi_to_nl80211_he_gi(ts->sgi);
++ break;
++ case HAL_TX_RATE_STATS_PKT_TYPE_11BE:
++ if (ts->mcs > ATH12K_EHT_MCS_MAX) {
++ ath12k_warn(ab, "Invalid EHT mcs index %d\n", ts->mcs);
++ return;
++ }
++
++ txrate.mcs = ts->mcs;
++ txrate.flags = RATE_INFO_FLAGS_EHT_MCS;
++ txrate.eht_gi = ath12k_mac_eht_gi_to_nl80211_eht_gi(ts->sgi);
++ break;
++ default:
++ ath12k_warn(ab, "Invalid tx pkt type: %d\n", ts->pkt_type);
++ return;
++ }
++
++ txrate.bw = ath12k_mac_bw_to_mac80211_bw(ts->bw);
++
++ if (ts->ofdma && ts->pkt_type == HAL_TX_RATE_STATS_PKT_TYPE_11AX) {
++ txrate.bw = RATE_INFO_BW_HE_RU;
++ ru_tones = ath12k_mac_he_convert_tones_to_ru_tones(ts->tones);
++ txrate.he_ru_alloc =
++ ath12k_he_ru_tones_to_nl80211_he_ru_alloc(ru_tones);
++ }
++
++ if (ts->ofdma && ts->pkt_type == HAL_TX_RATE_STATS_PKT_TYPE_11BE) {
++ txrate.bw = RATE_INFO_BW_EHT_RU;
++ txrate.eht_ru_alloc =
++ ath12k_mac_eht_ru_tones_to_nl80211_eht_ru_alloc(ts->tones);
++ }
++
++ spin_lock_bh(&ab->base_lock);
++ arsta->txrate = txrate;
++ spin_unlock_bh(&ab->base_lock);
++}
++
+ static void ath12k_dp_tx_complete_msdu(struct ath12k *ar,
+ struct sk_buff *msdu,
+ struct hal_tx_status *ts)
+@@ -660,6 +780,8 @@ static void ath12k_dp_tx_complete_msdu(struct ath12k *ar,
+ * Might end up reporting it out-of-band from HTT stats.
+ */
+
++ ath12k_dp_tx_update_txcompl(ar, ts);
++
+ ieee80211_tx_status_skb(ath12k_ar_to_hw(ar), msdu);
+
+ exit:
+@@ -670,6 +792,8 @@ static void ath12k_dp_tx_status_parse(struct ath12k_base *ab,
+ struct hal_wbm_completion_ring_tx *desc,
+ struct hal_tx_status *ts)
+ {
++ u32 info0 = le32_to_cpu(desc->rate_stats.info0);
++
+ ts->buf_rel_source =
+ le32_get_bits(desc->info0, HAL_WBM_COMPL_TX_INFO0_REL_SRC_MODULE);
+ if (ts->buf_rel_source != HAL_WBM_REL_SRC_MODULE_FW &&
+@@ -684,10 +808,17 @@ static void ath12k_dp_tx_status_parse(struct ath12k_base *ab,
+
+ ts->ppdu_id = le32_get_bits(desc->info1,
+ HAL_WBM_COMPL_TX_INFO1_TQM_STATUS_NUMBER);
+- if (le32_to_cpu(desc->rate_stats.info0) & HAL_TX_RATE_STATS_INFO0_VALID)
+- ts->rate_stats = le32_to_cpu(desc->rate_stats.info0);
+- else
+- ts->rate_stats = 0;
++
++ ts->peer_id = le32_get_bits(desc->info3, HAL_WBM_COMPL_TX_INFO3_PEER_ID);
++
++ if (info0 & HAL_TX_RATE_STATS_INFO0_VALID) {
++ ts->pkt_type = u32_get_bits(info0, HAL_TX_RATE_STATS_INFO0_PKT_TYPE);
++ ts->mcs = u32_get_bits(info0, HAL_TX_RATE_STATS_INFO0_MCS);
++ ts->sgi = u32_get_bits(info0, HAL_TX_RATE_STATS_INFO0_SGI);
++ ts->bw = u32_get_bits(info0, HAL_TX_RATE_STATS_INFO0_BW);
++ ts->tones = u32_get_bits(info0, HAL_TX_RATE_STATS_INFO0_TONES_IN_RU);
++ ts->ofdma = u32_get_bits(info0, HAL_TX_RATE_STATS_INFO0_OFDMA_TX);
++ }
+ }
+
+ void ath12k_dp_tx_completion_handler(struct ath12k_base *ab, int ring_id)
+diff --git a/drivers/net/wireless/ath/ath12k/hal_desc.h b/drivers/net/wireless/ath/ath12k/hal_desc.h
+index 7b0403d245e599..a102d27e5785f1 100644
+--- a/drivers/net/wireless/ath/ath12k/hal_desc.h
++++ b/drivers/net/wireless/ath/ath12k/hal_desc.h
+@@ -2968,7 +2968,7 @@ struct hal_mon_buf_ring {
+
+ #define HAL_MON_DEST_COOKIE_BUF_ID GENMASK(17, 0)
+
+-#define HAL_MON_DEST_INFO0_END_OFFSET GENMASK(15, 0)
++#define HAL_MON_DEST_INFO0_END_OFFSET GENMASK(11, 0)
+ #define HAL_MON_DEST_INFO0_FLUSH_DETECTED BIT(16)
+ #define HAL_MON_DEST_INFO0_END_OF_PPDU BIT(17)
+ #define HAL_MON_DEST_INFO0_INITIATOR BIT(18)
+diff --git a/drivers/net/wireless/ath/ath12k/hal_rx.h b/drivers/net/wireless/ath/ath12k/hal_rx.h
+index 54f3eaeca8bb96..ac3b3f17ec2c85 100644
+--- a/drivers/net/wireless/ath/ath12k/hal_rx.h
++++ b/drivers/net/wireless/ath/ath12k/hal_rx.h
+@@ -1,7 +1,7 @@
+ /* SPDX-License-Identifier: BSD-3-Clause-Clear */
+ /*
+ * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2024 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2025 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+ #ifndef ATH12K_HAL_RX_H
+@@ -146,6 +146,8 @@ struct hal_rx_user_status {
+ u32 mpdu_fcs_ok_bitmap[HAL_RX_NUM_WORDS_PER_PPDU_BITMAP];
+ u32 mpdu_ok_byte_count;
+ u32 mpdu_err_byte_count;
++ bool ampdu_present;
++ u16 ampdu_id;
+ };
+
+ #define HAL_MAX_UL_MU_USERS 37
+@@ -230,7 +232,6 @@ struct hal_rx_mon_ppdu_info {
+ u8 addr4[ETH_ALEN];
+ struct hal_rx_user_status userstats[HAL_MAX_UL_MU_USERS];
+ u8 userid;
+- u16 ampdu_id[HAL_MAX_UL_MU_USERS];
+ bool first_msdu_in_mpdu;
+ bool is_ampdu;
+ u8 medium_prot_type;
+@@ -665,6 +666,9 @@ enum nl80211_he_ru_alloc ath12k_he_ru_tones_to_nl80211_he_ru_alloc(u16 ru_tones)
+ case RU_996:
+ ret = NL80211_RATE_INFO_HE_RU_ALLOC_996;
+ break;
++ case RU_2X996:
++ ret = NL80211_RATE_INFO_HE_RU_ALLOC_2x996;
++ break;
+ case RU_26:
+ fallthrough;
+ default:
+diff --git a/drivers/net/wireless/ath/ath12k/hal_tx.h b/drivers/net/wireless/ath/ath12k/hal_tx.h
+index 3cf5973771d78d..eb065a79f6c647 100644
+--- a/drivers/net/wireless/ath/ath12k/hal_tx.h
++++ b/drivers/net/wireless/ath/ath12k/hal_tx.h
+@@ -1,7 +1,8 @@
+ /* SPDX-License-Identifier: BSD-3-Clause-Clear */
+ /*
+ * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2022, 2024 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2022, 2024-2025 Qualcomm Innovation Center, Inc.
++ * All rights reserved.
+ */
+
+ #ifndef ATH12K_HAL_TX_H
+@@ -63,7 +64,12 @@ struct hal_tx_status {
+ u8 try_cnt;
+ u8 tid;
+ u16 peer_id;
+- u32 rate_stats;
++ enum hal_tx_rate_stats_pkt_type pkt_type;
++ enum hal_tx_rate_stats_sgi sgi;
++ enum ath12k_supported_bw bw;
++ u8 mcs;
++ u16 tones;
++ u8 ofdma;
+ };
+
+ #define HAL_TX_PHY_DESC_INFO0_BF_TYPE GENMASK(17, 16)
+diff --git a/drivers/net/wireless/ath/ath12k/mac.c b/drivers/net/wireless/ath/ath12k/mac.c
+index 9c3e66dbe0c3be..5f6d9896ef613f 100644
+--- a/drivers/net/wireless/ath/ath12k/mac.c
++++ b/drivers/net/wireless/ath/ath12k/mac.c
+@@ -337,6 +337,82 @@ static const char *ath12k_mac_phymode_str(enum wmi_phy_mode mode)
+ return "<unknown>";
+ }
+
++u16 ath12k_mac_he_convert_tones_to_ru_tones(u16 tones)
++{
++ switch (tones) {
++ case 26:
++ return RU_26;
++ case 52:
++ return RU_52;
++ case 106:
++ return RU_106;
++ case 242:
++ return RU_242;
++ case 484:
++ return RU_484;
++ case 996:
++ return RU_996;
++ case (996 * 2):
++ return RU_2X996;
++ default:
++ return RU_26;
++ }
++}
++
++enum nl80211_eht_gi ath12k_mac_eht_gi_to_nl80211_eht_gi(u8 sgi)
++{
++ switch (sgi) {
++ case RX_MSDU_START_SGI_0_8_US:
++ return NL80211_RATE_INFO_EHT_GI_0_8;
++ case RX_MSDU_START_SGI_1_6_US:
++ return NL80211_RATE_INFO_EHT_GI_1_6;
++ case RX_MSDU_START_SGI_3_2_US:
++ return NL80211_RATE_INFO_EHT_GI_3_2;
++ default:
++ return NL80211_RATE_INFO_EHT_GI_0_8;
++ }
++}
++
++enum nl80211_eht_ru_alloc ath12k_mac_eht_ru_tones_to_nl80211_eht_ru_alloc(u16 ru_tones)
++{
++ switch (ru_tones) {
++ case 26:
++ return NL80211_RATE_INFO_EHT_RU_ALLOC_26;
++ case 52:
++ return NL80211_RATE_INFO_EHT_RU_ALLOC_52;
++ case (52 + 26):
++ return NL80211_RATE_INFO_EHT_RU_ALLOC_52P26;
++ case 106:
++ return NL80211_RATE_INFO_EHT_RU_ALLOC_106;
++ case (106 + 26):
++ return NL80211_RATE_INFO_EHT_RU_ALLOC_106P26;
++ case 242:
++ return NL80211_RATE_INFO_EHT_RU_ALLOC_242;
++ case 484:
++ return NL80211_RATE_INFO_EHT_RU_ALLOC_484;
++ case (484 + 242):
++ return NL80211_RATE_INFO_EHT_RU_ALLOC_484P242;
++ case 996:
++ return NL80211_RATE_INFO_EHT_RU_ALLOC_996;
++ case (996 + 484):
++ return NL80211_RATE_INFO_EHT_RU_ALLOC_996P484;
++ case (996 + 484 + 242):
++ return NL80211_RATE_INFO_EHT_RU_ALLOC_996P484P242;
++ case (2 * 996):
++ return NL80211_RATE_INFO_EHT_RU_ALLOC_2x996;
++ case (2 * 996 + 484):
++ return NL80211_RATE_INFO_EHT_RU_ALLOC_2x996P484;
++ case (3 * 996):
++ return NL80211_RATE_INFO_EHT_RU_ALLOC_3x996;
++ case (3 * 996 + 484):
++ return NL80211_RATE_INFO_EHT_RU_ALLOC_3x996P484;
++ case (4 * 996):
++ return NL80211_RATE_INFO_EHT_RU_ALLOC_4x996;
++ default:
++ return NL80211_RATE_INFO_EHT_RU_ALLOC_26;
++ }
++}
++
+ enum rate_info_bw
+ ath12k_mac_bw_to_mac80211_bw(enum ath12k_supported_bw bw)
+ {
+@@ -3116,6 +3192,7 @@ static void ath12k_peer_assoc_prepare(struct ath12k *ar,
+ ath12k_peer_assoc_h_smps(arsta, arg);
+ ath12k_peer_assoc_h_mlo(arsta, arg);
+
++ arsta->peer_nss = arg->peer_nss;
+ /* TODO: amsdu_disable req? */
+ }
+
+@@ -4534,9 +4611,6 @@ static int ath12k_mac_set_key(struct ath12k *ar, enum set_key_cmd cmd,
+ struct ath12k_link_sta *arsta,
+ struct ieee80211_key_conf *key)
+ {
+- struct ath12k_vif *ahvif = arvif->ahvif;
+- struct ieee80211_vif *vif = ath12k_ahvif_to_vif(ahvif);
+- struct ieee80211_bss_conf *link_conf;
+ struct ieee80211_sta *sta = NULL;
+ struct ath12k_base *ab = ar->ab;
+ struct ath12k_peer *peer;
+@@ -4553,19 +4627,10 @@ static int ath12k_mac_set_key(struct ath12k *ar, enum set_key_cmd cmd,
+ if (test_bit(ATH12K_FLAG_HW_CRYPTO_DISABLED, &ab->dev_flags))
+ return 1;
+
+- link_conf = ath12k_mac_get_link_bss_conf(arvif);
+- if (!link_conf) {
+- ath12k_warn(ab, "unable to access bss link conf in set key for vif %pM link %u\n",
+- vif->addr, arvif->link_id);
+- return -ENOLINK;
+- }
+-
+ if (sta)
+ peer_addr = arsta->addr;
+- else if (ahvif->vdev_type == WMI_VDEV_TYPE_STA)
+- peer_addr = link_conf->bssid;
+ else
+- peer_addr = link_conf->addr;
++ peer_addr = arvif->bssid;
+
+ key->hw_key_idx = key->keyidx;
+
+@@ -10054,6 +10119,8 @@ static void ath12k_mac_op_sta_statistics(struct ieee80211_hw *hw,
+ sinfo->txrate.he_gi = arsta->txrate.he_gi;
+ sinfo->txrate.he_dcm = arsta->txrate.he_dcm;
+ sinfo->txrate.he_ru_alloc = arsta->txrate.he_ru_alloc;
++ sinfo->txrate.eht_gi = arsta->txrate.eht_gi;
++ sinfo->txrate.eht_ru_alloc = arsta->txrate.eht_ru_alloc;
+ }
+ sinfo->txrate.flags = arsta->txrate.flags;
+ sinfo->filled |= BIT_ULL(NL80211_STA_INFO_TX_BITRATE);
+@@ -11140,6 +11207,9 @@ static int __ath12k_mac_mlo_setup(struct ath12k *ar)
+ }
+ }
+
++ if (num_link == 0)
++ return 0;
++
+ mlo.group_id = cpu_to_le32(ag->id);
+ mlo.partner_link_id = partner_link_id;
+ mlo.num_partner_links = num_link;
+@@ -11169,10 +11239,16 @@ static int __ath12k_mac_mlo_teardown(struct ath12k *ar)
+ {
+ struct ath12k_base *ab = ar->ab;
+ int ret;
++ u8 num_link;
+
+ if (test_bit(ATH12K_FLAG_RECOVERY, &ab->dev_flags))
+ return 0;
+
++ num_link = ath12k_get_num_partner_link(ar);
++
++ if (num_link == 0)
++ return 0;
++
+ ret = ath12k_wmi_mlo_teardown(ar);
+ if (ret) {
+ ath12k_warn(ab, "failed to send MLO teardown WMI command for pdev %d: %d\n",
+diff --git a/drivers/net/wireless/ath/ath12k/mac.h b/drivers/net/wireless/ath/ath12k/mac.h
+index 3594729b63974e..1acaf3f68292c9 100644
+--- a/drivers/net/wireless/ath/ath12k/mac.h
++++ b/drivers/net/wireless/ath/ath12k/mac.h
+@@ -1,7 +1,7 @@
+ /* SPDX-License-Identifier: BSD-3-Clause-Clear */
+ /*
+ * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2024 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2025 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+ #ifndef ATH12K_MAC_H
+@@ -108,5 +108,8 @@ int ath12k_mac_vdev_stop(struct ath12k_link_vif *arvif);
+ void ath12k_mac_get_any_chanctx_conf_iter(struct ieee80211_hw *hw,
+ struct ieee80211_chanctx_conf *conf,
+ void *data);
++u16 ath12k_mac_he_convert_tones_to_ru_tones(u16 tones);
++enum nl80211_eht_ru_alloc ath12k_mac_eht_ru_tones_to_nl80211_eht_ru_alloc(u16 ru_tones);
++enum nl80211_eht_gi ath12k_mac_eht_gi_to_nl80211_eht_gi(u8 sgi);
+
+ #endif
+diff --git a/drivers/net/wireless/ath/ath12k/pci.c b/drivers/net/wireless/ath/ath12k/pci.c
+index ee14b848454879..895b2314d1d58c 100644
+--- a/drivers/net/wireless/ath/ath12k/pci.c
++++ b/drivers/net/wireless/ath/ath12k/pci.c
+@@ -483,8 +483,11 @@ static void __ath12k_pci_ext_irq_disable(struct ath12k_base *ab)
+
+ ath12k_pci_ext_grp_disable(irq_grp);
+
+- napi_synchronize(&irq_grp->napi);
+- napi_disable(&irq_grp->napi);
++ if (irq_grp->napi_enabled) {
++ napi_synchronize(&irq_grp->napi);
++ napi_disable(&irq_grp->napi);
++ irq_grp->napi_enabled = false;
++ }
+ }
+ }
+
+@@ -1114,7 +1117,11 @@ void ath12k_pci_ext_irq_enable(struct ath12k_base *ab)
+ for (i = 0; i < ATH12K_EXT_IRQ_GRP_NUM_MAX; i++) {
+ struct ath12k_ext_irq_grp *irq_grp = &ab->ext_irq_grp[i];
+
+- napi_enable(&irq_grp->napi);
++ if (!irq_grp->napi_enabled) {
++ napi_enable(&irq_grp->napi);
++ irq_grp->napi_enabled = true;
++ }
++
+ ath12k_pci_ext_grp_enable(irq_grp);
+ }
+
+diff --git a/drivers/net/wireless/ath/ath12k/rx_desc.h b/drivers/net/wireless/ath/ath12k/rx_desc.h
+index 10366bbe999999..7367935ee68b36 100644
+--- a/drivers/net/wireless/ath/ath12k/rx_desc.h
++++ b/drivers/net/wireless/ath/ath12k/rx_desc.h
+@@ -1,7 +1,7 @@
+ /* SPDX-License-Identifier: BSD-3-Clause-Clear */
+ /*
+ * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2024 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2025 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+ #ifndef ATH12K_RX_DESC_H
+ #define ATH12K_RX_DESC_H
+@@ -637,6 +637,8 @@ enum rx_msdu_start_pkt_type {
+ RX_MSDU_START_PKT_TYPE_11N,
+ RX_MSDU_START_PKT_TYPE_11AC,
+ RX_MSDU_START_PKT_TYPE_11AX,
++ RX_MSDU_START_PKT_TYPE_11BA,
++ RX_MSDU_START_PKT_TYPE_11BE,
+ };
+
+ enum rx_msdu_start_sgi {
+@@ -1546,5 +1548,6 @@ struct hal_rx_desc {
+ #define RU_242 9
+ #define RU_484 18
+ #define RU_996 37
++#define RU_2X996 74
+
+ #endif /* ATH12K_RX_DESC_H */
+diff --git a/drivers/net/wireless/ath/ath12k/wmi.c b/drivers/net/wireless/ath/ath12k/wmi.c
+index 7a87777e0a047d..9cd7ceae5a4f88 100644
+--- a/drivers/net/wireless/ath/ath12k/wmi.c
++++ b/drivers/net/wireless/ath/ath12k/wmi.c
+@@ -2373,8 +2373,8 @@ void ath12k_wmi_start_scan_init(struct ath12k *ar,
+ arg->dwell_time_active = 50;
+ arg->dwell_time_active_2g = 0;
+ arg->dwell_time_passive = 150;
+- arg->dwell_time_active_6g = 40;
+- arg->dwell_time_passive_6g = 30;
++ arg->dwell_time_active_6g = 70;
++ arg->dwell_time_passive_6g = 70;
+ arg->min_rest_time = 50;
+ arg->max_rest_time = 500;
+ arg->repeat_probe_time = 0;
+diff --git a/drivers/net/wireless/ath/ath9k/init.c b/drivers/net/wireless/ath/ath9k/init.c
+index f9e77c4624d99c..01e0dffbf57ed7 100644
+--- a/drivers/net/wireless/ath/ath9k/init.c
++++ b/drivers/net/wireless/ath/ath9k/init.c
+@@ -647,7 +647,9 @@ static int ath9k_of_init(struct ath_softc *sc)
+ ah->ah_flags |= AH_NO_EEP_SWAP;
+ }
+
+- of_get_mac_address(np, common->macaddr);
++ ret = of_get_mac_address(np, common->macaddr);
++ if (ret == -EPROBE_DEFER)
++ return ret;
+
+ return 0;
+ }
+diff --git a/drivers/net/wireless/intel/iwlwifi/cfg/dr.c b/drivers/net/wireless/intel/iwlwifi/cfg/dr.c
+index ab7c0f8d54f425..d3542af0f625ee 100644
+--- a/drivers/net/wireless/intel/iwlwifi/cfg/dr.c
++++ b/drivers/net/wireless/intel/iwlwifi/cfg/dr.c
+@@ -148,11 +148,9 @@ const struct iwl_cfg_trans_params iwl_br_trans_cfg = {
+ .mq_rx_supported = true,
+ .rf_id = true,
+ .gen2 = true,
+- .integrated = true,
+ .umac_prph_offset = 0x300000,
+ .xtal_latency = 12000,
+ .low_latency_xtal = true,
+- .ltr_delay = IWL_CFG_TRANS_LTR_DELAY_2500US,
+ };
+
+ const char iwl_br_name[] = "Intel(R) TBD Br device";
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/dbg.c b/drivers/net/wireless/intel/iwlwifi/fw/dbg.c
+index 6594216f873c47..cd284767ff4bad 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/dbg.c
++++ b/drivers/net/wireless/intel/iwlwifi/fw/dbg.c
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
+ /*
+- * Copyright (C) 2005-2014, 2018-2024 Intel Corporation
++ * Copyright (C) 2005-2014, 2018-2025 Intel Corporation
+ * Copyright (C) 2013-2015 Intel Mobile Communications GmbH
+ * Copyright (C) 2015-2017 Intel Deutschland GmbH
+ */
+@@ -2691,7 +2691,7 @@ static u32 iwl_dump_ini_trigger(struct iwl_fw_runtime *fwrt,
+ }
+ /* collect DRAM_IMR region in the last */
+ if (imr_reg_data.reg_tlv)
+- size += iwl_dump_ini_mem(fwrt, list, &reg_data,
++ size += iwl_dump_ini_mem(fwrt, list, &imr_reg_data,
+ &iwl_dump_ini_region_ops[IWL_FW_INI_REGION_DRAM_IMR]);
+
+ if (size) {
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/uefi.c b/drivers/net/wireless/intel/iwlwifi/fw/uefi.c
+index 434eed4130b901..91bcff499311db 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/uefi.c
++++ b/drivers/net/wireless/intel/iwlwifi/fw/uefi.c
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
+ /*
+- * Copyright(c) 2021-2024 Intel Corporation
++ * Copyright(c) 2021-2025 Intel Corporation
+ */
+
+ #include "iwl-drv.h"
+@@ -678,8 +678,10 @@ int iwl_uefi_get_eckv(struct iwl_fw_runtime *fwrt, u32 *extl_clk)
+ struct uefi_cnv_var_eckv *data;
+ int ret = 0;
+
+- data = iwl_uefi_get_verified_variable(fwrt->trans, IWL_UEFI_ECKV_NAME,
+- "ECKV", sizeof(*data), NULL);
++ data = iwl_uefi_get_verified_variable_guid(fwrt->trans,
++ &IWL_EFI_WIFI_BT_GUID,
++ IWL_UEFI_ECKV_NAME,
++ "ECKV", sizeof(*data), NULL);
+ if (IS_ERR(data))
+ return -EINVAL;
+
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/uefi.h b/drivers/net/wireless/intel/iwlwifi/fw/uefi.h
+index 0c8943a8bd0115..eb3c05417da371 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/uefi.h
++++ b/drivers/net/wireless/intel/iwlwifi/fw/uefi.h
+@@ -1,6 +1,6 @@
+ /* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
+ /*
+- * Copyright(c) 2021-2024 Intel Corporation
++ * Copyright(c) 2021-2025 Intel Corporation
+ */
+ #ifndef __iwl_fw_uefi__
+ #define __iwl_fw_uefi__
+@@ -19,7 +19,7 @@
+ #define IWL_UEFI_WTAS_NAME L"UefiCnvWlanWTAS"
+ #define IWL_UEFI_SPLC_NAME L"UefiCnvWlanSPLC"
+ #define IWL_UEFI_WRDD_NAME L"UefiCnvWlanWRDD"
+-#define IWL_UEFI_ECKV_NAME L"UefiCnvWlanECKV"
++#define IWL_UEFI_ECKV_NAME L"UefiCnvCommonECKV"
+ #define IWL_UEFI_DSM_NAME L"UefiCnvWlanGeneralCfg"
+ #define IWL_UEFI_WBEM_NAME L"UefiCnvWlanWBEM"
+ #define IWL_UEFI_PUNCTURING_NAME L"UefiCnvWlanPuncturing"
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c b/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
+index 08d990ba8a7949..ce787326aa69d0 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
+ /*
+- * Copyright (C) 2018-2024 Intel Corporation
++ * Copyright (C) 2018-2025 Intel Corporation
+ */
+ #include <linux/firmware.h>
+ #include "iwl-drv.h"
+@@ -1372,15 +1372,15 @@ void _iwl_dbg_tlv_time_point(struct iwl_fw_runtime *fwrt,
+ switch (tp_id) {
+ case IWL_FW_INI_TIME_POINT_EARLY:
+ iwl_dbg_tlv_init_cfg(fwrt);
+- iwl_dbg_tlv_apply_config(fwrt, conf_list);
+ iwl_dbg_tlv_update_drams(fwrt);
+ iwl_dbg_tlv_tp_trigger(fwrt, sync, trig_list, tp_data, NULL);
++ iwl_dbg_tlv_apply_config(fwrt, conf_list);
+ break;
+ case IWL_FW_INI_TIME_POINT_AFTER_ALIVE:
+ iwl_dbg_tlv_apply_buffers(fwrt);
+ iwl_dbg_tlv_send_hcmds(fwrt, hcmd_list);
+- iwl_dbg_tlv_apply_config(fwrt, conf_list);
+ iwl_dbg_tlv_tp_trigger(fwrt, sync, trig_list, tp_data, NULL);
++ iwl_dbg_tlv_apply_config(fwrt, conf_list);
+ break;
+ case IWL_FW_INI_TIME_POINT_PERIODIC:
+ iwl_dbg_tlv_set_periodic_trigs(fwrt);
+@@ -1390,14 +1390,14 @@ void _iwl_dbg_tlv_time_point(struct iwl_fw_runtime *fwrt,
+ case IWL_FW_INI_TIME_POINT_MISSED_BEACONS:
+ case IWL_FW_INI_TIME_POINT_FW_DHC_NOTIFICATION:
+ iwl_dbg_tlv_send_hcmds(fwrt, hcmd_list);
+- iwl_dbg_tlv_apply_config(fwrt, conf_list);
+ iwl_dbg_tlv_tp_trigger(fwrt, sync, trig_list, tp_data,
+ iwl_dbg_tlv_check_fw_pkt);
++ iwl_dbg_tlv_apply_config(fwrt, conf_list);
+ break;
+ default:
+ iwl_dbg_tlv_send_hcmds(fwrt, hcmd_list);
+- iwl_dbg_tlv_apply_config(fwrt, conf_list);
+ iwl_dbg_tlv_tp_trigger(fwrt, sync, trig_list, tp_data, NULL);
++ iwl_dbg_tlv_apply_config(fwrt, conf_list);
+ break;
+ }
+ }
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-trans.c b/drivers/net/wireless/intel/iwlwifi/iwl-trans.c
+index ced8261c725f8c..75571f8693ee3a 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-trans.c
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-trans.c
+@@ -2,7 +2,7 @@
+ /*
+ * Copyright (C) 2015 Intel Mobile Communications GmbH
+ * Copyright (C) 2016-2017 Intel Deutschland GmbH
+- * Copyright (C) 2019-2021, 2023-2024 Intel Corporation
++ * Copyright (C) 2019-2021, 2023-2025 Intel Corporation
+ */
+ #include <linux/kernel.h>
+ #include <linux/bsearch.h>
+@@ -655,6 +655,9 @@ IWL_EXPORT_SYMBOL(iwl_trans_tx);
+ void iwl_trans_reclaim(struct iwl_trans *trans, int queue, int ssn,
+ struct sk_buff_head *skbs, bool is_flush)
+ {
++ if (unlikely(test_bit(STATUS_FW_ERROR, &trans->status)))
++ return;
++
+ if (WARN_ONCE(trans->state != IWL_TRANS_FW_ALIVE,
+ "bad state = %d\n", trans->state))
+ return;
+@@ -687,6 +690,9 @@ IWL_EXPORT_SYMBOL(iwl_trans_txq_enable_cfg);
+
+ int iwl_trans_wait_txq_empty(struct iwl_trans *trans, int queue)
+ {
++ if (unlikely(test_bit(STATUS_FW_ERROR, &trans->status)))
++ return -EIO;
++
+ if (WARN_ONCE(trans->state != IWL_TRANS_FW_ALIVE,
+ "bad state = %d\n", trans->state))
+ return -EIO;
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/ftm-initiator.c b/drivers/net/wireless/intel/iwlwifi/mvm/ftm-initiator.c
+index b26141c30c61c5..0a96c26e741b86 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/ftm-initiator.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/ftm-initiator.c
+@@ -773,7 +773,11 @@ iwl_mvm_ftm_set_secured_ranging(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+
+ target.bssid = bssid;
+ target.cipher = cipher;
++ target.tk = NULL;
+ ieee80211_iter_keys(mvm->hw, vif, iter, &target);
++
++ if (!WARN_ON(!target.tk))
++ memcpy(tk, target.tk, TK_11AZ_LEN);
+ } else {
+ memcpy(tk, entry->tk, sizeof(entry->tk));
+ }
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+index af6644b7e95fbe..e17ad647da48ce 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+@@ -4099,6 +4099,20 @@ iwl_mvm_sta_state_authorized_to_assoc(struct iwl_mvm *mvm,
+ return 0;
+ }
+
++void iwl_mvm_smps_workaround(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
++ bool update)
++{
++ struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
++
++ if (!iwl_mvm_has_rlc_offload(mvm))
++ return;
++
++ mvmvif->ps_disabled = !vif->cfg.ps;
++
++ if (update)
++ iwl_mvm_power_update_mac(mvm);
++}
++
+ /* Common part for MLD and non-MLD modes */
+ int iwl_mvm_mac_sta_state_common(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+@@ -4191,6 +4205,7 @@ int iwl_mvm_mac_sta_state_common(struct ieee80211_hw *hw,
+ new_state == IEEE80211_STA_AUTHORIZED) {
+ ret = iwl_mvm_sta_state_assoc_to_authorized(mvm, vif, sta,
+ callbacks);
++ iwl_mvm_smps_workaround(mvm, vif, true);
+ } else if (old_state == IEEE80211_STA_AUTHORIZED &&
+ new_state == IEEE80211_STA_ASSOC) {
+ ret = iwl_mvm_sta_state_authorized_to_assoc(mvm, vif, sta,
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mld-mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mld-mac80211.c
+index 341a2a7a49ec9b..78d7153a0cfca0 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mld-mac80211.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mld-mac80211.c
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
+ /*
+- * Copyright (C) 2022-2024 Intel Corporation
++ * Copyright (C) 2022-2025 Intel Corporation
+ */
+ #include "mvm.h"
+
+@@ -887,6 +887,7 @@ static void iwl_mvm_mld_vif_cfg_changed_station(struct iwl_mvm *mvm,
+ }
+
+ if (changes & BSS_CHANGED_PS) {
++ iwl_mvm_smps_workaround(mvm, vif, false);
+ ret = iwl_mvm_power_update_mac(mvm);
+ if (ret)
+ IWL_ERR(mvm, "failed to update power mode\n");
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h b/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
+index ee769da72e68ce..211f542ec85572 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
+@@ -3043,4 +3043,7 @@ iwl_mvm_send_ap_tx_power_constraint_cmd(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *bss_conf,
+ bool is_ap);
++
++void iwl_mvm_smps_workaround(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
++ bool update);
+ #endif /* __IWL_MVM_H__ */
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+index d4c1bc20971fba..69cf46c79b4b34 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+@@ -587,6 +587,8 @@ VISIBLE_IF_IWLWIFI_KUNIT const struct iwl_dev_info iwl_dev_info_table[] = {
+ IWL_DEV_INFO(0x7A70, 0x1692, iwlax411_2ax_cfg_so_gf4_a0, iwl_ax411_killer_1690i_name),
+ IWL_DEV_INFO(0x7AF0, 0x1691, iwlax411_2ax_cfg_so_gf4_a0, iwl_ax411_killer_1690s_name),
+ IWL_DEV_INFO(0x7AF0, 0x1692, iwlax411_2ax_cfg_so_gf4_a0, iwl_ax411_killer_1690i_name),
++ IWL_DEV_INFO(0x7F70, 0x1691, iwlax411_2ax_cfg_so_gf4_a0, iwl_ax411_killer_1690s_name),
++ IWL_DEV_INFO(0x7F70, 0x1692, iwlax411_2ax_cfg_so_gf4_a0, iwl_ax411_killer_1690i_name),
+
+ IWL_DEV_INFO(0x271C, 0x0214, iwl9260_2ac_cfg, iwl9260_1_name),
+ IWL_DEV_INFO(0x7E40, 0x1691, iwl_cfg_ma, iwl_ax411_killer_1690s_name),
+diff --git a/drivers/net/wireless/marvell/mwifiex/11n.c b/drivers/net/wireless/marvell/mwifiex/11n.c
+index 66f0f5377ac181..738bafc3749b0a 100644
+--- a/drivers/net/wireless/marvell/mwifiex/11n.c
++++ b/drivers/net/wireless/marvell/mwifiex/11n.c
+@@ -403,12 +403,14 @@ mwifiex_cmd_append_11n_tlv(struct mwifiex_private *priv,
+
+ if (sband->ht_cap.cap & IEEE80211_HT_CAP_SUP_WIDTH_20_40 &&
+ bss_desc->bcn_ht_oper->ht_param &
+- IEEE80211_HT_PARAM_CHAN_WIDTH_ANY)
++ IEEE80211_HT_PARAM_CHAN_WIDTH_ANY) {
++ chan_list->chan_scan_param[0].radio_type |=
++ CHAN_BW_40MHZ << 2;
+ SET_SECONDARYCHAN(chan_list->chan_scan_param[0].
+ radio_type,
+ (bss_desc->bcn_ht_oper->ht_param &
+ IEEE80211_HT_PARAM_CHA_SEC_OFFSET));
+-
++ }
+ *buffer += struct_size(chan_list, chan_scan_param, 1);
+ ret_len += struct_size(chan_list, chan_scan_param, 1);
+ }
+diff --git a/drivers/net/wireless/mediatek/mt76/channel.c b/drivers/net/wireless/mediatek/mt76/channel.c
+index 6a35c6ebd823e2..e7b839e7429034 100644
+--- a/drivers/net/wireless/mediatek/mt76/channel.c
++++ b/drivers/net/wireless/mediatek/mt76/channel.c
+@@ -293,6 +293,7 @@ struct mt76_vif_link *mt76_get_vif_phy_link(struct mt76_phy *phy,
+ kfree(mlink);
+ return ERR_PTR(ret);
+ }
++ rcu_assign_pointer(mvif->offchannel_link, mlink);
+
+ return mlink;
+ }
+@@ -301,10 +302,12 @@ void mt76_put_vif_phy_link(struct mt76_phy *phy, struct ieee80211_vif *vif,
+ struct mt76_vif_link *mlink)
+ {
+ struct mt76_dev *dev = phy->dev;
++ struct mt76_vif_data *mvif = mlink->mvif;
+
+ if (IS_ERR_OR_NULL(mlink) || !mlink->offchannel)
+ return;
+
++ rcu_assign_pointer(mvif->offchannel_link, NULL);
+ dev->drv->vif_link_remove(phy, vif, &vif->bss_conf, mlink);
+ kfree(mlink);
+ }
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76.h b/drivers/net/wireless/mediatek/mt76/mt76.h
+index 05651efb549ecf..e4ecd9cde36dc3 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76.h
++++ b/drivers/net/wireless/mediatek/mt76/mt76.h
+@@ -351,6 +351,7 @@ struct mt76_wcid {
+ u8 hw_key_idx;
+ u8 hw_key_idx2;
+
++ u8 offchannel:1;
+ u8 sta:1;
+ u8 sta_disabled:1;
+ u8 amsdu:1;
+@@ -491,6 +492,7 @@ struct mt76_hw_cap {
+ #define MT_DRV_RX_DMA_HDR BIT(3)
+ #define MT_DRV_HW_MGMT_TXQ BIT(4)
+ #define MT_DRV_AMSDU_OFFLOAD BIT(5)
++#define MT_DRV_IGNORE_TXS_FAILED BIT(6)
+
+ struct mt76_driver_ops {
+ u32 drv_flags;
+@@ -787,6 +789,7 @@ struct mt76_vif_link {
+
+ struct mt76_vif_data {
+ struct mt76_vif_link __rcu *link[IEEE80211_MLD_MAX_NUM_LINKS];
++ struct mt76_vif_link __rcu *offchannel_link;
+
+ struct mt76_phy *roc_phy;
+ u16 valid_links;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76_connac3_mac.h b/drivers/net/wireless/mediatek/mt76/mt76_connac3_mac.h
+index db0c29e65185ca..487ad716f872ad 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76_connac3_mac.h
++++ b/drivers/net/wireless/mediatek/mt76/mt76_connac3_mac.h
+@@ -314,6 +314,9 @@ enum tx_frag_idx {
+ #define MT_TXFREE_INFO_COUNT GENMASK(27, 24)
+ #define MT_TXFREE_INFO_STAT GENMASK(29, 28)
+
++#define MT_TXS_HDR_SIZE 4 /* Unit: DW */
++#define MT_TXS_SIZE 12 /* Unit: DW */
++
+ #define MT_TXS0_BW GENMASK(31, 29)
+ #define MT_TXS0_TID GENMASK(28, 26)
+ #define MT_TXS0_AMPDU BIT(25)
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
+index d0e49d68c5dbf0..bafcf5a279e23f 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
+@@ -391,7 +391,7 @@ void mt76_connac_mcu_sta_basic_tlv(struct mt76_dev *dev, struct sk_buff *skb,
+ basic->conn_type = cpu_to_le32(CONNECTION_INFRA_BC);
+
+ if (vif->type == NL80211_IFTYPE_STATION &&
+- !is_zero_ether_addr(link_conf->bssid)) {
++ link_conf && !is_zero_ether_addr(link_conf->bssid)) {
+ memcpy(basic->peer_addr, link_conf->bssid, ETH_ALEN);
+ basic->aid = cpu_to_le16(vif->cfg.aid);
+ } else {
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x0/pci.c b/drivers/net/wireless/mediatek/mt76/mt76x0/pci.c
+index b456ccd00d581f..11c16d1fc70fc0 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x0/pci.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76x0/pci.c
+@@ -156,7 +156,8 @@ mt76x0e_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ static const struct mt76_driver_ops drv_ops = {
+ .txwi_size = sizeof(struct mt76x02_txwi),
+ .drv_flags = MT_DRV_TX_ALIGNED4_SKBS |
+- MT_DRV_SW_RX_AIRTIME,
++ MT_DRV_SW_RX_AIRTIME |
++ MT_DRV_IGNORE_TXS_FAILED,
+ .survey_flags = SURVEY_INFO_TIME_TX,
+ .update_survey = mt76x02_update_channel,
+ .set_channel = mt76x0_set_channel,
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x0/usb.c b/drivers/net/wireless/mediatek/mt76/mt76x0/usb.c
+index b031c500b74156..90e5666c0857dc 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x0/usb.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76x0/usb.c
+@@ -214,7 +214,8 @@ static int mt76x0u_probe(struct usb_interface *usb_intf,
+ const struct usb_device_id *id)
+ {
+ static const struct mt76_driver_ops drv_ops = {
+- .drv_flags = MT_DRV_SW_RX_AIRTIME,
++ .drv_flags = MT_DRV_SW_RX_AIRTIME |
++ MT_DRV_IGNORE_TXS_FAILED,
+ .survey_flags = SURVEY_INFO_TIME_TX,
+ .update_survey = mt76x02_update_channel,
+ .set_channel = mt76x0_set_channel,
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x2/pci.c b/drivers/net/wireless/mediatek/mt76/mt76x2/pci.c
+index 727bfdd00b4000..2303019670e2c0 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x2/pci.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76x2/pci.c
+@@ -22,7 +22,8 @@ mt76x2e_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ static const struct mt76_driver_ops drv_ops = {
+ .txwi_size = sizeof(struct mt76x02_txwi),
+ .drv_flags = MT_DRV_TX_ALIGNED4_SKBS |
+- MT_DRV_SW_RX_AIRTIME,
++ MT_DRV_SW_RX_AIRTIME |
++ MT_DRV_IGNORE_TXS_FAILED,
+ .survey_flags = SURVEY_INFO_TIME_TX,
+ .update_survey = mt76x02_update_channel,
+ .set_channel = mt76x2e_set_channel,
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x2/usb.c b/drivers/net/wireless/mediatek/mt76/mt76x2/usb.c
+index a4f4d12f904e7c..84ef80ab4afbfa 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x2/usb.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76x2/usb.c
+@@ -30,7 +30,8 @@ static int mt76x2u_probe(struct usb_interface *intf,
+ const struct usb_device_id *id)
+ {
+ static const struct mt76_driver_ops drv_ops = {
+- .drv_flags = MT_DRV_SW_RX_AIRTIME,
++ .drv_flags = MT_DRV_SW_RX_AIRTIME |
++ MT_DRV_IGNORE_TXS_FAILED,
+ .survey_flags = SURVEY_INFO_TIME_TX,
+ .update_survey = mt76x02_update_channel,
+ .set_channel = mt76x2u_set_channel,
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c
+index f8d45d43f7807f..59fa812b30d35e 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c
+@@ -348,14 +348,10 @@ mt7925_mcu_handle_hif_ctrl_basic(struct mt792x_dev *dev, struct tlv *tlv)
+ basic = (struct mt7925_mcu_hif_ctrl_basic_tlv *)tlv;
+
+ if (basic->hifsuspend) {
+- if (basic->hif_tx_traffic_status == HIF_TRAFFIC_IDLE &&
+- basic->hif_rx_traffic_status == HIF_TRAFFIC_IDLE)
+- /* success */
+- dev->hif_idle = true;
+- else
+- /* busy */
+- /* invalid */
+- dev->hif_idle = false;
++ dev->hif_idle = true;
++ if (!(basic->hif_tx_traffic_status == HIF_TRAFFIC_IDLE &&
++ basic->hif_rx_traffic_status == HIF_TRAFFIC_IDLE))
++ dev_info(dev->mt76.dev, "Hif traffic not idle.\n");
+ } else {
+ dev->hif_resumed = true;
+ }
+@@ -631,6 +627,54 @@ int mt7925_mcu_uni_rx_ba(struct mt792x_dev *dev,
+ enable, false);
+ }
+
++static int mt7925_mcu_read_eeprom(struct mt792x_dev *dev, u32 offset, u8 *val)
++{
++ struct {
++ u8 rsv[4];
++
++ __le16 tag;
++ __le16 len;
++
++ __le32 addr;
++ __le32 valid;
++ u8 data[MT7925_EEPROM_BLOCK_SIZE];
++ } __packed req = {
++ .tag = cpu_to_le16(1),
++ .len = cpu_to_le16(sizeof(req) - 4),
++ .addr = cpu_to_le32(round_down(offset,
++ MT7925_EEPROM_BLOCK_SIZE)),
++ };
++ struct evt {
++ u8 rsv[4];
++
++ __le16 tag;
++ __le16 len;
++
++ __le32 ver;
++ __le32 addr;
++ __le32 valid;
++ __le32 size;
++ __le32 magic_num;
++ __le32 type;
++ __le32 rsv1[4];
++ u8 data[32];
++ } __packed *res;
++ struct sk_buff *skb;
++ int ret;
++
++ ret = mt76_mcu_send_and_get_msg(&dev->mt76, MCU_WM_UNI_CMD_QUERY(EFUSE_CTRL),
++ &req, sizeof(req), true, &skb);
++ if (ret)
++ return ret;
++
++ res = (struct evt *)skb->data;
++ *val = res->data[offset % MT7925_EEPROM_BLOCK_SIZE];
++
++ dev_kfree_skb(skb);
++
++ return 0;
++}
++
+ static int mt7925_load_clc(struct mt792x_dev *dev, const char *fw_name)
+ {
+ const struct mt76_connac2_fw_trailer *hdr;
+@@ -639,13 +683,20 @@ static int mt7925_load_clc(struct mt792x_dev *dev, const char *fw_name)
+ struct mt76_dev *mdev = &dev->mt76;
+ struct mt792x_phy *phy = &dev->phy;
+ const struct firmware *fw;
++ u8 *clc_base = NULL, hw_encap = 0;
+ int ret, i, len, offset = 0;
+- u8 *clc_base = NULL;
+
+ if (mt7925_disable_clc ||
+ mt76_is_usb(&dev->mt76))
+ return 0;
+
++ if (mt76_is_mmio(&dev->mt76)) {
++ ret = mt7925_mcu_read_eeprom(dev, MT_EE_HW_TYPE, &hw_encap);
++ if (ret)
++ return ret;
++ hw_encap = u8_get_bits(hw_encap, MT_EE_HW_TYPE_ENCAP);
++ }
++
+ ret = request_firmware(&fw, fw_name, mdev->dev);
+ if (ret)
+ return ret;
+@@ -690,6 +741,10 @@ static int mt7925_load_clc(struct mt792x_dev *dev, const char *fw_name)
+ if (phy->clc[clc->idx])
+ continue;
+
++ /* header content sanity */
++ if (u8_get_bits(clc->type, MT_EE_HW_TYPE_ENCAP) != hw_encap)
++ continue;
++
+ phy->clc[clc->idx] = devm_kmemdup(mdev->dev, clc,
+ le32_to_cpu(clc->len),
+ GFP_KERNEL);
+@@ -3239,6 +3294,9 @@ int mt7925_mcu_fill_message(struct mt76_dev *mdev, struct sk_buff *skb,
+ else
+ uni_txd->option = MCU_CMD_UNI_EXT_ACK;
+
++ if (cmd == MCU_UNI_CMD(HIF_CTRL))
++ uni_txd->option &= ~MCU_CMD_ACK;
++
+ goto exit;
+ }
+
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/mt7925.h b/drivers/net/wireless/mediatek/mt76/mt7925/mt7925.h
+index cb7b1a49fbd14e..073e433069e0e5 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7925/mt7925.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7925/mt7925.h
+@@ -167,9 +167,12 @@ enum mt7925_eeprom_field {
+ MT_EE_CHIP_ID = 0x000,
+ MT_EE_VERSION = 0x002,
+ MT_EE_MAC_ADDR = 0x004,
++ MT_EE_HW_TYPE = 0xa71,
+ __MT_EE_MAX = 0x9ff
+ };
+
++#define MT_EE_HW_TYPE_ENCAP GENMASK(1, 0)
++
+ enum {
+ TXPWR_USER,
+ TXPWR_EEPROM,
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/mac.c b/drivers/net/wireless/mediatek/mt76/mt7996/mac.c
+index 019c925ae600e8..c7e83360273340 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/mac.c
+@@ -832,7 +832,8 @@ void mt7996_mac_write_txwi(struct mt7996_dev *dev, __le32 *txwi,
+ u8 band_idx = (info->hw_queue & MT_TX_HW_QUEUE_PHY) >> 2;
+ u8 p_fmt, q_idx, omac_idx = 0, wmm_idx = 0;
+ bool is_8023 = info->flags & IEEE80211_TX_CTL_HW_80211_ENCAP;
+- struct mt76_vif_link *mvif;
++ struct mt76_vif_link *mlink = NULL;
++ struct mt7996_vif *mvif;
+ u16 tx_count = 15;
+ u32 val;
+ bool inband_disc = !!(changed & (BSS_CHANGED_UNSOL_BCAST_PROBE_RESP |
+@@ -840,11 +841,18 @@ void mt7996_mac_write_txwi(struct mt7996_dev *dev, __le32 *txwi,
+ bool beacon = !!(changed & (BSS_CHANGED_BEACON |
+ BSS_CHANGED_BEACON_ENABLED)) && (!inband_disc);
+
+- mvif = vif ? (struct mt76_vif_link *)vif->drv_priv : NULL;
+- if (mvif) {
+- omac_idx = mvif->omac_idx;
+- wmm_idx = mvif->wmm_idx;
+- band_idx = mvif->band_idx;
++ if (vif) {
++ mvif = (struct mt7996_vif *)vif->drv_priv;
++ if (wcid->offchannel)
++ mlink = rcu_dereference(mvif->mt76.offchannel_link);
++ if (!mlink)
++ mlink = &mvif->deflink.mt76;
++ }
++
++ if (mlink) {
++ omac_idx = mlink->omac_idx;
++ wmm_idx = mlink->wmm_idx;
++ band_idx = mlink->band_idx;
+ }
+
+ if (inband_disc) {
+@@ -910,13 +918,13 @@ void mt7996_mac_write_txwi(struct mt7996_dev *dev, __le32 *txwi,
+ is_multicast_ether_addr(hdr->addr1);
+ u8 idx = MT7996_BASIC_RATES_TBL;
+
+- if (mvif) {
+- if (mcast && mvif->mcast_rates_idx)
+- idx = mvif->mcast_rates_idx;
+- else if (beacon && mvif->beacon_rates_idx)
+- idx = mvif->beacon_rates_idx;
++ if (mlink) {
++ if (mcast && mlink->mcast_rates_idx)
++ idx = mlink->mcast_rates_idx;
++ else if (beacon && mlink->beacon_rates_idx)
++ idx = mlink->beacon_rates_idx;
+ else
+- idx = mvif->basic_rates_idx;
++ idx = mlink->basic_rates_idx;
+ }
+
+ val = FIELD_PREP(MT_TXD6_TX_RATE, idx) | MT_TXD6_FIXED_BW;
+@@ -984,8 +992,14 @@ int mt7996_tx_prepare_skb(struct mt76_dev *mdev, void *txwi_ptr,
+
+ if (vif) {
+ struct mt7996_vif *mvif = (struct mt7996_vif *)vif->drv_priv;
++ struct mt76_vif_link *mlink = NULL;
++
++ if (wcid->offchannel)
++ mlink = rcu_dereference(mvif->mt76.offchannel_link);
++ if (!mlink)
++ mlink = &mvif->deflink.mt76;
+
+- txp->fw.bss_idx = mvif->deflink.mt76.idx;
++ txp->fw.bss_idx = mlink->idx;
+ }
+
+ txp->fw.token = cpu_to_le16(id);
+@@ -1399,7 +1413,7 @@ bool mt7996_rx_check(struct mt76_dev *mdev, void *data, int len)
+ mt7996_mac_tx_free(dev, data, len);
+ return false;
+ case PKT_TYPE_TXS:
+- for (rxd += 4; rxd + 8 <= end; rxd += 8)
++ for (rxd += MT_TXS_HDR_SIZE; rxd + MT_TXS_SIZE <= end; rxd += MT_TXS_SIZE)
+ mt7996_mac_add_txs(dev, rxd);
+ return false;
+ case PKT_TYPE_RX_FW_MONITOR:
+@@ -1442,7 +1456,7 @@ void mt7996_queue_rx_skb(struct mt76_dev *mdev, enum mt76_rxq_id q,
+ mt7996_mcu_rx_event(dev, skb);
+ break;
+ case PKT_TYPE_TXS:
+- for (rxd += 4; rxd + 8 <= end; rxd += 8)
++ for (rxd += MT_TXS_HDR_SIZE; rxd + MT_TXS_SIZE <= end; rxd += MT_TXS_SIZE)
+ mt7996_mac_add_txs(dev, rxd);
+ dev_kfree_skb(skb);
+ break;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/main.c b/drivers/net/wireless/mediatek/mt76/mt7996/main.c
+index 69dd565d831900..b01cc7ef479997 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/main.c
+@@ -249,6 +249,7 @@ int mt7996_vif_link_add(struct mt76_phy *mphy, struct ieee80211_vif *vif,
+ mlink->band_idx = band_idx;
+ mlink->wmm_idx = vif->type == NL80211_IFTYPE_AP ? 0 : 3;
+ mlink->wcid = &link->sta.wcid;
++ mlink->wcid->offchannel = mlink->offchannel;
+
+ ret = mt7996_mcu_add_dev_info(phy, vif, link_conf, mlink, true);
+ if (ret)
+@@ -601,6 +602,33 @@ static void mt7996_configure_filter(struct ieee80211_hw *hw,
+ mutex_unlock(&dev->mt76.mutex);
+ }
+
++static int
++mt7996_get_txpower(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
++ unsigned int link_id, int *dbm)
++{
++ struct mt7996_vif *mvif = (struct mt7996_vif *)vif->drv_priv;
++ struct mt7996_phy *phy = mt7996_vif_link_phy(&mvif->deflink);
++ struct mt7996_dev *dev = mt7996_hw_dev(hw);
++ struct wireless_dev *wdev;
++ int n_chains, delta, i;
++
++ if (!phy) {
++ wdev = ieee80211_vif_to_wdev(vif);
++ for (i = 0; i < hw->wiphy->n_radio; i++)
++ if (wdev->radio_mask & BIT(i))
++ phy = dev->radio_phy[i];
++
++ if (!phy)
++ return -EINVAL;
++ }
++
++ n_chains = hweight16(phy->mt76->chainmask);
++ delta = mt76_tx_power_nss_delta(n_chains);
++ *dbm = DIV_ROUND_UP(phy->mt76->txpower_cur + delta, 2);
++
++ return 0;
++}
++
+ static u8
+ mt7996_get_rates_table(struct mt7996_phy *phy, struct ieee80211_bss_conf *conf,
+ bool beacon, bool mcast)
+@@ -1650,7 +1678,7 @@ const struct ieee80211_ops mt7996_ops = {
+ .remain_on_channel = mt76_remain_on_channel,
+ .cancel_remain_on_channel = mt76_cancel_remain_on_channel,
+ .release_buffered_frames = mt76_release_buffered_frames,
+- .get_txpower = mt76_get_txpower,
++ .get_txpower = mt7996_get_txpower,
+ .channel_switch_beacon = mt7996_channel_switch_beacon,
+ .get_stats = mt7996_get_stats,
+ .get_et_sset_count = mt7996_get_et_sset_count,
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/mcu.h b/drivers/net/wireless/mediatek/mt76/mt7996/mcu.h
+index 43468bcaffc6dd..a75e1c9435bb01 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/mcu.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/mcu.h
+@@ -908,7 +908,8 @@ enum {
+ UNI_CMD_SER_SET_RECOVER_L3_TX_DISABLE,
+ UNI_CMD_SER_SET_RECOVER_L3_BF,
+ UNI_CMD_SER_SET_RECOVER_L4_MDP,
+- UNI_CMD_SER_SET_RECOVER_FULL,
++ UNI_CMD_SER_SET_RECOVER_FROM_ETH,
++ UNI_CMD_SER_SET_RECOVER_FULL = 8,
+ UNI_CMD_SER_SET_SYSTEM_ASSERT,
+ /* action */
+ UNI_CMD_SER_ENABLE = 1,
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/mmio.c b/drivers/net/wireless/mediatek/mt76/mt7996/mmio.c
+index 7a8ee6c75cf2bd..9d37f823874643 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/mmio.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/mmio.c
+@@ -281,7 +281,7 @@ static int mt7996_mmio_wed_reset(struct mtk_wed_device *wed)
+ if (test_and_set_bit(MT76_STATE_WED_RESET, &mphy->state))
+ return -EBUSY;
+
+- ret = mt7996_mcu_set_ser(dev, UNI_CMD_SER_TRIGGER, UNI_CMD_SER_SET_RECOVER_L1,
++ ret = mt7996_mcu_set_ser(dev, UNI_CMD_SER_TRIGGER, UNI_CMD_SER_SET_RECOVER_FROM_ETH,
+ mphy->band_idx);
+ if (ret)
+ goto out;
+diff --git a/drivers/net/wireless/mediatek/mt76/scan.c b/drivers/net/wireless/mediatek/mt76/scan.c
+index 1c4f9deaaada5e..9b20ccbeb8cf19 100644
+--- a/drivers/net/wireless/mediatek/mt76/scan.c
++++ b/drivers/net/wireless/mediatek/mt76/scan.c
+@@ -52,11 +52,6 @@ mt76_scan_send_probe(struct mt76_dev *dev, struct cfg80211_ssid *ssid)
+ ether_addr_copy(hdr->addr3, req->bssid);
+ }
+
+- info = IEEE80211_SKB_CB(skb);
+- if (req->no_cck)
+- info->flags |= IEEE80211_TX_CTL_NO_CCK_RATE;
+- info->control.flags |= IEEE80211_TX_CTRL_DONT_USE_RATE_MASK;
+-
+ if (req->ie_len)
+ skb_put_data(skb, req->ie, req->ie_len);
+
+@@ -64,10 +59,20 @@ mt76_scan_send_probe(struct mt76_dev *dev, struct cfg80211_ssid *ssid)
+ skb_set_queue_mapping(skb, IEEE80211_AC_VO);
+
+ rcu_read_lock();
+- if (ieee80211_tx_prepare_skb(phy->hw, vif, skb, band, NULL))
+- mt76_tx(phy, NULL, mvif->wcid, skb);
+- else
++
++ if (!ieee80211_tx_prepare_skb(phy->hw, vif, skb, band, NULL)) {
+ ieee80211_free_txskb(phy->hw, skb);
++ goto out;
++ }
++
++ info = IEEE80211_SKB_CB(skb);
++ if (req->no_cck)
++ info->flags |= IEEE80211_TX_CTL_NO_CCK_RATE;
++ info->control.flags |= IEEE80211_TX_CTRL_DONT_USE_RATE_MASK;
++
++ mt76_tx(phy, NULL, mvif->wcid, skb);
++
++out:
+ rcu_read_unlock();
+ }
+
+diff --git a/drivers/net/wireless/mediatek/mt76/tx.c b/drivers/net/wireless/mediatek/mt76/tx.c
+index af0c50c983ec11..513916469ca2f6 100644
+--- a/drivers/net/wireless/mediatek/mt76/tx.c
++++ b/drivers/net/wireless/mediatek/mt76/tx.c
+@@ -100,7 +100,8 @@ __mt76_tx_status_skb_done(struct mt76_dev *dev, struct sk_buff *skb, u8 flags,
+ return;
+
+ /* Tx status can be unreliable. if it fails, mark the frame as ACKed */
+- if (flags & MT_TX_CB_TXS_FAILED) {
++ if (flags & MT_TX_CB_TXS_FAILED &&
++ (dev->drv->drv_flags & MT_DRV_IGNORE_TXS_FAILED)) {
+ info->status.rates[0].count = 0;
+ info->status.rates[0].idx = -1;
+ info->flags |= IEEE80211_TX_STAT_ACK;
+diff --git a/drivers/net/wireless/realtek/rtl8xxxu/core.c b/drivers/net/wireless/realtek/rtl8xxxu/core.c
+index 4ce0c05c512910..569856ca677f62 100644
+--- a/drivers/net/wireless/realtek/rtl8xxxu/core.c
++++ b/drivers/net/wireless/realtek/rtl8xxxu/core.c
+@@ -860,9 +860,10 @@ rtl8xxxu_writeN(struct rtl8xxxu_priv *priv, u16 addr, u8 *buf, u16 len)
+ return len;
+
+ write_error:
+- dev_info(&udev->dev,
+- "%s: Failed to write block at addr: %04x size: %04x\n",
+- __func__, addr, blocksize);
++ if (rtl8xxxu_debug & RTL8XXXU_DEBUG_REG_WRITE)
++ dev_info(&udev->dev,
++ "%s: Failed to write block at addr: %04x size: %04x\n",
++ __func__, addr, blocksize);
+ return -EAGAIN;
+ }
+
+@@ -4064,8 +4065,14 @@ static int rtl8xxxu_init_device(struct ieee80211_hw *hw)
+ */
+ rtl8xxxu_write16(priv, REG_TRXFF_BNDY + 2, fops->trxff_boundary);
+
+- ret = rtl8xxxu_download_firmware(priv);
+- dev_dbg(dev, "%s: download_firmware %i\n", __func__, ret);
++ for (int retry = 5; retry >= 0 ; retry--) {
++ ret = rtl8xxxu_download_firmware(priv);
++ dev_dbg(dev, "%s: download_firmware %i\n", __func__, ret);
++ if (ret != -EAGAIN)
++ break;
++ if (retry)
++ dev_dbg(dev, "%s: retry firmware download\n", __func__);
++ }
+ if (ret)
+ goto exit;
+ ret = rtl8xxxu_start_firmware(priv);
+diff --git a/drivers/net/wireless/realtek/rtw88/fw.c b/drivers/net/wireless/realtek/rtw88/fw.c
+index 02389b7c687682..6b563ac489a745 100644
+--- a/drivers/net/wireless/realtek/rtw88/fw.c
++++ b/drivers/net/wireless/realtek/rtw88/fw.c
+@@ -735,6 +735,7 @@ void rtw_fw_send_ra_info(struct rtw_dev *rtwdev, struct rtw_sta_info *si,
+ {
+ u8 h2c_pkt[H2C_PKT_SIZE] = {0};
+ bool disable_pt = true;
++ u32 mask_hi;
+
+ SET_H2C_CMD_ID_CLASS(h2c_pkt, H2C_CMD_RA_INFO);
+
+@@ -755,6 +756,20 @@ void rtw_fw_send_ra_info(struct rtw_dev *rtwdev, struct rtw_sta_info *si,
+ si->init_ra_lv = 0;
+
+ rtw_fw_send_h2c_command(rtwdev, h2c_pkt);
++
++ if (rtwdev->chip->id != RTW_CHIP_TYPE_8814A)
++ return;
++
++ SET_H2C_CMD_ID_CLASS(h2c_pkt, H2C_CMD_RA_INFO_HI);
++
++ mask_hi = si->ra_mask >> 32;
++
++ SET_RA_INFO_RA_MASK0(h2c_pkt, (mask_hi & 0xff));
++ SET_RA_INFO_RA_MASK1(h2c_pkt, (mask_hi & 0xff00) >> 8);
++ SET_RA_INFO_RA_MASK2(h2c_pkt, (mask_hi & 0xff0000) >> 16);
++ SET_RA_INFO_RA_MASK3(h2c_pkt, (mask_hi & 0xff000000) >> 24);
++
++ rtw_fw_send_h2c_command(rtwdev, h2c_pkt);
+ }
+
+ void rtw_fw_media_status_report(struct rtw_dev *rtwdev, u8 mac_id, bool connect)
+diff --git a/drivers/net/wireless/realtek/rtw88/fw.h b/drivers/net/wireless/realtek/rtw88/fw.h
+index 404de1b0c407b4..48ad9ceab6ea12 100644
+--- a/drivers/net/wireless/realtek/rtw88/fw.h
++++ b/drivers/net/wireless/realtek/rtw88/fw.h
+@@ -557,6 +557,7 @@ static inline void rtw_h2c_pkt_set_header(u8 *h2c_pkt, u8 sub_id)
+ #define H2C_CMD_DEFAULT_PORT 0x2c
+ #define H2C_CMD_RA_INFO 0x40
+ #define H2C_CMD_RSSI_MONITOR 0x42
++#define H2C_CMD_RA_INFO_HI 0x46
+ #define H2C_CMD_BCN_FILTER_OFFLOAD_P0 0x56
+ #define H2C_CMD_BCN_FILTER_OFFLOAD_P1 0x57
+ #define H2C_CMD_WL_PHY_INFO 0x58
+diff --git a/drivers/net/wireless/realtek/rtw88/mac.c b/drivers/net/wireless/realtek/rtw88/mac.c
+index cae9cca6dca3d8..0491f501c13839 100644
+--- a/drivers/net/wireless/realtek/rtw88/mac.c
++++ b/drivers/net/wireless/realtek/rtw88/mac.c
+@@ -291,6 +291,7 @@ static int rtw_mac_power_switch(struct rtw_dev *rtwdev, bool pwr_on)
+ if (rtw_read8(rtwdev, REG_CR) == 0xea)
+ cur_pwr = false;
+ else if (rtw_hci_type(rtwdev) == RTW_HCI_TYPE_USB &&
++ chip->id != RTW_CHIP_TYPE_8814A &&
+ (rtw_read8(rtwdev, REG_SYS_STATUS1 + 1) & BIT(0)))
+ cur_pwr = false;
+ else
+@@ -784,7 +785,8 @@ static int __rtw_download_firmware(struct rtw_dev *rtwdev,
+ if (!check_firmware_size(data, size))
+ return -EINVAL;
+
+- if (!ltecoex_read_reg(rtwdev, 0x38, &ltecoex_bckp))
++ if (rtwdev->chip->ltecoex_addr &&
++ !ltecoex_read_reg(rtwdev, 0x38, &ltecoex_bckp))
+ return -EBUSY;
+
+ wlan_cpu_enable(rtwdev, false);
+@@ -802,7 +804,8 @@ static int __rtw_download_firmware(struct rtw_dev *rtwdev,
+
+ wlan_cpu_enable(rtwdev, true);
+
+- if (!ltecoex_reg_write(rtwdev, 0x38, ltecoex_bckp)) {
++ if (rtwdev->chip->ltecoex_addr &&
++ !ltecoex_reg_write(rtwdev, 0x38, ltecoex_bckp)) {
+ ret = -EBUSY;
+ goto dlfw_fail;
+ }
+diff --git a/drivers/net/wireless/realtek/rtw88/main.c b/drivers/net/wireless/realtek/rtw88/main.c
+index 0cee0fd8c0ef07..9b9e76eebce958 100644
+--- a/drivers/net/wireless/realtek/rtw88/main.c
++++ b/drivers/net/wireless/realtek/rtw88/main.c
+@@ -1234,7 +1234,9 @@ void rtw_update_sta_info(struct rtw_dev *rtwdev, struct rtw_sta_info *si,
+ if (sta->deflink.vht_cap.cap & IEEE80211_VHT_CAP_RXLDPC)
+ ldpc_en = VHT_LDPC_EN;
+ } else if (sta->deflink.ht_cap.ht_supported) {
+- ra_mask |= (sta->deflink.ht_cap.mcs.rx_mask[1] << 20) |
++ ra_mask |= ((u64)sta->deflink.ht_cap.mcs.rx_mask[3] << 36) |
++ ((u64)sta->deflink.ht_cap.mcs.rx_mask[2] << 28) |
++ (sta->deflink.ht_cap.mcs.rx_mask[1] << 20) |
+ (sta->deflink.ht_cap.mcs.rx_mask[0] << 12);
+ if (sta->deflink.ht_cap.cap & IEEE80211_HT_CAP_RX_STBC)
+ stbc_en = HT_STBC_EN;
+@@ -1244,6 +1246,9 @@ void rtw_update_sta_info(struct rtw_dev *rtwdev, struct rtw_sta_info *si,
+
+ if (efuse->hw_cap.nss == 1 || rtwdev->hal.txrx_1ss)
+ ra_mask &= RA_MASK_VHT_RATES_1SS | RA_MASK_HT_RATES_1SS;
++ else if (efuse->hw_cap.nss == 2)
++ ra_mask &= RA_MASK_VHT_RATES_2SS | RA_MASK_HT_RATES_2SS |
++ RA_MASK_VHT_RATES_1SS | RA_MASK_HT_RATES_1SS;
+
+ if (hal->current_band_type == RTW_BAND_5G) {
+ ra_mask |= (u64)sta->deflink.supp_rates[NL80211_BAND_5GHZ] << 4;
+@@ -1302,10 +1307,9 @@ void rtw_update_sta_info(struct rtw_dev *rtwdev, struct rtw_sta_info *si,
+ break;
+ }
+
+- if (sta->deflink.vht_cap.vht_supported && ra_mask & 0xffc00000)
+- tx_num = 2;
+- else if (sta->deflink.ht_cap.ht_supported && ra_mask & 0xfff00000)
+- tx_num = 2;
++ if (sta->deflink.vht_cap.vht_supported ||
++ sta->deflink.ht_cap.ht_supported)
++ tx_num = efuse->hw_cap.nss;
+
+ rate_id = get_rate_id(wireless_set, bw_mode, tx_num);
+
+@@ -1561,6 +1565,7 @@ static void rtw_init_ht_cap(struct rtw_dev *rtwdev,
+ {
+ const struct rtw_chip_info *chip = rtwdev->chip;
+ struct rtw_efuse *efuse = &rtwdev->efuse;
++ int i;
+
+ ht_cap->ht_supported = true;
+ ht_cap->cap = 0;
+@@ -1580,25 +1585,20 @@ static void rtw_init_ht_cap(struct rtw_dev *rtwdev,
+ ht_cap->ampdu_factor = IEEE80211_HT_MAX_AMPDU_64K;
+ ht_cap->ampdu_density = chip->ampdu_density;
+ ht_cap->mcs.tx_params = IEEE80211_HT_MCS_TX_DEFINED;
+- if (efuse->hw_cap.nss > 1) {
+- ht_cap->mcs.rx_mask[0] = 0xFF;
+- ht_cap->mcs.rx_mask[1] = 0xFF;
+- ht_cap->mcs.rx_mask[4] = 0x01;
+- ht_cap->mcs.rx_highest = cpu_to_le16(300);
+- } else {
+- ht_cap->mcs.rx_mask[0] = 0xFF;
+- ht_cap->mcs.rx_mask[1] = 0x00;
+- ht_cap->mcs.rx_mask[4] = 0x01;
+- ht_cap->mcs.rx_highest = cpu_to_le16(150);
+- }
++
++ for (i = 0; i < efuse->hw_cap.nss; i++)
++ ht_cap->mcs.rx_mask[i] = 0xFF;
++ ht_cap->mcs.rx_mask[4] = 0x01;
++ ht_cap->mcs.rx_highest = cpu_to_le16(150 * efuse->hw_cap.nss);
+ }
+
+ static void rtw_init_vht_cap(struct rtw_dev *rtwdev,
+ struct ieee80211_sta_vht_cap *vht_cap)
+ {
+ struct rtw_efuse *efuse = &rtwdev->efuse;
+- u16 mcs_map;
++ u16 mcs_map = 0;
+ __le16 highest;
++ int i;
+
+ if (efuse->hw_cap.ptcl != EFUSE_HW_CAP_IGNORE &&
+ efuse->hw_cap.ptcl != EFUSE_HW_CAP_PTCL_VHT)
+@@ -1621,21 +1621,15 @@ static void rtw_init_vht_cap(struct rtw_dev *rtwdev,
+ if (rtw_chip_has_rx_ldpc(rtwdev))
+ vht_cap->cap |= IEEE80211_VHT_CAP_RXLDPC;
+
+- mcs_map = IEEE80211_VHT_MCS_SUPPORT_0_9 << 0 |
+- IEEE80211_VHT_MCS_NOT_SUPPORTED << 4 |
+- IEEE80211_VHT_MCS_NOT_SUPPORTED << 6 |
+- IEEE80211_VHT_MCS_NOT_SUPPORTED << 8 |
+- IEEE80211_VHT_MCS_NOT_SUPPORTED << 10 |
+- IEEE80211_VHT_MCS_NOT_SUPPORTED << 12 |
+- IEEE80211_VHT_MCS_NOT_SUPPORTED << 14;
+- if (efuse->hw_cap.nss > 1) {
+- highest = cpu_to_le16(780);
+- mcs_map |= IEEE80211_VHT_MCS_SUPPORT_0_9 << 2;
+- } else {
+- highest = cpu_to_le16(390);
+- mcs_map |= IEEE80211_VHT_MCS_NOT_SUPPORTED << 2;
++ for (i = 0; i < 8; i++) {
++ if (i < efuse->hw_cap.nss)
++ mcs_map |= IEEE80211_VHT_MCS_SUPPORT_0_9 << (i * 2);
++ else
++ mcs_map |= IEEE80211_VHT_MCS_NOT_SUPPORTED << (i * 2);
+ }
+
++ highest = cpu_to_le16(390 * efuse->hw_cap.nss);
++
+ vht_cap->vht_mcs.rx_mcs_map = cpu_to_le16(mcs_map);
+ vht_cap->vht_mcs.tx_mcs_map = cpu_to_le16(mcs_map);
+ vht_cap->vht_mcs.rx_highest = highest;
+diff --git a/drivers/net/wireless/realtek/rtw88/main.h b/drivers/net/wireless/realtek/rtw88/main.h
+index 62cd4c52630192..a61ea853f98d9e 100644
+--- a/drivers/net/wireless/realtek/rtw88/main.h
++++ b/drivers/net/wireless/realtek/rtw88/main.h
+@@ -191,6 +191,7 @@ enum rtw_chip_type {
+ RTW_CHIP_TYPE_8703B,
+ RTW_CHIP_TYPE_8821A,
+ RTW_CHIP_TYPE_8812A,
++ RTW_CHIP_TYPE_8814A,
+ };
+
+ enum rtw_tx_queue_type {
+diff --git a/drivers/net/wireless/realtek/rtw88/reg.h b/drivers/net/wireless/realtek/rtw88/reg.h
+index e438405fba566b..209b6fc08a73ea 100644
+--- a/drivers/net/wireless/realtek/rtw88/reg.h
++++ b/drivers/net/wireless/realtek/rtw88/reg.h
+@@ -130,6 +130,7 @@
+ #define BIT_SHIFT_ROM_PGE 16
+ #define BIT_FW_INIT_RDY BIT(15)
+ #define BIT_FW_DW_RDY BIT(14)
++#define BIT_CPU_CLK_SEL (BIT(12) | BIT(13))
+ #define BIT_RPWM_TOGGLE BIT(7)
+ #define BIT_RAM_DL_SEL BIT(7) /* legacy only */
+ #define BIT_DMEM_CHKSUM_OK BIT(6)
+@@ -147,7 +148,7 @@
+ BIT_CHECK_SUM_OK)
+ #define FW_READY_LEGACY (BIT_MCUFWDL_RDY | BIT_FWDL_CHK_RPT | \
+ BIT_WINTINI_RDY | BIT_RAM_DL_SEL)
+-#define FW_READY_MASK 0xffff
++#define FW_READY_MASK (0xffff & ~BIT_CPU_CLK_SEL)
+
+ #define REG_MCU_TST_CFG 0x84
+ #define VAL_FW_TRIGGER 0x1
+diff --git a/drivers/net/wireless/realtek/rtw88/rtw8822b.c b/drivers/net/wireless/realtek/rtw88/rtw8822b.c
+index 7f03903ddf4bbc..23a29019752daf 100644
+--- a/drivers/net/wireless/realtek/rtw88/rtw8822b.c
++++ b/drivers/net/wireless/realtek/rtw88/rtw8822b.c
+@@ -935,11 +935,11 @@ static void query_phy_status(struct rtw_dev *rtwdev, u8 *phy_status,
+ }
+
+ static void
+-rtw8822b_set_tx_power_index_by_rate(struct rtw_dev *rtwdev, u8 path, u8 rs)
++rtw8822b_set_tx_power_index_by_rate(struct rtw_dev *rtwdev, u8 path,
++ u8 rs, u32 *phy_pwr_idx)
+ {
+ struct rtw_hal *hal = &rtwdev->hal;
+ static const u32 offset_txagc[2] = {0x1d00, 0x1d80};
+- static u32 phy_pwr_idx;
+ u8 rate, rate_idx, pwr_index, shift;
+ int j;
+
+@@ -947,12 +947,12 @@ rtw8822b_set_tx_power_index_by_rate(struct rtw_dev *rtwdev, u8 path, u8 rs)
+ rate = rtw_rate_section[rs][j];
+ pwr_index = hal->tx_pwr_tbl[path][rate];
+ shift = rate & 0x3;
+- phy_pwr_idx |= ((u32)pwr_index << (shift * 8));
++ *phy_pwr_idx |= ((u32)pwr_index << (shift * 8));
+ if (shift == 0x3) {
+ rate_idx = rate & 0xfc;
+ rtw_write32(rtwdev, offset_txagc[path] + rate_idx,
+- phy_pwr_idx);
+- phy_pwr_idx = 0;
++ *phy_pwr_idx);
++ *phy_pwr_idx = 0;
+ }
+ }
+ }
+@@ -960,11 +960,13 @@ rtw8822b_set_tx_power_index_by_rate(struct rtw_dev *rtwdev, u8 path, u8 rs)
+ static void rtw8822b_set_tx_power_index(struct rtw_dev *rtwdev)
+ {
+ struct rtw_hal *hal = &rtwdev->hal;
++ u32 phy_pwr_idx = 0;
+ int rs, path;
+
+ for (path = 0; path < hal->rf_path_num; path++) {
+ for (rs = 0; rs < RTW_RATE_SECTION_MAX; rs++)
+- rtw8822b_set_tx_power_index_by_rate(rtwdev, path, rs);
++ rtw8822b_set_tx_power_index_by_rate(rtwdev, path, rs,
++ &phy_pwr_idx);
+ }
+ }
+
+diff --git a/drivers/net/wireless/realtek/rtw88/util.c b/drivers/net/wireless/realtek/rtw88/util.c
+index e222d3c01a77ec..66819f69440550 100644
+--- a/drivers/net/wireless/realtek/rtw88/util.c
++++ b/drivers/net/wireless/realtek/rtw88/util.c
+@@ -101,7 +101,8 @@ void rtw_desc_to_mcsrate(u16 rate, u8 *mcs, u8 *nss)
+ *nss = 4;
+ *mcs = rate - DESC_RATEVHT4SS_MCS0;
+ } else if (rate >= DESC_RATEMCS0 &&
+- rate <= DESC_RATEMCS15) {
++ rate <= DESC_RATEMCS31) {
++ *nss = 0;
+ *mcs = rate - DESC_RATEMCS0;
+ }
+ }
+diff --git a/drivers/net/wireless/realtek/rtw89/coex.c b/drivers/net/wireless/realtek/rtw89/coex.c
+index 68316d44b20430..86e8f78ec94a4b 100644
+--- a/drivers/net/wireless/realtek/rtw89/coex.c
++++ b/drivers/net/wireless/realtek/rtw89/coex.c
+@@ -89,10 +89,10 @@ static const struct rtw89_btc_fbtc_slot s_def[] = {
+ [CXST_B4] = __DEF_FBTC_SLOT(50, 0xe5555555, SLOT_MIX),
+ [CXST_LK] = __DEF_FBTC_SLOT(20, 0xea5a5a5a, SLOT_ISO),
+ [CXST_BLK] = __DEF_FBTC_SLOT(500, 0x55555555, SLOT_MIX),
+- [CXST_E2G] = __DEF_FBTC_SLOT(0, 0xea5a5a5a, SLOT_MIX),
+- [CXST_E5G] = __DEF_FBTC_SLOT(0, 0xffffffff, SLOT_ISO),
++ [CXST_E2G] = __DEF_FBTC_SLOT(5, 0xea5a5a5a, SLOT_MIX),
++ [CXST_E5G] = __DEF_FBTC_SLOT(5, 0xffffffff, SLOT_ISO),
+ [CXST_EBT] = __DEF_FBTC_SLOT(5, 0xe5555555, SLOT_MIX),
+- [CXST_ENULL] = __DEF_FBTC_SLOT(0, 0xaaaaaaaa, SLOT_ISO),
++ [CXST_ENULL] = __DEF_FBTC_SLOT(5, 0xaaaaaaaa, SLOT_ISO),
+ [CXST_WLK] = __DEF_FBTC_SLOT(250, 0xea5a5a5a, SLOT_MIX),
+ [CXST_W1FDD] = __DEF_FBTC_SLOT(50, 0xffffffff, SLOT_ISO),
+ [CXST_B1FDD] = __DEF_FBTC_SLOT(50, 0xffffdfff, SLOT_ISO),
+@@ -1372,11 +1372,9 @@ static u32 _chk_btc_report(struct rtw89_dev *rtwdev,
+ } else if (ver->fcxbtcrpt == 8) {
+ pfinfo = &pfwinfo->rpt_ctrl.finfo.v8;
+ pcinfo->req_len = sizeof(pfwinfo->rpt_ctrl.finfo.v8);
+- break;
+ } else if (ver->fcxbtcrpt == 7) {
+ pfinfo = &pfwinfo->rpt_ctrl.finfo.v7;
+ pcinfo->req_len = sizeof(pfwinfo->rpt_ctrl.finfo.v7);
+- break;
+ } else {
+ goto err;
+ }
+@@ -4589,17 +4587,16 @@ static void _action_bt_a2dp(struct rtw89_dev *rtwdev)
+
+ _set_ant(rtwdev, NM_EXEC, BTC_PHY_ALL, BTC_ANT_W2G);
+
++ if (a2dp.vendor_id == 0x4c || dm->leak_ap || bt_linfo->slave_role)
++ dm->slot_dur[CXST_W1] = 20;
++ else
++ dm->slot_dur[CXST_W1] = 40;
++
++ dm->slot_dur[CXST_B1] = BTC_B1_MAX;
++
+ switch (btc->cx.state_map) {
+ case BTC_WBUSY_BNOSCAN: /* wl-busy + bt-A2DP */
+- if (a2dp.vendor_id == 0x4c || dm->leak_ap) {
+- dm->slot_dur[CXST_W1] = 40;
+- dm->slot_dur[CXST_B1] = 200;
+- _set_policy(rtwdev,
+- BTC_CXP_PAUTO_TDW1B1, BTC_ACT_BT_A2DP);
+- } else {
+- _set_policy(rtwdev,
+- BTC_CXP_PAUTO_TD50B1, BTC_ACT_BT_A2DP);
+- }
++ _set_policy(rtwdev, BTC_CXP_PAUTO_TDW1B1, BTC_ACT_BT_A2DP);
+ break;
+ case BTC_WBUSY_BSCAN: /* wl-busy + bt-inq + bt-A2DP */
+ _set_policy(rtwdev, BTC_CXP_PAUTO2_TD3050, BTC_ACT_BT_A2DP);
+@@ -4609,15 +4606,10 @@ static void _action_bt_a2dp(struct rtw89_dev *rtwdev)
+ break;
+ case BTC_WSCAN_BNOSCAN: /* wl-scan + bt-A2DP */
+ case BTC_WLINKING: /* wl-connecting + bt-A2DP */
+- if (a2dp.vendor_id == 0x4c || dm->leak_ap) {
+- dm->slot_dur[CXST_W1] = 40;
+- dm->slot_dur[CXST_B1] = 200;
+- _set_policy(rtwdev, BTC_CXP_AUTO_TDW1B1,
+- BTC_ACT_BT_A2DP);
+- } else {
+- _set_policy(rtwdev, BTC_CXP_AUTO_TD50B1,
+- BTC_ACT_BT_A2DP);
+- }
++ if (btc->cx.wl.rfk_info.con_rfk)
++ _set_policy(rtwdev, BTC_CXP_OFF_BT, BTC_ACT_BT_A2DP);
++ else
++ _set_policy(rtwdev, BTC_CXP_AUTO_TDW1B1, BTC_ACT_BT_A2DP);
+ break;
+ case BTC_WIDLE: /* wl-idle + bt-A2DP */
+ _set_policy(rtwdev, BTC_CXP_AUTO_TD20B1, BTC_ACT_BT_A2DP);
+@@ -4645,7 +4637,10 @@ static void _action_bt_a2dpsink(struct rtw89_dev *rtwdev)
+ _set_policy(rtwdev, BTC_CXP_FIX_TD2060, BTC_ACT_BT_A2DPSINK);
+ break;
+ case BTC_WLINKING: /* wl-connecting + bt-A2dp_Sink */
+- _set_policy(rtwdev, BTC_CXP_FIX_TD3030, BTC_ACT_BT_A2DPSINK);
++ if (btc->cx.wl.rfk_info.con_rfk)
++ _set_policy(rtwdev, BTC_CXP_OFF_BT, BTC_ACT_BT_A2DPSINK);
++ else
++ _set_policy(rtwdev, BTC_CXP_FIX_TD3030, BTC_ACT_BT_A2DPSINK);
+ break;
+ case BTC_WIDLE: /* wl-idle + bt-A2dp_Sink */
+ _set_policy(rtwdev, BTC_CXP_FIX_TD2080, BTC_ACT_BT_A2DPSINK);
+@@ -4699,21 +4694,20 @@ static void _action_bt_a2dp_hid(struct rtw89_dev *rtwdev)
+
+ _set_ant(rtwdev, NM_EXEC, BTC_PHY_ALL, BTC_ANT_W2G);
+
++ if (a2dp.vendor_id == 0x4c || dm->leak_ap || bt_linfo->slave_role)
++ dm->slot_dur[CXST_W1] = 20;
++ else
++ dm->slot_dur[CXST_W1] = 40;
++
++ dm->slot_dur[CXST_B1] = BTC_B1_MAX;
++
+ switch (btc->cx.state_map) {
+ case BTC_WBUSY_BNOSCAN: /* wl-busy + bt-A2DP+HID */
+ case BTC_WIDLE: /* wl-idle + bt-A2DP */
+- if (a2dp.vendor_id == 0x4c || dm->leak_ap) {
+- dm->slot_dur[CXST_W1] = 40;
+- dm->slot_dur[CXST_B1] = 200;
+- _set_policy(rtwdev,
+- BTC_CXP_PAUTO_TDW1B1, BTC_ACT_BT_A2DP_HID);
+- } else {
+- _set_policy(rtwdev,
+- BTC_CXP_PAUTO_TD50B1, BTC_ACT_BT_A2DP_HID);
+- }
++ _set_policy(rtwdev, BTC_CXP_PAUTO_TDW1B1, BTC_ACT_BT_A2DP_HID);
+ break;
+ case BTC_WBUSY_BSCAN: /* wl-busy + bt-inq + bt-A2DP+HID */
+- _set_policy(rtwdev, BTC_CXP_PAUTO2_TD3050, BTC_ACT_BT_A2DP_HID);
++ _set_policy(rtwdev, BTC_CXP_PAUTO2_TD3070, BTC_ACT_BT_A2DP_HID);
+ break;
+
+ case BTC_WSCAN_BSCAN: /* wl-scan + bt-inq + bt-A2DP+HID */
+@@ -4721,15 +4715,10 @@ static void _action_bt_a2dp_hid(struct rtw89_dev *rtwdev)
+ break;
+ case BTC_WSCAN_BNOSCAN: /* wl-scan + bt-A2DP+HID */
+ case BTC_WLINKING: /* wl-connecting + bt-A2DP+HID */
+- if (a2dp.vendor_id == 0x4c || dm->leak_ap) {
+- dm->slot_dur[CXST_W1] = 40;
+- dm->slot_dur[CXST_B1] = 200;
+- _set_policy(rtwdev, BTC_CXP_AUTO_TDW1B1,
+- BTC_ACT_BT_A2DP_HID);
+- } else {
+- _set_policy(rtwdev, BTC_CXP_AUTO_TD50B1,
+- BTC_ACT_BT_A2DP_HID);
+- }
++ if (btc->cx.wl.rfk_info.con_rfk)
++ _set_policy(rtwdev, BTC_CXP_OFF_BT, BTC_ACT_BT_A2DP_HID);
++ else
++ _set_policy(rtwdev, BTC_CXP_AUTO_TDW1B1, BTC_ACT_BT_A2DP_HID);
+ break;
+ }
+ }
+@@ -5408,7 +5397,8 @@ static void _action_wl_scan(struct rtw89_dev *rtwdev)
+ struct rtw89_btc_wl_info *wl = &btc->cx.wl;
+ struct rtw89_btc_wl_dbcc_info *wl_dinfo = &wl->dbcc_info;
+
+- if (RTW89_CHK_FW_FEATURE(SCAN_OFFLOAD, &rtwdev->fw)) {
++ if (btc->cx.state_map != BTC_WLINKING &&
++ RTW89_CHK_FW_FEATURE(SCAN_OFFLOAD, &rtwdev->fw)) {
+ _action_wl_25g_mcc(rtwdev);
+ rtw89_debug(rtwdev, RTW89_DBG_BTC, "[BTC], Scan offload!\n");
+ } else if (rtwdev->dbcc_en) {
+@@ -7002,7 +6992,7 @@ void _run_coex(struct rtw89_dev *rtwdev, enum btc_reason_and_action reason)
+ goto exit;
+ }
+
+- if (wl->status.val & btc_scanning_map.val) {
++ if (wl->status.val & btc_scanning_map.val && !wl->rfk_info.con_rfk) {
+ _action_wl_scan(rtwdev);
+ bt->scan_rx_low_pri = true;
+ goto exit;
+@@ -7223,6 +7213,8 @@ void rtw89_btc_ntfy_scan_finish(struct rtw89_dev *rtwdev, u8 phy_idx)
+ _fw_set_drv_info(rtwdev, CXDRVINFO_DBCC);
+ }
+
++ btc->dm.tdma_instant_excute = 1;
++
+ _run_coex(rtwdev, BTC_RSN_NTFY_SCAN_FINISH);
+ }
+
+@@ -7671,7 +7663,8 @@ void rtw89_btc_ntfy_role_info(struct rtw89_dev *rtwdev,
+ else
+ wl->status.map.connecting = 0;
+
+- if (state == BTC_ROLE_MSTS_STA_DIS_CONN)
++ if (state == BTC_ROLE_MSTS_STA_DIS_CONN ||
++ state == BTC_ROLE_MSTS_STA_CONN_END)
+ wl->status.map._4way = false;
+
+ _run_coex(rtwdev, BTC_RSN_NTFY_ROLE_INFO);
+@@ -8115,6 +8108,7 @@ void rtw89_btc_c2h_handle(struct rtw89_dev *rtwdev, struct sk_buff *skb,
+ return;
+
+ func = rtw89_btc_c2h_get_index_by_ver(rtwdev, func);
++ pfwinfo->cnt_c2h++;
+
+ switch (func) {
+ case BTF_EVNT_BUF_OVERFLOW:
+@@ -11037,3 +11031,24 @@ void rtw89_coex_recognize_ver(struct rtw89_dev *rtwdev)
+ rtw89_debug(rtwdev, RTW89_DBG_BTC, "[BTC] use version def[%d] = 0x%08x\n",
+ (int)(btc->ver - rtw89_btc_ver_defs), btc->ver->fw_ver_code);
+ }
++
++void rtw89_btc_ntfy_preserve_bt_time(struct rtw89_dev *rtwdev, u32 ms)
++{
++ struct rtw89_btc_bt_link_info *bt_linfo = &rtwdev->btc.cx.bt.link_info;
++ struct rtw89_btc_bt_a2dp_desc a2dp = bt_linfo->a2dp_desc;
++
++ if (test_bit(RTW89_FLAG_SER_HANDLING, rtwdev->flags))
++ return;
++
++ if (!a2dp.exist)
++ return;
++
++ fsleep(ms * 1000);
++}
++EXPORT_SYMBOL(rtw89_btc_ntfy_preserve_bt_time);
++
++void rtw89_btc_ntfy_conn_rfk(struct rtw89_dev *rtwdev, bool state)
++{
++ rtwdev->btc.cx.wl.rfk_info.con_rfk = state;
++}
++EXPORT_SYMBOL(rtw89_btc_ntfy_conn_rfk);
+diff --git a/drivers/net/wireless/realtek/rtw89/coex.h b/drivers/net/wireless/realtek/rtw89/coex.h
+index dbdb56e063ef03..757d03675cf4e6 100644
+--- a/drivers/net/wireless/realtek/rtw89/coex.h
++++ b/drivers/net/wireless/realtek/rtw89/coex.h
+@@ -290,6 +290,8 @@ void rtw89_coex_power_on(struct rtw89_dev *rtwdev);
+ void rtw89_btc_set_policy(struct rtw89_dev *rtwdev, u16 policy_type);
+ void rtw89_btc_set_policy_v1(struct rtw89_dev *rtwdev, u16 policy_type);
+ void rtw89_coex_recognize_ver(struct rtw89_dev *rtwdev);
++void rtw89_btc_ntfy_preserve_bt_time(struct rtw89_dev *rtwdev, u32 ms);
++void rtw89_btc_ntfy_conn_rfk(struct rtw89_dev *rtwdev, bool state);
+
+ static inline u8 rtw89_btc_phymap(struct rtw89_dev *rtwdev,
+ enum rtw89_phy_idx phy_idx,
+diff --git a/drivers/net/wireless/realtek/rtw89/core.c b/drivers/net/wireless/realtek/rtw89/core.c
+index 85f739f1173d8d..422cc3867f3bc0 100644
+--- a/drivers/net/wireless/realtek/rtw89/core.c
++++ b/drivers/net/wireless/realtek/rtw89/core.c
+@@ -2381,6 +2381,49 @@ static void rtw89_core_validate_rx_signal(struct ieee80211_rx_status *rx_status)
+ rx_status->flag |= RX_FLAG_NO_SIGNAL_VAL;
+ }
+
++static void rtw89_core_update_rx_freq_from_ie(struct rtw89_dev *rtwdev,
++ struct sk_buff *skb,
++ struct ieee80211_rx_status *rx_status)
++{
++ struct ieee80211_mgmt *mgmt = (struct ieee80211_mgmt *)skb->data;
++ size_t hdr_len, ielen;
++ u8 *variable;
++ int chan;
++
++ if (!rtwdev->chip->rx_freq_frome_ie)
++ return;
++
++ if (!rtwdev->scanning)
++ return;
++
++ if (ieee80211_is_beacon(mgmt->frame_control)) {
++ variable = mgmt->u.beacon.variable;
++ hdr_len = offsetof(struct ieee80211_mgmt,
++ u.beacon.variable);
++ } else if (ieee80211_is_probe_resp(mgmt->frame_control)) {
++ variable = mgmt->u.probe_resp.variable;
++ hdr_len = offsetof(struct ieee80211_mgmt,
++ u.probe_resp.variable);
++ } else {
++ return;
++ }
++
++ if (skb->len > hdr_len)
++ ielen = skb->len - hdr_len;
++ else
++ return;
++
++ /* The parsing code for both 2GHz and 5GHz bands is the same in this
++ * function.
++ */
++ chan = cfg80211_get_ies_channel_number(variable, ielen, NL80211_BAND_2GHZ);
++ if (chan == -1)
++ return;
++
++ rx_status->band = chan > 14 ? RTW89_BAND_5G : RTW89_BAND_2G;
++ rx_status->freq = ieee80211_channel_to_frequency(chan, rx_status->band);
++}
++
+ static void rtw89_core_rx_to_mac80211(struct rtw89_dev *rtwdev,
+ struct rtw89_rx_phy_ppdu *phy_ppdu,
+ struct rtw89_rx_desc_info *desc_info,
+@@ -2398,6 +2441,7 @@ static void rtw89_core_rx_to_mac80211(struct rtw89_dev *rtwdev,
+ rtw89_core_update_rx_status_by_ppdu(rtwdev, rx_status, phy_ppdu);
+ rtw89_core_update_radiotap(rtwdev, skb_ppdu, rx_status);
+ rtw89_core_validate_rx_signal(rx_status);
++ rtw89_core_update_rx_freq_from_ie(rtwdev, skb_ppdu, rx_status);
+
+ /* In low power mode, it does RX in thread context. */
+ local_bh_disable();
+@@ -5042,8 +5086,6 @@ static int rtw89_chip_efuse_info_setup(struct rtw89_dev *rtwdev)
+
+ rtw89_hci_mac_pre_deinit(rtwdev);
+
+- rtw89_mac_pwr_off(rtwdev);
+-
+ return 0;
+ }
+
+@@ -5124,36 +5166,45 @@ int rtw89_chip_info_setup(struct rtw89_dev *rtwdev)
+
+ rtw89_read_chip_ver(rtwdev);
+
++ ret = rtw89_mac_pwr_on(rtwdev);
++ if (ret) {
++ rtw89_err(rtwdev, "failed to power on\n");
++ return ret;
++ }
++
+ ret = rtw89_wait_firmware_completion(rtwdev);
+ if (ret) {
+ rtw89_err(rtwdev, "failed to wait firmware completion\n");
+- return ret;
++ goto out;
+ }
+
+ ret = rtw89_fw_recognize(rtwdev);
+ if (ret) {
+ rtw89_err(rtwdev, "failed to recognize firmware\n");
+- return ret;
++ goto out;
+ }
+
+ ret = rtw89_chip_efuse_info_setup(rtwdev);
+ if (ret)
+- return ret;
++ goto out;
+
+ ret = rtw89_fw_recognize_elements(rtwdev);
+ if (ret) {
+ rtw89_err(rtwdev, "failed to recognize firmware elements\n");
+- return ret;
++ goto out;
+ }
+
+ ret = rtw89_chip_board_info_setup(rtwdev);
+ if (ret)
+- return ret;
++ goto out;
+
+ rtw89_core_setup_rfe_parms(rtwdev);
+ rtwdev->ps_mode = rtw89_update_ps_mode(rtwdev);
+
+- return 0;
++out:
++ rtw89_mac_pwr_off(rtwdev);
++
++ return ret;
+ }
+ EXPORT_SYMBOL(rtw89_chip_info_setup);
+
+diff --git a/drivers/net/wireless/realtek/rtw89/core.h b/drivers/net/wireless/realtek/rtw89/core.h
+index 93e41def81b409..963f0046f0bc35 100644
+--- a/drivers/net/wireless/realtek/rtw89/core.h
++++ b/drivers/net/wireless/realtek/rtw89/core.h
+@@ -17,6 +17,7 @@ struct rtw89_dev;
+ struct rtw89_pci_info;
+ struct rtw89_mac_gen_def;
+ struct rtw89_phy_gen_def;
++struct rtw89_fw_blacklist;
+ struct rtw89_efuse_block_cfg;
+ struct rtw89_h2c_rf_tssi;
+ struct rtw89_fw_txpwr_track_cfg;
+@@ -1761,7 +1762,8 @@ struct rtw89_btc_wl_rfk_info {
+ u32 phy_map: 2;
+ u32 band: 2;
+ u32 type: 8;
+- u32 rsvd: 14;
++ u32 con_rfk: 1;
++ u32 rsvd: 13;
+
+ u32 start_time;
+ u32 proc_time;
+@@ -4251,6 +4253,7 @@ struct rtw89_chip_info {
+ bool try_ce_fw;
+ u8 bbmcu_nr;
+ u32 needed_fw_elms;
++ const struct rtw89_fw_blacklist *fw_blacklist;
+ u32 fifo_size;
+ bool small_fifo_size;
+ u32 dle_scc_rsvd_size;
+@@ -4273,6 +4276,7 @@ struct rtw89_chip_info {
+ bool support_ant_gain;
+ bool ul_tb_waveform_ctrl;
+ bool ul_tb_pwr_diff;
++ bool rx_freq_frome_ie;
+ bool hw_sec_hdr;
+ bool hw_mgmt_tx_encrypt;
+ u8 rf_path_num;
+diff --git a/drivers/net/wireless/realtek/rtw89/fw.c b/drivers/net/wireless/realtek/rtw89/fw.c
+index 2f3869c7006967..1fbcba718998f6 100644
+--- a/drivers/net/wireless/realtek/rtw89/fw.c
++++ b/drivers/net/wireless/realtek/rtw89/fw.c
+@@ -38,6 +38,16 @@ struct rtw89_arp_rsp {
+
+ static const u8 mss_signature[] = {0x4D, 0x53, 0x53, 0x4B, 0x50, 0x4F, 0x4F, 0x4C};
+
++const struct rtw89_fw_blacklist rtw89_fw_blacklist_default = {
++ .ver = 0x00,
++ .list = {0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0,
++ 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0,
++ 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0,
++ 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0,
++ },
++};
++EXPORT_SYMBOL(rtw89_fw_blacklist_default);
++
+ union rtw89_fw_element_arg {
+ size_t offset;
+ enum rtw89_rf_path rf_path;
+@@ -314,7 +324,7 @@ static int __parse_formatted_mssc(struct rtw89_dev *rtwdev,
+ if (!sec->secure_boot)
+ goto out;
+
+- sb_sel_ver = le32_to_cpu(section_content->sb_sel_ver.v);
++	sb_sel_ver = get_unaligned_le32(&section_content->sb_sel_ver.v);
+ if (sb_sel_ver && sb_sel_ver != sec->sb_sel_mgn)
+ goto ignore;
+
+@@ -344,6 +354,46 @@ static int __parse_formatted_mssc(struct rtw89_dev *rtwdev,
+ return 0;
+ }
+
++static int __check_secure_blacklist(struct rtw89_dev *rtwdev,
++ struct rtw89_fw_bin_info *info,
++ struct rtw89_fw_hdr_section_info *section_info,
++ const void *content)
++{
++ const struct rtw89_fw_blacklist *chip_blacklist = rtwdev->chip->fw_blacklist;
++ const union rtw89_fw_section_mssc_content *section_content = content;
++ struct rtw89_fw_secure *sec = &rtwdev->fw.sec;
++ u8 byte_idx;
++ u8 bit_mask;
++
++ if (!sec->secure_boot)
++ return 0;
++
++ if (!info->secure_section_exist || section_info->ignore)
++ return 0;
++
++ if (!chip_blacklist) {
++ rtw89_err(rtwdev, "chip no blacklist for secure firmware\n");
++ return -ENOENT;
++ }
++
++ byte_idx = section_content->blacklist.bit_in_chip_list >> 3;
++ bit_mask = BIT(section_content->blacklist.bit_in_chip_list & 0x7);
++
++ if (section_content->blacklist.ver > chip_blacklist->ver) {
++ rtw89_err(rtwdev, "chip blacklist out of date (%u, %u)\n",
++ section_content->blacklist.ver, chip_blacklist->ver);
++ return -EINVAL;
++ }
++
++ if (chip_blacklist->list[byte_idx] & bit_mask) {
++ rtw89_err(rtwdev, "firmware %u in chip blacklist\n",
++ section_content->blacklist.ver);
++ return -EPERM;
++ }
++
++ return 0;
++}
++
+ static int __parse_security_section(struct rtw89_dev *rtwdev,
+ struct rtw89_fw_bin_info *info,
+ struct rtw89_fw_hdr_section_info *section_info,
+@@ -374,7 +424,7 @@ static int __parse_security_section(struct rtw89_dev *rtwdev,
+ info->secure_section_exist = true;
+ }
+
+- return 0;
++ return __check_secure_blacklist(rtwdev, info, section_info, content);
+ }
+
+ static int rtw89_fw_hdr_parser_v1(struct rtw89_dev *rtwdev, const u8 *fw, u32 len,
+@@ -489,6 +539,30 @@ static int rtw89_fw_hdr_parser(struct rtw89_dev *rtwdev,
+ }
+ }
+
++static int rtw89_mfw_validate_hdr(struct rtw89_dev *rtwdev,
++ const struct firmware *firmware,
++ const struct rtw89_mfw_hdr *mfw_hdr)
++{
++ const void *mfw = firmware->data;
++ u32 mfw_len = firmware->size;
++ u8 fw_nr = mfw_hdr->fw_nr;
++ const void *ptr;
++
++ if (fw_nr == 0) {
++ rtw89_err(rtwdev, "mfw header has no fw entry\n");
++ return -ENOENT;
++ }
++
++ ptr = &mfw_hdr->info[fw_nr];
++
++ if (ptr > mfw + mfw_len) {
++ rtw89_err(rtwdev, "mfw header out of address\n");
++ return -EFAULT;
++ }
++
++ return 0;
++}
++
+ static
+ int rtw89_mfw_recognize(struct rtw89_dev *rtwdev, enum rtw89_fw_type type,
+ struct rtw89_fw_suit *fw_suit, bool nowarn)
+@@ -499,6 +573,7 @@ int rtw89_mfw_recognize(struct rtw89_dev *rtwdev, enum rtw89_fw_type type,
+ u32 mfw_len = firmware->size;
+ const struct rtw89_mfw_hdr *mfw_hdr = (const struct rtw89_mfw_hdr *)mfw;
+ const struct rtw89_mfw_info *mfw_info = NULL, *tmp;
++ int ret;
+ int i;
+
+ if (mfw_hdr->sig != RTW89_MFW_SIG) {
+@@ -511,6 +586,10 @@ int rtw89_mfw_recognize(struct rtw89_dev *rtwdev, enum rtw89_fw_type type,
+ return 0;
+ }
+
++ ret = rtw89_mfw_validate_hdr(rtwdev, firmware, mfw_hdr);
++ if (ret)
++ return ret;
++
+ for (i = 0; i < mfw_hdr->fw_nr; i++) {
+ tmp = &mfw_hdr->info[i];
+ if (tmp->type != type)
+@@ -540,6 +619,12 @@ int rtw89_mfw_recognize(struct rtw89_dev *rtwdev, enum rtw89_fw_type type,
+ found:
+ fw_suit->data = mfw + le32_to_cpu(mfw_info->shift);
+ fw_suit->size = le32_to_cpu(mfw_info->size);
++
++ if (fw_suit->data + fw_suit->size > mfw + mfw_len) {
++ rtw89_err(rtwdev, "fw_suit %d out of address\n", type);
++ return -EFAULT;
++ }
++
+ return 0;
+ }
+
+@@ -551,12 +636,17 @@ static u32 rtw89_mfw_get_size(struct rtw89_dev *rtwdev)
+ (const struct rtw89_mfw_hdr *)firmware->data;
+ const struct rtw89_mfw_info *mfw_info;
+ u32 size;
++ int ret;
+
+ if (mfw_hdr->sig != RTW89_MFW_SIG) {
+ rtw89_warn(rtwdev, "not mfw format\n");
+ return 0;
+ }
+
++ ret = rtw89_mfw_validate_hdr(rtwdev, firmware, mfw_hdr);
++ if (ret)
++ return ret;
++
+ mfw_info = &mfw_hdr->info[mfw_hdr->fw_nr - 1];
+ size = le32_to_cpu(mfw_info->shift) + le32_to_cpu(mfw_info->size);
+
+@@ -1322,7 +1412,6 @@ static int __rtw89_fw_download_hdr(struct rtw89_dev *rtwdev,
+ ret = rtw89_h2c_tx(rtwdev, skb, false);
+ if (ret) {
+ rtw89_err(rtwdev, "failed to send h2c\n");
+- ret = -1;
+ goto fail;
+ }
+
+@@ -1409,7 +1498,6 @@ static int __rtw89_fw_download_main(struct rtw89_dev *rtwdev,
+ ret = rtw89_h2c_tx(rtwdev, skb, true);
+ if (ret) {
+ rtw89_err(rtwdev, "failed to send h2c\n");
+- ret = -1;
+ goto fail;
+ }
+
+@@ -3281,9 +3369,10 @@ int rtw89_fw_h2c_assoc_cmac_tbl_g7(struct rtw89_dev *rtwdev,
+ CCTLINFO_G7_W5_NOMINAL_PKT_PADDING3 |
+ CCTLINFO_G7_W5_NOMINAL_PKT_PADDING4);
+
+- h2c->w6 = le32_encode_bits(vif->type == NL80211_IFTYPE_STATION ? 1 : 0,
++ h2c->w6 = le32_encode_bits(vif->cfg.aid, CCTLINFO_G7_W6_AID12_PAID) |
++ le32_encode_bits(vif->type == NL80211_IFTYPE_STATION ? 1 : 0,
+ CCTLINFO_G7_W6_ULDL);
+- h2c->m6 = cpu_to_le32(CCTLINFO_G7_W6_ULDL);
++ h2c->m6 = cpu_to_le32(CCTLINFO_G7_W6_AID12_PAID | CCTLINFO_G7_W6_ULDL);
+
+ if (rtwsta_link) {
+ h2c->w8 = le32_encode_bits(link_sta->he_cap.has_he,
+diff --git a/drivers/net/wireless/realtek/rtw89/fw.h b/drivers/net/wireless/realtek/rtw89/fw.h
+index 2026bc2fd2acd4..ee2be09bd3dbd8 100644
+--- a/drivers/net/wireless/realtek/rtw89/fw.h
++++ b/drivers/net/wireless/realtek/rtw89/fw.h
+@@ -663,6 +663,11 @@ struct rtw89_fw_mss_pool_hdr {
+ } __packed;
+
+ union rtw89_fw_section_mssc_content {
++ struct {
++ u8 pad[0x20];
++ u8 bit_in_chip_list;
++ u8 ver;
++ } __packed blacklist;
+ struct {
+ u8 pad[58];
+ __le32 v;
+@@ -673,6 +678,13 @@ union rtw89_fw_section_mssc_content {
+ } __packed key_sign_len;
+ } __packed;
+
++struct rtw89_fw_blacklist {
++ u8 ver;
++ u8 list[32];
++};
++
++extern const struct rtw89_fw_blacklist rtw89_fw_blacklist_default;
++
+ static inline void SET_CTRL_INFO_MACID(void *table, u32 val)
+ {
+ le32p_replace_bits((__le32 *)(table) + 0, val, GENMASK(6, 0));
+diff --git a/drivers/net/wireless/realtek/rtw89/mac.c b/drivers/net/wireless/realtek/rtw89/mac.c
+index a37c6d525d6f0a..def12dbfe48d33 100644
+--- a/drivers/net/wireless/realtek/rtw89/mac.c
++++ b/drivers/net/wireless/realtek/rtw89/mac.c
+@@ -1495,6 +1495,21 @@ static int rtw89_mac_power_switch(struct rtw89_dev *rtwdev, bool on)
+ #undef PWR_ACT
+ }
+
++int rtw89_mac_pwr_on(struct rtw89_dev *rtwdev)
++{
++ int ret;
++
++ ret = rtw89_mac_power_switch(rtwdev, true);
++ if (ret) {
++ rtw89_mac_power_switch(rtwdev, false);
++ ret = rtw89_mac_power_switch(rtwdev, true);
++ if (ret)
++ return ret;
++ }
++
++ return 0;
++}
++
+ void rtw89_mac_pwr_off(struct rtw89_dev *rtwdev)
+ {
+ rtw89_mac_power_switch(rtwdev, false);
+@@ -3996,14 +4011,6 @@ int rtw89_mac_partial_init(struct rtw89_dev *rtwdev, bool include_bb)
+ {
+ int ret;
+
+- ret = rtw89_mac_power_switch(rtwdev, true);
+- if (ret) {
+- rtw89_mac_power_switch(rtwdev, false);
+- ret = rtw89_mac_power_switch(rtwdev, true);
+- if (ret)
+- return ret;
+- }
+-
+ rtw89_mac_ctrl_hci_dma_trx(rtwdev, true);
+
+ if (include_bb) {
+@@ -4036,6 +4043,10 @@ int rtw89_mac_init(struct rtw89_dev *rtwdev)
+ bool include_bb = !!chip->bbmcu_nr;
+ int ret;
+
++ ret = rtw89_mac_pwr_on(rtwdev);
++ if (ret)
++ return ret;
++
+ ret = rtw89_mac_partial_init(rtwdev, include_bb);
+ if (ret)
+ goto fail;
+@@ -4067,7 +4078,7 @@ int rtw89_mac_init(struct rtw89_dev *rtwdev)
+
+ return ret;
+ fail:
+- rtw89_mac_power_switch(rtwdev, false);
++ rtw89_mac_pwr_off(rtwdev);
+
+ return ret;
+ }
+@@ -4826,6 +4837,32 @@ void rtw89_mac_set_he_obss_narrow_bw_ru(struct rtw89_dev *rtwdev,
+ rtw89_write32_set(rtwdev, reg, mac->narrow_bw_ru_dis.mask);
+ }
+
++void rtw89_mac_set_he_tb(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link)
++{
++ struct ieee80211_bss_conf *bss_conf;
++ bool set;
++ u32 reg;
++
++ if (rtwdev->chip->chip_gen != RTW89_CHIP_BE)
++ return;
++
++ rcu_read_lock();
++
++ bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true);
++ set = bss_conf->he_support && !bss_conf->eht_support;
++
++ rcu_read_unlock();
++
++ reg = rtw89_mac_reg_by_idx(rtwdev, R_BE_CLIENT_OM_CTRL,
++ rtwvif_link->mac_idx);
++
++ if (set)
++ rtw89_write32_set(rtwdev, reg, B_BE_TRIG_DIS_EHTTB);
++ else
++ rtw89_write32_clr(rtwdev, reg, B_BE_TRIG_DIS_EHTTB);
++}
++
+ void rtw89_mac_stop_ap(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link)
+ {
+ rtw89_mac_port_cfg_func_sw(rtwdev, rtwvif_link);
+diff --git a/drivers/net/wireless/realtek/rtw89/mac.h b/drivers/net/wireless/realtek/rtw89/mac.h
+index 8edea96d037f64..71574dbd8764ef 100644
+--- a/drivers/net/wireless/realtek/rtw89/mac.h
++++ b/drivers/net/wireless/realtek/rtw89/mac.h
+@@ -1145,6 +1145,7 @@ rtw89_write32_port_set(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_l
+ rtw89_write32_set(rtwdev, reg, bit);
+ }
+
++int rtw89_mac_pwr_on(struct rtw89_dev *rtwdev);
+ void rtw89_mac_pwr_off(struct rtw89_dev *rtwdev);
+ int rtw89_mac_partial_init(struct rtw89_dev *rtwdev, bool include_bb);
+ int rtw89_mac_init(struct rtw89_dev *rtwdev);
+@@ -1185,6 +1186,8 @@ void rtw89_mac_port_cfg_rx_sync(struct rtw89_dev *rtwdev,
+ struct rtw89_vif_link *rtwvif_link, bool en);
+ void rtw89_mac_set_he_obss_narrow_bw_ru(struct rtw89_dev *rtwdev,
+ struct rtw89_vif_link *rtwvif_link);
++void rtw89_mac_set_he_tb(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link);
+ void rtw89_mac_stop_ap(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link);
+ void rtw89_mac_enable_beacon_for_ap_vifs(struct rtw89_dev *rtwdev, bool en);
+ int rtw89_mac_remove_vif(struct rtw89_dev *rtwdev, struct rtw89_vif_link *vif);
+diff --git a/drivers/net/wireless/realtek/rtw89/mac80211.c b/drivers/net/wireless/realtek/rtw89/mac80211.c
+index b3669e0074df9d..7c9b53a9ba3b76 100644
+--- a/drivers/net/wireless/realtek/rtw89/mac80211.c
++++ b/drivers/net/wireless/realtek/rtw89/mac80211.c
+@@ -670,6 +670,7 @@ static void __rtw89_ops_bss_link_assoc(struct rtw89_dev *rtwdev,
+ rtw89_chip_cfg_txpwr_ul_tb_offset(rtwdev, rtwvif_link);
+ rtw89_mac_port_update(rtwdev, rtwvif_link);
+ rtw89_mac_set_he_obss_narrow_bw_ru(rtwdev, rtwvif_link);
++ rtw89_mac_set_he_tb(rtwdev, rtwvif_link);
+ }
+
+ static void __rtw89_ops_bss_assoc(struct rtw89_dev *rtwdev,
+diff --git a/drivers/net/wireless/realtek/rtw89/reg.h b/drivers/net/wireless/realtek/rtw89/reg.h
+index 10d0efa7a58efd..850ae5bf50ef39 100644
+--- a/drivers/net/wireless/realtek/rtw89/reg.h
++++ b/drivers/net/wireless/realtek/rtw89/reg.h
+@@ -7095,6 +7095,10 @@
+ #define B_BE_MACLBK_RDY_NUM_MASK GENMASK(7, 3)
+ #define B_BE_MACLBK_EN BIT(0)
+
++#define R_BE_CLIENT_OM_CTRL 0x11040
++#define R_BE_CLIENT_OM_CTRL_C1 0x15040
++#define B_BE_TRIG_DIS_EHTTB BIT(24)
++
+ #define R_BE_WMAC_NAV_CTL 0x11080
+ #define R_BE_WMAC_NAV_CTL_C1 0x15080
+ #define B_BE_WMAC_NAV_UPPER_EN BIT(26)
+diff --git a/drivers/net/wireless/realtek/rtw89/regd.c b/drivers/net/wireless/realtek/rtw89/regd.c
+index 80b2f74589eb9f..5b8d95c90d7390 100644
+--- a/drivers/net/wireless/realtek/rtw89/regd.c
++++ b/drivers/net/wireless/realtek/rtw89/regd.c
+@@ -720,6 +720,7 @@ void rtw89_regd_notifier(struct wiphy *wiphy, struct regulatory_request *request
+ struct ieee80211_hw *hw = wiphy_to_ieee80211_hw(wiphy);
+ struct rtw89_dev *rtwdev = hw->priv;
+
++ wiphy_lock(wiphy);
+ mutex_lock(&rtwdev->mutex);
+ rtw89_leave_ps_mode(rtwdev);
+
+@@ -737,6 +738,7 @@ void rtw89_regd_notifier(struct wiphy *wiphy, struct regulatory_request *request
+
+ exit:
+ mutex_unlock(&rtwdev->mutex);
++ wiphy_unlock(wiphy);
+ }
+
+ /* Maximum Transmit Power field (@raw) can be EIRP or PSD.
+diff --git a/drivers/net/wireless/realtek/rtw89/rtw8851b.c b/drivers/net/wireless/realtek/rtw89/rtw8851b.c
+index c56f70267882a6..546e88cd464935 100644
+--- a/drivers/net/wireless/realtek/rtw89/rtw8851b.c
++++ b/drivers/net/wireless/realtek/rtw89/rtw8851b.c
+@@ -1596,10 +1596,16 @@ static void rtw8851b_rfk_channel(struct rtw89_dev *rtwdev,
+ enum rtw89_chanctx_idx chanctx_idx = rtwvif_link->chanctx_idx;
+ enum rtw89_phy_idx phy_idx = rtwvif_link->phy_idx;
+
++ rtw89_btc_ntfy_conn_rfk(rtwdev, true);
++
+ rtw8851b_rx_dck(rtwdev, phy_idx, chanctx_idx);
+ rtw8851b_iqk(rtwdev, phy_idx, chanctx_idx);
++ rtw89_btc_ntfy_preserve_bt_time(rtwdev, 30);
+ rtw8851b_tssi(rtwdev, phy_idx, true, chanctx_idx);
++ rtw89_btc_ntfy_preserve_bt_time(rtwdev, 30);
+ rtw8851b_dpk(rtwdev, phy_idx, chanctx_idx);
++
++ rtw89_btc_ntfy_conn_rfk(rtwdev, false);
+ }
+
+ static void rtw8851b_rfk_band_changed(struct rtw89_dev *rtwdev,
+@@ -2445,6 +2451,7 @@ const struct rtw89_chip_info rtw8851b_chip_info = {
+ .try_ce_fw = true,
+ .bbmcu_nr = 0,
+ .needed_fw_elms = 0,
++ .fw_blacklist = NULL,
+ .fifo_size = 196608,
+ .small_fifo_size = true,
+ .dle_scc_rsvd_size = 98304,
+@@ -2485,6 +2492,7 @@ const struct rtw89_chip_info rtw8851b_chip_info = {
+ .support_ant_gain = false,
+ .ul_tb_waveform_ctrl = true,
+ .ul_tb_pwr_diff = false,
++ .rx_freq_frome_ie = true,
+ .hw_sec_hdr = false,
+ .hw_mgmt_tx_encrypt = false,
+ .rf_path_num = 1,
+diff --git a/drivers/net/wireless/realtek/rtw89/rtw8852a.c b/drivers/net/wireless/realtek/rtw89/rtw8852a.c
+index 9bd2842c27d50f..e12e5bd402e5b8 100644
+--- a/drivers/net/wireless/realtek/rtw89/rtw8852a.c
++++ b/drivers/net/wireless/realtek/rtw89/rtw8852a.c
+@@ -1356,10 +1356,16 @@ static void rtw8852a_rfk_channel(struct rtw89_dev *rtwdev,
+ enum rtw89_chanctx_idx chanctx_idx = rtwvif_link->chanctx_idx;
+ enum rtw89_phy_idx phy_idx = rtwvif_link->phy_idx;
+
++ rtw89_btc_ntfy_conn_rfk(rtwdev, true);
++
+ rtw8852a_rx_dck(rtwdev, phy_idx, true, chanctx_idx);
+ rtw8852a_iqk(rtwdev, phy_idx, chanctx_idx);
++ rtw89_btc_ntfy_preserve_bt_time(rtwdev, 30);
+ rtw8852a_tssi(rtwdev, phy_idx, chanctx_idx);
++ rtw89_btc_ntfy_preserve_bt_time(rtwdev, 30);
+ rtw8852a_dpk(rtwdev, phy_idx, chanctx_idx);
++
++ rtw89_btc_ntfy_conn_rfk(rtwdev, false);
+ }
+
+ static void rtw8852a_rfk_band_changed(struct rtw89_dev *rtwdev,
+@@ -2162,6 +2168,7 @@ const struct rtw89_chip_info rtw8852a_chip_info = {
+ .try_ce_fw = false,
+ .bbmcu_nr = 0,
+ .needed_fw_elms = 0,
++ .fw_blacklist = NULL,
+ .fifo_size = 458752,
+ .small_fifo_size = false,
+ .dle_scc_rsvd_size = 0,
+@@ -2203,6 +2210,7 @@ const struct rtw89_chip_info rtw8852a_chip_info = {
+ .support_ant_gain = false,
+ .ul_tb_waveform_ctrl = false,
+ .ul_tb_pwr_diff = false,
++ .rx_freq_frome_ie = true,
+ .hw_sec_hdr = false,
+ .hw_mgmt_tx_encrypt = false,
+ .rf_path_num = 2,
+diff --git a/drivers/net/wireless/realtek/rtw89/rtw8852b.c b/drivers/net/wireless/realtek/rtw89/rtw8852b.c
+index dfb2bf61b0b834..ab9365b0ec089e 100644
+--- a/drivers/net/wireless/realtek/rtw89/rtw8852b.c
++++ b/drivers/net/wireless/realtek/rtw89/rtw8852b.c
+@@ -568,10 +568,16 @@ static void rtw8852b_rfk_channel(struct rtw89_dev *rtwdev,
+ enum rtw89_chanctx_idx chanctx_idx = rtwvif_link->chanctx_idx;
+ enum rtw89_phy_idx phy_idx = rtwvif_link->phy_idx;
+
++ rtw89_btc_ntfy_conn_rfk(rtwdev, true);
++
+ rtw8852b_rx_dck(rtwdev, phy_idx, chanctx_idx);
+ rtw8852b_iqk(rtwdev, phy_idx, chanctx_idx);
++ rtw89_btc_ntfy_preserve_bt_time(rtwdev, 30);
+ rtw8852b_tssi(rtwdev, phy_idx, true, chanctx_idx);
++ rtw89_btc_ntfy_preserve_bt_time(rtwdev, 30);
+ rtw8852b_dpk(rtwdev, phy_idx, chanctx_idx);
++
++ rtw89_btc_ntfy_conn_rfk(rtwdev, false);
+ }
+
+ static void rtw8852b_rfk_band_changed(struct rtw89_dev *rtwdev,
+@@ -798,6 +804,7 @@ const struct rtw89_chip_info rtw8852b_chip_info = {
+ .try_ce_fw = true,
+ .bbmcu_nr = 0,
+ .needed_fw_elms = 0,
++ .fw_blacklist = &rtw89_fw_blacklist_default,
+ .fifo_size = 196608,
+ .small_fifo_size = true,
+ .dle_scc_rsvd_size = 98304,
+@@ -839,6 +846,7 @@ const struct rtw89_chip_info rtw8852b_chip_info = {
+ .support_ant_gain = true,
+ .ul_tb_waveform_ctrl = true,
+ .ul_tb_pwr_diff = false,
++ .rx_freq_frome_ie = true,
+ .hw_sec_hdr = false,
+ .hw_mgmt_tx_encrypt = false,
+ .rf_path_num = 2,
+diff --git a/drivers/net/wireless/realtek/rtw89/rtw8852bt.c b/drivers/net/wireless/realtek/rtw89/rtw8852bt.c
+index bde3e1fb7ca628..412e633944f371 100644
+--- a/drivers/net/wireless/realtek/rtw89/rtw8852bt.c
++++ b/drivers/net/wireless/realtek/rtw89/rtw8852bt.c
+@@ -541,10 +541,16 @@ static void rtw8852bt_rfk_channel(struct rtw89_dev *rtwdev,
+ enum rtw89_chanctx_idx chanctx_idx = rtwvif_link->chanctx_idx;
+ enum rtw89_phy_idx phy_idx = rtwvif_link->phy_idx;
+
++ rtw89_btc_ntfy_conn_rfk(rtwdev, true);
++
+ rtw8852bt_rx_dck(rtwdev, phy_idx, chanctx_idx);
+ rtw8852bt_iqk(rtwdev, phy_idx, chanctx_idx);
++ rtw89_btc_ntfy_preserve_bt_time(rtwdev, 30);
+ rtw8852bt_tssi(rtwdev, phy_idx, true, chanctx_idx);
++ rtw89_btc_ntfy_preserve_bt_time(rtwdev, 30);
+ rtw8852bt_dpk(rtwdev, phy_idx, chanctx_idx);
++
++ rtw89_btc_ntfy_conn_rfk(rtwdev, false);
+ }
+
+ static void rtw8852bt_rfk_band_changed(struct rtw89_dev *rtwdev,
+@@ -732,6 +738,7 @@ const struct rtw89_chip_info rtw8852bt_chip_info = {
+ .try_ce_fw = true,
+ .bbmcu_nr = 0,
+ .needed_fw_elms = RTW89_AX_GEN_DEF_NEEDED_FW_ELEMENTS_NO_6GHZ,
++ .fw_blacklist = &rtw89_fw_blacklist_default,
+ .fifo_size = 458752,
+ .small_fifo_size = true,
+ .dle_scc_rsvd_size = 98304,
+@@ -772,6 +779,7 @@ const struct rtw89_chip_info rtw8852bt_chip_info = {
+ .support_ant_gain = true,
+ .ul_tb_waveform_ctrl = true,
+ .ul_tb_pwr_diff = false,
++ .rx_freq_frome_ie = true,
+ .hw_sec_hdr = false,
+ .hw_mgmt_tx_encrypt = false,
+ .rf_path_num = 2,
+diff --git a/drivers/net/wireless/realtek/rtw89/rtw8852c.c b/drivers/net/wireless/realtek/rtw89/rtw8852c.c
+index bc84b15e7826dd..cd68f6cbeecbfe 100644
+--- a/drivers/net/wireless/realtek/rtw89/rtw8852c.c
++++ b/drivers/net/wireless/realtek/rtw89/rtw8852c.c
+@@ -1853,10 +1853,16 @@ static void rtw8852c_rfk_channel(struct rtw89_dev *rtwdev,
+ enum rtw89_phy_idx phy_idx = rtwvif_link->phy_idx;
+
+ rtw8852c_mcc_get_ch_info(rtwdev, phy_idx);
++ rtw89_btc_ntfy_conn_rfk(rtwdev, true);
++
+ rtw8852c_rx_dck(rtwdev, phy_idx, false);
+ rtw8852c_iqk(rtwdev, phy_idx, chanctx_idx);
++ rtw89_btc_ntfy_preserve_bt_time(rtwdev, 30);
+ rtw8852c_tssi(rtwdev, phy_idx, chanctx_idx);
++ rtw89_btc_ntfy_preserve_bt_time(rtwdev, 30);
+ rtw8852c_dpk(rtwdev, phy_idx, chanctx_idx);
++
++ rtw89_btc_ntfy_conn_rfk(rtwdev, false);
+ rtw89_fw_h2c_rf_ntfy_mcc(rtwdev);
+ }
+
+@@ -2954,6 +2960,7 @@ const struct rtw89_chip_info rtw8852c_chip_info = {
+ .try_ce_fw = false,
+ .bbmcu_nr = 0,
+ .needed_fw_elms = 0,
++ .fw_blacklist = &rtw89_fw_blacklist_default,
+ .fifo_size = 458752,
+ .small_fifo_size = false,
+ .dle_scc_rsvd_size = 0,
+@@ -2998,6 +3005,7 @@ const struct rtw89_chip_info rtw8852c_chip_info = {
+ .support_ant_gain = true,
+ .ul_tb_waveform_ctrl = false,
+ .ul_tb_pwr_diff = true,
++ .rx_freq_frome_ie = false,
+ .hw_sec_hdr = true,
+ .hw_mgmt_tx_encrypt = true,
+ .rf_path_num = 2,
+diff --git a/drivers/net/wireless/realtek/rtw89/rtw8922a.c b/drivers/net/wireless/realtek/rtw89/rtw8922a.c
+index 11d66bfceb15f1..2696fdf350f630 100644
+--- a/drivers/net/wireless/realtek/rtw89/rtw8922a.c
++++ b/drivers/net/wireless/realtek/rtw89/rtw8922a.c
+@@ -2721,6 +2721,7 @@ const struct rtw89_chip_info rtw8922a_chip_info = {
+ .try_ce_fw = false,
+ .bbmcu_nr = 1,
+ .needed_fw_elms = RTW89_BE_GEN_DEF_NEEDED_FW_ELEMENTS,
++ .fw_blacklist = &rtw89_fw_blacklist_default,
+ .fifo_size = 589824,
+ .small_fifo_size = false,
+ .dle_scc_rsvd_size = 0,
+@@ -2763,6 +2764,7 @@ const struct rtw89_chip_info rtw8922a_chip_info = {
+ .support_ant_gain = false,
+ .ul_tb_waveform_ctrl = false,
+ .ul_tb_pwr_diff = false,
++ .rx_freq_frome_ie = false,
+ .hw_sec_hdr = true,
+ .hw_mgmt_tx_encrypt = true,
+ .rf_path_num = 2,
+diff --git a/drivers/net/wireless/realtek/rtw89/ser.c b/drivers/net/wireless/realtek/rtw89/ser.c
+index 26a944d3b67270..d0c8584308c06e 100644
+--- a/drivers/net/wireless/realtek/rtw89/ser.c
++++ b/drivers/net/wireless/realtek/rtw89/ser.c
+@@ -156,9 +156,11 @@ static void ser_state_run(struct rtw89_ser *ser, u8 evt)
+ rtw89_debug(rtwdev, RTW89_DBG_SER, "ser: %s receive %s\n",
+ ser_st_name(ser), ser_ev_name(ser, evt));
+
++ wiphy_lock(rtwdev->hw->wiphy);
+ mutex_lock(&rtwdev->mutex);
+ rtw89_leave_lps(rtwdev);
+ mutex_unlock(&rtwdev->mutex);
++ wiphy_unlock(rtwdev->hw->wiphy);
+
+ ser->st_tbl[ser->state].st_func(ser, evt);
+ }
+@@ -708,9 +710,11 @@ static void ser_l2_reset_st_hdl(struct rtw89_ser *ser, u8 evt)
+
+ switch (evt) {
+ case SER_EV_STATE_IN:
++ wiphy_lock(rtwdev->hw->wiphy);
+ mutex_lock(&rtwdev->mutex);
+ ser_l2_reset_st_pre_hdl(ser);
+ mutex_unlock(&rtwdev->mutex);
++ wiphy_unlock(rtwdev->hw->wiphy);
+
+ ieee80211_restart_hw(rtwdev->hw);
+ ser_set_alarm(ser, SER_RECFG_TIMEOUT, SER_EV_L2_RECFG_TIMEOUT);
+diff --git a/drivers/net/wireless/virtual/mac80211_hwsim.c b/drivers/net/wireless/virtual/mac80211_hwsim.c
+index cf6a331d404271..a68530344d205c 100644
+--- a/drivers/net/wireless/virtual/mac80211_hwsim.c
++++ b/drivers/net/wireless/virtual/mac80211_hwsim.c
+@@ -4,7 +4,7 @@
+ * Copyright (c) 2008, Jouni Malinen <j@w1.fi>
+ * Copyright (c) 2011, Javier Lopez <jlopex@gmail.com>
+ * Copyright (c) 2016 - 2017 Intel Deutschland GmbH
+- * Copyright (C) 2018 - 2024 Intel Corporation
++ * Copyright (C) 2018 - 2025 Intel Corporation
+ */
+
+ /*
+@@ -1983,11 +1983,13 @@ static void mac80211_hwsim_tx(struct ieee80211_hw *hw,
+ return;
+ }
+
+- if (sta && sta->mlo) {
+- if (WARN_ON(!link_sta)) {
+- ieee80211_free_txskb(hw, skb);
+- return;
+- }
++ /* Do address translations only between shared links. It is
++ * possible that while an non-AP MLD station and an AP MLD
++ * station have shared links, the frame is intended to be sent
++ * on a link which is not shared (for example when sending a
++ * probe response).
++ */
++ if (sta && sta->mlo && link_sta) {
+ /* address translation to link addresses on TX */
+ ether_addr_copy(hdr->addr1, link_sta->addr);
+ ether_addr_copy(hdr->addr2, bss_conf->addr);
+diff --git a/drivers/nvdimm/label.c b/drivers/nvdimm/label.c
+index 082253a3a95607..04f4a049599a1a 100644
+--- a/drivers/nvdimm/label.c
++++ b/drivers/nvdimm/label.c
+@@ -442,7 +442,8 @@ int nd_label_data_init(struct nvdimm_drvdata *ndd)
+ if (ndd->data)
+ return 0;
+
+- if (ndd->nsarea.status || ndd->nsarea.max_xfer == 0) {
++ if (ndd->nsarea.status || ndd->nsarea.max_xfer == 0 ||
++ ndd->nsarea.config_size == 0) {
+ dev_dbg(ndd->dev, "failed to init config data area: (%u:%u)\n",
+ ndd->nsarea.max_xfer, ndd->nsarea.config_size);
+ return -ENXIO;
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 00bd21b5c641e3..abd097eba6623f 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -3628,6 +3628,9 @@ static const struct pci_device_id nvme_id_table[] = {
+ .driver_data = NVME_QUIRK_BOGUS_NID, },
+ { PCI_DEVICE(0x1217, 0x8760), /* O2 Micro 64GB Steam Deck */
+ .driver_data = NVME_QUIRK_DMAPOOL_ALIGN_512, },
++ { PCI_DEVICE(0x126f, 0x1001), /* Silicon Motion generic */
++ .driver_data = NVME_QUIRK_NO_DEEPEST_PS |
++ NVME_QUIRK_IGNORE_DEV_SUBNQN, },
+ { PCI_DEVICE(0x126f, 0x2262), /* Silicon Motion generic */
+ .driver_data = NVME_QUIRK_NO_DEEPEST_PS |
+ NVME_QUIRK_BOGUS_NID, },
+@@ -3651,6 +3654,9 @@ static const struct pci_device_id nvme_id_table[] = {
+ NVME_QUIRK_IGNORE_DEV_SUBNQN, },
+ { PCI_DEVICE(0x15b7, 0x5008), /* Sandisk SN530 */
+ .driver_data = NVME_QUIRK_BROKEN_MSI },
++ { PCI_DEVICE(0x15b7, 0x5009), /* Sandisk SN550 */
++ .driver_data = NVME_QUIRK_BROKEN_MSI |
++ NVME_QUIRK_NO_DEEPEST_PS },
+ { PCI_DEVICE(0x1987, 0x5012), /* Phison E12 */
+ .driver_data = NVME_QUIRK_BOGUS_NID, },
+ { PCI_DEVICE(0x1987, 0x5016), /* Phison E16 */
+diff --git a/drivers/nvme/target/pci-epf.c b/drivers/nvme/target/pci-epf.c
+index bc1daa9aede9d2..fbc167f47d8a67 100644
+--- a/drivers/nvme/target/pci-epf.c
++++ b/drivers/nvme/target/pci-epf.c
+@@ -1264,6 +1264,7 @@ static u16 nvmet_pci_epf_create_cq(struct nvmet_ctrl *tctrl,
+ struct nvmet_pci_epf_ctrl *ctrl = tctrl->drvdata;
+ struct nvmet_pci_epf_queue *cq = &ctrl->cq[cqid];
+ u16 status;
++ int ret;
+
+ if (test_bit(NVMET_PCI_EPF_Q_LIVE, &cq->flags))
+ return NVME_SC_QID_INVALID | NVME_STATUS_DNR;
+@@ -1298,6 +1299,24 @@ static u16 nvmet_pci_epf_create_cq(struct nvmet_ctrl *tctrl,
+ if (status != NVME_SC_SUCCESS)
+ goto err;
+
++ /*
++ * Map the CQ PCI address space and since PCI endpoint controllers may
++ * return a partial mapping, check that the mapping is large enough.
++ */
++ ret = nvmet_pci_epf_mem_map(ctrl->nvme_epf, cq->pci_addr, cq->pci_size,
++ &cq->pci_map);
++ if (ret) {
++ dev_err(ctrl->dev, "Failed to map CQ %u (err=%d)\n",
++ cq->qid, ret);
++ goto err_internal;
++ }
++
++ if (cq->pci_map.pci_size < cq->pci_size) {
++ dev_err(ctrl->dev, "Invalid partial mapping of queue %u\n",
++ cq->qid);
++ goto err_unmap_queue;
++ }
++
+ set_bit(NVMET_PCI_EPF_Q_LIVE, &cq->flags);
+
+ dev_dbg(ctrl->dev, "CQ[%u]: %u entries of %zu B, IRQ vector %u\n",
+@@ -1305,6 +1324,10 @@ static u16 nvmet_pci_epf_create_cq(struct nvmet_ctrl *tctrl,
+
+ return NVME_SC_SUCCESS;
+
++err_unmap_queue:
++ nvmet_pci_epf_mem_unmap(ctrl->nvme_epf, &cq->pci_map);
++err_internal:
++ status = NVME_SC_INTERNAL | NVME_STATUS_DNR;
+ err:
+ if (test_and_clear_bit(NVMET_PCI_EPF_Q_IRQ_ENABLED, &cq->flags))
+ nvmet_pci_epf_remove_irq_vector(ctrl, cq->vector);
+@@ -1321,7 +1344,9 @@ static u16 nvmet_pci_epf_delete_cq(struct nvmet_ctrl *tctrl, u16 cqid)
+
+ cancel_delayed_work_sync(&cq->work);
+ nvmet_pci_epf_drain_queue(cq);
+- nvmet_pci_epf_remove_irq_vector(ctrl, cq->vector);
++ if (test_and_clear_bit(NVMET_PCI_EPF_Q_IRQ_ENABLED, &cq->flags))
++ nvmet_pci_epf_remove_irq_vector(ctrl, cq->vector);
++ nvmet_pci_epf_mem_unmap(ctrl->nvme_epf, &cq->pci_map);
+
+ return NVME_SC_SUCCESS;
+ }
+@@ -1554,36 +1579,6 @@ static void nvmet_pci_epf_free_queues(struct nvmet_pci_epf_ctrl *ctrl)
+ ctrl->cq = NULL;
+ }
+
+-static int nvmet_pci_epf_map_queue(struct nvmet_pci_epf_ctrl *ctrl,
+- struct nvmet_pci_epf_queue *queue)
+-{
+- struct nvmet_pci_epf *nvme_epf = ctrl->nvme_epf;
+- int ret;
+-
+- ret = nvmet_pci_epf_mem_map(nvme_epf, queue->pci_addr,
+- queue->pci_size, &queue->pci_map);
+- if (ret) {
+- dev_err(ctrl->dev, "Failed to map queue %u (err=%d)\n",
+- queue->qid, ret);
+- return ret;
+- }
+-
+- if (queue->pci_map.pci_size < queue->pci_size) {
+- dev_err(ctrl->dev, "Invalid partial mapping of queue %u\n",
+- queue->qid);
+- nvmet_pci_epf_mem_unmap(nvme_epf, &queue->pci_map);
+- return -ENOMEM;
+- }
+-
+- return 0;
+-}
+-
+-static inline void nvmet_pci_epf_unmap_queue(struct nvmet_pci_epf_ctrl *ctrl,
+- struct nvmet_pci_epf_queue *queue)
+-{
+- nvmet_pci_epf_mem_unmap(ctrl->nvme_epf, &queue->pci_map);
+-}
+-
+ static void nvmet_pci_epf_exec_iod_work(struct work_struct *work)
+ {
+ struct nvmet_pci_epf_iod *iod =
+@@ -1749,11 +1744,7 @@ static void nvmet_pci_epf_cq_work(struct work_struct *work)
+ struct nvme_completion *cqe;
+ struct nvmet_pci_epf_iod *iod;
+ unsigned long flags;
+- int ret, n = 0;
+-
+- ret = nvmet_pci_epf_map_queue(ctrl, cq);
+- if (ret)
+- goto again;
++ int ret = 0, n = 0;
+
+ while (test_bit(NVMET_PCI_EPF_Q_LIVE, &cq->flags) && ctrl->link_up) {
+
+@@ -1809,8 +1800,6 @@ static void nvmet_pci_epf_cq_work(struct work_struct *work)
+ n++;
+ }
+
+- nvmet_pci_epf_unmap_queue(ctrl, cq);
+-
+ /*
+ * We do not support precise IRQ coalescing time (100ns units as per
+ * NVMe specifications). So if we have posted completion entries without
+@@ -1819,7 +1808,6 @@ static void nvmet_pci_epf_cq_work(struct work_struct *work)
+ if (n)
+ nvmet_pci_epf_raise_irq(ctrl, cq, true);
+
+-again:
+ if (ret < 0)
+ queue_delayed_work(system_highpri_wq, &cq->work,
+ NVMET_PCI_EPF_CQ_RETRY_INTERVAL);
+diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
+index 4f9cac8a5abe07..259ad77c03c50f 100644
+--- a/drivers/nvme/target/tcp.c
++++ b/drivers/nvme/target/tcp.c
+@@ -1560,6 +1560,9 @@ static void nvmet_tcp_restore_socket_callbacks(struct nvmet_tcp_queue *queue)
+ {
+ struct socket *sock = queue->sock;
+
++ if (!queue->state_change)
++ return;
++
+ write_lock_bh(&sock->sk->sk_callback_lock);
+ sock->sk->sk_data_ready = queue->data_ready;
+ sock->sk->sk_state_change = queue->state_change;
+diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
+index fff85bbf0ecd0f..e206efc29a0044 100644
+--- a/drivers/nvmem/core.c
++++ b/drivers/nvmem/core.c
+@@ -594,9 +594,11 @@ static int nvmem_cell_info_to_nvmem_cell_entry_nodup(struct nvmem_device *nvmem,
+ cell->nbits = info->nbits;
+ cell->np = info->np;
+
+- if (cell->nbits)
++ if (cell->nbits) {
+ cell->bytes = DIV_ROUND_UP(cell->nbits + cell->bit_offset,
+ BITS_PER_BYTE);
++ cell->raw_len = ALIGN(cell->bytes, nvmem->word_size);
++ }
+
+ if (!IS_ALIGNED(cell->offset, nvmem->stride)) {
+ dev_err(&nvmem->dev,
+@@ -605,6 +607,18 @@ static int nvmem_cell_info_to_nvmem_cell_entry_nodup(struct nvmem_device *nvmem,
+ return -EINVAL;
+ }
+
++ if (!IS_ALIGNED(cell->raw_len, nvmem->word_size)) {
++ dev_err(&nvmem->dev,
++ "cell %s raw len %zd unaligned to nvmem word size %d\n",
++ cell->name ?: "<unknown>", cell->raw_len,
++ nvmem->word_size);
++
++ if (info->raw_len)
++ return -EINVAL;
++
++ cell->raw_len = ALIGN(cell->raw_len, nvmem->word_size);
++ }
++
+ return 0;
+ }
+
+@@ -837,7 +851,9 @@ static int nvmem_add_cells_from_dt(struct nvmem_device *nvmem, struct device_nod
+ if (addr && len == (2 * sizeof(u32))) {
+ info.bit_offset = be32_to_cpup(addr++);
+ info.nbits = be32_to_cpup(addr);
+- if (info.bit_offset >= BITS_PER_BYTE || info.nbits < 1) {
++ if (info.bit_offset >= BITS_PER_BYTE * info.bytes ||
++ info.nbits < 1 ||
++ info.bit_offset + info.nbits > BITS_PER_BYTE * info.bytes) {
+ dev_err(dev, "nvmem: invalid bits on %pOF\n", child);
+ of_node_put(child);
+ return -EINVAL;
+@@ -1630,21 +1646,29 @@ EXPORT_SYMBOL_GPL(nvmem_cell_put);
+ static void nvmem_shift_read_buffer_in_place(struct nvmem_cell_entry *cell, void *buf)
+ {
+ u8 *p, *b;
+- int i, extra, bit_offset = cell->bit_offset;
++ int i, extra, bytes_offset;
++ int bit_offset = cell->bit_offset;
+
+ p = b = buf;
+- if (bit_offset) {
++
++ bytes_offset = bit_offset / BITS_PER_BYTE;
++ b += bytes_offset;
++ bit_offset %= BITS_PER_BYTE;
++
++ if (bit_offset % BITS_PER_BYTE) {
+ /* First shift */
+- *b++ >>= bit_offset;
++ *p = *b++ >> bit_offset;
+
+ /* setup rest of the bytes if any */
+ for (i = 1; i < cell->bytes; i++) {
+ /* Get bits from next byte and shift them towards msb */
+- *p |= *b << (BITS_PER_BYTE - bit_offset);
++ *p++ |= *b << (BITS_PER_BYTE - bit_offset);
+
+- p = b;
+- *b++ >>= bit_offset;
++ *p = *b++ >> bit_offset;
+ }
++ } else if (p != b) {
++ memmove(p, b, cell->bytes - bytes_offset);
++ p += cell->bytes - 1;
+ } else {
+ /* point to the msb */
+ p += cell->bytes - 1;
+diff --git a/drivers/nvmem/qfprom.c b/drivers/nvmem/qfprom.c
+index 116a39e804c70b..a872c640b8c5a5 100644
+--- a/drivers/nvmem/qfprom.c
++++ b/drivers/nvmem/qfprom.c
+@@ -321,19 +321,32 @@ static int qfprom_reg_read(void *context,
+ unsigned int reg, void *_val, size_t bytes)
+ {
+ struct qfprom_priv *priv = context;
+- u8 *val = _val;
+- int i = 0, words = bytes;
++ u32 *val = _val;
+ void __iomem *base = priv->qfpcorrected;
++ int words = DIV_ROUND_UP(bytes, sizeof(u32));
++ int i;
+
+ if (read_raw_data && priv->qfpraw)
+ base = priv->qfpraw;
+
+- while (words--)
+- *val++ = readb(base + reg + i++);
++ for (i = 0; i < words; i++)
++ *val++ = readl(base + reg + i * sizeof(u32));
+
+ return 0;
+ }
+
++/* Align reads to word boundary */
++static void qfprom_fixup_dt_cell_info(struct nvmem_device *nvmem,
++ struct nvmem_cell_info *cell)
++{
++ unsigned int byte_offset = cell->offset % sizeof(u32);
++
++ cell->bit_offset += byte_offset * BITS_PER_BYTE;
++ cell->offset -= byte_offset;
++ if (byte_offset && !cell->nbits)
++ cell->nbits = cell->bytes * BITS_PER_BYTE;
++}
++
+ static void qfprom_runtime_disable(void *data)
+ {
+ pm_runtime_disable(data);
+@@ -358,10 +371,11 @@ static int qfprom_probe(struct platform_device *pdev)
+ struct nvmem_config econfig = {
+ .name = "qfprom",
+ .add_legacy_fixed_of_cells = true,
+- .stride = 1,
+- .word_size = 1,
++ .stride = 4,
++ .word_size = 4,
+ .id = NVMEM_DEVID_AUTO,
+ .reg_read = qfprom_reg_read,
++ .fixup_dt_cell_info = qfprom_fixup_dt_cell_info,
+ };
+ struct device *dev = &pdev->dev;
+ struct resource *res;
+diff --git a/drivers/nvmem/rockchip-otp.c b/drivers/nvmem/rockchip-otp.c
+index ebc3f0b24166bc..d88f12c5324264 100644
+--- a/drivers/nvmem/rockchip-otp.c
++++ b/drivers/nvmem/rockchip-otp.c
+@@ -59,7 +59,6 @@
+ #define RK3588_OTPC_AUTO_EN 0x08
+ #define RK3588_OTPC_INT_ST 0x84
+ #define RK3588_OTPC_DOUT0 0x20
+-#define RK3588_NO_SECURE_OFFSET 0x300
+ #define RK3588_NBYTES 4
+ #define RK3588_BURST_NUM 1
+ #define RK3588_BURST_SHIFT 8
+@@ -69,6 +68,7 @@
+
+ struct rockchip_data {
+ int size;
++ int read_offset;
+ const char * const *clks;
+ int num_clks;
+ nvmem_reg_read_t reg_read;
+@@ -196,7 +196,7 @@ static int rk3588_otp_read(void *context, unsigned int offset,
+ addr_start = round_down(offset, RK3588_NBYTES) / RK3588_NBYTES;
+ addr_end = round_up(offset + bytes, RK3588_NBYTES) / RK3588_NBYTES;
+ addr_len = addr_end - addr_start;
+- addr_start += RK3588_NO_SECURE_OFFSET;
++ addr_start += otp->data->read_offset / RK3588_NBYTES;
+
+ buf = kzalloc(array_size(addr_len, RK3588_NBYTES), GFP_KERNEL);
+ if (!buf)
+@@ -274,12 +274,21 @@ static const struct rockchip_data px30_data = {
+ .reg_read = px30_otp_read,
+ };
+
++static const struct rockchip_data rk3576_data = {
++ .size = 0x100,
++ .read_offset = 0x700,
++ .clks = px30_otp_clocks,
++ .num_clks = ARRAY_SIZE(px30_otp_clocks),
++ .reg_read = rk3588_otp_read,
++};
++
+ static const char * const rk3588_otp_clocks[] = {
+ "otp", "apb_pclk", "phy", "arb",
+ };
+
+ static const struct rockchip_data rk3588_data = {
+ .size = 0x400,
++ .read_offset = 0xc00,
+ .clks = rk3588_otp_clocks,
+ .num_clks = ARRAY_SIZE(rk3588_otp_clocks),
+ .reg_read = rk3588_otp_read,
+@@ -294,6 +303,10 @@ static const struct of_device_id rockchip_otp_match[] = {
+ .compatible = "rockchip,rk3308-otp",
+ .data = &px30_data,
+ },
++ {
++ .compatible = "rockchip,rk3576-otp",
++ .data = &rk3576_data,
++ },
+ {
+ .compatible = "rockchip,rk3588-otp",
+ .data = &rk3588_data,
+diff --git a/drivers/pci/Kconfig b/drivers/pci/Kconfig
+index 2fbd379923fd1e..5c3054aaec8c12 100644
+--- a/drivers/pci/Kconfig
++++ b/drivers/pci/Kconfig
+@@ -203,6 +203,12 @@ config PCI_P2PDMA
+ P2P DMA transactions must be between devices behind the same root
+ port.
+
++ Enabling this option will reduce the entropy of x86 KASLR memory
++ regions. For example - on a 46 bit system, the entropy goes down
++ from 16 bits to 15 bits. The actual reduction in entropy depends
++ on the physical address bits, on processor features, kernel config
++ (5 level page table) and physical memory present on the system.
++
+ If unsure, say N.
+
+ config PCI_LABEL
+diff --git a/drivers/pci/ats.c b/drivers/pci/ats.c
+index c6b266c772c81c..ec6c8dbdc5e9c9 100644
+--- a/drivers/pci/ats.c
++++ b/drivers/pci/ats.c
+@@ -538,4 +538,37 @@ int pci_max_pasids(struct pci_dev *pdev)
+ return (1 << FIELD_GET(PCI_PASID_CAP_WIDTH, supported));
+ }
+ EXPORT_SYMBOL_GPL(pci_max_pasids);
++
++/**
++ * pci_pasid_status - Check the PASID status
++ * @pdev: PCI device structure
++ *
++ * Returns a negative value when no PASID capability is present.
++ * Otherwise the value of the control register is returned.
++ * Status reported are:
++ *
++ * PCI_PASID_CTRL_ENABLE - PASID enabled
++ * PCI_PASID_CTRL_EXEC - Execute permission enabled
++ * PCI_PASID_CTRL_PRIV - Privileged mode enabled
++ */
++int pci_pasid_status(struct pci_dev *pdev)
++{
++ int pasid;
++ u16 ctrl;
++
++ if (pdev->is_virtfn)
++ pdev = pci_physfn(pdev);
++
++ pasid = pdev->pasid_cap;
++ if (!pasid)
++ return -EINVAL;
++
++ pci_read_config_word(pdev, pasid + PCI_PASID_CTRL, &ctrl);
++
++ ctrl &= PCI_PASID_CTRL_ENABLE | PCI_PASID_CTRL_EXEC |
++ PCI_PASID_CTRL_PRIV;
++
++ return ctrl;
++}
++EXPORT_SYMBOL_GPL(pci_pasid_status);
+ #endif /* CONFIG_PCI_PASID */
+diff --git a/drivers/pci/controller/dwc/pcie-designware-ep.c b/drivers/pci/controller/dwc/pcie-designware-ep.c
+index e41479a9ca0275..c91d095024689b 100644
+--- a/drivers/pci/controller/dwc/pcie-designware-ep.c
++++ b/drivers/pci/controller/dwc/pcie-designware-ep.c
+@@ -282,7 +282,7 @@ static int dw_pcie_find_index(struct dw_pcie_ep *ep, phys_addr_t addr,
+ u32 index;
+ struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
+
+- for (index = 0; index < pci->num_ob_windows; index++) {
++ for_each_set_bit(index, ep->ob_window_map, pci->num_ob_windows) {
+ if (ep->outbound_addr[index] != addr)
+ continue;
+ *atu_index = index;
+diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c
+index ffaded8f2df7bc..ae3fd2a5dbf85d 100644
+--- a/drivers/pci/controller/dwc/pcie-designware-host.c
++++ b/drivers/pci/controller/dwc/pcie-designware-host.c
+@@ -908,7 +908,7 @@ static int dw_pcie_pme_turn_off(struct dw_pcie *pci)
+ if (ret)
+ return ret;
+
+- mem = ioremap(atu.cpu_addr, pci->region_align);
++ mem = ioremap(pci->pp.msg_res->start, pci->region_align);
+ if (!mem)
+ return -ENOMEM;
+
+diff --git a/drivers/pci/controller/pcie-brcmstb.c b/drivers/pci/controller/pcie-brcmstb.c
+index 1a3bdc01b0747c..bae226c779a509 100644
+--- a/drivers/pci/controller/pcie-brcmstb.c
++++ b/drivers/pci/controller/pcie-brcmstb.c
+@@ -309,8 +309,8 @@ static int brcm_pcie_encode_ibar_size(u64 size)
+ if (log2_in >= 12 && log2_in <= 15)
+ /* Covers 4KB to 32KB (inclusive) */
+ return (log2_in - 12) + 0x1c;
+- else if (log2_in >= 16 && log2_in <= 35)
+- /* Covers 64KB to 32GB, (inclusive) */
++ else if (log2_in >= 16 && log2_in <= 36)
++ /* Covers 64KB to 64GB, (inclusive) */
+ return log2_in - 15;
+ /* Something is awry so disable */
+ return 0;
+@@ -1947,3 +1947,4 @@ module_platform_driver(brcm_pcie_driver);
+ MODULE_LICENSE("GPL");
+ MODULE_DESCRIPTION("Broadcom STB PCIe RC driver");
+ MODULE_AUTHOR("Broadcom");
++MODULE_SOFTDEP("pre: irq_bcm2712_mip");
+diff --git a/drivers/pci/controller/pcie-xilinx-cpm.c b/drivers/pci/controller/pcie-xilinx-cpm.c
+index dc8ecdbee56c89..163d805673d6dd 100644
+--- a/drivers/pci/controller/pcie-xilinx-cpm.c
++++ b/drivers/pci/controller/pcie-xilinx-cpm.c
+@@ -538,7 +538,8 @@ static int xilinx_cpm_pcie_parse_dt(struct xilinx_cpm_pcie *port,
+ if (IS_ERR(port->cfg))
+ return PTR_ERR(port->cfg);
+
+- if (port->variant->version == CPM5) {
++ if (port->variant->version == CPM5 ||
++ port->variant->version == CPM5_HOST1) {
+ port->reg_base = devm_platform_ioremap_resource_byname(pdev,
+ "cpm_csr");
+ if (IS_ERR(port->reg_base))
+diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
+index 94ceec50a2b94c..8df064b62a2ff3 100644
+--- a/drivers/pci/controller/vmd.c
++++ b/drivers/pci/controller/vmd.c
+@@ -17,6 +17,8 @@
+ #include <linux/rculist.h>
+ #include <linux/rcupdate.h>
+
++#include <xen/xen.h>
++
+ #include <asm/irqdomain.h>
+
+ #define VMD_CFGBAR 0
+@@ -970,6 +972,24 @@ static int vmd_probe(struct pci_dev *dev, const struct pci_device_id *id)
+ struct vmd_dev *vmd;
+ int err;
+
++ if (xen_domain()) {
++ /*
++ * Xen doesn't have knowledge about devices in the VMD bus
++ * because the config space of devices behind the VMD bridge is
++ * not known to Xen, and hence Xen cannot discover or configure
++ * them in any way.
++ *
++ * Bypass of MSI remapping won't work in that case as direct
++ * write by Linux to the MSI entries won't result in functional
++ * interrupts, as Xen is the entity that manages the host
++ * interrupt controller and must configure interrupts. However
++ * multiplexing of interrupts by the VMD bridge will work under
++ * Xen, so force the usage of that mode which must always be
++ * supported by VMD bridges.
++ */
++ features &= ~VMD_FEAT_CAN_BYPASS_MSI_REMAP;
++ }
++
+ if (resource_size(&dev->resource[VMD_CFGBAR]) < (1 << 20))
+ return -ENOMEM;
+
+diff --git a/drivers/pci/endpoint/functions/pci-epf-mhi.c b/drivers/pci/endpoint/functions/pci-epf-mhi.c
+index 54286a40bdfbf7..6643a88c7a0ce3 100644
+--- a/drivers/pci/endpoint/functions/pci-epf-mhi.c
++++ b/drivers/pci/endpoint/functions/pci-epf-mhi.c
+@@ -125,7 +125,7 @@ static const struct pci_epf_mhi_ep_info sm8450_info = {
+
+ static struct pci_epf_header sa8775p_header = {
+ .vendorid = PCI_VENDOR_ID_QCOM,
+- .deviceid = 0x0306, /* FIXME: Update deviceid for sa8775p EP */
++ .deviceid = 0x0116,
+ .baseclass_code = PCI_CLASS_OTHERS,
+ .interrupt_pin = PCI_INTERRUPT_INTA,
+ };
+diff --git a/drivers/pci/endpoint/functions/pci-epf-test.c b/drivers/pci/endpoint/functions/pci-epf-test.c
+index 2409787cf56d99..bce3ae2c0f652d 100644
+--- a/drivers/pci/endpoint/functions/pci-epf-test.c
++++ b/drivers/pci/endpoint/functions/pci-epf-test.c
+@@ -738,6 +738,7 @@ static int pci_epf_test_set_bar(struct pci_epf *epf)
+ if (ret) {
+ pci_epf_free_space(epf, epf_test->reg[bar], bar,
+ PRIMARY_INTERFACE);
++ epf_test->reg[bar] = NULL;
+ dev_err(dev, "Failed to set BAR%d\n", bar);
+ if (bar == test_reg_bar)
+ return ret;
+@@ -929,6 +930,7 @@ static void pci_epf_test_free_space(struct pci_epf *epf)
+
+ pci_epf_free_space(epf, epf_test->reg[bar], bar,
+ PRIMARY_INTERFACE);
++ epf_test->reg[bar] = NULL;
+ }
+ }
+
+diff --git a/drivers/pci/setup-bus.c b/drivers/pci/setup-bus.c
+index 8707c5b08cf341..477eb07bfbca91 100644
+--- a/drivers/pci/setup-bus.c
++++ b/drivers/pci/setup-bus.c
+@@ -814,11 +814,9 @@ static resource_size_t calculate_iosize(resource_size_t size,
+ size = (size & 0xff) + ((size & ~0xffUL) << 2);
+ #endif
+ size = size + size1;
+- if (size < old_size)
+- size = old_size;
+
+- size = ALIGN(max(size, add_size) + children_add_size, align);
+- return size;
++ size = max(size, add_size) + children_add_size;
++ return ALIGN(max(size, old_size), align);
+ }
+
+ static resource_size_t calculate_memsize(resource_size_t size,
+diff --git a/drivers/perf/arm_pmuv3.c b/drivers/perf/arm_pmuv3.c
+index 0e360feb3432e1..9ebc950559c0a2 100644
+--- a/drivers/perf/arm_pmuv3.c
++++ b/drivers/perf/arm_pmuv3.c
+@@ -825,10 +825,10 @@ static void armv8pmu_start(struct arm_pmu *cpu_pmu)
+ else
+ armv8pmu_disable_user_access();
+
++ kvm_vcpu_pmu_resync_el0();
++
+ /* Enable all counters */
+ armv8pmu_pmcr_write(armv8pmu_pmcr_read() | ARMV8_PMU_PMCR_E);
+-
+- kvm_vcpu_pmu_resync_el0();
+ }
+
+ static void armv8pmu_stop(struct arm_pmu *cpu_pmu)
+diff --git a/drivers/phy/phy-core.c b/drivers/phy/phy-core.c
+index 8dfdce605a905d..067316dfcd83a6 100644
+--- a/drivers/phy/phy-core.c
++++ b/drivers/phy/phy-core.c
+@@ -405,13 +405,14 @@ EXPORT_SYMBOL_GPL(phy_power_off);
+
+ int phy_set_mode_ext(struct phy *phy, enum phy_mode mode, int submode)
+ {
+- int ret;
++ int ret = 0;
+
+- if (!phy || !phy->ops->set_mode)
++ if (!phy)
+ return 0;
+
+ mutex_lock(&phy->mutex);
+- ret = phy->ops->set_mode(phy, mode, submode);
++ if (phy->ops->set_mode)
++ ret = phy->ops->set_mode(phy, mode, submode);
+ if (!ret)
+ phy->attrs.mode = mode;
+ mutex_unlock(&phy->mutex);
+diff --git a/drivers/phy/renesas/phy-rcar-gen3-usb2.c b/drivers/phy/renesas/phy-rcar-gen3-usb2.c
+index 946dc2f184e877..9fdf17e0848a28 100644
+--- a/drivers/phy/renesas/phy-rcar-gen3-usb2.c
++++ b/drivers/phy/renesas/phy-rcar-gen3-usb2.c
+@@ -9,6 +9,7 @@
+ * Copyright (C) 2014 Cogent Embedded, Inc.
+ */
+
++#include <linux/cleanup.h>
+ #include <linux/extcon-provider.h>
+ #include <linux/interrupt.h>
+ #include <linux/io.h>
+@@ -118,9 +119,8 @@ struct rcar_gen3_chan {
+ struct regulator *vbus;
+ struct reset_control *rstc;
+ struct work_struct work;
+- struct mutex lock; /* protects rphys[...].powered */
++ spinlock_t lock; /* protects access to hardware and driver data structure. */
+ enum usb_dr_mode dr_mode;
+- int irq;
+ u32 obint_enable_bits;
+ bool extcon_host;
+ bool is_otg_channel;
+@@ -349,6 +349,8 @@ static ssize_t role_store(struct device *dev, struct device_attribute *attr,
+ bool is_b_device;
+ enum phy_mode cur_mode, new_mode;
+
++ guard(spinlock_irqsave)(&ch->lock);
++
+ if (!ch->is_otg_channel || !rcar_gen3_is_any_otg_rphy_initialized(ch))
+ return -EIO;
+
+@@ -416,7 +418,7 @@ static void rcar_gen3_init_otg(struct rcar_gen3_chan *ch)
+ val = readl(usb2_base + USB2_ADPCTRL);
+ writel(val | USB2_ADPCTRL_IDPULLUP, usb2_base + USB2_ADPCTRL);
+ }
+- msleep(20);
++ mdelay(20);
+
+ writel(0xffffffff, usb2_base + USB2_OBINTSTA);
+ writel(ch->obint_enable_bits, usb2_base + USB2_OBINTEN);
+@@ -428,16 +430,27 @@ static irqreturn_t rcar_gen3_phy_usb2_irq(int irq, void *_ch)
+ {
+ struct rcar_gen3_chan *ch = _ch;
+ void __iomem *usb2_base = ch->base;
+- u32 status = readl(usb2_base + USB2_OBINTSTA);
++ struct device *dev = ch->dev;
+ irqreturn_t ret = IRQ_NONE;
++ u32 status;
++
++ pm_runtime_get_noresume(dev);
+
+- if (status & ch->obint_enable_bits) {
+- dev_vdbg(ch->dev, "%s: %08x\n", __func__, status);
+- writel(ch->obint_enable_bits, usb2_base + USB2_OBINTSTA);
+- rcar_gen3_device_recognition(ch);
+- ret = IRQ_HANDLED;
++ if (pm_runtime_suspended(dev))
++ goto rpm_put;
++
++ scoped_guard(spinlock, &ch->lock) {
++ status = readl(usb2_base + USB2_OBINTSTA);
++ if (status & ch->obint_enable_bits) {
++ dev_vdbg(dev, "%s: %08x\n", __func__, status);
++ writel(ch->obint_enable_bits, usb2_base + USB2_OBINTSTA);
++ rcar_gen3_device_recognition(ch);
++ ret = IRQ_HANDLED;
++ }
+ }
+
++rpm_put:
++ pm_runtime_put_noidle(dev);
+ return ret;
+ }
+
+@@ -447,17 +460,8 @@ static int rcar_gen3_phy_usb2_init(struct phy *p)
+ struct rcar_gen3_chan *channel = rphy->ch;
+ void __iomem *usb2_base = channel->base;
+ u32 val;
+- int ret;
+
+- if (!rcar_gen3_is_any_rphy_initialized(channel) && channel->irq >= 0) {
+- INIT_WORK(&channel->work, rcar_gen3_phy_usb2_work);
+- ret = request_irq(channel->irq, rcar_gen3_phy_usb2_irq,
+- IRQF_SHARED, dev_name(channel->dev), channel);
+- if (ret < 0) {
+- dev_err(channel->dev, "No irq handler (%d)\n", channel->irq);
+- return ret;
+- }
+- }
++ guard(spinlock_irqsave)(&channel->lock);
+
+ /* Initialize USB2 part */
+ val = readl(usb2_base + USB2_INT_ENABLE);
+@@ -485,6 +489,8 @@ static int rcar_gen3_phy_usb2_exit(struct phy *p)
+ void __iomem *usb2_base = channel->base;
+ u32 val;
+
++ guard(spinlock_irqsave)(&channel->lock);
++
+ rphy->initialized = false;
+
+ val = readl(usb2_base + USB2_INT_ENABLE);
+@@ -493,9 +499,6 @@ static int rcar_gen3_phy_usb2_exit(struct phy *p)
+ val &= ~USB2_INT_ENABLE_UCOM_INTEN;
+ writel(val, usb2_base + USB2_INT_ENABLE);
+
+- if (channel->irq >= 0 && !rcar_gen3_is_any_rphy_initialized(channel))
+- free_irq(channel->irq, channel);
+-
+ return 0;
+ }
+
+@@ -507,16 +510,17 @@ static int rcar_gen3_phy_usb2_power_on(struct phy *p)
+ u32 val;
+ int ret = 0;
+
+- mutex_lock(&channel->lock);
+- if (!rcar_gen3_are_all_rphys_power_off(channel))
+- goto out;
+-
+ if (channel->vbus) {
+ ret = regulator_enable(channel->vbus);
+ if (ret)
+- goto out;
++ return ret;
+ }
+
++ guard(spinlock_irqsave)(&channel->lock);
++
++ if (!rcar_gen3_are_all_rphys_power_off(channel))
++ goto out;
++
+ val = readl(usb2_base + USB2_USBCTR);
+ val |= USB2_USBCTR_PLL_RST;
+ writel(val, usb2_base + USB2_USBCTR);
+@@ -526,7 +530,6 @@ static int rcar_gen3_phy_usb2_power_on(struct phy *p)
+ out:
+ /* The powered flag should be set for any other phys anyway */
+ rphy->powered = true;
+- mutex_unlock(&channel->lock);
+
+ return 0;
+ }
+@@ -537,18 +540,20 @@ static int rcar_gen3_phy_usb2_power_off(struct phy *p)
+ struct rcar_gen3_chan *channel = rphy->ch;
+ int ret = 0;
+
+- mutex_lock(&channel->lock);
+- rphy->powered = false;
++ scoped_guard(spinlock_irqsave, &channel->lock) {
++ rphy->powered = false;
+
+- if (!rcar_gen3_are_all_rphys_power_off(channel))
+- goto out;
++ if (rcar_gen3_are_all_rphys_power_off(channel)) {
++ u32 val = readl(channel->base + USB2_USBCTR);
++
++ val |= USB2_USBCTR_PLL_RST;
++ writel(val, channel->base + USB2_USBCTR);
++ }
++ }
+
+ if (channel->vbus)
+ ret = regulator_disable(channel->vbus);
+
+-out:
+- mutex_unlock(&channel->lock);
+-
+ return ret;
+ }
+
+@@ -701,7 +706,7 @@ static int rcar_gen3_phy_usb2_probe(struct platform_device *pdev)
+ struct device *dev = &pdev->dev;
+ struct rcar_gen3_chan *channel;
+ struct phy_provider *provider;
+- int ret = 0, i;
++ int ret = 0, i, irq;
+
+ if (!dev->of_node) {
+ dev_err(dev, "This driver needs device tree\n");
+@@ -717,8 +722,6 @@ static int rcar_gen3_phy_usb2_probe(struct platform_device *pdev)
+ return PTR_ERR(channel->base);
+
+ channel->obint_enable_bits = USB2_OBINT_BITS;
+- /* get irq number here and request_irq for OTG in phy_init */
+- channel->irq = platform_get_irq_optional(pdev, 0);
+ channel->dr_mode = rcar_gen3_get_dr_mode(dev->of_node);
+ if (channel->dr_mode != USB_DR_MODE_UNKNOWN) {
+ channel->is_otg_channel = true;
+@@ -761,7 +764,7 @@ static int rcar_gen3_phy_usb2_probe(struct platform_device *pdev)
+ if (phy_data->no_adp_ctrl)
+ channel->obint_enable_bits = USB2_OBINT_IDCHG_EN;
+
+- mutex_init(&channel->lock);
++ spin_lock_init(&channel->lock);
+ for (i = 0; i < NUM_OF_PHYS; i++) {
+ channel->rphys[i].phy = devm_phy_create(dev, NULL,
+ phy_data->phy_usb2_ops);
+@@ -787,6 +790,20 @@ static int rcar_gen3_phy_usb2_probe(struct platform_device *pdev)
+ channel->vbus = NULL;
+ }
+
++ irq = platform_get_irq_optional(pdev, 0);
++ if (irq < 0 && irq != -ENXIO) {
++ ret = irq;
++ goto error;
++ } else if (irq > 0) {
++ INIT_WORK(&channel->work, rcar_gen3_phy_usb2_work);
++ ret = devm_request_irq(dev, irq, rcar_gen3_phy_usb2_irq,
++ IRQF_SHARED, dev_name(dev), channel);
++ if (ret < 0) {
++ dev_err(dev, "Failed to request irq (%d)\n", irq);
++ goto error;
++ }
++ }
++
+ provider = devm_of_phy_provider_register(dev, rcar_gen3_phy_usb2_xlate);
+ if (IS_ERR(provider)) {
+ dev_err(dev, "Failed to register PHY provider\n");
+diff --git a/drivers/phy/rockchip/phy-rockchip-samsung-hdptx.c b/drivers/phy/rockchip/phy-rockchip-samsung-hdptx.c
+index 2fb4f297fda3d6..920abf6fa9bdd8 100644
+--- a/drivers/phy/rockchip/phy-rockchip-samsung-hdptx.c
++++ b/drivers/phy/rockchip/phy-rockchip-samsung-hdptx.c
+@@ -94,8 +94,8 @@
+ #define LCPLL_ALONE_MODE BIT(1)
+ /* CMN_REG(0097) */
+ #define DIG_CLK_SEL BIT(1)
+-#define ROPLL_REF BIT(1)
+-#define LCPLL_REF 0
++#define LCPLL_REF BIT(1)
++#define ROPLL_REF 0
+ /* CMN_REG(0099) */
+ #define CMN_ROPLL_ALONE_MODE BIT(2)
+ #define ROPLL_ALONE_MODE BIT(2)
+diff --git a/drivers/phy/rockchip/phy-rockchip-usbdp.c b/drivers/phy/rockchip/phy-rockchip-usbdp.c
+index c04cf64f8a35db..fff04e0fbd800d 100644
+--- a/drivers/phy/rockchip/phy-rockchip-usbdp.c
++++ b/drivers/phy/rockchip/phy-rockchip-usbdp.c
+@@ -187,6 +187,8 @@ struct rk_udphy {
+ u32 dp_aux_din_sel;
+ bool dp_sink_hpd_sel;
+ bool dp_sink_hpd_cfg;
++ unsigned int link_rate;
++ unsigned int lanes;
+ u8 bw;
+ int id;
+
+@@ -1102,15 +1104,19 @@ static int rk_udphy_dp_phy_power_off(struct phy *phy)
+ return 0;
+ }
+
+-static int rk_udphy_dp_phy_verify_link_rate(unsigned int link_rate)
++/*
++ * Verify link rate
++ */
++static int rk_udphy_dp_phy_verify_link_rate(struct rk_udphy *udphy,
++ struct phy_configure_opts_dp *dp)
+ {
+- switch (link_rate) {
++ switch (dp->link_rate) {
+ case 1620:
+ case 2700:
+ case 5400:
+ case 8100:
++ udphy->link_rate = dp->link_rate;
+ break;
+-
+ default:
+ return -EINVAL;
+ }
+@@ -1118,45 +1124,44 @@ static int rk_udphy_dp_phy_verify_link_rate(unsigned int link_rate)
+ return 0;
+ }
+
+-static int rk_udphy_dp_phy_verify_config(struct rk_udphy *udphy,
+- struct phy_configure_opts_dp *dp)
++static int rk_udphy_dp_phy_verify_lanes(struct rk_udphy *udphy,
++ struct phy_configure_opts_dp *dp)
+ {
+- int i, ret;
+-
+- /* If changing link rate was required, verify it's supported. */
+- ret = rk_udphy_dp_phy_verify_link_rate(dp->link_rate);
+- if (ret)
+- return ret;
+-
+- /* Verify lane count. */
+ switch (dp->lanes) {
+ case 1:
+ case 2:
+ case 4:
+ /* valid lane count. */
++ udphy->lanes = dp->lanes;
+ break;
+
+ default:
+ return -EINVAL;
+ }
+
+- /*
+- * If changing voltages is required, check swing and pre-emphasis
+- * levels, per-lane.
+- */
+- if (dp->set_voltages) {
+- /* Lane count verified previously. */
+- for (i = 0; i < dp->lanes; i++) {
+- if (dp->voltage[i] > 3 || dp->pre[i] > 3)
+- return -EINVAL;
++ return 0;
++}
+
+- /*
+- * Sum of voltage swing and pre-emphasis levels cannot
+- * exceed 3.
+- */
+- if (dp->voltage[i] + dp->pre[i] > 3)
+- return -EINVAL;
+- }
++/*
++ * If changing voltages is required, check swing and pre-emphasis
++ * levels, per-lane.
++ */
++static int rk_udphy_dp_phy_verify_voltages(struct rk_udphy *udphy,
++ struct phy_configure_opts_dp *dp)
++{
++ int i;
++
++ /* Lane count verified previously. */
++ for (i = 0; i < udphy->lanes; i++) {
++ if (dp->voltage[i] > 3 || dp->pre[i] > 3)
++ return -EINVAL;
++
++ /*
++ * Sum of voltage swing and pre-emphasis levels cannot
++ * exceed 3.
++ */
++ if (dp->voltage[i] + dp->pre[i] > 3)
++ return -EINVAL;
+ }
+
+ return 0;
+@@ -1196,9 +1201,23 @@ static int rk_udphy_dp_phy_configure(struct phy *phy,
+ u32 i, val, lane;
+ int ret;
+
+- ret = rk_udphy_dp_phy_verify_config(udphy, dp);
+- if (ret)
+- return ret;
++ if (dp->set_rate) {
++ ret = rk_udphy_dp_phy_verify_link_rate(udphy, dp);
++ if (ret)
++ return ret;
++ }
++
++ if (dp->set_lanes) {
++ ret = rk_udphy_dp_phy_verify_lanes(udphy, dp);
++ if (ret)
++ return ret;
++ }
++
++ if (dp->set_voltages) {
++ ret = rk_udphy_dp_phy_verify_voltages(udphy, dp);
++ if (ret)
++ return ret;
++ }
+
+ if (dp->set_rate) {
+ regmap_update_bits(udphy->pma_regmap, CMN_DP_RSTN_OFFSET,
+@@ -1243,9 +1262,9 @@ static int rk_udphy_dp_phy_configure(struct phy *phy,
+ }
+
+ if (dp->set_voltages) {
+- for (i = 0; i < dp->lanes; i++) {
++ for (i = 0; i < udphy->lanes; i++) {
+ lane = udphy->dp_lane_sel[i];
+- switch (dp->link_rate) {
++ switch (udphy->link_rate) {
+ case 1620:
+ case 2700:
+ regmap_update_bits(udphy->pma_regmap,
+diff --git a/drivers/phy/samsung/phy-exynos5-usbdrd.c b/drivers/phy/samsung/phy-exynos5-usbdrd.c
+index 46b8f6987c62c3..28d02ae60cc140 100644
+--- a/drivers/phy/samsung/phy-exynos5-usbdrd.c
++++ b/drivers/phy/samsung/phy-exynos5-usbdrd.c
+@@ -1513,8 +1513,11 @@ static const struct exynos5_usbdrd_phy_tuning gs101_tunes_pipe3_preinit[] = {
+ PHY_TUNING_ENTRY_PMA(0x09e0, -1, 0x00),
+ PHY_TUNING_ENTRY_PMA(0x09e4, -1, 0x36),
+ PHY_TUNING_ENTRY_PMA(0x1e7c, -1, 0x06),
+- PHY_TUNING_ENTRY_PMA(0x1e90, -1, 0x00),
+- PHY_TUNING_ENTRY_PMA(0x1e94, -1, 0x36),
++ PHY_TUNING_ENTRY_PMA(0x19e0, -1, 0x00),
++ PHY_TUNING_ENTRY_PMA(0x19e4, -1, 0x36),
++ /* fix bootloader bug */
++ PHY_TUNING_ENTRY_PMA(0x1e90, -1, 0x02),
++ PHY_TUNING_ENTRY_PMA(0x1e94, -1, 0x0b),
+ /* improve LVCC */
+ PHY_TUNING_ENTRY_PMA(0x08f0, -1, 0x30),
+ PHY_TUNING_ENTRY_PMA(0x18f0, -1, 0x30),
+diff --git a/drivers/pinctrl/bcm/pinctrl-bcm281xx.c b/drivers/pinctrl/bcm/pinctrl-bcm281xx.c
+index cf6efa9c0364a1..a039b490cdb8e6 100644
+--- a/drivers/pinctrl/bcm/pinctrl-bcm281xx.c
++++ b/drivers/pinctrl/bcm/pinctrl-bcm281xx.c
+@@ -72,7 +72,7 @@ static enum bcm281xx_pin_type hdmi_pin = BCM281XX_PIN_TYPE_HDMI;
+ struct bcm281xx_pin_function {
+ const char *name;
+ const char * const *groups;
+- const unsigned ngroups;
++ const unsigned int ngroups;
+ };
+
+ /*
+@@ -84,10 +84,10 @@ struct bcm281xx_pinctrl_data {
+
+ /* List of all pins */
+ const struct pinctrl_pin_desc *pins;
+- const unsigned npins;
++ const unsigned int npins;
+
+ const struct bcm281xx_pin_function *functions;
+- const unsigned nfunctions;
++ const unsigned int nfunctions;
+
+ struct regmap *regmap;
+ };
+@@ -941,7 +941,7 @@ static struct bcm281xx_pinctrl_data bcm281xx_pinctrl = {
+ };
+
+ static inline enum bcm281xx_pin_type pin_type_get(struct pinctrl_dev *pctldev,
+- unsigned pin)
++ unsigned int pin)
+ {
+ struct bcm281xx_pinctrl_data *pdata = pinctrl_dev_get_drvdata(pctldev);
+
+@@ -985,7 +985,7 @@ static int bcm281xx_pinctrl_get_groups_count(struct pinctrl_dev *pctldev)
+ }
+
+ static const char *bcm281xx_pinctrl_get_group_name(struct pinctrl_dev *pctldev,
+- unsigned group)
++ unsigned int group)
+ {
+ struct bcm281xx_pinctrl_data *pdata = pinctrl_dev_get_drvdata(pctldev);
+
+@@ -993,9 +993,9 @@ static const char *bcm281xx_pinctrl_get_group_name(struct pinctrl_dev *pctldev,
+ }
+
+ static int bcm281xx_pinctrl_get_group_pins(struct pinctrl_dev *pctldev,
+- unsigned group,
++ unsigned int group,
+ const unsigned **pins,
+- unsigned *num_pins)
++ unsigned int *num_pins)
+ {
+ struct bcm281xx_pinctrl_data *pdata = pinctrl_dev_get_drvdata(pctldev);
+
+@@ -1007,7 +1007,7 @@ static int bcm281xx_pinctrl_get_group_pins(struct pinctrl_dev *pctldev,
+
+ static void bcm281xx_pinctrl_pin_dbg_show(struct pinctrl_dev *pctldev,
+ struct seq_file *s,
+- unsigned offset)
++ unsigned int offset)
+ {
+ seq_printf(s, " %s", dev_name(pctldev->dev));
+ }
+@@ -1029,7 +1029,7 @@ static int bcm281xx_pinctrl_get_fcns_count(struct pinctrl_dev *pctldev)
+ }
+
+ static const char *bcm281xx_pinctrl_get_fcn_name(struct pinctrl_dev *pctldev,
+- unsigned function)
++ unsigned int function)
+ {
+ struct bcm281xx_pinctrl_data *pdata = pinctrl_dev_get_drvdata(pctldev);
+
+@@ -1037,9 +1037,9 @@ static const char *bcm281xx_pinctrl_get_fcn_name(struct pinctrl_dev *pctldev,
+ }
+
+ static int bcm281xx_pinctrl_get_fcn_groups(struct pinctrl_dev *pctldev,
+- unsigned function,
++ unsigned int function,
+ const char * const **groups,
+- unsigned * const num_groups)
++ unsigned int * const num_groups)
+ {
+ struct bcm281xx_pinctrl_data *pdata = pinctrl_dev_get_drvdata(pctldev);
+
+@@ -1050,8 +1050,8 @@ static int bcm281xx_pinctrl_get_fcn_groups(struct pinctrl_dev *pctldev,
+ }
+
+ static int bcm281xx_pinmux_set(struct pinctrl_dev *pctldev,
+- unsigned function,
+- unsigned group)
++ unsigned int function,
++ unsigned int group)
+ {
+ struct bcm281xx_pinctrl_data *pdata = pinctrl_dev_get_drvdata(pctldev);
+ const struct bcm281xx_pin_function *f = &pdata->functions[function];
+@@ -1082,7 +1082,7 @@ static const struct pinmux_ops bcm281xx_pinctrl_pinmux_ops = {
+ };
+
+ static int bcm281xx_pinctrl_pin_config_get(struct pinctrl_dev *pctldev,
+- unsigned pin,
++ unsigned int pin,
+ unsigned long *config)
+ {
+ return -ENOTSUPP;
+@@ -1091,9 +1091,9 @@ static int bcm281xx_pinctrl_pin_config_get(struct pinctrl_dev *pctldev,
+
+ /* Goes through the configs and update register val/mask */
+ static int bcm281xx_std_pin_update(struct pinctrl_dev *pctldev,
+- unsigned pin,
++ unsigned int pin,
+ unsigned long *configs,
+- unsigned num_configs,
++ unsigned int num_configs,
+ u32 *val,
+ u32 *mask)
+ {
+@@ -1207,9 +1207,9 @@ static const u16 bcm281xx_pullup_map[] = {
+
+ /* Goes through the configs and update register val/mask */
+ static int bcm281xx_i2c_pin_update(struct pinctrl_dev *pctldev,
+- unsigned pin,
++ unsigned int pin,
+ unsigned long *configs,
+- unsigned num_configs,
++ unsigned int num_configs,
+ u32 *val,
+ u32 *mask)
+ {
+@@ -1277,9 +1277,9 @@ static int bcm281xx_i2c_pin_update(struct pinctrl_dev *pctldev,
+
+ /* Goes through the configs and update register val/mask */
+ static int bcm281xx_hdmi_pin_update(struct pinctrl_dev *pctldev,
+- unsigned pin,
++ unsigned int pin,
+ unsigned long *configs,
+- unsigned num_configs,
++ unsigned int num_configs,
+ u32 *val,
+ u32 *mask)
+ {
+@@ -1321,9 +1321,9 @@ static int bcm281xx_hdmi_pin_update(struct pinctrl_dev *pctldev,
+ }
+
+ static int bcm281xx_pinctrl_pin_config_set(struct pinctrl_dev *pctldev,
+- unsigned pin,
++ unsigned int pin,
+ unsigned long *configs,
+- unsigned num_configs)
++ unsigned int num_configs)
+ {
+ struct bcm281xx_pinctrl_data *pdata = pinctrl_dev_get_drvdata(pctldev);
+ enum bcm281xx_pin_type pin_type;
+diff --git a/drivers/pinctrl/devicetree.c b/drivers/pinctrl/devicetree.c
+index 6a94ecd6a8deae..0b7f74beb6a6a8 100644
+--- a/drivers/pinctrl/devicetree.c
++++ b/drivers/pinctrl/devicetree.c
+@@ -143,10 +143,14 @@ static int dt_to_map_one_config(struct pinctrl *p,
+ pctldev = get_pinctrl_dev_from_of_node(np_pctldev);
+ if (pctldev)
+ break;
+- /* Do not defer probing of hogs (circular loop) */
++ /*
++ * Do not defer probing of hogs (circular loop)
++ *
++ * Return 1 to let the caller catch the case.
++ */
+ if (np_pctldev == p->dev->of_node) {
+ of_node_put(np_pctldev);
+- return -ENODEV;
++ return 1;
+ }
+ }
+ of_node_put(np_pctldev);
+@@ -265,6 +269,8 @@ int pinctrl_dt_to_map(struct pinctrl *p, struct pinctrl_dev *pctldev)
+ ret = dt_to_map_one_config(p, pctldev, statename,
+ np_config);
+ of_node_put(np_config);
++ if (ret == 1)
++ continue;
+ if (ret < 0)
+ goto err;
+ }
+diff --git a/drivers/pinctrl/meson/pinctrl-meson.c b/drivers/pinctrl/meson/pinctrl-meson.c
+index 253a0cc57e396d..e5a32a0532eeec 100644
+--- a/drivers/pinctrl/meson/pinctrl-meson.c
++++ b/drivers/pinctrl/meson/pinctrl-meson.c
+@@ -487,7 +487,7 @@ static int meson_pinconf_get(struct pinctrl_dev *pcdev, unsigned int pin,
+ case PIN_CONFIG_BIAS_PULL_DOWN:
+ case PIN_CONFIG_BIAS_PULL_UP:
+ if (meson_pinconf_get_pull(pc, pin) == param)
+- arg = 1;
++ arg = 60000;
+ else
+ return -EINVAL;
+ break;
+diff --git a/drivers/pinctrl/qcom/Kconfig.msm b/drivers/pinctrl/qcom/Kconfig.msm
+index 35f47660a56b1a..a0d63a67253934 100644
+--- a/drivers/pinctrl/qcom/Kconfig.msm
++++ b/drivers/pinctrl/qcom/Kconfig.msm
+@@ -138,10 +138,10 @@ config PINCTRL_MSM8916
+ Qualcomm TLMM block found on the Qualcomm 8916 platform.
+
+ config PINCTRL_MSM8917
+- tristate "Qualcomm 8917 pin controller driver"
++ tristate "Qualcomm 8917/8937 pin controller driver"
+ help
+ This is the pinctrl, pinmux, pinconf and gpiolib driver for the
+- Qualcomm TLMM block found on the Qualcomm MSM8917 platform.
++ Qualcomm TLMM block found on the Qualcomm MSM8917, MSM8937 platform.
+
+ config PINCTRL_MSM8953
+ tristate "Qualcomm 8953 pin controller driver"
+diff --git a/drivers/pinctrl/qcom/pinctrl-msm.c b/drivers/pinctrl/qcom/pinctrl-msm.c
+index 82f0cc43bbf4f4..0eb816395dc64d 100644
+--- a/drivers/pinctrl/qcom/pinctrl-msm.c
++++ b/drivers/pinctrl/qcom/pinctrl-msm.c
+@@ -44,7 +44,6 @@
+ * @pctrl: pinctrl handle.
+ * @chip: gpiochip handle.
+ * @desc: pin controller descriptor
+- * @restart_nb: restart notifier block.
+ * @irq: parent irq for the TLMM irq_chip.
+ * @intr_target_use_scm: route irq to application cpu using scm calls
+ * @lock: Spinlock to protect register resources as well
+@@ -64,7 +63,6 @@ struct msm_pinctrl {
+ struct pinctrl_dev *pctrl;
+ struct gpio_chip chip;
+ struct pinctrl_desc desc;
+- struct notifier_block restart_nb;
+
+ int irq;
+
+@@ -1471,10 +1469,9 @@ static int msm_gpio_init(struct msm_pinctrl *pctrl)
+ return 0;
+ }
+
+-static int msm_ps_hold_restart(struct notifier_block *nb, unsigned long action,
+- void *data)
++static int msm_ps_hold_restart(struct sys_off_data *data)
+ {
+- struct msm_pinctrl *pctrl = container_of(nb, struct msm_pinctrl, restart_nb);
++ struct msm_pinctrl *pctrl = data->cb_data;
+
+ writel(0, pctrl->regs[0] + PS_HOLD_OFFSET);
+ mdelay(1000);
+@@ -1485,7 +1482,11 @@ static struct msm_pinctrl *poweroff_pctrl;
+
+ static void msm_ps_hold_poweroff(void)
+ {
+- msm_ps_hold_restart(&poweroff_pctrl->restart_nb, 0, NULL);
++ struct sys_off_data data = {
++ .cb_data = poweroff_pctrl,
++ };
++
++ msm_ps_hold_restart(&data);
+ }
+
+ static void msm_pinctrl_setup_pm_reset(struct msm_pinctrl *pctrl)
+@@ -1495,9 +1496,11 @@ static void msm_pinctrl_setup_pm_reset(struct msm_pinctrl *pctrl)
+
+ for (i = 0; i < pctrl->soc->nfunctions; i++)
+ if (!strcmp(func[i].name, "ps_hold")) {
+- pctrl->restart_nb.notifier_call = msm_ps_hold_restart;
+- pctrl->restart_nb.priority = 128;
+- if (register_restart_handler(&pctrl->restart_nb))
++ if (devm_register_sys_off_handler(pctrl->dev,
++ SYS_OFF_MODE_RESTART,
++ 128,
++ msm_ps_hold_restart,
++ pctrl))
+ dev_err(pctrl->dev,
+ "failed to setup restart handler.\n");
+ poweroff_pctrl = pctrl;
+@@ -1599,8 +1602,6 @@ void msm_pinctrl_remove(struct platform_device *pdev)
+ struct msm_pinctrl *pctrl = platform_get_drvdata(pdev);
+
+ gpiochip_remove(&pctrl->chip);
+-
+- unregister_restart_handler(&pctrl->restart_nb);
+ }
+ EXPORT_SYMBOL(msm_pinctrl_remove);
+
+diff --git a/drivers/pinctrl/qcom/pinctrl-msm8917.c b/drivers/pinctrl/qcom/pinctrl-msm8917.c
+index cff137bb3b23fb..350636807b07d9 100644
+--- a/drivers/pinctrl/qcom/pinctrl-msm8917.c
++++ b/drivers/pinctrl/qcom/pinctrl-msm8917.c
+@@ -539,6 +539,7 @@ enum msm8917_functions {
+ msm_mux_webcam_standby,
+ msm_mux_wsa_io,
+ msm_mux_wsa_irq,
++ msm_mux_wsa_reset,
+ msm_mux__,
+ };
+
+@@ -1123,6 +1124,10 @@ static const char * const wsa_io_groups[] = {
+ "gpio94", "gpio95",
+ };
+
++static const char * const wsa_reset_groups[] = {
++ "gpio96",
++};
++
+ static const char * const blsp_spi8_groups[] = {
+ "gpio96", "gpio97", "gpio98", "gpio99",
+ };
+@@ -1378,6 +1383,7 @@ static const struct pinfunction msm8917_functions[] = {
+ MSM_PIN_FUNCTION(webcam_standby),
+ MSM_PIN_FUNCTION(wsa_io),
+ MSM_PIN_FUNCTION(wsa_irq),
++ MSM_PIN_FUNCTION(wsa_reset),
+ };
+
+ static const struct msm_pingroup msm8917_groups[] = {
+@@ -1616,5 +1622,5 @@ static void __exit msm8917_pinctrl_exit(void)
+ }
+ module_exit(msm8917_pinctrl_exit);
+
+-MODULE_DESCRIPTION("Qualcomm msm8917 pinctrl driver");
++MODULE_DESCRIPTION("Qualcomm msm8917/msm8937 pinctrl driver");
+ MODULE_LICENSE("GPL");
+diff --git a/drivers/pinctrl/renesas/pinctrl-rzg2l.c b/drivers/pinctrl/renesas/pinctrl-rzg2l.c
+index d1da7f53fc6008..c72e250f4a1544 100644
+--- a/drivers/pinctrl/renesas/pinctrl-rzg2l.c
++++ b/drivers/pinctrl/renesas/pinctrl-rzg2l.c
+@@ -318,6 +318,7 @@ struct rzg2l_pinctrl_pin_settings {
+ * @pmc: PMC registers cache
+ * @pfc: PFC registers cache
+ * @iolh: IOLH registers cache
++ * @pupd: PUPD registers cache
+ * @ien: IEN registers cache
+ * @sd_ch: SD_CH registers cache
+ * @eth_poc: ET_POC registers cache
+@@ -331,6 +332,7 @@ struct rzg2l_pinctrl_reg_cache {
+ u32 *pfc;
+ u32 *iolh[2];
+ u32 *ien[2];
++ u32 *pupd[2];
+ u8 sd_ch[2];
+ u8 eth_poc[2];
+ u8 eth_mode;
+@@ -2712,6 +2714,11 @@ static int rzg2l_pinctrl_reg_cache_alloc(struct rzg2l_pinctrl *pctrl)
+ if (!cache->ien[i])
+ return -ENOMEM;
+
++ cache->pupd[i] = devm_kcalloc(pctrl->dev, nports, sizeof(*cache->pupd[i]),
++ GFP_KERNEL);
++ if (!cache->pupd[i])
++ return -ENOMEM;
++
+ /* Allocate dedicated cache. */
+ dedicated_cache->iolh[i] = devm_kcalloc(pctrl->dev, n_dedicated_pins,
+ sizeof(*dedicated_cache->iolh[i]),
+@@ -2955,7 +2962,7 @@ static void rzg2l_pinctrl_pm_setup_regs(struct rzg2l_pinctrl *pctrl, bool suspen
+ struct rzg2l_pinctrl_reg_cache *cache = pctrl->cache;
+
+ for (u32 port = 0; port < nports; port++) {
+- bool has_iolh, has_ien;
++ bool has_iolh, has_ien, has_pupd;
+ u32 off, caps;
+ u8 pincnt;
+ u64 cfg;
+@@ -2967,6 +2974,7 @@ static void rzg2l_pinctrl_pm_setup_regs(struct rzg2l_pinctrl *pctrl, bool suspen
+ caps = FIELD_GET(PIN_CFG_MASK, cfg);
+ has_iolh = !!(caps & (PIN_CFG_IOLH_A | PIN_CFG_IOLH_B | PIN_CFG_IOLH_C));
+ has_ien = !!(caps & PIN_CFG_IEN);
++ has_pupd = !!(caps & PIN_CFG_PUPD);
+
+ if (suspend)
+ RZG2L_PCTRL_REG_ACCESS32(suspend, pctrl->base + PFC(off), cache->pfc[port]);
+@@ -2985,6 +2993,15 @@ static void rzg2l_pinctrl_pm_setup_regs(struct rzg2l_pinctrl *pctrl, bool suspen
+ }
+ }
+
++ if (has_pupd) {
++ RZG2L_PCTRL_REG_ACCESS32(suspend, pctrl->base + PUPD(off),
++ cache->pupd[0][port]);
++ if (pincnt >= 4) {
++ RZG2L_PCTRL_REG_ACCESS32(suspend, pctrl->base + PUPD(off),
++ cache->pupd[1][port]);
++ }
++ }
++
+ RZG2L_PCTRL_REG_ACCESS16(suspend, pctrl->base + PM(off), cache->pm[port]);
+ RZG2L_PCTRL_REG_ACCESS8(suspend, pctrl->base + P(off), cache->p[port]);
+
+diff --git a/drivers/pinctrl/sophgo/pinctrl-cv18xx.c b/drivers/pinctrl/sophgo/pinctrl-cv18xx.c
+index 57f2674e75d688..84b4850771ce2a 100644
+--- a/drivers/pinctrl/sophgo/pinctrl-cv18xx.c
++++ b/drivers/pinctrl/sophgo/pinctrl-cv18xx.c
+@@ -574,10 +574,10 @@ static int cv1800_pinconf_compute_config(struct cv1800_pinctrl *pctrl,
+ struct cv1800_pin *pin,
+ unsigned long *configs,
+ unsigned int num_configs,
+- u32 *value)
++ u32 *value, u32 *mask)
+ {
+ int i;
+- u32 v = 0;
++ u32 v = 0, m = 0;
+ enum cv1800_pin_io_type type;
+ int ret;
+
+@@ -596,10 +596,12 @@ static int cv1800_pinconf_compute_config(struct cv1800_pinctrl *pctrl,
+ case PIN_CONFIG_BIAS_PULL_DOWN:
+ v &= ~PIN_IO_PULLDOWN;
+ v |= FIELD_PREP(PIN_IO_PULLDOWN, arg);
++ m |= PIN_IO_PULLDOWN;
+ break;
+ case PIN_CONFIG_BIAS_PULL_UP:
+ v &= ~PIN_IO_PULLUP;
+ v |= FIELD_PREP(PIN_IO_PULLUP, arg);
++ m |= PIN_IO_PULLUP;
+ break;
+ case PIN_CONFIG_DRIVE_STRENGTH_UA:
+ ret = cv1800_pinctrl_oc2reg(pctrl, pin, arg);
+@@ -607,6 +609,7 @@ static int cv1800_pinconf_compute_config(struct cv1800_pinctrl *pctrl,
+ return ret;
+ v &= ~PIN_IO_DRIVE;
+ v |= FIELD_PREP(PIN_IO_DRIVE, ret);
++ m |= PIN_IO_DRIVE;
+ break;
+ case PIN_CONFIG_INPUT_SCHMITT_UV:
+ ret = cv1800_pinctrl_schmitt2reg(pctrl, pin, arg);
+@@ -614,6 +617,7 @@ static int cv1800_pinconf_compute_config(struct cv1800_pinctrl *pctrl,
+ return ret;
+ v &= ~PIN_IO_SCHMITT;
+ v |= FIELD_PREP(PIN_IO_SCHMITT, ret);
++ m |= PIN_IO_SCHMITT;
+ break;
+ case PIN_CONFIG_POWER_SOURCE:
+ /* Ignore power source as it is always fixed */
+@@ -621,10 +625,12 @@ static int cv1800_pinconf_compute_config(struct cv1800_pinctrl *pctrl,
+ case PIN_CONFIG_SLEW_RATE:
+ v &= ~PIN_IO_OUT_FAST_SLEW;
+ v |= FIELD_PREP(PIN_IO_OUT_FAST_SLEW, arg);
++ m |= PIN_IO_OUT_FAST_SLEW;
+ break;
+ case PIN_CONFIG_BIAS_BUS_HOLD:
+ v &= ~PIN_IO_BUS_HOLD;
+ v |= FIELD_PREP(PIN_IO_BUS_HOLD, arg);
++ m |= PIN_IO_BUS_HOLD;
+ break;
+ default:
+ return -ENOTSUPP;
+@@ -632,17 +638,19 @@ static int cv1800_pinconf_compute_config(struct cv1800_pinctrl *pctrl,
+ }
+
+ *value = v;
++ *mask = m;
+
+ return 0;
+ }
+
+ static int cv1800_pin_set_config(struct cv1800_pinctrl *pctrl,
+ unsigned int pin_id,
+- u32 value)
++ u32 value, u32 mask)
+ {
+ struct cv1800_pin *pin = cv1800_get_pin(pctrl, pin_id);
+ unsigned long flags;
+ void __iomem *addr;
++ u32 reg;
+
+ if (!pin)
+ return -EINVAL;
+@@ -650,7 +658,10 @@ static int cv1800_pin_set_config(struct cv1800_pinctrl *pctrl,
+ addr = cv1800_pinctrl_get_component_addr(pctrl, &pin->conf);
+
+ raw_spin_lock_irqsave(&pctrl->lock, flags);
+- writel(value, addr);
++ reg = readl(addr);
++ reg &= ~mask;
++ reg |= value;
++ writel(reg, addr);
+ raw_spin_unlock_irqrestore(&pctrl->lock, flags);
+
+ return 0;
+@@ -662,16 +673,17 @@ static int cv1800_pconf_set(struct pinctrl_dev *pctldev,
+ {
+ struct cv1800_pinctrl *pctrl = pinctrl_dev_get_drvdata(pctldev);
+ struct cv1800_pin *pin = cv1800_get_pin(pctrl, pin_id);
+- u32 value;
++ u32 value, mask;
+
+ if (!pin)
+ return -ENODEV;
+
+ if (cv1800_pinconf_compute_config(pctrl, pin,
+- configs, num_configs, &value))
++ configs, num_configs,
++ &value, &mask))
+ return -ENOTSUPP;
+
+- return cv1800_pin_set_config(pctrl, pin_id, value);
++ return cv1800_pin_set_config(pctrl, pin_id, value, mask);
+ }
+
+ static int cv1800_pconf_group_set(struct pinctrl_dev *pctldev,
+@@ -682,7 +694,7 @@ static int cv1800_pconf_group_set(struct pinctrl_dev *pctldev,
+ struct cv1800_pinctrl *pctrl = pinctrl_dev_get_drvdata(pctldev);
+ const struct group_desc *group;
+ const struct cv1800_pin_mux_config *pinmuxs;
+- u32 value;
++ u32 value, mask;
+ int i;
+
+ group = pinctrl_generic_get_group(pctldev, gsel);
+@@ -692,11 +704,12 @@ static int cv1800_pconf_group_set(struct pinctrl_dev *pctldev,
+ pinmuxs = group->data;
+
+ if (cv1800_pinconf_compute_config(pctrl, pinmuxs[0].pin,
+- configs, num_configs, &value))
++ configs, num_configs,
++ &value, &mask))
+ return -ENOTSUPP;
+
+ for (i = 0; i < group->grp.npins; i++)
+- cv1800_pin_set_config(pctrl, group->grp.pins[i], value);
++ cv1800_pin_set_config(pctrl, group->grp.pins[i], value, mask);
+
+ return 0;
+ }
+diff --git a/drivers/pinctrl/tegra/pinctrl-tegra.c b/drivers/pinctrl/tegra/pinctrl-tegra.c
+index 3b046450bd3ff8..edcc78ebce4569 100644
+--- a/drivers/pinctrl/tegra/pinctrl-tegra.c
++++ b/drivers/pinctrl/tegra/pinctrl-tegra.c
+@@ -278,8 +278,8 @@ static int tegra_pinctrl_set_mux(struct pinctrl_dev *pctldev,
+ return 0;
+ }
+
+-static const struct tegra_pingroup *tegra_pinctrl_get_group(struct pinctrl_dev *pctldev,
+- unsigned int offset)
++static int tegra_pinctrl_get_group_index(struct pinctrl_dev *pctldev,
++ unsigned int offset)
+ {
+ struct tegra_pmx *pmx = pinctrl_dev_get_drvdata(pctldev);
+ unsigned int group, num_pins, j;
+@@ -292,12 +292,35 @@ static const struct tegra_pingroup *tegra_pinctrl_get_group(struct pinctrl_dev *
+ continue;
+ for (j = 0; j < num_pins; j++) {
+ if (offset == pins[j])
+- return &pmx->soc->groups[group];
++ return group;
+ }
+ }
+
+- dev_err(pctldev->dev, "Pingroup not found for pin %u\n", offset);
+- return NULL;
++ return -EINVAL;
++}
++
++static const struct tegra_pingroup *tegra_pinctrl_get_group(struct pinctrl_dev *pctldev,
++ unsigned int offset,
++ int group_index)
++{
++ struct tegra_pmx *pmx = pinctrl_dev_get_drvdata(pctldev);
++
++ if (group_index < 0 || group_index >= pmx->soc->ngroups)
++ return NULL;
++
++ return &pmx->soc->groups[group_index];
++}
++
++static struct tegra_pingroup_config *tegra_pinctrl_get_group_config(struct pinctrl_dev *pctldev,
++ unsigned int offset,
++ int group_index)
++{
++ struct tegra_pmx *pmx = pinctrl_dev_get_drvdata(pctldev);
++
++ if (group_index < 0)
++ return NULL;
++
++ return &pmx->pingroup_configs[group_index];
+ }
+
+ static int tegra_pinctrl_gpio_request_enable(struct pinctrl_dev *pctldev,
+@@ -306,12 +329,15 @@ static int tegra_pinctrl_gpio_request_enable(struct pinctrl_dev *pctldev,
+ {
+ struct tegra_pmx *pmx = pinctrl_dev_get_drvdata(pctldev);
+ const struct tegra_pingroup *group;
++ struct tegra_pingroup_config *config;
++ int group_index;
+ u32 value;
+
+ if (!pmx->soc->sfsel_in_mux)
+ return 0;
+
+- group = tegra_pinctrl_get_group(pctldev, offset);
++ group_index = tegra_pinctrl_get_group_index(pctldev, offset);
++ group = tegra_pinctrl_get_group(pctldev, offset, group_index);
+
+ if (!group)
+ return -EINVAL;
+@@ -319,7 +345,11 @@ static int tegra_pinctrl_gpio_request_enable(struct pinctrl_dev *pctldev,
+ if (group->mux_reg < 0 || group->sfsel_bit < 0)
+ return -EINVAL;
+
++ config = tegra_pinctrl_get_group_config(pctldev, offset, group_index);
++ if (!config)
++ return -EINVAL;
+ value = pmx_readl(pmx, group->mux_bank, group->mux_reg);
++ config->is_sfsel = (value & BIT(group->sfsel_bit)) != 0;
+ value &= ~BIT(group->sfsel_bit);
+ pmx_writel(pmx, value, group->mux_bank, group->mux_reg);
+
+@@ -332,12 +362,15 @@ static void tegra_pinctrl_gpio_disable_free(struct pinctrl_dev *pctldev,
+ {
+ struct tegra_pmx *pmx = pinctrl_dev_get_drvdata(pctldev);
+ const struct tegra_pingroup *group;
++ struct tegra_pingroup_config *config;
++ int group_index;
+ u32 value;
+
+ if (!pmx->soc->sfsel_in_mux)
+ return;
+
+- group = tegra_pinctrl_get_group(pctldev, offset);
++ group_index = tegra_pinctrl_get_group_index(pctldev, offset);
++ group = tegra_pinctrl_get_group(pctldev, offset, group_index);
+
+ if (!group)
+ return;
+@@ -345,8 +378,12 @@ static void tegra_pinctrl_gpio_disable_free(struct pinctrl_dev *pctldev,
+ if (group->mux_reg < 0 || group->sfsel_bit < 0)
+ return;
+
++ config = tegra_pinctrl_get_group_config(pctldev, offset, group_index);
++ if (!config)
++ return;
+ value = pmx_readl(pmx, group->mux_bank, group->mux_reg);
+- value |= BIT(group->sfsel_bit);
++ if (config->is_sfsel)
++ value |= BIT(group->sfsel_bit);
+ pmx_writel(pmx, value, group->mux_bank, group->mux_reg);
+ }
+
+@@ -791,6 +828,12 @@ int tegra_pinctrl_probe(struct platform_device *pdev,
+ pmx->dev = &pdev->dev;
+ pmx->soc = soc_data;
+
++ pmx->pingroup_configs = devm_kcalloc(&pdev->dev,
++ pmx->soc->ngroups, sizeof(*pmx->pingroup_configs),
++ GFP_KERNEL);
++ if (!pmx->pingroup_configs)
++ return -ENOMEM;
++
+ /*
+ * Each mux group will appear in 4 functions' list of groups.
+ * This over-allocates slightly, since not all groups are mux groups.
+diff --git a/drivers/pinctrl/tegra/pinctrl-tegra.h b/drivers/pinctrl/tegra/pinctrl-tegra.h
+index b3289bdf727d82..b97136685f7a88 100644
+--- a/drivers/pinctrl/tegra/pinctrl-tegra.h
++++ b/drivers/pinctrl/tegra/pinctrl-tegra.h
+@@ -8,6 +8,10 @@
+ #ifndef __PINMUX_TEGRA_H__
+ #define __PINMUX_TEGRA_H__
+
++struct tegra_pingroup_config {
++ bool is_sfsel;
++};
++
+ struct tegra_pmx {
+ struct device *dev;
+ struct pinctrl_dev *pctl;
+@@ -21,6 +25,8 @@ struct tegra_pmx {
+ int nbanks;
+ void __iomem **regs;
+ u32 *backup_regs;
++ /* Array of size soc->ngroups */
++ struct tegra_pingroup_config *pingroup_configs;
+ };
+
+ enum tegra_pinconf_param {
+diff --git a/drivers/platform/x86/asus-wmi.c b/drivers/platform/x86/asus-wmi.c
+index f66d152e265da5..47cc766624d7bb 100644
+--- a/drivers/platform/x86/asus-wmi.c
++++ b/drivers/platform/x86/asus-wmi.c
+@@ -304,6 +304,7 @@ struct asus_wmi {
+
+ u32 kbd_rgb_dev;
+ bool kbd_rgb_state_available;
++ bool oobe_state_available;
+
+ u8 throttle_thermal_policy_mode;
+ u32 throttle_thermal_policy_dev;
+@@ -1826,7 +1827,7 @@ static int asus_wmi_led_init(struct asus_wmi *asus)
+ goto error;
+ }
+
+- if (asus_wmi_dev_is_present(asus, ASUS_WMI_DEVID_OOBE)) {
++ if (asus->oobe_state_available) {
+ /*
+ * Disable OOBE state, so that e.g. the keyboard backlight
+ * works.
+@@ -4723,6 +4724,7 @@ static int asus_wmi_add(struct platform_device *pdev)
+ asus->egpu_enable_available = asus_wmi_dev_is_present(asus, ASUS_WMI_DEVID_EGPU);
+ asus->dgpu_disable_available = asus_wmi_dev_is_present(asus, ASUS_WMI_DEVID_DGPU);
+ asus->kbd_rgb_state_available = asus_wmi_dev_is_present(asus, ASUS_WMI_DEVID_TUF_RGB_STATE);
++ asus->oobe_state_available = asus_wmi_dev_is_present(asus, ASUS_WMI_DEVID_OOBE);
+ asus->ally_mcu_usb_switch = acpi_has_method(NULL, ASUS_USB0_PWR_EC0_CSEE)
+ && dmi_check_system(asus_ally_mcu_quirk);
+
+@@ -4971,6 +4973,13 @@ static int asus_hotk_restore(struct device *device)
+ }
+ if (!IS_ERR_OR_NULL(asus->kbd_led.dev))
+ kbd_led_update(asus);
++ if (asus->oobe_state_available) {
++ /*
++ * Disable OOBE state, so that e.g. the keyboard backlight
++ * works.
++ */
++ asus_wmi_set_devstate(ASUS_WMI_DEVID_OOBE, 1, NULL);
++ }
+
+ if (asus_wmi_has_fnlock_key(asus))
+ asus_wmi_fnlock_update(asus);
+diff --git a/drivers/platform/x86/dell/dell-wmi-sysman/passobj-attributes.c b/drivers/platform/x86/dell/dell-wmi-sysman/passobj-attributes.c
+index 230e6ee966366a..d8f1bf5e58a0f4 100644
+--- a/drivers/platform/x86/dell/dell-wmi-sysman/passobj-attributes.c
++++ b/drivers/platform/x86/dell/dell-wmi-sysman/passobj-attributes.c
+@@ -45,7 +45,7 @@ static ssize_t current_password_store(struct kobject *kobj,
+ int length;
+
+ length = strlen(buf);
+- if (buf[length-1] == '\n')
++ if (length && buf[length - 1] == '\n')
+ length--;
+
+ /* firmware does verifiation of min/max password length,
+diff --git a/drivers/platform/x86/ideapad-laptop.c b/drivers/platform/x86/ideapad-laptop.c
+index 30bd366d7b58a7..b740c7bb810151 100644
+--- a/drivers/platform/x86/ideapad-laptop.c
++++ b/drivers/platform/x86/ideapad-laptop.c
+@@ -1308,6 +1308,16 @@ static const struct key_entry ideapad_keymap[] = {
+ /* Specific to some newer models */
+ { KE_KEY, 0x3e | IDEAPAD_WMI_KEY, { KEY_MICMUTE } },
+ { KE_KEY, 0x3f | IDEAPAD_WMI_KEY, { KEY_RFKILL } },
++ /* Star- (User Assignable Key) */
++ { KE_KEY, 0x44 | IDEAPAD_WMI_KEY, { KEY_PROG1 } },
++ /* Eye */
++ { KE_KEY, 0x45 | IDEAPAD_WMI_KEY, { KEY_PROG3 } },
++ /* Performance toggle also Fn+Q, handled inside ideapad_wmi_notify() */
++ { KE_KEY, 0x3d | IDEAPAD_WMI_KEY, { KEY_PROG4 } },
++ /* shift + prtsc */
++ { KE_KEY, 0x2d | IDEAPAD_WMI_KEY, { KEY_CUT } },
++ { KE_KEY, 0x29 | IDEAPAD_WMI_KEY, { KEY_TOUCHPAD_TOGGLE } },
++ { KE_KEY, 0x2a | IDEAPAD_WMI_KEY, { KEY_ROOT_MENU } },
+
+ { KE_END },
+ };
+@@ -2094,6 +2104,12 @@ static void ideapad_wmi_notify(struct wmi_device *wdev, union acpi_object *data)
+ dev_dbg(&wdev->dev, "WMI fn-key event: 0x%llx\n",
+ data->integer.value);
+
++ /* performance button triggered by 0x3d */
++ if (data->integer.value == 0x3d && priv->dytc) {
++ platform_profile_cycle();
++ break;
++ }
++
+ /* 0x02 FnLock, 0x03 Esc */
+ if (data->integer.value == 0x02 || data->integer.value == 0x03)
+ ideapad_fn_lock_led_notify(priv, data->integer.value == 0x02);
+diff --git a/drivers/platform/x86/intel/hid.c b/drivers/platform/x86/intel/hid.c
+index 88a1a9ff2f3443..0b5e43444ed603 100644
+--- a/drivers/platform/x86/intel/hid.c
++++ b/drivers/platform/x86/intel/hid.c
+@@ -44,16 +44,17 @@ MODULE_LICENSE("GPL");
+ MODULE_AUTHOR("Alex Hung");
+
+ static const struct acpi_device_id intel_hid_ids[] = {
+- {"INT33D5", 0},
+- {"INTC1051", 0},
+- {"INTC1054", 0},
+- {"INTC1070", 0},
+- {"INTC1076", 0},
+- {"INTC1077", 0},
+- {"INTC1078", 0},
+- {"INTC107B", 0},
+- {"INTC10CB", 0},
+- {"", 0},
++ { "INT33D5" },
++ { "INTC1051" },
++ { "INTC1054" },
++ { "INTC1070" },
++ { "INTC1076" },
++ { "INTC1077" },
++ { "INTC1078" },
++ { "INTC107B" },
++ { "INTC10CB" },
++ { "INTC10CC" },
++ { }
+ };
+ MODULE_DEVICE_TABLE(acpi, intel_hid_ids);
+
+diff --git a/drivers/platform/x86/think-lmi.c b/drivers/platform/x86/think-lmi.c
+index 323316ac6783aa..e4b3b20d03f32e 100644
+--- a/drivers/platform/x86/think-lmi.c
++++ b/drivers/platform/x86/think-lmi.c
+@@ -1065,8 +1065,8 @@ static ssize_t current_value_store(struct kobject *kobj,
+ ret = -EINVAL;
+ goto out;
+ }
+- set_str = kasprintf(GFP_KERNEL, "%s,%s,%s", setting->display_name,
+- new_setting, tlmi_priv.pwd_admin->signature);
++ set_str = kasprintf(GFP_KERNEL, "%s,%s,%s", setting->name,
++ new_setting, tlmi_priv.pwd_admin->signature);
+ if (!set_str) {
+ ret = -ENOMEM;
+ goto out;
+@@ -1096,7 +1096,7 @@ static ssize_t current_value_store(struct kobject *kobj,
+ goto out;
+ }
+
+- set_str = kasprintf(GFP_KERNEL, "%s,%s;", setting->display_name,
++ set_str = kasprintf(GFP_KERNEL, "%s,%s;", setting->name,
+ new_setting);
+ if (!set_str) {
+ ret = -ENOMEM;
+@@ -1124,11 +1124,11 @@ static ssize_t current_value_store(struct kobject *kobj,
+ }
+
+ if (auth_str)
+- set_str = kasprintf(GFP_KERNEL, "%s,%s,%s", setting->display_name,
+- new_setting, auth_str);
++ set_str = kasprintf(GFP_KERNEL, "%s,%s,%s", setting->name,
++ new_setting, auth_str);
+ else
+- set_str = kasprintf(GFP_KERNEL, "%s,%s;", setting->display_name,
+- new_setting);
++ set_str = kasprintf(GFP_KERNEL, "%s,%s;", setting->name,
++ new_setting);
+ if (!set_str) {
+ ret = -ENOMEM;
+ goto out;
+@@ -1633,9 +1633,6 @@ static int tlmi_analyze(void)
+ continue;
+ }
+
+- /* It is not allowed to have '/' for file name. Convert it into '\'. */
+- strreplace(item, '/', '\\');
+-
+ /* Remove the value part */
+ strreplace(item, ',', '\0');
+
+@@ -1647,11 +1644,16 @@ static int tlmi_analyze(void)
+ goto fail_clear_attr;
+ }
+ setting->index = i;
++
++ strscpy(setting->name, item);
++ /* It is not allowed to have '/' for file name. Convert it into '\'. */
++ strreplace(item, '/', '\\');
+ strscpy(setting->display_name, item);
++
+ /* If BIOS selections supported, load those */
+ if (tlmi_priv.can_get_bios_selections) {
+- ret = tlmi_get_bios_selections(setting->display_name,
+- &setting->possible_values);
++ ret = tlmi_get_bios_selections(setting->name,
++ &setting->possible_values);
+ if (ret || !setting->possible_values)
+ pr_info("Error retrieving possible values for %d : %s\n",
+ i, setting->display_name);
+diff --git a/drivers/platform/x86/think-lmi.h b/drivers/platform/x86/think-lmi.h
+index f267d8b46957e6..95a3d935edaaf2 100644
+--- a/drivers/platform/x86/think-lmi.h
++++ b/drivers/platform/x86/think-lmi.h
+@@ -88,6 +88,7 @@ struct tlmi_pwd_setting {
+ struct tlmi_attr_setting {
+ struct kobject kobj;
+ int index;
++ char name[TLMI_SETTINGS_MAXLEN];
+ char display_name[TLMI_SETTINGS_MAXLEN];
+ char *possible_values;
+ };
+diff --git a/drivers/pmdomain/core.c b/drivers/pmdomain/core.c
+index 6c94137865c9b5..949445e9297310 100644
+--- a/drivers/pmdomain/core.c
++++ b/drivers/pmdomain/core.c
+@@ -3091,7 +3091,7 @@ struct device *genpd_dev_pm_attach_by_id(struct device *dev,
+ /* Verify that the index is within a valid range. */
+ num_domains = of_count_phandle_with_args(dev->of_node, "power-domains",
+ "#power-domain-cells");
+- if (index >= num_domains)
++ if (num_domains < 0 || index >= num_domains)
+ return NULL;
+
+ /* Allocate and register device on the genpd bus. */
+diff --git a/drivers/pmdomain/imx/gpcv2.c b/drivers/pmdomain/imx/gpcv2.c
+index 958d34d4821b1b..105fcaf13a34c7 100644
+--- a/drivers/pmdomain/imx/gpcv2.c
++++ b/drivers/pmdomain/imx/gpcv2.c
+@@ -1361,7 +1361,7 @@ static int imx_pgc_domain_probe(struct platform_device *pdev)
+ }
+
+ if (IS_ENABLED(CONFIG_LOCKDEP) &&
+- of_property_read_bool(domain->dev->of_node, "power-domains"))
++ of_property_present(domain->dev->of_node, "power-domains"))
+ lockdep_set_subclass(&domain->genpd.mlock, 1);
+
+ ret = of_genpd_add_provider_simple(domain->dev->of_node,
+diff --git a/drivers/pmdomain/renesas/rcar-gen4-sysc.c b/drivers/pmdomain/renesas/rcar-gen4-sysc.c
+index 66409cff2083fc..e001b5c25bed00 100644
+--- a/drivers/pmdomain/renesas/rcar-gen4-sysc.c
++++ b/drivers/pmdomain/renesas/rcar-gen4-sysc.c
+@@ -338,11 +338,6 @@ static int __init rcar_gen4_sysc_pd_init(void)
+ struct rcar_gen4_sysc_pd *pd;
+ size_t n;
+
+- if (!area->name) {
+- /* Skip NULLified area */
+- continue;
+- }
+-
+ n = strlen(area->name) + 1;
+ pd = kzalloc(sizeof(*pd) + n, GFP_KERNEL);
+ if (!pd) {
+diff --git a/drivers/pmdomain/renesas/rcar-sysc.c b/drivers/pmdomain/renesas/rcar-sysc.c
+index b99326917330f5..1e29485237894a 100644
+--- a/drivers/pmdomain/renesas/rcar-sysc.c
++++ b/drivers/pmdomain/renesas/rcar-sysc.c
+@@ -396,11 +396,6 @@ static int __init rcar_sysc_pd_init(void)
+ struct rcar_sysc_pd *pd;
+ size_t n;
+
+- if (!area->name) {
+- /* Skip NULLified area */
+- continue;
+- }
+-
+ n = strlen(area->name) + 1;
+ pd = kzalloc(sizeof(*pd) + n, GFP_KERNEL);
+ if (!pd) {
+diff --git a/drivers/power/supply/axp20x_battery.c b/drivers/power/supply/axp20x_battery.c
+index 3c3158f31a484d..f4cf129a0b6838 100644
+--- a/drivers/power/supply/axp20x_battery.c
++++ b/drivers/power/supply/axp20x_battery.c
+@@ -89,6 +89,8 @@
+ #define AXP717_BAT_CC_MIN_UA 0
+ #define AXP717_BAT_CC_MAX_UA 3008000
+
++#define AXP717_TS_PIN_DISABLE BIT(4)
++
+ struct axp20x_batt_ps;
+
+ struct axp_data {
+@@ -117,6 +119,7 @@ struct axp20x_batt_ps {
+ /* Maximum constant charge current */
+ unsigned int max_ccc;
+ const struct axp_data *data;
++ bool ts_disable;
+ };
+
+ static int axp20x_battery_get_max_voltage(struct axp20x_batt_ps *axp20x_batt,
+@@ -984,6 +987,24 @@ static void axp717_set_battery_info(struct platform_device *pdev,
+ int ccc = info->constant_charge_current_max_ua;
+ int val;
+
++ axp_batt->ts_disable = (device_property_read_bool(axp_batt->dev,
++ "x-powers,no-thermistor"));
++
++ /*
++ * Under rare conditions an incorrectly programmed efuse for
++ * the temp sensor on the PMIC may trigger a fault condition.
++ * Allow users to hard-code if the ts pin is not used to work
++ * around this problem. Note that this requires the battery
++ * be correctly defined in the device tree with a monitored
++ * battery node.
++ */
++ if (axp_batt->ts_disable) {
++ regmap_update_bits(axp_batt->regmap,
++ AXP717_TS_PIN_CFG,
++ AXP717_TS_PIN_DISABLE,
++ AXP717_TS_PIN_DISABLE);
++ }
++
+ if (vmin > 0 && axp717_set_voltage_min_design(axp_batt, vmin))
+ dev_err(&pdev->dev,
+ "couldn't set voltage_min_design\n");
+diff --git a/drivers/pps/generators/pps_gen-dummy.c b/drivers/pps/generators/pps_gen-dummy.c
+index b284c200cbe500..55de4aecf35ed7 100644
+--- a/drivers/pps/generators/pps_gen-dummy.c
++++ b/drivers/pps/generators/pps_gen-dummy.c
+@@ -61,7 +61,7 @@ static int pps_gen_dummy_enable(struct pps_gen_device *pps_gen, bool enable)
+ * The PPS info struct
+ */
+
+-static struct pps_gen_source_info pps_gen_dummy_info = {
++static const struct pps_gen_source_info pps_gen_dummy_info = {
+ .use_system_clock = true,
+ .get_time = pps_gen_dummy_get_time,
+ .enable = pps_gen_dummy_enable,
+diff --git a/drivers/pps/generators/pps_gen.c b/drivers/pps/generators/pps_gen.c
+index ca592f1736f46b..5b8bb454913cd8 100644
+--- a/drivers/pps/generators/pps_gen.c
++++ b/drivers/pps/generators/pps_gen.c
+@@ -66,7 +66,7 @@ static long pps_gen_cdev_ioctl(struct file *file,
+ if (ret)
+ return -EFAULT;
+
+- ret = pps_gen->info.enable(pps_gen, status);
++ ret = pps_gen->info->enable(pps_gen, status);
+ if (ret)
+ return ret;
+ pps_gen->enabled = status;
+@@ -76,7 +76,7 @@ static long pps_gen_cdev_ioctl(struct file *file,
+ case PPS_GEN_USESYSTEMCLOCK:
+ dev_dbg(pps_gen->dev, "PPS_GEN_USESYSTEMCLOCK\n");
+
+- ret = put_user(pps_gen->info.use_system_clock, uiuarg);
++ ret = put_user(pps_gen->info->use_system_clock, uiuarg);
+ if (ret)
+ return -EFAULT;
+
+@@ -175,7 +175,7 @@ static int pps_gen_register_cdev(struct pps_gen_device *pps_gen)
+ devt = MKDEV(MAJOR(pps_gen_devt), pps_gen->id);
+
+ cdev_init(&pps_gen->cdev, &pps_gen_cdev_fops);
+- pps_gen->cdev.owner = pps_gen->info.owner;
++ pps_gen->cdev.owner = pps_gen->info->owner;
+
+ err = cdev_add(&pps_gen->cdev, devt, 1);
+ if (err) {
+@@ -183,8 +183,8 @@ static int pps_gen_register_cdev(struct pps_gen_device *pps_gen)
+ MAJOR(pps_gen_devt), pps_gen->id);
+ goto free_ida;
+ }
+- pps_gen->dev = device_create(pps_gen_class, pps_gen->info.parent, devt,
+- pps_gen, "pps-gen%d", pps_gen->id);
++ pps_gen->dev = device_create(pps_gen_class, pps_gen->info->parent, devt,
++ pps_gen, "pps-gen%d", pps_gen->id);
+ if (IS_ERR(pps_gen->dev)) {
+ err = PTR_ERR(pps_gen->dev);
+ goto del_cdev;
+@@ -225,7 +225,7 @@ static void pps_gen_unregister_cdev(struct pps_gen_device *pps_gen)
+ * Return: the PPS generator device in case of success, and ERR_PTR(errno)
+ * otherwise.
+ */
+-struct pps_gen_device *pps_gen_register_source(struct pps_gen_source_info *info)
++struct pps_gen_device *pps_gen_register_source(const struct pps_gen_source_info *info)
+ {
+ struct pps_gen_device *pps_gen;
+ int err;
+@@ -235,7 +235,7 @@ struct pps_gen_device *pps_gen_register_source(struct pps_gen_source_info *info)
+ err = -ENOMEM;
+ goto pps_gen_register_source_exit;
+ }
+- pps_gen->info = *info;
++ pps_gen->info = info;
+ pps_gen->enabled = false;
+
+ init_waitqueue_head(&pps_gen->queue);
+diff --git a/drivers/pps/generators/sysfs.c b/drivers/pps/generators/sysfs.c
+index faf8b1c6d20262..6d6bc0006feae0 100644
+--- a/drivers/pps/generators/sysfs.c
++++ b/drivers/pps/generators/sysfs.c
+@@ -19,7 +19,7 @@ static ssize_t system_show(struct device *dev, struct device_attribute *attr,
+ {
+ struct pps_gen_device *pps_gen = dev_get_drvdata(dev);
+
+- return sysfs_emit(buf, "%d\n", pps_gen->info.use_system_clock);
++ return sysfs_emit(buf, "%d\n", pps_gen->info->use_system_clock);
+ }
+ static DEVICE_ATTR_RO(system);
+
+@@ -30,7 +30,7 @@ static ssize_t time_show(struct device *dev, struct device_attribute *attr,
+ struct timespec64 time;
+ int ret;
+
+- ret = pps_gen->info.get_time(pps_gen, &time);
++ ret = pps_gen->info->get_time(pps_gen, &time);
+ if (ret)
+ return ret;
+
+@@ -49,7 +49,7 @@ static ssize_t enable_store(struct device *dev, struct device_attribute *attr,
+ if (ret)
+ return ret;
+
+- ret = pps_gen->info.enable(pps_gen, status);
++ ret = pps_gen->info->enable(pps_gen, status);
+ if (ret)
+ return ret;
+ pps_gen->enabled = status;
+diff --git a/drivers/ptp/ptp_ocp.c b/drivers/ptp/ptp_ocp.c
+index 605cce32a3d376..e4be8b92919664 100644
+--- a/drivers/ptp/ptp_ocp.c
++++ b/drivers/ptp/ptp_ocp.c
+@@ -315,6 +315,8 @@ struct ptp_ocp_serial_port {
+ #define OCP_BOARD_ID_LEN 13
+ #define OCP_SERIAL_LEN 6
+ #define OCP_SMA_NUM 4
++#define OCP_SIGNAL_NUM 4
++#define OCP_FREQ_NUM 4
+
+ enum {
+ PORT_GNSS,
+@@ -342,8 +344,8 @@ struct ptp_ocp {
+ struct dcf_master_reg __iomem *dcf_out;
+ struct dcf_slave_reg __iomem *dcf_in;
+ struct tod_reg __iomem *nmea_out;
+- struct frequency_reg __iomem *freq_in[4];
+- struct ptp_ocp_ext_src *signal_out[4];
++ struct frequency_reg __iomem *freq_in[OCP_FREQ_NUM];
++ struct ptp_ocp_ext_src *signal_out[OCP_SIGNAL_NUM];
+ struct ptp_ocp_ext_src *pps;
+ struct ptp_ocp_ext_src *ts0;
+ struct ptp_ocp_ext_src *ts1;
+@@ -378,10 +380,12 @@ struct ptp_ocp {
+ u32 utc_tai_offset;
+ u32 ts_window_adjust;
+ u64 fw_cap;
+- struct ptp_ocp_signal signal[4];
++ struct ptp_ocp_signal signal[OCP_SIGNAL_NUM];
+ struct ptp_ocp_sma_connector sma[OCP_SMA_NUM];
+ const struct ocp_sma_op *sma_op;
+ struct dpll_device *dpll;
++ int signals_nr;
++ int freq_in_nr;
+ };
+
+ #define OCP_REQ_TIMESTAMP BIT(0)
+@@ -2697,6 +2701,8 @@ ptp_ocp_fb_board_init(struct ptp_ocp *bp, struct ocp_resource *r)
+ bp->eeprom_map = fb_eeprom_map;
+ bp->fw_version = ioread32(&bp->image->version);
+ bp->sma_op = &ocp_fb_sma_op;
++ bp->signals_nr = 4;
++ bp->freq_in_nr = 4;
+
+ ptp_ocp_fb_set_version(bp);
+
+@@ -2862,6 +2868,8 @@ ptp_ocp_art_board_init(struct ptp_ocp *bp, struct ocp_resource *r)
+ bp->fw_version = ioread32(&bp->reg->version);
+ bp->fw_tag = 2;
+ bp->sma_op = &ocp_art_sma_op;
++ bp->signals_nr = 4;
++ bp->freq_in_nr = 4;
+
+ /* Enable MAC serial port during initialisation */
+ iowrite32(1, &bp->board_config->mro50_serial_activate);
+@@ -2888,6 +2896,8 @@ ptp_ocp_adva_board_init(struct ptp_ocp *bp, struct ocp_resource *r)
+ bp->flash_start = 0xA00000;
+ bp->eeprom_map = fb_eeprom_map;
+ bp->sma_op = &ocp_adva_sma_op;
++ bp->signals_nr = 2;
++ bp->freq_in_nr = 2;
+
+ version = ioread32(&bp->image->version);
+ /* if lower 16 bits are empty, this is the fw loader. */
+@@ -4008,7 +4018,7 @@ _signal_summary_show(struct seq_file *s, struct ptp_ocp *bp, int nr)
+ {
+ struct signal_reg __iomem *reg = bp->signal_out[nr]->mem;
+ struct ptp_ocp_signal *signal = &bp->signal[nr];
+- char label[8];
++ char label[16];
+ bool on;
+ u32 val;
+
+@@ -4034,7 +4044,7 @@ static void
+ _frequency_summary_show(struct seq_file *s, int nr,
+ struct frequency_reg __iomem *reg)
+ {
+- char label[8];
++ char label[16];
+ bool on;
+ u32 val;
+
+@@ -4178,11 +4188,11 @@ ptp_ocp_summary_show(struct seq_file *s, void *data)
+ }
+
+ if (bp->fw_cap & OCP_CAP_SIGNAL)
+- for (i = 0; i < 4; i++)
++ for (i = 0; i < bp->signals_nr; i++)
+ _signal_summary_show(s, bp, i);
+
+ if (bp->fw_cap & OCP_CAP_FREQ)
+- for (i = 0; i < 4; i++)
++ for (i = 0; i < bp->freq_in_nr; i++)
+ _frequency_summary_show(s, i, bp->freq_in[i]);
+
+ if (bp->irig_out) {
+diff --git a/drivers/regulator/ad5398.c b/drivers/regulator/ad5398.c
+index 40f7dba42b5ad7..404cbe32711e73 100644
+--- a/drivers/regulator/ad5398.c
++++ b/drivers/regulator/ad5398.c
+@@ -14,6 +14,7 @@
+ #include <linux/platform_device.h>
+ #include <linux/regulator/driver.h>
+ #include <linux/regulator/machine.h>
++#include <linux/regulator/of_regulator.h>
+
+ #define AD5398_CURRENT_EN_MASK 0x8000
+
+@@ -221,15 +222,20 @@ static int ad5398_probe(struct i2c_client *client)
+ const struct ad5398_current_data_format *df =
+ (struct ad5398_current_data_format *)id->driver_data;
+
+- if (!init_data)
+- return -EINVAL;
+-
+ chip = devm_kzalloc(&client->dev, sizeof(*chip), GFP_KERNEL);
+ if (!chip)
+ return -ENOMEM;
+
+ config.dev = &client->dev;
++ if (client->dev.of_node)
++ init_data = of_get_regulator_init_data(&client->dev,
++ client->dev.of_node,
++ &ad5398_reg);
++ if (!init_data)
++ return -EINVAL;
++
+ config.init_data = init_data;
++ config.of_node = client->dev.of_node;
+ config.driver_data = chip;
+
+ chip->client = client;
+diff --git a/drivers/remoteproc/qcom_wcnss.c b/drivers/remoteproc/qcom_wcnss.c
+index 5b5664603eed29..2c7e519a2254ba 100644
+--- a/drivers/remoteproc/qcom_wcnss.c
++++ b/drivers/remoteproc/qcom_wcnss.c
+@@ -117,10 +117,10 @@ static const struct wcnss_data pronto_v1_data = {
+ .pmu_offset = 0x1004,
+ .spare_offset = 0x1088,
+
+- .pd_names = { "mx", "cx" },
++ .pd_names = { "cx", "mx" },
+ .vregs = (struct wcnss_vreg_info[]) {
+- { "vddmx", 950000, 1150000, 0 },
+ { "vddcx", .super_turbo = true},
++ { "vddmx", 950000, 1150000, 0 },
+ { "vddpx", 1800000, 1800000, 0 },
+ },
+ .num_pd_vregs = 2,
+@@ -131,10 +131,10 @@ static const struct wcnss_data pronto_v2_data = {
+ .pmu_offset = 0x1004,
+ .spare_offset = 0x1088,
+
+- .pd_names = { "mx", "cx" },
++ .pd_names = { "cx", "mx" },
+ .vregs = (struct wcnss_vreg_info[]) {
+- { "vddmx", 1287500, 1287500, 0 },
+ { "vddcx", .super_turbo = true },
++ { "vddmx", 1287500, 1287500, 0 },
+ { "vddpx", 1800000, 1800000, 0 },
+ },
+ .num_pd_vregs = 2,
+@@ -397,8 +397,17 @@ static irqreturn_t wcnss_stop_ack_interrupt(int irq, void *dev)
+ static int wcnss_init_pds(struct qcom_wcnss *wcnss,
+ const char * const pd_names[WCNSS_MAX_PDS])
+ {
++ struct device *dev = wcnss->dev;
+ int i, ret;
+
++ /* Handle single power domain */
++ if (dev->pm_domain) {
++ wcnss->pds[0] = dev;
++ wcnss->num_pds = 1;
++ pm_runtime_enable(dev);
++ return 0;
++ }
++
+ for (i = 0; i < WCNSS_MAX_PDS; i++) {
+ if (!pd_names[i])
+ break;
+@@ -418,8 +427,15 @@ static int wcnss_init_pds(struct qcom_wcnss *wcnss,
+
+ static void wcnss_release_pds(struct qcom_wcnss *wcnss)
+ {
++ struct device *dev = wcnss->dev;
+ int i;
+
++ /* Handle single power domain */
++ if (wcnss->num_pds == 1 && dev->pm_domain) {
++ pm_runtime_disable(dev);
++ return;
++ }
++
+ for (i = 0; i < wcnss->num_pds; i++)
+ dev_pm_domain_detach(wcnss->pds[i], false);
+ }
+@@ -437,10 +453,14 @@ static int wcnss_init_regulators(struct qcom_wcnss *wcnss,
+ * the regulators for the power domains. For old device trees we need to
+ * reserve extra space to manage them through the regulator interface.
+ */
+- if (wcnss->num_pds)
+- info += num_pd_vregs;
+- else
++ if (wcnss->num_pds) {
++ info += wcnss->num_pds;
++ /* Handle single power domain case */
++ if (wcnss->num_pds < num_pd_vregs)
++ num_vregs += num_pd_vregs - wcnss->num_pds;
++ } else {
+ num_vregs += num_pd_vregs;
++ }
+
+ bulk = devm_kcalloc(wcnss->dev,
+ num_vregs, sizeof(struct regulator_bulk_data),
+diff --git a/drivers/rtc/rtc-ds1307.c b/drivers/rtc/rtc-ds1307.c
+index 872e0b679be481..5efbe69bf5ca8c 100644
+--- a/drivers/rtc/rtc-ds1307.c
++++ b/drivers/rtc/rtc-ds1307.c
+@@ -1807,10 +1807,8 @@ static int ds1307_probe(struct i2c_client *client)
+ * For some variants, be sure alarms can trigger when we're
+ * running on Vbackup (BBSQI/BBSQW)
+ */
+- if (want_irq || ds1307_can_wakeup_device) {
++ if (want_irq || ds1307_can_wakeup_device)
+ regs[0] |= DS1337_BIT_INTCN | chip->bbsqi_bit;
+- regs[0] &= ~(DS1337_BIT_A2IE | DS1337_BIT_A1IE);
+- }
+
+ regmap_write(ds1307->regmap, DS1337_REG_CONTROL,
+ regs[0]);
+diff --git a/drivers/rtc/rtc-rv3032.c b/drivers/rtc/rtc-rv3032.c
+index 35b2e36b426a0d..cb01038a2e27fe 100644
+--- a/drivers/rtc/rtc-rv3032.c
++++ b/drivers/rtc/rtc-rv3032.c
+@@ -69,7 +69,7 @@
+ #define RV3032_CLKOUT2_FD_MSK GENMASK(6, 5)
+ #define RV3032_CLKOUT2_OS BIT(7)
+
+-#define RV3032_CTRL1_EERD BIT(3)
++#define RV3032_CTRL1_EERD BIT(2)
+ #define RV3032_CTRL1_WADA BIT(5)
+
+ #define RV3032_CTRL2_STOP BIT(0)
+diff --git a/drivers/s390/crypto/vfio_ap_ops.c b/drivers/s390/crypto/vfio_ap_ops.c
+index a52c2690933fd5..a6a06289795554 100644
+--- a/drivers/s390/crypto/vfio_ap_ops.c
++++ b/drivers/s390/crypto/vfio_ap_ops.c
+@@ -863,48 +863,66 @@ static void vfio_ap_mdev_remove(struct mdev_device *mdev)
+ vfio_put_device(&matrix_mdev->vdev);
+ }
+
+-#define MDEV_SHARING_ERR "Userspace may not re-assign queue %02lx.%04lx " \
+- "already assigned to %s"
++#define MDEV_SHARING_ERR "Userspace may not assign queue %02lx.%04lx to mdev: already assigned to %s"
+
+-static void vfio_ap_mdev_log_sharing_err(struct ap_matrix_mdev *matrix_mdev,
+- unsigned long *apm,
+- unsigned long *aqm)
++#define MDEV_IN_USE_ERR "Can not reserve queue %02lx.%04lx for host driver: in use by mdev"
++
++static void vfio_ap_mdev_log_sharing_err(struct ap_matrix_mdev *assignee,
++ struct ap_matrix_mdev *assigned_to,
++ unsigned long *apm, unsigned long *aqm)
+ {
+ unsigned long apid, apqi;
+- const struct device *dev = mdev_dev(matrix_mdev->mdev);
+- const char *mdev_name = dev_name(dev);
+
+- for_each_set_bit_inv(apid, apm, AP_DEVICES)
++ for_each_set_bit_inv(apid, apm, AP_DEVICES) {
++ for_each_set_bit_inv(apqi, aqm, AP_DOMAINS) {
++ dev_warn(mdev_dev(assignee->mdev), MDEV_SHARING_ERR,
++ apid, apqi, dev_name(mdev_dev(assigned_to->mdev)));
++ }
++ }
++}
++
++static void vfio_ap_mdev_log_in_use_err(struct ap_matrix_mdev *assignee,
++ unsigned long *apm, unsigned long *aqm)
++{
++ unsigned long apid, apqi;
++
++ for_each_set_bit_inv(apid, apm, AP_DEVICES) {
+ for_each_set_bit_inv(apqi, aqm, AP_DOMAINS)
+- dev_warn(dev, MDEV_SHARING_ERR, apid, apqi, mdev_name);
++ dev_warn(mdev_dev(assignee->mdev), MDEV_IN_USE_ERR, apid, apqi);
++ }
+ }
+
+ /**
+ * vfio_ap_mdev_verify_no_sharing - verify APQNs are not shared by matrix mdevs
+ *
++ * @assignee: the matrix mdev to which @mdev_apm and @mdev_aqm are being
++ * assigned; or, NULL if this function was called by the AP bus
++ * driver in_use callback to verify none of the APQNs being reserved
++ * for the host device driver are in use by a vfio_ap mediated device
+ * @mdev_apm: mask indicating the APIDs of the APQNs to be verified
+ * @mdev_aqm: mask indicating the APQIs of the APQNs to be verified
+ *
+- * Verifies that each APQN derived from the Cartesian product of a bitmap of
+- * AP adapter IDs and AP queue indexes is not configured for any matrix
+- * mediated device. AP queue sharing is not allowed.
++ * Verifies that each APQN derived from the Cartesian product of APIDs
++ * represented by the bits set in @mdev_apm and the APQIs of the bits set in
++ * @mdev_aqm is not assigned to a mediated device other than the mdev to which
++ * the APQN is being assigned (@assignee). AP queue sharing is not allowed.
+ *
+ * Return: 0 if the APQNs are not shared; otherwise return -EADDRINUSE.
+ */
+-static int vfio_ap_mdev_verify_no_sharing(unsigned long *mdev_apm,
++static int vfio_ap_mdev_verify_no_sharing(struct ap_matrix_mdev *assignee,
++ unsigned long *mdev_apm,
+ unsigned long *mdev_aqm)
+ {
+- struct ap_matrix_mdev *matrix_mdev;
++ struct ap_matrix_mdev *assigned_to;
+ DECLARE_BITMAP(apm, AP_DEVICES);
+ DECLARE_BITMAP(aqm, AP_DOMAINS);
+
+- list_for_each_entry(matrix_mdev, &matrix_dev->mdev_list, node) {
++ list_for_each_entry(assigned_to, &matrix_dev->mdev_list, node) {
+ /*
+- * If the input apm and aqm are fields of the matrix_mdev
+- * object, then move on to the next matrix_mdev.
++ * If the mdev to which the mdev_apm and mdev_aqm is being
++ * assigned is the same as the mdev being verified
+ */
+- if (mdev_apm == matrix_mdev->matrix.apm &&
+- mdev_aqm == matrix_mdev->matrix.aqm)
++ if (assignee == assigned_to)
+ continue;
+
+ memset(apm, 0, sizeof(apm));
+@@ -914,15 +932,16 @@ static int vfio_ap_mdev_verify_no_sharing(unsigned long *mdev_apm,
+ * We work on full longs, as we can only exclude the leftover
+ * bits in non-inverse order. The leftover is all zeros.
+ */
+- if (!bitmap_and(apm, mdev_apm, matrix_mdev->matrix.apm,
+- AP_DEVICES))
++ if (!bitmap_and(apm, mdev_apm, assigned_to->matrix.apm, AP_DEVICES))
+ continue;
+
+- if (!bitmap_and(aqm, mdev_aqm, matrix_mdev->matrix.aqm,
+- AP_DOMAINS))
++ if (!bitmap_and(aqm, mdev_aqm, assigned_to->matrix.aqm, AP_DOMAINS))
+ continue;
+
+- vfio_ap_mdev_log_sharing_err(matrix_mdev, apm, aqm);
++ if (assignee)
++ vfio_ap_mdev_log_sharing_err(assignee, assigned_to, apm, aqm);
++ else
++ vfio_ap_mdev_log_in_use_err(assigned_to, apm, aqm);
+
+ return -EADDRINUSE;
+ }
+@@ -951,7 +970,8 @@ static int vfio_ap_mdev_validate_masks(struct ap_matrix_mdev *matrix_mdev)
+ matrix_mdev->matrix.aqm))
+ return -EADDRNOTAVAIL;
+
+- return vfio_ap_mdev_verify_no_sharing(matrix_mdev->matrix.apm,
++ return vfio_ap_mdev_verify_no_sharing(matrix_mdev,
++ matrix_mdev->matrix.apm,
+ matrix_mdev->matrix.aqm);
+ }
+
+@@ -2458,7 +2478,7 @@ int vfio_ap_mdev_resource_in_use(unsigned long *apm, unsigned long *aqm)
+
+ mutex_lock(&matrix_dev->guests_lock);
+ mutex_lock(&matrix_dev->mdevs_lock);
+- ret = vfio_ap_mdev_verify_no_sharing(apm, aqm);
++ ret = vfio_ap_mdev_verify_no_sharing(NULL, apm, aqm);
+ mutex_unlock(&matrix_dev->mdevs_lock);
+ mutex_unlock(&matrix_dev->guests_lock);
+
+diff --git a/drivers/scsi/aacraid/aachba.c b/drivers/scsi/aacraid/aachba.c
+index abf6a82b74af35..0be719f383770a 100644
+--- a/drivers/scsi/aacraid/aachba.c
++++ b/drivers/scsi/aacraid/aachba.c
+@@ -3221,8 +3221,8 @@ int aac_scsi_cmd(struct scsi_cmnd * scsicmd)
+ break;
+ }
+ fallthrough;
+- case RESERVE:
+- case RELEASE:
++ case RESERVE_6:
++ case RELEASE_6:
+ case REZERO_UNIT:
+ case REASSIGN_BLOCKS:
+ case SEEK_10:
+diff --git a/drivers/scsi/arm/acornscsi.c b/drivers/scsi/arm/acornscsi.c
+index e50a3dbf9de3e8..ef21b85cf01465 100644
+--- a/drivers/scsi/arm/acornscsi.c
++++ b/drivers/scsi/arm/acornscsi.c
+@@ -591,7 +591,7 @@ datadir_t acornscsi_datadirection(int command)
+ case CHANGE_DEFINITION: case COMPARE: case COPY:
+ case COPY_VERIFY: case LOG_SELECT: case MODE_SELECT:
+ case MODE_SELECT_10: case SEND_DIAGNOSTIC: case WRITE_BUFFER:
+- case FORMAT_UNIT: case REASSIGN_BLOCKS: case RESERVE:
++ case FORMAT_UNIT: case REASSIGN_BLOCKS: case RESERVE_6:
+ case SEARCH_EQUAL: case SEARCH_HIGH: case SEARCH_LOW:
+ case WRITE_6: case WRITE_10: case WRITE_VERIFY:
+ case UPDATE_BLOCK: case WRITE_LONG: case WRITE_SAME:
+diff --git a/drivers/scsi/ips.c b/drivers/scsi/ips.c
+index cce6c6b409ad51..94adb6ac02a4e6 100644
+--- a/drivers/scsi/ips.c
++++ b/drivers/scsi/ips.c
+@@ -3631,8 +3631,8 @@ ips_send_cmd(ips_ha_t * ha, ips_scb_t * scb)
+
+ break;
+
+- case RESERVE:
+- case RELEASE:
++ case RESERVE_6:
++ case RELEASE_6:
+ scb->scsi_cmd->result = DID_OK << 16;
+ break;
+
+@@ -3899,8 +3899,8 @@ ips_chkstatus(ips_ha_t * ha, IPS_STATUS * pstatus)
+ case WRITE_6:
+ case READ_10:
+ case WRITE_10:
+- case RESERVE:
+- case RELEASE:
++ case RESERVE_6:
++ case RELEASE_6:
+ break;
+
+ case MODE_SENSE:
+diff --git a/drivers/scsi/lpfc/lpfc_els.c b/drivers/scsi/lpfc/lpfc_els.c
+index 1d7db49a8fe451..318dc83e9a2acf 100644
+--- a/drivers/scsi/lpfc/lpfc_els.c
++++ b/drivers/scsi/lpfc/lpfc_els.c
+@@ -9569,18 +9569,16 @@ lpfc_els_flush_cmd(struct lpfc_vport *vport)
+ mbx_tmo_err = test_bit(MBX_TMO_ERR, &phba->bit_flags);
+ /* First we need to issue aborts to outstanding cmds on txcmpl */
+ list_for_each_entry_safe(piocb, tmp_iocb, &pring->txcmplq, list) {
++ if (piocb->vport != vport)
++ continue;
++
+ lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
+ "2243 iotag = 0x%x cmd_flag = 0x%x "
+- "ulp_command = 0x%x this_vport %x "
+- "sli_flag = 0x%x\n",
++ "ulp_command = 0x%x sli_flag = 0x%x\n",
+ piocb->iotag, piocb->cmd_flag,
+ get_job_cmnd(phba, piocb),
+- (piocb->vport == vport),
+ phba->sli.sli_flag);
+
+- if (piocb->vport != vport)
+- continue;
+-
+ if ((phba->sli.sli_flag & LPFC_SLI_ACTIVE) && !mbx_tmo_err) {
+ if (piocb->cmd_flag & LPFC_IO_LIBDFC)
+ continue;
+diff --git a/drivers/scsi/lpfc/lpfc_hbadisc.c b/drivers/scsi/lpfc/lpfc_hbadisc.c
+index 36e66df36a18cb..07cd611f34bd5f 100644
+--- a/drivers/scsi/lpfc/lpfc_hbadisc.c
++++ b/drivers/scsi/lpfc/lpfc_hbadisc.c
+@@ -228,10 +228,16 @@ lpfc_dev_loss_tmo_callbk(struct fc_rport *rport)
+ if (ndlp->nlp_state == NLP_STE_MAPPED_NODE)
+ return;
+
+- /* check for recovered fabric node */
+- if (ndlp->nlp_state == NLP_STE_UNMAPPED_NODE &&
+- ndlp->nlp_DID == Fabric_DID)
++ /* Ignore callback for a mismatched (stale) rport */
++ if (ndlp->rport != rport) {
++ lpfc_vlog_msg(vport, KERN_WARNING, LOG_NODE,
++ "6788 fc rport mismatch: d_id x%06x ndlp x%px "
++ "fc rport x%px node rport x%px state x%x "
++ "refcnt %u\n",
++ ndlp->nlp_DID, ndlp, rport, ndlp->rport,
++ ndlp->nlp_state, kref_read(&ndlp->kref));
+ return;
++ }
+
+ if (rport->port_name != wwn_to_u64(ndlp->nlp_portname.u.wwn))
+ lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
+@@ -5564,6 +5570,7 @@ static struct lpfc_nodelist *
+ __lpfc_findnode_did(struct lpfc_vport *vport, uint32_t did)
+ {
+ struct lpfc_nodelist *ndlp;
++ struct lpfc_nodelist *np = NULL;
+ uint32_t data1;
+
+ list_for_each_entry(ndlp, &vport->fc_nodes, nlp_listp) {
+@@ -5578,14 +5585,20 @@ __lpfc_findnode_did(struct lpfc_vport *vport, uint32_t did)
+ ndlp, ndlp->nlp_DID,
+ ndlp->nlp_flag, data1, ndlp->nlp_rpi,
+ ndlp->active_rrqs_xri_bitmap);
+- return ndlp;
++
++ /* Check for new or potentially stale node */
++ if (ndlp->nlp_state != NLP_STE_UNUSED_NODE)
++ return ndlp;
++ np = ndlp;
+ }
+ }
+
+- /* FIND node did <did> NOT FOUND */
+- lpfc_printf_vlog(vport, KERN_INFO, LOG_NODE,
+- "0932 FIND node did x%x NOT FOUND.\n", did);
+- return NULL;
++ if (!np)
++ /* FIND node did <did> NOT FOUND */
++ lpfc_printf_vlog(vport, KERN_INFO, LOG_NODE,
++ "0932 FIND node did x%x NOT FOUND.\n", did);
++
++ return np;
+ }
+
+ struct lpfc_nodelist *
+diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c
+index bcadf11414c8a4..411a6b927c5b09 100644
+--- a/drivers/scsi/lpfc/lpfc_init.c
++++ b/drivers/scsi/lpfc/lpfc_init.c
+@@ -13170,6 +13170,7 @@ lpfc_sli4_enable_msi(struct lpfc_hba *phba)
+ eqhdl = lpfc_get_eq_hdl(0);
+ rc = pci_irq_vector(phba->pcidev, 0);
+ if (rc < 0) {
++ free_irq(phba->pcidev->irq, phba);
+ pci_free_irq_vectors(phba->pcidev);
+ lpfc_printf_log(phba, KERN_WARNING, LOG_INIT,
+ "0496 MSI pci_irq_vec failed (%d)\n", rc);
+@@ -13250,6 +13251,7 @@ lpfc_sli4_enable_intr(struct lpfc_hba *phba, uint32_t cfg_mode)
+ eqhdl = lpfc_get_eq_hdl(0);
+ retval = pci_irq_vector(phba->pcidev, 0);
+ if (retval < 0) {
++ free_irq(phba->pcidev->irq, phba);
+ lpfc_printf_log(phba, KERN_WARNING, LOG_INIT,
+ "0502 INTR pci_irq_vec failed (%d)\n",
+ retval);
+diff --git a/drivers/scsi/megaraid.c b/drivers/scsi/megaraid.c
+index adab151663dd85..2006094af41897 100644
+--- a/drivers/scsi/megaraid.c
++++ b/drivers/scsi/megaraid.c
+@@ -855,8 +855,8 @@ mega_build_cmd(adapter_t *adapter, struct scsi_cmnd *cmd, int *busy)
+ return scb;
+
+ #if MEGA_HAVE_CLUSTERING
+- case RESERVE:
+- case RELEASE:
++ case RESERVE_6:
++ case RELEASE_6:
+
+ /*
+ * Do we support clustering and is the support enabled
+@@ -875,7 +875,7 @@ mega_build_cmd(adapter_t *adapter, struct scsi_cmnd *cmd, int *busy)
+ }
+
+ scb->raw_mbox[0] = MEGA_CLUSTER_CMD;
+- scb->raw_mbox[2] = ( *cmd->cmnd == RESERVE ) ?
++ scb->raw_mbox[2] = *cmd->cmnd == RESERVE_6 ?
+ MEGA_RESERVE_LD : MEGA_RELEASE_LD;
+
+ scb->raw_mbox[3] = ldrv_num;
+@@ -1618,8 +1618,8 @@ mega_cmd_done(adapter_t *adapter, u8 completed[], int nstatus, int status)
+ * failed or the input parameter is invalid
+ */
+ if( status == 1 &&
+- (cmd->cmnd[0] == RESERVE ||
+- cmd->cmnd[0] == RELEASE) ) {
++ (cmd->cmnd[0] == RESERVE_6 ||
++ cmd->cmnd[0] == RELEASE_6) ) {
+
+ cmd->result |= (DID_ERROR << 16) |
+ SAM_STAT_RESERVATION_CONFLICT;
+diff --git a/drivers/scsi/megaraid/megaraid_mbox.c b/drivers/scsi/megaraid/megaraid_mbox.c
+index 60cc3372991fdc..3ba837b3093f82 100644
+--- a/drivers/scsi/megaraid/megaraid_mbox.c
++++ b/drivers/scsi/megaraid/megaraid_mbox.c
+@@ -1725,8 +1725,8 @@ megaraid_mbox_build_cmd(adapter_t *adapter, struct scsi_cmnd *scp, int *busy)
+
+ return scb;
+
+- case RESERVE:
+- case RELEASE:
++ case RESERVE_6:
++ case RELEASE_6:
+ /*
+ * Do we support clustering and is the support enabled
+ */
+@@ -1748,7 +1748,7 @@ megaraid_mbox_build_cmd(adapter_t *adapter, struct scsi_cmnd *scp, int *busy)
+ scb->dev_channel = 0xFF;
+ scb->dev_target = target;
+ ccb->raw_mbox[0] = CLUSTER_CMD;
+- ccb->raw_mbox[2] = (scp->cmnd[0] == RESERVE) ?
++ ccb->raw_mbox[2] = scp->cmnd[0] == RESERVE_6 ?
+ RESERVE_LD : RELEASE_LD;
+
+ ccb->raw_mbox[3] = target;
+@@ -2334,8 +2334,8 @@ megaraid_mbox_dpc(unsigned long devp)
+ * Error code returned is 1 if Reserve or Release
+ * failed or the input parameter is invalid
+ */
+- if (status == 1 && (scp->cmnd[0] == RESERVE ||
+- scp->cmnd[0] == RELEASE)) {
++ if (status == 1 && (scp->cmnd[0] == RESERVE_6 ||
++ scp->cmnd[0] == RELEASE_6)) {
+
+ scp->result = DID_ERROR << 16 |
+ SAM_STAT_RESERVATION_CONFLICT;
+diff --git a/drivers/scsi/mpi3mr/mpi3mr_fw.c b/drivers/scsi/mpi3mr/mpi3mr_fw.c
+index c0a372868e1d7f..604f37e5c0c355 100644
+--- a/drivers/scsi/mpi3mr/mpi3mr_fw.c
++++ b/drivers/scsi/mpi3mr/mpi3mr_fw.c
+@@ -174,6 +174,9 @@ static void mpi3mr_print_event_data(struct mpi3mr_ioc *mrioc,
+ char *desc = NULL;
+ u16 event;
+
++ if (!(mrioc->logging_level & MPI3_DEBUG_EVENT))
++ return;
++
+ event = event_reply->event;
+
+ switch (event) {
+@@ -2744,7 +2747,10 @@ static void mpi3mr_watchdog_work(struct work_struct *work)
+ return;
+ }
+
+- if (mrioc->ts_update_counter++ >= mrioc->ts_update_interval) {
++ if (!(mrioc->facts.ioc_capabilities &
++ MPI3_IOCFACTS_CAPABILITY_NON_SUPERVISOR_IOC) &&
++ (mrioc->ts_update_counter++ >= mrioc->ts_update_interval)) {
++
+ mrioc->ts_update_counter = 0;
+ mpi3mr_sync_timestamp(mrioc);
+ }
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_ctl.c b/drivers/scsi/mpt3sas/mpt3sas_ctl.c
+index 87784c96249a7f..47faa27bc35591 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_ctl.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_ctl.c
+@@ -679,6 +679,7 @@ _ctl_do_mpt_command(struct MPT3SAS_ADAPTER *ioc, struct mpt3_ioctl_command karg,
+ size_t data_in_sz = 0;
+ long ret;
+ u16 device_handle = MPT3SAS_INVALID_DEVICE_HANDLE;
++ int tm_ret;
+
+ issue_reset = 0;
+
+@@ -1120,18 +1121,25 @@ _ctl_do_mpt_command(struct MPT3SAS_ADAPTER *ioc, struct mpt3_ioctl_command karg,
+ if (pcie_device && (!ioc->tm_custom_handling) &&
+ (!(mpt3sas_scsih_is_pcie_scsi_device(
+ pcie_device->device_info))))
+- mpt3sas_scsih_issue_locked_tm(ioc,
++ tm_ret = mpt3sas_scsih_issue_locked_tm(ioc,
+ le16_to_cpu(mpi_request->FunctionDependent1),
+ 0, 0, 0,
+ MPI2_SCSITASKMGMT_TASKTYPE_TARGET_RESET, 0,
+ 0, pcie_device->reset_timeout,
+ MPI26_SCSITASKMGMT_MSGFLAGS_PROTOCOL_LVL_RST_PCIE);
+ else
+- mpt3sas_scsih_issue_locked_tm(ioc,
++ tm_ret = mpt3sas_scsih_issue_locked_tm(ioc,
+ le16_to_cpu(mpi_request->FunctionDependent1),
+ 0, 0, 0,
+ MPI2_SCSITASKMGMT_TASKTYPE_TARGET_RESET, 0,
+ 0, 30, MPI2_SCSITASKMGMT_MSGFLAGS_LINK_RESET);
++
++ if (tm_ret != SUCCESS) {
++ ioc_info(ioc,
++ "target reset failed, issue hard reset: handle (0x%04x)\n",
++ le16_to_cpu(mpi_request->FunctionDependent1));
++ mpt3sas_base_hard_reset_handler(ioc, FORCE_BIG_HAMMER);
++ }
+ } else
+ mpt3sas_base_hard_reset_handler(ioc, FORCE_BIG_HAMMER);
+ }
+diff --git a/drivers/scsi/scsi_debug.c b/drivers/scsi/scsi_debug.c
+index 5ceaa4665e5df7..4da0c259390b5f 100644
+--- a/drivers/scsi/scsi_debug.c
++++ b/drivers/scsi/scsi_debug.c
+@@ -173,6 +173,10 @@ static const char *sdebug_version_date = "20210520";
+ #define DEF_ZBC_MAX_OPEN_ZONES 8
+ #define DEF_ZBC_NR_CONV_ZONES 1
+
++/* Default parameters for tape drives */
++#define TAPE_DEF_DENSITY 0x0
++#define TAPE_DEF_BLKSIZE 0
++
+ #define SDEBUG_LUN_0_VAL 0
+
+ /* bit mask values for sdebug_opts */
+@@ -363,6 +367,10 @@ struct sdebug_dev_info {
+ ktime_t create_ts; /* time since bootup that this device was created */
+ struct sdeb_zone_state *zstate;
+
++ /* For tapes */
++ unsigned int tape_blksize;
++ unsigned int tape_density;
++
+ struct dentry *debugfs_entry;
+ struct spinlock list_lock;
+ struct list_head inject_err_list;
+@@ -773,7 +781,7 @@ static const struct opcode_info_t opcode_info_arr[SDEB_I_LAST_ELEM_P1 + 1] = {
+ /* 20 */
+ {0, 0x1e, 0, 0, NULL, NULL, /* ALLOW REMOVAL */
+ {6, 0, 0, 0, 0x3, 0xc7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} },
+- {0, 0x1, 0, 0, resp_start_stop, NULL, /* REWIND ?? */
++ {0, 0x1, 0, 0, NULL, NULL, /* REWIND ?? */
+ {6, 0x1, 0, 0, 0, 0xc7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} },
+ {0, 0, 0, F_INV_OP | FF_RESPOND, NULL, NULL, /* ATA_PT */
+ {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} },
+@@ -2742,7 +2750,7 @@ static int resp_mode_sense(struct scsi_cmnd *scp,
+ unsigned char *ap;
+ unsigned char *arr __free(kfree);
+ unsigned char *cmd = scp->cmnd;
+- bool dbd, llbaa, msense_6, is_disk, is_zbc;
++ bool dbd, llbaa, msense_6, is_disk, is_zbc, is_tape;
+
+ arr = kzalloc(SDEBUG_MAX_MSENSE_SZ, GFP_ATOMIC);
+ if (!arr)
+@@ -2755,7 +2763,8 @@ static int resp_mode_sense(struct scsi_cmnd *scp,
+ llbaa = msense_6 ? false : !!(cmd[1] & 0x10);
+ is_disk = (sdebug_ptype == TYPE_DISK);
+ is_zbc = devip->zoned;
+- if ((is_disk || is_zbc) && !dbd)
++ is_tape = (sdebug_ptype == TYPE_TAPE);
++ if ((is_disk || is_zbc || is_tape) && !dbd)
+ bd_len = llbaa ? 16 : 8;
+ else
+ bd_len = 0;
+@@ -2793,15 +2802,25 @@ static int resp_mode_sense(struct scsi_cmnd *scp,
+ put_unaligned_be32(0xffffffff, ap + 0);
+ else
+ put_unaligned_be32(sdebug_capacity, ap + 0);
+- put_unaligned_be16(sdebug_sector_size, ap + 6);
++ if (is_tape) {
++ ap[0] = devip->tape_density;
++ put_unaligned_be16(devip->tape_blksize, ap + 6);
++ } else
++ put_unaligned_be16(sdebug_sector_size, ap + 6);
+ offset += bd_len;
+ ap = arr + offset;
+ } else if (16 == bd_len) {
++ if (is_tape) {
++ mk_sense_invalid_fld(scp, SDEB_IN_DATA, 1, 4);
++ return check_condition_result;
++ }
+ put_unaligned_be64((u64)sdebug_capacity, ap + 0);
+ put_unaligned_be32(sdebug_sector_size, ap + 12);
+ offset += bd_len;
+ ap = arr + offset;
+ }
++ if (cmd[2] == 0)
++ goto only_bd; /* Only block descriptor requested */
+
+ /*
+ * N.B. If len>0 before resp_*_pg() call, then form of that call should be:
+@@ -2902,6 +2921,7 @@ static int resp_mode_sense(struct scsi_cmnd *scp,
+ default:
+ goto bad_pcode;
+ }
++only_bd:
+ if (msense_6)
+ arr[0] = offset - 1;
+ else
+@@ -2945,8 +2965,27 @@ static int resp_mode_select(struct scsi_cmnd *scp,
+ __func__, param_len, res);
+ md_len = mselect6 ? (arr[0] + 1) : (get_unaligned_be16(arr + 0) + 2);
+ bd_len = mselect6 ? arr[3] : get_unaligned_be16(arr + 6);
+- off = bd_len + (mselect6 ? 4 : 8);
+- if (md_len > 2 || off >= res) {
++ off = (mselect6 ? 4 : 8);
++ if (sdebug_ptype == TYPE_TAPE) {
++ int blksize;
++
++ if (bd_len != 8) {
++ mk_sense_invalid_fld(scp, SDEB_IN_DATA,
++ mselect6 ? 3 : 6, -1);
++ return check_condition_result;
++ }
++ blksize = get_unaligned_be16(arr + off + 6);
++ if ((blksize % 4) != 0) {
++ mk_sense_invalid_fld(scp, SDEB_IN_DATA, off + 6, -1);
++ return check_condition_result;
++ }
++ devip->tape_density = arr[off];
++ devip->tape_blksize = blksize;
++ }
++ off += bd_len;
++ if (off >= res)
++ return 0; /* No page written, just descriptors */
++ if (md_len > 2) {
+ mk_sense_invalid_fld(scp, SDEB_IN_DATA, 0, -1);
+ return check_condition_result;
+ }
+@@ -5835,6 +5874,10 @@ static struct sdebug_dev_info *sdebug_device_create(
+ } else {
+ devip->zoned = false;
+ }
++ if (sdebug_ptype == TYPE_TAPE) {
++ devip->tape_density = TAPE_DEF_DENSITY;
++ devip->tape_blksize = TAPE_DEF_BLKSIZE;
++ }
+ devip->create_ts = ktime_get_boottime();
+ atomic_set(&devip->stopped, (sdeb_tur_ms_to_ready > 0 ? 2 : 0));
+ spin_lock_init(&devip->list_lock);
+diff --git a/drivers/scsi/scsi_sysctl.c b/drivers/scsi/scsi_sysctl.c
+index be4aef0f4f9962..055a03a83ad68e 100644
+--- a/drivers/scsi/scsi_sysctl.c
++++ b/drivers/scsi/scsi_sysctl.c
+@@ -17,7 +17,9 @@ static const struct ctl_table scsi_table[] = {
+ .data = &scsi_logging_level,
+ .maxlen = sizeof(scsi_logging_level),
+ .mode = 0644,
+- .proc_handler = proc_dointvec },
++ .proc_handler = proc_dointvec_minmax,
++ .extra1 = SYSCTL_ZERO,
++ .extra2 = SYSCTL_INT_MAX },
+ };
+
+ static struct ctl_table_header *scsi_table_header;
+diff --git a/drivers/scsi/st.c b/drivers/scsi/st.c
+index 344e4da336bb56..7dec7958344ea7 100644
+--- a/drivers/scsi/st.c
++++ b/drivers/scsi/st.c
+@@ -952,7 +952,6 @@ static void reset_state(struct scsi_tape *STp)
+ STp->partition = find_partition(STp);
+ if (STp->partition < 0)
+ STp->partition = 0;
+- STp->new_partition = STp->partition;
+ }
+ }
+ \f
+@@ -2897,7 +2896,6 @@ static int st_int_ioctl(struct scsi_tape *STp, unsigned int cmd_in, unsigned lon
+ timeout = STp->long_timeout * 8;
+
+ DEBC_printk(STp, "Erasing tape.\n");
+- fileno = blkno = at_sm = 0;
+ break;
+ case MTSETBLK: /* Set block length */
+ case MTSETDENSITY: /* Set tape density */
+@@ -2930,14 +2928,17 @@ static int st_int_ioctl(struct scsi_tape *STp, unsigned int cmd_in, unsigned lon
+ if (cmd_in == MTSETDENSITY) {
+ (STp->buffer)->b_data[4] = arg;
+ STp->density_changed = 1; /* At least we tried ;-) */
++ STp->changed_density = arg;
+ } else if (cmd_in == SET_DENS_AND_BLK)
+ (STp->buffer)->b_data[4] = arg >> 24;
+ else
+ (STp->buffer)->b_data[4] = STp->density;
+ if (cmd_in == MTSETBLK || cmd_in == SET_DENS_AND_BLK) {
+ ltmp = arg & MT_ST_BLKSIZE_MASK;
+- if (cmd_in == MTSETBLK)
++ if (cmd_in == MTSETBLK) {
+ STp->blksize_changed = 1; /* At least we tried ;-) */
++ STp->changed_blksize = arg;
++ }
+ } else
+ ltmp = STp->block_size;
+ (STp->buffer)->b_data[9] = (ltmp >> 16);
+@@ -3084,7 +3085,9 @@ static int st_int_ioctl(struct scsi_tape *STp, unsigned int cmd_in, unsigned lon
+ cmd_in == MTSETDRVBUFFER ||
+ cmd_in == SET_DENS_AND_BLK) {
+ if (cmdstatp->sense_hdr.sense_key == ILLEGAL_REQUEST &&
+- !(STp->use_pf & PF_TESTED)) {
++ cmdstatp->sense_hdr.asc == 0x24 &&
++ (STp->device)->scsi_level <= SCSI_2 &&
++ !(STp->use_pf & PF_TESTED)) {
+ /* Try the other possible state of Page Format if not
+ already tried */
+ STp->use_pf = (STp->use_pf ^ USE_PF) | PF_TESTED;
+@@ -3636,9 +3639,25 @@ static long st_ioctl(struct file *file, unsigned int cmd_in, unsigned long arg)
+ retval = (-EIO);
+ goto out;
+ }
+- reset_state(STp);
++ reset_state(STp); /* Clears pos_unknown */
+ /* remove this when the midlevel properly clears was_reset */
+ STp->device->was_reset = 0;
++
++ /* Fix the device settings after reset, ignore errors */
++ if (mtc.mt_op == MTREW || mtc.mt_op == MTSEEK ||
++ mtc.mt_op == MTEOM) {
++ if (STp->can_partitions) {
++ /* STp->new_partition contains the
++ * latest partition set
++ */
++ STp->partition = 0;
++ switch_partition(STp);
++ }
++ if (STp->density_changed)
++ st_int_ioctl(STp, MTSETDENSITY, STp->changed_density);
++ if (STp->blksize_changed)
++ st_int_ioctl(STp, MTSETBLK, STp->changed_blksize);
++ }
+ }
+
+ if (mtc.mt_op != MTNOP && mtc.mt_op != MTSETBLK &&
+diff --git a/drivers/scsi/st.h b/drivers/scsi/st.h
+index 1aaaf5369a40fc..6d31b894ee84cc 100644
+--- a/drivers/scsi/st.h
++++ b/drivers/scsi/st.h
+@@ -165,6 +165,7 @@ struct scsi_tape {
+ unsigned char compression_changed;
+ unsigned char drv_buffer;
+ unsigned char density;
++ unsigned char changed_density;
+ unsigned char door_locked;
+ unsigned char autorew_dev; /* auto-rewind device */
+ unsigned char rew_at_close; /* rewind necessary at close */
+@@ -172,6 +173,7 @@ struct scsi_tape {
+ unsigned char cleaning_req; /* cleaning requested? */
+ unsigned char first_tur; /* first TEST UNIT READY */
+ int block_size;
++ int changed_blksize;
+ int min_block;
+ int max_block;
+ int recover_count; /* From tape opening */
+diff --git a/drivers/soc/apple/rtkit-internal.h b/drivers/soc/apple/rtkit-internal.h
+index 27c9fa745fd528..b8d5244678f010 100644
+--- a/drivers/soc/apple/rtkit-internal.h
++++ b/drivers/soc/apple/rtkit-internal.h
+@@ -44,6 +44,7 @@ struct apple_rtkit {
+
+ struct apple_rtkit_shmem ioreport_buffer;
+ struct apple_rtkit_shmem crashlog_buffer;
++ struct apple_rtkit_shmem oslog_buffer;
+
+ struct apple_rtkit_shmem syslog_buffer;
+ char *syslog_msg_buffer;
+diff --git a/drivers/soc/apple/rtkit.c b/drivers/soc/apple/rtkit.c
+index e6d940292c9fbd..45ccbe2cbcd63f 100644
+--- a/drivers/soc/apple/rtkit.c
++++ b/drivers/soc/apple/rtkit.c
+@@ -66,8 +66,9 @@ enum {
+ #define APPLE_RTKIT_SYSLOG_MSG_SIZE GENMASK_ULL(31, 24)
+
+ #define APPLE_RTKIT_OSLOG_TYPE GENMASK_ULL(63, 56)
+-#define APPLE_RTKIT_OSLOG_INIT 1
+-#define APPLE_RTKIT_OSLOG_ACK 3
++#define APPLE_RTKIT_OSLOG_BUFFER_REQUEST 1
++#define APPLE_RTKIT_OSLOG_SIZE GENMASK_ULL(55, 36)
++#define APPLE_RTKIT_OSLOG_IOVA GENMASK_ULL(35, 0)
+
+ #define APPLE_RTKIT_MIN_SUPPORTED_VERSION 11
+ #define APPLE_RTKIT_MAX_SUPPORTED_VERSION 12
+@@ -251,15 +252,21 @@ static int apple_rtkit_common_rx_get_buffer(struct apple_rtkit *rtk,
+ struct apple_rtkit_shmem *buffer,
+ u8 ep, u64 msg)
+ {
+- size_t n_4kpages = FIELD_GET(APPLE_RTKIT_BUFFER_REQUEST_SIZE, msg);
+ u64 reply;
+ int err;
+
++ /* The different size vs. IOVA shifts look odd but are indeed correct this way */
++ if (ep == APPLE_RTKIT_EP_OSLOG) {
++ buffer->size = FIELD_GET(APPLE_RTKIT_OSLOG_SIZE, msg);
++ buffer->iova = FIELD_GET(APPLE_RTKIT_OSLOG_IOVA, msg) << 12;
++ } else {
++ buffer->size = FIELD_GET(APPLE_RTKIT_BUFFER_REQUEST_SIZE, msg) << 12;
++ buffer->iova = FIELD_GET(APPLE_RTKIT_BUFFER_REQUEST_IOVA, msg);
++ }
++
+ buffer->buffer = NULL;
+ buffer->iomem = NULL;
+ buffer->is_mapped = false;
+- buffer->iova = FIELD_GET(APPLE_RTKIT_BUFFER_REQUEST_IOVA, msg);
+- buffer->size = n_4kpages << 12;
+
+ dev_dbg(rtk->dev, "RTKit: buffer request for 0x%zx bytes at %pad\n",
+ buffer->size, &buffer->iova);
+@@ -284,11 +291,21 @@ static int apple_rtkit_common_rx_get_buffer(struct apple_rtkit *rtk,
+ }
+
+ if (!buffer->is_mapped) {
+- reply = FIELD_PREP(APPLE_RTKIT_SYSLOG_TYPE,
+- APPLE_RTKIT_BUFFER_REQUEST);
+- reply |= FIELD_PREP(APPLE_RTKIT_BUFFER_REQUEST_SIZE, n_4kpages);
+- reply |= FIELD_PREP(APPLE_RTKIT_BUFFER_REQUEST_IOVA,
+- buffer->iova);
++ /* oslog uses different fields and needs a shifted IOVA instead of size */
++ if (ep == APPLE_RTKIT_EP_OSLOG) {
++ reply = FIELD_PREP(APPLE_RTKIT_OSLOG_TYPE,
++ APPLE_RTKIT_OSLOG_BUFFER_REQUEST);
++ reply |= FIELD_PREP(APPLE_RTKIT_OSLOG_SIZE, buffer->size);
++ reply |= FIELD_PREP(APPLE_RTKIT_OSLOG_IOVA,
++ buffer->iova >> 12);
++ } else {
++ reply = FIELD_PREP(APPLE_RTKIT_SYSLOG_TYPE,
++ APPLE_RTKIT_BUFFER_REQUEST);
++ reply |= FIELD_PREP(APPLE_RTKIT_BUFFER_REQUEST_SIZE,
++ buffer->size >> 12);
++ reply |= FIELD_PREP(APPLE_RTKIT_BUFFER_REQUEST_IOVA,
++ buffer->iova);
++ }
+ apple_rtkit_send_message(rtk, ep, reply, NULL, false);
+ }
+
+@@ -482,25 +499,18 @@ static void apple_rtkit_syslog_rx(struct apple_rtkit *rtk, u64 msg)
+ }
+ }
+
+-static void apple_rtkit_oslog_rx_init(struct apple_rtkit *rtk, u64 msg)
+-{
+- u64 ack;
+-
+- dev_dbg(rtk->dev, "RTKit: oslog init: msg: 0x%llx\n", msg);
+- ack = FIELD_PREP(APPLE_RTKIT_OSLOG_TYPE, APPLE_RTKIT_OSLOG_ACK);
+- apple_rtkit_send_message(rtk, APPLE_RTKIT_EP_OSLOG, ack, NULL, false);
+-}
+-
+ static void apple_rtkit_oslog_rx(struct apple_rtkit *rtk, u64 msg)
+ {
+ u8 type = FIELD_GET(APPLE_RTKIT_OSLOG_TYPE, msg);
+
+ switch (type) {
+- case APPLE_RTKIT_OSLOG_INIT:
+- apple_rtkit_oslog_rx_init(rtk, msg);
++ case APPLE_RTKIT_OSLOG_BUFFER_REQUEST:
++ apple_rtkit_common_rx_get_buffer(rtk, &rtk->oslog_buffer,
++ APPLE_RTKIT_EP_OSLOG, msg);
+ break;
+ default:
+- dev_warn(rtk->dev, "RTKit: Unknown oslog message: %llx\n", msg);
++ dev_warn(rtk->dev, "RTKit: Unknown oslog message: %llx\n",
++ msg);
+ }
+ }
+
+@@ -667,7 +677,7 @@ struct apple_rtkit *apple_rtkit_init(struct device *dev, void *cookie,
+ rtk->mbox->rx = apple_rtkit_rx;
+ rtk->mbox->cookie = rtk;
+
+- rtk->wq = alloc_ordered_workqueue("rtkit-%s", WQ_MEM_RECLAIM,
++ rtk->wq = alloc_ordered_workqueue("rtkit-%s", WQ_HIGHPRI | WQ_MEM_RECLAIM,
+ dev_name(rtk->dev));
+ if (!rtk->wq) {
+ ret = -ENOMEM;
+@@ -710,6 +720,7 @@ int apple_rtkit_reinit(struct apple_rtkit *rtk)
+
+ apple_rtkit_free_buffer(rtk, &rtk->ioreport_buffer);
+ apple_rtkit_free_buffer(rtk, &rtk->crashlog_buffer);
++ apple_rtkit_free_buffer(rtk, &rtk->oslog_buffer);
+ apple_rtkit_free_buffer(rtk, &rtk->syslog_buffer);
+
+ kfree(rtk->syslog_msg_buffer);
+@@ -890,6 +901,7 @@ void apple_rtkit_free(struct apple_rtkit *rtk)
+
+ apple_rtkit_free_buffer(rtk, &rtk->ioreport_buffer);
+ apple_rtkit_free_buffer(rtk, &rtk->crashlog_buffer);
++ apple_rtkit_free_buffer(rtk, &rtk->oslog_buffer);
+ apple_rtkit_free_buffer(rtk, &rtk->syslog_buffer);
+
+ kfree(rtk->syslog_msg_buffer);
+diff --git a/drivers/soc/mediatek/mtk-mutex.c b/drivers/soc/mediatek/mtk-mutex.c
+index 5250c1d702eb9b..aaa965d4b050a7 100644
+--- a/drivers/soc/mediatek/mtk-mutex.c
++++ b/drivers/soc/mediatek/mtk-mutex.c
+@@ -155,6 +155,7 @@
+ #define MT8188_MUTEX_MOD_DISP1_VPP_MERGE3 23
+ #define MT8188_MUTEX_MOD_DISP1_VPP_MERGE4 24
+ #define MT8188_MUTEX_MOD_DISP1_DISP_MIXER 30
++#define MT8188_MUTEX_MOD_DISP1_DPI1 38
+ #define MT8188_MUTEX_MOD_DISP1_DP_INTF1 39
+
+ #define MT8195_MUTEX_MOD_DISP_OVL0 0
+@@ -289,6 +290,7 @@
+ #define MT8188_MUTEX_SOF_DSI0 1
+ #define MT8188_MUTEX_SOF_DP_INTF0 3
+ #define MT8188_MUTEX_SOF_DP_INTF1 4
++#define MT8188_MUTEX_SOF_DPI1 5
+ #define MT8195_MUTEX_SOF_DSI0 1
+ #define MT8195_MUTEX_SOF_DSI1 2
+ #define MT8195_MUTEX_SOF_DP_INTF0 3
+@@ -301,6 +303,7 @@
+ #define MT8188_MUTEX_EOF_DSI0 (MT8188_MUTEX_SOF_DSI0 << 7)
+ #define MT8188_MUTEX_EOF_DP_INTF0 (MT8188_MUTEX_SOF_DP_INTF0 << 7)
+ #define MT8188_MUTEX_EOF_DP_INTF1 (MT8188_MUTEX_SOF_DP_INTF1 << 7)
++#define MT8188_MUTEX_EOF_DPI1 (MT8188_MUTEX_SOF_DPI1 << 7)
+ #define MT8195_MUTEX_EOF_DSI0 (MT8195_MUTEX_SOF_DSI0 << 7)
+ #define MT8195_MUTEX_EOF_DSI1 (MT8195_MUTEX_SOF_DSI1 << 7)
+ #define MT8195_MUTEX_EOF_DP_INTF0 (MT8195_MUTEX_SOF_DP_INTF0 << 7)
+@@ -472,6 +475,7 @@ static const u8 mt8188_mutex_mod[DDP_COMPONENT_ID_MAX] = {
+ [DDP_COMPONENT_PWM0] = MT8188_MUTEX_MOD2_DISP_PWM0,
+ [DDP_COMPONENT_DP_INTF0] = MT8188_MUTEX_MOD_DISP_DP_INTF0,
+ [DDP_COMPONENT_DP_INTF1] = MT8188_MUTEX_MOD_DISP1_DP_INTF1,
++ [DDP_COMPONENT_DPI1] = MT8188_MUTEX_MOD_DISP1_DPI1,
+ [DDP_COMPONENT_ETHDR_MIXER] = MT8188_MUTEX_MOD_DISP1_DISP_MIXER,
+ [DDP_COMPONENT_MDP_RDMA0] = MT8188_MUTEX_MOD_DISP1_MDP_RDMA0,
+ [DDP_COMPONENT_MDP_RDMA1] = MT8188_MUTEX_MOD_DISP1_MDP_RDMA1,
+@@ -686,6 +690,8 @@ static const u16 mt8188_mutex_sof[DDP_MUTEX_SOF_MAX] = {
+ [MUTEX_SOF_SINGLE_MODE] = MUTEX_SOF_SINGLE_MODE,
+ [MUTEX_SOF_DSI0] =
+ MT8188_MUTEX_SOF_DSI0 | MT8188_MUTEX_EOF_DSI0,
++ [MUTEX_SOF_DPI1] =
++ MT8188_MUTEX_SOF_DPI1 | MT8188_MUTEX_EOF_DPI1,
+ [MUTEX_SOF_DP_INTF0] =
+ MT8188_MUTEX_SOF_DP_INTF0 | MT8188_MUTEX_EOF_DP_INTF0,
+ [MUTEX_SOF_DP_INTF1] =
+diff --git a/drivers/soc/samsung/exynos-asv.c b/drivers/soc/samsung/exynos-asv.c
+index 97006cc3b94610..8e681f51952644 100644
+--- a/drivers/soc/samsung/exynos-asv.c
++++ b/drivers/soc/samsung/exynos-asv.c
+@@ -9,6 +9,7 @@
+ * Samsung Exynos SoC Adaptive Supply Voltage support
+ */
+
++#include <linux/array_size.h>
+ #include <linux/cpu.h>
+ #include <linux/device.h>
+ #include <linux/energy_model.h>
+diff --git a/drivers/soc/samsung/exynos-chipid.c b/drivers/soc/samsung/exynos-chipid.c
+index 95294462ff2113..99c5f9c80101b5 100644
+--- a/drivers/soc/samsung/exynos-chipid.c
++++ b/drivers/soc/samsung/exynos-chipid.c
+@@ -12,6 +12,7 @@
+ * Samsung Exynos SoC Adaptive Supply Voltage and Chip ID support
+ */
+
++#include <linux/array_size.h>
+ #include <linux/device.h>
+ #include <linux/errno.h>
+ #include <linux/mfd/syscon.h>
+diff --git a/drivers/soc/samsung/exynos-pmu.c b/drivers/soc/samsung/exynos-pmu.c
+index dd5256e5aae1ae..c40313886a0123 100644
+--- a/drivers/soc/samsung/exynos-pmu.c
++++ b/drivers/soc/samsung/exynos-pmu.c
+@@ -5,6 +5,7 @@
+ //
+ // Exynos - CPU PMU(Power Management Unit) support
+
++#include <linux/array_size.h>
+ #include <linux/arm-smccc.h>
+ #include <linux/of.h>
+ #include <linux/of_address.h>
+diff --git a/drivers/soc/samsung/exynos-usi.c b/drivers/soc/samsung/exynos-usi.c
+index 114352695ac2bc..5a93a68dba87fd 100644
+--- a/drivers/soc/samsung/exynos-usi.c
++++ b/drivers/soc/samsung/exynos-usi.c
+@@ -6,6 +6,7 @@
+ * Samsung Exynos USI driver (Universal Serial Interface).
+ */
+
++#include <linux/array_size.h>
+ #include <linux/clk.h>
+ #include <linux/mfd/syscon.h>
+ #include <linux/module.h>
+diff --git a/drivers/soc/samsung/exynos3250-pmu.c b/drivers/soc/samsung/exynos3250-pmu.c
+index 30f230ed1769cf..4bad12a995422e 100644
+--- a/drivers/soc/samsung/exynos3250-pmu.c
++++ b/drivers/soc/samsung/exynos3250-pmu.c
+@@ -5,6 +5,7 @@
+ //
+ // Exynos3250 - CPU PMU (Power Management Unit) support
+
++#include <linux/array_size.h>
+ #include <linux/soc/samsung/exynos-regs-pmu.h>
+ #include <linux/soc/samsung/exynos-pmu.h>
+
+diff --git a/drivers/soc/samsung/exynos5250-pmu.c b/drivers/soc/samsung/exynos5250-pmu.c
+index 7a2d50be6b4ac0..2ae5c3e1b07a37 100644
+--- a/drivers/soc/samsung/exynos5250-pmu.c
++++ b/drivers/soc/samsung/exynos5250-pmu.c
+@@ -5,6 +5,7 @@
+ //
+ // Exynos5250 - CPU PMU (Power Management Unit) support
+
++#include <linux/array_size.h>
+ #include <linux/soc/samsung/exynos-regs-pmu.h>
+ #include <linux/soc/samsung/exynos-pmu.h>
+
+diff --git a/drivers/soc/samsung/exynos5420-pmu.c b/drivers/soc/samsung/exynos5420-pmu.c
+index 6fedcd78cb4519..58a2209795f78a 100644
+--- a/drivers/soc/samsung/exynos5420-pmu.c
++++ b/drivers/soc/samsung/exynos5420-pmu.c
+@@ -5,6 +5,7 @@
+ //
+ // Exynos5420 - CPU PMU (Power Management Unit) support
+
++#include <linux/array_size.h>
+ #include <linux/pm.h>
+ #include <linux/soc/samsung/exynos-regs-pmu.h>
+ #include <linux/soc/samsung/exynos-pmu.h>
+diff --git a/drivers/soc/ti/k3-socinfo.c b/drivers/soc/ti/k3-socinfo.c
+index 4fb0f0a248288b..704039eb3c0784 100644
+--- a/drivers/soc/ti/k3-socinfo.c
++++ b/drivers/soc/ti/k3-socinfo.c
+@@ -105,6 +105,12 @@ k3_chipinfo_variant_to_sr(unsigned int partno, unsigned int variant,
+ return -ENODEV;
+ }
+
++static const struct regmap_config k3_chipinfo_regmap_cfg = {
++ .reg_bits = 32,
++ .val_bits = 32,
++ .reg_stride = 4,
++};
++
+ static int k3_chipinfo_probe(struct platform_device *pdev)
+ {
+ struct device_node *node = pdev->dev.of_node;
+@@ -112,13 +118,18 @@ static int k3_chipinfo_probe(struct platform_device *pdev)
+ struct device *dev = &pdev->dev;
+ struct soc_device *soc_dev;
+ struct regmap *regmap;
++ void __iomem *base;
+ u32 partno_id;
+ u32 variant;
+ u32 jtag_id;
+ u32 mfg;
+ int ret;
+
+- regmap = device_node_to_regmap(node);
++ base = devm_platform_ioremap_resource(pdev, 0);
++ if (IS_ERR(base))
++ return PTR_ERR(base);
++
++ regmap = regmap_init_mmio(dev, base, &k3_chipinfo_regmap_cfg);
+ if (IS_ERR(regmap))
+ return PTR_ERR(regmap);
+
+diff --git a/drivers/soundwire/amd_manager.c b/drivers/soundwire/amd_manager.c
+index 5a54b10daf77a8..9d80623787247c 100644
+--- a/drivers/soundwire/amd_manager.c
++++ b/drivers/soundwire/amd_manager.c
+@@ -1139,6 +1139,7 @@ static int __maybe_unused amd_suspend(struct device *dev)
+ amd_sdw_wake_enable(amd_manager, false);
+ return amd_sdw_clock_stop(amd_manager);
+ } else if (amd_manager->power_mode_mask & AMD_SDW_POWER_OFF_MODE) {
++ amd_sdw_wake_enable(amd_manager, false);
+ /*
+ * As per hardware programming sequence on AMD platforms,
+ * clock stop should be invoked first before powering-off
+@@ -1166,6 +1167,7 @@ static int __maybe_unused amd_suspend_runtime(struct device *dev)
+ amd_sdw_wake_enable(amd_manager, true);
+ return amd_sdw_clock_stop(amd_manager);
+ } else if (amd_manager->power_mode_mask & AMD_SDW_POWER_OFF_MODE) {
++ amd_sdw_wake_enable(amd_manager, true);
+ ret = amd_sdw_clock_stop(amd_manager);
+ if (ret)
+ return ret;
+diff --git a/drivers/soundwire/bus.c b/drivers/soundwire/bus.c
+index 9b295fc9acd534..df73e2c0409046 100644
+--- a/drivers/soundwire/bus.c
++++ b/drivers/soundwire/bus.c
+@@ -121,6 +121,10 @@ int sdw_bus_master_add(struct sdw_bus *bus, struct device *parent,
+ set_bit(SDW_GROUP13_DEV_NUM, bus->assigned);
+ set_bit(SDW_MASTER_DEV_NUM, bus->assigned);
+
++ ret = sdw_irq_create(bus, fwnode);
++ if (ret)
++ return ret;
++
+ /*
+ * SDW is an enumerable bus, but devices can be powered off. So,
+ * they won't be able to report as present.
+@@ -137,6 +141,7 @@ int sdw_bus_master_add(struct sdw_bus *bus, struct device *parent,
+
+ if (ret < 0) {
+ dev_err(bus->dev, "Finding slaves failed:%d\n", ret);
++ sdw_irq_delete(bus);
+ return ret;
+ }
+
+@@ -155,10 +160,6 @@ int sdw_bus_master_add(struct sdw_bus *bus, struct device *parent,
+ bus->params.curr_bank = SDW_BANK0;
+ bus->params.next_bank = SDW_BANK1;
+
+- ret = sdw_irq_create(bus, fwnode);
+- if (ret)
+- return ret;
+-
+ return 0;
+ }
+ EXPORT_SYMBOL(sdw_bus_master_add);
+diff --git a/drivers/soundwire/cadence_master.c b/drivers/soundwire/cadence_master.c
+index f367670ea991b9..68be8ff3f02b1b 100644
+--- a/drivers/soundwire/cadence_master.c
++++ b/drivers/soundwire/cadence_master.c
+@@ -1341,7 +1341,7 @@ static u32 cdns_set_initial_frame_shape(int n_rows, int n_cols)
+ return val;
+ }
+
+-static void cdns_init_clock_ctrl(struct sdw_cdns *cdns)
++static int cdns_init_clock_ctrl(struct sdw_cdns *cdns)
+ {
+ struct sdw_bus *bus = &cdns->bus;
+ struct sdw_master_prop *prop = &bus->prop;
+@@ -1355,14 +1355,25 @@ static void cdns_init_clock_ctrl(struct sdw_cdns *cdns)
+ prop->default_row,
+ prop->default_col);
+
++ if (!prop->default_frame_rate || !prop->default_row) {
++ dev_err(cdns->dev, "Default frame_rate %d or row %d is invalid\n",
++ prop->default_frame_rate, prop->default_row);
++ return -EINVAL;
++ }
++
+ /* Set clock divider */
+- divider = (prop->mclk_freq / prop->max_clk_freq) - 1;
++ divider = (prop->mclk_freq * SDW_DOUBLE_RATE_FACTOR /
++ bus->params.curr_dr_freq) - 1;
+
+ cdns_updatel(cdns, CDNS_MCP_CLK_CTRL0,
+ CDNS_MCP_CLK_MCLKD_MASK, divider);
+ cdns_updatel(cdns, CDNS_MCP_CLK_CTRL1,
+ CDNS_MCP_CLK_MCLKD_MASK, divider);
+
++ /* Set frame shape base on the actual bus frequency. */
++ prop->default_col = bus->params.curr_dr_freq /
++ prop->default_frame_rate / prop->default_row;
++
+ /*
+ * Frame shape changes after initialization have to be done
+ * with the bank switch mechanism
+@@ -1375,6 +1386,8 @@ static void cdns_init_clock_ctrl(struct sdw_cdns *cdns)
+ ssp_interval = prop->default_frame_rate / SDW_CADENCE_GSYNC_HZ;
+ cdns_writel(cdns, CDNS_MCP_SSP_CTRL0, ssp_interval);
+ cdns_writel(cdns, CDNS_MCP_SSP_CTRL1, ssp_interval);
++
++ return 0;
+ }
+
+ /**
+@@ -1408,9 +1421,12 @@ EXPORT_SYMBOL(sdw_cdns_soft_reset);
+ */
+ int sdw_cdns_init(struct sdw_cdns *cdns)
+ {
++ int ret;
+ u32 val;
+
+- cdns_init_clock_ctrl(cdns);
++ ret = cdns_init_clock_ctrl(cdns);
++ if (ret)
++ return ret;
+
+ sdw_cdns_check_self_clearing_bits(cdns, __func__, false, 0);
+
+diff --git a/drivers/spi/spi-fsl-dspi.c b/drivers/spi/spi-fsl-dspi.c
+index 067c954cb6ea03..863781ba6c1601 100644
+--- a/drivers/spi/spi-fsl-dspi.c
++++ b/drivers/spi/spi-fsl-dspi.c
+@@ -1,7 +1,7 @@
+ // SPDX-License-Identifier: GPL-2.0+
+ //
+ // Copyright 2013 Freescale Semiconductor, Inc.
+-// Copyright 2020 NXP
++// Copyright 2020-2025 NXP
+ //
+ // Freescale DSPI driver
+ // This file contains a driver for the Freescale DSPI
+@@ -62,6 +62,7 @@
+ #define SPI_SR_TFIWF BIT(18)
+ #define SPI_SR_RFDF BIT(17)
+ #define SPI_SR_CMDFFF BIT(16)
++#define SPI_SR_TXRXS BIT(30)
+ #define SPI_SR_CLEAR (SPI_SR_TCFQF | \
+ SPI_SR_TFUF | SPI_SR_TFFF | \
+ SPI_SR_CMDTCF | SPI_SR_SPEF | \
+@@ -921,9 +922,20 @@ static int dspi_transfer_one_message(struct spi_controller *ctlr,
+ struct spi_transfer *transfer;
+ bool cs = false;
+ int status = 0;
++ u32 val = 0;
++ bool cs_change = false;
+
+ message->actual_length = 0;
+
++ /* Put DSPI in running mode if halted. */
++ regmap_read(dspi->regmap, SPI_MCR, &val);
++ if (val & SPI_MCR_HALT) {
++ regmap_update_bits(dspi->regmap, SPI_MCR, SPI_MCR_HALT, 0);
++ while (regmap_read(dspi->regmap, SPI_SR, &val) >= 0 &&
++ !(val & SPI_SR_TXRXS))
++ ;
++ }
++
+ list_for_each_entry(transfer, &message->transfers, transfer_list) {
+ dspi->cur_transfer = transfer;
+ dspi->cur_msg = message;
+@@ -953,6 +965,7 @@ static int dspi_transfer_one_message(struct spi_controller *ctlr,
+ dspi->tx_cmd |= SPI_PUSHR_CMD_CONT;
+ }
+
++ cs_change = transfer->cs_change;
+ dspi->tx = transfer->tx_buf;
+ dspi->rx = transfer->rx_buf;
+ dspi->len = transfer->len;
+@@ -962,6 +975,8 @@ static int dspi_transfer_one_message(struct spi_controller *ctlr,
+ SPI_MCR_CLR_TXF | SPI_MCR_CLR_RXF,
+ SPI_MCR_CLR_TXF | SPI_MCR_CLR_RXF);
+
++ regmap_write(dspi->regmap, SPI_SR, SPI_SR_CLEAR);
++
+ spi_take_timestamp_pre(dspi->ctlr, dspi->cur_transfer,
+ dspi->progress, !dspi->irq);
+
+@@ -988,6 +1003,15 @@ static int dspi_transfer_one_message(struct spi_controller *ctlr,
+ dspi_deassert_cs(spi, &cs);
+ }
+
++ if (status || !cs_change) {
++ /* Put DSPI in stop mode */
++ regmap_update_bits(dspi->regmap, SPI_MCR,
++ SPI_MCR_HALT, SPI_MCR_HALT);
++ while (regmap_read(dspi->regmap, SPI_SR, &val) >= 0 &&
++ val & SPI_SR_TXRXS)
++ ;
++ }
++
+ message->status = status;
+ spi_finalize_current_message(ctlr);
+
+@@ -1167,6 +1191,20 @@ static int dspi_resume(struct device *dev)
+
+ static SIMPLE_DEV_PM_OPS(dspi_pm, dspi_suspend, dspi_resume);
+
++static const struct regmap_range dspi_yes_ranges[] = {
++ regmap_reg_range(SPI_MCR, SPI_MCR),
++ regmap_reg_range(SPI_TCR, SPI_CTAR(3)),
++ regmap_reg_range(SPI_SR, SPI_TXFR3),
++ regmap_reg_range(SPI_RXFR0, SPI_RXFR3),
++ regmap_reg_range(SPI_CTARE(0), SPI_CTARE(3)),
++ regmap_reg_range(SPI_SREX, SPI_SREX),
++};
++
++static const struct regmap_access_table dspi_access_table = {
++ .yes_ranges = dspi_yes_ranges,
++ .n_yes_ranges = ARRAY_SIZE(dspi_yes_ranges),
++};
++
+ static const struct regmap_range dspi_volatile_ranges[] = {
+ regmap_reg_range(SPI_MCR, SPI_TCR),
+ regmap_reg_range(SPI_SR, SPI_SR),
+@@ -1184,6 +1222,8 @@ static const struct regmap_config dspi_regmap_config = {
+ .reg_stride = 4,
+ .max_register = 0x88,
+ .volatile_table = &dspi_volatile_table,
++ .rd_table = &dspi_access_table,
++ .wr_table = &dspi_access_table,
+ };
+
+ static const struct regmap_range dspi_xspi_volatile_ranges[] = {
+@@ -1205,6 +1245,8 @@ static const struct regmap_config dspi_xspi_regmap_config[] = {
+ .reg_stride = 4,
+ .max_register = 0x13c,
+ .volatile_table = &dspi_xspi_volatile_table,
++ .rd_table = &dspi_access_table,
++ .wr_table = &dspi_access_table,
+ },
+ {
+ .name = "pushr",
+@@ -1227,6 +1269,8 @@ static int dspi_init(struct fsl_dspi *dspi)
+ if (!spi_controller_is_target(dspi->ctlr))
+ mcr |= SPI_MCR_HOST;
+
++ mcr |= SPI_MCR_HALT;
++
+ regmap_write(dspi->regmap, SPI_MCR, mcr);
+ regmap_write(dspi->regmap, SPI_SR, SPI_SR_CLEAR);
+
+diff --git a/drivers/spi/spi-mux.c b/drivers/spi/spi-mux.c
+index c02c4204442f5e..0eb35c4e3987ea 100644
+--- a/drivers/spi/spi-mux.c
++++ b/drivers/spi/spi-mux.c
+@@ -68,9 +68,7 @@ static int spi_mux_select(struct spi_device *spi)
+
+ priv->current_cs = spi_get_chipselect(spi, 0);
+
+- spi_setup(priv->spi);
+-
+- return 0;
++ return spi_setup(priv->spi);
+ }
+
+ static int spi_mux_setup(struct spi_device *spi)
+diff --git a/drivers/spi/spi-rockchip.c b/drivers/spi/spi-rockchip.c
+index 1bc012fce7cb8d..1a6381de6f33d7 100644
+--- a/drivers/spi/spi-rockchip.c
++++ b/drivers/spi/spi-rockchip.c
+@@ -547,7 +547,7 @@ static int rockchip_spi_config(struct rockchip_spi *rs,
+ cr0 |= (spi->mode & 0x3U) << CR0_SCPH_OFFSET;
+ if (spi->mode & SPI_LSB_FIRST)
+ cr0 |= CR0_FBM_LSB << CR0_FBM_OFFSET;
+- if (spi->mode & SPI_CS_HIGH)
++ if ((spi->mode & SPI_CS_HIGH) && !(spi_get_csgpiod(spi, 0)))
+ cr0 |= BIT(spi_get_chipselect(spi, 0)) << CR0_SOI_OFFSET;
+
+ if (xfer->rx_buf && xfer->tx_buf)
+diff --git a/drivers/spi/spi-zynqmp-gqspi.c b/drivers/spi/spi-zynqmp-gqspi.c
+index d800d79f62a70c..12ab13edab543b 100644
+--- a/drivers/spi/spi-zynqmp-gqspi.c
++++ b/drivers/spi/spi-zynqmp-gqspi.c
+@@ -799,7 +799,6 @@ static void zynqmp_process_dma_irq(struct zynqmp_qspi *xqspi)
+ static irqreturn_t zynqmp_qspi_irq(int irq, void *dev_id)
+ {
+ struct zynqmp_qspi *xqspi = (struct zynqmp_qspi *)dev_id;
+- irqreturn_t ret = IRQ_NONE;
+ u32 status, mask, dma_status = 0;
+
+ status = zynqmp_gqspi_read(xqspi, GQSPI_ISR_OFST);
+@@ -814,27 +813,24 @@ static irqreturn_t zynqmp_qspi_irq(int irq, void *dev_id)
+ dma_status);
+ }
+
+- if (mask & GQSPI_ISR_TXNOT_FULL_MASK) {
++ if (!mask && !dma_status)
++ return IRQ_NONE;
++
++ if (mask & GQSPI_ISR_TXNOT_FULL_MASK)
+ zynqmp_qspi_filltxfifo(xqspi, GQSPI_TX_FIFO_FILL);
+- ret = IRQ_HANDLED;
+- }
+
+- if (dma_status & GQSPI_QSPIDMA_DST_I_STS_DONE_MASK) {
++ if (dma_status & GQSPI_QSPIDMA_DST_I_STS_DONE_MASK)
+ zynqmp_process_dma_irq(xqspi);
+- ret = IRQ_HANDLED;
+- } else if (!(mask & GQSPI_IER_RXEMPTY_MASK) &&
+- (mask & GQSPI_IER_GENFIFOEMPTY_MASK)) {
++ else if (!(mask & GQSPI_IER_RXEMPTY_MASK) &&
++ (mask & GQSPI_IER_GENFIFOEMPTY_MASK))
+ zynqmp_qspi_readrxfifo(xqspi, GQSPI_RX_FIFO_FILL);
+- ret = IRQ_HANDLED;
+- }
+
+ if (xqspi->bytes_to_receive == 0 && xqspi->bytes_to_transfer == 0 &&
+ ((status & GQSPI_IRQ_MASK) == GQSPI_IRQ_MASK)) {
+ zynqmp_gqspi_write(xqspi, GQSPI_IDR_OFST, GQSPI_ISR_IDR_MASK);
+ complete(&xqspi->data_completion);
+- ret = IRQ_HANDLED;
+ }
+- return ret;
++ return IRQ_HANDLED;
+ }
+
+ /**
+diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
+index 0c7ea2d0ee85e8..64f9536f123293 100644
+--- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
++++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
+@@ -280,29 +280,6 @@ static int vchiq_platform_init(struct platform_device *pdev, struct vchiq_state
+ return 0;
+ }
+
+-int
+-vchiq_platform_init_state(struct vchiq_state *state)
+-{
+- struct vchiq_arm_state *platform_state;
+-
+- platform_state = devm_kzalloc(state->dev, sizeof(*platform_state), GFP_KERNEL);
+- if (!platform_state)
+- return -ENOMEM;
+-
+- rwlock_init(&platform_state->susp_res_lock);
+-
+- init_completion(&platform_state->ka_evt);
+- atomic_set(&platform_state->ka_use_count, 0);
+- atomic_set(&platform_state->ka_use_ack_count, 0);
+- atomic_set(&platform_state->ka_release_count, 0);
+-
+- platform_state->state = state;
+-
+- state->platform_state = (struct opaque_platform_state *)platform_state;
+-
+- return 0;
+-}
+-
+ static struct vchiq_arm_state *vchiq_platform_get_arm_state(struct vchiq_state *state)
+ {
+ return (struct vchiq_arm_state *)state->platform_state;
+@@ -1011,6 +988,39 @@ vchiq_keepalive_thread_func(void *v)
+ return 0;
+ }
+
++int
++vchiq_platform_init_state(struct vchiq_state *state)
++{
++ struct vchiq_arm_state *platform_state;
++ char threadname[16];
++
++ platform_state = devm_kzalloc(state->dev, sizeof(*platform_state), GFP_KERNEL);
++ if (!platform_state)
++ return -ENOMEM;
++
++ snprintf(threadname, sizeof(threadname), "vchiq-keep/%d",
++ state->id);
++ platform_state->ka_thread = kthread_create(&vchiq_keepalive_thread_func,
++ (void *)state, threadname);
++ if (IS_ERR(platform_state->ka_thread)) {
++ dev_err(state->dev, "couldn't create thread %s\n", threadname);
++ return PTR_ERR(platform_state->ka_thread);
++ }
++
++ rwlock_init(&platform_state->susp_res_lock);
++
++ init_completion(&platform_state->ka_evt);
++ atomic_set(&platform_state->ka_use_count, 0);
++ atomic_set(&platform_state->ka_use_ack_count, 0);
++ atomic_set(&platform_state->ka_release_count, 0);
++
++ platform_state->state = state;
++
++ state->platform_state = (struct opaque_platform_state *)platform_state;
++
++ return 0;
++}
++
+ int
+ vchiq_use_internal(struct vchiq_state *state, struct vchiq_service *service,
+ enum USE_TYPE_E use_type)
+@@ -1331,7 +1341,6 @@ void vchiq_platform_conn_state_changed(struct vchiq_state *state,
+ enum vchiq_connstate newstate)
+ {
+ struct vchiq_arm_state *arm_state = vchiq_platform_get_arm_state(state);
+- char threadname[16];
+
+ dev_dbg(state->dev, "suspend: %d: %s->%s\n",
+ state->id, get_conn_state_name(oldstate), get_conn_state_name(newstate));
+@@ -1346,17 +1355,7 @@ void vchiq_platform_conn_state_changed(struct vchiq_state *state,
+
+ arm_state->first_connect = 1;
+ write_unlock_bh(&arm_state->susp_res_lock);
+- snprintf(threadname, sizeof(threadname), "vchiq-keep/%d",
+- state->id);
+- arm_state->ka_thread = kthread_create(&vchiq_keepalive_thread_func,
+- (void *)state,
+- threadname);
+- if (IS_ERR(arm_state->ka_thread)) {
+- dev_err(state->dev, "suspend: Couldn't create thread %s\n",
+- threadname);
+- } else {
+- wake_up_process(arm_state->ka_thread);
+- }
++ wake_up_process(arm_state->ka_thread);
+ }
+
+ static const struct of_device_id vchiq_of_match[] = {
+diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c
+index 1244ef3aa86c1d..620ba6e0ab0756 100644
+--- a/drivers/target/iscsi/iscsi_target.c
++++ b/drivers/target/iscsi/iscsi_target.c
+@@ -4263,8 +4263,8 @@ int iscsit_close_connection(
+ spin_unlock(&iscsit_global->ts_bitmap_lock);
+
+ iscsit_stop_timers_for_cmds(conn);
+- iscsit_stop_nopin_response_timer(conn);
+ iscsit_stop_nopin_timer(conn);
++ iscsit_stop_nopin_response_timer(conn);
+
+ if (conn->conn_transport->iscsit_wait_conn)
+ conn->conn_transport->iscsit_wait_conn(conn);
+diff --git a/drivers/target/target_core_device.c b/drivers/target/target_core_device.c
+index d1ae3df069a4f3..cc2da086f96e2f 100644
+--- a/drivers/target/target_core_device.c
++++ b/drivers/target/target_core_device.c
+@@ -1078,8 +1078,8 @@ passthrough_parse_cdb(struct se_cmd *cmd,
+ if (!dev->dev_attrib.emulate_pr &&
+ ((cdb[0] == PERSISTENT_RESERVE_IN) ||
+ (cdb[0] == PERSISTENT_RESERVE_OUT) ||
+- (cdb[0] == RELEASE || cdb[0] == RELEASE_10) ||
+- (cdb[0] == RESERVE || cdb[0] == RESERVE_10))) {
++ (cdb[0] == RELEASE_6 || cdb[0] == RELEASE_10) ||
++ (cdb[0] == RESERVE_6 || cdb[0] == RESERVE_10))) {
+ return TCM_UNSUPPORTED_SCSI_OPCODE;
+ }
+
+@@ -1101,7 +1101,7 @@ passthrough_parse_cdb(struct se_cmd *cmd,
+ return target_cmd_size_check(cmd, size);
+ }
+
+- if (cdb[0] == RELEASE || cdb[0] == RELEASE_10) {
++ if (cdb[0] == RELEASE_6 || cdb[0] == RELEASE_10) {
+ cmd->execute_cmd = target_scsi2_reservation_release;
+ if (cdb[0] == RELEASE_10)
+ size = get_unaligned_be16(&cdb[7]);
+@@ -1109,7 +1109,7 @@ passthrough_parse_cdb(struct se_cmd *cmd,
+ size = cmd->data_length;
+ return target_cmd_size_check(cmd, size);
+ }
+- if (cdb[0] == RESERVE || cdb[0] == RESERVE_10) {
++ if (cdb[0] == RESERVE_6 || cdb[0] == RESERVE_10) {
+ cmd->execute_cmd = target_scsi2_reservation_reserve;
+ if (cdb[0] == RESERVE_10)
+ size = get_unaligned_be16(&cdb[7]);
+diff --git a/drivers/target/target_core_pr.c b/drivers/target/target_core_pr.c
+index 4f4ad6af416c8f..34cf2c399b399d 100644
+--- a/drivers/target/target_core_pr.c
++++ b/drivers/target/target_core_pr.c
+@@ -91,7 +91,7 @@ target_scsi2_reservation_check(struct se_cmd *cmd)
+
+ switch (cmd->t_task_cdb[0]) {
+ case INQUIRY:
+- case RELEASE:
++ case RELEASE_6:
+ case RELEASE_10:
+ return 0;
+ default:
+@@ -418,12 +418,12 @@ static int core_scsi3_pr_seq_non_holder(struct se_cmd *cmd, u32 pr_reg_type,
+ return -EINVAL;
+ }
+ break;
+- case RELEASE:
++ case RELEASE_6:
+ case RELEASE_10:
+ /* Handled by CRH=1 in target_scsi2_reservation_release() */
+ ret = 0;
+ break;
+- case RESERVE:
++ case RESERVE_6:
+ case RESERVE_10:
+ /* Handled by CRH=1 in target_scsi2_reservation_reserve() */
+ ret = 0;
+diff --git a/drivers/target/target_core_spc.c b/drivers/target/target_core_spc.c
+index 61c065702350e0..0a02492bef701c 100644
+--- a/drivers/target/target_core_spc.c
++++ b/drivers/target/target_core_spc.c
+@@ -1674,9 +1674,9 @@ static bool tcm_is_pr_enabled(struct target_opcode_descriptor *descr,
+ return true;
+
+ switch (descr->opcode) {
+- case RESERVE:
++ case RESERVE_6:
+ case RESERVE_10:
+- case RELEASE:
++ case RELEASE_6:
+ case RELEASE_10:
+ /*
+ * The pr_ops which are used by the backend modules don't
+@@ -1828,9 +1828,9 @@ static struct target_opcode_descriptor tcm_opcode_pro_register_move = {
+
+ static struct target_opcode_descriptor tcm_opcode_release = {
+ .support = SCSI_SUPPORT_FULL,
+- .opcode = RELEASE,
++ .opcode = RELEASE_6,
+ .cdb_size = 6,
+- .usage_bits = {RELEASE, 0x00, 0x00, 0x00,
++ .usage_bits = {RELEASE_6, 0x00, 0x00, 0x00,
+ 0x00, SCSI_CONTROL_MASK},
+ .enabled = tcm_is_pr_enabled,
+ };
+@@ -1847,9 +1847,9 @@ static struct target_opcode_descriptor tcm_opcode_release10 = {
+
+ static struct target_opcode_descriptor tcm_opcode_reserve = {
+ .support = SCSI_SUPPORT_FULL,
+- .opcode = RESERVE,
++ .opcode = RESERVE_6,
+ .cdb_size = 6,
+- .usage_bits = {RESERVE, 0x00, 0x00, 0x00,
++ .usage_bits = {RESERVE_6, 0x00, 0x00, 0x00,
+ 0x00, SCSI_CONTROL_MASK},
+ .enabled = tcm_is_pr_enabled,
+ };
+@@ -2151,8 +2151,10 @@ spc_rsoc_get_descr(struct se_cmd *cmd, struct target_opcode_descriptor **opcode)
+ if (descr->serv_action_valid)
+ return TCM_INVALID_CDB_FIELD;
+
+- if (!descr->enabled || descr->enabled(descr, cmd))
++ if (!descr->enabled || descr->enabled(descr, cmd)) {
+ *opcode = descr;
++ return TCM_NO_SENSE;
++ }
+ break;
+ case 0x2:
+ /*
+@@ -2166,8 +2168,10 @@ spc_rsoc_get_descr(struct se_cmd *cmd, struct target_opcode_descriptor **opcode)
+ if (descr->serv_action_valid &&
+ descr->service_action == requested_sa) {
+ if (!descr->enabled || descr->enabled(descr,
+- cmd))
++ cmd)) {
+ *opcode = descr;
++ return TCM_NO_SENSE;
++ }
+ } else if (!descr->serv_action_valid)
+ return TCM_INVALID_CDB_FIELD;
+ break;
+@@ -2180,13 +2184,15 @@ spc_rsoc_get_descr(struct se_cmd *cmd, struct target_opcode_descriptor **opcode)
+ */
+ if (descr->service_action == requested_sa)
+ if (!descr->enabled || descr->enabled(descr,
+- cmd))
++ cmd)) {
+ *opcode = descr;
++ return TCM_NO_SENSE;
++ }
+ break;
+ }
+ }
+
+- return 0;
++ return TCM_NO_SENSE;
+ }
+
+ static sense_reason_t
+@@ -2267,9 +2273,9 @@ spc_parse_cdb(struct se_cmd *cmd, unsigned int *size)
+ unsigned char *cdb = cmd->t_task_cdb;
+
+ switch (cdb[0]) {
+- case RESERVE:
++ case RESERVE_6:
+ case RESERVE_10:
+- case RELEASE:
++ case RELEASE_6:
+ case RELEASE_10:
+ if (!dev->dev_attrib.emulate_pr)
+ return TCM_UNSUPPORTED_SCSI_OPCODE;
+@@ -2313,7 +2319,7 @@ spc_parse_cdb(struct se_cmd *cmd, unsigned int *size)
+ *size = get_unaligned_be32(&cdb[5]);
+ cmd->execute_cmd = target_scsi3_emulate_pr_out;
+ break;
+- case RELEASE:
++ case RELEASE_6:
+ case RELEASE_10:
+ if (cdb[0] == RELEASE_10)
+ *size = get_unaligned_be16(&cdb[7]);
+@@ -2322,7 +2328,7 @@ spc_parse_cdb(struct se_cmd *cmd, unsigned int *size)
+
+ cmd->execute_cmd = target_scsi2_reservation_release;
+ break;
+- case RESERVE:
++ case RESERVE_6:
+ case RESERVE_10:
+ /*
+ * The SPC-2 RESERVE does not contain a size in the SCSI CDB.
+diff --git a/drivers/thermal/intel/x86_pkg_temp_thermal.c b/drivers/thermal/intel/x86_pkg_temp_thermal.c
+index 496abf8e55e0d5..2841d14914b710 100644
+--- a/drivers/thermal/intel/x86_pkg_temp_thermal.c
++++ b/drivers/thermal/intel/x86_pkg_temp_thermal.c
+@@ -329,6 +329,7 @@ static int pkg_temp_thermal_device_add(unsigned int cpu)
+ tj_max = intel_tcc_get_tjmax(cpu);
+ if (tj_max < 0)
+ return tj_max;
++ tj_max *= 1000;
+
+ zonedev = kzalloc(sizeof(*zonedev), GFP_KERNEL);
+ if (!zonedev)
+diff --git a/drivers/thermal/mediatek/lvts_thermal.c b/drivers/thermal/mediatek/lvts_thermal.c
+index 0aaa44b734ca43..d0901d8ac85da0 100644
+--- a/drivers/thermal/mediatek/lvts_thermal.c
++++ b/drivers/thermal/mediatek/lvts_thermal.c
+@@ -65,7 +65,6 @@
+ #define LVTS_HW_FILTER 0x0
+ #define LVTS_TSSEL_CONF 0x13121110
+ #define LVTS_CALSCALE_CONF 0x300
+-#define LVTS_MONINT_CONF 0x0300318C
+
+ #define LVTS_MONINT_OFFSET_SENSOR0 0xC
+ #define LVTS_MONINT_OFFSET_SENSOR1 0x180
+@@ -929,7 +928,7 @@ static int lvts_irq_init(struct lvts_ctrl *lvts_ctrl)
+ * The LVTS_MONINT register layout is the same as the LVTS_MONINTSTS
+ * register, except we set the bits to enable the interrupt.
+ */
+- writel(LVTS_MONINT_CONF, LVTS_MONINT(lvts_ctrl->base));
++ writel(0, LVTS_MONINT(lvts_ctrl->base));
+
+ return 0;
+ }
+diff --git a/drivers/thermal/qoriq_thermal.c b/drivers/thermal/qoriq_thermal.c
+index 52e26be8c53df6..aed2729f63d06c 100644
+--- a/drivers/thermal/qoriq_thermal.c
++++ b/drivers/thermal/qoriq_thermal.c
+@@ -18,6 +18,7 @@
+ #define SITES_MAX 16
+ #define TMR_DISABLE 0x0
+ #define TMR_ME 0x80000000
++#define TMR_CMD BIT(29)
+ #define TMR_ALPF 0x0c000000
+ #define TMR_ALPF_V2 0x03000000
+ #define TMTMIR_DEFAULT 0x0000000f
+@@ -356,6 +357,12 @@ static int qoriq_tmu_suspend(struct device *dev)
+ if (ret)
+ return ret;
+
++ if (data->ver > TMU_VER1) {
++ ret = regmap_set_bits(data->regmap, REGS_TMR, TMR_CMD);
++ if (ret)
++ return ret;
++ }
++
+ clk_disable_unprepare(data->clk);
+
+ return 0;
+@@ -370,6 +377,12 @@ static int qoriq_tmu_resume(struct device *dev)
+ if (ret)
+ return ret;
+
++ if (data->ver > TMU_VER1) {
++ ret = regmap_clear_bits(data->regmap, REGS_TMR, TMR_CMD);
++ if (ret)
++ return ret;
++ }
++
+ /* Enable monitoring */
+ return regmap_update_bits(data->regmap, REGS_TMR, TMR_ME, TMR_ME);
+ }
+diff --git a/drivers/thunderbolt/retimer.c b/drivers/thunderbolt/retimer.c
+index 1f25529fe05dac..361fece3d81881 100644
+--- a/drivers/thunderbolt/retimer.c
++++ b/drivers/thunderbolt/retimer.c
+@@ -93,9 +93,11 @@ static int tb_retimer_nvm_add(struct tb_retimer *rt)
+ if (ret)
+ goto err_nvm;
+
+- ret = tb_nvm_add_non_active(nvm, nvm_write);
+- if (ret)
+- goto err_nvm;
++ if (!rt->no_nvm_upgrade) {
++ ret = tb_nvm_add_non_active(nvm, nvm_write);
++ if (ret)
++ goto err_nvm;
++ }
+
+ rt->nvm = nvm;
+ dev_dbg(&rt->dev, "NVM version %x.%x\n", nvm->major, nvm->minor);
+diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c
+index 442967a6cd52dc..886e40f680d451 100644
+--- a/drivers/tty/serial/8250/8250_port.c
++++ b/drivers/tty/serial/8250/8250_port.c
+@@ -1680,7 +1680,7 @@ static void serial8250_disable_ms(struct uart_port *port)
+ if (up->bugs & UART_BUG_NOMSR)
+ return;
+
+- mctrl_gpio_disable_ms(up->gpios);
++ mctrl_gpio_disable_ms_no_sync(up->gpios);
+
+ up->ier &= ~UART_IER_MSI;
+ serial_port_out(port, UART_IER, up->ier);
+diff --git a/drivers/tty/serial/atmel_serial.c b/drivers/tty/serial/atmel_serial.c
+index f44f9d20a97440..8918fbd4bddd5d 100644
+--- a/drivers/tty/serial/atmel_serial.c
++++ b/drivers/tty/serial/atmel_serial.c
+@@ -700,7 +700,7 @@ static void atmel_disable_ms(struct uart_port *port)
+
+ atmel_port->ms_irq_enabled = false;
+
+- mctrl_gpio_disable_ms(atmel_port->gpios);
++ mctrl_gpio_disable_ms_no_sync(atmel_port->gpios);
+
+ if (!mctrl_gpio_to_gpiod(atmel_port->gpios, UART_GPIO_CTS))
+ idr |= ATMEL_US_CTSIC;
+diff --git a/drivers/tty/serial/imx.c b/drivers/tty/serial/imx.c
+index 9c59ec128bb4fc..cfeb3f8cf45eae 100644
+--- a/drivers/tty/serial/imx.c
++++ b/drivers/tty/serial/imx.c
+@@ -1608,7 +1608,7 @@ static void imx_uart_shutdown(struct uart_port *port)
+ imx_uart_dma_exit(sport);
+ }
+
+- mctrl_gpio_disable_ms(sport->gpios);
++ mctrl_gpio_disable_ms_sync(sport->gpios);
+
+ uart_port_lock_irqsave(&sport->port, &flags);
+ ucr2 = imx_uart_readl(sport, UCR2);
+diff --git a/drivers/tty/serial/serial_mctrl_gpio.c b/drivers/tty/serial/serial_mctrl_gpio.c
+index 8855688a5b6c09..ca55bcc0b61119 100644
+--- a/drivers/tty/serial/serial_mctrl_gpio.c
++++ b/drivers/tty/serial/serial_mctrl_gpio.c
+@@ -322,11 +322,7 @@ void mctrl_gpio_enable_ms(struct mctrl_gpios *gpios)
+ }
+ EXPORT_SYMBOL_GPL(mctrl_gpio_enable_ms);
+
+-/**
+- * mctrl_gpio_disable_ms - disable irqs and handling of changes to the ms lines
+- * @gpios: gpios to disable
+- */
+-void mctrl_gpio_disable_ms(struct mctrl_gpios *gpios)
++static void mctrl_gpio_disable_ms(struct mctrl_gpios *gpios, bool sync)
+ {
+ enum mctrl_gpio_idx i;
+
+@@ -342,10 +338,34 @@ void mctrl_gpio_disable_ms(struct mctrl_gpios *gpios)
+ if (!gpios->irq[i])
+ continue;
+
+- disable_irq(gpios->irq[i]);
++ if (sync)
++ disable_irq(gpios->irq[i]);
++ else
++ disable_irq_nosync(gpios->irq[i]);
+ }
+ }
+-EXPORT_SYMBOL_GPL(mctrl_gpio_disable_ms);
++
++/**
++ * mctrl_gpio_disable_ms_sync - disable irqs and handling of changes to the ms
++ * lines, and wait for any pending IRQ to be processed
++ * @gpios: gpios to disable
++ */
++void mctrl_gpio_disable_ms_sync(struct mctrl_gpios *gpios)
++{
++ mctrl_gpio_disable_ms(gpios, true);
++}
++EXPORT_SYMBOL_GPL(mctrl_gpio_disable_ms_sync);
++
++/**
++ * mctrl_gpio_disable_ms_no_sync - disable irqs and handling of changes to the
++ * ms lines, and return immediately
++ * @gpios: gpios to disable
++ */
++void mctrl_gpio_disable_ms_no_sync(struct mctrl_gpios *gpios)
++{
++ mctrl_gpio_disable_ms(gpios, false);
++}
++EXPORT_SYMBOL_GPL(mctrl_gpio_disable_ms_no_sync);
+
+ void mctrl_gpio_enable_irq_wake(struct mctrl_gpios *gpios)
+ {
+diff --git a/drivers/tty/serial/serial_mctrl_gpio.h b/drivers/tty/serial/serial_mctrl_gpio.h
+index fc76910fb105a3..79e97838ebe567 100644
+--- a/drivers/tty/serial/serial_mctrl_gpio.h
++++ b/drivers/tty/serial/serial_mctrl_gpio.h
+@@ -87,9 +87,16 @@ void mctrl_gpio_free(struct device *dev, struct mctrl_gpios *gpios);
+ void mctrl_gpio_enable_ms(struct mctrl_gpios *gpios);
+
+ /*
+- * Disable gpio interrupts to report status line changes.
++ * Disable gpio interrupts to report status line changes, and block until
++ * any corresponding IRQ is processed
+ */
+-void mctrl_gpio_disable_ms(struct mctrl_gpios *gpios);
++void mctrl_gpio_disable_ms_sync(struct mctrl_gpios *gpios);
++
++/*
++ * Disable gpio interrupts to report status line changes, and return
++ * immediately
++ */
++void mctrl_gpio_disable_ms_no_sync(struct mctrl_gpios *gpios);
+
+ /*
+ * Enable gpio wakeup interrupts to enable wake up source.
+@@ -148,7 +155,11 @@ static inline void mctrl_gpio_enable_ms(struct mctrl_gpios *gpios)
+ {
+ }
+
+-static inline void mctrl_gpio_disable_ms(struct mctrl_gpios *gpios)
++static inline void mctrl_gpio_disable_ms_sync(struct mctrl_gpios *gpios)
++{
++}
++
++static inline void mctrl_gpio_disable_ms_no_sync(struct mctrl_gpios *gpios)
+ {
+ }
+
+diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c
+index b1ea48f38248eb..0219135caafa43 100644
+--- a/drivers/tty/serial/sh-sci.c
++++ b/drivers/tty/serial/sh-sci.c
+@@ -104,6 +104,20 @@ struct plat_sci_reg {
+ u8 offset, size;
+ };
+
++struct sci_suspend_regs {
++ u16 scdl;
++ u16 sccks;
++ u16 scsmr;
++ u16 scscr;
++ u16 scfcr;
++ u16 scsptr;
++ u16 hssrr;
++ u16 scpcr;
++ u16 scpdr;
++ u8 scbrr;
++ u8 semr;
++};
++
+ struct sci_port_params {
+ const struct plat_sci_reg regs[SCIx_NR_REGS];
+ unsigned int fifosize;
+@@ -134,6 +148,8 @@ struct sci_port {
+ struct dma_chan *chan_tx;
+ struct dma_chan *chan_rx;
+
++ struct reset_control *rstc;
++
+ #ifdef CONFIG_SERIAL_SH_SCI_DMA
+ struct dma_chan *chan_tx_saved;
+ struct dma_chan *chan_rx_saved;
+@@ -153,6 +169,7 @@ struct sci_port {
+ int rx_trigger;
+ struct timer_list rx_fifo_timer;
+ int rx_fifo_timeout;
++ struct sci_suspend_regs suspend_regs;
+ u16 hscif_tot;
+
+ bool has_rtscts;
+@@ -2298,7 +2315,7 @@ static void sci_shutdown(struct uart_port *port)
+ dev_dbg(port->dev, "%s(%d)\n", __func__, port->line);
+
+ s->autorts = false;
+- mctrl_gpio_disable_ms(to_sci_port(port)->gpios);
++ mctrl_gpio_disable_ms_sync(to_sci_port(port)->gpios);
+
+ uart_port_lock_irqsave(port, &flags);
+ sci_stop_rx(port);
+@@ -3374,6 +3391,7 @@ static struct plat_sci_port *sci_parse_dt(struct platform_device *pdev,
+ }
+
+ sp = &sci_ports[id];
++ sp->rstc = rstc;
+ *dev_id = id;
+
+ p->type = SCI_OF_TYPE(data);
+@@ -3546,13 +3564,77 @@ static int sci_probe(struct platform_device *dev)
+ return 0;
+ }
+
++static void sci_console_save(struct sci_port *s)
++{
++ struct sci_suspend_regs *regs = &s->suspend_regs;
++ struct uart_port *port = &s->port;
++
++ if (sci_getreg(port, SCDL)->size)
++ regs->scdl = sci_serial_in(port, SCDL);
++ if (sci_getreg(port, SCCKS)->size)
++ regs->sccks = sci_serial_in(port, SCCKS);
++ if (sci_getreg(port, SCSMR)->size)
++ regs->scsmr = sci_serial_in(port, SCSMR);
++ if (sci_getreg(port, SCSCR)->size)
++ regs->scscr = sci_serial_in(port, SCSCR);
++ if (sci_getreg(port, SCFCR)->size)
++ regs->scfcr = sci_serial_in(port, SCFCR);
++ if (sci_getreg(port, SCSPTR)->size)
++ regs->scsptr = sci_serial_in(port, SCSPTR);
++ if (sci_getreg(port, SCBRR)->size)
++ regs->scbrr = sci_serial_in(port, SCBRR);
++ if (sci_getreg(port, HSSRR)->size)
++ regs->hssrr = sci_serial_in(port, HSSRR);
++ if (sci_getreg(port, SCPCR)->size)
++ regs->scpcr = sci_serial_in(port, SCPCR);
++ if (sci_getreg(port, SCPDR)->size)
++ regs->scpdr = sci_serial_in(port, SCPDR);
++ if (sci_getreg(port, SEMR)->size)
++ regs->semr = sci_serial_in(port, SEMR);
++}
++
++static void sci_console_restore(struct sci_port *s)
++{
++ struct sci_suspend_regs *regs = &s->suspend_regs;
++ struct uart_port *port = &s->port;
++
++ if (sci_getreg(port, SCDL)->size)
++ sci_serial_out(port, SCDL, regs->scdl);
++ if (sci_getreg(port, SCCKS)->size)
++ sci_serial_out(port, SCCKS, regs->sccks);
++ if (sci_getreg(port, SCSMR)->size)
++ sci_serial_out(port, SCSMR, regs->scsmr);
++ if (sci_getreg(port, SCSCR)->size)
++ sci_serial_out(port, SCSCR, regs->scscr);
++ if (sci_getreg(port, SCFCR)->size)
++ sci_serial_out(port, SCFCR, regs->scfcr);
++ if (sci_getreg(port, SCSPTR)->size)
++ sci_serial_out(port, SCSPTR, regs->scsptr);
++ if (sci_getreg(port, SCBRR)->size)
++ sci_serial_out(port, SCBRR, regs->scbrr);
++ if (sci_getreg(port, HSSRR)->size)
++ sci_serial_out(port, HSSRR, regs->hssrr);
++ if (sci_getreg(port, SCPCR)->size)
++ sci_serial_out(port, SCPCR, regs->scpcr);
++ if (sci_getreg(port, SCPDR)->size)
++ sci_serial_out(port, SCPDR, regs->scpdr);
++ if (sci_getreg(port, SEMR)->size)
++ sci_serial_out(port, SEMR, regs->semr);
++}
++
+ static __maybe_unused int sci_suspend(struct device *dev)
+ {
+ struct sci_port *sport = dev_get_drvdata(dev);
+
+- if (sport)
++ if (sport) {
+ uart_suspend_port(&sci_uart_driver, &sport->port);
+
++ if (!console_suspend_enabled && uart_console(&sport->port))
++ sci_console_save(sport);
++ else
++ return reset_control_assert(sport->rstc);
++ }
++
+ return 0;
+ }
+
+@@ -3560,8 +3642,18 @@ static __maybe_unused int sci_resume(struct device *dev)
+ {
+ struct sci_port *sport = dev_get_drvdata(dev);
+
+- if (sport)
++ if (sport) {
++ if (!console_suspend_enabled && uart_console(&sport->port)) {
++ sci_console_restore(sport);
++ } else {
++ int ret = reset_control_deassert(sport->rstc);
++
++ if (ret)
++ return ret;
++ }
++
+ uart_resume_port(&sci_uart_driver, &sport->port);
++ }
+
+ return 0;
+ }
+diff --git a/drivers/tty/serial/stm32-usart.c b/drivers/tty/serial/stm32-usart.c
+index 0854ad8c90cd24..ad06b760cfca7e 100644
+--- a/drivers/tty/serial/stm32-usart.c
++++ b/drivers/tty/serial/stm32-usart.c
+@@ -944,7 +944,7 @@ static void stm32_usart_enable_ms(struct uart_port *port)
+
+ static void stm32_usart_disable_ms(struct uart_port *port)
+ {
+- mctrl_gpio_disable_ms(to_stm32_port(port)->gpios);
++ mctrl_gpio_disable_ms_sync(to_stm32_port(port)->gpios);
+ }
+
+ /* Transmit stop */
+diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
+index 99e7e4a570f0e2..47ec4e4e4a2a02 100644
+--- a/drivers/ufs/core/ufshcd.c
++++ b/drivers/ufs/core/ufshcd.c
+@@ -278,6 +278,7 @@ static const struct ufs_dev_quirk ufs_fixups[] = {
+ .model = UFS_ANY_MODEL,
+ .quirk = UFS_DEVICE_QUIRK_DELAY_BEFORE_LPM |
+ UFS_DEVICE_QUIRK_HOST_PA_TACTIVATE |
++ UFS_DEVICE_QUIRK_PA_HIBER8TIME |
+ UFS_DEVICE_QUIRK_RECOVERY_FROM_DL_NAC_ERRORS },
+ { .wmanufacturerid = UFS_VENDOR_SKHYNIX,
+ .model = UFS_ANY_MODEL,
+@@ -8384,6 +8385,31 @@ static int ufshcd_quirk_tune_host_pa_tactivate(struct ufs_hba *hba)
+ return ret;
+ }
+
++/**
++ * ufshcd_quirk_override_pa_h8time - Ensures proper adjustment of PA_HIBERN8TIME.
++ * @hba: per-adapter instance
++ *
++ * Some UFS devices require specific adjustments to the PA_HIBERN8TIME parameter
++ * to ensure proper hibernation timing. This function retrieves the current
++ * PA_HIBERN8TIME value and increments it by 100us.
++ */
++static void ufshcd_quirk_override_pa_h8time(struct ufs_hba *hba)
++{
++ u32 pa_h8time;
++ int ret;
++
++ ret = ufshcd_dme_get(hba, UIC_ARG_MIB(PA_HIBERN8TIME), &pa_h8time);
++ if (ret) {
++ dev_err(hba->dev, "Failed to get PA_HIBERN8TIME: %d\n", ret);
++ return;
++ }
++
++ /* Increment by 1 to increase hibernation time by 100 µs */
++ ret = ufshcd_dme_set(hba, UIC_ARG_MIB(PA_HIBERN8TIME), pa_h8time + 1);
++ if (ret)
++ dev_err(hba->dev, "Failed updating PA_HIBERN8TIME: %d\n", ret);
++}
++
+ static void ufshcd_tune_unipro_params(struct ufs_hba *hba)
+ {
+ ufshcd_vops_apply_dev_quirks(hba);
+@@ -8394,6 +8420,9 @@ static void ufshcd_tune_unipro_params(struct ufs_hba *hba)
+
+ if (hba->dev_quirks & UFS_DEVICE_QUIRK_HOST_PA_TACTIVATE)
+ ufshcd_quirk_tune_host_pa_tactivate(hba);
++
++ if (hba->dev_quirks & UFS_DEVICE_QUIRK_PA_HIBER8TIME)
++ ufshcd_quirk_override_pa_h8time(hba);
+ }
+
+ static void ufshcd_clear_dbg_ufs_stats(struct ufs_hba *hba)
+diff --git a/drivers/usb/gadget/function/f_mass_storage.c b/drivers/usb/gadget/function/f_mass_storage.c
+index 2eae8fc2e0db78..94d478b6bcd3d4 100644
+--- a/drivers/usb/gadget/function/f_mass_storage.c
++++ b/drivers/usb/gadget/function/f_mass_storage.c
+@@ -2142,8 +2142,8 @@ static int do_scsi_command(struct fsg_common *common)
+ * of Posix locks.
+ */
+ case FORMAT_UNIT:
+- case RELEASE:
+- case RESERVE:
++ case RELEASE_6:
++ case RESERVE_6:
+ case SEND_DIAGNOSTIC:
+
+ default:
+diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
+index 9aa7e2a876ec1a..d698095fc88d3c 100644
+--- a/drivers/usb/host/xhci-mem.c
++++ b/drivers/usb/host/xhci-mem.c
+@@ -1953,7 +1953,6 @@ void xhci_mem_cleanup(struct xhci_hcd *xhci)
+ xhci->interrupters = NULL;
+
+ xhci->page_size = 0;
+- xhci->page_shift = 0;
+ xhci->usb2_rhub.bus_state.bus_suspended = 0;
+ xhci->usb3_rhub.bus_state.bus_suspended = 0;
+ }
+@@ -2372,6 +2371,22 @@ xhci_create_secondary_interrupter(struct usb_hcd *hcd, unsigned int segs,
+ }
+ EXPORT_SYMBOL_GPL(xhci_create_secondary_interrupter);
+
++static void xhci_hcd_page_size(struct xhci_hcd *xhci)
++{
++ u32 page_size;
++
++ page_size = readl(&xhci->op_regs->page_size) & XHCI_PAGE_SIZE_MASK;
++ if (!is_power_of_2(page_size)) {
++ xhci_warn(xhci, "Invalid page size register = 0x%x\n", page_size);
++ /* Fallback to 4K page size, since that's common */
++ page_size = 1;
++ }
++
++ xhci->page_size = page_size << 12;
++ xhci_dbg_trace(xhci, trace_xhci_dbg_init, "HCD page size set to %iK",
++ xhci->page_size >> 10);
++}
++
+ int xhci_mem_init(struct xhci_hcd *xhci, gfp_t flags)
+ {
+ struct xhci_interrupter *ir;
+@@ -2379,7 +2394,7 @@ int xhci_mem_init(struct xhci_hcd *xhci, gfp_t flags)
+ dma_addr_t dma;
+ unsigned int val, val2;
+ u64 val_64;
+- u32 page_size, temp;
++ u32 temp;
+ int i;
+
+ INIT_LIST_HEAD(&xhci->cmd_list);
+@@ -2388,20 +2403,7 @@ int xhci_mem_init(struct xhci_hcd *xhci, gfp_t flags)
+ INIT_DELAYED_WORK(&xhci->cmd_timer, xhci_handle_command_timeout);
+ init_completion(&xhci->cmd_ring_stop_completion);
+
+- page_size = readl(&xhci->op_regs->page_size);
+- xhci_dbg_trace(xhci, trace_xhci_dbg_init,
+- "Supported page size register = 0x%x", page_size);
+- val = ffs(page_size) - 1;
+- if (val < 16)
+- xhci_dbg_trace(xhci, trace_xhci_dbg_init,
+- "Supported page size of %iK", (1 << (val + 12)) / 1024);
+- else
+- xhci_warn(xhci, "WARN: no supported page size\n");
+- /* Use 4K pages, since that's common and the minimum the HC supports */
+- xhci->page_shift = 12;
+- xhci->page_size = 1 << xhci->page_shift;
+- xhci_dbg_trace(xhci, trace_xhci_dbg_init,
+- "HCD page size set to %iK", xhci->page_size / 1024);
++ xhci_hcd_page_size(xhci);
+
+ /*
+ * Program the Number of Device Slots Enabled field in the CONFIG
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index 5a0e361818c27d..073b0acd8a7429 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -1164,7 +1164,14 @@ static void xhci_handle_cmd_stop_ep(struct xhci_hcd *xhci, int slot_id,
+ */
+ switch (GET_EP_CTX_STATE(ep_ctx)) {
+ case EP_STATE_HALTED:
+- xhci_dbg(xhci, "Stop ep completion raced with stall, reset ep\n");
++ xhci_dbg(xhci, "Stop ep completion raced with stall\n");
++ /*
++ * If the halt happened before Stop Endpoint failed, its transfer event
++ * should have already been handled and Reset Endpoint should be pending.
++ */
++ if (ep->ep_state & EP_HALTED)
++ goto reset_done;
++
+ if (ep->ep_state & EP_HAS_STREAMS) {
+ reset_type = EP_SOFT_RESET;
+ } else {
+@@ -1175,8 +1182,11 @@ static void xhci_handle_cmd_stop_ep(struct xhci_hcd *xhci, int slot_id,
+ }
+ /* reset ep, reset handler cleans up cancelled tds */
+ err = xhci_handle_halted_endpoint(xhci, ep, td, reset_type);
++ xhci_dbg(xhci, "Stop ep completion resetting ep, status %d\n", err);
+ if (err)
+ break;
++reset_done:
++ /* Reset EP handler will clean up cancelled TDs */
+ ep->ep_state &= ~EP_STOP_CMD_PENDING;
+ return;
+ case EP_STATE_STOPPED:
+diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
+index 2c394cba120f15..7d22617417fe08 100644
+--- a/drivers/usb/host/xhci.h
++++ b/drivers/usb/host/xhci.h
+@@ -211,6 +211,9 @@ struct xhci_op_regs {
+ #define CONFIG_CIE (1 << 9)
+ /* bits 10:31 - reserved and should be preserved */
+
++/* bits 15:0 - HCD page shift bit */
++#define XHCI_PAGE_SIZE_MASK 0xffff
++
+ /**
+ * struct xhci_intr_reg - Interrupt Register Set
+ * @irq_pending: IMAN - Interrupt Management Register. Used to enable
+@@ -1514,10 +1517,7 @@ struct xhci_hcd {
+ u16 max_interrupters;
+ /* imod_interval in ns (I * 250ns) */
+ u32 imod_interval;
+- /* 4KB min, 128MB max */
+- int page_size;
+- /* Valid values are 12 to 20, inclusive */
+- int page_shift;
++ u32 page_size;
+ /* MSI-X/MSI vectors */
+ int nvecs;
+ /* optional clocks */
+diff --git a/drivers/usb/storage/debug.c b/drivers/usb/storage/debug.c
+index 576be66ad96271..dda610f689b73a 100644
+--- a/drivers/usb/storage/debug.c
++++ b/drivers/usb/storage/debug.c
+@@ -58,8 +58,8 @@ void usb_stor_show_command(const struct us_data *us, struct scsi_cmnd *srb)
+ case INQUIRY: what = "INQUIRY"; break;
+ case RECOVER_BUFFERED_DATA: what = "RECOVER_BUFFERED_DATA"; break;
+ case MODE_SELECT: what = "MODE_SELECT"; break;
+- case RESERVE: what = "RESERVE"; break;
+- case RELEASE: what = "RELEASE"; break;
++ case RESERVE_6: what = "RESERVE"; break;
++ case RELEASE_6: what = "RELEASE"; break;
+ case COPY: what = "COPY"; break;
+ case ERASE: what = "ERASE"; break;
+ case MODE_SENSE: what = "MODE_SENSE"; break;
+diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
+index 36099047560df9..cccc49a08a1abf 100644
+--- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
++++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
+@@ -3884,6 +3884,9 @@ static int mlx5_vdpa_dev_add(struct vdpa_mgmt_dev *v_mdev, const char *name,
+ ndev->mvdev.max_vqs = max_vqs;
+ mvdev = &ndev->mvdev;
+ mvdev->mdev = mdev;
++ /* cpu_to_mlx5vdpa16() below depends on this flag */
++ mvdev->actual_features =
++ (device_features & BIT_ULL(VIRTIO_F_VERSION_1));
+
+ ndev->vqs = kcalloc(max_vqs, sizeof(*ndev->vqs), GFP_KERNEL);
+ ndev->event_cbs = kcalloc(max_vqs + 1, sizeof(*ndev->event_cbs), GFP_KERNEL);
+diff --git a/drivers/vfio/pci/vfio_pci_config.c b/drivers/vfio/pci/vfio_pci_config.c
+index 94142581c98ce8..14437396d72118 100644
+--- a/drivers/vfio/pci/vfio_pci_config.c
++++ b/drivers/vfio/pci/vfio_pci_config.c
+@@ -1814,7 +1814,8 @@ int vfio_config_init(struct vfio_pci_core_device *vdev)
+ cpu_to_le16(PCI_COMMAND_MEMORY);
+ }
+
+- if (!IS_ENABLED(CONFIG_VFIO_PCI_INTX) || vdev->nointx)
++ if (!IS_ENABLED(CONFIG_VFIO_PCI_INTX) || vdev->nointx ||
++ vdev->pdev->irq == IRQ_NOTCONNECTED)
+ vconfig[PCI_INTERRUPT_PIN] = 0;
+
+ ret = vfio_cap_init(vdev);
+diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c
+index a071f42511d3b0..b8bfc909fd5d8d 100644
+--- a/drivers/vfio/pci/vfio_pci_core.c
++++ b/drivers/vfio/pci/vfio_pci_core.c
+@@ -727,15 +727,7 @@ EXPORT_SYMBOL_GPL(vfio_pci_core_finish_enable);
+ static int vfio_pci_get_irq_count(struct vfio_pci_core_device *vdev, int irq_type)
+ {
+ if (irq_type == VFIO_PCI_INTX_IRQ_INDEX) {
+- u8 pin;
+-
+- if (!IS_ENABLED(CONFIG_VFIO_PCI_INTX) ||
+- vdev->nointx || vdev->pdev->is_virtfn)
+- return 0;
+-
+- pci_read_config_byte(vdev->pdev, PCI_INTERRUPT_PIN, &pin);
+-
+- return pin ? 1 : 0;
++ return vdev->vconfig[PCI_INTERRUPT_PIN] ? 1 : 0;
+ } else if (irq_type == VFIO_PCI_MSI_IRQ_INDEX) {
+ u8 pos;
+ u16 flags;
+diff --git a/drivers/vfio/pci/vfio_pci_intrs.c b/drivers/vfio/pci/vfio_pci_intrs.c
+index 8382c583433565..565966351dfadc 100644
+--- a/drivers/vfio/pci/vfio_pci_intrs.c
++++ b/drivers/vfio/pci/vfio_pci_intrs.c
+@@ -259,7 +259,7 @@ static int vfio_intx_enable(struct vfio_pci_core_device *vdev,
+ if (!is_irq_none(vdev))
+ return -EINVAL;
+
+- if (!pdev->irq)
++ if (!pdev->irq || pdev->irq == IRQ_NOTCONNECTED)
+ return -ENODEV;
+
+ name = kasprintf(GFP_KERNEL_ACCOUNT, "vfio-intx(%s)", pci_name(pdev));
+diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
+index 35a03306d13454..38d243d914d00b 100644
+--- a/drivers/vhost/scsi.c
++++ b/drivers/vhost/scsi.c
+@@ -571,6 +571,9 @@ static void vhost_scsi_complete_cmd_work(struct vhost_work *work)
+ int ret;
+
+ llnode = llist_del_all(&svq->completion_list);
++
++ mutex_lock(&svq->vq.mutex);
++
+ llist_for_each_entry_safe(cmd, t, llnode, tvc_completion_list) {
+ se_cmd = &cmd->tvc_se_cmd;
+
+@@ -604,6 +607,8 @@ static void vhost_scsi_complete_cmd_work(struct vhost_work *work)
+ vhost_scsi_release_cmd_res(se_cmd);
+ }
+
++ mutex_unlock(&svq->vq.mutex);
++
+ if (signal)
+ vhost_signal(&svq->vs->dev, &svq->vq);
+ }
+@@ -757,7 +762,7 @@ vhost_scsi_copy_iov_to_sgl(struct vhost_scsi_cmd *cmd, struct iov_iter *iter,
+ size_t len = iov_iter_count(iter);
+ unsigned int nbytes = 0;
+ struct page *page;
+- int i;
++ int i, ret;
+
+ if (cmd->tvc_data_direction == DMA_FROM_DEVICE) {
+ cmd->saved_iter_addr = dup_iter(&cmd->saved_iter, iter,
+@@ -770,6 +775,7 @@ vhost_scsi_copy_iov_to_sgl(struct vhost_scsi_cmd *cmd, struct iov_iter *iter,
+ page = alloc_page(GFP_KERNEL);
+ if (!page) {
+ i--;
++ ret = -ENOMEM;
+ goto err;
+ }
+
+@@ -777,8 +783,10 @@ vhost_scsi_copy_iov_to_sgl(struct vhost_scsi_cmd *cmd, struct iov_iter *iter,
+ sg_set_page(&sg[i], page, nbytes, 0);
+
+ if (cmd->tvc_data_direction == DMA_TO_DEVICE &&
+- copy_page_from_iter(page, 0, nbytes, iter) != nbytes)
++ copy_page_from_iter(page, 0, nbytes, iter) != nbytes) {
++ ret = -EFAULT;
+ goto err;
++ }
+
+ len -= nbytes;
+ }
+@@ -793,7 +801,7 @@ vhost_scsi_copy_iov_to_sgl(struct vhost_scsi_cmd *cmd, struct iov_iter *iter,
+ for (; i >= 0; i--)
+ __free_page(sg_page(&sg[i]));
+ kfree(cmd->saved_iter_addr);
+- return -ENOMEM;
++ return ret;
+ }
+
+ static int
+@@ -1277,9 +1285,9 @@ vhost_scsi_handle_vq(struct vhost_scsi *vs, struct vhost_virtqueue *vq)
+ " %d\n", cmd, exp_data_len, prot_bytes, data_direction);
+
+ if (data_direction != DMA_NONE) {
+- if (unlikely(vhost_scsi_mapal(cmd, prot_bytes,
+- &prot_iter, exp_data_len,
+- &data_iter))) {
++ ret = vhost_scsi_mapal(cmd, prot_bytes, &prot_iter,
++ exp_data_len, &data_iter);
++ if (unlikely(ret)) {
+ vq_err(vq, "Failed to map iov to sgl\n");
+ vhost_scsi_release_cmd_res(&cmd->tvc_se_cmd);
+ goto err;
+@@ -1346,8 +1354,11 @@ static void vhost_scsi_tmf_resp_work(struct vhost_work *work)
+ else
+ resp_code = VIRTIO_SCSI_S_FUNCTION_REJECTED;
+
++ mutex_lock(&tmf->svq->vq.mutex);
+ vhost_scsi_send_tmf_resp(tmf->vhost, &tmf->svq->vq, tmf->in_iovs,
+ tmf->vq_desc, &tmf->resp_iov, resp_code);
++ mutex_unlock(&tmf->svq->vq.mutex);
++
+ vhost_scsi_release_tmf_res(tmf);
+ }
+
+diff --git a/drivers/video/fbdev/core/bitblit.c b/drivers/video/fbdev/core/bitblit.c
+index 3ff1b2a8659e87..f9475c14f7339b 100644
+--- a/drivers/video/fbdev/core/bitblit.c
++++ b/drivers/video/fbdev/core/bitblit.c
+@@ -59,12 +59,11 @@ static void bit_bmove(struct vc_data *vc, struct fb_info *info, int sy,
+ }
+
+ static void bit_clear(struct vc_data *vc, struct fb_info *info, int sy,
+- int sx, int height, int width)
++ int sx, int height, int width, int fg, int bg)
+ {
+- int bgshift = (vc->vc_hi_font_mask) ? 13 : 12;
+ struct fb_fillrect region;
+
+- region.color = attr_bgcol_ec(bgshift, vc, info);
++ region.color = bg;
+ region.dx = sx * vc->vc_font.width;
+ region.dy = sy * vc->vc_font.height;
+ region.width = width * vc->vc_font.width;
+diff --git a/drivers/video/fbdev/core/fbcon.c b/drivers/video/fbdev/core/fbcon.c
+index e8b4e8c119b5ce..07d127110ca4c9 100644
+--- a/drivers/video/fbdev/core/fbcon.c
++++ b/drivers/video/fbdev/core/fbcon.c
+@@ -1258,7 +1258,7 @@ static void __fbcon_clear(struct vc_data *vc, unsigned int sy, unsigned int sx,
+ {
+ struct fb_info *info = fbcon_info_from_console(vc->vc_num);
+ struct fbcon_ops *ops = info->fbcon_par;
+-
++ int fg, bg;
+ struct fbcon_display *p = &fb_display[vc->vc_num];
+ u_int y_break;
+
+@@ -1279,16 +1279,18 @@ static void __fbcon_clear(struct vc_data *vc, unsigned int sy, unsigned int sx,
+ fbcon_clear_margins(vc, 0);
+ }
+
++ fg = get_color(vc, info, vc->vc_video_erase_char, 1);
++ bg = get_color(vc, info, vc->vc_video_erase_char, 0);
+ /* Split blits that cross physical y_wrap boundary */
+
+ y_break = p->vrows - p->yscroll;
+ if (sy < y_break && sy + height - 1 >= y_break) {
+ u_int b = y_break - sy;
+- ops->clear(vc, info, real_y(p, sy), sx, b, width);
++ ops->clear(vc, info, real_y(p, sy), sx, b, width, fg, bg);
+ ops->clear(vc, info, real_y(p, sy + b), sx, height - b,
+- width);
++ width, fg, bg);
+ } else
+- ops->clear(vc, info, real_y(p, sy), sx, height, width);
++ ops->clear(vc, info, real_y(p, sy), sx, height, width, fg, bg);
+ }
+
+ static void fbcon_clear(struct vc_data *vc, unsigned int sy, unsigned int sx,
+diff --git a/drivers/video/fbdev/core/fbcon.h b/drivers/video/fbdev/core/fbcon.h
+index df70ea5ec5b379..4d97e6d8a16a24 100644
+--- a/drivers/video/fbdev/core/fbcon.h
++++ b/drivers/video/fbdev/core/fbcon.h
+@@ -55,7 +55,7 @@ struct fbcon_ops {
+ void (*bmove)(struct vc_data *vc, struct fb_info *info, int sy,
+ int sx, int dy, int dx, int height, int width);
+ void (*clear)(struct vc_data *vc, struct fb_info *info, int sy,
+- int sx, int height, int width);
++ int sx, int height, int width, int fb, int bg);
+ void (*putcs)(struct vc_data *vc, struct fb_info *info,
+ const unsigned short *s, int count, int yy, int xx,
+ int fg, int bg);
+@@ -116,42 +116,6 @@ static inline int mono_col(const struct fb_info *info)
+ return (~(0xfff << max_len)) & 0xff;
+ }
+
+-static inline int attr_col_ec(int shift, struct vc_data *vc,
+- struct fb_info *info, int is_fg)
+-{
+- int is_mono01;
+- int col;
+- int fg;
+- int bg;
+-
+- if (!vc)
+- return 0;
+-
+- if (vc->vc_can_do_color)
+- return is_fg ? attr_fgcol(shift,vc->vc_video_erase_char)
+- : attr_bgcol(shift,vc->vc_video_erase_char);
+-
+- if (!info)
+- return 0;
+-
+- col = mono_col(info);
+- is_mono01 = info->fix.visual == FB_VISUAL_MONO01;
+-
+- if (attr_reverse(vc->vc_video_erase_char)) {
+- fg = is_mono01 ? col : 0;
+- bg = is_mono01 ? 0 : col;
+- }
+- else {
+- fg = is_mono01 ? 0 : col;
+- bg = is_mono01 ? col : 0;
+- }
+-
+- return is_fg ? fg : bg;
+-}
+-
+-#define attr_bgcol_ec(bgshift, vc, info) attr_col_ec(bgshift, vc, info, 0)
+-#define attr_fgcol_ec(fgshift, vc, info) attr_col_ec(fgshift, vc, info, 1)
+-
+ /*
+ * Scroll Method
+ */
+diff --git a/drivers/video/fbdev/core/fbcon_ccw.c b/drivers/video/fbdev/core/fbcon_ccw.c
+index f9b794ff7d3968..89ef4ba7e8672b 100644
+--- a/drivers/video/fbdev/core/fbcon_ccw.c
++++ b/drivers/video/fbdev/core/fbcon_ccw.c
+@@ -78,14 +78,13 @@ static void ccw_bmove(struct vc_data *vc, struct fb_info *info, int sy,
+ }
+
+ static void ccw_clear(struct vc_data *vc, struct fb_info *info, int sy,
+- int sx, int height, int width)
++ int sx, int height, int width, int fg, int bg)
+ {
+ struct fbcon_ops *ops = info->fbcon_par;
+ struct fb_fillrect region;
+- int bgshift = (vc->vc_hi_font_mask) ? 13 : 12;
+ u32 vyres = GETVYRES(ops->p, info);
+
+- region.color = attr_bgcol_ec(bgshift,vc,info);
++ region.color = bg;
+ region.dx = sy * vc->vc_font.height;
+ region.dy = vyres - ((sx + width) * vc->vc_font.width);
+ region.height = width * vc->vc_font.width;
+diff --git a/drivers/video/fbdev/core/fbcon_cw.c b/drivers/video/fbdev/core/fbcon_cw.c
+index 903f6fc174e146..b9dac7940fb777 100644
+--- a/drivers/video/fbdev/core/fbcon_cw.c
++++ b/drivers/video/fbdev/core/fbcon_cw.c
+@@ -63,14 +63,13 @@ static void cw_bmove(struct vc_data *vc, struct fb_info *info, int sy,
+ }
+
+ static void cw_clear(struct vc_data *vc, struct fb_info *info, int sy,
+- int sx, int height, int width)
++ int sx, int height, int width, int fg, int bg)
+ {
+ struct fbcon_ops *ops = info->fbcon_par;
+ struct fb_fillrect region;
+- int bgshift = (vc->vc_hi_font_mask) ? 13 : 12;
+ u32 vxres = GETVXRES(ops->p, info);
+
+- region.color = attr_bgcol_ec(bgshift,vc,info);
++ region.color = bg;
+ region.dx = vxres - ((sy + height) * vc->vc_font.height);
+ region.dy = sx * vc->vc_font.width;
+ region.height = width * vc->vc_font.width;
+diff --git a/drivers/video/fbdev/core/fbcon_ud.c b/drivers/video/fbdev/core/fbcon_ud.c
+index 594331936fd3cf..0af7913a2abdcc 100644
+--- a/drivers/video/fbdev/core/fbcon_ud.c
++++ b/drivers/video/fbdev/core/fbcon_ud.c
+@@ -64,15 +64,14 @@ static void ud_bmove(struct vc_data *vc, struct fb_info *info, int sy,
+ }
+
+ static void ud_clear(struct vc_data *vc, struct fb_info *info, int sy,
+- int sx, int height, int width)
++ int sx, int height, int width, int fg, int bg)
+ {
+ struct fbcon_ops *ops = info->fbcon_par;
+ struct fb_fillrect region;
+- int bgshift = (vc->vc_hi_font_mask) ? 13 : 12;
+ u32 vyres = GETVYRES(ops->p, info);
+ u32 vxres = GETVXRES(ops->p, info);
+
+- region.color = attr_bgcol_ec(bgshift,vc,info);
++ region.color = bg;
+ region.dy = vyres - ((sy + height) * vc->vc_font.height);
+ region.dx = vxres - ((sx + width) * vc->vc_font.width);
+ region.width = width * vc->vc_font.width;
+diff --git a/drivers/video/fbdev/core/tileblit.c b/drivers/video/fbdev/core/tileblit.c
+index eff7ec4da1671f..d342b90c42b7fe 100644
+--- a/drivers/video/fbdev/core/tileblit.c
++++ b/drivers/video/fbdev/core/tileblit.c
+@@ -32,16 +32,14 @@ static void tile_bmove(struct vc_data *vc, struct fb_info *info, int sy,
+ }
+
+ static void tile_clear(struct vc_data *vc, struct fb_info *info, int sy,
+- int sx, int height, int width)
++ int sx, int height, int width, int fg, int bg)
+ {
+ struct fb_tilerect rect;
+- int bgshift = (vc->vc_hi_font_mask) ? 13 : 12;
+- int fgshift = (vc->vc_hi_font_mask) ? 9 : 8;
+
+ rect.index = vc->vc_video_erase_char &
+ ((vc->vc_hi_font_mask) ? 0x1ff : 0xff);
+- rect.fg = attr_fgcol_ec(fgshift, vc, info);
+- rect.bg = attr_bgcol_ec(bgshift, vc, info);
++ rect.fg = fg;
++ rect.bg = bg;
+ rect.sx = sx;
+ rect.sy = sy;
+ rect.width = width;
+@@ -76,7 +74,42 @@ static void tile_putcs(struct vc_data *vc, struct fb_info *info,
+ static void tile_clear_margins(struct vc_data *vc, struct fb_info *info,
+ int color, int bottom_only)
+ {
+- return;
++ unsigned int cw = vc->vc_font.width;
++ unsigned int ch = vc->vc_font.height;
++ unsigned int rw = info->var.xres - (vc->vc_cols*cw);
++ unsigned int bh = info->var.yres - (vc->vc_rows*ch);
++ unsigned int rs = info->var.xres - rw;
++ unsigned int bs = info->var.yres - bh;
++ unsigned int vwt = info->var.xres_virtual / cw;
++ unsigned int vht = info->var.yres_virtual / ch;
++ struct fb_tilerect rect;
++
++ rect.index = vc->vc_video_erase_char &
++ ((vc->vc_hi_font_mask) ? 0x1ff : 0xff);
++ rect.fg = color;
++ rect.bg = color;
++
++ if ((int) rw > 0 && !bottom_only) {
++ rect.sx = (info->var.xoffset + rs + cw - 1) / cw;
++ rect.sy = 0;
++ rect.width = (rw + cw - 1) / cw;
++ rect.height = vht;
++ if (rect.width + rect.sx > vwt)
++ rect.width = vwt - rect.sx;
++ if (rect.sx < vwt)
++ info->tileops->fb_tilefill(info, &rect);
++ }
++
++ if ((int) bh > 0) {
++ rect.sx = info->var.xoffset / cw;
++ rect.sy = (info->var.yoffset + bs) / ch;
++ rect.width = rs / cw;
++ rect.height = (bh + ch - 1) / ch;
++ if (rect.height + rect.sy > vht)
++ rect.height = vht - rect.sy;
++ if (rect.sy < vht)
++ info->tileops->fb_tilefill(info, &rect);
++ }
+ }
+
+ static void tile_cursor(struct vc_data *vc, struct fb_info *info, bool enable,
+diff --git a/drivers/video/fbdev/fsl-diu-fb.c b/drivers/video/fbdev/fsl-diu-fb.c
+index 5ac8201c353378..b71d15794ce8b8 100644
+--- a/drivers/video/fbdev/fsl-diu-fb.c
++++ b/drivers/video/fbdev/fsl-diu-fb.c
+@@ -1827,6 +1827,7 @@ static void fsl_diu_remove(struct platform_device *pdev)
+ int i;
+
+ data = dev_get_drvdata(&pdev->dev);
++ device_remove_file(&pdev->dev, &data->dev_attr);
+ disable_lcdc(&data->fsl_diu_info[0]);
+
+ free_irq(data->irq, data->diu_reg);
+diff --git a/drivers/virtio/virtio.c b/drivers/virtio/virtio.c
+index ba37665188b51f..95d5d7993e5b1b 100644
+--- a/drivers/virtio/virtio.c
++++ b/drivers/virtio/virtio.c
+@@ -395,6 +395,40 @@ static const struct cpumask *virtio_irq_get_affinity(struct device *_d,
+ return dev->config->get_vq_affinity(dev, irq_vec);
+ }
+
++static void virtio_dev_shutdown(struct device *_d)
++{
++ struct virtio_device *dev = dev_to_virtio(_d);
++ struct virtio_driver *drv = drv_to_virtio(dev->dev.driver);
++
++ /*
++ * Stop accesses to or from the device.
++ * We only need to do it if there's a driver - no accesses otherwise.
++ */
++ if (!drv)
++ return;
++
++ /* If the driver has its own shutdown method, use that. */
++ if (drv->shutdown) {
++ drv->shutdown(dev);
++ return;
++ }
++
++ /*
++ * Some devices get wedged if you kick them after they are
++ * reset. Mark all vqs as broken to make sure we don't.
++ */
++ virtio_break_device(dev);
++ /*
++ * Guarantee that any callback will see vq->broken as true.
++ */
++ virtio_synchronize_cbs(dev);
++ /*
++ * As IOMMUs are reset on shutdown, this will block device access to memory.
++ * Some devices get wedged if this happens, so reset to make sure it does not.
++ */
++ dev->config->reset(dev);
++}
++
+ static const struct bus_type virtio_bus = {
+ .name = "virtio",
+ .match = virtio_dev_match,
+@@ -403,6 +437,7 @@ static const struct bus_type virtio_bus = {
+ .probe = virtio_dev_probe,
+ .remove = virtio_dev_remove,
+ .irq_get_affinity = virtio_irq_get_affinity,
++ .shutdown = virtio_dev_shutdown,
+ };
+
+ int __register_virtio_driver(struct virtio_driver *driver, struct module *owner)
+diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
+index fdd2d2b07b5a2a..b784aab6686703 100644
+--- a/drivers/virtio/virtio_ring.c
++++ b/drivers/virtio/virtio_ring.c
+@@ -2650,7 +2650,7 @@ bool virtqueue_enable_cb_delayed(struct virtqueue *_vq)
+ struct vring_virtqueue *vq = to_vvq(_vq);
+
+ if (vq->event_triggered)
+- vq->event_triggered = false;
++ data_race(vq->event_triggered = false);
+
+ return vq->packed_ring ? virtqueue_enable_cb_delayed_packed(_vq) :
+ virtqueue_enable_cb_delayed_split(_vq);
+diff --git a/drivers/watchdog/aspeed_wdt.c b/drivers/watchdog/aspeed_wdt.c
+index b4773a6aaf8cc7..837e15701c0e27 100644
+--- a/drivers/watchdog/aspeed_wdt.c
++++ b/drivers/watchdog/aspeed_wdt.c
+@@ -11,21 +11,30 @@
+ #include <linux/io.h>
+ #include <linux/kernel.h>
+ #include <linux/kstrtox.h>
++#include <linux/mfd/syscon.h>
+ #include <linux/module.h>
+ #include <linux/of.h>
+ #include <linux/of_irq.h>
+ #include <linux/platform_device.h>
++#include <linux/regmap.h>
+ #include <linux/watchdog.h>
+
+ static bool nowayout = WATCHDOG_NOWAYOUT;
+ module_param(nowayout, bool, 0);
+ MODULE_PARM_DESC(nowayout, "Watchdog cannot be stopped once started (default="
+ __MODULE_STRING(WATCHDOG_NOWAYOUT) ")");
++struct aspeed_wdt_scu {
++ const char *compatible;
++ u32 reset_status_reg;
++ u32 wdt_reset_mask;
++ u32 wdt_reset_mask_shift;
++};
+
+ struct aspeed_wdt_config {
+ u32 ext_pulse_width_mask;
+ u32 irq_shift;
+ u32 irq_mask;
++ struct aspeed_wdt_scu scu;
+ };
+
+ struct aspeed_wdt {
+@@ -39,18 +48,36 @@ static const struct aspeed_wdt_config ast2400_config = {
+ .ext_pulse_width_mask = 0xff,
+ .irq_shift = 0,
+ .irq_mask = 0,
++ .scu = {
++ .compatible = "aspeed,ast2400-scu",
++ .reset_status_reg = 0x3c,
++ .wdt_reset_mask = 0x1,
++ .wdt_reset_mask_shift = 1,
++ },
+ };
+
+ static const struct aspeed_wdt_config ast2500_config = {
+ .ext_pulse_width_mask = 0xfffff,
+ .irq_shift = 12,
+ .irq_mask = GENMASK(31, 12),
++ .scu = {
++ .compatible = "aspeed,ast2500-scu",
++ .reset_status_reg = 0x3c,
++ .wdt_reset_mask = 0x1,
++ .wdt_reset_mask_shift = 2,
++ },
+ };
+
+ static const struct aspeed_wdt_config ast2600_config = {
+ .ext_pulse_width_mask = 0xfffff,
+ .irq_shift = 0,
+ .irq_mask = GENMASK(31, 10),
++ .scu = {
++ .compatible = "aspeed,ast2600-scu",
++ .reset_status_reg = 0x74,
++ .wdt_reset_mask = 0xf,
++ .wdt_reset_mask_shift = 16,
++ },
+ };
+
+ static const struct of_device_id aspeed_wdt_of_table[] = {
+@@ -213,6 +240,56 @@ static int aspeed_wdt_restart(struct watchdog_device *wdd,
+ return 0;
+ }
+
++static void aspeed_wdt_update_bootstatus(struct platform_device *pdev,
++ struct aspeed_wdt *wdt)
++{
++ const struct resource *res;
++ struct aspeed_wdt_scu scu = wdt->cfg->scu;
++ struct regmap *scu_base;
++ u32 reset_mask_width;
++ u32 reset_mask_shift;
++ u32 idx = 0;
++ u32 status;
++ int ret;
++
++ if (!of_device_is_compatible(pdev->dev.of_node, "aspeed,ast2400-wdt")) {
++ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
++ idx = ((intptr_t)wdt->base & 0x00000fff) / (uintptr_t)resource_size(res);
++ }
++
++ scu_base = syscon_regmap_lookup_by_compatible(scu.compatible);
++ if (IS_ERR(scu_base)) {
++ wdt->wdd.bootstatus = WDIOS_UNKNOWN;
++ return;
++ }
++
++ ret = regmap_read(scu_base, scu.reset_status_reg, &status);
++ if (ret) {
++ wdt->wdd.bootstatus = WDIOS_UNKNOWN;
++ return;
++ }
++
++ reset_mask_width = hweight32(scu.wdt_reset_mask);
++ reset_mask_shift = scu.wdt_reset_mask_shift +
++ reset_mask_width * idx;
++
++ if (status & (scu.wdt_reset_mask << reset_mask_shift))
++ wdt->wdd.bootstatus = WDIOF_CARDRESET;
++
++ /* clear wdt reset event flag */
++ if (of_device_is_compatible(pdev->dev.of_node, "aspeed,ast2400-wdt") ||
++ of_device_is_compatible(pdev->dev.of_node, "aspeed,ast2500-wdt")) {
++ ret = regmap_read(scu_base, scu.reset_status_reg, &status);
++ if (!ret) {
++ status &= ~(scu.wdt_reset_mask << reset_mask_shift);
++ regmap_write(scu_base, scu.reset_status_reg, status);
++ }
++ } else {
++ regmap_write(scu_base, scu.reset_status_reg,
++ scu.wdt_reset_mask << reset_mask_shift);
++ }
++}
++
+ /* access_cs0 shows if cs0 is accessible, hence the reverted bit */
+ static ssize_t access_cs0_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+@@ -458,10 +535,10 @@ static int aspeed_wdt_probe(struct platform_device *pdev)
+ writel(duration - 1, wdt->base + WDT_RESET_WIDTH);
+ }
+
++ aspeed_wdt_update_bootstatus(pdev, wdt);
++
+ status = readl(wdt->base + WDT_TIMEOUT_STATUS);
+ if (status & WDT_TIMEOUT_STATUS_BOOT_SECONDARY) {
+- wdt->wdd.bootstatus = WDIOF_CARDRESET;
+-
+ if (of_device_is_compatible(np, "aspeed,ast2400-wdt") ||
+ of_device_is_compatible(np, "aspeed,ast2500-wdt"))
+ wdt->wdd.groups = bswitch_groups;
+diff --git a/drivers/watchdog/s3c2410_wdt.c b/drivers/watchdog/s3c2410_wdt.c
+index 30450e99e5e9d4..bdd81d8074b249 100644
+--- a/drivers/watchdog/s3c2410_wdt.c
++++ b/drivers/watchdog/s3c2410_wdt.c
+@@ -72,6 +72,8 @@
+ #define EXYNOS850_CLUSTER1_WDTRESET_BIT 23
+ #define EXYNOSAUTOV9_CLUSTER0_WDTRESET_BIT 25
+ #define EXYNOSAUTOV9_CLUSTER1_WDTRESET_BIT 24
++#define EXYNOSAUTOV920_CLUSTER0_WDTRESET_BIT 0
++#define EXYNOSAUTOV920_CLUSTER1_WDTRESET_BIT 1
+
+ #define GS_CLUSTER0_NONCPU_OUT 0x1220
+ #define GS_CLUSTER1_NONCPU_OUT 0x1420
+@@ -312,9 +314,9 @@ static const struct s3c2410_wdt_variant drv_data_exynosautov920_cl0 = {
+ .mask_bit = 2,
+ .mask_reset_inv = true,
+ .rst_stat_reg = EXYNOS5_RST_STAT_REG_OFFSET,
+- .rst_stat_bit = EXYNOSAUTOV9_CLUSTER0_WDTRESET_BIT,
++ .rst_stat_bit = EXYNOSAUTOV920_CLUSTER0_WDTRESET_BIT,
+ .cnt_en_reg = EXYNOSAUTOV920_CLUSTER0_NONCPU_OUT,
+- .cnt_en_bit = 7,
++ .cnt_en_bit = 8,
+ .quirks = QUIRK_HAS_WTCLRINT_REG | QUIRK_HAS_PMU_MASK_RESET |
+ QUIRK_HAS_PMU_RST_STAT | QUIRK_HAS_PMU_CNT_EN |
+ QUIRK_HAS_DBGACK_BIT,
+@@ -325,9 +327,9 @@ static const struct s3c2410_wdt_variant drv_data_exynosautov920_cl1 = {
+ .mask_bit = 2,
+ .mask_reset_inv = true,
+ .rst_stat_reg = EXYNOS5_RST_STAT_REG_OFFSET,
+- .rst_stat_bit = EXYNOSAUTOV9_CLUSTER1_WDTRESET_BIT,
++ .rst_stat_bit = EXYNOSAUTOV920_CLUSTER1_WDTRESET_BIT,
+ .cnt_en_reg = EXYNOSAUTOV920_CLUSTER1_NONCPU_OUT,
+- .cnt_en_bit = 7,
++ .cnt_en_bit = 8,
+ .quirks = QUIRK_HAS_WTCLRINT_REG | QUIRK_HAS_PMU_MASK_RESET |
+ QUIRK_HAS_PMU_RST_STAT | QUIRK_HAS_PMU_CNT_EN |
+ QUIRK_HAS_DBGACK_BIT,
+diff --git a/drivers/xen/pci.c b/drivers/xen/pci.c
+index 416f231809cb69..bfe07adb3e3a6c 100644
+--- a/drivers/xen/pci.c
++++ b/drivers/xen/pci.c
+@@ -43,6 +43,18 @@ static int xen_add_device(struct device *dev)
+ pci_mcfg_reserved = true;
+ }
+ #endif
++
++ if (pci_domain_nr(pci_dev->bus) >> 16) {
++ /*
++ * The hypercall interface is limited to 16bit PCI segment
++ * values, do not attempt to register devices with Xen in
++ * segments greater or equal than 0x10000.
++ */
++ dev_info(dev,
++ "not registering with Xen: invalid PCI segment\n");
++ return 0;
++ }
++
+ if (pci_seg_supported) {
+ DEFINE_RAW_FLEX(struct physdev_pci_device_add, add, optarr, 1);
+
+@@ -149,6 +161,16 @@ static int xen_remove_device(struct device *dev)
+ int r;
+ struct pci_dev *pci_dev = to_pci_dev(dev);
+
++ if (pci_domain_nr(pci_dev->bus) >> 16) {
++ /*
++ * The hypercall interface is limited to 16bit PCI segment
++ * values.
++ */
++ dev_info(dev,
++ "not unregistering with Xen: invalid PCI segment\n");
++ return 0;
++ }
++
+ if (pci_seg_supported) {
+ struct physdev_pci_device device = {
+ .seg = pci_domain_nr(pci_dev->bus),
+@@ -182,6 +204,16 @@ int xen_reset_device(const struct pci_dev *dev)
+ .flags = PCI_DEVICE_RESET_FLR,
+ };
+
++ if (pci_domain_nr(dev->bus) >> 16) {
++ /*
++ * The hypercall interface is limited to 16bit PCI segment
++ * values.
++ */
++ dev_info(&dev->dev,
++ "unable to notify Xen of device reset: invalid PCI segment\n");
++ return 0;
++ }
++
+ return HYPERVISOR_physdev_op(PHYSDEVOP_pci_device_reset, &device);
+ }
+ EXPORT_SYMBOL_GPL(xen_reset_device);
+diff --git a/drivers/xen/platform-pci.c b/drivers/xen/platform-pci.c
+index 544d3f9010b92a..1db82da56db62b 100644
+--- a/drivers/xen/platform-pci.c
++++ b/drivers/xen/platform-pci.c
+@@ -26,6 +26,8 @@
+
+ #define DRV_NAME "xen-platform-pci"
+
++#define PCI_DEVICE_ID_XEN_PLATFORM_XS61 0x0002
++
+ static unsigned long platform_mmio;
+ static unsigned long platform_mmio_alloc;
+ static unsigned long platform_mmiolen;
+@@ -174,6 +176,8 @@ static int platform_pci_probe(struct pci_dev *pdev,
+ static const struct pci_device_id platform_pci_tbl[] = {
+ {PCI_VENDOR_ID_XEN, PCI_DEVICE_ID_XEN_PLATFORM,
+ PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0},
++ {PCI_VENDOR_ID_XEN, PCI_DEVICE_ID_XEN_PLATFORM_XS61,
++ PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0},
+ {0,}
+ };
+
+diff --git a/drivers/xen/xenbus/xenbus_probe.c b/drivers/xen/xenbus/xenbus_probe.c
+index 6d32ffb0113650..86fe6e77905669 100644
+--- a/drivers/xen/xenbus/xenbus_probe.c
++++ b/drivers/xen/xenbus/xenbus_probe.c
+@@ -966,9 +966,15 @@ static int __init xenbus_init(void)
+ if (xen_pv_domain())
+ xen_store_domain_type = XS_PV;
+ if (xen_hvm_domain())
++ {
+ xen_store_domain_type = XS_HVM;
+- if (xen_hvm_domain() && xen_initial_domain())
+- xen_store_domain_type = XS_LOCAL;
++ err = hvm_get_parameter(HVM_PARAM_STORE_EVTCHN, &v);
++ if (err)
++ goto out_error;
++ xen_store_evtchn = (int)v;
++ if (!v && xen_initial_domain())
++ xen_store_domain_type = XS_LOCAL;
++ }
+ if (xen_pv_domain() && !xen_start_info->store_evtchn)
+ xen_store_domain_type = XS_LOCAL;
+ if (xen_pv_domain() && xen_start_info->store_evtchn)
+@@ -987,10 +993,6 @@ static int __init xenbus_init(void)
+ xen_store_interface = gfn_to_virt(xen_store_gfn);
+ break;
+ case XS_HVM:
+- err = hvm_get_parameter(HVM_PARAM_STORE_EVTCHN, &v);
+- if (err)
+- goto out_error;
+- xen_store_evtchn = (int)v;
+ err = hvm_get_parameter(HVM_PARAM_STORE_PFN, &v);
+ if (err)
+ goto out_error;
+diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
+index b96b2359433447..b14faa1d57b8ce 100644
+--- a/fs/btrfs/block-group.c
++++ b/fs/btrfs/block-group.c
+@@ -1888,6 +1888,17 @@ void btrfs_reclaim_bgs_work(struct work_struct *work)
+ up_write(&space_info->groups_sem);
+ goto next;
+ }
++
++ /*
++ * Cache the zone_unusable value before turning the block group
++ * to read only. As soon as the block group is read only it's
++ * zone_unusable value gets moved to the block group's read-only
++ * bytes and isn't available for calculations anymore. We also
++ * cache it before unlocking the block group, to prevent races
++ * (reports from KCSAN and such tools) with tasks updating it.
++ */
++ zone_unusable = bg->zone_unusable;
++
+ spin_unlock(&bg->lock);
+ spin_unlock(&space_info->lock);
+
+@@ -1904,13 +1915,6 @@ void btrfs_reclaim_bgs_work(struct work_struct *work)
+ goto next;
+ }
+
+- /*
+- * Cache the zone_unusable value before turning the block group
+- * to read only. As soon as the blog group is read only it's
+- * zone_unusable value gets moved to the block group's read-only
+- * bytes and isn't available for calculations anymore.
+- */
+- zone_unusable = bg->zone_unusable;
+ ret = inc_block_group_ro(bg, 0);
+ up_write(&space_info->groups_sem);
+ if (ret < 0)
+diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
+index 0c4d486c3048da..18d2210dc72496 100644
+--- a/fs/btrfs/compression.c
++++ b/fs/btrfs/compression.c
+@@ -606,7 +606,7 @@ void btrfs_submit_compressed_read(struct btrfs_bio *bbio)
+ free_extent_map(em);
+
+ cb->nr_folios = DIV_ROUND_UP(compressed_len, PAGE_SIZE);
+- cb->compressed_folios = kcalloc(cb->nr_folios, sizeof(struct page *), GFP_NOFS);
++ cb->compressed_folios = kcalloc(cb->nr_folios, sizeof(struct folio *), GFP_NOFS);
+ if (!cb->compressed_folios) {
+ ret = BLK_STS_RESOURCE;
+ goto out_free_bio;
+diff --git a/fs/btrfs/discard.c b/fs/btrfs/discard.c
+index e9cdc1759dada8..de23c4b3515e58 100644
+--- a/fs/btrfs/discard.c
++++ b/fs/btrfs/discard.c
+@@ -168,13 +168,7 @@ static bool remove_from_discard_list(struct btrfs_discard_ctl *discard_ctl,
+ block_group->discard_eligible_time = 0;
+ queued = !list_empty(&block_group->discard_list);
+ list_del_init(&block_group->discard_list);
+- /*
+- * If the block group is currently running in the discard workfn, we
+- * don't want to deref it, since it's still being used by the workfn.
+- * The workfn will notice this case and deref the block group when it is
+- * finished.
+- */
+- if (queued && !running)
++ if (queued)
+ btrfs_put_block_group(block_group);
+
+ spin_unlock(&discard_ctl->lock);
+@@ -273,9 +267,10 @@ static struct btrfs_block_group *peek_discard_list(
+ block_group->discard_cursor = block_group->start;
+ block_group->discard_state = BTRFS_DISCARD_EXTENTS;
+ }
+- discard_ctl->block_group = block_group;
+ }
+ if (block_group) {
++ btrfs_get_block_group(block_group);
++ discard_ctl->block_group = block_group;
+ *discard_state = block_group->discard_state;
+ *discard_index = block_group->discard_index;
+ }
+@@ -506,9 +501,20 @@ static void btrfs_discard_workfn(struct work_struct *work)
+
+ block_group = peek_discard_list(discard_ctl, &discard_state,
+ &discard_index, now);
+- if (!block_group || !btrfs_run_discard_work(discard_ctl))
++ if (!block_group)
+ return;
++ if (!btrfs_run_discard_work(discard_ctl)) {
++ spin_lock(&discard_ctl->lock);
++ btrfs_put_block_group(block_group);
++ discard_ctl->block_group = NULL;
++ spin_unlock(&discard_ctl->lock);
++ return;
++ }
+ if (now < block_group->discard_eligible_time) {
++ spin_lock(&discard_ctl->lock);
++ btrfs_put_block_group(block_group);
++ discard_ctl->block_group = NULL;
++ spin_unlock(&discard_ctl->lock);
+ btrfs_discard_schedule_work(discard_ctl, false);
+ return;
+ }
+@@ -560,15 +566,7 @@ static void btrfs_discard_workfn(struct work_struct *work)
+ spin_lock(&discard_ctl->lock);
+ discard_ctl->prev_discard = trimmed;
+ discard_ctl->prev_discard_time = now;
+- /*
+- * If the block group was removed from the discard list while it was
+- * running in this workfn, then we didn't deref it, since this function
+- * still owned that reference. But we set the discard_ctl->block_group
+- * back to NULL, so we can use that condition to know that now we need
+- * to deref the block_group.
+- */
+- if (discard_ctl->block_group == NULL)
+- btrfs_put_block_group(block_group);
++ btrfs_put_block_group(block_group);
+ discard_ctl->block_group = NULL;
+ __btrfs_discard_schedule_work(discard_ctl, now, false);
+ spin_unlock(&discard_ctl->lock);
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index ca821e5966bd3a..cc8d7369370124 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -4328,6 +4328,14 @@ void __cold close_ctree(struct btrfs_fs_info *fs_info)
+ /* clear out the rbtree of defraggable inodes */
+ btrfs_cleanup_defrag_inodes(fs_info);
+
++ /*
++ * Handle the error fs first, as it will flush and wait for all ordered
++ * extents. This will generate delayed iputs, thus we want to handle
++ * it first.
++ */
++ if (unlikely(BTRFS_FS_ERROR(fs_info)))
++ btrfs_error_commit_super(fs_info);
++
+ /*
+ * Wait for any fixup workers to complete.
+ * If we don't wait for them here and they are still running by the time
+@@ -4348,6 +4356,19 @@ void __cold close_ctree(struct btrfs_fs_info *fs_info)
+ */
+ btrfs_flush_workqueue(fs_info->delalloc_workers);
+
++ /*
++ * We can have ordered extents getting their last reference dropped from
++ * the fs_info->workers queue because for async writes for data bios we
++ * queue a work for that queue, at btrfs_wq_submit_bio(), that runs
++ * run_one_async_done() which calls btrfs_bio_end_io() in case the bio
++ * has an error, and that later function can do the final
++ * btrfs_put_ordered_extent() on the ordered extent attached to the bio,
++ * which adds a delayed iput for the inode. So we must flush the queue
++ * so that we don't have delayed iputs after committing the current
++ * transaction below and stopping the cleaner and transaction kthreads.
++ */
++ btrfs_flush_workqueue(fs_info->workers);
++
+ /*
+ * When finishing a compressed write bio we schedule a work queue item
+ * to finish an ordered extent - btrfs_finish_compressed_write_work()
+@@ -4418,9 +4439,6 @@ void __cold close_ctree(struct btrfs_fs_info *fs_info)
+ btrfs_err(fs_info, "commit super ret %d", ret);
+ }
+
+- if (BTRFS_FS_ERROR(fs_info))
+- btrfs_error_commit_super(fs_info);
+-
+ kthread_stop(fs_info->transaction_kthread);
+ kthread_stop(fs_info->cleaner_kthread);
+
+@@ -4543,10 +4561,6 @@ static void btrfs_error_commit_super(struct btrfs_fs_info *fs_info)
+ /* cleanup FS via transaction */
+ btrfs_cleanup_transaction(fs_info);
+
+- mutex_lock(&fs_info->cleaner_mutex);
+- btrfs_run_delayed_iputs(fs_info);
+- mutex_unlock(&fs_info->cleaner_mutex);
+-
+ down_write(&fs_info->cleanup_work_sem);
+ up_write(&fs_info->cleanup_work_sem);
+ }
+diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
+index c021aae8875eb7..06922529f19dc5 100644
+--- a/fs/btrfs/extent_io.c
++++ b/fs/btrfs/extent_io.c
+@@ -2842,10 +2842,10 @@ struct extent_buffer *find_extent_buffer(struct btrfs_fs_info *fs_info,
+ return eb;
+ }
+
+-#ifdef CONFIG_BTRFS_FS_RUN_SANITY_TESTS
+ struct extent_buffer *alloc_test_extent_buffer(struct btrfs_fs_info *fs_info,
+ u64 start)
+ {
++#ifdef CONFIG_BTRFS_FS_RUN_SANITY_TESTS
+ struct extent_buffer *eb, *exists = NULL;
+ int ret;
+
+@@ -2881,8 +2881,11 @@ struct extent_buffer *alloc_test_extent_buffer(struct btrfs_fs_info *fs_info,
+ free_eb:
+ btrfs_release_extent_buffer(eb);
+ return exists;
+-}
++#else
++ /* Stub to avoid linker error when compiled with optimizations turned off. */
++ return NULL;
+ #endif
++}
+
+ static struct extent_buffer *grab_extent_buffer(struct btrfs_fs_info *fs_info,
+ struct folio *folio)
+diff --git a/fs/btrfs/extent_io.h b/fs/btrfs/extent_io.h
+index 6c5328bfabc22a..2aefc64cdd2958 100644
+--- a/fs/btrfs/extent_io.h
++++ b/fs/btrfs/extent_io.h
+@@ -297,6 +297,8 @@ static inline int num_extent_pages(const struct extent_buffer *eb)
+ */
+ static inline int num_extent_folios(const struct extent_buffer *eb)
+ {
++ if (!eb->folios[0])
++ return 0;
+ if (folio_order(eb->folios[0]))
+ return 1;
+ return num_extent_pages(eb);
+diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
+index 531312efee8df6..5d0060eb8ff4c4 100644
+--- a/fs/btrfs/scrub.c
++++ b/fs/btrfs/scrub.c
+@@ -1541,8 +1541,8 @@ static int scrub_find_fill_first_stripe(struct btrfs_block_group *bg,
+ u64 extent_gen;
+ int ret;
+
+- if (unlikely(!extent_root)) {
+- btrfs_err(fs_info, "no valid extent root for scrub");
++ if (unlikely(!extent_root || !csum_root)) {
++ btrfs_err(fs_info, "no valid extent or csum root for scrub");
+ return -EUCLEAN;
+ }
+ memset(stripe->sectors, 0, sizeof(struct scrub_sector_verification) *
+diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
+index f437138fefbc5b..bb8a0945b0fd37 100644
+--- a/fs/btrfs/send.c
++++ b/fs/btrfs/send.c
+@@ -487,10 +487,8 @@ static int fs_path_ensure_buf(struct fs_path *p, int len)
+ if (p->buf_len >= len)
+ return 0;
+
+- if (len > PATH_MAX) {
+- WARN_ON(1);
+- return -ENOMEM;
+- }
++ if (WARN_ON(len > PATH_MAX))
++ return -ENAMETOOLONG;
+
+ path_len = p->end - p->start;
+ old_buf_len = p->buf_len;
+diff --git a/fs/btrfs/tree-checker.c b/fs/btrfs/tree-checker.c
+index 43979891f7c895..2b66a6130269a3 100644
+--- a/fs/btrfs/tree-checker.c
++++ b/fs/btrfs/tree-checker.c
+@@ -2235,7 +2235,7 @@ int btrfs_verify_level_key(struct extent_buffer *eb,
+ btrfs_err(fs_info,
+ "tree level mismatch detected, bytenr=%llu level expected=%u has=%u",
+ eb->start, check->level, found_level);
+- return -EIO;
++ return -EUCLEAN;
+ }
+
+ if (!check->has_first_key)
+diff --git a/fs/buffer.c b/fs/buffer.c
+index cc8452f6025167..2494fe3a5e69ee 100644
+--- a/fs/buffer.c
++++ b/fs/buffer.c
+@@ -176,18 +176,8 @@ void end_buffer_write_sync(struct buffer_head *bh, int uptodate)
+ }
+ EXPORT_SYMBOL(end_buffer_write_sync);
+
+-/*
+- * Various filesystems appear to want __find_get_block to be non-blocking.
+- * But it's the page lock which protects the buffers. To get around this,
+- * we get exclusion from try_to_free_buffers with the blockdev mapping's
+- * i_private_lock.
+- *
+- * Hack idea: for the blockdev mapping, i_private_lock contention
+- * may be quite high. This code could TryLock the page, and if that
+- * succeeds, there is no need to take i_private_lock.
+- */
+ static struct buffer_head *
+-__find_get_block_slow(struct block_device *bdev, sector_t block)
++__find_get_block_slow(struct block_device *bdev, sector_t block, bool atomic)
+ {
+ struct address_space *bd_mapping = bdev->bd_mapping;
+ const int blkbits = bd_mapping->host->i_blkbits;
+@@ -204,7 +194,16 @@ __find_get_block_slow(struct block_device *bdev, sector_t block)
+ if (IS_ERR(folio))
+ goto out;
+
+- spin_lock(&bd_mapping->i_private_lock);
++ /*
++ * Folio lock protects the buffers. Callers that cannot block
++ * will fallback to serializing vs try_to_free_buffers() via
++ * the i_private_lock.
++ */
++ if (atomic)
++ spin_lock(&bd_mapping->i_private_lock);
++ else
++ folio_lock(folio);
++
+ head = folio_buffers(folio);
+ if (!head)
+ goto out_unlock;
+@@ -236,7 +235,10 @@ __find_get_block_slow(struct block_device *bdev, sector_t block)
+ 1 << blkbits);
+ }
+ out_unlock:
+- spin_unlock(&bd_mapping->i_private_lock);
++ if (atomic)
++ spin_unlock(&bd_mapping->i_private_lock);
++ else
++ folio_unlock(folio);
+ folio_put(folio);
+ out:
+ return ret;
+@@ -656,7 +658,9 @@ EXPORT_SYMBOL(generic_buffers_fsync);
+ void write_boundary_block(struct block_device *bdev,
+ sector_t bblock, unsigned blocksize)
+ {
+- struct buffer_head *bh = __find_get_block(bdev, bblock + 1, blocksize);
++ struct buffer_head *bh;
++
++ bh = __find_get_block_nonatomic(bdev, bblock + 1, blocksize);
+ if (bh) {
+ if (buffer_dirty(bh))
+ write_dirty_buffer(bh, 0);
+@@ -1388,14 +1392,15 @@ lookup_bh_lru(struct block_device *bdev, sector_t block, unsigned size)
+ * it in the LRU and mark it as accessed. If it is not present then return
+ * NULL
+ */
+-struct buffer_head *
+-__find_get_block(struct block_device *bdev, sector_t block, unsigned size)
++static struct buffer_head *
++find_get_block_common(struct block_device *bdev, sector_t block,
++ unsigned size, bool atomic)
+ {
+ struct buffer_head *bh = lookup_bh_lru(bdev, block, size);
+
+ if (bh == NULL) {
+ /* __find_get_block_slow will mark the page accessed */
+- bh = __find_get_block_slow(bdev, block);
++ bh = __find_get_block_slow(bdev, block, atomic);
+ if (bh)
+ bh_lru_install(bh);
+ } else
+@@ -1403,8 +1408,23 @@ __find_get_block(struct block_device *bdev, sector_t block, unsigned size)
+
+ return bh;
+ }
++
++struct buffer_head *
++__find_get_block(struct block_device *bdev, sector_t block, unsigned size)
++{
++ return find_get_block_common(bdev, block, size, true);
++}
+ EXPORT_SYMBOL(__find_get_block);
+
++/* same as __find_get_block() but allows sleeping contexts */
++struct buffer_head *
++__find_get_block_nonatomic(struct block_device *bdev, sector_t block,
++ unsigned size)
++{
++ return find_get_block_common(bdev, block, size, false);
++}
++EXPORT_SYMBOL(__find_get_block_nonatomic);
++
+ /**
+ * bdev_getblk - Get a buffer_head in a block device's buffer cache.
+ * @bdev: The block device.
+@@ -1422,7 +1442,12 @@ EXPORT_SYMBOL(__find_get_block);
+ struct buffer_head *bdev_getblk(struct block_device *bdev, sector_t block,
+ unsigned size, gfp_t gfp)
+ {
+- struct buffer_head *bh = __find_get_block(bdev, block, size);
++ struct buffer_head *bh;
++
++ if (gfpflags_allow_blocking(gfp))
++ bh = __find_get_block_nonatomic(bdev, block, size);
++ else
++ bh = __find_get_block(bdev, block, size);
+
+ might_alloc(gfp);
+ if (bh)
+diff --git a/fs/dlm/lowcomms.c b/fs/dlm/lowcomms.c
+index d28141829c051b..70abd4da17a63a 100644
+--- a/fs/dlm/lowcomms.c
++++ b/fs/dlm/lowcomms.c
+@@ -1826,8 +1826,8 @@ static int dlm_tcp_listen_validate(void)
+ {
+ /* We don't support multi-homed hosts */
+ if (dlm_local_count > 1) {
+- log_print("TCP protocol can't handle multi-homed hosts, try SCTP");
+- return -EINVAL;
++ log_print("Detect multi-homed hosts but use only the first IP address.");
++ log_print("Try SCTP, if you want to enable multi-link.");
+ }
+
+ return 0;
+diff --git a/fs/erofs/internal.h b/fs/erofs/internal.h
+index efd25f3101f1f6..2b8d9a10f00267 100644
+--- a/fs/erofs/internal.h
++++ b/fs/erofs/internal.h
+@@ -446,6 +446,7 @@ int __init erofs_init_shrinker(void);
+ void erofs_exit_shrinker(void);
+ int __init z_erofs_init_subsystem(void);
+ void z_erofs_exit_subsystem(void);
++int z_erofs_init_super(struct super_block *sb);
+ unsigned long z_erofs_shrink_scan(struct erofs_sb_info *sbi,
+ unsigned long nr_shrink);
+ int z_erofs_map_blocks_iter(struct inode *inode, struct erofs_map_blocks *map,
+@@ -455,7 +456,6 @@ void z_erofs_put_gbuf(void *ptr);
+ int z_erofs_gbuf_growsize(unsigned int nrpages);
+ int __init z_erofs_gbuf_init(void);
+ void z_erofs_gbuf_exit(void);
+-int erofs_init_managed_cache(struct super_block *sb);
+ int z_erofs_parse_cfgs(struct super_block *sb, struct erofs_super_block *dsb);
+ #else
+ static inline void erofs_shrinker_register(struct super_block *sb) {}
+@@ -464,7 +464,7 @@ static inline int erofs_init_shrinker(void) { return 0; }
+ static inline void erofs_exit_shrinker(void) {}
+ static inline int z_erofs_init_subsystem(void) { return 0; }
+ static inline void z_erofs_exit_subsystem(void) {}
+-static inline int erofs_init_managed_cache(struct super_block *sb) { return 0; }
++static inline int z_erofs_init_super(struct super_block *sb) { return 0; }
+ #endif /* !CONFIG_EROFS_FS_ZIP */
+
+ #ifdef CONFIG_EROFS_FS_BACKED_BY_FILE
+diff --git a/fs/erofs/super.c b/fs/erofs/super.c
+index 9f2bce5af9c837..b30125a2a5011c 100644
+--- a/fs/erofs/super.c
++++ b/fs/erofs/super.c
+@@ -631,9 +631,16 @@ static int erofs_fc_fill_super(struct super_block *sb, struct fs_context *fc)
+ else
+ sb->s_flags &= ~SB_POSIXACL;
+
+-#ifdef CONFIG_EROFS_FS_ZIP
+- xa_init(&sbi->managed_pslots);
+-#endif
++ err = z_erofs_init_super(sb);
++ if (err)
++ return err;
++
++ if (erofs_sb_has_fragments(sbi) && sbi->packed_nid) {
++ inode = erofs_iget(sb, sbi->packed_nid);
++ if (IS_ERR(inode))
++ return PTR_ERR(inode);
++ sbi->packed_inode = inode;
++ }
+
+ inode = erofs_iget(sb, sbi->root_nid);
+ if (IS_ERR(inode))
+@@ -645,24 +652,11 @@ static int erofs_fc_fill_super(struct super_block *sb, struct fs_context *fc)
+ iput(inode);
+ return -EINVAL;
+ }
+-
+ sb->s_root = d_make_root(inode);
+ if (!sb->s_root)
+ return -ENOMEM;
+
+ erofs_shrinker_register(sb);
+- if (erofs_sb_has_fragments(sbi) && sbi->packed_nid) {
+- sbi->packed_inode = erofs_iget(sb, sbi->packed_nid);
+- if (IS_ERR(sbi->packed_inode)) {
+- err = PTR_ERR(sbi->packed_inode);
+- sbi->packed_inode = NULL;
+- return err;
+- }
+- }
+- err = erofs_init_managed_cache(sb);
+- if (err)
+- return err;
+-
+ err = erofs_xattr_prefixes_init(sb);
+ if (err)
+ return err;
+@@ -798,6 +792,16 @@ static int erofs_init_fs_context(struct fs_context *fc)
+ return 0;
+ }
+
++static void erofs_drop_internal_inodes(struct erofs_sb_info *sbi)
++{
++ iput(sbi->packed_inode);
++ sbi->packed_inode = NULL;
++#ifdef CONFIG_EROFS_FS_ZIP
++ iput(sbi->managed_cache);
++ sbi->managed_cache = NULL;
++#endif
++}
++
+ static void erofs_kill_sb(struct super_block *sb)
+ {
+ struct erofs_sb_info *sbi = EROFS_SB(sb);
+@@ -807,6 +811,7 @@ static void erofs_kill_sb(struct super_block *sb)
+ kill_anon_super(sb);
+ else
+ kill_block_super(sb);
++ erofs_drop_internal_inodes(sbi);
+ fs_put_dax(sbi->dif0.dax_dev, NULL);
+ erofs_fscache_unregister_fs(sb);
+ erofs_sb_free(sbi);
+@@ -817,17 +822,10 @@ static void erofs_put_super(struct super_block *sb)
+ {
+ struct erofs_sb_info *const sbi = EROFS_SB(sb);
+
+- DBG_BUGON(!sbi);
+-
+ erofs_unregister_sysfs(sb);
+ erofs_shrinker_unregister(sb);
+ erofs_xattr_prefixes_cleanup(sb);
+-#ifdef CONFIG_EROFS_FS_ZIP
+- iput(sbi->managed_cache);
+- sbi->managed_cache = NULL;
+-#endif
+- iput(sbi->packed_inode);
+- sbi->packed_inode = NULL;
++ erofs_drop_internal_inodes(sbi);
+ erofs_free_dev_context(sbi->devs);
+ sbi->devs = NULL;
+ erofs_fscache_unregister_fs(sb);
+diff --git a/fs/erofs/zdata.c b/fs/erofs/zdata.c
+index 67acef591646c8..87192635b2d202 100644
+--- a/fs/erofs/zdata.c
++++ b/fs/erofs/zdata.c
+@@ -641,18 +641,18 @@ static const struct address_space_operations z_erofs_cache_aops = {
+ .invalidate_folio = z_erofs_cache_invalidate_folio,
+ };
+
+-int erofs_init_managed_cache(struct super_block *sb)
++int z_erofs_init_super(struct super_block *sb)
+ {
+ struct inode *const inode = new_inode(sb);
+
+ if (!inode)
+ return -ENOMEM;
+-
+ set_nlink(inode, 1);
+ inode->i_size = OFFSET_MAX;
+ inode->i_mapping->a_ops = &z_erofs_cache_aops;
+ mapping_set_gfp_mask(inode->i_mapping, GFP_KERNEL);
+ EROFS_SB(sb)->managed_cache = inode;
++ xa_init(&EROFS_SB(sb)->managed_pslots);
+ return 0;
+ }
+
+diff --git a/fs/exfat/inode.c b/fs/exfat/inode.c
+index a23677de4544f3..b22c02d6000f79 100644
+--- a/fs/exfat/inode.c
++++ b/fs/exfat/inode.c
+@@ -274,9 +274,11 @@ static int exfat_get_block(struct inode *inode, sector_t iblock,
+ sector_t last_block;
+ sector_t phys = 0;
+ sector_t valid_blks;
++ loff_t i_size;
+
+ mutex_lock(&sbi->s_lock);
+- last_block = EXFAT_B_TO_BLK_ROUND_UP(i_size_read(inode), sb);
++ i_size = i_size_read(inode);
++ last_block = EXFAT_B_TO_BLK_ROUND_UP(i_size, sb);
+ if (iblock >= last_block && !create)
+ goto done;
+
+@@ -305,102 +307,95 @@ static int exfat_get_block(struct inode *inode, sector_t iblock,
+ if (buffer_delay(bh_result))
+ clear_buffer_delay(bh_result);
+
+- if (create) {
++ /*
++ * In most cases, we just need to set bh_result to mapped, unmapped
++ * or new status as follows:
++ * 1. i_size == valid_size
++ * 2. write case (create == 1)
++ * 3. direct_read (!bh_result->b_folio)
++ * -> the unwritten part will be zeroed in exfat_direct_IO()
++ *
++ * Otherwise, in the case of buffered read, it is necessary to take
++ * care the last nested block if valid_size is not equal to i_size.
++ */
++ if (i_size == ei->valid_size || create || !bh_result->b_folio)
+ valid_blks = EXFAT_B_TO_BLK_ROUND_UP(ei->valid_size, sb);
++ else
++ valid_blks = EXFAT_B_TO_BLK(ei->valid_size, sb);
+
+- if (iblock + max_blocks < valid_blks) {
+- /* The range has been written, map it */
+- goto done;
+- } else if (iblock < valid_blks) {
+- /*
+- * The range has been partially written,
+- * map the written part.
+- */
+- max_blocks = valid_blks - iblock;
+- goto done;
+- }
++ /* The range has been fully written, map it */
++ if (iblock + max_blocks < valid_blks)
++ goto done;
+
+- /* The area has not been written, map and mark as new. */
+- set_buffer_new(bh_result);
++ /* The range has been partially written, map the written part */
++ if (iblock < valid_blks) {
++ max_blocks = valid_blks - iblock;
++ goto done;
++ }
+
++ /* The area has not been written, map and mark as new for create case */
++ if (create) {
++ set_buffer_new(bh_result);
+ ei->valid_size = EXFAT_BLK_TO_B(iblock + max_blocks, sb);
+ mark_inode_dirty(inode);
+- } else {
+- valid_blks = EXFAT_B_TO_BLK(ei->valid_size, sb);
++ goto done;
++ }
+
+- if (iblock + max_blocks < valid_blks) {
+- /* The range has been written, map it */
+- goto done;
+- } else if (iblock < valid_blks) {
+- /*
+- * The area has been partially written,
+- * map the written part.
+- */
+- max_blocks = valid_blks - iblock;
++ /*
++ * The area has just one block partially written.
++ * In that case, we should read and fill the unwritten part of
++ * a block with zero.
++ */
++ if (bh_result->b_folio && iblock == valid_blks &&
++ (ei->valid_size & (sb->s_blocksize - 1))) {
++ loff_t size, pos;
++ void *addr;
++
++ max_blocks = 1;
++
++ /*
++ * No buffer_head is allocated.
++ * (1) bmap: It's enough to set blocknr without I/O.
++ * (2) read: The unwritten part should be filled with zero.
++ * If a folio does not have any buffers,
++ * let's returns -EAGAIN to fallback to
++ * block_read_full_folio() for per-bh IO.
++ */
++ if (!folio_buffers(bh_result->b_folio)) {
++ err = -EAGAIN;
+ goto done;
+- } else if (iblock == valid_blks &&
+- (ei->valid_size & (sb->s_blocksize - 1))) {
+- /*
+- * The block has been partially written,
+- * zero the unwritten part and map the block.
+- */
+- loff_t size, pos;
+- void *addr;
+-
+- max_blocks = 1;
+-
+- /*
+- * For direct read, the unwritten part will be zeroed in
+- * exfat_direct_IO()
+- */
+- if (!bh_result->b_folio)
+- goto done;
+-
+- /*
+- * No buffer_head is allocated.
+- * (1) bmap: It's enough to fill bh_result without I/O.
+- * (2) read: The unwritten part should be filled with 0
+- * If a folio does not have any buffers,
+- * let's returns -EAGAIN to fallback to
+- * per-bh IO like block_read_full_folio().
+- */
+- if (!folio_buffers(bh_result->b_folio)) {
+- err = -EAGAIN;
+- goto done;
+- }
++ }
+
+- pos = EXFAT_BLK_TO_B(iblock, sb);
+- size = ei->valid_size - pos;
+- addr = folio_address(bh_result->b_folio) +
+- offset_in_folio(bh_result->b_folio, pos);
++ pos = EXFAT_BLK_TO_B(iblock, sb);
++ size = ei->valid_size - pos;
++ addr = folio_address(bh_result->b_folio) +
++ offset_in_folio(bh_result->b_folio, pos);
+
+- /* Check if bh->b_data points to proper addr in folio */
+- if (bh_result->b_data != addr) {
+- exfat_fs_error_ratelimit(sb,
++ /* Check if bh->b_data points to proper addr in folio */
++ if (bh_result->b_data != addr) {
++ exfat_fs_error_ratelimit(sb,
+ "b_data(%p) != folio_addr(%p)",
+ bh_result->b_data, addr);
+- err = -EINVAL;
+- goto done;
+- }
+-
+- /* Read a block */
+- err = bh_read(bh_result, 0);
+- if (err < 0)
+- goto done;
++ err = -EINVAL;
++ goto done;
++ }
+
+- /* Zero unwritten part of a block */
+- memset(bh_result->b_data + size, 0,
+- bh_result->b_size - size);
++ /* Read a block */
++ err = bh_read(bh_result, 0);
++ if (err < 0)
++ goto done;
+
+- err = 0;
+- } else {
+- /*
+- * The range has not been written, clear the mapped flag
+- * to only zero the cache and do not read from disk.
+- */
+- clear_buffer_mapped(bh_result);
+- }
++ /* Zero unwritten part of a block */
++ memset(bh_result->b_data + size, 0, bh_result->b_size - size);
++ err = 0;
++ goto done;
+ }
++
++ /*
++ * The area has not been written, clear mapped for read/bmap cases.
++ * If so, it will be filled with zero without reading from disk.
++ */
++ clear_buffer_mapped(bh_result);
+ done:
+ bh_result->b_size = EXFAT_BLK_TO_B(max_blocks, sb);
+ if (err < 0)
+diff --git a/fs/ext4/balloc.c b/fs/ext4/balloc.c
+index 8042ad87380897..c48fd36b2d74c0 100644
+--- a/fs/ext4/balloc.c
++++ b/fs/ext4/balloc.c
+@@ -649,8 +649,8 @@ static int ext4_has_free_clusters(struct ext4_sb_info *sbi,
+ /* Hm, nope. Are (enough) root reserved clusters available? */
+ if (uid_eq(sbi->s_resuid, current_fsuid()) ||
+ (!gid_eq(sbi->s_resgid, GLOBAL_ROOT_GID) && in_group_p(sbi->s_resgid)) ||
+- capable(CAP_SYS_RESOURCE) ||
+- (flags & EXT4_MB_USE_ROOT_BLOCKS)) {
++ (flags & EXT4_MB_USE_ROOT_BLOCKS) ||
++ capable(CAP_SYS_RESOURCE)) {
+
+ if (free_clusters >= (nclusters + dirty_clusters +
+ resv_clusters))
+diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
+index df30d9f235123b..4f81983b79186e 100644
+--- a/fs/ext4/ext4.h
++++ b/fs/ext4/ext4.h
+@@ -278,7 +278,8 @@ struct ext4_system_blocks {
+ /*
+ * Flags for ext4_io_end->flags
+ */
+-#define EXT4_IO_END_UNWRITTEN 0x0001
++#define EXT4_IO_END_UNWRITTEN 0x0001
++#define EXT4_IO_END_FAILED 0x0002
+
+ struct ext4_io_end_vec {
+ struct list_head list; /* list of io_end_vec */
+@@ -3010,6 +3011,8 @@ extern int ext4_inode_attach_jinode(struct inode *inode);
+ extern int ext4_can_truncate(struct inode *inode);
+ extern int ext4_truncate(struct inode *);
+ extern int ext4_break_layouts(struct inode *);
++extern int ext4_truncate_page_cache_block_range(struct inode *inode,
++ loff_t start, loff_t end);
+ extern int ext4_punch_hole(struct file *file, loff_t offset, loff_t length);
+ extern void ext4_set_inode_flags(struct inode *, bool init);
+ extern int ext4_alloc_da_blocks(struct inode *inode);
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index a07a98a4b97a5f..8dc6b4271b15d7 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -4667,22 +4667,13 @@ static long ext4_zero_range(struct file *file, loff_t offset,
+ goto out_mutex;
+ }
+
+- /*
+- * For journalled data we need to write (and checkpoint) pages
+- * before discarding page cache to avoid inconsitent data on
+- * disk in case of crash before zeroing trans is committed.
+- */
+- if (ext4_should_journal_data(inode)) {
+- ret = filemap_write_and_wait_range(mapping, start,
+- end - 1);
+- if (ret) {
+- filemap_invalidate_unlock(mapping);
+- goto out_mutex;
+- }
++ /* Now release the pages and zero block aligned part of pages */
++ ret = ext4_truncate_page_cache_block_range(inode, start, end);
++ if (ret) {
++ filemap_invalidate_unlock(mapping);
++ goto out_mutex;
+ }
+
+- /* Now release the pages and zero block aligned part of pages */
+- truncate_pagecache_range(inode, start, end - 1);
+ inode_set_mtime_to_ts(inode, inode_set_ctime_current(inode));
+
+ ret = ext4_alloc_file_blocks(file, lblk, max_blocks, new_size,
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 74c5e2a381a6b0..44780bb537f95e 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -31,6 +31,7 @@
+ #include <linux/writeback.h>
+ #include <linux/pagevec.h>
+ #include <linux/mpage.h>
++#include <linux/rmap.h>
+ #include <linux/namei.h>
+ #include <linux/uio.h>
+ #include <linux/bio.h>
+@@ -3903,6 +3904,68 @@ int ext4_update_disksize_before_punch(struct inode *inode, loff_t offset,
+ return ret;
+ }
+
++static inline void ext4_truncate_folio(struct inode *inode,
++ loff_t start, loff_t end)
++{
++ unsigned long blocksize = i_blocksize(inode);
++ struct folio *folio;
++
++ /* Nothing to be done if no complete block needs to be truncated. */
++ if (round_up(start, blocksize) >= round_down(end, blocksize))
++ return;
++
++ folio = filemap_lock_folio(inode->i_mapping, start >> PAGE_SHIFT);
++ if (IS_ERR(folio))
++ return;
++
++ if (folio_mkclean(folio))
++ folio_mark_dirty(folio);
++ folio_unlock(folio);
++ folio_put(folio);
++}
++
++int ext4_truncate_page_cache_block_range(struct inode *inode,
++ loff_t start, loff_t end)
++{
++ unsigned long blocksize = i_blocksize(inode);
++ int ret;
++
++ /*
++ * For journalled data we need to write (and checkpoint) pages
++ * before discarding page cache to avoid inconsitent data on disk
++ * in case of crash before freeing or unwritten converting trans
++ * is committed.
++ */
++ if (ext4_should_journal_data(inode)) {
++ ret = filemap_write_and_wait_range(inode->i_mapping, start,
++ end - 1);
++ if (ret)
++ return ret;
++ goto truncate_pagecache;
++ }
++
++ /*
++ * If the block size is less than the page size, the file's mapped
++ * blocks within one page could be freed or converted to unwritten.
++ * So it's necessary to remove writable userspace mappings, and then
++ * ext4_page_mkwrite() can be called during subsequent write access
++ * to these partial folios.
++ */
++ if (!IS_ALIGNED(start | end, PAGE_SIZE) &&
++ blocksize < PAGE_SIZE && start < inode->i_size) {
++ loff_t page_boundary = round_up(start, PAGE_SIZE);
++
++ ext4_truncate_folio(inode, start, min(page_boundary, end));
++ if (end > page_boundary)
++ ext4_truncate_folio(inode,
++ round_down(end, PAGE_SIZE), end);
++ }
++
++truncate_pagecache:
++ truncate_pagecache_range(inode, start, end - 1);
++ return 0;
++}
++
+ static void ext4_wait_dax_page(struct inode *inode)
+ {
+ filemap_invalidate_unlock(inode->i_mapping);
+@@ -3957,17 +4020,6 @@ int ext4_punch_hole(struct file *file, loff_t offset, loff_t length)
+
+ trace_ext4_punch_hole(inode, offset, length, 0);
+
+- /*
+- * Write out all dirty pages to avoid race conditions
+- * Then release them.
+- */
+- if (mapping_tagged(mapping, PAGECACHE_TAG_DIRTY)) {
+- ret = filemap_write_and_wait_range(mapping, offset,
+- offset + length - 1);
+- if (ret)
+- return ret;
+- }
+-
+ inode_lock(inode);
+
+ /* No need to punch hole beyond i_size */
+@@ -4029,8 +4081,11 @@ int ext4_punch_hole(struct file *file, loff_t offset, loff_t length)
+ ret = ext4_update_disksize_before_punch(inode, offset, length);
+ if (ret)
+ goto out_dio;
+- truncate_pagecache_range(inode, first_block_offset,
+- last_block_offset);
++
++ ret = ext4_truncate_page_cache_block_range(inode,
++ first_block_offset, last_block_offset + 1);
++ if (ret)
++ goto out_dio;
+ }
+
+ if (ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS))
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index b25a27c8669692..d6f1e61c6dc820 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -6644,7 +6644,8 @@ void ext4_free_blocks(handle_t *handle, struct inode *inode,
+ for (i = 0; i < count; i++) {
+ cond_resched();
+ if (is_metadata)
+- bh = sb_find_get_block(inode->i_sb, block + i);
++ bh = sb_find_get_block_nonatomic(inode->i_sb,
++ block + i);
+ ext4_forget(handle, is_metadata, inode, bh, block + i);
+ }
+ }
+diff --git a/fs/ext4/page-io.c b/fs/ext4/page-io.c
+index 69b8a7221a2b19..f041a5d93716fb 100644
+--- a/fs/ext4/page-io.c
++++ b/fs/ext4/page-io.c
+@@ -181,14 +181,25 @@ static int ext4_end_io_end(ext4_io_end_t *io_end)
+ "list->prev 0x%p\n",
+ io_end, inode->i_ino, io_end->list.next, io_end->list.prev);
+
+- io_end->handle = NULL; /* Following call will use up the handle */
+- ret = ext4_convert_unwritten_io_end_vec(handle, io_end);
++ /*
++ * Do not convert the unwritten extents if data writeback fails,
++ * or stale data may be exposed.
++ */
++ io_end->handle = NULL; /* Following call will use up the handle */
++ if (unlikely(io_end->flag & EXT4_IO_END_FAILED)) {
++ ret = -EIO;
++ if (handle)
++ jbd2_journal_free_reserved(handle);
++ } else {
++ ret = ext4_convert_unwritten_io_end_vec(handle, io_end);
++ }
+ if (ret < 0 && !ext4_forced_shutdown(inode->i_sb)) {
+ ext4_msg(inode->i_sb, KERN_EMERG,
+ "failed to convert unwritten extents to written "
+ "extents -- potential data loss! "
+ "(inode %lu, error %d)", inode->i_ino, ret);
+ }
++
+ ext4_clear_io_unwritten_flag(io_end);
+ ext4_release_io_end(io_end);
+ return ret;
+@@ -344,6 +355,7 @@ static void ext4_end_bio(struct bio *bio)
+ bio->bi_status, inode->i_ino,
+ (unsigned long long)
+ bi_sector >> (inode->i_blkbits - 9));
++ io_end->flag |= EXT4_IO_END_FAILED;
+ mapping_set_error(inode->i_mapping,
+ blk_status_to_errno(bio->bi_status));
+ }
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index 528979de0f7c1e..b956e1ee982902 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -2785,6 +2785,13 @@ static int ext4_check_opt_consistency(struct fs_context *fc,
+ }
+
+ if (is_remount) {
++ if (!sbi->s_journal &&
++ ctx_test_mount_opt(ctx, EXT4_MOUNT_DATA_ERR_ABORT)) {
++ ext4_msg(NULL, KERN_WARNING,
++ "Remounting fs w/o journal so ignoring data_err option");
++ ctx_clear_mount_opt(ctx, EXT4_MOUNT_DATA_ERR_ABORT);
++ }
++
+ if (ctx_test_mount_opt(ctx, EXT4_MOUNT_DAX_ALWAYS) &&
+ (test_opt(sb, DATA_FLAGS) == EXT4_MOUNT_JOURNAL_DATA)) {
+ ext4_msg(NULL, KERN_ERR, "can't mount with "
+@@ -5428,6 +5435,11 @@ static int __ext4_fill_super(struct fs_context *fc, struct super_block *sb)
+ "data=, fs mounted w/o journal");
+ goto failed_mount3a;
+ }
++ if (test_opt(sb, DATA_ERR_ABORT)) {
++ ext4_msg(sb, KERN_ERR,
++ "can't mount with data_err=abort, fs mounted w/o journal");
++ goto failed_mount3a;
++ }
+ sbi->s_def_mount_opt &= ~EXT4_MOUNT_JOURNAL_CHECKSUM;
+ clear_opt(sb, JOURNAL_CHECKSUM);
+ clear_opt(sb, DATA_FLAGS);
+@@ -6776,6 +6788,7 @@ static int ext4_reconfigure(struct fs_context *fc)
+ {
+ struct super_block *sb = fc->root->d_sb;
+ int ret;
++ bool old_ro = sb_rdonly(sb);
+
+ fc->s_fs_info = EXT4_SB(sb);
+
+@@ -6787,9 +6800,9 @@ static int ext4_reconfigure(struct fs_context *fc)
+ if (ret < 0)
+ return ret;
+
+- ext4_msg(sb, KERN_INFO, "re-mounted %pU %s. Quota mode: %s.",
+- &sb->s_uuid, sb_rdonly(sb) ? "ro" : "r/w",
+- ext4_quota_mode(sb));
++ ext4_msg(sb, KERN_INFO, "re-mounted %pU%s.",
++ &sb->s_uuid,
++ (old_ro != sb_rdonly(sb)) ? (sb_rdonly(sb) ? " ro" : " r/w") : "");
+
+ return 0;
+ }
+diff --git a/fs/f2fs/sysfs.c b/fs/f2fs/sysfs.c
+index d15c68b28952b0..b419555e1ea7f7 100644
+--- a/fs/f2fs/sysfs.c
++++ b/fs/f2fs/sysfs.c
+@@ -61,6 +61,12 @@ struct f2fs_attr {
+ int id;
+ };
+
++struct f2fs_base_attr {
++ struct attribute attr;
++ ssize_t (*show)(struct f2fs_base_attr *a, char *buf);
++ ssize_t (*store)(struct f2fs_base_attr *a, const char *buf, size_t len);
++};
++
+ static ssize_t f2fs_sbi_show(struct f2fs_attr *a,
+ struct f2fs_sb_info *sbi, char *buf);
+
+@@ -862,6 +868,25 @@ static void f2fs_sb_release(struct kobject *kobj)
+ complete(&sbi->s_kobj_unregister);
+ }
+
++static ssize_t f2fs_base_attr_show(struct kobject *kobj,
++ struct attribute *attr, char *buf)
++{
++ struct f2fs_base_attr *a = container_of(attr,
++ struct f2fs_base_attr, attr);
++
++ return a->show ? a->show(a, buf) : 0;
++}
++
++static ssize_t f2fs_base_attr_store(struct kobject *kobj,
++ struct attribute *attr,
++ const char *buf, size_t len)
++{
++ struct f2fs_base_attr *a = container_of(attr,
++ struct f2fs_base_attr, attr);
++
++ return a->store ? a->store(a, buf, len) : 0;
++}
++
+ /*
+ * Note that there are three feature list entries:
+ * 1) /sys/fs/f2fs/features
+@@ -880,14 +905,13 @@ static void f2fs_sb_release(struct kobject *kobj)
+ * please add new on-disk feature in this list only.
+ * - ref. F2FS_SB_FEATURE_RO_ATTR()
+ */
+-static ssize_t f2fs_feature_show(struct f2fs_attr *a,
+- struct f2fs_sb_info *sbi, char *buf)
++static ssize_t f2fs_feature_show(struct f2fs_base_attr *a, char *buf)
+ {
+ return sysfs_emit(buf, "supported\n");
+ }
+
+ #define F2FS_FEATURE_RO_ATTR(_name) \
+-static struct f2fs_attr f2fs_attr_##_name = { \
++static struct f2fs_base_attr f2fs_base_attr_##_name = { \
+ .attr = {.name = __stringify(_name), .mode = 0444 }, \
+ .show = f2fs_feature_show, \
+ }
+@@ -1256,37 +1280,38 @@ static struct attribute *f2fs_attrs[] = {
+ };
+ ATTRIBUTE_GROUPS(f2fs);
+
++#define BASE_ATTR_LIST(name) (&f2fs_base_attr_##name.attr)
+ static struct attribute *f2fs_feat_attrs[] = {
+ #ifdef CONFIG_FS_ENCRYPTION
+- ATTR_LIST(encryption),
+- ATTR_LIST(test_dummy_encryption_v2),
++ BASE_ATTR_LIST(encryption),
++ BASE_ATTR_LIST(test_dummy_encryption_v2),
+ #if IS_ENABLED(CONFIG_UNICODE)
+- ATTR_LIST(encrypted_casefold),
++ BASE_ATTR_LIST(encrypted_casefold),
+ #endif
+ #endif /* CONFIG_FS_ENCRYPTION */
+ #ifdef CONFIG_BLK_DEV_ZONED
+- ATTR_LIST(block_zoned),
++ BASE_ATTR_LIST(block_zoned),
+ #endif
+- ATTR_LIST(atomic_write),
+- ATTR_LIST(extra_attr),
+- ATTR_LIST(project_quota),
+- ATTR_LIST(inode_checksum),
+- ATTR_LIST(flexible_inline_xattr),
+- ATTR_LIST(quota_ino),
+- ATTR_LIST(inode_crtime),
+- ATTR_LIST(lost_found),
++ BASE_ATTR_LIST(atomic_write),
++ BASE_ATTR_LIST(extra_attr),
++ BASE_ATTR_LIST(project_quota),
++ BASE_ATTR_LIST(inode_checksum),
++ BASE_ATTR_LIST(flexible_inline_xattr),
++ BASE_ATTR_LIST(quota_ino),
++ BASE_ATTR_LIST(inode_crtime),
++ BASE_ATTR_LIST(lost_found),
+ #ifdef CONFIG_FS_VERITY
+- ATTR_LIST(verity),
++ BASE_ATTR_LIST(verity),
+ #endif
+- ATTR_LIST(sb_checksum),
++ BASE_ATTR_LIST(sb_checksum),
+ #if IS_ENABLED(CONFIG_UNICODE)
+- ATTR_LIST(casefold),
++ BASE_ATTR_LIST(casefold),
+ #endif
+- ATTR_LIST(readonly),
++ BASE_ATTR_LIST(readonly),
+ #ifdef CONFIG_F2FS_FS_COMPRESSION
+- ATTR_LIST(compression),
++ BASE_ATTR_LIST(compression),
+ #endif
+- ATTR_LIST(pin_file),
++ BASE_ATTR_LIST(pin_file),
+ NULL,
+ };
+ ATTRIBUTE_GROUPS(f2fs_feat);
+@@ -1362,9 +1387,14 @@ static struct kset f2fs_kset = {
+ .kobj = {.ktype = &f2fs_ktype},
+ };
+
++static const struct sysfs_ops f2fs_feat_attr_ops = {
++ .show = f2fs_base_attr_show,
++ .store = f2fs_base_attr_store,
++};
++
+ static const struct kobj_type f2fs_feat_ktype = {
+ .default_groups = f2fs_feat_groups,
+- .sysfs_ops = &f2fs_attr_ops,
++ .sysfs_ops = &f2fs_feat_attr_ops,
+ };
+
+ static struct kobject f2fs_feat = {
+diff --git a/fs/fuse/dir.c b/fs/fuse/dir.c
+index 3b031d24d36912..8f699c67561fad 100644
+--- a/fs/fuse/dir.c
++++ b/fs/fuse/dir.c
+@@ -1137,6 +1137,8 @@ static int fuse_link(struct dentry *entry, struct inode *newdir,
+ else if (err == -EINTR)
+ fuse_invalidate_attr(inode);
+
++ if (err == -ENOSYS)
++ err = -EPERM;
+ return err;
+ }
+
+diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c
+index 65c07aa9571841..7f9cc7872214e2 100644
+--- a/fs/gfs2/glock.c
++++ b/fs/gfs2/glock.c
+@@ -843,12 +843,13 @@ static void run_queue(struct gfs2_glock *gl, const int nonblock)
+ __releases(&gl->gl_lockref.lock)
+ __acquires(&gl->gl_lockref.lock)
+ {
+- struct gfs2_holder *gh = NULL;
++ struct gfs2_holder *gh;
+
+ if (test_bit(GLF_LOCK, &gl->gl_flags))
+ return;
+ set_bit(GLF_LOCK, &gl->gl_flags);
+
++ /* While a demote is in progress, the GLF_LOCK flag must be set. */
+ GLOCK_BUG_ON(gl, test_bit(GLF_DEMOTE_IN_PROGRESS, &gl->gl_flags));
+
+ if (test_bit(GLF_DEMOTE, &gl->gl_flags) &&
+@@ -860,18 +861,22 @@ __acquires(&gl->gl_lockref.lock)
+ set_bit(GLF_DEMOTE_IN_PROGRESS, &gl->gl_flags);
+ GLOCK_BUG_ON(gl, gl->gl_demote_state == LM_ST_EXCLUSIVE);
+ gl->gl_target = gl->gl_demote_state;
++ do_xmote(gl, NULL, gl->gl_target);
++ return;
+ } else {
+ if (test_bit(GLF_DEMOTE, &gl->gl_flags))
+ gfs2_demote_wake(gl);
+ if (do_promote(gl))
+ goto out_unlock;
+ gh = find_first_waiter(gl);
++ if (!gh)
++ goto out_unlock;
+ gl->gl_target = gh->gh_state;
+ if (!(gh->gh_flags & (LM_FLAG_TRY | LM_FLAG_TRY_1CB)))
+ do_error(gl, 0); /* Fail queued try locks */
++ do_xmote(gl, gh, gl->gl_target);
++ return;
+ }
+- do_xmote(gl, gh, gl->gl_target);
+- return;
+
+ out_sched:
+ clear_bit(GLF_LOCK, &gl->gl_flags);
+diff --git a/fs/jbd2/recovery.c b/fs/jbd2/recovery.c
+index 9192be7c19d835..a3c39a71c4ad30 100644
+--- a/fs/jbd2/recovery.c
++++ b/fs/jbd2/recovery.c
+@@ -39,7 +39,7 @@ struct recovery_info
+
+ static int do_one_pass(journal_t *journal,
+ struct recovery_info *info, enum passtype pass);
+-static int scan_revoke_records(journal_t *, struct buffer_head *,
++static int scan_revoke_records(journal_t *, enum passtype, struct buffer_head *,
+ tid_t, struct recovery_info *);
+
+ #ifdef __KERNEL__
+@@ -287,19 +287,20 @@ static int fc_do_one_pass(journal_t *journal,
+ int jbd2_journal_recover(journal_t *journal)
+ {
+ int err, err2;
+- journal_superblock_t * sb;
+-
+ struct recovery_info info;
+
+ memset(&info, 0, sizeof(info));
+- sb = journal->j_superblock;
+
+ /*
+ * The journal superblock's s_start field (the current log head)
+ * is always zero if, and only if, the journal was cleanly
+- * unmounted.
++ * unmounted. We use its in-memory version j_tail here because
++ * jbd2_journal_wipe() could have updated it without updating journal
++ * superblock.
+ */
+- if (!sb->s_start) {
++ if (!journal->j_tail) {
++ journal_superblock_t *sb = journal->j_superblock;
++
+ jbd2_debug(1, "No recovery required, last transaction %d, head block %u\n",
+ be32_to_cpu(sb->s_sequence), be32_to_cpu(sb->s_head));
+ journal->j_transaction_sequence = be32_to_cpu(sb->s_sequence) + 1;
+@@ -327,6 +328,12 @@ int jbd2_journal_recover(journal_t *journal)
+ journal->j_transaction_sequence, journal->j_head);
+
+ jbd2_journal_clear_revoke(journal);
++ /* Free revoke table allocated for replay */
++ if (journal->j_revoke != journal->j_revoke_table[0] &&
++ journal->j_revoke != journal->j_revoke_table[1]) {
++ jbd2_journal_destroy_revoke_table(journal->j_revoke);
++ journal->j_revoke = journal->j_revoke_table[1];
++ }
+ err2 = sync_blockdev(journal->j_fs_dev);
+ if (!err)
+ err = err2;
+@@ -612,6 +619,31 @@ static int do_one_pass(journal_t *journal,
+ first_commit_ID = next_commit_ID;
+ if (pass == PASS_SCAN)
+ info->start_transaction = first_commit_ID;
++ else if (pass == PASS_REVOKE) {
++ /*
++ * Would the default revoke table have too long hash chains
++ * during replay?
++ */
++ if (info->nr_revokes > JOURNAL_REVOKE_DEFAULT_HASH * 16) {
++ unsigned int hash_size;
++
++ /*
++ * Aim for average chain length of 8, limit at 1M
++ * entries to avoid problems with malicious
++ * filesystems.
++ */
++ hash_size = min(roundup_pow_of_two(info->nr_revokes / 8),
++ 1U << 20);
++ journal->j_revoke =
++ jbd2_journal_init_revoke_table(hash_size);
++ if (!journal->j_revoke) {
++ printk(KERN_ERR
++ "JBD2: failed to allocate revoke table for replay with %u entries. "
++ "Journal replay may be slow.\n", hash_size);
++ journal->j_revoke = journal->j_revoke_table[1];
++ }
++ }
++ }
+
+ jbd2_debug(1, "Starting recovery pass %d\n", pass);
+
+@@ -851,6 +883,13 @@ static int do_one_pass(journal_t *journal,
+ continue;
+
+ case JBD2_REVOKE_BLOCK:
++ /*
++ * If we aren't in the SCAN or REVOKE pass, then we can
++ * just skip over this block.
++ */
++ if (pass != PASS_REVOKE && pass != PASS_SCAN)
++ continue;
++
+ /*
+ * Check revoke block crc in pass_scan, if csum verify
+ * failed, check commit block time later.
+@@ -863,12 +902,7 @@ static int do_one_pass(journal_t *journal,
+ need_check_commit_time = true;
+ }
+
+- /* If we aren't in the REVOKE pass, then we can
+- * just skip over this block. */
+- if (pass != PASS_REVOKE)
+- continue;
+-
+- err = scan_revoke_records(journal, bh,
++ err = scan_revoke_records(journal, pass, bh,
+ next_commit_ID, info);
+ if (err)
+ goto failed;
+@@ -922,8 +956,9 @@ static int do_one_pass(journal_t *journal,
+
+ /* Scan a revoke record, marking all blocks mentioned as revoked. */
+
+-static int scan_revoke_records(journal_t *journal, struct buffer_head *bh,
+- tid_t sequence, struct recovery_info *info)
++static int scan_revoke_records(journal_t *journal, enum passtype pass,
++ struct buffer_head *bh, tid_t sequence,
++ struct recovery_info *info)
+ {
+ jbd2_journal_revoke_header_t *header;
+ int offset, max;
+@@ -944,6 +979,11 @@ static int scan_revoke_records(journal_t *journal, struct buffer_head *bh,
+ if (jbd2_has_feature_64bit(journal))
+ record_len = 8;
+
++ if (pass == PASS_SCAN) {
++ info->nr_revokes += (max - offset) / record_len;
++ return 0;
++ }
++
+ while (offset + record_len <= max) {
+ unsigned long long blocknr;
+ int err;
+@@ -956,7 +996,6 @@ static int scan_revoke_records(journal_t *journal, struct buffer_head *bh,
+ err = jbd2_journal_set_revoke(journal, blocknr, sequence);
+ if (err)
+ return err;
+- ++info->nr_revokes;
+ }
+ return 0;
+ }
+diff --git a/fs/jbd2/revoke.c b/fs/jbd2/revoke.c
+index ce63d5fde9c3a8..bc328c39028a21 100644
+--- a/fs/jbd2/revoke.c
++++ b/fs/jbd2/revoke.c
+@@ -215,7 +215,7 @@ int __init jbd2_journal_init_revoke_table_cache(void)
+ return 0;
+ }
+
+-static struct jbd2_revoke_table_s *jbd2_journal_init_revoke_table(int hash_size)
++struct jbd2_revoke_table_s *jbd2_journal_init_revoke_table(int hash_size)
+ {
+ int shift = 0;
+ int tmp = hash_size;
+@@ -231,7 +231,7 @@ static struct jbd2_revoke_table_s *jbd2_journal_init_revoke_table(int hash_size)
+ table->hash_size = hash_size;
+ table->hash_shift = shift;
+ table->hash_table =
+- kmalloc_array(hash_size, sizeof(struct list_head), GFP_KERNEL);
++ kvmalloc_array(hash_size, sizeof(struct list_head), GFP_KERNEL);
+ if (!table->hash_table) {
+ kmem_cache_free(jbd2_revoke_table_cache, table);
+ table = NULL;
+@@ -245,7 +245,7 @@ static struct jbd2_revoke_table_s *jbd2_journal_init_revoke_table(int hash_size)
+ return table;
+ }
+
+-static void jbd2_journal_destroy_revoke_table(struct jbd2_revoke_table_s *table)
++void jbd2_journal_destroy_revoke_table(struct jbd2_revoke_table_s *table)
+ {
+ int i;
+ struct list_head *hash_list;
+@@ -255,7 +255,7 @@ static void jbd2_journal_destroy_revoke_table(struct jbd2_revoke_table_s *table)
+ J_ASSERT(list_empty(hash_list));
+ }
+
+- kfree(table->hash_table);
++ kvfree(table->hash_table);
+ kmem_cache_free(jbd2_revoke_table_cache, table);
+ }
+
+@@ -345,7 +345,8 @@ int jbd2_journal_revoke(handle_t *handle, unsigned long long blocknr,
+ bh = bh_in;
+
+ if (!bh) {
+- bh = __find_get_block(bdev, blocknr, journal->j_blocksize);
++ bh = __find_get_block_nonatomic(bdev, blocknr,
++ journal->j_blocksize);
+ if (bh)
+ BUFFER_TRACE(bh, "found on hash");
+ }
+@@ -355,7 +356,8 @@ int jbd2_journal_revoke(handle_t *handle, unsigned long long blocknr,
+
+ /* If there is a different buffer_head lying around in
+ * memory anywhere... */
+- bh2 = __find_get_block(bdev, blocknr, journal->j_blocksize);
++ bh2 = __find_get_block_nonatomic(bdev, blocknr,
++ journal->j_blocksize);
+ if (bh2) {
+ /* ... and it has RevokeValid status... */
+ if (bh2 != bh && buffer_revokevalid(bh2))
+@@ -466,7 +468,8 @@ int jbd2_journal_cancel_revoke(handle_t *handle, struct journal_head *jh)
+ * state machine will get very upset later on. */
+ if (need_cancel) {
+ struct buffer_head *bh2;
+- bh2 = __find_get_block(bh->b_bdev, bh->b_blocknr, bh->b_size);
++ bh2 = __find_get_block_nonatomic(bh->b_bdev, bh->b_blocknr,
++ bh->b_size);
+ if (bh2) {
+ if (bh2 != bh)
+ clear_buffer_revoked(bh2);
+@@ -495,9 +498,9 @@ void jbd2_clear_buffer_revoked_flags(journal_t *journal)
+ struct jbd2_revoke_record_s *record;
+ struct buffer_head *bh;
+ record = (struct jbd2_revoke_record_s *)list_entry;
+- bh = __find_get_block(journal->j_fs_dev,
+- record->blocknr,
+- journal->j_blocksize);
++ bh = __find_get_block_nonatomic(journal->j_fs_dev,
++ record->blocknr,
++ journal->j_blocksize);
+ if (bh) {
+ clear_buffer_revoked(bh);
+ __brelse(bh);
+diff --git a/fs/namespace.c b/fs/namespace.c
+index 5b84e29613fe4d..1d950974d67ee8 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -781,12 +781,8 @@ int __legitimize_mnt(struct vfsmount *bastard, unsigned seq)
+ smp_mb(); // see mntput_no_expire() and do_umount()
+ if (likely(!read_seqretry(&mount_lock, seq)))
+ return 0;
+- if (bastard->mnt_flags & MNT_SYNC_UMOUNT) {
+- mnt_add_count(mnt, -1);
+- return 1;
+- }
+ lock_mount_hash();
+- if (unlikely(bastard->mnt_flags & MNT_DOOMED)) {
++ if (unlikely(bastard->mnt_flags & (MNT_SYNC_UMOUNT | MNT_DOOMED))) {
+ mnt_add_count(mnt, -1);
+ unlock_mount_hash();
+ return 1;
+diff --git a/fs/nfs/delegation.c b/fs/nfs/delegation.c
+index 325ba0663a6de2..8bdbc4dca89ca6 100644
+--- a/fs/nfs/delegation.c
++++ b/fs/nfs/delegation.c
+@@ -307,7 +307,8 @@ nfs_start_delegation_return_locked(struct nfs_inode *nfsi)
+ if (delegation == NULL)
+ goto out;
+ spin_lock(&delegation->lock);
+- if (!test_and_set_bit(NFS_DELEGATION_RETURNING, &delegation->flags)) {
++ if (delegation->inode &&
++ !test_and_set_bit(NFS_DELEGATION_RETURNING, &delegation->flags)) {
+ clear_bit(NFS_DELEGATION_RETURN_DELAYED, &delegation->flags);
+ /* Refcount matched in nfs_end_delegation_return() */
+ ret = nfs_get_delegation(delegation);
+diff --git a/fs/nfs/flexfilelayout/flexfilelayout.c b/fs/nfs/flexfilelayout/flexfilelayout.c
+index 98b45b636be330..646cda8e2e75bd 100644
+--- a/fs/nfs/flexfilelayout/flexfilelayout.c
++++ b/fs/nfs/flexfilelayout/flexfilelayout.c
+@@ -1264,6 +1264,7 @@ static void ff_layout_io_track_ds_error(struct pnfs_layout_segment *lseg,
+ case -ECONNRESET:
+ case -EHOSTDOWN:
+ case -EHOSTUNREACH:
++ case -ENETDOWN:
+ case -ENETUNREACH:
+ case -EADDRINUSE:
+ case -ENOBUFS:
+diff --git a/fs/nfs/inode.c b/fs/nfs/inode.c
+index 1aa67fca69b2fb..119e447758b994 100644
+--- a/fs/nfs/inode.c
++++ b/fs/nfs/inode.c
+@@ -74,6 +74,8 @@ nfs_fattr_to_ino_t(struct nfs_fattr *fattr)
+
+ int nfs_wait_bit_killable(struct wait_bit_key *key, int mode)
+ {
++ if (unlikely(nfs_current_task_exiting()))
++ return -EINTR;
+ schedule();
+ if (signal_pending_state(mode, current))
+ return -ERESTARTSYS;
+diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h
+index 59bb4d0338f39a..3cedb7979fcd02 100644
+--- a/fs/nfs/internal.h
++++ b/fs/nfs/internal.h
+@@ -905,6 +905,11 @@ static inline u32 nfs_stateid_hash(const nfs4_stateid *stateid)
+ NFS4_STATEID_OTHER_SIZE);
+ }
+
++static inline bool nfs_current_task_exiting(void)
++{
++ return (current->flags & PF_EXITING) != 0;
++}
++
+ static inline bool nfs_error_is_fatal(int err)
+ {
+ switch (err) {
+diff --git a/fs/nfs/nfs3proc.c b/fs/nfs/nfs3proc.c
+index 0c3bc98cd999cc..c1736dbb92b638 100644
+--- a/fs/nfs/nfs3proc.c
++++ b/fs/nfs/nfs3proc.c
+@@ -39,7 +39,7 @@ nfs3_rpc_wrapper(struct rpc_clnt *clnt, struct rpc_message *msg, int flags)
+ __set_current_state(TASK_KILLABLE|TASK_FREEZABLE_UNSAFE);
+ schedule_timeout(NFS_JUKEBOX_RETRY_TIME);
+ res = -ERESTARTSYS;
+- } while (!fatal_signal_pending(current));
++ } while (!fatal_signal_pending(current) && !nfs_current_task_exiting());
+ return res;
+ }
+
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 810cfd9b7e533b..cfdbd521fc7b66 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -443,6 +443,8 @@ static int nfs4_delay_killable(long *timeout)
+ {
+ might_sleep();
+
++ if (unlikely(nfs_current_task_exiting()))
++ return -EINTR;
+ __set_current_state(TASK_KILLABLE|TASK_FREEZABLE_UNSAFE);
+ schedule_timeout(nfs4_update_delay(timeout));
+ if (!__fatal_signal_pending(current))
+@@ -454,6 +456,8 @@ static int nfs4_delay_interruptible(long *timeout)
+ {
+ might_sleep();
+
++ if (unlikely(nfs_current_task_exiting()))
++ return -EINTR;
+ __set_current_state(TASK_INTERRUPTIBLE|TASK_FREEZABLE_UNSAFE);
+ schedule_timeout(nfs4_update_delay(timeout));
+ if (!signal_pending(current))
+@@ -1774,7 +1778,8 @@ static void nfs_set_open_stateid_locked(struct nfs4_state *state,
+ rcu_read_unlock();
+ trace_nfs4_open_stateid_update_wait(state->inode, stateid, 0);
+
+- if (!fatal_signal_pending(current)) {
++ if (!fatal_signal_pending(current) &&
++ !nfs_current_task_exiting()) {
+ if (schedule_timeout(5*HZ) == 0)
+ status = -EAGAIN;
+ else
+@@ -3578,7 +3583,7 @@ static bool nfs4_refresh_open_old_stateid(nfs4_stateid *dst,
+ write_sequnlock(&state->seqlock);
+ trace_nfs4_close_stateid_update_wait(state->inode, dst, 0);
+
+- if (fatal_signal_pending(current))
++ if (fatal_signal_pending(current) || nfs_current_task_exiting())
+ status = -EINTR;
+ else
+ if (schedule_timeout(5*HZ) != 0)
+diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
+index 542cdf71229feb..04c726cc2900b8 100644
+--- a/fs/nfs/nfs4state.c
++++ b/fs/nfs/nfs4state.c
+@@ -2739,7 +2739,15 @@ static void nfs4_state_manager(struct nfs_client *clp)
+ pr_warn_ratelimited("NFS: state manager%s%s failed on NFSv4 server %s"
+ " with error %d\n", section_sep, section,
+ clp->cl_hostname, -status);
+- ssleep(1);
++ switch (status) {
++ case -ENETDOWN:
++ case -ENETUNREACH:
++ nfs_mark_client_ready(clp, -EIO);
++ break;
++ default:
++ ssleep(1);
++ break;
++ }
+ out_drain:
+ memalloc_nofs_restore(memflags);
+ nfs4_end_drain_session(clp);
+diff --git a/fs/nilfs2/the_nilfs.c b/fs/nilfs2/the_nilfs.c
+index cb01ea81724dca..d0bcf744c553a0 100644
+--- a/fs/nilfs2/the_nilfs.c
++++ b/fs/nilfs2/the_nilfs.c
+@@ -705,8 +705,6 @@ int init_nilfs(struct the_nilfs *nilfs, struct super_block *sb)
+ int blocksize;
+ int err;
+
+- down_write(&nilfs->ns_sem);
+-
+ blocksize = sb_min_blocksize(sb, NILFS_MIN_BLOCK_SIZE);
+ if (!blocksize) {
+ nilfs_err(sb, "unable to set blocksize");
+@@ -779,7 +777,6 @@ int init_nilfs(struct the_nilfs *nilfs, struct super_block *sb)
+ set_nilfs_init(nilfs);
+ err = 0;
+ out:
+- up_write(&nilfs->ns_sem);
+ return err;
+
+ failed_sbh:
+diff --git a/fs/ocfs2/journal.c b/fs/ocfs2/journal.c
+index f37831d5f95a19..e5f58ff2175f41 100644
+--- a/fs/ocfs2/journal.c
++++ b/fs/ocfs2/journal.c
+@@ -1271,7 +1271,7 @@ static int ocfs2_force_read_journal(struct inode *inode)
+ }
+
+ for (i = 0; i < p_blocks; i++, p_blkno++) {
+- bh = __find_get_block(osb->sb->s_bdev, p_blkno,
++ bh = __find_get_block_nonatomic(osb->sb->s_bdev, p_blkno,
+ osb->sb->s_blocksize);
+ /* block not cached. */
+ if (!bh)
+diff --git a/fs/orangefs/inode.c b/fs/orangefs/inode.c
+index aae6d2b8767df0..63d7c1ca0dfd35 100644
+--- a/fs/orangefs/inode.c
++++ b/fs/orangefs/inode.c
+@@ -23,9 +23,9 @@ static int orangefs_writepage_locked(struct page *page,
+ struct orangefs_write_range *wr = NULL;
+ struct iov_iter iter;
+ struct bio_vec bv;
+- size_t len, wlen;
++ size_t wlen;
+ ssize_t ret;
+- loff_t off;
++ loff_t len, off;
+
+ set_page_writeback(page);
+
+@@ -91,8 +91,7 @@ static int orangefs_writepages_work(struct orangefs_writepages *ow,
+ struct orangefs_write_range *wrp, wr;
+ struct iov_iter iter;
+ ssize_t ret;
+- size_t len;
+- loff_t off;
++ loff_t len, off;
+ int i;
+
+ len = i_size_read(inode);
+diff --git a/fs/pidfs.c b/fs/pidfs.c
+index c0478b3c55d9fa..9aa4c705776ddd 100644
+--- a/fs/pidfs.c
++++ b/fs/pidfs.c
+@@ -188,20 +188,21 @@ static void pidfd_show_fdinfo(struct seq_file *m, struct file *f)
+ static __poll_t pidfd_poll(struct file *file, struct poll_table_struct *pts)
+ {
+ struct pid *pid = pidfd_pid(file);
+- bool thread = file->f_flags & PIDFD_THREAD;
+ struct task_struct *task;
+ __poll_t poll_flags = 0;
+
+ poll_wait(file, &pid->wait_pidfd, pts);
+ /*
+- * Depending on PIDFD_THREAD, inform pollers when the thread
+- * or the whole thread-group exits.
++ * Don't wake waiters if the thread-group leader exited
++ * prematurely. They either get notified when the last subthread
++ * exits or not at all if one of the remaining subthreads execs
++ * and assumes the struct pid of the old thread-group leader.
+ */
+ guard(rcu)();
+ task = pid_task(pid, PIDTYPE_PID);
+ if (!task)
+ poll_flags = EPOLLIN | EPOLLRDNORM | EPOLLHUP;
+- else if (task->exit_state && (thread || thread_group_empty(task)))
++ else if (task->exit_state && !delay_group_leader(task))
+ poll_flags = EPOLLIN | EPOLLRDNORM;
+
+ return poll_flags;
+diff --git a/fs/pipe.c b/fs/pipe.c
+index 4d0799e4e7196b..88e81f84e3eaf8 100644
+--- a/fs/pipe.c
++++ b/fs/pipe.c
+@@ -1271,6 +1271,10 @@ int pipe_resize_ring(struct pipe_inode_info *pipe, unsigned int nr_slots)
+ struct pipe_buffer *bufs;
+ unsigned int head, tail, mask, n;
+
++ /* nr_slots larger than limits of pipe->{head,tail} */
++ if (unlikely(nr_slots > (pipe_index_t)-1u))
++ return -EINVAL;
++
+ bufs = kcalloc(nr_slots, sizeof(*bufs),
+ GFP_KERNEL_ACCOUNT | __GFP_NOWARN);
+ if (unlikely(!bufs))
+diff --git a/fs/pstore/inode.c b/fs/pstore/inode.c
+index 56815799ce798e..9de6b280c4f411 100644
+--- a/fs/pstore/inode.c
++++ b/fs/pstore/inode.c
+@@ -265,7 +265,7 @@ static void parse_options(char *options)
+ static int pstore_show_options(struct seq_file *m, struct dentry *root)
+ {
+ if (kmsg_bytes != CONFIG_PSTORE_DEFAULT_KMSG_BYTES)
+- seq_printf(m, ",kmsg_bytes=%lu", kmsg_bytes);
++ seq_printf(m, ",kmsg_bytes=%u", kmsg_bytes);
+ return 0;
+ }
+
+diff --git a/fs/pstore/internal.h b/fs/pstore/internal.h
+index 801d6c0b170c3a..a0fc511969100c 100644
+--- a/fs/pstore/internal.h
++++ b/fs/pstore/internal.h
+@@ -6,7 +6,7 @@
+ #include <linux/time.h>
+ #include <linux/pstore.h>
+
+-extern unsigned long kmsg_bytes;
++extern unsigned int kmsg_bytes;
+
+ #ifdef CONFIG_PSTORE_FTRACE
+ extern void pstore_register_ftrace(void);
+@@ -35,7 +35,7 @@ static inline void pstore_unregister_pmsg(void) {}
+
+ extern struct pstore_info *psinfo;
+
+-extern void pstore_set_kmsg_bytes(int);
++extern void pstore_set_kmsg_bytes(unsigned int bytes);
+ extern void pstore_get_records(int);
+ extern void pstore_get_backend_records(struct pstore_info *psi,
+ struct dentry *root, int quiet);
+diff --git a/fs/pstore/platform.c b/fs/pstore/platform.c
+index f56b066ab80ce4..557cf9d40177f6 100644
+--- a/fs/pstore/platform.c
++++ b/fs/pstore/platform.c
+@@ -92,8 +92,8 @@ module_param(compress, charp, 0444);
+ MODULE_PARM_DESC(compress, "compression to use");
+
+ /* How much of the kernel log to snapshot */
+-unsigned long kmsg_bytes = CONFIG_PSTORE_DEFAULT_KMSG_BYTES;
+-module_param(kmsg_bytes, ulong, 0444);
++unsigned int kmsg_bytes = CONFIG_PSTORE_DEFAULT_KMSG_BYTES;
++module_param(kmsg_bytes, uint, 0444);
+ MODULE_PARM_DESC(kmsg_bytes, "amount of kernel log to snapshot (in bytes)");
+
+ static void *compress_workspace;
+@@ -107,9 +107,9 @@ static void *compress_workspace;
+ static char *big_oops_buf;
+ static size_t max_compressed_size;
+
+-void pstore_set_kmsg_bytes(int bytes)
++void pstore_set_kmsg_bytes(unsigned int bytes)
+ {
+- kmsg_bytes = bytes;
++ WRITE_ONCE(kmsg_bytes, bytes);
+ }
+
+ /* Tag each group of saved records with a sequence number */
+@@ -278,6 +278,7 @@ static void pstore_dump(struct kmsg_dumper *dumper,
+ struct kmsg_dump_detail *detail)
+ {
+ struct kmsg_dump_iter iter;
++ unsigned int remaining = READ_ONCE(kmsg_bytes);
+ unsigned long total = 0;
+ const char *why;
+ unsigned int part = 1;
+@@ -300,7 +301,7 @@ static void pstore_dump(struct kmsg_dumper *dumper,
+ kmsg_dump_rewind(&iter);
+
+ oopscount++;
+- while (total < kmsg_bytes) {
++ while (total < remaining) {
+ char *dst;
+ size_t dst_size;
+ int header_size;
+diff --git a/fs/smb/client/cifsacl.c b/fs/smb/client/cifsacl.c
+index 64bd68f750f842..63b3b1290bed21 100644
+--- a/fs/smb/client/cifsacl.c
++++ b/fs/smb/client/cifsacl.c
+@@ -811,7 +811,23 @@ static void parse_dacl(struct smb_acl *pdacl, char *end_of_acl,
+ return;
+
+ for (i = 0; i < num_aces; ++i) {
++ if (end_of_acl - acl_base < acl_size)
++ break;
++
+ ppace[i] = (struct smb_ace *) (acl_base + acl_size);
++ acl_base = (char *)ppace[i];
++ acl_size = offsetof(struct smb_ace, sid) +
++ offsetof(struct smb_sid, sub_auth);
++
++ if (end_of_acl - acl_base < acl_size ||
++ ppace[i]->sid.num_subauth == 0 ||
++ ppace[i]->sid.num_subauth > SID_MAX_SUB_AUTHORITIES ||
++ (end_of_acl - acl_base <
++ acl_size + sizeof(__le32) * ppace[i]->sid.num_subauth) ||
++ (le16_to_cpu(ppace[i]->size) <
++ acl_size + sizeof(__le32) * ppace[i]->sid.num_subauth))
++ break;
++
+ #ifdef CONFIG_CIFS_DEBUG2
+ dump_ace(ppace[i], end_of_acl);
+ #endif
+@@ -855,7 +871,6 @@ static void parse_dacl(struct smb_acl *pdacl, char *end_of_acl,
+ (void *)ppace[i],
+ sizeof(struct smb_ace)); */
+
+- acl_base = (char *)ppace[i];
+ acl_size = le16_to_cpu(ppace[i]->size);
+ }
+
+@@ -1550,7 +1565,7 @@ cifs_acl_to_fattr(struct cifs_sb_info *cifs_sb, struct cifs_fattr *fattr,
+ int rc = 0;
+ struct tcon_link *tlink = cifs_sb_tlink(cifs_sb);
+ struct smb_version_operations *ops;
+- const u32 info = 0;
++ const u32 info = OWNER_SECINFO | GROUP_SECINFO | DACL_SECINFO;
+
+ cifs_dbg(NOISY, "converting ACL to mode for %s\n", path);
+
+@@ -1604,7 +1619,7 @@ id_mode_to_cifs_acl(struct inode *inode, const char *path, __u64 *pnmode,
+ struct tcon_link *tlink;
+ struct smb_version_operations *ops;
+ bool mode_from_sid, id_from_sid;
+- const u32 info = 0;
++ const u32 info = OWNER_SECINFO | GROUP_SECINFO | DACL_SECINFO;
+ bool posix;
+
+ tlink = cifs_sb_tlink(cifs_sb);
+diff --git a/fs/smb/client/cifspdu.h b/fs/smb/client/cifspdu.h
+index 48d0d6f439cf45..cf9ca7e49b8bc3 100644
+--- a/fs/smb/client/cifspdu.h
++++ b/fs/smb/client/cifspdu.h
+@@ -1266,10 +1266,9 @@ typedef struct smb_com_query_information_rsp {
+ typedef struct smb_com_setattr_req {
+ struct smb_hdr hdr; /* wct = 8 */
+ __le16 attr;
+- __le16 time_low;
+- __le16 time_high;
++ __le32 last_write_time;
+ __le16 reserved[5]; /* must be zero */
+- __u16 ByteCount;
++ __le16 ByteCount;
+ __u8 BufferFormat; /* 4 = ASCII */
+ unsigned char fileName[];
+ } __attribute__((packed)) SETATTR_REQ;
+diff --git a/fs/smb/client/cifsproto.h b/fs/smb/client/cifsproto.h
+index 278092a15f8903..2dd5a485a1e04a 100644
+--- a/fs/smb/client/cifsproto.h
++++ b/fs/smb/client/cifsproto.h
+@@ -31,6 +31,9 @@ extern void cifs_small_buf_release(void *);
+ extern void free_rsp_buf(int, void *);
+ extern int smb_send(struct TCP_Server_Info *, struct smb_hdr *,
+ unsigned int /* length */);
++extern int smb_send_kvec(struct TCP_Server_Info *server,
++ struct msghdr *msg,
++ size_t *sent);
+ extern unsigned int _get_xid(void);
+ extern void _free_xid(unsigned int);
+ #define get_xid() \
+@@ -392,6 +395,10 @@ extern int CIFSSMBQFSUnixInfo(const unsigned int xid, struct cifs_tcon *tcon);
+ extern int CIFSSMBQFSPosixInfo(const unsigned int xid, struct cifs_tcon *tcon,
+ struct kstatfs *FSData);
+
++extern int SMBSetInformation(const unsigned int xid, struct cifs_tcon *tcon,
++ const char *fileName, __le32 attributes, __le64 write_time,
++ const struct nls_table *nls_codepage,
++ struct cifs_sb_info *cifs_sb);
+ extern int CIFSSMBSetPathInfo(const unsigned int xid, struct cifs_tcon *tcon,
+ const char *fileName, const FILE_BASIC_INFO *data,
+ const struct nls_table *nls_codepage,
+diff --git a/fs/smb/client/cifssmb.c b/fs/smb/client/cifssmb.c
+index 4fc9485c5d91a3..4059550859a9b1 100644
+--- a/fs/smb/client/cifssmb.c
++++ b/fs/smb/client/cifssmb.c
+@@ -1038,15 +1038,31 @@ static __u16 convert_disposition(int disposition)
+ static int
+ access_flags_to_smbopen_mode(const int access_flags)
+ {
+- int masked_flags = access_flags & (GENERIC_READ | GENERIC_WRITE);
+-
+- if (masked_flags == GENERIC_READ)
+- return SMBOPEN_READ;
+- else if (masked_flags == GENERIC_WRITE)
++ /*
++ * SYSTEM_SECURITY grants both read and write access to SACL, treat is as read/write.
++ * MAXIMUM_ALLOWED grants as many access as possible, so treat it as read/write too.
++ * SYNCHRONIZE as is does not grant any specific access, so do not check its mask.
++ * If only SYNCHRONIZE bit is specified then fallback to read access.
++ */
++ bool with_write_flags = access_flags & (FILE_WRITE_DATA | FILE_APPEND_DATA | FILE_WRITE_EA |
++ FILE_DELETE_CHILD | FILE_WRITE_ATTRIBUTES | DELETE |
++ WRITE_DAC | WRITE_OWNER | SYSTEM_SECURITY |
++ MAXIMUM_ALLOWED | GENERIC_WRITE | GENERIC_ALL);
++ bool with_read_flags = access_flags & (FILE_READ_DATA | FILE_READ_EA | FILE_EXECUTE |
++ FILE_READ_ATTRIBUTES | READ_CONTROL |
++ SYSTEM_SECURITY | MAXIMUM_ALLOWED | GENERIC_ALL |
++ GENERIC_EXECUTE | GENERIC_READ);
++ bool with_execute_flags = access_flags & (FILE_EXECUTE | MAXIMUM_ALLOWED | GENERIC_ALL |
++ GENERIC_EXECUTE);
++
++ if (with_write_flags && with_read_flags)
++ return SMBOPEN_READWRITE;
++ else if (with_write_flags)
+ return SMBOPEN_WRITE;
+-
+- /* just go for read/write */
+- return SMBOPEN_READWRITE;
++ else if (with_execute_flags)
++ return SMBOPEN_EXECUTE;
++ else
++ return SMBOPEN_READ;
+ }
+
+ int
+@@ -2709,6 +2725,9 @@ int cifs_query_reparse_point(const unsigned int xid,
+ if (cap_unix(tcon->ses))
+ return -EOPNOTSUPP;
+
++ if (!(le32_to_cpu(tcon->fsAttrInfo.Attributes) & FILE_SUPPORTS_REPARSE_POINTS))
++ return -EOPNOTSUPP;
++
+ oparms = (struct cifs_open_parms) {
+ .tcon = tcon,
+ .cifs_sb = cifs_sb,
+@@ -3400,8 +3419,7 @@ CIFSSMBGetCIFSACL(const unsigned int xid, struct cifs_tcon *tcon, __u16 fid,
+ /* BB TEST with big acls that might need to be e.g. larger than 16K */
+ pSMB->MaxSetupCount = 0;
+ pSMB->Fid = fid; /* file handle always le */
+- pSMB->AclFlags = cpu_to_le32(CIFS_ACL_OWNER | CIFS_ACL_GROUP |
+- CIFS_ACL_DACL | info);
++ pSMB->AclFlags = cpu_to_le32(info);
+ pSMB->ByteCount = cpu_to_le16(11); /* 3 bytes pad + 8 bytes parm */
+ inc_rfc1001_len(pSMB, 11);
+ iov[0].iov_base = (char *)pSMB;
+@@ -5150,6 +5168,63 @@ CIFSSMBSetFileSize(const unsigned int xid, struct cifs_tcon *tcon,
+ return rc;
+ }
+
++int
++SMBSetInformation(const unsigned int xid, struct cifs_tcon *tcon,
++ const char *fileName, __le32 attributes, __le64 write_time,
++ const struct nls_table *nls_codepage,
++ struct cifs_sb_info *cifs_sb)
++{
++ SETATTR_REQ *pSMB;
++ SETATTR_RSP *pSMBr;
++ struct timespec64 ts;
++ int bytes_returned;
++ int name_len;
++ int rc;
++
++ cifs_dbg(FYI, "In %s path %s\n", __func__, fileName);
++
++retry:
++ rc = smb_init(SMB_COM_SETATTR, 8, tcon, (void **) &pSMB,
++ (void **) &pSMBr);
++ if (rc)
++ return rc;
++
++ if (pSMB->hdr.Flags2 & SMBFLG2_UNICODE) {
++ name_len =
++ cifsConvertToUTF16((__le16 *) pSMB->fileName,
++ fileName, PATH_MAX, nls_codepage,
++ cifs_remap(cifs_sb));
++ name_len++; /* trailing null */
++ name_len *= 2;
++ } else {
++ name_len = copy_path_name(pSMB->fileName, fileName);
++ }
++ /* Only few attributes can be set by this command, others are not accepted by Win9x. */
++ pSMB->attr = cpu_to_le16(le32_to_cpu(attributes) &
++ (ATTR_READONLY | ATTR_HIDDEN | ATTR_SYSTEM | ATTR_ARCHIVE));
++ /* Zero write time value (in both NT and SETATTR formats) means to not change it. */
++ if (le64_to_cpu(write_time) != 0) {
++ ts = cifs_NTtimeToUnix(write_time);
++ pSMB->last_write_time = cpu_to_le32(ts.tv_sec);
++ }
++ pSMB->BufferFormat = 0x04;
++ name_len++; /* account for buffer type byte */
++ inc_rfc1001_len(pSMB, (__u16)name_len);
++ pSMB->ByteCount = cpu_to_le16(name_len);
++
++ rc = SendReceive(xid, tcon->ses, (struct smb_hdr *) pSMB,
++ (struct smb_hdr *) pSMBr, &bytes_returned, 0);
++ if (rc)
++ cifs_dbg(FYI, "Send error in %s = %d\n", __func__, rc);
++
++ cifs_buf_release(pSMB);
++
++ if (rc == -EAGAIN)
++ goto retry;
++
++ return rc;
++}
++
+ /* Some legacy servers such as NT4 require that the file times be set on
+ an open handle, rather than by pathname - this is awkward due to
+ potential access conflicts on the open, but it is unavoidable for these
+diff --git a/fs/smb/client/connect.c b/fs/smb/client/connect.c
+index cc9c912db8def9..22f29d27259282 100644
+--- a/fs/smb/client/connect.c
++++ b/fs/smb/client/connect.c
+@@ -3028,8 +3028,10 @@ ip_rfc1001_connect(struct TCP_Server_Info *server)
+ * sessinit is sent but no second negprot
+ */
+ struct rfc1002_session_packet req = {};
+- struct smb_hdr *smb_buf = (struct smb_hdr *)&req;
++ struct msghdr msg = {};
++ struct kvec iov = {};
+ unsigned int len;
++ size_t sent;
+
+ req.trailer.session_req.called_len = sizeof(req.trailer.session_req.called_name);
+
+@@ -3058,10 +3060,18 @@ ip_rfc1001_connect(struct TCP_Server_Info *server)
+ * As per rfc1002, @len must be the number of bytes that follows the
+ * length field of a rfc1002 session request payload.
+ */
+- len = sizeof(req) - offsetof(struct rfc1002_session_packet, trailer.session_req);
++ len = sizeof(req.trailer.session_req);
++ req.type = RFC1002_SESSION_REQUEST;
++ req.flags = 0;
++ req.length = cpu_to_be16(len);
++ len += offsetof(typeof(req), trailer.session_req);
++ iov.iov_base = &req;
++ iov.iov_len = len;
++ iov_iter_kvec(&msg.msg_iter, ITER_SOURCE, &iov, 1, len);
++ rc = smb_send_kvec(server, &msg, &sent);
++ if (rc < 0 || len != sent)
++ return (rc == -EINTR || rc == -EAGAIN) ? rc : -ECONNABORTED;
+
+- smb_buf->smb_buf_length = cpu_to_be32((RFC1002_SESSION_REQUEST << 24) | len);
+- rc = smb_send(server, smb_buf, len);
+ /*
+ * RFC1001 layer in at least one server requires very short break before
+ * negprot presumably because not expecting negprot to follow so fast.
+@@ -3070,7 +3080,7 @@ ip_rfc1001_connect(struct TCP_Server_Info *server)
+ */
+ usleep_range(1000, 2000);
+
+- return rc;
++ return 0;
+ }
+
+ static int
+@@ -3922,11 +3932,13 @@ int
+ cifs_negotiate_protocol(const unsigned int xid, struct cifs_ses *ses,
+ struct TCP_Server_Info *server)
+ {
++ bool in_retry = false;
+ int rc = 0;
+
+ if (!server->ops->need_neg || !server->ops->negotiate)
+ return -ENOSYS;
+
++retry:
+ /* only send once per connect */
+ spin_lock(&server->srv_lock);
+ if (server->tcpStatus != CifsGood &&
+@@ -3946,6 +3958,14 @@ cifs_negotiate_protocol(const unsigned int xid, struct cifs_ses *ses,
+ spin_unlock(&server->srv_lock);
+
+ rc = server->ops->negotiate(xid, ses, server);
++ if (rc == -EAGAIN) {
++ /* Allow one retry attempt */
++ if (!in_retry) {
++ in_retry = true;
++ goto retry;
++ }
++ rc = -EHOSTDOWN;
++ }
+ if (rc == 0) {
+ spin_lock(&server->srv_lock);
+ if (server->tcpStatus == CifsInNegotiate)
+diff --git a/fs/smb/client/fs_context.c b/fs/smb/client/fs_context.c
+index e38521a713a6b3..b877def5a36640 100644
+--- a/fs/smb/client/fs_context.c
++++ b/fs/smb/client/fs_context.c
+@@ -1118,6 +1118,7 @@ static int smb3_fs_context_parse_param(struct fs_context *fc,
+ int i, opt;
+ bool is_smb3 = !strcmp(fc->fs_type->name, "smb3");
+ bool skip_parsing = false;
++ char *hostname;
+
+ cifs_dbg(FYI, "CIFS: parsing cifs mount option '%s'\n", param->key);
+
+@@ -1327,6 +1328,7 @@ static int smb3_fs_context_parse_param(struct fs_context *fc,
+ case Opt_rsize:
+ ctx->rsize = result.uint_32;
+ ctx->got_rsize = true;
++ ctx->vol_rsize = ctx->rsize;
+ break;
+ case Opt_wsize:
+ ctx->wsize = result.uint_32;
+@@ -1342,6 +1344,7 @@ static int smb3_fs_context_parse_param(struct fs_context *fc,
+ ctx->wsize, PAGE_SIZE);
+ }
+ }
++ ctx->vol_wsize = ctx->wsize;
+ break;
+ case Opt_acregmax:
+ if (result.uint_32 > CIFS_MAX_ACTIMEO / HZ) {
+@@ -1448,6 +1451,16 @@ static int smb3_fs_context_parse_param(struct fs_context *fc,
+ cifs_errorf(fc, "OOM when copying UNC string\n");
+ goto cifs_parse_mount_err;
+ }
++ hostname = extract_hostname(ctx->UNC);
++ if (IS_ERR(hostname)) {
++ cifs_errorf(fc, "Cannot extract hostname from UNC string\n");
++ goto cifs_parse_mount_err;
++ }
++ /* last byte, type, is 0x20 for servr type */
++ memset(ctx->target_rfc1001_name, 0x20, RFC1001_NAME_LEN_WITH_NULL);
++ for (i = 0; i < RFC1001_NAME_LEN && hostname[i] != 0; i++)
++ ctx->target_rfc1001_name[i] = toupper(hostname[i]);
++ kfree(hostname);
+ break;
+ case Opt_user:
+ kfree(ctx->username);
+diff --git a/fs/smb/client/fs_context.h b/fs/smb/client/fs_context.h
+index 881bfc08667e7d..7516ccdc69c71c 100644
+--- a/fs/smb/client/fs_context.h
++++ b/fs/smb/client/fs_context.h
+@@ -279,6 +279,9 @@ struct smb3_fs_context {
+ bool use_client_guid:1;
+ /* reuse existing guid for multichannel */
+ u8 client_guid[SMB2_CLIENT_GUID_SIZE];
++ /* User-specified original r/wsize value */
++ unsigned int vol_rsize;
++ unsigned int vol_wsize;
+ unsigned int bsize;
+ unsigned int rasize;
+ unsigned int rsize;
+diff --git a/fs/smb/client/link.c b/fs/smb/client/link.c
+index 6e6c09cc5ce7ab..769752ad2c5ce8 100644
+--- a/fs/smb/client/link.c
++++ b/fs/smb/client/link.c
+@@ -258,7 +258,7 @@ cifs_query_mf_symlink(unsigned int xid, struct cifs_tcon *tcon,
+ struct cifs_open_parms oparms;
+ struct cifs_io_parms io_parms = {0};
+ int buf_type = CIFS_NO_BUFFER;
+- FILE_ALL_INFO file_info;
++ struct cifs_open_info_data query_data;
+
+ oparms = (struct cifs_open_parms) {
+ .tcon = tcon,
+@@ -270,11 +270,11 @@ cifs_query_mf_symlink(unsigned int xid, struct cifs_tcon *tcon,
+ .fid = &fid,
+ };
+
+- rc = CIFS_open(xid, &oparms, &oplock, &file_info);
++ rc = tcon->ses->server->ops->open(xid, &oparms, &oplock, &query_data);
+ if (rc)
+ return rc;
+
+- if (file_info.EndOfFile != cpu_to_le64(CIFS_MF_SYMLINK_FILE_SIZE)) {
++ if (query_data.fi.EndOfFile != cpu_to_le64(CIFS_MF_SYMLINK_FILE_SIZE)) {
+ rc = -ENOENT;
+ /* it's not a symlink */
+ goto out;
+@@ -313,7 +313,7 @@ cifs_create_mf_symlink(unsigned int xid, struct cifs_tcon *tcon,
+ .fid = &fid,
+ };
+
+- rc = CIFS_open(xid, &oparms, &oplock, NULL);
++ rc = tcon->ses->server->ops->open(xid, &oparms, &oplock, NULL);
+ if (rc)
+ return rc;
+
+@@ -643,7 +643,8 @@ cifs_symlink(struct mnt_idmap *idmap, struct inode *inode,
+ case CIFS_SYMLINK_TYPE_NATIVE:
+ case CIFS_SYMLINK_TYPE_NFS:
+ case CIFS_SYMLINK_TYPE_WSL:
+- if (server->ops->create_reparse_symlink) {
++ if (server->ops->create_reparse_symlink &&
++ (le32_to_cpu(pTcon->fsAttrInfo.Attributes) & FILE_SUPPORTS_REPARSE_POINTS)) {
+ rc = server->ops->create_reparse_symlink(xid, inode,
+ direntry,
+ pTcon,
+diff --git a/fs/smb/client/readdir.c b/fs/smb/client/readdir.c
+index 50f96259d9adc2..787d6bcb5d1dc4 100644
+--- a/fs/smb/client/readdir.c
++++ b/fs/smb/client/readdir.c
+@@ -733,7 +733,10 @@ find_cifs_entry(const unsigned int xid, struct cifs_tcon *tcon, loff_t pos,
+ else
+ cifs_buf_release(cfile->srch_inf.
+ ntwrk_buf_start);
++ /* Reset all pointers to the network buffer to prevent stale references */
+ cfile->srch_inf.ntwrk_buf_start = NULL;
++ cfile->srch_inf.srch_entries_start = NULL;
++ cfile->srch_inf.last_entry = NULL;
+ }
+ rc = initiate_cifs_search(xid, file, full_path);
+ if (rc) {
+@@ -756,11 +759,11 @@ find_cifs_entry(const unsigned int xid, struct cifs_tcon *tcon, loff_t pos,
+ rc = server->ops->query_dir_next(xid, tcon, &cfile->fid,
+ search_flags,
+ &cfile->srch_inf);
++ if (rc)
++ return -ENOENT;
+ /* FindFirst/Next set last_entry to NULL on malformed reply */
+ if (cfile->srch_inf.last_entry)
+ cifs_save_resume_key(cfile->srch_inf.last_entry, cfile);
+- if (rc)
+- return -ENOENT;
+ }
+ if (index_to_find < cfile->srch_inf.index_of_last_entry) {
+ /* we found the buffer that contains the entry */
+diff --git a/fs/smb/client/smb1ops.c b/fs/smb/client/smb1ops.c
+index 808970e4a7142f..47c815f34bc367 100644
+--- a/fs/smb/client/smb1ops.c
++++ b/fs/smb/client/smb1ops.c
+@@ -426,13 +426,6 @@ cifs_negotiate(const unsigned int xid,
+ {
+ int rc;
+ rc = CIFSSMBNegotiate(xid, ses, server);
+- if (rc == -EAGAIN) {
+- /* retry only once on 1st time connection */
+- set_credits(server, 1);
+- rc = CIFSSMBNegotiate(xid, ses, server);
+- if (rc == -EAGAIN)
+- rc = -EHOSTDOWN;
+- }
+ return rc;
+ }
+
+@@ -444,8 +437,8 @@ cifs_negotiate_wsize(struct cifs_tcon *tcon, struct smb3_fs_context *ctx)
+ unsigned int wsize;
+
+ /* start with specified wsize, or default */
+- if (ctx->wsize)
+- wsize = ctx->wsize;
++ if (ctx->got_wsize)
++ wsize = ctx->vol_wsize;
+ else if (tcon->unix_ext && (unix_cap & CIFS_UNIX_LARGE_WRITE_CAP))
+ wsize = CIFS_DEFAULT_IOSIZE;
+ else
+@@ -497,7 +490,7 @@ cifs_negotiate_rsize(struct cifs_tcon *tcon, struct smb3_fs_context *ctx)
+ else
+ defsize = server->maxBuf - sizeof(READ_RSP);
+
+- rsize = ctx->rsize ? ctx->rsize : defsize;
++ rsize = ctx->got_rsize ? ctx->vol_rsize : defsize;
+
+ /*
+ * no CAP_LARGE_READ_X? Then MS-CIFS states that we must limit this to
+@@ -548,24 +541,104 @@ static int cifs_query_path_info(const unsigned int xid,
+ const char *full_path,
+ struct cifs_open_info_data *data)
+ {
+- int rc;
++ int rc = -EOPNOTSUPP;
+ FILE_ALL_INFO fi = {};
++ struct cifs_search_info search_info = {};
++ bool non_unicode_wildcard = false;
+
+ data->reparse_point = false;
+ data->adjust_tz = false;
+
+- /* could do find first instead but this returns more info */
+- rc = CIFSSMBQPathInfo(xid, tcon, full_path, &fi, 0 /* not legacy */, cifs_sb->local_nls,
+- cifs_remap(cifs_sb));
+ /*
+- * BB optimize code so we do not make the above call when server claims
+- * no NT SMB support and the above call failed at least once - set flag
+- * in tcon or mount.
++ * First try CIFSSMBQPathInfo() function which returns more info
++ * (NumberOfLinks) than CIFSFindFirst() fallback function.
++ * Some servers like Win9x do not support SMB_QUERY_FILE_ALL_INFO over
++ * TRANS2_QUERY_PATH_INFORMATION, but supports it with filehandle over
++ * TRANS2_QUERY_FILE_INFORMATION (function CIFSSMBQFileInfo(). But SMB
++ * Open command on non-NT servers works only for files, does not work
++ * for directories. And moreover Win9x SMB server returns bogus data in
++ * SMB_QUERY_FILE_ALL_INFO Attributes field. So for non-NT servers,
++ * do not even use CIFSSMBQPathInfo() or CIFSSMBQFileInfo() function.
++ */
++ if (tcon->ses->capabilities & CAP_NT_SMBS)
++ rc = CIFSSMBQPathInfo(xid, tcon, full_path, &fi, 0 /* not legacy */,
++ cifs_sb->local_nls, cifs_remap(cifs_sb));
++
++ /*
++ * Non-UNICODE variant of fallback functions below expands wildcards,
++ * so they cannot be used for querying paths with wildcard characters.
+ */
+- if ((rc == -EOPNOTSUPP) || (rc == -EINVAL)) {
++ if (rc && !(tcon->ses->capabilities & CAP_UNICODE) && strpbrk(full_path, "*?\"><"))
++ non_unicode_wildcard = true;
++
++ /*
++ * Then fallback to CIFSFindFirst() which works also with non-NT servers
++ * but does not does not provide NumberOfLinks.
++ */
++ if ((rc == -EOPNOTSUPP || rc == -EINVAL) &&
++ !non_unicode_wildcard) {
++ if (!(tcon->ses->capabilities & tcon->ses->server->vals->cap_nt_find))
++ search_info.info_level = SMB_FIND_FILE_INFO_STANDARD;
++ else
++ search_info.info_level = SMB_FIND_FILE_FULL_DIRECTORY_INFO;
++ rc = CIFSFindFirst(xid, tcon, full_path, cifs_sb, NULL,
++ CIFS_SEARCH_CLOSE_ALWAYS | CIFS_SEARCH_CLOSE_AT_END,
++ &search_info, false);
++ if (rc == 0) {
++ if (!(tcon->ses->capabilities & tcon->ses->server->vals->cap_nt_find)) {
++ FIND_FILE_STANDARD_INFO *di;
++ int offset = tcon->ses->server->timeAdj;
++
++ di = (FIND_FILE_STANDARD_INFO *)search_info.srch_entries_start;
++ fi.CreationTime = cpu_to_le64(cifs_UnixTimeToNT(cnvrtDosUnixTm(
++ di->CreationDate, di->CreationTime, offset)));
++ fi.LastAccessTime = cpu_to_le64(cifs_UnixTimeToNT(cnvrtDosUnixTm(
++ di->LastAccessDate, di->LastAccessTime, offset)));
++ fi.LastWriteTime = cpu_to_le64(cifs_UnixTimeToNT(cnvrtDosUnixTm(
++ di->LastWriteDate, di->LastWriteTime, offset)));
++ fi.ChangeTime = fi.LastWriteTime;
++ fi.Attributes = cpu_to_le32(le16_to_cpu(di->Attributes));
++ fi.AllocationSize = cpu_to_le64(le32_to_cpu(di->AllocationSize));
++ fi.EndOfFile = cpu_to_le64(le32_to_cpu(di->DataSize));
++ } else {
++ FILE_FULL_DIRECTORY_INFO *di;
++
++ di = (FILE_FULL_DIRECTORY_INFO *)search_info.srch_entries_start;
++ fi.CreationTime = di->CreationTime;
++ fi.LastAccessTime = di->LastAccessTime;
++ fi.LastWriteTime = di->LastWriteTime;
++ fi.ChangeTime = di->ChangeTime;
++ fi.Attributes = di->ExtFileAttributes;
++ fi.AllocationSize = di->AllocationSize;
++ fi.EndOfFile = di->EndOfFile;
++ fi.EASize = di->EaSize;
++ }
++ fi.NumberOfLinks = cpu_to_le32(1);
++ fi.DeletePending = 0;
++ fi.Directory = !!(le32_to_cpu(fi.Attributes) & ATTR_DIRECTORY);
++ cifs_buf_release(search_info.ntwrk_buf_start);
++ } else if (!full_path[0]) {
++ /*
++ * CIFSFindFirst() does not work on the root path if the
++ * root path was exported on the server from the top
++ * level path (drive letter).
++ */
++ rc = -EOPNOTSUPP;
++ }
++ }
++
++ /*
++ * If everything failed, then fall back to the legacy SMB command
++ * SMB_COM_QUERY_INFORMATION, which works with all servers but
++ * provides only limited information.
++ */
++ if ((rc == -EOPNOTSUPP || rc == -EINVAL) && !non_unicode_wildcard) {
+ rc = SMBQueryInformation(xid, tcon, full_path, &fi, cifs_sb->local_nls,
+ cifs_remap(cifs_sb));
+ data->adjust_tz = true;
++ } else if ((rc == -EOPNOTSUPP || rc == -EINVAL) && non_unicode_wildcard) {
++ /* A path with a non-UNICODE wildcard character cannot exist. */
++ rc = -ENOENT;
+ }
+
+ if (!rc) {
+@@ -644,6 +717,13 @@ static int cifs_query_file_info(const unsigned int xid, struct cifs_tcon *tcon,
+ int rc;
+ FILE_ALL_INFO fi = {};
+
++ /*
++ * CIFSSMBQFileInfo() on non-NT servers returns bogus data in the
++ * Attributes field, so do not use this command for non-NT servers.
++ */
++ if (!(tcon->ses->capabilities & CAP_NT_SMBS))
++ return -EOPNOTSUPP;
++
+ if (cfile->symlink_target) {
+ data->symlink_target = kstrdup(cfile->symlink_target, GFP_KERNEL);
+ if (!data->symlink_target)
+@@ -814,6 +894,9 @@ smb_set_file_info(struct inode *inode, const char *full_path,
+ struct cifs_fid fid;
+ struct cifs_open_parms oparms;
+ struct cifsFileInfo *open_file;
++ FILE_BASIC_INFO new_buf;
++ struct cifs_open_info_data query_data;
++ __le64 write_time = buf->LastWriteTime;
+ struct cifsInodeInfo *cinode = CIFS_I(inode);
+ struct cifs_sb_info *cifs_sb = CIFS_SB(inode->i_sb);
+ struct tcon_link *tlink = NULL;
+@@ -821,20 +904,58 @@ smb_set_file_info(struct inode *inode, const char *full_path,
+
+ /* if the file is already open for write, just use that fileid */
+ open_file = find_writable_file(cinode, FIND_WR_FSUID_ONLY);
++
+ if (open_file) {
+ fid.netfid = open_file->fid.netfid;
+ netpid = open_file->pid;
+ tcon = tlink_tcon(open_file->tlink);
+- goto set_via_filehandle;
++ } else {
++ tlink = cifs_sb_tlink(cifs_sb);
++ if (IS_ERR(tlink)) {
++ rc = PTR_ERR(tlink);
++ tlink = NULL;
++ goto out;
++ }
++ tcon = tlink_tcon(tlink);
+ }
+
+- tlink = cifs_sb_tlink(cifs_sb);
+- if (IS_ERR(tlink)) {
+- rc = PTR_ERR(tlink);
+- tlink = NULL;
+- goto out;
++ /*
++ * Non-NT servers interpret a zero time value in SMB_SET_FILE_BASIC_INFO
++ * over TRANS2_SET_FILE_INFORMATION as a valid time value, whereas NT
++ * servers interpret a zero time value as "do not change the existing
++ * value on the server". The API of the ->set_file_info() callback expects
++ * the NT meaning - do not change. Therefore, if the server is non-NT and
++ * some time values in "buf" are zero, then fetch the missing time values.
++ */
++ if (!(tcon->ses->capabilities & CAP_NT_SMBS) &&
++ (!buf->CreationTime || !buf->LastAccessTime ||
++ !buf->LastWriteTime || !buf->ChangeTime)) {
++ rc = cifs_query_path_info(xid, tcon, cifs_sb, full_path, &query_data);
++ if (rc) {
++ if (open_file) {
++ cifsFileInfo_put(open_file);
++ open_file = NULL;
++ }
++ goto out;
++ }
++ /*
++ * The original write_time from buf->LastWriteTime is preserved,
++ * as SMBSetInformation() interprets zero as "do not change".
++ */
++ new_buf = *buf;
++ buf = &new_buf;
++ if (!buf->CreationTime)
++ buf->CreationTime = query_data.fi.CreationTime;
++ if (!buf->LastAccessTime)
++ buf->LastAccessTime = query_data.fi.LastAccessTime;
++ if (!buf->LastWriteTime)
++ buf->LastWriteTime = query_data.fi.LastWriteTime;
++ if (!buf->ChangeTime)
++ buf->ChangeTime = query_data.fi.ChangeTime;
+ }
+- tcon = tlink_tcon(tlink);
++
++ if (open_file)
++ goto set_via_filehandle;
+
+ rc = CIFSSMBSetPathInfo(xid, tcon, full_path, buf, cifs_sb->local_nls,
+ cifs_sb);
+@@ -855,8 +976,45 @@ smb_set_file_info(struct inode *inode, const char *full_path,
+ .fid = &fid,
+ };
+
+- cifs_dbg(FYI, "calling SetFileInfo since SetPathInfo for times not supported by this server\n");
+- rc = CIFS_open(xid, &oparms, &oplock, NULL);
++ if (S_ISDIR(inode->i_mode) && !(tcon->ses->capabilities & CAP_NT_SMBS)) {
++ /* Opening a directory path is not possible on non-NT servers. */
++ rc = -EOPNOTSUPP;
++ } else {
++ /*
++ * Use cifs_open_file() instead of CIFS_open(), as
++ * cifs_open_file() selects the correct function, which
++ * also works on non-NT servers.
++ */
++ rc = cifs_open_file(xid, &oparms, &oplock, NULL);
++ /*
++ * Opening a path for writing on non-NT servers is not
++ * possible when the read-only attribute is already set;
++ * a non-NT server returns -EACCES in this case. For those
++ * servers the only way to clear the read-only bit is via
++ * the SMB_COM_SETATTR command.
++ */
++ if (rc == -EACCES &&
++ (cinode->cifsAttrs & ATTR_READONLY) &&
++ le32_to_cpu(buf->Attributes) != 0 && /* 0 = do not change attrs */
++ !(le32_to_cpu(buf->Attributes) & ATTR_READONLY) &&
++ !(tcon->ses->capabilities & CAP_NT_SMBS))
++ rc = -EOPNOTSUPP;
++ }
++
++ /* Fall back to the SMB_COM_SETATTR command when absolutely needed. */
++ if (rc == -EOPNOTSUPP) {
++ cifs_dbg(FYI, "calling SetInformation since SetPathInfo for attrs/times not supported by this server\n");
++ rc = SMBSetInformation(xid, tcon, full_path,
++ buf->Attributes != 0 ? buf->Attributes : cpu_to_le32(cinode->cifsAttrs),
++ write_time,
++ cifs_sb->local_nls, cifs_sb);
++ if (rc == 0)
++ cinode->cifsAttrs = le32_to_cpu(buf->Attributes);
++ else
++ rc = -EACCES;
++ goto out;
++ }
++
+ if (rc != 0) {
+ if (rc == -EIO)
+ rc = -EINVAL;
+@@ -864,6 +1022,7 @@ smb_set_file_info(struct inode *inode, const char *full_path,
+ }
+
+ netpid = current->tgid;
++ cifs_dbg(FYI, "calling SetFileInfo since SetPathInfo for attrs/times not supported by this server\n");
+
+ set_via_filehandle:
+ rc = CIFSSMBSetFileInfo(xid, tcon, buf, fid.netfid, netpid);
+@@ -874,6 +1033,21 @@ smb_set_file_info(struct inode *inode, const char *full_path,
+ CIFSSMBClose(xid, tcon, fid.netfid);
+ else
+ cifsFileInfo_put(open_file);
++
++ /*
++ * Setting the read-only bit is not honored on non-NT servers when done
++ * via open semantics, so to set it, use the SMB_COM_SETATTR command.
++ * This command works only after the file is closed, so use it only when
++ * the operation was called without a filehandle.
++ */
++ if (open_file == NULL &&
++ !(tcon->ses->capabilities & CAP_NT_SMBS) &&
++ le32_to_cpu(buf->Attributes) & ATTR_READONLY) {
++ SMBSetInformation(xid, tcon, full_path,
++ buf->Attributes,
++ 0 /* do not change write time */,
++ cifs_sb->local_nls, cifs_sb);
++ }
+ out:
+ if (tlink != NULL)
+ cifs_put_tlink(tlink);
+diff --git a/fs/smb/client/smb2file.c b/fs/smb/client/smb2file.c
+index d609a20fb98a9c..2d726e9b950cf4 100644
+--- a/fs/smb/client/smb2file.c
++++ b/fs/smb/client/smb2file.c
+@@ -152,16 +152,25 @@ int smb2_open_file(const unsigned int xid, struct cifs_open_parms *oparms, __u32
+ int err_buftype = CIFS_NO_BUFFER;
+ struct cifs_fid *fid = oparms->fid;
+ struct network_resiliency_req nr_ioctl_req;
++ bool retry_without_read_attributes = false;
+
+ smb2_path = cifs_convert_path_to_utf16(oparms->path, oparms->cifs_sb);
+ if (smb2_path == NULL)
+ return -ENOMEM;
+
+- oparms->desired_access |= FILE_READ_ATTRIBUTES;
++ if (!(oparms->desired_access & FILE_READ_ATTRIBUTES)) {
++ oparms->desired_access |= FILE_READ_ATTRIBUTES;
++ retry_without_read_attributes = true;
++ }
+ smb2_oplock = SMB2_OPLOCK_LEVEL_BATCH;
+
+ rc = SMB2_open(xid, oparms, smb2_path, &smb2_oplock, smb2_data, NULL, &err_iov,
+ &err_buftype);
++ if (rc == -EACCES && retry_without_read_attributes) {
++ oparms->desired_access &= ~FILE_READ_ATTRIBUTES;
++ rc = SMB2_open(xid, oparms, smb2_path, &smb2_oplock, smb2_data, NULL, &err_iov,
++ &err_buftype);
++ }
+ if (rc && data) {
+ struct smb2_hdr *hdr = err_iov.iov_base;
+
+diff --git a/fs/smb/client/smb2inode.c b/fs/smb/client/smb2inode.c
+index 826b57a5a2a8d2..e9fd3e204a6f40 100644
+--- a/fs/smb/client/smb2inode.c
++++ b/fs/smb/client/smb2inode.c
+@@ -1273,6 +1273,14 @@ struct inode *smb2_get_reparse_inode(struct cifs_open_info_data *data,
+ int rc;
+ int i;
+
++ /*
++ * If the server filesystem does not support reparse points, then do not
++ * attempt to create a reparse point. This prevents creating an unusable
++ * empty object on the server.
++ */
++ if (!(le32_to_cpu(tcon->fsAttrInfo.Attributes) & FILE_SUPPORTS_REPARSE_POINTS))
++ return ERR_PTR(-EOPNOTSUPP);
++
+ oparms = CIFS_OPARMS(cifs_sb, tcon, full_path,
+ SYNCHRONIZE | DELETE |
+ FILE_READ_ATTRIBUTES |
+diff --git a/fs/smb/client/smb2ops.c b/fs/smb/client/smb2ops.c
+index 7aeac8dd9a1d13..fbb3686292134a 100644
+--- a/fs/smb/client/smb2ops.c
++++ b/fs/smb/client/smb2ops.c
+@@ -464,12 +464,20 @@ smb2_negotiate(const unsigned int xid,
+ server->CurrentMid = 0;
+ spin_unlock(&server->mid_lock);
+ rc = SMB2_negotiate(xid, ses, server);
+- /* BB we probably don't need to retry with modern servers */
+- if (rc == -EAGAIN)
+- rc = -EHOSTDOWN;
+ return rc;
+ }
+
++static inline unsigned int
++prevent_zero_iosize(unsigned int size, const char *type)
++{
++ if (size == 0) {
++ cifs_dbg(VFS, "SMB: Zero %ssize calculated, using minimum value %u\n",
++ type, CIFS_MIN_DEFAULT_IOSIZE);
++ return CIFS_MIN_DEFAULT_IOSIZE;
++ }
++ return size;
++}
++
+ static unsigned int
+ smb2_negotiate_wsize(struct cifs_tcon *tcon, struct smb3_fs_context *ctx)
+ {
+@@ -477,12 +485,12 @@ smb2_negotiate_wsize(struct cifs_tcon *tcon, struct smb3_fs_context *ctx)
+ unsigned int wsize;
+
+ /* start with specified wsize, or default */
+- wsize = ctx->wsize ? ctx->wsize : CIFS_DEFAULT_IOSIZE;
++ wsize = ctx->got_wsize ? ctx->vol_wsize : CIFS_DEFAULT_IOSIZE;
+ wsize = min_t(unsigned int, wsize, server->max_write);
+ if (!(server->capabilities & SMB2_GLOBAL_CAP_LARGE_MTU))
+ wsize = min_t(unsigned int, wsize, SMB2_MAX_BUFFER_SIZE);
+
+- return wsize;
++ return prevent_zero_iosize(wsize, "w");
+ }
+
+ static unsigned int
+@@ -492,7 +500,7 @@ smb3_negotiate_wsize(struct cifs_tcon *tcon, struct smb3_fs_context *ctx)
+ unsigned int wsize;
+
+ /* start with specified wsize, or default */
+- wsize = ctx->wsize ? ctx->wsize : SMB3_DEFAULT_IOSIZE;
++ wsize = ctx->got_wsize ? ctx->vol_wsize : SMB3_DEFAULT_IOSIZE;
+ wsize = min_t(unsigned int, wsize, server->max_write);
+ #ifdef CONFIG_CIFS_SMB_DIRECT
+ if (server->rdma) {
+@@ -514,7 +522,7 @@ smb3_negotiate_wsize(struct cifs_tcon *tcon, struct smb3_fs_context *ctx)
+ if (!(server->capabilities & SMB2_GLOBAL_CAP_LARGE_MTU))
+ wsize = min_t(unsigned int, wsize, SMB2_MAX_BUFFER_SIZE);
+
+- return wsize;
++ return prevent_zero_iosize(wsize, "w");
+ }
+
+ static unsigned int
+@@ -524,13 +532,13 @@ smb2_negotiate_rsize(struct cifs_tcon *tcon, struct smb3_fs_context *ctx)
+ unsigned int rsize;
+
+ /* start with specified rsize, or default */
+- rsize = ctx->rsize ? ctx->rsize : CIFS_DEFAULT_IOSIZE;
++ rsize = ctx->got_rsize ? ctx->vol_rsize : CIFS_DEFAULT_IOSIZE;
+ rsize = min_t(unsigned int, rsize, server->max_read);
+
+ if (!(server->capabilities & SMB2_GLOBAL_CAP_LARGE_MTU))
+ rsize = min_t(unsigned int, rsize, SMB2_MAX_BUFFER_SIZE);
+
+- return rsize;
++ return prevent_zero_iosize(rsize, "r");
+ }
+
+ static unsigned int
+@@ -540,7 +548,7 @@ smb3_negotiate_rsize(struct cifs_tcon *tcon, struct smb3_fs_context *ctx)
+ unsigned int rsize;
+
+ /* start with specified rsize, or default */
+- rsize = ctx->rsize ? ctx->rsize : SMB3_DEFAULT_IOSIZE;
++ rsize = ctx->got_rsize ? ctx->vol_rsize : SMB3_DEFAULT_IOSIZE;
+ rsize = min_t(unsigned int, rsize, server->max_read);
+ #ifdef CONFIG_CIFS_SMB_DIRECT
+ if (server->rdma) {
+@@ -563,7 +571,7 @@ smb3_negotiate_rsize(struct cifs_tcon *tcon, struct smb3_fs_context *ctx)
+ if (!(server->capabilities & SMB2_GLOBAL_CAP_LARGE_MTU))
+ rsize = min_t(unsigned int, rsize, SMB2_MAX_BUFFER_SIZE);
+
+- return rsize;
++ return prevent_zero_iosize(rsize, "r");
+ }
+
+ /*
+@@ -5229,7 +5237,7 @@ static int smb2_make_node(unsigned int xid, struct inode *inode,
+ const char *full_path, umode_t mode, dev_t dev)
+ {
+ struct cifs_sb_info *cifs_sb = CIFS_SB(inode->i_sb);
+- int rc;
++ int rc = -EOPNOTSUPP;
+
+ /*
+ * Check if mounted with mount parm 'sfu' mount parm.
+@@ -5240,7 +5248,7 @@ static int smb2_make_node(unsigned int xid, struct inode *inode,
+ if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_UNX_EMUL) {
+ rc = cifs_sfu_make_node(xid, inode, dentry, tcon,
+ full_path, mode, dev);
+- } else {
++ } else if (le32_to_cpu(tcon->fsAttrInfo.Attributes) & FILE_SUPPORTS_REPARSE_POINTS) {
+ rc = smb2_mknod_reparse(xid, inode, dentry, tcon,
+ full_path, mode, dev);
+ }
+diff --git a/fs/smb/client/smb2pdu.c b/fs/smb/client/smb2pdu.c
+index ed3ffcb80aef0e..d080c777906b41 100644
+--- a/fs/smb/client/smb2pdu.c
++++ b/fs/smb/client/smb2pdu.c
+@@ -3910,12 +3910,10 @@ SMB2_query_acl(const unsigned int xid, struct cifs_tcon *tcon,
+ u64 persistent_fid, u64 volatile_fid,
+ void **data, u32 *plen, u32 extra_info)
+ {
+- __u32 additional_info = OWNER_SECINFO | GROUP_SECINFO | DACL_SECINFO |
+- extra_info;
+ *plen = 0;
+
+ return query_info(xid, tcon, persistent_fid, volatile_fid,
+- 0, SMB2_O_INFO_SECURITY, additional_info,
++ 0, SMB2_O_INFO_SECURITY, extra_info,
+ SMB2_MAX_BUFFER_SIZE, MIN_SEC_DESC_LEN, data, plen);
+ }
+
+diff --git a/fs/smb/client/transport.c b/fs/smb/client/transport.c
+index 0dc80959ce4885..03434dbe9374c7 100644
+--- a/fs/smb/client/transport.c
++++ b/fs/smb/client/transport.c
+@@ -179,7 +179,7 @@ delete_mid(struct mid_q_entry *mid)
+ * Our basic "send data to server" function. Should be called with srv_mutex
+ * held. The caller is responsible for handling the results.
+ */
+-static int
++int
+ smb_send_kvec(struct TCP_Server_Info *server, struct msghdr *smb_msg,
+ size_t *sent)
+ {
+diff --git a/fs/smb/client/xattr.c b/fs/smb/client/xattr.c
+index 58a584f0b27e91..7d49f38f01f3e7 100644
+--- a/fs/smb/client/xattr.c
++++ b/fs/smb/client/xattr.c
+@@ -320,10 +320,17 @@ static int cifs_xattr_get(const struct xattr_handler *handler,
+ if (pTcon->ses->server->ops->get_acl == NULL)
+ goto out; /* rc already EOPNOTSUPP */
+
+- if (handler->flags == XATTR_CIFS_NTSD_FULL) {
+- extra_info = SACL_SECINFO;
+- } else {
+- extra_info = 0;
++ switch (handler->flags) {
++ case XATTR_CIFS_NTSD_FULL:
++ extra_info = OWNER_SECINFO | GROUP_SECINFO | DACL_SECINFO | SACL_SECINFO;
++ break;
++ case XATTR_CIFS_NTSD:
++ extra_info = OWNER_SECINFO | GROUP_SECINFO | DACL_SECINFO;
++ break;
++ case XATTR_CIFS_ACL:
++ default:
++ extra_info = DACL_SECINFO;
++ break;
+ }
+ pacl = pTcon->ses->server->ops->get_acl(cifs_sb,
+ inode, full_path, &acllen, extra_info);
+diff --git a/fs/smb/common/smb2pdu.h b/fs/smb/common/smb2pdu.h
+index 12f0013334057e..f79a5165a7cc6a 100644
+--- a/fs/smb/common/smb2pdu.h
++++ b/fs/smb/common/smb2pdu.h
+@@ -95,6 +95,9 @@
+ */
+ #define SMB3_DEFAULT_IOSIZE (4 * 1024 * 1024)
+
++/* According to the MS-SMB2 specification, the minimum recommended value is 65536. */
++#define CIFS_MIN_DEFAULT_IOSIZE (65536)
++
+ /*
+ * SMB2 Header Definition
+ *
+diff --git a/fs/smb/server/smb2pdu.c b/fs/smb/server/smb2pdu.c
+index c2603c398a4674..f2a2be8467c669 100644
+--- a/fs/smb/server/smb2pdu.c
++++ b/fs/smb/server/smb2pdu.c
+@@ -1450,7 +1450,7 @@ static int ntlm_authenticate(struct ksmbd_work *work,
+ {
+ struct ksmbd_conn *conn = work->conn;
+ struct ksmbd_session *sess = work->sess;
+- struct channel *chann = NULL;
++ struct channel *chann = NULL, *old;
+ struct ksmbd_user *user;
+ u64 prev_id;
+ int sz, rc;
+@@ -1562,7 +1562,12 @@ static int ntlm_authenticate(struct ksmbd_work *work,
+ return -ENOMEM;
+
+ chann->conn = conn;
+- xa_store(&sess->ksmbd_chann_list, (long)conn, chann, KSMBD_DEFAULT_GFP);
++ old = xa_store(&sess->ksmbd_chann_list, (long)conn, chann,
++ KSMBD_DEFAULT_GFP);
++ if (xa_is_err(old)) {
++ kfree(chann);
++ return xa_err(old);
++ }
+ }
+ }
+
+diff --git a/fs/smb/server/vfs.c b/fs/smb/server/vfs.c
+index 648efed5ff7de6..474dc6e122c84f 100644
+--- a/fs/smb/server/vfs.c
++++ b/fs/smb/server/vfs.c
+@@ -426,10 +426,15 @@ static int ksmbd_vfs_stream_write(struct ksmbd_file *fp, char *buf, loff_t *pos,
+ ksmbd_debug(VFS, "write stream data pos : %llu, count : %zd\n",
+ *pos, count);
+
++ if (*pos >= XATTR_SIZE_MAX) {
++ pr_err("stream write position %lld is out of bounds\n", *pos);
++ return -EINVAL;
++ }
++
+ size = *pos + count;
+ if (size > XATTR_SIZE_MAX) {
+ size = XATTR_SIZE_MAX;
+- count = (*pos + count) - XATTR_SIZE_MAX;
++ count = XATTR_SIZE_MAX - *pos;
+ }
+
+ v_len = ksmbd_vfs_getcasexattr(idmap,
+@@ -443,13 +448,6 @@ static int ksmbd_vfs_stream_write(struct ksmbd_file *fp, char *buf, loff_t *pos,
+ goto out;
+ }
+
+- if (v_len <= *pos) {
+- pr_err("stream write position %lld is out of bounds (stream length: %zd)\n",
+- *pos, v_len);
+- err = -EINVAL;
+- goto out;
+- }
+-
+ if (v_len < size) {
+ wbuf = kvzalloc(size, KSMBD_DEFAULT_GFP);
+ if (!wbuf) {
+diff --git a/include/crypto/hash.h b/include/crypto/hash.h
+index 2d5ea9f9ff43eb..6692253f0b5be5 100644
+--- a/include/crypto/hash.h
++++ b/include/crypto/hash.h
+@@ -132,6 +132,7 @@ struct ahash_request {
+ * This is a counterpart to @init_tfm, used to remove
+ * various changes set in @init_tfm.
+ * @clone_tfm: Copy transform into new object, may allocate memory.
++ * @reqsize: Size of the request context.
+ * @halg: see struct hash_alg_common
+ */
+ struct ahash_alg {
+@@ -148,6 +149,8 @@ struct ahash_alg {
+ void (*exit_tfm)(struct crypto_ahash *tfm);
+ int (*clone_tfm)(struct crypto_ahash *dst, struct crypto_ahash *src);
+
++ unsigned int reqsize;
++
+ struct hash_alg_common halg;
+ };
+
+diff --git a/include/drm/drm_atomic.h b/include/drm/drm_atomic.h
+index 31ca88deb10d26..1ded9a8d4e84d7 100644
+--- a/include/drm/drm_atomic.h
++++ b/include/drm/drm_atomic.h
+@@ -376,8 +376,27 @@ struct drm_atomic_state {
+ *
+ * Allow full modeset. This is used by the ATOMIC IOCTL handler to
+ * implement the DRM_MODE_ATOMIC_ALLOW_MODESET flag. Drivers should
+- * never consult this flag, instead looking at the output of
+- * drm_atomic_crtc_needs_modeset().
++ * generally not consult this flag, but instead look at the output of
++ * drm_atomic_crtc_needs_modeset(). The detailed rules are:
++ *
++ * - Drivers must not consult @allow_modeset in the atomic commit path.
++ * Use drm_atomic_crtc_needs_modeset() instead.
++ *
++ * - Drivers must consult @allow_modeset before adding unrelated struct
++ * drm_crtc_state to this commit by calling
++ * drm_atomic_get_crtc_state(). See also the warning in the
++ * documentation for that function.
++ *
++ * - Drivers must never change this flag, it is under the exclusive
++ * control of userspace.
++ *
++ * - Drivers may consult @allow_modeset in the atomic check path, if
++ * they have the choice between an optimal hardware configuration
++ * which requires a modeset, and a less optimal configuration which
++ * can be committed without a modeset. An example would be suboptimal
++ * scanout FIFO allocation resulting in increased idle power
++ * consumption. This allows userspace to avoid flickering and delays
++ * for the normal composition loop at reasonable cost.
+ */
+ bool allow_modeset : 1;
+ /**
+diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
+index fdae947682cd0b..bcd54020d6ba51 100644
+--- a/include/drm/drm_gem.h
++++ b/include/drm/drm_gem.h
+@@ -35,6 +35,7 @@
+ */
+
+ #include <linux/kref.h>
++#include <linux/dma-buf.h>
+ #include <linux/dma-resv.h>
+ #include <linux/list.h>
+ #include <linux/mutex.h>
+@@ -575,6 +576,18 @@ static inline bool drm_gem_object_is_shared_for_memory_stats(struct drm_gem_obje
+ return (obj->handle_count > 1) || obj->dma_buf;
+ }
+
++/**
++ * drm_gem_is_imported() - Tests if GEM object's buffer has been imported
++ * @obj: the GEM object
++ *
++ * Returns:
++ * True if the GEM object's buffer has been imported, false otherwise
++ */
++static inline bool drm_gem_is_imported(const struct drm_gem_object *obj)
++{
++ return !!obj->import_attach;
++}
++
+ #ifdef CONFIG_LOCKDEP
+ /**
+ * drm_gem_gpuva_set_lock() - Set the lock protecting accesses to the gpuva list.
+diff --git a/include/linux/alloc_tag.h b/include/linux/alloc_tag.h
+index a946e0203e6d60..8f7931eb7d164c 100644
+--- a/include/linux/alloc_tag.h
++++ b/include/linux/alloc_tag.h
+@@ -104,6 +104,16 @@ DECLARE_PER_CPU(struct alloc_tag_counters, _shared_alloc_tag);
+
+ #else /* ARCH_NEEDS_WEAK_PER_CPU */
+
++#ifdef MODULE
++
++#define DEFINE_ALLOC_TAG(_alloc_tag) \
++ static struct alloc_tag _alloc_tag __used __aligned(8) \
++ __section(ALLOC_TAG_SECTION_NAME) = { \
++ .ct = CODE_TAG_INIT, \
++ .counters = NULL };
++
++#else /* MODULE */
++
+ #define DEFINE_ALLOC_TAG(_alloc_tag) \
+ static DEFINE_PER_CPU(struct alloc_tag_counters, _alloc_tag_cntr); \
+ static struct alloc_tag _alloc_tag __used __aligned(8) \
+@@ -111,6 +121,8 @@ DECLARE_PER_CPU(struct alloc_tag_counters, _shared_alloc_tag);
+ .ct = CODE_TAG_INIT, \
+ .counters = &_alloc_tag_cntr };
+
++#endif /* MODULE */
++
+ #endif /* ARCH_NEEDS_WEAK_PER_CPU */
+
+ DECLARE_STATIC_KEY_MAYBE(CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT,
+diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
+index 0fec27d6b986c5..844af92bea1e48 100644
+--- a/include/linux/blkdev.h
++++ b/include/linux/blkdev.h
+@@ -1616,6 +1616,7 @@ static inline void bio_end_io_acct(struct bio *bio, unsigned long start_time)
+ return bio_end_io_acct_remapped(bio, start_time, bio->bi_bdev);
+ }
+
++int bdev_validate_blocksize(struct block_device *bdev, int block_size);
+ int set_blocksize(struct file *file, int size);
+
+ int lookup_bdev(const char *pathname, dev_t *dev);
+diff --git a/include/linux/bpf-cgroup.h b/include/linux/bpf-cgroup.h
+index 7fc69083e7450f..9de7adb6829485 100644
+--- a/include/linux/bpf-cgroup.h
++++ b/include/linux/bpf-cgroup.h
+@@ -111,6 +111,7 @@ struct bpf_prog_list {
+ struct bpf_prog *prog;
+ struct bpf_cgroup_link *link;
+ struct bpf_cgroup_storage *storage[MAX_BPF_CGROUP_STORAGE_TYPE];
++ u32 flags;
+ };
+
+ int cgroup_bpf_inherit(struct cgroup *cgrp);
+diff --git a/include/linux/bpf.h b/include/linux/bpf.h
+index f3f50e29d63929..f4df39e8c7357b 100644
+--- a/include/linux/bpf.h
++++ b/include/linux/bpf.h
+@@ -1507,7 +1507,7 @@ struct bpf_prog_aux {
+ u32 max_rdonly_access;
+ u32 max_rdwr_access;
+ struct btf *attach_btf;
+- const struct bpf_ctx_arg_aux *ctx_arg_info;
++ struct bpf_ctx_arg_aux *ctx_arg_info;
+ void __percpu *priv_stack_ptr;
+ struct mutex dst_mutex; /* protects dst_* pointers below, *after* prog becomes visible */
+ struct bpf_prog *dst_prog;
+@@ -1945,6 +1945,9 @@ static inline void bpf_struct_ops_desc_release(struct bpf_struct_ops_desc *st_op
+
+ #endif
+
++int bpf_prog_ctx_arg_info_init(struct bpf_prog *prog,
++ const struct bpf_ctx_arg_aux *info, u32 cnt);
++
+ #if defined(CONFIG_CGROUP_BPF) && defined(CONFIG_BPF_LSM)
+ int bpf_trampoline_link_cgroup_shim(struct bpf_prog *prog,
+ int cgroup_atype);
+@@ -2546,7 +2549,7 @@ struct bpf_iter__bpf_map_elem {
+
+ int bpf_iter_reg_target(const struct bpf_iter_reg *reg_info);
+ void bpf_iter_unreg_target(const struct bpf_iter_reg *reg_info);
+-bool bpf_iter_prog_supported(struct bpf_prog *prog);
++int bpf_iter_prog_supported(struct bpf_prog *prog);
+ const struct bpf_func_proto *
+ bpf_iter_get_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog);
+ int bpf_iter_link_attach(const union bpf_attr *attr, bpfptr_t uattr, struct bpf_prog *prog);
+diff --git a/include/linux/buffer_head.h b/include/linux/buffer_head.h
+index 932139c5d46f5d..ffcd76d9777038 100644
+--- a/include/linux/buffer_head.h
++++ b/include/linux/buffer_head.h
+@@ -223,6 +223,8 @@ void __wait_on_buffer(struct buffer_head *);
+ wait_queue_head_t *bh_waitq_head(struct buffer_head *bh);
+ struct buffer_head *__find_get_block(struct block_device *bdev, sector_t block,
+ unsigned size);
++struct buffer_head *__find_get_block_nonatomic(struct block_device *bdev,
++ sector_t block, unsigned size);
+ struct buffer_head *bdev_getblk(struct block_device *bdev, sector_t block,
+ unsigned size, gfp_t gfp);
+ void __brelse(struct buffer_head *);
+@@ -398,6 +400,12 @@ sb_find_get_block(struct super_block *sb, sector_t block)
+ return __find_get_block(sb->s_bdev, block, sb->s_blocksize);
+ }
+
++static inline struct buffer_head *
++sb_find_get_block_nonatomic(struct super_block *sb, sector_t block)
++{
++ return __find_get_block_nonatomic(sb->s_bdev, block, sb->s_blocksize);
++}
++
+ static inline void
+ map_bh(struct buffer_head *bh, struct super_block *sb, sector_t block)
+ {
+diff --git a/include/linux/codetag.h b/include/linux/codetag.h
+index d14dbd26b37085..0ee4c21c6dbc7c 100644
+--- a/include/linux/codetag.h
++++ b/include/linux/codetag.h
+@@ -36,10 +36,10 @@ union codetag_ref {
+ struct codetag_type_desc {
+ const char *section;
+ size_t tag_size;
+- void (*module_load)(struct codetag_type *cttype,
+- struct codetag_module *cmod);
+- void (*module_unload)(struct codetag_type *cttype,
+- struct codetag_module *cmod);
++ void (*module_load)(struct module *mod,
++ struct codetag *start, struct codetag *end);
++ void (*module_unload)(struct module *mod,
++ struct codetag *start, struct codetag *end);
+ #ifdef CONFIG_MODULES
+ void (*module_replaced)(struct module *mod, struct module *new_mod);
+ bool (*needs_section_mem)(struct module *mod, unsigned long size);
+diff --git a/include/linux/coresight.h b/include/linux/coresight.h
+index 6ddcbb8be51653..d5ca0292550b26 100644
+--- a/include/linux/coresight.h
++++ b/include/linux/coresight.h
+@@ -238,7 +238,7 @@ struct coresight_trace_id_map {
+ DECLARE_BITMAP(used_ids, CORESIGHT_TRACE_IDS_MAX);
+ atomic_t __percpu *cpu_map;
+ atomic_t perf_cs_etm_session_active;
+- spinlock_t lock;
++ raw_spinlock_t lock;
+ };
+
+ /**
+diff --git a/include/linux/device.h b/include/linux/device.h
+index 80a5b32689866c..78ca7fd0e625ab 100644
+--- a/include/linux/device.h
++++ b/include/linux/device.h
+@@ -26,9 +26,9 @@
+ #include <linux/atomic.h>
+ #include <linux/uidgid.h>
+ #include <linux/gfp.h>
+-#include <linux/overflow.h>
+ #include <linux/device/bus.h>
+ #include <linux/device/class.h>
++#include <linux/device/devres.h>
+ #include <linux/device/driver.h>
+ #include <linux/cleanup.h>
+ #include <asm/device.h>
+@@ -281,123 +281,6 @@ int __must_check device_create_bin_file(struct device *dev,
+ void device_remove_bin_file(struct device *dev,
+ const struct bin_attribute *attr);
+
+-/* device resource management */
+-typedef void (*dr_release_t)(struct device *dev, void *res);
+-typedef int (*dr_match_t)(struct device *dev, void *res, void *match_data);
+-
+-void *__devres_alloc_node(dr_release_t release, size_t size, gfp_t gfp,
+- int nid, const char *name) __malloc;
+-#define devres_alloc(release, size, gfp) \
+- __devres_alloc_node(release, size, gfp, NUMA_NO_NODE, #release)
+-#define devres_alloc_node(release, size, gfp, nid) \
+- __devres_alloc_node(release, size, gfp, nid, #release)
+-
+-void devres_for_each_res(struct device *dev, dr_release_t release,
+- dr_match_t match, void *match_data,
+- void (*fn)(struct device *, void *, void *),
+- void *data);
+-void devres_free(void *res);
+-void devres_add(struct device *dev, void *res);
+-void *devres_find(struct device *dev, dr_release_t release,
+- dr_match_t match, void *match_data);
+-void *devres_get(struct device *dev, void *new_res,
+- dr_match_t match, void *match_data);
+-void *devres_remove(struct device *dev, dr_release_t release,
+- dr_match_t match, void *match_data);
+-int devres_destroy(struct device *dev, dr_release_t release,
+- dr_match_t match, void *match_data);
+-int devres_release(struct device *dev, dr_release_t release,
+- dr_match_t match, void *match_data);
+-
+-/* devres group */
+-void * __must_check devres_open_group(struct device *dev, void *id, gfp_t gfp);
+-void devres_close_group(struct device *dev, void *id);
+-void devres_remove_group(struct device *dev, void *id);
+-int devres_release_group(struct device *dev, void *id);
+-
+-/* managed devm_k.alloc/kfree for device drivers */
+-void *devm_kmalloc(struct device *dev, size_t size, gfp_t gfp) __alloc_size(2);
+-void *devm_krealloc(struct device *dev, void *ptr, size_t size,
+- gfp_t gfp) __must_check __realloc_size(3);
+-__printf(3, 0) char *devm_kvasprintf(struct device *dev, gfp_t gfp,
+- const char *fmt, va_list ap) __malloc;
+-__printf(3, 4) char *devm_kasprintf(struct device *dev, gfp_t gfp,
+- const char *fmt, ...) __malloc;
+-static inline void *devm_kzalloc(struct device *dev, size_t size, gfp_t gfp)
+-{
+- return devm_kmalloc(dev, size, gfp | __GFP_ZERO);
+-}
+-static inline void *devm_kmalloc_array(struct device *dev,
+- size_t n, size_t size, gfp_t flags)
+-{
+- size_t bytes;
+-
+- if (unlikely(check_mul_overflow(n, size, &bytes)))
+- return NULL;
+-
+- return devm_kmalloc(dev, bytes, flags);
+-}
+-static inline void *devm_kcalloc(struct device *dev,
+- size_t n, size_t size, gfp_t flags)
+-{
+- return devm_kmalloc_array(dev, n, size, flags | __GFP_ZERO);
+-}
+-static inline __realloc_size(3, 4) void * __must_check
+-devm_krealloc_array(struct device *dev, void *p, size_t new_n, size_t new_size, gfp_t flags)
+-{
+- size_t bytes;
+-
+- if (unlikely(check_mul_overflow(new_n, new_size, &bytes)))
+- return NULL;
+-
+- return devm_krealloc(dev, p, bytes, flags);
+-}
+-
+-void devm_kfree(struct device *dev, const void *p);
+-char *devm_kstrdup(struct device *dev, const char *s, gfp_t gfp) __malloc;
+-const char *devm_kstrdup_const(struct device *dev, const char *s, gfp_t gfp);
+-void *devm_kmemdup(struct device *dev, const void *src, size_t len, gfp_t gfp)
+- __realloc_size(3);
+-
+-unsigned long devm_get_free_pages(struct device *dev,
+- gfp_t gfp_mask, unsigned int order);
+-void devm_free_pages(struct device *dev, unsigned long addr);
+-
+-#ifdef CONFIG_HAS_IOMEM
+-void __iomem *devm_ioremap_resource(struct device *dev,
+- const struct resource *res);
+-void __iomem *devm_ioremap_resource_wc(struct device *dev,
+- const struct resource *res);
+-
+-void __iomem *devm_of_iomap(struct device *dev,
+- struct device_node *node, int index,
+- resource_size_t *size);
+-#else
+-
+-static inline
+-void __iomem *devm_ioremap_resource(struct device *dev,
+- const struct resource *res)
+-{
+- return ERR_PTR(-EINVAL);
+-}
+-
+-static inline
+-void __iomem *devm_ioremap_resource_wc(struct device *dev,
+- const struct resource *res)
+-{
+- return ERR_PTR(-EINVAL);
+-}
+-
+-static inline
+-void __iomem *devm_of_iomap(struct device *dev,
+- struct device_node *node, int index,
+- resource_size_t *size)
+-{
+- return ERR_PTR(-EINVAL);
+-}
+-
+-#endif
+-
+ /* allows to add/remove a custom action to devres stack */
+ int devm_remove_action_nowarn(struct device *dev, void (*action)(void *), void *data);
+
+diff --git a/include/linux/device/devres.h b/include/linux/device/devres.h
+new file mode 100644
+index 00000000000000..9b49f991585086
+--- /dev/null
++++ b/include/linux/device/devres.h
+@@ -0,0 +1,129 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++#ifndef _DEVICE_DEVRES_H_
++#define _DEVICE_DEVRES_H_
++
++#include <linux/err.h>
++#include <linux/gfp_types.h>
++#include <linux/numa.h>
++#include <linux/overflow.h>
++#include <linux/stdarg.h>
++#include <linux/types.h>
++
++struct device;
++struct device_node;
++struct resource;
++
++/* device resource management */
++typedef void (*dr_release_t)(struct device *dev, void *res);
++typedef int (*dr_match_t)(struct device *dev, void *res, void *match_data);
++
++void * __malloc
++__devres_alloc_node(dr_release_t release, size_t size, gfp_t gfp, int nid, const char *name);
++#define devres_alloc(release, size, gfp) \
++ __devres_alloc_node(release, size, gfp, NUMA_NO_NODE, #release)
++#define devres_alloc_node(release, size, gfp, nid) \
++ __devres_alloc_node(release, size, gfp, nid, #release)
++
++void devres_for_each_res(struct device *dev, dr_release_t release,
++ dr_match_t match, void *match_data,
++ void (*fn)(struct device *, void *, void *),
++ void *data);
++void devres_free(void *res);
++void devres_add(struct device *dev, void *res);
++void *devres_find(struct device *dev, dr_release_t release, dr_match_t match, void *match_data);
++void *devres_get(struct device *dev, void *new_res, dr_match_t match, void *match_data);
++void *devres_remove(struct device *dev, dr_release_t release, dr_match_t match, void *match_data);
++int devres_destroy(struct device *dev, dr_release_t release, dr_match_t match, void *match_data);
++int devres_release(struct device *dev, dr_release_t release, dr_match_t match, void *match_data);
++
++/* devres group */
++void * __must_check devres_open_group(struct device *dev, void *id, gfp_t gfp);
++void devres_close_group(struct device *dev, void *id);
++void devres_remove_group(struct device *dev, void *id);
++int devres_release_group(struct device *dev, void *id);
++
++/* managed devm_k.alloc/kfree for device drivers */
++void * __alloc_size(2)
++devm_kmalloc(struct device *dev, size_t size, gfp_t gfp);
++void * __must_check __realloc_size(3)
++devm_krealloc(struct device *dev, void *ptr, size_t size, gfp_t gfp);
++static inline void *devm_kzalloc(struct device *dev, size_t size, gfp_t gfp)
++{
++ return devm_kmalloc(dev, size, gfp | __GFP_ZERO);
++}
++static inline void *devm_kmalloc_array(struct device *dev, size_t n, size_t size, gfp_t flags)
++{
++ size_t bytes;
++
++ if (unlikely(check_mul_overflow(n, size, &bytes)))
++ return NULL;
++
++ return devm_kmalloc(dev, bytes, flags);
++}
++static inline void *devm_kcalloc(struct device *dev, size_t n, size_t size, gfp_t flags)
++{
++ return devm_kmalloc_array(dev, n, size, flags | __GFP_ZERO);
++}
++static inline __realloc_size(3, 4) void * __must_check
++devm_krealloc_array(struct device *dev, void *p, size_t new_n, size_t new_size, gfp_t flags)
++{
++ size_t bytes;
++
++ if (unlikely(check_mul_overflow(new_n, new_size, &bytes)))
++ return NULL;
++
++ return devm_krealloc(dev, p, bytes, flags);
++}
++
++void devm_kfree(struct device *dev, const void *p);
++
++void * __realloc_size(3)
++devm_kmemdup(struct device *dev, const void *src, size_t len, gfp_t gfp);
++static inline void *devm_kmemdup_array(struct device *dev, const void *src,
++ size_t n, size_t size, gfp_t flags)
++{
++ return devm_kmemdup(dev, src, size_mul(size, n), flags);
++}
++
++char * __malloc
++devm_kstrdup(struct device *dev, const char *s, gfp_t gfp);
++const char *devm_kstrdup_const(struct device *dev, const char *s, gfp_t gfp);
++char * __printf(3, 0) __malloc
++devm_kvasprintf(struct device *dev, gfp_t gfp, const char *fmt, va_list ap);
++char * __printf(3, 4) __malloc
++devm_kasprintf(struct device *dev, gfp_t gfp, const char *fmt, ...);
++
++unsigned long devm_get_free_pages(struct device *dev, gfp_t gfp_mask, unsigned int order);
++void devm_free_pages(struct device *dev, unsigned long addr);
++
++#ifdef CONFIG_HAS_IOMEM
++
++void __iomem *devm_ioremap_resource(struct device *dev, const struct resource *res);
++void __iomem *devm_ioremap_resource_wc(struct device *dev, const struct resource *res);
++
++void __iomem *devm_of_iomap(struct device *dev, struct device_node *node, int index,
++ resource_size_t *size);
++#else
++
++static inline
++void __iomem *devm_ioremap_resource(struct device *dev, const struct resource *res)
++{
++ return IOMEM_ERR_PTR(-EINVAL);
++}
++
++static inline
++void __iomem *devm_ioremap_resource_wc(struct device *dev, const struct resource *res)
++{
++ return IOMEM_ERR_PTR(-EINVAL);
++}
++
++static inline
++void __iomem *devm_of_iomap(struct device *dev, struct device_node *node, int index,
++ resource_size_t *size)
++{
++ return IOMEM_ERR_PTR(-EINVAL);
++}
++
++#endif
++
++#endif /* _DEVICE_DEVRES_H_ */
+diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
+index b79925b1c4333c..85ab710ec0e724 100644
+--- a/include/linux/dma-mapping.h
++++ b/include/linux/dma-mapping.h
+@@ -629,10 +629,14 @@ static inline int dma_mmap_wc(struct device *dev,
+ #else
+ #define DEFINE_DMA_UNMAP_ADDR(ADDR_NAME)
+ #define DEFINE_DMA_UNMAP_LEN(LEN_NAME)
+-#define dma_unmap_addr(PTR, ADDR_NAME) (0)
+-#define dma_unmap_addr_set(PTR, ADDR_NAME, VAL) do { } while (0)
+-#define dma_unmap_len(PTR, LEN_NAME) (0)
+-#define dma_unmap_len_set(PTR, LEN_NAME, VAL) do { } while (0)
++#define dma_unmap_addr(PTR, ADDR_NAME) \
++ ({ typeof(PTR) __p __maybe_unused = PTR; 0; })
++#define dma_unmap_addr_set(PTR, ADDR_NAME, VAL) \
++ do { typeof(PTR) __p __maybe_unused = PTR; } while (0)
++#define dma_unmap_len(PTR, LEN_NAME) \
++ ({ typeof(PTR) __p __maybe_unused = PTR; 0; })
++#define dma_unmap_len_set(PTR, LEN_NAME, VAL) \
++ do { typeof(PTR) __p __maybe_unused = PTR; } while (0)
+ #endif
+
+ #endif /* _LINUX_DMA_MAPPING_H */
+diff --git a/include/linux/dma/k3-udma-glue.h b/include/linux/dma/k3-udma-glue.h
+index 2dea217629d0a3..5d43881e6fb77d 100644
+--- a/include/linux/dma/k3-udma-glue.h
++++ b/include/linux/dma/k3-udma-glue.h
+@@ -138,8 +138,7 @@ int k3_udma_glue_rx_get_irq(struct k3_udma_glue_rx_channel *rx_chn,
+ u32 flow_num);
+ void k3_udma_glue_reset_rx_chn(struct k3_udma_glue_rx_channel *rx_chn,
+ u32 flow_num, void *data,
+- void (*cleanup)(void *data, dma_addr_t desc_dma),
+- bool skip_fdq);
++ void (*cleanup)(void *data, dma_addr_t desc_dma));
+ int k3_udma_glue_rx_flow_enable(struct k3_udma_glue_rx_channel *rx_chn,
+ u32 flow_idx);
+ int k3_udma_glue_rx_flow_disable(struct k3_udma_glue_rx_channel *rx_chn,
+diff --git a/include/linux/err.h b/include/linux/err.h
+index a4dacd745fcf41..1d60aa86db53b5 100644
+--- a/include/linux/err.h
++++ b/include/linux/err.h
+@@ -44,6 +44,9 @@ static inline void * __must_check ERR_PTR(long error)
+ /* Return the pointer in the percpu address space. */
+ #define ERR_PTR_PCPU(error) ((void __percpu *)(unsigned long)ERR_PTR(error))
+
++/* Cast an error pointer to __iomem. */
++#define IOMEM_ERR_PTR(error) (__force void __iomem *)ERR_PTR(error)
++
+ /**
+ * PTR_ERR - Extract the error code from an error pointer.
+ * @ptr: An error pointer.
+diff --git a/include/linux/gpio/driver.h b/include/linux/gpio/driver.h
+index 2dd7cb9cc270a6..5ce6b2167f808b 100644
+--- a/include/linux/gpio/driver.h
++++ b/include/linux/gpio/driver.h
+@@ -347,7 +347,8 @@ struct gpio_irq_chip {
+ * @set: assigns output value for signal "offset"
+ * @set_multiple: assigns output values for multiple signals defined by "mask"
+ * @set_config: optional hook for all kinds of settings. Uses the same
+- * packed config format as generic pinconf.
++ * packed config format as generic pinconf. Must return 0 on success and
++ * a negative error number on failure.
+ * @to_irq: optional hook supporting non-static gpiod_to_irq() mappings;
+ * implementation may not sleep
+ * @dbg_show: optional routine to show contents in debugfs; default code
+diff --git a/include/linux/highmem.h b/include/linux/highmem.h
+index 5c6bea81a90ecf..c698f8415675ef 100644
+--- a/include/linux/highmem.h
++++ b/include/linux/highmem.h
+@@ -461,7 +461,7 @@ static inline void memcpy_from_folio(char *to, struct folio *folio,
+ const char *from = kmap_local_folio(folio, offset);
+ size_t chunk = len;
+
+- if (folio_test_highmem(folio) &&
++ if (folio_test_partial_kmap(folio) &&
+ chunk > PAGE_SIZE - offset_in_page(offset))
+ chunk = PAGE_SIZE - offset_in_page(offset);
+ memcpy(to, from, chunk);
+@@ -489,7 +489,7 @@ static inline void memcpy_to_folio(struct folio *folio, size_t offset,
+ char *to = kmap_local_folio(folio, offset);
+ size_t chunk = len;
+
+- if (folio_test_highmem(folio) &&
++ if (folio_test_partial_kmap(folio) &&
+ chunk > PAGE_SIZE - offset_in_page(offset))
+ chunk = PAGE_SIZE - offset_in_page(offset);
+ memcpy(to, from, chunk);
+@@ -522,7 +522,7 @@ static inline __must_check void *folio_zero_tail(struct folio *folio,
+ {
+ size_t len = folio_size(folio) - offset;
+
+- if (folio_test_highmem(folio)) {
++ if (folio_test_partial_kmap(folio)) {
+ size_t max = PAGE_SIZE - offset_in_page(offset);
+
+ while (len > max) {
+@@ -560,7 +560,7 @@ static inline void folio_fill_tail(struct folio *folio, size_t offset,
+
+ VM_BUG_ON(offset + len > folio_size(folio));
+
+- if (folio_test_highmem(folio)) {
++ if (folio_test_partial_kmap(folio)) {
+ size_t max = PAGE_SIZE - offset_in_page(offset);
+
+ while (len > max) {
+@@ -597,7 +597,7 @@ static inline size_t memcpy_from_file_folio(char *to, struct folio *folio,
+ size_t offset = offset_in_folio(folio, pos);
+ char *from = kmap_local_folio(folio, offset);
+
+- if (folio_test_highmem(folio)) {
++ if (folio_test_partial_kmap(folio)) {
+ offset = offset_in_page(offset);
+ len = min_t(size_t, len, PAGE_SIZE - offset);
+ } else
+diff --git a/include/linux/io.h b/include/linux/io.h
+index 59ec5eea696c4f..40cb2de73f5ece 100644
+--- a/include/linux/io.h
++++ b/include/linux/io.h
+@@ -65,8 +65,6 @@ static inline void devm_ioport_unmap(struct device *dev, void __iomem *addr)
+ }
+ #endif
+
+-#define IOMEM_ERR_PTR(err) (__force void __iomem *)ERR_PTR(err)
+-
+ void __iomem *devm_ioremap(struct device *dev, resource_size_t offset,
+ resource_size_t size);
+ void __iomem *devm_ioremap_uc(struct device *dev, resource_size_t offset,
+diff --git a/include/linux/ipv6.h b/include/linux/ipv6.h
+index a6e2aadbb91bd3..5aeeed22f35bfc 100644
+--- a/include/linux/ipv6.h
++++ b/include/linux/ipv6.h
+@@ -207,6 +207,7 @@ struct inet6_cork {
+ struct ipv6_txoptions *opt;
+ u8 hop_limit;
+ u8 tclass;
++ u8 dontfrag:1;
+ };
+
+ /* struct ipv6_pinfo - ipv6 private area */
+diff --git a/include/linux/jbd2.h b/include/linux/jbd2.h
+index 561025b4f3d91d..469c4a191ced43 100644
+--- a/include/linux/jbd2.h
++++ b/include/linux/jbd2.h
+@@ -1627,6 +1627,8 @@ extern void jbd2_journal_destroy_revoke_record_cache(void);
+ extern void jbd2_journal_destroy_revoke_table_cache(void);
+ extern int __init jbd2_journal_init_revoke_record_cache(void);
+ extern int __init jbd2_journal_init_revoke_table_cache(void);
++struct jbd2_revoke_table_s *jbd2_journal_init_revoke_table(int hash_size);
++void jbd2_journal_destroy_revoke_table(struct jbd2_revoke_table_s *table);
+
+ extern void jbd2_journal_destroy_revoke(journal_t *);
+ extern int jbd2_journal_revoke (handle_t *, unsigned long long, struct buffer_head *);
+diff --git a/include/linux/lzo.h b/include/linux/lzo.h
+index e95c7d1092b286..4d30e3624acd23 100644
+--- a/include/linux/lzo.h
++++ b/include/linux/lzo.h
+@@ -24,10 +24,18 @@
+ int lzo1x_1_compress(const unsigned char *src, size_t src_len,
+ unsigned char *dst, size_t *dst_len, void *wrkmem);
+
++/* Same as above but does not write more than dst_len to dst. */
++int lzo1x_1_compress_safe(const unsigned char *src, size_t src_len,
++ unsigned char *dst, size_t *dst_len, void *wrkmem);
++
+ /* This requires 'wrkmem' of size LZO1X_1_MEM_COMPRESS */
+ int lzorle1x_1_compress(const unsigned char *src, size_t src_len,
+ unsigned char *dst, size_t *dst_len, void *wrkmem);
+
++/* Same as above but does not write more than dst_len to dst. */
++int lzorle1x_1_compress_safe(const unsigned char *src, size_t src_len,
++ unsigned char *dst, size_t *dst_len, void *wrkmem);
++
+ /* safe decompression with overrun testing */
+ int lzo1x_decompress_safe(const unsigned char *src, size_t src_len,
+ unsigned char *dst, size_t *dst_len);
+diff --git a/include/linux/mfd/axp20x.h b/include/linux/mfd/axp20x.h
+index c3df0e615fbf45..3c5aecf1d4b5be 100644
+--- a/include/linux/mfd/axp20x.h
++++ b/include/linux/mfd/axp20x.h
+@@ -137,6 +137,7 @@ enum axp20x_variants {
+ #define AXP717_IRQ2_STATE 0x4a
+ #define AXP717_IRQ3_STATE 0x4b
+ #define AXP717_IRQ4_STATE 0x4c
++#define AXP717_TS_PIN_CFG 0x50
+ #define AXP717_ICC_CHG_SET 0x62
+ #define AXP717_ITERM_CHG_SET 0x63
+ #define AXP717_CV_CHG_SET 0x64
+diff --git a/include/linux/mlx4/device.h b/include/linux/mlx4/device.h
+index 27f42f713c891c..86f0f2a25a3d63 100644
+--- a/include/linux/mlx4/device.h
++++ b/include/linux/mlx4/device.h
+@@ -1135,7 +1135,7 @@ int mlx4_write_mtt(struct mlx4_dev *dev, struct mlx4_mtt *mtt,
+ int mlx4_buf_write_mtt(struct mlx4_dev *dev, struct mlx4_mtt *mtt,
+ struct mlx4_buf *buf);
+
+-int mlx4_db_alloc(struct mlx4_dev *dev, struct mlx4_db *db, int order);
++int mlx4_db_alloc(struct mlx4_dev *dev, struct mlx4_db *db, unsigned int order);
+ void mlx4_db_free(struct mlx4_dev *dev, struct mlx4_db *db);
+
+ int mlx4_alloc_hwq_res(struct mlx4_dev *dev, struct mlx4_hwq_resources *wqres,
+diff --git a/include/linux/mlx5/eswitch.h b/include/linux/mlx5/eswitch.h
+index df73a2ccc9af3d..67256e776566c6 100644
+--- a/include/linux/mlx5/eswitch.h
++++ b/include/linux/mlx5/eswitch.h
+@@ -147,6 +147,8 @@ u32 mlx5_eswitch_get_vport_metadata_for_set(struct mlx5_eswitch *esw,
+
+ /* reuse tun_opts for the mapped ipsec obj id when tun_id is 0 (invalid) */
+ #define ESW_IPSEC_RX_MAPPED_ID_MASK GENMASK(ESW_TUN_OPTS_BITS - 1, 0)
++#define ESW_IPSEC_RX_MAPPED_ID_MATCH_MASK \
++ GENMASK(31 - ESW_RESERVED_BITS, ESW_ZONE_ID_BITS)
+
+ u8 mlx5_eswitch_mode(const struct mlx5_core_dev *dev);
+ u16 mlx5_eswitch_get_total_vports(const struct mlx5_core_dev *dev);
+diff --git a/include/linux/mlx5/fs.h b/include/linux/mlx5/fs.h
+index 2a69d9d71276d6..01cb72d68c231f 100644
+--- a/include/linux/mlx5/fs.h
++++ b/include/linux/mlx5/fs.h
+@@ -40,6 +40,8 @@
+
+ #define MLX5_SET_CFG(p, f, v) MLX5_SET(create_flow_group_in, p, f, v)
+
++#define MLX5_FS_MAX_POOL_SIZE BIT(30)
++
+ enum mlx5_flow_destination_type {
+ MLX5_FLOW_DESTINATION_TYPE_NONE,
+ MLX5_FLOW_DESTINATION_TYPE_VPORT,
+diff --git a/include/linux/mm.h b/include/linux/mm.h
+index 1f80baddacc59b..a5cb65ad3bf829 100644
+--- a/include/linux/mm.h
++++ b/include/linux/mm.h
+@@ -411,7 +411,7 @@ extern unsigned int kobjsize(const void *objp);
+ #endif
+
+ #ifdef CONFIG_HAVE_ARCH_USERFAULTFD_MINOR
+-# define VM_UFFD_MINOR_BIT 38
++# define VM_UFFD_MINOR_BIT 41
+ # define VM_UFFD_MINOR BIT(VM_UFFD_MINOR_BIT) /* UFFD minor faults */
+ #else /* !CONFIG_HAVE_ARCH_USERFAULTFD_MINOR */
+ # define VM_UFFD_MINOR VM_NONE
+diff --git a/include/linux/mman.h b/include/linux/mman.h
+index a842783ffa62bd..03a91024622258 100644
+--- a/include/linux/mman.h
++++ b/include/linux/mman.h
+@@ -157,7 +157,9 @@ calc_vm_flag_bits(struct file *file, unsigned long flags)
+ return _calc_vm_trans(flags, MAP_GROWSDOWN, VM_GROWSDOWN ) |
+ _calc_vm_trans(flags, MAP_LOCKED, VM_LOCKED ) |
+ _calc_vm_trans(flags, MAP_SYNC, VM_SYNC ) |
++#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+ _calc_vm_trans(flags, MAP_STACK, VM_NOHUGEPAGE) |
++#endif
+ arch_calc_vm_flag_bits(file, flags);
+ }
+
+diff --git a/include/linux/mroute_base.h b/include/linux/mroute_base.h
+index 58a2401e4b551b..0075f6e5c3da9d 100644
+--- a/include/linux/mroute_base.h
++++ b/include/linux/mroute_base.h
+@@ -262,6 +262,11 @@ struct mr_table {
+ int mroute_reg_vif_num;
+ };
+
++static inline bool mr_can_free_table(struct net *net)
++{
++ return !check_net(net) || !net_initialized(net);
++}
++
+ #ifdef CONFIG_IP_MROUTE_COMMON
+ void vif_device_init(struct vif_device *v,
+ struct net_device *dev,
+diff --git a/include/linux/msi.h b/include/linux/msi.h
+index 59a421fc42bf07..63d0e51f7a8015 100644
+--- a/include/linux/msi.h
++++ b/include/linux/msi.h
+@@ -165,6 +165,10 @@ struct msi_desc_data {
+ * @dev: Pointer to the device which uses this descriptor
+ * @msg: The last set MSI message cached for reuse
+ * @affinity: Optional pointer to a cpu affinity mask for this descriptor
++ * @iommu_msi_iova: Optional shifted IOVA from the IOMMU to override the msi_addr.
++ * Only used if iommu_msi_shift != 0
++ * @iommu_msi_shift: Indicates how many bits of the original address should be
++ * preserved when using iommu_msi_iova.
+ * @sysfs_attr: Pointer to sysfs device attribute
+ *
+ * @write_msi_msg: Callback that may be called when the MSI message
+@@ -183,7 +187,8 @@ struct msi_desc {
+ struct msi_msg msg;
+ struct irq_affinity_desc *affinity;
+ #ifdef CONFIG_IRQ_MSI_IOMMU
+- const void *iommu_cookie;
++ u64 iommu_msi_iova : 58;
++ u64 iommu_msi_shift : 6;
+ #endif
+ #ifdef CONFIG_SYSFS
+ struct device_attribute *sysfs_attrs;
+@@ -284,28 +289,14 @@ struct msi_desc *msi_next_desc(struct device *dev, unsigned int domid,
+
+ #define msi_desc_to_dev(desc) ((desc)->dev)
+
+-#ifdef CONFIG_IRQ_MSI_IOMMU
+-static inline const void *msi_desc_get_iommu_cookie(struct msi_desc *desc)
+-{
+- return desc->iommu_cookie;
+-}
+-
+-static inline void msi_desc_set_iommu_cookie(struct msi_desc *desc,
+- const void *iommu_cookie)
+-{
+- desc->iommu_cookie = iommu_cookie;
+-}
+-#else
+-static inline const void *msi_desc_get_iommu_cookie(struct msi_desc *desc)
++static inline void msi_desc_set_iommu_msi_iova(struct msi_desc *desc, u64 msi_iova,
++ unsigned int msi_shift)
+ {
+- return NULL;
+-}
+-
+-static inline void msi_desc_set_iommu_cookie(struct msi_desc *desc,
+- const void *iommu_cookie)
+-{
+-}
++#ifdef CONFIG_IRQ_MSI_IOMMU
++ desc->iommu_msi_iova = msi_iova >> msi_shift;
++ desc->iommu_msi_shift = msi_shift;
+ #endif
++}
+
+ int msi_domain_insert_msi_desc(struct device *dev, unsigned int domid,
+ struct msi_desc *init_desc);
+diff --git a/include/linux/objtool.h b/include/linux/objtool.h
+index c722a921165ba3..3ca965a2ddc808 100644
+--- a/include/linux/objtool.h
++++ b/include/linux/objtool.h
+@@ -128,7 +128,7 @@
+ #define UNWIND_HINT(type, sp_reg, sp_offset, signal) "\n\t"
+ #define STACK_FRAME_NON_STANDARD(func)
+ #define STACK_FRAME_NON_STANDARD_FP(func)
+-#define __ASM_ANNOTATE(label, type)
++#define __ASM_ANNOTATE(label, type) ""
+ #define ASM_ANNOTATE(type)
+ #else
+ .macro UNWIND_HINT type:req sp_reg=0 sp_offset=0 signal=0
+@@ -147,6 +147,8 @@
+ * these relocations will never be used for indirect calls.
+ */
+ #define ANNOTATE_NOENDBR ASM_ANNOTATE(ANNOTYPE_NOENDBR)
++#define ANNOTATE_NOENDBR_SYM(sym) asm(__ASM_ANNOTATE(sym, ANNOTYPE_NOENDBR))
++
+ /*
+ * This should be used immediately before an indirect jump/call. It tells
+ * objtool the subsequent indirect jump/call is vouched safe for retpoline
+diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
+index be2f0017a66739..cd8278a8390476 100644
+--- a/include/linux/page-flags.h
++++ b/include/linux/page-flags.h
+@@ -578,6 +578,13 @@ FOLIO_FLAG(dropbehind, FOLIO_HEAD_PAGE)
+ PAGEFLAG_FALSE(HighMem, highmem)
+ #endif
+
++/* Does kmap_local_folio() only allow access to one page of the folio? */
++#ifdef CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP
++#define folio_test_partial_kmap(f) true
++#else
++#define folio_test_partial_kmap(f) folio_test_highmem(f)
++#endif
++
+ #ifdef CONFIG_SWAP
+ static __always_inline bool folio_test_swapcache(const struct folio *folio)
+ {
+diff --git a/include/linux/pci-ats.h b/include/linux/pci-ats.h
+index 0e8b74e63767a6..75c6c86cf09dcb 100644
+--- a/include/linux/pci-ats.h
++++ b/include/linux/pci-ats.h
+@@ -42,6 +42,7 @@ int pci_enable_pasid(struct pci_dev *pdev, int features);
+ void pci_disable_pasid(struct pci_dev *pdev);
+ int pci_pasid_features(struct pci_dev *pdev);
+ int pci_max_pasids(struct pci_dev *pdev);
++int pci_pasid_status(struct pci_dev *pdev);
+ #else /* CONFIG_PCI_PASID */
+ static inline int pci_enable_pasid(struct pci_dev *pdev, int features)
+ { return -EINVAL; }
+@@ -50,6 +51,8 @@ static inline int pci_pasid_features(struct pci_dev *pdev)
+ { return -EINVAL; }
+ static inline int pci_max_pasids(struct pci_dev *pdev)
+ { return -EINVAL; }
++static inline int pci_pasid_status(struct pci_dev *pdev)
++{ return -EINVAL; }
+ #endif /* CONFIG_PCI_PASID */
+
+ #endif /* LINUX_PCI_ATS_H */
+diff --git a/include/linux/percpu.h b/include/linux/percpu.h
+index 52b5ea663b9f09..85bf8dd9f08740 100644
+--- a/include/linux/percpu.h
++++ b/include/linux/percpu.h
+@@ -15,11 +15,7 @@
+
+ /* enough to cover all DEFINE_PER_CPUs in modules */
+ #ifdef CONFIG_MODULES
+-#ifdef CONFIG_MEM_ALLOC_PROFILING
+-#define PERCPU_MODULE_RESERVE (8 << 13)
+-#else
+ #define PERCPU_MODULE_RESERVE (8 << 10)
+-#endif
+ #else
+ #define PERCPU_MODULE_RESERVE 0
+ #endif
+diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
+index 93ea9c6672f0e1..3d5814c6d251c4 100644
+--- a/include/linux/perf_event.h
++++ b/include/linux/perf_event.h
+@@ -1098,7 +1098,13 @@ struct perf_output_handle {
+ struct perf_buffer *rb;
+ unsigned long wakeup;
+ unsigned long size;
+- u64 aux_flags;
++ union {
++ u64 flags; /* perf_output*() */
++ u64 aux_flags; /* perf_aux_output*() */
++ struct {
++ u64 skip_read : 1;
++ };
++ };
+ union {
+ void *addr;
+ unsigned long head;
+diff --git a/include/linux/pnp.h b/include/linux/pnp.h
+index b7a7158aaf65e3..23fe3eaf242d63 100644
+--- a/include/linux/pnp.h
++++ b/include/linux/pnp.h
+@@ -290,7 +290,7 @@ static inline void pnp_set_drvdata(struct pnp_dev *pdev, void *data)
+ }
+
+ struct pnp_fixup {
+- char id[7];
++ char id[8];
+ void (*quirk_function) (struct pnp_dev *dev); /* fixup function */
+ };
+
+diff --git a/include/linux/pps_gen_kernel.h b/include/linux/pps_gen_kernel.h
+index 022ea0ac44402a..6214c8aa2e0208 100644
+--- a/include/linux/pps_gen_kernel.h
++++ b/include/linux/pps_gen_kernel.h
+@@ -43,7 +43,7 @@ struct pps_gen_source_info {
+
+ /* The main struct */
+ struct pps_gen_device {
+- struct pps_gen_source_info info; /* PSS generator info */
++ const struct pps_gen_source_info *info; /* PSS generator info */
+ bool enabled; /* PSS generator status */
+
+ unsigned int event;
+@@ -70,7 +70,7 @@ extern const struct attribute_group *pps_gen_groups[];
+ */
+
+ extern struct pps_gen_device *pps_gen_register_source(
+- struct pps_gen_source_info *info);
++ const struct pps_gen_source_info *info);
+ extern void pps_gen_unregister_source(struct pps_gen_device *pps_gen);
+ extern void pps_gen_event(struct pps_gen_device *pps_gen,
+ unsigned int event, void *data);
+diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
+index bd69ddc102fbc5..0844ab3288519a 100644
+--- a/include/linux/rcupdate.h
++++ b/include/linux/rcupdate.h
+@@ -95,9 +95,9 @@ static inline void __rcu_read_lock(void)
+
+ static inline void __rcu_read_unlock(void)
+ {
+- preempt_enable();
+ if (IS_ENABLED(CONFIG_RCU_STRICT_GRACE_PERIOD))
+ rcu_read_unlock_strict();
++ preempt_enable();
+ }
+
+ static inline int rcu_preempt_depth(void)
+diff --git a/include/linux/rcutree.h b/include/linux/rcutree.h
+index 27d86d9127817e..aad586f15ed0cd 100644
+--- a/include/linux/rcutree.h
++++ b/include/linux/rcutree.h
+@@ -103,7 +103,7 @@ extern int rcu_scheduler_active;
+ void rcu_end_inkernel_boot(void);
+ bool rcu_inkernel_boot_has_ended(void);
+ bool rcu_is_watching(void);
+-#ifndef CONFIG_PREEMPTION
++#ifndef CONFIG_PREEMPT_RCU
+ void rcu_all_qs(void);
+ #endif
+
+diff --git a/include/linux/ring_buffer.h b/include/linux/ring_buffer.h
+index 17fbb78552952d..8de035f4f0d9a2 100644
+--- a/include/linux/ring_buffer.h
++++ b/include/linux/ring_buffer.h
+@@ -94,8 +94,7 @@ struct trace_buffer *__ring_buffer_alloc_range(unsigned long size, unsigned flag
+ unsigned long range_size,
+ struct lock_class_key *key);
+
+-bool ring_buffer_last_boot_delta(struct trace_buffer *buffer, long *text,
+- long *data);
++bool ring_buffer_last_boot_delta(struct trace_buffer *buffer, unsigned long *kaslr_addr);
+
+ /*
+ * Because the ring buffer is generic, if other users of the ring buffer get
+diff --git a/include/linux/spi/spi.h b/include/linux/spi/spi.h
+index 8497f4747e24d4..660725dfd6fb61 100644
+--- a/include/linux/spi/spi.h
++++ b/include/linux/spi/spi.h
+@@ -247,10 +247,7 @@ struct spi_device {
+ static_assert((SPI_MODE_KERNEL_MASK & SPI_MODE_USER_MASK) == 0,
+ "SPI_MODE_USER_MASK & SPI_MODE_KERNEL_MASK must not overlap");
+
+-static inline struct spi_device *to_spi_device(const struct device *dev)
+-{
+- return dev ? container_of(dev, struct spi_device, dev) : NULL;
+-}
++#define to_spi_device(__dev) container_of_const(__dev, struct spi_device, dev)
+
+ /* Most drivers won't need to care about device refcounting */
+ static inline struct spi_device *spi_dev_get(struct spi_device *spi)
+diff --git a/include/linux/tcp.h b/include/linux/tcp.h
+index f88daaa76d8366..159b2c59eb6271 100644
+--- a/include/linux/tcp.h
++++ b/include/linux/tcp.h
+@@ -160,6 +160,8 @@ struct tcp_request_sock {
+ u32 rcv_isn;
+ u32 snt_isn;
+ u32 ts_off;
++ u32 snt_tsval_first;
++ u32 snt_tsval_last;
+ u32 last_oow_ack_time; /* last SYNACK */
+ u32 rcv_nxt; /* the ack # by SYNACK. For
+ * FastOpen it's the seq#
+diff --git a/include/linux/trace.h b/include/linux/trace.h
+index fdcd76b7be83d7..7eaad857dee04f 100644
+--- a/include/linux/trace.h
++++ b/include/linux/trace.h
+@@ -72,8 +72,8 @@ static inline int unregister_ftrace_export(struct trace_export *export)
+ static inline void trace_printk_init_buffers(void)
+ {
+ }
+-static inline int trace_array_printk(struct trace_array *tr, unsigned long ip,
+- const char *fmt, ...)
++static inline __printf(3, 4)
++int trace_array_printk(struct trace_array *tr, unsigned long ip, const char *fmt, ...)
+ {
+ return 0;
+ }
+diff --git a/include/linux/trace_seq.h b/include/linux/trace_seq.h
+index 1ef95c0287f05d..a93ed5ac322656 100644
+--- a/include/linux/trace_seq.h
++++ b/include/linux/trace_seq.h
+@@ -88,8 +88,8 @@ extern __printf(2, 3)
+ void trace_seq_printf(struct trace_seq *s, const char *fmt, ...);
+ extern __printf(2, 0)
+ void trace_seq_vprintf(struct trace_seq *s, const char *fmt, va_list args);
+-extern void
+-trace_seq_bprintf(struct trace_seq *s, const char *fmt, const u32 *binary);
++extern __printf(2, 0)
++void trace_seq_bprintf(struct trace_seq *s, const char *fmt, const u32 *binary);
+ extern int trace_print_seq(struct seq_file *m, struct trace_seq *s);
+ extern int trace_seq_to_user(struct trace_seq *s, char __user *ubuf,
+ int cnt);
+@@ -113,8 +113,8 @@ static inline __printf(2, 3)
+ void trace_seq_printf(struct trace_seq *s, const char *fmt, ...)
+ {
+ }
+-static inline void
+-trace_seq_bprintf(struct trace_seq *s, const char *fmt, const u32 *binary)
++static inline __printf(2, 0)
++void trace_seq_bprintf(struct trace_seq *s, const char *fmt, const u32 *binary)
+ {
+ }
+
+diff --git a/include/linux/usb/r8152.h b/include/linux/usb/r8152.h
+index 33a4c146dc19c4..2ca60828f28bb6 100644
+--- a/include/linux/usb/r8152.h
++++ b/include/linux/usb/r8152.h
+@@ -30,6 +30,7 @@
+ #define VENDOR_ID_NVIDIA 0x0955
+ #define VENDOR_ID_TPLINK 0x2357
+ #define VENDOR_ID_DLINK 0x2001
++#define VENDOR_ID_DELL 0x413c
+ #define VENDOR_ID_ASUS 0x0b05
+
+ #if IS_REACHABLE(CONFIG_USB_RTL8152)
+diff --git a/include/linux/virtio.h b/include/linux/virtio.h
+index 4d16c13d0df580..64cb4b04be7add 100644
+--- a/include/linux/virtio.h
++++ b/include/linux/virtio.h
+@@ -220,6 +220,8 @@ size_t virtio_max_dma_size(const struct virtio_device *vdev);
+ * occurs.
+ * @reset_done: optional function to call after transport specific reset
+ * operation has finished.
++ * @shutdown: synchronize with the device on shutdown. If provided, replaces
++ * the virtio core implementation.
+ */
+ struct virtio_driver {
+ struct device_driver driver;
+@@ -237,6 +239,7 @@ struct virtio_driver {
+ int (*restore)(struct virtio_device *dev);
+ int (*reset_prepare)(struct virtio_device *dev);
+ int (*reset_done)(struct virtio_device *dev);
++ void (*shutdown)(struct virtio_device *dev);
+ };
+
+ #define drv_to_virtio(__drv) container_of_const(__drv, struct virtio_driver, driver)
+diff --git a/include/media/v4l2-subdev.h b/include/media/v4l2-subdev.h
+index 2f2200875b0384..57f2bcb4eb16c3 100644
+--- a/include/media/v4l2-subdev.h
++++ b/include/media/v4l2-subdev.h
+@@ -822,7 +822,9 @@ struct v4l2_subdev_state {
+ * possible configuration from the remote end, likely calling
+ * this operation as close as possible to stream on time. The
+ * operation shall fail if the pad index it has been called on
+- * is not valid or in case of unrecoverable failures.
++ * is not valid or in case of unrecoverable failures. The
++ * config argument has been memset to 0 just before calling
++ * the op.
+ *
+ * @set_routing: Enable or disable data connection routes described in the
+ * subdevice routing table. Subdevs that implement this operation
+diff --git a/include/net/cfg80211.h b/include/net/cfg80211.h
+index 363d7dd2255aae..b000f6a59a0301 100644
+--- a/include/net/cfg80211.h
++++ b/include/net/cfg80211.h
+@@ -127,6 +127,8 @@ struct wiphy;
+ * even if it is otherwise disabled.
+ * @IEEE80211_CHAN_ALLOW_6GHZ_VLP_AP: Allow using this channel for AP operation
+ * with very low power (VLP), even if otherwise set to NO_IR.
++ * @IEEE80211_CHAN_ALLOW_20MHZ_ACTIVITY: Allow activity on a 20 MHz channel,
++ * even if otherwise set to NO_IR.
+ */
+ enum ieee80211_channel_flags {
+ IEEE80211_CHAN_DISABLED = BIT(0),
+@@ -155,6 +157,7 @@ enum ieee80211_channel_flags {
+ IEEE80211_CHAN_NO_6GHZ_AFC_CLIENT = BIT(23),
+ IEEE80211_CHAN_CAN_MONITOR = BIT(24),
+ IEEE80211_CHAN_ALLOW_6GHZ_VLP_AP = BIT(25),
++ IEEE80211_CHAN_ALLOW_20MHZ_ACTIVITY = BIT(26),
+ };
+
+ #define IEEE80211_CHAN_NO_HT40 \
+@@ -9750,6 +9753,7 @@ struct cfg80211_mlo_reconf_done_data {
+ u16 added_links;
+ struct {
+ struct cfg80211_bss *bss;
++ u8 *addr;
+ } links[IEEE80211_MLD_MAX_NUM_LINKS];
+ };
+
+diff --git a/include/net/dropreason.h b/include/net/dropreason.h
+index 56cb7be92244c2..7d3b1a2a6feca0 100644
+--- a/include/net/dropreason.h
++++ b/include/net/dropreason.h
+@@ -17,12 +17,6 @@ enum skb_drop_reason_subsys {
+ */
+ SKB_DROP_REASON_SUBSYS_MAC80211_UNUSABLE,
+
+- /**
+- * @SKB_DROP_REASON_SUBSYS_MAC80211_MONITOR: mac80211 drop reasons
+- * for frames still going to monitor, see net/mac80211/drop.h
+- */
+- SKB_DROP_REASON_SUBSYS_MAC80211_MONITOR,
+-
+ /**
+ * @SKB_DROP_REASON_SUBSYS_OPENVSWITCH: openvswitch drop reasons,
+ * see net/openvswitch/drop.h
+diff --git a/include/net/mac80211.h b/include/net/mac80211.h
+index dcbb2e54746c7f..b421526aae8512 100644
+--- a/include/net/mac80211.h
++++ b/include/net/mac80211.h
+@@ -7,7 +7,7 @@
+ * Copyright 2007-2010 Johannes Berg <johannes@sipsolutions.net>
+ * Copyright 2013-2014 Intel Mobile Communications GmbH
+ * Copyright (C) 2015 - 2017 Intel Deutschland GmbH
+- * Copyright (C) 2018 - 2024 Intel Corporation
++ * Copyright (C) 2018 - 2025 Intel Corporation
+ */
+
+ #ifndef MAC80211_H
+@@ -3829,7 +3829,7 @@ enum ieee80211_reconfig_type {
+ * @was_assoc: set if this call is due to deauth/disassoc
+ * while just having been associated
+ * @link_id: the link id on which the frame will be TX'ed.
+- * Only used with the mgd_prepare_tx() method.
++ * 0 for a non-MLO connection.
+ */
+ struct ieee80211_prep_tx_info {
+ u16 duration;
+diff --git a/include/net/xfrm.h b/include/net/xfrm.h
+index e1eed5d47d0725..03a1ed1e610b2c 100644
+--- a/include/net/xfrm.h
++++ b/include/net/xfrm.h
+@@ -236,7 +236,6 @@ struct xfrm_state {
+
+ /* Data for encapsulator */
+ struct xfrm_encap_tmpl *encap;
+- struct sock __rcu *encap_sk;
+
+ /* NAT keepalive */
+ u32 nat_keepalive_interval; /* seconds */
+diff --git a/include/rdma/uverbs_std_types.h b/include/rdma/uverbs_std_types.h
+index fe05121169589f..555ea3d142a46b 100644
+--- a/include/rdma/uverbs_std_types.h
++++ b/include/rdma/uverbs_std_types.h
+@@ -34,7 +34,7 @@
+ static inline void *_uobj_get_obj_read(struct ib_uobject *uobj)
+ {
+ if (IS_ERR(uobj))
+- return NULL;
++ return ERR_CAST(uobj);
+ return uobj->object;
+ }
+ #define uobj_get_obj_read(_object, _type, _id, _attrs) \
+diff --git a/include/scsi/scsi_proto.h b/include/scsi/scsi_proto.h
+index 70e1262b2e202e..aeca37816506dd 100644
+--- a/include/scsi/scsi_proto.h
++++ b/include/scsi/scsi_proto.h
+@@ -33,8 +33,8 @@
+ #define INQUIRY 0x12
+ #define RECOVER_BUFFERED_DATA 0x14
+ #define MODE_SELECT 0x15
+-#define RESERVE 0x16
+-#define RELEASE 0x17
++#define RESERVE_6 0x16
++#define RELEASE_6 0x17
+ #define COPY 0x18
+ #define ERASE 0x19
+ #define MODE_SENSE 0x1a
+diff --git a/include/sound/hda_codec.h b/include/sound/hda_codec.h
+index 575e55aa08ca93..c1fe6290d04dcb 100644
+--- a/include/sound/hda_codec.h
++++ b/include/sound/hda_codec.h
+@@ -195,6 +195,7 @@ struct hda_codec {
+ /* beep device */
+ struct hda_beep *beep;
+ unsigned int beep_mode;
++ bool beep_just_power_on;
+
+ /* widget capabilities cache */
+ u32 *wcaps;
+diff --git a/include/sound/pcm.h b/include/sound/pcm.h
+index 8becb450488736..8582d22f381847 100644
+--- a/include/sound/pcm.h
++++ b/include/sound/pcm.h
+@@ -1404,6 +1404,8 @@ int snd_pcm_lib_mmap_iomem(struct snd_pcm_substream *substream, struct vm_area_s
+ #define snd_pcm_lib_mmap_iomem NULL
+ #endif
+
++void snd_pcm_runtime_buffer_set_silence(struct snd_pcm_runtime *runtime);
++
+ /**
+ * snd_pcm_limit_isa_dma_size - Get the max size fitting with ISA DMA transfer
+ * @dma: DMA number
+diff --git a/include/sound/soc_sdw_utils.h b/include/sound/soc_sdw_utils.h
+index 36a4a1e1d8ca28..d8bd5d37131aa0 100644
+--- a/include/sound/soc_sdw_utils.h
++++ b/include/sound/soc_sdw_utils.h
+@@ -226,6 +226,7 @@ int asoc_sdw_cs_amp_init(struct snd_soc_card *card,
+ bool playback);
+ int asoc_sdw_cs_spk_feedback_rtd_init(struct snd_soc_pcm_runtime *rtd,
+ struct snd_soc_dai *dai);
++int asoc_sdw_cs35l56_volume_limit(struct snd_soc_card *card, const char *name_prefix);
+
+ /* MAXIM codec support */
+ int asoc_sdw_maxim_init(struct snd_soc_card *card,
+diff --git a/include/trace/events/btrfs.h b/include/trace/events/btrfs.h
+index 549ab3b4196180..3efc00cc1bcd29 100644
+--- a/include/trace/events/btrfs.h
++++ b/include/trace/events/btrfs.h
+@@ -1928,7 +1928,7 @@ DECLARE_EVENT_CLASS(btrfs__prelim_ref,
+ TP_PROTO(const struct btrfs_fs_info *fs_info,
+ const struct prelim_ref *oldref,
+ const struct prelim_ref *newref, u64 tree_size),
+- TP_ARGS(fs_info, newref, oldref, tree_size),
++ TP_ARGS(fs_info, oldref, newref, tree_size),
+
+ TP_STRUCT__entry_btrfs(
+ __field( u64, root_id )
+diff --git a/include/trace/events/scsi.h b/include/trace/events/scsi.h
+index 05f1945ed204ec..bf6cc98d912288 100644
+--- a/include/trace/events/scsi.h
++++ b/include/trace/events/scsi.h
+@@ -29,8 +29,8 @@
+ scsi_opcode_name(INQUIRY), \
+ scsi_opcode_name(RECOVER_BUFFERED_DATA), \
+ scsi_opcode_name(MODE_SELECT), \
+- scsi_opcode_name(RESERVE), \
+- scsi_opcode_name(RELEASE), \
++ scsi_opcode_name(RESERVE_6), \
++ scsi_opcode_name(RELEASE_6), \
+ scsi_opcode_name(COPY), \
+ scsi_opcode_name(ERASE), \
+ scsi_opcode_name(MODE_SENSE), \
+diff --git a/include/trace/events/target.h b/include/trace/events/target.h
+index a13cbf2b340505..7e2e20ba26f1c7 100644
+--- a/include/trace/events/target.h
++++ b/include/trace/events/target.h
+@@ -31,8 +31,8 @@
+ scsi_opcode_name(INQUIRY), \
+ scsi_opcode_name(RECOVER_BUFFERED_DATA), \
+ scsi_opcode_name(MODE_SELECT), \
+- scsi_opcode_name(RESERVE), \
+- scsi_opcode_name(RELEASE), \
++ scsi_opcode_name(RESERVE_6), \
++ scsi_opcode_name(RELEASE_6), \
+ scsi_opcode_name(COPY), \
+ scsi_opcode_name(ERASE), \
+ scsi_opcode_name(MODE_SENSE), \
+diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
+index 2acf9b33637174..89242184a19376 100644
+--- a/include/uapi/linux/bpf.h
++++ b/include/uapi/linux/bpf.h
+@@ -1207,6 +1207,7 @@ enum bpf_perf_event_type {
+ #define BPF_F_BEFORE (1U << 3)
+ #define BPF_F_AFTER (1U << 4)
+ #define BPF_F_ID (1U << 5)
++#define BPF_F_PREORDER (1U << 6)
+ #define BPF_F_LINK BPF_F_LINK /* 1 << 13 */
+
+ /* If BPF_F_STRICT_ALIGNMENT is used in BPF_PROG_LOAD command, the
+diff --git a/include/uapi/linux/iommufd.h b/include/uapi/linux/iommufd.h
+index 78747b24bd0fbc..9495604e40b061 100644
+--- a/include/uapi/linux/iommufd.h
++++ b/include/uapi/linux/iommufd.h
+@@ -608,9 +608,17 @@ enum iommu_hw_info_type {
+ * IOMMU_HWPT_GET_DIRTY_BITMAP
+ * IOMMU_HWPT_SET_DIRTY_TRACKING
+ *
++ * @IOMMU_HW_CAP_PCI_PASID_EXEC: Execute Permission Supported, user ignores it
++ * when the struct
++ * iommu_hw_info::out_max_pasid_log2 is zero.
++ * @IOMMU_HW_CAP_PCI_PASID_PRIV: Privileged Mode Supported, user ignores it
++ * when the struct
++ * iommu_hw_info::out_max_pasid_log2 is zero.
+ */
+ enum iommufd_hw_capabilities {
+ IOMMU_HW_CAP_DIRTY_TRACKING = 1 << 0,
++ IOMMU_HW_CAP_PCI_PASID_EXEC = 1 << 1,
++ IOMMU_HW_CAP_PCI_PASID_PRIV = 1 << 2,
+ };
+
+ /**
+@@ -626,6 +634,9 @@ enum iommufd_hw_capabilities {
+ * iommu_hw_info_type.
+ * @out_capabilities: Output the generic iommu capability info type as defined
+ * in the enum iommu_hw_capabilities.
++ * @out_max_pasid_log2: Output the width of PASIDs. 0 means no PASID support.
++ * PCI devices turn to out_capabilities to check if the
++ * specific capabilities is supported or not.
+ * @__reserved: Must be 0
+ *
+ * Query an iommu type specific hardware information data from an iommu behind
+@@ -649,7 +660,8 @@ struct iommu_hw_info {
+ __u32 data_len;
+ __aligned_u64 data_uptr;
+ __u32 out_data_type;
+- __u32 __reserved;
++ __u8 out_max_pasid_log2;
++ __u8 __reserved[3];
+ __aligned_u64 out_capabilities;
+ };
+ #define IOMMU_GET_HW_INFO _IO(IOMMUFD_TYPE, IOMMUFD_CMD_GET_HW_INFO)
+diff --git a/include/uapi/linux/nl80211.h b/include/uapi/linux/nl80211.h
+index f6c1b181c886d2..ea30fa455a098a 100644
+--- a/include/uapi/linux/nl80211.h
++++ b/include/uapi/linux/nl80211.h
+@@ -4327,6 +4327,8 @@ enum nl80211_wmm_rule {
+ * otherwise completely disabled.
+ * @NL80211_FREQUENCY_ATTR_ALLOW_6GHZ_VLP_AP: This channel can be used for a
+ * very low power (VLP) AP, despite being NO_IR.
++ * @NL80211_FREQUENCY_ATTR_ALLOW_20MHZ_ACTIVITY: This channel can be active in
++ * 20 MHz bandwidth, despite being NO_IR.
+ * @NL80211_FREQUENCY_ATTR_MAX: highest frequency attribute number
+ * currently defined
+ * @__NL80211_FREQUENCY_ATTR_AFTER_LAST: internal use
+@@ -4371,6 +4373,7 @@ enum nl80211_frequency_attr {
+ NL80211_FREQUENCY_ATTR_NO_6GHZ_AFC_CLIENT,
+ NL80211_FREQUENCY_ATTR_CAN_MONITOR,
+ NL80211_FREQUENCY_ATTR_ALLOW_6GHZ_VLP_AP,
++ NL80211_FREQUENCY_ATTR_ALLOW_20MHZ_ACTIVITY,
+
+ /* keep last */
+ __NL80211_FREQUENCY_ATTR_AFTER_LAST,
+@@ -4582,31 +4585,34 @@ enum nl80211_sched_scan_match_attr {
+ * @NL80211_RRF_NO_6GHZ_AFC_CLIENT: Client connection to AFC AP not allowed
+ * @NL80211_RRF_ALLOW_6GHZ_VLP_AP: Very low power (VLP) AP can be permitted
+ * despite NO_IR configuration.
++ * @NL80211_RRF_ALLOW_20MHZ_ACTIVITY: Allow activity in 20 MHz bandwidth,
++ * despite NO_IR configuration.
+ */
+ enum nl80211_reg_rule_flags {
+- NL80211_RRF_NO_OFDM = 1<<0,
+- NL80211_RRF_NO_CCK = 1<<1,
+- NL80211_RRF_NO_INDOOR = 1<<2,
+- NL80211_RRF_NO_OUTDOOR = 1<<3,
+- NL80211_RRF_DFS = 1<<4,
+- NL80211_RRF_PTP_ONLY = 1<<5,
+- NL80211_RRF_PTMP_ONLY = 1<<6,
+- NL80211_RRF_NO_IR = 1<<7,
+- __NL80211_RRF_NO_IBSS = 1<<8,
+- NL80211_RRF_AUTO_BW = 1<<11,
+- NL80211_RRF_IR_CONCURRENT = 1<<12,
+- NL80211_RRF_NO_HT40MINUS = 1<<13,
+- NL80211_RRF_NO_HT40PLUS = 1<<14,
+- NL80211_RRF_NO_80MHZ = 1<<15,
+- NL80211_RRF_NO_160MHZ = 1<<16,
+- NL80211_RRF_NO_HE = 1<<17,
+- NL80211_RRF_NO_320MHZ = 1<<18,
+- NL80211_RRF_NO_EHT = 1<<19,
+- NL80211_RRF_PSD = 1<<20,
+- NL80211_RRF_DFS_CONCURRENT = 1<<21,
+- NL80211_RRF_NO_6GHZ_VLP_CLIENT = 1<<22,
+- NL80211_RRF_NO_6GHZ_AFC_CLIENT = 1<<23,
+- NL80211_RRF_ALLOW_6GHZ_VLP_AP = 1<<24,
++ NL80211_RRF_NO_OFDM = 1 << 0,
++ NL80211_RRF_NO_CCK = 1 << 1,
++ NL80211_RRF_NO_INDOOR = 1 << 2,
++ NL80211_RRF_NO_OUTDOOR = 1 << 3,
++ NL80211_RRF_DFS = 1 << 4,
++ NL80211_RRF_PTP_ONLY = 1 << 5,
++ NL80211_RRF_PTMP_ONLY = 1 << 6,
++ NL80211_RRF_NO_IR = 1 << 7,
++ __NL80211_RRF_NO_IBSS = 1 << 8,
++ NL80211_RRF_AUTO_BW = 1 << 11,
++ NL80211_RRF_IR_CONCURRENT = 1 << 12,
++ NL80211_RRF_NO_HT40MINUS = 1 << 13,
++ NL80211_RRF_NO_HT40PLUS = 1 << 14,
++ NL80211_RRF_NO_80MHZ = 1 << 15,
++ NL80211_RRF_NO_160MHZ = 1 << 16,
++ NL80211_RRF_NO_HE = 1 << 17,
++ NL80211_RRF_NO_320MHZ = 1 << 18,
++ NL80211_RRF_NO_EHT = 1 << 19,
++ NL80211_RRF_PSD = 1 << 20,
++ NL80211_RRF_DFS_CONCURRENT = 1 << 21,
++ NL80211_RRF_NO_6GHZ_VLP_CLIENT = 1 << 22,
++ NL80211_RRF_NO_6GHZ_AFC_CLIENT = 1 << 23,
++ NL80211_RRF_ALLOW_6GHZ_VLP_AP = 1 << 24,
++ NL80211_RRF_ALLOW_20MHZ_ACTIVITY = 1 << 25,
+ };
+
+ #define NL80211_RRF_PASSIVE_SCAN NL80211_RRF_NO_IR
+diff --git a/include/uapi/linux/snmp.h b/include/uapi/linux/snmp.h
+index 848c7784e684c0..eb9fb776fdc3e5 100644
+--- a/include/uapi/linux/snmp.h
++++ b/include/uapi/linux/snmp.h
+@@ -186,6 +186,7 @@ enum
+ LINUX_MIB_TIMEWAITKILLED, /* TimeWaitKilled */
+ LINUX_MIB_PAWSACTIVEREJECTED, /* PAWSActiveRejected */
+ LINUX_MIB_PAWSESTABREJECTED, /* PAWSEstabRejected */
++ LINUX_MIB_TSECRREJECTED, /* TSEcrRejected */
+ LINUX_MIB_PAWS_OLD_ACK, /* PAWSOldAck */
+ LINUX_MIB_DELAYEDACKS, /* DelayedACKs */
+ LINUX_MIB_DELAYEDACKLOCKED, /* DelayedACKLocked */
+diff --git a/include/uapi/linux/taskstats.h b/include/uapi/linux/taskstats.h
+index 95762232e01863..5929030d4e8b06 100644
+--- a/include/uapi/linux/taskstats.h
++++ b/include/uapi/linux/taskstats.h
+@@ -34,7 +34,7 @@
+ */
+
+
+-#define TASKSTATS_VERSION 15
++#define TASKSTATS_VERSION 16
+ #define TS_COMM_LEN 32 /* should be >= TASK_COMM_LEN
+ * in linux/sched.h */
+
+@@ -72,8 +72,6 @@ struct taskstats {
+ */
+ __u64 cpu_count __attribute__((aligned(8)));
+ __u64 cpu_delay_total;
+- __u64 cpu_delay_max;
+- __u64 cpu_delay_min;
+
+ /* Following four fields atomically updated using task->delays->lock */
+
+@@ -82,14 +80,10 @@ struct taskstats {
+ */
+ __u64 blkio_count;
+ __u64 blkio_delay_total;
+- __u64 blkio_delay_max;
+- __u64 blkio_delay_min;
+
+ /* Delay waiting for page fault I/O (swap in only) */
+ __u64 swapin_count;
+ __u64 swapin_delay_total;
+- __u64 swapin_delay_max;
+- __u64 swapin_delay_min;
+
+ /* cpu "wall-clock" running time
+ * On some architectures, value will adjust for cpu time stolen
+@@ -172,14 +166,11 @@ struct taskstats {
+ /* Delay waiting for memory reclaim */
+ __u64 freepages_count;
+ __u64 freepages_delay_total;
+- __u64 freepages_delay_max;
+- __u64 freepages_delay_min;
++
+
+ /* Delay waiting for thrashing page */
+ __u64 thrashing_count;
+ __u64 thrashing_delay_total;
+- __u64 thrashing_delay_max;
+- __u64 thrashing_delay_min;
+
+ /* v10: 64-bit btime to avoid overflow */
+ __u64 ac_btime64; /* 64-bit begin time */
+@@ -187,8 +178,6 @@ struct taskstats {
+ /* v11: Delay waiting for memory compact */
+ __u64 compact_count;
+ __u64 compact_delay_total;
+- __u64 compact_delay_max;
+- __u64 compact_delay_min;
+
+ /* v12 begin */
+ __u32 ac_tgid; /* thread group ID */
+@@ -210,15 +199,37 @@ struct taskstats {
+ /* v13: Delay waiting for write-protect copy */
+ __u64 wpcopy_count;
+ __u64 wpcopy_delay_total;
+- __u64 wpcopy_delay_max;
+- __u64 wpcopy_delay_min;
+
+ /* v14: Delay waiting for IRQ/SOFTIRQ */
+ __u64 irq_count;
+ __u64 irq_delay_total;
+- __u64 irq_delay_max;
+- __u64 irq_delay_min;
+- /* v15: add Delay max */
++
++ /* v15: add Delay max and Delay min */
++
++ /* v16: move Delay max and Delay min to the end of taskstat */
++ __u64 cpu_delay_max;
++ __u64 cpu_delay_min;
++
++ __u64 blkio_delay_max;
++ __u64 blkio_delay_min;
++
++ __u64 swapin_delay_max;
++ __u64 swapin_delay_min;
++
++ __u64 freepages_delay_max;
++ __u64 freepages_delay_min;
++
++ __u64 thrashing_delay_max;
++ __u64 thrashing_delay_min;
++
++ __u64 compact_delay_max;
++ __u64 compact_delay_min;
++
++ __u64 wpcopy_delay_max;
++ __u64 wpcopy_delay_min;
++
++ __u64 irq_delay_max;
++ __u64 irq_delay_min;
+ };
+
+
+diff --git a/include/ufs/ufs_quirks.h b/include/ufs/ufs_quirks.h
+index 41ff44dfa1db3f..f52de5ed1b3b6e 100644
+--- a/include/ufs/ufs_quirks.h
++++ b/include/ufs/ufs_quirks.h
+@@ -107,4 +107,10 @@ struct ufs_dev_quirk {
+ */
+ #define UFS_DEVICE_QUIRK_DELAY_AFTER_LPM (1 << 11)
+
++/*
++ * Some ufs devices may need more time to be in hibern8 before exiting.
++ * Enable this quirk to give it an additional 100us.
++ */
++#define UFS_DEVICE_QUIRK_PA_HIBER8TIME (1 << 12)
++
+ #endif /* UFS_QUIRKS_H_ */
+diff --git a/io_uring/fdinfo.c b/io_uring/fdinfo.c
+index 336aec7ea8c29f..e0d6a59a89fa1b 100644
+--- a/io_uring/fdinfo.c
++++ b/io_uring/fdinfo.c
+@@ -117,11 +117,11 @@ static void __io_uring_show_fdinfo(struct io_ring_ctx *ctx, struct seq_file *m)
+ seq_printf(m, "SqMask:\t0x%x\n", sq_mask);
+ seq_printf(m, "SqHead:\t%u\n", sq_head);
+ seq_printf(m, "SqTail:\t%u\n", sq_tail);
+- seq_printf(m, "CachedSqHead:\t%u\n", ctx->cached_sq_head);
++ seq_printf(m, "CachedSqHead:\t%u\n", data_race(ctx->cached_sq_head));
+ seq_printf(m, "CqMask:\t0x%x\n", cq_mask);
+ seq_printf(m, "CqHead:\t%u\n", cq_head);
+ seq_printf(m, "CqTail:\t%u\n", cq_tail);
+- seq_printf(m, "CachedCqTail:\t%u\n", ctx->cached_cq_tail);
++ seq_printf(m, "CachedCqTail:\t%u\n", data_race(ctx->cached_cq_tail));
+ seq_printf(m, "SQEs:\t%u\n", sq_tail - sq_head);
+ sq_entries = min(sq_tail - sq_head, ctx->sq_entries);
+ for (i = 0; i < sq_entries; i++) {
+diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
+index a60cb9d30cc0dc..1f60883d78c645 100644
+--- a/io_uring/io_uring.c
++++ b/io_uring/io_uring.c
+@@ -110,11 +110,13 @@
+ #define SQE_VALID_FLAGS (SQE_COMMON_FLAGS | IOSQE_BUFFER_SELECT | \
+ IOSQE_IO_DRAIN | IOSQE_CQE_SKIP_SUCCESS)
+
++#define IO_REQ_LINK_FLAGS (REQ_F_LINK | REQ_F_HARDLINK)
++
+ #define IO_REQ_CLEAN_FLAGS (REQ_F_BUFFER_SELECTED | REQ_F_NEED_CLEANUP | \
+ REQ_F_POLLED | REQ_F_INFLIGHT | REQ_F_CREDS | \
+ REQ_F_ASYNC_DATA)
+
+-#define IO_REQ_CLEAN_SLOW_FLAGS (REQ_F_REFCOUNT | REQ_F_LINK | REQ_F_HARDLINK |\
++#define IO_REQ_CLEAN_SLOW_FLAGS (REQ_F_REFCOUNT | IO_REQ_LINK_FLAGS | \
+ REQ_F_REISSUE | IO_REQ_CLEAN_FLAGS)
+
+ #define IO_TCTX_REFS_CACHE_NR (1U << 10)
+@@ -131,7 +133,6 @@ struct io_defer_entry {
+
+ /* requests with any of those set should undergo io_disarm_next() */
+ #define IO_DISARM_MASK (REQ_F_ARM_LTIMEOUT | REQ_F_LINK_TIMEOUT | REQ_F_FAIL)
+-#define IO_REQ_LINK_FLAGS (REQ_F_LINK | REQ_F_HARDLINK)
+
+ /*
+ * No waiters. It's larger than any valid value of the tw counter
+@@ -631,6 +632,7 @@ static void __io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool dying)
+ * to care for a non-real case.
+ */
+ if (need_resched()) {
++ ctx->cqe_sentinel = ctx->cqe_cached;
+ io_cq_unlock_post(ctx);
+ mutex_unlock(&ctx->uring_lock);
+ cond_resched();
+@@ -864,10 +866,15 @@ bool io_req_post_cqe(struct io_kiocb *req, s32 res, u32 cflags)
+ lockdep_assert(!io_wq_current_is_worker());
+ lockdep_assert_held(&ctx->uring_lock);
+
+- __io_cq_lock(ctx);
+- posted = io_fill_cqe_aux(ctx, req->cqe.user_data, res, cflags);
++ if (!ctx->lockless_cq) {
++ spin_lock(&ctx->completion_lock);
++ posted = io_fill_cqe_aux(ctx, req->cqe.user_data, res, cflags);
++ spin_unlock(&ctx->completion_lock);
++ } else {
++ posted = io_fill_cqe_aux(ctx, req->cqe.user_data, res, cflags);
++ }
++
+ ctx->submit_state.cq_flush = true;
+- __io_cq_unlock_post(ctx);
+ return posted;
+ }
+
+@@ -1145,7 +1152,7 @@ static inline void io_req_local_work_add(struct io_kiocb *req,
+ * We don't know how many reuqests is there in the link and whether
+ * they can even be queued lazily, fall back to non-lazy.
+ */
+- if (req->flags & (REQ_F_LINK | REQ_F_HARDLINK))
++ if (req->flags & IO_REQ_LINK_FLAGS)
+ flags &= ~IOU_F_TWQ_LAZY_WAKE;
+
+ guard(rcu)();
+@@ -3526,6 +3533,44 @@ static struct file *io_uring_get_file(struct io_ring_ctx *ctx)
+ O_RDWR | O_CLOEXEC, NULL);
+ }
+
++static int io_uring_sanitise_params(struct io_uring_params *p)
++{
++ unsigned flags = p->flags;
++
++ /* There is no way to mmap rings without a real fd */
++ if ((flags & IORING_SETUP_REGISTERED_FD_ONLY) &&
++ !(flags & IORING_SETUP_NO_MMAP))
++ return -EINVAL;
++
++ if (flags & IORING_SETUP_SQPOLL) {
++ /* IPI related flags don't make sense with SQPOLL */
++ if (flags & (IORING_SETUP_COOP_TASKRUN |
++ IORING_SETUP_TASKRUN_FLAG |
++ IORING_SETUP_DEFER_TASKRUN))
++ return -EINVAL;
++ }
++
++ if (flags & IORING_SETUP_TASKRUN_FLAG) {
++ if (!(flags & (IORING_SETUP_COOP_TASKRUN |
++ IORING_SETUP_DEFER_TASKRUN)))
++ return -EINVAL;
++ }
++
++ /* HYBRID_IOPOLL only valid with IOPOLL */
++ if ((flags & IORING_SETUP_HYBRID_IOPOLL) && !(flags & IORING_SETUP_IOPOLL))
++ return -EINVAL;
++
++ /*
++ * For DEFER_TASKRUN we require the completion task to be the same as
++ * the submission task. This implies that there is only one submitter.
++ */
++ if ((flags & IORING_SETUP_DEFER_TASKRUN) &&
++ !(flags & IORING_SETUP_SINGLE_ISSUER))
++ return -EINVAL;
++
++ return 0;
++}
++
+ int io_uring_fill_params(unsigned entries, struct io_uring_params *p)
+ {
+ if (!entries)
+@@ -3536,10 +3581,6 @@ int io_uring_fill_params(unsigned entries, struct io_uring_params *p)
+ entries = IORING_MAX_ENTRIES;
+ }
+
+- if ((p->flags & IORING_SETUP_REGISTERED_FD_ONLY)
+- && !(p->flags & IORING_SETUP_NO_MMAP))
+- return -EINVAL;
+-
+ /*
+ * Use twice as many entries for the CQ ring. It's possible for the
+ * application to drive a higher depth than the size of the SQ ring,
+@@ -3601,6 +3642,10 @@ static __cold int io_uring_create(unsigned entries, struct io_uring_params *p,
+ struct file *file;
+ int ret;
+
++ ret = io_uring_sanitise_params(p);
++ if (ret)
++ return ret;
++
+ ret = io_uring_fill_params(entries, p);
+ if (unlikely(ret))
+ return ret;
+@@ -3648,37 +3693,10 @@ static __cold int io_uring_create(unsigned entries, struct io_uring_params *p,
+ * For SQPOLL, we just need a wakeup, always. For !SQPOLL, if
+ * COOP_TASKRUN is set, then IPIs are never needed by the app.
+ */
+- ret = -EINVAL;
+- if (ctx->flags & IORING_SETUP_SQPOLL) {
+- /* IPI related flags don't make sense with SQPOLL */
+- if (ctx->flags & (IORING_SETUP_COOP_TASKRUN |
+- IORING_SETUP_TASKRUN_FLAG |
+- IORING_SETUP_DEFER_TASKRUN))
+- goto err;
++ if (ctx->flags & (IORING_SETUP_SQPOLL|IORING_SETUP_COOP_TASKRUN))
+ ctx->notify_method = TWA_SIGNAL_NO_IPI;
+- } else if (ctx->flags & IORING_SETUP_COOP_TASKRUN) {
+- ctx->notify_method = TWA_SIGNAL_NO_IPI;
+- } else {
+- if (ctx->flags & IORING_SETUP_TASKRUN_FLAG &&
+- !(ctx->flags & IORING_SETUP_DEFER_TASKRUN))
+- goto err;
++ else
+ ctx->notify_method = TWA_SIGNAL;
+- }
+-
+- /* HYBRID_IOPOLL only valid with IOPOLL */
+- if ((ctx->flags & (IORING_SETUP_IOPOLL|IORING_SETUP_HYBRID_IOPOLL)) ==
+- IORING_SETUP_HYBRID_IOPOLL)
+- goto err;
+-
+- /*
+- * For DEFER_TASKRUN we require the completion task to be the same as the
+- * submission task. This implies that there is only one submitter, so enforce
+- * that.
+- */
+- if (ctx->flags & IORING_SETUP_DEFER_TASKRUN &&
+- !(ctx->flags & IORING_SETUP_SINGLE_ISSUER)) {
+- goto err;
+- }
+
+ /*
+ * This is just grabbed for accounting purposes. When a process exits,
+diff --git a/io_uring/msg_ring.c b/io_uring/msg_ring.c
+index 7e6f68e911f10c..f844ab24cda42b 100644
+--- a/io_uring/msg_ring.c
++++ b/io_uring/msg_ring.c
+@@ -93,6 +93,7 @@ static int io_msg_remote_post(struct io_ring_ctx *ctx, struct io_kiocb *req,
+ kmem_cache_free(req_cachep, req);
+ return -EOWNERDEAD;
+ }
++ req->opcode = IORING_OP_NOP;
+ req->cqe.user_data = user_data;
+ io_req_set_res(req, res, cflags);
+ percpu_ref_get(&ctx->refs);
+diff --git a/io_uring/net.c b/io_uring/net.c
+index 8f965ec67b5767..cb1e7bd1bee9ca 100644
+--- a/io_uring/net.c
++++ b/io_uring/net.c
+@@ -860,18 +860,24 @@ static inline bool io_recv_finish(struct io_kiocb *req, int *ret,
+ cflags |= IORING_CQE_F_SOCK_NONEMPTY;
+
+ if (sr->flags & IORING_RECVSEND_BUNDLE) {
+- cflags |= io_put_kbufs(req, *ret, io_bundle_nbufs(kmsg, *ret),
++ size_t this_ret = *ret - sr->done_io;
++
++ cflags |= io_put_kbufs(req, *ret, io_bundle_nbufs(kmsg, this_ret),
+ issue_flags);
+ if (sr->retry)
+ cflags = req->cqe.flags | (cflags & CQE_F_MASK);
+ /* bundle with no more immediate buffers, we're done */
+ if (req->flags & REQ_F_BL_EMPTY)
+ goto finish;
+- /* if more is available, retry and append to this one */
+- if (!sr->retry && kmsg->msg.msg_inq > 0 && *ret > 0) {
++ /*
++ * If more is available AND it was a full transfer, retry and
++ * append to this one
++ */
++ if (!sr->retry && kmsg->msg.msg_inq > 0 && this_ret > 0 &&
++ !iov_iter_count(&kmsg->msg.msg_iter)) {
+ req->cqe.flags = cflags & ~CQE_F_MASK;
+ sr->len = kmsg->msg.msg_inq;
+- sr->done_io += *ret;
++ sr->done_io += this_ret;
+ sr->retry = true;
+ return false;
+ }
+diff --git a/kernel/bpf/bpf_iter.c b/kernel/bpf/bpf_iter.c
+index 106735145948b4..380e9a7cac75da 100644
+--- a/kernel/bpf/bpf_iter.c
++++ b/kernel/bpf/bpf_iter.c
+@@ -335,7 +335,7 @@ static void cache_btf_id(struct bpf_iter_target_info *tinfo,
+ tinfo->btf_id = prog->aux->attach_btf_id;
+ }
+
+-bool bpf_iter_prog_supported(struct bpf_prog *prog)
++int bpf_iter_prog_supported(struct bpf_prog *prog)
+ {
+ const char *attach_fname = prog->aux->attach_func_name;
+ struct bpf_iter_target_info *tinfo = NULL, *iter;
+@@ -344,7 +344,7 @@ bool bpf_iter_prog_supported(struct bpf_prog *prog)
+ int prefix_len = strlen(prefix);
+
+ if (strncmp(attach_fname, prefix, prefix_len))
+- return false;
++ return -EINVAL;
+
+ mutex_lock(&targets_mutex);
+ list_for_each_entry(iter, &targets, list) {
+@@ -360,12 +360,11 @@ bool bpf_iter_prog_supported(struct bpf_prog *prog)
+ }
+ mutex_unlock(&targets_mutex);
+
+- if (tinfo) {
+- prog->aux->ctx_arg_info_size = tinfo->reg_info->ctx_arg_info_size;
+- prog->aux->ctx_arg_info = tinfo->reg_info->ctx_arg_info;
+- }
++ if (!tinfo)
++ return -EINVAL;
+
+- return tinfo != NULL;
++ return bpf_prog_ctx_arg_info_init(prog, tinfo->reg_info->ctx_arg_info,
++ tinfo->reg_info->ctx_arg_info_size);
+ }
+
+ const struct bpf_func_proto *
+diff --git a/kernel/bpf/bpf_struct_ops.c b/kernel/bpf/bpf_struct_ops.c
+index 040fb1cd840b65..9b7f3b9c52622d 100644
+--- a/kernel/bpf/bpf_struct_ops.c
++++ b/kernel/bpf/bpf_struct_ops.c
+@@ -146,39 +146,6 @@ void bpf_struct_ops_image_free(void *image)
+ }
+
+ #define MAYBE_NULL_SUFFIX "__nullable"
+-#define MAX_STUB_NAME 128
+-
+-/* Return the type info of a stub function, if it exists.
+- *
+- * The name of a stub function is made up of the name of the struct_ops and
+- * the name of the function pointer member, separated by "__". For example,
+- * if the struct_ops type is named "foo_ops" and the function pointer
+- * member is named "bar", the stub function name would be "foo_ops__bar".
+- */
+-static const struct btf_type *
+-find_stub_func_proto(const struct btf *btf, const char *st_op_name,
+- const char *member_name)
+-{
+- char stub_func_name[MAX_STUB_NAME];
+- const struct btf_type *func_type;
+- s32 btf_id;
+- int cp;
+-
+- cp = snprintf(stub_func_name, MAX_STUB_NAME, "%s__%s",
+- st_op_name, member_name);
+- if (cp >= MAX_STUB_NAME) {
+- pr_warn("Stub function name too long\n");
+- return NULL;
+- }
+- btf_id = btf_find_by_name_kind(btf, stub_func_name, BTF_KIND_FUNC);
+- if (btf_id < 0)
+- return NULL;
+- func_type = btf_type_by_id(btf, btf_id);
+- if (!func_type)
+- return NULL;
+-
+- return btf_type_by_id(btf, func_type->type); /* FUNC_PROTO */
+-}
+
+ /* Prepare argument info for every nullable argument of a member of a
+ * struct_ops type.
+@@ -203,27 +170,42 @@ find_stub_func_proto(const struct btf *btf, const char *st_op_name,
+ static int prepare_arg_info(struct btf *btf,
+ const char *st_ops_name,
+ const char *member_name,
+- const struct btf_type *func_proto,
++ const struct btf_type *func_proto, void *stub_func_addr,
+ struct bpf_struct_ops_arg_info *arg_info)
+ {
+ const struct btf_type *stub_func_proto, *pointed_type;
+ const struct btf_param *stub_args, *args;
+ struct bpf_ctx_arg_aux *info, *info_buf;
+ u32 nargs, arg_no, info_cnt = 0;
++ char ksym[KSYM_SYMBOL_LEN];
++ const char *stub_fname;
++ s32 stub_func_id;
+ u32 arg_btf_id;
+ int offset;
+
+- stub_func_proto = find_stub_func_proto(btf, st_ops_name, member_name);
+- if (!stub_func_proto)
+- return 0;
++ stub_fname = kallsyms_lookup((unsigned long)stub_func_addr, NULL, NULL, NULL, ksym);
++ if (!stub_fname) {
++ pr_warn("Cannot find the stub function name for the %s in struct %s\n",
++ member_name, st_ops_name);
++ return -ENOENT;
++ }
++
++ stub_func_id = btf_find_by_name_kind(btf, stub_fname, BTF_KIND_FUNC);
++ if (stub_func_id < 0) {
++ pr_warn("Cannot find the stub function %s in btf\n", stub_fname);
++ return -ENOENT;
++ }
++
++ stub_func_proto = btf_type_by_id(btf, stub_func_id);
++ stub_func_proto = btf_type_by_id(btf, stub_func_proto->type);
+
+ /* Check if the number of arguments of the stub function is the same
+ * as the number of arguments of the function pointer.
+ */
+ nargs = btf_type_vlen(func_proto);
+ if (nargs != btf_type_vlen(stub_func_proto)) {
+- pr_warn("the number of arguments of the stub function %s__%s does not match the number of arguments of the member %s of struct %s\n",
+- st_ops_name, member_name, member_name, st_ops_name);
++ pr_warn("the number of arguments of the stub function %s does not match the number of arguments of the member %s of struct %s\n",
++ stub_fname, member_name, st_ops_name);
+ return -EINVAL;
+ }
+
+@@ -253,21 +235,21 @@ static int prepare_arg_info(struct btf *btf,
+ &arg_btf_id);
+ if (!pointed_type ||
+ !btf_type_is_struct(pointed_type)) {
+- pr_warn("stub function %s__%s has %s tagging to an unsupported type\n",
+- st_ops_name, member_name, MAYBE_NULL_SUFFIX);
++ pr_warn("stub function %s has %s tagging to an unsupported type\n",
++ stub_fname, MAYBE_NULL_SUFFIX);
+ goto err_out;
+ }
+
+ offset = btf_ctx_arg_offset(btf, func_proto, arg_no);
+ if (offset < 0) {
+- pr_warn("stub function %s__%s has an invalid trampoline ctx offset for arg#%u\n",
+- st_ops_name, member_name, arg_no);
++ pr_warn("stub function %s has an invalid trampoline ctx offset for arg#%u\n",
++ stub_fname, arg_no);
+ goto err_out;
+ }
+
+ if (args[arg_no].type != stub_args[arg_no].type) {
+- pr_warn("arg#%u type in stub function %s__%s does not match with its original func_proto\n",
+- arg_no, st_ops_name, member_name);
++ pr_warn("arg#%u type in stub function %s does not match with its original func_proto\n",
++ arg_no, stub_fname);
+ goto err_out;
+ }
+
+@@ -324,6 +306,13 @@ static bool is_module_member(const struct btf *btf, u32 id)
+ return !strcmp(btf_name_by_offset(btf, t->name_off), "module");
+ }
+
++int bpf_struct_ops_supported(const struct bpf_struct_ops *st_ops, u32 moff)
++{
++ void *func_ptr = *(void **)(st_ops->cfi_stubs + moff);
++
++ return func_ptr ? 0 : -ENOTSUPP;
++}
++
+ int bpf_struct_ops_desc_init(struct bpf_struct_ops_desc *st_ops_desc,
+ struct btf *btf,
+ struct bpf_verifier_log *log)
+@@ -387,7 +376,10 @@ int bpf_struct_ops_desc_init(struct bpf_struct_ops_desc *st_ops_desc,
+
+ for_each_member(i, t, member) {
+ const struct btf_type *func_proto;
++ void **stub_func_addr;
++ u32 moff;
+
++ moff = __btf_member_bit_offset(t, member) / 8;
+ mname = btf_name_by_offset(btf, member->name_off);
+ if (!*mname) {
+ pr_warn("anon member in struct %s is not supported\n",
+@@ -413,7 +405,11 @@ int bpf_struct_ops_desc_init(struct bpf_struct_ops_desc *st_ops_desc,
+ func_proto = btf_type_resolve_func_ptr(btf,
+ member->type,
+ NULL);
+- if (!func_proto)
++
++ /* The member is not a function pointer or
++ * the function pointer is not supported.
++ */
++ if (!func_proto || bpf_struct_ops_supported(st_ops, moff))
+ continue;
+
+ if (btf_distill_func_proto(log, btf,
+@@ -425,8 +421,9 @@ int bpf_struct_ops_desc_init(struct bpf_struct_ops_desc *st_ops_desc,
+ goto errout;
+ }
+
++ stub_func_addr = *(void **)(st_ops->cfi_stubs + moff);
+ err = prepare_arg_info(btf, st_ops->name, mname,
+- func_proto,
++ func_proto, stub_func_addr,
+ arg_info + i);
+ if (err)
+ goto errout;
+@@ -1152,13 +1149,6 @@ void bpf_struct_ops_put(const void *kdata)
+ bpf_map_put(&st_map->map);
+ }
+
+-int bpf_struct_ops_supported(const struct bpf_struct_ops *st_ops, u32 moff)
+-{
+- void *func_ptr = *(void **)(st_ops->cfi_stubs + moff);
+-
+- return func_ptr ? 0 : -ENOTSUPP;
+-}
+-
+ static bool bpf_struct_ops_valid_to_reg(struct bpf_map *map)
+ {
+ struct bpf_struct_ops_map *st_map = (struct bpf_struct_ops_map *)map;
+diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c
+index 46e5db65dbc8d8..84f58f3d028a3e 100644
+--- a/kernel/bpf/cgroup.c
++++ b/kernel/bpf/cgroup.c
+@@ -369,7 +369,7 @@ static struct bpf_prog *prog_list_prog(struct bpf_prog_list *pl)
+ /* count number of elements in the list.
+ * it's slow but the list cannot be long
+ */
+-static u32 prog_list_length(struct hlist_head *head)
++static u32 prog_list_length(struct hlist_head *head, int *preorder_cnt)
+ {
+ struct bpf_prog_list *pl;
+ u32 cnt = 0;
+@@ -377,6 +377,8 @@ static u32 prog_list_length(struct hlist_head *head)
+ hlist_for_each_entry(pl, head, node) {
+ if (!prog_list_prog(pl))
+ continue;
++ if (preorder_cnt && (pl->flags & BPF_F_PREORDER))
++ (*preorder_cnt)++;
+ cnt++;
+ }
+ return cnt;
+@@ -400,7 +402,7 @@ static bool hierarchy_allows_attach(struct cgroup *cgrp,
+
+ if (flags & BPF_F_ALLOW_MULTI)
+ return true;
+- cnt = prog_list_length(&p->bpf.progs[atype]);
++ cnt = prog_list_length(&p->bpf.progs[atype], NULL);
+ WARN_ON_ONCE(cnt > 1);
+ if (cnt == 1)
+ return !!(flags & BPF_F_ALLOW_OVERRIDE);
+@@ -423,12 +425,12 @@ static int compute_effective_progs(struct cgroup *cgrp,
+ struct bpf_prog_array *progs;
+ struct bpf_prog_list *pl;
+ struct cgroup *p = cgrp;
+- int cnt = 0;
++ int i, j, cnt = 0, preorder_cnt = 0, fstart, bstart, init_bstart;
+
+ /* count number of effective programs by walking parents */
+ do {
+ if (cnt == 0 || (p->bpf.flags[atype] & BPF_F_ALLOW_MULTI))
+- cnt += prog_list_length(&p->bpf.progs[atype]);
++ cnt += prog_list_length(&p->bpf.progs[atype], &preorder_cnt);
+ p = cgroup_parent(p);
+ } while (p);
+
+@@ -439,20 +441,34 @@ static int compute_effective_progs(struct cgroup *cgrp,
+ /* populate the array with effective progs */
+ cnt = 0;
+ p = cgrp;
++ fstart = preorder_cnt;
++ bstart = preorder_cnt - 1;
+ do {
+ if (cnt > 0 && !(p->bpf.flags[atype] & BPF_F_ALLOW_MULTI))
+ continue;
+
++ init_bstart = bstart;
+ hlist_for_each_entry(pl, &p->bpf.progs[atype], node) {
+ if (!prog_list_prog(pl))
+ continue;
+
+- item = &progs->items[cnt];
++ if (pl->flags & BPF_F_PREORDER) {
++ item = &progs->items[bstart];
++ bstart--;
++ } else {
++ item = &progs->items[fstart];
++ fstart++;
++ }
+ item->prog = prog_list_prog(pl);
+ bpf_cgroup_storages_assign(item->cgroup_storage,
+ pl->storage);
+ cnt++;
+ }
++
++ /* reverse pre-ordering progs at this cgroup level */
++ for (i = bstart + 1, j = init_bstart; i < j; i++, j--)
++ swap(progs->items[i], progs->items[j]);
++
+ } while ((p = cgroup_parent(p)));
+
+ *array = progs;
+@@ -663,7 +679,7 @@ static int __cgroup_bpf_attach(struct cgroup *cgrp,
+ */
+ return -EPERM;
+
+- if (prog_list_length(progs) >= BPF_CGROUP_MAX_PROGS)
++ if (prog_list_length(progs, NULL) >= BPF_CGROUP_MAX_PROGS)
+ return -E2BIG;
+
+ pl = find_attach_entry(progs, prog, link, replace_prog,
+@@ -698,6 +714,7 @@ static int __cgroup_bpf_attach(struct cgroup *cgrp,
+
+ pl->prog = prog;
+ pl->link = link;
++ pl->flags = flags;
+ bpf_cgroup_storages_assign(pl->storage, storage);
+ cgrp->bpf.flags[atype] = saved_flags;
+
+@@ -1073,7 +1090,7 @@ static int __cgroup_bpf_query(struct cgroup *cgrp, const union bpf_attr *attr,
+ lockdep_is_held(&cgroup_mutex));
+ total_cnt += bpf_prog_array_length(effective);
+ } else {
+- total_cnt += prog_list_length(&cgrp->bpf.progs[atype]);
++ total_cnt += prog_list_length(&cgrp->bpf.progs[atype], NULL);
+ }
+ }
+
+@@ -1105,7 +1122,7 @@ static int __cgroup_bpf_query(struct cgroup *cgrp, const union bpf_attr *attr,
+ u32 id;
+
+ progs = &cgrp->bpf.progs[atype];
+- cnt = min_t(int, prog_list_length(progs), total_cnt);
++ cnt = min_t(int, prog_list_length(progs, NULL), total_cnt);
+ i = 0;
+ hlist_for_each_entry(pl, progs, node) {
+ prog = prog_list_prog(pl);
+diff --git a/kernel/bpf/disasm.c b/kernel/bpf/disasm.c
+index 309c4aa1b026ab..c235acbd650955 100644
+--- a/kernel/bpf/disasm.c
++++ b/kernel/bpf/disasm.c
+@@ -202,7 +202,7 @@ void print_bpf_insn(const struct bpf_insn_cbs *cbs,
+ insn->dst_reg, class == BPF_ALU ? 'w' : 'r',
+ insn->dst_reg);
+ } else if (is_addr_space_cast(insn)) {
+- verbose(cbs->private_data, "(%02x) r%d = addr_space_cast(r%d, %d, %d)\n",
++ verbose(cbs->private_data, "(%02x) r%d = addr_space_cast(r%d, %u, %u)\n",
+ insn->code, insn->dst_reg,
+ insn->src_reg, ((u32)insn->imm) >> 16, (u16)insn->imm);
+ } else if (is_mov_percpu_addr(insn)) {
+@@ -369,7 +369,7 @@ void print_bpf_insn(const struct bpf_insn_cbs *cbs,
+ insn->code, class == BPF_JMP32 ? 'w' : 'r',
+ insn->dst_reg,
+ bpf_jmp_string[BPF_OP(insn->code) >> 4],
+- insn->imm, insn->off);
++ (u32)insn->imm, insn->off);
+ }
+ } else {
+ verbose(cbs->private_data, "(%02x) %s\n",
+diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
+index c308300fc72f67..c620ffb2b66297 100644
+--- a/kernel/bpf/hashtab.c
++++ b/kernel/bpf/hashtab.c
+@@ -2224,7 +2224,7 @@ static long bpf_for_each_hash_elem(struct bpf_map *map, bpf_callback_t callback_
+ b = &htab->buckets[i];
+ rcu_read_lock();
+ head = &b->head;
+- hlist_nulls_for_each_entry_rcu(elem, n, head, hash_node) {
++ hlist_nulls_for_each_entry_safe(elem, n, head, hash_node) {
+ key = elem->key;
+ if (is_percpu) {
+ /* current cpu value for percpu map */
+diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
+index 1c2caae0d89460..32a8d5fd98612f 100644
+--- a/kernel/bpf/syscall.c
++++ b/kernel/bpf/syscall.c
+@@ -2314,6 +2314,7 @@ static void __bpf_prog_put_noref(struct bpf_prog *prog, bool deferred)
+ kvfree(prog->aux->jited_linfo);
+ kvfree(prog->aux->linfo);
+ kfree(prog->aux->kfunc_tab);
++ kfree(prog->aux->ctx_arg_info);
+ if (prog->aux->attach_btf)
+ btf_put(prog->aux->attach_btf);
+
+@@ -4169,7 +4170,8 @@ static int bpf_prog_attach_check_attach_type(const struct bpf_prog *prog,
+ #define BPF_F_ATTACH_MASK_BASE \
+ (BPF_F_ALLOW_OVERRIDE | \
+ BPF_F_ALLOW_MULTI | \
+- BPF_F_REPLACE)
++ BPF_F_REPLACE | \
++ BPF_F_PREORDER)
+
+ #define BPF_F_ATTACH_MASK_MPROG \
+ (BPF_F_REPLACE | \
+@@ -4733,6 +4735,8 @@ static int bpf_prog_get_info_by_fd(struct file *file,
+ info.recursion_misses = stats.misses;
+
+ info.verified_insns = prog->aux->verified_insns;
++ if (prog->aux->btf)
++ info.btf_id = btf_obj_id(prog->aux->btf);
+
+ if (!bpf_capable()) {
+ info.jited_prog_len = 0;
+@@ -4879,8 +4883,6 @@ static int bpf_prog_get_info_by_fd(struct file *file,
+ }
+ }
+
+- if (prog->aux->btf)
+- info.btf_id = btf_obj_id(prog->aux->btf);
+ info.attach_btf_id = prog->aux->attach_btf_id;
+ if (attach_btf)
+ info.attach_btf_obj_id = btf_obj_id(attach_btf);
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index db95b76f5c1397..1841467c4f2e54 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -1661,6 +1661,7 @@ static int copy_verifier_state(struct bpf_verifier_state *dst_state,
+ dst_state->callback_unroll_depth = src->callback_unroll_depth;
+ dst_state->used_as_loop_entry = src->used_as_loop_entry;
+ dst_state->may_goto_depth = src->may_goto_depth;
++ dst_state->loop_entry = src->loop_entry;
+ for (i = 0; i <= src->curframe; i++) {
+ dst = dst_state->frame[i];
+ if (!dst) {
+@@ -3206,6 +3207,21 @@ bpf_jit_find_kfunc_model(const struct bpf_prog *prog,
+ return res ? &res->func_model : NULL;
+ }
+
++static int add_kfunc_in_insns(struct bpf_verifier_env *env,
++ struct bpf_insn *insn, int cnt)
++{
++ int i, ret;
++
++ for (i = 0; i < cnt; i++, insn++) {
++ if (bpf_pseudo_kfunc_call(insn)) {
++ ret = add_kfunc_call(env, insn->imm, insn->off);
++ if (ret < 0)
++ return ret;
++ }
++ }
++ return 0;
++}
++
+ static int add_subprog_and_kfunc(struct bpf_verifier_env *env)
+ {
+ struct bpf_subprog_info *subprog = env->subprog_info;
+@@ -17815,12 +17831,16 @@ static void clean_verifier_state(struct bpf_verifier_env *env,
+ static void clean_live_states(struct bpf_verifier_env *env, int insn,
+ struct bpf_verifier_state *cur)
+ {
++ struct bpf_verifier_state *loop_entry;
+ struct bpf_verifier_state_list *sl;
+
+ sl = *explored_state(env, insn);
+ while (sl) {
+ if (sl->state.branches)
+ goto next;
++ loop_entry = get_loop_entry(&sl->state);
++ if (loop_entry && loop_entry->branches)
++ goto next;
+ if (sl->state.insn_idx != insn ||
+ !same_callsites(&sl->state, cur))
+ goto next;
+@@ -19245,6 +19265,10 @@ static int do_check(struct bpf_verifier_env *env)
+ return err;
+ break;
+ } else {
++ if (WARN_ON_ONCE(env->cur_state->loop_entry)) {
++ verbose(env, "verifier bug: env->cur_state->loop_entry != NULL\n");
++ return -EFAULT;
++ }
+ do_print_state = true;
+ continue;
+ }
+@@ -20334,7 +20358,7 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env)
+ {
+ struct bpf_subprog_info *subprogs = env->subprog_info;
+ const struct bpf_verifier_ops *ops = env->ops;
+- int i, cnt, size, ctx_field_size, delta = 0, epilogue_cnt = 0;
++ int i, cnt, size, ctx_field_size, ret, delta = 0, epilogue_cnt = 0;
+ const int insn_cnt = env->prog->len;
+ struct bpf_insn *epilogue_buf = env->epilogue_buf;
+ struct bpf_insn *insn_buf = env->insn_buf;
+@@ -20363,6 +20387,10 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env)
+ return -ENOMEM;
+ env->prog = new_prog;
+ delta += cnt - 1;
++
++ ret = add_kfunc_in_insns(env, epilogue_buf, epilogue_cnt - 1);
++ if (ret < 0)
++ return ret;
+ }
+ }
+
+@@ -20383,6 +20411,10 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env)
+
+ env->prog = new_prog;
+ delta += cnt - 1;
++
++ ret = add_kfunc_in_insns(env, insn_buf, cnt - 1);
++ if (ret < 0)
++ return ret;
+ }
+ }
+
+@@ -22399,6 +22431,15 @@ static void print_verification_stats(struct bpf_verifier_env *env)
+ env->peak_states, env->longest_mark_read_walk);
+ }
+
++int bpf_prog_ctx_arg_info_init(struct bpf_prog *prog,
++ const struct bpf_ctx_arg_aux *info, u32 cnt)
++{
++ prog->aux->ctx_arg_info = kmemdup_array(info, cnt, sizeof(*info), GFP_KERNEL);
++ prog->aux->ctx_arg_info_size = cnt;
++
++ return prog->aux->ctx_arg_info ? 0 : -ENOMEM;
++}
++
+ static int check_struct_ops_btf_id(struct bpf_verifier_env *env)
+ {
+ const struct btf_type *t, *func_proto;
+@@ -22479,17 +22520,12 @@ static int check_struct_ops_btf_id(struct bpf_verifier_env *env)
+ return -EACCES;
+ }
+
+- /* btf_ctx_access() used this to provide argument type info */
+- prog->aux->ctx_arg_info =
+- st_ops_desc->arg_info[member_idx].info;
+- prog->aux->ctx_arg_info_size =
+- st_ops_desc->arg_info[member_idx].cnt;
+-
+ prog->aux->attach_func_proto = func_proto;
+ prog->aux->attach_func_name = mname;
+ env->ops = st_ops->verifier_ops;
+
+- return 0;
++ return bpf_prog_ctx_arg_info_init(prog, st_ops_desc->arg_info[member_idx].info,
++ st_ops_desc->arg_info[member_idx].cnt);
+ }
+ #define SECURITY_PREFIX "security_"
+
+@@ -22966,9 +23002,7 @@ static int check_attach_btf_id(struct bpf_verifier_env *env)
+ prog->aux->attach_btf_trace = true;
+ return 0;
+ } else if (prog->expected_attach_type == BPF_TRACE_ITER) {
+- if (!bpf_iter_prog_supported(prog))
+- return -EINVAL;
+- return 0;
++ return bpf_iter_prog_supported(prog);
+ }
+
+ if (prog->type == BPF_PROG_TYPE_LSM) {
+diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
+index 68d58753c75c3c..660d27a0cb3d4a 100644
+--- a/kernel/cgroup/cgroup.c
++++ b/kernel/cgroup/cgroup.c
+@@ -90,7 +90,7 @@
+ DEFINE_MUTEX(cgroup_mutex);
+ DEFINE_SPINLOCK(css_set_lock);
+
+-#ifdef CONFIG_PROVE_RCU
++#if (defined CONFIG_PROVE_RCU || defined CONFIG_LOCKDEP)
+ EXPORT_SYMBOL_GPL(cgroup_mutex);
+ EXPORT_SYMBOL_GPL(css_set_lock);
+ #endif
+diff --git a/kernel/cgroup/rstat.c b/kernel/cgroup/rstat.c
+index 3e01781aeb7bd0..c4ce2f5a9745f6 100644
+--- a/kernel/cgroup/rstat.c
++++ b/kernel/cgroup/rstat.c
+@@ -323,13 +323,11 @@ static void cgroup_rstat_flush_locked(struct cgroup *cgrp)
+ rcu_read_unlock();
+ }
+
+- /* play nice and yield if necessary */
+- if (need_resched() || spin_needbreak(&cgroup_rstat_lock)) {
+- __cgroup_rstat_unlock(cgrp, cpu);
+- if (!cond_resched())
+- cpu_relax();
+- __cgroup_rstat_lock(cgrp, cpu);
+- }
++ /* play nice and avoid disabling interrupts for a long time */
++ __cgroup_rstat_unlock(cgrp, cpu);
++ if (!cond_resched())
++ cpu_relax();
++ __cgroup_rstat_lock(cgrp, cpu);
+ }
+ }
+
+diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
+index cda127027e48a7..051a32988040ff 100644
+--- a/kernel/dma/mapping.c
++++ b/kernel/dma/mapping.c
+@@ -910,6 +910,19 @@ int dma_set_coherent_mask(struct device *dev, u64 mask)
+ }
+ EXPORT_SYMBOL(dma_set_coherent_mask);
+
++static bool __dma_addressing_limited(struct device *dev)
++{
++ const struct dma_map_ops *ops = get_dma_ops(dev);
++
++ if (min_not_zero(dma_get_mask(dev), dev->bus_dma_limit) <
++ dma_get_required_mask(dev))
++ return true;
++
++ if (unlikely(ops) || use_dma_iommu(dev))
++ return false;
++ return !dma_direct_all_ram_mapped(dev);
++}
++
+ /**
+ * dma_addressing_limited - return if the device is addressing limited
+ * @dev: device to check
+@@ -920,15 +933,11 @@ EXPORT_SYMBOL(dma_set_coherent_mask);
+ */
+ bool dma_addressing_limited(struct device *dev)
+ {
+- const struct dma_map_ops *ops = get_dma_ops(dev);
+-
+- if (min_not_zero(dma_get_mask(dev), dev->bus_dma_limit) <
+- dma_get_required_mask(dev))
+- return true;
+-
+- if (unlikely(ops) || use_dma_iommu(dev))
++ if (!__dma_addressing_limited(dev))
+ return false;
+- return !dma_direct_all_ram_mapped(dev);
++
++ dev_dbg(dev, "device is DMA addressing limited\n");
++ return true;
+ }
+ EXPORT_SYMBOL_GPL(dma_addressing_limited);
+
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index 93ce810384c92c..6fa70b3826d0ec 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -1191,6 +1191,12 @@ static void perf_assert_pmu_disabled(struct pmu *pmu)
+ WARN_ON_ONCE(*this_cpu_ptr(pmu->pmu_disable_count) == 0);
+ }
+
++static inline void perf_pmu_read(struct perf_event *event)
++{
++ if (event->state == PERF_EVENT_STATE_ACTIVE)
++ event->pmu->read(event);
++}
++
+ static void get_ctx(struct perf_event_context *ctx)
+ {
+ refcount_inc(&ctx->refcount);
+@@ -3478,8 +3484,7 @@ static void __perf_event_sync_stat(struct perf_event *event,
+ * we know the event must be on the current CPU, therefore we
+ * don't need to use it.
+ */
+- if (event->state == PERF_EVENT_STATE_ACTIVE)
+- event->pmu->read(event);
++ perf_pmu_read(event);
+
+ perf_event_update_time(event);
+
+@@ -4625,15 +4630,8 @@ static void __perf_event_read(void *info)
+
+ pmu->read(event);
+
+- for_each_sibling_event(sub, event) {
+- if (sub->state == PERF_EVENT_STATE_ACTIVE) {
+- /*
+- * Use sibling's PMU rather than @event's since
+- * sibling could be on different (eg: software) PMU.
+- */
+- sub->pmu->read(sub);
+- }
+- }
++ for_each_sibling_event(sub, event)
++ perf_pmu_read(sub);
+
+ data->ret = pmu->commit_txn(pmu);
+
+@@ -6834,7 +6832,7 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
+ if (!ret)
+ ret = map_range(rb, vma);
+
+- if (event->pmu->event_mapped)
++ if (!ret && event->pmu->event_mapped)
+ event->pmu->event_mapped(event, vma->vm_mm);
+
+ return ret;
+@@ -7451,9 +7449,8 @@ static void perf_output_read_group(struct perf_output_handle *handle,
+ if (read_format & PERF_FORMAT_TOTAL_TIME_RUNNING)
+ values[n++] = running;
+
+- if ((leader != event) &&
+- (leader->state == PERF_EVENT_STATE_ACTIVE))
+- leader->pmu->read(leader);
++ if ((leader != event) && !handle->skip_read)
++ perf_pmu_read(leader);
+
+ values[n++] = perf_event_count(leader, self);
+ if (read_format & PERF_FORMAT_ID)
+@@ -7466,9 +7463,8 @@ static void perf_output_read_group(struct perf_output_handle *handle,
+ for_each_sibling_event(sub, leader) {
+ n = 0;
+
+- if ((sub != event) &&
+- (sub->state == PERF_EVENT_STATE_ACTIVE))
+- sub->pmu->read(sub);
++ if ((sub != event) && !handle->skip_read)
++ perf_pmu_read(sub);
+
+ values[n++] = perf_event_count(sub, self);
+ if (read_format & PERF_FORMAT_ID)
+@@ -7527,6 +7523,9 @@ void perf_output_sample(struct perf_output_handle *handle,
+ {
+ u64 sample_type = data->type;
+
++ if (data->sample_flags & PERF_SAMPLE_READ)
++ handle->skip_read = 1;
++
+ perf_output_put(handle, *header);
+
+ if (sample_type & PERF_SAMPLE_IDENTIFIER)
+@@ -12020,40 +12019,51 @@ static int perf_try_init_event(struct pmu *pmu, struct perf_event *event)
+ if (ctx)
+ perf_event_ctx_unlock(event->group_leader, ctx);
+
+- if (!ret) {
+- if (!(pmu->capabilities & PERF_PMU_CAP_EXTENDED_REGS) &&
+- has_extended_regs(event))
+- ret = -EOPNOTSUPP;
++ if (ret)
++ goto err_pmu;
+
+- if (pmu->capabilities & PERF_PMU_CAP_NO_EXCLUDE &&
+- event_has_any_exclude_flag(event))
+- ret = -EINVAL;
++ if (!(pmu->capabilities & PERF_PMU_CAP_EXTENDED_REGS) &&
++ has_extended_regs(event)) {
++ ret = -EOPNOTSUPP;
++ goto err_destroy;
++ }
+
+- if (pmu->scope != PERF_PMU_SCOPE_NONE && event->cpu >= 0) {
+- const struct cpumask *cpumask = perf_scope_cpu_topology_cpumask(pmu->scope, event->cpu);
+- struct cpumask *pmu_cpumask = perf_scope_cpumask(pmu->scope);
+- int cpu;
+-
+- if (pmu_cpumask && cpumask) {
+- cpu = cpumask_any_and(pmu_cpumask, cpumask);
+- if (cpu >= nr_cpu_ids)
+- ret = -ENODEV;
+- else
+- event->event_caps |= PERF_EV_CAP_READ_SCOPE;
+- } else {
+- ret = -ENODEV;
+- }
+- }
++ if (pmu->capabilities & PERF_PMU_CAP_NO_EXCLUDE &&
++ event_has_any_exclude_flag(event)) {
++ ret = -EINVAL;
++ goto err_destroy;
++ }
++
++ if (pmu->scope != PERF_PMU_SCOPE_NONE && event->cpu >= 0) {
++ const struct cpumask *cpumask;
++ struct cpumask *pmu_cpumask;
++ int cpu;
+
+- if (ret && event->destroy)
+- event->destroy(event);
++ cpumask = perf_scope_cpu_topology_cpumask(pmu->scope, event->cpu);
++ pmu_cpumask = perf_scope_cpumask(pmu->scope);
++
++ ret = -ENODEV;
++ if (!pmu_cpumask || !cpumask)
++ goto err_destroy;
++
++ cpu = cpumask_any_and(pmu_cpumask, cpumask);
++ if (cpu >= nr_cpu_ids)
++ goto err_destroy;
++
++ event->event_caps |= PERF_EV_CAP_READ_SCOPE;
+ }
+
+- if (ret) {
+- event->pmu = NULL;
+- module_put(pmu->module);
++ return 0;
++
++err_destroy:
++ if (event->destroy) {
++ event->destroy(event);
++ event->destroy = NULL;
+ }
+
++err_pmu:
++ event->pmu = NULL;
++ module_put(pmu->module);
+ return ret;
+ }
+
+diff --git a/kernel/events/hw_breakpoint.c b/kernel/events/hw_breakpoint.c
+index bc4a61029b6dca..8ec2cb68890389 100644
+--- a/kernel/events/hw_breakpoint.c
++++ b/kernel/events/hw_breakpoint.c
+@@ -950,9 +950,10 @@ static int hw_breakpoint_event_init(struct perf_event *bp)
+ return -ENOENT;
+
+ /*
+- * no branch sampling for breakpoint events
++ * Check if breakpoint type is supported before proceeding.
++ * Also, no branch sampling for breakpoint events.
+ */
+- if (has_branch_stack(bp))
++ if (!hw_breakpoint_slots_cached(find_slot_idx(bp->attr.bp_type)) || has_branch_stack(bp))
+ return -EOPNOTSUPP;
+
+ err = register_perf_hw_breakpoint(bp);
+diff --git a/kernel/events/ring_buffer.c b/kernel/events/ring_buffer.c
+index 09459647cb8221..5130b119d0ae04 100644
+--- a/kernel/events/ring_buffer.c
++++ b/kernel/events/ring_buffer.c
+@@ -185,6 +185,7 @@ __perf_output_begin(struct perf_output_handle *handle,
+
+ handle->rb = rb;
+ handle->event = event;
++ handle->flags = 0;
+
+ have_lost = local_read(&rb->lost);
+ if (unlikely(have_lost)) {
+diff --git a/kernel/exit.c b/kernel/exit.c
+index 3485e5fc499e46..c195f1d2b908a4 100644
+--- a/kernel/exit.c
++++ b/kernel/exit.c
+@@ -741,10 +741,10 @@ static void exit_notify(struct task_struct *tsk, int group_dead)
+
+ tsk->exit_state = EXIT_ZOMBIE;
+ /*
+- * sub-thread or delay_group_leader(), wake up the
+- * PIDFD_THREAD waiters.
++ * Ignore thread-group leaders that exited before all
++ * subthreads did.
+ */
+- if (!thread_group_empty(tsk))
++ if (!delay_group_leader(tsk))
+ do_notify_pidfd(tsk);
+
+ if (unlikely(tsk->ptrace)) {
+diff --git a/kernel/fork.c b/kernel/fork.c
+index ca2ca3884f763d..5e640468baff13 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -504,10 +504,6 @@ struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
+ vma_numab_state_init(new);
+ dup_anon_vma_name(orig, new);
+
+- /* track_pfn_copy() will later take care of copying internal state. */
+- if (unlikely(new->vm_flags & VM_PFNMAP))
+- untrack_pfn_clear(new);
+-
+ return new;
+ }
+
+@@ -698,6 +694,11 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
+ tmp = vm_area_dup(mpnt);
+ if (!tmp)
+ goto fail_nomem;
++
++ /* track_pfn_copy() will later take care of copying internal state. */
++ if (unlikely(tmp->vm_flags & VM_PFNMAP))
++ untrack_pfn_clear(tmp);
++
+ retval = vma_dup_policy(mpnt, tmp);
+ if (retval)
+ goto fail_nomem_policy;
+diff --git a/kernel/module/main.c b/kernel/module/main.c
+index 1fb9ad289a6f8f..738ecfa7e6657c 100644
+--- a/kernel/module/main.c
++++ b/kernel/module/main.c
+@@ -2852,6 +2852,7 @@ static void module_deallocate(struct module *mod, struct load_info *info)
+ {
+ percpu_modfree(mod);
+ module_arch_freeing_init(mod);
++ codetag_free_module_sections(mod);
+
+ free_mod_mem(mod);
+ }
+diff --git a/kernel/padata.c b/kernel/padata.c
+index 418987056340ea..83e61e1469e9c5 100644
+--- a/kernel/padata.c
++++ b/kernel/padata.c
+@@ -358,7 +358,8 @@ static void padata_reorder(struct parallel_data *pd)
+ * To avoid UAF issue, add pd ref here, and put pd ref after reorder_work finish.
+ */
+ padata_get_pd(pd);
+- queue_work(pinst->serial_wq, &pd->reorder_work);
++ if (!queue_work(pinst->serial_wq, &pd->reorder_work))
++ padata_put_pd(pd);
+ }
+ }
+
+diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
+index 057db78876cd98..13d4210d8862f1 100644
+--- a/kernel/printk/printk.c
++++ b/kernel/printk/printk.c
+@@ -3340,7 +3340,12 @@ void console_unblank(void)
+ */
+ cookie = console_srcu_read_lock();
+ for_each_console_srcu(c) {
+- if ((console_srcu_read_flags(c) & CON_ENABLED) && c->unblank) {
++ short flags = console_srcu_read_flags(c);
++
++ if (flags & CON_SUSPENDED)
++ continue;
++
++ if ((flags & CON_ENABLED) && c->unblank) {
+ found_unblank = true;
+ break;
+ }
+@@ -3377,7 +3382,12 @@ void console_unblank(void)
+
+ cookie = console_srcu_read_lock();
+ for_each_console_srcu(c) {
+- if ((console_srcu_read_flags(c) & CON_ENABLED) && c->unblank)
++ short flags = console_srcu_read_flags(c);
++
++ if (flags & CON_SUSPENDED)
++ continue;
++
++ if ((flags & CON_ENABLED) && c->unblank)
+ c->unblank();
+ }
+ console_srcu_read_unlock(cookie);
+diff --git a/kernel/rcu/rcu.h b/kernel/rcu/rcu.h
+index feb3ac1dc5d590..f87c9d6d36fcbe 100644
+--- a/kernel/rcu/rcu.h
++++ b/kernel/rcu/rcu.h
+@@ -162,7 +162,7 @@ static inline bool rcu_seq_done_exact(unsigned long *sp, unsigned long s)
+ {
+ unsigned long cur_s = READ_ONCE(*sp);
+
+- return ULONG_CMP_GE(cur_s, s) || ULONG_CMP_LT(cur_s, s - (2 * RCU_SEQ_STATE_MASK + 1));
++ return ULONG_CMP_GE(cur_s, s) || ULONG_CMP_LT(cur_s, s - (3 * RCU_SEQ_STATE_MASK + 1));
+ }
+
+ /*
+diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
+index 475f31deed1418..b7bf9db9bb4610 100644
+--- a/kernel/rcu/tree.c
++++ b/kernel/rcu/tree.c
+@@ -1801,10 +1801,14 @@ static noinline_for_stack bool rcu_gp_init(void)
+
+ /* Advance to a new grace period and initialize state. */
+ record_gp_stall_check_time();
++ /*
++ * A new wait segment must be started before gp_seq advanced, so
++ * that previous gp waiters won't observe the new gp_seq.
++ */
++ start_new_poll = rcu_sr_normal_gp_init();
+ /* Record GP times before starting GP, hence rcu_seq_start(). */
+ rcu_seq_start(&rcu_state.gp_seq);
+ ASSERT_EXCLUSIVE_WRITER(rcu_state.gp_seq);
+- start_new_poll = rcu_sr_normal_gp_init();
+ trace_rcu_grace_period(rcu_state.name, rcu_state.gp_seq, TPS("start"));
+ rcu_poll_gp_seq_start(&rcu_state.gp_seq_polled_snap);
+ raw_spin_unlock_irq_rcu_node(rnp);
+@@ -3357,14 +3361,17 @@ EXPORT_SYMBOL_GPL(get_state_synchronize_rcu);
+ */
+ void get_state_synchronize_rcu_full(struct rcu_gp_oldstate *rgosp)
+ {
+- struct rcu_node *rnp = rcu_get_root();
+-
+ /*
+ * Any prior manipulation of RCU-protected data must happen
+ * before the loads from ->gp_seq and ->expedited_sequence.
+ */
+ smp_mb(); /* ^^^ */
+- rgosp->rgos_norm = rcu_seq_snap(&rnp->gp_seq);
++
++ // Yes, rcu_state.gp_seq, not rnp_root->gp_seq, the latter's use
++ // in poll_state_synchronize_rcu_full() notwithstanding. Use of
++ // the latter here would result in too-short grace periods due to
++ // interactions with newly onlined CPUs.
++ rgosp->rgos_norm = rcu_seq_snap(&rcu_state.gp_seq);
+ rgosp->rgos_exp = rcu_seq_snap(&rcu_state.expedited_sequence);
+ }
+ EXPORT_SYMBOL_GPL(get_state_synchronize_rcu_full);
+diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
+index 3600152b858e88..3c0bbbbb686fe2 100644
+--- a/kernel/rcu/tree_plugin.h
++++ b/kernel/rcu/tree_plugin.h
+@@ -833,8 +833,17 @@ void rcu_read_unlock_strict(void)
+ {
+ struct rcu_data *rdp;
+
+- if (irqs_disabled() || preempt_count() || !rcu_state.gp_kthread)
++ if (irqs_disabled() || in_atomic_preempt_off() || !rcu_state.gp_kthread)
+ return;
++
++ /*
++ * rcu_report_qs_rdp() can only be invoked with a stable rdp and
++ * from the local CPU.
++ *
++ * The in_atomic_preempt_off() check ensures that we come here holding
++ * the last preempt_count (which will get dropped once we return to
++ * __rcu_read_unlock().
++ */
+ rdp = this_cpu_ptr(&rcu_data);
+ rdp->cpu_no_qs.b.norm = false;
+ rcu_report_qs_rdp(rdp);
+@@ -975,13 +984,16 @@ static void rcu_preempt_check_blocked_tasks(struct rcu_node *rnp)
+ */
+ static void rcu_flavor_sched_clock_irq(int user)
+ {
+- if (user || rcu_is_cpu_rrupt_from_idle()) {
++ if (user || rcu_is_cpu_rrupt_from_idle() ||
++ (IS_ENABLED(CONFIG_PREEMPT_COUNT) &&
++ (preempt_count() == HARDIRQ_OFFSET))) {
+
+ /*
+ * Get here if this CPU took its interrupt from user
+- * mode or from the idle loop, and if this is not a
+- * nested interrupt. In this case, the CPU is in
+- * a quiescent state, so note it.
++ * mode, from the idle loop without this being a nested
++ * interrupt, or while not holding the task preempt count
++ * (with PREEMPT_COUNT=y). In this case, the CPU is in a
++ * quiescent state, so note it.
+ *
+ * No memory barrier is required here because rcu_qs()
+ * references only CPU-local variables that other CPUs
+diff --git a/kernel/rseq.c b/kernel/rseq.c
+index a7d81229eda04b..b7a1ec327e8117 100644
+--- a/kernel/rseq.c
++++ b/kernel/rseq.c
+@@ -236,6 +236,29 @@ static int rseq_reset_rseq_cpu_node_id(struct task_struct *t)
+ return -EFAULT;
+ }
+
++/*
++ * Get the user-space pointer value stored in the 'rseq_cs' field.
++ */
++static int rseq_get_rseq_cs_ptr_val(struct rseq __user *rseq, u64 *rseq_cs)
++{
++ if (!rseq_cs)
++ return -EFAULT;
++
++#ifdef CONFIG_64BIT
++ if (get_user(*rseq_cs, &rseq->rseq_cs))
++ return -EFAULT;
++#else
++ if (copy_from_user(rseq_cs, &rseq->rseq_cs, sizeof(*rseq_cs)))
++ return -EFAULT;
++#endif
++
++ return 0;
++}
++
++/*
++ * If the rseq_cs field of 'struct rseq' contains a valid pointer to
++ * user-space, copy 'struct rseq_cs' from user-space and validate its fields.
++ */
+ static int rseq_get_rseq_cs(struct task_struct *t, struct rseq_cs *rseq_cs)
+ {
+ struct rseq_cs __user *urseq_cs;
+@@ -244,17 +267,16 @@ static int rseq_get_rseq_cs(struct task_struct *t, struct rseq_cs *rseq_cs)
+ u32 sig;
+ int ret;
+
+-#ifdef CONFIG_64BIT
+- if (get_user(ptr, &t->rseq->rseq_cs))
+- return -EFAULT;
+-#else
+- if (copy_from_user(&ptr, &t->rseq->rseq_cs, sizeof(ptr)))
+- return -EFAULT;
+-#endif
++ ret = rseq_get_rseq_cs_ptr_val(t->rseq, &ptr);
++ if (ret)
++ return ret;
++
++ /* If the rseq_cs pointer is NULL, return a cleared struct rseq_cs. */
+ if (!ptr) {
+ memset(rseq_cs, 0, sizeof(*rseq_cs));
+ return 0;
+ }
++ /* Check that the pointer value fits in the user-space process space. */
+ if (ptr >= TASK_SIZE)
+ return -EINVAL;
+ urseq_cs = (struct rseq_cs __user *)(unsigned long)ptr;
+@@ -330,7 +352,7 @@ static int rseq_need_restart(struct task_struct *t, u32 cs_flags)
+ return !!event_mask;
+ }
+
+-static int clear_rseq_cs(struct task_struct *t)
++static int clear_rseq_cs(struct rseq __user *rseq)
+ {
+ /*
+ * The rseq_cs field is set to NULL on preemption or signal
+@@ -341,9 +363,9 @@ static int clear_rseq_cs(struct task_struct *t)
+ * Set rseq_cs to NULL.
+ */
+ #ifdef CONFIG_64BIT
+- return put_user(0UL, &t->rseq->rseq_cs);
++ return put_user(0UL, &rseq->rseq_cs);
+ #else
+- if (clear_user(&t->rseq->rseq_cs, sizeof(t->rseq->rseq_cs)))
++ if (clear_user(&rseq->rseq_cs, sizeof(rseq->rseq_cs)))
+ return -EFAULT;
+ return 0;
+ #endif
+@@ -375,11 +397,11 @@ static int rseq_ip_fixup(struct pt_regs *regs)
+ * Clear the rseq_cs pointer and return.
+ */
+ if (!in_rseq_cs(ip, &rseq_cs))
+- return clear_rseq_cs(t);
++ return clear_rseq_cs(t->rseq);
+ ret = rseq_need_restart(t, rseq_cs.flags);
+ if (ret <= 0)
+ return ret;
+- ret = clear_rseq_cs(t);
++ ret = clear_rseq_cs(t->rseq);
+ if (ret)
+ return ret;
+ trace_rseq_ip_fixup(ip, rseq_cs.start_ip, rseq_cs.post_commit_offset,
+@@ -453,6 +475,7 @@ SYSCALL_DEFINE4(rseq, struct rseq __user *, rseq, u32, rseq_len,
+ int, flags, u32, sig)
+ {
+ int ret;
++ u64 rseq_cs;
+
+ if (flags & RSEQ_FLAG_UNREGISTER) {
+ if (flags & ~RSEQ_FLAG_UNREGISTER)
+@@ -507,6 +530,19 @@ SYSCALL_DEFINE4(rseq, struct rseq __user *, rseq, u32, rseq_len,
+ return -EINVAL;
+ if (!access_ok(rseq, rseq_len))
+ return -EFAULT;
++
++ /*
++ * If the rseq_cs pointer is non-NULL on registration, clear it to
++ * avoid a potential segfault on return to user-space. The proper thing
++ * to do would have been to fail the registration but this would break
++ * older libcs that reuse the rseq area for new threads without
++ * clearing the fields.
++ */
++ if (rseq_get_rseq_cs_ptr_val(rseq, &rseq_cs))
++ return -EFAULT;
++ if (rseq_cs && clear_rseq_cs(rseq))
++ return -EFAULT;
++
+ #ifdef CONFIG_DEBUG_RSEQ
+ /*
+ * Initialize the in-kernel rseq fields copy for validation of
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 3d9b68a347b764..eb11650160f7ed 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -74,10 +74,10 @@ unsigned int sysctl_sched_tunable_scaling = SCHED_TUNABLESCALING_LOG;
+ /*
+ * Minimal preemption granularity for CPU-bound tasks:
+ *
+- * (default: 0.75 msec * (1 + ilog(ncpus)), units: nanoseconds)
++ * (default: 0.70 msec * (1 + ilog(ncpus)), units: nanoseconds)
+ */
+-unsigned int sysctl_sched_base_slice = 750000ULL;
+-static unsigned int normalized_sysctl_sched_base_slice = 750000ULL;
++unsigned int sysctl_sched_base_slice = 700000ULL;
++static unsigned int normalized_sysctl_sched_base_slice = 700000ULL;
+
+ const_debug unsigned int sysctl_sched_migration_cost = 500000UL;
+
+diff --git a/kernel/signal.c b/kernel/signal.c
+index 875e97f6205a2c..b2e5c90f29602a 100644
+--- a/kernel/signal.c
++++ b/kernel/signal.c
+@@ -2180,8 +2180,7 @@ bool do_notify_parent(struct task_struct *tsk, int sig)
+ WARN_ON_ONCE(!tsk->ptrace &&
+ (tsk->group_leader != tsk || !thread_group_empty(tsk)));
+ /*
+- * tsk is a group leader and has no threads, wake up the
+- * non-PIDFD_THREAD waiters.
++ * Notify for thread-group leaders without subthreads.
+ */
+ if (thread_group_empty(tsk))
+ do_notify_pidfd(tsk);
+diff --git a/kernel/softirq.c b/kernel/softirq.c
+index 4dae6ac2e83fb2..513b1945987cc6 100644
+--- a/kernel/softirq.c
++++ b/kernel/softirq.c
+@@ -126,6 +126,18 @@ static DEFINE_PER_CPU(struct softirq_ctrl, softirq_ctrl) = {
+ .lock = INIT_LOCAL_LOCK(softirq_ctrl.lock),
+ };
+
++#ifdef CONFIG_DEBUG_LOCK_ALLOC
++static struct lock_class_key bh_lock_key;
++struct lockdep_map bh_lock_map = {
++ .name = "local_bh",
++ .key = &bh_lock_key,
++ .wait_type_outer = LD_WAIT_FREE,
++ .wait_type_inner = LD_WAIT_CONFIG, /* PREEMPT_RT makes BH preemptible. */
++ .lock_type = LD_LOCK_PERCPU,
++};
++EXPORT_SYMBOL_GPL(bh_lock_map);
++#endif
++
+ /**
+ * local_bh_blocked() - Check for idle whether BH processing is blocked
+ *
+@@ -148,6 +160,8 @@ void __local_bh_disable_ip(unsigned long ip, unsigned int cnt)
+
+ WARN_ON_ONCE(in_hardirq());
+
++ lock_map_acquire_read(&bh_lock_map);
++
+ /* First entry of a task into a BH disabled section? */
+ if (!current->softirq_disable_cnt) {
+ if (preemptible()) {
+@@ -211,6 +225,8 @@ void __local_bh_enable_ip(unsigned long ip, unsigned int cnt)
+ WARN_ON_ONCE(in_hardirq());
+ lockdep_assert_irqs_enabled();
+
++ lock_map_release(&bh_lock_map);
++
+ local_irq_save(flags);
+ curcnt = __this_cpu_read(softirq_ctrl.cnt);
+
+@@ -261,6 +277,8 @@ static inline void ksoftirqd_run_begin(void)
+ /* Counterpart to ksoftirqd_run_begin() */
+ static inline void ksoftirqd_run_end(void)
+ {
++ /* pairs with the lock_map_acquire_read() in ksoftirqd_run_begin() */
++ lock_map_release(&bh_lock_map);
+ __local_bh_enable(SOFTIRQ_OFFSET, true);
+ WARN_ON_ONCE(in_interrupt());
+ local_irq_enable();
+diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
+index deb1aa32814e3b..453dc76c934848 100644
+--- a/kernel/time/hrtimer.c
++++ b/kernel/time/hrtimer.c
+@@ -117,16 +117,6 @@ DEFINE_PER_CPU(struct hrtimer_cpu_base, hrtimer_bases) =
+ .csd = CSD_INIT(retrigger_next_event, NULL)
+ };
+
+-static const int hrtimer_clock_to_base_table[MAX_CLOCKS] = {
+- /* Make sure we catch unsupported clockids */
+- [0 ... MAX_CLOCKS - 1] = HRTIMER_MAX_CLOCK_BASES,
+-
+- [CLOCK_REALTIME] = HRTIMER_BASE_REALTIME,
+- [CLOCK_MONOTONIC] = HRTIMER_BASE_MONOTONIC,
+- [CLOCK_BOOTTIME] = HRTIMER_BASE_BOOTTIME,
+- [CLOCK_TAI] = HRTIMER_BASE_TAI,
+-};
+-
+ static inline bool hrtimer_base_is_online(struct hrtimer_cpu_base *base)
+ {
+ if (!IS_ENABLED(CONFIG_HOTPLUG_CPU))
+@@ -1587,14 +1577,19 @@ u64 hrtimer_next_event_without(const struct hrtimer *exclude)
+
+ static inline int hrtimer_clockid_to_base(clockid_t clock_id)
+ {
+- if (likely(clock_id < MAX_CLOCKS)) {
+- int base = hrtimer_clock_to_base_table[clock_id];
+-
+- if (likely(base != HRTIMER_MAX_CLOCK_BASES))
+- return base;
++ switch (clock_id) {
++ case CLOCK_REALTIME:
++ return HRTIMER_BASE_REALTIME;
++ case CLOCK_MONOTONIC:
++ return HRTIMER_BASE_MONOTONIC;
++ case CLOCK_BOOTTIME:
++ return HRTIMER_BASE_BOOTTIME;
++ case CLOCK_TAI:
++ return HRTIMER_BASE_TAI;
++ default:
++ WARN(1, "Invalid clockid %d. Using MONOTONIC\n", clock_id);
++ return HRTIMER_BASE_MONOTONIC;
+ }
+- WARN(1, "Invalid clockid %d. Using MONOTONIC\n", clock_id);
+- return HRTIMER_BASE_MONOTONIC;
+ }
+
+ static enum hrtimer_restart hrtimer_dummy_timeout(struct hrtimer *unused)
+diff --git a/kernel/time/posix-timers.c b/kernel/time/posix-timers.c
+index 1b675aee99a980..43b08a04898a87 100644
+--- a/kernel/time/posix-timers.c
++++ b/kernel/time/posix-timers.c
+@@ -118,6 +118,7 @@ static int posix_timer_add(struct k_itimer *timer)
+ return id;
+ }
+ spin_unlock(&hash_lock);
++ cond_resched();
+ }
+ /* POSIX return code when no timer ID could be allocated */
+ return -EAGAIN;
+@@ -462,14 +463,21 @@ static int do_timer_create(clockid_t which_clock, struct sigevent *event,
+ if (error)
+ goto out;
+
+- spin_lock_irq(&current->sighand->siglock);
+- /* This makes the timer valid in the hash table */
+- WRITE_ONCE(new_timer->it_signal, current->signal);
+- hlist_add_head(&new_timer->list, &current->signal->posix_timers);
+- spin_unlock_irq(&current->sighand->siglock);
+ /*
+- * After unlocking sighand::siglock @new_timer is subject to
+- * concurrent removal and cannot be touched anymore
++ * timer::it_lock ensures that __lock_timer() observes a fully
++ * initialized timer when it observes a valid timer::it_signal.
++ *
++ * sighand::siglock is required to protect signal::posix_timers.
++ */
++ scoped_guard (spinlock_irq, &new_timer->it_lock) {
++ guard(spinlock)(&current->sighand->siglock);
++ /* This makes the timer valid in the hash table */
++ WRITE_ONCE(new_timer->it_signal, current->signal);
++ hlist_add_head(&new_timer->list, &current->signal->posix_timers);
++ }
++ /*
++ * After unlocking @new_timer is subject to concurrent removal and
++ * cannot be touched anymore
+ */
+ return 0;
+ out:
+@@ -1099,8 +1107,10 @@ void exit_itimers(struct task_struct *tsk)
+ spin_unlock_irq(&tsk->sighand->siglock);
+
+ /* The timers are not longer accessible via tsk::signal */
+- while (!hlist_empty(&timers))
++ while (!hlist_empty(&timers)) {
+ itimer_delete(hlist_entry(timers.first, struct k_itimer, list));
++ cond_resched();
++ }
+
+ /*
+ * There should be no timers on the ignored list. itimer_delete() has
+diff --git a/kernel/time/timer_list.c b/kernel/time/timer_list.c
+index 1c311c46da5074..cfbb46cc4e7613 100644
+--- a/kernel/time/timer_list.c
++++ b/kernel/time/timer_list.c
+@@ -46,7 +46,7 @@ static void
+ print_timer(struct seq_file *m, struct hrtimer *taddr, struct hrtimer *timer,
+ int idx, u64 now)
+ {
+- SEQ_printf(m, " #%d: <%pK>, %ps", idx, taddr, timer->function);
++ SEQ_printf(m, " #%d: <%p>, %ps", idx, taddr, timer->function);
+ SEQ_printf(m, ", S:%02x", timer->state);
+ SEQ_printf(m, "\n");
+ SEQ_printf(m, " # expires at %Lu-%Lu nsecs [in %Ld to %Ld nsecs]\n",
+@@ -98,7 +98,7 @@ print_active_timers(struct seq_file *m, struct hrtimer_clock_base *base,
+ static void
+ print_base(struct seq_file *m, struct hrtimer_clock_base *base, u64 now)
+ {
+- SEQ_printf(m, " .base: %pK\n", base);
++ SEQ_printf(m, " .base: %p\n", base);
+ SEQ_printf(m, " .index: %d\n", base->index);
+
+ SEQ_printf(m, " .resolution: %u nsecs\n", hrtimer_resolution);
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index f76bee67a792ce..9b1db04c74e25d 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -31,6 +31,7 @@
+
+ #include <asm/local64.h>
+ #include <asm/local.h>
++#include <asm/setup.h>
+
+ #include "trace.h"
+
+@@ -49,8 +50,7 @@ static void update_pages_handler(struct work_struct *work);
+ struct ring_buffer_meta {
+ int magic;
+ int struct_size;
+- unsigned long text_addr;
+- unsigned long data_addr;
++ unsigned long kaslr_addr;
+ unsigned long first_buffer;
+ unsigned long head_buffer;
+ unsigned long commit_buffer;
+@@ -550,8 +550,7 @@ struct trace_buffer {
+ unsigned long range_addr_start;
+ unsigned long range_addr_end;
+
+- long last_text_delta;
+- long last_data_delta;
++ unsigned long kaslr_addr;
+
+ unsigned int subbuf_size;
+ unsigned int subbuf_order;
+@@ -1893,16 +1892,13 @@ static void rb_meta_validate_events(struct ring_buffer_per_cpu *cpu_buffer)
+ }
+ }
+
+-/* Used to calculate data delta */
+-static char rb_data_ptr[] = "";
+-
+-#define THIS_TEXT_PTR ((unsigned long)rb_meta_init_text_addr)
+-#define THIS_DATA_PTR ((unsigned long)rb_data_ptr)
+-
+ static void rb_meta_init_text_addr(struct ring_buffer_meta *meta)
+ {
+- meta->text_addr = THIS_TEXT_PTR;
+- meta->data_addr = THIS_DATA_PTR;
++#ifdef CONFIG_RANDOMIZE_BASE
++ meta->kaslr_addr = kaslr_offset();
++#else
++ meta->kaslr_addr = 0;
++#endif
+ }
+
+ static void rb_range_meta_init(struct trace_buffer *buffer, int nr_pages)
+@@ -1930,8 +1926,7 @@ static void rb_range_meta_init(struct trace_buffer *buffer, int nr_pages)
+ meta->first_buffer += delta;
+ meta->head_buffer += delta;
+ meta->commit_buffer += delta;
+- buffer->last_text_delta = THIS_TEXT_PTR - meta->text_addr;
+- buffer->last_data_delta = THIS_DATA_PTR - meta->data_addr;
++ buffer->kaslr_addr = meta->kaslr_addr;
+ continue;
+ }
+
+@@ -2484,17 +2479,15 @@ struct trace_buffer *__ring_buffer_alloc_range(unsigned long size, unsigned flag
+ *
+ * Returns: The true if the delta is non zero
+ */
+-bool ring_buffer_last_boot_delta(struct trace_buffer *buffer, long *text,
+- long *data)
++bool ring_buffer_last_boot_delta(struct trace_buffer *buffer, unsigned long *kaslr_addr)
+ {
+ if (!buffer)
+ return false;
+
+- if (!buffer->last_text_delta)
++ if (!buffer->kaslr_addr)
+ return false;
+
+- *text = buffer->last_text_delta;
+- *data = buffer->last_data_delta;
++ *kaslr_addr = buffer->kaslr_addr;
+
+ return true;
+ }
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 814626bb410b27..b1738563bdc3b1 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -50,7 +50,7 @@
+ #include <linux/irq_work.h>
+ #include <linux/workqueue.h>
+
+-#include <asm/setup.h> /* COMMAND_LINE_SIZE */
++#include <asm/setup.h> /* COMMAND_LINE_SIZE and kaslr_offset() */
+
+ #include "trace.h"
+ #include "trace_output.h"
+@@ -3322,10 +3322,9 @@ int trace_vbprintk(unsigned long ip, const char *fmt, va_list args)
+ }
+ EXPORT_SYMBOL_GPL(trace_vbprintk);
+
+-__printf(3, 0)
+-static int
+-__trace_array_vprintk(struct trace_buffer *buffer,
+- unsigned long ip, const char *fmt, va_list args)
++static __printf(3, 0)
++int __trace_array_vprintk(struct trace_buffer *buffer,
++ unsigned long ip, const char *fmt, va_list args)
+ {
+ struct ring_buffer_event *event;
+ int len = 0, size;
+@@ -3375,7 +3374,6 @@ __trace_array_vprintk(struct trace_buffer *buffer,
+ return len;
+ }
+
+-__printf(3, 0)
+ int trace_array_vprintk(struct trace_array *tr,
+ unsigned long ip, const char *fmt, va_list args)
+ {
+@@ -3405,7 +3403,6 @@ int trace_array_vprintk(struct trace_array *tr,
+ * Note, trace_array_init_printk() must be called on @tr before this
+ * can be used.
+ */
+-__printf(3, 0)
+ int trace_array_printk(struct trace_array *tr,
+ unsigned long ip, const char *fmt, ...)
+ {
+@@ -3450,7 +3447,6 @@ int trace_array_init_printk(struct trace_array *tr)
+ }
+ EXPORT_SYMBOL_GPL(trace_array_init_printk);
+
+-__printf(3, 4)
+ int trace_array_printk_buf(struct trace_buffer *buffer,
+ unsigned long ip, const char *fmt, ...)
+ {
+@@ -3466,7 +3462,6 @@ int trace_array_printk_buf(struct trace_buffer *buffer,
+ return ret;
+ }
+
+-__printf(2, 0)
+ int trace_vprintk(unsigned long ip, const char *fmt, va_list args)
+ {
+ return trace_array_vprintk(printk_trace, ip, fmt, args);
+@@ -4193,7 +4188,7 @@ static enum print_line_t print_trace_fmt(struct trace_iterator *iter)
+ * safe to use if the array has delta offsets
+ * Force printing via the fields.
+ */
+- if ((tr->text_delta || tr->data_delta) &&
++ if ((tr->text_delta) &&
+ event->type > __TRACE_LAST_TYPE)
+ return print_event_fields(iter, event);
+
+@@ -5990,7 +5985,7 @@ ssize_t tracing_resize_ring_buffer(struct trace_array *tr,
+
+ static void update_last_data(struct trace_array *tr)
+ {
+- if (!tr->text_delta && !tr->data_delta)
++ if (!(tr->flags & TRACE_ARRAY_FL_LAST_BOOT))
+ return;
+
+ /*
+@@ -6003,7 +5998,8 @@ static void update_last_data(struct trace_array *tr)
+
+ /* Using current data now */
+ tr->text_delta = 0;
+- tr->data_delta = 0;
++
++ tr->flags &= ~TRACE_ARRAY_FL_LAST_BOOT;
+ }
+
+ /**
+@@ -6822,8 +6818,17 @@ tracing_last_boot_read(struct file *filp, char __user *ubuf, size_t cnt, loff_t
+
+ seq_buf_init(&seq, buf, 64);
+
+- seq_buf_printf(&seq, "text delta:\t%ld\n", tr->text_delta);
+- seq_buf_printf(&seq, "data delta:\t%ld\n", tr->data_delta);
++ /*
++ * Do not leak KASLR address. This only shows the KASLR address of
++ * the last boot. When the ring buffer is started, the LAST_BOOT
++ * flag gets cleared, and this should only report "current".
++ * Otherwise it shows the KASLR address from the previous boot which
++ * should not be the same as the current boot.
++ */
++ if (tr->flags & TRACE_ARRAY_FL_LAST_BOOT)
++ seq_buf_printf(&seq, "%lx\t[kernel]\n", tr->kaslr_addr);
++ else
++ seq_buf_puts(&seq, "# Current\n");
+
+ return simple_read_from_buffer(ubuf, cnt, ppos, buf, seq_buf_used(&seq));
+ }
+@@ -9211,8 +9216,10 @@ allocate_trace_buffer(struct trace_array *tr, struct array_buffer *buf, int size
+ tr->range_addr_start,
+ tr->range_addr_size);
+
+- ring_buffer_last_boot_delta(buf->buffer,
+- &tr->text_delta, &tr->data_delta);
++#ifdef CONFIG_RANDOMIZE_BASE
++ if (ring_buffer_last_boot_delta(buf->buffer, &tr->kaslr_addr))
++ tr->text_delta = kaslr_offset() - tr->kaslr_addr;
++#endif
+ /*
+ * This is basically the same as a mapped buffer,
+ * with the same restrictions.
+@@ -10470,7 +10477,7 @@ __init static void enable_instances(void)
+ * to it.
+ */
+ if (start) {
+- tr->flags |= TRACE_ARRAY_FL_BOOT;
++ tr->flags |= TRACE_ARRAY_FL_BOOT | TRACE_ARRAY_FL_LAST_BOOT;
+ tr->ref++;
+ }
+
+diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
+index 9c21ba45b7af6b..ccf3874823f5fe 100644
+--- a/kernel/trace/trace.h
++++ b/kernel/trace/trace.h
+@@ -348,8 +348,8 @@ struct trace_array {
+ unsigned int mapped;
+ unsigned long range_addr_start;
+ unsigned long range_addr_size;
++ unsigned long kaslr_addr;
+ long text_delta;
+- long data_delta;
+
+ struct trace_pid_list __rcu *filtered_pids;
+ struct trace_pid_list __rcu *filtered_no_pids;
+@@ -433,9 +433,10 @@ struct trace_array {
+ };
+
+ enum {
+- TRACE_ARRAY_FL_GLOBAL = BIT(0),
+- TRACE_ARRAY_FL_BOOT = BIT(1),
+- TRACE_ARRAY_FL_MOD_INIT = BIT(2),
++ TRACE_ARRAY_FL_GLOBAL = BIT(0),
++ TRACE_ARRAY_FL_BOOT = BIT(1),
++ TRACE_ARRAY_FL_LAST_BOOT = BIT(2),
++ TRACE_ARRAY_FL_MOD_INIT = BIT(3),
+ };
+
+ #ifdef CONFIG_MODULES
+@@ -836,13 +837,15 @@ static inline void __init disable_tracing_selftest(const char *reason)
+
+ extern void *head_page(struct trace_array_cpu *data);
+ extern unsigned long long ns2usecs(u64 nsec);
+-extern int
+-trace_vbprintk(unsigned long ip, const char *fmt, va_list args);
+-extern int
+-trace_vprintk(unsigned long ip, const char *fmt, va_list args);
+-extern int
+-trace_array_vprintk(struct trace_array *tr,
+- unsigned long ip, const char *fmt, va_list args);
++
++__printf(2, 0)
++int trace_vbprintk(unsigned long ip, const char *fmt, va_list args);
++__printf(2, 0)
++int trace_vprintk(unsigned long ip, const char *fmt, va_list args);
++__printf(3, 0)
++int trace_array_vprintk(struct trace_array *tr,
++ unsigned long ip, const char *fmt, va_list args);
++__printf(3, 4)
+ int trace_array_printk_buf(struct trace_buffer *buffer,
+ unsigned long ip, const char *fmt, ...);
+ void trace_printk_seq(struct trace_seq *s);
+diff --git a/kernel/vhost_task.c b/kernel/vhost_task.c
+index 2ef2e1b8009165..2f844c279a3e01 100644
+--- a/kernel/vhost_task.c
++++ b/kernel/vhost_task.c
+@@ -111,7 +111,7 @@ EXPORT_SYMBOL_GPL(vhost_task_stop);
+ * @arg: data to be passed to fn and handled_kill
+ * @name: the thread's name
+ *
+- * This returns a specialized task for use by the vhost layer or NULL on
++ * This returns a specialized task for use by the vhost layer or ERR_PTR() on
+ * failure. The returned task is inactive, and the caller must fire it up
+ * through vhost_task_start().
+ */
+diff --git a/lib/alloc_tag.c b/lib/alloc_tag.c
+index 513176e33242e1..df43aea19341f7 100644
+--- a/lib/alloc_tag.c
++++ b/lib/alloc_tag.c
+@@ -350,18 +350,28 @@ static bool needs_section_mem(struct module *mod, unsigned long size)
+ return size >= sizeof(struct alloc_tag);
+ }
+
+-static struct alloc_tag *find_used_tag(struct alloc_tag *from, struct alloc_tag *to)
++static bool clean_unused_counters(struct alloc_tag *start_tag,
++ struct alloc_tag *end_tag)
+ {
+- while (from <= to) {
++ struct alloc_tag *tag;
++ bool ret = true;
++
++ for (tag = start_tag; tag <= end_tag; tag++) {
+ struct alloc_tag_counters counter;
+
+- counter = alloc_tag_read(from);
+- if (counter.bytes)
+- return from;
+- from++;
++ if (!tag->counters)
++ continue;
++
++ counter = alloc_tag_read(tag);
++ if (!counter.bytes) {
++ free_percpu(tag->counters);
++ tag->counters = NULL;
++ } else {
++ ret = false;
++ }
+ }
+
+- return NULL;
++ return ret;
+ }
+
+ /* Called with mod_area_mt locked */
+@@ -371,12 +381,16 @@ static void clean_unused_module_areas_locked(void)
+ struct module *val;
+
+ mas_for_each(&mas, val, module_tags.size) {
++ struct alloc_tag *start_tag;
++ struct alloc_tag *end_tag;
++
+ if (val != &unloaded_mod)
+ continue;
+
+ /* Release area if all tags are unused */
+- if (!find_used_tag((struct alloc_tag *)(module_tags.start_addr + mas.index),
+- (struct alloc_tag *)(module_tags.start_addr + mas.last)))
++ start_tag = (struct alloc_tag *)(module_tags.start_addr + mas.index);
++ end_tag = (struct alloc_tag *)(module_tags.start_addr + mas.last);
++ if (clean_unused_counters(start_tag, end_tag))
+ mas_erase(&mas);
+ }
+ }
+@@ -561,7 +575,8 @@ static void *reserve_module_tags(struct module *mod, unsigned long size,
+ static void release_module_tags(struct module *mod, bool used)
+ {
+ MA_STATE(mas, &mod_area_mt, module_tags.size, module_tags.size);
+- struct alloc_tag *tag;
++ struct alloc_tag *start_tag;
++ struct alloc_tag *end_tag;
+ struct module *val;
+
+ mas_lock(&mas);
+@@ -575,15 +590,22 @@ static void release_module_tags(struct module *mod, bool used)
+ if (!used)
+ goto release_area;
+
+- /* Find out if the area is used */
+- tag = find_used_tag((struct alloc_tag *)(module_tags.start_addr + mas.index),
+- (struct alloc_tag *)(module_tags.start_addr + mas.last));
+- if (tag) {
+- struct alloc_tag_counters counter = alloc_tag_read(tag);
++ start_tag = (struct alloc_tag *)(module_tags.start_addr + mas.index);
++ end_tag = (struct alloc_tag *)(module_tags.start_addr + mas.last);
++ if (!clean_unused_counters(start_tag, end_tag)) {
++ struct alloc_tag *tag;
++
++ for (tag = start_tag; tag <= end_tag; tag++) {
++ struct alloc_tag_counters counter;
++
++ if (!tag->counters)
++ continue;
+
+- pr_info("%s:%u module %s func:%s has %llu allocated at module unload\n",
+- tag->ct.filename, tag->ct.lineno, tag->ct.modname,
+- tag->ct.function, counter.bytes);
++ counter = alloc_tag_read(tag);
++ pr_info("%s:%u module %s func:%s has %llu allocated at module unload\n",
++ tag->ct.filename, tag->ct.lineno, tag->ct.modname,
++ tag->ct.function, counter.bytes);
++ }
+ } else {
+ used = false;
+ }
+@@ -596,6 +618,34 @@ static void release_module_tags(struct module *mod, bool used)
+ mas_unlock(&mas);
+ }
+
++static void load_module(struct module *mod, struct codetag *start, struct codetag *stop)
++{
++ /* Allocate module alloc_tag percpu counters */
++ struct alloc_tag *start_tag;
++ struct alloc_tag *stop_tag;
++ struct alloc_tag *tag;
++
++ if (!mod)
++ return;
++
++ start_tag = ct_to_alloc_tag(start);
++ stop_tag = ct_to_alloc_tag(stop);
++ for (tag = start_tag; tag < stop_tag; tag++) {
++ WARN_ON(tag->counters);
++ tag->counters = alloc_percpu(struct alloc_tag_counters);
++ if (!tag->counters) {
++ while (--tag >= start_tag) {
++ free_percpu(tag->counters);
++ tag->counters = NULL;
++ }
++ shutdown_mem_profiling(true);
++ pr_err("Failed to allocate memory for allocation tag percpu counters in the module %s. Memory allocation profiling is disabled!\n",
++ mod->name);
++ break;
++ }
++ }
++}
++
+ static void replace_module(struct module *mod, struct module *new_mod)
+ {
+ MA_STATE(mas, &mod_area_mt, 0, module_tags.size);
+@@ -757,6 +807,7 @@ static int __init alloc_tag_init(void)
+ .needs_section_mem = needs_section_mem,
+ .alloc_section_mem = reserve_module_tags,
+ .free_section_mem = release_module_tags,
++ .module_load = load_module,
+ .module_replaced = replace_module,
+ #endif
+ };
+diff --git a/lib/codetag.c b/lib/codetag.c
+index 42aadd6c145499..de332e98d6f5b5 100644
+--- a/lib/codetag.c
++++ b/lib/codetag.c
+@@ -194,7 +194,7 @@ static int codetag_module_init(struct codetag_type *cttype, struct module *mod)
+ if (err >= 0) {
+ cttype->count += range_size(cttype, &range);
+ if (cttype->desc.module_load)
+- cttype->desc.module_load(cttype, cmod);
++ cttype->desc.module_load(mod, range.start, range.stop);
+ }
+ up_write(&cttype->mod_lock);
+
+@@ -333,7 +333,8 @@ void codetag_unload_module(struct module *mod)
+ }
+ if (found) {
+ if (cttype->desc.module_unload)
+- cttype->desc.module_unload(cttype, cmod);
++ cttype->desc.module_unload(cmod->mod,
++ cmod->range.start, cmod->range.stop);
+
+ cttype->count -= range_size(cttype, &cmod->range);
+ idr_remove(&cttype->mod_idr, mod_id);
+diff --git a/lib/dynamic_queue_limits.c b/lib/dynamic_queue_limits.c
+index c1b7638a594ac4..f97a752e900a0d 100644
+--- a/lib/dynamic_queue_limits.c
++++ b/lib/dynamic_queue_limits.c
+@@ -190,7 +190,7 @@ EXPORT_SYMBOL(dql_completed);
+ void dql_reset(struct dql *dql)
+ {
+ /* Reset all dynamic values */
+- dql->limit = 0;
++ dql->limit = dql->min_limit;
+ dql->num_queued = 0;
+ dql->num_completed = 0;
+ dql->last_obj_cnt = 0;
+diff --git a/lib/lzo/Makefile b/lib/lzo/Makefile
+index 2f58fafbbdddc0..fc7b2b7ef4b20e 100644
+--- a/lib/lzo/Makefile
++++ b/lib/lzo/Makefile
+@@ -1,5 +1,5 @@
+ # SPDX-License-Identifier: GPL-2.0-only
+-lzo_compress-objs := lzo1x_compress.o
++lzo_compress-objs := lzo1x_compress.o lzo1x_compress_safe.o
+ lzo_decompress-objs := lzo1x_decompress_safe.o
+
+ obj-$(CONFIG_LZO_COMPRESS) += lzo_compress.o
+diff --git a/lib/lzo/lzo1x_compress.c b/lib/lzo/lzo1x_compress.c
+index 47d6d43ea9578c..7b10ca86a89300 100644
+--- a/lib/lzo/lzo1x_compress.c
++++ b/lib/lzo/lzo1x_compress.c
+@@ -18,11 +18,22 @@
+ #include <linux/lzo.h>
+ #include "lzodefs.h"
+
+-static noinline size_t
+-lzo1x_1_do_compress(const unsigned char *in, size_t in_len,
+- unsigned char *out, size_t *out_len,
+- size_t ti, void *wrkmem, signed char *state_offset,
+- const unsigned char bitstream_version)
++#undef LZO_UNSAFE
++
++#ifndef LZO_SAFE
++#define LZO_UNSAFE 1
++#define LZO_SAFE(name) name
++#define HAVE_OP(x) 1
++#endif
++
++#define NEED_OP(x) if (!HAVE_OP(x)) goto output_overrun
++
++static noinline int
++LZO_SAFE(lzo1x_1_do_compress)(const unsigned char *in, size_t in_len,
++ unsigned char **out, unsigned char *op_end,
++ size_t *tp, void *wrkmem,
++ signed char *state_offset,
++ const unsigned char bitstream_version)
+ {
+ const unsigned char *ip;
+ unsigned char *op;
+@@ -30,8 +41,9 @@ lzo1x_1_do_compress(const unsigned char *in, size_t in_len,
+ const unsigned char * const ip_end = in + in_len - 20;
+ const unsigned char *ii;
+ lzo_dict_t * const dict = (lzo_dict_t *) wrkmem;
++ size_t ti = *tp;
+
+- op = out;
++ op = *out;
+ ip = in;
+ ii = ip;
+ ip += ti < 4 ? 4 - ti : 0;
+@@ -116,25 +128,32 @@ lzo1x_1_do_compress(const unsigned char *in, size_t in_len,
+ if (t != 0) {
+ if (t <= 3) {
+ op[*state_offset] |= t;
++ NEED_OP(4);
+ COPY4(op, ii);
+ op += t;
+ } else if (t <= 16) {
++ NEED_OP(17);
+ *op++ = (t - 3);
+ COPY8(op, ii);
+ COPY8(op + 8, ii + 8);
+ op += t;
+ } else {
+ if (t <= 18) {
++ NEED_OP(1);
+ *op++ = (t - 3);
+ } else {
+ size_t tt = t - 18;
++ NEED_OP(1);
+ *op++ = 0;
+ while (unlikely(tt > 255)) {
+ tt -= 255;
++ NEED_OP(1);
+ *op++ = 0;
+ }
++ NEED_OP(1);
+ *op++ = tt;
+ }
++ NEED_OP(t);
+ do {
+ COPY8(op, ii);
+ COPY8(op + 8, ii + 8);
+@@ -151,6 +170,7 @@ lzo1x_1_do_compress(const unsigned char *in, size_t in_len,
+ if (unlikely(run_length)) {
+ ip += run_length;
+ run_length -= MIN_ZERO_RUN_LENGTH;
++ NEED_OP(4);
+ put_unaligned_le32((run_length << 21) | 0xfffc18
+ | (run_length & 0x7), op);
+ op += 4;
+@@ -243,10 +263,12 @@ lzo1x_1_do_compress(const unsigned char *in, size_t in_len,
+ ip += m_len;
+ if (m_len <= M2_MAX_LEN && m_off <= M2_MAX_OFFSET) {
+ m_off -= 1;
++ NEED_OP(2);
+ *op++ = (((m_len - 1) << 5) | ((m_off & 7) << 2));
+ *op++ = (m_off >> 3);
+ } else if (m_off <= M3_MAX_OFFSET) {
+ m_off -= 1;
++ NEED_OP(1);
+ if (m_len <= M3_MAX_LEN)
+ *op++ = (M3_MARKER | (m_len - 2));
+ else {
+@@ -254,14 +276,18 @@ lzo1x_1_do_compress(const unsigned char *in, size_t in_len,
+ *op++ = M3_MARKER | 0;
+ while (unlikely(m_len > 255)) {
+ m_len -= 255;
++ NEED_OP(1);
+ *op++ = 0;
+ }
++ NEED_OP(1);
+ *op++ = (m_len);
+ }
++ NEED_OP(2);
+ *op++ = (m_off << 2);
+ *op++ = (m_off >> 6);
+ } else {
+ m_off -= 0x4000;
++ NEED_OP(1);
+ if (m_len <= M4_MAX_LEN)
+ *op++ = (M4_MARKER | ((m_off >> 11) & 8)
+ | (m_len - 2));
+@@ -282,11 +308,14 @@ lzo1x_1_do_compress(const unsigned char *in, size_t in_len,
+ m_len -= M4_MAX_LEN;
+ *op++ = (M4_MARKER | ((m_off >> 11) & 8));
+ while (unlikely(m_len > 255)) {
++ NEED_OP(1);
+ m_len -= 255;
+ *op++ = 0;
+ }
++ NEED_OP(1);
+ *op++ = (m_len);
+ }
++ NEED_OP(2);
+ *op++ = (m_off << 2);
+ *op++ = (m_off >> 6);
+ }
+@@ -295,14 +324,20 @@ lzo1x_1_do_compress(const unsigned char *in, size_t in_len,
+ ii = ip;
+ goto next;
+ }
+- *out_len = op - out;
+- return in_end - (ii - ti);
++ *out = op;
++ *tp = in_end - (ii - ti);
++ return LZO_E_OK;
++
++output_overrun:
++ return LZO_E_OUTPUT_OVERRUN;
+ }
+
+-static int lzogeneric1x_1_compress(const unsigned char *in, size_t in_len,
+- unsigned char *out, size_t *out_len,
+- void *wrkmem, const unsigned char bitstream_version)
++static int LZO_SAFE(lzogeneric1x_1_compress)(
++ const unsigned char *in, size_t in_len,
++ unsigned char *out, size_t *out_len,
++ void *wrkmem, const unsigned char bitstream_version)
+ {
++ unsigned char * const op_end = out + *out_len;
+ const unsigned char *ip = in;
+ unsigned char *op = out;
+ unsigned char *data_start;
+@@ -326,14 +361,18 @@ static int lzogeneric1x_1_compress(const unsigned char *in, size_t in_len,
+ while (l > 20) {
+ size_t ll = min_t(size_t, l, m4_max_offset + 1);
+ uintptr_t ll_end = (uintptr_t) ip + ll;
++ int err;
++
+ if ((ll_end + ((t + ll) >> 5)) <= ll_end)
+ break;
+ BUILD_BUG_ON(D_SIZE * sizeof(lzo_dict_t) > LZO1X_1_MEM_COMPRESS);
+ memset(wrkmem, 0, D_SIZE * sizeof(lzo_dict_t));
+- t = lzo1x_1_do_compress(ip, ll, op, out_len, t, wrkmem,
+- &state_offset, bitstream_version);
++ err = LZO_SAFE(lzo1x_1_do_compress)(
++ ip, ll, &op, op_end, &t, wrkmem,
++ &state_offset, bitstream_version);
++ if (err != LZO_E_OK)
++ return err;
+ ip += ll;
+- op += *out_len;
+ l -= ll;
+ }
+ t += l;
+@@ -342,20 +381,26 @@ static int lzogeneric1x_1_compress(const unsigned char *in, size_t in_len,
+ const unsigned char *ii = in + in_len - t;
+
+ if (op == data_start && t <= 238) {
++ NEED_OP(1);
+ *op++ = (17 + t);
+ } else if (t <= 3) {
+ op[state_offset] |= t;
+ } else if (t <= 18) {
++ NEED_OP(1);
+ *op++ = (t - 3);
+ } else {
+ size_t tt = t - 18;
++ NEED_OP(1);
+ *op++ = 0;
+ while (tt > 255) {
+ tt -= 255;
++ NEED_OP(1);
+ *op++ = 0;
+ }
++ NEED_OP(1);
+ *op++ = tt;
+ }
++ NEED_OP(t);
+ if (t >= 16) do {
+ COPY8(op, ii);
+ COPY8(op + 8, ii + 8);
+@@ -368,31 +413,38 @@ static int lzogeneric1x_1_compress(const unsigned char *in, size_t in_len,
+ } while (--t > 0);
+ }
+
++ NEED_OP(3);
+ *op++ = M4_MARKER | 1;
+ *op++ = 0;
+ *op++ = 0;
+
+ *out_len = op - out;
+ return LZO_E_OK;
++
++output_overrun:
++ return LZO_E_OUTPUT_OVERRUN;
+ }
+
+-int lzo1x_1_compress(const unsigned char *in, size_t in_len,
+- unsigned char *out, size_t *out_len,
+- void *wrkmem)
++int LZO_SAFE(lzo1x_1_compress)(const unsigned char *in, size_t in_len,
++ unsigned char *out, size_t *out_len,
++ void *wrkmem)
+ {
+- return lzogeneric1x_1_compress(in, in_len, out, out_len, wrkmem, 0);
++ return LZO_SAFE(lzogeneric1x_1_compress)(
++ in, in_len, out, out_len, wrkmem, 0);
+ }
+
+-int lzorle1x_1_compress(const unsigned char *in, size_t in_len,
+- unsigned char *out, size_t *out_len,
+- void *wrkmem)
++int LZO_SAFE(lzorle1x_1_compress)(const unsigned char *in, size_t in_len,
++ unsigned char *out, size_t *out_len,
++ void *wrkmem)
+ {
+- return lzogeneric1x_1_compress(in, in_len, out, out_len,
+- wrkmem, LZO_VERSION);
++ return LZO_SAFE(lzogeneric1x_1_compress)(
++ in, in_len, out, out_len, wrkmem, LZO_VERSION);
+ }
+
+-EXPORT_SYMBOL_GPL(lzo1x_1_compress);
+-EXPORT_SYMBOL_GPL(lzorle1x_1_compress);
++EXPORT_SYMBOL_GPL(LZO_SAFE(lzo1x_1_compress));
++EXPORT_SYMBOL_GPL(LZO_SAFE(lzorle1x_1_compress));
+
++#ifndef LZO_UNSAFE
+ MODULE_LICENSE("GPL");
+ MODULE_DESCRIPTION("LZO1X-1 Compressor");
++#endif
+diff --git a/lib/lzo/lzo1x_compress_safe.c b/lib/lzo/lzo1x_compress_safe.c
+new file mode 100644
+index 00000000000000..371c9f84949281
+--- /dev/null
++++ b/lib/lzo/lzo1x_compress_safe.c
+@@ -0,0 +1,18 @@
++// SPDX-License-Identifier: GPL-2.0-only
++/*
++ * LZO1X Compressor from LZO
++ *
++ * Copyright (C) 1996-2012 Markus F.X.J. Oberhumer <markus@oberhumer.com>
++ *
++ * The full LZO package can be found at:
++ * http://www.oberhumer.com/opensource/lzo/
++ *
++ * Changed for Linux kernel use by:
++ * Nitin Gupta <nitingupta910@gmail.com>
++ * Richard Purdie <rpurdie@openedhand.com>
++ */
++
++#define LZO_SAFE(name) name##_safe
++#define HAVE_OP(x) ((size_t)(op_end - op) >= (size_t)(x))
++
++#include "lzo1x_compress.c"
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index 8acd95964ad15d..4aa363681cbb3b 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -2926,12 +2926,20 @@ int replace_free_hugepage_folios(unsigned long start_pfn, unsigned long end_pfn)
+
+ while (start_pfn < end_pfn) {
+ folio = pfn_folio(start_pfn);
++
++ /*
++ * The folio might have been dissolved from under our feet, so make sure
++ * to carefully check the state under the lock.
++ */
++ spin_lock_irq(&hugetlb_lock);
+ if (folio_test_hugetlb(folio)) {
+ h = folio_hstate(folio);
+ } else {
++ spin_unlock_irq(&hugetlb_lock);
+ start_pfn++;
+ continue;
+ }
++ spin_unlock_irq(&hugetlb_lock);
+
+ if (!folio_ref_count(folio)) {
+ ret = alloc_and_dissolve_hugetlb_folio(h, folio,
+diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
+index 88d1c9dcb50721..d2c70cd2afb1de 100644
+--- a/mm/kasan/shadow.c
++++ b/mm/kasan/shadow.c
+@@ -292,33 +292,99 @@ void __init __weak kasan_populate_early_vm_area_shadow(void *start,
+ {
+ }
+
++struct vmalloc_populate_data {
++ unsigned long start;
++ struct page **pages;
++};
++
+ static int kasan_populate_vmalloc_pte(pte_t *ptep, unsigned long addr,
+- void *unused)
++ void *_data)
+ {
+- unsigned long page;
++ struct vmalloc_populate_data *data = _data;
++ struct page *page;
+ pte_t pte;
++ int index;
+
+ if (likely(!pte_none(ptep_get(ptep))))
+ return 0;
+
+- page = __get_free_page(GFP_KERNEL);
+- if (!page)
+- return -ENOMEM;
+-
+- __memset((void *)page, KASAN_VMALLOC_INVALID, PAGE_SIZE);
+- pte = pfn_pte(PFN_DOWN(__pa(page)), PAGE_KERNEL);
++ index = PFN_DOWN(addr - data->start);
++ page = data->pages[index];
++ __memset(page_to_virt(page), KASAN_VMALLOC_INVALID, PAGE_SIZE);
++ pte = pfn_pte(page_to_pfn(page), PAGE_KERNEL);
+
+ spin_lock(&init_mm.page_table_lock);
+ if (likely(pte_none(ptep_get(ptep)))) {
+ set_pte_at(&init_mm, addr, ptep, pte);
+- page = 0;
++ data->pages[index] = NULL;
+ }
+ spin_unlock(&init_mm.page_table_lock);
+- if (page)
+- free_page(page);
++
++ return 0;
++}
++
++static void ___free_pages_bulk(struct page **pages, int nr_pages)
++{
++ int i;
++
++ for (i = 0; i < nr_pages; i++) {
++ if (pages[i]) {
++ __free_pages(pages[i], 0);
++ pages[i] = NULL;
++ }
++ }
++}
++
++static int ___alloc_pages_bulk(struct page **pages, int nr_pages)
++{
++ unsigned long nr_populated, nr_total = nr_pages;
++ struct page **page_array = pages;
++
++ while (nr_pages) {
++ nr_populated = alloc_pages_bulk(GFP_KERNEL, nr_pages, pages);
++ if (!nr_populated) {
++ ___free_pages_bulk(page_array, nr_total - nr_pages);
++ return -ENOMEM;
++ }
++ pages += nr_populated;
++ nr_pages -= nr_populated;
++ }
++
+ return 0;
+ }
+
++static int __kasan_populate_vmalloc(unsigned long start, unsigned long end)
++{
++ unsigned long nr_pages, nr_total = PFN_UP(end - start);
++ struct vmalloc_populate_data data;
++ int ret = 0;
++
++ data.pages = (struct page **)__get_free_page(GFP_KERNEL | __GFP_ZERO);
++ if (!data.pages)
++ return -ENOMEM;
++
++ while (nr_total) {
++ nr_pages = min(nr_total, PAGE_SIZE / sizeof(data.pages[0]));
++ ret = ___alloc_pages_bulk(data.pages, nr_pages);
++ if (ret)
++ break;
++
++ data.start = start;
++ ret = apply_to_page_range(&init_mm, start, nr_pages * PAGE_SIZE,
++ kasan_populate_vmalloc_pte, &data);
++ ___free_pages_bulk(data.pages, nr_pages);
++ if (ret)
++ break;
++
++ start += nr_pages * PAGE_SIZE;
++ nr_total -= nr_pages;
++ }
++
++ free_page((unsigned long)data.pages);
++
++ return ret;
++}
++
+ int kasan_populate_vmalloc(unsigned long addr, unsigned long size)
+ {
+ unsigned long shadow_start, shadow_end;
+@@ -348,9 +414,7 @@ int kasan_populate_vmalloc(unsigned long addr, unsigned long size)
+ shadow_start = PAGE_ALIGN_DOWN(shadow_start);
+ shadow_end = PAGE_ALIGN(shadow_end);
+
+- ret = apply_to_page_range(&init_mm, shadow_start,
+- shadow_end - shadow_start,
+- kasan_populate_vmalloc_pte, NULL);
++ ret = __kasan_populate_vmalloc(shadow_start, shadow_end);
+ if (ret)
+ return ret;
+
+diff --git a/mm/memcontrol.c b/mm/memcontrol.c
+index a037ec92881d59..e51e04dba20d69 100644
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -1161,7 +1161,6 @@ void mem_cgroup_scan_tasks(struct mem_cgroup *memcg,
+ {
+ struct mem_cgroup *iter;
+ int ret = 0;
+- int i = 0;
+
+ BUG_ON(mem_cgroup_is_root(memcg));
+
+@@ -1171,10 +1170,9 @@ void mem_cgroup_scan_tasks(struct mem_cgroup *memcg,
+
+ css_task_iter_start(&iter->css, CSS_TASK_ITER_PROCS, &it);
+ while (!ret && (task = css_task_iter_next(&it))) {
+- /* Avoid potential softlockup warning */
+- if ((++i & 1023) == 0)
+- cond_resched();
+ ret = fn(task, arg);
++ /* Avoid potential softlockup warning */
++ cond_resched();
+ }
+ css_task_iter_end(&it);
+ if (ret) {
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index 2cc8b3e36dc942..a4a351996153bb 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -4381,6 +4381,14 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
+ }
+
+ retry:
++ /*
++ * Deal with possible cpuset update races or zonelist updates to avoid
++ * infinite retries.
++ */
++ if (check_retry_cpuset(cpuset_mems_cookie, ac) ||
++ check_retry_zonelist(zonelist_iter_cookie))
++ goto restart;
++
+ /* Ensure kswapd doesn't accidentally go to sleep as long as we loop */
+ if (alloc_flags & ALLOC_KSWAPD)
+ wake_all_kswapds(order, gfp_mask, ac);
+diff --git a/mm/vmalloc.c b/mm/vmalloc.c
+index 8aa7eea9b26fb9..ae7e419be29d90 100644
+--- a/mm/vmalloc.c
++++ b/mm/vmalloc.c
+@@ -4097,8 +4097,8 @@ void *vrealloc_noprof(const void *p, size_t size, gfp_t flags)
+ * would be a good heuristic for when to shrink the vm_area?
+ */
+ if (size <= old_size) {
+- /* Zero out "freed" memory. */
+- if (want_init_on_free())
++ /* Zero out "freed" memory, potentially for future realloc. */
++ if (want_init_on_free() || want_init_on_alloc(flags))
+ memset((void *)p + size, 0, old_size - size);
+ vm->requested_size = size;
+ kasan_poison_vmalloc(p + size, old_size - size);
+@@ -4111,10 +4111,13 @@ void *vrealloc_noprof(const void *p, size_t size, gfp_t flags)
+ if (size <= alloced_size) {
+ kasan_unpoison_vmalloc(p + old_size, size - old_size,
+ KASAN_VMALLOC_PROT_NORMAL);
+- /* Zero out "alloced" memory. */
+- if (want_init_on_alloc(flags))
+- memset((void *)p + old_size, 0, size - old_size);
++ /*
++ * No need to zero memory here, as unused memory will have
++ * already been zeroed at initial allocation time or during
++ * realloc shrink time.
++ */
+ vm->requested_size = size;
++ return (void *)p;
+ }
+
+ /* TODO: Grow the vm_area, i.e. allocate and map additional pages. */
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index ab940ec698c0f5..7152a1ca56778a 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -930,6 +930,9 @@ static u8 hci_cc_read_buffer_size(struct hci_dev *hdev, void *data,
+ hdev->sco_pkts = 8;
+ }
+
++ if (!read_voice_setting_capable(hdev))
++ hdev->sco_pkts = 0;
++
+ hdev->acl_cnt = hdev->acl_pkts;
+ hdev->sco_cnt = hdev->sco_pkts;
+
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index c219a8c596d3e5..66fa5d6fea6cad 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -1411,7 +1411,8 @@ static void l2cap_request_info(struct l2cap_conn *conn)
+ sizeof(req), &req);
+ }
+
+-static bool l2cap_check_enc_key_size(struct hci_conn *hcon)
++static bool l2cap_check_enc_key_size(struct hci_conn *hcon,
++ struct l2cap_chan *chan)
+ {
+ /* The minimum encryption key size needs to be enforced by the
+ * host stack before establishing any L2CAP connections. The
+@@ -1425,7 +1426,7 @@ static bool l2cap_check_enc_key_size(struct hci_conn *hcon)
+ int min_key_size = hcon->hdev->min_enc_key_size;
+
+ /* On FIPS security level, key size must be 16 bytes */
+- if (hcon->sec_level == BT_SECURITY_FIPS)
++ if (chan->sec_level == BT_SECURITY_FIPS)
+ min_key_size = 16;
+
+ return (!test_bit(HCI_CONN_ENCRYPT, &hcon->flags) ||
+@@ -1453,7 +1454,7 @@ static void l2cap_do_start(struct l2cap_chan *chan)
+ !__l2cap_no_conn_pending(chan))
+ return;
+
+- if (l2cap_check_enc_key_size(conn->hcon))
++ if (l2cap_check_enc_key_size(conn->hcon, chan))
+ l2cap_start_connection(chan);
+ else
+ __set_chan_timer(chan, L2CAP_DISC_TIMEOUT);
+@@ -1528,7 +1529,7 @@ static void l2cap_conn_start(struct l2cap_conn *conn)
+ continue;
+ }
+
+- if (l2cap_check_enc_key_size(conn->hcon))
++ if (l2cap_check_enc_key_size(conn->hcon, chan))
+ l2cap_start_connection(chan);
+ else
+ l2cap_chan_close(chan, ECONNREFUSED);
+@@ -3957,7 +3958,7 @@ static void l2cap_connect(struct l2cap_conn *conn, struct l2cap_cmd_hdr *cmd,
+ /* Check if the ACL is secure enough (if not SDP) */
+ if (psm != cpu_to_le16(L2CAP_PSM_SDP) &&
+ (!hci_conn_check_link_mode(conn->hcon) ||
+- !l2cap_check_enc_key_size(conn->hcon))) {
++ !l2cap_check_enc_key_size(conn->hcon, pchan))) {
+ conn->disc_reason = HCI_ERROR_AUTH_FAILURE;
+ result = L2CAP_CR_SEC_BLOCK;
+ goto response;
+@@ -7317,7 +7318,7 @@ static void l2cap_security_cfm(struct hci_conn *hcon, u8 status, u8 encrypt)
+ }
+
+ if (chan->state == BT_CONNECT) {
+- if (!status && l2cap_check_enc_key_size(hcon))
++ if (!status && l2cap_check_enc_key_size(hcon, chan))
+ l2cap_start_connection(chan);
+ else
+ __set_chan_timer(chan, L2CAP_DISC_TIMEOUT);
+@@ -7327,7 +7328,7 @@ static void l2cap_security_cfm(struct hci_conn *hcon, u8 status, u8 encrypt)
+ struct l2cap_conn_rsp rsp;
+ __u16 res, stat;
+
+- if (!status && l2cap_check_enc_key_size(hcon)) {
++ if (!status && l2cap_check_enc_key_size(hcon, chan)) {
+ if (test_bit(FLAG_DEFER_SETUP, &chan->flags)) {
+ res = L2CAP_CR_PEND;
+ stat = L2CAP_CS_AUTHOR_PEND;
+diff --git a/net/bridge/br_mdb.c b/net/bridge/br_mdb.c
+index 1a52a0bca086d6..7e1ad229e1330c 100644
+--- a/net/bridge/br_mdb.c
++++ b/net/bridge/br_mdb.c
+@@ -1040,7 +1040,7 @@ static int br_mdb_add_group(const struct br_mdb_config *cfg,
+
+ /* host join */
+ if (!port) {
+- if (mp->host_joined) {
++ if (mp->host_joined && !(cfg->nlflags & NLM_F_REPLACE)) {
+ NL_SET_ERR_MSG_MOD(extack, "Group is already joined by host");
+ return -EEXIST;
+ }
+diff --git a/net/bridge/br_nf_core.c b/net/bridge/br_nf_core.c
+index 98aea5485aaef4..a8c67035e23c00 100644
+--- a/net/bridge/br_nf_core.c
++++ b/net/bridge/br_nf_core.c
+@@ -65,17 +65,14 @@ static struct dst_ops fake_dst_ops = {
+ * ipt_REJECT needs it. Future netfilter modules might
+ * require us to fill additional fields.
+ */
+-static const u32 br_dst_default_metrics[RTAX_MAX] = {
+- [RTAX_MTU - 1] = 1500,
+-};
+-
+ void br_netfilter_rtable_init(struct net_bridge *br)
+ {
+ struct rtable *rt = &br->fake_rtable;
+
+ rcuref_init(&rt->dst.__rcuref, 1);
+ rt->dst.dev = br->dev;
+- dst_init_metrics(&rt->dst, br_dst_default_metrics, true);
++ dst_init_metrics(&rt->dst, br->metrics, false);
++ dst_metric_set(&rt->dst, RTAX_MTU, br->dev->mtu);
+ rt->dst.flags = DST_NOXFRM | DST_FAKE_RTABLE;
+ rt->dst.ops = &fake_dst_ops;
+ }
+diff --git a/net/bridge/br_private.h b/net/bridge/br_private.h
+index d5b3c5936a79e1..4715a8d6dc3266 100644
+--- a/net/bridge/br_private.h
++++ b/net/bridge/br_private.h
+@@ -505,6 +505,7 @@ struct net_bridge {
+ struct rtable fake_rtable;
+ struct rt6_info fake_rt6_info;
+ };
++ u32 metrics[RTAX_MAX];
+ #endif
+ u16 group_fwd_mask;
+ u16 group_fwd_mask_required;
+diff --git a/net/can/bcm.c b/net/can/bcm.c
+index 217049fa496e9d..e33ff2a5b20ccb 100644
+--- a/net/can/bcm.c
++++ b/net/can/bcm.c
+@@ -58,6 +58,7 @@
+ #include <linux/can/skb.h>
+ #include <linux/can/bcm.h>
+ #include <linux/slab.h>
++#include <linux/spinlock.h>
+ #include <net/sock.h>
+ #include <net/net_namespace.h>
+
+@@ -122,6 +123,7 @@ struct bcm_op {
+ struct canfd_frame last_sframe;
+ struct sock *sk;
+ struct net_device *rx_reg_dev;
++ spinlock_t bcm_tx_lock; /* protect currframe/count in runtime updates */
+ };
+
+ struct bcm_sock {
+@@ -217,7 +219,9 @@ static int bcm_proc_show(struct seq_file *m, void *v)
+ seq_printf(m, " / bound %s", bcm_proc_getifname(net, ifname, bo->ifindex));
+ seq_printf(m, " <<<\n");
+
+- list_for_each_entry(op, &bo->rx_ops, list) {
++ rcu_read_lock();
++
++ list_for_each_entry_rcu(op, &bo->rx_ops, list) {
+
+ unsigned long reduction;
+
+@@ -273,6 +277,9 @@ static int bcm_proc_show(struct seq_file *m, void *v)
+ seq_printf(m, "# sent %ld\n", op->frames_abs);
+ }
+ seq_putc(m, '\n');
++
++ rcu_read_unlock();
++
+ return 0;
+ }
+ #endif /* CONFIG_PROC_FS */
+@@ -285,13 +292,18 @@ static void bcm_can_tx(struct bcm_op *op)
+ {
+ struct sk_buff *skb;
+ struct net_device *dev;
+- struct canfd_frame *cf = op->frames + op->cfsiz * op->currframe;
++ struct canfd_frame *cf;
+ int err;
+
+ /* no target device? => exit */
+ if (!op->ifindex)
+ return;
+
++ /* read currframe under lock protection */
++ spin_lock_bh(&op->bcm_tx_lock);
++ cf = op->frames + op->cfsiz * op->currframe;
++ spin_unlock_bh(&op->bcm_tx_lock);
++
+ dev = dev_get_by_index(sock_net(op->sk), op->ifindex);
+ if (!dev) {
+ /* RFC: should this bcm_op remove itself here? */
+@@ -312,6 +324,10 @@ static void bcm_can_tx(struct bcm_op *op)
+ skb->dev = dev;
+ can_skb_set_owner(skb, op->sk);
+ err = can_send(skb, 1);
++
++ /* update currframe and count under lock protection */
++ spin_lock_bh(&op->bcm_tx_lock);
++
+ if (!err)
+ op->frames_abs++;
+
+@@ -320,6 +336,11 @@ static void bcm_can_tx(struct bcm_op *op)
+ /* reached last frame? */
+ if (op->currframe >= op->nframes)
+ op->currframe = 0;
++
++ if (op->count > 0)
++ op->count--;
++
++ spin_unlock_bh(&op->bcm_tx_lock);
+ out:
+ dev_put(dev);
+ }
+@@ -430,7 +451,7 @@ static enum hrtimer_restart bcm_tx_timeout_handler(struct hrtimer *hrtimer)
+ struct bcm_msg_head msg_head;
+
+ if (op->kt_ival1 && (op->count > 0)) {
+- op->count--;
++ bcm_can_tx(op);
+ if (!op->count && (op->flags & TX_COUNTEVT)) {
+
+ /* create notification to user */
+@@ -445,7 +466,6 @@ static enum hrtimer_restart bcm_tx_timeout_handler(struct hrtimer *hrtimer)
+
+ bcm_send_to_user(op, &msg_head, NULL, 0);
+ }
+- bcm_can_tx(op);
+
+ } else if (op->kt_ival2) {
+ bcm_can_tx(op);
+@@ -843,7 +863,7 @@ static int bcm_delete_rx_op(struct list_head *ops, struct bcm_msg_head *mh,
+ REGMASK(op->can_id),
+ bcm_rx_handler, op);
+
+- list_del(&op->list);
++ list_del_rcu(&op->list);
+ bcm_remove_op(op);
+ return 1; /* done */
+ }
+@@ -863,7 +883,7 @@ static int bcm_delete_tx_op(struct list_head *ops, struct bcm_msg_head *mh,
+ list_for_each_entry_safe(op, n, ops, list) {
+ if ((op->can_id == mh->can_id) && (op->ifindex == ifindex) &&
+ (op->flags & CAN_FD_FRAME) == (mh->flags & CAN_FD_FRAME)) {
+- list_del(&op->list);
++ list_del_rcu(&op->list);
+ bcm_remove_op(op);
+ return 1; /* done */
+ }
+@@ -956,6 +976,27 @@ static int bcm_tx_setup(struct bcm_msg_head *msg_head, struct msghdr *msg,
+ }
+ op->flags = msg_head->flags;
+
++ /* only lock for unlikely count/nframes/currframe changes */
++ if (op->nframes != msg_head->nframes ||
++ op->flags & TX_RESET_MULTI_IDX ||
++ op->flags & SETTIMER) {
++
++ spin_lock_bh(&op->bcm_tx_lock);
++
++ if (op->nframes != msg_head->nframes ||
++ op->flags & TX_RESET_MULTI_IDX) {
++ /* potentially update changed nframes */
++ op->nframes = msg_head->nframes;
++ /* restart multiple frame transmission */
++ op->currframe = 0;
++ }
++
++ if (op->flags & SETTIMER)
++ op->count = msg_head->count;
++
++ spin_unlock_bh(&op->bcm_tx_lock);
++ }
++
+ } else {
+ /* insert new BCM operation for the given can_id */
+
+@@ -963,9 +1004,14 @@ static int bcm_tx_setup(struct bcm_msg_head *msg_head, struct msghdr *msg,
+ if (!op)
+ return -ENOMEM;
+
++ spin_lock_init(&op->bcm_tx_lock);
+ op->can_id = msg_head->can_id;
+ op->cfsiz = CFSIZ(msg_head->flags);
+ op->flags = msg_head->flags;
++ op->nframes = msg_head->nframes;
++
++ if (op->flags & SETTIMER)
++ op->count = msg_head->count;
+
+ /* create array for CAN frames and copy the data */
+ if (msg_head->nframes > 1) {
+@@ -1024,22 +1070,8 @@ static int bcm_tx_setup(struct bcm_msg_head *msg_head, struct msghdr *msg,
+
+ } /* if ((op = bcm_find_op(&bo->tx_ops, msg_head->can_id, ifindex))) */
+
+- if (op->nframes != msg_head->nframes) {
+- op->nframes = msg_head->nframes;
+- /* start multiple frame transmission with index 0 */
+- op->currframe = 0;
+- }
+-
+- /* check flags */
+-
+- if (op->flags & TX_RESET_MULTI_IDX) {
+- /* start multiple frame transmission with index 0 */
+- op->currframe = 0;
+- }
+-
+ if (op->flags & SETTIMER) {
+ /* set timer values */
+- op->count = msg_head->count;
+ op->ival1 = msg_head->ival1;
+ op->ival2 = msg_head->ival2;
+ op->kt_ival1 = bcm_timeval_to_ktime(msg_head->ival1);
+@@ -1056,11 +1088,8 @@ static int bcm_tx_setup(struct bcm_msg_head *msg_head, struct msghdr *msg,
+ op->flags |= TX_ANNOUNCE;
+ }
+
+- if (op->flags & TX_ANNOUNCE) {
++ if (op->flags & TX_ANNOUNCE)
+ bcm_can_tx(op);
+- if (op->count)
+- op->count--;
+- }
+
+ if (op->flags & STARTTIMER)
+ bcm_tx_start_timer(op);
+@@ -1276,7 +1305,7 @@ static int bcm_rx_setup(struct bcm_msg_head *msg_head, struct msghdr *msg,
+ bcm_rx_handler, op, "bcm", sk);
+ if (err) {
+ /* this bcm rx op is broken -> remove it */
+- list_del(&op->list);
++ list_del_rcu(&op->list);
+ bcm_remove_op(op);
+ return err;
+ }
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 2f7f5fd9ffec7c..77306b522966c3 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -6187,16 +6187,18 @@ EXPORT_SYMBOL(netif_receive_skb_list);
+ static void flush_backlog(struct work_struct *work)
+ {
+ struct sk_buff *skb, *tmp;
++ struct sk_buff_head list;
+ struct softnet_data *sd;
+
++ __skb_queue_head_init(&list);
+ local_bh_disable();
+ sd = this_cpu_ptr(&softnet_data);
+
+ backlog_lock_irq_disable(sd);
+ skb_queue_walk_safe(&sd->input_pkt_queue, skb, tmp) {
+- if (skb->dev->reg_state == NETREG_UNREGISTERING) {
++ if (READ_ONCE(skb->dev->reg_state) == NETREG_UNREGISTERING) {
+ __skb_unlink(skb, &sd->input_pkt_queue);
+- dev_kfree_skb_irq(skb);
++ __skb_queue_tail(&list, skb);
+ rps_input_queue_head_incr(sd);
+ }
+ }
+@@ -6204,14 +6206,16 @@ static void flush_backlog(struct work_struct *work)
+
+ local_lock_nested_bh(&softnet_data.process_queue_bh_lock);
+ skb_queue_walk_safe(&sd->process_queue, skb, tmp) {
+- if (skb->dev->reg_state == NETREG_UNREGISTERING) {
++ if (READ_ONCE(skb->dev->reg_state) == NETREG_UNREGISTERING) {
+ __skb_unlink(skb, &sd->process_queue);
+- kfree_skb(skb);
++ __skb_queue_tail(&list, skb);
+ rps_input_queue_head_incr(sd);
+ }
+ }
+ local_unlock_nested_bh(&softnet_data.process_queue_bh_lock);
+ local_bh_enable();
++
++ __skb_queue_purge_reason(&list, SKB_DROP_REASON_DEV_READY);
+ }
+
+ static bool flush_required(int cpu)
+diff --git a/net/core/dev.h b/net/core/dev.h
+index a5b166bbd169a0..caa13e431a6bcf 100644
+--- a/net/core/dev.h
++++ b/net/core/dev.h
+@@ -299,6 +299,18 @@ void xdp_do_check_flushed(struct napi_struct *napi);
+ static inline void xdp_do_check_flushed(struct napi_struct *napi) { }
+ #endif
+
++/* Best effort check that NAPI is not idle (can't be scheduled to run) */
++static inline void napi_assert_will_not_race(const struct napi_struct *napi)
++{
++ /* uninitialized instance, can't race */
++ if (!napi->poll_list.next)
++ return;
++
++ /* SCHED bit is set on disabled instances */
++ WARN_ON(!test_bit(NAPI_STATE_SCHED, &napi->state));
++ WARN_ON(READ_ONCE(napi->list_owner) != -1);
++}
++
+ void kick_defer_list_purge(struct softnet_data *sd, unsigned int cpu);
+
+ #define XMIT_RECURSION_LIMIT 8
+diff --git a/net/core/net-sysfs.c b/net/core/net-sysfs.c
+index 07cb99b114bdd7..88e001a4e0810b 100644
+--- a/net/core/net-sysfs.c
++++ b/net/core/net-sysfs.c
+@@ -232,11 +232,12 @@ static ssize_t carrier_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+ {
+ struct net_device *netdev = to_net_dev(dev);
+- int ret = -EINVAL;
++ int ret;
+
+ if (!rtnl_trylock())
+ return restart_syscall();
+
++ ret = -EINVAL;
+ if (netif_running(netdev)) {
+ /* Synchronize carrier state with link watch,
+ * see also rtnl_getlink().
+@@ -266,6 +267,7 @@ static ssize_t speed_show(struct device *dev,
+ if (!rtnl_trylock())
+ return restart_syscall();
+
++ ret = -EINVAL;
+ if (netif_running(netdev)) {
+ struct ethtool_link_ksettings cmd;
+
+@@ -292,6 +294,7 @@ static ssize_t duplex_show(struct device *dev,
+ if (!rtnl_trylock())
+ return restart_syscall();
+
++ ret = -EINVAL;
+ if (netif_running(netdev)) {
+ struct ethtool_link_ksettings cmd;
+
+diff --git a/net/core/page_pool.c b/net/core/page_pool.c
+index ede82c610936e5..cca51aa2e876f6 100644
+--- a/net/core/page_pool.c
++++ b/net/core/page_pool.c
+@@ -25,6 +25,7 @@
+
+ #include <trace/events/page_pool.h>
+
++#include "dev.h"
+ #include "mp_dmabuf_devmem.h"
+ #include "netmem_priv.h"
+ #include "page_pool_priv.h"
+@@ -1146,11 +1147,7 @@ void page_pool_disable_direct_recycling(struct page_pool *pool)
+ if (!pool->p.napi)
+ return;
+
+- /* To avoid races with recycling and additional barriers make sure
+- * pool and NAPI are unlinked when NAPI is disabled.
+- */
+- WARN_ON(!test_bit(NAPI_STATE_SCHED, &pool->p.napi->state));
+- WARN_ON(READ_ONCE(pool->p.napi->list_owner) != -1);
++ napi_assert_will_not_race(pool->p.napi);
+
+ mutex_lock(&page_pools_lock);
+ WRITE_ONCE(pool->p.napi, NULL);
+diff --git a/net/core/pktgen.c b/net/core/pktgen.c
+index 82b6a2c3c141f2..d3a76e81dd8863 100644
+--- a/net/core/pktgen.c
++++ b/net/core/pktgen.c
+@@ -898,6 +898,10 @@ static ssize_t get_labels(const char __user *buffer, struct pktgen_dev *pkt_dev)
+ pkt_dev->nr_labels = 0;
+ do {
+ __u32 tmp;
++
++ if (n >= MAX_MPLS_LABELS)
++ return -E2BIG;
++
+ len = hex32_arg(&buffer[i], 8, &tmp);
+ if (len <= 0)
+ return len;
+@@ -909,8 +913,6 @@ static ssize_t get_labels(const char __user *buffer, struct pktgen_dev *pkt_dev)
+ return -EFAULT;
+ i++;
+ n++;
+- if (n >= MAX_MPLS_LABELS)
+- return -E2BIG;
+ } while (c == ',');
+
+ pkt_dev->nr_labels = n;
+@@ -1896,8 +1898,8 @@ static ssize_t pktgen_thread_write(struct file *file,
+ i = len;
+
+ /* Read variable name */
+-
+- len = strn_len(&user_buffer[i], sizeof(name) - 1);
++ max = min(sizeof(name) - 1, count - i);
++ len = strn_len(&user_buffer[i], max);
+ if (len < 0)
+ return len;
+
+@@ -1927,7 +1929,8 @@ static ssize_t pktgen_thread_write(struct file *file,
+ if (!strcmp(name, "add_device")) {
+ char f[32];
+ memset(f, 0, 32);
+- len = strn_len(&user_buffer[i], sizeof(f) - 1);
++ max = min(sizeof(f) - 1, count - i);
++ len = strn_len(&user_buffer[i], max);
+ if (len < 0) {
+ ret = len;
+ goto out;
+diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
+index 80e006940f51a9..ab7041150f2955 100644
+--- a/net/core/rtnetlink.c
++++ b/net/core/rtnetlink.c
+@@ -3865,20 +3865,26 @@ static int __rtnl_newlink(struct sk_buff *skb, struct nlmsghdr *nlh,
+ {
+ struct nlattr ** const tb = tbs->tb;
+ struct net *net = sock_net(skb->sk);
++ struct net *device_net;
+ struct net_device *dev;
+ struct ifinfomsg *ifm;
+ bool link_specified;
+
++ /* When creating, lookup for existing device in target net namespace */
++ device_net = (nlh->nlmsg_flags & NLM_F_CREATE) &&
++ (nlh->nlmsg_flags & NLM_F_EXCL) ?
++ tgt_net : net;
++
+ ifm = nlmsg_data(nlh);
+ if (ifm->ifi_index > 0) {
+ link_specified = true;
+- dev = __dev_get_by_index(net, ifm->ifi_index);
++ dev = __dev_get_by_index(device_net, ifm->ifi_index);
+ } else if (ifm->ifi_index < 0) {
+ NL_SET_ERR_MSG(extack, "ifindex can't be negative");
+ return -EINVAL;
+ } else if (tb[IFLA_IFNAME] || tb[IFLA_ALT_IFNAME]) {
+ link_specified = true;
+- dev = rtnl_dev_get(net, tb);
++ dev = rtnl_dev_get(device_net, tb);
+ } else {
+ link_specified = false;
+ dev = NULL;
+diff --git a/net/dsa/tag_ksz.c b/net/dsa/tag_ksz.c
+index c33d4bf179297c..0b7564b53790da 100644
+--- a/net/dsa/tag_ksz.c
++++ b/net/dsa/tag_ksz.c
+@@ -140,7 +140,12 @@ static struct sk_buff *ksz8795_xmit(struct sk_buff *skb, struct net_device *dev)
+
+ static struct sk_buff *ksz8795_rcv(struct sk_buff *skb, struct net_device *dev)
+ {
+- u8 *tag = skb_tail_pointer(skb) - KSZ_EGRESS_TAG_LEN;
++ u8 *tag;
++
++ if (skb_linearize(skb))
++ return NULL;
++
++ tag = skb_tail_pointer(skb) - KSZ_EGRESS_TAG_LEN;
+
+ return ksz_common_rcv(skb, dev, tag[0] & KSZ8795_TAIL_TAG_EG_PORT_M,
+ KSZ_EGRESS_TAG_LEN);
+@@ -311,10 +316,16 @@ static struct sk_buff *ksz9477_xmit(struct sk_buff *skb,
+
+ static struct sk_buff *ksz9477_rcv(struct sk_buff *skb, struct net_device *dev)
+ {
+- /* Tag decoding */
+- u8 *tag = skb_tail_pointer(skb) - KSZ_EGRESS_TAG_LEN;
+- unsigned int port = tag[0] & KSZ9477_TAIL_TAG_EG_PORT_M;
+ unsigned int len = KSZ_EGRESS_TAG_LEN;
++ unsigned int port;
++ u8 *tag;
++
++ if (skb_linearize(skb))
++ return NULL;
++
++ /* Tag decoding */
++ tag = skb_tail_pointer(skb) - KSZ_EGRESS_TAG_LEN;
++ port = tag[0] & KSZ9477_TAIL_TAG_EG_PORT_M;
+
+ /* Extra 4-bytes PTP timestamp */
+ if (tag[0] & KSZ9477_PTP_TAG_INDICATION) {
+diff --git a/net/hsr/hsr_device.c b/net/hsr/hsr_device.c
+index b6fb18469439ae..2c43776b7c4fb2 100644
+--- a/net/hsr/hsr_device.c
++++ b/net/hsr/hsr_device.c
+@@ -616,6 +616,7 @@ static struct hsr_proto_ops hsr_ops = {
+ .drop_frame = hsr_drop_frame,
+ .fill_frame_info = hsr_fill_frame_info,
+ .invalid_dan_ingress_frame = hsr_invalid_dan_ingress_frame,
++ .register_frame_out = hsr_register_frame_out,
+ };
+
+ static struct hsr_proto_ops prp_ops = {
+@@ -626,6 +627,7 @@ static struct hsr_proto_ops prp_ops = {
+ .fill_frame_info = prp_fill_frame_info,
+ .handle_san_frame = prp_handle_san_frame,
+ .update_san_info = prp_update_san_info,
++ .register_frame_out = prp_register_frame_out,
+ };
+
+ void hsr_dev_setup(struct net_device *dev)
+diff --git a/net/hsr/hsr_forward.c b/net/hsr/hsr_forward.c
+index a4bacf1985558a..c67c0d35921de0 100644
+--- a/net/hsr/hsr_forward.c
++++ b/net/hsr/hsr_forward.c
+@@ -536,8 +536,8 @@ static void hsr_forward_do(struct hsr_frame_info *frame)
+ * Also for SAN, this shouldn't be done.
+ */
+ if (!frame->is_from_san &&
+- hsr_register_frame_out(port, frame->node_src,
+- frame->sequence_nr))
++ hsr->proto_ops->register_frame_out &&
++ hsr->proto_ops->register_frame_out(port, frame))
+ continue;
+
+ if (frame->is_supervision && port->type == HSR_PT_MASTER &&
+diff --git a/net/hsr/hsr_framereg.c b/net/hsr/hsr_framereg.c
+index 73bc6f659812f6..85991fab7db584 100644
+--- a/net/hsr/hsr_framereg.c
++++ b/net/hsr/hsr_framereg.c
+@@ -35,6 +35,7 @@ static bool seq_nr_after(u16 a, u16 b)
+
+ #define seq_nr_before(a, b) seq_nr_after((b), (a))
+ #define seq_nr_before_or_eq(a, b) (!seq_nr_after((a), (b)))
++#define PRP_DROP_WINDOW_LEN 32768
+
+ bool hsr_addr_is_redbox(struct hsr_priv *hsr, unsigned char *addr)
+ {
+@@ -176,8 +177,11 @@ static struct hsr_node *hsr_add_node(struct hsr_priv *hsr,
+ new_node->time_in[i] = now;
+ new_node->time_out[i] = now;
+ }
+- for (i = 0; i < HSR_PT_PORTS; i++)
++ for (i = 0; i < HSR_PT_PORTS; i++) {
+ new_node->seq_out[i] = seq_out;
++ new_node->seq_expected[i] = seq_out + 1;
++ new_node->seq_start[i] = seq_out + 1;
++ }
+
+ if (san && hsr->proto_ops->handle_san_frame)
+ hsr->proto_ops->handle_san_frame(san, rx_port, new_node);
+@@ -482,9 +486,11 @@ void hsr_register_frame_in(struct hsr_node *node, struct hsr_port *port,
+ * 0 otherwise, or
+ * negative error code on error
+ */
+-int hsr_register_frame_out(struct hsr_port *port, struct hsr_node *node,
+- u16 sequence_nr)
++int hsr_register_frame_out(struct hsr_port *port, struct hsr_frame_info *frame)
+ {
++ struct hsr_node *node = frame->node_src;
++ u16 sequence_nr = frame->sequence_nr;
++
+ spin_lock_bh(&node->seq_out_lock);
+ if (seq_nr_before_or_eq(sequence_nr, node->seq_out[port->type]) &&
+ time_is_after_jiffies(node->time_out[port->type] +
+@@ -499,6 +505,89 @@ int hsr_register_frame_out(struct hsr_port *port, struct hsr_node *node,
+ return 0;
+ }
+
++/* Adaptation of the PRP duplicate discard algorithm described in wireshark
++ * wiki (https://wiki.wireshark.org/PRP)
++ *
++ * A drop window is maintained for both LANs with start sequence set to the
++ * first sequence accepted on the LAN that has not been seen on the other LAN,
++ * and expected sequence set to the latest received sequence number plus one.
++ *
++ * When a frame is received on either LAN it is compared against the received
++ * frames on the other LAN. If it is outside the drop window of the other LAN
++ * the frame is accepted and the drop window is updated.
++ * The drop window for the other LAN is reset.
++ *
++ * 'port' is the outgoing interface
++ * 'frame' is the frame to be sent
++ *
++ * Return:
++ * 1 if frame can be shown to have been sent recently on this interface,
++ * 0 otherwise
++ */
++int prp_register_frame_out(struct hsr_port *port, struct hsr_frame_info *frame)
++{
++ enum hsr_port_type other_port;
++ enum hsr_port_type rcv_port;
++ struct hsr_node *node;
++ u16 sequence_diff;
++ u16 sequence_exp;
++ u16 sequence_nr;
++
++ /* out-going frames are always in order
++ * and can be checked the same way as for HSR
++ */
++ if (frame->port_rcv->type == HSR_PT_MASTER)
++ return hsr_register_frame_out(port, frame);
++
++ /* for PRP we should only forward frames from the slave ports
++ * to the master port
++ */
++ if (port->type != HSR_PT_MASTER)
++ return 1;
++
++ node = frame->node_src;
++ sequence_nr = frame->sequence_nr;
++ sequence_exp = sequence_nr + 1;
++ rcv_port = frame->port_rcv->type;
++ other_port = rcv_port == HSR_PT_SLAVE_A ? HSR_PT_SLAVE_B :
++ HSR_PT_SLAVE_A;
++
++ spin_lock_bh(&node->seq_out_lock);
++ if (time_is_before_jiffies(node->time_out[port->type] +
++ msecs_to_jiffies(HSR_ENTRY_FORGET_TIME)) ||
++ (node->seq_start[rcv_port] == node->seq_expected[rcv_port] &&
++ node->seq_start[other_port] == node->seq_expected[other_port])) {
++ /* the node hasn't been sending for a while
++ * or both drop windows are empty, forward the frame
++ */
++ node->seq_start[rcv_port] = sequence_nr;
++ } else if (seq_nr_before(sequence_nr, node->seq_expected[other_port]) &&
++ seq_nr_before_or_eq(node->seq_start[other_port], sequence_nr)) {
++ /* drop the frame, update the drop window for the other port
++ * and reset our drop window
++ */
++ node->seq_start[other_port] = sequence_exp;
++ node->seq_expected[rcv_port] = sequence_exp;
++ node->seq_start[rcv_port] = node->seq_expected[rcv_port];
++ spin_unlock_bh(&node->seq_out_lock);
++ return 1;
++ }
++
++ /* update the drop window for the port where this frame was received
++ * and clear the drop window for the other port
++ */
++ node->seq_start[other_port] = node->seq_expected[other_port];
++ node->seq_expected[rcv_port] = sequence_exp;
++ sequence_diff = sequence_exp - node->seq_start[rcv_port];
++ if (sequence_diff > PRP_DROP_WINDOW_LEN)
++ node->seq_start[rcv_port] = sequence_exp - PRP_DROP_WINDOW_LEN;
++
++ node->time_out[port->type] = jiffies;
++ node->seq_out[port->type] = sequence_nr;
++ spin_unlock_bh(&node->seq_out_lock);
++ return 0;
++}
++
+ static struct hsr_port *get_late_port(struct hsr_priv *hsr,
+ struct hsr_node *node)
+ {
+diff --git a/net/hsr/hsr_framereg.h b/net/hsr/hsr_framereg.h
+index 993fa950d81449..b04948659d84d8 100644
+--- a/net/hsr/hsr_framereg.h
++++ b/net/hsr/hsr_framereg.h
+@@ -44,8 +44,7 @@ void hsr_addr_subst_dest(struct hsr_node *node_src, struct sk_buff *skb,
+
+ void hsr_register_frame_in(struct hsr_node *node, struct hsr_port *port,
+ u16 sequence_nr);
+-int hsr_register_frame_out(struct hsr_port *port, struct hsr_node *node,
+- u16 sequence_nr);
++int hsr_register_frame_out(struct hsr_port *port, struct hsr_frame_info *frame);
+
+ void hsr_prune_nodes(struct timer_list *t);
+ void hsr_prune_proxy_nodes(struct timer_list *t);
+@@ -73,6 +72,8 @@ void prp_update_san_info(struct hsr_node *node, bool is_sup);
+ bool hsr_is_node_in_db(struct list_head *node_db,
+ const unsigned char addr[ETH_ALEN]);
+
++int prp_register_frame_out(struct hsr_port *port, struct hsr_frame_info *frame);
++
+ struct hsr_node {
+ struct list_head mac_list;
+ /* Protect R/W access to seq_out */
+@@ -89,6 +90,9 @@ struct hsr_node {
+ bool san_b;
+ u16 seq_out[HSR_PT_PORTS];
+ bool removed;
++ /* PRP specific duplicate handling */
++ u16 seq_expected[HSR_PT_PORTS];
++ u16 seq_start[HSR_PT_PORTS];
+ struct rcu_head rcu_head;
+ };
+
+diff --git a/net/hsr/hsr_main.h b/net/hsr/hsr_main.h
+index 7561845b8bf6f4..1bc47b17a2968c 100644
+--- a/net/hsr/hsr_main.h
++++ b/net/hsr/hsr_main.h
+@@ -175,6 +175,8 @@ struct hsr_proto_ops {
+ struct hsr_frame_info *frame);
+ bool (*invalid_dan_ingress_frame)(__be16 protocol);
+ void (*update_san_info)(struct hsr_node *node, bool is_sup);
++ int (*register_frame_out)(struct hsr_port *port,
++ struct hsr_frame_info *frame);
+ };
+
+ struct hsr_self_node {
+diff --git a/net/ipv4/esp4.c b/net/ipv4/esp4.c
+index 0e4076866c0a40..f14a41ee4aa101 100644
+--- a/net/ipv4/esp4.c
++++ b/net/ipv4/esp4.c
+@@ -120,47 +120,16 @@ static void esp_ssg_unref(struct xfrm_state *x, void *tmp, struct sk_buff *skb)
+ }
+
+ #ifdef CONFIG_INET_ESPINTCP
+-struct esp_tcp_sk {
+- struct sock *sk;
+- struct rcu_head rcu;
+-};
+-
+-static void esp_free_tcp_sk(struct rcu_head *head)
+-{
+- struct esp_tcp_sk *esk = container_of(head, struct esp_tcp_sk, rcu);
+-
+- sock_put(esk->sk);
+- kfree(esk);
+-}
+-
+ static struct sock *esp_find_tcp_sk(struct xfrm_state *x)
+ {
+ struct xfrm_encap_tmpl *encap = x->encap;
+ struct net *net = xs_net(x);
+- struct esp_tcp_sk *esk;
+ __be16 sport, dport;
+- struct sock *nsk;
+ struct sock *sk;
+
+- sk = rcu_dereference(x->encap_sk);
+- if (sk && sk->sk_state == TCP_ESTABLISHED)
+- return sk;
+-
+ spin_lock_bh(&x->lock);
+ sport = encap->encap_sport;
+ dport = encap->encap_dport;
+- nsk = rcu_dereference_protected(x->encap_sk,
+- lockdep_is_held(&x->lock));
+- if (sk && sk == nsk) {
+- esk = kmalloc(sizeof(*esk), GFP_ATOMIC);
+- if (!esk) {
+- spin_unlock_bh(&x->lock);
+- return ERR_PTR(-ENOMEM);
+- }
+- RCU_INIT_POINTER(x->encap_sk, NULL);
+- esk->sk = sk;
+- call_rcu(&esk->rcu, esp_free_tcp_sk);
+- }
+ spin_unlock_bh(&x->lock);
+
+ sk = inet_lookup_established(net, net->ipv4.tcp_death_row.hashinfo, x->id.daddr.a4,
+@@ -173,20 +142,6 @@ static struct sock *esp_find_tcp_sk(struct xfrm_state *x)
+ return ERR_PTR(-EINVAL);
+ }
+
+- spin_lock_bh(&x->lock);
+- nsk = rcu_dereference_protected(x->encap_sk,
+- lockdep_is_held(&x->lock));
+- if (encap->encap_sport != sport ||
+- encap->encap_dport != dport) {
+- sock_put(sk);
+- sk = nsk ?: ERR_PTR(-EREMCHG);
+- } else if (sk == nsk) {
+- sock_put(sk);
+- } else {
+- rcu_assign_pointer(x->encap_sk, sk);
+- }
+- spin_unlock_bh(&x->lock);
+-
+ return sk;
+ }
+
+@@ -199,8 +154,10 @@ static int esp_output_tcp_finish(struct xfrm_state *x, struct sk_buff *skb)
+
+ sk = esp_find_tcp_sk(x);
+ err = PTR_ERR_OR_ZERO(sk);
+- if (err)
++ if (err) {
++ kfree_skb(skb);
+ goto out;
++ }
+
+ bh_lock_sock(sk);
+ if (sock_owned_by_user(sk))
+@@ -209,6 +166,8 @@ static int esp_output_tcp_finish(struct xfrm_state *x, struct sk_buff *skb)
+ err = espintcp_push_skb(sk, skb);
+ bh_unlock_sock(sk);
+
++ sock_put(sk);
++
+ out:
+ rcu_read_unlock();
+ return err;
+@@ -392,6 +351,8 @@ static struct ip_esp_hdr *esp_output_tcp_encap(struct xfrm_state *x,
+ if (IS_ERR(sk))
+ return ERR_CAST(sk);
+
++ sock_put(sk);
++
+ *lenp = htons(len);
+ esph = (struct ip_esp_hdr *)(lenp + 1);
+
+diff --git a/net/ipv4/fib_frontend.c b/net/ipv4/fib_frontend.c
+index 272e42d813230c..8470e259d8fd8e 100644
+--- a/net/ipv4/fib_frontend.c
++++ b/net/ipv4/fib_frontend.c
+@@ -553,18 +553,16 @@ static int rtentry_to_fib_config(struct net *net, int cmd, struct rtentry *rt,
+ const struct in_ifaddr *ifa;
+ struct in_device *in_dev;
+
+- in_dev = __in_dev_get_rtnl(dev);
++ in_dev = __in_dev_get_rtnl_net(dev);
+ if (!in_dev)
+ return -ENODEV;
+
+ *colon = ':';
+
+- rcu_read_lock();
+- in_dev_for_each_ifa_rcu(ifa, in_dev) {
++ in_dev_for_each_ifa_rtnl_net(net, ifa, in_dev) {
+ if (strcmp(ifa->ifa_label, devname) == 0)
+ break;
+ }
+- rcu_read_unlock();
+
+ if (!ifa)
+ return -ENODEV;
+@@ -635,7 +633,7 @@ int ip_rt_ioctl(struct net *net, unsigned int cmd, struct rtentry *rt)
+ if (!ns_capable(net->user_ns, CAP_NET_ADMIN))
+ return -EPERM;
+
+- rtnl_lock();
++ rtnl_net_lock(net);
+ err = rtentry_to_fib_config(net, cmd, rt, &cfg);
+ if (err == 0) {
+ struct fib_table *tb;
+@@ -659,7 +657,7 @@ int ip_rt_ioctl(struct net *net, unsigned int cmd, struct rtentry *rt)
+ /* allocated by rtentry_to_fib_config() */
+ kfree(cfg.fc_mx);
+ }
+- rtnl_unlock();
++ rtnl_net_unlock(net);
+ return err;
+ }
+ return -EINVAL;
+@@ -837,19 +835,33 @@ static int rtm_to_fib_config(struct net *net, struct sk_buff *skb,
+ }
+ }
+
++ if (cfg->fc_dst_len > 32) {
++ NL_SET_ERR_MSG(extack, "Invalid prefix length");
++ err = -EINVAL;
++ goto errout;
++ }
++
++ if (cfg->fc_dst_len < 32 && (ntohl(cfg->fc_dst) << cfg->fc_dst_len)) {
++ NL_SET_ERR_MSG(extack, "Invalid prefix for given prefix length");
++ err = -EINVAL;
++ goto errout;
++ }
++
+ if (cfg->fc_nh_id) {
+ if (cfg->fc_oif || cfg->fc_gw_family ||
+ cfg->fc_encap || cfg->fc_mp) {
+ NL_SET_ERR_MSG(extack,
+ "Nexthop specification and nexthop id are mutually exclusive");
+- return -EINVAL;
++ err = -EINVAL;
++ goto errout;
+ }
+ }
+
+ if (has_gw && has_via) {
+ NL_SET_ERR_MSG(extack,
+ "Nexthop configuration can not contain both GATEWAY and VIA");
+- return -EINVAL;
++ err = -EINVAL;
++ goto errout;
+ }
+
+ if (!cfg->fc_table)
+diff --git a/net/ipv4/fib_rules.c b/net/ipv4/fib_rules.c
+index 9517b8667e0002..041c46787d9414 100644
+--- a/net/ipv4/fib_rules.c
++++ b/net/ipv4/fib_rules.c
+@@ -245,9 +245,9 @@ static int fib4_rule_configure(struct fib_rule *rule, struct sk_buff *skb,
+ struct nlattr **tb,
+ struct netlink_ext_ack *extack)
+ {
+- struct net *net = sock_net(skb->sk);
++ struct fib4_rule *rule4 = (struct fib4_rule *)rule;
++ struct net *net = rule->fr_net;
+ int err = -EINVAL;
+- struct fib4_rule *rule4 = (struct fib4_rule *) rule;
+
+ if (tb[FRA_FLOWLABEL] || tb[FRA_FLOWLABEL_MASK]) {
+ NL_SET_ERR_MSG(extack,
+diff --git a/net/ipv4/fib_trie.c b/net/ipv4/fib_trie.c
+index d6411ac810961a..59a6f0a9638f99 100644
+--- a/net/ipv4/fib_trie.c
++++ b/net/ipv4/fib_trie.c
+@@ -1187,22 +1187,6 @@ static int fib_insert_alias(struct trie *t, struct key_vector *tp,
+ return 0;
+ }
+
+-static bool fib_valid_key_len(u32 key, u8 plen, struct netlink_ext_ack *extack)
+-{
+- if (plen > KEYLENGTH) {
+- NL_SET_ERR_MSG(extack, "Invalid prefix length");
+- return false;
+- }
+-
+- if ((plen < KEYLENGTH) && (key << plen)) {
+- NL_SET_ERR_MSG(extack,
+- "Invalid prefix for given prefix length");
+- return false;
+- }
+-
+- return true;
+-}
+-
+ static void fib_remove_alias(struct trie *t, struct key_vector *tp,
+ struct key_vector *l, struct fib_alias *old);
+
+@@ -1223,9 +1207,6 @@ int fib_table_insert(struct net *net, struct fib_table *tb,
+
+ key = ntohl(cfg->fc_dst);
+
+- if (!fib_valid_key_len(key, plen, extack))
+- return -EINVAL;
+-
+ pr_debug("Insert table=%u %08x/%d\n", tb->tb_id, key, plen);
+
+ fi = fib_create_info(cfg, extack);
+@@ -1717,9 +1698,6 @@ int fib_table_delete(struct net *net, struct fib_table *tb,
+
+ key = ntohl(cfg->fc_dst);
+
+- if (!fib_valid_key_len(key, plen, extack))
+- return -EINVAL;
+-
+ l = fib_find_node(t, &tp, key);
+ if (!l)
+ return -ESRCH;
+diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c
+index 9bfcfd016e1827..2b4a588247639e 100644
+--- a/net/ipv4/inet_hashtables.c
++++ b/net/ipv4/inet_hashtables.c
+@@ -1230,22 +1230,37 @@ int inet_ehash_locks_alloc(struct inet_hashinfo *hashinfo)
+ {
+ unsigned int locksz = sizeof(spinlock_t);
+ unsigned int i, nblocks = 1;
++ spinlock_t *ptr = NULL;
+
+- if (locksz != 0) {
+- /* allocate 2 cache lines or at least one spinlock per cpu */
+- nblocks = max(2U * L1_CACHE_BYTES / locksz, 1U);
+- nblocks = roundup_pow_of_two(nblocks * num_possible_cpus());
++ if (locksz == 0)
++ goto set_mask;
+
+- /* no more locks than number of hash buckets */
+- nblocks = min(nblocks, hashinfo->ehash_mask + 1);
++ /* Allocate 2 cache lines or at least one spinlock per cpu. */
++ nblocks = max(2U * L1_CACHE_BYTES / locksz, 1U) * num_possible_cpus();
+
+- hashinfo->ehash_locks = kvmalloc_array(nblocks, locksz, GFP_KERNEL);
+- if (!hashinfo->ehash_locks)
+- return -ENOMEM;
++ /* At least one page per NUMA node. */
++ nblocks = max(nblocks, num_online_nodes() * PAGE_SIZE / locksz);
++
++ nblocks = roundup_pow_of_two(nblocks);
++
++ /* No more locks than number of hash buckets. */
++ nblocks = min(nblocks, hashinfo->ehash_mask + 1);
+
+- for (i = 0; i < nblocks; i++)
+- spin_lock_init(&hashinfo->ehash_locks[i]);
++ if (num_online_nodes() > 1) {
++ /* Use vmalloc() to allow NUMA policy to spread pages
++ * on all available nodes if desired.
++ */
++ ptr = vmalloc_array(nblocks, locksz);
++ }
++ if (!ptr) {
++ ptr = kvmalloc_array(nblocks, locksz, GFP_KERNEL);
++ if (!ptr)
++ return -ENOMEM;
+ }
++ for (i = 0; i < nblocks; i++)
++ spin_lock_init(&ptr[i]);
++ hashinfo->ehash_locks = ptr;
++set_mask:
+ hashinfo->ehash_locks_mask = nblocks - 1;
+ return 0;
+ }
+diff --git a/net/ipv4/ip_gre.c b/net/ipv4/ip_gre.c
+index ed1b6b44faf804..c9f11a046c2630 100644
+--- a/net/ipv4/ip_gre.c
++++ b/net/ipv4/ip_gre.c
+@@ -141,7 +141,6 @@ static int ipgre_err(struct sk_buff *skb, u32 info,
+ const struct iphdr *iph;
+ const int type = icmp_hdr(skb)->type;
+ const int code = icmp_hdr(skb)->code;
+- unsigned int data_len = 0;
+ struct ip_tunnel *t;
+
+ if (tpi->proto == htons(ETH_P_TEB))
+@@ -182,7 +181,6 @@ static int ipgre_err(struct sk_buff *skb, u32 info,
+ case ICMP_TIME_EXCEEDED:
+ if (code != ICMP_EXC_TTL)
+ return 0;
+- data_len = icmp_hdr(skb)->un.reserved[1] * 4; /* RFC 4884 4.1 */
+ break;
+
+ case ICMP_REDIRECT:
+@@ -190,10 +188,16 @@ static int ipgre_err(struct sk_buff *skb, u32 info,
+ }
+
+ #if IS_ENABLED(CONFIG_IPV6)
+- if (tpi->proto == htons(ETH_P_IPV6) &&
+- !ip6_err_gen_icmpv6_unreach(skb, iph->ihl * 4 + tpi->hdr_len,
+- type, data_len))
+- return 0;
++ if (tpi->proto == htons(ETH_P_IPV6)) {
++ unsigned int data_len = 0;
++
++ if (type == ICMP_TIME_EXCEEDED)
++ data_len = icmp_hdr(skb)->un.reserved[1] * 4; /* RFC 4884 4.1 */
++
++ if (!ip6_err_gen_icmpv6_unreach(skb, iph->ihl * 4 + tpi->hdr_len,
++ type, data_len))
++ return 0;
++ }
+ #endif
+
+ if (t->parms.iph.daddr == 0 ||
+diff --git a/net/ipv4/ipmr.c b/net/ipv4/ipmr.c
+index 21ae7594a8525a..69df45c4a0aae5 100644
+--- a/net/ipv4/ipmr.c
++++ b/net/ipv4/ipmr.c
+@@ -120,11 +120,6 @@ static void ipmr_expire_process(struct timer_list *t);
+ lockdep_rtnl_is_held() || \
+ list_empty(&net->ipv4.mr_tables))
+
+-static bool ipmr_can_free_table(struct net *net)
+-{
+- return !check_net(net) || !net_initialized(net);
+-}
+-
+ static struct mr_table *ipmr_mr_table_iter(struct net *net,
+ struct mr_table *mrt)
+ {
+@@ -317,11 +312,6 @@ EXPORT_SYMBOL(ipmr_rule_default);
+ #define ipmr_for_each_table(mrt, net) \
+ for (mrt = net->ipv4.mrt; mrt; mrt = NULL)
+
+-static bool ipmr_can_free_table(struct net *net)
+-{
+- return !check_net(net);
+-}
+-
+ static struct mr_table *ipmr_mr_table_iter(struct net *net,
+ struct mr_table *mrt)
+ {
+@@ -437,7 +427,7 @@ static void ipmr_free_table(struct mr_table *mrt)
+ {
+ struct net *net = read_pnet(&mrt->net);
+
+- WARN_ON_ONCE(!ipmr_can_free_table(net));
++ WARN_ON_ONCE(!mr_can_free_table(net));
+
+ timer_shutdown_sync(&mrt->ipmr_expire_timer);
+ mroute_clean_tables(mrt, MRT_FLUSH_VIFS | MRT_FLUSH_VIFS_STATIC |
+diff --git a/net/ipv4/proc.c b/net/ipv4/proc.c
+index affd21a0f57281..10cbeb76c27456 100644
+--- a/net/ipv4/proc.c
++++ b/net/ipv4/proc.c
+@@ -189,6 +189,7 @@ static const struct snmp_mib snmp4_net_list[] = {
+ SNMP_MIB_ITEM("TWKilled", LINUX_MIB_TIMEWAITKILLED),
+ SNMP_MIB_ITEM("PAWSActive", LINUX_MIB_PAWSACTIVEREJECTED),
+ SNMP_MIB_ITEM("PAWSEstab", LINUX_MIB_PAWSESTABREJECTED),
++ SNMP_MIB_ITEM("TSEcrRejected", LINUX_MIB_TSECRREJECTED),
+ SNMP_MIB_ITEM("PAWSOldAck", LINUX_MIB_PAWS_OLD_ACK),
+ SNMP_MIB_ITEM("DelayedACKs", LINUX_MIB_DELAYEDACKS),
+ SNMP_MIB_ITEM("DelayedACKLocked", LINUX_MIB_DELAYEDACKLOCKED),
+diff --git a/net/ipv4/syncookies.c b/net/ipv4/syncookies.c
+index 1948d15f1f281b..25976fa7768c95 100644
+--- a/net/ipv4/syncookies.c
++++ b/net/ipv4/syncookies.c
+@@ -279,6 +279,7 @@ static int cookie_tcp_reqsk_init(struct sock *sk, struct sk_buff *skb,
+ ireq->smc_ok = 0;
+
+ treq->snt_synack = 0;
++ treq->snt_tsval_first = 0;
+ treq->tfo_listener = false;
+ treq->txhash = net_tx_rndhash();
+ treq->rcv_isn = ntohl(th->seq) - 1;
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index 0cbf81bf3d4515..1b09b4d76c296c 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -419,6 +419,20 @@ static bool tcp_ecn_rcv_ecn_echo(const struct tcp_sock *tp, const struct tcphdr
+ return false;
+ }
+
++static void tcp_count_delivered_ce(struct tcp_sock *tp, u32 ecn_count)
++{
++ tp->delivered_ce += ecn_count;
++}
++
++/* Updates the delivered and delivered_ce counts */
++static void tcp_count_delivered(struct tcp_sock *tp, u32 delivered,
++ bool ece_ack)
++{
++ tp->delivered += delivered;
++ if (ece_ack)
++ tcp_count_delivered_ce(tp, delivered);
++}
++
+ /* Buffer size and advertised window tuning.
+ *
+ * 1. Tuning sk->sk_sndbuf, when connection enters established state.
+@@ -1154,15 +1168,6 @@ void tcp_mark_skb_lost(struct sock *sk, struct sk_buff *skb)
+ }
+ }
+
+-/* Updates the delivered and delivered_ce counts */
+-static void tcp_count_delivered(struct tcp_sock *tp, u32 delivered,
+- bool ece_ack)
+-{
+- tp->delivered += delivered;
+- if (ece_ack)
+- tp->delivered_ce += delivered;
+-}
+-
+ /* This procedure tags the retransmission queue when SACKs arrive.
+ *
+ * We have three tag bits: SACKED(S), RETRANS(R) and LOST(L).
+@@ -3862,12 +3867,23 @@ static void tcp_process_tlp_ack(struct sock *sk, u32 ack, int flag)
+ }
+ }
+
+-static inline void tcp_in_ack_event(struct sock *sk, u32 flags)
++static void tcp_in_ack_event(struct sock *sk, int flag)
+ {
+ const struct inet_connection_sock *icsk = inet_csk(sk);
+
+- if (icsk->icsk_ca_ops->in_ack_event)
+- icsk->icsk_ca_ops->in_ack_event(sk, flags);
++ if (icsk->icsk_ca_ops->in_ack_event) {
++ u32 ack_ev_flags = 0;
++
++ if (flag & FLAG_WIN_UPDATE)
++ ack_ev_flags |= CA_ACK_WIN_UPDATE;
++ if (flag & FLAG_SLOWPATH) {
++ ack_ev_flags |= CA_ACK_SLOWPATH;
++ if (flag & FLAG_ECE)
++ ack_ev_flags |= CA_ACK_ECE;
++ }
++
++ icsk->icsk_ca_ops->in_ack_event(sk, ack_ev_flags);
++ }
+ }
+
+ /* Congestion control has updated the cwnd already. So if we're in
+@@ -3984,12 +4000,8 @@ static int tcp_ack(struct sock *sk, const struct sk_buff *skb, int flag)
+ tcp_snd_una_update(tp, ack);
+ flag |= FLAG_WIN_UPDATE;
+
+- tcp_in_ack_event(sk, CA_ACK_WIN_UPDATE);
+-
+ NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPHPACKS);
+ } else {
+- u32 ack_ev_flags = CA_ACK_SLOWPATH;
+-
+ if (ack_seq != TCP_SKB_CB(skb)->end_seq)
+ flag |= FLAG_DATA;
+ else
+@@ -4001,19 +4013,12 @@ static int tcp_ack(struct sock *sk, const struct sk_buff *skb, int flag)
+ flag |= tcp_sacktag_write_queue(sk, skb, prior_snd_una,
+ &sack_state);
+
+- if (tcp_ecn_rcv_ecn_echo(tp, tcp_hdr(skb))) {
++ if (tcp_ecn_rcv_ecn_echo(tp, tcp_hdr(skb)))
+ flag |= FLAG_ECE;
+- ack_ev_flags |= CA_ACK_ECE;
+- }
+
+ if (sack_state.sack_delivered)
+ tcp_count_delivered(tp, sack_state.sack_delivered,
+ flag & FLAG_ECE);
+-
+- if (flag & FLAG_WIN_UPDATE)
+- ack_ev_flags |= CA_ACK_WIN_UPDATE;
+-
+- tcp_in_ack_event(sk, ack_ev_flags);
+ }
+
+ /* This is a deviation from RFC3168 since it states that:
+@@ -4040,6 +4045,8 @@ static int tcp_ack(struct sock *sk, const struct sk_buff *skb, int flag)
+
+ tcp_rack_update_reo_wnd(sk, &rs);
+
++ tcp_in_ack_event(sk, flag);
++
+ if (tp->tlp_high_seq)
+ tcp_process_tlp_ack(sk, ack, flag);
+
+@@ -4071,6 +4078,7 @@ static int tcp_ack(struct sock *sk, const struct sk_buff *skb, int flag)
+ return 1;
+
+ no_queue:
++ tcp_in_ack_event(sk, flag);
+ /* If data was DSACKed, see if we can undo a cwnd reduction. */
+ if (flag & FLAG_DSACKING_ACK) {
+ tcp_fastretrans_alert(sk, prior_snd_una, num_dupack, &flag,
+@@ -7081,6 +7089,7 @@ static void tcp_openreq_init(struct request_sock *req,
+ tcp_rsk(req)->rcv_isn = TCP_SKB_CB(skb)->seq;
+ tcp_rsk(req)->rcv_nxt = TCP_SKB_CB(skb)->seq + 1;
+ tcp_rsk(req)->snt_synack = 0;
++ tcp_rsk(req)->snt_tsval_first = 0;
+ tcp_rsk(req)->last_oow_ack_time = 0;
+ req->mss = rx_opt->mss_clamp;
+ req->ts_recent = rx_opt->saw_tstamp ? rx_opt->rcv_tsval : 0;
+diff --git a/net/ipv4/tcp_minisocks.c b/net/ipv4/tcp_minisocks.c
+index dfdb7a4608a85d..0d4ff5f2352f8d 100644
+--- a/net/ipv4/tcp_minisocks.c
++++ b/net/ipv4/tcp_minisocks.c
+@@ -665,6 +665,7 @@ struct sock *tcp_check_req(struct sock *sk, struct sk_buff *skb,
+ struct sock *child;
+ const struct tcphdr *th = tcp_hdr(skb);
+ __be32 flg = tcp_flag_word(th) & (TCP_FLAG_RST|TCP_FLAG_SYN|TCP_FLAG_ACK);
++ bool tsecr_reject = false;
+ bool paws_reject = false;
+ bool own_req;
+
+@@ -674,8 +675,13 @@ struct sock *tcp_check_req(struct sock *sk, struct sk_buff *skb,
+
+ if (tmp_opt.saw_tstamp) {
+ tmp_opt.ts_recent = READ_ONCE(req->ts_recent);
+- if (tmp_opt.rcv_tsecr)
++ if (tmp_opt.rcv_tsecr) {
++ if (inet_rsk(req)->tstamp_ok && !fastopen)
++ tsecr_reject = !between(tmp_opt.rcv_tsecr,
++ tcp_rsk(req)->snt_tsval_first,
++ READ_ONCE(tcp_rsk(req)->snt_tsval_last));
+ tmp_opt.rcv_tsecr -= tcp_rsk(req)->ts_off;
++ }
+ /* We do not store true stamp, but it is not required,
+ * it can be estimated (approximately)
+ * from another data.
+@@ -790,18 +796,14 @@ struct sock *tcp_check_req(struct sock *sk, struct sk_buff *skb,
+ tcp_rsk(req)->snt_isn + 1))
+ return sk;
+
+- /* Also, it would be not so bad idea to check rcv_tsecr, which
+- * is essentially ACK extension and too early or too late values
+- * should cause reset in unsynchronized states.
+- */
+-
+ /* RFC793: "first check sequence number". */
+
+- if (paws_reject || !tcp_in_window(TCP_SKB_CB(skb)->seq,
+- TCP_SKB_CB(skb)->end_seq,
+- tcp_rsk(req)->rcv_nxt,
+- tcp_rsk(req)->rcv_nxt +
+- tcp_synack_window(req))) {
++ if (paws_reject || tsecr_reject ||
++ !tcp_in_window(TCP_SKB_CB(skb)->seq,
++ TCP_SKB_CB(skb)->end_seq,
++ tcp_rsk(req)->rcv_nxt,
++ tcp_rsk(req)->rcv_nxt +
++ tcp_synack_window(req))) {
+ /* Out of window: send ACK and drop. */
+ if (!(flg & TCP_FLAG_RST) &&
+ !tcp_oow_rate_limited(sock_net(sk), skb,
+@@ -810,6 +812,8 @@ struct sock *tcp_check_req(struct sock *sk, struct sk_buff *skb,
+ req->rsk_ops->send_ack(sk, skb, req);
+ if (paws_reject)
+ NET_INC_STATS(sock_net(sk), LINUX_MIB_PAWSESTABREJECTED);
++ else if (tsecr_reject)
++ NET_INC_STATS(sock_net(sk), LINUX_MIB_TSECRREJECTED);
+ return NULL;
+ }
+
+diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
+index bc95d2a5924fdc..6031d7f7f5198d 100644
+--- a/net/ipv4/tcp_output.c
++++ b/net/ipv4/tcp_output.c
+@@ -941,6 +941,12 @@ static unsigned int tcp_synack_options(const struct sock *sk,
+ opts->options |= OPTION_TS;
+ opts->tsval = tcp_skb_timestamp_ts(tcp_rsk(req)->req_usec_ts, skb) +
+ tcp_rsk(req)->ts_off;
++ if (!tcp_rsk(req)->snt_tsval_first) {
++ if (!opts->tsval)
++ opts->tsval = ~0U;
++ tcp_rsk(req)->snt_tsval_first = opts->tsval;
++ }
++ WRITE_ONCE(tcp_rsk(req)->snt_tsval_last, opts->tsval);
+ opts->tsecr = READ_ONCE(req->ts_recent);
+ remaining -= TCPOLEN_TSTAMP_ALIGNED;
+ }
+diff --git a/net/ipv4/xfrm4_input.c b/net/ipv4/xfrm4_input.c
+index b5b06323cfd94a..0d31a8c108d4f6 100644
+--- a/net/ipv4/xfrm4_input.c
++++ b/net/ipv4/xfrm4_input.c
+@@ -182,11 +182,15 @@ struct sk_buff *xfrm4_gro_udp_encap_rcv(struct sock *sk, struct list_head *head,
+ int offset = skb_gro_offset(skb);
+ const struct net_offload *ops;
+ struct sk_buff *pp = NULL;
+- int ret;
+-
+- offset = offset - sizeof(struct udphdr);
++ int len, dlen;
++ __u8 *udpdata;
++ __be32 *udpdata32;
+
+- if (!pskb_pull(skb, offset))
++ len = skb->len - offset;
++ dlen = offset + min(len, 8);
++ udpdata = skb_gro_header(skb, dlen, offset);
++ udpdata32 = (__be32 *)udpdata;
++ if (unlikely(!udpdata))
+ return NULL;
+
+ rcu_read_lock();
+@@ -194,11 +198,10 @@ struct sk_buff *xfrm4_gro_udp_encap_rcv(struct sock *sk, struct list_head *head,
+ if (!ops || !ops->callbacks.gro_receive)
+ goto out;
+
+- ret = __xfrm4_udp_encap_rcv(sk, skb, false);
+- if (ret)
++ /* check if it is a keepalive or IKE packet */
++ if (len <= sizeof(struct ip_esp_hdr) || udpdata32[0] == 0)
+ goto out;
+
+- skb_push(skb, offset);
+ NAPI_GRO_CB(skb)->proto = IPPROTO_UDP;
+
+ pp = call_gro_receive(ops->callbacks.gro_receive, head, skb);
+@@ -208,7 +211,6 @@ struct sk_buff *xfrm4_gro_udp_encap_rcv(struct sock *sk, struct list_head *head,
+
+ out:
+ rcu_read_unlock();
+- skb_push(skb, offset);
+ NAPI_GRO_CB(skb)->same_flow = 0;
+ NAPI_GRO_CB(skb)->flush = 1;
+
+diff --git a/net/ipv6/esp6.c b/net/ipv6/esp6.c
+index 9e73944e3b530a..72adfc107b557d 100644
+--- a/net/ipv6/esp6.c
++++ b/net/ipv6/esp6.c
+@@ -137,47 +137,16 @@ static void esp_ssg_unref(struct xfrm_state *x, void *tmp, struct sk_buff *skb)
+ }
+
+ #ifdef CONFIG_INET6_ESPINTCP
+-struct esp_tcp_sk {
+- struct sock *sk;
+- struct rcu_head rcu;
+-};
+-
+-static void esp_free_tcp_sk(struct rcu_head *head)
+-{
+- struct esp_tcp_sk *esk = container_of(head, struct esp_tcp_sk, rcu);
+-
+- sock_put(esk->sk);
+- kfree(esk);
+-}
+-
+ static struct sock *esp6_find_tcp_sk(struct xfrm_state *x)
+ {
+ struct xfrm_encap_tmpl *encap = x->encap;
+ struct net *net = xs_net(x);
+- struct esp_tcp_sk *esk;
+ __be16 sport, dport;
+- struct sock *nsk;
+ struct sock *sk;
+
+- sk = rcu_dereference(x->encap_sk);
+- if (sk && sk->sk_state == TCP_ESTABLISHED)
+- return sk;
+-
+ spin_lock_bh(&x->lock);
+ sport = encap->encap_sport;
+ dport = encap->encap_dport;
+- nsk = rcu_dereference_protected(x->encap_sk,
+- lockdep_is_held(&x->lock));
+- if (sk && sk == nsk) {
+- esk = kmalloc(sizeof(*esk), GFP_ATOMIC);
+- if (!esk) {
+- spin_unlock_bh(&x->lock);
+- return ERR_PTR(-ENOMEM);
+- }
+- RCU_INIT_POINTER(x->encap_sk, NULL);
+- esk->sk = sk;
+- call_rcu(&esk->rcu, esp_free_tcp_sk);
+- }
+ spin_unlock_bh(&x->lock);
+
+ sk = __inet6_lookup_established(net, net->ipv4.tcp_death_row.hashinfo, &x->id.daddr.in6,
+@@ -190,20 +159,6 @@ static struct sock *esp6_find_tcp_sk(struct xfrm_state *x)
+ return ERR_PTR(-EINVAL);
+ }
+
+- spin_lock_bh(&x->lock);
+- nsk = rcu_dereference_protected(x->encap_sk,
+- lockdep_is_held(&x->lock));
+- if (encap->encap_sport != sport ||
+- encap->encap_dport != dport) {
+- sock_put(sk);
+- sk = nsk ?: ERR_PTR(-EREMCHG);
+- } else if (sk == nsk) {
+- sock_put(sk);
+- } else {
+- rcu_assign_pointer(x->encap_sk, sk);
+- }
+- spin_unlock_bh(&x->lock);
+-
+ return sk;
+ }
+
+@@ -216,8 +171,10 @@ static int esp_output_tcp_finish(struct xfrm_state *x, struct sk_buff *skb)
+
+ sk = esp6_find_tcp_sk(x);
+ err = PTR_ERR_OR_ZERO(sk);
+- if (err)
++ if (err) {
++ kfree_skb(skb);
+ goto out;
++ }
+
+ bh_lock_sock(sk);
+ if (sock_owned_by_user(sk))
+@@ -226,6 +183,8 @@ static int esp_output_tcp_finish(struct xfrm_state *x, struct sk_buff *skb)
+ err = espintcp_push_skb(sk, skb);
+ bh_unlock_sock(sk);
+
++ sock_put(sk);
++
+ out:
+ rcu_read_unlock();
+ return err;
+@@ -422,6 +381,8 @@ static struct ip_esp_hdr *esp6_output_tcp_encap(struct xfrm_state *x,
+ if (IS_ERR(sk))
+ return ERR_CAST(sk);
+
++ sock_put(sk);
++
+ *lenp = htons(len);
+ esph = (struct ip_esp_hdr *)(lenp + 1);
+
+diff --git a/net/ipv6/fib6_rules.c b/net/ipv6/fib6_rules.c
+index 67d39114d9a634..40af8fd6efa70d 100644
+--- a/net/ipv6/fib6_rules.c
++++ b/net/ipv6/fib6_rules.c
+@@ -399,9 +399,9 @@ static int fib6_rule_configure(struct fib_rule *rule, struct sk_buff *skb,
+ struct nlattr **tb,
+ struct netlink_ext_ack *extack)
+ {
++ struct fib6_rule *rule6 = (struct fib6_rule *)rule;
++ struct net *net = rule->fr_net;
+ int err = -EINVAL;
+- struct net *net = sock_net(skb->sk);
+- struct fib6_rule *rule6 = (struct fib6_rule *) rule;
+
+ if (!inet_validate_dscp(frh->tos)) {
+ NL_SET_ERR_MSG(extack,
+diff --git a/net/ipv6/ip6_gre.c b/net/ipv6/ip6_gre.c
+index 235808cfec7050..68e9a41eed4914 100644
+--- a/net/ipv6/ip6_gre.c
++++ b/net/ipv6/ip6_gre.c
+@@ -1498,7 +1498,6 @@ static int ip6gre_tunnel_init_common(struct net_device *dev)
+ tunnel = netdev_priv(dev);
+
+ tunnel->dev = dev;
+- tunnel->net = dev_net(dev);
+ strcpy(tunnel->parms.name, dev->name);
+
+ ret = dst_cache_init(&tunnel->dst_cache, GFP_KERNEL);
+@@ -1882,7 +1881,6 @@ static int ip6erspan_tap_init(struct net_device *dev)
+ tunnel = netdev_priv(dev);
+
+ tunnel->dev = dev;
+- tunnel->net = dev_net(dev);
+ strcpy(tunnel->parms.name, dev->name);
+
+ ret = dst_cache_init(&tunnel->dst_cache, GFP_KERNEL);
+diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
+index d577bf2f305387..581bc62890818e 100644
+--- a/net/ipv6/ip6_output.c
++++ b/net/ipv6/ip6_output.c
+@@ -1386,6 +1386,7 @@ static int ip6_setup_cork(struct sock *sk, struct inet_cork_full *cork,
+ }
+ v6_cork->hop_limit = ipc6->hlimit;
+ v6_cork->tclass = ipc6->tclass;
++ v6_cork->dontfrag = ipc6->dontfrag;
+ if (rt->dst.flags & DST_XFRM_TUNNEL)
+ mtu = READ_ONCE(np->pmtudisc) >= IPV6_PMTUDISC_PROBE ?
+ READ_ONCE(rt->dst.dev->mtu) : dst_mtu(&rt->dst);
+@@ -1421,7 +1422,7 @@ static int __ip6_append_data(struct sock *sk,
+ int getfrag(void *from, char *to, int offset,
+ int len, int odd, struct sk_buff *skb),
+ void *from, size_t length, int transhdrlen,
+- unsigned int flags, struct ipcm6_cookie *ipc6)
++ unsigned int flags)
+ {
+ struct sk_buff *skb, *skb_prev = NULL;
+ struct inet_cork *cork = &cork_full->base;
+@@ -1475,7 +1476,7 @@ static int __ip6_append_data(struct sock *sk,
+ if (headersize + transhdrlen > mtu)
+ goto emsgsize;
+
+- if (cork->length + length > mtu - headersize && ipc6->dontfrag &&
++ if (cork->length + length > mtu - headersize && v6_cork->dontfrag &&
+ (sk->sk_protocol == IPPROTO_UDP ||
+ sk->sk_protocol == IPPROTO_ICMPV6 ||
+ sk->sk_protocol == IPPROTO_RAW)) {
+@@ -1855,7 +1856,7 @@ int ip6_append_data(struct sock *sk,
+
+ return __ip6_append_data(sk, &sk->sk_write_queue, &inet->cork,
+ &np->cork, sk_page_frag(sk), getfrag,
+- from, length, transhdrlen, flags, ipc6);
++ from, length, transhdrlen, flags);
+ }
+ EXPORT_SYMBOL_GPL(ip6_append_data);
+
+@@ -2054,13 +2055,11 @@ struct sk_buff *ip6_make_skb(struct sock *sk,
+ ip6_cork_release(cork, &v6_cork);
+ return ERR_PTR(err);
+ }
+- if (ipc6->dontfrag < 0)
+- ipc6->dontfrag = inet6_test_bit(DONTFRAG, sk);
+
+ err = __ip6_append_data(sk, &queue, cork, &v6_cork,
+ &current->task_frag, getfrag, from,
+ length + exthdrlen, transhdrlen + exthdrlen,
+- flags, ipc6);
++ flags);
+ if (err) {
+ __ip6_flush_pending_frames(sk, &queue, cork, &v6_cork);
+ return ERR_PTR(err);
+diff --git a/net/ipv6/ip6_tunnel.c b/net/ipv6/ip6_tunnel.c
+index 48fd53b9897265..5350c9bb2319bf 100644
+--- a/net/ipv6/ip6_tunnel.c
++++ b/net/ipv6/ip6_tunnel.c
+@@ -1878,7 +1878,6 @@ ip6_tnl_dev_init_gen(struct net_device *dev)
+ int t_hlen;
+
+ t->dev = dev;
+- t->net = dev_net(dev);
+
+ ret = dst_cache_init(&t->dst_cache, GFP_KERNEL);
+ if (ret)
+@@ -1940,6 +1939,7 @@ static int __net_init ip6_fb_tnl_dev_init(struct net_device *dev)
+ struct net *net = dev_net(dev);
+ struct ip6_tnl_net *ip6n = net_generic(net, ip6_tnl_net_id);
+
++ t->net = net;
+ t->parms.proto = IPPROTO_IPV6;
+
+ rcu_assign_pointer(ip6n->tnls_wc[0], t);
+@@ -2013,6 +2013,7 @@ static int ip6_tnl_newlink(struct net *src_net, struct net_device *dev,
+ int err;
+
+ nt = netdev_priv(dev);
++ nt->net = net;
+
+ if (ip_tunnel_netlink_encap_parms(data, &ipencap)) {
+ err = ip6_tnl_encap_setup(nt, &ipencap);
+diff --git a/net/ipv6/ip6_vti.c b/net/ipv6/ip6_vti.c
+index 590737c2753798..01235046914432 100644
+--- a/net/ipv6/ip6_vti.c
++++ b/net/ipv6/ip6_vti.c
+@@ -925,7 +925,6 @@ static inline int vti6_dev_init_gen(struct net_device *dev)
+ struct ip6_tnl *t = netdev_priv(dev);
+
+ t->dev = dev;
+- t->net = dev_net(dev);
+ netdev_hold(dev, &t->dev_tracker, GFP_KERNEL);
+ netdev_lockdep_set_classes(dev);
+ return 0;
+@@ -958,6 +957,7 @@ static int __net_init vti6_fb_tnl_dev_init(struct net_device *dev)
+ struct net *net = dev_net(dev);
+ struct vti6_net *ip6n = net_generic(net, vti6_net_id);
+
++ t->net = net;
+ t->parms.proto = IPPROTO_IPV6;
+
+ rcu_assign_pointer(ip6n->tnls_wc[0], t);
+@@ -1008,6 +1008,7 @@ static int vti6_newlink(struct net *src_net, struct net_device *dev,
+ vti6_netlink_parms(data, &nt->parms);
+
+ nt->parms.proto = IPPROTO_IPV6;
++ nt->net = net;
+
+ if (vti6_locate(net, &nt->parms, 0))
+ return -EEXIST;
+diff --git a/net/ipv6/ip6mr.c b/net/ipv6/ip6mr.c
+index 535e9f72514c06..33351acc45e10f 100644
+--- a/net/ipv6/ip6mr.c
++++ b/net/ipv6/ip6mr.c
+@@ -108,11 +108,6 @@ static void ipmr_expire_process(struct timer_list *t);
+ lockdep_rtnl_is_held() || \
+ list_empty(&net->ipv6.mr6_tables))
+
+-static bool ip6mr_can_free_table(struct net *net)
+-{
+- return !check_net(net) || !net_initialized(net);
+-}
+-
+ static struct mr_table *ip6mr_mr_table_iter(struct net *net,
+ struct mr_table *mrt)
+ {
+@@ -306,11 +301,6 @@ EXPORT_SYMBOL(ip6mr_rule_default);
+ #define ip6mr_for_each_table(mrt, net) \
+ for (mrt = net->ipv6.mrt6; mrt; mrt = NULL)
+
+-static bool ip6mr_can_free_table(struct net *net)
+-{
+- return !check_net(net);
+-}
+-
+ static struct mr_table *ip6mr_mr_table_iter(struct net *net,
+ struct mr_table *mrt)
+ {
+@@ -416,7 +406,7 @@ static void ip6mr_free_table(struct mr_table *mrt)
+ {
+ struct net *net = read_pnet(&mrt->net);
+
+- WARN_ON_ONCE(!ip6mr_can_free_table(net));
++ WARN_ON_ONCE(!mr_can_free_table(net));
+
+ timer_shutdown_sync(&mrt->ipmr_expire_timer);
+ mroute_clean_tables(mrt, MRT6_FLUSH_MIFS | MRT6_FLUSH_MIFS_STATIC |
+diff --git a/net/ipv6/sit.c b/net/ipv6/sit.c
+index 39bd8951bfca18..3c15a0ae228e21 100644
+--- a/net/ipv6/sit.c
++++ b/net/ipv6/sit.c
+@@ -269,6 +269,7 @@ static struct ip_tunnel *ipip6_tunnel_locate(struct net *net,
+
+ nt = netdev_priv(dev);
+
++ nt->net = net;
+ nt->parms = *parms;
+ if (ipip6_tunnel_create(dev) < 0)
+ goto failed_free;
+@@ -1449,7 +1450,6 @@ static int ipip6_tunnel_init(struct net_device *dev)
+ int err;
+
+ tunnel->dev = dev;
+- tunnel->net = dev_net(dev);
+ strcpy(tunnel->parms.name, dev->name);
+
+ ipip6_tunnel_bind_dev(dev);
+@@ -1563,6 +1563,7 @@ static int ipip6_newlink(struct net *src_net, struct net_device *dev,
+ int err;
+
+ nt = netdev_priv(dev);
++ nt->net = net;
+
+ if (ip_tunnel_netlink_encap_parms(data, &ipencap)) {
+ err = ip_tunnel_encap_setup(nt, &ipencap);
+@@ -1858,6 +1859,9 @@ static int __net_init sit_init_net(struct net *net)
+ */
+ sitn->fb_tunnel_dev->netns_local = true;
+
++ t = netdev_priv(sitn->fb_tunnel_dev);
++ t->net = net;
++
+ err = register_netdev(sitn->fb_tunnel_dev);
+ if (err)
+ goto err_reg_dev;
+@@ -1865,8 +1869,6 @@ static int __net_init sit_init_net(struct net *net)
+ ipip6_tunnel_clone_6rd(sitn->fb_tunnel_dev, sitn);
+ ipip6_fb_tunnel_init(sitn->fb_tunnel_dev);
+
+- t = netdev_priv(sitn->fb_tunnel_dev);
+-
+ strcpy(t->parms.name, sitn->fb_tunnel_dev->name);
+ return 0;
+
+diff --git a/net/ipv6/xfrm6_input.c b/net/ipv6/xfrm6_input.c
+index 4abc5e9d63227a..841c81abaaf4ff 100644
+--- a/net/ipv6/xfrm6_input.c
++++ b/net/ipv6/xfrm6_input.c
+@@ -179,14 +179,18 @@ struct sk_buff *xfrm6_gro_udp_encap_rcv(struct sock *sk, struct list_head *head,
+ int offset = skb_gro_offset(skb);
+ const struct net_offload *ops;
+ struct sk_buff *pp = NULL;
+- int ret;
++ int len, dlen;
++ __u8 *udpdata;
++ __be32 *udpdata32;
+
+ if (skb->protocol == htons(ETH_P_IP))
+ return xfrm4_gro_udp_encap_rcv(sk, head, skb);
+
+- offset = offset - sizeof(struct udphdr);
+-
+- if (!pskb_pull(skb, offset))
++ len = skb->len - offset;
++ dlen = offset + min(len, 8);
++ udpdata = skb_gro_header(skb, dlen, offset);
++ udpdata32 = (__be32 *)udpdata;
++ if (unlikely(!udpdata))
+ return NULL;
+
+ rcu_read_lock();
+@@ -194,11 +198,10 @@ struct sk_buff *xfrm6_gro_udp_encap_rcv(struct sock *sk, struct list_head *head,
+ if (!ops || !ops->callbacks.gro_receive)
+ goto out;
+
+- ret = __xfrm6_udp_encap_rcv(sk, skb, false);
+- if (ret)
++ /* check if it is a keepalive or IKE packet */
++ if (len <= sizeof(struct ip_esp_hdr) || udpdata32[0] == 0)
+ goto out;
+
+- skb_push(skb, offset);
+ NAPI_GRO_CB(skb)->proto = IPPROTO_UDP;
+
+ pp = call_gro_receive(ops->callbacks.gro_receive, head, skb);
+@@ -208,7 +211,6 @@ struct sk_buff *xfrm6_gro_udp_encap_rcv(struct sock *sk, struct list_head *head,
+
+ out:
+ rcu_read_unlock();
+- skb_push(skb, offset);
+ NAPI_GRO_CB(skb)->same_flow = 0;
+ NAPI_GRO_CB(skb)->flush = 1;
+
+diff --git a/net/llc/af_llc.c b/net/llc/af_llc.c
+index 0259cde394ba09..cc77ec5769d828 100644
+--- a/net/llc/af_llc.c
++++ b/net/llc/af_llc.c
+@@ -887,15 +887,15 @@ static int llc_ui_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
+ if (sk->sk_type != SOCK_STREAM)
+ goto copy_uaddr;
+
++ /* Partial read */
++ if (used + offset < skb_len)
++ continue;
++
+ if (!(flags & MSG_PEEK)) {
+ skb_unlink(skb, &sk->sk_receive_queue);
+ kfree_skb(skb);
+ *seq = 0;
+ }
+-
+- /* Partial read */
+- if (used + offset < skb_len)
+- continue;
+ } while (len > 0);
+
+ out:
+diff --git a/net/mac80211/agg-tx.c b/net/mac80211/agg-tx.c
+index 61f2cac37728ce..92120f9051499a 100644
+--- a/net/mac80211/agg-tx.c
++++ b/net/mac80211/agg-tx.c
+@@ -9,7 +9,7 @@
+ * Copyright 2007, Michael Wu <flamingice@sourmilk.net>
+ * Copyright 2007-2010, Intel Corporation
+ * Copyright(c) 2015-2017 Intel Deutschland GmbH
+- * Copyright (C) 2018 - 2023 Intel Corporation
++ * Copyright (C) 2018 - 2024 Intel Corporation
+ */
+
+ #include <linux/ieee80211.h>
+@@ -464,7 +464,8 @@ static void ieee80211_send_addba_with_timeout(struct sta_info *sta,
+ sta->ampdu_mlme.addba_req_num[tid]++;
+ spin_unlock_bh(&sta->lock);
+
+- if (sta->sta.deflink.eht_cap.has_eht) {
++ if (sta->sta.deflink.eht_cap.has_eht ||
++ ieee80211_hw_check(&local->hw, STRICT)) {
+ buf_size = local->hw.max_tx_aggregation_subframes;
+ } else if (sta->sta.deflink.he_cap.has_he) {
+ buf_size = min_t(u16, local->hw.max_tx_aggregation_subframes,
+diff --git a/net/mac80211/cfg.c b/net/mac80211/cfg.c
+index b766472703b12f..a7aeb37254bbfb 100644
+--- a/net/mac80211/cfg.c
++++ b/net/mac80211/cfg.c
+@@ -89,15 +89,14 @@ static int ieee80211_set_mon_options(struct ieee80211_sub_if_data *sdata,
+
+ /* check flags first */
+ if (params->flags && ieee80211_sdata_running(sdata)) {
+- u32 mask = MONITOR_FLAG_COOK_FRAMES | MONITOR_FLAG_ACTIVE;
++ u32 mask = MONITOR_FLAG_ACTIVE;
+
+ /*
+- * Prohibit MONITOR_FLAG_COOK_FRAMES and
+- * MONITOR_FLAG_ACTIVE to be changed while the
+- * interface is up.
++ * Prohibit MONITOR_FLAG_ACTIVE to be changed
++ * while the interface is up.
+ * Else we would need to add a lot of cruft
+ * to update everything:
+- * cooked_mntrs, monitor and all fif_* counters
++ * monitor and all fif_* counters
+ * reconfigure hardware
+ */
+ if ((params->flags & mask) != (sdata->u.mntr.flags & mask))
+@@ -4371,9 +4370,8 @@ static int ieee80211_cfg_get_channel(struct wiphy *wiphy,
+ if (chanctx_conf) {
+ *chandef = link->conf->chanreq.oper;
+ ret = 0;
+- } else if (!ieee80211_hw_check(&local->hw, NO_VIRTUAL_MONITOR) &&
+- local->open_count > 0 &&
+- local->open_count == local->monitors &&
++ } else if (local->open_count > 0 &&
++ local->open_count == local->virt_monitors &&
+ sdata->vif.type == NL80211_IFTYPE_MONITOR) {
+ *chandef = local->monitor_chanreq.oper;
+ ret = 0;
+diff --git a/net/mac80211/driver-ops.h b/net/mac80211/driver-ops.h
+index 5acecc7bd4a990..307587c8a0037b 100644
+--- a/net/mac80211/driver-ops.h
++++ b/net/mac80211/driver-ops.h
+@@ -2,7 +2,7 @@
+ /*
+ * Portions of this file
+ * Copyright(c) 2016 Intel Deutschland GmbH
+-* Copyright (C) 2018-2019, 2021-2024 Intel Corporation
++* Copyright (C) 2018-2019, 2021-2025 Intel Corporation
+ */
+
+ #ifndef __MAC80211_DRIVER_OPS
+@@ -955,6 +955,7 @@ static inline void drv_mgd_complete_tx(struct ieee80211_local *local,
+ return;
+ WARN_ON_ONCE(sdata->vif.type != NL80211_IFTYPE_STATION);
+
++ info->link_id = info->link_id < 0 ? 0 : info->link_id;
+ trace_drv_mgd_complete_tx(local, sdata, info->duration,
+ info->subtype, info->success);
+ if (local->ops->mgd_complete_tx)
+diff --git a/net/mac80211/drop.h b/net/mac80211/drop.h
+index 59e3ec4dc9607c..eb9ab310f91caa 100644
+--- a/net/mac80211/drop.h
++++ b/net/mac80211/drop.h
+@@ -11,12 +11,6 @@
+
+ typedef unsigned int __bitwise ieee80211_rx_result;
+
+-#define MAC80211_DROP_REASONS_MONITOR(R) \
+- R(RX_DROP_M_UNEXPECTED_4ADDR_FRAME) \
+- R(RX_DROP_M_BAD_BCN_KEYIDX) \
+- R(RX_DROP_M_BAD_MGMT_KEYIDX) \
+-/* this line for the trailing \ - add before this */
+-
+ #define MAC80211_DROP_REASONS_UNUSABLE(R) \
+ /* 0x00 == ___RX_DROP_UNUSABLE */ \
+ R(RX_DROP_U_MIC_FAIL) \
+@@ -66,6 +60,10 @@ typedef unsigned int __bitwise ieee80211_rx_result;
+ R(RX_DROP_U_UNEXPECTED_STA_4ADDR) \
+ R(RX_DROP_U_UNEXPECTED_VLAN_MCAST) \
+ R(RX_DROP_U_NOT_PORT_CONTROL) \
++ R(RX_DROP_U_UNEXPECTED_4ADDR_FRAME) \
++ R(RX_DROP_U_BAD_BCN_KEYIDX) \
++ /* 0x30 */ \
++ R(RX_DROP_U_BAD_MGMT_KEYIDX) \
+ R(RX_DROP_U_UNKNOWN_ACTION_REJECTED) \
+ /* this line for the trailing \ - add before this */
+
+@@ -78,10 +76,6 @@ enum ___mac80211_drop_reason {
+ ___RX_QUEUED = SKB_NOT_DROPPED_YET,
+
+ #define ENUM(x) ___ ## x,
+- ___RX_DROP_MONITOR = SKB_DROP_REASON_SUBSYS_MAC80211_MONITOR <<
+- SKB_DROP_REASON_SUBSYS_SHIFT,
+- MAC80211_DROP_REASONS_MONITOR(ENUM)
+-
+ ___RX_DROP_UNUSABLE = SKB_DROP_REASON_SUBSYS_MAC80211_UNUSABLE <<
+ SKB_DROP_REASON_SUBSYS_SHIFT,
+ MAC80211_DROP_REASONS_UNUSABLE(ENUM)
+@@ -89,11 +83,10 @@ enum ___mac80211_drop_reason {
+ };
+
+ enum mac80211_drop_reason {
+- RX_CONTINUE = (__force ieee80211_rx_result)___RX_CONTINUE,
+- RX_QUEUED = (__force ieee80211_rx_result)___RX_QUEUED,
+- RX_DROP_MONITOR = (__force ieee80211_rx_result)___RX_DROP_MONITOR,
++ RX_CONTINUE = (__force ieee80211_rx_result)___RX_CONTINUE,
++ RX_QUEUED = (__force ieee80211_rx_result)___RX_QUEUED,
++ RX_DROP = (__force ieee80211_rx_result)___RX_DROP_UNUSABLE,
+ #define DEF(x) x = (__force ieee80211_rx_result)___ ## x,
+- MAC80211_DROP_REASONS_MONITOR(DEF)
+ MAC80211_DROP_REASONS_UNUSABLE(DEF)
+ #undef DEF
+ };
+diff --git a/net/mac80211/ethtool.c b/net/mac80211/ethtool.c
+index 42f7ee142ce3f4..0397755a3bd1c7 100644
+--- a/net/mac80211/ethtool.c
++++ b/net/mac80211/ethtool.c
+@@ -158,7 +158,7 @@ static void ieee80211_get_stats(struct net_device *dev,
+ if (chanctx_conf)
+ channel = chanctx_conf->def.chan;
+ else if (local->open_count > 0 &&
+- local->open_count == local->monitors &&
++ local->open_count == local->virt_monitors &&
+ sdata->vif.type == NL80211_IFTYPE_MONITOR)
+ channel = local->monitor_chanreq.oper.chan;
+ else
+diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h
+index e7dc3f0cfc9a9a..3d7304ce23e23d 100644
+--- a/net/mac80211/ieee80211_i.h
++++ b/net/mac80211/ieee80211_i.h
+@@ -200,7 +200,6 @@ enum ieee80211_packet_rx_flags {
+ /**
+ * enum ieee80211_rx_flags - RX data flags
+ *
+- * @IEEE80211_RX_CMNTR: received on cooked monitor already
+ * @IEEE80211_RX_BEACON_REPORTED: This frame was already reported
+ * to cfg80211_report_obss_beacon().
+ *
+@@ -208,8 +207,7 @@ enum ieee80211_packet_rx_flags {
+ * for a single frame.
+ */
+ enum ieee80211_rx_flags {
+- IEEE80211_RX_CMNTR = BIT(0),
+- IEEE80211_RX_BEACON_REPORTED = BIT(1),
++ IEEE80211_RX_BEACON_REPORTED = BIT(0),
+ };
+
+ struct ieee80211_rx_data {
+@@ -462,7 +460,7 @@ struct ieee80211_mgd_assoc_data {
+ bool s1g;
+ bool spp_amsdu;
+
+- unsigned int assoc_link_id;
++ s8 assoc_link_id;
+
+ u8 fils_nonces[2 * FILS_NONCE_LEN];
+ u8 fils_kek[FILS_MAX_KEK_LEN];
+@@ -1380,7 +1378,7 @@ struct ieee80211_local {
+ spinlock_t queue_stop_reason_lock;
+
+ int open_count;
+- int monitors, cooked_mntrs, tx_mntrs;
++ int monitors, virt_monitors, tx_mntrs;
+ /* number of interfaces with corresponding FIF_ flags */
+ int fif_fcsfail, fif_plcpfail, fif_control, fif_other_bss, fif_pspoll,
+ fif_probe_req;
+@@ -1492,7 +1490,7 @@ struct ieee80211_local {
+
+ /* see iface.c */
+ struct list_head interfaces;
+- struct list_head mon_list; /* only that are IFF_UP && !cooked */
++ struct list_head mon_list; /* only that are IFF_UP */
+ struct mutex iflist_mtx;
+
+ /* Scanning and BSS list */
+@@ -2090,8 +2088,7 @@ struct sk_buff *
+ ieee80211_build_data_template(struct ieee80211_sub_if_data *sdata,
+ struct sk_buff *skb, u32 info_flags);
+ void ieee80211_tx_monitor(struct ieee80211_local *local, struct sk_buff *skb,
+- int retry_count, bool send_to_cooked,
+- struct ieee80211_tx_status *status);
++ int retry_count, struct ieee80211_tx_status *status);
+
+ void ieee80211_check_fast_xmit(struct sta_info *sta);
+ void ieee80211_check_fast_xmit_all(struct ieee80211_local *local);
+diff --git a/net/mac80211/iface.c b/net/mac80211/iface.c
+index d299bdbca6b3b4..768d774d7d1f9b 100644
+--- a/net/mac80211/iface.c
++++ b/net/mac80211/iface.c
+@@ -483,8 +483,6 @@ static void ieee80211_do_stop(struct ieee80211_sub_if_data *sdata, bool going_do
+ ieee80211_ibss_stop(sdata);
+ break;
+ case NL80211_IFTYPE_MONITOR:
+- if (sdata->u.mntr.flags & MONITOR_FLAG_COOK_FRAMES)
+- break;
+ list_del_rcu(&sdata->u.mntr.list);
+ break;
+ default:
+@@ -584,18 +582,19 @@ static void ieee80211_do_stop(struct ieee80211_sub_if_data *sdata, bool going_do
+ /* no need to tell driver */
+ break;
+ case NL80211_IFTYPE_MONITOR:
+- if (sdata->u.mntr.flags & MONITOR_FLAG_COOK_FRAMES) {
+- local->cooked_mntrs--;
+- break;
+- }
+-
+ local->monitors--;
+- if (local->monitors == 0) {
+- local->hw.conf.flags &= ~IEEE80211_CONF_MONITOR;
+- hw_reconf_flags |= IEEE80211_CONF_CHANGE_MONITOR;
+- }
+
+- ieee80211_adjust_monitor_flags(sdata, -1);
++ if (!(sdata->u.mntr.flags & MONITOR_FLAG_ACTIVE) &&
++ !ieee80211_hw_check(&local->hw, NO_VIRTUAL_MONITOR)) {
++
++ local->virt_monitors--;
++ if (local->virt_monitors == 0) {
++ local->hw.conf.flags &= ~IEEE80211_CONF_MONITOR;
++ hw_reconf_flags |= IEEE80211_CONF_CHANGE_MONITOR;
++ }
++
++ ieee80211_adjust_monitor_flags(sdata, -1);
++ }
+ break;
+ case NL80211_IFTYPE_NAN:
+ /* clean all the functions */
+@@ -689,7 +688,7 @@ static void ieee80211_do_stop(struct ieee80211_sub_if_data *sdata, bool going_do
+ case NL80211_IFTYPE_AP_VLAN:
+ break;
+ case NL80211_IFTYPE_MONITOR:
+- if (local->monitors == 0)
++ if (local->virt_monitors == 0)
+ ieee80211_del_virtual_monitor(local);
+
+ ieee80211_recalc_idle(local);
+@@ -726,7 +725,7 @@ static void ieee80211_do_stop(struct ieee80211_sub_if_data *sdata, bool going_do
+ ieee80211_configure_filter(local);
+ ieee80211_hw_config(local, hw_reconf_flags);
+
+- if (local->monitors == local->open_count)
++ if (local->virt_monitors == local->open_count)
+ ieee80211_add_virtual_monitor(local);
+ }
+
+@@ -985,7 +984,7 @@ static bool ieee80211_set_sdata_offload_flags(struct ieee80211_sub_if_data *sdat
+ local->hw.wiphy->frag_threshold != (u32)-1)
+ flags &= ~IEEE80211_OFFLOAD_ENCAP_ENABLED;
+
+- if (local->monitors)
++ if (local->virt_monitors)
+ flags &= ~IEEE80211_OFFLOAD_ENCAP_ENABLED;
+ } else {
+ flags &= ~IEEE80211_OFFLOAD_ENCAP_ENABLED;
+@@ -995,7 +994,7 @@ static bool ieee80211_set_sdata_offload_flags(struct ieee80211_sub_if_data *sdat
+ ieee80211_iftype_supports_hdr_offload(sdata->vif.type)) {
+ flags |= IEEE80211_OFFLOAD_DECAP_ENABLED;
+
+- if (local->monitors &&
++ if (local->virt_monitors &&
+ !ieee80211_hw_check(&local->hw, SUPPORTS_CONC_MON_RX_DECAP))
+ flags &= ~IEEE80211_OFFLOAD_DECAP_ENABLED;
+ } else {
+@@ -1333,28 +1332,27 @@ int ieee80211_do_open(struct wireless_dev *wdev, bool coming_up)
+ }
+ break;
+ case NL80211_IFTYPE_MONITOR:
+- if (sdata->u.mntr.flags & MONITOR_FLAG_COOK_FRAMES) {
+- local->cooked_mntrs++;
+- break;
+- }
+-
+ if ((sdata->u.mntr.flags & MONITOR_FLAG_ACTIVE) ||
+ ieee80211_hw_check(&local->hw, NO_VIRTUAL_MONITOR)) {
+ res = drv_add_interface(local, sdata);
+ if (res)
+ goto err_stop;
+- } else if (local->monitors == 0 && local->open_count == 0) {
+- res = ieee80211_add_virtual_monitor(local);
+- if (res)
+- goto err_stop;
++ } else {
++ if (local->virt_monitors == 0 && local->open_count == 0) {
++ res = ieee80211_add_virtual_monitor(local);
++ if (res)
++ goto err_stop;
++ }
++ local->virt_monitors++;
++
++ /* must be before the call to ieee80211_configure_filter */
++ if (local->virt_monitors == 1) {
++ local->hw.conf.flags |= IEEE80211_CONF_MONITOR;
++ hw_reconf_flags |= IEEE80211_CONF_CHANGE_MONITOR;
++ }
+ }
+
+- /* must be before the call to ieee80211_configure_filter */
+ local->monitors++;
+- if (local->monitors == 1) {
+- local->hw.conf.flags |= IEEE80211_CONF_MONITOR;
+- hw_reconf_flags |= IEEE80211_CONF_CHANGE_MONITOR;
+- }
+
+ ieee80211_adjust_monitor_flags(sdata, 1);
+ ieee80211_configure_filter(local);
+@@ -1430,8 +1428,6 @@ int ieee80211_do_open(struct wireless_dev *wdev, bool coming_up)
+ rcu_assign_pointer(local->p2p_sdata, sdata);
+ break;
+ case NL80211_IFTYPE_MONITOR:
+- if (sdata->u.mntr.flags & MONITOR_FLAG_COOK_FRAMES)
+- break;
+ list_add_tail_rcu(&sdata->u.mntr.list, &local->mon_list);
+ break;
+ default:
+diff --git a/net/mac80211/main.c b/net/mac80211/main.c
+index 2b6249d75a5d42..6b6de43d9420ac 100644
+--- a/net/mac80211/main.c
++++ b/net/mac80211/main.c
+@@ -1746,18 +1746,7 @@ void ieee80211_free_hw(struct ieee80211_hw *hw)
+ wiphy_free(local->hw.wiphy);
+ }
+ EXPORT_SYMBOL(ieee80211_free_hw);
+-
+-static const char * const drop_reasons_monitor[] = {
+-#define V(x) #x,
+- [0] = "RX_DROP_MONITOR",
+- MAC80211_DROP_REASONS_MONITOR(V)
+-};
+-
+-static struct drop_reason_list drop_reason_list_monitor = {
+- .reasons = drop_reasons_monitor,
+- .n_reasons = ARRAY_SIZE(drop_reasons_monitor),
+-};
+-
++#define V(x) #x,
+ static const char * const drop_reasons_unusable[] = {
+ [0] = "RX_DROP_UNUSABLE",
+ MAC80211_DROP_REASONS_UNUSABLE(V)
+@@ -1786,8 +1775,6 @@ static int __init ieee80211_init(void)
+ if (ret)
+ goto err_netdev;
+
+- drop_reasons_register_subsys(SKB_DROP_REASON_SUBSYS_MAC80211_MONITOR,
+- &drop_reason_list_monitor);
+ drop_reasons_register_subsys(SKB_DROP_REASON_SUBSYS_MAC80211_UNUSABLE,
+ &drop_reason_list_unusable);
+
+@@ -1806,7 +1793,6 @@ static void __exit ieee80211_exit(void)
+
+ ieee80211_iface_exit();
+
+- drop_reasons_unregister_subsys(SKB_DROP_REASON_SUBSYS_MAC80211_MONITOR);
+ drop_reasons_unregister_subsys(SKB_DROP_REASON_SUBSYS_MAC80211_UNUSABLE);
+
+ rcu_barrier();
+diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
+index e3deb89674b23d..ef65ae5137dcd4 100644
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -8,7 +8,7 @@
+ * Copyright 2007, Michael Wu <flamingice@sourmilk.net>
+ * Copyright 2013-2014 Intel Mobile Communications GmbH
+ * Copyright (C) 2015 - 2017 Intel Deutschland GmbH
+- * Copyright (C) 2018 - 2024 Intel Corporation
++ * Copyright (C) 2018 - 2025 Intel Corporation
+ */
+
+ #include <linux/delay.h>
+@@ -345,6 +345,115 @@ ieee80211_determine_ap_chan(struct ieee80211_sub_if_data *sdata,
+ return IEEE80211_CONN_MODE_EHT;
+ }
+
++static bool
++ieee80211_verify_sta_ht_mcs_support(struct ieee80211_sub_if_data *sdata,
++ struct ieee80211_supported_band *sband,
++ const struct ieee80211_ht_operation *ht_op)
++{
++ struct ieee80211_sta_ht_cap sta_ht_cap;
++ int i;
++
++ if (sband->band == NL80211_BAND_6GHZ)
++ return true;
++
++ if (!ht_op)
++ return false;
++
++ memcpy(&sta_ht_cap, &sband->ht_cap, sizeof(sta_ht_cap));
++ ieee80211_apply_htcap_overrides(sdata, &sta_ht_cap);
++
++ /*
++ * P802.11REVme/D7.0 - 6.5.4.2.4
++ * ...
++ * If the MLME of an HT STA receives an MLME-JOIN.request primitive
++ * with the SelectedBSS parameter containing a Basic HT-MCS Set field
++ * in the HT Operation parameter that contains any unsupported MCSs,
++ * the MLME response in the resulting MLME-JOIN.confirm primitive shall
++ * contain a ResultCode parameter that is not set to the value SUCCESS.
++ * ...
++ */
++
++ /* Simply check that all basic rates are in the STA RX mask */
++ for (i = 0; i < IEEE80211_HT_MCS_MASK_LEN; i++) {
++ if ((ht_op->basic_set[i] & sta_ht_cap.mcs.rx_mask[i]) !=
++ ht_op->basic_set[i])
++ return false;
++ }
++
++ return true;
++}
++
++static bool
++ieee80211_verify_sta_vht_mcs_support(struct ieee80211_sub_if_data *sdata,
++ int link_id,
++ struct ieee80211_supported_band *sband,
++ const struct ieee80211_vht_operation *vht_op)
++{
++ struct ieee80211_sta_vht_cap sta_vht_cap;
++ u16 ap_min_req_set, sta_rx_mcs_map, sta_tx_mcs_map;
++ int nss;
++
++ if (sband->band != NL80211_BAND_5GHZ)
++ return true;
++
++ if (!vht_op)
++ return false;
++
++ memcpy(&sta_vht_cap, &sband->vht_cap, sizeof(sta_vht_cap));
++ ieee80211_apply_vhtcap_overrides(sdata, &sta_vht_cap);
++
++ ap_min_req_set = le16_to_cpu(vht_op->basic_mcs_set);
++ sta_rx_mcs_map = le16_to_cpu(sta_vht_cap.vht_mcs.rx_mcs_map);
++ sta_tx_mcs_map = le16_to_cpu(sta_vht_cap.vht_mcs.tx_mcs_map);
++
++ /*
++ * Many APs are incorrectly advertising an all-zero value here,
++ * which really means MCS 0-7 are required for 1-8 streams, but
++ * they don't really mean it that way.
++ * Some other APs are incorrectly advertising 3 spatial streams
++ * with MCS 0-7 are required, but don't really mean it that way
++ * and we'll connect only with HT, rather than even HE.
++ * As a result, unfortunately the VHT basic MCS/NSS set cannot
++ * be used at all, so check it only in strict mode.
++ */
++ if (!ieee80211_hw_check(&sdata->local->hw, STRICT))
++ return true;
++
++ /*
++ * P802.11REVme/D7.0 - 6.5.4.2.4
++ * ...
++ * If the MLME of a VHT STA receives an MLME-JOIN.request primitive
++ * with a SelectedBSS parameter containing a Basic VHT-MCS And NSS Set
++ * field in the VHT Operation parameter that contains any unsupported
++ * <VHT-MCS, NSS> tuple, the MLME response in the resulting
++ * MLME-JOIN.confirm primitive shall contain a ResultCode parameter
++ * that is not set to the value SUCCESS.
++ * ...
++ */
++ for (nss = 8; nss > 0; nss--) {
++ u8 ap_op_val = (ap_min_req_set >> (2 * (nss - 1))) & 3;
++ u8 sta_rx_val;
++ u8 sta_tx_val;
++
++ if (ap_op_val == IEEE80211_HE_MCS_NOT_SUPPORTED)
++ continue;
++
++ sta_rx_val = (sta_rx_mcs_map >> (2 * (nss - 1))) & 3;
++ sta_tx_val = (sta_tx_mcs_map >> (2 * (nss - 1))) & 3;
++
++ if (sta_rx_val == IEEE80211_HE_MCS_NOT_SUPPORTED ||
++ sta_tx_val == IEEE80211_HE_MCS_NOT_SUPPORTED ||
++ sta_rx_val < ap_op_val || sta_tx_val < ap_op_val) {
++ link_id_info(sdata, link_id,
++ "Missing mandatory rates for %d Nss, rx %d, tx %d oper %d, disable VHT\n",
++ nss, sta_rx_val, sta_tx_val, ap_op_val);
++ return false;
++ }
++ }
++
++ return true;
++}
++
+ static bool
+ ieee80211_verify_peer_he_mcs_support(struct ieee80211_sub_if_data *sdata,
+ int link_id,
+@@ -1042,6 +1151,26 @@ ieee80211_determine_chan_mode(struct ieee80211_sub_if_data *sdata,
+ link_id_info(sdata, link_id,
+ "regulatory prevented using AP config, downgraded\n");
+
++ if (conn->mode >= IEEE80211_CONN_MODE_HT &&
++ !ieee80211_verify_sta_ht_mcs_support(sdata, sband,
++ elems->ht_operation)) {
++ conn->mode = IEEE80211_CONN_MODE_LEGACY;
++ conn->bw_limit = IEEE80211_CONN_BW_LIMIT_20;
++ link_id_info(sdata, link_id,
++ "required MCSes not supported, disabling HT\n");
++ }
++
++ if (conn->mode >= IEEE80211_CONN_MODE_VHT &&
++ !ieee80211_verify_sta_vht_mcs_support(sdata, link_id, sband,
++ elems->vht_operation)) {
++ conn->mode = IEEE80211_CONN_MODE_HT;
++ conn->bw_limit = min_t(enum ieee80211_conn_bw_limit,
++ conn->bw_limit,
++ IEEE80211_CONN_BW_LIMIT_40);
++ link_id_info(sdata, link_id,
++ "required MCSes not supported, disabling VHT\n");
++ }
++
+ if (conn->mode >= IEEE80211_CONN_MODE_HE &&
+ (!ieee80211_verify_peer_he_mcs_support(sdata, link_id,
+ (void *)elems->he_cap,
+@@ -3832,7 +3961,8 @@ static void ieee80211_set_disassoc(struct ieee80211_sub_if_data *sdata,
+ if (tx)
+ ieee80211_flush_queues(local, sdata, false);
+
+- drv_mgd_complete_tx(sdata->local, sdata, &info);
++ if (tx || frame_buf)
++ drv_mgd_complete_tx(sdata->local, sdata, &info);
+
+ /* clear AP addr only after building the needed mgmt frames */
+ eth_zero_addr(sdata->deflink.u.mgd.bssid);
+@@ -4298,7 +4428,7 @@ static void __ieee80211_disconnect(struct ieee80211_sub_if_data *sdata)
+ struct ieee80211_link_data *link;
+
+ link = sdata_dereference(sdata->link[link_id], sdata);
+- if (!link)
++ if (!link || !link->conf->bss)
+ continue;
+ cfg80211_unlink_bss(local->hw.wiphy, link->conf->bss);
+ link->conf->bss = NULL;
+@@ -4578,6 +4708,8 @@ static void ieee80211_rx_mgmt_auth(struct ieee80211_sub_if_data *sdata,
+ auth_transaction = le16_to_cpu(mgmt->u.auth.auth_transaction);
+ status_code = le16_to_cpu(mgmt->u.auth.status_code);
+
++ info.link_id = ifmgd->auth_data->link_id;
++
+ if (auth_alg != ifmgd->auth_data->algorithm ||
+ (auth_alg != WLAN_AUTH_SAE &&
+ auth_transaction != ifmgd->auth_data->expected_transaction) ||
+@@ -9507,7 +9639,6 @@ int ieee80211_mgd_deauth(struct ieee80211_sub_if_data *sdata,
+ ieee80211_report_disconnect(sdata, frame_buf,
+ sizeof(frame_buf), true,
+ req->reason_code, false);
+- drv_mgd_complete_tx(sdata->local, sdata, &info);
+ return 0;
+ }
+
+@@ -10156,6 +10287,8 @@ int ieee80211_mgd_assoc_ml_reconf(struct ieee80211_sub_if_data *sdata,
+ if (!data)
+ return -ENOMEM;
+
++ data->assoc_link_id = -1;
++
+ uapsd_supported = true;
+ ieee80211_ml_reconf_selectors(userspace_selectors);
+ for (link_id = 0; link_id < IEEE80211_MLD_MAX_NUM_LINKS;
+@@ -10214,12 +10347,11 @@ int ieee80211_mgd_assoc_ml_reconf(struct ieee80211_sub_if_data *sdata,
+ }
+ }
+
+- /* Require U-APSD support to be similar to the current valid
+- * links
+- */
+- if (uapsd_supported !=
+- !!(sdata->u.mgd.flags & IEEE80211_STA_UAPSD_ENABLED)) {
++ /* Require U-APSD support if we enabled it */
++ if (sdata->u.mgd.flags & IEEE80211_STA_UAPSD_ENABLED &&
++ !uapsd_supported) {
+ err = -EINVAL;
++ sdata_info(sdata, "U-APSD on but not available on (all) new links\n");
+ goto err_free;
+ }
+
+diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c
+index 0659ec892ec6c7..ad019a50b6b339 100644
+--- a/net/mac80211/rx.c
++++ b/net/mac80211/rx.c
+@@ -1045,14 +1045,14 @@ static ieee80211_rx_result ieee80211_rx_mesh_check(struct ieee80211_rx_data *rx)
+ if (is_multicast_ether_addr(hdr->addr1)) {
+ if (ieee80211_has_tods(hdr->frame_control) ||
+ !ieee80211_has_fromds(hdr->frame_control))
+- return RX_DROP_MONITOR;
++ return RX_DROP;
+ if (ether_addr_equal(hdr->addr3, dev_addr))
+- return RX_DROP_MONITOR;
++ return RX_DROP;
+ } else {
+ if (!ieee80211_has_a4(hdr->frame_control))
+- return RX_DROP_MONITOR;
++ return RX_DROP;
+ if (ether_addr_equal(hdr->addr4, dev_addr))
+- return RX_DROP_MONITOR;
++ return RX_DROP;
+ }
+ }
+
+@@ -1064,20 +1064,20 @@ static ieee80211_rx_result ieee80211_rx_mesh_check(struct ieee80211_rx_data *rx)
+ struct ieee80211_mgmt *mgmt;
+
+ if (!ieee80211_is_mgmt(hdr->frame_control))
+- return RX_DROP_MONITOR;
++ return RX_DROP;
+
+ if (ieee80211_is_action(hdr->frame_control)) {
+ u8 category;
+
+ /* make sure category field is present */
+ if (rx->skb->len < IEEE80211_MIN_ACTION_SIZE)
+- return RX_DROP_MONITOR;
++ return RX_DROP;
+
+ mgmt = (struct ieee80211_mgmt *)hdr;
+ category = mgmt->u.action.category;
+ if (category != WLAN_CATEGORY_MESH_ACTION &&
+ category != WLAN_CATEGORY_SELF_PROTECTED)
+- return RX_DROP_MONITOR;
++ return RX_DROP;
+ return RX_CONTINUE;
+ }
+
+@@ -1087,7 +1087,7 @@ static ieee80211_rx_result ieee80211_rx_mesh_check(struct ieee80211_rx_data *rx)
+ ieee80211_is_auth(hdr->frame_control))
+ return RX_CONTINUE;
+
+- return RX_DROP_MONITOR;
++ return RX_DROP;
+ }
+
+ return RX_CONTINUE;
+@@ -1513,7 +1513,7 @@ ieee80211_rx_h_check(struct ieee80211_rx_data *rx)
+ hdrlen = ieee80211_hdrlen(hdr->frame_control);
+
+ if (rx->skb->len < hdrlen + 8)
+- return RX_DROP_MONITOR;
++ return RX_DROP;
+
+ 	skb_copy_bits(rx->skb, hdrlen + 6, &ethertype, 2);
+ if (ethertype == rx->sdata->control_port_protocol)
+@@ -1526,7 +1526,7 @@ ieee80211_rx_h_check(struct ieee80211_rx_data *rx)
+ GFP_ATOMIC))
+ return RX_DROP_U_SPURIOUS;
+
+- return RX_DROP_MONITOR;
++ return RX_DROP;
+ }
+
+ return RX_CONTINUE;
+@@ -1862,7 +1862,7 @@ ieee80211_rx_h_sta_process(struct ieee80211_rx_data *rx)
+ cfg80211_rx_unexpected_4addr_frame(
+ rx->sdata->dev, sta->sta.addr,
+ GFP_ATOMIC);
+- return RX_DROP_M_UNEXPECTED_4ADDR_FRAME;
++ return RX_DROP_U_UNEXPECTED_4ADDR_FRAME;
+ }
+ /*
+ * Update counter and free packet here to avoid
+@@ -1997,7 +1997,7 @@ ieee80211_rx_h_decrypt(struct ieee80211_rx_data *rx)
+ cfg80211_rx_unprot_mlme_mgmt(rx->sdata->dev,
+ skb->data,
+ skb->len);
+- return RX_DROP_M_BAD_BCN_KEYIDX;
++ return RX_DROP_U_BAD_BCN_KEYIDX;
+ }
+
+ rx->key = ieee80211_rx_get_bigtk(rx, mmie_keyidx);
+@@ -2011,11 +2011,11 @@ ieee80211_rx_h_decrypt(struct ieee80211_rx_data *rx)
+
+ if (mmie_keyidx < NUM_DEFAULT_KEYS ||
+ mmie_keyidx >= NUM_DEFAULT_KEYS + NUM_DEFAULT_MGMT_KEYS)
+- return RX_DROP_M_BAD_MGMT_KEYIDX; /* unexpected BIP keyidx */
++ return RX_DROP_U_BAD_MGMT_KEYIDX; /* unexpected BIP keyidx */
+ if (rx->link_sta) {
+ if (ieee80211_is_group_privacy_action(skb) &&
+ test_sta_flag(rx->sta, WLAN_STA_MFP))
+- return RX_DROP_MONITOR;
++ return RX_DROP;
+
+ rx->key = rcu_dereference(rx->link_sta->gtk[mmie_keyidx]);
+ }
+@@ -2100,11 +2100,11 @@ ieee80211_rx_h_decrypt(struct ieee80211_rx_data *rx)
+
+ if (rx->key) {
+ if (unlikely(rx->key->flags & KEY_FLAG_TAINTED))
+- return RX_DROP_MONITOR;
++ return RX_DROP;
+
+ /* TODO: add threshold stuff again */
+ } else {
+- return RX_DROP_MONITOR;
++ return RX_DROP;
+ }
+
+ switch (rx->key->conf.cipher) {
+@@ -2278,7 +2278,7 @@ ieee80211_rx_h_defragment(struct ieee80211_rx_data *rx)
+ goto out;
+
+ if (is_multicast_ether_addr(hdr->addr1))
+- return RX_DROP_MONITOR;
++ return RX_DROP;
+
+ I802_DEBUG_INC(rx->local->rx_handlers_fragments);
+
+@@ -2333,7 +2333,7 @@ ieee80211_rx_h_defragment(struct ieee80211_rx_data *rx)
+ rx->seqno_idx, hdr);
+ if (!entry) {
+ I802_DEBUG_INC(rx->local->rx_handlers_drop_defrag);
+- return RX_DROP_MONITOR;
++ return RX_DROP;
+ }
+
+ /* "The receiver shall discard MSDUs and MMPDUs whose constituent
+@@ -2855,25 +2855,25 @@ ieee80211_rx_mesh_data(struct ieee80211_sub_if_data *sdata, struct sta_info *sta
+ return RX_CONTINUE;
+
+ if (!pskb_may_pull(skb, sizeof(*eth) + 6))
+- return RX_DROP_MONITOR;
++ return RX_DROP;
+
+ mesh_hdr = (struct ieee80211s_hdr *)(skb->data + sizeof(*eth));
+ mesh_hdrlen = ieee80211_get_mesh_hdrlen(mesh_hdr);
+
+ if (!pskb_may_pull(skb, sizeof(*eth) + mesh_hdrlen))
+- return RX_DROP_MONITOR;
++ return RX_DROP;
+
+ eth = (struct ethhdr *)skb->data;
+ multicast = is_multicast_ether_addr(eth->h_dest);
+
+ mesh_hdr = (struct ieee80211s_hdr *)(eth + 1);
+ if (!mesh_hdr->ttl)
+- return RX_DROP_MONITOR;
++ return RX_DROP;
+
+ /* frame is in RMC, don't forward */
+ if (is_multicast_ether_addr(eth->h_dest) &&
+ mesh_rmc_check(sdata, eth->h_source, mesh_hdr))
+- return RX_DROP_MONITOR;
++ return RX_DROP;
+
+ /* forward packet */
+ if (sdata->crypto_tx_tailroom_needed_cnt)
+@@ -2890,7 +2890,7 @@ ieee80211_rx_mesh_data(struct ieee80211_sub_if_data *sdata, struct sta_info *sta
+ /* has_a4 already checked in ieee80211_rx_mesh_check */
+ proxied_addr = mesh_hdr->eaddr2;
+ else
+- return RX_DROP_MONITOR;
++ return RX_DROP;
+
+ rcu_read_lock();
+ mppath = mpp_path_lookup(sdata, proxied_addr);
+@@ -2922,14 +2922,14 @@ ieee80211_rx_mesh_data(struct ieee80211_sub_if_data *sdata, struct sta_info *sta
+ goto rx_accept;
+
+ IEEE80211_IFSTA_MESH_CTR_INC(ifmsh, dropped_frames_ttl);
+- return RX_DROP_MONITOR;
++ return RX_DROP;
+ }
+
+ if (!ifmsh->mshcfg.dot11MeshForwarding) {
+ if (is_multicast_ether_addr(eth->h_dest))
+ goto rx_accept;
+
+- return RX_DROP_MONITOR;
++ return RX_DROP;
+ }
+
+ skb_set_queue_mapping(skb, ieee802_1d_to_ac[skb->priority]);
+@@ -3122,7 +3122,7 @@ ieee80211_rx_h_amsdu(struct ieee80211_rx_data *rx)
+ return RX_CONTINUE;
+
+ if (unlikely(!ieee80211_is_data_present(fc)))
+- return RX_DROP_MONITOR;
++ return RX_DROP;
+
+ if (unlikely(ieee80211_has_a4(hdr->frame_control))) {
+ switch (rx->sdata->vif.type) {
+@@ -3179,19 +3179,16 @@ ieee80211_rx_h_data(struct ieee80211_rx_data *rx)
+ return RX_CONTINUE;
+
+ if (unlikely(!ieee80211_is_data_present(hdr->frame_control)))
+- return RX_DROP_MONITOR;
++ return RX_DROP;
+
+- /*
+- * Send unexpected-4addr-frame event to hostapd. For older versions,
+- * also drop the frame to cooked monitor interfaces.
+- */
++ /* Send unexpected-4addr-frame event to hostapd */
+ if (ieee80211_has_a4(hdr->frame_control) &&
+ sdata->vif.type == NL80211_IFTYPE_AP) {
+ if (rx->sta &&
+ !test_and_set_sta_flag(rx->sta, WLAN_STA_4ADDR_EVENT))
+ cfg80211_rx_unexpected_4addr_frame(
+ rx->sdata->dev, rx->sta->sta.addr, GFP_ATOMIC);
+- return RX_DROP_MONITOR;
++ return RX_DROP;
+ }
+
+ res = __ieee80211_data_to_8023(rx, &port_control);
+@@ -3203,7 +3200,7 @@ ieee80211_rx_h_data(struct ieee80211_rx_data *rx)
+ return res;
+
+ if (!ieee80211_frame_allowed(rx, fc))
+- return RX_DROP_MONITOR;
++ return RX_DROP;
+
+ /* directly handle TDLS channel switch requests/responses */
+ if (unlikely(((struct ethhdr *)rx->skb->data)->h_proto ==
+@@ -3268,11 +3265,11 @@ ieee80211_rx_h_ctrl(struct ieee80211_rx_data *rx, struct sk_buff_head *frames)
+ };
+
+ if (!rx->sta)
+- return RX_DROP_MONITOR;
++ return RX_DROP;
+
+ if (skb_copy_bits(skb, offsetof(struct ieee80211_bar, control),
+ &bar_data, sizeof(bar_data)))
+- return RX_DROP_MONITOR;
++ return RX_DROP;
+
+ tid = le16_to_cpu(bar_data.control) >> 12;
+
+@@ -3284,7 +3281,7 @@ ieee80211_rx_h_ctrl(struct ieee80211_rx_data *rx, struct sk_buff_head *frames)
+
+ tid_agg_rx = rcu_dereference(rx->sta->ampdu_mlme.tid_rx[tid]);
+ if (!tid_agg_rx)
+- return RX_DROP_MONITOR;
++ return RX_DROP;
+
+ start_seq_num = le16_to_cpu(bar_data.start_seq_num) >> 4;
+ event.u.ba.tid = tid;
+@@ -3308,12 +3305,7 @@ ieee80211_rx_h_ctrl(struct ieee80211_rx_data *rx, struct sk_buff_head *frames)
+ return RX_QUEUED;
+ }
+
+- /*
+- * After this point, we only want management frames,
+- * so we can drop all remaining control frames to
+- * cooked monitor interfaces.
+- */
+- return RX_DROP_MONITOR;
++ return RX_DROP;
+ }
+
+ static void ieee80211_process_sa_query_req(struct ieee80211_sub_if_data *sdata,
+@@ -3422,10 +3414,10 @@ ieee80211_rx_h_mgmt_check(struct ieee80211_rx_data *rx)
+ * and unknown (reserved) frames are useless.
+ */
+ if (rx->skb->len < 24)
+- return RX_DROP_MONITOR;
++ return RX_DROP;
+
+ if (!ieee80211_is_mgmt(mgmt->frame_control))
+- return RX_DROP_MONITOR;
++ return RX_DROP;
+
+ /* drop too small action frames */
+ if (ieee80211_is_action(mgmt->frame_control) &&
+@@ -3951,17 +3943,16 @@ ieee80211_rx_h_action_return(struct ieee80211_rx_data *rx)
+ * ones. For all other modes we will return them to the sender,
+ * setting the 0x80 bit in the action category, as required by
+ * 802.11-2012 9.24.4.
+- * Newer versions of hostapd shall also use the management frame
+- * registration mechanisms, but older ones still use cooked
+- * monitor interfaces so push all frames there.
++ * Newer versions of hostapd use the management frame registration
++ * mechanisms and old cooked monitor interface is no longer supported.
+ */
+ if (!(status->rx_flags & IEEE80211_RX_MALFORMED_ACTION_FRM) &&
+ (sdata->vif.type == NL80211_IFTYPE_AP ||
+ sdata->vif.type == NL80211_IFTYPE_AP_VLAN))
+- return RX_DROP_MONITOR;
++ return RX_DROP;
+
+ if (is_multicast_ether_addr(mgmt->da))
+- return RX_DROP_MONITOR;
++ return RX_DROP;
+
+ /* do not return rejected action frames */
+ if (mgmt->u.action.category & 0x80)
+@@ -4006,7 +3997,7 @@ ieee80211_rx_h_ext(struct ieee80211_rx_data *rx)
+ return RX_CONTINUE;
+
+ if (sdata->vif.type != NL80211_IFTYPE_STATION)
+- return RX_DROP_MONITOR;
++ return RX_DROP;
+
+ /* for now only beacons are ext, so queue them */
+ ieee80211_queue_skb_to_iface(sdata, rx->link_id, rx->sta, rx->skb);
+@@ -4027,7 +4018,7 @@ ieee80211_rx_h_mgmt(struct ieee80211_rx_data *rx)
+ sdata->vif.type != NL80211_IFTYPE_ADHOC &&
+ sdata->vif.type != NL80211_IFTYPE_OCB &&
+ sdata->vif.type != NL80211_IFTYPE_STATION)
+- return RX_DROP_MONITOR;
++ return RX_DROP;
+
+ switch (stype) {
+ case cpu_to_le16(IEEE80211_STYPE_AUTH):
+@@ -4038,32 +4029,32 @@ ieee80211_rx_h_mgmt(struct ieee80211_rx_data *rx)
+ case cpu_to_le16(IEEE80211_STYPE_DEAUTH):
+ if (is_multicast_ether_addr(mgmt->da) &&
+ !is_broadcast_ether_addr(mgmt->da))
+- return RX_DROP_MONITOR;
++ return RX_DROP;
+
+ /* process only for station/IBSS */
+ if (sdata->vif.type != NL80211_IFTYPE_STATION &&
+ sdata->vif.type != NL80211_IFTYPE_ADHOC)
+- return RX_DROP_MONITOR;
++ return RX_DROP;
+ break;
+ case cpu_to_le16(IEEE80211_STYPE_ASSOC_RESP):
+ case cpu_to_le16(IEEE80211_STYPE_REASSOC_RESP):
+ case cpu_to_le16(IEEE80211_STYPE_DISASSOC):
+ if (is_multicast_ether_addr(mgmt->da) &&
+ !is_broadcast_ether_addr(mgmt->da))
+- return RX_DROP_MONITOR;
++ return RX_DROP;
+
+ /* process only for station */
+ if (sdata->vif.type != NL80211_IFTYPE_STATION)
+- return RX_DROP_MONITOR;
++ return RX_DROP;
+ break;
+ case cpu_to_le16(IEEE80211_STYPE_PROBE_REQ):
+ /* process only for ibss and mesh */
+ if (sdata->vif.type != NL80211_IFTYPE_ADHOC &&
+ sdata->vif.type != NL80211_IFTYPE_MESH_POINT)
+- return RX_DROP_MONITOR;
++ return RX_DROP;
+ break;
+ default:
+- return RX_DROP_MONITOR;
++ return RX_DROP;
+ }
+
+ ieee80211_queue_skb_to_iface(sdata, rx->link_id, rx->sta, rx->skb);
+@@ -4071,82 +4062,9 @@ ieee80211_rx_h_mgmt(struct ieee80211_rx_data *rx)
+ return RX_QUEUED;
+ }
+
+-static void ieee80211_rx_cooked_monitor(struct ieee80211_rx_data *rx,
+- struct ieee80211_rate *rate,
+- ieee80211_rx_result reason)
+-{
+- struct ieee80211_sub_if_data *sdata;
+- struct ieee80211_local *local = rx->local;
+- struct sk_buff *skb = rx->skb, *skb2;
+- struct net_device *prev_dev = NULL;
+- struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(skb);
+- int needed_headroom;
+-
+- /*
+- * If cooked monitor has been processed already, then
+- * don't do it again. If not, set the flag.
+- */
+- if (rx->flags & IEEE80211_RX_CMNTR)
+- goto out_free_skb;
+- rx->flags |= IEEE80211_RX_CMNTR;
+-
+- /* If there are no cooked monitor interfaces, just free the SKB */
+- if (!local->cooked_mntrs)
+- goto out_free_skb;
+-
+- /* room for the radiotap header based on driver features */
+- needed_headroom = ieee80211_rx_radiotap_hdrlen(local, status, skb);
+-
+- if (skb_headroom(skb) < needed_headroom &&
+- pskb_expand_head(skb, needed_headroom, 0, GFP_ATOMIC))
+- goto out_free_skb;
+-
+- /* prepend radiotap information */
+- ieee80211_add_rx_radiotap_header(local, skb, rate, needed_headroom,
+- false);
+-
+- skb_reset_mac_header(skb);
+- skb->ip_summed = CHECKSUM_UNNECESSARY;
+- skb->pkt_type = PACKET_OTHERHOST;
+- skb->protocol = htons(ETH_P_802_2);
+-
+- list_for_each_entry_rcu(sdata, &local->interfaces, list) {
+- if (!ieee80211_sdata_running(sdata))
+- continue;
+-
+- if (sdata->vif.type != NL80211_IFTYPE_MONITOR ||
+- !(sdata->u.mntr.flags & MONITOR_FLAG_COOK_FRAMES))
+- continue;
+-
+- if (prev_dev) {
+- skb2 = skb_clone(skb, GFP_ATOMIC);
+- if (skb2) {
+- skb2->dev = prev_dev;
+- netif_receive_skb(skb2);
+- }
+- }
+-
+- prev_dev = sdata->dev;
+- dev_sw_netstats_rx_add(sdata->dev, skb->len);
+- }
+-
+- if (prev_dev) {
+- skb->dev = prev_dev;
+- netif_receive_skb(skb);
+- return;
+- }
+-
+- out_free_skb:
+- kfree_skb_reason(skb, (__force u32)reason);
+-}
+-
+ static void ieee80211_rx_handlers_result(struct ieee80211_rx_data *rx,
+ ieee80211_rx_result res)
+ {
+- struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(rx->skb);
+- struct ieee80211_supported_band *sband;
+- struct ieee80211_rate *rate = NULL;
+-
+ if (res == RX_QUEUED) {
+ I802_DEBUG_INC(rx->sdata->local->rx_handlers_queued);
+ return;
+@@ -4158,23 +4076,13 @@ static void ieee80211_rx_handlers_result(struct ieee80211_rx_data *rx,
+ rx->link_sta->rx_stats.dropped++;
+ }
+
+- if (u32_get_bits((__force u32)res, SKB_DROP_REASON_SUBSYS_MASK) ==
+- SKB_DROP_REASON_SUBSYS_MAC80211_UNUSABLE) {
+- kfree_skb_reason(rx->skb, (__force u32)res);
+- return;
+- }
+-
+- sband = rx->local->hw.wiphy->bands[status->band];
+- if (status->encoding == RX_ENC_LEGACY)
+- rate = &sband->bitrates[status->rate_idx];
+-
+- ieee80211_rx_cooked_monitor(rx, rate, res);
++ kfree_skb_reason(rx->skb, (__force u32)res);
+ }
+
+ static void ieee80211_rx_handlers(struct ieee80211_rx_data *rx,
+ struct sk_buff_head *frames)
+ {
+- ieee80211_rx_result res = RX_DROP_MONITOR;
++ ieee80211_rx_result res = RX_DROP;
+ struct sk_buff *skb;
+
+ #define CALL_RXH(rxh) \
+@@ -4238,7 +4146,7 @@ static void ieee80211_rx_handlers(struct ieee80211_rx_data *rx,
+ static void ieee80211_invoke_rx_handlers(struct ieee80211_rx_data *rx)
+ {
+ struct sk_buff_head reorder_release;
+- ieee80211_rx_result res = RX_DROP_MONITOR;
++ ieee80211_rx_result res = RX_DROP;
+
+ __skb_queue_head_init(&reorder_release);
+
+diff --git a/net/mac80211/status.c b/net/mac80211/status.c
+index 5f28f3633fa0a4..a362254b310cd5 100644
+--- a/net/mac80211/status.c
++++ b/net/mac80211/status.c
+@@ -895,8 +895,7 @@ static int ieee80211_tx_get_rates(struct ieee80211_hw *hw,
+ }
+
+ void ieee80211_tx_monitor(struct ieee80211_local *local, struct sk_buff *skb,
+- int retry_count, bool send_to_cooked,
+- struct ieee80211_tx_status *status)
++ int retry_count, struct ieee80211_tx_status *status)
+ {
+ struct sk_buff *skb2;
+ struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
+@@ -930,10 +929,6 @@ void ieee80211_tx_monitor(struct ieee80211_local *local, struct sk_buff *skb,
+ if (sdata->u.mntr.flags & MONITOR_FLAG_SKIP_TX)
+ continue;
+
+- if ((sdata->u.mntr.flags & MONITOR_FLAG_COOK_FRAMES) &&
+- !send_to_cooked)
+- continue;
+-
+ if (prev_dev) {
+ skb2 = skb_clone(skb, GFP_ATOMIC);
+ if (skb2) {
+@@ -964,7 +959,6 @@ static void __ieee80211_tx_status(struct ieee80211_hw *hw,
+ struct ieee80211_tx_info *info = status->info;
+ struct sta_info *sta;
+ __le16 fc;
+- bool send_to_cooked;
+ bool acked;
+ bool noack_success;
+ struct ieee80211_bar *bar;
+@@ -1091,28 +1085,16 @@ static void __ieee80211_tx_status(struct ieee80211_hw *hw,
+
+ ieee80211_report_used_skb(local, skb, false, status->ack_hwtstamp);
+
+- /* this was a transmitted frame, but now we want to reuse it */
+- skb_orphan(skb);
+-
+- /* Need to make a copy before skb->cb gets cleared */
+- send_to_cooked = !!(info->flags & IEEE80211_TX_CTL_INJECTED) ||
+- !(ieee80211_is_data(fc));
+-
+ /*
+ * This is a bit racy but we can avoid a lot of work
+ * with this test...
+ */
+- if (!local->tx_mntrs && (!send_to_cooked || !local->cooked_mntrs)) {
+- if (status->free_list)
+- list_add_tail(&skb->list, status->free_list);
+- else
+- dev_kfree_skb(skb);
+- return;
+- }
+-
+- /* send to monitor interfaces */
+- ieee80211_tx_monitor(local, skb, retry_count,
+- send_to_cooked, status);
++ if (local->tx_mntrs)
++ ieee80211_tx_monitor(local, skb, retry_count, status);
++ else if (status->free_list)
++ list_add_tail(&skb->list, status->free_list);
++ else
++ dev_kfree_skb(skb);
+ }
+
+ void ieee80211_tx_status_skb(struct ieee80211_hw *hw, struct sk_buff *skb)
+diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c
+index a24636bda67936..1289df373795eb 100644
+--- a/net/mac80211/tx.c
++++ b/net/mac80211/tx.c
+@@ -5617,7 +5617,7 @@ struct sk_buff *ieee80211_beacon_get_tim(struct ieee80211_hw *hw,
+ if (!copy)
+ return bcn;
+
+- ieee80211_tx_monitor(hw_to_local(hw), copy, 1, false, NULL);
++ ieee80211_tx_monitor(hw_to_local(hw), copy, 1, NULL);
+
+ return bcn;
+ }
+diff --git a/net/mac80211/util.c b/net/mac80211/util.c
+index fdda14c08e2b11..dec6e16b8c7d28 100644
+--- a/net/mac80211/util.c
++++ b/net/mac80211/util.c
+@@ -2156,7 +2156,8 @@ int ieee80211_reconfig(struct ieee80211_local *local)
+
+ wake_up:
+
+- if (local->monitors == local->open_count && local->monitors > 0)
++ if (local->virt_monitors > 0 &&
++ local->virt_monitors == local->open_count)
+ ieee80211_add_virtual_monitor(local);
+
+ /*
+diff --git a/net/mptcp/pm_userspace.c b/net/mptcp/pm_userspace.c
+index 940ca94c88634f..cd220742d24936 100644
+--- a/net/mptcp/pm_userspace.c
++++ b/net/mptcp/pm_userspace.c
+@@ -583,11 +583,9 @@ int mptcp_userspace_pm_set_flags(struct sk_buff *skb, struct genl_info *info)
+ if (ret < 0)
+ goto set_flags_err;
+
+- if (attr_rem) {
+- ret = mptcp_pm_parse_entry(attr_rem, info, false, &rem);
+- if (ret < 0)
+- goto set_flags_err;
+- }
++ ret = mptcp_pm_parse_entry(attr_rem, info, false, &rem);
++ if (ret < 0)
++ goto set_flags_err;
+
+ if (loc.addr.family == AF_UNSPEC ||
+ rem.addr.family == AF_UNSPEC) {
+diff --git a/net/netfilter/nf_conntrack_standalone.c b/net/netfilter/nf_conntrack_standalone.c
+index 502cf10aab41d5..2f666751c7e7c7 100644
+--- a/net/netfilter/nf_conntrack_standalone.c
++++ b/net/netfilter/nf_conntrack_standalone.c
+@@ -618,7 +618,9 @@ static struct ctl_table nf_ct_sysctl_table[] = {
+ .data = &nf_conntrack_max,
+ .maxlen = sizeof(int),
+ .mode = 0644,
+- .proc_handler = proc_dointvec,
++ .proc_handler = proc_dointvec_minmax,
++ .extra1 = SYSCTL_ZERO,
++ .extra2 = SYSCTL_INT_MAX,
+ },
+ [NF_SYSCTL_CT_COUNT] = {
+ .procname = "nf_conntrack_count",
+@@ -654,7 +656,9 @@ static struct ctl_table nf_ct_sysctl_table[] = {
+ .data = &nf_ct_expect_max,
+ .maxlen = sizeof(int),
+ .mode = 0644,
+- .proc_handler = proc_dointvec,
++ .proc_handler = proc_dointvec_minmax,
++ .extra1 = SYSCTL_ONE,
++ .extra2 = SYSCTL_INT_MAX,
+ },
+ [NF_SYSCTL_CT_ACCT] = {
+ .procname = "nf_conntrack_acct",
+@@ -947,7 +951,9 @@ static struct ctl_table nf_ct_netfilter_table[] = {
+ .data = &nf_conntrack_max,
+ .maxlen = sizeof(int),
+ .mode = 0644,
+- .proc_handler = proc_dointvec,
++ .proc_handler = proc_dointvec_minmax,
++ .extra1 = SYSCTL_ZERO,
++ .extra2 = SYSCTL_INT_MAX,
+ },
+ };
+
+diff --git a/net/sched/sch_hfsc.c b/net/sched/sch_hfsc.c
+index cb8c525ea20eab..7986145a527cbe 100644
+--- a/net/sched/sch_hfsc.c
++++ b/net/sched/sch_hfsc.c
+@@ -1569,6 +1569,9 @@ hfsc_enqueue(struct sk_buff *skb, struct Qdisc *sch, struct sk_buff **to_free)
+ return err;
+ }
+
++ sch->qstats.backlog += len;
++ sch->q.qlen++;
++
+ if (first && !cl->cl_nactive) {
+ if (cl->cl_flags & HFSC_RSC)
+ init_ed(cl, len);
+@@ -1584,9 +1587,6 @@ hfsc_enqueue(struct sk_buff *skb, struct Qdisc *sch, struct sk_buff **to_free)
+
+ }
+
+- sch->qstats.backlog += len;
+- sch->q.qlen++;
+-
+ return NET_XMIT_SUCCESS;
+ }
+
+diff --git a/net/smc/smc_pnet.c b/net/smc/smc_pnet.c
+index 716808f374a8d1..b391c2ef463f20 100644
+--- a/net/smc/smc_pnet.c
++++ b/net/smc/smc_pnet.c
+@@ -1079,14 +1079,16 @@ static void smc_pnet_find_roce_by_pnetid(struct net_device *ndev,
+ struct smc_init_info *ini)
+ {
+ u8 ndev_pnetid[SMC_MAX_PNETID_LEN];
++ struct net_device *base_ndev;
+ struct net *net;
+
+- ndev = pnet_find_base_ndev(ndev);
++ base_ndev = pnet_find_base_ndev(ndev);
+ net = dev_net(ndev);
+- if (smc_pnetid_by_dev_port(ndev->dev.parent, ndev->dev_port,
++ if (smc_pnetid_by_dev_port(base_ndev->dev.parent, base_ndev->dev_port,
+ ndev_pnetid) &&
++ smc_pnet_find_ndev_pnetid_by_table(base_ndev, ndev_pnetid) &&
+ smc_pnet_find_ndev_pnetid_by_table(ndev, ndev_pnetid)) {
+- smc_pnet_find_rdma_dev(ndev, ini);
++ smc_pnet_find_rdma_dev(base_ndev, ini);
+ return; /* pnetid could not be determined */
+ }
+ _smc_pnet_find_roce_by_pnetid(ndev_pnetid, ini, NULL, net);
+diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
+index 2fe88ea79a70c1..c9c5f0caef6bd9 100644
+--- a/net/sunrpc/clnt.c
++++ b/net/sunrpc/clnt.c
+@@ -270,9 +270,6 @@ static struct rpc_xprt *rpc_clnt_set_transport(struct rpc_clnt *clnt,
+ old = rcu_dereference_protected(clnt->cl_xprt,
+ lockdep_is_held(&clnt->cl_lock));
+
+- if (!xprt_bound(xprt))
+- clnt->cl_autobind = 1;
+-
+ clnt->cl_timeout = timeout;
+ rcu_assign_pointer(clnt->cl_xprt, xprt);
+ spin_unlock(&clnt->cl_lock);
+diff --git a/net/sunrpc/rpcb_clnt.c b/net/sunrpc/rpcb_clnt.c
+index 102c3818bc54d4..53bcca365fb1cd 100644
+--- a/net/sunrpc/rpcb_clnt.c
++++ b/net/sunrpc/rpcb_clnt.c
+@@ -820,9 +820,10 @@ static void rpcb_getport_done(struct rpc_task *child, void *data)
+ }
+
+ trace_rpcb_setport(child, map->r_status, map->r_port);
+- xprt->ops->set_port(xprt, map->r_port);
+- if (map->r_port)
++ if (map->r_port) {
++ xprt->ops->set_port(xprt, map->r_port);
+ xprt_set_bound(xprt);
++ }
+ }
+
+ /*
+diff --git a/net/sunrpc/sched.c b/net/sunrpc/sched.c
+index 9b45fbdc90cabe..73bc39281ef5f5 100644
+--- a/net/sunrpc/sched.c
++++ b/net/sunrpc/sched.c
+@@ -276,6 +276,8 @@ EXPORT_SYMBOL_GPL(rpc_destroy_wait_queue);
+
+ static int rpc_wait_bit_killable(struct wait_bit_key *key, int mode)
+ {
++ if (unlikely(current->flags & PF_EXITING))
++ return -EINTR;
+ schedule();
+ if (signal_pending_state(mode, current))
+ return -ERESTARTSYS;
+diff --git a/net/tipc/crypto.c b/net/tipc/crypto.c
+index c524421ec65252..8584893b478510 100644
+--- a/net/tipc/crypto.c
++++ b/net/tipc/crypto.c
+@@ -817,12 +817,16 @@ static int tipc_aead_encrypt(struct tipc_aead *aead, struct sk_buff *skb,
+ goto exit;
+ }
+
++ /* Get net to avoid freed tipc_crypto when delete namespace */
++ get_net(aead->crypto->net);
++
+ /* Now, do encrypt */
+ rc = crypto_aead_encrypt(req);
+ if (rc == -EINPROGRESS || rc == -EBUSY)
+ return rc;
+
+ tipc_bearer_put(b);
++ put_net(aead->crypto->net);
+
+ exit:
+ kfree(ctx);
+@@ -860,6 +864,7 @@ static void tipc_aead_encrypt_done(void *data, int err)
+ kfree(tx_ctx);
+ tipc_bearer_put(b);
+ tipc_aead_put(aead);
++ put_net(net);
+ }
+
+ /**
+diff --git a/net/wireless/chan.c b/net/wireless/chan.c
+index 9f918b77b40e23..4cdb74a3f38c6a 100644
+--- a/net/wireless/chan.c
++++ b/net/wireless/chan.c
+@@ -6,7 +6,7 @@
+ *
+ * Copyright 2009 Johannes Berg <johannes@sipsolutions.net>
+ * Copyright 2013-2014 Intel Mobile Communications GmbH
+- * Copyright 2018-2024 Intel Corporation
++ * Copyright 2018-2025 Intel Corporation
+ */
+
+ #include <linux/export.h>
+@@ -1497,6 +1497,12 @@ bool cfg80211_reg_check_beaconing(struct wiphy *wiphy,
+ if (cfg->reg_power == IEEE80211_REG_VLP_AP)
+ permitting_flags |= IEEE80211_CHAN_ALLOW_6GHZ_VLP_AP;
+
++ if ((cfg->iftype == NL80211_IFTYPE_P2P_GO ||
++ cfg->iftype == NL80211_IFTYPE_AP) &&
++ (chandef->width == NL80211_CHAN_WIDTH_20_NOHT ||
++ chandef->width == NL80211_CHAN_WIDTH_20))
++ permitting_flags |= IEEE80211_CHAN_ALLOW_20MHZ_ACTIVITY;
++
+ return _cfg80211_reg_can_beacon(wiphy, chandef, cfg->iftype,
+ check_no_ir ? IEEE80211_CHAN_NO_IR : 0,
+ permitting_flags);
+diff --git a/net/wireless/mlme.c b/net/wireless/mlme.c
+index e10f2b3b4b7f64..c1b71179601dbb 100644
+--- a/net/wireless/mlme.c
++++ b/net/wireless/mlme.c
+@@ -1361,6 +1361,10 @@ void cfg80211_mlo_reconf_add_done(struct net_device *dev,
+ if (data->added_links & BIT(link_id)) {
+ wdev->links[link_id].client.current_bss =
+ bss_from_pub(bss);
++
++ memcpy(wdev->links[link_id].addr,
++ data->links[link_id].addr,
++ ETH_ALEN);
+ } else {
+ cfg80211_unhold_bss(bss_from_pub(bss));
+ cfg80211_put_bss(wiphy, bss);
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index b457fe78672b71..370b668678da08 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -1234,6 +1234,10 @@ static int nl80211_msg_put_channel(struct sk_buff *msg, struct wiphy *wiphy,
+ if ((chan->flags & IEEE80211_CHAN_ALLOW_6GHZ_VLP_AP) &&
+ nla_put_flag(msg, NL80211_FREQUENCY_ATTR_ALLOW_6GHZ_VLP_AP))
+ goto nla_put_failure;
++ if ((chan->flags & IEEE80211_CHAN_ALLOW_20MHZ_ACTIVITY) &&
++ nla_put_flag(msg,
++ NL80211_FREQUENCY_ATTR_ALLOW_20MHZ_ACTIVITY))
++ goto nla_put_failure;
+ }
+
+ if (nla_put_u32(msg, NL80211_FREQUENCY_ATTR_MAX_TX_POWER,
+diff --git a/net/wireless/reg.c b/net/wireless/reg.c
+index 212e9561aae778..c1752b31734faa 100644
+--- a/net/wireless/reg.c
++++ b/net/wireless/reg.c
+@@ -5,7 +5,7 @@
+ * Copyright 2008-2011 Luis R. Rodriguez <mcgrof@qca.qualcomm.com>
+ * Copyright 2013-2014 Intel Mobile Communications GmbH
+ * Copyright 2017 Intel Deutschland GmbH
+- * Copyright (C) 2018 - 2024 Intel Corporation
++ * Copyright (C) 2018 - 2025 Intel Corporation
+ *
+ * Permission to use, copy, modify, and/or distribute this software for any
+ * purpose with or without fee is hereby granted, provided that the above
+@@ -1603,6 +1603,8 @@ static u32 map_regdom_flags(u32 rd_flags)
+ channel_flags |= IEEE80211_CHAN_PSD;
+ if (rd_flags & NL80211_RRF_ALLOW_6GHZ_VLP_AP)
+ channel_flags |= IEEE80211_CHAN_ALLOW_6GHZ_VLP_AP;
++ if (rd_flags & NL80211_RRF_ALLOW_20MHZ_ACTIVITY)
++ channel_flags |= IEEE80211_CHAN_ALLOW_20MHZ_ACTIVITY;
+ return channel_flags;
+ }
+
+diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
+index c13e13fa79fc0b..dc67870b76122a 100644
+--- a/net/xdp/xsk.c
++++ b/net/xdp/xsk.c
+@@ -1301,7 +1301,7 @@ static int xsk_bind(struct socket *sock, struct sockaddr *addr, int addr_len)
+ xs->queue_id = qid;
+ xp_add_xsk(xs->pool, xs);
+
+- if (xs->zc && qid < dev->real_num_rx_queues) {
++ if (qid < dev->real_num_rx_queues) {
+ struct netdev_rx_queue *rxq;
+
+ rxq = __netif_get_rx_queue(dev, qid);
+diff --git a/net/xfrm/espintcp.c b/net/xfrm/espintcp.c
+index fe82e2d073006e..fc7a603b04f130 100644
+--- a/net/xfrm/espintcp.c
++++ b/net/xfrm/espintcp.c
+@@ -171,8 +171,10 @@ int espintcp_queue_out(struct sock *sk, struct sk_buff *skb)
+ struct espintcp_ctx *ctx = espintcp_getctx(sk);
+
+ if (skb_queue_len(&ctx->out_queue) >=
+- READ_ONCE(net_hotdata.max_backlog))
++ READ_ONCE(net_hotdata.max_backlog)) {
++ kfree_skb(skb);
+ return -ENOBUFS;
++ }
+
+ __skb_queue_tail(&ctx->out_queue, skb);
+
+diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c
+index 6551e588fe526f..50a17112c87af6 100644
+--- a/net/xfrm/xfrm_policy.c
++++ b/net/xfrm/xfrm_policy.c
+@@ -1581,6 +1581,9 @@ int xfrm_policy_insert(int dir, struct xfrm_policy *policy, int excl)
+ struct xfrm_policy *delpol;
+ struct hlist_head *chain;
+
++ /* Sanitize mark before store */
++ policy->mark.v &= policy->mark.m;
++
+ spin_lock_bh(&net->xfrm.xfrm_policy_lock);
+ chain = policy_hash_bysel(net, &policy->selector, policy->family, dir);
+ if (chain)
+diff --git a/net/xfrm/xfrm_state.c b/net/xfrm/xfrm_state.c
+index 69af5964c886c9..8176081fa1f49c 100644
+--- a/net/xfrm/xfrm_state.c
++++ b/net/xfrm/xfrm_state.c
+@@ -838,9 +838,6 @@ int __xfrm_state_delete(struct xfrm_state *x)
+ xfrm_nat_keepalive_state_updated(x);
+ spin_unlock(&net->xfrm.xfrm_state_lock);
+
+- if (x->encap_sk)
+- sock_put(rcu_dereference_raw(x->encap_sk));
+-
+ xfrm_dev_state_delete(x);
+
+ /* All xfrm_state objects are created by xfrm_state_alloc.
+@@ -1721,6 +1718,9 @@ static void __xfrm_state_insert(struct xfrm_state *x)
+
+ list_add(&x->km.all, &net->xfrm.state_all);
+
++ /* Sanitize mark before store */
++ x->mark.v &= x->mark.m;
++
+ h = xfrm_dst_hash(net, &x->id.daddr, &x->props.saddr,
+ x->props.reqid, x->props.family);
+ XFRM_STATE_INSERT(bydst, &x->bydst, net->xfrm.state_bydst + h,
+diff --git a/net/xfrm/xfrm_user.c b/net/xfrm/xfrm_user.c
+index 82a768500999b2..b5266e0848e829 100644
+--- a/net/xfrm/xfrm_user.c
++++ b/net/xfrm/xfrm_user.c
+@@ -178,6 +178,12 @@ static inline int verify_replay(struct xfrm_usersa_info *p,
+ "Replay seq and seq_hi should be 0 for output SA");
+ return -EINVAL;
+ }
++ if (rs->oseq_hi && !(p->flags & XFRM_STATE_ESN)) {
++ NL_SET_ERR_MSG(
++ extack,
++ "Replay oseq_hi should be 0 in non-ESN mode for output SA");
++ return -EINVAL;
++ }
+ if (rs->bmp_len) {
+ NL_SET_ERR_MSG(extack, "Replay bmp_len should 0 for output SA");
+ return -EINVAL;
+@@ -190,6 +196,12 @@ static inline int verify_replay(struct xfrm_usersa_info *p,
+ "Replay oseq and oseq_hi should be 0 for input SA");
+ return -EINVAL;
+ }
++ if (rs->seq_hi && !(p->flags & XFRM_STATE_ESN)) {
++ NL_SET_ERR_MSG(
++ extack,
++ "Replay seq_hi should be 0 in non-ESN mode for input SA");
++ return -EINVAL;
++ }
+ }
+
+ return 0;
+diff --git a/samples/bpf/Makefile b/samples/bpf/Makefile
+index 5b632635e00dde..95a4fa1f1e4474 100644
+--- a/samples/bpf/Makefile
++++ b/samples/bpf/Makefile
+@@ -376,7 +376,7 @@ $(obj)/%.o: $(src)/%.c
+ @echo " CLANG-bpf " $@
+ $(Q)$(CLANG) $(NOSTDINC_FLAGS) $(LINUXINCLUDE) $(BPF_EXTRA_CFLAGS) \
+ -I$(obj) -I$(srctree)/tools/testing/selftests/bpf/ \
+- -I$(LIBBPF_INCLUDE) \
++ -I$(LIBBPF_INCLUDE) $(CLANG_SYS_INCLUDES) \
+ -D__KERNEL__ -D__BPF_TRACING__ -Wno-unused-value -Wno-pointer-sign \
+ -D__TARGET_ARCH_$(SRCARCH) -Wno-compare-distinct-pointer-types \
+ -Wno-gnu-variable-sized-type-not-at-end \
+diff --git a/scripts/Makefile.extrawarn b/scripts/Makefile.extrawarn
+index 686197407c3c61..5652d903523209 100644
+--- a/scripts/Makefile.extrawarn
++++ b/scripts/Makefile.extrawarn
+@@ -8,6 +8,7 @@
+
+ # Default set of warnings, always enabled
+ KBUILD_CFLAGS += -Wall
++KBUILD_CFLAGS += -Wextra
+ KBUILD_CFLAGS += -Wundef
+ KBUILD_CFLAGS += -Werror=implicit-function-declaration
+ KBUILD_CFLAGS += -Werror=implicit-int
+@@ -15,7 +16,7 @@ KBUILD_CFLAGS += -Werror=return-type
+ KBUILD_CFLAGS += -Werror=strict-prototypes
+ KBUILD_CFLAGS += -Wno-format-security
+ KBUILD_CFLAGS += -Wno-trigraphs
+-KBUILD_CFLAGS += $(call cc-disable-warning,frame-address,)
++KBUILD_CFLAGS += $(call cc-disable-warning, frame-address)
+ KBUILD_CFLAGS += $(call cc-disable-warning, address-of-packed-member)
+ KBUILD_CFLAGS += -Wmissing-declarations
+ KBUILD_CFLAGS += -Wmissing-prototypes
+@@ -68,6 +69,13 @@ KBUILD_CFLAGS += -Wno-pointer-sign
+ # globally built with -Wcast-function-type.
+ KBUILD_CFLAGS += $(call cc-option, -Wcast-function-type)
+
++# Currently, disable -Wstringop-overflow for GCC 11, globally.
++KBUILD_CFLAGS-$(CONFIG_CC_NO_STRINGOP_OVERFLOW) += $(call cc-disable-warning, stringop-overflow)
++KBUILD_CFLAGS-$(CONFIG_CC_STRINGOP_OVERFLOW) += $(call cc-option, -Wstringop-overflow)
++
++# Currently, disable -Wunterminated-string-initialization as broken
++KBUILD_CFLAGS += $(call cc-disable-warning, unterminated-string-initialization)
++
+ # The allocators already balk at large sizes, so silence the compiler
+ # warnings for bounds checks involving those possible values. While
+ # -Wno-alloc-size-larger-than would normally be used here, earlier versions
+@@ -97,7 +105,6 @@ KBUILD_CFLAGS += $(call cc-option,-Wenum-conversion)
+ # Explicitly clear padding bits during variable initialization
+ KBUILD_CFLAGS += $(call cc-option,-fzero-init-padding-bits=all)
+
+-KBUILD_CFLAGS += -Wextra
+ KBUILD_CFLAGS += -Wunused
+
+ #
+diff --git a/scripts/config b/scripts/config
+index ff88e2faefd35c..ea475c07de283e 100755
+--- a/scripts/config
++++ b/scripts/config
+@@ -32,6 +32,7 @@ commands:
+ Disable option directly after other option
+ --module-after|-M beforeopt option
+ Turn option into module directly after other option
++ --refresh Refresh the config using old settings
+
+ commands can be repeated multiple times
+
+@@ -124,16 +125,22 @@ undef_var() {
+ txt_delete "^# $name is not set" "$FN"
+ }
+
+-if [ "$1" = "--file" ]; then
+- FN="$2"
+- if [ "$FN" = "" ] ; then
+- usage
++FN=.config
++CMDS=()
++while [[ $# -gt 0 ]]; do
++ if [ "$1" = "--file" ]; then
++ if [ "$2" = "" ]; then
++ usage
++ fi
++ FN="$2"
++ shift 2
++ else
++ CMDS+=("$1")
++ shift
+ fi
+- shift 2
+-else
+- FN=.config
+-fi
++done
+
++set -- "${CMDS[@]}"
+ if [ "$1" = "" ] ; then
+ usage
+ fi
+@@ -217,9 +224,8 @@ while [ "$1" != "" ] ; do
+ set_var "${CONFIG_}$B" "${CONFIG_}$B=m" "${CONFIG_}$A"
+ ;;
+
+- # undocumented because it ignores --file (fixme)
+ --refresh)
+- yes "" | make oldconfig
++ yes "" | make oldconfig KCONFIG_CONFIG=$FN
+ ;;
+
+ *)
+diff --git a/scripts/kconfig/confdata.c b/scripts/kconfig/confdata.c
+index 3b55e7a4131d9a..ac95661a1c9dd9 100644
+--- a/scripts/kconfig/confdata.c
++++ b/scripts/kconfig/confdata.c
+@@ -385,7 +385,7 @@ int conf_read_simple(const char *name, int def)
+
+ def_flags = SYMBOL_DEF << def;
+ for_all_symbols(sym) {
+- sym->flags &= ~(def_flags|SYMBOL_VALID);
++ sym->flags &= ~def_flags;
+ switch (sym->type) {
+ case S_INT:
+ case S_HEX:
+@@ -398,7 +398,11 @@ int conf_read_simple(const char *name, int def)
+ }
+ }
+
+- expr_invalidate_all();
++ if (def == S_DEF_USER) {
++ for_all_symbols(sym)
++ sym->flags &= ~SYMBOL_VALID;
++ expr_invalidate_all();
++ }
+
+ while (getline_stripped(&line, &line_asize, in) != -1) {
+ struct menu *choice;
+@@ -464,6 +468,9 @@ int conf_read_simple(const char *name, int def)
+ if (conf_set_sym_val(sym, def, def_flags, val))
+ continue;
+
++ if (def != S_DEF_USER)
++ continue;
++
+ /*
+ * If this is a choice member, give it the highest priority.
+ * If conflicting CONFIG options are given from an input file,
+@@ -967,10 +974,8 @@ static int conf_touch_deps(void)
+ depfile_path[depfile_prefix_len] = 0;
+
+ conf_read_simple(name, S_DEF_AUTO);
+- sym_calc_value(modules_sym);
+
+ for_all_symbols(sym) {
+- sym_calc_value(sym);
+ if (sym_is_choice(sym))
+ continue;
+ if (sym->flags & SYMBOL_WRITE) {
+@@ -1084,12 +1089,12 @@ int conf_write_autoconf(int overwrite)
+ if (ret)
+ return -1;
+
+- if (conf_touch_deps())
+- return 1;
+-
+ for_all_symbols(sym)
+ sym_calc_value(sym);
+
++ if (conf_touch_deps())
++ return 1;
++
+ ret = __conf_write_autoconf(conf_get_autoheader_name(),
+ print_symbol_for_c,
+ &comment_style_c);
+diff --git a/scripts/kconfig/merge_config.sh b/scripts/kconfig/merge_config.sh
+index 0b7952471c18f6..79c09b378be816 100755
+--- a/scripts/kconfig/merge_config.sh
++++ b/scripts/kconfig/merge_config.sh
+@@ -112,8 +112,8 @@ INITFILE=$1
+ shift;
+
+ if [ ! -r "$INITFILE" ]; then
+- echo "The base file '$INITFILE' does not exist. Exit." >&2
+- exit 1
++ echo "The base file '$INITFILE' does not exist. Creating one..." >&2
++ touch "$INITFILE"
+ fi
+
+ MERGE_LIST=$*
+diff --git a/security/integrity/ima/ima_main.c b/security/integrity/ima/ima_main.c
+index f3e7ac513db3f5..f99ab1a3b0f092 100644
+--- a/security/integrity/ima/ima_main.c
++++ b/security/integrity/ima/ima_main.c
+@@ -245,7 +245,9 @@ static int process_measurement(struct file *file, const struct cred *cred,
+ &allowed_algos);
+ violation_check = ((func == FILE_CHECK || func == MMAP_CHECK ||
+ func == MMAP_CHECK_REQPROT) &&
+- (ima_policy_flag & IMA_MEASURE));
++ (ima_policy_flag & IMA_MEASURE) &&
++ ((action & IMA_MEASURE) ||
++ (file->f_mode & FMODE_WRITE)));
+ if (!action && !violation_check)
+ return 0;
+
+diff --git a/security/smack/smackfs.c b/security/smack/smackfs.c
+index 357188f764ce16..a7886cfc9dc3ad 100644
+--- a/security/smack/smackfs.c
++++ b/security/smack/smackfs.c
+@@ -812,7 +812,7 @@ static int smk_open_cipso(struct inode *inode, struct file *file)
+ static ssize_t smk_set_cipso(struct file *file, const char __user *buf,
+ size_t count, loff_t *ppos, int format)
+ {
+- struct netlbl_lsm_catmap *old_cat, *new_cat = NULL;
++ struct netlbl_lsm_catmap *old_cat;
+ struct smack_known *skp;
+ struct netlbl_lsm_secattr ncats;
+ char mapcatset[SMK_CIPSOLEN];
+@@ -899,22 +899,15 @@ static ssize_t smk_set_cipso(struct file *file, const char __user *buf,
+
+ smack_catset_bit(cat, mapcatset);
+ }
+- ncats.flags = 0;
+- if (catlen == 0) {
+- ncats.attr.mls.cat = NULL;
+- ncats.attr.mls.lvl = maplevel;
+- new_cat = netlbl_catmap_alloc(GFP_ATOMIC);
+- if (new_cat)
+- new_cat->next = ncats.attr.mls.cat;
+- ncats.attr.mls.cat = new_cat;
+- skp->smk_netlabel.flags &= ~(1U << 3);
+- rc = 0;
+- } else {
+- rc = smk_netlbl_mls(maplevel, mapcatset, &ncats, SMK_CIPSOLEN);
+- }
++
++ rc = smk_netlbl_mls(maplevel, mapcatset, &ncats, SMK_CIPSOLEN);
+ if (rc >= 0) {
+ old_cat = skp->smk_netlabel.attr.mls.cat;
+ rcu_assign_pointer(skp->smk_netlabel.attr.mls.cat, ncats.attr.mls.cat);
++ if (ncats.attr.mls.cat)
++ skp->smk_netlabel.flags |= NETLBL_SECATTR_MLS_CAT;
++ else
++ skp->smk_netlabel.flags &= ~(u32)NETLBL_SECATTR_MLS_CAT;
+ skp->smk_netlabel.attr.mls.lvl = ncats.attr.mls.lvl;
+ synchronize_rcu();
+ netlbl_catmap_free(old_cat);
+diff --git a/sound/core/oss/pcm_oss.c b/sound/core/oss/pcm_oss.c
+index 4683b9139c566a..4ecb17bd5436e7 100644
+--- a/sound/core/oss/pcm_oss.c
++++ b/sound/core/oss/pcm_oss.c
+@@ -1074,8 +1074,7 @@ static int snd_pcm_oss_change_params_locked(struct snd_pcm_substream *substream)
+ runtime->oss.params = 0;
+ runtime->oss.prepare = 1;
+ runtime->oss.buffer_used = 0;
+- if (runtime->dma_area)
+- snd_pcm_format_set_silence(runtime->format, runtime->dma_area, bytes_to_samples(runtime, runtime->dma_bytes));
++ snd_pcm_runtime_buffer_set_silence(runtime);
+
+ runtime->oss.period_frames = snd_pcm_alsa_frames(substream, oss_period_size);
+
+diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c
+index 6c2b6a62d9d2f8..853ac5bb33ff2a 100644
+--- a/sound/core/pcm_native.c
++++ b/sound/core/pcm_native.c
+@@ -723,6 +723,17 @@ static void snd_pcm_buffer_access_unlock(struct snd_pcm_runtime *runtime)
+ atomic_inc(&runtime->buffer_accessing);
+ }
+
++/* fill the PCM buffer with the current silence format; called from pcm_oss.c */
++void snd_pcm_runtime_buffer_set_silence(struct snd_pcm_runtime *runtime)
++{
++ snd_pcm_buffer_access_lock(runtime);
++ if (runtime->dma_area)
++ snd_pcm_format_set_silence(runtime->format, runtime->dma_area,
++ bytes_to_samples(runtime, runtime->dma_bytes));
++ snd_pcm_buffer_access_unlock(runtime);
++}
++EXPORT_SYMBOL_GPL(snd_pcm_runtime_buffer_set_silence);
++
+ #if IS_ENABLED(CONFIG_SND_PCM_OSS)
+ #define is_oss_stream(substream) ((substream)->oss.oss)
+ #else
+diff --git a/sound/core/seq/seq_clientmgr.c b/sound/core/seq/seq_clientmgr.c
+index 0ae01b85bb18cd..198683a69a5346 100644
+--- a/sound/core/seq/seq_clientmgr.c
++++ b/sound/core/seq/seq_clientmgr.c
+@@ -1164,8 +1164,7 @@ static __poll_t snd_seq_poll(struct file *file, poll_table * wait)
+ if (snd_seq_file_flags(file) & SNDRV_SEQ_LFLG_OUTPUT) {
+
+ /* check if data is available in the pool */
+- if (!snd_seq_write_pool_allocated(client) ||
+- snd_seq_pool_poll_wait(client->pool, file, wait))
++ if (snd_seq_pool_poll_wait(client->pool, file, wait))
+ mask |= EPOLLOUT | EPOLLWRNORM;
+ }
+
+@@ -2600,8 +2599,6 @@ int snd_seq_kernel_client_write_poll(int clientid, struct file *file, poll_table
+ if (client == NULL)
+ return -ENXIO;
+
+- if (! snd_seq_write_pool_allocated(client))
+- return 1;
+ if (snd_seq_pool_poll_wait(client->pool, file, wait))
+ return 1;
+ return 0;
+diff --git a/sound/core/seq/seq_memory.c b/sound/core/seq/seq_memory.c
+index 20155e3e87c6ac..ccde0ca3d20823 100644
+--- a/sound/core/seq/seq_memory.c
++++ b/sound/core/seq/seq_memory.c
+@@ -427,6 +427,7 @@ int snd_seq_pool_poll_wait(struct snd_seq_pool *pool, struct file *file,
+ poll_table *wait)
+ {
+ poll_wait(file, &pool->output_sleep, wait);
++ guard(spinlock_irq)(&pool->lock);
+ return snd_seq_output_ok(pool);
+ }
+
+diff --git a/sound/pci/hda/hda_beep.c b/sound/pci/hda/hda_beep.c
+index e51d475725576b..13a7d92e8d8d03 100644
+--- a/sound/pci/hda/hda_beep.c
++++ b/sound/pci/hda/hda_beep.c
+@@ -31,8 +31,9 @@ static void generate_tone(struct hda_beep *beep, int tone)
+ beep->power_hook(beep, true);
+ beep->playing = 1;
+ }
+- snd_hda_codec_write(codec, beep->nid, 0,
+- AC_VERB_SET_BEEP_CONTROL, tone);
++ if (!codec->beep_just_power_on)
++ snd_hda_codec_write(codec, beep->nid, 0,
++ AC_VERB_SET_BEEP_CONTROL, tone);
+ if (!tone && beep->playing) {
+ beep->playing = 0;
+ if (beep->power_hook)
+@@ -212,10 +213,12 @@ int snd_hda_attach_beep_device(struct hda_codec *codec, int nid)
+ struct hda_beep *beep;
+ int err;
+
+- if (!snd_hda_get_bool_hint(codec, "beep"))
+- return 0; /* disabled explicitly by hints */
+- if (codec->beep_mode == HDA_BEEP_MODE_OFF)
+- return 0; /* disabled by module option */
++ if (!codec->beep_just_power_on) {
++ if (!snd_hda_get_bool_hint(codec, "beep"))
++ return 0; /* disabled explicitly by hints */
++ if (codec->beep_mode == HDA_BEEP_MODE_OFF)
++ return 0; /* disabled by module option */
++ }
+
+ beep = kzalloc(sizeof(*beep), GFP_KERNEL);
+ if (beep == NULL)
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 2ff02fb6f7e948..9e5c36ad8f52d3 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -28,6 +28,7 @@
+ #include <sound/hda_codec.h>
+ #include "hda_local.h"
+ #include "hda_auto_parser.h"
++#include "hda_beep.h"
+ #include "hda_jack.h"
+ #include "hda_generic.h"
+ #include "hda_component.h"
+@@ -6962,6 +6963,41 @@ static void alc285_fixup_hp_spectre_x360_eb1(struct hda_codec *codec,
+ }
+ }
+
++/* GPIO1 = amplifier on/off */
++static void alc285_fixup_hp_spectre_x360_df1(struct hda_codec *codec,
++ const struct hda_fixup *fix,
++ int action)
++{
++ struct alc_spec *spec = codec->spec;
++ static const hda_nid_t conn[] = { 0x02 };
++ static const struct hda_pintbl pincfgs[] = {
++ { 0x14, 0x90170110 }, /* front/high speakers */
++ { 0x17, 0x90170130 }, /* back/bass speakers */
++ { }
++ };
++
++ // enable mute led
++ alc285_fixup_hp_mute_led_coefbit(codec, fix, action);
++
++ switch (action) {
++ case HDA_FIXUP_ACT_PRE_PROBE:
++ /* needed for amp of back speakers */
++ spec->gpio_mask |= 0x01;
++ spec->gpio_dir |= 0x01;
++ snd_hda_apply_pincfgs(codec, pincfgs);
++ /* share DAC to have unified volume control */
++ snd_hda_override_conn_list(codec, 0x14, ARRAY_SIZE(conn), conn);
++ snd_hda_override_conn_list(codec, 0x17, ARRAY_SIZE(conn), conn);
++ break;
++ case HDA_FIXUP_ACT_INIT:
++ /* need to toggle GPIO to enable the amp of back speakers */
++ alc_update_gpio_data(codec, 0x01, true);
++ msleep(100);
++ alc_update_gpio_data(codec, 0x01, false);
++ break;
++ }
++}
++
+ static void alc285_fixup_hp_spectre_x360(struct hda_codec *codec,
+ const struct hda_fixup *fix, int action)
+ {
+@@ -7034,6 +7070,30 @@ static void alc285_fixup_hp_envy_x360(struct hda_codec *codec,
+ }
+ }
+
++static void alc285_fixup_hp_beep(struct hda_codec *codec,
++ const struct hda_fixup *fix, int action)
++{
++ if (action == HDA_FIXUP_ACT_PRE_PROBE) {
++ codec->beep_just_power_on = true;
++ } else if (action == HDA_FIXUP_ACT_INIT) {
++#ifdef CONFIG_SND_HDA_INPUT_BEEP
++ /*
++ * Just enable loopback to internal speaker and headphone jack.
++ * Disable amplification to get about the same beep volume as
++ * was on pure BIOS setup before loading the driver.
++ */
++ alc_update_coef_idx(codec, 0x36, 0x7070, BIT(13));
++
++ snd_hda_enable_beep_device(codec, 1);
++
++#if !IS_ENABLED(CONFIG_INPUT_PCSPKR)
++ dev_warn_once(hda_codec_dev(codec),
++ "enable CONFIG_INPUT_PCSPKR to get PC beeps\n");
++#endif
++#endif
++ }
++}
++
+ /* for hda_fixup_thinkpad_acpi() */
+ #include "thinkpad_helper.c"
+
+@@ -7718,6 +7778,7 @@ enum {
+ ALC280_FIXUP_HP_9480M,
+ ALC245_FIXUP_HP_X360_AMP,
+ ALC285_FIXUP_HP_SPECTRE_X360_EB1,
++ ALC285_FIXUP_HP_SPECTRE_X360_DF1,
+ ALC285_FIXUP_HP_ENVY_X360,
+ ALC288_FIXUP_DELL_HEADSET_MODE,
+ ALC288_FIXUP_DELL1_MIC_NO_PRESENCE,
+@@ -7819,6 +7880,7 @@ enum {
+ ALC285_FIXUP_HP_GPIO_LED,
+ ALC285_FIXUP_HP_MUTE_LED,
+ ALC285_FIXUP_HP_SPECTRE_X360_MUTE_LED,
++ ALC285_FIXUP_HP_BEEP_MICMUTE_LED,
+ ALC236_FIXUP_HP_MUTE_LED_COEFBIT2,
+ ALC236_FIXUP_HP_GPIO_LED,
+ ALC236_FIXUP_HP_MUTE_LED,
+@@ -9415,6 +9477,12 @@ static const struct hda_fixup alc269_fixups[] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc285_fixup_hp_spectre_x360_mute_led,
+ },
++ [ALC285_FIXUP_HP_BEEP_MICMUTE_LED] = {
++ .type = HDA_FIXUP_FUNC,
++ .v.func = alc285_fixup_hp_beep,
++ .chained = true,
++ .chain_id = ALC285_FIXUP_HP_MUTE_LED,
++ },
+ [ALC236_FIXUP_HP_MUTE_LED_COEFBIT2] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc236_fixup_hp_mute_led_coefbit2,
+@@ -9786,6 +9854,10 @@ static const struct hda_fixup alc269_fixups[] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc285_fixup_hp_spectre_x360_eb1
+ },
++ [ALC285_FIXUP_HP_SPECTRE_X360_DF1] = {
++ .type = HDA_FIXUP_FUNC,
++ .v.func = alc285_fixup_hp_spectre_x360_df1
++ },
+ [ALC285_FIXUP_HP_ENVY_X360] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc285_fixup_hp_envy_x360,
+@@ -10509,6 +10581,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x103c, 0x86c1, "HP Laptop 15-da3001TU", ALC236_FIXUP_HP_MUTE_LED_COEFBIT2),
+ SND_PCI_QUIRK(0x103c, 0x86c7, "HP Envy AiO 32", ALC274_FIXUP_HP_ENVY_GPIO),
+ SND_PCI_QUIRK(0x103c, 0x86e7, "HP Spectre x360 15-eb0xxx", ALC285_FIXUP_HP_SPECTRE_X360_EB1),
++ SND_PCI_QUIRK(0x103c, 0x863e, "HP Spectre x360 15-df1xxx", ALC285_FIXUP_HP_SPECTRE_X360_DF1),
+ SND_PCI_QUIRK(0x103c, 0x86e8, "HP Spectre x360 15-eb0xxx", ALC285_FIXUP_HP_SPECTRE_X360_EB1),
+ SND_PCI_QUIRK(0x103c, 0x86f9, "HP Spectre x360 13-aw0xxx", ALC285_FIXUP_HP_SPECTRE_X360_MUTE_LED),
+ SND_PCI_QUIRK(0x103c, 0x8716, "HP Elite Dragonfly G2 Notebook PC", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+@@ -10519,7 +10592,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x103c, 0x8730, "HP ProBook 445 G7", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
+ SND_PCI_QUIRK(0x103c, 0x8735, "HP ProBook 435 G7", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
+ SND_PCI_QUIRK(0x103c, 0x8736, "HP", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+- SND_PCI_QUIRK(0x103c, 0x8760, "HP", ALC285_FIXUP_HP_MUTE_LED),
++ SND_PCI_QUIRK(0x103c, 0x8760, "HP EliteBook 8{4,5}5 G7", ALC285_FIXUP_HP_BEEP_MICMUTE_LED),
+ SND_PCI_QUIRK(0x103c, 0x876e, "HP ENVY x360 Convertible 13-ay0xxx", ALC245_FIXUP_HP_X360_MUTE_LEDS),
+ SND_PCI_QUIRK(0x103c, 0x877a, "HP", ALC285_FIXUP_HP_MUTE_LED),
+ SND_PCI_QUIRK(0x103c, 0x877d, "HP", ALC236_FIXUP_HP_MUTE_LED),
+@@ -11161,6 +11234,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x17aa, 0x38fa, "Thinkbook 16P Gen5", ALC287_FIXUP_MG_RTKC_CSAMP_CS35L41_I2C_THINKPAD),
+ SND_PCI_QUIRK(0x17aa, 0x38fd, "ThinkBook plus Gen5 Hybrid", ALC287_FIXUP_TAS2781_I2C),
+ SND_PCI_QUIRK(0x17aa, 0x3902, "Lenovo E50-80", ALC269_FIXUP_DMIC_THINKPAD_ACPI),
++ SND_PCI_QUIRK(0x17aa, 0x390d, "Lenovo Yoga Pro 7 14ASP10", ALC287_FIXUP_YOGA9_14IAP7_BASS_SPK_PIN),
+ SND_PCI_QUIRK(0x17aa, 0x3913, "Lenovo 145", ALC236_FIXUP_LENOVO_INV_DMIC),
+ SND_PCI_QUIRK(0x17aa, 0x391f, "Yoga S990-16 pro Quad YC Quad", ALC287_FIXUP_TAS2781_I2C),
+ SND_PCI_QUIRK(0x17aa, 0x3920, "Yoga S990-16 pro Quad VECO Quad", ALC287_FIXUP_TAS2781_I2C),
+@@ -11425,6 +11499,7 @@ static const struct hda_model_fixup alc269_fixup_models[] = {
+ {.id = ALC295_FIXUP_HP_OMEN, .name = "alc295-hp-omen"},
+ {.id = ALC285_FIXUP_HP_SPECTRE_X360, .name = "alc285-hp-spectre-x360"},
+ {.id = ALC285_FIXUP_HP_SPECTRE_X360_EB1, .name = "alc285-hp-spectre-x360-eb1"},
++ {.id = ALC285_FIXUP_HP_SPECTRE_X360_DF1, .name = "alc285-hp-spectre-x360-df1"},
+ {.id = ALC285_FIXUP_HP_ENVY_X360, .name = "alc285-hp-envy-x360"},
+ {.id = ALC287_FIXUP_IDEAPAD_BASS_SPK_AMP, .name = "alc287-ideapad-bass-spk-amp"},
+ {.id = ALC287_FIXUP_YOGA9_14IAP7_BASS_SPK_PIN, .name = "alc287-yoga9-bass-spk-pin"},
+diff --git a/sound/soc/codecs/cs42l43-jack.c b/sound/soc/codecs/cs42l43-jack.c
+index 20e6ab6f0d4ad7..6165ac16c3a950 100644
+--- a/sound/soc/codecs/cs42l43-jack.c
++++ b/sound/soc/codecs/cs42l43-jack.c
+@@ -654,6 +654,10 @@ static int cs42l43_run_type_detect(struct cs42l43_codec *priv)
+
+ reinit_completion(&priv->type_detect);
+
++ regmap_update_bits(cs42l43->regmap, CS42L43_STEREO_MIC_CLAMP_CTRL,
++ CS42L43_SMIC_HPAMP_CLAMP_DIS_FRC_VAL_MASK,
++ CS42L43_SMIC_HPAMP_CLAMP_DIS_FRC_VAL_MASK);
++
+ cs42l43_start_hs_bias(priv, true);
+ regmap_update_bits(cs42l43->regmap, CS42L43_HS2,
+ CS42L43_HSDET_MODE_MASK, 0x3 << CS42L43_HSDET_MODE_SHIFT);
+@@ -665,6 +669,9 @@ static int cs42l43_run_type_detect(struct cs42l43_codec *priv)
+ CS42L43_HSDET_MODE_MASK, 0x2 << CS42L43_HSDET_MODE_SHIFT);
+ cs42l43_stop_hs_bias(priv);
+
++ regmap_update_bits(cs42l43->regmap, CS42L43_STEREO_MIC_CLAMP_CTRL,
++ CS42L43_SMIC_HPAMP_CLAMP_DIS_FRC_VAL_MASK, 0);
++
+ if (!time_left)
+ return -ETIMEDOUT;
+
+diff --git a/sound/soc/codecs/mt6359-accdet.h b/sound/soc/codecs/mt6359-accdet.h
+index c234f2f4276a12..78ada3a5bfae55 100644
+--- a/sound/soc/codecs/mt6359-accdet.h
++++ b/sound/soc/codecs/mt6359-accdet.h
+@@ -123,6 +123,15 @@ struct mt6359_accdet {
+ struct workqueue_struct *jd_workqueue;
+ };
+
++#if IS_ENABLED(CONFIG_SND_SOC_MT6359_ACCDET)
+ int mt6359_accdet_enable_jack_detect(struct snd_soc_component *component,
+ struct snd_soc_jack *jack);
++#else
++static inline int
++mt6359_accdet_enable_jack_detect(struct snd_soc_component *component,
++ struct snd_soc_jack *jack)
++{
++ return -EOPNOTSUPP;
++}
++#endif
+ #endif
+diff --git a/sound/soc/codecs/pcm3168a.c b/sound/soc/codecs/pcm3168a.c
+index fac0617ab95b65..6cbb8d0535b02e 100644
+--- a/sound/soc/codecs/pcm3168a.c
++++ b/sound/soc/codecs/pcm3168a.c
+@@ -493,9 +493,9 @@ static int pcm3168a_hw_params(struct snd_pcm_substream *substream,
+ }
+ break;
+ case 24:
+- if (provider_mode || (format == SND_SOC_DAIFMT_DSP_A) ||
+- (format == SND_SOC_DAIFMT_DSP_B)) {
+- dev_err(component->dev, "24-bit slots not supported in provider mode, or consumer mode using DSP\n");
++ if (!provider_mode && ((format == SND_SOC_DAIFMT_DSP_A) ||
++ (format == SND_SOC_DAIFMT_DSP_B))) {
++ dev_err(component->dev, "24-bit slots not supported in consumer mode using DSP\n");
+ return -EINVAL;
+ }
+ break;
+diff --git a/sound/soc/codecs/pcm6240.c b/sound/soc/codecs/pcm6240.c
+index 4ff39e0b95b272..b2bd2f172ae76e 100644
+--- a/sound/soc/codecs/pcm6240.c
++++ b/sound/soc/codecs/pcm6240.c
+@@ -14,7 +14,7 @@
+
+ #include <linux/unaligned.h>
+ #include <linux/firmware.h>
+-#include <linux/gpio.h>
++#include <linux/gpio/consumer.h>
+ #include <linux/i2c.h>
+ #include <linux/module.h>
+ #include <linux/of_irq.h>
+@@ -2035,10 +2035,8 @@ static const struct regmap_config pcmdevice_i2c_regmap = {
+
+ static void pcmdevice_remove(struct pcmdevice_priv *pcm_dev)
+ {
+- if (gpio_is_valid(pcm_dev->irq_info.gpio)) {
+- gpio_free(pcm_dev->irq_info.gpio);
+- free_irq(pcm_dev->irq_info.nmb, pcm_dev);
+- }
++ if (pcm_dev->irq)
++ free_irq(pcm_dev->irq, pcm_dev);
+ mutex_destroy(&pcm_dev->codec_lock);
+ }
+
+@@ -2109,7 +2107,7 @@ static int pcmdevice_i2c_probe(struct i2c_client *i2c)
+ ndev = 1;
+ dev_addrs[0] = i2c->addr;
+ }
+- pcm_dev->irq_info.gpio = of_irq_get(np, 0);
++ pcm_dev->irq = of_irq_get(np, 0);
+
+ for (i = 0; i < ndev; i++)
+ pcm_dev->addr[i] = dev_addrs[i];
+@@ -2132,22 +2130,10 @@ static int pcmdevice_i2c_probe(struct i2c_client *i2c)
+
+ if (pcm_dev->chip_id == PCM1690)
+ goto skip_interrupt;
+- if (gpio_is_valid(pcm_dev->irq_info.gpio)) {
+- dev_dbg(pcm_dev->dev, "irq-gpio = %d", pcm_dev->irq_info.gpio);
+-
+- ret = gpio_request(pcm_dev->irq_info.gpio, "PCMDEV-IRQ");
+- if (!ret) {
+- int gpio = pcm_dev->irq_info.gpio;
+-
+- gpio_direction_input(gpio);
+- pcm_dev->irq_info.nmb = gpio_to_irq(gpio);
+-
+- } else
+- dev_err(pcm_dev->dev, "%s: GPIO %d request error\n",
+- __func__, pcm_dev->irq_info.gpio);
++ if (pcm_dev->irq) {
++ dev_dbg(pcm_dev->dev, "irq = %d", pcm_dev->irq);
+ } else
+- dev_err(pcm_dev->dev, "Looking up irq-gpio failed %d\n",
+- pcm_dev->irq_info.gpio);
++ dev_err(pcm_dev->dev, "No irq provided\n");
+
+ skip_interrupt:
+ ret = devm_snd_soc_register_component(&i2c->dev,
+diff --git a/sound/soc/codecs/pcm6240.h b/sound/soc/codecs/pcm6240.h
+index 1e125bb9728603..2d8f9e798139ac 100644
+--- a/sound/soc/codecs/pcm6240.h
++++ b/sound/soc/codecs/pcm6240.h
+@@ -208,11 +208,6 @@ struct pcmdevice_regbin {
+ struct pcmdevice_config_info **cfg_info;
+ };
+
+-struct pcmdevice_irqinfo {
+- int gpio;
+- int nmb;
+-};
+-
+ struct pcmdevice_priv {
+ struct snd_soc_component *component;
+ struct i2c_client *client;
+@@ -221,7 +216,7 @@ struct pcmdevice_priv {
+ struct gpio_desc *hw_rst;
+ struct regmap *regmap;
+ struct pcmdevice_regbin regbin;
+- struct pcmdevice_irqinfo irq_info;
++ int irq;
+ unsigned int addr[PCMDEVICE_MAX_I2C_DEVICES];
+ unsigned int chip_id;
+ int cur_conf;
+diff --git a/sound/soc/codecs/rt722-sdca-sdw.c b/sound/soc/codecs/rt722-sdca-sdw.c
+index 4d3043627bd04e..cfb030e71e5c57 100644
+--- a/sound/soc/codecs/rt722-sdca-sdw.c
++++ b/sound/soc/codecs/rt722-sdca-sdw.c
+@@ -28,9 +28,50 @@ static bool rt722_sdca_readable_register(struct device *dev, unsigned int reg)
+ 0):
+ case SDW_SDCA_CTL(FUNC_NUM_JACK_CODEC, RT722_SDCA_ENT_GE49, RT722_SDCA_CTL_DETECTED_MODE,
+ 0):
+- case SDW_SDCA_CTL(FUNC_NUM_HID, RT722_SDCA_ENT_HID01, RT722_SDCA_CTL_HIDTX_CURRENT_OWNER,
+- 0) ... SDW_SDCA_CTL(FUNC_NUM_HID, RT722_SDCA_ENT_HID01,
+- RT722_SDCA_CTL_HIDTX_MESSAGE_LENGTH, 0):
++ case SDW_SDCA_CTL(FUNC_NUM_JACK_CODEC, RT722_SDCA_ENT_XU03, RT722_SDCA_CTL_SELECTED_MODE,
++ 0):
++ case SDW_SDCA_CTL(FUNC_NUM_JACK_CODEC, RT722_SDCA_ENT_USER_FU05,
++ RT722_SDCA_CTL_FU_MUTE, CH_L) ...
++ SDW_SDCA_CTL(FUNC_NUM_JACK_CODEC, RT722_SDCA_ENT_USER_FU05,
++ RT722_SDCA_CTL_FU_MUTE, CH_R):
++ case SDW_SDCA_CTL(FUNC_NUM_JACK_CODEC, RT722_SDCA_ENT_XU0D,
++ RT722_SDCA_CTL_SELECTED_MODE, 0):
++ case SDW_SDCA_CTL(FUNC_NUM_JACK_CODEC, RT722_SDCA_ENT_USER_FU0F,
++ RT722_SDCA_CTL_FU_MUTE, CH_L) ...
++ SDW_SDCA_CTL(FUNC_NUM_JACK_CODEC, RT722_SDCA_ENT_USER_FU0F,
++ RT722_SDCA_CTL_FU_MUTE, CH_R):
++ case SDW_SDCA_CTL(FUNC_NUM_JACK_CODEC, RT722_SDCA_ENT_PDE40,
++ RT722_SDCA_CTL_REQ_POWER_STATE, 0):
++ case SDW_SDCA_CTL(FUNC_NUM_JACK_CODEC, RT722_SDCA_ENT_PDE12,
++ RT722_SDCA_CTL_REQ_POWER_STATE, 0):
++ case SDW_SDCA_CTL(FUNC_NUM_JACK_CODEC, RT722_SDCA_ENT_CS01,
++ RT722_SDCA_CTL_SAMPLE_FREQ_INDEX, 0):
++ case SDW_SDCA_CTL(FUNC_NUM_JACK_CODEC, RT722_SDCA_ENT_CS11,
++ RT722_SDCA_CTL_SAMPLE_FREQ_INDEX, 0):
++ case SDW_SDCA_CTL(FUNC_NUM_MIC_ARRAY, RT722_SDCA_ENT_USER_FU1E,
++ RT722_SDCA_CTL_FU_MUTE, CH_01) ...
++ SDW_SDCA_CTL(FUNC_NUM_MIC_ARRAY, RT722_SDCA_ENT_USER_FU1E,
++ RT722_SDCA_CTL_FU_MUTE, CH_04):
++ case SDW_SDCA_CTL(FUNC_NUM_MIC_ARRAY, RT722_SDCA_ENT_IT26,
++ RT722_SDCA_CTL_VENDOR_DEF, 0):
++ case SDW_SDCA_CTL(FUNC_NUM_MIC_ARRAY, RT722_SDCA_ENT_PDE2A,
++ RT722_SDCA_CTL_REQ_POWER_STATE, 0):
++ case SDW_SDCA_CTL(FUNC_NUM_MIC_ARRAY, RT722_SDCA_ENT_CS1F,
++ RT722_SDCA_CTL_SAMPLE_FREQ_INDEX, 0):
++ case SDW_SDCA_CTL(FUNC_NUM_HID, RT722_SDCA_ENT_HID01,
++ RT722_SDCA_CTL_HIDTX_CURRENT_OWNER, 0) ...
++ SDW_SDCA_CTL(FUNC_NUM_HID, RT722_SDCA_ENT_HID01,
++ RT722_SDCA_CTL_HIDTX_MESSAGE_LENGTH, 0):
++ case SDW_SDCA_CTL(FUNC_NUM_AMP, RT722_SDCA_ENT_USER_FU06,
++ RT722_SDCA_CTL_FU_MUTE, CH_L) ...
++ SDW_SDCA_CTL(FUNC_NUM_AMP, RT722_SDCA_ENT_USER_FU06,
++ RT722_SDCA_CTL_FU_MUTE, CH_R):
++ case SDW_SDCA_CTL(FUNC_NUM_AMP, RT722_SDCA_ENT_OT23,
++ RT722_SDCA_CTL_VENDOR_DEF, CH_08):
++ case SDW_SDCA_CTL(FUNC_NUM_AMP, RT722_SDCA_ENT_PDE23,
++ RT722_SDCA_CTL_REQ_POWER_STATE, 0):
++ case SDW_SDCA_CTL(FUNC_NUM_AMP, RT722_SDCA_ENT_CS31,
++ RT722_SDCA_CTL_SAMPLE_FREQ_INDEX, 0):
+ case RT722_BUF_ADDR_HID1 ... RT722_BUF_ADDR_HID2:
+ return true;
+ default:
+@@ -74,6 +115,7 @@ static bool rt722_sdca_mbq_readable_register(struct device *dev, unsigned int re
+ case 0x5600000 ... 0x5600007:
+ case 0x5700000 ... 0x5700004:
+ case 0x5800000 ... 0x5800004:
++ case 0x5810000:
+ case 0x5b00003:
+ case 0x5c00011:
+ case 0x5d00006:
+@@ -81,6 +123,7 @@ static bool rt722_sdca_mbq_readable_register(struct device *dev, unsigned int re
+ case 0x5f00030:
+ case 0x6100000 ... 0x6100051:
+ case 0x6100055 ... 0x6100057:
++ case 0x6100060:
+ case 0x6100062:
+ case 0x6100064 ... 0x6100065:
+ case 0x6100067:
+diff --git a/sound/soc/codecs/sma1307.c b/sound/soc/codecs/sma1307.c
+index 480bcea48541e6..793abec56dd27d 100644
+--- a/sound/soc/codecs/sma1307.c
++++ b/sound/soc/codecs/sma1307.c
+@@ -1710,7 +1710,7 @@ static void sma1307_check_fault_worker(struct work_struct *work)
+ static void sma1307_setting_loaded(struct sma1307_priv *sma1307, const char *file)
+ {
+ const struct firmware *fw;
+- int *data, size, offset, num_mode;
++ int size, offset, num_mode;
+ int ret;
+
+ ret = request_firmware(&fw, file, sma1307->dev);
+@@ -1727,7 +1727,12 @@ static void sma1307_setting_loaded(struct sma1307_priv *sma1307, const char *fil
+ return;
+ }
+
+- data = kzalloc(fw->size, GFP_KERNEL);
++ int *data __free(kfree) = kzalloc(fw->size, GFP_KERNEL);
++ if (!data) {
++ release_firmware(fw);
++ sma1307->set.status = false;
++ return;
++ }
+ size = fw->size >> 2;
+ memcpy(data, fw->data, fw->size);
+
+@@ -1741,6 +1746,11 @@ static void sma1307_setting_loaded(struct sma1307_priv *sma1307, const char *fil
+ sma1307->set.header = devm_kzalloc(sma1307->dev,
+ sma1307->set.header_size,
+ GFP_KERNEL);
++ if (!sma1307->set.header) {
++ sma1307->set.status = false;
++ return;
++ }
++
+ memcpy(sma1307->set.header, data,
+ sma1307->set.header_size * sizeof(int));
+
+@@ -1756,6 +1766,11 @@ static void sma1307_setting_loaded(struct sma1307_priv *sma1307, const char *fil
+ sma1307->set.def
+ = devm_kzalloc(sma1307->dev,
+ sma1307->set.def_size * sizeof(int), GFP_KERNEL);
++ if (!sma1307->set.def) {
++ sma1307->set.status = false;
++ return;
++ }
++
+ memcpy(sma1307->set.def,
+ &data[sma1307->set.header_size],
+ sma1307->set.def_size * sizeof(int));
+@@ -1768,6 +1783,13 @@ static void sma1307_setting_loaded(struct sma1307_priv *sma1307, const char *fil
+ = devm_kzalloc(sma1307->dev,
+ sma1307->set.mode_size * 2 * sizeof(int),
+ GFP_KERNEL);
++ if (!sma1307->set.mode_set[i]) {
++ for (int j = 0; j < i; j++)
++ kfree(sma1307->set.mode_set[j]);
++ sma1307->set.status = false;
++ return;
++ }
++
+ for (int j = 0; j < sma1307->set.mode_size; j++) {
+ sma1307->set.mode_set[i][2 * j]
+ = data[offset + ((num_mode + 1) * j)];
+@@ -1776,7 +1798,6 @@ static void sma1307_setting_loaded(struct sma1307_priv *sma1307, const char *fil
+ }
+ }
+
+- kfree(data);
+ sma1307->set.status = true;
+
+ }
+diff --git a/sound/soc/codecs/tas2764.c b/sound/soc/codecs/tas2764.c
+index 58315eab492a16..39a7d39536fe6f 100644
+--- a/sound/soc/codecs/tas2764.c
++++ b/sound/soc/codecs/tas2764.c
+@@ -180,33 +180,6 @@ static SOC_ENUM_SINGLE_DECL(
+ static const struct snd_kcontrol_new tas2764_asi1_mux =
+ SOC_DAPM_ENUM("ASI1 Source", tas2764_ASI1_src_enum);
+
+-static int tas2764_dac_event(struct snd_soc_dapm_widget *w,
+- struct snd_kcontrol *kcontrol, int event)
+-{
+- struct snd_soc_component *component = snd_soc_dapm_to_component(w->dapm);
+- struct tas2764_priv *tas2764 = snd_soc_component_get_drvdata(component);
+- int ret;
+-
+- switch (event) {
+- case SND_SOC_DAPM_POST_PMU:
+- tas2764->dac_powered = true;
+- ret = tas2764_update_pwr_ctrl(tas2764);
+- break;
+- case SND_SOC_DAPM_PRE_PMD:
+- tas2764->dac_powered = false;
+- ret = tas2764_update_pwr_ctrl(tas2764);
+- break;
+- default:
+- dev_err(tas2764->dev, "Unsupported event\n");
+- return -EINVAL;
+- }
+-
+- if (ret < 0)
+- return ret;
+-
+- return 0;
+-}
+-
+ static const struct snd_kcontrol_new isense_switch =
+ SOC_DAPM_SINGLE("Switch", TAS2764_PWR_CTRL, TAS2764_ISENSE_POWER_EN, 1, 1);
+ static const struct snd_kcontrol_new vsense_switch =
+@@ -219,8 +192,7 @@ static const struct snd_soc_dapm_widget tas2764_dapm_widgets[] = {
+ 1, &isense_switch),
+ SND_SOC_DAPM_SWITCH("VSENSE", TAS2764_PWR_CTRL, TAS2764_VSENSE_POWER_EN,
+ 1, &vsense_switch),
+- SND_SOC_DAPM_DAC_E("DAC", NULL, SND_SOC_NOPM, 0, 0, tas2764_dac_event,
+- SND_SOC_DAPM_POST_PMU | SND_SOC_DAPM_PRE_PMD),
++ SND_SOC_DAPM_DAC("DAC", NULL, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_OUTPUT("OUT"),
+ SND_SOC_DAPM_SIGGEN("VMON"),
+ SND_SOC_DAPM_SIGGEN("IMON")
+@@ -241,9 +213,28 @@ static int tas2764_mute(struct snd_soc_dai *dai, int mute, int direction)
+ {
+ struct tas2764_priv *tas2764 =
+ snd_soc_component_get_drvdata(dai->component);
++ int ret;
++
++ if (!mute) {
++ tas2764->dac_powered = true;
++ ret = tas2764_update_pwr_ctrl(tas2764);
++ if (ret)
++ return ret;
++ }
+
+ tas2764->unmuted = !mute;
+- return tas2764_update_pwr_ctrl(tas2764);
++ ret = tas2764_update_pwr_ctrl(tas2764);
++ if (ret)
++ return ret;
++
++ if (mute) {
++ tas2764->dac_powered = false;
++ ret = tas2764_update_pwr_ctrl(tas2764);
++ if (ret)
++ return ret;
++ }
++
++ return 0;
+ }
+
+ static int tas2764_set_bitwidth(struct tas2764_priv *tas2764, int bitwidth)
+@@ -634,6 +625,7 @@ static const struct reg_default tas2764_reg_defaults[] = {
+ { TAS2764_TDM_CFG2, 0x0a },
+ { TAS2764_TDM_CFG3, 0x10 },
+ { TAS2764_TDM_CFG5, 0x42 },
++ { TAS2764_INT_CLK_CFG, 0x19 },
+ };
+
+ static const struct regmap_range_cfg tas2764_regmap_ranges[] = {
+@@ -651,6 +643,7 @@ static const struct regmap_range_cfg tas2764_regmap_ranges[] = {
+ static bool tas2764_volatile_register(struct device *dev, unsigned int reg)
+ {
+ switch (reg) {
++ case TAS2764_SW_RST:
+ case TAS2764_INT_LTCH0 ... TAS2764_INT_LTCH4:
+ case TAS2764_INT_CLK_CFG:
+ return true;
+diff --git a/sound/soc/codecs/wsa883x.c b/sound/soc/codecs/wsa883x.c
+index 47da5674d7c922..e31b7fb104e6c5 100644
+--- a/sound/soc/codecs/wsa883x.c
++++ b/sound/soc/codecs/wsa883x.c
+@@ -529,7 +529,7 @@ static const struct sdw_port_config wsa883x_pconfig[WSA883X_MAX_SWR_PORTS] = {
+ },
+ [WSA883X_PORT_VISENSE] = {
+ .num = WSA883X_PORT_VISENSE + 1,
+- .ch_mask = 0x3,
++ .ch_mask = 0x1,
+ },
+ };
+
+diff --git a/sound/soc/codecs/wsa884x.c b/sound/soc/codecs/wsa884x.c
+index 560a2c04b69553..18b0ee8f15a55a 100644
+--- a/sound/soc/codecs/wsa884x.c
++++ b/sound/soc/codecs/wsa884x.c
+@@ -891,7 +891,7 @@ static const struct sdw_port_config wsa884x_pconfig[WSA884X_MAX_SWR_PORTS] = {
+ },
+ [WSA884X_PORT_VISENSE] = {
+ .num = WSA884X_PORT_VISENSE + 1,
+- .ch_mask = 0x3,
++ .ch_mask = 0x1,
+ },
+ [WSA884X_PORT_CPS] = {
+ .num = WSA884X_PORT_CPS + 1,
+diff --git a/sound/soc/fsl/imx-card.c b/sound/soc/fsl/imx-card.c
+index 21f617f6f9fa84..566214cb3d60c2 100644
+--- a/sound/soc/fsl/imx-card.c
++++ b/sound/soc/fsl/imx-card.c
+@@ -543,7 +543,7 @@ static int imx_card_parse_of(struct imx_card_data *data)
+ if (!card->dai_link)
+ return -ENOMEM;
+
+- data->link_data = devm_kcalloc(dev, num_links, sizeof(*link), GFP_KERNEL);
++ data->link_data = devm_kcalloc(dev, num_links, sizeof(*link_data), GFP_KERNEL);
+ if (!data->link_data)
+ return -ENOMEM;
+
+diff --git a/sound/soc/intel/boards/bytcr_rt5640.c b/sound/soc/intel/boards/bytcr_rt5640.c
+index 6446cda0f85727..0f3b8f44e70112 100644
+--- a/sound/soc/intel/boards/bytcr_rt5640.c
++++ b/sound/soc/intel/boards/bytcr_rt5640.c
+@@ -576,6 +576,19 @@ static const struct dmi_system_id byt_rt5640_quirk_table[] = {
+ BYT_RT5640_SSP0_AIF2 |
+ BYT_RT5640_MCLK_EN),
+ },
++ { /* Acer Aspire SW3-013 */
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "Aspire SW3-013"),
++ },
++ .driver_data = (void *)(BYT_RT5640_DMIC1_MAP |
++ BYT_RT5640_JD_SRC_JD2_IN4N |
++ BYT_RT5640_OVCD_TH_2000UA |
++ BYT_RT5640_OVCD_SF_0P75 |
++ BYT_RT5640_DIFF_MIC |
++ BYT_RT5640_SSP0_AIF1 |
++ BYT_RT5640_MCLK_EN),
++ },
+ {
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
+diff --git a/sound/soc/mediatek/mt8188/mt8188-afe-clk.c b/sound/soc/mediatek/mt8188/mt8188-afe-clk.c
+index e69c1bb2cb2395..7f411b85778237 100644
+--- a/sound/soc/mediatek/mt8188/mt8188-afe-clk.c
++++ b/sound/soc/mediatek/mt8188/mt8188-afe-clk.c
+@@ -58,7 +58,15 @@ static const char *aud_clks[MT8188_CLK_NUM] = {
+ [MT8188_CLK_AUD_ADC] = "aud_adc",
+ [MT8188_CLK_AUD_DAC_HIRES] = "aud_dac_hires",
+ [MT8188_CLK_AUD_A1SYS_HP] = "aud_a1sys_hp",
++ [MT8188_CLK_AUD_AFE_DMIC1] = "aud_afe_dmic1",
++ [MT8188_CLK_AUD_AFE_DMIC2] = "aud_afe_dmic2",
++ [MT8188_CLK_AUD_AFE_DMIC3] = "aud_afe_dmic3",
++ [MT8188_CLK_AUD_AFE_DMIC4] = "aud_afe_dmic4",
+ [MT8188_CLK_AUD_ADC_HIRES] = "aud_adc_hires",
++ [MT8188_CLK_AUD_DMIC_HIRES1] = "aud_dmic_hires1",
++ [MT8188_CLK_AUD_DMIC_HIRES2] = "aud_dmic_hires2",
++ [MT8188_CLK_AUD_DMIC_HIRES3] = "aud_dmic_hires3",
++ [MT8188_CLK_AUD_DMIC_HIRES4] = "aud_dmic_hires4",
+ [MT8188_CLK_AUD_I2SIN] = "aud_i2sin",
+ [MT8188_CLK_AUD_TDM_IN] = "aud_tdm_in",
+ [MT8188_CLK_AUD_I2S_OUT] = "aud_i2s_out",
+diff --git a/sound/soc/mediatek/mt8188/mt8188-afe-clk.h b/sound/soc/mediatek/mt8188/mt8188-afe-clk.h
+index ec53c171c170a8..c6c78d684f3ee1 100644
+--- a/sound/soc/mediatek/mt8188/mt8188-afe-clk.h
++++ b/sound/soc/mediatek/mt8188/mt8188-afe-clk.h
+@@ -54,7 +54,15 @@ enum {
+ MT8188_CLK_AUD_ADC,
+ MT8188_CLK_AUD_DAC_HIRES,
+ MT8188_CLK_AUD_A1SYS_HP,
++ MT8188_CLK_AUD_AFE_DMIC1,
++ MT8188_CLK_AUD_AFE_DMIC2,
++ MT8188_CLK_AUD_AFE_DMIC3,
++ MT8188_CLK_AUD_AFE_DMIC4,
+ MT8188_CLK_AUD_ADC_HIRES,
++ MT8188_CLK_AUD_DMIC_HIRES1,
++ MT8188_CLK_AUD_DMIC_HIRES2,
++ MT8188_CLK_AUD_DMIC_HIRES3,
++ MT8188_CLK_AUD_DMIC_HIRES4,
+ MT8188_CLK_AUD_I2SIN,
+ MT8188_CLK_AUD_TDM_IN,
+ MT8188_CLK_AUD_I2S_OUT,
+diff --git a/sound/soc/mediatek/mt8188/mt8188-afe-pcm.c b/sound/soc/mediatek/mt8188/mt8188-afe-pcm.c
+index 73e5c63aeec878..d36520c6272dd8 100644
+--- a/sound/soc/mediatek/mt8188/mt8188-afe-pcm.c
++++ b/sound/soc/mediatek/mt8188/mt8188-afe-pcm.c
+@@ -2855,10 +2855,6 @@ static bool mt8188_is_volatile_reg(struct device *dev, unsigned int reg)
+ case AFE_DMIC3_SRC_DEBUG_MON0:
+ case AFE_DMIC3_UL_SRC_MON0:
+ case AFE_DMIC3_UL_SRC_MON1:
+- case DMIC_GAIN1_CUR:
+- case DMIC_GAIN2_CUR:
+- case DMIC_GAIN3_CUR:
+- case DMIC_GAIN4_CUR:
+ case ETDM_IN1_MONITOR:
+ case ETDM_IN2_MONITOR:
+ case ETDM_OUT1_MONITOR:
+diff --git a/sound/soc/qcom/sm8250.c b/sound/soc/qcom/sm8250.c
+index 45e0c33fc3f376..9039107972e2b3 100644
+--- a/sound/soc/qcom/sm8250.c
++++ b/sound/soc/qcom/sm8250.c
+@@ -7,6 +7,7 @@
+ #include <sound/soc.h>
+ #include <sound/soc-dapm.h>
+ #include <sound/pcm.h>
++#include <sound/pcm_params.h>
+ #include <linux/soundwire/sdw.h>
+ #include <sound/jack.h>
+ #include <linux/input-event-codes.h>
+@@ -39,9 +40,11 @@ static int sm8250_be_hw_params_fixup(struct snd_soc_pcm_runtime *rtd,
+ SNDRV_PCM_HW_PARAM_RATE);
+ struct snd_interval *channels = hw_param_interval(params,
+ SNDRV_PCM_HW_PARAM_CHANNELS);
++ struct snd_mask *fmt = hw_param_mask(params, SNDRV_PCM_HW_PARAM_FORMAT);
+
+ rate->min = rate->max = 48000;
+ channels->min = channels->max = 2;
++ snd_mask_set_format(fmt, SNDRV_PCM_FORMAT_S16_LE);
+
+ return 0;
+ }
+diff --git a/sound/soc/sdw_utils/soc_sdw_bridge_cs35l56.c b/sound/soc/sdw_utils/soc_sdw_bridge_cs35l56.c
+index 246e5c2e0af55f..c7e55f4433514c 100644
+--- a/sound/soc/sdw_utils/soc_sdw_bridge_cs35l56.c
++++ b/sound/soc/sdw_utils/soc_sdw_bridge_cs35l56.c
+@@ -60,6 +60,10 @@ static int asoc_sdw_bridge_cs35l56_asp_init(struct snd_soc_pcm_runtime *rtd)
+
+ /* 4 x 16-bit sample slots and FSYNC=48000, BCLK=3.072 MHz */
+ for_each_rtd_codec_dais(rtd, i, codec_dai) {
++ ret = asoc_sdw_cs35l56_volume_limit(card, codec_dai->component->name_prefix);
++ if (ret)
++ return ret;
++
+ ret = snd_soc_dai_set_tdm_slot(codec_dai, tx_mask, rx_mask, 4, 16);
+ if (ret < 0)
+ return ret;
+diff --git a/sound/soc/sdw_utils/soc_sdw_cs42l43.c b/sound/soc/sdw_utils/soc_sdw_cs42l43.c
+index 668c9d28a1c12d..b415d45d520d0c 100644
+--- a/sound/soc/sdw_utils/soc_sdw_cs42l43.c
++++ b/sound/soc/sdw_utils/soc_sdw_cs42l43.c
+@@ -20,6 +20,8 @@
+ #include <sound/soc-dapm.h>
+ #include <sound/soc_sdw_utils.h>
+
++#define CS42L43_SPK_VOLUME_0DB 128 /* 0dB Max */
++
+ static const struct snd_soc_dapm_route cs42l43_hs_map[] = {
+ { "Headphone", NULL, "cs42l43 AMP3_OUT" },
+ { "Headphone", NULL, "cs42l43 AMP4_OUT" },
+@@ -117,6 +119,14 @@ int asoc_sdw_cs42l43_spk_rtd_init(struct snd_soc_pcm_runtime *rtd, struct snd_so
+ return -ENOMEM;
+ }
+
++ ret = snd_soc_limit_volume(card, "cs42l43 Speaker Digital Volume",
++ CS42L43_SPK_VOLUME_0DB);
++ if (ret)
++ dev_err(card->dev, "cs42l43 speaker volume limit failed: %d\n", ret);
++ else
++ dev_info(card->dev, "Setting CS42L43 Speaker volume limit to %d\n",
++ CS42L43_SPK_VOLUME_0DB);
++
+ ret = snd_soc_dapm_add_routes(&card->dapm, cs42l43_spk_map,
+ ARRAY_SIZE(cs42l43_spk_map));
+ if (ret)
+diff --git a/sound/soc/sdw_utils/soc_sdw_cs_amp.c b/sound/soc/sdw_utils/soc_sdw_cs_amp.c
+index 4b6181cf29716f..35b550bcd4ded5 100644
+--- a/sound/soc/sdw_utils/soc_sdw_cs_amp.c
++++ b/sound/soc/sdw_utils/soc_sdw_cs_amp.c
+@@ -16,6 +16,25 @@
+
+ #define CODEC_NAME_SIZE 8
+ #define CS_AMP_CHANNELS_PER_AMP 4
++#define CS35L56_SPK_VOLUME_0DB 400 /* 0dB Max */
++
++int asoc_sdw_cs35l56_volume_limit(struct snd_soc_card *card, const char *name_prefix)
++{
++ char *volume_ctl_name;
++ int ret;
++
++ volume_ctl_name = kasprintf(GFP_KERNEL, "%s Speaker Volume", name_prefix);
++ if (!volume_ctl_name)
++ return -ENOMEM;
++
++ ret = snd_soc_limit_volume(card, volume_ctl_name, CS35L56_SPK_VOLUME_0DB);
++ if (ret)
++ dev_err(card->dev, "%s limit set failed: %d\n", volume_ctl_name, ret);
++
++ kfree(volume_ctl_name);
++ return ret;
++}
++EXPORT_SYMBOL_NS(asoc_sdw_cs35l56_volume_limit, "SND_SOC_SDW_UTILS");
+
+ int asoc_sdw_cs_spk_rtd_init(struct snd_soc_pcm_runtime *rtd, struct snd_soc_dai *dai)
+ {
+@@ -40,6 +59,11 @@ int asoc_sdw_cs_spk_rtd_init(struct snd_soc_pcm_runtime *rtd, struct snd_soc_dai
+
+ snprintf(widget_name, sizeof(widget_name), "%s SPK",
+ codec_dai->component->name_prefix);
++
++ ret = asoc_sdw_cs35l56_volume_limit(card, codec_dai->component->name_prefix);
++ if (ret)
++ return ret;
++
+ ret = snd_soc_dapm_add_routes(&card->dapm, &route, 1);
+ if (ret)
+ return ret;
+diff --git a/sound/soc/soc-dai.c b/sound/soc/soc-dai.c
+index ca0308f6d41c17..dc7283ee4dfb06 100644
+--- a/sound/soc/soc-dai.c
++++ b/sound/soc/soc-dai.c
+@@ -275,10 +275,11 @@ int snd_soc_dai_set_tdm_slot(struct snd_soc_dai *dai,
+
+ if (dai->driver->ops &&
+ dai->driver->ops->xlate_tdm_slot_mask)
+- dai->driver->ops->xlate_tdm_slot_mask(slots,
+- &tx_mask, &rx_mask);
++ ret = dai->driver->ops->xlate_tdm_slot_mask(slots, &tx_mask, &rx_mask);
+ else
+- snd_soc_xlate_tdm_slot_mask(slots, &tx_mask, &rx_mask);
++ ret = snd_soc_xlate_tdm_slot_mask(slots, &tx_mask, &rx_mask);
++ if (ret)
++ goto err;
+
+ for_each_pcm_streams(stream)
+ snd_soc_dai_tdm_mask_set(dai, stream, *tdm_mask[stream]);
+@@ -287,6 +288,7 @@ int snd_soc_dai_set_tdm_slot(struct snd_soc_dai *dai,
+ dai->driver->ops->set_tdm_slot)
+ ret = dai->driver->ops->set_tdm_slot(dai, tx_mask, rx_mask,
+ slots, slot_width);
++err:
+ return soc_dai_ret(dai, ret);
+ }
+ EXPORT_SYMBOL_GPL(snd_soc_dai_set_tdm_slot);
+diff --git a/sound/soc/soc-ops.c b/sound/soc/soc-ops.c
+index b0e4e4168f38d5..fb11003d56cf65 100644
+--- a/sound/soc/soc-ops.c
++++ b/sound/soc/soc-ops.c
+@@ -639,6 +639,33 @@ int snd_soc_get_volsw_range(struct snd_kcontrol *kcontrol,
+ }
+ EXPORT_SYMBOL_GPL(snd_soc_get_volsw_range);
+
++static int snd_soc_clip_to_platform_max(struct snd_kcontrol *kctl)
++{
++ struct soc_mixer_control *mc = (struct soc_mixer_control *)kctl->private_value;
++ struct snd_ctl_elem_value uctl;
++ int ret;
++
++ if (!mc->platform_max)
++ return 0;
++
++ ret = kctl->get(kctl, &uctl);
++ if (ret < 0)
++ return ret;
++
++ if (uctl.value.integer.value[0] > mc->platform_max)
++ uctl.value.integer.value[0] = mc->platform_max;
++
++ if (snd_soc_volsw_is_stereo(mc) &&
++ uctl.value.integer.value[1] > mc->platform_max)
++ uctl.value.integer.value[1] = mc->platform_max;
++
++ ret = kctl->put(kctl, &uctl);
++ if (ret < 0)
++ return ret;
++
++ return 0;
++}
++
+ /**
+ * snd_soc_limit_volume - Set new limit to an existing volume control.
+ *
+@@ -663,7 +690,7 @@ int snd_soc_limit_volume(struct snd_soc_card *card,
+ struct soc_mixer_control *mc = (struct soc_mixer_control *)kctl->private_value;
+ if (max <= mc->max - mc->min) {
+ mc->platform_max = max;
+- ret = 0;
++ ret = snd_soc_clip_to_platform_max(kctl);
+ }
+ }
+ return ret;
+diff --git a/sound/soc/sof/intel/hda-bus.c b/sound/soc/sof/intel/hda-bus.c
+index b1be03011d7e15..6492e1cefbfb60 100644
+--- a/sound/soc/sof/intel/hda-bus.c
++++ b/sound/soc/sof/intel/hda-bus.c
+@@ -76,7 +76,7 @@ void sof_hda_bus_init(struct snd_sof_dev *sdev, struct device *dev)
+
+ snd_hdac_ext_bus_init(bus, dev, &bus_core_ops, sof_hda_ext_ops);
+
+- if (chip && chip->hw_ip_version == SOF_INTEL_ACE_2_0)
++ if (chip && chip->hw_ip_version >= SOF_INTEL_ACE_2_0)
+ bus->use_pio_for_commands = true;
+ #else
+ snd_hdac_ext_bus_init(bus, dev, NULL, NULL);
+diff --git a/sound/soc/sof/intel/hda.c b/sound/soc/sof/intel/hda.c
+index a1ccd95da8bb73..9ea194dfbd2ec4 100644
+--- a/sound/soc/sof/intel/hda.c
++++ b/sound/soc/sof/intel/hda.c
+@@ -1011,7 +1011,21 @@ static void hda_generic_machine_select(struct snd_sof_dev *sdev,
+ if (!*mach && codec_num <= 2) {
+ bool tplg_fixup = false;
+
+- hda_mach = snd_soc_acpi_intel_hda_machines;
++ /*
++ * make a local copy of the match array since we might
++ * be modifying it
++ */
++ hda_mach = devm_kmemdup_array(sdev->dev,
++ snd_soc_acpi_intel_hda_machines,
++ 2, /* we have one entry + sentinel in the array */
++ sizeof(snd_soc_acpi_intel_hda_machines[0]),
++ GFP_KERNEL);
++ if (!hda_mach) {
++ dev_err(bus->dev,
++ "%s: failed to duplicate the HDA match table\n",
++ __func__);
++ return;
++ }
+
+ dev_info(bus->dev, "using HDA machine driver %s now\n",
+ hda_mach->drv_name);
+diff --git a/sound/soc/sof/ipc4-control.c b/sound/soc/sof/ipc4-control.c
+index 576f407cd456af..976a4794d61000 100644
+--- a/sound/soc/sof/ipc4-control.c
++++ b/sound/soc/sof/ipc4-control.c
+@@ -531,6 +531,14 @@ static int sof_ipc4_bytes_ext_put(struct snd_sof_control *scontrol,
+ return -EINVAL;
+ }
+
++ /* Check header id */
++ if (header.numid != SOF_CTRL_CMD_BINARY) {
++ dev_err_ratelimited(scomp->dev,
++ "Incorrect numid for bytes put %d\n",
++ header.numid);
++ return -EINVAL;
++ }
++
+ /* Verify the ABI header first */
+ if (copy_from_user(&abi_hdr, tlvd->tlv, sizeof(abi_hdr)))
+ return -EFAULT;
+@@ -613,7 +621,8 @@ static int _sof_ipc4_bytes_ext_get(struct snd_sof_control *scontrol,
+ if (data_size > size)
+ return -ENOSPC;
+
+- header.numid = scontrol->comp_id;
++ /* Set header id and length */
++ header.numid = SOF_CTRL_CMD_BINARY;
+ header.length = data_size;
+
+ if (copy_to_user(tlvd, &header, sizeof(struct snd_ctl_tlv)))
+diff --git a/sound/soc/sof/ipc4-pcm.c b/sound/soc/sof/ipc4-pcm.c
+index 18fff2df76f97e..3e6e63bfee3eeb 100644
+--- a/sound/soc/sof/ipc4-pcm.c
++++ b/sound/soc/sof/ipc4-pcm.c
+@@ -797,7 +797,8 @@ static int sof_ipc4_pcm_setup(struct snd_sof_dev *sdev, struct snd_sof_pcm *spcm
+
+ spcm->stream[stream].private = stream_priv;
+
+- if (!support_info)
++ /* Delay reporting is only supported on playback */
++ if (!support_info || stream == SNDRV_PCM_STREAM_CAPTURE)
+ continue;
+
+ time_info = kzalloc(sizeof(*time_info), GFP_KERNEL);
+diff --git a/sound/soc/sof/topology.c b/sound/soc/sof/topology.c
+index dc9cb832406783..14aa8ecc4bc426 100644
+--- a/sound/soc/sof/topology.c
++++ b/sound/soc/sof/topology.c
+@@ -1063,7 +1063,7 @@ static int sof_connect_dai_widget(struct snd_soc_component *scomp,
+ struct snd_sof_dai *dai)
+ {
+ struct snd_soc_card *card = scomp->card;
+- struct snd_soc_pcm_runtime *rtd;
++ struct snd_soc_pcm_runtime *rtd, *full, *partial;
+ struct snd_soc_dai *cpu_dai;
+ int stream;
+ int i;
+@@ -1080,12 +1080,22 @@ static int sof_connect_dai_widget(struct snd_soc_component *scomp,
+ else
+ goto end;
+
++ full = NULL;
++ partial = NULL;
+ list_for_each_entry(rtd, &card->rtd_list, list) {
+ /* does stream match DAI link ? */
+- if (!rtd->dai_link->stream_name ||
+- !strstr(rtd->dai_link->stream_name, w->sname))
+- continue;
++ if (rtd->dai_link->stream_name) {
++ if (!strcmp(rtd->dai_link->stream_name, w->sname)) {
++ full = rtd;
++ break;
++ } else if (strstr(rtd->dai_link->stream_name, w->sname)) {
++ partial = rtd;
++ }
++ }
++ }
+
++ rtd = full ? full : partial;
++ if (rtd) {
+ for_each_rtd_cpu_dais(rtd, i, cpu_dai) {
+ /*
+ * Please create DAI widget in the right order
+diff --git a/sound/soc/sunxi/sun4i-codec.c b/sound/soc/sunxi/sun4i-codec.c
+index 886b3fa537d262..3701f56c72756a 100644
+--- a/sound/soc/sunxi/sun4i-codec.c
++++ b/sound/soc/sunxi/sun4i-codec.c
+@@ -22,6 +22,7 @@
+ #include <linux/gpio/consumer.h>
+
+ #include <sound/core.h>
++#include <sound/jack.h>
+ #include <sound/pcm.h>
+ #include <sound/pcm_params.h>
+ #include <sound/soc.h>
+@@ -331,6 +332,7 @@ struct sun4i_codec {
+ struct clk *clk_module;
+ struct reset_control *rst;
+ struct gpio_desc *gpio_pa;
++ struct gpio_desc *gpio_hp;
+
+ /* ADC_FIFOC register is at different offset on different SoCs */
+ struct regmap_field *reg_adc_fifoc;
+@@ -1583,6 +1585,49 @@ static struct snd_soc_dai_driver dummy_cpu_dai = {
+ .ops = &dummy_dai_ops,
+ };
+
++static struct snd_soc_jack sun4i_headphone_jack;
++
++static struct snd_soc_jack_pin sun4i_headphone_jack_pins[] = {
++ { .pin = "Headphone", .mask = SND_JACK_HEADPHONE },
++};
++
++static struct snd_soc_jack_gpio sun4i_headphone_jack_gpio = {
++ .name = "hp-det",
++ .report = SND_JACK_HEADPHONE,
++ .debounce_time = 150,
++};
++
++static int sun4i_codec_machine_init(struct snd_soc_pcm_runtime *rtd)
++{
++ struct snd_soc_card *card = rtd->card;
++ struct sun4i_codec *scodec = snd_soc_card_get_drvdata(card);
++ int ret;
++
++ if (scodec->gpio_hp) {
++ ret = snd_soc_card_jack_new_pins(card, "Headphone Jack",
++ SND_JACK_HEADPHONE,
++ &sun4i_headphone_jack,
++ sun4i_headphone_jack_pins,
++ ARRAY_SIZE(sun4i_headphone_jack_pins));
++ if (ret) {
++ dev_err(rtd->dev,
++ "Headphone jack creation failed: %d\n", ret);
++ return ret;
++ }
++
++ sun4i_headphone_jack_gpio.desc = scodec->gpio_hp;
++ ret = snd_soc_jack_add_gpios(&sun4i_headphone_jack, 1,
++ &sun4i_headphone_jack_gpio);
++
++ if (ret) {
++ dev_err(rtd->dev, "Headphone GPIO not added: %d\n", ret);
++ return ret;
++ }
++ }
++
++ return 0;
++}
++
+ static struct snd_soc_dai_link *sun4i_codec_create_link(struct device *dev,
+ int *num_links)
+ {
+@@ -1608,6 +1653,7 @@ static struct snd_soc_dai_link *sun4i_codec_create_link(struct device *dev,
+ link->codecs->name = dev_name(dev);
+ link->platforms->name = dev_name(dev);
+ link->dai_fmt = SND_SOC_DAIFMT_I2S;
++ link->init = sun4i_codec_machine_init;
+
+ *num_links = 1;
+
+@@ -1916,10 +1962,11 @@ static const struct snd_soc_component_driver sun50i_h616_codec_codec = {
+ };
+
+ static const struct snd_kcontrol_new sun50i_h616_card_controls[] = {
+- SOC_DAPM_PIN_SWITCH("LINEOUT"),
++ SOC_DAPM_PIN_SWITCH("Speaker"),
+ };
+
+ static const struct snd_soc_dapm_widget sun50i_h616_codec_card_dapm_widgets[] = {
++ SND_SOC_DAPM_HP("Headphone", NULL),
+ SND_SOC_DAPM_LINE("Line Out", NULL),
+ SND_SOC_DAPM_SPK("Speaker", sun4i_codec_spk_event),
+ };
+@@ -2301,6 +2348,13 @@ static int sun4i_codec_probe(struct platform_device *pdev)
+ return ret;
+ }
+
++ scodec->gpio_hp = devm_gpiod_get_optional(&pdev->dev, "hp-det", GPIOD_IN);
++ if (IS_ERR(scodec->gpio_hp)) {
++ ret = PTR_ERR(scodec->gpio_hp);
++ dev_err_probe(&pdev->dev, ret, "Failed to get hp-det gpio\n");
++ return ret;
++ }
++
+ /* reg_field setup */
+ scodec->reg_adc_fifoc = devm_regmap_field_alloc(&pdev->dev,
+ scodec->regmap,
+diff --git a/sound/usb/midi.c b/sound/usb/midi.c
+index 826ac870f24690..a792ada18863ac 100644
+--- a/sound/usb/midi.c
++++ b/sound/usb/midi.c
+@@ -1885,10 +1885,18 @@ static void snd_usbmidi_init_substream(struct snd_usb_midi *umidi,
+ }
+
+ port_info = find_port_info(umidi, number);
+- name_format = port_info ? port_info->name :
+- (jack_name != default_jack_name ? "%s %s" : "%s %s %d");
+- snprintf(substream->name, sizeof(substream->name),
+- name_format, umidi->card->shortname, jack_name, number + 1);
++ if (port_info || jack_name == default_jack_name ||
++ strncmp(umidi->card->shortname, jack_name, strlen(umidi->card->shortname)) != 0) {
++ name_format = port_info ? port_info->name :
++ (jack_name != default_jack_name ? "%s %s" : "%s %s %d");
++ snprintf(substream->name, sizeof(substream->name),
++ name_format, umidi->card->shortname, jack_name, number + 1);
++ } else {
++ /* The manufacturer included the iProduct name in the jack
++ * name, do not use both
++ */
++ strscpy(substream->name, jack_name);
++ }
+
+ *rsubstream = substream;
+ }
+diff --git a/tools/arch/x86/include/asm/asm.h b/tools/arch/x86/include/asm/asm.h
+index 3ad3da9a7d9745..dbe39b44256ba3 100644
+--- a/tools/arch/x86/include/asm/asm.h
++++ b/tools/arch/x86/include/asm/asm.h
+@@ -2,7 +2,7 @@
+ #ifndef _ASM_X86_ASM_H
+ #define _ASM_X86_ASM_H
+
+-#ifdef __ASSEMBLY__
++#ifdef __ASSEMBLER__
+ # define __ASM_FORM(x, ...) x,## __VA_ARGS__
+ # define __ASM_FORM_RAW(x, ...) x,## __VA_ARGS__
+ # define __ASM_FORM_COMMA(x, ...) x,## __VA_ARGS__,
+@@ -123,7 +123,7 @@
+ #ifdef __KERNEL__
+
+ /* Exception table entry */
+-#ifdef __ASSEMBLY__
++#ifdef __ASSEMBLER__
+ # define _ASM_EXTABLE_HANDLE(from, to, handler) \
+ .pushsection "__ex_table","a" ; \
+ .balign 4 ; \
+@@ -154,7 +154,7 @@
+ # define _ASM_NOKPROBE(entry)
+ # endif
+
+-#else /* ! __ASSEMBLY__ */
++#else /* ! __ASSEMBLER__ */
+ # define _EXPAND_EXTABLE_HANDLE(x) #x
+ # define _ASM_EXTABLE_HANDLE(from, to, handler) \
+ " .pushsection \"__ex_table\",\"a\"\n" \
+@@ -186,7 +186,7 @@
+ */
+ register unsigned long current_stack_pointer asm(_ASM_SP);
+ #define ASM_CALL_CONSTRAINT "+r" (current_stack_pointer)
+-#endif /* __ASSEMBLY__ */
++#endif /* __ASSEMBLER__ */
+
+ #endif /* __KERNEL__ */
+
+diff --git a/tools/arch/x86/include/asm/nops.h b/tools/arch/x86/include/asm/nops.h
+index 1c1b7550fa5508..cd94221d83358b 100644
+--- a/tools/arch/x86/include/asm/nops.h
++++ b/tools/arch/x86/include/asm/nops.h
+@@ -82,7 +82,7 @@
+ #define ASM_NOP7 _ASM_BYTES(BYTES_NOP7)
+ #define ASM_NOP8 _ASM_BYTES(BYTES_NOP8)
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+ extern const unsigned char * const x86_nops[];
+ #endif
+
+diff --git a/tools/arch/x86/include/asm/orc_types.h b/tools/arch/x86/include/asm/orc_types.h
+index 46d7e06763c9f5..e0125afa53fb9d 100644
+--- a/tools/arch/x86/include/asm/orc_types.h
++++ b/tools/arch/x86/include/asm/orc_types.h
+@@ -45,7 +45,7 @@
+ #define ORC_TYPE_REGS 3
+ #define ORC_TYPE_REGS_PARTIAL 4
+
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+ #include <asm/byteorder.h>
+
+ /*
+@@ -73,6 +73,6 @@ struct orc_entry {
+ #endif
+ } __packed;
+
+-#endif /* __ASSEMBLY__ */
++#endif /* __ASSEMBLER__ */
+
+ #endif /* _ORC_TYPES_H */
+diff --git a/tools/arch/x86/include/asm/pvclock-abi.h b/tools/arch/x86/include/asm/pvclock-abi.h
+index 1436226efe3ef8..b9fece5fc96d6f 100644
+--- a/tools/arch/x86/include/asm/pvclock-abi.h
++++ b/tools/arch/x86/include/asm/pvclock-abi.h
+@@ -1,7 +1,7 @@
+ /* SPDX-License-Identifier: GPL-2.0 */
+ #ifndef _ASM_X86_PVCLOCK_ABI_H
+ #define _ASM_X86_PVCLOCK_ABI_H
+-#ifndef __ASSEMBLY__
++#ifndef __ASSEMBLER__
+
+ /*
+ * These structs MUST NOT be changed.
+@@ -44,5 +44,5 @@ struct pvclock_wall_clock {
+ #define PVCLOCK_GUEST_STOPPED (1 << 1)
+ /* PVCLOCK_COUNTS_FROM_ZERO broke ABI and can't be used anymore. */
+ #define PVCLOCK_COUNTS_FROM_ZERO (1 << 2)
+-#endif /* __ASSEMBLY__ */
++#endif /* __ASSEMBLER__ */
+ #endif /* _ASM_X86_PVCLOCK_ABI_H */
+diff --git a/tools/bpf/bpftool/btf.c b/tools/bpf/bpftool/btf.c
+index 2636655ac18085..6b14cbfa58aa2b 100644
+--- a/tools/bpf/bpftool/btf.c
++++ b/tools/bpf/bpftool/btf.c
+@@ -253,7 +253,7 @@ static int dump_btf_type(const struct btf *btf, __u32 id,
+ if (btf_kflag(t))
+ printf("\n\t'%s' val=%d", name, v->val);
+ else
+- printf("\n\t'%s' val=%u", name, v->val);
++ printf("\n\t'%s' val=%u", name, (__u32)v->val);
+ }
+ }
+ if (json_output)
+@@ -1022,7 +1022,7 @@ static int do_dump(int argc, char **argv)
+ for (i = 0; i < root_type_cnt; i++) {
+ if (root_type_ids[i] == root_id) {
+ err = -EINVAL;
+- p_err("duplicate root_id %d supplied", root_id);
++ p_err("duplicate root_id %u supplied", root_id);
+ goto done;
+ }
+ }
+@@ -1132,7 +1132,7 @@ build_btf_type_table(struct hashmap *tab, enum bpf_obj_type type,
+ break;
+ default:
+ err = -1;
+- p_err("unexpected object type: %d", type);
++ p_err("unexpected object type: %u", type);
+ goto err_free;
+ }
+ if (err) {
+@@ -1155,7 +1155,7 @@ build_btf_type_table(struct hashmap *tab, enum bpf_obj_type type,
+ break;
+ default:
+ err = -1;
+- p_err("unexpected object type: %d", type);
++ p_err("unexpected object type: %u", type);
+ goto err_free;
+ }
+ if (fd < 0) {
+@@ -1188,7 +1188,7 @@ build_btf_type_table(struct hashmap *tab, enum bpf_obj_type type,
+ break;
+ default:
+ err = -1;
+- p_err("unexpected object type: %d", type);
++ p_err("unexpected object type: %u", type);
+ goto err_free;
+ }
+ if (!btf_id)
+@@ -1254,12 +1254,12 @@ show_btf_plain(struct bpf_btf_info *info, int fd,
+
+ n = 0;
+ hashmap__for_each_key_entry(btf_prog_table, entry, info->id) {
+- printf("%s%lu", n++ == 0 ? " prog_ids " : ",", entry->value);
++ printf("%s%lu", n++ == 0 ? " prog_ids " : ",", (unsigned long)entry->value);
+ }
+
+ n = 0;
+ hashmap__for_each_key_entry(btf_map_table, entry, info->id) {
+- printf("%s%lu", n++ == 0 ? " map_ids " : ",", entry->value);
++ printf("%s%lu", n++ == 0 ? " map_ids " : ",", (unsigned long)entry->value);
+ }
+
+ emit_obj_refs_plain(refs_table, info->id, "\n\tpids ");
+diff --git a/tools/bpf/bpftool/btf_dumper.c b/tools/bpf/bpftool/btf_dumper.c
+index 527fe867a8fbde..4e896d8a2416e9 100644
+--- a/tools/bpf/bpftool/btf_dumper.c
++++ b/tools/bpf/bpftool/btf_dumper.c
+@@ -653,7 +653,7 @@ static int __btf_dumper_type_only(const struct btf *btf, __u32 type_id,
+ case BTF_KIND_ARRAY:
+ array = (struct btf_array *)(t + 1);
+ BTF_PRINT_TYPE(array->type);
+- BTF_PRINT_ARG("[%d]", array->nelems);
++ BTF_PRINT_ARG("[%u]", array->nelems);
+ break;
+ case BTF_KIND_PTR:
+ BTF_PRINT_TYPE(t->type);
+diff --git a/tools/bpf/bpftool/cgroup.c b/tools/bpf/bpftool/cgroup.c
+index 9af426d4329931..93b139bfb9880a 100644
+--- a/tools/bpf/bpftool/cgroup.c
++++ b/tools/bpf/bpftool/cgroup.c
+@@ -191,7 +191,7 @@ static int show_bpf_prog(int id, enum bpf_attach_type attach_type,
+ if (attach_btf_name)
+ printf(" %-15s", attach_btf_name);
+ else if (info.attach_btf_id)
+- printf(" attach_btf_obj_id=%d attach_btf_id=%d",
++ printf(" attach_btf_obj_id=%u attach_btf_id=%u",
+ info.attach_btf_obj_id, info.attach_btf_id);
+ printf("\n");
+ }
+diff --git a/tools/bpf/bpftool/common.c b/tools/bpf/bpftool/common.c
+index 9b75639434b815..ecfa790adc13f2 100644
+--- a/tools/bpf/bpftool/common.c
++++ b/tools/bpf/bpftool/common.c
+@@ -461,10 +461,11 @@ int get_fd_type(int fd)
+ p_err("can't read link type: %s", strerror(errno));
+ return -1;
+ }
+- if (n == sizeof(path)) {
++ if (n == sizeof(buf)) {
+ p_err("can't read link type: path too long!");
+ return -1;
+ }
++ buf[n] = '\0';
+
+ if (strstr(buf, "bpf-map"))
+ return BPF_OBJ_MAP;
+@@ -713,7 +714,7 @@ ifindex_to_arch(__u32 ifindex, __u64 ns_dev, __u64 ns_ino, const char **opt)
+ int vendor_id;
+
+ if (!ifindex_to_name_ns(ifindex, ns_dev, ns_ino, devname)) {
+- p_err("Can't get net device name for ifindex %d: %s", ifindex,
++ p_err("Can't get net device name for ifindex %u: %s", ifindex,
+ strerror(errno));
+ return NULL;
+ }
+@@ -738,7 +739,7 @@ ifindex_to_arch(__u32 ifindex, __u64 ns_dev, __u64 ns_ino, const char **opt)
+ /* No NFP support in LLVM, we have no valid triple to return. */
+ default:
+ p_err("Can't get arch name for device vendor id 0x%04x",
+- vendor_id);
++ (unsigned int)vendor_id);
+ return NULL;
+ }
+ }
+diff --git a/tools/bpf/bpftool/jit_disasm.c b/tools/bpf/bpftool/jit_disasm.c
+index c032d2c6ab6d55..8895b4e1f69031 100644
+--- a/tools/bpf/bpftool/jit_disasm.c
++++ b/tools/bpf/bpftool/jit_disasm.c
+@@ -343,7 +343,8 @@ int disasm_print_insn(unsigned char *image, ssize_t len, int opcodes,
+ {
+ const struct bpf_line_info *linfo = NULL;
+ unsigned int nr_skip = 0;
+- int count, i, pc = 0;
++ int count, i;
++ unsigned int pc = 0;
+ disasm_ctx_t ctx;
+
+ if (!len)
+diff --git a/tools/bpf/bpftool/map_perf_ring.c b/tools/bpf/bpftool/map_perf_ring.c
+index 21d7d447e1f3b2..552b4ca40c27c0 100644
+--- a/tools/bpf/bpftool/map_perf_ring.c
++++ b/tools/bpf/bpftool/map_perf_ring.c
+@@ -91,15 +91,15 @@ print_bpf_output(void *private_data, int cpu, struct perf_event_header *event)
+ jsonw_end_object(json_wtr);
+ } else {
+ if (e->header.type == PERF_RECORD_SAMPLE) {
+- printf("== @%lld.%09lld CPU: %d index: %d =====\n",
++ printf("== @%llu.%09llu CPU: %d index: %d =====\n",
+ e->time / 1000000000ULL, e->time % 1000000000ULL,
+ cpu, idx);
+ fprint_hex(stdout, e->data, e->size, " ");
+ printf("\n");
+ } else if (e->header.type == PERF_RECORD_LOST) {
+- printf("lost %lld events\n", lost->lost);
++ printf("lost %llu events\n", lost->lost);
+ } else {
+- printf("unknown event type=%d size=%d\n",
++ printf("unknown event type=%u size=%u\n",
+ e->header.type, e->header.size);
+ }
+ }
+diff --git a/tools/bpf/bpftool/net.c b/tools/bpf/bpftool/net.c
+index d2242d9f84411f..64f958f437b01e 100644
+--- a/tools/bpf/bpftool/net.c
++++ b/tools/bpf/bpftool/net.c
+@@ -476,7 +476,7 @@ static void __show_dev_tc_bpf(const struct ip_devname_ifindex *dev,
+ for (i = 0; i < optq.count; i++) {
+ NET_START_OBJECT;
+ NET_DUMP_STR("devname", "%s", dev->devname);
+- NET_DUMP_UINT("ifindex", "(%u)", dev->ifindex);
++ NET_DUMP_UINT("ifindex", "(%u)", (unsigned int)dev->ifindex);
+ NET_DUMP_STR("kind", " %s", attach_loc_strings[loc]);
+ ret = __show_dev_tc_bpf_name(prog_ids[i], prog_name,
+ sizeof(prog_name));
+@@ -831,7 +831,7 @@ static void show_link_netfilter(void)
+ if (err) {
+ if (errno == ENOENT)
+ break;
+- p_err("can't get next link: %s (id %d)", strerror(errno), id);
++ p_err("can't get next link: %s (id %u)", strerror(errno), id);
+ break;
+ }
+
+diff --git a/tools/bpf/bpftool/netlink_dumper.c b/tools/bpf/bpftool/netlink_dumper.c
+index 5f65140b003b2e..0a3c7e96c797a7 100644
+--- a/tools/bpf/bpftool/netlink_dumper.c
++++ b/tools/bpf/bpftool/netlink_dumper.c
+@@ -45,7 +45,7 @@ static int do_xdp_dump_one(struct nlattr *attr, unsigned int ifindex,
+ NET_START_OBJECT;
+ if (name)
+ NET_DUMP_STR("devname", "%s", name);
+- NET_DUMP_UINT("ifindex", "(%d)", ifindex);
++ NET_DUMP_UINT("ifindex", "(%u)", ifindex);
+
+ if (mode == XDP_ATTACHED_MULTI) {
+ if (json_output) {
+@@ -74,7 +74,7 @@ int do_xdp_dump(struct ifinfomsg *ifinfo, struct nlattr **tb)
+ if (!tb[IFLA_XDP])
+ return 0;
+
+- return do_xdp_dump_one(tb[IFLA_XDP], ifinfo->ifi_index,
++ return do_xdp_dump_one(tb[IFLA_XDP], (unsigned int)ifinfo->ifi_index,
+ libbpf_nla_getattr_str(tb[IFLA_IFNAME]));
+ }
+
+@@ -168,7 +168,7 @@ int do_filter_dump(struct tcmsg *info, struct nlattr **tb, const char *kind,
+ NET_START_OBJECT;
+ if (devname[0] != '\0')
+ NET_DUMP_STR("devname", "%s", devname);
+- NET_DUMP_UINT("ifindex", "(%u)", ifindex);
++ NET_DUMP_UINT("ifindex", "(%u)", (unsigned int)ifindex);
+ NET_DUMP_STR("kind", " %s", kind);
+ ret = do_bpf_filter_dump(tb[TCA_OPTIONS]);
+ NET_END_OBJECT_FINAL;
+diff --git a/tools/bpf/bpftool/prog.c b/tools/bpf/bpftool/prog.c
+index 52ffb74ae4e89a..f010295350be54 100644
+--- a/tools/bpf/bpftool/prog.c
++++ b/tools/bpf/bpftool/prog.c
+@@ -521,10 +521,10 @@ static void print_prog_header_plain(struct bpf_prog_info *info, int fd)
+ print_dev_plain(info->ifindex, info->netns_dev, info->netns_ino);
+ printf("%s", info->gpl_compatible ? " gpl" : "");
+ if (info->run_time_ns)
+- printf(" run_time_ns %lld run_cnt %lld",
++ printf(" run_time_ns %llu run_cnt %llu",
+ info->run_time_ns, info->run_cnt);
+ if (info->recursion_misses)
+- printf(" recursion_misses %lld", info->recursion_misses);
++ printf(" recursion_misses %llu", info->recursion_misses);
+ printf("\n");
+ }
+
+@@ -569,7 +569,7 @@ static void print_prog_plain(struct bpf_prog_info *info, int fd, bool orphaned)
+ }
+
+ if (info->btf_id)
+- printf("\n\tbtf_id %d", info->btf_id);
++ printf("\n\tbtf_id %u", info->btf_id);
+
+ emit_obj_refs_plain(refs_table, info->id, "\n\tpids ");
+
+@@ -1164,7 +1164,7 @@ static int get_run_data(const char *fname, void **data_ptr, unsigned int *size)
+ }
+ if (nb_read > buf_size - block_size) {
+ if (buf_size == UINT32_MAX) {
+- p_err("data_in/ctx_in is too long (max: %d)",
++ p_err("data_in/ctx_in is too long (max: %u)",
+ UINT32_MAX);
+ goto err_free;
+ }
+@@ -2252,7 +2252,7 @@ static char *profile_target_name(int tgt_fd)
+
+ t = btf__type_by_id(btf, func_info.type_id);
+ if (!t) {
+- p_err("btf %d doesn't have type %d",
++ p_err("btf %u doesn't have type %u",
+ info.btf_id, func_info.type_id);
+ goto out;
+ }
+@@ -2330,7 +2330,7 @@ static int profile_open_perf_events(struct profiler_bpf *obj)
+ continue;
+ for (cpu = 0; cpu < obj->rodata->num_cpu; cpu++) {
+ if (profile_open_perf_event(m, cpu, map_fd)) {
+- p_err("failed to create event %s on cpu %d",
++ p_err("failed to create event %s on cpu %u",
+ metrics[m].name, cpu);
+ return -1;
+ }
+diff --git a/tools/bpf/bpftool/tracelog.c b/tools/bpf/bpftool/tracelog.c
+index bf1f0221279724..31d806e3bdaaa9 100644
+--- a/tools/bpf/bpftool/tracelog.c
++++ b/tools/bpf/bpftool/tracelog.c
+@@ -78,7 +78,7 @@ static bool get_tracefs_pipe(char *mnt)
+ return false;
+
+ /* Allow room for NULL terminating byte and pipe file name */
+- snprintf(format, sizeof(format), "%%*s %%%zds %%99s %%*s %%*d %%*d\\n",
++ snprintf(format, sizeof(format), "%%*s %%%zus %%99s %%*s %%*d %%*d\\n",
+ PATH_MAX - strlen(pipe_name) - 1);
+ while (fscanf(fp, format, mnt, type) == 2)
+ if (strcmp(type, fstype) == 0) {
+diff --git a/tools/bpf/bpftool/xlated_dumper.c b/tools/bpf/bpftool/xlated_dumper.c
+index d0094345fb2bc8..5e7cb8b36fef2e 100644
+--- a/tools/bpf/bpftool/xlated_dumper.c
++++ b/tools/bpf/bpftool/xlated_dumper.c
+@@ -199,13 +199,13 @@ static const char *print_imm(void *private_data,
+
+ if (insn->src_reg == BPF_PSEUDO_MAP_FD)
+ snprintf(dd->scratch_buff, sizeof(dd->scratch_buff),
+- "map[id:%u]", insn->imm);
++ "map[id:%d]", insn->imm);
+ else if (insn->src_reg == BPF_PSEUDO_MAP_VALUE)
+ snprintf(dd->scratch_buff, sizeof(dd->scratch_buff),
+- "map[id:%u][0]+%u", insn->imm, (insn + 1)->imm);
++ "map[id:%d][0]+%d", insn->imm, (insn + 1)->imm);
+ else if (insn->src_reg == BPF_PSEUDO_MAP_IDX_VALUE)
+ snprintf(dd->scratch_buff, sizeof(dd->scratch_buff),
+- "map[idx:%u]+%u", insn->imm, (insn + 1)->imm);
++ "map[idx:%d]+%d", insn->imm, (insn + 1)->imm);
+ else if (insn->src_reg == BPF_PSEUDO_FUNC)
+ snprintf(dd->scratch_buff, sizeof(dd->scratch_buff),
+ "subprog[%+d]", insn->imm);
+diff --git a/tools/build/Makefile.build b/tools/build/Makefile.build
+index e710ed67a1b49d..3584ff30860786 100644
+--- a/tools/build/Makefile.build
++++ b/tools/build/Makefile.build
+@@ -129,6 +129,10 @@ objprefix := $(subst ./,,$(OUTPUT)$(dir)/)
+ obj-y := $(addprefix $(objprefix),$(obj-y))
+ subdir-obj-y := $(addprefix $(objprefix),$(subdir-obj-y))
+
++# Separate out test log files from real build objects.
++test-y := $(filter %_log, $(obj-y))
++obj-y := $(filter-out %_log, $(obj-y))
++
+ # Final '$(obj)-in.o' object
+ in-target := $(objprefix)$(obj)-in.o
+
+@@ -139,7 +143,7 @@ $(subdir-y):
+
+ $(sort $(subdir-obj-y)): $(subdir-y) ;
+
+-$(in-target): $(obj-y) FORCE
++$(in-target): $(obj-y) $(test-y) FORCE
+ $(call rule_mkdir)
+ $(call if_changed,$(host)ld_multi)
+
+diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
+index 2acf9b33637174..89242184a19376 100644
+--- a/tools/include/uapi/linux/bpf.h
++++ b/tools/include/uapi/linux/bpf.h
+@@ -1207,6 +1207,7 @@ enum bpf_perf_event_type {
+ #define BPF_F_BEFORE (1U << 3)
+ #define BPF_F_AFTER (1U << 4)
+ #define BPF_F_ID (1U << 5)
++#define BPF_F_PREORDER (1U << 6)
+ #define BPF_F_LINK BPF_F_LINK /* 1 << 13 */
+
+ /* If BPF_F_STRICT_ALIGNMENT is used in BPF_PROG_LOAD command, the
+diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
+index 194809da51725e..1cc87dbd015d80 100644
+--- a/tools/lib/bpf/libbpf.c
++++ b/tools/lib/bpf/libbpf.c
+@@ -2106,7 +2106,7 @@ static int set_kcfg_value_str(struct extern_desc *ext, char *ext_val,
+ }
+
+ len = strlen(value);
+- if (value[len - 1] != '"') {
++ if (len < 2 || value[len - 1] != '"') {
+ pr_warn("extern (kcfg) '%s': invalid string config '%s'\n",
+ ext->name, value);
+ return -EINVAL;
+diff --git a/tools/net/ynl/lib/ynl.c b/tools/net/ynl/lib/ynl.c
+index ce32cb35007d6f..c4da34048ef858 100644
+--- a/tools/net/ynl/lib/ynl.c
++++ b/tools/net/ynl/lib/ynl.c
+@@ -364,7 +364,7 @@ int ynl_attr_validate(struct ynl_parse_arg *yarg, const struct nlattr *attr)
+ "Invalid attribute (binary %s)", policy->name);
+ return -1;
+ case YNL_PT_NUL_STR:
+- if ((!policy->len || len <= policy->len) && !data[len - 1])
++ if (len && (!policy->len || len <= policy->len) && !data[len - 1])
+ break;
+ yerr(yarg->ys, YNL_ERROR_ATTR_INVALID,
+ "Invalid attribute (string %s)", policy->name);
+diff --git a/tools/net/ynl/pyynl/ynl_gen_c.py b/tools/net/ynl/pyynl/ynl_gen_c.py
+index c2eabc90dce8c4..aa08b8b1463d0c 100755
+--- a/tools/net/ynl/pyynl/ynl_gen_c.py
++++ b/tools/net/ynl/pyynl/ynl_gen_c.py
+@@ -2549,6 +2549,9 @@ def render_uapi(family, cw):
+
+ defines = []
+ for const in family['definitions']:
++ if const.get('header'):
++ continue
++
+ if const['type'] != 'const':
+ cw.writes_defines(defines)
+ defines = []
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index a7dcf2d00ab65a..70f5b3fa587c5e 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -3209,7 +3209,7 @@ static int handle_insn_ops(struct instruction *insn,
+ if (update_cfi_state(insn, next_insn, &state->cfi, op))
+ return 1;
+
+- if (!insn->alt_group)
++ if (!opts.uaccess || !insn->alt_group)
+ continue;
+
+ if (op->dest.type == OP_DEST_PUSHF) {
+@@ -3676,6 +3676,9 @@ static int validate_branch(struct objtool_file *file, struct symbol *func,
+ return 0;
+
+ case INSN_STAC:
++ if (!opts.uaccess)
++ break;
++
+ if (state.uaccess) {
+ WARN_INSN(insn, "recursive UACCESS enable");
+ return 1;
+@@ -3685,6 +3688,9 @@ static int validate_branch(struct objtool_file *file, struct symbol *func,
+ break;
+
+ case INSN_CLAC:
++ if (!opts.uaccess)
++ break;
++
+ if (!state.uaccess && func) {
+ WARN_INSN(insn, "redundant UACCESS disable");
+ return 1;
+@@ -4160,7 +4166,8 @@ static int validate_symbol(struct objtool_file *file, struct section *sec,
+ if (!insn || insn->ignore || insn->visited)
+ return 0;
+
+- state->uaccess = sym->uaccess_safe;
++ if (opts.uaccess)
++ state->uaccess = sym->uaccess_safe;
+
+ ret = validate_branch(file, insn_func(insn), insn, *state);
+ if (ret)
+@@ -4626,8 +4633,10 @@ int check(struct objtool_file *file)
+ init_cfi_state(&force_undefined_cfi);
+ force_undefined_cfi.force_undefined = true;
+
+- if (!cfi_hash_alloc(1UL << (file->elf->symbol_bits - 3)))
++ if (!cfi_hash_alloc(1UL << (file->elf->symbol_bits - 3))) {
++ ret = -1;
+ goto out;
++ }
+
+ cfi_hash_add(&init_cfi);
+ cfi_hash_add(&func_cfi);
+@@ -4644,7 +4653,7 @@ int check(struct objtool_file *file)
+ if (opts.retpoline) {
+ ret = validate_retpoline(file);
+ if (ret < 0)
+- return ret;
++ goto out;
+ warnings += ret;
+ }
+
+@@ -4680,7 +4689,7 @@ int check(struct objtool_file *file)
+ */
+ ret = validate_unrets(file);
+ if (ret < 0)
+- return ret;
++ goto out;
+ warnings += ret;
+ }
+
+@@ -4743,7 +4752,7 @@ int check(struct objtool_file *file)
+ if (opts.prefix) {
+ ret = add_prefix_symbols(file);
+ if (ret < 0)
+- return ret;
++ goto out;
+ warnings += ret;
+ }
+
+diff --git a/tools/power/x86/turbostat/turbostat.8 b/tools/power/x86/turbostat/turbostat.8
+index e4f9f93c123a2b..abee03ddc7f09c 100644
+--- a/tools/power/x86/turbostat/turbostat.8
++++ b/tools/power/x86/turbostat/turbostat.8
+@@ -201,6 +201,7 @@ The system configuration dump (if --quiet is not used) is followed by statistics
+ \fBUncMHz\fP per-package uncore MHz, instantaneous sample.
+ .PP
+ \fBUMHz1.0\fP per-package uncore MHz for domain=1 and fabric_cluster=0, instantaneous sample. System summary is the average of all packages.
++For the "--show" and "--hide" options, use "UncMHz" to operate on all UMHz*.* as a group.
+ .SH TOO MUCH INFORMATION EXAMPLE
+ By default, turbostat dumps all possible information -- a system configuration header, followed by columns for all counters.
+ This is ideal for remote debugging, use the "--out" option to save everything to a text file, and get that file to the expert helping you debug.
+diff --git a/tools/power/x86/turbostat/turbostat.c b/tools/power/x86/turbostat/turbostat.c
+index 4155d9bfcfc6da..505b07b5be19b2 100644
+--- a/tools/power/x86/turbostat/turbostat.c
++++ b/tools/power/x86/turbostat/turbostat.c
+@@ -6713,7 +6713,18 @@ static void probe_intel_uncore_frequency_cluster(void)
+ sprintf(path, "%s/current_freq_khz", path_base);
+ sprintf(name_buf, "UMHz%d.%d", domain_id, cluster_id);
+
+- add_counter(0, path, name_buf, 0, SCOPE_PACKAGE, COUNTER_K2M, FORMAT_AVERAGE, 0, package_id);
++ /*
++ * Once add_couter() is called, that counter is always read
++ * and reported -- So it is effectively (enabled & present).
++ * Only call add_counter() here if legacy BIC_UNCORE_MHZ (UncMHz)
++ * is (enabled). Since we are in this routine, we
++ * know we will not probe and set (present) the legacy counter.
++ *
++ * This allows "--show/--hide UncMHz" to be effective for
++ * the clustered MHz counters, as a group.
++ */
++ if BIC_IS_ENABLED(BIC_UNCORE_MHZ)
++ add_counter(0, path, name_buf, 0, SCOPE_PACKAGE, COUNTER_K2M, FORMAT_AVERAGE, 0, package_id);
+
+ if (quiet)
+ continue;
+diff --git a/tools/testing/kunit/kunit_parser.py b/tools/testing/kunit/kunit_parser.py
+index 29fc27e8949bd4..da53a709773a23 100644
+--- a/tools/testing/kunit/kunit_parser.py
++++ b/tools/testing/kunit/kunit_parser.py
+@@ -759,7 +759,7 @@ def parse_test(lines: LineStream, expected_num: int, log: List[str], is_subtest:
+ # If parsing the main/top-level test, parse KTAP version line and
+ # test plan
+ test.name = "main"
+- ktap_line = parse_ktap_header(lines, test, printer)
++ parse_ktap_header(lines, test, printer)
+ test.log.extend(parse_diagnostic(lines))
+ parse_test_plan(lines, test)
+ parent_test = True
+@@ -768,13 +768,12 @@ def parse_test(lines: LineStream, expected_num: int, log: List[str], is_subtest:
+ # the KTAP version line and/or subtest header line
+ ktap_line = parse_ktap_header(lines, test, printer)
+ subtest_line = parse_test_header(lines, test)
++ test.log.extend(parse_diagnostic(lines))
++ parse_test_plan(lines, test)
+ parent_test = (ktap_line or subtest_line)
+ if parent_test:
+- # If KTAP version line and/or subtest header is found, attempt
+- # to parse test plan and print test header
+- test.log.extend(parse_diagnostic(lines))
+- parse_test_plan(lines, test)
+ print_test_header(test, printer)
++
+ expected_count = test.expected_count
+ subtests = []
+ test_num = 1
+diff --git a/tools/testing/kunit/qemu_configs/x86_64.py b/tools/testing/kunit/qemu_configs/x86_64.py
+index dc794907686304..4a6bf4e048f5b0 100644
+--- a/tools/testing/kunit/qemu_configs/x86_64.py
++++ b/tools/testing/kunit/qemu_configs/x86_64.py
+@@ -7,4 +7,6 @@ CONFIG_SERIAL_8250_CONSOLE=y''',
+ qemu_arch='x86_64',
+ kernel_path='arch/x86/boot/bzImage',
+ kernel_command_line='console=ttyS0',
+- extra_qemu_params=[])
++ # qboot is faster than SeaBIOS and doesn't mess up
++ # the terminal.
++ extra_qemu_params=['-bios', 'qboot.rom'])
+diff --git a/tools/testing/selftests/bpf/prog_tests/sockmap_ktls.c b/tools/testing/selftests/bpf/prog_tests/sockmap_ktls.c
+index 2d0796314862ac..0a99fd404f6dc0 100644
+--- a/tools/testing/selftests/bpf/prog_tests/sockmap_ktls.c
++++ b/tools/testing/selftests/bpf/prog_tests/sockmap_ktls.c
+@@ -68,7 +68,6 @@ static void test_sockmap_ktls_disconnect_after_delete(int family, int map)
+ goto close_cli;
+
+ err = disconnect(cli);
+- ASSERT_OK(err, "disconnect");
+
+ close_cli:
+ close(cli);
+diff --git a/tools/testing/selftests/iommu/iommufd.c b/tools/testing/selftests/iommu/iommufd.c
+index a1b2b657999dcc..618c03bb6509bf 100644
+--- a/tools/testing/selftests/iommu/iommufd.c
++++ b/tools/testing/selftests/iommu/iommufd.c
+@@ -439,6 +439,10 @@ TEST_F(iommufd_ioas, alloc_hwpt_nested)
+ &test_hwpt_id);
+ test_err_hwpt_alloc(EINVAL, self->device_id, self->device_id, 0,
+ &test_hwpt_id);
++ test_err_hwpt_alloc(EOPNOTSUPP, self->device_id, self->ioas_id,
++ IOMMU_HWPT_ALLOC_NEST_PARENT |
++ IOMMU_HWPT_FAULT_ID_VALID,
++ &test_hwpt_id);
+
+ test_cmd_hwpt_alloc(self->device_id, self->ioas_id,
+ IOMMU_HWPT_ALLOC_NEST_PARENT,
+diff --git a/tools/testing/selftests/net/forwarding/bridge_mdb.sh b/tools/testing/selftests/net/forwarding/bridge_mdb.sh
+index d9d587454d2079..8c1597ebc2d38b 100755
+--- a/tools/testing/selftests/net/forwarding/bridge_mdb.sh
++++ b/tools/testing/selftests/net/forwarding/bridge_mdb.sh
+@@ -149,7 +149,7 @@ cfg_test_host_common()
+ check_err $? "Failed to add $name host entry"
+
+ bridge mdb replace dev br0 port br0 grp $grp $state vid 10 &> /dev/null
+- check_fail $? "Managed to replace $name host entry"
++ check_err $? "Failed to replace $name host entry"
+
+ bridge mdb del dev br0 port br0 grp $grp $state vid 10
+ bridge mdb get dev br0 grp $grp vid 10 &> /dev/null
+diff --git a/tools/testing/selftests/net/gro.sh b/tools/testing/selftests/net/gro.sh
+index 02c21ff4ca81fd..aabd6e5480b8e5 100755
+--- a/tools/testing/selftests/net/gro.sh
++++ b/tools/testing/selftests/net/gro.sh
+@@ -100,5 +100,6 @@ trap cleanup EXIT
+ if [[ "${test}" == "all" ]]; then
+ run_all_tests
+ else
+- run_test "${proto}" "${test}"
++ exit_code=$(run_test "${proto}" "${test}")
++ exit $exit_code
+ fi;
+diff --git a/tools/testing/selftests/net/nl_netdev.py b/tools/testing/selftests/net/nl_netdev.py
+index 93e8cb671c3d9d..beaee5e4e2aaba 100755
+--- a/tools/testing/selftests/net/nl_netdev.py
++++ b/tools/testing/selftests/net/nl_netdev.py
+@@ -35,6 +35,21 @@ def napi_list_check(nf) -> None:
+ comment=f"queue count after reset queue {q} mode {i}")
+
+
++def nsim_rxq_reset_down(nf) -> None:
++ """
++ Test that the queue API supports resetting a queue
++ while the interface is down. We should convert this
++ test to testing real HW once more devices support
++ queue API.
++ """
++ with NetdevSimDev(queue_count=4) as nsimdev:
++ nsim = nsimdev.nsims[0]
++
++ ip(f"link set dev {nsim.ifname} down")
++ for i in [0, 2, 3]:
++ nsim.dfs_write("queue_reset", f"1 {i}")
++
++
+ def page_pool_check(nf) -> None:
+ with NetdevSimDev() as nsimdev:
+ nsim = nsimdev.nsims[0]
+@@ -106,7 +121,8 @@ def page_pool_check(nf) -> None:
+
+ def main() -> None:
+ nf = NetdevFamily()
+- ksft_run([empty_check, lo_check, page_pool_check, napi_list_check],
++ ksft_run([empty_check, lo_check, page_pool_check, napi_list_check,
++ nsim_rxq_reset_down],
+ args=(nf, ))
+ ksft_exit()
+
+diff --git a/tools/testing/selftests/pci_endpoint/pci_endpoint_test.c b/tools/testing/selftests/pci_endpoint/pci_endpoint_test.c
+index c267b822c1081d..576c590b277b10 100644
+--- a/tools/testing/selftests/pci_endpoint/pci_endpoint_test.c
++++ b/tools/testing/selftests/pci_endpoint/pci_endpoint_test.c
+@@ -65,6 +65,8 @@ TEST_F(pci_ep_bar, BAR_TEST)
+ int ret;
+
+ pci_ep_ioctl(PCITEST_BAR, variant->barno);
++ if (ret == -ENODATA)
++ SKIP(return, "BAR is disabled");
+ EXPECT_FALSE(ret) TH_LOG("Test failed for BAR%d", variant->barno);
+ }
+
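The turbostat.8 hunk earlier in this patch documents that the legacy "UncMHz" column name doubles as a group selector for the clustered per-domain "UMHz*.*" columns under "--show" and "--hide". As a rough command-line sketch of that behavior (the invocations are assumptions based on the man-page text and long-standing turbostat options, not something shipped in this patch):

	# Print only the uncore frequency columns; on systems with clustered
	# uncore domains, "UncMHz" now selects the whole UMHz*.* group.
	turbostat --quiet --show UncMHz sleep 5

	# The inverse: report everything except the uncore frequency group.
	turbostat --quiet --hide UncMHz sleep 5

Both forms run "sleep 5" as the measured workload, which keeps the sample window explicit.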
* [gentoo-commits] proj/linux-patches:6.14 commit in: /
@ 2025-05-29 17:00 Mike Pagano
0 siblings, 0 replies; 21+ messages in thread
From: Mike Pagano @ 2025-05-29 17:00 UTC (permalink / raw
To: gentoo-commits
commit: e2d5f13b42aa3a1923c914988973ddd5902a1032
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu May 29 17:00:05 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu May 29 17:00:05 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=e2d5f13b
Fix RANDOM_KMALLOC_CACHE(S) typo
Bug: https://bugs.gentoo.org/956708
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
4567_distro-Gentoo-Kconfig.patch | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index c308dca8..21531dc7 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -207,7 +207,7 @@
+ select SECURITY_LANDLOCK
+ select SCHED_CORE if SCHED_SMT
+ select BUG_ON_DATA_CORRUPTION
-+ select RANDOM_KMALLOC_CACHE if SLUB_TINY=n
++ select RANDOM_KMALLOC_CACHES if SLUB_TINY=n
+ select SCHED_STACK_END_CHECK
+ select SECCOMP if HAVE_ARCH_SECCOMP
+ select SECCOMP_FILTER if HAVE_ARCH_SECCOMP_FILTER
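The one-character fix above matters because "RANDOM_KMALLOC_CACHE" names no existing Kconfig symbol, so the select never enabled anything (bug #956708). A quick way to confirm that the corrected select lands in a generated configuration (the kernel source path and surrounding commands are assumptions for illustration, not part of this commit):

	# With the patched 4567_distro-Gentoo-Kconfig.patch applied, regenerate
	# the configuration and check for the properly named symbol.  It is
	# intentionally skipped when SLUB_TINY=y, per the "if SLUB_TINY=n" guard.
	cd /usr/src/linux
	make olddefconfig
	grep -E '^CONFIG_(RANDOM_KMALLOC_CACHES|SLUB_TINY)=' .config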
* [gentoo-commits] proj/linux-patches:6.14 commit in: /
@ 2025-05-29 17:28 Mike Pagano
0 siblings, 0 replies; 21+ messages in thread
From: Mike Pagano @ 2025-05-29 17:28 UTC (permalink / raw
To: gentoo-commits
commit: e262b657baff6fa7532b7b315fc710cad79c8ff4
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu May 29 17:28:32 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu May 29 17:28:32 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=e262b657
Revert "drm/amd/display: more liberal vmin/vmax update for freesync"
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +++
2700_amd-revert-vmin-vmax-for-freesync.patch | 48 ++++++++++++++++++++++++++++
2 files changed, 52 insertions(+)
diff --git a/0000_README b/0000_README
index a644f9dd..c605f246 100644
--- a/0000_README
+++ b/0000_README
@@ -102,6 +102,10 @@ Patch: 2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
From: https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
Desc: Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
+Patch: 2700_amd-revert-vmin-vmax-for-freesync.patch
+From: https://github.com/archlinux/linux/commit/30dd9945fd79d33a049da4e52984c9bc07450de2.patch
+Desc: Revert "drm/amd/display: more liberal vmin/vmax update for freesync"
+
Patch: 2901_permit-menuconfig-sorting.patch
From: https://lore.kernel.org/
Desc: menuconfig: Allow sorting the entries alphabetically
diff --git a/2700_amd-revert-vmin-vmax-for-freesync.patch b/2700_amd-revert-vmin-vmax-for-freesync.patch
new file mode 100644
index 00000000..b0b80885
--- /dev/null
+++ b/2700_amd-revert-vmin-vmax-for-freesync.patch
@@ -0,0 +1,48 @@
+From 30dd9945fd79d33a049da4e52984c9bc07450de2 Mon Sep 17 00:00:00 2001
+From: Aurabindo Pillai <aurabindo.pillai@amd.com>
+Date: Wed, 21 May 2025 16:10:57 -0400
+Subject: [PATCH] Revert "drm/amd/display: more liberal vmin/vmax update for
+ freesync"
+
+This reverts commit 219898d29c438d8ec34a5560fac4ea8f6b8d4f20 since it
+causes regressions on certain configs. Revert until the issue can be
+isolated and debugged.
+
+Closes: https://gitlab.freedesktop.org/drm/amd/-/issues/4238
+Signed-off-by: Aurabindo Pillai <aurabindo.pillai@amd.com>
+Cherry-picked-for: https://gitlab.archlinux.org/archlinux/packaging/packages/linux/-/issues/139
+---
+ .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 16 +++++-----------
+ 1 file changed, 5 insertions(+), 11 deletions(-)
+
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 2dbd71fbae28a5..e4f0517f0f2b23 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -668,21 +668,15 @@ static void dm_crtc_high_irq(void *interrupt_params)
+ spin_lock_irqsave(&adev_to_drm(adev)->event_lock, flags);
+
+ if (acrtc->dm_irq_params.stream &&
+- acrtc->dm_irq_params.vrr_params.supported) {
+- bool replay_en = acrtc->dm_irq_params.stream->link->replay_settings.replay_feature_enabled;
+- bool psr_en = acrtc->dm_irq_params.stream->link->psr_settings.psr_feature_enabled;
+- bool fs_active_var_en = acrtc->dm_irq_params.freesync_config.state == VRR_STATE_ACTIVE_VARIABLE;
+-
++ acrtc->dm_irq_params.vrr_params.supported &&
++ acrtc->dm_irq_params.freesync_config.state ==
++ VRR_STATE_ACTIVE_VARIABLE) {
+ mod_freesync_handle_v_update(adev->dm.freesync_module,
+ acrtc->dm_irq_params.stream,
+ &acrtc->dm_irq_params.vrr_params);
+
+- /* update vmin_vmax only if freesync is enabled, or only if PSR and REPLAY are disabled */
+- if (fs_active_var_en || (!fs_active_var_en && !replay_en && !psr_en)) {
+- dc_stream_adjust_vmin_vmax(adev->dm.dc,
+- acrtc->dm_irq_params.stream,
+- &acrtc->dm_irq_params.vrr_params.adjust);
+- }
++ dc_stream_adjust_vmin_vmax(adev->dm.dc, acrtc->dm_irq_params.stream,
++ &acrtc->dm_irq_params.vrr_params.adjust);
+ }
+
+ /*
* [gentoo-commits] proj/linux-patches:6.14 commit in: /
@ 2025-06-04 18:09 Mike Pagano
0 siblings, 0 replies; 21+ messages in thread
From: Mike Pagano @ 2025-06-04 18:09 UTC (permalink / raw
To: gentoo-commits
commit: 623e6fa270e21574ffee9f3009967415f2f21704
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Jun 4 18:09:18 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Jun 4 18:09:18 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=623e6fa2
Linux patch 6.14.10
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1009_linux-6.14.10.patch | 3214 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 3218 insertions(+)
diff --git a/0000_README b/0000_README
index c605f246..7ad72962 100644
--- a/0000_README
+++ b/0000_README
@@ -78,6 +78,10 @@ Patch: 1008_linux-6.14.9.patch
From: https://www.kernel.org
Desc: Linux 6.14.9
+Patch: 1009_linux-6.14.10.patch
+From: https://www.kernel.org
+Desc: Linux 6.14.10
+
Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
Desc: Enable link security restrictions by default.
diff --git a/1009_linux-6.14.10.patch b/1009_linux-6.14.10.patch
new file mode 100644
index 00000000..c6b11274
--- /dev/null
+++ b/1009_linux-6.14.10.patch
@@ -0,0 +1,3214 @@
+diff --git a/Makefile b/Makefile
+index 884279eb952d7a..0f3aad52b3de89 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 14
+-SUBLEVEL = 9
++SUBLEVEL = 10
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+diff --git a/arch/arm64/boot/dts/intel/socfpga_agilex5.dtsi b/arch/arm64/boot/dts/intel/socfpga_agilex5.dtsi
+index 51c6e19e40b843..7d9394a0430272 100644
+--- a/arch/arm64/boot/dts/intel/socfpga_agilex5.dtsi
++++ b/arch/arm64/boot/dts/intel/socfpga_agilex5.dtsi
+@@ -222,9 +222,9 @@ i3c1: i3c@10da1000 {
+ status = "disabled";
+ };
+
+- gpio0: gpio@ffc03200 {
++ gpio0: gpio@10c03200 {
+ compatible = "snps,dw-apb-gpio";
+- reg = <0xffc03200 0x100>;
++ reg = <0x10c03200 0x100>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+ resets = <&rst GPIO0_RESET>;
+diff --git a/arch/arm64/boot/dts/qcom/ipq9574.dtsi b/arch/arm64/boot/dts/qcom/ipq9574.dtsi
+index 94229002897257..3c02351fbb156a 100644
+--- a/arch/arm64/boot/dts/qcom/ipq9574.dtsi
++++ b/arch/arm64/boot/dts/qcom/ipq9574.dtsi
+@@ -378,6 +378,8 @@ cryptobam: dma-controller@704000 {
+ interrupts = <GIC_SPI 207 IRQ_TYPE_LEVEL_HIGH>;
+ #dma-cells = <1>;
+ qcom,ee = <1>;
++ qcom,num-ees = <4>;
++ num-channels = <16>;
+ qcom,controlled-remotely;
+ };
+
+diff --git a/arch/arm64/boot/dts/qcom/sa8775p.dtsi b/arch/arm64/boot/dts/qcom/sa8775p.dtsi
+index 3394ae2d130034..2329460b210381 100644
+--- a/arch/arm64/boot/dts/qcom/sa8775p.dtsi
++++ b/arch/arm64/boot/dts/qcom/sa8775p.dtsi
+@@ -2413,6 +2413,8 @@ cryptobam: dma-controller@1dc4000 {
+ interrupts = <GIC_SPI 272 IRQ_TYPE_LEVEL_HIGH>;
+ #dma-cells = <1>;
+ qcom,ee = <0>;
++ qcom,num-ees = <4>;
++ num-channels = <20>;
+ qcom,controlled-remotely;
+ iommus = <&apps_smmu 0x480 0x00>,
+ <&apps_smmu 0x481 0x00>;
+@@ -4903,15 +4905,7 @@ compute-cb@1 {
+ compatible = "qcom,fastrpc-compute-cb";
+ reg = <1>;
+ iommus = <&apps_smmu 0x2141 0x04a0>,
+- <&apps_smmu 0x2161 0x04a0>,
+- <&apps_smmu 0x2181 0x0400>,
+- <&apps_smmu 0x21c1 0x04a0>,
+- <&apps_smmu 0x21e1 0x04a0>,
+- <&apps_smmu 0x2541 0x04a0>,
+- <&apps_smmu 0x2561 0x04a0>,
+- <&apps_smmu 0x2581 0x0400>,
+- <&apps_smmu 0x25c1 0x04a0>,
+- <&apps_smmu 0x25e1 0x04a0>;
++ <&apps_smmu 0x2181 0x0400>;
+ dma-coherent;
+ };
+
+@@ -4919,15 +4913,7 @@ compute-cb@2 {
+ compatible = "qcom,fastrpc-compute-cb";
+ reg = <2>;
+ iommus = <&apps_smmu 0x2142 0x04a0>,
+- <&apps_smmu 0x2162 0x04a0>,
+- <&apps_smmu 0x2182 0x0400>,
+- <&apps_smmu 0x21c2 0x04a0>,
+- <&apps_smmu 0x21e2 0x04a0>,
+- <&apps_smmu 0x2542 0x04a0>,
+- <&apps_smmu 0x2562 0x04a0>,
+- <&apps_smmu 0x2582 0x0400>,
+- <&apps_smmu 0x25c2 0x04a0>,
+- <&apps_smmu 0x25e2 0x04a0>;
++ <&apps_smmu 0x2182 0x0400>;
+ dma-coherent;
+ };
+
+@@ -4935,15 +4921,7 @@ compute-cb@3 {
+ compatible = "qcom,fastrpc-compute-cb";
+ reg = <3>;
+ iommus = <&apps_smmu 0x2143 0x04a0>,
+- <&apps_smmu 0x2163 0x04a0>,
+- <&apps_smmu 0x2183 0x0400>,
+- <&apps_smmu 0x21c3 0x04a0>,
+- <&apps_smmu 0x21e3 0x04a0>,
+- <&apps_smmu 0x2543 0x04a0>,
+- <&apps_smmu 0x2563 0x04a0>,
+- <&apps_smmu 0x2583 0x0400>,
+- <&apps_smmu 0x25c3 0x04a0>,
+- <&apps_smmu 0x25e3 0x04a0>;
++ <&apps_smmu 0x2183 0x0400>;
+ dma-coherent;
+ };
+
+@@ -4951,15 +4929,7 @@ compute-cb@4 {
+ compatible = "qcom,fastrpc-compute-cb";
+ reg = <4>;
+ iommus = <&apps_smmu 0x2144 0x04a0>,
+- <&apps_smmu 0x2164 0x04a0>,
+- <&apps_smmu 0x2184 0x0400>,
+- <&apps_smmu 0x21c4 0x04a0>,
+- <&apps_smmu 0x21e4 0x04a0>,
+- <&apps_smmu 0x2544 0x04a0>,
+- <&apps_smmu 0x2564 0x04a0>,
+- <&apps_smmu 0x2584 0x0400>,
+- <&apps_smmu 0x25c4 0x04a0>,
+- <&apps_smmu 0x25e4 0x04a0>;
++ <&apps_smmu 0x2184 0x0400>;
+ dma-coherent;
+ };
+
+@@ -4967,15 +4937,7 @@ compute-cb@5 {
+ compatible = "qcom,fastrpc-compute-cb";
+ reg = <5>;
+ iommus = <&apps_smmu 0x2145 0x04a0>,
+- <&apps_smmu 0x2165 0x04a0>,
+- <&apps_smmu 0x2185 0x0400>,
+- <&apps_smmu 0x21c5 0x04a0>,
+- <&apps_smmu 0x21e5 0x04a0>,
+- <&apps_smmu 0x2545 0x04a0>,
+- <&apps_smmu 0x2565 0x04a0>,
+- <&apps_smmu 0x2585 0x0400>,
+- <&apps_smmu 0x25c5 0x04a0>,
+- <&apps_smmu 0x25e5 0x04a0>;
++ <&apps_smmu 0x2185 0x0400>;
+ dma-coherent;
+ };
+
+@@ -4983,15 +4945,7 @@ compute-cb@6 {
+ compatible = "qcom,fastrpc-compute-cb";
+ reg = <6>;
+ iommus = <&apps_smmu 0x2146 0x04a0>,
+- <&apps_smmu 0x2166 0x04a0>,
+- <&apps_smmu 0x2186 0x0400>,
+- <&apps_smmu 0x21c6 0x04a0>,
+- <&apps_smmu 0x21e6 0x04a0>,
+- <&apps_smmu 0x2546 0x04a0>,
+- <&apps_smmu 0x2566 0x04a0>,
+- <&apps_smmu 0x2586 0x0400>,
+- <&apps_smmu 0x25c6 0x04a0>,
+- <&apps_smmu 0x25e6 0x04a0>;
++ <&apps_smmu 0x2186 0x0400>;
+ dma-coherent;
+ };
+
+@@ -4999,15 +4953,7 @@ compute-cb@7 {
+ compatible = "qcom,fastrpc-compute-cb";
+ reg = <7>;
+ iommus = <&apps_smmu 0x2147 0x04a0>,
+- <&apps_smmu 0x2167 0x04a0>,
+- <&apps_smmu 0x2187 0x0400>,
+- <&apps_smmu 0x21c7 0x04a0>,
+- <&apps_smmu 0x21e7 0x04a0>,
+- <&apps_smmu 0x2547 0x04a0>,
+- <&apps_smmu 0x2567 0x04a0>,
+- <&apps_smmu 0x2587 0x0400>,
+- <&apps_smmu 0x25c7 0x04a0>,
+- <&apps_smmu 0x25e7 0x04a0>;
++ <&apps_smmu 0x2187 0x0400>;
+ dma-coherent;
+ };
+
+@@ -5015,15 +4961,7 @@ compute-cb@8 {
+ compatible = "qcom,fastrpc-compute-cb";
+ reg = <8>;
+ iommus = <&apps_smmu 0x2148 0x04a0>,
+- <&apps_smmu 0x2168 0x04a0>,
+- <&apps_smmu 0x2188 0x0400>,
+- <&apps_smmu 0x21c8 0x04a0>,
+- <&apps_smmu 0x21e8 0x04a0>,
+- <&apps_smmu 0x2548 0x04a0>,
+- <&apps_smmu 0x2568 0x04a0>,
+- <&apps_smmu 0x2588 0x0400>,
+- <&apps_smmu 0x25c8 0x04a0>,
+- <&apps_smmu 0x25e8 0x04a0>;
++ <&apps_smmu 0x2188 0x0400>;
+ dma-coherent;
+ };
+
+@@ -5031,31 +4969,7 @@ compute-cb@9 {
+ compatible = "qcom,fastrpc-compute-cb";
+ reg = <9>;
+ iommus = <&apps_smmu 0x2149 0x04a0>,
+- <&apps_smmu 0x2169 0x04a0>,
+- <&apps_smmu 0x2189 0x0400>,
+- <&apps_smmu 0x21c9 0x04a0>,
+- <&apps_smmu 0x21e9 0x04a0>,
+- <&apps_smmu 0x2549 0x04a0>,
+- <&apps_smmu 0x2569 0x04a0>,
+- <&apps_smmu 0x2589 0x0400>,
+- <&apps_smmu 0x25c9 0x04a0>,
+- <&apps_smmu 0x25e9 0x04a0>;
+- dma-coherent;
+- };
+-
+- compute-cb@10 {
+- compatible = "qcom,fastrpc-compute-cb";
+- reg = <10>;
+- iommus = <&apps_smmu 0x214a 0x04a0>,
+- <&apps_smmu 0x216a 0x04a0>,
+- <&apps_smmu 0x218a 0x0400>,
+- <&apps_smmu 0x21ca 0x04a0>,
+- <&apps_smmu 0x21ea 0x04a0>,
+- <&apps_smmu 0x254a 0x04a0>,
+- <&apps_smmu 0x256a 0x04a0>,
+- <&apps_smmu 0x258a 0x0400>,
+- <&apps_smmu 0x25ca 0x04a0>,
+- <&apps_smmu 0x25ea 0x04a0>;
++ <&apps_smmu 0x2189 0x0400>;
+ dma-coherent;
+ };
+
+@@ -5063,15 +4977,7 @@ compute-cb@11 {
+ compatible = "qcom,fastrpc-compute-cb";
+ reg = <11>;
+ iommus = <&apps_smmu 0x214b 0x04a0>,
+- <&apps_smmu 0x216b 0x04a0>,
+- <&apps_smmu 0x218b 0x0400>,
+- <&apps_smmu 0x21cb 0x04a0>,
+- <&apps_smmu 0x21eb 0x04a0>,
+- <&apps_smmu 0x254b 0x04a0>,
+- <&apps_smmu 0x256b 0x04a0>,
+- <&apps_smmu 0x258b 0x0400>,
+- <&apps_smmu 0x25cb 0x04a0>,
+- <&apps_smmu 0x25eb 0x04a0>;
++ <&apps_smmu 0x218b 0x0400>;
+ dma-coherent;
+ };
+ };
+@@ -5131,15 +5037,7 @@ compute-cb@1 {
+ compatible = "qcom,fastrpc-compute-cb";
+ reg = <1>;
+ iommus = <&apps_smmu 0x2941 0x04a0>,
+- <&apps_smmu 0x2961 0x04a0>,
+- <&apps_smmu 0x2981 0x0400>,
+- <&apps_smmu 0x29c1 0x04a0>,
+- <&apps_smmu 0x29e1 0x04a0>,
+- <&apps_smmu 0x2d41 0x04a0>,
+- <&apps_smmu 0x2d61 0x04a0>,
+- <&apps_smmu 0x2d81 0x0400>,
+- <&apps_smmu 0x2dc1 0x04a0>,
+- <&apps_smmu 0x2de1 0x04a0>;
++ <&apps_smmu 0x2981 0x0400>;
+ dma-coherent;
+ };
+
+@@ -5147,15 +5045,7 @@ compute-cb@2 {
+ compatible = "qcom,fastrpc-compute-cb";
+ reg = <2>;
+ iommus = <&apps_smmu 0x2942 0x04a0>,
+- <&apps_smmu 0x2962 0x04a0>,
+- <&apps_smmu 0x2982 0x0400>,
+- <&apps_smmu 0x29c2 0x04a0>,
+- <&apps_smmu 0x29e2 0x04a0>,
+- <&apps_smmu 0x2d42 0x04a0>,
+- <&apps_smmu 0x2d62 0x04a0>,
+- <&apps_smmu 0x2d82 0x0400>,
+- <&apps_smmu 0x2dc2 0x04a0>,
+- <&apps_smmu 0x2de2 0x04a0>;
++ <&apps_smmu 0x2982 0x0400>;
+ dma-coherent;
+ };
+
+@@ -5163,15 +5053,7 @@ compute-cb@3 {
+ compatible = "qcom,fastrpc-compute-cb";
+ reg = <3>;
+ iommus = <&apps_smmu 0x2943 0x04a0>,
+- <&apps_smmu 0x2963 0x04a0>,
+- <&apps_smmu 0x2983 0x0400>,
+- <&apps_smmu 0x29c3 0x04a0>,
+- <&apps_smmu 0x29e3 0x04a0>,
+- <&apps_smmu 0x2d43 0x04a0>,
+- <&apps_smmu 0x2d63 0x04a0>,
+- <&apps_smmu 0x2d83 0x0400>,
+- <&apps_smmu 0x2dc3 0x04a0>,
+- <&apps_smmu 0x2de3 0x04a0>;
++ <&apps_smmu 0x2983 0x0400>;
+ dma-coherent;
+ };
+
+@@ -5179,15 +5061,7 @@ compute-cb@4 {
+ compatible = "qcom,fastrpc-compute-cb";
+ reg = <4>;
+ iommus = <&apps_smmu 0x2944 0x04a0>,
+- <&apps_smmu 0x2964 0x04a0>,
+- <&apps_smmu 0x2984 0x0400>,
+- <&apps_smmu 0x29c4 0x04a0>,
+- <&apps_smmu 0x29e4 0x04a0>,
+- <&apps_smmu 0x2d44 0x04a0>,
+- <&apps_smmu 0x2d64 0x04a0>,
+- <&apps_smmu 0x2d84 0x0400>,
+- <&apps_smmu 0x2dc4 0x04a0>,
+- <&apps_smmu 0x2de4 0x04a0>;
++ <&apps_smmu 0x2984 0x0400>;
+ dma-coherent;
+ };
+
+@@ -5195,15 +5069,7 @@ compute-cb@5 {
+ compatible = "qcom,fastrpc-compute-cb";
+ reg = <5>;
+ iommus = <&apps_smmu 0x2945 0x04a0>,
+- <&apps_smmu 0x2965 0x04a0>,
+- <&apps_smmu 0x2985 0x0400>,
+- <&apps_smmu 0x29c5 0x04a0>,
+- <&apps_smmu 0x29e5 0x04a0>,
+- <&apps_smmu 0x2d45 0x04a0>,
+- <&apps_smmu 0x2d65 0x04a0>,
+- <&apps_smmu 0x2d85 0x0400>,
+- <&apps_smmu 0x2dc5 0x04a0>,
+- <&apps_smmu 0x2de5 0x04a0>;
++ <&apps_smmu 0x2985 0x0400>;
+ dma-coherent;
+ };
+
+@@ -5211,15 +5077,7 @@ compute-cb@6 {
+ compatible = "qcom,fastrpc-compute-cb";
+ reg = <6>;
+ iommus = <&apps_smmu 0x2946 0x04a0>,
+- <&apps_smmu 0x2966 0x04a0>,
+- <&apps_smmu 0x2986 0x0400>,
+- <&apps_smmu 0x29c6 0x04a0>,
+- <&apps_smmu 0x29e6 0x04a0>,
+- <&apps_smmu 0x2d46 0x04a0>,
+- <&apps_smmu 0x2d66 0x04a0>,
+- <&apps_smmu 0x2d86 0x0400>,
+- <&apps_smmu 0x2dc6 0x04a0>,
+- <&apps_smmu 0x2de6 0x04a0>;
++ <&apps_smmu 0x2986 0x0400>;
+ dma-coherent;
+ };
+
+@@ -5227,15 +5085,7 @@ compute-cb@7 {
+ compatible = "qcom,fastrpc-compute-cb";
+ reg = <7>;
+ iommus = <&apps_smmu 0x2947 0x04a0>,
+- <&apps_smmu 0x2967 0x04a0>,
+- <&apps_smmu 0x2987 0x0400>,
+- <&apps_smmu 0x29c7 0x04a0>,
+- <&apps_smmu 0x29e7 0x04a0>,
+- <&apps_smmu 0x2d47 0x04a0>,
+- <&apps_smmu 0x2d67 0x04a0>,
+- <&apps_smmu 0x2d87 0x0400>,
+- <&apps_smmu 0x2dc7 0x04a0>,
+- <&apps_smmu 0x2de7 0x04a0>;
++ <&apps_smmu 0x2987 0x0400>;
+ dma-coherent;
+ };
+
+@@ -5243,15 +5093,7 @@ compute-cb@8 {
+ compatible = "qcom,fastrpc-compute-cb";
+ reg = <8>;
+ iommus = <&apps_smmu 0x2948 0x04a0>,
+- <&apps_smmu 0x2968 0x04a0>,
+- <&apps_smmu 0x2988 0x0400>,
+- <&apps_smmu 0x29c8 0x04a0>,
+- <&apps_smmu 0x29e8 0x04a0>,
+- <&apps_smmu 0x2d48 0x04a0>,
+- <&apps_smmu 0x2d68 0x04a0>,
+- <&apps_smmu 0x2d88 0x0400>,
+- <&apps_smmu 0x2dc8 0x04a0>,
+- <&apps_smmu 0x2de8 0x04a0>;
++ <&apps_smmu 0x2988 0x0400>;
+ dma-coherent;
+ };
+
+@@ -5259,15 +5101,7 @@ compute-cb@9 {
+ compatible = "qcom,fastrpc-compute-cb";
+ reg = <9>;
+ iommus = <&apps_smmu 0x2949 0x04a0>,
+- <&apps_smmu 0x2969 0x04a0>,
+- <&apps_smmu 0x2989 0x0400>,
+- <&apps_smmu 0x29c9 0x04a0>,
+- <&apps_smmu 0x29e9 0x04a0>,
+- <&apps_smmu 0x2d49 0x04a0>,
+- <&apps_smmu 0x2d69 0x04a0>,
+- <&apps_smmu 0x2d89 0x0400>,
+- <&apps_smmu 0x2dc9 0x04a0>,
+- <&apps_smmu 0x2de9 0x04a0>;
++ <&apps_smmu 0x2989 0x0400>;
+ dma-coherent;
+ };
+
+@@ -5275,15 +5109,7 @@ compute-cb@10 {
+ compatible = "qcom,fastrpc-compute-cb";
+ reg = <10>;
+ iommus = <&apps_smmu 0x294a 0x04a0>,
+- <&apps_smmu 0x296a 0x04a0>,
+- <&apps_smmu 0x298a 0x0400>,
+- <&apps_smmu 0x29ca 0x04a0>,
+- <&apps_smmu 0x29ea 0x04a0>,
+- <&apps_smmu 0x2d4a 0x04a0>,
+- <&apps_smmu 0x2d6a 0x04a0>,
+- <&apps_smmu 0x2d8a 0x0400>,
+- <&apps_smmu 0x2dca 0x04a0>,
+- <&apps_smmu 0x2dea 0x04a0>;
++ <&apps_smmu 0x298a 0x0400>;
+ dma-coherent;
+ };
+
+@@ -5291,15 +5117,7 @@ compute-cb@11 {
+ compatible = "qcom,fastrpc-compute-cb";
+ reg = <11>;
+ iommus = <&apps_smmu 0x294b 0x04a0>,
+- <&apps_smmu 0x296b 0x04a0>,
+- <&apps_smmu 0x298b 0x0400>,
+- <&apps_smmu 0x29cb 0x04a0>,
+- <&apps_smmu 0x29eb 0x04a0>,
+- <&apps_smmu 0x2d4b 0x04a0>,
+- <&apps_smmu 0x2d6b 0x04a0>,
+- <&apps_smmu 0x2d8b 0x0400>,
+- <&apps_smmu 0x2dcb 0x04a0>,
+- <&apps_smmu 0x2deb 0x04a0>;
++ <&apps_smmu 0x298b 0x0400>;
+ dma-coherent;
+ };
+
+@@ -5307,15 +5125,7 @@ compute-cb@12 {
+ compatible = "qcom,fastrpc-compute-cb";
+ reg = <12>;
+ iommus = <&apps_smmu 0x294c 0x04a0>,
+- <&apps_smmu 0x296c 0x04a0>,
+- <&apps_smmu 0x298c 0x0400>,
+- <&apps_smmu 0x29cc 0x04a0>,
+- <&apps_smmu 0x29ec 0x04a0>,
+- <&apps_smmu 0x2d4c 0x04a0>,
+- <&apps_smmu 0x2d6c 0x04a0>,
+- <&apps_smmu 0x2d8c 0x0400>,
+- <&apps_smmu 0x2dcc 0x04a0>,
+- <&apps_smmu 0x2dec 0x04a0>;
++ <&apps_smmu 0x298c 0x0400>;
+ dma-coherent;
+ };
+
+@@ -5323,15 +5133,7 @@ compute-cb@13 {
+ compatible = "qcom,fastrpc-compute-cb";
+ reg = <13>;
+ iommus = <&apps_smmu 0x294d 0x04a0>,
+- <&apps_smmu 0x296d 0x04a0>,
+- <&apps_smmu 0x298d 0x0400>,
+- <&apps_smmu 0x29Cd 0x04a0>,
+- <&apps_smmu 0x29ed 0x04a0>,
+- <&apps_smmu 0x2d4d 0x04a0>,
+- <&apps_smmu 0x2d6d 0x04a0>,
+- <&apps_smmu 0x2d8d 0x0400>,
+- <&apps_smmu 0x2dcd 0x04a0>,
+- <&apps_smmu 0x2ded 0x04a0>;
++ <&apps_smmu 0x298d 0x0400>;
+ dma-coherent;
+ };
+ };
+diff --git a/arch/arm64/boot/dts/qcom/sm8350.dtsi b/arch/arm64/boot/dts/qcom/sm8350.dtsi
+index 69da30f35baaab..f055600d6cfe5b 100644
+--- a/arch/arm64/boot/dts/qcom/sm8350.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8350.dtsi
+@@ -455,7 +455,7 @@ cdsp_secure_heap: memory@80c00000 {
+ no-map;
+ };
+
+- pil_camera_mem: mmeory@85200000 {
++ pil_camera_mem: memory@85200000 {
+ reg = <0x0 0x85200000 0x0 0x500000>;
+ no-map;
+ };
+diff --git a/arch/arm64/boot/dts/qcom/sm8450.dtsi b/arch/arm64/boot/dts/qcom/sm8450.dtsi
+index 9c809fc5fa45a9..419df72cd04b0c 100644
+--- a/arch/arm64/boot/dts/qcom/sm8450.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8450.dtsi
+@@ -5283,6 +5283,8 @@ cryptobam: dma-controller@1dc4000 {
+ interrupts = <GIC_SPI 272 IRQ_TYPE_LEVEL_HIGH>;
+ #dma-cells = <1>;
+ qcom,ee = <0>;
++ qcom,num-ees = <4>;
++ num-channels = <16>;
+ qcom,controlled-remotely;
+ iommus = <&apps_smmu 0x584 0x11>,
+ <&apps_smmu 0x588 0x0>,
+diff --git a/arch/arm64/boot/dts/qcom/sm8550.dtsi b/arch/arm64/boot/dts/qcom/sm8550.dtsi
+index eac8de4005d82f..ac3e00ad417719 100644
+--- a/arch/arm64/boot/dts/qcom/sm8550.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8550.dtsi
+@@ -1957,6 +1957,8 @@ cryptobam: dma-controller@1dc4000 {
+ interrupts = <GIC_SPI 272 IRQ_TYPE_LEVEL_HIGH>;
+ #dma-cells = <1>;
+ qcom,ee = <0>;
++ qcom,num-ees = <4>;
++ num-channels = <20>;
+ qcom,controlled-remotely;
+ iommus = <&apps_smmu 0x480 0x0>,
+ <&apps_smmu 0x481 0x0>;
+diff --git a/arch/arm64/boot/dts/qcom/sm8650.dtsi b/arch/arm64/boot/dts/qcom/sm8650.dtsi
+index 86684cb9a93256..c8a2a76a98f000 100644
+--- a/arch/arm64/boot/dts/qcom/sm8650.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8650.dtsi
+@@ -2533,6 +2533,8 @@ cryptobam: dma-controller@1dc4000 {
+ <&apps_smmu 0x481 0>;
+
+ qcom,ee = <0>;
++ qcom,num-ees = <4>;
++ num-channels = <20>;
+ qcom,controlled-remotely;
+ };
+
+diff --git a/arch/arm64/boot/dts/qcom/x1e001de-devkit.dts b/arch/arm64/boot/dts/qcom/x1e001de-devkit.dts
+index 5e3970b26e2f95..f5063a0df9fbfa 100644
+--- a/arch/arm64/boot/dts/qcom/x1e001de-devkit.dts
++++ b/arch/arm64/boot/dts/qcom/x1e001de-devkit.dts
+@@ -507,6 +507,7 @@ vreg_l12b_1p2: ldo12 {
+ regulator-min-microvolt = <1200000>;
+ regulator-max-microvolt = <1200000>;
+ regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++ regulator-always-on;
+ };
+
+ vreg_l13b_3p0: ldo13 {
+@@ -528,6 +529,7 @@ vreg_l15b_1p8: ldo15 {
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++ regulator-always-on;
+ };
+
+ vreg_l16b_2p9: ldo16 {
+@@ -745,8 +747,8 @@ vreg_l1j_0p8: ldo1 {
+
+ vreg_l2j_1p2: ldo2 {
+ regulator-name = "vreg_l2j_1p2";
+- regulator-min-microvolt = <1200000>;
+- regulator-max-microvolt = <1200000>;
++ regulator-min-microvolt = <1256000>;
++ regulator-max-microvolt = <1256000>;
+ regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
+ };
+
+diff --git a/arch/arm64/boot/dts/qcom/x1e80100-asus-vivobook-s15.dts b/arch/arm64/boot/dts/qcom/x1e80100-asus-vivobook-s15.dts
+index 53781f9b13af3e..f53067463b7601 100644
+--- a/arch/arm64/boot/dts/qcom/x1e80100-asus-vivobook-s15.dts
++++ b/arch/arm64/boot/dts/qcom/x1e80100-asus-vivobook-s15.dts
+@@ -330,8 +330,8 @@ vreg_l1j_0p8: ldo1 {
+
+ vreg_l2j_1p2: ldo2 {
+ regulator-name = "vreg_l2j_1p2";
+- regulator-min-microvolt = <1200000>;
+- regulator-max-microvolt = <1200000>;
++ regulator-min-microvolt = <1256000>;
++ regulator-max-microvolt = <1256000>;
+ regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
+ };
+
+diff --git a/arch/arm64/boot/dts/qcom/x1e80100-dell-xps13-9345.dts b/arch/arm64/boot/dts/qcom/x1e80100-dell-xps13-9345.dts
+index 86e87f03b0ec61..90f588ed7d63d7 100644
+--- a/arch/arm64/boot/dts/qcom/x1e80100-dell-xps13-9345.dts
++++ b/arch/arm64/boot/dts/qcom/x1e80100-dell-xps13-9345.dts
+@@ -359,6 +359,7 @@ vreg_l12b_1p2: ldo12 {
+ regulator-min-microvolt = <1200000>;
+ regulator-max-microvolt = <1200000>;
+ regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++ regulator-always-on;
+ };
+
+ vreg_l13b_3p0: ldo13 {
+@@ -380,6 +381,7 @@ vreg_l15b_1p8: ldo15 {
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++ regulator-always-on;
+ };
+
+ vreg_l17b_2p5: ldo17 {
+diff --git a/arch/arm64/boot/dts/qcom/x1e80100-hp-omnibook-x14.dts b/arch/arm64/boot/dts/qcom/x1e80100-hp-omnibook-x14.dts
+index cd860a246c450b..929da9ecddc47c 100644
+--- a/arch/arm64/boot/dts/qcom/x1e80100-hp-omnibook-x14.dts
++++ b/arch/arm64/boot/dts/qcom/x1e80100-hp-omnibook-x14.dts
+@@ -633,6 +633,7 @@ vreg_l12b_1p2: ldo12 {
+ regulator-min-microvolt = <1200000>;
+ regulator-max-microvolt = <1200000>;
+ regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++ regulator-always-on;
+ };
+
+ vreg_l13b_3p0: ldo13 {
+@@ -654,6 +655,7 @@ vreg_l15b_1p8: ldo15 {
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++ regulator-always-on;
+ };
+
+ vreg_l16b_2p9: ldo16 {
+@@ -871,8 +873,8 @@ vreg_l1j_0p8: ldo1 {
+
+ vreg_l2j_1p2: ldo2 {
+ regulator-name = "vreg_l2j_1p2";
+- regulator-min-microvolt = <1200000>;
+- regulator-max-microvolt = <1200000>;
++ regulator-min-microvolt = <1256000>;
++ regulator-max-microvolt = <1256000>;
+ regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
+ };
+
+@@ -1352,18 +1354,22 @@ &remoteproc_cdsp {
+ status = "okay";
+ };
+
++&smb2360_0 {
++ status = "okay";
++};
++
+ &smb2360_0_eusb2_repeater {
+ vdd18-supply = <&vreg_l3d_1p8>;
+ vdd3-supply = <&vreg_l2b_3p0>;
++};
+
++&smb2360_1 {
+ status = "okay";
+ };
+
+ &smb2360_1_eusb2_repeater {
+ vdd18-supply = <&vreg_l3d_1p8>;
+ vdd3-supply = <&vreg_l14b_3p0>;
+-
+- status = "okay";
+ };
+
+ &swr0 {
+diff --git a/arch/arm64/boot/dts/qcom/x1e80100-lenovo-yoga-slim7x.dts b/arch/arm64/boot/dts/qcom/x1e80100-lenovo-yoga-slim7x.dts
+index a3d53f2ba2c3d0..744a66ae5bdc84 100644
+--- a/arch/arm64/boot/dts/qcom/x1e80100-lenovo-yoga-slim7x.dts
++++ b/arch/arm64/boot/dts/qcom/x1e80100-lenovo-yoga-slim7x.dts
+@@ -290,6 +290,7 @@ vreg_l12b_1p2: ldo12 {
+ regulator-min-microvolt = <1200000>;
+ regulator-max-microvolt = <1200000>;
+ regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++ regulator-always-on;
+ };
+
+ vreg_l14b_3p0: ldo14 {
+@@ -304,8 +305,8 @@ vreg_l15b_1p8: ldo15 {
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++ regulator-always-on;
+ };
+-
+ };
+
+ regulators-1 {
+@@ -508,8 +509,8 @@ vreg_l1j_0p8: ldo1 {
+
+ vreg_l2j_1p2: ldo2 {
+ regulator-name = "vreg_l2j_1p2";
+- regulator-min-microvolt = <1200000>;
+- regulator-max-microvolt = <1200000>;
++ regulator-min-microvolt = <1256000>;
++ regulator-max-microvolt = <1256000>;
+ regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
+ };
+
+diff --git a/arch/arm64/boot/dts/qcom/x1e80100-qcp.dts b/arch/arm64/boot/dts/qcom/x1e80100-qcp.dts
+index ec594628304a9a..f06f4547884683 100644
+--- a/arch/arm64/boot/dts/qcom/x1e80100-qcp.dts
++++ b/arch/arm64/boot/dts/qcom/x1e80100-qcp.dts
+@@ -437,6 +437,7 @@ vreg_l12b_1p2: ldo12 {
+ regulator-min-microvolt = <1200000>;
+ regulator-max-microvolt = <1200000>;
+ regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++ regulator-always-on;
+ };
+
+ vreg_l13b_3p0: ldo13 {
+@@ -458,6 +459,7 @@ vreg_l15b_1p8: ldo15 {
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++ regulator-always-on;
+ };
+
+ vreg_l16b_2p9: ldo16 {
+@@ -675,8 +677,8 @@ vreg_l1j_0p8: ldo1 {
+
+ vreg_l2j_1p2: ldo2 {
+ regulator-name = "vreg_l2j_1p2";
+- regulator-min-microvolt = <1200000>;
+- regulator-max-microvolt = <1200000>;
++ regulator-min-microvolt = <1256000>;
++ regulator-max-microvolt = <1256000>;
+ regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
+ };
+
+diff --git a/arch/arm64/boot/dts/qcom/x1e80100.dtsi b/arch/arm64/boot/dts/qcom/x1e80100.dtsi
+index 4936fa5b98ff7a..5aeecf711340d2 100644
+--- a/arch/arm64/boot/dts/qcom/x1e80100.dtsi
++++ b/arch/arm64/boot/dts/qcom/x1e80100.dtsi
+@@ -20,6 +20,7 @@
+ #include <dt-bindings/soc/qcom,gpr.h>
+ #include <dt-bindings/soc/qcom,rpmh-rsc.h>
+ #include <dt-bindings/sound/qcom,q6dsp-lpass-ports.h>
++#include <dt-bindings/thermal/thermal.h>
+
+ / {
+ interrupt-parent = <&intc>;
+@@ -3125,7 +3126,7 @@ pcie3: pcie@1bd0000 {
+ device_type = "pci";
+ compatible = "qcom,pcie-x1e80100";
+ reg = <0x0 0x01bd0000 0x0 0x3000>,
+- <0x0 0x78000000 0x0 0xf1d>,
++ <0x0 0x78000000 0x0 0xf20>,
+ <0x0 0x78000f40 0x0 0xa8>,
+ <0x0 0x78001000 0x0 0x1000>,
+ <0x0 0x78100000 0x0 0x100000>,
+@@ -8457,8 +8458,8 @@ trip-point0 {
+ };
+
+ aoss0-critical {
+- temperature = <125000>;
+- hysteresis = <0>;
++ temperature = <115000>;
++ hysteresis = <1000>;
+ type = "critical";
+ };
+ };
+@@ -8483,7 +8484,7 @@ trip-point1 {
+ };
+
+ cpu-critical {
+- temperature = <110000>;
++ temperature = <115000>;
+ hysteresis = <1000>;
+ type = "critical";
+ };
+@@ -8509,7 +8510,7 @@ trip-point1 {
+ };
+
+ cpu-critical {
+- temperature = <110000>;
++ temperature = <115000>;
+ hysteresis = <1000>;
+ type = "critical";
+ };
+@@ -8535,7 +8536,7 @@ trip-point1 {
+ };
+
+ cpu-critical {
+- temperature = <110000>;
++ temperature = <115000>;
+ hysteresis = <1000>;
+ type = "critical";
+ };
+@@ -8561,7 +8562,7 @@ trip-point1 {
+ };
+
+ cpu-critical {
+- temperature = <110000>;
++ temperature = <115000>;
+ hysteresis = <1000>;
+ type = "critical";
+ };
+@@ -8587,7 +8588,7 @@ trip-point1 {
+ };
+
+ cpu-critical {
+- temperature = <110000>;
++ temperature = <115000>;
+ hysteresis = <1000>;
+ type = "critical";
+ };
+@@ -8613,7 +8614,7 @@ trip-point1 {
+ };
+
+ cpu-critical {
+- temperature = <110000>;
++ temperature = <115000>;
+ hysteresis = <1000>;
+ type = "critical";
+ };
+@@ -8639,7 +8640,7 @@ trip-point1 {
+ };
+
+ cpu-critical {
+- temperature = <110000>;
++ temperature = <115000>;
+ hysteresis = <1000>;
+ type = "critical";
+ };
+@@ -8665,7 +8666,7 @@ trip-point1 {
+ };
+
+ cpu-critical {
+- temperature = <110000>;
++ temperature = <115000>;
+ hysteresis = <1000>;
+ type = "critical";
+ };
+@@ -8683,8 +8684,8 @@ trip-point0 {
+ };
+
+ cpuss2-critical {
+- temperature = <125000>;
+- hysteresis = <0>;
++ temperature = <115000>;
++ hysteresis = <1000>;
+ type = "critical";
+ };
+ };
+@@ -8701,8 +8702,8 @@ trip-point0 {
+ };
+
+ cpuss2-critical {
+- temperature = <125000>;
+- hysteresis = <0>;
++ temperature = <115000>;
++ hysteresis = <1000>;
+ type = "critical";
+ };
+ };
+@@ -8719,7 +8720,7 @@ trip-point0 {
+ };
+
+ mem-critical {
+- temperature = <125000>;
++ temperature = <115000>;
+ hysteresis = <0>;
+ type = "critical";
+ };
+@@ -8727,15 +8728,19 @@ mem-critical {
+ };
+
+ video-thermal {
+- polling-delay-passive = <250>;
+-
+ thermal-sensors = <&tsens0 12>;
+
+ trips {
+ trip-point0 {
+- temperature = <125000>;
++ temperature = <90000>;
++ hysteresis = <2000>;
++ type = "hot";
++ };
++
++ video-critical {
++ temperature = <115000>;
+ hysteresis = <1000>;
+- type = "passive";
++ type = "critical";
+ };
+ };
+ };
+@@ -8751,8 +8756,8 @@ trip-point0 {
+ };
+
+ aoss0-critical {
+- temperature = <125000>;
+- hysteresis = <0>;
++ temperature = <115000>;
++ hysteresis = <1000>;
+ type = "critical";
+ };
+ };
+@@ -8777,7 +8782,7 @@ trip-point1 {
+ };
+
+ cpu-critical {
+- temperature = <110000>;
++ temperature = <115000>;
+ hysteresis = <1000>;
+ type = "critical";
+ };
+@@ -8803,7 +8808,7 @@ trip-point1 {
+ };
+
+ cpu-critical {
+- temperature = <110000>;
++ temperature = <115000>;
+ hysteresis = <1000>;
+ type = "critical";
+ };
+@@ -8829,7 +8834,7 @@ trip-point1 {
+ };
+
+ cpu-critical {
+- temperature = <110000>;
++ temperature = <115000>;
+ hysteresis = <1000>;
+ type = "critical";
+ };
+@@ -8855,7 +8860,7 @@ trip-point1 {
+ };
+
+ cpu-critical {
+- temperature = <110000>;
++ temperature = <115000>;
+ hysteresis = <1000>;
+ type = "critical";
+ };
+@@ -8881,7 +8886,7 @@ trip-point1 {
+ };
+
+ cpu-critical {
+- temperature = <110000>;
++ temperature = <115000>;
+ hysteresis = <1000>;
+ type = "critical";
+ };
+@@ -8907,7 +8912,7 @@ trip-point1 {
+ };
+
+ cpu-critical {
+- temperature = <110000>;
++ temperature = <115000>;
+ hysteresis = <1000>;
+ type = "critical";
+ };
+@@ -8933,7 +8938,7 @@ trip-point1 {
+ };
+
+ cpu-critical {
+- temperature = <110000>;
++ temperature = <115000>;
+ hysteresis = <1000>;
+ type = "critical";
+ };
+@@ -8959,7 +8964,7 @@ trip-point1 {
+ };
+
+ cpu-critical {
+- temperature = <110000>;
++ temperature = <115000>;
+ hysteresis = <1000>;
+ type = "critical";
+ };
+@@ -8977,8 +8982,8 @@ trip-point0 {
+ };
+
+ cpuss2-critical {
+- temperature = <125000>;
+- hysteresis = <0>;
++ temperature = <115000>;
++ hysteresis = <1000>;
+ type = "critical";
+ };
+ };
+@@ -8995,8 +9000,8 @@ trip-point0 {
+ };
+
+ cpuss2-critical {
+- temperature = <125000>;
+- hysteresis = <0>;
++ temperature = <115000>;
++ hysteresis = <1000>;
+ type = "critical";
+ };
+ };
+@@ -9013,8 +9018,8 @@ trip-point0 {
+ };
+
+ aoss0-critical {
+- temperature = <125000>;
+- hysteresis = <0>;
++ temperature = <115000>;
++ hysteresis = <1000>;
+ type = "critical";
+ };
+ };
+@@ -9039,7 +9044,7 @@ trip-point1 {
+ };
+
+ cpu-critical {
+- temperature = <110000>;
++ temperature = <115000>;
+ hysteresis = <1000>;
+ type = "critical";
+ };
+@@ -9065,7 +9070,7 @@ trip-point1 {
+ };
+
+ cpu-critical {
+- temperature = <110000>;
++ temperature = <115000>;
+ hysteresis = <1000>;
+ type = "critical";
+ };
+@@ -9091,7 +9096,7 @@ trip-point1 {
+ };
+
+ cpu-critical {
+- temperature = <110000>;
++ temperature = <115000>;
+ hysteresis = <1000>;
+ type = "critical";
+ };
+@@ -9117,7 +9122,7 @@ trip-point1 {
+ };
+
+ cpu-critical {
+- temperature = <110000>;
++ temperature = <115000>;
+ hysteresis = <1000>;
+ type = "critical";
+ };
+@@ -9143,7 +9148,7 @@ trip-point1 {
+ };
+
+ cpu-critical {
+- temperature = <110000>;
++ temperature = <115000>;
+ hysteresis = <1000>;
+ type = "critical";
+ };
+@@ -9169,7 +9174,7 @@ trip-point1 {
+ };
+
+ cpu-critical {
+- temperature = <110000>;
++ temperature = <115000>;
+ hysteresis = <1000>;
+ type = "critical";
+ };
+@@ -9195,7 +9200,7 @@ trip-point1 {
+ };
+
+ cpu-critical {
+- temperature = <110000>;
++ temperature = <115000>;
+ hysteresis = <1000>;
+ type = "critical";
+ };
+@@ -9221,7 +9226,7 @@ trip-point1 {
+ };
+
+ cpu-critical {
+- temperature = <110000>;
++ temperature = <115000>;
+ hysteresis = <1000>;
+ type = "critical";
+ };
+@@ -9239,8 +9244,8 @@ trip-point0 {
+ };
+
+ cpuss2-critical {
+- temperature = <125000>;
+- hysteresis = <0>;
++ temperature = <115000>;
++ hysteresis = <1000>;
+ type = "critical";
+ };
+ };
+@@ -9257,8 +9262,8 @@ trip-point0 {
+ };
+
+ cpuss2-critical {
+- temperature = <125000>;
+- hysteresis = <0>;
++ temperature = <115000>;
++ hysteresis = <1000>;
+ type = "critical";
+ };
+ };
+@@ -9275,8 +9280,8 @@ trip-point0 {
+ };
+
+ aoss0-critical {
+- temperature = <125000>;
+- hysteresis = <0>;
++ temperature = <115000>;
++ hysteresis = <1000>;
+ type = "critical";
+ };
+ };
+@@ -9293,8 +9298,8 @@ trip-point0 {
+ };
+
+ nsp0-critical {
+- temperature = <125000>;
+- hysteresis = <0>;
++ temperature = <115000>;
++ hysteresis = <1000>;
+ type = "critical";
+ };
+ };
+@@ -9311,8 +9316,8 @@ trip-point0 {
+ };
+
+ nsp1-critical {
+- temperature = <125000>;
+- hysteresis = <0>;
++ temperature = <115000>;
++ hysteresis = <1000>;
+ type = "critical";
+ };
+ };
+@@ -9329,8 +9334,8 @@ trip-point0 {
+ };
+
+ nsp2-critical {
+- temperature = <125000>;
+- hysteresis = <0>;
++ temperature = <115000>;
++ hysteresis = <1000>;
+ type = "critical";
+ };
+ };
+@@ -9347,33 +9352,34 @@ trip-point0 {
+ };
+
+ nsp3-critical {
+- temperature = <125000>;
+- hysteresis = <0>;
++ temperature = <115000>;
++ hysteresis = <1000>;
+ type = "critical";
+ };
+ };
+ };
+
+ gpuss-0-thermal {
+- polling-delay-passive = <10>;
++ polling-delay-passive = <200>;
+
+ thermal-sensors = <&tsens3 5>;
+
+- trips {
+- trip-point0 {
+- temperature = <85000>;
+- hysteresis = <1000>;
+- type = "passive";
++ cooling-maps {
++ map0 {
++ trip = <&gpuss0_alert0>;
++ cooling-device = <&gpu THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
++ };
+
+- trip-point1 {
+- temperature = <90000>;
++ trips {
++ gpuss0_alert0: trip-point0 {
++ temperature = <95000>;
+ hysteresis = <1000>;
+- type = "hot";
++ type = "passive";
+ };
+
+- trip-point2 {
+- temperature = <125000>;
++ gpu-critical {
++ temperature = <115000>;
+ hysteresis = <1000>;
+ type = "critical";
+ };
+@@ -9381,25 +9387,26 @@ trip-point2 {
+ };
+
+ gpuss-1-thermal {
+- polling-delay-passive = <10>;
++ polling-delay-passive = <200>;
+
+ thermal-sensors = <&tsens3 6>;
+
+- trips {
+- trip-point0 {
+- temperature = <85000>;
+- hysteresis = <1000>;
+- type = "passive";
++ cooling-maps {
++ map0 {
++ trip = <&gpuss1_alert0>;
++ cooling-device = <&gpu THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
++ };
+
+- trip-point1 {
+- temperature = <90000>;
++ trips {
++ gpuss1_alert0: trip-point0 {
++ temperature = <95000>;
+ hysteresis = <1000>;
+- type = "hot";
++ type = "passive";
+ };
+
+- trip-point2 {
+- temperature = <125000>;
++ gpu-critical {
++ temperature = <115000>;
+ hysteresis = <1000>;
+ type = "critical";
+ };
+@@ -9407,25 +9414,26 @@ trip-point2 {
+ };
+
+ gpuss-2-thermal {
+- polling-delay-passive = <10>;
++ polling-delay-passive = <200>;
+
+ thermal-sensors = <&tsens3 7>;
+
+- trips {
+- trip-point0 {
+- temperature = <85000>;
+- hysteresis = <1000>;
+- type = "passive";
++ cooling-maps {
++ map0 {
++ trip = <&gpuss2_alert0>;
++ cooling-device = <&gpu THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
++ };
+
+- trip-point1 {
+- temperature = <90000>;
++ trips {
++ gpuss2_alert0: trip-point0 {
++ temperature = <95000>;
+ hysteresis = <1000>;
+- type = "hot";
++ type = "passive";
+ };
+
+- trip-point2 {
+- temperature = <125000>;
++ gpu-critical {
++ temperature = <115000>;
+ hysteresis = <1000>;
+ type = "critical";
+ };
+@@ -9433,25 +9441,26 @@ trip-point2 {
+ };
+
+ gpuss-3-thermal {
+- polling-delay-passive = <10>;
++ polling-delay-passive = <200>;
+
+ thermal-sensors = <&tsens3 8>;
+
+- trips {
+- trip-point0 {
+- temperature = <85000>;
+- hysteresis = <1000>;
+- type = "passive";
++ cooling-maps {
++ map0 {
++ trip = <&gpuss3_alert0>;
++ cooling-device = <&gpu THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
++ };
+
+- trip-point1 {
+- temperature = <90000>;
++ trips {
++ gpuss3_alert0: trip-point0 {
++ temperature = <95000>;
+ hysteresis = <1000>;
+- type = "hot";
++ type = "passive";
+ };
+
+- trip-point2 {
+- temperature = <125000>;
++ gpu-critical {
++ temperature = <115000>;
+ hysteresis = <1000>;
+ type = "critical";
+ };
+@@ -9459,25 +9468,26 @@ trip-point2 {
+ };
+
+ gpuss-4-thermal {
+- polling-delay-passive = <10>;
++ polling-delay-passive = <200>;
+
+ thermal-sensors = <&tsens3 9>;
+
+- trips {
+- trip-point0 {
+- temperature = <85000>;
+- hysteresis = <1000>;
+- type = "passive";
++ cooling-maps {
++ map0 {
++ trip = <&gpuss4_alert0>;
++ cooling-device = <&gpu THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
++ };
+
+- trip-point1 {
+- temperature = <90000>;
++ trips {
++ gpuss4_alert0: trip-point0 {
++ temperature = <95000>;
+ hysteresis = <1000>;
+- type = "hot";
++ type = "passive";
+ };
+
+- trip-point2 {
+- temperature = <125000>;
++ gpu-critical {
++ temperature = <115000>;
+ hysteresis = <1000>;
+ type = "critical";
+ };
+@@ -9485,25 +9495,26 @@ trip-point2 {
+ };
+
+ gpuss-5-thermal {
+- polling-delay-passive = <10>;
++ polling-delay-passive = <200>;
+
+ thermal-sensors = <&tsens3 10>;
+
+- trips {
+- trip-point0 {
+- temperature = <85000>;
+- hysteresis = <1000>;
+- type = "passive";
++ cooling-maps {
++ map0 {
++ trip = <&gpuss5_alert0>;
++ cooling-device = <&gpu THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
++ };
+
+- trip-point1 {
+- temperature = <90000>;
++ trips {
++ gpuss5_alert0: trip-point0 {
++ temperature = <95000>;
+ hysteresis = <1000>;
+- type = "hot";
++ type = "passive";
+ };
+
+- trip-point2 {
+- temperature = <125000>;
++ gpu-critical {
++ temperature = <115000>;
+ hysteresis = <1000>;
+ type = "critical";
+ };
+@@ -9511,25 +9522,26 @@ trip-point2 {
+ };
+
+ gpuss-6-thermal {
+- polling-delay-passive = <10>;
++ polling-delay-passive = <200>;
+
+ thermal-sensors = <&tsens3 11>;
+
+- trips {
+- trip-point0 {
+- temperature = <85000>;
+- hysteresis = <1000>;
+- type = "passive";
++ cooling-maps {
++ map0 {
++ trip = <&gpuss6_alert0>;
++ cooling-device = <&gpu THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
++ };
+
+- trip-point1 {
+- temperature = <90000>;
++ trips {
++ gpuss6_alert0: trip-point0 {
++ temperature = <95000>;
+ hysteresis = <1000>;
+- type = "hot";
++ type = "passive";
+ };
+
+- trip-point2 {
+- temperature = <125000>;
++ gpu-critical {
++ temperature = <115000>;
+ hysteresis = <1000>;
+ type = "critical";
+ };
+@@ -9537,25 +9549,26 @@ trip-point2 {
+ };
+
+ gpuss-7-thermal {
+- polling-delay-passive = <10>;
++ polling-delay-passive = <200>;
+
+ thermal-sensors = <&tsens3 12>;
+
+- trips {
+- trip-point0 {
+- temperature = <85000>;
+- hysteresis = <1000>;
+- type = "passive";
++ cooling-maps {
++ map0 {
++ trip = <&gpuss7_alert0>;
++ cooling-device = <&gpu THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
++ };
+
+- trip-point1 {
+- temperature = <90000>;
++ trips {
++ gpuss7_alert0: trip-point0 {
++ temperature = <95000>;
+ hysteresis = <1000>;
+- type = "hot";
++ type = "passive";
+ };
+
+- trip-point2 {
+- temperature = <125000>;
++ gpu-critical {
++ temperature = <115000>;
+ hysteresis = <1000>;
+ type = "critical";
+ };
+@@ -9574,7 +9587,7 @@ trip-point0 {
+
+ camera0-critical {
+ temperature = <115000>;
+- hysteresis = <0>;
++ hysteresis = <1000>;
+ type = "critical";
+ };
+ };
+@@ -9592,7 +9605,7 @@ trip-point0 {
+
+ camera0-critical {
+ temperature = <115000>;
+- hysteresis = <0>;
++ hysteresis = <1000>;
+ type = "critical";
+ };
+ };
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi b/arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi
+index 995b30a7aae01a..dd5a9bca26d1d2 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi
+@@ -60,16 +60,6 @@ vcc3v3_sys: regulator-vcc3v3-sys {
+ vin-supply = <&vcc5v0_sys>;
+ };
+
+- vcc5v0_host: regulator-vcc5v0-host {
+- compatible = "regulator-fixed";
+- gpio = <&gpio4 RK_PA3 GPIO_ACTIVE_LOW>;
+- pinctrl-names = "default";
+- pinctrl-0 = <&vcc5v0_host_en>;
+- regulator-name = "vcc5v0_host";
+- regulator-always-on;
+- vin-supply = <&vcc5v0_sys>;
+- };
+-
+ vcc5v0_sys: regulator-vcc5v0-sys {
+ compatible = "regulator-fixed";
+ regulator-name = "vcc5v0_sys";
+@@ -521,10 +511,10 @@ pmic_int_l: pmic-int-l {
+ };
+ };
+
+- usb2 {
+- vcc5v0_host_en: vcc5v0-host-en {
++ usb {
++ cy3304_reset: cy3304-reset {
+ rockchip,pins =
+- <4 RK_PA3 RK_FUNC_GPIO &pcfg_pull_none>;
++ <4 RK_PA3 RK_FUNC_GPIO &pcfg_output_high>;
+ };
+ };
+
+@@ -591,7 +581,6 @@ u2phy1_otg: otg-port {
+ };
+
+ u2phy1_host: host-port {
+- phy-supply = <&vcc5v0_host>;
+ status = "okay";
+ };
+ };
+@@ -603,6 +592,29 @@ &usbdrd3_1 {
+ &usbdrd_dwc3_1 {
+ status = "okay";
+ dr_mode = "host";
++ pinctrl-names = "default";
++ pinctrl-0 = <&cy3304_reset>;
++ #address-cells = <1>;
++ #size-cells = <0>;
++
++ hub_2_0: hub@1 {
++ compatible = "usb4b4,6502", "usb4b4,6506";
++ reg = <1>;
++ peer-hub = <&hub_3_0>;
++ reset-gpios = <&gpio4 RK_PA3 GPIO_ACTIVE_HIGH>;
++ vdd-supply = <&vcc1v2_phy>;
++ vdd2-supply = <&vcc3v3_sys>;
++
++ };
++
++ hub_3_0: hub@2 {
++ compatible = "usb4b4,6500", "usb4b4,6504";
++ reg = <2>;
++ peer-hub = <&hub_2_0>;
++ reset-gpios = <&gpio4 RK_PA3 GPIO_ACTIVE_HIGH>;
++ vdd-supply = <&vcc1v2_phy>;
++ vdd2-supply = <&vcc3v3_sys>;
++ };
+ };
+
+ &usb_host1_ehci {
+diff --git a/arch/arm64/boot/dts/ti/k3-am62-main.dtsi b/arch/arm64/boot/dts/ti/k3-am62-main.dtsi
+index 7d355aa73ea211..0c286f600296cd 100644
+--- a/arch/arm64/boot/dts/ti/k3-am62-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am62-main.dtsi
+@@ -552,8 +552,6 @@ sdhci0: mmc@fa10000 {
+ power-domains = <&k3_pds 57 TI_SCI_PD_EXCLUSIVE>;
+ clocks = <&k3_clks 57 5>, <&k3_clks 57 6>;
+ clock-names = "clk_ahb", "clk_xin";
+- assigned-clocks = <&k3_clks 57 6>;
+- assigned-clock-parents = <&k3_clks 57 8>;
+ bus-width = <8>;
+ mmc-ddr-1_8v;
+ mmc-hs200-1_8v;
+diff --git a/arch/arm64/boot/dts/ti/k3-am62a-main.dtsi b/arch/arm64/boot/dts/ti/k3-am62a-main.dtsi
+index a1daba7b1fad5d..455ccc770f16a1 100644
+--- a/arch/arm64/boot/dts/ti/k3-am62a-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am62a-main.dtsi
+@@ -575,8 +575,6 @@ sdhci0: mmc@fa10000 {
+ power-domains = <&k3_pds 57 TI_SCI_PD_EXCLUSIVE>;
+ clocks = <&k3_clks 57 5>, <&k3_clks 57 6>;
+ clock-names = "clk_ahb", "clk_xin";
+- assigned-clocks = <&k3_clks 57 6>;
+- assigned-clock-parents = <&k3_clks 57 8>;
+ bus-width = <8>;
+ mmc-hs200-1_8v;
+ ti,clkbuf-sel = <0x7>;
+diff --git a/arch/arm64/boot/dts/ti/k3-am62p-j722s-common-main.dtsi b/arch/arm64/boot/dts/ti/k3-am62p-j722s-common-main.dtsi
+index 6e3beb5c2e010e..f9b5c97518d68f 100644
+--- a/arch/arm64/boot/dts/ti/k3-am62p-j722s-common-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am62p-j722s-common-main.dtsi
+@@ -564,8 +564,6 @@ sdhci0: mmc@fa10000 {
+ power-domains = <&k3_pds 57 TI_SCI_PD_EXCLUSIVE>;
+ clocks = <&k3_clks 57 1>, <&k3_clks 57 2>;
+ clock-names = "clk_ahb", "clk_xin";
+- assigned-clocks = <&k3_clks 57 2>;
+- assigned-clock-parents = <&k3_clks 57 4>;
+ bus-width = <8>;
+ mmc-ddr-1_8v;
+ mmc-hs200-1_8v;
+diff --git a/arch/arm64/boot/dts/ti/k3-am62x-sk-csi2-imx219.dtso b/arch/arm64/boot/dts/ti/k3-am62x-sk-csi2-imx219.dtso
+index 76ca02127f95ff..dd090813a32d61 100644
+--- a/arch/arm64/boot/dts/ti/k3-am62x-sk-csi2-imx219.dtso
++++ b/arch/arm64/boot/dts/ti/k3-am62x-sk-csi2-imx219.dtso
+@@ -22,7 +22,7 @@ &main_i2c2 {
+ #size-cells = <0>;
+ status = "okay";
+
+- i2c-switch@71 {
++ i2c-mux@71 {
+ compatible = "nxp,pca9543";
+ #address-cells = <1>;
+ #size-cells = <0>;
+@@ -39,7 +39,6 @@ ov5640: camera@10 {
+ reg = <0x10>;
+
+ clocks = <&clk_imx219_fixed>;
+- clock-names = "xclk";
+
+ reset-gpios = <&exp1 13 GPIO_ACTIVE_HIGH>;
+
+diff --git a/arch/arm64/boot/dts/ti/k3-am62x-sk-csi2-ov5640.dtso b/arch/arm64/boot/dts/ti/k3-am62x-sk-csi2-ov5640.dtso
+index ccc7f5e43184fa..7fc7c95f5cd578 100644
+--- a/arch/arm64/boot/dts/ti/k3-am62x-sk-csi2-ov5640.dtso
++++ b/arch/arm64/boot/dts/ti/k3-am62x-sk-csi2-ov5640.dtso
+@@ -22,7 +22,7 @@ &main_i2c2 {
+ #size-cells = <0>;
+ status = "okay";
+
+- i2c-switch@71 {
++ i2c-mux@71 {
+ compatible = "nxp,pca9543";
+ #address-cells = <1>;
+ #size-cells = <0>;
+diff --git a/arch/arm64/boot/dts/ti/k3-am62x-sk-csi2-tevi-ov5640.dtso b/arch/arm64/boot/dts/ti/k3-am62x-sk-csi2-tevi-ov5640.dtso
+index 4eaf9d757dd0ad..b6bfdfbbdd984a 100644
+--- a/arch/arm64/boot/dts/ti/k3-am62x-sk-csi2-tevi-ov5640.dtso
++++ b/arch/arm64/boot/dts/ti/k3-am62x-sk-csi2-tevi-ov5640.dtso
+@@ -22,7 +22,7 @@ &main_i2c2 {
+ #size-cells = <0>;
+ status = "okay";
+
+- i2c-switch@71 {
++ i2c-mux@71 {
+ compatible = "nxp,pca9543";
+ #address-cells = <1>;
+ #size-cells = <0>;
+diff --git a/arch/arm64/boot/dts/ti/k3-am65-main.dtsi b/arch/arm64/boot/dts/ti/k3-am65-main.dtsi
+index 94a812a1355baf..5ebf7ada6e4851 100644
+--- a/arch/arm64/boot/dts/ti/k3-am65-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am65-main.dtsi
+@@ -449,6 +449,8 @@ sdhci0: mmc@4f80000 {
+ ti,otap-del-sel-mmc-hs = <0x0>;
+ ti,otap-del-sel-ddr52 = <0x5>;
+ ti,otap-del-sel-hs200 = <0x5>;
++ ti,itap-del-sel-legacy = <0xa>;
++ ti,itap-del-sel-mmc-hs = <0x1>;
+ ti,itap-del-sel-ddr52 = <0x0>;
+ dma-coherent;
+ status = "disabled";
+diff --git a/arch/arm64/boot/dts/ti/k3-am68-sk-base-board.dts b/arch/arm64/boot/dts/ti/k3-am68-sk-base-board.dts
+index 11522b36e0cece..5fa70a874d7b4d 100644
+--- a/arch/arm64/boot/dts/ti/k3-am68-sk-base-board.dts
++++ b/arch/arm64/boot/dts/ti/k3-am68-sk-base-board.dts
+@@ -44,6 +44,17 @@ vusb_main: regulator-vusb-main5v0 {
+ regulator-boot-on;
+ };
+
++ vsys_5v0: regulator-vsys5v0 {
++ /* Output of LM61460 */
++ compatible = "regulator-fixed";
++ regulator-name = "vsys_5v0";
++ regulator-min-microvolt = <5000000>;
++ regulator-max-microvolt = <5000000>;
++ vin-supply = <&vusb_main>;
++ regulator-always-on;
++ regulator-boot-on;
++ };
++
+ vsys_3v3: regulator-vsys3v3 {
+ /* Output of LM5141 */
+ compatible = "regulator-fixed";
+@@ -76,7 +87,7 @@ vdd_sd_dv: regulator-tlv71033 {
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <3300000>;
+ regulator-boot-on;
+- vin-supply = <&vsys_3v3>;
++ vin-supply = <&vsys_5v0>;
+ gpios = <&main_gpio0 49 GPIO_ACTIVE_HIGH>;
+ states = <1800000 0x0>,
+ <3300000 0x1>;
+diff --git a/arch/arm64/boot/dts/ti/k3-j721e-sk-csi2-dual-imx219.dtso b/arch/arm64/boot/dts/ti/k3-j721e-sk-csi2-dual-imx219.dtso
+index 47bb5480b5b006..4eb3cffab0321d 100644
+--- a/arch/arm64/boot/dts/ti/k3-j721e-sk-csi2-dual-imx219.dtso
++++ b/arch/arm64/boot/dts/ti/k3-j721e-sk-csi2-dual-imx219.dtso
+@@ -19,6 +19,33 @@ clk_imx219_fixed: imx219-xclk {
+ #clock-cells = <0>;
+ clock-frequency = <24000000>;
+ };
++
++ reg_2p8v: regulator-2p8v {
++ compatible = "regulator-fixed";
++ regulator-name = "2P8V";
++ regulator-min-microvolt = <2800000>;
++ regulator-max-microvolt = <2800000>;
++ vin-supply = <&vdd_sd_dv>;
++ regulator-always-on;
++ };
++
++ reg_1p8v: regulator-1p8v {
++ compatible = "regulator-fixed";
++ regulator-name = "1P8V";
++ regulator-min-microvolt = <1800000>;
++ regulator-max-microvolt = <1800000>;
++ vin-supply = <&vdd_sd_dv>;
++ regulator-always-on;
++ };
++
++ reg_1p2v: regulator-1p2v {
++ compatible = "regulator-fixed";
++ regulator-name = "1P2V";
++ regulator-min-microvolt = <1200000>;
++ regulator-max-microvolt = <1200000>;
++ vin-supply = <&vdd_sd_dv>;
++ regulator-always-on;
++ };
+ };
+
+ &csi_mux {
+@@ -34,7 +61,9 @@ imx219_0: imx219-0@10 {
+ reg = <0x10>;
+
+ clocks = <&clk_imx219_fixed>;
+- clock-names = "xclk";
++ VANA-supply = <&reg_2p8v>;
++ VDIG-supply = <&reg_1p8v>;
++ VDDL-supply = <&reg_1p2v>;
+
+ port {
+ csi2_cam0: endpoint {
+@@ -56,7 +85,9 @@ imx219_1: imx219-1@10 {
+ reg = <0x10>;
+
+ clocks = <&clk_imx219_fixed>;
+- clock-names = "xclk";
++ VANA-supply = <&reg_2p8v>;
++ VDIG-supply = <&reg_1p8v>;
++ VDDL-supply = <&reg_1p2v>;
+
+ port {
+ csi2_cam1: endpoint {
+diff --git a/arch/arm64/boot/dts/ti/k3-j721e-sk.dts b/arch/arm64/boot/dts/ti/k3-j721e-sk.dts
+index 69b3d1ed8a21c2..9fd32f0200102c 100644
+--- a/arch/arm64/boot/dts/ti/k3-j721e-sk.dts
++++ b/arch/arm64/boot/dts/ti/k3-j721e-sk.dts
+@@ -184,6 +184,17 @@ vsys_3v3: fixedregulator-vsys3v3 {
+ regulator-boot-on;
+ };
+
++ vsys_5v0: fixedregulator-vsys5v0 {
++ /* Output of LM61460 */
++ compatible = "regulator-fixed";
++ regulator-name = "vsys_5v0";
++ regulator-min-microvolt = <5000000>;
++ regulator-max-microvolt = <5000000>;
++ vin-supply = <&vusb_main>;
++ regulator-always-on;
++ regulator-boot-on;
++ };
++
+ vdd_mmc1: fixedregulator-sd {
+ compatible = "regulator-fixed";
+ pinctrl-names = "default";
+@@ -211,6 +222,20 @@ vdd_sd_dv_alt: gpio-regulator-tps659411 {
+ <3300000 0x1>;
+ };
+
++ vdd_sd_dv: gpio-regulator-TLV71033 {
++ compatible = "regulator-gpio";
++ pinctrl-names = "default";
++ pinctrl-0 = <&vdd_sd_dv_pins_default>;
++ regulator-name = "tlv71033";
++ regulator-min-microvolt = <1800000>;
++ regulator-max-microvolt = <3300000>;
++ regulator-boot-on;
++ vin-supply = <&vsys_5v0>;
++ gpios = <&main_gpio0 118 GPIO_ACTIVE_HIGH>;
++ states = <1800000 0x0>,
++ <3300000 0x1>;
++ };
++
+ transceiver1: can-phy1 {
+ compatible = "ti,tcan1042";
+ #phy-cells = <0>;
+@@ -613,6 +638,12 @@ J721E_WKUP_IOPAD(0xd4, PIN_OUTPUT, 7) /* (G26) WKUP_GPIO0_9 */
+ >;
+ };
+
++ vdd_sd_dv_pins_default: vdd-sd-dv-default-pins {
++ pinctrl-single,pins = <
++ J721E_IOPAD(0x1dc, PIN_OUTPUT, 7) /* (Y1) SPI1_CLK.GPIO0_118 */
++ >;
++ };
++
+ wkup_uart0_pins_default: wkup-uart0-default-pins {
+ pinctrl-single,pins = <
+ J721E_WKUP_IOPAD(0xa0, PIN_INPUT, 0) /* (J29) WKUP_UART0_RXD */
+diff --git a/arch/arm64/boot/dts/ti/k3-j722s-evm.dts b/arch/arm64/boot/dts/ti/k3-j722s-evm.dts
+index adee69607fdbf5..2503580254ad9a 100644
+--- a/arch/arm64/boot/dts/ti/k3-j722s-evm.dts
++++ b/arch/arm64/boot/dts/ti/k3-j722s-evm.dts
+@@ -815,6 +815,10 @@ &serdes_ln_ctrl {
+ <J722S_SERDES1_LANE0_PCIE0_LANE0>;
+ };
+
++&serdes_wiz0 {
++ status = "okay";
++};
++
+ &serdes0 {
+ status = "okay";
+ serdes0_usb_link: phy@0 {
+@@ -826,6 +830,10 @@ serdes0_usb_link: phy@0 {
+ };
+ };
+
++&serdes_wiz1 {
++ status = "okay";
++};
++
+ &serdes1 {
+ status = "okay";
+ serdes1_pcie_link: phy@0 {
+diff --git a/arch/arm64/boot/dts/ti/k3-j722s-main.dtsi b/arch/arm64/boot/dts/ti/k3-j722s-main.dtsi
+index 6da7b3a2943c44..43204084568c38 100644
+--- a/arch/arm64/boot/dts/ti/k3-j722s-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j722s-main.dtsi
+@@ -32,6 +32,8 @@ serdes_wiz0: phy@f000000 {
+ assigned-clocks = <&k3_clks 279 1>;
+ assigned-clock-parents = <&k3_clks 279 5>;
+
++ status = "disabled";
++
+ serdes0: serdes@f000000 {
+ compatible = "ti,j721e-serdes-10g";
+ reg = <0x0f000000 0x00010000>;
+@@ -70,6 +72,8 @@ serdes_wiz1: phy@f010000 {
+ assigned-clocks = <&k3_clks 280 1>;
+ assigned-clock-parents = <&k3_clks 280 5>;
+
++ status = "disabled";
++
+ serdes1: serdes@f010000 {
+ compatible = "ti,j721e-serdes-10g";
+ reg = <0x0f010000 0x00010000>;
+diff --git a/arch/arm64/boot/dts/ti/k3-j784s4-j742s2-main-common.dtsi b/arch/arm64/boot/dts/ti/k3-j784s4-j742s2-main-common.dtsi
+index 1944616ab3579a..1fc0a11c5ab4a9 100644
+--- a/arch/arm64/boot/dts/ti/k3-j784s4-j742s2-main-common.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j784s4-j742s2-main-common.dtsi
+@@ -77,7 +77,7 @@ pcie1_ctrl: pcie1-ctrl@4074 {
+
+ serdes_ln_ctrl: mux-controller@4080 {
+ compatible = "reg-mux";
+- reg = <0x00004080 0x30>;
++ reg = <0x00004080 0x50>;
+ #mux-control-cells = <1>;
+ mux-reg-masks = <0x0 0x3>, <0x4 0x3>, /* SERDES0 lane0/1 select */
+ <0x8 0x3>, <0xc 0x3>, /* SERDES0 lane2/3 select */
+diff --git a/arch/um/Makefile b/arch/um/Makefile
+index 1d36a613aad83d..9ed792e565c917 100644
+--- a/arch/um/Makefile
++++ b/arch/um/Makefile
+@@ -154,5 +154,6 @@ MRPROPER_FILES += $(HOST_DIR)/include/generated
+ archclean:
+ @find . \( -name '*.bb' -o -name '*.bbg' -o -name '*.da' \
+ -o -name '*.gcov' \) -type f -print | xargs rm -f
++ $(Q)$(MAKE) -f $(srctree)/Makefile ARCH=$(HEADER_ARCH) clean
+
+ export HEADER_ARCH SUBARCH USER_CFLAGS CFLAGS_NO_HARDENING DEV_NULL_PATH
+diff --git a/drivers/char/tpm/tpm-buf.c b/drivers/char/tpm/tpm-buf.c
+index e49a19fea3bdf6..dc882fc9fa9efc 100644
+--- a/drivers/char/tpm/tpm-buf.c
++++ b/drivers/char/tpm/tpm-buf.c
+@@ -201,7 +201,7 @@ static void tpm_buf_read(struct tpm_buf *buf, off_t *offset, size_t count, void
+ */
+ u8 tpm_buf_read_u8(struct tpm_buf *buf, off_t *offset)
+ {
+- u8 value;
++ u8 value = 0;
+
+ tpm_buf_read(buf, offset, sizeof(value), &value);
+
+@@ -218,7 +218,7 @@ EXPORT_SYMBOL_GPL(tpm_buf_read_u8);
+ */
+ u16 tpm_buf_read_u16(struct tpm_buf *buf, off_t *offset)
+ {
+- u16 value;
++ u16 value = 0;
+
+ tpm_buf_read(buf, offset, sizeof(value), &value);
+
+@@ -235,7 +235,7 @@ EXPORT_SYMBOL_GPL(tpm_buf_read_u16);
+ */
+ u32 tpm_buf_read_u32(struct tpm_buf *buf, off_t *offset)
+ {
+- u32 value;
++ u32 value = 0;
+
+ tpm_buf_read(buf, offset, sizeof(value), &value);
+
+diff --git a/drivers/dma/idxd/cdev.c b/drivers/dma/idxd/cdev.c
+index cd57067e821802..6d12033649f817 100644
+--- a/drivers/dma/idxd/cdev.c
++++ b/drivers/dma/idxd/cdev.c
+@@ -222,7 +222,7 @@ static int idxd_cdev_open(struct inode *inode, struct file *filp)
+ struct idxd_wq *wq;
+ struct device *dev, *fdev;
+ int rc = 0;
+- struct iommu_sva *sva;
++ struct iommu_sva *sva = NULL;
+ unsigned int pasid;
+ struct idxd_cdev *idxd_cdev;
+
+@@ -317,7 +317,7 @@ static int idxd_cdev_open(struct inode *inode, struct file *filp)
+ if (device_user_pasid_enabled(idxd))
+ idxd_xa_pasid_remove(ctx);
+ failed_get_pasid:
+- if (device_user_pasid_enabled(idxd))
++ if (device_user_pasid_enabled(idxd) && !IS_ERR_OR_NULL(sva))
+ iommu_sva_unbind_device(sva);
+ failed:
+ mutex_unlock(&wq->wq_lock);
+diff --git a/drivers/gpio/gpio-virtuser.c b/drivers/gpio/gpio-virtuser.c
+index e89f299f214009..dcecb7a2591176 100644
+--- a/drivers/gpio/gpio-virtuser.c
++++ b/drivers/gpio/gpio-virtuser.c
+@@ -400,10 +400,15 @@ static ssize_t gpio_virtuser_direction_do_write(struct file *file,
+ char buf[32], *trimmed;
+ int ret, dir, val = 0;
+
+- ret = simple_write_to_buffer(buf, sizeof(buf), ppos, user_buf, count);
++ if (count >= sizeof(buf))
++ return -EINVAL;
++
++ ret = simple_write_to_buffer(buf, sizeof(buf) - 1, ppos, user_buf, count);
+ if (ret < 0)
+ return ret;
+
++ buf[ret] = '\0';
++
+ trimmed = strim(buf);
+
+ if (strcmp(trimmed, "input") == 0) {
+@@ -622,12 +627,15 @@ static ssize_t gpio_virtuser_consumer_write(struct file *file,
+ char buf[GPIO_VIRTUSER_NAME_BUF_LEN + 2];
+ int ret;
+
++ if (count >= sizeof(buf))
++ return -EINVAL;
++
+ ret = simple_write_to_buffer(buf, GPIO_VIRTUSER_NAME_BUF_LEN, ppos,
+ user_buf, count);
+ if (ret < 0)
+ return ret;
+
+- buf[strlen(buf) - 1] = '\0';
++ buf[ret] = '\0';
+
+ ret = gpiod_set_consumer_name(data->ad.desc, buf);
+ if (ret)
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml21/dml21_translation_helper.c b/drivers/gpu/drm/amd/display/dc/dml2/dml21/dml21_translation_helper.c
+index 0c8ec30ea67268..731fbd4bc600b4 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/dml21/dml21_translation_helper.c
++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml21/dml21_translation_helper.c
+@@ -910,7 +910,7 @@ static void populate_dml21_plane_config_from_plane_state(struct dml2_context *dm
+ }
+
+ //TODO : Could be possibly moved to a common helper layer.
+-static bool dml21_wrapper_get_plane_id(const struct dc_state *context, const struct dc_plane_state *plane, unsigned int *plane_id)
++static bool dml21_wrapper_get_plane_id(const struct dc_state *context, unsigned int stream_id, const struct dc_plane_state *plane, unsigned int *plane_id)
+ {
+ int i, j;
+
+@@ -918,10 +918,12 @@ static bool dml21_wrapper_get_plane_id(const struct dc_state *context, const str
+ return false;
+
+ for (i = 0; i < context->stream_count; i++) {
+- for (j = 0; j < context->stream_status[i].plane_count; j++) {
+- if (context->stream_status[i].plane_states[j] == plane) {
+- *plane_id = (i << 16) | j;
+- return true;
++ if (context->streams[i]->stream_id == stream_id) {
++ for (j = 0; j < context->stream_status[i].plane_count; j++) {
++ if (context->stream_status[i].plane_states[j] == plane) {
++ *plane_id = (i << 16) | j;
++ return true;
++ }
+ }
+ }
+ }
+@@ -944,14 +946,14 @@ static unsigned int map_stream_to_dml21_display_cfg(const struct dml2_context *d
+ return location;
+ }
+
+-static unsigned int map_plane_to_dml21_display_cfg(const struct dml2_context *dml_ctx,
++static unsigned int map_plane_to_dml21_display_cfg(const struct dml2_context *dml_ctx, unsigned int stream_id,
+ const struct dc_plane_state *plane, const struct dc_state *context)
+ {
+ unsigned int plane_id;
+ int i = 0;
+ int location = -1;
+
+- if (!dml21_wrapper_get_plane_id(context, plane, &plane_id)) {
++ if (!dml21_wrapper_get_plane_id(context, stream_id, plane, &plane_id)) {
+ ASSERT(false);
+ return -1;
+ }
+@@ -1037,7 +1039,7 @@ bool dml21_map_dc_state_into_dml_display_cfg(const struct dc *in_dc, struct dc_s
+ dml_dispcfg->plane_descriptors[disp_cfg_plane_location].stream_index = disp_cfg_stream_location;
+ } else {
+ for (plane_index = 0; plane_index < context->stream_status[stream_index].plane_count; plane_index++) {
+- disp_cfg_plane_location = map_plane_to_dml21_display_cfg(dml_ctx, context->stream_status[stream_index].plane_states[plane_index], context);
++ disp_cfg_plane_location = map_plane_to_dml21_display_cfg(dml_ctx, context->streams[stream_index]->stream_id, context->stream_status[stream_index].plane_states[plane_index], context);
+
+ if (disp_cfg_plane_location < 0)
+ disp_cfg_plane_location = dml_dispcfg->num_planes++;
+@@ -1048,7 +1050,7 @@ bool dml21_map_dc_state_into_dml_display_cfg(const struct dc *in_dc, struct dc_s
+ populate_dml21_plane_config_from_plane_state(dml_ctx, &dml_dispcfg->plane_descriptors[disp_cfg_plane_location], context->stream_status[stream_index].plane_states[plane_index], context, stream_index);
+ dml_dispcfg->plane_descriptors[disp_cfg_plane_location].stream_index = disp_cfg_stream_location;
+
+- if (dml21_wrapper_get_plane_id(context, context->stream_status[stream_index].plane_states[plane_index], &dml_ctx->v21.dml_to_dc_pipe_mapping.disp_cfg_to_plane_id[disp_cfg_plane_location]))
++ if (dml21_wrapper_get_plane_id(context, context->streams[stream_index]->stream_id, context->stream_status[stream_index].plane_states[plane_index], &dml_ctx->v21.dml_to_dc_pipe_mapping.disp_cfg_to_plane_id[disp_cfg_plane_location]))
+ dml_ctx->v21.dml_to_dc_pipe_mapping.disp_cfg_to_plane_id_valid[disp_cfg_plane_location] = true;
+
+ /* apply forced pstate policy */
+diff --git a/drivers/gpu/drm/amd/display/dc/link/link_dpms.c b/drivers/gpu/drm/amd/display/dc/link/link_dpms.c
+index ec7de9c01fab01..e95ec72b4096c9 100644
+--- a/drivers/gpu/drm/amd/display/dc/link/link_dpms.c
++++ b/drivers/gpu/drm/amd/display/dc/link/link_dpms.c
+@@ -148,6 +148,7 @@ void link_blank_dp_stream(struct dc_link *link, bool hw_init)
+ void link_set_all_streams_dpms_off_for_link(struct dc_link *link)
+ {
+ struct pipe_ctx *pipes[MAX_PIPES];
++ struct dc_stream_state *streams[MAX_PIPES];
+ struct dc_state *state = link->dc->current_state;
+ uint8_t count;
+ int i;
+@@ -160,10 +161,18 @@ void link_set_all_streams_dpms_off_for_link(struct dc_link *link)
+
+ link_get_master_pipes_with_dpms_on(link, state, &count, pipes);
+
++ /* The subsequent call to dc_commit_updates_for_stream for a full update
++ * will release the current state and swap to a new state. Releasing the
++ * current state results in the stream pointers in the pipe_ctx structs
++ * to be zero'd. Hence, cache all streams prior to dc_commit_updates_for_stream.
++ */
++ for (i = 0; i < count; i++)
++ streams[i] = pipes[i]->stream;
++
+ for (i = 0; i < count; i++) {
+- stream_update.stream = pipes[i]->stream;
++ stream_update.stream = streams[i];
+ dc_commit_updates_for_stream(link->ctx->dc, NULL, 0,
+- pipes[i]->stream, &stream_update,
++ streams[i], &stream_update,
+ state);
+ }
+
+diff --git a/drivers/gpu/drm/xe/regs/xe_gt_regs.h b/drivers/gpu/drm/xe/regs/xe_gt_regs.h
+index d0ea8a55fd9c22..ab95d3545a72c7 100644
+--- a/drivers/gpu/drm/xe/regs/xe_gt_regs.h
++++ b/drivers/gpu/drm/xe/regs/xe_gt_regs.h
+@@ -157,6 +157,7 @@
+ #define XEHPG_SC_INSTDONE_EXTRA2 XE_REG_MCR(0x7108)
+
+ #define COMMON_SLICE_CHICKEN4 XE_REG(0x7300, XE_REG_OPTION_MASKED)
++#define SBE_PUSH_CONSTANT_BEHIND_FIX_ENABLE REG_BIT(12)
+ #define DISABLE_TDC_LOAD_BALANCING_CALC REG_BIT(6)
+
+ #define COMMON_SLICE_CHICKEN3 XE_REG(0x7304, XE_REG_OPTION_MASKED)
+diff --git a/drivers/gpu/drm/xe/xe_lrc.c b/drivers/gpu/drm/xe/xe_lrc.c
+index 2a953c4f7d5ddf..5d7629bb6b8ddc 100644
+--- a/drivers/gpu/drm/xe/xe_lrc.c
++++ b/drivers/gpu/drm/xe/xe_lrc.c
+@@ -864,7 +864,7 @@ static void *empty_lrc_data(struct xe_hw_engine *hwe)
+
+ static void xe_lrc_set_ppgtt(struct xe_lrc *lrc, struct xe_vm *vm)
+ {
+- u64 desc = xe_vm_pdp4_descriptor(vm, lrc->tile);
++ u64 desc = xe_vm_pdp4_descriptor(vm, gt_to_tile(lrc->gt));
+
+ xe_lrc_write_ctx_reg(lrc, CTX_PDP0_UDW, upper_32_bits(desc));
+ xe_lrc_write_ctx_reg(lrc, CTX_PDP0_LDW, lower_32_bits(desc));
+@@ -895,6 +895,7 @@ static int xe_lrc_init(struct xe_lrc *lrc, struct xe_hw_engine *hwe,
+ int err;
+
+ kref_init(&lrc->refcount);
++ lrc->gt = gt;
+ lrc->flags = 0;
+ lrc_size = ring_size + xe_gt_lrc_size(gt, hwe->class);
+ if (xe_gt_has_indirect_ring_state(gt))
+@@ -913,7 +914,6 @@ static int xe_lrc_init(struct xe_lrc *lrc, struct xe_hw_engine *hwe,
+ return PTR_ERR(lrc->bo);
+
+ lrc->size = lrc_size;
+- lrc->tile = gt_to_tile(hwe->gt);
+ lrc->ring.size = ring_size;
+ lrc->ring.tail = 0;
+ lrc->ctx_timestamp = 0;
+diff --git a/drivers/gpu/drm/xe/xe_lrc_types.h b/drivers/gpu/drm/xe/xe_lrc_types.h
+index 71ecb453f811a4..cd38586ae98932 100644
+--- a/drivers/gpu/drm/xe/xe_lrc_types.h
++++ b/drivers/gpu/drm/xe/xe_lrc_types.h
+@@ -25,8 +25,8 @@ struct xe_lrc {
+ /** @size: size of lrc including any indirect ring state page */
+ u32 size;
+
+- /** @tile: tile which this LRC belongs to */
+- struct xe_tile *tile;
++ /** @gt: gt which this LRC belongs to */
++ struct xe_gt *gt;
+
+ /** @flags: LRC flags */
+ #define XE_LRC_FLAG_INDIRECT_RING_STATE 0x1
+diff --git a/drivers/gpu/drm/xe/xe_wa.c b/drivers/gpu/drm/xe/xe_wa.c
+index 65bfb2f894d00e..56257430b3642f 100644
+--- a/drivers/gpu/drm/xe/xe_wa.c
++++ b/drivers/gpu/drm/xe/xe_wa.c
+@@ -801,6 +801,10 @@ static const struct xe_rtp_entry_sr lrc_was[] = {
+ XE_RTP_RULES(GRAPHICS_VERSION(2001), ENGINE_CLASS(RENDER)),
+ XE_RTP_ACTIONS(SET(CHICKEN_RASTER_1, DIS_CLIP_NEGATIVE_BOUNDING_BOX))
+ },
++ { XE_RTP_NAME("22021007897"),
++ XE_RTP_RULES(GRAPHICS_VERSION(2001), ENGINE_CLASS(RENDER)),
++ XE_RTP_ACTIONS(SET(COMMON_SLICE_CHICKEN4, SBE_PUSH_CONSTANT_BEHIND_FIX_ENABLE))
++ },
+
+ /* Xe3_LPG */
+ { XE_RTP_NAME("14021490052"),
+diff --git a/drivers/hid/amd-sfh-hid/sfh1_1/amd_sfh_init.c b/drivers/hid/amd-sfh-hid/sfh1_1/amd_sfh_init.c
+index a02969fd50686d..08acc707938d35 100644
+--- a/drivers/hid/amd-sfh-hid/sfh1_1/amd_sfh_init.c
++++ b/drivers/hid/amd-sfh-hid/sfh1_1/amd_sfh_init.c
+@@ -83,6 +83,9 @@ static int amd_sfh_hid_client_deinit(struct amd_mp2_dev *privdata)
+ case ALS_IDX:
+ privdata->dev_en.is_als_present = false;
+ break;
++ case SRA_IDX:
++ privdata->dev_en.is_sra_present = false;
++ break;
+ }
+
+ if (cl_data->sensor_sts[i] == SENSOR_ENABLED) {
+@@ -235,6 +238,8 @@ static int amd_sfh1_1_hid_client_init(struct amd_mp2_dev *privdata)
+ cleanup:
+ amd_sfh_hid_client_deinit(privdata);
+ for (i = 0; i < cl_data->num_hid_devices; i++) {
++ if (cl_data->sensor_idx[i] == SRA_IDX)
++ continue;
+ devm_kfree(dev, cl_data->feature_report[i]);
+ devm_kfree(dev, in_data->input_report[i]);
+ devm_kfree(dev, cl_data->report_descr[i]);
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index 288a2b864cc41d..1062731315a2a5 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -41,6 +41,10 @@
+ #define USB_VENDOR_ID_ACTIONSTAR 0x2101
+ #define USB_DEVICE_ID_ACTIONSTAR_1011 0x1011
+
++#define USB_VENDOR_ID_ADATA_XPG 0x125f
++#define USB_VENDOR_ID_ADATA_XPG_WL_GAMING_MOUSE 0x7505
++#define USB_VENDOR_ID_ADATA_XPG_WL_GAMING_MOUSE_DONGLE 0x7506
++
+ #define USB_VENDOR_ID_ADS_TECH 0x06e1
+ #define USB_DEVICE_ID_ADS_TECH_RADIO_SI470X 0xa155
+
+diff --git a/drivers/hid/hid-quirks.c b/drivers/hid/hid-quirks.c
+index 5d7a418ccdbecf..73979643315bfd 100644
+--- a/drivers/hid/hid-quirks.c
++++ b/drivers/hid/hid-quirks.c
+@@ -27,6 +27,8 @@
+ static const struct hid_device_id hid_quirks[] = {
+ { HID_USB_DEVICE(USB_VENDOR_ID_AASHIMA, USB_DEVICE_ID_AASHIMA_GAMEPAD), HID_QUIRK_BADPAD },
+ { HID_USB_DEVICE(USB_VENDOR_ID_AASHIMA, USB_DEVICE_ID_AASHIMA_PREDATOR), HID_QUIRK_BADPAD },
++ { HID_USB_DEVICE(USB_VENDOR_ID_ADATA_XPG, USB_VENDOR_ID_ADATA_XPG_WL_GAMING_MOUSE), HID_QUIRK_ALWAYS_POLL },
++ { HID_USB_DEVICE(USB_VENDOR_ID_ADATA_XPG, USB_VENDOR_ID_ADATA_XPG_WL_GAMING_MOUSE_DONGLE), HID_QUIRK_ALWAYS_POLL },
+ { HID_USB_DEVICE(USB_VENDOR_ID_AFATECH, USB_DEVICE_ID_AFATECH_AF9016), HID_QUIRK_FULLSPEED_INTERVAL },
+ { HID_USB_DEVICE(USB_VENDOR_ID_AIREN, USB_DEVICE_ID_AIREN_SLIMPLUS), HID_QUIRK_NOGET },
+ { HID_USB_DEVICE(USB_VENDOR_ID_AKAI_09E8, USB_DEVICE_ID_AKAI_09E8_MIDIMIX), HID_QUIRK_NO_INIT_REPORTS },
+diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
+index 3a2804a98203b5..6e59b2c4c39b7f 100644
+--- a/drivers/iommu/iommu.c
++++ b/drivers/iommu/iommu.c
+@@ -273,6 +273,8 @@ int iommu_device_register(struct iommu_device *iommu,
+ err = bus_iommu_probe(iommu_buses[i]);
+ if (err)
+ iommu_device_unregister(iommu);
++ else
++ WRITE_ONCE(iommu->ready, true);
+ return err;
+ }
+ EXPORT_SYMBOL_GPL(iommu_device_register);
+@@ -2801,31 +2803,39 @@ bool iommu_default_passthrough(void)
+ }
+ EXPORT_SYMBOL_GPL(iommu_default_passthrough);
+
+-const struct iommu_ops *iommu_ops_from_fwnode(const struct fwnode_handle *fwnode)
++static const struct iommu_device *iommu_from_fwnode(const struct fwnode_handle *fwnode)
+ {
+- const struct iommu_ops *ops = NULL;
+- struct iommu_device *iommu;
++ const struct iommu_device *iommu, *ret = NULL;
+
+ spin_lock(&iommu_device_lock);
+ list_for_each_entry(iommu, &iommu_device_list, list)
+ if (iommu->fwnode == fwnode) {
+- ops = iommu->ops;
++ ret = iommu;
+ break;
+ }
+ spin_unlock(&iommu_device_lock);
+- return ops;
++ return ret;
++}
++
++const struct iommu_ops *iommu_ops_from_fwnode(const struct fwnode_handle *fwnode)
++{
++ const struct iommu_device *iommu = iommu_from_fwnode(fwnode);
++
++ return iommu ? iommu->ops : NULL;
+ }
+
+ int iommu_fwspec_init(struct device *dev, struct fwnode_handle *iommu_fwnode)
+ {
+- const struct iommu_ops *ops = iommu_ops_from_fwnode(iommu_fwnode);
++ const struct iommu_device *iommu = iommu_from_fwnode(iommu_fwnode);
+ struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
+
+- if (!ops)
++ if (!iommu)
+ return driver_deferred_probe_check_state(dev);
++ if (!dev->iommu && !READ_ONCE(iommu->ready))
++ return -EPROBE_DEFER;
+
+ if (fwspec)
+- return ops == iommu_fwspec_ops(fwspec) ? 0 : -EINVAL;
++ return iommu->ops == iommu_fwspec_ops(fwspec) ? 0 : -EINVAL;
+
+ if (!dev_iommu_get(dev))
+ return -ENOMEM;
+diff --git a/drivers/net/can/kvaser_pciefd.c b/drivers/net/can/kvaser_pciefd.c
+index 4f177ca1b998e8..efc40125593d99 100644
+--- a/drivers/net/can/kvaser_pciefd.c
++++ b/drivers/net/can/kvaser_pciefd.c
+@@ -1673,24 +1673,28 @@ static int kvaser_pciefd_read_buffer(struct kvaser_pciefd *pcie, int dma_buf)
+ return res;
+ }
+
+-static u32 kvaser_pciefd_receive_irq(struct kvaser_pciefd *pcie)
++static void kvaser_pciefd_receive_irq(struct kvaser_pciefd *pcie)
+ {
++ void __iomem *srb_cmd_reg = KVASER_PCIEFD_SRB_ADDR(pcie) + KVASER_PCIEFD_SRB_CMD_REG;
+ u32 irq = ioread32(KVASER_PCIEFD_SRB_ADDR(pcie) + KVASER_PCIEFD_SRB_IRQ_REG);
+
+- if (irq & KVASER_PCIEFD_SRB_IRQ_DPD0)
++ iowrite32(irq, KVASER_PCIEFD_SRB_ADDR(pcie) + KVASER_PCIEFD_SRB_IRQ_REG);
++
++ if (irq & KVASER_PCIEFD_SRB_IRQ_DPD0) {
+ kvaser_pciefd_read_buffer(pcie, 0);
++ iowrite32(KVASER_PCIEFD_SRB_CMD_RDB0, srb_cmd_reg); /* Rearm buffer */
++ }
+
+- if (irq & KVASER_PCIEFD_SRB_IRQ_DPD1)
++ if (irq & KVASER_PCIEFD_SRB_IRQ_DPD1) {
+ kvaser_pciefd_read_buffer(pcie, 1);
++ iowrite32(KVASER_PCIEFD_SRB_CMD_RDB1, srb_cmd_reg); /* Rearm buffer */
++ }
+
+ if (unlikely(irq & KVASER_PCIEFD_SRB_IRQ_DOF0 ||
+ irq & KVASER_PCIEFD_SRB_IRQ_DOF1 ||
+ irq & KVASER_PCIEFD_SRB_IRQ_DUF0 ||
+ irq & KVASER_PCIEFD_SRB_IRQ_DUF1))
+ dev_err(&pcie->pci->dev, "DMA IRQ error 0x%08X\n", irq);
+-
+- iowrite32(irq, KVASER_PCIEFD_SRB_ADDR(pcie) + KVASER_PCIEFD_SRB_IRQ_REG);
+- return irq;
+ }
+
+ static void kvaser_pciefd_transmit_irq(struct kvaser_pciefd_can *can)
+@@ -1718,29 +1722,22 @@ static irqreturn_t kvaser_pciefd_irq_handler(int irq, void *dev)
+ struct kvaser_pciefd *pcie = (struct kvaser_pciefd *)dev;
+ const struct kvaser_pciefd_irq_mask *irq_mask = pcie->driver_data->irq_mask;
+ u32 pci_irq = ioread32(KVASER_PCIEFD_PCI_IRQ_ADDR(pcie));
+- u32 srb_irq = 0;
+- u32 srb_release = 0;
+ int i;
+
+ if (!(pci_irq & irq_mask->all))
+ return IRQ_NONE;
+
++ iowrite32(0, KVASER_PCIEFD_PCI_IEN_ADDR(pcie));
++
+ if (pci_irq & irq_mask->kcan_rx0)
+- srb_irq = kvaser_pciefd_receive_irq(pcie);
++ kvaser_pciefd_receive_irq(pcie);
+
+ for (i = 0; i < pcie->nr_channels; i++) {
+ if (pci_irq & irq_mask->kcan_tx[i])
+ kvaser_pciefd_transmit_irq(pcie->can[i]);
+ }
+
+- if (srb_irq & KVASER_PCIEFD_SRB_IRQ_DPD0)
+- srb_release |= KVASER_PCIEFD_SRB_CMD_RDB0;
+-
+- if (srb_irq & KVASER_PCIEFD_SRB_IRQ_DPD1)
+- srb_release |= KVASER_PCIEFD_SRB_CMD_RDB1;
+-
+- if (srb_release)
+- iowrite32(srb_release, KVASER_PCIEFD_SRB_ADDR(pcie) + KVASER_PCIEFD_SRB_CMD_REG);
++ iowrite32(irq_mask->all, KVASER_PCIEFD_PCI_IEN_ADDR(pcie));
+
+ return IRQ_HANDLED;
+ }
+@@ -1760,13 +1757,22 @@ static void kvaser_pciefd_teardown_can_ctrls(struct kvaser_pciefd *pcie)
+ }
+ }
+
++static void kvaser_pciefd_disable_irq_srcs(struct kvaser_pciefd *pcie)
++{
++ unsigned int i;
++
++ /* Masking PCI_IRQ is insufficient as running ISR will unmask it */
++ iowrite32(0, KVASER_PCIEFD_SRB_ADDR(pcie) + KVASER_PCIEFD_SRB_IEN_REG);
++ for (i = 0; i < pcie->nr_channels; ++i)
++ iowrite32(0, pcie->can[i]->reg_base + KVASER_PCIEFD_KCAN_IEN_REG);
++}
++
+ static int kvaser_pciefd_probe(struct pci_dev *pdev,
+ const struct pci_device_id *id)
+ {
+ int ret;
+ struct kvaser_pciefd *pcie;
+ const struct kvaser_pciefd_irq_mask *irq_mask;
+- void __iomem *irq_en_base;
+
+ pcie = devm_kzalloc(&pdev->dev, sizeof(*pcie), GFP_KERNEL);
+ if (!pcie)
+@@ -1832,8 +1838,7 @@ static int kvaser_pciefd_probe(struct pci_dev *pdev,
+ KVASER_PCIEFD_SRB_ADDR(pcie) + KVASER_PCIEFD_SRB_IEN_REG);
+
+ /* Enable PCI interrupts */
+- irq_en_base = KVASER_PCIEFD_PCI_IEN_ADDR(pcie);
+- iowrite32(irq_mask->all, irq_en_base);
++ iowrite32(irq_mask->all, KVASER_PCIEFD_PCI_IEN_ADDR(pcie));
+ /* Ready the DMA buffers */
+ iowrite32(KVASER_PCIEFD_SRB_CMD_RDB0,
+ KVASER_PCIEFD_SRB_ADDR(pcie) + KVASER_PCIEFD_SRB_CMD_REG);
+@@ -1847,8 +1852,7 @@ static int kvaser_pciefd_probe(struct pci_dev *pdev,
+ return 0;
+
+ err_free_irq:
+- /* Disable PCI interrupts */
+- iowrite32(0, irq_en_base);
++ kvaser_pciefd_disable_irq_srcs(pcie);
+ free_irq(pcie->pci->irq, pcie);
+
+ err_pci_free_irq_vectors:
+@@ -1871,35 +1875,26 @@ static int kvaser_pciefd_probe(struct pci_dev *pdev,
+ return ret;
+ }
+
+-static void kvaser_pciefd_remove_all_ctrls(struct kvaser_pciefd *pcie)
+-{
+- int i;
+-
+- for (i = 0; i < pcie->nr_channels; i++) {
+- struct kvaser_pciefd_can *can = pcie->can[i];
+-
+- if (can) {
+- iowrite32(0, can->reg_base + KVASER_PCIEFD_KCAN_IEN_REG);
+- unregister_candev(can->can.dev);
+- del_timer(&can->bec_poll_timer);
+- kvaser_pciefd_pwm_stop(can);
+- free_candev(can->can.dev);
+- }
+- }
+-}
+-
+ static void kvaser_pciefd_remove(struct pci_dev *pdev)
+ {
+ struct kvaser_pciefd *pcie = pci_get_drvdata(pdev);
++ unsigned int i;
+
+- kvaser_pciefd_remove_all_ctrls(pcie);
++ for (i = 0; i < pcie->nr_channels; ++i) {
++ struct kvaser_pciefd_can *can = pcie->can[i];
+
+- /* Disable interrupts */
+- iowrite32(0, KVASER_PCIEFD_SRB_ADDR(pcie) + KVASER_PCIEFD_SRB_CTRL_REG);
+- iowrite32(0, KVASER_PCIEFD_PCI_IEN_ADDR(pcie));
++ unregister_candev(can->can.dev);
++ del_timer(&can->bec_poll_timer);
++ kvaser_pciefd_pwm_stop(can);
++ }
+
++ kvaser_pciefd_disable_irq_srcs(pcie);
+ free_irq(pcie->pci->irq, pcie);
+ pci_free_irq_vectors(pcie->pci);
++
++ for (i = 0; i < pcie->nr_channels; ++i)
++ free_candev(pcie->can[i]->can.dev);
++
+ pci_iounmap(pdev, pcie->reg_base);
+ pci_release_regions(pdev);
+ pci_disable_device(pdev);
+diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+index cac67babe45593..11411074071656 100644
+--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
++++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+@@ -2775,7 +2775,7 @@ static int am65_cpsw_nuss_init_slave_ports(struct am65_cpsw_common *common)
+ port->slave.mac_addr);
+ if (!is_valid_ether_addr(port->slave.mac_addr)) {
+ eth_random_addr(port->slave.mac_addr);
+- dev_err(dev, "Use random MAC address\n");
++ dev_info(dev, "Use random MAC address\n");
+ }
+ }
+
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index a27149e37a9881..8863c9fcb4aabd 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -2059,7 +2059,21 @@ static bool nvme_update_disk_info(struct nvme_ns *ns, struct nvme_id_ns *id,
+ if (id->nsfeat & NVME_NS_FEAT_ATOMICS && id->nawupf)
+ atomic_bs = (1 + le16_to_cpu(id->nawupf)) * bs;
+ else
+- atomic_bs = (1 + ns->ctrl->subsys->awupf) * bs;
++ atomic_bs = (1 + ns->ctrl->awupf) * bs;
++
++ /*
++ * Set subsystem atomic bs.
++ */
++ if (ns->ctrl->subsys->atomic_bs) {
++ if (atomic_bs != ns->ctrl->subsys->atomic_bs) {
++ dev_err_ratelimited(ns->ctrl->device,
++ "%s: Inconsistent Atomic Write Size, Namespace will not be added: Subsystem=%d bytes, Controller/Namespace=%d bytes\n",
++ ns->disk ? ns->disk->disk_name : "?",
++ ns->ctrl->subsys->atomic_bs,
++ atomic_bs);
++ }
++ } else
++ ns->ctrl->subsys->atomic_bs = atomic_bs;
+
+ nvme_update_atomic_write_disk_info(ns, id, lim, bs, atomic_bs);
+ }
+@@ -2201,6 +2215,17 @@ static int nvme_update_ns_info_block(struct nvme_ns *ns,
+ nvme_set_chunk_sectors(ns, id, &lim);
+ if (!nvme_update_disk_info(ns, id, &lim))
+ capacity = 0;
++
++ /*
++ * Validate the max atomic write size fits within the subsystem's
++ * atomic write capabilities.
++ */
++ if (lim.atomic_write_hw_max > ns->ctrl->subsys->atomic_bs) {
++ blk_mq_unfreeze_queue(ns->disk->queue, memflags);
++ ret = -ENXIO;
++ goto out;
++ }
++
+ nvme_config_discard(ns, &lim);
+ if (IS_ENABLED(CONFIG_BLK_DEV_ZONED) &&
+ ns->head->ids.csi == NVME_CSI_ZNS)
+@@ -3031,7 +3056,6 @@ static int nvme_init_subsystem(struct nvme_ctrl *ctrl, struct nvme_id_ctrl *id)
+ kfree(subsys);
+ return -EINVAL;
+ }
+- subsys->awupf = le16_to_cpu(id->awupf);
+ nvme_mpath_default_iopolicy(subsys);
+
+ subsys->dev.class = &nvme_subsys_class;
+@@ -3441,7 +3465,7 @@ static int nvme_init_identify(struct nvme_ctrl *ctrl)
+ dev_pm_qos_expose_latency_tolerance(ctrl->device);
+ else if (!ctrl->apst_enabled && prev_apst_enabled)
+ dev_pm_qos_hide_latency_tolerance(ctrl->device);
+-
++ ctrl->awupf = le16_to_cpu(id->awupf);
+ out_free:
+ kfree(id);
+ return ret;
+diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
+index f39823cde62c72..ac17e650327f1a 100644
+--- a/drivers/nvme/host/multipath.c
++++ b/drivers/nvme/host/multipath.c
+@@ -638,7 +638,8 @@ int nvme_mpath_alloc_disk(struct nvme_ctrl *ctrl, struct nvme_ns_head *head)
+
+ blk_set_stacking_limits(&lim);
+ lim.dma_alignment = 3;
+- lim.features |= BLK_FEAT_IO_STAT | BLK_FEAT_NOWAIT | BLK_FEAT_POLL;
++ lim.features |= BLK_FEAT_IO_STAT | BLK_FEAT_NOWAIT |
++ BLK_FEAT_POLL | BLK_FEAT_ATOMIC_WRITES;
+ if (head->ids.csi == NVME_CSI_ZNS)
+ lim.features |= BLK_FEAT_ZONED;
+
+diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
+index 7be92d07430e95..3804f91b194206 100644
+--- a/drivers/nvme/host/nvme.h
++++ b/drivers/nvme/host/nvme.h
+@@ -410,6 +410,7 @@ struct nvme_ctrl {
+
+ enum nvme_ctrl_type cntrltype;
+ enum nvme_dctype dctype;
++ u16 awupf; /* 0's based value. */
+ };
+
+ static inline enum nvme_ctrl_state nvme_ctrl_state(struct nvme_ctrl *ctrl)
+@@ -442,11 +443,11 @@ struct nvme_subsystem {
+ u8 cmic;
+ enum nvme_subsys_type subtype;
+ u16 vendor_id;
+- u16 awupf; /* 0's based awupf value. */
+ struct ida ns_ida;
+ #ifdef CONFIG_NVME_MULTIPATH
+ enum nvme_iopolicy iopolicy;
+ #endif
++ u32 atomic_bs;
+ };
+
+ /*
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index abd097eba6623f..28f560f86e9126 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -3742,6 +3742,8 @@ static const struct pci_device_id nvme_id_table[] = {
+ .driver_data = NVME_QUIRK_NO_DEEPEST_PS, },
+ { PCI_DEVICE(0x1e49, 0x0041), /* ZHITAI TiPro7000 NVMe SSD */
+ .driver_data = NVME_QUIRK_NO_DEEPEST_PS, },
++ { PCI_DEVICE(0x025e, 0xf1ac), /* SOLIDIGM P44 pro SSDPFKKW020X7 */
++ .driver_data = NVME_QUIRK_NO_DEEPEST_PS, },
+ { PCI_DEVICE(0xc0a9, 0x540a), /* Crucial P2 */
+ .driver_data = NVME_QUIRK_BOGUS_NID, },
+ { PCI_DEVICE(0x1d97, 0x2263), /* Lexar NM610 */
+diff --git a/drivers/nvme/target/pci-epf.c b/drivers/nvme/target/pci-epf.c
+index fbc167f47d8a67..17c0e5ee731a40 100644
+--- a/drivers/nvme/target/pci-epf.c
++++ b/drivers/nvme/target/pci-epf.c
+@@ -596,9 +596,6 @@ static bool nvmet_pci_epf_should_raise_irq(struct nvmet_pci_epf_ctrl *ctrl,
+ struct nvmet_pci_epf_irq_vector *iv = cq->iv;
+ bool ret;
+
+- if (!test_bit(NVMET_PCI_EPF_Q_IRQ_ENABLED, &cq->flags))
+- return false;
+-
+ /* IRQ coalescing for the admin queue is not allowed. */
+ if (!cq->qid)
+ return true;
+@@ -625,7 +622,8 @@ static void nvmet_pci_epf_raise_irq(struct nvmet_pci_epf_ctrl *ctrl,
+ struct pci_epf *epf = nvme_epf->epf;
+ int ret = 0;
+
+- if (!test_bit(NVMET_PCI_EPF_Q_LIVE, &cq->flags))
++ if (!test_bit(NVMET_PCI_EPF_Q_LIVE, &cq->flags) ||
++ !test_bit(NVMET_PCI_EPF_Q_IRQ_ENABLED, &cq->flags))
+ return;
+
+ mutex_lock(&ctrl->irq_lock);
+@@ -656,7 +654,9 @@ static void nvmet_pci_epf_raise_irq(struct nvmet_pci_epf_ctrl *ctrl,
+ }
+
+ if (ret)
+- dev_err(ctrl->dev, "Failed to raise IRQ (err=%d)\n", ret);
++ dev_err_ratelimited(ctrl->dev,
++ "CQ[%u]: Failed to raise IRQ (err=%d)\n",
++ cq->qid, ret);
+
+ unlock:
+ mutex_unlock(&ctrl->irq_lock);
+diff --git a/drivers/perf/arm-cmn.c b/drivers/perf/arm-cmn.c
+index ef959e66db7c33..0ba0f545b118ff 100644
+--- a/drivers/perf/arm-cmn.c
++++ b/drivers/perf/arm-cmn.c
+@@ -727,8 +727,8 @@ static umode_t arm_cmn_event_attr_is_visible(struct kobject *kobj,
+
+ if ((chan == 5 && cmn->rsp_vc_num < 2) ||
+ (chan == 6 && cmn->dat_vc_num < 2) ||
+- (chan == 7 && cmn->snp_vc_num < 2) ||
+- (chan == 8 && cmn->req_vc_num < 2))
++ (chan == 7 && cmn->req_vc_num < 2) ||
++ (chan == 8 && cmn->snp_vc_num < 2))
+ return 0;
+ }
+
+@@ -884,8 +884,8 @@ static umode_t arm_cmn_event_attr_is_visible(struct kobject *kobj,
+ _CMN_EVENT_XP(pub_##_name, (_event) | (4 << 5)), \
+ _CMN_EVENT_XP(rsp2_##_name, (_event) | (5 << 5)), \
+ _CMN_EVENT_XP(dat2_##_name, (_event) | (6 << 5)), \
+- _CMN_EVENT_XP(snp2_##_name, (_event) | (7 << 5)), \
+- _CMN_EVENT_XP(req2_##_name, (_event) | (8 << 5))
++ _CMN_EVENT_XP(req2_##_name, (_event) | (7 << 5)), \
++ _CMN_EVENT_XP(snp2_##_name, (_event) | (8 << 5))
+
+ #define CMN_EVENT_XP_DAT(_name, _event) \
+ _CMN_EVENT_XP_PORT(dat_##_name, (_event) | (3 << 5)), \
+@@ -2557,6 +2557,7 @@ static int arm_cmn_probe(struct platform_device *pdev)
+
+ cmn->dev = &pdev->dev;
+ cmn->part = (unsigned long)device_get_match_data(cmn->dev);
++ cmn->cpu = cpumask_local_spread(0, dev_to_node(cmn->dev));
+ platform_set_drvdata(pdev, cmn);
+
+ if (cmn->part == PART_CMN600 && has_acpi_companion(cmn->dev)) {
+@@ -2584,7 +2585,6 @@ static int arm_cmn_probe(struct platform_device *pdev)
+ if (err)
+ return err;
+
+- cmn->cpu = cpumask_local_spread(0, dev_to_node(cmn->dev));
+ cmn->pmu = (struct pmu) {
+ .module = THIS_MODULE,
+ .parent = cmn->dev,
+@@ -2650,6 +2650,7 @@ static const struct acpi_device_id arm_cmn_acpi_match[] = {
+ { "ARMHC600", PART_CMN600 },
+ { "ARMHC650" },
+ { "ARMHC700" },
++ { "ARMHC003" },
+ {}
+ };
+ MODULE_DEVICE_TABLE(acpi, arm_cmn_acpi_match);
+diff --git a/drivers/phy/rockchip/phy-rockchip-samsung-hdptx.c b/drivers/phy/rockchip/phy-rockchip-samsung-hdptx.c
+index 920abf6fa9bdd8..28a4235bfd5fbb 100644
+--- a/drivers/phy/rockchip/phy-rockchip-samsung-hdptx.c
++++ b/drivers/phy/rockchip/phy-rockchip-samsung-hdptx.c
+@@ -325,6 +325,8 @@ static const struct ropll_config ropll_tmds_cfg[] = {
+ 1, 1, 0, 0x20, 0x0c, 1, 0x0e, 0, 0, },
+ { 650000, 162, 162, 1, 1, 11, 1, 1, 1, 1, 1, 1, 1, 54, 0, 16, 4, 1,
+ 1, 1, 0, 0x20, 0x0c, 1, 0x0e, 0, 0, },
++ { 502500, 84, 84, 1, 1, 7, 1, 1, 1, 1, 1, 1, 1, 11, 1, 4, 5,
++ 4, 11, 1, 0, 0x20, 0x0c, 1, 0x0e, 0, 0, },
+ { 337500, 0x70, 0x70, 1, 1, 0xf, 1, 1, 1, 1, 1, 1, 1, 0x2, 0, 0x01, 5,
+ 1, 1, 1, 0, 0x20, 0x0c, 1, 0x0e, 0, 0, },
+ { 400000, 100, 100, 1, 1, 11, 1, 1, 0, 1, 0, 1, 1, 0x9, 0, 0x05, 0,
+diff --git a/drivers/phy/starfive/phy-jh7110-usb.c b/drivers/phy/starfive/phy-jh7110-usb.c
+index cb5454fbe2c8fa..b505d89860b439 100644
+--- a/drivers/phy/starfive/phy-jh7110-usb.c
++++ b/drivers/phy/starfive/phy-jh7110-usb.c
+@@ -18,6 +18,8 @@
+ #include <linux/usb/of.h>
+
+ #define USB_125M_CLK_RATE 125000000
++#define USB_CLK_MODE_OFF 0x0
++#define USB_CLK_MODE_RX_NORMAL_PWR BIT(1)
+ #define USB_LS_KEEPALIVE_OFF 0x4
+ #define USB_LS_KEEPALIVE_ENABLE BIT(4)
+
+@@ -78,6 +80,7 @@ static int jh7110_usb2_phy_init(struct phy *_phy)
+ {
+ struct jh7110_usb2_phy *phy = phy_get_drvdata(_phy);
+ int ret;
++ unsigned int val;
+
+ ret = clk_set_rate(phy->usb_125m_clk, USB_125M_CLK_RATE);
+ if (ret)
+@@ -87,6 +90,10 @@ static int jh7110_usb2_phy_init(struct phy *_phy)
+ if (ret)
+ return ret;
+
++ val = readl(phy->regs + USB_CLK_MODE_OFF);
++ val |= USB_CLK_MODE_RX_NORMAL_PWR;
++ writel(val, phy->regs + USB_CLK_MODE_OFF);
++
+ return 0;
+ }
+
+diff --git a/drivers/platform/x86/fujitsu-laptop.c b/drivers/platform/x86/fujitsu-laptop.c
+index a0eae24ca9e608..162809140f68a2 100644
+--- a/drivers/platform/x86/fujitsu-laptop.c
++++ b/drivers/platform/x86/fujitsu-laptop.c
+@@ -17,13 +17,13 @@
+ /*
+ * fujitsu-laptop.c - Fujitsu laptop support, providing access to additional
+ * features made available on a range of Fujitsu laptops including the
+- * P2xxx/P5xxx/S6xxx/S7xxx series.
++ * P2xxx/P5xxx/S2xxx/S6xxx/S7xxx series.
+ *
+ * This driver implements a vendor-specific backlight control interface for
+ * Fujitsu laptops and provides support for hotkeys present on certain Fujitsu
+ * laptops.
+ *
+- * This driver has been tested on a Fujitsu Lifebook S6410, S7020 and
++ * This driver has been tested on a Fujitsu Lifebook S2110, S6410, S7020 and
+ * P8010. It should work on most P-series and S-series Lifebooks, but
+ * YMMV.
+ *
+@@ -107,7 +107,11 @@
+ #define KEY2_CODE 0x411
+ #define KEY3_CODE 0x412
+ #define KEY4_CODE 0x413
+-#define KEY5_CODE 0x420
++#define KEY5_CODE 0x414
++#define KEY6_CODE 0x415
++#define KEY7_CODE 0x416
++#define KEY8_CODE 0x417
++#define KEY9_CODE 0x420
+
+ /* Hotkey ringbuffer limits */
+ #define MAX_HOTKEY_RINGBUFFER_SIZE 100
+@@ -560,7 +564,7 @@ static const struct key_entry keymap_default[] = {
+ { KE_KEY, KEY2_CODE, { KEY_PROG2 } },
+ { KE_KEY, KEY3_CODE, { KEY_PROG3 } },
+ { KE_KEY, KEY4_CODE, { KEY_PROG4 } },
+- { KE_KEY, KEY5_CODE, { KEY_RFKILL } },
++ { KE_KEY, KEY9_CODE, { KEY_RFKILL } },
+ /* Soft keys read from status flags */
+ { KE_KEY, FLAG_RFKILL, { KEY_RFKILL } },
+ { KE_KEY, FLAG_TOUCHPAD_TOGGLE, { KEY_TOUCHPAD_TOGGLE } },
+@@ -584,6 +588,18 @@ static const struct key_entry keymap_p8010[] = {
+ { KE_END, 0 }
+ };
+
++static const struct key_entry keymap_s2110[] = {
++ { KE_KEY, KEY1_CODE, { KEY_PROG1 } }, /* "A" */
++ { KE_KEY, KEY2_CODE, { KEY_PROG2 } }, /* "B" */
++ { KE_KEY, KEY3_CODE, { KEY_WWW } }, /* "Internet" */
++ { KE_KEY, KEY4_CODE, { KEY_EMAIL } }, /* "E-mail" */
++ { KE_KEY, KEY5_CODE, { KEY_STOPCD } },
++ { KE_KEY, KEY6_CODE, { KEY_PLAYPAUSE } },
++ { KE_KEY, KEY7_CODE, { KEY_PREVIOUSSONG } },
++ { KE_KEY, KEY8_CODE, { KEY_NEXTSONG } },
++ { KE_END, 0 }
++};
++
+ static const struct key_entry *keymap = keymap_default;
+
+ static int fujitsu_laptop_dmi_keymap_override(const struct dmi_system_id *id)
+@@ -621,6 +637,15 @@ static const struct dmi_system_id fujitsu_laptop_dmi_table[] = {
+ },
+ .driver_data = (void *)keymap_p8010
+ },
++ {
++ .callback = fujitsu_laptop_dmi_keymap_override,
++ .ident = "Fujitsu LifeBook S2110",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU SIEMENS"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK S2110"),
++ },
++ .driver_data = (void *)keymap_s2110
++ },
+ {}
+ };
+
+diff --git a/drivers/platform/x86/thinkpad_acpi.c b/drivers/platform/x86/thinkpad_acpi.c
+index 2ff38ca9ddb400..d0376ee1f8ce0a 100644
+--- a/drivers/platform/x86/thinkpad_acpi.c
++++ b/drivers/platform/x86/thinkpad_acpi.c
+@@ -232,6 +232,7 @@ enum tpacpi_hkey_event_t {
+ /* Thermal events */
+ TP_HKEY_EV_ALARM_BAT_HOT = 0x6011, /* battery too hot */
+ TP_HKEY_EV_ALARM_BAT_XHOT = 0x6012, /* battery critically hot */
++ TP_HKEY_EV_ALARM_BAT_LIM_CHANGE = 0x6013, /* battery charge limit changed*/
+ TP_HKEY_EV_ALARM_SENSOR_HOT = 0x6021, /* sensor too hot */
+ TP_HKEY_EV_ALARM_SENSOR_XHOT = 0x6022, /* sensor critically hot */
+ TP_HKEY_EV_THM_TABLE_CHANGED = 0x6030, /* windows; thermal table changed */
+@@ -3780,6 +3781,10 @@ static bool hotkey_notify_6xxx(const u32 hkey, bool *send_acpi_ev)
+ pr_alert("THERMAL EMERGENCY: battery is extremely hot!\n");
+ /* recommended action: immediate sleep/hibernate */
+ break;
++ case TP_HKEY_EV_ALARM_BAT_LIM_CHANGE:
++ pr_debug("Battery Info: battery charge threshold changed\n");
++ /* User changed charging threshold. No action needed */
++ return true;
+ case TP_HKEY_EV_ALARM_SENSOR_HOT:
+ pr_crit("THERMAL ALARM: a sensor reports something is too hot!\n");
+ /* recommended action: warn user through gui, that */
+@@ -11481,6 +11486,8 @@ static int __must_check __init get_thinkpad_model_data(
+ tp->vendor = PCI_VENDOR_ID_IBM;
+ else if (dmi_name_in_vendors("LENOVO"))
+ tp->vendor = PCI_VENDOR_ID_LENOVO;
++ else if (dmi_name_in_vendors("NEC"))
++ tp->vendor = PCI_VENDOR_ID_LENOVO;
+ else
+ return 0;
+
+diff --git a/drivers/spi/spi-sun4i.c b/drivers/spi/spi-sun4i.c
+index fcbe864c9b7d69..4b070377e3d1db 100644
+--- a/drivers/spi/spi-sun4i.c
++++ b/drivers/spi/spi-sun4i.c
+@@ -264,6 +264,9 @@ static int sun4i_spi_transfer_one(struct spi_controller *host,
+ else
+ reg |= SUN4I_CTL_DHB;
+
++ /* Now that the settings are correct, enable the interface */
++ reg |= SUN4I_CTL_ENABLE;
++
+ sun4i_spi_write(sspi, SUN4I_CTL_REG, reg);
+
+ /* Ensure that we have a parent clock fast enough */
+@@ -404,7 +407,7 @@ static int sun4i_spi_runtime_resume(struct device *dev)
+ }
+
+ sun4i_spi_write(sspi, SUN4I_CTL_REG,
+- SUN4I_CTL_ENABLE | SUN4I_CTL_MASTER | SUN4I_CTL_TP);
++ SUN4I_CTL_MASTER | SUN4I_CTL_TP);
+
+ return 0;
+
+diff --git a/fs/coredump.c b/fs/coredump.c
+index 4ebec51fe4f22a..d56f5421e62c6e 100644
+--- a/fs/coredump.c
++++ b/fs/coredump.c
+@@ -43,6 +43,8 @@
+ #include <linux/timekeeping.h>
+ #include <linux/sysctl.h>
+ #include <linux/elf.h>
++#include <linux/pidfs.h>
++#include <uapi/linux/pidfd.h>
+
+ #include <linux/uaccess.h>
+ #include <asm/mmu_context.h>
+@@ -60,6 +62,12 @@ static void free_vma_snapshot(struct coredump_params *cprm);
+ #define CORE_FILE_NOTE_SIZE_DEFAULT (4*1024*1024)
+ /* Define a reasonable max cap */
+ #define CORE_FILE_NOTE_SIZE_MAX (16*1024*1024)
++/*
++ * File descriptor number for the pidfd for the thread-group leader of
++ * the coredumping task installed into the usermode helper's file
++ * descriptor table.
++ */
++#define COREDUMP_PIDFD_NUMBER 3
+
+ static int core_uses_pid;
+ static unsigned int core_pipe_limit;
+@@ -339,6 +347,27 @@ static int format_corename(struct core_name *cn, struct coredump_params *cprm,
+ case 'C':
+ err = cn_printf(cn, "%d", cprm->cpu);
+ break;
++ /* pidfd number */
++ case 'F': {
++ /*
++ * Installing a pidfd only makes sense if
++ * we actually spawn a usermode helper.
++ */
++ if (!ispipe)
++ break;
++
++ /*
++ * Note that we'll install a pidfd for the
++ * thread-group leader. We know that task
++ * linkage hasn't been removed yet and even if
++ * this @current isn't the actual thread-group
++ * leader we know that the thread-group leader
++ * cannot be reaped until @current has exited.
++ */
++ cprm->pid = task_tgid(current);
++ err = cn_printf(cn, "%d", COREDUMP_PIDFD_NUMBER);
++ break;
++ }
+ default:
+ break;
+ }
+@@ -493,7 +522,7 @@ static void wait_for_dump_helpers(struct file *file)
+ }
+
+ /*
+- * umh_pipe_setup
++ * umh_coredump_setup
+ * helper function to customize the process used
+ * to collect the core in userspace. Specifically
+ * it sets up a pipe and installs it as fd 0 (stdin)
+@@ -503,11 +532,32 @@ static void wait_for_dump_helpers(struct file *file)
+ * is a special value that we use to trap recursive
+ * core dumps
+ */
+-static int umh_pipe_setup(struct subprocess_info *info, struct cred *new)
++static int umh_coredump_setup(struct subprocess_info *info, struct cred *new)
+ {
+ struct file *files[2];
+ struct coredump_params *cp = (struct coredump_params *)info->data;
+- int err = create_pipe_files(files, 0);
++ int err;
++
++ if (cp->pid) {
++ struct file *pidfs_file __free(fput) = NULL;
++
++ pidfs_file = pidfs_alloc_file(cp->pid, O_RDWR);
++ if (IS_ERR(pidfs_file))
++ return PTR_ERR(pidfs_file);
++
++ /*
++ * Usermode helpers are children of either
++ * system_unbound_wq or of kthreadd. So we know that
++ * we're starting off with a clean file descriptor
++ * table. So we should always be able to use
++ * COREDUMP_PIDFD_NUMBER as our file descriptor value.
++ */
++ err = replace_fd(COREDUMP_PIDFD_NUMBER, pidfs_file, 0);
++ if (err < 0)
++ return err;
++ }
++
++ err = create_pipe_files(files, 0);
+ if (err)
+ return err;
+
+@@ -515,10 +565,13 @@ static int umh_pipe_setup(struct subprocess_info *info, struct cred *new)
+
+ err = replace_fd(0, files[0], 0);
+ fput(files[0]);
++ if (err < 0)
++ return err;
++
+ /* and disallow core files too */
+ current->signal->rlim[RLIMIT_CORE] = (struct rlimit){1, 1};
+
+- return err;
++ return 0;
+ }
+
+ void do_coredump(const kernel_siginfo_t *siginfo)
+@@ -593,7 +646,7 @@ void do_coredump(const kernel_siginfo_t *siginfo)
+ }
+
+ if (cprm.limit == 1) {
+- /* See umh_pipe_setup() which sets RLIMIT_CORE = 1.
++ /* See umh_coredump_setup() which sets RLIMIT_CORE = 1.
+ *
+ * Normally core limits are irrelevant to pipes, since
+ * we're not writing to the file system, but we use
+@@ -632,7 +685,7 @@ void do_coredump(const kernel_siginfo_t *siginfo)
+ retval = -ENOMEM;
+ sub_info = call_usermodehelper_setup(helper_argv[0],
+ helper_argv, NULL, GFP_KERNEL,
+- umh_pipe_setup, NULL, &cprm);
++ umh_coredump_setup, NULL, &cprm);
+ if (sub_info)
+ retval = call_usermodehelper_exec(sub_info,
+ UMH_WAIT_EXEC);
+diff --git a/fs/nfs/client.c b/fs/nfs/client.c
+index 3b0918ade53cd3..a10d39150abc81 100644
+--- a/fs/nfs/client.c
++++ b/fs/nfs/client.c
+@@ -1100,6 +1100,8 @@ struct nfs_server *nfs_create_server(struct fs_context *fc)
+ if (server->namelen == 0 || server->namelen > NFS2_MAXNAMLEN)
+ server->namelen = NFS2_MAXNAMLEN;
+ }
++ /* Linux 'subtree_check' borkenness mandates this setting */
++ server->fh_expire_type = NFS_FH_VOL_RENAME;
+
+ if (!(fattr->valid & NFS_ATTR_FATTR)) {
+ error = ctx->nfs_mod->rpc_ops->getattr(server, ctx->mntfh,
+diff --git a/fs/nfs/dir.c b/fs/nfs/dir.c
+index 2b04038b0e4052..34f3471ce813bc 100644
+--- a/fs/nfs/dir.c
++++ b/fs/nfs/dir.c
+@@ -2678,6 +2678,18 @@ nfs_unblock_rename(struct rpc_task *task, struct nfs_renamedata *data)
+ unblock_revalidate(new_dentry);
+ }
+
++static bool nfs_rename_is_unsafe_cross_dir(struct dentry *old_dentry,
++ struct dentry *new_dentry)
++{
++ struct nfs_server *server = NFS_SB(old_dentry->d_sb);
++
++ if (old_dentry->d_parent != new_dentry->d_parent)
++ return false;
++ if (server->fh_expire_type & NFS_FH_RENAME_UNSAFE)
++ return !(server->fh_expire_type & NFS_FH_NOEXPIRE_WITH_OPEN);
++ return true;
++}
++
+ /*
+ * RENAME
+ * FIXME: Some nfsds, like the Linux user space nfsd, may generate a
+@@ -2765,7 +2777,8 @@ int nfs_rename(struct mnt_idmap *idmap, struct inode *old_dir,
+
+ }
+
+- if (S_ISREG(old_inode->i_mode))
++ if (S_ISREG(old_inode->i_mode) &&
++ nfs_rename_is_unsafe_cross_dir(old_dentry, new_dentry))
+ nfs_sync_inode(old_inode);
+ task = nfs_async_rename(old_dir, new_dir, old_dentry, new_dentry,
+ must_unblock ? nfs_unblock_rename : NULL);
+diff --git a/fs/nfs/filelayout/filelayoutdev.c b/fs/nfs/filelayout/filelayoutdev.c
+index 4fa304fa5bc4b2..29d9234d5c085f 100644
+--- a/fs/nfs/filelayout/filelayoutdev.c
++++ b/fs/nfs/filelayout/filelayoutdev.c
+@@ -76,6 +76,7 @@ nfs4_fl_alloc_deviceid_node(struct nfs_server *server, struct pnfs_device *pdev,
+ struct page *scratch;
+ struct list_head dsaddrs;
+ struct nfs4_pnfs_ds_addr *da;
++ struct net *net = server->nfs_client->cl_net;
+
+ /* set up xdr stream */
+ scratch = alloc_page(gfp_flags);
+@@ -159,8 +160,7 @@ nfs4_fl_alloc_deviceid_node(struct nfs_server *server, struct pnfs_device *pdev,
+
+ mp_count = be32_to_cpup(p); /* multipath count */
+ for (j = 0; j < mp_count; j++) {
+- da = nfs4_decode_mp_ds_addr(server->nfs_client->cl_net,
+- &stream, gfp_flags);
++ da = nfs4_decode_mp_ds_addr(net, &stream, gfp_flags);
+ if (da)
+ list_add_tail(&da->da_node, &dsaddrs);
+ }
+@@ -170,7 +170,7 @@ nfs4_fl_alloc_deviceid_node(struct nfs_server *server, struct pnfs_device *pdev,
+ goto out_err_free_deviceid;
+ }
+
+- dsaddr->ds_list[i] = nfs4_pnfs_ds_add(&dsaddrs, gfp_flags);
++ dsaddr->ds_list[i] = nfs4_pnfs_ds_add(net, &dsaddrs, gfp_flags);
+ if (!dsaddr->ds_list[i])
+ goto out_err_drain_dsaddrs;
+ trace_fl_getdevinfo(server, &pdev->dev_id, dsaddr->ds_list[i]->ds_remotestr);
+diff --git a/fs/nfs/flexfilelayout/flexfilelayoutdev.c b/fs/nfs/flexfilelayout/flexfilelayoutdev.c
+index e58bedfb1dcc14..4a304cf17c4b07 100644
+--- a/fs/nfs/flexfilelayout/flexfilelayoutdev.c
++++ b/fs/nfs/flexfilelayout/flexfilelayoutdev.c
+@@ -49,6 +49,7 @@ nfs4_ff_alloc_deviceid_node(struct nfs_server *server, struct pnfs_device *pdev,
+ struct nfs4_pnfs_ds_addr *da;
+ struct nfs4_ff_layout_ds *new_ds = NULL;
+ struct nfs4_ff_ds_version *ds_versions = NULL;
++ struct net *net = server->nfs_client->cl_net;
+ u32 mp_count;
+ u32 version_count;
+ __be32 *p;
+@@ -80,8 +81,7 @@ nfs4_ff_alloc_deviceid_node(struct nfs_server *server, struct pnfs_device *pdev,
+
+ for (i = 0; i < mp_count; i++) {
+ /* multipath ds */
+- da = nfs4_decode_mp_ds_addr(server->nfs_client->cl_net,
+- &stream, gfp_flags);
++ da = nfs4_decode_mp_ds_addr(net, &stream, gfp_flags);
+ if (da)
+ list_add_tail(&da->da_node, &dsaddrs);
+ }
+@@ -149,7 +149,7 @@ nfs4_ff_alloc_deviceid_node(struct nfs_server *server, struct pnfs_device *pdev,
+ new_ds->ds_versions = ds_versions;
+ new_ds->ds_versions_cnt = version_count;
+
+- new_ds->ds = nfs4_pnfs_ds_add(&dsaddrs, gfp_flags);
++ new_ds->ds = nfs4_pnfs_ds_add(net, &dsaddrs, gfp_flags);
+ if (!new_ds->ds)
+ goto out_err_drain_dsaddrs;
+
+diff --git a/fs/nfs/pnfs.h b/fs/nfs/pnfs.h
+index 30d2613e912b88..91ff877185c8af 100644
+--- a/fs/nfs/pnfs.h
++++ b/fs/nfs/pnfs.h
+@@ -60,6 +60,7 @@ struct nfs4_pnfs_ds {
+ struct list_head ds_node; /* nfs4_pnfs_dev_hlist dev_dslist */
+ char *ds_remotestr; /* comma sep list of addrs */
+ struct list_head ds_addrs;
++ const struct net *ds_net;
+ struct nfs_client *ds_clp;
+ refcount_t ds_count;
+ unsigned long ds_state;
+@@ -415,7 +416,8 @@ int pnfs_generic_commit_pagelist(struct inode *inode,
+ int pnfs_generic_scan_commit_lists(struct nfs_commit_info *cinfo, int max);
+ void pnfs_generic_write_commit_done(struct rpc_task *task, void *data);
+ void nfs4_pnfs_ds_put(struct nfs4_pnfs_ds *ds);
+-struct nfs4_pnfs_ds *nfs4_pnfs_ds_add(struct list_head *dsaddrs,
++struct nfs4_pnfs_ds *nfs4_pnfs_ds_add(const struct net *net,
++ struct list_head *dsaddrs,
+ gfp_t gfp_flags);
+ void nfs4_pnfs_v3_ds_connect_unload(void);
+ int nfs4_pnfs_ds_connect(struct nfs_server *mds_srv, struct nfs4_pnfs_ds *ds,
+diff --git a/fs/nfs/pnfs_nfs.c b/fs/nfs/pnfs_nfs.c
+index dbef837e871ad4..2ee20a0f0b36d3 100644
+--- a/fs/nfs/pnfs_nfs.c
++++ b/fs/nfs/pnfs_nfs.c
+@@ -604,12 +604,12 @@ _same_data_server_addrs_locked(const struct list_head *dsaddrs1,
+ * Lookup DS by addresses. nfs4_ds_cache_lock is held
+ */
+ static struct nfs4_pnfs_ds *
+-_data_server_lookup_locked(const struct list_head *dsaddrs)
++_data_server_lookup_locked(const struct net *net, const struct list_head *dsaddrs)
+ {
+ struct nfs4_pnfs_ds *ds;
+
+ list_for_each_entry(ds, &nfs4_data_server_cache, ds_node)
+- if (_same_data_server_addrs_locked(&ds->ds_addrs, dsaddrs))
++ if (ds->ds_net == net && _same_data_server_addrs_locked(&ds->ds_addrs, dsaddrs))
+ return ds;
+ return NULL;
+ }
+@@ -716,7 +716,7 @@ nfs4_pnfs_remotestr(struct list_head *dsaddrs, gfp_t gfp_flags)
+ * uncached and return cached struct nfs4_pnfs_ds.
+ */
+ struct nfs4_pnfs_ds *
+-nfs4_pnfs_ds_add(struct list_head *dsaddrs, gfp_t gfp_flags)
++nfs4_pnfs_ds_add(const struct net *net, struct list_head *dsaddrs, gfp_t gfp_flags)
+ {
+ struct nfs4_pnfs_ds *tmp_ds, *ds = NULL;
+ char *remotestr;
+@@ -734,13 +734,14 @@ nfs4_pnfs_ds_add(struct list_head *dsaddrs, gfp_t gfp_flags)
+ remotestr = nfs4_pnfs_remotestr(dsaddrs, gfp_flags);
+
+ spin_lock(&nfs4_ds_cache_lock);
+- tmp_ds = _data_server_lookup_locked(dsaddrs);
++ tmp_ds = _data_server_lookup_locked(net, dsaddrs);
+ if (tmp_ds == NULL) {
+ INIT_LIST_HEAD(&ds->ds_addrs);
+ list_splice_init(dsaddrs, &ds->ds_addrs);
+ ds->ds_remotestr = remotestr;
+ refcount_set(&ds->ds_count, 1);
+ INIT_LIST_HEAD(&ds->ds_node);
++ ds->ds_net = net;
+ ds->ds_clp = NULL;
+ list_add(&ds->ds_node, &nfs4_data_server_cache);
+ dprintk("%s add new data server %s\n", __func__,
+diff --git a/fs/smb/server/oplock.c b/fs/smb/server/oplock.c
+index 03f606afad93a0..d7a8a580d01362 100644
+--- a/fs/smb/server/oplock.c
++++ b/fs/smb/server/oplock.c
+@@ -146,12 +146,9 @@ static struct oplock_info *opinfo_get_list(struct ksmbd_inode *ci)
+ {
+ struct oplock_info *opinfo;
+
+- if (list_empty(&ci->m_op_list))
+- return NULL;
+-
+ down_read(&ci->m_lock);
+- opinfo = list_first_entry(&ci->m_op_list, struct oplock_info,
+- op_entry);
++ opinfo = list_first_entry_or_null(&ci->m_op_list, struct oplock_info,
++ op_entry);
+ if (opinfo) {
+ if (opinfo->conn == NULL ||
+ !atomic_inc_not_zero(&opinfo->refcount))
+diff --git a/include/linux/coredump.h b/include/linux/coredump.h
+index 77e6e195d1d687..76e41805b92de9 100644
+--- a/include/linux/coredump.h
++++ b/include/linux/coredump.h
+@@ -28,6 +28,7 @@ struct coredump_params {
+ int vma_count;
+ size_t vma_data_size;
+ struct core_vma_metadata *vma_meta;
++ struct pid *pid;
+ };
+
+ extern unsigned int core_file_note_size_limit;
+diff --git a/include/linux/iommu.h b/include/linux/iommu.h
+index 87cbe47b323e68..0ba67935f530d7 100644
+--- a/include/linux/iommu.h
++++ b/include/linux/iommu.h
+@@ -735,6 +735,7 @@ struct iommu_domain_ops {
+ * @dev: struct device for sysfs handling
+ * @singleton_group: Used internally for drivers that have only one group
+ * @max_pasids: number of supported PASIDs
++ * @ready: set once iommu_device_register() has completed successfully
+ */
+ struct iommu_device {
+ struct list_head list;
+@@ -743,6 +744,7 @@ struct iommu_device {
+ struct device *dev;
+ struct iommu_group *singleton_group;
+ u32 max_pasids;
++ bool ready;
+ };
+
+ /**
+diff --git a/include/linux/nfs_fs_sb.h b/include/linux/nfs_fs_sb.h
+index 108862d81b5798..8baaad2dfbe405 100644
+--- a/include/linux/nfs_fs_sb.h
++++ b/include/linux/nfs_fs_sb.h
+@@ -210,6 +210,15 @@ struct nfs_server {
+ char *fscache_uniq; /* Uniquifier (or NULL) */
+ #endif
+
++ /* The following #defines numerically match the NFSv4 equivalents */
++#define NFS_FH_NOEXPIRE_WITH_OPEN (0x1)
++#define NFS_FH_VOLATILE_ANY (0x2)
++#define NFS_FH_VOL_MIGRATION (0x4)
++#define NFS_FH_VOL_RENAME (0x8)
++#define NFS_FH_RENAME_UNSAFE (NFS_FH_VOLATILE_ANY | NFS_FH_VOL_RENAME)
++ u32 fh_expire_type; /* V4 bitmask representing file
++ handle volatility type for
++ this filesystem */
+ u32 pnfs_blksize; /* layout_blksize attr */
+ #if IS_ENABLED(CONFIG_NFS_V4)
+ u32 attr_bitmask[3];/* V4 bitmask representing the set
+@@ -233,9 +242,6 @@ struct nfs_server {
+ u32 acl_bitmask; /* V4 bitmask representing the ACEs
+ that are supported on this
+ filesystem */
+- u32 fh_expire_type; /* V4 bitmask representing file
+- handle volatility type for
+- this filesystem */
+ struct pnfs_layoutdriver_type *pnfs_curr_ld; /* Active layout driver */
+ struct rpc_wait_queue roc_rpcwaitq;
+ void *pnfs_ld_data; /* per mount point data */
+diff --git a/kernel/module/Kconfig b/kernel/module/Kconfig
+index d7762ef5949a2c..39278737bb68fd 100644
+--- a/kernel/module/Kconfig
++++ b/kernel/module/Kconfig
+@@ -192,6 +192,11 @@ config GENDWARFKSYMS
+ depends on !DEBUG_INFO_REDUCED && !DEBUG_INFO_SPLIT
+ # Requires ELF object files.
+ depends on !LTO
++ # To avoid conflicts with the discarded __gendwarfksyms_ptr symbols on
++ # X86, requires pahole before commit 47dcb534e253 ("btf_encoder: Stop
++ # indexing symbols for VARs") or after commit 9810758003ce ("btf_encoder:
++ # Verify 0 address DWARF variables are in ELF section").
++ depends on !X86 || !DEBUG_INFO_BTF || PAHOLE_VERSION < 128 || PAHOLE_VERSION > 129
+ help
+ Calculate symbol versions from DWARF debugging information using
+ gendwarfksyms. Requires DEBUG_INFO to be enabled.
+diff --git a/net/sched/sch_hfsc.c b/net/sched/sch_hfsc.c
+index 7986145a527cbe..5a7745170e84b1 100644
+--- a/net/sched/sch_hfsc.c
++++ b/net/sched/sch_hfsc.c
+@@ -175,6 +175,11 @@ struct hfsc_sched {
+
+ #define HT_INFINITY 0xffffffffffffffffULL /* infinite time value */
+
++static bool cl_in_el_or_vttree(struct hfsc_class *cl)
++{
++ return ((cl->cl_flags & HFSC_FSC) && cl->cl_nactive) ||
++ ((cl->cl_flags & HFSC_RSC) && !RB_EMPTY_NODE(&cl->el_node));
++}
+
+ /*
+ * eligible tree holds backlogged classes being sorted by their eligible times.
+@@ -1040,6 +1045,8 @@ hfsc_change_class(struct Qdisc *sch, u32 classid, u32 parentid,
+ if (cl == NULL)
+ return -ENOBUFS;
+
++ RB_CLEAR_NODE(&cl->el_node);
++
+ err = tcf_block_get(&cl->block, &cl->filter_list, sch, extack);
+ if (err) {
+ kfree(cl);
+@@ -1572,7 +1579,7 @@ hfsc_enqueue(struct sk_buff *skb, struct Qdisc *sch, struct sk_buff **to_free)
+ sch->qstats.backlog += len;
+ sch->q.qlen++;
+
+- if (first && !cl->cl_nactive) {
++ if (first && !cl_in_el_or_vttree(cl)) {
+ if (cl->cl_flags & HFSC_RSC)
+ init_ed(cl, len);
+ if (cl->cl_flags & HFSC_FSC)
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 9e5c36ad8f52d3..3f09ceac08ada5 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -6811,7 +6811,10 @@ static void alc256_fixup_chromebook(struct hda_codec *codec,
+
+ switch (action) {
+ case HDA_FIXUP_ACT_PRE_PROBE:
+- spec->gen.suppress_auto_mute = 1;
++ if (codec->core.subsystem_id == 0x10280d76)
++ spec->gen.suppress_auto_mute = 0;
++ else
++ spec->gen.suppress_auto_mute = 1;
+ spec->gen.suppress_auto_mic = 1;
+ spec->en_3kpull_low = false;
+ break;
* [gentoo-commits] proj/linux-patches:6.14 commit in: /
@ 2025-06-10 12:14 Mike Pagano
From: Mike Pagano @ 2025-06-10 12:14 UTC (permalink / raw
To: gentoo-commits
commit: 4a7aed4738281d703fd55c573b63fb0d96fdb21e
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Jun 10 12:14:27 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Jun 10 12:14:27 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=4a7aed47
Linux patch 6.14.11
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1010_linux-6.14.11.patch | 652 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 656 insertions(+)
diff --git a/0000_README b/0000_README
index 7ad72962..c3151688 100644
--- a/0000_README
+++ b/0000_README
@@ -82,6 +82,10 @@ Patch: 1009_linux-6.14.10.patch
From: https://www.kernel.org
Desc: Linux 6.14.10
+Patch: 1010_linux-6.14.11.patch
+From: https://www.kernel.org
+Desc: Linux 6.14.11
+
Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
Desc: Enable link security restrictions by default.
diff --git a/1010_linux-6.14.11.patch b/1010_linux-6.14.11.patch
new file mode 100644
index 00000000..182d03a4
--- /dev/null
+++ b/1010_linux-6.14.11.patch
@@ -0,0 +1,652 @@
+diff --git a/Documentation/devicetree/bindings/phy/fsl,imx8mq-usb-phy.yaml b/Documentation/devicetree/bindings/phy/fsl,imx8mq-usb-phy.yaml
+index daee0c0fc91539..c468207eb95168 100644
+--- a/Documentation/devicetree/bindings/phy/fsl,imx8mq-usb-phy.yaml
++++ b/Documentation/devicetree/bindings/phy/fsl,imx8mq-usb-phy.yaml
+@@ -63,8 +63,7 @@ properties:
+ fsl,phy-tx-vboost-level-microvolt:
+ description:
+ Adjust the boosted transmit launch pk-pk differential amplitude
+- minimum: 880
+- maximum: 1120
++ enum: [844, 1008, 1156]
+
+ fsl,phy-comp-dis-tune-percent:
+ description:
+diff --git a/Documentation/devicetree/bindings/pwm/adi,axi-pwmgen.yaml b/Documentation/devicetree/bindings/pwm/adi,axi-pwmgen.yaml
+index 45e112d0efb466..5575c58357d6e7 100644
+--- a/Documentation/devicetree/bindings/pwm/adi,axi-pwmgen.yaml
++++ b/Documentation/devicetree/bindings/pwm/adi,axi-pwmgen.yaml
+@@ -30,11 +30,19 @@ properties:
+ const: 3
+
+ clocks:
+- maxItems: 1
++ minItems: 1
++ maxItems: 2
++
++ clock-names:
++ minItems: 1
++ items:
++ - const: axi
++ - const: ext
+
+ required:
+ - reg
+ - clocks
++ - clock-names
+
+ unevaluatedProperties: false
+
+@@ -43,6 +51,7 @@ examples:
+ pwm@44b00000 {
+ compatible = "adi,axi-pwmgen-2.00.a";
+ reg = <0x44b00000 0x1000>;
+- clocks = <&spi_clk>;
++ clocks = <&fpga_clk>, <&spi_clk>;
++ clock-names = "axi", "ext";
+ #pwm-cells = <3>;
+ };
+diff --git a/Documentation/devicetree/bindings/usb/cypress,hx3.yaml b/Documentation/devicetree/bindings/usb/cypress,hx3.yaml
+index 1033b7a4b8f953..d6eac1213228d2 100644
+--- a/Documentation/devicetree/bindings/usb/cypress,hx3.yaml
++++ b/Documentation/devicetree/bindings/usb/cypress,hx3.yaml
+@@ -14,9 +14,22 @@ allOf:
+
+ properties:
+ compatible:
+- enum:
+- - usb4b4,6504
+- - usb4b4,6506
++ oneOf:
++ - enum:
++ - usb4b4,6504
++ - usb4b4,6506
++ - items:
++ - enum:
++ - usb4b4,6500
++ - usb4b4,6508
++ - const: usb4b4,6504
++ - items:
++ - enum:
++ - usb4b4,6502
++ - usb4b4,6503
++ - usb4b4,6507
++ - usb4b4,650a
++ - const: usb4b4,6506
+
+ reg: true
+
+diff --git a/Documentation/firmware-guide/acpi/dsd/data-node-references.rst b/Documentation/firmware-guide/acpi/dsd/data-node-references.rst
+index 8d8b53e96bcfee..ccb4b153e6f2dd 100644
+--- a/Documentation/firmware-guide/acpi/dsd/data-node-references.rst
++++ b/Documentation/firmware-guide/acpi/dsd/data-node-references.rst
+@@ -12,11 +12,14 @@ ACPI in general allows referring to device objects in the tree only.
+ Hierarchical data extension nodes may not be referred to directly, hence this
+ document defines a scheme to implement such references.
+
+-A reference consist of the device object name followed by one or more
+-hierarchical data extension [dsd-guide] keys. Specifically, the hierarchical
+-data extension node which is referred to by the key shall lie directly under
+-the parent object i.e. either the device object or another hierarchical data
+-extension node.
++A reference to a _DSD hierarchical data node is a string consisting of a
++device object reference followed by a dot (".") and a relative path to a data
++node object. Do not use non-string references as this will produce a copy of
++the hierarchical data node, not a reference!
++
++The hierarchical data extension node which is referred to shall be located
++directly under its parent object i.e. either the device object or another
++hierarchical data extension node [dsd-guide].
+
+ The keys in the hierarchical data nodes shall consist of the name of the node,
+ "@" character and the number of the node in hexadecimal notation (without pre-
+@@ -33,11 +36,9 @@ extension key.
+ Example
+ =======
+
+-In the ASL snippet below, the "reference" _DSD property contains a
+-device object reference to DEV0 and under that device object, a
+-hierarchical data extension key "node@1" referring to the NOD1 object
+-and lastly, a hierarchical data extension key "anothernode" referring to
+-the ANOD object which is also the final target node of the reference.
++In the ASL snippet below, the "reference" _DSD property contains a string
++reference to a hierarchical data extension node ANOD under DEV0 under the parent
++of DEV1. ANOD is also the final target node of the reference.
+ ::
+
+ Device (DEV0)
+@@ -76,10 +77,7 @@ the ANOD object which is also the final target node of the reference.
+ Name (_DSD, Package () {
+ ToUUID("daffd814-6eba-4d8c-8a91-bc9bbf4aa301"),
+ Package () {
+- Package () {
+- "reference", Package () {
+- ^DEV0, "node@1", "anothernode"
+- }
++ Package () { "reference", "^DEV0.ANOD" }
+ },
+ }
+ })
+diff --git a/Documentation/firmware-guide/acpi/dsd/graph.rst b/Documentation/firmware-guide/acpi/dsd/graph.rst
+index b9dbfc73ed25b6..d6ae5ffa748ca4 100644
+--- a/Documentation/firmware-guide/acpi/dsd/graph.rst
++++ b/Documentation/firmware-guide/acpi/dsd/graph.rst
+@@ -66,12 +66,9 @@ of that port shall be zero. Similarly, if a port may only have a single
+ endpoint, the number of that endpoint shall be zero.
+
+ The endpoint reference uses property extension with "remote-endpoint" property
+-name followed by a reference in the same package. Such references consist of
+-the remote device reference, the first package entry of the port data extension
+-reference under the device and finally the first package entry of the endpoint
+-data extension reference under the port. Individual references thus appear as::
++name followed by a string reference in the same package. [data-node-ref]::
+
+- Package() { device, "port@X", "endpoint@Y" }
++ "device.datanode"
+
+ In the above example, "X" is the number of the port and "Y" is the number of
+ the endpoint.
+@@ -109,7 +106,7 @@ A simple example of this is show below::
+ ToUUID("daffd814-6eba-4d8c-8a91-bc9bbf4aa301"),
+ Package () {
+ Package () { "reg", 0 },
+- Package () { "remote-endpoint", Package() { \_SB.PCI0.ISP, "port@4", "endpoint@0" } },
++ Package () { "remote-endpoint", "\\_SB.PCI0.ISP.EP40" },
+ }
+ })
+ }
+@@ -141,7 +138,7 @@ A simple example of this is show below::
+ ToUUID("daffd814-6eba-4d8c-8a91-bc9bbf4aa301"),
+ Package () {
+ Package () { "reg", 0 },
+- Package () { "remote-endpoint", Package () { \_SB.PCI0.I2C2.CAM0, "port@0", "endpoint@0" } },
++ Package () { "remote-endpoint", "\\_SB.PCI0.I2C2.CAM0.EP00" },
+ }
+ })
+ }
+diff --git a/Documentation/firmware-guide/acpi/dsd/leds.rst b/Documentation/firmware-guide/acpi/dsd/leds.rst
+index 93db592c93c712..a97cd07d49be38 100644
+--- a/Documentation/firmware-guide/acpi/dsd/leds.rst
++++ b/Documentation/firmware-guide/acpi/dsd/leds.rst
+@@ -15,11 +15,6 @@ Referring to LEDs in Device tree is documented in [video-interfaces], in
+ "flash-leds" property documentation. In short, LEDs are directly referred to by
+ using phandles.
+
+-While Device tree allows referring to any node in the tree [devicetree], in
+-ACPI references are limited to device nodes only [acpi]. For this reason using
+-the same mechanism on ACPI is not possible. A mechanism to refer to non-device
+-ACPI nodes is documented in [data-node-ref].
+-
+ ACPI allows (as does DT) using integer arguments after the reference. A
+ combination of the LED driver device reference and an integer argument,
+ referring to the "reg" property of the relevant LED, is used to identify
+@@ -74,7 +69,7 @@ omitted. ::
+ Package () {
+ Package () {
+ "flash-leds",
+- Package () { ^LED, "led@0", ^LED, "led@1" },
++ Package () { "^LED.LED0", "^LED.LED1" },
+ }
+ }
+ })
+diff --git a/Makefile b/Makefile
+index 0f3aad52b3de89..48d2ce96398b75 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 14
+-SUBLEVEL = 10
++SUBLEVEL = 11
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+diff --git a/drivers/android/binder.c b/drivers/android/binder.c
+index 5fc2c8ee61b19b..6be0f7ac7213d1 100644
+--- a/drivers/android/binder.c
++++ b/drivers/android/binder.c
+@@ -79,6 +79,8 @@ static HLIST_HEAD(binder_deferred_list);
+ static DEFINE_MUTEX(binder_deferred_lock);
+
+ static HLIST_HEAD(binder_devices);
++static DEFINE_SPINLOCK(binder_devices_lock);
++
+ static HLIST_HEAD(binder_procs);
+ static DEFINE_MUTEX(binder_procs_lock);
+
+@@ -5244,6 +5246,7 @@ static void binder_free_proc(struct binder_proc *proc)
+ __func__, proc->outstanding_txns);
+ device = container_of(proc->context, struct binder_device, context);
+ if (refcount_dec_and_test(&device->ref)) {
++ binder_remove_device(device);
+ kfree(proc->context->name);
+ kfree(device);
+ }
+@@ -6929,7 +6932,16 @@ const struct binder_debugfs_entry binder_debugfs_entries[] = {
+
+ void binder_add_device(struct binder_device *device)
+ {
++ spin_lock(&binder_devices_lock);
+ hlist_add_head(&device->hlist, &binder_devices);
++ spin_unlock(&binder_devices_lock);
++}
++
++void binder_remove_device(struct binder_device *device)
++{
++ spin_lock(&binder_devices_lock);
++ hlist_del_init(&device->hlist);
++ spin_unlock(&binder_devices_lock);
+ }
+
+ static int __init init_binder_device(const char *name)
+@@ -6956,7 +6968,7 @@ static int __init init_binder_device(const char *name)
+ return ret;
+ }
+
+- hlist_add_head(&binder_device->hlist, &binder_devices);
++ binder_add_device(binder_device);
+
+ return ret;
+ }
+@@ -7018,7 +7030,7 @@ static int __init binder_init(void)
+ err_init_binder_device_failed:
+ hlist_for_each_entry_safe(device, tmp, &binder_devices, hlist) {
+ misc_deregister(&device->miscdev);
+- hlist_del(&device->hlist);
++ binder_remove_device(device);
+ kfree(device);
+ }
+
+diff --git a/drivers/android/binder_internal.h b/drivers/android/binder_internal.h
+index e4eb8357989cb1..c5d68c1d37808c 100644
+--- a/drivers/android/binder_internal.h
++++ b/drivers/android/binder_internal.h
+@@ -584,9 +584,13 @@ struct binder_object {
+ /**
+ * Add a binder device to binder_devices
+ * @device: the new binder device to add to the global list
+- *
+- * Not reentrant as the list is not protected by any locks
+ */
+ void binder_add_device(struct binder_device *device);
+
++/**
++ * Remove a binder device to binder_devices
++ * @device: the binder device to remove from the global list
++ */
++void binder_remove_device(struct binder_device *device);
++
+ #endif /* _LINUX_BINDER_INTERNAL_H */
+diff --git a/drivers/android/binderfs.c b/drivers/android/binderfs.c
+index 94c6446604fc95..44d430c4ebefd2 100644
+--- a/drivers/android/binderfs.c
++++ b/drivers/android/binderfs.c
+@@ -274,7 +274,7 @@ static void binderfs_evict_inode(struct inode *inode)
+ mutex_unlock(&binderfs_minors_mutex);
+
+ if (refcount_dec_and_test(&device->ref)) {
+- hlist_del_init(&device->hlist);
++ binder_remove_device(device);
+ kfree(device->context.name);
+ kfree(device);
+ }
+diff --git a/drivers/bluetooth/hci_qca.c b/drivers/bluetooth/hci_qca.c
+index f2558506a02c72..04443356795971 100644
+--- a/drivers/bluetooth/hci_qca.c
++++ b/drivers/bluetooth/hci_qca.c
+@@ -2415,14 +2415,14 @@ static int qca_serdev_probe(struct serdev_device *serdev)
+
+ qcadev->bt_en = devm_gpiod_get_optional(&serdev->dev, "enable",
+ GPIOD_OUT_LOW);
+- if (IS_ERR(qcadev->bt_en) &&
+- (data->soc_type == QCA_WCN6750 ||
+- data->soc_type == QCA_WCN6855)) {
+- dev_err(&serdev->dev, "failed to acquire BT_EN gpio\n");
+- return PTR_ERR(qcadev->bt_en);
+- }
++ if (IS_ERR(qcadev->bt_en))
++ return dev_err_probe(&serdev->dev,
++ PTR_ERR(qcadev->bt_en),
++ "failed to acquire BT_EN gpio\n");
+
+- if (!qcadev->bt_en)
++ if (!qcadev->bt_en &&
++ (data->soc_type == QCA_WCN6750 ||
++ data->soc_type == QCA_WCN6855))
+ power_ctrl_enabled = false;
+
+ qcadev->sw_ctrl = devm_gpiod_get_optional(&serdev->dev, "swctrl",
+diff --git a/drivers/clk/samsung/clk-exynosautov920.c b/drivers/clk/samsung/clk-exynosautov920.c
+index 2a8bfd5d9abc8a..24f26e254c29c5 100644
+--- a/drivers/clk/samsung/clk-exynosautov920.c
++++ b/drivers/clk/samsung/clk-exynosautov920.c
+@@ -1393,7 +1393,7 @@ static const unsigned long hsi1_clk_regs[] __initconst = {
+ /* List of parent clocks for Muxes in CMU_HSI1 */
+ PNAME(mout_hsi1_mmc_card_user_p) = {"oscclk", "dout_clkcmu_hsi1_mmc_card"};
+ PNAME(mout_hsi1_noc_user_p) = { "oscclk", "dout_clkcmu_hsi1_noc" };
+-PNAME(mout_hsi1_usbdrd_user_p) = { "oscclk", "mout_clkcmu_hsi1_usbdrd" };
++PNAME(mout_hsi1_usbdrd_user_p) = { "oscclk", "dout_clkcmu_hsi1_usbdrd" };
+ PNAME(mout_hsi1_usbdrd_p) = { "dout_tcxo_div2", "mout_hsi1_usbdrd_user" };
+
+ static const struct samsung_mux_clock hsi1_mux_clks[] __initconst = {
+diff --git a/drivers/cpufreq/acpi-cpufreq.c b/drivers/cpufreq/acpi-cpufreq.c
+index 453b629d3de658..5f5586a579d5d6 100644
+--- a/drivers/cpufreq/acpi-cpufreq.c
++++ b/drivers/cpufreq/acpi-cpufreq.c
+@@ -660,7 +660,7 @@ static u64 get_max_boost_ratio(unsigned int cpu, u64 *nominal_freq)
+ nominal_perf = perf_caps.nominal_perf;
+
+ if (nominal_freq)
+- *nominal_freq = perf_caps.nominal_freq;
++ *nominal_freq = perf_caps.nominal_freq * 1000;
+
+ if (!highest_perf || !nominal_perf) {
+ pr_debug("CPU%d: highest or nominal performance missing\n", cpu);
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 1a7bfc548d7025..4801dcde2cb3eb 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -670,21 +670,15 @@ static void dm_crtc_high_irq(void *interrupt_params)
+ spin_lock_irqsave(&adev_to_drm(adev)->event_lock, flags);
+
+ if (acrtc->dm_irq_params.stream &&
+- acrtc->dm_irq_params.vrr_params.supported) {
+- bool replay_en = acrtc->dm_irq_params.stream->link->replay_settings.replay_feature_enabled;
+- bool psr_en = acrtc->dm_irq_params.stream->link->psr_settings.psr_feature_enabled;
+- bool fs_active_var_en = acrtc->dm_irq_params.freesync_config.state == VRR_STATE_ACTIVE_VARIABLE;
+-
++ acrtc->dm_irq_params.vrr_params.supported &&
++ acrtc->dm_irq_params.freesync_config.state ==
++ VRR_STATE_ACTIVE_VARIABLE) {
+ mod_freesync_handle_v_update(adev->dm.freesync_module,
+ acrtc->dm_irq_params.stream,
+ &acrtc->dm_irq_params.vrr_params);
+
+- /* update vmin_vmax only if freesync is enabled, or only if PSR and REPLAY are disabled */
+- if (fs_active_var_en || (!fs_active_var_en && !replay_en && !psr_en)) {
+- dc_stream_adjust_vmin_vmax(adev->dm.dc,
+- acrtc->dm_irq_params.stream,
+- &acrtc->dm_irq_params.vrr_params.adjust);
+- }
++ dc_stream_adjust_vmin_vmax(adev->dm.dc, acrtc->dm_irq_params.stream,
++ &acrtc->dm_irq_params.vrr_params.adjust);
+ }
+
+ /*
+diff --git a/drivers/nvmem/Kconfig b/drivers/nvmem/Kconfig
+index 8671b7c974b933..eceb3cdb421ffb 100644
+--- a/drivers/nvmem/Kconfig
++++ b/drivers/nvmem/Kconfig
+@@ -260,6 +260,7 @@ config NVMEM_RCAR_EFUSE
+ config NVMEM_RMEM
+ tristate "Reserved Memory Based Driver Support"
+ depends on HAS_IOMEM
++ select CRC32
+ help
+ This driver maps reserved memory into an nvmem device. It might be
+ useful to expose information left by firmware in memory.
+diff --git a/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c b/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c
+index 335744ac831057..79f9c08e5039c3 100644
+--- a/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c
++++ b/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c
+@@ -417,20 +417,22 @@ static int armada_37xx_gpio_direction_output(struct gpio_chip *chip,
+ unsigned int offset, int value)
+ {
+ struct armada_37xx_pinctrl *info = gpiochip_get_data(chip);
+- unsigned int reg = OUTPUT_EN;
++ unsigned int en_offset = offset;
++ unsigned int reg = OUTPUT_VAL;
+ unsigned int mask, val, ret;
+
+ 	armada_37xx_update_reg(&reg, &offset);
+ mask = BIT(offset);
++ val = value ? mask : 0;
+
+- ret = regmap_update_bits(info->regmap, reg, mask, mask);
+-
++ ret = regmap_update_bits(info->regmap, reg, mask, val);
+ if (ret)
+ return ret;
+
+- reg = OUTPUT_VAL;
+- val = value ? mask : 0;
+- regmap_update_bits(info->regmap, reg, mask, val);
++ reg = OUTPUT_EN;
++	armada_37xx_update_reg(&reg, &en_offset);
++
++ regmap_update_bits(info->regmap, reg, mask, mask);
+
+ return 0;
+ }
+diff --git a/drivers/rtc/class.c b/drivers/rtc/class.c
+index e31fa0ad127e95..a0afdeaac270f0 100644
+--- a/drivers/rtc/class.c
++++ b/drivers/rtc/class.c
+@@ -327,7 +327,7 @@ static void rtc_device_get_offset(struct rtc_device *rtc)
+ *
+ * Otherwise the offset seconds should be 0.
+ */
+- if (rtc->start_secs > rtc->range_max ||
++ if ((rtc->start_secs >= 0 && rtc->start_secs > rtc->range_max) ||
+ rtc->start_secs + range_secs - 1 < rtc->range_min)
+ rtc->offset_secs = rtc->start_secs - rtc->range_min;
+ else if (rtc->start_secs > rtc->range_min)
+diff --git a/drivers/rtc/lib.c b/drivers/rtc/lib.c
+index fe361652727a3f..13b5b1f2046510 100644
+--- a/drivers/rtc/lib.c
++++ b/drivers/rtc/lib.c
+@@ -46,24 +46,38 @@ EXPORT_SYMBOL(rtc_year_days);
+ * rtc_time64_to_tm - converts time64_t to rtc_time.
+ *
+ * @time: The number of seconds since 01-01-1970 00:00:00.
+- * (Must be positive.)
++ * Works for values since at least 1900
+ * @tm: Pointer to the struct rtc_time.
+ */
+ void rtc_time64_to_tm(time64_t time, struct rtc_time *tm)
+ {
+- unsigned int secs;
+- int days;
++ int days, secs;
+
+ u64 u64tmp;
+ u32 u32tmp, udays, century, day_of_century, year_of_century, year,
+ day_of_year, month, day;
+ bool is_Jan_or_Feb, is_leap_year;
+
+- /* time must be positive */
++ /*
++ * Get days and seconds while preserving the sign to
++ * handle negative time values (dates before 1970-01-01)
++ */
+ days = div_s64_rem(time, 86400, &secs);
+
++ /*
++ * We need 0 <= secs < 86400 which isn't given for negative
++ * values of time. Fixup accordingly.
++ */
++ if (secs < 0) {
++ days -= 1;
++ secs += 86400;
++ }
++
+ /* day of the week, 1970-01-01 was a Thursday */
+ tm->tm_wday = (days + 4) % 7;
++ /* Ensure tm_wday is always positive */
++ if (tm->tm_wday < 0)
++ tm->tm_wday += 7;
+
+ /*
+ * The following algorithm is, basically, Proposition 6.3 of Neri
+@@ -93,7 +107,7 @@ void rtc_time64_to_tm(time64_t time, struct rtc_time *tm)
+ * thus, is slightly different from [1].
+ */
+
+- udays = ((u32) days) + 719468;
++ udays = days + 719468;
+
+ u32tmp = 4 * udays + 3;
+ century = u32tmp / 146097;
+diff --git a/drivers/thunderbolt/ctl.c b/drivers/thunderbolt/ctl.c
+index dc1f456736dc46..2ec6c9e477d059 100644
+--- a/drivers/thunderbolt/ctl.c
++++ b/drivers/thunderbolt/ctl.c
+@@ -151,6 +151,11 @@ static void tb_cfg_request_dequeue(struct tb_cfg_request *req)
+ struct tb_ctl *ctl = req->ctl;
+
+ mutex_lock(&ctl->request_queue_lock);
++ if (!test_bit(TB_CFG_REQUEST_ACTIVE, &req->flags)) {
++ mutex_unlock(&ctl->request_queue_lock);
++ return;
++ }
++
+ list_del(&req->list);
+ clear_bit(TB_CFG_REQUEST_ACTIVE, &req->flags);
+ if (test_bit(TB_CFG_REQUEST_CANCELED, &req->flags))
+diff --git a/drivers/tty/serial/jsm/jsm_tty.c b/drivers/tty/serial/jsm/jsm_tty.c
+index ce0fef7e2c665c..be2f130696b3a0 100644
+--- a/drivers/tty/serial/jsm/jsm_tty.c
++++ b/drivers/tty/serial/jsm/jsm_tty.c
+@@ -451,6 +451,7 @@ int jsm_uart_port_init(struct jsm_board *brd)
+ if (!brd->channels[i])
+ continue;
+
++ brd->channels[i]->uart_port.dev = &brd->pci_dev->dev;
+ brd->channels[i]->uart_port.irq = brd->irq;
+ brd->channels[i]->uart_port.uartclk = 14745600;
+ brd->channels[i]->uart_port.type = PORT_JSM;
+diff --git a/drivers/usb/class/usbtmc.c b/drivers/usb/class/usbtmc.c
+index 740d2d2b19fbe0..66f3d9324ba2f3 100644
+--- a/drivers/usb/class/usbtmc.c
++++ b/drivers/usb/class/usbtmc.c
+@@ -483,6 +483,7 @@ static int usbtmc_get_stb(struct usbtmc_file_data *file_data, __u8 *stb)
+ u8 tag;
+ int rv;
+ long wait_rv;
++ unsigned long expire;
+
+ dev_dbg(dev, "Enter ioctl_read_stb iin_ep_present: %d\n",
+ data->iin_ep_present);
+@@ -512,10 +513,11 @@ static int usbtmc_get_stb(struct usbtmc_file_data *file_data, __u8 *stb)
+ }
+
+ if (data->iin_ep_present) {
++ expire = msecs_to_jiffies(file_data->timeout);
+ wait_rv = wait_event_interruptible_timeout(
+ data->waitq,
+ atomic_read(&data->iin_data_valid) != 0,
+- file_data->timeout);
++ expire);
+ if (wait_rv < 0) {
+ dev_dbg(dev, "wait interrupted %ld\n", wait_rv);
+ rv = wait_rv;
+diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
+index 36d3df7d040c63..53d68d20fb62e0 100644
+--- a/drivers/usb/core/quirks.c
++++ b/drivers/usb/core/quirks.c
+@@ -372,6 +372,9 @@ static const struct usb_device_id usb_quirk_list[] = {
+ /* SanDisk Corp. SanDisk 3.2Gen1 */
+ { USB_DEVICE(0x0781, 0x55a3), .driver_info = USB_QUIRK_DELAY_INIT },
+
++ /* SanDisk Extreme 55AE */
++ { USB_DEVICE(0x0781, 0x55ae), .driver_info = USB_QUIRK_NO_LPM },
++
+ /* Realforce 87U Keyboard */
+ { USB_DEVICE(0x0853, 0x011b), .driver_info = USB_QUIRK_NO_LPM },
+
+diff --git a/drivers/usb/serial/pl2303.c b/drivers/usb/serial/pl2303.c
+index 010688dd9e49ce..22579d0d8ab8aa 100644
+--- a/drivers/usb/serial/pl2303.c
++++ b/drivers/usb/serial/pl2303.c
+@@ -458,6 +458,8 @@ static int pl2303_detect_type(struct usb_serial *serial)
+ case 0x605:
+ case 0x700: /* GR */
+ case 0x705:
++ case 0x905: /* GT-2AB */
++ case 0x1005: /* GC-Q20 */
+ return TYPE_HXN;
+ }
+ break;
+diff --git a/drivers/usb/storage/unusual_uas.h b/drivers/usb/storage/unusual_uas.h
+index d460d71b425783..1477e31d776327 100644
+--- a/drivers/usb/storage/unusual_uas.h
++++ b/drivers/usb/storage/unusual_uas.h
+@@ -52,6 +52,13 @@ UNUSUAL_DEV(0x059f, 0x1061, 0x0000, 0x9999,
+ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+ US_FL_NO_REPORT_OPCODES | US_FL_NO_SAME),
+
++/* Reported-by: Zhihong Zhou <zhouzhihong@greatwall.com.cn> */
++UNUSUAL_DEV(0x0781, 0x55e8, 0x0000, 0x9999,
++ "SanDisk",
++ "",
++ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
++ US_FL_IGNORE_UAS),
++
+ /* Reported-by: Hongling Zeng <zenghongling@kylinos.cn> */
+ UNUSUAL_DEV(0x090c, 0x2000, 0x0000, 0x9999,
+ "Hiksemi",
+diff --git a/drivers/usb/typec/ucsi/ucsi.h b/drivers/usb/typec/ucsi/ucsi.h
+index 99d0d76f738eec..753888cf9f8939 100644
+--- a/drivers/usb/typec/ucsi/ucsi.h
++++ b/drivers/usb/typec/ucsi/ucsi.h
+@@ -432,7 +432,7 @@ struct ucsi_debugfs_entry {
+ u64 low;
+ u64 high;
+ } response;
+- u32 status;
++ int status;
+ struct dentry *dentry;
+ };
+
+diff --git a/fs/orangefs/inode.c b/fs/orangefs/inode.c
+index 63d7c1ca0dfd35..a244f397a68c65 100644
+--- a/fs/orangefs/inode.c
++++ b/fs/orangefs/inode.c
+@@ -32,12 +32,13 @@ static int orangefs_writepage_locked(struct page *page,
+ len = i_size_read(inode);
+ if (PagePrivate(page)) {
+ wr = (struct orangefs_write_range *)page_private(page);
+- WARN_ON(wr->pos >= len);
+ off = wr->pos;
+- if (off + wr->len > len)
++ if ((off + wr->len > len) && (off <= len))
+ wlen = len - off;
+ else
+ wlen = wr->len;
++ if (wlen == 0)
++ wlen = wr->len;
+ } else {
+ WARN_ON(1);
+ off = page_offset(page);
+@@ -46,8 +47,6 @@ static int orangefs_writepage_locked(struct page *page,
+ else
+ wlen = PAGE_SIZE;
+ }
+- /* Should've been handled in orangefs_invalidate_folio. */
+- WARN_ON(off == len || off + wlen > len);
+
+ WARN_ON(wlen == 0);
+ bvec_set_page(&bv, page, wlen, off % PAGE_SIZE);
+@@ -340,6 +339,8 @@ static int orangefs_write_begin(struct file *file,
+ wr->len += len;
+ goto okay;
+ } else {
++ wr->pos = pos;
++ wr->len = len;
+ ret = orangefs_launder_folio(folio);
+ if (ret)
+ return ret;
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index b1738563bdc3b1..00592fc88e6101 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -6679,7 +6679,7 @@ static ssize_t tracing_splice_read_pipe(struct file *filp,
+ ret = trace_seq_to_buffer(&iter->seq,
+ page_address(spd.pages[i]),
+ min((size_t)trace_seq_used(&iter->seq),
+- PAGE_SIZE));
++ (size_t)PAGE_SIZE));
+ if (ret < 0) {
+ __free_page(spd.pages[i]);
+ break;
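A worked example for the rtc/lib.c hunk above: div_s64_rem() truncates toward zero, so for times before 1970-01-01 the remainder comes back negative and has to be folded into the previous day before the calendar math runs. The standalone C sketch below is illustrative only; it stubs div_s64_rem() with plain C division instead of the kernel helper, and simply demonstrates the normalization the patch adds.

  #include <stdio.h>
  #include <stdint.h>

  /* Stand-in for the kernel's div_s64_rem(): C division truncates toward
   * zero, so the remainder takes the sign of the dividend. */
  static int64_t div_s64_rem(int64_t dividend, int32_t divisor, int32_t *rem)
  {
      *rem = (int32_t)(dividend % divisor);
      return dividend / divisor;
  }

  int main(void)
  {
      int64_t time = -3600;  /* 1969-12-31 23:00:00 UTC, one hour before the epoch */
      int32_t secs;
      int64_t days = div_s64_rem(time, 86400, &secs);

      /* The fixup from the 6.14.11 hunk: keep 0 <= secs < 86400 by
       * borrowing a day when the remainder is negative. */
      if (secs < 0) {
          days -= 1;
          secs += 86400;
      }

      /* Prints days=-1 secs=82800, i.e. 1969-12-31 at 23:00:00. */
      printf("days=%lld secs=%d\n", (long long)days, secs);
      return 0;
  }

In the kernel hunk the same normalization is what lets the later unsigned calendar math (udays = days + 719468) remain valid for dates before 1970.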
* [gentoo-commits] proj/linux-patches:6.14 commit in: /
@ 2025-06-10 12:18 Mike Pagano
0 siblings, 0 replies; 21+ messages in thread
From: Mike Pagano @ 2025-06-10 12:18 UTC (permalink / raw
To: gentoo-commits
commit: 1ea95345ed742b30dde36b5f3a86c12388a135c7
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Jun 10 12:18:38 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Jun 10 12:18:38 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=1ea95345
Remove redundant patch
Removed:
2700_amd-revert-vmin-vmax-for-freesync.patch
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 ---
2700_amd-revert-vmin-vmax-for-freesync.patch | 48 ----------------------------
2 files changed, 52 deletions(-)
diff --git a/0000_README b/0000_README
index c3151688..94c48fb6 100644
--- a/0000_README
+++ b/0000_README
@@ -110,10 +110,6 @@ Patch: 2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
From: https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
Desc: Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
-Patch: 2700_amd-revert-vmin-vmax-for-freesync.patch
-From: https://github.com/archlinux/linux/commit/30dd9945fd79d33a049da4e52984c9bc07450de2.patch
-Desc: Revert "drm/amd/display: more liberal vmin/vmax update for freesync"
-
Patch: 2901_permit-menuconfig-sorting.patch
From: https://lore.kernel.org/
Desc: menuconfig: Allow sorting the entries alphabetically
diff --git a/2700_amd-revert-vmin-vmax-for-freesync.patch b/2700_amd-revert-vmin-vmax-for-freesync.patch
deleted file mode 100644
index b0b80885..00000000
--- a/2700_amd-revert-vmin-vmax-for-freesync.patch
+++ /dev/null
@@ -1,48 +0,0 @@
-From 30dd9945fd79d33a049da4e52984c9bc07450de2 Mon Sep 17 00:00:00 2001
-From: Aurabindo Pillai <aurabindo.pillai@amd.com>
-Date: Wed, 21 May 2025 16:10:57 -0400
-Subject: [PATCH] Revert "drm/amd/display: more liberal vmin/vmax update for
- freesync"
-
-This reverts commit 219898d29c438d8ec34a5560fac4ea8f6b8d4f20 since it
-causes regressions on certain configs. Revert until the issue can be
-isolated and debugged.
-
-Closes: https://gitlab.freedesktop.org/drm/amd/-/issues/4238
-Signed-off-by: Aurabindo Pillai <aurabindo.pillai@amd.com>
-Cherry-picked-for: https://gitlab.archlinux.org/archlinux/packaging/packages/linux/-/issues/139
----
- .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 16 +++++-----------
- 1 file changed, 5 insertions(+), 11 deletions(-)
-
-diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
-index 2dbd71fbae28a5..e4f0517f0f2b23 100644
---- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
-+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
-@@ -668,21 +668,15 @@ static void dm_crtc_high_irq(void *interrupt_params)
- spin_lock_irqsave(&adev_to_drm(adev)->event_lock, flags);
-
- if (acrtc->dm_irq_params.stream &&
-- acrtc->dm_irq_params.vrr_params.supported) {
-- bool replay_en = acrtc->dm_irq_params.stream->link->replay_settings.replay_feature_enabled;
-- bool psr_en = acrtc->dm_irq_params.stream->link->psr_settings.psr_feature_enabled;
-- bool fs_active_var_en = acrtc->dm_irq_params.freesync_config.state == VRR_STATE_ACTIVE_VARIABLE;
--
-+ acrtc->dm_irq_params.vrr_params.supported &&
-+ acrtc->dm_irq_params.freesync_config.state ==
-+ VRR_STATE_ACTIVE_VARIABLE) {
- mod_freesync_handle_v_update(adev->dm.freesync_module,
- acrtc->dm_irq_params.stream,
- &acrtc->dm_irq_params.vrr_params);
-
-- /* update vmin_vmax only if freesync is enabled, or only if PSR and REPLAY are disabled */
-- if (fs_active_var_en || (!fs_active_var_en && !replay_en && !psr_en)) {
-- dc_stream_adjust_vmin_vmax(adev->dm.dc,
-- acrtc->dm_irq_params.stream,
-- &acrtc->dm_irq_params.vrr_params.adjust);
-- }
-+ dc_stream_adjust_vmin_vmax(adev->dm.dc, acrtc->dm_irq_params.stream,
-+ &acrtc->dm_irq_params.vrr_params.adjust);
- }
-
- /*
end of thread, other threads:[~2025-06-10 12:18 UTC | newest]
Thread overview: 21+ messages
2025-05-22 13:50 [gentoo-commits] proj/linux-patches:6.14 commit in: / Mike Pagano
2025-06-10 12:18 Mike Pagano
2025-06-10 12:14 Mike Pagano
2025-06-04 18:09 Mike Pagano
2025-05-29 17:28 Mike Pagano
2025-05-29 17:00 Mike Pagano
2025-05-29 16:34 Mike Pagano
2025-05-28 14:02 Mike Pagano
2025-05-22 13:36 Mike Pagano
2025-05-18 14:32 Mike Pagano
2025-05-09 10:55 Mike Pagano
2025-05-02 10:54 Mike Pagano
2025-04-29 17:26 Mike Pagano
2025-04-25 12:12 Mike Pagano
2025-04-25 11:46 Mike Pagano
2025-04-20 9:36 Mike Pagano
2025-04-10 13:38 Mike Pagano
2025-04-07 10:28 Mike Pagano
2025-03-25 19:35 Mike Pagano
2025-03-17 17:31 Mike Pagano
2025-03-02 22:41 Mike Pagano