From: "Arisu Tachibana" <alicef@gentoo.org>
To: gentoo-commits@lists.gentoo.org
Subject: [gentoo-commits] proj/linux-patches:6.6 commit in: /
Date: Fri, 11 Jul 2025 02:28:41 +0000 (UTC)
Message-ID: <1752200907.162af0ac3ff484aeaa155620c1312ea312861888.alicef@gentoo>
commit: 162af0ac3ff484aeaa155620c1312ea312861888
Author: Arisu Tachibana <alicef <AT> gentoo <DOT> org>
AuthorDate: Fri Jul 11 02:28:27 2025 +0000
Commit: Arisu Tachibana <alicef <AT> gentoo <DOT> org>
CommitDate: Fri Jul 11 02:28:27 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=162af0ac
Linux patch 6.6.97
Signed-off-by: Arisu Tachibana <alicef <AT> gentoo.org>
0000_README | 4 +
1096_linux-6.6.97.patch | 6094 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 6098 insertions(+)
diff --git a/0000_README b/0000_README
index f3a9eacc..049a6902 100644
--- a/0000_README
+++ b/0000_README
@@ -427,6 +427,10 @@ Patch: 1095_linux-6.6.96.patch
From: https://www.kernel.org
Desc: Linux 6.6.96
+Patch: 1096_linux-6.6.97.patch
+From: https://www.kernel.org
+Desc: Linux 6.6.97
+
Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch
Desc: Enable link security restrictions by default.
diff --git a/1096_linux-6.6.97.patch b/1096_linux-6.6.97.patch
new file mode 100644
index 00000000..36cf6a96
--- /dev/null
+++ b/1096_linux-6.6.97.patch
@@ -0,0 +1,6094 @@
+diff --git a/Documentation/ABI/testing/sysfs-devices-system-cpu b/Documentation/ABI/testing/sysfs-devices-system-cpu
+index 0426ec112155ec..868ec736a9d235 100644
+--- a/Documentation/ABI/testing/sysfs-devices-system-cpu
++++ b/Documentation/ABI/testing/sysfs-devices-system-cpu
+@@ -526,6 +526,7 @@ What: /sys/devices/system/cpu/vulnerabilities
+ /sys/devices/system/cpu/vulnerabilities/spectre_v1
+ /sys/devices/system/cpu/vulnerabilities/spectre_v2
+ /sys/devices/system/cpu/vulnerabilities/srbds
++ /sys/devices/system/cpu/vulnerabilities/tsa
+ /sys/devices/system/cpu/vulnerabilities/tsx_async_abort
+ Date: January 2018
+ Contact: Linux kernel mailing list <linux-kernel@vger.kernel.org>
+diff --git a/Documentation/ABI/testing/sysfs-driver-ufs b/Documentation/ABI/testing/sysfs-driver-ufs
+index 0c7efaf62de0c0..84131641580c95 100644
+--- a/Documentation/ABI/testing/sysfs-driver-ufs
++++ b/Documentation/ABI/testing/sysfs-driver-ufs
+@@ -711,7 +711,7 @@ Description: This file shows the thin provisioning type. This is one of
+
+ The file is read only.
+
+-What: /sys/class/scsi_device/*/device/unit_descriptor/physical_memory_resourse_count
++What: /sys/class/scsi_device/*/device/unit_descriptor/physical_memory_resource_count
+ Date: February 2018
+ Contact: Stanislav Nijnikov <stanislav.nijnikov@wdc.com>
+ Description: This file shows the total physical memory resources. This is
+diff --git a/Documentation/admin-guide/hw-vuln/processor_mmio_stale_data.rst b/Documentation/admin-guide/hw-vuln/processor_mmio_stale_data.rst
+index c98fd11907cc87..e916dc232b0f0c 100644
+--- a/Documentation/admin-guide/hw-vuln/processor_mmio_stale_data.rst
++++ b/Documentation/admin-guide/hw-vuln/processor_mmio_stale_data.rst
+@@ -157,9 +157,7 @@ This is achieved by using the otherwise unused and obsolete VERW instruction in
+ combination with a microcode update. The microcode clears the affected CPU
+ buffers when the VERW instruction is executed.
+
+-Kernel reuses the MDS function to invoke the buffer clearing:
+-
+- mds_clear_cpu_buffers()
+The kernel does the buffer clearing with x86_clear_cpu_buffers().
+
+ On MDS affected CPUs, the kernel already invokes CPU buffer clear on
+ kernel/userspace, hypervisor/guest and C-state (idle) transitions. No
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index f95734ceb82b86..bcfa49019c3f16 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -6645,6 +6645,19 @@
+ If not specified, "default" is used. In this case,
+ the RNG's choice is left to each individual trust source.
+
++ tsa= [X86] Control mitigation for Transient Scheduler
++ Attacks on AMD CPUs. Search the following in your
++ favourite search engine for more details:
++
++ "Technical guidance for mitigating transient scheduler
++ attacks".
++
++ off - disable the mitigation
++ on - enable the mitigation (default)
++ user - mitigate only user/kernel transitions
++ vm - mitigate only guest/host transitions
++
++
+ tsc= Disable clocksource stability checks for TSC.
+ Format: <string>
+ [x86] reliable: mark tsc clocksource as reliable, this
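As a usage sketch (illustrative, not part of the patch text): the new knob above is passed on the kernel command line like any other mitigation switch, e.g.

    tsa=user

which selects the user/kernel-boundary-only mode parsed by tsa_parse_cmdline() in arch/x86/kernel/cpu/bugs.c further down; a global `mitigations=off` also disables it, via the cpu_mitigations_off() check in tsa_select_mitigation().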
+diff --git a/Documentation/arch/x86/mds.rst b/Documentation/arch/x86/mds.rst
+index c58c72362911cd..43106f349cc35f 100644
+--- a/Documentation/arch/x86/mds.rst
++++ b/Documentation/arch/x86/mds.rst
+@@ -93,7 +93,7 @@ enters a C-state.
+
+ The kernel provides a function to invoke the buffer clearing:
+
+- mds_clear_cpu_buffers()
++ x86_clear_cpu_buffers()
+
+ Also macro CLEAR_CPU_BUFFERS can be used in ASM late in exit-to-user path.
+ Other than CFLAGS.ZF, this macro doesn't clobber any registers.
+@@ -185,9 +185,9 @@ Mitigation points
+ idle clearing would be a window dressing exercise and is therefore not
+ activated.
+
+- The invocation is controlled by the static key mds_idle_clear which is
+- switched depending on the chosen mitigation mode and the SMT state of
+- the system.
++ The invocation is controlled by the static key cpu_buf_idle_clear which is
++ switched depending on the chosen mitigation mode and the SMT state of the
++ system.
+
+ The buffer clear is only invoked before entering the C-State to prevent
+ that stale data from the idling CPU from spilling to the Hyper-Thread
+diff --git a/Documentation/core-api/symbol-namespaces.rst b/Documentation/core-api/symbol-namespaces.rst
+index 12e4aecdae9452..29875e25e376f6 100644
+--- a/Documentation/core-api/symbol-namespaces.rst
++++ b/Documentation/core-api/symbol-namespaces.rst
+@@ -28,6 +28,9 @@ kernel. As of today, modules that make use of symbols exported into namespaces,
+ are required to import the namespace. Otherwise the kernel will, depending on
+ its configuration, reject loading the module or warn about a missing import.
+
++Additionally, it is possible to put symbols into a module namespace, strictly
++limiting which modules are allowed to use these symbols.
++
+ 2. How to define Symbol Namespaces
+ ==================================
+
+@@ -84,6 +87,22 @@ unit as preprocessor statement. The above example would then read::
+ within the corresponding compilation unit before any EXPORT_SYMBOL macro is
+ used.
+
++2.3 Using the EXPORT_SYMBOL_GPL_FOR_MODULES() macro
++===================================================
++
++Symbols exported using this macro are put into a module namespace. This
++namespace cannot be imported.
++
+The macro takes a comma-separated list of module names, allowing only those
++modules to access this symbol. Simple tail-globs are supported.
++
++For example:
++
++ EXPORT_SYMBOL_GPL_FOR_MODULES(preempt_notifier_inc, "kvm,kvm-*")
++
+will limit usage of this symbol to modules whose name matches the given
++patterns.
++
+ 3. How to use Symbols exported in Namespaces
+ ============================================
+
+@@ -155,3 +174,6 @@ in-tree modules::
+ You can also run nsdeps for external module builds. A typical usage is::
+
+ $ make -C <path_to_kernel_src> M=$PWD nsdeps
++
+Note: it will happily generate an import statement for the module namespace,
+which will not work and will generate build and runtime failures.
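As a minimal sketch of both sides of the restricted export described above (the exporter line is the one the documentation itself uses; the consumer module body is hypothetical):

    /* exporting side, e.g. kernel/sched/core.c */
    EXPORT_SYMBOL_GPL_FOR_MODULES(preempt_notifier_inc, "kvm,kvm-*");

    /*
     * consuming side: a module whose name matches "kvm" or "kvm-*",
     * say kvm-amd.ko, may simply call the symbol. No MODULE_IMPORT_NS()
     * is needed -- and none would help, since a module namespace cannot
     * be imported. Any other module is rejected at load time.
     */
    static int __init demo_init(void)
    {
            preempt_notifier_inc(); /* OK from kvm-amd.ko, rejected elsewhere */
            return 0;
    }
    module_init(demo_init);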
+diff --git a/Makefile b/Makefile
+index 038fc8e0982bdc..9d5c08363637bd 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 6
+-SUBLEVEL = 96
++SUBLEVEL = 97
+ EXTRAVERSION =
+ NAME = Pinguïn Aangedreven
+
+diff --git a/arch/arm64/boot/dts/apple/t8103-jxxx.dtsi b/arch/arm64/boot/dts/apple/t8103-jxxx.dtsi
+index 5988a4eb6efaa0..cb78ce7af0b380 100644
+--- a/arch/arm64/boot/dts/apple/t8103-jxxx.dtsi
++++ b/arch/arm64/boot/dts/apple/t8103-jxxx.dtsi
+@@ -71,7 +71,7 @@ hpm1: usb-pd@3f {
+ */
+ &port00 {
+ bus-range = <1 1>;
+- wifi0: network@0,0 {
++ wifi0: wifi@0,0 {
+ compatible = "pci14e4,4425";
+ reg = <0x10000 0x0 0x0 0x0 0x0>;
+ /* To be filled by the loader */
+diff --git a/arch/arm64/boot/dts/qcom/sm8550.dtsi b/arch/arm64/boot/dts/qcom/sm8550.dtsi
+index c14c6f8583d548..2f0f1c2ab7391f 100644
+--- a/arch/arm64/boot/dts/qcom/sm8550.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8550.dtsi
+@@ -1064,6 +1064,20 @@ spi13: spi@894000 {
+ status = "disabled";
+ };
+
++ uart14: serial@898000 {
++ compatible = "qcom,geni-uart";
++ reg = <0 0x898000 0 0x4000>;
++ clock-names = "se";
++ clocks = <&gcc GCC_QUPV3_WRAP2_S6_CLK>;
++ pinctrl-names = "default";
++ pinctrl-0 = <&qup_uart14_default>, <&qup_uart14_cts_rts>;
++ interrupts = <GIC_SPI 461 IRQ_TYPE_LEVEL_HIGH>;
++ interconnects = <&clk_virt MASTER_QUP_CORE_2 0 &clk_virt SLAVE_QUP_CORE_2 0>,
++ <&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_QUP_2 0>;
++ interconnect-names = "qup-core", "qup-config";
++ status = "disabled";
++ };
++
+ i2c15: i2c@89c000 {
+ compatible = "qcom,geni-i2c";
+ reg = <0 0x0089c000 0 0x4000>;
+@@ -3640,6 +3654,22 @@ qup_uart7_default: qup-uart7-default-state {
+ bias-disable;
+ };
+
++ qup_uart14_default: qup-uart14-default-state {
++ /* TX, RX */
++ pins = "gpio78", "gpio79";
++ function = "qup2_se6";
++ drive-strength = <2>;
++ bias-pull-up;
++ };
++
++ qup_uart14_cts_rts: qup-uart14-cts-rts-state {
++ /* CTS, RTS */
++ pins = "gpio76", "gpio77";
++ function = "qup2_se6";
++ drive-strength = <2>;
++ bias-pull-down;
++ };
++
+ sdc2_sleep: sdc2-sleep-state {
+ clk-pins {
+ pins = "sdc2_clk";
+diff --git a/arch/powerpc/include/uapi/asm/ioctls.h b/arch/powerpc/include/uapi/asm/ioctls.h
+index 2c145da3b774a1..b5211e413829a2 100644
+--- a/arch/powerpc/include/uapi/asm/ioctls.h
++++ b/arch/powerpc/include/uapi/asm/ioctls.h
+@@ -23,10 +23,10 @@
+ #define TCSETSW _IOW('t', 21, struct termios)
+ #define TCSETSF _IOW('t', 22, struct termios)
+
+-#define TCGETA _IOR('t', 23, struct termio)
+-#define TCSETA _IOW('t', 24, struct termio)
+-#define TCSETAW _IOW('t', 25, struct termio)
+-#define TCSETAF _IOW('t', 28, struct termio)
++#define TCGETA 0x40147417 /* _IOR('t', 23, struct termio) */
++#define TCSETA 0x80147418 /* _IOW('t', 24, struct termio) */
++#define TCSETAW 0x80147419 /* _IOW('t', 25, struct termio) */
++#define TCSETAF 0x8014741c /* _IOW('t', 28, struct termio) */
+
+ #define TCSBRK _IO('t', 29)
+ #define TCXONC _IO('t', 30)
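The hard-coded values replacing the _IOR()/_IOW() forms can be checked by hand from powerpc's ioctl layout (3 direction bits at shift 29, a 13-bit size at shift 16, the type byte at shift 8, then the command number), assuming the usual powerpc encoding of _IOC_READ = 2, _IOC_WRITE = 4 and sizeof(struct termio) = 0x14:

    /* TCGETA = _IOR('t', 23, struct termio)
     *        = (2 << 29) | (0x14 << 16) | ('t' << 8) | 23
     *        = 0x40000000 | 0x00140000 | 0x7400 | 0x17 = 0x40147417
     * The _IOW variants use (4 << 29) = 0x80000000 instead, giving
     * 0x80147418, 0x80147419 and 0x8014741c for numbers 24, 25 and 28.
     */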
+diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
+index b7629122680b1e..131c859b24679e 100644
+--- a/arch/powerpc/kernel/Makefile
++++ b/arch/powerpc/kernel/Makefile
+@@ -165,9 +165,7 @@ endif
+
+ obj64-$(CONFIG_PPC_TRANSACTIONAL_MEM) += tm.o
+
+-ifneq ($(CONFIG_XMON)$(CONFIG_KEXEC_CORE)$(CONFIG_PPC_BOOK3S),)
+ obj-y += ppc_save_regs.o
+-endif
+
+ obj-$(CONFIG_EPAPR_PARAVIRT) += epapr_paravirt.o epapr_hcalls.o
+ obj-$(CONFIG_KVM_GUEST) += kvm.o kvm_emul.o
+diff --git a/arch/s390/pci/pci_event.c b/arch/s390/pci/pci_event.c
+index b3961f1016ea0b..d969f36bf186f2 100644
+--- a/arch/s390/pci/pci_event.c
++++ b/arch/s390/pci/pci_event.c
+@@ -98,6 +98,10 @@ static pci_ers_result_t zpci_event_do_error_state_clear(struct pci_dev *pdev,
+ struct zpci_dev *zdev = to_zpci(pdev);
+ int rc;
+
++ /* The underlying device may have been disabled by the event */
++ if (!zdev_enabled(zdev))
++ return PCI_ERS_RESULT_NEED_RESET;
++
+ pr_info("%s: Unblocking device access for examination\n", pci_name(pdev));
+ rc = zpci_reset_load_store_blocked(zdev);
+ if (rc) {
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index 4372657ab0d6fa..caa6adcedc18dd 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -2621,6 +2621,15 @@ config MITIGATION_ITS
+ disabled, mitigation cannot be enabled via cmdline.
+ See <file:Documentation/admin-guide/hw-vuln/indirect-target-selection.rst>
+
++config MITIGATION_TSA
++ bool "Mitigate Transient Scheduler Attacks"
++ depends on CPU_SUP_AMD
++ default y
++ help
++ Enable mitigation for Transient Scheduler Attacks. TSA is a hardware
++ security vulnerability on AMD CPUs which can lead to forwarding of
++ invalid info to subsequent instructions and thus can affect their
++ timing and thereby cause information leakage.
+ endif
+
+ config ARCH_HAS_ADD_PAGES
+diff --git a/arch/x86/entry/entry.S b/arch/x86/entry/entry.S
+index ad292c0d971a3f..4e7ecffee762ad 100644
+--- a/arch/x86/entry/entry.S
++++ b/arch/x86/entry/entry.S
+@@ -31,20 +31,20 @@ EXPORT_SYMBOL_GPL(entry_ibpb);
+
+ /*
+ * Define the VERW operand that is disguised as entry code so that
+- * it can be referenced with KPTI enabled. This ensure VERW can be
++ * it can be referenced with KPTI enabled. This ensures VERW can be
+ * used late in exit-to-user path after page tables are switched.
+ */
+ .pushsection .entry.text, "ax"
+
+ .align L1_CACHE_BYTES, 0xcc
+-SYM_CODE_START_NOALIGN(mds_verw_sel)
++SYM_CODE_START_NOALIGN(x86_verw_sel)
+ UNWIND_HINT_UNDEFINED
+ ANNOTATE_NOENDBR
+ .word __KERNEL_DS
+ .align L1_CACHE_BYTES, 0xcc
+-SYM_CODE_END(mds_verw_sel);
++SYM_CODE_END(x86_verw_sel);
+ /* For KVM */
+-EXPORT_SYMBOL_GPL(mds_verw_sel);
++EXPORT_SYMBOL_GPL(x86_verw_sel);
+
+ .popsection
+
+diff --git a/arch/x86/include/asm/cpu.h b/arch/x86/include/asm/cpu.h
+index fecc4fe1d68aff..9c67f8b4c91971 100644
+--- a/arch/x86/include/asm/cpu.h
++++ b/arch/x86/include/asm/cpu.h
+@@ -81,4 +81,16 @@ int intel_microcode_sanity_check(void *mc, bool print_err, int hdr_type);
+
+ extern struct cpumask cpus_stop_mask;
+
++union zen_patch_rev {
++ struct {
++ __u32 rev : 8,
++ stepping : 4,
++ model : 4,
++ __reserved : 4,
++ ext_model : 4,
++ ext_fam : 8;
++ };
++ __u32 ucode_rev;
++};
++
+ #endif /* _ASM_X86_CPU_H */
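To see how this bitfield lines up with an AMD patch-level value, decode one of the minimum revisions used by amd_check_tsa_microcode() later in this patch (a sketch; the field names are those of the union above):

    union zen_patch_rev p = { .ucode_rev = 0x0a0011d7 };
    /* p.ext_fam   == 0x0a -> family 0xf + 0xa = 0x19 (Zen3/Zen4)      */
    /* p.ext_model == 0x0, p.model == 0x1, p.stepping == 0x1           */
    /* p.rev       == 0xd7 -> the per-patch revision byte              */
    /* hence (p.ucode_rev >> 8) == 0x0a0011 keys the min_rev table     */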
+diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
+index 8a2482651a6f1e..311cc58f29581d 100644
+--- a/arch/x86/include/asm/cpufeatures.h
++++ b/arch/x86/include/asm/cpufeatures.h
+@@ -449,6 +449,7 @@
+ /* AMD-defined Extended Feature 2 EAX, CPUID level 0x80000021 (EAX), word 20 */
+ #define X86_FEATURE_NO_NESTED_DATA_BP (20*32+ 0) /* "" No Nested Data Breakpoints */
+ #define X86_FEATURE_LFENCE_RDTSC (20*32+ 2) /* "" LFENCE always serializing / synchronizes RDTSC */
++#define X86_FEATURE_VERW_CLEAR (20*32+ 5) /* "" The memory form of VERW mitigates TSA */
+ #define X86_FEATURE_NULL_SEL_CLR_BASE (20*32+ 6) /* "" Null Selector Clears Base */
+ #define X86_FEATURE_AUTOIBRS (20*32+ 8) /* "" Automatic IBRS */
+ #define X86_FEATURE_NO_SMM_CTL_MSR (20*32+ 9) /* "" SMM_CTL MSR is not present */
+@@ -470,6 +471,10 @@
+ #define X86_FEATURE_CLEAR_BHB_LOOP_ON_VMEXIT (21*32+ 4) /* "" Clear branch history at vmexit using SW loop */
+ #define X86_FEATURE_INDIRECT_THUNK_ITS (21*32 + 5) /* "" Use thunk for indirect branches in lower half of cacheline */
+
++#define X86_FEATURE_TSA_SQ_NO (21*32+11) /* "" AMD CPU not vulnerable to TSA-SQ */
++#define X86_FEATURE_TSA_L1_NO (21*32+12) /* "" AMD CPU not vulnerable to TSA-L1 */
++#define X86_FEATURE_CLEAR_CPU_BUF_VM (21*32+13) /* "" Clear CPU buffers using VERW before VMRUN */
++
+ /*
+ * BUG word(s)
+ */
+@@ -521,4 +526,5 @@
+ #define X86_BUG_IBPB_NO_RET X86_BUG(1*32 + 4) /* "ibpb_no_ret" IBPB omits return target predictions */
+ #define X86_BUG_ITS X86_BUG(1*32 + 5) /* CPU is affected by Indirect Target Selection */
+ #define X86_BUG_ITS_NATIVE_ONLY X86_BUG(1*32 + 6) /* CPU is affected by ITS, VMX is not affected */
++#define X86_BUG_TSA X86_BUG(1*32+ 9) /* "tsa" CPU is affected by Transient Scheduler Attacks */
+ #endif /* _ASM_X86_CPUFEATURES_H */
+diff --git a/arch/x86/include/asm/irqflags.h b/arch/x86/include/asm/irqflags.h
+index 9acfe2bcf1fd5b..9bfb7b90e2990e 100644
+--- a/arch/x86/include/asm/irqflags.h
++++ b/arch/x86/include/asm/irqflags.h
+@@ -44,13 +44,13 @@ static __always_inline void native_irq_enable(void)
+
+ static __always_inline void native_safe_halt(void)
+ {
+- mds_idle_clear_cpu_buffers();
++ x86_idle_clear_cpu_buffers();
+ asm volatile("sti; hlt": : :"memory");
+ }
+
+ static __always_inline void native_halt(void)
+ {
+- mds_idle_clear_cpu_buffers();
++ x86_idle_clear_cpu_buffers();
+ asm volatile("hlt": : :"memory");
+ }
+
+diff --git a/arch/x86/include/asm/mwait.h b/arch/x86/include/asm/mwait.h
+index a541411d9226ef..ae7a83e3f743e0 100644
+--- a/arch/x86/include/asm/mwait.h
++++ b/arch/x86/include/asm/mwait.h
+@@ -44,8 +44,6 @@ static __always_inline void __monitorx(const void *eax, unsigned long ecx,
+
+ static __always_inline void __mwait(unsigned long eax, unsigned long ecx)
+ {
+- mds_idle_clear_cpu_buffers();
+-
+ /* "mwait %eax, %ecx;" */
+ asm volatile(".byte 0x0f, 0x01, 0xc9;"
+ :: "a" (eax), "c" (ecx));
+@@ -80,7 +78,7 @@ static __always_inline void __mwait(unsigned long eax, unsigned long ecx)
+ static __always_inline void __mwaitx(unsigned long eax, unsigned long ebx,
+ unsigned long ecx)
+ {
+- /* No MDS buffer clear as this is AMD/HYGON only */
++ /* No need for TSA buffer clearing on AMD */
+
+ /* "mwaitx %eax, %ebx, %ecx;" */
+ asm volatile(".byte 0x0f, 0x01, 0xfb;"
+@@ -89,7 +87,7 @@ static __always_inline void __mwaitx(unsigned long eax, unsigned long ebx,
+
+ static __always_inline void __sti_mwait(unsigned long eax, unsigned long ecx)
+ {
+- mds_idle_clear_cpu_buffers();
++
+ /* "mwait %eax, %ecx;" */
+ asm volatile("sti; .byte 0x0f, 0x01, 0xc9;"
+ :: "a" (eax), "c" (ecx));
+@@ -107,21 +105,29 @@ static __always_inline void __sti_mwait(unsigned long eax, unsigned long ecx)
+ */
+ static __always_inline void mwait_idle_with_hints(unsigned long eax, unsigned long ecx)
+ {
++ if (need_resched())
++ return;
++
++ x86_idle_clear_cpu_buffers();
++
+ if (static_cpu_has_bug(X86_BUG_MONITOR) || !current_set_polling_and_test()) {
+ const void *addr = &current_thread_info()->flags;
+
+ alternative_input("", "clflush (%[addr])", X86_BUG_CLFLUSH_MONITOR, [addr] "a" (addr));
+ __monitor(addr, 0, 0);
+
+- if (!need_resched()) {
+- if (ecx & 1) {
+- __mwait(eax, ecx);
+- } else {
+- __sti_mwait(eax, ecx);
+- raw_local_irq_disable();
+- }
++ if (need_resched())
++ goto out;
++
++ if (ecx & 1) {
++ __mwait(eax, ecx);
++ } else {
++ __sti_mwait(eax, ecx);
++ raw_local_irq_disable();
+ }
+ }
++
++out:
+ current_clr_polling();
+ }
+
+diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
+index bc4fa6d09d29d9..04f5a41c3a04ed 100644
+--- a/arch/x86/include/asm/nospec-branch.h
++++ b/arch/x86/include/asm/nospec-branch.h
+@@ -324,25 +324,31 @@
+ .endm
+
+ /*
+- * Macro to execute VERW instruction that mitigate transient data sampling
+- * attacks such as MDS. On affected systems a microcode update overloaded VERW
+- * instruction to also clear the CPU buffers. VERW clobbers CFLAGS.ZF.
+- *
++ * Macro to execute VERW insns that mitigate transient data sampling
++ * attacks such as MDS or TSA. On affected systems a microcode update
++ * overloaded VERW insns to also clear the CPU buffers. VERW clobbers
++ * CFLAGS.ZF.
+ * Note: Only the memory operand variant of VERW clears the CPU buffers.
+ */
+-.macro CLEAR_CPU_BUFFERS
++.macro __CLEAR_CPU_BUFFERS feature
+ #ifdef CONFIG_X86_64
+- ALTERNATIVE "", "verw mds_verw_sel(%rip)", X86_FEATURE_CLEAR_CPU_BUF
++ ALTERNATIVE "", "verw x86_verw_sel(%rip)", \feature
+ #else
+ /*
+ * In 32bit mode, the memory operand must be a %cs reference. The data
+ * segments may not be usable (vm86 mode), and the stack segment may not
+ * be flat (ESPFIX32).
+ */
+- ALTERNATIVE "", "verw %cs:mds_verw_sel", X86_FEATURE_CLEAR_CPU_BUF
++ ALTERNATIVE "", "verw %cs:x86_verw_sel", \feature
+ #endif
+ .endm
+
++#define CLEAR_CPU_BUFFERS \
++ __CLEAR_CPU_BUFFERS X86_FEATURE_CLEAR_CPU_BUF
++
++#define VM_CLEAR_CPU_BUFFERS \
++ __CLEAR_CPU_BUFFERS X86_FEATURE_CLEAR_CPU_BUF_VM
++
+ #ifdef CONFIG_X86_64
+ .macro CLEAR_BRANCH_HISTORY
+ ALTERNATIVE "", "call clear_bhb_loop", X86_FEATURE_CLEAR_BHB_LOOP
+@@ -592,24 +598,24 @@ DECLARE_STATIC_KEY_FALSE(switch_to_cond_stibp);
+ DECLARE_STATIC_KEY_FALSE(switch_mm_cond_ibpb);
+ DECLARE_STATIC_KEY_FALSE(switch_mm_always_ibpb);
+
+-DECLARE_STATIC_KEY_FALSE(mds_idle_clear);
++DECLARE_STATIC_KEY_FALSE(cpu_buf_idle_clear);
+
+ DECLARE_STATIC_KEY_FALSE(switch_mm_cond_l1d_flush);
+
+ DECLARE_STATIC_KEY_FALSE(mmio_stale_data_clear);
+
+-extern u16 mds_verw_sel;
++extern u16 x86_verw_sel;
+
+ #include <asm/segment.h>
+
+ /**
+- * mds_clear_cpu_buffers - Mitigation for MDS and TAA vulnerability
++ * x86_clear_cpu_buffers - Buffer clearing support for different x86 CPU vulns
+ *
+ * This uses the otherwise unused and obsolete VERW instruction in
+ * combination with microcode which triggers a CPU buffer flush when the
+ * instruction is executed.
+ */
+-static __always_inline void mds_clear_cpu_buffers(void)
++static __always_inline void x86_clear_cpu_buffers(void)
+ {
+ static const u16 ds = __KERNEL_DS;
+
+@@ -626,14 +632,15 @@ static __always_inline void mds_clear_cpu_buffers(void)
+ }
+
+ /**
+- * mds_idle_clear_cpu_buffers - Mitigation for MDS vulnerability
++ * x86_idle_clear_cpu_buffers - Buffer clearing support in idle for the MDS
++ * and TSA vulnerabilities.
+ *
+ * Clear CPU buffers if the corresponding static key is enabled
+ */
+-static __always_inline void mds_idle_clear_cpu_buffers(void)
++static __always_inline void x86_idle_clear_cpu_buffers(void)
+ {
+- if (static_branch_likely(&mds_idle_clear))
+- mds_clear_cpu_buffers();
++ if (static_branch_likely(&cpu_buf_idle_clear))
++ x86_clear_cpu_buffers();
+ }
+
+ #endif /* __ASSEMBLY__ */
+diff --git a/arch/x86/include/uapi/asm/debugreg.h b/arch/x86/include/uapi/asm/debugreg.h
+index 0007ba077c0c2b..41da492dfb01f0 100644
+--- a/arch/x86/include/uapi/asm/debugreg.h
++++ b/arch/x86/include/uapi/asm/debugreg.h
+@@ -15,7 +15,26 @@
+ which debugging register was responsible for the trap. The other bits
+ are either reserved or not of interest to us. */
+
+-/* Define reserved bits in DR6 which are always set to 1 */
++/*
++ * Define bits in DR6 which are set to 1 by default.
++ *
++ * This is also the DR6 architectural value following Power-up, Reset or INIT.
++ *
++ * Note, with the introduction of Bus Lock Detection (BLD) and Restricted
++ * Transactional Memory (RTM), the DR6 register has been modified:
++ *
++ * 1) BLD flag (bit 11) is no longer reserved to 1 if the CPU supports
++ * Bus Lock Detection. The assertion of a bus lock could clear it.
++ *
++ * 2) RTM flag (bit 16) is no longer reserved to 1 if the CPU supports
++ * restricted transactional memory. A #DB occurring inside an RTM region
++ * could clear it.
++ *
++ * Apparently, DR6.BLD and DR6.RTM are active low bits.
++ *
++ * As a result, DR6_RESERVED is an incorrect name now, but it is kept for
++ * compatibility.
++ */
+ #define DR6_RESERVED (0xFFFF0FF0)
+
+ #define DR_TRAP0 (0x1) /* db0 */
+diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
+index 498f2753777292..1180689a239037 100644
+--- a/arch/x86/kernel/cpu/amd.c
++++ b/arch/x86/kernel/cpu/amd.c
+@@ -539,6 +539,63 @@ static void early_init_amd_mc(struct cpuinfo_x86 *c)
+ #endif
+ }
+
++static bool amd_check_tsa_microcode(void)
++{
++ struct cpuinfo_x86 *c = &boot_cpu_data;
++ union zen_patch_rev p;
++ u32 min_rev = 0;
++
++ p.ext_fam = c->x86 - 0xf;
++ p.model = c->x86_model;
++ p.stepping = c->x86_stepping;
++
++ if (cpu_has(c, X86_FEATURE_ZEN3) ||
++ cpu_has(c, X86_FEATURE_ZEN4)) {
++ switch (p.ucode_rev >> 8) {
++ case 0xa0011: min_rev = 0x0a0011d7; break;
++ case 0xa0012: min_rev = 0x0a00123b; break;
++ case 0xa0082: min_rev = 0x0a00820d; break;
++ case 0xa1011: min_rev = 0x0a10114c; break;
++ case 0xa1012: min_rev = 0x0a10124c; break;
++ case 0xa1081: min_rev = 0x0a108109; break;
++ case 0xa2010: min_rev = 0x0a20102e; break;
++ case 0xa2012: min_rev = 0x0a201211; break;
++ case 0xa4041: min_rev = 0x0a404108; break;
++ case 0xa5000: min_rev = 0x0a500012; break;
++ case 0xa6012: min_rev = 0x0a60120a; break;
++ case 0xa7041: min_rev = 0x0a704108; break;
++ case 0xa7052: min_rev = 0x0a705208; break;
++ case 0xa7080: min_rev = 0x0a708008; break;
++ case 0xa70c0: min_rev = 0x0a70c008; break;
++ case 0xaa002: min_rev = 0x0aa00216; break;
++ default:
++ pr_debug("%s: ucode_rev: 0x%x, current revision: 0x%x\n",
++ __func__, p.ucode_rev, c->microcode);
++ return false;
++ }
++ }
++
++ if (!min_rev)
++ return false;
++
++ return c->microcode >= min_rev;
++}
++
++static void tsa_init(struct cpuinfo_x86 *c)
++{
++ if (cpu_has(c, X86_FEATURE_HYPERVISOR))
++ return;
++
++ if (cpu_has(c, X86_FEATURE_ZEN3) ||
++ cpu_has(c, X86_FEATURE_ZEN4)) {
++ if (amd_check_tsa_microcode())
++ setup_force_cpu_cap(X86_FEATURE_VERW_CLEAR);
++ } else {
++ setup_force_cpu_cap(X86_FEATURE_TSA_SQ_NO);
++ setup_force_cpu_cap(X86_FEATURE_TSA_L1_NO);
++ }
++}
++
+ static void bsp_init_amd(struct cpuinfo_x86 *c)
+ {
+ if (cpu_has(c, X86_FEATURE_CONSTANT_TSC)) {
+@@ -645,6 +702,9 @@ static void bsp_init_amd(struct cpuinfo_x86 *c)
+ break;
+ }
+
++
++ tsa_init(c);
++
+ return;
+
+ warn:
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index 07b45bbf6348de..c4d5ac99c6af84 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -50,6 +50,7 @@ static void __init l1d_flush_select_mitigation(void);
+ static void __init srso_select_mitigation(void);
+ static void __init gds_select_mitigation(void);
+ static void __init its_select_mitigation(void);
++static void __init tsa_select_mitigation(void);
+
+ /* The base value of the SPEC_CTRL MSR without task-specific bits set */
+ u64 x86_spec_ctrl_base;
+@@ -122,9 +123,9 @@ DEFINE_STATIC_KEY_FALSE(switch_mm_cond_ibpb);
+ /* Control unconditional IBPB in switch_mm() */
+ DEFINE_STATIC_KEY_FALSE(switch_mm_always_ibpb);
+
+-/* Control MDS CPU buffer clear before idling (halt, mwait) */
+-DEFINE_STATIC_KEY_FALSE(mds_idle_clear);
+-EXPORT_SYMBOL_GPL(mds_idle_clear);
++/* Control CPU buffer clear before idling (halt, mwait) */
++DEFINE_STATIC_KEY_FALSE(cpu_buf_idle_clear);
++EXPORT_SYMBOL_GPL(cpu_buf_idle_clear);
+
+ /*
+ * Controls whether l1d flush based mitigations are enabled,
+@@ -185,6 +186,7 @@ void __init cpu_select_mitigations(void)
+ srso_select_mitigation();
+ gds_select_mitigation();
+ its_select_mitigation();
++ tsa_select_mitigation();
+ }
+
+ /*
+@@ -445,7 +447,7 @@ static void __init mmio_select_mitigation(void)
+ * is required irrespective of SMT state.
+ */
+ if (!(x86_arch_cap_msr & ARCH_CAP_FBSDP_NO))
+- static_branch_enable(&mds_idle_clear);
++ static_branch_enable(&cpu_buf_idle_clear);
+
+ /*
+ * Check if the system has the right microcode.
+@@ -2082,10 +2084,10 @@ static void update_mds_branch_idle(void)
+ return;
+
+ if (sched_smt_active()) {
+- static_branch_enable(&mds_idle_clear);
++ static_branch_enable(&cpu_buf_idle_clear);
+ } else if (mmio_mitigation == MMIO_MITIGATION_OFF ||
+ (x86_arch_cap_msr & ARCH_CAP_FBSDP_NO)) {
+- static_branch_disable(&mds_idle_clear);
++ static_branch_disable(&cpu_buf_idle_clear);
+ }
+ }
+
+@@ -2093,6 +2095,94 @@ static void update_mds_branch_idle(void)
+ #define TAA_MSG_SMT "TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.\n"
+ #define MMIO_MSG_SMT "MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.\n"
+
++#undef pr_fmt
++#define pr_fmt(fmt) "Transient Scheduler Attacks: " fmt
++
++enum tsa_mitigations {
++ TSA_MITIGATION_NONE,
++ TSA_MITIGATION_UCODE_NEEDED,
++ TSA_MITIGATION_USER_KERNEL,
++ TSA_MITIGATION_VM,
++ TSA_MITIGATION_FULL,
++};
++
++static const char * const tsa_strings[] = {
++ [TSA_MITIGATION_NONE] = "Vulnerable",
++ [TSA_MITIGATION_UCODE_NEEDED] = "Vulnerable: Clear CPU buffers attempted, no microcode",
++ [TSA_MITIGATION_USER_KERNEL] = "Mitigation: Clear CPU buffers: user/kernel boundary",
++ [TSA_MITIGATION_VM] = "Mitigation: Clear CPU buffers: VM",
++ [TSA_MITIGATION_FULL] = "Mitigation: Clear CPU buffers",
++};
++
++static enum tsa_mitigations tsa_mitigation __ro_after_init =
++ IS_ENABLED(CONFIG_MITIGATION_TSA) ? TSA_MITIGATION_FULL : TSA_MITIGATION_NONE;
++
++static int __init tsa_parse_cmdline(char *str)
++{
++ if (!str)
++ return -EINVAL;
++
++ if (!strcmp(str, "off"))
++ tsa_mitigation = TSA_MITIGATION_NONE;
++ else if (!strcmp(str, "on"))
++ tsa_mitigation = TSA_MITIGATION_FULL;
++ else if (!strcmp(str, "user"))
++ tsa_mitigation = TSA_MITIGATION_USER_KERNEL;
++ else if (!strcmp(str, "vm"))
++ tsa_mitigation = TSA_MITIGATION_VM;
++ else
++ pr_err("Ignoring unknown tsa=%s option.\n", str);
++
++ return 0;
++}
++early_param("tsa", tsa_parse_cmdline);
++
++static void __init tsa_select_mitigation(void)
++{
++ if (tsa_mitigation == TSA_MITIGATION_NONE)
++ return;
++
++ if (cpu_mitigations_off() || !boot_cpu_has_bug(X86_BUG_TSA)) {
++ tsa_mitigation = TSA_MITIGATION_NONE;
++ return;
++ }
++
++ if (!boot_cpu_has(X86_FEATURE_VERW_CLEAR))
++ tsa_mitigation = TSA_MITIGATION_UCODE_NEEDED;
++
++ switch (tsa_mitigation) {
++ case TSA_MITIGATION_USER_KERNEL:
++ setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
++ break;
++
++ case TSA_MITIGATION_VM:
++ setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF_VM);
++ break;
++
++ case TSA_MITIGATION_UCODE_NEEDED:
++ if (!boot_cpu_has(X86_FEATURE_HYPERVISOR))
++ goto out;
++
++ pr_notice("Forcing mitigation on in a VM\n");
++
++ /*
++ * On the off-chance that microcode has been updated
++ * on the host, enable the mitigation in the guest just
++ * in case.
++ */
++ fallthrough;
++ case TSA_MITIGATION_FULL:
++ setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
++ setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF_VM);
++ break;
++ default:
++ break;
++ }
++
++out:
++ pr_info("%s\n", tsa_strings[tsa_mitigation]);
++}
++
+ void cpu_bugs_smt_update(void)
+ {
+ mutex_lock(&spec_ctrl_mutex);
+@@ -2146,6 +2236,24 @@ void cpu_bugs_smt_update(void)
+ break;
+ }
+
++ switch (tsa_mitigation) {
++ case TSA_MITIGATION_USER_KERNEL:
++ case TSA_MITIGATION_VM:
++ case TSA_MITIGATION_FULL:
++ case TSA_MITIGATION_UCODE_NEEDED:
++ /*
++ * TSA-SQ can potentially lead to info leakage between
++ * SMT threads.
++ */
++ if (sched_smt_active())
++ static_branch_enable(&cpu_buf_idle_clear);
++ else
++ static_branch_disable(&cpu_buf_idle_clear);
++ break;
++ case TSA_MITIGATION_NONE:
++ break;
++ }
++
+ mutex_unlock(&spec_ctrl_mutex);
+ }
+
+@@ -3075,6 +3183,11 @@ static ssize_t gds_show_state(char *buf)
+ return sysfs_emit(buf, "%s\n", gds_strings[gds_mitigation]);
+ }
+
++static ssize_t tsa_show_state(char *buf)
++{
++ return sysfs_emit(buf, "%s\n", tsa_strings[tsa_mitigation]);
++}
++
+ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr,
+ char *buf, unsigned int bug)
+ {
+@@ -3136,6 +3249,9 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
+ case X86_BUG_ITS:
+ return its_show_state(buf);
+
++ case X86_BUG_TSA:
++ return tsa_show_state(buf);
++
+ default:
+ break;
+ }
+@@ -3220,4 +3336,9 @@ ssize_t cpu_show_indirect_target_selection(struct device *dev, struct device_att
+ {
+ return cpu_show_common(dev, attr, buf, X86_BUG_ITS);
+ }
++
++ssize_t cpu_show_tsa(struct device *dev, struct device_attribute *attr, char *buf)
++{
++ return cpu_show_common(dev, attr, buf, X86_BUG_TSA);
++}
+ #endif
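The user-visible result of the above is the sysfs file added to the ABI documentation at the top of this patch, which reports one of the tsa_strings; e.g. a fully mitigated host would show (illustrative output):

    $ cat /sys/devices/system/cpu/vulnerabilities/tsa
    Mitigation: Clear CPU buffers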
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index b6e43dad577a3c..f66c71bffa6d93 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -1277,6 +1277,8 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
+ #define ITS BIT(8)
+ /* CPU is affected by Indirect Target Selection, but guest-host isolation is not affected */
+ #define ITS_NATIVE_ONLY BIT(9)
++/* CPU is affected by Transient Scheduler Attacks */
++#define TSA BIT(10)
+
+ static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
+ VULNBL_INTEL_STEPPINGS(IVYBRIDGE, X86_STEPPING_ANY, SRBDS),
+@@ -1324,7 +1326,7 @@ static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
+ VULNBL_AMD(0x16, RETBLEED),
+ VULNBL_AMD(0x17, RETBLEED | SMT_RSB | SRSO),
+ VULNBL_HYGON(0x18, RETBLEED | SMT_RSB | SRSO),
+- VULNBL_AMD(0x19, SRSO),
++ VULNBL_AMD(0x19, SRSO | TSA),
+ {}
+ };
+
+@@ -1529,6 +1531,16 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+ setup_force_cpu_bug(X86_BUG_ITS_NATIVE_ONLY);
+ }
+
++ if (c->x86_vendor == X86_VENDOR_AMD) {
++ if (!cpu_has(c, X86_FEATURE_TSA_SQ_NO) ||
++ !cpu_has(c, X86_FEATURE_TSA_L1_NO)) {
++ if (cpu_matches(cpu_vuln_blacklist, TSA) ||
++ /* Enable bug on Zen guests to allow for live migration. */
++ (cpu_has(c, X86_FEATURE_HYPERVISOR) && cpu_has(c, X86_FEATURE_ZEN)))
++ setup_force_cpu_bug(X86_BUG_TSA);
++ }
++ }
++
+ if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN))
+ return;
+
+@@ -2215,20 +2227,16 @@ EXPORT_PER_CPU_SYMBOL(__stack_chk_guard);
+
+ #endif /* CONFIG_X86_64 */
+
+-/*
+- * Clear all 6 debug registers:
+- */
+-static void clear_all_debug_regs(void)
++static void initialize_debug_regs(void)
+ {
+- int i;
+-
+- for (i = 0; i < 8; i++) {
+- /* Ignore db4, db5 */
+- if ((i == 4) || (i == 5))
+- continue;
+-
+- set_debugreg(0, i);
+- }
++ /* Control register first -- to make sure everything is disabled. */
++ set_debugreg(0, 7);
++ set_debugreg(DR6_RESERVED, 6);
++ /* dr5 and dr4 don't exist */
++ set_debugreg(0, 3);
++ set_debugreg(0, 2);
++ set_debugreg(0, 1);
++ set_debugreg(0, 0);
+ }
+
+ #ifdef CONFIG_KGDB
+@@ -2371,7 +2379,7 @@ void cpu_init(void)
+
+ load_mm_ldt(&init_mm);
+
+- clear_all_debug_regs();
++ initialize_debug_regs();
+ dbg_restore_debug_regs();
+
+ doublefault_init_cpu_tss();
+diff --git a/arch/x86/kernel/cpu/microcode/amd.c b/arch/x86/kernel/cpu/microcode/amd.c
+index 9b0570f769eb3d..7444fe0e3d08cd 100644
+--- a/arch/x86/kernel/cpu/microcode/amd.c
++++ b/arch/x86/kernel/cpu/microcode/amd.c
+@@ -96,18 +96,6 @@ static struct equiv_cpu_table {
+ struct equiv_cpu_entry *entry;
+ } equiv_table;
+
+-union zen_patch_rev {
+- struct {
+- __u32 rev : 8,
+- stepping : 4,
+- model : 4,
+- __reserved : 4,
+- ext_model : 4,
+- ext_fam : 8;
+- };
+- __u32 ucode_rev;
+-};
+-
+ union cpuid_1_eax {
+ struct {
+ __u32 stepping : 4,
+diff --git a/arch/x86/kernel/cpu/microcode/amd_shas.c b/arch/x86/kernel/cpu/microcode/amd_shas.c
+index 2a1655b1fdd883..1fd349cfc8024a 100644
+--- a/arch/x86/kernel/cpu/microcode/amd_shas.c
++++ b/arch/x86/kernel/cpu/microcode/amd_shas.c
+@@ -231,6 +231,13 @@ static const struct patch_digest phashes[] = {
+ 0x0d,0x5b,0x65,0x34,0x69,0xb2,0x62,0x21,
+ }
+ },
++ { 0xa0011d7, {
++ 0x35,0x07,0xcd,0x40,0x94,0xbc,0x81,0x6b,
++ 0xfc,0x61,0x56,0x1a,0xe2,0xdb,0x96,0x12,
++ 0x1c,0x1c,0x31,0xb1,0x02,0x6f,0xe5,0xd2,
++ 0xfe,0x1b,0x04,0x03,0x2c,0x8f,0x4c,0x36,
++ }
++ },
+ { 0xa001223, {
+ 0xfb,0x32,0x5f,0xc6,0x83,0x4f,0x8c,0xb8,
+ 0xa4,0x05,0xf9,0x71,0x53,0x01,0x16,0xc4,
+@@ -294,6 +301,13 @@ static const struct patch_digest phashes[] = {
+ 0xc0,0xcd,0x33,0xf2,0x8d,0xf9,0xef,0x59,
+ }
+ },
++ { 0xa00123b, {
++ 0xef,0xa1,0x1e,0x71,0xf1,0xc3,0x2c,0xe2,
++ 0xc3,0xef,0x69,0x41,0x7a,0x54,0xca,0xc3,
++ 0x8f,0x62,0x84,0xee,0xc2,0x39,0xd9,0x28,
++ 0x95,0xa7,0x12,0x49,0x1e,0x30,0x71,0x72,
++ }
++ },
+ { 0xa00820c, {
+ 0xa8,0x0c,0x81,0xc0,0xa6,0x00,0xe7,0xf3,
+ 0x5f,0x65,0xd3,0xb9,0x6f,0xea,0x93,0x63,
+@@ -301,6 +315,13 @@ static const struct patch_digest phashes[] = {
+ 0xe1,0x3b,0x8d,0xb2,0xf8,0x22,0x03,0xe2,
+ }
+ },
++ { 0xa00820d, {
++ 0xf9,0x2a,0xc0,0xf4,0x9e,0xa4,0x87,0xa4,
++ 0x7d,0x87,0x00,0xfd,0xab,0xda,0x19,0xca,
++ 0x26,0x51,0x32,0xc1,0x57,0x91,0xdf,0xc1,
++ 0x05,0xeb,0x01,0x7c,0x5a,0x95,0x21,0xb7,
++ }
++ },
+ { 0xa10113e, {
+ 0x05,0x3c,0x66,0xd7,0xa9,0x5a,0x33,0x10,
+ 0x1b,0xf8,0x9c,0x8f,0xed,0xfc,0xa7,0xa0,
+@@ -322,6 +343,13 @@ static const struct patch_digest phashes[] = {
+ 0xf1,0x5e,0xb0,0xde,0xb4,0x98,0xae,0xc4,
+ }
+ },
++ { 0xa10114c, {
++ 0x9e,0xb6,0xa2,0xd9,0x87,0x38,0xc5,0x64,
++ 0xd8,0x88,0xfa,0x78,0x98,0xf9,0x6f,0x74,
++ 0x39,0x90,0x1b,0xa5,0xcf,0x5e,0xb4,0x2a,
++ 0x02,0xff,0xd4,0x8c,0x71,0x8b,0xe2,0xc0,
++ }
++ },
+ { 0xa10123e, {
+ 0x03,0xb9,0x2c,0x76,0x48,0x93,0xc9,0x18,
+ 0xfb,0x56,0xfd,0xf7,0xe2,0x1d,0xca,0x4d,
+@@ -343,6 +371,13 @@ static const struct patch_digest phashes[] = {
+ 0x1b,0x7d,0x64,0x9d,0x4b,0x53,0x13,0x75,
+ }
+ },
++ { 0xa10124c, {
++ 0x29,0xea,0xf1,0x2c,0xb2,0xe4,0xef,0x90,
++ 0xa4,0xcd,0x1d,0x86,0x97,0x17,0x61,0x46,
++ 0xfc,0x22,0xcb,0x57,0x75,0x19,0xc8,0xcc,
++ 0x0c,0xf5,0xbc,0xac,0x81,0x9d,0x9a,0xd2,
++ }
++ },
+ { 0xa108108, {
+ 0xed,0xc2,0xec,0xa1,0x15,0xc6,0x65,0xe9,
+ 0xd0,0xef,0x39,0xaa,0x7f,0x55,0x06,0xc6,
+@@ -350,6 +385,13 @@ static const struct patch_digest phashes[] = {
+ 0x28,0x1e,0x9c,0x59,0x69,0x99,0x4d,0x16,
+ }
+ },
++ { 0xa108109, {
++ 0x85,0xb4,0xbd,0x7c,0x49,0xa7,0xbd,0xfa,
++ 0x49,0x36,0x80,0x81,0xc5,0xb7,0x39,0x1b,
++ 0x9a,0xaa,0x50,0xde,0x9b,0xe9,0x32,0x35,
++ 0x42,0x7e,0x51,0x4f,0x52,0x2c,0x28,0x59,
++ }
++ },
+ { 0xa20102d, {
+ 0xf9,0x6e,0xf2,0x32,0xd3,0x0f,0x5f,0x11,
+ 0x59,0xa1,0xfe,0xcc,0xcd,0x9b,0x42,0x89,
+@@ -357,6 +399,13 @@ static const struct patch_digest phashes[] = {
+ 0x8c,0xe9,0x19,0x3e,0xcc,0x3f,0x7b,0xb4,
+ }
+ },
++ { 0xa20102e, {
++ 0xbe,0x1f,0x32,0x04,0x0d,0x3c,0x9c,0xdd,
++ 0xe1,0xa4,0xbf,0x76,0x3a,0xec,0xc2,0xf6,
++ 0x11,0x00,0xa7,0xaf,0x0f,0xe5,0x02,0xc5,
++ 0x54,0x3a,0x1f,0x8c,0x16,0xb5,0xff,0xbe,
++ }
++ },
+ { 0xa201210, {
+ 0xe8,0x6d,0x51,0x6a,0x8e,0x72,0xf3,0xfe,
+ 0x6e,0x16,0xbc,0x62,0x59,0x40,0x17,0xe9,
+@@ -364,6 +413,13 @@ static const struct patch_digest phashes[] = {
+ 0xf7,0x55,0xf0,0x13,0xbb,0x22,0xf6,0x41,
+ }
+ },
++ { 0xa201211, {
++ 0x69,0xa1,0x17,0xec,0xd0,0xf6,0x6c,0x95,
++ 0xe2,0x1e,0xc5,0x59,0x1a,0x52,0x0a,0x27,
++ 0xc4,0xed,0xd5,0x59,0x1f,0xbf,0x00,0xff,
++ 0x08,0x88,0xb5,0xe1,0x12,0xb6,0xcc,0x27,
++ }
++ },
+ { 0xa404107, {
+ 0xbb,0x04,0x4e,0x47,0xdd,0x5e,0x26,0x45,
+ 0x1a,0xc9,0x56,0x24,0xa4,0x4c,0x82,0xb0,
+@@ -371,6 +427,13 @@ static const struct patch_digest phashes[] = {
+ 0x13,0xbc,0xc5,0x25,0xe4,0xc5,0xc3,0x99,
+ }
+ },
++ { 0xa404108, {
++ 0x69,0x67,0x43,0x06,0xf8,0x0c,0x62,0xdc,
++ 0xa4,0x21,0x30,0x4f,0x0f,0x21,0x2c,0xcb,
++ 0xcc,0x37,0xf1,0x1c,0xc3,0xf8,0x2f,0x19,
++ 0xdf,0x53,0x53,0x46,0xb1,0x15,0xea,0x00,
++ }
++ },
+ { 0xa500011, {
+ 0x23,0x3d,0x70,0x7d,0x03,0xc3,0xc4,0xf4,
+ 0x2b,0x82,0xc6,0x05,0xda,0x80,0x0a,0xf1,
+@@ -378,6 +441,13 @@ static const struct patch_digest phashes[] = {
+ 0x11,0x5e,0x96,0x7e,0x71,0xe9,0xfc,0x74,
+ }
+ },
++ { 0xa500012, {
++ 0xeb,0x74,0x0d,0x47,0xa1,0x8e,0x09,0xe4,
++ 0x93,0x4c,0xad,0x03,0x32,0x4c,0x38,0x16,
++ 0x10,0x39,0xdd,0x06,0xaa,0xce,0xd6,0x0f,
++ 0x62,0x83,0x9d,0x8e,0x64,0x55,0xbe,0x63,
++ }
++ },
+ { 0xa601209, {
+ 0x66,0x48,0xd4,0x09,0x05,0xcb,0x29,0x32,
+ 0x66,0xb7,0x9a,0x76,0xcd,0x11,0xf3,0x30,
+@@ -385,6 +455,13 @@ static const struct patch_digest phashes[] = {
+ 0xe8,0x73,0xe2,0xd6,0xdb,0xd2,0x77,0x1d,
+ }
+ },
++ { 0xa60120a, {
++ 0x0c,0x8b,0x3d,0xfd,0x52,0x52,0x85,0x7d,
++ 0x20,0x3a,0xe1,0x7e,0xa4,0x21,0x3b,0x7b,
++ 0x17,0x86,0xae,0xac,0x13,0xb8,0x63,0x9d,
++ 0x06,0x01,0xd0,0xa0,0x51,0x9a,0x91,0x2c,
++ }
++ },
+ { 0xa704107, {
+ 0xf3,0xc6,0x58,0x26,0xee,0xac,0x3f,0xd6,
+ 0xce,0xa1,0x72,0x47,0x3b,0xba,0x2b,0x93,
+@@ -392,6 +469,13 @@ static const struct patch_digest phashes[] = {
+ 0x64,0x39,0x71,0x8c,0xce,0xe7,0x41,0x39,
+ }
+ },
++ { 0xa704108, {
++ 0xd7,0x55,0x15,0x2b,0xfe,0xc4,0xbc,0x93,
++ 0xec,0x91,0xa0,0xae,0x45,0xb7,0xc3,0x98,
++ 0x4e,0xff,0x61,0x77,0x88,0xc2,0x70,0x49,
++ 0xe0,0x3a,0x1d,0x84,0x38,0x52,0xbf,0x5a,
++ }
++ },
+ { 0xa705206, {
+ 0x8d,0xc0,0x76,0xbd,0x58,0x9f,0x8f,0xa4,
+ 0x12,0x9d,0x21,0xfb,0x48,0x21,0xbc,0xe7,
+@@ -399,6 +483,13 @@ static const struct patch_digest phashes[] = {
+ 0x03,0x35,0xe9,0xbe,0xfb,0x06,0xdf,0xfc,
+ }
+ },
++ { 0xa705208, {
++ 0x30,0x1d,0x55,0x24,0xbc,0x6b,0x5a,0x19,
++ 0x0c,0x7d,0x1d,0x74,0xaa,0xd1,0xeb,0xd2,
++ 0x16,0x62,0xf7,0x5b,0xe1,0x1f,0x18,0x11,
++ 0x5c,0xf0,0x94,0x90,0x26,0xec,0x69,0xff,
++ }
++ },
+ { 0xa708007, {
+ 0x6b,0x76,0xcc,0x78,0xc5,0x8a,0xa3,0xe3,
+ 0x32,0x2d,0x79,0xe4,0xc3,0x80,0xdb,0xb2,
+@@ -406,6 +497,13 @@ static const struct patch_digest phashes[] = {
+ 0xdf,0x92,0x73,0x84,0x87,0x3c,0x73,0x93,
+ }
+ },
++ { 0xa708008, {
++ 0x08,0x6e,0xf0,0x22,0x4b,0x8e,0xc4,0x46,
++ 0x58,0x34,0xe6,0x47,0xa2,0x28,0xfd,0xab,
++ 0x22,0x3d,0xdd,0xd8,0x52,0x9e,0x1d,0x16,
++ 0xfa,0x01,0x68,0x14,0x79,0x3e,0xe8,0x6b,
++ }
++ },
+ { 0xa70c005, {
+ 0x88,0x5d,0xfb,0x79,0x64,0xd8,0x46,0x3b,
+ 0x4a,0x83,0x8e,0x77,0x7e,0xcf,0xb3,0x0f,
+@@ -413,6 +511,13 @@ static const struct patch_digest phashes[] = {
+ 0xee,0x49,0xac,0xe1,0x8b,0x13,0xc5,0x13,
+ }
+ },
++ { 0xa70c008, {
++ 0x0f,0xdb,0x37,0xa1,0x10,0xaf,0xd4,0x21,
++ 0x94,0x0d,0xa4,0xa2,0xe9,0x86,0x6c,0x0e,
++ 0x85,0x7c,0x36,0x30,0xa3,0x3a,0x78,0x66,
++ 0x18,0x10,0x60,0x0d,0x78,0x3d,0x44,0xd0,
++ }
++ },
+ { 0xaa00116, {
+ 0xe8,0x4c,0x2c,0x88,0xa1,0xac,0x24,0x63,
+ 0x65,0xe5,0xaa,0x2d,0x16,0xa9,0xc3,0xf5,
+@@ -441,4 +546,11 @@ static const struct patch_digest phashes[] = {
+ 0x68,0x2f,0x46,0xee,0xfe,0xc6,0x6d,0xef,
+ }
+ },
++ { 0xaa00216, {
++ 0x79,0xfb,0x5b,0x9f,0xb6,0xe6,0xa8,0xf5,
++ 0x4e,0x7c,0x4f,0x8e,0x1d,0xad,0xd0,0x08,
++ 0xc2,0x43,0x7c,0x8b,0xe6,0xdb,0xd0,0xd2,
++ 0xe8,0x39,0x26,0xc1,0xe5,0x5a,0x48,0xf1,
++ }
++ },
+ };
+diff --git a/arch/x86/kernel/cpu/scattered.c b/arch/x86/kernel/cpu/scattered.c
+index af5aa2c754c222..7a42e699f6e39a 100644
+--- a/arch/x86/kernel/cpu/scattered.c
++++ b/arch/x86/kernel/cpu/scattered.c
+@@ -48,6 +48,8 @@ static const struct cpuid_bit cpuid_bits[] = {
+ { X86_FEATURE_MBA, CPUID_EBX, 6, 0x80000008, 0 },
+ { X86_FEATURE_SMBA, CPUID_EBX, 2, 0x80000020, 0 },
+ { X86_FEATURE_BMEC, CPUID_EBX, 3, 0x80000020, 0 },
++ { X86_FEATURE_TSA_SQ_NO, CPUID_ECX, 1, 0x80000021, 0 },
++ { X86_FEATURE_TSA_L1_NO, CPUID_ECX, 2, 0x80000021, 0 },
+ { X86_FEATURE_PERFMON_V2, CPUID_EAX, 0, 0x80000022, 0 },
+ { X86_FEATURE_AMD_LBR_V2, CPUID_EAX, 1, 0x80000022, 0 },
+ { X86_FEATURE_AMD_LBR_PMC_FREEZE, CPUID_EAX, 2, 0x80000022, 0 },
+diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
+index 33c235e9d0d3fb..e3c26cc45f7008 100644
+--- a/arch/x86/kernel/process.c
++++ b/arch/x86/kernel/process.c
+@@ -928,16 +928,24 @@ static int prefer_mwait_c1_over_halt(const struct cpuinfo_x86 *c)
+ */
+ static __cpuidle void mwait_idle(void)
+ {
++ if (need_resched())
++ return;
++
++ x86_idle_clear_cpu_buffers();
++
+ if (!current_set_polling_and_test()) {
+ const void *addr = &current_thread_info()->flags;
+
+ alternative_input("", "clflush (%[addr])", X86_BUG_CLFLUSH_MONITOR, [addr] "a" (addr));
+ __monitor(addr, 0, 0);
+- if (!need_resched()) {
+- __sti_mwait(0, 0);
+- raw_local_irq_disable();
+- }
++ if (need_resched())
++ goto out;
++
++ __sti_mwait(0, 0);
++ raw_local_irq_disable();
+ }
++
++out:
+ __current_clr_polling();
+ }
+
+diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
+index 8718d58dd0fbea..a52db362a65d16 100644
+--- a/arch/x86/kernel/traps.c
++++ b/arch/x86/kernel/traps.c
+@@ -975,24 +975,32 @@ static bool is_sysenter_singlestep(struct pt_regs *regs)
+ #endif
+ }
+
+-static __always_inline unsigned long debug_read_clear_dr6(void)
++static __always_inline unsigned long debug_read_reset_dr6(void)
+ {
+ unsigned long dr6;
+
++ get_debugreg(dr6, 6);
++ dr6 ^= DR6_RESERVED; /* Flip to positive polarity */
++
+ /*
+ * The Intel SDM says:
+ *
+- * Certain debug exceptions may clear bits 0-3. The remaining
+- * contents of the DR6 register are never cleared by the
+- * processor. To avoid confusion in identifying debug
+- * exceptions, debug handlers should clear the register before
+- * returning to the interrupted task.
++ * Certain debug exceptions may clear bits 0-3 of DR6.
++ *
++ * BLD induced #DB clears DR6.BLD and any other debug
++ * exception doesn't modify DR6.BLD.
+ *
+- * Keep it simple: clear DR6 immediately.
++ * RTM induced #DB clears DR6.RTM and any other debug
++ * exception sets DR6.RTM.
++ *
++ * To avoid confusion in identifying debug exceptions,
++ * debug handlers should set DR6.BLD and DR6.RTM, and
++ * clear other DR6 bits before returning.
++ *
++ * Keep it simple: write DR6 with its architectural reset
++ * value 0xFFFF0FF0, defined as DR6_RESERVED, immediately.
+ */
+- get_debugreg(dr6, 6);
+ set_debugreg(DR6_RESERVED, 6);
+- dr6 ^= DR6_RESERVED; /* Flip to positive polarity */
+
+ return dr6;
+ }
+@@ -1188,19 +1196,19 @@ static __always_inline void exc_debug_user(struct pt_regs *regs,
+ /* IST stack entry */
+ DEFINE_IDTENTRY_DEBUG(exc_debug)
+ {
+- exc_debug_kernel(regs, debug_read_clear_dr6());
++ exc_debug_kernel(regs, debug_read_reset_dr6());
+ }
+
+ /* User entry, runs on regular task stack */
+ DEFINE_IDTENTRY_DEBUG_USER(exc_debug)
+ {
+- exc_debug_user(regs, debug_read_clear_dr6());
++ exc_debug_user(regs, debug_read_reset_dr6());
+ }
+ #else
+ /* 32 bit does not have separate entry points. */
+ DEFINE_IDTENTRY_RAW(exc_debug)
+ {
+- unsigned long dr6 = debug_read_clear_dr6();
++ unsigned long dr6 = debug_read_reset_dr6();
+
+ if (user_mode(regs))
+ exc_debug_user(regs, dr6);
+diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
+index a6cffeff75d40b..288db351677222 100644
+--- a/arch/x86/kvm/cpuid.c
++++ b/arch/x86/kvm/cpuid.c
+@@ -780,6 +780,7 @@ void kvm_set_cpu_caps(void)
+
+ kvm_cpu_cap_mask(CPUID_8000_0021_EAX,
+ F(NO_NESTED_DATA_BP) | F(LFENCE_RDTSC) | 0 /* SmmPgCfgLock */ |
++ F(VERW_CLEAR) |
+ F(NULL_SEL_CLR_BASE) | F(AUTOIBRS) | 0 /* PrefetchCtlMsr */
+ );
+
+@@ -790,6 +791,10 @@ void kvm_set_cpu_caps(void)
+ F(PERFMON_V2)
+ );
+
++ kvm_cpu_cap_init_kvm_defined(CPUID_8000_0021_ECX,
++ F(TSA_SQ_NO) | F(TSA_L1_NO)
++ );
++
+ /*
+ * Synthesize "LFENCE is serializing" into the AMD-defined entry in
+ * KVM's supported CPUID if the feature is reported as supported by the
+@@ -1296,8 +1301,9 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
+ entry->eax = entry->ebx = entry->ecx = entry->edx = 0;
+ break;
+ case 0x80000021:
+- entry->ebx = entry->ecx = entry->edx = 0;
++ entry->ebx = entry->edx = 0;
+ cpuid_entry_override(entry, CPUID_8000_0021_EAX);
++ cpuid_entry_override(entry, CPUID_8000_0021_ECX);
+ break;
+ /* AMD Extended Performance Monitoring and Debug */
+ case 0x80000022: {
+diff --git a/arch/x86/kvm/reverse_cpuid.h b/arch/x86/kvm/reverse_cpuid.h
+index 2f4e155080badc..be23712354bd8e 100644
+--- a/arch/x86/kvm/reverse_cpuid.h
++++ b/arch/x86/kvm/reverse_cpuid.h
+@@ -17,6 +17,7 @@ enum kvm_only_cpuid_leafs {
+ CPUID_8000_0007_EDX,
+ CPUID_8000_0022_EAX,
+ CPUID_7_2_EDX,
++ CPUID_8000_0021_ECX,
+ NR_KVM_CPU_CAPS,
+
+ NKVMCAPINTS = NR_KVM_CPU_CAPS - NCAPINTS,
+@@ -61,6 +62,10 @@ enum kvm_only_cpuid_leafs {
+ /* CPUID level 0x80000022 (EAX) */
+ #define KVM_X86_FEATURE_PERFMON_V2 KVM_X86_FEATURE(CPUID_8000_0022_EAX, 0)
+
++/* CPUID level 0x80000021 (ECX) */
++#define KVM_X86_FEATURE_TSA_SQ_NO KVM_X86_FEATURE(CPUID_8000_0021_ECX, 1)
++#define KVM_X86_FEATURE_TSA_L1_NO KVM_X86_FEATURE(CPUID_8000_0021_ECX, 2)
++
+ struct cpuid_reg {
+ u32 function;
+ u32 index;
+@@ -90,6 +95,7 @@ static const struct cpuid_reg reverse_cpuid[] = {
+ [CPUID_8000_0021_EAX] = {0x80000021, 0, CPUID_EAX},
+ [CPUID_8000_0022_EAX] = {0x80000022, 0, CPUID_EAX},
+ [CPUID_7_2_EDX] = { 7, 2, CPUID_EDX},
++ [CPUID_8000_0021_ECX] = {0x80000021, 0, CPUID_ECX},
+ };
+
+ /*
+@@ -129,6 +135,8 @@ static __always_inline u32 __feature_translate(int x86_feature)
+ KVM_X86_TRANSLATE_FEATURE(PERFMON_V2);
+ KVM_X86_TRANSLATE_FEATURE(RRSBA_CTRL);
+ KVM_X86_TRANSLATE_FEATURE(BHI_CTRL);
++ KVM_X86_TRANSLATE_FEATURE(TSA_SQ_NO);
++ KVM_X86_TRANSLATE_FEATURE(TSA_L1_NO);
+ default:
+ return x86_feature;
+ }
+diff --git a/arch/x86/kvm/svm/vmenter.S b/arch/x86/kvm/svm/vmenter.S
+index ef2ebabb059c8c..56fe34d9397f64 100644
+--- a/arch/x86/kvm/svm/vmenter.S
++++ b/arch/x86/kvm/svm/vmenter.S
+@@ -167,6 +167,9 @@ SYM_FUNC_START(__svm_vcpu_run)
+ #endif
+ mov VCPU_RDI(%_ASM_DI), %_ASM_DI
+
++ /* Clobbers EFLAGS.ZF */
++ VM_CLEAR_CPU_BUFFERS
++
+ /* Enter guest mode */
+ sti
+
+@@ -334,6 +337,9 @@ SYM_FUNC_START(__svm_sev_es_vcpu_run)
+ mov SVM_current_vmcb(%_ASM_DI), %_ASM_AX
+ mov KVM_VMCB_pa(%_ASM_AX), %_ASM_AX
+
++ /* Clobbers EFLAGS.ZF */
++ VM_CLEAR_CPU_BUFFERS
++
+ /* Enter guest mode */
+ sti
+
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index e7f3b70f9114ae..e53620e189254b 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -7263,7 +7263,7 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
+ vmx_l1d_flush(vcpu);
+ else if (static_branch_unlikely(&mmio_stale_data_clear) &&
+ kvm_arch_has_assigned_device(vcpu->kvm))
+- mds_clear_cpu_buffers();
++ x86_clear_cpu_buffers();
+
+ vmx_disable_fb_clear(vmx);
+
+diff --git a/drivers/acpi/acpica/dsmethod.c b/drivers/acpi/acpica/dsmethod.c
+index e809c2aed78aed..a232746d150a75 100644
+--- a/drivers/acpi/acpica/dsmethod.c
++++ b/drivers/acpi/acpica/dsmethod.c
+@@ -483,6 +483,13 @@ acpi_ds_call_control_method(struct acpi_thread_state *thread,
+ return_ACPI_STATUS(AE_NULL_OBJECT);
+ }
+
++ if (this_walk_state->num_operands < obj_desc->method.param_count) {
++ ACPI_ERROR((AE_INFO, "Missing argument for method [%4.4s]",
++ acpi_ut_get_node_name(method_node)));
++
++ return_ACPI_STATUS(AE_AML_UNINITIALIZED_ARG);
++ }
++
+ /* Init for new method, possibly wait on method mutex */
+
+ status =
+diff --git a/drivers/ata/libata-acpi.c b/drivers/ata/libata-acpi.c
+index d36e71f475abdc..39a350755a1baf 100644
+--- a/drivers/ata/libata-acpi.c
++++ b/drivers/ata/libata-acpi.c
+@@ -514,15 +514,19 @@ unsigned int ata_acpi_gtm_xfermask(struct ata_device *dev,
+ EXPORT_SYMBOL_GPL(ata_acpi_gtm_xfermask);
+
+ /**
+- * ata_acpi_cbl_80wire - Check for 80 wire cable
++ * ata_acpi_cbl_pata_type - Return PATA cable type
+ * @ap: Port to check
+- * @gtm: GTM data to use
+ *
+- * Return 1 if the @gtm indicates the BIOS selected an 80wire mode.
++ * Return ATA_CBL_PATA* according to the transfer mode selected by BIOS
+ */
+-int ata_acpi_cbl_80wire(struct ata_port *ap, const struct ata_acpi_gtm *gtm)
++int ata_acpi_cbl_pata_type(struct ata_port *ap)
+ {
+ struct ata_device *dev;
++ int ret = ATA_CBL_PATA_UNK;
++ const struct ata_acpi_gtm *gtm = ata_acpi_init_gtm(ap);
++
++ if (!gtm)
++ return ATA_CBL_PATA40;
+
+ ata_for_each_dev(dev, &ap->link, ENABLED) {
+ unsigned int xfer_mask, udma_mask;
+@@ -530,13 +534,17 @@ int ata_acpi_cbl_80wire(struct ata_port *ap, const struct ata_acpi_gtm *gtm)
+ xfer_mask = ata_acpi_gtm_xfermask(dev, gtm);
+ ata_unpack_xfermask(xfer_mask, NULL, NULL, &udma_mask);
+
+- if (udma_mask & ~ATA_UDMA_MASK_40C)
+- return 1;
++ ret = ATA_CBL_PATA40;
++
++ if (udma_mask & ~ATA_UDMA_MASK_40C) {
++ ret = ATA_CBL_PATA80;
++ break;
++ }
+ }
+
+- return 0;
++ return ret;
+ }
+-EXPORT_SYMBOL_GPL(ata_acpi_cbl_80wire);
++EXPORT_SYMBOL_GPL(ata_acpi_cbl_pata_type);
+
+ static void ata_acpi_gtf_to_tf(struct ata_device *dev,
+ const struct ata_acpi_gtf *gtf,
+diff --git a/drivers/ata/pata_cs5536.c b/drivers/ata/pata_cs5536.c
+index b811efd2cc346a..73e81e160c91fb 100644
+--- a/drivers/ata/pata_cs5536.c
++++ b/drivers/ata/pata_cs5536.c
+@@ -27,7 +27,7 @@
+ #include <scsi/scsi_host.h>
+ #include <linux/dmi.h>
+
+-#ifdef CONFIG_X86_32
++#if defined(CONFIG_X86) && defined(CONFIG_X86_32)
+ #include <asm/msr.h>
+ static int use_msr;
+ module_param_named(msr, use_msr, int, 0644);
+diff --git a/drivers/ata/pata_via.c b/drivers/ata/pata_via.c
+index d82728a01832b5..bb80e7800dcbe9 100644
+--- a/drivers/ata/pata_via.c
++++ b/drivers/ata/pata_via.c
+@@ -201,11 +201,9 @@ static int via_cable_detect(struct ata_port *ap) {
+ two drives */
+ if (ata66 & (0x10100000 >> (16 * ap->port_no)))
+ return ATA_CBL_PATA80;
++
+ /* Check with ACPI so we can spot BIOS reported SATA bridges */
+- if (ata_acpi_init_gtm(ap) &&
+- ata_acpi_cbl_80wire(ap, ata_acpi_init_gtm(ap)))
+- return ATA_CBL_PATA80;
+- return ATA_CBL_PATA40;
++ return ata_acpi_cbl_pata_type(ap);
+ }
+
+ static int via_pre_reset(struct ata_link *link, unsigned long deadline)
+diff --git a/drivers/base/cpu.c b/drivers/base/cpu.c
+index a5cfc1bfad51fb..a3aea3c1431aa9 100644
+--- a/drivers/base/cpu.c
++++ b/drivers/base/cpu.c
+@@ -567,6 +567,7 @@ CPU_SHOW_VULN_FALLBACK(spec_rstack_overflow);
+ CPU_SHOW_VULN_FALLBACK(gds);
+ CPU_SHOW_VULN_FALLBACK(reg_file_data_sampling);
+ CPU_SHOW_VULN_FALLBACK(indirect_target_selection);
++CPU_SHOW_VULN_FALLBACK(tsa);
+
+ static DEVICE_ATTR(meltdown, 0444, cpu_show_meltdown, NULL);
+ static DEVICE_ATTR(spectre_v1, 0444, cpu_show_spectre_v1, NULL);
+@@ -583,6 +584,7 @@ static DEVICE_ATTR(spec_rstack_overflow, 0444, cpu_show_spec_rstack_overflow, NU
+ static DEVICE_ATTR(gather_data_sampling, 0444, cpu_show_gds, NULL);
+ static DEVICE_ATTR(reg_file_data_sampling, 0444, cpu_show_reg_file_data_sampling, NULL);
+ static DEVICE_ATTR(indirect_target_selection, 0444, cpu_show_indirect_target_selection, NULL);
++static DEVICE_ATTR(tsa, 0444, cpu_show_tsa, NULL);
+
+ static struct attribute *cpu_root_vulnerabilities_attrs[] = {
+ &dev_attr_meltdown.attr,
+@@ -600,6 +602,7 @@ static struct attribute *cpu_root_vulnerabilities_attrs[] = {
+ &dev_attr_gather_data_sampling.attr,
+ &dev_attr_reg_file_data_sampling.attr,
+ &dev_attr_indirect_target_selection.attr,
++ &dev_attr_tsa.attr,
+ NULL
+ };
+
+diff --git a/drivers/block/aoe/aoe.h b/drivers/block/aoe/aoe.h
+index 749ae1246f4cf8..d35caa3c69e15e 100644
+--- a/drivers/block/aoe/aoe.h
++++ b/drivers/block/aoe/aoe.h
+@@ -80,6 +80,7 @@ enum {
+ DEVFL_NEWSIZE = (1<<6), /* need to update dev size in block layer */
+ DEVFL_FREEING = (1<<7), /* set when device is being cleaned up */
+ DEVFL_FREED = (1<<8), /* device has been cleaned up */
++ DEVFL_DEAD = (1<<9), /* device has timed out of aoe_deadsecs */
+ };
+
+ enum {
+diff --git a/drivers/block/aoe/aoecmd.c b/drivers/block/aoe/aoecmd.c
+index d1f4ddc576451a..c4c5cf1ec71ba9 100644
+--- a/drivers/block/aoe/aoecmd.c
++++ b/drivers/block/aoe/aoecmd.c
+@@ -754,7 +754,7 @@ rexmit_timer(struct timer_list *timer)
+
+ utgts = count_targets(d, NULL);
+
+- if (d->flags & DEVFL_TKILL) {
++ if (d->flags & (DEVFL_TKILL | DEVFL_DEAD)) {
+ spin_unlock_irqrestore(&d->lock, flags);
+ return;
+ }
+@@ -786,7 +786,8 @@ rexmit_timer(struct timer_list *timer)
+ * to clean up.
+ */
+ list_splice(&flist, &d->factive[0]);
+- aoedev_downdev(d);
++ d->flags |= DEVFL_DEAD;
++ queue_work(aoe_wq, &d->work);
+ goto out;
+ }
+
+@@ -898,6 +899,9 @@ aoecmd_sleepwork(struct work_struct *work)
+ {
+ struct aoedev *d = container_of(work, struct aoedev, work);
+
++ if (d->flags & DEVFL_DEAD)
++ aoedev_downdev(d);
++
+ if (d->flags & DEVFL_GDALLOC)
+ aoeblk_gdalloc(d);
+
+diff --git a/drivers/block/aoe/aoedev.c b/drivers/block/aoe/aoedev.c
+index 280679bde3a506..4240e11adfb769 100644
+--- a/drivers/block/aoe/aoedev.c
++++ b/drivers/block/aoe/aoedev.c
+@@ -200,8 +200,11 @@ aoedev_downdev(struct aoedev *d)
+ struct list_head *head, *pos, *nx;
+ struct request *rq, *rqnext;
+ int i;
++ unsigned long flags;
+
+- d->flags &= ~DEVFL_UP;
++ spin_lock_irqsave(&d->lock, flags);
++ d->flags &= ~(DEVFL_UP | DEVFL_DEAD);
++ spin_unlock_irqrestore(&d->lock, flags);
+
+ /* clean out active and to-be-retransmitted buffers */
+ for (i = 0; i < NFACTIVE; i++) {
+diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
+index 9093f751f1336a..8f3fa149a76d9b 100644
+--- a/drivers/dma-buf/dma-resv.c
++++ b/drivers/dma-buf/dma-resv.c
+@@ -678,11 +678,13 @@ long dma_resv_wait_timeout(struct dma_resv *obj, enum dma_resv_usage usage,
+ dma_resv_iter_begin(&cursor, obj, usage);
+ dma_resv_for_each_fence_unlocked(&cursor, fence) {
+
+- ret = dma_fence_wait_timeout(fence, intr, ret);
+- if (ret <= 0) {
+- dma_resv_iter_end(&cursor);
+- return ret;
+- }
++ ret = dma_fence_wait_timeout(fence, intr, timeout);
++ if (ret <= 0)
++ break;
++
++ /* Even for zero timeout the return value is 1 */
++ if (timeout)
++ timeout = ret;
+ }
+ dma_resv_iter_end(&cursor);
+
+diff --git a/drivers/gpu/drm/exynos/exynos_drm_fimd.c b/drivers/gpu/drm/exynos/exynos_drm_fimd.c
+index 5bdc246f5fad09..341e95269836e0 100644
+--- a/drivers/gpu/drm/exynos/exynos_drm_fimd.c
++++ b/drivers/gpu/drm/exynos/exynos_drm_fimd.c
+@@ -187,6 +187,7 @@ struct fimd_context {
+ u32 i80ifcon;
+ bool i80_if;
+ bool suspended;
++ bool dp_clk_enabled;
+ wait_queue_head_t wait_vsync_queue;
+ atomic_t wait_vsync_event;
+ atomic_t win_updated;
+@@ -1047,7 +1048,18 @@ static void fimd_dp_clock_enable(struct exynos_drm_clk *clk, bool enable)
+ struct fimd_context *ctx = container_of(clk, struct fimd_context,
+ dp_clk);
+ u32 val = enable ? DP_MIE_CLK_DP_ENABLE : DP_MIE_CLK_DISABLE;
++
++ if (enable == ctx->dp_clk_enabled)
++ return;
++
++ if (enable)
++ pm_runtime_resume_and_get(ctx->dev);
++
++ ctx->dp_clk_enabled = enable;
+ writel(val, ctx->regs + DP_MIE_CLKCON);
++
++ if (!enable)
++ pm_runtime_put(ctx->dev);
+ }
+
+ static const struct exynos_drm_crtc_ops fimd_crtc_ops = {
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+index 023b2ea74c3601..5a687a3686bd53 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
++++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+@@ -2013,7 +2013,7 @@ static int eb_capture_stage(struct i915_execbuffer *eb)
+ continue;
+
+ if (i915_gem_context_is_recoverable(eb->gem_context) &&
+- GRAPHICS_VER_FULL(eb->i915) > IP_VER(12, 10))
++ (IS_DGFX(eb->i915) || GRAPHICS_VER_FULL(eb->i915) > IP_VER(12, 0)))
+ return -EINVAL;
+
+ for_each_batch_create_order(eb, j) {
+diff --git a/drivers/gpu/drm/i915/gt/intel_gsc.c b/drivers/gpu/drm/i915/gt/intel_gsc.c
+index bcc3605158dbde..27420ed631d850 100644
+--- a/drivers/gpu/drm/i915/gt/intel_gsc.c
++++ b/drivers/gpu/drm/i915/gt/intel_gsc.c
+@@ -298,7 +298,7 @@ static void gsc_irq_handler(struct intel_gt *gt, unsigned int intf_id)
+ if (gt->gsc.intf[intf_id].irq < 0)
+ return;
+
+- ret = generic_handle_irq(gt->gsc.intf[intf_id].irq);
++ ret = generic_handle_irq_safe(gt->gsc.intf[intf_id].irq);
+ if (ret)
+ 		drm_err_ratelimited(&gt->i915->drm, "error handling GSC irq: %d\n", ret);
+ }
+diff --git a/drivers/gpu/drm/i915/gt/intel_ring_submission.c b/drivers/gpu/drm/i915/gt/intel_ring_submission.c
+index 92085ffd23de0e..4eb78895773f6f 100644
+--- a/drivers/gpu/drm/i915/gt/intel_ring_submission.c
++++ b/drivers/gpu/drm/i915/gt/intel_ring_submission.c
+@@ -573,7 +573,6 @@ static int ring_context_alloc(struct intel_context *ce)
+ /* One ringbuffer to rule them all */
+ GEM_BUG_ON(!engine->legacy.ring);
+ ce->ring = engine->legacy.ring;
+- ce->timeline = intel_timeline_get(engine->legacy.timeline);
+
+ GEM_BUG_ON(ce->state);
+ if (engine->context_size) {
+@@ -586,6 +585,8 @@ static int ring_context_alloc(struct intel_context *ce)
+ ce->state = vma;
+ }
+
++ ce->timeline = intel_timeline_get(engine->legacy.timeline);
++
+ return 0;
+ }
+
+diff --git a/drivers/gpu/drm/i915/selftests/i915_request.c b/drivers/gpu/drm/i915/selftests/i915_request.c
+index a9b79888c19316..c7ce2c570ad1f8 100644
+--- a/drivers/gpu/drm/i915/selftests/i915_request.c
++++ b/drivers/gpu/drm/i915/selftests/i915_request.c
+@@ -73,8 +73,8 @@ static int igt_add_request(void *arg)
+ /* Basic preliminary test to create a request and let it loose! */
+
+ request = mock_request(rcs0(i915)->kernel_context, HZ / 10);
+- if (!request)
+- return -ENOMEM;
++ if (IS_ERR(request))
++ return PTR_ERR(request);
+
+ i915_request_add(request);
+
+@@ -91,8 +91,8 @@ static int igt_wait_request(void *arg)
+ /* Submit a request, then wait upon it */
+
+ request = mock_request(rcs0(i915)->kernel_context, T);
+- if (!request)
+- return -ENOMEM;
++ if (IS_ERR(request))
++ return PTR_ERR(request);
+
+ i915_request_get(request);
+
+@@ -160,8 +160,8 @@ static int igt_fence_wait(void *arg)
+ /* Submit a request, treat it as a fence and wait upon it */
+
+ request = mock_request(rcs0(i915)->kernel_context, T);
+- if (!request)
+- return -ENOMEM;
++ if (IS_ERR(request))
++ return PTR_ERR(request);
+
+ if (dma_fence_wait_timeout(&request->fence, false, T) != -ETIME) {
+ pr_err("fence wait success before submit (expected timeout)!\n");
+@@ -219,8 +219,8 @@ static int igt_request_rewind(void *arg)
+ GEM_BUG_ON(IS_ERR(ce));
+ request = mock_request(ce, 2 * HZ);
+ intel_context_put(ce);
+- if (!request) {
+- err = -ENOMEM;
++ if (IS_ERR(request)) {
++ err = PTR_ERR(request);
+ goto err_context_0;
+ }
+
+@@ -237,8 +237,8 @@ static int igt_request_rewind(void *arg)
+ GEM_BUG_ON(IS_ERR(ce));
+ vip = mock_request(ce, 0);
+ intel_context_put(ce);
+- if (!vip) {
+- err = -ENOMEM;
++ if (IS_ERR(vip)) {
++ err = PTR_ERR(vip);
+ goto err_context_1;
+ }
+
+diff --git a/drivers/gpu/drm/i915/selftests/mock_request.c b/drivers/gpu/drm/i915/selftests/mock_request.c
+index 09f747228dff57..1b0cf073e9643f 100644
+--- a/drivers/gpu/drm/i915/selftests/mock_request.c
++++ b/drivers/gpu/drm/i915/selftests/mock_request.c
+@@ -35,7 +35,7 @@ mock_request(struct intel_context *ce, unsigned long delay)
+ /* NB the i915->requests slab cache is enlarged to fit mock_request */
+ request = intel_context_create_request(ce);
+ if (IS_ERR(request))
+- return NULL;
++ return request;
+
+ request->mock.delay = delay;
+ return request;
+diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
+index 018b39546fc1dd..bbe4f1665b6039 100644
+--- a/drivers/gpu/drm/msm/msm_gem_submit.c
++++ b/drivers/gpu/drm/msm/msm_gem_submit.c
+@@ -85,6 +85,15 @@ void __msm_gem_submit_destroy(struct kref *kref)
+ container_of(kref, struct msm_gem_submit, ref);
+ unsigned i;
+
++ /*
++ * In error paths, we could unref the submit without calling
++ * drm_sched_entity_push_job(), so msm_job_free() will never
++ * get called. Since drm_sched_job_cleanup() will NULL out
++ * s_fence, we can use that to detect this case.
++ */
++ if (submit->base.s_fence)
++ drm_sched_job_cleanup(&submit->base);
++
+ if (submit->fence_id) {
+ spin_lock(&submit->queue->idr_lock);
+ idr_remove(&submit->queue->fence_idr, submit->fence_id);
+@@ -754,6 +763,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
+ struct msm_ringbuffer *ring;
+ struct msm_submit_post_dep *post_deps = NULL;
+ struct drm_syncobj **syncobjs_to_reset = NULL;
++ struct sync_file *sync_file = NULL;
+ int out_fence_fd = -1;
+ bool has_ww_ticket = false;
+ unsigned i;
+@@ -970,7 +980,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
+ }
+
+ if (ret == 0 && args->flags & MSM_SUBMIT_FENCE_FD_OUT) {
+- struct sync_file *sync_file = sync_file_create(submit->user_fence);
++ sync_file = sync_file_create(submit->user_fence);
+ if (!sync_file) {
+ ret = -ENOMEM;
+ } else {
+@@ -1003,8 +1013,11 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
+ out_unlock:
+ mutex_unlock(&queue->lock);
+ out_post_unlock:
+- if (ret && (out_fence_fd >= 0))
++ if (ret && (out_fence_fd >= 0)) {
+ put_unused_fd(out_fence_fd);
++ if (sync_file)
++ fput(sync_file->file);
++ }
+
+ if (!IS_ERR_OR_NULL(submit)) {
+ msm_gem_submit_put(submit);
+diff --git a/drivers/gpu/drm/tiny/simpledrm.c b/drivers/gpu/drm/tiny/simpledrm.c
+index 8ea120eb8674bd..30676b1073034e 100644
+--- a/drivers/gpu/drm/tiny/simpledrm.c
++++ b/drivers/gpu/drm/tiny/simpledrm.c
+@@ -276,7 +276,7 @@ static struct simpledrm_device *simpledrm_device_of_dev(struct drm_device *dev)
+
+ static void simpledrm_device_release_clocks(void *res)
+ {
+- struct simpledrm_device *sdev = simpledrm_device_of_dev(res);
++ struct simpledrm_device *sdev = res;
+ unsigned int i;
+
+ for (i = 0; i < sdev->clk_count; ++i) {
+@@ -374,7 +374,7 @@ static int simpledrm_device_init_clocks(struct simpledrm_device *sdev)
+
+ static void simpledrm_device_release_regulators(void *res)
+ {
+- struct simpledrm_device *sdev = simpledrm_device_of_dev(res);
++ struct simpledrm_device *sdev = res;
+ unsigned int i;
+
+ for (i = 0; i < sdev->regulator_count; ++i) {
+diff --git a/drivers/gpu/drm/v3d/v3d_drv.h b/drivers/gpu/drm/v3d/v3d_drv.h
+index 7f664a4b2a7521..bcef978ba9c4ca 100644
+--- a/drivers/gpu/drm/v3d/v3d_drv.h
++++ b/drivers/gpu/drm/v3d/v3d_drv.h
+@@ -62,6 +62,12 @@ struct v3d_perfmon {
+ u64 values[];
+ };
+
++enum v3d_irq {
++ V3D_CORE_IRQ,
++ V3D_HUB_IRQ,
++ V3D_MAX_IRQS,
++};
++
+ struct v3d_dev {
+ struct drm_device drm;
+
+@@ -71,6 +77,8 @@ struct v3d_dev {
+ int ver;
+ bool single_irq_line;
+
++ int irq[V3D_MAX_IRQS];
++
+ void __iomem *hub_regs;
+ void __iomem *core_regs[3];
+ void __iomem *bridge_regs;
+diff --git a/drivers/gpu/drm/v3d/v3d_gem.c b/drivers/gpu/drm/v3d/v3d_gem.c
+index 2e94ce788c714b..ef991a9b1c6c46 100644
+--- a/drivers/gpu/drm/v3d/v3d_gem.c
++++ b/drivers/gpu/drm/v3d/v3d_gem.c
+@@ -120,6 +120,8 @@ v3d_reset(struct v3d_dev *v3d)
+ if (false)
+ v3d_idle_axi(v3d, 0);
+
++ v3d_irq_disable(v3d);
++
+ v3d_idle_gca(v3d);
+ v3d_reset_v3d(v3d);
+
+diff --git a/drivers/gpu/drm/v3d/v3d_irq.c b/drivers/gpu/drm/v3d/v3d_irq.c
+index b2d59a16869728..641315dbee8b29 100644
+--- a/drivers/gpu/drm/v3d/v3d_irq.c
++++ b/drivers/gpu/drm/v3d/v3d_irq.c
+@@ -215,7 +215,7 @@ v3d_hub_irq(int irq, void *arg)
+ int
+ v3d_irq_init(struct v3d_dev *v3d)
+ {
+- int irq1, ret, core;
++ int irq, ret, core;
+
+ INIT_WORK(&v3d->overflow_mem_work, v3d_overflow_mem_work);
+
+@@ -226,17 +226,24 @@ v3d_irq_init(struct v3d_dev *v3d)
+ V3D_CORE_WRITE(core, V3D_CTL_INT_CLR, V3D_CORE_IRQS);
+ V3D_WRITE(V3D_HUB_INT_CLR, V3D_HUB_IRQS);
+
+- irq1 = platform_get_irq_optional(v3d_to_pdev(v3d), 1);
+- if (irq1 == -EPROBE_DEFER)
+- return irq1;
+- if (irq1 > 0) {
+- ret = devm_request_irq(v3d->drm.dev, irq1,
++ irq = platform_get_irq_optional(v3d_to_pdev(v3d), 1);
++ if (irq == -EPROBE_DEFER)
++ return irq;
++ if (irq > 0) {
++ v3d->irq[V3D_CORE_IRQ] = irq;
++
++ ret = devm_request_irq(v3d->drm.dev, v3d->irq[V3D_CORE_IRQ],
+ v3d_irq, IRQF_SHARED,
+ "v3d_core0", v3d);
+ if (ret)
+ goto fail;
+- ret = devm_request_irq(v3d->drm.dev,
+- platform_get_irq(v3d_to_pdev(v3d), 0),
++
++ irq = platform_get_irq(v3d_to_pdev(v3d), 0);
++ if (irq < 0)
++ return irq;
++ v3d->irq[V3D_HUB_IRQ] = irq;
++
++ ret = devm_request_irq(v3d->drm.dev, v3d->irq[V3D_HUB_IRQ],
+ v3d_hub_irq, IRQF_SHARED,
+ "v3d_hub", v3d);
+ if (ret)
+@@ -244,8 +251,12 @@ v3d_irq_init(struct v3d_dev *v3d)
+ } else {
+ v3d->single_irq_line = true;
+
+- ret = devm_request_irq(v3d->drm.dev,
+- platform_get_irq(v3d_to_pdev(v3d), 0),
++ irq = platform_get_irq(v3d_to_pdev(v3d), 0);
++ if (irq < 0)
++ return irq;
++ v3d->irq[V3D_CORE_IRQ] = irq;
++
++ ret = devm_request_irq(v3d->drm.dev, v3d->irq[V3D_CORE_IRQ],
+ v3d_irq, IRQF_SHARED,
+ "v3d", v3d);
+ if (ret)
+@@ -286,6 +297,12 @@ v3d_irq_disable(struct v3d_dev *v3d)
+ V3D_CORE_WRITE(core, V3D_CTL_INT_MSK_SET, ~0);
+ V3D_WRITE(V3D_HUB_INT_MSK_SET, ~0);
+
++ /* Finish any interrupt handler still in flight. */
++ for (int i = 0; i < V3D_MAX_IRQS; i++) {
++ if (v3d->irq[i])
++ synchronize_irq(v3d->irq[i]);
++ }
++
+ /* Clear any pending interrupts we might have left. */
+ for (core = 0; core < v3d->cores; core++)
+ V3D_CORE_WRITE(core, V3D_CTL_INT_CLR, V3D_CORE_IRQS);
+diff --git a/drivers/i2c/busses/i2c-designware-master.c b/drivers/i2c/busses/i2c-designware-master.c
+index 51f5491648c077..e865869ccc50ee 100644
+--- a/drivers/i2c/busses/i2c-designware-master.c
++++ b/drivers/i2c/busses/i2c-designware-master.c
+@@ -327,6 +327,7 @@ static int amd_i2c_dw_xfer_quirk(struct i2c_adapter *adap, struct i2c_msg *msgs,
+
+ dev->msgs = msgs;
+ dev->msgs_num = num_msgs;
++ dev->msg_write_idx = 0;
+ i2c_dw_xfer_init(dev);
+ regmap_write(dev->map, DW_IC_INTR_MASK, 0);
+
+diff --git a/drivers/infiniband/hw/mlx5/counters.c b/drivers/infiniband/hw/mlx5/counters.c
+index b049bba2157905..d06128501ce4e7 100644
+--- a/drivers/infiniband/hw/mlx5/counters.c
++++ b/drivers/infiniband/hw/mlx5/counters.c
+@@ -387,7 +387,7 @@ static int do_get_hw_stats(struct ib_device *ibdev,
+ return ret;
+
+ /* We don't expose device counters over Vports */
+- if (is_mdev_switchdev_mode(dev->mdev) && port_num != 0)
++ if (is_mdev_switchdev_mode(dev->mdev) && dev->is_rep && port_num != 0)
+ goto done;
+
+ if (MLX5_CAP_PCAM_FEATURE(dev->mdev, rx_icrc_encapsulated_counter)) {
+@@ -407,7 +407,7 @@ static int do_get_hw_stats(struct ib_device *ibdev,
+ */
+ goto done;
+ }
+- ret = mlx5_lag_query_cong_counters(dev->mdev,
++ ret = mlx5_lag_query_cong_counters(mdev,
+ stats->value +
+ cnts->num_q_counters,
+ cnts->num_cong_counters,
+diff --git a/drivers/infiniband/hw/mlx5/devx.c b/drivers/infiniband/hw/mlx5/devx.c
+index 6e19974ecf6e71..3f1fa45d936821 100644
+--- a/drivers/infiniband/hw/mlx5/devx.c
++++ b/drivers/infiniband/hw/mlx5/devx.c
+@@ -1914,6 +1914,7 @@ subscribe_event_xa_alloc(struct mlx5_devx_event_table *devx_event_table,
+ /* Level1 is valid for future use, no need to free */
+ return -ENOMEM;
+
++ INIT_LIST_HEAD(&obj_event->obj_sub_list);
+ err = xa_insert(&event->object_ids,
+ key_level2,
+ obj_event,
+@@ -1922,7 +1923,6 @@ subscribe_event_xa_alloc(struct mlx5_devx_event_table *devx_event_table,
+ kfree(obj_event);
+ return err;
+ }
+- INIT_LIST_HEAD(&obj_event->obj_sub_list);
+ }
+
+ return 0;
+diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
+index ada7dbf8eb1cf5..e922fb87286547 100644
+--- a/drivers/infiniband/hw/mlx5/main.c
++++ b/drivers/infiniband/hw/mlx5/main.c
+@@ -1690,6 +1690,33 @@ static void deallocate_uars(struct mlx5_ib_dev *dev,
+ context->devx_uid);
+ }
+
++static int mlx5_ib_enable_lb_mp(struct mlx5_core_dev *master,
++ struct mlx5_core_dev *slave)
++{
++ int err;
++
++ err = mlx5_nic_vport_update_local_lb(master, true);
++ if (err)
++ return err;
++
++ err = mlx5_nic_vport_update_local_lb(slave, true);
++ if (err)
++ goto out;
++
++ return 0;
++
++out:
++ mlx5_nic_vport_update_local_lb(master, false);
++ return err;
++}
++
++static void mlx5_ib_disable_lb_mp(struct mlx5_core_dev *master,
++ struct mlx5_core_dev *slave)
++{
++ mlx5_nic_vport_update_local_lb(slave, false);
++ mlx5_nic_vport_update_local_lb(master, false);
++}
++
+ int mlx5_ib_enable_lb(struct mlx5_ib_dev *dev, bool td, bool qp)
+ {
+ int err = 0;
+@@ -3224,6 +3251,8 @@ static void mlx5_ib_unbind_slave_port(struct mlx5_ib_dev *ibdev,
+
+ lockdep_assert_held(&mlx5_ib_multiport_mutex);
+
++ mlx5_ib_disable_lb_mp(ibdev->mdev, mpi->mdev);
++
+ mlx5_core_mp_event_replay(ibdev->mdev,
+ MLX5_DRIVER_EVENT_AFFILIATION_REMOVED,
+ NULL);
+@@ -3319,6 +3348,10 @@ static bool mlx5_ib_bind_slave_port(struct mlx5_ib_dev *ibdev,
+ MLX5_DRIVER_EVENT_AFFILIATION_DONE,
+ &key);
+
++ err = mlx5_ib_enable_lb_mp(ibdev->mdev, mpi->mdev);
++ if (err)
++ goto unbind;
++
+ return true;
+
+ unbind:
+diff --git a/drivers/input/joystick/xpad.c b/drivers/input/joystick/xpad.c
+index e6fed973ea7411..05c00421ff2b7e 100644
+--- a/drivers/input/joystick/xpad.c
++++ b/drivers/input/joystick/xpad.c
+@@ -174,6 +174,7 @@ static const struct xpad_device {
+ { 0x05fd, 0x107a, "InterAct 'PowerPad Pro' X-Box pad (Germany)", 0, XTYPE_XBOX },
+ { 0x05fe, 0x3030, "Chic Controller", 0, XTYPE_XBOX },
+ { 0x05fe, 0x3031, "Chic Controller", 0, XTYPE_XBOX },
++ { 0x0502, 0x1305, "Acer NGR200", 0, XTYPE_XBOX },
+ { 0x062a, 0x0020, "Logic3 Xbox GamePad", 0, XTYPE_XBOX },
+ { 0x062a, 0x0033, "Competition Pro Steering Wheel", 0, XTYPE_XBOX },
+ { 0x06a3, 0x0200, "Saitek Racing Wheel", 0, XTYPE_XBOX },
+@@ -514,6 +515,7 @@ static const struct usb_device_id xpad_table[] = {
+ XPAD_XBOX360_VENDOR(0x045e), /* Microsoft Xbox 360 controllers */
+ XPAD_XBOXONE_VENDOR(0x045e), /* Microsoft Xbox One controllers */
+ XPAD_XBOX360_VENDOR(0x046d), /* Logitech Xbox 360-style controllers */
++ XPAD_XBOX360_VENDOR(0x0502), /* Acer Inc. Xbox 360 style controllers */
+ XPAD_XBOX360_VENDOR(0x056e), /* Elecom JC-U3613M */
+ XPAD_XBOX360_VENDOR(0x06a3), /* Saitek P3600 */
+ XPAD_XBOX360_VENDOR(0x0738), /* Mad Catz Xbox 360 controllers */
+diff --git a/drivers/input/misc/iqs7222.c b/drivers/input/misc/iqs7222.c
+index b98529568eeb83..ce7e977cc8a7a1 100644
+--- a/drivers/input/misc/iqs7222.c
++++ b/drivers/input/misc/iqs7222.c
+@@ -301,6 +301,7 @@ struct iqs7222_dev_desc {
+ int allow_offset;
+ int event_offset;
+ int comms_offset;
++ int ext_chan;
+ bool legacy_gesture;
+ struct iqs7222_reg_grp_desc reg_grps[IQS7222_NUM_REG_GRPS];
+ };
+@@ -315,6 +316,7 @@ static const struct iqs7222_dev_desc iqs7222_devs[] = {
+ .allow_offset = 9,
+ .event_offset = 10,
+ .comms_offset = 12,
++ .ext_chan = 10,
+ .reg_grps = {
+ [IQS7222_REG_GRP_STAT] = {
+ .base = IQS7222_SYS_STATUS,
+@@ -373,6 +375,7 @@ static const struct iqs7222_dev_desc iqs7222_devs[] = {
+ .allow_offset = 9,
+ .event_offset = 10,
+ .comms_offset = 12,
++ .ext_chan = 10,
+ .legacy_gesture = true,
+ .reg_grps = {
+ [IQS7222_REG_GRP_STAT] = {
+@@ -2244,7 +2247,7 @@ static int iqs7222_parse_chan(struct iqs7222_private *iqs7222,
+ const struct iqs7222_dev_desc *dev_desc = iqs7222->dev_desc;
+ struct i2c_client *client = iqs7222->client;
+ int num_chan = dev_desc->reg_grps[IQS7222_REG_GRP_CHAN].num_row;
+- int ext_chan = rounddown(num_chan, 10);
++ int ext_chan = dev_desc->ext_chan ? : num_chan;
+ int error, i;
+ u16 *chan_setup = iqs7222->chan_setup[chan_index];
+ u16 *sys_setup = iqs7222->sys_setup;
+@@ -2448,7 +2451,7 @@ static int iqs7222_parse_sldr(struct iqs7222_private *iqs7222,
+ const struct iqs7222_dev_desc *dev_desc = iqs7222->dev_desc;
+ struct i2c_client *client = iqs7222->client;
+ int num_chan = dev_desc->reg_grps[IQS7222_REG_GRP_CHAN].num_row;
+- int ext_chan = rounddown(num_chan, 10);
++ int ext_chan = dev_desc->ext_chan ? : num_chan;
+ int count, error, reg_offset, i;
+ u16 *event_mask = &iqs7222->sys_setup[dev_desc->event_offset];
+ u16 *sldr_setup = iqs7222->sldr_setup[sldr_index];
+diff --git a/drivers/iommu/rockchip-iommu.c b/drivers/iommu/rockchip-iommu.c
+index 8ff69fbf9f65db..36fec26d2a04ac 100644
+--- a/drivers/iommu/rockchip-iommu.c
++++ b/drivers/iommu/rockchip-iommu.c
+@@ -1177,7 +1177,6 @@ static int rk_iommu_of_xlate(struct device *dev,
+ iommu_dev = of_find_device_by_node(args->np);
+
+ data->iommu = platform_get_drvdata(iommu_dev);
+- data->iommu->domain = &rk_identity_domain;
+ dev_iommu_priv_set(dev, data);
+
+ platform_device_put(iommu_dev);
+@@ -1217,6 +1216,8 @@ static int rk_iommu_probe(struct platform_device *pdev)
+ if (!iommu)
+ return -ENOMEM;
+
++ iommu->domain = &rk_identity_domain;
++
+ platform_set_drvdata(pdev, iommu);
+ iommu->dev = dev;
+ iommu->num_mmu = 0;
+diff --git a/drivers/mmc/core/quirks.h b/drivers/mmc/core/quirks.h
+index 7f893bafaa607d..c417ed34c05767 100644
+--- a/drivers/mmc/core/quirks.h
++++ b/drivers/mmc/core/quirks.h
+@@ -44,6 +44,12 @@ static const struct mmc_fixup __maybe_unused mmc_sd_fixups[] = {
+ 0, -1ull, SDIO_ANY_ID, SDIO_ANY_ID, add_quirk_sd,
+ MMC_QUIRK_NO_UHS_DDR50_TUNING, EXT_CSD_REV_ANY),
+
++ /*
++	 * Some SD cards report discard support even though they don't
++ */
++ MMC_FIXUP(CID_NAME_ANY, CID_MANFID_SANDISK_SD, 0x5344, add_quirk_sd,
++ MMC_QUIRK_BROKEN_SD_DISCARD),
++
+ END_FIXUP
+ };
+
+@@ -147,12 +153,6 @@ static const struct mmc_fixup __maybe_unused mmc_blk_fixups[] = {
+ MMC_FIXUP("M62704", CID_MANFID_KINGSTON, 0x0100, add_quirk_mmc,
+ MMC_QUIRK_TRIM_BROKEN),
+
+- /*
+- * Some SD cards reports discard support while they don't
+- */
+- MMC_FIXUP(CID_NAME_ANY, CID_MANFID_SANDISK_SD, 0x5344, add_quirk_sd,
+- MMC_QUIRK_BROKEN_SD_DISCARD),
+-
+ END_FIXUP
+ };
+
+diff --git a/drivers/mmc/host/mtk-sd.c b/drivers/mmc/host/mtk-sd.c
+index 02f3748e46c144..cf685c0a17edc0 100644
+--- a/drivers/mmc/host/mtk-sd.c
++++ b/drivers/mmc/host/mtk-sd.c
+@@ -770,12 +770,18 @@ static inline void msdc_dma_setup(struct msdc_host *host, struct msdc_dma *dma,
+ static void msdc_prepare_data(struct msdc_host *host, struct mmc_data *data)
+ {
+ if (!(data->host_cookie & MSDC_PREPARE_FLAG)) {
+- data->host_cookie |= MSDC_PREPARE_FLAG;
+ data->sg_count = dma_map_sg(host->dev, data->sg, data->sg_len,
+ mmc_get_dma_dir(data));
++ if (data->sg_count)
++ data->host_cookie |= MSDC_PREPARE_FLAG;
+ }
+ }
+
++static bool msdc_data_prepared(struct mmc_data *data)
++{
++ return data->host_cookie & MSDC_PREPARE_FLAG;
++}
++
+ static void msdc_unprepare_data(struct msdc_host *host, struct mmc_data *data)
+ {
+ if (data->host_cookie & MSDC_ASYNC_FLAG)
+@@ -1338,8 +1344,19 @@ static void msdc_ops_request(struct mmc_host *mmc, struct mmc_request *mrq)
+ WARN_ON(host->mrq);
+ host->mrq = mrq;
+
+- if (mrq->data)
++ if (mrq->data) {
+ msdc_prepare_data(host, mrq->data);
++ if (!msdc_data_prepared(mrq->data)) {
++ host->mrq = NULL;
++ /*
++			 * Failed to prepare the DMA area; fail fast before
++ * starting any commands.
++ */
++ mrq->cmd->error = -ENOSPC;
++ mmc_request_done(mmc_from_priv(host), mrq);
++ return;
++ }
++ }
+
+ /* if SBC is required, we have HW option and SW option.
+ * if HW option is enabled, and SBC does not have "special" flags,
+diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
+index f32429ff905ff6..9796a3cb3ca62c 100644
+--- a/drivers/mmc/host/sdhci.c
++++ b/drivers/mmc/host/sdhci.c
+@@ -2035,15 +2035,10 @@ void sdhci_set_clock(struct sdhci_host *host, unsigned int clock)
+
+ host->mmc->actual_clock = 0;
+
+- clk = sdhci_readw(host, SDHCI_CLOCK_CONTROL);
+- if (clk & SDHCI_CLOCK_CARD_EN)
+- sdhci_writew(host, clk & ~SDHCI_CLOCK_CARD_EN,
+- SDHCI_CLOCK_CONTROL);
++ sdhci_writew(host, 0, SDHCI_CLOCK_CONTROL);
+
+- if (clock == 0) {
+- sdhci_writew(host, 0, SDHCI_CLOCK_CONTROL);
++ if (clock == 0)
+ return;
+- }
+
+ clk = sdhci_calc_clk(host, clock, &host->mmc->actual_clock);
+ sdhci_enable_clk(host, clk);
+diff --git a/drivers/mmc/host/sdhci.h b/drivers/mmc/host/sdhci.h
+index a315cee698094f..16d7bff9eae562 100644
+--- a/drivers/mmc/host/sdhci.h
++++ b/drivers/mmc/host/sdhci.h
+@@ -825,4 +825,20 @@ void sdhci_switch_external_dma(struct sdhci_host *host, bool en);
+ void sdhci_set_data_timeout_irq(struct sdhci_host *host, bool enable);
+ void __sdhci_set_timeout(struct sdhci_host *host, struct mmc_command *cmd);
+
++#if defined(CONFIG_DYNAMIC_DEBUG) || \
++ (defined(CONFIG_DYNAMIC_DEBUG_CORE) && defined(DYNAMIC_DEBUG_MODULE))
++#define SDHCI_DBG_ANYWAY 0
++#elif defined(DEBUG)
++#define SDHCI_DBG_ANYWAY 1
++#else
++#define SDHCI_DBG_ANYWAY 0
++#endif
++
++#define sdhci_dbg_dumpregs(host, fmt) \
++do { \
++ DEFINE_DYNAMIC_DEBUG_METADATA(descriptor, fmt); \
++ if (DYNAMIC_DEBUG_BRANCH(descriptor) || SDHCI_DBG_ANYWAY) \
++ sdhci_dumpregs(host); \
++} while (0)
++
+ #endif /* __SDHCI_HW_H */
+diff --git a/drivers/mtd/nand/spi/core.c b/drivers/mtd/nand/spi/core.c
+index 393ff37f0d23c1..cd21bf8f254a75 100644
+--- a/drivers/mtd/nand/spi/core.c
++++ b/drivers/mtd/nand/spi/core.c
+@@ -1316,6 +1316,7 @@ static void spinand_cleanup(struct spinand_device *spinand)
+ {
+ struct nand_device *nand = spinand_to_nand(spinand);
+
++ nanddev_ecc_engine_cleanup(nand);
+ nanddev_cleanup(nand);
+ spinand_manufacturer_cleanup(spinand);
+ kfree(spinand->databuf);
+diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-common.h b/drivers/net/ethernet/amd/xgbe/xgbe-common.h
+index 3b70f67376331e..aa25a8a0a106f6 100644
+--- a/drivers/net/ethernet/amd/xgbe/xgbe-common.h
++++ b/drivers/net/ethernet/amd/xgbe/xgbe-common.h
+@@ -1373,6 +1373,8 @@
+ #define MDIO_VEND2_CTRL1_SS13 BIT(13)
+ #endif
+
++#define XGBE_VEND2_MAC_AUTO_SW BIT(9)
++
+ /* MDIO mask values */
+ #define XGBE_AN_CL73_INT_CMPLT BIT(0)
+ #define XGBE_AN_CL73_INC_LINK BIT(1)
+diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c b/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
+index 4a2dc705b52801..8345d439184ebe 100644
+--- a/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
++++ b/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
+@@ -375,6 +375,10 @@ static void xgbe_an37_set(struct xgbe_prv_data *pdata, bool enable,
+ reg |= MDIO_VEND2_CTRL1_AN_RESTART;
+
+ XMDIO_WRITE(pdata, MDIO_MMD_VEND2, MDIO_CTRL1, reg);
++
++ reg = XMDIO_READ(pdata, MDIO_MMD_VEND2, MDIO_PCS_DIG_CTRL);
++ reg |= XGBE_VEND2_MAC_AUTO_SW;
++ XMDIO_WRITE(pdata, MDIO_MMD_VEND2, MDIO_PCS_DIG_CTRL, reg);
+ }
+
+ static void xgbe_an37_restart(struct xgbe_prv_data *pdata)
+@@ -1003,6 +1007,11 @@ static void xgbe_an37_init(struct xgbe_prv_data *pdata)
+
+ netif_dbg(pdata, link, pdata->netdev, "CL37 AN (%s) initialized\n",
+ (pdata->an_mode == XGBE_AN_MODE_CL37) ? "BaseX" : "SGMII");
++
++ reg = XMDIO_READ(pdata, MDIO_MMD_AN, MDIO_CTRL1);
++ reg &= ~MDIO_AN_CTRL1_ENABLE;
++ XMDIO_WRITE(pdata, MDIO_MMD_AN, MDIO_CTRL1, reg);
++
+ }
+
+ static void xgbe_an73_init(struct xgbe_prv_data *pdata)
+@@ -1404,6 +1413,10 @@ static void xgbe_phy_status(struct xgbe_prv_data *pdata)
+
+ pdata->phy.link = pdata->phy_if.phy_impl.link_status(pdata,
+ &an_restart);
++ /* bail out if the link status register read fails */
++ if (pdata->phy.link < 0)
++ return;
++
+ if (an_restart) {
+ xgbe_phy_config_aneg(pdata);
+ goto adjust_link;
+diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c b/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
+index 268399dfcf22f0..32e633d1134843 100644
+--- a/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
++++ b/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
+@@ -2855,8 +2855,7 @@ static bool xgbe_phy_valid_speed(struct xgbe_prv_data *pdata, int speed)
+ static int xgbe_phy_link_status(struct xgbe_prv_data *pdata, int *an_restart)
+ {
+ struct xgbe_phy_data *phy_data = pdata->phy_data;
+- unsigned int reg;
+- int ret;
++ int reg, ret;
+
+ *an_restart = 0;
+
+@@ -2890,11 +2889,20 @@ static int xgbe_phy_link_status(struct xgbe_prv_data *pdata, int *an_restart)
+ return 0;
+ }
+
+- /* Link status is latched low, so read once to clear
+- * and then read again to get current state
+- */
+- reg = XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_STAT1);
+ reg = XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_STAT1);
++ if (reg < 0)
++ return reg;
++
++ /* Link status is latched low so that momentary link drops
++	 * can be detected. If link was already down, read again
++ * to get the latest state.
++ */
++
++ if (!pdata->phy.link && !(reg & MDIO_STAT1_LSTATUS)) {
++ reg = XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_STAT1);
++ if (reg < 0)
++ return reg;
++ }
+
+ if (pdata->en_rx_adap) {
+ /* if the link is available and adaptation is done,
+@@ -2913,9 +2921,7 @@ static int xgbe_phy_link_status(struct xgbe_prv_data *pdata, int *an_restart)
+ xgbe_phy_set_mode(pdata, phy_data->cur_mode);
+ }
+
+- /* check again for the link and adaptation status */
+- reg = XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_STAT1);
+- if ((reg & MDIO_STAT1_LSTATUS) && pdata->rx_adapt_done)
++ if (pdata->rx_adapt_done)
+ return 1;
+ } else if (reg & MDIO_STAT1_LSTATUS)
+ return 1;
+diff --git a/drivers/net/ethernet/amd/xgbe/xgbe.h b/drivers/net/ethernet/amd/xgbe/xgbe.h
+index 173f4dad470f55..a596cd08124fa4 100644
+--- a/drivers/net/ethernet/amd/xgbe/xgbe.h
++++ b/drivers/net/ethernet/amd/xgbe/xgbe.h
+@@ -292,12 +292,12 @@
+ #define XGBE_LINK_TIMEOUT 5
+ #define XGBE_KR_TRAINING_WAIT_ITER 50
+
+-#define XGBE_SGMII_AN_LINK_STATUS BIT(1)
++#define XGBE_SGMII_AN_LINK_DUPLEX BIT(1)
+ #define XGBE_SGMII_AN_LINK_SPEED (BIT(2) | BIT(3))
+ #define XGBE_SGMII_AN_LINK_SPEED_10 0x00
+ #define XGBE_SGMII_AN_LINK_SPEED_100 0x04
+ #define XGBE_SGMII_AN_LINK_SPEED_1000 0x08
+-#define XGBE_SGMII_AN_LINK_DUPLEX BIT(4)
++#define XGBE_SGMII_AN_LINK_STATUS BIT(4)
+
+ /* ECC correctable error notification window (seconds) */
+ #define XGBE_ECC_LIMIT 60
+diff --git a/drivers/net/ethernet/atheros/atlx/atl1.c b/drivers/net/ethernet/atheros/atlx/atl1.c
+index 02aa6fd8ebc2d4..4ed165702d58eb 100644
+--- a/drivers/net/ethernet/atheros/atlx/atl1.c
++++ b/drivers/net/ethernet/atheros/atlx/atl1.c
+@@ -1861,14 +1861,21 @@ static u16 atl1_alloc_rx_buffers(struct atl1_adapter *adapter)
+ break;
+ }
+
+- buffer_info->alloced = 1;
+- buffer_info->skb = skb;
+- buffer_info->length = (u16) adapter->rx_buffer_len;
+ page = virt_to_page(skb->data);
+ offset = offset_in_page(skb->data);
+ buffer_info->dma = dma_map_page(&pdev->dev, page, offset,
+ adapter->rx_buffer_len,
+ DMA_FROM_DEVICE);
++ if (dma_mapping_error(&pdev->dev, buffer_info->dma)) {
++ kfree_skb(skb);
++ adapter->soft_stats.rx_dropped++;
++ break;
++ }
++
++ buffer_info->alloced = 1;
++ buffer_info->skb = skb;
++ buffer_info->length = (u16)adapter->rx_buffer_len;
++
+ rfd_desc->buffer_addr = cpu_to_le64(buffer_info->dma);
+ rfd_desc->buf_len = cpu_to_le16(adapter->rx_buffer_len);
+ rfd_desc->coalese = 0;
+@@ -2183,8 +2190,8 @@ static int atl1_tx_csum(struct atl1_adapter *adapter, struct sk_buff *skb,
+ return 0;
+ }
+
+-static void atl1_tx_map(struct atl1_adapter *adapter, struct sk_buff *skb,
+- struct tx_packet_desc *ptpd)
++static bool atl1_tx_map(struct atl1_adapter *adapter, struct sk_buff *skb,
++ struct tx_packet_desc *ptpd)
+ {
+ struct atl1_tpd_ring *tpd_ring = &adapter->tpd_ring;
+ struct atl1_buffer *buffer_info;
+@@ -2194,6 +2201,7 @@ static void atl1_tx_map(struct atl1_adapter *adapter, struct sk_buff *skb,
+ unsigned int nr_frags;
+ unsigned int f;
+ int retval;
++ u16 first_mapped;
+ u16 next_to_use;
+ u16 data_len;
+ u8 hdr_len;
+@@ -2201,6 +2209,7 @@ static void atl1_tx_map(struct atl1_adapter *adapter, struct sk_buff *skb,
+ buf_len -= skb->data_len;
+ nr_frags = skb_shinfo(skb)->nr_frags;
+ next_to_use = atomic_read(&tpd_ring->next_to_use);
++ first_mapped = next_to_use;
+ buffer_info = &tpd_ring->buffer_info[next_to_use];
+ BUG_ON(buffer_info->skb);
+ /* put skb in last TPD */
+@@ -2216,6 +2225,8 @@ static void atl1_tx_map(struct atl1_adapter *adapter, struct sk_buff *skb,
+ buffer_info->dma = dma_map_page(&adapter->pdev->dev, page,
+ offset, hdr_len,
+ DMA_TO_DEVICE);
++ if (dma_mapping_error(&adapter->pdev->dev, buffer_info->dma))
++ goto dma_err;
+
+ if (++next_to_use == tpd_ring->count)
+ next_to_use = 0;
+@@ -2242,6 +2253,9 @@ static void atl1_tx_map(struct atl1_adapter *adapter, struct sk_buff *skb,
+ page, offset,
+ buffer_info->length,
+ DMA_TO_DEVICE);
++ if (dma_mapping_error(&adapter->pdev->dev,
++ buffer_info->dma))
++ goto dma_err;
+ if (++next_to_use == tpd_ring->count)
+ next_to_use = 0;
+ }
+@@ -2254,6 +2268,8 @@ static void atl1_tx_map(struct atl1_adapter *adapter, struct sk_buff *skb,
+ buffer_info->dma = dma_map_page(&adapter->pdev->dev, page,
+ offset, buf_len,
+ DMA_TO_DEVICE);
++ if (dma_mapping_error(&adapter->pdev->dev, buffer_info->dma))
++ goto dma_err;
+ if (++next_to_use == tpd_ring->count)
+ next_to_use = 0;
+ }
+@@ -2277,6 +2293,9 @@ static void atl1_tx_map(struct atl1_adapter *adapter, struct sk_buff *skb,
+ buffer_info->dma = skb_frag_dma_map(&adapter->pdev->dev,
+ frag, i * ATL1_MAX_TX_BUF_LEN,
+ buffer_info->length, DMA_TO_DEVICE);
++ if (dma_mapping_error(&adapter->pdev->dev,
++ buffer_info->dma))
++ goto dma_err;
+
+ if (++next_to_use == tpd_ring->count)
+ next_to_use = 0;
+@@ -2285,6 +2304,22 @@ static void atl1_tx_map(struct atl1_adapter *adapter, struct sk_buff *skb,
+
+ /* last tpd's buffer-info */
+ buffer_info->skb = skb;
++
++ return true;
++
++ dma_err:
++ while (first_mapped != next_to_use) {
++ buffer_info = &tpd_ring->buffer_info[first_mapped];
++ dma_unmap_page(&adapter->pdev->dev,
++ buffer_info->dma,
++ buffer_info->length,
++ DMA_TO_DEVICE);
++ buffer_info->dma = 0;
++
++ if (++first_mapped == tpd_ring->count)
++ first_mapped = 0;
++ }
++ return false;
+ }
+
+ static void atl1_tx_queue(struct atl1_adapter *adapter, u16 count,
+@@ -2355,10 +2390,8 @@ static netdev_tx_t atl1_xmit_frame(struct sk_buff *skb,
+
+ len = skb_headlen(skb);
+
+- if (unlikely(skb->len <= 0)) {
+- dev_kfree_skb_any(skb);
+- return NETDEV_TX_OK;
+- }
++ if (unlikely(skb->len <= 0))
++ goto drop_packet;
+
+ nr_frags = skb_shinfo(skb)->nr_frags;
+ for (f = 0; f < nr_frags; f++) {
+@@ -2371,10 +2404,9 @@ static netdev_tx_t atl1_xmit_frame(struct sk_buff *skb,
+ if (mss) {
+ if (skb->protocol == htons(ETH_P_IP)) {
+ proto_hdr_len = skb_tcp_all_headers(skb);
+- if (unlikely(proto_hdr_len > len)) {
+- dev_kfree_skb_any(skb);
+- return NETDEV_TX_OK;
+- }
++ if (unlikely(proto_hdr_len > len))
++ goto drop_packet;
++
+ /* need additional TPD ? */
+ if (proto_hdr_len != len)
+ count += (len - proto_hdr_len +
+@@ -2406,23 +2438,26 @@ static netdev_tx_t atl1_xmit_frame(struct sk_buff *skb,
+ }
+
+ tso = atl1_tso(adapter, skb, ptpd);
+- if (tso < 0) {
+- dev_kfree_skb_any(skb);
+- return NETDEV_TX_OK;
+- }
++ if (tso < 0)
++ goto drop_packet;
+
+ if (!tso) {
+ ret_val = atl1_tx_csum(adapter, skb, ptpd);
+- if (ret_val < 0) {
+- dev_kfree_skb_any(skb);
+- return NETDEV_TX_OK;
+- }
++ if (ret_val < 0)
++ goto drop_packet;
+ }
+
+- atl1_tx_map(adapter, skb, ptpd);
++ if (!atl1_tx_map(adapter, skb, ptpd))
++ goto drop_packet;
++
+ atl1_tx_queue(adapter, count, ptpd);
+ atl1_update_mailbox(adapter);
+ return NETDEV_TX_OK;
++
++drop_packet:
++ adapter->soft_stats.tx_errors++;
++ dev_kfree_skb_any(skb);
++ return NETDEV_TX_OK;
+ }
+
+ static int atl1_rings_clean(struct napi_struct *napi, int budget)
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index 6bf4a21853858f..8e4e8291d8c66f 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -2491,6 +2491,7 @@ static int __bnxt_poll_work(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
+ {
+ struct bnxt_napi *bnapi = cpr->bnapi;
+ u32 raw_cons = cpr->cp_raw_cons;
++ bool flush_xdp = false;
+ u32 cons;
+ int tx_pkts = 0;
+ int rx_pkts = 0;
+@@ -2528,6 +2529,8 @@ static int __bnxt_poll_work(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
+ else
+ rc = bnxt_force_rx_discard(bp, cpr, &raw_cons,
+ &event);
++ if (event & BNXT_REDIRECT_EVENT)
++ flush_xdp = true;
+ if (likely(rc >= 0))
+ rx_pkts += rc;
+ /* Increment rx_pkts when rc is -ENOMEM to count towards
+@@ -2555,8 +2558,10 @@ static int __bnxt_poll_work(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
+ }
+ }
+
+- if (event & BNXT_REDIRECT_EVENT)
++ if (flush_xdp) {
+ xdp_do_flush();
++ event &= ~BNXT_REDIRECT_EVENT;
++ }
+
+ if (event & BNXT_TX_EVENT) {
+ struct bnxt_tx_ring_info *txr = bnapi->tx_ring;
+diff --git a/drivers/net/ethernet/cisco/enic/enic_main.c b/drivers/net/ethernet/cisco/enic/enic_main.c
+index cccf0db2fb4e58..48701032c20c56 100644
+--- a/drivers/net/ethernet/cisco/enic/enic_main.c
++++ b/drivers/net/ethernet/cisco/enic/enic_main.c
+@@ -2057,10 +2057,10 @@ static int enic_change_mtu(struct net_device *netdev, int new_mtu)
+ if (enic_is_dynamic(enic) || enic_is_sriov_vf(enic))
+ return -EOPNOTSUPP;
+
+- if (netdev->mtu > enic->port_mtu)
++ if (new_mtu > enic->port_mtu)
+ netdev_warn(netdev,
+ "interface MTU (%d) set higher than port MTU (%d)\n",
+- netdev->mtu, enic->port_mtu);
++ new_mtu, enic->port_mtu);
+
+ return _enic_change_mtu(netdev, new_mtu);
+ }
+diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+index 40e88182959519..d3c36a6f84b01d 100644
+--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
++++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+@@ -3928,6 +3928,7 @@ static int dpaa2_eth_setup_rx_flow(struct dpaa2_eth_priv *priv,
+ MEM_TYPE_PAGE_ORDER0, NULL);
+ if (err) {
+ dev_err(dev, "xdp_rxq_info_reg_mem_model failed\n");
++ xdp_rxq_info_unreg(&fq->channel->xdp_rxq);
+ return err;
+ }
+
+@@ -4421,17 +4422,25 @@ static int dpaa2_eth_bind_dpni(struct dpaa2_eth_priv *priv)
+ return -EINVAL;
+ }
+ if (err)
+- return err;
++ goto out;
+ }
+
+ err = dpni_get_qdid(priv->mc_io, 0, priv->mc_token,
+ DPNI_QUEUE_TX, &priv->tx_qdid);
+ if (err) {
+ dev_err(dev, "dpni_get_qdid() failed\n");
+- return err;
++ goto out;
+ }
+
+ return 0;
++
++out:
++ while (i--) {
++ if (priv->fq[i].type == DPAA2_RX_FQ &&
++ xdp_rxq_info_is_reg(&priv->fq[i].channel->xdp_rxq))
++ xdp_rxq_info_unreg(&priv->fq[i].channel->xdp_rxq);
++ }
++ return err;
+ }
+
+ /* Allocate rings for storing incoming frame descriptors */
+@@ -4813,6 +4822,17 @@ static void dpaa2_eth_del_ch_napi(struct dpaa2_eth_priv *priv)
+ }
+ }
+
++static void dpaa2_eth_free_rx_xdp_rxq(struct dpaa2_eth_priv *priv)
++{
++ int i;
++
++ for (i = 0; i < priv->num_fqs; i++) {
++ if (priv->fq[i].type == DPAA2_RX_FQ &&
++ xdp_rxq_info_is_reg(&priv->fq[i].channel->xdp_rxq))
++ xdp_rxq_info_unreg(&priv->fq[i].channel->xdp_rxq);
++ }
++}
++
+ static int dpaa2_eth_probe(struct fsl_mc_device *dpni_dev)
+ {
+ struct device *dev;
+@@ -5016,6 +5036,7 @@ static int dpaa2_eth_probe(struct fsl_mc_device *dpni_dev)
+ free_percpu(priv->percpu_stats);
+ err_alloc_percpu_stats:
+ dpaa2_eth_del_ch_napi(priv);
++ dpaa2_eth_free_rx_xdp_rxq(priv);
+ err_bind:
+ dpaa2_eth_free_dpbps(priv);
+ err_dpbp_setup:
+@@ -5068,6 +5089,7 @@ static void dpaa2_eth_remove(struct fsl_mc_device *ls_dev)
+ free_percpu(priv->percpu_extras);
+
+ dpaa2_eth_del_ch_napi(priv);
++ dpaa2_eth_free_rx_xdp_rxq(priv);
+ dpaa2_eth_free_dpbps(priv);
+ dpaa2_eth_free_dpio(priv);
+ dpaa2_eth_free_dpni(priv);
+diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
+index e2f5c4384455e0..11543db4c47f0e 100644
+--- a/drivers/net/ethernet/intel/igc/igc_main.c
++++ b/drivers/net/ethernet/intel/igc/igc_main.c
+@@ -6772,6 +6772,10 @@ static int igc_probe(struct pci_dev *pdev,
+ adapter->port_num = hw->bus.func;
+ adapter->msg_enable = netif_msg_init(debug, DEFAULT_MSG_ENABLE);
+
++ /* Disable ASPM L1.2 on I226 devices to avoid packet loss */
++ if (igc_is_device_id_i226(hw))
++ pci_disable_link_state(pdev, PCIE_LINK_STATE_L1_2);
++
+ err = pci_save_state(pdev);
+ if (err)
+ goto err_ioremap;
+@@ -7144,6 +7148,9 @@ static int __maybe_unused igc_resume(struct device *dev)
+ pci_enable_wake(pdev, PCI_D3hot, 0);
+ pci_enable_wake(pdev, PCI_D3cold, 0);
+
++ if (igc_is_device_id_i226(hw))
++ pci_disable_link_state(pdev, PCIE_LINK_STATE_L1_2);
++
+ if (igc_init_interrupt_scheme(adapter, true)) {
+ netdev_err(netdev, "Unable to allocate memory for queues\n");
+ return -ENOMEM;
+@@ -7259,6 +7266,9 @@ static pci_ers_result_t igc_io_slot_reset(struct pci_dev *pdev)
+ pci_enable_wake(pdev, PCI_D3hot, 0);
+ pci_enable_wake(pdev, PCI_D3cold, 0);
+
++ if (igc_is_device_id_i226(hw))
++ pci_disable_link_state_locked(pdev, PCIE_LINK_STATE_L1_2);
++
+ /* In case of PCI error, adapter loses its HW address
+ * so we should re-assign it here.
+ */
+diff --git a/drivers/net/ethernet/sun/niu.c b/drivers/net/ethernet/sun/niu.c
+index 011d74087f860d..fc6217917fc22b 100644
+--- a/drivers/net/ethernet/sun/niu.c
++++ b/drivers/net/ethernet/sun/niu.c
+@@ -3336,7 +3336,7 @@ static int niu_rbr_add_page(struct niu *np, struct rx_ring_info *rp,
+
+ addr = np->ops->map_page(np->device, page, 0,
+ PAGE_SIZE, DMA_FROM_DEVICE);
+- if (!addr) {
++ if (np->ops->mapping_error(np->device, addr)) {
+ __free_page(page);
+ return -ENOMEM;
+ }
+@@ -6672,6 +6672,8 @@ static netdev_tx_t niu_start_xmit(struct sk_buff *skb,
+ len = skb_headlen(skb);
+ mapping = np->ops->map_single(np->device, skb->data,
+ len, DMA_TO_DEVICE);
++ if (np->ops->mapping_error(np->device, mapping))
++ goto out_drop;
+
+ prod = rp->prod;
+
+@@ -6713,6 +6715,8 @@ static netdev_tx_t niu_start_xmit(struct sk_buff *skb,
+ mapping = np->ops->map_page(np->device, skb_frag_page(frag),
+ skb_frag_off(frag), len,
+ DMA_TO_DEVICE);
++ if (np->ops->mapping_error(np->device, mapping))
++ goto out_unmap;
+
+ rp->tx_buffs[prod].skb = NULL;
+ rp->tx_buffs[prod].mapping = mapping;
+@@ -6737,6 +6741,19 @@ static netdev_tx_t niu_start_xmit(struct sk_buff *skb,
+ out:
+ return NETDEV_TX_OK;
+
++out_unmap:
++ while (i--) {
++ const skb_frag_t *frag;
++
++ prod = PREVIOUS_TX(rp, prod);
++ frag = &skb_shinfo(skb)->frags[i];
++ np->ops->unmap_page(np->device, rp->tx_buffs[prod].mapping,
++ skb_frag_size(frag), DMA_TO_DEVICE);
++ }
++
++ np->ops->unmap_single(np->device, rp->tx_buffs[rp->prod].mapping,
++ skb_headlen(skb), DMA_TO_DEVICE);
++
+ out_drop:
+ rp->tx_errors++;
+ kfree_skb(skb);
+@@ -9636,6 +9653,11 @@ static void niu_pci_unmap_single(struct device *dev, u64 dma_address,
+ dma_unmap_single(dev, dma_address, size, direction);
+ }
+
++static int niu_pci_mapping_error(struct device *dev, u64 addr)
++{
++ return dma_mapping_error(dev, addr);
++}
++
+ static const struct niu_ops niu_pci_ops = {
+ .alloc_coherent = niu_pci_alloc_coherent,
+ .free_coherent = niu_pci_free_coherent,
+@@ -9643,6 +9665,7 @@ static const struct niu_ops niu_pci_ops = {
+ .unmap_page = niu_pci_unmap_page,
+ .map_single = niu_pci_map_single,
+ .unmap_single = niu_pci_unmap_single,
++ .mapping_error = niu_pci_mapping_error,
+ };
+
+ static void niu_driver_version(void)
+@@ -10009,6 +10032,11 @@ static void niu_phys_unmap_single(struct device *dev, u64 dma_address,
+ /* Nothing to do. */
+ }
+
++static int niu_phys_mapping_error(struct device *dev, u64 dma_address)
++{
++ return false;
++}
++
+ static const struct niu_ops niu_phys_ops = {
+ .alloc_coherent = niu_phys_alloc_coherent,
+ .free_coherent = niu_phys_free_coherent,
+@@ -10016,6 +10044,7 @@ static const struct niu_ops niu_phys_ops = {
+ .unmap_page = niu_phys_unmap_page,
+ .map_single = niu_phys_map_single,
+ .unmap_single = niu_phys_unmap_single,
++ .mapping_error = niu_phys_mapping_error,
+ };
+
+ static int niu_of_probe(struct platform_device *op)
+diff --git a/drivers/net/ethernet/sun/niu.h b/drivers/net/ethernet/sun/niu.h
+index 04c215f91fc08e..0b169c08b0f2d1 100644
+--- a/drivers/net/ethernet/sun/niu.h
++++ b/drivers/net/ethernet/sun/niu.h
+@@ -2879,6 +2879,9 @@ struct tx_ring_info {
+ #define NEXT_TX(tp, index) \
+ (((index) + 1) < (tp)->pending ? ((index) + 1) : 0)
+
++#define PREVIOUS_TX(tp, index) \
++ (((index) - 1) >= 0 ? ((index) - 1) : (((tp)->pending) - 1))
++
+ static inline u32 niu_tx_avail(struct tx_ring_info *tp)
+ {
+ return (tp->pending -
+@@ -3140,6 +3143,7 @@ struct niu_ops {
+ enum dma_data_direction direction);
+ void (*unmap_single)(struct device *dev, u64 dma_address,
+ size_t size, enum dma_data_direction direction);
++ int (*mapping_error)(struct device *dev, u64 dma_address);
+ };
+
+ struct niu_link_config {
+diff --git a/drivers/net/usb/lan78xx.c b/drivers/net/usb/lan78xx.c
+index 09173d7b87ed5c..ec5689cd240aaf 100644
+--- a/drivers/net/usb/lan78xx.c
++++ b/drivers/net/usb/lan78xx.c
+@@ -4229,8 +4229,6 @@ static void lan78xx_disconnect(struct usb_interface *intf)
+ if (!dev)
+ return;
+
+- netif_napi_del(&dev->napi);
+-
+ udev = interface_to_usbdev(intf);
+ net = dev->net;
+
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index 3bf394b24d9711..5a949f9446a8ed 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -487,6 +487,26 @@ static unsigned int mergeable_ctx_to_truesize(void *mrg_ctx)
+ return (unsigned long)mrg_ctx & ((1 << MRG_CTX_HEADER_SHIFT) - 1);
+ }
+
++static int check_mergeable_len(struct net_device *dev, void *mrg_ctx,
++ unsigned int len)
++{
++ unsigned int headroom, tailroom, room, truesize;
++
++ truesize = mergeable_ctx_to_truesize(mrg_ctx);
++ headroom = mergeable_ctx_to_headroom(mrg_ctx);
++ tailroom = headroom ? sizeof(struct skb_shared_info) : 0;
++ room = SKB_DATA_ALIGN(headroom + tailroom);
++
++ if (len > truesize - room) {
++ pr_debug("%s: rx error: len %u exceeds truesize %lu\n",
++ dev->name, len, (unsigned long)(truesize - room));
++ DEV_STATS_INC(dev, rx_length_errors);
++ return -1;
++ }
++
++ return 0;
++}
++
+ static struct sk_buff *virtnet_build_skb(void *buf, unsigned int buflen,
+ unsigned int headroom,
+ unsigned int len)
+@@ -1084,7 +1104,8 @@ static unsigned int virtnet_get_headroom(struct virtnet_info *vi)
+ * across multiple buffers (num_buf > 1), and we make sure buffers
+ * have enough headroom.
+ */
+-static struct page *xdp_linearize_page(struct receive_queue *rq,
++static struct page *xdp_linearize_page(struct net_device *dev,
++ struct receive_queue *rq,
+ int *num_buf,
+ struct page *p,
+ int offset,
+@@ -1104,18 +1125,27 @@ static struct page *xdp_linearize_page(struct receive_queue *rq,
+ memcpy(page_address(page) + page_off, page_address(p) + offset, *len);
+ page_off += *len;
+
++	/* Only mergeable mode can enter this while loop. In small mode,
++	 * *num_buf == 1, so the loop is never entered.
++ */
+ while (--*num_buf) {
+ unsigned int buflen;
+ void *buf;
++ void *ctx;
+ int off;
+
+- buf = virtnet_rq_get_buf(rq, &buflen, NULL);
++ buf = virtnet_rq_get_buf(rq, &buflen, &ctx);
+ if (unlikely(!buf))
+ goto err_buf;
+
+ p = virt_to_head_page(buf);
+ off = buf - page_address(p);
+
++ if (check_mergeable_len(dev, ctx, buflen)) {
++ put_page(p);
++ goto err_buf;
++ }
++
+ /* guard against a misconfigured or uncooperative backend that
+ * is sending packet larger than the MTU.
+ */
+@@ -1204,7 +1234,7 @@ static struct sk_buff *receive_small_xdp(struct net_device *dev,
+ headroom = vi->hdr_len + header_offset;
+ buflen = SKB_DATA_ALIGN(GOOD_PACKET_LEN + headroom) +
+ SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
+- xdp_page = xdp_linearize_page(rq, &num_buf, page,
++ xdp_page = xdp_linearize_page(dev, rq, &num_buf, page,
+ offset, header_offset,
+ &tlen);
+ if (!xdp_page)
+@@ -1539,7 +1569,7 @@ static void *mergeable_xdp_get_buf(struct virtnet_info *vi,
+ */
+ if (!xdp_prog->aux->xdp_has_frags) {
+ /* linearize data for XDP */
+- xdp_page = xdp_linearize_page(rq, num_buf,
++ xdp_page = xdp_linearize_page(vi->dev, rq, num_buf,
+ *page, offset,
+ VIRTIO_XDP_HEADROOM,
+ len);
+diff --git a/drivers/net/wireless/ath/ath6kl/bmi.c b/drivers/net/wireless/ath/ath6kl/bmi.c
+index af98e871199d31..5a9e93fd1ef42a 100644
+--- a/drivers/net/wireless/ath/ath6kl/bmi.c
++++ b/drivers/net/wireless/ath/ath6kl/bmi.c
+@@ -87,7 +87,9 @@ int ath6kl_bmi_get_target_info(struct ath6kl *ar,
+ * We need to do some backwards compatibility to make this work.
+ */
+ if (le32_to_cpu(targ_info->byte_count) != sizeof(*targ_info)) {
+- WARN_ON(1);
++ ath6kl_err("mismatched byte count %d vs. expected %zd\n",
++ le32_to_cpu(targ_info->byte_count),
++ sizeof(*targ_info));
+ return -EINVAL;
+ }
+
+diff --git a/drivers/platform/mellanox/mlxbf-tmfifo.c b/drivers/platform/mellanox/mlxbf-tmfifo.c
+index 39828eb84e0ba0..1015948ef43eb8 100644
+--- a/drivers/platform/mellanox/mlxbf-tmfifo.c
++++ b/drivers/platform/mellanox/mlxbf-tmfifo.c
+@@ -281,7 +281,8 @@ static int mlxbf_tmfifo_alloc_vrings(struct mlxbf_tmfifo *fifo,
+ vring->align = SMP_CACHE_BYTES;
+ vring->index = i;
+ vring->vdev_id = tm_vdev->vdev.id.device;
+- vring->drop_desc.len = VRING_DROP_DESC_MAX_LEN;
++ vring->drop_desc.len = cpu_to_virtio32(&tm_vdev->vdev,
++ VRING_DROP_DESC_MAX_LEN);
+ dev = &tm_vdev->vdev.dev;
+
+ size = vring_size(vring->num, vring->align);
+diff --git a/drivers/platform/mellanox/mlxreg-lc.c b/drivers/platform/mellanox/mlxreg-lc.c
+index 8d833836a6d322..74e9d78ff01efe 100644
+--- a/drivers/platform/mellanox/mlxreg-lc.c
++++ b/drivers/platform/mellanox/mlxreg-lc.c
+@@ -688,7 +688,7 @@ static int mlxreg_lc_completion_notify(void *handle, struct i2c_adapter *parent,
+ if (regval & mlxreg_lc->data->mask) {
+ mlxreg_lc->state |= MLXREG_LC_SYNCED;
+ mlxreg_lc_state_update_locked(mlxreg_lc, MLXREG_LC_SYNCED, 1);
+- if (mlxreg_lc->state & ~MLXREG_LC_POWERED) {
++ if (!(mlxreg_lc->state & MLXREG_LC_POWERED)) {
+ err = mlxreg_lc_power_on_off(mlxreg_lc, 1);
+ if (err)
+ goto mlxreg_lc_regmap_power_on_off_fail;
+diff --git a/drivers/platform/mellanox/nvsw-sn2201.c b/drivers/platform/mellanox/nvsw-sn2201.c
+index 1a7c45aa41bbf0..6b4d3c44d7bd96 100644
+--- a/drivers/platform/mellanox/nvsw-sn2201.c
++++ b/drivers/platform/mellanox/nvsw-sn2201.c
+@@ -1088,7 +1088,7 @@ static int nvsw_sn2201_i2c_completion_notify(void *handle, int id)
+ if (!nvsw_sn2201->main_mux_devs->adapter) {
+ err = -ENODEV;
+ dev_err(nvsw_sn2201->dev, "Failed to get adapter for bus %d\n",
+- nvsw_sn2201->cpld_devs->nr);
++ nvsw_sn2201->main_mux_devs->nr);
+ goto i2c_get_adapter_main_fail;
+ }
+
+diff --git a/drivers/platform/x86/amd/pmc/pmc-quirks.c b/drivers/platform/x86/amd/pmc/pmc-quirks.c
+index 2e3f6fc67c568d..7ed12c1d3b34c0 100644
+--- a/drivers/platform/x86/amd/pmc/pmc-quirks.c
++++ b/drivers/platform/x86/amd/pmc/pmc-quirks.c
+@@ -224,6 +224,15 @@ static const struct dmi_system_id fwbug_list[] = {
+ DMI_MATCH(DMI_BOARD_NAME, "WUJIE14-GX4HRXL"),
+ }
+ },
++ /* https://bugzilla.kernel.org/show_bug.cgi?id=220116 */
++ {
++ .ident = "PCSpecialist Lafite Pro V 14M",
++ .driver_data = &quirk_spurious_8042,
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "PCSpecialist"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "Lafite Pro V 14M"),
++ }
++ },
+ {}
+ };
+
+diff --git a/drivers/platform/x86/dell/dell-wmi-sysman/dell-wmi-sysman.h b/drivers/platform/x86/dell/dell-wmi-sysman/dell-wmi-sysman.h
+index 3ad33a094588c6..817ee7ba07ca08 100644
+--- a/drivers/platform/x86/dell/dell-wmi-sysman/dell-wmi-sysman.h
++++ b/drivers/platform/x86/dell/dell-wmi-sysman/dell-wmi-sysman.h
+@@ -89,6 +89,11 @@ extern struct wmi_sysman_priv wmi_priv;
+
+ enum { ENUM, INT, STR, PO };
+
++#define ENUM_MIN_ELEMENTS 8
++#define INT_MIN_ELEMENTS 9
++#define STR_MIN_ELEMENTS 8
++#define PO_MIN_ELEMENTS 4
++
+ enum {
+ ATTR_NAME,
+ DISPL_NAME_LANG_CODE,
+diff --git a/drivers/platform/x86/dell/dell-wmi-sysman/enum-attributes.c b/drivers/platform/x86/dell/dell-wmi-sysman/enum-attributes.c
+index 8cc212c8526683..fc2f58b4cbc6ef 100644
+--- a/drivers/platform/x86/dell/dell-wmi-sysman/enum-attributes.c
++++ b/drivers/platform/x86/dell/dell-wmi-sysman/enum-attributes.c
+@@ -23,9 +23,10 @@ static ssize_t current_value_show(struct kobject *kobj, struct kobj_attribute *a
+ obj = get_wmiobj_pointer(instance_id, DELL_WMI_BIOS_ENUMERATION_ATTRIBUTE_GUID);
+ if (!obj)
+ return -EIO;
+- if (obj->package.elements[CURRENT_VAL].type != ACPI_TYPE_STRING) {
++ if (obj->type != ACPI_TYPE_PACKAGE || obj->package.count < ENUM_MIN_ELEMENTS ||
++ obj->package.elements[CURRENT_VAL].type != ACPI_TYPE_STRING) {
+ kfree(obj);
+- return -EINVAL;
++ return -EIO;
+ }
+ ret = snprintf(buf, PAGE_SIZE, "%s\n", obj->package.elements[CURRENT_VAL].string.pointer);
+ kfree(obj);
+diff --git a/drivers/platform/x86/dell/dell-wmi-sysman/int-attributes.c b/drivers/platform/x86/dell/dell-wmi-sysman/int-attributes.c
+index 951e75b538fad4..73524806423914 100644
+--- a/drivers/platform/x86/dell/dell-wmi-sysman/int-attributes.c
++++ b/drivers/platform/x86/dell/dell-wmi-sysman/int-attributes.c
+@@ -25,9 +25,10 @@ static ssize_t current_value_show(struct kobject *kobj, struct kobj_attribute *a
+ obj = get_wmiobj_pointer(instance_id, DELL_WMI_BIOS_INTEGER_ATTRIBUTE_GUID);
+ if (!obj)
+ return -EIO;
+- if (obj->package.elements[CURRENT_VAL].type != ACPI_TYPE_INTEGER) {
++ if (obj->type != ACPI_TYPE_PACKAGE || obj->package.count < INT_MIN_ELEMENTS ||
++ obj->package.elements[CURRENT_VAL].type != ACPI_TYPE_INTEGER) {
+ kfree(obj);
+- return -EINVAL;
++ return -EIO;
+ }
+ ret = snprintf(buf, PAGE_SIZE, "%lld\n", obj->package.elements[CURRENT_VAL].integer.value);
+ kfree(obj);
+diff --git a/drivers/platform/x86/dell/dell-wmi-sysman/passobj-attributes.c b/drivers/platform/x86/dell/dell-wmi-sysman/passobj-attributes.c
+index d8f1bf5e58a0f4..3167e06d416ede 100644
+--- a/drivers/platform/x86/dell/dell-wmi-sysman/passobj-attributes.c
++++ b/drivers/platform/x86/dell/dell-wmi-sysman/passobj-attributes.c
+@@ -26,9 +26,10 @@ static ssize_t is_enabled_show(struct kobject *kobj, struct kobj_attribute *attr
+ obj = get_wmiobj_pointer(instance_id, DELL_WMI_BIOS_PASSOBJ_ATTRIBUTE_GUID);
+ if (!obj)
+ return -EIO;
+- if (obj->package.elements[IS_PASS_SET].type != ACPI_TYPE_INTEGER) {
++ if (obj->type != ACPI_TYPE_PACKAGE || obj->package.count < PO_MIN_ELEMENTS ||
++ obj->package.elements[IS_PASS_SET].type != ACPI_TYPE_INTEGER) {
+ kfree(obj);
+- return -EINVAL;
++ return -EIO;
+ }
+ ret = snprintf(buf, PAGE_SIZE, "%lld\n", obj->package.elements[IS_PASS_SET].integer.value);
+ kfree(obj);
+diff --git a/drivers/platform/x86/dell/dell-wmi-sysman/string-attributes.c b/drivers/platform/x86/dell/dell-wmi-sysman/string-attributes.c
+index c392f0ecf8b55b..0d2c74f8d1aad7 100644
+--- a/drivers/platform/x86/dell/dell-wmi-sysman/string-attributes.c
++++ b/drivers/platform/x86/dell/dell-wmi-sysman/string-attributes.c
+@@ -25,9 +25,10 @@ static ssize_t current_value_show(struct kobject *kobj, struct kobj_attribute *a
+ obj = get_wmiobj_pointer(instance_id, DELL_WMI_BIOS_STRING_ATTRIBUTE_GUID);
+ if (!obj)
+ return -EIO;
+- if (obj->package.elements[CURRENT_VAL].type != ACPI_TYPE_STRING) {
++ if (obj->type != ACPI_TYPE_PACKAGE || obj->package.count < STR_MIN_ELEMENTS ||
++ obj->package.elements[CURRENT_VAL].type != ACPI_TYPE_STRING) {
+ kfree(obj);
+- return -EINVAL;
++ return -EIO;
+ }
+ ret = snprintf(buf, PAGE_SIZE, "%s\n", obj->package.elements[CURRENT_VAL].string.pointer);
+ kfree(obj);
+diff --git a/drivers/platform/x86/dell/dell-wmi-sysman/sysman.c b/drivers/platform/x86/dell/dell-wmi-sysman/sysman.c
+index af49dd6b31ade7..f5402b71465729 100644
+--- a/drivers/platform/x86/dell/dell-wmi-sysman/sysman.c
++++ b/drivers/platform/x86/dell/dell-wmi-sysman/sysman.c
+@@ -25,7 +25,6 @@ struct wmi_sysman_priv wmi_priv = {
+ /* reset bios to defaults */
+ static const char * const reset_types[] = {"builtinsafe", "lastknowngood", "factory", "custom"};
+ static int reset_option = -1;
+-static struct class *fw_attr_class;
+
+
+ /**
+@@ -408,10 +407,10 @@ static int init_bios_attributes(int attr_type, const char *guid)
+ return retval;
+
+ switch (attr_type) {
+- case ENUM: min_elements = 8; break;
+- case INT: min_elements = 9; break;
+- case STR: min_elements = 8; break;
+- case PO: min_elements = 4; break;
++ case ENUM: min_elements = ENUM_MIN_ELEMENTS; break;
++ case INT: min_elements = INT_MIN_ELEMENTS; break;
++ case STR: min_elements = STR_MIN_ELEMENTS; break;
++ case PO: min_elements = PO_MIN_ELEMENTS; break;
+ default:
+ pr_err("Error: Unknown attr_type: %d\n", attr_type);
+ return -EINVAL;
+@@ -541,15 +540,11 @@ static int __init sysman_init(void)
+ goto err_exit_bios_attr_pass_interface;
+ }
+
+- ret = fw_attributes_class_get(&fw_attr_class);
+- if (ret)
+- goto err_exit_bios_attr_pass_interface;
+-
+- wmi_priv.class_dev = device_create(fw_attr_class, NULL, MKDEV(0, 0),
++ wmi_priv.class_dev = device_create(&firmware_attributes_class, NULL, MKDEV(0, 0),
+ NULL, "%s", DRIVER_NAME);
+ if (IS_ERR(wmi_priv.class_dev)) {
+ ret = PTR_ERR(wmi_priv.class_dev);
+- goto err_unregister_class;
++ goto err_exit_bios_attr_pass_interface;
+ }
+
+ wmi_priv.main_dir_kset = kset_create_and_add("attributes", NULL,
+@@ -602,10 +597,7 @@ static int __init sysman_init(void)
+ release_attributes_data();
+
+ err_destroy_classdev:
+- device_destroy(fw_attr_class, MKDEV(0, 0));
+-
+-err_unregister_class:
+- fw_attributes_class_put();
++ device_unregister(wmi_priv.class_dev);
+
+ err_exit_bios_attr_pass_interface:
+ exit_bios_attr_pass_interface();
+@@ -619,8 +611,7 @@ static int __init sysman_init(void)
+ static void __exit sysman_exit(void)
+ {
+ release_attributes_data();
+- device_destroy(fw_attr_class, MKDEV(0, 0));
+- fw_attributes_class_put();
++ device_unregister(wmi_priv.class_dev);
+ exit_bios_attr_set_interface();
+ exit_bios_attr_pass_interface();
+ }
+diff --git a/drivers/platform/x86/firmware_attributes_class.c b/drivers/platform/x86/firmware_attributes_class.c
+index fafe8eaf6e3e4e..e214efc97311e2 100644
+--- a/drivers/platform/x86/firmware_attributes_class.c
++++ b/drivers/platform/x86/firmware_attributes_class.c
+@@ -2,48 +2,35 @@
+
+ /* Firmware attributes class helper module */
+
+-#include <linux/mutex.h>
+-#include <linux/device/class.h>
+ #include <linux/module.h>
+ #include "firmware_attributes_class.h"
+
+-static DEFINE_MUTEX(fw_attr_lock);
+-static int fw_attr_inuse;
+-
+-static struct class firmware_attributes_class = {
++const struct class firmware_attributes_class = {
+ .name = "firmware-attributes",
+ };
++EXPORT_SYMBOL_GPL(firmware_attributes_class);
++
++static __init int fw_attributes_class_init(void)
++{
++ return class_register(&firmware_attributes_class);
++}
++module_init(fw_attributes_class_init);
++
++static __exit void fw_attributes_class_exit(void)
++{
++ class_unregister(&firmware_attributes_class);
++}
++module_exit(fw_attributes_class_exit);
+
+-int fw_attributes_class_get(struct class **fw_attr_class)
++int fw_attributes_class_get(const struct class **fw_attr_class)
+ {
+- int err;
+-
+- mutex_lock(&fw_attr_lock);
+- if (!fw_attr_inuse) { /*first time class is being used*/
+- err = class_register(&firmware_attributes_class);
+- if (err) {
+- mutex_unlock(&fw_attr_lock);
+- return err;
+- }
+- }
+- fw_attr_inuse++;
+ *fw_attr_class = &firmware_attributes_class;
+- mutex_unlock(&fw_attr_lock);
+ return 0;
+ }
+ EXPORT_SYMBOL_GPL(fw_attributes_class_get);
+
+ int fw_attributes_class_put(void)
+ {
+- mutex_lock(&fw_attr_lock);
+- if (!fw_attr_inuse) {
+- mutex_unlock(&fw_attr_lock);
+- return -EINVAL;
+- }
+- fw_attr_inuse--;
+- if (!fw_attr_inuse) /* No more consumers */
+- class_unregister(&firmware_attributes_class);
+- mutex_unlock(&fw_attr_lock);
+ return 0;
+ }
+ EXPORT_SYMBOL_GPL(fw_attributes_class_put);
+diff --git a/drivers/platform/x86/firmware_attributes_class.h b/drivers/platform/x86/firmware_attributes_class.h
+index 486485cb1f54e3..ef6c3764a83497 100644
+--- a/drivers/platform/x86/firmware_attributes_class.h
++++ b/drivers/platform/x86/firmware_attributes_class.h
+@@ -5,7 +5,10 @@
+ #ifndef FW_ATTR_CLASS_H
+ #define FW_ATTR_CLASS_H
+
+-int fw_attributes_class_get(struct class **fw_attr_class);
++#include <linux/device/class.h>
++
++extern const struct class firmware_attributes_class;
++int fw_attributes_class_get(const struct class **fw_attr_class);
+ int fw_attributes_class_put(void);
+
+ #endif /* FW_ATTR_CLASS_H */
+diff --git a/drivers/platform/x86/hp/hp-bioscfg/bioscfg.c b/drivers/platform/x86/hp/hp-bioscfg/bioscfg.c
+index 6ddca857cc4d1a..b62b158cffd85a 100644
+--- a/drivers/platform/x86/hp/hp-bioscfg/bioscfg.c
++++ b/drivers/platform/x86/hp/hp-bioscfg/bioscfg.c
+@@ -24,8 +24,6 @@ struct bioscfg_priv bioscfg_drv = {
+ .mutex = __MUTEX_INITIALIZER(bioscfg_drv.mutex),
+ };
+
+-static struct class *fw_attr_class;
+-
+ ssize_t display_name_language_code_show(struct kobject *kobj,
+ struct kobj_attribute *attr,
+ char *buf)
+@@ -974,11 +972,7 @@ static int __init hp_init(void)
+ if (ret)
+ return ret;
+
+- ret = fw_attributes_class_get(&fw_attr_class);
+- if (ret)
+- goto err_unregister_class;
+-
+- bioscfg_drv.class_dev = device_create(fw_attr_class, NULL, MKDEV(0, 0),
++ bioscfg_drv.class_dev = device_create(&firmware_attributes_class, NULL, MKDEV(0, 0),
+ NULL, "%s", DRIVER_NAME);
+ if (IS_ERR(bioscfg_drv.class_dev)) {
+ ret = PTR_ERR(bioscfg_drv.class_dev);
+@@ -1045,10 +1039,9 @@ static int __init hp_init(void)
+ release_attributes_data();
+
+ err_destroy_classdev:
+- device_destroy(fw_attr_class, MKDEV(0, 0));
++ device_unregister(bioscfg_drv.class_dev);
+
+ err_unregister_class:
+- fw_attributes_class_put();
+ hp_exit_attr_set_interface();
+
+ return ret;
+@@ -1057,9 +1050,8 @@ static int __init hp_init(void)
+ static void __exit hp_exit(void)
+ {
+ release_attributes_data();
+- device_destroy(fw_attr_class, MKDEV(0, 0));
++ device_unregister(bioscfg_drv.class_dev);
+
+- fw_attributes_class_put();
+ hp_exit_attr_set_interface();
+ }
+
+diff --git a/drivers/platform/x86/think-lmi.c b/drivers/platform/x86/think-lmi.c
+index 2396decdb3cb3f..d5319b4637e18d 100644
+--- a/drivers/platform/x86/think-lmi.c
++++ b/drivers/platform/x86/think-lmi.c
+@@ -195,7 +195,6 @@ static const char * const level_options[] = {
+ [TLMI_LEVEL_MASTER] = "master",
+ };
+ static struct think_lmi tlmi_priv;
+-static struct class *fw_attr_class;
+ static DEFINE_MUTEX(tlmi_mutex);
+
+ /* ------ Utility functions ------------*/
+@@ -917,6 +916,7 @@ static const struct attribute_group auth_attr_group = {
+ .is_visible = auth_attr_is_visible,
+ .attrs = auth_attrs,
+ };
++__ATTRIBUTE_GROUPS(auth_attr);
+
+ /* ---- Attributes sysfs --------------------------------------------------------- */
+ static ssize_t display_name_show(struct kobject *kobj, struct kobj_attribute *attr,
+@@ -1120,6 +1120,7 @@ static const struct attribute_group tlmi_attr_group = {
+ .is_visible = attr_is_visible,
+ .attrs = tlmi_attrs,
+ };
++__ATTRIBUTE_GROUPS(tlmi_attr);
+
+ static void tlmi_attr_setting_release(struct kobject *kobj)
+ {
+@@ -1139,11 +1140,13 @@ static void tlmi_pwd_setting_release(struct kobject *kobj)
+ static const struct kobj_type tlmi_attr_setting_ktype = {
+ .release = &tlmi_attr_setting_release,
+ .sysfs_ops = &kobj_sysfs_ops,
++ .default_groups = tlmi_attr_groups,
+ };
+
+ static const struct kobj_type tlmi_pwd_setting_ktype = {
+ .release = &tlmi_pwd_setting_release,
+ .sysfs_ops = &kobj_sysfs_ops,
++ .default_groups = auth_attr_groups,
+ };
+
+ static ssize_t pending_reboot_show(struct kobject *kobj, struct kobj_attribute *attr,
+@@ -1213,19 +1216,16 @@ static struct kobj_attribute debug_cmd = __ATTR_WO(debug_cmd);
+ /* ---- Initialisation --------------------------------------------------------- */
+ static void tlmi_release_attr(void)
+ {
+- int i;
++ struct kobject *pos, *n;
+
+ /* Attribute structures */
+- for (i = 0; i < TLMI_SETTINGS_COUNT; i++) {
+- if (tlmi_priv.setting[i]) {
+- sysfs_remove_group(&tlmi_priv.setting[i]->kobj, &tlmi_attr_group);
+- kobject_put(&tlmi_priv.setting[i]->kobj);
+- }
+- }
+ sysfs_remove_file(&tlmi_priv.attribute_kset->kobj, &pending_reboot.attr);
+ if (tlmi_priv.can_debug_cmd && debug_support)
+ sysfs_remove_file(&tlmi_priv.attribute_kset->kobj, &debug_cmd.attr);
+
++ list_for_each_entry_safe(pos, n, &tlmi_priv.attribute_kset->list, entry)
++ kobject_put(pos);
++
+ kset_unregister(tlmi_priv.attribute_kset);
+
+ /* Free up any saved signatures */
+@@ -1233,19 +1233,8 @@ static void tlmi_release_attr(void)
+ kfree(tlmi_priv.pwd_admin->save_signature);
+
+ /* Authentication structures */
+- sysfs_remove_group(&tlmi_priv.pwd_admin->kobj, &auth_attr_group);
+- kobject_put(&tlmi_priv.pwd_admin->kobj);
+- sysfs_remove_group(&tlmi_priv.pwd_power->kobj, &auth_attr_group);
+- kobject_put(&tlmi_priv.pwd_power->kobj);
+-
+- if (tlmi_priv.opcode_support) {
+- sysfs_remove_group(&tlmi_priv.pwd_system->kobj, &auth_attr_group);
+- kobject_put(&tlmi_priv.pwd_system->kobj);
+- sysfs_remove_group(&tlmi_priv.pwd_hdd->kobj, &auth_attr_group);
+- kobject_put(&tlmi_priv.pwd_hdd->kobj);
+- sysfs_remove_group(&tlmi_priv.pwd_nvme->kobj, &auth_attr_group);
+- kobject_put(&tlmi_priv.pwd_nvme->kobj);
+- }
++ list_for_each_entry_safe(pos, n, &tlmi_priv.authentication_kset->list, entry)
++ kobject_put(pos);
+
+ kset_unregister(tlmi_priv.authentication_kset);
+ }
+@@ -1272,11 +1261,7 @@ static int tlmi_sysfs_init(void)
+ {
+ int i, ret;
+
+- ret = fw_attributes_class_get(&fw_attr_class);
+- if (ret)
+- return ret;
+-
+- tlmi_priv.class_dev = device_create(fw_attr_class, NULL, MKDEV(0, 0),
++ tlmi_priv.class_dev = device_create(&firmware_attributes_class, NULL, MKDEV(0, 0),
+ NULL, "%s", "thinklmi");
+ if (IS_ERR(tlmi_priv.class_dev)) {
+ ret = PTR_ERR(tlmi_priv.class_dev);
+@@ -1290,6 +1275,14 @@ static int tlmi_sysfs_init(void)
+ goto fail_device_created;
+ }
+
++ tlmi_priv.authentication_kset = kset_create_and_add("authentication", NULL,
++ &tlmi_priv.class_dev->kobj);
++ if (!tlmi_priv.authentication_kset) {
++ kset_unregister(tlmi_priv.attribute_kset);
++ ret = -ENOMEM;
++ goto fail_device_created;
++ }
++
+ for (i = 0; i < TLMI_SETTINGS_COUNT; i++) {
+ /* Check if index is a valid setting - skip if it isn't */
+ if (!tlmi_priv.setting[i])
+@@ -1306,12 +1299,8 @@ static int tlmi_sysfs_init(void)
+
+ /* Build attribute */
+ tlmi_priv.setting[i]->kobj.kset = tlmi_priv.attribute_kset;
+- ret = kobject_add(&tlmi_priv.setting[i]->kobj, NULL,
+- "%s", tlmi_priv.setting[i]->display_name);
+- if (ret)
+- goto fail_create_attr;
+-
+- ret = sysfs_create_group(&tlmi_priv.setting[i]->kobj, &tlmi_attr_group);
++ ret = kobject_init_and_add(&tlmi_priv.setting[i]->kobj, &tlmi_attr_setting_ktype,
++ NULL, "%s", tlmi_priv.setting[i]->display_name);
+ if (ret)
+ goto fail_create_attr;
+ }
+@@ -1327,55 +1316,34 @@ static int tlmi_sysfs_init(void)
+ }
+
+ /* Create authentication entries */
+- tlmi_priv.authentication_kset = kset_create_and_add("authentication", NULL,
+- &tlmi_priv.class_dev->kobj);
+- if (!tlmi_priv.authentication_kset) {
+- ret = -ENOMEM;
+- goto fail_create_attr;
+- }
+ tlmi_priv.pwd_admin->kobj.kset = tlmi_priv.authentication_kset;
+- ret = kobject_add(&tlmi_priv.pwd_admin->kobj, NULL, "%s", "Admin");
+- if (ret)
+- goto fail_create_attr;
+-
+- ret = sysfs_create_group(&tlmi_priv.pwd_admin->kobj, &auth_attr_group);
++ ret = kobject_init_and_add(&tlmi_priv.pwd_admin->kobj, &tlmi_pwd_setting_ktype,
++ NULL, "%s", "Admin");
+ if (ret)
+ goto fail_create_attr;
+
+ tlmi_priv.pwd_power->kobj.kset = tlmi_priv.authentication_kset;
+- ret = kobject_add(&tlmi_priv.pwd_power->kobj, NULL, "%s", "Power-on");
+- if (ret)
+- goto fail_create_attr;
+-
+- ret = sysfs_create_group(&tlmi_priv.pwd_power->kobj, &auth_attr_group);
++ ret = kobject_init_and_add(&tlmi_priv.pwd_power->kobj, &tlmi_pwd_setting_ktype,
++ NULL, "%s", "Power-on");
+ if (ret)
+ goto fail_create_attr;
+
+ if (tlmi_priv.opcode_support) {
+ tlmi_priv.pwd_system->kobj.kset = tlmi_priv.authentication_kset;
+- ret = kobject_add(&tlmi_priv.pwd_system->kobj, NULL, "%s", "System");
+- if (ret)
+- goto fail_create_attr;
+-
+- ret = sysfs_create_group(&tlmi_priv.pwd_system->kobj, &auth_attr_group);
++ ret = kobject_init_and_add(&tlmi_priv.pwd_system->kobj, &tlmi_pwd_setting_ktype,
++ NULL, "%s", "System");
+ if (ret)
+ goto fail_create_attr;
+
+ tlmi_priv.pwd_hdd->kobj.kset = tlmi_priv.authentication_kset;
+- ret = kobject_add(&tlmi_priv.pwd_hdd->kobj, NULL, "%s", "HDD");
+- if (ret)
+- goto fail_create_attr;
+-
+- ret = sysfs_create_group(&tlmi_priv.pwd_hdd->kobj, &auth_attr_group);
++ ret = kobject_init_and_add(&tlmi_priv.pwd_hdd->kobj, &tlmi_pwd_setting_ktype,
++ NULL, "%s", "HDD");
+ if (ret)
+ goto fail_create_attr;
+
+ tlmi_priv.pwd_nvme->kobj.kset = tlmi_priv.authentication_kset;
+- ret = kobject_add(&tlmi_priv.pwd_nvme->kobj, NULL, "%s", "NVMe");
+- if (ret)
+- goto fail_create_attr;
+-
+- ret = sysfs_create_group(&tlmi_priv.pwd_nvme->kobj, &auth_attr_group);
++ ret = kobject_init_and_add(&tlmi_priv.pwd_nvme->kobj, &tlmi_pwd_setting_ktype,
++ NULL, "%s", "NVMe");
+ if (ret)
+ goto fail_create_attr;
+ }
+@@ -1385,9 +1353,8 @@ static int tlmi_sysfs_init(void)
+ fail_create_attr:
+ tlmi_release_attr();
+ fail_device_created:
+- device_destroy(fw_attr_class, MKDEV(0, 0));
++ device_unregister(tlmi_priv.class_dev);
+ fail_class_created:
+- fw_attributes_class_put();
+ return ret;
+ }
+
+@@ -1409,8 +1376,6 @@ static struct tlmi_pwd_setting *tlmi_create_auth(const char *pwd_type,
+ new_pwd->maxlen = tlmi_priv.pwdcfg.core.max_length;
+ new_pwd->index = 0;
+
+- kobject_init(&new_pwd->kobj, &tlmi_pwd_setting_ktype);
+-
+ return new_pwd;
+ }
+
+@@ -1514,7 +1479,6 @@ static int tlmi_analyze(void)
+ if (setting->possible_values)
+ strreplace(setting->possible_values, ',', ';');
+
+- kobject_init(&setting->kobj, &tlmi_attr_setting_ktype);
+ tlmi_priv.setting[i] = setting;
+ kfree(item);
+ }
+@@ -1610,8 +1574,7 @@ static int tlmi_analyze(void)
+ static void tlmi_remove(struct wmi_device *wdev)
+ {
+ tlmi_release_attr();
+- device_destroy(fw_attr_class, MKDEV(0, 0));
+- fw_attributes_class_put();
++ device_unregister(tlmi_priv.class_dev);
+ }
+
+ static int tlmi_probe(struct wmi_device *wdev, const void *context)
+diff --git a/drivers/powercap/intel_rapl_common.c b/drivers/powercap/intel_rapl_common.c
+index f1de4111e98d9d..5a09a56698f4a0 100644
+--- a/drivers/powercap/intel_rapl_common.c
++++ b/drivers/powercap/intel_rapl_common.c
+@@ -338,12 +338,28 @@ static int set_domain_enable(struct powercap_zone *power_zone, bool mode)
+ {
+ struct rapl_domain *rd = power_zone_to_rapl_domain(power_zone);
+ struct rapl_defaults *defaults = get_defaults(rd->rp);
++ u64 val;
+ int ret;
+
+ cpus_read_lock();
+ ret = rapl_write_pl_data(rd, POWER_LIMIT1, PL_ENABLE, mode);
+- if (!ret && defaults->set_floor_freq)
++ if (ret)
++ goto end;
++
++ ret = rapl_read_pl_data(rd, POWER_LIMIT1, PL_ENABLE, false, &val);
++ if (ret)
++ goto end;
++
++ if (mode != val) {
++ pr_debug("%s cannot be %s\n", power_zone->name,
++ str_enabled_disabled(mode));
++ goto end;
++ }
++
++ if (defaults->set_floor_freq)
+ defaults->set_floor_freq(rd, mode);
++
++end:
+ cpus_read_unlock();
+
+ return ret;
+diff --git a/drivers/regulator/fan53555.c b/drivers/regulator/fan53555.c
+index 48f312167e5351..8912f5be72707c 100644
+--- a/drivers/regulator/fan53555.c
++++ b/drivers/regulator/fan53555.c
+@@ -147,6 +147,7 @@ struct fan53555_device_info {
+ unsigned int slew_mask;
+ const unsigned int *ramp_delay_table;
+ unsigned int n_ramp_values;
++ unsigned int enable_time;
+ unsigned int slew_rate;
+ };
+
+@@ -282,6 +283,7 @@ static int fan53526_voltages_setup_fairchild(struct fan53555_device_info *di)
+ di->slew_mask = CTL_SLEW_MASK;
+ di->ramp_delay_table = slew_rates;
+ di->n_ramp_values = ARRAY_SIZE(slew_rates);
++ di->enable_time = 250;
+ di->vsel_count = FAN53526_NVOLTAGES;
+
+ return 0;
+@@ -296,10 +298,12 @@ static int fan53555_voltages_setup_fairchild(struct fan53555_device_info *di)
+ case FAN53555_CHIP_REV_00:
+ di->vsel_min = 600000;
+ di->vsel_step = 10000;
++ di->enable_time = 400;
+ break;
+ case FAN53555_CHIP_REV_13:
+ di->vsel_min = 800000;
+ di->vsel_step = 10000;
++ di->enable_time = 400;
+ break;
+ default:
+ dev_err(di->dev,
+@@ -311,13 +315,19 @@ static int fan53555_voltages_setup_fairchild(struct fan53555_device_info *di)
+ case FAN53555_CHIP_ID_01:
+ case FAN53555_CHIP_ID_03:
+ case FAN53555_CHIP_ID_05:
++ di->vsel_min = 600000;
++ di->vsel_step = 10000;
++ di->enable_time = 400;
++ break;
+ case FAN53555_CHIP_ID_08:
+ di->vsel_min = 600000;
+ di->vsel_step = 10000;
++ di->enable_time = 175;
+ break;
+ case FAN53555_CHIP_ID_04:
+ di->vsel_min = 603000;
+ di->vsel_step = 12826;
++ di->enable_time = 400;
+ break;
+ default:
+ dev_err(di->dev,
+@@ -350,6 +360,7 @@ static int fan53555_voltages_setup_rockchip(struct fan53555_device_info *di)
+ di->slew_mask = CTL_SLEW_MASK;
+ di->ramp_delay_table = slew_rates;
+ di->n_ramp_values = ARRAY_SIZE(slew_rates);
++ di->enable_time = 360;
+ di->vsel_count = FAN53555_NVOLTAGES;
+
+ return 0;
+@@ -372,6 +383,7 @@ static int rk8602_voltages_setup_rockchip(struct fan53555_device_info *di)
+ di->slew_mask = CTL_SLEW_MASK;
+ di->ramp_delay_table = slew_rates;
+ di->n_ramp_values = ARRAY_SIZE(slew_rates);
++ di->enable_time = 360;
+ di->vsel_count = RK8602_NVOLTAGES;
+
+ return 0;
+@@ -395,6 +407,7 @@ static int fan53555_voltages_setup_silergy(struct fan53555_device_info *di)
+ di->slew_mask = CTL_SLEW_MASK;
+ di->ramp_delay_table = slew_rates;
+ di->n_ramp_values = ARRAY_SIZE(slew_rates);
++ di->enable_time = 400;
+ di->vsel_count = FAN53555_NVOLTAGES;
+
+ return 0;
+@@ -594,6 +607,7 @@ static int fan53555_regulator_register(struct fan53555_device_info *di,
+ rdesc->ramp_mask = di->slew_mask;
+ rdesc->ramp_delay_table = di->ramp_delay_table;
+ rdesc->n_ramp_values = di->n_ramp_values;
++ rdesc->enable_time = di->enable_time;
+ rdesc->owner = THIS_MODULE;
+
+ rdev = devm_regulator_register(di->dev, &di->desc, config);
+diff --git a/drivers/regulator/gpio-regulator.c b/drivers/regulator/gpio-regulator.c
+index 65927fa2ef161c..1bdd494cf8821e 100644
+--- a/drivers/regulator/gpio-regulator.c
++++ b/drivers/regulator/gpio-regulator.c
+@@ -260,8 +260,10 @@ static int gpio_regulator_probe(struct platform_device *pdev)
+ return -ENOMEM;
+ }
+
+- drvdata->gpiods = devm_kzalloc(dev, sizeof(struct gpio_desc *),
+- GFP_KERNEL);
++ drvdata->gpiods = devm_kcalloc(dev, config->ngpios,
++ sizeof(struct gpio_desc *), GFP_KERNEL);
++ if (!drvdata->gpiods)
++ return -ENOMEM;
+
+ if (config->input_supply) {
+ drvdata->desc.supply_name = devm_kstrdup(&pdev->dev,
+@@ -274,8 +276,6 @@ static int gpio_regulator_probe(struct platform_device *pdev)
+ }
+ }
+
+- if (!drvdata->gpiods)
+- return -ENOMEM;
+ for (i = 0; i < config->ngpios; i++) {
+ drvdata->gpiods[i] = devm_gpiod_get_index(dev,
+ NULL,
+diff --git a/drivers/rtc/rtc-cmos.c b/drivers/rtc/rtc-cmos.c
+index 5849d2970bba45..095de4e0e4f388 100644
+--- a/drivers/rtc/rtc-cmos.c
++++ b/drivers/rtc/rtc-cmos.c
+@@ -697,8 +697,12 @@ static irqreturn_t cmos_interrupt(int irq, void *p)
+ {
+ u8 irqstat;
+ u8 rtc_control;
++ unsigned long flags;
+
+- spin_lock(&rtc_lock);
++ /* We cannot use spin_lock() here, as cmos_interrupt() is also called
++ * in a non-irq context.
++ */
++ spin_lock_irqsave(&rtc_lock, flags);
+
+ /* When the HPET interrupt handler calls us, the interrupt
+ * status is passed as arg1 instead of the irq number. But
+@@ -732,7 +736,7 @@ static irqreturn_t cmos_interrupt(int irq, void *p)
+ hpet_mask_rtc_irq_bit(RTC_AIE);
+ CMOS_READ(RTC_INTR_FLAGS);
+ }
+- spin_unlock(&rtc_lock);
++ spin_unlock_irqrestore(&rtc_lock, flags);
+
+ if (is_intr(irqstat)) {
+ rtc_update_irq(p, 1, irqstat);
+@@ -1300,9 +1304,7 @@ static void cmos_check_wkalrm(struct device *dev)
+ * ACK the rtc irq here
+ */
+ if (t_now >= cmos->alarm_expires && cmos_use_acpi_alarm()) {
+- local_irq_disable();
+ cmos_interrupt(0, (void *)cmos->rtc);
+- local_irq_enable();
+ return;
+ }
+
+diff --git a/drivers/rtc/rtc-pcf2127.c b/drivers/rtc/rtc-pcf2127.c
+index 9c04c4e1a49c37..fc079b9dcf7192 100644
+--- a/drivers/rtc/rtc-pcf2127.c
++++ b/drivers/rtc/rtc-pcf2127.c
+@@ -1383,6 +1383,11 @@ static int pcf2127_i2c_probe(struct i2c_client *client)
+ variant = &pcf21xx_cfg[type];
+ }
+
++ if (variant->type == PCF2131) {
++ config.read_flag_mask = 0x0;
++ config.write_flag_mask = 0x0;
++ }
++
+ config.max_register = variant->max_register,
+
+ regmap = devm_regmap_init(&client->dev, &pcf2127_i2c_regmap,
+@@ -1456,7 +1461,7 @@ static int pcf2127_spi_probe(struct spi_device *spi)
+ variant = &pcf21xx_cfg[type];
+ }
+
+- config.max_register = variant->max_register,
++ config.max_register = variant->max_register;
+
+ regmap = devm_regmap_init_spi(spi, &config);
+ if (IS_ERR(regmap)) {
+diff --git a/drivers/scsi/qla2xxx/qla_mbx.c b/drivers/scsi/qla2xxx/qla_mbx.c
+index 0cd6f3e1488249..13b6cb1b93acd9 100644
+--- a/drivers/scsi/qla2xxx/qla_mbx.c
++++ b/drivers/scsi/qla2xxx/qla_mbx.c
+@@ -2147,7 +2147,7 @@ qla24xx_get_port_database(scsi_qla_host_t *vha, u16 nport_handle,
+
+ pdb_dma = dma_map_single(&vha->hw->pdev->dev, pdb,
+ sizeof(*pdb), DMA_FROM_DEVICE);
+- if (!pdb_dma) {
++ if (dma_mapping_error(&vha->hw->pdev->dev, pdb_dma)) {
+ ql_log(ql_log_warn, vha, 0x1116, "Failed to map dma buffer.\n");
+ return QLA_MEMORY_ALLOC_FAILED;
+ }
+diff --git a/drivers/scsi/qla4xxx/ql4_os.c b/drivers/scsi/qla4xxx/ql4_os.c
+index 675332e49a7b06..77c28d2ebf0137 100644
+--- a/drivers/scsi/qla4xxx/ql4_os.c
++++ b/drivers/scsi/qla4xxx/ql4_os.c
+@@ -3420,6 +3420,8 @@ static int qla4xxx_alloc_pdu(struct iscsi_task *task, uint8_t opcode)
+ task_data->data_dma = dma_map_single(&ha->pdev->dev, task->data,
+ task->data_count,
+ DMA_TO_DEVICE);
++ if (dma_mapping_error(&ha->pdev->dev, task_data->data_dma))
++ return -ENOMEM;
+ }
+
+ DEBUG2(ql4_printk(KERN_INFO, ha, "%s: MaxRecvLen %u, iscsi hrd %d\n",
+diff --git a/drivers/spi/spi-fsl-dspi.c b/drivers/spi/spi-fsl-dspi.c
+index 7dd94369abb47c..3206c84c6f22fb 100644
+--- a/drivers/spi/spi-fsl-dspi.c
++++ b/drivers/spi/spi-fsl-dspi.c
+@@ -988,11 +988,20 @@ static int dspi_transfer_one_message(struct spi_controller *ctlr,
+ if (dspi->devtype_data->trans_mode == DSPI_DMA_MODE) {
+ status = dspi_dma_xfer(dspi);
+ } else {
++ /*
++ * Reinitialize the completion before transferring data
++ * to avoid the case where it might remain in the done
++ * state due to a spurious interrupt from a previous
++ * transfer. This could falsely signal that the current
++ * transfer has completed.
++ */
++ if (dspi->irq)
++ reinit_completion(&dspi->xfer_done);
++
+ dspi_fifo_write(dspi);
+
+ if (dspi->irq) {
+ wait_for_completion(&dspi->xfer_done);
+- reinit_completion(&dspi->xfer_done);
+ } else {
+ do {
+ status = dspi_poll(dspi);
+diff --git a/drivers/target/target_core_pr.c b/drivers/target/target_core_pr.c
+index 49d9167bb263b5..a9eb6a3e838347 100644
+--- a/drivers/target/target_core_pr.c
++++ b/drivers/target/target_core_pr.c
+@@ -1841,7 +1841,9 @@ core_scsi3_decode_spec_i_port(
+ }
+
+ kmem_cache_free(t10_pr_reg_cache, dest_pr_reg);
+- core_scsi3_lunacl_undepend_item(dest_se_deve);
++
++ if (dest_se_deve)
++ core_scsi3_lunacl_undepend_item(dest_se_deve);
+
+ if (is_local)
+ continue;
+diff --git a/drivers/ufs/core/ufs-sysfs.c b/drivers/ufs/core/ufs-sysfs.c
+index 3692b39b35e789..6c48255dfff02b 100644
+--- a/drivers/ufs/core/ufs-sysfs.c
++++ b/drivers/ufs/core/ufs-sysfs.c
+@@ -1278,7 +1278,7 @@ UFS_UNIT_DESC_PARAM(logical_block_size, _LOGICAL_BLK_SIZE, 1);
+ UFS_UNIT_DESC_PARAM(logical_block_count, _LOGICAL_BLK_COUNT, 8);
+ UFS_UNIT_DESC_PARAM(erase_block_size, _ERASE_BLK_SIZE, 4);
+ UFS_UNIT_DESC_PARAM(provisioning_type, _PROVISIONING_TYPE, 1);
+-UFS_UNIT_DESC_PARAM(physical_memory_resourse_count, _PHY_MEM_RSRC_CNT, 8);
++UFS_UNIT_DESC_PARAM(physical_memory_resource_count, _PHY_MEM_RSRC_CNT, 8);
+ UFS_UNIT_DESC_PARAM(context_capabilities, _CTX_CAPABILITIES, 2);
+ UFS_UNIT_DESC_PARAM(large_unit_granularity, _LARGE_UNIT_SIZE_M1, 1);
+ UFS_UNIT_DESC_PARAM(wb_buf_alloc_units, _WB_BUF_ALLOC_UNITS, 4);
+@@ -1295,7 +1295,7 @@ static struct attribute *ufs_sysfs_unit_descriptor[] = {
+ &dev_attr_logical_block_count.attr,
+ &dev_attr_erase_block_size.attr,
+ &dev_attr_provisioning_type.attr,
+- &dev_attr_physical_memory_resourse_count.attr,
++ &dev_attr_physical_memory_resource_count.attr,
+ &dev_attr_context_capabilities.attr,
+ &dev_attr_large_unit_granularity.attr,
+ &dev_attr_wb_buf_alloc_units.attr,
+diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
+index 412931cf240f64..da20bd3d46bc78 100644
+--- a/drivers/ufs/core/ufshcd.c
++++ b/drivers/ufs/core/ufshcd.c
+@@ -20,6 +20,7 @@
+ #include <linux/delay.h>
+ #include <linux/interrupt.h>
+ #include <linux/module.h>
++#include <linux/pm_opp.h>
+ #include <linux/regulator/consumer.h>
+ #include <linux/sched/clock.h>
+ #include <linux/iopoll.h>
+@@ -289,8 +290,8 @@ static inline void ufshcd_add_delay_before_dme_cmd(struct ufs_hba *hba);
+ static int ufshcd_host_reset_and_restore(struct ufs_hba *hba);
+ static void ufshcd_resume_clkscaling(struct ufs_hba *hba);
+ static void ufshcd_suspend_clkscaling(struct ufs_hba *hba);
+-static void __ufshcd_suspend_clkscaling(struct ufs_hba *hba);
+-static int ufshcd_scale_clks(struct ufs_hba *hba, bool scale_up);
++static int ufshcd_scale_clks(struct ufs_hba *hba, unsigned long freq,
++ bool scale_up);
+ static irqreturn_t ufshcd_intr(int irq, void *__hba);
+ static int ufshcd_change_power_mode(struct ufs_hba *hba,
+ struct ufs_pa_layer_attr *pwr_mode);
+@@ -1079,14 +1080,32 @@ static int ufshcd_set_clk_freq(struct ufs_hba *hba, bool scale_up)
+ return ret;
+ }
+
++static int ufshcd_opp_set_rate(struct ufs_hba *hba, unsigned long freq)
++{
++ struct dev_pm_opp *opp;
++ int ret;
++
++ opp = dev_pm_opp_find_freq_floor_indexed(hba->dev,
++ &freq, 0);
++ if (IS_ERR(opp))
++ return PTR_ERR(opp);
++
++ ret = dev_pm_opp_set_opp(hba->dev, opp);
++ dev_pm_opp_put(opp);
++
++ return ret;
++}
++
+ /**
+ * ufshcd_scale_clks - scale up or scale down UFS controller clocks
+ * @hba: per adapter instance
++ * @freq: frequency to scale
+ * @scale_up: True if scaling up and false if scaling down
+ *
+ * Return: 0 if successful; < 0 upon failure.
+ */
+-static int ufshcd_scale_clks(struct ufs_hba *hba, bool scale_up)
++static int ufshcd_scale_clks(struct ufs_hba *hba, unsigned long freq,
++ bool scale_up)
+ {
+ int ret = 0;
+ ktime_t start = ktime_get();
+@@ -1095,13 +1114,21 @@ static int ufshcd_scale_clks(struct ufs_hba *hba, bool scale_up)
+ if (ret)
+ goto out;
+
+- ret = ufshcd_set_clk_freq(hba, scale_up);
++ if (hba->use_pm_opp)
++ ret = ufshcd_opp_set_rate(hba, freq);
++ else
++ ret = ufshcd_set_clk_freq(hba, scale_up);
+ if (ret)
+ goto out;
+
+ ret = ufshcd_vops_clk_scale_notify(hba, scale_up, POST_CHANGE);
+- if (ret)
+- ufshcd_set_clk_freq(hba, !scale_up);
++ if (ret) {
++ if (hba->use_pm_opp)
++ ufshcd_opp_set_rate(hba,
++ hba->devfreq->previous_freq);
++ else
++ ufshcd_set_clk_freq(hba, !scale_up);
++ }
+
+ out:
+ trace_ufshcd_profile_clk_scaling(dev_name(hba->dev),
+@@ -1113,12 +1140,13 @@ static int ufshcd_scale_clks(struct ufs_hba *hba, bool scale_up)
+ /**
+ * ufshcd_is_devfreq_scaling_required - check if scaling is required or not
+ * @hba: per adapter instance
++ * @freq: frequency to scale
+ * @scale_up: True if scaling up and false if scaling down
+ *
+ * Return: true if scaling is required, false otherwise.
+ */
+ static bool ufshcd_is_devfreq_scaling_required(struct ufs_hba *hba,
+- bool scale_up)
++ unsigned long freq, bool scale_up)
+ {
+ struct ufs_clk_info *clki;
+ struct list_head *head = &hba->clk_list_head;
+@@ -1126,6 +1154,9 @@ static bool ufshcd_is_devfreq_scaling_required(struct ufs_hba *hba,
+ if (list_empty(head))
+ return false;
+
++ if (hba->use_pm_opp)
++ return freq != hba->clk_scaling.target_freq;
++
+ list_for_each_entry(clki, head, list) {
+ if (!IS_ERR_OR_NULL(clki->clk)) {
+ if (scale_up && clki->max_freq) {
+@@ -1324,12 +1355,14 @@ static void ufshcd_clock_scaling_unprepare(struct ufs_hba *hba, int err, bool sc
+ /**
+ * ufshcd_devfreq_scale - scale up/down UFS clocks and gear
+ * @hba: per adapter instance
++ * @freq: frequency to scale
+ * @scale_up: True for scaling up and false for scalin down
+ *
+ * Return: 0 for success; -EBUSY if scaling can't happen at this time; non-zero
+ * for any other errors.
+ */
+-static int ufshcd_devfreq_scale(struct ufs_hba *hba, bool scale_up)
++static int ufshcd_devfreq_scale(struct ufs_hba *hba, unsigned long freq,
++ bool scale_up)
+ {
+ int ret = 0;
+
+@@ -1344,7 +1377,7 @@ static int ufshcd_devfreq_scale(struct ufs_hba *hba, bool scale_up)
+ goto out_unprepare;
+ }
+
+- ret = ufshcd_scale_clks(hba, scale_up);
++ ret = ufshcd_scale_clks(hba, freq, scale_up);
+ if (ret) {
+ if (!scale_up)
+ ufshcd_scale_gear(hba, true);
+@@ -1355,7 +1388,8 @@ static int ufshcd_devfreq_scale(struct ufs_hba *hba, bool scale_up)
+ if (scale_up) {
+ ret = ufshcd_scale_gear(hba, true);
+ if (ret) {
+- ufshcd_scale_clks(hba, false);
++ ufshcd_scale_clks(hba, hba->devfreq->previous_freq,
++ false);
+ goto out_unprepare;
+ }
+ }
+@@ -1377,9 +1411,10 @@ static void ufshcd_clk_scaling_suspend_work(struct work_struct *work)
+ return;
+ }
+ hba->clk_scaling.is_suspended = true;
++ hba->clk_scaling.window_start_t = 0;
+ spin_unlock_irqrestore(hba->host->host_lock, irq_flags);
+
+- __ufshcd_suspend_clkscaling(hba);
++ devfreq_suspend_device(hba->devfreq);
+ }
+
+ static void ufshcd_clk_scaling_resume_work(struct work_struct *work)
+@@ -1413,9 +1448,22 @@ static int ufshcd_devfreq_target(struct device *dev,
+ if (!ufshcd_is_clkscaling_supported(hba))
+ return -EINVAL;
+
+- clki = list_first_entry(&hba->clk_list_head, struct ufs_clk_info, list);
+- /* Override with the closest supported frequency */
+- *freq = (unsigned long) clk_round_rate(clki->clk, *freq);
++ if (hba->use_pm_opp) {
++ struct dev_pm_opp *opp;
++
++ /* Get the recommended frequency from OPP framework */
++ opp = devfreq_recommended_opp(dev, freq, flags);
++ if (IS_ERR(opp))
++ return PTR_ERR(opp);
++
++ dev_pm_opp_put(opp);
++ } else {
++ /* Override with the closest supported frequency */
++ clki = list_first_entry(&hba->clk_list_head, struct ufs_clk_info,
++ list);
++ *freq = (unsigned long) clk_round_rate(clki->clk, *freq);
++ }
++
+ spin_lock_irqsave(hba->host->host_lock, irq_flags);
+ if (ufshcd_eh_in_progress(hba)) {
+ spin_unlock_irqrestore(hba->host->host_lock, irq_flags);
+@@ -1430,12 +1478,17 @@ static int ufshcd_devfreq_target(struct device *dev,
+ goto out;
+ }
+
+- /* Decide based on the rounded-off frequency and update */
+- scale_up = *freq == clki->max_freq;
+- if (!scale_up)
++ /* Decide based on the target or rounded-off frequency and update */
++ if (hba->use_pm_opp)
++ scale_up = *freq > hba->clk_scaling.target_freq;
++ else
++ scale_up = *freq == clki->max_freq;
++
++ if (!hba->use_pm_opp && !scale_up)
+ *freq = clki->min_freq;
++
+ /* Update the frequency */
+- if (!ufshcd_is_devfreq_scaling_required(hba, scale_up)) {
++ if (!ufshcd_is_devfreq_scaling_required(hba, *freq, scale_up)) {
+ spin_unlock_irqrestore(hba->host->host_lock, irq_flags);
+ ret = 0;
+ goto out; /* no state change required */
+@@ -1443,7 +1496,9 @@ static int ufshcd_devfreq_target(struct device *dev,
+ spin_unlock_irqrestore(hba->host->host_lock, irq_flags);
+
+ start = ktime_get();
+- ret = ufshcd_devfreq_scale(hba, scale_up);
++ ret = ufshcd_devfreq_scale(hba, *freq, scale_up);
++ if (!ret)
++ hba->clk_scaling.target_freq = *freq;
+
+ trace_ufshcd_profile_clk_scaling(dev_name(hba->dev),
+ (scale_up ? "up" : "down"),
+@@ -1463,8 +1518,6 @@ static int ufshcd_devfreq_get_dev_status(struct device *dev,
+ struct ufs_hba *hba = dev_get_drvdata(dev);
+ struct ufs_clk_scaling *scaling = &hba->clk_scaling;
+ unsigned long flags;
+- struct list_head *clk_list = &hba->clk_list_head;
+- struct ufs_clk_info *clki;
+ ktime_t curr_t;
+
+ if (!ufshcd_is_clkscaling_supported(hba))
+@@ -1477,17 +1530,24 @@ static int ufshcd_devfreq_get_dev_status(struct device *dev,
+ if (!scaling->window_start_t)
+ goto start_window;
+
+- clki = list_first_entry(clk_list, struct ufs_clk_info, list);
+ /*
+ * If current frequency is 0, then the ondemand governor considers
+ * there's no initial frequency set. And it always requests to set
+ * to max. frequency.
+ */
+- stat->current_frequency = clki->curr_freq;
++ if (hba->use_pm_opp) {
++ stat->current_frequency = hba->clk_scaling.target_freq;
++ } else {
++ struct list_head *clk_list = &hba->clk_list_head;
++ struct ufs_clk_info *clki;
++
++ clki = list_first_entry(clk_list, struct ufs_clk_info, list);
++ stat->current_frequency = clki->curr_freq;
++ }
++
+ if (scaling->is_busy_started)
+ scaling->tot_busy_t += ktime_us_delta(curr_t,
+ scaling->busy_start_t);
+-
+ stat->total_time = ktime_us_delta(curr_t, scaling->window_start_t);
+ stat->busy_time = scaling->tot_busy_t;
+ start_window:
+@@ -1516,9 +1576,11 @@ static int ufshcd_devfreq_init(struct ufs_hba *hba)
+ if (list_empty(clk_list))
+ return 0;
+
+- clki = list_first_entry(clk_list, struct ufs_clk_info, list);
+- dev_pm_opp_add(hba->dev, clki->min_freq, 0);
+- dev_pm_opp_add(hba->dev, clki->max_freq, 0);
++ if (!hba->use_pm_opp) {
++ clki = list_first_entry(clk_list, struct ufs_clk_info, list);
++ dev_pm_opp_add(hba->dev, clki->min_freq, 0);
++ dev_pm_opp_add(hba->dev, clki->max_freq, 0);
++ }
+
+ ufshcd_vops_config_scaling_param(hba, &hba->vps->devfreq_profile,
+ &hba->vps->ondemand_data);
+@@ -1530,8 +1592,10 @@ static int ufshcd_devfreq_init(struct ufs_hba *hba)
+ ret = PTR_ERR(devfreq);
+ dev_err(hba->dev, "Unable to register with devfreq %d\n", ret);
+
+- dev_pm_opp_remove(hba->dev, clki->min_freq);
+- dev_pm_opp_remove(hba->dev, clki->max_freq);
++ if (!hba->use_pm_opp) {
++ dev_pm_opp_remove(hba->dev, clki->min_freq);
++ dev_pm_opp_remove(hba->dev, clki->max_freq);
++ }
+ return ret;
+ }
+
+@@ -1543,7 +1607,6 @@ static int ufshcd_devfreq_init(struct ufs_hba *hba)
+ static void ufshcd_devfreq_remove(struct ufs_hba *hba)
+ {
+ struct list_head *clk_list = &hba->clk_list_head;
+- struct ufs_clk_info *clki;
+
+ if (!hba->devfreq)
+ return;
+@@ -1551,19 +1614,13 @@ static void ufshcd_devfreq_remove(struct ufs_hba *hba)
+ devfreq_remove_device(hba->devfreq);
+ hba->devfreq = NULL;
+
+- clki = list_first_entry(clk_list, struct ufs_clk_info, list);
+- dev_pm_opp_remove(hba->dev, clki->min_freq);
+- dev_pm_opp_remove(hba->dev, clki->max_freq);
+-}
++ if (!hba->use_pm_opp) {
++ struct ufs_clk_info *clki;
+
+-static void __ufshcd_suspend_clkscaling(struct ufs_hba *hba)
+-{
+- unsigned long flags;
+-
+- devfreq_suspend_device(hba->devfreq);
+- spin_lock_irqsave(hba->host->host_lock, flags);
+- hba->clk_scaling.window_start_t = 0;
+- spin_unlock_irqrestore(hba->host->host_lock, flags);
++ clki = list_first_entry(clk_list, struct ufs_clk_info, list);
++ dev_pm_opp_remove(hba->dev, clki->min_freq);
++ dev_pm_opp_remove(hba->dev, clki->max_freq);
++ }
+ }
+
+ static void ufshcd_suspend_clkscaling(struct ufs_hba *hba)
+@@ -1578,11 +1635,12 @@ static void ufshcd_suspend_clkscaling(struct ufs_hba *hba)
+ if (!hba->clk_scaling.is_suspended) {
+ suspend = true;
+ hba->clk_scaling.is_suspended = true;
++ hba->clk_scaling.window_start_t = 0;
+ }
+ spin_unlock_irqrestore(hba->host->host_lock, flags);
+
+ if (suspend)
+- __ufshcd_suspend_clkscaling(hba);
++ devfreq_suspend_device(hba->devfreq);
+ }
+
+ static void ufshcd_resume_clkscaling(struct ufs_hba *hba)
+@@ -1638,7 +1696,7 @@ static ssize_t ufshcd_clkscale_enable_store(struct device *dev,
+ ufshcd_resume_clkscaling(hba);
+ } else {
+ ufshcd_suspend_clkscaling(hba);
+- err = ufshcd_devfreq_scale(hba, true);
++ err = ufshcd_devfreq_scale(hba, ULONG_MAX, true);
+ if (err)
+ dev_err(hba->dev, "%s: failed to scale clocks up %d\n",
+ __func__, err);
+@@ -7722,7 +7780,8 @@ static int ufshcd_host_reset_and_restore(struct ufs_hba *hba)
+ hba->silence_err_logs = false;
+
+ /* scale up clocks to max frequency before full reinitialization */
+- ufshcd_scale_clks(hba, true);
++ if (ufshcd_is_clkscaling_supported(hba))
++ ufshcd_scale_clks(hba, ULONG_MAX, true);
+
+ err = ufshcd_hba_enable(hba);
+
+@@ -9360,6 +9419,17 @@ static int ufshcd_init_clocks(struct ufs_hba *hba)
+ dev_dbg(dev, "%s: clk: %s, rate: %lu\n", __func__,
+ clki->name, clk_get_rate(clki->clk));
+ }
++
++ /* Set Max. frequency for all clocks */
++ if (hba->use_pm_opp) {
++ ret = ufshcd_opp_set_rate(hba, ULONG_MAX);
++ if (ret) {
++ dev_err(hba->dev, "%s: failed to set OPP: %d", __func__,
++ ret);
++ goto out;
++ }
++ }
++
+ out:
+ return ret;
+ }
+diff --git a/drivers/usb/cdns3/cdnsp-ring.c b/drivers/usb/cdns3/cdnsp-ring.c
+index 080a3f17a35dd7..3b17d9e4b07d8c 100644
+--- a/drivers/usb/cdns3/cdnsp-ring.c
++++ b/drivers/usb/cdns3/cdnsp-ring.c
+@@ -772,7 +772,9 @@ static int cdnsp_update_port_id(struct cdnsp_device *pdev, u32 port_id)
+ }
+
+ if (port_id != old_port) {
+- cdnsp_disable_slot(pdev);
++ if (pdev->slot_id)
++ cdnsp_disable_slot(pdev);
++
+ pdev->active_port = port;
+ cdnsp_enable_slot(pdev);
+ }
+diff --git a/drivers/usb/chipidea/udc.c b/drivers/usb/chipidea/udc.c
+index f2ae5f4c58283a..0bee561420af29 100644
+--- a/drivers/usb/chipidea/udc.c
++++ b/drivers/usb/chipidea/udc.c
+@@ -2213,6 +2213,10 @@ static void udc_suspend(struct ci_hdrc *ci)
+ */
+ if (hw_read(ci, OP_ENDPTLISTADDR, ~0) == 0)
+ hw_write(ci, OP_ENDPTLISTADDR, ~0, ~0);
++
++ if (ci->gadget.connected &&
++ (!ci->suspended || !device_may_wakeup(ci->dev)))
++ usb_gadget_disconnect(&ci->gadget);
+ }
+
+ static void udc_resume(struct ci_hdrc *ci, bool power_lost)
+@@ -2223,6 +2227,9 @@ static void udc_resume(struct ci_hdrc *ci, bool power_lost)
+ OTGSC_BSVIS | OTGSC_BSVIE);
+ if (ci->vbus_active)
+ usb_gadget_vbus_disconnect(&ci->gadget);
++ } else if (ci->vbus_active && ci->driver &&
++ !ci->gadget.connected) {
++ usb_gadget_connect(&ci->gadget);
+ }
+
+ /* Restore value 0 if it was set for power lost check */
+diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
+index c979ecd0169a2d..46db600fdd824e 100644
+--- a/drivers/usb/core/quirks.c
++++ b/drivers/usb/core/quirks.c
+@@ -227,7 +227,8 @@ static const struct usb_device_id usb_quirk_list[] = {
+ { USB_DEVICE(0x046a, 0x0023), .driver_info = USB_QUIRK_RESET_RESUME },
+
+ /* Logitech HD Webcam C270 */
+- { USB_DEVICE(0x046d, 0x0825), .driver_info = USB_QUIRK_RESET_RESUME },
++ { USB_DEVICE(0x046d, 0x0825), .driver_info = USB_QUIRK_RESET_RESUME |
++ USB_QUIRK_NO_LPM},
+
+ /* Logitech HD Pro Webcams C920, C920-C, C922, C925e and C930e */
+ { USB_DEVICE(0x046d, 0x082d), .driver_info = USB_QUIRK_DELAY_INIT },
+diff --git a/drivers/usb/host/xhci-dbgcap.c b/drivers/usb/host/xhci-dbgcap.c
+index fab9e6be4e27ae..2cd8c757c65342 100644
+--- a/drivers/usb/host/xhci-dbgcap.c
++++ b/drivers/usb/host/xhci-dbgcap.c
+@@ -639,6 +639,10 @@ static void xhci_dbc_stop(struct xhci_dbc *dbc)
+ case DS_DISABLED:
+ return;
+ case DS_CONFIGURED:
++ spin_lock(&dbc->lock);
++ xhci_dbc_flush_requests(dbc);
++ spin_unlock(&dbc->lock);
++
+ if (dbc->driver->disconnect)
+ dbc->driver->disconnect(dbc);
+ break;
+diff --git a/drivers/usb/host/xhci-dbgtty.c b/drivers/usb/host/xhci-dbgtty.c
+index 0266c2f5bc0d8e..aa689fbd3dce67 100644
+--- a/drivers/usb/host/xhci-dbgtty.c
++++ b/drivers/usb/host/xhci-dbgtty.c
+@@ -585,6 +585,7 @@ int dbc_tty_init(void)
+ dbc_tty_driver->type = TTY_DRIVER_TYPE_SERIAL;
+ dbc_tty_driver->subtype = SERIAL_TYPE_NORMAL;
+ dbc_tty_driver->init_termios = tty_std_termios;
++ dbc_tty_driver->init_termios.c_lflag &= ~ECHO;
+ dbc_tty_driver->init_termios.c_cflag =
+ B9600 | CS8 | CREAD | HUPCL | CLOCAL;
+ dbc_tty_driver->init_termios.c_ispeed = 9600;
+diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
+index 22cca89efbfd72..cceb69d4f61e1c 100644
+--- a/drivers/usb/host/xhci-mem.c
++++ b/drivers/usb/host/xhci-mem.c
+@@ -1436,6 +1436,10 @@ int xhci_endpoint_init(struct xhci_hcd *xhci,
+ /* Periodic endpoint bInterval limit quirk */
+ if (usb_endpoint_xfer_int(&ep->desc) ||
+ usb_endpoint_xfer_isoc(&ep->desc)) {
++ if ((xhci->quirks & XHCI_LIMIT_ENDPOINT_INTERVAL_9) &&
++ interval >= 9) {
++ interval = 8;
++ }
+ if ((xhci->quirks & XHCI_LIMIT_ENDPOINT_INTERVAL_7) &&
+ udev->speed >= USB_SPEED_HIGH &&
+ interval >= 7) {
+diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
+index c1a172b6feae84..5abc48f148dcbc 100644
+--- a/drivers/usb/host/xhci-pci.c
++++ b/drivers/usb/host/xhci-pci.c
+@@ -65,12 +65,22 @@
+ #define PCI_DEVICE_ID_INTEL_ALDER_LAKE_PCH_XHCI 0x51ed
+ #define PCI_DEVICE_ID_INTEL_ALDER_LAKE_N_PCH_XHCI 0x54ed
+
++#define PCI_DEVICE_ID_AMD_ARIEL_TYPEC_XHCI 0x13ed
++#define PCI_DEVICE_ID_AMD_ARIEL_TYPEA_XHCI 0x13ee
++#define PCI_DEVICE_ID_AMD_STARSHIP_XHCI 0x148c
++#define PCI_DEVICE_ID_AMD_FIREFLIGHT_15D4_XHCI 0x15d4
++#define PCI_DEVICE_ID_AMD_FIREFLIGHT_15D5_XHCI 0x15d5
++#define PCI_DEVICE_ID_AMD_RAVEN_15E0_XHCI 0x15e0
++#define PCI_DEVICE_ID_AMD_RAVEN_15E1_XHCI 0x15e1
++#define PCI_DEVICE_ID_AMD_RAVEN2_XHCI 0x15e5
+ #define PCI_DEVICE_ID_AMD_RENOIR_XHCI 0x1639
+ #define PCI_DEVICE_ID_AMD_PROMONTORYA_4 0x43b9
+ #define PCI_DEVICE_ID_AMD_PROMONTORYA_3 0x43ba
+ #define PCI_DEVICE_ID_AMD_PROMONTORYA_2 0x43bb
+ #define PCI_DEVICE_ID_AMD_PROMONTORYA_1 0x43bc
+
++#define PCI_DEVICE_ID_ATI_NAVI10_7316_XHCI 0x7316
++
+ #define PCI_DEVICE_ID_ASMEDIA_1042_XHCI 0x1042
+ #define PCI_DEVICE_ID_ASMEDIA_1042A_XHCI 0x1142
+ #define PCI_DEVICE_ID_ASMEDIA_1142_XHCI 0x1242
+@@ -348,6 +358,21 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+ if (pdev->vendor == PCI_VENDOR_ID_NEC)
+ xhci->quirks |= XHCI_NEC_HOST;
+
++ if (pdev->vendor == PCI_VENDOR_ID_AMD &&
++ (pdev->device == PCI_DEVICE_ID_AMD_ARIEL_TYPEC_XHCI ||
++ pdev->device == PCI_DEVICE_ID_AMD_ARIEL_TYPEA_XHCI ||
++ pdev->device == PCI_DEVICE_ID_AMD_STARSHIP_XHCI ||
++ pdev->device == PCI_DEVICE_ID_AMD_FIREFLIGHT_15D4_XHCI ||
++ pdev->device == PCI_DEVICE_ID_AMD_FIREFLIGHT_15D5_XHCI ||
++ pdev->device == PCI_DEVICE_ID_AMD_RAVEN_15E0_XHCI ||
++ pdev->device == PCI_DEVICE_ID_AMD_RAVEN_15E1_XHCI ||
++ pdev->device == PCI_DEVICE_ID_AMD_RAVEN2_XHCI))
++ xhci->quirks |= XHCI_LIMIT_ENDPOINT_INTERVAL_9;
++
++ if (pdev->vendor == PCI_VENDOR_ID_ATI &&
++ pdev->device == PCI_DEVICE_ID_ATI_NAVI10_7316_XHCI)
++ xhci->quirks |= XHCI_LIMIT_ENDPOINT_INTERVAL_9;
++
+ if (pdev->vendor == PCI_VENDOR_ID_AMD && xhci->hci_version == 0x96)
+ xhci->quirks |= XHCI_AMD_0x96_HOST;
+
+diff --git a/drivers/usb/host/xhci-plat.c b/drivers/usb/host/xhci-plat.c
+index 8832e0cedadaff..749ba3596c2b3f 100644
+--- a/drivers/usb/host/xhci-plat.c
++++ b/drivers/usb/host/xhci-plat.c
+@@ -313,7 +313,8 @@ int xhci_plat_probe(struct platform_device *pdev, struct device *sysdev, const s
+ }
+
+ usb3_hcd = xhci_get_usb3_hcd(xhci);
+- if (usb3_hcd && HCC_MAX_PSA(xhci->hcc_params) >= 4)
++ if (usb3_hcd && HCC_MAX_PSA(xhci->hcc_params) >= 4 &&
++ !(xhci->quirks & XHCI_BROKEN_STREAMS))
+ usb3_hcd->can_do_streams = 1;
+
+ if (xhci->shared_hcd) {
+diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
+index 74bdd035d756a4..159cdfc7129070 100644
+--- a/drivers/usb/host/xhci.h
++++ b/drivers/usb/host/xhci.h
+@@ -1659,6 +1659,7 @@ struct xhci_hcd {
+ #define XHCI_WRITE_64_HI_LO BIT_ULL(47)
+ #define XHCI_CDNS_SCTX_QUIRK BIT_ULL(48)
+ #define XHCI_ETRON_HOST BIT_ULL(49)
++#define XHCI_LIMIT_ENDPOINT_INTERVAL_9 BIT_ULL(50)
+
+ unsigned int num_active_eps;
+ unsigned int limit_active_eps;
+diff --git a/drivers/usb/typec/altmodes/displayport.c b/drivers/usb/typec/altmodes/displayport.c
+index 5f6fc5b79212ef..7eb78885fa2b3a 100644
+--- a/drivers/usb/typec/altmodes/displayport.c
++++ b/drivers/usb/typec/altmodes/displayport.c
+@@ -324,8 +324,7 @@ static int dp_altmode_vdm(struct typec_altmode *alt,
+ case CMDT_RSP_NAK:
+ switch (cmd) {
+ case DP_CMD_STATUS_UPDATE:
+- if (typec_altmode_exit(alt))
+- dev_err(&dp->alt->dev, "Exit Mode Failed!\n");
++ dp->state = DP_STATE_EXIT;
+ break;
+ case DP_CMD_CONFIGURE:
+ dp->data.conf = 0;
+@@ -528,7 +527,7 @@ static ssize_t pin_assignment_show(struct device *dev,
+
+ assignments = get_current_pin_assignments(dp);
+
+- for (i = 0; assignments; assignments >>= 1, i++) {
++ for (i = 0; assignments && i < DP_PIN_ASSIGN_MAX; assignments >>= 1, i++) {
+ if (assignments & 1) {
+ if (i == cur)
+ len += sprintf(buf + len, "[%s] ",
+diff --git a/fs/anon_inodes.c b/fs/anon_inodes.c
+index 24192a7667edf7..a25766e90f0a6e 100644
+--- a/fs/anon_inodes.c
++++ b/fs/anon_inodes.c
+@@ -55,15 +55,26 @@ static struct file_system_type anon_inode_fs_type = {
+ .kill_sb = kill_anon_super,
+ };
+
+-static struct inode *anon_inode_make_secure_inode(
+- const char *name,
+- const struct inode *context_inode)
++/**
++ * anon_inode_make_secure_inode - allocate an anonymous inode with security context
++ * @sb: [in] Superblock to allocate from
++ * @name: [in] Name of the class of the newfile (e.g., "secretmem")
++ * @context_inode:
++ * [in] Optional parent inode for security inheritance
++ *
++ * The function ensures proper security initialization through the LSM hook
++ * security_inode_init_security_anon().
++ *
++ * Return: Pointer to new inode on success, ERR_PTR on failure.
++ */
++struct inode *anon_inode_make_secure_inode(struct super_block *sb, const char *name,
++ const struct inode *context_inode)
+ {
+ struct inode *inode;
+ const struct qstr qname = QSTR_INIT(name, strlen(name));
+ int error;
+
+- inode = alloc_anon_inode(anon_inode_mnt->mnt_sb);
++ inode = alloc_anon_inode(sb);
+ if (IS_ERR(inode))
+ return inode;
+ inode->i_flags &= ~S_PRIVATE;
+@@ -74,6 +85,7 @@ static struct inode *anon_inode_make_secure_inode(
+ }
+ return inode;
+ }
++EXPORT_SYMBOL_GPL_FOR_MODULES(anon_inode_make_secure_inode, "kvm");
+
+ static struct file *__anon_inode_getfile(const char *name,
+ const struct file_operations *fops,
+@@ -88,7 +100,8 @@ static struct file *__anon_inode_getfile(const char *name,
+ return ERR_PTR(-ENOENT);
+
+ if (secure) {
+- inode = anon_inode_make_secure_inode(name, context_inode);
++ inode = anon_inode_make_secure_inode(anon_inode_mnt->mnt_sb,
++ name, context_inode);
+ if (IS_ERR(inode)) {
+ file = ERR_CAST(inode);
+ goto err;
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index af1f22b3cff7dc..e8e57abb032d7a 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -4615,9 +4615,8 @@ static int btrfs_rmdir(struct inode *dir, struct dentry *dentry)
+ {
+ struct inode *inode = d_inode(dentry);
+ struct btrfs_fs_info *fs_info = BTRFS_I(inode)->root->fs_info;
+- int err = 0;
++ int ret = 0;
+ struct btrfs_trans_handle *trans;
+- u64 last_unlink_trans;
+ struct fscrypt_name fname;
+
+ if (inode->i_size > BTRFS_EMPTY_DIR_SIZE)
+@@ -4631,55 +4630,56 @@ static int btrfs_rmdir(struct inode *dir, struct dentry *dentry)
+ return btrfs_delete_subvolume(BTRFS_I(dir), dentry);
+ }
+
+- err = fscrypt_setup_filename(dir, &dentry->d_name, 1, &fname);
+- if (err)
+- return err;
++ ret = fscrypt_setup_filename(dir, &dentry->d_name, 1, &fname);
++ if (ret)
++ return ret;
+
+ /* This needs to handle no-key deletions later on */
+
+ trans = __unlink_start_trans(BTRFS_I(dir));
+ if (IS_ERR(trans)) {
+- err = PTR_ERR(trans);
++ ret = PTR_ERR(trans);
+ goto out_notrans;
+ }
+
++ /*
++ * Propagate the last_unlink_trans value of the deleted dir to its
++ * parent directory. This is to prevent an unrecoverable log tree in the
++ * case we do something like this:
++ * 1) create dir foo
++ * 2) create snapshot under dir foo
++ * 3) delete the snapshot
++ * 4) rmdir foo
++ * 5) mkdir foo
++ * 6) fsync foo or some file inside foo
++ *
++ * This is because we can't unlink other roots when replaying the dir
++ * deletes for directory foo.
++ */
++ if (BTRFS_I(inode)->last_unlink_trans >= trans->transid)
++ btrfs_record_snapshot_destroy(trans, BTRFS_I(dir));
++
+ if (unlikely(btrfs_ino(BTRFS_I(inode)) == BTRFS_EMPTY_SUBVOL_DIR_OBJECTID)) {
+- err = btrfs_unlink_subvol(trans, BTRFS_I(dir), dentry);
++ ret = btrfs_unlink_subvol(trans, BTRFS_I(dir), dentry);
+ goto out;
+ }
+
+- err = btrfs_orphan_add(trans, BTRFS_I(inode));
+- if (err)
++ ret = btrfs_orphan_add(trans, BTRFS_I(inode));
++ if (ret)
+ goto out;
+
+- last_unlink_trans = BTRFS_I(inode)->last_unlink_trans;
+-
+ /* now the directory is empty */
+- err = btrfs_unlink_inode(trans, BTRFS_I(dir), BTRFS_I(d_inode(dentry)),
++ ret = btrfs_unlink_inode(trans, BTRFS_I(dir), BTRFS_I(d_inode(dentry)),
+ &fname.disk_name);
+- if (!err) {
++ if (!ret)
+ btrfs_i_size_write(BTRFS_I(inode), 0);
+- /*
+- * Propagate the last_unlink_trans value of the deleted dir to
+- * its parent directory. This is to prevent an unrecoverable
+- * log tree in the case we do something like this:
+- * 1) create dir foo
+- * 2) create snapshot under dir foo
+- * 3) delete the snapshot
+- * 4) rmdir foo
+- * 5) mkdir foo
+- * 6) fsync foo or some file inside foo
+- */
+- if (last_unlink_trans >= trans->transid)
+- BTRFS_I(dir)->last_unlink_trans = last_unlink_trans;
+- }
+ out:
+ btrfs_end_transaction(trans);
+ out_notrans:
+ btrfs_btree_balance_dirty(fs_info);
+ fscrypt_free_filename(&fname);
+
+- return err;
++ return ret;
+ }
+
+ /*
+diff --git a/fs/btrfs/ordered-data.c b/fs/btrfs/ordered-data.c
+index 86d846eb5ed492..c68e9ecbc438cc 100644
+--- a/fs/btrfs/ordered-data.c
++++ b/fs/btrfs/ordered-data.c
+@@ -154,9 +154,10 @@ static struct btrfs_ordered_extent *alloc_ordered_extent(
+ struct btrfs_ordered_extent *entry;
+ int ret;
+ u64 qgroup_rsv = 0;
++ const bool is_nocow = (flags &
++ ((1U << BTRFS_ORDERED_NOCOW) | (1U << BTRFS_ORDERED_PREALLOC)));
+
+- if (flags &
+- ((1 << BTRFS_ORDERED_NOCOW) | (1 << BTRFS_ORDERED_PREALLOC))) {
++ if (is_nocow) {
+ /* For nocow write, we can release the qgroup rsv right now */
+ ret = btrfs_qgroup_free_data(inode, NULL, file_offset, num_bytes, &qgroup_rsv);
+ if (ret < 0)
+@@ -171,8 +172,13 @@ static struct btrfs_ordered_extent *alloc_ordered_extent(
+ return ERR_PTR(ret);
+ }
+ entry = kmem_cache_zalloc(btrfs_ordered_extent_cache, GFP_NOFS);
+- if (!entry)
++ if (!entry) {
++ if (!is_nocow)
++ btrfs_qgroup_free_refroot(inode->root->fs_info,
++ btrfs_root_id(inode->root),
++ qgroup_rsv, BTRFS_QGROUP_RSV_DATA);
+ return ERR_PTR(-ENOMEM);
++ }
+
+ entry->file_offset = file_offset;
+ entry->num_bytes = num_bytes;
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index cc9a2f8a4ae3b7..13377c3b22897d 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -1087,7 +1087,9 @@ static inline int __add_inode_ref(struct btrfs_trans_handle *trans,
+ search_key.type = BTRFS_INODE_REF_KEY;
+ search_key.offset = parent_objectid;
+ ret = btrfs_search_slot(NULL, root, &search_key, path, 0, 0);
+- if (ret == 0) {
++ if (ret < 0) {
++ return ret;
++ } else if (ret == 0) {
+ struct btrfs_inode_ref *victim_ref;
+ unsigned long ptr;
+ unsigned long ptr_end;
+@@ -1160,13 +1162,13 @@ static inline int __add_inode_ref(struct btrfs_trans_handle *trans,
+ struct fscrypt_str victim_name;
+
+ extref = (struct btrfs_inode_extref *)(base + cur_offset);
++ victim_name.len = btrfs_inode_extref_name_len(leaf, extref);
+
+ if (btrfs_inode_extref_parent(leaf, extref) != parent_objectid)
+ goto next;
+
+ ret = read_alloc_one_name(leaf, &extref->name,
+- btrfs_inode_extref_name_len(leaf, extref),
+- &victim_name);
++ victim_name.len, &victim_name);
+ if (ret)
+ return ret;
+
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index ae129044c52f42..8f0cb7c7eedeb4 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -36,9 +36,21 @@
+ #include <trace/events/f2fs.h>
+ #include <uapi/linux/f2fs.h>
+
++static void f2fs_zero_post_eof_page(struct inode *inode, loff_t new_size)
++{
++ loff_t old_size = i_size_read(inode);
++
++ if (old_size >= new_size)
++ return;
++
++ /* zero or drop pages only in range of [old_size, new_size] */
++ truncate_pagecache(inode, old_size);
++}
++
+ static vm_fault_t f2fs_filemap_fault(struct vm_fault *vmf)
+ {
+ struct inode *inode = file_inode(vmf->vma->vm_file);
++ vm_flags_t flags = vmf->vma->vm_flags;
+ vm_fault_t ret;
+
+ ret = filemap_fault(vmf);
+@@ -46,47 +58,50 @@ static vm_fault_t f2fs_filemap_fault(struct vm_fault *vmf)
+ f2fs_update_iostat(F2FS_I_SB(inode), inode,
+ APP_MAPPED_READ_IO, F2FS_BLKSIZE);
+
+- trace_f2fs_filemap_fault(inode, vmf->pgoff, (unsigned long)ret);
++ trace_f2fs_filemap_fault(inode, vmf->pgoff, flags, ret);
+
+ return ret;
+ }
+
+ static vm_fault_t f2fs_vm_page_mkwrite(struct vm_fault *vmf)
+ {
+- struct page *page = vmf->page;
++ struct folio *folio = page_folio(vmf->page);
+ struct inode *inode = file_inode(vmf->vma->vm_file);
+ struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ struct dnode_of_data dn;
+- bool need_alloc = true;
++ bool need_alloc = !f2fs_is_pinned_file(inode);
+ int err = 0;
++ vm_fault_t ret;
+
+ if (unlikely(IS_IMMUTABLE(inode)))
+ return VM_FAULT_SIGBUS;
+
+- if (is_inode_flag_set(inode, FI_COMPRESS_RELEASED))
+- return VM_FAULT_SIGBUS;
++ if (is_inode_flag_set(inode, FI_COMPRESS_RELEASED)) {
++ err = -EIO;
++ goto out;
++ }
+
+ if (unlikely(f2fs_cp_error(sbi))) {
+ err = -EIO;
+- goto err;
++ goto out;
+ }
+
+ if (!f2fs_is_checkpoint_ready(sbi)) {
+ err = -ENOSPC;
+- goto err;
++ goto out;
+ }
+
+ err = f2fs_convert_inline_inode(inode);
+ if (err)
+- goto err;
++ goto out;
+
+ #ifdef CONFIG_F2FS_FS_COMPRESSION
+ if (f2fs_compressed_file(inode)) {
+- int ret = f2fs_is_compressed_cluster(inode, page->index);
++ int ret = f2fs_is_compressed_cluster(inode, folio->index);
+
+ if (ret < 0) {
+ err = ret;
+- goto err;
++ goto out;
+ } else if (ret) {
+ need_alloc = false;
+ }
+@@ -100,36 +115,40 @@ static vm_fault_t f2fs_vm_page_mkwrite(struct vm_fault *vmf)
+
+ f2fs_bug_on(sbi, f2fs_has_inline_data(inode));
+
++ filemap_invalidate_lock(inode->i_mapping);
++ f2fs_zero_post_eof_page(inode, (folio->index + 1) << PAGE_SHIFT);
++ filemap_invalidate_unlock(inode->i_mapping);
++
+ file_update_time(vmf->vma->vm_file);
+ filemap_invalidate_lock_shared(inode->i_mapping);
+- lock_page(page);
+- if (unlikely(page->mapping != inode->i_mapping ||
+- page_offset(page) > i_size_read(inode) ||
+- !PageUptodate(page))) {
+- unlock_page(page);
++
++ folio_lock(folio);
++ if (unlikely(folio->mapping != inode->i_mapping ||
++ folio_pos(folio) > i_size_read(inode) ||
++ !folio_test_uptodate(folio))) {
++ folio_unlock(folio);
+ err = -EFAULT;
+ goto out_sem;
+ }
+
++ set_new_dnode(&dn, inode, NULL, NULL, 0);
+ if (need_alloc) {
+ /* block allocation */
+- set_new_dnode(&dn, inode, NULL, NULL, 0);
+- err = f2fs_get_block_locked(&dn, page->index);
+- }
+-
+-#ifdef CONFIG_F2FS_FS_COMPRESSION
+- if (!need_alloc) {
+- set_new_dnode(&dn, inode, NULL, NULL, 0);
+- err = f2fs_get_dnode_of_data(&dn, page->index, LOOKUP_NODE);
++ err = f2fs_get_block_locked(&dn, folio->index);
++ } else {
++ err = f2fs_get_dnode_of_data(&dn, folio->index, LOOKUP_NODE);
+ f2fs_put_dnode(&dn);
++ if (f2fs_is_pinned_file(inode) &&
++ !__is_valid_data_blkaddr(dn.data_blkaddr))
++ err = -EIO;
+ }
+-#endif
++
+ if (err) {
+- unlock_page(page);
++ folio_unlock(folio);
+ goto out_sem;
+ }
+
+- f2fs_wait_on_page_writeback(page, DATA, false, true);
++ f2fs_wait_on_page_writeback(folio_page(folio, 0), DATA, false, true);
+
+ /* wait for GCed page writeback via META_MAPPING */
+ f2fs_wait_on_block_writeback(inode, dn.data_blkaddr);
+@@ -137,29 +156,31 @@ static vm_fault_t f2fs_vm_page_mkwrite(struct vm_fault *vmf)
+ /*
+ * check to see if the page is mapped already (no holes)
+ */
+- if (PageMappedToDisk(page))
++ if (folio_test_mappedtodisk(folio))
+ goto out_sem;
+
+ /* page is wholly or partially inside EOF */
+- if (((loff_t)(page->index + 1) << PAGE_SHIFT) >
++ if (((loff_t)(folio->index + 1) << PAGE_SHIFT) >
+ i_size_read(inode)) {
+ loff_t offset;
+
+ offset = i_size_read(inode) & ~PAGE_MASK;
+- zero_user_segment(page, offset, PAGE_SIZE);
++ folio_zero_segment(folio, offset, folio_size(folio));
+ }
+- set_page_dirty(page);
++ folio_mark_dirty(folio);
+
+ f2fs_update_iostat(sbi, inode, APP_MAPPED_IO, F2FS_BLKSIZE);
+ f2fs_update_time(sbi, REQ_TIME);
+
+- trace_f2fs_vm_page_mkwrite(page, DATA);
+ out_sem:
+ filemap_invalidate_unlock_shared(inode->i_mapping);
+
+ sb_end_pagefault(inode->i_sb);
+-err:
+- return vmf_fs_error(err);
++out:
++ ret = vmf_fs_error(err);
++
++ trace_f2fs_vm_page_mkwrite(inode, folio->index, vmf->vma->vm_flags, ret);
++ return ret;
+ }
+
+ static const struct vm_operations_struct f2fs_file_vm_ops = {
+@@ -1047,6 +1068,8 @@ int f2fs_setattr(struct mnt_idmap *idmap, struct dentry *dentry,
+ f2fs_down_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
+ filemap_invalidate_lock(inode->i_mapping);
+
++ if (attr->ia_size > old_size)
++ f2fs_zero_post_eof_page(inode, attr->ia_size);
+ truncate_setsize(inode, attr->ia_size);
+
+ if (attr->ia_size <= old_size)
+@@ -1165,6 +1188,10 @@ static int f2fs_punch_hole(struct inode *inode, loff_t offset, loff_t len)
+ if (ret)
+ return ret;
+
++ filemap_invalidate_lock(inode->i_mapping);
++ f2fs_zero_post_eof_page(inode, offset + len);
++ filemap_invalidate_unlock(inode->i_mapping);
++
+ pg_start = ((unsigned long long) offset) >> PAGE_SHIFT;
+ pg_end = ((unsigned long long) offset + len) >> PAGE_SHIFT;
+
+@@ -1449,6 +1476,8 @@ static int f2fs_do_collapse(struct inode *inode, loff_t offset, loff_t len)
+ f2fs_down_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
+ filemap_invalidate_lock(inode->i_mapping);
+
++ f2fs_zero_post_eof_page(inode, offset + len);
++
+ f2fs_lock_op(sbi);
+ f2fs_drop_extent_tree(inode);
+ truncate_pagecache(inode, offset);
+@@ -1571,6 +1600,10 @@ static int f2fs_zero_range(struct inode *inode, loff_t offset, loff_t len,
+ if (ret)
+ return ret;
+
++ filemap_invalidate_lock(mapping);
++ f2fs_zero_post_eof_page(inode, offset + len);
++ filemap_invalidate_unlock(mapping);
++
+ pg_start = ((unsigned long long) offset) >> PAGE_SHIFT;
+ pg_end = ((unsigned long long) offset + len) >> PAGE_SHIFT;
+
+@@ -1702,6 +1735,8 @@ static int f2fs_insert_range(struct inode *inode, loff_t offset, loff_t len)
+ /* avoid gc operation during block exchange */
+ f2fs_down_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
+ filemap_invalidate_lock(mapping);
++
++ f2fs_zero_post_eof_page(inode, offset + len);
+ truncate_pagecache(inode, offset);
+
+ while (!ret && idx > pg_start) {
+@@ -1757,6 +1792,10 @@ static int f2fs_expand_inode_data(struct inode *inode, loff_t offset,
+ if (err)
+ return err;
+
++ filemap_invalidate_lock(inode->i_mapping);
++ f2fs_zero_post_eof_page(inode, offset + len);
++ filemap_invalidate_unlock(inode->i_mapping);
++
+ f2fs_balance_fs(sbi, true);
+
+ pg_start = ((unsigned long long)offset) >> PAGE_SHIFT;
+@@ -3327,7 +3366,7 @@ static int f2fs_ioc_set_pin_file(struct file *filp, unsigned long arg)
+ goto done;
+ }
+
+- if (f2fs_sb_has_blkzoned(sbi) && F2FS_HAS_BLOCKS(inode)) {
++ if (F2FS_HAS_BLOCKS(inode)) {
+ ret = -EFBIG;
+ goto out;
+ }
+@@ -4670,6 +4709,10 @@ static ssize_t f2fs_write_checks(struct kiocb *iocb, struct iov_iter *from)
+ err = file_modified(file);
+ if (err)
+ return err;
++
++ filemap_invalidate_lock(inode->i_mapping);
++ f2fs_zero_post_eof_page(inode, iocb->ki_pos + iov_iter_count(from));
++ filemap_invalidate_unlock(inode->i_mapping);
+ return count;
+ }
+
+@@ -4914,6 +4957,8 @@ static ssize_t f2fs_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
+ bool dio;
+ bool may_need_sync = true;
+ int preallocated;
++ const loff_t pos = iocb->ki_pos;
++ const ssize_t count = iov_iter_count(from);
+ ssize_t ret;
+
+ if (unlikely(f2fs_cp_error(F2FS_I_SB(inode)))) {
+@@ -4935,6 +4980,12 @@ static ssize_t f2fs_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
+ inode_lock(inode);
+ }
+
++ if (f2fs_is_pinned_file(inode) &&
++ !f2fs_overwrite_io(inode, pos, count)) {
++ ret = -EIO;
++ goto out_unlock;
++ }
++
+ ret = f2fs_write_checks(iocb, from);
+ if (ret <= 0)
+ goto out_unlock;
+diff --git a/fs/nfs/flexfilelayout/flexfilelayout.c b/fs/nfs/flexfilelayout/flexfilelayout.c
+index 0bc537de1b2958..0a26444fe20233 100644
+--- a/fs/nfs/flexfilelayout/flexfilelayout.c
++++ b/fs/nfs/flexfilelayout/flexfilelayout.c
+@@ -1096,6 +1096,7 @@ static void ff_layout_reset_read(struct nfs_pgio_header *hdr)
+ }
+
+ static int ff_layout_async_handle_error_v4(struct rpc_task *task,
++ u32 op_status,
+ struct nfs4_state *state,
+ struct nfs_client *clp,
+ struct pnfs_layout_segment *lseg,
+@@ -1106,32 +1107,42 @@ static int ff_layout_async_handle_error_v4(struct rpc_task *task,
+ struct nfs4_deviceid_node *devid = FF_LAYOUT_DEVID_NODE(lseg, idx);
+ struct nfs4_slot_table *tbl = &clp->cl_session->fc_slot_table;
+
+- switch (task->tk_status) {
+- case -NFS4ERR_BADSESSION:
+- case -NFS4ERR_BADSLOT:
+- case -NFS4ERR_BAD_HIGH_SLOT:
+- case -NFS4ERR_DEADSESSION:
+- case -NFS4ERR_CONN_NOT_BOUND_TO_SESSION:
+- case -NFS4ERR_SEQ_FALSE_RETRY:
+- case -NFS4ERR_SEQ_MISORDERED:
++ switch (op_status) {
++ case NFS4_OK:
++ case NFS4ERR_NXIO:
++ break;
++ case NFSERR_PERM:
++ if (!task->tk_xprt)
++ break;
++ xprt_force_disconnect(task->tk_xprt);
++ goto out_retry;
++ case NFS4ERR_BADSESSION:
++ case NFS4ERR_BADSLOT:
++ case NFS4ERR_BAD_HIGH_SLOT:
++ case NFS4ERR_DEADSESSION:
++ case NFS4ERR_CONN_NOT_BOUND_TO_SESSION:
++ case NFS4ERR_SEQ_FALSE_RETRY:
++ case NFS4ERR_SEQ_MISORDERED:
+ dprintk("%s ERROR %d, Reset session. Exchangeid "
+ "flags 0x%x\n", __func__, task->tk_status,
+ clp->cl_exchange_flags);
+ nfs4_schedule_session_recovery(clp->cl_session, task->tk_status);
+- break;
+- case -NFS4ERR_DELAY:
+- case -NFS4ERR_GRACE:
++ goto out_retry;
++ case NFS4ERR_DELAY:
++ nfs_inc_stats(lseg->pls_layout->plh_inode, NFSIOS_DELAY);
++ fallthrough;
++ case NFS4ERR_GRACE:
+ rpc_delay(task, FF_LAYOUT_POLL_RETRY_MAX);
+- break;
+- case -NFS4ERR_RETRY_UNCACHED_REP:
+- break;
++ goto out_retry;
++ case NFS4ERR_RETRY_UNCACHED_REP:
++ goto out_retry;
+ /* Invalidate Layout errors */
+- case -NFS4ERR_PNFS_NO_LAYOUT:
+- case -ESTALE: /* mapped NFS4ERR_STALE */
+- case -EBADHANDLE: /* mapped NFS4ERR_BADHANDLE */
+- case -EISDIR: /* mapped NFS4ERR_ISDIR */
+- case -NFS4ERR_FHEXPIRED:
+- case -NFS4ERR_WRONG_TYPE:
++ case NFS4ERR_PNFS_NO_LAYOUT:
++ case NFS4ERR_STALE:
++ case NFS4ERR_BADHANDLE:
++ case NFS4ERR_ISDIR:
++ case NFS4ERR_FHEXPIRED:
++ case NFS4ERR_WRONG_TYPE:
+ dprintk("%s Invalid layout error %d\n", __func__,
+ task->tk_status);
+ /*
+@@ -1144,6 +1155,11 @@ static int ff_layout_async_handle_error_v4(struct rpc_task *task,
+ pnfs_destroy_layout(NFS_I(inode));
+ rpc_wake_up(&tbl->slot_tbl_waitq);
+ goto reset;
++ default:
++ break;
++ }
++
++ switch (task->tk_status) {
+ /* RPC connection errors */
+ case -ECONNREFUSED:
+ case -EHOSTDOWN:
+@@ -1159,26 +1175,56 @@ static int ff_layout_async_handle_error_v4(struct rpc_task *task,
+ nfs4_delete_deviceid(devid->ld, devid->nfs_client,
+ &devid->deviceid);
+ rpc_wake_up(&tbl->slot_tbl_waitq);
+- fallthrough;
++ break;
+ default:
+- if (ff_layout_avoid_mds_available_ds(lseg))
+- return -NFS4ERR_RESET_TO_PNFS;
+-reset:
+- dprintk("%s Retry through MDS. Error %d\n", __func__,
+- task->tk_status);
+- return -NFS4ERR_RESET_TO_MDS;
++ break;
+ }
++
++ if (ff_layout_avoid_mds_available_ds(lseg))
++ return -NFS4ERR_RESET_TO_PNFS;
++reset:
++ dprintk("%s Retry through MDS. Error %d\n", __func__,
++ task->tk_status);
++ return -NFS4ERR_RESET_TO_MDS;
++
++out_retry:
+ task->tk_status = 0;
+ return -EAGAIN;
+ }
+
+ /* Retry all errors through either pNFS or MDS except for -EJUKEBOX */
+ static int ff_layout_async_handle_error_v3(struct rpc_task *task,
++ u32 op_status,
++ struct nfs_client *clp,
+ struct pnfs_layout_segment *lseg,
+ u32 idx)
+ {
+ struct nfs4_deviceid_node *devid = FF_LAYOUT_DEVID_NODE(lseg, idx);
+
++ switch (op_status) {
++ case NFS_OK:
++ case NFSERR_NXIO:
++ break;
++ case NFSERR_PERM:
++ if (!task->tk_xprt)
++ break;
++ xprt_force_disconnect(task->tk_xprt);
++ goto out_retry;
++ case NFSERR_ACCES:
++ case NFSERR_BADHANDLE:
++ case NFSERR_FBIG:
++ case NFSERR_IO:
++ case NFSERR_NOSPC:
++ case NFSERR_ROFS:
++ case NFSERR_STALE:
++ goto out_reset_to_pnfs;
++ case NFSERR_JUKEBOX:
++ nfs_inc_stats(lseg->pls_layout->plh_inode, NFSIOS_DELAY);
++ goto out_retry;
++ default:
++ break;
++ }
++
+ switch (task->tk_status) {
+ /* File access problems. Don't mark the device as unavailable */
+ case -EACCES:
+@@ -1197,6 +1243,7 @@ static int ff_layout_async_handle_error_v3(struct rpc_task *task,
+ nfs4_delete_deviceid(devid->ld, devid->nfs_client,
+ &devid->deviceid);
+ }
++out_reset_to_pnfs:
+ /* FIXME: Need to prevent infinite looping here. */
+ return -NFS4ERR_RESET_TO_PNFS;
+ out_retry:
+@@ -1207,6 +1254,7 @@ static int ff_layout_async_handle_error_v3(struct rpc_task *task,
+ }
+
+ static int ff_layout_async_handle_error(struct rpc_task *task,
++ u32 op_status,
+ struct nfs4_state *state,
+ struct nfs_client *clp,
+ struct pnfs_layout_segment *lseg,
+@@ -1225,10 +1273,11 @@ static int ff_layout_async_handle_error(struct rpc_task *task,
+
+ switch (vers) {
+ case 3:
+- return ff_layout_async_handle_error_v3(task, lseg, idx);
+- case 4:
+- return ff_layout_async_handle_error_v4(task, state, clp,
++ return ff_layout_async_handle_error_v3(task, op_status, clp,
+ lseg, idx);
++ case 4:
++ return ff_layout_async_handle_error_v4(task, op_status, state,
++ clp, lseg, idx);
+ default:
+ /* should never happen */
+ WARN_ON_ONCE(1);
+@@ -1281,6 +1330,7 @@ static void ff_layout_io_track_ds_error(struct pnfs_layout_segment *lseg,
+ switch (status) {
+ case NFS4ERR_DELAY:
+ case NFS4ERR_GRACE:
++ case NFS4ERR_PERM:
+ break;
+ case NFS4ERR_NXIO:
+ ff_layout_mark_ds_unreachable(lseg, idx);
+@@ -1313,7 +1363,8 @@ static int ff_layout_read_done_cb(struct rpc_task *task,
+ trace_ff_layout_read_error(hdr);
+ }
+
+- err = ff_layout_async_handle_error(task, hdr->args.context->state,
++ err = ff_layout_async_handle_error(task, hdr->res.op_status,
++ hdr->args.context->state,
+ hdr->ds_clp, hdr->lseg,
+ hdr->pgio_mirror_idx);
+
+@@ -1483,7 +1534,8 @@ static int ff_layout_write_done_cb(struct rpc_task *task,
+ trace_ff_layout_write_error(hdr);
+ }
+
+- err = ff_layout_async_handle_error(task, hdr->args.context->state,
++ err = ff_layout_async_handle_error(task, hdr->res.op_status,
++ hdr->args.context->state,
+ hdr->ds_clp, hdr->lseg,
+ hdr->pgio_mirror_idx);
+
+@@ -1529,8 +1581,9 @@ static int ff_layout_commit_done_cb(struct rpc_task *task,
+ trace_ff_layout_commit_error(data);
+ }
+
+- err = ff_layout_async_handle_error(task, NULL, data->ds_clp,
+- data->lseg, data->ds_commit_index);
++ err = ff_layout_async_handle_error(task, data->res.op_status,
++ NULL, data->ds_clp, data->lseg,
++ data->ds_commit_index);
+
+ trace_nfs4_pnfs_commit_ds(data, err);
+ switch (err) {
+diff --git a/fs/nfs/inode.c b/fs/nfs/inode.c
+index 419d98cf9e29f1..7e7dd2aab449dd 100644
+--- a/fs/nfs/inode.c
++++ b/fs/nfs/inode.c
+@@ -2442,15 +2442,26 @@ EXPORT_SYMBOL_GPL(nfs_net_id);
+ static int nfs_net_init(struct net *net)
+ {
+ struct nfs_net *nn = net_generic(net, nfs_net_id);
++ int err;
+
+ nfs_clients_init(net);
+
+ if (!rpc_proc_register(net, &nn->rpcstats)) {
+- nfs_clients_exit(net);
+- return -ENOMEM;
++ err = -ENOMEM;
++ goto err_proc_rpc;
+ }
+
+- return nfs_fs_proc_net_init(net);
++ err = nfs_fs_proc_net_init(net);
++ if (err)
++ goto err_proc_nfs;
++
++ return 0;
++
++err_proc_nfs:
++ rpc_proc_unregister(net, "nfs");
++err_proc_rpc:
++ nfs_clients_exit(net);
++ return err;
+ }
+
+ static void nfs_net_exit(struct net *net)
+diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
+index 73aa5a63afe3fb..79d1ffdcbebd3d 100644
+--- a/fs/nfs/pnfs.c
++++ b/fs/nfs/pnfs.c
+@@ -1930,8 +1930,10 @@ static void nfs_layoutget_begin(struct pnfs_layout_hdr *lo)
+ static void nfs_layoutget_end(struct pnfs_layout_hdr *lo)
+ {
+ if (atomic_dec_and_test(&lo->plh_outstanding) &&
+- test_and_clear_bit(NFS_LAYOUT_DRAIN, &lo->plh_flags))
++ test_and_clear_bit(NFS_LAYOUT_DRAIN, &lo->plh_flags)) {
++ smp_mb__after_atomic();
+ wake_up_bit(&lo->plh_flags, NFS_LAYOUT_DRAIN);
++ }
+ }
+
+ static bool pnfs_is_first_layoutget(struct pnfs_layout_hdr *lo)
+diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h
+index d776340ad91ce6..5c856adf7be9ec 100644
+--- a/fs/smb/client/cifsglob.h
++++ b/fs/smb/client/cifsglob.h
+@@ -743,6 +743,7 @@ struct TCP_Server_Info {
+ __le32 session_key_id; /* retrieved from negotiate response and send in session setup request */
+ struct session_key session_key;
+ unsigned long lstrp; /* when we got last response from this server */
++ unsigned long neg_start; /* when negotiate started (jiffies) */
+ struct cifs_secmech secmech; /* crypto sec mech functs, descriptors */
+ #define CIFS_NEGFLAVOR_UNENCAP 1 /* wct == 17, but no ext_sec */
+ #define CIFS_NEGFLAVOR_EXTENDED 2 /* wct == 17, ext_sec bit set */
+@@ -1268,6 +1269,7 @@ struct cifs_tcon {
+ bool use_persistent:1; /* use persistent instead of durable handles */
+ bool no_lease:1; /* Do not request leases on files or directories */
+ bool use_witness:1; /* use witness protocol */
++ bool dummy:1; /* dummy tcon used for reconnecting channels */
+ __le32 capabilities;
+ __u32 share_flags;
+ __u32 maximal_access;
+diff --git a/fs/smb/client/connect.c b/fs/smb/client/connect.c
+index 454420aa02220f..8298d1745f9b9c 100644
+--- a/fs/smb/client/connect.c
++++ b/fs/smb/client/connect.c
+@@ -677,12 +677,12 @@ server_unresponsive(struct TCP_Server_Info *server)
+ /*
+ * If we're in the process of mounting a share or reconnecting a session
+ * and the server abruptly shut down (e.g. socket wasn't closed, packet
+- * had been ACK'ed but no SMB response), don't wait longer than 20s to
+- * negotiate protocol.
++ * had been ACK'ed but no SMB response), don't wait longer than 20s from
++ * when negotiate actually started.
+ */
+ spin_lock(&server->srv_lock);
+ if (server->tcpStatus == CifsInNegotiate &&
+- time_after(jiffies, server->lstrp + 20 * HZ)) {
++ time_after(jiffies, server->neg_start + 20 * HZ)) {
+ spin_unlock(&server->srv_lock);
+ cifs_reconnect(server, false);
+ return true;
+@@ -3998,6 +3998,7 @@ cifs_negotiate_protocol(const unsigned int xid, struct cifs_ses *ses,
+
+ server->lstrp = jiffies;
+ server->tcpStatus = CifsInNegotiate;
++ server->neg_start = jiffies;
+ spin_unlock(&server->srv_lock);
+
+ rc = server->ops->negotiate(xid, ses, server);
+diff --git a/fs/smb/client/readdir.c b/fs/smb/client/readdir.c
+index 222348ae625866..0be16f8acd9af5 100644
+--- a/fs/smb/client/readdir.c
++++ b/fs/smb/client/readdir.c
+@@ -263,7 +263,7 @@ cifs_posix_to_fattr(struct cifs_fattr *fattr, struct smb2_posix_info *info,
+ /* The Mode field in the response can now include the file type as well */
+ fattr->cf_mode = wire_mode_to_posix(le32_to_cpu(info->Mode),
+ fattr->cf_cifsattrs & ATTR_DIRECTORY);
+- fattr->cf_dtype = S_DT(le32_to_cpu(info->Mode));
++ fattr->cf_dtype = S_DT(fattr->cf_mode);
+
+ switch (fattr->cf_mode & S_IFMT) {
+ case S_IFLNK:
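For context: the one-line change above derives the dirent type from the mode that wire_mode_to_posix() has already reconstructed, instead of from the raw wire Mode field, so entries whose type is only recoverable after conversion no longer surface as DT_UNKNOWN. A minimal userspace sketch of the same S_IFMT-to-DT_* mapping (MY_S_DT stands in for the kernel's S_DT macro; the mode values are invented):

#include <stdio.h>
#include <sys/stat.h>

/* Same mapping the kernel's S_DT() performs: the DT_* dirent type
 * constants are the S_IFMT file-type bits shifted down by 12. */
#define MY_S_DT(mode) (((mode) & S_IFMT) >> 12)

int main(void)
{
        mode_t wire_mode = 0;            /* pretend the wire carried no type */
        mode_t cf_mode = S_IFDIR | 0755; /* ...but conversion recovered it   */

        /* Old code: type taken from the raw wire mode -> 0 (DT_UNKNOWN). */
        printf("from wire mode: %u\n", (unsigned)MY_S_DT(wire_mode));
        /* Fixed code: type taken from the converted mode -> 4 (DT_DIR). */
        printf("from cf_mode:   %u\n", (unsigned)MY_S_DT(cf_mode));
        return 0;
}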
+diff --git a/fs/smb/client/smb2pdu.c b/fs/smb/client/smb2pdu.c
+index e0f58600933059..357abb0170c495 100644
+--- a/fs/smb/client/smb2pdu.c
++++ b/fs/smb/client/smb2pdu.c
+@@ -437,9 +437,9 @@ smb2_reconnect(__le16 smb2_command, struct cifs_tcon *tcon,
+ free_xid(xid);
+ ses->flags &= ~CIFS_SES_FLAGS_PENDING_QUERY_INTERFACES;
+
+- /* regardless of rc value, setup polling */
+- queue_delayed_work(cifsiod_wq, &tcon->query_interfaces,
+- (SMB_INTERFACE_POLL_INTERVAL * HZ));
++ if (!tcon->ipc && !tcon->dummy)
++ queue_delayed_work(cifsiod_wq, &tcon->query_interfaces,
++ (SMB_INTERFACE_POLL_INTERVAL * HZ));
+
+ mutex_unlock(&ses->session_mutex);
+
+@@ -4228,10 +4228,8 @@ void smb2_reconnect_server(struct work_struct *work)
+ }
+ goto done;
+ }
+-
+ tcon->status = TID_GOOD;
+- tcon->retry = false;
+- tcon->need_reconnect = false;
++ tcon->dummy = true;
+
+ /* now reconnect sessions for necessary channels */
+ list_for_each_entry_safe(ses, ses2, &tmp_ses_list, rlist) {
+diff --git a/fs/smb/client/trace.h b/fs/smb/client/trace.h
+index 563cb4d8edf0c3..4dfdc521c5c985 100644
+--- a/fs/smb/client/trace.h
++++ b/fs/smb/client/trace.h
+@@ -114,7 +114,7 @@ DECLARE_EVENT_CLASS(smb3_rw_err_class,
+ __entry->len = len;
+ __entry->rc = rc;
+ ),
+- TP_printk("\txid=%u sid=0x%llx tid=0x%x fid=0x%llx offset=0x%llx len=0x%x rc=%d",
++ TP_printk("xid=%u sid=0x%llx tid=0x%x fid=0x%llx offset=0x%llx len=0x%x rc=%d",
+ __entry->xid, __entry->sesid, __entry->tid, __entry->fid,
+ __entry->offset, __entry->len, __entry->rc)
+ )
+@@ -247,7 +247,7 @@ DECLARE_EVENT_CLASS(smb3_fd_class,
+ __entry->tid = tid;
+ __entry->sesid = sesid;
+ ),
+- TP_printk("\txid=%u sid=0x%llx tid=0x%x fid=0x%llx",
++ TP_printk("xid=%u sid=0x%llx tid=0x%x fid=0x%llx",
+ __entry->xid, __entry->sesid, __entry->tid, __entry->fid)
+ )
+
+@@ -286,7 +286,7 @@ DECLARE_EVENT_CLASS(smb3_fd_err_class,
+ __entry->sesid = sesid;
+ __entry->rc = rc;
+ ),
+- TP_printk("\txid=%u sid=0x%llx tid=0x%x fid=0x%llx rc=%d",
++ TP_printk("xid=%u sid=0x%llx tid=0x%x fid=0x%llx rc=%d",
+ __entry->xid, __entry->sesid, __entry->tid, __entry->fid,
+ __entry->rc)
+ )
+@@ -558,7 +558,7 @@ DECLARE_EVENT_CLASS(smb3_cmd_err_class,
+ __entry->status = status;
+ __entry->rc = rc;
+ ),
+- TP_printk("\tsid=0x%llx tid=0x%x cmd=%u mid=%llu status=0x%x rc=%d",
++ TP_printk("sid=0x%llx tid=0x%x cmd=%u mid=%llu status=0x%x rc=%d",
+ __entry->sesid, __entry->tid, __entry->cmd, __entry->mid,
+ __entry->status, __entry->rc)
+ )
+@@ -593,7 +593,7 @@ DECLARE_EVENT_CLASS(smb3_cmd_done_class,
+ __entry->cmd = cmd;
+ __entry->mid = mid;
+ ),
+- TP_printk("\tsid=0x%llx tid=0x%x cmd=%u mid=%llu",
++ TP_printk("sid=0x%llx tid=0x%x cmd=%u mid=%llu",
+ __entry->sesid, __entry->tid,
+ __entry->cmd, __entry->mid)
+ )
+@@ -631,7 +631,7 @@ DECLARE_EVENT_CLASS(smb3_mid_class,
+ __entry->when_sent = when_sent;
+ __entry->when_received = when_received;
+ ),
+- TP_printk("\tcmd=%u mid=%llu pid=%u, when_sent=%lu when_rcv=%lu",
++ TP_printk("cmd=%u mid=%llu pid=%u, when_sent=%lu when_rcv=%lu",
+ __entry->cmd, __entry->mid, __entry->pid, __entry->when_sent,
+ __entry->when_received)
+ )
+@@ -662,7 +662,7 @@ DECLARE_EVENT_CLASS(smb3_exit_err_class,
+ __assign_str(func_name, func_name);
+ __entry->rc = rc;
+ ),
+- TP_printk("\t%s: xid=%u rc=%d",
++ TP_printk("%s: xid=%u rc=%d",
+ __get_str(func_name), __entry->xid, __entry->rc)
+ )
+
+@@ -688,7 +688,7 @@ DECLARE_EVENT_CLASS(smb3_sync_err_class,
+ __entry->ino = ino;
+ __entry->rc = rc;
+ ),
+- TP_printk("\tino=%lu rc=%d",
++ TP_printk("ino=%lu rc=%d",
+ __entry->ino, __entry->rc)
+ )
+
+@@ -714,7 +714,7 @@ DECLARE_EVENT_CLASS(smb3_enter_exit_class,
+ __entry->xid = xid;
+ __assign_str(func_name, func_name);
+ ),
+- TP_printk("\t%s: xid=%u",
++ TP_printk("%s: xid=%u",
+ __get_str(func_name), __entry->xid)
+ )
+
+diff --git a/include/linux/cpu.h b/include/linux/cpu.h
+index 20db7fc0651f3c..6b4f9f16968821 100644
+--- a/include/linux/cpu.h
++++ b/include/linux/cpu.h
+@@ -79,6 +79,7 @@ extern ssize_t cpu_show_reg_file_data_sampling(struct device *dev,
+ struct device_attribute *attr, char *buf);
+ extern ssize_t cpu_show_indirect_target_selection(struct device *dev,
+ struct device_attribute *attr, char *buf);
++extern ssize_t cpu_show_tsa(struct device *dev, struct device_attribute *attr, char *buf);
+
+ extern __printf(4, 5)
+ struct device *cpu_device_create(struct device *parent, void *drvdata,
+diff --git a/include/linux/export.h b/include/linux/export.h
+index 9911508a9604fb..06f7a4eb649286 100644
+--- a/include/linux/export.h
++++ b/include/linux/export.h
+@@ -42,11 +42,17 @@ extern struct module __this_module;
+ .long sym
+ #endif
+
+-#define ___EXPORT_SYMBOL(sym, license, ns) \
++/*
++ * LLVM integrated assembler can merge adjacent string literals (like
++ * C and GNU-as) passed to '.ascii', but not to '.asciz' and chokes on:
++ *
++ * .asciz "MODULE_" "kvm" ;
++ */
++#define ___EXPORT_SYMBOL(sym, license, ns...) \
+ .section ".export_symbol","a" ASM_NL \
+ __export_symbol_##sym: ASM_NL \
+ .asciz license ASM_NL \
+- .asciz ns ASM_NL \
++ .ascii ns "\0" ASM_NL \
+ __EXPORT_SYMBOL_REF(sym) ASM_NL \
+ .previous
+
+@@ -88,4 +94,6 @@ extern struct module __this_module;
+ #define EXPORT_SYMBOL_NS(sym, ns) __EXPORT_SYMBOL(sym, "", __stringify(ns))
+ #define EXPORT_SYMBOL_NS_GPL(sym, ns) __EXPORT_SYMBOL(sym, "GPL", __stringify(ns))
+
++#define EXPORT_SYMBOL_GPL_FOR_MODULES(sym, mods) __EXPORT_SYMBOL(sym, "GPL", "module:" mods)
++
+ #endif /* _LINUX_EXPORT_H */
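The macro change above works because assemblers, like C compilers, concatenate adjacent string literals handed to '.ascii'; only '.asciz' trips up LLVM's integrated assembler. A small userspace C sketch of the analogous concatenation property in C (the namespace string is a made-up example):

#include <stdio.h>
#include <string.h>

int main(void)
{
        /* "module:" "kvm" merges into the single literal "module:kvm" at
         * compile time, the property ___EXPORT_SYMBOL now relies on when
         * it emits `.ascii ns "\0"` rather than `.asciz ns`. */
        const char ns[] = "module:" "kvm";

        printf("%s (len %zu)\n", ns, strlen(ns)); /* module:kvm (len 10) */
        return 0;
}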
+diff --git a/include/linux/fs.h b/include/linux/fs.h
+index 81edfa1e66b608..b641a01512fb09 100644
+--- a/include/linux/fs.h
++++ b/include/linux/fs.h
+@@ -3170,6 +3170,8 @@ extern int simple_write_begin(struct file *file, struct address_space *mapping,
+ extern const struct address_space_operations ram_aops;
+ extern int always_delete_dentry(const struct dentry *);
+ extern struct inode *alloc_anon_inode(struct super_block *);
++struct inode *anon_inode_make_secure_inode(struct super_block *sb, const char *name,
++ const struct inode *context_inode);
+ extern int simple_nosetlease(struct file *, int, struct file_lock **, void **);
+ extern const struct dentry_operations simple_dentry_operations;
+
+diff --git a/include/linux/libata.h b/include/linux/libata.h
+index 91c4e11cb6abb4..285d709cbbde4d 100644
+--- a/include/linux/libata.h
++++ b/include/linux/libata.h
+@@ -1305,7 +1305,7 @@ int ata_acpi_stm(struct ata_port *ap, const struct ata_acpi_gtm *stm);
+ int ata_acpi_gtm(struct ata_port *ap, struct ata_acpi_gtm *stm);
+ unsigned int ata_acpi_gtm_xfermask(struct ata_device *dev,
+ const struct ata_acpi_gtm *gtm);
+-int ata_acpi_cbl_80wire(struct ata_port *ap, const struct ata_acpi_gtm *gtm);
++int ata_acpi_cbl_pata_type(struct ata_port *ap);
+ #else
+ static inline const struct ata_acpi_gtm *ata_acpi_init_gtm(struct ata_port *ap)
+ {
+@@ -1330,10 +1330,9 @@ static inline unsigned int ata_acpi_gtm_xfermask(struct ata_device *dev,
+ return 0;
+ }
+
+-static inline int ata_acpi_cbl_80wire(struct ata_port *ap,
+- const struct ata_acpi_gtm *gtm)
++static inline int ata_acpi_cbl_pata_type(struct ata_port *ap)
+ {
+- return 0;
++ return ATA_CBL_PATA40;
+ }
+ #endif
+
+diff --git a/include/linux/usb/typec_dp.h b/include/linux/usb/typec_dp.h
+index 8d09c2f0a9b807..c3f08af20295ca 100644
+--- a/include/linux/usb/typec_dp.h
++++ b/include/linux/usb/typec_dp.h
+@@ -56,6 +56,7 @@ enum {
+ DP_PIN_ASSIGN_D,
+ DP_PIN_ASSIGN_E,
+ DP_PIN_ASSIGN_F, /* Not supported after v1.0b */
++ DP_PIN_ASSIGN_MAX,
+ };
+
+ /* DisplayPort alt mode specific commands */
+diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
+index d63af08c6cdc2b..4f067599e6e9e0 100644
+--- a/include/net/bluetooth/hci_core.h
++++ b/include/net/bluetooth/hci_core.h
+@@ -29,6 +29,7 @@
+ #include <linux/idr.h>
+ #include <linux/leds.h>
+ #include <linux/rculist.h>
++#include <linux/srcu.h>
+
+ #include <net/bluetooth/hci.h>
+ #include <net/bluetooth/hci_sync.h>
+@@ -339,6 +340,7 @@ struct adv_monitor {
+
+ struct hci_dev {
+ struct list_head list;
++ struct srcu_struct srcu;
+ struct mutex lock;
+
+ struct ida unset_handle_ida;
+diff --git a/include/trace/events/f2fs.h b/include/trace/events/f2fs.h
+index b6ffae01a8cd86..f2ce7f6da87975 100644
+--- a/include/trace/events/f2fs.h
++++ b/include/trace/events/f2fs.h
+@@ -1284,13 +1284,6 @@ DEFINE_EVENT(f2fs__page, f2fs_set_page_dirty,
+ TP_ARGS(page, type)
+ );
+
+-DEFINE_EVENT(f2fs__page, f2fs_vm_page_mkwrite,
+-
+- TP_PROTO(struct page *page, int type),
+-
+- TP_ARGS(page, type)
+-);
+-
+ TRACE_EVENT(f2fs_replace_atomic_write_block,
+
+ TP_PROTO(struct inode *inode, struct inode *cow_inode, pgoff_t index,
+@@ -1328,30 +1321,50 @@ TRACE_EVENT(f2fs_replace_atomic_write_block,
+ __entry->recovery)
+ );
+
+-TRACE_EVENT(f2fs_filemap_fault,
++DECLARE_EVENT_CLASS(f2fs_mmap,
+
+- TP_PROTO(struct inode *inode, pgoff_t index, unsigned long ret),
++ TP_PROTO(struct inode *inode, pgoff_t index,
++ vm_flags_t flags, vm_fault_t ret),
+
+- TP_ARGS(inode, index, ret),
++ TP_ARGS(inode, index, flags, ret),
+
+ TP_STRUCT__entry(
+ __field(dev_t, dev)
+ __field(ino_t, ino)
+ __field(pgoff_t, index)
+- __field(unsigned long, ret)
++ __field(vm_flags_t, flags)
++ __field(vm_fault_t, ret)
+ ),
+
+ TP_fast_assign(
+ __entry->dev = inode->i_sb->s_dev;
+ __entry->ino = inode->i_ino;
+ __entry->index = index;
++ __entry->flags = flags;
+ __entry->ret = ret;
+ ),
+
+- TP_printk("dev = (%d,%d), ino = %lu, index = %lu, ret = %lx",
++ TP_printk("dev = (%d,%d), ino = %lu, index = %lu, flags: %s, ret: %s",
+ show_dev_ino(__entry),
+ (unsigned long)__entry->index,
+- __entry->ret)
++ __print_flags(__entry->flags, "|", FAULT_FLAG_TRACE),
++ __print_flags(__entry->ret, "|", VM_FAULT_RESULT_TRACE))
++);
++
++DEFINE_EVENT(f2fs_mmap, f2fs_filemap_fault,
++
++ TP_PROTO(struct inode *inode, pgoff_t index,
++ vm_flags_t flags, vm_fault_t ret),
++
++ TP_ARGS(inode, index, flags, ret)
++);
++
++DEFINE_EVENT(f2fs_mmap, f2fs_vm_page_mkwrite,
++
++ TP_PROTO(struct inode *inode, pgoff_t index,
++ vm_flags_t flags, vm_fault_t ret),
++
++ TP_ARGS(inode, index, flags, ret)
+ );
+
+ TRACE_EVENT(f2fs_writepages,
+diff --git a/include/ufs/ufshcd.h b/include/ufs/ufshcd.h
+index d5aa832f8dba3c..e9db9682316a2a 100644
+--- a/include/ufs/ufshcd.h
++++ b/include/ufs/ufshcd.h
+@@ -430,6 +430,7 @@ struct ufs_clk_gating {
+ * @workq: workqueue to schedule devfreq suspend/resume work
+ * @suspend_work: worker to suspend devfreq
+ * @resume_work: worker to resume devfreq
++ * @target_freq: frequency requested by devfreq framework
+ * @min_gear: lowest HS gear to scale down to
+ * @is_enabled: tracks if scaling is currently enabled or not, controlled by
+ * clkscale_enable sysfs node
+@@ -449,6 +450,7 @@ struct ufs_clk_scaling {
+ struct workqueue_struct *workq;
+ struct work_struct suspend_work;
+ struct work_struct resume_work;
++ unsigned long target_freq;
+ u32 min_gear;
+ bool is_enabled;
+ bool is_allowed;
+@@ -862,6 +864,7 @@ enum ufshcd_mcq_opr {
+ * @auto_bkops_enabled: to track whether bkops is enabled in device
+ * @vreg_info: UFS device voltage regulator information
+ * @clk_list_head: UFS host controller clocks list node head
++ * @use_pm_opp: Indicates whether OPP based scaling is used or not
+ * @req_abort_count: number of times ufshcd_abort() has been called
+ * @lanes_per_direction: number of lanes per data direction between the UFS
+ * controller and the UFS device.
+@@ -1014,6 +1017,7 @@ struct ufs_hba {
+ bool auto_bkops_enabled;
+ struct ufs_vreg_info vreg_info;
+ struct list_head clk_list_head;
++ bool use_pm_opp;
+
+ /* Number of requests aborts */
+ int req_abort_count;
+diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
+index 1fb3b7a0ed5d27..536acebf22b0d0 100644
+--- a/kernel/rcu/tree.c
++++ b/kernel/rcu/tree.c
+@@ -2699,6 +2699,10 @@ __call_rcu_common(struct rcu_head *head, rcu_callback_t func, bool lazy_in)
+ /* Misaligned rcu_head! */
+ WARN_ON_ONCE((unsigned long)head & (sizeof(void *) - 1));
+
++ /* Avoid NULL dereference if callback is NULL. */
++ if (WARN_ON_ONCE(!func))
++ return;
++
+ if (debug_rcu_head_queue(head)) {
+ /*
+ * Probable double call_rcu(), so leak the callback.
+diff --git a/lib/test_objagg.c b/lib/test_objagg.c
+index c0c957c5063541..c0f7bb53db8d5c 100644
+--- a/lib/test_objagg.c
++++ b/lib/test_objagg.c
+@@ -899,8 +899,10 @@ static int check_expect_hints_stats(struct objagg_hints *objagg_hints,
+ int err;
+
+ stats = objagg_hints_stats_get(objagg_hints);
+- if (IS_ERR(stats))
++ if (IS_ERR(stats)) {
++ *errmsg = "objagg_hints_stats_get() failed.";
+ return PTR_ERR(stats);
++ }
+ err = __check_expect_stats(stats, expect_stats, errmsg);
+ objagg_stats_put(stats);
+ return err;
+diff --git a/mm/secretmem.c b/mm/secretmem.c
+index 399552814fd0ff..4bedf491a8a742 100644
+--- a/mm/secretmem.c
++++ b/mm/secretmem.c
+@@ -195,19 +195,10 @@ static struct file *secretmem_file_create(unsigned long flags)
+ struct file *file;
+ struct inode *inode;
+ const char *anon_name = "[secretmem]";
+- const struct qstr qname = QSTR_INIT(anon_name, strlen(anon_name));
+- int err;
+
+- inode = alloc_anon_inode(secretmem_mnt->mnt_sb);
++ inode = anon_inode_make_secure_inode(secretmem_mnt->mnt_sb, anon_name, NULL);
+ if (IS_ERR(inode))
+ return ERR_CAST(inode);
+-
+- err = security_inode_init_security_anon(inode, &qname, NULL);
+- if (err) {
+- file = ERR_PTR(err);
+- goto err_free_inode;
+- }
+-
+ file = alloc_file_pseudo(inode, secretmem_mnt, "secretmem",
+ O_RDWR, &secretmem_fops);
+ if (IS_ERR(file))
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index 32f7bd0e891689..824208a53c251e 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -65,7 +65,7 @@ static DEFINE_IDA(hci_index_ida);
+
+ /* Get HCI device by index.
+ * Device is held on return. */
+-struct hci_dev *hci_dev_get(int index)
++static struct hci_dev *__hci_dev_get(int index, int *srcu_index)
+ {
+ struct hci_dev *hdev = NULL, *d;
+
+@@ -78,6 +78,8 @@ struct hci_dev *hci_dev_get(int index)
+ list_for_each_entry(d, &hci_dev_list, list) {
+ if (d->id == index) {
+ hdev = hci_dev_hold(d);
++ if (srcu_index)
++ *srcu_index = srcu_read_lock(&d->srcu);
+ break;
+ }
+ }
+@@ -85,6 +87,22 @@ struct hci_dev *hci_dev_get(int index)
+ return hdev;
+ }
+
++struct hci_dev *hci_dev_get(int index)
++{
++ return __hci_dev_get(index, NULL);
++}
++
++static struct hci_dev *hci_dev_get_srcu(int index, int *srcu_index)
++{
++ return __hci_dev_get(index, srcu_index);
++}
++
++static void hci_dev_put_srcu(struct hci_dev *hdev, int srcu_index)
++{
++ srcu_read_unlock(&hdev->srcu, srcu_index);
++ hci_dev_put(hdev);
++}
++
+ /* ---- Inquiry support ---- */
+
+ bool hci_discovery_active(struct hci_dev *hdev)
+@@ -590,9 +608,9 @@ static int hci_dev_do_reset(struct hci_dev *hdev)
+ int hci_dev_reset(__u16 dev)
+ {
+ struct hci_dev *hdev;
+- int err;
++ int err, srcu_index;
+
+- hdev = hci_dev_get(dev);
++ hdev = hci_dev_get_srcu(dev, &srcu_index);
+ if (!hdev)
+ return -ENODEV;
+
+@@ -614,7 +632,7 @@ int hci_dev_reset(__u16 dev)
+ err = hci_dev_do_reset(hdev);
+
+ done:
+- hci_dev_put(hdev);
++ hci_dev_put_srcu(hdev, srcu_index);
+ return err;
+ }
+
+@@ -2424,6 +2442,11 @@ struct hci_dev *hci_alloc_dev_priv(int sizeof_priv)
+ if (!hdev)
+ return NULL;
+
++ if (init_srcu_struct(&hdev->srcu)) {
++ kfree(hdev);
++ return NULL;
++ }
++
+ hdev->pkt_type = (HCI_DM1 | HCI_DH1 | HCI_HV1);
+ hdev->esco_type = (ESCO_HV1);
+ hdev->link_mode = (HCI_LM_ACCEPT);
+@@ -2670,6 +2693,9 @@ void hci_unregister_dev(struct hci_dev *hdev)
+ list_del(&hdev->list);
+ write_unlock(&hci_dev_list_lock);
+
++ synchronize_srcu(&hdev->srcu);
++ cleanup_srcu_struct(&hdev->srcu);
++
+ cancel_work_sync(&hdev->rx_work);
+ cancel_work_sync(&hdev->cmd_work);
+ cancel_work_sync(&hdev->tx_work);
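The hunks above put hci_dev lookups under an SRCU read-side section so hci_dev_reset() can keep using the device while hci_unregister_dev() waits for all readers to drain before teardown. A condensed sketch of that lifetime pattern using the kernel's real SRCU primitives (struct my_dev and its helpers are hypothetical):

#include <linux/slab.h>
#include <linux/srcu.h>

struct my_dev {
        struct srcu_struct srcu;
        /* ... device state ... */
};

static struct my_dev *my_dev_alloc(void)
{
        struct my_dev *d = kzalloc(sizeof(*d), GFP_KERNEL);

        if (!d)
                return NULL;
        if (init_srcu_struct(&d->srcu)) {       /* as in hci_alloc_dev_priv() */
                kfree(d);
                return NULL;
        }
        return d;
}

/* Reader: may sleep inside the section, which plain RCU forbids. */
static void my_dev_use(struct my_dev *d)
{
        int idx = srcu_read_lock(&d->srcu);

        /* ... operate on the device ... */
        srcu_read_unlock(&d->srcu, idx);
}

/* Teardown: wait out every in-flight reader before freeing. */
static void my_dev_destroy(struct my_dev *d)
{
        synchronize_srcu(&d->srcu);             /* as in hci_unregister_dev() */
        cleanup_srcu_struct(&d->srcu);
        kfree(d);
}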
+diff --git a/net/bluetooth/hci_sync.c b/net/bluetooth/hci_sync.c
+index e92bc4ceb5adda..d602e9d8eff450 100644
+--- a/net/bluetooth/hci_sync.c
++++ b/net/bluetooth/hci_sync.c
+@@ -2010,13 +2010,10 @@ static int hci_clear_adv_sets_sync(struct hci_dev *hdev, struct sock *sk)
+ static int hci_clear_adv_sync(struct hci_dev *hdev, struct sock *sk, bool force)
+ {
+ struct adv_info *adv, *n;
+- int err = 0;
+
+ if (ext_adv_capable(hdev))
+ /* Remove all existing sets */
+- err = hci_clear_adv_sets_sync(hdev, sk);
+- if (ext_adv_capable(hdev))
+- return err;
++ return hci_clear_adv_sets_sync(hdev, sk);
+
+ /* This is safe as long as there is no command send while the lock is
+ * held.
+@@ -2044,13 +2041,11 @@ static int hci_clear_adv_sync(struct hci_dev *hdev, struct sock *sk, bool force)
+ static int hci_remove_adv_sync(struct hci_dev *hdev, u8 instance,
+ struct sock *sk)
+ {
+- int err = 0;
++ int err;
+
+ /* If we use extended advertising, instance has to be removed first. */
+ if (ext_adv_capable(hdev))
+- err = hci_remove_ext_adv_instance_sync(hdev, instance, sk);
+- if (ext_adv_capable(hdev))
+- return err;
++ return hci_remove_ext_adv_instance_sync(hdev, instance, sk);
+
+ /* This is safe as long as there is no command send while the lock is
+ * held.
+@@ -2149,16 +2144,13 @@ int hci_read_tx_power_sync(struct hci_dev *hdev, __le16 handle, u8 type)
+ int hci_disable_advertising_sync(struct hci_dev *hdev)
+ {
+ u8 enable = 0x00;
+- int err = 0;
+
+ /* If controller is not advertising we are done. */
+ if (!hci_dev_test_flag(hdev, HCI_LE_ADV))
+ return 0;
+
+ if (ext_adv_capable(hdev))
+- err = hci_disable_ext_adv_instance_sync(hdev, 0x00);
+- if (ext_adv_capable(hdev))
+- return err;
++ return hci_disable_ext_adv_instance_sync(hdev, 0x00);
+
+ return __hci_cmd_sync_status(hdev, HCI_OP_LE_SET_ADV_ENABLE,
+ sizeof(enable), &enable, HCI_CMD_TIMEOUT);
+@@ -2526,6 +2518,10 @@ static int hci_pause_advertising_sync(struct hci_dev *hdev)
+ int err;
+ int old_state;
+
++ /* If controller is not advertising we are done. */
++ if (!hci_dev_test_flag(hdev, HCI_LE_ADV))
++ return 0;
++
+ /* If already been paused there is nothing to do. */
+ if (hdev->advertising_paused)
+ return 0;
+diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c
+index 853d217cabc917..82fa8c28438f25 100644
+--- a/net/bluetooth/mgmt.c
++++ b/net/bluetooth/mgmt.c
+@@ -1074,7 +1074,8 @@ static int mesh_send_done_sync(struct hci_dev *hdev, void *data)
+ struct mgmt_mesh_tx *mesh_tx;
+
+ hci_dev_clear_flag(hdev, HCI_MESH_SENDING);
+- hci_disable_advertising_sync(hdev);
++ if (list_empty(&hdev->adv_instances))
++ hci_disable_advertising_sync(hdev);
+ mesh_tx = mgmt_mesh_next(hdev, NULL);
+
+ if (mesh_tx)
+@@ -2140,6 +2141,9 @@ static int set_mesh_sync(struct hci_dev *hdev, void *data)
+ else
+ hci_dev_clear_flag(hdev, HCI_MESH);
+
++ hdev->le_scan_interval = __le16_to_cpu(cp->period);
++ hdev->le_scan_window = __le16_to_cpu(cp->window);
++
+ len -= sizeof(*cp);
+
+ /* If filters don't fit, forward all adv pkts */
+@@ -2154,6 +2158,7 @@ static int set_mesh(struct sock *sk, struct hci_dev *hdev, void *data, u16 len)
+ {
+ struct mgmt_cp_set_mesh *cp = data;
+ struct mgmt_pending_cmd *cmd;
++ __u16 period, window;
+ int err = 0;
+
+ bt_dev_dbg(hdev, "sock %p", sk);
+@@ -2167,6 +2172,23 @@ static int set_mesh(struct sock *sk, struct hci_dev *hdev, void *data, u16 len)
+ return mgmt_cmd_status(sk, hdev->id, MGMT_OP_SET_MESH_RECEIVER,
+ MGMT_STATUS_INVALID_PARAMS);
+
++ /* Keep allowed ranges in sync with set_scan_params() */
++ period = __le16_to_cpu(cp->period);
++
++ if (period < 0x0004 || period > 0x4000)
++ return mgmt_cmd_status(sk, hdev->id, MGMT_OP_SET_MESH_RECEIVER,
++ MGMT_STATUS_INVALID_PARAMS);
++
++ window = __le16_to_cpu(cp->window);
++
++ if (window < 0x0004 || window > 0x4000)
++ return mgmt_cmd_status(sk, hdev->id, MGMT_OP_SET_MESH_RECEIVER,
++ MGMT_STATUS_INVALID_PARAMS);
++
++ if (window > period)
++ return mgmt_cmd_status(sk, hdev->id, MGMT_OP_SET_MESH_RECEIVER,
++ MGMT_STATUS_INVALID_PARAMS);
++
+ hci_dev_lock(hdev);
+
+ cmd = mgmt_pending_add(sk, MGMT_OP_SET_MESH_RECEIVER, hdev, data, len);
+@@ -6529,6 +6551,7 @@ static int set_scan_params(struct sock *sk, struct hci_dev *hdev,
+ return mgmt_cmd_status(sk, hdev->id, MGMT_OP_SET_SCAN_PARAMS,
+ MGMT_STATUS_NOT_SUPPORTED);
+
++ /* Keep allowed ranges in sync with set_mesh() */
+ interval = __le16_to_cpu(cp->interval);
+
+ if (interval < 0x0004 || interval > 0x4000)
+diff --git a/net/mac80211/chan.c b/net/mac80211/chan.c
+index 68952752b5990f..31c4f112345ea4 100644
+--- a/net/mac80211/chan.c
++++ b/net/mac80211/chan.c
+@@ -89,11 +89,11 @@ ieee80211_chanctx_reserved_chandef(struct ieee80211_local *local,
+
+ lockdep_assert_held(&local->chanctx_mtx);
+
++ if (WARN_ON(!compat))
++ return NULL;
++
+ list_for_each_entry(link, &ctx->reserved_links,
+ reserved_chanctx_list) {
+- if (!compat)
+- compat = &link->reserved_chandef;
+-
+ compat = cfg80211_chandef_compatible(&link->reserved_chandef,
+ compat);
+ if (!compat)
+diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h
+index 04c876d78d3bf0..44aad3394084bd 100644
+--- a/net/mac80211/ieee80211_i.h
++++ b/net/mac80211/ieee80211_i.h
+@@ -1186,6 +1186,15 @@ ieee80211_vif_get_shift(struct ieee80211_vif *vif)
+ return shift;
+ }
+
++#define for_each_link_data(sdata, __link) \
++ struct ieee80211_sub_if_data *__sdata = sdata; \
++ for (int __link_id = 0; \
++ __link_id < ARRAY_SIZE((__sdata)->link); __link_id++) \
++ if ((!(__sdata)->vif.valid_links || \
++ (__sdata)->vif.valid_links & BIT(__link_id)) && \
++ ((__link) = sdata_dereference((__sdata)->link[__link_id], \
++ (__sdata))))
++
+ static inline int
+ ieee80211_get_mbssid_beacon_len(struct cfg80211_mbssid_elems *elems,
+ struct cfg80211_rnr_elems *rnr_elems,
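The for_each_link_data() macro added above folds a for loop over every link slot together with an if that skips invalid links and dereferences the slot, giving callers a single flat iteration statement. A userspace sketch of the same loop-plus-filter macro shape (the table, mask, and names are invented):

#include <stdio.h>

#define MAX_SLOTS 4

struct table {
        unsigned int valid_mask;
        const char *slot[MAX_SLOTS];
};

/* Iterate only the slots whose bit is set in valid_mask, binding the
 * slot's value to p for the loop body, as for_each_link_data() does. */
#define for_each_valid_slot(t, p)                                       \
        for (int __i = 0; __i < MAX_SLOTS; __i++)                       \
                if (((t)->valid_mask & (1u << __i)) &&                  \
                    ((p) = (t)->slot[__i]))

int main(void)
{
        struct table t = {
                .valid_mask = 0x5, /* slots 0 and 2 are valid */
                .slot = { "zero", "one", "two", "three" },
        };
        const char *p;

        for_each_valid_slot(&t, p)
                printf("%s\n", p); /* prints "zero" then "two" */
        return 0;
}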
+diff --git a/net/mac80211/link.c b/net/mac80211/link.c
+index 16cbaea93fc32d..af4d2b2e9a26f8 100644
+--- a/net/mac80211/link.c
++++ b/net/mac80211/link.c
+@@ -28,8 +28,16 @@ void ieee80211_link_init(struct ieee80211_sub_if_data *sdata,
+ if (link_id < 0)
+ link_id = 0;
+
+- rcu_assign_pointer(sdata->vif.link_conf[link_id], link_conf);
+- rcu_assign_pointer(sdata->link[link_id], link);
++ if (sdata->vif.type == NL80211_IFTYPE_AP_VLAN) {
++ struct ieee80211_sub_if_data *ap_bss;
++ struct ieee80211_bss_conf *ap_bss_conf;
++
++ ap_bss = container_of(sdata->bss,
++ struct ieee80211_sub_if_data, u.ap);
++ ap_bss_conf = sdata_dereference(ap_bss->vif.link_conf[link_id],
++ ap_bss);
++ memcpy(link_conf, ap_bss_conf, sizeof(*link_conf));
++ }
+
+ link->sdata = sdata;
+ link->link_id = link_id;
+@@ -65,6 +73,9 @@ void ieee80211_link_init(struct ieee80211_sub_if_data *sdata,
+
+ ieee80211_link_debugfs_add(link);
+ }
++
++ rcu_assign_pointer(sdata->vif.link_conf[link_id], link_conf);
++ rcu_assign_pointer(sdata->link[link_id], link);
+ }
+
+ void ieee80211_link_stop(struct ieee80211_link_data *link)
+diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c
+index 5eb233f619817b..58665b6ae6354b 100644
+--- a/net/mac80211/rx.c
++++ b/net/mac80211/rx.c
+@@ -4419,6 +4419,10 @@ static bool ieee80211_accept_frame(struct ieee80211_rx_data *rx)
+ if (!multicast &&
+ !ether_addr_equal(sdata->dev->dev_addr, hdr->addr1))
+ return false;
++ /* reject invalid/our STA address */
++ if (!is_valid_ether_addr(hdr->addr2) ||
++ ether_addr_equal(sdata->dev->dev_addr, hdr->addr2))
++ return false;
+ if (!rx->sta) {
+ int rate_idx;
+ if (status->encoding != RX_ENC_LEGACY)
+diff --git a/net/rose/rose_route.c b/net/rose/rose_route.c
+index fee772b4637c88..a7054546f52dfa 100644
+--- a/net/rose/rose_route.c
++++ b/net/rose/rose_route.c
+@@ -497,22 +497,15 @@ void rose_rt_device_down(struct net_device *dev)
+ t = rose_node;
+ rose_node = rose_node->next;
+
+- for (i = 0; i < t->count; i++) {
++ for (i = t->count - 1; i >= 0; i--) {
+ if (t->neighbour[i] != s)
+ continue;
+
+ t->count--;
+
+- switch (i) {
+- case 0:
+- t->neighbour[0] = t->neighbour[1];
+- fallthrough;
+- case 1:
+- t->neighbour[1] = t->neighbour[2];
+- break;
+- case 2:
+- break;
+- }
++ memmove(&t->neighbour[i], &t->neighbour[i + 1],
++ sizeof(t->neighbour[0]) *
++ (t->count - i));
+ }
+
+ if (t->count <= 0)
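The rewrite above replaces the hard-coded per-index switch with one memmove() that closes the gap left by the removed neighbour, and it walks the array backwards so the shrinking count cannot skip entries. A small standalone C sketch of that compaction (the array contents are arbitrary):

#include <stdio.h>
#include <string.h>

/* Remove element i by shifting the tail down, mirroring the patched
 * rose_rt_device_down() loop. The count is decremented first, so
 * *count - i is exactly the number of trailing elements to move. */
static void remove_at(int *arr, int *count, int i)
{
        (*count)--;
        memmove(&arr[i], &arr[i + 1], sizeof(arr[0]) * (*count - i));
}

int main(void)
{
        int neigh[3] = { 10, 20, 30 };
        int count = 3;

        /* Iterate backwards, as the patch does, so a removal never
         * disturbs the indices still to be visited. */
        for (int i = count - 1; i >= 0; i--)
                if (neigh[i] == 20)
                        remove_at(neigh, &count, i);

        for (int i = 0; i < count; i++)
                printf("%d\n", neigh[i]); /* prints 10 then 30 */
        return 0;
}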
+diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
+index df89790c459ad6..282423106f15d9 100644
+--- a/net/sched/sch_api.c
++++ b/net/sched/sch_api.c
+@@ -779,15 +779,12 @@ static u32 qdisc_alloc_handle(struct net_device *dev)
+
+ void qdisc_tree_reduce_backlog(struct Qdisc *sch, int n, int len)
+ {
+- bool qdisc_is_offloaded = sch->flags & TCQ_F_OFFLOADED;
+ const struct Qdisc_class_ops *cops;
+ unsigned long cl;
+ u32 parentid;
+ bool notify;
+ int drops;
+
+- if (n == 0 && len == 0)
+- return;
+ drops = max_t(int, n, 0);
+ rcu_read_lock();
+ while ((parentid = sch->parent)) {
+@@ -796,17 +793,8 @@ void qdisc_tree_reduce_backlog(struct Qdisc *sch, int n, int len)
+
+ if (sch->flags & TCQ_F_NOPARENT)
+ break;
+- /* Notify parent qdisc only if child qdisc becomes empty.
+- *
+- * If child was empty even before update then backlog
+- * counter is screwed and we skip notification because
+- * parent class is already passive.
+- *
+- * If the original child was offloaded then it is allowed
+- * to be seem as empty, so the parent is notified anyway.
+- */
+- notify = !sch->q.qlen && !WARN_ON_ONCE(!n &&
+- !qdisc_is_offloaded);
++ /* Notify parent qdisc only if child qdisc becomes empty. */
++ notify = !sch->q.qlen;
+ /* TODO: perform the search on a per txq basis */
+ sch = qdisc_lookup_rcu(qdisc_dev(sch), TC_H_MAJ(parentid));
+ if (sch == NULL) {
+@@ -815,6 +803,9 @@ void qdisc_tree_reduce_backlog(struct Qdisc *sch, int n, int len)
+ }
+ cops = sch->ops->cl_ops;
+ if (notify && cops->qlen_notify) {
++ /* Note that qlen_notify must be idempotent as it may get called
++ * multiple times.
++ */
+ cl = cops->find(sch, parentid);
+ cops->qlen_notify(sch, cl);
+ }
+diff --git a/net/vmw_vsock/vmci_transport.c b/net/vmw_vsock/vmci_transport.c
+index b370070194fa4a..7eccd6708d6649 100644
+--- a/net/vmw_vsock/vmci_transport.c
++++ b/net/vmw_vsock/vmci_transport.c
+@@ -119,6 +119,8 @@ vmci_transport_packet_init(struct vmci_transport_packet *pkt,
+ u16 proto,
+ struct vmci_handle handle)
+ {
++ memset(pkt, 0, sizeof(*pkt));
++
+ /* We register the stream control handler as an any cid handle so we
+ * must always send from a source address of VMADDR_CID_ANY
+ */
+@@ -131,8 +133,6 @@ vmci_transport_packet_init(struct vmci_transport_packet *pkt,
+ pkt->type = type;
+ pkt->src_port = src->svm_port;
+ pkt->dst_port = dst->svm_port;
+- memset(&pkt->proto, 0, sizeof(pkt->proto));
+- memset(&pkt->_reserved2, 0, sizeof(pkt->_reserved2));
+
+ switch (pkt->type) {
+ case VMCI_TRANSPORT_PACKET_TYPE_INVALID:
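Zeroing the whole packet up front, as the hunk above does, also clears compiler-inserted padding that per-field memset() calls can never reach, so no stale stack bytes leak onto the wire. A userspace sketch of the idiom (the struct layout is invented):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct wire_pkt {
        uint8_t  type;      /* typically followed by 3 bytes of padding */
        uint32_t src_port;
        uint32_t dst_port;
};

static void pkt_init(struct wire_pkt *pkt, uint32_t src, uint32_t dst)
{
        memset(pkt, 0, sizeof(*pkt)); /* covers fields *and* padding */
        pkt->type = 1;
        pkt->src_port = src;
        pkt->dst_port = dst;
}

int main(void)
{
        struct wire_pkt pkt;

        pkt_init(&pkt, 1024, 2049);
        printf("%u -> %u\n", (unsigned)pkt.src_port, (unsigned)pkt.dst_port);
        return 0;
}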
+diff --git a/sound/isa/sb/sb16_main.c b/sound/isa/sb/sb16_main.c
+index a9b87e159b2d11..1497a7822eee68 100644
+--- a/sound/isa/sb/sb16_main.c
++++ b/sound/isa/sb/sb16_main.c
+@@ -703,6 +703,9 @@ static int snd_sb16_dma_control_put(struct snd_kcontrol *kcontrol, struct snd_ct
+ unsigned char nval, oval;
+ int change;
+
++ if (chip->mode & (SB_MODE_PLAYBACK | SB_MODE_CAPTURE))
++ return -EBUSY;
++
+ nval = ucontrol->value.enumerated.item[0];
+ if (nval > 2)
+ return -EINVAL;
+@@ -711,6 +714,10 @@ static int snd_sb16_dma_control_put(struct snd_kcontrol *kcontrol, struct snd_ct
+ change = nval != oval;
+ snd_sb16_set_dma_mode(chip, nval);
+ spin_unlock_irqrestore(&chip->reg_lock, flags);
++ if (change) {
++ snd_dma_disable(chip->dma8);
++ snd_dma_disable(chip->dma16);
++ }
+ return change;
+ }
+
+diff --git a/sound/soc/amd/yc/acp6x-mach.c b/sound/soc/amd/yc/acp6x-mach.c
+index 40e2b5a87916a8..429e61d47ffbbe 100644
+--- a/sound/soc/amd/yc/acp6x-mach.c
++++ b/sound/soc/amd/yc/acp6x-mach.c
+@@ -451,6 +451,13 @@ static const struct dmi_system_id yc_acp_quirk_table[] = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "Bravo 17 D7VEK"),
+ }
+ },
++ {
++ .driver_data = &acp6x_card,
++ .matches = {
++ DMI_MATCH(DMI_BOARD_VENDOR, "Micro-Star International Co., Ltd."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "Bravo 17 D7VF"),
++ }
++ },
+ {
+ .driver_data = &acp6x_card,
+ .matches = {
+@@ -514,6 +521,13 @@ static const struct dmi_system_id yc_acp_quirk_table[] = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "OMEN by HP Gaming Laptop 16z-n000"),
+ }
+ },
++ {
++ .driver_data = &acp6x_card,
++ .matches = {
++ DMI_MATCH(DMI_BOARD_VENDOR, "HP"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "Victus by HP Gaming Laptop 15-fb2xxx"),
++ }
++ },
+ {
+ .driver_data = &acp6x_card,
+ .matches = {