From: "Mike Pagano" <mpagano@gentoo.org>
To: gentoo-commits@lists.gentoo.org
Subject: [gentoo-commits] proj/linux-patches:6.2 commit in: /
Date: Fri, 10 Mar 2023 12:37:55 +0000 (UTC)
Message-ID: <1678451866.3d261b682e1bda28f3fc52ce8c45e9ab259f2b3f.mpagano@gentoo>
commit: 3d261b682e1bda28f3fc52ce8c45e9ab259f2b3f
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Mar 10 12:37:46 2023 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Mar 10 12:37:46 2023 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=3d261b68
Linux patch 6.2.3
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1002_linux-6.2.3.patch | 37425 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 37429 insertions(+)
diff --git a/0000_README b/0000_README
index 49d3a418..57b81d70 100644
--- a/0000_README
+++ b/0000_README
@@ -51,6 +51,10 @@ Patch: 1001_linux-6.2.2.patch
From: https://www.kernel.org
Desc: Linux 6.2.2
+Patch: 1002_linux-6.2.3.patch
+From: https://www.kernel.org
+Desc: Linux 6.2.3
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1002_linux-6.2.3.patch b/1002_linux-6.2.3.patch
new file mode 100644
index 00000000..435fd26a
--- /dev/null
+++ b/1002_linux-6.2.3.patch
@@ -0,0 +1,37425 @@
+diff --git a/Documentation/admin-guide/cgroup-v1/memory.rst b/Documentation/admin-guide/cgroup-v1/memory.rst
+index 60370f2c67b99..258e45cc3b2db 100644
+--- a/Documentation/admin-guide/cgroup-v1/memory.rst
++++ b/Documentation/admin-guide/cgroup-v1/memory.rst
+@@ -86,6 +86,8 @@ Brief summary of control files.
+ memory.swappiness set/show swappiness parameter of vmscan
+ (See sysctl's vm.swappiness)
+ memory.move_charge_at_immigrate set/show controls of moving charges
++ This knob is deprecated and shouldn't be
++ used.
+ memory.oom_control set/show oom controls.
+ memory.numa_stat show the number of memory usage per numa
+ node
+@@ -717,8 +719,15 @@ NOTE2:
+ It is recommended to set the soft limit always below the hard limit,
+ otherwise the hard limit will take precedence.
+
+-8. Move charges at task migration
+-=================================
++8. Move charges at task migration (DEPRECATED!)
++===============================================
++
++THIS IS DEPRECATED!
++
++It's expensive and unreliable! It's better practice to launch workload
++tasks directly from inside their target cgroup. Use dedicated workload
++cgroups to allow fine-grained policy adjustments without having to
++move physical pages between control domains.
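++
++For instance, a launcher can enter the target cgroup first and only then
++exec the workload, so that every page is charged to the right group from
++the start. A minimal userspace sketch (the cgroup path and workload
++binary below are illustrative)::
++
++    #include <stdio.h>
++    #include <unistd.h>
++
++    int main(void)
++    {
++            /* Join the target cgroup before doing any real work. */
++            FILE *f = fopen("/sys/fs/cgroup/memory/workload/cgroup.procs", "w");
++
++            if (!f)
++                    return 1;
++            fprintf(f, "%d\n", (int)getpid());
++            fclose(f);
++
++            /* Everything the workload allocates from here on is charged
++             * to "workload"; no pages ever need to be moved. */
++            execl("/usr/bin/myworkload", "myworkload", (char *)NULL);
++            return 1; /* reached only if exec fails */
++    }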
+
+ Users can move charges associated with a task along with task migration, that
+ is, uncharge task's pages from the old cgroup and charge them to the new cgroup.
+diff --git a/Documentation/admin-guide/hw-vuln/spectre.rst b/Documentation/admin-guide/hw-vuln/spectre.rst
+index c4dcdb3d0d451..a39bbfe9526b6 100644
+--- a/Documentation/admin-guide/hw-vuln/spectre.rst
++++ b/Documentation/admin-guide/hw-vuln/spectre.rst
+@@ -479,8 +479,16 @@ Spectre variant 2
+ On Intel Skylake-era systems the mitigation covers most, but not all,
+ cases. See :ref:`[3] <spec_ref3>` for more details.
+
+- On CPUs with hardware mitigation for Spectre variant 2 (e.g. Enhanced
+- IBRS on x86), retpoline is automatically disabled at run time.
++ On CPUs with hardware mitigation for Spectre variant 2 (e.g. IBRS
++ or enhanced IBRS on x86), retpoline is automatically disabled at run time.
++
++ Systems which support enhanced IBRS (eIBRS) enable IBRS protection once at
++ boot, by setting the IBRS bit, and they're automatically protected against
++ Spectre v2 variant attacks, including cross-thread branch target injections
++ on SMT systems (STIBP). In other words, eIBRS enables STIBP too.
++
++ Legacy IBRS systems clear the IBRS bit on exit to userspace and
++ therefore explicitly enable STIBP for that case.
+
+ The retpoline mitigation is turned on by default on vulnerable
+ CPUs. It can be forced on or off by the administrator
+@@ -504,9 +512,12 @@ Spectre variant 2
+ For Spectre variant 2 mitigation, individual user programs
+ can be compiled with return trampolines for indirect branches.
+ This protects them from consuming poisoned entries in the branch
+- target buffer left by malicious software. Alternatively, the
+- programs can disable their indirect branch speculation via prctl()
+- (See :ref:`Documentation/userspace-api/spec_ctrl.rst <set_spec_ctrl>`).
++ target buffer left by malicious software.
++
++ On legacy IBRS systems, at return to userspace, implicit STIBP is disabled
++ because the kernel clears the IBRS bit. In this case, the userspace programs
++ can disable indirect branch speculation via prctl() (See
++ :ref:`Documentation/userspace-api/spec_ctrl.rst <set_spec_ctrl>`).
+ On x86, this will turn on STIBP to guard against attacks from the
+ sibling thread when the user program is running, and use IBPB to
+ flush the branch target buffer when switching to/from the program.
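++
++ For illustration, a task opts out of indirect branch speculation like
++ this (a sketch; a real program should inspect errno, e.g. ENXIO when
++ the interface is unavailable, and pick a fallback policy)::
++
++     #include <stdio.h>
++     #include <sys/prctl.h>
++     #include <linux/prctl.h>
++
++     int main(void)
++     {
++             /* Disable indirect branch speculation for this task; on
++              * x86 this keeps STIBP in effect while the task runs, even
++              * on legacy IBRS systems that clear IBRS in userspace. */
++             if (prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH,
++                       PR_SPEC_DISABLE, 0, 0) != 0)
++                     perror("PR_SET_SPECULATION_CTRL");
++             return 0;
++     }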
+diff --git a/Documentation/admin-guide/kdump/gdbmacros.txt b/Documentation/admin-guide/kdump/gdbmacros.txt
+index 82aecdcae8a6c..030de95e3e6b2 100644
+--- a/Documentation/admin-guide/kdump/gdbmacros.txt
++++ b/Documentation/admin-guide/kdump/gdbmacros.txt
+@@ -312,10 +312,10 @@ define dmesg
+ set var $prev_flags = $info->flags
+ end
+
+- set var $id = ($id + 1) & $id_mask
+ if ($id == $end_id)
+ loop_break
+ end
++ set var $id = ($id + 1) & $id_mask
+ end
+ end
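++
++# Note on the ordering above: the record at $end_id is itself valid, so
++# the index may only advance after the end check; with the old order the
++# newest record was silently skipped. In C terms (print_record is a
++# hypothetical helper), the fixed loop is equivalent to:
++#
++#    while (1) {
++#            print_record(id);           /* dump the current record */
++#            if (id == end_id)           /* stop after the newest one */
++#                    break;
++#            id = (id + 1) & id_mask;    /* otherwise advance */
++#    }
++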
+ document dmesg
+diff --git a/Documentation/bpf/instruction-set.rst b/Documentation/bpf/instruction-set.rst
+index e672d5ec6cc7b..2d3fe59bd260f 100644
+--- a/Documentation/bpf/instruction-set.rst
++++ b/Documentation/bpf/instruction-set.rst
+@@ -99,19 +99,26 @@ code value description
+ BPF_ADD 0x00 dst += src
+ BPF_SUB 0x10 dst -= src
+ BPF_MUL 0x20 dst \*= src
+-BPF_DIV 0x30 dst /= src
++BPF_DIV 0x30 dst = (src != 0) ? (dst / src) : 0
+ BPF_OR 0x40 dst \|= src
+ BPF_AND 0x50 dst &= src
+ BPF_LSH 0x60 dst <<= src
+ BPF_RSH 0x70 dst >>= src
+ BPF_NEG 0x80 dst = ~src
+-BPF_MOD 0x90 dst %= src
++BPF_MOD 0x90 dst = (src != 0) ? (dst % src) : dst
+ BPF_XOR 0xa0 dst ^= src
+ BPF_MOV 0xb0 dst = src
+ BPF_ARSH 0xc0 sign extending shift right
+ BPF_END 0xd0 byte swap operations (see `Byte swap instructions`_ below)
+ ======== ===== ==========================================================
+
++Underflow and overflow are allowed during arithmetic operations, meaning
++the 64-bit or 32-bit value will wrap. If eBPF program execution would
++result in division by zero, the destination register is instead set to zero.
++If execution would result in modulo by zero, for ``BPF_ALU64`` the value of
++the destination register is unchanged whereas for ``BPF_ALU`` the upper
++32 bits of the destination register are zeroed.
++
+ ``BPF_ADD | BPF_X | BPF_ALU`` means::
+
+ dst_reg = (u32) dst_reg + (u32) src_reg;
+@@ -128,6 +135,11 @@ BPF_END 0xd0 byte swap operations (see `Byte swap instructions`_ below)
+
+ dst_reg = dst_reg ^ imm32
+
++Also note that the division and modulo operations are unsigned. Thus, for
++``BPF_ALU``, 'imm' is first interpreted as an unsigned 32-bit value, whereas
++for ``BPF_ALU64``, 'imm' is first sign extended to 64 bits and the result
++interpreted as an unsigned 64-bit value. There are no instructions for
++signed division or modulo.
+
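++In C terms, the ``BPF_ALU64`` semantics of these two instructions are as
++follows (the helper names are illustrative, not part of the instruction
++set)::
++
++    #include <stdint.h>
++
++    /* BPF_DIV: unsigned divide; division by zero yields zero. */
++    static uint64_t alu64_div(uint64_t dst, uint64_t src)
++    {
++            return src != 0 ? dst / src : 0;
++    }
++
++    /* BPF_MOD: unsigned modulo; modulo by zero leaves dst unchanged. */
++    static uint64_t alu64_mod(uint64_t dst, uint64_t src)
++    {
++            return src != 0 ? dst % src : dst;
++    }
++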
+ Byte swap instructions
+ ~~~~~~~~~~~~~~~~~~~~~~
+diff --git a/Documentation/dev-tools/gdb-kernel-debugging.rst b/Documentation/dev-tools/gdb-kernel-debugging.rst
+index 8e0f1fe8d17ad..895285c037c72 100644
+--- a/Documentation/dev-tools/gdb-kernel-debugging.rst
++++ b/Documentation/dev-tools/gdb-kernel-debugging.rst
+@@ -39,6 +39,10 @@ Setup
+ this mode. In this case, you should build the kernel with
+ CONFIG_RANDOMIZE_BASE disabled if the architecture supports KASLR.
+
++- Build the gdb scripts (required on kernels v5.1 and above)::
++
++ make scripts_gdb
++
+ - Enable the gdb stub of QEMU/KVM, either
+
+ - at VM startup time by appending "-s" to the QEMU command line
+diff --git a/Documentation/devicetree/bindings/display/mediatek/mediatek,ccorr.yaml b/Documentation/devicetree/bindings/display/mediatek/mediatek,ccorr.yaml
+index 63fb02014a56a..117e3db43f84a 100644
+--- a/Documentation/devicetree/bindings/display/mediatek/mediatek,ccorr.yaml
++++ b/Documentation/devicetree/bindings/display/mediatek/mediatek,ccorr.yaml
+@@ -32,7 +32,7 @@ properties:
+ - items:
+ - enum:
+ - mediatek,mt8186-disp-ccorr
+- - const: mediatek,mt8183-disp-ccorr
++ - const: mediatek,mt8192-disp-ccorr
+
+ reg:
+ maxItems: 1
+diff --git a/Documentation/devicetree/bindings/sound/amlogic,gx-sound-card.yaml b/Documentation/devicetree/bindings/sound/amlogic,gx-sound-card.yaml
+index 5b8d59245f82f..b358fd601ed38 100644
+--- a/Documentation/devicetree/bindings/sound/amlogic,gx-sound-card.yaml
++++ b/Documentation/devicetree/bindings/sound/amlogic,gx-sound-card.yaml
+@@ -62,7 +62,7 @@ patternProperties:
+ description: phandle of the CPU DAI
+
+ patternProperties:
+- "^codec-[0-9]+$":
++ "^codec(-[0-9]+)?$":
+ type: object
+ additionalProperties: false
+ description: |-
+diff --git a/Documentation/hwmon/ftsteutates.rst b/Documentation/hwmon/ftsteutates.rst
+index 58a2483d8d0da..198fa8e2819da 100644
+--- a/Documentation/hwmon/ftsteutates.rst
++++ b/Documentation/hwmon/ftsteutates.rst
+@@ -22,6 +22,10 @@ enhancements. It can monitor up to 4 voltages, 16 temperatures and
+ 8 fans. It also contains an integrated watchdog which is currently
+ implemented in this driver.
+
++The 4 voltages require a board-specific multiplier, since the BMC can
++only measure voltages up to 3.3V and thus relies on voltage dividers.
++Consult your motherboard manual for details.
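++For example, a divider that presents a nominal 12 V rail as 3.0 V at the
++BMC input implies a multiplier of 12 / 3.0 = 4 for that channel; the raw
++reading must be scaled by that factor to recover the real voltage.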
++
+ To clear a temperature or fan alarm, execute the following command with the
+ correct path to the alarm file::
+
+diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
+index 0a67cb738013e..dc89d4e9d23e6 100644
+--- a/Documentation/virt/kvm/api.rst
++++ b/Documentation/virt/kvm/api.rst
+@@ -4457,6 +4457,18 @@ not holding a previously reported uncorrected error).
+ :Parameters: struct kvm_s390_cmma_log (in, out)
+ :Returns: 0 on success, a negative value on error
+
++Errors:
++
++ ====== =============================================================
++ ENOMEM not enough memory can be allocated to complete the task
++ ENXIO if CMMA is not enabled
++ EINVAL if KVM_S390_CMMA_PEEK is not set but migration mode was not enabled
++ EINVAL if KVM_S390_CMMA_PEEK is not set but dirty tracking has been
++ disabled (and thus migration mode was automatically disabled)
++ EFAULT if the userspace address is invalid or if no page table is
++ present for the addresses (e.g. when using hugepages).
++ ====== =============================================================
++
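++For illustration, peeking at the CMMA values of the first 512 guest frames
++could look like this (a sketch: ``buf`` is a userspace buffer of at least
++``count`` bytes, and error handling is minimal)::
++
++    struct kvm_s390_cmma_log log = {
++            .start_gfn = 0,
++            .count = 512,
++            .flags = KVM_S390_CMMA_PEEK,
++            .values = (__u64)(unsigned long)buf,
++    };
++
++    if (ioctl(vm_fd, KVM_S390_GET_CMMA_BITS, &log) < 0)
++            perror("KVM_S390_GET_CMMA_BITS"); /* e.g. ENXIO: CMMA off */
++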
+ This ioctl is used to get the values of the CMMA bits on the s390
+ architecture. It is meant to be used in two scenarios:
+
+@@ -4537,12 +4549,6 @@ mask is unused.
+
+ values points to the userspace buffer where the result will be stored.
+
+-This ioctl can fail with -ENOMEM if not enough memory can be allocated to
+-complete the task, with -ENXIO if CMMA is not enabled, with -EINVAL if
+-KVM_S390_CMMA_PEEK is not set but migration mode was not enabled, with
+--EFAULT if the userspace address is invalid or if no page table is
+-present for the addresses (e.g. when using hugepages).
+-
+ 4.108 KVM_S390_SET_CMMA_BITS
+ ----------------------------
+
+diff --git a/Documentation/virt/kvm/devices/vm.rst b/Documentation/virt/kvm/devices/vm.rst
+index 60acc39e0e937..147efec626e52 100644
+--- a/Documentation/virt/kvm/devices/vm.rst
++++ b/Documentation/virt/kvm/devices/vm.rst
+@@ -302,6 +302,10 @@ Allows userspace to start migration mode, needed for PGSTE migration.
+ Setting this attribute when migration mode is already active will have
+ no effects.
+
++Dirty tracking must be enabled on all memslots, else -EINVAL is returned. When
++dirty tracking is disabled on any memslot, migration mode is automatically
++stopped.
++
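++For illustration, migration mode is started through the VM device attribute
++interface (a sketch with minimal error handling)::
++
++    struct kvm_device_attr attr = {
++            .group = KVM_S390_VM_MIGRATION,
++            .attr = KVM_S390_VM_MIGRATION_START,
++    };
++
++    if (ioctl(vm_fd, KVM_SET_DEVICE_ATTR, &attr) < 0)
++            perror("KVM_S390_VM_MIGRATION_START"); /* e.g. EINVAL */
++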
+ :Parameters: none
+ :Returns: -ENOMEM if there is not enough free memory to start migration mode;
+ -EINVAL if the state of the VM is invalid (e.g. no memory defined);
+diff --git a/Makefile b/Makefile
+index 1836ddaf2c94c..eef164b4172a9 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 2
+-SUBLEVEL = 2
++SUBLEVEL = 3
+ EXTRAVERSION =
+ NAME = Hurr durr I'ma ninja sloth
+
+diff --git a/arch/alpha/boot/tools/objstrip.c b/arch/alpha/boot/tools/objstrip.c
+index 08b430d25a315..7cf92d172dce9 100644
+--- a/arch/alpha/boot/tools/objstrip.c
++++ b/arch/alpha/boot/tools/objstrip.c
+@@ -148,7 +148,7 @@ main (int argc, char *argv[])
+ #ifdef __ELF__
+ elf = (struct elfhdr *) buf;
+
+- if (elf->e_ident[0] == 0x7f && str_has_prefix((char *)elf->e_ident + 1, "ELF")) {
++ if (memcmp(&elf->e_ident[EI_MAG0], ELFMAG, SELFMAG) == 0) {
+ if (elf->e_type != ET_EXEC) {
+ fprintf(stderr, "%s: %s is not an ELF executable\n",
+ prog_name, inname);
+diff --git a/arch/alpha/kernel/traps.c b/arch/alpha/kernel/traps.c
+index 8a66fe544c69b..d9a67b370e047 100644
+--- a/arch/alpha/kernel/traps.c
++++ b/arch/alpha/kernel/traps.c
+@@ -233,7 +233,21 @@ do_entIF(unsigned long type, struct pt_regs *regs)
+ {
+ int signo, code;
+
+- if ((regs->ps & ~IPL_MAX) == 0) {
++ if (type == 3) { /* FEN fault */
++ /* Irritating users can call PAL_clrfen to disable the
++ FPU for the process. The kernel will then trap in
++ do_switch_stack and undo_switch_stack when we try
++ to save and restore the FP registers.
++
++ Given that GCC by default generates code that uses the
++ FP registers, PAL_clrfen is not useful except for DoS
++ attacks. So turn the bleeding FPU back on and be done
++ with it. */
++ current_thread_info()->pcb.flags |= 1;
++ __reload_thread(&current_thread_info()->pcb);
++ return;
++ }
++ if (!user_mode(regs)) {
+ if (type == 1) {
+ const unsigned int *data
+ = (const unsigned int *) regs->pc;
+@@ -366,20 +380,6 @@ do_entIF(unsigned long type, struct pt_regs *regs)
+ }
+ break;
+
+- case 3: /* FEN fault */
+- /* Irritating users can call PAL_clrfen to disable the
+- FPU for the process. The kernel will then trap in
+- do_switch_stack and undo_switch_stack when we try
+- to save and restore the FP registers.
+-
+- Given that GCC by default generates code that uses the
+- FP registers, PAL_clrfen is not useful except for DoS
+- attacks. So turn the bleeding FPU back on and be done
+- with it. */
+- current_thread_info()->pcb.flags |= 1;
+- __reload_thread(&current_thread_info()->pcb);
+- return;
+-
+ case 5: /* illoc */
+ default: /* unexpected instruction-fault type */
+ ;
+diff --git a/arch/arm/boot/dts/exynos3250-rinato.dts b/arch/arm/boot/dts/exynos3250-rinato.dts
+index 6d2c7bb191842..2eb682009815a 100644
+--- a/arch/arm/boot/dts/exynos3250-rinato.dts
++++ b/arch/arm/boot/dts/exynos3250-rinato.dts
+@@ -250,7 +250,7 @@
+ i80-if-timings {
+ cs-setup = <0>;
+ wr-setup = <0>;
+- wr-act = <1>;
++ wr-active = <1>;
+ wr-hold = <0>;
+ };
+ };
+diff --git a/arch/arm/boot/dts/exynos4-cpu-thermal.dtsi b/arch/arm/boot/dts/exynos4-cpu-thermal.dtsi
+index 021d9fc1b4923..27a1a89526655 100644
+--- a/arch/arm/boot/dts/exynos4-cpu-thermal.dtsi
++++ b/arch/arm/boot/dts/exynos4-cpu-thermal.dtsi
+@@ -10,7 +10,7 @@
+ / {
+ thermal-zones {
+ cpu_thermal: cpu-thermal {
+- thermal-sensors = <&tmu 0>;
++ thermal-sensors = <&tmu>;
+ polling-delay-passive = <0>;
+ polling-delay = <0>;
+ trips {
+diff --git a/arch/arm/boot/dts/exynos4.dtsi b/arch/arm/boot/dts/exynos4.dtsi
+index 5c4ecda27a476..7ba7a18c25000 100644
+--- a/arch/arm/boot/dts/exynos4.dtsi
++++ b/arch/arm/boot/dts/exynos4.dtsi
+@@ -605,7 +605,7 @@
+ status = "disabled";
+
+ hdmi_i2c_phy: hdmiphy@38 {
+- compatible = "exynos4210-hdmiphy";
++ compatible = "samsung,exynos4210-hdmiphy";
+ reg = <0x38>;
+ };
+ };
+diff --git a/arch/arm/boot/dts/exynos4210.dtsi b/arch/arm/boot/dts/exynos4210.dtsi
+index 2c25cc37934e8..f8c6c5d1906af 100644
+--- a/arch/arm/boot/dts/exynos4210.dtsi
++++ b/arch/arm/boot/dts/exynos4210.dtsi
+@@ -393,7 +393,6 @@
+ &cpu_thermal {
+ polling-delay-passive = <0>;
+ polling-delay = <0>;
+- thermal-sensors = <&tmu 0>;
+ };
+
+ &gic {
+diff --git a/arch/arm/boot/dts/exynos5250.dtsi b/arch/arm/boot/dts/exynos5250.dtsi
+index 4708dcd575a77..01751706ff96d 100644
+--- a/arch/arm/boot/dts/exynos5250.dtsi
++++ b/arch/arm/boot/dts/exynos5250.dtsi
+@@ -1107,7 +1107,7 @@
+ &cpu_thermal {
+ polling-delay-passive = <0>;
+ polling-delay = <0>;
+- thermal-sensors = <&tmu 0>;
++ thermal-sensors = <&tmu>;
+
+ cooling-maps {
+ map0 {
+diff --git a/arch/arm/boot/dts/exynos5410-odroidxu.dts b/arch/arm/boot/dts/exynos5410-odroidxu.dts
+index d1cbc6b8a5703..e18110b93875a 100644
+--- a/arch/arm/boot/dts/exynos5410-odroidxu.dts
++++ b/arch/arm/boot/dts/exynos5410-odroidxu.dts
+@@ -120,7 +120,6 @@
+ };
+
+ &cpu0_thermal {
+- thermal-sensors = <&tmu_cpu0 0>;
+ polling-delay-passive = <0>;
+ polling-delay = <0>;
+
+diff --git a/arch/arm/boot/dts/exynos5420.dtsi b/arch/arm/boot/dts/exynos5420.dtsi
+index 9f2523a873d9d..62263eb91b3cc 100644
+--- a/arch/arm/boot/dts/exynos5420.dtsi
++++ b/arch/arm/boot/dts/exynos5420.dtsi
+@@ -592,7 +592,7 @@
+ };
+
+ mipi_phy: mipi-video-phy {
+- compatible = "samsung,s5pv210-mipi-video-phy";
++ compatible = "samsung,exynos5420-mipi-video-phy";
+ syscon = <&pmu_system_controller>;
+ #phy-cells = <1>;
+ };
+diff --git a/arch/arm/boot/dts/exynos5422-odroidhc1.dts b/arch/arm/boot/dts/exynos5422-odroidhc1.dts
+index 3de7019572a20..5e42803937067 100644
+--- a/arch/arm/boot/dts/exynos5422-odroidhc1.dts
++++ b/arch/arm/boot/dts/exynos5422-odroidhc1.dts
+@@ -31,7 +31,7 @@
+
+ thermal-zones {
+ cpu0_thermal: cpu0-thermal {
+- thermal-sensors = <&tmu_cpu0 0>;
++ thermal-sensors = <&tmu_cpu0>;
+ trips {
+ cpu0_alert0: cpu-alert-0 {
+ temperature = <70000>; /* millicelsius */
+@@ -86,7 +86,7 @@
+ };
+ };
+ cpu1_thermal: cpu1-thermal {
+- thermal-sensors = <&tmu_cpu1 0>;
++ thermal-sensors = <&tmu_cpu1>;
+ trips {
+ cpu1_alert0: cpu-alert-0 {
+ temperature = <70000>;
+@@ -130,7 +130,7 @@
+ };
+ };
+ cpu2_thermal: cpu2-thermal {
+- thermal-sensors = <&tmu_cpu2 0>;
++ thermal-sensors = <&tmu_cpu2>;
+ trips {
+ cpu2_alert0: cpu-alert-0 {
+ temperature = <70000>;
+@@ -174,7 +174,7 @@
+ };
+ };
+ cpu3_thermal: cpu3-thermal {
+- thermal-sensors = <&tmu_cpu3 0>;
++ thermal-sensors = <&tmu_cpu3>;
+ trips {
+ cpu3_alert0: cpu-alert-0 {
+ temperature = <70000>;
+@@ -218,7 +218,7 @@
+ };
+ };
+ gpu_thermal: gpu-thermal {
+- thermal-sensors = <&tmu_gpu 0>;
++ thermal-sensors = <&tmu_gpu>;
+ trips {
+ gpu_alert0: gpu-alert-0 {
+ temperature = <70000>;
+diff --git a/arch/arm/boot/dts/exynos5422-odroidxu3-common.dtsi b/arch/arm/boot/dts/exynos5422-odroidxu3-common.dtsi
+index a6961ff240304..e6e7e2ff2a261 100644
+--- a/arch/arm/boot/dts/exynos5422-odroidxu3-common.dtsi
++++ b/arch/arm/boot/dts/exynos5422-odroidxu3-common.dtsi
+@@ -50,7 +50,7 @@
+
+ thermal-zones {
+ cpu0_thermal: cpu0-thermal {
+- thermal-sensors = <&tmu_cpu0 0>;
++ thermal-sensors = <&tmu_cpu0>;
+ polling-delay-passive = <250>;
+ polling-delay = <0>;
+ trips {
+@@ -139,7 +139,7 @@
+ };
+ };
+ cpu1_thermal: cpu1-thermal {
+- thermal-sensors = <&tmu_cpu1 0>;
++ thermal-sensors = <&tmu_cpu1>;
+ polling-delay-passive = <250>;
+ polling-delay = <0>;
+ trips {
+@@ -212,7 +212,7 @@
+ };
+ };
+ cpu2_thermal: cpu2-thermal {
+- thermal-sensors = <&tmu_cpu2 0>;
++ thermal-sensors = <&tmu_cpu2>;
+ polling-delay-passive = <250>;
+ polling-delay = <0>;
+ trips {
+@@ -285,7 +285,7 @@
+ };
+ };
+ cpu3_thermal: cpu3-thermal {
+- thermal-sensors = <&tmu_cpu3 0>;
++ thermal-sensors = <&tmu_cpu3>;
+ polling-delay-passive = <250>;
+ polling-delay = <0>;
+ trips {
+@@ -358,7 +358,7 @@
+ };
+ };
+ gpu_thermal: gpu-thermal {
+- thermal-sensors = <&tmu_gpu 0>;
++ thermal-sensors = <&tmu_gpu>;
+ polling-delay-passive = <250>;
+ polling-delay = <0>;
+ trips {
+diff --git a/arch/arm/boot/dts/imx7s.dtsi b/arch/arm/boot/dts/imx7s.dtsi
+index 0fc9e6b8b05dc..11b9321badc51 100644
+--- a/arch/arm/boot/dts/imx7s.dtsi
++++ b/arch/arm/boot/dts/imx7s.dtsi
+@@ -513,7 +513,7 @@
+
+ mux: mux-controller {
+ compatible = "mmio-mux";
+- #mux-control-cells = <0>;
++ #mux-control-cells = <1>;
+ mux-reg-masks = <0x14 0x00000010>;
+ };
+
+diff --git a/arch/arm/boot/dts/qcom-sdx55.dtsi b/arch/arm/boot/dts/qcom-sdx55.dtsi
+index f1c0dab409922..93d71aff3fab7 100644
+--- a/arch/arm/boot/dts/qcom-sdx55.dtsi
++++ b/arch/arm/boot/dts/qcom-sdx55.dtsi
+@@ -578,7 +578,7 @@
+ };
+
+ apps_smmu: iommu@15000000 {
+- compatible = "qcom,sdx55-smmu-500", "arm,mmu-500";
++ compatible = "qcom,sdx55-smmu-500", "qcom,smmu-500", "arm,mmu-500";
+ reg = <0x15000000 0x20000>;
+ #iommu-cells = <2>;
+ #global-interrupts = <1>;
+diff --git a/arch/arm/boot/dts/qcom-sdx65.dtsi b/arch/arm/boot/dts/qcom-sdx65.dtsi
+index b073e0c63df4f..408c4b87d44b0 100644
+--- a/arch/arm/boot/dts/qcom-sdx65.dtsi
++++ b/arch/arm/boot/dts/qcom-sdx65.dtsi
+@@ -455,7 +455,7 @@
+ };
+
+ apps_smmu: iommu@15000000 {
+- compatible = "qcom,sdx65-smmu-500", "arm,mmu-500";
++ compatible = "qcom,sdx65-smmu-500", "qcom,smmu-500", "arm,mmu-500";
+ reg = <0x15000000 0x40000>;
+ #iommu-cells = <2>;
+ #global-interrupts = <1>;
+diff --git a/arch/arm/boot/dts/stm32mp131.dtsi b/arch/arm/boot/dts/stm32mp131.dtsi
+index accc3824f7e98..99d88096959eb 100644
+--- a/arch/arm/boot/dts/stm32mp131.dtsi
++++ b/arch/arm/boot/dts/stm32mp131.dtsi
+@@ -527,6 +527,7 @@
+
+ part_number_otp: part_number_otp@4 {
+ reg = <0x4 0x2>;
++ bits = <0 12>;
+ };
+ ts_cal1: calib@5c {
+ reg = <0x5c 0x2>;
+diff --git a/arch/arm/boot/dts/sun8i-h3-nanopi-duo2.dts b/arch/arm/boot/dts/sun8i-h3-nanopi-duo2.dts
+index 43641cb82398f..343b02b971555 100644
+--- a/arch/arm/boot/dts/sun8i-h3-nanopi-duo2.dts
++++ b/arch/arm/boot/dts/sun8i-h3-nanopi-duo2.dts
+@@ -57,7 +57,7 @@
+ regulator-ramp-delay = <50>; /* 4ms */
+
+ enable-active-high;
+- enable-gpio = <&r_pio 0 8 GPIO_ACTIVE_HIGH>; /* PL8 */
++ enable-gpios = <&r_pio 0 8 GPIO_ACTIVE_HIGH>; /* PL8 */
+ gpios = <&r_pio 0 6 GPIO_ACTIVE_HIGH>; /* PL6 */
+ gpios-states = <0x1>;
+ states = <1100000 0>, <1300000 1>;
+diff --git a/arch/arm/configs/bcm2835_defconfig b/arch/arm/configs/bcm2835_defconfig
+index a51babd178c26..be0c984a66947 100644
+--- a/arch/arm/configs/bcm2835_defconfig
++++ b/arch/arm/configs/bcm2835_defconfig
+@@ -107,6 +107,7 @@ CONFIG_MEDIA_CAMERA_SUPPORT=y
+ CONFIG_DRM=y
+ CONFIG_DRM_V3D=y
+ CONFIG_DRM_VC4=y
++CONFIG_FB=y
+ CONFIG_FB_SIMPLE=y
+ CONFIG_FRAMEBUFFER_CONSOLE=y
+ CONFIG_SOUND=y
+diff --git a/arch/arm/mach-imx/mmdc.c b/arch/arm/mach-imx/mmdc.c
+index af12668d0bf51..b9efe9da06e0b 100644
+--- a/arch/arm/mach-imx/mmdc.c
++++ b/arch/arm/mach-imx/mmdc.c
+@@ -99,6 +99,7 @@ struct mmdc_pmu {
+ cpumask_t cpu;
+ struct hrtimer hrtimer;
+ unsigned int active_events;
++ int id;
+ struct device *dev;
+ struct perf_event *mmdc_events[MMDC_NUM_COUNTERS];
+ struct hlist_node node;
+@@ -433,8 +434,6 @@ static enum hrtimer_restart mmdc_pmu_timer_handler(struct hrtimer *hrtimer)
+ static int mmdc_pmu_init(struct mmdc_pmu *pmu_mmdc,
+ void __iomem *mmdc_base, struct device *dev)
+ {
+- int mmdc_num;
+-
+ *pmu_mmdc = (struct mmdc_pmu) {
+ .pmu = (struct pmu) {
+ .task_ctx_nr = perf_invalid_context,
+@@ -452,15 +451,16 @@ static int mmdc_pmu_init(struct mmdc_pmu *pmu_mmdc,
+ .active_events = 0,
+ };
+
+- mmdc_num = ida_simple_get(&mmdc_ida, 0, 0, GFP_KERNEL);
++ pmu_mmdc->id = ida_simple_get(&mmdc_ida, 0, 0, GFP_KERNEL);
+
+- return mmdc_num;
++ return pmu_mmdc->id;
+ }
+
+ static int imx_mmdc_remove(struct platform_device *pdev)
+ {
+ struct mmdc_pmu *pmu_mmdc = platform_get_drvdata(pdev);
+
++ ida_simple_remove(&mmdc_ida, pmu_mmdc->id);
+ cpuhp_state_remove_instance_nocalls(cpuhp_mmdc_state, &pmu_mmdc->node);
+ perf_pmu_unregister(&pmu_mmdc->pmu);
+ iounmap(pmu_mmdc->mmdc_base);
+@@ -474,7 +474,6 @@ static int imx_mmdc_perf_init(struct platform_device *pdev, void __iomem *mmdc_b
+ {
+ struct mmdc_pmu *pmu_mmdc;
+ char *name;
+- int mmdc_num;
+ int ret;
+ const struct of_device_id *of_id =
+ of_match_device(imx_mmdc_dt_ids, &pdev->dev);
+@@ -497,14 +496,14 @@ static int imx_mmdc_perf_init(struct platform_device *pdev, void __iomem *mmdc_b
+ cpuhp_mmdc_state = ret;
+ }
+
+- mmdc_num = mmdc_pmu_init(pmu_mmdc, mmdc_base, &pdev->dev);
+- pmu_mmdc->mmdc_ipg_clk = mmdc_ipg_clk;
+- if (mmdc_num == 0)
+- name = "mmdc";
+- else
+- name = devm_kasprintf(&pdev->dev,
+- GFP_KERNEL, "mmdc%d", mmdc_num);
++ ret = mmdc_pmu_init(pmu_mmdc, mmdc_base, &pdev->dev);
++ if (ret < 0)
++ goto pmu_free;
+
++ name = devm_kasprintf(&pdev->dev,
++ GFP_KERNEL, "mmdc%d", ret);
++
++ pmu_mmdc->mmdc_ipg_clk = mmdc_ipg_clk;
+ pmu_mmdc->devtype_data = (struct fsl_mmdc_devtype_data *)of_id->data;
+
+ hrtimer_init(&pmu_mmdc->hrtimer, CLOCK_MONOTONIC,
+@@ -525,6 +524,7 @@ static int imx_mmdc_perf_init(struct platform_device *pdev, void __iomem *mmdc_b
+
+ pmu_register_err:
+ pr_warn("MMDC Perf PMU failed (%d), disabled\n", ret);
++ ida_simple_remove(&mmdc_ida, pmu_mmdc->id);
+ cpuhp_state_remove_instance_nocalls(cpuhp_mmdc_state, &pmu_mmdc->node);
+ hrtimer_cancel(&pmu_mmdc->hrtimer);
+ pmu_free:
+diff --git a/arch/arm/mach-omap1/timer.c b/arch/arm/mach-omap1/timer.c
+index f5cd4bbf7566d..81a912c1145a9 100644
+--- a/arch/arm/mach-omap1/timer.c
++++ b/arch/arm/mach-omap1/timer.c
+@@ -158,7 +158,7 @@ err_free_pdata:
+ kfree(pdata);
+
+ err_free_pdev:
+- platform_device_unregister(pdev);
++ platform_device_put(pdev);
+
+ return ret;
+ }
+diff --git a/arch/arm/mach-omap2/omap4-common.c b/arch/arm/mach-omap2/omap4-common.c
+index 6d1eb4eefefe5..d9ed2a5dcd5ef 100644
+--- a/arch/arm/mach-omap2/omap4-common.c
++++ b/arch/arm/mach-omap2/omap4-common.c
+@@ -140,6 +140,7 @@ static int __init omap4_sram_init(void)
+ __func__);
+ else
+ sram_sync = (void __iomem *)gen_pool_alloc(sram_pool, PAGE_SIZE);
++ of_node_put(np);
+
+ return 0;
+ }
+diff --git a/arch/arm/mach-omap2/timer.c b/arch/arm/mach-omap2/timer.c
+index 620ba69c8f114..5677c4a08f376 100644
+--- a/arch/arm/mach-omap2/timer.c
++++ b/arch/arm/mach-omap2/timer.c
+@@ -76,6 +76,7 @@ static void __init realtime_counter_init(void)
+ }
+
+ rate = clk_get_rate(sys_clk);
++ clk_put(sys_clk);
+
+ if (soc_is_dra7xx()) {
+ /*
+diff --git a/arch/arm/mach-s3c/s3c64xx.c b/arch/arm/mach-s3c/s3c64xx.c
+index 0a8116c108fe4..dce2b0e953088 100644
+--- a/arch/arm/mach-s3c/s3c64xx.c
++++ b/arch/arm/mach-s3c/s3c64xx.c
+@@ -173,7 +173,8 @@ static struct samsung_pwm_variant s3c64xx_pwm_variant = {
+ .tclk_mask = (1 << 7) | (1 << 6) | (1 << 5),
+ };
+
+-void __init s3c64xx_set_timer_source(unsigned int event, unsigned int source)
++void __init s3c64xx_set_timer_source(enum s3c64xx_timer_mode event,
++ enum s3c64xx_timer_mode source)
+ {
+ s3c64xx_pwm_variant.output_mask = BIT(SAMSUNG_PWM_NUM) - 1;
+ s3c64xx_pwm_variant.output_mask &= ~(BIT(event) | BIT(source));
+diff --git a/arch/arm/mach-zynq/slcr.c b/arch/arm/mach-zynq/slcr.c
+index 37707614885a5..9765b3f4c2fc5 100644
+--- a/arch/arm/mach-zynq/slcr.c
++++ b/arch/arm/mach-zynq/slcr.c
+@@ -213,6 +213,7 @@ int __init zynq_early_slcr_init(void)
+ zynq_slcr_regmap = syscon_regmap_lookup_by_compatible("xlnx,zynq-slcr");
+ if (IS_ERR(zynq_slcr_regmap)) {
+ pr_err("%s: failed to find zynq-slcr\n", __func__);
++ of_node_put(np);
+ return -ENODEV;
+ }
+
+diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
+index c5ccca26a4087..ddfd35c86bdac 100644
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -100,7 +100,6 @@ config ARM64
+ select ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT
+ select ARCH_WANT_FRAME_POINTERS
+ select ARCH_WANT_HUGE_PMD_SHARE if ARM64_4K_PAGES || (ARM64_16K_PAGES && !ARM64_VA_BITS_36)
+- select ARCH_WANT_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
+ select ARCH_WANT_LD_ORPHAN_WARN
+ select ARCH_WANTS_NO_INSTR
+ select ARCH_WANTS_THP_SWAP if ARM64_4K_PAGES
+diff --git a/arch/arm64/boot/dts/amlogic/meson-axg-jethome-jethub-j1xx.dtsi b/arch/arm64/boot/dts/amlogic/meson-axg-jethome-jethub-j1xx.dtsi
+index 5836b00309312..e1605a9b0a13f 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-axg-jethome-jethub-j1xx.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-axg-jethome-jethub-j1xx.dtsi
+@@ -168,15 +168,15 @@
+ reg = <0x32 0x20>;
+ };
+
+- eth_mac: eth_mac@0 {
++ eth_mac: eth-mac@0 {
+ reg = <0x0 0x6>;
+ };
+
+- bt_mac: bt_mac@6 {
++ bt_mac: bt-mac@6 {
+ reg = <0x6 0x6>;
+ };
+
+- wifi_mac: wifi_mac@c {
++ wifi_mac: wifi-mac@c {
+ reg = <0xc 0x6>;
+ };
+
+@@ -217,7 +217,7 @@
+ pinctrl-names = "default";
+
+ /* RTC */
+- pcf8563: pcf8563@51 {
++ pcf8563: rtc@51 {
+ compatible = "nxp,pcf8563";
+ reg = <0x51>;
+ status = "okay";
+@@ -303,7 +303,7 @@
+
+ &usb {
+ status = "okay";
+- phy-supply = <&usb_pwr>;
++ vbus-supply = <&usb_pwr>;
+ };
+
+ &spicc1 {
+diff --git a/arch/arm64/boot/dts/amlogic/meson-axg.dtsi b/arch/arm64/boot/dts/amlogic/meson-axg.dtsi
+index 417523dc4cc03..ff2b33313e637 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-axg.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-axg.dtsi
+@@ -153,7 +153,7 @@
+ scpi_clocks: clocks {
+ compatible = "arm,scpi-clocks";
+
+- scpi_dvfs: clock-controller {
++ scpi_dvfs: clocks-0 {
+ compatible = "arm,scpi-dvfs-clocks";
+ #clock-cells = <1>;
+ clock-indices = <0>;
+@@ -162,7 +162,7 @@
+ };
+
+ scpi_sensors: sensors {
+- compatible = "amlogic,meson-gxbb-scpi-sensors";
++ compatible = "amlogic,meson-gxbb-scpi-sensors", "arm,scpi-sensors";
+ #thermal-sensor-cells = <1>;
+ };
+ };
+diff --git a/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi b/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi
+index 7f55d97f6c283..c063a144e0e7b 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi
+@@ -1694,7 +1694,7 @@
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+- internal_ephy: ethernet_phy@8 {
++ internal_ephy: ethernet-phy@8 {
+ compatible = "ethernet-phy-id0180.3301",
+ "ethernet-phy-ieee802.3-c22";
+ interrupts = <GIC_SPI 9 IRQ_TYPE_LEVEL_HIGH>;
+diff --git a/arch/arm64/boot/dts/amlogic/meson-g12a-radxa-zero.dts b/arch/arm64/boot/dts/amlogic/meson-g12a-radxa-zero.dts
+index e3bb6df42ff3e..cf0a9be83fc47 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-g12a-radxa-zero.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-g12a-radxa-zero.dts
+@@ -401,5 +401,4 @@
+
+ &usb {
+ status = "okay";
+- dr_mode = "host";
+ };
+diff --git a/arch/arm64/boot/dts/amlogic/meson-g12a.dtsi b/arch/arm64/boot/dts/amlogic/meson-g12a.dtsi
+index 7677764eeee6e..f58fd2a6fe61c 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-g12a.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-g12a.dtsi
+@@ -58,26 +58,6 @@
+ compatible = "operating-points-v2";
+ opp-shared;
+
+- opp-100000000 {
+- opp-hz = /bits/ 64 <100000000>;
+- opp-microvolt = <731000>;
+- };
+-
+- opp-250000000 {
+- opp-hz = /bits/ 64 <250000000>;
+- opp-microvolt = <731000>;
+- };
+-
+- opp-500000000 {
+- opp-hz = /bits/ 64 <500000000>;
+- opp-microvolt = <731000>;
+- };
+-
+- opp-667000000 {
+- opp-hz = /bits/ 64 <666666666>;
+- opp-microvolt = <731000>;
+- };
+-
+ opp-1000000000 {
+ opp-hz = /bits/ 64 <1000000000>;
+ opp-microvolt = <731000>;
+diff --git a/arch/arm64/boot/dts/amlogic/meson-g12b-odroid-go-ultra.dts b/arch/arm64/boot/dts/amlogic/meson-g12b-odroid-go-ultra.dts
+index 1e40709610c52..c8e5a0a42b898 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-g12b-odroid-go-ultra.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-g12b-odroid-go-ultra.dts
+@@ -381,6 +381,7 @@
+ reg = <0x1c>;
+ interrupt-parent = <&gpio_intc>;
+ interrupts = <7 IRQ_TYPE_LEVEL_LOW>; /* GPIOAO_7 */
++ #clock-cells = <1>;
+
+ vcc1-supply = <&vdd_sys>;
+ vcc2-supply = <&vdd_sys>;
+@@ -391,7 +392,6 @@
+ vcc8-supply = <&vcc_2v3>;
+ vcc9-supply = <&vddao_3v3>;
+ boost-supply = <&vdd_sys>;
+- switch-supply = <&vdd_sys>;
+
+ regulators {
+ vddcpu_a: DCDC_REG1 {
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gx-libretech-pc.dtsi b/arch/arm64/boot/dts/amlogic/meson-gx-libretech-pc.dtsi
+index bcdf55f48a831..4e84ab87cc7db 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gx-libretech-pc.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-gx-libretech-pc.dtsi
+@@ -17,7 +17,7 @@
+ io-channel-names = "buttons";
+ keyup-threshold-microvolt = <1800000>;
+
+- update-button {
++ button-update {
+ label = "update";
+ linux,code = <KEY_VENDOR>;
+ press-threshold-microvolt = <1300000>;
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gx.dtsi b/arch/arm64/boot/dts/amlogic/meson-gx.dtsi
+index 5eed15035b674..11f89bfecb56a 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gx.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-gx.dtsi
+@@ -233,7 +233,7 @@
+ reg = <0x14 0x10>;
+ };
+
+- eth_mac: eth_mac@34 {
++ eth_mac: eth-mac@34 {
+ reg = <0x34 0x10>;
+ };
+
+@@ -250,7 +250,7 @@
+ scpi_clocks: clocks {
+ compatible = "arm,scpi-clocks";
+
+- scpi_dvfs: scpi_clocks@0 {
++ scpi_dvfs: clocks-0 {
+ compatible = "arm,scpi-dvfs-clocks";
+ #clock-cells = <1>;
+ clock-indices = <0>;
+@@ -532,7 +532,7 @@
+ #size-cells = <2>;
+ ranges = <0x0 0x0 0x0 0xc8834000 0x0 0x2000>;
+
+- hwrng: rng {
++ hwrng: rng@0 {
+ compatible = "amlogic,meson-rng";
+ reg = <0x0 0x0 0x0 0x4>;
+ };
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gxbb-kii-pro.dts b/arch/arm64/boot/dts/amlogic/meson-gxbb-kii-pro.dts
+index 6d8cc00fedc7f..5f2d4317ecfbf 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gxbb-kii-pro.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-gxbb-kii-pro.dts
+@@ -16,7 +16,7 @@
+
+ leds {
+ compatible = "gpio-leds";
+- status {
++ led {
+ gpios = <&gpio_ao GPIOAO_13 GPIO_ACTIVE_LOW>;
+ default-state = "off";
+ color = <LED_COLOR_ID_RED>;
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gxl-s905d-phicomm-n1.dts b/arch/arm64/boot/dts/amlogic/meson-gxl-s905d-phicomm-n1.dts
+index 9ef210f17b4aa..393d3cb33b9ee 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gxl-s905d-phicomm-n1.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-gxl-s905d-phicomm-n1.dts
+@@ -18,7 +18,7 @@
+ leds {
+ compatible = "gpio-leds";
+
+- status {
++ led {
+ label = "n1:white:status";
+ gpios = <&gpio_ao GPIOAO_9 GPIO_ACTIVE_HIGH>;
+ default-state = "on";
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gxl-s905d-sml5442tw.dts b/arch/arm64/boot/dts/amlogic/meson-gxl-s905d-sml5442tw.dts
+index b331a013572f3..c490dbbf063bf 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gxl-s905d-sml5442tw.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-gxl-s905d-sml5442tw.dts
+@@ -79,6 +79,5 @@
+ enable-gpios = <&gpio GPIOX_17 GPIO_ACTIVE_HIGH>;
+ max-speed = <2000000>;
+ clocks = <&wifi32k>;
+- clock-names = "lpo";
+ };
+ };
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gxl-s905w-jethome-jethub-j80.dts b/arch/arm64/boot/dts/amlogic/meson-gxl-s905w-jethome-jethub-j80.dts
+index 6831137c5c109..a18d6d241a5ad 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gxl-s905w-jethome-jethub-j80.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-gxl-s905w-jethome-jethub-j80.dts
+@@ -86,11 +86,11 @@
+ };
+
+ &efuse {
+- bt_mac: bt_mac@6 {
++ bt_mac: bt-mac@6 {
+ reg = <0x6 0x6>;
+ };
+
+- wifi_mac: wifi_mac@C {
++ wifi_mac: wifi-mac@c {
+ reg = <0xc 0x6>;
+ };
+ };
+@@ -239,7 +239,7 @@
+ pinctrl-names = "default";
+ pinctrl-0 = <&i2c_b_pins>;
+
+- pcf8563: pcf8563@51 {
++ pcf8563: rtc@51 {
+ compatible = "nxp,pcf8563";
+ reg = <0x51>;
+ status = "okay";
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gxl.dtsi b/arch/arm64/boot/dts/amlogic/meson-gxl.dtsi
+index 04e9d0f1bde0f..5905a6df09b04 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gxl.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-gxl.dtsi
+@@ -773,7 +773,7 @@
+ };
+ };
+
+- eth-phy-mux {
++ eth-phy-mux@55c {
+ compatible = "mdio-mux-mmioreg", "mdio-mux";
+ #address-cells = <1>;
+ #size-cells = <0>;
+diff --git a/arch/arm64/boot/dts/amlogic/meson-sm1-bananapi-m5.dts b/arch/arm64/boot/dts/amlogic/meson-sm1-bananapi-m5.dts
+index cadba194b149b..38ebe98ba9c6b 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-sm1-bananapi-m5.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-sm1-bananapi-m5.dts
+@@ -17,13 +17,13 @@
+ compatible = "bananapi,bpi-m5", "amlogic,sm1";
+ model = "Banana Pi BPI-M5";
+
+- adc_keys {
++ adc-keys {
+ compatible = "adc-keys";
+ io-channels = <&saradc 2>;
+ io-channel-names = "buttons";
+ keyup-threshold-microvolt = <1800000>;
+
+- key {
++ button-sw3 {
+ label = "SW3";
+ linux,code = <BTN_3>;
+ press-threshold-microvolt = <1700000>;
+@@ -123,7 +123,7 @@
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <3300000>;
+
+- enable-gpio = <&gpio_ao GPIOE_2 GPIO_ACTIVE_HIGH>;
++ enable-gpio = <&gpio_ao GPIOE_2 GPIO_OPEN_DRAIN>;
+ enable-active-high;
+ regulator-always-on;
+
+diff --git a/arch/arm64/boot/dts/amlogic/meson-sm1-odroid-hc4.dts b/arch/arm64/boot/dts/amlogic/meson-sm1-odroid-hc4.dts
+index a1f0c38ccadda..74088e7280fee 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-sm1-odroid-hc4.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-sm1-odroid-hc4.dts
+@@ -76,9 +76,17 @@
+ };
+
+ &cpu_thermal {
++ trips {
++ cpu_active: cpu-active {
++ temperature = <60000>; /* millicelsius */
++ hysteresis = <2000>; /* millicelsius */
++ type = "active";
++ };
++ };
++
+ cooling-maps {
+ map {
+- trip = <&cpu_passive>;
++ trip = <&cpu_active>;
+ cooling-device = <&fan0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
+ };
+diff --git a/arch/arm64/boot/dts/freescale/imx8mm.dtsi b/arch/arm64/boot/dts/freescale/imx8mm.dtsi
+index 4ee89fdcf59bd..b45852e8087a9 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mm.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mm.dtsi
+@@ -563,7 +563,7 @@
+ #address-cells = <1>;
+ #size-cells = <1>;
+
+- imx8mm_uid: unique-id@410 {
++ imx8mm_uid: unique-id@4 {
+ reg = <0x4 0x8>;
+ };
+
+diff --git a/arch/arm64/boot/dts/freescale/imx8mn.dtsi b/arch/arm64/boot/dts/freescale/imx8mn.dtsi
+index b7d91df71cc26..7601a031f85a0 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mn.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mn.dtsi
+@@ -564,7 +564,7 @@
+ #address-cells = <1>;
+ #size-cells = <1>;
+
+- imx8mn_uid: unique-id@410 {
++ imx8mn_uid: unique-id@4 {
+ reg = <0x4 0x8>;
+ };
+
+diff --git a/arch/arm64/boot/dts/freescale/imx8mp.dtsi b/arch/arm64/boot/dts/freescale/imx8mp.dtsi
+index 03034b439c1f7..bafe0a572f7e9 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mp.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mp.dtsi
+@@ -425,7 +425,7 @@
+ #address-cells = <1>;
+ #size-cells = <1>;
+
+- imx8mp_uid: unique-id@420 {
++ imx8mp_uid: unique-id@8 {
+ reg = <0x8 0x8>;
+ };
+
+diff --git a/arch/arm64/boot/dts/freescale/imx8mq.dtsi b/arch/arm64/boot/dts/freescale/imx8mq.dtsi
+index 7ce99c084e545..6eb5a98bb1bd4 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mq.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mq.dtsi
+@@ -593,7 +593,7 @@
+ #address-cells = <1>;
+ #size-cells = <1>;
+
+- imx8mq_uid: soc-uid@410 {
++ imx8mq_uid: soc-uid@4 {
+ reg = <0x4 0x8>;
+ };
+
+diff --git a/arch/arm64/boot/dts/mediatek/mt7622.dtsi b/arch/arm64/boot/dts/mediatek/mt7622.dtsi
+index 146e18b5b1f46..7bb316922a3a9 100644
+--- a/arch/arm64/boot/dts/mediatek/mt7622.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt7622.dtsi
+@@ -435,6 +435,7 @@
+ pwm: pwm@11006000 {
+ compatible = "mediatek,mt7622-pwm";
+ reg = <0 0x11006000 0 0x1000>;
++ #pwm-cells = <2>;
+ interrupts = <GIC_SPI 77 IRQ_TYPE_LEVEL_LOW>;
+ clocks = <&topckgen CLK_TOP_PWM_SEL>,
+ <&pericfg CLK_PERI_PWM_PD>,
+diff --git a/arch/arm64/boot/dts/mediatek/mt7986a.dtsi b/arch/arm64/boot/dts/mediatek/mt7986a.dtsi
+index 0e9406fc63e2d..0ed5e067928b5 100644
+--- a/arch/arm64/boot/dts/mediatek/mt7986a.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt7986a.dtsi
+@@ -167,8 +167,7 @@
+ };
+
+ watchdog: watchdog@1001c000 {
+- compatible = "mediatek,mt7986-wdt",
+- "mediatek,mt6589-wdt";
++ compatible = "mediatek,mt7986-wdt";
+ reg = <0 0x1001c000 0 0x1000>;
+ interrupts = <GIC_SPI 110 IRQ_TYPE_LEVEL_HIGH>;
+ #reset-cells = <1>;
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183.dtsi b/arch/arm64/boot/dts/mediatek/mt8183.dtsi
+index 402136bfd5350..268a1f28af8ce 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8183.dtsi
+@@ -585,6 +585,15 @@
+ method = "smc";
+ };
+
++ clk13m: fixed-factor-clock-13m {
++ compatible = "fixed-factor-clock";
++ #clock-cells = <0>;
++ clocks = <&clk26m>;
++ clock-div = <2>;
++ clock-mult = <1>;
++ clock-output-names = "clk13m";
++ };
++
+ clk26m: oscillator {
+ compatible = "fixed-clock";
+ #clock-cells = <0>;
+@@ -968,8 +977,7 @@
+ "mediatek,mt6765-timer";
+ reg = <0 0x10017000 0 0x1000>;
+ interrupts = <GIC_SPI 200 IRQ_TYPE_LEVEL_HIGH>;
+- clocks = <&topckgen CLK_TOP_CLK13M>;
+- clock-names = "clk13m";
++ clocks = <&clk13m>;
+ };
+
+ iommu: iommu@10205000 {
+diff --git a/arch/arm64/boot/dts/mediatek/mt8186.dtsi b/arch/arm64/boot/dts/mediatek/mt8186.dtsi
+index c326aeb33a109..a02bf4ab1504d 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8186.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8186.dtsi
+@@ -47,14 +47,12 @@
+ core5 {
+ cpu = <&cpu5>;
+ };
+- };
+
+- cluster1 {
+- core0 {
++ core6 {
+ cpu = <&cpu6>;
+ };
+
+- core1 {
++ core7 {
+ cpu = <&cpu7>;
+ };
+ };
+@@ -214,10 +212,12 @@
+ };
+ };
+
+- clk13m: oscillator-13m {
+- compatible = "fixed-clock";
++ clk13m: fixed-factor-clock-13m {
++ compatible = "fixed-factor-clock";
+ #clock-cells = <0>;
+- clock-frequency = <13000000>;
++ clocks = <&clk26m>;
++ clock-div = <2>;
++ clock-mult = <1>;
+ clock-output-names = "clk13m";
+ };
+
+@@ -333,8 +333,7 @@
+ };
+
+ watchdog: watchdog@10007000 {
+- compatible = "mediatek,mt8186-wdt",
+- "mediatek,mt6589-wdt";
++ compatible = "mediatek,mt8186-wdt";
+ mediatek,disable-extrst;
+ reg = <0 0x10007000 0 0x1000>;
+ #reset-cells = <1>;
+diff --git a/arch/arm64/boot/dts/mediatek/mt8192.dtsi b/arch/arm64/boot/dts/mediatek/mt8192.dtsi
+index 424fc89cc6f7c..627e3bf1c544b 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8192.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8192.dtsi
+@@ -29,6 +29,15 @@
+ rdma4 = &rdma4;
+ };
+
++ clk13m: fixed-factor-clock-13m {
++ compatible = "fixed-factor-clock";
++ #clock-cells = <0>;
++ clocks = <&clk26m>;
++ clock-div = <2>;
++ clock-mult = <1>;
++ clock-output-names = "clk13m";
++ };
++
+ clk26m: oscillator0 {
+ compatible = "fixed-clock";
+ #clock-cells = <0>;
+@@ -149,19 +158,16 @@
+ core3 {
+ cpu = <&cpu3>;
+ };
+- };
+-
+- cluster1 {
+- core0 {
++ core4 {
+ cpu = <&cpu4>;
+ };
+- core1 {
++ core5 {
+ cpu = <&cpu5>;
+ };
+- core2 {
++ core6 {
+ cpu = <&cpu6>;
+ };
+- core3 {
++ core7 {
+ cpu = <&cpu7>;
+ };
+ };
+@@ -534,8 +540,7 @@
+ "mediatek,mt6765-timer";
+ reg = <0 0x10017000 0 0x1000>;
+ interrupts = <GIC_SPI 233 IRQ_TYPE_LEVEL_HIGH 0>;
+- clocks = <&topckgen CLK_TOP_CSW_F26M_D2>;
+- clock-names = "clk13m";
++ clocks = <&clk13m>;
+ };
+
+ pwrap: pwrap@10026000 {
+@@ -578,6 +583,8 @@
+ compatible = "mediatek,mt8192-scp_adsp";
+ reg = <0 0x10720000 0 0x1000>;
+ #clock-cells = <1>;
++ /* power domain dependency not upstreamed */
++ status = "fail";
+ };
+
+ uart0: serial@11002000 {
+diff --git a/arch/arm64/boot/dts/mediatek/mt8195.dtsi b/arch/arm64/boot/dts/mediatek/mt8195.dtsi
+index c10cfeb1214d5..c5b8abc0c5854 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8195.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8195.dtsi
+@@ -151,22 +151,20 @@
+ core3 {
+ cpu = <&cpu3>;
+ };
+- };
+
+- cluster1 {
+- core0 {
++ core4 {
+ cpu = <&cpu4>;
+ };
+
+- core1 {
++ core5 {
+ cpu = <&cpu5>;
+ };
+
+- core2 {
++ core6 {
+ cpu = <&cpu6>;
+ };
+
+- core3 {
++ core7 {
+ cpu = <&cpu7>;
+ };
+ };
+@@ -248,6 +246,15 @@
+ status = "disabled";
+ };
+
++ clk13m: fixed-factor-clock-13m {
++ compatible = "fixed-factor-clock";
++ #clock-cells = <0>;
++ clocks = <&clk26m>;
++ clock-div = <2>;
++ clock-mult = <1>;
++ clock-output-names = "clk13m";
++ };
++
+ clk26m: oscillator-26m {
+ compatible = "fixed-clock";
+ #clock-cells = <0>;
+@@ -687,8 +694,7 @@
+ };
+
+ watchdog: watchdog@10007000 {
+- compatible = "mediatek,mt8195-wdt",
+- "mediatek,mt6589-wdt";
++ compatible = "mediatek,mt8195-wdt";
+ mediatek,disable-extrst;
+ reg = <0 0x10007000 0 0x100>;
+ #reset-cells = <1>;
+@@ -705,7 +711,7 @@
+ "mediatek,mt6765-timer";
+ reg = <0 0x10017000 0 0x1000>;
+ interrupts = <GIC_SPI 265 IRQ_TYPE_LEVEL_HIGH 0>;
+- clocks = <&topckgen CLK_TOP_CLK26M_D2>;
++ clocks = <&clk13m>;
+ };
+
+ pwrap: pwrap@10024000 {
+@@ -1549,6 +1555,7 @@
+ #address-cells = <1>;
+ #size-cells = <1>;
+ ranges = <0 0 0x11e30000 0xe00>;
++ power-domains = <&spm MT8195_POWER_DOMAIN_SSUSB_PCIE_PHY>;
+ status = "disabled";
+
+ u2port1: usb-phy@0 {
+diff --git a/arch/arm64/boot/dts/nvidia/tegra194.dtsi b/arch/arm64/boot/dts/nvidia/tegra194.dtsi
+index 4afcbd60e144e..d8169920b33b4 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra194.dtsi
++++ b/arch/arm64/boot/dts/nvidia/tegra194.dtsi
+@@ -1918,6 +1918,7 @@
+ interconnects = <&mc TEGRA194_MEMORY_CLIENT_HOST1XDMAR &emc>;
+ interconnect-names = "dma-mem";
+ iommus = <&smmu TEGRA194_SID_HOST1X>;
++ dma-coherent;
+
+ /* Context isolation domains */
+ iommu-map = <0 &smmu TEGRA194_SID_HOST1X_CTX0 1>,
+diff --git a/arch/arm64/boot/dts/nvidia/tegra210-p2597.dtsi b/arch/arm64/boot/dts/nvidia/tegra210-p2597.dtsi
+index dd9a17922fe5c..a87e103f3828d 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra210-p2597.dtsi
++++ b/arch/arm64/boot/dts/nvidia/tegra210-p2597.dtsi
+@@ -1667,7 +1667,7 @@
+ vin-supply = <&vdd_5v0_sys>;
+ };
+
+- vdd_cam_1v2: regulator-vdd-cam-1v8 {
++ vdd_cam_1v2: regulator-vdd-cam-1v2 {
+ compatible = "regulator-fixed";
+ regulator-name = "vdd-cam-1v2";
+ regulator-min-microvolt = <1200000>;
+diff --git a/arch/arm64/boot/dts/nvidia/tegra234.dtsi b/arch/arm64/boot/dts/nvidia/tegra234.dtsi
+index eaf05ee9acd18..77ceed615b7fc 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra234.dtsi
++++ b/arch/arm64/boot/dts/nvidia/tegra234.dtsi
+@@ -571,6 +571,7 @@
+ interconnects = <&mc TEGRA234_MEMORY_CLIENT_HOST1XDMAR &emc>;
+ interconnect-names = "dma-mem";
+ iommus = <&smmu_niso1 TEGRA234_SID_HOST1X>;
++ dma-coherent;
+
+ /* Context isolation domains */
+ iommu-map = <0 &smmu_niso0 TEGRA234_SID_HOST1X_CTX0 1>,
+diff --git a/arch/arm64/boot/dts/qcom/ipq8074.dtsi b/arch/arm64/boot/dts/qcom/ipq8074.dtsi
+index 4e51d8e3df043..4294beeb494fd 100644
+--- a/arch/arm64/boot/dts/qcom/ipq8074.dtsi
++++ b/arch/arm64/boot/dts/qcom/ipq8074.dtsi
+@@ -137,7 +137,7 @@
+ #clock-cells = <0>;
+ clocks = <&gcc GCC_USB1_PIPE_CLK>;
+ clock-names = "pipe0";
+- clock-output-names = "gcc_usb1_pipe_clk_src";
++ clock-output-names = "usb3phy_1_cc_pipe_clk";
+ };
+ };
+
+@@ -180,7 +180,7 @@
+ #clock-cells = <0>;
+ clocks = <&gcc GCC_USB0_PIPE_CLK>;
+ clock-names = "pipe0";
+- clock-output-names = "gcc_usb0_pipe_clk_src";
++ clock-output-names = "usb3phy_0_cc_pipe_clk";
+ };
+ };
+
+@@ -197,9 +197,9 @@
+ status = "disabled";
+ };
+
+- pcie_qmp0: phy@86000 {
+- compatible = "qcom,ipq8074-qmp-pcie-phy";
+- reg = <0x00086000 0x1c4>;
++ pcie_qmp0: phy@84000 {
++ compatible = "qcom,ipq8074-qmp-gen3-pcie-phy";
++ reg = <0x00084000 0x1bc>;
+ #address-cells = <1>;
+ #size-cells = <1>;
+ ranges;
+@@ -213,15 +213,16 @@
+ "common";
+ status = "disabled";
+
+- pcie_phy0: phy@86200 {
+- reg = <0x86200 0x16c>,
+- <0x86400 0x200>,
+- <0x86800 0x4f4>;
++ pcie_phy0: phy@84200 {
++ reg = <0x84200 0x16c>,
++ <0x84400 0x200>,
++ <0x84800 0x1f0>,
++ <0x84c00 0xf4>;
+ #phy-cells = <0>;
+ #clock-cells = <0>;
+ clocks = <&gcc GCC_PCIE0_PIPE_CLK>;
+ clock-names = "pipe0";
+- clock-output-names = "pcie_0_pipe_clk";
++ clock-output-names = "pcie20_phy0_pipe_clk";
+ };
+ };
+
+@@ -242,14 +243,14 @@
+ status = "disabled";
+
+ pcie_phy1: phy@8e200 {
+- reg = <0x8e200 0x16c>,
++ reg = <0x8e200 0x130>,
+ <0x8e400 0x200>,
+- <0x8e800 0x4f4>;
++ <0x8e800 0x1f8>;
+ #phy-cells = <0>;
+ #clock-cells = <0>;
+ clocks = <&gcc GCC_PCIE1_PIPE_CLK>;
+ clock-names = "pipe0";
+- clock-output-names = "pcie_1_pipe_clk";
++ clock-output-names = "pcie20_phy1_pipe_clk";
+ };
+ };
+
+@@ -772,9 +773,9 @@
+ phy-names = "pciephy";
+
+ ranges = <0x81000000 0 0x10200000 0x10200000
+- 0 0x100000 /* downstream I/O */
+- 0x82000000 0 0x10300000 0x10300000
+- 0 0xd00000>; /* non-prefetchable memory */
++ 0 0x10000>, /* downstream I/O */
++ <0x82000000 0 0x10220000 0x10220000
++ 0 0xfde0000>; /* non-prefetchable memory */
+
+ interrupts = <GIC_SPI 85 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-names = "msi";
+@@ -817,16 +818,18 @@
+ };
+
+ pcie0: pci@20000000 {
+- compatible = "qcom,pcie-ipq8074";
++ compatible = "qcom,pcie-ipq8074-gen3";
+ reg = <0x20000000 0xf1d>,
+ <0x20000f20 0xa8>,
+- <0x00080000 0x2000>,
++ <0x20001000 0x1000>,
++ <0x00080000 0x4000>,
+ <0x20100000 0x1000>;
+- reg-names = "dbi", "elbi", "parf", "config";
++ reg-names = "dbi", "elbi", "atu", "parf", "config";
+ device_type = "pci";
+ linux,pci-domain = <0>;
+ bus-range = <0x00 0xff>;
+ num-lanes = <1>;
++ max-link-speed = <3>;
+ #address-cells = <3>;
+ #size-cells = <2>;
+
+@@ -834,9 +837,9 @@
+ phy-names = "pciephy";
+
+ ranges = <0x81000000 0 0x20200000 0x20200000
+- 0 0x100000 /* downstream I/O */
+- 0x82000000 0 0x20300000 0x20300000
+- 0 0xd00000>; /* non-prefetchable memory */
++ 0 0x10000>, /* downstream I/O */
++ <0x82000000 0 0x20220000 0x20220000
++ 0 0xfde0000>; /* non-prefetchable memory */
+
+ interrupts = <GIC_SPI 52 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-names = "msi";
+@@ -854,28 +857,30 @@
+ clocks = <&gcc GCC_SYS_NOC_PCIE0_AXI_CLK>,
+ <&gcc GCC_PCIE0_AXI_M_CLK>,
+ <&gcc GCC_PCIE0_AXI_S_CLK>,
+- <&gcc GCC_PCIE0_AHB_CLK>,
+- <&gcc GCC_PCIE0_AUX_CLK>;
+-
++ <&gcc GCC_PCIE0_AXI_S_BRIDGE_CLK>,
++ <&gcc GCC_PCIE0_RCHNG_CLK>;
+ clock-names = "iface",
+ "axi_m",
+ "axi_s",
+- "ahb",
+- "aux";
++ "axi_bridge",
++ "rchng";
++
+ resets = <&gcc GCC_PCIE0_PIPE_ARES>,
+ <&gcc GCC_PCIE0_SLEEP_ARES>,
+ <&gcc GCC_PCIE0_CORE_STICKY_ARES>,
+ <&gcc GCC_PCIE0_AXI_MASTER_ARES>,
+ <&gcc GCC_PCIE0_AXI_SLAVE_ARES>,
+ <&gcc GCC_PCIE0_AHB_ARES>,
+- <&gcc GCC_PCIE0_AXI_MASTER_STICKY_ARES>;
++ <&gcc GCC_PCIE0_AXI_MASTER_STICKY_ARES>,
++ <&gcc GCC_PCIE0_AXI_SLAVE_STICKY_ARES>;
+ reset-names = "pipe",
+ "sleep",
+ "sticky",
+ "axi_m",
+ "axi_s",
+ "ahb",
+- "axi_m_sticky";
++ "axi_m_sticky",
++ "axi_s_sticky";
+ status = "disabled";
+ };
+ };
+diff --git a/arch/arm64/boot/dts/qcom/msm8953.dtsi b/arch/arm64/boot/dts/qcom/msm8953.dtsi
+index 32349174c4bd9..70f033656b555 100644
+--- a/arch/arm64/boot/dts/qcom/msm8953.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8953.dtsi
+@@ -455,7 +455,7 @@
+ reg = <0x1000000 0x300000>;
+ interrupts = <GIC_SPI 208 IRQ_TYPE_LEVEL_HIGH>;
+ gpio-controller;
+- gpio-ranges = <&tlmm 0 0 155>;
++ gpio-ranges = <&tlmm 0 0 142>;
+ #gpio-cells = <2>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+diff --git a/arch/arm64/boot/dts/qcom/msm8956.dtsi b/arch/arm64/boot/dts/qcom/msm8956.dtsi
+index e432512d8716a..668e05185c21e 100644
+--- a/arch/arm64/boot/dts/qcom/msm8956.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8956.dtsi
+@@ -12,6 +12,10 @@
+ interrupts = <GIC_PPI 7 (GIC_CPU_MASK_SIMPLE(6) | IRQ_TYPE_LEVEL_HIGH)>;
+ };
+
++&tsens {
++ compatible = "qcom,msm8956-tsens", "qcom,tsens-v1";
++};
++
+ /*
+ * You might be wondering.. why is it so empty out there?
+ * Well, the SoCs are almost identical.
+diff --git a/arch/arm64/boot/dts/qcom/msm8992-lg-bullhead.dtsi b/arch/arm64/boot/dts/qcom/msm8992-lg-bullhead.dtsi
+index 79de9cc395c4c..cd77dcb558722 100644
+--- a/arch/arm64/boot/dts/qcom/msm8992-lg-bullhead.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8992-lg-bullhead.dtsi
+@@ -2,7 +2,7 @@
+ /*
+ * Copyright (c) 2015, LGE Inc. All rights reserved.
+ * Copyright (c) 2016, The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021, Petr Vorel <petr.vorel@gmail.com>
++ * Copyright (c) 2021-2022, Petr Vorel <petr.vorel@gmail.com>
+ * Copyright (c) 2022, Dominik Kobinski <dominikkobinski314@gmail.com>
+ */
+
+@@ -15,6 +15,9 @@
+ /* cont_splash_mem has different memory mapping */
+ /delete-node/ &cont_splash_mem;
+
++/* disabled on downstream, conflicts with cont_splash_mem */
++/delete-node/ &dfps_data_mem;
++
+ / {
+ model = "LG Nexus 5X";
+ compatible = "lg,bullhead", "qcom,msm8992";
+@@ -49,12 +52,17 @@
+ };
+
+ cont_splash_mem: memory@3400000 {
+- reg = <0 0x03400000 0 0x1200000>;
++ reg = <0 0x03400000 0 0xc00000>;
+ no-map;
+ };
+
+- removed_region: reserved@5000000 {
+- reg = <0 0x05000000 0 0x2200000>;
++ reserved@5000000 {
++ reg = <0x0 0x05000000 0x0 0x1a00000>;
++ no-map;
++ };
++
++ reserved@6c00000 {
++ reg = <0x0 0x06c00000 0x0 0x400000>;
+ no-map;
+ };
+ };
+@@ -86,8 +94,8 @@
+ /* S1, S2, S6 and S12 are managed by RPMPD */
+
+ pm8994_s1: s1 {
+- regulator-min-microvolt = <800000>;
+- regulator-max-microvolt = <800000>;
++ regulator-min-microvolt = <1025000>;
++ regulator-max-microvolt = <1025000>;
+ };
+
+ pm8994_s2: s2 {
+@@ -243,11 +251,8 @@
+ };
+
+ pm8994_l26: l26 {
+- /*
+- * TODO: value from downstream
+- * regulator-min-microvolt = <987500>;
+- * fails to apply
+- */
++ regulator-min-microvolt = <987500>;
++ regulator-max-microvolt = <987500>;
+ };
+
+ pm8994_l27: l27 {
+@@ -261,19 +266,13 @@
+ };
+
+ pm8994_l29: l29 {
+- /*
+- * TODO: Unsupported voltage range.
+- * regulator-min-microvolt = <2800000>;
+- * regulator-max-microvolt = <2800000>;
+- */
++ regulator-min-microvolt = <2800000>;
++ regulator-max-microvolt = <2800000>;
+ };
+
+ pm8994_l30: l30 {
+- /*
+- * TODO: get this verified
+- * regulator-min-microvolt = <1800000>;
+- * regulator-max-microvolt = <1800000>;
+- */
++ regulator-min-microvolt = <1800000>;
++ regulator-max-microvolt = <1800000>;
+ };
+
+ pm8994_l31: l31 {
+@@ -282,11 +281,8 @@
+ };
+
+ pm8994_l32: l32 {
+- /*
+- * TODO: get this verified
+- * regulator-min-microvolt = <1800000>;
+- * regulator-max-microvolt = <1800000>;
+- */
++ regulator-min-microvolt = <1800000>;
++ regulator-max-microvolt = <1800000>;
+ };
+ };
+
+diff --git a/arch/arm64/boot/dts/qcom/msm8996-oneplus-common.dtsi b/arch/arm64/boot/dts/qcom/msm8996-oneplus-common.dtsi
+index 20f5c103c63b7..2994337c60464 100644
+--- a/arch/arm64/boot/dts/qcom/msm8996-oneplus-common.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8996-oneplus-common.dtsi
+@@ -179,7 +179,6 @@
+ };
+
+ &dsi0_phy {
+- vdda-supply = <&vreg_l2a_1p25>;
+ vcca-supply = <&vreg_l28a_0p925>;
+ status = "okay";
+ };
+diff --git a/arch/arm64/boot/dts/qcom/msm8996-sony-xperia-tone.dtsi b/arch/arm64/boot/dts/qcom/msm8996-sony-xperia-tone.dtsi
+index dec361b93ccea..be62899edf8e3 100644
+--- a/arch/arm64/boot/dts/qcom/msm8996-sony-xperia-tone.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8996-sony-xperia-tone.dtsi
+@@ -943,10 +943,6 @@
+ };
+ };
+
+-/*
+- * For reasons that are currently unknown (but probably related to fusb301), USB takes about
+- * 6 minutes to wake up (nothing interesting in kernel logs), but then it works as it should.
+- */
+ &usb3 {
+ status = "okay";
+ qcom,select-utmi-as-pipe-clk;
+@@ -955,6 +951,7 @@
+ &usb3_dwc3 {
+ extcon = <&usb3_id>;
+ dr_mode = "peripheral";
++ maximum-speed = "high-speed";
+ phys = <&hsusb_phy1>;
+ phy-names = "usb2-phy";
+ snps,hird-threshold = /bits/ 8 <0>;
+diff --git a/arch/arm64/boot/dts/qcom/msm8996.dtsi b/arch/arm64/boot/dts/qcom/msm8996.dtsi
+index d31464204f696..71678749d66f6 100644
+--- a/arch/arm64/boot/dts/qcom/msm8996.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8996.dtsi
+@@ -713,7 +713,7 @@
+ #power-domain-cells = <1>;
+ reg = <0x00300000 0x90000>;
+
+- clocks = <&rpmcc RPM_SMD_BB_CLK1>,
++ clocks = <&rpmcc RPM_SMD_XO_CLK_SRC>,
+ <&rpmcc RPM_SMD_LN_BB_CLK>,
+ <&sleep_clk>,
+ <&pciephy_0>,
+@@ -830,9 +830,11 @@
+ compatible = "qcom,msm8996-a2noc";
+ reg = <0x00583000 0x7000>;
+ #interconnect-cells = <1>;
+- clock-names = "bus", "bus_a";
++ clock-names = "bus", "bus_a", "aggre2_ufs_axi", "ufs_axi";
+ clocks = <&rpmcc RPM_SMD_AGGR2_NOC_CLK>,
+- <&rpmcc RPM_SMD_AGGR2_NOC_A_CLK>;
++ <&rpmcc RPM_SMD_AGGR2_NOC_A_CLK>,
++ <&gcc GCC_AGGRE2_UFS_AXI_CLK>,
++ <&gcc GCC_UFS_AXI_CLK>;
+ };
+
+ mnoc: interconnect@5a4000 {
+@@ -1050,7 +1052,7 @@
+ #clock-cells = <1>;
+ #phy-cells = <0>;
+
+- clocks = <&mmcc MDSS_AHB_CLK>, <&rpmcc RPM_SMD_BB_CLK1>;
++ clocks = <&mmcc MDSS_AHB_CLK>, <&rpmcc RPM_SMD_XO_CLK_SRC>;
+ clock-names = "iface", "ref";
+ status = "disabled";
+ };
+@@ -1117,7 +1119,7 @@
+ #clock-cells = <1>;
+ #phy-cells = <0>;
+
+- clocks = <&mmcc MDSS_AHB_CLK>, <&rpmcc RPM_SMD_BB_CLK1>;
++ clocks = <&mmcc MDSS_AHB_CLK>, <&rpmcc RPM_SMD_XO_CLK_SRC>;
+ clock-names = "iface", "ref";
+ status = "disabled";
+ };
+@@ -2940,8 +2942,8 @@
+ compatible = "qcom,msm8996-apcc";
+ reg = <0x06400000 0x90000>;
+
+- clock-names = "xo";
+- clocks = <&rpmcc RPM_SMD_BB_CLK1>;
++ clock-names = "xo", "sys_apcs_aux";
++ clocks = <&rpmcc RPM_SMD_XO_A_CLK_SRC>, <&apcs_glb>;
+
+ #clock-cells = <1>;
+ };
+@@ -3060,7 +3062,7 @@
+ clock-names = "iface", "core", "xo";
+ clocks = <&gcc GCC_SDCC1_AHB_CLK>,
+ <&gcc GCC_SDCC1_APPS_CLK>,
+- <&rpmcc RPM_SMD_BB_CLK1>;
++ <&rpmcc RPM_SMD_XO_CLK_SRC>;
+ resets = <&gcc GCC_SDCC1_BCR>;
+
+ pinctrl-names = "default", "sleep";
+@@ -3084,7 +3086,7 @@
+ clock-names = "iface", "core", "xo";
+ clocks = <&gcc GCC_SDCC2_AHB_CLK>,
+ <&gcc GCC_SDCC2_APPS_CLK>,
+- <&rpmcc RPM_SMD_BB_CLK1>;
++ <&rpmcc RPM_SMD_XO_CLK_SRC>;
+ resets = <&gcc GCC_SDCC2_BCR>;
+
+ pinctrl-names = "default", "sleep";
+@@ -3406,7 +3408,7 @@
+ interrupt-names = "wdog", "fatal", "ready",
+ "handover", "stop-ack";
+
+- clocks = <&rpmcc RPM_SMD_BB_CLK1>;
++ clocks = <&rpmcc RPM_SMD_XO_CLK_SRC>;
+ clock-names = "xo";
+
+ memory-region = <&adsp_mem>;
+diff --git a/arch/arm64/boot/dts/qcom/msm8998-fxtec-pro1.dts b/arch/arm64/boot/dts/qcom/msm8998-fxtec-pro1.dts
+index 310f7a2df1e83..510d12c8c5126 100644
+--- a/arch/arm64/boot/dts/qcom/msm8998-fxtec-pro1.dts
++++ b/arch/arm64/boot/dts/qcom/msm8998-fxtec-pro1.dts
+@@ -364,14 +364,9 @@
+ };
+ };
+
+-&pm8998_pon {
+- resin {
+- compatible = "qcom,pm8941-resin";
+- interrupts = <GIC_SPI 0x8 1 IRQ_TYPE_EDGE_BOTH>;
+- bias-pull-up;
+- debounce = <15625>;
+- linux,code = <KEY_VOLUMEDOWN>;
+- };
++&pm8998_resin {
++ linux,code = <KEY_VOLUMEDOWN>;
++ status = "okay";
+ };
+
+ &qusb2phy {
+diff --git a/arch/arm64/boot/dts/qcom/msm8998-sony-xperia-yoshino.dtsi b/arch/arm64/boot/dts/qcom/msm8998-sony-xperia-yoshino.dtsi
+index 5da87baa2b23b..3bbd5df196bfc 100644
+--- a/arch/arm64/boot/dts/qcom/msm8998-sony-xperia-yoshino.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8998-sony-xperia-yoshino.dtsi
+@@ -357,14 +357,9 @@
+ };
+ };
+
+-&pm8998_pon {
+- resin {
+- compatible = "qcom,pm8941-resin";
+- interrupts = <GIC_SPI 0x8 1 IRQ_TYPE_EDGE_BOTH>;
+- debounce = <15625>;
+- bias-pull-up;
+- linux,code = <KEY_VOLUMEUP>;
+- };
++&pm8998_resin {
++ linux,code = <KEY_VOLUMEUP>;
++ status = "okay";
+ };
+
+ &qusb2phy {
+diff --git a/arch/arm64/boot/dts/qcom/pmi8950.dtsi b/arch/arm64/boot/dts/qcom/pmi8950.dtsi
+index 32d27e2187e38..8008f02434a9e 100644
+--- a/arch/arm64/boot/dts/qcom/pmi8950.dtsi
++++ b/arch/arm64/boot/dts/qcom/pmi8950.dtsi
+@@ -47,7 +47,7 @@
+ adc-chan@a {
+ reg = <VADC_REF_1250MV>;
+ qcom,pre-scaling = <1 1>;
+- label = "ref_1250v";
++ label = "ref_1250mv";
+ };
+
+ adc-chan@d {
+diff --git a/arch/arm64/boot/dts/qcom/pmk8350.dtsi b/arch/arm64/boot/dts/qcom/pmk8350.dtsi
+index 32f5e6af8c11c..f26fb7d32faf2 100644
+--- a/arch/arm64/boot/dts/qcom/pmk8350.dtsi
++++ b/arch/arm64/boot/dts/qcom/pmk8350.dtsi
+@@ -21,7 +21,7 @@
+ #size-cells = <0>;
+
+ pmk8350_pon: pon@1300 {
+- compatible = "qcom,pm8998-pon";
++ compatible = "qcom,pmk8350-pon";
+ reg = <0x1300>, <0x800>;
+ reg-names = "hlos", "pbs";
+
+diff --git a/arch/arm64/boot/dts/qcom/qcs404.dtsi b/arch/arm64/boot/dts/qcom/qcs404.dtsi
+index a5324eecb50a9..502dd6db491e2 100644
+--- a/arch/arm64/boot/dts/qcom/qcs404.dtsi
++++ b/arch/arm64/boot/dts/qcom/qcs404.dtsi
+@@ -806,7 +806,7 @@
+
+ clocks = <&gcc GCC_PCIE_0_PIPE_CLK>;
+ resets = <&gcc GCC_PCIEPHY_0_PHY_BCR>,
+- <&gcc 21>;
++ <&gcc GCC_PCIE_0_PIPE_ARES>;
+ reset-names = "phy", "pipe";
+
+ clock-output-names = "pcie_0_pipe_clk";
+@@ -1336,12 +1336,12 @@
+ <&gcc GCC_PCIE_0_SLV_AXI_CLK>;
+ clock-names = "iface", "aux", "master_bus", "slave_bus";
+
+- resets = <&gcc 18>,
+- <&gcc 17>,
+- <&gcc 15>,
+- <&gcc 19>,
++ resets = <&gcc GCC_PCIE_0_AXI_MASTER_ARES>,
++ <&gcc GCC_PCIE_0_AXI_SLAVE_ARES>,
++ <&gcc GCC_PCIE_0_AXI_MASTER_STICKY_ARES>,
++ <&gcc GCC_PCIE_0_CORE_STICKY_ARES>,
+ <&gcc GCC_PCIE_0_BCR>,
+- <&gcc 16>;
++ <&gcc GCC_PCIE_0_AHB_ARES>;
+ reset-names = "axi_m",
+ "axi_s",
+ "axi_m_sticky",
+diff --git a/arch/arm64/boot/dts/qcom/sc7180.dtsi b/arch/arm64/boot/dts/qcom/sc7180.dtsi
+index f71cf21a8dd8a..e45726be81c82 100644
+--- a/arch/arm64/boot/dts/qcom/sc7180.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7180.dtsi
+@@ -3274,8 +3274,8 @@
+ interrupts-extended = <&pdc 1 IRQ_TYPE_LEVEL_HIGH>;
+ qcom,ee = <0>;
+ qcom,channel = <0>;
+- #address-cells = <1>;
+- #size-cells = <1>;
++ #address-cells = <2>;
++ #size-cells = <0>;
+ interrupt-controller;
+ #interrupt-cells = <4>;
+ cell-index = <0>;
+diff --git a/arch/arm64/boot/dts/qcom/sc7280.dtsi b/arch/arm64/boot/dts/qcom/sc7280.dtsi
+index 0adf13399e649..3bedd45e14afd 100644
+--- a/arch/arm64/boot/dts/qcom/sc7280.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7280.dtsi
+@@ -4246,8 +4246,8 @@
+ interrupts-extended = <&pdc 1 IRQ_TYPE_LEVEL_HIGH>;
+ qcom,ee = <0>;
+ qcom,channel = <0>;
+- #address-cells = <1>;
+- #size-cells = <1>;
++ #address-cells = <2>;
++ #size-cells = <0>;
+ interrupt-controller;
+ #interrupt-cells = <4>;
+ };
+diff --git a/arch/arm64/boot/dts/qcom/sc8280xp.dtsi b/arch/arm64/boot/dts/qcom/sc8280xp.dtsi
+index 71cf81a8eb4da..8363e82369854 100644
+--- a/arch/arm64/boot/dts/qcom/sc8280xp.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc8280xp.dtsi
+@@ -1863,6 +1863,7 @@
+ "ss_phy_irq";
+
+ power-domains = <&gcc USB30_PRIM_GDSC>;
++ required-opps = <&rpmhpd_opp_nom>;
+
+ resets = <&gcc GCC_USB30_PRIM_BCR>;
+
+@@ -1917,6 +1918,7 @@
+ "ss_phy_irq";
+
+ power-domains = <&gcc USB30_SEC_GDSC>;
++ required-opps = <&rpmhpd_opp_nom>;
+
+ resets = <&gcc GCC_USB30_SEC_BCR>;
+
+@@ -2051,8 +2053,8 @@
+ interrupts-extended = <&pdc 1 IRQ_TYPE_LEVEL_HIGH>;
+ qcom,ee = <0>;
+ qcom,channel = <0>;
+- #address-cells = <1>;
+- #size-cells = <1>;
++ #address-cells = <2>;
++ #size-cells = <0>;
+ interrupt-controller;
+ #interrupt-cells = <4>;
+ };
+diff --git a/arch/arm64/boot/dts/qcom/sdm670-google-sargo.dts b/arch/arm64/boot/dts/qcom/sdm670-google-sargo.dts
+index cf2ae540db125..e3e61b9d1b9d7 100644
+--- a/arch/arm64/boot/dts/qcom/sdm670-google-sargo.dts
++++ b/arch/arm64/boot/dts/qcom/sdm670-google-sargo.dts
+@@ -256,6 +256,7 @@
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ regulator-enable-ramp-delay = <250>;
++ regulator-always-on;
+ };
+
+ vreg_l9a_1p8: ldo9 {
+diff --git a/arch/arm64/boot/dts/qcom/sdm845-db845c.dts b/arch/arm64/boot/dts/qcom/sdm845-db845c.dts
+index f41c6d600ea8c..75a4645936232 100644
+--- a/arch/arm64/boot/dts/qcom/sdm845-db845c.dts
++++ b/arch/arm64/boot/dts/qcom/sdm845-db845c.dts
+@@ -615,14 +615,9 @@
+ };
+ };
+
+-&pm8998_pon {
+- resin {
+- compatible = "qcom,pm8941-resin";
+- interrupts = <0x0 0x8 1 IRQ_TYPE_EDGE_BOTH>;
+- debounce = <15625>;
+- bias-pull-up;
+- linux,code = <KEY_VOLUMEDOWN>;
+- };
++&pm8998_resin {
++ linux,code = <KEY_VOLUMEDOWN>;
++ status = "okay";
+ };
+
+ &pmi8998_lpg {
+@@ -979,7 +974,7 @@
+ };
+
+ wcd_intr_default: wcd_intr_default {
+- pins = <54>;
++ pins = "gpio54";
+ function = "gpio";
+
+ input-enable;
+diff --git a/arch/arm64/boot/dts/qcom/sdm845-lg-common.dtsi b/arch/arm64/boot/dts/qcom/sdm845-lg-common.dtsi
+index 1eb423e4be24c..943287804e1a6 100644
+--- a/arch/arm64/boot/dts/qcom/sdm845-lg-common.dtsi
++++ b/arch/arm64/boot/dts/qcom/sdm845-lg-common.dtsi
+@@ -482,14 +482,9 @@
+ status = "okay";
+ };
+
+-&pm8998_pon {
+- resin {
+- compatible = "qcom,pm8941-resin";
+- interrupts = <0x0 0x8 1 IRQ_TYPE_EDGE_BOTH>;
+- debounce = <15625>;
+- bias-pull-up;
+- linux,code = <KEY_VOLUMEDOWN>;
+- };
++&pm8998_resin {
++ linux,code = <KEY_VOLUMEDOWN>;
++ status = "okay";
+ };
+
+ &sdhc_2 {
+diff --git a/arch/arm64/boot/dts/qcom/sdm845-shift-axolotl.dts b/arch/arm64/boot/dts/qcom/sdm845-shift-axolotl.dts
+index bb77ccfdc68c0..e6191602c70a8 100644
+--- a/arch/arm64/boot/dts/qcom/sdm845-shift-axolotl.dts
++++ b/arch/arm64/boot/dts/qcom/sdm845-shift-axolotl.dts
+@@ -522,14 +522,9 @@
+ };
+ };
+
+-&pm8998_pon {
+- volume_down_resin: resin {
+- compatible = "qcom,pm8941-resin";
+- interrupts = <0x0 0x8 1 IRQ_TYPE_EDGE_BOTH>;
+- debounce = <15625>;
+- bias-pull-up;
+- linux,code = <KEY_VOLUMEDOWN>;
+- };
++&pm8998_resin {
++ linux,code = <KEY_VOLUMEDOWN>;
++ status = "okay";
+ };
+
+ &pmi8998_lpg {
+diff --git a/arch/arm64/boot/dts/qcom/sdm845-xiaomi-beryllium-common.dtsi b/arch/arm64/boot/dts/qcom/sdm845-xiaomi-beryllium-common.dtsi
+index eb6b2b676eca4..d2866155dd870 100644
+--- a/arch/arm64/boot/dts/qcom/sdm845-xiaomi-beryllium-common.dtsi
++++ b/arch/arm64/boot/dts/qcom/sdm845-xiaomi-beryllium-common.dtsi
+@@ -325,14 +325,9 @@
+ qcom,cabc;
+ };
+
+-&pm8998_pon {
+- resin {
+- compatible = "qcom,pm8941-resin";
+- interrupts = <0x0 0x8 1 IRQ_TYPE_EDGE_BOTH>;
+- debounce = <15625>;
+- bias-pull-up;
+- linux,code = <KEY_VOLUMEDOWN>;
+- };
++&pm8998_resin {
++ linux,code = <KEY_VOLUMEDOWN>;
++ status = "okay";
+ };
+
+ &pmi8998_rradc {
+@@ -472,7 +467,7 @@
+ };
+
+ wcd_intr_default: wcd_intr_default {
+- pins = <54>;
++ pins = "gpio54";
+ function = "gpio";
+
+ input-enable;
+diff --git a/arch/arm64/boot/dts/qcom/sdm845-xiaomi-polaris.dts b/arch/arm64/boot/dts/qcom/sdm845-xiaomi-polaris.dts
+index 38ba809a95cd6..fba229d0bd108 100644
+--- a/arch/arm64/boot/dts/qcom/sdm845-xiaomi-polaris.dts
++++ b/arch/arm64/boot/dts/qcom/sdm845-xiaomi-polaris.dts
+@@ -530,14 +530,9 @@
+ };
+ };
+
+-&pm8998_pon {
+- resin {
+- interrupts = <0x0 0x8 1 IRQ_TYPE_EDGE_BOTH>;
+- compatible = "qcom,pm8941-resin";
+- linux,code = <KEY_VOLUMEDOWN>;
+- debounce = <15625>;
+- bias-pull-up;
+- };
++&pm8998_resin {
++ linux,code = <KEY_VOLUMEDOWN>;
++ status = "okay";
+ };
+
+ &q6afedai {
+diff --git a/arch/arm64/boot/dts/qcom/sdm845.dtsi b/arch/arm64/boot/dts/qcom/sdm845.dtsi
+index 65032b94b46d6..f36c23e7a2248 100644
+--- a/arch/arm64/boot/dts/qcom/sdm845.dtsi
++++ b/arch/arm64/boot/dts/qcom/sdm845.dtsi
+@@ -4593,7 +4593,6 @@
+ <&dispcc DISP_CC_MDSS_DP_PIXEL_CLK>;
+ clock-names = "core_iface", "core_aux", "ctrl_link",
+ "ctrl_link_iface", "stream_pixel";
+- #clock-cells = <1>;
+ assigned-clocks = <&dispcc DISP_CC_MDSS_DP_LINK_CLK_SRC>,
+ <&dispcc DISP_CC_MDSS_DP_PIXEL_CLK_SRC>;
+ assigned-clock-parents = <&dp_phy 0>, <&dp_phy 1>;
+diff --git a/arch/arm64/boot/dts/qcom/sm6115.dtsi b/arch/arm64/boot/dts/qcom/sm6115.dtsi
+index 572bf04adf906..9de56365703cf 100644
+--- a/arch/arm64/boot/dts/qcom/sm6115.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm6115.dtsi
+@@ -296,6 +296,8 @@
+
+ rpmcc: clock-controller {
+ compatible = "qcom,rpmcc-sm6115", "qcom,rpmcc";
++ clocks = <&xo_board>;
++ clock-names = "xo";
+ #clock-cells = <1>;
+ };
+
+@@ -361,7 +363,7 @@
+ reg-names = "west", "south", "east";
+ interrupts = <GIC_SPI 227 IRQ_TYPE_LEVEL_HIGH>;
+ gpio-controller;
+- gpio-ranges = <&tlmm 0 0 121>;
++ gpio-ranges = <&tlmm 0 0 114>; /* GPIOs + ufs_reset */
+ #gpio-cells = <2>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+@@ -704,6 +706,7 @@
+ ufs_mem_hc: ufs@4804000 {
+ compatible = "qcom,sm6115-ufshc", "qcom,ufshc", "jedec,ufs-2.0";
+ reg = <0x04804000 0x3000>, <0x04810000 0x8000>;
++ reg-names = "std", "ice";
+ interrupts = <GIC_SPI 356 IRQ_TYPE_LEVEL_HIGH>;
+ phys = <&ufs_mem_phy_lanes>;
+ phy-names = "ufsphy";
+@@ -736,10 +739,10 @@
+ <0 0>,
+ <0 0>,
+ <37500000 150000000>,
+- <75000000 300000000>,
+ <0 0>,
+ <0 0>,
+- <0 0>;
++ <0 0>,
++ <75000000 300000000>;
+
+ status = "disabled";
+ };
+diff --git a/arch/arm64/boot/dts/qcom/sm6125-sony-xperia-seine-pdx201.dts b/arch/arm64/boot/dts/qcom/sm6125-sony-xperia-seine-pdx201.dts
+index 0de6c5b7f742e..09cff5d1d0ae8 100644
+--- a/arch/arm64/boot/dts/qcom/sm6125-sony-xperia-seine-pdx201.dts
++++ b/arch/arm64/boot/dts/qcom/sm6125-sony-xperia-seine-pdx201.dts
+@@ -41,17 +41,18 @@
+ };
+
+ gpio-keys {
+- status = "okay";
+ compatible = "gpio-keys";
+- autorepeat;
+
+- key-vol-dn {
++ pinctrl-0 = <&vol_down_n>;
++ pinctrl-names = "default";
++
++ key-volume-down {
+ label = "Volume Down";
+ gpios = <&tlmm 47 GPIO_ACTIVE_LOW>;
+- linux,input-type = <1>;
+ linux,code = <KEY_VOLUMEDOWN>;
+- gpio-key,wakeup;
+ debounce-interval = <15>;
++ linux,can-disable;
++ wakeup-source;
+ };
+ };
+
+@@ -270,6 +271,14 @@
+
+ &tlmm {
+ gpio-reserved-ranges = <22 2>, <28 6>;
++
++ vol_down_n: vol-down-n-state {
++ pins = "gpio47";
++ function = "gpio";
++ drive-strength = <2>;
++ bias-disable;
++ input-enable;
++ };
+ };
+
+ &usb3 {
+diff --git a/arch/arm64/boot/dts/qcom/sm6125.dtsi b/arch/arm64/boot/dts/qcom/sm6125.dtsi
+index 7e25a4f85594f..bf9e8d45ee44f 100644
+--- a/arch/arm64/boot/dts/qcom/sm6125.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm6125.dtsi
+@@ -442,9 +442,9 @@
+ reg = <0x01613000 0x180>;
+ #phy-cells = <0>;
+
+- clocks = <&rpmcc RPM_SMD_XO_CLK_SRC>,
+- <&gcc GCC_AHB2PHY_USB_CLK>;
+- clock-names = "ref", "cfg_ahb";
++ clocks = <&gcc GCC_AHB2PHY_USB_CLK>,
++ <&rpmcc RPM_SMD_XO_CLK_SRC>;
++ clock-names = "cfg_ahb", "ref";
+
+ resets = <&gcc GCC_QUSB2PHY_PRIM_BCR>;
+ status = "disabled";
+diff --git a/arch/arm64/boot/dts/qcom/sm6350-sony-xperia-lena-pdx213.dts b/arch/arm64/boot/dts/qcom/sm6350-sony-xperia-lena-pdx213.dts
+index 94f77d376662e..4916d0db5b47f 100644
+--- a/arch/arm64/boot/dts/qcom/sm6350-sony-xperia-lena-pdx213.dts
++++ b/arch/arm64/boot/dts/qcom/sm6350-sony-xperia-lena-pdx213.dts
+@@ -35,10 +35,10 @@
+ gpio-keys {
+ compatible = "gpio-keys";
+ pinctrl-names = "default";
+- pinctrl-0 = <&gpio_keys_state>;
++ pinctrl-0 = <&vol_down_n>;
+
+ key-volume-down {
+- label = "volume_down";
++ label = "Volume Down";
+ linux,code = <KEY_VOLUMEDOWN>;
+ gpios = <&pm6350_gpios 2 GPIO_ACTIVE_LOW>;
+ };
+@@ -305,14 +305,12 @@
+ };
+
+ &pm6350_gpios {
+- gpio_keys_state: gpio-keys-state {
+- key-volume-down-pins {
+- pins = "gpio2";
+- function = PMIC_GPIO_FUNC_NORMAL;
+- power-source = <0>;
+- bias-disable;
+- input-enable;
+- };
++ vol_down_n: vol-down-n-state {
++ pins = "gpio2";
++ function = PMIC_GPIO_FUNC_NORMAL;
++ power-source = <0>;
++ bias-disable;
++ input-enable;
+ };
+ };
+
+diff --git a/arch/arm64/boot/dts/qcom/sm6350.dtsi b/arch/arm64/boot/dts/qcom/sm6350.dtsi
+index 43324bf291c30..00e43a0d2dd67 100644
+--- a/arch/arm64/boot/dts/qcom/sm6350.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm6350.dtsi
+@@ -342,13 +342,12 @@
+ };
+
+ ramoops: ramoops@ffc00000 {
+- compatible = "removed-dma-pool", "ramoops";
+- reg = <0 0xffc00000 0 0x00100000>;
++ compatible = "ramoops";
++ reg = <0 0xffc00000 0 0x100000>;
+ record-size = <0x1000>;
+ console-size = <0x40000>;
+- ftrace-size = <0x0>;
+ msg-size = <0x20000 0x20000>;
+- cc-size = <0x0>;
++ ecc-size = <16>;
+ no-map;
+ };
+
+diff --git a/arch/arm64/boot/dts/qcom/sm8150-sony-xperia-kumano.dtsi b/arch/arm64/boot/dts/qcom/sm8150-sony-xperia-kumano.dtsi
+index c958a8b167303..fd8c0097072ab 100644
+--- a/arch/arm64/boot/dts/qcom/sm8150-sony-xperia-kumano.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8150-sony-xperia-kumano.dtsi
+@@ -33,9 +33,10 @@
+ framebuffer: framebuffer@9c000000 {
+ compatible = "simple-framebuffer";
+ reg = <0 0x9c000000 0 0x2300000>;
+- width = <1644>;
+- height = <3840>;
+- stride = <(1644 * 4)>;
++ /* Griffin BL initializes in 2.5k mode, not 4k */
++ width = <1096>;
++ height = <2560>;
++ stride = <(1096 * 4)>;
+ format = "a8r8g8b8";
+ /*
+ * That's (going to be) a lot of clocks, but it's necessary due
+diff --git a/arch/arm64/boot/dts/qcom/sm8350-sony-xperia-sagami-pdx214.dts b/arch/arm64/boot/dts/qcom/sm8350-sony-xperia-sagami-pdx214.dts
+index cc650508dc2d6..e6824c8c2774d 100644
+--- a/arch/arm64/boot/dts/qcom/sm8350-sony-xperia-sagami-pdx214.dts
++++ b/arch/arm64/boot/dts/qcom/sm8350-sony-xperia-sagami-pdx214.dts
+@@ -17,3 +17,26 @@
+ height = <2520>;
+ stride = <(1080 * 4)>;
+ };
++
++&pm8350b_gpios {
++ gpio-line-names = "NC", /* GPIO_1 */
++ "NC",
++ "NC",
++ "NC",
++ "SNAPSHOT_N",
++ "NC",
++ "NC",
++ "FOCUS_N";
++};
++
++&pm8350c_gpios {
++ gpio-line-names = "FL_STROBE_TRIG_WIDE", /* GPIO_1 */
++ "FL_STROBE_TRIG_TELE",
++ "NC",
++ "NC",
++ "NC",
++ "RGBC_IR_PWR_EN",
++ "NC",
++ "NC",
++ "WIDEC_PWR_EN";
++};
+diff --git a/arch/arm64/boot/dts/qcom/sm8350-sony-xperia-sagami-pdx215.dts b/arch/arm64/boot/dts/qcom/sm8350-sony-xperia-sagami-pdx215.dts
+index c74c973a69d2d..c6f402c3ef352 100644
+--- a/arch/arm64/boot/dts/qcom/sm8350-sony-xperia-sagami-pdx215.dts
++++ b/arch/arm64/boot/dts/qcom/sm8350-sony-xperia-sagami-pdx215.dts
+@@ -12,6 +12,93 @@
+ compatible = "sony,pdx215-generic", "qcom,sm8350";
+ };
+
++&i2c13 {
++ pmic@75 {
++ compatible = "dlg,slg51000";
++ reg = <0x75>;
++ dlg,cs-gpios = <&pm8350b_gpios 1 GPIO_ACTIVE_HIGH>;
++
++ pinctrl-names = "default";
++ pinctrl-0 = <&cam_pwr_a_cs>;
++
++ regulators {
++ slg51000_a_ldo1: ldo1 {
++ regulator-name = "slg51000_a_ldo1";
++ regulator-min-microvolt = <2400000>;
++ regulator-max-microvolt = <3300000>;
++ };
++
++ slg51000_a_ldo2: ldo2 {
++ regulator-name = "slg51000_a_ldo2";
++ regulator-min-microvolt = <2400000>;
++ regulator-max-microvolt = <3300000>;
++ };
++
++ slg51000_a_ldo3: ldo3 {
++ regulator-name = "slg51000_a_ldo3";
++ regulator-min-microvolt = <1200000>;
++ regulator-max-microvolt = <3750000>;
++ };
++
++ slg51000_a_ldo4: ldo4 {
++ regulator-name = "slg51000_a_ldo4";
++ regulator-min-microvolt = <1200000>;
++ regulator-max-microvolt = <3750000>;
++ };
++
++ slg51000_a_ldo5: ldo5 {
++ regulator-name = "slg51000_a_ldo5";
++ regulator-min-microvolt = <500000>;
++ regulator-max-microvolt = <1200000>;
++ };
++
++ slg51000_a_ldo6: ldo6 {
++ regulator-name = "slg51000_a_ldo6";
++ regulator-min-microvolt = <500000>;
++ regulator-max-microvolt = <1200000>;
++ };
++
++ slg51000_a_ldo7: ldo7 {
++ regulator-name = "slg51000_a_ldo7";
++ regulator-min-microvolt = <1200000>;
++ regulator-max-microvolt = <3750000>;
++ };
++ };
++ };
++};
++
++&pm8350b_gpios {
++ gpio-line-names = "CAM_PWR_A_CS", /* GPIO_1 */
++ "NC",
++ "NC",
++ "NC",
++ "SNAPSHOT_N",
++ "CAM_PWR_LD_EN",
++ "NC",
++ "FOCUS_N";
++
++ cam_pwr_a_cs: cam-pwr-a-cs-state {
++ pins = "gpio1";
++ function = "normal";
++ qcom,drive-strength = <PMIC_GPIO_STRENGTH_LOW>;
++ power-source = <1>;
++ drive-push-pull;
++ output-high;
++ };
++};
++
++&pm8350c_gpios {
++ gpio-line-names = "FL_STROBE_TRIG_WIDE", /* GPIO_1 */
++ "FL_STROBE_TRIG_TELE",
++ "NC",
++ "WLC_TXPWR_EN",
++ "NC",
++ "RGBC_IR_PWR_EN",
++ "NC",
++ "NC",
++ "WIDEC_PWR_EN";
++};
++
+ &tlmm {
+ gpio-line-names = "APPS_I2C_0_SDA", /* GPIO_0 */
+ "APPS_I2C_0_SCL",
+diff --git a/arch/arm64/boot/dts/qcom/sm8350-sony-xperia-sagami.dtsi b/arch/arm64/boot/dts/qcom/sm8350-sony-xperia-sagami.dtsi
+index 1f2d660f8f86c..8df6ccbedfae7 100644
+--- a/arch/arm64/boot/dts/qcom/sm8350-sony-xperia-sagami.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8350-sony-xperia-sagami.dtsi
+@@ -3,6 +3,7 @@
+ * Copyright (c) 2021, Konrad Dybcio <konrad.dybcio@somainline.org>
+ */
+
++#include <dt-bindings/pinctrl/qcom,pmic-gpio.h>
+ #include <dt-bindings/regulator/qcom,rpmh-regulator.h>
+ #include "sm8350.dtsi"
+ #include "pm8350.dtsi"
+@@ -48,7 +49,35 @@
+ gpio-keys {
+ compatible = "gpio-keys";
+
+- /* For reasons still unknown, GAssist key and Camera Focus/Shutter don't work.. */
++ pinctrl-names = "default";
++ pinctrl-0 = <&focus_n &snapshot_n &vol_down_n &g_assist_n>;
++
++ key-camera-focus {
++ label = "Camera Focus";
++ linux,code = <KEY_CAMERA_FOCUS>;
++ gpios = <&pm8350b_gpios 8 GPIO_ACTIVE_LOW>;
++ debounce-interval = <15>;
++ linux,can-disable;
++ wakeup-source;
++ };
++
++ key-camera-snapshot {
++ label = "Camera Snapshot";
++ linux,code = <KEY_CAMERA>;
++ gpios = <&pm8350b_gpios 5 GPIO_ACTIVE_LOW>;
++ debounce-interval = <15>;
++ linux,can-disable;
++ wakeup-source;
++ };
++
++ key-google-assist {
++ label = "Google Assistant Key";
++ gpios = <&pm8350_gpios 9 GPIO_ACTIVE_LOW>;
++ linux,code = <KEY_LEFTMETA>;
++ debounce-interval = <15>;
++ linux,can-disable;
++ wakeup-source;
++ };
+
+ key-vol-down {
+ label = "Volume Down";
+@@ -56,7 +85,7 @@
+ gpios = <&pmk8350_gpios 3 GPIO_ACTIVE_LOW>;
+ debounce-interval = <15>;
+ linux,can-disable;
+- gpio-key,wakeup;
++ wakeup-source;
+ };
+ };
+
+@@ -506,7 +535,6 @@
+ clock-frequency = <100000>;
+
+ /* Qualcomm PM8008i/PM8008j (?) @ 8, 9, c, d */
+- /* Dialog SLG51000 CMIC @ 75 */
+ };
+
+ &i2c15 {
+@@ -534,6 +562,60 @@
+ firmware-name = "qcom/sm8350/Sony/sagami/modem.mbn";
+ };
+
++&pm8350_gpios {
++ gpio-line-names = "ASSIGN1_THERM", /* GPIO_1 */
++ "LCD_ID",
++ "SDR_MMW_THERM",
++ "RF_ID",
++ "NC",
++ "FP_LDO_EN",
++ "SP_ARI_PWR_ALARM",
++ "NC",
++ "G_ASSIST_N",
++ "PM8350_OPTION"; /* GPIO_10 */
++
++ g_assist_n: g-assist-n-state {
++ pins = "gpio9";
++ function = "normal";
++ power-source = <1>;
++ bias-pull-up;
++ input-enable;
++ };
++};
++
++&pm8350b_gpios {
++ snapshot_n: snapshot-n-state {
++ pins = "gpio5";
++ function = "normal";
++ power-source = <0>;
++ bias-pull-up;
++ input-enable;
++ };
++
++ focus_n: focus-n-state {
++ pins = "gpio8";
++ function = "normal";
++ power-source = <0>;
++ input-enable;
++ bias-pull-up;
++ };
++};
++
++&pmk8350_gpios {
++ gpio-line-names = "NC", /* GPIO_1 */
++ "NC",
++ "VOL_DOWN_N",
++ "PMK8350_OPTION";
++
++ vol_down_n: vol-down-n-state {
++ pins = "gpio3";
++ function = "normal";
++ power-source = <0>;
++ bias-pull-up;
++ input-enable;
++ };
++};
++
+ &pmk8350_rtc {
+ status = "okay";
+ };
+diff --git a/arch/arm64/boot/dts/qcom/sm8350.dtsi b/arch/arm64/boot/dts/qcom/sm8350.dtsi
+index fb3cd20a82b5e..646c64f0d1e28 100644
+--- a/arch/arm64/boot/dts/qcom/sm8350.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8350.dtsi
+@@ -1043,8 +1043,6 @@
+ interrupts = <GIC_SPI 604 IRQ_TYPE_LEVEL_HIGH>;
+ power-domains = <&rpmhpd SM8350_CX>;
+ operating-points-v2 = <&qup_opp_table_100mhz>;
+- #address-cells = <1>;
+- #size-cells = <0>;
+ status = "disabled";
+ };
+
+diff --git a/arch/arm64/boot/dts/qcom/sm8450-sony-xperia-nagara.dtsi b/arch/arm64/boot/dts/qcom/sm8450-sony-xperia-nagara.dtsi
+index 38256226d2297..e437e9a12069f 100644
+--- a/arch/arm64/boot/dts/qcom/sm8450-sony-xperia-nagara.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8450-sony-xperia-nagara.dtsi
+@@ -534,17 +534,17 @@
+ };
+
+ &remoteproc_adsp {
+- firmware-name = "qcom/sm8350/Sony/nagara/adsp.mbn";
++ firmware-name = "qcom/sm8450/Sony/nagara/adsp.mbn";
+ status = "okay";
+ };
+
+ &remoteproc_cdsp {
+- firmware-name = "qcom/sm8350/Sony/nagara/cdsp.mbn";
++ firmware-name = "qcom/sm8450/Sony/nagara/cdsp.mbn";
+ status = "okay";
+ };
+
+ &remoteproc_slpi {
+- firmware-name = "qcom/sm8350/Sony/nagara/slpi.mbn";
++ firmware-name = "qcom/sm8450/Sony/nagara/slpi.mbn";
+ status = "okay";
+ };
+
+diff --git a/arch/arm64/boot/dts/qcom/sm8450.dtsi b/arch/arm64/boot/dts/qcom/sm8450.dtsi
+index 570475040d95c..f57980a32b433 100644
+--- a/arch/arm64/boot/dts/qcom/sm8450.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8450.dtsi
+@@ -997,8 +997,6 @@
+ pinctrl-names = "default";
+ pinctrl-0 = <&qup_uart20_default>;
+ interrupts = <GIC_SPI 587 IRQ_TYPE_LEVEL_HIGH>;
+- #address-cells = <1>;
+- #size-cells = <0>;
+ status = "disabled";
+ };
+
+@@ -1391,8 +1389,6 @@
+ pinctrl-names = "default";
+ pinctrl-0 = <&qup_uart7_tx>, <&qup_uart7_rx>;
+ interrupts = <GIC_SPI 608 IRQ_TYPE_LEVEL_HIGH>;
+- #address-cells = <1>;
+- #size-cells = <0>;
+ status = "disabled";
+ };
+ };
+@@ -2263,7 +2259,7 @@
+ reg = <0 0x33b0000 0 0x2000>;
+ interrupts-extended = <&intc GIC_SPI 496 IRQ_TYPE_LEVEL_HIGH>,
+ <&intc GIC_SPI 520 IRQ_TYPE_LEVEL_HIGH>;
+- interrupt-names = "core", "wake";
++ interrupt-names = "core", "wakeup";
+
+ clocks = <&vamacro>;
+ clock-names = "iface";
+diff --git a/arch/arm64/boot/dts/renesas/beacon-renesom-baseboard.dtsi b/arch/arm64/boot/dts/renesas/beacon-renesom-baseboard.dtsi
+index 8166e3c1ff4e5..cafde91b4721b 100644
+--- a/arch/arm64/boot/dts/renesas/beacon-renesom-baseboard.dtsi
++++ b/arch/arm64/boot/dts/renesas/beacon-renesom-baseboard.dtsi
+@@ -437,20 +437,6 @@
+ };
+ };
+
+- /* 0 - lcd_reset */
+- /* 1 - lcd_pwr */
+- /* 2 - lcd_select */
+- /* 3 - backlight-enable */
+- /* 4 - Touch_shdwn */
+- /* 5 - LCD_H_pol */
+- /* 6 - lcd_V_pol */
+- gpio_exp1: gpio@20 {
+- compatible = "onnn,pca9654";
+- reg = <0x20>;
+- gpio-controller;
+- #gpio-cells = <2>;
+- };
+-
+ touchscreen@26 {
+ compatible = "ilitek,ili2117";
+ reg = <0x26>;
+@@ -482,6 +468,16 @@
+ };
+ };
+ };
++
++ gpio_exp1: gpio@70 {
++ compatible = "nxp,pca9538";
++ reg = <0x70>;
++ gpio-controller;
++ #gpio-cells = <2>;
++ gpio-line-names = "lcd_reset", "lcd_pwr", "lcd_select",
++ "backlight-enable", "Touch_shdwn",
++ "LCD_H_pol", "lcd_V_pol";
++ };
+ };
+
+ &lvds0 {
+diff --git a/arch/arm64/boot/dts/ti/k3-am62-main.dtsi b/arch/arm64/boot/dts/ti/k3-am62-main.dtsi
+index 072903649d6ee..ae1ec58117c35 100644
+--- a/arch/arm64/boot/dts/ti/k3-am62-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am62-main.dtsi
+@@ -413,7 +413,7 @@
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 141 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 172 0>;
++ clocks = <&k3_clks 141 0>;
+ status = "disabled";
+ };
+
+@@ -424,7 +424,7 @@
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 142 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 173 0>;
++ clocks = <&k3_clks 142 0>;
+ status = "disabled";
+ };
+
+@@ -435,7 +435,7 @@
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 143 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 174 0>;
++ clocks = <&k3_clks 143 0>;
+ status = "disabled";
+ };
+
+diff --git a/arch/arm64/boot/dts/ti/k3-j7200-common-proc-board.dts b/arch/arm64/boot/dts/ti/k3-j7200-common-proc-board.dts
+index 6240856e48631..0d39d6b8cc0ca 100644
+--- a/arch/arm64/boot/dts/ti/k3-j7200-common-proc-board.dts
++++ b/arch/arm64/boot/dts/ti/k3-j7200-common-proc-board.dts
+@@ -80,7 +80,7 @@
+ };
+ };
+
+-&wkup_pmx0 {
++&wkup_pmx2 {
+ mcu_cpsw_pins_default: mcu-cpsw-pins-default {
+ pinctrl-single,pins = <
+ J721E_WKUP_IOPAD(0x0068, PIN_OUTPUT, 0) /* MCU_RGMII1_TX_CTL */
+diff --git a/arch/arm64/boot/dts/ti/k3-j7200-mcu-wakeup.dtsi b/arch/arm64/boot/dts/ti/k3-j7200-mcu-wakeup.dtsi
+index fe669deba4896..de56a0165bd0c 100644
+--- a/arch/arm64/boot/dts/ti/k3-j7200-mcu-wakeup.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j7200-mcu-wakeup.dtsi
+@@ -56,7 +56,34 @@
+ wkup_pmx0: pinctrl@4301c000 {
+ compatible = "pinctrl-single";
+ /* Proxy 0 addressing */
+- reg = <0x00 0x4301c000 0x00 0x178>;
++ reg = <0x00 0x4301c000 0x00 0x34>;
++ #pinctrl-cells = <1>;
++ pinctrl-single,register-width = <32>;
++ pinctrl-single,function-mask = <0xffffffff>;
++ };
++
++ wkup_pmx1: pinctrl@0x4301c038 {
++ compatible = "pinctrl-single";
++ /* Proxy 0 addressing */
++ reg = <0x00 0x4301c038 0x00 0x8>;
++ #pinctrl-cells = <1>;
++ pinctrl-single,register-width = <32>;
++ pinctrl-single,function-mask = <0xffffffff>;
++ };
++
++ wkup_pmx2: pinctrl@0x4301c068 {
++ compatible = "pinctrl-single";
++ /* Proxy 0 addressing */
++ reg = <0x00 0x4301c068 0x00 0xec>;
++ #pinctrl-cells = <1>;
++ pinctrl-single,register-width = <32>;
++ pinctrl-single,function-mask = <0xffffffff>;
++ };
++
++ wkup_pmx3: pinctrl@0x4301c174 {
++ compatible = "pinctrl-single";
++ /* Proxy 0 addressing */
++ reg = <0x00 0x4301c174 0x00 0x20>;
+ #pinctrl-cells = <1>;
+ pinctrl-single,register-width = <32>;
+ pinctrl-single,function-mask = <0xffffffff>;
+diff --git a/arch/arm64/boot/dts/xilinx/zynqmp.dtsi b/arch/arm64/boot/dts/xilinx/zynqmp.dtsi
+index 4325cb8526edc..f92df478f0eea 100644
+--- a/arch/arm64/boot/dts/xilinx/zynqmp.dtsi
++++ b/arch/arm64/boot/dts/xilinx/zynqmp.dtsi
+@@ -858,6 +858,7 @@
+ clock-names = "bus_early", "ref";
+ iommus = <&smmu 0x860>;
+ snps,quirk-frame-length-adjustment = <0x20>;
++ snps,resume-hs-terminations;
+ /* dma-coherent; */
+ };
+ };
+@@ -884,6 +885,7 @@
+ clock-names = "bus_early", "ref";
+ iommus = <&smmu 0x861>;
+ snps,quirk-frame-length-adjustment = <0x20>;
++ snps,resume-hs-terminations;
+ /* dma-coherent; */
+ };
+ };
+diff --git a/arch/arm64/kernel/acpi.c b/arch/arm64/kernel/acpi.c
+index 378453faa87e1..dba8fcec7f33d 100644
+--- a/arch/arm64/kernel/acpi.c
++++ b/arch/arm64/kernel/acpi.c
+@@ -435,10 +435,6 @@ int acpi_ffh_address_space_arch_setup(void *handler_ctxt, void **region_ctxt)
+ enum arm_smccc_conduit conduit;
+ struct acpi_ffh_data *ffh_ctxt;
+
+- ffh_ctxt = kzalloc(sizeof(*ffh_ctxt), GFP_KERNEL);
+- if (!ffh_ctxt)
+- return -ENOMEM;
+-
+ if (arm_smccc_get_version() < ARM_SMCCC_VERSION_1_2)
+ return -EOPNOTSUPP;
+
+@@ -448,6 +444,10 @@ int acpi_ffh_address_space_arch_setup(void *handler_ctxt, void **region_ctxt)
+ return -EOPNOTSUPP;
+ }
+
++ ffh_ctxt = kzalloc(sizeof(*ffh_ctxt), GFP_KERNEL);
++ if (!ffh_ctxt)
++ return -ENOMEM;
++
+ if (conduit == SMCCC_CONDUIT_SMC) {
+ ffh_ctxt->invoke_ffh_fn = __arm_smccc_smc;
+ ffh_ctxt->invoke_ffh64_fn = arm_smccc_1_2_smc;
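
The reordering in this hunk is a leak fix: the early -EOPNOTSUPP returns used to fire after ffh_ctxt had already been allocated, orphaning it. A minimal user-space sketch of the resulting shape, with conduit_supported() and struct ctx as placeholder names standing in for the SMCCC checks and context (not the kernel API):

#include <errno.h>
#include <stdbool.h>
#include <stdlib.h>

struct ctx { int conduit; };

static bool conduit_supported(void) { return true; }	/* stand-in */

/* Validate cheap preconditions first, so the unsupported paths can
 * return with nothing to free; allocate only once setup is certain
 * to proceed (calloc() here, kzalloc() in the kernel). */
static int setup_region(void **out)
{
	struct ctx *c;

	if (!conduit_supported())
		return -EOPNOTSUPP;	/* nothing allocated yet */

	c = calloc(1, sizeof(*c));
	if (!c)
		return -ENOMEM;

	*out = c;
	return 0;
}
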
+diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
+index a77315b338e61..ee40dca9f28ef 100644
+--- a/arch/arm64/kernel/cpufeature.c
++++ b/arch/arm64/kernel/cpufeature.c
+@@ -2777,7 +2777,7 @@ static const struct arm64_cpu_capabilities arm64_elf_hwcaps[] = {
+ HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_EL1_FP_SHIFT, 4, FTR_SIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_FPHP),
+ HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_EL1_AdvSIMD_SHIFT, 4, FTR_SIGNED, 0, CAP_HWCAP, KERNEL_HWCAP_ASIMD),
+ HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_EL1_AdvSIMD_SHIFT, 4, FTR_SIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_ASIMDHP),
+- HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_EL1_DIT_SHIFT, 4, FTR_SIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_DIT),
++ HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_EL1_DIT_SHIFT, 4, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_DIT),
+ HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_EL1_DPB_SHIFT, 4, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_DCPOP),
+ HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_EL1_DPB_SHIFT, 4, FTR_UNSIGNED, 2, CAP_HWCAP, KERNEL_HWCAP_DCPODP),
+ HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_EL1_JSCVT_SHIFT, 4, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_JSCVT),
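
An aside on the one-line cpufeature.c change above: fields extracted as FTR_SIGNED have their top bit sign-extended, so a 4-bit value of 0b1000..0b1111 reads as negative and minimum-value matching misjudges it, while the DIT field is architecturally unsigned. A rough user-space illustration of the two readings (the helper name is made up):

#include <stdbool.h>
#include <stdint.h>

/* Decode a 4-bit ID register field. With is_signed, values with the
 * top bit set sign-extend (0b1111 becomes -1, i.e. "not implemented"),
 * so a check like "field >= 1" gives the wrong answer for an
 * architecturally unsigned field such as DIT. */
static int id_field(uint64_t reg, unsigned int shift, bool is_signed)
{
	int v = (reg >> shift) & 0xf;

	if (is_signed && (v & 0x8))
		v |= ~0xf;
	return v;
}
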
+diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
+index 8dd5a8fe64b4f..4aadcfb017545 100644
+--- a/arch/arm64/mm/copypage.c
++++ b/arch/arm64/mm/copypage.c
+@@ -22,7 +22,8 @@ void copy_highpage(struct page *to, struct page *from)
+ copy_page(kto, kfrom);
+
+ if (system_supports_mte() && page_mte_tagged(from)) {
+- page_kasan_tag_reset(to);
++ if (kasan_hw_tags_enabled())
++ page_kasan_tag_reset(to);
+ /* It's a new page, shouldn't have been tagged yet */
+ WARN_ON_ONCE(!try_page_mte_tagging(to));
+ mte_copy_page_tags(kto, kfrom);
+diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg
+index 184e58fd5631a..e8fb6684d7f3c 100644
+--- a/arch/arm64/tools/sysreg
++++ b/arch/arm64/tools/sysreg
+@@ -689,17 +689,17 @@ EndEnum
+ Enum 11:8 FPDP
+ 0b0000 NI
+ 0b0001 VFPv2
+- 0b0001 VFPv3
++ 0b0010 VFPv3
+ EndEnum
+ Enum 7:4 FPSP
+ 0b0000 NI
+ 0b0001 VFPv2
+- 0b0001 VFPv3
++ 0b0010 VFPv3
+ EndEnum
+ Enum 3:0 SIMDReg
+ 0b0000 NI
+ 0b0001 IMP_16x64
+- 0b0001 IMP_32x64
++ 0b0010 IMP_32x64
+ EndEnum
+ EndSysreg
+
+@@ -718,7 +718,7 @@ EndEnum
+ Enum 23:20 SIMDHP
+ 0b0000 NI
+ 0b0001 SIMDHP
+- 0b0001 SIMDHP_FLOAT
++ 0b0010 SIMDHP_FLOAT
+ EndEnum
+ Enum 19:16 SIMDSP
+ 0b0000 NI
+diff --git a/arch/loongarch/net/bpf_jit.c b/arch/loongarch/net/bpf_jit.c
+index c4b1947ebf768..288003a9f0cae 100644
+--- a/arch/loongarch/net/bpf_jit.c
++++ b/arch/loongarch/net/bpf_jit.c
+@@ -841,7 +841,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, bool ext
+ if (ret < 0)
+ return ret;
+
+- move_imm(ctx, t1, func_addr, is32);
++ move_addr(ctx, t1, func_addr);
+ emit_insn(ctx, jirl, t1, LOONGARCH_GPR_RA, 0);
+ move_reg(ctx, regmap[BPF_REG_0], LOONGARCH_GPR_A0);
+ break;
+diff --git a/arch/loongarch/net/bpf_jit.h b/arch/loongarch/net/bpf_jit.h
+index ca708024fdd3e..c335dc4eed370 100644
+--- a/arch/loongarch/net/bpf_jit.h
++++ b/arch/loongarch/net/bpf_jit.h
+@@ -82,6 +82,27 @@ static inline void emit_sext_32(struct jit_ctx *ctx, enum loongarch_gpr reg, boo
+ emit_insn(ctx, addiw, reg, reg, 0);
+ }
+
++static inline void move_addr(struct jit_ctx *ctx, enum loongarch_gpr rd, u64 addr)
++{
++ u64 imm_11_0, imm_31_12, imm_51_32, imm_63_52;
++
++ /* lu12iw rd, imm_31_12 */
++ imm_31_12 = (addr >> 12) & 0xfffff;
++ emit_insn(ctx, lu12iw, rd, imm_31_12);
++
++ /* ori rd, rd, imm_11_0 */
++ imm_11_0 = addr & 0xfff;
++ emit_insn(ctx, ori, rd, rd, imm_11_0);
++
++ /* lu32id rd, imm_51_32 */
++ imm_51_32 = (addr >> 32) & 0xfffff;
++ emit_insn(ctx, lu32id, rd, imm_51_32);
++
++ /* lu52id rd, rd, imm_63_52 */
++ imm_63_52 = (addr >> 52) & 0xfff;
++ emit_insn(ctx, lu52id, rd, rd, imm_63_52);
++}
++
+ static inline void move_imm(struct jit_ctx *ctx, enum loongarch_gpr rd, long imm, bool is32)
+ {
+ long imm_11_0, imm_31_12, imm_51_32, imm_63_52, imm_51_0, imm_51_31;
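
Unlike move_imm(), which varies its instruction count with the value, the new move_addr() always emits the same four instructions, so a patched call-target load has a fixed size. The field split is easy to check outside the kernel; the sketch below mirrors the helper's decomposition (user-space C, illustrative only). lu12i.w sign-extends bit 31, which is harmless here because lu32i.d and lu52i.d subsequently overwrite the upper bits:

#include <stdint.h>
#include <stdio.h>

/* Mirrors move_addr(): a 64-bit address split into the immediates of
 * lu12i.w (bits 31:12), ori (11:0), lu32i.d (51:32), lu52i.d (63:52). */
static void split_addr(uint64_t addr)
{
	uint64_t hi20  = (addr >> 12) & 0xfffff;	/* lu12i.w */
	uint64_t lo12  = addr & 0xfff;			/* ori     */
	uint64_t mid20 = (addr >> 32) & 0xfffff;	/* lu32i.d */
	uint64_t top12 = (addr >> 52) & 0xfff;		/* lu52i.d */

	printf("hi20=%#llx lo12=%#llx mid20=%#llx top12=%#llx\n",
	       (unsigned long long)hi20, (unsigned long long)lo12,
	       (unsigned long long)mid20, (unsigned long long)top12);
}
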
+diff --git a/arch/m68k/68000/entry.S b/arch/m68k/68000/entry.S
+index 997b549330156..7d63e2f1555a0 100644
+--- a/arch/m68k/68000/entry.S
++++ b/arch/m68k/68000/entry.S
+@@ -45,6 +45,8 @@ do_trace:
+ jbsr syscall_trace_enter
+ RESTORE_SWITCH_STACK
+ addql #4,%sp
++ addql #1,%d0
++ jeq ret_from_exception
+ movel %sp@(PT_OFF_ORIG_D0),%d1
+ movel #-ENOSYS,%d0
+ cmpl #NR_syscalls,%d1
+diff --git a/arch/m68k/Kconfig.devices b/arch/m68k/Kconfig.devices
+index 6a87b4a5fcac2..e6e3efac18407 100644
+--- a/arch/m68k/Kconfig.devices
++++ b/arch/m68k/Kconfig.devices
+@@ -19,6 +19,7 @@ config HEARTBEAT
+ # We have a dedicated heartbeat LED. :-)
+ config PROC_HARDWARE
+ bool "/proc/hardware support"
++ depends on PROC_FS
+ help
+ Say Y here to support the /proc/hardware file, which gives you
+ access to information about the machine you're running on,
+diff --git a/arch/m68k/coldfire/entry.S b/arch/m68k/coldfire/entry.S
+index 9f337c70243a3..35104c5417ff4 100644
+--- a/arch/m68k/coldfire/entry.S
++++ b/arch/m68k/coldfire/entry.S
+@@ -90,6 +90,8 @@ ENTRY(system_call)
+ jbsr syscall_trace_enter
+ RESTORE_SWITCH_STACK
+ addql #4,%sp
++ addql #1,%d0
++ jeq ret_from_exception
+ movel %d3,%a0
+ jbsr %a0@
+ movel %d0,%sp@(PT_OFF_D0) /* save the return value */
+diff --git a/arch/m68k/kernel/entry.S b/arch/m68k/kernel/entry.S
+index 18f278bdbd218..42879e6eb651d 100644
+--- a/arch/m68k/kernel/entry.S
++++ b/arch/m68k/kernel/entry.S
+@@ -184,9 +184,12 @@ do_trace_entry:
+ jbsr syscall_trace_enter
+ RESTORE_SWITCH_STACK
+ addql #4,%sp
++ addql #1,%d0 | optimization for cmpil #-1,%d0
++ jeq ret_from_syscall
+ movel %sp@(PT_OFF_ORIG_D0),%d0
+ cmpl #NR_syscalls,%d0
+ jcs syscall
++ jra ret_from_syscall
+ badsys:
+ movel #-ENOSYS,%sp@(PT_OFF_D0)
+ jra ret_from_syscall
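
All three m68k entry paths gain an equivalent check after syscall_trace_enter(): a -1 return (ptrace or seccomp denying the syscall) must skip dispatch entirely, and `addql #1,%d0` followed by `jeq` is simply a shorter encoding of the compare against -1, as the kernel/entry.S comment notes. Paraphrased in C (illustrative, not kernel source):

/* -1 + 1 == 0, so one addql turns "was it -1?" into a plain jeq. */
static int maybe_dispatch(long trace_ret)
{
	if (trace_ret == -1)	/* tracer denied the syscall */
		return 0;	/* straight back to the return path */
	/* ... bounds-check the number and call the handler ... */
	return 1;
}
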
+diff --git a/arch/mips/boot/dts/ingenic/ci20.dts b/arch/mips/boot/dts/ingenic/ci20.dts
+index f38c39572a9e8..8f21d2304737c 100644
+--- a/arch/mips/boot/dts/ingenic/ci20.dts
++++ b/arch/mips/boot/dts/ingenic/ci20.dts
+@@ -113,7 +113,7 @@
+ regulator-min-microvolt = <5000000>;
+ regulator-max-microvolt = <5000000>;
+
+- gpio = <&gpf 14 GPIO_ACTIVE_LOW>;
++ gpio = <&gpf 15 GPIO_ACTIVE_LOW>;
+ enable-active-high;
+ };
+ };
+diff --git a/arch/mips/include/asm/syscall.h b/arch/mips/include/asm/syscall.h
+index 25fa651c937d5..ebdf4d910af2f 100644
+--- a/arch/mips/include/asm/syscall.h
++++ b/arch/mips/include/asm/syscall.h
+@@ -38,7 +38,7 @@ static inline bool mips_syscall_is_indirect(struct task_struct *task,
+ static inline long syscall_get_nr(struct task_struct *task,
+ struct pt_regs *regs)
+ {
+- return current_thread_info()->syscall;
++ return task_thread_info(task)->syscall;
+ }
+
+ static inline void mips_syscall_update_nr(struct task_struct *task,
+diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile
+index dc4cbf0a5ca95..4fd630efe39d3 100644
+--- a/arch/powerpc/Makefile
++++ b/arch/powerpc/Makefile
+@@ -90,7 +90,7 @@ aflags-$(CONFIG_CPU_LITTLE_ENDIAN) += -mlittle-endian
+
+ ifeq ($(HAS_BIARCH),y)
+ KBUILD_CFLAGS += -m$(BITS)
+-KBUILD_AFLAGS += -m$(BITS) -Wl,-a$(BITS)
++KBUILD_AFLAGS += -m$(BITS)
+ KBUILD_LDFLAGS += -m elf$(BITS)$(LDEMULATION)
+ endif
+
+diff --git a/arch/powerpc/mm/book3s64/radix_tlb.c b/arch/powerpc/mm/book3s64/radix_tlb.c
+index 4e29b619578c1..6d7a1ef723e69 100644
+--- a/arch/powerpc/mm/book3s64/radix_tlb.c
++++ b/arch/powerpc/mm/book3s64/radix_tlb.c
+@@ -1179,15 +1179,12 @@ static inline void __radix__flush_tlb_range(struct mm_struct *mm,
+ }
+ }
+ } else {
+- bool hflush = false;
++ bool hflush;
+ unsigned long hstart, hend;
+
+- if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)) {
+- hstart = (start + PMD_SIZE - 1) & PMD_MASK;
+- hend = end & PMD_MASK;
+- if (hstart < hend)
+- hflush = true;
+- }
++ hstart = (start + PMD_SIZE - 1) & PMD_MASK;
++ hend = end & PMD_MASK;
++ hflush = IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) && hstart < hend;
+
+ if (type == FLUSH_TYPE_LOCAL) {
+ asm volatile("ptesync": : :"memory");
+diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
+index e2b656043abf3..ee0d39b267946 100644
+--- a/arch/riscv/Kconfig
++++ b/arch/riscv/Kconfig
+@@ -138,7 +138,7 @@ config RISCV
+ select HAVE_DYNAMIC_FTRACE_WITH_REGS if HAVE_DYNAMIC_FTRACE
+ select HAVE_FTRACE_MCOUNT_RECORD if !XIP_KERNEL
+ select HAVE_FUNCTION_GRAPH_TRACER
+- select HAVE_FUNCTION_TRACER if !XIP_KERNEL
++ select HAVE_FUNCTION_TRACER if !XIP_KERNEL && !PREEMPTION
+
+ config ARCH_MMAP_RND_BITS_MIN
+ default 18 if 64BIT
+diff --git a/arch/riscv/Makefile b/arch/riscv/Makefile
+index 82153960ac001..56b9219981665 100644
+--- a/arch/riscv/Makefile
++++ b/arch/riscv/Makefile
+@@ -11,7 +11,11 @@ LDFLAGS_vmlinux :=
+ ifeq ($(CONFIG_DYNAMIC_FTRACE),y)
+ LDFLAGS_vmlinux := --no-relax
+ KBUILD_CPPFLAGS += -DCC_USING_PATCHABLE_FUNCTION_ENTRY
+- CC_FLAGS_FTRACE := -fpatchable-function-entry=8
++ifeq ($(CONFIG_RISCV_ISA_C),y)
++ CC_FLAGS_FTRACE := -fpatchable-function-entry=4
++else
++ CC_FLAGS_FTRACE := -fpatchable-function-entry=2
++endif
+ endif
+
+ ifeq ($(CONFIG_CMODEL_MEDLOW),y)
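
The nop counts differ because -fpatchable-function-entry is measured in instructions, not bytes, while the detour being patched in (see the ftrace changes that follow) is a fixed 8-byte auipc+jalr pair: with the compressed ISA the compiler pads with 2-byte c.nop, without it with 4-byte nop. A trivial compile-time restatement of the arithmetic:

/* Both variants reserve the same 8-byte window (MCOUNT_INSN_SIZE):
 *   CONFIG_RISCV_ISA_C: 4 nops x 2 bytes (c.nop) = 8
 *   otherwise:          2 nops x 4 bytes (nop)   = 8
 */
_Static_assert(4 * 2 == 8, "compressed nops cover the patch window");
_Static_assert(2 * 4 == 8, "full-size nops cover the patch window");
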
+diff --git a/arch/riscv/include/asm/ftrace.h b/arch/riscv/include/asm/ftrace.h
+index 04dad33800418..9e73922e1e2e5 100644
+--- a/arch/riscv/include/asm/ftrace.h
++++ b/arch/riscv/include/asm/ftrace.h
+@@ -42,6 +42,14 @@ struct dyn_arch_ftrace {
+ * 2) jalr: setting low-12 offset to ra, jump to ra, and set ra to
+ * return address (original pc + 4)
+ *
++ *<ftrace enable>:
++ * 0: auipc t0/ra, 0x?
++ * 4: jalr t0/ra, ?(t0/ra)
++ *
++ *<ftrace disable>:
++ * 0: nop
++ * 4: nop
++ *
+ * Dynamic ftrace generates probes to call sites, so we must deal with
+ * both auipc and jalr at the same time.
+ */
+@@ -52,25 +60,43 @@ struct dyn_arch_ftrace {
+ #define AUIPC_OFFSET_MASK (0xfffff000)
+ #define AUIPC_PAD (0x00001000)
+ #define JALR_SHIFT 20
+-#define JALR_BASIC (0x000080e7)
+-#define AUIPC_BASIC (0x00000097)
++#define JALR_RA (0x000080e7)
++#define AUIPC_RA (0x00000097)
++#define JALR_T0 (0x000282e7)
++#define AUIPC_T0 (0x00000297)
+ #define NOP4 (0x00000013)
+
+-#define make_call(caller, callee, call) \
++#define to_jalr_t0(offset) \
++ (((offset & JALR_OFFSET_MASK) << JALR_SHIFT) | JALR_T0)
++
++#define to_auipc_t0(offset) \
++ ((offset & JALR_SIGN_MASK) ? \
++ (((offset & AUIPC_OFFSET_MASK) + AUIPC_PAD) | AUIPC_T0) : \
++ ((offset & AUIPC_OFFSET_MASK) | AUIPC_T0))
++
++#define make_call_t0(caller, callee, call) \
+ do { \
+- call[0] = to_auipc_insn((unsigned int)((unsigned long)callee - \
+- (unsigned long)caller)); \
+- call[1] = to_jalr_insn((unsigned int)((unsigned long)callee - \
+- (unsigned long)caller)); \
++ unsigned int offset = \
++ (unsigned long) callee - (unsigned long) caller; \
++ call[0] = to_auipc_t0(offset); \
++ call[1] = to_jalr_t0(offset); \
+ } while (0)
+
+-#define to_jalr_insn(offset) \
+- (((offset & JALR_OFFSET_MASK) << JALR_SHIFT) | JALR_BASIC)
++#define to_jalr_ra(offset) \
++ (((offset & JALR_OFFSET_MASK) << JALR_SHIFT) | JALR_RA)
+
+-#define to_auipc_insn(offset) \
++#define to_auipc_ra(offset) \
+ ((offset & JALR_SIGN_MASK) ? \
+- (((offset & AUIPC_OFFSET_MASK) + AUIPC_PAD) | AUIPC_BASIC) : \
+- ((offset & AUIPC_OFFSET_MASK) | AUIPC_BASIC))
++ (((offset & AUIPC_OFFSET_MASK) + AUIPC_PAD) | AUIPC_RA) : \
++ ((offset & AUIPC_OFFSET_MASK) | AUIPC_RA))
++
++#define make_call_ra(caller, callee, call) \
++do { \
++ unsigned int offset = \
++ (unsigned long) callee - (unsigned long) caller; \
++ call[0] = to_auipc_ra(offset); \
++ call[1] = to_jalr_ra(offset); \
++} while (0)
+
+ /*
+ * Let auipc+jalr be the basic *mcount unit*, so we make it 8 bytes here.
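
The point of the _t0 variants above: the patched call now clobbers t0, a free temporary at function entry, instead of ra, so the traced function's real return address stays in ra and the 16-byte save/restore detour that ftrace.c used to build around the old ra-based pair (see below) shrinks to just these 8 bytes. The encoding is self-contained enough to exercise in user space; this mirrors make_call_t0() and is not kernel code:

#include <stdint.h>

#define JALR_SIGN_MASK		0x00000800
#define JALR_OFFSET_MASK	0x00000fff
#define AUIPC_OFFSET_MASK	0xfffff000
#define AUIPC_PAD		0x00001000
#define JALR_SHIFT		20
#define JALR_T0			0x000282e7
#define AUIPC_T0		0x00000297

/* auipc takes the upper 20 bits of the pc-relative offset, padded by
 * one page when jalr's 12-bit immediate would sign-extend negative,
 * so auipc + jalr still lands on the intended target. */
static void encode_call_t0(uint32_t offset, uint32_t insn[2])
{
	insn[0] = (offset & JALR_SIGN_MASK) ?
		  (((offset & AUIPC_OFFSET_MASK) + AUIPC_PAD) | AUIPC_T0) :
		  ((offset & AUIPC_OFFSET_MASK) | AUIPC_T0);
	insn[1] = ((offset & JALR_OFFSET_MASK) << JALR_SHIFT) | JALR_T0;
}
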
+diff --git a/arch/riscv/include/asm/jump_label.h b/arch/riscv/include/asm/jump_label.h
+index 6d58bbb5da467..14a5ea8d8ef0f 100644
+--- a/arch/riscv/include/asm/jump_label.h
++++ b/arch/riscv/include/asm/jump_label.h
+@@ -18,6 +18,7 @@ static __always_inline bool arch_static_branch(struct static_key * const key,
+ const bool branch)
+ {
+ asm_volatile_goto(
++ " .align 2 \n\t"
+ " .option push \n\t"
+ " .option norelax \n\t"
+ " .option norvc \n\t"
+@@ -39,6 +40,7 @@ static __always_inline bool arch_static_branch_jump(struct static_key * const ke
+ const bool branch)
+ {
+ asm_volatile_goto(
++ " .align 2 \n\t"
+ " .option push \n\t"
+ " .option norelax \n\t"
+ " .option norvc \n\t"
+diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
+index 3e01f4f3ab08a..6da0f3285dd2e 100644
+--- a/arch/riscv/include/asm/pgtable.h
++++ b/arch/riscv/include/asm/pgtable.h
+@@ -415,7 +415,7 @@ static inline void update_mmu_cache(struct vm_area_struct *vma,
+ * Relying on flush_tlb_fix_spurious_fault would suffice, but
+ * the extra traps reduce performance. So, eagerly SFENCE.VMA.
+ */
+- flush_tlb_page(vma, address);
++ local_flush_tlb_page(address);
+ }
+
+ #define __HAVE_ARCH_UPDATE_MMU_TLB
+diff --git a/arch/riscv/include/asm/thread_info.h b/arch/riscv/include/asm/thread_info.h
+index 67322f878e0d7..f704c8dd57e04 100644
+--- a/arch/riscv/include/asm/thread_info.h
++++ b/arch/riscv/include/asm/thread_info.h
+@@ -43,6 +43,7 @@
+ #ifndef __ASSEMBLY__
+
+ extern long shadow_stack[SHADOW_OVERFLOW_STACK_SIZE / sizeof(long)];
++extern unsigned long spin_shadow_stack;
+
+ #include <asm/processor.h>
+ #include <asm/csr.h>
+diff --git a/arch/riscv/kernel/ftrace.c b/arch/riscv/kernel/ftrace.c
+index 2086f65857737..5bff37af4770b 100644
+--- a/arch/riscv/kernel/ftrace.c
++++ b/arch/riscv/kernel/ftrace.c
+@@ -55,12 +55,15 @@ static int ftrace_check_current_call(unsigned long hook_pos,
+ }
+
+ static int __ftrace_modify_call(unsigned long hook_pos, unsigned long target,
+- bool enable)
++ bool enable, bool ra)
+ {
+ unsigned int call[2];
+ unsigned int nops[2] = {NOP4, NOP4};
+
+- make_call(hook_pos, target, call);
++ if (ra)
++ make_call_ra(hook_pos, target, call);
++ else
++ make_call_t0(hook_pos, target, call);
+
+ /* Replace the auipc-jalr pair at once. Return -EPERM on write error. */
+ if (patch_text_nosync
+@@ -70,42 +73,13 @@ static int __ftrace_modify_call(unsigned long hook_pos, unsigned long target,
+ return 0;
+ }
+
+-/*
+- * Put 5 instructions with 16 bytes at the front of function within
+- * patchable function entry nops' area.
+- *
+- * 0: REG_S ra, -SZREG(sp)
+- * 1: auipc ra, 0x?
+- * 2: jalr -?(ra)
+- * 3: REG_L ra, -SZREG(sp)
+- *
+- * So the opcodes is:
+- * 0: 0xfe113c23 (sd)/0xfe112e23 (sw)
+- * 1: 0x???????? -> auipc
+- * 2: 0x???????? -> jalr
+- * 3: 0xff813083 (ld)/0xffc12083 (lw)
+- */
+-#if __riscv_xlen == 64
+-#define INSN0 0xfe113c23
+-#define INSN3 0xff813083
+-#elif __riscv_xlen == 32
+-#define INSN0 0xfe112e23
+-#define INSN3 0xffc12083
+-#endif
+-
+-#define FUNC_ENTRY_SIZE 16
+-#define FUNC_ENTRY_JMP 4
+-
+ int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
+ {
+- unsigned int call[4] = {INSN0, 0, 0, INSN3};
+- unsigned long target = addr;
+- unsigned long caller = rec->ip + FUNC_ENTRY_JMP;
++ unsigned int call[2];
+
+- call[1] = to_auipc_insn((unsigned int)(target - caller));
+- call[2] = to_jalr_insn((unsigned int)(target - caller));
++ make_call_t0(rec->ip, addr, call);
+
+- if (patch_text_nosync((void *)rec->ip, call, FUNC_ENTRY_SIZE))
++ if (patch_text_nosync((void *)rec->ip, call, MCOUNT_INSN_SIZE))
+ return -EPERM;
+
+ return 0;
+@@ -114,15 +88,14 @@ int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
+ int ftrace_make_nop(struct module *mod, struct dyn_ftrace *rec,
+ unsigned long addr)
+ {
+- unsigned int nops[4] = {NOP4, NOP4, NOP4, NOP4};
++ unsigned int nops[2] = {NOP4, NOP4};
+
+- if (patch_text_nosync((void *)rec->ip, nops, FUNC_ENTRY_SIZE))
++ if (patch_text_nosync((void *)rec->ip, nops, MCOUNT_INSN_SIZE))
+ return -EPERM;
+
+ return 0;
+ }
+
+-
+ /*
+ * This is called early on, and isn't wrapped by
+ * ftrace_arch_code_modify_{prepare,post_process}() and therefore doesn't hold
+@@ -144,10 +117,10 @@ int ftrace_init_nop(struct module *mod, struct dyn_ftrace *rec)
+ int ftrace_update_ftrace_func(ftrace_func_t func)
+ {
+ int ret = __ftrace_modify_call((unsigned long)&ftrace_call,
+- (unsigned long)func, true);
++ (unsigned long)func, true, true);
+ if (!ret) {
+ ret = __ftrace_modify_call((unsigned long)&ftrace_regs_call,
+- (unsigned long)func, true);
++ (unsigned long)func, true, true);
+ }
+
+ return ret;
+@@ -159,16 +132,16 @@ int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr,
+ unsigned long addr)
+ {
+ unsigned int call[2];
+- unsigned long caller = rec->ip + FUNC_ENTRY_JMP;
++ unsigned long caller = rec->ip;
+ int ret;
+
+- make_call(caller, old_addr, call);
++ make_call_t0(caller, old_addr, call);
+ ret = ftrace_check_current_call(caller, call);
+
+ if (ret)
+ return ret;
+
+- return __ftrace_modify_call(caller, addr, true);
++ return __ftrace_modify_call(caller, addr, true, false);
+ }
+ #endif
+
+@@ -203,12 +176,12 @@ int ftrace_enable_ftrace_graph_caller(void)
+ int ret;
+
+ ret = __ftrace_modify_call((unsigned long)&ftrace_graph_call,
+- (unsigned long)&prepare_ftrace_return, true);
++ (unsigned long)&prepare_ftrace_return, true, true);
+ if (ret)
+ return ret;
+
+ return __ftrace_modify_call((unsigned long)&ftrace_graph_regs_call,
+- (unsigned long)&prepare_ftrace_return, true);
++ (unsigned long)&prepare_ftrace_return, true, true);
+ }
+
+ int ftrace_disable_ftrace_graph_caller(void)
+@@ -216,12 +189,12 @@ int ftrace_disable_ftrace_graph_caller(void)
+ int ret;
+
+ ret = __ftrace_modify_call((unsigned long)&ftrace_graph_call,
+- (unsigned long)&prepare_ftrace_return, false);
++ (unsigned long)&prepare_ftrace_return, false, true);
+ if (ret)
+ return ret;
+
+ return __ftrace_modify_call((unsigned long)&ftrace_graph_regs_call,
+- (unsigned long)&prepare_ftrace_return, false);
++ (unsigned long)&prepare_ftrace_return, false, true);
+ }
+ #endif /* CONFIG_DYNAMIC_FTRACE */
+ #endif /* CONFIG_FUNCTION_GRAPH_TRACER */
+diff --git a/arch/riscv/kernel/mcount-dyn.S b/arch/riscv/kernel/mcount-dyn.S
+index d171eca623b6f..125de818d1bab 100644
+--- a/arch/riscv/kernel/mcount-dyn.S
++++ b/arch/riscv/kernel/mcount-dyn.S
+@@ -13,8 +13,8 @@
+
+ .text
+
+-#define FENTRY_RA_OFFSET 12
+-#define ABI_SIZE_ON_STACK 72
++#define FENTRY_RA_OFFSET 8
++#define ABI_SIZE_ON_STACK 80
+ #define ABI_A0 0
+ #define ABI_A1 8
+ #define ABI_A2 16
+@@ -23,10 +23,10 @@
+ #define ABI_A5 40
+ #define ABI_A6 48
+ #define ABI_A7 56
+-#define ABI_RA 64
++#define ABI_T0 64
++#define ABI_RA 72
+
+ .macro SAVE_ABI
+- addi sp, sp, -SZREG
+ addi sp, sp, -ABI_SIZE_ON_STACK
+
+ REG_S a0, ABI_A0(sp)
+@@ -37,6 +37,7 @@
+ REG_S a5, ABI_A5(sp)
+ REG_S a6, ABI_A6(sp)
+ REG_S a7, ABI_A7(sp)
++ REG_S t0, ABI_T0(sp)
+ REG_S ra, ABI_RA(sp)
+ .endm
+
+@@ -49,24 +50,18 @@
+ REG_L a5, ABI_A5(sp)
+ REG_L a6, ABI_A6(sp)
+ REG_L a7, ABI_A7(sp)
++ REG_L t0, ABI_T0(sp)
+ REG_L ra, ABI_RA(sp)
+
+ addi sp, sp, ABI_SIZE_ON_STACK
+- addi sp, sp, SZREG
+ .endm
+
+ #ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
+ .macro SAVE_ALL
+- addi sp, sp, -SZREG
+ addi sp, sp, -PT_SIZE_ON_STACK
+
+- REG_S x1, PT_EPC(sp)
+- addi sp, sp, PT_SIZE_ON_STACK
+- REG_L x1, (sp)
+- addi sp, sp, -PT_SIZE_ON_STACK
++ REG_S t0, PT_EPC(sp)
+ REG_S x1, PT_RA(sp)
+- REG_L x1, PT_EPC(sp)
+-
+ REG_S x2, PT_SP(sp)
+ REG_S x3, PT_GP(sp)
+ REG_S x4, PT_TP(sp)
+@@ -100,15 +95,11 @@
+ .endm
+
+ .macro RESTORE_ALL
++ REG_L t0, PT_EPC(sp)
+ REG_L x1, PT_RA(sp)
+- addi sp, sp, PT_SIZE_ON_STACK
+- REG_S x1, (sp)
+- addi sp, sp, -PT_SIZE_ON_STACK
+- REG_L x1, PT_EPC(sp)
+ REG_L x2, PT_SP(sp)
+ REG_L x3, PT_GP(sp)
+ REG_L x4, PT_TP(sp)
+- REG_L x5, PT_T0(sp)
+ REG_L x6, PT_T1(sp)
+ REG_L x7, PT_T2(sp)
+ REG_L x8, PT_S0(sp)
+@@ -137,17 +128,16 @@
+ REG_L x31, PT_T6(sp)
+
+ addi sp, sp, PT_SIZE_ON_STACK
+- addi sp, sp, SZREG
+ .endm
+ #endif /* CONFIG_DYNAMIC_FTRACE_WITH_REGS */
+
+ ENTRY(ftrace_caller)
+ SAVE_ABI
+
+- addi a0, ra, -FENTRY_RA_OFFSET
++ addi a0, t0, -FENTRY_RA_OFFSET
+ la a1, function_trace_op
+ REG_L a2, 0(a1)
+- REG_L a1, ABI_SIZE_ON_STACK(sp)
++ mv a1, ra
+ mv a3, sp
+
+ ftrace_call:
+@@ -155,8 +145,8 @@ ftrace_call:
+ call ftrace_stub
+
+ #ifdef CONFIG_FUNCTION_GRAPH_TRACER
+- addi a0, sp, ABI_SIZE_ON_STACK
+- REG_L a1, ABI_RA(sp)
++ addi a0, sp, ABI_RA
++ REG_L a1, ABI_T0(sp)
+ addi a1, a1, -FENTRY_RA_OFFSET
+ #ifdef HAVE_FUNCTION_GRAPH_FP_TEST
+ mv a2, s0
+@@ -166,17 +156,17 @@ ftrace_graph_call:
+ call ftrace_stub
+ #endif
+ RESTORE_ABI
+- ret
++ jr t0
+ ENDPROC(ftrace_caller)
+
+ #ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
+ ENTRY(ftrace_regs_caller)
+ SAVE_ALL
+
+- addi a0, ra, -FENTRY_RA_OFFSET
++ addi a0, t0, -FENTRY_RA_OFFSET
+ la a1, function_trace_op
+ REG_L a2, 0(a1)
+- REG_L a1, PT_SIZE_ON_STACK(sp)
++ mv a1, ra
+ mv a3, sp
+
+ ftrace_regs_call:
+@@ -196,6 +186,6 @@ ftrace_graph_regs_call:
+ #endif
+
+ RESTORE_ALL
+- ret
++ jr t0
+ ENDPROC(ftrace_regs_caller)
+ #endif /* CONFIG_DYNAMIC_FTRACE_WITH_REGS */
+diff --git a/arch/riscv/kernel/time.c b/arch/riscv/kernel/time.c
+index 8217b0f67c6cb..1cf21db4fcc77 100644
+--- a/arch/riscv/kernel/time.c
++++ b/arch/riscv/kernel/time.c
+@@ -5,6 +5,7 @@
+ */
+
+ #include <linux/of_clk.h>
++#include <linux/clockchips.h>
+ #include <linux/clocksource.h>
+ #include <linux/delay.h>
+ #include <asm/sbi.h>
+@@ -29,6 +30,8 @@ void __init time_init(void)
+
+ of_clk_init(NULL);
+ timer_probe();
++
++ tick_setup_hrtimer_broadcast();
+ }
+
+ void clocksource_arch_init(struct clocksource *cs)
+diff --git a/arch/riscv/kernel/traps.c b/arch/riscv/kernel/traps.c
+index 549bde5c970a1..70c98ce23be24 100644
+--- a/arch/riscv/kernel/traps.c
++++ b/arch/riscv/kernel/traps.c
+@@ -34,10 +34,11 @@ void die(struct pt_regs *regs, const char *str)
+ static int die_counter;
+ int ret;
+ long cause;
++ unsigned long flags;
+
+ oops_enter();
+
+- spin_lock_irq(&die_lock);
++ spin_lock_irqsave(&die_lock, flags);
+ console_verbose();
+ bust_spinlocks(1);
+
+@@ -54,7 +55,7 @@ void die(struct pt_regs *regs, const char *str)
+
+ bust_spinlocks(0);
+ add_taint(TAINT_DIE, LOCKDEP_NOW_UNRELIABLE);
+- spin_unlock_irq(&die_lock);
++ spin_unlock_irqrestore(&die_lock, flags);
+ oops_exit();
+
+ if (in_interrupt())
+diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
+index d86f7cebd4a7e..eb0774d9c03b1 100644
+--- a/arch/riscv/mm/fault.c
++++ b/arch/riscv/mm/fault.c
+@@ -267,10 +267,12 @@ asmlinkage void do_page_fault(struct pt_regs *regs)
+ if (user_mode(regs))
+ flags |= FAULT_FLAG_USER;
+
+- if (!user_mode(regs) && addr < TASK_SIZE &&
+- unlikely(!(regs->status & SR_SUM)))
+- die_kernel_fault("access to user memory without uaccess routines",
+- addr, regs);
++ if (!user_mode(regs) && addr < TASK_SIZE && unlikely(!(regs->status & SR_SUM))) {
++ if (fixup_exception(regs))
++ return;
++
++ die_kernel_fault("access to user memory without uaccess routines", addr, regs);
++ }
+
+ perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, addr);
+
+diff --git a/arch/s390/boot/boot.h b/arch/s390/boot/boot.h
+index 70418389414d3..939a1b7806df2 100644
+--- a/arch/s390/boot/boot.h
++++ b/arch/s390/boot/boot.h
+@@ -8,10 +8,26 @@
+
+ #ifndef __ASSEMBLY__
+
++struct vmlinux_info {
++ unsigned long default_lma;
++ void (*entry)(void);
++ unsigned long image_size; /* does not include .bss */
++ unsigned long bss_size; /* uncompressed image .bss size */
++ unsigned long bootdata_off;
++ unsigned long bootdata_size;
++ unsigned long bootdata_preserved_off;
++ unsigned long bootdata_preserved_size;
++ unsigned long dynsym_start;
++ unsigned long rela_dyn_start;
++ unsigned long rela_dyn_end;
++ unsigned long amode31_size;
++};
++
+ void startup_kernel(void);
+-unsigned long detect_memory(void);
++unsigned long detect_memory(unsigned long *safe_addr);
+ bool is_ipl_block_dump(void);
+ void store_ipl_parmblock(void);
++unsigned long read_ipl_report(unsigned long safe_addr);
+ void setup_boot_command_line(void);
+ void parse_boot_command_line(void);
+ void verify_facilities(void);
+@@ -20,6 +36,7 @@ void sclp_early_setup_buffer(void);
+ void print_pgm_check_info(void);
+ unsigned long get_random_base(unsigned long safe_addr);
+ void __printf(1, 2) decompressor_printk(const char *fmt, ...);
++void error(char *m);
+
+ /* Symbols defined by linker scripts */
+ extern const char kernel_version[];
+@@ -31,8 +48,11 @@ extern char __boot_data_start[], __boot_data_end[];
+ extern char __boot_data_preserved_start[], __boot_data_preserved_end[];
+ extern char _decompressor_syms_start[], _decompressor_syms_end[];
+ extern char _stack_start[], _stack_end[];
+-
+-unsigned long read_ipl_report(unsigned long safe_offset);
++extern char _end[];
++extern unsigned char _compressed_start[];
++extern unsigned char _compressed_end[];
++extern struct vmlinux_info _vmlinux_info;
++#define vmlinux _vmlinux_info
+
+ #endif /* __ASSEMBLY__ */
+ #endif /* BOOT_BOOT_H */
+diff --git a/arch/s390/boot/decompressor.c b/arch/s390/boot/decompressor.c
+index b519a1f045d8f..d762733a07530 100644
+--- a/arch/s390/boot/decompressor.c
++++ b/arch/s390/boot/decompressor.c
+@@ -11,6 +11,7 @@
+ #include <linux/string.h>
+ #include <asm/page.h>
+ #include "decompressor.h"
++#include "boot.h"
+
+ /*
+ * gzip declarations
+diff --git a/arch/s390/boot/decompressor.h b/arch/s390/boot/decompressor.h
+index f75cc31a77dd9..92b81d2ea35d6 100644
+--- a/arch/s390/boot/decompressor.h
++++ b/arch/s390/boot/decompressor.h
+@@ -2,37 +2,11 @@
+ #ifndef BOOT_COMPRESSED_DECOMPRESSOR_H
+ #define BOOT_COMPRESSED_DECOMPRESSOR_H
+
+-#include <linux/stddef.h>
+-
+ #ifdef CONFIG_KERNEL_UNCOMPRESSED
+ static inline void *decompress_kernel(void) { return NULL; }
+ #else
+ void *decompress_kernel(void);
+ #endif
+ unsigned long mem_safe_offset(void);
+-void error(char *m);
+-
+-struct vmlinux_info {
+- unsigned long default_lma;
+- void (*entry)(void);
+- unsigned long image_size; /* does not include .bss */
+- unsigned long bss_size; /* uncompressed image .bss size */
+- unsigned long bootdata_off;
+- unsigned long bootdata_size;
+- unsigned long bootdata_preserved_off;
+- unsigned long bootdata_preserved_size;
+- unsigned long dynsym_start;
+- unsigned long rela_dyn_start;
+- unsigned long rela_dyn_end;
+- unsigned long amode31_size;
+-};
+-
+-/* Symbols defined by linker scripts */
+-extern char _end[];
+-extern unsigned char _compressed_start[];
+-extern unsigned char _compressed_end[];
+-extern char _vmlinux_info[];
+-
+-#define vmlinux (*(struct vmlinux_info *)_vmlinux_info)
+
+ #endif /* BOOT_COMPRESSED_DECOMPRESSOR_H */
+diff --git a/arch/s390/boot/kaslr.c b/arch/s390/boot/kaslr.c
+index e8d74d4f62aa5..58a8d8c8a1007 100644
+--- a/arch/s390/boot/kaslr.c
++++ b/arch/s390/boot/kaslr.c
+@@ -174,7 +174,6 @@ unsigned long get_random_base(unsigned long safe_addr)
+ {
+ unsigned long memory_limit = get_mem_detect_end();
+ unsigned long base_pos, max_pos, kernel_size;
+- unsigned long kasan_needs;
+ int i;
+
+ memory_limit = min(memory_limit, ident_map_size);
+@@ -186,12 +185,7 @@ unsigned long get_random_base(unsigned long safe_addr)
+ */
+ memory_limit -= kasan_estimate_memory_needs(memory_limit);
+
+- if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && initrd_data.start && initrd_data.size) {
+- if (safe_addr < initrd_data.start + initrd_data.size)
+- safe_addr = initrd_data.start + initrd_data.size;
+- }
+ safe_addr = ALIGN(safe_addr, THREAD_SIZE);
+-
+ kernel_size = vmlinux.image_size + vmlinux.bss_size;
+ if (safe_addr + kernel_size > memory_limit)
+ return 0;
+diff --git a/arch/s390/boot/mem_detect.c b/arch/s390/boot/mem_detect.c
+index 7fa1a32ea0f3f..daa1593171835 100644
+--- a/arch/s390/boot/mem_detect.c
++++ b/arch/s390/boot/mem_detect.c
+@@ -16,29 +16,10 @@ struct mem_detect_info __bootdata(mem_detect);
+ #define ENTRIES_EXTENDED_MAX \
+ (256 * (1020 / 2) * sizeof(struct mem_detect_block))
+
+-/*
+- * To avoid corrupting old kernel memory during dump, find lowest memory
+- * chunk possible either right after the kernel end (decompressed kernel) or
+- * after initrd (if it is present and there is no hole between the kernel end
+- * and initrd)
+- */
+-static void *mem_detect_alloc_extended(void)
+-{
+- unsigned long offset = ALIGN(mem_safe_offset(), sizeof(u64));
+-
+- if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && initrd_data.start && initrd_data.size &&
+- initrd_data.start < offset + ENTRIES_EXTENDED_MAX)
+- offset = ALIGN(initrd_data.start + initrd_data.size, sizeof(u64));
+-
+- return (void *)offset;
+-}
+-
+ static struct mem_detect_block *__get_mem_detect_block_ptr(u32 n)
+ {
+ if (n < MEM_INLINED_ENTRIES)
+ return &mem_detect.entries[n];
+- if (unlikely(!mem_detect.entries_extended))
+- mem_detect.entries_extended = mem_detect_alloc_extended();
+ return &mem_detect.entries_extended[n - MEM_INLINED_ENTRIES];
+ }
+
+@@ -147,7 +128,7 @@ static int tprot(unsigned long addr)
+ return rc;
+ }
+
+-static void search_mem_end(void)
++static unsigned long search_mem_end(void)
+ {
+ unsigned long range = 1 << (MAX_PHYSMEM_BITS - 20); /* in 1MB blocks */
+ unsigned long offset = 0;
+@@ -159,33 +140,34 @@ static void search_mem_end(void)
+ if (!tprot(pivot << 20))
+ offset = pivot;
+ }
+-
+- add_mem_detect_block(0, (offset + 1) << 20);
++ return (offset + 1) << 20;
+ }
+
+-unsigned long detect_memory(void)
++unsigned long detect_memory(unsigned long *safe_addr)
+ {
+- unsigned long max_physmem_end;
++ unsigned long max_physmem_end = 0;
+
+ sclp_early_get_memsize(&max_physmem_end);
++ mem_detect.entries_extended = (struct mem_detect_block *)ALIGN(*safe_addr, sizeof(u64));
+
+ if (!sclp_early_read_storage_info()) {
+ mem_detect.info_source = MEM_DETECT_SCLP_STOR_INFO;
+- return max_physmem_end;
+- }
+-
+- if (!diag260()) {
++ } else if (!diag260()) {
+ mem_detect.info_source = MEM_DETECT_DIAG260;
+- return max_physmem_end;
+- }
+-
+- if (max_physmem_end) {
++ max_physmem_end = max_physmem_end ?: get_mem_detect_end();
++ } else if (max_physmem_end) {
+ add_mem_detect_block(0, max_physmem_end);
+ mem_detect.info_source = MEM_DETECT_SCLP_READ_INFO;
+- return max_physmem_end;
++ } else {
++ max_physmem_end = search_mem_end();
++ add_mem_detect_block(0, max_physmem_end);
++ mem_detect.info_source = MEM_DETECT_BIN_SEARCH;
++ }
++
++ if (mem_detect.count > MEM_INLINED_ENTRIES) {
++ *safe_addr += (mem_detect.count - MEM_INLINED_ENTRIES) *
++ sizeof(struct mem_detect_block);
+ }
+
+- search_mem_end();
+- mem_detect.info_source = MEM_DETECT_BIN_SEARCH;
+- return get_mem_detect_end();
++ return max_physmem_end;
+ }
+diff --git a/arch/s390/boot/startup.c b/arch/s390/boot/startup.c
+index 47ca3264c0230..e0863d28759a5 100644
+--- a/arch/s390/boot/startup.c
++++ b/arch/s390/boot/startup.c
+@@ -57,16 +57,17 @@ unsigned long mem_safe_offset(void)
+ }
+ #endif
+
+-static void rescue_initrd(unsigned long addr)
++static unsigned long rescue_initrd(unsigned long safe_addr)
+ {
+ if (!IS_ENABLED(CONFIG_BLK_DEV_INITRD))
+- return;
++ return safe_addr;
+ if (!initrd_data.start || !initrd_data.size)
+- return;
+- if (addr <= initrd_data.start)
+- return;
+- memmove((void *)addr, (void *)initrd_data.start, initrd_data.size);
+- initrd_data.start = addr;
++ return safe_addr;
++ if (initrd_data.start < safe_addr) {
++ memmove((void *)safe_addr, (void *)initrd_data.start, initrd_data.size);
++ initrd_data.start = safe_addr;
++ }
++ return initrd_data.start + initrd_data.size;
+ }
+
+ static void copy_bootdata(void)
+@@ -250,6 +251,7 @@ static unsigned long reserve_amode31(unsigned long safe_addr)
+
+ void startup_kernel(void)
+ {
++ unsigned long max_physmem_end;
+ unsigned long random_lma;
+ unsigned long safe_addr;
+ void *img;
+@@ -265,12 +267,13 @@ void startup_kernel(void)
+ safe_addr = reserve_amode31(safe_addr);
+ safe_addr = read_ipl_report(safe_addr);
+ uv_query_info();
+- rescue_initrd(safe_addr);
++ safe_addr = rescue_initrd(safe_addr);
+ sclp_early_read_info();
+ setup_boot_command_line();
+ parse_boot_command_line();
+ sanitize_prot_virt_host();
+- setup_ident_map_size(detect_memory());
++ max_physmem_end = detect_memory(&safe_addr);
++ setup_ident_map_size(max_physmem_end);
+ setup_vmalloc_size();
+ setup_kernel_memory_layout();
+
+diff --git a/arch/s390/include/asm/ap.h b/arch/s390/include/asm/ap.h
+index f508f5025e388..57a2d6518d272 100644
+--- a/arch/s390/include/asm/ap.h
++++ b/arch/s390/include/asm/ap.h
+@@ -239,7 +239,10 @@ static inline struct ap_queue_status ap_aqic(ap_qid_t qid,
+ union {
+ unsigned long value;
+ struct ap_qirq_ctrl qirqctrl;
+- struct ap_queue_status status;
++ struct {
++ u32 _pad;
++ struct ap_queue_status status;
++ };
+ } reg1;
+ unsigned long reg2 = pa_ind;
+
+@@ -253,7 +256,7 @@ static inline struct ap_queue_status ap_aqic(ap_qid_t qid,
+ " lgr %[reg1],1\n" /* gr1 (status) into reg1 */
+ : [reg1] "+&d" (reg1)
+ : [reg0] "d" (reg0), [reg2] "d" (reg2)
+- : "cc", "0", "1", "2");
++ : "cc", "memory", "0", "1", "2");
+
+ return reg1.status;
+ }
+@@ -290,7 +293,10 @@ static inline struct ap_queue_status ap_qact(ap_qid_t qid, int ifbit,
+ unsigned long reg0 = qid | (5UL << 24) | ((ifbit & 0x01) << 22);
+ union {
+ unsigned long value;
+- struct ap_queue_status status;
++ struct {
++ u32 _pad;
++ struct ap_queue_status status;
++ };
+ } reg1;
+ unsigned long reg2;
+
+diff --git a/arch/s390/kernel/early.c b/arch/s390/kernel/early.c
+index 6030fdd6997bc..9693c8630e73f 100644
+--- a/arch/s390/kernel/early.c
++++ b/arch/s390/kernel/early.c
+@@ -288,7 +288,6 @@ static void __init sort_amode31_extable(void)
+
+ void __init startup_init(void)
+ {
+- sclp_early_adjust_va();
+ reset_tod_clock();
+ check_image_bootable();
+ time_early_init();
+diff --git a/arch/s390/kernel/head64.S b/arch/s390/kernel/head64.S
+index d7b8b6ad574dc..3b3bf8329e6c1 100644
+--- a/arch/s390/kernel/head64.S
++++ b/arch/s390/kernel/head64.S
+@@ -25,6 +25,7 @@ ENTRY(startup_continue)
+ larl %r14,init_task
+ stg %r14,__LC_CURRENT
+ larl %r15,init_thread_union+THREAD_SIZE-STACK_FRAME_OVERHEAD-__PT_SIZE
++ brasl %r14,sclp_early_adjust_va # allow sclp_early_printk
+ #ifdef CONFIG_KASAN
+ brasl %r14,kasan_early_init
+ #endif
+diff --git a/arch/s390/kernel/idle.c b/arch/s390/kernel/idle.c
+index 4bf1ee293f2b3..a0da049e73609 100644
+--- a/arch/s390/kernel/idle.c
++++ b/arch/s390/kernel/idle.c
+@@ -44,7 +44,7 @@ void account_idle_time_irq(void)
+ S390_lowcore.last_update_timer = idle->timer_idle_exit;
+ }
+
+-void arch_cpu_idle(void)
++void noinstr arch_cpu_idle(void)
+ {
+ struct s390_idle_data *idle = this_cpu_ptr(&s390_idle);
+ unsigned long idle_time;
+diff --git a/arch/s390/kernel/ipl.c b/arch/s390/kernel/ipl.c
+index fbd646dbf4402..bcf03939e6fe8 100644
+--- a/arch/s390/kernel/ipl.c
++++ b/arch/s390/kernel/ipl.c
+@@ -593,6 +593,7 @@ static struct attribute *ipl_eckd_attrs[] = {
+ &sys_ipl_type_attr.attr,
+ &sys_ipl_eckd_bootprog_attr.attr,
+ &sys_ipl_eckd_br_chr_attr.attr,
++ &sys_ipl_ccw_loadparm_attr.attr,
+ &sys_ipl_device_attr.attr,
+ &sys_ipl_secure_attr.attr,
+ &sys_ipl_has_secure_attr.attr,
+@@ -888,23 +889,27 @@ static ssize_t reipl_generic_loadparm_store(struct ipl_parameter_block *ipb,
+ return len;
+ }
+
+-/* FCP wrapper */
+-static ssize_t reipl_fcp_loadparm_show(struct kobject *kobj,
+- struct kobj_attribute *attr, char *page)
+-{
+- return reipl_generic_loadparm_show(reipl_block_fcp, page);
+-}
+-
+-static ssize_t reipl_fcp_loadparm_store(struct kobject *kobj,
+- struct kobj_attribute *attr,
+- const char *buf, size_t len)
+-{
+- return reipl_generic_loadparm_store(reipl_block_fcp, buf, len);
+-}
+-
+-static struct kobj_attribute sys_reipl_fcp_loadparm_attr =
+- __ATTR(loadparm, 0644, reipl_fcp_loadparm_show,
+- reipl_fcp_loadparm_store);
++#define DEFINE_GENERIC_LOADPARM(name) \
++static ssize_t reipl_##name##_loadparm_show(struct kobject *kobj, \
++ struct kobj_attribute *attr, char *page) \
++{ \
++ return reipl_generic_loadparm_show(reipl_block_##name, page); \
++} \
++static ssize_t reipl_##name##_loadparm_store(struct kobject *kobj, \
++ struct kobj_attribute *attr, \
++ const char *buf, size_t len) \
++{ \
++ return reipl_generic_loadparm_store(reipl_block_##name, buf, len); \
++} \
++static struct kobj_attribute sys_reipl_##name##_loadparm_attr = \
++ __ATTR(loadparm, 0644, reipl_##name##_loadparm_show, \
++ reipl_##name##_loadparm_store)
++
++DEFINE_GENERIC_LOADPARM(fcp);
++DEFINE_GENERIC_LOADPARM(nvme);
++DEFINE_GENERIC_LOADPARM(ccw);
++DEFINE_GENERIC_LOADPARM(nss);
++DEFINE_GENERIC_LOADPARM(eckd);
+
+ static ssize_t reipl_fcp_clear_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *page)
+@@ -994,24 +999,6 @@ DEFINE_IPL_ATTR_RW(reipl_nvme, bootprog, "%lld\n", "%lld\n",
+ DEFINE_IPL_ATTR_RW(reipl_nvme, br_lba, "%lld\n", "%lld\n",
+ reipl_block_nvme->nvme.br_lba);
+
+-/* nvme wrapper */
+-static ssize_t reipl_nvme_loadparm_show(struct kobject *kobj,
+- struct kobj_attribute *attr, char *page)
+-{
+- return reipl_generic_loadparm_show(reipl_block_nvme, page);
+-}
+-
+-static ssize_t reipl_nvme_loadparm_store(struct kobject *kobj,
+- struct kobj_attribute *attr,
+- const char *buf, size_t len)
+-{
+- return reipl_generic_loadparm_store(reipl_block_nvme, buf, len);
+-}
+-
+-static struct kobj_attribute sys_reipl_nvme_loadparm_attr =
+- __ATTR(loadparm, 0644, reipl_nvme_loadparm_show,
+- reipl_nvme_loadparm_store);
+-
+ static struct attribute *reipl_nvme_attrs[] = {
+ &sys_reipl_nvme_fid_attr.attr,
+ &sys_reipl_nvme_nsid_attr.attr,
+@@ -1047,38 +1034,6 @@ static struct kobj_attribute sys_reipl_nvme_clear_attr =
+ /* CCW reipl device attributes */
+ DEFINE_IPL_CCW_ATTR_RW(reipl_ccw, device, reipl_block_ccw->ccw);
+
+-/* NSS wrapper */
+-static ssize_t reipl_nss_loadparm_show(struct kobject *kobj,
+- struct kobj_attribute *attr, char *page)
+-{
+- return reipl_generic_loadparm_show(reipl_block_nss, page);
+-}
+-
+-static ssize_t reipl_nss_loadparm_store(struct kobject *kobj,
+- struct kobj_attribute *attr,
+- const char *buf, size_t len)
+-{
+- return reipl_generic_loadparm_store(reipl_block_nss, buf, len);
+-}
+-
+-/* CCW wrapper */
+-static ssize_t reipl_ccw_loadparm_show(struct kobject *kobj,
+- struct kobj_attribute *attr, char *page)
+-{
+- return reipl_generic_loadparm_show(reipl_block_ccw, page);
+-}
+-
+-static ssize_t reipl_ccw_loadparm_store(struct kobject *kobj,
+- struct kobj_attribute *attr,
+- const char *buf, size_t len)
+-{
+- return reipl_generic_loadparm_store(reipl_block_ccw, buf, len);
+-}
+-
+-static struct kobj_attribute sys_reipl_ccw_loadparm_attr =
+- __ATTR(loadparm, 0644, reipl_ccw_loadparm_show,
+- reipl_ccw_loadparm_store);
+-
+ static ssize_t reipl_ccw_clear_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *page)
+ {
+@@ -1176,6 +1131,7 @@ static struct attribute *reipl_eckd_attrs[] = {
+ &sys_reipl_eckd_device_attr.attr,
+ &sys_reipl_eckd_bootprog_attr.attr,
+ &sys_reipl_eckd_br_chr_attr.attr,
++ &sys_reipl_eckd_loadparm_attr.attr,
+ NULL,
+ };
+
+@@ -1251,10 +1207,6 @@ static struct kobj_attribute sys_reipl_nss_name_attr =
+ __ATTR(name, 0644, reipl_nss_name_show,
+ reipl_nss_name_store);
+
+-static struct kobj_attribute sys_reipl_nss_loadparm_attr =
+- __ATTR(loadparm, 0644, reipl_nss_loadparm_show,
+- reipl_nss_loadparm_store);
+-
+ static struct attribute *reipl_nss_attrs[] = {
+ &sys_reipl_nss_name_attr.attr,
+ &sys_reipl_nss_loadparm_attr.attr,
+diff --git a/arch/s390/kernel/kprobes.c b/arch/s390/kernel/kprobes.c
+index 401f9c933ff94..5ca02680fc3c6 100644
+--- a/arch/s390/kernel/kprobes.c
++++ b/arch/s390/kernel/kprobes.c
+@@ -278,6 +278,7 @@ static void pop_kprobe(struct kprobe_ctlblk *kcb)
+ {
+ __this_cpu_write(current_kprobe, kcb->prev_kprobe.kp);
+ kcb->kprobe_status = kcb->prev_kprobe.status;
++ kcb->prev_kprobe.kp = NULL;
+ }
+ NOKPROBE_SYMBOL(pop_kprobe);
+
+@@ -432,12 +433,11 @@ static int post_kprobe_handler(struct pt_regs *regs)
+ if (!p)
+ return 0;
+
++ resume_execution(p, regs);
+ if (kcb->kprobe_status != KPROBE_REENTER && p->post_handler) {
+ kcb->kprobe_status = KPROBE_HIT_SSDONE;
+ p->post_handler(p, regs, 0);
+ }
+-
+- resume_execution(p, regs);
+ pop_kprobe(kcb);
+ preempt_enable_no_resched();
+
+diff --git a/arch/s390/kernel/vdso64/Makefile b/arch/s390/kernel/vdso64/Makefile
+index 9e2b95a222a98..1605ba45ac4c0 100644
+--- a/arch/s390/kernel/vdso64/Makefile
++++ b/arch/s390/kernel/vdso64/Makefile
+@@ -25,7 +25,7 @@ KBUILD_AFLAGS_64 := $(filter-out -m64,$(KBUILD_AFLAGS))
+ KBUILD_AFLAGS_64 += -m64 -s
+
+ KBUILD_CFLAGS_64 := $(filter-out -m64,$(KBUILD_CFLAGS))
+-KBUILD_CFLAGS_64 += -m64 -fPIC -shared -fno-common -fno-builtin
++KBUILD_CFLAGS_64 += -m64 -fPIC -fno-common -fno-builtin
+ ldflags-y := -fPIC -shared -soname=linux-vdso64.so.1 \
+ --hash-style=both --build-id=sha1 -T
+
+diff --git a/arch/s390/kernel/vmlinux.lds.S b/arch/s390/kernel/vmlinux.lds.S
+index cbf9c1b0beda4..729d4f949cfe8 100644
+--- a/arch/s390/kernel/vmlinux.lds.S
++++ b/arch/s390/kernel/vmlinux.lds.S
+@@ -228,5 +228,6 @@ SECTIONS
+ DISCARDS
+ /DISCARD/ : {
+ *(.eh_frame)
++ *(.interp)
+ }
+ }
+diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
+index e4890e04b2108..cb72f9a09fb36 100644
+--- a/arch/s390/kvm/kvm-s390.c
++++ b/arch/s390/kvm/kvm-s390.c
+@@ -5633,23 +5633,40 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
+ if (kvm_s390_pv_get_handle(kvm))
+ return -EINVAL;
+
+- if (change == KVM_MR_DELETE || change == KVM_MR_FLAGS_ONLY)
+- return 0;
++ if (change != KVM_MR_DELETE && change != KVM_MR_FLAGS_ONLY) {
++ /*
++ * A few sanity checks. We can have memory slots which have to be
++ * located/ended at a segment boundary (1MB). The memory in userland is
++ * ok to be fragmented into various different vmas. It is okay to mmap()
++ * and munmap() stuff in this slot after doing this call at any time
++ */
+
+- /* A few sanity checks. We can have memory slots which have to be
+- located/ended at a segment boundary (1MB). The memory in userland is
+- ok to be fragmented into various different vmas. It is okay to mmap()
+- and munmap() stuff in this slot after doing this call at any time */
++ if (new->userspace_addr & 0xffffful)
++ return -EINVAL;
+
+- if (new->userspace_addr & 0xffffful)
+- return -EINVAL;
++ size = new->npages * PAGE_SIZE;
++ if (size & 0xffffful)
++ return -EINVAL;
+
+- size = new->npages * PAGE_SIZE;
+- if (size & 0xffffful)
+- return -EINVAL;
++ if ((new->base_gfn * PAGE_SIZE) + size > kvm->arch.mem_limit)
++ return -EINVAL;
++ }
+
+- if ((new->base_gfn * PAGE_SIZE) + size > kvm->arch.mem_limit)
+- return -EINVAL;
++ if (!kvm->arch.migration_mode)
++ return 0;
++
++ /*
++ * Turn off migration mode when:
++ * - userspace creates a new memslot with dirty logging off,
++ * - userspace modifies an existing memslot (MOVE or FLAGS_ONLY) and
++ * dirty logging is turned off.
++ * Migration mode expects dirty page logging being enabled to store
++ * its dirty bitmap.
++ */
++ if (change != KVM_MR_DELETE &&
++ !(new->flags & KVM_MEM_LOG_DIRTY_PAGES))
++ WARN(kvm_s390_vm_stop_migration(kvm),
++ "Failed to stop migration mode");
+
+ return 0;
+ }
+diff --git a/arch/s390/mm/dump_pagetables.c b/arch/s390/mm/dump_pagetables.c
+index 9953819d79596..ba5f802688781 100644
+--- a/arch/s390/mm/dump_pagetables.c
++++ b/arch/s390/mm/dump_pagetables.c
+@@ -33,10 +33,6 @@ enum address_markers_idx {
+ #endif
+ IDENTITY_AFTER_NR,
+ IDENTITY_AFTER_END_NR,
+-#ifdef CONFIG_KASAN
+- KASAN_SHADOW_START_NR,
+- KASAN_SHADOW_END_NR,
+-#endif
+ VMEMMAP_NR,
+ VMEMMAP_END_NR,
+ VMALLOC_NR,
+@@ -47,6 +43,10 @@ enum address_markers_idx {
+ ABS_LOWCORE_END_NR,
+ MEMCPY_REAL_NR,
+ MEMCPY_REAL_END_NR,
++#ifdef CONFIG_KASAN
++ KASAN_SHADOW_START_NR,
++ KASAN_SHADOW_END_NR,
++#endif
+ };
+
+ static struct addr_marker address_markers[] = {
+@@ -62,10 +62,6 @@ static struct addr_marker address_markers[] = {
+ #endif
+ [IDENTITY_AFTER_NR] = {(unsigned long)_end, "Identity Mapping Start"},
+ [IDENTITY_AFTER_END_NR] = {0, "Identity Mapping End"},
+-#ifdef CONFIG_KASAN
+- [KASAN_SHADOW_START_NR] = {KASAN_SHADOW_START, "Kasan Shadow Start"},
+- [KASAN_SHADOW_END_NR] = {KASAN_SHADOW_END, "Kasan Shadow End"},
+-#endif
+ [VMEMMAP_NR] = {0, "vmemmap Area Start"},
+ [VMEMMAP_END_NR] = {0, "vmemmap Area End"},
+ [VMALLOC_NR] = {0, "vmalloc Area Start"},
+@@ -76,6 +72,10 @@ static struct addr_marker address_markers[] = {
+ [ABS_LOWCORE_END_NR] = {0, "Lowcore Area End"},
+ [MEMCPY_REAL_NR] = {0, "Real Memory Copy Area Start"},
+ [MEMCPY_REAL_END_NR] = {0, "Real Memory Copy Area End"},
++#ifdef CONFIG_KASAN
++ [KASAN_SHADOW_START_NR] = {KASAN_SHADOW_START, "Kasan Shadow Start"},
++ [KASAN_SHADOW_END_NR] = {KASAN_SHADOW_END, "Kasan Shadow End"},
++#endif
+ { -1, NULL }
+ };
+
+diff --git a/arch/s390/mm/extmem.c b/arch/s390/mm/extmem.c
+index 5060956b8e7d6..1bc42ce265990 100644
+--- a/arch/s390/mm/extmem.c
++++ b/arch/s390/mm/extmem.c
+@@ -289,15 +289,17 @@ segment_overlaps_others (struct dcss_segment *seg)
+
+ /*
+ * real segment loading function, called from segment_load
++ * Must return either an error code < 0, or the segment type code >= 0
+ */
+ static int
+ __segment_load (char *name, int do_nonshared, unsigned long *addr, unsigned long *end)
+ {
+ unsigned long start_addr, end_addr, dummy;
+ struct dcss_segment *seg;
+- int rc, diag_cc;
++ int rc, diag_cc, segtype;
+
+ start_addr = end_addr = 0;
++ segtype = -1;
+ seg = kmalloc(sizeof(*seg), GFP_KERNEL | GFP_DMA);
+ if (seg == NULL) {
+ rc = -ENOMEM;
+@@ -326,9 +328,9 @@ __segment_load (char *name, int do_nonshared, unsigned long *addr, unsigned long
+ seg->res_name[8] = '\0';
+ strlcat(seg->res_name, " (DCSS)", sizeof(seg->res_name));
+ seg->res->name = seg->res_name;
+- rc = seg->vm_segtype;
+- if (rc == SEG_TYPE_SC ||
+- ((rc == SEG_TYPE_SR || rc == SEG_TYPE_ER) && !do_nonshared))
++ segtype = seg->vm_segtype;
++ if (segtype == SEG_TYPE_SC ||
++ ((segtype == SEG_TYPE_SR || segtype == SEG_TYPE_ER) && !do_nonshared))
+ seg->res->flags |= IORESOURCE_READONLY;
+
+ /* Check for overlapping resources before adding the mapping. */
+@@ -386,7 +388,7 @@ __segment_load (char *name, int do_nonshared, unsigned long *addr, unsigned long
+ out_free:
+ kfree(seg);
+ out:
+- return rc;
++ return rc < 0 ? rc : segtype;
+ }
+
+ /*
+diff --git a/arch/s390/mm/fault.c b/arch/s390/mm/fault.c
+index 9649d9382e0ae..8e84ed2bb944e 100644
+--- a/arch/s390/mm/fault.c
++++ b/arch/s390/mm/fault.c
+@@ -96,6 +96,20 @@ static enum fault_type get_fault_type(struct pt_regs *regs)
+ return KERNEL_FAULT;
+ }
+
++static unsigned long get_fault_address(struct pt_regs *regs)
++{
++ unsigned long trans_exc_code = regs->int_parm_long;
++
++ return trans_exc_code & __FAIL_ADDR_MASK;
++}
++
++static bool fault_is_write(struct pt_regs *regs)
++{
++ unsigned long trans_exc_code = regs->int_parm_long;
++
++ return (trans_exc_code & store_indication) == 0x400;
++}
++
+ static int bad_address(void *p)
+ {
+ unsigned long dummy;
+@@ -228,15 +242,26 @@ static noinline void do_sigsegv(struct pt_regs *regs, int si_code)
+ (void __user *)(regs->int_parm_long & __FAIL_ADDR_MASK));
+ }
+
+-static noinline void do_no_context(struct pt_regs *regs)
++static noinline void do_no_context(struct pt_regs *regs, vm_fault_t fault)
+ {
++ enum fault_type fault_type;
++ unsigned long address;
++ bool is_write;
++
+ if (fixup_exception(regs))
+ return;
++ fault_type = get_fault_type(regs);
++ if ((fault_type == KERNEL_FAULT) && (fault == VM_FAULT_BADCONTEXT)) {
++ address = get_fault_address(regs);
++ is_write = fault_is_write(regs);
++ if (kfence_handle_page_fault(address, is_write, regs))
++ return;
++ }
+ /*
+ * Oops. The kernel tried to access some bad page. We'll have to
+ * terminate things with extreme prejudice.
+ */
+- if (get_fault_type(regs) == KERNEL_FAULT)
++ if (fault_type == KERNEL_FAULT)
+ printk(KERN_ALERT "Unable to handle kernel pointer dereference"
+ " in virtual kernel address space\n");
+ else
+@@ -255,7 +280,7 @@ static noinline void do_low_address(struct pt_regs *regs)
+ die (regs, "Low-address protection");
+ }
+
+- do_no_context(regs);
++ do_no_context(regs, VM_FAULT_BADACCESS);
+ }
+
+ static noinline void do_sigbus(struct pt_regs *regs)
+@@ -286,28 +311,28 @@ static noinline void do_fault_error(struct pt_regs *regs, vm_fault_t fault)
+ fallthrough;
+ case VM_FAULT_BADCONTEXT:
+ case VM_FAULT_PFAULT:
+- do_no_context(regs);
++ do_no_context(regs, fault);
+ break;
+ case VM_FAULT_SIGNAL:
+ if (!user_mode(regs))
+- do_no_context(regs);
++ do_no_context(regs, fault);
+ break;
+ default: /* fault & VM_FAULT_ERROR */
+ if (fault & VM_FAULT_OOM) {
+ if (!user_mode(regs))
+- do_no_context(regs);
++ do_no_context(regs, fault);
+ else
+ pagefault_out_of_memory();
+ } else if (fault & VM_FAULT_SIGSEGV) {
+ /* Kernel mode? Handle exceptions or die */
+ if (!user_mode(regs))
+- do_no_context(regs);
++ do_no_context(regs, fault);
+ else
+ do_sigsegv(regs, SEGV_MAPERR);
+ } else if (fault & VM_FAULT_SIGBUS) {
+ /* Kernel mode? Handle exceptions or die */
+ if (!user_mode(regs))
+- do_no_context(regs);
++ do_no_context(regs, fault);
+ else
+ do_sigbus(regs);
+ } else
+@@ -334,7 +359,6 @@ static inline vm_fault_t do_exception(struct pt_regs *regs, int access)
+ struct mm_struct *mm;
+ struct vm_area_struct *vma;
+ enum fault_type type;
+- unsigned long trans_exc_code;
+ unsigned long address;
+ unsigned int flags;
+ vm_fault_t fault;
+@@ -351,9 +375,8 @@ static inline vm_fault_t do_exception(struct pt_regs *regs, int access)
+ return 0;
+
+ mm = tsk->mm;
+- trans_exc_code = regs->int_parm_long;
+- address = trans_exc_code & __FAIL_ADDR_MASK;
+- is_write = (trans_exc_code & store_indication) == 0x400;
++ address = get_fault_address(regs);
++ is_write = fault_is_write(regs);
+
+ /*
+ * Verify that the fault happened in user space, that
+@@ -364,8 +387,6 @@ static inline vm_fault_t do_exception(struct pt_regs *regs, int access)
+ type = get_fault_type(regs);
+ switch (type) {
+ case KERNEL_FAULT:
+- if (kfence_handle_page_fault(address, is_write, regs))
+- return 0;
+ goto out;
+ case USER_FAULT:
+ case GMAP_FAULT:
+diff --git a/arch/s390/mm/vmem.c b/arch/s390/mm/vmem.c
+index ee1a97078527b..9a0ce5315f36d 100644
+--- a/arch/s390/mm/vmem.c
++++ b/arch/s390/mm/vmem.c
+@@ -297,7 +297,7 @@ static void try_free_pmd_table(pud_t *pud, unsigned long start)
+ if (end > VMALLOC_START)
+ return;
+ #ifdef CONFIG_KASAN
+- if (start < KASAN_SHADOW_END && KASAN_SHADOW_START > end)
++ if (start < KASAN_SHADOW_END && end > KASAN_SHADOW_START)
+ return;
+ #endif
+ pmd = pmd_offset(pud, start);
+@@ -372,7 +372,7 @@ static void try_free_pud_table(p4d_t *p4d, unsigned long start)
+ if (end > VMALLOC_START)
+ return;
+ #ifdef CONFIG_KASAN
+- if (start < KASAN_SHADOW_END && KASAN_SHADOW_START > end)
++ if (start < KASAN_SHADOW_END && end > KASAN_SHADOW_START)
+ return;
+ #endif
+
+@@ -426,7 +426,7 @@ static void try_free_p4d_table(pgd_t *pgd, unsigned long start)
+ if (end > VMALLOC_START)
+ return;
+ #ifdef CONFIG_KASAN
+- if (start < KASAN_SHADOW_END && KASAN_SHADOW_START > end)
++ if (start < KASAN_SHADOW_END && end > KASAN_SHADOW_START)
+ return;
+ #endif
+
+diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c
+index af35052d06ed6..fbdba4c306bea 100644
+--- a/arch/s390/net/bpf_jit_comp.c
++++ b/arch/s390/net/bpf_jit_comp.c
+@@ -1393,8 +1393,16 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
+ /* lg %r1,bpf_func(%r1) */
+ EMIT6_DISP_LH(0xe3000000, 0x0004, REG_1, REG_1, REG_0,
+ offsetof(struct bpf_prog, bpf_func));
+- /* bc 0xf,tail_call_start(%r1) */
+- _EMIT4(0x47f01000 + jit->tail_call_start);
++ if (nospec_uses_trampoline()) {
++ jit->seen |= SEEN_FUNC;
++ /* aghi %r1,tail_call_start */
++ EMIT4_IMM(0xa70b0000, REG_1, jit->tail_call_start);
++ /* brcl 0xf,__s390_indirect_jump_r1 */
++ EMIT6_PCREL_RILC(0xc0040000, 0xf, jit->r1_thunk_ip);
++ } else {
++ /* bc 0xf,tail_call_start(%r1) */
++ _EMIT4(0x47f01000 + jit->tail_call_start);
++ }
+ /* out: */
+ if (jit->prg_buf) {
+ *(u16 *)(jit->prg_buf + patch_1_clrj + 2) =
+diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
+index 4d3d1af90d521..84437a4c65454 100644
+--- a/arch/sparc/Kconfig
++++ b/arch/sparc/Kconfig
+@@ -283,7 +283,7 @@ config ARCH_FORCE_MAX_ORDER
+ This config option is actually maximum order plus one. For example,
+ a value of 13 means that the largest free memory block is 2^12 pages.
+
+-if SPARC64
++if SPARC64 || COMPILE_TEST
+ source "kernel/power/Kconfig"
+ endif
+
+diff --git a/arch/x86/crypto/ghash-clmulni-intel_glue.c b/arch/x86/crypto/ghash-clmulni-intel_glue.c
+index 1f1a95f3dd0ca..c0ab0ff4af655 100644
+--- a/arch/x86/crypto/ghash-clmulni-intel_glue.c
++++ b/arch/x86/crypto/ghash-clmulni-intel_glue.c
+@@ -19,6 +19,7 @@
+ #include <crypto/internal/simd.h>
+ #include <asm/cpu_device_id.h>
+ #include <asm/simd.h>
++#include <asm/unaligned.h>
+
+ #define GHASH_BLOCK_SIZE 16
+ #define GHASH_DIGEST_SIZE 16
+@@ -54,15 +55,14 @@ static int ghash_setkey(struct crypto_shash *tfm,
+ const u8 *key, unsigned int keylen)
+ {
+ struct ghash_ctx *ctx = crypto_shash_ctx(tfm);
+- be128 *x = (be128 *)key;
+ u64 a, b;
+
+ if (keylen != GHASH_BLOCK_SIZE)
+ return -EINVAL;
+
+ /* perform multiplication by 'x' in GF(2^128) */
+- a = be64_to_cpu(x->a);
+- b = be64_to_cpu(x->b);
++ a = get_unaligned_be64(key);
++ b = get_unaligned_be64(key + 8);
+
+ ctx->shash.a = (b << 1) | (a >> 63);
+ ctx->shash.b = (a << 1) | (b >> 63);
+diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
+index 88e58b6ee73c0..91b214231e03c 100644
+--- a/arch/x86/events/intel/ds.c
++++ b/arch/x86/events/intel/ds.c
+@@ -2,12 +2,14 @@
+ #include <linux/bitops.h>
+ #include <linux/types.h>
+ #include <linux/slab.h>
++#include <linux/sched/clock.h>
+
+ #include <asm/cpu_entry_area.h>
+ #include <asm/perf_event.h>
+ #include <asm/tlbflush.h>
+ #include <asm/insn.h>
+ #include <asm/io.h>
++#include <asm/timer.h>
+
+ #include "../perf_event.h"
+
+@@ -1519,6 +1521,27 @@ static u64 get_data_src(struct perf_event *event, u64 aux)
+ return val;
+ }
+
++static void setup_pebs_time(struct perf_event *event,
++ struct perf_sample_data *data,
++ u64 tsc)
++{
++ /* Converting to a user-defined clock is not supported yet. */
++ if (event->attr.use_clockid != 0)
++ return;
++
++ /*
++ * Doesn't support the conversion when the TSC is unstable.
++ * The TSC unstable case is a corner case and very unlikely to
++ * happen. If it happens, the TSC in a PEBS record will be
++ * dropped and fall back to perf_event_clock().
++ */
++ if (!using_native_sched_clock() || !sched_clock_stable())
++ return;
++
++ data->time = native_sched_clock_from_tsc(tsc) + __sched_clock_offset;
++ data->sample_flags |= PERF_SAMPLE_TIME;
++}
++
+ #define PERF_SAMPLE_ADDR_TYPE (PERF_SAMPLE_ADDR | \
+ PERF_SAMPLE_PHYS_ADDR | \
+ PERF_SAMPLE_DATA_PAGE_SIZE)
+@@ -1668,11 +1691,8 @@ static void setup_pebs_fixed_sample_data(struct perf_event *event,
+ *
+ * We can only do this for the default trace clock.
+ */
+- if (x86_pmu.intel_cap.pebs_format >= 3 &&
+- event->attr.use_clockid == 0) {
+- data->time = native_sched_clock_from_tsc(pebs->tsc);
+- data->sample_flags |= PERF_SAMPLE_TIME;
+- }
++ if (x86_pmu.intel_cap.pebs_format >= 3)
++ setup_pebs_time(event, data, pebs->tsc);
+
+ if (has_branch_stack(event)) {
+ data->br_stack = &cpuc->lbr_stack;
+@@ -1735,10 +1755,7 @@ static void setup_pebs_adaptive_sample_data(struct perf_event *event,
+ perf_sample_data_init(data, 0, event->hw.last_period);
+ data->period = event->hw.last_period;
+
+- if (event->attr.use_clockid == 0) {
+- data->time = native_sched_clock_from_tsc(basic->tsc);
+- data->sample_flags |= PERF_SAMPLE_TIME;
+- }
++ setup_pebs_time(event, data, basic->tsc);
+
+ /*
+ * We must however always use iregs for the unwinder to stay sane; the
+diff --git a/arch/x86/events/intel/uncore.c b/arch/x86/events/intel/uncore.c
+index 459b1aafd4d4a..27b34f5b87600 100644
+--- a/arch/x86/events/intel/uncore.c
++++ b/arch/x86/events/intel/uncore.c
+@@ -1765,6 +1765,11 @@ static const struct intel_uncore_init_fun adl_uncore_init __initconst = {
+ .mmio_init = adl_uncore_mmio_init,
+ };
+
++static const struct intel_uncore_init_fun mtl_uncore_init __initconst = {
++ .cpu_init = mtl_uncore_cpu_init,
++ .mmio_init = adl_uncore_mmio_init,
++};
++
+ static const struct intel_uncore_init_fun icx_uncore_init __initconst = {
+ .cpu_init = icx_uncore_cpu_init,
+ .pci_init = icx_uncore_pci_init,
+@@ -1832,6 +1837,8 @@ static const struct x86_cpu_id intel_uncore_match[] __initconst = {
+ X86_MATCH_INTEL_FAM6_MODEL(RAPTORLAKE, &adl_uncore_init),
+ X86_MATCH_INTEL_FAM6_MODEL(RAPTORLAKE_P, &adl_uncore_init),
+ X86_MATCH_INTEL_FAM6_MODEL(RAPTORLAKE_S, &adl_uncore_init),
++ X86_MATCH_INTEL_FAM6_MODEL(METEORLAKE, &mtl_uncore_init),
++ X86_MATCH_INTEL_FAM6_MODEL(METEORLAKE_L, &mtl_uncore_init),
+ X86_MATCH_INTEL_FAM6_MODEL(SAPPHIRERAPIDS_X, &spr_uncore_init),
+ X86_MATCH_INTEL_FAM6_MODEL(EMERALDRAPIDS_X, &spr_uncore_init),
+ X86_MATCH_INTEL_FAM6_MODEL(ATOM_TREMONT_D, &snr_uncore_init),
+diff --git a/arch/x86/events/intel/uncore.h b/arch/x86/events/intel/uncore.h
+index e278e2e7c051a..305a54d88beee 100644
+--- a/arch/x86/events/intel/uncore.h
++++ b/arch/x86/events/intel/uncore.h
+@@ -602,6 +602,7 @@ void skl_uncore_cpu_init(void);
+ void icl_uncore_cpu_init(void);
+ void tgl_uncore_cpu_init(void);
+ void adl_uncore_cpu_init(void);
++void mtl_uncore_cpu_init(void);
+ void tgl_uncore_mmio_init(void);
+ void tgl_l_uncore_mmio_init(void);
+ void adl_uncore_mmio_init(void);
+diff --git a/arch/x86/events/intel/uncore_snb.c b/arch/x86/events/intel/uncore_snb.c
+index 1f4869227efb9..7fd4334e12a17 100644
+--- a/arch/x86/events/intel/uncore_snb.c
++++ b/arch/x86/events/intel/uncore_snb.c
+@@ -109,6 +109,19 @@
+ #define PCI_DEVICE_ID_INTEL_RPL_23_IMC 0xA728
+ #define PCI_DEVICE_ID_INTEL_RPL_24_IMC 0xA729
+ #define PCI_DEVICE_ID_INTEL_RPL_25_IMC 0xA72A
++#define PCI_DEVICE_ID_INTEL_MTL_1_IMC 0x7d00
++#define PCI_DEVICE_ID_INTEL_MTL_2_IMC 0x7d01
++#define PCI_DEVICE_ID_INTEL_MTL_3_IMC 0x7d02
++#define PCI_DEVICE_ID_INTEL_MTL_4_IMC 0x7d05
++#define PCI_DEVICE_ID_INTEL_MTL_5_IMC 0x7d10
++#define PCI_DEVICE_ID_INTEL_MTL_6_IMC 0x7d14
++#define PCI_DEVICE_ID_INTEL_MTL_7_IMC 0x7d15
++#define PCI_DEVICE_ID_INTEL_MTL_8_IMC 0x7d16
++#define PCI_DEVICE_ID_INTEL_MTL_9_IMC 0x7d21
++#define PCI_DEVICE_ID_INTEL_MTL_10_IMC 0x7d22
++#define PCI_DEVICE_ID_INTEL_MTL_11_IMC 0x7d23
++#define PCI_DEVICE_ID_INTEL_MTL_12_IMC 0x7d24
++#define PCI_DEVICE_ID_INTEL_MTL_13_IMC 0x7d28
+
+
+ #define IMC_UNCORE_DEV(a) \
+@@ -205,6 +218,32 @@
+ #define ADL_UNC_ARB_PERFEVTSEL0 0x2FD0
+ #define ADL_UNC_ARB_MSR_OFFSET 0x8
+
++/* MTL Cbo register */
++#define MTL_UNC_CBO_0_PER_CTR0 0x2448
++#define MTL_UNC_CBO_0_PERFEVTSEL0 0x2442
++
++/* MTL HAC_ARB register */
++#define MTL_UNC_HAC_ARB_CTR 0x2018
++#define MTL_UNC_HAC_ARB_CTRL 0x2012
++
++/* MTL ARB register */
++#define MTL_UNC_ARB_CTR 0x2418
++#define MTL_UNC_ARB_CTRL 0x2412
++
++/* MTL cNCU register */
++#define MTL_UNC_CNCU_FIXED_CTR 0x2408
++#define MTL_UNC_CNCU_FIXED_CTRL 0x2402
++#define MTL_UNC_CNCU_BOX_CTL 0x240e
++
++/* MTL sNCU register */
++#define MTL_UNC_SNCU_FIXED_CTR 0x2008
++#define MTL_UNC_SNCU_FIXED_CTRL 0x2002
++#define MTL_UNC_SNCU_BOX_CTL 0x200e
++
++/* MTL HAC_CBO register */
++#define MTL_UNC_HBO_CTR 0x2048
++#define MTL_UNC_HBO_CTRL 0x2042
++
+ DEFINE_UNCORE_FORMAT_ATTR(event, event, "config:0-7");
+ DEFINE_UNCORE_FORMAT_ATTR(umask, umask, "config:8-15");
+ DEFINE_UNCORE_FORMAT_ATTR(chmask, chmask, "config:8-11");
+@@ -598,6 +637,115 @@ void adl_uncore_cpu_init(void)
+ uncore_msr_uncores = adl_msr_uncores;
+ }
+
++static struct intel_uncore_type mtl_uncore_cbox = {
++ .name = "cbox",
++ .num_counters = 2,
++ .perf_ctr_bits = 48,
++ .perf_ctr = MTL_UNC_CBO_0_PER_CTR0,
++ .event_ctl = MTL_UNC_CBO_0_PERFEVTSEL0,
++ .event_mask = ADL_UNC_RAW_EVENT_MASK,
++ .msr_offset = SNB_UNC_CBO_MSR_OFFSET,
++ .ops = &icl_uncore_msr_ops,
++ .format_group = &adl_uncore_format_group,
++};
++
++static struct intel_uncore_type mtl_uncore_hac_arb = {
++ .name = "hac_arb",
++ .num_counters = 2,
++ .num_boxes = 2,
++ .perf_ctr_bits = 48,
++ .perf_ctr = MTL_UNC_HAC_ARB_CTR,
++ .event_ctl = MTL_UNC_HAC_ARB_CTRL,
++ .event_mask = ADL_UNC_RAW_EVENT_MASK,
++ .msr_offset = SNB_UNC_CBO_MSR_OFFSET,
++ .ops = &icl_uncore_msr_ops,
++ .format_group = &adl_uncore_format_group,
++};
++
++static struct intel_uncore_type mtl_uncore_arb = {
++ .name = "arb",
++ .num_counters = 2,
++ .num_boxes = 2,
++ .perf_ctr_bits = 48,
++ .perf_ctr = MTL_UNC_ARB_CTR,
++ .event_ctl = MTL_UNC_ARB_CTRL,
++ .event_mask = ADL_UNC_RAW_EVENT_MASK,
++ .msr_offset = SNB_UNC_CBO_MSR_OFFSET,
++ .ops = &icl_uncore_msr_ops,
++ .format_group = &adl_uncore_format_group,
++};
++
++static struct intel_uncore_type mtl_uncore_hac_cbox = {
++ .name = "hac_cbox",
++ .num_counters = 2,
++ .num_boxes = 2,
++ .perf_ctr_bits = 48,
++ .perf_ctr = MTL_UNC_HBO_CTR,
++ .event_ctl = MTL_UNC_HBO_CTRL,
++ .event_mask = ADL_UNC_RAW_EVENT_MASK,
++ .msr_offset = SNB_UNC_CBO_MSR_OFFSET,
++ .ops = &icl_uncore_msr_ops,
++ .format_group = &adl_uncore_format_group,
++};
++
++static void mtl_uncore_msr_init_box(struct intel_uncore_box *box)
++{
++ wrmsrl(uncore_msr_box_ctl(box), SNB_UNC_GLOBAL_CTL_EN);
++}
++
++static struct intel_uncore_ops mtl_uncore_msr_ops = {
++ .init_box = mtl_uncore_msr_init_box,
++ .disable_event = snb_uncore_msr_disable_event,
++ .enable_event = snb_uncore_msr_enable_event,
++ .read_counter = uncore_msr_read_counter,
++};
++
++static struct intel_uncore_type mtl_uncore_cncu = {
++ .name = "cncu",
++ .num_counters = 1,
++ .num_boxes = 1,
++ .box_ctl = MTL_UNC_CNCU_BOX_CTL,
++ .fixed_ctr_bits = 48,
++ .fixed_ctr = MTL_UNC_CNCU_FIXED_CTR,
++ .fixed_ctl = MTL_UNC_CNCU_FIXED_CTRL,
++ .single_fixed = 1,
++ .event_mask = SNB_UNC_CTL_EV_SEL_MASK,
++ .format_group = &icl_uncore_clock_format_group,
++ .ops = &mtl_uncore_msr_ops,
++ .event_descs = icl_uncore_events,
++};
++
++static struct intel_uncore_type mtl_uncore_sncu = {
++ .name = "sncu",
++ .num_counters = 1,
++ .num_boxes = 1,
++ .box_ctl = MTL_UNC_SNCU_BOX_CTL,
++ .fixed_ctr_bits = 48,
++ .fixed_ctr = MTL_UNC_SNCU_FIXED_CTR,
++ .fixed_ctl = MTL_UNC_SNCU_FIXED_CTRL,
++ .single_fixed = 1,
++ .event_mask = SNB_UNC_CTL_EV_SEL_MASK,
++ .format_group = &icl_uncore_clock_format_group,
++ .ops = &mtl_uncore_msr_ops,
++ .event_descs = icl_uncore_events,
++};
++
++static struct intel_uncore_type *mtl_msr_uncores[] = {
++ &mtl_uncore_cbox,
++ &mtl_uncore_hac_arb,
++ &mtl_uncore_arb,
++ &mtl_uncore_hac_cbox,
++ &mtl_uncore_cncu,
++ &mtl_uncore_sncu,
++ NULL
++};
++
++void mtl_uncore_cpu_init(void)
++{
++ mtl_uncore_cbox.num_boxes = icl_get_cbox_num();
++ uncore_msr_uncores = mtl_msr_uncores;
++}
++
+ enum {
+ SNB_PCI_UNCORE_IMC,
+ };
+@@ -1264,6 +1412,19 @@ static const struct pci_device_id tgl_uncore_pci_ids[] = {
+ IMC_UNCORE_DEV(RPL_23),
+ IMC_UNCORE_DEV(RPL_24),
+ IMC_UNCORE_DEV(RPL_25),
++ IMC_UNCORE_DEV(MTL_1),
++ IMC_UNCORE_DEV(MTL_2),
++ IMC_UNCORE_DEV(MTL_3),
++ IMC_UNCORE_DEV(MTL_4),
++ IMC_UNCORE_DEV(MTL_5),
++ IMC_UNCORE_DEV(MTL_6),
++ IMC_UNCORE_DEV(MTL_7),
++ IMC_UNCORE_DEV(MTL_8),
++ IMC_UNCORE_DEV(MTL_9),
++ IMC_UNCORE_DEV(MTL_10),
++ IMC_UNCORE_DEV(MTL_11),
++ IMC_UNCORE_DEV(MTL_12),
++ IMC_UNCORE_DEV(MTL_13),
+ { /* end: all zeroes */ }
+ };
+
+diff --git a/arch/x86/events/zhaoxin/core.c b/arch/x86/events/zhaoxin/core.c
+index 949d845c922b4..3e9acdaeed1ec 100644
+--- a/arch/x86/events/zhaoxin/core.c
++++ b/arch/x86/events/zhaoxin/core.c
+@@ -541,7 +541,13 @@ __init int zhaoxin_pmu_init(void)
+
+ switch (boot_cpu_data.x86) {
+ case 0x06:
+- if (boot_cpu_data.x86_model == 0x0f || boot_cpu_data.x86_model == 0x19) {
++ /*
++ * Support Zhaoxin CPU from ZXC series, exclude Nano series through FMS.
++ * Nano FMS: Family=6, Model=F, Stepping=[0-A][C-D]
++ * ZXC FMS: Family=6, Model=F, Stepping=E-F OR Family=6, Model=0x19, Stepping=0-3
++ */
++ if ((boot_cpu_data.x86_model == 0x0f && boot_cpu_data.x86_stepping >= 0x0e) ||
++ boot_cpu_data.x86_model == 0x19) {
+
+ x86_pmu.max_period = x86_pmu.cntval_mask >> 1;
+
+diff --git a/arch/x86/include/asm/fpu/sched.h b/arch/x86/include/asm/fpu/sched.h
+index b2486b2cbc6e0..c2d6cd78ed0c2 100644
+--- a/arch/x86/include/asm/fpu/sched.h
++++ b/arch/x86/include/asm/fpu/sched.h
+@@ -39,7 +39,7 @@ extern void fpu_flush_thread(void);
+ static inline void switch_fpu_prepare(struct fpu *old_fpu, int cpu)
+ {
+ if (cpu_feature_enabled(X86_FEATURE_FPU) &&
+- !(current->flags & PF_KTHREAD)) {
++ !(current->flags & (PF_KTHREAD | PF_IO_WORKER))) {
+ save_fpregs_to_fpstate(old_fpu);
+ /*
+ * The save operation preserved register state, so the
+diff --git a/arch/x86/include/asm/fpu/xcr.h b/arch/x86/include/asm/fpu/xcr.h
+index 9656a5bc6feae..9a710c0604457 100644
+--- a/arch/x86/include/asm/fpu/xcr.h
++++ b/arch/x86/include/asm/fpu/xcr.h
+@@ -5,7 +5,7 @@
+ #define XCR_XFEATURE_ENABLED_MASK 0x00000000
+ #define XCR_XFEATURE_IN_USE_MASK 0x00000001
+
+-static inline u64 xgetbv(u32 index)
++static __always_inline u64 xgetbv(u32 index)
+ {
+ u32 eax, edx;
+
+@@ -27,7 +27,7 @@ static inline void xsetbv(u32 index, u64 value)
+ *
+ * Callers should check X86_FEATURE_XGETBV1.
+ */
+-static inline u64 xfeatures_in_use(void)
++static __always_inline u64 xfeatures_in_use(void)
+ {
+ return xgetbv(XCR_XFEATURE_IN_USE_MASK);
+ }
+diff --git a/arch/x86/include/asm/microcode.h b/arch/x86/include/asm/microcode.h
+index d5a58bde091c8..320566a0443db 100644
+--- a/arch/x86/include/asm/microcode.h
++++ b/arch/x86/include/asm/microcode.h
+@@ -125,13 +125,13 @@ static inline unsigned int x86_cpuid_family(void)
+ #ifdef CONFIG_MICROCODE
+ extern void __init load_ucode_bsp(void);
+ extern void load_ucode_ap(void);
+-void reload_early_microcode(void);
++void reload_early_microcode(unsigned int cpu);
+ extern bool initrd_gone;
+ void microcode_bsp_resume(void);
+ #else
+ static inline void __init load_ucode_bsp(void) { }
+ static inline void load_ucode_ap(void) { }
+-static inline void reload_early_microcode(void) { }
++static inline void reload_early_microcode(unsigned int cpu) { }
+ static inline void microcode_bsp_resume(void) { }
+ #endif
+
+diff --git a/arch/x86/include/asm/microcode_amd.h b/arch/x86/include/asm/microcode_amd.h
+index ac31f9140d07d..e6662adf3af4d 100644
+--- a/arch/x86/include/asm/microcode_amd.h
++++ b/arch/x86/include/asm/microcode_amd.h
+@@ -47,12 +47,12 @@ struct microcode_amd {
+ extern void __init load_ucode_amd_bsp(unsigned int family);
+ extern void load_ucode_amd_ap(unsigned int family);
+ extern int __init save_microcode_in_initrd_amd(unsigned int family);
+-void reload_ucode_amd(void);
++void reload_ucode_amd(unsigned int cpu);
+ #else
+ static inline void __init load_ucode_amd_bsp(unsigned int family) {}
+ static inline void load_ucode_amd_ap(unsigned int family) {}
+ static inline int __init
+ save_microcode_in_initrd_amd(unsigned int family) { return -EINVAL; }
+-static inline void reload_ucode_amd(void) {}
++static inline void reload_ucode_amd(unsigned int cpu) {}
+ #endif
+ #endif /* _ASM_X86_MICROCODE_AMD_H */
+diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
+index d3fe82c5d6b66..978a3e203cdbb 100644
+--- a/arch/x86/include/asm/msr-index.h
++++ b/arch/x86/include/asm/msr-index.h
+@@ -49,6 +49,10 @@
+ #define SPEC_CTRL_RRSBA_DIS_S_SHIFT 6 /* Disable RRSBA behavior */
+ #define SPEC_CTRL_RRSBA_DIS_S BIT(SPEC_CTRL_RRSBA_DIS_S_SHIFT)
+
++/* A mask for bits which the kernel toggles when controlling mitigations */
++#define SPEC_CTRL_MITIGATIONS_MASK (SPEC_CTRL_IBRS | SPEC_CTRL_STIBP | SPEC_CTRL_SSBD \
++ | SPEC_CTRL_RRSBA_DIS_S)
++
+ #define MSR_IA32_PRED_CMD 0x00000049 /* Prediction Command */
+ #define PRED_CMD_IBPB BIT(0) /* Indirect Branch Prediction Barrier */
+
+diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
+index 4e35c66edeb7d..a77dee6a2bf2e 100644
+--- a/arch/x86/include/asm/processor.h
++++ b/arch/x86/include/asm/processor.h
+@@ -697,7 +697,8 @@ bool xen_set_default_idle(void);
+ #endif
+
+ void __noreturn stop_this_cpu(void *dummy);
+-void microcode_check(void);
++void microcode_check(struct cpuinfo_x86 *prev_info);
++void store_cpu_caps(struct cpuinfo_x86 *info);
+
+ enum l1tf_mitigations {
+ L1TF_MITIGATION_OFF,
+diff --git a/arch/x86/include/asm/reboot.h b/arch/x86/include/asm/reboot.h
+index 04c17be9b5fda..bc5b4d788c08d 100644
+--- a/arch/x86/include/asm/reboot.h
++++ b/arch/x86/include/asm/reboot.h
+@@ -25,6 +25,8 @@ void __noreturn machine_real_restart(unsigned int type);
+ #define MRR_BIOS 0
+ #define MRR_APM 1
+
++void cpu_emergency_disable_virtualization(void);
++
+ typedef void (*nmi_shootdown_cb)(int, struct pt_regs*);
+ void nmi_panic_self_stop(struct pt_regs *regs);
+ void nmi_shootdown_cpus(nmi_shootdown_cb callback);
+diff --git a/arch/x86/include/asm/special_insns.h b/arch/x86/include/asm/special_insns.h
+index 35f709f619fb4..c2e322189f853 100644
+--- a/arch/x86/include/asm/special_insns.h
++++ b/arch/x86/include/asm/special_insns.h
+@@ -295,7 +295,7 @@ static inline int enqcmds(void __iomem *dst, const void *src)
+ return 0;
+ }
+
+-static inline void tile_release(void)
++static __always_inline void tile_release(void)
+ {
+ /*
+ * Instruction opcode for TILERELEASE; supported in binutils
+diff --git a/arch/x86/include/asm/virtext.h b/arch/x86/include/asm/virtext.h
+index 8757078d4442a..3b12e6b994123 100644
+--- a/arch/x86/include/asm/virtext.h
++++ b/arch/x86/include/asm/virtext.h
+@@ -126,7 +126,21 @@ static inline void cpu_svm_disable(void)
+
+ wrmsrl(MSR_VM_HSAVE_PA, 0);
+ rdmsrl(MSR_EFER, efer);
+- wrmsrl(MSR_EFER, efer & ~EFER_SVME);
++ if (efer & EFER_SVME) {
++ /*
++ * Force GIF=1 prior to disabling SVM to ensure INIT and NMI
++ * aren't blocked, e.g. if a fatal error occurred between CLGI
++ * and STGI. Note, STGI may #UD if SVM is disabled from NMI
++ * context between reading EFER and executing STGI. In that
++ * case, GIF must already be set, otherwise the NMI would have
++ * been blocked, so just eat the fault.
++ */
++ asm_volatile_goto("1: stgi\n\t"
++ _ASM_EXTABLE(1b, %l[fault])
++ ::: "memory" : fault);
++fault:
++ wrmsrl(MSR_EFER, efer & ~EFER_SVME);
++ }
+ }
+
+ /** Makes sure SVM is disabled, if it is supported on the CPU
+diff --git a/arch/x86/kernel/acpi/boot.c b/arch/x86/kernel/acpi/boot.c
+index 907cc98b19380..518bda50068cb 100644
+--- a/arch/x86/kernel/acpi/boot.c
++++ b/arch/x86/kernel/acpi/boot.c
+@@ -188,6 +188,17 @@ static int acpi_register_lapic(int id, u32 acpiid, u8 enabled)
+ return cpu;
+ }
+
++static bool __init acpi_is_processor_usable(u32 lapic_flags)
++{
++ if (lapic_flags & ACPI_MADT_ENABLED)
++ return true;
++
++ if (acpi_support_online_capable && (lapic_flags & ACPI_MADT_ONLINE_CAPABLE))
++ return true;
++
++ return false;
++}
++
+ static int __init
+ acpi_parse_x2apic(union acpi_subtable_headers *header, const unsigned long end)
+ {
+@@ -212,6 +223,10 @@ acpi_parse_x2apic(union acpi_subtable_headers *header, const unsigned long end)
+ if (apic_id == 0xffffffff)
+ return 0;
+
++ /* don't register processors that cannot be onlined */
++ if (!acpi_is_processor_usable(processor->lapic_flags))
++ return 0;
++
+ /*
+ * We need to register disabled CPU as well to permit
+ * counting disabled CPUs. This allows us to size
+@@ -250,9 +265,7 @@ acpi_parse_lapic(union acpi_subtable_headers * header, const unsigned long end)
+ return 0;
+
+ /* don't register processors that can not be onlined */
+- if (acpi_support_online_capable &&
+- !(processor->lapic_flags & ACPI_MADT_ENABLED) &&
+- !(processor->lapic_flags & ACPI_MADT_ONLINE_CAPABLE))
++ if (!acpi_is_processor_usable(processor->lapic_flags))
+ return 0;
+
+ /*
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index bca0bd8f48464..daad10e7665bf 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -144,9 +144,17 @@ void __init check_bugs(void)
+ * have unknown values. AMD64_LS_CFG MSR is cached in the early AMD
+ * init code as it is not enumerated and depends on the family.
+ */
+- if (boot_cpu_has(X86_FEATURE_MSR_SPEC_CTRL))
++ if (cpu_feature_enabled(X86_FEATURE_MSR_SPEC_CTRL)) {
+ rdmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
+
++ /*
++ * Previously running kernel (kexec), may have some controls
++ * turned ON. Clear them and let the mitigations setup below
++ * rediscover them based on configuration.
++ */
++ x86_spec_ctrl_base &= ~SPEC_CTRL_MITIGATIONS_MASK;
++ }
++
+ /* Select the proper CPU mitigations before patching alternatives: */
+ spectre_v1_select_mitigation();
+ spectre_v2_select_mitigation();
+@@ -1124,14 +1132,18 @@ spectre_v2_parse_user_cmdline(void)
+ return SPECTRE_V2_USER_CMD_AUTO;
+ }
+
+-static inline bool spectre_v2_in_ibrs_mode(enum spectre_v2_mitigation mode)
++static inline bool spectre_v2_in_eibrs_mode(enum spectre_v2_mitigation mode)
+ {
+- return mode == SPECTRE_V2_IBRS ||
+- mode == SPECTRE_V2_EIBRS ||
++ return mode == SPECTRE_V2_EIBRS ||
+ mode == SPECTRE_V2_EIBRS_RETPOLINE ||
+ mode == SPECTRE_V2_EIBRS_LFENCE;
+ }
+
++static inline bool spectre_v2_in_ibrs_mode(enum spectre_v2_mitigation mode)
++{
++ return spectre_v2_in_eibrs_mode(mode) || mode == SPECTRE_V2_IBRS;
++}
++
+ static void __init
+ spectre_v2_user_select_mitigation(void)
+ {
+@@ -1194,12 +1206,19 @@ spectre_v2_user_select_mitigation(void)
+ }
+
+ /*
+- * If no STIBP, IBRS or enhanced IBRS is enabled, or SMT impossible,
+- * STIBP is not required.
++ * If no STIBP, enhanced IBRS is enabled, or SMT impossible, STIBP
++ * is not required.
++ *
++ * Enhanced IBRS also protects against cross-thread branch target
++ * injection in user-mode as the IBRS bit remains always set which
++ * implicitly enables cross-thread protections. However, in legacy IBRS
++ * mode, the IBRS bit is set only on kernel entry and cleared on return
++ * to userspace. This disables the implicit cross-thread protection,
++ * so allow for STIBP to be selected in that case.
+ */
+ if (!boot_cpu_has(X86_FEATURE_STIBP) ||
+ !smt_possible ||
+- spectre_v2_in_ibrs_mode(spectre_v2_enabled))
++ spectre_v2_in_eibrs_mode(spectre_v2_enabled))
+ return;
+
+ /*
+@@ -2327,7 +2346,7 @@ static ssize_t mmio_stale_data_show_state(char *buf)
+
+ static char *stibp_state(void)
+ {
+- if (spectre_v2_in_ibrs_mode(spectre_v2_enabled))
++ if (spectre_v2_in_eibrs_mode(spectre_v2_enabled))
+ return "";
+
+ switch (spectre_v2_user_stibp) {
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index f3cc7699e1e1b..6a25e93f2a87c 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -2302,30 +2302,45 @@ void cpu_init_secondary(void)
+ #endif
+
+ #ifdef CONFIG_MICROCODE_LATE_LOADING
+-/*
++/**
++ * store_cpu_caps() - Store a snapshot of CPU capabilities
++ * @curr_info: Pointer where to store it
++ *
++ * Returns: None
++ */
++void store_cpu_caps(struct cpuinfo_x86 *curr_info)
++{
++ /* Reload CPUID max function as it might've changed. */
++ curr_info->cpuid_level = cpuid_eax(0);
++
++ /* Copy all capability leafs and pick up the synthetic ones. */
++ memcpy(&curr_info->x86_capability, &boot_cpu_data.x86_capability,
++ sizeof(curr_info->x86_capability));
++
++ /* Get the hardware CPUID leafs */
++ get_cpu_cap(curr_info);
++}
++
++/**
++ * microcode_check() - Check if any CPU capabilities changed after an update.
++ * @prev_info: CPU capabilities stored before an update.
++ *
+ * The microcode loader calls this upon late microcode load to recheck features,
+ * only when microcode has been updated. Caller holds microcode_mutex and CPU
+ * hotplug lock.
++ *
++ * Return: None
+ */
+-void microcode_check(void)
++void microcode_check(struct cpuinfo_x86 *prev_info)
+ {
+- struct cpuinfo_x86 info;
++ struct cpuinfo_x86 curr_info;
+
+ perf_check_microcode();
+
+- /* Reload CPUID max function as it might've changed. */
+- info.cpuid_level = cpuid_eax(0);
+-
+- /*
+- * Copy all capability leafs to pick up the synthetic ones so that
+- * memcmp() below doesn't fail on that. The ones coming from CPUID will
+- * get overwritten in get_cpu_cap().
+- */
+- memcpy(&info.x86_capability, &boot_cpu_data.x86_capability, sizeof(info.x86_capability));
+-
+- get_cpu_cap(&info);
++ store_cpu_caps(&curr_info);
+
+- if (!memcmp(&info.x86_capability, &boot_cpu_data.x86_capability, sizeof(info.x86_capability)))
++ if (!memcmp(&prev_info->x86_capability, &curr_info.x86_capability,
++ sizeof(prev_info->x86_capability)))
+ return;
+
+ pr_warn("x86/CPU: CPU features have changed after loading microcode, but might not take effect.\n");
+diff --git a/arch/x86/kernel/cpu/microcode/amd.c b/arch/x86/kernel/cpu/microcode/amd.c
+index 56471f750762a..ac59783e6e9f6 100644
+--- a/arch/x86/kernel/cpu/microcode/amd.c
++++ b/arch/x86/kernel/cpu/microcode/amd.c
+@@ -55,7 +55,9 @@ struct cont_desc {
+ };
+
+ static u32 ucode_new_rev;
+-static u8 amd_ucode_patch[PATCH_MAX_SIZE];
++
++/* One blob per node. */
++static u8 amd_ucode_patch[MAX_NUMNODES][PATCH_MAX_SIZE];
+
+ /*
+ * Microcode patch container file is prepended to the initrd in cpio
+@@ -428,7 +430,7 @@ apply_microcode_early_amd(u32 cpuid_1_eax, void *ucode, size_t size, bool save_p
+ patch = (u8 (*)[PATCH_MAX_SIZE])__pa_nodebug(&amd_ucode_patch);
+ #else
+ new_rev = &ucode_new_rev;
+- patch = &amd_ucode_patch;
++ patch = &amd_ucode_patch[0];
+ #endif
+
+ desc.cpuid_1_eax = cpuid_1_eax;
+@@ -553,8 +555,7 @@ void load_ucode_amd_ap(unsigned int cpuid_1_eax)
+ apply_microcode_early_amd(cpuid_1_eax, cp.data, cp.size, false);
+ }
+
+-static enum ucode_state
+-load_microcode_amd(bool save, u8 family, const u8 *data, size_t size);
++static enum ucode_state load_microcode_amd(u8 family, const u8 *data, size_t size);
+
+ int __init save_microcode_in_initrd_amd(unsigned int cpuid_1_eax)
+ {
+@@ -572,19 +573,19 @@ int __init save_microcode_in_initrd_amd(unsigned int cpuid_1_eax)
+ if (!desc.mc)
+ return -EINVAL;
+
+- ret = load_microcode_amd(true, x86_family(cpuid_1_eax), desc.data, desc.size);
++ ret = load_microcode_amd(x86_family(cpuid_1_eax), desc.data, desc.size);
+ if (ret > UCODE_UPDATED)
+ return -EINVAL;
+
+ return 0;
+ }
+
+-void reload_ucode_amd(void)
++void reload_ucode_amd(unsigned int cpu)
+ {
+- struct microcode_amd *mc;
+ u32 rev, dummy __always_unused;
++ struct microcode_amd *mc;
+
+- mc = (struct microcode_amd *)amd_ucode_patch;
++ mc = (struct microcode_amd *)amd_ucode_patch[cpu_to_node(cpu)];
+
+ rdmsr(MSR_AMD64_PATCH_LEVEL, rev, dummy);
+
+@@ -850,9 +851,10 @@ static enum ucode_state __load_microcode_amd(u8 family, const u8 *data,
+ return UCODE_OK;
+ }
+
+-static enum ucode_state
+-load_microcode_amd(bool save, u8 family, const u8 *data, size_t size)
++static enum ucode_state load_microcode_amd(u8 family, const u8 *data, size_t size)
+ {
++ struct cpuinfo_x86 *c;
++ unsigned int nid, cpu;
+ struct ucode_patch *p;
+ enum ucode_state ret;
+
+@@ -865,22 +867,22 @@ load_microcode_amd(bool save, u8 family, const u8 *data, size_t size)
+ return ret;
+ }
+
+- p = find_patch(0);
+- if (!p) {
+- return ret;
+- } else {
+- if (boot_cpu_data.microcode >= p->patch_id)
+- return ret;
++ for_each_node(nid) {
++ cpu = cpumask_first(cpumask_of_node(nid));
++ c = &cpu_data(cpu);
+
+- ret = UCODE_NEW;
+- }
++ p = find_patch(cpu);
++ if (!p)
++ continue;
+
+- /* save BSP's matching patch for early load */
+- if (!save)
+- return ret;
++ if (c->microcode >= p->patch_id)
++ continue;
++
++ ret = UCODE_NEW;
+
+- memset(amd_ucode_patch, 0, PATCH_MAX_SIZE);
+- memcpy(amd_ucode_patch, p->data, min_t(u32, p->size, PATCH_MAX_SIZE));
++ memset(&amd_ucode_patch[nid], 0, PATCH_MAX_SIZE);
++ memcpy(&amd_ucode_patch[nid], p->data, min_t(u32, p->size, PATCH_MAX_SIZE));
++ }
+
+ return ret;
+ }
+@@ -905,14 +907,9 @@ static enum ucode_state request_microcode_amd(int cpu, struct device *device)
+ {
+ char fw_name[36] = "amd-ucode/microcode_amd.bin";
+ struct cpuinfo_x86 *c = &cpu_data(cpu);
+- bool bsp = c->cpu_index == boot_cpu_data.cpu_index;
+ enum ucode_state ret = UCODE_NFOUND;
+ const struct firmware *fw;
+
+- /* reload ucode container only on the boot cpu */
+- if (!bsp)
+- return UCODE_OK;
+-
+ if (c->x86 >= 0x15)
+ snprintf(fw_name, sizeof(fw_name), "amd-ucode/microcode_amd_fam%.2xh.bin", c->x86);
+
+@@ -925,7 +922,7 @@ static enum ucode_state request_microcode_amd(int cpu, struct device *device)
+ if (!verify_container(fw->data, fw->size, false))
+ goto fw_release;
+
+- ret = load_microcode_amd(bsp, c->x86, fw->data, fw->size);
++ ret = load_microcode_amd(c->x86, fw->data, fw->size);
+
+ fw_release:
+ release_firmware(fw);
+diff --git a/arch/x86/kernel/cpu/microcode/core.c b/arch/x86/kernel/cpu/microcode/core.c
+index 712aafff96e03..7487518dc2eb0 100644
+--- a/arch/x86/kernel/cpu/microcode/core.c
++++ b/arch/x86/kernel/cpu/microcode/core.c
+@@ -298,7 +298,7 @@ struct cpio_data find_microcode_in_initrd(const char *path, bool use_pa)
+ #endif
+ }
+
+-void reload_early_microcode(void)
++void reload_early_microcode(unsigned int cpu)
+ {
+ int vendor, family;
+
+@@ -312,7 +312,7 @@ void reload_early_microcode(void)
+ break;
+ case X86_VENDOR_AMD:
+ if (family >= 0x10)
+- reload_ucode_amd();
++ reload_ucode_amd(cpu);
+ break;
+ default:
+ break;
+@@ -438,6 +438,7 @@ wait_for_siblings:
+ static int microcode_reload_late(void)
+ {
+ int old = boot_cpu_data.microcode, ret;
++ struct cpuinfo_x86 prev_info;
+
+ pr_err("Attempting late microcode loading - it is dangerous and taints the kernel.\n");
+ pr_err("You should switch to early loading, if possible.\n");
+@@ -445,12 +446,21 @@ static int microcode_reload_late(void)
+ atomic_set(&late_cpus_in, 0);
+ atomic_set(&late_cpus_out, 0);
+
+- ret = stop_machine_cpuslocked(__reload_late, NULL, cpu_online_mask);
+- if (ret == 0)
+- microcode_check();
++ /*
++ * Take a snapshot before the microcode update in order to compare and
++ * check whether any bits changed after an update.
++ */
++ store_cpu_caps(&prev_info);
+
+- pr_info("Reload completed, microcode revision: 0x%x -> 0x%x\n",
+- old, boot_cpu_data.microcode);
++ ret = stop_machine_cpuslocked(__reload_late, NULL, cpu_online_mask);
++ if (!ret) {
++ pr_info("Reload succeeded, microcode revision: 0x%x -> 0x%x\n",
++ old, boot_cpu_data.microcode);
++ microcode_check(&prev_info);
++ } else {
++ pr_info("Reload failed, current microcode revision: 0x%x\n",
++ boot_cpu_data.microcode);
++ }
+
+ return ret;
+ }
+@@ -557,7 +567,7 @@ void microcode_bsp_resume(void)
+ if (uci->mc)
+ microcode_ops->apply_microcode(cpu);
+ else
+- reload_early_microcode();
++ reload_early_microcode(cpu);
+ }
+
+ static struct syscore_ops mc_syscore_ops = {
+diff --git a/arch/x86/kernel/crash.c b/arch/x86/kernel/crash.c
+index 305514431f26e..cdd92ab43cda4 100644
+--- a/arch/x86/kernel/crash.c
++++ b/arch/x86/kernel/crash.c
+@@ -37,7 +37,6 @@
+ #include <linux/kdebug.h>
+ #include <asm/cpu.h>
+ #include <asm/reboot.h>
+-#include <asm/virtext.h>
+ #include <asm/intel_pt.h>
+ #include <asm/crash.h>
+ #include <asm/cmdline.h>
+@@ -81,15 +80,6 @@ static void kdump_nmi_callback(int cpu, struct pt_regs *regs)
+ */
+ cpu_crash_vmclear_loaded_vmcss();
+
+- /* Disable VMX or SVM if needed.
+- *
+- * We need to disable virtualization on all CPUs.
+- * Having VMX or SVM enabled on any CPU may break rebooting
+- * after the kdump kernel has finished its task.
+- */
+- cpu_emergency_vmxoff();
+- cpu_emergency_svm_disable();
+-
+ /*
+ * Disable Intel PT to stop its logging
+ */
+@@ -148,12 +138,7 @@ void native_machine_crash_shutdown(struct pt_regs *regs)
+ */
+ cpu_crash_vmclear_loaded_vmcss();
+
+- /* Booting kdump kernel with VMX or SVM enabled won't work,
+- * because (among other limitations) we can't disable paging
+- * with the virt flags.
+- */
+- cpu_emergency_vmxoff();
+- cpu_emergency_svm_disable();
++ cpu_emergency_disable_virtualization();
+
+ /*
+ * Disable Intel PT to stop its logging
+diff --git a/arch/x86/kernel/fpu/context.h b/arch/x86/kernel/fpu/context.h
+index 958accf2ccf07..9fcfa5c4dad79 100644
+--- a/arch/x86/kernel/fpu/context.h
++++ b/arch/x86/kernel/fpu/context.h
+@@ -57,7 +57,7 @@ static inline void fpregs_restore_userregs(void)
+ 	struct fpu *fpu = &current->thread.fpu;
+ int cpu = smp_processor_id();
+
+- if (WARN_ON_ONCE(current->flags & PF_KTHREAD))
++ if (WARN_ON_ONCE(current->flags & (PF_KTHREAD | PF_IO_WORKER)))
+ return;
+
+ if (!fpregs_state_valid(fpu, cpu)) {
+diff --git a/arch/x86/kernel/fpu/core.c b/arch/x86/kernel/fpu/core.c
+index 9baa89a8877d0..caf33486dc5ee 100644
+--- a/arch/x86/kernel/fpu/core.c
++++ b/arch/x86/kernel/fpu/core.c
+@@ -426,7 +426,7 @@ void kernel_fpu_begin_mask(unsigned int kfpu_mask)
+
+ this_cpu_write(in_kernel_fpu, true);
+
+- if (!(current->flags & PF_KTHREAD) &&
++ if (!(current->flags & (PF_KTHREAD | PF_IO_WORKER)) &&
+ !test_thread_flag(TIF_NEED_FPU_LOAD)) {
+ set_thread_flag(TIF_NEED_FPU_LOAD);
+ 		save_fpregs_to_fpstate(&current->thread.fpu);
+@@ -853,12 +853,12 @@ int fpu__exception_code(struct fpu *fpu, int trap_nr)
+ * Initialize register state that may prevent from entering low-power idle.
+ * This function will be invoked from the cpuidle driver only when needed.
+ */
+-void fpu_idle_fpregs(void)
++noinstr void fpu_idle_fpregs(void)
+ {
+ /* Note: AMX_TILE being enabled implies XGETBV1 support */
+ if (cpu_feature_enabled(X86_FEATURE_AMX_TILE) &&
+ (xfeatures_in_use() & XFEATURE_MASK_XTILE)) {
+ tile_release();
+-		fpregs_deactivate(&current->thread.fpu);
++ __this_cpu_write(fpu_fpregs_owner_ctx, NULL);
+ }
+ }
+diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c
+index e57e07b0edb64..57b0037d0a996 100644
+--- a/arch/x86/kernel/kprobes/opt.c
++++ b/arch/x86/kernel/kprobes/opt.c
+@@ -46,8 +46,8 @@ unsigned long __recover_optprobed_insn(kprobe_opcode_t *buf, unsigned long addr)
+ /* This function only handles jump-optimized kprobe */
+ if (kp && kprobe_optimized(kp)) {
+ op = container_of(kp, struct optimized_kprobe, kp);
+- /* If op->list is not empty, op is under optimizing */
+- if (list_empty(&op->list))
++ /* If op is optimized or under unoptimizing */
++ if (list_empty(&op->list) || optprobe_queued_unopt(op))
+ goto found;
+ }
+ }
+@@ -353,7 +353,7 @@ int arch_check_optimized_kprobe(struct optimized_kprobe *op)
+
+ for (i = 1; i < op->optinsn.size; i++) {
+ p = get_kprobe(op->kp.addr + i);
+- if (p && !kprobe_disabled(p))
++ if (p && !kprobe_disarmed(p))
+ return -EEXIST;
+ }
+
+diff --git a/arch/x86/kernel/reboot.c b/arch/x86/kernel/reboot.c
+index c3636ea4aa71f..d03c551defccf 100644
+--- a/arch/x86/kernel/reboot.c
++++ b/arch/x86/kernel/reboot.c
+@@ -528,33 +528,29 @@ static inline void kb_wait(void)
+ }
+ }
+
+-static void vmxoff_nmi(int cpu, struct pt_regs *regs)
+-{
+- cpu_emergency_vmxoff();
+-}
++static inline void nmi_shootdown_cpus_on_restart(void);
+
+-/* Use NMIs as IPIs to tell all CPUs to disable virtualization */
+-static void emergency_vmx_disable_all(void)
++static void emergency_reboot_disable_virtualization(void)
+ {
+ /* Just make sure we won't change CPUs while doing this */
+ local_irq_disable();
+
+ /*
+- * Disable VMX on all CPUs before rebooting, otherwise we risk hanging
+- * the machine, because the CPU blocks INIT when it's in VMX root.
++ * Disable virtualization on all CPUs before rebooting to avoid hanging
++ * the system, as VMX and SVM block INIT when running in the host.
+ *
+ * We can't take any locks and we may be on an inconsistent state, so
+- * use NMIs as IPIs to tell the other CPUs to exit VMX root and halt.
++ * use NMIs as IPIs to tell the other CPUs to disable VMX/SVM and halt.
+ *
+- * Do the NMI shootdown even if VMX if off on _this_ CPU, as that
+- * doesn't prevent a different CPU from being in VMX root operation.
++ * Do the NMI shootdown even if virtualization is off on _this_ CPU, as
++ * other CPUs may have virtualization enabled.
+ */
+- if (cpu_has_vmx()) {
+- /* Safely force _this_ CPU out of VMX root operation. */
+- __cpu_emergency_vmxoff();
++ if (cpu_has_vmx() || cpu_has_svm(NULL)) {
++ /* Safely force _this_ CPU out of VMX/SVM operation. */
++ cpu_emergency_disable_virtualization();
+
+- /* Halt and exit VMX root operation on the other CPUs. */
+- nmi_shootdown_cpus(vmxoff_nmi);
++ /* Disable VMX/SVM and halt on other CPUs. */
++ nmi_shootdown_cpus_on_restart();
+ }
+ }
+
+@@ -590,7 +586,7 @@ static void native_machine_emergency_restart(void)
+ unsigned short mode;
+
+ if (reboot_emergency)
+- emergency_vmx_disable_all();
++ emergency_reboot_disable_virtualization();
+
+ tboot_shutdown(TB_SHUTDOWN_REBOOT);
+
+@@ -795,6 +791,17 @@ void machine_crash_shutdown(struct pt_regs *regs)
+ /* This is the CPU performing the emergency shutdown work. */
+ int crashing_cpu = -1;
+
++/*
++ * Disable virtualization, i.e. VMX or SVM, to ensure INIT is recognized during
++ * reboot. VMX blocks INIT if the CPU is post-VMXON, and SVM blocks INIT if
++ * GIF=0, i.e. if the crash occurred between CLGI and STGI.
++ */
++void cpu_emergency_disable_virtualization(void)
++{
++ cpu_emergency_vmxoff();
++ cpu_emergency_svm_disable();
++}
++
+ #if defined(CONFIG_SMP)
+
+ static nmi_shootdown_cb shootdown_callback;
+@@ -817,7 +824,14 @@ static int crash_nmi_callback(unsigned int val, struct pt_regs *regs)
+ return NMI_HANDLED;
+ local_irq_disable();
+
+- shootdown_callback(cpu, regs);
++ if (shootdown_callback)
++ shootdown_callback(cpu, regs);
++
++ /*
++ * Prepare the CPU for reboot _after_ invoking the callback so that the
++ * callback can safely use virtualization instructions, e.g. VMCLEAR.
++ */
++ cpu_emergency_disable_virtualization();
+
+ atomic_dec(&waiting_for_crash_ipi);
+ /* Assume hlt works */
+@@ -828,18 +842,32 @@ static int crash_nmi_callback(unsigned int val, struct pt_regs *regs)
+ return NMI_HANDLED;
+ }
+
+-/*
+- * Halt all other CPUs, calling the specified function on each of them
++/**
++ * nmi_shootdown_cpus - Stop other CPUs via NMI
++ * @callback: Optional callback to be invoked from the NMI handler
++ *
++ * The NMI handler on the remote CPUs invokes @callback, if not
++ * NULL, first and then disables virtualization to ensure that
++ * INIT is recognized during reboot.
+ *
+- * This function can be used to halt all other CPUs on crash
+- * or emergency reboot time. The function passed as parameter
+- * will be called inside a NMI handler on all CPUs.
++ * nmi_shootdown_cpus() can only be invoked once. After the first
++ * invocation all other CPUs are stuck in crash_nmi_callback() and
++ * cannot respond to a second NMI.
+ */
+ void nmi_shootdown_cpus(nmi_shootdown_cb callback)
+ {
+ unsigned long msecs;
++
+ local_irq_disable();
+
++ /*
++ * Avoid certain doom if a shootdown already occurred; re-registering
++ * the NMI handler will cause list corruption, modifying the callback
++ * will do who knows what, etc...
++ */
++ if (WARN_ON_ONCE(crash_ipi_issued))
++ return;
++
+ /* Make a note of crashing cpu. Will be used in NMI callback. */
+ crashing_cpu = safe_smp_processor_id();
+
+@@ -867,7 +895,17 @@ void nmi_shootdown_cpus(nmi_shootdown_cb callback)
+ msecs--;
+ }
+
+- /* Leave the nmi callback set */
++ /*
++ * Leave the nmi callback set, shootdown is a one-time thing. Clearing
++ * the callback could result in a NULL pointer dereference if a CPU
++ * (finally) responds after the timeout expires.
++ */
++}
++
++static inline void nmi_shootdown_cpus_on_restart(void)
++{
++ if (!crash_ipi_issued)
++ nmi_shootdown_cpus(NULL);
+ }
+
+ /*
+@@ -897,6 +935,8 @@ void nmi_shootdown_cpus(nmi_shootdown_cb callback)
+ /* No other CPUs to shoot down */
+ }
+
++static inline void nmi_shootdown_cpus_on_restart(void) { }
++
+ void run_crash_ipi_callback(struct pt_regs *regs)
+ {
+ }
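+/*
+ * A minimal userspace sketch of the one-shot guard used by
+ * nmi_shootdown_cpus() above: once a shootdown has been issued, a second
+ * call must not re-register the NMI handler or swap the callback. The
+ * names and the callback signature here are illustrative, not kernel API.
+ */
+#include <stdbool.h>
+#include <stdio.h>
+
+static bool shootdown_issued;		/* models crash_ipi_issued */
+
+static void shootdown(void (*cb)(int cpu))	/* models nmi_shootdown_cpus() */
+{
+	if (shootdown_issued) {		/* the WARN_ON_ONCE() bail-out */
+		fprintf(stderr, "shootdown already issued, ignoring\n");
+		return;
+	}
+	shootdown_issued = true;
+	if (cb)				/* callback is optional (may be NULL) */
+		cb(0);
+	/* handler stays registered: clearing it could race with late CPUs */
+}
+
+int main(void)
+{
+	shootdown(NULL);	/* reboot path: disable virtualization and halt */
+	shootdown(NULL);	/* second invocation is rejected */
+	return 0;
+}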
+diff --git a/arch/x86/kernel/signal.c b/arch/x86/kernel/signal.c
+index 1504eb8d25aa6..004cb30b74198 100644
+--- a/arch/x86/kernel/signal.c
++++ b/arch/x86/kernel/signal.c
+@@ -360,7 +360,7 @@ static bool strict_sigaltstack_size __ro_after_init = false;
+
+ static int __init strict_sas_size(char *arg)
+ {
+- return kstrtobool(arg, &strict_sigaltstack_size);
++ return kstrtobool(arg, &strict_sigaltstack_size) == 0;
+ }
+ __setup("strict_sas_size", strict_sas_size);
+
+diff --git a/arch/x86/kernel/smp.c b/arch/x86/kernel/smp.c
+index 06db901fabe8e..375b33ecafa27 100644
+--- a/arch/x86/kernel/smp.c
++++ b/arch/x86/kernel/smp.c
+@@ -32,7 +32,7 @@
+ #include <asm/mce.h>
+ #include <asm/trace/irq_vectors.h>
+ #include <asm/kexec.h>
+-#include <asm/virtext.h>
++#include <asm/reboot.h>
+
+ /*
+ * Some notes on x86 processor bugs affecting SMP operation:
+@@ -122,7 +122,7 @@ static int smp_stop_nmi_callback(unsigned int val, struct pt_regs *regs)
+ if (raw_smp_processor_id() == atomic_read(&stopping_cpu))
+ return NMI_HANDLED;
+
+- cpu_emergency_vmxoff();
++ cpu_emergency_disable_virtualization();
+ stop_this_cpu(NULL);
+
+ return NMI_HANDLED;
+@@ -134,7 +134,7 @@ static int smp_stop_nmi_callback(unsigned int val, struct pt_regs *regs)
+ DEFINE_IDTENTRY_SYSVEC(sysvec_reboot)
+ {
+ ack_APIC_irq();
+- cpu_emergency_vmxoff();
++ cpu_emergency_disable_virtualization();
+ stop_this_cpu(NULL);
+ }
+
+diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
+index 4efdb4a4d72c6..7f0c02b5dfdd4 100644
+--- a/arch/x86/kvm/lapic.c
++++ b/arch/x86/kvm/lapic.c
+@@ -2072,10 +2072,18 @@ static void kvm_lapic_xapic_id_updated(struct kvm_lapic *apic)
+ {
+ struct kvm *kvm = apic->vcpu->kvm;
+
++ if (!kvm_apic_hw_enabled(apic))
++ return;
++
+ if (KVM_BUG_ON(apic_x2apic_mode(apic), kvm))
+ return;
+
+- if (kvm_xapic_id(apic) == apic->vcpu->vcpu_id)
++ /*
++ * Deliberately truncate the vCPU ID when detecting a modified APIC ID
++ * to avoid false positives if the vCPU ID, i.e. x2APIC ID, is a 32-bit
++ * value.
++ */
++ if (kvm_xapic_id(apic) == (u8)apic->vcpu->vcpu_id)
+ return;
+
+ kvm_set_apicv_inhibit(apic->vcpu->kvm, APICV_INHIBIT_REASON_APIC_ID_MODIFIED);
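+/*
+ * Why the (u8) cast above avoids false positives: the xAPIC ID register
+ * holds only 8 bits, so a vCPU whose 32-bit x2APIC ID does not fit must be
+ * compared against the truncated value. A standalone demonstration:
+ */
+#include <stdint.h>
+#include <stdio.h>
+
+int main(void)
+{
+	uint32_t vcpu_id = 0x100;		/* does not fit in 8 bits */
+	uint8_t xapic_id = vcpu_id & 0xff;	/* what the register holds */
+
+	/* old check: reports "modified" for every vcpu_id >= 256 */
+	printf("untruncated mismatch: %d\n", xapic_id != vcpu_id);
+	/* fixed check: compare like with like */
+	printf("truncated mismatch:   %d\n", xapic_id != (uint8_t)vcpu_id);
+	return 0;
+}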
+@@ -2219,10 +2227,14 @@ static int kvm_lapic_reg_write(struct kvm_lapic *apic, u32 reg, u32 val)
+ break;
+
+ case APIC_SELF_IPI:
+- if (apic_x2apic_mode(apic))
+- kvm_apic_send_ipi(apic, APIC_DEST_SELF | (val & APIC_VECTOR_MASK), 0);
+- else
++ /*
++ * Self-IPI exists only when x2APIC is enabled. Bits 7:0 hold
++ * the vector, everything else is reserved.
++ */
++ if (!apic_x2apic_mode(apic) || (val & ~APIC_VECTOR_MASK))
+ ret = 1;
++ else
++ kvm_apic_send_ipi(apic, APIC_DEST_SELF | val, 0);
+ break;
+ default:
+ ret = 1;
+@@ -2284,23 +2296,18 @@ void kvm_apic_write_nodecode(struct kvm_vcpu *vcpu, u32 offset)
+ struct kvm_lapic *apic = vcpu->arch.apic;
+ u64 val;
+
+- if (apic_x2apic_mode(apic)) {
+- if (KVM_BUG_ON(kvm_lapic_msr_read(apic, offset, &val), vcpu->kvm))
+- return;
+- } else {
+- val = kvm_lapic_get_reg(apic, offset);
+- }
+-
+ /*
+ * ICR is a single 64-bit register when x2APIC is enabled. For legacy
+ * xAPIC, ICR writes need to go down the common (slightly slower) path
+ * to get the upper half from ICR2.
+ */
+ if (apic_x2apic_mode(apic) && offset == APIC_ICR) {
++ val = kvm_lapic_get_reg64(apic, APIC_ICR);
+ kvm_apic_send_ipi(apic, (u32)val, (u32)(val >> 32));
+ trace_kvm_apic_write(APIC_ICR, val);
+ } else {
+ /* TODO: optimize to just emulate side effect w/o one more write */
++ val = kvm_lapic_get_reg(apic, offset);
+ kvm_lapic_reg_write(apic, offset, (u32)val);
+ }
+ }
+@@ -2429,6 +2436,7 @@ void kvm_apic_update_apicv(struct kvm_vcpu *vcpu)
+ */
+ apic->isr_count = count_vectors(apic->regs + APIC_ISR);
+ }
++ apic->highest_isr_cache = -1;
+ }
+
+ void kvm_lapic_reset(struct kvm_vcpu *vcpu, bool init_event)
+@@ -2484,7 +2492,6 @@ void kvm_lapic_reset(struct kvm_vcpu *vcpu, bool init_event)
+ kvm_lapic_set_reg(apic, APIC_TMR + 0x10 * i, 0);
+ }
+ kvm_apic_update_apicv(vcpu);
+- apic->highest_isr_cache = -1;
+ update_divide_count(apic);
+ atomic_set(&apic->lapic_timer.pending, 0);
+
+@@ -2772,7 +2779,6 @@ int kvm_apic_set_state(struct kvm_vcpu *vcpu, struct kvm_lapic_state *s)
+ __start_apic_timer(apic, APIC_TMCCT);
+ kvm_lapic_set_reg(apic, APIC_TMCCT, 0);
+ kvm_apic_update_apicv(vcpu);
+- apic->highest_isr_cache = -1;
+ if (apic->apicv_active) {
+ static_call_cond(kvm_x86_apicv_post_state_restore)(vcpu);
+ static_call_cond(kvm_x86_hwapic_irr_update)(vcpu, apic_find_highest_irr(apic));
+@@ -2943,13 +2949,17 @@ static int kvm_lapic_msr_read(struct kvm_lapic *apic, u32 reg, u64 *data)
+ static int kvm_lapic_msr_write(struct kvm_lapic *apic, u32 reg, u64 data)
+ {
+ /*
+- * ICR is a 64-bit register in x2APIC mode (and Hyper'v PV vAPIC) and
++ * ICR is a 64-bit register in x2APIC mode (and Hyper-V PV vAPIC) and
+ * can be written as such, all other registers remain accessible only
+ * through 32-bit reads/writes.
+ */
+ if (reg == APIC_ICR)
+ return kvm_x2apic_icr_write(apic, data);
+
++ /* Bits 63:32 are reserved in all other registers. */
++ if (data >> 32)
++ return 1;
++
+ return kvm_lapic_reg_write(apic, reg, (u32)data);
+ }
+
+diff --git a/arch/x86/kvm/svm/avic.c b/arch/x86/kvm/svm/avic.c
+index 6919dee69f182..97ad0661f9639 100644
+--- a/arch/x86/kvm/svm/avic.c
++++ b/arch/x86/kvm/svm/avic.c
+@@ -86,6 +86,12 @@ static void avic_activate_vmcb(struct vcpu_svm *svm)
+ /* Disabling MSR intercept for x2APIC registers */
+ svm_set_x2apic_msr_interception(svm, false);
+ } else {
++ /*
++ * Flush the TLB, the guest may have inserted a non-APIC
++ * mapping into the TLB while AVIC was disabled.
++ */
++ kvm_make_request(KVM_REQ_TLB_FLUSH_CURRENT, &svm->vcpu);
++
+ /* For xAVIC and hybrid-xAVIC modes */
+ vmcb->control.avic_physical_id |= AVIC_MAX_PHYSICAL_ID;
+ /* Enabling MSR intercept for x2APIC registers */
+@@ -496,14 +502,18 @@ int avic_incomplete_ipi_interception(struct kvm_vcpu *vcpu)
+ trace_kvm_avic_incomplete_ipi(vcpu->vcpu_id, icrh, icrl, id, index);
+
+ switch (id) {
++ case AVIC_IPI_FAILURE_INVALID_TARGET:
+ case AVIC_IPI_FAILURE_INVALID_INT_TYPE:
+ /*
+ * Emulate IPIs that are not handled by AVIC hardware, which
+- * only virtualizes Fixed, Edge-Triggered INTRs. The exit is
+- * a trap, e.g. ICR holds the correct value and RIP has been
+- * advanced, KVM is responsible only for emulating the IPI.
+- * Sadly, hardware may sometimes leave the BUSY flag set, in
+- * which case KVM needs to emulate the ICR write as well in
++ * only virtualizes Fixed, Edge-Triggered INTRs, and falls over
++ * if _any_ targets are invalid, e.g. if the logical mode mask
++ * is a superset of running vCPUs.
++ *
++ * The exit is a trap, e.g. ICR holds the correct value and RIP
++ * has been advanced, KVM is responsible only for emulating the
++ * IPI. Sadly, hardware may sometimes leave the BUSY flag set,
++ * in which case KVM needs to emulate the ICR write as well in
+ * order to clear the BUSY flag.
+ */
+ if (icrl & APIC_ICR_BUSY)
+@@ -519,8 +529,6 @@ int avic_incomplete_ipi_interception(struct kvm_vcpu *vcpu)
+ */
+ avic_kick_target_vcpus(vcpu->kvm, apic, icrl, icrh, index);
+ break;
+- case AVIC_IPI_FAILURE_INVALID_TARGET:
+- break;
+ case AVIC_IPI_FAILURE_INVALID_BACKING_PAGE:
+ WARN_ONCE(1, "Invalid backing page\n");
+ break;
+@@ -739,18 +747,6 @@ void avic_apicv_post_state_restore(struct kvm_vcpu *vcpu)
+ avic_handle_ldr_update(vcpu);
+ }
+
+-void avic_set_virtual_apic_mode(struct kvm_vcpu *vcpu)
+-{
+- if (!lapic_in_kernel(vcpu) || avic_mode == AVIC_MODE_NONE)
+- return;
+-
+- if (kvm_get_apic_mode(vcpu) == LAPIC_MODE_INVALID) {
+- WARN_ONCE(true, "Invalid local APIC state (vcpu_id=%d)", vcpu->vcpu_id);
+- return;
+- }
+- avic_refresh_apicv_exec_ctrl(vcpu);
+-}
+-
+ static int avic_set_pi_irte_mode(struct kvm_vcpu *vcpu, bool activate)
+ {
+ int ret = 0;
+@@ -1092,17 +1088,18 @@ void avic_vcpu_put(struct kvm_vcpu *vcpu)
+ WRITE_ONCE(*(svm->avic_physical_id_cache), entry);
+ }
+
+-
+-void avic_refresh_apicv_exec_ctrl(struct kvm_vcpu *vcpu)
++void avic_refresh_virtual_apic_mode(struct kvm_vcpu *vcpu)
+ {
+ struct vcpu_svm *svm = to_svm(vcpu);
+ struct vmcb *vmcb = svm->vmcb01.ptr;
+- bool activated = kvm_vcpu_apicv_active(vcpu);
++
++ if (!lapic_in_kernel(vcpu) || avic_mode == AVIC_MODE_NONE)
++ return;
+
+ if (!enable_apicv)
+ return;
+
+- if (activated) {
++ if (kvm_vcpu_apicv_active(vcpu)) {
+ /**
+ * During AVIC temporary deactivation, guest could update
+ * APIC ID, DFR and LDR registers, which would not be trapped
+@@ -1116,6 +1113,16 @@ void avic_refresh_apicv_exec_ctrl(struct kvm_vcpu *vcpu)
+ avic_deactivate_vmcb(svm);
+ }
+ vmcb_mark_dirty(vmcb, VMCB_AVIC);
++}
++
++void avic_refresh_apicv_exec_ctrl(struct kvm_vcpu *vcpu)
++{
++ bool activated = kvm_vcpu_apicv_active(vcpu);
++
++ if (!enable_apicv)
++ return;
++
++ avic_refresh_virtual_apic_mode(vcpu);
+
+ if (activated)
+ avic_vcpu_load(vcpu, vcpu->cpu);
+diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
+index 86d6897f48068..579038eee94a3 100644
+--- a/arch/x86/kvm/svm/sev.c
++++ b/arch/x86/kvm/svm/sev.c
+@@ -1293,7 +1293,7 @@ static int sev_send_update_data(struct kvm *kvm, struct kvm_sev_cmd *argp)
+
+ /* Check if we are crossing the page boundary */
+ offset = params.guest_uaddr & (PAGE_SIZE - 1);
+- if ((params.guest_len + offset > PAGE_SIZE))
++ if (params.guest_len > PAGE_SIZE || (params.guest_len + offset) > PAGE_SIZE)
+ return -EINVAL;
+
+ /* Pin guest memory */
+@@ -1473,7 +1473,7 @@ static int sev_receive_update_data(struct kvm *kvm, struct kvm_sev_cmd *argp)
+
+ /* Check if we are crossing the page boundary */
+ offset = params.guest_uaddr & (PAGE_SIZE - 1);
+- if ((params.guest_len + offset > PAGE_SIZE))
++ if (params.guest_len > PAGE_SIZE || (params.guest_len + offset) > PAGE_SIZE)
+ return -EINVAL;
+
+ hdr = psp_copy_user_blob(params.hdr_uaddr, params.hdr_len);
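+/*
+ * The extra guest_len > PAGE_SIZE test above is an overflow guard:
+ * guest_len is userspace-controlled, so guest_len + offset can wrap and
+ * slip past the old single comparison. Standalone demonstration, with
+ * PAGE_SIZE assumed to be 4096:
+ */
+#include <stdint.h>
+#include <stdio.h>
+
+#define PAGE_SIZE 4096ULL
+
+int main(void)
+{
+	uint64_t guest_len = UINT64_MAX - 100;	/* attacker-chosen length */
+	uint64_t offset = 200;		/* guest_uaddr & (PAGE_SIZE - 1) */
+
+	/* old check: the sum wraps to 99 and the request is accepted */
+	printf("old rejects: %d\n", guest_len + offset > PAGE_SIZE);
+	/* fixed check: oversized lengths are rejected before the addition */
+	printf("new rejects: %d\n",
+	       guest_len > PAGE_SIZE || guest_len + offset > PAGE_SIZE);
+	return 0;
+}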
+diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
+index 9a194aa1a75a4..22d054ba5939a 100644
+--- a/arch/x86/kvm/svm/svm.c
++++ b/arch/x86/kvm/svm/svm.c
+@@ -4771,7 +4771,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
+ .enable_nmi_window = svm_enable_nmi_window,
+ .enable_irq_window = svm_enable_irq_window,
+ .update_cr8_intercept = svm_update_cr8_intercept,
+- .set_virtual_apic_mode = avic_set_virtual_apic_mode,
++ .set_virtual_apic_mode = avic_refresh_virtual_apic_mode,
+ .refresh_apicv_exec_ctrl = avic_refresh_apicv_exec_ctrl,
+ .check_apicv_inhibit_reasons = avic_check_apicv_inhibit_reasons,
+ .apicv_post_state_restore = avic_apicv_post_state_restore,
+diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
+index 4826e6cc611bf..d0ed3f5952295 100644
+--- a/arch/x86/kvm/svm/svm.h
++++ b/arch/x86/kvm/svm/svm.h
+@@ -648,7 +648,7 @@ void avic_vcpu_blocking(struct kvm_vcpu *vcpu);
+ void avic_vcpu_unblocking(struct kvm_vcpu *vcpu);
+ void avic_ring_doorbell(struct kvm_vcpu *vcpu);
+ unsigned long avic_vcpu_get_apicv_inhibit_reasons(struct kvm_vcpu *vcpu);
+-void avic_set_virtual_apic_mode(struct kvm_vcpu *vcpu);
++void avic_refresh_virtual_apic_mode(struct kvm_vcpu *vcpu);
+
+
+ /* sev.c */
+diff --git a/arch/x86/kvm/svm/svm_onhyperv.h b/arch/x86/kvm/svm/svm_onhyperv.h
+index 45faf84476cec..65c355b4b8bf0 100644
+--- a/arch/x86/kvm/svm/svm_onhyperv.h
++++ b/arch/x86/kvm/svm/svm_onhyperv.h
+@@ -30,7 +30,7 @@ static inline void svm_hv_init_vmcb(struct vmcb *vmcb)
+ hve->hv_enlightenments_control.msr_bitmap = 1;
+ }
+
+-static inline void svm_hv_hardware_setup(void)
++static inline __init void svm_hv_hardware_setup(void)
+ {
+ if (npt_enabled &&
+ ms_hyperv.nested_features & HV_X64_NESTED_ENLIGHTENED_TLB) {
+@@ -84,7 +84,7 @@ static inline void svm_hv_init_vmcb(struct vmcb *vmcb)
+ {
+ }
+
+-static inline void svm_hv_hardware_setup(void)
++static inline __init void svm_hv_hardware_setup(void)
+ {
+ }
+
+diff --git a/arch/x86/kvm/vmx/hyperv.h b/arch/x86/kvm/vmx/hyperv.h
+index 571e7929d14e7..9dee71441b594 100644
+--- a/arch/x86/kvm/vmx/hyperv.h
++++ b/arch/x86/kvm/vmx/hyperv.h
+@@ -190,16 +190,6 @@ static inline u16 evmcs_read16(unsigned long field)
+ return *(u16 *)((char *)current_evmcs + offset);
+ }
+
+-static inline void evmcs_touch_msr_bitmap(void)
+-{
+- if (unlikely(!current_evmcs))
+- return;
+-
+- if (current_evmcs->hv_enlightenments_control.msr_bitmap)
+- current_evmcs->hv_clean_fields &=
+- ~HV_VMX_ENLIGHTENED_CLEAN_FIELD_MSR_BITMAP;
+-}
+-
+ static inline void evmcs_load(u64 phys_addr)
+ {
+ struct hv_vp_assist_page *vp_ap =
+@@ -219,7 +209,6 @@ static inline u64 evmcs_read64(unsigned long field) { return 0; }
+ static inline u32 evmcs_read32(unsigned long field) { return 0; }
+ static inline u16 evmcs_read16(unsigned long field) { return 0; }
+ static inline void evmcs_load(u64 phys_addr) {}
+-static inline void evmcs_touch_msr_bitmap(void) {}
+ #endif /* IS_ENABLED(CONFIG_HYPERV) */
+
+ #define EVMPTR_INVALID (-1ULL)
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index 7eec0226d56a2..939e395cda3ff 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -3865,8 +3865,13 @@ static void vmx_msr_bitmap_l01_changed(struct vcpu_vmx *vmx)
+ * 'Enlightened MSR Bitmap' feature L0 needs to know that MSR
+ * bitmap has changed.
+ */
+- if (static_branch_unlikely(&enable_evmcs))
+- evmcs_touch_msr_bitmap();
++ if (IS_ENABLED(CONFIG_HYPERV) && static_branch_unlikely(&enable_evmcs)) {
++ struct hv_enlightened_vmcs *evmcs = (void *)vmx->vmcs01.vmcs;
++
++ if (evmcs->hv_enlightenments_control.msr_bitmap)
++ evmcs->hv_clean_fields &=
++ ~HV_VMX_ENLIGHTENED_CLEAN_FIELD_MSR_BITMAP;
++ }
+
+ vmx->nested.force_msr_bitmap_recalc = true;
+ }
+diff --git a/block/bio-integrity.c b/block/bio-integrity.c
+index 3f5685c00e360..91ffee6fc8cb4 100644
+--- a/block/bio-integrity.c
++++ b/block/bio-integrity.c
+@@ -418,6 +418,7 @@ int bio_integrity_clone(struct bio *bio, struct bio *bio_src,
+
+ bip->bip_vcnt = bip_src->bip_vcnt;
+ bip->bip_iter = bip_src->bip_iter;
++ bip->bip_flags = bip_src->bip_flags & ~BIP_BLOCK_INTEGRITY;
+
+ return 0;
+ }
+diff --git a/block/bio.c b/block/bio.c
+index ab59a491a883e..4e7d11672306b 100644
+--- a/block/bio.c
++++ b/block/bio.c
+@@ -773,6 +773,7 @@ static inline void bio_put_percpu_cache(struct bio *bio)
+
+ if ((bio->bi_opf & REQ_POLLED) && !WARN_ON_ONCE(in_interrupt())) {
+ bio->bi_next = cache->free_list;
++ bio->bi_bdev = NULL;
+ cache->free_list = bio;
+ cache->nr++;
+ } else {
+diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
+index 9ac1efb053e08..45881f8c79130 100644
+--- a/block/blk-cgroup.c
++++ b/block/blk-cgroup.c
+@@ -118,14 +118,32 @@ static void blkg_free_workfn(struct work_struct *work)
+ {
+ struct blkcg_gq *blkg = container_of(work, struct blkcg_gq,
+ free_work);
++ struct request_queue *q = blkg->q;
+ int i;
+
++ /*
++ * pd_free_fn() can also be called from blkcg_deactivate_policy(),
++ * in order to make sure pd_free_fn() is called in order, the deletion
++ * of the list blkg->q_node is delayed to here from blkg_destroy(), and
++ * blkcg_mutex is used to synchronize blkg_free_workfn() and
++ * blkcg_deactivate_policy().
++ */
++ if (q)
++ mutex_lock(&q->blkcg_mutex);
++
+ for (i = 0; i < BLKCG_MAX_POLS; i++)
+ if (blkg->pd[i])
+ blkcg_policy[i]->pd_free_fn(blkg->pd[i]);
+
+- if (blkg->q)
+- blk_put_queue(blkg->q);
++ if (blkg->parent)
++ blkg_put(blkg->parent);
++
++ if (q) {
++ list_del_init(&blkg->q_node);
++ mutex_unlock(&q->blkcg_mutex);
++ blk_put_queue(q);
++ }
++
+ free_percpu(blkg->iostat_cpu);
+ percpu_ref_exit(&blkg->refcnt);
+ kfree(blkg);
+@@ -158,8 +176,6 @@ static void __blkg_release(struct rcu_head *rcu)
+
+ /* release the blkcg and parent blkg refs this blkg has been holding */
+ css_put(&blkg->blkcg->css);
+- if (blkg->parent)
+- blkg_put(blkg->parent);
+ blkg_free(blkg);
+ }
+
+@@ -458,9 +474,14 @@ static void blkg_destroy(struct blkcg_gq *blkg)
+ lockdep_assert_held(&blkg->q->queue_lock);
+ lockdep_assert_held(&blkcg->lock);
+
+- /* Something wrong if we are trying to remove same group twice */
+- WARN_ON_ONCE(list_empty(&blkg->q_node));
+- WARN_ON_ONCE(hlist_unhashed(&blkg->blkcg_node));
++ /*
++ * blkg stays on the queue list until blkg_free_workfn(), see details in
++ * blkg_free_workfn(), hence this function can be called from
++ * blkcg_destroy_blkgs() first and again from blkg_destroy_all() before
++ * blkg_free_workfn().
++ */
++ if (hlist_unhashed(&blkg->blkcg_node))
++ return;
+
+ for (i = 0; i < BLKCG_MAX_POLS; i++) {
+ struct blkcg_policy *pol = blkcg_policy[i];
+@@ -472,7 +493,6 @@ static void blkg_destroy(struct blkcg_gq *blkg)
+ blkg->online = false;
+
+ radix_tree_delete(&blkcg->blkg_tree, blkg->q->id);
+- list_del_init(&blkg->q_node);
+ hlist_del_init_rcu(&blkg->blkcg_node);
+
+ /*
+@@ -1273,6 +1293,7 @@ int blkcg_init_disk(struct gendisk *disk)
+ int ret;
+
+ INIT_LIST_HEAD(&q->blkg_list);
++ mutex_init(&q->blkcg_mutex);
+
+ new_blkg = blkg_alloc(&blkcg_root, disk, GFP_KERNEL);
+ if (!new_blkg)
+@@ -1510,6 +1531,7 @@ void blkcg_deactivate_policy(struct request_queue *q,
+ if (queue_is_mq(q))
+ blk_mq_freeze_queue(q);
+
++ mutex_lock(&q->blkcg_mutex);
+ spin_lock_irq(&q->queue_lock);
+
+ __clear_bit(pol->plid, q->blkcg_pols);
+@@ -1528,6 +1550,7 @@ void blkcg_deactivate_policy(struct request_queue *q,
+ }
+
+ spin_unlock_irq(&q->queue_lock);
++ mutex_unlock(&q->blkcg_mutex);
+
+ if (queue_is_mq(q))
+ blk_mq_unfreeze_queue(q);
+diff --git a/block/blk-core.c b/block/blk-core.c
+index b5098355d8b27..5a0049215ee72 100644
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -684,6 +684,18 @@ static void __submit_bio_noacct_mq(struct bio *bio)
+
+ void submit_bio_noacct_nocheck(struct bio *bio)
+ {
++ blk_cgroup_bio_start(bio);
++ blkcg_bio_issue_init(bio);
++
++ if (!bio_flagged(bio, BIO_TRACE_COMPLETION)) {
++ trace_block_bio_queue(bio);
++ /*
++ * Now that enqueuing has been traced, we need to trace
++ * completion as well.
++ */
++ bio_set_flag(bio, BIO_TRACE_COMPLETION);
++ }
++
+ /*
+ * We only want one ->submit_bio to be active at a time, else stack
+ * usage with stacked devices could be a problem. Use current->bio_list
+@@ -788,17 +800,6 @@ void submit_bio_noacct(struct bio *bio)
+
+ if (blk_throtl_bio(bio))
+ return;
+-
+- blk_cgroup_bio_start(bio);
+- blkcg_bio_issue_init(bio);
+-
+- if (!bio_flagged(bio, BIO_TRACE_COMPLETION)) {
+- trace_block_bio_queue(bio);
+- /* Now that enqueuing has been traced, we need to trace
+- * completion as well.
+- */
+- bio_set_flag(bio, BIO_TRACE_COMPLETION);
+- }
+ submit_bio_noacct_nocheck(bio);
+ return;
+
+@@ -853,10 +854,16 @@ EXPORT_SYMBOL(submit_bio);
+ */
+ int bio_poll(struct bio *bio, struct io_comp_batch *iob, unsigned int flags)
+ {
+- struct request_queue *q = bdev_get_queue(bio->bi_bdev);
+ blk_qc_t cookie = READ_ONCE(bio->bi_cookie);
++ struct block_device *bdev;
++ struct request_queue *q;
+ int ret = 0;
+
++ bdev = READ_ONCE(bio->bi_bdev);
++ if (!bdev)
++ return 0;
++
++ q = bdev_get_queue(bdev);
+ if (cookie == BLK_QC_T_NONE ||
+ !test_bit(QUEUE_FLAG_POLL, &q->queue_flags))
+ return 0;
+@@ -916,7 +923,7 @@ int iocb_bio_iopoll(struct kiocb *kiocb, struct io_comp_batch *iob,
+ */
+ rcu_read_lock();
+ bio = READ_ONCE(kiocb->private);
+- if (bio && bio->bi_bdev)
++ if (bio)
+ ret = bio_poll(bio, iob, flags);
+ rcu_read_unlock();
+
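+/*
+ * bio_poll() above can race with the bio being recycled (bi_bdev is now
+ * cleared in bio_put_percpu_cache()), so the pointer must be loaded once
+ * and NULL-checked before use. A minimal sketch of the load-once pattern;
+ * the volatile cast merely models READ_ONCE() and the types are stand-ins.
+ */
+#include <stddef.h>
+#include <stdio.h>
+
+struct bdev { int id; };
+struct bio { struct bdev *bi_bdev; };
+
+#define LOAD_ONCE(x) (*(volatile typeof(x) *)&(x))
+
+static int poll_bio(struct bio *bio)
+{
+	struct bdev *bdev = LOAD_ONCE(bio->bi_bdev);	/* single racy load */
+
+	if (!bdev)		/* bio was recycled: nothing to poll */
+		return 0;
+	return bdev->id;	/* safe: a later store cannot yank our copy */
+}
+
+int main(void)
+{
+	struct bdev d = { .id = 7 };
+	struct bio b = { .bi_bdev = &d };
+
+	printf("%d\n", poll_bio(&b));	/* 7 */
+	b.bi_bdev = NULL;		/* models bio_put_percpu_cache() */
+	printf("%d\n", poll_bio(&b));	/* 0 */
+	return 0;
+}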
+diff --git a/block/blk-iocost.c b/block/blk-iocost.c
+index 6955605629e4f..ec7219caea165 100644
+--- a/block/blk-iocost.c
++++ b/block/blk-iocost.c
+@@ -866,9 +866,14 @@ static void calc_lcoefs(u64 bps, u64 seqiops, u64 randiops,
+
+ *page = *seqio = *randio = 0;
+
+- if (bps)
+- *page = DIV64_U64_ROUND_UP(VTIME_PER_SEC,
+- DIV_ROUND_UP_ULL(bps, IOC_PAGE_SIZE));
++ if (bps) {
++ u64 bps_pages = DIV_ROUND_UP_ULL(bps, IOC_PAGE_SIZE);
++
++ if (bps_pages)
++ *page = DIV64_U64_ROUND_UP(VTIME_PER_SEC, bps_pages);
++ else
++ *page = 1;
++ }
+
+ if (seqiops) {
+ v = DIV64_U64_ROUND_UP(VTIME_PER_SEC, seqiops);
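+/*
+ * The bps_pages guard above matters because round-up division overflows
+ * for very large bps: (bps + IOC_PAGE_SIZE - 1) wraps, the quotient
+ * becomes 0, and the old code then divided by it. Standalone check; the
+ * VTIME_PER_SEC value is illustrative, not the kernel's.
+ */
+#include <stdint.h>
+#include <stdio.h>
+
+#define IOC_PAGE_SIZE	4096ULL
+#define VTIME_PER_SEC	1000000000ULL
+#define DIV_ROUND_UP_ULL(n, d)	(((n) + (d) - 1) / (d))
+
+int main(void)
+{
+	uint64_t bps = UINT64_MAX;	/* maximal user-supplied bandwidth */
+	uint64_t pages = DIV_ROUND_UP_ULL(bps, IOC_PAGE_SIZE);
+
+	printf("pages = %llu\n", (unsigned long long)pages);	/* 0: wrapped */
+	/* the fix: charge the minimum cost of 1 instead of dividing by 0 */
+	uint64_t cost = pages ? DIV_ROUND_UP_ULL(VTIME_PER_SEC, pages) : 1;
+	printf("page cost = %llu\n", (unsigned long long)cost);
+	return 0;
+}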
+diff --git a/block/blk-merge.c b/block/blk-merge.c
+index b7c193d67185d..808b58129d3e4 100644
+--- a/block/blk-merge.c
++++ b/block/blk-merge.c
+@@ -757,6 +757,33 @@ void blk_rq_set_mixed_merge(struct request *rq)
+ rq->rq_flags |= RQF_MIXED_MERGE;
+ }
+
++static inline blk_opf_t bio_failfast(const struct bio *bio)
++{
++ if (bio->bi_opf & REQ_RAHEAD)
++ return REQ_FAILFAST_MASK;
++
++ return bio->bi_opf & REQ_FAILFAST_MASK;
++}
++
++/*
++ * After we are marked as MIXED_MERGE, any new RA bio has to be updated
++ * as failfast, and request's failfast has to be updated in case of
++ * front merge.
++ */
++static inline void blk_update_mixed_merge(struct request *req,
++ struct bio *bio, bool front_merge)
++{
++ if (req->rq_flags & RQF_MIXED_MERGE) {
++ if (bio->bi_opf & REQ_RAHEAD)
++ bio->bi_opf |= REQ_FAILFAST_MASK;
++
++ if (front_merge) {
++ req->cmd_flags &= ~REQ_FAILFAST_MASK;
++ req->cmd_flags |= bio->bi_opf & REQ_FAILFAST_MASK;
++ }
++ }
++}
++
+ static void blk_account_io_merge_request(struct request *req)
+ {
+ if (blk_do_io_stat(req)) {
+@@ -954,7 +981,7 @@ enum bio_merge_status {
+ static enum bio_merge_status bio_attempt_back_merge(struct request *req,
+ struct bio *bio, unsigned int nr_segs)
+ {
+- const blk_opf_t ff = bio->bi_opf & REQ_FAILFAST_MASK;
++ const blk_opf_t ff = bio_failfast(bio);
+
+ if (!ll_back_merge_fn(req, bio, nr_segs))
+ return BIO_MERGE_FAILED;
+@@ -965,6 +992,8 @@ static enum bio_merge_status bio_attempt_back_merge(struct request *req,
+ if ((req->cmd_flags & REQ_FAILFAST_MASK) != ff)
+ blk_rq_set_mixed_merge(req);
+
++ blk_update_mixed_merge(req, bio, false);
++
+ req->biotail->bi_next = bio;
+ req->biotail = bio;
+ req->__data_len += bio->bi_iter.bi_size;
+@@ -978,7 +1007,7 @@ static enum bio_merge_status bio_attempt_back_merge(struct request *req,
+ static enum bio_merge_status bio_attempt_front_merge(struct request *req,
+ struct bio *bio, unsigned int nr_segs)
+ {
+- const blk_opf_t ff = bio->bi_opf & REQ_FAILFAST_MASK;
++ const blk_opf_t ff = bio_failfast(bio);
+
+ if (!ll_front_merge_fn(req, bio, nr_segs))
+ return BIO_MERGE_FAILED;
+@@ -989,6 +1018,8 @@ static enum bio_merge_status bio_attempt_front_merge(struct request *req,
+ if ((req->cmd_flags & REQ_FAILFAST_MASK) != ff)
+ blk_rq_set_mixed_merge(req);
+
++ blk_update_mixed_merge(req, bio, true);
++
+ bio->bi_next = req->bio;
+ req->bio = bio;
+
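+/*
+ * bio_failfast() above treats any readahead bio as fully failfast for
+ * merge bookkeeping, since RA bios can gain REQ_FAILFAST_MASK bits after
+ * being merged and must not corrupt the mixed-merge accounting. A small
+ * sketch of the flag computation; the bit values are illustrative, not
+ * the kernel's actual layout.
+ */
+#include <stdio.h>
+
+#define REQ_FAILFAST_MASK	0x7u
+#define REQ_RAHEAD		0x8u
+
+static unsigned int bio_failfast(unsigned int opf)
+{
+	if (opf & REQ_RAHEAD)		/* RA bio: treat as fully failfast */
+		return REQ_FAILFAST_MASK;
+	return opf & REQ_FAILFAST_MASK;	/* otherwise its own failfast bits */
+}
+
+int main(void)
+{
+	printf("plain bio:     %#x\n", bio_failfast(0x0));		/* 0 */
+	printf("failfast bio:  %#x\n", bio_failfast(0x1));		/* 0x1 */
+	printf("readahead bio: %#x\n", bio_failfast(REQ_RAHEAD));	/* 0x7 */
+	return 0;
+}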
+diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
+index 23d1a90fec427..06b312c691143 100644
+--- a/block/blk-mq-sched.c
++++ b/block/blk-mq-sched.c
+@@ -19,8 +19,7 @@
+ #include "blk-wbt.h"
+
+ /*
+- * Mark a hardware queue as needing a restart. For shared queues, maintain
+- * a count of how many hardware queues are marked for restart.
++ * Mark a hardware queue as needing a restart.
+ */
+ void blk_mq_sched_mark_restart_hctx(struct blk_mq_hw_ctx *hctx)
+ {
+@@ -82,7 +81,7 @@ dispatch:
+ /*
+ * Only SCSI implements .get_budget and .put_budget, and SCSI restarts
+ * its queue by itself in its completion handler, so we don't need to
+- * restart queue if .get_budget() returns BLK_STS_NO_RESOURCE.
++ * restart queue if .get_budget() fails to get the budget.
+ *
+ * Returns -EAGAIN if hctx->dispatch was found non-empty and run_work has to
+ * be run again. This is necessary to avoid starving flushes.
+@@ -210,7 +209,7 @@ static struct blk_mq_ctx *blk_mq_next_ctx(struct blk_mq_hw_ctx *hctx,
+ /*
+ * Only SCSI implements .get_budget and .put_budget, and SCSI restarts
+ * its queue by itself in its completion handler, so we don't need to
+- * restart queue if .get_budget() returns BLK_STS_NO_RESOURCE.
++ * restart queue if .get_budget() fails to get the budget.
+ *
+ * Returns -EAGAIN if hctx->dispatch was found non-empty and run_work has to
+ * be run again. This is necessary to avoid starving flushes.
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 9c8dc70020bc9..b9e3b558367f1 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -658,7 +658,8 @@ struct request *blk_mq_alloc_request_hctx(struct request_queue *q,
+ * allocator for this for the rare use case of a command tied to
+ * a specific queue.
+ */
+- if (WARN_ON_ONCE(!(flags & (BLK_MQ_REQ_NOWAIT | BLK_MQ_REQ_RESERVED))))
++ if (WARN_ON_ONCE(!(flags & BLK_MQ_REQ_NOWAIT)) ||
++ WARN_ON_ONCE(!(flags & BLK_MQ_REQ_RESERVED)))
+ return ERR_PTR(-EINVAL);
+
+ if (hctx_idx >= q->nr_hw_queues)
+@@ -1825,12 +1826,13 @@ static int blk_mq_dispatch_wake(wait_queue_entry_t *wait, unsigned mode,
+ static bool blk_mq_mark_tag_wait(struct blk_mq_hw_ctx *hctx,
+ struct request *rq)
+ {
+- struct sbitmap_queue *sbq = &hctx->tags->bitmap_tags;
++ struct sbitmap_queue *sbq;
+ struct wait_queue_head *wq;
+ wait_queue_entry_t *wait;
+ bool ret;
+
+- if (!(hctx->flags & BLK_MQ_F_TAG_QUEUE_SHARED)) {
++ if (!(hctx->flags & BLK_MQ_F_TAG_QUEUE_SHARED) &&
++ !(blk_mq_is_shared_tags(hctx->flags))) {
+ blk_mq_sched_mark_restart_hctx(hctx);
+
+ /*
+@@ -1848,6 +1850,10 @@ static bool blk_mq_mark_tag_wait(struct blk_mq_hw_ctx *hctx,
+ if (!list_empty_careful(&wait->entry))
+ return false;
+
++ if (blk_mq_tag_is_reserved(rq->mq_hctx->sched_tags, rq->internal_tag))
++ sbq = &hctx->tags->breserved_tags;
++ else
++ sbq = &hctx->tags->bitmap_tags;
+ wq = &bt_wait_ptr(sbq, hctx)->wait;
+
+ spin_lock_irq(&wq->lock);
+@@ -2096,7 +2102,8 @@ out:
+ bool needs_restart;
+ /* For non-shared tags, the RESTART check will suffice */
+ bool no_tag = prep == PREP_DISPATCH_NO_TAG &&
+- (hctx->flags & BLK_MQ_F_TAG_QUEUE_SHARED);
++ ((hctx->flags & BLK_MQ_F_TAG_QUEUE_SHARED) ||
++ blk_mq_is_shared_tags(hctx->flags));
+
+ if (nr_budgets)
+ blk_mq_release_budgets(q, list);
+diff --git a/block/fops.c b/block/fops.c
+index 50d245e8c913a..d2e6be4e3d1c7 100644
+--- a/block/fops.c
++++ b/block/fops.c
+@@ -221,6 +221,24 @@ static ssize_t __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter,
+ bio_endio(bio);
+ break;
+ }
++ if (iocb->ki_flags & IOCB_NOWAIT) {
++ /*
++ * This is nonblocking IO, and we need to allocate
++ * another bio if we have data left to map. As we
++ * cannot guarantee that one of the sub bios will not
++ * fail getting issued FOR NOWAIT and as error results
++ * are coalesced across all of them, be safe and ask for
++ * a retry of this from blocking context.
++ */
++ if (unlikely(iov_iter_count(iter))) {
++ bio_release_pages(bio, false);
++ bio_clear_flag(bio, BIO_REFFED);
++ bio_put(bio);
++ blk_finish_plug(&plug);
++ return -EAGAIN;
++ }
++ bio->bi_opf |= REQ_NOWAIT;
++ }
+
+ if (is_read) {
+ if (dio->flags & DIO_SHOULD_DIRTY)
+@@ -228,9 +246,6 @@ static ssize_t __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter,
+ } else {
+ task_io_account_write(bio->bi_iter.bi_size);
+ }
+- if (iocb->ki_flags & IOCB_NOWAIT)
+- bio->bi_opf |= REQ_NOWAIT;
+-
+ dio->size += bio->bi_iter.bi_size;
+ pos += bio->bi_iter.bi_size;
+
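+/*
+ * The fops.c change above bails out of nonblocking direct I/O as soon as
+ * a second bio would be needed: errors from split bios are coalesced, so
+ * a NOWAIT failure in one of several bios cannot be reported precisely.
+ * A sketch of the control flow with stand-in names and a single pass:
+ */
+#include <stdbool.h>
+#include <stddef.h>
+#include <stdio.h>
+
+#define EAGAIN 11
+
+static int submit_dio(size_t len, size_t max_per_bio, bool nowait)
+{
+	size_t mapped = len < max_per_bio ? len : max_per_bio;
+
+	len -= mapped;
+	if (nowait && len) {
+		/*
+		 * More data remains, so another bio would be required.
+		 * Undo and ask the caller to retry from blocking context.
+		 */
+		return -EAGAIN;
+	}
+	/* ... issue the bio, blocking as needed for the rest ... */
+	return 0;
+}
+
+int main(void)
+{
+	printf("%d\n", submit_dio(4096, 8192, true));	/* 0: one bio */
+	printf("%d\n", submit_dio(65536, 8192, true));	/* -11: retry */
+	return 0;
+}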
+diff --git a/crypto/asymmetric_keys/public_key.c b/crypto/asymmetric_keys/public_key.c
+index 2f8352e888602..eca5671ad3f22 100644
+--- a/crypto/asymmetric_keys/public_key.c
++++ b/crypto/asymmetric_keys/public_key.c
+@@ -186,8 +186,28 @@ static int software_key_query(const struct kernel_pkey_params *params,
+
+ len = crypto_akcipher_maxsize(tfm);
+ info->key_size = len * 8;
+- info->max_data_size = len;
+- info->max_sig_size = len;
++
++ if (strncmp(pkey->pkey_algo, "ecdsa", 5) == 0) {
++ /*
++ * ECDSA key sizes are much smaller than RSA, and thus could
++ * operate on (hashed) inputs that are larger than key size.
++ * For example SHA384-hashed input used with secp256r1
++ * based keys. Set max_data_size to be at least as large as
++ * the largest supported hash size (SHA512)
++ */
++ info->max_data_size = 64;
++
++ /*
++ * Verify takes ECDSA-Sig (described in RFC 5480) as input,
++ * which is actually 2 'key_size'-bit integers encoded in
++ * ASN.1. Account for the ASN.1 encoding overhead here.
++ */
++ info->max_sig_size = 2 * (len + 3) + 2;
++ } else {
++ info->max_data_size = len;
++ info->max_sig_size = len;
++ }
++
+ info->max_enc_size = len;
+ info->max_dec_size = len;
+ info->supported_ops = (KEYCTL_SUPPORTS_ENCRYPT |
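+/*
+ * The 2 * (len + 3) + 2 bound above comes from DER encoding: each of the
+ * two key_size-bit integers r and s needs up to len + 1 value bytes (one
+ * possible leading zero) plus a 2-byte INTEGER header, all wrapped in a
+ * 2-byte SEQUENCE header. Worked out for two common curves:
+ */
+#include <stdio.h>
+
+static unsigned int ecdsa_max_sig_size(unsigned int len)	/* key bytes */
+{
+	/* SEQUENCE hdr (2) + 2 * (INTEGER hdr (2) + pad (1) + value (len)) */
+	return 2 * (len + 3) + 2;
+}
+
+int main(void)
+{
+	printf("secp256r1: %u\n", ecdsa_max_sig_size(32));	/* 72 */
+	printf("secp384r1: %u\n", ecdsa_max_sig_size(48));	/* 104 */
+	return 0;
+}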
+diff --git a/crypto/essiv.c b/crypto/essiv.c
+index e33369df90344..307eba74b901e 100644
+--- a/crypto/essiv.c
++++ b/crypto/essiv.c
+@@ -171,7 +171,12 @@ static void essiv_aead_done(struct crypto_async_request *areq, int err)
+ struct aead_request *req = areq->data;
+ struct essiv_aead_request_ctx *rctx = aead_request_ctx(req);
+
++ if (err == -EINPROGRESS)
++ goto out;
++
+ kfree(rctx->assoc);
++
++out:
+ aead_request_complete(req, err);
+ }
+
+@@ -247,7 +252,7 @@ static int essiv_aead_crypt(struct aead_request *req, bool enc)
+ err = enc ? crypto_aead_encrypt(subreq) :
+ crypto_aead_decrypt(subreq);
+
+- if (rctx->assoc && err != -EINPROGRESS)
++ if (rctx->assoc && err != -EINPROGRESS && err != -EBUSY)
+ kfree(rctx->assoc);
+ return err;
+ }
+diff --git a/crypto/rsa-pkcs1pad.c b/crypto/rsa-pkcs1pad.c
+index 6ee5b8a060c06..4e9d2244ee317 100644
+--- a/crypto/rsa-pkcs1pad.c
++++ b/crypto/rsa-pkcs1pad.c
+@@ -214,16 +214,14 @@ static void pkcs1pad_encrypt_sign_complete_cb(
+ struct crypto_async_request *child_async_req, int err)
+ {
+ struct akcipher_request *req = child_async_req->data;
+- struct crypto_async_request async_req;
+
+ if (err == -EINPROGRESS)
+- return;
++ goto out;
++
++ err = pkcs1pad_encrypt_sign_complete(req, err);
+
+- async_req.data = req->base.data;
+- async_req.tfm = crypto_akcipher_tfm(crypto_akcipher_reqtfm(req));
+- async_req.flags = child_async_req->flags;
+- req->base.complete(&async_req,
+- pkcs1pad_encrypt_sign_complete(req, err));
++out:
++ akcipher_request_complete(req, err);
+ }
+
+ static int pkcs1pad_encrypt(struct akcipher_request *req)
+@@ -332,15 +330,14 @@ static void pkcs1pad_decrypt_complete_cb(
+ struct crypto_async_request *child_async_req, int err)
+ {
+ struct akcipher_request *req = child_async_req->data;
+- struct crypto_async_request async_req;
+
+ if (err == -EINPROGRESS)
+- return;
++ goto out;
++
++ err = pkcs1pad_decrypt_complete(req, err);
+
+- async_req.data = req->base.data;
+- async_req.tfm = crypto_akcipher_tfm(crypto_akcipher_reqtfm(req));
+- async_req.flags = child_async_req->flags;
+- req->base.complete(&async_req, pkcs1pad_decrypt_complete(req, err));
++out:
++ akcipher_request_complete(req, err);
+ }
+
+ static int pkcs1pad_decrypt(struct akcipher_request *req)
+@@ -513,15 +510,14 @@ static void pkcs1pad_verify_complete_cb(
+ struct crypto_async_request *child_async_req, int err)
+ {
+ struct akcipher_request *req = child_async_req->data;
+- struct crypto_async_request async_req;
+
+ if (err == -EINPROGRESS)
+- return;
++ goto out;
+
+- async_req.data = req->base.data;
+- async_req.tfm = crypto_akcipher_tfm(crypto_akcipher_reqtfm(req));
+- async_req.flags = child_async_req->flags;
+- req->base.complete(&async_req, pkcs1pad_verify_complete(req, err));
++ err = pkcs1pad_verify_complete(req, err);
++
++out:
++ akcipher_request_complete(req, err);
+ }
+
+ /*
+diff --git a/crypto/seqiv.c b/crypto/seqiv.c
+index 0899d527c2845..b1bcfe537daf1 100644
+--- a/crypto/seqiv.c
++++ b/crypto/seqiv.c
+@@ -23,7 +23,7 @@ static void seqiv_aead_encrypt_complete2(struct aead_request *req, int err)
+ struct aead_request *subreq = aead_request_ctx(req);
+ struct crypto_aead *geniv;
+
+- if (err == -EINPROGRESS)
++ if (err == -EINPROGRESS || err == -EBUSY)
+ return;
+
+ if (err)
+diff --git a/crypto/xts.c b/crypto/xts.c
+index 63c85b9e64e08..de6cbcf69bbd6 100644
+--- a/crypto/xts.c
++++ b/crypto/xts.c
+@@ -203,12 +203,12 @@ static void xts_encrypt_done(struct crypto_async_request *areq, int err)
+ if (!err) {
+ struct xts_request_ctx *rctx = skcipher_request_ctx(req);
+
+- rctx->subreq.base.flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP;
++ rctx->subreq.base.flags &= CRYPTO_TFM_REQ_MAY_BACKLOG;
+ err = xts_xor_tweak_post(req, true);
+
+ if (!err && unlikely(req->cryptlen % XTS_BLOCK_SIZE)) {
+ err = xts_cts_final(req, crypto_skcipher_encrypt);
+- if (err == -EINPROGRESS)
++ if (err == -EINPROGRESS || err == -EBUSY)
+ return;
+ }
+ }
+@@ -223,12 +223,12 @@ static void xts_decrypt_done(struct crypto_async_request *areq, int err)
+ if (!err) {
+ struct xts_request_ctx *rctx = skcipher_request_ctx(req);
+
+- rctx->subreq.base.flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP;
++ rctx->subreq.base.flags &= CRYPTO_TFM_REQ_MAY_BACKLOG;
+ err = xts_xor_tweak_post(req, false);
+
+ if (!err && unlikely(req->cryptlen % XTS_BLOCK_SIZE)) {
+ err = xts_cts_final(req, crypto_skcipher_decrypt);
+- if (err == -EINPROGRESS)
++ if (err == -EINPROGRESS || err == -EBUSY)
+ return;
+ }
+ }
+diff --git a/drivers/accel/Kconfig b/drivers/accel/Kconfig
+index c9ce849b2984a..c8177ae415b8b 100644
+--- a/drivers/accel/Kconfig
++++ b/drivers/accel/Kconfig
+@@ -6,9 +6,10 @@
+ # as, but not limited to, Machine-Learning and Deep-Learning acceleration
+ # devices
+ #
++if DRM
++
+ menuconfig DRM_ACCEL
+ bool "Compute Acceleration Framework"
+- depends on DRM
+ help
+ Framework for device drivers of compute acceleration devices, such
+ as, but not limited to, Machine-Learning and Deep-Learning
+@@ -22,3 +23,5 @@ menuconfig DRM_ACCEL
+ major number than GPUs, and will be exposed to user-space using
+ different device files, called accel/accel* (in /dev, sysfs
+ and debugfs).
++
++endif
+diff --git a/drivers/acpi/acpica/Makefile b/drivers/acpi/acpica/Makefile
+index 9e0d95d76fff7..30f3fc13c29d1 100644
+--- a/drivers/acpi/acpica/Makefile
++++ b/drivers/acpi/acpica/Makefile
+@@ -3,7 +3,7 @@
+ # Makefile for ACPICA Core interpreter
+ #
+
+-ccflags-y := -Os -D_LINUX -DBUILDING_ACPICA
++ccflags-y := -D_LINUX -DBUILDING_ACPICA
+ ccflags-$(CONFIG_ACPI_DEBUG) += -DACPI_DEBUG_OUTPUT
+
+ # use acpi.o to put all files here into acpi.o modparam namespace
+diff --git a/drivers/acpi/acpica/hwvalid.c b/drivers/acpi/acpica/hwvalid.c
+index 915b26448d2c9..0d392e7b0747b 100644
+--- a/drivers/acpi/acpica/hwvalid.c
++++ b/drivers/acpi/acpica/hwvalid.c
+@@ -23,8 +23,8 @@ acpi_hw_validate_io_request(acpi_io_address address, u32 bit_width);
+ *
+ * The table is used to implement the Microsoft port access rules that
+ * first appeared in Windows XP. Some ports are always illegal, and some
+- * ports are only illegal if the BIOS calls _OSI with a win_XP string or
+- * later (meaning that the BIOS itelf is post-XP.)
++ * ports are only illegal if the BIOS calls _OSI with nothing newer than
++ * the specific _OSI strings.
+ *
+ * This provides ACPICA with the desired port protections and
+ * Microsoft compatibility.
+@@ -145,7 +145,8 @@ acpi_hw_validate_io_request(acpi_io_address address, u32 bit_width)
+
+ /* Port illegality may depend on the _OSI calls made by the BIOS */
+
+- if (acpi_gbl_osi_data >= port_info->osi_dependency) {
++ if (port_info->osi_dependency == ACPI_ALWAYS_ILLEGAL ||
++ acpi_gbl_osi_data == port_info->osi_dependency) {
+ ACPI_DEBUG_PRINT((ACPI_DB_VALUES,
+ "Denied AML access to port 0x%8.8X%8.8X/%X (%s 0x%.4X-0x%.4X)\n",
+ ACPI_FORMAT_UINT64(address),
+diff --git a/drivers/acpi/acpica/nsrepair.c b/drivers/acpi/acpica/nsrepair.c
+index 367fcd201f96e..ec512e06a48ed 100644
+--- a/drivers/acpi/acpica/nsrepair.c
++++ b/drivers/acpi/acpica/nsrepair.c
+@@ -181,8 +181,9 @@ acpi_ns_simple_repair(struct acpi_evaluate_info *info,
+ * Try to fix if there was no return object. Warning if failed to fix.
+ */
+ if (!return_object) {
+- if (expected_btypes && (!(expected_btypes & ACPI_RTYPE_NONE))) {
+- if (package_index != ACPI_NOT_PACKAGE_ELEMENT) {
++ if (expected_btypes) {
++ if (!(expected_btypes & ACPI_RTYPE_NONE) &&
++ package_index != ACPI_NOT_PACKAGE_ELEMENT) {
+ ACPI_WARN_PREDEFINED((AE_INFO,
+ info->full_pathname,
+ ACPI_WARN_ALWAYS,
+@@ -196,14 +197,15 @@ acpi_ns_simple_repair(struct acpi_evaluate_info *info,
+ if (ACPI_SUCCESS(status)) {
+ return (AE_OK); /* Repair was successful */
+ }
+- } else {
++ }
++
++ if (expected_btypes != ACPI_RTYPE_NONE) {
+ ACPI_WARN_PREDEFINED((AE_INFO,
+ info->full_pathname,
+ ACPI_WARN_ALWAYS,
+ "Missing expected return value"));
++ return (AE_AML_NO_RETURN_VALUE);
+ }
+-
+- return (AE_AML_NO_RETURN_VALUE);
+ }
+ }
+
+diff --git a/drivers/acpi/battery.c b/drivers/acpi/battery.c
+index f4badcdde76e6..fb64bd217d826 100644
+--- a/drivers/acpi/battery.c
++++ b/drivers/acpi/battery.c
+@@ -440,7 +440,7 @@ static int extract_package(struct acpi_battery *battery,
+
+ if (element->type == ACPI_TYPE_STRING ||
+ element->type == ACPI_TYPE_BUFFER)
+- strncpy(ptr, element->string.pointer, 32);
++ strscpy(ptr, element->string.pointer, 32);
+ else if (element->type == ACPI_TYPE_INTEGER) {
+ strncpy(ptr, (u8 *)&element->integer.value,
+ sizeof(u64));
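+/*
+ * Why strscpy() above: strncpy() does not NUL-terminate when the source
+ * fills the buffer, so a long battery string could leave ptr unterminated.
+ * A userspace sketch with a hand-rolled strscpy()-alike (the real kernel
+ * helper also reports truncation, which is omitted here):
+ */
+#include <stdio.h>
+#include <string.h>
+
+static size_t copy_string(char *dst, const char *src, size_t size)
+{
+	size_t len = strlen(src);
+
+	if (len >= size)		/* truncate, keep room for the NUL */
+		len = size - 1;
+	memcpy(dst, src, len);
+	dst[len] = '\0';
+	return len;
+}
+
+int main(void)
+{
+	char a[8], b[8];
+	const char *model = "LONGMODELNAME";
+
+	strncpy(a, model, sizeof(a));		/* a is NOT NUL-terminated */
+	copy_string(b, model, sizeof(b));	/* b always is */
+	printf("%.8s vs %s\n", a, b);	/* a must be bound-printed by hand */
+	return 0;
+}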
+diff --git a/drivers/acpi/resource.c b/drivers/acpi/resource.c
+index 192d1784e409b..a222bda7e15b0 100644
+--- a/drivers/acpi/resource.c
++++ b/drivers/acpi/resource.c
+@@ -467,17 +467,34 @@ static const struct dmi_system_id lenovo_laptop[] = {
+ { }
+ };
+
+-static const struct dmi_system_id schenker_gm_rg[] = {
++static const struct dmi_system_id tongfang_gm_rg[] = {
+ {
+- .ident = "XMG CORE 15 (M22)",
++ .ident = "TongFang GMxRGxx/XMG CORE 15 (M22)/TUXEDO Stellaris 15 Gen4 AMD",
+ .matches = {
+- DMI_MATCH(DMI_SYS_VENDOR, "SchenkerTechnologiesGmbH"),
+ DMI_MATCH(DMI_BOARD_NAME, "GMxRGxx"),
+ },
+ },
+ { }
+ };
+
++static const struct dmi_system_id maingear_laptop[] = {
++ {
++ .ident = "MAINGEAR Vector Pro 2 15",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Micro Electronics Inc"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "MG-VCP2-15A3070T"),
++ }
++ },
++ {
++ .ident = "MAINGEAR Vector Pro 2 17",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Micro Electronics Inc"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "MG-VCP2-17A3070T"),
++ },
++ },
++ { }
++};
++
+ struct irq_override_cmp {
+ const struct dmi_system_id *system;
+ unsigned char irq;
+@@ -492,7 +509,8 @@ static const struct irq_override_cmp override_table[] = {
+ { asus_laptop, 1, ACPI_LEVEL_SENSITIVE, ACPI_ACTIVE_LOW, 0, false },
+ { lenovo_laptop, 6, ACPI_LEVEL_SENSITIVE, ACPI_ACTIVE_LOW, 0, true },
+ { lenovo_laptop, 10, ACPI_LEVEL_SENSITIVE, ACPI_ACTIVE_LOW, 0, true },
+- { schenker_gm_rg, 1, ACPI_EDGE_SENSITIVE, ACPI_ACTIVE_LOW, 1, true },
++ { tongfang_gm_rg, 1, ACPI_EDGE_SENSITIVE, ACPI_ACTIVE_LOW, 1, true },
++ { maingear_laptop, 1, ACPI_EDGE_SENSITIVE, ACPI_ACTIVE_LOW, 1, true },
+ };
+
+ static bool acpi_dev_irq_override(u32 gsi, u8 triggering, u8 polarity,
+diff --git a/drivers/acpi/video_detect.c b/drivers/acpi/video_detect.c
+index a8c02608dde45..710ac640267dd 100644
+--- a/drivers/acpi/video_detect.c
++++ b/drivers/acpi/video_detect.c
+@@ -434,7 +434,7 @@ static const struct dmi_system_id video_detect_dmi_table[] = {
+ /* Lenovo Ideapad Z570 */
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+- DMI_MATCH(DMI_PRODUCT_NAME, "102434U"),
++ DMI_MATCH(DMI_PRODUCT_VERSION, "Ideapad Z570"),
+ },
+ },
+ {
+diff --git a/drivers/ata/ahci.c b/drivers/ata/ahci.c
+index 3bb9bb483fe37..14a1c0d14916f 100644
+--- a/drivers/ata/ahci.c
++++ b/drivers/ata/ahci.c
+@@ -421,7 +421,6 @@ static const struct pci_device_id ahci_pci_tbl[] = {
+ { PCI_VDEVICE(INTEL, 0x34d3), board_ahci_low_power }, /* Ice Lake LP AHCI */
+ { PCI_VDEVICE(INTEL, 0x02d3), board_ahci_low_power }, /* Comet Lake PCH-U AHCI */
+ { PCI_VDEVICE(INTEL, 0x02d7), board_ahci_low_power }, /* Comet Lake PCH RAID */
+- { PCI_VDEVICE(INTEL, 0xa0d3), board_ahci_low_power }, /* Tiger Lake UP{3,4} AHCI */
+
+ /* JMicron 360/1/3/5/6, match class to avoid IDE function */
+ { PCI_VENDOR_ID_JMICRON, PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID,
+diff --git a/drivers/base/core.c b/drivers/base/core.c
+index a3e14143ec0cf..6ed21587be287 100644
+--- a/drivers/base/core.c
++++ b/drivers/base/core.c
+@@ -54,11 +54,12 @@ static LIST_HEAD(deferred_sync);
+ static unsigned int defer_sync_state_count = 1;
+ static DEFINE_MUTEX(fwnode_link_lock);
+ static bool fw_devlink_is_permissive(void);
++static void __fw_devlink_link_to_consumers(struct device *dev);
+ static bool fw_devlink_drv_reg_done;
+ static bool fw_devlink_best_effort;
+
+ /**
+- * fwnode_link_add - Create a link between two fwnode_handles.
++ * __fwnode_link_add - Create a link between two fwnode_handles.
+ * @con: Consumer end of the link.
+ * @sup: Supplier end of the link.
+ *
+@@ -74,35 +75,42 @@ static bool fw_devlink_best_effort;
+ * Attempts to create duplicate links between the same pair of fwnode handles
+ * are ignored and there is no reference counting.
+ */
+-int fwnode_link_add(struct fwnode_handle *con, struct fwnode_handle *sup)
++static int __fwnode_link_add(struct fwnode_handle *con,
++ struct fwnode_handle *sup, u8 flags)
+ {
+ struct fwnode_link *link;
+- int ret = 0;
+-
+- mutex_lock(&fwnode_link_lock);
+
+ list_for_each_entry(link, &sup->consumers, s_hook)
+- if (link->consumer == con)
+- goto out;
++ if (link->consumer == con) {
++ link->flags |= flags;
++ return 0;
++ }
+
+ link = kzalloc(sizeof(*link), GFP_KERNEL);
+- if (!link) {
+- ret = -ENOMEM;
+- goto out;
+- }
++ if (!link)
++ return -ENOMEM;
+
+ link->supplier = sup;
+ INIT_LIST_HEAD(&link->s_hook);
+ link->consumer = con;
+ INIT_LIST_HEAD(&link->c_hook);
++ link->flags = flags;
+
+ list_add(&link->s_hook, &sup->consumers);
+ list_add(&link->c_hook, &con->suppliers);
+ pr_debug("%pfwP Linked as a fwnode consumer to %pfwP\n",
+ con, sup);
+-out:
+- mutex_unlock(&fwnode_link_lock);
+
++ return 0;
++}
++
++int fwnode_link_add(struct fwnode_handle *con, struct fwnode_handle *sup)
++{
++ int ret;
++
++ mutex_lock(&fwnode_link_lock);
++ ret = __fwnode_link_add(con, sup, 0);
++ mutex_unlock(&fwnode_link_lock);
+ return ret;
+ }
+
+@@ -121,6 +129,19 @@ static void __fwnode_link_del(struct fwnode_link *link)
+ kfree(link);
+ }
+
++/**
++ * __fwnode_link_cycle - Mark a fwnode link as being part of a cycle.
++ * @link: the fwnode_link to be marked
++ *
++ * The fwnode_link_lock needs to be held when this function is called.
++ */
++static void __fwnode_link_cycle(struct fwnode_link *link)
++{
++ pr_debug("%pfwf: Relaxing link with %pfwf\n",
++ link->consumer, link->supplier);
++ link->flags |= FWLINK_FLAG_CYCLE;
++}
++
+ /**
+ * fwnode_links_purge_suppliers - Delete all supplier links of fwnode_handle.
+ * @fwnode: fwnode whose supplier links need to be deleted
+@@ -181,6 +202,51 @@ void fw_devlink_purge_absent_suppliers(struct fwnode_handle *fwnode)
+ }
+ EXPORT_SYMBOL_GPL(fw_devlink_purge_absent_suppliers);
+
++/**
++ * __fwnode_links_move_consumers - Move consumer from @from to @to fwnode_handle
++ * @from: move consumers away from this fwnode
++ * @to: move consumers to this fwnode
++ *
++ * Move all consumer links from @from fwnode to @to fwnode.
++ */
++static void __fwnode_links_move_consumers(struct fwnode_handle *from,
++ struct fwnode_handle *to)
++{
++ struct fwnode_link *link, *tmp;
++
++ list_for_each_entry_safe(link, tmp, &from->consumers, s_hook) {
++ __fwnode_link_add(link->consumer, to, link->flags);
++ __fwnode_link_del(link);
++ }
++}
++
++/**
++ * __fw_devlink_pickup_dangling_consumers - Pick up dangling consumers
++ * @fwnode: fwnode from which to pick up dangling consumers
++ * @new_sup: fwnode of new supplier
++ *
++ * If the @fwnode has a corresponding struct device and the device supports
++ * probing (that is, added to a bus), then we want to let fw_devlink create
++ * MANAGED device links to this device, so leave @fwnode and its descendant's
++ * fwnode links alone.
++ *
++ * Otherwise, move its consumers to the new supplier @new_sup.
++ */
++static void __fw_devlink_pickup_dangling_consumers(struct fwnode_handle *fwnode,
++ struct fwnode_handle *new_sup)
++{
++ struct fwnode_handle *child;
++
++ if (fwnode->dev && fwnode->dev->bus)
++ return;
++
++ fwnode->flags |= FWNODE_FLAG_NOT_DEVICE;
++ __fwnode_links_move_consumers(fwnode, new_sup);
++
++ fwnode_for_each_available_child_node(fwnode, child)
++ __fw_devlink_pickup_dangling_consumers(child, new_sup);
++}
++
+ #ifdef CONFIG_SRCU
+ static DEFINE_MUTEX(device_links_lock);
+ DEFINE_STATIC_SRCU(device_links_srcu);
+@@ -272,6 +338,12 @@ static bool device_is_ancestor(struct device *dev, struct device *target)
+ return false;
+ }
+
++static inline bool device_link_flag_is_sync_state_only(u32 flags)
++{
++ return (flags & ~(DL_FLAG_INFERRED | DL_FLAG_CYCLE)) ==
++ (DL_FLAG_SYNC_STATE_ONLY | DL_FLAG_MANAGED);
++}
++
+ /**
+ * device_is_dependent - Check if one device depends on another one
+ * @dev: Device to check dependencies for.
+@@ -298,8 +370,7 @@ int device_is_dependent(struct device *dev, void *target)
+ return ret;
+
+ list_for_each_entry(link, &dev->links.consumers, s_node) {
+- if ((link->flags & ~DL_FLAG_INFERRED) ==
+- (DL_FLAG_SYNC_STATE_ONLY | DL_FLAG_MANAGED))
++ if (device_link_flag_is_sync_state_only(link->flags))
+ continue;
+
+ if (link->consumer == target)
+@@ -372,8 +443,7 @@ static int device_reorder_to_tail(struct device *dev, void *not_used)
+
+ device_for_each_child(dev, NULL, device_reorder_to_tail);
+ list_for_each_entry(link, &dev->links.consumers, s_node) {
+- if ((link->flags & ~DL_FLAG_INFERRED) ==
+- (DL_FLAG_SYNC_STATE_ONLY | DL_FLAG_MANAGED))
++ if (device_link_flag_is_sync_state_only(link->flags))
+ continue;
+ device_reorder_to_tail(link->consumer, NULL);
+ }
+@@ -634,7 +704,8 @@ postcore_initcall(devlink_class_init);
+ DL_FLAG_AUTOREMOVE_SUPPLIER | \
+ DL_FLAG_AUTOPROBE_CONSUMER | \
+ DL_FLAG_SYNC_STATE_ONLY | \
+- DL_FLAG_INFERRED)
++ DL_FLAG_INFERRED | \
++ DL_FLAG_CYCLE)
+
+ #define DL_ADD_VALID_FLAGS (DL_MANAGED_LINK_FLAGS | DL_FLAG_STATELESS | \
+ DL_FLAG_PM_RUNTIME | DL_FLAG_RPM_ACTIVE)
+@@ -703,8 +774,6 @@ struct device_link *device_link_add(struct device *consumer,
+ if (!consumer || !supplier || consumer == supplier ||
+ flags & ~DL_ADD_VALID_FLAGS ||
+ (flags & DL_FLAG_STATELESS && flags & DL_MANAGED_LINK_FLAGS) ||
+- (flags & DL_FLAG_SYNC_STATE_ONLY &&
+- (flags & ~DL_FLAG_INFERRED) != DL_FLAG_SYNC_STATE_ONLY) ||
+ (flags & DL_FLAG_AUTOPROBE_CONSUMER &&
+ flags & (DL_FLAG_AUTOREMOVE_CONSUMER |
+ DL_FLAG_AUTOREMOVE_SUPPLIER)))
+@@ -720,6 +789,10 @@ struct device_link *device_link_add(struct device *consumer,
+ if (!(flags & DL_FLAG_STATELESS))
+ flags |= DL_FLAG_MANAGED;
+
++ if (flags & DL_FLAG_SYNC_STATE_ONLY &&
++ !device_link_flag_is_sync_state_only(flags))
++ return NULL;
++
+ device_links_write_lock();
+ device_pm_lock();
+
+@@ -984,6 +1057,21 @@ static bool dev_is_best_effort(struct device *dev)
+ (dev->fwnode && (dev->fwnode->flags & FWNODE_FLAG_BEST_EFFORT));
+ }
+
++static struct fwnode_handle *fwnode_links_check_suppliers(
++ struct fwnode_handle *fwnode)
++{
++ struct fwnode_link *link;
++
++ if (!fwnode || fw_devlink_is_permissive())
++ return NULL;
++
++ list_for_each_entry(link, &fwnode->suppliers, c_hook)
++ if (!(link->flags & FWLINK_FLAG_CYCLE))
++ return link->supplier;
++
++ return NULL;
++}
++
+ /**
+ * device_links_check_suppliers - Check presence of supplier drivers.
+ * @dev: Consumer device.
+@@ -1011,11 +1099,8 @@ int device_links_check_suppliers(struct device *dev)
+ * probe.
+ */
+ mutex_lock(&fwnode_link_lock);
+- if (dev->fwnode && !list_empty(&dev->fwnode->suppliers) &&
+- !fw_devlink_is_permissive()) {
+- sup_fw = list_first_entry(&dev->fwnode->suppliers,
+- struct fwnode_link,
+- c_hook)->supplier;
++ sup_fw = fwnode_links_check_suppliers(dev->fwnode);
++ if (sup_fw) {
+ if (!dev_is_best_effort(dev)) {
+ fwnode_ret = -EPROBE_DEFER;
+ dev_err_probe(dev, -EPROBE_DEFER,
+@@ -1204,7 +1289,9 @@ static ssize_t waiting_for_supplier_show(struct device *dev,
+ bool val;
+
+ device_lock(dev);
+- val = !list_empty(&dev->fwnode->suppliers);
++ mutex_lock(&fwnode_link_lock);
++ val = !!fwnode_links_check_suppliers(dev->fwnode);
++ mutex_unlock(&fwnode_link_lock);
+ device_unlock(dev);
+ return sysfs_emit(buf, "%u\n", val);
+ }
+@@ -1267,16 +1354,23 @@ void device_links_driver_bound(struct device *dev)
+ * them. So, fw_devlink no longer needs to create device links to any
+ * of the device's suppliers.
+ *
+- * Also, if a child firmware node of this bound device is not added as
+- * a device by now, assume it is never going to be added and make sure
+- * other devices don't defer probe indefinitely by waiting for such a
+- * child device.
++ * Also, if a child firmware node of this bound device is not added as a
++ * device by now, assume it is never going to be added. Make this bound
++ * device the fallback supplier to the dangling consumers of the child
++ * firmware node because this bound device is probably implementing the
++ * child firmware node functionality and we don't want the dangling
++ * consumers to defer probe indefinitely waiting for a device for the
++ * child firmware node.
+ */
+ if (dev->fwnode && dev->fwnode->dev == dev) {
+ struct fwnode_handle *child;
+ fwnode_links_purge_suppliers(dev->fwnode);
++ mutex_lock(&fwnode_link_lock);
+ fwnode_for_each_available_child_node(dev->fwnode, child)
+- fw_devlink_purge_absent_suppliers(child);
++ __fw_devlink_pickup_dangling_consumers(child,
++ dev->fwnode);
++ __fw_devlink_link_to_consumers(dev);
++ mutex_unlock(&fwnode_link_lock);
+ }
+ device_remove_file(dev, &dev_attr_waiting_for_supplier);
+
+@@ -1633,8 +1727,11 @@ static int __init fw_devlink_strict_setup(char *arg)
+ }
+ early_param("fw_devlink.strict", fw_devlink_strict_setup);
+
+-u32 fw_devlink_get_flags(void)
++static inline u32 fw_devlink_get_flags(u8 fwlink_flags)
+ {
++ if (fwlink_flags & FWLINK_FLAG_CYCLE)
++ return FW_DEVLINK_FLAGS_PERMISSIVE | DL_FLAG_CYCLE;
++
+ return fw_devlink_flags;
+ }
+
+@@ -1672,7 +1769,7 @@ static void fw_devlink_relax_link(struct device_link *link)
+ if (!(link->flags & DL_FLAG_INFERRED))
+ return;
+
+- if (link->flags == (DL_FLAG_MANAGED | FW_DEVLINK_FLAGS_PERMISSIVE))
++ if (device_link_flag_is_sync_state_only(link->flags))
+ return;
+
+ pm_runtime_drop_link(link);
+@@ -1769,44 +1866,138 @@ static void fw_devlink_unblock_consumers(struct device *dev)
+ device_links_write_unlock();
+ }
+
++
++static bool fwnode_init_without_drv(struct fwnode_handle *fwnode)
++{
++ struct device *dev;
++ bool ret;
++
++ if (!(fwnode->flags & FWNODE_FLAG_INITIALIZED))
++ return false;
++
++ dev = get_dev_from_fwnode(fwnode);
++ ret = !dev || dev->links.status == DL_DEV_NO_DRIVER;
++ put_device(dev);
++
++ return ret;
++}
++
++static bool fwnode_ancestor_init_without_drv(struct fwnode_handle *fwnode)
++{
++ struct fwnode_handle *parent;
++
++ fwnode_for_each_parent_node(fwnode, parent) {
++ if (fwnode_init_without_drv(parent)) {
++ fwnode_handle_put(parent);
++ return true;
++ }
++ }
++
++ return false;
++}
++
+ /**
+- * fw_devlink_relax_cycle - Convert cyclic links to SYNC_STATE_ONLY links
+- * @con: Device to check dependencies for.
+- * @sup: Device to check against.
+- *
+- * Check if @sup depends on @con or any device dependent on it (its child or
+- * its consumer etc). When such a cyclic dependency is found, convert all
+- * device links created solely by fw_devlink into SYNC_STATE_ONLY device links.
+- * This is the equivalent of doing fw_devlink=permissive just between the
+- * devices in the cycle. We need to do this because, at this point, fw_devlink
+- * can't tell which of these dependencies is not a real dependency.
+- *
+- * Return 1 if a cycle is found. Otherwise, return 0.
++ * __fw_devlink_relax_cycles - Relax and mark dependency cycles.
++ * @con: Potential consumer device.
++ * @sup_handle: Potential supplier's fwnode.
++ *
++ * Needs to be called with fwnode_lock and device link lock held.
++ *
++ * Check if @sup_handle or any of its ancestors or suppliers directly or indirectly
++ * depend on @con. This function can detect multiple cycles between @sup_handle
++ * and @con. When such dependency cycles are found, convert all device links
++ * created solely by fw_devlink into SYNC_STATE_ONLY device links. Also, mark
++ * all fwnode links in the cycle with FWLINK_FLAG_CYCLE so that when they are
++ * converted into a device link in the future, they are created as
++ * SYNC_STATE_ONLY device links. This is the equivalent of doing
++ * fw_devlink=permissive just between the devices in the cycle. We need to do
++ * this because, at this point, fw_devlink can't tell which of these
++ * dependencies is not a real dependency.
++ *
++ * Return true if one or more cycles were found. Otherwise, return false.
+ */
+-static int fw_devlink_relax_cycle(struct device *con, void *sup)
++static bool __fw_devlink_relax_cycles(struct device *con,
++ struct fwnode_handle *sup_handle)
+ {
+- struct device_link *link;
+- int ret;
++ struct device *sup_dev = NULL, *par_dev = NULL;
++ struct fwnode_link *link;
++ struct device_link *dev_link;
++ bool ret = false;
+
+- if (con == sup)
+- return 1;
++ if (!sup_handle)
++ return false;
+
+- ret = device_for_each_child(con, sup, fw_devlink_relax_cycle);
+- if (ret)
+- return ret;
++ /*
++ * We aren't trying to find all cycles. Just a cycle between con and
++ * sup_handle.
++ */
++ if (sup_handle->flags & FWNODE_FLAG_VISITED)
++ return false;
+
+- list_for_each_entry(link, &con->links.consumers, s_node) {
+- if ((link->flags & ~DL_FLAG_INFERRED) ==
+- (DL_FLAG_SYNC_STATE_ONLY | DL_FLAG_MANAGED))
+- continue;
++ sup_handle->flags |= FWNODE_FLAG_VISITED;
+
+- if (!fw_devlink_relax_cycle(link->consumer, sup))
+- continue;
++ sup_dev = get_dev_from_fwnode(sup_handle);
+
+- ret = 1;
++ /* Termination condition. */
++ if (sup_dev == con) {
++ ret = true;
++ goto out;
++ }
+
+- fw_devlink_relax_link(link);
++ /*
++ * If sup_dev is bound to a driver and @con hasn't started binding to a
++ * driver, sup_dev can't be a consumer of @con. So, no need to check
++ * further.
++ */
++ if (sup_dev && sup_dev->links.status == DL_DEV_DRIVER_BOUND &&
++ con->links.status == DL_DEV_NO_DRIVER) {
++ ret = false;
++ goto out;
++ }
++
++ list_for_each_entry(link, &sup_handle->suppliers, c_hook) {
++ if (__fw_devlink_relax_cycles(con, link->supplier)) {
++ __fwnode_link_cycle(link);
++ ret = true;
++ }
++ }
++
++ /*
++ * Give priority to device parent over fwnode parent to account for any
++ * quirks in how fwnodes are converted to devices.
++ */
++ if (sup_dev)
++ par_dev = get_device(sup_dev->parent);
++ else
++ par_dev = fwnode_get_next_parent_dev(sup_handle);
++
++ if (par_dev && __fw_devlink_relax_cycles(con, par_dev->fwnode))
++ ret = true;
++
++ if (!sup_dev)
++ goto out;
++
++ list_for_each_entry(dev_link, &sup_dev->links.suppliers, c_node) {
++ /*
++ * Ignore a SYNC_STATE_ONLY flag only if it wasn't marked as
++ * such due to a cycle.
++ */
++ if (device_link_flag_is_sync_state_only(dev_link->flags) &&
++ !(dev_link->flags & DL_FLAG_CYCLE))
++ continue;
++
++ if (__fw_devlink_relax_cycles(con,
++ dev_link->supplier->fwnode)) {
++ fw_devlink_relax_link(dev_link);
++ dev_link->flags |= DL_FLAG_CYCLE;
++ ret = true;
++ }
+ }
++
++out:
++ sup_handle->flags &= ~FWNODE_FLAG_VISITED;
++ put_device(sup_dev);
++ put_device(par_dev);
+ return ret;
+ }
+
+@@ -1814,7 +2005,7 @@ static int fw_devlink_relax_cycle(struct device *con, void *sup)
+ * fw_devlink_create_devlink - Create a device link from a consumer to fwnode
+ * @con: consumer device for the device link
+ * @sup_handle: fwnode handle of supplier
+- * @flags: devlink flags
++ * @link: fwnode link that's being converted to a device link
+ *
+ * This function will try to create a device link between the consumer device
+ * @con and the supplier device represented by @sup_handle.
+@@ -1831,10 +2022,17 @@ static int fw_devlink_relax_cycle(struct device *con, void *sup)
+ * possible to do that in the future
+ */
+ static int fw_devlink_create_devlink(struct device *con,
+- struct fwnode_handle *sup_handle, u32 flags)
++ struct fwnode_handle *sup_handle,
++ struct fwnode_link *link)
+ {
+ struct device *sup_dev;
+ int ret = 0;
++ u32 flags;
++
++ if (con->fwnode == link->consumer)
++ flags = fw_devlink_get_flags(link->flags);
++ else
++ flags = FW_DEVLINK_FLAGS_PERMISSIVE;
+
+ /*
+ * In some cases, a device P might also be a supplier to its child node
+@@ -1855,7 +2053,26 @@ static int fw_devlink_create_devlink(struct device *con,
+ fwnode_is_ancestor_of(sup_handle, con->fwnode))
+ return -EINVAL;
+
+- sup_dev = get_dev_from_fwnode(sup_handle);
++ /*
++ * SYNC_STATE_ONLY device links don't block probing and support cycles.
++ * So cycle detection isn't necessary and shouldn't be done.
++ */
++ if (!(flags & DL_FLAG_SYNC_STATE_ONLY)) {
++ device_links_write_lock();
++ if (__fw_devlink_relax_cycles(con, sup_handle)) {
++ __fwnode_link_cycle(link);
++ flags = fw_devlink_get_flags(link->flags);
++ dev_info(con, "Fixed dependency cycle(s) with %pfwf\n",
++ sup_handle);
++ }
++ device_links_write_unlock();
++ }
++
++ if (sup_handle->flags & FWNODE_FLAG_NOT_DEVICE)
++ sup_dev = fwnode_get_next_parent_dev(sup_handle);
++ else
++ sup_dev = get_dev_from_fwnode(sup_handle);
++
+ if (sup_dev) {
+ /*
+ * If it's one of those drivers that don't actually bind to
+@@ -1864,71 +2081,34 @@ static int fw_devlink_create_devlink(struct device *con,
+ */
+ if (sup_dev->links.status == DL_DEV_NO_DRIVER &&
+ sup_handle->flags & FWNODE_FLAG_INITIALIZED) {
++ dev_dbg(con,
++ "Not linking %pfwf - dev might never probe\n",
++ sup_handle);
+ ret = -EINVAL;
+ goto out;
+ }
+
+- /*
+- * If this fails, it is due to cycles in device links. Just
+- * give up on this link and treat it as invalid.
+- */
+- if (!device_link_add(con, sup_dev, flags) &&
+- !(flags & DL_FLAG_SYNC_STATE_ONLY)) {
+- dev_info(con, "Fixing up cyclic dependency with %s\n",
+- dev_name(sup_dev));
+- device_links_write_lock();
+- fw_devlink_relax_cycle(con, sup_dev);
+- device_links_write_unlock();
+- device_link_add(con, sup_dev,
+- FW_DEVLINK_FLAGS_PERMISSIVE);
++ if (con != sup_dev && !device_link_add(con, sup_dev, flags)) {
++ dev_err(con, "Failed to create device link (0x%x) with %s\n",
++ flags, dev_name(sup_dev));
+ ret = -EINVAL;
+ }
+
+ goto out;
+ }
+
+- /* Supplier that's already initialized without a struct device. */
+- if (sup_handle->flags & FWNODE_FLAG_INITIALIZED)
+- return -EINVAL;
+-
+ /*
+- * DL_FLAG_SYNC_STATE_ONLY doesn't block probing and supports
+- * cycles. So cycle detection isn't necessary and shouldn't be
+- * done.
++ * The supplier or the supplier's ancestor was already initialized without
++ * a struct device, or is being probed by a driver.
+ */
+- if (flags & DL_FLAG_SYNC_STATE_ONLY)
+- return -EAGAIN;
+-
+- /*
+- * If we can't find the supplier device from its fwnode, it might be
+- * due to a cyclic dependency between fwnodes. Some of these cycles can
+- * be broken by applying logic. Check for these types of cycles and
+- * break them so that devices in the cycle probe properly.
+- *
+- * If the supplier's parent is dependent on the consumer, then the
+- * consumer and supplier have a cyclic dependency. Since fw_devlink
+- * can't tell which of the inferred dependencies are incorrect, don't
+- * enforce probe ordering between any of the devices in this cyclic
+- * dependency. Do this by relaxing all the fw_devlink device links in
+- * this cycle and by treating the fwnode link between the consumer and
+- * the supplier as an invalid dependency.
+- */
+- sup_dev = fwnode_get_next_parent_dev(sup_handle);
+- if (sup_dev && device_is_dependent(con, sup_dev)) {
+- dev_info(con, "Fixing up cyclic dependency with %pfwP (%s)\n",
+- sup_handle, dev_name(sup_dev));
+- device_links_write_lock();
+- fw_devlink_relax_cycle(con, sup_dev);
+- device_links_write_unlock();
+- ret = -EINVAL;
+- } else {
+- /*
+- * Can't check for cycles or no cycles. So let's try
+- * again later.
+- */
+- ret = -EAGAIN;
++ if (fwnode_init_without_drv(sup_handle) ||
++ fwnode_ancestor_init_without_drv(sup_handle)) {
++ dev_dbg(con, "Not linking %pfwf - might never become dev\n",
++ sup_handle);
++ return -EINVAL;
+ }
+
++ ret = -EAGAIN;
+ out:
+ put_device(sup_dev);
+ return ret;
+@@ -1956,7 +2136,6 @@ static void __fw_devlink_link_to_consumers(struct device *dev)
+ struct fwnode_link *link, *tmp;
+
+ list_for_each_entry_safe(link, tmp, &fwnode->consumers, s_hook) {
+- u32 dl_flags = fw_devlink_get_flags();
+ struct device *con_dev;
+ bool own_link = true;
+ int ret;
+@@ -1986,14 +2165,13 @@ static void __fw_devlink_link_to_consumers(struct device *dev)
+ con_dev = NULL;
+ } else {
+ own_link = false;
+- dl_flags = FW_DEVLINK_FLAGS_PERMISSIVE;
+ }
+ }
+
+ if (!con_dev)
+ continue;
+
+- ret = fw_devlink_create_devlink(con_dev, fwnode, dl_flags);
++ ret = fw_devlink_create_devlink(con_dev, fwnode, link);
+ put_device(con_dev);
+ if (!own_link || ret == -EAGAIN)
+ continue;
+@@ -2013,10 +2191,7 @@ static void __fw_devlink_link_to_consumers(struct device *dev)
+ *
+ * The function creates normal (non-SYNC_STATE_ONLY) device links between @dev
+ * and the real suppliers of @dev. Once these device links are created, the
+- * fwnode links are deleted. When such device links are successfully created,
+- * this function is called recursively on those supplier devices. This is
+- * needed to detect and break some invalid cycles in fwnode links. See
+- * fw_devlink_create_devlink() for more details.
++ * fwnode links are deleted.
+ *
+ * In addition, it also looks at all the suppliers of the entire fwnode tree
+ * because some of the child devices of @dev that have not been added yet
+@@ -2034,44 +2209,16 @@ static void __fw_devlink_link_to_suppliers(struct device *dev,
+ bool own_link = (dev->fwnode == fwnode);
+ struct fwnode_link *link, *tmp;
+ struct fwnode_handle *child = NULL;
+- u32 dl_flags;
+-
+- if (own_link)
+- dl_flags = fw_devlink_get_flags();
+- else
+- dl_flags = FW_DEVLINK_FLAGS_PERMISSIVE;
+
+ list_for_each_entry_safe(link, tmp, &fwnode->suppliers, c_hook) {
+ int ret;
+- struct device *sup_dev;
+ struct fwnode_handle *sup = link->supplier;
+
+- ret = fw_devlink_create_devlink(dev, sup, dl_flags);
++ ret = fw_devlink_create_devlink(dev, sup, link);
+ if (!own_link || ret == -EAGAIN)
+ continue;
+
+ __fwnode_link_del(link);
+-
+- /* If no device link was created, nothing more to do. */
+- if (ret)
+- continue;
+-
+- /*
+- * If a device link was successfully created to a supplier, we
+- * now need to try and link the supplier to all its suppliers.
+- *
+- * This is needed to detect and delete false dependencies in
+- * fwnode links that haven't been converted to a device link
+- * yet. See comments in fw_devlink_create_devlink() for more
+- * details on the false dependency.
+- *
+- * Without deleting these false dependencies, some devices will
+- * never probe because they'll keep waiting for their false
+- * dependency fwnode links to be converted to device links.
+- */
+- sup_dev = get_dev_from_fwnode(sup);
+- __fw_devlink_link_to_suppliers(sup_dev, sup_dev->fwnode);
+- put_device(sup_dev);
+ }
+
+ /*
+@@ -3413,7 +3560,7 @@ int device_add(struct device *dev)
+ /* we require the name to be set before, and pass NULL */
+ error = kobject_add(&dev->kobj, dev->kobj.parent, NULL);
+ if (error) {
+- glue_dir = get_glue_dir(dev);
++ glue_dir = kobj;
+ goto Error;
+ }
+
+@@ -3513,6 +3660,7 @@ done:
+ device_pm_remove(dev);
+ dpm_sysfs_remove(dev);
+ DPMError:
++ dev->driver = NULL;
+ bus_remove_device(dev);
+ BusError:
+ device_remove_attrs(dev);
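
The new __fw_devlink_relax_cycles() above is a depth-first walk over the supplier graph that sets FWNODE_FLAG_VISITED on entry and clears it on unwind, so the traversal terminates even when the graph contains cycles. A minimal standalone sketch of that traversal, using a generic node type whose field names are illustrative rather than the kernel's:

#include <stdbool.h>
#include <stddef.h>

struct node {
    bool visited;              /* plays the role of FWNODE_FLAG_VISITED */
    size_t n_suppliers;
    struct node **suppliers;   /* edges toward potential suppliers */
};

/* Return true if @target is reachable from @start over supplier edges. */
static bool reaches(struct node *start, struct node *target)
{
    bool found = false;
    size_t i;

    if (!start || start->visited)
        return false;          /* null edge, or already on this path */
    if (start == target)
        return true;           /* termination condition */

    start->visited = true;
    for (i = 0; i < start->n_suppliers && !found; i++)
        found = reaches(start->suppliers[i], target);
    start->visited = false;    /* cleared on unwind, as at the out: label */

    return found;
}

On a hit the kernel additionally relaxes every link on the path to SYNC_STATE_ONLY; the sketch only answers the reachability question.
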
+diff --git a/drivers/base/physical_location.c b/drivers/base/physical_location.c
+index 87af641cfe1a3..951819e71b4ad 100644
+--- a/drivers/base/physical_location.c
++++ b/drivers/base/physical_location.c
+@@ -24,8 +24,11 @@ bool dev_add_physical_location(struct device *dev)
+
+ dev->physical_location =
+ kzalloc(sizeof(*dev->physical_location), GFP_KERNEL);
+- if (!dev->physical_location)
++ if (!dev->physical_location) {
++ ACPI_FREE(pld);
+ return false;
++ }
++
+ dev->physical_location->panel = pld->panel;
+ dev->physical_location->vertical_position = pld->vertical_position;
+ dev->physical_location->horizontal_position = pld->horizontal_position;
+diff --git a/drivers/base/platform-msi.c b/drivers/base/platform-msi.c
+index 5883e7634a2b7..f37ad34c80ec4 100644
+--- a/drivers/base/platform-msi.c
++++ b/drivers/base/platform-msi.c
+@@ -324,6 +324,7 @@ void platform_msi_device_domain_free(struct irq_domain *domain, unsigned int vir
+ struct platform_msi_priv_data *data = domain->host_data;
+
+ msi_lock_descs(data->dev);
++ msi_domain_depopulate_descs(data->dev, virq, nr_irqs);
+ irq_domain_free_irqs_common(domain, virq, nr_irqs);
+ msi_free_msi_descs_range(data->dev, virq, virq + nr_irqs - 1);
+ msi_unlock_descs(data->dev);
+diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
+index 967bcf9d415ea..6097644ebdc51 100644
+--- a/drivers/base/power/domain.c
++++ b/drivers/base/power/domain.c
+@@ -220,13 +220,10 @@ static void genpd_debug_add(struct generic_pm_domain *genpd);
+
+ static void genpd_debug_remove(struct generic_pm_domain *genpd)
+ {
+- struct dentry *d;
+-
+ if (!genpd_debugfs_dir)
+ return;
+
+- d = debugfs_lookup(genpd->name, genpd_debugfs_dir);
+- debugfs_remove(d);
++ debugfs_lookup_and_remove(genpd->name, genpd_debugfs_dir);
+ }
+
+ static void genpd_update_accounting(struct generic_pm_domain *genpd)
+diff --git a/drivers/base/regmap/regmap.c b/drivers/base/regmap/regmap.c
+index d12d669157f24..d2a54eb0efd9b 100644
+--- a/drivers/base/regmap/regmap.c
++++ b/drivers/base/regmap/regmap.c
+@@ -1942,6 +1942,8 @@ static int _regmap_bus_reg_write(void *context, unsigned int reg,
+ {
+ struct regmap *map = context;
+
++ reg += map->reg_base;
++ reg >>= map->format.reg_downshift;
+ return map->bus->reg_write(map->bus_context, reg, val);
+ }
+
+@@ -2840,6 +2842,8 @@ static int _regmap_bus_reg_read(void *context, unsigned int reg,
+ {
+ struct regmap *map = context;
+
++ reg += map->reg_base;
++ reg >>= map->format.reg_downshift;
+ return map->bus->reg_read(map->bus_context, reg, val);
+ }
+
+@@ -3231,6 +3235,8 @@ static int _regmap_update_bits(struct regmap *map, unsigned int reg,
+ *change = false;
+
+ if (regmap_volatile(map, reg) && map->reg_update_bits) {
++ reg += map->reg_base;
++ reg >>= map->format.reg_downshift;
+ ret = map->reg_update_bits(map->bus_context, reg, mask, val);
+ if (ret == 0 && change)
+ *change = true;
+diff --git a/drivers/base/transport_class.c b/drivers/base/transport_class.c
+index ccc86206e5087..09ee2a1e35bbd 100644
+--- a/drivers/base/transport_class.c
++++ b/drivers/base/transport_class.c
+@@ -155,12 +155,27 @@ static int transport_add_class_device(struct attribute_container *cont,
+ struct device *dev,
+ struct device *classdev)
+ {
++ struct transport_class *tclass = class_to_transport_class(cont->class);
+ int error = attribute_container_add_class_device(classdev);
+ struct transport_container *tcont =
+ attribute_container_to_transport_container(cont);
+
+- if (!error && tcont->statistics)
++ if (error)
++ goto err_remove;
++
++ if (tcont->statistics) {
+ error = sysfs_create_group(&classdev->kobj, tcont->statistics);
++ if (error)
++ goto err_del;
++ }
++
++ return 0;
++
++err_del:
++ attribute_container_class_device_del(classdev);
++err_remove:
++ if (tclass->remove)
++ tclass->remove(tcont, dev, classdev);
+
+ return error;
+ }
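
transport_add_class_device() now uses the standard kernel unwind idiom: each failure after a successful step jumps to a label that undoes the already-completed steps in reverse order. A self-contained sketch of the idiom, with placeholder step functions standing in for the sysfs/attribute-container calls:

#include <errno.h>

static int  step_a(void) { return 0; }  /* e.g. add the class device */
static void undo_a(void) { }            /* e.g. delete the class device */
static int  step_b(void) { return 0; }  /* e.g. create the sysfs group */

static int setup(void)
{
    int err;

    err = step_a();
    if (err)
        return err;        /* nothing succeeded yet, nothing to undo */

    err = step_b();
    if (err)
        goto err_undo_a;   /* unwind in reverse order of setup */

    return 0;

err_undo_a:
    undo_a();
    return err;
}
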
+diff --git a/drivers/block/brd.c b/drivers/block/brd.c
+index 20acc4a1fd6de..a8a77a1efe1e3 100644
+--- a/drivers/block/brd.c
++++ b/drivers/block/brd.c
+@@ -78,32 +78,25 @@ static struct page *brd_lookup_page(struct brd_device *brd, sector_t sector)
+ }
+
+ /*
+- * Look up and return a brd's page for a given sector.
+- * If one does not exist, allocate an empty page, and insert that. Then
+- * return it.
++ * Insert a new page for a given sector, if one does not already exist.
+ */
+-static struct page *brd_insert_page(struct brd_device *brd, sector_t sector)
++static int brd_insert_page(struct brd_device *brd, sector_t sector, gfp_t gfp)
+ {
+ pgoff_t idx;
+ struct page *page;
+- gfp_t gfp_flags;
++ int ret = 0;
+
+ page = brd_lookup_page(brd, sector);
+ if (page)
+- return page;
++ return 0;
+
+- /*
+- * Must use NOIO because we don't want to recurse back into the
+- * block or filesystem layers from page reclaim.
+- */
+- gfp_flags = GFP_NOIO | __GFP_ZERO | __GFP_HIGHMEM;
+- page = alloc_page(gfp_flags);
++ page = alloc_page(gfp | __GFP_ZERO | __GFP_HIGHMEM);
+ if (!page)
+- return NULL;
++ return -ENOMEM;
+
+- if (radix_tree_preload(GFP_NOIO)) {
++ if (radix_tree_maybe_preload(gfp)) {
+ __free_page(page);
+- return NULL;
++ return -ENOMEM;
+ }
+
+ spin_lock(&brd->brd_lock);
+@@ -112,16 +105,17 @@ static struct page *brd_insert_page(struct brd_device *brd, sector_t sector)
+ if (radix_tree_insert(&brd->brd_pages, idx, page)) {
+ __free_page(page);
+ page = radix_tree_lookup(&brd->brd_pages, idx);
+- BUG_ON(!page);
+- BUG_ON(page->index != idx);
++ if (!page)
++ ret = -ENOMEM;
++ else if (page->index != idx)
++ ret = -EIO;
+ } else {
+ brd->brd_nr_pages++;
+ }
+ spin_unlock(&brd->brd_lock);
+
+ radix_tree_preload_end();
+-
+- return page;
++ return ret;
+ }
+
+ /*
+@@ -170,20 +164,22 @@ static void brd_free_pages(struct brd_device *brd)
+ /*
+ * copy_to_brd_setup must be called before copy_to_brd. It may sleep.
+ */
+-static int copy_to_brd_setup(struct brd_device *brd, sector_t sector, size_t n)
++static int copy_to_brd_setup(struct brd_device *brd, sector_t sector, size_t n,
++ gfp_t gfp)
+ {
+ unsigned int offset = (sector & (PAGE_SECTORS-1)) << SECTOR_SHIFT;
+ size_t copy;
++ int ret;
+
+ copy = min_t(size_t, n, PAGE_SIZE - offset);
+- if (!brd_insert_page(brd, sector))
+- return -ENOSPC;
++ ret = brd_insert_page(brd, sector, gfp);
++ if (ret)
++ return ret;
+ if (copy < n) {
+ sector += copy >> SECTOR_SHIFT;
+- if (!brd_insert_page(brd, sector))
+- return -ENOSPC;
++ ret = brd_insert_page(brd, sector, gfp);
+ }
+- return 0;
++ return ret;
+ }
+
+ /*
+@@ -256,20 +252,26 @@ static void copy_from_brd(void *dst, struct brd_device *brd,
+ * Process a single bvec of a bio.
+ */
+ static int brd_do_bvec(struct brd_device *brd, struct page *page,
+- unsigned int len, unsigned int off, enum req_op op,
++ unsigned int len, unsigned int off, blk_opf_t opf,
+ sector_t sector)
+ {
+ void *mem;
+ int err = 0;
+
+- if (op_is_write(op)) {
+- err = copy_to_brd_setup(brd, sector, len);
++ if (op_is_write(opf)) {
++ /*
++ * Must use NOIO because we don't want to recurse back into the
++ * block or filesystem layers from page reclaim.
++ */
++ gfp_t gfp = opf & REQ_NOWAIT ? GFP_NOWAIT : GFP_NOIO;
++
++ err = copy_to_brd_setup(brd, sector, len, gfp);
+ if (err)
+ goto out;
+ }
+
+ mem = kmap_atomic(page);
+- if (!op_is_write(op)) {
++ if (!op_is_write(opf)) {
+ copy_from_brd(mem + off, brd, sector, len);
+ flush_dcache_page(page);
+ } else {
+@@ -298,8 +300,12 @@ static void brd_submit_bio(struct bio *bio)
+ (len & (SECTOR_SIZE - 1)));
+
+ err = brd_do_bvec(brd, bvec.bv_page, len, bvec.bv_offset,
+- bio_op(bio), sector);
++ bio->bi_opf, sector);
+ if (err) {
++ if (err == -ENOMEM && bio->bi_opf & REQ_NOWAIT) {
++ bio_wouldblock_error(bio);
++ return;
++ }
+ bio_io_error(bio);
+ return;
+ }
+@@ -412,6 +418,7 @@ static int brd_alloc(int i)
+ /* Tell the block layer that this is not a rotational device */
+ blk_queue_flag_set(QUEUE_FLAG_NONROT, disk->queue);
+ blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, disk->queue);
++ blk_queue_flag_set(QUEUE_FLAG_NOWAIT, disk->queue);
+ err = add_disk(disk);
+ if (err)
+ goto out_cleanup_disk;
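
The brd conversion above threads the allocation context through the write path: REQ_NOWAIT bios allocate pages with GFP_NOWAIT and complete with "would block" on failure, while ordinary bios use GFP_NOIO and may sleep. The decision reduces to a small pattern; this sketch uses stand-in flag and mode values, not the real blk/gfp constants:

#include <errno.h>

#define MY_REQ_NOWAIT 0x1u              /* stand-in for REQ_NOWAIT */

enum alloc_mode { ALLOC_NOWAIT, ALLOC_NOIO };

/*
 * Pick the allocation mode from the request flags.  NOIO (not KERNEL)
 * is the ceiling either way: the block layer must not recurse back
 * into block/filesystem reclaim while servicing I/O.
 */
static enum alloc_mode pick_mode(unsigned int opf)
{
    return (opf & MY_REQ_NOWAIT) ? ALLOC_NOWAIT : ALLOC_NOIO;
}

/* Map an allocation failure to the completion the submitter expects. */
static int alloc_failure_status(unsigned int opf)
{
    return (opf & MY_REQ_NOWAIT) ? -EAGAIN /* retry later */ : -EIO;
}
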
+diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
+index 04453f4a319cb..60aed196a2e54 100644
+--- a/drivers/block/rbd.c
++++ b/drivers/block/rbd.c
+@@ -5292,8 +5292,7 @@ static void rbd_dev_release(struct device *dev)
+ module_put(THIS_MODULE);
+ }
+
+-static struct rbd_device *__rbd_dev_create(struct rbd_client *rbdc,
+- struct rbd_spec *spec)
++static struct rbd_device *__rbd_dev_create(struct rbd_spec *spec)
+ {
+ struct rbd_device *rbd_dev;
+
+@@ -5338,9 +5337,6 @@ static struct rbd_device *__rbd_dev_create(struct rbd_client *rbdc,
+ rbd_dev->dev.parent = &rbd_root_dev;
+ device_initialize(&rbd_dev->dev);
+
+- rbd_dev->rbd_client = rbdc;
+- rbd_dev->spec = spec;
+-
+ return rbd_dev;
+ }
+
+@@ -5353,12 +5349,10 @@ static struct rbd_device *rbd_dev_create(struct rbd_client *rbdc,
+ {
+ struct rbd_device *rbd_dev;
+
+- rbd_dev = __rbd_dev_create(rbdc, spec);
++ rbd_dev = __rbd_dev_create(spec);
+ if (!rbd_dev)
+ return NULL;
+
+- rbd_dev->opts = opts;
+-
+ /* get an id and fill in device name */
+ rbd_dev->dev_id = ida_simple_get(&rbd_dev_id_ida, 0,
+ minor_to_rbd_dev_id(1 << MINORBITS),
+@@ -5375,6 +5369,10 @@ static struct rbd_device *rbd_dev_create(struct rbd_client *rbdc,
+ /* we have a ref from do_rbd_add() */
+ __module_get(THIS_MODULE);
+
++ rbd_dev->rbd_client = rbdc;
++ rbd_dev->spec = spec;
++ rbd_dev->opts = opts;
++
+ dout("%s rbd_dev %p dev_id %d\n", __func__, rbd_dev, rbd_dev->dev_id);
+ return rbd_dev;
+
+@@ -6736,7 +6734,7 @@ static int rbd_dev_probe_parent(struct rbd_device *rbd_dev, int depth)
+ goto out_err;
+ }
+
+- parent = __rbd_dev_create(rbd_dev->rbd_client, rbd_dev->parent_spec);
++ parent = __rbd_dev_create(rbd_dev->parent_spec);
+ if (!parent) {
+ ret = -ENOMEM;
+ goto out_err;
+@@ -6746,8 +6744,8 @@ static int rbd_dev_probe_parent(struct rbd_device *rbd_dev, int depth)
+ * Images related by parent/child relationships always share
+ * rbd_client and spec/parent_spec, so bump their refcounts.
+ */
+- __rbd_get_client(rbd_dev->rbd_client);
+- rbd_spec_get(rbd_dev->parent_spec);
++ parent->rbd_client = __rbd_get_client(rbd_dev->rbd_client);
++ parent->spec = rbd_spec_get(rbd_dev->parent_spec);
+
+ __set_bit(RBD_DEV_FLAG_READONLY, &parent->flags);
+
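
The rbd change above moves the rbd_client/spec/opts assignments past the last failure point in rbd_dev_create(), so an error path that ends in the destructor never drops references the caller still owns. The ownership rule in miniature (types are illustrative):

#include <stdlib.h>

struct client;

struct obj {
    struct client *client;  /* reference owned by the object once set */
};

/*
 * On any failure the caller keeps ownership of @c.  The reference is
 * published into the object only after the last fallible step, so the
 * object's release path can never double-drop it.
 */
static struct obj *obj_create(struct client *c)
{
    struct obj *o = calloc(1, sizeof(*o));

    if (!o)
        return NULL;

    /* ... fallible setup that doesn't need @c ... */

    o->client = c;          /* transfer happens last */
    return o;
}
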
+diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
+index 6368b56eacf11..4aec9be0ab77e 100644
+--- a/drivers/block/ublk_drv.c
++++ b/drivers/block/ublk_drv.c
+@@ -159,7 +159,7 @@ struct ublk_device {
+
+ struct completion completion;
+ unsigned int nr_queues_ready;
+- atomic_t nr_aborted_queues;
++ unsigned int nr_privileged_daemon;
+
+ /*
+ * Our ubq->daemon may be killed without any notification, so
+@@ -1179,6 +1179,9 @@ static void ublk_mark_io_ready(struct ublk_device *ub, struct ublk_queue *ubq)
+ ubq->ubq_daemon = current;
+ get_task_struct(ubq->ubq_daemon);
+ ub->nr_queues_ready++;
++
++ if (capable(CAP_SYS_ADMIN))
++ ub->nr_privileged_daemon++;
+ }
+ if (ub->nr_queues_ready == ub->dev_info.nr_hw_queues)
+ complete_all(&ub->completion);
+@@ -1203,6 +1206,7 @@ static int ublk_ch_uring_cmd(struct io_uring_cmd *cmd, unsigned int issue_flags)
+ u32 cmd_op = cmd->cmd_op;
+ unsigned tag = ub_cmd->tag;
+ int ret = -EINVAL;
++ struct request *req;
+
+ pr_devel("%s: received: cmd op %d queue %d tag %d result %d\n",
+ __func__, cmd->cmd_op, ub_cmd->q_id, tag,
+@@ -1253,8 +1257,8 @@ static int ublk_ch_uring_cmd(struct io_uring_cmd *cmd, unsigned int issue_flags)
+ */
+ if (io->flags & UBLK_IO_FLAG_OWNED_BY_SRV)
+ goto out;
+- /* FETCH_RQ has to provide IO buffer */
+- if (!ub_cmd->addr)
++ /* FETCH_RQ has to provide IO buffer if NEED GET DATA is not enabled */
++ if (!ub_cmd->addr && !ublk_need_get_data(ubq))
+ goto out;
+ io->cmd = cmd;
+ io->flags |= UBLK_IO_FLAG_ACTIVE;
+@@ -1263,8 +1267,12 @@ static int ublk_ch_uring_cmd(struct io_uring_cmd *cmd, unsigned int issue_flags)
+ ublk_mark_io_ready(ub, ubq);
+ break;
+ case UBLK_IO_COMMIT_AND_FETCH_REQ:
+- /* FETCH_RQ has to provide IO buffer */
+- if (!ub_cmd->addr)
++ req = blk_mq_tag_to_rq(ub->tag_set.tags[ub_cmd->q_id], tag);
++ /*
++ * COMMIT_AND_FETCH_REQ has to provide IO buffer if NEED GET DATA is
++ * not enabled or the IO is a read.
++ */
++ if (!ub_cmd->addr && (!ublk_need_get_data(ubq) || req_op(req) == REQ_OP_READ))
+ goto out;
+ if (!(io->flags & UBLK_IO_FLAG_OWNED_BY_SRV))
+ goto out;
+@@ -1535,6 +1543,10 @@ static int ublk_ctrl_start_dev(struct io_uring_cmd *cmd)
+ if (ret)
+ goto out_put_disk;
+
++ /* don't probe partitions if any one ubq daemon is untrusted */
++ if (ub->nr_privileged_daemon != ub->nr_queues_ready)
++ set_bit(GD_SUPPRESS_PART_SCAN, &disk->state);
++
+ get_device(&ub->cdev_dev);
+ ret = add_disk(disk);
+ if (ret) {
+@@ -1936,6 +1948,7 @@ static int ublk_ctrl_start_recovery(struct io_uring_cmd *cmd)
+ /* set to NULL, otherwise new ubq_daemon cannot mmap the io_cmd_buf */
+ ub->mm = NULL;
+ ub->nr_queues_ready = 0;
++ ub->nr_privileged_daemon = 0;
+ init_completion(&ub->completion);
+ ret = 0;
+ out_unlock:
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 2ad4efdd9e40b..18bc947187115 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -64,6 +64,7 @@ static struct usb_driver btusb_driver;
+ #define BTUSB_INTEL_BROKEN_SHUTDOWN_LED BIT(24)
+ #define BTUSB_INTEL_BROKEN_INITIAL_NCMD BIT(25)
+ #define BTUSB_INTEL_NO_WBS_SUPPORT BIT(26)
++#define BTUSB_ACTIONS_SEMI BIT(27)
+
+ static const struct usb_device_id btusb_table[] = {
+ /* Generic Bluetooth USB device */
+@@ -492,6 +493,10 @@ static const struct usb_device_id blacklist_table[] = {
+ { USB_VENDOR_AND_INTERFACE_INFO(0x8087, 0xe0, 0x01, 0x01),
+ .driver_info = BTUSB_IGNORE },
+
++ /* Realtek 8821CE Bluetooth devices */
++ { USB_DEVICE(0x13d3, 0x3529), .driver_info = BTUSB_REALTEK |
++ BTUSB_WIDEBAND_SPEECH },
++
+ /* Realtek 8822CE Bluetooth devices */
+ { USB_DEVICE(0x0bda, 0xb00c), .driver_info = BTUSB_REALTEK |
+ BTUSB_WIDEBAND_SPEECH },
+@@ -566,6 +571,9 @@ static const struct usb_device_id blacklist_table[] = {
+ { USB_DEVICE(0x0489, 0xe0e0), .driver_info = BTUSB_MEDIATEK |
+ BTUSB_WIDEBAND_SPEECH |
+ BTUSB_VALID_LE_STATES },
++ { USB_DEVICE(0x0489, 0xe0f2), .driver_info = BTUSB_MEDIATEK |
++ BTUSB_WIDEBAND_SPEECH |
++ BTUSB_VALID_LE_STATES },
+ { USB_DEVICE(0x04ca, 0x3802), .driver_info = BTUSB_MEDIATEK |
+ BTUSB_WIDEBAND_SPEECH |
+ BTUSB_VALID_LE_STATES },
+@@ -677,6 +685,9 @@ static const struct usb_device_id blacklist_table[] = {
+ { USB_DEVICE(0x0cb5, 0xc547), .driver_info = BTUSB_REALTEK |
+ BTUSB_WIDEBAND_SPEECH },
+
++ /* Actions Semiconductor ATS2851 based devices */
++ { USB_DEVICE(0x10d7, 0xb012), .driver_info = BTUSB_ACTIONS_SEMI },
++
+ /* Silicon Wave based devices */
+ { USB_DEVICE(0x0c10, 0x0000), .driver_info = BTUSB_SWAVE },
+
+@@ -4098,6 +4109,11 @@ static int btusb_probe(struct usb_interface *intf,
+ set_bit(BTUSB_USE_ALT3_FOR_WBS, &data->flags);
+ }
+
++ if (id->driver_info & BTUSB_ACTIONS_SEMI) {
++ /* Support is advertised, but not implemented */
++ set_bit(HCI_QUIRK_BROKEN_ERR_DATA_REPORTING, &hdev->quirks);
++ }
++
+ if (!reset)
+ set_bit(HCI_QUIRK_RESET_ON_CLOSE, &hdev->quirks);
+
+diff --git a/drivers/bluetooth/hci_qca.c b/drivers/bluetooth/hci_qca.c
+index bbe9cf1cae27f..d331772809d56 100644
+--- a/drivers/bluetooth/hci_qca.c
++++ b/drivers/bluetooth/hci_qca.c
+@@ -1588,10 +1588,11 @@ static bool qca_wakeup(struct hci_dev *hdev)
+ struct hci_uart *hu = hci_get_drvdata(hdev);
+ bool wakeup;
+
+- /* UART driver handles the interrupt from BT SoC.So we need to use
+- * device handle of UART driver to get the status of device may wakeup.
++ /* BT SoC attached through the serial bus is handled by the serdev driver.
++ * So we need to use the device handle of the serdev driver to check
++ * whether the device may wake up.
+ */
+- wakeup = device_may_wakeup(hu->serdev->ctrl->dev.parent);
++ wakeup = device_may_wakeup(&hu->serdev->ctrl->dev);
+ bt_dev_dbg(hu->hdev, "wakeup status : %d", wakeup);
+
+ return wakeup;
+diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
+index 1dc8a3557a464..9c42886818418 100644
+--- a/drivers/bus/mhi/ep/main.c
++++ b/drivers/bus/mhi/ep/main.c
+@@ -196,9 +196,11 @@ static int mhi_ep_process_cmd_ring(struct mhi_ep_ring *ring, struct mhi_ring_ele
+ mhi_ep_mmio_disable_chdb(mhi_cntrl, ch_id);
+
+ /* Send channel disconnect status to client drivers */
+- result.transaction_status = -ENOTCONN;
+- result.bytes_xferd = 0;
+- mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
++ if (mhi_chan->xfer_cb) {
++ result.transaction_status = -ENOTCONN;
++ result.bytes_xferd = 0;
++ mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
++ }
+
+ /* Set channel state to STOP */
+ mhi_chan->state = MHI_CH_STATE_STOP;
+@@ -228,9 +230,11 @@ static int mhi_ep_process_cmd_ring(struct mhi_ep_ring *ring, struct mhi_ring_ele
+ mhi_ep_ring_reset(mhi_cntrl, ch_ring);
+
+ /* Send channel disconnect status to client driver */
+- result.transaction_status = -ENOTCONN;
+- result.bytes_xferd = 0;
+- mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
++ if (mhi_chan->xfer_cb) {
++ result.transaction_status = -ENOTCONN;
++ result.bytes_xferd = 0;
++ mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
++ }
+
+ /* Set channel state to DISABLED */
+ mhi_chan->state = MHI_CH_STATE_DISABLED;
+@@ -719,24 +723,37 @@ static void mhi_ep_ch_ring_worker(struct work_struct *work)
+ list_del(&itr->node);
+ ring = itr->ring;
+
++ chan = &mhi_cntrl->mhi_chan[ring->ch_id];
++ mutex_lock(&chan->lock);
++
++ /*
++ * The ring could've stopped while we waited to grab chan->lock, so do
++ * a sanity check before going further.
++ */
++ if (!ring->started) {
++ mutex_unlock(&chan->lock);
++ kfree(itr);
++ continue;
++ }
++
+ /* Update the write offset for the ring */
+ ret = mhi_ep_update_wr_offset(ring);
+ if (ret) {
+ dev_err(dev, "Error updating write offset for ring\n");
++ mutex_unlock(&chan->lock);
+ kfree(itr);
+ continue;
+ }
+
+ /* Sanity check to make sure there are elements in the ring */
+ if (ring->rd_offset == ring->wr_offset) {
++ mutex_unlock(&chan->lock);
+ kfree(itr);
+ continue;
+ }
+
+ el = &ring->ring_cache[ring->rd_offset];
+- chan = &mhi_cntrl->mhi_chan[ring->ch_id];
+
+- mutex_lock(&chan->lock);
+ dev_dbg(dev, "Processing the ring for channel (%u)\n", ring->ch_id);
+ ret = mhi_ep_process_ch_ring(ring, el);
+ if (ret) {
+@@ -1119,6 +1136,7 @@ void mhi_ep_suspend_channels(struct mhi_ep_cntrl *mhi_cntrl)
+
+ dev_dbg(&mhi_chan->mhi_dev->dev, "Suspending channel\n");
+ /* Set channel state to SUSPENDED */
++ mhi_chan->state = MHI_CH_STATE_SUSPENDED;
+ tmp &= ~CHAN_CTX_CHSTATE_MASK;
+ tmp |= FIELD_PREP(CHAN_CTX_CHSTATE_MASK, MHI_CH_STATE_SUSPENDED);
+ mhi_cntrl->ch_ctx_cache[i].chcfg = cpu_to_le32(tmp);
+@@ -1148,6 +1166,7 @@ void mhi_ep_resume_channels(struct mhi_ep_cntrl *mhi_cntrl)
+
+ dev_dbg(&mhi_chan->mhi_dev->dev, "Resuming channel\n");
+ /* Set channel state to RUNNING */
++ mhi_chan->state = MHI_CH_STATE_RUNNING;
+ tmp &= ~CHAN_CTX_CHSTATE_MASK;
+ tmp |= FIELD_PREP(CHAN_CTX_CHSTATE_MASK, MHI_CH_STATE_RUNNING);
+ mhi_cntrl->ch_ctx_cache[i].chcfg = cpu_to_le32(tmp);
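
The ring-worker fix above is an instance of the "recheck under the lock" rule: state sampled before mutex_lock() can be stale by the time the lock is acquired, so the first action after acquiring it is to re-validate and bail out if the state changed. A standalone sketch using pthreads (names are illustrative):

#include <pthread.h>
#include <stdbool.h>

struct ring {
    pthread_mutex_t lock;
    bool started;
};

/* Returns false if the ring stopped while we waited for the lock. */
static bool process_one(struct ring *r)
{
    pthread_mutex_lock(&r->lock);
    if (!r->started) {              /* state can change while we block */
        pthread_mutex_unlock(&r->lock);
        return false;
    }
    /* ... now safe to read and advance the ring offsets ... */
    pthread_mutex_unlock(&r->lock);
    return true;
}
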
+diff --git a/drivers/char/applicom.c b/drivers/char/applicom.c
+index 36203d3fa6ea6..69314532f38cd 100644
+--- a/drivers/char/applicom.c
++++ b/drivers/char/applicom.c
+@@ -197,8 +197,10 @@ static int __init applicom_init(void)
+ if (!pci_match_id(applicom_pci_tbl, dev))
+ continue;
+
+- if (pci_enable_device(dev))
++ if (pci_enable_device(dev)) {
++ pci_dev_put(dev);
+ return -EIO;
++ }
+
+ RamIO = ioremap(pci_resource_start(dev, 0), LEN_RAM_IO);
+
+@@ -207,6 +209,7 @@ static int __init applicom_init(void)
+ "space at 0x%llx\n",
+ (unsigned long long)pci_resource_start(dev, 0));
+ pci_disable_device(dev);
++ pci_dev_put(dev);
+ return -EIO;
+ }
+
+diff --git a/drivers/char/ipmi/ipmi_ipmb.c b/drivers/char/ipmi/ipmi_ipmb.c
+index 7c1aee5e11b77..3f1c9f1573e78 100644
+--- a/drivers/char/ipmi/ipmi_ipmb.c
++++ b/drivers/char/ipmi/ipmi_ipmb.c
+@@ -27,7 +27,7 @@ MODULE_PARM_DESC(bmcaddr, "Address to use for BMC.");
+
+ static unsigned int retry_time_ms = 250;
+ module_param(retry_time_ms, uint, 0644);
+-MODULE_PARM_DESC(max_retries, "Timeout time between retries, in milliseconds.");
++MODULE_PARM_DESC(retry_time_ms, "Timeout time between retries, in milliseconds.");
+
+ static unsigned int max_retries = 1;
+ module_param(max_retries, uint, 0644);
+diff --git a/drivers/char/ipmi/ipmi_ssif.c b/drivers/char/ipmi/ipmi_ssif.c
+index 4bfd1e3066162..f49d2c2ef3cfd 100644
+--- a/drivers/char/ipmi/ipmi_ssif.c
++++ b/drivers/char/ipmi/ipmi_ssif.c
+@@ -74,7 +74,8 @@
+ /*
+ * Timer values
+ */
+-#define SSIF_MSG_USEC 60000 /* 60ms between message tries. */
++#define SSIF_MSG_USEC 60000 /* 60ms between message tries (T3). */
++#define SSIF_REQ_RETRY_USEC 60000 /* 60ms between send retries (T6). */
+ #define SSIF_MSG_PART_USEC 5000 /* 5ms for a message part */
+
+ /* How many times to we retry sending/receiving the message. */
+@@ -82,7 +83,9 @@
+ #define SSIF_RECV_RETRIES 250
+
+ #define SSIF_MSG_MSEC (SSIF_MSG_USEC / 1000)
++#define SSIF_REQ_RETRY_MSEC (SSIF_REQ_RETRY_USEC / 1000)
+ #define SSIF_MSG_JIFFIES ((SSIF_MSG_USEC * 1000) / TICK_NSEC)
++#define SSIF_REQ_RETRY_JIFFIES ((SSIF_REQ_RETRY_USEC * 1000) / TICK_NSEC)
+ #define SSIF_MSG_PART_JIFFIES ((SSIF_MSG_PART_USEC * 1000) / TICK_NSEC)
+
+ /*
+@@ -92,7 +95,7 @@
+ #define SSIF_WATCH_WATCHDOG_TIMEOUT msecs_to_jiffies(250)
+
+ enum ssif_intf_state {
+- SSIF_NORMAL,
++ SSIF_IDLE,
+ SSIF_GETTING_FLAGS,
+ SSIF_GETTING_EVENTS,
+ SSIF_CLEARING_FLAGS,
+@@ -100,8 +103,8 @@ enum ssif_intf_state {
+ /* FIXME - add watchdog stuff. */
+ };
+
+-#define SSIF_IDLE(ssif) ((ssif)->ssif_state == SSIF_NORMAL \
+- && (ssif)->curr_msg == NULL)
++#define IS_SSIF_IDLE(ssif) ((ssif)->ssif_state == SSIF_IDLE \
++ && (ssif)->curr_msg == NULL)
+
+ /*
+ * Indexes into stats[] in ssif_info below.
+@@ -229,6 +232,9 @@ struct ssif_info {
+ bool got_alert;
+ bool waiting_alert;
+
++ /* Used to inform the timeout that it should do a resend. */
++ bool do_resend;
++
+ /*
+ * If set to true, this will request events the next time the
+ * state machine is idle.
+@@ -348,9 +354,9 @@ static void return_hosed_msg(struct ssif_info *ssif_info,
+
+ /*
+ * Must be called with the message lock held. This will release the
+- * message lock. Note that the caller will check SSIF_IDLE and start a
+- * new operation, so there is no need to check for new messages to
+- * start in here.
++ * message lock. Note that the caller will check IS_SSIF_IDLE and
++ * start a new operation, so there is no need to check for new
++ * messages to start in here.
+ */
+ static void start_clear_flags(struct ssif_info *ssif_info, unsigned long *flags)
+ {
+@@ -367,7 +373,7 @@ static void start_clear_flags(struct ssif_info *ssif_info, unsigned long *flags)
+
+ if (start_send(ssif_info, msg, 3) != 0) {
+ /* Error, just go to normal state. */
+- ssif_info->ssif_state = SSIF_NORMAL;
++ ssif_info->ssif_state = SSIF_IDLE;
+ }
+ }
+
+@@ -382,7 +388,7 @@ static void start_flag_fetch(struct ssif_info *ssif_info, unsigned long *flags)
+ mb[0] = (IPMI_NETFN_APP_REQUEST << 2);
+ mb[1] = IPMI_GET_MSG_FLAGS_CMD;
+ if (start_send(ssif_info, mb, 2) != 0)
+- ssif_info->ssif_state = SSIF_NORMAL;
++ ssif_info->ssif_state = SSIF_IDLE;
+ }
+
+ static void check_start_send(struct ssif_info *ssif_info, unsigned long *flags,
+@@ -393,7 +399,7 @@ static void check_start_send(struct ssif_info *ssif_info, unsigned long *flags,
+
+ flags = ipmi_ssif_lock_cond(ssif_info, &oflags);
+ ssif_info->curr_msg = NULL;
+- ssif_info->ssif_state = SSIF_NORMAL;
++ ssif_info->ssif_state = SSIF_IDLE;
+ ipmi_ssif_unlock_cond(ssif_info, flags);
+ ipmi_free_smi_msg(msg);
+ }
+@@ -407,7 +413,7 @@ static void start_event_fetch(struct ssif_info *ssif_info, unsigned long *flags)
+
+ msg = ipmi_alloc_smi_msg();
+ if (!msg) {
+- ssif_info->ssif_state = SSIF_NORMAL;
++ ssif_info->ssif_state = SSIF_IDLE;
+ ipmi_ssif_unlock_cond(ssif_info, flags);
+ return;
+ }
+@@ -430,7 +436,7 @@ static void start_recv_msg_fetch(struct ssif_info *ssif_info,
+
+ msg = ipmi_alloc_smi_msg();
+ if (!msg) {
+- ssif_info->ssif_state = SSIF_NORMAL;
++ ssif_info->ssif_state = SSIF_IDLE;
+ ipmi_ssif_unlock_cond(ssif_info, flags);
+ return;
+ }
+@@ -448,9 +454,9 @@ static void start_recv_msg_fetch(struct ssif_info *ssif_info,
+
+ /*
+ * Must be called with the message lock held. This will release the
+- * message lock. Note that the caller will check SSIF_IDLE and start a
+- * new operation, so there is no need to check for new messages to
+- * start in here.
++ * message lock. Note that the caller will check IS_SSIF_IDLE and
++ * start a new operation, so there is no need to check for new
++ * messages to start in here.
+ */
+ static void handle_flags(struct ssif_info *ssif_info, unsigned long *flags)
+ {
+@@ -466,7 +472,7 @@ static void handle_flags(struct ssif_info *ssif_info, unsigned long *flags)
+ /* Events available. */
+ start_event_fetch(ssif_info, flags);
+ else {
+- ssif_info->ssif_state = SSIF_NORMAL;
++ ssif_info->ssif_state = SSIF_IDLE;
+ ipmi_ssif_unlock_cond(ssif_info, flags);
+ }
+ }
+@@ -538,22 +544,28 @@ static void start_get(struct ssif_info *ssif_info)
+ ssif_info->recv, I2C_SMBUS_BLOCK_DATA);
+ }
+
++static void start_resend(struct ssif_info *ssif_info);
++
+ static void retry_timeout(struct timer_list *t)
+ {
+ struct ssif_info *ssif_info = from_timer(ssif_info, t, retry_timer);
+ unsigned long oflags, *flags;
+- bool waiting;
++ bool waiting, resend;
+
+ if (ssif_info->stopping)
+ return;
+
+ flags = ipmi_ssif_lock_cond(ssif_info, &oflags);
++ resend = ssif_info->do_resend;
++ ssif_info->do_resend = false;
+ waiting = ssif_info->waiting_alert;
+ ssif_info->waiting_alert = false;
+ ipmi_ssif_unlock_cond(ssif_info, flags);
+
+ if (waiting)
+ start_get(ssif_info);
++ if (resend)
++ start_resend(ssif_info);
+ }
+
+ static void watch_timeout(struct timer_list *t)
+@@ -568,7 +580,7 @@ static void watch_timeout(struct timer_list *t)
+ if (ssif_info->watch_timeout) {
+ mod_timer(&ssif_info->watch_timer,
+ jiffies + ssif_info->watch_timeout);
+- if (SSIF_IDLE(ssif_info)) {
++ if (IS_SSIF_IDLE(ssif_info)) {
+ start_flag_fetch(ssif_info, flags); /* Releases lock */
+ return;
+ }
+@@ -602,8 +614,6 @@ static void ssif_alert(struct i2c_client *client, enum i2c_alert_protocol type,
+ start_get(ssif_info);
+ }
+
+-static int start_resend(struct ssif_info *ssif_info);
+-
+ static void msg_done_handler(struct ssif_info *ssif_info, int result,
+ unsigned char *data, unsigned int len)
+ {
+@@ -756,7 +766,7 @@ static void msg_done_handler(struct ssif_info *ssif_info, int result,
+ }
+
+ switch (ssif_info->ssif_state) {
+- case SSIF_NORMAL:
++ case SSIF_IDLE:
+ ipmi_ssif_unlock_cond(ssif_info, flags);
+ if (!msg)
+ break;
+@@ -774,7 +784,7 @@ static void msg_done_handler(struct ssif_info *ssif_info, int result,
+ * Error fetching flags, or invalid length,
+ * just give up for now.
+ */
+- ssif_info->ssif_state = SSIF_NORMAL;
++ ssif_info->ssif_state = SSIF_IDLE;
+ ipmi_ssif_unlock_cond(ssif_info, flags);
+ dev_warn(&ssif_info->client->dev,
+ "Error getting flags: %d %d, %x\n",
+@@ -809,7 +819,7 @@ static void msg_done_handler(struct ssif_info *ssif_info, int result,
+ "Invalid response clearing flags: %x %x\n",
+ data[0], data[1]);
+ }
+- ssif_info->ssif_state = SSIF_NORMAL;
++ ssif_info->ssif_state = SSIF_IDLE;
+ ipmi_ssif_unlock_cond(ssif_info, flags);
+ break;
+
+@@ -887,7 +897,7 @@ static void msg_done_handler(struct ssif_info *ssif_info, int result,
+ }
+
+ flags = ipmi_ssif_lock_cond(ssif_info, &oflags);
+- if (SSIF_IDLE(ssif_info) && !ssif_info->stopping) {
++ if (IS_SSIF_IDLE(ssif_info) && !ssif_info->stopping) {
+ if (ssif_info->req_events)
+ start_event_fetch(ssif_info, flags);
+ else if (ssif_info->req_flags)
+@@ -909,31 +919,23 @@ static void msg_written_handler(struct ssif_info *ssif_info, int result,
+ if (result < 0) {
+ ssif_info->retries_left--;
+ if (ssif_info->retries_left > 0) {
+- if (!start_resend(ssif_info)) {
+- ssif_inc_stat(ssif_info, send_retries);
+- return;
+- }
+- /* request failed, just return the error. */
+- ssif_inc_stat(ssif_info, send_errors);
+-
+- if (ssif_info->ssif_debug & SSIF_DEBUG_MSG)
+- dev_dbg(&ssif_info->client->dev,
+- "%s: Out of retries\n", __func__);
+- msg_done_handler(ssif_info, -EIO, NULL, 0);
++ /*
++ * Wait the retry timeout per the spec, then
++ * redo the send.
++ */
++ ssif_info->do_resend = true;
++ mod_timer(&ssif_info->retry_timer,
++ jiffies + SSIF_REQ_RETRY_JIFFIES);
+ return;
+ }
+
+ ssif_inc_stat(ssif_info, send_errors);
+
+- /*
+- * Got an error on transmit, let the done routine
+- * handle it.
+- */
+ if (ssif_info->ssif_debug & SSIF_DEBUG_MSG)
+ dev_dbg(&ssif_info->client->dev,
+- "%s: Error %d\n", __func__, result);
++ "%s: Out of retries\n", __func__);
+
+- msg_done_handler(ssif_info, result, NULL, 0);
++ msg_done_handler(ssif_info, -EIO, NULL, 0);
+ return;
+ }
+
+@@ -996,7 +998,7 @@ static void msg_written_handler(struct ssif_info *ssif_info, int result,
+ }
+ }
+
+-static int start_resend(struct ssif_info *ssif_info)
++static void start_resend(struct ssif_info *ssif_info)
+ {
+ int command;
+
+@@ -1021,7 +1023,6 @@ static int start_resend(struct ssif_info *ssif_info)
+
+ ssif_i2c_send(ssif_info, msg_written_handler, I2C_SMBUS_WRITE,
+ command, ssif_info->data, I2C_SMBUS_BLOCK_DATA);
+- return 0;
+ }
+
+ static int start_send(struct ssif_info *ssif_info,
+@@ -1036,7 +1037,8 @@ static int start_send(struct ssif_info *ssif_info,
+ ssif_info->retries_left = SSIF_SEND_RETRIES;
+ memcpy(ssif_info->data + 1, data, len);
+ ssif_info->data_len = len;
+- return start_resend(ssif_info);
++ start_resend(ssif_info);
++ return 0;
+ }
+
+ /* Must be called with the message lock held. */
+@@ -1046,7 +1048,7 @@ static void start_next_msg(struct ssif_info *ssif_info, unsigned long *flags)
+ unsigned long oflags;
+
+ restart:
+- if (!SSIF_IDLE(ssif_info)) {
++ if (!IS_SSIF_IDLE(ssif_info)) {
+ ipmi_ssif_unlock_cond(ssif_info, flags);
+ return;
+ }
+@@ -1269,7 +1271,7 @@ static void shutdown_ssif(void *send_info)
+ dev_set_drvdata(&ssif_info->client->dev, NULL);
+
+ /* make sure the driver is not looking for flags any more. */
+- while (ssif_info->ssif_state != SSIF_NORMAL)
++ while (ssif_info->ssif_state != SSIF_IDLE)
+ schedule_timeout(1);
+
+ ssif_info->stopping = true;
+@@ -1334,8 +1336,10 @@ static int do_cmd(struct i2c_client *client, int len, unsigned char *msg,
+ ret = i2c_smbus_write_block_data(client, SSIF_IPMI_REQUEST, len, msg);
+ if (ret) {
+ retry_cnt--;
+- if (retry_cnt > 0)
++ if (retry_cnt > 0) {
++ msleep(SSIF_REQ_RETRY_MSEC);
+ goto retry1;
++ }
+ return -ENODEV;
+ }
+
+@@ -1476,8 +1480,10 @@ retry_write:
+ 32, msg);
+ if (ret) {
+ retry_cnt--;
+- if (retry_cnt > 0)
++ if (retry_cnt > 0) {
++ msleep(SSIF_REQ_RETRY_MSEC);
+ goto retry_write;
++ }
+ dev_err(&client->dev, "Could not write multi-part start, though the BMC said it could handle it. Just limit sends to one part.\n");
+ return ret;
+ }
+@@ -1839,7 +1845,7 @@ static int ssif_probe(struct i2c_client *client)
+ }
+
+ spin_lock_init(&ssif_info->lock);
+- ssif_info->ssif_state = SSIF_NORMAL;
++ ssif_info->ssif_state = SSIF_IDLE;
+ timer_setup(&ssif_info->retry_timer, retry_timeout, 0);
+ timer_setup(&ssif_info->watch_timer, watch_timeout, 0);
+
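
The ssif rework above stops resending synchronously from the I2C failure path; instead it records the intent in do_resend and arms the existing retry timer, so retries honor the spec's T6 delay. Stripped of the IPMI details, the handshake looks like this sketch (single-threaded model, invented names, stubbed actions):

#include <stdbool.h>

struct ssif_model {
    bool do_resend;       /* set by the write-failure path */
    bool waiting_alert;   /* set by the response-poll path */
};

static void start_resend(struct ssif_model *m) { (void)m; /* redo the write */ }
static void start_get(struct ssif_model *m)    { (void)m; /* poll for data */ }

/* Failure path: don't resend inline; record why and arm the timer
 * (mod_timer(..., jiffies + SSIF_REQ_RETRY_JIFFIES) in the driver). */
static void on_write_error(struct ssif_model *m)
{
    m->do_resend = true;
}

/* Timer callback: consume the reason flags, then act. */
static void retry_timeout(struct ssif_model *m)
{
    bool resend = m->do_resend;
    bool waiting = m->waiting_alert;

    m->do_resend = false;
    m->waiting_alert = false;

    if (waiting)
        start_get(m);     /* T3 path: read a pending response */
    if (resend)
        start_resend(m);  /* T6 path: retry the send */
}
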
+diff --git a/drivers/char/pcmcia/cm4000_cs.c b/drivers/char/pcmcia/cm4000_cs.c
+index adaec8fd4b16c..e656f42a28ac2 100644
+--- a/drivers/char/pcmcia/cm4000_cs.c
++++ b/drivers/char/pcmcia/cm4000_cs.c
+@@ -529,7 +529,8 @@ static int set_protocol(struct cm4000_dev *dev, struct ptsreq *ptsreq)
+ DEBUGP(5, dev, "NumRecBytes is valid\n");
+ break;
+ }
+- usleep_range(10000, 11000);
++ /* cannot sleep as this is in atomic context */
++ mdelay(10);
+ }
+ if (i == 100) {
+ DEBUGP(5, dev, "Timeout waiting for NumRecBytes getting "
+@@ -549,7 +550,8 @@ static int set_protocol(struct cm4000_dev *dev, struct ptsreq *ptsreq)
+ }
+ break;
+ }
+- usleep_range(10000, 11000);
++ /* cannot sleep as this is in atomic context */
++ mdelay(10);
+ }
+
+ /* check whether it is a short PTS reply? */
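
The cm4000 change swaps usleep_range() for mdelay() because the poll loop runs in atomic context, where sleeping is forbidden and only busy-waits are legal. The rule of thumb, simplified (see Documentation/timers/timers-howto.rst for the full guidance):

#include <linux/delay.h>

/* Atomic context (spinlock held, IRQs off, timer callback): busy-wait. */
static void wait_10ms_atomic(void)
{
    mdelay(10);                    /* spins; never sleeps */
}

/* Process context: prefer a sleeping wait and give up the CPU. */
static void wait_10ms_process(void)
{
    usleep_range(10000, 11000);    /* may sleep */
}
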
+diff --git a/drivers/clocksource/timer-riscv.c b/drivers/clocksource/timer-riscv.c
+index a0d66fabf0732..a01c2bd241349 100644
+--- a/drivers/clocksource/timer-riscv.c
++++ b/drivers/clocksource/timer-riscv.c
+@@ -177,6 +177,11 @@ static int __init riscv_timer_init_dt(struct device_node *n)
+ return error;
+ }
+
++ if (riscv_isa_extension_available(NULL, SSTC)) {
++ pr_info("Timer interrupt in S-mode is available via sstc extension\n");
++ static_branch_enable(&riscv_sstc_available);
++ }
++
+ error = cpuhp_setup_state(CPUHP_AP_RISCV_TIMER_STARTING,
+ "clockevents/riscv/timer:starting",
+ riscv_timer_starting_cpu, riscv_timer_dying_cpu);
+@@ -184,11 +189,6 @@ static int __init riscv_timer_init_dt(struct device_node *n)
+ pr_err("cpu hp setup state failed for RISCV timer [%d]\n",
+ error);
+
+- if (riscv_isa_extension_available(NULL, SSTC)) {
+- pr_info("Timer interrupt in S-mode is available via sstc extension\n");
+- static_branch_enable(&riscv_sstc_available);
+- }
+-
+ return error;
+ }
+
+diff --git a/drivers/cpufreq/davinci-cpufreq.c b/drivers/cpufreq/davinci-cpufreq.c
+index 9e97f60f81996..ebb3a81026816 100644
+--- a/drivers/cpufreq/davinci-cpufreq.c
++++ b/drivers/cpufreq/davinci-cpufreq.c
+@@ -133,12 +133,14 @@ static int __init davinci_cpufreq_probe(struct platform_device *pdev)
+
+ static int __exit davinci_cpufreq_remove(struct platform_device *pdev)
+ {
++ cpufreq_unregister_driver(&davinci_driver);
++
+ clk_put(cpufreq.armclk);
+
+ if (cpufreq.asyncclk)
+ clk_put(cpufreq.asyncclk);
+
+- return cpufreq_unregister_driver(&davinci_driver);
++ return 0;
+ }
+
+ static struct platform_driver davinci_cpufreq_driver = {
+diff --git a/drivers/cpuidle/Kconfig.arm b/drivers/cpuidle/Kconfig.arm
+index 747aa537389b9..f0714a32921e6 100644
+--- a/drivers/cpuidle/Kconfig.arm
++++ b/drivers/cpuidle/Kconfig.arm
+@@ -102,6 +102,7 @@ config ARM_MVEBU_V7_CPUIDLE
+ config ARM_TEGRA_CPUIDLE
+ bool "CPU Idle Driver for NVIDIA Tegra SoCs"
+ depends on (ARCH_TEGRA || COMPILE_TEST) && !ARM64 && MMU
++ depends on ARCH_SUSPEND_POSSIBLE
+ select ARCH_NEEDS_CPU_IDLE_COUPLED if SMP
+ select ARM_CPU_SUSPEND
+ help
+@@ -110,6 +111,7 @@ config ARM_TEGRA_CPUIDLE
+ config ARM_QCOM_SPM_CPUIDLE
+ bool "CPU Idle Driver for Qualcomm Subsystem Power Manager (SPM)"
+ depends on (ARCH_QCOM || COMPILE_TEST) && !ARM64 && MMU
++ depends on ARCH_SUSPEND_POSSIBLE
+ select ARM_CPU_SUSPEND
+ select CPU_IDLE_MULTIPLE_DRIVERS
+ select DT_IDLE_STATES
+diff --git a/drivers/crypto/amcc/crypto4xx_core.c b/drivers/crypto/amcc/crypto4xx_core.c
+index 280f4b0e71334..50dc783821b69 100644
+--- a/drivers/crypto/amcc/crypto4xx_core.c
++++ b/drivers/crypto/amcc/crypto4xx_core.c
+@@ -522,7 +522,6 @@ static void crypto4xx_cipher_done(struct crypto4xx_device *dev,
+ {
+ struct skcipher_request *req;
+ struct scatterlist *dst;
+- dma_addr_t addr;
+
+ req = skcipher_request_cast(pd_uinfo->async_req);
+
+@@ -531,8 +530,8 @@ static void crypto4xx_cipher_done(struct crypto4xx_device *dev,
+ req->cryptlen, req->dst);
+ } else {
+ dst = pd_uinfo->dest_va;
+- addr = dma_map_page(dev->core_dev->device, sg_page(dst),
+- dst->offset, dst->length, DMA_FROM_DEVICE);
++ dma_unmap_page(dev->core_dev->device, pd->dest, dst->length,
++ DMA_FROM_DEVICE);
+ }
+
+ if (pd_uinfo->sa_va->sa_command_0.bf.save_iv == SA_SAVE_IV) {
+@@ -557,10 +556,9 @@ static void crypto4xx_ahash_done(struct crypto4xx_device *dev,
+ struct ahash_request *ahash_req;
+
+ ahash_req = ahash_request_cast(pd_uinfo->async_req);
+- ctx = crypto_tfm_ctx(ahash_req->base.tfm);
++ ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(ahash_req));
+
+- crypto4xx_copy_digest_to_dst(ahash_req->result, pd_uinfo,
+- crypto_tfm_ctx(ahash_req->base.tfm));
++ crypto4xx_copy_digest_to_dst(ahash_req->result, pd_uinfo, ctx);
+ crypto4xx_ret_sg_desc(dev, pd_uinfo);
+
+ if (pd_uinfo->state & PD_ENTRY_BUSY)
+diff --git a/drivers/crypto/ccp/ccp-dmaengine.c b/drivers/crypto/ccp/ccp-dmaengine.c
+index 9f753cb4f5f18..b386a7063818b 100644
+--- a/drivers/crypto/ccp/ccp-dmaengine.c
++++ b/drivers/crypto/ccp/ccp-dmaengine.c
+@@ -642,14 +642,26 @@ static void ccp_dma_release(struct ccp_device *ccp)
+ chan = ccp->ccp_dma_chan + i;
+ dma_chan = &chan->dma_chan;
+
+- if (dma_chan->client_count)
+- dma_release_channel(dma_chan);
+-
+ tasklet_kill(&chan->cleanup_tasklet);
+ list_del_rcu(&dma_chan->device_node);
+ }
+ }
+
++static void ccp_dma_release_channels(struct ccp_device *ccp)
++{
++ struct ccp_dma_chan *chan;
++ struct dma_chan *dma_chan;
++ unsigned int i;
++
++ for (i = 0; i < ccp->cmd_q_count; i++) {
++ chan = ccp->ccp_dma_chan + i;
++ dma_chan = &chan->dma_chan;
++
++ if (dma_chan->client_count)
++ dma_release_channel(dma_chan);
++ }
++}
++
+ int ccp_dmaengine_register(struct ccp_device *ccp)
+ {
+ struct ccp_dma_chan *chan;
+@@ -770,8 +782,9 @@ void ccp_dmaengine_unregister(struct ccp_device *ccp)
+ if (!dmaengine)
+ return;
+
+- ccp_dma_release(ccp);
++ ccp_dma_release_channels(ccp);
+ dma_async_device_unregister(dma_dev);
++ ccp_dma_release(ccp);
+
+ kmem_cache_destroy(ccp->dma_desc_cache);
+ kmem_cache_destroy(ccp->dma_cmd_cache);
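
The ccp-dmaengine fix above splits channel release out of ccp_dma_release() so that references held by DMA clients are dropped before dma_async_device_unregister(), and the driver's internal state is torn down only afterwards. The ordering constraint generalizes to any unregister path; a sketch with stand-in steps:

static void release_external_refs(void) { /* dma_release_channel() */ }
static void unregister_device(void)     { /* dma_async_device_unregister() */ }
static void free_internal_state(void)   { /* tasklet_kill(), list_del_rcu() */ }

/*
 * Teardown mirrors setup in reverse:
 *  1) drop the references others hold on us,
 *  2) unpublish the device,
 *  3) only then free state the earlier steps may still touch.
 */
static void teardown(void)
{
    release_external_refs();
    unregister_device();
    free_internal_state();
}
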
+diff --git a/drivers/crypto/ccp/sev-dev.c b/drivers/crypto/ccp/sev-dev.c
+index 06fc7156c04f3..3e583f0324874 100644
+--- a/drivers/crypto/ccp/sev-dev.c
++++ b/drivers/crypto/ccp/sev-dev.c
+@@ -26,6 +26,7 @@
+ #include <linux/fs_struct.h>
+
+ #include <asm/smp.h>
++#include <asm/cacheflush.h>
+
+ #include "psp-dev.h"
+ #include "sev-dev.h"
+@@ -881,7 +882,14 @@ static int sev_ioctl_do_get_id2(struct sev_issue_cmd *argp)
+ input_address = (void __user *)input.address;
+
+ if (input.address && input.length) {
+- id_blob = kzalloc(input.length, GFP_KERNEL);
++ /*
++ * The length of the ID shouldn't be assumed by software since
++ * it may change in the future. The allocation size is limited
++ * to 1 << (PAGE_SHIFT + MAX_ORDER - 1) by the page allocator.
++ * If the allocation fails, simply return ENOMEM rather than
++ * warning in the kernel log.
++ */
++ id_blob = kzalloc(input.length, GFP_KERNEL | __GFP_NOWARN);
+ if (!id_blob)
+ return -ENOMEM;
+
+@@ -1327,7 +1335,10 @@ void sev_pci_init(void)
+
+ /* Obtain the TMR memory area for SEV-ES use */
+ sev_es_tmr = sev_fw_alloc(SEV_ES_TMR_SIZE);
+- if (!sev_es_tmr)
++ if (sev_es_tmr)
++ /* Must flush the cache before giving it to the firmware */
++ clflush_cache_range(sev_es_tmr, SEV_ES_TMR_SIZE);
++ else
+ dev_warn(sev->dev,
+ "SEV: TMR allocation failed, SEV-ES support unavailable\n");
+
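
The GET_ID2 change above allocates a user-specified length with __GFP_NOWARN, so an oversized request fails quietly with -ENOMEM instead of triggering a page-allocation warning in the log. The shape of the fix in plain C, with an explicit stand-in bound (the kernel's real bound is enforced by the page allocator, not a constant):

#include <errno.h>
#include <stdlib.h>

#define BLOB_CAP (1UL << 22)    /* stand-in bound, not the kernel's */

/* Allocate a caller-specified length, failing quietly on bad sizes. */
static int alloc_user_blob(size_t len, void **out)
{
    void *p;

    if (!len || len > BLOB_CAP)
        return -ENOMEM;         /* reject without logging */

    p = calloc(1, len);         /* zeroed, like kzalloc() */
    if (!p)
        return -ENOMEM;

    *out = p;
    return 0;
}
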
+diff --git a/drivers/crypto/hisilicon/sgl.c b/drivers/crypto/hisilicon/sgl.c
+index 2b6f2281cfd6c..0974b00414050 100644
+--- a/drivers/crypto/hisilicon/sgl.c
++++ b/drivers/crypto/hisilicon/sgl.c
+@@ -124,9 +124,8 @@ err_free_mem:
+ for (j = 0; j < i; j++) {
+ dma_free_coherent(dev, block_size, block[j].sgl,
+ block[j].sgl_dma);
+- memset(block + j, 0, sizeof(*block));
+ }
+- kfree(pool);
++ kfree_sensitive(pool);
+ return ERR_PTR(-ENOMEM);
+ }
+ EXPORT_SYMBOL_GPL(hisi_acc_create_sgl_pool);
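
The sgl.c error path now frees the pool descriptor with kfree_sensitive(), which zeroizes the allocation before handing it back to the allocator so the DMA addresses it held don't linger in freed memory; the per-block memset becomes redundant. A userspace stand-in for the helper (the kernel version derives the size itself, and its wipe cannot be optimized away):

#include <stdlib.h>
#include <string.h>

/*
 * Zeroize, then free.  Plain memset() is shown for portability; real
 * code should use explicit_bzero()/memset_explicit() so the compiler
 * cannot drop the store as dead.
 */
static void free_sensitive(void *p, size_t size)
{
    if (!p)
        return;
    memset(p, 0, size);
    free(p);
}
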
+diff --git a/drivers/crypto/marvell/octeontx2/Makefile b/drivers/crypto/marvell/octeontx2/Makefile
+index 965297e969546..f0f2942c1d278 100644
+--- a/drivers/crypto/marvell/octeontx2/Makefile
++++ b/drivers/crypto/marvell/octeontx2/Makefile
+@@ -1,11 +1,10 @@
+ # SPDX-License-Identifier: GPL-2.0-only
+-obj-$(CONFIG_CRYPTO_DEV_OCTEONTX2_CPT) += rvu_cptpf.o rvu_cptvf.o
++obj-$(CONFIG_CRYPTO_DEV_OCTEONTX2_CPT) += rvu_cptcommon.o rvu_cptpf.o rvu_cptvf.o
+
++rvu_cptcommon-objs := cn10k_cpt.o otx2_cptlf.o otx2_cpt_mbox_common.o
+ rvu_cptpf-objs := otx2_cptpf_main.o otx2_cptpf_mbox.o \
+- otx2_cpt_mbox_common.o otx2_cptpf_ucode.o otx2_cptlf.o \
+- cn10k_cpt.o otx2_cpt_devlink.o
+-rvu_cptvf-objs := otx2_cptvf_main.o otx2_cptvf_mbox.o otx2_cptlf.o \
+- otx2_cpt_mbox_common.o otx2_cptvf_reqmgr.o \
+- otx2_cptvf_algs.o cn10k_cpt.o
++ otx2_cptpf_ucode.o otx2_cpt_devlink.o
++rvu_cptvf-objs := otx2_cptvf_main.o otx2_cptvf_mbox.o \
++ otx2_cptvf_reqmgr.o otx2_cptvf_algs.o
+
+ ccflags-y += -I$(srctree)/drivers/net/ethernet/marvell/octeontx2/af
+diff --git a/drivers/crypto/marvell/octeontx2/cn10k_cpt.c b/drivers/crypto/marvell/octeontx2/cn10k_cpt.c
+index 1499ef75b5c22..93d22b3289919 100644
+--- a/drivers/crypto/marvell/octeontx2/cn10k_cpt.c
++++ b/drivers/crypto/marvell/octeontx2/cn10k_cpt.c
+@@ -7,6 +7,9 @@
+ #include "otx2_cptlf.h"
+ #include "cn10k_cpt.h"
+
++static void cn10k_cpt_send_cmd(union otx2_cpt_inst_s *cptinst, u32 insts_num,
++ struct otx2_cptlf_info *lf);
++
+ static struct cpt_hw_ops otx2_hw_ops = {
+ .send_cmd = otx2_cpt_send_cmd,
+ .cpt_get_compcode = otx2_cpt_get_compcode,
+@@ -19,8 +22,8 @@ static struct cpt_hw_ops cn10k_hw_ops = {
+ .cpt_get_uc_compcode = cn10k_cpt_get_uc_compcode,
+ };
+
+-void cn10k_cpt_send_cmd(union otx2_cpt_inst_s *cptinst, u32 insts_num,
+- struct otx2_cptlf_info *lf)
++static void cn10k_cpt_send_cmd(union otx2_cpt_inst_s *cptinst, u32 insts_num,
++ struct otx2_cptlf_info *lf)
+ {
+ void __iomem *lmtline = lf->lmtline;
+ u64 val = (lf->slot & 0x7FF);
+@@ -68,6 +71,7 @@ int cn10k_cptpf_lmtst_init(struct otx2_cptpf_dev *cptpf)
+
+ return 0;
+ }
++EXPORT_SYMBOL_NS_GPL(cn10k_cptpf_lmtst_init, CRYPTO_DEV_OCTEONTX2_CPT);
+
+ int cn10k_cptvf_lmtst_init(struct otx2_cptvf_dev *cptvf)
+ {
+@@ -91,3 +95,4 @@ int cn10k_cptvf_lmtst_init(struct otx2_cptvf_dev *cptvf)
+
+ return 0;
+ }
++EXPORT_SYMBOL_NS_GPL(cn10k_cptvf_lmtst_init, CRYPTO_DEV_OCTEONTX2_CPT);
+diff --git a/drivers/crypto/marvell/octeontx2/cn10k_cpt.h b/drivers/crypto/marvell/octeontx2/cn10k_cpt.h
+index c091392b47e0f..aaefc7e38e060 100644
+--- a/drivers/crypto/marvell/octeontx2/cn10k_cpt.h
++++ b/drivers/crypto/marvell/octeontx2/cn10k_cpt.h
+@@ -28,8 +28,6 @@ static inline u8 otx2_cpt_get_uc_compcode(union otx2_cpt_res_s *result)
+ return ((struct cn9k_cpt_res_s *)result)->uc_compcode;
+ }
+
+-void cn10k_cpt_send_cmd(union otx2_cpt_inst_s *cptinst, u32 insts_num,
+- struct otx2_cptlf_info *lf);
+ int cn10k_cptpf_lmtst_init(struct otx2_cptpf_dev *cptpf);
+ int cn10k_cptvf_lmtst_init(struct otx2_cptvf_dev *cptvf);
+
+diff --git a/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h b/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h
+index 5012b7e669f07..6019066a6451a 100644
+--- a/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h
++++ b/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h
+@@ -145,8 +145,6 @@ int otx2_cpt_send_mbox_msg(struct otx2_mbox *mbox, struct pci_dev *pdev);
+
+ int otx2_cpt_send_af_reg_requests(struct otx2_mbox *mbox,
+ struct pci_dev *pdev);
+-int otx2_cpt_add_read_af_reg(struct otx2_mbox *mbox, struct pci_dev *pdev,
+- u64 reg, u64 *val, int blkaddr);
+ int otx2_cpt_add_write_af_reg(struct otx2_mbox *mbox, struct pci_dev *pdev,
+ u64 reg, u64 val, int blkaddr);
+ int otx2_cpt_read_af_reg(struct otx2_mbox *mbox, struct pci_dev *pdev,
+diff --git a/drivers/crypto/marvell/octeontx2/otx2_cpt_mbox_common.c b/drivers/crypto/marvell/octeontx2/otx2_cpt_mbox_common.c
+index a317319696eff..115997475beb3 100644
+--- a/drivers/crypto/marvell/octeontx2/otx2_cpt_mbox_common.c
++++ b/drivers/crypto/marvell/octeontx2/otx2_cpt_mbox_common.c
+@@ -19,6 +19,7 @@ int otx2_cpt_send_mbox_msg(struct otx2_mbox *mbox, struct pci_dev *pdev)
+ }
+ return ret;
+ }
++EXPORT_SYMBOL_NS_GPL(otx2_cpt_send_mbox_msg, CRYPTO_DEV_OCTEONTX2_CPT);
+
+ int otx2_cpt_send_ready_msg(struct otx2_mbox *mbox, struct pci_dev *pdev)
+ {
+@@ -36,14 +37,17 @@ int otx2_cpt_send_ready_msg(struct otx2_mbox *mbox, struct pci_dev *pdev)
+
+ return otx2_cpt_send_mbox_msg(mbox, pdev);
+ }
++EXPORT_SYMBOL_NS_GPL(otx2_cpt_send_ready_msg, CRYPTO_DEV_OCTEONTX2_CPT);
+
+ int otx2_cpt_send_af_reg_requests(struct otx2_mbox *mbox, struct pci_dev *pdev)
+ {
+ return otx2_cpt_send_mbox_msg(mbox, pdev);
+ }
++EXPORT_SYMBOL_NS_GPL(otx2_cpt_send_af_reg_requests, CRYPTO_DEV_OCTEONTX2_CPT);
+
+-int otx2_cpt_add_read_af_reg(struct otx2_mbox *mbox, struct pci_dev *pdev,
+- u64 reg, u64 *val, int blkaddr)
++static int otx2_cpt_add_read_af_reg(struct otx2_mbox *mbox,
++ struct pci_dev *pdev, u64 reg,
++ u64 *val, int blkaddr)
+ {
+ struct cpt_rd_wr_reg_msg *reg_msg;
+
+@@ -91,6 +95,7 @@ int otx2_cpt_add_write_af_reg(struct otx2_mbox *mbox, struct pci_dev *pdev,
+
+ return 0;
+ }
++EXPORT_SYMBOL_NS_GPL(otx2_cpt_add_write_af_reg, CRYPTO_DEV_OCTEONTX2_CPT);
+
+ int otx2_cpt_read_af_reg(struct otx2_mbox *mbox, struct pci_dev *pdev,
+ u64 reg, u64 *val, int blkaddr)
+@@ -103,6 +108,7 @@ int otx2_cpt_read_af_reg(struct otx2_mbox *mbox, struct pci_dev *pdev,
+
+ return otx2_cpt_send_mbox_msg(mbox, pdev);
+ }
++EXPORT_SYMBOL_NS_GPL(otx2_cpt_read_af_reg, CRYPTO_DEV_OCTEONTX2_CPT);
+
+ int otx2_cpt_write_af_reg(struct otx2_mbox *mbox, struct pci_dev *pdev,
+ u64 reg, u64 val, int blkaddr)
+@@ -115,6 +121,7 @@ int otx2_cpt_write_af_reg(struct otx2_mbox *mbox, struct pci_dev *pdev,
+
+ return otx2_cpt_send_mbox_msg(mbox, pdev);
+ }
++EXPORT_SYMBOL_NS_GPL(otx2_cpt_write_af_reg, CRYPTO_DEV_OCTEONTX2_CPT);
+
+ int otx2_cpt_attach_rscrs_msg(struct otx2_cptlfs_info *lfs)
+ {
+@@ -170,6 +177,7 @@ int otx2_cpt_detach_rsrcs_msg(struct otx2_cptlfs_info *lfs)
+
+ return ret;
+ }
++EXPORT_SYMBOL_NS_GPL(otx2_cpt_detach_rsrcs_msg, CRYPTO_DEV_OCTEONTX2_CPT);
+
+ int otx2_cpt_msix_offset_msg(struct otx2_cptlfs_info *lfs)
+ {
+@@ -202,6 +210,7 @@ int otx2_cpt_msix_offset_msg(struct otx2_cptlfs_info *lfs)
+ }
+ return ret;
+ }
++EXPORT_SYMBOL_NS_GPL(otx2_cpt_msix_offset_msg, CRYPTO_DEV_OCTEONTX2_CPT);
+
+ int otx2_cpt_sync_mbox_msg(struct otx2_mbox *mbox)
+ {
+@@ -216,3 +225,4 @@ int otx2_cpt_sync_mbox_msg(struct otx2_mbox *mbox)
+
+ return otx2_mbox_check_rsp_msgs(mbox, 0);
+ }
++EXPORT_SYMBOL_NS_GPL(otx2_cpt_sync_mbox_msg, CRYPTO_DEV_OCTEONTX2_CPT);
+diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptlf.c b/drivers/crypto/marvell/octeontx2/otx2_cptlf.c
+index c8350fcd60fab..71e5f79431afa 100644
+--- a/drivers/crypto/marvell/octeontx2/otx2_cptlf.c
++++ b/drivers/crypto/marvell/octeontx2/otx2_cptlf.c
+@@ -274,6 +274,8 @@ void otx2_cptlf_unregister_interrupts(struct otx2_cptlfs_info *lfs)
+ }
+ cptlf_disable_intrs(lfs);
+ }
++EXPORT_SYMBOL_NS_GPL(otx2_cptlf_unregister_interrupts,
++ CRYPTO_DEV_OCTEONTX2_CPT);
+
+ static int cptlf_do_register_interrrupts(struct otx2_cptlfs_info *lfs,
+ int lf_num, int irq_offset,
+@@ -321,6 +323,7 @@ free_irq:
+ otx2_cptlf_unregister_interrupts(lfs);
+ return ret;
+ }
++EXPORT_SYMBOL_NS_GPL(otx2_cptlf_register_interrupts, CRYPTO_DEV_OCTEONTX2_CPT);
+
+ void otx2_cptlf_free_irqs_affinity(struct otx2_cptlfs_info *lfs)
+ {
+@@ -334,6 +337,7 @@ void otx2_cptlf_free_irqs_affinity(struct otx2_cptlfs_info *lfs)
+ free_cpumask_var(lfs->lf[slot].affinity_mask);
+ }
+ }
++EXPORT_SYMBOL_NS_GPL(otx2_cptlf_free_irqs_affinity, CRYPTO_DEV_OCTEONTX2_CPT);
+
+ int otx2_cptlf_set_irqs_affinity(struct otx2_cptlfs_info *lfs)
+ {
+@@ -366,6 +370,7 @@ free_affinity_mask:
+ otx2_cptlf_free_irqs_affinity(lfs);
+ return ret;
+ }
++EXPORT_SYMBOL_NS_GPL(otx2_cptlf_set_irqs_affinity, CRYPTO_DEV_OCTEONTX2_CPT);
+
+ int otx2_cptlf_init(struct otx2_cptlfs_info *lfs, u8 eng_grp_mask, int pri,
+ int lfs_num)
+@@ -422,6 +427,7 @@ clear_lfs_num:
+ lfs->lfs_num = 0;
+ return ret;
+ }
++EXPORT_SYMBOL_NS_GPL(otx2_cptlf_init, CRYPTO_DEV_OCTEONTX2_CPT);
+
+ void otx2_cptlf_shutdown(struct otx2_cptlfs_info *lfs)
+ {
+@@ -431,3 +437,8 @@ void otx2_cptlf_shutdown(struct otx2_cptlfs_info *lfs)
+ /* Send request to detach LFs */
+ otx2_cpt_detach_rsrcs_msg(lfs);
+ }
++EXPORT_SYMBOL_NS_GPL(otx2_cptlf_shutdown, CRYPTO_DEV_OCTEONTX2_CPT);
++
++MODULE_AUTHOR("Marvell");
++MODULE_DESCRIPTION("Marvell RVU CPT Common module");
++MODULE_LICENSE("GPL");
+diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptpf_main.c b/drivers/crypto/marvell/octeontx2/otx2_cptpf_main.c
+index a402ccfac5577..ddf6e913c1c45 100644
+--- a/drivers/crypto/marvell/octeontx2/otx2_cptpf_main.c
++++ b/drivers/crypto/marvell/octeontx2/otx2_cptpf_main.c
+@@ -831,6 +831,8 @@ static struct pci_driver otx2_cpt_pci_driver = {
+
+ module_pci_driver(otx2_cpt_pci_driver);
+
++MODULE_IMPORT_NS(CRYPTO_DEV_OCTEONTX2_CPT);
++
+ MODULE_AUTHOR("Marvell");
+ MODULE_DESCRIPTION(OTX2_CPT_DRV_STRING);
+ MODULE_LICENSE("GPL v2");
+diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptvf_main.c b/drivers/crypto/marvell/octeontx2/otx2_cptvf_main.c
+index 3411e664cf50c..392e9fee05e81 100644
+--- a/drivers/crypto/marvell/octeontx2/otx2_cptvf_main.c
++++ b/drivers/crypto/marvell/octeontx2/otx2_cptvf_main.c
+@@ -429,6 +429,8 @@ static struct pci_driver otx2_cptvf_pci_driver = {
+
+ module_pci_driver(otx2_cptvf_pci_driver);
+
++MODULE_IMPORT_NS(CRYPTO_DEV_OCTEONTX2_CPT);
++
+ MODULE_AUTHOR("Marvell");
+ MODULE_DESCRIPTION("Marvell RVU CPT Virtual Function Driver");
+ MODULE_LICENSE("GPL v2");
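Note on the octeontx2 hunks above: the common mailbox/LF helpers are moved into the CRYPTO_DEV_OCTEONTX2_CPT symbol namespace. The shared module exports each helper with EXPORT_SYMBOL_NS_GPL(), and the PF and VF drivers must then declare MODULE_IMPORT_NS() before modpost will allow them to link against those symbols. A minimal sketch of the same pattern, using a hypothetical DEMO_NS namespace and demo_op() helper:

    /* Provider module: exports into a namespace rather than the global
     * symbol table. DEMO_NS and demo_op() are hypothetical names. */
    #include <linux/module.h>

    int demo_op(void)
    {
            return 0;
    }
    EXPORT_SYMBOL_NS_GPL(demo_op, DEMO_NS);

    MODULE_LICENSE("GPL");

    /* Consumer module: must import the namespace, otherwise modpost
     * rejects the build ("uses symbol from namespace DEMO_NS, but does
     * not import it"). */
    MODULE_IMPORT_NS(DEMO_NS);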
+diff --git a/drivers/crypto/qat/qat_common/qat_algs.c b/drivers/crypto/qat/qat_common/qat_algs.c
+index b4b9f0aa59b98..b61ada5591586 100644
+--- a/drivers/crypto/qat/qat_common/qat_algs.c
++++ b/drivers/crypto/qat/qat_common/qat_algs.c
+@@ -435,8 +435,8 @@ static void qat_alg_skcipher_init_com(struct qat_alg_skcipher_ctx *ctx,
+ } else if (aes_v2_capable && mode == ICP_QAT_HW_CIPHER_CTR_MODE) {
+ ICP_QAT_FW_LA_SLICE_TYPE_SET(header->serv_specif_flags,
+ ICP_QAT_FW_LA_USE_UCS_SLICE_TYPE);
+- keylen = round_up(keylen, 16);
+ memcpy(cd->ucs_aes.key, key, keylen);
++ keylen = round_up(keylen, 16);
+ } else {
+ memcpy(cd->aes.key, key, keylen);
+ }
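The qat_algs.c hunk above is an out-of-bounds read fix: the key length was rounded up to the UCS slice's 16-byte granularity before the memcpy(), so up to 15 bytes past the end of the caller's key buffer could be copied into the content descriptor. Copying first and rounding afterwards keeps the padded length for hardware programming only. A minimal sketch of the corrected ordering (demo_* names hypothetical):

    #include <string.h>

    #define DEMO_ROUND_UP(x, a) ((((x) + (a) - 1) / (a)) * (a))

    /* Returns the padded length to program into the hardware; the copy
     * itself never reads beyond the caller-provided keylen bytes. */
    static size_t demo_store_key(unsigned char *slot,
                                 const unsigned char *key, size_t keylen)
    {
            memcpy(slot, key, keylen);         /* copy exactly the input */
            return DEMO_ROUND_UP(keylen, 16);  /* pad for 16-byte slots  */
    }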
+diff --git a/drivers/crypto/ux500/Kconfig b/drivers/crypto/ux500/Kconfig
+index dcbd7404768f1..ac89cd2de12a1 100644
+--- a/drivers/crypto/ux500/Kconfig
++++ b/drivers/crypto/ux500/Kconfig
+@@ -15,8 +15,7 @@ config CRYPTO_DEV_UX500_HASH
+ Depends on UX500/STM DMA if running in DMA mode.
+
+ config CRYPTO_DEV_UX500_DEBUG
+- bool "Activate ux500 platform debug-mode for crypto and hash block"
+- depends on CRYPTO_DEV_UX500_CRYP || CRYPTO_DEV_UX500_HASH
++ bool "Activate debug-mode for UX500 crypto driver for HASH block"
++ depends on CRYPTO_DEV_UX500_HASH
+ help
+- Say Y if you want to add debug prints to ux500_hash and
+- ux500_cryp devices.
++ Say Y if you want to add debug prints to ux500_hash devices.
+diff --git a/drivers/cxl/pmem.c b/drivers/cxl/pmem.c
+index 08bbbac9a6d08..71cfa1fdf9027 100644
+--- a/drivers/cxl/pmem.c
++++ b/drivers/cxl/pmem.c
+@@ -76,6 +76,7 @@ static int cxl_nvdimm_probe(struct device *dev)
+ return rc;
+
+ set_bit(NDD_LABELING, &flags);
++ set_bit(NDD_REGISTER_SYNC, &flags);
+ set_bit(ND_CMD_GET_CONFIG_SIZE, &cmd_mask);
+ set_bit(ND_CMD_GET_CONFIG_DATA, &cmd_mask);
+ set_bit(ND_CMD_SET_CONFIG_DATA, &cmd_mask);
+diff --git a/drivers/dax/bus.c b/drivers/dax/bus.c
+index 1dad813ee4a69..c64e7076537cb 100644
+--- a/drivers/dax/bus.c
++++ b/drivers/dax/bus.c
+@@ -427,8 +427,8 @@ static void unregister_dev_dax(void *dev)
+ dev_dbg(dev, "%s\n", __func__);
+
+ kill_dev_dax(dev_dax);
+- free_dev_dax_ranges(dev_dax);
+ device_del(dev);
++ free_dev_dax_ranges(dev_dax);
+ put_device(dev);
+ }
+
+diff --git a/drivers/dax/kmem.c b/drivers/dax/kmem.c
+index 4852a2dbdb278..4aa758a2b3d1b 100644
+--- a/drivers/dax/kmem.c
++++ b/drivers/dax/kmem.c
+@@ -146,7 +146,7 @@ static int dev_dax_kmem_probe(struct dev_dax *dev_dax)
+ if (rc) {
+ dev_warn(dev, "mapping%d: %#llx-%#llx memory add failed\n",
+ i, range.start, range.end);
+- release_resource(res);
++ remove_resource(res);
+ kfree(res);
+ data->res[i] = NULL;
+ if (mapped)
+@@ -195,7 +195,7 @@ static void dev_dax_kmem_remove(struct dev_dax *dev_dax)
+
+ rc = remove_memory(range.start, range_len(&range));
+ if (rc == 0) {
+- release_resource(data->res[i]);
++ remove_resource(data->res[i]);
+ kfree(data->res[i]);
+ data->res[i] = NULL;
+ success++;
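Both dax/kmem hunks swap release_resource() for remove_resource(). The distinction matters because these hotplugged ranges were added with insert_resource(): remove_resource() is the documented inverse and keeps any child resources in the tree by re-parenting them, where release_resource() only detaches the node itself. A minimal sketch of the paired calls (demo_* names hypothetical):

    #include <linux/ioport.h>
    #include <linux/slab.h>

    static struct resource *demo_res;

    static int demo_claim(struct resource *parent, resource_size_t start,
                          resource_size_t size)
    {
            demo_res = kzalloc(sizeof(*demo_res), GFP_KERNEL);
            if (!demo_res)
                    return -ENOMEM;
            demo_res->start = start;
            demo_res->end = start + size - 1;
            demo_res->flags = IORESOURCE_MEM;
            return insert_resource(parent, demo_res);
    }

    static void demo_unclaim(void)
    {
            remove_resource(demo_res);  /* inverse of insert_resource() */
            kfree(demo_res);
    }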
+diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
+index b6d48d54f42fc..7b95f07c6f1af 100644
+--- a/drivers/dma/Kconfig
++++ b/drivers/dma/Kconfig
+@@ -245,7 +245,7 @@ config FSL_RAID
+
+ config HISI_DMA
+ tristate "HiSilicon DMA Engine support"
+- depends on ARM64 || COMPILE_TEST
++ depends on ARCH_HISI || COMPILE_TEST
+ depends on PCI_MSI
+ select DMA_ENGINE
+ select DMA_VIRTUAL_CHANNELS
+diff --git a/drivers/dma/dw-axi-dmac/dw-axi-dmac-platform.c b/drivers/dma/dw-axi-dmac/dw-axi-dmac-platform.c
+index bf85aa0979ecb..152c5d98524d7 100644
+--- a/drivers/dma/dw-axi-dmac/dw-axi-dmac-platform.c
++++ b/drivers/dma/dw-axi-dmac/dw-axi-dmac-platform.c
+@@ -325,8 +325,6 @@ dma_chan_tx_status(struct dma_chan *dchan, dma_cookie_t cookie,
+ len = vd_to_axi_desc(vdesc)->hw_desc[0].len;
+ completed_length = completed_blocks * len;
+ bytes = length - completed_length;
+- } else {
+- bytes = vd_to_axi_desc(vdesc)->length;
+ }
+
+ spin_unlock_irqrestore(&chan->vc.lock, flags);
+diff --git a/drivers/dma/dw-edma/dw-edma-core.c b/drivers/dma/dw-edma/dw-edma-core.c
+index c54b24ff5206a..52bdf04aff511 100644
+--- a/drivers/dma/dw-edma/dw-edma-core.c
++++ b/drivers/dma/dw-edma/dw-edma-core.c
+@@ -455,6 +455,8 @@ dw_edma_device_transfer(struct dw_edma_transfer *xfer)
+ * and destination addresses are increased
+ * by the same portion (data length)
+ */
++ } else if (xfer->type == EDMA_XFER_INTERLEAVED) {
++ burst->dar = dst_addr;
+ }
+ } else {
+ burst->dar = dst_addr;
+@@ -470,6 +472,8 @@ dw_edma_device_transfer(struct dw_edma_transfer *xfer)
+ * and destination addresses are increased
+ * by the same portion (data length)
+ */
++ } else if (xfer->type == EDMA_XFER_INTERLEAVED) {
++ burst->sar = src_addr;
+ }
+ }
+
+diff --git a/drivers/dma/dw-edma/dw-edma-v0-core.c b/drivers/dma/dw-edma/dw-edma-v0-core.c
+index 77e6cfe52e0a3..a3816ba632851 100644
+--- a/drivers/dma/dw-edma/dw-edma-v0-core.c
++++ b/drivers/dma/dw-edma/dw-edma-v0-core.c
+@@ -192,7 +192,7 @@ static inline void writeq_ch(struct dw_edma *dw, enum dw_edma_dir dir, u16 ch,
+ static inline u64 readq_ch(struct dw_edma *dw, enum dw_edma_dir dir, u16 ch,
+ const void __iomem *addr)
+ {
+- u32 value;
++ u64 value;
+
+ if (dw->chip->mf == EDMA_MF_EDMA_LEGACY) {
+ u32 viewport_sel;
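The one-line dw-edma fix above widens a local from u32 to u64 in a helper that returns a 64-bit register value: assigning the read into a 32-bit variable silently discards bits 63:32 before the u64 return type can preserve them. A compact illustration of the truncation, in plain C with a hypothetical register pointer:

    #include <stdint.h>

    /* Buggy shape: the 64-bit read is squeezed through a 32-bit local,
     * so the caller always sees zeros in bits 63:32. */
    uint64_t demo_readq_buggy(const volatile uint64_t *reg)
    {
            uint32_t value = *reg;
            return value;
    }

    /* Fixed shape, matching the hunk: the local is as wide as the read. */
    uint64_t demo_readq(const volatile uint64_t *reg)
    {
            uint64_t value = *reg;
            return value;
    }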
+diff --git a/drivers/dma/idxd/device.c b/drivers/dma/idxd/device.c
+index 29dbb0f52e186..8b4573dc7ecc5 100644
+--- a/drivers/dma/idxd/device.c
++++ b/drivers/dma/idxd/device.c
+@@ -701,7 +701,7 @@ static void idxd_groups_clear_state(struct idxd_device *idxd)
+ group->use_rdbuf_limit = false;
+ group->rdbufs_allowed = 0;
+ group->rdbufs_reserved = 0;
+- if (idxd->hw.version < DEVICE_VERSION_2 && !tc_override) {
++ if (idxd->hw.version <= DEVICE_VERSION_2 && !tc_override) {
+ group->tc_a = 1;
+ group->tc_b = 1;
+ } else {
+diff --git a/drivers/dma/idxd/init.c b/drivers/dma/idxd/init.c
+index 529ea09c90940..e63b0c674d883 100644
+--- a/drivers/dma/idxd/init.c
++++ b/drivers/dma/idxd/init.c
+@@ -295,7 +295,7 @@ static int idxd_setup_groups(struct idxd_device *idxd)
+ }
+
+ idxd->groups[i] = group;
+- if (idxd->hw.version < DEVICE_VERSION_2 && !tc_override) {
++ if (idxd->hw.version <= DEVICE_VERSION_2 && !tc_override) {
+ group->tc_a = 1;
+ group->tc_b = 1;
+ } else {
+diff --git a/drivers/dma/idxd/sysfs.c b/drivers/dma/idxd/sysfs.c
+index 3229dfc786507..18cd8151dee02 100644
+--- a/drivers/dma/idxd/sysfs.c
++++ b/drivers/dma/idxd/sysfs.c
+@@ -387,7 +387,7 @@ static ssize_t group_traffic_class_a_store(struct device *dev,
+ if (idxd->state == IDXD_DEV_ENABLED)
+ return -EPERM;
+
+- if (idxd->hw.version < DEVICE_VERSION_2 && !tc_override)
++ if (idxd->hw.version <= DEVICE_VERSION_2 && !tc_override)
+ return -EPERM;
+
+ if (val < 0 || val > 7)
+@@ -429,7 +429,7 @@ static ssize_t group_traffic_class_b_store(struct device *dev,
+ if (idxd->state == IDXD_DEV_ENABLED)
+ return -EPERM;
+
+- if (idxd->hw.version < DEVICE_VERSION_2 && !tc_override)
++ if (idxd->hw.version <= DEVICE_VERSION_2 && !tc_override)
+ return -EPERM;
+
+ if (val < 0 || val > 7)
+diff --git a/drivers/dma/ptdma/ptdma-dmaengine.c b/drivers/dma/ptdma/ptdma-dmaengine.c
+index cc22d162ce250..1aa65e5de0f3a 100644
+--- a/drivers/dma/ptdma/ptdma-dmaengine.c
++++ b/drivers/dma/ptdma/ptdma-dmaengine.c
+@@ -254,7 +254,7 @@ static void pt_issue_pending(struct dma_chan *dma_chan)
+ spin_unlock_irqrestore(&chan->vc.lock, flags);
+
+ /* If there was nothing active, start processing */
+- if (engine_is_idle)
++ if (engine_is_idle && desc)
+ pt_cmd_callback(desc, 0);
+ }
+
+diff --git a/drivers/dma/sf-pdma/sf-pdma.c b/drivers/dma/sf-pdma/sf-pdma.c
+index 6b524eb6bcf3a..e578ad5569494 100644
+--- a/drivers/dma/sf-pdma/sf-pdma.c
++++ b/drivers/dma/sf-pdma/sf-pdma.c
+@@ -96,7 +96,6 @@ sf_pdma_prep_dma_memcpy(struct dma_chan *dchan, dma_addr_t dest, dma_addr_t src,
+ if (!desc)
+ return NULL;
+
+- desc->in_use = true;
+ desc->dirn = DMA_MEM_TO_MEM;
+ desc->async_tx = vchan_tx_prep(&chan->vchan, &desc->vdesc, flags);
+
+@@ -290,7 +289,7 @@ static void sf_pdma_free_desc(struct virt_dma_desc *vdesc)
+ struct sf_pdma_desc *desc;
+
+ desc = to_sf_pdma_desc(vdesc);
+- desc->in_use = false;
++ kfree(desc);
+ }
+
+ static void sf_pdma_donebh_tasklet(struct tasklet_struct *t)
+diff --git a/drivers/dma/sf-pdma/sf-pdma.h b/drivers/dma/sf-pdma/sf-pdma.h
+index dcb3687bd5da2..5c398a83b491a 100644
+--- a/drivers/dma/sf-pdma/sf-pdma.h
++++ b/drivers/dma/sf-pdma/sf-pdma.h
+@@ -78,7 +78,6 @@ struct sf_pdma_desc {
+ u64 src_addr;
+ struct virt_dma_desc vdesc;
+ struct sf_pdma_chan *chan;
+- bool in_use;
+ enum dma_transfer_direction dirn;
+ struct dma_async_tx_descriptor *async_tx;
+ };
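The sf-pdma change above is a descriptor-lifetime fix: descriptors were allocated per prepared transfer but only flagged not-in-use on completion, never kfree()d, so every transfer leaked its descriptor. With virt-dma the desc_free callback is the single reclamation point, so the fix frees the wrapper there and drops the in_use field entirely. A minimal sketch of that callback (demo_* names hypothetical):

    #include <linux/slab.h>
    #include "virt-dma.h"   /* in-tree helper, drivers/dma/virt-dma.h */

    struct demo_desc {
            struct virt_dma_desc vdesc;
            /* driver-specific bookkeeping ... */
    };

    /* Invoked by the virt-dma core once all references are dropped. */
    static void demo_free_desc(struct virt_dma_desc *vdesc)
    {
            kfree(container_of(vdesc, struct demo_desc, vdesc));
    }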
+diff --git a/drivers/firmware/dmi-sysfs.c b/drivers/firmware/dmi-sysfs.c
+index 66727ad3361b9..402217c570333 100644
+--- a/drivers/firmware/dmi-sysfs.c
++++ b/drivers/firmware/dmi-sysfs.c
+@@ -603,16 +603,16 @@ static void __init dmi_sysfs_register_handle(const struct dmi_header *dh,
+ *ret = kobject_init_and_add(&entry->kobj, &dmi_sysfs_entry_ktype, NULL,
+ "%d-%d", dh->type, entry->instance);
+
+- if (*ret) {
+- kobject_put(&entry->kobj);
+- return;
+- }
+-
+ /* Thread on the global list for cleanup */
+ spin_lock(&entry_list_lock);
+ list_add_tail(&entry->list, &entry_list);
+ spin_unlock(&entry_list_lock);
+
++ if (*ret) {
++ kobject_put(&entry->kobj);
++ return;
++ }
++
+ /* Handle specializations by type */
+ switch (dh->type) {
+ case DMI_ENTRY_SYSTEM_EVENT_LOG:
+diff --git a/drivers/firmware/google/framebuffer-coreboot.c b/drivers/firmware/google/framebuffer-coreboot.c
+index c6dcc1ef93acf..c323a818805cc 100644
+--- a/drivers/firmware/google/framebuffer-coreboot.c
++++ b/drivers/firmware/google/framebuffer-coreboot.c
+@@ -43,9 +43,7 @@ static int framebuffer_probe(struct coreboot_device *dev)
+ fb->green_mask_pos == formats[i].green.offset &&
+ fb->green_mask_size == formats[i].green.length &&
+ fb->blue_mask_pos == formats[i].blue.offset &&
+- fb->blue_mask_size == formats[i].blue.length &&
+- fb->reserved_mask_pos == formats[i].transp.offset &&
+- fb->reserved_mask_size == formats[i].transp.length)
++ fb->blue_mask_size == formats[i].blue.length)
+ pdata.format = formats[i].name;
+ }
+ if (!pdata.format)
+diff --git a/drivers/firmware/psci/psci.c b/drivers/firmware/psci/psci.c
+index 447ee4ea5c903..f78249fe2512a 100644
+--- a/drivers/firmware/psci/psci.c
++++ b/drivers/firmware/psci/psci.c
+@@ -108,9 +108,10 @@ bool psci_power_state_is_valid(u32 state)
+ return !(state & ~valid_mask);
+ }
+
+-static unsigned long __invoke_psci_fn_hvc(unsigned long function_id,
+- unsigned long arg0, unsigned long arg1,
+- unsigned long arg2)
++static __always_inline unsigned long
++__invoke_psci_fn_hvc(unsigned long function_id,
++ unsigned long arg0, unsigned long arg1,
++ unsigned long arg2)
+ {
+ struct arm_smccc_res res;
+
+@@ -118,9 +119,10 @@ static unsigned long __invoke_psci_fn_hvc(unsigned long function_id,
+ return res.a0;
+ }
+
+-static unsigned long __invoke_psci_fn_smc(unsigned long function_id,
+- unsigned long arg0, unsigned long arg1,
+- unsigned long arg2)
++static __always_inline unsigned long
++__invoke_psci_fn_smc(unsigned long function_id,
++ unsigned long arg0, unsigned long arg1,
++ unsigned long arg2)
+ {
+ struct arm_smccc_res res;
+
+@@ -128,7 +130,7 @@ static unsigned long __invoke_psci_fn_smc(unsigned long function_id,
+ return res.a0;
+ }
+
+-static int psci_to_linux_errno(int errno)
++static __always_inline int psci_to_linux_errno(int errno)
+ {
+ switch (errno) {
+ case PSCI_RET_SUCCESS:
+@@ -169,7 +171,8 @@ int psci_set_osi_mode(bool enable)
+ return psci_to_linux_errno(err);
+ }
+
+-static int __psci_cpu_suspend(u32 fn, u32 state, unsigned long entry_point)
++static __always_inline int
++__psci_cpu_suspend(u32 fn, u32 state, unsigned long entry_point)
+ {
+ int err;
+
+@@ -177,13 +180,15 @@ static int __psci_cpu_suspend(u32 fn, u32 state, unsigned long entry_point)
+ return psci_to_linux_errno(err);
+ }
+
+-static int psci_0_1_cpu_suspend(u32 state, unsigned long entry_point)
++static __always_inline int
++psci_0_1_cpu_suspend(u32 state, unsigned long entry_point)
+ {
+ return __psci_cpu_suspend(psci_0_1_function_ids.cpu_suspend,
+ state, entry_point);
+ }
+
+-static int psci_0_2_cpu_suspend(u32 state, unsigned long entry_point)
++static __always_inline int
++psci_0_2_cpu_suspend(u32 state, unsigned long entry_point)
+ {
+ return __psci_cpu_suspend(PSCI_FN_NATIVE(0_2, CPU_SUSPEND),
+ state, entry_point);
+@@ -450,10 +455,12 @@ late_initcall(psci_debugfs_init)
+ #endif
+
+ #ifdef CONFIG_CPU_IDLE
+-static int psci_suspend_finisher(unsigned long state)
++static noinstr int psci_suspend_finisher(unsigned long state)
+ {
+ u32 power_state = state;
+- phys_addr_t pa_cpu_resume = __pa_symbol(cpu_resume);
++ phys_addr_t pa_cpu_resume;
++
++ pa_cpu_resume = __pa_symbol_nodebug((unsigned long)cpu_resume);
+
+ return psci_ops.cpu_suspend(power_state, pa_cpu_resume);
+ }
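The psci.c hunks annotate the cpuidle suspend path: psci_suspend_finisher() becomes noinstr and every helper it reaches is forced __always_inline, so no instrumentable out-of-line call (tracing, KASAN hooks, and the like) is emitted while the CPU is entering a state where such instrumentation is unsafe. A sketch of the pattern's shape, with hypothetical demo_* names and a stubbed-out firmware call:

    #include <linux/compiler_types.h>
    #include <linux/errno.h>

    /* Stand-in for the SMCCC invocation; assumed noinstr-safe. */
    static __always_inline long demo_smc(unsigned long state)
    {
            return 0;       /* real code would trap to firmware here */
    }

    static __always_inline int demo_errno_map(long fw_err)
    {
            return fw_err ? -EINVAL : 0;
    }

    /* Entry point on the idle path: noinstr, and everything it calls
     * above is inlined, so no instrumentable call is emitted. */
    static noinstr int demo_suspend_finisher(unsigned long state)
    {
            return demo_errno_map(demo_smc(state));
    }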
+diff --git a/drivers/firmware/stratix10-svc.c b/drivers/firmware/stratix10-svc.c
+index b4081f4d88a37..bde1f543f5298 100644
+--- a/drivers/firmware/stratix10-svc.c
++++ b/drivers/firmware/stratix10-svc.c
+@@ -1138,13 +1138,17 @@ static int stratix10_svc_drv_probe(struct platform_device *pdev)
+
+ /* allocate service controller and supporting channel */
+ controller = devm_kzalloc(dev, sizeof(*controller), GFP_KERNEL);
+- if (!controller)
+- return -ENOMEM;
++ if (!controller) {
++ ret = -ENOMEM;
++ goto err_destroy_pool;
++ }
+
+ chans = devm_kmalloc_array(dev, SVC_NUM_CHANNEL,
+ sizeof(*chans), GFP_KERNEL | __GFP_ZERO);
+- if (!chans)
+- return -ENOMEM;
++ if (!chans) {
++ ret = -ENOMEM;
++ goto err_destroy_pool;
++ }
+
+ controller->dev = dev;
+ controller->num_chans = SVC_NUM_CHANNEL;
+@@ -1159,7 +1163,7 @@ static int stratix10_svc_drv_probe(struct platform_device *pdev)
+ ret = kfifo_alloc(&controller->svc_fifo, fifo_size, GFP_KERNEL);
+ if (ret) {
+ dev_err(dev, "failed to allocate FIFO\n");
+- return ret;
++ goto err_destroy_pool;
+ }
+ spin_lock_init(&controller->svc_fifo_lock);
+
+@@ -1198,19 +1202,20 @@ static int stratix10_svc_drv_probe(struct platform_device *pdev)
+ ret = platform_device_add(svc->stratix10_svc_rsu);
+ if (ret) {
+ platform_device_put(svc->stratix10_svc_rsu);
+- return ret;
++ goto err_free_kfifo;
+ }
+
+ svc->intel_svc_fcs = platform_device_alloc(INTEL_FCS, 1);
+ if (!svc->intel_svc_fcs) {
+ dev_err(dev, "failed to allocate %s device\n", INTEL_FCS);
+- return -ENOMEM;
++ ret = -ENOMEM;
++ goto err_unregister_dev;
+ }
+
+ ret = platform_device_add(svc->intel_svc_fcs);
+ if (ret) {
+ platform_device_put(svc->intel_svc_fcs);
+- return ret;
++ goto err_unregister_dev;
+ }
+
+ dev_set_drvdata(dev, svc);
+@@ -1219,8 +1224,12 @@ static int stratix10_svc_drv_probe(struct platform_device *pdev)
+
+ return 0;
+
++err_unregister_dev:
++ platform_device_unregister(svc->stratix10_svc_rsu);
+ err_free_kfifo:
+ kfifo_free(&controller->svc_fifo);
++err_destroy_pool:
++ gen_pool_destroy(genpool);
+ return ret;
+ }
+
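The stratix10-svc probe fix above converts bare returns into the kernel's goto-unwind idiom: the gen_pool, the kfifo, and the already-registered child device each gain a cleanup label, and every later failure jumps to the label that releases everything acquired before it, in reverse order. The skeleton of the idiom (all demo_* names hypothetical):

    int acquire_a(void); int acquire_b(void); int acquire_c(void);
    void release_a(void); void release_b(void);

    static int demo_probe(void)
    {
            int ret;

            ret = acquire_a();
            if (ret)
                    return ret;

            ret = acquire_b();
            if (ret)
                    goto err_release_a;

            ret = acquire_c();
            if (ret)
                    goto err_release_b;

            return 0;

    err_release_b:
            release_b();            /* unwind in reverse order */
    err_release_a:
            release_a();
            return ret;
    }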
+diff --git a/drivers/fpga/microchip-spi.c b/drivers/fpga/microchip-spi.c
+index 7436976ea9048..137fafdf57a6f 100644
+--- a/drivers/fpga/microchip-spi.c
++++ b/drivers/fpga/microchip-spi.c
+@@ -6,6 +6,7 @@
+ #include <asm/unaligned.h>
+ #include <linux/delay.h>
+ #include <linux/fpga/fpga-mgr.h>
++#include <linux/iopoll.h>
+ #include <linux/module.h>
+ #include <linux/of_device.h>
+ #include <linux/spi/spi.h>
+@@ -33,7 +34,7 @@
+
+ #define MPF_BITS_PER_COMPONENT_SIZE 22
+
+-#define MPF_STATUS_POLL_RETRIES 10000
++#define MPF_STATUS_POLL_TIMEOUT (2 * USEC_PER_SEC)
+ #define MPF_STATUS_BUSY BIT(0)
+ #define MPF_STATUS_READY BIT(1)
+ #define MPF_STATUS_SPI_VIOLATION BIT(2)
+@@ -42,46 +43,55 @@
+ struct mpf_priv {
+ struct spi_device *spi;
+ bool program_mode;
++ u8 tx __aligned(ARCH_KMALLOC_MINALIGN);
++ u8 rx;
+ };
+
+-static int mpf_read_status(struct spi_device *spi)
++static int mpf_read_status(struct mpf_priv *priv)
+ {
+- u8 status = 0, status_command = MPF_SPI_READ_STATUS;
+- struct spi_transfer xfers[2] = { 0 };
+- int ret;
+-
+ /*
+ * HW status is returned on MISO in the first byte after CS went
+ * active. However, first reading can be inadequate, so we submit
+ * two identical SPI transfers and use result of the later one.
+ */
+- xfers[0].tx_buf = &status_command;
+- xfers[1].tx_buf = &status_command;
+- xfers[0].rx_buf = &status;
+- xfers[1].rx_buf = &status;
+- xfers[0].len = 1;
+- xfers[1].len = 1;
+- xfers[0].cs_change = 1;
++ struct spi_transfer xfers[2] = {
++ {
++ .tx_buf = &priv->tx,
++ .rx_buf = &priv->rx,
++ .len = 1,
++ .cs_change = 1,
++ }, {
++ .tx_buf = &priv->tx,
++ .rx_buf = &priv->rx,
++ .len = 1,
++ },
++ };
++ u8 status;
++ int ret;
+
+- ret = spi_sync_transfer(spi, xfers, 2);
++ priv->tx = MPF_SPI_READ_STATUS;
++
++ ret = spi_sync_transfer(priv->spi, xfers, 2);
++ if (ret)
++ return ret;
++
++ status = priv->rx;
+
+ if ((status & MPF_STATUS_SPI_VIOLATION) ||
+ (status & MPF_STATUS_SPI_ERROR))
+- ret = -EIO;
++ return -EIO;
+
+- return ret ? : status;
++ return status;
+ }
+
+ static enum fpga_mgr_states mpf_ops_state(struct fpga_manager *mgr)
+ {
+ struct mpf_priv *priv = mgr->priv;
+- struct spi_device *spi;
+ bool program_mode;
+ int status;
+
+- spi = priv->spi;
+ program_mode = priv->program_mode;
+- status = mpf_read_status(spi);
++ status = mpf_read_status(priv);
+
+ if (!program_mode && !status)
+ return FPGA_MGR_STATE_OPERATING;
+@@ -185,52 +195,53 @@ static int mpf_ops_parse_header(struct fpga_manager *mgr,
+ return 0;
+ }
+
+-/* Poll HW status until busy bit is cleared and mask bits are set. */
+-static int mpf_poll_status(struct spi_device *spi, u8 mask)
++static int mpf_poll_status(struct mpf_priv *priv, u8 mask)
+ {
+- int status, retries = MPF_STATUS_POLL_RETRIES;
++ int ret, status;
+
+- while (retries--) {
+- status = mpf_read_status(spi);
+- if (status < 0)
+- return status;
+-
+- if (status & MPF_STATUS_BUSY)
+- continue;
+-
+- if (!mask || (status & mask))
+- return status;
+- }
++ /*
++ * Busy poll HW status. Polling stops if any of the following
++ * conditions are met:
++ * - timeout is reached
++ * - mpf_read_status() returns an error
++ * - busy bit is cleared AND mask bits are set
++ */
++ ret = read_poll_timeout(mpf_read_status, status,
++ (status < 0) ||
++ ((status & (MPF_STATUS_BUSY | mask)) == mask),
++ 0, MPF_STATUS_POLL_TIMEOUT, false, priv);
++ if (ret < 0)
++ return ret;
+
+- return -EBUSY;
++ return status;
+ }
+
+-static int mpf_spi_write(struct spi_device *spi, const void *buf, size_t buf_size)
++static int mpf_spi_write(struct mpf_priv *priv, const void *buf, size_t buf_size)
+ {
+- int status = mpf_poll_status(spi, 0);
++ int status = mpf_poll_status(priv, 0);
+
+ if (status < 0)
+ return status;
+
+- return spi_write(spi, buf, buf_size);
++ return spi_write_then_read(priv->spi, buf, buf_size, NULL, 0);
+ }
+
+-static int mpf_spi_write_then_read(struct spi_device *spi,
++static int mpf_spi_write_then_read(struct mpf_priv *priv,
+ const void *txbuf, size_t txbuf_size,
+ void *rxbuf, size_t rxbuf_size)
+ {
+ const u8 read_command[] = { MPF_SPI_READ_DATA };
+ int ret;
+
+- ret = mpf_spi_write(spi, txbuf, txbuf_size);
++ ret = mpf_spi_write(priv, txbuf, txbuf_size);
+ if (ret)
+ return ret;
+
+- ret = mpf_poll_status(spi, MPF_STATUS_READY);
++ ret = mpf_poll_status(priv, MPF_STATUS_READY);
+ if (ret < 0)
+ return ret;
+
+- return spi_write_then_read(spi, read_command, sizeof(read_command),
++ return spi_write_then_read(priv->spi, read_command, sizeof(read_command),
+ rxbuf, rxbuf_size);
+ }
+
+@@ -242,7 +253,6 @@ static int mpf_ops_write_init(struct fpga_manager *mgr,
+ const u8 isc_en_command[] = { MPF_SPI_ISC_ENABLE };
+ struct mpf_priv *priv = mgr->priv;
+ struct device *dev = &mgr->dev;
+- struct spi_device *spi;
+ u32 isc_ret = 0;
+ int ret;
+
+@@ -251,9 +261,7 @@ static int mpf_ops_write_init(struct fpga_manager *mgr,
+ return -EOPNOTSUPP;
+ }
+
+- spi = priv->spi;
+-
+- ret = mpf_spi_write_then_read(spi, isc_en_command, sizeof(isc_en_command),
++ ret = mpf_spi_write_then_read(priv, isc_en_command, sizeof(isc_en_command),
+ &isc_ret, sizeof(isc_ret));
+ if (ret || isc_ret) {
+ dev_err(dev, "Failed to enable ISC: spi_ret %d, isc_ret %u\n",
+@@ -261,7 +269,7 @@ static int mpf_ops_write_init(struct fpga_manager *mgr,
+ return -EFAULT;
+ }
+
+- ret = mpf_spi_write(spi, program_mode, sizeof(program_mode));
++ ret = mpf_spi_write(priv, program_mode, sizeof(program_mode));
+ if (ret) {
+ dev_err(dev, "Failed to enter program mode: %d\n", ret);
+ return ret;
+@@ -274,11 +282,9 @@ static int mpf_ops_write_init(struct fpga_manager *mgr,
+
+ static int mpf_ops_write(struct fpga_manager *mgr, const char *buf, size_t count)
+ {
+- u8 spi_frame_command[] = { MPF_SPI_FRAME };
+ struct spi_transfer xfers[2] = { 0 };
+ struct mpf_priv *priv = mgr->priv;
+ struct device *dev = &mgr->dev;
+- struct spi_device *spi;
+ int ret, i;
+
+ if (count % MPF_SPI_FRAME_SIZE) {
+@@ -287,18 +293,18 @@ static int mpf_ops_write(struct fpga_manager *mgr, const char *buf, size_t count
+ return -EINVAL;
+ }
+
+- spi = priv->spi;
+-
+- xfers[0].tx_buf = spi_frame_command;
+- xfers[0].len = sizeof(spi_frame_command);
++ xfers[0].tx_buf = &priv->tx;
++ xfers[0].len = 1;
+
+ for (i = 0; i < count / MPF_SPI_FRAME_SIZE; i++) {
+ xfers[1].tx_buf = buf + i * MPF_SPI_FRAME_SIZE;
+ xfers[1].len = MPF_SPI_FRAME_SIZE;
+
+- ret = mpf_poll_status(spi, 0);
+- if (ret >= 0)
+- ret = spi_sync_transfer(spi, xfers, ARRAY_SIZE(xfers));
++ ret = mpf_poll_status(priv, 0);
++ if (ret >= 0) {
++ priv->tx = MPF_SPI_FRAME;
++ ret = spi_sync_transfer(priv->spi, xfers, ARRAY_SIZE(xfers));
++ }
+
+ if (ret) {
+ dev_err(dev, "Failed to write bitstream frame %d/%zu\n",
+@@ -317,12 +323,9 @@ static int mpf_ops_write_complete(struct fpga_manager *mgr,
+ const u8 release_command[] = { MPF_SPI_RELEASE };
+ struct mpf_priv *priv = mgr->priv;
+ struct device *dev = &mgr->dev;
+- struct spi_device *spi;
+ int ret;
+
+- spi = priv->spi;
+-
+- ret = mpf_spi_write(spi, isc_dis_command, sizeof(isc_dis_command));
++ ret = mpf_spi_write(priv, isc_dis_command, sizeof(isc_dis_command));
+ if (ret) {
+ dev_err(dev, "Failed to disable ISC: %d\n", ret);
+ return ret;
+@@ -330,7 +333,7 @@ static int mpf_ops_write_complete(struct fpga_manager *mgr,
+
+ usleep_range(1000, 2000);
+
+- ret = mpf_spi_write(spi, release_command, sizeof(release_command));
++ ret = mpf_spi_write(priv, release_command, sizeof(release_command));
+ if (ret) {
+ dev_err(dev, "Failed to exit program mode: %d\n", ret);
+ return ret;
+diff --git a/drivers/gpio/gpio-pca9570.c b/drivers/gpio/gpio-pca9570.c
+index 6c07a8811a7a5..6a5a8e593ed55 100644
+--- a/drivers/gpio/gpio-pca9570.c
++++ b/drivers/gpio/gpio-pca9570.c
+@@ -18,11 +18,11 @@
+ #define SLG7XL45106_GPO_REG 0xDB
+
+ /**
+- * struct pca9570_platform_data - GPIO platformdata
++ * struct pca9570_chip_data - GPIO platformdata
+ * @ngpio: no of gpios
+ * @command: Command to be sent
+ */
+-struct pca9570_platform_data {
++struct pca9570_chip_data {
+ u16 ngpio;
+ u32 command;
+ };
+@@ -36,7 +36,7 @@ struct pca9570_platform_data {
+ */
+ struct pca9570 {
+ struct gpio_chip chip;
+- const struct pca9570_platform_data *p_data;
++ const struct pca9570_chip_data *chip_data;
+ struct mutex lock;
+ u8 out;
+ };
+@@ -46,8 +46,8 @@ static int pca9570_read(struct pca9570 *gpio, u8 *value)
+ struct i2c_client *client = to_i2c_client(gpio->chip.parent);
+ int ret;
+
+- if (gpio->p_data->command != 0)
+- ret = i2c_smbus_read_byte_data(client, gpio->p_data->command);
++ if (gpio->chip_data->command != 0)
++ ret = i2c_smbus_read_byte_data(client, gpio->chip_data->command);
+ else
+ ret = i2c_smbus_read_byte(client);
+
+@@ -62,8 +62,8 @@ static int pca9570_write(struct pca9570 *gpio, u8 value)
+ {
+ struct i2c_client *client = to_i2c_client(gpio->chip.parent);
+
+- if (gpio->p_data->command != 0)
+- return i2c_smbus_write_byte_data(client, gpio->p_data->command, value);
++ if (gpio->chip_data->command != 0)
++ return i2c_smbus_write_byte_data(client, gpio->chip_data->command, value);
+
+ return i2c_smbus_write_byte(client, value);
+ }
+@@ -127,8 +127,8 @@ static int pca9570_probe(struct i2c_client *client)
+ gpio->chip.get = pca9570_get;
+ gpio->chip.set = pca9570_set;
+ gpio->chip.base = -1;
+- gpio->p_data = device_get_match_data(&client->dev);
+- gpio->chip.ngpio = gpio->p_data->ngpio;
++ gpio->chip_data = device_get_match_data(&client->dev);
++ gpio->chip.ngpio = gpio->chip_data->ngpio;
+ gpio->chip.can_sleep = true;
+
+ mutex_init(&gpio->lock);
+@@ -141,15 +141,15 @@ static int pca9570_probe(struct i2c_client *client)
+ return devm_gpiochip_add_data(&client->dev, &gpio->chip, gpio);
+ }
+
+-static const struct pca9570_platform_data pca9570_gpio = {
++static const struct pca9570_chip_data pca9570_gpio = {
+ .ngpio = 4,
+ };
+
+-static const struct pca9570_platform_data pca9571_gpio = {
++static const struct pca9570_chip_data pca9571_gpio = {
+ .ngpio = 8,
+ };
+
+-static const struct pca9570_platform_data slg7xl45106_gpio = {
++static const struct pca9570_chip_data slg7xl45106_gpio = {
+ .ngpio = 8,
+ .command = SLG7XL45106_GPO_REG,
+ };
+diff --git a/drivers/gpio/gpio-vf610.c b/drivers/gpio/gpio-vf610.c
+index 9033db00c360d..d3f3a69d49077 100644
+--- a/drivers/gpio/gpio-vf610.c
++++ b/drivers/gpio/gpio-vf610.c
+@@ -317,7 +317,7 @@ static int vf610_gpio_probe(struct platform_device *pdev)
+
+ gc = &port->gc;
+ gc->parent = dev;
+- gc->label = "vf610-gpio";
++ gc->label = dev_name(dev);
+ gc->ngpio = VF610_GPIO_PER_PORT;
+ gc->base = of_alias_get_id(np, "gpio") * VF610_GPIO_PER_PORT;
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
+index 0040deaf8a83a..90a5254ec1387 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
+@@ -97,7 +97,7 @@ struct amdgpu_amdkfd_fence {
+
+ struct amdgpu_kfd_dev {
+ struct kfd_dev *dev;
+- uint64_t vram_used;
++ int64_t vram_used;
+ uint64_t vram_used_aligned;
+ bool init_complete;
+ struct work_struct reset_work;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
+index 3b5c53712d319..05b884fe0a927 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
+@@ -1612,6 +1612,7 @@ int amdgpu_amdkfd_gpuvm_alloc_memory_of_gpu(
+ struct amdgpu_bo *bo;
+ struct drm_gem_object *gobj = NULL;
+ u32 domain, alloc_domain;
++ uint64_t aligned_size;
+ u64 alloc_flags;
+ int ret;
+
+@@ -1667,22 +1668,23 @@ int amdgpu_amdkfd_gpuvm_alloc_memory_of_gpu(
+ * the memory.
+ */
+ if ((*mem)->aql_queue)
+- size = size >> 1;
++ size >>= 1;
++ aligned_size = PAGE_ALIGN(size);
+
+ (*mem)->alloc_flags = flags;
+
+ amdgpu_sync_create(&(*mem)->sync);
+
+- ret = amdgpu_amdkfd_reserve_mem_limit(adev, size, flags);
++ ret = amdgpu_amdkfd_reserve_mem_limit(adev, aligned_size, flags);
+ if (ret) {
+ pr_debug("Insufficient memory\n");
+ goto err_reserve_limit;
+ }
+
+ pr_debug("\tcreate BO VA 0x%llx size 0x%llx domain %s\n",
+- va, size, domain_string(alloc_domain));
++ va, (*mem)->aql_queue ? size << 1 : size, domain_string(alloc_domain));
+
+- ret = amdgpu_gem_object_create(adev, size, 1, alloc_domain, alloc_flags,
++ ret = amdgpu_gem_object_create(adev, aligned_size, 1, alloc_domain, alloc_flags,
+ bo_type, NULL, &gobj);
+ if (ret) {
+ pr_debug("Failed to create BO on domain %s. ret %d\n",
+@@ -1739,7 +1741,7 @@ err_node_allow:
+ /* Don't unreserve system mem limit twice */
+ goto err_reserve_limit;
+ err_bo_create:
+- amdgpu_amdkfd_unreserve_mem_limit(adev, size, flags);
++ amdgpu_amdkfd_unreserve_mem_limit(adev, aligned_size, flags);
+ err_reserve_limit:
+ mutex_destroy(&(*mem)->lock);
+ if (gobj)
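The gpuvm hunk above makes the KFD memory-limit accounting symmetric: the size reserved before allocation and the size released on the error path are both the PAGE_ALIGN()ed value, matching what the BO actually consumes, so repeated unaligned allocations no longer skew the counters (the companion s/uint64_t/int64_t/ on vram_used appears intended to tolerate small transient imbalances without wrapping an unsigned value). Rounding to a page boundary is the usual mask trick for power-of-two sizes; assuming 4 KiB pages:

    #define DEMO_PAGE_SIZE  4096UL
    #define DEMO_PAGE_ALIGN(x) \
            (((x) + DEMO_PAGE_SIZE - 1) & ~(DEMO_PAGE_SIZE - 1))

    /* DEMO_PAGE_ALIGN(1)    == 4096
     * DEMO_PAGE_ALIGN(4096) == 4096
     * DEMO_PAGE_ALIGN(4097) == 8192 */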
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index fbf2f24169eb5..d8e79de839d65 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -4022,7 +4022,8 @@ void amdgpu_device_fini_hw(struct amdgpu_device *adev)
+
+ amdgpu_gart_dummy_page_fini(adev);
+
+- amdgpu_device_unmap_mmio(adev);
++ if (drm_dev_is_unplugged(adev_to_drm(adev)))
++ amdgpu_device_unmap_mmio(adev);
+
+ }
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+index 3fe277bc233f4..7f598977d6942 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+@@ -2236,6 +2236,8 @@ amdgpu_pci_remove(struct pci_dev *pdev)
+ struct drm_device *dev = pci_get_drvdata(pdev);
+ struct amdgpu_device *adev = drm_to_adev(dev);
+
++ drm_dev_unplug(dev);
++
+ if (adev->pm.rpm_mode != AMDGPU_RUNPM_NONE) {
+ pm_runtime_get_sync(dev->dev);
+ pm_runtime_forbid(dev->dev);
+@@ -2275,8 +2277,6 @@ amdgpu_pci_remove(struct pci_dev *pdev)
+
+ amdgpu_driver_unload_kms(dev);
+
+- drm_dev_unplug(dev);
+-
+ /*
+ * Flush any in flight DMA operations from device.
+ * Clear the Bus Master Enable bit and then wait on the PCIe Device
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+index 7a2fc920739bb..ba092072308fa 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+@@ -380,7 +380,7 @@ static int psp_init_sriov_microcode(struct psp_context *psp)
+ adev->virt.autoload_ucode_id = AMDGPU_UCODE_ID_CP_MES1_DATA;
+ break;
+ default:
+- BUG();
++ ret = -EINVAL;
+ break;
+ }
+ return ret;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h
+index 677ad2016976d..98d91ebf5c26b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h
+@@ -153,10 +153,10 @@ TRACE_EVENT(amdgpu_cs,
+
+ TP_fast_assign(
+ __entry->bo_list = p->bo_list;
+- __entry->ring = to_amdgpu_ring(job->base.sched)->idx;
++ __entry->ring = to_amdgpu_ring(job->base.entity->rq->sched)->idx;
+ __entry->dw = ib->length_dw;
+ __entry->fences = amdgpu_fence_count_emitted(
+- to_amdgpu_ring(job->base.sched));
++ to_amdgpu_ring(job->base.entity->rq->sched));
+ ),
+ TP_printk("bo_list=%p, ring=%u, dw=%u, fences=%u",
+ __entry->bo_list, __entry->ring, __entry->dw,
+diff --git a/drivers/gpu/drm/amd/amdgpu/nbio_v7_2.c b/drivers/gpu/drm/amd/amdgpu/nbio_v7_2.c
+index 31776b12e4c45..4b0d563c6522c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/nbio_v7_2.c
++++ b/drivers/gpu/drm/amd/amdgpu/nbio_v7_2.c
+@@ -382,6 +382,11 @@ static void nbio_v7_2_init_registers(struct amdgpu_device *adev)
+ if (def != data)
+ WREG32_PCIE_PORT(SOC15_REG_OFFSET(NBIO, 0, regBIF1_PCIE_MST_CTRL_3), data);
+ break;
++ case IP_VERSION(7, 5, 1):
++ data = RREG32_SOC15(NBIO, 0, regRCC_DEV2_EPF0_STRAP2);
++ data &= ~RCC_DEV2_EPF0_STRAP2__STRAP_NO_SOFT_RESET_DEV2_F0_MASK;
++ WREG32_SOC15(NBIO, 0, regRCC_DEV2_EPF0_STRAP2, data);
++ fallthrough;
+ default:
+ def = data = RREG32_PCIE_PORT(SOC15_REG_OFFSET(NBIO, 0, regPCIE_CONFIG_CNTL));
+ data = REG_SET_FIELD(data, PCIE_CONFIG_CNTL,
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
+index 6d291aa6386bd..f79b8e964140e 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
+@@ -1127,8 +1127,13 @@ static int kfd_ioctl_alloc_memory_of_gpu(struct file *filep,
+ }
+
+ /* Update the VRAM usage count */
+- if (flags & KFD_IOC_ALLOC_MEM_FLAGS_VRAM)
+- WRITE_ONCE(pdd->vram_usage, pdd->vram_usage + args->size);
++ if (flags & KFD_IOC_ALLOC_MEM_FLAGS_VRAM) {
++ uint64_t size = args->size;
++
++ if (flags & KFD_IOC_ALLOC_MEM_FLAGS_AQL_QUEUE_MEM)
++ size >>= 1;
++ WRITE_ONCE(pdd->vram_usage, pdd->vram_usage + PAGE_ALIGN(size));
++ }
+
+ mutex_unlock(&p->mutex);
+
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index af16d6bb974b7..1ba8a2905f824 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -1239,7 +1239,7 @@ static void mmhub_read_system_context(struct amdgpu_device *adev, struct dc_phy_
+ pa_config->gart_config.page_table_end_addr = page_table_end.quad_part << 12;
+ pa_config->gart_config.page_table_base_addr = page_table_base.quad_part;
+
+- pa_config->is_hvm_enabled = 0;
++ pa_config->is_hvm_enabled = adev->mode_info.gpu_vm_support;
+
+ }
+
+@@ -1551,6 +1551,11 @@ static int amdgpu_dm_init(struct amdgpu_device *adev)
+ if (amdgpu_dc_feature_mask & DC_DISABLE_LTTPR_DP2_0)
+ init_data.flags.allow_lttpr_non_transparent_mode.bits.DP2_0 = true;
+
++ /* Disable SubVP + DRR config by default */
++ init_data.flags.disable_subvp_drr = true;
++ if (amdgpu_dc_feature_mask & DC_ENABLE_SUBVP_DRR)
++ init_data.flags.disable_subvp_drr = false;
++
+ init_data.flags.seamless_boot_edp_requested = false;
+
+ if (check_seamless_boot_capability(adev)) {
+@@ -2747,12 +2752,14 @@ static int dm_resume(void *handle)
+ drm_for_each_connector_iter(connector, &iter) {
+ aconnector = to_amdgpu_dm_connector(connector);
+
++ if (!aconnector->dc_link)
++ continue;
++
+ /*
+ * this is the case when traversing through already created
+ * MST connectors, should be skipped
+ */
+- if (aconnector->dc_link &&
+- aconnector->dc_link->type == dc_connection_mst_branch)
++ if (aconnector->dc_link->type == dc_connection_mst_branch)
+ continue;
+
+ mutex_lock(&aconnector->hpd_lock);
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c
+index 22125daf9dcfe..78c2ed59e87d2 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c
+@@ -77,6 +77,9 @@ int dm_set_vupdate_irq(struct drm_crtc *crtc, bool enable)
+ struct amdgpu_device *adev = drm_to_adev(crtc->dev);
+ int rc;
+
++ if (acrtc->otg_inst == -1)
++ return 0;
++
+ irq_source = IRQ_TYPE_VUPDATE + acrtc->otg_inst;
+
+ rc = dc_interrupt_set(adev->dm.dc, irq_source, enable) ? 0 : -EBUSY;
+@@ -152,6 +155,9 @@ static inline int dm_set_vblank(struct drm_crtc *crtc, bool enable)
+ struct vblank_control_work *work;
+ int rc = 0;
+
++ if (acrtc->otg_inst == -1)
++ goto skip;
++
+ if (enable) {
+ /* vblank irq on -> Only need vupdate irq in vrr mode */
+ if (amdgpu_dm_vrr_active(acrtc_state))
+@@ -169,6 +175,7 @@ static inline int dm_set_vblank(struct drm_crtc *crtc, bool enable)
+ if (!dc_interrupt_set(adev->dm.dc, irq_source, enable))
+ return -EBUSY;
+
++skip:
+ if (amdgpu_in_reset(adev))
+ return 0;
+
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn314/dcn314_smu.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn314/dcn314_smu.c
+index f47cfe6b42bd2..0765334f08259 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn314/dcn314_smu.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn314/dcn314_smu.c
+@@ -146,6 +146,9 @@ static int dcn314_smu_send_msg_with_param(struct clk_mgr_internal *clk_mgr,
+ if (msg_id == VBIOSSMC_MSG_TransferTableDram2Smu &&
+ param == TABLE_WATERMARKS)
+ DC_LOG_WARNING("Watermarks table not configured properly by SMU");
++ else if (msg_id == VBIOSSMC_MSG_SetHardMinDcfclkByFreq ||
++ msg_id == VBIOSSMC_MSG_SetMinDeepSleepDcfclk)
++ DC_LOG_WARNING("DCFCLK_DPM is not enabled by BIOS");
+ else
+ ASSERT(0);
+ REG_WRITE(MP1_SMN_C2PMSG_91, VBIOSSMC_Result_OK);
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index 0cb8d1f934d12..698ef50e83f3f 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -862,6 +862,7 @@ static bool dc_construct_ctx(struct dc *dc,
+
+ dc_ctx->perf_trace = dc_perf_trace_create();
+ if (!dc_ctx->perf_trace) {
++ kfree(dc_ctx);
+ ASSERT_CRITICAL(false);
+ return false;
+ }
+@@ -3334,6 +3335,21 @@ static void commit_planes_for_stream(struct dc *dc,
+
+ dc_z10_restore(dc);
+
++ if (update_type == UPDATE_TYPE_FULL) {
++ /* wait for all double-buffer activity to clear on all pipes */
++ int pipe_idx;
++
++ for (pipe_idx = 0; pipe_idx < dc->res_pool->pipe_count; pipe_idx++) {
++ struct pipe_ctx *pipe_ctx = &context->res_ctx.pipe_ctx[pipe_idx];
++
++ if (!pipe_ctx->stream)
++ continue;
++
++ if (pipe_ctx->stream_res.tg->funcs->wait_drr_doublebuffer_pending_clear)
++ pipe_ctx->stream_res.tg->funcs->wait_drr_doublebuffer_pending_clear(pipe_ctx->stream_res.tg);
++ }
++ }
++
+ if (get_seamless_boot_stream_count(context) > 0 && surface_count > 0) {
+ /* Optimize seamless boot flag keeps clocks and watermarks high until
+ * first flip. After first flip, optimization is required to lower
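The commit_planes_for_stream() hunk above adds a synchronization step on full updates: walk every pipe in the new context and, where the timing generator implements it, wait for the DRR timing double-buffer update to drain before programming continues (optc3_wait_drr_doublebuffer_pending_clear() later in this patch is the DCN30 implementation, a bounded REG_WAIT on OTG_DRR_TIMING_DBUF_UPDATE_PENDING). The NULL-checked function-pointer table is how DC handles per-generation hooks; in outline, with hypothetical demo_* names:

    struct demo_tg;

    struct demo_tg_funcs {
            /* optional: absent on generations without the hardware bit */
            void (*wait_pending_clear)(struct demo_tg *tg);
    };

    struct demo_tg {
            const struct demo_tg_funcs *funcs;
    };

    static void demo_wait_all_pipes(struct demo_tg **tg, int pipe_count)
    {
            int i;

            for (i = 0; i < pipe_count; i++) {
                    if (tg[i] && tg[i]->funcs->wait_pending_clear)
                            tg[i]->funcs->wait_pending_clear(tg[i]);
            }
    }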
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link.c b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+index c88f044666fee..754fc86341494 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+@@ -1916,12 +1916,6 @@ struct dc_link *link_create(const struct link_init_data *init_params)
+ if (false == dc_link_construct(link, init_params))
+ goto construct_fail;
+
+- /*
+- * Must use preferred_link_setting, not reported_link_cap or verified_link_cap,
+- * since struct preferred_link_setting won't be reset after S3.
+- */
+- link->preferred_link_setting.dpcd_source_device_specific_field_support = true;
+-
+ return link;
+
+ construct_fail:
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+index dedd1246ce588..475ad3eed002d 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+@@ -6554,18 +6554,10 @@ void dpcd_set_source_specific_data(struct dc_link *link)
+
+ uint8_t hblank_size = (uint8_t)link->dc->caps.min_horizontal_blanking_period;
+
+- if (link->preferred_link_setting.dpcd_source_device_specific_field_support) {
+- result_write_min_hblank = core_link_write_dpcd(link,
+- DP_SOURCE_MINIMUM_HBLANK_SUPPORTED, (uint8_t *)(&hblank_size),
+- sizeof(hblank_size));
+-
+- if (result_write_min_hblank == DC_ERROR_UNEXPECTED)
+- link->preferred_link_setting.dpcd_source_device_specific_field_support = false;
+- } else {
+- DC_LOG_DC("Sink device does not support 00340h DPCD write. Skipping on purpose.\n");
+- }
++ result_write_min_hblank = core_link_write_dpcd(link,
++ DP_SOURCE_MINIMUM_HBLANK_SUPPORTED, (uint8_t *)(&hblank_size),
++ sizeof(hblank_size));
+ }
+-
+ DC_TRACE_LEVEL_MESSAGE(DAL_TRACE_LEVEL_INFORMATION,
+ WPP_BIT_FLAG_DC_DETECTION_DP_CAPS,
+ "result=%u link_index=%u enum dce_version=%d DPCD=0x%04X min_hblank=%u branch_dev_id=0x%x branch_dev_name='%c%c%c%c%c%c'",
+diff --git a/drivers/gpu/drm/amd/display/dc/dc.h b/drivers/gpu/drm/amd/display/dc/dc.h
+index 85ebeaa2de186..37998dc0fc144 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc.h
++++ b/drivers/gpu/drm/amd/display/dc/dc.h
+@@ -410,7 +410,7 @@ struct dc_config {
+ bool force_bios_enable_lttpr;
+ uint8_t force_bios_fixed_vs;
+ int sdpif_request_limit_words_per_umc;
+-
++ bool disable_subvp_drr;
+ };
+
+ enum visual_confirm {
+diff --git a/drivers/gpu/drm/amd/display/dc/dc_dp_types.h b/drivers/gpu/drm/amd/display/dc/dc_dp_types.h
+index 2c54b6e0498bf..296793d8b2bf2 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc_dp_types.h
++++ b/drivers/gpu/drm/amd/display/dc/dc_dp_types.h
+@@ -149,7 +149,6 @@ struct dc_link_settings {
+ enum dc_link_spread link_spread;
+ bool use_link_rate_set;
+ uint8_t link_rate_set;
+- bool dpcd_source_device_specific_field_support;
+ };
+
+ union dc_dp_ffe_preset {
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_optc.h b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_optc.h
+index 88ac5f6f4c96c..0b37bb0e184b2 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_optc.h
++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_optc.h
+@@ -519,7 +519,8 @@ struct dcn_optc_registers {
+ type OTG_CRC_DATA_STREAM_COMBINE_MODE;\
+ type OTG_CRC_DATA_STREAM_SPLIT_MODE;\
+ type OTG_CRC_DATA_FORMAT;\
+- type OTG_V_TOTAL_LAST_USED_BY_DRR;
++ type OTG_V_TOTAL_LAST_USED_BY_DRR;\
++ type OTG_DRR_TIMING_DBUF_UPDATE_PENDING;
+
+ #define TG_REG_FIELD_LIST_DCN3_2(type) \
+ type OTG_H_TIMING_DIV_MODE_MANUAL;
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_optc.c b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_optc.c
+index 867d60151aebb..08b92715e2e64 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_optc.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_optc.c
+@@ -291,6 +291,14 @@ static void optc3_set_timing_double_buffer(struct timing_generator *optc, bool e
+ OTG_DRR_TIMING_DBUF_UPDATE_MODE, mode);
+ }
+
++void optc3_wait_drr_doublebuffer_pending_clear(struct timing_generator *optc)
++{
++ struct optc *optc1 = DCN10TG_FROM_TG(optc);
++
++ REG_WAIT(OTG_DOUBLE_BUFFER_CONTROL, OTG_DRR_TIMING_DBUF_UPDATE_PENDING, 0, 2, 100000); /* 1 vupdate at 5hz */
++
++}
++
+ void optc3_set_vtotal_min_max(struct timing_generator *optc, int vtotal_min, int vtotal_max)
+ {
+ optc1_set_vtotal_min_max(optc, vtotal_min, vtotal_max);
+@@ -360,6 +368,7 @@ static struct timing_generator_funcs dcn30_tg_funcs = {
+ .program_manual_trigger = optc2_program_manual_trigger,
+ .setup_manual_trigger = optc2_setup_manual_trigger,
+ .get_hw_timing = optc1_get_hw_timing,
++ .wait_drr_doublebuffer_pending_clear = optc3_wait_drr_doublebuffer_pending_clear,
+ };
+
+ void dcn30_timing_generator_init(struct optc *optc1)
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_optc.h b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_optc.h
+index dd45a5499b078..fb06dc9a48937 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_optc.h
++++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_optc.h
+@@ -279,6 +279,7 @@
+ SF(OTG0_OTG_DRR_TRIGGER_WINDOW, OTG_DRR_TRIGGER_WINDOW_END_X, mask_sh),\
+ SF(OTG0_OTG_DRR_V_TOTAL_CHANGE, OTG_DRR_V_TOTAL_CHANGE_LIMIT, mask_sh),\
+ SF(OTG0_OTG_H_TIMING_CNTL, OTG_H_TIMING_DIV_BY2, mask_sh),\
++ SF(OTG0_OTG_DOUBLE_BUFFER_CONTROL, OTG_DRR_TIMING_DBUF_UPDATE_PENDING, mask_sh),\
+ SF(OTG0_OTG_DOUBLE_BUFFER_CONTROL, OTG_DRR_TIMING_DBUF_UPDATE_MODE, mask_sh),\
+ SF(OTG0_OTG_DOUBLE_BUFFER_CONTROL, OTG_BLANK_DATA_DOUBLE_BUFFER_EN, mask_sh)
+
+@@ -317,6 +318,7 @@
+ SF(OTG0_OTG_DRR_TRIGGER_WINDOW, OTG_DRR_TRIGGER_WINDOW_END_X, mask_sh),\
+ SF(OTG0_OTG_DRR_V_TOTAL_CHANGE, OTG_DRR_V_TOTAL_CHANGE_LIMIT, mask_sh),\
+ SF(OTG0_OTG_H_TIMING_CNTL, OTG_H_TIMING_DIV_MODE, mask_sh),\
++ SF(OTG0_OTG_DOUBLE_BUFFER_CONTROL, OTG_DRR_TIMING_DBUF_UPDATE_PENDING, mask_sh),\
+ SF(OTG0_OTG_DOUBLE_BUFFER_CONTROL, OTG_DRR_TIMING_DBUF_UPDATE_MODE, mask_sh)
+
+ void dcn30_timing_generator_init(struct optc *optc1);
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_dio_stream_encoder.c b/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_dio_stream_encoder.c
+index 38842f938bed0..0926db0183383 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_dio_stream_encoder.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_dio_stream_encoder.c
+@@ -278,10 +278,10 @@ static void enc314_stream_encoder_dp_blank(
+ struct dc_link *link,
+ struct stream_encoder *enc)
+ {
+- /* New to DCN314 - disable the FIFO before VID stream disable. */
+- enc314_disable_fifo(enc);
+-
+ enc1_stream_encoder_dp_blank(link, enc);
++
++ /* Disable FIFO after the DP vid stream is disabled to avoid corruption. */
++ enc314_disable_fifo(enc);
+ }
+
+ static void enc314_stream_encoder_dp_unblank(
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_resource.c b/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_resource.c
+index 79850a68f62ab..73f519dbdb531 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_resource.c
+@@ -892,6 +892,8 @@ static const struct dc_debug_options debug_defaults_drv = {
+ .force_abm_enable = false,
+ .timing_trace = false,
+ .clock_trace = true,
++ .disable_dpp_power_gate = true,
++ .disable_hubp_power_gate = true,
+ .disable_pplib_clock_request = false,
+ .pipe_split_policy = MPC_SPLIT_DYNAMIC,
+ .force_single_disp_pipe_split = false,
+@@ -901,7 +903,7 @@ static const struct dc_debug_options debug_defaults_drv = {
+ .max_downscale_src_width = 4096,/*upto true 4k*/
+ .disable_pplib_wm_range = false,
+ .scl_reset_length10 = true,
+- .sanity_checks = false,
++ .sanity_checks = true,
+ .underflow_assert_delay_us = 0xFFFFFFFF,
+ .dwb_fi_phase = -1, // -1 = disable,
+ .dmub_command_table = true,
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20.c b/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20.c
+index d3b5b6fedf042..6266b0788387e 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20.c
+@@ -3897,14 +3897,14 @@ void dml20_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l
+ mode_lib->vba.PlaneRequiredDISPCLKWithODMCombine = mode_lib->vba.PixelClock[k] / 2
+ * (1 + mode_lib->vba.DISPCLKDPPCLKDSCCLKDownSpreading / 100.0);
+
+- locals->ODMCombineEnablePerState[i][k] = false;
++ locals->ODMCombineEnablePerState[i][k] = dm_odm_combine_mode_disabled;
+ mode_lib->vba.PlaneRequiredDISPCLK = mode_lib->vba.PlaneRequiredDISPCLKWithoutODMCombine;
+ if (mode_lib->vba.ODMCapability) {
+ if (locals->PlaneRequiredDISPCLKWithoutODMCombine > mode_lib->vba.MaxDispclkRoundedDownToDFSGranularity) {
+- locals->ODMCombineEnablePerState[i][k] = true;
++ locals->ODMCombineEnablePerState[i][k] = dm_odm_combine_mode_2to1;
+ mode_lib->vba.PlaneRequiredDISPCLK = mode_lib->vba.PlaneRequiredDISPCLKWithODMCombine;
+ } else if (locals->HActive[k] > DCN20_MAX_420_IMAGE_WIDTH && locals->OutputFormat[k] == dm_420) {
+- locals->ODMCombineEnablePerState[i][k] = true;
++ locals->ODMCombineEnablePerState[i][k] = dm_odm_combine_mode_2to1;
+ mode_lib->vba.PlaneRequiredDISPCLK = mode_lib->vba.PlaneRequiredDISPCLKWithODMCombine;
+ }
+ }
+@@ -3957,7 +3957,7 @@ void dml20_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l
+ locals->RequiredDISPCLK[i][j] = 0.0;
+ locals->DISPCLK_DPPCLK_Support[i][j] = true;
+ for (k = 0; k <= mode_lib->vba.NumberOfActivePlanes - 1; k++) {
+- locals->ODMCombineEnablePerState[i][k] = false;
++ locals->ODMCombineEnablePerState[i][k] = dm_odm_combine_mode_disabled;
+ if (locals->SwathWidthYSingleDPP[k] <= locals->MaximumSwathWidth[k]) {
+ locals->NoOfDPP[i][j][k] = 1;
+ locals->RequiredDPPCLK[i][j][k] = locals->MinDPPCLKUsingSingleDPP[k]
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20v2.c b/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20v2.c
+index edd098c7eb927..989d83ee38421 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20v2.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20v2.c
+@@ -4008,17 +4008,17 @@ void dml20v2_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode
+ mode_lib->vba.PlaneRequiredDISPCLKWithODMCombine = mode_lib->vba.PixelClock[k] / 2
+ * (1 + mode_lib->vba.DISPCLKDPPCLKDSCCLKDownSpreading / 100.0);
+
+- locals->ODMCombineEnablePerState[i][k] = false;
++ locals->ODMCombineEnablePerState[i][k] = dm_odm_combine_mode_disabled;
+ mode_lib->vba.PlaneRequiredDISPCLK = mode_lib->vba.PlaneRequiredDISPCLKWithoutODMCombine;
+ if (mode_lib->vba.ODMCapability) {
+ if (locals->PlaneRequiredDISPCLKWithoutODMCombine > MaxMaxDispclkRoundedDown) {
+- locals->ODMCombineEnablePerState[i][k] = true;
++ locals->ODMCombineEnablePerState[i][k] = dm_odm_combine_mode_2to1;
+ mode_lib->vba.PlaneRequiredDISPCLK = mode_lib->vba.PlaneRequiredDISPCLKWithODMCombine;
+ } else if (locals->DSCEnabled[k] && (locals->HActive[k] > DCN20_MAX_DSC_IMAGE_WIDTH)) {
+- locals->ODMCombineEnablePerState[i][k] = true;
++ locals->ODMCombineEnablePerState[i][k] = dm_odm_combine_mode_2to1;
+ mode_lib->vba.PlaneRequiredDISPCLK = mode_lib->vba.PlaneRequiredDISPCLKWithODMCombine;
+ } else if (locals->HActive[k] > DCN20_MAX_420_IMAGE_WIDTH && locals->OutputFormat[k] == dm_420) {
+- locals->ODMCombineEnablePerState[i][k] = true;
++ locals->ODMCombineEnablePerState[i][k] = dm_odm_combine_mode_2to1;
+ mode_lib->vba.PlaneRequiredDISPCLK = mode_lib->vba.PlaneRequiredDISPCLKWithODMCombine;
+ }
+ }
+@@ -4071,7 +4071,7 @@ void dml20v2_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode
+ locals->RequiredDISPCLK[i][j] = 0.0;
+ locals->DISPCLK_DPPCLK_Support[i][j] = true;
+ for (k = 0; k <= mode_lib->vba.NumberOfActivePlanes - 1; k++) {
+- locals->ODMCombineEnablePerState[i][k] = false;
++ locals->ODMCombineEnablePerState[i][k] = dm_odm_combine_mode_disabled;
+ if (locals->SwathWidthYSingleDPP[k] <= locals->MaximumSwathWidth[k]) {
+ locals->NoOfDPP[i][j][k] = 1;
+ locals->RequiredDPPCLK[i][j][k] = locals->MinDPPCLKUsingSingleDPP[k]
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn21/display_mode_vba_21.c b/drivers/gpu/drm/amd/display/dc/dml/dcn21/display_mode_vba_21.c
+index 1d84ae50311d9..b7c2844d0cbee 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn21/display_mode_vba_21.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn21/display_mode_vba_21.c
+@@ -4102,17 +4102,17 @@ void dml21_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l
+ mode_lib->vba.PlaneRequiredDISPCLKWithODMCombine = mode_lib->vba.PixelClock[k] / 2
+ * (1 + mode_lib->vba.DISPCLKDPPCLKDSCCLKDownSpreading / 100.0);
+
+- locals->ODMCombineEnablePerState[i][k] = false;
++ locals->ODMCombineEnablePerState[i][k] = dm_odm_combine_mode_disabled;
+ mode_lib->vba.PlaneRequiredDISPCLK = mode_lib->vba.PlaneRequiredDISPCLKWithoutODMCombine;
+ if (mode_lib->vba.ODMCapability) {
+ if (locals->PlaneRequiredDISPCLKWithoutODMCombine > MaxMaxDispclkRoundedDown) {
+- locals->ODMCombineEnablePerState[i][k] = true;
++ locals->ODMCombineEnablePerState[i][k] = dm_odm_combine_mode_2to1;
+ mode_lib->vba.PlaneRequiredDISPCLK = mode_lib->vba.PlaneRequiredDISPCLKWithODMCombine;
+ } else if (locals->DSCEnabled[k] && (locals->HActive[k] > DCN21_MAX_DSC_IMAGE_WIDTH)) {
+- locals->ODMCombineEnablePerState[i][k] = true;
++ locals->ODMCombineEnablePerState[i][k] = dm_odm_combine_mode_2to1;
+ mode_lib->vba.PlaneRequiredDISPCLK = mode_lib->vba.PlaneRequiredDISPCLKWithODMCombine;
+ } else if (locals->HActive[k] > DCN21_MAX_420_IMAGE_WIDTH && locals->OutputFormat[k] == dm_420) {
+- locals->ODMCombineEnablePerState[i][k] = true;
++ locals->ODMCombineEnablePerState[i][k] = dm_odm_combine_mode_2to1;
+ mode_lib->vba.PlaneRequiredDISPCLK = mode_lib->vba.PlaneRequiredDISPCLKWithODMCombine;
+ }
+ }
+@@ -4165,7 +4165,7 @@ void dml21_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l
+ locals->RequiredDISPCLK[i][j] = 0.0;
+ locals->DISPCLK_DPPCLK_Support[i][j] = true;
+ for (k = 0; k <= mode_lib->vba.NumberOfActivePlanes - 1; k++) {
+- locals->ODMCombineEnablePerState[i][k] = false;
++ locals->ODMCombineEnablePerState[i][k] = dm_odm_combine_mode_disabled;
+ if (locals->SwathWidthYSingleDPP[k] <= locals->MaximumSwathWidth[k]) {
+ locals->NoOfDPP[i][j][k] = 1;
+ locals->RequiredDPPCLK[i][j][k] = locals->MinDPPCLKUsingSingleDPP[k]
+@@ -5230,7 +5230,7 @@ void dml21_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l
+ mode_lib->vba.ODMCombineEnabled[k] =
+ locals->ODMCombineEnablePerState[mode_lib->vba.VoltageLevel][k];
+ } else {
+- mode_lib->vba.ODMCombineEnabled[k] = false;
++ mode_lib->vba.ODMCombineEnabled[k] = dm_odm_combine_mode_disabled;
+ }
+ mode_lib->vba.DSCEnabled[k] =
+ locals->RequiresDSC[mode_lib->vba.VoltageLevel][k];
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c b/drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c
+index f94abd124021e..69e205ac58b25 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c
+@@ -877,6 +877,10 @@ static bool subvp_drr_schedulable(struct dc *dc, struct dc_state *context, struc
+ int16_t stretched_drr_us = 0;
+ int16_t drr_stretched_vblank_us = 0;
+ int16_t max_vblank_mallregion = 0;
++ const struct dc_config *config = &dc->config;
++
++ if (config->disable_subvp_drr)
++ return false;
+
+ // Find SubVP pipe
+ for (i = 0; i < dc->res_pool->pipe_count; i++) {
+@@ -2038,6 +2042,10 @@ void dcn32_calculate_wm_and_dlg_fpu(struct dc *dc, struct dc_state *context,
+ */
+ context->bw_ctx.bw.dcn.watermarks.a = context->bw_ctx.bw.dcn.watermarks.c;
+ context->bw_ctx.bw.dcn.watermarks.a.cstate_pstate.pstate_change_ns = 0;
++ /* Calculate FCLK p-state change watermark based on FCLK pstate change latency in case
++ * UCLK p-state is not supported, to avoid underflow in case FCLK pstate is supported
++ */
++ context->bw_ctx.bw.dcn.watermarks.a.cstate_pstate.fclk_pstate_change_ns = get_fclk_watermark(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
+ } else {
+ /* Set A:
+ * All clocks min.
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn321/dcn321_fpu.c b/drivers/gpu/drm/amd/display/dc/dml/dcn321/dcn321_fpu.c
+index f4b176599be7a..0ea406145c1d7 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn321/dcn321_fpu.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn321/dcn321_fpu.c
+@@ -136,7 +136,7 @@ struct _vcs_dpi_soc_bounding_box_st dcn3_21_soc = {
+ .urgent_out_of_order_return_per_channel_pixel_only_bytes = 4096,
+ .urgent_out_of_order_return_per_channel_pixel_and_vm_bytes = 4096,
+ .urgent_out_of_order_return_per_channel_vm_only_bytes = 4096,
+- .pct_ideal_sdp_bw_after_urgent = 100.0,
++ .pct_ideal_sdp_bw_after_urgent = 90.0,
+ .pct_ideal_fabric_bw_after_urgent = 67.0,
+ .pct_ideal_dram_sdp_bw_after_urgent_pixel_only = 20.0,
+ .pct_ideal_dram_sdp_bw_after_urgent_pixel_and_vm = 60.0, // N/A, for now keep as is until DML implemented
+diff --git a/drivers/gpu/drm/amd/display/dc/gpio/dcn20/hw_factory_dcn20.c b/drivers/gpu/drm/amd/display/dc/gpio/dcn20/hw_factory_dcn20.c
+index 9b63c6c0cc844..e0bd0c722e006 100644
+--- a/drivers/gpu/drm/amd/display/dc/gpio/dcn20/hw_factory_dcn20.c
++++ b/drivers/gpu/drm/amd/display/dc/gpio/dcn20/hw_factory_dcn20.c
+@@ -138,7 +138,8 @@ static const struct ddc_sh_mask ddc_shift[] = {
+ DDC_MASK_SH_LIST_DCN2(__SHIFT, 3),
+ DDC_MASK_SH_LIST_DCN2(__SHIFT, 4),
+ DDC_MASK_SH_LIST_DCN2(__SHIFT, 5),
+- DDC_MASK_SH_LIST_DCN2(__SHIFT, 6)
++ DDC_MASK_SH_LIST_DCN2(__SHIFT, 6),
++ DDC_MASK_SH_LIST_DCN2_VGA(__SHIFT)
+ };
+
+ static const struct ddc_sh_mask ddc_mask[] = {
+@@ -147,7 +148,8 @@ static const struct ddc_sh_mask ddc_mask[] = {
+ DDC_MASK_SH_LIST_DCN2(_MASK, 3),
+ DDC_MASK_SH_LIST_DCN2(_MASK, 4),
+ DDC_MASK_SH_LIST_DCN2(_MASK, 5),
+- DDC_MASK_SH_LIST_DCN2(_MASK, 6)
++ DDC_MASK_SH_LIST_DCN2(_MASK, 6),
++ DDC_MASK_SH_LIST_DCN2_VGA(_MASK)
+ };
+
+ #include "../generic_regs.h"
+diff --git a/drivers/gpu/drm/amd/display/dc/gpio/dcn30/hw_factory_dcn30.c b/drivers/gpu/drm/amd/display/dc/gpio/dcn30/hw_factory_dcn30.c
+index 687d4f128480e..36a5736c58c92 100644
+--- a/drivers/gpu/drm/amd/display/dc/gpio/dcn30/hw_factory_dcn30.c
++++ b/drivers/gpu/drm/amd/display/dc/gpio/dcn30/hw_factory_dcn30.c
+@@ -145,7 +145,8 @@ static const struct ddc_sh_mask ddc_shift[] = {
+ DDC_MASK_SH_LIST_DCN2(__SHIFT, 3),
+ DDC_MASK_SH_LIST_DCN2(__SHIFT, 4),
+ DDC_MASK_SH_LIST_DCN2(__SHIFT, 5),
+- DDC_MASK_SH_LIST_DCN2(__SHIFT, 6)
++ DDC_MASK_SH_LIST_DCN2(__SHIFT, 6),
++ DDC_MASK_SH_LIST_DCN2_VGA(__SHIFT)
+ };
+
+ static const struct ddc_sh_mask ddc_mask[] = {
+@@ -154,7 +155,8 @@ static const struct ddc_sh_mask ddc_mask[] = {
+ DDC_MASK_SH_LIST_DCN2(_MASK, 3),
+ DDC_MASK_SH_LIST_DCN2(_MASK, 4),
+ DDC_MASK_SH_LIST_DCN2(_MASK, 5),
+- DDC_MASK_SH_LIST_DCN2(_MASK, 6)
++ DDC_MASK_SH_LIST_DCN2(_MASK, 6),
++ DDC_MASK_SH_LIST_DCN2_VGA(_MASK)
+ };
+
+ #include "../generic_regs.h"
+diff --git a/drivers/gpu/drm/amd/display/dc/gpio/dcn32/hw_factory_dcn32.c b/drivers/gpu/drm/amd/display/dc/gpio/dcn32/hw_factory_dcn32.c
+index 9fd8b269dd79c..985f10b397509 100644
+--- a/drivers/gpu/drm/amd/display/dc/gpio/dcn32/hw_factory_dcn32.c
++++ b/drivers/gpu/drm/amd/display/dc/gpio/dcn32/hw_factory_dcn32.c
+@@ -149,7 +149,8 @@ static const struct ddc_sh_mask ddc_shift[] = {
+ DDC_MASK_SH_LIST_DCN2(__SHIFT, 3),
+ DDC_MASK_SH_LIST_DCN2(__SHIFT, 4),
+ DDC_MASK_SH_LIST_DCN2(__SHIFT, 5),
+- DDC_MASK_SH_LIST_DCN2(__SHIFT, 6)
++ DDC_MASK_SH_LIST_DCN2(__SHIFT, 6),
++ DDC_MASK_SH_LIST_DCN2_VGA(__SHIFT)
+ };
+
+ static const struct ddc_sh_mask ddc_mask[] = {
+@@ -158,7 +159,8 @@ static const struct ddc_sh_mask ddc_mask[] = {
+ DDC_MASK_SH_LIST_DCN2(_MASK, 3),
+ DDC_MASK_SH_LIST_DCN2(_MASK, 4),
+ DDC_MASK_SH_LIST_DCN2(_MASK, 5),
+- DDC_MASK_SH_LIST_DCN2(_MASK, 6)
++ DDC_MASK_SH_LIST_DCN2(_MASK, 6),
++ DDC_MASK_SH_LIST_DCN2_VGA(_MASK)
+ };
+
+ #include "../generic_regs.h"
+diff --git a/drivers/gpu/drm/amd/display/dc/gpio/ddc_regs.h b/drivers/gpu/drm/amd/display/dc/gpio/ddc_regs.h
+index 308a543178a56..59884ef651b39 100644
+--- a/drivers/gpu/drm/amd/display/dc/gpio/ddc_regs.h
++++ b/drivers/gpu/drm/amd/display/dc/gpio/ddc_regs.h
+@@ -113,6 +113,13 @@
+ (PHY_AUX_CNTL__AUX## cd ##_PAD_RXSEL## mask_sh),\
+ (DC_GPIO_AUX_CTRL_5__DDC_PAD## cd ##_I2CMODE## mask_sh)}
+
++#define DDC_MASK_SH_LIST_DCN2_VGA(mask_sh) \
++ {DDC_MASK_SH_LIST_COMMON(mask_sh),\
++ 0,\
++ 0,\
++ 0,\
++ 0}
++
+ struct ddc_registers {
+ struct gpio_registers gpio;
+ uint32_t ddc_setup;
+diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/timing_generator.h b/drivers/gpu/drm/amd/display/dc/inc/hw/timing_generator.h
+index 0e42e721dd15a..1d9f9c53d2bd6 100644
+--- a/drivers/gpu/drm/amd/display/dc/inc/hw/timing_generator.h
++++ b/drivers/gpu/drm/amd/display/dc/inc/hw/timing_generator.h
+@@ -331,6 +331,7 @@ struct timing_generator_funcs {
+ uint32_t vtotal_change_limit);
+
+ void (*init_odm)(struct timing_generator *tg);
++ void (*wait_drr_doublebuffer_pending_clear)(struct timing_generator *tg);
+ };
+
+ #endif
+diff --git a/drivers/gpu/drm/amd/include/amd_shared.h b/drivers/gpu/drm/amd/include/amd_shared.h
+index f175e65b853a0..e4a22c68517d1 100644
+--- a/drivers/gpu/drm/amd/include/amd_shared.h
++++ b/drivers/gpu/drm/amd/include/amd_shared.h
+@@ -240,6 +240,7 @@ enum DC_FEATURE_MASK {
+ DC_DISABLE_LTTPR_DP2_0 = (1 << 6), //0x40, disabled by default
+ DC_PSR_ALLOW_SMU_OPT = (1 << 7), //0x80, disabled by default
+ DC_PSR_ALLOW_MULTI_DISP_OPT = (1 << 8), //0x100, disabled by default
++ DC_ENABLE_SUBVP_DRR = (1 << 9), // 0x200, disabled by default
+ };
+
+ enum DC_DEBUG_MASK {
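+
The new DC_ENABLE_SUBVP_DRR bit pairs with the disable_subvp_drr check added to subvp_drr_schedulable() earlier in this patch: SubVP with DRR stays disabled unless the bit is set in the DC feature mask. Assuming that mask is populated from the usual amdgpu.dcfeaturemask module parameter (an assumption on the wiring, which this patch does not show), opting in would look like:

    amdgpu.dcfeaturemask=0x200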
+diff --git a/drivers/gpu/drm/ast/ast_mode.c b/drivers/gpu/drm/ast/ast_mode.c
+index 66a4a41c3fe94..d314b9e7c05f9 100644
+--- a/drivers/gpu/drm/ast/ast_mode.c
++++ b/drivers/gpu/drm/ast/ast_mode.c
+@@ -636,7 +636,7 @@ static void ast_handle_damage(struct ast_plane *ast_plane, struct iosys_map *src
+ struct drm_framebuffer *fb,
+ const struct drm_rect *clip)
+ {
+- struct iosys_map dst = IOSYS_MAP_INIT_VADDR(ast_plane->vaddr);
++ struct iosys_map dst = IOSYS_MAP_INIT_VADDR_IOMEM(ast_plane->vaddr);
+
+ iosys_map_incr(&dst, drm_fb_clip_offset(fb->pitches[0], fb->format, clip));
+ drm_fb_memcpy(&dst, fb->pitches, src, fb, clip);
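+
This one-character class of fix matters because struct iosys_map records whether a mapping is I/O or system memory, and helpers such as drm_fb_memcpy() above pick plain memcpy() or memcpy_toio()-style accessors based on that flag. A minimal sketch of the distinction (the pointer names are placeholders, not from the ast driver):

    #include <linux/iosys-map.h>

    static void iosys_map_example(void *sysmem_ptr, void __iomem *mmio_ptr)
    {
            /* .is_iomem = false: written with plain memcpy() */
            struct iosys_map sys = IOSYS_MAP_INIT_VADDR(sysmem_ptr);

            /* .is_iomem = true: written with memcpy_toio() and friends,
             * which is what ast's VRAM mapping requires */
            struct iosys_map vram = IOSYS_MAP_INIT_VADDR_IOMEM(mmio_ptr);
    }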
+diff --git a/drivers/gpu/drm/bridge/ite-it6505.c b/drivers/gpu/drm/bridge/ite-it6505.c
+index 21a9b8422bda5..e7f7d0ce13805 100644
+--- a/drivers/gpu/drm/bridge/ite-it6505.c
++++ b/drivers/gpu/drm/bridge/ite-it6505.c
+@@ -412,7 +412,6 @@ struct it6505 {
+ * Mutex protects extcon and interrupt functions from interfering
+ * each other.
+ */
+- struct mutex irq_lock;
+ struct mutex extcon_lock;
+ struct mutex mode_lock; /* used to bridge_detect */
+ struct mutex aux_lock; /* used to aux data transfers */
+@@ -2494,10 +2493,8 @@ static irqreturn_t it6505_int_threaded_handler(int unused, void *data)
+ };
+ int int_status[3], i;
+
+- mutex_lock(&it6505->irq_lock);
+-
+- if (it6505->enable_drv_hold || !it6505->powered)
+- goto unlock;
++ if (it6505->enable_drv_hold || pm_runtime_get_if_in_use(dev) <= 0)
++ return IRQ_HANDLED;
+
+ int_status[0] = it6505_read(it6505, INT_STATUS_01);
+ int_status[1] = it6505_read(it6505, INT_STATUS_02);
+@@ -2515,16 +2512,14 @@ static irqreturn_t it6505_int_threaded_handler(int unused, void *data)
+ if (it6505_test_bit(irq_vec[0].bit, (unsigned int *)int_status))
+ irq_vec[0].handler(it6505);
+
+- if (!it6505->hpd_state)
+- goto unlock;
+-
+- for (i = 1; i < ARRAY_SIZE(irq_vec); i++) {
+- if (it6505_test_bit(irq_vec[i].bit, (unsigned int *)int_status))
+- irq_vec[i].handler(it6505);
++ if (it6505->hpd_state) {
++ for (i = 1; i < ARRAY_SIZE(irq_vec); i++) {
++ if (it6505_test_bit(irq_vec[i].bit, (unsigned int *)int_status))
++ irq_vec[i].handler(it6505);
++ }
+ }
+
+-unlock:
+- mutex_unlock(&it6505->irq_lock);
++ pm_runtime_put_sync(dev);
+
+ return IRQ_HANDLED;
+ }
+@@ -3277,7 +3272,6 @@ static int it6505_i2c_probe(struct i2c_client *client,
+ if (!it6505)
+ return -ENOMEM;
+
+- mutex_init(&it6505->irq_lock);
+ mutex_init(&it6505->extcon_lock);
+ mutex_init(&it6505->mode_lock);
+ mutex_init(&it6505->aux_lock);
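+
Removing irq_lock is safe because the threaded handler above now serializes against power transitions through runtime PM rather than a driver-private mutex. A minimal sketch of that guard pattern, with hypothetical names (mydev_threaded_isr is illustrative, not from it6505):

    #include <linux/interrupt.h>
    #include <linux/pm_runtime.h>

    static irqreturn_t mydev_threaded_isr(int irq, void *data)
    {
            struct device *dev = data;

            /* Service the IRQ only if the device is already runtime-active;
             * on success this also takes a usage reference, so the device
             * cannot runtime-suspend in the middle of the handler. */
            if (pm_runtime_get_if_in_use(dev) <= 0)
                    return IRQ_HANDLED;

            /* ... read and dispatch interrupt status here ... */

            pm_runtime_put_sync(dev);
            return IRQ_HANDLED;
    }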
+diff --git a/drivers/gpu/drm/bridge/lontium-lt9611.c b/drivers/gpu/drm/bridge/lontium-lt9611.c
+index 7c0a99173b39f..3b77238ca4aff 100644
+--- a/drivers/gpu/drm/bridge/lontium-lt9611.c
++++ b/drivers/gpu/drm/bridge/lontium-lt9611.c
+@@ -187,12 +187,14 @@ static void lt9611_mipi_video_setup(struct lt9611 *lt9611,
+
+ regmap_write(lt9611->regmap, 0x8319, (u8)(hfront_porch % 256));
+
+- regmap_write(lt9611->regmap, 0x831a, (u8)(hsync_porch / 256));
++ regmap_write(lt9611->regmap, 0x831a, (u8)(hsync_porch / 256) |
++ ((hfront_porch / 256) << 4));
+ regmap_write(lt9611->regmap, 0x831b, (u8)(hsync_porch % 256));
+ }
+
+-static void lt9611_pcr_setup(struct lt9611 *lt9611, const struct drm_display_mode *mode)
++static void lt9611_pcr_setup(struct lt9611 *lt9611, const struct drm_display_mode *mode, unsigned int postdiv)
+ {
++ unsigned int pcr_m = mode->clock * 5 * postdiv / 27000;
+ const struct reg_sequence reg_cfg[] = {
+ { 0x830b, 0x01 },
+ { 0x830c, 0x10 },
+@@ -207,7 +209,6 @@ static void lt9611_pcr_setup(struct lt9611 *lt9611, const struct drm_display_mod
+
+ /* stage 2 */
+ { 0x834a, 0x40 },
+- { 0x831d, 0x10 },
+
+ /* MK limit */
+ { 0x832d, 0x38 },
+@@ -222,30 +223,28 @@ static void lt9611_pcr_setup(struct lt9611 *lt9611, const struct drm_display_mod
+ { 0x8325, 0x00 },
+ { 0x832a, 0x01 },
+ { 0x834a, 0x10 },
+- { 0x831d, 0x10 },
+- { 0x8326, 0x37 },
+ };
++ u8 pol = 0x10;
+
+- regmap_multi_reg_write(lt9611->regmap, reg_cfg, ARRAY_SIZE(reg_cfg));
++ if (mode->flags & DRM_MODE_FLAG_NHSYNC)
++ pol |= 0x2;
++ if (mode->flags & DRM_MODE_FLAG_NVSYNC)
++ pol |= 0x1;
++ regmap_write(lt9611->regmap, 0x831d, pol);
+
+- switch (mode->hdisplay) {
+- case 640:
+- regmap_write(lt9611->regmap, 0x8326, 0x14);
+- break;
+- case 1920:
+- regmap_write(lt9611->regmap, 0x8326, 0x37);
+- break;
+- case 3840:
++ if (mode->hdisplay == 3840)
+ regmap_multi_reg_write(lt9611->regmap, reg_cfg2, ARRAY_SIZE(reg_cfg2));
+- break;
+- }
++ else
++ regmap_multi_reg_write(lt9611->regmap, reg_cfg, ARRAY_SIZE(reg_cfg));
++
++ regmap_write(lt9611->regmap, 0x8326, pcr_m);
+
+ /* pcr rst */
+ regmap_write(lt9611->regmap, 0x8011, 0x5a);
+ regmap_write(lt9611->regmap, 0x8011, 0xfa);
+ }
+
+-static int lt9611_pll_setup(struct lt9611 *lt9611, const struct drm_display_mode *mode)
++static int lt9611_pll_setup(struct lt9611 *lt9611, const struct drm_display_mode *mode, unsigned int *postdiv)
+ {
+ unsigned int pclk = mode->clock;
+ const struct reg_sequence reg_cfg[] = {
+@@ -263,12 +262,16 @@ static int lt9611_pll_setup(struct lt9611 *lt9611, const struct drm_display_mode
+
+ regmap_multi_reg_write(lt9611->regmap, reg_cfg, ARRAY_SIZE(reg_cfg));
+
+- if (pclk > 150000)
++ if (pclk > 150000) {
+ regmap_write(lt9611->regmap, 0x812d, 0x88);
+- else if (pclk > 70000)
++ *postdiv = 1;
++ } else if (pclk > 70000) {
+ regmap_write(lt9611->regmap, 0x812d, 0x99);
+- else
++ *postdiv = 2;
++ } else {
+ regmap_write(lt9611->regmap, 0x812d, 0xaa);
++ *postdiv = 4;
++ }
+
+ /*
+ * first divide pclk by 2 first
+@@ -448,12 +451,11 @@ static void lt9611_sleep_setup(struct lt9611 *lt9611)
+ { 0x8023, 0x01 },
+ { 0x8157, 0x03 }, /* set addr pin as output */
+ { 0x8149, 0x0b },
+- { 0x8151, 0x30 }, /* disable IRQ */
++
+ { 0x8102, 0x48 }, /* MIPI Rx power down */
+ { 0x8123, 0x80 },
+ { 0x8130, 0x00 },
+- { 0x8100, 0x01 }, /* bandgap power down */
+- { 0x8101, 0x00 }, /* system clk power down */
++ { 0x8011, 0x0a },
+ };
+
+ regmap_multi_reg_write(lt9611->regmap,
+@@ -767,7 +769,7 @@ static const struct drm_connector_funcs lt9611_bridge_connector_funcs = {
+ static struct mipi_dsi_device *lt9611_attach_dsi(struct lt9611 *lt9611,
+ struct device_node *dsi_node)
+ {
+- const struct mipi_dsi_device_info info = { "lt9611", 0, NULL };
++ const struct mipi_dsi_device_info info = { "lt9611", 0, lt9611->dev->of_node};
+ struct mipi_dsi_device *dsi;
+ struct mipi_dsi_host *host;
+ struct device *dev = lt9611->dev;
+@@ -857,12 +859,18 @@ static enum drm_mode_status lt9611_bridge_mode_valid(struct drm_bridge *bridge,
+ static void lt9611_bridge_pre_enable(struct drm_bridge *bridge)
+ {
+ struct lt9611 *lt9611 = bridge_to_lt9611(bridge);
++ static const struct reg_sequence reg_cfg[] = {
++ { 0x8102, 0x12 },
++ { 0x8123, 0x40 },
++ { 0x8130, 0xea },
++ { 0x8011, 0xfa },
++ };
+
+ if (!lt9611->sleep)
+ return;
+
+- lt9611_reset(lt9611);
+- regmap_write(lt9611->regmap, 0x80ee, 0x01);
++ regmap_multi_reg_write(lt9611->regmap,
++ reg_cfg, ARRAY_SIZE(reg_cfg));
+
+ lt9611->sleep = false;
+ }
+@@ -882,14 +890,15 @@ static void lt9611_bridge_mode_set(struct drm_bridge *bridge,
+ {
+ struct lt9611 *lt9611 = bridge_to_lt9611(bridge);
+ struct hdmi_avi_infoframe avi_frame;
++ unsigned int postdiv;
+ int ret;
+
+ lt9611_bridge_pre_enable(bridge);
+
+ lt9611_mipi_input_digital(lt9611, mode);
+- lt9611_pll_setup(lt9611, mode);
++ lt9611_pll_setup(lt9611, mode, &postdiv);
+ lt9611_mipi_video_setup(lt9611, mode);
+- lt9611_pcr_setup(lt9611, mode);
++ lt9611_pcr_setup(lt9611, mode, postdiv);
+
+ ret = drm_hdmi_avi_infoframe_from_display_mode(&avi_frame,
+ &lt9611->connector,
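+
A quick sanity check on the computed PCR M value (my arithmetic, not part of the patch): a standard 1080p60 mode has mode->clock = 148500 kHz, which lands in the "pclk > 70000" branch, so postdiv = 2 and pcr_m = 148500 * 5 * 2 / 27000 = 55 = 0x37, exactly the value the old code hardcoded into register 0x8326 for hdisplay == 1920.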
+diff --git a/drivers/gpu/drm/bridge/megachips-stdpxxxx-ge-b850v3-fw.c b/drivers/gpu/drm/bridge/megachips-stdpxxxx-ge-b850v3-fw.c
+index 97359f807bfc3..cbfa05a6767b5 100644
+--- a/drivers/gpu/drm/bridge/megachips-stdpxxxx-ge-b850v3-fw.c
++++ b/drivers/gpu/drm/bridge/megachips-stdpxxxx-ge-b850v3-fw.c
+@@ -440,7 +440,11 @@ static int __init stdpxxxx_ge_b850v3_init(void)
+ if (ret)
+ return ret;
+
+- return i2c_add_driver(&stdp2690_ge_b850v3_fw_driver);
++ ret = i2c_add_driver(&stdp2690_ge_b850v3_fw_driver);
++ if (ret)
++ i2c_del_driver(&stdp4028_ge_b850v3_fw_driver);
++
++ return ret;
+ }
+ module_init(stdpxxxx_ge_b850v3_init);
+
+diff --git a/drivers/gpu/drm/bridge/tc358767.c b/drivers/gpu/drm/bridge/tc358767.c
+index 2a58eb271f701..b9b681086fc49 100644
+--- a/drivers/gpu/drm/bridge/tc358767.c
++++ b/drivers/gpu/drm/bridge/tc358767.c
+@@ -1264,10 +1264,10 @@ static int tc_dsi_rx_enable(struct tc_data *tc)
+ u32 value;
+ int ret;
+
+- regmap_write(tc->regmap, PPI_D0S_CLRSIPOCOUNT, 5);
+- regmap_write(tc->regmap, PPI_D1S_CLRSIPOCOUNT, 5);
+- regmap_write(tc->regmap, PPI_D2S_CLRSIPOCOUNT, 5);
+- regmap_write(tc->regmap, PPI_D3S_CLRSIPOCOUNT, 5);
++ regmap_write(tc->regmap, PPI_D0S_CLRSIPOCOUNT, 25);
++ regmap_write(tc->regmap, PPI_D1S_CLRSIPOCOUNT, 25);
++ regmap_write(tc->regmap, PPI_D2S_CLRSIPOCOUNT, 25);
++ regmap_write(tc->regmap, PPI_D3S_CLRSIPOCOUNT, 25);
+ regmap_write(tc->regmap, PPI_D0S_ATMR, 0);
+ regmap_write(tc->regmap, PPI_D1S_ATMR, 0);
+ regmap_write(tc->regmap, PPI_TX_RX_TA, TTA_GET | TTA_SURE);
+diff --git a/drivers/gpu/drm/bridge/ti-sn65dsi83.c b/drivers/gpu/drm/bridge/ti-sn65dsi83.c
+index 7ba9467fff129..047c14ddbbf11 100644
+--- a/drivers/gpu/drm/bridge/ti-sn65dsi83.c
++++ b/drivers/gpu/drm/bridge/ti-sn65dsi83.c
+@@ -346,7 +346,7 @@ static void sn65dsi83_atomic_enable(struct drm_bridge *bridge,
+
+ /* Deassert reset */
+ gpiod_set_value_cansleep(ctx->enable_gpio, 1);
+- usleep_range(1000, 1100);
++ usleep_range(10000, 11000);
+
+ /* Get the LVDS format from the bridge state. */
+ bridge_state = drm_atomic_get_new_bridge_state(state, bridge);
+diff --git a/drivers/gpu/drm/drm_client.c b/drivers/gpu/drm/drm_client.c
+index 056ab9d5f313b..313cbabb12b2d 100644
+--- a/drivers/gpu/drm/drm_client.c
++++ b/drivers/gpu/drm/drm_client.c
+@@ -198,6 +198,11 @@ void drm_client_dev_hotplug(struct drm_device *dev)
+ if (!drm_core_check_feature(dev, DRIVER_MODESET))
+ return;
+
++ if (!dev->mode_config.num_connector) {
++ drm_dbg_kms(dev, "No connectors found, will not send hotplug events!\n");
++ return;
++ }
++
+ mutex_lock(&dev->clientlist_mutex);
+ list_for_each_entry(client, &dev->clientlist, list) {
+ if (!client->funcs || !client->funcs->hotplug)
+diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
+index 3841aba17abdc..b94adb9bbefb8 100644
+--- a/drivers/gpu/drm/drm_edid.c
++++ b/drivers/gpu/drm/drm_edid.c
+@@ -5249,13 +5249,12 @@ static int add_cea_modes(struct drm_connector *connector,
+ {
+ const struct cea_db *db;
+ struct cea_db_iter iter;
++ const u8 *hdmi = NULL, *video = NULL;
++ u8 hdmi_len = 0, video_len = 0;
+ int modes = 0;
+
+ cea_db_iter_edid_begin(drm_edid, &iter);
+ cea_db_iter_for_each(db, &iter) {
+- const u8 *hdmi = NULL, *video = NULL;
+- u8 hdmi_len = 0, video_len = 0;
+-
+ if (cea_db_tag(db) == CTA_DB_VIDEO) {
+ video = cea_db_data(db);
+ video_len = cea_db_payload_len(db);
+@@ -5271,18 +5270,17 @@ static int add_cea_modes(struct drm_connector *connector,
+ modes += do_y420vdb_modes(connector, vdb420,
+ cea_db_payload_len(db) - 1);
+ }
+-
+- /*
+- * We parse the HDMI VSDB after having added the cea modes as we
+- * will be patching their flags when the sink supports stereo
+- * 3D.
+- */
+- if (hdmi)
+- modes += do_hdmi_vsdb_modes(connector, hdmi, hdmi_len,
+- video, video_len);
+ }
+ cea_db_iter_end(&iter);
+
++ /*
++ * We parse the HDMI VSDB after having added the cea modes as we will be
++ * patching their flags when the sink supports stereo 3D.
++ */
++ if (hdmi)
++ modes += do_hdmi_vsdb_modes(connector, hdmi, hdmi_len,
++ video, video_len);
++
+ return modes;
+ }
+
+@@ -6885,8 +6883,6 @@ static u8 drm_mode_hdmi_vic(const struct drm_connector *connector,
+ static u8 drm_mode_cea_vic(const struct drm_connector *connector,
+ const struct drm_display_mode *mode)
+ {
+- u8 vic;
+-
+ /*
+ * HDMI spec says if a mode is found in HDMI 1.4b 4K modes
+ * we should send its VIC in vendor infoframes, else send the
+@@ -6896,13 +6892,18 @@ static u8 drm_mode_cea_vic(const struct drm_connector *connector,
+ if (drm_mode_hdmi_vic(connector, mode))
+ return 0;
+
+- vic = drm_match_cea_mode(mode);
++ return drm_match_cea_mode(mode);
++}
+
+- /*
+- * HDMI 1.4 VIC range: 1 <= VIC <= 64 (CEA-861-D) but
+- * HDMI 2.0 VIC range: 1 <= VIC <= 107 (CEA-861-F). So we
+- * have to make sure we dont break HDMI 1.4 sinks.
+- */
++/*
++ * Avoid sending VICs defined in HDMI 2.0 in AVI infoframes to sinks that
++ * conform to HDMI 1.4.
++ *
++ * HDMI 1.4 (CTA-861-D) VIC range: [1..64]
++ * HDMI 2.0 (CTA-861-F) VIC range: [1..107]
++ */
++static u8 vic_for_avi_infoframe(const struct drm_connector *connector, u8 vic)
++{
+ if (!is_hdmi2_sink(connector) && vic > 64)
+ return 0;
+
+@@ -6978,7 +6979,7 @@ drm_hdmi_avi_infoframe_from_display_mode(struct hdmi_avi_infoframe *frame,
+ picture_aspect = HDMI_PICTURE_ASPECT_NONE;
+ }
+
+- frame->video_code = vic;
++ frame->video_code = vic_for_avi_infoframe(connector, vic);
+ frame->picture_aspect = picture_aspect;
+ frame->active_aspect = HDMI_ACTIVE_ASPECT_PICTURE;
+ frame->scan_mode = HDMI_SCAN_MODE_UNDERSCAN;
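+
As a worked example of the new gate (my illustration, not from the patch): 3840x2160@60 matches CTA-861-F VIC 97, which is above the HDMI 1.4 limit of 64. vic_for_avi_infoframe() therefore forwards 97 to an HDMI 2.0 sink but returns 0 for an HDMI 1.4 sink; since 2160p60 is also not one of the HDMI 1.4b 4K modes handled by drm_mode_hdmi_vic(), such a sink simply receives no VIC for that mode.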
+diff --git a/drivers/gpu/drm/drm_fbdev_generic.c b/drivers/gpu/drm/drm_fbdev_generic.c
+index 593aa3283792b..215fe16ff1fb4 100644
+--- a/drivers/gpu/drm/drm_fbdev_generic.c
++++ b/drivers/gpu/drm/drm_fbdev_generic.c
+@@ -390,11 +390,6 @@ static int drm_fbdev_client_hotplug(struct drm_client_dev *client)
+ if (dev->fb_helper)
+ return drm_fb_helper_hotplug_event(dev->fb_helper);
+
+- if (!dev->mode_config.num_connector) {
+- drm_dbg_kms(dev, "No connectors found, will not create framebuffer!\n");
+- return 0;
+- }
+-
+ drm_fb_helper_prepare(dev, fb_helper, &drm_fb_helper_generic_funcs);
+
+ ret = drm_fb_helper_init(dev, fb_helper);
+diff --git a/drivers/gpu/drm/drm_fourcc.c b/drivers/gpu/drm/drm_fourcc.c
+index 6242dfbe92402..0f17dfa8702b4 100644
+--- a/drivers/gpu/drm/drm_fourcc.c
++++ b/drivers/gpu/drm/drm_fourcc.c
+@@ -190,6 +190,10 @@ const struct drm_format_info *__drm_format_info(u32 format)
+ { .format = DRM_FORMAT_BGRA5551, .depth = 15, .num_planes = 1, .cpp = { 2, 0, 0 }, .hsub = 1, .vsub = 1, .has_alpha = true },
+ { .format = DRM_FORMAT_RGB565, .depth = 16, .num_planes = 1, .cpp = { 2, 0, 0 }, .hsub = 1, .vsub = 1 },
+ { .format = DRM_FORMAT_BGR565, .depth = 16, .num_planes = 1, .cpp = { 2, 0, 0 }, .hsub = 1, .vsub = 1 },
++#ifdef __BIG_ENDIAN
++ { .format = DRM_FORMAT_XRGB1555 | DRM_FORMAT_BIG_ENDIAN, .depth = 15, .num_planes = 1, .cpp = { 2, 0, 0 }, .hsub = 1, .vsub = 1 },
++ { .format = DRM_FORMAT_RGB565 | DRM_FORMAT_BIG_ENDIAN, .depth = 16, .num_planes = 1, .cpp = { 2, 0, 0 }, .hsub = 1, .vsub = 1 },
++#endif
+ { .format = DRM_FORMAT_RGB888, .depth = 24, .num_planes = 1, .cpp = { 3, 0, 0 }, .hsub = 1, .vsub = 1 },
+ { .format = DRM_FORMAT_BGR888, .depth = 24, .num_planes = 1, .cpp = { 3, 0, 0 }, .hsub = 1, .vsub = 1 },
+ { .format = DRM_FORMAT_XRGB8888, .depth = 24, .num_planes = 1, .cpp = { 4, 0, 0 }, .hsub = 1, .vsub = 1 },
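+
With these table entries present, format-info lookups for byte-swapped 16bpp framebuffers on big-endian kernels succeed instead of returning NULL. A hedged example of the lookup, using the existing drm_format_info() query helper:

    #include <drm/drm_fourcc.h>

    const struct drm_format_info *info =
            drm_format_info(DRM_FORMAT_RGB565 | DRM_FORMAT_BIG_ENDIAN);
    /* on a big-endian kernel: info->cpp[0] == 2; NULL (with a WARN) elsewhere */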
+diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
+index b602cd72a1205..7af9da886d4e5 100644
+--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
++++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
+@@ -681,23 +681,7 @@ struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem)
+ }
+ EXPORT_SYMBOL_GPL(drm_gem_shmem_get_sg_table);
+
+-/**
+- * drm_gem_shmem_get_pages_sgt - Pin pages, dma map them, and return a
+- * scatter/gather table for a shmem GEM object.
+- * @shmem: shmem GEM object
+- *
+- * This function returns a scatter/gather table suitable for driver usage. If
+- * the sg table doesn't exist, the pages are pinned, dma-mapped, and a sg
+- * table created.
+- *
+- * This is the main function for drivers to get at backing storage, and it hides
+- * and difference between dma-buf imported and natively allocated objects.
+- * drm_gem_shmem_get_sg_table() should not be directly called by drivers.
+- *
+- * Returns:
+- * A pointer to the scatter/gather table of pinned pages or errno on failure.
+- */
+-struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem)
++static struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_object *shmem)
+ {
+ struct drm_gem_object *obj = &shmem->base;
+ int ret;
+@@ -708,7 +692,7 @@ struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem)
+
+ WARN_ON(obj->import_attach);
+
+- ret = drm_gem_shmem_get_pages(shmem);
++ ret = drm_gem_shmem_get_pages_locked(shmem);
+ if (ret)
+ return ERR_PTR(ret);
+
+@@ -730,9 +714,39 @@ err_free_sgt:
+ sg_free_table(sgt);
+ kfree(sgt);
+ err_put_pages:
+- drm_gem_shmem_put_pages(shmem);
++ drm_gem_shmem_put_pages_locked(shmem);
+ return ERR_PTR(ret);
+ }
++
++/**
++ * drm_gem_shmem_get_pages_sgt - Pin pages, dma map them, and return a
++ * scatter/gather table for a shmem GEM object.
++ * @shmem: shmem GEM object
++ *
++ * This function returns a scatter/gather table suitable for driver usage. If
++ * the sg table doesn't exist, the pages are pinned, dma-mapped, and a sg
++ * table created.
++ *
++ * This is the main function for drivers to get at backing storage, and it hides
++ * any difference between dma-buf imported and natively allocated objects.
++ * drm_gem_shmem_get_sg_table() should not be directly called by drivers.
++ *
++ * Returns:
++ * A pointer to the scatter/gather table of pinned pages or errno on failure.
++ */
++struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem)
++{
++ int ret;
++ struct sg_table *sgt;
++
++ ret = mutex_lock_interruptible(&shmem->pages_lock);
++ if (ret)
++ return ERR_PTR(ret);
++ sgt = drm_gem_shmem_get_pages_sgt_locked(shmem);
++ mutex_unlock(&shmem->pages_lock);
++
++ return sgt;
++}
+ EXPORT_SYMBOL_GPL(drm_gem_shmem_get_pages_sgt);
+
+ /**
+diff --git a/drivers/gpu/drm/drm_mipi_dsi.c b/drivers/gpu/drm/drm_mipi_dsi.c
+index 497ef4b6a90a4..4bc15fbd009d8 100644
+--- a/drivers/gpu/drm/drm_mipi_dsi.c
++++ b/drivers/gpu/drm/drm_mipi_dsi.c
+@@ -1224,6 +1224,58 @@ int mipi_dsi_dcs_get_display_brightness(struct mipi_dsi_device *dsi,
+ }
+ EXPORT_SYMBOL(mipi_dsi_dcs_get_display_brightness);
+
++/**
++ * mipi_dsi_dcs_set_display_brightness_large() - sets the 16-bit brightness value
++ * of the display
++ * @dsi: DSI peripheral device
++ * @brightness: brightness value
++ *
++ * Return: 0 on success or a negative error code on failure.
++ */
++int mipi_dsi_dcs_set_display_brightness_large(struct mipi_dsi_device *dsi,
++ u16 brightness)
++{
++ u8 payload[2] = { brightness >> 8, brightness & 0xff };
++ ssize_t err;
++
++ err = mipi_dsi_dcs_write(dsi, MIPI_DCS_SET_DISPLAY_BRIGHTNESS,
++ payload, sizeof(payload));
++ if (err < 0)
++ return err;
++
++ return 0;
++}
++EXPORT_SYMBOL(mipi_dsi_dcs_set_display_brightness_large);
++
++/**
++ * mipi_dsi_dcs_get_display_brightness_large() - gets the current 16-bit
++ * brightness value of the display
++ * @dsi: DSI peripheral device
++ * @brightness: brightness value
++ *
++ * Return: 0 on success or a negative error code on failure.
++ */
++int mipi_dsi_dcs_get_display_brightness_large(struct mipi_dsi_device *dsi,
++ u16 *brightness)
++{
++ u8 brightness_be[2];
++ ssize_t err;
++
++ err = mipi_dsi_dcs_read(dsi, MIPI_DCS_GET_DISPLAY_BRIGHTNESS,
++ brightness_be, sizeof(brightness_be));
++ if (err <= 0) {
++ if (err == 0)
++ err = -ENODATA;
++
++ return err;
++ }
++
++ *brightness = (brightness_be[0] << 8) | brightness_be[1];
++
++ return 0;
++}
++EXPORT_SYMBOL(mipi_dsi_dcs_get_display_brightness_large);
++
+ static int mipi_dsi_drv_probe(struct device *dev)
+ {
+ struct mipi_dsi_driver *drv = to_mipi_dsi_driver(dev->driver);
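+
A minimal sketch of how a panel driver might use the two helpers added above from a backlight_ops hook (the mypanel naming and bl_get_data() wiring are assumptions, not taken from an in-tree user):

    #include <linux/backlight.h>
    #include <drm/drm_mipi_dsi.h>

    static int mypanel_bl_update_status(struct backlight_device *bl)
    {
            struct mipi_dsi_device *dsi = bl_get_data(bl);

            /* 16-bit brightness, sent MSB first as in the helper above */
            return mipi_dsi_dcs_set_display_brightness_large(dsi,
                                backlight_get_brightness(bl));
    }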
+diff --git a/drivers/gpu/drm/drm_mode_config.c b/drivers/gpu/drm/drm_mode_config.c
+index 688c8afe0bf17..8525ef8515406 100644
+--- a/drivers/gpu/drm/drm_mode_config.c
++++ b/drivers/gpu/drm/drm_mode_config.c
+@@ -399,6 +399,8 @@ static void drm_mode_config_init_release(struct drm_device *dev, void *ptr)
+ */
+ int drmm_mode_config_init(struct drm_device *dev)
+ {
++ int ret;
++
+ mutex_init(&dev->mode_config.mutex);
+ drm_modeset_lock_init(&dev->mode_config.connection_mutex);
+ mutex_init(&dev->mode_config.idr_mutex);
+@@ -420,7 +422,11 @@ int drmm_mode_config_init(struct drm_device *dev)
+ init_llist_head(&dev->mode_config.connector_free_list);
+ INIT_WORK(&dev->mode_config.connector_free_work, drm_connector_free_work_fn);
+
+- drm_mode_create_standard_properties(dev);
++ ret = drm_mode_create_standard_properties(dev);
++ if (ret) {
++ drm_mode_config_cleanup(dev);
++ return ret;
++ }
+
+ /* Just to be sure */
+ dev->mode_config.num_fb = 0;
+diff --git a/drivers/gpu/drm/drm_modes.c b/drivers/gpu/drm/drm_modes.c
+index 3c8034a8c27bd..951afe8279da8 100644
+--- a/drivers/gpu/drm/drm_modes.c
++++ b/drivers/gpu/drm/drm_modes.c
+@@ -1809,7 +1809,7 @@ static int drm_mode_parse_cmdline_named_mode(const char *name,
+ if (ret != name_end)
+ continue;
+
+- strcpy(cmdline_mode->name, mode->name);
++ strscpy(cmdline_mode->name, mode->name, sizeof(cmdline_mode->name));
+ cmdline_mode->pixel_clock = mode->pixel_clock_khz;
+ cmdline_mode->xres = mode->xres;
+ cmdline_mode->yres = mode->yres;
+diff --git a/drivers/gpu/drm/drm_panel_orientation_quirks.c b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+index 3659f0465a724..5522d610c5cfd 100644
+--- a/drivers/gpu/drm/drm_panel_orientation_quirks.c
++++ b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+@@ -30,12 +30,6 @@ struct drm_dmi_panel_orientation_data {
+ int orientation;
+ };
+
+-static const struct drm_dmi_panel_orientation_data asus_t100ha = {
+- .width = 800,
+- .height = 1280,
+- .orientation = DRM_MODE_PANEL_ORIENTATION_LEFT_UP,
+-};
+-
+ static const struct drm_dmi_panel_orientation_data gpd_micropc = {
+ .width = 720,
+ .height = 1280,
+@@ -97,6 +91,12 @@ static const struct drm_dmi_panel_orientation_data lcd720x1280_rightside_up = {
+ .orientation = DRM_MODE_PANEL_ORIENTATION_RIGHT_UP,
+ };
+
++static const struct drm_dmi_panel_orientation_data lcd800x1280_leftside_up = {
++ .width = 800,
++ .height = 1280,
++ .orientation = DRM_MODE_PANEL_ORIENTATION_LEFT_UP,
++};
++
+ static const struct drm_dmi_panel_orientation_data lcd800x1280_rightside_up = {
+ .width = 800,
+ .height = 1280,
+@@ -127,6 +127,12 @@ static const struct drm_dmi_panel_orientation_data lcd1600x2560_leftside_up = {
+ .orientation = DRM_MODE_PANEL_ORIENTATION_LEFT_UP,
+ };
+
++static const struct drm_dmi_panel_orientation_data lcd1600x2560_rightside_up = {
++ .width = 1600,
++ .height = 2560,
++ .orientation = DRM_MODE_PANEL_ORIENTATION_RIGHT_UP,
++};
++
+ static const struct dmi_system_id orientation_data[] = {
+ { /* Acer One 10 (S1003) */
+ .matches = {
+@@ -151,7 +157,7 @@ static const struct dmi_system_id orientation_data[] = {
+ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "T100HAN"),
+ },
+- .driver_data = (void *)&asus_t100ha,
++ .driver_data = (void *)&lcd800x1280_leftside_up,
+ }, { /* Asus T101HA */
+ .matches = {
+ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+@@ -196,6 +202,12 @@ static const struct dmi_system_id orientation_data[] = {
+ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "Hi10 pro tablet"),
+ },
+ .driver_data = (void *)&lcd1200x1920_rightside_up,
++ }, { /* Dynabook K50 */
++ .matches = {
++ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Dynabook Inc."),
++ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "dynabook K50/FR"),
++ },
++ .driver_data = (void *)&lcd800x1280_leftside_up,
+ }, { /* GPD MicroPC (generic strings, also match on bios date) */
+ .matches = {
+ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Default string"),
+@@ -310,6 +322,12 @@ static const struct dmi_system_id orientation_data[] = {
+ DMI_EXACT_MATCH(DMI_PRODUCT_VERSION, "Lenovo ideapad D330-10IGL"),
+ },
+ .driver_data = (void *)&lcd800x1280_rightside_up,
++ }, { /* Lenovo IdeaPad Duet 3 10IGL5 */
++ .matches = {
++ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++ DMI_EXACT_MATCH(DMI_PRODUCT_VERSION, "IdeaPad Duet 3 10IGL5"),
++ },
++ .driver_data = (void *)&lcd1200x1920_rightside_up,
+ }, { /* Lenovo Yoga Book X90F / X91F / X91L */
+ .matches = {
+ /* Non exact match to match all versions */
+@@ -331,6 +349,13 @@ static const struct dmi_system_id orientation_data[] = {
+ DMI_MATCH(DMI_BIOS_VERSION, "BLADE_21"),
+ },
+ .driver_data = (void *)&lcd1200x1920_rightside_up,
++ }, { /* Lenovo Yoga Tab 3 X90F */
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Intel Corporation"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "CHERRYVIEW D1 PLATFORM"),
++ DMI_MATCH(DMI_PRODUCT_VERSION, "Blade3-10A-001"),
++ },
++ .driver_data = (void *)&lcd1600x2560_rightside_up,
+ }, { /* Nanote UMPC-01 */
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "RWC CO.,LTD"),
+diff --git a/drivers/gpu/drm/exynos/exynos_drm_dsi.c b/drivers/gpu/drm/exynos/exynos_drm_dsi.c
+index ec673223d6b7a..b5305b145ddbd 100644
+--- a/drivers/gpu/drm/exynos/exynos_drm_dsi.c
++++ b/drivers/gpu/drm/exynos/exynos_drm_dsi.c
+@@ -805,15 +805,15 @@ static int exynos_dsi_init_link(struct exynos_dsi *dsi)
+ reg |= DSIM_AUTO_MODE;
+ if (dsi->mode_flags & MIPI_DSI_MODE_VIDEO_HSE)
+ reg |= DSIM_HSE_MODE;
+- if (!(dsi->mode_flags & MIPI_DSI_MODE_VIDEO_NO_HFP))
++ if (dsi->mode_flags & MIPI_DSI_MODE_VIDEO_NO_HFP)
+ reg |= DSIM_HFP_MODE;
+- if (!(dsi->mode_flags & MIPI_DSI_MODE_VIDEO_NO_HBP))
++ if (dsi->mode_flags & MIPI_DSI_MODE_VIDEO_NO_HBP)
+ reg |= DSIM_HBP_MODE;
+- if (!(dsi->mode_flags & MIPI_DSI_MODE_VIDEO_NO_HSA))
++ if (dsi->mode_flags & MIPI_DSI_MODE_VIDEO_NO_HSA)
+ reg |= DSIM_HSA_MODE;
+ }
+
+- if (!(dsi->mode_flags & MIPI_DSI_MODE_NO_EOT_PACKET))
++ if (dsi->mode_flags & MIPI_DSI_MODE_NO_EOT_PACKET)
+ reg |= DSIM_EOT_DISABLE;
+
+ switch (dsi->format) {
+diff --git a/drivers/gpu/drm/gud/gud_pipe.c b/drivers/gpu/drm/gud/gud_pipe.c
+index 7c6dc2bcd14a6..61f4abaf1811f 100644
+--- a/drivers/gpu/drm/gud/gud_pipe.c
++++ b/drivers/gpu/drm/gud/gud_pipe.c
+@@ -157,8 +157,8 @@ static int gud_prep_flush(struct gud_device *gdrm, struct drm_framebuffer *fb,
+ {
+ struct dma_buf_attachment *import_attach = fb->obj[0]->import_attach;
+ u8 compression = gdrm->compression;
+- struct iosys_map map[DRM_FORMAT_MAX_PLANES];
+- struct iosys_map map_data[DRM_FORMAT_MAX_PLANES];
++ struct iosys_map map[DRM_FORMAT_MAX_PLANES] = { };
++ struct iosys_map map_data[DRM_FORMAT_MAX_PLANES] = { };
+ struct iosys_map dst;
+ void *vaddr, *buf;
+ size_t pitch, len;
+diff --git a/drivers/gpu/drm/i915/display/intel_quirks.c b/drivers/gpu/drm/i915/display/intel_quirks.c
+index 6e48d3bcdfec5..a280448df771a 100644
+--- a/drivers/gpu/drm/i915/display/intel_quirks.c
++++ b/drivers/gpu/drm/i915/display/intel_quirks.c
+@@ -199,6 +199,8 @@ static struct intel_quirk intel_quirks[] = {
+ /* ECS Liva Q2 */
+ { 0x3185, 0x1019, 0xa94d, quirk_increase_ddi_disabled_time },
+ { 0x3184, 0x1019, 0xa94d, quirk_increase_ddi_disabled_time },
++ /* HP Notebook - 14-r206nv */
++ { 0x0f31, 0x103c, 0x220f, quirk_invert_brightness },
+ };
+
+ void intel_init_quirks(struct drm_i915_private *i915)
+diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
+index d37931e16fd9b..34b0a9dadce4f 100644
+--- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c
++++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
+@@ -1476,10 +1476,12 @@ static int __intel_engine_stop_cs(struct intel_engine_cs *engine,
+ intel_uncore_write_fw(uncore, mode, _MASKED_BIT_ENABLE(STOP_RING));
+
+ /*
+- * Wa_22011802037 : gen11, gen12, Prior to doing a reset, ensure CS is
++ * Wa_22011802037: Prior to doing a reset, ensure CS is
+ * stopped, set ring stop bit and prefetch disable bit to halt CS
+ */
+- if (IS_GRAPHICS_VER(engine->i915, 11, 12))
++ if (IS_MTL_GRAPHICS_STEP(engine->i915, M, STEP_A0, STEP_B0) ||
++ (GRAPHICS_VER(engine->i915) >= 11 &&
++ GRAPHICS_VER_FULL(engine->i915) < IP_VER(12, 70)))
+ intel_uncore_write_fw(uncore, RING_MODE_GEN7(engine->mmio_base),
+ _MASKED_BIT_ENABLE(GEN12_GFX_PREFETCH_DISABLE));
+
+diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
+index 21cb5b69d82eb..3c573d41d4046 100644
+--- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
++++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
+@@ -2989,10 +2989,12 @@ static void execlists_reset_prepare(struct intel_engine_cs *engine)
+ intel_engine_stop_cs(engine);
+
+ /*
+- * Wa_22011802037:gen11/gen12: In addition to stopping the cs, we need
++ * Wa_22011802037: In addition to stopping the cs, we need
+ * to wait for any pending mi force wakeups
+ */
+- if (IS_GRAPHICS_VER(engine->i915, 11, 12))
++ if (IS_MTL_GRAPHICS_STEP(engine->i915, M, STEP_A0, STEP_B0) ||
++ (GRAPHICS_VER(engine->i915) >= 11 &&
++ GRAPHICS_VER_FULL(engine->i915) < IP_VER(12, 70)))
+ intel_engine_wait_for_pending_mi_fw(engine);
+
+ engine->execlists.reset_ccid = active_ccid(engine);
+diff --git a/drivers/gpu/drm/i915/gt/intel_gt_mcr.c b/drivers/gpu/drm/i915/gt/intel_gt_mcr.c
+index ea86c1ab5dc56..58ea3325bbdaa 100644
+--- a/drivers/gpu/drm/i915/gt/intel_gt_mcr.c
++++ b/drivers/gpu/drm/i915/gt/intel_gt_mcr.c
+@@ -162,8 +162,15 @@ void intel_gt_mcr_init(struct intel_gt *gt)
+ if (MEDIA_VER(i915) >= 13 && gt->type == GT_MEDIA) {
+ gt->steering_table[OADDRM] = xelpmp_oaddrm_steering_table;
+ } else if (GRAPHICS_VER_FULL(i915) >= IP_VER(12, 70)) {
+- fuse = REG_FIELD_GET(GT_L3_EXC_MASK,
+- intel_uncore_read(gt->uncore, XEHP_FUSE4));
++ /* Wa_14016747170 */
++ if (IS_MTL_GRAPHICS_STEP(i915, M, STEP_A0, STEP_B0) ||
++ IS_MTL_GRAPHICS_STEP(i915, P, STEP_A0, STEP_B0))
++ fuse = REG_FIELD_GET(MTL_GT_L3_EXC_MASK,
++ intel_uncore_read(gt->uncore,
++ MTL_GT_ACTIVITY_FACTOR));
++ else
++ fuse = REG_FIELD_GET(GT_L3_EXC_MASK,
++ intel_uncore_read(gt->uncore, XEHP_FUSE4));
+
+ /*
+ * Despite the register field being named "exclude mask" the
+diff --git a/drivers/gpu/drm/i915/gt/intel_gt_regs.h b/drivers/gpu/drm/i915/gt/intel_gt_regs.h
+index a5454af2a9cfd..9758b0b635601 100644
+--- a/drivers/gpu/drm/i915/gt/intel_gt_regs.h
++++ b/drivers/gpu/drm/i915/gt/intel_gt_regs.h
+@@ -413,6 +413,7 @@
+ #define TBIMR_FAST_CLIP REG_BIT(5)
+
+ #define VFLSKPD MCR_REG(0x62a8)
++#define VF_PREFETCH_TLB_DIS REG_BIT(5)
+ #define DIS_OVER_FETCH_CACHE REG_BIT(1)
+ #define DIS_MULT_MISS_RD_SQUASH REG_BIT(0)
+
+@@ -680,10 +681,7 @@
+ #define GEN6_RSTCTL _MMIO(0x9420)
+
+ #define GEN7_MISCCPCTL _MMIO(0x9424)
+-#define GEN7_DOP_CLOCK_GATE_ENABLE (1 << 0)
+-
+-#define GEN8_MISCCPCTL MCR_REG(0x9424)
+-#define GEN8_DOP_CLOCK_GATE_ENABLE REG_BIT(0)
++#define GEN7_DOP_CLOCK_GATE_ENABLE REG_BIT(0)
+ #define GEN12_DOP_CLOCK_GATE_RENDER_ENABLE REG_BIT(1)
+ #define GEN8_DOP_CLOCK_GATE_CFCLK_ENABLE (1 << 2)
+ #define GEN8_DOP_CLOCK_GATE_GUC_ENABLE (1 << 4)
+@@ -968,7 +966,8 @@
+ #define GEN7_WA_FOR_GEN7_L3_CONTROL 0x3C47FF8C
+ #define GEN7_L3AGDIS (1 << 19)
+
+-#define XEHPC_LNCFMISCCFGREG0 _MMIO(0xb01c)
++#define XEHPC_LNCFMISCCFGREG0 MCR_REG(0xb01c)
++#define XEHPC_HOSTCACHEEN REG_BIT(1)
+ #define XEHPC_OVRLSCCC REG_BIT(0)
+
+ #define GEN7_L3CNTLREG2 _MMIO(0xb020)
+@@ -1030,7 +1029,7 @@
+ #define XEHP_L3SCQREG7 MCR_REG(0xb188)
+ #define BLEND_FILL_CACHING_OPT_DIS REG_BIT(3)
+
+-#define XEHPC_L3SCRUB _MMIO(0xb18c)
++#define XEHPC_L3SCRUB MCR_REG(0xb18c)
+ #define SCRUB_CL_DWNGRADE_SHARED REG_BIT(12)
+ #define SCRUB_RATE_PER_BANK_MASK REG_GENMASK(2, 0)
+ #define SCRUB_RATE_4B_PER_CLK REG_FIELD_PREP(SCRUB_RATE_PER_BANK_MASK, 0x6)
+@@ -1088,16 +1087,19 @@
+ #define XEHP_MERT_MOD_CTRL MCR_REG(0xcf28)
+ #define RENDER_MOD_CTRL MCR_REG(0xcf2c)
+ #define COMP_MOD_CTRL MCR_REG(0xcf30)
+-#define VDBX_MOD_CTRL MCR_REG(0xcf34)
+-#define VEBX_MOD_CTRL MCR_REG(0xcf38)
++#define XELPMP_GSC_MOD_CTRL _MMIO(0xcf30) /* media GT only */
++#define XEHP_VDBX_MOD_CTRL MCR_REG(0xcf34)
++#define XELPMP_VDBX_MOD_CTRL _MMIO(0xcf34)
++#define XEHP_VEBX_MOD_CTRL MCR_REG(0xcf38)
++#define XELPMP_VEBX_MOD_CTRL _MMIO(0xcf38)
+ #define FORCE_MISS_FTLB REG_BIT(3)
+
+-#define GEN12_GAMSTLB_CTRL _MMIO(0xcf4c)
++#define XEHP_GAMSTLB_CTRL MCR_REG(0xcf4c)
+ #define CONTROL_BLOCK_CLKGATE_DIS REG_BIT(12)
+ #define EGRESS_BLOCK_CLKGATE_DIS REG_BIT(11)
+ #define TAG_BLOCK_CLKGATE_DIS REG_BIT(7)
+
+-#define GEN12_GAMCNTRL_CTRL _MMIO(0xcf54)
++#define XEHP_GAMCNTRL_CTRL MCR_REG(0xcf54)
+ #define INVALIDATION_BROADCAST_MODE_DIS REG_BIT(12)
+ #define GLOBAL_INVALIDATION_MODE REG_BIT(2)
+
+@@ -1528,6 +1530,9 @@
+
+ #define MTL_MEDIA_MC6 _MMIO(0x138048)
+
++#define MTL_GT_ACTIVITY_FACTOR _MMIO(0x138010)
++#define MTL_GT_L3_EXC_MASK REG_GENMASK(5, 3)
++
+ #define GEN6_GT_THREAD_STATUS_REG _MMIO(0x13805c)
+ #define GEN6_GT_THREAD_STATUS_CORE_MASK 0x7
+
+diff --git a/drivers/gpu/drm/i915/gt/intel_ring.c b/drivers/gpu/drm/i915/gt/intel_ring.c
+index 15ec64d881c44..fb99143be98e7 100644
+--- a/drivers/gpu/drm/i915/gt/intel_ring.c
++++ b/drivers/gpu/drm/i915/gt/intel_ring.c
+@@ -53,7 +53,7 @@ int intel_ring_pin(struct intel_ring *ring, struct i915_gem_ww_ctx *ww)
+ if (unlikely(ret))
+ goto err_unpin;
+
+- if (i915_vma_is_map_and_fenceable(vma)) {
++ if (i915_vma_is_map_and_fenceable(vma) && !HAS_LLC(vma->vm->i915)) {
+ addr = (void __force *)i915_vma_pin_iomap(vma);
+ } else {
+ int type = i915_coherent_map_type(vma->vm->i915, vma->obj, false);
+@@ -98,7 +98,7 @@ void intel_ring_unpin(struct intel_ring *ring)
+ return;
+
+ i915_vma_unset_ggtt_write(vma);
+- if (i915_vma_is_map_and_fenceable(vma))
++ if (i915_vma_is_map_and_fenceable(vma) && !HAS_LLC(vma->vm->i915))
+ i915_vma_unpin_iomap(vma);
+ else
+ i915_gem_object_unpin_map(vma->obj);
+@@ -116,7 +116,7 @@ static struct i915_vma *create_ring_vma(struct i915_ggtt *ggtt, int size)
+
+ obj = i915_gem_object_create_lmem(i915, size, I915_BO_ALLOC_VOLATILE |
+ I915_BO_ALLOC_PM_VOLATILE);
+- if (IS_ERR(obj) && i915_ggtt_has_aperture(ggtt))
++ if (IS_ERR(obj) && i915_ggtt_has_aperture(ggtt) && !HAS_LLC(i915))
+ obj = i915_gem_object_create_stolen(i915, size);
+ if (IS_ERR(obj))
+ obj = i915_gem_object_create_internal(i915, size);
+diff --git a/drivers/gpu/drm/i915/gt/intel_workarounds.c b/drivers/gpu/drm/i915/gt/intel_workarounds.c
+index a0740308555d8..e13052c5dae19 100644
+--- a/drivers/gpu/drm/i915/gt/intel_workarounds.c
++++ b/drivers/gpu/drm/i915/gt/intel_workarounds.c
+@@ -224,6 +224,12 @@ wa_write(struct i915_wa_list *wal, i915_reg_t reg, u32 set)
+ wa_write_clr_set(wal, reg, ~0, set);
+ }
+
++static void
++wa_mcr_write(struct i915_wa_list *wal, i915_mcr_reg_t reg, u32 set)
++{
++ wa_mcr_write_clr_set(wal, reg, ~0, set);
++}
++
+ static void
+ wa_write_or(struct i915_wa_list *wal, i915_reg_t reg, u32 set)
+ {
+@@ -786,6 +792,32 @@ static void dg2_ctx_workarounds_init(struct intel_engine_cs *engine,
+ wa_masked_en(wal, CACHE_MODE_1, MSAA_OPTIMIZATION_REDUC_DISABLE);
+ }
+
++static void mtl_ctx_workarounds_init(struct intel_engine_cs *engine,
++ struct i915_wa_list *wal)
++{
++ struct drm_i915_private *i915 = engine->i915;
++
++ if (IS_MTL_GRAPHICS_STEP(i915, M, STEP_A0, STEP_B0) ||
++ IS_MTL_GRAPHICS_STEP(i915, P, STEP_A0, STEP_B0)) {
++ /* Wa_14014947963 */
++ wa_masked_field_set(wal, VF_PREEMPTION,
++ PREEMPTION_VERTEX_COUNT, 0x4000);
++
++ /* Wa_16013271637 */
++ wa_mcr_masked_en(wal, XEHP_SLICE_COMMON_ECO_CHICKEN1,
++ MSC_MSAA_REODER_BUF_BYPASS_DISABLE);
++
++ /* Wa_18019627453 */
++ wa_mcr_masked_en(wal, VFLSKPD, VF_PREFETCH_TLB_DIS);
++
++ /* Wa_18018764978 */
++ wa_masked_en(wal, PSS_MODE2, SCOREBOARD_STALL_FLUSH_CONTROL);
++ }
++
++ /* Wa_18019271663 */
++ wa_masked_en(wal, CACHE_MODE_1, MSAA_OPTIMIZATION_REDUC_DISABLE);
++}
++
+ static void fakewa_disable_nestedbb_mode(struct intel_engine_cs *engine,
+ struct i915_wa_list *wal)
+ {
+@@ -872,7 +904,9 @@ __intel_engine_init_ctx_wa(struct intel_engine_cs *engine,
+ if (engine->class != RENDER_CLASS)
+ goto done;
+
+- if (IS_PONTEVECCHIO(i915))
++ if (IS_METEORLAKE(i915))
++ mtl_ctx_workarounds_init(engine, wal);
++ else if (IS_PONTEVECCHIO(i915))
+ ; /* noop; none at this time */
+ else if (IS_DG2(i915))
+ dg2_ctx_workarounds_init(engine, wal);
+@@ -1522,6 +1556,13 @@ xehpsdv_gt_workarounds_init(struct intel_gt *gt, struct i915_wa_list *wal)
+
+ /* Wa_14011060649:xehpsdv */
+ wa_14011060649(gt, wal);
++
++ /* Wa_14012362059:xehpsdv */
++ wa_mcr_write_or(wal, XEHP_MERT_MOD_CTRL, FORCE_MISS_FTLB);
++
++ /* Wa_14014368820:xehpsdv */
++ wa_mcr_write_or(wal, XEHP_GAMCNTRL_CTRL,
++ INVALIDATION_BROADCAST_MODE_DIS | GLOBAL_INVALIDATION_MODE);
+ }
+
+ static void
+@@ -1562,6 +1603,12 @@ dg2_gt_workarounds_init(struct intel_gt *gt, struct i915_wa_list *wal)
+ DSS_ROUTER_CLKGATE_DIS);
+ }
+
++ if (IS_DG2_GRAPHICS_STEP(gt->i915, G10, STEP_A0, STEP_B0) ||
++ IS_DG2_GRAPHICS_STEP(gt->i915, G11, STEP_A0, STEP_B0)) {
++ /* Wa_14012362059:dg2 */
++ wa_mcr_write_or(wal, XEHP_MERT_MOD_CTRL, FORCE_MISS_FTLB);
++ }
++
+ if (IS_DG2_GRAPHICS_STEP(gt->i915, G10, STEP_A0, STEP_B0)) {
+ /* Wa_14010948348:dg2_g10 */
+ wa_write_or(wal, UNSLCGCTL9430, MSQDUNIT_CLKGATE_DIS);
+@@ -1607,6 +1654,12 @@ dg2_gt_workarounds_init(struct intel_gt *gt, struct i915_wa_list *wal)
+
+ /* Wa_14011028019:dg2_g10 */
+ wa_mcr_write_or(wal, SSMCGCTL9530, RTFUNIT_CLKGATE_DIS);
++
++ /* Wa_14010680813:dg2_g10 */
++ wa_mcr_write_or(wal, XEHP_GAMSTLB_CTRL,
++ CONTROL_BLOCK_CLKGATE_DIS |
++ EGRESS_BLOCK_CLKGATE_DIS |
++ TAG_BLOCK_CLKGATE_DIS);
+ }
+
+ /* Wa_14014830051:dg2 */
+@@ -1620,7 +1673,17 @@ dg2_gt_workarounds_init(struct intel_gt *gt, struct i915_wa_list *wal)
+ wa_mcr_write_or(wal, XEHP_SQCM, EN_32B_ACCESS);
+
+ /* Wa_14015795083 */
+- wa_mcr_write_clr(wal, GEN8_MISCCPCTL, GEN12_DOP_CLOCK_GATE_RENDER_ENABLE);
++ wa_write_clr(wal, GEN7_MISCCPCTL, GEN12_DOP_CLOCK_GATE_RENDER_ENABLE);
++
++ /* Wa_18018781329 */
++ wa_mcr_write_or(wal, RENDER_MOD_CTRL, FORCE_MISS_FTLB);
++ wa_mcr_write_or(wal, COMP_MOD_CTRL, FORCE_MISS_FTLB);
++ wa_mcr_write_or(wal, XEHP_VDBX_MOD_CTRL, FORCE_MISS_FTLB);
++ wa_mcr_write_or(wal, XEHP_VEBX_MOD_CTRL, FORCE_MISS_FTLB);
++
++ /* Wa_1509235366:dg2 */
++ wa_mcr_write_or(wal, XEHP_GAMCNTRL_CTRL,
++ INVALIDATION_BROADCAST_MODE_DIS | GLOBAL_INVALIDATION_MODE);
+ }
+
+ static void
+@@ -1629,13 +1692,27 @@ pvc_gt_workarounds_init(struct intel_gt *gt, struct i915_wa_list *wal)
+ pvc_init_mcr(gt, wal);
+
+ /* Wa_14015795083 */
+- wa_mcr_write_clr(wal, GEN8_MISCCPCTL, GEN12_DOP_CLOCK_GATE_RENDER_ENABLE);
++ wa_write_clr(wal, GEN7_MISCCPCTL, GEN12_DOP_CLOCK_GATE_RENDER_ENABLE);
++
++ /* Wa_18018781329 */
++ wa_mcr_write_or(wal, RENDER_MOD_CTRL, FORCE_MISS_FTLB);
++ wa_mcr_write_or(wal, COMP_MOD_CTRL, FORCE_MISS_FTLB);
++ wa_mcr_write_or(wal, XEHP_VDBX_MOD_CTRL, FORCE_MISS_FTLB);
++ wa_mcr_write_or(wal, XEHP_VEBX_MOD_CTRL, FORCE_MISS_FTLB);
+ }
+
+ static void
+ xelpg_gt_workarounds_init(struct intel_gt *gt, struct i915_wa_list *wal)
+ {
+- /* FIXME: Actual workarounds will be added in future patch(es) */
++ if (IS_MTL_GRAPHICS_STEP(gt->i915, M, STEP_A0, STEP_B0) ||
++ IS_MTL_GRAPHICS_STEP(gt->i915, P, STEP_A0, STEP_B0)) {
++ /* Wa_14014830051 */
++ wa_mcr_write_clr(wal, SARB_CHICKEN1, COMP_CKN_IN);
++
++ /* Wa_18018781329 */
++ wa_mcr_write_or(wal, RENDER_MOD_CTRL, FORCE_MISS_FTLB);
++ wa_mcr_write_or(wal, COMP_MOD_CTRL, FORCE_MISS_FTLB);
++ }
+
+ /*
+ * Unlike older platforms, we no longer setup implicit steering here;
+@@ -1647,7 +1724,17 @@ xelpg_gt_workarounds_init(struct intel_gt *gt, struct i915_wa_list *wal)
+ static void
+ xelpmp_gt_workarounds_init(struct intel_gt *gt, struct i915_wa_list *wal)
+ {
+- /* FIXME: Actual workarounds will be added in future patch(es) */
++ if (IS_MTL_MEDIA_STEP(gt->i915, STEP_A0, STEP_B0)) {
++ /*
++ * Wa_18018781329
++ *
++ * Note that although these registers are MCR on the primary
++ * GT, the media GT's versions are regular singleton registers.
++ */
++ wa_write_or(wal, XELPMP_GSC_MOD_CTRL, FORCE_MISS_FTLB);
++ wa_write_or(wal, XELPMP_VDBX_MOD_CTRL, FORCE_MISS_FTLB);
++ wa_write_or(wal, XELPMP_VEBX_MOD_CTRL, FORCE_MISS_FTLB);
++ }
+
+ debug_dump_steering(gt);
+ }
+@@ -2171,7 +2258,9 @@ void intel_engine_init_whitelist(struct intel_engine_cs *engine)
+
+ wa_init_start(w, engine->gt, "whitelist", engine->name);
+
+- if (IS_PONTEVECCHIO(i915))
++ if (IS_METEORLAKE(i915))
++ ; /* noop; none at this time */
++ else if (IS_PONTEVECCHIO(i915))
+ pvc_whitelist_build(engine);
+ else if (IS_DG2(i915))
+ dg2_whitelist_build(engine);
+@@ -2281,22 +2370,37 @@ rcs_engine_wa_init(struct intel_engine_cs *engine, struct i915_wa_list *wal)
+ {
+ struct drm_i915_private *i915 = engine->i915;
+
+- if (IS_DG2(i915)) {
+- /* Wa_1509235366:dg2 */
+- wa_write_or(wal, GEN12_GAMCNTRL_CTRL, INVALIDATION_BROADCAST_MODE_DIS |
+- GLOBAL_INVALIDATION_MODE);
+- }
+-
+- if (IS_DG2_GRAPHICS_STEP(i915, G11, STEP_A0, STEP_B0)) {
+- /* Wa_14013392000:dg2_g11 */
+- wa_mcr_masked_en(wal, GEN8_ROW_CHICKEN2, GEN12_ENABLE_LARGE_GRF_MODE);
++ if (IS_MTL_GRAPHICS_STEP(i915, M, STEP_A0, STEP_B0) ||
++ IS_MTL_GRAPHICS_STEP(i915, P, STEP_A0, STEP_B0)) {
++ /* Wa_22014600077 */
++ wa_mcr_masked_en(wal, GEN10_CACHE_MODE_SS,
++ ENABLE_EU_COUNT_FOR_TDL_FLUSH);
+ }
+
+- if (IS_DG2_GRAPHICS_STEP(i915, G10, STEP_B0, STEP_FOREVER) ||
++ if (IS_MTL_GRAPHICS_STEP(i915, M, STEP_A0, STEP_B0) ||
++ IS_MTL_GRAPHICS_STEP(i915, P, STEP_A0, STEP_B0) ||
++ IS_DG2_GRAPHICS_STEP(i915, G10, STEP_B0, STEP_FOREVER) ||
+ IS_DG2_G11(i915) || IS_DG2_G12(i915)) {
+- /* Wa_1509727124:dg2 */
++ /* Wa_1509727124 */
+ wa_mcr_masked_en(wal, GEN10_SAMPLER_MODE,
+ SC_DISABLE_POWER_OPTIMIZATION_EBB);
++
++ /* Wa_22013037850 */
++ wa_mcr_write_or(wal, LSC_CHICKEN_BIT_0_UDW,
++ DISABLE_128B_EVICTION_COMMAND_UDW);
++ }
++
++ if (IS_DG2_GRAPHICS_STEP(i915, G10, STEP_B0, STEP_FOREVER) ||
++ IS_DG2_G11(i915) || IS_DG2_G12(i915) ||
++ IS_MTL_GRAPHICS_STEP(i915, M, STEP_A0, STEP_B0)) {
++ /* Wa_22012856258 */
++ wa_mcr_masked_en(wal, GEN8_ROW_CHICKEN2,
++ GEN12_DISABLE_READ_SUPPRESSION);
++ }
++
++ if (IS_DG2_GRAPHICS_STEP(i915, G11, STEP_A0, STEP_B0)) {
++ /* Wa_14013392000:dg2_g11 */
++ wa_mcr_masked_en(wal, GEN8_ROW_CHICKEN2, GEN12_ENABLE_LARGE_GRF_MODE);
+ }
+
+ if (IS_DG2_GRAPHICS_STEP(i915, G10, STEP_A0, STEP_B0) ||
+@@ -2330,14 +2434,6 @@ rcs_engine_wa_init(struct intel_engine_cs *engine, struct i915_wa_list *wal)
+
+ if (IS_DG2_GRAPHICS_STEP(i915, G10, STEP_B0, STEP_FOREVER) ||
+ IS_DG2_G11(i915) || IS_DG2_G12(i915)) {
+- /* Wa_22013037850:dg2 */
+- wa_mcr_write_or(wal, LSC_CHICKEN_BIT_0_UDW,
+- DISABLE_128B_EVICTION_COMMAND_UDW);
+-
+- /* Wa_22012856258:dg2 */
+- wa_mcr_masked_en(wal, GEN8_ROW_CHICKEN2,
+- GEN12_DISABLE_READ_SUPPRESSION);
+-
+ /*
+ * Wa_22010960976:dg2
+ * Wa_14013347512:dg2
+@@ -2386,18 +2482,6 @@ rcs_engine_wa_init(struct intel_engine_cs *engine, struct i915_wa_list *wal)
+ wa_mcr_masked_en(wal, GEN9_HALF_SLICE_CHICKEN7,
+ DG2_DISABLE_ROUND_ENABLE_ALLOW_FOR_SSLA);
+
+- if (IS_DG2_GRAPHICS_STEP(engine->i915, G10, STEP_A0, STEP_B0)) {
+- /* Wa_14010680813:dg2_g10 */
+- wa_write_or(wal, GEN12_GAMSTLB_CTRL, CONTROL_BLOCK_CLKGATE_DIS |
+- EGRESS_BLOCK_CLKGATE_DIS | TAG_BLOCK_CLKGATE_DIS);
+- }
+-
+- if (IS_DG2_GRAPHICS_STEP(engine->i915, G10, STEP_A0, STEP_B0) ||
+- IS_DG2_GRAPHICS_STEP(engine->i915, G11, STEP_A0, STEP_B0)) {
+- /* Wa_14012362059:dg2 */
+- wa_mcr_write_or(wal, XEHP_MERT_MOD_CTRL, FORCE_MISS_FTLB);
+- }
+-
+ if (IS_DG2_GRAPHICS_STEP(i915, G11, STEP_B0, STEP_FOREVER) ||
+ IS_DG2_G10(i915)) {
+ /* Wa_22014600077:dg2 */
+@@ -2901,8 +2985,9 @@ add_render_compute_tuning_settings(struct drm_i915_private *i915,
+ struct i915_wa_list *wal)
+ {
+ if (IS_PONTEVECCHIO(i915)) {
+- wa_write(wal, XEHPC_L3SCRUB,
+- SCRUB_CL_DWNGRADE_SHARED | SCRUB_RATE_4B_PER_CLK);
++ wa_mcr_write(wal, XEHPC_L3SCRUB,
++ SCRUB_CL_DWNGRADE_SHARED | SCRUB_RATE_4B_PER_CLK);
++ wa_mcr_masked_en(wal, XEHPC_LNCFMISCCFGREG0, XEHPC_HOSTCACHEEN);
+ }
+
+ if (IS_DG2(i915)) {
+@@ -2950,9 +3035,24 @@ general_render_compute_wa_init(struct intel_engine_cs *engine, struct i915_wa_li
+
+ add_render_compute_tuning_settings(i915, wal);
+
++ if (IS_MTL_GRAPHICS_STEP(i915, M, STEP_A0, STEP_B0) ||
++ IS_MTL_GRAPHICS_STEP(i915, P, STEP_A0, STEP_B0) ||
++ IS_PONTEVECCHIO(i915) ||
++ IS_DG2(i915)) {
++ /* Wa_22014226127 */
++ wa_mcr_write_or(wal, LSC_CHICKEN_BIT_0, DISABLE_D8_D16_COASLESCE);
++ }
++
++ if (IS_MTL_GRAPHICS_STEP(i915, M, STEP_A0, STEP_B0) ||
++ IS_MTL_GRAPHICS_STEP(i915, P, STEP_A0, STEP_B0) ||
++ IS_DG2(i915)) {
++ /* Wa_18017747507 */
++ wa_masked_en(wal, VFG_PREEMPTION_CHICKEN, POLYGON_TRIFAN_LINELOOP_DISABLE);
++ }
++
+ if (IS_PONTEVECCHIO(i915)) {
+ /* Wa_16016694945 */
+- wa_masked_en(wal, XEHPC_LNCFMISCCFGREG0, XEHPC_OVRLSCCC);
++ wa_mcr_masked_en(wal, XEHPC_LNCFMISCCFGREG0, XEHPC_OVRLSCCC);
+ }
+
+ if (IS_XEHPSDV(i915)) {
+@@ -2978,30 +3078,14 @@ general_render_compute_wa_init(struct intel_engine_cs *engine, struct i915_wa_li
+ wa_mcr_masked_dis(wal, MLTICTXCTL, TDONRENDER);
+ wa_mcr_write_or(wal, L3SQCREG1_CCS0, FLUSHALLNONCOH);
+ }
+-
+- /* Wa_14012362059:xehpsdv */
+- wa_mcr_write_or(wal, XEHP_MERT_MOD_CTRL, FORCE_MISS_FTLB);
+-
+- /* Wa_14014368820:xehpsdv */
+- wa_write_or(wal, GEN12_GAMCNTRL_CTRL, INVALIDATION_BROADCAST_MODE_DIS |
+- GLOBAL_INVALIDATION_MODE);
+ }
+
+ if (IS_DG2(i915) || IS_PONTEVECCHIO(i915)) {
+ /* Wa_14015227452:dg2,pvc */
+ wa_mcr_masked_en(wal, GEN9_ROW_CHICKEN4, XEHP_DIS_BBL_SYSPIPE);
+
+- /* Wa_22014226127:dg2,pvc */
+- wa_mcr_write_or(wal, LSC_CHICKEN_BIT_0, DISABLE_D8_D16_COASLESCE);
+-
+ /* Wa_16015675438:dg2,pvc */
+ wa_masked_en(wal, FF_SLICE_CS_CHICKEN2, GEN12_PERF_FIX_BALANCING_CFE_DISABLE);
+-
+- /* Wa_18018781329:dg2,pvc */
+- wa_mcr_write_or(wal, RENDER_MOD_CTRL, FORCE_MISS_FTLB);
+- wa_mcr_write_or(wal, COMP_MOD_CTRL, FORCE_MISS_FTLB);
+- wa_mcr_write_or(wal, VDBX_MOD_CTRL, FORCE_MISS_FTLB);
+- wa_mcr_write_or(wal, VEBX_MOD_CTRL, FORCE_MISS_FTLB);
+ }
+
+ if (IS_DG2(i915)) {
+@@ -3010,9 +3094,6 @@ general_render_compute_wa_init(struct intel_engine_cs *engine, struct i915_wa_li
+ * Wa_22015475538:dg2
+ */
+ wa_mcr_write_or(wal, LSC_CHICKEN_BIT_0_UDW, DIS_CHAIN_2XSIMD8);
+-
+- /* Wa_18017747507:dg2 */
+- wa_masked_en(wal, VFG_PREEMPTION_CHICKEN, POLYGON_TRIFAN_LINELOOP_DISABLE);
+ }
+ }
+
+diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.c b/drivers/gpu/drm/i915/gt/uc/intel_guc.c
+index 52aede324788e..ca940a00e84a3 100644
+--- a/drivers/gpu/drm/i915/gt/uc/intel_guc.c
++++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.c
+@@ -274,8 +274,9 @@ static u32 guc_ctl_wa_flags(struct intel_guc *guc)
+ if (IS_DG2_GRAPHICS_STEP(gt->i915, G10, STEP_A0, STEP_B0))
+ flags |= GUC_WA_GAM_CREDITS;
+
+- /* Wa_14014475959:dg2 */
+- if (IS_DG2(gt->i915))
++ /* Wa_14014475959 */
++ if (IS_MTL_GRAPHICS_STEP(gt->i915, M, STEP_A0, STEP_B0) ||
++ IS_DG2(gt->i915))
+ flags |= GUC_WA_HOLD_CCS_SWITCHOUT;
+
+ /*
+@@ -289,7 +290,9 @@ static u32 guc_ctl_wa_flags(struct intel_guc *guc)
+ flags |= GUC_WA_DUAL_QUEUE;
+
+ /* Wa_22011802037: graphics version 11/12 */
+- if (IS_GRAPHICS_VER(gt->i915, 11, 12))
++ if (IS_MTL_GRAPHICS_STEP(gt->i915, M, STEP_A0, STEP_B0) ||
++ (GRAPHICS_VER(gt->i915) >= 11 &&
++ GRAPHICS_VER_FULL(gt->i915) < IP_VER(12, 70)))
+ flags |= GUC_WA_PRE_PARSER;
+
+ /* Wa_16011777198:dg2 */
+diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_fw.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_fw.c
+index 5b86b2e286e07..42c5d9d2e2182 100644
+--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_fw.c
++++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_fw.c
+@@ -38,9 +38,8 @@ static void guc_prepare_xfer(struct intel_gt *gt)
+
+ if (GRAPHICS_VER(uncore->i915) == 9) {
+ /* DOP Clock Gating Enable for GuC clocks */
+- intel_gt_mcr_multicast_write(gt, GEN8_MISCCPCTL,
+- GEN8_DOP_CLOCK_GATE_GUC_ENABLE |
+- intel_gt_mcr_read_any(gt, GEN8_MISCCPCTL));
++ intel_uncore_rmw(uncore, GEN7_MISCCPCTL, 0,
++ GEN8_DOP_CLOCK_GATE_GUC_ENABLE);
+
+ /* allows for 5us (in 10ns units) before GT can go to RC6 */
+ intel_uncore_write(uncore, GUC_ARAT_C6DIS, 0x1FF);
+diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+index c10977cb06b97..ddf071865adc5 100644
+--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
++++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+@@ -1621,7 +1621,7 @@ static void guc_engine_reset_prepare(struct intel_engine_cs *engine)
+ intel_engine_stop_cs(engine);
+
+ /*
+- * Wa_22011802037:gen11/gen12: In addition to stopping the cs, we need
++ * Wa_22011802037: In addition to stopping the cs, we need
+ * to wait for any pending mi force wakeups
+ */
+ intel_engine_wait_for_pending_mi_fw(engine);
+@@ -4203,8 +4203,10 @@ static void guc_default_vfuncs(struct intel_engine_cs *engine)
+ engine->flags |= I915_ENGINE_HAS_TIMESLICES;
+
+ /* Wa_14014475959:dg2 */
+- if (IS_DG2(engine->i915) && engine->class == COMPUTE_CLASS)
+- engine->flags |= I915_ENGINE_USES_WA_HOLD_CCS_SWITCHOUT;
++ if (engine->class == COMPUTE_CLASS)
++ if (IS_MTL_GRAPHICS_STEP(engine->i915, M, STEP_A0, STEP_B0) ||
++ IS_DG2(engine->i915))
++ engine->flags |= I915_ENGINE_USES_WA_HOLD_CCS_SWITCHOUT;
+
+ /*
+ * TODO: GuC supports timeslicing and semaphores as well, but they're
+diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
+index a380db36d52c4..03c3a59d0939b 100644
+--- a/drivers/gpu/drm/i915/i915_drv.h
++++ b/drivers/gpu/drm/i915/i915_drv.h
+@@ -726,6 +726,10 @@ IS_SUBPLATFORM(const struct drm_i915_private *i915,
+ (IS_SUBPLATFORM(__i915, INTEL_METEORLAKE, INTEL_SUBPLATFORM_##variant) && \
+ IS_GRAPHICS_STEP(__i915, since, until))
+
++#define IS_MTL_MEDIA_STEP(__i915, since, until) \
++ (IS_METEORLAKE(__i915) && \
++ IS_MEDIA_STEP(__i915, since, until))
++
+ /*
+ * DG2 hardware steppings are a bit unusual. The hardware design was forked to
+ * create three variants (G10, G11, and G12) which each have distinct
+diff --git a/drivers/gpu/drm/i915/intel_device_info.c b/drivers/gpu/drm/i915/intel_device_info.c
+index 849baf6c3b3c6..05e90d09b2081 100644
+--- a/drivers/gpu/drm/i915/intel_device_info.c
++++ b/drivers/gpu/drm/i915/intel_device_info.c
+@@ -343,6 +343,12 @@ static void intel_ipver_early_init(struct drm_i915_private *i915)
+
+ ip_ver_read(i915, i915_mmio_reg_offset(GMD_ID_GRAPHICS),
+ &runtime->graphics.ip);
++ /* Wa_22012778468 */
++ if (runtime->graphics.ip.ver == 0x0 &&
++ INTEL_INFO(i915)->platform == INTEL_METEORLAKE) {
++ RUNTIME_INFO(i915)->graphics.ip.ver = 12;
++ RUNTIME_INFO(i915)->graphics.ip.rel = 70;
++ }
+ ip_ver_read(i915, i915_mmio_reg_offset(GMD_ID_DISPLAY),
+ &runtime->display.ip);
+ ip_ver_read(i915, i915_mmio_reg_offset(GMD_ID_MEDIA),
+diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
+index 73c88b1c9545c..ac61df46d02c5 100644
+--- a/drivers/gpu/drm/i915/intel_pm.c
++++ b/drivers/gpu/drm/i915/intel_pm.c
+@@ -4299,8 +4299,8 @@ static void gen8_set_l3sqc_credits(struct drm_i915_private *dev_priv,
+ u32 val;
+
+ /* WaTempDisableDOPClkGating:bdw */
+- misccpctl = intel_gt_mcr_multicast_rmw(to_gt(dev_priv), GEN8_MISCCPCTL,
+- GEN8_DOP_CLOCK_GATE_ENABLE, 0);
++ misccpctl = intel_uncore_rmw(&dev_priv->uncore, GEN7_MISCCPCTL,
++ GEN7_DOP_CLOCK_GATE_ENABLE, 0);
+
+ val = intel_gt_mcr_read_any(to_gt(dev_priv), GEN8_L3SQCREG1);
+ val &= ~L3_PRIO_CREDITS_MASK;
+@@ -4314,7 +4314,7 @@ static void gen8_set_l3sqc_credits(struct drm_i915_private *dev_priv,
+ */
+ intel_gt_mcr_read_any(to_gt(dev_priv), GEN8_L3SQCREG1);
+ udelay(1);
+- intel_gt_mcr_multicast_write(to_gt(dev_priv), GEN8_MISCCPCTL, misccpctl);
++ intel_uncore_write(&dev_priv->uncore, GEN7_MISCCPCTL, misccpctl);
+ }
+
+ static void icl_init_clock_gating(struct drm_i915_private *dev_priv)
+@@ -4465,8 +4465,8 @@ static void skl_init_clock_gating(struct drm_i915_private *dev_priv)
+ gen9_init_clock_gating(dev_priv);
+
+ /* WaDisableDopClockGating:skl */
+- intel_gt_mcr_multicast_rmw(to_gt(dev_priv), GEN8_MISCCPCTL,
+- GEN8_DOP_CLOCK_GATE_ENABLE, 0);
++ intel_uncore_rmw(&dev_priv->uncore, GEN7_MISCCPCTL,
++ GEN7_DOP_CLOCK_GATE_ENABLE, 0);
+
+ /* WAC6entrylatency:skl */
+ intel_uncore_rmw(&dev_priv->uncore, FBC_LLC_READ_CTRL, 0, FBC_LLC_FULLY_OPEN);
+diff --git a/drivers/gpu/drm/mediatek/mtk_drm_crtc.c b/drivers/gpu/drm/mediatek/mtk_drm_crtc.c
+index 112615817dcbe..5071f1263216b 100644
+--- a/drivers/gpu/drm/mediatek/mtk_drm_crtc.c
++++ b/drivers/gpu/drm/mediatek/mtk_drm_crtc.c
+@@ -945,6 +945,8 @@ int mtk_drm_crtc_create(struct drm_device *drm_dev,
+
+ mtk_crtc->planes = devm_kcalloc(dev, num_comp_planes,
+ sizeof(struct drm_plane), GFP_KERNEL);
++ if (!mtk_crtc->planes)
++ return -ENOMEM;
+
+ for (i = 0; i < mtk_crtc->ddp_comp_nr; i++) {
+ ret = mtk_drm_crtc_init_comp_planes(drm_dev, mtk_crtc, i,
+diff --git a/drivers/gpu/drm/mediatek/mtk_drm_drv.c b/drivers/gpu/drm/mediatek/mtk_drm_drv.c
+index cd5b18ef79512..d3e57dd79f5f5 100644
+--- a/drivers/gpu/drm/mediatek/mtk_drm_drv.c
++++ b/drivers/gpu/drm/mediatek/mtk_drm_drv.c
+@@ -520,6 +520,7 @@ static int mtk_drm_bind(struct device *dev)
+ err_deinit:
+ mtk_drm_kms_deinit(drm);
+ err_free:
++ private->drm = NULL;
+ drm_dev_put(drm);
+ return ret;
+ }
+diff --git a/drivers/gpu/drm/mediatek/mtk_drm_gem.c b/drivers/gpu/drm/mediatek/mtk_drm_gem.c
+index 47e96b0289f98..6c204ccfb9ece 100644
+--- a/drivers/gpu/drm/mediatek/mtk_drm_gem.c
++++ b/drivers/gpu/drm/mediatek/mtk_drm_gem.c
+@@ -164,8 +164,6 @@ static int mtk_drm_gem_object_mmap(struct drm_gem_object *obj,
+
+ ret = dma_mmap_attrs(priv->dma_dev, vma, mtk_gem->cookie,
+ mtk_gem->dma_addr, obj->size, mtk_gem->dma_attrs);
+- if (ret)
+- drm_gem_vm_close(vma);
+
+ return ret;
+ }
+@@ -262,6 +260,6 @@ void mtk_drm_gem_prime_vunmap(struct drm_gem_object *obj,
+ return;
+
+ vunmap(vaddr);
+- mtk_gem->kvaddr = 0;
++ mtk_gem->kvaddr = NULL;
+ kfree(mtk_gem->pages);
+ }
+diff --git a/drivers/gpu/drm/mediatek/mtk_dsi.c b/drivers/gpu/drm/mediatek/mtk_dsi.c
+index 3b7d13028fb6b..9e1363c9fcdb4 100644
+--- a/drivers/gpu/drm/mediatek/mtk_dsi.c
++++ b/drivers/gpu/drm/mediatek/mtk_dsi.c
+@@ -721,7 +721,7 @@ static void mtk_dsi_lane_ready(struct mtk_dsi *dsi)
+ mtk_dsi_clk_ulp_mode_leave(dsi);
+ mtk_dsi_lane0_ulp_mode_leave(dsi);
+ mtk_dsi_clk_hs_mode(dsi, 0);
+- msleep(20);
++ usleep_range(1000, 3000);
+ /* The reaction time after pulling up the mipi signal for dsi_rx */
+ }
+ }
+diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+index 3605f095b2de2..8175997663299 100644
+--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+@@ -1083,13 +1083,13 @@ int adreno_gpu_init(struct drm_device *drm, struct platform_device *pdev,
+ void adreno_gpu_cleanup(struct adreno_gpu *adreno_gpu)
+ {
+ struct msm_gpu *gpu = &adreno_gpu->base;
+- struct msm_drm_private *priv = gpu->dev->dev_private;
++ struct msm_drm_private *priv = gpu->dev ? gpu->dev->dev_private : NULL;
+ unsigned int i;
+
+ for (i = 0; i < ARRAY_SIZE(adreno_gpu->info->fw); i++)
+ release_firmware(adreno_gpu->fw[i]);
+
+- if (pm_runtime_enabled(&priv->gpu_pdev->dev))
++ if (priv && pm_runtime_enabled(&priv->gpu_pdev->dev))
+ pm_runtime_disable(&priv->gpu_pdev->dev);
+
+ msm_gpu_cleanup(&adreno_gpu->base);
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
+index 13ce321283ff9..c9d1c412628e9 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
+@@ -968,7 +968,10 @@ static void dpu_crtc_reset(struct drm_crtc *crtc)
+ if (crtc->state)
+ dpu_crtc_destroy_state(crtc, crtc->state);
+
+- __drm_atomic_helper_crtc_reset(crtc, &cstate->base);
++ if (cstate)
++ __drm_atomic_helper_crtc_reset(crtc, &cstate->base);
++ else
++ __drm_atomic_helper_crtc_reset(crtc, NULL);
+ }
+
+ /**
+@@ -1150,6 +1153,8 @@ static int dpu_crtc_atomic_check(struct drm_crtc *crtc,
+ bool needs_dirtyfb = dpu_crtc_needs_dirtyfb(crtc_state);
+
+ pstates = kzalloc(sizeof(*pstates) * DPU_STAGE_MAX * 4, GFP_KERNEL);
++ if (!pstates)
++ return -ENOMEM;
+
+ if (!crtc_state->enable || !crtc_state->active) {
+ DRM_DEBUG_ATOMIC("crtc%d -> enable %d, active %d, skip atomic_check\n",
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
+index 2196e205efa5e..83f1dd2c22bd7 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
+@@ -459,6 +459,8 @@ static const struct dpu_mdp_cfg sc7180_mdp[] = {
+ .reg_off = 0x2B4, .bit_off = 8},
+ .clk_ctrls[DPU_CLK_CTRL_CURSOR1] = {
+ .reg_off = 0x2C4, .bit_off = 8},
++ .clk_ctrls[DPU_CLK_CTRL_WB2] = {
++ .reg_off = 0x3B8, .bit_off = 24},
+ },
+ };
+
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
+index b71199511a52d..09757166a064a 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
+@@ -930,6 +930,11 @@ static void dpu_kms_mdp_snapshot(struct msm_disp_state *disp_state, struct msm_k
+ msm_disp_snapshot_add_block(disp_state, cat->mdp[0].len,
+ dpu_kms->mmio + cat->mdp[0].base, "top");
+
++ /* dump DSC sub-blocks HW regs info */
++ for (i = 0; i < cat->dsc_count; i++)
++ msm_disp_snapshot_add_block(disp_state, cat->dsc[i].len,
++ dpu_kms->mmio + cat->dsc[i].base, "dsc_%d", i);
++
+ pm_runtime_put_sync(&dpu_kms->pdev->dev);
+ }
+
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
+index 86719020afe20..bfd5be89e8b8d 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
+@@ -1126,7 +1126,7 @@ static void dpu_plane_sspp_atomic_update(struct drm_plane *plane)
+ struct dpu_plane_state *pstate = to_dpu_plane_state(state);
+ struct drm_crtc *crtc = state->crtc;
+ struct drm_framebuffer *fb = state->fb;
+- bool is_rt_pipe, update_qos_remap;
++ bool is_rt_pipe;
+ const struct dpu_format *fmt =
+ to_dpu_format(msm_framebuffer_format(fb));
+ struct dpu_hw_pipe_cfg pipe_cfg;
+@@ -1138,6 +1138,9 @@ static void dpu_plane_sspp_atomic_update(struct drm_plane *plane)
+ pstate->pending = true;
+
+ is_rt_pipe = (dpu_crtc_get_client_type(crtc) != NRT_CLIENT);
++ pstate->needs_qos_remap |= (is_rt_pipe != pdpu->is_rt_pipe);
++ pdpu->is_rt_pipe = is_rt_pipe;
++
+ _dpu_plane_set_qos_ctrl(plane, false, DPU_PLANE_QOS_PANIC_CTRL);
+
+ DPU_DEBUG_PLANE(pdpu, "FB[%u] " DRM_RECT_FP_FMT "->crtc%u " DRM_RECT_FMT
+@@ -1219,14 +1222,8 @@ static void dpu_plane_sspp_atomic_update(struct drm_plane *plane)
+ _dpu_plane_set_ot_limit(plane, crtc, &pipe_cfg);
+ }
+
+- update_qos_remap = (is_rt_pipe != pdpu->is_rt_pipe) ||
+- pstate->needs_qos_remap;
+-
+- if (update_qos_remap) {
+- if (is_rt_pipe != pdpu->is_rt_pipe)
+- pdpu->is_rt_pipe = is_rt_pipe;
+- else if (pstate->needs_qos_remap)
+- pstate->needs_qos_remap = false;
++ if (pstate->needs_qos_remap) {
++ pstate->needs_qos_remap = false;
+ _dpu_plane_set_qos_remap(plane);
+ }
+
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c
+index 73b3442e74679..7ada957adbbb8 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c
+@@ -660,6 +660,11 @@ int dpu_rm_get_assigned_resources(struct dpu_rm *rm,
+ blks_size, enc_id);
+ break;
+ }
++ if (!hw_blks[i]) {
++ DPU_ERROR("Allocated resource %d unavailable to assign to enc %d\n",
++ type, enc_id);
++ break;
++ }
+ blks[num_blks++] = hw_blks[i];
+ }
+
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_writeback.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_writeback.c
+index 088ec990a2f26..2a5a68366582b 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_writeback.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_writeback.c
+@@ -70,6 +70,8 @@ int dpu_writeback_init(struct drm_device *dev, struct drm_encoder *enc,
+ int rc = 0;
+
+ dpu_wb_conn = devm_kzalloc(dev->dev, sizeof(*dpu_wb_conn), GFP_KERNEL);
++ if (!dpu_wb_conn)
++ return -ENOMEM;
+
+ drm_connector_helper_add(&dpu_wb_conn->base.base, &dpu_wb_conn_helper_funcs);
+
+diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c
+index e86421c69bd1f..86036dd4e1e82 100644
+--- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c
++++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c
+@@ -1139,7 +1139,10 @@ static void mdp5_crtc_reset(struct drm_crtc *crtc)
+ if (crtc->state)
+ mdp5_crtc_destroy_state(crtc, crtc->state);
+
+- __drm_atomic_helper_crtc_reset(crtc, &mdp5_cstate->base);
++ if (mdp5_cstate)
++ __drm_atomic_helper_crtc_reset(crtc, &mdp5_cstate->base);
++ else
++ __drm_atomic_helper_crtc_reset(crtc, NULL);
+ }
+
+ static const struct drm_crtc_funcs mdp5_crtc_no_lm_cursor_funcs = {
+diff --git a/drivers/gpu/drm/msm/dsi/dsi_cfg.c b/drivers/gpu/drm/msm/dsi/dsi_cfg.c
+index 7e97c239ed489..e0bd452a9f1e6 100644
+--- a/drivers/gpu/drm/msm/dsi/dsi_cfg.c
++++ b/drivers/gpu/drm/msm/dsi/dsi_cfg.c
+@@ -209,8 +209,8 @@ static const struct msm_dsi_config sc7280_dsi_cfg = {
+ .num_regulators = ARRAY_SIZE(sc7280_dsi_regulators),
+ .bus_clk_names = dsi_sc7280_bus_clk_names,
+ .num_bus_clks = ARRAY_SIZE(dsi_sc7280_bus_clk_names),
+- .io_start = { 0xae94000 },
+- .num_dsi = 1,
++ .io_start = { 0xae94000, 0xae96000 },
++ .num_dsi = 2,
+ };
+
+ static const char * const dsi_qcm2290_bus_clk_names[] = {
+diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c b/drivers/gpu/drm/msm/dsi/dsi_host.c
+index 89aadd3b3202b..f167a45f1fbdd 100644
+--- a/drivers/gpu/drm/msm/dsi/dsi_host.c
++++ b/drivers/gpu/drm/msm/dsi/dsi_host.c
+@@ -1977,6 +1977,9 @@ int msm_dsi_host_init(struct msm_dsi *msm_dsi)
+
+ /* setup workqueue */
+ msm_host->workqueue = alloc_ordered_workqueue("dsi_drm_work", 0);
++ if (!msm_host->workqueue)
++ return -ENOMEM;
++
+ INIT_WORK(&msm_host->err_work, dsi_err_worker);
+
+ msm_dsi->id = msm_host->id;
+diff --git a/drivers/gpu/drm/msm/hdmi/hdmi.c b/drivers/gpu/drm/msm/hdmi/hdmi.c
+index 97372bb241d89..4ad36bc8fe5ed 100644
+--- a/drivers/gpu/drm/msm/hdmi/hdmi.c
++++ b/drivers/gpu/drm/msm/hdmi/hdmi.c
+@@ -120,6 +120,10 @@ static int msm_hdmi_init(struct hdmi *hdmi)
+ int ret;
+
+ hdmi->workq = alloc_ordered_workqueue("msm_hdmi", 0);
++ if (!hdmi->workq) {
++ ret = -ENOMEM;
++ goto fail;
++ }
+
+ hdmi->i2c = msm_hdmi_i2c_init(hdmi);
+ if (IS_ERR(hdmi->i2c)) {
+diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
+index 45e81eb148a8d..ee2f60b6f09b3 100644
+--- a/drivers/gpu/drm/msm/msm_drv.c
++++ b/drivers/gpu/drm/msm/msm_drv.c
+@@ -491,7 +491,7 @@ static int msm_drm_init(struct device *dev, const struct drm_driver *drv)
+ if (IS_ERR(priv->event_thread[i].worker)) {
+ ret = PTR_ERR(priv->event_thread[i].worker);
+ DRM_DEV_ERROR(dev, "failed to create crtc_event kthread\n");
+- ret = PTR_ERR(priv->event_thread[i].worker);
++ priv->event_thread[i].worker = NULL;
+ goto err_msm_uninit;
+ }
+
+diff --git a/drivers/gpu/drm/msm/msm_fence.c b/drivers/gpu/drm/msm/msm_fence.c
+index a47e5837c528f..56641408ea742 100644
+--- a/drivers/gpu/drm/msm/msm_fence.c
++++ b/drivers/gpu/drm/msm/msm_fence.c
+@@ -22,7 +22,7 @@ msm_fence_context_alloc(struct drm_device *dev, volatile uint32_t *fenceptr,
+ return ERR_PTR(-ENOMEM);
+
+ fctx->dev = dev;
+- strncpy(fctx->name, name, sizeof(fctx->name));
++ strscpy(fctx->name, name, sizeof(fctx->name));
+ fctx->context = dma_fence_context_alloc(1);
+ fctx->index = index++;
+ fctx->fenceptr = fenceptr;
+diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
+index 73a2ca122c570..1c4be193fd23f 100644
+--- a/drivers/gpu/drm/msm/msm_gem_submit.c
++++ b/drivers/gpu/drm/msm/msm_gem_submit.c
+@@ -209,6 +209,10 @@ static int submit_lookup_cmds(struct msm_gem_submit *submit,
+ goto out;
+ }
+ submit->cmd[i].relocs = kmalloc(sz, GFP_KERNEL);
++ if (!submit->cmd[i].relocs) {
++ ret = -ENOMEM;
++ goto out;
++ }
+ ret = copy_from_user(submit->cmd[i].relocs, userptr, sz);
+ if (ret) {
+ ret = -EFAULT;
+diff --git a/drivers/gpu/drm/mxsfb/Kconfig b/drivers/gpu/drm/mxsfb/Kconfig
+index 116f8168bda4a..518b533453548 100644
+--- a/drivers/gpu/drm/mxsfb/Kconfig
++++ b/drivers/gpu/drm/mxsfb/Kconfig
+@@ -8,6 +8,7 @@ config DRM_MXSFB
+ tristate "i.MX (e)LCDIF LCD controller"
+ depends on DRM && OF
+ depends on COMMON_CLK
++ depends on ARCH_MXS || ARCH_MXC || COMPILE_TEST
+ select DRM_MXS
+ select DRM_KMS_HELPER
+ select DRM_GEM_DMA_HELPER
+@@ -24,6 +25,7 @@ config DRM_IMX_LCDIF
+ tristate "i.MX LCDIFv3 LCD controller"
+ depends on DRM && OF
+ depends on COMMON_CLK
++ depends on ARCH_MXC || COMPILE_TEST
+ select DRM_MXS
+ select DRM_KMS_HELPER
+ select DRM_GEM_DMA_HELPER
+diff --git a/drivers/gpu/drm/nouveau/include/nvif/outp.h b/drivers/gpu/drm/nouveau/include/nvif/outp.h
+index 45daadec3c0c7..fa76a7b5e4b37 100644
+--- a/drivers/gpu/drm/nouveau/include/nvif/outp.h
++++ b/drivers/gpu/drm/nouveau/include/nvif/outp.h
+@@ -3,6 +3,7 @@
+ #define __NVIF_OUTP_H__
+ #include <nvif/object.h>
+ #include <nvif/if0012.h>
++#include <drm/display/drm_dp.h>
+ struct nvif_disp;
+
+ struct nvif_outp {
+@@ -21,7 +22,7 @@ int nvif_outp_acquire_rgb_crt(struct nvif_outp *);
+ int nvif_outp_acquire_tmds(struct nvif_outp *, int head,
+ bool hdmi, u8 max_ac_packet, u8 rekey, u8 scdc, bool hda);
+ int nvif_outp_acquire_lvds(struct nvif_outp *, bool dual, bool bpc8);
+-int nvif_outp_acquire_dp(struct nvif_outp *, u8 dpcd[16],
++int nvif_outp_acquire_dp(struct nvif_outp *outp, u8 dpcd[DP_RECEIVER_CAP_SIZE],
+ int link_nr, int link_bw, bool hda, bool mst);
+ void nvif_outp_release(struct nvif_outp *);
+ int nvif_outp_infoframe(struct nvif_outp *, u8 type, struct nvif_outp_infoframe_v0 *, u32 size);
+diff --git a/drivers/gpu/drm/nouveau/nvif/outp.c b/drivers/gpu/drm/nouveau/nvif/outp.c
+index 7da39f1eae9fb..c24bc5eae3ecf 100644
+--- a/drivers/gpu/drm/nouveau/nvif/outp.c
++++ b/drivers/gpu/drm/nouveau/nvif/outp.c
+@@ -127,7 +127,7 @@ nvif_outp_acquire(struct nvif_outp *outp, u8 proto, struct nvif_outp_acquire_v0
+ }
+
+ int
+-nvif_outp_acquire_dp(struct nvif_outp *outp, u8 dpcd[16],
++nvif_outp_acquire_dp(struct nvif_outp *outp, u8 dpcd[DP_RECEIVER_CAP_SIZE],
+ int link_nr, int link_bw, bool hda, bool mst)
+ {
+ struct nvif_outp_acquire_v0 args;
+diff --git a/drivers/gpu/drm/omapdrm/dss/dsi.c b/drivers/gpu/drm/omapdrm/dss/dsi.c
+index a6845856cbce4..4c1084eb01759 100644
+--- a/drivers/gpu/drm/omapdrm/dss/dsi.c
++++ b/drivers/gpu/drm/omapdrm/dss/dsi.c
+@@ -1039,22 +1039,26 @@ static int dsi_dump_dsi_irqs(struct seq_file *s, void *p)
+ {
+ struct dsi_data *dsi = s->private;
+ unsigned long flags;
+- struct dsi_irq_stats stats;
++ struct dsi_irq_stats *stats;
++
++ stats = kmalloc(sizeof(*stats), GFP_KERNEL);
++ if (!stats)
++ return -ENOMEM;
+
+ spin_lock_irqsave(&dsi->irq_stats_lock, flags);
+
+- stats = dsi->irq_stats;
++ *stats = dsi->irq_stats;
+ memset(&dsi->irq_stats, 0, sizeof(dsi->irq_stats));
+ dsi->irq_stats.last_reset = jiffies;
+
+ spin_unlock_irqrestore(&dsi->irq_stats_lock, flags);
+
+ seq_printf(s, "period %u ms\n",
+- jiffies_to_msecs(jiffies - stats.last_reset));
++ jiffies_to_msecs(jiffies - stats->last_reset));
+
+- seq_printf(s, "irqs %d\n", stats.irq_count);
++ seq_printf(s, "irqs %d\n", stats->irq_count);
+ #define PIS(x) \
+- seq_printf(s, "%-20s %10d\n", #x, stats.dsi_irqs[ffs(DSI_IRQ_##x)-1]);
++ seq_printf(s, "%-20s %10d\n", #x, stats->dsi_irqs[ffs(DSI_IRQ_##x)-1]);
+
+ seq_printf(s, "-- DSI%d interrupts --\n", dsi->module_id + 1);
+ PIS(VC0);
+@@ -1078,10 +1082,10 @@ static int dsi_dump_dsi_irqs(struct seq_file *s, void *p)
+
+ #define PIS(x) \
+ seq_printf(s, "%-20s %10d %10d %10d %10d\n", #x, \
+- stats.vc_irqs[0][ffs(DSI_VC_IRQ_##x)-1], \
+- stats.vc_irqs[1][ffs(DSI_VC_IRQ_##x)-1], \
+- stats.vc_irqs[2][ffs(DSI_VC_IRQ_##x)-1], \
+- stats.vc_irqs[3][ffs(DSI_VC_IRQ_##x)-1]);
++ stats->vc_irqs[0][ffs(DSI_VC_IRQ_##x)-1], \
++ stats->vc_irqs[1][ffs(DSI_VC_IRQ_##x)-1], \
++ stats->vc_irqs[2][ffs(DSI_VC_IRQ_##x)-1], \
++ stats->vc_irqs[3][ffs(DSI_VC_IRQ_##x)-1]);
+
+ seq_printf(s, "-- VC interrupts --\n");
+ PIS(CS);
+@@ -1097,7 +1101,7 @@ static int dsi_dump_dsi_irqs(struct seq_file *s, void *p)
+
+ #define PIS(x) \
+ seq_printf(s, "%-20s %10d\n", #x, \
+- stats.cio_irqs[ffs(DSI_CIO_IRQ_##x)-1]);
++ stats->cio_irqs[ffs(DSI_CIO_IRQ_##x)-1]);
+
+ seq_printf(s, "-- CIO interrupts --\n");
+ PIS(ERRSYNCESC1);
+@@ -1122,6 +1126,8 @@ static int dsi_dump_dsi_irqs(struct seq_file *s, void *p)
+ PIS(ULPSACTIVENOT_ALL1);
+ #undef PIS
+
++ kfree(stats);
++
+ return 0;
+ }
+ #endif
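
Note on the dsi.c hunk above: this is a kernel-stack-usage fix. struct dsi_irq_stats carries several counter arrays, large enough to be risky as an on-stack local in a debugfs handler, so the snapshot is now taken into a kmalloc'ed copy under the lock and freed at the end. A minimal userspace sketch of the same stack-to-heap snapshot pattern (struct layout, sizes, and names here are illustrative, not the kernel's):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct irq_stats {
	unsigned int irq_count;
	unsigned int per_irq[256];	/* big enough to matter on a small stack */
};

static int dump_stats(const struct irq_stats *live)
{
	/* Heap copy instead of a large on-stack local. */
	struct irq_stats *snap = malloc(sizeof(*snap));

	if (!snap)
		return -1;	/* the driver returns -ENOMEM here */

	/* The driver takes this copy under irq_stats_lock. */
	memcpy(snap, live, sizeof(*snap));
	printf("irqs %u\n", snap->irq_count);
	free(snap);
	return 0;
}

int main(void)
{
	struct irq_stats s = { .irq_count = 42 };

	return dump_stats(&s);
}

The copy also keeps the lock hold time short: only the memcpy happens under the lock, while the (potentially slow) printing works on the private snapshot.
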
+diff --git a/drivers/gpu/drm/panel/panel-edp.c b/drivers/gpu/drm/panel/panel-edp.c
+index 5cb8dc2ebe184..ef70928c3ccbc 100644
+--- a/drivers/gpu/drm/panel/panel-edp.c
++++ b/drivers/gpu/drm/panel/panel-edp.c
+@@ -1891,7 +1891,7 @@ static const struct edp_panel_entry edp_panels[] = {
+ EDP_PANEL_ENTRY('C', 'M', 'N', 0x1247, &delay_200_500_e80_d50, "N120ACA-EA1"),
+
+ EDP_PANEL_ENTRY('I', 'V', 'O', 0x057d, &delay_200_500_e200, "R140NWF5 RH"),
+- EDP_PANEL_ENTRY('I', 'V', 'O', 0x854b, &delay_200_500_p2e100, "M133NW4J-R3"),
++ EDP_PANEL_ENTRY('I', 'V', 'O', 0x854b, &delay_200_500_p2e100, "R133NW4K-R0"),
+
+ EDP_PANEL_ENTRY('K', 'D', 'B', 0x0624, &kingdisplay_kd116n21_30nv_a010.delay, "116N21-30NV-A010"),
+ EDP_PANEL_ENTRY('K', 'D', 'B', 0x1120, &delay_200_500_e80_d50, "116N29-30NK-C007"),
+diff --git a/drivers/gpu/drm/panel/panel-samsung-s6e3ha2.c b/drivers/gpu/drm/panel/panel-samsung-s6e3ha2.c
+index 5c621b15e84c2..439ef30735128 100644
+--- a/drivers/gpu/drm/panel/panel-samsung-s6e3ha2.c
++++ b/drivers/gpu/drm/panel/panel-samsung-s6e3ha2.c
+@@ -692,7 +692,9 @@ static int s6e3ha2_probe(struct mipi_dsi_device *dsi)
+
+ dsi->lanes = 4;
+ dsi->format = MIPI_DSI_FMT_RGB888;
+- dsi->mode_flags = MIPI_DSI_CLOCK_NON_CONTINUOUS;
++ dsi->mode_flags = MIPI_DSI_CLOCK_NON_CONTINUOUS |
++ MIPI_DSI_MODE_VIDEO_NO_HFP | MIPI_DSI_MODE_VIDEO_NO_HBP |
++ MIPI_DSI_MODE_VIDEO_NO_HSA | MIPI_DSI_MODE_NO_EOT_PACKET;
+
+ ctx->supplies[0].supply = "vdd3";
+ ctx->supplies[1].supply = "vci";
+diff --git a/drivers/gpu/drm/panel/panel-samsung-s6e63j0x03.c b/drivers/gpu/drm/panel/panel-samsung-s6e63j0x03.c
+index e06fd35de814b..9c3e76171759a 100644
+--- a/drivers/gpu/drm/panel/panel-samsung-s6e63j0x03.c
++++ b/drivers/gpu/drm/panel/panel-samsung-s6e63j0x03.c
+@@ -446,7 +446,8 @@ static int s6e63j0x03_probe(struct mipi_dsi_device *dsi)
+
+ dsi->lanes = 1;
+ dsi->format = MIPI_DSI_FMT_RGB888;
+- dsi->mode_flags = MIPI_DSI_MODE_NO_EOT_PACKET;
++ dsi->mode_flags = MIPI_DSI_MODE_VIDEO_NO_HFP |
++ MIPI_DSI_MODE_VIDEO_NO_HBP | MIPI_DSI_MODE_VIDEO_NO_HSA;
+
+ ctx->supplies[0].supply = "vdd3";
+ ctx->supplies[1].supply = "vci";
+diff --git a/drivers/gpu/drm/panel/panel-samsung-s6e8aa0.c b/drivers/gpu/drm/panel/panel-samsung-s6e8aa0.c
+index 54213beafaf5e..ebf4c2d39ea88 100644
+--- a/drivers/gpu/drm/panel/panel-samsung-s6e8aa0.c
++++ b/drivers/gpu/drm/panel/panel-samsung-s6e8aa0.c
+@@ -990,8 +990,6 @@ static int s6e8aa0_probe(struct mipi_dsi_device *dsi)
+ dsi->lanes = 4;
+ dsi->format = MIPI_DSI_FMT_RGB888;
+ dsi->mode_flags = MIPI_DSI_MODE_VIDEO | MIPI_DSI_MODE_VIDEO_BURST
+- | MIPI_DSI_MODE_VIDEO_NO_HFP | MIPI_DSI_MODE_VIDEO_NO_HBP
+- | MIPI_DSI_MODE_VIDEO_NO_HSA | MIPI_DSI_MODE_NO_EOT_PACKET
+ | MIPI_DSI_MODE_VSYNC_FLUSH | MIPI_DSI_MODE_VIDEO_AUTO_VERT;
+
+ ret = s6e8aa0_parse_dt(ctx);
+diff --git a/drivers/gpu/drm/radeon/atombios_encoders.c b/drivers/gpu/drm/radeon/atombios_encoders.c
+index c841c273222e7..3e24fa11d4d38 100644
+--- a/drivers/gpu/drm/radeon/atombios_encoders.c
++++ b/drivers/gpu/drm/radeon/atombios_encoders.c
+@@ -2122,11 +2122,12 @@ int radeon_atom_pick_dig_encoder(struct drm_encoder *encoder, int fe_idx)
+
+ /*
+ * On DCE32 any encoder can drive any block so usually just use crtc id,
+- * but Apple thinks different at least on iMac10,1, so there use linkb,
++ * but Apple thinks differently, at least on iMac10,1 and iMac11,2, so use linkb there,
+ * otherwise the internal eDP panel will stay dark.
+ */
+ if (ASIC_IS_DCE32(rdev)) {
+- if (dmi_match(DMI_PRODUCT_NAME, "iMac10,1"))
++ if (dmi_match(DMI_PRODUCT_NAME, "iMac10,1") ||
++ dmi_match(DMI_PRODUCT_NAME, "iMac11,2"))
+ enc_idx = (dig->linkb) ? 1 : 0;
+ else
+ enc_idx = radeon_crtc->crtc_id;
+diff --git a/drivers/gpu/drm/radeon/radeon_device.c b/drivers/gpu/drm/radeon/radeon_device.c
+index 6344454a77217..4f9729b4a8119 100644
+--- a/drivers/gpu/drm/radeon/radeon_device.c
++++ b/drivers/gpu/drm/radeon/radeon_device.c
+@@ -1023,6 +1023,7 @@ void radeon_atombios_fini(struct radeon_device *rdev)
+ {
+ if (rdev->mode_info.atom_context) {
+ kfree(rdev->mode_info.atom_context->scratch);
++ kfree(rdev->mode_info.atom_context->iio);
+ }
+ kfree(rdev->mode_info.atom_context);
+ rdev->mode_info.atom_context = NULL;
+diff --git a/drivers/gpu/drm/rcar-du/rcar_du_crtc.c b/drivers/gpu/drm/rcar-du/rcar_du_crtc.c
+index 3619e1ddeb620..b7dd59fe119e6 100644
+--- a/drivers/gpu/drm/rcar-du/rcar_du_crtc.c
++++ b/drivers/gpu/drm/rcar-du/rcar_du_crtc.c
+@@ -10,7 +10,6 @@
+ #include <linux/clk.h>
+ #include <linux/mutex.h>
+ #include <linux/platform_device.h>
+-#include <linux/sys_soc.h>
+
+ #include <drm/drm_atomic.h>
+ #include <drm/drm_atomic_helper.h>
+@@ -204,11 +203,6 @@ static void rcar_du_escr_divider(struct clk *clk, unsigned long target,
+ }
+ }
+
+-static const struct soc_device_attribute rcar_du_r8a7795_es1[] = {
+- { .soc_id = "r8a7795", .revision = "ES1.*" },
+- { /* sentinel */ }
+-};
+-
+ static void rcar_du_crtc_set_display_timing(struct rcar_du_crtc *rcrtc)
+ {
+ const struct drm_display_mode *mode = &rcrtc->crtc.state->adjusted_mode;
+@@ -238,7 +232,7 @@ static void rcar_du_crtc_set_display_timing(struct rcar_du_crtc *rcrtc)
+ * no post-divider when a display PLL is present (as shown by
+ * the workaround breaking HDMI output on M3-W during testing).
+ */
+- if (soc_device_match(rcar_du_r8a7795_es1)) {
++ if (rcdu->info->quirks & RCAR_DU_QUIRK_H3_ES1_PCLK_STABILITY) {
+ target *= 2;
+ div = 1;
+ }
+@@ -251,13 +245,30 @@ static void rcar_du_crtc_set_display_timing(struct rcar_du_crtc *rcrtc)
+ | DPLLCR_N(dpll.n) | DPLLCR_M(dpll.m)
+ | DPLLCR_STBY;
+
+- if (rcrtc->index == 1)
++ if (rcrtc->index == 1) {
+ dpllcr |= DPLLCR_PLCS1
+ | DPLLCR_INCS_DOTCLKIN1;
+- else
+- dpllcr |= DPLLCR_PLCS0
++ } else {
++ dpllcr |= DPLLCR_PLCS0_PLL
+ | DPLLCR_INCS_DOTCLKIN0;
+
++ /*
++ * On ES2.x we have a single mux controlled via bit 21,
++ * which selects between DCLKIN source (bit 21 = 0) and
++ * a PLL source (bit 21 = 1), where the PLL is always
++ * PLL1.
++ *
++ * On ES1.x we have an additional mux, controlled
++ * via bit 20, for choosing between PLL0 (bit 20 = 0)
++ * and PLL1 (bit 20 = 1). We always want to use PLL1,
++ * so on ES1.x, in addition to setting bit 21, we need
++ * to set bit 20.
++ */
++
++ if (rcdu->info->quirks & RCAR_DU_QUIRK_H3_ES1_PLL)
++ dpllcr |= DPLLCR_PLCS0_H3ES1X_PLL1;
++ }
++
+ rcar_du_group_write(rcrtc->group, DPLLCR, dpllcr);
+
+ escr = ESCR_DCLKSEL_DCLKIN | div;
+diff --git a/drivers/gpu/drm/rcar-du/rcar_du_drv.c b/drivers/gpu/drm/rcar-du/rcar_du_drv.c
+index d003e8d9e7a26..53c9669a3851c 100644
+--- a/drivers/gpu/drm/rcar-du/rcar_du_drv.c
++++ b/drivers/gpu/drm/rcar-du/rcar_du_drv.c
+@@ -16,6 +16,7 @@
+ #include <linux/platform_device.h>
+ #include <linux/pm.h>
+ #include <linux/slab.h>
++#include <linux/sys_soc.h>
+ #include <linux/wait.h>
+
+ #include <drm/drm_atomic_helper.h>
+@@ -386,6 +387,43 @@ static const struct rcar_du_device_info rcar_du_r8a7795_info = {
+ .dpll_mask = BIT(2) | BIT(1),
+ };
+
++static const struct rcar_du_device_info rcar_du_r8a7795_es1_info = {
++ .gen = 3,
++ .features = RCAR_DU_FEATURE_CRTC_IRQ
++ | RCAR_DU_FEATURE_CRTC_CLOCK
++ | RCAR_DU_FEATURE_VSP1_SOURCE
++ | RCAR_DU_FEATURE_INTERLACED
++ | RCAR_DU_FEATURE_TVM_SYNC,
++ .quirks = RCAR_DU_QUIRK_H3_ES1_PCLK_STABILITY
++ | RCAR_DU_QUIRK_H3_ES1_PLL,
++ .channels_mask = BIT(3) | BIT(2) | BIT(1) | BIT(0),
++ .routes = {
++ /*
++ * R8A7795 has one RGB output, two HDMI outputs and one
++ * LVDS output.
++ */
++ [RCAR_DU_OUTPUT_DPAD0] = {
++ .possible_crtcs = BIT(3),
++ .port = 0,
++ },
++ [RCAR_DU_OUTPUT_HDMI0] = {
++ .possible_crtcs = BIT(1),
++ .port = 1,
++ },
++ [RCAR_DU_OUTPUT_HDMI1] = {
++ .possible_crtcs = BIT(2),
++ .port = 2,
++ },
++ [RCAR_DU_OUTPUT_LVDS0] = {
++ .possible_crtcs = BIT(0),
++ .port = 3,
++ },
++ },
++ .num_lvds = 1,
++ .num_rpf = 5,
++ .dpll_mask = BIT(2) | BIT(1),
++};
++
+ static const struct rcar_du_device_info rcar_du_r8a7796_info = {
+ .gen = 3,
+ .features = RCAR_DU_FEATURE_CRTC_IRQ
+@@ -554,6 +592,11 @@ static const struct of_device_id rcar_du_of_table[] = {
+
+ MODULE_DEVICE_TABLE(of, rcar_du_of_table);
+
++static const struct soc_device_attribute rcar_du_soc_table[] = {
++ { .soc_id = "r8a7795", .revision = "ES1.*", .data = &rcar_du_r8a7795_es1_info },
++ { /* sentinel */ }
++};
++
+ const char *rcar_du_output_name(enum rcar_du_output output)
+ {
+ static const char * const names[] = {
+@@ -645,6 +688,7 @@ static void rcar_du_shutdown(struct platform_device *pdev)
+
+ static int rcar_du_probe(struct platform_device *pdev)
+ {
++ const struct soc_device_attribute *soc_attr;
+ struct rcar_du_device *rcdu;
+ unsigned int mask;
+ int ret;
+@@ -659,8 +703,13 @@ static int rcar_du_probe(struct platform_device *pdev)
+ return PTR_ERR(rcdu);
+
+ rcdu->dev = &pdev->dev;
++
+ rcdu->info = of_device_get_match_data(rcdu->dev);
+
++ soc_attr = soc_device_match(rcar_du_soc_table);
++ if (soc_attr)
++ rcdu->info = soc_attr->data;
++
+ platform_set_drvdata(pdev, rcdu);
+
+ /* I/O resources */
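
Note on the rcar_du_drv.c hunk above: the ES1-specific behaviour moves out of scattered soc_device_match() checks and into a dedicated device-info struct carrying quirk bits; probe starts from the OF match data and swaps in the ES1 info when the SoC table matches. A small userspace sketch of that override-at-probe pattern, with invented names and a simplified prefix match standing in for soc_device_match():

#include <stdio.h>
#include <string.h>

struct device_info {
	const char *name;
	unsigned int quirks;
};

struct soc_attr {
	const char *revision;		/* e.g. "ES1" matches "ES1.*" */
	const struct device_info *data;
};

static const struct device_info generic_info = { "r8a7795", 0 };
static const struct device_info es1_info = { "r8a7795 ES1", 0x6 };

static const struct soc_attr soc_table[] = {
	{ "ES1", &es1_info },
	{ NULL, NULL },			/* sentinel */
};

static const struct soc_attr *soc_match(const char *rev)
{
	const struct soc_attr *a;

	for (a = soc_table; a->revision; a++)
		if (!strncmp(rev, a->revision, strlen(a->revision)))
			return a;
	return NULL;
}

int main(void)
{
	const struct device_info *info = &generic_info;	/* OF match data */
	const struct soc_attr *attr = soc_match("ES1.0");

	if (attr)
		info = attr->data;	/* revision-specific info wins */

	printf("using \"%s\" quirks=%#x\n", info->name, info->quirks);
	return 0;
}
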
+diff --git a/drivers/gpu/drm/rcar-du/rcar_du_drv.h b/drivers/gpu/drm/rcar-du/rcar_du_drv.h
+index 5cfa2bb7ad93d..acc3673fefe18 100644
+--- a/drivers/gpu/drm/rcar-du/rcar_du_drv.h
++++ b/drivers/gpu/drm/rcar-du/rcar_du_drv.h
+@@ -34,6 +34,8 @@ struct rcar_du_device;
+ #define RCAR_DU_FEATURE_NO_BLENDING BIT(5) /* PnMR.SPIM does not have ALP nor EOR bits */
+
+ #define RCAR_DU_QUIRK_ALIGN_128B BIT(0) /* Align pitches to 128 bytes */
++#define RCAR_DU_QUIRK_H3_ES1_PCLK_STABILITY BIT(1) /* H3 ES1 has pclk stability issue */
++#define RCAR_DU_QUIRK_H3_ES1_PLL BIT(2) /* H3 ES1 PLL setup differs from non-ES1 */
+
+ enum rcar_du_output {
+ RCAR_DU_OUTPUT_DPAD0,
+diff --git a/drivers/gpu/drm/rcar-du/rcar_du_regs.h b/drivers/gpu/drm/rcar-du/rcar_du_regs.h
+index c1bcb0e8b5b4e..789ae9285108e 100644
+--- a/drivers/gpu/drm/rcar-du/rcar_du_regs.h
++++ b/drivers/gpu/drm/rcar-du/rcar_du_regs.h
+@@ -283,12 +283,8 @@
+ #define DPLLCR 0x20044
+ #define DPLLCR_CODE (0x95 << 24)
+ #define DPLLCR_PLCS1 (1 << 23)
+-/*
+- * PLCS0 is bit 21, but H3 ES1.x requires bit 20 to be set as well. As bit 20
+- * isn't implemented by other SoC in the Gen3 family it can safely be set
+- * unconditionally.
+- */
+-#define DPLLCR_PLCS0 (3 << 20)
++#define DPLLCR_PLCS0_PLL (1 << 21)
++#define DPLLCR_PLCS0_H3ES1X_PLL1 (1 << 20)
+ #define DPLLCR_CLKE (1 << 18)
+ #define DPLLCR_FDPLL(n) ((n) << 12)
+ #define DPLLCR_N(n) ((n) << 5)
+diff --git a/drivers/gpu/drm/tegra/firewall.c b/drivers/gpu/drm/tegra/firewall.c
+index 1824d2db0e2ce..d53f890fa6893 100644
+--- a/drivers/gpu/drm/tegra/firewall.c
++++ b/drivers/gpu/drm/tegra/firewall.c
+@@ -97,6 +97,9 @@ static int fw_check_regs_imm(struct tegra_drm_firewall *fw, u32 offset)
+ {
+ bool is_addr;
+
++ if (!fw->client->ops->is_addr_reg)
++ return 0;
++
+ is_addr = fw->client->ops->is_addr_reg(fw->client->base.dev, fw->class,
+ offset);
+ if (is_addr)
+diff --git a/drivers/gpu/drm/tidss/tidss_dispc.c b/drivers/gpu/drm/tidss/tidss_dispc.c
+index ad93acc9abd2a..16301bdfead12 100644
+--- a/drivers/gpu/drm/tidss/tidss_dispc.c
++++ b/drivers/gpu/drm/tidss/tidss_dispc.c
+@@ -1858,8 +1858,8 @@ static const struct {
+ { DRM_FORMAT_XBGR4444, 0x21, },
+ { DRM_FORMAT_RGBX4444, 0x22, },
+
+- { DRM_FORMAT_ARGB1555, 0x25, },
+- { DRM_FORMAT_ABGR1555, 0x26, },
++ { DRM_FORMAT_XRGB1555, 0x25, },
++ { DRM_FORMAT_XBGR1555, 0x26, },
+
+ { DRM_FORMAT_XRGB8888, 0x27, },
+ { DRM_FORMAT_XBGR8888, 0x28, },
+diff --git a/drivers/gpu/drm/tiny/ili9486.c b/drivers/gpu/drm/tiny/ili9486.c
+index 1bb847466b107..a63b15817f112 100644
+--- a/drivers/gpu/drm/tiny/ili9486.c
++++ b/drivers/gpu/drm/tiny/ili9486.c
+@@ -43,6 +43,7 @@ static int waveshare_command(struct mipi_dbi *mipi, u8 *cmd, u8 *par,
+ size_t num)
+ {
+ struct spi_device *spi = mipi->spi;
++ unsigned int bpw = 8;
+ void *data = par;
+ u32 speed_hz;
+ int i, ret;
+@@ -56,8 +57,6 @@ static int waveshare_command(struct mipi_dbi *mipi, u8 *cmd, u8 *par,
+ * The displays are Raspberry Pi HATs and connected to the 8-bit only
+ * SPI controller, so 16-bit command and parameters need byte swapping
+ * before being transferred as 8-bit on the big endian SPI bus.
+- * Pixel data bytes have already been swapped before this function is
+- * called.
+ */
+ buf[0] = cpu_to_be16(*cmd);
+ gpiod_set_value_cansleep(mipi->dc, 0);
+@@ -71,12 +70,18 @@ static int waveshare_command(struct mipi_dbi *mipi, u8 *cmd, u8 *par,
+ for (i = 0; i < num; i++)
+ buf[i] = cpu_to_be16(par[i]);
+ num *= 2;
+- speed_hz = mipi_dbi_spi_cmd_max_speed(spi, num);
+ data = buf;
+ }
+
++ /*
++ * Check whether the pixel data bytes need to be swapped or not
++ */
++ if (*cmd == MIPI_DCS_WRITE_MEMORY_START && !mipi->swap_bytes)
++ bpw = 16;
++
+ gpiod_set_value_cansleep(mipi->dc, 1);
+- ret = mipi_dbi_spi_transfer(spi, speed_hz, 8, data, num);
++ speed_hz = mipi_dbi_spi_cmd_max_speed(spi, num);
++ ret = mipi_dbi_spi_transfer(spi, speed_hz, bpw, data, num);
+ free:
+ kfree(buf);
+
+diff --git a/drivers/gpu/drm/vc4/vc4_dpi.c b/drivers/gpu/drm/vc4/vc4_dpi.c
+index 1f8f44b7b5a5f..61ef7d232a12c 100644
+--- a/drivers/gpu/drm/vc4/vc4_dpi.c
++++ b/drivers/gpu/drm/vc4/vc4_dpi.c
+@@ -179,7 +179,7 @@ static void vc4_dpi_encoder_enable(struct drm_encoder *encoder)
+ DPI_FORMAT);
+ break;
+ case MEDIA_BUS_FMT_RGB565_1X16:
+- dpi_c |= VC4_SET_FIELD(DPI_FORMAT_16BIT_565_RGB_3,
++ dpi_c |= VC4_SET_FIELD(DPI_FORMAT_16BIT_565_RGB_1,
+ DPI_FORMAT);
+ break;
+ default:
+diff --git a/drivers/gpu/drm/vc4/vc4_hdmi.c b/drivers/gpu/drm/vc4/vc4_hdmi.c
+index 7546103f14997..3f3f94e7b8339 100644
+--- a/drivers/gpu/drm/vc4/vc4_hdmi.c
++++ b/drivers/gpu/drm/vc4/vc4_hdmi.c
+@@ -406,6 +406,7 @@ static void vc4_hdmi_handle_hotplug(struct vc4_hdmi *vc4_hdmi,
+ {
+ struct drm_connector *connector = &vc4_hdmi->connector;
+ struct edid *edid;
++ int ret;
+
+ /*
+ * NOTE: This function should really be called with
+@@ -434,7 +435,15 @@ static void vc4_hdmi_handle_hotplug(struct vc4_hdmi *vc4_hdmi,
+ cec_s_phys_addr_from_edid(vc4_hdmi->cec_adap, edid);
+ kfree(edid);
+
+- vc4_hdmi_reset_link(connector, ctx);
++ for (;;) {
++ ret = vc4_hdmi_reset_link(connector, ctx);
++ if (ret == -EDEADLK) {
++ drm_modeset_backoff(ctx);
++ continue;
++ }
++
++ break;
++ }
+ }
+
+ static int vc4_hdmi_connector_detect_ctx(struct drm_connector *connector,
+@@ -1302,11 +1311,12 @@ static void vc5_hdmi_set_timings(struct vc4_hdmi *vc4_hdmi,
+ VC4_SET_FIELD(mode->crtc_vdisplay, VC5_HDMI_VERTA_VAL));
+ u32 vertb = (VC4_SET_FIELD(mode->htotal >> (2 - pixel_rep),
+ VC5_HDMI_VERTB_VSPO) |
+- VC4_SET_FIELD(mode->crtc_vtotal - mode->crtc_vsync_end,
++ VC4_SET_FIELD(mode->crtc_vtotal - mode->crtc_vsync_end +
++ interlaced,
+ VC4_HDMI_VERTB_VBP));
+ u32 vertb_even = (VC4_SET_FIELD(0, VC5_HDMI_VERTB_VSPO) |
+ VC4_SET_FIELD(mode->crtc_vtotal -
+- mode->crtc_vsync_end - interlaced,
++ mode->crtc_vsync_end,
+ VC4_HDMI_VERTB_VBP));
+ unsigned long flags;
+ unsigned char gcp;
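
Note on the vc4_hdmi.c hotplug hunk above: vc4_hdmi_reset_link() is now wrapped in the usual DRM modeset-retry idiom, where -EDEADLK means the acquire context lost a lock race, must back off, and retry; any other result ends the loop. A compact userspace sketch of the retry loop (the stubs are illustrative; in the driver the back-off is drm_modeset_backoff()):

#include <errno.h>
#include <stdio.h>

static int attempts;

static int try_reset_link(void)
{
	/* Pretend to hit lock contention twice, then succeed. */
	return (attempts++ < 2) ? -EDEADLK : 0;
}

static void backoff(void)
{
	/* Stands in for drm_modeset_backoff(): drop and re-take locks. */
	puts("contention, backing off");
}

int main(void)
{
	int ret;

	for (;;) {
		ret = try_reset_link();
		if (ret == -EDEADLK) {
			backoff();
			continue;
		}
		break;
	}
	printf("reset_link returned %d after %d attempts\n", ret, attempts);
	return 0;
}
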
+diff --git a/drivers/gpu/drm/vc4/vc4_hvs.c b/drivers/gpu/drm/vc4/vc4_hvs.c
+index c4453a5ae163a..d9fc0d03023b0 100644
+--- a/drivers/gpu/drm/vc4/vc4_hvs.c
++++ b/drivers/gpu/drm/vc4/vc4_hvs.c
+@@ -370,28 +370,30 @@ static int vc4_hvs_init_channel(struct vc4_hvs *hvs, struct drm_crtc *crtc,
+ * mode.
+ */
+ dispctrl = SCALER_DISPCTRLX_ENABLE;
++ dispbkgndx = HVS_READ(SCALER_DISPBKGNDX(chan));
+
+- if (!vc4->is_vc5)
++ if (!vc4->is_vc5) {
+ dispctrl |= VC4_SET_FIELD(mode->hdisplay,
+ SCALER_DISPCTRLX_WIDTH) |
+ VC4_SET_FIELD(mode->vdisplay,
+ SCALER_DISPCTRLX_HEIGHT) |
+ (oneshot ? SCALER_DISPCTRLX_ONESHOT : 0);
+- else
++ dispbkgndx |= SCALER_DISPBKGND_AUTOHS;
++ } else {
+ dispctrl |= VC4_SET_FIELD(mode->hdisplay,
+ SCALER5_DISPCTRLX_WIDTH) |
+ VC4_SET_FIELD(mode->vdisplay,
+ SCALER5_DISPCTRLX_HEIGHT) |
+ (oneshot ? SCALER5_DISPCTRLX_ONESHOT : 0);
++ dispbkgndx &= ~SCALER5_DISPBKGND_BCK2BCK;
++ }
+
+ HVS_WRITE(SCALER_DISPCTRLX(chan), dispctrl);
+
+- dispbkgndx = HVS_READ(SCALER_DISPBKGNDX(chan));
+ dispbkgndx &= ~SCALER_DISPBKGND_GAMMA;
+ dispbkgndx &= ~SCALER_DISPBKGND_INTERLACE;
+
+ HVS_WRITE(SCALER_DISPBKGNDX(chan), dispbkgndx |
+- SCALER_DISPBKGND_AUTOHS |
+ ((!vc4->is_vc5) ? SCALER_DISPBKGND_GAMMA : 0) |
+ (interlace ? SCALER_DISPBKGND_INTERLACE : 0));
+
+@@ -658,7 +660,8 @@ void vc4_hvs_mask_underrun(struct vc4_hvs *hvs, int channel)
+ return;
+
+ dispctrl = HVS_READ(SCALER_DISPCTRL);
+- dispctrl &= ~SCALER_DISPCTRL_DSPEISLUR(channel);
++ dispctrl &= ~(hvs->vc4->is_vc5 ? SCALER5_DISPCTRL_DSPEISLUR(channel) :
++ SCALER_DISPCTRL_DSPEISLUR(channel));
+
+ HVS_WRITE(SCALER_DISPCTRL, dispctrl);
+
+@@ -675,7 +678,8 @@ void vc4_hvs_unmask_underrun(struct vc4_hvs *hvs, int channel)
+ return;
+
+ dispctrl = HVS_READ(SCALER_DISPCTRL);
+- dispctrl |= SCALER_DISPCTRL_DSPEISLUR(channel);
++ dispctrl |= (hvs->vc4->is_vc5 ? SCALER5_DISPCTRL_DSPEISLUR(channel) :
++ SCALER_DISPCTRL_DSPEISLUR(channel));
+
+ HVS_WRITE(SCALER_DISPSTAT,
+ SCALER_DISPSTAT_EUFLOW(channel));
+@@ -701,6 +705,7 @@ static irqreturn_t vc4_hvs_irq_handler(int irq, void *data)
+ int channel;
+ u32 control;
+ u32 status;
++ u32 dspeislur;
+
+ /*
+ * NOTE: We don't need to protect the register access using
+@@ -717,9 +722,11 @@ static irqreturn_t vc4_hvs_irq_handler(int irq, void *data)
+ control = HVS_READ(SCALER_DISPCTRL);
+
+ for (channel = 0; channel < SCALER_CHANNELS_COUNT; channel++) {
++ dspeislur = vc4->is_vc5 ? SCALER5_DISPCTRL_DSPEISLUR(channel) :
++ SCALER_DISPCTRL_DSPEISLUR(channel);
+ /* Interrupt masking is not always honored, so check it here. */
+ if (status & SCALER_DISPSTAT_EUFLOW(channel) &&
+- control & SCALER_DISPCTRL_DSPEISLUR(channel)) {
++ control & dspeislur) {
+ vc4_hvs_mask_underrun(hvs, channel);
+ vc4_hvs_report_underrun(dev);
+
+@@ -776,7 +783,7 @@ static int vc4_hvs_bind(struct device *dev, struct device *master, void *data)
+ struct vc4_hvs *hvs = NULL;
+ int ret;
+ u32 dispctrl;
+- u32 reg;
++ u32 reg, top;
+
+ hvs = drmm_kzalloc(drm, sizeof(*hvs), GFP_KERNEL);
+ if (!hvs)
+@@ -896,22 +903,102 @@ static int vc4_hvs_bind(struct device *dev, struct device *master, void *data)
+ SCALER_DISPCTRL_DISPEIRQ(1) |
+ SCALER_DISPCTRL_DISPEIRQ(2);
+
+- dispctrl &= ~(SCALER_DISPCTRL_DMAEIRQ |
+- SCALER_DISPCTRL_SLVWREIRQ |
+- SCALER_DISPCTRL_SLVRDEIRQ |
+- SCALER_DISPCTRL_DSPEIEOF(0) |
+- SCALER_DISPCTRL_DSPEIEOF(1) |
+- SCALER_DISPCTRL_DSPEIEOF(2) |
+- SCALER_DISPCTRL_DSPEIEOLN(0) |
+- SCALER_DISPCTRL_DSPEIEOLN(1) |
+- SCALER_DISPCTRL_DSPEIEOLN(2) |
+- SCALER_DISPCTRL_DSPEISLUR(0) |
+- SCALER_DISPCTRL_DSPEISLUR(1) |
+- SCALER_DISPCTRL_DSPEISLUR(2) |
+- SCALER_DISPCTRL_SCLEIRQ);
++ if (!vc4->is_vc5)
++ dispctrl &= ~(SCALER_DISPCTRL_DMAEIRQ |
++ SCALER_DISPCTRL_SLVWREIRQ |
++ SCALER_DISPCTRL_SLVRDEIRQ |
++ SCALER_DISPCTRL_DSPEIEOF(0) |
++ SCALER_DISPCTRL_DSPEIEOF(1) |
++ SCALER_DISPCTRL_DSPEIEOF(2) |
++ SCALER_DISPCTRL_DSPEIEOLN(0) |
++ SCALER_DISPCTRL_DSPEIEOLN(1) |
++ SCALER_DISPCTRL_DSPEIEOLN(2) |
++ SCALER_DISPCTRL_DSPEISLUR(0) |
++ SCALER_DISPCTRL_DSPEISLUR(1) |
++ SCALER_DISPCTRL_DSPEISLUR(2) |
++ SCALER_DISPCTRL_SCLEIRQ);
++ else
++ dispctrl &= ~(SCALER_DISPCTRL_DMAEIRQ |
++ SCALER5_DISPCTRL_SLVEIRQ |
++ SCALER5_DISPCTRL_DSPEIEOF(0) |
++ SCALER5_DISPCTRL_DSPEIEOF(1) |
++ SCALER5_DISPCTRL_DSPEIEOF(2) |
++ SCALER5_DISPCTRL_DSPEIEOLN(0) |
++ SCALER5_DISPCTRL_DSPEIEOLN(1) |
++ SCALER5_DISPCTRL_DSPEIEOLN(2) |
++ SCALER5_DISPCTRL_DSPEISLUR(0) |
++ SCALER5_DISPCTRL_DSPEISLUR(1) |
++ SCALER5_DISPCTRL_DSPEISLUR(2) |
++ SCALER_DISPCTRL_SCLEIRQ);
++
++
++ /* Set AXI panic mode.
++ * VC4 panics when < 2 lines in FIFO.
++ * VC5 panics when less than 1 line in the FIFO.
++ */
++ dispctrl &= ~(SCALER_DISPCTRL_PANIC0_MASK |
++ SCALER_DISPCTRL_PANIC1_MASK |
++ SCALER_DISPCTRL_PANIC2_MASK);
++ dispctrl |= VC4_SET_FIELD(2, SCALER_DISPCTRL_PANIC0);
++ dispctrl |= VC4_SET_FIELD(2, SCALER_DISPCTRL_PANIC1);
++ dispctrl |= VC4_SET_FIELD(2, SCALER_DISPCTRL_PANIC2);
+
+ HVS_WRITE(SCALER_DISPCTRL, dispctrl);
+
++ /* Recompute Composite Output Buffer (COB) allocations for the displays
++ */
++ if (!vc4->is_vc5) {
++ /* The COB is 20736 pixels, or just over 10 lines at 2048 wide.
++ * The bottom 2048 pixels are full 32bpp RGBA (intended for the
++ * TXP composing RGBA to memory), whilst the remainder are only
++ * 24bpp RGB.
++ *
++ * Assign 3 lines to channels 1 & 2, and just over 4 lines to
++ * channel 0.
++ */
++ #define VC4_COB_SIZE 20736
++ #define VC4_COB_LINE_WIDTH 2048
++ #define VC4_COB_NUM_LINES 3
++ reg = 0;
++ top = VC4_COB_LINE_WIDTH * VC4_COB_NUM_LINES;
++ reg |= (top - 1) << 16;
++ HVS_WRITE(SCALER_DISPBASE2, reg);
++ reg = top;
++ top += VC4_COB_LINE_WIDTH * VC4_COB_NUM_LINES;
++ reg |= (top - 1) << 16;
++ HVS_WRITE(SCALER_DISPBASE1, reg);
++ reg = top;
++ top = VC4_COB_SIZE;
++ reg |= (top - 1) << 16;
++ HVS_WRITE(SCALER_DISPBASE0, reg);
++ } else {
++ /* The COB is 44416 pixels, or 10.8 lines at 4096 wide.
++ * The bottom 4096 pixels are full RGBA (intended for the TXP
++ * composing RGBA to memory), whilst the remainder are only
++ * RGB. Addressing is always pixel wide.
++ *
++ * Assign 3 lines of 4096 to channels 1 & 2, and just over 4
++ * lines to channel 0.
++ */
++ #define VC5_COB_SIZE 44416
++ #define VC5_COB_LINE_WIDTH 4096
++ #define VC5_COB_NUM_LINES 3
++ reg = 0;
++ top = VC5_COB_LINE_WIDTH * VC5_COB_NUM_LINES;
++ reg |= top << 16;
++ HVS_WRITE(SCALER_DISPBASE2, reg);
++ top += 16;
++ reg = top;
++ top += VC5_COB_LINE_WIDTH * VC5_COB_NUM_LINES;
++ reg |= top << 16;
++ HVS_WRITE(SCALER_DISPBASE1, reg);
++ top += 16;
++ reg = top;
++ top = VC5_COB_SIZE;
++ reg |= top << 16;
++ HVS_WRITE(SCALER_DISPBASE0, reg);
++ }
++
+ ret = devm_request_irq(dev, platform_get_irq(pdev, 0),
+ vc4_hvs_irq_handler, 0, "vc4 hvs", drm);
+ if (ret)
+diff --git a/drivers/gpu/drm/vc4/vc4_plane.c b/drivers/gpu/drm/vc4/vc4_plane.c
+index bd5acc4a86876..eb08020154f30 100644
+--- a/drivers/gpu/drm/vc4/vc4_plane.c
++++ b/drivers/gpu/drm/vc4/vc4_plane.c
+@@ -75,11 +75,13 @@ static const struct hvs_format {
+ .drm = DRM_FORMAT_ARGB1555,
+ .hvs = HVS_PIXEL_FORMAT_RGBA5551,
+ .pixel_order = HVS_PIXEL_ORDER_ABGR,
++ .pixel_order_hvs5 = HVS_PIXEL_ORDER_ARGB,
+ },
+ {
+ .drm = DRM_FORMAT_XRGB1555,
+ .hvs = HVS_PIXEL_FORMAT_RGBA5551,
+ .pixel_order = HVS_PIXEL_ORDER_ABGR,
++ .pixel_order_hvs5 = HVS_PIXEL_ORDER_ARGB,
+ },
+ {
+ .drm = DRM_FORMAT_RGB888,
+diff --git a/drivers/gpu/drm/vc4/vc4_regs.h b/drivers/gpu/drm/vc4/vc4_regs.h
+index f0290fad991de..1256f0877ff66 100644
+--- a/drivers/gpu/drm/vc4/vc4_regs.h
++++ b/drivers/gpu/drm/vc4/vc4_regs.h
+@@ -220,6 +220,12 @@
+ #define SCALER_DISPCTRL 0x00000000
+ /* Global register for clock gating the HVS */
+ # define SCALER_DISPCTRL_ENABLE BIT(31)
++# define SCALER_DISPCTRL_PANIC0_MASK VC4_MASK(25, 24)
++# define SCALER_DISPCTRL_PANIC0_SHIFT 24
++# define SCALER_DISPCTRL_PANIC1_MASK VC4_MASK(27, 26)
++# define SCALER_DISPCTRL_PANIC1_SHIFT 26
++# define SCALER_DISPCTRL_PANIC2_MASK VC4_MASK(29, 28)
++# define SCALER_DISPCTRL_PANIC2_SHIFT 28
+ # define SCALER_DISPCTRL_DSP3_MUX_MASK VC4_MASK(19, 18)
+ # define SCALER_DISPCTRL_DSP3_MUX_SHIFT 18
+
+@@ -228,15 +234,21 @@
+ * always enabled.
+ */
+ # define SCALER_DISPCTRL_DSPEISLUR(x) BIT(13 + (x))
++# define SCALER5_DISPCTRL_DSPEISLUR(x) BIT(9 + ((x) * 4))
+ /* Enables Display 0 end-of-line-N contribution to
+ * SCALER_DISPSTAT_IRQDISP0
+ */
+ # define SCALER_DISPCTRL_DSPEIEOLN(x) BIT(8 + ((x) * 2))
++# define SCALER5_DISPCTRL_DSPEIEOLN(x) BIT(8 + ((x) * 4))
+ /* Enables Display 0 EOF contribution to SCALER_DISPSTAT_IRQDISP0 */
+ # define SCALER_DISPCTRL_DSPEIEOF(x) BIT(7 + ((x) * 2))
++# define SCALER5_DISPCTRL_DSPEIEOF(x) BIT(7 + ((x) * 4))
+
+-# define SCALER_DISPCTRL_SLVRDEIRQ BIT(6)
+-# define SCALER_DISPCTRL_SLVWREIRQ BIT(5)
++# define SCALER5_DISPCTRL_DSPEIVST(x) BIT(6 + ((x) * 4))
++
++# define SCALER_DISPCTRL_SLVRDEIRQ BIT(6) /* HVS4 only */
++# define SCALER_DISPCTRL_SLVWREIRQ BIT(5) /* HVS4 only */
++# define SCALER5_DISPCTRL_SLVEIRQ BIT(5)
+ # define SCALER_DISPCTRL_DMAEIRQ BIT(4)
+ /* Enables interrupt generation on the enabled EOF/EOLN/EISLUR
+ * bits and short frames..
+@@ -360,6 +372,7 @@
+
+ #define SCALER_DISPBKGND0 0x00000044
+ # define SCALER_DISPBKGND_AUTOHS BIT(31)
++# define SCALER5_DISPBKGND_BCK2BCK BIT(31)
+ # define SCALER_DISPBKGND_INTERLACE BIT(30)
+ # define SCALER_DISPBKGND_GAMMA BIT(29)
+ # define SCALER_DISPBKGND_TESTMODE_MASK VC4_MASK(28, 25)
+diff --git a/drivers/gpu/drm/vkms/vkms_drv.c b/drivers/gpu/drm/vkms/vkms_drv.c
+index 293dbca50c316..69346906ec813 100644
+--- a/drivers/gpu/drm/vkms/vkms_drv.c
++++ b/drivers/gpu/drm/vkms/vkms_drv.c
+@@ -57,7 +57,8 @@ static void vkms_release(struct drm_device *dev)
+ {
+ struct vkms_device *vkms = drm_device_to_vkms_device(dev);
+
+- destroy_workqueue(vkms->output.composer_workq);
++ if (vkms->output.composer_workq)
++ destroy_workqueue(vkms->output.composer_workq);
+ }
+
+ static void vkms_atomic_commit_tail(struct drm_atomic_state *old_state)
+@@ -218,6 +219,7 @@ out_unregister:
+
+ static int __init vkms_init(void)
+ {
++ int ret;
+ struct vkms_config *config;
+
+ config = kmalloc(sizeof(*config), GFP_KERNEL);
+@@ -230,7 +232,11 @@ static int __init vkms_init(void)
+ config->writeback = enable_writeback;
+ config->overlay = enable_overlay;
+
+- return vkms_create(config);
++ ret = vkms_create(config);
++ if (ret)
++ kfree(config);
++
++ return ret;
+ }
+
+ static void vkms_destroy(struct vkms_config *config)
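
Note on the vkms_drv.c init hunk above: this plugs a leak on the error path. The module-init code allocates the config and, until vkms_create() succeeds, still owns it, so it must free it itself on failure; there is no device yet to take ownership. A minimal sketch of that caller-owns-on-failure rule, with invented names:

#include <stdlib.h>

struct config {
	int writeback;
	int overlay;
};

static int create_device(const struct config *cfg)
{
	(void)cfg;
	return -1;		/* simulate a failing vkms_create() */
}

static int init_sketch(void)
{
	struct config *cfg = malloc(sizeof(*cfg));
	int ret;

	if (!cfg)
		return -1;

	cfg->writeback = 1;
	cfg->overlay = 0;

	ret = create_device(cfg);
	if (ret)
		free(cfg);	/* the pre-fix code leaked cfg here */

	return ret;
}

int main(void)
{
	return init_sketch() ? 1 : 0;
}
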
+diff --git a/drivers/gpu/host1x/hw/hw_host1x06_uclass.h b/drivers/gpu/host1x/hw/hw_host1x06_uclass.h
+index 5f831438d19bb..50c32de452fb1 100644
+--- a/drivers/gpu/host1x/hw/hw_host1x06_uclass.h
++++ b/drivers/gpu/host1x/hw/hw_host1x06_uclass.h
+@@ -53,7 +53,7 @@ static inline u32 host1x_uclass_incr_syncpt_cond_f(u32 v)
+ host1x_uclass_incr_syncpt_cond_f(v)
+ static inline u32 host1x_uclass_incr_syncpt_indx_f(u32 v)
+ {
+- return (v & 0xff) << 0;
++ return (v & 0x3ff) << 0;
+ }
+ #define HOST1X_UCLASS_INCR_SYNCPT_INDX_F(v) \
+ host1x_uclass_incr_syncpt_indx_f(v)
+diff --git a/drivers/gpu/host1x/hw/hw_host1x07_uclass.h b/drivers/gpu/host1x/hw/hw_host1x07_uclass.h
+index 8cd2ef087d5d0..887b878f92f79 100644
+--- a/drivers/gpu/host1x/hw/hw_host1x07_uclass.h
++++ b/drivers/gpu/host1x/hw/hw_host1x07_uclass.h
+@@ -53,7 +53,7 @@ static inline u32 host1x_uclass_incr_syncpt_cond_f(u32 v)
+ host1x_uclass_incr_syncpt_cond_f(v)
+ static inline u32 host1x_uclass_incr_syncpt_indx_f(u32 v)
+ {
+- return (v & 0xff) << 0;
++ return (v & 0x3ff) << 0;
+ }
+ #define HOST1X_UCLASS_INCR_SYNCPT_INDX_F(v) \
+ host1x_uclass_incr_syncpt_indx_f(v)
+diff --git a/drivers/gpu/host1x/hw/hw_host1x08_uclass.h b/drivers/gpu/host1x/hw/hw_host1x08_uclass.h
+index 724cccd71aa1a..4fb1d090edae5 100644
+--- a/drivers/gpu/host1x/hw/hw_host1x08_uclass.h
++++ b/drivers/gpu/host1x/hw/hw_host1x08_uclass.h
+@@ -53,7 +53,7 @@ static inline u32 host1x_uclass_incr_syncpt_cond_f(u32 v)
+ host1x_uclass_incr_syncpt_cond_f(v)
+ static inline u32 host1x_uclass_incr_syncpt_indx_f(u32 v)
+ {
+- return (v & 0xff) << 0;
++ return (v & 0x3ff) << 0;
+ }
+ #define HOST1X_UCLASS_INCR_SYNCPT_INDX_F(v) \
+ host1x_uclass_incr_syncpt_indx_f(v)
+diff --git a/drivers/gpu/host1x/hw/syncpt_hw.c b/drivers/gpu/host1x/hw/syncpt_hw.c
+index dd39d67ccec36..8cf35b2eff3db 100644
+--- a/drivers/gpu/host1x/hw/syncpt_hw.c
++++ b/drivers/gpu/host1x/hw/syncpt_hw.c
+@@ -106,9 +106,6 @@ static void syncpt_assign_to_channel(struct host1x_syncpt *sp,
+ #if HOST1X_HW >= 6
+ struct host1x *host = sp->host;
+
+- if (!host->hv_regs)
+- return;
+-
+ host1x_sync_writel(host,
+ HOST1X_SYNC_SYNCPT_CH_APP_CH(ch ? ch->id : 0xff),
+ HOST1X_SYNC_SYNCPT_CH_APP(sp->id));
+diff --git a/drivers/gpu/ipu-v3/ipu-common.c b/drivers/gpu/ipu-v3/ipu-common.c
+index 118318513e2d2..c35eac1116f5f 100644
+--- a/drivers/gpu/ipu-v3/ipu-common.c
++++ b/drivers/gpu/ipu-v3/ipu-common.c
+@@ -1165,6 +1165,7 @@ static int ipu_add_client_devices(struct ipu_soc *ipu, unsigned long ipu_base)
+ pdev = platform_device_alloc(reg->name, id++);
+ if (!pdev) {
+ ret = -ENOMEM;
++ of_node_put(of_node);
+ goto err_register;
+ }
+
+diff --git a/drivers/hid/hid-asus.c b/drivers/hid/hid-asus.c
+index f99752b998f3d..d1094bb1aa429 100644
+--- a/drivers/hid/hid-asus.c
++++ b/drivers/hid/hid-asus.c
+@@ -98,6 +98,7 @@ struct asus_kbd_leds {
+ struct hid_device *hdev;
+ struct work_struct work;
+ unsigned int brightness;
++ spinlock_t lock;
+ bool removed;
+ };
+
+@@ -490,21 +491,42 @@ static int rog_nkey_led_init(struct hid_device *hdev)
+ return ret;
+ }
+
++static void asus_schedule_work(struct asus_kbd_leds *led)
++{
++ unsigned long flags;
++
++ spin_lock_irqsave(&led->lock, flags);
++ if (!led->removed)
++ schedule_work(&led->work);
++ spin_unlock_irqrestore(&led->lock, flags);
++}
++
+ static void asus_kbd_backlight_set(struct led_classdev *led_cdev,
+ enum led_brightness brightness)
+ {
+ struct asus_kbd_leds *led = container_of(led_cdev, struct asus_kbd_leds,
+ cdev);
++ unsigned long flags;
++
++ spin_lock_irqsave(&led->lock, flags);
+ led->brightness = brightness;
+- schedule_work(&led->work);
++ spin_unlock_irqrestore(&led->lock, flags);
++
++ asus_schedule_work(led);
+ }
+
+ static enum led_brightness asus_kbd_backlight_get(struct led_classdev *led_cdev)
+ {
+ struct asus_kbd_leds *led = container_of(led_cdev, struct asus_kbd_leds,
+ cdev);
++ enum led_brightness brightness;
++ unsigned long flags;
++
++ spin_lock_irqsave(&led->lock, flags);
++ brightness = led->brightness;
++ spin_unlock_irqrestore(&led->lock, flags);
+
+- return led->brightness;
++ return brightness;
+ }
+
+ static void asus_kbd_backlight_work(struct work_struct *work)
+@@ -512,11 +534,11 @@ static void asus_kbd_backlight_work(struct work_struct *work)
+ struct asus_kbd_leds *led = container_of(work, struct asus_kbd_leds, work);
+ u8 buf[] = { FEATURE_KBD_REPORT_ID, 0xba, 0xc5, 0xc4, 0x00 };
+ int ret;
++ unsigned long flags;
+
+- if (led->removed)
+- return;
+-
++ spin_lock_irqsave(&led->lock, flags);
+ buf[4] = led->brightness;
++ spin_unlock_irqrestore(&led->lock, flags);
+
+ ret = asus_kbd_set_report(led->hdev, buf, sizeof(buf));
+ if (ret < 0)
+@@ -584,6 +606,7 @@ static int asus_kbd_register_leds(struct hid_device *hdev)
+ drvdata->kbd_backlight->cdev.brightness_set = asus_kbd_backlight_set;
+ drvdata->kbd_backlight->cdev.brightness_get = asus_kbd_backlight_get;
+ INIT_WORK(&drvdata->kbd_backlight->work, asus_kbd_backlight_work);
++ spin_lock_init(&drvdata->kbd_backlight->lock);
+
+ ret = devm_led_classdev_register(&hdev->dev, &drvdata->kbd_backlight->cdev);
+ if (ret < 0) {
+@@ -1119,9 +1142,13 @@ err_stop_hw:
+ static void asus_remove(struct hid_device *hdev)
+ {
+ struct asus_drvdata *drvdata = hid_get_drvdata(hdev);
++ unsigned long flags;
+
+ if (drvdata->kbd_backlight) {
++ spin_lock_irqsave(&drvdata->kbd_backlight->lock, flags);
+ drvdata->kbd_backlight->removed = true;
++ spin_unlock_irqrestore(&drvdata->kbd_backlight->lock, flags);
++
+ cancel_work_sync(&drvdata->kbd_backlight->work);
+ }
+
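
Note on the hid-asus.c hunks above: they close a race between the LED worker and device removal. The brightness value and the removed flag are now read and written only under a spinlock, and work is scheduled only while removed is still false, so nothing can queue new work between asus_remove() setting the flag and cancel_work_sync() draining the queue. A userspace sketch of the guarded-flag scheduling pattern, with a pthread mutex standing in for the spinlock and made-up names throughout:

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct kbd_leds {
	pthread_mutex_t lock;
	bool removed;
	int brightness;
};

static void schedule_work_stub(const char *what)
{
	printf("work scheduled: %s\n", what);
}

static void leds_schedule(struct kbd_leds *led)
{
	pthread_mutex_lock(&led->lock);
	if (!led->removed)		/* the check now races with nothing */
		schedule_work_stub("brightness update");
	pthread_mutex_unlock(&led->lock);
}

static void leds_set(struct kbd_leds *led, int brightness)
{
	pthread_mutex_lock(&led->lock);
	led->brightness = brightness;
	pthread_mutex_unlock(&led->lock);

	leds_schedule(led);
}

static void leds_remove(struct kbd_leds *led)
{
	pthread_mutex_lock(&led->lock);
	led->removed = true;		/* no new work can slip in after this */
	pthread_mutex_unlock(&led->lock);
	/* the driver then calls cancel_work_sync() */
}

int main(void)
{
	struct kbd_leds led = { .lock = PTHREAD_MUTEX_INITIALIZER };

	leds_set(&led, 128);	/* schedules work */
	leds_remove(&led);
	leds_set(&led, 0);	/* flag is set: nothing is scheduled */
	return 0;
}

The hid-bigbenff.c changes that follow apply the same recipe, and additionally build the report buffer under the lock but issue the blocking hid_hw_raw_request() outside it.
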
+diff --git a/drivers/hid/hid-bigbenff.c b/drivers/hid/hid-bigbenff.c
+index e8b16665860d6..a02cb517b4c47 100644
+--- a/drivers/hid/hid-bigbenff.c
++++ b/drivers/hid/hid-bigbenff.c
+@@ -174,6 +174,7 @@ static __u8 pid0902_rdesc_fixed[] = {
+ struct bigben_device {
+ struct hid_device *hid;
+ struct hid_report *report;
++ spinlock_t lock;
+ bool removed;
+ u8 led_state; /* LED1 = 1 .. LED4 = 8 */
+ u8 right_motor_on; /* right motor off/on 0/1 */
+@@ -184,18 +185,39 @@ struct bigben_device {
+ struct work_struct worker;
+ };
+
++static inline void bigben_schedule_work(struct bigben_device *bigben)
++{
++ unsigned long flags;
++
++ spin_lock_irqsave(&bigben->lock, flags);
++ if (!bigben->removed)
++ schedule_work(&bigben->worker);
++ spin_unlock_irqrestore(&bigben->lock, flags);
++}
+
+ static void bigben_worker(struct work_struct *work)
+ {
+ struct bigben_device *bigben = container_of(work,
+ struct bigben_device, worker);
+ struct hid_field *report_field = bigben->report->field[0];
+-
+- if (bigben->removed || !report_field)
++ bool do_work_led = false;
++ bool do_work_ff = false;
++ u8 *buf;
++ u32 len;
++ unsigned long flags;
++
++ buf = hid_alloc_report_buf(bigben->report, GFP_KERNEL);
++ if (!buf)
+ return;
+
++ len = hid_report_len(bigben->report);
++
++ /* LED work */
++ spin_lock_irqsave(&bigben->lock, flags);
++
+ if (bigben->work_led) {
+ bigben->work_led = false;
++ do_work_led = true;
+ report_field->value[0] = 0x01; /* 1 = led message */
+ report_field->value[1] = 0x08; /* reserved value, always 8 */
+ report_field->value[2] = bigben->led_state;
+@@ -204,11 +226,22 @@ static void bigben_worker(struct work_struct *work)
+ report_field->value[5] = 0x00; /* padding */
+ report_field->value[6] = 0x00; /* padding */
+ report_field->value[7] = 0x00; /* padding */
+- hid_hw_request(bigben->hid, bigben->report, HID_REQ_SET_REPORT);
++ hid_output_report(bigben->report, buf);
++ }
++
++ spin_unlock_irqrestore(&bigben->lock, flags);
++
++ if (do_work_led) {
++ hid_hw_raw_request(bigben->hid, bigben->report->id, buf, len,
++ bigben->report->type, HID_REQ_SET_REPORT);
+ }
+
++ /* FF work */
++ spin_lock_irqsave(&bigben->lock, flags);
++
+ if (bigben->work_ff) {
+ bigben->work_ff = false;
++ do_work_ff = true;
+ report_field->value[0] = 0x02; /* 2 = rumble effect message */
+ report_field->value[1] = 0x08; /* reserved value, always 8 */
+ report_field->value[2] = bigben->right_motor_on;
+@@ -217,8 +250,17 @@ static void bigben_worker(struct work_struct *work)
+ report_field->value[5] = 0x00; /* padding */
+ report_field->value[6] = 0x00; /* padding */
+ report_field->value[7] = 0x00; /* padding */
+- hid_hw_request(bigben->hid, bigben->report, HID_REQ_SET_REPORT);
++ hid_output_report(bigben->report, buf);
++ }
++
++ spin_unlock_irqrestore(&bigben->lock, flags);
++
++ if (do_work_ff) {
++ hid_hw_raw_request(bigben->hid, bigben->report->id, buf, len,
++ bigben->report->type, HID_REQ_SET_REPORT);
+ }
++
++ kfree(buf);
+ }
+
+ static int hid_bigben_play_effect(struct input_dev *dev, void *data,
+@@ -228,6 +270,7 @@ static int hid_bigben_play_effect(struct input_dev *dev, void *data,
+ struct bigben_device *bigben = hid_get_drvdata(hid);
+ u8 right_motor_on;
+ u8 left_motor_force;
++ unsigned long flags;
+
+ if (!bigben) {
+ hid_err(hid, "no device data\n");
+@@ -242,10 +285,13 @@ static int hid_bigben_play_effect(struct input_dev *dev, void *data,
+
+ if (right_motor_on != bigben->right_motor_on ||
+ left_motor_force != bigben->left_motor_force) {
++ spin_lock_irqsave(&bigben->lock, flags);
+ bigben->right_motor_on = right_motor_on;
+ bigben->left_motor_force = left_motor_force;
+ bigben->work_ff = true;
+- schedule_work(&bigben->worker);
++ spin_unlock_irqrestore(&bigben->lock, flags);
++
++ bigben_schedule_work(bigben);
+ }
+
+ return 0;
+@@ -259,6 +305,7 @@ static void bigben_set_led(struct led_classdev *led,
+ struct bigben_device *bigben = hid_get_drvdata(hid);
+ int n;
+ bool work;
++ unsigned long flags;
+
+ if (!bigben) {
+ hid_err(hid, "no device data\n");
+@@ -267,6 +314,7 @@ static void bigben_set_led(struct led_classdev *led,
+
+ for (n = 0; n < NUM_LEDS; n++) {
+ if (led == bigben->leds[n]) {
++ spin_lock_irqsave(&bigben->lock, flags);
+ if (value == LED_OFF) {
+ work = (bigben->led_state & BIT(n));
+ bigben->led_state &= ~BIT(n);
+@@ -274,10 +322,11 @@ static void bigben_set_led(struct led_classdev *led,
+ work = !(bigben->led_state & BIT(n));
+ bigben->led_state |= BIT(n);
+ }
++ spin_unlock_irqrestore(&bigben->lock, flags);
+
+ if (work) {
+ bigben->work_led = true;
+- schedule_work(&bigben->worker);
++ bigben_schedule_work(bigben);
+ }
+ return;
+ }
+@@ -307,8 +356,12 @@ static enum led_brightness bigben_get_led(struct led_classdev *led)
+ static void bigben_remove(struct hid_device *hid)
+ {
+ struct bigben_device *bigben = hid_get_drvdata(hid);
++ unsigned long flags;
+
++ spin_lock_irqsave(&bigben->lock, flags);
+ bigben->removed = true;
++ spin_unlock_irqrestore(&bigben->lock, flags);
++
+ cancel_work_sync(&bigben->worker);
+ hid_hw_stop(hid);
+ }
+@@ -318,7 +371,6 @@ static int bigben_probe(struct hid_device *hid,
+ {
+ struct bigben_device *bigben;
+ struct hid_input *hidinput;
+- struct list_head *report_list;
+ struct led_classdev *led;
+ char *name;
+ size_t name_sz;
+@@ -343,14 +395,12 @@ static int bigben_probe(struct hid_device *hid,
+ return error;
+ }
+
+- report_list = &hid->report_enum[HID_OUTPUT_REPORT].report_list;
+- if (list_empty(report_list)) {
++ bigben->report = hid_validate_values(hid, HID_OUTPUT_REPORT, 0, 0, 8);
++ if (!bigben->report) {
+ hid_err(hid, "no output report found\n");
+ error = -ENODEV;
+ goto error_hw_stop;
+ }
+- bigben->report = list_entry(report_list->next,
+- struct hid_report, list);
+
+ if (list_empty(&hid->inputs)) {
+ hid_err(hid, "no inputs found\n");
+@@ -362,6 +412,7 @@ static int bigben_probe(struct hid_device *hid,
+ set_bit(FF_RUMBLE, hidinput->input->ffbit);
+
+ INIT_WORK(&bigben->worker, bigben_worker);
++ spin_lock_init(&bigben->lock);
+
+ error = input_ff_create_memless(hidinput->input, NULL,
+ hid_bigben_play_effect);
+@@ -402,7 +453,7 @@ static int bigben_probe(struct hid_device *hid,
+ bigben->left_motor_force = 0;
+ bigben->work_led = true;
+ bigben->work_ff = true;
+- schedule_work(&bigben->worker);
++ bigben_schedule_work(bigben);
+
+ hid_info(hid, "LED and force feedback support for BigBen gamepad\n");
+
+diff --git a/drivers/hid/hid-debug.c b/drivers/hid/hid-debug.c
+index e213bdde543af..e7ef1ea107c9e 100644
+--- a/drivers/hid/hid-debug.c
++++ b/drivers/hid/hid-debug.c
+@@ -975,6 +975,7 @@ static const char *keys[KEY_MAX + 1] = {
+ [KEY_CAMERA_ACCESS_DISABLE] = "CameraAccessDisable",
+ [KEY_CAMERA_ACCESS_TOGGLE] = "CameraAccessToggle",
+ [KEY_DICTATE] = "Dictate",
++ [KEY_MICMUTE] = "MicrophoneMute",
+ [KEY_BRIGHTNESS_MIN] = "BrightnessMin",
+ [KEY_BRIGHTNESS_MAX] = "BrightnessMax",
+ [KEY_BRIGHTNESS_AUTO] = "BrightnessAuto",
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index 9e36b4cd905ee..2235d78784b1b 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -1299,7 +1299,9 @@
+ #define USB_DEVICE_ID_UGEE_XPPEN_TABLET_DECO01 0x0042
+ #define USB_DEVICE_ID_UGEE_XPPEN_TABLET_DECO01_V2 0x0905
+ #define USB_DEVICE_ID_UGEE_XPPEN_TABLET_DECO_L 0x0935
++#define USB_DEVICE_ID_UGEE_XPPEN_TABLET_DECO_PRO_MW 0x0934
+ #define USB_DEVICE_ID_UGEE_XPPEN_TABLET_DECO_PRO_S 0x0909
++#define USB_DEVICE_ID_UGEE_XPPEN_TABLET_DECO_PRO_SW 0x0933
+ #define USB_DEVICE_ID_UGEE_XPPEN_TABLET_STAR06 0x0078
+ #define USB_DEVICE_ID_UGEE_TABLET_G5 0x0074
+ #define USB_DEVICE_ID_UGEE_TABLET_EX07S 0x0071
+diff --git a/drivers/hid/hid-input.c b/drivers/hid/hid-input.c
+index 77c8c49852b5c..c3c7d0abb01ad 100644
+--- a/drivers/hid/hid-input.c
++++ b/drivers/hid/hid-input.c
+@@ -378,6 +378,10 @@ static const struct hid_device_id hid_battery_quirks[] = {
+ HID_BATTERY_QUIRK_IGNORE },
+ { HID_USB_DEVICE(USB_VENDOR_ID_UGEE, USB_DEVICE_ID_UGEE_XPPEN_TABLET_DECO_L),
+ HID_BATTERY_QUIRK_AVOID_QUERY },
++ { HID_USB_DEVICE(USB_VENDOR_ID_UGEE, USB_DEVICE_ID_UGEE_XPPEN_TABLET_DECO_PRO_MW),
++ HID_BATTERY_QUIRK_AVOID_QUERY },
++ { HID_USB_DEVICE(USB_VENDOR_ID_UGEE, USB_DEVICE_ID_UGEE_XPPEN_TABLET_DECO_PRO_SW),
++ HID_BATTERY_QUIRK_AVOID_QUERY },
+ { HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_HP_ENVY_X360_15),
+ HID_BATTERY_QUIRK_IGNORE },
+ { HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_HP_ENVY_X360_15T_DR100),
+@@ -793,6 +797,14 @@ static void hidinput_configure_usage(struct hid_input *hidinput, struct hid_fiel
+ break;
+ }
+
++ if ((usage->hid & 0xf0) == 0xa0) { /* SystemControl */
++ switch (usage->hid & 0xf) {
++ case 0x9: map_key_clear(KEY_MICMUTE); break;
++ default: goto ignore;
++ }
++ break;
++ }
++
+ if ((usage->hid & 0xf0) == 0xb0) { /* SC - Display */
+ switch (usage->hid & 0xf) {
+ case 0x05: map_key_clear(KEY_SWITCHVIDEOMODE); break;
+diff --git a/drivers/hid/hid-logitech-hidpp.c b/drivers/hid/hid-logitech-hidpp.c
+index 9c1ee8e91e0ca..5efc591a02a03 100644
+--- a/drivers/hid/hid-logitech-hidpp.c
++++ b/drivers/hid/hid-logitech-hidpp.c
+@@ -77,6 +77,7 @@ MODULE_PARM_DESC(disable_tap_to_click,
+ #define HIDPP_QUIRK_HIDPP_WHEELS BIT(26)
+ #define HIDPP_QUIRK_HIDPP_EXTRA_MOUSE_BTNS BIT(27)
+ #define HIDPP_QUIRK_HIDPP_CONSUMER_VENDOR_KEYS BIT(28)
++#define HIDPP_QUIRK_HI_RES_SCROLL_1P0 BIT(29)
+
+ /* These are just aliases for now */
+ #define HIDPP_QUIRK_KBD_SCROLL_WHEEL HIDPP_QUIRK_HIDPP_WHEELS
+@@ -3472,14 +3473,8 @@ static int hidpp_initialize_hires_scroll(struct hidpp_device *hidpp)
+ hid_dbg(hidpp->hid_dev, "Detected HID++ 2.0 hi-res scrolling\n");
+ }
+ } else {
+- struct hidpp_report response;
+-
+- ret = hidpp_send_rap_command_sync(hidpp,
+- REPORT_ID_HIDPP_SHORT,
+- HIDPP_GET_REGISTER,
+- HIDPP_ENABLE_FAST_SCROLL,
+- NULL, 0, &response);
+- if (!ret) {
++ /* We cannot detect fast scrolling support on HID++ 1.0 devices */
++ if (hidpp->quirks & HIDPP_QUIRK_HI_RES_SCROLL_1P0) {
+ hidpp->capabilities |= HIDPP_CAPABILITY_HIDPP10_FAST_SCROLL;
+ hid_dbg(hidpp->hid_dev, "Detected HID++ 1.0 fast scroll\n");
+ }
+@@ -4107,6 +4102,7 @@ static int hidpp_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ bool connected;
+ unsigned int connect_mask = HID_CONNECT_DEFAULT;
+ struct hidpp_ff_private_data data;
++ bool will_restart = false;
+
+ /* report_fixup needs drvdata to be set before we call hid_parse */
+ hidpp = devm_kzalloc(&hdev->dev, sizeof(*hidpp), GFP_KERNEL);
+@@ -4162,6 +4158,10 @@ static int hidpp_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ return ret;
+ }
+
++ if (hidpp->quirks & HIDPP_QUIRK_DELAYED_INIT ||
++ hidpp->quirks & HIDPP_QUIRK_UNIFYING)
++ will_restart = true;
++
+ INIT_WORK(&hidpp->work, delayed_work_cb);
+ mutex_init(&hidpp->send_mutex);
+ init_waitqueue_head(&hidpp->wait);
+@@ -4176,7 +4176,7 @@ static int hidpp_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ * Plain USB connections need to actually call start and open
+ * on the transport driver to allow incoming data.
+ */
+- ret = hid_hw_start(hdev, 0);
++ ret = hid_hw_start(hdev, will_restart ? 0 : connect_mask);
+ if (ret) {
+ hid_err(hdev, "hw start failed\n");
+ goto hid_hw_start_fail;
+@@ -4213,6 +4213,7 @@ static int hidpp_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ hidpp->wireless_feature_index = 0;
+ else if (ret)
+ goto hid_hw_init_fail;
++ ret = 0;
+ }
+
+ if (connected && (hidpp->quirks & HIDPP_QUIRK_CLASS_WTP)) {
+@@ -4227,19 +4228,21 @@ static int hidpp_probe(struct hid_device *hdev, const struct hid_device_id *id)
+
+ hidpp_connect_event(hidpp);
+
+- /* Reset the HID node state */
+- hid_device_io_stop(hdev);
+- hid_hw_close(hdev);
+- hid_hw_stop(hdev);
++ if (will_restart) {
++ /* Reset the HID node state */
++ hid_device_io_stop(hdev);
++ hid_hw_close(hdev);
++ hid_hw_stop(hdev);
+
+- if (hidpp->quirks & HIDPP_QUIRK_NO_HIDINPUT)
+- connect_mask &= ~HID_CONNECT_HIDINPUT;
++ if (hidpp->quirks & HIDPP_QUIRK_NO_HIDINPUT)
++ connect_mask &= ~HID_CONNECT_HIDINPUT;
+
+- /* Now export the actual inputs and hidraw nodes to the world */
+- ret = hid_hw_start(hdev, connect_mask);
+- if (ret) {
+- hid_err(hdev, "%s:hid_hw_start returned error\n", __func__);
+- goto hid_hw_start_fail;
++ /* Now export the actual inputs and hidraw nodes to the world */
++ ret = hid_hw_start(hdev, connect_mask);
++ if (ret) {
++ hid_err(hdev, "%s:hid_hw_start returned error\n", __func__);
++ goto hid_hw_start_fail;
++ }
+ }
+
+ if (hidpp->quirks & HIDPP_QUIRK_CLASS_G920) {
+@@ -4297,9 +4300,15 @@ static const struct hid_device_id hidpp_devices[] = {
+ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH,
+ USB_DEVICE_ID_LOGITECH_T651),
+ .driver_data = HIDPP_QUIRK_CLASS_WTP },
++ { /* Mouse Logitech Anywhere MX */
++ LDJ_DEVICE(0x1017), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_1P0 },
+ { /* Mouse logitech M560 */
+ LDJ_DEVICE(0x402d),
+ .driver_data = HIDPP_QUIRK_DELAYED_INIT | HIDPP_QUIRK_CLASS_M560 },
++ { /* Mouse Logitech M705 (firmware RQM17) */
++ LDJ_DEVICE(0x101b), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_1P0 },
++ { /* Mouse Logitech Performance MX */
++ LDJ_DEVICE(0x101a), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_1P0 },
+ { /* Keyboard logitech K400 */
+ LDJ_DEVICE(0x4024),
+ .driver_data = HIDPP_QUIRK_CLASS_K400 },
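Two related changes run through the hidpp hunks above: fast-scroll support on HID++ 1.0 devices is now declared through a per-device quirk instead of probed with a register read, and the probe path only performs its stop/restart dance for devices that actually need it (delayed-init or Unifying quirks). A reduced model of the restart decision, with stub helpers and made-up quirk bits standing in for the real hid_hw_* calls and driver flags:

#include <stdbool.h>
#include <stdio.h>

#define QUIRK_DELAYED_INIT  (1u << 0)   /* illustrative bit values */
#define QUIRK_UNIFYING      (1u << 1)

static void start_hw(unsigned int mask) { printf("start mask=%#x\n", mask); }
static void stop_hw(void)               { printf("stop\n"); }

static void probe(unsigned long quirks, unsigned int connect_mask)
{
    bool will_restart = quirks & (QUIRK_DELAYED_INIT | QUIRK_UNIFYING);

    /* Devices that must re-enumerate after the connect event start
     * with an empty mask; everyone else gets the real mask once. */
    start_hw(will_restart ? 0 : connect_mask);

    /* ... device setup and connect handling happen here ... */

    if (will_restart) {                 /* only these devices restart */
        stop_hw();
        start_hw(connect_mask);
    }
}

int main(void)
{
    probe(0, 0x7);                      /* plain device: one start */
    probe(QUIRK_UNIFYING, 0x7);         /* unifying: start, stop, restart */
    return 0;
}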
+diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c
+index 372cbdd223e09..e31be0cb8b850 100644
+--- a/drivers/hid/hid-multitouch.c
++++ b/drivers/hid/hid-multitouch.c
+@@ -71,6 +71,7 @@ MODULE_LICENSE("GPL");
+ #define MT_QUIRK_SEPARATE_APP_REPORT BIT(19)
+ #define MT_QUIRK_FORCE_MULTI_INPUT BIT(20)
+ #define MT_QUIRK_DISABLE_WAKEUP BIT(21)
++#define MT_QUIRK_ORIENTATION_INVERT BIT(22)
+
+ #define MT_INPUTMODE_TOUCHSCREEN 0x02
+ #define MT_INPUTMODE_TOUCHPAD 0x03
+@@ -1009,6 +1010,7 @@ static int mt_process_slot(struct mt_device *td, struct input_dev *input,
+ struct mt_usages *slot)
+ {
+ struct input_mt *mt = input->mt;
++ struct hid_device *hdev = td->hdev;
+ __s32 quirks = app->quirks;
+ bool valid = true;
+ bool confidence_state = true;
+@@ -1086,6 +1088,10 @@ static int mt_process_slot(struct mt_device *td, struct input_dev *input,
+ int orientation = wide;
+ int max_azimuth;
+ int azimuth;
++ int x;
++ int y;
++ int cx;
++ int cy;
+
+ if (slot->a != DEFAULT_ZERO) {
+ /*
+@@ -1104,6 +1110,9 @@ static int mt_process_slot(struct mt_device *td, struct input_dev *input,
+ if (azimuth > max_azimuth * 2)
+ azimuth -= max_azimuth * 4;
+ orientation = -azimuth;
++ if (quirks & MT_QUIRK_ORIENTATION_INVERT)
++ orientation = -orientation;
++
+ }
+
+ if (quirks & MT_QUIRK_TOUCH_SIZE_SCALING) {
+@@ -1115,10 +1124,23 @@ static int mt_process_slot(struct mt_device *td, struct input_dev *input,
+ minor = minor >> 1;
+ }
+
+- input_event(input, EV_ABS, ABS_MT_POSITION_X, *slot->x);
+- input_event(input, EV_ABS, ABS_MT_POSITION_Y, *slot->y);
+- input_event(input, EV_ABS, ABS_MT_TOOL_X, *slot->cx);
+- input_event(input, EV_ABS, ABS_MT_TOOL_Y, *slot->cy);
++ x = hdev->quirks & HID_QUIRK_X_INVERT ?
++ input_abs_get_max(input, ABS_MT_POSITION_X) - *slot->x :
++ *slot->x;
++ y = hdev->quirks & HID_QUIRK_Y_INVERT ?
++ input_abs_get_max(input, ABS_MT_POSITION_Y) - *slot->y :
++ *slot->y;
++ cx = hdev->quirks & HID_QUIRK_X_INVERT ?
++ input_abs_get_max(input, ABS_MT_POSITION_X) - *slot->cx :
++ *slot->cx;
++ cy = hdev->quirks & HID_QUIRK_Y_INVERT ?
++ input_abs_get_max(input, ABS_MT_POSITION_Y) - *slot->cy :
++ *slot->cy;
++
++ input_event(input, EV_ABS, ABS_MT_POSITION_X, x);
++ input_event(input, EV_ABS, ABS_MT_POSITION_Y, y);
++ input_event(input, EV_ABS, ABS_MT_TOOL_X, cx);
++ input_event(input, EV_ABS, ABS_MT_TOOL_Y, cy);
+ input_event(input, EV_ABS, ABS_MT_DISTANCE, !*slot->tip_state);
+ input_event(input, EV_ABS, ABS_MT_ORIENTATION, orientation);
+ input_event(input, EV_ABS, ABS_MT_PRESSURE, *slot->p);
+@@ -1735,6 +1757,15 @@ static int mt_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ if (id->vendor == HID_ANY_ID && id->product == HID_ANY_ID)
+ td->serial_maybe = true;
+
++
++ /* Orientation is inverted if the X or Y axes are
++ * flipped, but normalized if both are inverted.
++ */
++ if (hdev->quirks & (HID_QUIRK_X_INVERT | HID_QUIRK_Y_INVERT) &&
++ !((hdev->quirks & HID_QUIRK_X_INVERT)
++ && (hdev->quirks & HID_QUIRK_Y_INVERT)))
++ td->mtclass.quirks = MT_QUIRK_ORIENTATION_INVERT;
++
+ /* This allows the driver to correctly support devices
+ * that emit events over several HID messages.
+ */
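The multitouch hunks above implement two rules worth spelling out: an inverted axis reports max - value, and touch orientation changes sign when exactly one axis is flipped (flipping both is a 180-degree rotation, which preserves orientation, hence the "normalized if both are inverted" comment). A toy model of both rules, with made-up axis maxima and sample coordinates:

#include <stdio.h>

#define QUIRK_X_INVERT 1                /* illustrative quirk bits */
#define QUIRK_Y_INVERT 2

int main(void)
{
    int quirks = QUIRK_X_INVERT;        /* flip X only */
    int max_x = 4095, max_y = 2303;     /* invented axis maxima */
    int x = 100, y = 200, orientation = 30;

    if (quirks & QUIRK_X_INVERT)
        x = max_x - x;
    if (quirks & QUIRK_Y_INVERT)
        y = max_y - y;

    /* invert orientation iff exactly one axis is flipped */
    if ((quirks & (QUIRK_X_INVERT | QUIRK_Y_INVERT)) &&
        !((quirks & QUIRK_X_INVERT) && (quirks & QUIRK_Y_INVERT)))
        orientation = -orientation;

    printf("x=%d y=%d orientation=%d\n", x, y, orientation);
    return 0;
}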
+diff --git a/drivers/hid/hid-quirks.c b/drivers/hid/hid-quirks.c
+index 5bc91f68b3747..66e64350f1386 100644
+--- a/drivers/hid/hid-quirks.c
++++ b/drivers/hid/hid-quirks.c
+@@ -1237,7 +1237,7 @@ EXPORT_SYMBOL_GPL(hid_quirks_exit);
+ static unsigned long hid_gets_squirk(const struct hid_device *hdev)
+ {
+ const struct hid_device_id *bl_entry;
+- unsigned long quirks = 0;
++ unsigned long quirks = hdev->initial_quirks;
+
+ if (hid_match_id(hdev, hid_ignore_list))
+ quirks |= HID_QUIRK_IGNORE;
+diff --git a/drivers/hid/hid-uclogic-core.c b/drivers/hid/hid-uclogic-core.c
+index cfbbc39807a69..bfbb51f8b5beb 100644
+--- a/drivers/hid/hid-uclogic-core.c
++++ b/drivers/hid/hid-uclogic-core.c
+@@ -22,25 +22,6 @@
+
+ #include "hid-ids.h"
+
+-/* Driver data */
+-struct uclogic_drvdata {
+- /* Interface parameters */
+- struct uclogic_params params;
+- /* Pointer to the replacement report descriptor. NULL if none. */
+- __u8 *desc_ptr;
+- /*
+- * Size of the replacement report descriptor.
+- * Only valid if desc_ptr is not NULL
+- */
+- unsigned int desc_size;
+- /* Pen input device */
+- struct input_dev *pen_input;
+- /* In-range timer */
+- struct timer_list inrange_timer;
+- /* Last rotary encoder state, or U8_MAX for none */
+- u8 re_state;
+-};
+-
+ /**
+ * uclogic_inrange_timeout - handle pen in-range state timeout.
+ * Emulate input events normally generated when pen goes out of range for
+@@ -202,6 +183,7 @@ static int uclogic_probe(struct hid_device *hdev,
+ }
+ timer_setup(&drvdata->inrange_timer, uclogic_inrange_timeout, 0);
+ drvdata->re_state = U8_MAX;
++ drvdata->quirks = id->driver_data;
+ hid_set_drvdata(hdev, drvdata);
+
+ /* Initialize the device and retrieve interface parameters */
+@@ -529,8 +511,14 @@ static const struct hid_device_id uclogic_devices[] = {
+ USB_DEVICE_ID_UGEE_XPPEN_TABLET_DECO01_V2) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_UGEE,
+ USB_DEVICE_ID_UGEE_XPPEN_TABLET_DECO_L) },
++ { HID_USB_DEVICE(USB_VENDOR_ID_UGEE,
++ USB_DEVICE_ID_UGEE_XPPEN_TABLET_DECO_PRO_MW),
++ .driver_data = UCLOGIC_MOUSE_FRAME_QUIRK | UCLOGIC_BATTERY_QUIRK },
+ { HID_USB_DEVICE(USB_VENDOR_ID_UGEE,
+ USB_DEVICE_ID_UGEE_XPPEN_TABLET_DECO_PRO_S) },
++ { HID_USB_DEVICE(USB_VENDOR_ID_UGEE,
++ USB_DEVICE_ID_UGEE_XPPEN_TABLET_DECO_PRO_SW),
++ .driver_data = UCLOGIC_MOUSE_FRAME_QUIRK | UCLOGIC_BATTERY_QUIRK },
+ { HID_USB_DEVICE(USB_VENDOR_ID_UGEE,
+ USB_DEVICE_ID_UGEE_XPPEN_TABLET_STAR06) },
+ { }
+diff --git a/drivers/hid/hid-uclogic-params.c b/drivers/hid/hid-uclogic-params.c
+index 3c5eea3df3288..0cc03c11ecc22 100644
+--- a/drivers/hid/hid-uclogic-params.c
++++ b/drivers/hid/hid-uclogic-params.c
+@@ -1222,6 +1222,11 @@ static int uclogic_params_ugee_v2_init_frame_mouse(struct uclogic_params *p)
+ */
+ static bool uclogic_params_ugee_v2_has_battery(struct hid_device *hdev)
+ {
++ struct uclogic_drvdata *drvdata = hid_get_drvdata(hdev);
++
++ if (drvdata->quirks & UCLOGIC_BATTERY_QUIRK)
++ return true;
++
+ /* The XP-PEN Deco LW vendor, product and version are identical to the
+ * Deco L. The only difference reported by their firmware is the product
+ * name. Add a quirk to support battery reporting on the wireless
+@@ -1298,6 +1303,7 @@ static int uclogic_params_ugee_v2_init(struct uclogic_params *params,
+ struct hid_device *hdev)
+ {
+ int rc = 0;
++ struct uclogic_drvdata *drvdata;
+ struct usb_interface *iface;
+ __u8 bInterfaceNumber;
+ const int str_desc_len = 12;
+@@ -1316,6 +1322,7 @@ static int uclogic_params_ugee_v2_init(struct uclogic_params *params,
+ goto cleanup;
+ }
+
++ drvdata = hid_get_drvdata(hdev);
+ iface = to_usb_interface(hdev->dev.parent);
+ bInterfaceNumber = iface->cur_altsetting->desc.bInterfaceNumber;
+
+@@ -1382,6 +1389,9 @@ static int uclogic_params_ugee_v2_init(struct uclogic_params *params,
+ p.pen.subreport_list[0].id = UCLOGIC_RDESC_V1_FRAME_ID;
+
+ /* Initialize the frame interface */
++ if (drvdata->quirks & UCLOGIC_MOUSE_FRAME_QUIRK)
++ frame_type = UCLOGIC_PARAMS_FRAME_MOUSE;
++
+ switch (frame_type) {
+ case UCLOGIC_PARAMS_FRAME_DIAL:
+ case UCLOGIC_PARAMS_FRAME_MOUSE:
+@@ -1659,8 +1669,12 @@ int uclogic_params_init(struct uclogic_params *params,
+ USB_DEVICE_ID_UGEE_XPPEN_TABLET_DECO01_V2):
+ case VID_PID(USB_VENDOR_ID_UGEE,
+ USB_DEVICE_ID_UGEE_XPPEN_TABLET_DECO_L):
++ case VID_PID(USB_VENDOR_ID_UGEE,
++ USB_DEVICE_ID_UGEE_XPPEN_TABLET_DECO_PRO_MW):
+ case VID_PID(USB_VENDOR_ID_UGEE,
+ USB_DEVICE_ID_UGEE_XPPEN_TABLET_DECO_PRO_S):
++ case VID_PID(USB_VENDOR_ID_UGEE,
++ USB_DEVICE_ID_UGEE_XPPEN_TABLET_DECO_PRO_SW):
+ rc = uclogic_params_ugee_v2_init(&p, hdev);
+ if (rc != 0)
+ goto cleanup;
+diff --git a/drivers/hid/hid-uclogic-params.h b/drivers/hid/hid-uclogic-params.h
+index a97477c02ff82..b0e7f3807939b 100644
+--- a/drivers/hid/hid-uclogic-params.h
++++ b/drivers/hid/hid-uclogic-params.h
+@@ -19,6 +19,9 @@
+ #include <linux/usb.h>
+ #include <linux/hid.h>
+
++#define UCLOGIC_MOUSE_FRAME_QUIRK BIT(0)
++#define UCLOGIC_BATTERY_QUIRK BIT(1)
++
+ /* Types of pen in-range reporting */
+ enum uclogic_params_pen_inrange {
+ /* Normal reports: zero - out of proximity, one - in proximity */
+@@ -215,6 +218,27 @@ struct uclogic_params {
+ struct uclogic_params_frame frame_list[3];
+ };
+
++/* Driver data */
++struct uclogic_drvdata {
++ /* Interface parameters */
++ struct uclogic_params params;
++ /* Pointer to the replacement report descriptor. NULL if none. */
++ __u8 *desc_ptr;
++ /*
++ * Size of the replacement report descriptor.
++ * Only valid if desc_ptr is not NULL
++ */
++ unsigned int desc_size;
++ /* Pen input device */
++ struct input_dev *pen_input;
++ /* In-range timer */
++ struct timer_list inrange_timer;
++ /* Last rotary encoder state, or U8_MAX for none */
++ u8 re_state;
++ /* Device quirks */
++ unsigned long quirks;
++};
++
+ /* Initialize a tablet interface and discover its parameters */
+ extern int uclogic_params_init(struct uclogic_params *params,
+ struct hid_device *hdev);
+diff --git a/drivers/hid/i2c-hid/i2c-hid-core.c b/drivers/hid/i2c-hid/i2c-hid-core.c
+index b86b62f971080..72f2c379812c7 100644
+--- a/drivers/hid/i2c-hid/i2c-hid-core.c
++++ b/drivers/hid/i2c-hid/i2c-hid-core.c
+@@ -1035,6 +1035,10 @@ int i2c_hid_core_probe(struct i2c_client *client, struct i2chid_ops *ops,
+ hid->vendor = le16_to_cpu(ihid->hdesc.wVendorID);
+ hid->product = le16_to_cpu(ihid->hdesc.wProductID);
+
++ hid->initial_quirks = quirks;
++ hid->initial_quirks |= i2c_hid_get_dmi_quirks(hid->vendor,
++ hid->product);
++
+ snprintf(hid->name, sizeof(hid->name), "%s %04X:%04X",
+ client->name, (u16)hid->vendor, (u16)hid->product);
+ strscpy(hid->phys, dev_name(&client->dev), sizeof(hid->phys));
+@@ -1048,8 +1052,6 @@ int i2c_hid_core_probe(struct i2c_client *client, struct i2chid_ops *ops,
+ goto err_mem_free;
+ }
+
+- hid->quirks |= quirks;
+-
+ return 0;
+
+ err_mem_free:
+diff --git a/drivers/hid/i2c-hid/i2c-hid-dmi-quirks.c b/drivers/hid/i2c-hid/i2c-hid-dmi-quirks.c
+index 8e0f67455c098..210f17c3a0be0 100644
+--- a/drivers/hid/i2c-hid/i2c-hid-dmi-quirks.c
++++ b/drivers/hid/i2c-hid/i2c-hid-dmi-quirks.c
+@@ -10,8 +10,10 @@
+ #include <linux/types.h>
+ #include <linux/dmi.h>
+ #include <linux/mod_devicetable.h>
++#include <linux/hid.h>
+
+ #include "i2c-hid.h"
++#include "../hid-ids.h"
+
+
+ struct i2c_hid_desc_override {
+@@ -416,6 +418,28 @@ static const struct dmi_system_id i2c_hid_dmi_desc_override_table[] = {
+ { } /* Terminate list */
+ };
+
++static const struct hid_device_id i2c_hid_elan_flipped_quirks = {
++ HID_DEVICE(BUS_I2C, HID_GROUP_MULTITOUCH_WIN_8, USB_VENDOR_ID_ELAN, 0x2dcd),
++ HID_QUIRK_X_INVERT | HID_QUIRK_Y_INVERT
++};
++
++/*
++ * This list contains devices which have specific issues based on the system
++ * they're on and not just the device itself. The driver_data will have a
++ * specific hid device to match against.
++ */
++static const struct dmi_system_id i2c_hid_dmi_quirk_table[] = {
++ {
++ .ident = "DynaBook K50/FR",
++ .matches = {
++ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Dynabook Inc."),
++ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "dynabook K50/FR"),
++ },
++ .driver_data = (void *)&i2c_hid_elan_flipped_quirks,
++ },
++ { } /* Terminate list */
++};
++
+
+ struct i2c_hid_desc *i2c_hid_get_dmi_i2c_hid_desc_override(uint8_t *i2c_name)
+ {
+@@ -450,3 +474,21 @@ char *i2c_hid_get_dmi_hid_report_desc_override(uint8_t *i2c_name,
+ *size = override->hid_report_desc_size;
+ return override->hid_report_desc;
+ }
++
++u32 i2c_hid_get_dmi_quirks(const u16 vendor, const u16 product)
++{
++ u32 quirks = 0;
++ const struct dmi_system_id *system_id =
++ dmi_first_match(i2c_hid_dmi_quirk_table);
++
++ if (system_id) {
++ const struct hid_device_id *device_id =
++ (struct hid_device_id *)(system_id->driver_data);
++
++ if (device_id && device_id->vendor == vendor &&
++ device_id->product == product)
++ quirks = device_id->driver_data;
++ }
++
++ return quirks;
++}
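The quirk lookup added above is two-stage: a DMI match on the system picks out a candidate hid_device_id, and its quirks apply only when the HID device's vendor and product also match. A reduced, self-contained model of the second stage; the struct, function name, and quirk value 0x3 are ours for illustration, only the 0x04f3:0x2dcd ELAN IDs come from the hunk:

#include <stdio.h>

struct dev_id {
    unsigned short vendor, product;
    unsigned long quirks;
};

/* Stage 2 of the lookup: the DMI table already selected sys_match
 * (or NULL); quirks apply only if the HID device also matches. */
static unsigned long dmi_quirks(const struct dev_id *sys_match,
                                unsigned short vendor, unsigned short product)
{
    if (sys_match && sys_match->vendor == vendor &&
        sys_match->product == product)
        return sys_match->quirks;
    return 0;
}

int main(void)
{
    struct dev_id flipped = { 0x04f3, 0x2dcd, 0x3 }; /* ELAN, X|Y invert */

    printf("%lx\n", dmi_quirks(&flipped, 0x04f3, 0x2dcd)); /* 3 */
    printf("%lx\n", dmi_quirks(&flipped, 0x04f3, 0x1111)); /* 0 */
    return 0;
}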
+diff --git a/drivers/hid/i2c-hid/i2c-hid.h b/drivers/hid/i2c-hid/i2c-hid.h
+index 96c75510ad3f1..2c7b66d5caa0f 100644
+--- a/drivers/hid/i2c-hid/i2c-hid.h
++++ b/drivers/hid/i2c-hid/i2c-hid.h
+@@ -9,6 +9,7 @@
+ struct i2c_hid_desc *i2c_hid_get_dmi_i2c_hid_desc_override(uint8_t *i2c_name);
+ char *i2c_hid_get_dmi_hid_report_desc_override(uint8_t *i2c_name,
+ unsigned int *size);
++u32 i2c_hid_get_dmi_quirks(const u16 vendor, const u16 product);
+ #else
+ static inline struct i2c_hid_desc
+ *i2c_hid_get_dmi_i2c_hid_desc_override(uint8_t *i2c_name)
+@@ -16,6 +17,8 @@ static inline struct i2c_hid_desc
+ static inline char *i2c_hid_get_dmi_hid_report_desc_override(uint8_t *i2c_name,
+ unsigned int *size)
+ { return NULL; }
++static inline u32 i2c_hid_get_dmi_quirks(const u16 vendor, const u16 product)
++{ return 0; }
+ #endif
+
+ /**
+diff --git a/drivers/hwmon/Kconfig b/drivers/hwmon/Kconfig
+index 3176c33af6c69..300ce8115ce4f 100644
+--- a/drivers/hwmon/Kconfig
++++ b/drivers/hwmon/Kconfig
+@@ -1516,7 +1516,7 @@ config SENSORS_NCT6775_CORE
+ config SENSORS_NCT6775
+ tristate "Platform driver for Nuvoton NCT6775F and compatibles"
+ depends on !PPC
+- depends on ACPI_WMI || ACPI_WMI=n
++ depends on ACPI || ACPI=n
+ select HWMON_VID
+ select SENSORS_NCT6775_CORE
+ help
+diff --git a/drivers/hwmon/asus-ec-sensors.c b/drivers/hwmon/asus-ec-sensors.c
+index a901e4e33d81d..b4d65916b3c00 100644
+--- a/drivers/hwmon/asus-ec-sensors.c
++++ b/drivers/hwmon/asus-ec-sensors.c
+@@ -299,6 +299,7 @@ static const struct ec_board_info board_info_pro_art_x570_creator_wifi = {
+ .sensors = SENSOR_SET_TEMP_CHIPSET_CPU_MB | SENSOR_TEMP_VRM |
+ SENSOR_TEMP_T_SENSOR | SENSOR_FAN_CPU_OPT |
+ SENSOR_CURR_CPU | SENSOR_IN_CPU_CORE,
++ .mutex_path = ASUS_HW_ACCESS_MUTEX_ASMX,
+ .family = family_amd_500_series,
+ };
+
+diff --git a/drivers/hwmon/coretemp.c b/drivers/hwmon/coretemp.c
+index ca7a9b373bbd6..3e440ebe2508c 100644
+--- a/drivers/hwmon/coretemp.c
++++ b/drivers/hwmon/coretemp.c
+@@ -588,66 +588,49 @@ static void coretemp_remove_core(struct platform_data *pdata, int indx)
+ ida_free(&pdata->ida, indx - BASE_SYSFS_ATTR_NO);
+ }
+
+-static int coretemp_probe(struct platform_device *pdev)
++static int coretemp_device_add(int zoneid)
+ {
+- struct device *dev = &pdev->dev;
++ struct platform_device *pdev;
+ struct platform_data *pdata;
++ int err;
+
+ /* Initialize the per-zone data structures */
+- pdata = devm_kzalloc(dev, sizeof(struct platform_data), GFP_KERNEL);
++ pdata = kzalloc(sizeof(*pdata), GFP_KERNEL);
+ if (!pdata)
+ return -ENOMEM;
+
+- pdata->pkg_id = pdev->id;
++ pdata->pkg_id = zoneid;
+ ida_init(&pdata->ida);
+- platform_set_drvdata(pdev, pdata);
+
+- pdata->hwmon_dev = devm_hwmon_device_register_with_groups(dev, DRVNAME,
+- pdata, NULL);
+- return PTR_ERR_OR_ZERO(pdata->hwmon_dev);
+-}
+-
+-static int coretemp_remove(struct platform_device *pdev)
+-{
+- struct platform_data *pdata = platform_get_drvdata(pdev);
+- int i;
++ pdev = platform_device_alloc(DRVNAME, zoneid);
++ if (!pdev) {
++ err = -ENOMEM;
++ goto err_free_pdata;
++ }
+
+- for (i = MAX_CORE_DATA - 1; i >= 0; --i)
+- if (pdata->core_data[i])
+- coretemp_remove_core(pdata, i);
++ err = platform_device_add(pdev);
++ if (err)
++ goto err_put_dev;
+
+- ida_destroy(&pdata->ida);
++ platform_set_drvdata(pdev, pdata);
++ zone_devices[zoneid] = pdev;
+ return 0;
+-}
+
+-static struct platform_driver coretemp_driver = {
+- .driver = {
+- .name = DRVNAME,
+- },
+- .probe = coretemp_probe,
+- .remove = coretemp_remove,
+-};
++err_put_dev:
++ platform_device_put(pdev);
++err_free_pdata:
++ kfree(pdata);
++ return err;
++}
+
+-static struct platform_device *coretemp_device_add(unsigned int cpu)
++static void coretemp_device_remove(int zoneid)
+ {
+- int err, zoneid = topology_logical_die_id(cpu);
+- struct platform_device *pdev;
+-
+- if (zoneid < 0)
+- return ERR_PTR(-ENOMEM);
+-
+- pdev = platform_device_alloc(DRVNAME, zoneid);
+- if (!pdev)
+- return ERR_PTR(-ENOMEM);
+-
+- err = platform_device_add(pdev);
+- if (err) {
+- platform_device_put(pdev);
+- return ERR_PTR(err);
+- }
++ struct platform_device *pdev = zone_devices[zoneid];
++ struct platform_data *pdata = platform_get_drvdata(pdev);
+
+- zone_devices[zoneid] = pdev;
+- return pdev;
++ ida_destroy(&pdata->ida);
++ kfree(pdata);
++ platform_device_unregister(pdev);
+ }
+
+ static int coretemp_cpu_online(unsigned int cpu)
+@@ -671,7 +654,10 @@ static int coretemp_cpu_online(unsigned int cpu)
+ if (!cpu_has(c, X86_FEATURE_DTHERM))
+ return -ENODEV;
+
+- if (!pdev) {
++ pdata = platform_get_drvdata(pdev);
++ if (!pdata->hwmon_dev) {
++ struct device *hwmon;
++
+ /* Check the microcode version of the CPU */
+ if (chk_ucode_version(cpu))
+ return -EINVAL;
+@@ -682,9 +668,11 @@ static int coretemp_cpu_online(unsigned int cpu)
+ * online. So, initialize per-pkg data structures and
+ * then bring this core online.
+ */
+- pdev = coretemp_device_add(cpu);
+- if (IS_ERR(pdev))
+- return PTR_ERR(pdev);
++ hwmon = hwmon_device_register_with_groups(&pdev->dev, DRVNAME,
++ pdata, NULL);
++ if (IS_ERR(hwmon))
++ return PTR_ERR(hwmon);
++ pdata->hwmon_dev = hwmon;
+
+ /*
+ * Check whether pkgtemp support is available.
+@@ -694,7 +682,6 @@ static int coretemp_cpu_online(unsigned int cpu)
+ coretemp_add_core(pdev, cpu, 1);
+ }
+
+- pdata = platform_get_drvdata(pdev);
+ /*
+ * Check whether a thread sibling is already online. If not add the
+ * interface for this CPU core.
+@@ -713,18 +700,14 @@ static int coretemp_cpu_offline(unsigned int cpu)
+ struct temp_data *tdata;
+ int i, indx = -1, target;
+
+- /*
+- * Don't execute this on suspend as the device remove locks
+- * up the machine.
+- */
++ /* No need to tear down any interfaces for suspend */
+ if (cpuhp_tasks_frozen)
+ return 0;
+
+ /* If the physical CPU device does not exist, just return */
+- if (!pdev)
+- return 0;
+-
+ pd = platform_get_drvdata(pdev);
++ if (!pd->hwmon_dev)
++ return 0;
+
+ for (i = 0; i < NUM_REAL_CORES; i++) {
+ if (pd->cpu_map[i] == topology_core_id(cpu)) {
+@@ -756,13 +739,14 @@ static int coretemp_cpu_offline(unsigned int cpu)
+ }
+
+ /*
+- * If all cores in this pkg are offline, remove the device. This
+- * will invoke the platform driver remove function, which cleans up
+- * the rest.
++ * If all cores in this pkg are offline, remove the interface.
+ */
++ tdata = pd->core_data[PKG_SYSFS_ATTR_NO];
+ if (cpumask_empty(&pd->cpumask)) {
+- zone_devices[topology_logical_die_id(cpu)] = NULL;
+- platform_device_unregister(pdev);
++ if (tdata)
++ coretemp_remove_core(pd, PKG_SYSFS_ATTR_NO);
++ hwmon_device_unregister(pd->hwmon_dev);
++ pd->hwmon_dev = NULL;
+ return 0;
+ }
+
+@@ -770,7 +754,6 @@ static int coretemp_cpu_offline(unsigned int cpu)
+ * Check whether this core is the target for the package
+ * interface. We need to assign it to some other cpu.
+ */
+- tdata = pd->core_data[PKG_SYSFS_ATTR_NO];
+ if (tdata && tdata->cpu == cpu) {
+ target = cpumask_first(&pd->cpumask);
+ mutex_lock(&tdata->update_lock);
+@@ -789,7 +772,7 @@ static enum cpuhp_state coretemp_hp_online;
+
+ static int __init coretemp_init(void)
+ {
+- int err;
++ int i, err;
+
+ /*
+ * CPUID.06H.EAX[0] indicates whether the CPU has thermal
+@@ -805,20 +788,22 @@ static int __init coretemp_init(void)
+ if (!zone_devices)
+ return -ENOMEM;
+
+- err = platform_driver_register(&coretemp_driver);
+- if (err)
+- goto outzone;
++ for (i = 0; i < max_zones; i++) {
++ err = coretemp_device_add(i);
++ if (err)
++ goto outzone;
++ }
+
+ err = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "hwmon/coretemp:online",
+ coretemp_cpu_online, coretemp_cpu_offline);
+ if (err < 0)
+- goto outdrv;
++ goto outzone;
+ coretemp_hp_online = err;
+ return 0;
+
+-outdrv:
+- platform_driver_unregister(&coretemp_driver);
+ outzone:
++ while (i--)
++ coretemp_device_remove(i);
+ kfree(zone_devices);
+ return err;
+ }
+@@ -826,8 +811,11 @@ module_init(coretemp_init)
+
+ static void __exit coretemp_exit(void)
+ {
++ int i;
++
+ cpuhp_remove_state(coretemp_hp_online);
+- platform_driver_unregister(&coretemp_driver);
++ for (i = 0; i < max_zones; i++)
++ coretemp_device_remove(i);
+ kfree(zone_devices);
+ }
+ module_exit(coretemp_exit)
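The coretemp rework above drops the platform driver entirely: one platform device per package zone is created at module init, the hwmon interface is registered lazily when the first CPU of a zone comes online, and a partial init failure unwinds exactly the zones already added. The unwind idiom deserves a second look, since while (i--) visits indices i-1 down to 0; a toy model with stubbed add/remove helpers:

#include <stdio.h>

static int add_zone(int i)
{
    if (i == 2)                 /* pretend zone 2 fails */
        return -1;
    printf("added %d\n", i);
    return 0;
}

static void remove_zone(int i) { printf("removed %d\n", i); }

int main(void)
{
    int i, err = 0, max_zones = 4;

    for (i = 0; i < max_zones; i++) {
        err = add_zone(i);
        if (err)
            break;
    }
    if (err)
        while (i--)             /* tears down zones 1 and 0 only */
            remove_zone(i);
    return err;
}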
+diff --git a/drivers/hwmon/ftsteutates.c b/drivers/hwmon/ftsteutates.c
+index f5b8e724a8ca1..ffa0bb3648775 100644
+--- a/drivers/hwmon/ftsteutates.c
++++ b/drivers/hwmon/ftsteutates.c
+@@ -12,6 +12,7 @@
+ #include <linux/i2c.h>
+ #include <linux/init.h>
+ #include <linux/jiffies.h>
++#include <linux/math.h>
+ #include <linux/module.h>
+ #include <linux/mutex.h>
+ #include <linux/slab.h>
+@@ -347,13 +348,15 @@ static ssize_t in_value_show(struct device *dev,
+ {
+ struct fts_data *data = dev_get_drvdata(dev);
+ int index = to_sensor_dev_attr(devattr)->index;
+- int err;
++ int value, err;
+
+ err = fts_update_device(data);
+ if (err < 0)
+ return err;
+
+- return sprintf(buf, "%u\n", data->volt[index]);
++ value = DIV_ROUND_CLOSEST(data->volt[index] * 3300, 255);
++
++ return sprintf(buf, "%d\n", value);
+ }
+
+ static ssize_t temp_value_show(struct device *dev,
+@@ -361,13 +364,15 @@ static ssize_t temp_value_show(struct device *dev,
+ {
+ struct fts_data *data = dev_get_drvdata(dev);
+ int index = to_sensor_dev_attr(devattr)->index;
+- int err;
++ int value, err;
+
+ err = fts_update_device(data);
+ if (err < 0)
+ return err;
+
+- return sprintf(buf, "%u\n", data->temp_input[index]);
++ value = (data->temp_input[index] - 64) * 1000;
++
++ return sprintf(buf, "%d\n", value);
+ }
+
+ static ssize_t temp_fault_show(struct device *dev,
+@@ -436,13 +441,15 @@ static ssize_t fan_value_show(struct device *dev,
+ {
+ struct fts_data *data = dev_get_drvdata(dev);
+ int index = to_sensor_dev_attr(devattr)->index;
+- int err;
++ int value, err;
+
+ err = fts_update_device(data);
+ if (err < 0)
+ return err;
+
+- return sprintf(buf, "%u\n", data->fan_input[index]);
++ value = data->fan_input[index] * 60;
++
++ return sprintf(buf, "%d\n", value);
+ }
+
+ static ssize_t fan_source_show(struct device *dev,
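The ftsteutates hunks convert raw register values into the units hwmon sysfs expects: voltage as raw * 3300 / 255 millivolts rounded to nearest, temperature as (raw - 64) degrees C expressed in millidegrees, and fan speed as raw * 60 RPM. A quick standalone check of the arithmetic; the sample readings are arbitrary:

#include <stdio.h>

/* Same rounding as the kernel's DIV_ROUND_CLOSEST for positive args. */
static int div_round_closest(int x, int d)
{
    return (x + d / 2) / d;
}

int main(void)
{
    int raw_volt = 128, raw_temp = 89, raw_fan = 20; /* sample values */

    printf("in:   %d mV\n", div_round_closest(raw_volt * 3300, 255));
    printf("temp: %d mC\n", (raw_temp - 64) * 1000);  /* 25000 */
    printf("fan:  %d RPM\n", raw_fan * 60);           /* 1200 */
    return 0;
}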
+diff --git a/drivers/hwmon/ltc2945.c b/drivers/hwmon/ltc2945.c
+index 9adebb59f6042..c06ab7317431f 100644
+--- a/drivers/hwmon/ltc2945.c
++++ b/drivers/hwmon/ltc2945.c
+@@ -248,6 +248,8 @@ static ssize_t ltc2945_value_store(struct device *dev,
+
+ /* convert to register value, then clamp and write result */
+ regval = ltc2945_val_to_reg(dev, reg, val);
++ if (regval < 0)
++ return regval;
+ if (is_power_reg(reg)) {
+ regval = clamp_val(regval, 0, 0xffffff);
+ regbuf[0] = regval >> 16;
+diff --git a/drivers/hwmon/mlxreg-fan.c b/drivers/hwmon/mlxreg-fan.c
+index b48bd7c961d66..96017cc8da7ec 100644
+--- a/drivers/hwmon/mlxreg-fan.c
++++ b/drivers/hwmon/mlxreg-fan.c
+@@ -155,6 +155,12 @@ mlxreg_fan_read(struct device *dev, enum hwmon_sensor_types type, u32 attr,
+ if (err)
+ return err;
+
++ if (MLXREG_FAN_GET_FAULT(regval, tacho->mask)) {
++ /* FAN is broken - return zero for FAN speed. */
++ *val = 0;
++ return 0;
++ }
++
+ *val = MLXREG_FAN_GET_RPM(regval, fan->divider,
+ fan->samples);
+ break;
+diff --git a/drivers/hwmon/nct6775-core.c b/drivers/hwmon/nct6775-core.c
+index da9ec6983e139..c54233f0369b2 100644
+--- a/drivers/hwmon/nct6775-core.c
++++ b/drivers/hwmon/nct6775-core.c
+@@ -1150,7 +1150,7 @@ static int nct6775_write_fan_div(struct nct6775_data *data, int nr)
+ if (err)
+ return err;
+ reg &= 0x70 >> oddshift;
+- reg |= data->fan_div[nr] & (0x7 << oddshift);
++ reg |= (data->fan_div[nr] & 0x7) << oddshift;
+ return nct6775_write_value(data, fandiv_reg, reg);
+ }
+
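The one-line nct6775-core fix above is a classic mask-then-shift bug: the old code masked the 3-bit divider against bits that had already been shifted into position, so for odd-numbered fans (oddshift == 4) the divider bits were cleared and zero was written. Demonstrating the difference with an illustrative divider value:

#include <stdio.h>

int main(void)
{
    unsigned int fan_div = 0x5, oddshift = 4, reg = 0;

    unsigned int buggy = reg | (fan_div & (0x7 << oddshift)); /* old */
    unsigned int fixed = reg | ((fan_div & 0x7) << oddshift); /* new */

    printf("buggy=0x%x fixed=0x%x\n", buggy, fixed); /* 0x0 vs 0x50 */
    return 0;
}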
+diff --git a/drivers/hwmon/nct6775-platform.c b/drivers/hwmon/nct6775-platform.c
+index bf43f73dc835f..76c6b564d7fc4 100644
+--- a/drivers/hwmon/nct6775-platform.c
++++ b/drivers/hwmon/nct6775-platform.c
+@@ -17,7 +17,6 @@
+ #include <linux/module.h>
+ #include <linux/platform_device.h>
+ #include <linux/regmap.h>
+-#include <linux/wmi.h>
+
+ #include "nct6775.h"
+
+@@ -107,40 +106,51 @@ struct nct6775_sio_data {
+ void (*sio_exit)(struct nct6775_sio_data *sio_data);
+ };
+
+-#define ASUSWMI_MONITORING_GUID "466747A0-70EC-11DE-8A39-0800200C9A66"
++#define ASUSWMI_METHOD "WMBD"
+ #define ASUSWMI_METHODID_RSIO 0x5253494F
+ #define ASUSWMI_METHODID_WSIO 0x5753494F
+ #define ASUSWMI_METHODID_RHWM 0x5248574D
+ #define ASUSWMI_METHODID_WHWM 0x5748574D
+ #define ASUSWMI_UNSUPPORTED_METHOD 0xFFFFFFFE
++#define ASUSWMI_DEVICE_HID "PNP0C14"
++#define ASUSWMI_DEVICE_UID "ASUSWMI"
++#define ASUSMSI_DEVICE_UID "AsusMbSwInterface"
++
++#if IS_ENABLED(CONFIG_ACPI)
++/*
++ * ASUS boards have only one device with WMI "WMBD" method and have provided
++ * access to only one SuperIO chip at 0x0290.
++ */
++static struct acpi_device *asus_acpi_dev;
++#endif
+
+ static int nct6775_asuswmi_evaluate_method(u32 method_id, u8 bank, u8 reg, u8 val, u32 *retval)
+ {
+-#if IS_ENABLED(CONFIG_ACPI_WMI)
++#if IS_ENABLED(CONFIG_ACPI)
++ acpi_handle handle = acpi_device_handle(asus_acpi_dev);
+ u32 args = bank | (reg << 8) | (val << 16);
+- struct acpi_buffer input = { (acpi_size) sizeof(args), &args };
+- struct acpi_buffer output = { ACPI_ALLOCATE_BUFFER, NULL };
++ struct acpi_object_list input;
++ union acpi_object params[3];
++ unsigned long long result;
+ acpi_status status;
+- union acpi_object *obj;
+- u32 tmp = ASUSWMI_UNSUPPORTED_METHOD;
+-
+- status = wmi_evaluate_method(ASUSWMI_MONITORING_GUID, 0,
+- method_id, &input, &output);
+
++ params[0].type = ACPI_TYPE_INTEGER;
++ params[0].integer.value = 0;
++ params[1].type = ACPI_TYPE_INTEGER;
++ params[1].integer.value = method_id;
++ params[2].type = ACPI_TYPE_BUFFER;
++ params[2].buffer.length = sizeof(args);
++ params[2].buffer.pointer = (void *)&args;
++ input.count = 3;
++ input.pointer = params;
++
++ status = acpi_evaluate_integer(handle, ASUSWMI_METHOD, &input, &result);
+ if (ACPI_FAILURE(status))
+ return -EIO;
+
+- obj = output.pointer;
+- if (obj && obj->type == ACPI_TYPE_INTEGER)
+- tmp = obj->integer.value;
+-
+ if (retval)
+- *retval = tmp;
+-
+- kfree(obj);
++ *retval = (u32)result & 0xFFFFFFFF;
+
+- if (tmp == ASUSWMI_UNSUPPORTED_METHOD)
+- return -ENODEV;
+ return 0;
+ #else
+ return -EOPNOTSUPP;
+@@ -1099,6 +1109,91 @@ static const char * const asus_wmi_boards[] = {
+ "TUF GAMING Z490-PLUS (WI-FI)",
+ };
+
++static const char * const asus_msi_boards[] = {
++ "EX-B660M-V5 PRO D4",
++ "PRIME B650-PLUS",
++ "PRIME B650M-A",
++ "PRIME B650M-A AX",
++ "PRIME B650M-A II",
++ "PRIME B650M-A WIFI",
++ "PRIME B650M-A WIFI II",
++ "PRIME B660M-A D4",
++ "PRIME B660M-A WIFI D4",
++ "PRIME X670-P",
++ "PRIME X670-P WIFI",
++ "PRIME X670E-PRO WIFI",
++ "Pro B660M-C-D4",
++ "ProArt B660-CREATOR D4",
++ "ProArt X670E-CREATOR WIFI",
++ "ROG CROSSHAIR X670E EXTREME",
++ "ROG CROSSHAIR X670E GENE",
++ "ROG CROSSHAIR X670E HERO",
++ "ROG MAXIMUS XIII EXTREME GLACIAL",
++ "ROG MAXIMUS Z690 EXTREME",
++ "ROG MAXIMUS Z690 EXTREME GLACIAL",
++ "ROG STRIX B650-A GAMING WIFI",
++ "ROG STRIX B650E-E GAMING WIFI",
++ "ROG STRIX B650E-F GAMING WIFI",
++ "ROG STRIX B650E-I GAMING WIFI",
++ "ROG STRIX B660-A GAMING WIFI D4",
++ "ROG STRIX B660-F GAMING WIFI",
++ "ROG STRIX B660-G GAMING WIFI",
++ "ROG STRIX B660-I GAMING WIFI",
++ "ROG STRIX X670E-A GAMING WIFI",
++ "ROG STRIX X670E-E GAMING WIFI",
++ "ROG STRIX X670E-F GAMING WIFI",
++ "ROG STRIX X670E-I GAMING WIFI",
++ "ROG STRIX Z590-A GAMING WIFI II",
++ "ROG STRIX Z690-A GAMING WIFI D4",
++ "TUF GAMING B650-PLUS",
++ "TUF GAMING B650-PLUS WIFI",
++ "TUF GAMING B650M-PLUS",
++ "TUF GAMING B650M-PLUS WIFI",
++ "TUF GAMING B660M-PLUS WIFI",
++ "TUF GAMING X670E-PLUS",
++ "TUF GAMING X670E-PLUS WIFI",
++ "TUF GAMING Z590-PLUS WIFI",
++};
++
++#if IS_ENABLED(CONFIG_ACPI)
++/*
++ * Callback for acpi_bus_for_each_dev() to find the right device
++ * by _UID and _HID and return 1 to stop iteration.
++ */
++static int nct6775_asuswmi_device_match(struct device *dev, void *data)
++{
++ struct acpi_device *adev = to_acpi_device(dev);
++ const char *uid = acpi_device_uid(adev);
++ const char *hid = acpi_device_hid(adev);
++
++ if (hid && !strcmp(hid, ASUSWMI_DEVICE_HID) && uid && !strcmp(uid, data)) {
++ asus_acpi_dev = adev;
++ return 1;
++ }
++
++ return 0;
++}
++#endif
++
++static enum sensor_access nct6775_determine_access(const char *device_uid)
++{
++#if IS_ENABLED(CONFIG_ACPI)
++ u8 tmp;
++
++ acpi_bus_for_each_dev(nct6775_asuswmi_device_match, (void *)device_uid);
++ if (!asus_acpi_dev)
++ return access_direct;
++
++ /* if reading chip id via ACPI succeeds, use WMI "WMBD" method for access */
++ if (!nct6775_asuswmi_read(0, NCT6775_PORT_CHIPID, &tmp) && tmp) {
++ pr_debug("Using Asus WMBD method of %s to access %#x chip.\n", device_uid, tmp);
++ return access_asuswmi;
++ }
++#endif
++
++ return access_direct;
++}
++
+ static int __init sensors_nct6775_platform_init(void)
+ {
+ int i, err;
+@@ -1109,7 +1204,6 @@ static int __init sensors_nct6775_platform_init(void)
+ int sioaddr[2] = { 0x2e, 0x4e };
+ enum sensor_access access = access_direct;
+ const char *board_vendor, *board_name;
+- u8 tmp;
+
+ err = platform_driver_register(&nct6775_driver);
+ if (err)
+@@ -1122,15 +1216,13 @@ static int __init sensors_nct6775_platform_init(void)
+ !strcmp(board_vendor, "ASUSTeK COMPUTER INC.")) {
+ err = match_string(asus_wmi_boards, ARRAY_SIZE(asus_wmi_boards),
+ board_name);
+- if (err >= 0) {
+- /* if reading chip id via WMI succeeds, use WMI */
+- if (!nct6775_asuswmi_read(0, NCT6775_PORT_CHIPID, &tmp) && tmp) {
+- pr_info("Using Asus WMI to access %#x chip.\n", tmp);
+- access = access_asuswmi;
+- } else {
+- pr_err("Can't read ChipID by Asus WMI.\n");
+- }
+- }
++ if (err >= 0)
++ access = nct6775_determine_access(ASUSWMI_DEVICE_UID);
++
++ err = match_string(asus_msi_boards, ARRAY_SIZE(asus_msi_boards),
++ board_name);
++ if (err >= 0)
++ access = nct6775_determine_access(ASUSMSI_DEVICE_UID);
+ }
+
+ /*
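A detail of the ACPI path added above: the WMBD method takes bank, register, and value packed into a single 32-bit buffer argument, bank | (reg << 8) | (val << 16). A standalone check of the packing; the register number below is invented for the example:

#include <stdio.h>
#include <stdint.h>

static uint32_t pack_wmbd_args(uint8_t bank, uint8_t reg, uint8_t val)
{
    return bank | (reg << 8) | (val << 16);
}

int main(void)
{
    /* bank 0, hypothetical register 0x58, value 0 */
    printf("args=0x%06x\n", pack_wmbd_args(0, 0x58, 0)); /* 0x005800 */
    return 0;
}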
+diff --git a/drivers/hwmon/peci/cputemp.c b/drivers/hwmon/peci/cputemp.c
+index 57470fda5f6c9..30850a479f61f 100644
+--- a/drivers/hwmon/peci/cputemp.c
++++ b/drivers/hwmon/peci/cputemp.c
+@@ -402,7 +402,7 @@ static int create_temp_label(struct peci_cputemp *priv)
+ unsigned long core_max = find_last_bit(priv->core_mask, CORE_NUMS_MAX);
+ int i;
+
+- priv->coretemp_label = devm_kzalloc(priv->dev, core_max * sizeof(char *), GFP_KERNEL);
++ priv->coretemp_label = devm_kzalloc(priv->dev, (core_max + 1) * sizeof(char *), GFP_KERNEL);
+ if (!priv->coretemp_label)
+ return -ENOMEM;
+
+diff --git a/drivers/hwtracing/coresight/coresight-cti-core.c b/drivers/hwtracing/coresight/coresight-cti-core.c
+index d2cf4f4848e1b..838872f2484d3 100644
+--- a/drivers/hwtracing/coresight/coresight-cti-core.c
++++ b/drivers/hwtracing/coresight/coresight-cti-core.c
+@@ -151,9 +151,16 @@ static int cti_disable_hw(struct cti_drvdata *drvdata)
+ {
+ struct cti_config *config = &drvdata->config;
+ struct coresight_device *csdev = drvdata->csdev;
++ int ret = 0;
+
+ spin_lock(&drvdata->spinlock);
+
++ /* don't allow negative refcounts, return an error */
++ if (!atomic_read(&drvdata->config.enable_req_count)) {
++ ret = -EINVAL;
++ goto cti_not_disabled;
++ }
++
+ /* check refcount - disable on 0 */
+ if (atomic_dec_return(&drvdata->config.enable_req_count) > 0)
+ goto cti_not_disabled;
+@@ -171,12 +178,12 @@ static int cti_disable_hw(struct cti_drvdata *drvdata)
+ coresight_disclaim_device_unlocked(csdev);
+ CS_LOCK(drvdata->base);
+ spin_unlock(&drvdata->spinlock);
+- return 0;
++ return ret;
+
+ /* not disabled this call */
+ cti_not_disabled:
+ spin_unlock(&drvdata->spinlock);
+- return 0;
++ return ret;
+ }
+
+ void cti_write_single_reg(struct cti_drvdata *drvdata, int offset, u32 value)
+diff --git a/drivers/hwtracing/coresight/coresight-cti-sysfs.c b/drivers/hwtracing/coresight/coresight-cti-sysfs.c
+index 6d59c815ecf5e..71e7a8266bb32 100644
+--- a/drivers/hwtracing/coresight/coresight-cti-sysfs.c
++++ b/drivers/hwtracing/coresight/coresight-cti-sysfs.c
+@@ -108,10 +108,19 @@ static ssize_t enable_store(struct device *dev,
+ if (ret)
+ return ret;
+
+- if (val)
++ if (val) {
++ ret = pm_runtime_resume_and_get(dev->parent);
++ if (ret)
++ return ret;
+ ret = cti_enable(drvdata->csdev);
+- else
++ if (ret)
++ pm_runtime_put(dev->parent);
++ } else {
+ ret = cti_disable(drvdata->csdev);
++ if (!ret)
++ pm_runtime_put(dev->parent);
++ }
++
+ if (ret)
+ return ret;
+ return size;
+diff --git a/drivers/hwtracing/coresight/coresight-etm4x-core.c b/drivers/hwtracing/coresight/coresight-etm4x-core.c
+index 1cc052979e016..77bca6932f017 100644
+--- a/drivers/hwtracing/coresight/coresight-etm4x-core.c
++++ b/drivers/hwtracing/coresight/coresight-etm4x-core.c
+@@ -427,8 +427,10 @@ static int etm4_enable_hw(struct etmv4_drvdata *drvdata)
+ etm4x_relaxed_write32(csa, config->vipcssctlr, TRCVIPCSSCTLR);
+ for (i = 0; i < drvdata->nrseqstate - 1; i++)
+ etm4x_relaxed_write32(csa, config->seq_ctrl[i], TRCSEQEVRn(i));
+- etm4x_relaxed_write32(csa, config->seq_rst, TRCSEQRSTEVR);
+- etm4x_relaxed_write32(csa, config->seq_state, TRCSEQSTR);
++ if (drvdata->nrseqstate) {
++ etm4x_relaxed_write32(csa, config->seq_rst, TRCSEQRSTEVR);
++ etm4x_relaxed_write32(csa, config->seq_state, TRCSEQSTR);
++ }
+ etm4x_relaxed_write32(csa, config->ext_inp, TRCEXTINSELR);
+ for (i = 0; i < drvdata->nr_cntr; i++) {
+ etm4x_relaxed_write32(csa, config->cntrldvr[i], TRCCNTRLDVRn(i));
+@@ -1634,8 +1636,10 @@ static int __etm4_cpu_save(struct etmv4_drvdata *drvdata)
+ for (i = 0; i < drvdata->nrseqstate - 1; i++)
+ state->trcseqevr[i] = etm4x_read32(csa, TRCSEQEVRn(i));
+
+- state->trcseqrstevr = etm4x_read32(csa, TRCSEQRSTEVR);
+- state->trcseqstr = etm4x_read32(csa, TRCSEQSTR);
++ if (drvdata->nrseqstate) {
++ state->trcseqrstevr = etm4x_read32(csa, TRCSEQRSTEVR);
++ state->trcseqstr = etm4x_read32(csa, TRCSEQSTR);
++ }
+ state->trcextinselr = etm4x_read32(csa, TRCEXTINSELR);
+
+ for (i = 0; i < drvdata->nr_cntr; i++) {
+@@ -1763,8 +1767,10 @@ static void __etm4_cpu_restore(struct etmv4_drvdata *drvdata)
+ for (i = 0; i < drvdata->nrseqstate - 1; i++)
+ etm4x_relaxed_write32(csa, state->trcseqevr[i], TRCSEQEVRn(i));
+
+- etm4x_relaxed_write32(csa, state->trcseqrstevr, TRCSEQRSTEVR);
+- etm4x_relaxed_write32(csa, state->trcseqstr, TRCSEQSTR);
++ if (drvdata->nrseqstate) {
++ etm4x_relaxed_write32(csa, state->trcseqrstevr, TRCSEQRSTEVR);
++ etm4x_relaxed_write32(csa, state->trcseqstr, TRCSEQSTR);
++ }
+ etm4x_relaxed_write32(csa, state->trcextinselr, TRCEXTINSELR);
+
+ for (i = 0; i < drvdata->nr_cntr; i++) {
+diff --git a/drivers/hwtracing/ptt/hisi_ptt.c b/drivers/hwtracing/ptt/hisi_ptt.c
+index 5d5526aa60c40..30f1525639b57 100644
+--- a/drivers/hwtracing/ptt/hisi_ptt.c
++++ b/drivers/hwtracing/ptt/hisi_ptt.c
+@@ -356,8 +356,18 @@ static int hisi_ptt_register_irq(struct hisi_ptt *hisi_ptt)
+
+ static int hisi_ptt_init_filters(struct pci_dev *pdev, void *data)
+ {
++ struct pci_dev *root_port = pcie_find_root_port(pdev);
+ struct hisi_ptt_filter_desc *filter;
+ struct hisi_ptt *hisi_ptt = data;
++ u32 port_devid;
++
++ if (!root_port)
++ return 0;
++
++ port_devid = PCI_DEVID(root_port->bus->number, root_port->devfn);
++ if (port_devid < hisi_ptt->lower_bdf ||
++ port_devid > hisi_ptt->upper_bdf)
++ return 0;
+
+ /*
+ * We won't fail the probe if filter allocation failed here. The filters
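The new guard above skips devices whose root port lies outside the PTT core's BDF window. PCI_DEVID() folds a bus number and devfn into one 16-bit ordinal, which reduces the range check to a pair of comparisons; a minimal model with invented window bounds:

#include <stdbool.h>
#include <stdio.h>

static unsigned int pci_devid(unsigned int bus, unsigned int devfn)
{
    return (bus << 8) | devfn;  /* same layout as PCI_DEVID() */
}

static bool in_window(unsigned int devid, unsigned int lo, unsigned int hi)
{
    return devid >= lo && devid <= hi;
}

int main(void)
{
    unsigned int lo = pci_devid(0x80, 0), hi = pci_devid(0x9f, 0xff);

    printf("%d\n", in_window(pci_devid(0x81, 0x10), lo, hi)); /* 1 */
    printf("%d\n", in_window(pci_devid(0x20, 0x08), lo, hi)); /* 0 */
    return 0;
}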
+diff --git a/drivers/i2c/busses/i2c-designware-common.c b/drivers/i2c/busses/i2c-designware-common.c
+index 581e02cc979a0..2f2e99882b011 100644
+--- a/drivers/i2c/busses/i2c-designware-common.c
++++ b/drivers/i2c/busses/i2c-designware-common.c
+@@ -465,7 +465,7 @@ void __i2c_dw_disable(struct dw_i2c_dev *dev)
+ dev_warn(dev->dev, "timeout in disabling adapter\n");
+ }
+
+-unsigned long i2c_dw_clk_rate(struct dw_i2c_dev *dev)
++u32 i2c_dw_clk_rate(struct dw_i2c_dev *dev)
+ {
+ /*
+ * Clock is not necessary if we got LCNT/HCNT values directly from
+diff --git a/drivers/i2c/busses/i2c-designware-core.h b/drivers/i2c/busses/i2c-designware-core.h
+index 95ebc5eaa5d12..6bc2edec14f2f 100644
+--- a/drivers/i2c/busses/i2c-designware-core.h
++++ b/drivers/i2c/busses/i2c-designware-core.h
+@@ -320,7 +320,7 @@ int i2c_dw_init_regmap(struct dw_i2c_dev *dev);
+ u32 i2c_dw_scl_hcnt(u32 ic_clk, u32 tSYMBOL, u32 tf, int cond, int offset);
+ u32 i2c_dw_scl_lcnt(u32 ic_clk, u32 tLOW, u32 tf, int offset);
+ int i2c_dw_set_sda_hold(struct dw_i2c_dev *dev);
+-unsigned long i2c_dw_clk_rate(struct dw_i2c_dev *dev);
++u32 i2c_dw_clk_rate(struct dw_i2c_dev *dev);
+ int i2c_dw_prepare_clk(struct dw_i2c_dev *dev, bool prepare);
+ int i2c_dw_acquire_lock(struct dw_i2c_dev *dev);
+ void i2c_dw_release_lock(struct dw_i2c_dev *dev);
+diff --git a/drivers/i2c/busses/i2c-qcom-geni.c b/drivers/i2c/busses/i2c-qcom-geni.c
+index fd70794bfceec..a378f679b499d 100644
+--- a/drivers/i2c/busses/i2c-qcom-geni.c
++++ b/drivers/i2c/busses/i2c-qcom-geni.c
+@@ -1025,7 +1025,7 @@ static const struct dev_pm_ops geni_i2c_pm_ops = {
+ NULL)
+ };
+
+-const struct geni_i2c_desc i2c_master_hub = {
++static const struct geni_i2c_desc i2c_master_hub = {
+ .has_core_clk = true,
+ .icc_ddr = NULL,
+ .no_dma_support = true,
+diff --git a/drivers/idle/intel_idle.c b/drivers/idle/intel_idle.c
+index cfeb24d40d378..f060ac7376e69 100644
+--- a/drivers/idle/intel_idle.c
++++ b/drivers/idle/intel_idle.c
+@@ -168,13 +168,7 @@ static __cpuidle int intel_idle_irq(struct cpuidle_device *dev,
+
+ raw_local_irq_enable();
+ ret = __intel_idle(dev, drv, index);
+-
+- /*
+- * The lockdep hardirqs state may be changed to 'on' with timer
+- * tick interrupt followed by __do_softirq(). Use local_irq_disable()
+- * to keep the hardirqs state correct.
+- */
+- local_irq_disable();
++ raw_local_irq_disable();
+
+ return ret;
+ }
+diff --git a/drivers/iio/light/tsl2563.c b/drivers/iio/light/tsl2563.c
+index d0e42b73203a6..71302ae864d99 100644
+--- a/drivers/iio/light/tsl2563.c
++++ b/drivers/iio/light/tsl2563.c
+@@ -704,6 +704,7 @@ static int tsl2563_probe(struct i2c_client *client)
+ struct iio_dev *indio_dev;
+ struct tsl2563_chip *chip;
+ struct tsl2563_platform_data *pdata = client->dev.platform_data;
++ unsigned long irq_flags;
+ int err = 0;
+ u8 id = 0;
+
+@@ -759,10 +760,15 @@ static int tsl2563_probe(struct i2c_client *client)
+ indio_dev->info = &tsl2563_info_no_irq;
+
+ if (client->irq) {
++ irq_flags = irq_get_trigger_type(client->irq);
++ if (irq_flags == IRQF_TRIGGER_NONE)
++ irq_flags = IRQF_TRIGGER_RISING;
++ irq_flags |= IRQF_ONESHOT;
++
+ err = devm_request_threaded_irq(&client->dev, client->irq,
+ NULL,
+ &tsl2563_event_handler,
+- IRQF_TRIGGER_RISING | IRQF_ONESHOT,
++ irq_flags,
+ "tsl2563_event",
+ indio_dev);
+ if (err) {
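The tsl2563 change stops hard-coding a rising-edge trigger: the trigger type described by firmware (device tree or ACPI) is used when present, the old rising-edge default applies only when none is set, and IRQF_ONESHOT is always added for the threaded handler. A reduced model of the flag selection, with the constants redefined so the snippet stands alone:

#include <stdio.h>

#define IRQF_TRIGGER_NONE   0x0
#define IRQF_TRIGGER_RISING 0x1
#define IRQF_ONESHOT        0x2000  /* values match the kernel's */

static unsigned long pick_irq_flags(unsigned long fw_type)
{
    unsigned long flags = fw_type;

    if (flags == IRQF_TRIGGER_NONE)
        flags = IRQF_TRIGGER_RISING;    /* legacy default */
    return flags | IRQF_ONESHOT;        /* required for threaded IRQ */
}

int main(void)
{
    printf("%#lx\n", pick_irq_flags(IRQF_TRIGGER_NONE)); /* 0x2001 */
    printf("%#lx\n", pick_irq_flags(0x2 /* falling */)); /* 0x2002 */
    return 0;
}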
+diff --git a/drivers/infiniband/hw/cxgb4/cm.c b/drivers/infiniband/hw/cxgb4/cm.c
+index 499a425a33791..ced615b5ea096 100644
+--- a/drivers/infiniband/hw/cxgb4/cm.c
++++ b/drivers/infiniband/hw/cxgb4/cm.c
+@@ -2676,6 +2676,9 @@ static int pass_establish(struct c4iw_dev *dev, struct sk_buff *skb)
+ u16 tcp_opt = ntohs(req->tcp_opt);
+
+ ep = get_ep_from_tid(dev, tid);
++ if (!ep)
++ return 0;
++
+ pr_debug("ep %p tid %u\n", ep, ep->hwtid);
+ ep->snd_seq = be32_to_cpu(req->snd_isn);
+ ep->rcv_seq = be32_to_cpu(req->rcv_isn);
+@@ -4144,6 +4147,10 @@ static int rx_pkt(struct c4iw_dev *dev, struct sk_buff *skb)
+
+ if (neigh->dev->flags & IFF_LOOPBACK) {
+ pdev = ip_dev_find(&init_net, iph->daddr);
++ if (!pdev) {
++ pr_err("%s - failed to find device!\n", __func__);
++ goto free_dst;
++ }
+ e = cxgb4_l2t_get(dev->rdev.lldi.l2t, neigh,
+ pdev, 0);
+ pi = (struct port_info *)netdev_priv(pdev);
+diff --git a/drivers/infiniband/hw/cxgb4/restrack.c b/drivers/infiniband/hw/cxgb4/restrack.c
+index ff645b955a082..fd22c85d35f4f 100644
+--- a/drivers/infiniband/hw/cxgb4/restrack.c
++++ b/drivers/infiniband/hw/cxgb4/restrack.c
+@@ -238,7 +238,7 @@ int c4iw_fill_res_cm_id_entry(struct sk_buff *msg,
+ if (rdma_nl_put_driver_u64_hex(msg, "history", epcp->history))
+ goto err_cancel_table;
+
+- if (epcp->state == LISTEN) {
++ if (listen_ep) {
+ if (rdma_nl_put_driver_u32(msg, "stid", listen_ep->stid))
+ goto err_cancel_table;
+ if (rdma_nl_put_driver_u32(msg, "backlog", listen_ep->backlog))
+diff --git a/drivers/infiniband/hw/erdma/erdma_verbs.c b/drivers/infiniband/hw/erdma/erdma_verbs.c
+index 5dab1e87975ba..9c30d78730aa1 100644
+--- a/drivers/infiniband/hw/erdma/erdma_verbs.c
++++ b/drivers/infiniband/hw/erdma/erdma_verbs.c
+@@ -1110,12 +1110,14 @@ int erdma_mmap(struct ib_ucontext *ctx, struct vm_area_struct *vma)
+ prot = pgprot_device(vma->vm_page_prot);
+ break;
+ default:
+- return -EINVAL;
++ err = -EINVAL;
++ goto put_entry;
+ }
+
+ err = rdma_user_mmap_io(ctx, vma, PFN_DOWN(entry->address), PAGE_SIZE,
+ prot, rdma_entry);
+
++put_entry:
+ rdma_user_mmap_entry_put(rdma_entry);
+ return err;
+ }
+diff --git a/drivers/infiniband/hw/hfi1/sdma.c b/drivers/infiniband/hw/hfi1/sdma.c
+index a95b654f52540..8ed20392e9f0d 100644
+--- a/drivers/infiniband/hw/hfi1/sdma.c
++++ b/drivers/infiniband/hw/hfi1/sdma.c
+@@ -3160,8 +3160,7 @@ int _pad_sdma_tx_descs(struct hfi1_devdata *dd, struct sdma_txreq *tx)
+ {
+ int rval = 0;
+
+- tx->num_desc++;
+- if ((unlikely(tx->num_desc == tx->desc_limit))) {
++ if ((unlikely(tx->num_desc + 1 == tx->desc_limit))) {
+ rval = _extend_sdma_tx_descs(dd, tx);
+ if (rval) {
+ __sdma_txclean(dd, tx);
+@@ -3174,6 +3173,7 @@ int _pad_sdma_tx_descs(struct hfi1_devdata *dd, struct sdma_txreq *tx)
+ SDMA_MAP_NONE,
+ dd->sdma_pad_phys,
+ sizeof(u32) - (tx->packet_len & (sizeof(u32) - 1)));
++ tx->num_desc++;
+ _sdma_close_tx(dd, tx);
+ return rval;
+ }
+diff --git a/drivers/infiniband/hw/hfi1/sdma.h b/drivers/infiniband/hw/hfi1/sdma.h
+index d8170fcbfbdd5..b023fc461bd51 100644
+--- a/drivers/infiniband/hw/hfi1/sdma.h
++++ b/drivers/infiniband/hw/hfi1/sdma.h
+@@ -631,14 +631,13 @@ static inline void sdma_txclean(struct hfi1_devdata *dd, struct sdma_txreq *tx)
+ static inline void _sdma_close_tx(struct hfi1_devdata *dd,
+ struct sdma_txreq *tx)
+ {
+- tx->descp[tx->num_desc].qw[0] |=
+- SDMA_DESC0_LAST_DESC_FLAG;
+- tx->descp[tx->num_desc].qw[1] |=
+- dd->default_desc1;
++ u16 last_desc = tx->num_desc - 1;
++
++ tx->descp[last_desc].qw[0] |= SDMA_DESC0_LAST_DESC_FLAG;
++ tx->descp[last_desc].qw[1] |= dd->default_desc1;
+ if (tx->flags & SDMA_TXREQ_F_URGENT)
+- tx->descp[tx->num_desc].qw[1] |=
+- (SDMA_DESC1_HEAD_TO_HOST_FLAG |
+- SDMA_DESC1_INT_REQ_FLAG);
++ tx->descp[last_desc].qw[1] |= (SDMA_DESC1_HEAD_TO_HOST_FLAG |
++ SDMA_DESC1_INT_REQ_FLAG);
+ }
+
+ static inline int _sdma_txadd_daddr(
+@@ -655,6 +654,7 @@ static inline int _sdma_txadd_daddr(
+ type,
+ addr, len);
+ WARN_ON(len > tx->tlen);
++ tx->num_desc++;
+ tx->tlen -= len;
+ /* special cases for last */
+ if (!tx->tlen) {
+@@ -666,7 +666,6 @@ static inline int _sdma_txadd_daddr(
+ _sdma_close_tx(dd, tx);
+ }
+ }
+- tx->num_desc++;
+ return rval;
+ }
+
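The sdma hunks above fix an ordering bug: tx->num_desc is now incremented as each descriptor is added, so by the time the request is closed it is a count, and the close flags belong on index num_desc - 1. The old code flagged index num_desc, one slot past the final descriptor. A toy model of the corrected indexing:

#include <stdio.h>

#define LAST_FLAG 0x1

int main(void)
{
    unsigned int desc[4] = {0}, num_desc = 0;

    desc[num_desc] = 0x100; num_desc++;     /* add first descriptor */
    desc[num_desc] = 0x200; num_desc++;     /* add second descriptor */

    desc[num_desc - 1] |= LAST_FLAG;        /* close: flag the real tail */
    printf("desc[1]=0x%x\n", desc[1]);      /* 0x201 */
    return 0;
}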
+diff --git a/drivers/infiniband/hw/hfi1/user_pages.c b/drivers/infiniband/hw/hfi1/user_pages.c
+index 7bce963e2ae69..36aaedc651456 100644
+--- a/drivers/infiniband/hw/hfi1/user_pages.c
++++ b/drivers/infiniband/hw/hfi1/user_pages.c
+@@ -29,33 +29,52 @@ MODULE_PARM_DESC(cache_size, "Send and receive side cache size limit (in MB)");
+ bool hfi1_can_pin_pages(struct hfi1_devdata *dd, struct mm_struct *mm,
+ u32 nlocked, u32 npages)
+ {
+- unsigned long ulimit = rlimit(RLIMIT_MEMLOCK), pinned, cache_limit,
+- size = (cache_size * (1UL << 20)); /* convert to bytes */
+- unsigned int usr_ctxts =
+- dd->num_rcv_contexts - dd->first_dyn_alloc_ctxt;
+- bool can_lock = capable(CAP_IPC_LOCK);
++ unsigned long ulimit_pages;
++ unsigned long cache_limit_pages;
++ unsigned int usr_ctxts;
+
+ /*
+- * Calculate per-cache size. The calculation below uses only a quarter
+- * of the available per-context limit. This leaves space for other
+- * pinning. Should we worry about shared ctxts?
++ * Perform RLIMIT_MEMLOCK based checks unless CAP_IPC_LOCK is present.
+ */
+- cache_limit = (ulimit / usr_ctxts) / 4;
+-
+- /* If ulimit isn't set to "unlimited" and is smaller than cache_size. */
+- if (ulimit != (-1UL) && size > cache_limit)
+- size = cache_limit;
+-
+- /* Convert to number of pages */
+- size = DIV_ROUND_UP(size, PAGE_SIZE);
+-
+- pinned = atomic64_read(&mm->pinned_vm);
++ if (!capable(CAP_IPC_LOCK)) {
++ ulimit_pages =
++ DIV_ROUND_DOWN_ULL(rlimit(RLIMIT_MEMLOCK), PAGE_SIZE);
++
++ /*
++ * Pinning these pages would exceed this process's locked memory
++ * limit.
++ */
++ if (atomic64_read(&mm->pinned_vm) + npages > ulimit_pages)
++ return false;
++
++ /*
++ * Only allow 1/4 of the user's RLIMIT_MEMLOCK to be used for HFI
++ * caches. This fraction is then equally distributed among all
++ * existing user contexts. Note that if RLIMIT_MEMLOCK is
++ * 'unlimited' (-1), the value of this limit will be > 2^42 pages
++ * (2^64 / 2^12 / 2^8 / 2^2).
++ *
++ * The effectiveness of this check may be reduced if I/O occurs on
++ * some user contexts before all user contexts are created. This
++ * check assumes that this process is the only one using this
++ * context (e.g., the corresponding fd was not passed to another
++ * process for concurrent access) as there is no per-context,
++ * per-process tracking of pinned pages. It also assumes that each
++ * user context has only one cache to limit.
++ */
++ usr_ctxts = dd->num_rcv_contexts - dd->first_dyn_alloc_ctxt;
++ if (nlocked + npages > (ulimit_pages / usr_ctxts / 4))
++ return false;
++ }
+
+- /* First, check the absolute limit against all pinned pages. */
+- if (pinned + npages >= ulimit && !can_lock)
++ /*
++ * Pinning these pages would exceed the size limit for this cache.
++ */
++ cache_limit_pages = cache_size * (1024 * 1024) / PAGE_SIZE;
++ if (nlocked + npages > cache_limit_pages)
+ return false;
+
+- return ((nlocked + npages) <= size) || can_lock;
++ return true;
+ }
+
+ int hfi1_acquire_user_pages(struct mm_struct *mm, unsigned long vaddr, size_t npages,
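The rewritten hfi1_can_pin_pages() above separates three independent conditions: CAP_IPC_LOCK bypasses the RLIMIT_MEMLOCK checks entirely; otherwise both the process-wide pinned total and this context's share (a quarter of the rlimit divided across user contexts) must hold; and the module's cache-size cap applies either way. A reduced model working purely in page units; the parameter names are ours, not the driver's:

#include <stdbool.h>
#include <stdio.h>

static bool can_pin(bool cap_ipc_lock, unsigned long ulimit_pages,
                    unsigned long pinned, unsigned int usr_ctxts,
                    unsigned long cache_limit_pages,
                    unsigned long nlocked, unsigned long npages)
{
    if (!cap_ipc_lock) {
        if (pinned + npages > ulimit_pages)
            return false;   /* over the process rlimit */
        if (nlocked + npages > ulimit_pages / usr_ctxts / 4)
            return false;   /* over this context's share */
    }
    /* the cache_size module parameter limits everyone */
    return nlocked + npages <= cache_limit_pages;
}

int main(void)
{
    /* 16 MiB rlimit = 4096 pages, 4 contexts => 256-page share */
    printf("%d\n", can_pin(false, 4096, 0, 4, 16384, 0, 200)); /* 1 */
    printf("%d\n", can_pin(false, 4096, 0, 4, 16384, 0, 300)); /* 0 */
    return 0;
}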
+diff --git a/drivers/infiniband/hw/hns/hns_roce_main.c b/drivers/infiniband/hw/hns/hns_roce_main.c
+index 8ba68ac12388d..946ba1109e878 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_main.c
++++ b/drivers/infiniband/hw/hns/hns_roce_main.c
+@@ -443,14 +443,15 @@ static int hns_roce_mmap(struct ib_ucontext *uctx, struct vm_area_struct *vma)
+ prot = pgprot_device(vma->vm_page_prot);
+ break;
+ default:
+- return -EINVAL;
++ ret = -EINVAL;
++ goto out;
+ }
+
+ ret = rdma_user_mmap_io(uctx, vma, pfn, rdma_entry->npages * PAGE_SIZE,
+ prot, rdma_entry);
+
++out:
+ rdma_user_mmap_entry_put(rdma_entry);
+-
+ return ret;
+ }
+
+diff --git a/drivers/infiniband/hw/irdma/hw.c b/drivers/infiniband/hw/irdma/hw.c
+index ab246447520bd..2e1e2bad04011 100644
+--- a/drivers/infiniband/hw/irdma/hw.c
++++ b/drivers/infiniband/hw/irdma/hw.c
+@@ -483,6 +483,8 @@ static int irdma_save_msix_info(struct irdma_pci_f *rf)
+ iw_qvlist->num_vectors = rf->msix_count;
+ if (rf->msix_count <= num_online_cpus())
+ rf->msix_shared = true;
++ else if (rf->msix_count > num_online_cpus() + 1)
++ rf->msix_count = num_online_cpus() + 1;
+
+ pmsix = rf->msix_entries;
+ for (i = 0, ceq_idx = 0; i < rf->msix_count; i++, iw_qvinfo++) {
+diff --git a/drivers/infiniband/hw/mana/main.c b/drivers/infiniband/hw/mana/main.c
+index 8b3bc302d6f3a..7be4c3adb4e2b 100644
+--- a/drivers/infiniband/hw/mana/main.c
++++ b/drivers/infiniband/hw/mana/main.c
+@@ -249,7 +249,8 @@ static int
+ mana_ib_gd_first_dma_region(struct mana_ib_dev *dev,
+ struct gdma_context *gc,
+ struct gdma_create_dma_region_req *create_req,
+- size_t num_pages, mana_handle_t *gdma_region)
++ size_t num_pages, mana_handle_t *gdma_region,
++ u32 expected_status)
+ {
+ struct gdma_create_dma_region_resp create_resp = {};
+ unsigned int create_req_msg_size;
+@@ -261,7 +262,7 @@ mana_ib_gd_first_dma_region(struct mana_ib_dev *dev,
+
+ err = mana_gd_send_request(gc, create_req_msg_size, create_req,
+ sizeof(create_resp), &create_resp);
+- if (err || create_resp.hdr.status) {
++ if (err || create_resp.hdr.status != expected_status) {
+ ibdev_dbg(&dev->ib_dev,
+ "Failed to create DMA region: %d, 0x%x\n",
+ err, create_resp.hdr.status);
+@@ -372,14 +373,21 @@ int mana_ib_gd_create_dma_region(struct mana_ib_dev *dev, struct ib_umem *umem,
+
+ page_addr_list = create_req->page_addr_list;
+ rdma_umem_for_each_dma_block(umem, &biter, page_sz) {
++ u32 expected_status = 0;
++
+ page_addr_list[tail++] = rdma_block_iter_dma_address(&biter);
+ if (tail < num_pages_to_handle)
+ continue;
+
++ if (num_pages_processed + num_pages_to_handle <
++ num_pages_total)
++ expected_status = GDMA_STATUS_MORE_ENTRIES;
++
+ if (!num_pages_processed) {
+ /* First create message */
+ err = mana_ib_gd_first_dma_region(dev, gc, create_req,
+- tail, gdma_region);
++ tail, gdma_region,
++ expected_status);
+ if (err)
+ goto out;
+
+@@ -392,14 +400,8 @@ int mana_ib_gd_create_dma_region(struct mana_ib_dev *dev, struct ib_umem *umem,
+ page_addr_list = add_req->page_addr_list;
+ } else {
+ /* Subsequent create messages */
+- u32 expected_s = 0;
+-
+- if (num_pages_processed + num_pages_to_handle <
+- num_pages_total)
+- expected_s = GDMA_STATUS_MORE_ENTRIES;
+-
+ err = mana_ib_gd_add_dma_region(dev, gc, add_req, tail,
+- expected_s);
++ expected_status);
+ if (err)
+ break;
+ }
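The mana change hoists the expected-status computation so it covers the first create message too: every message in a multi-message DMA region registration should come back with GDMA_STATUS_MORE_ENTRIES except the final one, which should return 0. The decision reduces to one comparison; the status constant's value below is illustrative, not the driver's:

#include <stdio.h>

#define GDMA_STATUS_MORE_ENTRIES 0x10000 /* illustrative value */

static unsigned int expected_status(size_t processed, size_t in_flight,
                                    size_t total)
{
    /* more pages remain after this message => expect MORE_ENTRIES */
    return processed + in_flight < total ? GDMA_STATUS_MORE_ENTRIES : 0;
}

int main(void)
{
    printf("%#x\n", expected_status(0, 64, 200));   /* more to come */
    printf("%#x\n", expected_status(136, 64, 200)); /* final message */
    return 0;
}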
+diff --git a/drivers/infiniband/sw/rxe/rxe.h b/drivers/infiniband/sw/rxe/rxe.h
+index ab334900fcc3d..2415f3704f576 100644
+--- a/drivers/infiniband/sw/rxe/rxe.h
++++ b/drivers/infiniband/sw/rxe/rxe.h
+@@ -57,6 +57,44 @@
+ #define rxe_dbg_mw(mw, fmt, ...) ibdev_dbg((mw)->ibmw.device, \
+ "mw#%d %s: " fmt, (mw)->elem.index, __func__, ##__VA_ARGS__)
+
++/* responder states */
++enum resp_states {
++ RESPST_NONE,
++ RESPST_GET_REQ,
++ RESPST_CHK_PSN,
++ RESPST_CHK_OP_SEQ,
++ RESPST_CHK_OP_VALID,
++ RESPST_CHK_RESOURCE,
++ RESPST_CHK_LENGTH,
++ RESPST_CHK_RKEY,
++ RESPST_EXECUTE,
++ RESPST_READ_REPLY,
++ RESPST_ATOMIC_REPLY,
++ RESPST_ATOMIC_WRITE_REPLY,
++ RESPST_PROCESS_FLUSH,
++ RESPST_COMPLETE,
++ RESPST_ACKNOWLEDGE,
++ RESPST_CLEANUP,
++ RESPST_DUPLICATE_REQUEST,
++ RESPST_ERR_MALFORMED_WQE,
++ RESPST_ERR_UNSUPPORTED_OPCODE,
++ RESPST_ERR_MISALIGNED_ATOMIC,
++ RESPST_ERR_PSN_OUT_OF_SEQ,
++ RESPST_ERR_MISSING_OPCODE_FIRST,
++ RESPST_ERR_MISSING_OPCODE_LAST_C,
++ RESPST_ERR_MISSING_OPCODE_LAST_D1E,
++ RESPST_ERR_TOO_MANY_RDMA_ATM_REQ,
++ RESPST_ERR_RNR,
++ RESPST_ERR_RKEY_VIOLATION,
++ RESPST_ERR_INVALIDATE_RKEY,
++ RESPST_ERR_LENGTH,
++ RESPST_ERR_CQ_OVERFLOW,
++ RESPST_ERROR,
++ RESPST_RESET,
++ RESPST_DONE,
++ RESPST_EXIT,
++};
++
+ void rxe_set_mtu(struct rxe_dev *rxe, unsigned int dev_mtu);
+
+ int rxe_add(struct rxe_dev *rxe, unsigned int mtu, const char *ibdev_name);
+diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
+index 948ce4902b10f..1bb0cb479eb12 100644
+--- a/drivers/infiniband/sw/rxe/rxe_loc.h
++++ b/drivers/infiniband/sw/rxe/rxe_loc.h
+@@ -64,12 +64,16 @@ void rxe_mr_init_dma(int access, struct rxe_mr *mr);
+ int rxe_mr_init_user(struct rxe_dev *rxe, u64 start, u64 length, u64 iova,
+ int access, struct rxe_mr *mr);
+ int rxe_mr_init_fast(int max_pages, struct rxe_mr *mr);
+-int rxe_flush_pmem_iova(struct rxe_mr *mr, u64 iova, int length);
+-int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr, int length,
+- enum rxe_mr_copy_dir dir);
++int rxe_flush_pmem_iova(struct rxe_mr *mr, u64 iova, unsigned int length);
++int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr,
++ unsigned int length, enum rxe_mr_copy_dir dir);
+ int copy_data(struct rxe_pd *pd, int access, struct rxe_dma_info *dma,
+ void *addr, int length, enum rxe_mr_copy_dir dir);
+-void *iova_to_vaddr(struct rxe_mr *mr, u64 iova, int length);
++int rxe_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg,
++ int sg_nents, unsigned int *sg_offset);
++int rxe_mr_do_atomic_op(struct rxe_mr *mr, u64 iova, int opcode,
++ u64 compare, u64 swap_add, u64 *orig_val);
++int rxe_mr_do_atomic_write(struct rxe_mr *mr, u64 iova, u64 value);
+ struct rxe_mr *lookup_mr(struct rxe_pd *pd, int access, u32 key,
+ enum rxe_mr_lookup_type type);
+ int mr_check_range(struct rxe_mr *mr, u64 iova, size_t length);
+diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
+index 072eac4b65d29..5e9a03831bf9f 100644
+--- a/drivers/infiniband/sw/rxe/rxe_mr.c
++++ b/drivers/infiniband/sw/rxe/rxe_mr.c
+@@ -26,22 +26,22 @@ u8 rxe_get_next_key(u32 last_key)
+
+ int mr_check_range(struct rxe_mr *mr, u64 iova, size_t length)
+ {
+-
+-
+ switch (mr->ibmr.type) {
+ case IB_MR_TYPE_DMA:
+ return 0;
+
+ case IB_MR_TYPE_USER:
+ case IB_MR_TYPE_MEM_REG:
+- if (iova < mr->ibmr.iova || length > mr->ibmr.length ||
+- iova > mr->ibmr.iova + mr->ibmr.length - length)
+- return -EFAULT;
++ if (iova < mr->ibmr.iova ||
++ iova + length > mr->ibmr.iova + mr->ibmr.length) {
++ rxe_dbg_mr(mr, "iova/length out of range");
++ return -EINVAL;
++ }
+ return 0;
+
+ default:
+- rxe_dbg_mr(mr, "type (%d) not supported\n", mr->ibmr.type);
+- return -EFAULT;
++ rxe_dbg_mr(mr, "mr type not supported\n");
++ return -EINVAL;
+ }
+ }
+
+@@ -62,57 +62,31 @@ static void rxe_mr_init(int access, struct rxe_mr *mr)
+ mr->lkey = mr->ibmr.lkey = lkey;
+ mr->rkey = mr->ibmr.rkey = rkey;
+
++ mr->access = access;
++ mr->ibmr.page_size = PAGE_SIZE;
++ mr->page_mask = PAGE_MASK;
++ mr->page_shift = PAGE_SHIFT;
+ mr->state = RXE_MR_STATE_INVALID;
+ }
+
+-static int rxe_mr_alloc(struct rxe_mr *mr, int num_buf)
+-{
+- int i;
+- int num_map;
+- struct rxe_map **map = mr->map;
+-
+- num_map = (num_buf + RXE_BUF_PER_MAP - 1) / RXE_BUF_PER_MAP;
+-
+- mr->map = kmalloc_array(num_map, sizeof(*map), GFP_KERNEL);
+- if (!mr->map)
+- goto err1;
+-
+- for (i = 0; i < num_map; i++) {
+- mr->map[i] = kmalloc(sizeof(**map), GFP_KERNEL);
+- if (!mr->map[i])
+- goto err2;
+- }
+-
+- BUILD_BUG_ON(!is_power_of_2(RXE_BUF_PER_MAP));
+-
+- mr->map_shift = ilog2(RXE_BUF_PER_MAP);
+- mr->map_mask = RXE_BUF_PER_MAP - 1;
+-
+- mr->num_buf = num_buf;
+- mr->num_map = num_map;
+- mr->max_buf = num_map * RXE_BUF_PER_MAP;
+-
+- return 0;
+-
+-err2:
+- for (i--; i >= 0; i--)
+- kfree(mr->map[i]);
+-
+- kfree(mr->map);
+- mr->map = NULL;
+-err1:
+- return -ENOMEM;
+-}
+-
+ void rxe_mr_init_dma(int access, struct rxe_mr *mr)
+ {
+ rxe_mr_init(access, mr);
+
+- mr->access = access;
+ mr->state = RXE_MR_STATE_VALID;
+ mr->ibmr.type = IB_MR_TYPE_DMA;
+ }
+
++static unsigned long rxe_mr_iova_to_index(struct rxe_mr *mr, u64 iova)
++{
++ return (iova >> mr->page_shift) - (mr->ibmr.iova >> mr->page_shift);
++}
++
++static unsigned long rxe_mr_iova_to_page_offset(struct rxe_mr *mr, u64 iova)
++{
++ return iova & (mr_page_size(mr) - 1);
++}
++
+ static bool is_pmem_page(struct page *pg)
+ {
+ unsigned long paddr = page_to_phys(pg);
+@@ -122,86 +96,98 @@ static bool is_pmem_page(struct page *pg)
+ IORES_DESC_PERSISTENT_MEMORY);
+ }
+
++static int rxe_mr_fill_pages_from_sgt(struct rxe_mr *mr, struct sg_table *sgt)
++{
++ XA_STATE(xas, &mr->page_list, 0);
++ struct sg_page_iter sg_iter;
++ struct page *page;
++ bool persistent = !!(mr->access & IB_ACCESS_FLUSH_PERSISTENT);
++
++ __sg_page_iter_start(&sg_iter, sgt->sgl, sgt->orig_nents, 0);
++ if (!__sg_page_iter_next(&sg_iter))
++ return 0;
++
++ do {
++ xas_lock(&xas);
++ while (true) {
++ page = sg_page_iter_page(&sg_iter);
++
++ if (persistent && !is_pmem_page(page)) {
++ rxe_dbg_mr(mr, "Page can't be persistent\n");
++ xas_set_err(&xas, -EINVAL);
++ break;
++ }
++
++ xas_store(&xas, page);
++ if (xas_error(&xas))
++ break;
++ xas_next(&xas);
++ if (!__sg_page_iter_next(&sg_iter))
++ break;
++ }
++ xas_unlock(&xas);
++ } while (xas_nomem(&xas, GFP_KERNEL));
++
++ return xas_error(&xas);
++}
++
+ int rxe_mr_init_user(struct rxe_dev *rxe, u64 start, u64 length, u64 iova,
+ int access, struct rxe_mr *mr)
+ {
+- struct rxe_map **map;
+- struct rxe_phys_buf *buf = NULL;
+- struct ib_umem *umem;
+- struct sg_page_iter sg_iter;
+- int num_buf;
+- void *vaddr;
++ struct ib_umem *umem;
+ int err;
+
++ rxe_mr_init(access, mr);
++
++ xa_init(&mr->page_list);
++
+ umem = ib_umem_get(&rxe->ib_dev, start, length, access);
+ if (IS_ERR(umem)) {
+ rxe_dbg_mr(mr, "Unable to pin memory region err = %d\n",
+ (int)PTR_ERR(umem));
+- err = PTR_ERR(umem);
+- goto err_out;
++ return PTR_ERR(umem);
+ }
+
+- num_buf = ib_umem_num_pages(umem);
+-
+- rxe_mr_init(access, mr);
+-
+- err = rxe_mr_alloc(mr, num_buf);
++ err = rxe_mr_fill_pages_from_sgt(mr, &umem->sgt_append.sgt);
+ if (err) {
+- rxe_dbg_mr(mr, "Unable to allocate memory for map\n");
+- goto err_release_umem;
++ ib_umem_release(umem);
++ return err;
+ }
+
+- mr->page_shift = PAGE_SHIFT;
+- mr->page_mask = PAGE_SIZE - 1;
+-
+- num_buf = 0;
+- map = mr->map;
+- if (length > 0) {
+- bool persistent_access = access & IB_ACCESS_FLUSH_PERSISTENT;
+-
+- buf = map[0]->buf;
+- for_each_sgtable_page (&umem->sgt_append.sgt, &sg_iter, 0) {
+- struct page *pg = sg_page_iter_page(&sg_iter);
++ mr->umem = umem;
++ mr->ibmr.type = IB_MR_TYPE_USER;
++ mr->state = RXE_MR_STATE_VALID;
+
+- if (persistent_access && !is_pmem_page(pg)) {
+- rxe_dbg_mr(mr, "Unable to register persistent access to non-pmem device\n");
+- err = -EINVAL;
+- goto err_release_umem;
+- }
++ return 0;
++}
+
+- if (num_buf >= RXE_BUF_PER_MAP) {
+- map++;
+- buf = map[0]->buf;
+- num_buf = 0;
+- }
++static int rxe_mr_alloc(struct rxe_mr *mr, int num_buf)
++{
++ XA_STATE(xas, &mr->page_list, 0);
++ int i = 0;
++ int err;
+
+- vaddr = page_address(pg);
+- if (!vaddr) {
+- rxe_dbg_mr(mr, "Unable to get virtual address\n");
+- err = -ENOMEM;
+- goto err_release_umem;
+- }
+- buf->addr = (uintptr_t)vaddr;
+- buf->size = PAGE_SIZE;
+- num_buf++;
+- buf++;
++ xa_init(&mr->page_list);
+
++ do {
++ xas_lock(&xas);
++ while (i != num_buf) {
++ xas_store(&xas, XA_ZERO_ENTRY);
++ if (xas_error(&xas))
++ break;
++ xas_next(&xas);
++ i++;
+ }
+- }
++ xas_unlock(&xas);
++ } while (xas_nomem(&xas, GFP_KERNEL));
+
+- mr->umem = umem;
+- mr->access = access;
+- mr->offset = ib_umem_offset(umem);
+- mr->state = RXE_MR_STATE_VALID;
+- mr->ibmr.type = IB_MR_TYPE_USER;
+- mr->ibmr.page_size = PAGE_SIZE;
++ err = xas_error(&xas);
++ if (err)
++ return err;
+
+- return 0;
++ mr->num_buf = num_buf;
+
+-err_release_umem:
+- ib_umem_release(umem);
+-err_out:
+- return err;
++ return 0;
+ }
+
+ int rxe_mr_init_fast(int max_pages, struct rxe_mr *mr)
+@@ -215,7 +201,6 @@ int rxe_mr_init_fast(int max_pages, struct rxe_mr *mr)
+ if (err)
+ goto err1;
+
+- mr->max_buf = max_pages;
+ mr->state = RXE_MR_STATE_FREE;
+ mr->ibmr.type = IB_MR_TYPE_MEM_REG;
+
+@@ -225,187 +210,125 @@ err1:
+ return err;
+ }
+
+-static void lookup_iova(struct rxe_mr *mr, u64 iova, int *m_out, int *n_out,
+- size_t *offset_out)
++static int rxe_set_page(struct ib_mr *ibmr, u64 iova)
+ {
+- size_t offset = iova - mr->ibmr.iova + mr->offset;
+- int map_index;
+- int buf_index;
+- u64 length;
+-
+- if (likely(mr->page_shift)) {
+- *offset_out = offset & mr->page_mask;
+- offset >>= mr->page_shift;
+- *n_out = offset & mr->map_mask;
+- *m_out = offset >> mr->map_shift;
+- } else {
+- map_index = 0;
+- buf_index = 0;
++ struct rxe_mr *mr = to_rmr(ibmr);
++ struct page *page = virt_to_page(iova & mr->page_mask);
++ bool persistent = !!(mr->access & IB_ACCESS_FLUSH_PERSISTENT);
++ int err;
+
+- length = mr->map[map_index]->buf[buf_index].size;
++ if (persistent && !is_pmem_page(page)) {
++ rxe_dbg_mr(mr, "Page cannot be persistent\n");
++ return -EINVAL;
++ }
+
+- while (offset >= length) {
+- offset -= length;
+- buf_index++;
++ if (unlikely(mr->nbuf == mr->num_buf))
++ return -ENOMEM;
+
+- if (buf_index == RXE_BUF_PER_MAP) {
+- map_index++;
+- buf_index = 0;
+- }
+- length = mr->map[map_index]->buf[buf_index].size;
+- }
++ err = xa_err(xa_store(&mr->page_list, mr->nbuf, page, GFP_KERNEL));
++ if (err)
++ return err;
+
+- *m_out = map_index;
+- *n_out = buf_index;
+- *offset_out = offset;
+- }
++ mr->nbuf++;
++ return 0;
+ }
+
+-void *iova_to_vaddr(struct rxe_mr *mr, u64 iova, int length)
++int rxe_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sgl,
++ int sg_nents, unsigned int *sg_offset)
+ {
+- size_t offset;
+- int m, n;
+- void *addr;
+-
+- if (mr->state != RXE_MR_STATE_VALID) {
+- rxe_dbg_mr(mr, "Not in valid state\n");
+- addr = NULL;
+- goto out;
+- }
+-
+- if (!mr->map) {
+- addr = (void *)(uintptr_t)iova;
+- goto out;
+- }
+-
+- if (mr_check_range(mr, iova, length)) {
+- rxe_dbg_mr(mr, "Range violation\n");
+- addr = NULL;
+- goto out;
+- }
+-
+- lookup_iova(mr, iova, &m, &n, &offset);
+-
+- if (offset + length > mr->map[m]->buf[n].size) {
+- rxe_dbg_mr(mr, "Crosses page boundary\n");
+- addr = NULL;
+- goto out;
+- }
++ struct rxe_mr *mr = to_rmr(ibmr);
++ unsigned int page_size = mr_page_size(mr);
+
+- addr = (void *)(uintptr_t)mr->map[m]->buf[n].addr + offset;
++ mr->nbuf = 0;
++ mr->page_shift = ilog2(page_size);
++ mr->page_mask = ~((u64)page_size - 1);
++ mr->page_offset = mr->ibmr.iova & (page_size - 1);
+
+-out:
+- return addr;
++ return ib_sg_to_pages(ibmr, sgl, sg_nents, sg_offset, rxe_set_page);
+ }
+
+-int rxe_flush_pmem_iova(struct rxe_mr *mr, u64 iova, int length)
++static int rxe_mr_copy_xarray(struct rxe_mr *mr, u64 iova, void *addr,
++ unsigned int length, enum rxe_mr_copy_dir dir)
+ {
+- size_t offset;
++ unsigned int page_offset = rxe_mr_iova_to_page_offset(mr, iova);
++ unsigned long index = rxe_mr_iova_to_index(mr, iova);
++ unsigned int bytes;
++ struct page *page;
++ void *va;
+
+- if (length == 0)
+- return 0;
+-
+- if (mr->ibmr.type == IB_MR_TYPE_DMA)
+- return -EFAULT;
+-
+- offset = (iova - mr->ibmr.iova + mr->offset) & mr->page_mask;
+- while (length > 0) {
+- u8 *va;
+- int bytes;
+-
+- bytes = mr->ibmr.page_size - offset;
+- if (bytes > length)
+- bytes = length;
+-
+- va = iova_to_vaddr(mr, iova, length);
+- if (!va)
++ while (length) {
++ page = xa_load(&mr->page_list, index);
++ if (!page)
+ return -EFAULT;
+
+- arch_wb_cache_pmem(va, bytes);
+-
++ bytes = min_t(unsigned int, length,
++ mr_page_size(mr) - page_offset);
++ va = kmap_local_page(page);
++ if (dir == RXE_FROM_MR_OBJ)
++ memcpy(addr, va + page_offset, bytes);
++ else
++ memcpy(va + page_offset, addr, bytes);
++ kunmap_local(va);
++
++ page_offset = 0;
++ addr += bytes;
+ length -= bytes;
+- iova += bytes;
+- offset = 0;
++ index++;
+ }
+
+ return 0;
+ }
+
+-/* copy data from a range (vaddr, vaddr+length-1) to or from
+- * a mr object starting at iova.
+- */
+-int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr, int length,
+- enum rxe_mr_copy_dir dir)
++static void rxe_mr_copy_dma(struct rxe_mr *mr, u64 iova, void *addr,
++ unsigned int length, enum rxe_mr_copy_dir dir)
+ {
+- int err;
+- int bytes;
+- u8 *va;
+- struct rxe_map **map;
+- struct rxe_phys_buf *buf;
+- int m;
+- int i;
+- size_t offset;
+-
+- if (length == 0)
+- return 0;
++ unsigned int page_offset = iova & (PAGE_SIZE - 1);
++ unsigned int bytes;
++ struct page *page;
++ u8 *va;
+
+- if (mr->ibmr.type == IB_MR_TYPE_DMA) {
+- u8 *src, *dest;
++ while (length) {
++ page = virt_to_page(iova & mr->page_mask);
++ bytes = min_t(unsigned int, length,
++ PAGE_SIZE - page_offset);
++ va = kmap_local_page(page);
++
++ if (dir == RXE_TO_MR_OBJ)
++ memcpy(va + page_offset, addr, bytes);
++ else
++ memcpy(addr, va + page_offset, bytes);
++
++ kunmap_local(va);
++ page_offset = 0;
++ iova += bytes;
++ addr += bytes;
++ length -= bytes;
++ }
++}
+
+- src = (dir == RXE_TO_MR_OBJ) ? addr : ((void *)(uintptr_t)iova);
++int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr,
++ unsigned int length, enum rxe_mr_copy_dir dir)
++{
++ int err;
+
+- dest = (dir == RXE_TO_MR_OBJ) ? ((void *)(uintptr_t)iova) : addr;
++ if (length == 0)
++ return 0;
+
+- memcpy(dest, src, length);
++ if (WARN_ON(!mr))
++ return -EINVAL;
+
++ if (mr->ibmr.type == IB_MR_TYPE_DMA) {
++ rxe_mr_copy_dma(mr, iova, addr, length, dir);
+ return 0;
+ }
+
+- WARN_ON_ONCE(!mr->map);
+-
+ err = mr_check_range(mr, iova, length);
+- if (err) {
+- err = -EFAULT;
+- goto err1;
+- }
+-
+- lookup_iova(mr, iova, &m, &i, &offset);
+-
+- map = mr->map + m;
+- buf = map[0]->buf + i;
+-
+- while (length > 0) {
+- u8 *src, *dest;
+-
+- va = (u8 *)(uintptr_t)buf->addr + offset;
+- src = (dir == RXE_TO_MR_OBJ) ? addr : va;
+- dest = (dir == RXE_TO_MR_OBJ) ? va : addr;
+-
+- bytes = buf->size - offset;
+-
+- if (bytes > length)
+- bytes = length;
+-
+- memcpy(dest, src, bytes);
+-
+- length -= bytes;
+- addr += bytes;
+-
+- offset = 0;
+- buf++;
+- i++;
+-
+- if (i == RXE_BUF_PER_MAP) {
+- i = 0;
+- map++;
+- buf = map[0]->buf;
+- }
++ if (unlikely(err)) {
++ rxe_dbg_mr(mr, "iova out of range");
++ return err;
+ }
+
+- return 0;
+-
+-err1:
+- return err;
++ return rxe_mr_copy_xarray(mr, iova, addr, length, dir);
+ }
+
+ /* copy data in or out of a wqe, i.e. sg list
+@@ -477,7 +400,6 @@ int copy_data(
+
+ if (bytes > 0) {
+ iova = sge->addr + offset;
+-
+ err = rxe_mr_copy(mr, iova, addr, bytes, dir);
+ if (err)
+ goto err2;
+@@ -504,6 +426,165 @@ err1:
+ return err;
+ }
+
++int rxe_flush_pmem_iova(struct rxe_mr *mr, u64 iova, unsigned int length)
++{
++ unsigned int page_offset;
++ unsigned long index;
++ struct page *page;
++ unsigned int bytes;
++ int err;
++ u8 *va;
++
++ /* mr must be valid even if length is zero */
++ if (WARN_ON(!mr))
++ return -EINVAL;
++
++ if (length == 0)
++ return 0;
++
++ if (mr->ibmr.type == IB_MR_TYPE_DMA)
++ return -EFAULT;
++
++ err = mr_check_range(mr, iova, length);
++ if (err)
++ return err;
++
++ while (length > 0) {
++ index = rxe_mr_iova_to_index(mr, iova);
++ page = xa_load(&mr->page_list, index);
++ page_offset = rxe_mr_iova_to_page_offset(mr, iova);
++ if (!page)
++ return -EFAULT;
++ bytes = min_t(unsigned int, length,
++ mr_page_size(mr) - page_offset);
++
++ va = kmap_local_page(page);
++ arch_wb_cache_pmem(va + page_offset, bytes);
++ kunmap_local(va);
++
++ length -= bytes;
++ iova += bytes;
++ page_offset = 0;
++ }
++
++ return 0;
++}
++
++/* Guarantee atomicity of atomic operations at the machine level. */
++static DEFINE_SPINLOCK(atomic_ops_lock);
++
++int rxe_mr_do_atomic_op(struct rxe_mr *mr, u64 iova, int opcode,
++ u64 compare, u64 swap_add, u64 *orig_val)
++{
++ unsigned int page_offset;
++ struct page *page;
++ u64 value;
++ u64 *va;
++
++ if (unlikely(mr->state != RXE_MR_STATE_VALID)) {
++ rxe_dbg_mr(mr, "mr not in valid state");
++ return RESPST_ERR_RKEY_VIOLATION;
++ }
++
++ if (mr->ibmr.type == IB_MR_TYPE_DMA) {
++ page_offset = iova & (PAGE_SIZE - 1);
++ page = virt_to_page(iova & PAGE_MASK);
++ } else {
++ unsigned long index;
++ int err;
++
++ err = mr_check_range(mr, iova, sizeof(value));
++ if (err) {
++ rxe_dbg_mr(mr, "iova out of range");
++ return RESPST_ERR_RKEY_VIOLATION;
++ }
++ page_offset = rxe_mr_iova_to_page_offset(mr, iova);
++ index = rxe_mr_iova_to_index(mr, iova);
++ page = xa_load(&mr->page_list, index);
++ if (!page)
++ return RESPST_ERR_RKEY_VIOLATION;
++ }
++
++ if (unlikely(page_offset & 0x7)) {
++ rxe_dbg_mr(mr, "iova not aligned");
++ return RESPST_ERR_MISALIGNED_ATOMIC;
++ }
++
++ va = kmap_local_page(page);
++
++ spin_lock_bh(&atomic_ops_lock);
++ value = *orig_val = va[page_offset >> 3];
++
++ if (opcode == IB_OPCODE_RC_COMPARE_SWAP) {
++ if (value == compare)
++ va[page_offset >> 3] = swap_add;
++ } else {
++ value += swap_add;
++ va[page_offset >> 3] = value;
++ }
++ spin_unlock_bh(&atomic_ops_lock);
++
++ kunmap_local(va);
++
++ return 0;
++}
++
++#if defined CONFIG_64BIT
++/* only implemented or called for 64 bit architectures */
++int rxe_mr_do_atomic_write(struct rxe_mr *mr, u64 iova, u64 value)
++{
++ unsigned int page_offset;
++ struct page *page;
++ u64 *va;
++
++ /* See IBA oA19-28 */
++ if (unlikely(mr->state != RXE_MR_STATE_VALID)) {
++ rxe_dbg_mr(mr, "mr not in valid state");
++ return RESPST_ERR_RKEY_VIOLATION;
++ }
++
++ if (mr->ibmr.type == IB_MR_TYPE_DMA) {
++ page_offset = iova & (PAGE_SIZE - 1);
++ page = virt_to_page(iova & PAGE_MASK);
++ } else {
++ unsigned long index;
++ int err;
++
++ /* See IBA oA19-28 */
++ err = mr_check_range(mr, iova, sizeof(value));
++ if (unlikely(err)) {
++ rxe_dbg_mr(mr, "iova out of range");
++ return RESPST_ERR_RKEY_VIOLATION;
++ }
++ page_offset = rxe_mr_iova_to_page_offset(mr, iova);
++ index = rxe_mr_iova_to_index(mr, iova);
++ page = xa_load(&mr->page_list, index);
++ if (!page)
++ return RESPST_ERR_RKEY_VIOLATION;
++ }
++
++ /* See IBA A19.4.2 */
++ if (unlikely(page_offset & 0x7)) {
++ rxe_dbg_mr(mr, "misaligned address");
++ return RESPST_ERR_MISALIGNED_ATOMIC;
++ }
++
++ va = kmap_local_page(page);
++
++ /* Do atomic write after all prior operations have completed */
++ smp_store_release(&va[page_offset >> 3], value);
++
++ kunmap_local(va);
++
++ return 0;
++}
++#else
++int rxe_mr_do_atomic_write(struct rxe_mr *mr, u64 iova, u64 value)
++{
++ return RESPST_ERR_UNSUPPORTED_OPCODE;
++}
++#endif
++
+ int advance_dma_data(struct rxe_dma_info *dma, unsigned int length)
+ {
+ struct rxe_sge *sge = &dma->sge[dma->cur_sge];
+@@ -537,12 +618,6 @@ int advance_dma_data(struct rxe_dma_info *dma, unsigned int length)
+ return 0;
+ }
+
+-/* (1) find the mr corresponding to lkey/rkey
+- * depending on lookup_type
+- * (2) verify that the (qp) pd matches the mr pd
+- * (3) verify that the mr can support the requested access
+- * (4) verify that mr state is valid
+- */
+ struct rxe_mr *lookup_mr(struct rxe_pd *pd, int access, u32 key,
+ enum rxe_mr_lookup_type type)
+ {
+@@ -663,15 +738,10 @@ int rxe_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata)
+ void rxe_mr_cleanup(struct rxe_pool_elem *elem)
+ {
+ struct rxe_mr *mr = container_of(elem, typeof(*mr), elem);
+- int i;
+
+ rxe_put(mr_pd(mr));
+ ib_umem_release(mr->umem);
+
+- if (mr->map) {
+- for (i = 0; i < mr->num_map; i++)
+- kfree(mr->map[i]);
+-
+- kfree(mr->map);
+- }
++ if (mr->ibmr.type != IB_MR_TYPE_DMA)
++ xa_destroy(&mr->page_list);
+ }
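
A note on the rxe_mr.c rework above: the two-level map/buf lookup
(lookup_iova()) is gone, replaced by an xarray of pages plus plain
shift/mask arithmetic. Below is a minimal, standalone C model of that
arithmetic, mirroring rxe_mr_iova_to_index() and
rxe_mr_iova_to_page_offset(); the concrete iova values are made up for
illustration, and page_size is assumed to be a power of two, as
ib_mr.page_size always is.

#include <stdio.h>
#include <stdint.h>

/* An MR is now a flat sequence of page-size buffers, indexed from the
 * page that contains ibmr.iova; the xarray maps index -> struct page. */
static unsigned long iova_to_index(uint64_t iova, uint64_t mr_iova,
                                   unsigned int page_shift)
{
        return (iova >> page_shift) - (mr_iova >> page_shift);
}

static unsigned long iova_to_page_offset(uint64_t iova, uint64_t page_size)
{
        return iova & (page_size - 1);  /* page_size is a power of two */
}

int main(void)
{
        uint64_t mr_iova = 0x10000300;  /* an MR may start mid-page */
        uint64_t iova = 0x10002380;     /* target address inside the MR */

        /* prints "index=2 offset=896": xarray slot 2, 896 bytes in */
        printf("index=%lu offset=%lu\n",
               iova_to_index(iova, mr_iova, 12),
               iova_to_page_offset(iova, 4096));
        return 0;
}
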
+diff --git a/drivers/infiniband/sw/rxe/rxe_queue.h b/drivers/infiniband/sw/rxe/rxe_queue.h
+index ed44042782fa7..c711cb98b9496 100644
+--- a/drivers/infiniband/sw/rxe/rxe_queue.h
++++ b/drivers/infiniband/sw/rxe/rxe_queue.h
+@@ -35,19 +35,26 @@
+ /**
+ * enum queue_type - type of queue
+ * @QUEUE_TYPE_TO_CLIENT: Queue is written by rxe driver and
+- * read by client. Used by rxe driver only.
++ * read by client which may be a user space
++ * application or a kernel ulp.
++ * Used by rxe internals only.
+ * @QUEUE_TYPE_FROM_CLIENT: Queue is written by client and
+- * read by rxe driver. Used by rxe driver only.
+- * @QUEUE_TYPE_TO_DRIVER: Queue is written by client and
+- * read by rxe driver. Used by kernel client only.
+- * @QUEUE_TYPE_FROM_DRIVER: Queue is written by rxe driver and
+- * read by client. Used by kernel client only.
++ * read by rxe driver.
++ * Used by rxe internals only.
++ * @QUEUE_TYPE_FROM_ULP: Queue is written by kernel ulp and
++ * read by rxe driver.
++ * Used by kernel verbs APIs only on
++ * behalf of ulps.
++ * @QUEUE_TYPE_TO_ULP: Queue is written by rxe driver and
++ * read by kernel ulp.
++ * Used by kernel verbs APIs only on
++ * behalf of ulps.
+ */
+ enum queue_type {
+ QUEUE_TYPE_TO_CLIENT,
+ QUEUE_TYPE_FROM_CLIENT,
+- QUEUE_TYPE_TO_DRIVER,
+- QUEUE_TYPE_FROM_DRIVER,
++ QUEUE_TYPE_FROM_ULP,
++ QUEUE_TYPE_TO_ULP,
+ };
+
+ struct rxe_queue_buf;
+@@ -62,9 +69,9 @@ struct rxe_queue {
+ u32 index_mask;
+ enum queue_type type;
+ /* private copy of index for shared queues between
+- * kernel space and user space. Kernel reads and writes
++ * driver and clients. Driver reads and writes
+ * this copy and then replicates to rxe_queue_buf
+- * for read access by user space.
++ * for read access by clients.
+ */
+ u32 index;
+ };
+@@ -97,19 +104,21 @@ static inline u32 queue_get_producer(const struct rxe_queue *q,
+
+ switch (type) {
+ case QUEUE_TYPE_FROM_CLIENT:
+- /* protect user index */
++ /* used by rxe, client owns the index */
+ prod = smp_load_acquire(&q->buf->producer_index);
+ break;
+ case QUEUE_TYPE_TO_CLIENT:
++ /* used by rxe which owns the index */
+ prod = q->index;
+ break;
+- case QUEUE_TYPE_FROM_DRIVER:
+- /* protect driver index */
+- prod = smp_load_acquire(&q->buf->producer_index);
+- break;
+- case QUEUE_TYPE_TO_DRIVER:
++ case QUEUE_TYPE_FROM_ULP:
++ /* used by ulp which owns the index */
+ prod = q->buf->producer_index;
+ break;
++ case QUEUE_TYPE_TO_ULP:
++ /* used by ulp, rxe owns the index */
++ prod = smp_load_acquire(&q->buf->producer_index);
++ break;
+ }
+
+ return prod;
+@@ -122,19 +131,21 @@ static inline u32 queue_get_consumer(const struct rxe_queue *q,
+
+ switch (type) {
+ case QUEUE_TYPE_FROM_CLIENT:
++ /* used by rxe which owns the index */
+ cons = q->index;
+ break;
+ case QUEUE_TYPE_TO_CLIENT:
+- /* protect user index */
++ /* used by rxe, client owns the index */
+ cons = smp_load_acquire(&q->buf->consumer_index);
+ break;
+- case QUEUE_TYPE_FROM_DRIVER:
+- cons = q->buf->consumer_index;
+- break;
+- case QUEUE_TYPE_TO_DRIVER:
+- /* protect driver index */
++ case QUEUE_TYPE_FROM_ULP:
++ /* used by ulp, rxe owns the index */
+ cons = smp_load_acquire(&q->buf->consumer_index);
+ break;
++ case QUEUE_TYPE_TO_ULP:
++ /* used by ulp which owns the index */
++ cons = q->buf->consumer_index;
++ break;
+ }
+
+ return cons;
+@@ -172,24 +183,31 @@ static inline void queue_advance_producer(struct rxe_queue *q,
+
+ switch (type) {
+ case QUEUE_TYPE_FROM_CLIENT:
+- pr_warn("%s: attempt to advance client index\n",
+- __func__);
++ /* used by rxe, client owns the index */
++ if (WARN_ON(1))
++ pr_warn("%s: attempt to advance client index\n",
++ __func__);
+ break;
+ case QUEUE_TYPE_TO_CLIENT:
++ /* used by rxe which owns the index */
+ prod = q->index;
+ prod = (prod + 1) & q->index_mask;
+ q->index = prod;
+- /* protect user index */
++ /* release so client can read it safely */
+ smp_store_release(&q->buf->producer_index, prod);
+ break;
+- case QUEUE_TYPE_FROM_DRIVER:
+- pr_warn("%s: attempt to advance driver index\n",
+- __func__);
+- break;
+- case QUEUE_TYPE_TO_DRIVER:
++ case QUEUE_TYPE_FROM_ULP:
++ /* used by ulp which owns the index */
+ prod = q->buf->producer_index;
+ prod = (prod + 1) & q->index_mask;
+- q->buf->producer_index = prod;
++ /* release so rxe can read it safely */
++ smp_store_release(&q->buf->producer_index, prod);
++ break;
++ case QUEUE_TYPE_TO_ULP:
++ /* used by ulp, rxe owns the index */
++ if (WARN_ON(1))
++ pr_warn("%s: attempt to advance driver index\n",
++ __func__);
+ break;
+ }
+ }
+@@ -201,24 +219,30 @@ static inline void queue_advance_consumer(struct rxe_queue *q,
+
+ switch (type) {
+ case QUEUE_TYPE_FROM_CLIENT:
+- cons = q->index;
+- cons = (cons + 1) & q->index_mask;
++ /* used by rxe which owns the index */
++ cons = (q->index + 1) & q->index_mask;
+ q->index = cons;
+- /* protect user index */
++ /* release so client can read it safely */
+ smp_store_release(&q->buf->consumer_index, cons);
+ break;
+ case QUEUE_TYPE_TO_CLIENT:
+- pr_warn("%s: attempt to advance client index\n",
+- __func__);
++ /* used by rxe, client owns the index */
++ if (WARN_ON(1))
++ pr_warn("%s: attempt to advance client index\n",
++ __func__);
++ break;
++ case QUEUE_TYPE_FROM_ULP:
++ /* used by ulp, rxe owns the index */
++ if (WARN_ON(1))
++ pr_warn("%s: attempt to advance driver index\n",
++ __func__);
+ break;
+- case QUEUE_TYPE_FROM_DRIVER:
++ case QUEUE_TYPE_TO_ULP:
++ /* used by ulp which owns the index */
+ cons = q->buf->consumer_index;
+ cons = (cons + 1) & q->index_mask;
+- q->buf->consumer_index = cons;
+- break;
+- case QUEUE_TYPE_TO_DRIVER:
+- pr_warn("%s: attempt to advance driver index\n",
+- __func__);
++ /* release so rxe can read it safely */
++ smp_store_release(&q->buf->consumer_index, cons);
+ break;
+ }
+ }
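
The queue renames above are cosmetic, but the index discipline they
document is not: each index has exactly one owner, the owner publishes
updates with a release store, and the peer reads them with an acquire
load. That pairing is what makes the shared, lock-free rings safe. A
standalone C11 sketch of the producer side (names are illustrative;
memory_order_release/acquire play the roles of smp_store_release() and
smp_load_acquire(), and indices are kept pre-masked as in rxe):

#include <stdatomic.h>
#include <stdbool.h>

#define RING_SIZE 64                    /* power of two, index_mask + 1 */
#define INDEX_MASK (RING_SIZE - 1)

struct ring {
        _Atomic unsigned int producer_index;    /* written only by producer */
        _Atomic unsigned int consumer_index;    /* written only by consumer */
        int slot[RING_SIZE];
};

static bool ring_post(struct ring *q, int val)
{
        unsigned int prod = atomic_load_explicit(&q->producer_index,
                                                 memory_order_relaxed);
        unsigned int cons = atomic_load_explicit(&q->consumer_index,
                                                 memory_order_acquire);

        if (((prod + 1) & INDEX_MASK) == cons)
                return false;                   /* ring full */

        q->slot[prod] = val;
        /* fill the slot first, then publish: the consumer's acquire
         * load of producer_index is guaranteed to see the slot data */
        atomic_store_explicit(&q->producer_index, (prod + 1) & INDEX_MASK,
                              memory_order_release);
        return true;
}
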
+diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
+index c74972244f08f..0cc1ba91d48cc 100644
+--- a/drivers/infiniband/sw/rxe/rxe_resp.c
++++ b/drivers/infiniband/sw/rxe/rxe_resp.c
+@@ -10,43 +10,6 @@
+ #include "rxe_loc.h"
+ #include "rxe_queue.h"
+
+-enum resp_states {
+- RESPST_NONE,
+- RESPST_GET_REQ,
+- RESPST_CHK_PSN,
+- RESPST_CHK_OP_SEQ,
+- RESPST_CHK_OP_VALID,
+- RESPST_CHK_RESOURCE,
+- RESPST_CHK_LENGTH,
+- RESPST_CHK_RKEY,
+- RESPST_EXECUTE,
+- RESPST_READ_REPLY,
+- RESPST_ATOMIC_REPLY,
+- RESPST_ATOMIC_WRITE_REPLY,
+- RESPST_PROCESS_FLUSH,
+- RESPST_COMPLETE,
+- RESPST_ACKNOWLEDGE,
+- RESPST_CLEANUP,
+- RESPST_DUPLICATE_REQUEST,
+- RESPST_ERR_MALFORMED_WQE,
+- RESPST_ERR_UNSUPPORTED_OPCODE,
+- RESPST_ERR_MISALIGNED_ATOMIC,
+- RESPST_ERR_PSN_OUT_OF_SEQ,
+- RESPST_ERR_MISSING_OPCODE_FIRST,
+- RESPST_ERR_MISSING_OPCODE_LAST_C,
+- RESPST_ERR_MISSING_OPCODE_LAST_D1E,
+- RESPST_ERR_TOO_MANY_RDMA_ATM_REQ,
+- RESPST_ERR_RNR,
+- RESPST_ERR_RKEY_VIOLATION,
+- RESPST_ERR_INVALIDATE_RKEY,
+- RESPST_ERR_LENGTH,
+- RESPST_ERR_CQ_OVERFLOW,
+- RESPST_ERROR,
+- RESPST_RESET,
+- RESPST_DONE,
+- RESPST_EXIT,
+-};
+-
+ static char *resp_state_name[] = {
+ [RESPST_NONE] = "NONE",
+ [RESPST_GET_REQ] = "GET_REQ",
+@@ -457,13 +420,23 @@ static enum resp_states rxe_resp_check_length(struct rxe_qp *qp,
+ return RESPST_CHK_RKEY;
+ }
+
++/* if the reth length field is zero, we can assume nothing
++ * about the rkey value and should not validate or use it.
++ * Instead, set qp->resp.rkey to 0, which is an invalid rkey
++ * value since the minimum index part is 1.
++ */
+ static void qp_resp_from_reth(struct rxe_qp *qp, struct rxe_pkt_info *pkt)
+ {
++ unsigned int length = reth_len(pkt);
++
+ qp->resp.va = reth_va(pkt);
+ qp->resp.offset = 0;
+- qp->resp.rkey = reth_rkey(pkt);
+- qp->resp.resid = reth_len(pkt);
+- qp->resp.length = reth_len(pkt);
++ qp->resp.resid = length;
++ qp->resp.length = length;
++ if (pkt->mask & RXE_READ_OR_WRITE_MASK && length == 0)
++ qp->resp.rkey = 0;
++ else
++ qp->resp.rkey = reth_rkey(pkt);
+ }
+
+ static void qp_resp_from_atmeth(struct rxe_qp *qp, struct rxe_pkt_info *pkt)
+@@ -474,6 +447,10 @@ static void qp_resp_from_atmeth(struct rxe_qp *qp, struct rxe_pkt_info *pkt)
+ qp->resp.resid = sizeof(u64);
+ }
+
++/* resolve the packet rkey to qp->resp.mr or set qp->resp.mr to NULL
++ * if an invalid rkey is received or the rdma length is zero. For middle
++ * or last packets use the stored value of mr.
++ */
+ static enum resp_states check_rkey(struct rxe_qp *qp,
+ struct rxe_pkt_info *pkt)
+ {
+@@ -510,10 +487,12 @@ static enum resp_states check_rkey(struct rxe_qp *qp,
+ return RESPST_EXECUTE;
+ }
+
+- /* A zero-byte op is not required to set an addr or rkey. See C9-88 */
++ /* A zero-byte read or write op is not required to
++ * set an addr or rkey. See C9-88
++ */
+ if ((pkt->mask & RXE_READ_OR_WRITE_MASK) &&
+- (pkt->mask & RXE_RETH_MASK) &&
+- reth_len(pkt) == 0) {
++ (pkt->mask & RXE_RETH_MASK) && reth_len(pkt) == 0) {
++ qp->resp.mr = NULL;
+ return RESPST_EXECUTE;
+ }
+
+@@ -592,6 +571,7 @@ skip_check_range:
+ return RESPST_EXECUTE;
+
+ err:
++ qp->resp.mr = NULL;
+ if (mr)
+ rxe_put(mr);
+ if (mw)
+@@ -725,17 +705,12 @@ static enum resp_states process_flush(struct rxe_qp *qp,
+ return RESPST_ACKNOWLEDGE;
+ }
+
+-/* Guarantee atomicity of atomic operations at the machine level. */
+-static DEFINE_SPINLOCK(atomic_ops_lock);
+-
+ static enum resp_states atomic_reply(struct rxe_qp *qp,
+- struct rxe_pkt_info *pkt)
++ struct rxe_pkt_info *pkt)
+ {
+- u64 *vaddr;
+- enum resp_states ret;
+ struct rxe_mr *mr = qp->resp.mr;
+ struct resp_res *res = qp->resp.res;
+- u64 value;
++ int err;
+
+ if (!res) {
+ res = rxe_prepare_res(qp, pkt, RXE_ATOMIC_MASK);
+@@ -743,32 +718,14 @@ static enum resp_states atomic_reply(struct rxe_qp *qp,
+ }
+
+ if (!res->replay) {
+- if (mr->state != RXE_MR_STATE_VALID) {
+- ret = RESPST_ERR_RKEY_VIOLATION;
+- goto out;
+- }
+-
+- vaddr = iova_to_vaddr(mr, qp->resp.va + qp->resp.offset,
+- sizeof(u64));
+-
+- /* check vaddr is 8 bytes aligned. */
+- if (!vaddr || (uintptr_t)vaddr & 7) {
+- ret = RESPST_ERR_MISALIGNED_ATOMIC;
+- goto out;
+- }
++ u64 iova = qp->resp.va + qp->resp.offset;
+
+- spin_lock_bh(&atomic_ops_lock);
+- res->atomic.orig_val = value = *vaddr;
+-
+- if (pkt->opcode == IB_OPCODE_RC_COMPARE_SWAP) {
+- if (value == atmeth_comp(pkt))
+- value = atmeth_swap_add(pkt);
+- } else {
+- value += atmeth_swap_add(pkt);
+- }
+-
+- *vaddr = value;
+- spin_unlock_bh(&atomic_ops_lock);
++ err = rxe_mr_do_atomic_op(mr, iova, pkt->opcode,
++ atmeth_comp(pkt),
++ atmeth_swap_add(pkt),
++ &res->atomic.orig_val);
++ if (err)
++ return err;
+
+ qp->resp.msn++;
+
+@@ -780,35 +737,35 @@ static enum resp_states atomic_reply(struct rxe_qp *qp,
+ qp->resp.status = IB_WC_SUCCESS;
+ }
+
+- ret = RESPST_ACKNOWLEDGE;
+-out:
+- return ret;
++ return RESPST_ACKNOWLEDGE;
+ }
+
+-#ifdef CONFIG_64BIT
+-static enum resp_states do_atomic_write(struct rxe_qp *qp,
+- struct rxe_pkt_info *pkt)
++static enum resp_states atomic_write_reply(struct rxe_qp *qp,
++ struct rxe_pkt_info *pkt)
+ {
+- struct rxe_mr *mr = qp->resp.mr;
+- int payload = payload_size(pkt);
+- u64 src, *dst;
+-
+- if (mr->state != RXE_MR_STATE_VALID)
+- return RESPST_ERR_RKEY_VIOLATION;
++ struct resp_res *res = qp->resp.res;
++ struct rxe_mr *mr;
++ u64 value;
++ u64 iova;
++ int err;
+
+- memcpy(&src, payload_addr(pkt), payload);
++ if (!res) {
++ res = rxe_prepare_res(qp, pkt, RXE_ATOMIC_WRITE_MASK);
++ qp->resp.res = res;
++ }
+
+- dst = iova_to_vaddr(mr, qp->resp.va + qp->resp.offset, payload);
+- /* check vaddr is 8 bytes aligned. */
+- if (!dst || (uintptr_t)dst & 7)
+- return RESPST_ERR_MISALIGNED_ATOMIC;
++ if (res->replay)
++ return RESPST_ACKNOWLEDGE;
+
+- /* Do atomic write after all prior operations have completed */
+- smp_store_release(dst, src);
++ mr = qp->resp.mr;
++ value = *(u64 *)payload_addr(pkt);
++ iova = qp->resp.va + qp->resp.offset;
+
+- /* decrease resp.resid to zero */
+- qp->resp.resid -= sizeof(payload);
++ err = rxe_mr_do_atomic_write(mr, iova, value);
++ if (err)
++ return err;
+
++ qp->resp.resid = 0;
+ qp->resp.msn++;
+
+ /* next expected psn, read handles this separately */
+@@ -817,29 +774,8 @@ static enum resp_states do_atomic_write(struct rxe_qp *qp,
+
+ qp->resp.opcode = pkt->opcode;
+ qp->resp.status = IB_WC_SUCCESS;
+- return RESPST_ACKNOWLEDGE;
+-}
+-#else
+-static enum resp_states do_atomic_write(struct rxe_qp *qp,
+- struct rxe_pkt_info *pkt)
+-{
+- return RESPST_ERR_UNSUPPORTED_OPCODE;
+-}
+-#endif /* CONFIG_64BIT */
+
+-static enum resp_states atomic_write_reply(struct rxe_qp *qp,
+- struct rxe_pkt_info *pkt)
+-{
+- struct resp_res *res = qp->resp.res;
+-
+- if (!res) {
+- res = rxe_prepare_res(qp, pkt, RXE_ATOMIC_WRITE_MASK);
+- qp->resp.res = res;
+- }
+-
+- if (res->replay)
+- return RESPST_ACKNOWLEDGE;
+- return do_atomic_write(qp, pkt);
++ return RESPST_ACKNOWLEDGE;
+ }
+
+ static struct sk_buff *prepare_ack_packet(struct rxe_qp *qp,
+@@ -966,7 +902,11 @@ static enum resp_states read_reply(struct rxe_qp *qp,
+ }
+
+ if (res->state == rdatm_res_state_new) {
+- if (!res->replay) {
++ if (!res->replay || qp->resp.length == 0) {
++			/* if length == 0, mr will be NULL (which is OK);
++			 * otherwise qp->resp.mr holds a ref on the mr
++			 * which we transfer to the local mr and drop below.
++ */
+ mr = qp->resp.mr;
+ qp->resp.mr = NULL;
+ } else {
+@@ -980,6 +920,10 @@ static enum resp_states read_reply(struct rxe_qp *qp,
+ else
+ opcode = IB_OPCODE_RC_RDMA_READ_RESPONSE_FIRST;
+ } else {
++ /* re-lookup mr from rkey on all later packets.
++ * length will be non-zero. This can fail if someone
++ * modifies or destroys the mr since the first packet.
++ */
+ mr = rxe_recheck_mr(qp, res->read.rkey);
+ if (!mr)
+ return RESPST_ERR_RKEY_VIOLATION;
+@@ -997,18 +941,16 @@ static enum resp_states read_reply(struct rxe_qp *qp,
+ skb = prepare_ack_packet(qp, &ack_pkt, opcode, payload,
+ res->cur_psn, AETH_ACK_UNLIMITED);
+ if (!skb) {
+- if (mr)
+- rxe_put(mr);
+- return RESPST_ERR_RNR;
++ state = RESPST_ERR_RNR;
++ goto err_out;
+ }
+
+ err = rxe_mr_copy(mr, res->read.va, payload_addr(&ack_pkt),
+ payload, RXE_FROM_MR_OBJ);
+- if (mr)
+- rxe_put(mr);
+ if (err) {
+ kfree_skb(skb);
+- return RESPST_ERR_RKEY_VIOLATION;
++ state = RESPST_ERR_RKEY_VIOLATION;
++ goto err_out;
+ }
+
+ if (bth_pad(&ack_pkt)) {
+@@ -1017,9 +959,12 @@ static enum resp_states read_reply(struct rxe_qp *qp,
+ memset(pad, 0, bth_pad(&ack_pkt));
+ }
+
++ /* rxe_xmit_packet always consumes the skb */
+ err = rxe_xmit_packet(qp, &ack_pkt, skb);
+- if (err)
+- return RESPST_ERR_RNR;
++ if (err) {
++ state = RESPST_ERR_RNR;
++ goto err_out;
++ }
+
+ res->read.va += payload;
+ res->read.resid -= payload;
+@@ -1036,6 +981,9 @@ static enum resp_states read_reply(struct rxe_qp *qp,
+ state = RESPST_CLEANUP;
+ }
+
++err_out:
++ if (mr)
++ rxe_put(mr);
+ return state;
+ }
+
+diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
+index 025b35bf014e2..a3aee247aa157 100644
+--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
++++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
+@@ -245,7 +245,7 @@ static int post_one_recv(struct rxe_rq *rq, const struct ib_recv_wr *ibwr)
+ int num_sge = ibwr->num_sge;
+ int full;
+
+- full = queue_full(rq->queue, QUEUE_TYPE_TO_DRIVER);
++ full = queue_full(rq->queue, QUEUE_TYPE_FROM_ULP);
+ if (unlikely(full))
+ return -ENOMEM;
+
+@@ -256,7 +256,7 @@ static int post_one_recv(struct rxe_rq *rq, const struct ib_recv_wr *ibwr)
+ for (i = 0; i < num_sge; i++)
+ length += ibwr->sg_list[i].length;
+
+- recv_wqe = queue_producer_addr(rq->queue, QUEUE_TYPE_TO_DRIVER);
++ recv_wqe = queue_producer_addr(rq->queue, QUEUE_TYPE_FROM_ULP);
+ recv_wqe->wr_id = ibwr->wr_id;
+
+ memcpy(recv_wqe->dma.sge, ibwr->sg_list,
+@@ -268,7 +268,7 @@ static int post_one_recv(struct rxe_rq *rq, const struct ib_recv_wr *ibwr)
+ recv_wqe->dma.cur_sge = 0;
+ recv_wqe->dma.sge_offset = 0;
+
+- queue_advance_producer(rq->queue, QUEUE_TYPE_TO_DRIVER);
++ queue_advance_producer(rq->queue, QUEUE_TYPE_FROM_ULP);
+
+ return 0;
+ }
+@@ -623,17 +623,17 @@ static int post_one_send(struct rxe_qp *qp, const struct ib_send_wr *ibwr,
+
+ spin_lock_irqsave(&qp->sq.sq_lock, flags);
+
+- full = queue_full(sq->queue, QUEUE_TYPE_TO_DRIVER);
++ full = queue_full(sq->queue, QUEUE_TYPE_FROM_ULP);
+
+ if (unlikely(full)) {
+ spin_unlock_irqrestore(&qp->sq.sq_lock, flags);
+ return -ENOMEM;
+ }
+
+- send_wqe = queue_producer_addr(sq->queue, QUEUE_TYPE_TO_DRIVER);
++ send_wqe = queue_producer_addr(sq->queue, QUEUE_TYPE_FROM_ULP);
+ init_send_wqe(qp, ibwr, mask, length, send_wqe);
+
+- queue_advance_producer(sq->queue, QUEUE_TYPE_TO_DRIVER);
++ queue_advance_producer(sq->queue, QUEUE_TYPE_FROM_ULP);
+
+ spin_unlock_irqrestore(&qp->sq.sq_lock, flags);
+
+@@ -821,12 +821,12 @@ static int rxe_poll_cq(struct ib_cq *ibcq, int num_entries, struct ib_wc *wc)
+
+ spin_lock_irqsave(&cq->cq_lock, flags);
+ for (i = 0; i < num_entries; i++) {
+- cqe = queue_head(cq->queue, QUEUE_TYPE_FROM_DRIVER);
++ cqe = queue_head(cq->queue, QUEUE_TYPE_TO_ULP);
+ if (!cqe)
+ break;
+
+ memcpy(wc++, &cqe->ibwc, sizeof(*wc));
+- queue_advance_consumer(cq->queue, QUEUE_TYPE_FROM_DRIVER);
++ queue_advance_consumer(cq->queue, QUEUE_TYPE_TO_ULP);
+ }
+ spin_unlock_irqrestore(&cq->cq_lock, flags);
+
+@@ -838,7 +838,7 @@ static int rxe_peek_cq(struct ib_cq *ibcq, int wc_cnt)
+ struct rxe_cq *cq = to_rcq(ibcq);
+ int count;
+
+- count = queue_count(cq->queue, QUEUE_TYPE_FROM_DRIVER);
++ count = queue_count(cq->queue, QUEUE_TYPE_TO_ULP);
+
+ return (count > wc_cnt) ? wc_cnt : count;
+ }
+@@ -854,7 +854,7 @@ static int rxe_req_notify_cq(struct ib_cq *ibcq, enum ib_cq_notify_flags flags)
+ if (cq->notify != IB_CQ_NEXT_COMP)
+ cq->notify = flags & IB_CQ_SOLICITED_MASK;
+
+- empty = queue_empty(cq->queue, QUEUE_TYPE_FROM_DRIVER);
++ empty = queue_empty(cq->queue, QUEUE_TYPE_TO_ULP);
+
+ if ((flags & IB_CQ_REPORT_MISSED_EVENTS) && !empty)
+ ret = 1;
+@@ -948,42 +948,6 @@ err1:
+ return ERR_PTR(err);
+ }
+
+-static int rxe_set_page(struct ib_mr *ibmr, u64 addr)
+-{
+- struct rxe_mr *mr = to_rmr(ibmr);
+- struct rxe_map *map;
+- struct rxe_phys_buf *buf;
+-
+- if (unlikely(mr->nbuf == mr->num_buf))
+- return -ENOMEM;
+-
+- map = mr->map[mr->nbuf / RXE_BUF_PER_MAP];
+- buf = &map->buf[mr->nbuf % RXE_BUF_PER_MAP];
+-
+- buf->addr = addr;
+- buf->size = ibmr->page_size;
+- mr->nbuf++;
+-
+- return 0;
+-}
+-
+-static int rxe_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg,
+- int sg_nents, unsigned int *sg_offset)
+-{
+- struct rxe_mr *mr = to_rmr(ibmr);
+- int n;
+-
+- mr->nbuf = 0;
+-
+- n = ib_sg_to_pages(ibmr, sg, sg_nents, sg_offset, rxe_set_page);
+-
+- mr->page_shift = ilog2(ibmr->page_size);
+- mr->page_mask = ibmr->page_size - 1;
+- mr->offset = ibmr->iova & mr->page_mask;
+-
+- return n;
+-}
+-
+ static ssize_t parent_show(struct device *device,
+ struct device_attribute *attr, char *buf)
+ {
+diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h
+index 19ddfa8904803..c269ae2a32243 100644
+--- a/drivers/infiniband/sw/rxe/rxe_verbs.h
++++ b/drivers/infiniband/sw/rxe/rxe_verbs.h
+@@ -283,17 +283,6 @@ enum rxe_mr_lookup_type {
+ RXE_LOOKUP_REMOTE,
+ };
+
+-#define RXE_BUF_PER_MAP (PAGE_SIZE / sizeof(struct rxe_phys_buf))
+-
+-struct rxe_phys_buf {
+- u64 addr;
+- u64 size;
+-};
+-
+-struct rxe_map {
+- struct rxe_phys_buf buf[RXE_BUF_PER_MAP];
+-};
+-
+ static inline int rkey_is_mw(u32 rkey)
+ {
+ u32 index = rkey >> 8;
+@@ -310,25 +299,24 @@ struct rxe_mr {
+ u32 lkey;
+ u32 rkey;
+ enum rxe_mr_state state;
+- u32 offset;
+ int access;
++ atomic_t num_mw;
+
+- int page_shift;
+- int page_mask;
+- int map_shift;
+- int map_mask;
++ unsigned int page_offset;
++ unsigned int page_shift;
++ u64 page_mask;
+
+ u32 num_buf;
+ u32 nbuf;
+
+- u32 max_buf;
+- u32 num_map;
+-
+- atomic_t num_mw;
+-
+- struct rxe_map **map;
++ struct xarray page_list;
+ };
+
++static inline unsigned int mr_page_size(struct rxe_mr *mr)
++{
++ return mr ? mr->ibmr.page_size : PAGE_SIZE;
++}
++
+ enum rxe_mw_state {
+ RXE_MW_STATE_INVALID = RXE_MR_STATE_INVALID,
+ RXE_MW_STATE_FREE = RXE_MR_STATE_FREE,
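
One subtlety in the struct rxe_mr changes: page_mask flips meaning. The
old code stored an offset mask (ibmr->page_size - 1); the new code
stores a true page mask, ~((u64)page_size - 1), so iova & mr->page_mask
yields the page base while iova & (page_size - 1) yields the offset. A
quick standalone check with made-up values:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
        uint64_t page_size = 4096;
        uint64_t page_mask = ~(page_size - 1);  /* 0xfffffffffffff000 */
        uint64_t iova = 0x10002380;

        /* prints "base=0x10002000 offset=0x380" */
        printf("base=%#llx offset=%#llx\n",
               (unsigned long long)(iova & page_mask),
               (unsigned long long)(iova & (page_size - 1)));
        return 0;
}
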
+diff --git a/drivers/infiniband/sw/siw/siw_mem.c b/drivers/infiniband/sw/siw/siw_mem.c
+index b2b33dd3b4fa1..f51ab2ccf1511 100644
+--- a/drivers/infiniband/sw/siw/siw_mem.c
++++ b/drivers/infiniband/sw/siw/siw_mem.c
+@@ -398,7 +398,7 @@ struct siw_umem *siw_umem_get(u64 start, u64 len, bool writable)
+
+ mlock_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
+
+- if (num_pages + atomic64_read(&mm_s->pinned_vm) > mlock_limit) {
++ if (atomic64_add_return(num_pages, &mm_s->pinned_vm) > mlock_limit) {
+ rv = -ENOMEM;
+ goto out_sem_up;
+ }
+@@ -411,30 +411,27 @@ struct siw_umem *siw_umem_get(u64 start, u64 len, bool writable)
+ goto out_sem_up;
+ }
+ for (i = 0; num_pages; i++) {
+- int got, nents = min_t(int, num_pages, PAGES_PER_CHUNK);
+-
+- umem->page_chunk[i].plist =
++ int nents = min_t(int, num_pages, PAGES_PER_CHUNK);
++ struct page **plist =
+ kcalloc(nents, sizeof(struct page *), GFP_KERNEL);
+- if (!umem->page_chunk[i].plist) {
++
++ if (!plist) {
+ rv = -ENOMEM;
+ goto out_sem_up;
+ }
+- got = 0;
++ umem->page_chunk[i].plist = plist;
+ while (nents) {
+- struct page **plist = &umem->page_chunk[i].plist[got];
+-
+ rv = pin_user_pages(first_page_va, nents, foll_flags,
+ plist, NULL);
+ if (rv < 0)
+ goto out_sem_up;
+
+ umem->num_pages += rv;
+- atomic64_add(rv, &mm_s->pinned_vm);
+ first_page_va += rv * PAGE_SIZE;
++ plist += rv;
+ nents -= rv;
+- got += rv;
++ num_pages -= rv;
+ }
+- num_pages -= got;
+ }
+ out_sem_up:
+ mmap_read_unlock(mm_s);
+@@ -442,6 +439,10 @@ out_sem_up:
+ if (rv > 0)
+ return umem;
+
++ /* Adjust accounting for pages not pinned */
++ if (num_pages)
++ atomic64_sub(num_pages, &mm_s->pinned_vm);
++
+ siw_umem_release(umem, false);
+
+ return ERR_PTR(rv);
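
The siw_umem_get() change closes a check-then-act race: previously the
limit test read pinned_vm and the addition happened later, so two
concurrent registrations could each pass the test and jointly exceed
RLIMIT_MEMLOCK. Reserving with a single atomic add-and-return, then
subtracting whatever was not actually pinned, makes the check and the
reservation one step. The pattern in standalone C11 form (names are
illustrative):

#include <stdatomic.h>
#include <stdbool.h>

static _Atomic long pinned_vm;

/* Reserve n pages against limit: the add and the check are one atomic
 * step, so two callers cannot both sneak in under the limit. */
static bool reserve_pages(long n, long limit)
{
        if (atomic_fetch_add(&pinned_vm, n) + n > limit) {
                /* roll back; siw subtracts only the pages it never pinned */
                atomic_fetch_sub(&pinned_vm, n);
                return false;
        }
        return true;
}
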
+diff --git a/drivers/input/touchscreen/exc3000.c b/drivers/input/touchscreen/exc3000.c
+index 4b7eee01c6aad..69eae79e2087c 100644
+--- a/drivers/input/touchscreen/exc3000.c
++++ b/drivers/input/touchscreen/exc3000.c
+@@ -109,6 +109,11 @@ static inline void exc3000_schedule_timer(struct exc3000_data *data)
+ mod_timer(&data->timer, jiffies + msecs_to_jiffies(EXC3000_TIMEOUT_MS));
+ }
+
++static void exc3000_shutdown_timer(void *timer)
++{
++ timer_shutdown_sync(timer);
++}
++
+ static int exc3000_read_frame(struct exc3000_data *data, u8 *buf)
+ {
+ struct i2c_client *client = data->client;
+@@ -386,6 +391,11 @@ static int exc3000_probe(struct i2c_client *client)
+ if (error)
+ return error;
+
++ error = devm_add_action_or_reset(&client->dev, exc3000_shutdown_timer,
++ &data->timer);
++ if (error)
++ return error;
++
+ error = devm_request_threaded_irq(&client->dev, client->irq,
+ NULL, exc3000_interrupt, IRQF_ONESHOT,
+ client->name, data);
+diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c
+index 467b194975b30..19a46b9f73574 100644
+--- a/drivers/iommu/amd/init.c
++++ b/drivers/iommu/amd/init.c
+@@ -3475,15 +3475,26 @@ found:
+ return 1;
+ }
+
++#define ACPIID_LEN (ACPIHID_UID_LEN + ACPIHID_HID_LEN)
++
+ static int __init parse_ivrs_acpihid(char *str)
+ {
+ u32 seg = 0, bus, dev, fn;
+ char *hid, *uid, *p, *addr;
+- char acpiid[ACPIHID_UID_LEN + ACPIHID_HID_LEN] = {0};
++ char acpiid[ACPIID_LEN] = {0};
+ int i;
+
+ addr = strchr(str, '@');
+ if (!addr) {
++ addr = strchr(str, '=');
++ if (!addr)
++ goto not_found;
++
++ ++addr;
++
++ if (strlen(addr) > ACPIID_LEN)
++ goto not_found;
++
+ if (sscanf(str, "[%x:%x.%x]=%s", &bus, &dev, &fn, acpiid) == 4 ||
+ sscanf(str, "[%x:%x:%x.%x]=%s", &seg, &bus, &dev, &fn, acpiid) == 5) {
+ pr_warn("ivrs_acpihid%s option format deprecated; use ivrs_acpihid=%s@%04x:%02x:%02x.%d instead\n",
+@@ -3496,6 +3507,9 @@ static int __init parse_ivrs_acpihid(char *str)
+ /* We have the '@', make it the terminator to get just the acpiid */
+ *addr++ = 0;
+
++ if (strlen(str) > ACPIID_LEN + 1)
++ goto not_found;
++
+ if (sscanf(str, "=%s", acpiid) != 1)
+ goto not_found;
+
+diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
+index cbeaab55c0dbc..ff4f3d4da3402 100644
+--- a/drivers/iommu/amd/iommu.c
++++ b/drivers/iommu/amd/iommu.c
+@@ -558,6 +558,15 @@ static void amd_iommu_report_page_fault(struct amd_iommu *iommu,
+ * prevent logging it.
+ */
+ if (IS_IOMMU_MEM_TRANSACTION(flags)) {
++ /* Device not attached to domain properly */
++ if (dev_data->domain == NULL) {
++ pr_err_ratelimited("Event logged [Device not attached to domain properly]\n");
++ pr_err_ratelimited(" device=%04x:%02x:%02x.%x domain=0x%04x\n",
++ iommu->pci_seg->id, PCI_BUS_NUM(devid), PCI_SLOT(devid),
++ PCI_FUNC(devid), domain_id);
++ goto out;
++ }
++
+ if (!report_iommu_fault(&dev_data->domain->domain,
+ &pdev->dev, address,
+ IS_WRITE_REQUEST(flags) ?
+@@ -1702,27 +1711,29 @@ static int pdev_pri_ats_enable(struct pci_dev *pdev)
+ /* Only allow access to user-accessible pages */
+ ret = pci_enable_pasid(pdev, 0);
+ if (ret)
+- goto out_err;
++ return ret;
+
+ /* First reset the PRI state of the device */
+ ret = pci_reset_pri(pdev);
+ if (ret)
+- goto out_err;
++ goto out_err_pasid;
+
+ /* Enable PRI */
+ /* FIXME: Hardcode number of outstanding requests for now */
+ ret = pci_enable_pri(pdev, 32);
+ if (ret)
+- goto out_err;
++ goto out_err_pasid;
+
+ ret = pci_enable_ats(pdev, PAGE_SHIFT);
+ if (ret)
+- goto out_err;
++ goto out_err_pri;
+
+ return 0;
+
+-out_err:
++out_err_pri:
+ pci_disable_pri(pdev);
++
++out_err_pasid:
+ pci_disable_pasid(pdev);
+
+ return ret;
+@@ -2159,6 +2170,13 @@ static int amd_iommu_attach_device(struct iommu_domain *dom,
+ struct amd_iommu *iommu = rlookup_amd_iommu(dev);
+ int ret;
+
++ /*
++	 * Skip attaching the device to the domain if the new domain
++	 * is the same as the device's current domain.
++ */
++ if (dev_data->domain == domain)
++ return 0;
++
+ dev_data->defer_attach = false;
+
+ if (dev_data->domain)
+@@ -2387,12 +2405,17 @@ static int amd_iommu_def_domain_type(struct device *dev)
+ return 0;
+
+ /*
+- * Do not identity map IOMMUv2 capable devices when memory encryption is
+- * active, because some of those devices (AMD GPUs) don't have the
+- * encryption bit in their DMA-mask and require remapping.
++ * Do not identity map IOMMUv2 capable devices when:
++ * - memory encryption is active, because some of those devices
++ * (AMD GPUs) don't have the encryption bit in their DMA-mask
++ * and require remapping.
++ * - SNP is enabled, because it prohibits DTE[Mode]=0.
+ */
+- if (!cc_platform_has(CC_ATTR_MEM_ENCRYPT) && dev_data->iommu_v2)
++ if (dev_data->iommu_v2 &&
++ !cc_platform_has(CC_ATTR_MEM_ENCRYPT) &&
++ !amd_iommu_snp_en) {
+ return IOMMU_DOMAIN_IDENTITY;
++ }
+
+ return 0;
+ }
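
pdev_pri_ats_enable() previously funnelled every failure through one
out_err label, so an early PASID failure would try to disable PRI that
was never enabled, and vice versa. The fix is the standard layered
unwind idiom: one label per acquired resource, jumped to in reverse
acquisition order. A standalone skeleton (the enable_*/disable_* stubs
are stand-ins for the pci_enable_*/pci_disable_* calls):

#include <errno.h>

static int enable_a(void)  { return 0; }    /* e.g. pci_enable_pasid() */
static int enable_b(void)  { return 0; }    /* e.g. pci_enable_pri() */
static int enable_c(void)  { return -EIO; } /* pretend the last step fails */
static void disable_a(void) { }
static void disable_b(void) { }

static int enable_all(void)
{
        int ret;

        ret = enable_a();
        if (ret)
                return ret;             /* nothing to unwind yet */

        ret = enable_b();
        if (ret)
                goto out_disable_a;

        ret = enable_c();
        if (ret)
                goto out_disable_b;

        return 0;

out_disable_b:
        disable_b();                    /* unwind in reverse order */
out_disable_a:
        disable_a();
        return ret;
}
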
+diff --git a/drivers/iommu/exynos-iommu.c b/drivers/iommu/exynos-iommu.c
+index b0cde22119875..c1d579c24740b 100644
+--- a/drivers/iommu/exynos-iommu.c
++++ b/drivers/iommu/exynos-iommu.c
+@@ -1446,7 +1446,7 @@ static int __init exynos_iommu_init(void)
+
+ return 0;
+ err_reg_driver:
+- platform_driver_unregister(&exynos_sysmmu_driver);
++ kmem_cache_free(lv2table_kmem_cache, zero_lv2_table);
+ err_zero_lv2:
+ kmem_cache_destroy(lv2table_kmem_cache);
+ return ret;
+diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
+index 59df7e42fd533..52afcdaf7c7f1 100644
+--- a/drivers/iommu/intel/iommu.c
++++ b/drivers/iommu/intel/iommu.c
+@@ -4005,7 +4005,8 @@ int __init intel_iommu_init(void)
+ * is likely to be much lower than the overhead of synchronizing
+ * the virtual and physical IOMMU page-tables.
+ */
+- if (cap_caching_mode(iommu->cap)) {
++ if (cap_caching_mode(iommu->cap) &&
++ !first_level_by_default(IOMMU_DOMAIN_DMA)) {
+ pr_info_once("IOMMU batching disallowed due to virtualization\n");
+ iommu_set_dma_strict();
+ }
+@@ -4346,7 +4347,12 @@ static size_t intel_iommu_unmap(struct iommu_domain *domain,
+ if (dmar_domain->max_addr == iova + size)
+ dmar_domain->max_addr = iova;
+
+- iommu_iotlb_gather_add_page(domain, gather, iova, size);
++ /*
++ * We do not use page-selective IOTLB invalidation in flush queue,
++ * so there is no need to track page and sync iotlb.
++ */
++ if (!iommu_iotlb_gather_queued(gather))
++ iommu_iotlb_gather_add_page(domain, gather, iova, size);
+
+ return size;
+ }
+@@ -4642,8 +4648,12 @@ static int intel_iommu_enable_sva(struct device *dev)
+ return -EINVAL;
+
+ ret = iopf_queue_add_device(iommu->iopf_queue, dev);
+- if (!ret)
+- ret = iommu_register_device_fault_handler(dev, iommu_queue_iopf, dev);
++ if (ret)
++ return ret;
++
++ ret = iommu_register_device_fault_handler(dev, iommu_queue_iopf, dev);
++ if (ret)
++ iopf_queue_remove_device(iommu->iopf_queue, dev);
+
+ return ret;
+ }
+@@ -4655,8 +4665,12 @@ static int intel_iommu_disable_sva(struct device *dev)
+ int ret;
+
+ ret = iommu_unregister_device_fault_handler(dev);
+- if (!ret)
+- ret = iopf_queue_remove_device(iommu->iopf_queue, dev);
++ if (ret)
++ return ret;
++
++ ret = iopf_queue_remove_device(iommu->iopf_queue, dev);
++ if (ret)
++ iommu_register_device_fault_handler(dev, iommu_queue_iopf, dev);
+
+ return ret;
+ }
+diff --git a/drivers/iommu/intel/pasid.c b/drivers/iommu/intel/pasid.c
+index fb3c7020028d0..9d2f05cf61648 100644
+--- a/drivers/iommu/intel/pasid.c
++++ b/drivers/iommu/intel/pasid.c
+@@ -128,6 +128,9 @@ int intel_pasid_alloc_table(struct device *dev)
+ pasid_table->max_pasid = 1 << (order + PAGE_SHIFT + 3);
+ info->pasid_table = pasid_table;
+
++ if (!ecap_coherent(info->iommu->ecap))
++ clflush_cache_range(pasid_table->table, size);
++
+ return 0;
+ }
+
+@@ -215,6 +218,10 @@ retry:
+ free_pgtable_page(entries);
+ goto retry;
+ }
++ if (!ecap_coherent(info->iommu->ecap)) {
++ clflush_cache_range(entries, VTD_PAGE_SIZE);
++ clflush_cache_range(&dir[dir_index].val, sizeof(*dir));
++ }
+ }
+
+ return &entries[index];
+@@ -364,6 +371,16 @@ static inline void pasid_set_page_snoop(struct pasid_entry *pe, bool value)
+ pasid_set_bits(&pe->val[1], 1 << 23, value << 23);
+ }
+
++/*
++ * Setup No Execute Enable bit (Bit 133) of a scalable mode PASID
++ * entry. It is required when XD bit of the first level page table
++ * entry is about to be set.
++ */
++static inline void pasid_set_nxe(struct pasid_entry *pe)
++{
++ pasid_set_bits(&pe->val[2], 1 << 5, 1 << 5);
++}
++
+ /*
+ * Setup the Page Snoop (PGSNP) field (Bit 88) of a scalable mode
+ * PASID entry.
+@@ -557,6 +574,7 @@ int intel_pasid_setup_first_level(struct intel_iommu *iommu,
+ pasid_set_domain_id(pte, did);
+ pasid_set_address_width(pte, iommu->agaw);
+ pasid_set_page_snoop(pte, !!ecap_smpwc(iommu->ecap));
++ pasid_set_nxe(pte);
+
+ /* Setup Present and PASID Granular Transfer Type: */
+ pasid_set_translation_type(pte, PASID_ENTRY_PGTT_FL_ONLY);
+diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
+index 5f6a85aea501e..50d858f36a81b 100644
+--- a/drivers/iommu/iommu.c
++++ b/drivers/iommu/iommu.c
+@@ -774,12 +774,16 @@ struct iommu_group *iommu_group_alloc(void)
+
+ ret = iommu_group_create_file(group,
+ &iommu_group_attr_reserved_regions);
+- if (ret)
++ if (ret) {
++ kobject_put(group->devices_kobj);
+ return ERR_PTR(ret);
++ }
+
+ ret = iommu_group_create_file(group, &iommu_group_attr_type);
+- if (ret)
++ if (ret) {
++ kobject_put(group->devices_kobj);
+ return ERR_PTR(ret);
++ }
+
+ pr_debug("Allocated group %d\n", group->id);
+
+@@ -2124,8 +2128,22 @@ static int __iommu_attach_group(struct iommu_domain *domain,
+
+ ret = __iommu_group_for_each_dev(group, domain,
+ iommu_group_do_attach_device);
+- if (ret == 0)
++ if (ret == 0) {
+ group->domain = domain;
++ } else {
++ /*
++ * To recover from the case when certain device within the
++ * group fails to attach to the new domain, we need force
++ * attaching all devices back to the old domain. The old
++ * domain is compatible for all devices in the group,
++ * hence the iommu driver should always return success.
++ */
++ struct iommu_domain *old_domain = group->domain;
++
++ group->domain = NULL;
++ WARN(__iommu_group_set_domain(group, old_domain),
++ "iommu driver failed to attach a compatible domain");
++ }
+
+ return ret;
+ }
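
__iommu_attach_group() now handles partial failure: if any device in the
group rejects the new domain, the group must not be left half-attached,
so every device is forced back onto the old domain, which is known-good
because the whole group was already running on it. The recovery shape,
as a kernel-style sketch (attach_all() is a stand-in for
__iommu_group_for_each_dev()/__iommu_group_set_domain()):

        ret = attach_all(group, new_domain);
        if (ret == 0) {
                group->domain = new_domain;
        } else {
                struct iommu_domain *old_domain = group->domain;

                /* clear first so the re-attach is not short-circuited */
                group->domain = NULL;
                WARN(attach_all(group, old_domain),
                     "rollback to a compatible domain must not fail");
                group->domain = old_domain;
        }
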
+diff --git a/drivers/iommu/iommufd/device.c b/drivers/iommu/iommufd/device.c
+index d81f93a321afc..f6f42d8bc8ad8 100644
+--- a/drivers/iommu/iommufd/device.c
++++ b/drivers/iommu/iommufd/device.c
+@@ -346,10 +346,6 @@ int iommufd_device_attach(struct iommufd_device *idev, u32 *pt_id)
+ rc = iommufd_device_do_attach(idev, hwpt);
+ if (rc)
+ goto out_put_pt_obj;
+-
+- mutex_lock(&hwpt->ioas->mutex);
+- list_add_tail(&hwpt->hwpt_item, &hwpt->ioas->hwpt_list);
+- mutex_unlock(&hwpt->ioas->mutex);
+ break;
+ }
+ case IOMMUFD_OBJ_IOAS: {
+diff --git a/drivers/iommu/iommufd/main.c b/drivers/iommu/iommufd/main.c
+index 083e6fcbe10ad..3fbe636c3d8a6 100644
+--- a/drivers/iommu/iommufd/main.c
++++ b/drivers/iommu/iommufd/main.c
+@@ -252,9 +252,12 @@ union ucmd_buffer {
+ struct iommu_destroy destroy;
+ struct iommu_ioas_alloc alloc;
+ struct iommu_ioas_allow_iovas allow_iovas;
++ struct iommu_ioas_copy ioas_copy;
+ struct iommu_ioas_iova_ranges iova_ranges;
+ struct iommu_ioas_map map;
+ struct iommu_ioas_unmap unmap;
++ struct iommu_option option;
++ struct iommu_vfio_ioas vfio_ioas;
+ #ifdef CONFIG_IOMMUFD_TEST
+ struct iommu_test_cmd test;
+ #endif
+diff --git a/drivers/iommu/iommufd/vfio_compat.c b/drivers/iommu/iommufd/vfio_compat.c
+index 3ceca0e8311c3..dba88ee1d4571 100644
+--- a/drivers/iommu/iommufd/vfio_compat.c
++++ b/drivers/iommu/iommufd/vfio_compat.c
+@@ -381,7 +381,7 @@ static int iommufd_vfio_iommu_get_info(struct iommufd_ctx *ictx,
+ };
+ size_t minsz = offsetofend(struct vfio_iommu_type1_info, iova_pgsizes);
+ struct vfio_info_cap_header __user *last_cap = NULL;
+- struct vfio_iommu_type1_info info;
++ struct vfio_iommu_type1_info info = {};
+ struct iommufd_ioas *ioas;
+ size_t total_cap_size;
+ int rc;
+diff --git a/drivers/irqchip/irq-alpine-msi.c b/drivers/irqchip/irq-alpine-msi.c
+index 5ddb8e578ac6a..fc1ef7de37973 100644
+--- a/drivers/irqchip/irq-alpine-msi.c
++++ b/drivers/irqchip/irq-alpine-msi.c
+@@ -199,6 +199,7 @@ static int alpine_msix_init_domains(struct alpine_msix_data *priv,
+ }
+
+ gic_domain = irq_find_host(gic_node);
++ of_node_put(gic_node);
+ if (!gic_domain) {
+ pr_err("Failed to find the GIC domain\n");
+ return -ENXIO;
+diff --git a/drivers/irqchip/irq-bcm7120-l2.c b/drivers/irqchip/irq-bcm7120-l2.c
+index bb6609cebdbce..1e9dab6e0d86f 100644
+--- a/drivers/irqchip/irq-bcm7120-l2.c
++++ b/drivers/irqchip/irq-bcm7120-l2.c
+@@ -279,7 +279,8 @@ static int __init bcm7120_l2_intc_probe(struct device_node *dn,
+ flags |= IRQ_GC_BE_IO;
+
+ ret = irq_alloc_domain_generic_chips(data->domain, IRQS_PER_WORD, 1,
+- dn->full_name, handle_level_irq, clr, 0, flags);
++ dn->full_name, handle_level_irq, clr,
++ IRQ_LEVEL, flags);
+ if (ret) {
+ pr_err("failed to allocate generic irq chip\n");
+ goto out_free_domain;
+diff --git a/drivers/irqchip/irq-brcmstb-l2.c b/drivers/irqchip/irq-brcmstb-l2.c
+index e4efc08ac5948..091b0fe7e3242 100644
+--- a/drivers/irqchip/irq-brcmstb-l2.c
++++ b/drivers/irqchip/irq-brcmstb-l2.c
+@@ -161,6 +161,7 @@ static int __init brcmstb_l2_intc_of_init(struct device_node *np,
+ *init_params)
+ {
+ unsigned int clr = IRQ_NOREQUEST | IRQ_NOPROBE | IRQ_NOAUTOEN;
++ unsigned int set = 0;
+ struct brcmstb_l2_intc_data *data;
+ struct irq_chip_type *ct;
+ int ret;
+@@ -208,9 +209,12 @@ static int __init brcmstb_l2_intc_of_init(struct device_node *np,
+ if (IS_ENABLED(CONFIG_MIPS) && IS_ENABLED(CONFIG_CPU_BIG_ENDIAN))
+ flags |= IRQ_GC_BE_IO;
+
++ if (init_params->handler == handle_level_irq)
++ set |= IRQ_LEVEL;
++
+ /* Allocate a single Generic IRQ chip for this node */
+ ret = irq_alloc_domain_generic_chips(data->domain, 32, 1,
+- np->full_name, init_params->handler, clr, 0, flags);
++ np->full_name, init_params->handler, clr, set, flags);
+ if (ret) {
+ pr_err("failed to allocate generic irq chip\n");
+ goto out_free_domain;
+diff --git a/drivers/irqchip/irq-mvebu-gicp.c b/drivers/irqchip/irq-mvebu-gicp.c
+index fe88a782173dd..c43a345061d53 100644
+--- a/drivers/irqchip/irq-mvebu-gicp.c
++++ b/drivers/irqchip/irq-mvebu-gicp.c
+@@ -221,6 +221,7 @@ static int mvebu_gicp_probe(struct platform_device *pdev)
+ }
+
+ parent_domain = irq_find_host(irq_parent_dn);
++ of_node_put(irq_parent_dn);
+ if (!parent_domain) {
+ dev_err(&pdev->dev, "failed to find parent IRQ domain\n");
+ return -ENODEV;
+diff --git a/drivers/irqchip/irq-ti-sci-intr.c b/drivers/irqchip/irq-ti-sci-intr.c
+index fe8fad22bcf96..020ddf29efb80 100644
+--- a/drivers/irqchip/irq-ti-sci-intr.c
++++ b/drivers/irqchip/irq-ti-sci-intr.c
+@@ -236,6 +236,7 @@ static int ti_sci_intr_irq_domain_probe(struct platform_device *pdev)
+ }
+
+ parent_domain = irq_find_host(parent_node);
++ of_node_put(parent_node);
+ if (!parent_domain) {
+ dev_err(dev, "Failed to find IRQ parent domain\n");
+ return -ENODEV;
+diff --git a/drivers/irqchip/irqchip.c b/drivers/irqchip/irqchip.c
+index 3570f0a588c4b..7899607fbee8d 100644
+--- a/drivers/irqchip/irqchip.c
++++ b/drivers/irqchip/irqchip.c
+@@ -38,8 +38,10 @@ int platform_irqchip_probe(struct platform_device *pdev)
+ struct device_node *par_np = of_irq_find_parent(np);
+ of_irq_init_cb_t irq_init_cb = of_device_get_match_data(&pdev->dev);
+
+- if (!irq_init_cb)
++ if (!irq_init_cb) {
++ of_node_put(par_np);
+ return -EINVAL;
++ }
+
+ if (par_np == np)
+ par_np = NULL;
+@@ -52,8 +54,10 @@ int platform_irqchip_probe(struct platform_device *pdev)
+ * interrupt controller. The actual initialization callback of this
+ * interrupt controller can check for specific domains as necessary.
+ */
+- if (par_np && !irq_find_matching_host(par_np, DOMAIN_BUS_ANY))
++ if (par_np && !irq_find_matching_host(par_np, DOMAIN_BUS_ANY)) {
++ of_node_put(par_np);
+ return -EPROBE_DEFER;
++ }
+
+ return irq_init_cb(np, par_np);
+ }
+diff --git a/drivers/leds/led-class.c b/drivers/leds/led-class.c
+index 6a8ea94834fa3..aa39b2a48fdff 100644
+--- a/drivers/leds/led-class.c
++++ b/drivers/leds/led-class.c
+@@ -235,14 +235,17 @@ struct led_classdev *of_led_get(struct device_node *np, int index)
+
+ led_dev = class_find_device_by_of_node(leds_class, led_node);
+ of_node_put(led_node);
++ put_device(led_dev);
+
+ if (!led_dev)
+ return ERR_PTR(-EPROBE_DEFER);
+
+ led_cdev = dev_get_drvdata(led_dev);
+
+- if (!try_module_get(led_cdev->dev->parent->driver->owner))
++ if (!try_module_get(led_cdev->dev->parent->driver->owner)) {
++ put_device(led_cdev->dev);
+ return ERR_PTR(-ENODEV);
++ }
+
+ return led_cdev;
+ }
+@@ -255,6 +258,7 @@ EXPORT_SYMBOL_GPL(of_led_get);
+ void led_put(struct led_classdev *led_cdev)
+ {
+ module_put(led_cdev->dev->parent->driver->owner);
++ put_device(led_cdev->dev);
+ }
+ EXPORT_SYMBOL_GPL(led_put);
+
+diff --git a/drivers/leds/leds-is31fl319x.c b/drivers/leds/leds-is31fl319x.c
+index b2f4c4ec7c567..7c908414ac7e0 100644
+--- a/drivers/leds/leds-is31fl319x.c
++++ b/drivers/leds/leds-is31fl319x.c
+@@ -495,6 +495,11 @@ static inline int is31fl3196_db_to_gain(u32 dezibel)
+ return dezibel / IS31FL3196_AUDIO_GAIN_DB_STEP;
+ }
+
++static void is31f1319x_mutex_destroy(void *lock)
++{
++ mutex_destroy(lock);
++}
++
+ static int is31fl319x_probe(struct i2c_client *client)
+ {
+ struct is31fl319x_chip *is31;
+@@ -511,7 +516,7 @@ static int is31fl319x_probe(struct i2c_client *client)
+ return -ENOMEM;
+
+ mutex_init(&is31->lock);
+- err = devm_add_action(dev, (void (*)(void *))mutex_destroy, &is31->lock);
++ err = devm_add_action_or_reset(dev, is31f1319x_mutex_destroy, &is31->lock);
+ if (err)
+ return err;
+
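
Two related idioms appear in the exc3000 and is31fl319x hunks: a
dedicated wrapper function instead of casting mutex_destroy() (or
timer_shutdown_sync()) to void (*)(void *), since calling through a
mismatched function-pointer type is undefined behavior and trips kCFI,
and devm_add_action_or_reset() instead of devm_add_action(), so the
cleanup still runs if registering the action itself fails. A
kernel-style sketch of the combined pattern (struct my_data and
my_init() are illustrative):

#include <linux/device.h>
#include <linux/mutex.h>

struct my_data {
        struct mutex lock;
};

/* A wrapper with the exact action signature: no function-pointer cast. */
static void my_mutex_destroy(void *lock)
{
        mutex_destroy(lock);
}

static int my_init(struct device *dev, struct my_data *data)
{
        mutex_init(&data->lock);

        /* _or_reset: on registration failure the action runs immediately,
         * so the mutex can never be left initialized but untracked */
        return devm_add_action_or_reset(dev, my_mutex_destroy, &data->lock);
}
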
+diff --git a/drivers/leds/simple/simatic-ipc-leds-gpio.c b/drivers/leds/simple/simatic-ipc-leds-gpio.c
+index 07f0d79d604d4..e8d329b5a68c3 100644
+--- a/drivers/leds/simple/simatic-ipc-leds-gpio.c
++++ b/drivers/leds/simple/simatic-ipc-leds-gpio.c
+@@ -77,6 +77,8 @@ static int simatic_ipc_leds_gpio_probe(struct platform_device *pdev)
+
+ switch (plat->devmode) {
+ case SIMATIC_IPC_DEVICE_127E:
++ if (!IS_ENABLED(CONFIG_PINCTRL_BROXTON))
++ return -ENODEV;
+ simatic_ipc_led_gpio_table = &simatic_ipc_led_gpio_table_127e;
+ break;
+ case SIMATIC_IPC_DEVICE_227G:
+diff --git a/drivers/md/dm-bufio.c b/drivers/md/dm-bufio.c
+index bb786c39545ec..19caaf684ee34 100644
+--- a/drivers/md/dm-bufio.c
++++ b/drivers/md/dm-bufio.c
+@@ -1833,7 +1833,7 @@ struct dm_bufio_client *dm_bufio_client_create(struct block_device *bdev, unsign
+ c->shrinker.scan_objects = dm_bufio_shrink_scan;
+ c->shrinker.seeks = 1;
+ c->shrinker.batch = 0;
+- r = register_shrinker(&c->shrinker, "md-%s:(%u:%u)", slab_name,
++ r = register_shrinker(&c->shrinker, "dm-bufio:(%u:%u)",
+ MAJOR(bdev->bd_dev), MINOR(bdev->bd_dev));
+ if (r)
+ goto bad;
+diff --git a/drivers/md/dm-cache-background-tracker.c b/drivers/md/dm-cache-background-tracker.c
+index 84814e819e4c3..7887f99b82bd5 100644
+--- a/drivers/md/dm-cache-background-tracker.c
++++ b/drivers/md/dm-cache-background-tracker.c
+@@ -60,6 +60,14 @@ EXPORT_SYMBOL_GPL(btracker_create);
+
+ void btracker_destroy(struct background_tracker *b)
+ {
++ struct bt_work *w, *tmp;
++
++ BUG_ON(!list_empty(&b->issued));
++ list_for_each_entry_safe (w, tmp, &b->queued, list) {
++ list_del(&w->list);
++ kmem_cache_free(b->work_cache, w);
++ }
++
+ kmem_cache_destroy(b->work_cache);
+ kfree(b);
+ }
+diff --git a/drivers/md/dm-cache-target.c b/drivers/md/dm-cache-target.c
+index 5e92fac90b675..17fde3e5a1f7b 100644
+--- a/drivers/md/dm-cache-target.c
++++ b/drivers/md/dm-cache-target.c
+@@ -1805,6 +1805,7 @@ static void process_deferred_bios(struct work_struct *ws)
+
+ else
+ commit_needed = process_bio(cache, bio) || commit_needed;
++ cond_resched();
+ }
+
+ if (commit_needed)
+@@ -1827,6 +1828,7 @@ static void requeue_deferred_bios(struct cache *cache)
+ while ((bio = bio_list_pop(&bios))) {
+ bio->bi_status = BLK_STS_DM_REQUEUE;
+ bio_endio(bio);
++ cond_resched();
+ }
+ }
+
+@@ -1867,6 +1869,8 @@ static void check_migrations(struct work_struct *ws)
+ r = mg_start(cache, op, NULL);
+ if (r)
+ break;
++
++ cond_resched();
+ }
+ }
+
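This hunk, together with the dm-thin and dm.c hunks below, applies one remedy in several places: loops that can process an unbounded number of bios or cells in workqueue context gain a cond_resched() per iteration, so a flood of deferred work cannot monopolize a CPU on non-preemptible kernels. The pattern in isolation, as a hypothetical drain loop rather than the dm-cache code:

#include <linux/bio.h>
#include <linux/sched.h>

static void drain_deferred(struct bio_list *bios)
{
	struct bio *bio;

	while ((bio = bio_list_pop(bios))) {
		submit_bio_noacct(bio);
		cond_resched();		/* voluntary preemption point */
	}
}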
+diff --git a/drivers/md/dm-flakey.c b/drivers/md/dm-flakey.c
+index 89fa7a68c6c42..335684a1aeaa5 100644
+--- a/drivers/md/dm-flakey.c
++++ b/drivers/md/dm-flakey.c
+@@ -303,9 +303,13 @@ static void corrupt_bio_data(struct bio *bio, struct flakey_c *fc)
+ */
+ bio_for_each_segment(bvec, bio, iter) {
+ if (bio_iter_len(bio, iter) > corrupt_bio_byte) {
+- char *segment = (page_address(bio_iter_page(bio, iter))
+- + bio_iter_offset(bio, iter));
++ char *segment;
++ struct page *page = bio_iter_page(bio, iter);
++ if (unlikely(page == ZERO_PAGE(0)))
++ break;
++ segment = bvec_kmap_local(&bvec);
+ segment[corrupt_bio_byte] = fc->corrupt_bio_value;
++ kunmap_local(segment);
+ DMDEBUG("Corrupting data bio=%p by writing %u to byte %u "
+ "(rw=%c bi_opf=%u bi_sector=%llu size=%u)\n",
+ bio, fc->corrupt_bio_value, fc->corrupt_bio_byte,
+@@ -361,9 +365,11 @@ static int flakey_map(struct dm_target *ti, struct bio *bio)
+ /*
+ * Corrupt matching writes.
+ */
+- if (fc->corrupt_bio_byte && (fc->corrupt_bio_rw == WRITE)) {
+- if (all_corrupt_bio_flags_match(bio, fc))
+- corrupt_bio_data(bio, fc);
++ if (fc->corrupt_bio_byte) {
++ if (fc->corrupt_bio_rw == WRITE) {
++ if (all_corrupt_bio_flags_match(bio, fc))
++ corrupt_bio_data(bio, fc);
++ }
+ goto map_bio;
+ }
+
+@@ -389,13 +395,14 @@ static int flakey_end_io(struct dm_target *ti, struct bio *bio,
+ return DM_ENDIO_DONE;
+
+ if (!*error && pb->bio_submitted && (bio_data_dir(bio) == READ)) {
+- if (fc->corrupt_bio_byte && (fc->corrupt_bio_rw == READ) &&
+- all_corrupt_bio_flags_match(bio, fc)) {
+- /*
+- * Corrupt successful matching READs while in down state.
+- */
+- corrupt_bio_data(bio, fc);
+-
++ if (fc->corrupt_bio_byte) {
++ if ((fc->corrupt_bio_rw == READ) &&
++ all_corrupt_bio_flags_match(bio, fc)) {
++ /*
++ * Corrupt successful matching READs while in down state.
++ */
++ corrupt_bio_data(bio, fc);
++ }
+ } else if (!test_bit(DROP_WRITES, &fc->flags) &&
+ !test_bit(ERROR_WRITES, &fc->flags)) {
+ /*
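page_address() is only valid for pages with a permanent kernel mapping, so corrupt_bio_data() switches to bvec_kmap_local(), which also covers highmem pages, and it now refuses to touch bios backed by ZERO_PAGE(0), since writing there would corrupt the shared zero page system-wide. The mapping discipline as a standalone sketch, with a hypothetical helper:

#include <linux/bio.h>
#include <linux/highmem.h>
#include <linux/mm.h>

/* Hypothetical helper: overwrite one byte of a bio, highmem-safe. */
static void poke_bio_byte(struct bio *bio, unsigned int byte, u8 value)
{
	struct bio_vec bvec;
	struct bvec_iter iter;

	bio_for_each_segment(bvec, bio, iter) {
		if (bio_iter_len(bio, iter) > byte) {
			char *p;

			if (bio_iter_page(bio, iter) == ZERO_PAGE(0))
				break;	/* never scribble on the zero page */

			p = bvec_kmap_local(&bvec);
			p[byte] = value;
			kunmap_local(p);
			break;
		}
	}
}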
+diff --git a/drivers/md/dm-ioctl.c b/drivers/md/dm-ioctl.c
+index 36fc6ae4737a0..e031088ff15c6 100644
+--- a/drivers/md/dm-ioctl.c
++++ b/drivers/md/dm-ioctl.c
+@@ -482,7 +482,7 @@ static struct mapped_device *dm_hash_rename(struct dm_ioctl *param,
+ dm_table_event(table);
+ dm_put_live_table(hc->md, srcu_idx);
+
+- if (!dm_kobject_uevent(hc->md, KOBJ_CHANGE, param->event_nr))
++ if (!dm_kobject_uevent(hc->md, KOBJ_CHANGE, param->event_nr, false))
+ param->flags |= DM_UEVENT_GENERATED_FLAG;
+
+ md = hc->md;
+@@ -995,7 +995,7 @@ static int dev_remove(struct file *filp, struct dm_ioctl *param, size_t param_si
+
+ dm_ima_measure_on_device_remove(md, false);
+
+- if (!dm_kobject_uevent(md, KOBJ_REMOVE, param->event_nr))
++ if (!dm_kobject_uevent(md, KOBJ_REMOVE, param->event_nr, false))
+ param->flags |= DM_UEVENT_GENERATED_FLAG;
+
+ dm_put(md);
+@@ -1129,6 +1129,7 @@ static int do_resume(struct dm_ioctl *param)
+ struct hash_cell *hc;
+ struct mapped_device *md;
+ struct dm_table *new_map, *old_map = NULL;
++ bool need_resize_uevent = false;
+
+ down_write(&_hash_lock);
+
+@@ -1149,6 +1150,8 @@ static int do_resume(struct dm_ioctl *param)
+
+ /* Do we need to load a new map ? */
+ if (new_map) {
++ sector_t old_size, new_size;
++
+ /* Suspend if it isn't already suspended */
+ if (param->flags & DM_SKIP_LOCKFS_FLAG)
+ suspend_flags &= ~DM_SUSPEND_LOCKFS_FLAG;
+@@ -1157,6 +1160,7 @@ static int do_resume(struct dm_ioctl *param)
+ if (!dm_suspended_md(md))
+ dm_suspend(md, suspend_flags);
+
++ old_size = dm_get_size(md);
+ old_map = dm_swap_table(md, new_map);
+ if (IS_ERR(old_map)) {
+ dm_sync_table(md);
+@@ -1164,6 +1168,9 @@ static int do_resume(struct dm_ioctl *param)
+ dm_put(md);
+ return PTR_ERR(old_map);
+ }
++ new_size = dm_get_size(md);
++ if (old_size && new_size && old_size != new_size)
++ need_resize_uevent = true;
+
+ if (dm_table_get_mode(new_map) & FMODE_WRITE)
+ set_disk_ro(dm_disk(md), 0);
+@@ -1176,7 +1183,7 @@ static int do_resume(struct dm_ioctl *param)
+ if (!r) {
+ dm_ima_measure_on_device_resume(md, new_map ? true : false);
+
+- if (!dm_kobject_uevent(md, KOBJ_CHANGE, param->event_nr))
++ if (!dm_kobject_uevent(md, KOBJ_CHANGE, param->event_nr, need_resize_uevent))
+ param->flags |= DM_UEVENT_GENERATED_FLAG;
+ }
+ }
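do_resume() now samples the device size on both sides of the table swap and requests the extra uevent only when a live device actually changed size; a size of zero means the device had not been populated yet. The predicate in isolation, as a hypothetical helper:

#include <linux/types.h>

static bool table_swap_resized(sector_t old_size, sector_t new_size)
{
	return old_size && new_size && old_size != new_size;
}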
+diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c
+index 64cfcf46881dc..e4c1a8a21bbd0 100644
+--- a/drivers/md/dm-thin.c
++++ b/drivers/md/dm-thin.c
+@@ -2207,6 +2207,7 @@ static void process_thin_deferred_bios(struct thin_c *tc)
+ throttle_work_update(&pool->throttle);
+ dm_pool_issue_prefetches(pool->pmd);
+ }
++ cond_resched();
+ }
+ blk_finish_plug(&plug);
+ }
+@@ -2289,6 +2290,7 @@ static void process_thin_deferred_cells(struct thin_c *tc)
+ else
+ pool->process_cell(tc, cell);
+ }
++ cond_resched();
+ } while (!list_empty(&cells));
+ }
+
+diff --git a/drivers/md/dm-zoned-metadata.c b/drivers/md/dm-zoned-metadata.c
+index 0278482fac94a..c795ea7da7917 100644
+--- a/drivers/md/dm-zoned-metadata.c
++++ b/drivers/md/dm-zoned-metadata.c
+@@ -2945,7 +2945,7 @@ int dmz_ctr_metadata(struct dmz_dev *dev, int num_dev,
+ zmd->mblk_shrinker.seeks = DEFAULT_SEEKS;
+
+ /* Metadata cache shrinker */
+- ret = register_shrinker(&zmd->mblk_shrinker, "md-meta:(%u:%u)",
++ ret = register_shrinker(&zmd->mblk_shrinker, "dm-zoned-meta:(%u:%u)",
+ MAJOR(dev->bdev->bd_dev),
+ MINOR(dev->bdev->bd_dev));
+ if (ret) {
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index b424a6ee27baf..605662935ce91 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -231,7 +231,6 @@ out_uevent_exit:
+
+ static void local_exit(void)
+ {
+- flush_scheduled_work();
+ destroy_workqueue(deferred_remove_workqueue);
+
+ unregister_blkdev(_major, _name);
+@@ -1008,6 +1007,7 @@ static void dm_wq_requeue_work(struct work_struct *work)
+ io->next = NULL;
+ __dm_io_complete(io, false);
+ io = next;
++ cond_resched();
+ }
+ }
+
+@@ -2172,10 +2172,7 @@ static struct dm_table *__bind(struct mapped_device *md, struct dm_table *t,
+ if (size != dm_get_size(md))
+ memset(&md->geometry, 0, sizeof(md->geometry));
+
+- if (!get_capacity(md->disk))
+- set_capacity(md->disk, size);
+- else
+- set_capacity_and_notify(md->disk, size);
++ set_capacity(md->disk, size);
+
+ dm_table_event_callback(t, event_callback, md);
+
+@@ -2569,6 +2566,7 @@ static void dm_wq_work(struct work_struct *work)
+ break;
+
+ submit_bio_noacct(bio);
++ cond_resched();
+ }
+ }
+
+@@ -2968,24 +2966,26 @@ EXPORT_SYMBOL_GPL(dm_internal_resume_fast);
+ * Event notification.
+ *---------------------------------------------------------------*/
+ int dm_kobject_uevent(struct mapped_device *md, enum kobject_action action,
+- unsigned cookie)
++ unsigned cookie, bool need_resize_uevent)
+ {
+ int r;
+ unsigned noio_flag;
+ char udev_cookie[DM_COOKIE_LENGTH];
+- char *envp[] = { udev_cookie, NULL };
+-
+- noio_flag = memalloc_noio_save();
+-
+- if (!cookie)
+- r = kobject_uevent(&disk_to_dev(md->disk)->kobj, action);
+- else {
++ char *envp[3] = { NULL, NULL, NULL };
++ char **envpp = envp;
++ if (cookie) {
+ snprintf(udev_cookie, DM_COOKIE_LENGTH, "%s=%u",
+ DM_COOKIE_ENV_VAR_NAME, cookie);
+- r = kobject_uevent_env(&disk_to_dev(md->disk)->kobj,
+- action, envp);
++ *envpp++ = udev_cookie;
++ }
++ if (need_resize_uevent) {
++ *envpp++ = "RESIZE=1";
+ }
+
++ noio_flag = memalloc_noio_save();
++
++ r = kobject_uevent_env(&disk_to_dev(md->disk)->kobj, action, envp);
++
+ memalloc_noio_restore(noio_flag);
+
+ return r;
+diff --git a/drivers/md/dm.h b/drivers/md/dm.h
+index 5201df03ce402..a9a3ffcad084c 100644
+--- a/drivers/md/dm.h
++++ b/drivers/md/dm.h
+@@ -203,7 +203,7 @@ int dm_get_table_device(struct mapped_device *md, dev_t dev, fmode_t mode,
+ void dm_put_table_device(struct mapped_device *md, struct dm_dev *d);
+
+ int dm_kobject_uevent(struct mapped_device *md, enum kobject_action action,
+- unsigned cookie);
++ unsigned cookie, bool need_resize_uevent);
+
+ void dm_internal_suspend(struct mapped_device *md);
+ void dm_internal_resume(struct mapped_device *md);
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 02b0240e7c715..272cc5d14906f 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -9030,7 +9030,7 @@ void md_do_sync(struct md_thread *thread)
+ mddev->pers->sync_request(mddev, max_sectors, &skipped);
+
+ if (!test_bit(MD_RECOVERY_CHECK, &mddev->recovery) &&
+- mddev->curr_resync >= MD_RESYNC_ACTIVE) {
++ mddev->curr_resync > MD_RESYNC_ACTIVE) {
+ if (test_bit(MD_RECOVERY_SYNC, &mddev->recovery)) {
+ if (test_bit(MD_RECOVERY_INTR, &mddev->recovery)) {
+ if (mddev->curr_resync >= mddev->recovery_cp) {
+diff --git a/drivers/media/i2c/imx219.c b/drivers/media/i2c/imx219.c
+index 77bd79a5954ed..7a14688f8c228 100644
+--- a/drivers/media/i2c/imx219.c
++++ b/drivers/media/i2c/imx219.c
+@@ -89,6 +89,12 @@
+
+ #define IMX219_REG_ORIENTATION 0x0172
+
++/* Binning Mode */
++#define IMX219_REG_BINNING_MODE 0x0174
++#define IMX219_BINNING_NONE 0x0000
++#define IMX219_BINNING_2X2 0x0101
++#define IMX219_BINNING_2X2_ANALOG 0x0303
++
+ /* Test Pattern Control */
+ #define IMX219_REG_TEST_PATTERN 0x0600
+ #define IMX219_TEST_PATTERN_DISABLE 0
+@@ -143,25 +149,66 @@ struct imx219_mode {
+
+ /* Default register values */
+ struct imx219_reg_list reg_list;
++
++ /* 2x2 binning is used */
++ bool binning;
+ };
+
+-/*
+- * Register sets lifted off the i2C interface from the Raspberry Pi firmware
+- * driver.
+- * 3280x2464 = mode 2, 1920x1080 = mode 1, 1640x1232 = mode 4, 640x480 = mode 7.
+- */
+-static const struct imx219_reg mode_3280x2464_regs[] = {
+- {0x0100, 0x00},
++static const struct imx219_reg imx219_common_regs[] = {
++ {0x0100, 0x00}, /* Mode Select */
++
++ /* To Access Addresses 3000-5fff, send the following commands */
+ {0x30eb, 0x0c},
+ {0x30eb, 0x05},
+ {0x300a, 0xff},
+ {0x300b, 0xff},
+ {0x30eb, 0x05},
+ {0x30eb, 0x09},
+- {0x0114, 0x01},
+- {0x0128, 0x00},
+- {0x012a, 0x18},
++
++ /* PLL Clock Table */
++ {0x0301, 0x05}, /* VTPXCK_DIV */
++ {0x0303, 0x01}, /* VTSYSCK_DIV */
++ {0x0304, 0x03}, /* PREPLLCK_VT_DIV 0x03 = AUTO set */
++ {0x0305, 0x03}, /* PREPLLCK_OP_DIV 0x03 = AUTO set */
++ {0x0306, 0x00}, /* PLL_VT_MPY */
++ {0x0307, 0x39},
++ {0x030b, 0x01}, /* OP_SYS_CLK_DIV */
++ {0x030c, 0x00}, /* PLL_OP_MPY */
++ {0x030d, 0x72},
++
++ /* Undocumented registers */
++ {0x455e, 0x00},
++ {0x471e, 0x4b},
++ {0x4767, 0x0f},
++ {0x4750, 0x14},
++ {0x4540, 0x00},
++ {0x47b4, 0x14},
++ {0x4713, 0x30},
++ {0x478b, 0x10},
++ {0x478f, 0x10},
++ {0x4793, 0x10},
++ {0x4797, 0x0e},
++ {0x479b, 0x0e},
++
++ /* Frame Bank Register Group "A" */
++ {0x0162, 0x0d}, /* Line_Length_A */
++ {0x0163, 0x78},
++ {0x0170, 0x01}, /* X_ODD_INC_A */
++ {0x0171, 0x01}, /* Y_ODD_INC_A */
++
++ /* Output setup registers */
++ {0x0114, 0x01}, /* CSI 2-Lane Mode */
++ {0x0128, 0x00}, /* DPHY Auto Mode */
++ {0x012a, 0x18}, /* EXCK_Freq */
+ {0x012b, 0x00},
++};
++
++/*
++ * Register sets lifted off the I2C interface from the Raspberry Pi firmware
++ * driver.
++ * 3280x2464 = mode 2, 1920x1080 = mode 1, 1640x1232 = mode 4, 640x480 = mode 7.
++ */
++static const struct imx219_reg mode_3280x2464_regs[] = {
+ {0x0164, 0x00},
+ {0x0165, 0x00},
+ {0x0166, 0x0c},
+@@ -174,53 +221,13 @@ static const struct imx219_reg mode_3280x2464_regs[] = {
+ {0x016d, 0xd0},
+ {0x016e, 0x09},
+ {0x016f, 0xa0},
+- {0x0170, 0x01},
+- {0x0171, 0x01},
+- {0x0174, 0x00},
+- {0x0175, 0x00},
+- {0x0301, 0x05},
+- {0x0303, 0x01},
+- {0x0304, 0x03},
+- {0x0305, 0x03},
+- {0x0306, 0x00},
+- {0x0307, 0x39},
+- {0x030b, 0x01},
+- {0x030c, 0x00},
+- {0x030d, 0x72},
+ {0x0624, 0x0c},
+ {0x0625, 0xd0},
+ {0x0626, 0x09},
+ {0x0627, 0xa0},
+- {0x455e, 0x00},
+- {0x471e, 0x4b},
+- {0x4767, 0x0f},
+- {0x4750, 0x14},
+- {0x4540, 0x00},
+- {0x47b4, 0x14},
+- {0x4713, 0x30},
+- {0x478b, 0x10},
+- {0x478f, 0x10},
+- {0x4793, 0x10},
+- {0x4797, 0x0e},
+- {0x479b, 0x0e},
+- {0x0162, 0x0d},
+- {0x0163, 0x78},
+ };
+
+ static const struct imx219_reg mode_1920_1080_regs[] = {
+- {0x0100, 0x00},
+- {0x30eb, 0x05},
+- {0x30eb, 0x0c},
+- {0x300a, 0xff},
+- {0x300b, 0xff},
+- {0x30eb, 0x05},
+- {0x30eb, 0x09},
+- {0x0114, 0x01},
+- {0x0128, 0x00},
+- {0x012a, 0x18},
+- {0x012b, 0x00},
+- {0x0162, 0x0d},
+- {0x0163, 0x78},
+ {0x0164, 0x02},
+ {0x0165, 0xa8},
+ {0x0166, 0x0a},
+@@ -233,49 +240,13 @@ static const struct imx219_reg mode_1920_1080_regs[] = {
+ {0x016d, 0x80},
+ {0x016e, 0x04},
+ {0x016f, 0x38},
+- {0x0170, 0x01},
+- {0x0171, 0x01},
+- {0x0174, 0x00},
+- {0x0175, 0x00},
+- {0x0301, 0x05},
+- {0x0303, 0x01},
+- {0x0304, 0x03},
+- {0x0305, 0x03},
+- {0x0306, 0x00},
+- {0x0307, 0x39},
+- {0x030b, 0x01},
+- {0x030c, 0x00},
+- {0x030d, 0x72},
+ {0x0624, 0x07},
+ {0x0625, 0x80},
+ {0x0626, 0x04},
+ {0x0627, 0x38},
+- {0x455e, 0x00},
+- {0x471e, 0x4b},
+- {0x4767, 0x0f},
+- {0x4750, 0x14},
+- {0x4540, 0x00},
+- {0x47b4, 0x14},
+- {0x4713, 0x30},
+- {0x478b, 0x10},
+- {0x478f, 0x10},
+- {0x4793, 0x10},
+- {0x4797, 0x0e},
+- {0x479b, 0x0e},
+ };
+
+ static const struct imx219_reg mode_1640_1232_regs[] = {
+- {0x0100, 0x00},
+- {0x30eb, 0x0c},
+- {0x30eb, 0x05},
+- {0x300a, 0xff},
+- {0x300b, 0xff},
+- {0x30eb, 0x05},
+- {0x30eb, 0x09},
+- {0x0114, 0x01},
+- {0x0128, 0x00},
+- {0x012a, 0x18},
+- {0x012b, 0x00},
+ {0x0164, 0x00},
+ {0x0165, 0x00},
+ {0x0166, 0x0c},
+@@ -288,53 +259,13 @@ static const struct imx219_reg mode_1640_1232_regs[] = {
+ {0x016d, 0x68},
+ {0x016e, 0x04},
+ {0x016f, 0xd0},
+- {0x0170, 0x01},
+- {0x0171, 0x01},
+- {0x0174, 0x01},
+- {0x0175, 0x01},
+- {0x0301, 0x05},
+- {0x0303, 0x01},
+- {0x0304, 0x03},
+- {0x0305, 0x03},
+- {0x0306, 0x00},
+- {0x0307, 0x39},
+- {0x030b, 0x01},
+- {0x030c, 0x00},
+- {0x030d, 0x72},
+ {0x0624, 0x06},
+ {0x0625, 0x68},
+ {0x0626, 0x04},
+ {0x0627, 0xd0},
+- {0x455e, 0x00},
+- {0x471e, 0x4b},
+- {0x4767, 0x0f},
+- {0x4750, 0x14},
+- {0x4540, 0x00},
+- {0x47b4, 0x14},
+- {0x4713, 0x30},
+- {0x478b, 0x10},
+- {0x478f, 0x10},
+- {0x4793, 0x10},
+- {0x4797, 0x0e},
+- {0x479b, 0x0e},
+- {0x0162, 0x0d},
+- {0x0163, 0x78},
+ };
+
+ static const struct imx219_reg mode_640_480_regs[] = {
+- {0x0100, 0x00},
+- {0x30eb, 0x05},
+- {0x30eb, 0x0c},
+- {0x300a, 0xff},
+- {0x300b, 0xff},
+- {0x30eb, 0x05},
+- {0x30eb, 0x09},
+- {0x0114, 0x01},
+- {0x0128, 0x00},
+- {0x012a, 0x18},
+- {0x012b, 0x00},
+- {0x0162, 0x0d},
+- {0x0163, 0x78},
+ {0x0164, 0x03},
+ {0x0165, 0xe8},
+ {0x0166, 0x08},
+@@ -347,35 +278,10 @@ static const struct imx219_reg mode_640_480_regs[] = {
+ {0x016d, 0x80},
+ {0x016e, 0x01},
+ {0x016f, 0xe0},
+- {0x0170, 0x01},
+- {0x0171, 0x01},
+- {0x0174, 0x03},
+- {0x0175, 0x03},
+- {0x0301, 0x05},
+- {0x0303, 0x01},
+- {0x0304, 0x03},
+- {0x0305, 0x03},
+- {0x0306, 0x00},
+- {0x0307, 0x39},
+- {0x030b, 0x01},
+- {0x030c, 0x00},
+- {0x030d, 0x72},
+ {0x0624, 0x06},
+ {0x0625, 0x68},
+ {0x0626, 0x04},
+ {0x0627, 0xd0},
+- {0x455e, 0x00},
+- {0x471e, 0x4b},
+- {0x4767, 0x0f},
+- {0x4750, 0x14},
+- {0x4540, 0x00},
+- {0x47b4, 0x14},
+- {0x4713, 0x30},
+- {0x478b, 0x10},
+- {0x478f, 0x10},
+- {0x4793, 0x10},
+- {0x4797, 0x0e},
+- {0x479b, 0x0e},
+ };
+
+ static const struct imx219_reg raw8_framefmt_regs[] = {
+@@ -485,6 +391,7 @@ static const struct imx219_mode supported_modes[] = {
+ .num_of_regs = ARRAY_SIZE(mode_3280x2464_regs),
+ .regs = mode_3280x2464_regs,
+ },
++ .binning = false,
+ },
+ {
+ /* 1080P 30fps cropped */
+@@ -501,6 +408,7 @@ static const struct imx219_mode supported_modes[] = {
+ .num_of_regs = ARRAY_SIZE(mode_1920_1080_regs),
+ .regs = mode_1920_1080_regs,
+ },
++ .binning = false,
+ },
+ {
+ /* 2x2 binned 30fps mode */
+@@ -517,6 +425,7 @@ static const struct imx219_mode supported_modes[] = {
+ .num_of_regs = ARRAY_SIZE(mode_1640_1232_regs),
+ .regs = mode_1640_1232_regs,
+ },
++ .binning = true,
+ },
+ {
+ /* 640x480 30fps mode */
+@@ -533,6 +442,7 @@ static const struct imx219_mode supported_modes[] = {
+ .num_of_regs = ARRAY_SIZE(mode_640_480_regs),
+ .regs = mode_640_480_regs,
+ },
++ .binning = true,
+ },
+ };
+
+@@ -979,6 +889,35 @@ static int imx219_set_framefmt(struct imx219 *imx219)
+ return -EINVAL;
+ }
+
++static int imx219_set_binning(struct imx219 *imx219)
++{
++ if (!imx219->mode->binning) {
++ return imx219_write_reg(imx219, IMX219_REG_BINNING_MODE,
++ IMX219_REG_VALUE_16BIT,
++ IMX219_BINNING_NONE);
++ }
++
++ switch (imx219->fmt.code) {
++ case MEDIA_BUS_FMT_SRGGB8_1X8:
++ case MEDIA_BUS_FMT_SGRBG8_1X8:
++ case MEDIA_BUS_FMT_SGBRG8_1X8:
++ case MEDIA_BUS_FMT_SBGGR8_1X8:
++ return imx219_write_reg(imx219, IMX219_REG_BINNING_MODE,
++ IMX219_REG_VALUE_16BIT,
++ IMX219_BINNING_2X2_ANALOG);
++
++ case MEDIA_BUS_FMT_SRGGB10_1X10:
++ case MEDIA_BUS_FMT_SGRBG10_1X10:
++ case MEDIA_BUS_FMT_SGBRG10_1X10:
++ case MEDIA_BUS_FMT_SBGGR10_1X10:
++ return imx219_write_reg(imx219, IMX219_REG_BINNING_MODE,
++ IMX219_REG_VALUE_16BIT,
++ IMX219_BINNING_2X2);
++ }
++
++ return -EINVAL;
++}
++
+ static const struct v4l2_rect *
+ __imx219_get_pad_crop(struct imx219 *imx219,
+ struct v4l2_subdev_state *sd_state,
+@@ -1041,6 +980,13 @@ static int imx219_start_streaming(struct imx219 *imx219)
+ if (ret < 0)
+ return ret;
+
++ /* Send all registers that are common to all modes */
++ ret = imx219_write_regs(imx219, imx219_common_regs, ARRAY_SIZE(imx219_common_regs));
++ if (ret) {
++ dev_err(&client->dev, "%s failed to send mfg header\n", __func__);
++ goto err_rpm_put;
++ }
++
+ /* Apply default values of current mode */
+ reg_list = &imx219->mode->reg_list;
+ ret = imx219_write_regs(imx219, reg_list->regs, reg_list->num_of_regs);
+@@ -1056,6 +1002,13 @@ static int imx219_start_streaming(struct imx219 *imx219)
+ goto err_rpm_put;
+ }
+
++ ret = imx219_set_binning(imx219);
++ if (ret) {
++ dev_err(&client->dev, "%s failed to set binning: %d\n",
++ __func__, ret);
++ goto err_rpm_put;
++ }
++
+ /* Apply customized values from user */
+ ret = __v4l2_ctrl_handler_setup(imx219->sd.ctrl_handler);
+ if (ret)
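The imx219 restructuring deduplicates four nearly identical register tables: everything shared by all modes moves into imx219_common_regs, each mode table keeps only its crop and output geometry, and binning becomes an explicit decision made per mode and per format, with 2x2 analog binning for the 8-bit Bayer formats and 2x2 digital binning for the 10-bit ones. The resulting stream-on write order, sketched with the hunk's own helpers and compressed error handling:

static int imx219_stream_on_sketch(struct imx219 *imx219)
{
	int ret;

	ret = imx219_write_regs(imx219, imx219_common_regs,
				ARRAY_SIZE(imx219_common_regs));
	if (ret)
		return ret;

	ret = imx219_write_regs(imx219, imx219->mode->reg_list.regs,
				imx219->mode->reg_list.num_of_regs);
	if (ret)
		return ret;

	/* Format-dependent: analog 2x2 for RAW8, digital 2x2 for RAW10. */
	return imx219_set_binning(imx219);
}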
+diff --git a/drivers/media/i2c/max9286.c b/drivers/media/i2c/max9286.c
+index 9c083cf142319..d034a67042e35 100644
+--- a/drivers/media/i2c/max9286.c
++++ b/drivers/media/i2c/max9286.c
+@@ -932,6 +932,7 @@ static int max9286_v4l2_register(struct max9286_priv *priv)
+ err_put_node:
+ fwnode_handle_put(ep);
+ err_async:
++ v4l2_ctrl_handler_free(&priv->ctrls);
+ max9286_v4l2_notifier_unregister(priv);
+
+ return ret;
+diff --git a/drivers/media/i2c/ov2740.c b/drivers/media/i2c/ov2740.c
+index f3731f932a946..89d126240c345 100644
+--- a/drivers/media/i2c/ov2740.c
++++ b/drivers/media/i2c/ov2740.c
+@@ -629,8 +629,10 @@ static int ov2740_init_controls(struct ov2740 *ov2740)
+ V4L2_CID_TEST_PATTERN,
+ ARRAY_SIZE(ov2740_test_pattern_menu) - 1,
+ 0, 0, ov2740_test_pattern_menu);
+- if (ctrl_hdlr->error)
++ if (ctrl_hdlr->error) {
++ v4l2_ctrl_handler_free(ctrl_hdlr);
+ return ctrl_hdlr->error;
++ }
+
+ ov2740->sd.ctrl_handler = ctrl_hdlr;
+
+diff --git a/drivers/media/i2c/ov5640.c b/drivers/media/i2c/ov5640.c
+index e0f908af581b8..c159f297ab92a 100644
+--- a/drivers/media/i2c/ov5640.c
++++ b/drivers/media/i2c/ov5640.c
+@@ -50,6 +50,7 @@
+ #define OV5640_REG_SYS_CTRL0 0x3008
+ #define OV5640_REG_SYS_CTRL0_SW_PWDN 0x42
+ #define OV5640_REG_SYS_CTRL0_SW_PWUP 0x02
++#define OV5640_REG_SYS_CTRL0_SW_RST 0x82
+ #define OV5640_REG_CHIP_ID 0x300a
+ #define OV5640_REG_IO_MIPI_CTRL00 0x300e
+ #define OV5640_REG_PAD_OUTPUT_ENABLE01 0x3017
+@@ -532,7 +533,7 @@ static const struct v4l2_mbus_framefmt ov5640_default_fmt = {
+ };
+
+ static const struct reg_value ov5640_init_setting[] = {
+- {0x3103, 0x11, 0, 0}, {0x3008, 0x82, 0, 5}, {0x3008, 0x42, 0, 0},
++ {0x3103, 0x11, 0, 0},
+ {0x3103, 0x03, 0, 0}, {0x3630, 0x36, 0, 0},
+ {0x3631, 0x0e, 0, 0}, {0x3632, 0xe2, 0, 0}, {0x3633, 0x12, 0, 0},
+ {0x3621, 0xe0, 0, 0}, {0x3704, 0xa0, 0, 0}, {0x3703, 0x5a, 0, 0},
+@@ -2424,24 +2425,48 @@ static void ov5640_power(struct ov5640_dev *sensor, bool enable)
+ gpiod_set_value_cansleep(sensor->pwdn_gpio, enable ? 0 : 1);
+ }
+
+-static void ov5640_reset(struct ov5640_dev *sensor)
++/*
++ * From section 2.7 power up sequence:
++ * t0 + t1 + t2 >= 5ms Delay from DOVDD stable to PWDN pull down
++ * t3 >= 1ms Delay from PWDN pull down to RESETB pull up
++ * t4 >= 20ms Delay from RESETB pull up to SCCB (i2c) stable
++ *
++ * Some modules don't expose RESETB/PWDN pins directly, instead providing a
++ * "PWUP" GPIO which is wired through appropriate delays and inverters to the
++ * pins.
++ *
++ * In such cases, this gpio should be mapped to pwdn_gpio in the driver, and we
++ * should still toggle the pwdn_gpio below with the appropriate delays, while
++ * the calls to reset_gpio will be ignored.
++ */
++static void ov5640_powerup_sequence(struct ov5640_dev *sensor)
+ {
+- if (!sensor->reset_gpio)
+- return;
+-
+- gpiod_set_value_cansleep(sensor->reset_gpio, 0);
++ if (sensor->pwdn_gpio) {
++ gpiod_set_value_cansleep(sensor->reset_gpio, 0);
+
+- /* camera power cycle */
+- ov5640_power(sensor, false);
+- usleep_range(5000, 10000);
+- ov5640_power(sensor, true);
+- usleep_range(5000, 10000);
++ /* camera power cycle */
++ ov5640_power(sensor, false);
++ usleep_range(5000, 10000);
++ ov5640_power(sensor, true);
++ usleep_range(5000, 10000);
+
+- gpiod_set_value_cansleep(sensor->reset_gpio, 1);
+- usleep_range(1000, 2000);
++ gpiod_set_value_cansleep(sensor->reset_gpio, 1);
++ usleep_range(1000, 2000);
+
+- gpiod_set_value_cansleep(sensor->reset_gpio, 0);
++ gpiod_set_value_cansleep(sensor->reset_gpio, 0);
++ } else {
++ /* software reset */
++ ov5640_write_reg(sensor, OV5640_REG_SYS_CTRL0,
++ OV5640_REG_SYS_CTRL0_SW_RST);
++ }
+ usleep_range(20000, 25000);
++
++ /*
++ * software standby: allows register programming;
++ * exited at restore_mode() for CSI, at s_stream(1) for DVP
++ */
++ ov5640_write_reg(sensor, OV5640_REG_SYS_CTRL0,
++ OV5640_REG_SYS_CTRL0_SW_PWDN);
+ }
+
+ static int ov5640_set_power_on(struct ov5640_dev *sensor)
+@@ -2464,8 +2489,7 @@ static int ov5640_set_power_on(struct ov5640_dev *sensor)
+ goto xclk_off;
+ }
+
+- ov5640_reset(sensor);
+- ov5640_power(sensor, true);
++ ov5640_powerup_sequence(sensor);
+
+ ret = ov5640_init_slave_id(sensor);
+ if (ret)
+diff --git a/drivers/media/i2c/ov5675.c b/drivers/media/i2c/ov5675.c
+index 94dc8cb7a7c00..a6e6b367d1283 100644
+--- a/drivers/media/i2c/ov5675.c
++++ b/drivers/media/i2c/ov5675.c
+@@ -820,8 +820,10 @@ static int ov5675_init_controls(struct ov5675 *ov5675)
+ v4l2_ctrl_new_std(ctrl_hdlr, &ov5675_ctrl_ops,
+ V4L2_CID_VFLIP, 0, 1, 1, 0);
+
+- if (ctrl_hdlr->error)
++ if (ctrl_hdlr->error) {
++ v4l2_ctrl_handler_free(ctrl_hdlr);
+ return ctrl_hdlr->error;
++ }
+
+ ov5675->sd.ctrl_handler = ctrl_hdlr;
+
+diff --git a/drivers/media/i2c/ov7670.c b/drivers/media/i2c/ov7670.c
+index 11d3bef65d43c..6e423cbcdc462 100644
+--- a/drivers/media/i2c/ov7670.c
++++ b/drivers/media/i2c/ov7670.c
+@@ -1840,7 +1840,7 @@ static int ov7670_parse_dt(struct device *dev,
+
+ if (bus_cfg.bus_type != V4L2_MBUS_PARALLEL) {
+ dev_err(dev, "Unsupported media bus type\n");
+- return ret;
++ return -EINVAL;
+ }
+ info->mbus_config = bus_cfg.bus.parallel.flags;
+
+diff --git a/drivers/media/i2c/ov772x.c b/drivers/media/i2c/ov772x.c
+index 4189e3fc3d535..a238e63425f8c 100644
+--- a/drivers/media/i2c/ov772x.c
++++ b/drivers/media/i2c/ov772x.c
+@@ -1462,7 +1462,7 @@ static int ov772x_probe(struct i2c_client *client)
+ priv->subdev.ctrl_handler = &priv->hdl;
+ if (priv->hdl.error) {
+ ret = priv->hdl.error;
+- goto error_mutex_destroy;
++ goto error_ctrl_free;
+ }
+
+ priv->clk = clk_get(&client->dev, NULL);
+@@ -1515,7 +1515,6 @@ error_clk_put:
+ clk_put(priv->clk);
+ error_ctrl_free:
+ v4l2_ctrl_handler_free(&priv->hdl);
+-error_mutex_destroy:
+ mutex_destroy(&priv->lock);
+
+ return ret;
+diff --git a/drivers/media/i2c/tc358746.c b/drivers/media/i2c/tc358746.c
+index d1f552bd81d42..4063754a67320 100644
+--- a/drivers/media/i2c/tc358746.c
++++ b/drivers/media/i2c/tc358746.c
+@@ -406,7 +406,7 @@ tc358746_apply_pll_config(struct tc358746 *tc358746)
+
+ val = PLL_FRS(ilog2(post)) | RESETB | PLL_EN;
+ mask = PLL_FRS_MASK | RESETB | PLL_EN;
+- tc358746_update_bits(tc358746, PLLCTL1_REG, mask, val);
++ err = tc358746_update_bits(tc358746, PLLCTL1_REG, mask, val);
+ if (err)
+ return err;
+
+@@ -988,6 +988,8 @@ static int __maybe_unused
+ tc358746_g_register(struct v4l2_subdev *sd, struct v4l2_dbg_register *reg)
+ {
+ struct tc358746 *tc358746 = to_tc358746(sd);
++ u32 val;
++ int err;
+
+ /* 32-bit registers starting from CLW_DPHYCONTTX */
+ reg->size = reg->reg < CLW_DPHYCONTTX_REG ? 2 : 4;
+@@ -995,12 +997,13 @@ tc358746_g_register(struct v4l2_subdev *sd, struct v4l2_dbg_register *reg)
+ if (!pm_runtime_get_if_in_use(sd->dev))
+ return 0;
+
+- tc358746_read(tc358746, reg->reg, (u32 *)®->val);
++ err = tc358746_read(tc358746, reg->reg, &val);
++ reg->val = val;
+
+ pm_runtime_mark_last_busy(sd->dev);
+ pm_runtime_put_sync_autosuspend(sd->dev);
+
+- return 0;
++ return err;
+ }
+
+ static int __maybe_unused
+diff --git a/drivers/media/mc/mc-entity.c b/drivers/media/mc/mc-entity.c
+index b8bcbc734eaf4..f268cf66053e1 100644
+--- a/drivers/media/mc/mc-entity.c
++++ b/drivers/media/mc/mc-entity.c
+@@ -703,7 +703,7 @@ done:
+ __must_check int __media_pipeline_start(struct media_pad *pad,
+ struct media_pipeline *pipe)
+ {
+- struct media_device *mdev = pad->entity->graph_obj.mdev;
++ struct media_device *mdev = pad->graph_obj.mdev;
+ struct media_pipeline_pad *err_ppad;
+ struct media_pipeline_pad *ppad;
+ int ret;
+@@ -851,7 +851,7 @@ EXPORT_SYMBOL_GPL(__media_pipeline_start);
+ __must_check int media_pipeline_start(struct media_pad *pad,
+ struct media_pipeline *pipe)
+ {
+- struct media_device *mdev = pad->entity->graph_obj.mdev;
++ struct media_device *mdev = pad->graph_obj.mdev;
+ int ret;
+
+ mutex_lock(&mdev->graph_mutex);
+@@ -888,7 +888,7 @@ EXPORT_SYMBOL_GPL(__media_pipeline_stop);
+
+ void media_pipeline_stop(struct media_pad *pad)
+ {
+- struct media_device *mdev = pad->entity->graph_obj.mdev;
++ struct media_device *mdev = pad->graph_obj.mdev;
+
+ mutex_lock(&mdev->graph_mutex);
+ __media_pipeline_stop(pad);
+@@ -898,7 +898,7 @@ EXPORT_SYMBOL_GPL(media_pipeline_stop);
+
+ __must_check int media_pipeline_alloc_start(struct media_pad *pad)
+ {
+- struct media_device *mdev = pad->entity->graph_obj.mdev;
++ struct media_device *mdev = pad->graph_obj.mdev;
+ struct media_pipeline *new_pipe = NULL;
+ struct media_pipeline *pipe;
+ int ret;
+diff --git a/drivers/media/pci/intel/ipu3/ipu3-cio2-main.c b/drivers/media/pci/intel/ipu3/ipu3-cio2-main.c
+index 390bd5ea34724..3b76a9d0383a8 100644
+--- a/drivers/media/pci/intel/ipu3/ipu3-cio2-main.c
++++ b/drivers/media/pci/intel/ipu3/ipu3-cio2-main.c
+@@ -1843,6 +1843,9 @@ static void cio2_pci_remove(struct pci_dev *pci_dev)
+ v4l2_device_unregister(&cio2->v4l2_dev);
+ media_device_cleanup(&cio2->media_dev);
+ mutex_destroy(&cio2->lock);
++
++ pm_runtime_forbid(&pci_dev->dev);
++ pm_runtime_get_noresume(&pci_dev->dev);
+ }
+
+ static int __maybe_unused cio2_runtime_suspend(struct device *dev)
+diff --git a/drivers/media/pci/saa7134/saa7134-core.c b/drivers/media/pci/saa7134/saa7134-core.c
+index 96328b0af1641..cf2871306987c 100644
+--- a/drivers/media/pci/saa7134/saa7134-core.c
++++ b/drivers/media/pci/saa7134/saa7134-core.c
+@@ -978,7 +978,7 @@ static void saa7134_unregister_video(struct saa7134_dev *dev)
+ }
+ if (dev->radio_dev) {
+ if (video_is_registered(dev->radio_dev))
+- vb2_video_unregister_device(dev->radio_dev);
++ video_unregister_device(dev->radio_dev);
+ else
+ video_device_release(dev->radio_dev);
+ dev->radio_dev = NULL;
+diff --git a/drivers/media/platform/amphion/vpu_color.c b/drivers/media/platform/amphion/vpu_color.c
+index 80b9a53fd1c14..4ae435cbc5cda 100644
+--- a/drivers/media/platform/amphion/vpu_color.c
++++ b/drivers/media/platform/amphion/vpu_color.c
+@@ -17,7 +17,7 @@
+ #include "vpu_helpers.h"
+
+ static const u8 colorprimaries[] = {
+- 0,
++ V4L2_COLORSPACE_LAST,
+ V4L2_COLORSPACE_REC709, /*Rec. ITU-R BT.709-6*/
+ 0,
+ 0,
+@@ -31,7 +31,7 @@ static const u8 colorprimaries[] = {
+ };
+
+ static const u8 colortransfers[] = {
+- 0,
++ V4L2_XFER_FUNC_LAST,
+ V4L2_XFER_FUNC_709, /*Rec. ITU-R BT.709-6*/
+ 0,
+ 0,
+@@ -53,7 +53,7 @@ static const u8 colortransfers[] = {
+ };
+
+ static const u8 colormatrixcoefs[] = {
+- 0,
++ V4L2_YCBCR_ENC_LAST,
+ V4L2_YCBCR_ENC_709, /*Rec. ITU-R BT.709-6*/
+ 0,
+ 0,
+diff --git a/drivers/media/platform/mediatek/mdp3/Kconfig b/drivers/media/platform/mediatek/mdp3/Kconfig
+index 846e759a8f6a9..602329c447501 100644
+--- a/drivers/media/platform/mediatek/mdp3/Kconfig
++++ b/drivers/media/platform/mediatek/mdp3/Kconfig
+@@ -3,14 +3,13 @@ config VIDEO_MEDIATEK_MDP3
+ tristate "MediaTek MDP v3 driver"
+ depends on MTK_IOMMU || COMPILE_TEST
+ depends on VIDEO_DEV
+- depends on ARCH_MEDIATEK || COMPILE_TEST
+ depends on HAS_DMA
+ depends on REMOTEPROC
++ depends on MTK_MMSYS
++ depends on MTK_CMDQ
++ depends on MTK_SCP
+ select VIDEOBUF2_DMA_CONTIG
+ select V4L2_MEM2MEM_DEV
+- select MTK_MMSYS
+- select MTK_CMDQ
+- select MTK_SCP
+ default n
+ help
+ It is a v4l2 driver and present in MediaTek MT8183 SoC.
+diff --git a/drivers/media/platform/mediatek/mdp3/mtk-mdp3-core.c b/drivers/media/platform/mediatek/mdp3/mtk-mdp3-core.c
+index 2d1f6ae9f0802..97edcd9d1c817 100644
+--- a/drivers/media/platform/mediatek/mdp3/mtk-mdp3-core.c
++++ b/drivers/media/platform/mediatek/mdp3/mtk-mdp3-core.c
+@@ -207,8 +207,8 @@ static int mdp_probe(struct platform_device *pdev)
+ }
+ for (i = 0; i < MDP_PIPE_MAX; i++) {
+ mdp->mdp_mutex[i] = mtk_mutex_get(&mm_pdev->dev);
+- if (!mdp->mdp_mutex[i]) {
+- ret = -ENODEV;
++ if (IS_ERR(mdp->mdp_mutex[i])) {
++ ret = PTR_ERR(mdp->mdp_mutex[i]);
+ goto err_free_mutex;
+ }
+ }
+@@ -289,7 +289,8 @@ err_deinit_comp:
+ mdp_comp_destroy(mdp);
+ err_free_mutex:
+ for (i = 0; i < MDP_PIPE_MAX; i++)
+- mtk_mutex_put(mdp->mdp_mutex[i]);
++ if (!IS_ERR_OR_NULL(mdp->mdp_mutex[i]))
++ mtk_mutex_put(mdp->mdp_mutex[i]);
+ err_destroy_device:
+ kfree(mdp);
+ err_return:
+diff --git a/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg.c b/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg.c
+index 6cd015a35f7c4..f085f14d676ad 100644
+--- a/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg.c
++++ b/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg.c
+@@ -2472,19 +2472,12 @@ static int mxc_jpeg_probe(struct platform_device *pdev)
+ jpeg->mode = mode;
+
+ /* Get clocks */
+- jpeg->clk_ipg = devm_clk_get(dev, "ipg");
+- if (IS_ERR(jpeg->clk_ipg)) {
+- dev_err(dev, "failed to get clock: ipg\n");
+- ret = PTR_ERR(jpeg->clk_ipg);
+- goto err_clk;
+- }
+-
+- jpeg->clk_per = devm_clk_get(dev, "per");
+- if (IS_ERR(jpeg->clk_per)) {
+- dev_err(dev, "failed to get clock: per\n");
+- ret = PTR_ERR(jpeg->clk_per);
++ ret = devm_clk_bulk_get_all(&pdev->dev, &jpeg->clks);
++ if (ret < 0) {
++ dev_err(dev, "failed to get clock\n");
+ goto err_clk;
+ }
++ jpeg->num_clks = ret;
+
+ ret = mxc_jpeg_attach_pm_domains(jpeg);
+ if (ret < 0) {
+@@ -2581,32 +2574,20 @@ static int mxc_jpeg_runtime_resume(struct device *dev)
+ struct mxc_jpeg_dev *jpeg = dev_get_drvdata(dev);
+ int ret;
+
+- ret = clk_prepare_enable(jpeg->clk_ipg);
+- if (ret < 0) {
+- dev_err(dev, "failed to enable clock: ipg\n");
+- goto err_ipg;
+- }
+-
+- ret = clk_prepare_enable(jpeg->clk_per);
++ ret = clk_bulk_prepare_enable(jpeg->num_clks, jpeg->clks);
+ if (ret < 0) {
+- dev_err(dev, "failed to enable clock: per\n");
+- goto err_per;
++ dev_err(dev, "failed to enable clock\n");
++ return ret;
+ }
+
+ return 0;
+-
+-err_per:
+- clk_disable_unprepare(jpeg->clk_ipg);
+-err_ipg:
+- return ret;
+ }
+
+ static int mxc_jpeg_runtime_suspend(struct device *dev)
+ {
+ struct mxc_jpeg_dev *jpeg = dev_get_drvdata(dev);
+
+- clk_disable_unprepare(jpeg->clk_ipg);
+- clk_disable_unprepare(jpeg->clk_per);
++ clk_bulk_disable_unprepare(jpeg->num_clks, jpeg->clks);
+
+ return 0;
+ }
+diff --git a/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg.h b/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg.h
+index 8fa8c0aec5a2d..87157db780826 100644
+--- a/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg.h
++++ b/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg.h
+@@ -120,8 +120,8 @@ struct mxc_jpeg_dev {
+ spinlock_t hw_lock; /* hardware access lock */
+ unsigned int mode;
+ struct mutex lock; /* v4l2 ioctls serialization */
+- struct clk *clk_ipg;
+- struct clk *clk_per;
++ struct clk_bulk_data *clks;
++ int num_clks;
+ struct platform_device *pdev;
+ struct device *dev;
+ void __iomem *base_reg;
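Fetching the ipg and per clocks by name, and enabling and unwinding them individually, gives way to the clk_bulk API: devm_clk_bulk_get_all() collects every clock attached to the device node and returns the count, while the bulk prepare/enable helpers unwind internally on failure, which is what lets the err_ipg/err_per ladder disappear. The shape of the pattern, with a hypothetical driver structure:

#include <linux/clk.h>
#include <linux/device.h>

struct my_jpeg_dev {
	struct clk_bulk_data *clks;
	int num_clks;
};

static int my_get_clocks(struct device *dev, struct my_jpeg_dev *md)
{
	int ret = devm_clk_bulk_get_all(dev, &md->clks);

	if (ret < 0)
		return ret;	/* missing or deferred clocks */

	md->num_clks = ret;	/* number of clocks found */
	return 0;
}

static int my_resume(struct my_jpeg_dev *md)
{
	/* Enables all clocks, unwinding internally on failure. */
	return clk_bulk_prepare_enable(md->num_clks, md->clks);
}

static void my_suspend(struct my_jpeg_dev *md)
{
	clk_bulk_disable_unprepare(md->num_clks, md->clks);
}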
+diff --git a/drivers/media/platform/nxp/imx7-media-csi.c b/drivers/media/platform/nxp/imx7-media-csi.c
+index 886374d3a6ff1..1ef92c8c0098c 100644
+--- a/drivers/media/platform/nxp/imx7-media-csi.c
++++ b/drivers/media/platform/nxp/imx7-media-csi.c
+@@ -638,8 +638,10 @@ static int imx7_csi_init(struct imx7_csi *csi)
+ imx7_csi_configure(csi);
+
+ ret = imx7_csi_dma_setup(csi);
+- if (ret < 0)
++ if (ret < 0) {
++ clk_disable_unprepare(csi->mclk);
+ return ret;
++ }
+
+ return 0;
+ }
+diff --git a/drivers/media/platform/qcom/camss/camss-csiphy-3ph-1-0.c b/drivers/media/platform/qcom/camss/camss-csiphy-3ph-1-0.c
+index 451a4c9b3d30d..04baa80494c66 100644
+--- a/drivers/media/platform/qcom/camss/camss-csiphy-3ph-1-0.c
++++ b/drivers/media/platform/qcom/camss/camss-csiphy-3ph-1-0.c
+@@ -429,7 +429,8 @@ static void csiphy_gen2_config_lanes(struct csiphy_device *csiphy,
+ array_size = ARRAY_SIZE(lane_regs_sm8250[0]);
+ break;
+ default:
+- unreachable();
++ WARN(1, "unknown csiphy version\n");
++ return;
+ }
+
+ for (l = 0; l < 5; l++) {
+diff --git a/drivers/media/platform/ti/cal/cal.c b/drivers/media/platform/ti/cal/cal.c
+index 56b61c0583cf8..1236215ec70eb 100644
+--- a/drivers/media/platform/ti/cal/cal.c
++++ b/drivers/media/platform/ti/cal/cal.c
+@@ -1050,8 +1050,10 @@ static struct cal_ctx *cal_ctx_create(struct cal_dev *cal, int inst)
+ ctx->cport = inst;
+
+ ret = cal_ctx_v4l2_init(ctx);
+- if (ret)
++ if (ret) {
++ kfree(ctx);
+ return NULL;
++ }
+
+ return ctx;
+ }
+diff --git a/drivers/media/platform/ti/omap3isp/isp.c b/drivers/media/platform/ti/omap3isp/isp.c
+index 1d40bb59ff814..e7327e38482de 100644
+--- a/drivers/media/platform/ti/omap3isp/isp.c
++++ b/drivers/media/platform/ti/omap3isp/isp.c
+@@ -2307,7 +2307,16 @@ static int isp_probe(struct platform_device *pdev)
+
+ /* Regulators */
+ isp->isp_csiphy1.vdd = devm_regulator_get(&pdev->dev, "vdd-csiphy1");
++ if (IS_ERR(isp->isp_csiphy1.vdd)) {
++ ret = PTR_ERR(isp->isp_csiphy1.vdd);
++ goto error;
++ }
++
+ isp->isp_csiphy2.vdd = devm_regulator_get(&pdev->dev, "vdd-csiphy2");
++ if (IS_ERR(isp->isp_csiphy2.vdd)) {
++ ret = PTR_ERR(isp->isp_csiphy2.vdd);
++ goto error;
++ }
+
+ /* Clocks
+ *
+diff --git a/drivers/media/platform/verisilicon/hantro_v4l2.c b/drivers/media/platform/verisilicon/hantro_v4l2.c
+index 2c7a805289e7b..30e650edaea8a 100644
+--- a/drivers/media/platform/verisilicon/hantro_v4l2.c
++++ b/drivers/media/platform/verisilicon/hantro_v4l2.c
+@@ -161,8 +161,11 @@ static int vidioc_enum_framesizes(struct file *file, void *priv,
+ }
+
+ /* For non-coded formats check if postprocessing scaling is possible */
+- if (fmt->codec_mode == HANTRO_MODE_NONE && hantro_needs_postproc(ctx, fmt)) {
+- return hanto_postproc_enum_framesizes(ctx, fsize);
++ if (fmt->codec_mode == HANTRO_MODE_NONE) {
++ if (hantro_needs_postproc(ctx, fmt))
++ return hanto_postproc_enum_framesizes(ctx, fsize);
++ else
++ return -ENOTTY;
+ } else if (fsize->index != 0) {
+ vpu_debug(0, "invalid frame size index (expected 0, got %d)\n",
+ fsize->index);
+diff --git a/drivers/media/rc/ene_ir.c b/drivers/media/rc/ene_ir.c
+index e09270916fbca..11ee21a7db8f0 100644
+--- a/drivers/media/rc/ene_ir.c
++++ b/drivers/media/rc/ene_ir.c
+@@ -1106,6 +1106,8 @@ static void ene_remove(struct pnp_dev *pnp_dev)
+ struct ene_device *dev = pnp_get_drvdata(pnp_dev);
+ unsigned long flags;
+
++ rc_unregister_device(dev->rdev);
++ del_timer_sync(&dev->tx_sim_timer);
+ spin_lock_irqsave(&dev->hw_lock, flags);
+ ene_rx_disable(dev);
+ ene_rx_restore_hw_buffer(dev);
+@@ -1113,7 +1115,6 @@ static void ene_remove(struct pnp_dev *pnp_dev)
+
+ free_irq(dev->irq, dev);
+ release_region(dev->hw_io, ENE_IO_SIZE);
+- rc_unregister_device(dev->rdev);
+ kfree(dev);
+ }
+
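This hunk and the smsusb one below fix the same class of bug: sources of asynchronous callbacks (a registered rc device, a software timer, URB completion work) must be silenced before the state they reference is torn down, otherwise a late callback races with the free. A teardown-order sketch for a hypothetical device:

#include <linux/interrupt.h>
#include <linux/slab.h>
#include <linux/timer.h>
#include <media/rc-core.h>

struct my_ir_dev {
	struct rc_dev *rdev;
	struct timer_list tx_timer;
	int irq;
};

static void my_hw_shutdown(struct my_ir_dev *dev)
{
	/* device-specific quiescing, omitted */
}

static void my_remove(struct my_ir_dev *dev)
{
	rc_unregister_device(dev->rdev);	/* no new userspace callers */
	del_timer_sync(&dev->tx_timer);		/* no timer still in flight */

	my_hw_shutdown(dev);
	free_irq(dev->irq, dev);
	kfree(dev);				/* only now is this safe */
}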
+diff --git a/drivers/media/usb/siano/smsusb.c b/drivers/media/usb/siano/smsusb.c
+index fe9c7b3a950e8..6f443c542c6da 100644
+--- a/drivers/media/usb/siano/smsusb.c
++++ b/drivers/media/usb/siano/smsusb.c
+@@ -179,6 +179,7 @@ static void smsusb_stop_streaming(struct smsusb_device_t *dev)
+
+ for (i = 0; i < MAX_URBS; i++) {
+ usb_kill_urb(&dev->surbs[i].urb);
++ cancel_work_sync(&dev->surbs[i].wq);
+
+ if (dev->surbs[i].cb) {
+ smscore_putbuffer(dev->coredev, dev->surbs[i].cb);
+diff --git a/drivers/media/usb/uvc/uvc_ctrl.c b/drivers/media/usb/uvc/uvc_ctrl.c
+index c95a2229f4fa9..44b0cfb8ee1c7 100644
+--- a/drivers/media/usb/uvc/uvc_ctrl.c
++++ b/drivers/media/usb/uvc/uvc_ctrl.c
+@@ -6,6 +6,7 @@
+ * Laurent Pinchart (laurent.pinchart@ideasonboard.com)
+ */
+
++#include <linux/bitops.h>
+ #include <linux/kernel.h>
+ #include <linux/list.h>
+ #include <linux/module.h>
+@@ -525,7 +526,8 @@ static const struct uvc_control_mapping uvc_ctrl_mappings[] = {
+ .v4l2_type = V4L2_CTRL_TYPE_MENU,
+ .data_type = UVC_CTRL_DATA_TYPE_BITMASK,
+ .menu_info = exposure_auto_controls,
+- .menu_count = ARRAY_SIZE(exposure_auto_controls),
++ .menu_mask = GENMASK(V4L2_EXPOSURE_APERTURE_PRIORITY,
++ V4L2_EXPOSURE_AUTO),
+ .slave_ids = { V4L2_CID_EXPOSURE_ABSOLUTE, },
+ },
+ {
+@@ -721,32 +723,53 @@ static const struct uvc_control_mapping uvc_ctrl_mappings[] = {
+ },
+ };
+
+-static const struct uvc_control_mapping uvc_ctrl_mappings_uvc11[] = {
+- {
+- .id = V4L2_CID_POWER_LINE_FREQUENCY,
+- .entity = UVC_GUID_UVC_PROCESSING,
+- .selector = UVC_PU_POWER_LINE_FREQUENCY_CONTROL,
+- .size = 2,
+- .offset = 0,
+- .v4l2_type = V4L2_CTRL_TYPE_MENU,
+- .data_type = UVC_CTRL_DATA_TYPE_ENUM,
+- .menu_info = power_line_frequency_controls,
+- .menu_count = ARRAY_SIZE(power_line_frequency_controls) - 1,
+- },
++const struct uvc_control_mapping uvc_ctrl_power_line_mapping_limited = {
++ .id = V4L2_CID_POWER_LINE_FREQUENCY,
++ .entity = UVC_GUID_UVC_PROCESSING,
++ .selector = UVC_PU_POWER_LINE_FREQUENCY_CONTROL,
++ .size = 2,
++ .offset = 0,
++ .v4l2_type = V4L2_CTRL_TYPE_MENU,
++ .data_type = UVC_CTRL_DATA_TYPE_ENUM,
++ .menu_info = power_line_frequency_controls,
++ .menu_mask = GENMASK(V4L2_CID_POWER_LINE_FREQUENCY_60HZ,
++ V4L2_CID_POWER_LINE_FREQUENCY_50HZ),
+ };
+
+-static const struct uvc_control_mapping uvc_ctrl_mappings_uvc15[] = {
+- {
+- .id = V4L2_CID_POWER_LINE_FREQUENCY,
+- .entity = UVC_GUID_UVC_PROCESSING,
+- .selector = UVC_PU_POWER_LINE_FREQUENCY_CONTROL,
+- .size = 2,
+- .offset = 0,
+- .v4l2_type = V4L2_CTRL_TYPE_MENU,
+- .data_type = UVC_CTRL_DATA_TYPE_ENUM,
+- .menu_info = power_line_frequency_controls,
+- .menu_count = ARRAY_SIZE(power_line_frequency_controls),
+- },
++static const struct uvc_control_mapping uvc_ctrl_power_line_mapping_uvc11 = {
++ .id = V4L2_CID_POWER_LINE_FREQUENCY,
++ .entity = UVC_GUID_UVC_PROCESSING,
++ .selector = UVC_PU_POWER_LINE_FREQUENCY_CONTROL,
++ .size = 2,
++ .offset = 0,
++ .v4l2_type = V4L2_CTRL_TYPE_MENU,
++ .data_type = UVC_CTRL_DATA_TYPE_ENUM,
++ .menu_info = power_line_frequency_controls,
++ .menu_mask = GENMASK(V4L2_CID_POWER_LINE_FREQUENCY_60HZ,
++ V4L2_CID_POWER_LINE_FREQUENCY_DISABLED),
++};
++
++static const struct uvc_control_mapping *uvc_ctrl_mappings_uvc11[] = {
++ &uvc_ctrl_power_line_mapping_uvc11,
++ NULL, /* Sentinel */
++};
++
++static const struct uvc_control_mapping uvc_ctrl_power_line_mapping_uvc15 = {
++ .id = V4L2_CID_POWER_LINE_FREQUENCY,
++ .entity = UVC_GUID_UVC_PROCESSING,
++ .selector = UVC_PU_POWER_LINE_FREQUENCY_CONTROL,
++ .size = 2,
++ .offset = 0,
++ .v4l2_type = V4L2_CTRL_TYPE_MENU,
++ .data_type = UVC_CTRL_DATA_TYPE_ENUM,
++ .menu_info = power_line_frequency_controls,
++ .menu_mask = GENMASK(V4L2_CID_POWER_LINE_FREQUENCY_AUTO,
++ V4L2_CID_POWER_LINE_FREQUENCY_DISABLED),
++};
++
++static const struct uvc_control_mapping *uvc_ctrl_mappings_uvc15[] = {
++ &uvc_ctrl_power_line_mapping_uvc15,
++ NULL, /* Sentinel */
+ };
+
+ /* ------------------------------------------------------------------------
+@@ -975,7 +998,9 @@ static s32 __uvc_ctrl_get_value(struct uvc_control_mapping *mapping,
+ const struct uvc_menu_info *menu = mapping->menu_info;
+ unsigned int i;
+
+- for (i = 0; i < mapping->menu_count; ++i, ++menu) {
++ for (i = 0; BIT(i) <= mapping->menu_mask; ++i, ++menu) {
++ if (!test_bit(i, &mapping->menu_mask))
++ continue;
+ if (menu->value == value) {
+ value = i;
+ break;
+@@ -1085,11 +1110,28 @@ static int uvc_query_v4l2_class(struct uvc_video_chain *chain, u32 req_id,
+ return 0;
+ }
+
++/*
++ * Check if control @v4l2_id can be accessed by the given control @ioctl
++ * (VIDIOC_G_EXT_CTRLS, VIDIOC_TRY_EXT_CTRLS or VIDIOC_S_EXT_CTRLS).
++ *
++ * For set operations on slave controls, check if the master's value is set to
++ * manual, either in the others controls set in the same ioctl call, or from
++ * the master's current value. This catches VIDIOC_S_EXT_CTRLS calls that set
++ * both the master and slave control, such as for instance setting
++ * auto_exposure=1, exposure_time_absolute=251.
++ */
+ int uvc_ctrl_is_accessible(struct uvc_video_chain *chain, u32 v4l2_id,
+- bool read)
++ const struct v4l2_ext_controls *ctrls,
++ unsigned long ioctl)
+ {
++ struct uvc_control_mapping *master_map = NULL;
++ struct uvc_control *master_ctrl = NULL;
+ struct uvc_control_mapping *mapping;
+ struct uvc_control *ctrl;
++ bool read = ioctl == VIDIOC_G_EXT_CTRLS;
++ s32 val;
++ int ret;
++ int i;
+
+ if (__uvc_query_v4l2_class(chain, v4l2_id, 0) >= 0)
+ return -EACCES;
+@@ -1104,6 +1146,29 @@ int uvc_ctrl_is_accessible(struct uvc_video_chain *chain, u32 v4l2_id,
+ if (!(ctrl->info.flags & UVC_CTRL_FLAG_SET_CUR) && !read)
+ return -EACCES;
+
++ if (ioctl != VIDIOC_S_EXT_CTRLS || !mapping->master_id)
++ return 0;
++
++ /*
++ * Iterate backwards in cases where the master control is accessed
++ * multiple times in the same ioctl. We want the last value.
++ */
++ for (i = ctrls->count - 1; i >= 0; i--) {
++ if (ctrls->controls[i].id == mapping->master_id)
++ return ctrls->controls[i].value ==
++ mapping->master_manual ? 0 : -EACCES;
++ }
++
++ __uvc_find_control(ctrl->entity, mapping->master_id, &master_map,
++ &master_ctrl, 0);
++
++ if (!master_ctrl || !(master_ctrl->info.flags & UVC_CTRL_FLAG_GET_CUR))
++ return 0;
++
++ ret = __uvc_ctrl_get(chain, master_ctrl, master_map, &val);
++ if (ret >= 0 && val != mapping->master_manual)
++ return -EACCES;
++
+ return 0;
+ }
+
+@@ -1169,12 +1234,14 @@ static int __uvc_query_v4l2_ctrl(struct uvc_video_chain *chain,
+
+ switch (mapping->v4l2_type) {
+ case V4L2_CTRL_TYPE_MENU:
+- v4l2_ctrl->minimum = 0;
+- v4l2_ctrl->maximum = mapping->menu_count - 1;
++ v4l2_ctrl->minimum = ffs(mapping->menu_mask) - 1;
++ v4l2_ctrl->maximum = fls(mapping->menu_mask) - 1;
+ v4l2_ctrl->step = 1;
+
+ menu = mapping->menu_info;
+- for (i = 0; i < mapping->menu_count; ++i, ++menu) {
++ for (i = 0; BIT(i) <= mapping->menu_mask; ++i, ++menu) {
++ if (!test_bit(i, &mapping->menu_mask))
++ continue;
+ if (menu->value == v4l2_ctrl->default_value) {
+ v4l2_ctrl->default_value = i;
+ break;
+@@ -1289,7 +1356,7 @@ int uvc_query_v4l2_menu(struct uvc_video_chain *chain,
+ goto done;
+ }
+
+- if (query_menu->index >= mapping->menu_count) {
++ if (!test_bit(query_menu->index, &mapping->menu_mask)) {
+ ret = -EINVAL;
+ goto done;
+ }
+@@ -1797,8 +1864,13 @@ int uvc_ctrl_set(struct uvc_fh *handle,
+ break;
+
+ case V4L2_CTRL_TYPE_MENU:
+- if (xctrl->value < 0 || xctrl->value >= mapping->menu_count)
++ if (xctrl->value < (ffs(mapping->menu_mask) - 1) ||
++ xctrl->value > (fls(mapping->menu_mask) - 1))
+ return -ERANGE;
++
++ if (!test_bit(xctrl->value, &mapping->menu_mask))
++ return -EINVAL;
++
+ value = mapping->menu_info[xctrl->value].value;
+
+ /*
+@@ -2237,7 +2309,7 @@ static int __uvc_ctrl_add_mapping(struct uvc_video_chain *chain,
+
+ INIT_LIST_HEAD(&map->ev_subs);
+
+- size = sizeof(*mapping->menu_info) * mapping->menu_count;
++ size = sizeof(*mapping->menu_info) * fls(mapping->menu_mask);
+ map->menu_info = kmemdup(mapping->menu_info, size, GFP_KERNEL);
+ if (map->menu_info == NULL) {
+ kfree(map->name);
+@@ -2421,8 +2493,7 @@ static void uvc_ctrl_prune_entity(struct uvc_device *dev,
+ static void uvc_ctrl_init_ctrl(struct uvc_video_chain *chain,
+ struct uvc_control *ctrl)
+ {
+- const struct uvc_control_mapping *mappings;
+- unsigned int num_mappings;
++ const struct uvc_control_mapping **mappings;
+ unsigned int i;
+
+ /*
+@@ -2489,16 +2560,11 @@ static void uvc_ctrl_init_ctrl(struct uvc_video_chain *chain,
+ }
+
+ /* Finally process version-specific mappings. */
+- if (chain->dev->uvc_version < 0x0150) {
+- mappings = uvc_ctrl_mappings_uvc11;
+- num_mappings = ARRAY_SIZE(uvc_ctrl_mappings_uvc11);
+- } else {
+- mappings = uvc_ctrl_mappings_uvc15;
+- num_mappings = ARRAY_SIZE(uvc_ctrl_mappings_uvc15);
+- }
++ mappings = chain->dev->uvc_version < 0x0150
++ ? uvc_ctrl_mappings_uvc11 : uvc_ctrl_mappings_uvc15;
+
+- for (i = 0; i < num_mappings; ++i) {
+- const struct uvc_control_mapping *mapping = &mappings[i];
++ for (i = 0; mappings[i]; ++i) {
++ const struct uvc_control_mapping *mapping = mappings[i];
+
+ if (uvc_entity_match_guid(ctrl->entity, mapping->entity) &&
+ ctrl->info.selector == mapping->selector)
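Replacing menu_count with menu_mask lets a mapping expose a non-contiguous subset of menu entries: the reported minimum and maximum come from ffs() and fls() of the mask, and each requested index is additionally checked with test_bit(), so the UVC 1.1 power-line mapping can stop at 60HZ while the UVC 1.5 one also offers AUTO. The validation idiom as a standalone sketch, with a hypothetical helper:

#include <linux/bitops.h>
#include <linux/types.h>

static bool menu_index_valid(unsigned long mask, unsigned int index)
{
	if (index < ffs(mask) - 1 || index > fls(mask) - 1)
		return false;		/* outside the [min, max] range */

	return test_bit(index, &mask);	/* and actually present in it */
}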
+diff --git a/drivers/media/usb/uvc/uvc_driver.c b/drivers/media/usb/uvc/uvc_driver.c
+index e4bcb50113607..d5ff8df20f18a 100644
+--- a/drivers/media/usb/uvc/uvc_driver.c
++++ b/drivers/media/usb/uvc/uvc_driver.c
+@@ -7,6 +7,7 @@
+ */
+
+ #include <linux/atomic.h>
++#include <linux/bits.h>
+ #include <linux/gpio/consumer.h>
+ #include <linux/kernel.h>
+ #include <linux/list.h>
+@@ -2370,23 +2371,6 @@ MODULE_PARM_DESC(timeout, "Streaming control requests timeout");
+ * Driver initialization and cleanup
+ */
+
+-static const struct uvc_menu_info power_line_frequency_controls_limited[] = {
+- { 1, "50 Hz" },
+- { 2, "60 Hz" },
+-};
+-
+-static const struct uvc_control_mapping uvc_ctrl_power_line_mapping_limited = {
+- .id = V4L2_CID_POWER_LINE_FREQUENCY,
+- .entity = UVC_GUID_UVC_PROCESSING,
+- .selector = UVC_PU_POWER_LINE_FREQUENCY_CONTROL,
+- .size = 2,
+- .offset = 0,
+- .v4l2_type = V4L2_CTRL_TYPE_MENU,
+- .data_type = UVC_CTRL_DATA_TYPE_ENUM,
+- .menu_info = power_line_frequency_controls_limited,
+- .menu_count = ARRAY_SIZE(power_line_frequency_controls_limited),
+-};
+-
+ static const struct uvc_device_info uvc_ctrl_power_line_limited = {
+ .mappings = (const struct uvc_control_mapping *[]) {
+ &uvc_ctrl_power_line_mapping_limited,
+diff --git a/drivers/media/usb/uvc/uvc_v4l2.c b/drivers/media/usb/uvc/uvc_v4l2.c
+index f4d4c33b6dfbd..0774a11360c03 100644
+--- a/drivers/media/usb/uvc/uvc_v4l2.c
++++ b/drivers/media/usb/uvc/uvc_v4l2.c
+@@ -6,6 +6,7 @@
+ * Laurent Pinchart (laurent.pinchart@ideasonboard.com)
+ */
+
++#include <linux/bits.h>
+ #include <linux/compat.h>
+ #include <linux/kernel.h>
+ #include <linux/list.h>
+@@ -80,7 +81,7 @@ static int uvc_ioctl_ctrl_map(struct uvc_video_chain *chain,
+ goto free_map;
+ }
+
+- map->menu_count = xmap->menu_count;
++ map->menu_mask = GENMASK(xmap->menu_count - 1, 0);
+ break;
+
+ default:
+@@ -1020,8 +1021,7 @@ static int uvc_ctrl_check_access(struct uvc_video_chain *chain,
+ int ret = 0;
+
+ for (i = 0; i < ctrls->count; ++ctrl, ++i) {
+- ret = uvc_ctrl_is_accessible(chain, ctrl->id,
+- ioctl == VIDIOC_G_EXT_CTRLS);
++ ret = uvc_ctrl_is_accessible(chain, ctrl->id, ctrls, ioctl);
+ if (ret)
+ break;
+ }
+diff --git a/drivers/media/usb/uvc/uvcvideo.h b/drivers/media/usb/uvc/uvcvideo.h
+index df93db259312e..1227ae63f85b7 100644
+--- a/drivers/media/usb/uvc/uvcvideo.h
++++ b/drivers/media/usb/uvc/uvcvideo.h
+@@ -117,7 +117,7 @@ struct uvc_control_mapping {
+ u32 data_type;
+
+ const struct uvc_menu_info *menu_info;
+- u32 menu_count;
++ unsigned long menu_mask;
+
+ u32 master_id;
+ s32 master_manual;
+@@ -728,6 +728,7 @@ int uvc_status_start(struct uvc_device *dev, gfp_t flags);
+ void uvc_status_stop(struct uvc_device *dev);
+
+ /* Controls */
++extern const struct uvc_control_mapping uvc_ctrl_power_line_mapping_limited;
+ extern const struct v4l2_subscribed_event_ops uvc_ctrl_sub_ev_ops;
+
+ int uvc_query_v4l2_ctrl(struct uvc_video_chain *chain,
+@@ -761,7 +762,8 @@ static inline int uvc_ctrl_rollback(struct uvc_fh *handle)
+ int uvc_ctrl_get(struct uvc_video_chain *chain, struct v4l2_ext_control *xctrl);
+ int uvc_ctrl_set(struct uvc_fh *handle, struct v4l2_ext_control *xctrl);
+ int uvc_ctrl_is_accessible(struct uvc_video_chain *chain, u32 v4l2_id,
+- bool read);
++ const struct v4l2_ext_controls *ctrls,
++ unsigned long ioctl);
+
+ int uvc_xu_ctrl_query(struct uvc_video_chain *chain,
+ struct uvc_xu_control_query *xqry);
+diff --git a/drivers/media/v4l2-core/v4l2-h264.c b/drivers/media/v4l2-core/v4l2-h264.c
+index 72bd64f651981..c00197d095e75 100644
+--- a/drivers/media/v4l2-core/v4l2-h264.c
++++ b/drivers/media/v4l2-core/v4l2-h264.c
+@@ -305,6 +305,8 @@ static const char *format_ref_list_p(const struct v4l2_h264_reflist_builder *bui
+ int n = 0, i;
+
+ *out_str = kmalloc(tmp_str_size, GFP_KERNEL);
++ if (!(*out_str))
++ return NULL;
+
+ n += snprintf(*out_str + n, tmp_str_size - n, "|");
+
+@@ -343,6 +345,8 @@ static const char *format_ref_list_b(const struct v4l2_h264_reflist_builder *bui
+ int n = 0, i;
+
+ *out_str = kmalloc(tmp_str_size, GFP_KERNEL);
++ if (!(*out_str))
++ return NULL;
+
+ n += snprintf(*out_str + n, tmp_str_size - n, "|");
+
+diff --git a/drivers/media/v4l2-core/v4l2-jpeg.c b/drivers/media/v4l2-core/v4l2-jpeg.c
+index c2513b775f6a7..94435a7b68169 100644
+--- a/drivers/media/v4l2-core/v4l2-jpeg.c
++++ b/drivers/media/v4l2-core/v4l2-jpeg.c
+@@ -460,7 +460,7 @@ static int jpeg_parse_app14_data(struct jpeg_stream *stream,
+ /* Check for "Adobe\0" in Ap1..6 */
+ if (stream->curr + 6 > stream->end ||
+ strncmp(stream->curr, "Adobe\0", 6))
+- return -EINVAL;
++ return jpeg_skip(stream, lp - 2);
+
+ /* get to Ap12 */
+ ret = jpeg_skip(stream, 11);
+@@ -474,7 +474,7 @@ static int jpeg_parse_app14_data(struct jpeg_stream *stream,
+ *tf = ret;
+
+ /* skip the rest of the segment, this ensures at least it is complete */
+- skip = lp - 2 - 11;
++ skip = lp - 2 - 11 - 1;
+ return jpeg_skip(stream, skip);
+ }
+
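Two corrections to the APP14 parser: a segment without the Adobe signature is now skipped as opaque data rather than failing the whole parse, and the tail skip finally accounts for every byte already consumed, namely the 2 length bytes, the 11 bytes skipped to reach Ap12, and the 1 transform byte that was read. The byte accounting as a hypothetical helper:

static int app14_bytes_remaining(int lp)
{
	/* lp includes its own 2 length bytes; 11 skipped + 1 read so far */
	return lp - 2 - 11 - 1;
}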
+diff --git a/drivers/mfd/Kconfig b/drivers/mfd/Kconfig
+index 30db49f318668..7ed31fbd8c7fa 100644
+--- a/drivers/mfd/Kconfig
++++ b/drivers/mfd/Kconfig
+@@ -15,6 +15,7 @@ config MFD_CS5535
+ tristate "AMD CS5535 and CS5536 southbridge core functions"
+ select MFD_CORE
+ depends on PCI && (X86_32 || (X86 && COMPILE_TEST))
++ depends on !UML
+ help
+ This is the core driver for CS5535/CS5536 MFD functions. This is
+ necessary for using the board's GPIO and MFGPT functionality.
+diff --git a/drivers/mfd/pcf50633-adc.c b/drivers/mfd/pcf50633-adc.c
+index 5cd653e615125..191b1bc6141c2 100644
+--- a/drivers/mfd/pcf50633-adc.c
++++ b/drivers/mfd/pcf50633-adc.c
+@@ -136,6 +136,7 @@ int pcf50633_adc_async_read(struct pcf50633 *pcf, int mux, int avg,
+ void *callback_param)
+ {
+ struct pcf50633_adc_request *req;
++ int ret;
+
+ /* req is freed when the result is ready, in interrupt handler */
+ req = kmalloc(sizeof(*req), GFP_KERNEL);
+@@ -147,7 +148,11 @@ int pcf50633_adc_async_read(struct pcf50633 *pcf, int mux, int avg,
+ req->callback = callback;
+ req->callback_param = callback_param;
+
+- return adc_enqueue_request(pcf, req);
++ ret = adc_enqueue_request(pcf, req);
++ if (ret)
++ kfree(req);
++
++ return ret;
+ }
+ EXPORT_SYMBOL_GPL(pcf50633_adc_async_read);
+
+diff --git a/drivers/mfd/rk808.c b/drivers/mfd/rk808.c
+index f44fc3f080a8e..0f22ef61e8170 100644
+--- a/drivers/mfd/rk808.c
++++ b/drivers/mfd/rk808.c
+@@ -189,6 +189,7 @@ static const struct mfd_cell rk817s[] = {
+ };
+
+ static const struct mfd_cell rk818s[] = {
++ { .name = "rk808-clkout", .id = PLATFORM_DEVID_NONE, },
+ { .name = "rk808-regulator", .id = PLATFORM_DEVID_NONE, },
+ {
+ .name = "rk808-rtc",
+diff --git a/drivers/misc/eeprom/idt_89hpesx.c b/drivers/misc/eeprom/idt_89hpesx.c
+index 4e07ee9cb500e..7075d0b378811 100644
+--- a/drivers/misc/eeprom/idt_89hpesx.c
++++ b/drivers/misc/eeprom/idt_89hpesx.c
+@@ -1566,12 +1566,20 @@ static struct i2c_driver idt_driver = {
+ */
+ static int __init idt_init(void)
+ {
++ int ret;
++
+ /* Create Debugfs directory first */
+ if (debugfs_initialized())
+ csr_dbgdir = debugfs_create_dir("idt_csr", NULL);
+
+ /* Add new i2c-device driver */
+- return i2c_add_driver(&idt_driver);
++ ret = i2c_add_driver(&idt_driver);
++ if (ret) {
++ debugfs_remove_recursive(csr_dbgdir);
++ return ret;
++ }
++
++ return 0;
+ }
+ module_init(idt_init);
+
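idt_init() previously leaked its debugfs directory whenever i2c_add_driver() failed; the fix tears down everything created before the failed registration. The init/unwind shape, with hypothetical names (debugfs_remove_recursive() is safe to call on a NULL or error-pointer dentry):

#include <linux/debugfs.h>
#include <linux/i2c.h>
#include <linux/module.h>

static struct dentry *my_dbgdir;
static struct i2c_driver my_driver;	/* assumed to be filled in elsewhere */

static int __init my_init(void)
{
	int ret;

	my_dbgdir = debugfs_create_dir("my_drv", NULL);

	ret = i2c_add_driver(&my_driver);
	if (ret)
		debugfs_remove_recursive(my_dbgdir);

	return ret;
}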
+diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c
+index 5310606113fe5..7ccaca1b7cb8b 100644
+--- a/drivers/misc/fastrpc.c
++++ b/drivers/misc/fastrpc.c
+@@ -2315,7 +2315,18 @@ static int fastrpc_rpmsg_probe(struct rpmsg_device *rpdev)
+ data->domain_id = domain_id;
+ data->rpdev = rpdev;
+
+- return of_platform_populate(rdev->of_node, NULL, NULL, rdev);
++ err = of_platform_populate(rdev->of_node, NULL, NULL, rdev);
++ if (err)
++ goto populate_error;
++
++ return 0;
++
++populate_error:
++ if (data->fdevice)
++ misc_deregister(&data->fdevice->miscdev);
++ if (data->secure_fdevice)
++ misc_deregister(&data->secure_fdevice->miscdev);
++
+ fdev_error:
+ kfree(data);
+ return err;
+diff --git a/drivers/misc/habanalabs/common/command_submission.c b/drivers/misc/habanalabs/common/command_submission.c
+index ea0e5101c10ed..6367cbea4ca2a 100644
+--- a/drivers/misc/habanalabs/common/command_submission.c
++++ b/drivers/misc/habanalabs/common/command_submission.c
+@@ -3119,19 +3119,18 @@ start_over:
+ goto start_over;
+ }
+ } else {
++ /* Fill up the new registration node info */
++ requested_offset_record->ts_reg_info.buf = buf;
++ requested_offset_record->ts_reg_info.cq_cb = cq_cb;
++ requested_offset_record->ts_reg_info.timestamp_kernel_addr =
++ (u64 *) ts_buff->user_buff_address + ts_offset;
++ requested_offset_record->cq_kernel_addr =
++ (u64 *) cq_cb->kernel_address + cq_offset;
++ requested_offset_record->cq_target_value = target_value;
++
+ spin_unlock_irqrestore(wait_list_lock, flags);
+ }
+
+- /* Fill up the new registration node info */
+- requested_offset_record->ts_reg_info.in_use = 1;
+- requested_offset_record->ts_reg_info.buf = buf;
+- requested_offset_record->ts_reg_info.cq_cb = cq_cb;
+- requested_offset_record->ts_reg_info.timestamp_kernel_addr =
+- (u64 *) ts_buff->user_buff_address + ts_offset;
+- requested_offset_record->cq_kernel_addr =
+- (u64 *) cq_cb->kernel_address + cq_offset;
+- requested_offset_record->cq_target_value = target_value;
+-
+ *pend = requested_offset_record;
+
+ dev_dbg(buf->mmg->dev, "Found available node in TS kernel CB %p\n",
+@@ -3179,7 +3178,7 @@ static int _hl_interrupt_wait_ioctl(struct hl_device *hdev, struct hl_ctx *ctx,
+ goto put_cq_cb;
+ }
+
+- /* Find first available record */
++ /* get ts buffer record */
+ rc = ts_buff_get_kernel_ts_record(buf, cq_cb, ts_offset,
+ cq_counters_offset, target_value,
+ &interrupt->wait_list_lock, &pend);
+@@ -3227,7 +3226,19 @@ static int _hl_interrupt_wait_ioctl(struct hl_device *hdev, struct hl_ctx *ctx,
+ * Note that we cannot have sorted list by target value,
+ * in order to shorten the list pass loop, since
+ * same list could have nodes for different cq counter handle.
++ * Note:
++ * Mark ts buff offset as in use here in the spinlock protection area
++ * to avoid getting in the re-use section in ts_buff_get_kernel_ts_record
++ * before adding the node to the list. This scenario might happen when
++ * multiple threads are racing on the same offset: one thread could
++ * set the ts buff in ts_buff_get_kernel_ts_record, then another thread
++ * takes over, reaches ts_buff_get_kernel_ts_record, and we would then
++ * try to re-use the same ts buff offset and attempt to delete a
++ * non-existent node from the list.
+ */
++ if (register_ts_record)
++ pend->ts_reg_info.in_use = 1;
++
+ list_add_tail(&pend->wait_list_node, &interrupt->wait_list_head);
+ spin_unlock_irqrestore(&interrupt->wait_list_lock, flags);
+
+diff --git a/drivers/misc/habanalabs/common/device.c b/drivers/misc/habanalabs/common/device.c
+index 87ab329e65d49..f7b9c3871518b 100644
+--- a/drivers/misc/habanalabs/common/device.c
++++ b/drivers/misc/habanalabs/common/device.c
+@@ -1566,7 +1566,8 @@ kill_processes:
+ if (rc == -EBUSY) {
+ if (hdev->device_fini_pending) {
+ dev_crit(hdev->dev,
+- "Failed to kill all open processes, stopping hard reset\n");
++ "%s Failed to kill all open processes, stopping hard reset\n",
++ dev_name(&(hdev)->pdev->dev));
+ goto out_err;
+ }
+
+@@ -1576,7 +1577,8 @@ kill_processes:
+
+ if (rc) {
+ dev_crit(hdev->dev,
+- "Failed to kill all open processes, stopping hard reset\n");
++ "%s Failed to kill all open processes, stopping hard reset\n",
++ dev_name(&(hdev)->pdev->dev));
+ goto out_err;
+ }
+
+@@ -1627,14 +1629,16 @@ kill_processes:
+ * ensure driver puts the driver in a unusable state
+ */
+ dev_crit(hdev->dev,
+- "Consecutive FW fatal errors received, stopping hard reset\n");
++ "%s Consecutive FW fatal errors received, stopping hard reset\n",
++ dev_name(&(hdev)->pdev->dev));
+ rc = -EIO;
+ goto out_err;
+ }
+
+ if (hdev->kernel_ctx) {
+ dev_crit(hdev->dev,
+- "kernel ctx was alive during hard reset, something is terribly wrong\n");
++ "%s kernel ctx was alive during hard reset, something is terribly wrong\n",
++ dev_name(&(hdev)->pdev->dev));
+ rc = -EBUSY;
+ goto out_err;
+ }
+@@ -1752,9 +1756,13 @@ kill_processes:
+ hdev->reset_info.needs_reset = false;
+
+ if (hard_reset)
+- dev_info(hdev->dev, "Successfully finished resetting the device\n");
++ dev_info(hdev->dev,
++ "Successfully finished resetting the %s device\n",
++ dev_name(&(hdev)->pdev->dev));
+ else
+- dev_dbg(hdev->dev, "Successfully finished resetting the device\n");
++ dev_dbg(hdev->dev,
++ "Successfully finished resetting the %s device\n",
++ dev_name(&(hdev)->pdev->dev));
+
+ if (hard_reset) {
+ hdev->reset_info.hard_reset_cnt++;
+@@ -1789,7 +1797,9 @@ out_err:
+ hdev->reset_info.in_compute_reset = 0;
+
+ if (hard_reset) {
+- dev_err(hdev->dev, "Failed to reset! Device is NOT usable\n");
++ dev_err(hdev->dev,
++ "%s Failed to reset! Device is NOT usable\n",
++ dev_name(&(hdev)->pdev->dev));
+ hdev->reset_info.hard_reset_cnt++;
+ } else if (reset_upon_device_release) {
+ spin_unlock(&hdev->reset_info.lock);
+@@ -2186,7 +2196,8 @@ int hl_device_init(struct hl_device *hdev, struct class *hclass)
+ }
+
+ dev_notice(hdev->dev,
+- "Successfully added device to habanalabs driver\n");
++ "Successfully added device %s to habanalabs driver\n",
++ dev_name(&(hdev)->pdev->dev));
+
+ hdev->init_done = true;
+
+@@ -2235,11 +2246,11 @@ out_disabled:
+ device_cdev_sysfs_add(hdev);
+ if (hdev->pdev)
+ dev_err(&hdev->pdev->dev,
+- "Failed to initialize hl%d. Device is NOT usable !\n",
+- hdev->cdev_idx);
++ "Failed to initialize hl%d. Device %s is NOT usable !\n",
++ hdev->cdev_idx, dev_name(&(hdev)->pdev->dev));
+ else
+- pr_err("Failed to initialize hl%d. Device is NOT usable !\n",
+- hdev->cdev_idx);
++ pr_err("Failed to initialize hl%d. Device %s is NOT usable !\n",
++ hdev->cdev_idx, dev_name(&(hdev)->pdev->dev));
+
+ return rc;
+ }
+@@ -2295,7 +2306,8 @@ void hl_device_fini(struct hl_device *hdev)
+
+ if (ktime_compare(ktime_get(), timeout) > 0) {
+ dev_crit(hdev->dev,
+- "Failed to remove device because reset function did not finish\n");
++ "%s Failed to remove device because reset function did not finish\n",
++ dev_name(&(hdev)->pdev->dev));
+ return;
+ }
+ }
+diff --git a/drivers/misc/habanalabs/common/memory.c b/drivers/misc/habanalabs/common/memory.c
+index 5e9ae7600d75e..047306e33baad 100644
+--- a/drivers/misc/habanalabs/common/memory.c
++++ b/drivers/misc/habanalabs/common/memory.c
+@@ -2089,12 +2089,13 @@ static int hl_ts_mmap(struct hl_mmap_mem_buf *buf, struct vm_area_struct *vma, v
+ static int hl_ts_alloc_buf(struct hl_mmap_mem_buf *buf, gfp_t gfp, void *args)
+ {
+ struct hl_ts_buff *ts_buff = NULL;
+- u32 size, num_elements;
++ u32 num_elements;
++ size_t size;
+ void *p;
+
+ num_elements = *(u32 *)args;
+
+- ts_buff = kzalloc(sizeof(*ts_buff), GFP_KERNEL);
++ ts_buff = kzalloc(sizeof(*ts_buff), gfp);
+ if (!ts_buff)
+ return -ENOMEM;
+
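The memory.c fix makes hl_ts_alloc_buf() honor the gfp_t its caller passed instead of hard-coding GFP_KERNEL, which may sleep and is therefore wrong if any caller runs in atomic context. Roughly, for a callback of this shape (a sketch, not the driver's full code):

        static int alloc_buf(struct buf *b, gfp_t gfp)
        {
                /* honor the caller's allocation context; a hard-coded
                 * GFP_KERNEL could sleep under a caller's spinlock */
                b->priv = kzalloc(sizeof(*b->priv), gfp);
                return b->priv ? 0 : -ENOMEM;
        }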
+diff --git a/drivers/misc/mei/hdcp/mei_hdcp.c b/drivers/misc/mei/hdcp/mei_hdcp.c
+index e889a8bd7ac88..e0dcd5c114db1 100644
+--- a/drivers/misc/mei/hdcp/mei_hdcp.c
++++ b/drivers/misc/mei/hdcp/mei_hdcp.c
+@@ -859,8 +859,8 @@ static void mei_hdcp_remove(struct mei_cl_device *cldev)
+ dev_warn(&cldev->dev, "mei_cldev_disable() failed\n");
+ }
+
+-#define MEI_UUID_HDCP GUID_INIT(0xB638AB7E, 0x94E2, 0x4EA2, 0xA5, \
+- 0x52, 0xD1, 0xC5, 0x4B, 0x62, 0x7F, 0x04)
++#define MEI_UUID_HDCP UUID_LE(0xB638AB7E, 0x94E2, 0x4EA2, 0xA5, \
++ 0x52, 0xD1, 0xC5, 0x4B, 0x62, 0x7F, 0x04)
+
+ static const struct mei_cl_device_id mei_hdcp_tbl[] = {
+ { .uuid = MEI_UUID_HDCP, .version = MEI_CL_VERSION_ANY },
+diff --git a/drivers/misc/mei/pxp/mei_pxp.c b/drivers/misc/mei/pxp/mei_pxp.c
+index 8dd09b1722ebd..7ee1fa7b1cb31 100644
+--- a/drivers/misc/mei/pxp/mei_pxp.c
++++ b/drivers/misc/mei/pxp/mei_pxp.c
+@@ -238,8 +238,8 @@ static void mei_pxp_remove(struct mei_cl_device *cldev)
+ }
+
+ /* fbf6fcf1-96cf-4e2e-a6a6-1bab8cbe36b1 : PAVP GUID*/
+-#define MEI_GUID_PXP GUID_INIT(0xfbf6fcf1, 0x96cf, 0x4e2e, 0xA6, \
+- 0xa6, 0x1b, 0xab, 0x8c, 0xbe, 0x36, 0xb1)
++#define MEI_GUID_PXP UUID_LE(0xfbf6fcf1, 0x96cf, 0x4e2e, 0xA6, \
++ 0xa6, 0x1b, 0xab, 0x8c, 0xbe, 0x36, 0xb1)
+
+ static struct mei_cl_device_id mei_pxp_tbl[] = {
+ { .uuid = MEI_GUID_PXP, .version = MEI_CL_VERSION_ANY },
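Both MEI hunks swap GUID_INIT() back to UUID_LE(). The two macros encode the same little-endian byte layout; the revert presumably keeps the initializer macro consistent with the uuid_le typing used by the MEI client-device ID tables (an inference from the hunks, not stated in the patch). For illustration, a table entry keyed by a placeholder UUID:

        /* illustrative only: MY_UUID is a made-up client UUID */
        #define MY_UUID UUID_LE(0x12345678, 0x9abc, 0xdef0, 0x12, \
                                0x34, 0x56, 0x78, 0x9a, 0xbc, 0xde, 0xf0)

        static const struct mei_cl_device_id my_tbl[] = {
                { .uuid = MY_UUID, .version = MEI_CL_VERSION_ANY },
                { }
        };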
+diff --git a/drivers/misc/vmw_vmci/vmci_host.c b/drivers/misc/vmw_vmci/vmci_host.c
+index da1e2a773823e..857b9851402a6 100644
+--- a/drivers/misc/vmw_vmci/vmci_host.c
++++ b/drivers/misc/vmw_vmci/vmci_host.c
+@@ -242,6 +242,8 @@ static int vmci_host_setup_notify(struct vmci_ctx *context,
+ context->notify_page = NULL;
+ return VMCI_ERROR_GENERIC;
+ }
++ if (context->notify_page == NULL)
++ return VMCI_ERROR_UNAVAILABLE;
+
+ /*
+ * Map the locked page and set up notify pointer.
+diff --git a/drivers/mtd/mtdpart.c b/drivers/mtd/mtdpart.c
+index d442fa94c8720..85f5ee6f06fc6 100644
+--- a/drivers/mtd/mtdpart.c
++++ b/drivers/mtd/mtdpart.c
+@@ -577,6 +577,7 @@ static int mtd_part_of_parse(struct mtd_info *master,
+ {
+ struct mtd_part_parser *parser;
+ struct device_node *np;
++ struct device_node *child;
+ struct property *prop;
+ struct device *dev;
+ const char *compat;
+@@ -594,6 +595,15 @@ static int mtd_part_of_parse(struct mtd_info *master,
+ else
+ np = of_get_child_by_name(np, "partitions");
+
++	/*
++	 * Don't create devices that are added to a bus but will never get
++	 * probed. Such devices cause fw_devlink to block probing of consumers
++	 * of this partition until the partition device itself is probed.
++	 */
++ for_each_child_of_node(np, child)
++ if (of_device_is_compatible(child, "nvmem-cells"))
++ of_node_set_flag(child, OF_POPULATED);
++
+ of_property_for_each_string(np, "compatible", prop, compat) {
+ parser = mtd_part_get_compatible_parser(compat);
+ if (!parser)
+diff --git a/drivers/mtd/spi-nor/core.c b/drivers/mtd/spi-nor/core.c
+index d67c926bca8ba..2ef2660f58180 100644
+--- a/drivers/mtd/spi-nor/core.c
++++ b/drivers/mtd/spi-nor/core.c
+@@ -2026,6 +2026,15 @@ void spi_nor_set_erase_type(struct spi_nor_erase_type *erase, u32 size,
+ erase->size_mask = (1 << erase->size_shift) - 1;
+ }
+
++/**
++ * spi_nor_mask_erase_type() - mask out a SPI NOR erase type
++ * @erase: pointer to a structure that describes a SPI NOR erase type
++ */
++void spi_nor_mask_erase_type(struct spi_nor_erase_type *erase)
++{
++ erase->size = 0;
++}
++
+ /**
+ * spi_nor_init_uniform_erase_map() - Initialize uniform erase map
+ * @map: the erase map of the SPI NOR
+diff --git a/drivers/mtd/spi-nor/core.h b/drivers/mtd/spi-nor/core.h
+index f03b55cf7e6fe..958cd143c9346 100644
+--- a/drivers/mtd/spi-nor/core.h
++++ b/drivers/mtd/spi-nor/core.h
+@@ -684,6 +684,7 @@ void spi_nor_set_pp_settings(struct spi_nor_pp_command *pp, u8 opcode,
+
+ void spi_nor_set_erase_type(struct spi_nor_erase_type *erase, u32 size,
+ u8 opcode);
++void spi_nor_mask_erase_type(struct spi_nor_erase_type *erase);
+ struct spi_nor_erase_region *
+ spi_nor_region_next(struct spi_nor_erase_region *region);
+ void spi_nor_init_uniform_erase_map(struct spi_nor_erase_map *map,
+diff --git a/drivers/mtd/spi-nor/sfdp.c b/drivers/mtd/spi-nor/sfdp.c
+index 8434f654eca1e..223906606ecb5 100644
+--- a/drivers/mtd/spi-nor/sfdp.c
++++ b/drivers/mtd/spi-nor/sfdp.c
+@@ -875,7 +875,7 @@ static int spi_nor_init_non_uniform_erase_map(struct spi_nor *nor,
+ */
+ for (i = 0; i < SNOR_ERASE_TYPE_MAX; i++)
+ if (!(regions_erase_type & BIT(erase[i].idx)))
+- spi_nor_set_erase_type(&erase[i], 0, 0xFF);
++ spi_nor_mask_erase_type(&erase[i]);
+
+ return 0;
+ }
+@@ -1089,7 +1089,7 @@ static int spi_nor_parse_4bait(struct spi_nor *nor,
+ erase_type[i].opcode = (dwords[1] >>
+ erase_type[i].idx * 8) & 0xFF;
+ else
+- spi_nor_set_erase_type(&erase_type[i], 0u, 0xFF);
++ spi_nor_mask_erase_type(&erase_type[i]);
+ }
+
+ /*
+@@ -1228,7 +1228,7 @@ static int spi_nor_parse_sccr(struct spi_nor *nor,
+
+ le32_to_cpu_array(dwords, sccr_header->length);
+
+- if (FIELD_GET(SCCR_DWORD22_OCTAL_DTR_EN_VOLATILE, dwords[22]))
++ if (FIELD_GET(SCCR_DWORD22_OCTAL_DTR_EN_VOLATILE, dwords[21]))
+ nor->flags |= SNOR_F_IO_MODE_EN_VOLATILE;
+
+ out:
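The sccr fix is a classic off-by-one between the SFDP specification's 1-based DWORD numbering and C's 0-based arrays: the field named for DWORD 22 lives at dwords[21] once the table is read into memory. A hedged sketch of making the conversion explicit (the SFDP_DWORD macro is illustrative, not part of this patch):

        /* SFDP tables are specified with 1-based DWORD numbers */
        #define SFDP_DWORD(n)   ((n) - 1)

        if (FIELD_GET(SCCR_DWORD22_OCTAL_DTR_EN_VOLATILE,
                      dwords[SFDP_DWORD(22)]))
                nor->flags |= SNOR_F_IO_MODE_EN_VOLATILE;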
+diff --git a/drivers/mtd/spi-nor/spansion.c b/drivers/mtd/spi-nor/spansion.c
+index b621cdfd506fd..07fe0f6fdfe3e 100644
+--- a/drivers/mtd/spi-nor/spansion.c
++++ b/drivers/mtd/spi-nor/spansion.c
+@@ -21,8 +21,13 @@
+ #define SPINOR_REG_CYPRESS_CFR3V 0x00800004
+ #define SPINOR_REG_CYPRESS_CFR3V_PGSZ BIT(4) /* Page size. */
+ #define SPINOR_REG_CYPRESS_CFR5V 0x00800006
+-#define SPINOR_REG_CYPRESS_CFR5V_OCT_DTR_EN 0x3
+-#define SPINOR_REG_CYPRESS_CFR5V_OCT_DTR_DS 0
++#define SPINOR_REG_CYPRESS_CFR5_BIT6 BIT(6)
++#define SPINOR_REG_CYPRESS_CFR5_DDR BIT(1)
++#define SPINOR_REG_CYPRESS_CFR5_OPI BIT(0)
++#define SPINOR_REG_CYPRESS_CFR5V_OCT_DTR_EN \
++ (SPINOR_REG_CYPRESS_CFR5_BIT6 | SPINOR_REG_CYPRESS_CFR5_DDR | \
++ SPINOR_REG_CYPRESS_CFR5_OPI)
++#define SPINOR_REG_CYPRESS_CFR5V_OCT_DTR_DS SPINOR_REG_CYPRESS_CFR5_BIT6
+ #define SPINOR_OP_CYPRESS_RD_FAST 0xee
+
+ /* Cypress SPI NOR flash operations. */
+diff --git a/drivers/net/can/rcar/rcar_canfd.c b/drivers/net/can/rcar/rcar_canfd.c
+index f6fa7157b99b0..77b21c82faf38 100644
+--- a/drivers/net/can/rcar/rcar_canfd.c
++++ b/drivers/net/can/rcar/rcar_canfd.c
+@@ -92,10 +92,10 @@
+ /* RSCFDnCFDGAFLCFG0 / RSCFDnGAFLCFG0 */
+ #define RCANFD_GAFLCFG_SETRNC(gpriv, n, x) \
+ (((x) & reg_v3u(gpriv, 0x1ff, 0xff)) << \
+- (reg_v3u(gpriv, 16, 24) - (n) * reg_v3u(gpriv, 16, 8)))
++ (reg_v3u(gpriv, 16, 24) - ((n) & 1) * reg_v3u(gpriv, 16, 8)))
+
+ #define RCANFD_GAFLCFG_GETRNC(gpriv, n, x) \
+- (((x) >> (reg_v3u(gpriv, 16, 24) - (n) * reg_v3u(gpriv, 16, 8))) & \
++ (((x) >> (reg_v3u(gpriv, 16, 24) - ((n) & 1) * reg_v3u(gpriv, 16, 8))) & \
+ reg_v3u(gpriv, 0x1ff, 0xff))
+
+ /* RSCFDnCFDGAFLECTR / RSCFDnGAFLECTR */
+@@ -197,8 +197,8 @@
+ #define RCANFD_DCFG_DBRP(x) (((x) & 0xff) << 0)
+
+ /* RSCFDnCFDCmFDCFG */
+-#define RCANFD_FDCFG_CLOE BIT(30)
+-#define RCANFD_FDCFG_FDOE BIT(28)
++#define RCANFD_V3U_FDCFG_CLOE BIT(30)
++#define RCANFD_V3U_FDCFG_FDOE BIT(28)
+ #define RCANFD_FDCFG_TDCE BIT(9)
+ #define RCANFD_FDCFG_TDCOC BIT(8)
+ #define RCANFD_FDCFG_TDCO(x) (((x) & 0x7f) >> 16)
+@@ -429,8 +429,8 @@
+ #define RCANFD_C_RPGACC(r) (0x1900 + (0x04 * (r)))
+
+ /* R-Car V3U Classical and CAN FD mode specific register map */
+-#define RCANFD_V3U_CFDCFG (0x1314)
+ #define RCANFD_V3U_DCFG(m) (0x1400 + (0x20 * (m)))
++#define RCANFD_V3U_FDCFG(m) (0x1404 + (0x20 * (m)))
+
+ #define RCANFD_V3U_GAFL_OFFSET (0x1800)
+
+@@ -689,12 +689,13 @@ static void rcar_canfd_tx_failure_cleanup(struct net_device *ndev)
+ static void rcar_canfd_set_mode(struct rcar_canfd_global *gpriv)
+ {
+ if (is_v3u(gpriv)) {
+- if (gpriv->fdmode)
+- rcar_canfd_set_bit(gpriv->base, RCANFD_V3U_CFDCFG,
+- RCANFD_FDCFG_FDOE);
+- else
+- rcar_canfd_set_bit(gpriv->base, RCANFD_V3U_CFDCFG,
+- RCANFD_FDCFG_CLOE);
++ u32 ch, val = gpriv->fdmode ? RCANFD_V3U_FDCFG_FDOE
++ : RCANFD_V3U_FDCFG_CLOE;
++
++ for_each_set_bit(ch, &gpriv->channels_mask,
++ gpriv->info->max_channels)
++ rcar_canfd_set_bit(gpriv->base, RCANFD_V3U_FDCFG(ch),
++ val);
+ } else {
+ if (gpriv->fdmode)
+ rcar_canfd_set_bit(gpriv->base, RCANFD_GRMCFG,
+diff --git a/drivers/net/can/usb/esd_usb.c b/drivers/net/can/usb/esd_usb.c
+index 42323f5e6f3a0..578b25f873e58 100644
+--- a/drivers/net/can/usb/esd_usb.c
++++ b/drivers/net/can/usb/esd_usb.c
+@@ -239,41 +239,42 @@ static void esd_usb_rx_event(struct esd_usb_net_priv *priv,
+ msg->msg.rx.dlc, state, ecc, rxerr, txerr);
+
+ skb = alloc_can_err_skb(priv->netdev, &cf);
+- if (skb == NULL) {
+- stats->rx_dropped++;
+- return;
+- }
+
+ if (state != priv->old_state) {
++ enum can_state tx_state, rx_state;
++ enum can_state new_state = CAN_STATE_ERROR_ACTIVE;
++
+ priv->old_state = state;
+
+ switch (state & ESD_BUSSTATE_MASK) {
+ case ESD_BUSSTATE_BUSOFF:
+- priv->can.state = CAN_STATE_BUS_OFF;
+- cf->can_id |= CAN_ERR_BUSOFF;
+- priv->can.can_stats.bus_off++;
++ new_state = CAN_STATE_BUS_OFF;
+ can_bus_off(priv->netdev);
+ break;
+ case ESD_BUSSTATE_WARN:
+- priv->can.state = CAN_STATE_ERROR_WARNING;
+- priv->can.can_stats.error_warning++;
++ new_state = CAN_STATE_ERROR_WARNING;
+ break;
+ case ESD_BUSSTATE_ERRPASSIVE:
+- priv->can.state = CAN_STATE_ERROR_PASSIVE;
+- priv->can.can_stats.error_passive++;
++ new_state = CAN_STATE_ERROR_PASSIVE;
+ break;
+ default:
+- priv->can.state = CAN_STATE_ERROR_ACTIVE;
++ new_state = CAN_STATE_ERROR_ACTIVE;
+ txerr = 0;
+ rxerr = 0;
+ break;
+ }
+- } else {
++
++ if (new_state != priv->can.state) {
++ tx_state = (txerr >= rxerr) ? new_state : 0;
++ rx_state = (txerr <= rxerr) ? new_state : 0;
++ can_change_state(priv->netdev, cf,
++ tx_state, rx_state);
++ }
++ } else if (skb) {
+ priv->can.can_stats.bus_error++;
+ stats->rx_errors++;
+
+- cf->can_id |= CAN_ERR_PROT | CAN_ERR_BUSERROR |
+- CAN_ERR_CNT;
++ cf->can_id |= CAN_ERR_PROT | CAN_ERR_BUSERROR;
+
+ switch (ecc & SJA1000_ECC_MASK) {
+ case SJA1000_ECC_BIT:
+@@ -286,7 +287,6 @@ static void esd_usb_rx_event(struct esd_usb_net_priv *priv,
+ cf->data[2] |= CAN_ERR_PROT_STUFF;
+ break;
+ default:
+- cf->data[3] = ecc & SJA1000_ECC_SEG;
+ break;
+ }
+
+@@ -294,20 +294,22 @@ static void esd_usb_rx_event(struct esd_usb_net_priv *priv,
+ if (!(ecc & SJA1000_ECC_DIR))
+ cf->data[2] |= CAN_ERR_PROT_TX;
+
+- if (priv->can.state == CAN_STATE_ERROR_WARNING ||
+- priv->can.state == CAN_STATE_ERROR_PASSIVE) {
+- cf->data[1] = (txerr > rxerr) ?
+- CAN_ERR_CRTL_TX_PASSIVE :
+- CAN_ERR_CRTL_RX_PASSIVE;
+- }
+- cf->data[6] = txerr;
+- cf->data[7] = rxerr;
++ /* Bit stream position in CAN frame as the error was detected */
++ cf->data[3] = ecc & SJA1000_ECC_SEG;
+ }
+
+ priv->bec.txerr = txerr;
+ priv->bec.rxerr = rxerr;
+
+- netif_rx(skb);
++ if (skb) {
++ cf->can_id |= CAN_ERR_CNT;
++ cf->data[6] = txerr;
++ cf->data[7] = rxerr;
++
++ netif_rx(skb);
++ } else {
++ stats->rx_dropped++;
++ }
+ }
+ }
+
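The esd_usb rework routes state transitions through can_change_state(), which fills the error frame and bumps the statistics in one place, and it keeps updating the state machine even when alloc_can_err_skb() fails. Which side a transition is charged to follows from the error counters, condensed:

        enum can_state tx_state, rx_state;

        /* attribute the new state to whichever side has more errors;
         * 0 means no change for that direction */
        tx_state = (txerr >= rxerr) ? new_state : 0;
        rx_state = (txerr <= rxerr) ? new_state : 0;
        can_change_state(netdev, cf, tx_state, rx_state);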
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+index 21973046b12b4..d937daa8ee883 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+@@ -2316,6 +2316,14 @@ static unsigned int bcmgenet_desc_rx(struct bcmgenet_rx_ring *ring,
+ __func__, p_index, ring->c_index,
+ ring->read_ptr, dma_length_status);
+
++ if (unlikely(len > RX_BUF_LENGTH)) {
++ netif_err(priv, rx_status, dev, "oversized packet\n");
++ dev->stats.rx_length_errors++;
++ dev->stats.rx_errors++;
++ dev_kfree_skb_any(skb);
++ goto next;
++ }
++
+ if (unlikely(!(dma_flag & DMA_EOP) || !(dma_flag & DMA_SOP))) {
+ netif_err(priv, rx_status, dev,
+ "dropping fragmented packet!\n");
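The bcmgenet hunk rejects frames whose hardware-reported length exceeds the posted RX buffer before any payload is touched, counting them as length errors. Generalized (hw_len and buf_size are placeholders):

        /* the length comes from a DMA descriptor, i.e. from the NIC;
         * never trust it to fit the buffer that was actually posted */
        if (unlikely(hw_len > buf_size)) {
                ndev->stats.rx_length_errors++;
                ndev->stats.rx_errors++;
                dev_kfree_skb_any(skb);
                return;                 /* drop before touching payload */
        }
        skb_put(skb, hw_len);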
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmmii.c b/drivers/net/ethernet/broadcom/genet/bcmmii.c
+index b615176338b26..be042905ada2a 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmmii.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmmii.c
+@@ -176,15 +176,6 @@ void bcmgenet_phy_power_set(struct net_device *dev, bool enable)
+
+ static void bcmgenet_moca_phy_setup(struct bcmgenet_priv *priv)
+ {
+- u32 reg;
+-
+- if (!GENET_IS_V5(priv)) {
+- /* Speed settings are set in bcmgenet_mii_setup() */
+- reg = bcmgenet_sys_readl(priv, SYS_PORT_CTRL);
+- reg |= LED_ACT_SOURCE_MAC;
+- bcmgenet_sys_writel(priv, reg, SYS_PORT_CTRL);
+- }
+-
+ if (priv->hw_params->flags & GENET_HAS_MOCA_LINK_DET)
+ fixed_phy_set_link_update(priv->dev->phydev,
+ bcmgenet_fixed_phy_link_update);
+@@ -217,6 +208,8 @@ int bcmgenet_mii_config(struct net_device *dev, bool init)
+
+ if (!phy_name) {
+ phy_name = "MoCA";
++ if (!GENET_IS_V5(priv))
++ port_ctrl |= LED_ACT_SOURCE_MAC;
+ bcmgenet_moca_phy_setup(priv);
+ }
+ break;
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index 8ec24f6cf6beb..3811462824390 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -6182,15 +6182,12 @@ int ice_vsi_cfg(struct ice_vsi *vsi)
+ {
+ int err;
+
+- if (vsi->netdev) {
++ if (vsi->netdev && vsi->type == ICE_VSI_PF) {
+ ice_set_rx_mode(vsi->netdev);
+
+- if (vsi->type != ICE_VSI_LB) {
+- err = ice_vsi_vlan_setup(vsi);
+-
+- if (err)
+- return err;
+- }
++ err = ice_vsi_vlan_setup(vsi);
++ if (err)
++ return err;
+ }
+ ice_vsi_cfg_dcb_rings(vsi);
+
+@@ -6371,7 +6368,7 @@ static int ice_up_complete(struct ice_vsi *vsi)
+
+ if (vsi->port_info &&
+ (vsi->port_info->phy.link_info.link_info & ICE_AQ_LINK_UP) &&
+- vsi->netdev) {
++ vsi->netdev && vsi->type == ICE_VSI_PF) {
+ ice_print_link_msg(vsi, true);
+ netif_tx_start_all_queues(vsi->netdev);
+ netif_carrier_on(vsi->netdev);
+@@ -6382,7 +6379,9 @@ static int ice_up_complete(struct ice_vsi *vsi)
+ * set the baseline so counters are ready when interface is up
+ */
+ ice_update_eth_stats(vsi);
+- ice_service_task_schedule(pf);
++
++ if (vsi->type == ICE_VSI_PF)
++ ice_service_task_schedule(pf);
+
+ return 0;
+ }
+diff --git a/drivers/net/ethernet/intel/ice/ice_ptp.c b/drivers/net/ethernet/intel/ice/ice_ptp.c
+index d63161d73eb16..3abc8db1d0659 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ptp.c
++++ b/drivers/net/ethernet/intel/ice/ice_ptp.c
+@@ -2269,7 +2269,7 @@ static void ice_ptp_set_caps(struct ice_pf *pf)
+ snprintf(info->name, sizeof(info->name) - 1, "%s-%s-clk",
+ dev_driver_string(dev), dev_name(dev));
+ info->owner = THIS_MODULE;
+- info->max_adj = 999999999;
++ info->max_adj = 100000000;
+ info->adjtime = ice_ptp_adjtime;
+ info->adjfine = ice_ptp_adjfine;
+ info->gettimex64 = ice_ptp_gettimex64;
+diff --git a/drivers/net/ethernet/mellanox/mlx4/en_tx.c b/drivers/net/ethernet/mellanox/mlx4/en_tx.c
+index c5758637b7bed..2f79378fbf6ec 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/en_tx.c
++++ b/drivers/net/ethernet/mellanox/mlx4/en_tx.c
+@@ -699,32 +699,32 @@ static void build_inline_wqe(struct mlx4_en_tx_desc *tx_desc,
+ inl->byte_count = cpu_to_be32(1 << 31 | skb->len);
+ } else {
+ inl->byte_count = cpu_to_be32(1 << 31 | MIN_PKT_LEN);
+- memset(((void *)(inl + 1)) + skb->len, 0,
++ memset(inl->data + skb->len, 0,
+ MIN_PKT_LEN - skb->len);
+ }
+- skb_copy_from_linear_data(skb, inl + 1, hlen);
++ skb_copy_from_linear_data(skb, inl->data, hlen);
+ if (shinfo->nr_frags)
+- memcpy(((void *)(inl + 1)) + hlen, fragptr,
++ memcpy(inl->data + hlen, fragptr,
+ skb_frag_size(&shinfo->frags[0]));
+
+ } else {
+ inl->byte_count = cpu_to_be32(1 << 31 | spc);
+ if (hlen <= spc) {
+- skb_copy_from_linear_data(skb, inl + 1, hlen);
++ skb_copy_from_linear_data(skb, inl->data, hlen);
+ if (hlen < spc) {
+- memcpy(((void *)(inl + 1)) + hlen,
++ memcpy(inl->data + hlen,
+ fragptr, spc - hlen);
+ fragptr += spc - hlen;
+ }
+- inl = (void *) (inl + 1) + spc;
+- memcpy(((void *)(inl + 1)), fragptr, skb->len - spc);
++ inl = (void *)inl->data + spc;
++ memcpy(inl->data, fragptr, skb->len - spc);
+ } else {
+- skb_copy_from_linear_data(skb, inl + 1, spc);
+- inl = (void *) (inl + 1) + spc;
+- skb_copy_from_linear_data_offset(skb, spc, inl + 1,
++ skb_copy_from_linear_data(skb, inl->data, spc);
++ inl = (void *)inl->data + spc;
++ skb_copy_from_linear_data_offset(skb, spc, inl->data,
+ hlen - spc);
+ if (shinfo->nr_frags)
+- memcpy(((void *)(inl + 1)) + hlen - spc,
++ memcpy(inl->data + hlen - spc,
+ fragptr,
+ skb_frag_size(&shinfo->frags[0]));
+ }
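The mlx4 rewrite replaces (void *)(inl + 1) pointer arithmetic with a flexible array member, inl->data. Both expressions name the same address, but the flexible array gives the compiler and FORTIFY-style bounds checkers a real object to reason about. Assuming the inline segment gained a trailing array, the two forms compare like this:

        struct inline_seg {
                __be32 byte_count;
                u8 data[];              /* payload starts after the header */
        };

        /* old style: step past the struct and cast */
        memcpy((void *)(seg + 1), src, len);
        /* new style: same address, visible to bounds checking */
        memcpy(seg->data, src, len);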
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c b/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
+index 5b05b884b5fb3..d7b2ee5de1158 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
+@@ -603,7 +603,7 @@ static int mlx5_tracer_handle_string_trace(struct mlx5_fw_tracer *tracer,
+ } else {
+ cur_string = mlx5_tracer_message_get(tracer, tracer_event);
+ if (!cur_string) {
+- pr_debug("%s Got string event for unknown string tdsm: %d\n",
++ pr_debug("%s Got string event for unknown string tmsn: %d\n",
+ __func__, tracer_event->string_event.tmsn);
+ return -1;
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h
+index 8bed9c3610754..d739d77d68986 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h
+@@ -119,7 +119,7 @@ struct mlx5e_ipsec_work {
+ };
+
+ struct mlx5e_ipsec_aso {
+- u8 ctx[MLX5_ST_SZ_BYTES(ipsec_aso)];
++ u8 __aligned(64) ctx[MLX5_ST_SZ_BYTES(ipsec_aso)];
+ dma_addr_t dma_addr;
+ struct mlx5_aso *aso;
+ /* Protect ASO WQ access, as it is global to whole IPsec */
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c b/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
+index 0eb50be175cc4..64d4e7125e9bb 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
+@@ -219,7 +219,8 @@ static int alloc_4k(struct mlx5_core_dev *dev, u64 *addr, u32 function)
+
+ n = find_first_bit(&fp->bitmask, 8 * sizeof(fp->bitmask));
+ if (n >= MLX5_NUM_4K_IN_PAGE) {
+- mlx5_core_warn(dev, "alloc 4k bug\n");
++ mlx5_core_warn(dev, "alloc 4k bug: fw page = 0x%llx, n = %u, bitmask: %lu, max num of 4K pages: %d\n",
++ fp->addr, n, fp->bitmask, MLX5_NUM_4K_IN_PAGE);
+ return -ENOENT;
+ }
+ clear_bit(n, &fp->bitmask);
+diff --git a/drivers/net/ethernet/microchip/lan966x/lan966x_ptp.c b/drivers/net/ethernet/microchip/lan966x/lan966x_ptp.c
+index a8348437dd87f..61fbabf5bebc3 100644
+--- a/drivers/net/ethernet/microchip/lan966x/lan966x_ptp.c
++++ b/drivers/net/ethernet/microchip/lan966x/lan966x_ptp.c
+@@ -524,9 +524,9 @@ irqreturn_t lan966x_ptp_irq_handler(int irq, void *args)
+ if (WARN_ON(!skb_match))
+ continue;
+
+- spin_lock(&lan966x->ptp_ts_id_lock);
++ spin_lock_irqsave(&lan966x->ptp_ts_id_lock, flags);
+ lan966x->ptp_skbs--;
+- spin_unlock(&lan966x->ptp_ts_id_lock);
++ spin_unlock_irqrestore(&lan966x->ptp_ts_id_lock, flags);
+
+ /* Get the h/w timestamp */
+ lan966x_get_hwtimestamp(lan966x, &ts, delay);
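The lan966x fix switches to the IRQ-saving lock variant, presumably because ptp_ts_id_lock is also taken from contexts that run with interrupts enabled elsewhere in the driver; mixing plain spin_lock() and hard-IRQ users of one lock invites deadlock. The safe default when in doubt:

        unsigned long flags;

        /* disable local interrupts while holding the lock so a
         * hard-IRQ user of the same lock cannot preempt us and
         * spin on it forever */
        spin_lock_irqsave(&lock, flags);
        counter--;
        spin_unlock_irqrestore(&lock, flags);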
+diff --git a/drivers/net/ethernet/qlogic/qede/qede_main.c b/drivers/net/ethernet/qlogic/qede/qede_main.c
+index 953f304b8588c..89d64a5a4951a 100644
+--- a/drivers/net/ethernet/qlogic/qede/qede_main.c
++++ b/drivers/net/ethernet/qlogic/qede/qede_main.c
+@@ -960,7 +960,6 @@ static int qede_alloc_fp_array(struct qede_dev *edev)
+ {
+ u8 fp_combined, fp_rx = edev->fp_num_rx;
+ struct qede_fastpath *fp;
+- void *mem;
+ int i;
+
+ edev->fp_array = kcalloc(QEDE_QUEUE_CNT(edev),
+@@ -970,14 +969,15 @@ static int qede_alloc_fp_array(struct qede_dev *edev)
+ goto err;
+ }
+
+- mem = krealloc(edev->coal_entry, QEDE_QUEUE_CNT(edev) *
+- sizeof(*edev->coal_entry), GFP_KERNEL);
+- if (!mem) {
+- DP_ERR(edev, "coalesce entry allocation failed\n");
+- kfree(edev->coal_entry);
+- goto err;
++ if (!edev->coal_entry) {
++ edev->coal_entry = kcalloc(QEDE_MAX_RSS_CNT(edev),
++ sizeof(*edev->coal_entry),
++ GFP_KERNEL);
++ if (!edev->coal_entry) {
++ DP_ERR(edev, "coalesce entry allocation failed\n");
++ goto err;
++ }
+ }
+- edev->coal_entry = mem;
+
+ fp_combined = QEDE_QUEUE_CNT(edev) - fp_rx - edev->fp_num_tx;
+
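The qede change allocates the coalesce table once, sized for the maximum RSS queue count, instead of krealloc()ing it on every reconfiguration; recovery paths can then neither leak nor shrink it. It also sidesteps the classic krealloc pitfall, sketched here:

        /* WRONG: on failure krealloc() returns NULL but leaves the
         * old allocation live, so assigning directly leaks it */
        p = krealloc(p, new_size, GFP_KERNEL);

        /* safer idiom when reallocation is genuinely needed */
        tmp = krealloc(p, new_size, GFP_KERNEL);
        if (!tmp) {
                kfree(p);
                return -ENOMEM;
        }
        p = tmp;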
+diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+index 6cda4b7c10cb6..3e17152798554 100644
+--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
++++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+@@ -2852,6 +2852,7 @@ static int am65_cpsw_nuss_probe(struct platform_device *pdev)
+
+ err_free_phylink:
+ am65_cpsw_nuss_phylink_cleanup(common);
++ am65_cpts_release(common->cpts);
+ err_of_clear:
+ of_platform_device_destroy(common->mdio_dev, NULL);
+ err_pm_clear:
+@@ -2880,6 +2881,7 @@ static int am65_cpsw_nuss_remove(struct platform_device *pdev)
+ */
+ am65_cpsw_nuss_cleanup_ndev(common);
+ am65_cpsw_nuss_phylink_cleanup(common);
++ am65_cpts_release(common->cpts);
+
+ of_platform_device_destroy(common->mdio_dev, NULL);
+
+diff --git a/drivers/net/ethernet/ti/am65-cpts.c b/drivers/net/ethernet/ti/am65-cpts.c
+index 9535396b28cd9..a297890152d92 100644
+--- a/drivers/net/ethernet/ti/am65-cpts.c
++++ b/drivers/net/ethernet/ti/am65-cpts.c
+@@ -929,14 +929,13 @@ static int am65_cpts_of_parse(struct am65_cpts *cpts, struct device_node *node)
+ return cpts_of_mux_clk_setup(cpts, node);
+ }
+
+-static void am65_cpts_release(void *data)
++void am65_cpts_release(struct am65_cpts *cpts)
+ {
+- struct am65_cpts *cpts = data;
+-
+ ptp_clock_unregister(cpts->ptp_clock);
+ am65_cpts_disable(cpts);
+ clk_disable_unprepare(cpts->refclk);
+ }
++EXPORT_SYMBOL_GPL(am65_cpts_release);
+
+ struct am65_cpts *am65_cpts_create(struct device *dev, void __iomem *regs,
+ struct device_node *node)
+@@ -1014,18 +1013,12 @@ struct am65_cpts *am65_cpts_create(struct device *dev, void __iomem *regs,
+ }
+ cpts->phc_index = ptp_clock_index(cpts->ptp_clock);
+
+- ret = devm_add_action_or_reset(dev, am65_cpts_release, cpts);
+- if (ret) {
+- dev_err(dev, "failed to add ptpclk reset action %d", ret);
+- return ERR_PTR(ret);
+- }
+-
+ ret = devm_request_threaded_irq(dev, cpts->irq, NULL,
+ am65_cpts_interrupt,
+ IRQF_ONESHOT, dev_name(dev), cpts);
+ if (ret < 0) {
+ dev_err(cpts->dev, "error attaching irq %d\n", ret);
+- return ERR_PTR(ret);
++ goto reset_ptpclk;
+ }
+
+ dev_info(dev, "CPTS ver 0x%08x, freq:%u, add_val:%u\n",
+@@ -1034,6 +1027,8 @@ struct am65_cpts *am65_cpts_create(struct device *dev, void __iomem *regs,
+
+ return cpts;
+
++reset_ptpclk:
++ am65_cpts_release(cpts);
+ refclk_disable:
+ clk_disable_unprepare(cpts->refclk);
+ return ERR_PTR(ret);
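The am65-cpts rework drops devm_add_action_or_reset() in favor of an exported am65_cpts_release() that the CPSW driver calls at a precise point in its own error and remove paths; devm actions only fire when the device itself is released, which here was too late relative to the rest of the teardown. For reference, the devm pattern being removed looked roughly like:

        /* runs automatically on driver detach, in reverse registration
         * order; convenient, but the timing is dictated by devm rather
         * than by the caller's teardown sequence */
        ret = devm_add_action_or_reset(dev, release_fn, ctx);
        if (ret)
                return ret;     /* release_fn(ctx) was already called */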
+diff --git a/drivers/net/ethernet/ti/am65-cpts.h b/drivers/net/ethernet/ti/am65-cpts.h
+index bd08f4b2edd2d..6e14df0be1137 100644
+--- a/drivers/net/ethernet/ti/am65-cpts.h
++++ b/drivers/net/ethernet/ti/am65-cpts.h
+@@ -18,6 +18,7 @@ struct am65_cpts_estf_cfg {
+ };
+
+ #if IS_ENABLED(CONFIG_TI_K3_AM65_CPTS)
++void am65_cpts_release(struct am65_cpts *cpts);
+ struct am65_cpts *am65_cpts_create(struct device *dev, void __iomem *regs,
+ struct device_node *node);
+ int am65_cpts_phc_index(struct am65_cpts *cpts);
+@@ -31,6 +32,10 @@ void am65_cpts_estf_disable(struct am65_cpts *cpts, int idx);
+ void am65_cpts_suspend(struct am65_cpts *cpts);
+ void am65_cpts_resume(struct am65_cpts *cpts);
+ #else
++static inline void am65_cpts_release(struct am65_cpts *cpts)
++{
++}
++
+ static inline struct am65_cpts *am65_cpts_create(struct device *dev,
+ void __iomem *regs,
+ struct device_node *node)
+diff --git a/drivers/net/hyperv/netvsc.c b/drivers/net/hyperv/netvsc.c
+index 79f4e13620a46..da737d959e81c 100644
+--- a/drivers/net/hyperv/netvsc.c
++++ b/drivers/net/hyperv/netvsc.c
+@@ -851,6 +851,7 @@ static void netvsc_send_completion(struct net_device *ndev,
+ u32 msglen = hv_pkt_datalen(desc);
+ struct nvsp_message *pkt_rqst;
+ u64 cmd_rqst;
++ u32 status;
+
+ /* First check if this is a VMBUS completion without data payload */
+ if (!msglen) {
+@@ -922,6 +923,23 @@ static void netvsc_send_completion(struct net_device *ndev,
+ break;
+
+ case NVSP_MSG1_TYPE_SEND_RNDIS_PKT_COMPLETE:
++ if (msglen < sizeof(struct nvsp_message_header) +
++ sizeof(struct nvsp_1_message_send_rndis_packet_complete)) {
++ if (net_ratelimit())
++ netdev_err(ndev, "nvsp_rndis_pkt_complete length too small: %u\n",
++ msglen);
++ return;
++ }
++
++ /* If status indicates an error, output a message so we know
++ * there's a problem. But process the completion anyway so the
++ * resources are released.
++ */
++ status = nvsp_packet->msg.v1_msg.send_rndis_pkt_complete.status;
++ if (status != NVSP_STAT_SUCCESS && net_ratelimit())
++ netdev_err(ndev, "nvsp_rndis_pkt_complete error status: %x\n",
++ status);
++
+ netvsc_send_tx_complete(ndev, net_device, incoming_channel,
+ desc, budget);
+ break;
+diff --git a/drivers/net/ipa/gsi.c b/drivers/net/ipa/gsi.c
+index bea2da1c4c51d..f1a3938294866 100644
+--- a/drivers/net/ipa/gsi.c
++++ b/drivers/net/ipa/gsi.c
+@@ -1666,7 +1666,8 @@ static int gsi_generic_command(struct gsi *gsi, u32 channel_id,
+ val = u32_encode_bits(opcode, GENERIC_OPCODE_FMASK);
+ val |= u32_encode_bits(channel_id, GENERIC_CHID_FMASK);
+ val |= u32_encode_bits(GSI_EE_MODEM, GENERIC_EE_FMASK);
+- val |= u32_encode_bits(params, GENERIC_PARAMS_FMASK);
++ if (gsi->version >= IPA_VERSION_4_11)
++ val |= u32_encode_bits(params, GENERIC_PARAMS_FMASK);
+
+ timeout = !gsi_command(gsi, GSI_GENERIC_CMD_OFFSET, val);
+
+diff --git a/drivers/net/ipa/gsi_reg.h b/drivers/net/ipa/gsi_reg.h
+index 3763359f208f7..e65f2f055cfff 100644
+--- a/drivers/net/ipa/gsi_reg.h
++++ b/drivers/net/ipa/gsi_reg.h
+@@ -372,7 +372,6 @@ enum gsi_general_id {
+ #define GSI_ERROR_LOG_OFFSET \
+ (0x0001f200 + 0x4000 * GSI_EE_AP)
+
+-/* Fields below are present for IPA v3.5.1 and above */
+ #define ERR_ARG3_FMASK GENMASK(3, 0)
+ #define ERR_ARG2_FMASK GENMASK(7, 4)
+ #define ERR_ARG1_FMASK GENMASK(11, 8)
+diff --git a/drivers/net/tap.c b/drivers/net/tap.c
+index a2be1994b3894..8941aa199ea33 100644
+--- a/drivers/net/tap.c
++++ b/drivers/net/tap.c
+@@ -533,7 +533,7 @@ static int tap_open(struct inode *inode, struct file *file)
+ q->sock.state = SS_CONNECTED;
+ q->sock.file = file;
+ q->sock.ops = &tap_socket_ops;
+- sock_init_data(&q->sock, &q->sk);
++ sock_init_data_uid(&q->sock, &q->sk, inode->i_uid);
+ q->sk.sk_write_space = tap_sock_write_space;
+ q->sk.sk_destruct = tap_sock_destruct;
+ q->flags = IFF_VNET_HDR | IFF_NO_PI | IFF_TAP;
+diff --git a/drivers/net/tun.c b/drivers/net/tun.c
+index a7d17c680f4a0..745131b2d6dbf 100644
+--- a/drivers/net/tun.c
++++ b/drivers/net/tun.c
+@@ -3448,7 +3448,7 @@ static int tun_chr_open(struct inode *inode, struct file * file)
+ tfile->socket.file = file;
+ tfile->socket.ops = &tun_socket_ops;
+
+- sock_init_data(&tfile->socket, &tfile->sk);
++ sock_init_data_uid(&tfile->socket, &tfile->sk, inode->i_uid);
+
+ tfile->sk.sk_write_space = tun_sock_write_space;
+ tfile->sk.sk_sndbuf = INT_MAX;
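Both tap and tun now call sock_init_data_uid(), so the socket's owner UID is taken from the inode of the /dev node that was opened rather than from whichever task later touches the socket; per-UID socket accounting then stays correct when the fd is handed to another process. The call differs from sock_init_data() only in the extra uid argument:

        /* charge this socket to the user who opened the device
         * node, not to the current task */
        sock_init_data_uid(&q->sock, &q->sk, inode->i_uid);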
+diff --git a/drivers/net/wireless/ath/ath11k/core.h b/drivers/net/wireless/ath/ath11k/core.h
+index 22460b0abf037..ac34c57e4bc69 100644
+--- a/drivers/net/wireless/ath/ath11k/core.h
++++ b/drivers/net/wireless/ath/ath11k/core.h
+@@ -912,7 +912,6 @@ struct ath11k_base {
+ enum ath11k_dfs_region dfs_region;
+ #ifdef CONFIG_ATH11K_DEBUGFS
+ struct dentry *debugfs_soc;
+- struct dentry *debugfs_ath11k;
+ #endif
+ struct ath11k_soc_dp_stats soc_stats;
+
+diff --git a/drivers/net/wireless/ath/ath11k/debugfs.c b/drivers/net/wireless/ath/ath11k/debugfs.c
+index ccdf3d5ba1ab6..5bb6fd17fdf6f 100644
+--- a/drivers/net/wireless/ath/ath11k/debugfs.c
++++ b/drivers/net/wireless/ath/ath11k/debugfs.c
+@@ -976,10 +976,6 @@ int ath11k_debugfs_pdev_create(struct ath11k_base *ab)
+ if (test_bit(ATH11K_FLAG_REGISTERED, &ab->dev_flags))
+ return 0;
+
+- ab->debugfs_soc = debugfs_create_dir(ab->hw_params.name, ab->debugfs_ath11k);
+- if (IS_ERR(ab->debugfs_soc))
+- return PTR_ERR(ab->debugfs_soc);
+-
+ debugfs_create_file("simulate_fw_crash", 0600, ab->debugfs_soc, ab,
+ &fops_simulate_fw_crash);
+
+@@ -1001,15 +997,51 @@ void ath11k_debugfs_pdev_destroy(struct ath11k_base *ab)
+
+ int ath11k_debugfs_soc_create(struct ath11k_base *ab)
+ {
+- ab->debugfs_ath11k = debugfs_create_dir("ath11k", NULL);
++ struct dentry *root;
++ bool dput_needed;
++ char name[64];
++ int ret;
++
++ root = debugfs_lookup("ath11k", NULL);
++ if (!root) {
++ root = debugfs_create_dir("ath11k", NULL);
++ if (IS_ERR_OR_NULL(root))
++ return PTR_ERR(root);
++
++ dput_needed = false;
++ } else {
++		/* a dentry from lookup() needs dput() once we no longer use it */
++ dput_needed = true;
++ }
++
++ scnprintf(name, sizeof(name), "%s-%s", ath11k_bus_str(ab->hif.bus),
++ dev_name(ab->dev));
++
++ ab->debugfs_soc = debugfs_create_dir(name, root);
++ if (IS_ERR_OR_NULL(ab->debugfs_soc)) {
++ ret = PTR_ERR(ab->debugfs_soc);
++ goto out;
++ }
++
++ ret = 0;
+
+- return PTR_ERR_OR_ZERO(ab->debugfs_ath11k);
++out:
++ if (dput_needed)
++ dput(root);
++
++ return ret;
+ }
+
+ void ath11k_debugfs_soc_destroy(struct ath11k_base *ab)
+ {
+- debugfs_remove_recursive(ab->debugfs_ath11k);
+- ab->debugfs_ath11k = NULL;
++ debugfs_remove_recursive(ab->debugfs_soc);
++ ab->debugfs_soc = NULL;
++
++	/* We are not removing the ath11k directory on purpose, even if it
++	 * would be empty. This simplifies the directory handling, and it's
++	 * only a minor cosmetic issue to leave an empty ath11k directory in
++	 * debugfs.
++	 */
+ }
+ EXPORT_SYMBOL(ath11k_debugfs_soc_destroy);
+
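The ath11k debugfs rework shares one top-level ath11k/ directory between driver instances: look the dentry up first, create it only if the lookup misses. The subtlety the hunk's comment calls out is reference counting, since debugfs_lookup() takes a reference that must be dropped with dput() while debugfs_create_dir() does not. Condensed:

        struct dentry *root = debugfs_lookup("ath11k", NULL);
        bool dput_needed = false;

        if (!root)
                root = debugfs_create_dir("ath11k", NULL); /* no ref taken */
        else
                dput_needed = true;     /* lookup grabbed a reference */

        /* ... create the per-device subdirectory under root ... */

        if (dput_needed)
                dput(root);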
+diff --git a/drivers/net/wireless/ath/ath11k/dp_rx.c b/drivers/net/wireless/ath/ath11k/dp_rx.c
+index c5a4c34d77499..e964e1b722871 100644
+--- a/drivers/net/wireless/ath/ath11k/dp_rx.c
++++ b/drivers/net/wireless/ath/ath11k/dp_rx.c
+@@ -3126,6 +3126,7 @@ int ath11k_peer_rx_frag_setup(struct ath11k *ar, const u8 *peer_mac, int vdev_id
+ if (!peer) {
+ ath11k_warn(ab, "failed to find the peer to set up fragment info\n");
+ spin_unlock_bh(&ab->base_lock);
++ crypto_free_shash(tfm);
+ return -ENOENT;
+ }
+
+@@ -5022,6 +5023,7 @@ static int ath11k_dp_rx_mon_deliver(struct ath11k *ar, u32 mac_id,
+ } else {
+ rxs->flag |= RX_FLAG_ALLOW_SAME_PN;
+ }
++ rxs->flag |= RX_FLAG_ONLY_MONITOR;
+ ath11k_update_radiotap(ar, ppduinfo, mon_skb, rxs);
+
+ ath11k_dp_rx_deliver_msdu(ar, napi, mon_skb, rxs);
+diff --git a/drivers/net/wireless/ath/ath11k/pci.c b/drivers/net/wireless/ath/ath11k/pci.c
+index 99cf3357c66e1..3c6005ab9a717 100644
+--- a/drivers/net/wireless/ath/ath11k/pci.c
++++ b/drivers/net/wireless/ath/ath11k/pci.c
+@@ -979,7 +979,7 @@ static __maybe_unused int ath11k_pci_pm_suspend(struct device *dev)
+ if (ret)
+ ath11k_warn(ab, "failed to suspend core: %d\n", ret);
+
+- return ret;
++ return 0;
+ }
+
+ static __maybe_unused int ath11k_pci_pm_resume(struct device *dev)
+diff --git a/drivers/net/wireless/ath/ath9k/hif_usb.c b/drivers/net/wireless/ath/ath9k/hif_usb.c
+index 1a2e0c7eeb023..f521dfa2f1945 100644
+--- a/drivers/net/wireless/ath/ath9k/hif_usb.c
++++ b/drivers/net/wireless/ath/ath9k/hif_usb.c
+@@ -561,11 +561,11 @@ static void ath9k_hif_usb_rx_stream(struct hif_device_usb *hif_dev,
+ memcpy(ptr, skb->data, rx_remain_len);
+
+ rx_pkt_len += rx_remain_len;
+- hif_dev->rx_remain_len = 0;
+ skb_put(remain_skb, rx_pkt_len);
+
+ skb_pool[pool_index++] = remain_skb;
+-
++ hif_dev->remain_skb = NULL;
++ hif_dev->rx_remain_len = 0;
+ } else {
+ index = rx_remain_len;
+ }
+@@ -584,16 +584,21 @@ static void ath9k_hif_usb_rx_stream(struct hif_device_usb *hif_dev,
+ pkt_len = get_unaligned_le16(ptr + index);
+ pkt_tag = get_unaligned_le16(ptr + index + 2);
+
++		/* If we encounter an invalid pkt_tag or pkt_len, the
++		 * whole input SKB is considered invalid and dropped;
++		 * the associated packets already in skb_pool are
++		 * dropped, too.
++		 */
+ if (pkt_tag != ATH_USB_RX_STREAM_MODE_TAG) {
+ RX_STAT_INC(hif_dev, skb_dropped);
+- return;
++ goto invalid_pkt;
+ }
+
+ if (pkt_len > 2 * MAX_RX_BUF_SIZE) {
+ dev_err(&hif_dev->udev->dev,
+ "ath9k_htc: invalid pkt_len (%x)\n", pkt_len);
+ RX_STAT_INC(hif_dev, skb_dropped);
+- return;
++ goto invalid_pkt;
+ }
+
+ pad_len = 4 - (pkt_len & 0x3);
+@@ -605,11 +610,6 @@ static void ath9k_hif_usb_rx_stream(struct hif_device_usb *hif_dev,
+
+ if (index > MAX_RX_BUF_SIZE) {
+ spin_lock(&hif_dev->rx_lock);
+- hif_dev->rx_remain_len = index - MAX_RX_BUF_SIZE;
+- hif_dev->rx_transfer_len =
+- MAX_RX_BUF_SIZE - chk_idx - 4;
+- hif_dev->rx_pad_len = pad_len;
+-
+ nskb = __dev_alloc_skb(pkt_len + 32, GFP_ATOMIC);
+ if (!nskb) {
+ dev_err(&hif_dev->udev->dev,
+@@ -617,6 +617,12 @@ static void ath9k_hif_usb_rx_stream(struct hif_device_usb *hif_dev,
+ spin_unlock(&hif_dev->rx_lock);
+ goto err;
+ }
++
++ hif_dev->rx_remain_len = index - MAX_RX_BUF_SIZE;
++ hif_dev->rx_transfer_len =
++ MAX_RX_BUF_SIZE - chk_idx - 4;
++ hif_dev->rx_pad_len = pad_len;
++
+ skb_reserve(nskb, 32);
+ RX_STAT_INC(hif_dev, skb_allocated);
+
+@@ -654,6 +660,13 @@ err:
+ skb_pool[i]->len, USB_WLAN_RX_PIPE);
+ RX_STAT_INC(hif_dev, skb_completed);
+ }
++ return;
++invalid_pkt:
++ for (i = 0; i < pool_index; i++) {
++ dev_kfree_skb_any(skb_pool[i]);
++ RX_STAT_INC(hif_dev, skb_dropped);
++ }
++ return;
+ }
+
+ static void ath9k_hif_usb_rx_cb(struct urb *urb)
+@@ -1411,8 +1424,6 @@ static void ath9k_hif_usb_disconnect(struct usb_interface *interface)
+
+ if (hif_dev->flags & HIF_USB_READY) {
+ ath9k_htc_hw_deinit(hif_dev->htc_handle, unplugged);
+- ath9k_hif_usb_dev_deinit(hif_dev);
+- ath9k_destroy_wmi(hif_dev->htc_handle->drv_priv);
+ ath9k_htc_hw_free(hif_dev->htc_handle);
+ }
+
+diff --git a/drivers/net/wireless/ath/ath9k/htc_drv_init.c b/drivers/net/wireless/ath/ath9k/htc_drv_init.c
+index 07ac88fb1c577..96a3185a96d75 100644
+--- a/drivers/net/wireless/ath/ath9k/htc_drv_init.c
++++ b/drivers/net/wireless/ath/ath9k/htc_drv_init.c
+@@ -988,6 +988,8 @@ void ath9k_htc_disconnect_device(struct htc_target *htc_handle, bool hotunplug)
+
+ ath9k_deinit_device(htc_handle->drv_priv);
+ ath9k_stop_wmi(htc_handle->drv_priv);
++ ath9k_hif_usb_dealloc_urbs((struct hif_device_usb *)htc_handle->hif_dev);
++ ath9k_destroy_wmi(htc_handle->drv_priv);
+ ieee80211_free_hw(htc_handle->drv_priv->hw);
+ }
+ }
+diff --git a/drivers/net/wireless/ath/ath9k/htc_hst.c b/drivers/net/wireless/ath/ath9k/htc_hst.c
+index ca05b07a45e67..fe62ff668f757 100644
+--- a/drivers/net/wireless/ath/ath9k/htc_hst.c
++++ b/drivers/net/wireless/ath/ath9k/htc_hst.c
+@@ -391,7 +391,7 @@ static void ath9k_htc_fw_panic_report(struct htc_target *htc_handle,
+ * HTC Messages are handled directly here and the obtained SKB
+ * is freed.
+ *
+- * Service messages (Data, WMI) passed to the corresponding
++ * Service messages (Data, WMI) are passed to the corresponding
+ * endpoint RX handlers, which have to free the SKB.
+ */
+ void ath9k_htc_rx_msg(struct htc_target *htc_handle,
+@@ -478,6 +478,8 @@ invalid:
+ if (endpoint->ep_callbacks.rx)
+ endpoint->ep_callbacks.rx(endpoint->ep_callbacks.priv,
+ skb, epid);
++ else
++ goto invalid;
+ }
+ }
+
+diff --git a/drivers/net/wireless/ath/ath9k/wmi.c b/drivers/net/wireless/ath/ath9k/wmi.c
+index f315c54bd3ac0..19345b8f7bfd5 100644
+--- a/drivers/net/wireless/ath/ath9k/wmi.c
++++ b/drivers/net/wireless/ath/ath9k/wmi.c
+@@ -341,6 +341,7 @@ int ath9k_wmi_cmd(struct wmi *wmi, enum wmi_cmd_id cmd_id,
+ if (!time_left) {
+ ath_dbg(common, WMI, "Timeout waiting for WMI command: %s\n",
+ wmi_cmd_to_name(cmd_id));
++ wmi->last_seq_id = 0;
+ mutex_unlock(&wmi->op_mutex);
+ return -ETIMEDOUT;
+ }
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/chip.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/chip.c
+index 121893bbaa1d7..8073f31be27d9 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/chip.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/chip.c
+@@ -726,17 +726,17 @@ static u32 brcmf_chip_tcm_rambase(struct brcmf_chip_priv *ci)
+ case BRCM_CC_43664_CHIP_ID:
+ case BRCM_CC_43666_CHIP_ID:
+ return 0x200000;
++ case BRCM_CC_4355_CHIP_ID:
+ case BRCM_CC_4359_CHIP_ID:
+ return (ci->pub.chiprev < 9) ? 0x180000 : 0x160000;
+ case BRCM_CC_4364_CHIP_ID:
+ case CY_CC_4373_CHIP_ID:
+ return 0x160000;
+ case CY_CC_43752_CHIP_ID:
++ case BRCM_CC_4377_CHIP_ID:
+ return 0x170000;
+ case BRCM_CC_4378_CHIP_ID:
+ return 0x352000;
+- case CY_CC_89459_CHIP_ID:
+- return ((ci->pub.chiprev < 9) ? 0x180000 : 0x160000);
+ default:
+ brcmf_err("unknown chip: %s\n", ci->pub.name);
+ break;
+@@ -1426,8 +1426,8 @@ bool brcmf_chip_sr_capable(struct brcmf_chip *pub)
+ addr = CORE_CC_REG(base, sr_control1);
+ reg = chip->ops->read32(chip->ctx, addr);
+ return reg != 0;
++ case BRCM_CC_4355_CHIP_ID:
+ case CY_CC_4373_CHIP_ID:
+- case CY_CC_89459_CHIP_ID:
+ /* explicitly check SR engine enable bit */
+ addr = CORE_CC_REG(base, sr_control0);
+ reg = chip->ops->read32(chip->ctx, addr);
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/common.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/common.c
+index 4a309e5a5707b..f235beaddddba 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/common.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/common.c
+@@ -299,6 +299,7 @@ int brcmf_c_preinit_dcmds(struct brcmf_if *ifp)
+ err);
+ goto done;
+ }
++ buf[sizeof(buf) - 1] = '\0';
+ ptr = (char *)buf;
+ strsep(&ptr, "\n");
+
+@@ -319,15 +320,17 @@ int brcmf_c_preinit_dcmds(struct brcmf_if *ifp)
+ if (err) {
+ brcmf_dbg(TRACE, "retrieving clmver failed, %d\n", err);
+ } else {
++ buf[sizeof(buf) - 1] = '\0';
+ clmver = (char *)buf;
+- /* store CLM version for adding it to revinfo debugfs file */
+- memcpy(ifp->drvr->clmver, clmver, sizeof(ifp->drvr->clmver));
+
+ /* Replace all newline/linefeed characters with space
+ * character
+ */
+ strreplace(clmver, '\n', ' ');
+
++ /* store CLM version for adding it to revinfo debugfs file */
++ memcpy(ifp->drvr->clmver, clmver, sizeof(ifp->drvr->clmver));
++
+ brcmf_dbg(INFO, "CLM version = %s\n", clmver);
+ }
+
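Both brcmfmac hunks force a NUL terminator onto the firmware-filled buffer before handing it to strsep() or strreplace(); the firmware cannot be trusted to terminate the string, and the str*() helpers would otherwise read past the end of the buffer. The guard, generalized:

        /* buf was filled by firmware with up to sizeof(buf) bytes;
         * force a terminator before any string helper walks it */
        buf[sizeof(buf) - 1] = '\0';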
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c
+index 83ea251cfcecf..f599d5f896e89 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c
+@@ -336,6 +336,7 @@ static netdev_tx_t brcmf_netdev_start_xmit(struct sk_buff *skb,
+ bphy_err(drvr, "%s: failed to expand headroom\n",
+ brcmf_ifname(ifp));
+ atomic_inc(&drvr->bus_if->stats.pktcow_failed);
++ dev_kfree_skb(skb);
+ goto done;
+ }
+ }
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/msgbuf.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/msgbuf.c
+index cec53f934940a..45fbcbdc7d9e4 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/msgbuf.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/msgbuf.c
+@@ -347,8 +347,11 @@ brcmf_msgbuf_alloc_pktid(struct device *dev,
+ count++;
+ } while (count < pktids->array_size);
+
+- if (count == pktids->array_size)
++ if (count == pktids->array_size) {
++ dma_unmap_single(dev, *physaddr, skb->len - data_offset,
++ pktids->direction);
+ return -ENOMEM;
++ }
+
+ array[*idx].data_offset = data_offset;
+ array[*idx].physaddr = *physaddr;
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
+index b67f6d0810b6c..a9b9b2dc62d4f 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
+@@ -51,18 +51,21 @@ enum brcmf_pcie_state {
+ BRCMF_FW_DEF(43602, "brcmfmac43602-pcie");
+ BRCMF_FW_DEF(4350, "brcmfmac4350-pcie");
+ BRCMF_FW_DEF(4350C, "brcmfmac4350c2-pcie");
++BRCMF_FW_CLM_DEF(4355, "brcmfmac4355-pcie");
++BRCMF_FW_CLM_DEF(4355C1, "brcmfmac4355c1-pcie");
+ BRCMF_FW_CLM_DEF(4356, "brcmfmac4356-pcie");
+ BRCMF_FW_CLM_DEF(43570, "brcmfmac43570-pcie");
+ BRCMF_FW_DEF(4358, "brcmfmac4358-pcie");
+ BRCMF_FW_DEF(4359, "brcmfmac4359-pcie");
+-BRCMF_FW_DEF(4364, "brcmfmac4364-pcie");
++BRCMF_FW_CLM_DEF(4364B2, "brcmfmac4364b2-pcie");
++BRCMF_FW_CLM_DEF(4364B3, "brcmfmac4364b3-pcie");
+ BRCMF_FW_DEF(4365B, "brcmfmac4365b-pcie");
+ BRCMF_FW_DEF(4365C, "brcmfmac4365c-pcie");
+ BRCMF_FW_DEF(4366B, "brcmfmac4366b-pcie");
+ BRCMF_FW_DEF(4366C, "brcmfmac4366c-pcie");
+ BRCMF_FW_DEF(4371, "brcmfmac4371-pcie");
++BRCMF_FW_CLM_DEF(4377B3, "brcmfmac4377b3-pcie");
+ BRCMF_FW_CLM_DEF(4378B1, "brcmfmac4378b1-pcie");
+-BRCMF_FW_DEF(4355, "brcmfmac89459-pcie");
+
+ /* firmware config files */
+ MODULE_FIRMWARE(BRCMF_FW_DEFAULT_PATH "brcmfmac*-pcie.txt");
+@@ -78,13 +81,16 @@ static const struct brcmf_firmware_mapping brcmf_pcie_fwnames[] = {
+ BRCMF_FW_ENTRY(BRCM_CC_4350_CHIP_ID, 0x000000FF, 4350C),
+ BRCMF_FW_ENTRY(BRCM_CC_4350_CHIP_ID, 0xFFFFFF00, 4350),
+ BRCMF_FW_ENTRY(BRCM_CC_43525_CHIP_ID, 0xFFFFFFF0, 4365C),
++ BRCMF_FW_ENTRY(BRCM_CC_4355_CHIP_ID, 0x000007FF, 4355),
++ BRCMF_FW_ENTRY(BRCM_CC_4355_CHIP_ID, 0xFFFFF800, 4355C1), /* rev ID 12/C2 seen */
+ BRCMF_FW_ENTRY(BRCM_CC_4356_CHIP_ID, 0xFFFFFFFF, 4356),
+ BRCMF_FW_ENTRY(BRCM_CC_43567_CHIP_ID, 0xFFFFFFFF, 43570),
+ BRCMF_FW_ENTRY(BRCM_CC_43569_CHIP_ID, 0xFFFFFFFF, 43570),
+ BRCMF_FW_ENTRY(BRCM_CC_43570_CHIP_ID, 0xFFFFFFFF, 43570),
+ BRCMF_FW_ENTRY(BRCM_CC_4358_CHIP_ID, 0xFFFFFFFF, 4358),
+ BRCMF_FW_ENTRY(BRCM_CC_4359_CHIP_ID, 0xFFFFFFFF, 4359),
+- BRCMF_FW_ENTRY(BRCM_CC_4364_CHIP_ID, 0xFFFFFFFF, 4364),
++ BRCMF_FW_ENTRY(BRCM_CC_4364_CHIP_ID, 0x0000000F, 4364B2), /* 3 */
++ BRCMF_FW_ENTRY(BRCM_CC_4364_CHIP_ID, 0xFFFFFFF0, 4364B3), /* 4 */
+ BRCMF_FW_ENTRY(BRCM_CC_4365_CHIP_ID, 0x0000000F, 4365B),
+ BRCMF_FW_ENTRY(BRCM_CC_4365_CHIP_ID, 0xFFFFFFF0, 4365C),
+ BRCMF_FW_ENTRY(BRCM_CC_4366_CHIP_ID, 0x0000000F, 4366B),
+@@ -92,8 +98,8 @@ static const struct brcmf_firmware_mapping brcmf_pcie_fwnames[] = {
+ BRCMF_FW_ENTRY(BRCM_CC_43664_CHIP_ID, 0xFFFFFFF0, 4366C),
+ BRCMF_FW_ENTRY(BRCM_CC_43666_CHIP_ID, 0xFFFFFFF0, 4366C),
+ BRCMF_FW_ENTRY(BRCM_CC_4371_CHIP_ID, 0xFFFFFFFF, 4371),
++ BRCMF_FW_ENTRY(BRCM_CC_4377_CHIP_ID, 0xFFFFFFFF, 4377B3), /* revision ID 4 */
+ BRCMF_FW_ENTRY(BRCM_CC_4378_CHIP_ID, 0xFFFFFFFF, 4378B1), /* revision ID 3 */
+- BRCMF_FW_ENTRY(CY_CC_89459_CHIP_ID, 0xFFFFFFFF, 4355),
+ };
+
+ #define BRCMF_PCIE_FW_UP_TIMEOUT 5000 /* msec */
+@@ -1994,6 +2000,17 @@ static int brcmf_pcie_read_otp(struct brcmf_pciedev_info *devinfo)
+ int ret;
+
+ switch (devinfo->ci->chip) {
++ case BRCM_CC_4355_CHIP_ID:
++ coreid = BCMA_CORE_CHIPCOMMON;
++ base = 0x8c0;
++ words = 0xb2;
++ break;
++ case BRCM_CC_4364_CHIP_ID:
++ coreid = BCMA_CORE_CHIPCOMMON;
++ base = 0x8c0;
++ words = 0x1a0;
++ break;
++ case BRCM_CC_4377_CHIP_ID:
+ case BRCM_CC_4378_CHIP_ID:
+ coreid = BCMA_CORE_GCI;
+ base = 0x1120;
+@@ -2590,6 +2607,7 @@ static const struct pci_device_id brcmf_pcie_devid_table[] = {
+ BRCMF_PCIE_DEVICE(BRCM_PCIE_4350_DEVICE_ID, WCC),
+ BRCMF_PCIE_DEVICE_SUB(0x4355, BRCM_PCIE_VENDOR_ID_BROADCOM, 0x4355, WCC),
+ BRCMF_PCIE_DEVICE(BRCM_PCIE_4354_RAW_DEVICE_ID, WCC),
++ BRCMF_PCIE_DEVICE(BRCM_PCIE_4355_DEVICE_ID, WCC),
+ BRCMF_PCIE_DEVICE(BRCM_PCIE_4356_DEVICE_ID, WCC),
+ BRCMF_PCIE_DEVICE(BRCM_PCIE_43567_DEVICE_ID, WCC),
+ BRCMF_PCIE_DEVICE(BRCM_PCIE_43570_DEVICE_ID, WCC),
+@@ -2600,7 +2618,7 @@ static const struct pci_device_id brcmf_pcie_devid_table[] = {
+ BRCMF_PCIE_DEVICE(BRCM_PCIE_43602_2G_DEVICE_ID, WCC),
+ BRCMF_PCIE_DEVICE(BRCM_PCIE_43602_5G_DEVICE_ID, WCC),
+ BRCMF_PCIE_DEVICE(BRCM_PCIE_43602_RAW_DEVICE_ID, WCC),
+- BRCMF_PCIE_DEVICE(BRCM_PCIE_4364_DEVICE_ID, BCA),
++ BRCMF_PCIE_DEVICE(BRCM_PCIE_4364_DEVICE_ID, WCC),
+ BRCMF_PCIE_DEVICE(BRCM_PCIE_4365_DEVICE_ID, BCA),
+ BRCMF_PCIE_DEVICE(BRCM_PCIE_4365_2G_DEVICE_ID, BCA),
+ BRCMF_PCIE_DEVICE(BRCM_PCIE_4365_5G_DEVICE_ID, BCA),
+@@ -2609,9 +2627,10 @@ static const struct pci_device_id brcmf_pcie_devid_table[] = {
+ BRCMF_PCIE_DEVICE(BRCM_PCIE_4366_2G_DEVICE_ID, BCA),
+ BRCMF_PCIE_DEVICE(BRCM_PCIE_4366_5G_DEVICE_ID, BCA),
+ BRCMF_PCIE_DEVICE(BRCM_PCIE_4371_DEVICE_ID, WCC),
++ BRCMF_PCIE_DEVICE(BRCM_PCIE_43596_DEVICE_ID, CYW),
++ BRCMF_PCIE_DEVICE(BRCM_PCIE_4377_DEVICE_ID, WCC),
+ BRCMF_PCIE_DEVICE(BRCM_PCIE_4378_DEVICE_ID, WCC),
+- BRCMF_PCIE_DEVICE(CY_PCIE_89459_DEVICE_ID, CYW),
+- BRCMF_PCIE_DEVICE(CY_PCIE_89459_RAW_DEVICE_ID, CYW),
++
+ { /* end: all zeroes */ }
+ };
+
+diff --git a/drivers/net/wireless/broadcom/brcm80211/include/brcm_hw_ids.h b/drivers/net/wireless/broadcom/brcm80211/include/brcm_hw_ids.h
+index f4939cf627672..896615f579522 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/include/brcm_hw_ids.h
++++ b/drivers/net/wireless/broadcom/brcm80211/include/brcm_hw_ids.h
+@@ -37,6 +37,7 @@
+ #define BRCM_CC_4350_CHIP_ID 0x4350
+ #define BRCM_CC_43525_CHIP_ID 43525
+ #define BRCM_CC_4354_CHIP_ID 0x4354
++#define BRCM_CC_4355_CHIP_ID 0x4355
+ #define BRCM_CC_4356_CHIP_ID 0x4356
+ #define BRCM_CC_43566_CHIP_ID 43566
+ #define BRCM_CC_43567_CHIP_ID 43567
+@@ -51,12 +52,12 @@
+ #define BRCM_CC_43664_CHIP_ID 43664
+ #define BRCM_CC_43666_CHIP_ID 43666
+ #define BRCM_CC_4371_CHIP_ID 0x4371
++#define BRCM_CC_4377_CHIP_ID 0x4377
+ #define BRCM_CC_4378_CHIP_ID 0x4378
+ #define CY_CC_4373_CHIP_ID 0x4373
+ #define CY_CC_43012_CHIP_ID 43012
+ #define CY_CC_43439_CHIP_ID 43439
+ #define CY_CC_43752_CHIP_ID 43752
+-#define CY_CC_89459_CHIP_ID 0x4355
+
+ /* USB Device IDs */
+ #define BRCM_USB_43143_DEVICE_ID 0xbd1e
+@@ -72,6 +73,7 @@
+ #define BRCM_PCIE_4350_DEVICE_ID 0x43a3
+ #define BRCM_PCIE_4354_DEVICE_ID 0x43df
+ #define BRCM_PCIE_4354_RAW_DEVICE_ID 0x4354
++#define BRCM_PCIE_4355_DEVICE_ID 0x43dc
+ #define BRCM_PCIE_4356_DEVICE_ID 0x43ec
+ #define BRCM_PCIE_43567_DEVICE_ID 0x43d3
+ #define BRCM_PCIE_43570_DEVICE_ID 0x43d9
+@@ -90,9 +92,9 @@
+ #define BRCM_PCIE_4366_2G_DEVICE_ID 0x43c4
+ #define BRCM_PCIE_4366_5G_DEVICE_ID 0x43c5
+ #define BRCM_PCIE_4371_DEVICE_ID 0x440d
++#define BRCM_PCIE_43596_DEVICE_ID 0x4415
++#define BRCM_PCIE_4377_DEVICE_ID 0x4488
+ #define BRCM_PCIE_4378_DEVICE_ID 0x4425
+-#define CY_PCIE_89459_DEVICE_ID 0x4415
+-#define CY_PCIE_89459_RAW_DEVICE_ID 0x4355
+
+ /* brcmsmac IDs */
+ #define BCM4313_D11N2G_ID 0x4727 /* 4313 802.11n 2.4G device */
+diff --git a/drivers/net/wireless/intel/ipw2x00/ipw2200.c b/drivers/net/wireless/intel/ipw2x00/ipw2200.c
+index ca802af8cddcc..d382f20173256 100644
+--- a/drivers/net/wireless/intel/ipw2x00/ipw2200.c
++++ b/drivers/net/wireless/intel/ipw2x00/ipw2200.c
+@@ -3427,7 +3427,7 @@ static void ipw_rx_queue_reset(struct ipw_priv *priv,
+ dma_unmap_single(&priv->pci_dev->dev,
+ rxq->pool[i].dma_addr,
+ IPW_RX_BUF_SIZE, DMA_FROM_DEVICE);
+- dev_kfree_skb(rxq->pool[i].skb);
++ dev_kfree_skb_irq(rxq->pool[i].skb);
+ rxq->pool[i].skb = NULL;
+ }
+ list_add_tail(&rxq->pool[i].list, &rxq->rx_used);
+@@ -11383,9 +11383,14 @@ static int ipw_wdev_init(struct net_device *dev)
+ set_wiphy_dev(wdev->wiphy, &priv->pci_dev->dev);
+
+ /* With that information in place, we can now register the wiphy... */
+- if (wiphy_register(wdev->wiphy))
+- rc = -EIO;
++ rc = wiphy_register(wdev->wiphy);
++ if (rc)
++ goto out;
++
++ return 0;
+ out:
++ kfree(priv->ieee->a_band.channels);
++ kfree(priv->ieee->bg_band.channels);
+ return rc;
+ }
+
+diff --git a/drivers/net/wireless/intel/iwlegacy/3945-mac.c b/drivers/net/wireless/intel/iwlegacy/3945-mac.c
+index d7e99d50b287b..9eaf5ec133f9e 100644
+--- a/drivers/net/wireless/intel/iwlegacy/3945-mac.c
++++ b/drivers/net/wireless/intel/iwlegacy/3945-mac.c
+@@ -3372,10 +3372,12 @@ static DEVICE_ATTR(dump_errors, 0200, NULL, il3945_dump_error_log);
+ *
+ *****************************************************************************/
+
+-static void
++static int
+ il3945_setup_deferred_work(struct il_priv *il)
+ {
+ il->workqueue = create_singlethread_workqueue(DRV_NAME);
++ if (!il->workqueue)
++ return -ENOMEM;
+
+ init_waitqueue_head(&il->wait_command_queue);
+
+@@ -3392,6 +3394,8 @@ il3945_setup_deferred_work(struct il_priv *il)
+ timer_setup(&il->watchdog, il_bg_watchdog, 0);
+
+ tasklet_setup(&il->irq_tasklet, il3945_irq_tasklet);
++
++ return 0;
+ }
+
+ static void
+@@ -3712,7 +3716,10 @@ il3945_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ }
+
+ il_set_rxon_channel(il, &il->bands[NL80211_BAND_2GHZ].channels[5]);
+- il3945_setup_deferred_work(il);
++ err = il3945_setup_deferred_work(il);
++ if (err)
++ goto out_remove_sysfs;
++
+ il3945_setup_handlers(il);
+ il_power_initialize(il);
+
+@@ -3724,7 +3731,7 @@ il3945_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+
+ err = il3945_setup_mac(il);
+ if (err)
+- goto out_remove_sysfs;
++ goto out_destroy_workqueue;
+
+ il_dbgfs_register(il, DRV_NAME);
+
+@@ -3733,9 +3740,10 @@ il3945_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+
+ return 0;
+
+-out_remove_sysfs:
++out_destroy_workqueue:
+ destroy_workqueue(il->workqueue);
+ il->workqueue = NULL;
++out_remove_sysfs:
+ sysfs_remove_group(&pdev->dev.kobj, &il3945_attribute_group);
+ out_release_irq:
+ free_irq(il->pci_dev->irq, il);
+diff --git a/drivers/net/wireless/intel/iwlegacy/4965-mac.c b/drivers/net/wireless/intel/iwlegacy/4965-mac.c
+index 721b4042b4bf7..4d3c544ff2e66 100644
+--- a/drivers/net/wireless/intel/iwlegacy/4965-mac.c
++++ b/drivers/net/wireless/intel/iwlegacy/4965-mac.c
+@@ -6211,10 +6211,12 @@ out:
+ mutex_unlock(&il->mutex);
+ }
+
+-static void
++static int
+ il4965_setup_deferred_work(struct il_priv *il)
+ {
+ il->workqueue = create_singlethread_workqueue(DRV_NAME);
++ if (!il->workqueue)
++ return -ENOMEM;
+
+ init_waitqueue_head(&il->wait_command_queue);
+
+@@ -6233,6 +6235,8 @@ il4965_setup_deferred_work(struct il_priv *il)
+ timer_setup(&il->watchdog, il_bg_watchdog, 0);
+
+ tasklet_setup(&il->irq_tasklet, il4965_irq_tasklet);
++
++ return 0;
+ }
+
+ static void
+@@ -6618,7 +6622,10 @@ il4965_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ goto out_disable_msi;
+ }
+
+- il4965_setup_deferred_work(il);
++ err = il4965_setup_deferred_work(il);
++ if (err)
++ goto out_free_irq;
++
+ il4965_setup_handlers(il);
+
+ /*********************************************
+@@ -6656,6 +6663,7 @@ il4965_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ out_destroy_workqueue:
+ destroy_workqueue(il->workqueue);
+ il->workqueue = NULL;
++out_free_irq:
+ free_irq(il->pci_dev->irq, il);
+ out_disable_msi:
+ pci_disable_msi(il->pci_dev);
+diff --git a/drivers/net/wireless/intel/iwlegacy/common.c b/drivers/net/wireless/intel/iwlegacy/common.c
+index 341c17fe2af4d..96002121bb8b2 100644
+--- a/drivers/net/wireless/intel/iwlegacy/common.c
++++ b/drivers/net/wireless/intel/iwlegacy/common.c
+@@ -5174,7 +5174,7 @@ il_mac_reset_tsf(struct ieee80211_hw *hw, struct ieee80211_vif *vif)
+ memset(&il->current_ht_config, 0, sizeof(struct il_ht_config));
+
+ /* new association get rid of ibss beacon skb */
+- dev_kfree_skb(il->beacon_skb);
++ dev_consume_skb_irq(il->beacon_skb);
+ il->beacon_skb = NULL;
+ il->timestamp = 0;
+
+@@ -5293,7 +5293,7 @@ il_beacon_update(struct ieee80211_hw *hw, struct ieee80211_vif *vif)
+ }
+
+ spin_lock_irqsave(&il->lock, flags);
+- dev_kfree_skb(il->beacon_skb);
++ dev_consume_skb_irq(il->beacon_skb);
+ il->beacon_skb = skb;
+
+ timestamp = ((struct ieee80211_mgmt *)skb->data)->u.beacon.timestamp;
+diff --git a/drivers/net/wireless/intel/iwlwifi/mei/main.c b/drivers/net/wireless/intel/iwlwifi/mei/main.c
+index f9d11935ed97e..67dfb77fedf79 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mei/main.c
++++ b/drivers/net/wireless/intel/iwlwifi/mei/main.c
+@@ -788,7 +788,7 @@ static void iwl_mei_handle_amt_state(struct mei_cl_device *cldev,
+ if (mei->amt_enabled)
+ iwl_mei_set_init_conf(mei);
+ else if (iwl_mei_cache.ops)
+- iwl_mei_cache.ops->rfkill(iwl_mei_cache.priv, false, false);
++ iwl_mei_cache.ops->rfkill(iwl_mei_cache.priv, false);
+
+ schedule_work(&mei->netdev_work);
+
+@@ -829,7 +829,7 @@ static void iwl_mei_handle_csme_taking_ownership(struct mei_cl_device *cldev,
+ */
+ mei->csme_taking_ownership = true;
+
+- iwl_mei_cache.ops->rfkill(iwl_mei_cache.priv, true, true);
++ iwl_mei_cache.ops->rfkill(iwl_mei_cache.priv, true);
+ } else {
+ iwl_mei_send_sap_msg(cldev,
+ SAP_MSG_NOTIF_CSME_OWNERSHIP_CONFIRMED);
+@@ -1774,7 +1774,7 @@ int iwl_mei_register(void *priv, const struct iwl_mei_ops *ops)
+ if (mei->amt_enabled)
+ iwl_mei_send_sap_msg(mei->cldev,
+ SAP_MSG_NOTIF_WIFIDR_UP);
+- ops->rfkill(priv, mei->link_prot_state, false);
++ ops->rfkill(priv, mei->link_prot_state);
+ }
+ }
+ ret = 0;
+diff --git a/drivers/net/wireless/intersil/orinoco/hw.c b/drivers/net/wireless/intersil/orinoco/hw.c
+index 0aea35c9c11c7..4fcca08e50de2 100644
+--- a/drivers/net/wireless/intersil/orinoco/hw.c
++++ b/drivers/net/wireless/intersil/orinoco/hw.c
+@@ -931,6 +931,8 @@ int __orinoco_hw_setup_enc(struct orinoco_private *priv)
+ err = hermes_write_wordrec(hw, USER_BAP,
+ HERMES_RID_CNFAUTHENTICATION_AGERE,
+ auth_flag);
++ if (err)
++ return err;
+ }
+ err = hermes_write_wordrec(hw, USER_BAP,
+ HERMES_RID_CNFWEPENABLED_AGERE,
+diff --git a/drivers/net/wireless/marvell/libertas/cmdresp.c b/drivers/net/wireless/marvell/libertas/cmdresp.c
+index cb515c5584c1f..74cb7551f4275 100644
+--- a/drivers/net/wireless/marvell/libertas/cmdresp.c
++++ b/drivers/net/wireless/marvell/libertas/cmdresp.c
+@@ -48,7 +48,7 @@ void lbs_mac_event_disconnected(struct lbs_private *priv,
+
+ /* Free Tx and Rx packets */
+ spin_lock_irqsave(&priv->driver_lock, flags);
+- kfree_skb(priv->currenttxskb);
++ dev_kfree_skb_irq(priv->currenttxskb);
+ priv->currenttxskb = NULL;
+ priv->tx_pending_len = 0;
+ spin_unlock_irqrestore(&priv->driver_lock, flags);
+diff --git a/drivers/net/wireless/marvell/libertas/if_usb.c b/drivers/net/wireless/marvell/libertas/if_usb.c
+index 32fdc4150b605..2240b4db8c036 100644
+--- a/drivers/net/wireless/marvell/libertas/if_usb.c
++++ b/drivers/net/wireless/marvell/libertas/if_usb.c
+@@ -637,7 +637,7 @@ static inline void process_cmdrequest(int recvlength, uint8_t *recvbuff,
+ priv->resp_len[i] = (recvlength - MESSAGE_HEADER_LEN);
+ memcpy(priv->resp_buf[i], recvbuff + MESSAGE_HEADER_LEN,
+ priv->resp_len[i]);
+- kfree_skb(skb);
++ dev_kfree_skb_irq(skb);
+ lbs_notify_command_response(priv, i);
+
+ spin_unlock_irqrestore(&priv->driver_lock, flags);
+diff --git a/drivers/net/wireless/marvell/libertas/main.c b/drivers/net/wireless/marvell/libertas/main.c
+index 8f5220cee1123..78e8b5aecec0e 100644
+--- a/drivers/net/wireless/marvell/libertas/main.c
++++ b/drivers/net/wireless/marvell/libertas/main.c
+@@ -216,7 +216,7 @@ int lbs_stop_iface(struct lbs_private *priv)
+
+ spin_lock_irqsave(&priv->driver_lock, flags);
+ priv->iface_running = false;
+- kfree_skb(priv->currenttxskb);
++ dev_kfree_skb_irq(priv->currenttxskb);
+ priv->currenttxskb = NULL;
+ priv->tx_pending_len = 0;
+ spin_unlock_irqrestore(&priv->driver_lock, flags);
+@@ -869,6 +869,7 @@ static int lbs_init_adapter(struct lbs_private *priv)
+ ret = kfifo_alloc(&priv->event_fifo, sizeof(u32) * 16, GFP_KERNEL);
+ if (ret) {
+ pr_err("Out of memory allocating event FIFO buffer\n");
++ lbs_free_cmd_buffer(priv);
+ goto out;
+ }
+
+diff --git a/drivers/net/wireless/marvell/libertas_tf/if_usb.c b/drivers/net/wireless/marvell/libertas_tf/if_usb.c
+index 75b5319d033f3..1750f5e93de21 100644
+--- a/drivers/net/wireless/marvell/libertas_tf/if_usb.c
++++ b/drivers/net/wireless/marvell/libertas_tf/if_usb.c
+@@ -613,7 +613,7 @@ static inline void process_cmdrequest(int recvlength, uint8_t *recvbuff,
+ spin_lock_irqsave(&priv->driver_lock, flags);
+ memcpy(priv->cmd_resp_buff, recvbuff + MESSAGE_HEADER_LEN,
+ recvlength - MESSAGE_HEADER_LEN);
+- kfree_skb(skb);
++ dev_kfree_skb_irq(skb);
+ lbtf_cmd_response_rx(priv);
+ spin_unlock_irqrestore(&priv->driver_lock, flags);
+ }
+diff --git a/drivers/net/wireless/marvell/mwifiex/11n.c b/drivers/net/wireless/marvell/mwifiex/11n.c
+index 4af57e6d43932..90e4011008981 100644
+--- a/drivers/net/wireless/marvell/mwifiex/11n.c
++++ b/drivers/net/wireless/marvell/mwifiex/11n.c
+@@ -878,7 +878,7 @@ mwifiex_send_delba_txbastream_tbl(struct mwifiex_private *priv, u8 tid)
+ */
+ void mwifiex_update_ampdu_txwinsize(struct mwifiex_adapter *adapter)
+ {
+- u8 i;
++ u8 i, j;
+ u32 tx_win_size;
+ struct mwifiex_private *priv;
+
+@@ -909,8 +909,8 @@ void mwifiex_update_ampdu_txwinsize(struct mwifiex_adapter *adapter)
+ if (tx_win_size != priv->add_ba_param.tx_win_size) {
+ if (!priv->media_connected)
+ continue;
+- for (i = 0; i < MAX_NUM_TID; i++)
+- mwifiex_send_delba_txbastream_tbl(priv, i);
++ for (j = 0; j < MAX_NUM_TID; j++)
++ mwifiex_send_delba_txbastream_tbl(priv, j);
+ }
+ }
+ }
+diff --git a/drivers/net/wireless/mediatek/mt76/dma.c b/drivers/net/wireless/mediatek/mt76/dma.c
+index 06161815c180e..d147dc698c9db 100644
+--- a/drivers/net/wireless/mediatek/mt76/dma.c
++++ b/drivers/net/wireless/mediatek/mt76/dma.c
+@@ -737,6 +737,7 @@ mt76_dma_rx_cleanup(struct mt76_dev *dev, struct mt76_queue *q)
+ return;
+
+ spin_lock_bh(&q->lock);
++
+ do {
+ buf = mt76_dma_dequeue(dev, q, true, NULL, NULL, &more, NULL);
+ if (!buf)
+@@ -744,6 +745,12 @@ mt76_dma_rx_cleanup(struct mt76_dev *dev, struct mt76_queue *q)
+
+ skb_free_frag(buf);
+ } while (1);
++
++ if (q->rx_head) {
++ dev_kfree_skb(q->rx_head);
++ q->rx_head = NULL;
++ }
++
+ spin_unlock_bh(&q->lock);
+
+ if (!q->rx_page.va)
+@@ -769,12 +776,6 @@ mt76_dma_rx_reset(struct mt76_dev *dev, enum mt76_rxq_id qid)
+ mt76_dma_rx_cleanup(dev, q);
+ mt76_dma_sync_idx(dev, q);
+ mt76_dma_rx_fill(dev, q);
+-
+- if (!q->rx_head)
+- return;
+-
+- dev_kfree_skb(q->rx_head);
+- q->rx_head = NULL;
+ }
+
+ static void
+@@ -975,8 +976,7 @@ void mt76_dma_cleanup(struct mt76_dev *dev)
+ struct mt76_queue *q = &dev->q_rx[i];
+
+ netif_napi_del(&dev->napi[i]);
+- if (FIELD_GET(MT_QFLAG_WED_TYPE, q->flags))
+- mt76_dma_rx_cleanup(dev, q);
++ mt76_dma_rx_cleanup(dev, q);
+ }
+
+ mt76_free_pending_txwi(dev);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76_connac.h b/drivers/net/wireless/mediatek/mt76/mt76_connac.h
+index 8ba883b03e500..2ee9a3c8e25c4 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76_connac.h
++++ b/drivers/net/wireless/mediatek/mt76/mt76_connac.h
+@@ -370,6 +370,9 @@ void mt76_connac2_mac_write_txwi(struct mt76_dev *dev, __le32 *txwi,
+ struct sk_buff *skb, struct mt76_wcid *wcid,
+ struct ieee80211_key_conf *key, int pid,
+ enum mt76_txq_id qid, u32 changed);
++u16 mt76_connac2_mac_tx_rate_val(struct mt76_phy *mphy,
++ struct ieee80211_vif *vif,
++ bool beacon, bool mcast);
+ bool mt76_connac2_mac_fill_txs(struct mt76_dev *dev, struct mt76_wcid *wcid,
+ __le32 *txs_data);
+ bool mt76_connac2_mac_add_txs_skb(struct mt76_dev *dev, struct mt76_wcid *wcid,
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76_connac_mac.c b/drivers/net/wireless/mediatek/mt76/mt76_connac_mac.c
+index fd60123fb2840..aed4ee95fb2ec 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76_connac_mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76_connac_mac.c
+@@ -267,9 +267,9 @@ int mt76_connac_init_tx_queues(struct mt76_phy *phy, int idx, int n_desc,
+ }
+ EXPORT_SYMBOL_GPL(mt76_connac_init_tx_queues);
+
+-static u16
+-mt76_connac2_mac_tx_rate_val(struct mt76_phy *mphy, struct ieee80211_vif *vif,
+- bool beacon, bool mcast)
++u16 mt76_connac2_mac_tx_rate_val(struct mt76_phy *mphy,
++ struct ieee80211_vif *vif,
++ bool beacon, bool mcast)
+ {
+ u8 mode = 0, band = mphy->chandef.chan->band;
+ int rateidx = 0, mcast_rate;
+@@ -319,6 +319,7 @@ out:
+ return FIELD_PREP(MT_TX_RATE_IDX, rateidx) |
+ FIELD_PREP(MT_TX_RATE_MODE, mode);
+ }
++EXPORT_SYMBOL_GPL(mt76_connac2_mac_tx_rate_val);
+
+ static void
+ mt76_connac2_mac_write_txwi_8023(__le32 *txwi, struct sk_buff *skb,
+@@ -930,7 +931,7 @@ int mt76_connac2_reverse_frag0_hdr_trans(struct ieee80211_vif *vif,
+ ether_addr_copy(hdr.addr4, eth_hdr->h_source);
+ break;
+ default:
+- break;
++ return -EINVAL;
+ }
+
+ skb_pull(skb, hdr_offset + sizeof(struct ethhdr) - 2);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h
+index f1e942b9a887b..82fdf6d794bcf 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h
++++ b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h
+@@ -1198,7 +1198,7 @@ enum {
+ MCU_UNI_CMD_REPT_MUAR = 0x09,
+ MCU_UNI_CMD_WSYS_CONFIG = 0x0b,
+ MCU_UNI_CMD_REG_ACCESS = 0x0d,
+- MCU_UNI_CMD_POWER_CREL = 0x0f,
++ MCU_UNI_CMD_POWER_CTRL = 0x0f,
+ MCU_UNI_CMD_RX_HDR_TRANS = 0x12,
+ MCU_UNI_CMD_SER = 0x13,
+ MCU_UNI_CMD_TWT = 0x14,
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x0/phy.c b/drivers/net/wireless/mediatek/mt76/mt76x0/phy.c
+index 6c6c8ada7943b..d543ef3de65be 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x0/phy.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76x0/phy.c
+@@ -642,7 +642,12 @@ mt76x0_phy_get_target_power(struct mt76x02_dev *dev, u8 tx_mode,
+ if (tx_rate > 9)
+ return -EINVAL;
+
+- *target_power = cur_power + dev->rate_power.vht[tx_rate];
++ *target_power = cur_power;
++ if (tx_rate > 7)
++ *target_power += dev->rate_power.vht[tx_rate - 8];
++ else
++ *target_power += dev->rate_power.ht[tx_rate];
++
+ *target_pa_power = mt76x0_phy_get_rf_pa_mode(dev, 1, tx_rate);
+ break;
+ default:
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/debugfs.c b/drivers/net/wireless/mediatek/mt76/mt7915/debugfs.c
+index fb46c2c1784f2..5a46813a59eac 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/debugfs.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/debugfs.c
+@@ -811,7 +811,7 @@ mt7915_hw_queue_read(struct seq_file *s, u32 size,
+ if (val & BIT(map[i].index))
+ continue;
+
+- ctrl = BIT(31) | (map[i].pid << 10) | (map[i].qid << 24);
++ ctrl = BIT(31) | (map[i].pid << 10) | ((u32)map[i].qid << 24);
+ mt76_wr(dev, MT_FL_Q0_CTRL, ctrl);
+
+ head = mt76_get_field(dev, MT_FL_Q2_CTRL,
+@@ -996,7 +996,7 @@ mt7915_rate_txpower_get(struct file *file, char __user *user_buf,
+
+ ret = mt7915_mcu_get_txpower_sku(phy, txpwr, sizeof(txpwr));
+ if (ret)
+- return ret;
++ goto out;
+
+ /* Txpower propagation path: TMAC -> TXV -> BBP */
+ len += scnprintf(buf + len, sz - len,
+@@ -1047,6 +1047,8 @@ mt7915_rate_txpower_get(struct file *file, char __user *user_buf,
+ mt76_get_field(dev, reg, MT_WF_PHY_TPC_POWER));
+
+ ret = simple_read_from_buffer(user_buf, count, ppos, buf, len);
++
++out:
+ kfree(buf);
+ return ret;
+ }
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/eeprom.c b/drivers/net/wireless/mediatek/mt76/mt7915/eeprom.c
+index 59069fb864147..24efa280dd868 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/eeprom.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/eeprom.c
+@@ -110,18 +110,23 @@ static int mt7915_eeprom_load(struct mt7915_dev *dev)
+ } else {
+ u8 free_block_num;
+ u32 block_num, i;
++ u32 eeprom_blk_size = MT7915_EEPROM_BLOCK_SIZE;
+
+- mt7915_mcu_get_eeprom_free_block(dev, &free_block_num);
+- /* efuse info not enough */
++ ret = mt7915_mcu_get_eeprom_free_block(dev, &free_block_num);
++ if (ret < 0)
++ return ret;
++
++ /* efuse info isn't enough */
+ if (free_block_num >= 29)
+ return -EINVAL;
+
+ /* read eeprom data from efuse */
+- block_num = DIV_ROUND_UP(eeprom_size,
+- MT7915_EEPROM_BLOCK_SIZE);
+- for (i = 0; i < block_num; i++)
+- mt7915_mcu_get_eeprom(dev,
+- i * MT7915_EEPROM_BLOCK_SIZE);
++ block_num = DIV_ROUND_UP(eeprom_size, eeprom_blk_size);
++ for (i = 0; i < block_num; i++) {
++ ret = mt7915_mcu_get_eeprom(dev, i * eeprom_blk_size);
++ if (ret < 0)
++ return ret;
++ }
+ }
+
+ return mt7915_check_eeprom(dev);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/init.c b/drivers/net/wireless/mediatek/mt76/mt7915/init.c
+index c810c31fbd6e9..a80ae31e7abff 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/init.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/init.c
+@@ -83,9 +83,24 @@ static ssize_t mt7915_thermal_temp_store(struct device *dev,
+
+ mutex_lock(&phy->dev->mt76.mutex);
+ val = clamp_val(DIV_ROUND_CLOSEST(val, 1000), 60, 130);
++
++ if ((i - 1 == MT7915_CRIT_TEMP_IDX &&
++ val > phy->throttle_temp[MT7915_MAX_TEMP_IDX]) ||
++ (i - 1 == MT7915_MAX_TEMP_IDX &&
++ val < phy->throttle_temp[MT7915_CRIT_TEMP_IDX])) {
++ mutex_unlock(&phy->dev->mt76.mutex);
++ dev_err(phy->dev->mt76.dev,
++ "temp1_max shall be greater than temp1_crit.");
++ return -EINVAL;
++ }
++
+ phy->throttle_temp[i - 1] = val;
+ mutex_unlock(&phy->dev->mt76.mutex);
+
++ ret = mt7915_mcu_set_thermal_protect(phy);
++ if (ret)
++ return ret;
++
+ return count;
+ }
+
+@@ -134,9 +148,6 @@ mt7915_thermal_set_cur_throttle_state(struct thermal_cooling_device *cdev,
+ if (state > MT7915_CDEV_THROTTLE_MAX)
+ return -EINVAL;
+
+- if (phy->throttle_temp[0] > phy->throttle_temp[1])
+- return 0;
+-
+ if (state == phy->cdev_state)
+ return 0;
+
+@@ -198,11 +209,10 @@ static int mt7915_thermal_init(struct mt7915_phy *phy)
+ return PTR_ERR(hwmon);
+
+ /* initialize critical/maximum high temperature */
+- phy->throttle_temp[0] = 110;
+- phy->throttle_temp[1] = 120;
++ phy->throttle_temp[MT7915_CRIT_TEMP_IDX] = 110;
++ phy->throttle_temp[MT7915_MAX_TEMP_IDX] = 120;
+
+- return mt7915_mcu_set_thermal_throttling(phy,
+- MT7915_THERMAL_THROTTLE_MAX);
++ return 0;
+ }
+
+ static void mt7915_led_set_config(struct led_classdev *led_cdev,
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mac.c b/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
+index f0d5a3603902a..1a6def77db571 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
+@@ -1061,9 +1061,6 @@ static void mt7915_mac_add_txs(struct mt7915_dev *dev, void *data)
+ u16 wcidx;
+ u8 pid;
+
+- if (le32_get_bits(txs_data[0], MT_TXS0_TXS_FORMAT) > 1)
+- return;
+-
+ wcidx = le32_get_bits(txs_data[2], MT_TXS2_WCID);
+ pid = le32_get_bits(txs_data[3], MT_TXS3_PID);
+
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/main.c b/drivers/net/wireless/mediatek/mt76/mt7915/main.c
+index 0511d6a505b09..7589af4b3dab7 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/main.c
+@@ -57,6 +57,17 @@ int mt7915_run(struct ieee80211_hw *hw)
+ mt7915_mac_enable_nf(dev, phy->mt76->band_idx);
+ }
+
++ ret = mt7915_mcu_set_thermal_throttling(phy,
++ MT7915_THERMAL_THROTTLE_MAX);
++
++ if (ret)
++ goto out;
++
++ ret = mt7915_mcu_set_thermal_protect(phy);
++
++ if (ret)
++ goto out;
++
+ ret = mt76_connac_mcu_set_rts_thresh(&dev->mt76, 0x92b,
+ phy->mt76->band_idx);
+ if (ret)
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
+index b2652de082baa..f566ba77b2ed4 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
+@@ -2349,13 +2349,14 @@ void mt7915_mcu_exit(struct mt7915_dev *dev)
+ __mt76_mcu_restart(&dev->mt76);
+ if (mt7915_firmware_state(dev, false)) {
+ dev_err(dev->mt76.dev, "Failed to exit mcu\n");
+- return;
++ goto out;
+ }
+
+ mt76_wr(dev, MT_TOP_LPCR_HOST_BAND(0), MT_TOP_LPCR_HOST_FW_OWN);
+ if (dev->hif2)
+ mt76_wr(dev, MT_TOP_LPCR_HOST_BAND(1),
+ MT_TOP_LPCR_HOST_FW_OWN);
++out:
+ skb_queue_purge(&dev->mt76.mcu.res_q);
+ }
+
+@@ -2792,8 +2793,9 @@ int mt7915_mcu_get_eeprom(struct mt7915_dev *dev, u32 offset)
+ int ret;
+ u8 *buf;
+
+- ret = mt76_mcu_send_and_get_msg(&dev->mt76, MCU_EXT_QUERY(EFUSE_ACCESS), &req,
+- sizeof(req), true, &skb);
++ ret = mt76_mcu_send_and_get_msg(&dev->mt76,
++ MCU_EXT_QUERY(EFUSE_ACCESS),
++ &req, sizeof(req), true, &skb);
+ if (ret)
+ return ret;
+
+@@ -2818,8 +2820,9 @@ int mt7915_mcu_get_eeprom_free_block(struct mt7915_dev *dev, u8 *block_num)
+ struct sk_buff *skb;
+ int ret;
+
+- ret = mt76_mcu_send_and_get_msg(&dev->mt76, MCU_EXT_QUERY(EFUSE_FREE_BLOCK), &req,
+- sizeof(req), true, &skb);
++ ret = mt76_mcu_send_and_get_msg(&dev->mt76,
++ MCU_EXT_QUERY(EFUSE_FREE_BLOCK),
++ &req, sizeof(req), true, &skb);
+ if (ret)
+ return ret;
+
+@@ -3058,6 +3061,29 @@ int mt7915_mcu_get_temperature(struct mt7915_phy *phy)
+ }
+
+ int mt7915_mcu_set_thermal_throttling(struct mt7915_phy *phy, u8 state)
++{
++ struct mt7915_dev *dev = phy->dev;
++ struct mt7915_mcu_thermal_ctrl req = {
++ .band_idx = phy->mt76->band_idx,
++ .ctrl_id = THERMAL_PROTECT_DUTY_CONFIG,
++ };
++ int level, ret;
++
++ /* set duty cycle and level */
++ for (level = 0; level < 4; level++) {
++ req.duty.duty_level = level;
++ req.duty.duty_cycle = state;
++ state /= 2;
++
++ ret = mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD(THERMAL_PROT),
++ &req, sizeof(req), false);
++ if (ret)
++ return ret;
++ }
++ return 0;
++}
++
++int mt7915_mcu_set_thermal_protect(struct mt7915_phy *phy)
+ {
+ struct mt7915_dev *dev = phy->dev;
+ struct {
+@@ -3070,29 +3096,18 @@ int mt7915_mcu_set_thermal_throttling(struct mt7915_phy *phy, u8 state)
+ } __packed req = {
+ .ctrl = {
+ .band_idx = phy->mt76->band_idx,
++ .type.protect_type = 1,
++ .type.trigger_type = 1,
+ },
+ };
+- int level;
+-
+- if (!state) {
+- req.ctrl.ctrl_id = THERMAL_PROTECT_DISABLE;
+- goto out;
+- }
+-
+- /* set duty cycle and level */
+- for (level = 0; level < 4; level++) {
+- int ret;
++ int ret;
+
+- req.ctrl.ctrl_id = THERMAL_PROTECT_DUTY_CONFIG;
+- req.ctrl.duty.duty_level = level;
+- req.ctrl.duty.duty_cycle = state;
+- state /= 2;
++ req.ctrl.ctrl_id = THERMAL_PROTECT_DISABLE;
++ ret = mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD(THERMAL_PROT),
++ &req, sizeof(req.ctrl), false);
+
+- ret = mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD(THERMAL_PROT),
+- &req, sizeof(req.ctrl), false);
+- if (ret)
+- return ret;
+- }
++ if (ret)
++ return ret;
+
+ /* set high-temperature trigger threshold */
+ req.ctrl.ctrl_id = THERMAL_PROTECT_ENABLE;
+@@ -3101,10 +3116,6 @@ int mt7915_mcu_set_thermal_throttling(struct mt7915_phy *phy, u8 state)
+ req.trigger_temp = cpu_to_le32(phy->throttle_temp[1]);
+ req.sustain_time = cpu_to_le16(10);
+
+-out:
+- req.ctrl.type.protect_type = 1;
+- req.ctrl.type.trigger_type = 1;
+-
+ return mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD(THERMAL_PROT),
+ &req, sizeof(req), false);
+ }
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mmio.c b/drivers/net/wireless/mediatek/mt76/mt7915/mmio.c
+index 8388e2a658535..afa558c9a9302 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mmio.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mmio.c
+@@ -495,7 +495,7 @@ static u32 __mt7915_reg_addr(struct mt7915_dev *dev, u32 addr)
+
+ if (dev_is_pci(dev->mt76.dev) &&
+ ((addr >= MT_CBTOP1_PHY_START && addr <= MT_CBTOP1_PHY_END) ||
+- (addr >= MT_CBTOP2_PHY_START && addr <= MT_CBTOP2_PHY_END)))
++ addr >= MT_CBTOP2_PHY_START))
+ return mt7915_reg_map_l1(dev, addr);
+
+ /* CONN_INFRA: convert to physical addr and use layer 1 remap */
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mt7915.h b/drivers/net/wireless/mediatek/mt76/mt7915/mt7915.h
+index 6351feba6bdf9..e58650bbbd14a 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mt7915.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mt7915.h
+@@ -70,6 +70,9 @@
+
+ #define MT7915_WED_RX_TOKEN_SIZE 12288
+
++#define MT7915_CRIT_TEMP_IDX 0
++#define MT7915_MAX_TEMP_IDX 1
++
+ struct mt7915_vif;
+ struct mt7915_sta;
+ struct mt7915_dfs_pulse;
+@@ -543,6 +546,7 @@ int mt7915_mcu_apply_tx_dpd(struct mt7915_phy *phy);
+ int mt7915_mcu_get_chan_mib_info(struct mt7915_phy *phy, bool chan_switch);
+ int mt7915_mcu_get_temperature(struct mt7915_phy *phy);
+ int mt7915_mcu_set_thermal_throttling(struct mt7915_phy *phy, u8 state);
++int mt7915_mcu_set_thermal_protect(struct mt7915_phy *phy);
+ int mt7915_mcu_get_rx_rate(struct mt7915_phy *phy, struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta, struct rate_info *rate);
+ int mt7915_mcu_rdd_background_enable(struct mt7915_phy *phy,
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/regs.h b/drivers/net/wireless/mediatek/mt76/mt7915/regs.h
+index aca1b2f1e9e3b..7e0d86366c778 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/regs.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/regs.h
+@@ -803,7 +803,6 @@ enum offs_rev {
+ #define MT_CBTOP1_PHY_START 0x70000000
+ #define MT_CBTOP1_PHY_END __REG(CBTOP1_PHY_END)
+ #define MT_CBTOP2_PHY_START 0xf0000000
+-#define MT_CBTOP2_PHY_END 0xffffffff
+ #define MT_INFRA_MCU_START 0x7c000000
+ #define MT_INFRA_MCU_END __REG(INFRA_MCU_ADDR_END)
+ #define MT_CONN_INFRA_OFFSET(p) ((p) - MT_INFRA_BASE)
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/soc.c b/drivers/net/wireless/mediatek/mt76/mt7915/soc.c
+index c06c56a0270d6..686c9bbd59293 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/soc.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/soc.c
+@@ -278,6 +278,7 @@ static int mt7986_wmac_coninfra_setup(struct mt7915_dev *dev)
+ return -EINVAL;
+
+ rmem = of_reserved_mem_lookup(np);
++ of_node_put(np);
+ if (!rmem)
+ return -EINVAL;
+
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/acpi_sar.c b/drivers/net/wireless/mediatek/mt76/mt7921/acpi_sar.c
+index 47e034a9b0037..ed9241d4aa641 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/acpi_sar.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/acpi_sar.c
+@@ -33,14 +33,17 @@ mt7921_acpi_read(struct mt7921_dev *dev, u8 *method, u8 **tbl, u32 *len)
+ sar_root->package.elements[0].type != ACPI_TYPE_INTEGER) {
+ dev_err(mdev->dev, "sar cnt = %d\n",
+ sar_root->package.count);
++ ret = -EINVAL;
+ goto free;
+ }
+
+ if (!*tbl) {
+ *tbl = devm_kzalloc(mdev->dev, sar_root->package.count,
+ GFP_KERNEL);
+- if (!*tbl)
++ if (!*tbl) {
++ ret = -ENOMEM;
+ goto free;
++ }
+ }
+ if (len)
+ *len = sar_root->package.count;
+@@ -52,9 +55,9 @@ mt7921_acpi_read(struct mt7921_dev *dev, u8 *method, u8 **tbl, u32 *len)
+ break;
+ *(*tbl + i) = (u8)sar_unit->integer.value;
+ }
+-free:
+ ret = (i == sar_root->package.count) ? 0 : -EINVAL;
+
++free:
+ kfree(sar_root);
+
+ return ret;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/init.c b/drivers/net/wireless/mediatek/mt76/mt7921/init.c
+index 542dfd4251290..d4b681d7e1d22 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/init.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/init.c
+@@ -175,7 +175,7 @@ u8 mt7921_check_offload_capability(struct device *dev, const char *fw_wm)
+
+ if (!fw || !fw->data || fw->size < sizeof(*hdr)) {
+ dev_err(dev, "Invalid firmware\n");
+- return -EINVAL;
++ goto out;
+ }
+
+ data = fw->data;
+@@ -206,6 +206,7 @@ u8 mt7921_check_offload_capability(struct device *dev, const char *fw_wm)
+ data += le16_to_cpu(rel_info->len) + rel_info->pad_len;
+ }
+
++out:
+ release_firmware(fw);
+
+ return features ? features->data : 0;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/main.c b/drivers/net/wireless/mediatek/mt76/mt7921/main.c
+index 76ac5069638fe..cdb0d61903935 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/main.c
+@@ -422,15 +422,15 @@ void mt7921_roc_timer(struct timer_list *timer)
+
+ static int mt7921_abort_roc(struct mt7921_phy *phy, struct mt7921_vif *vif)
+ {
+- int err;
+-
+- if (!test_and_clear_bit(MT76_STATE_ROC, &phy->mt76->state))
+- return 0;
++ int err = 0;
+
+ del_timer_sync(&phy->roc_timer);
+ cancel_work_sync(&phy->roc_work);
+- err = mt7921_mcu_abort_roc(phy, vif, phy->roc_token_id);
+- clear_bit(MT76_STATE_ROC, &phy->mt76->state);
++
++ mt7921_mutex_acquire(phy->dev);
++ if (test_and_clear_bit(MT76_STATE_ROC, &phy->mt76->state))
++ err = mt7921_mcu_abort_roc(phy, vif, phy->roc_token_id);
++ mt7921_mutex_release(phy->dev);
+
+ return err;
+ }
+@@ -487,13 +487,8 @@ static int mt7921_cancel_remain_on_channel(struct ieee80211_hw *hw,
+ {
+ struct mt7921_vif *mvif = (struct mt7921_vif *)vif->drv_priv;
+ struct mt7921_phy *phy = mt7921_hw_phy(hw);
+- int err;
+
+- mt7921_mutex_acquire(phy->dev);
+- err = mt7921_abort_roc(phy, mvif);
+- mt7921_mutex_release(phy->dev);
+-
+- return err;
++ return mt7921_abort_roc(phy, mvif);
+ }
+
+ static int mt7921_set_channel(struct mt7921_phy *phy)
+@@ -1711,7 +1706,10 @@ static void mt7921_ctx_iter(void *priv, u8 *mac,
+ if (ctx != mvif->ctx)
+ return;
+
+- mt76_connac_mcu_uni_set_chctx(mvif->phy->mt76, &mvif->mt76, ctx);
++ if (vif->type == NL80211_IFTYPE_MONITOR)
++ mt7921_mcu_config_sniffer(mvif, ctx);
++ else
++ mt76_connac_mcu_uni_set_chctx(mvif->phy->mt76, &mvif->mt76, ctx);
+ }
+
+ static void
+@@ -1778,11 +1776,8 @@ static void mt7921_mgd_complete_tx(struct ieee80211_hw *hw,
+ struct ieee80211_prep_tx_info *info)
+ {
+ struct mt7921_vif *mvif = (struct mt7921_vif *)vif->drv_priv;
+- struct mt7921_dev *dev = mt7921_hw_dev(hw);
+
+- mt7921_mutex_acquire(dev);
+ mt7921_abort_roc(mvif->phy, mvif);
+- mt7921_mutex_release(dev);
+ }
+
+ const struct ieee80211_ops mt7921_ops = {
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c
+index fb9c0f66cb27c..7253ce90234ef 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c
+@@ -174,7 +174,7 @@ mt7921_mcu_uni_roc_event(struct mt7921_dev *dev, struct sk_buff *skb)
+ wake_up(&dev->phy.roc_wait);
+ duration = le32_to_cpu(grant->max_interval);
+ mod_timer(&dev->phy.roc_timer,
+- round_jiffies_up(jiffies + msecs_to_jiffies(duration)));
++ jiffies + msecs_to_jiffies(duration));
+ }
+
+ static void
+@@ -1093,6 +1093,74 @@ int mt7921_mcu_set_sniffer(struct mt7921_dev *dev, struct ieee80211_vif *vif,
+ true);
+ }
+
++int mt7921_mcu_config_sniffer(struct mt7921_vif *vif,
++ struct ieee80211_chanctx_conf *ctx)
++{
++ struct cfg80211_chan_def *chandef = &ctx->def;
++ int freq1 = chandef->center_freq1, freq2 = chandef->center_freq2;
++ const u8 ch_band[] = {
++ [NL80211_BAND_2GHZ] = 1,
++ [NL80211_BAND_5GHZ] = 2,
++ [NL80211_BAND_6GHZ] = 3,
++ };
++ const u8 ch_width[] = {
++ [NL80211_CHAN_WIDTH_20_NOHT] = 0,
++ [NL80211_CHAN_WIDTH_20] = 0,
++ [NL80211_CHAN_WIDTH_40] = 0,
++ [NL80211_CHAN_WIDTH_80] = 1,
++ [NL80211_CHAN_WIDTH_160] = 2,
++ [NL80211_CHAN_WIDTH_80P80] = 3,
++ [NL80211_CHAN_WIDTH_5] = 4,
++ [NL80211_CHAN_WIDTH_10] = 5,
++ [NL80211_CHAN_WIDTH_320] = 6,
++ };
++ struct {
++ struct {
++ u8 band_idx;
++ u8 pad[3];
++ } __packed hdr;
++ struct config_tlv {
++ __le16 tag;
++ __le16 len;
++ u16 aid;
++ u8 ch_band;
++ u8 bw;
++ u8 control_ch;
++ u8 sco;
++ u8 center_ch;
++ u8 center_ch2;
++ u8 drop_err;
++ u8 pad[3];
++ } __packed tlv;
++ } __packed req = {
++ .hdr = {
++ .band_idx = vif->mt76.band_idx,
++ },
++ .tlv = {
++ .tag = cpu_to_le16(1),
++ .len = cpu_to_le16(sizeof(req.tlv)),
++ .control_ch = chandef->chan->hw_value,
++ .center_ch = ieee80211_frequency_to_channel(freq1),
++ .drop_err = 1,
++ },
++ };
++ if (chandef->chan->band < ARRAY_SIZE(ch_band))
++ req.tlv.ch_band = ch_band[chandef->chan->band];
++ if (chandef->width < ARRAY_SIZE(ch_width))
++ req.tlv.bw = ch_width[chandef->width];
++
++ if (freq2)
++ req.tlv.center_ch2 = ieee80211_frequency_to_channel(freq2);
++
++ if (req.tlv.control_ch < req.tlv.center_ch)
++ req.tlv.sco = 1; /* SCA */
++ else if (req.tlv.control_ch > req.tlv.center_ch)
++ req.tlv.sco = 3; /* SCB */
++
++ return mt76_mcu_send_msg(vif->phy->mt76->dev, MCU_UNI_CMD(SNIFFER),
++ &req, sizeof(req), true);
++}
++
+ int
+ mt7921_mcu_uni_add_beacon_offload(struct mt7921_dev *dev,
+ struct ieee80211_hw *hw,
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h b/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h
+index 15d6b7fe1c6c8..d4cfa26c373c3 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h
+@@ -529,6 +529,8 @@ void mt7921_set_ipv6_ns_work(struct work_struct *work);
+
+ int mt7921_mcu_set_sniffer(struct mt7921_dev *dev, struct ieee80211_vif *vif,
+ bool enable);
++int mt7921_mcu_config_sniffer(struct mt7921_vif *vif,
++ struct ieee80211_chanctx_conf *ctx);
+
+ int mt7921_usb_sdio_tx_prepare_skb(struct mt76_dev *mdev, void *txwi_ptr,
+ enum mt76_txq_id qid, struct mt76_wcid *wcid,
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/debugfs.c b/drivers/net/wireless/mediatek/mt76/mt7996/debugfs.c
+index 2e4a8909b9e80..3d4fbbbcc2062 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/debugfs.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/debugfs.c
+@@ -457,7 +457,7 @@ mt7996_hw_queue_read(struct seq_file *s, u32 size,
+ if (val & BIT(map[i].index))
+ continue;
+
+- ctrl = BIT(31) | (map[i].pid << 10) | (map[i].qid << 24);
++ ctrl = BIT(31) | (map[i].pid << 10) | ((u32)map[i].qid << 24);
+ mt76_wr(dev, MT_FL_Q0_CTRL, ctrl);
+
+ head = mt76_get_field(dev, MT_FL_Q2_CTRL,
+@@ -653,8 +653,9 @@ static int
+ mt7996_rf_regval_set(void *data, u64 val)
+ {
+ struct mt7996_dev *dev = data;
++ u32 val32 = val;
+
+- return mt7996_mcu_rf_regval(dev, dev->mt76.debugfs_reg, (u32 *)&val, true);
++ return mt7996_mcu_rf_regval(dev, dev->mt76.debugfs_reg, &val32, true);
+ }
+
+ DEFINE_DEBUGFS_ATTRIBUTE(fops_rf_regval, mt7996_rf_regval_get,
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/eeprom.c b/drivers/net/wireless/mediatek/mt76/mt7996/eeprom.c
+index b9f62bedbc485..5d8e0353627e1 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/eeprom.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/eeprom.c
+@@ -65,17 +65,23 @@ static int mt7996_eeprom_load(struct mt7996_dev *dev)
+ } else {
+ u8 free_block_num;
+ u32 block_num, i;
++ u32 eeprom_blk_size = MT7996_EEPROM_BLOCK_SIZE;
+
+- /* TODO: check free block event */
+- mt7996_mcu_get_eeprom_free_block(dev, &free_block_num);
+- /* efuse info not enough */
++ ret = mt7996_mcu_get_eeprom_free_block(dev, &free_block_num);
++ if (ret < 0)
++ return ret;
++
++ /* efuse info isn't enough */
+ if (free_block_num >= 59)
+ return -EINVAL;
+
+ /* read eeprom data from efuse */
+- block_num = DIV_ROUND_UP(MT7996_EEPROM_SIZE, MT7996_EEPROM_BLOCK_SIZE);
+- for (i = 0; i < block_num; i++)
+- mt7996_mcu_get_eeprom(dev, i * MT7996_EEPROM_BLOCK_SIZE);
++ block_num = DIV_ROUND_UP(MT7996_EEPROM_SIZE, eeprom_blk_size);
++ for (i = 0; i < block_num; i++) {
++ ret = mt7996_mcu_get_eeprom(dev, i * eeprom_blk_size);
++ if (ret < 0)
++ return ret;
++ }
+ }
+
+ return mt7996_check_eeprom(dev);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/mac.c b/drivers/net/wireless/mediatek/mt76/mt7996/mac.c
+index 0b3e28748e76b..0eb9e4d73f2c1 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/mac.c
+@@ -469,7 +469,7 @@ static int mt7996_reverse_frag0_hdr_trans(struct sk_buff *skb, u16 hdr_gap)
+ ether_addr_copy(hdr.addr4, eth_hdr->h_source);
+ break;
+ default:
+- break;
++ return -EINVAL;
+ }
+
+ skb_pull(skb, hdr_gap + sizeof(struct ethhdr) - 2);
+@@ -959,51 +959,6 @@ mt7996_mac_write_txwi_80211(struct mt7996_dev *dev, __le32 *txwi,
+ }
+ }
+
+-static u16
+-mt7996_mac_tx_rate_val(struct mt76_phy *mphy, struct ieee80211_vif *vif,
+- bool beacon, bool mcast)
+-{
+- u8 mode = 0, band = mphy->chandef.chan->band;
+- int rateidx = 0, mcast_rate;
+-
+- if (beacon) {
+- struct cfg80211_bitrate_mask *mask;
+-
+- mask = &vif->bss_conf.beacon_tx_rate;
+- if (hweight16(mask->control[band].he_mcs[0]) == 1) {
+- rateidx = ffs(mask->control[band].he_mcs[0]) - 1;
+- mode = MT_PHY_TYPE_HE_SU;
+- goto out;
+- } else if (hweight16(mask->control[band].vht_mcs[0]) == 1) {
+- rateidx = ffs(mask->control[band].vht_mcs[0]) - 1;
+- mode = MT_PHY_TYPE_VHT;
+- goto out;
+- } else if (hweight8(mask->control[band].ht_mcs[0]) == 1) {
+- rateidx = ffs(mask->control[band].ht_mcs[0]) - 1;
+- mode = MT_PHY_TYPE_HT;
+- goto out;
+- } else if (hweight32(mask->control[band].legacy) == 1) {
+- rateidx = ffs(mask->control[band].legacy) - 1;
+- goto legacy;
+- }
+- }
+-
+- mcast_rate = vif->bss_conf.mcast_rate[band];
+- if (mcast && mcast_rate > 0)
+- rateidx = mcast_rate - 1;
+- else
+- rateidx = ffs(vif->bss_conf.basic_rates) - 1;
+-
+-legacy:
+- rateidx = mt76_calculate_default_rate(mphy, rateidx);
+- mode = rateidx >> 8;
+- rateidx &= GENMASK(7, 0);
+-
+-out:
+- return FIELD_PREP(MT_TX_RATE_IDX, rateidx) |
+- FIELD_PREP(MT_TX_RATE_MODE, mode);
+-}
+-
+ void mt7996_mac_write_txwi(struct mt7996_dev *dev, __le32 *txwi,
+ struct sk_buff *skb, struct mt76_wcid *wcid, int pid,
+ struct ieee80211_key_conf *key, u32 changed)
+@@ -1091,7 +1046,8 @@ void mt7996_mac_write_txwi(struct mt7996_dev *dev, __le32 *txwi,
+ /* Fixed rate is available just for 802.11 txd */
+ struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
+ bool multicast = is_multicast_ether_addr(hdr->addr1);
+- u16 rate = mt7996_mac_tx_rate_val(mphy, vif, beacon, multicast);
++ u16 rate = mt76_connac2_mac_tx_rate_val(mphy, vif, beacon,
++ multicast);
+
+ /* fix to bw 20 */
+ val = MT_TXD6_FIXED_BW |
+@@ -1690,7 +1646,7 @@ void mt7996_mac_set_timing(struct mt7996_phy *phy)
+ else
+ val = MT7996_CFEND_RATE_11B;
+
+- mt76_rmw_field(dev, MT_AGG_ACR0(band_idx), MT_AGG_ACR_CFEND_RATE, val);
++ mt76_rmw_field(dev, MT_RATE_HRCR0(band_idx), MT_RATE_HRCR0_CFEND_RATE, val);
+ mt76_clear(dev, MT_ARB_SCR(band_idx),
+ MT_ARB_SCR_TX_DISABLE | MT_ARB_SCR_RX_DISABLE);
+ }
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/main.c b/drivers/net/wireless/mediatek/mt76/mt7996/main.c
+index 4421cd54311b1..c423b052e4f4c 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/main.c
+@@ -880,7 +880,10 @@ mt7996_set_antenna(struct ieee80211_hw *hw, u32 tx_ant, u32 rx_ant)
+ phy->mt76->antenna_mask = tx_ant;
+
+ /* restore to the origin chainmask which might have auxiliary path */
+- if (hweight8(tx_ant) == max_nss)
++ if (hweight8(tx_ant) == max_nss && band_idx < MT_BAND2)
++ phy->mt76->chainmask = ((dev->chainmask >> shift) &
++ (BIT(dev->chainshift[band_idx + 1] - shift) - 1)) << shift;
++ else if (hweight8(tx_ant) == max_nss)
+ phy->mt76->chainmask = (dev->chainmask >> shift) << shift;
+ else
+ phy->mt76->chainmask = tx_ant << shift;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7996/mcu.c
+index 04e1d10bbd21e..d593ed9e3f73c 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/mcu.c
+@@ -335,6 +335,9 @@ mt7996_mcu_rx_radar_detected(struct mt7996_dev *dev, struct sk_buff *skb)
+
+ r = (struct mt7996_mcu_rdd_report *)skb->data;
+
++ if (r->band_idx >= ARRAY_SIZE(dev->mt76.phys))
++ return;
++
+ mphy = dev->mt76.phys[r->band_idx];
+ if (!mphy)
+ return;
+@@ -412,6 +415,9 @@ mt7996_mcu_ie_countdown(struct mt7996_dev *dev, struct sk_buff *skb)
+ struct header *hdr = (struct header *)data;
+ struct tlv *tlv = (struct tlv *)(data + 4);
+
++ if (hdr->band >= ARRAY_SIZE(dev->mt76.phys))
++ return;
++
+ if (hdr->band && dev->mt76.phys[hdr->band])
+ mphy = dev->mt76.phys[hdr->band];
+
+@@ -903,8 +909,8 @@ mt7996_mcu_sta_he_tlv(struct sk_buff *skb, struct ieee80211_sta *sta)
+ he = (struct sta_rec_he_v2 *)tlv;
+ for (i = 0; i < 11; i++) {
+ if (i < 6)
+- he->he_mac_cap[i] = cpu_to_le16(elem->mac_cap_info[i]);
+- he->he_phy_cap[i] = cpu_to_le16(elem->phy_cap_info[i]);
++ he->he_mac_cap[i] = elem->mac_cap_info[i];
++ he->he_phy_cap[i] = elem->phy_cap_info[i];
+ }
+
+ mcs_map = sta->deflink.he_cap.he_mcs_nss_supp;
+@@ -2393,7 +2399,7 @@ mt7996_mcu_restart(struct mt76_dev *dev)
+ .power_mode = 1,
+ };
+
+- return mt76_mcu_send_msg(dev, MCU_WM_UNI_CMD(POWER_CREL), &req,
++ return mt76_mcu_send_msg(dev, MCU_WM_UNI_CMD(POWER_CTRL), &req,
+ sizeof(req), false);
+ }
+
+@@ -2454,13 +2460,14 @@ void mt7996_mcu_exit(struct mt7996_dev *dev)
+ __mt76_mcu_restart(&dev->mt76);
+ if (mt7996_firmware_state(dev, false)) {
+ dev_err(dev->mt76.dev, "Failed to exit mcu\n");
+- return;
++ goto out;
+ }
+
+ mt76_wr(dev, MT_TOP_LPCR_HOST_BAND(0), MT_TOP_LPCR_HOST_FW_OWN);
+ if (dev->hif2)
+ mt76_wr(dev, MT_TOP_LPCR_HOST_BAND(1),
+ MT_TOP_LPCR_HOST_FW_OWN);
++out:
+ skb_queue_purge(&dev->mt76.mcu.res_q);
+ }
+
+@@ -2921,8 +2928,9 @@ int mt7996_mcu_get_eeprom(struct mt7996_dev *dev, u32 offset)
+ bool valid;
+ int ret;
+
+- ret = mt76_mcu_send_and_get_msg(&dev->mt76, MCU_WM_UNI_CMD_QUERY(EFUSE_CTRL), &req,
+- sizeof(req), true, &skb);
++ ret = mt76_mcu_send_and_get_msg(&dev->mt76,
++ MCU_WM_UNI_CMD_QUERY(EFUSE_CTRL),
++ &req, sizeof(req), true, &skb);
+ if (ret)
+ return ret;
+
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/mmio.c b/drivers/net/wireless/mediatek/mt76/mt7996/mmio.c
+index 521769eb6b0e9..d8a2c1a744b25 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/mmio.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/mmio.c
+@@ -21,6 +21,7 @@ static const struct __base mt7996_reg_base[] = {
+ [WF_ETBF_BASE] = { { 0x820ea000, 0x820fa000, 0x830ea000 } },
+ [WF_LPON_BASE] = { { 0x820eb000, 0x820fb000, 0x830eb000 } },
+ [WF_MIB_BASE] = { { 0x820ed000, 0x820fd000, 0x830ed000 } },
++ [WF_RATE_BASE] = { { 0x820ee000, 0x820fe000, 0x830ee000 } },
+ };
+
+ static const struct __map mt7996_reg_map[] = {
+@@ -149,7 +150,7 @@ static u32 __mt7996_reg_addr(struct mt7996_dev *dev, u32 addr)
+
+ if (dev_is_pci(dev->mt76.dev) &&
+ ((addr >= MT_CBTOP1_PHY_START && addr <= MT_CBTOP1_PHY_END) ||
+- (addr >= MT_CBTOP2_PHY_START && addr <= MT_CBTOP2_PHY_END)))
++ addr >= MT_CBTOP2_PHY_START))
+ return mt7996_reg_map_l1(dev, addr);
+
+ /* CONN_INFRA: convert to physical addr and use layer 1 remap */
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/regs.h b/drivers/net/wireless/mediatek/mt76/mt7996/regs.h
+index 794f61b93a466..7a28cae34e34b 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/regs.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/regs.h
+@@ -33,6 +33,7 @@ enum base_rev {
+ WF_ETBF_BASE,
+ WF_LPON_BASE,
+ WF_MIB_BASE,
++ WF_RATE_BASE,
+ __MT_REG_BASE_MAX,
+ };
+
+@@ -235,13 +236,6 @@ enum base_rev {
+ FIELD_PREP(MT_WTBL_LMAC_ID, _id) | \
+ FIELD_PREP(MT_WTBL_LMAC_DW, _dw))
+
+-/* AGG: band 0(0x820e2000), band 1(0x820f2000), band 2(0x830e2000) */
+-#define MT_WF_AGG_BASE(_band) __BASE(WF_AGG_BASE, (_band))
+-#define MT_WF_AGG(_band, ofs) (MT_WF_AGG_BASE(_band) + (ofs))
+-
+-#define MT_AGG_ACR0(_band) MT_WF_AGG(_band, 0x054)
+-#define MT_AGG_ACR_CFEND_RATE GENMASK(13, 0)
+-
+ /* ARB: band 0(0x820e3000), band 1(0x820f3000), band 2(0x830e3000) */
+ #define MT_WF_ARB_BASE(_band) __BASE(WF_ARB_BASE, (_band))
+ #define MT_WF_ARB(_band, ofs) (MT_WF_ARB_BASE(_band) + (ofs))
+@@ -300,6 +294,13 @@ enum base_rev {
+ #define MT_WF_RMAC_RSVD0(_band) MT_WF_RMAC(_band, 0x03e0)
+ #define MT_WF_RMAC_RSVD0_EIFS_CLR BIT(21)
+
++/* RATE: band 0(0x820ee000), band 1(0x820fe000), band 2(0x830ee000) */
++#define MT_WF_RATE_BASE(_band) __BASE(WF_RATE_BASE, (_band))
++#define MT_WF_RATE(_band, ofs) (MT_WF_RATE_BASE(_band) + (ofs))
++
++#define MT_RATE_HRCR0(_band) MT_WF_RATE(_band, 0x050)
++#define MT_RATE_HRCR0_CFEND_RATE GENMASK(14, 0)
++
+ /* WFDMA0 */
+ #define MT_WFDMA0_BASE 0xd4000
+ #define MT_WFDMA0(ofs) (MT_WFDMA0_BASE + (ofs))
+@@ -463,7 +464,6 @@ enum base_rev {
+ #define MT_CBTOP1_PHY_START 0x70000000
+ #define MT_CBTOP1_PHY_END 0x77ffffff
+ #define MT_CBTOP2_PHY_START 0xf0000000
+-#define MT_CBTOP2_PHY_END 0xffffffff
+ #define MT_INFRA_MCU_START 0x7c000000
+ #define MT_INFRA_MCU_END 0x7c3fffff
+
+diff --git a/drivers/net/wireless/mediatek/mt76/sdio.c b/drivers/net/wireless/mediatek/mt76/sdio.c
+index 228bc7d45011c..419723118ded8 100644
+--- a/drivers/net/wireless/mediatek/mt76/sdio.c
++++ b/drivers/net/wireless/mediatek/mt76/sdio.c
+@@ -562,6 +562,10 @@ mt76s_tx_queue_skb_raw(struct mt76_dev *dev, struct mt76_queue *q,
+
+ q->entry[q->head].buf_sz = len;
+ q->entry[q->head].skb = skb;
++
++ /* ensure the entry is fully updated before bus access */
++ smp_wmb();
++
+ q->head = (q->head + 1) % q->ndesc;
+ q->queued++;
+
+diff --git a/drivers/net/wireless/mediatek/mt76/sdio_txrx.c b/drivers/net/wireless/mediatek/mt76/sdio_txrx.c
+index bfc4de50a4d23..ddd8c0cc744df 100644
+--- a/drivers/net/wireless/mediatek/mt76/sdio_txrx.c
++++ b/drivers/net/wireless/mediatek/mt76/sdio_txrx.c
+@@ -254,6 +254,10 @@ static int mt76s_tx_run_queue(struct mt76_dev *dev, struct mt76_queue *q)
+
+ if (!test_bit(MT76_STATE_MCU_RUNNING, &dev->phy.state)) {
+ __skb_put_zero(e->skb, 4);
++ err = __skb_grow(e->skb, roundup(e->skb->len,
++ sdio->func->cur_blksize));
++ if (err)
++ return err;
+ err = __mt76s_xmit_queue(dev, e->skb->data,
+ e->skb->len);
+ if (err)
+diff --git a/drivers/net/wireless/mediatek/mt7601u/dma.c b/drivers/net/wireless/mediatek/mt7601u/dma.c
+index 457147394edc4..773a1cc2f8520 100644
+--- a/drivers/net/wireless/mediatek/mt7601u/dma.c
++++ b/drivers/net/wireless/mediatek/mt7601u/dma.c
+@@ -123,7 +123,8 @@ static u16 mt7601u_rx_next_seg_len(u8 *data, u32 data_len)
+ if (data_len < min_seg_len ||
+ WARN_ON_ONCE(!dma_len) ||
+ WARN_ON_ONCE(dma_len + MT_DMA_HDRS > data_len) ||
+- WARN_ON_ONCE(dma_len & 0x3))
++ WARN_ON_ONCE(dma_len & 0x3) ||
++ WARN_ON_ONCE(dma_len < min_seg_len))
+ return 0;
+
+ return MT_DMA_HDRS + dma_len;
+diff --git a/drivers/net/wireless/microchip/wilc1000/netdev.c b/drivers/net/wireless/microchip/wilc1000/netdev.c
+index 9b319a455b96d..e9f59de31b0b9 100644
+--- a/drivers/net/wireless/microchip/wilc1000/netdev.c
++++ b/drivers/net/wireless/microchip/wilc1000/netdev.c
+@@ -730,6 +730,7 @@ netdev_tx_t wilc_mac_xmit(struct sk_buff *skb, struct net_device *ndev)
+
+ if (skb->dev != ndev) {
+ netdev_err(ndev, "Packet not destined to this device\n");
++ dev_kfree_skb(skb);
+ return NETDEV_TX_OK;
+ }
+
+@@ -980,7 +981,7 @@ struct wilc_vif *wilc_netdev_ifc_init(struct wilc *wl, const char *name,
+ ndev->name);
+ if (!wl->hif_workqueue) {
+ ret = -ENOMEM;
+- goto error;
++ goto unregister_netdev;
+ }
+
+ ndev->needs_free_netdev = true;
+@@ -995,6 +996,11 @@ struct wilc_vif *wilc_netdev_ifc_init(struct wilc *wl, const char *name,
+
+ return vif;
+
++unregister_netdev:
++ if (rtnl_locked)
++ cfg80211_unregister_netdevice(ndev);
++ else
++ unregister_netdev(ndev);
+ error:
+ free_netdev(ndev);
+ return ERR_PTR(ret);
+diff --git a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8188f.c b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8188f.c
+index 2c4f403ba68f3..97e7ff7289fab 100644
+--- a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8188f.c
++++ b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8188f.c
+@@ -1122,7 +1122,7 @@ static void rtl8188fu_phy_iqcalibrate(struct rtl8xxxu_priv *priv,
+
+ if (t == 0) {
+ val32 = rtl8xxxu_read32(priv, REG_FPGA0_XA_HSSI_PARM1);
+- priv->pi_enabled = val32 & FPGA0_HSSI_PARM1_PI;
++ priv->pi_enabled = u32_get_bits(val32, FPGA0_HSSI_PARM1_PI);
+ }
+
+ /* save RF path */
+diff --git a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8192e.c b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8192e.c
+index a7d76693c02db..9d0ed6760cb61 100644
+--- a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8192e.c
++++ b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8192e.c
+@@ -1744,6 +1744,11 @@ static void rtl8192e_enable_rf(struct rtl8xxxu_priv *priv)
+ val8 = rtl8xxxu_read8(priv, REG_PAD_CTRL1);
+ val8 &= ~BIT(0);
+ rtl8xxxu_write8(priv, REG_PAD_CTRL1, val8);
++
++ /*
++ * Fix transmission failure of rtl8192e.
++ */
++ rtl8xxxu_write8(priv, REG_TXPAUSE, 0x00);
+ }
+
+ static s8 rtl8192e_cck_rssi(struct rtl8xxxu_priv *priv, u8 cck_agc_rpt)
+diff --git a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
+index 3ed435401e570..d22990464dad6 100644
+--- a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
++++ b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
+@@ -4208,10 +4208,12 @@ static int rtl8xxxu_init_device(struct ieee80211_hw *hw)
+ * should be equal or CCK RSSI report may be incorrect
+ */
+ val32 = rtl8xxxu_read32(priv, REG_FPGA0_XA_HSSI_PARM2);
+- priv->cck_agc_report_type = val32 & FPGA0_HSSI_PARM2_CCK_HIGH_PWR;
++ priv->cck_agc_report_type =
++ u32_get_bits(val32, FPGA0_HSSI_PARM2_CCK_HIGH_PWR);
+
+ val32 = rtl8xxxu_read32(priv, REG_FPGA0_XB_HSSI_PARM2);
+- if (priv->cck_agc_report_type != (bool)(val32 & FPGA0_HSSI_PARM2_CCK_HIGH_PWR)) {
++ if (priv->cck_agc_report_type !=
++ u32_get_bits(val32, FPGA0_HSSI_PARM2_CCK_HIGH_PWR)) {
+ if (priv->cck_agc_report_type)
+ val32 |= FPGA0_HSSI_PARM2_CCK_HIGH_PWR;
+ else
+@@ -5274,7 +5276,7 @@ static void rtl8xxxu_queue_rx_urb(struct rtl8xxxu_priv *priv,
+ pending = priv->rx_urb_pending_count;
+ } else {
+ skb = (struct sk_buff *)rx_urb->urb.context;
+- dev_kfree_skb(skb);
++ dev_kfree_skb_irq(skb);
+ usb_free_urb(&rx_urb->urb);
+ }
+
+@@ -5550,9 +5552,6 @@ static void rtl8xxxu_c2hcmd_callback(struct work_struct *work)
+ btcoex = &priv->bt_coex;
+ rarpt = &priv->ra_report;
+
+- if (priv->rf_paths > 1)
+- goto out;
+-
+ while (!skb_queue_empty(&priv->c2hcmd_queue)) {
+ skb = skb_dequeue(&priv->c2hcmd_queue);
+
+@@ -5585,10 +5584,9 @@ static void rtl8xxxu_c2hcmd_callback(struct work_struct *work)
+ default:
+ break;
+ }
+- }
+
+-out:
+- dev_kfree_skb(skb);
++ dev_kfree_skb(skb);
++ }
+ }
+
+ static void rtl8723bu_handle_c2h(struct rtl8xxxu_priv *priv,
+@@ -5956,7 +5954,6 @@ static int rtl8xxxu_config(struct ieee80211_hw *hw, u32 changed)
+ {
+ struct rtl8xxxu_priv *priv = hw->priv;
+ struct device *dev = &priv->udev->dev;
+- u16 val16;
+ int ret = 0, channel;
+ bool ht40;
+
+@@ -5966,14 +5963,6 @@ static int rtl8xxxu_config(struct ieee80211_hw *hw, u32 changed)
+ __func__, hw->conf.chandef.chan->hw_value,
+ changed, hw->conf.chandef.width);
+
+- if (changed & IEEE80211_CONF_CHANGE_RETRY_LIMITS) {
+- val16 = ((hw->conf.long_frame_max_tx_count <<
+- RETRY_LIMIT_LONG_SHIFT) & RETRY_LIMIT_LONG_MASK) |
+- ((hw->conf.short_frame_max_tx_count <<
+- RETRY_LIMIT_SHORT_SHIFT) & RETRY_LIMIT_SHORT_MASK);
+- rtl8xxxu_write16(priv, REG_RETRY_LIMIT, val16);
+- }
+-
+ if (changed & IEEE80211_CONF_CHANGE_CHANNEL) {
+ switch (hw->conf.chandef.width) {
+ case NL80211_CHAN_WIDTH_20_NOHT:
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/hw.c b/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/hw.c
+index 58c2ab3d44bef..de61c9c0ddec4 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/hw.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/hw.c
+@@ -68,8 +68,10 @@ static void _rtl88ee_return_beacon_queue_skb(struct ieee80211_hw *hw)
+ struct rtl_priv *rtlpriv = rtl_priv(hw);
+ struct rtl_pci *rtlpci = rtl_pcidev(rtl_pcipriv(hw));
+ struct rtl8192_tx_ring *ring = &rtlpci->tx_ring[BEACON_QUEUE];
++ struct sk_buff_head free_list;
+ unsigned long flags;
+
++ skb_queue_head_init(&free_list);
+ spin_lock_irqsave(&rtlpriv->locks.irq_th_lock, flags);
+ while (skb_queue_len(&ring->queue)) {
+ struct rtl_tx_desc *entry = &ring->desc[ring->idx];
+@@ -79,10 +81,12 @@ static void _rtl88ee_return_beacon_queue_skb(struct ieee80211_hw *hw)
+ rtlpriv->cfg->ops->get_desc(hw, (u8 *)entry,
+ true, HW_DESC_TXBUFF_ADDR),
+ skb->len, DMA_TO_DEVICE);
+- kfree_skb(skb);
++ __skb_queue_tail(&free_list, skb);
+ ring->idx = (ring->idx + 1) % ring->entries;
+ }
+ spin_unlock_irqrestore(&rtlpriv->locks.irq_th_lock, flags);
++
++ __skb_queue_purge(&free_list);
+ }
+
+ static void _rtl88ee_disable_bcn_sub_func(struct ieee80211_hw *hw)
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8723be/hw.c b/drivers/net/wireless/realtek/rtlwifi/rtl8723be/hw.c
+index 189cc6437600f..0ba3bbed6ed36 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8723be/hw.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8723be/hw.c
+@@ -30,8 +30,10 @@ static void _rtl8723be_return_beacon_queue_skb(struct ieee80211_hw *hw)
+ struct rtl_priv *rtlpriv = rtl_priv(hw);
+ struct rtl_pci *rtlpci = rtl_pcidev(rtl_pcipriv(hw));
+ struct rtl8192_tx_ring *ring = &rtlpci->tx_ring[BEACON_QUEUE];
++ struct sk_buff_head free_list;
+ unsigned long flags;
+
++ skb_queue_head_init(&free_list);
+ spin_lock_irqsave(&rtlpriv->locks.irq_th_lock, flags);
+ while (skb_queue_len(&ring->queue)) {
+ struct rtl_tx_desc *entry = &ring->desc[ring->idx];
+@@ -41,10 +43,12 @@ static void _rtl8723be_return_beacon_queue_skb(struct ieee80211_hw *hw)
+ rtlpriv->cfg->ops->get_desc(hw, (u8 *)entry,
+ true, HW_DESC_TXBUFF_ADDR),
+ skb->len, DMA_TO_DEVICE);
+- kfree_skb(skb);
++ __skb_queue_tail(&free_list, skb);
+ ring->idx = (ring->idx + 1) % ring->entries;
+ }
+ spin_unlock_irqrestore(&rtlpriv->locks.irq_th_lock, flags);
++
++ __skb_queue_purge(&free_list);
+ }
+
+ static void _rtl8723be_set_bcn_ctrl_reg(struct ieee80211_hw *hw,
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/hw.c b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/hw.c
+index 7e0f62d59fe17..a7e3250957dc9 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/hw.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/hw.c
+@@ -26,8 +26,10 @@ static void _rtl8821ae_return_beacon_queue_skb(struct ieee80211_hw *hw)
+ struct rtl_priv *rtlpriv = rtl_priv(hw);
+ struct rtl_pci *rtlpci = rtl_pcidev(rtl_pcipriv(hw));
+ struct rtl8192_tx_ring *ring = &rtlpci->tx_ring[BEACON_QUEUE];
++ struct sk_buff_head free_list;
+ unsigned long flags;
+
++ skb_queue_head_init(&free_list);
+ spin_lock_irqsave(&rtlpriv->locks.irq_th_lock, flags);
+ while (skb_queue_len(&ring->queue)) {
+ struct rtl_tx_desc *entry = &ring->desc[ring->idx];
+@@ -37,10 +39,12 @@ static void _rtl8821ae_return_beacon_queue_skb(struct ieee80211_hw *hw)
+ rtlpriv->cfg->ops->get_desc(hw, (u8 *)entry,
+ true, HW_DESC_TXBUFF_ADDR),
+ skb->len, DMA_TO_DEVICE);
+- kfree_skb(skb);
++ __skb_queue_tail(&free_list, skb);
+ ring->idx = (ring->idx + 1) % ring->entries;
+ }
+ spin_unlock_irqrestore(&rtlpriv->locks.irq_th_lock, flags);
++
++ __skb_queue_purge(&free_list);
+ }
+
+ static void _rtl8821ae_set_bcn_ctrl_reg(struct ieee80211_hw *hw,
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/phy.c b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/phy.c
+index a29321e2fa72f..5323ead30db03 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/phy.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/phy.c
+@@ -1598,18 +1598,6 @@ static bool _rtl8812ae_get_integer_from_string(const char *str, u8 *pint)
+ return true;
+ }
+
+-static bool _rtl8812ae_eq_n_byte(const char *str1, const char *str2, u32 num)
+-{
+- if (num == 0)
+- return false;
+- while (num > 0) {
+- num--;
+- if (str1[num] != str2[num])
+- return false;
+- }
+- return true;
+-}
+-
+ static s8 _rtl8812ae_phy_get_chnl_idx_of_txpwr_lmt(struct ieee80211_hw *hw,
+ u8 band, u8 channel)
+ {
+@@ -1659,42 +1647,42 @@ static void _rtl8812ae_phy_set_txpower_limit(struct ieee80211_hw *hw,
+ power_limit = power_limit > MAX_POWER_INDEX ?
+ MAX_POWER_INDEX : power_limit;
+
+- if (_rtl8812ae_eq_n_byte(pregulation, "FCC", 3))
++ if (strcmp(pregulation, "FCC") == 0)
+ regulation = 0;
+- else if (_rtl8812ae_eq_n_byte(pregulation, "MKK", 3))
++ else if (strcmp(pregulation, "MKK") == 0)
+ regulation = 1;
+- else if (_rtl8812ae_eq_n_byte(pregulation, "ETSI", 4))
++ else if (strcmp(pregulation, "ETSI") == 0)
+ regulation = 2;
+- else if (_rtl8812ae_eq_n_byte(pregulation, "WW13", 4))
++ else if (strcmp(pregulation, "WW13") == 0)
+ regulation = 3;
+
+- if (_rtl8812ae_eq_n_byte(prate_section, "CCK", 3))
++ if (strcmp(prate_section, "CCK") == 0)
+ rate_section = 0;
+- else if (_rtl8812ae_eq_n_byte(prate_section, "OFDM", 4))
++ else if (strcmp(prate_section, "OFDM") == 0)
+ rate_section = 1;
+- else if (_rtl8812ae_eq_n_byte(prate_section, "HT", 2) &&
+- _rtl8812ae_eq_n_byte(prf_path, "1T", 2))
++ else if (strcmp(prate_section, "HT") == 0 &&
++ strcmp(prf_path, "1T") == 0)
+ rate_section = 2;
+- else if (_rtl8812ae_eq_n_byte(prate_section, "HT", 2) &&
+- _rtl8812ae_eq_n_byte(prf_path, "2T", 2))
++ else if (strcmp(prate_section, "HT") == 0 &&
++ strcmp(prf_path, "2T") == 0)
+ rate_section = 3;
+- else if (_rtl8812ae_eq_n_byte(prate_section, "VHT", 3) &&
+- _rtl8812ae_eq_n_byte(prf_path, "1T", 2))
++ else if (strcmp(prate_section, "VHT") == 0 &&
++ strcmp(prf_path, "1T") == 0)
+ rate_section = 4;
+- else if (_rtl8812ae_eq_n_byte(prate_section, "VHT", 3) &&
+- _rtl8812ae_eq_n_byte(prf_path, "2T", 2))
++ else if (strcmp(prate_section, "VHT") == 0 &&
++ strcmp(prf_path, "2T") == 0)
+ rate_section = 5;
+
+- if (_rtl8812ae_eq_n_byte(pbandwidth, "20M", 3))
++ if (strcmp(pbandwidth, "20M") == 0)
+ bandwidth = 0;
+- else if (_rtl8812ae_eq_n_byte(pbandwidth, "40M", 3))
++ else if (strcmp(pbandwidth, "40M") == 0)
+ bandwidth = 1;
+- else if (_rtl8812ae_eq_n_byte(pbandwidth, "80M", 3))
++ else if (strcmp(pbandwidth, "80M") == 0)
+ bandwidth = 2;
+- else if (_rtl8812ae_eq_n_byte(pbandwidth, "160M", 4))
++ else if (strcmp(pbandwidth, "160M") == 0)
+ bandwidth = 3;
+
+- if (_rtl8812ae_eq_n_byte(pband, "2.4G", 4)) {
++ if (strcmp(pband, "2.4G") == 0) {
+ ret = _rtl8812ae_phy_get_chnl_idx_of_txpwr_lmt(hw,
+ BAND_ON_2_4G,
+ channel);
+@@ -1718,7 +1706,7 @@ static void _rtl8812ae_phy_set_txpower_limit(struct ieee80211_hw *hw,
+ regulation, bandwidth, rate_section, channel_index,
+ rtlphy->txpwr_limit_2_4g[regulation][bandwidth]
+ [rate_section][channel_index][RF90_PATH_A]);
+- } else if (_rtl8812ae_eq_n_byte(pband, "5G", 2)) {
++ } else if (strcmp(pband, "5G") == 0) {
+ ret = _rtl8812ae_phy_get_chnl_idx_of_txpwr_lmt(hw,
+ BAND_ON_5G,
+ channel);
+diff --git a/drivers/net/wireless/realtek/rtw88/coex.c b/drivers/net/wireless/realtek/rtw88/coex.c
+index 38697237ee5f0..86467d2f8888c 100644
+--- a/drivers/net/wireless/realtek/rtw88/coex.c
++++ b/drivers/net/wireless/realtek/rtw88/coex.c
+@@ -4056,7 +4056,7 @@ void rtw_coex_display_coex_info(struct rtw_dev *rtwdev, struct seq_file *m)
+ rtwdev->stats.tx_throughput, rtwdev->stats.rx_throughput);
+ seq_printf(m, "%-40s = %u/ %u/ %u\n",
+ "IPS/ Low Power/ PS mode",
+- test_bit(RTW_FLAG_INACTIVE_PS, rtwdev->flags),
++ !test_bit(RTW_FLAG_POWERON, rtwdev->flags),
+ test_bit(RTW_FLAG_LEISURE_PS_DEEP, rtwdev->flags),
+ rtwdev->lps_conf.mode);
+
+diff --git a/drivers/net/wireless/realtek/rtw88/mac.c b/drivers/net/wireless/realtek/rtw88/mac.c
+index 98777f294945f..aa7c5901ef260 100644
+--- a/drivers/net/wireless/realtek/rtw88/mac.c
++++ b/drivers/net/wireless/realtek/rtw88/mac.c
+@@ -273,6 +273,11 @@ static int rtw_mac_power_switch(struct rtw_dev *rtwdev, bool pwr_on)
+ if (rtw_pwr_seq_parser(rtwdev, pwr_seq))
+ return -EINVAL;
+
++ if (pwr_on)
++ set_bit(RTW_FLAG_POWERON, rtwdev->flags);
++ else
++ clear_bit(RTW_FLAG_POWERON, rtwdev->flags);
++
+ return 0;
+ }
+
+@@ -335,6 +340,11 @@ int rtw_mac_power_on(struct rtw_dev *rtwdev)
+ ret = rtw_mac_power_switch(rtwdev, true);
+ if (ret == -EALREADY) {
+ rtw_mac_power_switch(rtwdev, false);
++
++ ret = rtw_mac_pre_system_cfg(rtwdev);
++ if (ret)
++ goto err;
++
+ ret = rtw_mac_power_switch(rtwdev, true);
+ if (ret)
+ goto err;
+diff --git a/drivers/net/wireless/realtek/rtw88/mac80211.c b/drivers/net/wireless/realtek/rtw88/mac80211.c
+index 776a9a9884b5d..3b92ac611d3fd 100644
+--- a/drivers/net/wireless/realtek/rtw88/mac80211.c
++++ b/drivers/net/wireless/realtek/rtw88/mac80211.c
+@@ -737,7 +737,7 @@ static void rtw_ra_mask_info_update(struct rtw_dev *rtwdev,
+ br_data.rtwdev = rtwdev;
+ br_data.vif = vif;
+ br_data.mask = mask;
+- rtw_iterate_stas_atomic(rtwdev, rtw_ra_mask_info_update_iter, &br_data);
++ rtw_iterate_stas(rtwdev, rtw_ra_mask_info_update_iter, &br_data);
+ }
+
+ static int rtw_ops_set_bitrate_mask(struct ieee80211_hw *hw,
+@@ -746,7 +746,9 @@ static int rtw_ops_set_bitrate_mask(struct ieee80211_hw *hw,
+ {
+ struct rtw_dev *rtwdev = hw->priv;
+
++ mutex_lock(&rtwdev->mutex);
+ rtw_ra_mask_info_update(rtwdev, vif, mask);
++ mutex_unlock(&rtwdev->mutex);
+
+ return 0;
+ }
+diff --git a/drivers/net/wireless/realtek/rtw88/main.c b/drivers/net/wireless/realtek/rtw88/main.c
+index 888427cf3bdf9..b2e78737bd5d0 100644
+--- a/drivers/net/wireless/realtek/rtw88/main.c
++++ b/drivers/net/wireless/realtek/rtw88/main.c
+@@ -241,8 +241,10 @@ static void rtw_watch_dog_work(struct work_struct *work)
+ rtw_phy_dynamic_mechanism(rtwdev);
+
+ data.rtwdev = rtwdev;
+- /* use atomic version to avoid taking local->iflist_mtx mutex */
+- rtw_iterate_vifs_atomic(rtwdev, rtw_vif_watch_dog_iter, &data);
++ /* rtw_iterate_vifs internally uses an atomic iterator which is needed
++ * to avoid taking local->iflist_mtx mutex
++ */
++ rtw_iterate_vifs(rtwdev, rtw_vif_watch_dog_iter, &data);
+
+ /* fw supports only one station associated to enter lps, if there are
+ * more than two stations associated to the AP, then we can not enter
+diff --git a/drivers/net/wireless/realtek/rtw88/main.h b/drivers/net/wireless/realtek/rtw88/main.h
+index 165f299e8e1f9..d4a53d5567451 100644
+--- a/drivers/net/wireless/realtek/rtw88/main.h
++++ b/drivers/net/wireless/realtek/rtw88/main.h
+@@ -356,7 +356,7 @@ enum rtw_flags {
+ RTW_FLAG_RUNNING,
+ RTW_FLAG_FW_RUNNING,
+ RTW_FLAG_SCANNING,
+- RTW_FLAG_INACTIVE_PS,
++ RTW_FLAG_POWERON,
+ RTW_FLAG_LEISURE_PS,
+ RTW_FLAG_LEISURE_PS_DEEP,
+ RTW_FLAG_DIG_DISABLE,
+diff --git a/drivers/net/wireless/realtek/rtw88/ps.c b/drivers/net/wireless/realtek/rtw88/ps.c
+index 11594940d6b00..996365575f44f 100644
+--- a/drivers/net/wireless/realtek/rtw88/ps.c
++++ b/drivers/net/wireless/realtek/rtw88/ps.c
+@@ -25,7 +25,7 @@ static int rtw_ips_pwr_up(struct rtw_dev *rtwdev)
+
+ int rtw_enter_ips(struct rtw_dev *rtwdev)
+ {
+- if (test_and_set_bit(RTW_FLAG_INACTIVE_PS, rtwdev->flags))
++ if (!test_bit(RTW_FLAG_POWERON, rtwdev->flags))
+ return 0;
+
+ rtw_coex_ips_notify(rtwdev, COEX_IPS_ENTER);
+@@ -50,7 +50,7 @@ int rtw_leave_ips(struct rtw_dev *rtwdev)
+ {
+ int ret;
+
+- if (!test_and_clear_bit(RTW_FLAG_INACTIVE_PS, rtwdev->flags))
++ if (test_bit(RTW_FLAG_POWERON, rtwdev->flags))
+ return 0;
+
+ rtw_hci_link_ps(rtwdev, false);
+diff --git a/drivers/net/wireless/realtek/rtw88/wow.c b/drivers/net/wireless/realtek/rtw88/wow.c
+index 89dc595094d5c..16ddee577efec 100644
+--- a/drivers/net/wireless/realtek/rtw88/wow.c
++++ b/drivers/net/wireless/realtek/rtw88/wow.c
+@@ -592,7 +592,7 @@ static int rtw_wow_leave_no_link_ps(struct rtw_dev *rtwdev)
+ if (rtw_get_lps_deep_mode(rtwdev) != LPS_DEEP_MODE_NONE)
+ rtw_leave_lps_deep(rtwdev);
+ } else {
+- if (test_bit(RTW_FLAG_INACTIVE_PS, rtwdev->flags)) {
++ if (!test_bit(RTW_FLAG_POWERON, rtwdev->flags)) {
+ rtw_wow->ips_enabled = true;
+ ret = rtw_leave_ips(rtwdev);
+ if (ret)
+diff --git a/drivers/net/wireless/realtek/rtw89/core.c b/drivers/net/wireless/realtek/rtw89/core.c
+index 931aff8b5dc95..e99eccf11c762 100644
+--- a/drivers/net/wireless/realtek/rtw89/core.c
++++ b/drivers/net/wireless/realtek/rtw89/core.c
+@@ -3124,6 +3124,8 @@ int rtw89_core_init(struct rtw89_dev *rtwdev)
+ INIT_DELAYED_WORK(&rtwdev->cfo_track_work, rtw89_phy_cfo_track_work);
+ INIT_DELAYED_WORK(&rtwdev->forbid_ba_work, rtw89_forbid_ba_work);
+ rtwdev->txq_wq = alloc_workqueue("rtw89_tx_wq", WQ_UNBOUND | WQ_HIGHPRI, 0);
++ if (!rtwdev->txq_wq)
++ return -ENOMEM;
+ spin_lock_init(&rtwdev->ba_lock);
+ spin_lock_init(&rtwdev->rpwm_lock);
+ mutex_init(&rtwdev->mutex);
+@@ -3149,6 +3151,7 @@ int rtw89_core_init(struct rtw89_dev *rtwdev)
+ ret = rtw89_load_firmware(rtwdev);
+ if (ret) {
+ rtw89_warn(rtwdev, "no firmware loaded\n");
++ destroy_workqueue(rtwdev->txq_wq);
+ return ret;
+ }
+ rtw89_ser_init(rtwdev);
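The rtw89_core_init() change adds the missing NULL check on
alloc_workqueue() and tears the workqueue down again when firmware
loading fails. The general unwind shape, as a self-contained sketch
(stub functions stand in for the kernel APIs):

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct wq { int unused; };

    static struct wq *alloc_workqueue(void) { return malloc(sizeof(struct wq)); }
    static void destroy_workqueue(struct wq *w) { free(w); }
    static int load_firmware(void) { return -ENOENT; }  /* simulate failure */

    static int core_init(struct wq **out)
    {
        struct wq *txq_wq = alloc_workqueue();
        int ret;

        if (!txq_wq)
            return -ENOMEM;             /* the allocation really can fail */

        ret = load_firmware();
        if (ret) {
            destroy_workqueue(txq_wq);  /* unwind everything set up above */
            return ret;
        }

        *out = txq_wq;
        return 0;
    }

    int main(void)
    {
        struct wq *wq;

        printf("core_init: %d\n", core_init(&wq));
        return 0;
    }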
+diff --git a/drivers/net/wireless/realtek/rtw89/debug.c b/drivers/net/wireless/realtek/rtw89/debug.c
+index 8297e35bfa52b..6730eea930ece 100644
+--- a/drivers/net/wireless/realtek/rtw89/debug.c
++++ b/drivers/net/wireless/realtek/rtw89/debug.c
+@@ -615,6 +615,7 @@ rtw89_debug_priv_mac_reg_dump_select(struct file *filp,
+ struct seq_file *m = (struct seq_file *)filp->private_data;
+ struct rtw89_debugfs_priv *debugfs_priv = m->private;
+ struct rtw89_dev *rtwdev = debugfs_priv->rtwdev;
++ const struct rtw89_chip_info *chip = rtwdev->chip;
+ char buf[32];
+ size_t buf_size;
+ int sel;
+@@ -634,6 +635,12 @@ rtw89_debug_priv_mac_reg_dump_select(struct file *filp,
+ return -EINVAL;
+ }
+
++ if (sel == RTW89_DBG_SEL_MAC_30 && chip->chip_id != RTL8852C) {
++ rtw89_info(rtwdev, "sel %d is address hole on chip %d\n", sel,
++ chip->chip_id);
++ return -EINVAL;
++ }
++
+ debugfs_priv->cb_data = sel;
+ rtw89_info(rtwdev, "select mac page dump %d\n", debugfs_priv->cb_data);
+
+diff --git a/drivers/net/wireless/realtek/rtw89/fw.c b/drivers/net/wireless/realtek/rtw89/fw.c
+index de1f23779fc62..3b7af8faca505 100644
+--- a/drivers/net/wireless/realtek/rtw89/fw.c
++++ b/drivers/net/wireless/realtek/rtw89/fw.c
+@@ -2665,8 +2665,10 @@ static int rtw89_append_probe_req_ie(struct rtw89_dev *rtwdev,
+
+ list_add_tail(&info->list, &scan_info->pkt_list[band]);
+ ret = rtw89_fw_h2c_add_pkt_offload(rtwdev, &info->id, new);
+- if (ret)
++ if (ret) {
++ kfree_skb(new);
+ goto out;
++ }
+
+ kfree_skb(new);
+ }
+diff --git a/drivers/net/wireless/realtek/rtw89/fw.h b/drivers/net/wireless/realtek/rtw89/fw.h
+index 4d2f9ea9e0022..2e4ca1cc5cae9 100644
+--- a/drivers/net/wireless/realtek/rtw89/fw.h
++++ b/drivers/net/wireless/realtek/rtw89/fw.h
+@@ -3209,16 +3209,16 @@ static inline struct rtw89_fw_c2h_attr *RTW89_SKB_C2H_CB(struct sk_buff *skb)
+ le32_get_bits(*((const __le32 *)(c2h) + 5), GENMASK(25, 24))
+
+ #define RTW89_GET_MAC_C2H_MCC_RCV_ACK_GROUP(c2h) \
+- le32_get_bits(*((const __le32 *)(c2h)), GENMASK(1, 0))
++ le32_get_bits(*((const __le32 *)(c2h) + 2), GENMASK(1, 0))
+ #define RTW89_GET_MAC_C2H_MCC_RCV_ACK_H2C_FUNC(c2h) \
+- le32_get_bits(*((const __le32 *)(c2h)), GENMASK(15, 8))
++ le32_get_bits(*((const __le32 *)(c2h) + 2), GENMASK(15, 8))
+
+ #define RTW89_GET_MAC_C2H_MCC_REQ_ACK_GROUP(c2h) \
+- le32_get_bits(*((const __le32 *)(c2h)), GENMASK(1, 0))
++ le32_get_bits(*((const __le32 *)(c2h) + 2), GENMASK(1, 0))
+ #define RTW89_GET_MAC_C2H_MCC_REQ_ACK_H2C_RETURN(c2h) \
+- le32_get_bits(*((const __le32 *)(c2h)), GENMASK(7, 2))
++ le32_get_bits(*((const __le32 *)(c2h) + 2), GENMASK(7, 2))
+ #define RTW89_GET_MAC_C2H_MCC_REQ_ACK_H2C_FUNC(c2h) \
+- le32_get_bits(*((const __le32 *)(c2h)), GENMASK(15, 8))
++ le32_get_bits(*((const __le32 *)(c2h) + 2), GENMASK(15, 8))
+
+ struct rtw89_mac_mcc_tsf_rpt {
+ u32 macid_x;
+@@ -3232,30 +3232,30 @@ struct rtw89_mac_mcc_tsf_rpt {
+ static_assert(sizeof(struct rtw89_mac_mcc_tsf_rpt) <= RTW89_COMPLETION_BUF_SIZE);
+
+ #define RTW89_GET_MAC_C2H_MCC_TSF_RPT_MACID_X(c2h) \
+- le32_get_bits(*((const __le32 *)(c2h)), GENMASK(7, 0))
++ le32_get_bits(*((const __le32 *)(c2h) + 2), GENMASK(7, 0))
+ #define RTW89_GET_MAC_C2H_MCC_TSF_RPT_MACID_Y(c2h) \
+- le32_get_bits(*((const __le32 *)(c2h)), GENMASK(15, 8))
++ le32_get_bits(*((const __le32 *)(c2h) + 2), GENMASK(15, 8))
+ #define RTW89_GET_MAC_C2H_MCC_TSF_RPT_GROUP(c2h) \
+- le32_get_bits(*((const __le32 *)(c2h)), GENMASK(17, 16))
++ le32_get_bits(*((const __le32 *)(c2h) + 2), GENMASK(17, 16))
+ #define RTW89_GET_MAC_C2H_MCC_TSF_RPT_TSF_LOW_X(c2h) \
+- le32_get_bits(*((const __le32 *)(c2h) + 1), GENMASK(31, 0))
++ le32_get_bits(*((const __le32 *)(c2h) + 3), GENMASK(31, 0))
+ #define RTW89_GET_MAC_C2H_MCC_TSF_RPT_TSF_HIGH_X(c2h) \
+- le32_get_bits(*((const __le32 *)(c2h) + 2), GENMASK(31, 0))
++ le32_get_bits(*((const __le32 *)(c2h) + 4), GENMASK(31, 0))
+ #define RTW89_GET_MAC_C2H_MCC_TSF_RPT_TSF_LOW_Y(c2h) \
+- le32_get_bits(*((const __le32 *)(c2h) + 3), GENMASK(31, 0))
++ le32_get_bits(*((const __le32 *)(c2h) + 5), GENMASK(31, 0))
+ #define RTW89_GET_MAC_C2H_MCC_TSF_RPT_TSF_HIGH_Y(c2h) \
+- le32_get_bits(*((const __le32 *)(c2h) + 4), GENMASK(31, 0))
++ le32_get_bits(*((const __le32 *)(c2h) + 6), GENMASK(31, 0))
+
+ #define RTW89_GET_MAC_C2H_MCC_STATUS_RPT_STATUS(c2h) \
+- le32_get_bits(*((const __le32 *)(c2h)), GENMASK(5, 0))
++ le32_get_bits(*((const __le32 *)(c2h) + 2), GENMASK(5, 0))
+ #define RTW89_GET_MAC_C2H_MCC_STATUS_RPT_GROUP(c2h) \
+- le32_get_bits(*((const __le32 *)(c2h)), GENMASK(7, 6))
++ le32_get_bits(*((const __le32 *)(c2h) + 2), GENMASK(7, 6))
+ #define RTW89_GET_MAC_C2H_MCC_STATUS_RPT_MACID(c2h) \
+- le32_get_bits(*((const __le32 *)(c2h)), GENMASK(15, 8))
++ le32_get_bits(*((const __le32 *)(c2h) + 2), GENMASK(15, 8))
+ #define RTW89_GET_MAC_C2H_MCC_STATUS_RPT_TSF_LOW(c2h) \
+- le32_get_bits(*((const __le32 *)(c2h) + 1), GENMASK(31, 0))
++ le32_get_bits(*((const __le32 *)(c2h) + 3), GENMASK(31, 0))
+ #define RTW89_GET_MAC_C2H_MCC_STATUS_RPT_TSF_HIGH(c2h) \
+- le32_get_bits(*((const __le32 *)(c2h) + 2), GENMASK(31, 0))
++ le32_get_bits(*((const __le32 *)(c2h) + 4), GENMASK(31, 0))
+
+ #define RTW89_FW_HDR_SIZE 32
+ #define RTW89_FW_SECTION_HDR_SIZE 16
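All of the fw.h accessor fixes are the same one-line change: the MCC
C2H payload starts after a two-dword header, so each field must be read
from dword offset +2 (and the multi-dword TSF fields shift accordingly).
A runnable toy version of the offset arithmetic, with a hand-rolled
stand-in for le32_get_bits():

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative only: payload fields live after a 2-dword header. */
    #define C2H_HDR_DWORDS 2

    static uint32_t le32_field(const uint8_t *buf, unsigned int dword,
                               uint32_t mask, unsigned int shift)
    {
        const uint8_t *p = buf + 4 * dword;
        uint32_t v = (uint32_t)p[0] | p[1] << 8 | p[2] << 16 |
                     (uint32_t)p[3] << 24;

        return (v & mask) >> shift;   /* like le32_get_bits() + GENMASK() */
    }

    int main(void)
    {
        uint8_t c2h[12] = { 0 };

        c2h[8] = 0x42;  /* first byte of the first payload dword */
        printf("group=%u\n", le32_field(c2h, C2H_HDR_DWORDS, 0x3, 0));
        printf("func=%u\n",  le32_field(c2h, C2H_HDR_DWORDS, 0xff00, 8));
        return 0;
    }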
+diff --git a/drivers/net/wireless/realtek/rtw89/pci.c b/drivers/net/wireless/realtek/rtw89/pci.c
+index 1c4500ba777c6..0ea734c81b4f0 100644
+--- a/drivers/net/wireless/realtek/rtw89/pci.c
++++ b/drivers/net/wireless/realtek/rtw89/pci.c
+@@ -1384,7 +1384,7 @@ static int rtw89_pci_ops_tx_write(struct rtw89_dev *rtwdev, struct rtw89_core_tx
+ return 0;
+ }
+
+-static const struct rtw89_pci_bd_ram bd_ram_table[RTW89_TXCH_NUM] = {
++const struct rtw89_pci_bd_ram rtw89_bd_ram_table_dual[RTW89_TXCH_NUM] = {
+ [RTW89_TXCH_ACH0] = {.start_idx = 0, .max_num = 5, .min_num = 2},
+ [RTW89_TXCH_ACH1] = {.start_idx = 5, .max_num = 5, .min_num = 2},
+ [RTW89_TXCH_ACH2] = {.start_idx = 10, .max_num = 5, .min_num = 2},
+@@ -1399,11 +1399,24 @@ static const struct rtw89_pci_bd_ram bd_ram_table[RTW89_TXCH_NUM] = {
+ [RTW89_TXCH_CH11] = {.start_idx = 55, .max_num = 5, .min_num = 1},
+ [RTW89_TXCH_CH12] = {.start_idx = 60, .max_num = 4, .min_num = 1},
+ };
++EXPORT_SYMBOL(rtw89_bd_ram_table_dual);
++
++const struct rtw89_pci_bd_ram rtw89_bd_ram_table_single[RTW89_TXCH_NUM] = {
++ [RTW89_TXCH_ACH0] = {.start_idx = 0, .max_num = 5, .min_num = 2},
++ [RTW89_TXCH_ACH1] = {.start_idx = 5, .max_num = 5, .min_num = 2},
++ [RTW89_TXCH_ACH2] = {.start_idx = 10, .max_num = 5, .min_num = 2},
++ [RTW89_TXCH_ACH3] = {.start_idx = 15, .max_num = 5, .min_num = 2},
++ [RTW89_TXCH_CH8] = {.start_idx = 20, .max_num = 4, .min_num = 1},
++ [RTW89_TXCH_CH9] = {.start_idx = 24, .max_num = 4, .min_num = 1},
++ [RTW89_TXCH_CH12] = {.start_idx = 28, .max_num = 4, .min_num = 1},
++};
++EXPORT_SYMBOL(rtw89_bd_ram_table_single);
+
+ static void rtw89_pci_reset_trx_rings(struct rtw89_dev *rtwdev)
+ {
+ struct rtw89_pci *rtwpci = (struct rtw89_pci *)rtwdev->priv;
+ const struct rtw89_pci_info *info = rtwdev->pci_info;
++ const struct rtw89_pci_bd_ram *bd_ram_table = *info->bd_ram_table;
+ struct rtw89_pci_tx_ring *tx_ring;
+ struct rtw89_pci_rx_ring *rx_ring;
+ struct rtw89_pci_dma_ring *bd_ring;
+diff --git a/drivers/net/wireless/realtek/rtw89/pci.h b/drivers/net/wireless/realtek/rtw89/pci.h
+index 7d033501d4d95..1e19740db8c54 100644
+--- a/drivers/net/wireless/realtek/rtw89/pci.h
++++ b/drivers/net/wireless/realtek/rtw89/pci.h
+@@ -750,6 +750,12 @@ struct rtw89_pci_ch_dma_addr_set {
+ struct rtw89_pci_ch_dma_addr rx[RTW89_RXCH_NUM];
+ };
+
++struct rtw89_pci_bd_ram {
++ u8 start_idx;
++ u8 max_num;
++ u8 min_num;
++};
++
+ struct rtw89_pci_info {
+ enum mac_ax_bd_trunc_mode txbd_trunc_mode;
+ enum mac_ax_bd_trunc_mode rxbd_trunc_mode;
+@@ -785,6 +791,7 @@ struct rtw89_pci_info {
+ u32 tx_dma_ch_mask;
+ const struct rtw89_pci_bd_idx_addr *bd_idx_addr_low_power;
+ const struct rtw89_pci_ch_dma_addr_set *dma_addr_set;
++ const struct rtw89_pci_bd_ram (*bd_ram_table)[RTW89_TXCH_NUM];
+
+ int (*ltr_set)(struct rtw89_dev *rtwdev, bool en);
+ u32 (*fill_txaddr_info)(struct rtw89_dev *rtwdev,
+@@ -798,12 +805,6 @@ struct rtw89_pci_info {
+ struct rtw89_pci_isrs *isrs);
+ };
+
+-struct rtw89_pci_bd_ram {
+- u8 start_idx;
+- u8 max_num;
+- u8 min_num;
+-};
+-
+ struct rtw89_pci_tx_data {
+ dma_addr_t dma;
+ };
+@@ -1057,6 +1058,8 @@ static inline bool rtw89_pci_ltr_is_err_reg_val(u32 val)
+ extern const struct dev_pm_ops rtw89_pm_ops;
+ extern const struct rtw89_pci_ch_dma_addr_set rtw89_pci_ch_dma_addr_set;
+ extern const struct rtw89_pci_ch_dma_addr_set rtw89_pci_ch_dma_addr_set_v1;
++extern const struct rtw89_pci_bd_ram rtw89_bd_ram_table_dual[RTW89_TXCH_NUM];
++extern const struct rtw89_pci_bd_ram rtw89_bd_ram_table_single[RTW89_TXCH_NUM];
+
+ struct pci_device_id;
+
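The pci.c/pci.h hunks turn the single static BD RAM table into exported
per-chip tables selected through a pointer-to-array member in
rtw89_pci_info, so single-band chips like the 8852B get a layout that
matches their reduced channel set. The pointer-to-array idiom in
miniature (values and names are illustrative, not the real tables):

    #include <stdio.h>

    #define TXCH_NUM 3

    struct bd_ram { unsigned char start_idx, max_num, min_num; };

    static const struct bd_ram table_dual[TXCH_NUM] = {
        { 0, 5, 2 }, { 5, 5, 2 }, { 10, 5, 2 },
    };
    static const struct bd_ram table_single[TXCH_NUM] = {
        { 0, 5, 2 }, { 5, 4, 1 }, { 9, 4, 1 },
    };

    struct pci_info {
        /* pointer to an array of TXCH_NUM entries, as in the kernel */
        const struct bd_ram (*bd_ram_table)[TXCH_NUM];
    };

    static const struct pci_info chip_a = { .bd_ram_table = &table_dual };
    static const struct pci_info chip_b = { .bd_ram_table = &table_single };

    static void reset_rings(const struct pci_info *info)
    {
        const struct bd_ram *t = *info->bd_ram_table;  /* deref to array */

        for (int i = 0; i < TXCH_NUM; i++)
            printf("ch%d: start=%u max=%u\n", i, t[i].start_idx, t[i].max_num);
    }

    int main(void)
    {
        reset_rings(&chip_a);
        reset_rings(&chip_b);
        return 0;
    }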
+diff --git a/drivers/net/wireless/realtek/rtw89/reg.h b/drivers/net/wireless/realtek/rtw89/reg.h
+index 5324e645728bb..ca6f6c3e63095 100644
+--- a/drivers/net/wireless/realtek/rtw89/reg.h
++++ b/drivers/net/wireless/realtek/rtw89/reg.h
+@@ -3671,6 +3671,8 @@
+ #define RR_TXRSV_GAPK BIT(19)
+ #define RR_BIAS 0x5e
+ #define RR_BIAS_GAPK BIT(19)
++#define RR_TXAC 0x5f
++#define RR_TXAC_IQG GENMASK(3, 0)
+ #define RR_BIASA 0x60
+ #define RR_BIASA_TXG GENMASK(15, 12)
+ #define RR_BIASA_TXA GENMASK(19, 16)
+diff --git a/drivers/net/wireless/realtek/rtw89/rtw8852ae.c b/drivers/net/wireless/realtek/rtw89/rtw8852ae.c
+index 0cd8c0c44d19d..d835a44a1d0d0 100644
+--- a/drivers/net/wireless/realtek/rtw89/rtw8852ae.c
++++ b/drivers/net/wireless/realtek/rtw89/rtw8852ae.c
+@@ -44,6 +44,7 @@ static const struct rtw89_pci_info rtw8852a_pci_info = {
+ .tx_dma_ch_mask = 0,
+ .bd_idx_addr_low_power = NULL,
+ .dma_addr_set = &rtw89_pci_ch_dma_addr_set,
++ .bd_ram_table = &rtw89_bd_ram_table_dual,
+
+ .ltr_set = rtw89_pci_ltr_set,
+ .fill_txaddr_info = rtw89_pci_fill_txaddr_info,
+diff --git a/drivers/net/wireless/realtek/rtw89/rtw8852be.c b/drivers/net/wireless/realtek/rtw89/rtw8852be.c
+index 0ef2ca8efeb0e..ecf39d2d9f81f 100644
+--- a/drivers/net/wireless/realtek/rtw89/rtw8852be.c
++++ b/drivers/net/wireless/realtek/rtw89/rtw8852be.c
+@@ -46,6 +46,7 @@ static const struct rtw89_pci_info rtw8852b_pci_info = {
+ BIT(RTW89_TXCH_CH10) | BIT(RTW89_TXCH_CH11),
+ .bd_idx_addr_low_power = NULL,
+ .dma_addr_set = &rtw89_pci_ch_dma_addr_set,
++ .bd_ram_table = &rtw89_bd_ram_table_single,
+
+ .ltr_set = rtw89_pci_ltr_set,
+ .fill_txaddr_info = rtw89_pci_fill_txaddr_info,
+diff --git a/drivers/net/wireless/realtek/rtw89/rtw8852c_rfk.c b/drivers/net/wireless/realtek/rtw89/rtw8852c_rfk.c
+index 60cd676fe22c9..f3a07b0e672f7 100644
+--- a/drivers/net/wireless/realtek/rtw89/rtw8852c_rfk.c
++++ b/drivers/net/wireless/realtek/rtw89/rtw8852c_rfk.c
+@@ -337,7 +337,7 @@ static void _dack_reload_by_path(struct rtw89_dev *rtwdev,
+ (dack->dadck_d[path][index] << 14);
+ addr = 0xc210 + offset;
+ rtw89_phy_write32(rtwdev, addr, val32);
+- rtw89_phy_write32_set(rtwdev, addr, BIT(1));
++ rtw89_phy_write32_set(rtwdev, addr, BIT(0));
+ }
+
+ static void _dack_reload(struct rtw89_dev *rtwdev, enum rtw89_rf_path path)
+@@ -1872,12 +1872,11 @@ static void _dpk_rf_setting(struct rtw89_dev *rtwdev, u8 gain,
+ 0x50101 | BIT(rtwdev->dbcc_en));
+ rtw89_write_rf(rtwdev, path, RR_MOD_V1, RR_MOD_MASK, RF_DPK);
+
+- if (dpk->bp[path][kidx].band == RTW89_BAND_6G && dpk->bp[path][kidx].ch >= 161) {
++ if (dpk->bp[path][kidx].band == RTW89_BAND_6G && dpk->bp[path][kidx].ch >= 161)
+ rtw89_write_rf(rtwdev, path, RR_IQGEN, RR_IQGEN_BIAS, 0x8);
+- rtw89_write_rf(rtwdev, path, RR_LOGEN, RR_LOGEN_RPT, 0xd);
+- } else {
+- rtw89_write_rf(rtwdev, path, RR_LOGEN, RR_LOGEN_RPT, 0xd);
+- }
++
++ rtw89_write_rf(rtwdev, path, RR_LOGEN, RR_LOGEN_RPT, 0xd);
++ rtw89_write_rf(rtwdev, path, RR_TXAC, RR_TXAC_IQG, 0x8);
+
+ rtw89_write_rf(rtwdev, path, RR_RXA2, RR_RXA2_ATT, 0x0);
+ rtw89_write_rf(rtwdev, path, RR_TXIQK, RR_TXIQK_ATT2, 0x3);
+diff --git a/drivers/net/wireless/realtek/rtw89/rtw8852ce.c b/drivers/net/wireless/realtek/rtw89/rtw8852ce.c
+index 35901f64d17de..80490a5437df6 100644
+--- a/drivers/net/wireless/realtek/rtw89/rtw8852ce.c
++++ b/drivers/net/wireless/realtek/rtw89/rtw8852ce.c
+@@ -53,6 +53,7 @@ static const struct rtw89_pci_info rtw8852c_pci_info = {
+ .tx_dma_ch_mask = 0,
+ .bd_idx_addr_low_power = &rtw8852c_bd_idx_addr_low_power,
+ .dma_addr_set = &rtw89_pci_ch_dma_addr_set_v1,
++ .bd_ram_table = &rtw89_bd_ram_table_dual,
+
+ .ltr_set = rtw89_pci_ltr_set_v1,
+ .fill_txaddr_info = rtw89_pci_fill_txaddr_info_v1,
+diff --git a/drivers/net/wireless/rsi/rsi_91x_coex.c b/drivers/net/wireless/rsi/rsi_91x_coex.c
+index 8a3d86897ea8e..45ac9371f2621 100644
+--- a/drivers/net/wireless/rsi/rsi_91x_coex.c
++++ b/drivers/net/wireless/rsi/rsi_91x_coex.c
+@@ -160,6 +160,7 @@ int rsi_coex_attach(struct rsi_common *common)
+ rsi_coex_scheduler_thread,
+ "Coex-Tx-Thread")) {
+ rsi_dbg(ERR_ZONE, "%s: Unable to init tx thrd\n", __func__);
++ kfree(coex_cb);
+ return -EINVAL;
+ }
+ return 0;
+diff --git a/drivers/net/wireless/wl3501_cs.c b/drivers/net/wireless/wl3501_cs.c
+index 1b532e00a56fb..7fb2f95134760 100644
+--- a/drivers/net/wireless/wl3501_cs.c
++++ b/drivers/net/wireless/wl3501_cs.c
+@@ -1328,7 +1328,7 @@ static netdev_tx_t wl3501_hard_start_xmit(struct sk_buff *skb,
+ } else {
+ ++dev->stats.tx_packets;
+ dev->stats.tx_bytes += skb->len;
+- kfree_skb(skb);
++ dev_kfree_skb_irq(skb);
+
+ if (this->tx_buffer_cnt < 2)
+ netif_stop_queue(dev);
+diff --git a/drivers/nvdimm/bus.c b/drivers/nvdimm/bus.c
+index b38d0355b0ac3..5ad49056921b5 100644
+--- a/drivers/nvdimm/bus.c
++++ b/drivers/nvdimm/bus.c
+@@ -508,7 +508,7 @@ static void nd_async_device_unregister(void *d, async_cookie_t cookie)
+ put_device(dev);
+ }
+
+-void nd_device_register(struct device *dev)
++static void __nd_device_register(struct device *dev, bool sync)
+ {
+ if (!dev)
+ return;
+@@ -531,11 +531,24 @@ void nd_device_register(struct device *dev)
+ }
+ get_device(dev);
+
+- async_schedule_dev_domain(nd_async_device_register, dev,
+- &nd_async_domain);
++ if (sync)
++ nd_async_device_register(dev, 0);
++ else
++ async_schedule_dev_domain(nd_async_device_register, dev,
++ &nd_async_domain);
++}
++
++void nd_device_register(struct device *dev)
++{
++ __nd_device_register(dev, false);
+ }
+ EXPORT_SYMBOL(nd_device_register);
+
++void nd_device_register_sync(struct device *dev)
++{
++ __nd_device_register(dev, true);
++}
++
+ void nd_device_unregister(struct device *dev, enum nd_async_mode mode)
+ {
+ bool killed;
+diff --git a/drivers/nvdimm/dimm_devs.c b/drivers/nvdimm/dimm_devs.c
+index 1fc081dcf6315..6d3b03a9fa02a 100644
+--- a/drivers/nvdimm/dimm_devs.c
++++ b/drivers/nvdimm/dimm_devs.c
+@@ -624,7 +624,10 @@ struct nvdimm *__nvdimm_create(struct nvdimm_bus *nvdimm_bus,
+ nvdimm->sec.ext_flags = nvdimm_security_flags(nvdimm, NVDIMM_MASTER);
+ device_initialize(dev);
+ lockdep_set_class(&dev->mutex, &nvdimm_key);
+- nd_device_register(dev);
++ if (test_bit(NDD_REGISTER_SYNC, &flags))
++ nd_device_register_sync(dev);
++ else
++ nd_device_register(dev);
+
+ return nvdimm;
+ }
+diff --git a/drivers/nvdimm/nd-core.h b/drivers/nvdimm/nd-core.h
+index cc86ee09d7c08..845408f106556 100644
+--- a/drivers/nvdimm/nd-core.h
++++ b/drivers/nvdimm/nd-core.h
+@@ -107,6 +107,7 @@ int nvdimm_bus_create_ndctl(struct nvdimm_bus *nvdimm_bus);
+ void nvdimm_bus_destroy_ndctl(struct nvdimm_bus *nvdimm_bus);
+ void nd_synchronize(void);
+ void nd_device_register(struct device *dev);
++void nd_device_register_sync(struct device *dev);
+ struct nd_label_id;
+ char *nd_label_gen_id(struct nd_label_id *label_id, const uuid_t *uuid,
+ u32 flags);
+diff --git a/drivers/opp/debugfs.c b/drivers/opp/debugfs.c
+index 96a30a032c5f9..2c7fb683441ef 100644
+--- a/drivers/opp/debugfs.c
++++ b/drivers/opp/debugfs.c
+@@ -235,7 +235,7 @@ static void opp_migrate_dentry(struct opp_device *opp_dev,
+
+ dentry = debugfs_rename(rootdir, opp_dev->dentry, rootdir,
+ opp_table->dentry_name);
+- if (!dentry) {
++ if (IS_ERR(dentry)) {
+ dev_err(dev, "%s: Failed to rename link from: %s to %s\n",
+ __func__, dev_name(opp_dev->dev), dev_name(dev));
+ return;
+diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
+index 77e5dc7b88ad4..7e23c74fb4230 100644
+--- a/drivers/pci/controller/dwc/pcie-qcom.c
++++ b/drivers/pci/controller/dwc/pcie-qcom.c
+@@ -1534,8 +1534,19 @@ err_deinit:
+ return ret;
+ }
+
++static void qcom_pcie_host_deinit(struct dw_pcie_rp *pp)
++{
++ struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
++ struct qcom_pcie *pcie = to_qcom_pcie(pci);
++
++ qcom_ep_reset_assert(pcie);
++ phy_power_off(pcie->phy);
++ pcie->cfg->ops->deinit(pcie);
++}
++
+ static const struct dw_pcie_host_ops qcom_pcie_dw_ops = {
+- .host_init = qcom_pcie_host_init,
++ .host_init = qcom_pcie_host_init,
++ .host_deinit = qcom_pcie_host_deinit,
+ };
+
+ /* Qcom IP rev.: 2.1.0 Synopsys IP rev.: 4.01a */
+diff --git a/drivers/pci/controller/pcie-mt7621.c b/drivers/pci/controller/pcie-mt7621.c
+index ee7aad09d6277..63a5f4463a9f6 100644
+--- a/drivers/pci/controller/pcie-mt7621.c
++++ b/drivers/pci/controller/pcie-mt7621.c
+@@ -60,6 +60,7 @@
+ #define PCIE_PORT_LINKUP BIT(0)
+ #define PCIE_PORT_CNT 3
+
++#define INIT_PORTS_DELAY_MS 100
+ #define PERST_DELAY_MS 100
+
+ /**
+@@ -369,6 +370,7 @@ static int mt7621_pcie_init_ports(struct mt7621_pcie *pcie)
+ }
+ }
+
++ msleep(INIT_PORTS_DELAY_MS);
+ mt7621_pcie_reset_ep_deassert(pcie);
+
+ tmp = NULL;
+diff --git a/drivers/pci/endpoint/functions/pci-epf-vntb.c b/drivers/pci/endpoint/functions/pci-epf-vntb.c
+index 04698e7995a54..b7c7a8af99f4f 100644
+--- a/drivers/pci/endpoint/functions/pci-epf-vntb.c
++++ b/drivers/pci/endpoint/functions/pci-epf-vntb.c
+@@ -652,6 +652,7 @@ err_alloc_mem:
+ /**
+ * epf_ntb_mw_bar_clear() - Clear Memory window BARs
+ * @ntb: NTB device that facilitates communication between HOST and VHOST
++ * @num_mws: the number of Memory window BARs to be cleared
+ */
+ static void epf_ntb_mw_bar_clear(struct epf_ntb *ntb, int num_mws)
+ {
+diff --git a/drivers/pci/iov.c b/drivers/pci/iov.c
+index 952217572113c..b2e8322755c17 100644
+--- a/drivers/pci/iov.c
++++ b/drivers/pci/iov.c
+@@ -14,7 +14,7 @@
+ #include <linux/delay.h>
+ #include "pci.h"
+
+-#define VIRTFN_ID_LEN 16
++#define VIRTFN_ID_LEN 17 /* "virtfn%u\0" for 2^32 - 1 */
+
+ int pci_iov_virtfn_bus(struct pci_dev *dev, int vf_id)
+ {
+diff --git a/drivers/pci/pci-driver.c b/drivers/pci/pci-driver.c
+index a2ceeacc33eb6..7a19f11daca3a 100644
+--- a/drivers/pci/pci-driver.c
++++ b/drivers/pci/pci-driver.c
+@@ -572,7 +572,7 @@ static void pci_pm_default_resume_early(struct pci_dev *pci_dev)
+
+ static void pci_pm_bridge_power_up_actions(struct pci_dev *pci_dev)
+ {
+- pci_bridge_wait_for_secondary_bus(pci_dev);
++ pci_bridge_wait_for_secondary_bus(pci_dev, "resume", PCI_RESET_WAIT);
+ /*
+ * When powering on a bridge from D3cold, the whole hierarchy may be
+ * powered on into D0uninitialized state, resume them to give them a
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index 5641786bd0206..da748247061d2 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -167,9 +167,6 @@ static int __init pcie_port_pm_setup(char *str)
+ }
+ __setup("pcie_port_pm=", pcie_port_pm_setup);
+
+-/* Time to wait after a reset for device to become responsive */
+-#define PCIE_RESET_READY_POLL_MS 60000
+-
+ /**
+ * pci_bus_max_busnr - returns maximum PCI bus number of given bus' children
+ * @bus: pointer to PCI bus structure to search
+@@ -1174,7 +1171,7 @@ static int pci_dev_wait(struct pci_dev *dev, char *reset_type, int timeout)
+ return -ENOTTY;
+ }
+
+- if (delay > 1000)
++ if (delay > PCI_RESET_WAIT)
+ pci_info(dev, "not ready %dms after %s; waiting\n",
+ delay - 1, reset_type);
+
+@@ -1183,7 +1180,7 @@ static int pci_dev_wait(struct pci_dev *dev, char *reset_type, int timeout)
+ pci_read_config_dword(dev, PCI_COMMAND, &id);
+ }
+
+- if (delay > 1000)
++ if (delay > PCI_RESET_WAIT)
+ pci_info(dev, "ready %dms after %s\n", delay - 1,
+ reset_type);
+
+@@ -4941,24 +4938,31 @@ static int pci_bus_max_d3cold_delay(const struct pci_bus *bus)
+ /**
+ * pci_bridge_wait_for_secondary_bus - Wait for secondary bus to be accessible
+ * @dev: PCI bridge
++ * @reset_type: reset type in human-readable form
++ * @timeout: maximum time to wait for devices on secondary bus (milliseconds)
+ *
+ * Handle necessary delays before access to the devices on the secondary
+- * side of the bridge are permitted after D3cold to D0 transition.
++ * side of the bridge are permitted after D3cold to D0 transition
++ * or Conventional Reset.
+ *
+ * For PCIe this means the delays in PCIe 5.0 section 6.6.1. For
+ * conventional PCI it means Tpvrh + Trhfa specified in PCI 3.0 section
+ * 4.3.2.
++ *
++ * Return 0 on success or -ENOTTY if the first device on the secondary bus
++ * failed to become accessible.
+ */
+-void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev)
++int pci_bridge_wait_for_secondary_bus(struct pci_dev *dev, char *reset_type,
++ int timeout)
+ {
+ struct pci_dev *child;
+ int delay;
+
+ if (pci_dev_is_disconnected(dev))
+- return;
++ return 0;
+
+- if (!pci_is_bridge(dev) || !dev->bridge_d3)
+- return;
++ if (!pci_is_bridge(dev))
++ return 0;
+
+ down_read(&pci_bus_sem);
+
+@@ -4970,14 +4974,14 @@ void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev)
+ */
+ if (!dev->subordinate || list_empty(&dev->subordinate->devices)) {
+ up_read(&pci_bus_sem);
+- return;
++ return 0;
+ }
+
+ /* Take d3cold_delay requirements into account */
+ delay = pci_bus_max_d3cold_delay(dev->subordinate);
+ if (!delay) {
+ up_read(&pci_bus_sem);
+- return;
++ return 0;
+ }
+
+ child = list_first_entry(&dev->subordinate->devices, struct pci_dev,
+@@ -4986,14 +4990,12 @@ void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev)
+
+ /*
+ * Conventional PCI and PCI-X we need to wait Tpvrh + Trhfa before
+- * accessing the device after reset (that is 1000 ms + 100 ms). In
+- * practice this should not be needed because we don't do power
+- * management for them (see pci_bridge_d3_possible()).
++ * accessing the device after reset (that is 1000 ms + 100 ms).
+ */
+ if (!pci_is_pcie(dev)) {
+ pci_dbg(dev, "waiting %d ms for secondary bus\n", 1000 + delay);
+ msleep(1000 + delay);
+- return;
++ return 0;
+ }
+
+ /*
+@@ -5010,11 +5012,11 @@ void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev)
+ * configuration requests if we only wait for 100 ms (see
+ * https://bugzilla.kernel.org/show_bug.cgi?id=203885).
+ *
+- * Therefore we wait for 100 ms and check for the device presence.
+- * If it is still not present give it an additional 100 ms.
++ * Therefore we wait for 100 ms and check for the device presence
++ * until the timeout expires.
+ */
+ if (!pcie_downstream_port(dev))
+- return;
++ return 0;
+
+ if (pcie_get_speed_cap(dev) <= PCIE_SPEED_5_0GT) {
+ pci_dbg(dev, "waiting %d ms for downstream link\n", delay);
+@@ -5025,14 +5027,11 @@ void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev)
+ if (!pcie_wait_for_link_delay(dev, true, delay)) {
+ /* Did not train, no need to wait any further */
+ pci_info(dev, "Data Link Layer Link Active not set in 1000 msec\n");
+- return;
++ return -ENOTTY;
+ }
+ }
+
+- if (!pci_device_is_present(child)) {
+- pci_dbg(child, "waiting additional %d ms to become accessible\n", delay);
+- msleep(delay);
+- }
++ return pci_dev_wait(child, reset_type, timeout - delay);
+ }
+
+ void pci_reset_secondary_bus(struct pci_dev *dev)
+@@ -5051,15 +5050,6 @@ void pci_reset_secondary_bus(struct pci_dev *dev)
+
+ ctrl &= ~PCI_BRIDGE_CTL_BUS_RESET;
+ pci_write_config_word(dev, PCI_BRIDGE_CONTROL, ctrl);
+-
+- /*
+- * Trhfa for conventional PCI is 2^25 clock cycles.
+- * Assuming a minimum 33MHz clock this results in a 1s
+- * delay before we can consider subordinate devices to
+- * be re-initialized. PCIe has some ways to shorten this,
+- * but we don't make use of them yet.
+- */
+- ssleep(1);
+ }
+
+ void __weak pcibios_reset_secondary_bus(struct pci_dev *dev)
+@@ -5078,7 +5068,8 @@ int pci_bridge_secondary_bus_reset(struct pci_dev *dev)
+ {
+ pcibios_reset_secondary_bus(dev);
+
+- return pci_dev_wait(dev, "bus reset", PCIE_RESET_READY_POLL_MS);
++ return pci_bridge_wait_for_secondary_bus(dev, "bus reset",
++ PCIE_RESET_READY_POLL_MS);
+ }
+ EXPORT_SYMBOL_GPL(pci_bridge_secondary_bus_reset);
+
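The pci.c rework routes bridge resets through pci_dev_wait()-style
polling: warnings start only after the spec's 1 s readiness window
(PCI_RESET_WAIT), while Request Retry Status completions may stretch
the wait up to PCIE_RESET_READY_POLL_MS. A compressed sketch of that
bounded, backing-off poll (the readiness probe is a stub):

    #include <stdio.h>
    #include <unistd.h>

    #define PCI_RESET_WAIT        1000   /* msec: spec readiness window */
    #define RESET_READY_POLL_MS  60000   /* msec: generous upper bound */

    static int device_ready(int attempt) { return attempt >= 4; } /* stub */

    static int dev_wait(int timeout_ms)
    {
        int delay = 1, waited = 0, attempt = 0;

        while (!device_ready(attempt++)) {
            if (waited > timeout_ms) {
                fprintf(stderr, "not ready %dms after reset; giving up\n",
                        waited);
                return -1;
            }
            if (waited > PCI_RESET_WAIT)     /* only warn past 1 s */
                printf("not ready %dms after reset; waiting\n", waited);
            usleep(delay * 1000);
            waited += delay;
            delay *= 2;                      /* back off like pci_dev_wait() */
        }
        printf("ready %dms after reset\n", waited);
        return 0;
    }

    int main(void) { return dev_wait(RESET_READY_POLL_MS); }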
+diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
+index 9049d07d3aaec..d2c08670a20ed 100644
+--- a/drivers/pci/pci.h
++++ b/drivers/pci/pci.h
+@@ -64,6 +64,19 @@ struct pci_cap_saved_state *pci_find_saved_ext_cap(struct pci_dev *dev,
+ #define PCI_PM_D3HOT_WAIT 10 /* msec */
+ #define PCI_PM_D3COLD_WAIT 100 /* msec */
+
++/*
++ * Following exit from Conventional Reset, devices must be ready within 1 sec
++ * (PCIe r6.0 sec 6.6.1). A D3cold to D0 transition implies a Conventional
++ * Reset (PCIe r6.0 sec 5.8).
++ */
++#define PCI_RESET_WAIT 1000 /* msec */
++/*
++ * Devices may extend the 1 sec period through Request Retry Status completions
++ * (PCIe r6.0 sec 2.3.1). The spec does not provide an upper limit, but 60 sec
++ * ought to be enough for any device to become responsive.
++ */
++#define PCIE_RESET_READY_POLL_MS 60000 /* msec */
++
+ void pci_update_current_state(struct pci_dev *dev, pci_power_t state);
+ void pci_refresh_power_state(struct pci_dev *dev);
+ int pci_power_up(struct pci_dev *dev);
+@@ -86,8 +99,9 @@ void pci_msi_init(struct pci_dev *dev);
+ void pci_msix_init(struct pci_dev *dev);
+ bool pci_bridge_d3_possible(struct pci_dev *dev);
+ void pci_bridge_d3_update(struct pci_dev *dev);
+-void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev);
+ void pci_bridge_reconfigure_ltr(struct pci_dev *dev);
++int pci_bridge_wait_for_secondary_bus(struct pci_dev *dev, char *reset_type,
++ int timeout);
+
+ static inline void pci_wakeup_event(struct pci_dev *dev)
+ {
+@@ -310,53 +324,36 @@ struct pci_sriov {
+ * @dev: PCI device to set new error_state
+ * @new: the state we want dev to be in
+ *
+- * Must be called with device_lock held.
++ * If the device is experiencing perm_failure, it has to remain in that state.
++ * Any other transition is allowed.
+ *
+ * Returns true if state has been changed to the requested state.
+ */
+ static inline bool pci_dev_set_io_state(struct pci_dev *dev,
+ pci_channel_state_t new)
+ {
+- bool changed = false;
++ pci_channel_state_t old;
+
+- device_lock_assert(&dev->dev);
+ switch (new) {
+ case pci_channel_io_perm_failure:
+- switch (dev->error_state) {
+- case pci_channel_io_frozen:
+- case pci_channel_io_normal:
+- case pci_channel_io_perm_failure:
+- changed = true;
+- break;
+- }
+- break;
++ xchg(&dev->error_state, pci_channel_io_perm_failure);
++ return true;
+ case pci_channel_io_frozen:
+- switch (dev->error_state) {
+- case pci_channel_io_frozen:
+- case pci_channel_io_normal:
+- changed = true;
+- break;
+- }
+- break;
++ old = cmpxchg(&dev->error_state, pci_channel_io_normal,
++ pci_channel_io_frozen);
++ return old != pci_channel_io_perm_failure;
+ case pci_channel_io_normal:
+- switch (dev->error_state) {
+- case pci_channel_io_frozen:
+- case pci_channel_io_normal:
+- changed = true;
+- break;
+- }
+- break;
++ old = cmpxchg(&dev->error_state, pci_channel_io_frozen,
++ pci_channel_io_normal);
++ return old != pci_channel_io_perm_failure;
++ default:
++ return false;
+ }
+- if (changed)
+- dev->error_state = new;
+- return changed;
+ }
+
+ static inline int pci_dev_set_disconnected(struct pci_dev *dev, void *unused)
+ {
+- device_lock(&dev->dev);
+ pci_dev_set_io_state(dev, pci_channel_io_perm_failure);
+- device_unlock(&dev->dev);
+
+ return 0;
+ }
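pci_dev_set_io_state() drops the device-lock requirement by making each
transition a single atomic compare-and-swap, with
pci_channel_io_perm_failure as a terminal state no transition may
leave. The same state machine in portable C11 atomics (names and
values are illustrative):

    #include <stdatomic.h>
    #include <stdio.h>

    enum state { NORMAL, FROZEN, PERM_FAILURE };

    static _Atomic int error_state = NORMAL;

    static int set_frozen(void)
    {
        int expected = NORMAL;

        /* Succeeds only for NORMAL -> FROZEN; on failure `expected`
         * holds the value actually observed, like cmpxchg()'s return. */
        atomic_compare_exchange_strong(&error_state, &expected, FROZEN);
        return expected != PERM_FAILURE;   /* mirrors `old != perm_failure` */
    }

    int main(void)
    {
        printf("frozen ok: %d\n", set_frozen());       /* 1 */
        atomic_store(&error_state, PERM_FAILURE);
        printf("frozen ok: %d\n", set_frozen());       /* 0: terminal */
        return 0;
    }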
+diff --git a/drivers/pci/pcie/dpc.c b/drivers/pci/pcie/dpc.c
+index f5ffea17c7f87..a5d7c69b764e0 100644
+--- a/drivers/pci/pcie/dpc.c
++++ b/drivers/pci/pcie/dpc.c
+@@ -170,8 +170,8 @@ pci_ers_result_t dpc_reset_link(struct pci_dev *pdev)
+ pci_write_config_word(pdev, cap + PCI_EXP_DPC_STATUS,
+ PCI_EXP_DPC_STATUS_TRIGGER);
+
+- if (!pcie_wait_for_link(pdev, true)) {
+- pci_info(pdev, "Data Link Layer Link Active not set in 1000 msec\n");
++ if (pci_bridge_wait_for_secondary_bus(pdev, "DPC",
++ PCIE_RESET_READY_POLL_MS)) {
+ clear_bit(PCI_DPC_RECOVERED, &pdev->priv_flags);
+ ret = PCI_ERS_RESULT_DISCONNECT;
+ } else {
+diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
+index 1779582fb5007..5988584825482 100644
+--- a/drivers/pci/probe.c
++++ b/drivers/pci/probe.c
+@@ -996,7 +996,7 @@ static int pci_register_host_bridge(struct pci_host_bridge *bridge)
+ resource_list_for_each_entry_safe(window, n, &resources) {
+ offset = window->offset;
+ res = window->res;
+- if (!res->end)
++ if (!res->flags && !res->start && !res->end)
+ continue;
+
+ list_move_tail(&window->node, &bridge->windows);
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index 285acc4aaccc1..20ac67d590348 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -5340,6 +5340,7 @@ static void quirk_no_flr(struct pci_dev *dev)
+ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_AMD, 0x1487, quirk_no_flr);
+ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_AMD, 0x148c, quirk_no_flr);
+ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_AMD, 0x149c, quirk_no_flr);
++DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_AMD, 0x7901, quirk_no_flr);
+ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x1502, quirk_no_flr);
+ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x1503, quirk_no_flr);
+
+diff --git a/drivers/pci/switch/switchtec.c b/drivers/pci/switch/switchtec.c
+index 75be4fe225090..0c1faa6c1973a 100644
+--- a/drivers/pci/switch/switchtec.c
++++ b/drivers/pci/switch/switchtec.c
+@@ -606,21 +606,20 @@ static ssize_t switchtec_dev_read(struct file *filp, char __user *data,
+ rc = copy_to_user(data, &stuser->return_code,
+ sizeof(stuser->return_code));
+ if (rc) {
+- rc = -EFAULT;
+- goto out;
++ mutex_unlock(&stdev->mrpc_mutex);
++ return -EFAULT;
+ }
+
+ data += sizeof(stuser->return_code);
+ rc = copy_to_user(data, &stuser->data,
+ size - sizeof(stuser->return_code));
+ if (rc) {
+- rc = -EFAULT;
+- goto out;
++ mutex_unlock(&stdev->mrpc_mutex);
++ return -EFAULT;
+ }
+
+ stuser_set_state(stuser, MRPC_IDLE);
+
+-out:
+ mutex_unlock(&stdev->mrpc_mutex);
+
+ if (stuser->status == SWITCHTEC_MRPC_STATUS_DONE ||
+diff --git a/drivers/phy/mediatek/phy-mtk-io.h b/drivers/phy/mediatek/phy-mtk-io.h
+index d20ad5e5be814..58f06db822cb0 100644
+--- a/drivers/phy/mediatek/phy-mtk-io.h
++++ b/drivers/phy/mediatek/phy-mtk-io.h
+@@ -39,8 +39,8 @@ static inline void mtk_phy_update_bits(void __iomem *reg, u32 mask, u32 val)
+ /* field @mask shall be constant and continuous */
+ #define mtk_phy_update_field(reg, mask, val) \
+ ({ \
+- typeof(mask) mask_ = (mask); \
+- mtk_phy_update_bits(reg, mask_, FIELD_PREP(mask_, val)); \
++ BUILD_BUG_ON_MSG(!__builtin_constant_p(mask), "mask is not constant"); \
++ mtk_phy_update_bits(reg, mask, FIELD_PREP(mask, val)); \
+ })
+
+ #endif
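mtk_phy_update_field() previously copied the mask into a local, which
defeats FIELD_PREP()'s requirement that the mask be a compile-time
constant; the fix rejects non-constant masks outright. A userspace
approximation of that guard using the same __attribute__((error)) trick
the kernel uses (GCC, build with -O1 or higher so the dead branch is
discarded; FIELD_PREP here is a simplified stand-in):

    #include <stdio.h>

    /* Calling build_bug() is a hard error, and the call is only
     * reachable when `cond` is true at compile time. */
    #define BUILD_BUG_ON_MSG(cond, msg) \
        do { \
            extern void build_bug(void) __attribute__((error(msg))); \
            if (cond) \
                build_bug(); \
        } while (0)

    #define FIELD_PREP(mask, val) (((val) << __builtin_ctz(mask)) & (mask))

    #define update_field(reg, mask, val)                                  \
        ({                                                                 \
            BUILD_BUG_ON_MSG(!__builtin_constant_p(mask),                  \
                             "mask is not constant");                      \
            (reg) = ((reg) & ~(mask)) | FIELD_PREP(mask, val);             \
        })

    int main(void)
    {
        unsigned int reg = 0;

        update_field(reg, 0x00f0u, 0x5u);  /* literal mask: compiles */
        printf("reg=0x%x\n", reg);         /* prints 0x50 */
        return 0;
    }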
+diff --git a/drivers/phy/rockchip/phy-rockchip-typec.c b/drivers/phy/rockchip/phy-rockchip-typec.c
+index d76440ae10ff4..6aea512e5d4ee 100644
+--- a/drivers/phy/rockchip/phy-rockchip-typec.c
++++ b/drivers/phy/rockchip/phy-rockchip-typec.c
+@@ -821,10 +821,10 @@ static int tcphy_get_mode(struct rockchip_typec_phy *tcphy)
+ mode = MODE_DFP_USB;
+ id = EXTCON_USB_HOST;
+
+- if (ufp) {
++ if (ufp > 0) {
+ mode = MODE_UFP_USB;
+ id = EXTCON_USB;
+- } else if (dp) {
++ } else if (dp > 0) {
+ mode = MODE_DFP_DP;
+ id = EXTCON_DISP_DP;
+
+diff --git a/drivers/pinctrl/bcm/pinctrl-bcm2835.c b/drivers/pinctrl/bcm/pinctrl-bcm2835.c
+index 7857e612a1008..c7cdccdb4332a 100644
+--- a/drivers/pinctrl/bcm/pinctrl-bcm2835.c
++++ b/drivers/pinctrl/bcm/pinctrl-bcm2835.c
+@@ -363,8 +363,6 @@ static int bcm2835_of_gpio_ranges_fallback(struct gpio_chip *gc,
+ {
+ struct pinctrl_dev *pctldev = of_pinctrl_get(np);
+
+- of_node_put(np);
+-
+ if (!pctldev)
+ return 0;
+
+diff --git a/drivers/pinctrl/mediatek/pinctrl-paris.c b/drivers/pinctrl/mediatek/pinctrl-paris.c
+index 475f4172d5085..37761a8e7a18f 100644
+--- a/drivers/pinctrl/mediatek/pinctrl-paris.c
++++ b/drivers/pinctrl/mediatek/pinctrl-paris.c
+@@ -640,7 +640,7 @@ static int mtk_hw_get_value_wrap(struct mtk_pinctrl *hw, unsigned int gpio, int
+ ssize_t mtk_pctrl_show_one_pin(struct mtk_pinctrl *hw,
+ unsigned int gpio, char *buf, unsigned int buf_len)
+ {
+- int pinmux, pullup, pullen, len = 0, r1 = -1, r0 = -1, rsel = -1;
++ int pinmux, pullup = 0, pullen = 0, len = 0, r1 = -1, r0 = -1, rsel = -1;
+ const struct mtk_pin_desc *desc;
+ u32 try_all_type = 0;
+
+@@ -717,7 +717,7 @@ static void mtk_pctrl_dbg_show(struct pinctrl_dev *pctldev, struct seq_file *s,
+ unsigned int gpio)
+ {
+ struct mtk_pinctrl *hw = pinctrl_dev_get_drvdata(pctldev);
+- char buf[PIN_DBG_BUF_SZ];
++ char buf[PIN_DBG_BUF_SZ] = { 0 };
+
+ (void)mtk_pctrl_show_one_pin(hw, gpio, buf, PIN_DBG_BUF_SZ);
+
+diff --git a/drivers/pinctrl/pinctrl-at91-pio4.c b/drivers/pinctrl/pinctrl-at91-pio4.c
+index 39b233f73e132..373eed8bc4be9 100644
+--- a/drivers/pinctrl/pinctrl-at91-pio4.c
++++ b/drivers/pinctrl/pinctrl-at91-pio4.c
+@@ -1149,8 +1149,8 @@ static int atmel_pinctrl_probe(struct platform_device *pdev)
+
+ pin_desc[i].number = i;
+ /* Pin naming convention: P(bank_name)(bank_pin_number). */
+- pin_desc[i].name = kasprintf(GFP_KERNEL, "P%c%d",
+- bank + 'A', line);
++ pin_desc[i].name = devm_kasprintf(&pdev->dev, GFP_KERNEL, "P%c%d",
++ bank + 'A', line);
+
+ group->name = group_names[i] = pin_desc[i].name;
+ group->pin = pin_desc[i].number;
+diff --git a/drivers/pinctrl/pinctrl-at91.c b/drivers/pinctrl/pinctrl-at91.c
+index 1e1813d7c5508..c405296e49896 100644
+--- a/drivers/pinctrl/pinctrl-at91.c
++++ b/drivers/pinctrl/pinctrl-at91.c
+@@ -1885,7 +1885,7 @@ static int at91_gpio_probe(struct platform_device *pdev)
+ }
+
+ for (i = 0; i < chip->ngpio; i++)
+- names[i] = kasprintf(GFP_KERNEL, "pio%c%d", alias_idx + 'A', i);
++ names[i] = devm_kasprintf(&pdev->dev, GFP_KERNEL, "pio%c%d", alias_idx + 'A', i);
+
+ chip->names = (const char *const *)names;
+
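Both at91 pinctrl hunks switch pin-name allocation from kasprintf() to
devm_kasprintf(), tying the strings' lifetime to the device so
probe-error and unbind paths cannot leak them. A toy model of the
devres idea (glibc's vasprintf stands in for the kernel formatting):

    #define _GNU_SOURCE
    #include <stdarg.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct devres { void *ptr; struct devres *next; };
    struct device { struct devres *res; };

    static char *devm_kasprintf(struct device *dev, const char *fmt, ...)
    {
        struct devres *dr = malloc(sizeof(*dr));
        va_list ap;
        char *s;

        if (!dr)
            return NULL;
        va_start(ap, fmt);
        if (vasprintf(&s, fmt, ap) < 0)
            s = NULL;
        va_end(ap);
        if (!s) {
            free(dr);
            return NULL;
        }
        dr->ptr = s;            /* track the allocation on the device */
        dr->next = dev->res;
        dev->res = dr;
        return s;
    }

    static void device_release(struct device *dev)
    {
        while (dev->res) {
            struct devres *dr = dev->res;

            dev->res = dr->next;
            free(dr->ptr);      /* every tracked string goes with the dev */
            free(dr);
        }
    }

    int main(void)
    {
        struct device dev = { NULL };
        const char *name = devm_kasprintf(&dev, "pio%c%d", 'A', 3);

        if (name)
            printf("%s\n", name);   /* "pioA3" */
        device_release(&dev);
        return 0;
    }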
+diff --git a/drivers/pinctrl/pinctrl-rockchip.c b/drivers/pinctrl/pinctrl-rockchip.c
+index 5eeac92f610a0..0276b52f37168 100644
+--- a/drivers/pinctrl/pinctrl-rockchip.c
++++ b/drivers/pinctrl/pinctrl-rockchip.c
+@@ -3045,6 +3045,7 @@ static int rockchip_pinctrl_parse_groups(struct device_node *np,
+ np_config = of_find_node_by_phandle(be32_to_cpup(phandle));
+ ret = pinconf_generic_parse_dt_config(np_config, NULL,
+ &grp->data[j].configs, &grp->data[j].nconfigs);
++ of_node_put(np_config);
+ if (ret)
+ return ret;
+ }
+diff --git a/drivers/pinctrl/qcom/pinctrl-msm8976.c b/drivers/pinctrl/qcom/pinctrl-msm8976.c
+index ec43edf9b660a..e11d845847190 100644
+--- a/drivers/pinctrl/qcom/pinctrl-msm8976.c
++++ b/drivers/pinctrl/qcom/pinctrl-msm8976.c
+@@ -733,7 +733,7 @@ static const char * const codec_int2_groups[] = {
+ "gpio74",
+ };
+ static const char * const wcss_bt_groups[] = {
+- "gpio39", "gpio47", "gpio88",
++ "gpio39", "gpio47", "gpio48",
+ };
+ static const char * const sdc3_groups[] = {
+ "gpio39", "gpio40", "gpio41",
+@@ -958,9 +958,9 @@ static const struct msm_pingroup msm8976_groups[] = {
+ PINGROUP(37, NA, NA, NA, qdss_tracedata_b, NA, NA, NA, NA, NA),
+ PINGROUP(38, NA, NA, NA, NA, NA, NA, NA, qdss_tracedata_b, NA),
+ PINGROUP(39, wcss_bt, sdc3, NA, qdss_tracedata_a, NA, NA, NA, NA, NA),
+- PINGROUP(40, wcss_wlan, sdc3, NA, qdss_tracedata_a, NA, NA, NA, NA, NA),
+- PINGROUP(41, wcss_wlan, sdc3, NA, qdss_tracedata_a, NA, NA, NA, NA, NA),
+- PINGROUP(42, wcss_wlan, sdc3, NA, qdss_tracedata_a, NA, NA, NA, NA, NA),
++ PINGROUP(40, wcss_wlan2, sdc3, NA, qdss_tracedata_a, NA, NA, NA, NA, NA),
++ PINGROUP(41, wcss_wlan1, sdc3, NA, qdss_tracedata_a, NA, NA, NA, NA, NA),
++ PINGROUP(42, wcss_wlan0, sdc3, NA, qdss_tracedata_a, NA, NA, NA, NA, NA),
+ PINGROUP(43, wcss_wlan, sdc3, NA, NA, qdss_tracedata_a, NA, NA, NA, NA),
+ PINGROUP(44, wcss_wlan, sdc3, NA, NA, NA, NA, NA, NA, NA),
+ PINGROUP(45, wcss_fm, NA, qdss_tracectl_a, NA, NA, NA, NA, NA, NA),
+diff --git a/drivers/pinctrl/renesas/pinctrl-rzg2l.c b/drivers/pinctrl/renesas/pinctrl-rzg2l.c
+index 5aa3836dbc226..6f762097557af 100644
+--- a/drivers/pinctrl/renesas/pinctrl-rzg2l.c
++++ b/drivers/pinctrl/renesas/pinctrl-rzg2l.c
+@@ -130,6 +130,7 @@ struct rzg2l_dedicated_configs {
+ struct rzg2l_pinctrl_data {
+ const char * const *port_pins;
+ const u32 *port_pin_configs;
++ unsigned int n_ports;
+ struct rzg2l_dedicated_configs *dedicated_pins;
+ unsigned int n_port_pins;
+ unsigned int n_dedicated_pins;
+@@ -1124,7 +1125,7 @@ static struct {
+ }
+ };
+
+-static int rzg2l_gpio_get_gpioint(unsigned int virq)
++static int rzg2l_gpio_get_gpioint(unsigned int virq, const struct rzg2l_pinctrl_data *data)
+ {
+ unsigned int gpioint;
+ unsigned int i;
+@@ -1133,13 +1134,13 @@ static int rzg2l_gpio_get_gpioint(unsigned int virq)
+ port = virq / 8;
+ bit = virq % 8;
+
+- if (port >= ARRAY_SIZE(rzg2l_gpio_configs) ||
+- bit >= RZG2L_GPIO_PORT_GET_PINCNT(rzg2l_gpio_configs[port]))
++ if (port >= data->n_ports ||
++ bit >= RZG2L_GPIO_PORT_GET_PINCNT(data->port_pin_configs[port]))
+ return -EINVAL;
+
+ gpioint = bit;
+ for (i = 0; i < port; i++)
+- gpioint += RZG2L_GPIO_PORT_GET_PINCNT(rzg2l_gpio_configs[i]);
++ gpioint += RZG2L_GPIO_PORT_GET_PINCNT(data->port_pin_configs[i]);
+
+ return gpioint;
+ }
+@@ -1239,7 +1240,7 @@ static int rzg2l_gpio_child_to_parent_hwirq(struct gpio_chip *gc,
+ unsigned long flags;
+ int gpioint, irq;
+
+- gpioint = rzg2l_gpio_get_gpioint(child);
++ gpioint = rzg2l_gpio_get_gpioint(child, pctrl->data);
+ if (gpioint < 0)
+ return gpioint;
+
+@@ -1313,8 +1314,8 @@ static void rzg2l_init_irq_valid_mask(struct gpio_chip *gc,
+ port = offset / 8;
+ bit = offset % 8;
+
+- if (port >= ARRAY_SIZE(rzg2l_gpio_configs) ||
+- bit >= RZG2L_GPIO_PORT_GET_PINCNT(rzg2l_gpio_configs[port]))
++ if (port >= pctrl->data->n_ports ||
++ bit >= RZG2L_GPIO_PORT_GET_PINCNT(pctrl->data->port_pin_configs[port]))
+ clear_bit(offset, valid_mask);
+ }
+ }
+@@ -1519,6 +1520,7 @@ static int rzg2l_pinctrl_probe(struct platform_device *pdev)
+ static struct rzg2l_pinctrl_data r9a07g043_data = {
+ .port_pins = rzg2l_gpio_names,
+ .port_pin_configs = r9a07g043_gpio_configs,
++ .n_ports = ARRAY_SIZE(r9a07g043_gpio_configs),
+ .dedicated_pins = rzg2l_dedicated_pins.common,
+ .n_port_pins = ARRAY_SIZE(r9a07g043_gpio_configs) * RZG2L_PINS_PER_PORT,
+ .n_dedicated_pins = ARRAY_SIZE(rzg2l_dedicated_pins.common),
+@@ -1527,6 +1529,7 @@ static struct rzg2l_pinctrl_data r9a07g043_data = {
+ static struct rzg2l_pinctrl_data r9a07g044_data = {
+ .port_pins = rzg2l_gpio_names,
+ .port_pin_configs = rzg2l_gpio_configs,
++ .n_ports = ARRAY_SIZE(rzg2l_gpio_configs),
+ .dedicated_pins = rzg2l_dedicated_pins.common,
+ .n_port_pins = ARRAY_SIZE(rzg2l_gpio_names),
+ .n_dedicated_pins = ARRAY_SIZE(rzg2l_dedicated_pins.common) +
+diff --git a/drivers/pinctrl/stm32/pinctrl-stm32.c b/drivers/pinctrl/stm32/pinctrl-stm32.c
+index 1cddca506ad7e..cb33a23ab0c11 100644
+--- a/drivers/pinctrl/stm32/pinctrl-stm32.c
++++ b/drivers/pinctrl/stm32/pinctrl-stm32.c
+@@ -1382,6 +1382,7 @@ static struct irq_domain *stm32_pctrl_get_irq_domain(struct platform_device *pde
+ return ERR_PTR(-ENXIO);
+
+ domain = irq_find_host(parent);
++ of_node_put(parent);
+ if (!domain)
+ /* domain not registered yet */
+ return ERR_PTR(-EPROBE_DEFER);
+diff --git a/drivers/platform/chrome/cros_ec_typec.c b/drivers/platform/chrome/cros_ec_typec.c
+index 001b0de95a46e..d1714b5d085be 100644
+--- a/drivers/platform/chrome/cros_ec_typec.c
++++ b/drivers/platform/chrome/cros_ec_typec.c
+@@ -27,7 +27,7 @@
+ #define DRV_NAME "cros-ec-typec"
+
+ #define DP_PORT_VDO (DP_CONF_SET_PIN_ASSIGN(BIT(DP_PIN_ASSIGN_C) | BIT(DP_PIN_ASSIGN_D)) | \
+- DP_CAP_DFP_D)
++ DP_CAP_DFP_D | DP_CAP_RECEPTACLE)
+
+ /* Supported alt modes. */
+ enum {
+diff --git a/drivers/platform/x86/dell/dell-wmi-ddv.c b/drivers/platform/x86/dell/dell-wmi-ddv.c
+index 2bb449845d143..9cb6ae42dbdc8 100644
+--- a/drivers/platform/x86/dell/dell-wmi-ddv.c
++++ b/drivers/platform/x86/dell/dell-wmi-ddv.c
+@@ -26,7 +26,8 @@
+
+ #define DRIVER_NAME "dell-wmi-ddv"
+
+-#define DELL_DDV_SUPPORTED_INTERFACE 2
++#define DELL_DDV_SUPPORTED_VERSION_MIN 2
++#define DELL_DDV_SUPPORTED_VERSION_MAX 3
+ #define DELL_DDV_GUID "8A42EA14-4F2A-FD45-6422-0087F7A7E608"
+
+ #define DELL_EPPID_LENGTH 20
+@@ -49,6 +50,7 @@ enum dell_ddv_method {
+ DELL_DDV_BATTERY_RAW_ANALYTICS_START = 0x0E,
+ DELL_DDV_BATTERY_RAW_ANALYTICS = 0x0F,
+ DELL_DDV_BATTERY_DESIGN_VOLTAGE = 0x10,
++ DELL_DDV_BATTERY_RAW_ANALYTICS_A_BLOCK = 0x11, /* version 3 */
+
+ DELL_DDV_INTERFACE_VERSION = 0x12,
+
+@@ -340,7 +342,7 @@ static int dell_wmi_ddv_probe(struct wmi_device *wdev, const void *context)
+ return ret;
+
+ dev_dbg(&wdev->dev, "WMI interface version: %d\n", version);
+- if (version != DELL_DDV_SUPPORTED_INTERFACE)
++ if (version < DELL_DDV_SUPPORTED_VERSION_MIN || version > DELL_DDV_SUPPORTED_VERSION_MAX)
+ return -ENODEV;
+
+ data = devm_kzalloc(&wdev->dev, sizeof(*data), GFP_KERNEL);
+diff --git a/drivers/power/supply/power_supply_core.c b/drivers/power/supply/power_supply_core.c
+index 7c790c41e2fe3..cc5b2e22b42ac 100644
+--- a/drivers/power/supply/power_supply_core.c
++++ b/drivers/power/supply/power_supply_core.c
+@@ -1186,83 +1186,6 @@ static void psy_unregister_thermal(struct power_supply *psy)
+ thermal_zone_device_unregister(psy->tzd);
+ }
+
+-/* thermal cooling device callbacks */
+-static int ps_get_max_charge_cntl_limit(struct thermal_cooling_device *tcd,
+- unsigned long *state)
+-{
+- struct power_supply *psy;
+- union power_supply_propval val;
+- int ret;
+-
+- psy = tcd->devdata;
+- ret = power_supply_get_property(psy,
+- POWER_SUPPLY_PROP_CHARGE_CONTROL_LIMIT_MAX, &val);
+- if (ret)
+- return ret;
+-
+- *state = val.intval;
+-
+- return ret;
+-}
+-
+-static int ps_get_cur_charge_cntl_limit(struct thermal_cooling_device *tcd,
+- unsigned long *state)
+-{
+- struct power_supply *psy;
+- union power_supply_propval val;
+- int ret;
+-
+- psy = tcd->devdata;
+- ret = power_supply_get_property(psy,
+- POWER_SUPPLY_PROP_CHARGE_CONTROL_LIMIT, &val);
+- if (ret)
+- return ret;
+-
+- *state = val.intval;
+-
+- return ret;
+-}
+-
+-static int ps_set_cur_charge_cntl_limit(struct thermal_cooling_device *tcd,
+- unsigned long state)
+-{
+- struct power_supply *psy;
+- union power_supply_propval val;
+- int ret;
+-
+- psy = tcd->devdata;
+- val.intval = state;
+- ret = psy->desc->set_property(psy,
+- POWER_SUPPLY_PROP_CHARGE_CONTROL_LIMIT, &val);
+-
+- return ret;
+-}
+-
+-static const struct thermal_cooling_device_ops psy_tcd_ops = {
+- .get_max_state = ps_get_max_charge_cntl_limit,
+- .get_cur_state = ps_get_cur_charge_cntl_limit,
+- .set_cur_state = ps_set_cur_charge_cntl_limit,
+-};
+-
+-static int psy_register_cooler(struct power_supply *psy)
+-{
+- /* Register for cooling device if psy can control charging */
+- if (psy_has_property(psy->desc, POWER_SUPPLY_PROP_CHARGE_CONTROL_LIMIT)) {
+- psy->tcd = thermal_cooling_device_register(
+- (char *)psy->desc->name,
+- psy, &psy_tcd_ops);
+- return PTR_ERR_OR_ZERO(psy->tcd);
+- }
+-
+- return 0;
+-}
+-
+-static void psy_unregister_cooler(struct power_supply *psy)
+-{
+- if (IS_ERR_OR_NULL(psy->tcd))
+- return;
+- thermal_cooling_device_unregister(psy->tcd);
+-}
+ #else
+ static int psy_register_thermal(struct power_supply *psy)
+ {
+@@ -1272,15 +1195,6 @@ static int psy_register_thermal(struct power_supply *psy)
+ static void psy_unregister_thermal(struct power_supply *psy)
+ {
+ }
+-
+-static int psy_register_cooler(struct power_supply *psy)
+-{
+- return 0;
+-}
+-
+-static void psy_unregister_cooler(struct power_supply *psy)
+-{
+-}
+ #endif
+
+ static struct power_supply *__must_check
+@@ -1354,10 +1268,6 @@ __power_supply_register(struct device *parent,
+ if (rc)
+ goto register_thermal_failed;
+
+- rc = psy_register_cooler(psy);
+- if (rc)
+- goto register_cooler_failed;
+-
+ rc = power_supply_create_triggers(psy);
+ if (rc)
+ goto create_triggers_failed;
+@@ -1387,8 +1297,6 @@ __power_supply_register(struct device *parent,
+ add_hwmon_sysfs_failed:
+ power_supply_remove_triggers(psy);
+ create_triggers_failed:
+- psy_unregister_cooler(psy);
+-register_cooler_failed:
+ psy_unregister_thermal(psy);
+ register_thermal_failed:
+ wakeup_init_failed:
+@@ -1540,7 +1448,6 @@ void power_supply_unregister(struct power_supply *psy)
+ sysfs_remove_link(&psy->dev.kobj, "powers");
+ power_supply_remove_hwmon_sysfs(psy);
+ power_supply_remove_triggers(psy);
+- psy_unregister_cooler(psy);
+ psy_unregister_thermal(psy);
+ device_init_wakeup(&psy->dev, false);
+ device_unregister(&psy->dev);
+diff --git a/drivers/powercap/powercap_sys.c b/drivers/powercap/powercap_sys.c
+index 1f968353d4799..e180dee0f83d0 100644
+--- a/drivers/powercap/powercap_sys.c
++++ b/drivers/powercap/powercap_sys.c
+@@ -530,9 +530,6 @@ struct powercap_zone *powercap_register_zone(
+ power_zone->name = kstrdup(name, GFP_KERNEL);
+ if (!power_zone->name)
+ goto err_name_alloc;
+- dev_set_name(&power_zone->dev, "%s:%x",
+- dev_name(power_zone->dev.parent),
+- power_zone->id);
+ power_zone->constraints = kcalloc(nr_constraints,
+ sizeof(*power_zone->constraints),
+ GFP_KERNEL);
+@@ -555,9 +552,16 @@ struct powercap_zone *powercap_register_zone(
+ power_zone->dev_attr_groups[0] = &power_zone->dev_zone_attr_group;
+ power_zone->dev_attr_groups[1] = NULL;
+ power_zone->dev.groups = power_zone->dev_attr_groups;
++ dev_set_name(&power_zone->dev, "%s:%x",
++ dev_name(power_zone->dev.parent),
++ power_zone->id);
+ result = device_register(&power_zone->dev);
+- if (result)
+- goto err_dev_ret;
++ if (result) {
++ put_device(&power_zone->dev);
++ mutex_unlock(&control_type->lock);
++
++ return ERR_PTR(result);
++ }
+
+ control_type->nr_zones++;
+ mutex_unlock(&control_type->lock);
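The powercap fix moves dev_set_name() ahead of device_register() and,
on registration failure, drops the reference with put_device() so the
release callback can run, instead of leaking the half-initialized zone.
A toy refcount model of that rule (the real release path also frees the
name and constraints):

    #include <stdio.h>
    #include <stdlib.h>

    struct device {
        int refs;
        void (*release)(struct device *);
    };

    static void device_initialize(struct device *d) { d->refs = 1; }

    static void put_device(struct device *d)
    {
        if (--d->refs == 0)
            d->release(d);   /* only release() knows how to undo init */
    }

    static int device_register(struct device *d)
    {
        (void)d;      /* pretend device_add() failed after initialize */
        return -17;   /* -EEXIST */
    }

    static void zone_release(struct device *d)
    {
        printf("release callback ran\n");
        free(d);
    }

    int main(void)
    {
        struct device *d = malloc(sizeof(*d));

        if (!d)
            return 1;
        d->release = zone_release;
        device_initialize(d);
        if (device_register(d))
            put_device(d);   /* not free(): release() must run */
        return 0;
    }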
+diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
+index ae69e493913da..4fcd36055b025 100644
+--- a/drivers/regulator/core.c
++++ b/drivers/regulator/core.c
+@@ -1584,7 +1584,7 @@ static int set_machine_constraints(struct regulator_dev *rdev)
+ }
+
+ if (rdev->desc->off_on_delay)
+- rdev->last_off = ktime_get();
++ rdev->last_off = ktime_get_boottime();
+
+ /* If the constraints say the regulator should be on at this point
+ * and we have control then make sure it is enabled.
+@@ -2673,7 +2673,7 @@ static int _regulator_do_enable(struct regulator_dev *rdev)
+ * this regulator was disabled.
+ */
+ ktime_t end = ktime_add_us(rdev->last_off, rdev->desc->off_on_delay);
+- s64 remaining = ktime_us_delta(end, ktime_get());
++ s64 remaining = ktime_us_delta(end, ktime_get_boottime());
+
+ if (remaining > 0)
+ _regulator_delay_helper(remaining);
+@@ -2912,7 +2912,7 @@ static int _regulator_do_disable(struct regulator_dev *rdev)
+ }
+
+ if (rdev->desc->off_on_delay)
+- rdev->last_off = ktime_get();
++ rdev->last_off = ktime_get_boottime();
+
+ trace_regulator_disable_complete(rdev_get_name(rdev));
+
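Switching the regulator core from ktime_get() to ktime_get_boottime()
matters because the monotonic clock stops during suspend: an
off_on_delay measured against it could enforce a wrongly sized gap
between disable and re-enable after resume. The two clocks compared
from userspace (Linux-specific CLOCK_BOOTTIME):

    #include <stdio.h>
    #include <time.h>

    static long long now_us(clockid_t clk)
    {
        struct timespec ts;

        clock_gettime(clk, &ts);
        return ts.tv_sec * 1000000LL + ts.tv_nsec / 1000;
    }

    int main(void)
    {
        /* CLOCK_MONOTONIC pauses in suspend; CLOCK_BOOTTIME does not. */
        printf("monotonic: %lld us\n", now_us(CLOCK_MONOTONIC));
        printf("boottime:  %lld us\n", now_us(CLOCK_BOOTTIME));
        /* On a machine that has been suspended, boottime > monotonic. */
        return 0;
    }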
+diff --git a/drivers/regulator/max77802-regulator.c b/drivers/regulator/max77802-regulator.c
+index 21e0eb0f43f94..befe5f319819b 100644
+--- a/drivers/regulator/max77802-regulator.c
++++ b/drivers/regulator/max77802-regulator.c
+@@ -94,9 +94,11 @@ static int max77802_set_suspend_disable(struct regulator_dev *rdev)
+ {
+ unsigned int val = MAX77802_OFF_PWRREQ;
+ struct max77802_regulator_prv *max77802 = rdev_get_drvdata(rdev);
+- int id = rdev_get_id(rdev);
++ unsigned int id = rdev_get_id(rdev);
+ int shift = max77802_get_opmode_shift(id);
+
++ if (WARN_ON_ONCE(id >= ARRAY_SIZE(max77802->opmode)))
++ return -EINVAL;
+ max77802->opmode[id] = val;
+ return regmap_update_bits(rdev->regmap, rdev->desc->enable_reg,
+ rdev->desc->enable_mask, val << shift);
+@@ -110,7 +112,7 @@ static int max77802_set_suspend_disable(struct regulator_dev *rdev)
+ static int max77802_set_mode(struct regulator_dev *rdev, unsigned int mode)
+ {
+ struct max77802_regulator_prv *max77802 = rdev_get_drvdata(rdev);
+- int id = rdev_get_id(rdev);
++ unsigned int id = rdev_get_id(rdev);
+ unsigned int val;
+ int shift = max77802_get_opmode_shift(id);
+
+@@ -127,6 +129,9 @@ static int max77802_set_mode(struct regulator_dev *rdev, unsigned int mode)
+ return -EINVAL;
+ }
+
++ if (WARN_ON_ONCE(id >= ARRAY_SIZE(max77802->opmode)))
++ return -EINVAL;
++
+ max77802->opmode[id] = val;
+ return regmap_update_bits(rdev->regmap, rdev->desc->enable_reg,
+ rdev->desc->enable_mask, val << shift);
+@@ -135,8 +140,10 @@ static int max77802_set_mode(struct regulator_dev *rdev, unsigned int mode)
+ static unsigned max77802_get_mode(struct regulator_dev *rdev)
+ {
+ struct max77802_regulator_prv *max77802 = rdev_get_drvdata(rdev);
+- int id = rdev_get_id(rdev);
++ unsigned int id = rdev_get_id(rdev);
+
++ if (WARN_ON_ONCE(id >= ARRAY_SIZE(max77802->opmode)))
++ return -EINVAL;
+ return max77802_map_mode(max77802->opmode[id]);
+ }
+
+@@ -160,10 +167,13 @@ static int max77802_set_suspend_mode(struct regulator_dev *rdev,
+ unsigned int mode)
+ {
+ struct max77802_regulator_prv *max77802 = rdev_get_drvdata(rdev);
+- int id = rdev_get_id(rdev);
++ unsigned int id = rdev_get_id(rdev);
+ unsigned int val;
+ int shift = max77802_get_opmode_shift(id);
+
++ if (WARN_ON_ONCE(id >= ARRAY_SIZE(max77802->opmode)))
++ return -EINVAL;
++
+ /*
+ * If the regulator has been disabled for suspend
+ * then it is invalid to try setting a suspend mode.
+@@ -209,9 +219,11 @@ static int max77802_set_suspend_mode(struct regulator_dev *rdev,
+ static int max77802_enable(struct regulator_dev *rdev)
+ {
+ struct max77802_regulator_prv *max77802 = rdev_get_drvdata(rdev);
+- int id = rdev_get_id(rdev);
++ unsigned int id = rdev_get_id(rdev);
+ int shift = max77802_get_opmode_shift(id);
+
++ if (WARN_ON_ONCE(id >= ARRAY_SIZE(max77802->opmode)))
++ return -EINVAL;
+ if (max77802->opmode[id] == MAX77802_OFF_PWRREQ)
+ max77802->opmode[id] = MAX77802_OPMODE_NORMAL;
+
+@@ -495,7 +507,7 @@ static int max77802_pmic_probe(struct platform_device *pdev)
+
+ for (i = 0; i < MAX77802_REG_MAX; i++) {
+ struct regulator_dev *rdev;
+- int id = regulators[i].id;
++ unsigned int id = regulators[i].id;
+ int shift = max77802_get_opmode_shift(id);
+ int ret;
+
+@@ -513,10 +525,12 @@ static int max77802_pmic_probe(struct platform_device *pdev)
+ * the hardware reports OFF as the regulator operating mode.
+ * Default to operating mode NORMAL in that case.
+ */
+- if (val == MAX77802_STATUS_OFF)
+- max77802->opmode[id] = MAX77802_OPMODE_NORMAL;
+- else
+- max77802->opmode[id] = val;
++ if (id < ARRAY_SIZE(max77802->opmode)) {
++ if (val == MAX77802_STATUS_OFF)
++ max77802->opmode[id] = MAX77802_OPMODE_NORMAL;
++ else
++ max77802->opmode[id] = val;
++ }
+
+ rdev = devm_regulator_register(&pdev->dev,
+ &regulators[i], &config);
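The max77802 hunks make the regulator id unsigned and bounds-check it
against the opmode[] array before every use, turning a potential
out-of-bounds access into a loud -EINVAL; the unsigned type also means
a negative id wraps to a huge value and is caught by the same test.
The guard pattern in isolation:

    #include <stdio.h>

    #define N_MODES 4

    static unsigned char opmode[N_MODES];

    /* Validate an externally supplied index before it addresses a
     * fixed-size table, like WARN_ON_ONCE(id >= ARRAY_SIZE(opmode)). */
    static int set_mode(unsigned int id, unsigned char val)
    {
        if (id >= N_MODES) {
            fprintf(stderr, "bogus regulator id %u\n", id);
            return -22;         /* -EINVAL */
        }
        opmode[id] = val;
        return 0;
    }

    int main(void)
    {
        printf("%d\n", set_mode(1, 3));   /* 0 */
        printf("%d\n", set_mode(9, 3));   /* -22, table untouched */
        return 0;
    }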
+diff --git a/drivers/regulator/s5m8767.c b/drivers/regulator/s5m8767.c
+index 35269f9982105..754c6fcc6e642 100644
+--- a/drivers/regulator/s5m8767.c
++++ b/drivers/regulator/s5m8767.c
+@@ -923,10 +923,14 @@ static int s5m8767_pmic_probe(struct platform_device *pdev)
+
+ for (i = 0; i < pdata->num_regulators; i++) {
+ const struct sec_voltage_desc *desc;
+- int id = pdata->regulators[i].id;
++ unsigned int id = pdata->regulators[i].id;
+ int enable_reg, enable_val;
+ struct regulator_dev *rdev;
+
++ BUILD_BUG_ON(ARRAY_SIZE(regulators) != ARRAY_SIZE(reg_voltage_map));
++ if (WARN_ON_ONCE(id >= ARRAY_SIZE(regulators)))
++ continue;
++
+ desc = reg_voltage_map[id];
+ if (desc) {
+ regulators[id].n_voltages =
+diff --git a/drivers/regulator/tps65219-regulator.c b/drivers/regulator/tps65219-regulator.c
+index c484c943e4675..58f6541b6417b 100644
+--- a/drivers/regulator/tps65219-regulator.c
++++ b/drivers/regulator/tps65219-regulator.c
+@@ -173,24 +173,6 @@ static unsigned int tps65219_get_mode(struct regulator_dev *dev)
+ return REGULATOR_MODE_NORMAL;
+ }
+
+-/*
+- * generic regulator_set_bypass_regmap does not fully match requirements
+- * TPS65219 Requires explicitly that regulator is disabled before switch
+- */
+-static int tps65219_set_bypass(struct regulator_dev *dev, bool enable)
+-{
+- struct tps65219 *tps = rdev_get_drvdata(dev);
+- unsigned int rid = rdev_get_id(dev);
+-
+- if (dev->desc->ops->is_enabled(dev)) {
+- dev_err(tps->dev,
+- "%s LDO%d enabled, must be shut down to set bypass ",
+- __func__, rid);
+- return -EBUSY;
+- }
+- return regulator_set_bypass_regmap(dev, enable);
+-}
+-
+ /* Operations permitted on BUCK1/2/3 */
+ static const struct regulator_ops tps65219_bucks_ops = {
+ .is_enabled = regulator_is_enabled_regmap,
+@@ -217,7 +199,7 @@ static const struct regulator_ops tps65219_ldos_1_2_ops = {
+ .set_voltage_sel = regulator_set_voltage_sel_regmap,
+ .list_voltage = regulator_list_voltage_linear_range,
+ .map_voltage = regulator_map_voltage_linear_range,
+- .set_bypass = tps65219_set_bypass,
++ .set_bypass = regulator_set_bypass_regmap,
+ .get_bypass = regulator_get_bypass_regmap,
+ };
+
+@@ -367,7 +349,7 @@ static int tps65219_regulator_probe(struct platform_device *pdev)
+ irq_data[i].type = irq_type;
+
+ tps65219_get_rdev_by_name(irq_type->regulator_name, rdevtbl, rdev);
+- if (rdev < 0) {
++ if (IS_ERR(rdev)) {
+ dev_err(tps->dev, "Failed to get rdev for %s\n",
+ irq_type->regulator_name);
+ return -EINVAL;
+diff --git a/drivers/remoteproc/mtk_scp_ipi.c b/drivers/remoteproc/mtk_scp_ipi.c
+index 00f041ebcde63..4c0d121c2f54d 100644
+--- a/drivers/remoteproc/mtk_scp_ipi.c
++++ b/drivers/remoteproc/mtk_scp_ipi.c
+@@ -164,21 +164,21 @@ int scp_ipi_send(struct mtk_scp *scp, u32 id, void *buf, unsigned int len,
+ WARN_ON(len > sizeof(send_obj->share_buf)) || WARN_ON(!buf))
+ return -EINVAL;
+
+- mutex_lock(&scp->send_lock);
+-
+ ret = clk_prepare_enable(scp->clk);
+ if (ret) {
+ dev_err(scp->dev, "failed to enable clock\n");
+- goto unlock_mutex;
++ return ret;
+ }
+
++ mutex_lock(&scp->send_lock);
++
+ /* Wait until SCP receives the last command */
+ timeout = jiffies + msecs_to_jiffies(2000);
+ do {
+ if (time_after(jiffies, timeout)) {
+ dev_err(scp->dev, "%s: IPI timeout!\n", __func__);
+ ret = -ETIMEDOUT;
+- goto clock_disable;
++ goto unlock_mutex;
+ }
+ } while (readl(scp->reg_base + scp->data->host_to_scp_reg));
+
+@@ -205,10 +205,9 @@ int scp_ipi_send(struct mtk_scp *scp, u32 id, void *buf, unsigned int len,
+ ret = 0;
+ }
+
+-clock_disable:
+- clk_disable_unprepare(scp->clk);
+ unlock_mutex:
+ mutex_unlock(&scp->send_lock);
++ clk_disable_unprepare(scp->clk);
+
+ return ret;
+ }
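The scp_ipi_send() fix enables the clock before taking send_lock and
releases in strict reverse order (unlock, then clk_disable_unprepare()),
so a clock failure never returns with the mutex held and the critical
section stays minimal. The ordering as a runnable sketch (stubs stand
in for the clock API):

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t send_lock = PTHREAD_MUTEX_INITIALIZER;

    static int clk_enable(void)   { puts("clk on");  return 0; }
    static void clk_disable(void) { puts("clk off"); }

    /* Acquire order: clock, then lock. Release order: lock, then clock. */
    static int ipi_send(void)
    {
        if (clk_enable())
            return -1;              /* nothing locked yet: plain return */

        pthread_mutex_lock(&send_lock);
        puts("send");               /* critical section only */
        pthread_mutex_unlock(&send_lock);

        clk_disable();
        return 0;
    }

    int main(void) { return ipi_send(); }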
+diff --git a/drivers/remoteproc/qcom_q6v5_mss.c b/drivers/remoteproc/qcom_q6v5_mss.c
+index fddb63cffee07..7dbab5fcbe1e7 100644
+--- a/drivers/remoteproc/qcom_q6v5_mss.c
++++ b/drivers/remoteproc/qcom_q6v5_mss.c
+@@ -10,7 +10,6 @@
+ #include <linux/clk.h>
+ #include <linux/delay.h>
+ #include <linux/devcoredump.h>
+-#include <linux/dma-map-ops.h>
+ #include <linux/dma-mapping.h>
+ #include <linux/interrupt.h>
+ #include <linux/kernel.h>
+@@ -18,6 +17,7 @@
+ #include <linux/module.h>
+ #include <linux/of_address.h>
+ #include <linux/of_device.h>
++#include <linux/of_reserved_mem.h>
+ #include <linux/platform_device.h>
+ #include <linux/pm_domain.h>
+ #include <linux/pm_runtime.h>
+@@ -211,6 +211,9 @@ struct q6v5 {
+ size_t mba_size;
+ size_t dp_size;
+
++ phys_addr_t mdata_phys;
++ size_t mdata_size;
++
+ phys_addr_t mpss_phys;
+ phys_addr_t mpss_reloc;
+ size_t mpss_size;
+@@ -933,52 +936,47 @@ static void q6v5proc_halt_axi_port(struct q6v5 *qproc,
+ static int q6v5_mpss_init_image(struct q6v5 *qproc, const struct firmware *fw,
+ const char *fw_name)
+ {
+- unsigned long dma_attrs = DMA_ATTR_FORCE_CONTIGUOUS | DMA_ATTR_NO_KERNEL_MAPPING;
+- unsigned long flags = VM_DMA_COHERENT | VM_FLUSH_RESET_PERMS;
+- struct page **pages;
+- struct page *page;
++ unsigned long dma_attrs = DMA_ATTR_FORCE_CONTIGUOUS;
+ dma_addr_t phys;
+ void *metadata;
+ int mdata_perm;
+ int xferop_ret;
+ size_t size;
+- void *vaddr;
+- int count;
++ void *ptr;
+ int ret;
+- int i;
+
+ metadata = qcom_mdt_read_metadata(fw, &size, fw_name, qproc->dev);
+ if (IS_ERR(metadata))
+ return PTR_ERR(metadata);
+
+- page = dma_alloc_attrs(qproc->dev, size, &phys, GFP_KERNEL, dma_attrs);
+- if (!page) {
+- kfree(metadata);
+- dev_err(qproc->dev, "failed to allocate mdt buffer\n");
+- return -ENOMEM;
+- }
+-
+- count = PAGE_ALIGN(size) >> PAGE_SHIFT;
+- pages = kmalloc_array(count, sizeof(struct page *), GFP_KERNEL);
+- if (!pages) {
+- ret = -ENOMEM;
+- goto free_dma_attrs;
+- }
+-
+- for (i = 0; i < count; i++)
+- pages[i] = nth_page(page, i);
++ if (qproc->mdata_phys) {
++ if (size > qproc->mdata_size) {
++ ret = -EINVAL;
++ dev_err(qproc->dev, "metadata size outside memory range\n");
++ goto free_metadata;
++ }
+
+- vaddr = vmap(pages, count, flags, pgprot_dmacoherent(PAGE_KERNEL));
+- kfree(pages);
+- if (!vaddr) {
+- dev_err(qproc->dev, "unable to map memory region: %pa+%zx\n", &phys, size);
+- ret = -EBUSY;
+- goto free_dma_attrs;
++ phys = qproc->mdata_phys;
++ ptr = memremap(qproc->mdata_phys, size, MEMREMAP_WC);
++ if (!ptr) {
++ ret = -EBUSY;
++ dev_err(qproc->dev, "unable to map memory region: %pa+%zx\n",
++ &qproc->mdata_phys, size);
++ goto free_metadata;
++ }
++ } else {
++ ptr = dma_alloc_attrs(qproc->dev, size, &phys, GFP_KERNEL, dma_attrs);
++ if (!ptr) {
++ ret = -ENOMEM;
++ dev_err(qproc->dev, "failed to allocate mdt buffer\n");
++ goto free_metadata;
++ }
+ }
+
+- memcpy(vaddr, metadata, size);
++ memcpy(ptr, metadata, size);
+
+- vunmap(vaddr);
++ if (qproc->mdata_phys)
++ memunmap(ptr);
+
+ /* Hypervisor mapping to access metadata by modem */
+ mdata_perm = BIT(QCOM_SCM_VMID_HLOS);
+@@ -1008,7 +1006,9 @@ static int q6v5_mpss_init_image(struct q6v5 *qproc, const struct firmware *fw,
+ "mdt buffer not reclaimed system may become unstable\n");
+
+ free_dma_attrs:
+- dma_free_attrs(qproc->dev, size, page, phys, dma_attrs);
++ if (!qproc->mdata_phys)
++ dma_free_attrs(qproc->dev, size, ptr, phys, dma_attrs);
++free_metadata:
+ kfree(metadata);
+
+ return ret < 0 ? ret : 0;
+@@ -1836,6 +1836,7 @@ static int q6v5_init_reset(struct q6v5 *qproc)
+ static int q6v5_alloc_memory_region(struct q6v5 *qproc)
+ {
+ struct device_node *child;
++ struct reserved_mem *rmem;
+ struct device_node *node;
+ struct resource r;
+ int ret;
+@@ -1882,6 +1883,26 @@ static int q6v5_alloc_memory_region(struct q6v5 *qproc)
+ qproc->mpss_phys = qproc->mpss_reloc = r.start;
+ qproc->mpss_size = resource_size(&r);
+
++ if (!child) {
++ node = of_parse_phandle(qproc->dev->of_node, "memory-region", 2);
++ } else {
++ child = of_get_child_by_name(qproc->dev->of_node, "metadata");
++ node = of_parse_phandle(child, "memory-region", 0);
++ of_node_put(child);
++ }
++
++ if (!node)
++ return 0;
++
++ rmem = of_reserved_mem_lookup(node);
++ if (!rmem) {
++ dev_err(qproc->dev, "unable to resolve metadata region\n");
++ return -EINVAL;
++ }
++
++ qproc->mdata_phys = rmem->base;
++ qproc->mdata_size = rmem->size;
++
+ return 0;
+ }
+
+diff --git a/drivers/rpmsg/qcom_glink_native.c b/drivers/rpmsg/qcom_glink_native.c
+index 115c0a1eddb10..35df1b0a515bf 100644
+--- a/drivers/rpmsg/qcom_glink_native.c
++++ b/drivers/rpmsg/qcom_glink_native.c
+@@ -954,6 +954,7 @@ static void qcom_glink_handle_intent(struct qcom_glink *glink,
+ spin_unlock_irqrestore(&glink->idr_lock, flags);
+ if (!channel) {
+ dev_err(glink->dev, "intents for non-existing channel\n");
++ qcom_glink_rx_advance(glink, ALIGN(msglen, 8));
+ return;
+ }
+
+@@ -1446,6 +1447,7 @@ static void qcom_glink_rpdev_release(struct device *dev)
+ {
+ struct rpmsg_device *rpdev = to_rpmsg_device(dev);
+
++ kfree(rpdev->driver_override);
+ kfree(rpdev);
+ }
+
+@@ -1689,6 +1691,7 @@ static void qcom_glink_device_release(struct device *dev)
+
+ /* Release qcom_glink_alloc_channel() reference */
+ kref_put(&channel->refcount, qcom_glink_channel_release);
++ kfree(rpdev->driver_override);
+ kfree(rpdev);
+ }
+
+diff --git a/drivers/rtc/rtc-pm8xxx.c b/drivers/rtc/rtc-pm8xxx.c
+index 716e5d9ad74d1..d114f0da537d2 100644
+--- a/drivers/rtc/rtc-pm8xxx.c
++++ b/drivers/rtc/rtc-pm8xxx.c
+@@ -221,7 +221,6 @@ static int pm8xxx_rtc_set_alarm(struct device *dev, struct rtc_wkalrm *alarm)
+ {
+ int rc, i;
+ u8 value[NUM_8_BIT_RTC_REGS];
+- unsigned int ctrl_reg;
+ unsigned long secs, irq_flags;
+ struct pm8xxx_rtc *rtc_dd = dev_get_drvdata(dev);
+ const struct pm8xxx_rtc_regs *regs = rtc_dd->regs;
+@@ -233,6 +232,11 @@ static int pm8xxx_rtc_set_alarm(struct device *dev, struct rtc_wkalrm *alarm)
+ secs >>= 8;
+ }
+
++ rc = regmap_update_bits(rtc_dd->regmap, regs->alarm_ctrl,
++ regs->alarm_en, 0);
++ if (rc)
++ return rc;
++
+ spin_lock_irqsave(&rtc_dd->ctrl_reg_lock, irq_flags);
+
+ rc = regmap_bulk_write(rtc_dd->regmap, regs->alarm_rw, value,
+@@ -242,19 +246,11 @@ static int pm8xxx_rtc_set_alarm(struct device *dev, struct rtc_wkalrm *alarm)
+ goto rtc_rw_fail;
+ }
+
+- rc = regmap_read(rtc_dd->regmap, regs->alarm_ctrl, &ctrl_reg);
+- if (rc)
+- goto rtc_rw_fail;
+-
+- if (alarm->enabled)
+- ctrl_reg |= regs->alarm_en;
+- else
+- ctrl_reg &= ~regs->alarm_en;
+-
+- rc = regmap_write(rtc_dd->regmap, regs->alarm_ctrl, ctrl_reg);
+- if (rc) {
+- dev_err(dev, "Write to RTC alarm control register failed\n");
+- goto rtc_rw_fail;
++ if (alarm->enabled) {
++ rc = regmap_update_bits(rtc_dd->regmap, regs->alarm_ctrl,
++ regs->alarm_en, regs->alarm_en);
++ if (rc)
++ goto rtc_rw_fail;
+ }
+
+ dev_dbg(dev, "Alarm Set for h:m:s=%ptRt, y-m-d=%ptRdr\n",
+diff --git a/drivers/s390/block/dasd_eckd.c b/drivers/s390/block/dasd_eckd.c
+index 5d0b9991e91a4..b20ce86b97b29 100644
+--- a/drivers/s390/block/dasd_eckd.c
++++ b/drivers/s390/block/dasd_eckd.c
+@@ -6956,8 +6956,10 @@ dasd_eckd_init(void)
+ return -ENOMEM;
+ dasd_vol_info_req = kmalloc(sizeof(*dasd_vol_info_req),
+ GFP_KERNEL | GFP_DMA);
+- if (!dasd_vol_info_req)
++ if (!dasd_vol_info_req) {
++ kfree(dasd_reserve_req);
+ return -ENOMEM;
++ }
+ pe_handler_worker = kmalloc(sizeof(*pe_handler_worker),
+ GFP_KERNEL | GFP_DMA);
+ if (!pe_handler_worker) {
+diff --git a/drivers/s390/char/sclp_early.c b/drivers/s390/char/sclp_early.c
+index c1c70a161c0e2..f480d6c7fd399 100644
+--- a/drivers/s390/char/sclp_early.c
++++ b/drivers/s390/char/sclp_early.c
+@@ -163,7 +163,7 @@ static void __init sclp_early_console_detect(struct init_sccb *sccb)
+ sclp.has_linemode = 1;
+ }
+
+-void __init sclp_early_adjust_va(void)
++void __init __no_sanitize_address sclp_early_adjust_va(void)
+ {
+ sclp_early_sccb = __va((unsigned long)sclp_early_sccb);
+ }
+diff --git a/drivers/s390/cio/vfio_ccw_drv.c b/drivers/s390/cio/vfio_ccw_drv.c
+index 54aba7cceb33f..ff538a086fc77 100644
+--- a/drivers/s390/cio/vfio_ccw_drv.c
++++ b/drivers/s390/cio/vfio_ccw_drv.c
+@@ -225,7 +225,7 @@ static void vfio_ccw_sch_shutdown(struct subchannel *sch)
+ struct vfio_ccw_parent *parent = dev_get_drvdata(&sch->dev);
+ struct vfio_ccw_private *private = dev_get_drvdata(&parent->dev);
+
+- if (WARN_ON(!private))
++ if (!private)
+ return;
+
+ vfio_ccw_fsm_event(private, VFIO_CCW_EVENT_CLOSE);
+diff --git a/drivers/s390/crypto/vfio_ap_ops.c b/drivers/s390/crypto/vfio_ap_ops.c
+index 9c01957e56b3f..2bba5ed83dfcf 100644
+--- a/drivers/s390/crypto/vfio_ap_ops.c
++++ b/drivers/s390/crypto/vfio_ap_ops.c
+@@ -349,6 +349,8 @@ static int vfio_ap_validate_nib(struct kvm_vcpu *vcpu, dma_addr_t *nib)
+ {
+ *nib = vcpu->run->s.regs.gprs[2];
+
++ if (!*nib)
++ return -EINVAL;
+ if (kvm_is_error_hva(gfn_to_hva(vcpu->kvm, *nib >> PAGE_SHIFT)))
+ return -EINVAL;
+
+@@ -1857,8 +1859,10 @@ int vfio_ap_mdev_probe_queue(struct ap_device *apdev)
+ return ret;
+
+ q = kzalloc(sizeof(*q), GFP_KERNEL);
+- if (!q)
+- return -ENOMEM;
++ if (!q) {
++ ret = -ENOMEM;
++ goto err_remove_group;
++ }
+
+ q->apqn = to_ap_queue(&apdev->device)->qid;
+ q->saved_isc = VFIO_AP_ISC_INVALID;
+@@ -1876,6 +1880,10 @@ int vfio_ap_mdev_probe_queue(struct ap_device *apdev)
+ release_update_locks_for_mdev(matrix_mdev);
+
+ return 0;
++
++err_remove_group:
++ sysfs_remove_group(&apdev->device.kobj, &vfio_queue_attr_group);
++ return ret;
+ }
+
+ void vfio_ap_mdev_remove_queue(struct ap_device *apdev)
+diff --git a/drivers/scsi/aacraid/aachba.c b/drivers/scsi/aacraid/aachba.c
+index 4d4cb47b38467..24c049eff157a 100644
+--- a/drivers/scsi/aacraid/aachba.c
++++ b/drivers/scsi/aacraid/aachba.c
+@@ -818,8 +818,8 @@ static void aac_probe_container_scsi_done(struct scsi_cmnd *scsi_cmnd)
+
+ int aac_probe_container(struct aac_dev *dev, int cid)
+ {
+- struct scsi_cmnd *scsicmd = kzalloc(sizeof(*scsicmd), GFP_KERNEL);
+- struct aac_cmd_priv *cmd_priv = aac_priv(scsicmd);
++ struct aac_cmd_priv *cmd_priv;
++ struct scsi_cmnd *scsicmd = kzalloc(sizeof(*scsicmd) + sizeof(*cmd_priv), GFP_KERNEL);
+ struct scsi_device *scsidev = kzalloc(sizeof(*scsidev), GFP_KERNEL);
+ int status;
+
+@@ -838,6 +838,7 @@ int aac_probe_container(struct aac_dev *dev, int cid)
+ while (scsicmd->device == scsidev)
+ schedule();
+ kfree(scsidev);
++ cmd_priv = aac_priv(scsicmd);
+ status = cmd_priv->status;
+ kfree(scsicmd);
+ return status;
+diff --git a/drivers/scsi/aic94xx/aic94xx_task.c b/drivers/scsi/aic94xx/aic94xx_task.c
+index ed119a3f6f2ed..7f02083001100 100644
+--- a/drivers/scsi/aic94xx/aic94xx_task.c
++++ b/drivers/scsi/aic94xx/aic94xx_task.c
+@@ -50,6 +50,9 @@ static int asd_map_scatterlist(struct sas_task *task,
+ dma_addr_t dma = dma_map_single(&asd_ha->pcidev->dev, p,
+ task->total_xfer_len,
+ task->data_dir);
++ if (dma_mapping_error(&asd_ha->pcidev->dev, dma))
++ return -ENOMEM;
++
+ sg_arr[0].bus_addr = cpu_to_le64((u64)dma);
+ sg_arr[0].size = cpu_to_le32(task->total_xfer_len);
+ sg_arr[0].flags |= ASD_SG_EL_LIST_EOL;
+diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
+index 182aaae603868..55a0d4013439f 100644
+--- a/drivers/scsi/lpfc/lpfc_sli.c
++++ b/drivers/scsi/lpfc/lpfc_sli.c
+@@ -20819,6 +20819,7 @@ lpfc_wr_object(struct lpfc_hba *phba, struct list_head *dmabuf_list,
+ struct lpfc_mbx_wr_object *wr_object;
+ LPFC_MBOXQ_t *mbox;
+ int rc = 0, i = 0;
++ int mbox_status = 0;
+ uint32_t shdr_status, shdr_add_status, shdr_add_status_2;
+ uint32_t shdr_change_status = 0, shdr_csf = 0;
+ uint32_t mbox_tmo;
+@@ -20864,11 +20865,15 @@ lpfc_wr_object(struct lpfc_hba *phba, struct list_head *dmabuf_list,
+ wr_object->u.request.bde_count = i;
+ bf_set(lpfc_wr_object_write_length, &wr_object->u.request, written);
+ if (!phba->sli4_hba.intr_enable)
+- rc = lpfc_sli_issue_mbox(phba, mbox, MBX_POLL);
++ mbox_status = lpfc_sli_issue_mbox(phba, mbox, MBX_POLL);
+ else {
+ mbox_tmo = lpfc_mbox_tmo_val(phba, mbox);
+- rc = lpfc_sli_issue_mbox_wait(phba, mbox, mbox_tmo);
++ mbox_status = lpfc_sli_issue_mbox_wait(phba, mbox, mbox_tmo);
+ }
++
++ /* The mbox status needs to be maintained to detect MBOX_TIMEOUT. */
++ rc = mbox_status;
++
+ /* The IOCTL status is embedded in the mailbox subheader. */
+ shdr_status = bf_get(lpfc_mbox_hdr_status,
+ &wr_object->header.cfg_shdr.response);
+@@ -20883,10 +20888,6 @@ lpfc_wr_object(struct lpfc_hba *phba, struct list_head *dmabuf_list,
+ &wr_object->u.response);
+ }
+
+- if (!phba->sli4_hba.intr_enable)
+- mempool_free(mbox, phba->mbox_mem_pool);
+- else if (rc != MBX_TIMEOUT)
+- mempool_free(mbox, phba->mbox_mem_pool);
+ if (shdr_status || shdr_add_status || shdr_add_status_2 || rc) {
+ lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
+ "3025 Write Object mailbox failed with "
+@@ -20904,6 +20905,12 @@ lpfc_wr_object(struct lpfc_hba *phba, struct list_head *dmabuf_list,
+ lpfc_log_fw_write_cmpl(phba, shdr_status, shdr_add_status,
+ shdr_add_status_2, shdr_change_status,
+ shdr_csf);
++
++ if (!phba->sli4_hba.intr_enable)
++ mempool_free(mbox, phba->mbox_mem_pool);
++ else if (mbox_status != MBX_TIMEOUT)
++ mempool_free(mbox, phba->mbox_mem_pool);
++
+ return rc;
+ }
+
+diff --git a/drivers/scsi/mpi3mr/mpi3mr_app.c b/drivers/scsi/mpi3mr/mpi3mr_app.c
+index 9baac224b2135..bff6377023979 100644
+--- a/drivers/scsi/mpi3mr/mpi3mr_app.c
++++ b/drivers/scsi/mpi3mr/mpi3mr_app.c
+@@ -293,7 +293,6 @@ out:
+ static long mpi3mr_get_all_tgt_info(struct mpi3mr_ioc *mrioc,
+ struct bsg_job *job)
+ {
+- long rval = -EINVAL;
+ u16 num_devices = 0, i = 0, size;
+ unsigned long flags;
+ struct mpi3mr_tgt_dev *tgtdev;
+@@ -304,7 +303,7 @@ static long mpi3mr_get_all_tgt_info(struct mpi3mr_ioc *mrioc,
+ if (job->request_payload.payload_len < sizeof(u32)) {
+ dprint_bsg_err(mrioc, "%s: invalid size argument\n",
+ __func__);
+- return rval;
++ return -EINVAL;
+ }
+
+ spin_lock_irqsave(&mrioc->tgtdev_lock, flags);
+@@ -312,7 +311,7 @@ static long mpi3mr_get_all_tgt_info(struct mpi3mr_ioc *mrioc,
+ num_devices++;
+ spin_unlock_irqrestore(&mrioc->tgtdev_lock, flags);
+
+- if ((job->request_payload.payload_len == sizeof(u32)) ||
++ if ((job->request_payload.payload_len <= sizeof(u64)) ||
+ list_empty(&mrioc->tgtdev_list)) {
+ sg_copy_from_buffer(job->request_payload.sg_list,
+ job->request_payload.sg_cnt,
+@@ -320,14 +319,14 @@ static long mpi3mr_get_all_tgt_info(struct mpi3mr_ioc *mrioc,
+ return 0;
+ }
+
+- kern_entrylen = (num_devices - 1) * sizeof(*devmap_info);
+- size = sizeof(*alltgt_info) + kern_entrylen;
++ kern_entrylen = num_devices * sizeof(*devmap_info);
++ size = sizeof(u64) + kern_entrylen;
+ alltgt_info = kzalloc(size, GFP_KERNEL);
+ if (!alltgt_info)
+ return -ENOMEM;
+
+ devmap_info = alltgt_info->dmi;
+- memset((u8 *)devmap_info, 0xFF, (kern_entrylen + sizeof(*devmap_info)));
++ memset((u8 *)devmap_info, 0xFF, kern_entrylen);
+ spin_lock_irqsave(&mrioc->tgtdev_lock, flags);
+ list_for_each_entry(tgtdev, &mrioc->tgtdev_list, list) {
+ if (i < num_devices) {
+@@ -344,25 +343,18 @@ static long mpi3mr_get_all_tgt_info(struct mpi3mr_ioc *mrioc,
+ num_devices = i;
+ spin_unlock_irqrestore(&mrioc->tgtdev_lock, flags);
+
+- memcpy(&alltgt_info->num_devices, &num_devices, sizeof(num_devices));
++ alltgt_info->num_devices = num_devices;
+
+- usr_entrylen = (job->request_payload.payload_len - sizeof(u32)) / sizeof(*devmap_info);
++ usr_entrylen = (job->request_payload.payload_len - sizeof(u64)) /
++ sizeof(*devmap_info);
+ usr_entrylen *= sizeof(*devmap_info);
+ min_entrylen = min(usr_entrylen, kern_entrylen);
+- if (min_entrylen && (!memcpy(&alltgt_info->dmi, devmap_info, min_entrylen))) {
+- dprint_bsg_err(mrioc, "%s:%d: device map info copy failed\n",
+- __func__, __LINE__);
+- rval = -EFAULT;
+- goto out;
+- }
+
+ sg_copy_from_buffer(job->request_payload.sg_list,
+ job->request_payload.sg_cnt,
+- alltgt_info, job->request_payload.payload_len);
+- rval = 0;
+-out:
++ alltgt_info, (min_entrylen + sizeof(u64)));
+ kfree(alltgt_info);
+- return rval;
++ return 0;
+ }
+ /**
+ * mpi3mr_get_change_count - Get topology change count
+diff --git a/drivers/scsi/mpi3mr/mpi3mr_os.c b/drivers/scsi/mpi3mr/mpi3mr_os.c
+index 3306de7170f64..6eaeba41072cb 100644
+--- a/drivers/scsi/mpi3mr/mpi3mr_os.c
++++ b/drivers/scsi/mpi3mr/mpi3mr_os.c
+@@ -4952,6 +4952,10 @@ mpi3mr_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ mpi3mr_init_drv_cmd(&mrioc->dev_rmhs_cmds[i],
+ MPI3MR_HOSTTAG_DEVRMCMD_MIN + i);
+
++ for (i = 0; i < MPI3MR_NUM_EVTACKCMD; i++)
++ mpi3mr_init_drv_cmd(&mrioc->evtack_cmds[i],
++ MPI3MR_HOSTTAG_EVTACKCMD_MIN + i);
++
+ if (pdev->revision)
+ mrioc->enable_segqueue = true;
+
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_base.c b/drivers/scsi/mpt3sas/mpt3sas_base.c
+index 69061545d9d2f..2ee9ea57554d7 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_base.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_base.c
+@@ -5849,6 +5849,9 @@ _base_release_memory_pools(struct MPT3SAS_ADAPTER *ioc)
+ }
+ dma_pool_destroy(ioc->pcie_sgl_dma_pool);
+ }
++ kfree(ioc->pcie_sg_lookup);
++ ioc->pcie_sg_lookup = NULL;
++
+ if (ioc->config_page) {
+ dexitprintk(ioc,
+ ioc_info(ioc, "config_page(0x%p): free\n",
+diff --git a/drivers/scsi/qla2xxx/qla_bsg.c b/drivers/scsi/qla2xxx/qla_bsg.c
+index cd75b179410d7..dba7bba788d76 100644
+--- a/drivers/scsi/qla2xxx/qla_bsg.c
++++ b/drivers/scsi/qla2xxx/qla_bsg.c
+@@ -278,8 +278,8 @@ qla2x00_process_els(struct bsg_job *bsg_job)
+ const char *type;
+ int req_sg_cnt, rsp_sg_cnt;
+ int rval = (DID_ERROR << 16);
+- uint16_t nextlid = 0;
+ uint32_t els_cmd = 0;
++ int qla_port_allocated = 0;
+
+ if (bsg_request->msgcode == FC_BSG_RPT_ELS) {
+ rport = fc_bsg_to_rport(bsg_job);
+@@ -329,9 +329,9 @@ qla2x00_process_els(struct bsg_job *bsg_job)
+ /* make sure the rport is logged in,
+ * if not perform fabric login
+ */
+- if (qla2x00_fabric_login(vha, fcport, &nextlid)) {
++ if (atomic_read(&fcport->state) != FCS_ONLINE) {
+ ql_dbg(ql_dbg_user, vha, 0x7003,
+- "Failed to login port %06X for ELS passthru.\n",
++ "Port %06X is not online for ELS passthru.\n",
+ fcport->d_id.b24);
+ rval = -EIO;
+ goto done;
+@@ -348,6 +348,7 @@ qla2x00_process_els(struct bsg_job *bsg_job)
+ goto done;
+ }
+
++ qla_port_allocated = 1;
+ /* Initialize all required fields of fcport */
+ fcport->vha = vha;
+ fcport->d_id.b.al_pa =
+@@ -432,7 +433,7 @@ done_unmap_sg:
+ goto done_free_fcport;
+
+ done_free_fcport:
+- if (bsg_request->msgcode != FC_BSG_RPT_ELS)
++ if (qla_port_allocated)
+ qla2x00_free_fcport(fcport);
+ done:
+ return rval;
+diff --git a/drivers/scsi/qla2xxx/qla_def.h b/drivers/scsi/qla2xxx/qla_def.h
+index a26a373be9da3..cd4eb11b07079 100644
+--- a/drivers/scsi/qla2xxx/qla_def.h
++++ b/drivers/scsi/qla2xxx/qla_def.h
+@@ -660,7 +660,7 @@ enum {
+
+ struct iocb_resource {
+ u8 res_type;
+- u8 pad;
++ u8 exch_cnt;
+ u16 iocb_cnt;
+ };
+
+@@ -3721,6 +3721,10 @@ struct qla_fw_resources {
+ u16 iocbs_limit;
+ u16 iocbs_qp_limit;
+ u16 iocbs_used;
++ u16 exch_total;
++ u16 exch_limit;
++ u16 exch_used;
++ u16 pad;
+ };
+
+ #define QLA_IOCB_PCT_LIMIT 95
+diff --git a/drivers/scsi/qla2xxx/qla_dfs.c b/drivers/scsi/qla2xxx/qla_dfs.c
+index 777808af56347..1925cc6897b68 100644
+--- a/drivers/scsi/qla2xxx/qla_dfs.c
++++ b/drivers/scsi/qla2xxx/qla_dfs.c
+@@ -235,7 +235,7 @@ qla_dfs_fw_resource_cnt_show(struct seq_file *s, void *unused)
+ uint16_t mb[MAX_IOCB_MB_REG];
+ int rc;
+ struct qla_hw_data *ha = vha->hw;
+- u16 iocbs_used, i;
++ u16 iocbs_used, i, exch_used;
+
+ rc = qla24xx_res_count_wait(vha, mb, SIZEOF_IOCB_MB_REG);
+ if (rc != QLA_SUCCESS) {
+@@ -263,13 +263,19 @@ qla_dfs_fw_resource_cnt_show(struct seq_file *s, void *unused)
+ if (ql2xenforce_iocb_limit) {
+ /* lock is not require. It's an estimate. */
+ iocbs_used = ha->base_qpair->fwres.iocbs_used;
++ exch_used = ha->base_qpair->fwres.exch_used;
+ for (i = 0; i < ha->max_qpairs; i++) {
+- if (ha->queue_pair_map[i])
++ if (ha->queue_pair_map[i]) {
+ iocbs_used += ha->queue_pair_map[i]->fwres.iocbs_used;
++ exch_used += ha->queue_pair_map[i]->fwres.exch_used;
++ }
+ }
+
+ seq_printf(s, "Driver: estimate iocb used [%d] high water limit [%d]\n",
+ iocbs_used, ha->base_qpair->fwres.iocbs_limit);
++
++	seq_printf(s, "estimate exchange used[%d] high water limit [%d]\n",
++ exch_used, ha->base_qpair->fwres.exch_limit);
+ }
+
+ return 0;
+diff --git a/drivers/scsi/qla2xxx/qla_edif.c b/drivers/scsi/qla2xxx/qla_edif.c
+index e4240aae5f9e3..38d5bda1f2748 100644
+--- a/drivers/scsi/qla2xxx/qla_edif.c
++++ b/drivers/scsi/qla2xxx/qla_edif.c
+@@ -925,7 +925,9 @@ qla_edif_app_getfcinfo(scsi_qla_host_t *vha, struct bsg_job *bsg_job)
+ if (!(fcport->flags & FCF_FCSP_DEVICE))
+ continue;
+
+- tdid = app_req.remote_pid;
++ tdid.b.domain = app_req.remote_pid.domain;
++ tdid.b.area = app_req.remote_pid.area;
++ tdid.b.al_pa = app_req.remote_pid.al_pa;
+
+ ql_dbg(ql_dbg_edif, vha, 0x2058,
+ "APP request entry - portid=%06x.\n", tdid.b24);
+@@ -2989,9 +2991,10 @@ qla28xx_start_scsi_edif(srb_t *sp)
+ tot_dsds = nseg;
+ req_cnt = qla24xx_calc_iocbs(vha, tot_dsds);
+
+- sp->iores.res_type = RESOURCE_INI;
++ sp->iores.res_type = RESOURCE_IOCB | RESOURCE_EXCH;
++ sp->iores.exch_cnt = 1;
+ sp->iores.iocb_cnt = req_cnt;
+- if (qla_get_iocbs(sp->qpair, &sp->iores))
++ if (qla_get_fw_resources(sp->qpair, &sp->iores))
+ goto queuing_error;
+
+ if (req->cnt < (req_cnt + 2)) {
+@@ -3185,7 +3188,7 @@ queuing_error:
+ mempool_free(sp->u.scmd.ct6_ctx, ha->ctx_mempool);
+ sp->u.scmd.ct6_ctx = NULL;
+ }
+- qla_put_iocbs(sp->qpair, &sp->iores);
++ qla_put_fw_resources(sp->qpair, &sp->iores);
+ spin_unlock_irqrestore(lock, flags);
+
+ return QLA_FUNCTION_FAILED;
+diff --git a/drivers/scsi/qla2xxx/qla_edif_bsg.h b/drivers/scsi/qla2xxx/qla_edif_bsg.h
+index 0931f4e4e127a..514c265ba86e2 100644
+--- a/drivers/scsi/qla2xxx/qla_edif_bsg.h
++++ b/drivers/scsi/qla2xxx/qla_edif_bsg.h
+@@ -89,7 +89,20 @@ struct app_plogi_reply {
+ struct app_pinfo_req {
+ struct app_id app_info;
+ uint8_t num_ports;
+- port_id_t remote_pid;
++ struct {
++#ifdef __BIG_ENDIAN
++ uint8_t domain;
++ uint8_t area;
++ uint8_t al_pa;
++#elif defined(__LITTLE_ENDIAN)
++ uint8_t al_pa;
++ uint8_t area;
++ uint8_t domain;
++#else
++#error "__BIG_ENDIAN or __LITTLE_ENDIAN must be defined!"
++#endif
++ uint8_t rsvd_1;
++ } remote_pid;
+ uint8_t version;
+ uint8_t pad[VND_CMD_PAD_SIZE];
+ uint8_t reserved[VND_CMD_APP_RESERVED_SIZE];
+diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
+index 8d9ecabb1aac1..8f2a968793913 100644
+--- a/drivers/scsi/qla2xxx/qla_init.c
++++ b/drivers/scsi/qla2xxx/qla_init.c
+@@ -128,12 +128,14 @@ static void qla24xx_abort_iocb_timeout(void *data)
+ sp->cmd_sp)) {
+ qpair->req->outstanding_cmds[handle] = NULL;
+ cmdsp_found = 1;
++ qla_put_fw_resources(qpair, &sp->cmd_sp->iores);
+ }
+
+ /* removing the abort */
+ if (qpair->req->outstanding_cmds[handle] == sp) {
+ qpair->req->outstanding_cmds[handle] = NULL;
+ sp_found = 1;
++ qla_put_fw_resources(qpair, &sp->iores);
+ break;
+ }
+ }
+@@ -2000,6 +2002,7 @@ qla2x00_tmf_iocb_timeout(void *data)
+ for (h = 1; h < sp->qpair->req->num_outstanding_cmds; h++) {
+ if (sp->qpair->req->outstanding_cmds[h] == sp) {
+ sp->qpair->req->outstanding_cmds[h] = NULL;
++ qla_put_fw_resources(sp->qpair, &sp->iores);
+ break;
+ }
+ }
+@@ -2073,7 +2076,6 @@ qla2x00_async_tm_cmd(fc_port_t *fcport, uint32_t flags, uint32_t lun,
+ done_free_sp:
+ /* ref: INIT */
+ kref_put(&sp->cmd_kref, qla2x00_sp_release);
+- fcport->flags &= ~FCF_ASYNC_SENT;
+ done:
+ return rval;
+ }
+@@ -3943,6 +3945,12 @@ void qla_init_iocb_limit(scsi_qla_host_t *vha)
+ ha->base_qpair->fwres.iocbs_limit = limit;
+ ha->base_qpair->fwres.iocbs_qp_limit = limit / num_qps;
+ ha->base_qpair->fwres.iocbs_used = 0;
++
++ ha->base_qpair->fwres.exch_total = ha->orig_fw_xcb_count;
++ ha->base_qpair->fwres.exch_limit = (ha->orig_fw_xcb_count *
++ QLA_IOCB_PCT_LIMIT) / 100;
++ ha->base_qpair->fwres.exch_used = 0;
++
+ for (i = 0; i < ha->max_qpairs; i++) {
+ if (ha->queue_pair_map[i]) {
+ ha->queue_pair_map[i]->fwres.iocbs_total =
+@@ -3951,6 +3959,10 @@ void qla_init_iocb_limit(scsi_qla_host_t *vha)
+ ha->queue_pair_map[i]->fwres.iocbs_qp_limit =
+ limit / num_qps;
+ ha->queue_pair_map[i]->fwres.iocbs_used = 0;
++ ha->queue_pair_map[i]->fwres.exch_total = ha->orig_fw_xcb_count;
++ ha->queue_pair_map[i]->fwres.exch_limit =
++ (ha->orig_fw_xcb_count * QLA_IOCB_PCT_LIMIT) / 100;
++ ha->queue_pair_map[i]->fwres.exch_used = 0;
+ }
+ }
+ }
+diff --git a/drivers/scsi/qla2xxx/qla_inline.h b/drivers/scsi/qla2xxx/qla_inline.h
+index 5185dc5daf80d..b0ee307b5d4b9 100644
+--- a/drivers/scsi/qla2xxx/qla_inline.h
++++ b/drivers/scsi/qla2xxx/qla_inline.h
+@@ -380,24 +380,26 @@ qla2xxx_get_fc4_priority(struct scsi_qla_host *vha)
+
+ enum {
+ RESOURCE_NONE,
+- RESOURCE_INI,
++ RESOURCE_IOCB = BIT_0,
++ RESOURCE_EXCH = BIT_1, /* exchange */
++ RESOURCE_FORCE = BIT_2,
+ };
+
+ static inline int
+-qla_get_iocbs(struct qla_qpair *qp, struct iocb_resource *iores)
++qla_get_fw_resources(struct qla_qpair *qp, struct iocb_resource *iores)
+ {
+ u16 iocbs_used, i;
++ u16 exch_used;
+ struct qla_hw_data *ha = qp->vha->hw;
+
+ if (!ql2xenforce_iocb_limit) {
+ iores->res_type = RESOURCE_NONE;
+ return 0;
+ }
++ if (iores->res_type & RESOURCE_FORCE)
++ goto force;
+
+- if ((iores->iocb_cnt + qp->fwres.iocbs_used) < qp->fwres.iocbs_qp_limit) {
+- qp->fwres.iocbs_used += iores->iocb_cnt;
+- return 0;
+- } else {
++ if ((iores->iocb_cnt + qp->fwres.iocbs_used) >= qp->fwres.iocbs_qp_limit) {
+ /* no need to acquire qpair lock. It's just rough calculation */
+ iocbs_used = ha->base_qpair->fwres.iocbs_used;
+ for (i = 0; i < ha->max_qpairs; i++) {
+@@ -405,30 +407,49 @@ qla_get_iocbs(struct qla_qpair *qp, struct iocb_resource *iores)
+ iocbs_used += ha->queue_pair_map[i]->fwres.iocbs_used;
+ }
+
+- if ((iores->iocb_cnt + iocbs_used) < qp->fwres.iocbs_limit) {
+- qp->fwres.iocbs_used += iores->iocb_cnt;
+- return 0;
+- } else {
++ if ((iores->iocb_cnt + iocbs_used) >= qp->fwres.iocbs_limit) {
++ iores->res_type = RESOURCE_NONE;
++ return -ENOSPC;
++ }
++ }
++
++ if (iores->res_type & RESOURCE_EXCH) {
++ exch_used = ha->base_qpair->fwres.exch_used;
++ for (i = 0; i < ha->max_qpairs; i++) {
++ if (ha->queue_pair_map[i])
++ exch_used += ha->queue_pair_map[i]->fwres.exch_used;
++ }
++
++ if ((exch_used + iores->exch_cnt) >= qp->fwres.exch_limit) {
+ iores->res_type = RESOURCE_NONE;
+ return -ENOSPC;
+ }
+ }
++force:
++ qp->fwres.iocbs_used += iores->iocb_cnt;
++ qp->fwres.exch_used += iores->exch_cnt;
++ return 0;
+ }
+
+ static inline void
+-qla_put_iocbs(struct qla_qpair *qp, struct iocb_resource *iores)
++qla_put_fw_resources(struct qla_qpair *qp, struct iocb_resource *iores)
+ {
+- switch (iores->res_type) {
+- case RESOURCE_NONE:
+- break;
+- default:
++ if (iores->res_type & RESOURCE_IOCB) {
+ if (qp->fwres.iocbs_used >= iores->iocb_cnt) {
+ qp->fwres.iocbs_used -= iores->iocb_cnt;
+ } else {
+- // should not happen
++ /* should not happen */
+ qp->fwres.iocbs_used = 0;
+ }
+- break;
++ }
++
++ if (iores->res_type & RESOURCE_EXCH) {
++ if (qp->fwres.exch_used >= iores->exch_cnt) {
++ qp->fwres.exch_used -= iores->exch_cnt;
++ } else {
++ /* should not happen */
++ qp->fwres.exch_used = 0;
++ }
+ }
+ iores->res_type = RESOURCE_NONE;
+ }
+diff --git a/drivers/scsi/qla2xxx/qla_iocb.c b/drivers/scsi/qla2xxx/qla_iocb.c
+index 42ce4e1fe7441..4f48f098ea5a6 100644
+--- a/drivers/scsi/qla2xxx/qla_iocb.c
++++ b/drivers/scsi/qla2xxx/qla_iocb.c
+@@ -1589,9 +1589,10 @@ qla24xx_start_scsi(srb_t *sp)
+ tot_dsds = nseg;
+ req_cnt = qla24xx_calc_iocbs(vha, tot_dsds);
+
+- sp->iores.res_type = RESOURCE_INI;
++ sp->iores.res_type = RESOURCE_IOCB | RESOURCE_EXCH;
++ sp->iores.exch_cnt = 1;
+ sp->iores.iocb_cnt = req_cnt;
+- if (qla_get_iocbs(sp->qpair, &sp->iores))
++ if (qla_get_fw_resources(sp->qpair, &sp->iores))
+ goto queuing_error;
+
+ if (req->cnt < (req_cnt + 2)) {
+@@ -1678,7 +1679,7 @@ queuing_error:
+ if (tot_dsds)
+ scsi_dma_unmap(cmd);
+
+- qla_put_iocbs(sp->qpair, &sp->iores);
++ qla_put_fw_resources(sp->qpair, &sp->iores);
+ spin_unlock_irqrestore(&ha->hardware_lock, flags);
+
+ return QLA_FUNCTION_FAILED;
+@@ -1793,9 +1794,10 @@ qla24xx_dif_start_scsi(srb_t *sp)
+ tot_prot_dsds = nseg;
+ tot_dsds += nseg;
+
+- sp->iores.res_type = RESOURCE_INI;
++ sp->iores.res_type = RESOURCE_IOCB | RESOURCE_EXCH;
++ sp->iores.exch_cnt = 1;
+ sp->iores.iocb_cnt = qla24xx_calc_iocbs(vha, tot_dsds);
+- if (qla_get_iocbs(sp->qpair, &sp->iores))
++ if (qla_get_fw_resources(sp->qpair, &sp->iores))
+ goto queuing_error;
+
+ if (req->cnt < (req_cnt + 2)) {
+@@ -1883,7 +1885,7 @@ queuing_error:
+ }
+ /* Cleanup will be performed by the caller (queuecommand) */
+
+- qla_put_iocbs(sp->qpair, &sp->iores);
++ qla_put_fw_resources(sp->qpair, &sp->iores);
+ spin_unlock_irqrestore(&ha->hardware_lock, flags);
+
+ return QLA_FUNCTION_FAILED;
+@@ -1952,9 +1954,10 @@ qla2xxx_start_scsi_mq(srb_t *sp)
+ tot_dsds = nseg;
+ req_cnt = qla24xx_calc_iocbs(vha, tot_dsds);
+
+- sp->iores.res_type = RESOURCE_INI;
++ sp->iores.res_type = RESOURCE_IOCB | RESOURCE_EXCH;
++ sp->iores.exch_cnt = 1;
+ sp->iores.iocb_cnt = req_cnt;
+- if (qla_get_iocbs(sp->qpair, &sp->iores))
++ if (qla_get_fw_resources(sp->qpair, &sp->iores))
+ goto queuing_error;
+
+ if (req->cnt < (req_cnt + 2)) {
+@@ -2041,7 +2044,7 @@ queuing_error:
+ if (tot_dsds)
+ scsi_dma_unmap(cmd);
+
+- qla_put_iocbs(sp->qpair, &sp->iores);
++ qla_put_fw_resources(sp->qpair, &sp->iores);
+ spin_unlock_irqrestore(&qpair->qp_lock, flags);
+
+ return QLA_FUNCTION_FAILED;
+@@ -2171,9 +2174,10 @@ qla2xxx_dif_start_scsi_mq(srb_t *sp)
+ tot_prot_dsds = nseg;
+ tot_dsds += nseg;
+
+- sp->iores.res_type = RESOURCE_INI;
++ sp->iores.res_type = RESOURCE_IOCB | RESOURCE_EXCH;
++ sp->iores.exch_cnt = 1;
+ sp->iores.iocb_cnt = qla24xx_calc_iocbs(vha, tot_dsds);
+- if (qla_get_iocbs(sp->qpair, &sp->iores))
++ if (qla_get_fw_resources(sp->qpair, &sp->iores))
+ goto queuing_error;
+
+ if (req->cnt < (req_cnt + 2)) {
+@@ -2260,7 +2264,7 @@ queuing_error:
+ }
+ /* Cleanup will be performed by the caller (queuecommand) */
+
+- qla_put_iocbs(sp->qpair, &sp->iores);
++ qla_put_fw_resources(sp->qpair, &sp->iores);
+ spin_unlock_irqrestore(&qpair->qp_lock, flags);
+
+ return QLA_FUNCTION_FAILED;
+@@ -3813,6 +3817,65 @@ qla24xx_prlo_iocb(srb_t *sp, struct logio_entry_24xx *logio)
+ logio->vp_index = sp->fcport->vha->vp_idx;
+ }
+
++int qla_get_iocbs_resource(struct srb *sp)
++{
++ bool get_exch;
++ bool push_it_through = false;
++
++ if (!ql2xenforce_iocb_limit) {
++ sp->iores.res_type = RESOURCE_NONE;
++ return 0;
++ }
++ sp->iores.res_type = RESOURCE_NONE;
++
++ switch (sp->type) {
++ case SRB_TM_CMD:
++ case SRB_PRLI_CMD:
++ case SRB_ADISC_CMD:
++ push_it_through = true;
++ fallthrough;
++ case SRB_LOGIN_CMD:
++ case SRB_ELS_CMD_RPT:
++ case SRB_ELS_CMD_HST:
++ case SRB_ELS_CMD_HST_NOLOGIN:
++ case SRB_CT_CMD:
++ case SRB_NVME_LS:
++ case SRB_ELS_DCMD:
++ get_exch = true;
++ break;
++
++ case SRB_FXIOCB_DCMD:
++ case SRB_FXIOCB_BCMD:
++ sp->iores.res_type = RESOURCE_NONE;
++ return 0;
++
++ case SRB_SA_UPDATE:
++ case SRB_SA_REPLACE:
++ case SRB_MB_IOCB:
++ case SRB_ABT_CMD:
++ case SRB_NACK_PLOGI:
++ case SRB_NACK_PRLI:
++ case SRB_NACK_LOGO:
++ case SRB_LOGOUT_CMD:
++ case SRB_CTRL_VP:
++ push_it_through = true;
++ fallthrough;
++ default:
++ get_exch = false;
++ }
++
++ sp->iores.res_type |= RESOURCE_IOCB;
++ sp->iores.iocb_cnt = 1;
++ if (get_exch) {
++ sp->iores.res_type |= RESOURCE_EXCH;
++ sp->iores.exch_cnt = 1;
++ }
++ if (push_it_through)
++ sp->iores.res_type |= RESOURCE_FORCE;
++
++ return qla_get_fw_resources(sp->qpair, &sp->iores);
++}
++
+ int
+ qla2x00_start_sp(srb_t *sp)
+ {
+@@ -3827,6 +3890,12 @@ qla2x00_start_sp(srb_t *sp)
+ return -EIO;
+
+ spin_lock_irqsave(qp->qp_lock_ptr, flags);
++ rval = qla_get_iocbs_resource(sp);
++ if (rval) {
++ spin_unlock_irqrestore(qp->qp_lock_ptr, flags);
++ return -EAGAIN;
++ }
++
+ pkt = __qla2x00_alloc_iocbs(sp->qpair, sp);
+ if (!pkt) {
+ rval = EAGAIN;
+@@ -3927,6 +3996,8 @@ qla2x00_start_sp(srb_t *sp)
+ wmb();
+ qla2x00_start_iocbs(vha, qp->req);
+ done:
++ if (rval)
++ qla_put_fw_resources(sp->qpair, &sp->iores);
+ spin_unlock_irqrestore(qp->qp_lock_ptr, flags);
+ return rval;
+ }
+diff --git a/drivers/scsi/qla2xxx/qla_isr.c b/drivers/scsi/qla2xxx/qla_isr.c
+index e19fde304e5c6..cbbd7014da939 100644
+--- a/drivers/scsi/qla2xxx/qla_isr.c
++++ b/drivers/scsi/qla2xxx/qla_isr.c
+@@ -3112,6 +3112,7 @@ qla25xx_process_bidir_status_iocb(scsi_qla_host_t *vha, void *pkt,
+ }
+ bsg_reply->reply_payload_rcv_len = 0;
+
++ qla_put_fw_resources(sp->qpair, &sp->iores);
+ done:
+ /* Return the vendor specific reply to API */
+ bsg_reply->reply_data.vendor_reply.vendor_rsp[0] = rval;
+@@ -3197,7 +3198,7 @@ qla2x00_status_entry(scsi_qla_host_t *vha, struct rsp_que *rsp, void *pkt)
+ }
+ return;
+ }
+- qla_put_iocbs(sp->qpair, &sp->iores);
++ qla_put_fw_resources(sp->qpair, &sp->iores);
+
+ if (sp->cmd_type != TYPE_SRB) {
+ req->outstanding_cmds[handle] = NULL;
+@@ -3362,8 +3363,6 @@ qla2x00_status_entry(scsi_qla_host_t *vha, struct rsp_que *rsp, void *pkt)
+ "Dropped frame(s) detected (0x%x of 0x%x bytes).\n",
+ resid, scsi_bufflen(cp));
+
+- vha->interface_err_cnt++;
+-
+ res = DID_ERROR << 16 | lscsi_status;
+ goto check_scsi_status;
+ }
+@@ -3618,7 +3617,6 @@ qla2x00_error_entry(scsi_qla_host_t *vha, struct rsp_que *rsp, sts_entry_t *pkt)
+ default:
+ sp = qla2x00_get_sp_from_handle(vha, func, req, pkt);
+ if (sp) {
+- qla_put_iocbs(sp->qpair, &sp->iores);
+ sp->done(sp, res);
+ return 0;
+ }
+diff --git a/drivers/scsi/qla2xxx/qla_nvme.c b/drivers/scsi/qla2xxx/qla_nvme.c
+index 02fdeb0d31ec4..c57e02a355219 100644
+--- a/drivers/scsi/qla2xxx/qla_nvme.c
++++ b/drivers/scsi/qla2xxx/qla_nvme.c
+@@ -170,18 +170,6 @@ out:
+ qla2xxx_rel_qpair_sp(sp->qpair, sp);
+ }
+
+-static void qla_nvme_ls_unmap(struct srb *sp, struct nvmefc_ls_req *fd)
+-{
+- if (sp->flags & SRB_DMA_VALID) {
+- struct srb_iocb *nvme = &sp->u.iocb_cmd;
+- struct qla_hw_data *ha = sp->fcport->vha->hw;
+-
+- dma_unmap_single(&ha->pdev->dev, nvme->u.nvme.cmd_dma,
+- fd->rqstlen, DMA_TO_DEVICE);
+- sp->flags &= ~SRB_DMA_VALID;
+- }
+-}
+-
+ static void qla_nvme_release_ls_cmd_kref(struct kref *kref)
+ {
+ struct srb *sp = container_of(kref, struct srb, cmd_kref);
+@@ -199,7 +187,6 @@ static void qla_nvme_release_ls_cmd_kref(struct kref *kref)
+
+ fd = priv->fd;
+
+- qla_nvme_ls_unmap(sp, fd);
+ fd->done(fd, priv->comp_status);
+ out:
+ qla2x00_rel_sp(sp);
+@@ -365,13 +352,10 @@ static int qla_nvme_ls_req(struct nvme_fc_local_port *lport,
+ nvme->u.nvme.rsp_len = fd->rsplen;
+ nvme->u.nvme.rsp_dma = fd->rspdma;
+ nvme->u.nvme.timeout_sec = fd->timeout;
+- nvme->u.nvme.cmd_dma = dma_map_single(&ha->pdev->dev, fd->rqstaddr,
+- fd->rqstlen, DMA_TO_DEVICE);
++ nvme->u.nvme.cmd_dma = fd->rqstdma;
+ dma_sync_single_for_device(&ha->pdev->dev, nvme->u.nvme.cmd_dma,
+ fd->rqstlen, DMA_TO_DEVICE);
+
+- sp->flags |= SRB_DMA_VALID;
+-
+ rval = qla2x00_start_sp(sp);
+ if (rval != QLA_SUCCESS) {
+ ql_log(ql_log_warn, vha, 0x700e,
+@@ -379,7 +363,6 @@ static int qla_nvme_ls_req(struct nvme_fc_local_port *lport,
+ wake_up(&sp->nvme_ls_waitq);
+ sp->priv = NULL;
+ priv->sp = NULL;
+- qla_nvme_ls_unmap(sp, fd);
+ qla2x00_rel_sp(sp);
+ return rval;
+ }
+@@ -445,13 +428,24 @@ static inline int qla2x00_start_nvme_mq(srb_t *sp)
+ goto queuing_error;
+ }
+ req_cnt = qla24xx_calc_iocbs(vha, tot_dsds);
++
++ sp->iores.res_type = RESOURCE_IOCB | RESOURCE_EXCH;
++ sp->iores.exch_cnt = 1;
++ sp->iores.iocb_cnt = req_cnt;
++ if (qla_get_fw_resources(sp->qpair, &sp->iores)) {
++ rval = -EBUSY;
++ goto queuing_error;
++ }
++
+ if (req->cnt < (req_cnt + 2)) {
+ if (IS_SHADOW_REG_CAPABLE(ha)) {
+ cnt = *req->out_ptr;
+ } else {
+ cnt = rd_reg_dword_relaxed(req->req_q_out);
+- if (qla2x00_check_reg16_for_disconnect(vha, cnt))
++ if (qla2x00_check_reg16_for_disconnect(vha, cnt)) {
++ rval = -EBUSY;
+ goto queuing_error;
++ }
+ }
+
+ if (req->ring_index < cnt)
+@@ -600,6 +594,8 @@ static inline int qla2x00_start_nvme_mq(srb_t *sp)
+ qla24xx_process_response_queue(vha, rsp);
+
+ queuing_error:
++ if (rval)
++ qla_put_fw_resources(sp->qpair, &sp->iores);
+ spin_unlock_irqrestore(&qpair->qp_lock, flags);
+
+ return rval;
+diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
+index 7fb28c207ee50..2d86f804872bf 100644
+--- a/drivers/scsi/qla2xxx/qla_os.c
++++ b/drivers/scsi/qla2xxx/qla_os.c
+@@ -7094,9 +7094,12 @@ qla2x00_do_dpc(void *data)
+ }
+ }
+ loop_resync_check:
+- if (test_and_clear_bit(LOOP_RESYNC_NEEDED,
++ if (!qla2x00_reset_active(base_vha) &&
++ test_and_clear_bit(LOOP_RESYNC_NEEDED,
+ &base_vha->dpc_flags)) {
+-
++ /*
++ * Allow abort_isp to complete before moving on to scanning.
++ */
+ ql_dbg(ql_dbg_dpc, base_vha, 0x400f,
+ "Loop resync scheduled.\n");
+
+@@ -7447,7 +7450,7 @@ qla2x00_timer(struct timer_list *t)
+
+ /* if the loop has been down for 4 minutes, reinit adapter */
+ if (atomic_dec_and_test(&vha->loop_down_timer) != 0) {
+- if (!(vha->device_flags & DFLG_NO_CABLE)) {
++ if (!(vha->device_flags & DFLG_NO_CABLE) && !vha->vp_idx) {
+ ql_log(ql_log_warn, vha, 0x6009,
+ "Loop down - aborting ISP.\n");
+
+diff --git a/drivers/scsi/ses.c b/drivers/scsi/ses.c
+index 0a1734f34587d..1707d6d144d21 100644
+--- a/drivers/scsi/ses.c
++++ b/drivers/scsi/ses.c
+@@ -433,8 +433,8 @@ int ses_match_host(struct enclosure_device *edev, void *data)
+ }
+ #endif /* 0 */
+
+-static void ses_process_descriptor(struct enclosure_component *ecomp,
+- unsigned char *desc)
++static int ses_process_descriptor(struct enclosure_component *ecomp,
++ unsigned char *desc, int max_desc_len)
+ {
+ int eip = desc[0] & 0x10;
+ int invalid = desc[0] & 0x80;
+@@ -445,22 +445,32 @@ static void ses_process_descriptor(struct enclosure_component *ecomp,
+ unsigned char *d;
+
+ if (invalid)
+- return;
++ return 0;
+
+ switch (proto) {
+ case SCSI_PROTOCOL_FCP:
+ if (eip) {
++ if (max_desc_len <= 7)
++ return 1;
+ d = desc + 4;
+ slot = d[3];
+ }
+ break;
+ case SCSI_PROTOCOL_SAS:
++
+ if (eip) {
++ if (max_desc_len <= 27)
++ return 1;
+ d = desc + 4;
+ slot = d[3];
+ d = desc + 8;
+- } else
++ } else {
++ if (max_desc_len <= 23)
++ return 1;
+ d = desc + 4;
++ }
++
++
+ /* only take the phy0 addr */
+ addr = (u64)d[12] << 56 |
+ (u64)d[13] << 48 |
+@@ -477,6 +487,8 @@ static void ses_process_descriptor(struct enclosure_component *ecomp,
+ }
+ ecomp->slot = slot;
+ scomp->addr = addr;
++
++ return 0;
+ }
+
+ struct efd {
+@@ -549,7 +561,7 @@ static void ses_enclosure_data_process(struct enclosure_device *edev,
+ /* skip past overall descriptor */
+ desc_ptr += len + 4;
+ }
+- if (ses_dev->page10)
++ if (ses_dev->page10 && ses_dev->page10_len > 9)
+ addl_desc_ptr = ses_dev->page10 + 8;
+ type_ptr = ses_dev->page1_types;
+ components = 0;
+@@ -557,17 +569,22 @@ static void ses_enclosure_data_process(struct enclosure_device *edev,
+ for (j = 0; j < type_ptr[1]; j++) {
+ char *name = NULL;
+ struct enclosure_component *ecomp;
++ int max_desc_len;
+
+ if (desc_ptr) {
+- if (desc_ptr >= buf + page7_len) {
++ if (desc_ptr + 3 >= buf + page7_len) {
+ desc_ptr = NULL;
+ } else {
+ len = (desc_ptr[2] << 8) + desc_ptr[3];
+ desc_ptr += 4;
+- /* Add trailing zero - pushes into
+- * reserved space */
+- desc_ptr[len] = '\0';
+- name = desc_ptr;
++ if (desc_ptr + len > buf + page7_len)
++ desc_ptr = NULL;
++ else {
++ /* Add trailing zero - pushes into
++ * reserved space */
++ desc_ptr[len] = '\0';
++ name = desc_ptr;
++ }
+ }
+ }
+ if (type_ptr[0] == ENCLOSURE_COMPONENT_DEVICE ||
+@@ -583,10 +600,14 @@ static void ses_enclosure_data_process(struct enclosure_device *edev,
+ ecomp = &edev->component[components++];
+
+ if (!IS_ERR(ecomp)) {
+- if (addl_desc_ptr)
+- ses_process_descriptor(
+- ecomp,
+- addl_desc_ptr);
++ if (addl_desc_ptr) {
++ max_desc_len = ses_dev->page10_len -
++ (addl_desc_ptr - ses_dev->page10);
++ if (ses_process_descriptor(ecomp,
++ addl_desc_ptr,
++ max_desc_len))
++ addl_desc_ptr = NULL;
++ }
+ if (create)
+ enclosure_component_register(
+ ecomp);
+@@ -603,9 +624,11 @@ static void ses_enclosure_data_process(struct enclosure_device *edev,
+ /* these elements are optional */
+ type_ptr[0] == ENCLOSURE_COMPONENT_SCSI_TARGET_PORT ||
+ type_ptr[0] == ENCLOSURE_COMPONENT_SCSI_INITIATOR_PORT ||
+- type_ptr[0] == ENCLOSURE_COMPONENT_CONTROLLER_ELECTRONICS))
++ type_ptr[0] == ENCLOSURE_COMPONENT_CONTROLLER_ELECTRONICS)) {
+ addl_desc_ptr += addl_desc_ptr[1] + 2;
+-
++ if (addl_desc_ptr + 1 >= ses_dev->page10 + ses_dev->page10_len)
++ addl_desc_ptr = NULL;
++ }
+ }
+ }
+ kfree(buf);
+@@ -704,6 +727,12 @@ static int ses_intf_add(struct device *cdev,
+ type_ptr[0] == ENCLOSURE_COMPONENT_ARRAY_DEVICE)
+ components += type_ptr[1];
+ }
++
++ if (components == 0) {
++ sdev_printk(KERN_WARNING, sdev, "enclosure has no enumerated components\n");
++ goto err_free;
++ }
++
+ ses_dev->page1 = buf;
+ ses_dev->page1_len = len;
+ buf = NULL;
+@@ -827,7 +856,8 @@ static void ses_intf_remove_enclosure(struct scsi_device *sdev)
+ kfree(ses_dev->page2);
+ kfree(ses_dev);
+
+- kfree(edev->component[0].scratch);
++ if (edev->components)
++ kfree(edev->component[0].scratch);
+
+ put_device(&edev->edev);
+ enclosure_unregister(edev);
+diff --git a/drivers/scsi/snic/snic_debugfs.c b/drivers/scsi/snic/snic_debugfs.c
+index 57bdc3ba49d9c..9dd975b36b5bd 100644
+--- a/drivers/scsi/snic/snic_debugfs.c
++++ b/drivers/scsi/snic/snic_debugfs.c
+@@ -437,6 +437,6 @@ void snic_trc_debugfs_init(void)
+ void
+ snic_trc_debugfs_term(void)
+ {
+- debugfs_remove(debugfs_lookup(TRC_FILE, snic_glob->trc_root));
+- debugfs_remove(debugfs_lookup(TRC_ENABLE_FILE, snic_glob->trc_root));
++ debugfs_lookup_and_remove(TRC_FILE, snic_glob->trc_root);
++ debugfs_lookup_and_remove(TRC_ENABLE_FILE, snic_glob->trc_root);
+ }
+diff --git a/drivers/soundwire/cadence_master.c b/drivers/soundwire/cadence_master.c
+index a1de363eba3ff..27699f341f2c5 100644
+--- a/drivers/soundwire/cadence_master.c
++++ b/drivers/soundwire/cadence_master.c
+@@ -127,7 +127,8 @@ MODULE_PARM_DESC(cdns_mcp_int_mask, "Cadence MCP IntMask");
+
+ #define CDNS_MCP_CMD_BASE 0x80
+ #define CDNS_MCP_RESP_BASE 0x80
+-#define CDNS_MCP_CMD_LEN 0x20
++/* FIFO can hold 8 commands */
++#define CDNS_MCP_CMD_LEN 8
+ #define CDNS_MCP_CMD_WORD_LEN 0x4
+
+ #define CDNS_MCP_CMD_SSP_TAG BIT(31)
+diff --git a/drivers/spi/Kconfig b/drivers/spi/Kconfig
+index 3b1c0878bb857..930c6075b78cf 100644
+--- a/drivers/spi/Kconfig
++++ b/drivers/spi/Kconfig
+@@ -295,7 +295,6 @@ config SPI_DW_BT1
+ tristate "Baikal-T1 SPI driver for DW SPI core"
+ depends on MIPS_BAIKAL_T1 || COMPILE_TEST
+ select MULTIPLEXER
+- select MUX_MMIO
+ help
+ Baikal-T1 SoC is equipped with three DW APB SSI-based MMIO SPI
+ controllers. Two of them are pretty much normal: with IRQ, DMA,
+diff --git a/drivers/spi/spi-bcm63xx-hsspi.c b/drivers/spi/spi-bcm63xx-hsspi.c
+index b871fd810d801..02f56fc001b47 100644
+--- a/drivers/spi/spi-bcm63xx-hsspi.c
++++ b/drivers/spi/spi-bcm63xx-hsspi.c
+@@ -163,6 +163,7 @@ static int bcm63xx_hsspi_do_txrx(struct spi_device *spi, struct spi_transfer *t)
+ int step_size = HSSPI_BUFFER_LEN;
+ const u8 *tx = t->tx_buf;
+ u8 *rx = t->rx_buf;
++ u32 val = 0;
+
+ bcm63xx_hsspi_set_clk(bs, spi, t->speed_hz);
+ bcm63xx_hsspi_set_cs(bs, spi->chip_select, true);
+@@ -178,11 +179,16 @@ static int bcm63xx_hsspi_do_txrx(struct spi_device *spi, struct spi_transfer *t)
+ step_size -= HSSPI_OPCODE_LEN;
+
+ if ((opcode == HSSPI_OP_READ && t->rx_nbits == SPI_NBITS_DUAL) ||
+- (opcode == HSSPI_OP_WRITE && t->tx_nbits == SPI_NBITS_DUAL))
++ (opcode == HSSPI_OP_WRITE && t->tx_nbits == SPI_NBITS_DUAL)) {
+ opcode |= HSSPI_OP_MULTIBIT;
+
+- __raw_writel(1 << MODE_CTRL_MULTIDATA_WR_SIZE_SHIFT |
+- 1 << MODE_CTRL_MULTIDATA_RD_SIZE_SHIFT | 0xff,
++ if (t->rx_nbits == SPI_NBITS_DUAL)
++ val |= 1 << MODE_CTRL_MULTIDATA_RD_SIZE_SHIFT;
++ if (t->tx_nbits == SPI_NBITS_DUAL)
++ val |= 1 << MODE_CTRL_MULTIDATA_WR_SIZE_SHIFT;
++ }
++
++ __raw_writel(val | 0xff,
+ bs->regs + HSSPI_PROFILE_MODE_CTRL_REG(chip_select));
+
+ while (pending > 0) {
+diff --git a/drivers/spi/spi-intel.c b/drivers/spi/spi-intel.c
+index f619212b0d5c3..627287925fedb 100644
+--- a/drivers/spi/spi-intel.c
++++ b/drivers/spi/spi-intel.c
+@@ -1368,14 +1368,14 @@ static int intel_spi_populate_chip(struct intel_spi *ispi)
+ if (!spi_new_device(ispi->master, &chip))
+ return -ENODEV;
+
+- /* Add the second chip if present */
+- if (ispi->master->num_chipselect < 2)
+- return 0;
+-
+ ret = intel_spi_read_desc(ispi);
+ if (ret)
+ return ret;
+
++ /* Add the second chip if present */
++ if (ispi->master->num_chipselect < 2)
++ return 0;
++
+ chip.platform_data = NULL;
+ chip.chip_select = 1;
+
+diff --git a/drivers/spi/spi-sn-f-ospi.c b/drivers/spi/spi-sn-f-ospi.c
+index 348c6e1edd38a..333b22dfd8dba 100644
+--- a/drivers/spi/spi-sn-f-ospi.c
++++ b/drivers/spi/spi-sn-f-ospi.c
+@@ -611,7 +611,7 @@ static int f_ospi_probe(struct platform_device *pdev)
+ return -ENOMEM;
+
+ ctlr->mode_bits = SPI_TX_DUAL | SPI_TX_QUAD | SPI_TX_OCTAL
+- | SPI_RX_DUAL | SPI_RX_QUAD | SPI_TX_OCTAL
++ | SPI_RX_DUAL | SPI_RX_QUAD | SPI_RX_OCTAL
+ | SPI_MODE_0 | SPI_MODE_1 | SPI_LSB_FIRST;
+ ctlr->mem_ops = &f_ospi_mem_ops;
+ ctlr->bus_num = -1;
+diff --git a/drivers/spi/spi-synquacer.c b/drivers/spi/spi-synquacer.c
+index 47cbe73137c23..dc188f9202c97 100644
+--- a/drivers/spi/spi-synquacer.c
++++ b/drivers/spi/spi-synquacer.c
+@@ -472,10 +472,9 @@ static int synquacer_spi_transfer_one(struct spi_master *master,
+ read_fifo(sspi);
+ }
+
+- if (status < 0) {
+- dev_err(sspi->dev, "failed to transfer. status: 0x%x\n",
+- status);
+- return status;
++ if (status == 0) {
++ dev_err(sspi->dev, "failed to transfer. Timeout.\n");
++ return -ETIMEDOUT;
+ }
+
+ return 0;
+diff --git a/drivers/staging/media/atomisp/Kconfig b/drivers/staging/media/atomisp/Kconfig
+index 2c8d7fdcc5f7a..c9bff98e5309a 100644
+--- a/drivers/staging/media/atomisp/Kconfig
++++ b/drivers/staging/media/atomisp/Kconfig
+@@ -14,7 +14,7 @@ config VIDEO_ATOMISP
+ depends on VIDEO_DEV && INTEL_ATOMISP
+ depends on PMIC_OPREGION
+ select IOSF_MBI
+- select VIDEOBUF_VMALLOC
++ select VIDEOBUF2_VMALLOC
+ select VIDEO_V4L2_SUBDEV_API
+ help
+ Say Y here if your platform supports Intel Atom SoC
+diff --git a/drivers/staging/media/atomisp/pci/atomisp_fops.c b/drivers/staging/media/atomisp/pci/atomisp_fops.c
+index acea7492847d8..9b9d50d7166a0 100644
+--- a/drivers/staging/media/atomisp/pci/atomisp_fops.c
++++ b/drivers/staging/media/atomisp/pci/atomisp_fops.c
+@@ -821,13 +821,13 @@ init_subdev:
+ goto done;
+
+ atomisp_subdev_init_struct(asd);
++ /* Ensure that a mode is set */
++ v4l2_ctrl_s_ctrl(asd->run_mode, pipe->default_run_mode);
+
+ done:
+ pipe->users++;
+ mutex_unlock(&isp->mutex);
+
+- /* Ensure that a mode is set */
+- v4l2_ctrl_s_ctrl(asd->run_mode, pipe->default_run_mode);
+
+ return 0;
+
+diff --git a/drivers/thermal/hisi_thermal.c b/drivers/thermal/hisi_thermal.c
+index d6974db7aaf76..15af90f5c7d91 100644
+--- a/drivers/thermal/hisi_thermal.c
++++ b/drivers/thermal/hisi_thermal.c
+@@ -427,10 +427,6 @@ static int hi3660_thermal_probe(struct hisi_thermal_data *data)
+ data->sensor[0].irq_name = "tsensor_a73";
+ data->sensor[0].data = data;
+
+- data->sensor[1].id = HI3660_LITTLE_SENSOR;
+- data->sensor[1].irq_name = "tsensor_a53";
+- data->sensor[1].data = data;
+-
+ return 0;
+ }
+
+diff --git a/drivers/thermal/imx_sc_thermal.c b/drivers/thermal/imx_sc_thermal.c
+index 4df925e3a80bd..dfadb03580ae1 100644
+--- a/drivers/thermal/imx_sc_thermal.c
++++ b/drivers/thermal/imx_sc_thermal.c
+@@ -88,7 +88,7 @@ static int imx_sc_thermal_probe(struct platform_device *pdev)
+ if (!resource_id)
+ return -EINVAL;
+
+- for (i = 0; resource_id[i] > 0; i++) {
++ for (i = 0; resource_id[i] >= 0; i++) {
+
+ sensor = devm_kzalloc(&pdev->dev, sizeof(*sensor), GFP_KERNEL);
+ if (!sensor)
+@@ -127,7 +127,7 @@ static int imx_sc_thermal_probe(struct platform_device *pdev)
+ return 0;
+ }
+
+-static int imx_sc_sensors[] = { IMX_SC_R_SYSTEM, IMX_SC_R_PMIC_0, -1 };
++static const int imx_sc_sensors[] = { IMX_SC_R_SYSTEM, IMX_SC_R_PMIC_0, -1 };
+
+ static const struct of_device_id imx_sc_thermal_table[] = {
+ { .compatible = "fsl,imx-sc-thermal", .data = imx_sc_sensors },
+diff --git a/drivers/thermal/intel/intel_pch_thermal.c b/drivers/thermal/intel/intel_pch_thermal.c
+index dabf11a687a15..9e27f430e0345 100644
+--- a/drivers/thermal/intel/intel_pch_thermal.c
++++ b/drivers/thermal/intel/intel_pch_thermal.c
+@@ -29,6 +29,7 @@
+ #define PCH_THERMAL_DID_CNL_LP 0x02F9 /* CNL-LP PCH */
+ #define PCH_THERMAL_DID_CML_H 0X06F9 /* CML-H PCH */
+ #define PCH_THERMAL_DID_LWB 0xA1B1 /* Lewisburg PCH */
++#define PCH_THERMAL_DID_WBG 0x8D24 /* Wellsburg PCH */
+
+ /* Wildcat Point-LP PCH Thermal registers */
+ #define WPT_TEMP 0x0000 /* Temperature */
+@@ -350,6 +351,7 @@ enum board_ids {
+ board_cnl,
+ board_cml,
+ board_lwb,
++ board_wbg,
+ };
+
+ static const struct board_info {
+@@ -380,6 +382,10 @@ static const struct board_info {
+ .name = "pch_lewisburg",
+ .ops = &pch_dev_ops_wpt,
+ },
++ [board_wbg] = {
++ .name = "pch_wellsburg",
++ .ops = &pch_dev_ops_wpt,
++ },
+ };
+
+ static int intel_pch_thermal_probe(struct pci_dev *pdev,
+@@ -495,6 +501,8 @@ static const struct pci_device_id intel_pch_thermal_id[] = {
+ .driver_data = board_cml, },
+ { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCH_THERMAL_DID_LWB),
+ .driver_data = board_lwb, },
++ { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCH_THERMAL_DID_WBG),
++ .driver_data = board_wbg, },
+ { 0, },
+ };
+ MODULE_DEVICE_TABLE(pci, intel_pch_thermal_id);
+diff --git a/drivers/thermal/intel/intel_powerclamp.c b/drivers/thermal/intel/intel_powerclamp.c
+index b80e25ec12615..2f4cbfdf26a00 100644
+--- a/drivers/thermal/intel/intel_powerclamp.c
++++ b/drivers/thermal/intel/intel_powerclamp.c
+@@ -57,6 +57,7 @@
+
+ static unsigned int target_mwait;
+ static struct dentry *debug_dir;
++static bool poll_pkg_cstate_enable;
+
+ /* user selected target */
+ static unsigned int set_target_ratio;
+@@ -261,6 +262,9 @@ static unsigned int get_compensation(int ratio)
+ {
+ unsigned int comp = 0;
+
++ if (!poll_pkg_cstate_enable)
++ return 0;
++
+ /* we only use compensation if all adjacent ones are good */
+ if (ratio == 1 &&
+ cal_data[ratio].confidence >= CONFIDENCE_OK &&
+@@ -519,7 +523,8 @@ static int start_power_clamp(void)
+ control_cpu = cpumask_first(cpu_online_mask);
+
+ clamping = true;
+- schedule_delayed_work(&poll_pkg_cstate_work, 0);
++ if (poll_pkg_cstate_enable)
++ schedule_delayed_work(&poll_pkg_cstate_work, 0);
+
+ /* start one kthread worker per online cpu */
+ for_each_online_cpu(cpu) {
+@@ -585,11 +590,15 @@ static int powerclamp_get_max_state(struct thermal_cooling_device *cdev,
+ static int powerclamp_get_cur_state(struct thermal_cooling_device *cdev,
+ unsigned long *state)
+ {
+- if (true == clamping)
+- *state = pkg_cstate_ratio_cur;
+- else
++ if (clamping) {
++ if (poll_pkg_cstate_enable)
++ *state = pkg_cstate_ratio_cur;
++ else
++ *state = set_target_ratio;
++ } else {
+ /* to save power, do not poll idle ratio while not clamping */
+ *state = -1; /* indicates invalid state */
++ }
+
+ return 0;
+ }
+@@ -712,6 +721,9 @@ static int __init powerclamp_init(void)
+ goto exit_unregister;
+ }
+
++ if (topology_max_packages() == 1 && topology_max_die_per_package() == 1)
++ poll_pkg_cstate_enable = true;
++
+ cooling_dev = thermal_cooling_device_register("intel_powerclamp", NULL,
+ &powerclamp_cooling_ops);
+ if (IS_ERR(cooling_dev)) {
+diff --git a/drivers/thermal/intel/intel_soc_dts_iosf.c b/drivers/thermal/intel/intel_soc_dts_iosf.c
+index 342b0bb5a56d9..8651ff1abe754 100644
+--- a/drivers/thermal/intel/intel_soc_dts_iosf.c
++++ b/drivers/thermal/intel/intel_soc_dts_iosf.c
+@@ -405,7 +405,7 @@ struct intel_soc_dts_sensors *intel_soc_dts_iosf_init(
+ {
+ struct intel_soc_dts_sensors *sensors;
+ bool notification;
+- u32 tj_max;
++ int tj_max;
+ int ret;
+ int i;
+
+diff --git a/drivers/thermal/qcom/tsens-v0_1.c b/drivers/thermal/qcom/tsens-v0_1.c
+index 04d012e4f7288..3158f13c54305 100644
+--- a/drivers/thermal/qcom/tsens-v0_1.c
++++ b/drivers/thermal/qcom/tsens-v0_1.c
+@@ -285,7 +285,7 @@ static int calibrate_8939(struct tsens_priv *priv)
+ u32 p1[10], p2[10];
+ int mode = 0;
+ u32 *qfprom_cdata;
+- u32 cdata[6];
++ u32 cdata[4];
+
+ qfprom_cdata = (u32 *)qfprom_read(priv->dev, "calib");
+ if (IS_ERR(qfprom_cdata))
+@@ -296,8 +296,6 @@ static int calibrate_8939(struct tsens_priv *priv)
+ cdata[1] = qfprom_cdata[13];
+ cdata[2] = qfprom_cdata[0];
+ cdata[3] = qfprom_cdata[1];
+- cdata[4] = qfprom_cdata[22];
+- cdata[5] = qfprom_cdata[21];
+
+ mode = (cdata[0] & MSM8939_CAL_SEL_MASK) >> MSM8939_CAL_SEL_SHIFT;
+ dev_dbg(priv->dev, "calibration mode is %d\n", mode);
+@@ -314,8 +312,6 @@ static int calibrate_8939(struct tsens_priv *priv)
+ p2[6] = (cdata[2] & MSM8939_S6_P2_MASK) >> MSM8939_S6_P2_SHIFT;
+ p2[7] = (cdata[3] & MSM8939_S7_P2_MASK) >> MSM8939_S7_P2_SHIFT;
+ p2[8] = (cdata[3] & MSM8939_S8_P2_MASK) >> MSM8939_S8_P2_SHIFT;
+- p2[9] = (cdata[4] & MSM8939_S9_P2_MASK_0_4) >> MSM8939_S9_P2_SHIFT_0_4;
+- p2[9] |= ((cdata[5] & MSM8939_S9_P2_MASK_5) >> MSM8939_S9_P2_SHIFT_5) << 5;
+ for (i = 0; i < priv->num_sensors; i++)
+ p2[i] = (base1 + p2[i]) << 2;
+ fallthrough;
+@@ -331,7 +327,6 @@ static int calibrate_8939(struct tsens_priv *priv)
+ p1[6] = (cdata[2] & MSM8939_S6_P1_MASK) >> MSM8939_S6_P1_SHIFT;
+ p1[7] = (cdata[3] & MSM8939_S7_P1_MASK) >> MSM8939_S7_P1_SHIFT;
+ p1[8] = (cdata[3] & MSM8939_S8_P1_MASK) >> MSM8939_S8_P1_SHIFT;
+- p1[9] = (cdata[4] & MSM8939_S9_P1_MASK) >> MSM8939_S9_P1_SHIFT;
+ for (i = 0; i < priv->num_sensors; i++)
+ p1[i] = ((base0) + p1[i]) << 2;
+ break;
+@@ -534,6 +529,21 @@ static int calibrate_9607(struct tsens_priv *priv)
+ return 0;
+ }
+
++static int __init init_8939(struct tsens_priv *priv) {
++ priv->sensor[0].slope = 2911;
++ priv->sensor[1].slope = 2789;
++ priv->sensor[2].slope = 2906;
++ priv->sensor[3].slope = 2763;
++ priv->sensor[4].slope = 2922;
++ priv->sensor[5].slope = 2867;
++ priv->sensor[6].slope = 2833;
++ priv->sensor[7].slope = 2838;
++ priv->sensor[8].slope = 2840;
++ /* priv->sensor[9].slope = 2852; */
++
++ return init_common(priv);
++}
++
+ /* v0.1: 8916, 8939, 8974, 9607 */
+
+ static struct tsens_features tsens_v0_1_feat = {
+@@ -599,15 +609,15 @@ struct tsens_plat_data data_8916 = {
+ };
+
+ static const struct tsens_ops ops_8939 = {
+- .init = init_common,
++ .init = init_8939,
+ .calibrate = calibrate_8939,
+ .get_temp = get_temp_common,
+ };
+
+ struct tsens_plat_data data_8939 = {
+- .num_sensors = 10,
++ .num_sensors = 9,
+ .ops = &ops_8939,
+- .hw_ids = (unsigned int []){ 0, 1, 2, 3, 5, 6, 7, 8, 9, 10 },
++ .hw_ids = (unsigned int []){ 0, 1, 2, 3, 5, 6, 7, 8, 9, /* 10 */ },
+
+ .feat = &tsens_v0_1_feat,
+ .fields = tsens_v0_1_regfields,
+diff --git a/drivers/thermal/qcom/tsens-v1.c b/drivers/thermal/qcom/tsens-v1.c
+index 1d7f8a80bd13a..9c443a2fb32ca 100644
+--- a/drivers/thermal/qcom/tsens-v1.c
++++ b/drivers/thermal/qcom/tsens-v1.c
+@@ -78,11 +78,6 @@
+
+ #define MSM8976_CAL_SEL_MASK 0x3
+
+-#define MSM8976_CAL_DEGC_PT1 30
+-#define MSM8976_CAL_DEGC_PT2 120
+-#define MSM8976_SLOPE_FACTOR 1000
+-#define MSM8976_SLOPE_DEFAULT 3200
+-
+ /* eeprom layout data for qcs404/405 (v1) */
+ #define BASE0_MASK 0x000007f8
+ #define BASE1_MASK 0x0007f800
+@@ -142,30 +137,6 @@
+ #define CAL_SEL_MASK 7
+ #define CAL_SEL_SHIFT 0
+
+-static void compute_intercept_slope_8976(struct tsens_priv *priv,
+- u32 *p1, u32 *p2, u32 mode)
+-{
+- int i;
+-
+- priv->sensor[0].slope = 3313;
+- priv->sensor[1].slope = 3275;
+- priv->sensor[2].slope = 3320;
+- priv->sensor[3].slope = 3246;
+- priv->sensor[4].slope = 3279;
+- priv->sensor[5].slope = 3257;
+- priv->sensor[6].slope = 3234;
+- priv->sensor[7].slope = 3269;
+- priv->sensor[8].slope = 3255;
+- priv->sensor[9].slope = 3239;
+- priv->sensor[10].slope = 3286;
+-
+- for (i = 0; i < priv->num_sensors; i++) {
+- priv->sensor[i].offset = (p1[i] * MSM8976_SLOPE_FACTOR) -
+- (MSM8976_CAL_DEGC_PT1 *
+- priv->sensor[i].slope);
+- }
+-}
+-
+ static int calibrate_v1(struct tsens_priv *priv)
+ {
+ u32 base0 = 0, base1 = 0;
+@@ -291,7 +262,7 @@ static int calibrate_8976(struct tsens_priv *priv)
+ break;
+ }
+
+- compute_intercept_slope_8976(priv, p1, p2, mode);
++ compute_intercept_slope(priv, p1, p2, mode);
+ kfree(qfprom_cdata);
+
+ return 0;
+@@ -365,6 +336,22 @@ static const struct reg_field tsens_v1_regfields[MAX_REGFIELDS] = {
+ [TRDY] = REG_FIELD(TM_TRDY_OFF, 0, 0),
+ };
+
++static int __init init_8956(struct tsens_priv *priv) {
++ priv->sensor[0].slope = 3313;
++ priv->sensor[1].slope = 3275;
++ priv->sensor[2].slope = 3320;
++ priv->sensor[3].slope = 3246;
++ priv->sensor[4].slope = 3279;
++ priv->sensor[5].slope = 3257;
++ priv->sensor[6].slope = 3234;
++ priv->sensor[7].slope = 3269;
++ priv->sensor[8].slope = 3255;
++ priv->sensor[9].slope = 3239;
++ priv->sensor[10].slope = 3286;
++
++ return init_common(priv);
++}
++
+ static const struct tsens_ops ops_generic_v1 = {
+ .init = init_common,
+ .calibrate = calibrate_v1,
+@@ -377,13 +364,25 @@ struct tsens_plat_data data_tsens_v1 = {
+ .fields = tsens_v1_regfields,
+ };
+
++static const struct tsens_ops ops_8956 = {
++ .init = init_8956,
++ .calibrate = calibrate_8976,
++ .get_temp = get_temp_tsens_valid,
++};
++
++struct tsens_plat_data data_8956 = {
++ .num_sensors = 11,
++ .ops = &ops_8956,
++ .feat = &tsens_v1_feat,
++ .fields = tsens_v1_regfields,
++};
++
+ static const struct tsens_ops ops_8976 = {
+ .init = init_common,
+ .calibrate = calibrate_8976,
+ .get_temp = get_temp_tsens_valid,
+ };
+
+-/* Valid for both MSM8956 and MSM8976. */
+ struct tsens_plat_data data_8976 = {
+ .num_sensors = 11,
+ .ops = &ops_8976,
+diff --git a/drivers/thermal/qcom/tsens.c b/drivers/thermal/qcom/tsens.c
+index b5b136ff323f9..b191e19df93dc 100644
+--- a/drivers/thermal/qcom/tsens.c
++++ b/drivers/thermal/qcom/tsens.c
+@@ -983,6 +983,9 @@ static const struct of_device_id tsens_table[] = {
+ }, {
+ .compatible = "qcom,msm8939-tsens",
+ .data = &data_8939,
++ }, {
++ .compatible = "qcom,msm8956-tsens",
++ .data = &data_8956,
+ }, {
+ .compatible = "qcom,msm8960-tsens",
+ .data = &data_8960,
+diff --git a/drivers/thermal/qcom/tsens.h b/drivers/thermal/qcom/tsens.h
+index 899af128855f7..7dd5fc2468945 100644
+--- a/drivers/thermal/qcom/tsens.h
++++ b/drivers/thermal/qcom/tsens.h
+@@ -594,7 +594,7 @@ extern struct tsens_plat_data data_8960;
+ extern struct tsens_plat_data data_8916, data_8939, data_8974, data_9607;
+
+ /* TSENS v1 targets */
+-extern struct tsens_plat_data data_tsens_v1, data_8976;
++extern struct tsens_plat_data data_tsens_v1, data_8976, data_8956;
+
+ /* TSENS v2 targets */
+ extern struct tsens_plat_data data_8996, data_ipq8074, data_tsens_v2;
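
The three tsens diffs above give MSM8956 its own platform data: the new
init_8956() hook programs the per-sensor slopes once at probe time and
then falls through to the common init, which lets calibrate_8976() reuse
the generic compute_intercept_slope() instead of a bespoke helper. A
minimal standalone sketch of that ops-dispatch pattern follows; the
names are hypothetical stand-ins, not the kernel's real structures.

  #include <stdio.h>

  struct tsens_priv;

  struct tsens_ops {
      int (*init)(struct tsens_priv *priv);
  };

  struct tsens_priv {
      int num_sensors;
      int slope[11];
      const struct tsens_ops *ops;
  };

  static int init_common(struct tsens_priv *priv)
  {
      printf("common init, %d sensors\n", priv->num_sensors);
      return 0;
  }

  /* SoC hook: fix up per-sensor slopes, then reuse the common path. */
  static int init_soc(struct tsens_priv *priv)
  {
      static const int slopes[11] = { 3313, 3275, 3320, 3246, 3279,
                                      3257, 3234, 3269, 3255, 3239, 3286 };

      for (int i = 0; i < priv->num_sensors; i++)
          priv->slope[i] = slopes[i];
      return init_common(priv);
  }

  static const struct tsens_ops ops_soc = { .init = init_soc };

  int main(void)
  {
      struct tsens_priv priv = { .num_sensors = 11, .ops = &ops_soc };

      return priv.ops->init(&priv);
  }

In the real driver, the "qcom,msm8956-tsens" compatible routes to
data_8956 through the of_device_id table in the tsens.c hunk above.
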
+diff --git a/drivers/tty/serial/fsl_lpuart.c b/drivers/tty/serial/fsl_lpuart.c
+index 5e69fb73f570f..23910ac724b11 100644
+--- a/drivers/tty/serial/fsl_lpuart.c
++++ b/drivers/tty/serial/fsl_lpuart.c
+@@ -1387,9 +1387,9 @@ static int lpuart32_config_rs485(struct uart_port *port, struct ktermios *termio
+ * Note: UART is assumed to be active high.
+ */
+ if (rs485->flags & SER_RS485_RTS_ON_SEND)
+- modem &= ~UARTMODEM_TXRTSPOL;
+- else if (rs485->flags & SER_RS485_RTS_AFTER_SEND)
+ modem |= UARTMODEM_TXRTSPOL;
++ else if (rs485->flags & SER_RS485_RTS_AFTER_SEND)
++ modem &= ~UARTMODEM_TXRTSPOL;
+ }
+
+ lpuart32_write(&sport->port, modem, UARTMODIR);
+@@ -1683,12 +1683,6 @@ static void lpuart32_configure(struct lpuart_port *sport)
+ {
+ unsigned long temp;
+
+- if (sport->lpuart_dma_rx_use) {
+- /* RXWATER must be 0 */
+- temp = lpuart32_read(&sport->port, UARTWATER);
+- temp &= ~(UARTWATER_WATER_MASK << UARTWATER_RXWATER_OFF);
+- lpuart32_write(&sport->port, temp, UARTWATER);
+- }
+ temp = lpuart32_read(&sport->port, UARTCTRL);
+ if (!sport->lpuart_dma_rx_use)
+ temp |= UARTCTRL_RIE;
+@@ -1796,6 +1790,15 @@ static void lpuart32_shutdown(struct uart_port *port)
+
+ spin_lock_irqsave(&port->lock, flags);
+
++ /* clear status */
++ temp = lpuart32_read(&sport->port, UARTSTAT);
++ lpuart32_write(&sport->port, temp, UARTSTAT);
++
++ /* disable Rx/Tx DMA */
++ temp = lpuart32_read(port, UARTBAUD);
++ temp &= ~(UARTBAUD_TDMAE | UARTBAUD_RDMAE);
++ lpuart32_write(port, temp, UARTBAUD);
++
+ /* disable Rx/Tx and interrupts */
+ temp = lpuart32_read(port, UARTCTRL);
+ temp &= ~(UARTCTRL_TE | UARTCTRL_RE |
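
The lpuart hunk above inverts the RS-485 RTS polarity mapping: with an
active-high transceiver, SER_RS485_RTS_ON_SEND must set
UARTMODEM_TXRTSPOL rather than clear it. A standalone sketch of the
corrected mapping, using a placeholder bit value:

  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  #define TXRTSPOL (1u << 1)   /* placeholder for UARTMODEM_TXRTSPOL */

  /* Active-high transceiver: asserting RTS while sending means the
   * transmit-RTS polarity bit must be set, not cleared. */
  static uint32_t rs485_rts_bits(uint32_t modem, bool rts_on_send)
  {
      if (rts_on_send)
          modem |= TXRTSPOL;
      else
          modem &= ~TXRTSPOL;
      return modem;
  }

  int main(void)
  {
      printf("%#x\n", (unsigned)rs485_rts_bits(0, true));    /* 0x2 */
      printf("%#x\n", (unsigned)rs485_rts_bits(0x2, false)); /* 0   */
      return 0;
  }
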
+diff --git a/drivers/tty/serial/imx.c b/drivers/tty/serial/imx.c
+index 757825edb0cd9..5f35343f81309 100644
+--- a/drivers/tty/serial/imx.c
++++ b/drivers/tty/serial/imx.c
+@@ -2374,6 +2374,11 @@ static int imx_uart_probe(struct platform_device *pdev)
+ ucr1 &= ~(UCR1_ADEN | UCR1_TRDYEN | UCR1_IDEN | UCR1_RRDYEN | UCR1_RTSDEN);
+ imx_uart_writel(sport, ucr1, UCR1);
+
++ /* Disable Ageing Timer interrupt */
++ ucr2 = imx_uart_readl(sport, UCR2);
++ ucr2 &= ~UCR2_ATEN;
++ imx_uart_writel(sport, ucr2, UCR2);
++
+ /*
+ * In case RS485 is enabled without GPIO RTS control, the UART IP
+ * is used to control CTS signal. Keep both the UART and Receiver
+diff --git a/drivers/tty/serial/serial-tegra.c b/drivers/tty/serial/serial-tegra.c
+index e5b9773db5e36..1cf08b33456c9 100644
+--- a/drivers/tty/serial/serial-tegra.c
++++ b/drivers/tty/serial/serial-tegra.c
+@@ -1046,6 +1046,7 @@ static int tegra_uart_hw_init(struct tegra_uart_port *tup)
+ if (tup->cdata->fifo_mode_enable_status) {
+ ret = tegra_uart_wait_fifo_mode_enabled(tup);
+ if (ret < 0) {
++ clk_disable_unprepare(tup->uart_clk);
+ dev_err(tup->uport.dev,
+ "Failed to enable FIFO mode: %d\n", ret);
+ return ret;
+@@ -1067,6 +1068,7 @@ static int tegra_uart_hw_init(struct tegra_uart_port *tup)
+ */
+ ret = tegra_set_baudrate(tup, TEGRA_UART_DEFAULT_BAUD);
+ if (ret < 0) {
++ clk_disable_unprepare(tup->uart_clk);
+ dev_err(tup->uport.dev, "Failed to set baud rate\n");
+ return ret;
+ }
+@@ -1226,10 +1228,13 @@ static int tegra_uart_startup(struct uart_port *u)
+ dev_name(u->dev), tup);
+ if (ret < 0) {
+ dev_err(u->dev, "Failed to register ISR for IRQ %d\n", u->irq);
+- goto fail_hw_init;
++ goto fail_request_irq;
+ }
+ return 0;
+
++fail_request_irq:
++ /* tup->uart_clk is already enabled in tegra_uart_hw_init */
++ clk_disable_unprepare(tup->uart_clk);
+ fail_hw_init:
+ if (!tup->use_rx_pio)
+ tegra_uart_dma_channel_free(tup, true);
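
The serial-tegra fix plugs clock leaks: tegra_uart_hw_init() enables
tup->uart_clk early, so every later failure (FIFO mode, baud rate, IRQ
request) must disable it again. A minimal sketch of that
acquire-then-unwind shape, with illustrative stand-in functions:

  #include <stdio.h>

  static int enable_clk(void)   { puts("clk on");  return 0; }
  static void disable_clk(void) { puts("clk off"); }
  static int set_baud(void)     { return -1; }  /* simulate a failure */

  static int hw_init(void)
  {
      int ret = enable_clk();

      if (ret)
          return ret;

      ret = set_baud();
      if (ret) {
          disable_clk();  /* the fix: undo the clock on failure */
          return ret;
      }
      return 0;
  }

  int main(void)
  {
      return hw_init() ? 0 : 1;
  }
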
+diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
+index 3a1c4d31e010d..2ddc1aba0ad75 100644
+--- a/drivers/ufs/core/ufshcd.c
++++ b/drivers/ufs/core/ufshcd.c
+@@ -3008,6 +3008,22 @@ retry:
+ } else {
+ dev_err(hba->dev, "%s: failed to clear tag %d\n",
+ __func__, lrbp->task_tag);
++
++ spin_lock_irqsave(&hba->outstanding_lock, flags);
++ pending = test_bit(lrbp->task_tag,
++ &hba->outstanding_reqs);
++ if (pending)
++ hba->dev_cmd.complete = NULL;
++ spin_unlock_irqrestore(&hba->outstanding_lock, flags);
++
++ if (!pending) {
++ /*
++ * The completion handler ran while we tried to
++ * clear the command.
++ */
++ time_left = 1;
++ goto retry;
++ }
+ }
+ }
+
+@@ -5030,8 +5046,8 @@ static int ufshcd_slave_configure(struct scsi_device *sdev)
+ ufshcd_hpb_configure(hba, sdev);
+
+ blk_queue_update_dma_pad(q, PRDT_DATA_BYTE_COUNT_PAD - 1);
+- if (hba->quirks & UFSHCD_QUIRK_ALIGN_SG_WITH_PAGE_SIZE)
+- blk_queue_update_dma_alignment(q, PAGE_SIZE - 1);
++ if (hba->quirks & UFSHCD_QUIRK_4KB_DMA_ALIGNMENT)
++ blk_queue_update_dma_alignment(q, 4096 - 1);
+ /*
+ * Block runtime-pm until all consumers are added.
+ * Refer ufshcd_setup_links().
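
The ufshcd change handles a completion that races the timeout path: if
clearing the tag fails, the code re-checks outstanding_reqs under
outstanding_lock and, when the bit is already clear, loops back to wait
instead of reporting an error. A single-threaded sketch of that
re-check (the real driver does it under a spinlock):

  #include <stdbool.h>
  #include <stdio.h>

  static bool pending = true;  /* conceptually, a bit in outstanding_reqs */

  static void completion_handler(void) { pending = false; }

  /* Timeout path: re-inspect the pending bit before declaring the
   * command lost. */
  static int on_timeout(void)
  {
      completion_handler();  /* the race: the handler fires first */

      if (!pending)
          return 1;          /* completion won: go back and wait again */
      pending = false;       /* disarm the handler, fail the command */
      return -1;
  }

  int main(void)
  {
      printf("%s\n", on_timeout() == 1 ? "retry wait" : "timed out");
      return 0;
  }

The second hunk, together with the ufs-exynos diff below, renames the
quirk so the DMA alignment stays at 4 KB even on kernels where
PAGE_SIZE is larger than 4 KB.
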
+diff --git a/drivers/ufs/host/ufs-exynos.c b/drivers/ufs/host/ufs-exynos.c
+index c3628a8645a56..3cdac89a28b81 100644
+--- a/drivers/ufs/host/ufs-exynos.c
++++ b/drivers/ufs/host/ufs-exynos.c
+@@ -1673,7 +1673,7 @@ static const struct exynos_ufs_drv_data exynos_ufs_drvs = {
+ UFSHCD_QUIRK_BROKEN_OCS_FATAL_ERROR |
+ UFSHCI_QUIRK_SKIP_MANUAL_WB_FLUSH_CTRL |
+ UFSHCD_QUIRK_SKIP_DEF_UNIPRO_TIMEOUT_SETTING |
+- UFSHCD_QUIRK_ALIGN_SG_WITH_PAGE_SIZE,
++ UFSHCD_QUIRK_4KB_DMA_ALIGNMENT,
+ .opts = EXYNOS_UFS_OPT_HAS_APB_CLK_CTRL |
+ EXYNOS_UFS_OPT_BROKEN_AUTO_CLK_CTRL |
+ EXYNOS_UFS_OPT_BROKEN_RX_SEL_IDX |
+diff --git a/drivers/usb/early/xhci-dbc.c b/drivers/usb/early/xhci-dbc.c
+index 7970471548202..f3e23be227d41 100644
+--- a/drivers/usb/early/xhci-dbc.c
++++ b/drivers/usb/early/xhci-dbc.c
+@@ -874,7 +874,8 @@ retry:
+
+ static void early_xdbc_write(struct console *con, const char *str, u32 n)
+ {
+- static char buf[XDBC_MAX_PACKET];
++ /* static variables are zeroed, so buf is always NULL terminated */
++ static char buf[XDBC_MAX_PACKET + 1];
+ int chunk, ret;
+ int use_cr = 0;
+
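
The xdbc fix reserves one byte beyond the largest packet so the static
buffer always ends in a NUL, even when a chunk fills all
XDBC_MAX_PACKET bytes. A tiny sketch with a stand-in size:

  #include <stdio.h>
  #include <string.h>

  #define MAX_PACKET 8   /* stand-in for XDBC_MAX_PACKET */

  static char buf[MAX_PACKET + 1];  /* static storage is zero-filled, so
                                       the extra byte is a permanent NUL */

  int main(void)
  {
      memcpy(buf, "ABCDEFGH", MAX_PACKET);  /* fill every payload byte */
      printf("len=%zu\n", strlen(buf));     /* 8, no overrun */
      return 0;
  }
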
+diff --git a/drivers/usb/fotg210/fotg210-udc.c b/drivers/usb/fotg210/fotg210-udc.c
+index eb076746f0320..7ba7fb52ddaac 100644
+--- a/drivers/usb/fotg210/fotg210-udc.c
++++ b/drivers/usb/fotg210/fotg210-udc.c
+@@ -710,6 +710,20 @@ static int fotg210_is_epnstall(struct fotg210_ep *ep)
+ return value & INOUTEPMPSR_STL_EP ? 1 : 0;
+ }
+
++/* For EP0 requests triggered by this driver (currently GET_STATUS response) */
++static void fotg210_ep0_complete(struct usb_ep *_ep, struct usb_request *req)
++{
++ struct fotg210_ep *ep;
++ struct fotg210_udc *fotg210;
++
++ ep = container_of(_ep, struct fotg210_ep, ep);
++ fotg210 = ep->fotg210;
++
++ if (req->status || req->actual != req->length) {
++ dev_warn(&fotg210->gadget.dev, "EP0 request failed: %d\n", req->status);
++ }
++}
++
+ static void fotg210_get_status(struct fotg210_udc *fotg210,
+ struct usb_ctrlrequest *ctrl)
+ {
+@@ -1261,6 +1275,8 @@ int fotg210_udc_probe(struct platform_device *pdev)
+ if (fotg210->ep0_req == NULL)
+ goto err_map;
+
++ fotg210->ep0_req->complete = fotg210_ep0_complete;
++
+ fotg210_init(fotg210);
+
+ fotg210_disable_unplug(fotg210);
+diff --git a/drivers/usb/gadget/configfs.c b/drivers/usb/gadget/configfs.c
+index 0853536cbf2e6..2ff34dc129c40 100644
+--- a/drivers/usb/gadget/configfs.c
++++ b/drivers/usb/gadget/configfs.c
+@@ -430,6 +430,12 @@ static int config_usb_cfg_link(
+ * from another gadget or a random directory.
+ * Also a function instance can only be linked once.
+ */
++
++ if (gi->composite.gadget_driver.udc_name) {
++ ret = -EINVAL;
++ goto out;
++ }
++
+ list_for_each_entry(iter, &gi->available_func, cfs_list) {
+ if (iter != fi)
+ continue;
+diff --git a/drivers/usb/gadget/udc/fusb300_udc.c b/drivers/usb/gadget/udc/fusb300_udc.c
+index 5954800d652ca..08ba9c8c1e677 100644
+--- a/drivers/usb/gadget/udc/fusb300_udc.c
++++ b/drivers/usb/gadget/udc/fusb300_udc.c
+@@ -1346,6 +1346,7 @@ static int fusb300_remove(struct platform_device *pdev)
+ usb_del_gadget_udc(&fusb300->gadget);
+ iounmap(fusb300->reg);
+ free_irq(platform_get_irq(pdev, 0), fusb300);
++ free_irq(platform_get_irq(pdev, 1), fusb300);
+
+ fusb300_free_request(&fusb300->ep[0]->ep, fusb300->ep0_req);
+ for (i = 0; i < FUSB300_MAX_NUM_EP; i++)
+@@ -1431,7 +1432,7 @@ static int fusb300_probe(struct platform_device *pdev)
+ IRQF_SHARED, udc_name, fusb300);
+ if (ret < 0) {
+ pr_err("request_irq1 error (%d)\n", ret);
+- goto clean_up;
++ goto err_request_irq1;
+ }
+
+ INIT_LIST_HEAD(&fusb300->gadget.ep_list);
+@@ -1470,7 +1471,7 @@ static int fusb300_probe(struct platform_device *pdev)
+ GFP_KERNEL);
+ if (fusb300->ep0_req == NULL) {
+ ret = -ENOMEM;
+- goto clean_up3;
++ goto err_alloc_request;
+ }
+
+ init_controller(fusb300);
+@@ -1485,7 +1486,10 @@ static int fusb300_probe(struct platform_device *pdev)
+ err_add_udc:
+ fusb300_free_request(&fusb300->ep[0]->ep, fusb300->ep0_req);
+
+-clean_up3:
++err_alloc_request:
++ free_irq(ires1->start, fusb300);
++
++err_request_irq1:
+ free_irq(ires->start, fusb300);
+
+ clean_up:
+diff --git a/drivers/usb/host/fsl-mph-dr-of.c b/drivers/usb/host/fsl-mph-dr-of.c
+index e5df175228928..46c6a152b8655 100644
+--- a/drivers/usb/host/fsl-mph-dr-of.c
++++ b/drivers/usb/host/fsl-mph-dr-of.c
+@@ -112,8 +112,7 @@ static struct platform_device *fsl_usb2_device_register(
+ goto error;
+ }
+
+- pdev->dev.of_node = ofdev->dev.of_node;
+- pdev->dev.of_node_reused = true;
++ device_set_of_node_from_dev(&pdev->dev, &ofdev->dev);
+
+ retval = platform_device_add(pdev);
+ if (retval)
+diff --git a/drivers/usb/host/max3421-hcd.c b/drivers/usb/host/max3421-hcd.c
+index 352e3ac2b377b..19111e83ac131 100644
+--- a/drivers/usb/host/max3421-hcd.c
++++ b/drivers/usb/host/max3421-hcd.c
+@@ -1436,7 +1436,7 @@ max3421_spi_thread(void *dev_id)
+ * use spi_wr_buf().
+ */
+ for (i = 0; i < ARRAY_SIZE(max3421_hcd->iopins); ++i) {
+- u8 val = spi_rd8(hcd, MAX3421_REG_IOPINS1);
++ u8 val = spi_rd8(hcd, MAX3421_REG_IOPINS1 + i);
+
+ val = ((val & 0xf0) |
+ (max3421_hcd->iopins[i] & 0x0f));
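
The max3421 fix is a classic copy-paste loop bug: each iteration
re-read IOPINS1 instead of advancing to IOPINS1 + i. A sketch of the
corrected indexing over a mirrored register bank; the register numbers
are placeholders:

  #include <stdio.h>

  #define REG_IOPINS1 20   /* placeholder register number */

  static unsigned char regs[32] = { [20] = 0xa0, [21] = 0xb0 };

  static unsigned char rd8(int reg) { return regs[reg]; }

  int main(void)
  {
      unsigned char iopins[2] = { 0x01, 0x02 };

      for (int i = 0; i < 2; i++) {
          /* pre-fix code read REG_IOPINS1 on every iteration */
          unsigned char val = rd8(REG_IOPINS1 + i);

          val = (val & 0xf0) | (iopins[i] & 0x0f);
          printf("reg %d <- %#x\n", REG_IOPINS1 + i, val);
      }
      return 0;
  }
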
+diff --git a/drivers/usb/musb/mediatek.c b/drivers/usb/musb/mediatek.c
+index cad991380b0cf..27b9bd2583400 100644
+--- a/drivers/usb/musb/mediatek.c
++++ b/drivers/usb/musb/mediatek.c
+@@ -294,7 +294,8 @@ static int mtk_musb_init(struct musb *musb)
+ err_phy_power_on:
+ phy_exit(glue->phy);
+ err_phy_init:
+- mtk_otg_switch_exit(glue);
++ if (musb->port_mode == MUSB_OTG)
++ mtk_otg_switch_exit(glue);
+ return ret;
+ }
+
+diff --git a/drivers/usb/typec/mux/intel_pmc_mux.c b/drivers/usb/typec/mux/intel_pmc_mux.c
+index fdbf3694e21f4..87e2c91306070 100644
+--- a/drivers/usb/typec/mux/intel_pmc_mux.c
++++ b/drivers/usb/typec/mux/intel_pmc_mux.c
+@@ -614,8 +614,10 @@ static int pmc_usb_probe_iom(struct pmc_usb *pmc)
+
+ INIT_LIST_HEAD(&resource_list);
+ ret = acpi_dev_get_memory_resources(adev, &resource_list);
+- if (ret < 0)
++ if (ret < 0) {
++ acpi_dev_put(adev);
+ return ret;
++ }
+
+ rentry = list_first_entry_or_null(&resource_list, struct resource_entry, node);
+ if (rentry)
+diff --git a/drivers/vfio/group.c b/drivers/vfio/group.c
+index bb24b2f0271e0..855db15477813 100644
+--- a/drivers/vfio/group.c
++++ b/drivers/vfio/group.c
+@@ -137,7 +137,7 @@ static int vfio_group_ioctl_set_container(struct vfio_group *group,
+
+ ret = iommufd_vfio_compat_ioas_id(iommufd, &ioas_id);
+ if (ret) {
+- iommufd_ctx_put(group->iommufd);
++ iommufd_ctx_put(iommufd);
+ goto out_unlock;
+ }
+
+diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
+index 2209372f236db..7fa68dc4e938a 100644
+--- a/drivers/vfio/vfio_iommu_type1.c
++++ b/drivers/vfio/vfio_iommu_type1.c
+@@ -100,6 +100,8 @@ struct vfio_dma {
+ struct task_struct *task;
+ struct rb_root pfn_list; /* Ex-user pinned pfn list */
+ unsigned long *bitmap;
++ struct mm_struct *mm;
++ size_t locked_vm;
+ };
+
+ struct vfio_batch {
+@@ -412,6 +414,19 @@ static int vfio_iova_put_vfio_pfn(struct vfio_dma *dma, struct vfio_pfn *vpfn)
+ return ret;
+ }
+
++static int mm_lock_acct(struct task_struct *task, struct mm_struct *mm,
++ bool lock_cap, long npage)
++{
++ int ret = mmap_write_lock_killable(mm);
++
++ if (ret)
++ return ret;
++
++ ret = __account_locked_vm(mm, abs(npage), npage > 0, task, lock_cap);
++ mmap_write_unlock(mm);
++ return ret;
++}
++
+ static int vfio_lock_acct(struct vfio_dma *dma, long npage, bool async)
+ {
+ struct mm_struct *mm;
+@@ -420,16 +435,13 @@ static int vfio_lock_acct(struct vfio_dma *dma, long npage, bool async)
+ if (!npage)
+ return 0;
+
+- mm = async ? get_task_mm(dma->task) : dma->task->mm;
+- if (!mm)
++ mm = dma->mm;
++ if (async && !mmget_not_zero(mm))
+ return -ESRCH; /* process exited */
+
+- ret = mmap_write_lock_killable(mm);
+- if (!ret) {
+- ret = __account_locked_vm(mm, abs(npage), npage > 0, dma->task,
+- dma->lock_cap);
+- mmap_write_unlock(mm);
+- }
++ ret = mm_lock_acct(dma->task, mm, dma->lock_cap, npage);
++ if (!ret)
++ dma->locked_vm += npage;
+
+ if (async)
+ mmput(mm);
+@@ -794,8 +806,8 @@ static int vfio_pin_page_external(struct vfio_dma *dma, unsigned long vaddr,
+ struct mm_struct *mm;
+ int ret;
+
+- mm = get_task_mm(dma->task);
+- if (!mm)
++ mm = dma->mm;
++ if (!mmget_not_zero(mm))
+ return -ENODEV;
+
+ ret = vaddr_get_pfns(mm, vaddr, 1, dma->prot, pfn_base, pages);
+@@ -805,7 +817,7 @@ static int vfio_pin_page_external(struct vfio_dma *dma, unsigned long vaddr,
+ ret = 0;
+
+ if (do_accounting && !is_invalid_reserved_pfn(*pfn_base)) {
+- ret = vfio_lock_acct(dma, 1, true);
++ ret = vfio_lock_acct(dma, 1, false);
+ if (ret) {
+ put_pfn(*pfn_base, dma->prot);
+ if (ret == -ENOMEM)
+@@ -861,6 +873,12 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
+
+ mutex_lock(&iommu->lock);
+
++ if (WARN_ONCE(iommu->vaddr_invalid_count,
++ "vfio_pin_pages not allowed with VFIO_UPDATE_VADDR\n")) {
++ ret = -EBUSY;
++ goto pin_done;
++ }
++
+ /*
+ * Wait for all necessary vaddr's to be valid so they can be used in
+ * the main loop without dropping the lock, to avoid racing vs unmap.
+@@ -1174,6 +1192,7 @@ static void vfio_remove_dma(struct vfio_iommu *iommu, struct vfio_dma *dma)
+ vfio_unmap_unpin(iommu, dma, true);
+ vfio_unlink_dma(iommu, dma);
+ put_task_struct(dma->task);
++ mmdrop(dma->mm);
+ vfio_dma_bitmap_free(dma);
+ if (dma->vaddr_invalid) {
+ iommu->vaddr_invalid_count--;
+@@ -1343,6 +1362,12 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
+
+ mutex_lock(&iommu->lock);
+
++ /* Cannot update vaddr if mdev is present. */
++ if (invalidate_vaddr && !list_empty(&iommu->emulated_iommu_groups)) {
++ ret = -EBUSY;
++ goto unlock;
++ }
++
+ pgshift = __ffs(iommu->pgsize_bitmap);
+ pgsize = (size_t)1 << pgshift;
+
+@@ -1566,6 +1591,38 @@ static bool vfio_iommu_iova_dma_valid(struct vfio_iommu *iommu,
+ return list_empty(iova);
+ }
+
++static int vfio_change_dma_owner(struct vfio_dma *dma)
++{
++ struct task_struct *task = current->group_leader;
++ struct mm_struct *mm = current->mm;
++ long npage = dma->locked_vm;
++ bool lock_cap;
++ int ret;
++
++ if (mm == dma->mm)
++ return 0;
++
++ lock_cap = capable(CAP_IPC_LOCK);
++ ret = mm_lock_acct(task, mm, lock_cap, npage);
++ if (ret)
++ return ret;
++
++ if (mmget_not_zero(dma->mm)) {
++ mm_lock_acct(dma->task, dma->mm, dma->lock_cap, -npage);
++ mmput(dma->mm);
++ }
++
++ if (dma->task != task) {
++ put_task_struct(dma->task);
++ dma->task = get_task_struct(task);
++ }
++ mmdrop(dma->mm);
++ dma->mm = mm;
++ mmgrab(dma->mm);
++ dma->lock_cap = lock_cap;
++ return 0;
++}
++
+ static int vfio_dma_do_map(struct vfio_iommu *iommu,
+ struct vfio_iommu_type1_dma_map *map)
+ {
+@@ -1615,6 +1672,9 @@ static int vfio_dma_do_map(struct vfio_iommu *iommu,
+ dma->size != size) {
+ ret = -EINVAL;
+ } else {
++ ret = vfio_change_dma_owner(dma);
++ if (ret)
++ goto out_unlock;
+ dma->vaddr = vaddr;
+ dma->vaddr_invalid = false;
+ iommu->vaddr_invalid_count--;
+@@ -1652,29 +1712,15 @@ static int vfio_dma_do_map(struct vfio_iommu *iommu,
+ * against the locked memory limit and we need to be able to do both
+ * outside of this call path as pinning can be asynchronous via the
+ * external interfaces for mdev devices. RLIMIT_MEMLOCK requires a
+- * task_struct and VM locked pages requires an mm_struct, however
+- * holding an indefinite mm reference is not recommended, therefore we
+- * only hold a reference to a task. We could hold a reference to
+- * current, however QEMU uses this call path through vCPU threads,
+- * which can be killed resulting in a NULL mm and failure in the unmap
+- * path when called via a different thread. Avoid this problem by
+- * using the group_leader as threads within the same group require
+- * both CLONE_THREAD and CLONE_VM and will therefore use the same
+- * mm_struct.
+- *
+- * Previously we also used the task for testing CAP_IPC_LOCK at the
+- * time of pinning and accounting, however has_capability() makes use
+- * of real_cred, a copy-on-write field, so we can't guarantee that it
+- * matches group_leader, or in fact that it might not change by the
+- * time it's evaluated. If a process were to call MAP_DMA with
+- * CAP_IPC_LOCK but later drop it, it doesn't make sense that they
+- * possibly see different results for an iommu_mapped vfio_dma vs
+- * externally mapped. Therefore track CAP_IPC_LOCK in vfio_dma at the
+- * time of calling MAP_DMA.
++ * task_struct. Save the group_leader so that all DMA tracking uses
++ * the same task, to make debugging easier. VM locked pages requires
++ * an mm_struct, so grab the mm in case the task dies.
+ */
+ get_task_struct(current->group_leader);
+ dma->task = current->group_leader;
+ dma->lock_cap = capable(CAP_IPC_LOCK);
++ dma->mm = current->mm;
++ mmgrab(dma->mm);
+
+ dma->pfn_list = RB_ROOT;
+
+@@ -2194,11 +2240,16 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
+ struct iommu_domain_geometry *geo;
+ LIST_HEAD(iova_copy);
+ LIST_HEAD(group_resv_regions);
+- int ret = -EINVAL;
++ int ret = -EBUSY;
+
+ mutex_lock(&iommu->lock);
+
++ /* Attach could require pinning, so disallow while vaddr is invalid. */
++ if (iommu->vaddr_invalid_count)
++ goto out_unlock;
++
+ /* Check for duplicates */
++ ret = -EINVAL;
+ if (vfio_iommu_find_iommu_group(iommu, iommu_group))
+ goto out_unlock;
+
+@@ -2669,6 +2720,16 @@ static int vfio_domains_have_enforce_cache_coherency(struct vfio_iommu *iommu)
+ return ret;
+ }
+
++static bool vfio_iommu_has_emulated(struct vfio_iommu *iommu)
++{
++ bool ret;
++
++ mutex_lock(&iommu->lock);
++ ret = !list_empty(&iommu->emulated_iommu_groups);
++ mutex_unlock(&iommu->lock);
++ return ret;
++}
++
+ static int vfio_iommu_type1_check_extension(struct vfio_iommu *iommu,
+ unsigned long arg)
+ {
+@@ -2677,8 +2738,13 @@ static int vfio_iommu_type1_check_extension(struct vfio_iommu *iommu,
+ case VFIO_TYPE1v2_IOMMU:
+ case VFIO_TYPE1_NESTING_IOMMU:
+ case VFIO_UNMAP_ALL:
+- case VFIO_UPDATE_VADDR:
+ return 1;
++ case VFIO_UPDATE_VADDR:
++ /*
++ * Disable this feature if mdevs are present. They cannot
++ * safely pin/unpin/rw while vaddrs are being updated.
++ */
++ return iommu && !vfio_iommu_has_emulated(iommu);
+ case VFIO_DMA_CC_IOMMU:
+ if (!iommu)
+ return 0;
+@@ -3099,9 +3165,8 @@ static int vfio_iommu_type1_dma_rw_chunk(struct vfio_iommu *iommu,
+ !(dma->prot & IOMMU_READ))
+ return -EPERM;
+
+- mm = get_task_mm(dma->task);
+-
+- if (!mm)
++ mm = dma->mm;
++ if (!mmget_not_zero(mm))
+ return -EPERM;
+
+ if (kthread)
+@@ -3147,6 +3212,13 @@ static int vfio_iommu_type1_dma_rw(void *iommu_data, dma_addr_t user_iova,
+ size_t done;
+
+ mutex_lock(&iommu->lock);
++
++ if (WARN_ONCE(iommu->vaddr_invalid_count,
++ "vfio_dma_rw not allowed with VFIO_UPDATE_VADDR\n")) {
++ ret = -EBUSY;
++ goto out;
++ }
++
+ while (count > 0) {
+ ret = vfio_iommu_type1_dma_rw_chunk(iommu, user_iova, data,
+ count, write, &done);
+@@ -3158,6 +3230,7 @@ static int vfio_iommu_type1_dma_rw(void *iommu_data, dma_addr_t user_iova,
+ user_iova += done;
+ }
+
++out:
+ mutex_unlock(&iommu->lock);
+ return ret;
+ }
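
The type1 rework caches dma->mm at MAP_DMA time and relies on two
reference strengths: mmgrab() keeps the mm_struct allocated
indefinitely (cheap, safe to hold), while mmget_not_zero() is taken
only around actual page-table work and fails cleanly once the owner
exits. A standalone sketch of those two levels, with a toy struct
rather than the kernel's mm_struct:

  #include <stdbool.h>
  #include <stdio.h>

  struct mm_like {
      int users;  /* like mm_users: address space still usable */
      int count;  /* like mm_count: structure still allocated  */
  };

  static void grab(struct mm_like *mm) { mm->count++; }

  static bool get_not_zero(struct mm_like *mm)
  {
      if (mm->users == 0)
          return false;  /* owner exited; pages already torn down */
      mm->users++;
      return true;
  }

  int main(void)
  {
      struct mm_like mm = { .users = 1, .count = 1 };

      grab(&mm);     /* MAP_DMA: dma->mm keeps the struct around */
      mm.users = 0;  /* the mapping process exits */
      printf("pin attempt: %s\n",
             get_not_zero(&mm) ? "ok" : "-ESRCH");
      return 0;
  }
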
+diff --git a/drivers/video/fbdev/core/fbcon.c b/drivers/video/fbdev/core/fbcon.c
+index 1b14c21af2b74..2bc8baa90c0f2 100644
+--- a/drivers/video/fbdev/core/fbcon.c
++++ b/drivers/video/fbdev/core/fbcon.c
+@@ -958,7 +958,7 @@ static const char *fbcon_startup(void)
+ set_blitting_type(vc, info);
+
+ /* Setup default font */
+- if (!p->fontdata && !vc->vc_font.data) {
++ if (!p->fontdata) {
+ if (!fontname[0] || !(font = find_font(fontname)))
+ font = get_default_font(info->var.xres,
+ info->var.yres,
+@@ -968,8 +968,6 @@ static const char *fbcon_startup(void)
+ vc->vc_font.height = font->height;
+ vc->vc_font.data = (void *)(p->fontdata = font->data);
+ vc->vc_font.charcount = font->charcount;
+- } else {
+- p->fontdata = vc->vc_font.data;
+ }
+
+ cols = FBCON_SWAP(ops->rotate, info->var.xres, info->var.yres);
+@@ -1135,9 +1133,9 @@ static void fbcon_init(struct vc_data *vc, int init)
+ ops->p = &fb_display[fg_console];
+ }
+
+-static void fbcon_free_font(struct fbcon_display *p, bool freefont)
++static void fbcon_free_font(struct fbcon_display *p)
+ {
+- if (freefont && p->userfont && p->fontdata && (--REFCOUNT(p->fontdata) == 0))
++ if (p->userfont && p->fontdata && (--REFCOUNT(p->fontdata) == 0))
+ kfree(p->fontdata - FONT_EXTRA_WORDS * sizeof(int));
+ p->fontdata = NULL;
+ p->userfont = 0;
+@@ -1172,8 +1170,8 @@ static void fbcon_deinit(struct vc_data *vc)
+ struct fb_info *info;
+ struct fbcon_ops *ops;
+ int idx;
+- bool free_font = true;
+
++ fbcon_free_font(p);
+ idx = con2fb_map[vc->vc_num];
+
+ if (idx == -1)
+@@ -1184,8 +1182,6 @@ static void fbcon_deinit(struct vc_data *vc)
+ if (!info)
+ goto finished;
+
+- if (info->flags & FBINFO_MISC_FIRMWARE)
+- free_font = false;
+ ops = info->fbcon_par;
+
+ if (!ops)
+@@ -1197,9 +1193,8 @@ static void fbcon_deinit(struct vc_data *vc)
+ ops->initialized = false;
+ finished:
+
+- fbcon_free_font(p, free_font);
+- if (free_font)
+- vc->vc_font.data = NULL;
++ fbcon_free_font(p);
++ vc->vc_font.data = NULL;
+
+ if (vc->vc_hi_font_mask && vc->vc_screenbuf)
+ set_vc_hi_font(vc, false);
+diff --git a/drivers/virt/coco/sev-guest/sev-guest.c b/drivers/virt/coco/sev-guest/sev-guest.c
+index 4ec4174e05a3c..7b4e9009f3355 100644
+--- a/drivers/virt/coco/sev-guest/sev-guest.c
++++ b/drivers/virt/coco/sev-guest/sev-guest.c
+@@ -377,9 +377,26 @@ static int handle_guest_request(struct snp_guest_dev *snp_dev, u64 exit_code, in
+ snp_dev->input.data_npages = certs_npages;
+ }
+
++ /*
++ * Increment the message sequence number. There is no harm in doing
++ * this now because decryption uses the value stored in the response
++ * structure and any failure will wipe the VMPCK, preventing further
++ * use anyway.
++ */
++ snp_inc_msg_seqno(snp_dev);
++
+ if (fw_err)
+ *fw_err = err;
+
++ /*
++ * If an extended guest request was issued and the supplied certificate
++ * buffer was not large enough, a standard guest request was issued to
++ * prevent IV reuse. If the standard request was successful, return -EIO
++ * back to the caller as would have originally been returned.
++ */
++ if (!rc && err == SNP_GUEST_REQ_INVALID_LEN)
++ return -EIO;
++
+ if (rc) {
+ dev_alert(snp_dev->dev,
+ "Detected error from ASP request. rc: %d, fw_err: %llu\n",
+@@ -395,9 +412,6 @@ static int handle_guest_request(struct snp_guest_dev *snp_dev, u64 exit_code, in
+ goto disable_vmpck;
+ }
+
+- /* Increment to new message sequence after payload decryption was successful. */
+- snp_inc_msg_seqno(snp_dev);
+-
+ return 0;
+
+ disable_vmpck:
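
The sev-guest reordering makes the message sequence number advance for
every request that actually reaches the firmware, not just those whose
response decrypts cleanly, because the sequence number seeds the
AES-GCM IV and must never repeat. A sketch of the invariant; the step
of two mirrors the driver, where request and response each consume one
value, but treat the exact numbering as an assumption:

  #include <stdint.h>
  #include <stdio.h>

  static uint64_t msg_seqno = 1;  /* seeds the AES-GCM IV */

  /* Advance unconditionally once a request has hit the firmware. */
  static uint64_t next_request_seqno(void)
  {
      uint64_t seq = msg_seqno;

      msg_seqno += 2;
      return seq;
  }

  int main(void)
  {
      printf("req1 seq=%llu\n", (unsigned long long)next_request_seqno());
      printf("req2 seq=%llu\n", (unsigned long long)next_request_seqno());
      return 0;
  }
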
+diff --git a/drivers/xen/grant-dma-iommu.c b/drivers/xen/grant-dma-iommu.c
+index 16b8bc0c0b33d..6a9fe02c6bfcc 100644
+--- a/drivers/xen/grant-dma-iommu.c
++++ b/drivers/xen/grant-dma-iommu.c
+@@ -16,8 +16,15 @@ struct grant_dma_iommu_device {
+ struct iommu_device iommu;
+ };
+
+-/* Nothing is really needed here */
+-static const struct iommu_ops grant_dma_iommu_ops;
++static struct iommu_device *grant_dma_iommu_probe_device(struct device *dev)
++{
++ return ERR_PTR(-ENODEV);
++}
++
++/* Nothing is really needed here except a dummy probe_device callback */
++static const struct iommu_ops grant_dma_iommu_ops = {
++ .probe_device = grant_dma_iommu_probe_device,
++};
+
+ static const struct of_device_id grant_dma_iommu_of_match[] = {
+ { .compatible = "xen,grant-dma" },
+diff --git a/fs/btrfs/discard.c b/fs/btrfs/discard.c
+index ff2e524d99377..317aeff6c1dac 100644
+--- a/fs/btrfs/discard.c
++++ b/fs/btrfs/discard.c
+@@ -78,6 +78,7 @@ static struct list_head *get_discard_list(struct btrfs_discard_ctl *discard_ctl,
+ static void __add_to_discard_list(struct btrfs_discard_ctl *discard_ctl,
+ struct btrfs_block_group *block_group)
+ {
++ lockdep_assert_held(&discard_ctl->lock);
+ if (!btrfs_run_discard_work(discard_ctl))
+ return;
+
+@@ -89,6 +90,8 @@ static void __add_to_discard_list(struct btrfs_discard_ctl *discard_ctl,
+ BTRFS_DISCARD_DELAY);
+ block_group->discard_state = BTRFS_DISCARD_RESET_CURSOR;
+ }
++ if (list_empty(&block_group->discard_list))
++ btrfs_get_block_group(block_group);
+
+ list_move_tail(&block_group->discard_list,
+ get_discard_list(discard_ctl, block_group));
+@@ -108,8 +111,12 @@ static void add_to_discard_list(struct btrfs_discard_ctl *discard_ctl,
+ static void add_to_discard_unused_list(struct btrfs_discard_ctl *discard_ctl,
+ struct btrfs_block_group *block_group)
+ {
++ bool queued;
++
+ spin_lock(&discard_ctl->lock);
+
++ queued = !list_empty(&block_group->discard_list);
++
+ if (!btrfs_run_discard_work(discard_ctl)) {
+ spin_unlock(&discard_ctl->lock);
+ return;
+@@ -121,6 +128,8 @@ static void add_to_discard_unused_list(struct btrfs_discard_ctl *discard_ctl,
+ block_group->discard_eligible_time = (ktime_get_ns() +
+ BTRFS_DISCARD_UNUSED_DELAY);
+ block_group->discard_state = BTRFS_DISCARD_RESET_CURSOR;
++ if (!queued)
++ btrfs_get_block_group(block_group);
+ list_add_tail(&block_group->discard_list,
+ &discard_ctl->discard_list[BTRFS_DISCARD_INDEX_UNUSED]);
+
+@@ -131,6 +140,7 @@ static bool remove_from_discard_list(struct btrfs_discard_ctl *discard_ctl,
+ struct btrfs_block_group *block_group)
+ {
+ bool running = false;
++ bool queued = false;
+
+ spin_lock(&discard_ctl->lock);
+
+@@ -140,7 +150,16 @@ static bool remove_from_discard_list(struct btrfs_discard_ctl *discard_ctl,
+ }
+
+ block_group->discard_eligible_time = 0;
++ queued = !list_empty(&block_group->discard_list);
+ list_del_init(&block_group->discard_list);
++ /*
++ * If the block group is currently running in the discard workfn, we
++ * don't want to deref it, since it's still being used by the workfn.
++ * The workfn will notice this case and deref the block group when it is
++ * finished.
++ */
++ if (queued && !running)
++ btrfs_put_block_group(block_group);
+
+ spin_unlock(&discard_ctl->lock);
+
+@@ -214,10 +233,12 @@ again:
+ if (block_group && now >= block_group->discard_eligible_time) {
+ if (block_group->discard_index == BTRFS_DISCARD_INDEX_UNUSED &&
+ block_group->used != 0) {
+- if (btrfs_is_block_group_data_only(block_group))
++ if (btrfs_is_block_group_data_only(block_group)) {
+ __add_to_discard_list(discard_ctl, block_group);
+- else
++ } else {
+ list_del_init(&block_group->discard_list);
++ btrfs_put_block_group(block_group);
++ }
+ goto again;
+ }
+ if (block_group->discard_state == BTRFS_DISCARD_RESET_CURSOR) {
+@@ -511,6 +532,15 @@ static void btrfs_discard_workfn(struct work_struct *work)
+ spin_lock(&discard_ctl->lock);
+ discard_ctl->prev_discard = trimmed;
+ discard_ctl->prev_discard_time = now;
++ /*
++ * If the block group was removed from the discard list while it was
++ * running in this workfn, then we didn't deref it, since this function
++ * still owned that reference. But we set the discard_ctl->block_group
++ * back to NULL, so we can use that condition to know that now we need
++ * to deref the block_group.
++ */
++ if (discard_ctl->block_group == NULL)
++ btrfs_put_block_group(block_group);
+ discard_ctl->block_group = NULL;
+ __btrfs_discard_schedule_work(discard_ctl, now, false);
+ spin_unlock(&discard_ctl->lock);
+@@ -651,8 +681,12 @@ void btrfs_discard_punt_unused_bgs_list(struct btrfs_fs_info *fs_info)
+ list_for_each_entry_safe(block_group, next, &fs_info->unused_bgs,
+ bg_list) {
+ list_del_init(&block_group->bg_list);
+- btrfs_put_block_group(block_group);
+ btrfs_discard_queue_work(&fs_info->discard_ctl, block_group);
++ /*
++ * This put is for the get done by btrfs_mark_bg_unused.
++ * Queueing discard incremented it for discard's reference.
++ */
++ btrfs_put_block_group(block_group);
+ }
+ spin_unlock(&fs_info->unused_bgs_lock);
+ }
+@@ -683,6 +717,7 @@ static void btrfs_discard_purge_list(struct btrfs_discard_ctl *discard_ctl)
+ if (block_group->used == 0)
+ btrfs_mark_bg_unused(block_group);
+ spin_lock(&discard_ctl->lock);
++ btrfs_put_block_group(block_group);
+ }
+ }
+ spin_unlock(&discard_ctl->lock);
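
The discard.c fix gives the discard lists ownership semantics: a block
group takes a reference when first queued and drops it when removed,
with a careful handoff when the workfn still holds the current entry.
A minimal sketch of the list-membership-owns-a-reference rule, with
toy types in place of the real block group and list:

  #include <stdio.h>

  struct bg {
      int refs;
      int queued;  /* stand-in for !list_empty(&bg->discard_list) */
  };

  static void bg_get(struct bg *b) { b->refs++; }

  static void bg_put(struct bg *b)
  {
      if (--b->refs == 0)
          puts("block group freed");
  }

  static void queue_discard(struct bg *b)
  {
      if (!b->queued)
          bg_get(b);  /* first insertion takes the list's reference */
      b->queued = 1;
  }

  static void dequeue_discard(struct bg *b)
  {
      if (b->queued)
          bg_put(b);  /* removal gives it back */
      b->queued = 0;
  }

  int main(void)
  {
      struct bg b = { .refs = 1 };  /* caller's reference */

      queue_discard(&b);
      dequeue_discard(&b);
      bg_put(&b);                   /* caller drops: prints "freed" */
      return 0;
  }
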
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index 3aa04224315eb..fde40112a2593 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -1910,6 +1910,9 @@ static int cleaner_kthread(void *arg)
+ goto sleep;
+ }
+
++ if (test_and_clear_bit(BTRFS_FS_FEATURE_CHANGED, &fs_info->flags))
++ btrfs_sysfs_feature_update(fs_info);
++
+ btrfs_run_delayed_iputs(fs_info);
+
+ again = btrfs_clean_one_deleted_snapshot(fs_info);
+diff --git a/fs/btrfs/fs.c b/fs/btrfs/fs.c
+index 5553e1f8afe8e..31c1648bc0b46 100644
+--- a/fs/btrfs/fs.c
++++ b/fs/btrfs/fs.c
+@@ -24,6 +24,7 @@ void __btrfs_set_fs_incompat(struct btrfs_fs_info *fs_info, u64 flag,
+ name, flag);
+ }
+ spin_unlock(&fs_info->super_lock);
++ set_bit(BTRFS_FS_FEATURE_CHANGED, &fs_info->flags);
+ }
+ }
+
+@@ -46,6 +47,7 @@ void __btrfs_clear_fs_incompat(struct btrfs_fs_info *fs_info, u64 flag,
+ name, flag);
+ }
+ spin_unlock(&fs_info->super_lock);
++ set_bit(BTRFS_FS_FEATURE_CHANGED, &fs_info->flags);
+ }
+ }
+
+@@ -68,6 +70,7 @@ void __btrfs_set_fs_compat_ro(struct btrfs_fs_info *fs_info, u64 flag,
+ name, flag);
+ }
+ spin_unlock(&fs_info->super_lock);
++ set_bit(BTRFS_FS_FEATURE_CHANGED, &fs_info->flags);
+ }
+ }
+
+@@ -90,5 +93,6 @@ void __btrfs_clear_fs_compat_ro(struct btrfs_fs_info *fs_info, u64 flag,
+ name, flag);
+ }
+ spin_unlock(&fs_info->super_lock);
++ set_bit(BTRFS_FS_FEATURE_CHANGED, &fs_info->flags);
+ }
+ }
+diff --git a/fs/btrfs/fs.h b/fs/btrfs/fs.h
+index 37b86acfcbcf8..3d8156fc8523f 100644
+--- a/fs/btrfs/fs.h
++++ b/fs/btrfs/fs.h
+@@ -125,6 +125,12 @@ enum {
+ */
+ BTRFS_FS_NO_OVERCOMMIT,
+
++ /*
++	 * Indicate that some feature flags have changed; this is mostly
++	 * for the cleaner thread to update the sysfs interface.
++ */
++ BTRFS_FS_FEATURE_CHANGED,
++
+ #if BITS_PER_LONG == 32
+ /* Indicate if we have error/warn message printed on 32bit systems */
+ BTRFS_FS_32BIT_ERROR,
+diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
+index 52b346795f660..a5d026041be45 100644
+--- a/fs/btrfs/scrub.c
++++ b/fs/btrfs/scrub.c
+@@ -2053,20 +2053,33 @@ static int scrub_checksum_tree_block(struct scrub_block *sblock)
+ * a) don't have an extent buffer and
+ * b) the page is already kmapped
+ */
+- if (sblock->logical != btrfs_stack_header_bytenr(h))
++ if (sblock->logical != btrfs_stack_header_bytenr(h)) {
+ sblock->header_error = 1;
+-
+- if (sector->generation != btrfs_stack_header_generation(h)) {
+- sblock->header_error = 1;
+- sblock->generation_error = 1;
++ btrfs_warn_rl(fs_info,
++ "tree block %llu mirror %u has bad bytenr, has %llu want %llu",
++ sblock->logical, sblock->mirror_num,
++ btrfs_stack_header_bytenr(h),
++ sblock->logical);
++ goto out;
+ }
+
+- if (!scrub_check_fsid(h->fsid, sector))
++ if (!scrub_check_fsid(h->fsid, sector)) {
+ sblock->header_error = 1;
++ btrfs_warn_rl(fs_info,
++ "tree block %llu mirror %u has bad fsid, has %pU want %pU",
++ sblock->logical, sblock->mirror_num,
++ h->fsid, sblock->dev->fs_devices->fsid);
++ goto out;
++ }
+
+- if (memcmp(h->chunk_tree_uuid, fs_info->chunk_tree_uuid,
+- BTRFS_UUID_SIZE))
++ if (memcmp(h->chunk_tree_uuid, fs_info->chunk_tree_uuid, BTRFS_UUID_SIZE)) {
+ sblock->header_error = 1;
++ btrfs_warn_rl(fs_info,
++ "tree block %llu mirror %u has bad chunk tree uuid, has %pU want %pU",
++ sblock->logical, sblock->mirror_num,
++ h->chunk_tree_uuid, fs_info->chunk_tree_uuid);
++ goto out;
++ }
+
+ shash->tfm = fs_info->csum_shash;
+ crypto_shash_init(shash);
+@@ -2079,9 +2092,27 @@ static int scrub_checksum_tree_block(struct scrub_block *sblock)
+ }
+
+ crypto_shash_final(shash, calculated_csum);
+- if (memcmp(calculated_csum, on_disk_csum, sctx->fs_info->csum_size))
++ if (memcmp(calculated_csum, on_disk_csum, sctx->fs_info->csum_size)) {
+ sblock->checksum_error = 1;
++ btrfs_warn_rl(fs_info,
++ "tree block %llu mirror %u has bad csum, has " CSUM_FMT " want " CSUM_FMT,
++ sblock->logical, sblock->mirror_num,
++ CSUM_FMT_VALUE(fs_info->csum_size, on_disk_csum),
++ CSUM_FMT_VALUE(fs_info->csum_size, calculated_csum));
++ goto out;
++ }
++
++ if (sector->generation != btrfs_stack_header_generation(h)) {
++ sblock->header_error = 1;
++ sblock->generation_error = 1;
++ btrfs_warn_rl(fs_info,
++ "tree block %llu mirror %u has bad generation, has %llu want %llu",
++ sblock->logical, sblock->mirror_num,
++ btrfs_stack_header_generation(h),
++ sector->generation);
++ }
+
++out:
+ return sblock->header_error || sblock->checksum_error;
+ }
+
+diff --git a/fs/btrfs/sysfs.c b/fs/btrfs/sysfs.c
+index 45615ce364988..108aa38761860 100644
+--- a/fs/btrfs/sysfs.c
++++ b/fs/btrfs/sysfs.c
+@@ -2272,36 +2272,23 @@ void btrfs_sysfs_del_one_qgroup(struct btrfs_fs_info *fs_info,
+ * Change per-fs features in /sys/fs/btrfs/UUID/features to match current
+ * values in superblock. Call after any changes to incompat/compat_ro flags
+ */
+-void btrfs_sysfs_feature_update(struct btrfs_fs_info *fs_info,
+- u64 bit, enum btrfs_feature_set set)
++void btrfs_sysfs_feature_update(struct btrfs_fs_info *fs_info)
+ {
+- struct btrfs_fs_devices *fs_devs;
+ struct kobject *fsid_kobj;
+- u64 __maybe_unused features;
+- int __maybe_unused ret;
++ int ret;
+
+ if (!fs_info)
+ return;
+
+- /*
+- * See 14e46e04958df74 and e410e34fad913dd, feature bit updates are not
+- * safe when called from some contexts (eg. balance)
+- */
+- features = get_features(fs_info, set);
+- ASSERT(bit & supported_feature_masks[set]);
+-
+- fs_devs = fs_info->fs_devices;
+- fsid_kobj = &fs_devs->fsid_kobj;
+-
++ fsid_kobj = &fs_info->fs_devices->fsid_kobj;
+ if (!fsid_kobj->state_initialized)
+ return;
+
+- /*
+- * FIXME: this is too heavy to update just one value, ideally we'd like
+- * to use sysfs_update_group but some refactoring is needed first.
+- */
+- sysfs_remove_group(fsid_kobj, &btrfs_feature_attr_group);
+- ret = sysfs_create_group(fsid_kobj, &btrfs_feature_attr_group);
++ ret = sysfs_update_group(fsid_kobj, &btrfs_feature_attr_group);
++ if (ret < 0)
++ btrfs_warn(fs_info,
++ "failed to update /sys/fs/btrfs/%pU/features: %d",
++ fs_info->fs_devices->fsid, ret);
+ }
+
+ int __init btrfs_init_sysfs(void)
+diff --git a/fs/btrfs/sysfs.h b/fs/btrfs/sysfs.h
+index bacef43f72672..86c7eef128731 100644
+--- a/fs/btrfs/sysfs.h
++++ b/fs/btrfs/sysfs.h
+@@ -19,8 +19,7 @@ void btrfs_sysfs_remove_device(struct btrfs_device *device);
+ int btrfs_sysfs_add_fsid(struct btrfs_fs_devices *fs_devs);
+ void btrfs_sysfs_remove_fsid(struct btrfs_fs_devices *fs_devs);
+ void btrfs_sysfs_update_sprout_fsid(struct btrfs_fs_devices *fs_devices);
+-void btrfs_sysfs_feature_update(struct btrfs_fs_info *fs_info,
+- u64 bit, enum btrfs_feature_set set);
++void btrfs_sysfs_feature_update(struct btrfs_fs_info *fs_info);
+ void btrfs_kobject_uevent(struct block_device *bdev, enum kobject_action action);
+
+ int __init btrfs_init_sysfs(void);
+diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c
+index b8c52e89688c8..8f8d0fce6e4a3 100644
+--- a/fs/btrfs/transaction.c
++++ b/fs/btrfs/transaction.c
+@@ -2464,6 +2464,11 @@ int btrfs_commit_transaction(struct btrfs_trans_handle *trans)
+ wake_up(&fs_info->transaction_wait);
+ btrfs_trans_state_lockdep_release(fs_info, BTRFS_LOCKDEP_TRANS_UNBLOCKED);
+
++	/* If any feature flags changed, wake up the cleaner to update sysfs. */
++ if (test_bit(BTRFS_FS_FEATURE_CHANGED, &fs_info->flags) &&
++ fs_info->cleaner_kthread)
++ wake_up_process(fs_info->cleaner_kthread);
++
+ ret = btrfs_write_and_wait_transaction(trans);
+ if (ret) {
+ btrfs_handle_fs_error(fs_info, ret,
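
Together the btrfs hunks replace direct sysfs manipulation from the
feature-bit setters with a deferred scheme: the setters raise
BTRFS_FS_FEATURE_CHANGED, transaction commit wakes the cleaner, and the
cleaner calls btrfs_sysfs_feature_update() from safe process context.
A sketch of that flag-plus-worker pattern, using C11 atomics as
stand-ins for set_bit()/test_and_clear_bit():

  #include <stdatomic.h>
  #include <stdbool.h>
  #include <stdio.h>

  static atomic_bool feature_changed = false;

  /* Hot path: record the change; the kernel also wakes the cleaner. */
  static void set_feature_bit(void)
  {
      atomic_store(&feature_changed, true);
  }

  /* Cleaner loop: do the sleeping sysfs work in a safe context. */
  static void cleaner_iteration(void)
  {
      if (atomic_exchange(&feature_changed, false))
          puts("sysfs_update_group()");
  }

  int main(void)
  {
      set_feature_bit();
      cleaner_iteration();
      cleaner_iteration();  /* second pass: nothing left to do */
      return 0;
  }
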
+diff --git a/fs/ceph/file.c b/fs/ceph/file.c
+index b5cff85925a10..dc39a4b0ec8ec 100644
+--- a/fs/ceph/file.c
++++ b/fs/ceph/file.c
+@@ -2102,6 +2102,9 @@ static long ceph_fallocate(struct file *file, int mode,
+ loff_t endoff = 0;
+ loff_t size;
+
++ dout("%s %p %llx.%llx mode %x, offset %llu length %llu\n", __func__,
++ inode, ceph_vinop(inode), mode, offset, length);
++
+ if (mode != (FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE))
+ return -EOPNOTSUPP;
+
+@@ -2136,6 +2139,10 @@ static long ceph_fallocate(struct file *file, int mode,
+ if (ret < 0)
+ goto unlock;
+
++ ret = file_modified(file);
++ if (ret)
++ goto put_caps;
++
+ filemap_invalidate_lock(inode->i_mapping);
+ ceph_fscache_invalidate(inode, false);
+ ceph_zero_pagecache_range(inode, offset, length);
+@@ -2151,6 +2158,7 @@ static long ceph_fallocate(struct file *file, int mode,
+ }
+ filemap_invalidate_unlock(inode->i_mapping);
+
++put_caps:
+ ceph_put_cap_refs(ci, got);
+ unlock:
+ inode_unlock(inode);
+diff --git a/fs/cifs/cached_dir.c b/fs/cifs/cached_dir.c
+index 60399081046a5..75d5e06306ea5 100644
+--- a/fs/cifs/cached_dir.c
++++ b/fs/cifs/cached_dir.c
+@@ -14,6 +14,7 @@
+
+ static struct cached_fid *init_cached_dir(const char *path);
+ static void free_cached_dir(struct cached_fid *cfid);
++static void smb2_close_cached_fid(struct kref *ref);
+
+ static struct cached_fid *find_or_create_cached_dir(struct cached_fids *cfids,
+ const char *path,
+@@ -181,12 +182,13 @@ int open_cached_dir(unsigned int xid, struct cifs_tcon *tcon,
+ rqst[0].rq_iov = open_iov;
+ rqst[0].rq_nvec = SMB2_CREATE_IOV_SIZE;
+
+- oparms.tcon = tcon;
+- oparms.create_options = cifs_create_options(cifs_sb, CREATE_NOT_FILE);
+- oparms.desired_access = FILE_READ_ATTRIBUTES;
+- oparms.disposition = FILE_OPEN;
+- oparms.fid = pfid;
+- oparms.reconnect = false;
++ oparms = (struct cifs_open_parms) {
++ .tcon = tcon,
++ .create_options = cifs_create_options(cifs_sb, CREATE_NOT_FILE),
++ .desired_access = FILE_READ_ATTRIBUTES,
++ .disposition = FILE_OPEN,
++ .fid = pfid,
++ };
+
+ rc = SMB2_open_init(tcon, server,
+ &rqst[0], &oplock, &oparms, utf16_path);
+@@ -220,8 +222,8 @@ int open_cached_dir(unsigned int xid, struct cifs_tcon *tcon,
+ }
+ goto oshr_free;
+ }
+-
+- atomic_inc(&tcon->num_remote_opens);
++ cfid->tcon = tcon;
++ cfid->is_open = true;
+
+ o_rsp = (struct smb2_create_rsp *)rsp_iov[0].iov_base;
+ oparms.fid->persistent_fid = o_rsp->PersistentFileId;
+@@ -233,12 +235,12 @@ int open_cached_dir(unsigned int xid, struct cifs_tcon *tcon,
+ if (o_rsp->OplockLevel != SMB2_OPLOCK_LEVEL_LEASE)
+ goto oshr_free;
+
+-
+ smb2_parse_contexts(server, o_rsp,
+ &oparms.fid->epoch,
+ oparms.fid->lease_key, &oplock,
+ NULL, NULL);
+-
++ if (!(oplock & SMB2_LEASE_READ_CACHING_HE))
++ goto oshr_free;
+ qi_rsp = (struct smb2_query_info_rsp *)rsp_iov[1].iov_base;
+ if (le32_to_cpu(qi_rsp->OutputBufferLength) < sizeof(struct smb2_file_all_info))
+ goto oshr_free;
+@@ -259,9 +261,7 @@ int open_cached_dir(unsigned int xid, struct cifs_tcon *tcon,
+ }
+ }
+ cfid->dentry = dentry;
+- cfid->tcon = tcon;
+ cfid->time = jiffies;
+- cfid->is_open = true;
+ cfid->has_lease = true;
+
+ oshr_free:
+@@ -271,7 +271,7 @@ oshr_free:
+ free_rsp_buf(resp_buftype[0], rsp_iov[0].iov_base);
+ free_rsp_buf(resp_buftype[1], rsp_iov[1].iov_base);
+ spin_lock(&cfids->cfid_list_lock);
+- if (!cfid->has_lease) {
++ if (rc && !cfid->has_lease) {
+ if (cfid->on_list) {
+ list_del(&cfid->entry);
+ cfid->on_list = false;
+@@ -280,13 +280,27 @@ oshr_free:
+ rc = -ENOENT;
+ }
+ spin_unlock(&cfids->cfid_list_lock);
++ if (!rc && !cfid->has_lease) {
++ /*
++ * We are guaranteed to have two references at this point.
++ * One for the caller and one for a potential lease.
++ * Release the Lease-ref so that the directory will be closed
++ * when the caller closes the cached handle.
++ */
++ kref_put(&cfid->refcount, smb2_close_cached_fid);
++ }
+ if (rc) {
++ if (cfid->is_open)
++ SMB2_close(0, cfid->tcon, cfid->fid.persistent_fid,
++ cfid->fid.volatile_fid);
+ free_cached_dir(cfid);
+ cfid = NULL;
+ }
+
+- if (rc == 0)
++ if (rc == 0) {
+ *ret_cfid = cfid;
++ atomic_inc(&tcon->num_remote_opens);
++ }
+
+ return rc;
+ }
+@@ -335,6 +349,7 @@ smb2_close_cached_fid(struct kref *ref)
+ if (cfid->is_open) {
+ SMB2_close(0, cfid->tcon, cfid->fid.persistent_fid,
+ cfid->fid.volatile_fid);
++ atomic_dec(&cfid->tcon->num_remote_opens);
+ }
+
+ free_cached_dir(cfid);
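
The cached_dir changes straighten out the handle's lifetime: it is
created with two references, one for the caller and one owned by the
lease, and when the server grants no read lease the lease reference is
dropped immediately so the caller's close actually frees the handle.
A toy sketch of that two-reference scheme, illustrative rather than
the real kref code:

  #include <stdio.h>

  struct cfid {
      int refcount;
      int has_lease;
  };

  static void cfid_put(struct cfid *c)
  {
      if (--c->refcount == 0)
          puts("cached handle closed and freed");
  }

  static void open_cached(struct cfid *c, int lease_granted)
  {
      c->refcount = 2;          /* caller + potential lease */
      c->has_lease = lease_granted;
      if (!c->has_lease)
          cfid_put(c);          /* no lease: return its reference now */
  }

  int main(void)
  {
      struct cfid c;

      open_cached(&c, 0);  /* server declined the read lease */
      cfid_put(&c);        /* caller's close now frees the handle */
      return 0;
  }
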
+diff --git a/fs/cifs/cifsacl.c b/fs/cifs/cifsacl.c
+index bbf58c2439da2..3cc3471199f54 100644
+--- a/fs/cifs/cifsacl.c
++++ b/fs/cifs/cifsacl.c
+@@ -1428,14 +1428,15 @@ static struct cifs_ntsd *get_cifs_acl_by_path(struct cifs_sb_info *cifs_sb,
+ tcon = tlink_tcon(tlink);
+ xid = get_xid();
+
+- oparms.tcon = tcon;
+- oparms.cifs_sb = cifs_sb;
+- oparms.desired_access = READ_CONTROL;
+- oparms.create_options = cifs_create_options(cifs_sb, 0);
+- oparms.disposition = FILE_OPEN;
+- oparms.path = path;
+- oparms.fid = &fid;
+- oparms.reconnect = false;
++ oparms = (struct cifs_open_parms) {
++ .tcon = tcon,
++ .cifs_sb = cifs_sb,
++ .desired_access = READ_CONTROL,
++ .create_options = cifs_create_options(cifs_sb, 0),
++ .disposition = FILE_OPEN,
++ .path = path,
++ .fid = &fid,
++ };
+
+ rc = CIFS_open(xid, &oparms, &oplock, NULL);
+ if (!rc) {
+@@ -1494,14 +1495,15 @@ int set_cifs_acl(struct cifs_ntsd *pnntsd, __u32 acllen,
+ else
+ access_flags = WRITE_DAC;
+
+- oparms.tcon = tcon;
+- oparms.cifs_sb = cifs_sb;
+- oparms.desired_access = access_flags;
+- oparms.create_options = cifs_create_options(cifs_sb, 0);
+- oparms.disposition = FILE_OPEN;
+- oparms.path = path;
+- oparms.fid = &fid;
+- oparms.reconnect = false;
++ oparms = (struct cifs_open_parms) {
++ .tcon = tcon,
++ .cifs_sb = cifs_sb,
++ .desired_access = access_flags,
++ .create_options = cifs_create_options(cifs_sb, 0),
++ .disposition = FILE_OPEN,
++ .path = path,
++ .fid = &fid,
++ };
+
+ rc = CIFS_open(xid, &oparms, &oplock, NULL);
+ if (rc) {
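
This hunk, like the ones that follow in cifssmb.c, dir.c, file.c,
inode.c, link.c and smb1ops.c, converts field-by-field assignment of
cifs_open_parms into a compound-literal assignment, which
zero-initializes every member that is not named (note how the explicit
.reconnect = false lines disappear). A small sketch of why that style
is safer, with a hypothetical struct:

  #include <stdbool.h>
  #include <stdio.h>

  struct open_parms {
      int desired_access;
      bool reconnect;
      void *fid;
  };

  int main(void)
  {
      struct open_parms oparms;

      /* Any member not mentioned is zeroed; with field-by-field
       * assignment, a forgotten member keeps stack garbage. */
      oparms = (struct open_parms) {
          .desired_access = 1,
      };
      printf("reconnect=%d fid=%p\n", oparms.reconnect, oparms.fid);
      return 0;
  }
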
+diff --git a/fs/cifs/cifsproto.h b/fs/cifs/cifsproto.h
+index 1207b39686fb9..e75184544ecb4 100644
+--- a/fs/cifs/cifsproto.h
++++ b/fs/cifs/cifsproto.h
+@@ -670,11 +670,21 @@ static inline int get_dfs_path(const unsigned int xid, struct cifs_ses *ses,
+ int match_target_ip(struct TCP_Server_Info *server,
+ const char *share, size_t share_len,
+ bool *result);
+-
+-int cifs_dfs_query_info_nonascii_quirk(const unsigned int xid,
+- struct cifs_tcon *tcon,
+- struct cifs_sb_info *cifs_sb,
+- const char *dfs_link_path);
++int cifs_inval_name_dfs_link_error(const unsigned int xid,
++ struct cifs_tcon *tcon,
++ struct cifs_sb_info *cifs_sb,
++ const char *full_path,
++ bool *islink);
++#else
++static inline int cifs_inval_name_dfs_link_error(const unsigned int xid,
++ struct cifs_tcon *tcon,
++ struct cifs_sb_info *cifs_sb,
++ const char *full_path,
++ bool *islink)
++{
++ *islink = false;
++ return 0;
++}
+ #endif
+
+ static inline int cifs_create_options(struct cifs_sb_info *cifs_sb, int options)
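
The header now pairs the real prototype with a static inline stub for
builds without CONFIG_CIFS_DFS_UPCALL, so callers need no #ifdef and
get a benign "not a link" answer. A sketch of the pattern with a
made-up feature switch:

  #include <stdbool.h>
  #include <stdio.h>

  #define FEATURE_ENABLED 0  /* made-up stand-in for the Kconfig option */

  #if FEATURE_ENABLED
  int check_dfs_link(const char *path, bool *islink);
  #else
  static inline int check_dfs_link(const char *path, bool *islink)
  {
      (void)path;
      *islink = false;  /* benign default when support is absent */
      return 0;
  }
  #endif

  int main(void)
  {
      bool islink;
      int rc = check_dfs_link("/share/file", &islink);

      printf("rc=%d islink=%d\n", rc, islink);
      return 0;
  }
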
+diff --git a/fs/cifs/cifssmb.c b/fs/cifs/cifssmb.c
+index 23f10e0d6e7e3..8c014a3ff9e00 100644
+--- a/fs/cifs/cifssmb.c
++++ b/fs/cifs/cifssmb.c
+@@ -5372,14 +5372,15 @@ CIFSSMBSetPathInfoFB(const unsigned int xid, struct cifs_tcon *tcon,
+ struct cifs_fid fid;
+ int rc;
+
+- oparms.tcon = tcon;
+- oparms.cifs_sb = cifs_sb;
+- oparms.desired_access = GENERIC_WRITE;
+- oparms.create_options = cifs_create_options(cifs_sb, 0);
+- oparms.disposition = FILE_OPEN;
+- oparms.path = fileName;
+- oparms.fid = &fid;
+- oparms.reconnect = false;
++ oparms = (struct cifs_open_parms) {
++ .tcon = tcon,
++ .cifs_sb = cifs_sb,
++ .desired_access = GENERIC_WRITE,
++ .create_options = cifs_create_options(cifs_sb, 0),
++ .disposition = FILE_OPEN,
++ .path = fileName,
++ .fid = &fid,
++ };
+
+ rc = CIFS_open(xid, &oparms, &oplock, NULL);
+ if (rc)
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index b2a04b4e89a5e..af49ae53aaf40 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -2843,72 +2843,48 @@ ip_rfc1001_connect(struct TCP_Server_Info *server)
+ * negprot - BB check reconnection in case where second
+ * sessinit is sent but no second negprot
+ */
+- struct rfc1002_session_packet *ses_init_buf;
+- unsigned int req_noscope_len;
+- struct smb_hdr *smb_buf;
++ struct rfc1002_session_packet req = {};
++ struct smb_hdr *smb_buf = (struct smb_hdr *)&req;
++ unsigned int len;
+
+- ses_init_buf = kzalloc(sizeof(struct rfc1002_session_packet),
+- GFP_KERNEL);
++ req.trailer.session_req.called_len = sizeof(req.trailer.session_req.called_name);
+
+- if (ses_init_buf) {
+- ses_init_buf->trailer.session_req.called_len = 32;
++ if (server->server_RFC1001_name[0] != 0)
++ rfc1002mangle(req.trailer.session_req.called_name,
++ server->server_RFC1001_name,
++ RFC1001_NAME_LEN_WITH_NULL);
++ else
++ rfc1002mangle(req.trailer.session_req.called_name,
++ DEFAULT_CIFS_CALLED_NAME,
++ RFC1001_NAME_LEN_WITH_NULL);
+
+- if (server->server_RFC1001_name[0] != 0)
+- rfc1002mangle(ses_init_buf->trailer.
+- session_req.called_name,
+- server->server_RFC1001_name,
+- RFC1001_NAME_LEN_WITH_NULL);
+- else
+- rfc1002mangle(ses_init_buf->trailer.
+- session_req.called_name,
+- DEFAULT_CIFS_CALLED_NAME,
+- RFC1001_NAME_LEN_WITH_NULL);
++ req.trailer.session_req.calling_len = sizeof(req.trailer.session_req.calling_name);
+
+- ses_init_buf->trailer.session_req.calling_len = 32;
++ /* calling name ends in null (byte 16) from old smb convention */
++ if (server->workstation_RFC1001_name[0] != 0)
++ rfc1002mangle(req.trailer.session_req.calling_name,
++ server->workstation_RFC1001_name,
++ RFC1001_NAME_LEN_WITH_NULL);
++ else
++ rfc1002mangle(req.trailer.session_req.calling_name,
++ "LINUX_CIFS_CLNT",
++ RFC1001_NAME_LEN_WITH_NULL);
+
+- /*
+- * calling name ends in null (byte 16) from old smb
+- * convention.
+- */
+- if (server->workstation_RFC1001_name[0] != 0)
+- rfc1002mangle(ses_init_buf->trailer.
+- session_req.calling_name,
+- server->workstation_RFC1001_name,
+- RFC1001_NAME_LEN_WITH_NULL);
+- else
+- rfc1002mangle(ses_init_buf->trailer.
+- session_req.calling_name,
+- "LINUX_CIFS_CLNT",
+- RFC1001_NAME_LEN_WITH_NULL);
+-
+- ses_init_buf->trailer.session_req.scope1 = 0;
+- ses_init_buf->trailer.session_req.scope2 = 0;
+- smb_buf = (struct smb_hdr *)ses_init_buf;
+-
+- /* sizeof RFC1002_SESSION_REQUEST with no scopes */
+- req_noscope_len = sizeof(struct rfc1002_session_packet) - 2;
+-
+- /* == cpu_to_be32(0x81000044) */
+- smb_buf->smb_buf_length =
+- cpu_to_be32((RFC1002_SESSION_REQUEST << 24) | req_noscope_len);
+- rc = smb_send(server, smb_buf, 0x44);
+- kfree(ses_init_buf);
+- /*
+- * RFC1001 layer in at least one server
+- * requires very short break before negprot
+- * presumably because not expecting negprot
+- * to follow so fast. This is a simple
+- * solution that works without
+- * complicating the code and causes no
+- * significant slowing down on mount
+- * for everyone else
+- */
+- usleep_range(1000, 2000);
+- }
+ /*
+- * else the negprot may still work without this
+- * even though malloc failed
++ * As per rfc1002, @len must be the number of bytes that follows the
++ * length field of a rfc1002 session request payload.
++ */
++ len = sizeof(req) - offsetof(struct rfc1002_session_packet, trailer.session_req);
++
++ smb_buf->smb_buf_length = cpu_to_be32((RFC1002_SESSION_REQUEST << 24) | len);
++ rc = smb_send(server, smb_buf, len);
++ /*
++ * RFC1001 layer in at least one server requires very short break before
++ * negprot presumably because not expecting negprot to follow so fast.
++ * This is a simple solution that works without complicating the code
++ * and causes no significant slowing down on mount for everyone else
+ */
++ usleep_range(1000, 2000);
+
+ return rc;
+ }
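
The connect.c rewrite drops the kmalloc and the magic 0x44: the
RFC 1002 session request is built on the stack, and its length field is
derived with offsetof(), counting only the bytes after the 4-byte
header. A sketch of that computation over an illustrative, not
wire-accurate, layout:

  #include <stddef.h>
  #include <stdint.h>
  #include <stdio.h>

  struct session_packet {
      uint8_t  type;
      uint8_t  flags;
      uint16_t length;                  /* big-endian on the wire */
      struct {
          uint8_t called_name[34];      /* len byte + name + scope */
          uint8_t calling_name[34];
      } trailer;
  };

  int main(void)
  {
      size_t len = sizeof(struct session_packet) -
                   offsetof(struct session_packet, trailer);

      printf("payload length = %zu (0x%zx)\n", len, len);  /* 68, 0x44 */
      return 0;
  }

Deriving the length from the layout keeps the two in sync if the
structure ever changes, which the hand-written 0x44 did not.
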
+diff --git a/fs/cifs/dir.c b/fs/cifs/dir.c
+index ad4208bf1e321..1bf61778f44c6 100644
+--- a/fs/cifs/dir.c
++++ b/fs/cifs/dir.c
+@@ -304,15 +304,16 @@ static int cifs_do_create(struct inode *inode, struct dentry *direntry, unsigned
+ if (!tcon->unix_ext && (mode & S_IWUGO) == 0)
+ create_options |= CREATE_OPTION_READONLY;
+
+- oparms.tcon = tcon;
+- oparms.cifs_sb = cifs_sb;
+- oparms.desired_access = desired_access;
+- oparms.create_options = cifs_create_options(cifs_sb, create_options);
+- oparms.disposition = disposition;
+- oparms.path = full_path;
+- oparms.fid = fid;
+- oparms.reconnect = false;
+- oparms.mode = mode;
++ oparms = (struct cifs_open_parms) {
++ .tcon = tcon,
++ .cifs_sb = cifs_sb,
++ .desired_access = desired_access,
++ .create_options = cifs_create_options(cifs_sb, create_options),
++ .disposition = disposition,
++ .path = full_path,
++ .fid = fid,
++ .mode = mode,
++ };
+ rc = server->ops->open(xid, &oparms, oplock, buf);
+ if (rc) {
+ cifs_dbg(FYI, "cifs_create returned 0x%x\n", rc);
+diff --git a/fs/cifs/file.c b/fs/cifs/file.c
+index b8d1cbadb6897..a53ddc81b698c 100644
+--- a/fs/cifs/file.c
++++ b/fs/cifs/file.c
+@@ -260,14 +260,15 @@ static int cifs_nt_open(const char *full_path, struct inode *inode, struct cifs_
+ if (f_flags & O_DIRECT)
+ create_options |= CREATE_NO_BUFFER;
+
+- oparms.tcon = tcon;
+- oparms.cifs_sb = cifs_sb;
+- oparms.desired_access = desired_access;
+- oparms.create_options = cifs_create_options(cifs_sb, create_options);
+- oparms.disposition = disposition;
+- oparms.path = full_path;
+- oparms.fid = fid;
+- oparms.reconnect = false;
++ oparms = (struct cifs_open_parms) {
++ .tcon = tcon,
++ .cifs_sb = cifs_sb,
++ .desired_access = desired_access,
++ .create_options = cifs_create_options(cifs_sb, create_options),
++ .disposition = disposition,
++ .path = full_path,
++ .fid = fid,
++ };
+
+ rc = server->ops->open(xid, &oparms, oplock, buf);
+ if (rc)
+@@ -848,14 +849,16 @@ cifs_reopen_file(struct cifsFileInfo *cfile, bool can_flush)
+ if (server->ops->get_lease_key)
+ server->ops->get_lease_key(inode, &cfile->fid);
+
+- oparms.tcon = tcon;
+- oparms.cifs_sb = cifs_sb;
+- oparms.desired_access = desired_access;
+- oparms.create_options = cifs_create_options(cifs_sb, create_options);
+- oparms.disposition = disposition;
+- oparms.path = full_path;
+- oparms.fid = &cfile->fid;
+- oparms.reconnect = true;
++ oparms = (struct cifs_open_parms) {
++ .tcon = tcon,
++ .cifs_sb = cifs_sb,
++ .desired_access = desired_access,
++ .create_options = cifs_create_options(cifs_sb, create_options),
++ .disposition = disposition,
++ .path = full_path,
++ .fid = &cfile->fid,
++ .reconnect = true,
++ };
+
+ /*
+ * Can not refresh inode by passing in file_info buf to be returned by
+diff --git a/fs/cifs/inode.c b/fs/cifs/inode.c
+index f145a59af89be..7d0cc39d2921f 100644
+--- a/fs/cifs/inode.c
++++ b/fs/cifs/inode.c
+@@ -508,14 +508,15 @@ cifs_sfu_type(struct cifs_fattr *fattr, const char *path,
+ return PTR_ERR(tlink);
+ tcon = tlink_tcon(tlink);
+
+- oparms.tcon = tcon;
+- oparms.cifs_sb = cifs_sb;
+- oparms.desired_access = GENERIC_READ;
+- oparms.create_options = cifs_create_options(cifs_sb, CREATE_NOT_DIR);
+- oparms.disposition = FILE_OPEN;
+- oparms.path = path;
+- oparms.fid = &fid;
+- oparms.reconnect = false;
++ oparms = (struct cifs_open_parms) {
++ .tcon = tcon,
++ .cifs_sb = cifs_sb,
++ .desired_access = GENERIC_READ,
++ .create_options = cifs_create_options(cifs_sb, CREATE_NOT_DIR),
++ .disposition = FILE_OPEN,
++ .path = path,
++ .fid = &fid,
++ };
+
+ if (tcon->ses->server->oplocks)
+ oplock = REQ_OPLOCK;
+@@ -1518,14 +1519,15 @@ cifs_rename_pending_delete(const char *full_path, struct dentry *dentry,
+ goto out;
+ }
+
+- oparms.tcon = tcon;
+- oparms.cifs_sb = cifs_sb;
+- oparms.desired_access = DELETE | FILE_WRITE_ATTRIBUTES;
+- oparms.create_options = cifs_create_options(cifs_sb, CREATE_NOT_DIR);
+- oparms.disposition = FILE_OPEN;
+- oparms.path = full_path;
+- oparms.fid = &fid;
+- oparms.reconnect = false;
++ oparms = (struct cifs_open_parms) {
++ .tcon = tcon,
++ .cifs_sb = cifs_sb,
++ .desired_access = DELETE | FILE_WRITE_ATTRIBUTES,
++ .create_options = cifs_create_options(cifs_sb, CREATE_NOT_DIR),
++ .disposition = FILE_OPEN,
++ .path = full_path,
++ .fid = &fid,
++ };
+
+ rc = CIFS_open(xid, &oparms, &oplock, NULL);
+ if (rc != 0)
+@@ -2112,15 +2114,16 @@ cifs_do_rename(const unsigned int xid, struct dentry *from_dentry,
+ if (to_dentry->d_parent != from_dentry->d_parent)
+ goto do_rename_exit;
+
+- oparms.tcon = tcon;
+- oparms.cifs_sb = cifs_sb;
+- /* open the file to be renamed -- we need DELETE perms */
+- oparms.desired_access = DELETE;
+- oparms.create_options = cifs_create_options(cifs_sb, CREATE_NOT_DIR);
+- oparms.disposition = FILE_OPEN;
+- oparms.path = from_path;
+- oparms.fid = &fid;
+- oparms.reconnect = false;
++ oparms = (struct cifs_open_parms) {
++ .tcon = tcon,
++ .cifs_sb = cifs_sb,
++ /* open the file to be renamed -- we need DELETE perms */
++ .desired_access = DELETE,
++ .create_options = cifs_create_options(cifs_sb, CREATE_NOT_DIR),
++ .disposition = FILE_OPEN,
++ .path = from_path,
++ .fid = &fid,
++ };
+
+ rc = CIFS_open(xid, &oparms, &oplock, NULL);
+ if (rc == 0) {
+diff --git a/fs/cifs/link.c b/fs/cifs/link.c
+index a5a097a699837..d937eedd74fb6 100644
+--- a/fs/cifs/link.c
++++ b/fs/cifs/link.c
+@@ -271,14 +271,15 @@ cifs_query_mf_symlink(unsigned int xid, struct cifs_tcon *tcon,
+ int buf_type = CIFS_NO_BUFFER;
+ FILE_ALL_INFO file_info;
+
+- oparms.tcon = tcon;
+- oparms.cifs_sb = cifs_sb;
+- oparms.desired_access = GENERIC_READ;
+- oparms.create_options = cifs_create_options(cifs_sb, CREATE_NOT_DIR);
+- oparms.disposition = FILE_OPEN;
+- oparms.path = path;
+- oparms.fid = &fid;
+- oparms.reconnect = false;
++ oparms = (struct cifs_open_parms) {
++ .tcon = tcon,
++ .cifs_sb = cifs_sb,
++ .desired_access = GENERIC_READ,
++ .create_options = cifs_create_options(cifs_sb, CREATE_NOT_DIR),
++ .disposition = FILE_OPEN,
++ .path = path,
++ .fid = &fid,
++ };
+
+ rc = CIFS_open(xid, &oparms, &oplock, &file_info);
+ if (rc)
+@@ -313,14 +314,15 @@ cifs_create_mf_symlink(unsigned int xid, struct cifs_tcon *tcon,
+ struct cifs_open_parms oparms;
+ struct cifs_io_parms io_parms = {0};
+
+- oparms.tcon = tcon;
+- oparms.cifs_sb = cifs_sb;
+- oparms.desired_access = GENERIC_WRITE;
+- oparms.create_options = cifs_create_options(cifs_sb, CREATE_NOT_DIR);
+- oparms.disposition = FILE_CREATE;
+- oparms.path = path;
+- oparms.fid = &fid;
+- oparms.reconnect = false;
++ oparms = (struct cifs_open_parms) {
++ .tcon = tcon,
++ .cifs_sb = cifs_sb,
++ .desired_access = GENERIC_WRITE,
++ .create_options = cifs_create_options(cifs_sb, CREATE_NOT_DIR),
++ .disposition = FILE_CREATE,
++ .path = path,
++ .fid = &fid,
++ };
+
+ rc = CIFS_open(xid, &oparms, &oplock, NULL);
+ if (rc)
+@@ -355,13 +357,14 @@ smb3_query_mf_symlink(unsigned int xid, struct cifs_tcon *tcon,
+ __u8 oplock = SMB2_OPLOCK_LEVEL_NONE;
+ struct smb2_file_all_info *pfile_info = NULL;
+
+- oparms.tcon = tcon;
+- oparms.cifs_sb = cifs_sb;
+- oparms.desired_access = GENERIC_READ;
+- oparms.create_options = cifs_create_options(cifs_sb, CREATE_NOT_DIR);
+- oparms.disposition = FILE_OPEN;
+- oparms.fid = &fid;
+- oparms.reconnect = false;
++ oparms = (struct cifs_open_parms) {
++ .tcon = tcon,
++ .cifs_sb = cifs_sb,
++ .desired_access = GENERIC_READ,
++ .create_options = cifs_create_options(cifs_sb, CREATE_NOT_DIR),
++ .disposition = FILE_OPEN,
++ .fid = &fid,
++ };
+
+ utf16_path = cifs_convert_path_to_utf16(path, cifs_sb);
+ if (utf16_path == NULL)
+@@ -421,14 +424,15 @@ smb3_create_mf_symlink(unsigned int xid, struct cifs_tcon *tcon,
+ if (!utf16_path)
+ return -ENOMEM;
+
+- oparms.tcon = tcon;
+- oparms.cifs_sb = cifs_sb;
+- oparms.desired_access = GENERIC_WRITE;
+- oparms.create_options = cifs_create_options(cifs_sb, CREATE_NOT_DIR);
+- oparms.disposition = FILE_CREATE;
+- oparms.fid = &fid;
+- oparms.reconnect = false;
+- oparms.mode = 0644;
++ oparms = (struct cifs_open_parms) {
++ .tcon = tcon,
++ .cifs_sb = cifs_sb,
++ .desired_access = GENERIC_WRITE,
++ .create_options = cifs_create_options(cifs_sb, CREATE_NOT_DIR),
++ .disposition = FILE_CREATE,
++ .fid = &fid,
++ .mode = 0644,
++ };
+
+ rc = SMB2_open(xid, &oparms, utf16_path, &oplock, NULL, NULL,
+ NULL, NULL);
+diff --git a/fs/cifs/misc.c b/fs/cifs/misc.c
+index 2a19c7987c5bd..ae0679f0c0d25 100644
+--- a/fs/cifs/misc.c
++++ b/fs/cifs/misc.c
+@@ -21,6 +21,7 @@
+ #include "cifsfs.h"
+ #ifdef CONFIG_CIFS_DFS_UPCALL
+ #include "dns_resolve.h"
++#include "dfs_cache.h"
+ #endif
+ #include "fs_context.h"
+ #include "cached_dir.h"
+@@ -1300,4 +1301,70 @@ int cifs_update_super_prepath(struct cifs_sb_info *cifs_sb, char *prefix)
+ cifs_sb->mnt_cifs_flags |= CIFS_MOUNT_USE_PREFIX_PATH;
+ return 0;
+ }
++
++/*
++ * Handle weird Windows SMB server behaviour. It responds with a
++ * STATUS_OBJECT_NAME_INVALID code to an SMB2 QUERY_INFO request for a
++ * "\<server>\<dfsname>\<linkpath>" DFS reference, where <dfsname>
++ * contains non-ASCII unicode symbols.
++ */
++int cifs_inval_name_dfs_link_error(const unsigned int xid,
++ struct cifs_tcon *tcon,
++ struct cifs_sb_info *cifs_sb,
++ const char *full_path,
++ bool *islink)
++{
++ struct cifs_ses *ses = tcon->ses;
++ size_t len;
++ char *path;
++ char *ref_path;
++
++ *islink = false;
++
++ /*
++ * Fast path - skip check when @full_path doesn't have a prefix path to
++ * look up or tcon is not DFS.
++ */
++ if (strlen(full_path) < 2 || !cifs_sb ||
++ (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NO_DFS) ||
++ !is_tcon_dfs(tcon) || !ses->server->origin_fullpath)
++ return 0;
++
++ /*
++	 * Slow path - tcon is DFS and @full_path has a prefix path, so attempt
++	 * to get a referral to figure out whether it is a DFS link.
++ */
++ len = strnlen(tcon->tree_name, MAX_TREE_SIZE + 1) + strlen(full_path) + 1;
++ path = kmalloc(len, GFP_KERNEL);
++ if (!path)
++ return -ENOMEM;
++
++ scnprintf(path, len, "%s%s", tcon->tree_name, full_path);
++ ref_path = dfs_cache_canonical_path(path + 1, cifs_sb->local_nls,
++ cifs_remap(cifs_sb));
++ kfree(path);
++
++ if (IS_ERR(ref_path)) {
++ if (PTR_ERR(ref_path) != -EINVAL)
++ return PTR_ERR(ref_path);
++ } else {
++ struct dfs_info3_param *refs = NULL;
++ int num_refs = 0;
++
++ /*
++ * XXX: we are not using dfs_cache_find() here because we might
++	 * end up filling the DFS cache and thus potentially
++ * removing cached DFS targets that the client would eventually
++ * need during failover.
++ */
++ if (ses->server->ops->get_dfs_refer &&
++ !ses->server->ops->get_dfs_refer(xid, ses, ref_path, &refs,
++ &num_refs, cifs_sb->local_nls,
++ cifs_remap(cifs_sb)))
++ *islink = refs[0].server_type == DFS_TYPE_LINK;
++ free_dfs_info_array(refs, num_refs);
++ kfree(ref_path);
++ }
++ return 0;
++}
+ #endif
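
cifs_inval_name_dfs_link_error() leans on the kernel's error-pointer convention: dfs_cache_canonical_path() hands back either a valid pointer or a negative errno encoded into the pointer value, which callers test with IS_ERR() and decode with PTR_ERR(). A simplified userspace sketch of that convention; the helpers below are stand-ins for the kernel's, mirroring its 4095-errno window:

#include <stdio.h>

/* Simplified stand-ins for the kernel's ERR_PTR/IS_ERR/PTR_ERR. */
static inline void *ERR_PTR(long err) { return (void *)err; }
static inline long PTR_ERR(const void *p) { return (long)p; }
static inline int IS_ERR(const void *p)
{
	return (unsigned long)p >= (unsigned long)-4095L;
}

/* Pretend lookup that can fail with -EINVAL (-22). */
static void *lookup(int fail)
{
	static char value[] = "canonical path";
	return fail ? ERR_PTR(-22L) : (void *)value;
}

int main(void)
{
	void *p = lookup(1);

	if (IS_ERR(p))
		printf("error: %ld\n", PTR_ERR(p));	/* error: -22 */
	return 0;
}
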
+diff --git a/fs/cifs/smb1ops.c b/fs/cifs/smb1ops.c
+index 4cb364454e130..abda6148be10f 100644
+--- a/fs/cifs/smb1ops.c
++++ b/fs/cifs/smb1ops.c
+@@ -576,14 +576,15 @@ static int cifs_query_path_info(const unsigned int xid, struct cifs_tcon *tcon,
+ if (!(le32_to_cpu(fi.Attributes) & ATTR_REPARSE))
+ return 0;
+
+- oparms.tcon = tcon;
+- oparms.cifs_sb = cifs_sb;
+- oparms.desired_access = FILE_READ_ATTRIBUTES;
+- oparms.create_options = cifs_create_options(cifs_sb, 0);
+- oparms.disposition = FILE_OPEN;
+- oparms.path = full_path;
+- oparms.fid = &fid;
+- oparms.reconnect = false;
++ oparms = (struct cifs_open_parms) {
++ .tcon = tcon,
++ .cifs_sb = cifs_sb,
++ .desired_access = FILE_READ_ATTRIBUTES,
++ .create_options = cifs_create_options(cifs_sb, 0),
++ .disposition = FILE_OPEN,
++ .path = full_path,
++ .fid = &fid,
++ };
+
+ /* Need to check if this is a symbolic link or not */
+ tmprc = CIFS_open(xid, &oparms, &oplock, NULL);
+@@ -823,14 +824,15 @@ smb_set_file_info(struct inode *inode, const char *full_path,
+ goto out;
+ }
+
+- oparms.tcon = tcon;
+- oparms.cifs_sb = cifs_sb;
+- oparms.desired_access = SYNCHRONIZE | FILE_WRITE_ATTRIBUTES;
+- oparms.create_options = cifs_create_options(cifs_sb, CREATE_NOT_DIR);
+- oparms.disposition = FILE_OPEN;
+- oparms.path = full_path;
+- oparms.fid = &fid;
+- oparms.reconnect = false;
++ oparms = (struct cifs_open_parms) {
++ .tcon = tcon,
++ .cifs_sb = cifs_sb,
++ .desired_access = SYNCHRONIZE | FILE_WRITE_ATTRIBUTES,
++ .create_options = cifs_create_options(cifs_sb, CREATE_NOT_DIR),
++ .disposition = FILE_OPEN,
++ .path = full_path,
++ .fid = &fid,
++ };
+
+ cifs_dbg(FYI, "calling SetFileInfo since SetPathInfo for times not supported by this server\n");
+ rc = CIFS_open(xid, &oparms, &oplock, NULL);
+@@ -998,15 +1000,16 @@ cifs_query_symlink(const unsigned int xid, struct cifs_tcon *tcon,
+ goto out;
+ }
+
+- oparms.tcon = tcon;
+- oparms.cifs_sb = cifs_sb;
+- oparms.desired_access = FILE_READ_ATTRIBUTES;
+- oparms.create_options = cifs_create_options(cifs_sb,
+- OPEN_REPARSE_POINT);
+- oparms.disposition = FILE_OPEN;
+- oparms.path = full_path;
+- oparms.fid = &fid;
+- oparms.reconnect = false;
++ oparms = (struct cifs_open_parms) {
++ .tcon = tcon,
++ .cifs_sb = cifs_sb,
++ .desired_access = FILE_READ_ATTRIBUTES,
++ .create_options = cifs_create_options(cifs_sb,
++ OPEN_REPARSE_POINT),
++ .disposition = FILE_OPEN,
++ .path = full_path,
++ .fid = &fid,
++ };
+
+ rc = CIFS_open(xid, &oparms, &oplock, NULL);
+ if (rc)
+@@ -1115,15 +1118,16 @@ cifs_make_node(unsigned int xid, struct inode *inode,
+
+ cifs_dbg(FYI, "sfu compat create special file\n");
+
+- oparms.tcon = tcon;
+- oparms.cifs_sb = cifs_sb;
+- oparms.desired_access = GENERIC_WRITE;
+- oparms.create_options = cifs_create_options(cifs_sb, CREATE_NOT_DIR |
+- CREATE_OPTION_SPECIAL);
+- oparms.disposition = FILE_CREATE;
+- oparms.path = full_path;
+- oparms.fid = &fid;
+- oparms.reconnect = false;
++ oparms = (struct cifs_open_parms) {
++ .tcon = tcon,
++ .cifs_sb = cifs_sb,
++ .desired_access = GENERIC_WRITE,
++ .create_options = cifs_create_options(cifs_sb, CREATE_NOT_DIR |
++ CREATE_OPTION_SPECIAL),
++ .disposition = FILE_CREATE,
++ .path = full_path,
++ .fid = &fid,
++ };
+
+ if (tcon->ses->server->oplocks)
+ oplock = REQ_OPLOCK;
+diff --git a/fs/cifs/smb2inode.c b/fs/cifs/smb2inode.c
+index 8521adf9ce790..9b956294e8643 100644
+--- a/fs/cifs/smb2inode.c
++++ b/fs/cifs/smb2inode.c
+@@ -105,14 +105,15 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ goto finished;
+ }
+
+- vars->oparms.tcon = tcon;
+- vars->oparms.desired_access = desired_access;
+- vars->oparms.disposition = create_disposition;
+- vars->oparms.create_options = cifs_create_options(cifs_sb, create_options);
+- vars->oparms.fid = &fid;
+- vars->oparms.reconnect = false;
+- vars->oparms.mode = mode;
+- vars->oparms.cifs_sb = cifs_sb;
++ vars->oparms = (struct cifs_open_parms) {
++ .tcon = tcon,
++ .desired_access = desired_access,
++ .disposition = create_disposition,
++ .create_options = cifs_create_options(cifs_sb, create_options),
++ .fid = &fid,
++ .mode = mode,
++ .cifs_sb = cifs_sb,
++ };
+
+ rqst[num_rqst].rq_iov = &vars->open_iov[0];
+ rqst[num_rqst].rq_nvec = SMB2_CREATE_IOV_SIZE;
+@@ -526,12 +527,13 @@ int smb2_query_path_info(const unsigned int xid, struct cifs_tcon *tcon,
+ struct cifs_sb_info *cifs_sb, const char *full_path,
+ struct cifs_open_info_data *data, bool *adjust_tz, bool *reparse)
+ {
+- int rc;
+ __u32 create_options = 0;
+ struct cifsFileInfo *cfile;
+ struct cached_fid *cfid = NULL;
+ struct kvec err_iov[3] = {};
+ int err_buftype[3] = {};
++ bool islink;
++ int rc, rc2;
+
+ *adjust_tz = false;
+ *reparse = false;
+@@ -579,15 +581,15 @@ int smb2_query_path_info(const unsigned int xid, struct cifs_tcon *tcon,
+ SMB2_OP_QUERY_INFO, cfile, NULL, NULL,
+ NULL, NULL);
+ goto out;
+- } else if (rc != -EREMOTE && IS_ENABLED(CONFIG_CIFS_DFS_UPCALL) &&
+- hdr->Status == STATUS_OBJECT_NAME_INVALID) {
+- /*
+- * Handle weird Windows SMB server behaviour. It responds with
+- * STATUS_OBJECT_NAME_INVALID code to SMB2 QUERY_INFO request
+- * for "\<server>\<dfsname>\<linkpath>" DFS reference,
+- * where <dfsname> contains non-ASCII unicode symbols.
+- */
+- rc = -EREMOTE;
++ } else if (rc != -EREMOTE && hdr->Status == STATUS_OBJECT_NAME_INVALID) {
++ rc2 = cifs_inval_name_dfs_link_error(xid, tcon, cifs_sb,
++ full_path, &islink);
++ if (rc2) {
++ rc = rc2;
++ goto out;
++ }
++ if (islink)
++ rc = -EREMOTE;
+ }
+ if (rc == -EREMOTE && IS_ENABLED(CONFIG_CIFS_DFS_UPCALL) && cifs_sb &&
+ (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NO_DFS))
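
A small pattern worth noting in the hunk above: the primary status stays in rc while the helper reports through rc2, so a helper failure replaces rc but a successful helper only upgrades rc via its out-parameter. A minimal sketch of that shape, with a hypothetical helper standing in for cifs_inval_name_dfs_link_error():

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical helper: fills an out-parameter and returns 0 or a
 * negative errno of its own. */
static int check_is_link(bool *islink)
{
	*islink = true;
	return 0;
}

int main(void)
{
	int rc = -5;	/* primary status from the earlier call */
	int rc2;
	bool islink;

	rc2 = check_is_link(&islink);
	if (rc2)
		rc = rc2;	/* the helper's own failure takes over */
	else if (islink)
		rc = -66;	/* e.g. -EREMOTE on Linux */
	printf("rc=%d\n", rc);	/* rc=-66 */
	return 0;
}
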
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index e6bcd2baf446a..c7f8dba5a855a 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -729,12 +729,13 @@ smb3_qfs_tcon(const unsigned int xid, struct cifs_tcon *tcon,
+ struct cifs_fid fid;
+ struct cached_fid *cfid = NULL;
+
+- oparms.tcon = tcon;
+- oparms.desired_access = FILE_READ_ATTRIBUTES;
+- oparms.disposition = FILE_OPEN;
+- oparms.create_options = cifs_create_options(cifs_sb, 0);
+- oparms.fid = &fid;
+- oparms.reconnect = false;
++ oparms = (struct cifs_open_parms) {
++ .tcon = tcon,
++ .desired_access = FILE_READ_ATTRIBUTES,
++ .disposition = FILE_OPEN,
++ .create_options = cifs_create_options(cifs_sb, 0),
++ .fid = &fid,
++ };
+
+ rc = open_cached_dir(xid, tcon, "", cifs_sb, false, &cfid);
+ if (rc == 0)
+@@ -771,12 +772,13 @@ smb2_qfs_tcon(const unsigned int xid, struct cifs_tcon *tcon,
+ struct cifs_open_parms oparms;
+ struct cifs_fid fid;
+
+- oparms.tcon = tcon;
+- oparms.desired_access = FILE_READ_ATTRIBUTES;
+- oparms.disposition = FILE_OPEN;
+- oparms.create_options = cifs_create_options(cifs_sb, 0);
+- oparms.fid = &fid;
+- oparms.reconnect = false;
++ oparms = (struct cifs_open_parms) {
++ .tcon = tcon,
++ .desired_access = FILE_READ_ATTRIBUTES,
++ .disposition = FILE_OPEN,
++ .create_options = cifs_create_options(cifs_sb, 0),
++ .fid = &fid,
++ };
+
+ rc = SMB2_open(xid, &oparms, &srch_path, &oplock, NULL, NULL,
+ NULL, NULL);
+@@ -794,7 +796,6 @@ static int
+ smb2_is_path_accessible(const unsigned int xid, struct cifs_tcon *tcon,
+ struct cifs_sb_info *cifs_sb, const char *full_path)
+ {
+- int rc;
+ __le16 *utf16_path;
+ __u8 oplock = SMB2_OPLOCK_LEVEL_NONE;
+ int err_buftype = CIFS_NO_BUFFER;
+@@ -802,6 +803,8 @@ smb2_is_path_accessible(const unsigned int xid, struct cifs_tcon *tcon,
+ struct kvec err_iov = {};
+ struct cifs_fid fid;
+ struct cached_fid *cfid;
++ bool islink;
++ int rc, rc2;
+
+ rc = open_cached_dir(xid, tcon, full_path, cifs_sb, true, &cfid);
+ if (!rc) {
+@@ -816,12 +819,13 @@ smb2_is_path_accessible(const unsigned int xid, struct cifs_tcon *tcon,
+ if (!utf16_path)
+ return -ENOMEM;
+
+- oparms.tcon = tcon;
+- oparms.desired_access = FILE_READ_ATTRIBUTES;
+- oparms.disposition = FILE_OPEN;
+- oparms.create_options = cifs_create_options(cifs_sb, 0);
+- oparms.fid = &fid;
+- oparms.reconnect = false;
++ oparms = (struct cifs_open_parms) {
++ .tcon = tcon,
++ .desired_access = FILE_READ_ATTRIBUTES,
++ .disposition = FILE_OPEN,
++ .create_options = cifs_create_options(cifs_sb, 0),
++ .fid = &fid,
++ };
+
+ rc = SMB2_open(xid, &oparms, utf16_path, &oplock, NULL, NULL,
+ &err_iov, &err_buftype);
+@@ -830,15 +834,17 @@ smb2_is_path_accessible(const unsigned int xid, struct cifs_tcon *tcon,
+
+ if (unlikely(!hdr || err_buftype == CIFS_NO_BUFFER))
+ goto out;
+- /*
+- * Handle weird Windows SMB server behaviour. It responds with
+- * STATUS_OBJECT_NAME_INVALID code to SMB2 QUERY_INFO request
+- * for "\<server>\<dfsname>\<linkpath>" DFS reference,
+- * where <dfsname> contains non-ASCII unicode symbols.
+- */
+- if (rc != -EREMOTE && IS_ENABLED(CONFIG_CIFS_DFS_UPCALL) &&
+- hdr->Status == STATUS_OBJECT_NAME_INVALID)
+- rc = -EREMOTE;
++
++ if (rc != -EREMOTE && hdr->Status == STATUS_OBJECT_NAME_INVALID) {
++ rc2 = cifs_inval_name_dfs_link_error(xid, tcon, cifs_sb,
++ full_path, &islink);
++ if (rc2) {
++ rc = rc2;
++ goto out;
++ }
++ if (islink)
++ rc = -EREMOTE;
++ }
+ if (rc == -EREMOTE && IS_ENABLED(CONFIG_CIFS_DFS_UPCALL) && cifs_sb &&
+ (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NO_DFS))
+ rc = -EOPNOTSUPP;
+@@ -1097,13 +1103,13 @@ smb2_set_ea(const unsigned int xid, struct cifs_tcon *tcon,
+ rqst[0].rq_iov = open_iov;
+ rqst[0].rq_nvec = SMB2_CREATE_IOV_SIZE;
+
+- memset(&oparms, 0, sizeof(oparms));
+- oparms.tcon = tcon;
+- oparms.desired_access = FILE_WRITE_EA;
+- oparms.disposition = FILE_OPEN;
+- oparms.create_options = cifs_create_options(cifs_sb, 0);
+- oparms.fid = &fid;
+- oparms.reconnect = false;
++ oparms = (struct cifs_open_parms) {
++ .tcon = tcon,
++ .desired_access = FILE_WRITE_EA,
++ .disposition = FILE_OPEN,
++ .create_options = cifs_create_options(cifs_sb, 0),
++ .fid = &fid,
++ };
+
+ rc = SMB2_open_init(tcon, server,
+ &rqst[0], &oplock, &oparms, utf16_path);
+@@ -1453,12 +1459,12 @@ smb2_ioctl_query_info(const unsigned int xid,
+ rqst[0].rq_iov = &vars->open_iov[0];
+ rqst[0].rq_nvec = SMB2_CREATE_IOV_SIZE;
+
+- memset(&oparms, 0, sizeof(oparms));
+- oparms.tcon = tcon;
+- oparms.disposition = FILE_OPEN;
+- oparms.create_options = cifs_create_options(cifs_sb, create_options);
+- oparms.fid = &fid;
+- oparms.reconnect = false;
++ oparms = (struct cifs_open_parms) {
++ .tcon = tcon,
++ .disposition = FILE_OPEN,
++ .create_options = cifs_create_options(cifs_sb, create_options),
++ .fid = &fid,
++ };
+
+ if (qi.flags & PASSTHRU_FSCTL) {
+ switch (qi.info_type & FSCTL_DEVICE_ACCESS_MASK) {
+@@ -2088,12 +2094,13 @@ smb3_notify(const unsigned int xid, struct file *pfile,
+ }
+
+ tcon = cifs_sb_master_tcon(cifs_sb);
+- oparms.tcon = tcon;
+- oparms.desired_access = FILE_READ_ATTRIBUTES | FILE_READ_DATA;
+- oparms.disposition = FILE_OPEN;
+- oparms.create_options = cifs_create_options(cifs_sb, 0);
+- oparms.fid = &fid;
+- oparms.reconnect = false;
++ oparms = (struct cifs_open_parms) {
++ .tcon = tcon,
++ .desired_access = FILE_READ_ATTRIBUTES | FILE_READ_DATA,
++ .disposition = FILE_OPEN,
++ .create_options = cifs_create_options(cifs_sb, 0),
++ .fid = &fid,
++ };
+
+ rc = SMB2_open(xid, &oparms, utf16_path, &oplock, NULL, NULL, NULL,
+ NULL);
+@@ -2159,12 +2166,13 @@ smb2_query_dir_first(const unsigned int xid, struct cifs_tcon *tcon,
+ rqst[0].rq_iov = open_iov;
+ rqst[0].rq_nvec = SMB2_CREATE_IOV_SIZE;
+
+- oparms.tcon = tcon;
+- oparms.desired_access = FILE_READ_ATTRIBUTES | FILE_READ_DATA;
+- oparms.disposition = FILE_OPEN;
+- oparms.create_options = cifs_create_options(cifs_sb, 0);
+- oparms.fid = fid;
+- oparms.reconnect = false;
++ oparms = (struct cifs_open_parms) {
++ .tcon = tcon,
++ .desired_access = FILE_READ_ATTRIBUTES | FILE_READ_DATA,
++ .disposition = FILE_OPEN,
++ .create_options = cifs_create_options(cifs_sb, 0),
++ .fid = fid,
++ };
+
+ rc = SMB2_open_init(tcon, server,
+ &rqst[0], &oplock, &oparms, utf16_path);
+@@ -2490,12 +2498,13 @@ smb2_query_info_compound(const unsigned int xid, struct cifs_tcon *tcon,
+ rqst[0].rq_iov = open_iov;
+ rqst[0].rq_nvec = SMB2_CREATE_IOV_SIZE;
+
+- oparms.tcon = tcon;
+- oparms.desired_access = desired_access;
+- oparms.disposition = FILE_OPEN;
+- oparms.create_options = cifs_create_options(cifs_sb, 0);
+- oparms.fid = &fid;
+- oparms.reconnect = false;
++ oparms = (struct cifs_open_parms) {
++ .tcon = tcon,
++ .desired_access = desired_access,
++ .disposition = FILE_OPEN,
++ .create_options = cifs_create_options(cifs_sb, 0),
++ .fid = &fid,
++ };
+
+ rc = SMB2_open_init(tcon, server,
+ &rqst[0], &oplock, &oparms, utf16_path);
+@@ -2623,12 +2632,13 @@ smb311_queryfs(const unsigned int xid, struct cifs_tcon *tcon,
+ if (!tcon->posix_extensions)
+ return smb2_queryfs(xid, tcon, cifs_sb, buf);
+
+- oparms.tcon = tcon;
+- oparms.desired_access = FILE_READ_ATTRIBUTES;
+- oparms.disposition = FILE_OPEN;
+- oparms.create_options = cifs_create_options(cifs_sb, 0);
+- oparms.fid = &fid;
+- oparms.reconnect = false;
++ oparms = (struct cifs_open_parms) {
++ .tcon = tcon,
++ .desired_access = FILE_READ_ATTRIBUTES,
++ .disposition = FILE_OPEN,
++ .create_options = cifs_create_options(cifs_sb, 0),
++ .fid = &fid,
++ };
+
+ rc = SMB2_open(xid, &oparms, &srch_path, &oplock, NULL, NULL,
+ NULL, NULL);
+@@ -2916,13 +2926,13 @@ smb2_query_symlink(const unsigned int xid, struct cifs_tcon *tcon,
+ rqst[0].rq_iov = open_iov;
+ rqst[0].rq_nvec = SMB2_CREATE_IOV_SIZE;
+
+- memset(&oparms, 0, sizeof(oparms));
+- oparms.tcon = tcon;
+- oparms.desired_access = FILE_READ_ATTRIBUTES;
+- oparms.disposition = FILE_OPEN;
+- oparms.create_options = cifs_create_options(cifs_sb, create_options);
+- oparms.fid = &fid;
+- oparms.reconnect = false;
++ oparms = (struct cifs_open_parms) {
++ .tcon = tcon,
++ .desired_access = FILE_READ_ATTRIBUTES,
++ .disposition = FILE_OPEN,
++ .create_options = cifs_create_options(cifs_sb, create_options),
++ .fid = &fid,
++ };
+
+ rc = SMB2_open_init(tcon, server,
+ &rqst[0], &oplock, &oparms, utf16_path);
+@@ -3056,13 +3066,13 @@ smb2_query_reparse_tag(const unsigned int xid, struct cifs_tcon *tcon,
+ rqst[0].rq_iov = open_iov;
+ rqst[0].rq_nvec = SMB2_CREATE_IOV_SIZE;
+
+- memset(&oparms, 0, sizeof(oparms));
+- oparms.tcon = tcon;
+- oparms.desired_access = FILE_READ_ATTRIBUTES;
+- oparms.disposition = FILE_OPEN;
+- oparms.create_options = cifs_create_options(cifs_sb, OPEN_REPARSE_POINT);
+- oparms.fid = &fid;
+- oparms.reconnect = false;
++ oparms = (struct cifs_open_parms) {
++ .tcon = tcon,
++ .desired_access = FILE_READ_ATTRIBUTES,
++ .disposition = FILE_OPEN,
++ .create_options = cifs_create_options(cifs_sb, OPEN_REPARSE_POINT),
++ .fid = &fid,
++ };
+
+ rc = SMB2_open_init(tcon, server,
+ &rqst[0], &oplock, &oparms, utf16_path);
+@@ -3196,17 +3206,20 @@ get_smb2_acl_by_path(struct cifs_sb_info *cifs_sb,
+ return ERR_PTR(rc);
+ }
+
+- oparms.tcon = tcon;
+- oparms.desired_access = READ_CONTROL;
+- oparms.disposition = FILE_OPEN;
+- /*
+- * When querying an ACL, even if the file is a symlink we want to open
+- * the source not the target, and so the protocol requires that the
+- * client specify this flag when opening a reparse point
+- */
+- oparms.create_options = cifs_create_options(cifs_sb, 0) | OPEN_REPARSE_POINT;
+- oparms.fid = &fid;
+- oparms.reconnect = false;
++ oparms = (struct cifs_open_parms) {
++ .tcon = tcon,
++ .desired_access = READ_CONTROL,
++ .disposition = FILE_OPEN,
++ /*
++		 * When querying an ACL, even if the file is a symlink
++		 * we want to open the source, not the target, and so
++		 * the protocol requires that the client specify this
++		 * flag when opening a reparse point.
++ */
++ .create_options = cifs_create_options(cifs_sb, 0) |
++ OPEN_REPARSE_POINT,
++ .fid = &fid,
++ };
+
+ if (info & SACL_SECINFO)
+ oparms.desired_access |= SYSTEM_SECURITY;
+@@ -3265,13 +3278,14 @@ set_smb2_acl(struct cifs_ntsd *pnntsd, __u32 acllen,
+ return rc;
+ }
+
+- oparms.tcon = tcon;
+- oparms.desired_access = access_flags;
+- oparms.create_options = cifs_create_options(cifs_sb, 0);
+- oparms.disposition = FILE_OPEN;
+- oparms.path = path;
+- oparms.fid = &fid;
+- oparms.reconnect = false;
++ oparms = (struct cifs_open_parms) {
++ .tcon = tcon,
++ .desired_access = access_flags,
++ .create_options = cifs_create_options(cifs_sb, 0),
++ .disposition = FILE_OPEN,
++ .path = path,
++ .fid = &fid,
++ };
+
+ rc = SMB2_open(xid, &oparms, utf16_path, &oplock, NULL, NULL,
+ NULL, NULL);
+@@ -5134,15 +5148,16 @@ smb2_make_node(unsigned int xid, struct inode *inode,
+
+ cifs_dbg(FYI, "sfu compat create special file\n");
+
+- oparms.tcon = tcon;
+- oparms.cifs_sb = cifs_sb;
+- oparms.desired_access = GENERIC_WRITE;
+- oparms.create_options = cifs_create_options(cifs_sb, CREATE_NOT_DIR |
+- CREATE_OPTION_SPECIAL);
+- oparms.disposition = FILE_CREATE;
+- oparms.path = full_path;
+- oparms.fid = &fid;
+- oparms.reconnect = false;
++ oparms = (struct cifs_open_parms) {
++ .tcon = tcon,
++ .cifs_sb = cifs_sb,
++ .desired_access = GENERIC_WRITE,
++ .create_options = cifs_create_options(cifs_sb, CREATE_NOT_DIR |
++ CREATE_OPTION_SPECIAL),
++ .disposition = FILE_CREATE,
++ .path = full_path,
++ .fid = &fid,
++ };
+
+ if (tcon->ses->server->oplocks)
+ oplock = REQ_OPLOCK;
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index 2c9ffa921e6f6..23926f754d2aa 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -139,6 +139,66 @@ out:
+ return;
+ }
+
++static int wait_for_server_reconnect(struct TCP_Server_Info *server,
++ __le16 smb2_command, bool retry)
++{
++ int timeout = 10;
++ int rc;
++
++ spin_lock(&server->srv_lock);
++ if (server->tcpStatus != CifsNeedReconnect) {
++ spin_unlock(&server->srv_lock);
++ return 0;
++ }
++ timeout *= server->nr_targets;
++ spin_unlock(&server->srv_lock);
++
++ /*
++	 * Return to the caller for TREE_DISCONNECT, LOGOFF and CLOSE
++	 * here since they are implicitly done when the session drops.
++ */
++ switch (smb2_command) {
++ /*
++ * BB Should we keep oplock break and add flush to exceptions?
++ */
++ case SMB2_TREE_DISCONNECT:
++ case SMB2_CANCEL:
++ case SMB2_CLOSE:
++ case SMB2_OPLOCK_BREAK:
++ return -EAGAIN;
++ }
++
++ /*
++	 * Give the demultiplex thread up to 10 seconds for each target
++	 * available for reconnect -- this should be greater than the cifs
++	 * socket timeout, which is 7 seconds.
++	 *
++	 * On "soft" mounts we wait once. Hard mounts keep retrying until the
++	 * process is killed or the server comes back on-line.
++ */
++ do {
++ rc = wait_event_interruptible_timeout(server->response_q,
++ (server->tcpStatus != CifsNeedReconnect),
++ timeout * HZ);
++ if (rc < 0) {
++			cifs_dbg(FYI, "%s: aborting reconnect due to a received signal\n",
++ __func__);
++ return -ERESTARTSYS;
++ }
++
++ /* are we still trying to reconnect? */
++ spin_lock(&server->srv_lock);
++ if (server->tcpStatus != CifsNeedReconnect) {
++ spin_unlock(&server->srv_lock);
++ return 0;
++ }
++ spin_unlock(&server->srv_lock);
++ } while (retry);
++
++ cifs_dbg(FYI, "%s: gave up waiting on reconnect\n", __func__);
++ return -EHOSTDOWN;
++}
++
+ static int
+ smb2_reconnect(__le16 smb2_command, struct cifs_tcon *tcon,
+ struct TCP_Server_Info *server)
+@@ -146,7 +206,6 @@ smb2_reconnect(__le16 smb2_command, struct cifs_tcon *tcon,
+ int rc = 0;
+ struct nls_table *nls_codepage;
+ struct cifs_ses *ses;
+- int retries;
+
+ /*
+ * SMB2s NegProt, SessSetup, Logoff do not have tcon yet so
+@@ -184,61 +243,11 @@ smb2_reconnect(__le16 smb2_command, struct cifs_tcon *tcon,
+ (!tcon->ses->server) || !server)
+ return -EIO;
+
+- ses = tcon->ses;
+- retries = server->nr_targets;
+-
+- /*
+- * Give demultiplex thread up to 10 seconds to each target available for
+- * reconnect -- should be greater than cifs socket timeout which is 7
+- * seconds.
+- */
+- while (server->tcpStatus == CifsNeedReconnect) {
+- /*
+- * Return to caller for TREE_DISCONNECT and LOGOFF and CLOSE
+- * here since they are implicitly done when session drops.
+- */
+- switch (smb2_command) {
+- /*
+- * BB Should we keep oplock break and add flush to exceptions?
+- */
+- case SMB2_TREE_DISCONNECT:
+- case SMB2_CANCEL:
+- case SMB2_CLOSE:
+- case SMB2_OPLOCK_BREAK:
+- return -EAGAIN;
+- }
+-
+- rc = wait_event_interruptible_timeout(server->response_q,
+- (server->tcpStatus != CifsNeedReconnect),
+- 10 * HZ);
+- if (rc < 0) {
+- cifs_dbg(FYI, "%s: aborting reconnect due to a received signal by the process\n",
+- __func__);
+- return -ERESTARTSYS;
+- }
+-
+- /* are we still trying to reconnect? */
+- spin_lock(&server->srv_lock);
+- if (server->tcpStatus != CifsNeedReconnect) {
+- spin_unlock(&server->srv_lock);
+- break;
+- }
+- spin_unlock(&server->srv_lock);
+-
+- if (retries && --retries)
+- continue;
++ rc = wait_for_server_reconnect(server, smb2_command, tcon->retry);
++ if (rc)
++ return rc;
+
+- /*
+- * on "soft" mounts we wait once. Hard mounts keep
+- * retrying until process is killed or server comes
+- * back on-line
+- */
+- if (!tcon->retry) {
+- cifs_dbg(FYI, "gave up waiting on reconnect in smb_init\n");
+- return -EHOSTDOWN;
+- }
+- retries = server->nr_targets;
+- }
++ ses = tcon->ses;
+
+ spin_lock(&ses->chan_lock);
+ if (!cifs_chan_needs_reconnect(ses, server) && !tcon->need_reconnect) {
+@@ -3898,7 +3907,7 @@ void smb2_reconnect_server(struct work_struct *work)
+ goto done;
+
+ /* allocate a dummy tcon struct used for reconnect */
+- tcon = kzalloc(sizeof(struct cifs_tcon), GFP_KERNEL);
++ tcon = tconInfoAlloc();
+ if (!tcon) {
+ resched = true;
+ list_for_each_entry_safe(ses, ses2, &tmp_ses_list, rlist) {
+@@ -3921,7 +3930,7 @@ void smb2_reconnect_server(struct work_struct *work)
+ list_del_init(&ses->rlist);
+ cifs_put_smb_ses(ses);
+ }
+- kfree(tcon);
++ tconInfoFree(tcon);
+
+ done:
+ cifs_dbg(FYI, "Reconnecting tcons and channels finished\n");
+@@ -4054,6 +4063,36 @@ SMB2_flush(const unsigned int xid, struct cifs_tcon *tcon, u64 persistent_fid,
+ return rc;
+ }
+
++#ifdef CONFIG_CIFS_SMB_DIRECT
++static inline bool smb3_use_rdma_offload(struct cifs_io_parms *io_parms)
++{
++ struct TCP_Server_Info *server = io_parms->server;
++ struct cifs_tcon *tcon = io_parms->tcon;
++
++ /* we can only offload if we're connected */
++ if (!server || !tcon)
++ return false;
++
++ /* we can only offload on an rdma connection */
++ if (!server->rdma || !server->smbd_conn)
++ return false;
++
++ /* we don't support signed offload yet */
++ if (server->sign)
++ return false;
++
++ /* we don't support encrypted offload yet */
++ if (smb3_encryption_required(tcon))
++ return false;
++
++ /* offload also has its overhead, so only do it if desired */
++ if (io_parms->length < server->smbd_conn->rdma_readwrite_threshold)
++ return false;
++
++ return true;
++}
++#endif /* CONFIG_CIFS_SMB_DIRECT */
++
+ /*
+ * To form a chain of read requests, any read requests after the first should
+ * have the end_of_chain boolean set to true.
+@@ -4097,9 +4136,7 @@ smb2_new_read_req(void **buf, unsigned int *total_len,
+ * If we want to do a RDMA write, fill in and append
+ * smbd_buffer_descriptor_v1 to the end of read request
+ */
+- if (server->rdma && rdata && !server->sign &&
+- rdata->bytes >= server->smbd_conn->rdma_readwrite_threshold) {
+-
++ if (smb3_use_rdma_offload(io_parms)) {
+ struct smbd_buffer_descriptor_v1 *v1;
+ bool need_invalidate = server->dialect == SMB30_PROT_ID;
+
+@@ -4495,10 +4532,27 @@ smb2_async_writev(struct cifs_writedata *wdata,
+ struct kvec iov[1];
+ struct smb_rqst rqst = { };
+ unsigned int total_len;
++ struct cifs_io_parms _io_parms;
++ struct cifs_io_parms *io_parms = NULL;
+
+ if (!wdata->server)
+ server = wdata->server = cifs_pick_channel(tcon->ses);
+
++ /*
++ * in future we may get cifs_io_parms passed in from the caller,
++	 * In the future we may get cifs_io_parms passed in from the caller,
++ */
++ _io_parms = (struct cifs_io_parms) {
++ .tcon = tcon,
++ .server = server,
++ .offset = wdata->offset,
++ .length = wdata->bytes,
++ .persistent_fid = wdata->cfile->fid.persistent_fid,
++ .volatile_fid = wdata->cfile->fid.volatile_fid,
++ .pid = wdata->pid,
++ };
++ io_parms = &_io_parms;
++
+ rc = smb2_plain_req_init(SMB2_WRITE, tcon, server,
+ (void **) &req, &total_len);
+ if (rc)
+@@ -4508,28 +4562,31 @@ smb2_async_writev(struct cifs_writedata *wdata,
+ flags |= CIFS_TRANSFORM_REQ;
+
+ shdr = (struct smb2_hdr *)req;
+- shdr->Id.SyncId.ProcessId = cpu_to_le32(wdata->cfile->pid);
++ shdr->Id.SyncId.ProcessId = cpu_to_le32(io_parms->pid);
+
+- req->PersistentFileId = wdata->cfile->fid.persistent_fid;
+- req->VolatileFileId = wdata->cfile->fid.volatile_fid;
++ req->PersistentFileId = io_parms->persistent_fid;
++ req->VolatileFileId = io_parms->volatile_fid;
+ req->WriteChannelInfoOffset = 0;
+ req->WriteChannelInfoLength = 0;
+ req->Channel = 0;
+- req->Offset = cpu_to_le64(wdata->offset);
++ req->Offset = cpu_to_le64(io_parms->offset);
+ req->DataOffset = cpu_to_le16(
+ offsetof(struct smb2_write_req, Buffer));
+ req->RemainingBytes = 0;
+
+- trace_smb3_write_enter(0 /* xid */, wdata->cfile->fid.persistent_fid,
+- tcon->tid, tcon->ses->Suid, wdata->offset, wdata->bytes);
++ trace_smb3_write_enter(0 /* xid */,
++ io_parms->persistent_fid,
++ io_parms->tcon->tid,
++ io_parms->tcon->ses->Suid,
++ io_parms->offset,
++ io_parms->length);
++
+ #ifdef CONFIG_CIFS_SMB_DIRECT
+ /*
+ * If we want to do a server RDMA read, fill in and append
+ * smbd_buffer_descriptor_v1 to the end of write request
+ */
+- if (server->rdma && !server->sign && wdata->bytes >=
+- server->smbd_conn->rdma_readwrite_threshold) {
+-
++ if (smb3_use_rdma_offload(io_parms)) {
+ struct smbd_buffer_descriptor_v1 *v1;
+ bool need_invalidate = server->dialect == SMB30_PROT_ID;
+
+@@ -4581,14 +4638,14 @@ smb2_async_writev(struct cifs_writedata *wdata,
+ }
+ #endif
+ cifs_dbg(FYI, "async write at %llu %u bytes\n",
+- wdata->offset, wdata->bytes);
++ io_parms->offset, io_parms->length);
+
+ #ifdef CONFIG_CIFS_SMB_DIRECT
+ /* For RDMA read, I/O size is in RemainingBytes not in Length */
+ if (!wdata->mr)
+- req->Length = cpu_to_le32(wdata->bytes);
++ req->Length = cpu_to_le32(io_parms->length);
+ #else
+- req->Length = cpu_to_le32(wdata->bytes);
++ req->Length = cpu_to_le32(io_parms->length);
+ #endif
+
+ if (wdata->credits.value > 0) {
+@@ -4596,7 +4653,7 @@ smb2_async_writev(struct cifs_writedata *wdata,
+ SMB2_MAX_BUFFER_SIZE));
+ shdr->CreditRequest = cpu_to_le16(le16_to_cpu(shdr->CreditCharge) + 8);
+
+- rc = adjust_credits(server, &wdata->credits, wdata->bytes);
++ rc = adjust_credits(server, &wdata->credits, io_parms->length);
+ if (rc)
+ goto async_writev_out;
+
+@@ -4609,9 +4666,12 @@ smb2_async_writev(struct cifs_writedata *wdata,
+
+ if (rc) {
+ trace_smb3_write_err(0 /* no xid */,
+- req->PersistentFileId,
+- tcon->tid, tcon->ses->Suid, wdata->offset,
+- wdata->bytes, rc);
++ io_parms->persistent_fid,
++ io_parms->tcon->tid,
++ io_parms->tcon->ses->Suid,
++ io_parms->offset,
++ io_parms->length,
++ rc);
+ kref_put(&wdata->refcount, release);
+ cifs_stats_fail_inc(tcon, SMB2_WRITE_HE);
+ }
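
wait_for_server_reconnect() above is the factored-out wait loop: soft mounts (retry == false) pass through the loop body once, while hard mounts keep looping until the reconnect condition clears or a signal arrives. A compressed userspace sketch of that control flow; wait_once() is a hypothetical stand-in for wait_event_interruptible_timeout():

#include <stdbool.h>
#include <stdio.h>

static bool need_reconnect = true;
static int attempts;

/* Stand-in for wait_event_interruptible_timeout(): returns negative
 * on a signal, otherwise lets the condition be re-checked. Here the
 * "server" comes back on the third attempt. */
static int wait_once(void)
{
	if (++attempts >= 3)
		need_reconnect = false;
	return 0;
}

static int wait_for_reconnect(bool retry)
{
	do {
		if (wait_once() < 0)
			return -1;	/* interrupted: -ERESTARTSYS */
		if (!need_reconnect)
			return 0;	/* reconnected */
	} while (retry);
	return -2;			/* gave up: -EHOSTDOWN on soft mounts */
}

int main(void)
{
	int rc = wait_for_reconnect(true);	/* hard mount */

	printf("rc=%d after %d attempts\n", rc, attempts);
	return 0;
}
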
+diff --git a/fs/cifs/smbdirect.c b/fs/cifs/smbdirect.c
+index 8c816b25ce7c6..cf923f211c512 100644
+--- a/fs/cifs/smbdirect.c
++++ b/fs/cifs/smbdirect.c
+@@ -1700,6 +1700,7 @@ static struct smbd_connection *_smbd_get_connection(
+
+ allocate_mr_failed:
+ /* At this point, need to a full transport shutdown */
++ server->smbd_conn = info;
+ smbd_destroy(server);
+ return NULL;
+
+@@ -2217,6 +2218,7 @@ static int allocate_mr_list(struct smbd_connection *info)
+ atomic_set(&info->mr_ready_count, 0);
+ atomic_set(&info->mr_used_count, 0);
+ init_waitqueue_head(&info->wait_for_mr_cleanup);
++ INIT_WORK(&info->mr_recovery_work, smbd_mr_recovery_work);
+ /* Allocate more MRs (2x) than hardware responder_resources */
+ for (i = 0; i < info->responder_resources * 2; i++) {
+ smbdirect_mr = kzalloc(sizeof(*smbdirect_mr), GFP_KERNEL);
+@@ -2244,13 +2246,13 @@ static int allocate_mr_list(struct smbd_connection *info)
+ list_add_tail(&smbdirect_mr->list, &info->mr_list);
+ atomic_inc(&info->mr_ready_count);
+ }
+- INIT_WORK(&info->mr_recovery_work, smbd_mr_recovery_work);
+ return 0;
+
+ out:
+ kfree(smbdirect_mr);
+
+ list_for_each_entry_safe(smbdirect_mr, tmp, &info->mr_list, list) {
++ list_del(&smbdirect_mr->list);
+ ib_dereg_mr(smbdirect_mr->mr);
+ kfree(smbdirect_mr->sgl);
+ kfree(smbdirect_mr);
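
The allocate_mr_list() error path now does list_del() before freeing each entry, so the list head never keeps pointing at freed memory. The same unlink-then-free discipline in a self-contained sketch, using a hand-rolled singly linked list rather than the kernel's list_head API:

#include <stdlib.h>

struct node {
	struct node *next;
};

static void teardown(struct node **head)
{
	while (*head) {
		struct node *n = *head;

		*head = n->next;	/* unlink first ... */
		free(n);		/* ... then free */
	}
}

int main(void)
{
	struct node *head = NULL;

	for (int i = 0; i < 3; i++) {
		struct node *n = malloc(sizeof(*n));

		if (!n)
			break;
		n->next = head;
		head = n;
	}
	teardown(&head);	/* head is NULL afterwards */
	return 0;
}
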
+diff --git a/fs/coda/upcall.c b/fs/coda/upcall.c
+index 59f6cfd06f96a..cd6a3721f6f69 100644
+--- a/fs/coda/upcall.c
++++ b/fs/coda/upcall.c
+@@ -791,7 +791,7 @@ static int coda_upcall(struct venus_comm *vcp,
+ sig_req = kmalloc(sizeof(struct upc_req), GFP_KERNEL);
+ if (!sig_req) goto exit;
+
+- sig_inputArgs = kvzalloc(sizeof(struct coda_in_hdr), GFP_KERNEL);
++ sig_inputArgs = kvzalloc(sizeof(*sig_inputArgs), GFP_KERNEL);
+ if (!sig_inputArgs) {
+ kfree(sig_req);
+ goto exit;
+diff --git a/fs/cramfs/inode.c b/fs/cramfs/inode.c
+index 61ccf7722fc3c..6dae27d6f553f 100644
+--- a/fs/cramfs/inode.c
++++ b/fs/cramfs/inode.c
+@@ -183,7 +183,7 @@ static void *cramfs_blkdev_read(struct super_block *sb, unsigned int offset,
+ unsigned int len)
+ {
+ struct address_space *mapping = sb->s_bdev->bd_inode->i_mapping;
+- struct file_ra_state ra;
++ struct file_ra_state ra = {};
+ struct page *pages[BLKS_PER_BUF];
+ unsigned i, blocknr, buffer;
+ unsigned long devsize;
+diff --git a/fs/dlm/lockspace.c b/fs/dlm/lockspace.c
+index d0b4e2181a5f3..99bc96f907799 100644
+--- a/fs/dlm/lockspace.c
++++ b/fs/dlm/lockspace.c
+@@ -381,23 +381,23 @@ static int threads_start(void)
+ {
+ int error;
+
+- error = dlm_scand_start();
++	/* Thread for sending/receiving messages for all lockspaces */
++ error = dlm_midcomms_start();
+ if (error) {
+- log_print("cannot start dlm_scand thread %d", error);
++ log_print("cannot start dlm midcomms %d", error);
+ goto fail;
+ }
+
+- /* Thread for sending/receiving messages for all lockspace's */
+- error = dlm_midcomms_start();
++ error = dlm_scand_start();
+ if (error) {
+- log_print("cannot start dlm midcomms %d", error);
+- goto scand_fail;
++ log_print("cannot start dlm_scand thread %d", error);
++ goto midcomms_fail;
+ }
+
+ return 0;
+
+- scand_fail:
+- dlm_scand_stop();
++ midcomms_fail:
++ dlm_midcomms_stop();
+ fail:
+ return error;
+ }
+diff --git a/fs/dlm/memory.c b/fs/dlm/memory.c
+index eb7a08641fcf5..cdbaa452fc05a 100644
+--- a/fs/dlm/memory.c
++++ b/fs/dlm/memory.c
+@@ -51,7 +51,7 @@ int __init dlm_memory_init(void)
+ cb_cache = kmem_cache_create("dlm_cb", sizeof(struct dlm_callback),
+ __alignof__(struct dlm_callback), 0,
+ NULL);
+- if (!rsb_cache)
++ if (!cb_cache)
+ goto cb;
+
+ return 0;
+diff --git a/fs/dlm/midcomms.c b/fs/dlm/midcomms.c
+index fc015a6abe178..ecfb3beb0bb88 100644
+--- a/fs/dlm/midcomms.c
++++ b/fs/dlm/midcomms.c
+@@ -375,7 +375,7 @@ static int dlm_send_ack(int nodeid, uint32_t seq)
+ struct dlm_msg *msg;
+ char *ppc;
+
+- msg = dlm_lowcomms_new_msg(nodeid, mb_len, GFP_NOFS, &ppc,
++ msg = dlm_lowcomms_new_msg(nodeid, mb_len, GFP_ATOMIC, &ppc,
+ NULL, NULL);
+ if (!msg)
+ return -ENOMEM;
+@@ -402,10 +402,11 @@ static int dlm_send_fin(struct midcomms_node *node,
+ struct dlm_mhandle *mh;
+ char *ppc;
+
+- mh = dlm_midcomms_get_mhandle(node->nodeid, mb_len, GFP_NOFS, &ppc);
++ mh = dlm_midcomms_get_mhandle(node->nodeid, mb_len, GFP_ATOMIC, &ppc);
+ if (!mh)
+ return -ENOMEM;
+
++ set_bit(DLM_NODE_FLAG_STOP_TX, &node->flags);
+ mh->ack_rcv = ack_rcv;
+
+ m_header = (struct dlm_header *)ppc;
+@@ -417,7 +418,6 @@ static int dlm_send_fin(struct midcomms_node *node,
+
+ pr_debug("sending fin msg to node %d\n", node->nodeid);
+ dlm_midcomms_commit_mhandle(mh, NULL, 0);
+- set_bit(DLM_NODE_FLAG_STOP_TX, &node->flags);
+
+ return 0;
+ }
+@@ -498,15 +498,14 @@ static void dlm_midcomms_receive_buffer(union dlm_packet *p,
+
+ switch (p->header.h_cmd) {
+ case DLM_FIN:
+- /* send ack before fin */
+- dlm_send_ack(node->nodeid, node->seq_next);
+-
+ spin_lock(&node->state_lock);
+ pr_debug("receive fin msg from node %d with state %s\n",
+ node->nodeid, dlm_state_str(node->state));
+
+ switch (node->state) {
+ case DLM_ESTABLISHED:
++ dlm_send_ack(node->nodeid, node->seq_next);
++
+ node->state = DLM_CLOSE_WAIT;
+ pr_debug("switch node %d to state %s\n",
+ node->nodeid, dlm_state_str(node->state));
+@@ -518,16 +517,19 @@ static void dlm_midcomms_receive_buffer(union dlm_packet *p,
+ node->state = DLM_LAST_ACK;
+ pr_debug("switch node %d to state %s case 1\n",
+ node->nodeid, dlm_state_str(node->state));
+- spin_unlock(&node->state_lock);
+- goto send_fin;
++ set_bit(DLM_NODE_FLAG_STOP_RX, &node->flags);
++ dlm_send_fin(node, dlm_pas_fin_ack_rcv);
+ }
+ break;
+ case DLM_FIN_WAIT1:
++ dlm_send_ack(node->nodeid, node->seq_next);
+ node->state = DLM_CLOSING;
++ set_bit(DLM_NODE_FLAG_STOP_RX, &node->flags);
+ pr_debug("switch node %d to state %s\n",
+ node->nodeid, dlm_state_str(node->state));
+ break;
+ case DLM_FIN_WAIT2:
++ dlm_send_ack(node->nodeid, node->seq_next);
+ midcomms_node_reset(node);
+ pr_debug("switch node %d to state %s\n",
+ node->nodeid, dlm_state_str(node->state));
+@@ -544,8 +546,6 @@ static void dlm_midcomms_receive_buffer(union dlm_packet *p,
+ return;
+ }
+ spin_unlock(&node->state_lock);
+-
+- set_bit(DLM_NODE_FLAG_STOP_RX, &node->flags);
+ break;
+ default:
+ WARN_ON_ONCE(test_bit(DLM_NODE_FLAG_STOP_RX, &node->flags));
+@@ -564,12 +564,6 @@ static void dlm_midcomms_receive_buffer(union dlm_packet *p,
+ log_print_ratelimited("ignore dlm msg because seq mismatch, seq: %u, expected: %u, nodeid: %d",
+ seq, node->seq_next, node->nodeid);
+ }
+-
+- return;
+-
+-send_fin:
+- set_bit(DLM_NODE_FLAG_STOP_RX, &node->flags);
+- dlm_send_fin(node, dlm_pas_fin_ack_rcv);
+ }
+
+ static struct midcomms_node *
+@@ -1214,8 +1208,15 @@ void dlm_midcomms_commit_mhandle(struct dlm_mhandle *mh,
+ dlm_free_mhandle(mh);
+ break;
+ case DLM_VERSION_3_2:
++		/* Hold the rcu read lock here because we are sending the
++		 * dlm message out; while doing so we could receive
++		 * an ack back that releases the mhandle, causing a
++		 * use-after-free.
++ */
++ rcu_read_lock();
+ dlm_midcomms_commit_msg_3_2(mh, name, namelen);
+ srcu_read_unlock(&nodes_srcu, mh->idx);
++ rcu_read_unlock();
+ break;
+ default:
+ srcu_read_unlock(&nodes_srcu, mh->idx);
+@@ -1362,11 +1363,11 @@ void dlm_midcomms_remove_member(int nodeid)
+ case DLM_CLOSE_WAIT:
+ /* passive shutdown DLM_LAST_ACK case 2 */
+ node->state = DLM_LAST_ACK;
+- spin_unlock(&node->state_lock);
+-
+ pr_debug("switch node %d to state %s case 2\n",
+ node->nodeid, dlm_state_str(node->state));
+- goto send_fin;
++ set_bit(DLM_NODE_FLAG_STOP_RX, &node->flags);
++ dlm_send_fin(node, dlm_pas_fin_ack_rcv);
++ break;
+ case DLM_LAST_ACK:
+ /* probably receive fin caught it, do nothing */
+ break;
+@@ -1382,12 +1383,6 @@ void dlm_midcomms_remove_member(int nodeid)
+ spin_unlock(&node->state_lock);
+
+ srcu_read_unlock(&nodes_srcu, idx);
+- return;
+-
+-send_fin:
+- set_bit(DLM_NODE_FLAG_STOP_RX, &node->flags);
+- dlm_send_fin(node, dlm_pas_fin_ack_rcv);
+- srcu_read_unlock(&nodes_srcu, idx);
+ }
+
+ static void midcomms_node_release(struct rcu_head *rcu)
+@@ -1395,6 +1390,7 @@ static void midcomms_node_release(struct rcu_head *rcu)
+ struct midcomms_node *node = container_of(rcu, struct midcomms_node, rcu);
+
+ WARN_ON_ONCE(atomic_read(&node->send_queue_cnt));
++ dlm_send_queue_flush(node);
+ kfree(node);
+ }
+
+@@ -1418,6 +1414,7 @@ static void midcomms_shutdown(struct midcomms_node *node)
+ node->state = DLM_FIN_WAIT1;
+ pr_debug("switch node %d to state %s case 2\n",
+ node->nodeid, dlm_state_str(node->state));
++ dlm_send_fin(node, dlm_act_fin_ack_rcv);
+ break;
+ case DLM_CLOSED:
+ /* we have what we want */
+@@ -1431,12 +1428,8 @@ static void midcomms_shutdown(struct midcomms_node *node)
+ }
+ spin_unlock(&node->state_lock);
+
+- if (node->state == DLM_FIN_WAIT1) {
+- dlm_send_fin(node, dlm_act_fin_ack_rcv);
+-
+- if (DLM_DEBUG_FENCE_TERMINATION)
+- msleep(5000);
+- }
++ if (DLM_DEBUG_FENCE_TERMINATION)
++ msleep(5000);
+
+ /* wait for other side dlm + fin */
+ ret = wait_event_timeout(node->shutdown_wait,
+diff --git a/fs/erofs/fscache.c b/fs/erofs/fscache.c
+index 014e209623762..a7923d6661301 100644
+--- a/fs/erofs/fscache.c
++++ b/fs/erofs/fscache.c
+@@ -337,8 +337,8 @@ static void erofs_fscache_domain_put(struct erofs_domain *domain)
+ kern_unmount(erofs_pseudo_mnt);
+ erofs_pseudo_mnt = NULL;
+ }
+- mutex_unlock(&erofs_domain_list_lock);
+ fscache_relinquish_volume(domain->volume, NULL, false);
++ mutex_unlock(&erofs_domain_list_lock);
+ kfree(domain->domain_id);
+ kfree(domain);
+ return;
+diff --git a/fs/exfat/dir.c b/fs/exfat/dir.c
+index 1dfa67f307f17..158427e8124e1 100644
+--- a/fs/exfat/dir.c
++++ b/fs/exfat/dir.c
+@@ -100,7 +100,7 @@ static int exfat_readdir(struct inode *inode, loff_t *cpos, struct exfat_dir_ent
+ clu.dir = ei->hint_bmap.clu;
+ }
+
+- while (clu_offset > 0) {
++ while (clu_offset > 0 && clu.dir != EXFAT_EOF_CLUSTER) {
+ if (exfat_get_next_cluster(sb, &(clu.dir)))
+ return -EIO;
+
+@@ -234,10 +234,7 @@ static int exfat_iterate(struct file *file, struct dir_context *ctx)
+ fake_offset = 1;
+ }
+
+- if (cpos & (DENTRY_SIZE - 1)) {
+- err = -ENOENT;
+- goto unlock;
+- }
++ cpos = round_up(cpos, DENTRY_SIZE);
+
+ /* name buffer should be allocated before use */
+ err = exfat_alloc_namebuf(nb);
+diff --git a/fs/exfat/exfat_fs.h b/fs/exfat/exfat_fs.h
+index bc6d21d7c5adf..25a5df0fdfe01 100644
+--- a/fs/exfat/exfat_fs.h
++++ b/fs/exfat/exfat_fs.h
+@@ -50,7 +50,7 @@ enum {
+ #define ES_IDX_LAST_FILENAME(name_len) \
+ (ES_IDX_FIRST_FILENAME + EXFAT_FILENAME_ENTRY_NUM(name_len) - 1)
+
+-#define DIR_DELETED 0xFFFF0321
++#define DIR_DELETED 0xFFFFFFF7
+
+ /* type values */
+ #define TYPE_UNUSED 0x0000
+diff --git a/fs/exfat/file.c b/fs/exfat/file.c
+index f5b29072775de..b33431c74c8af 100644
+--- a/fs/exfat/file.c
++++ b/fs/exfat/file.c
+@@ -209,8 +209,7 @@ void exfat_truncate(struct inode *inode)
+ if (err)
+ goto write_size;
+
+- inode->i_blocks = round_up(i_size_read(inode), sbi->cluster_size) >>
+- inode->i_blkbits;
++ inode->i_blocks = round_up(i_size_read(inode), sbi->cluster_size) >> 9;
+ write_size:
+ aligned_size = i_size_read(inode);
+ if (aligned_size & (blocksize - 1)) {
+diff --git a/fs/exfat/inode.c b/fs/exfat/inode.c
+index 5b644cb057fa8..481dd338f2b8e 100644
+--- a/fs/exfat/inode.c
++++ b/fs/exfat/inode.c
+@@ -220,8 +220,7 @@ static int exfat_map_cluster(struct inode *inode, unsigned int clu_offset,
+ num_clusters += num_to_be_allocated;
+ *clu = new_clu.dir;
+
+- inode->i_blocks +=
+- num_to_be_allocated << sbi->sect_per_clus_bits;
++ inode->i_blocks += EXFAT_CLU_TO_B(num_to_be_allocated, sbi) >> 9;
+
+ /*
+ * Move *clu pointer along FAT chains (hole care) because the
+@@ -576,8 +575,7 @@ static int exfat_fill_inode(struct inode *inode, struct exfat_dir_entry *info)
+
+ exfat_save_attr(inode, info->attr);
+
+- inode->i_blocks = round_up(i_size_read(inode), sbi->cluster_size) >>
+- inode->i_blkbits;
++ inode->i_blocks = round_up(i_size_read(inode), sbi->cluster_size) >> 9;
+ inode->i_mtime = info->mtime;
+ inode->i_ctime = info->mtime;
+ ei->i_crtime = info->crtime;
+diff --git a/fs/exfat/namei.c b/fs/exfat/namei.c
+index 5f995eba5dbbe..7442fead02793 100644
+--- a/fs/exfat/namei.c
++++ b/fs/exfat/namei.c
+@@ -396,7 +396,7 @@ static int exfat_find_empty_entry(struct inode *inode,
+ ei->i_size_ondisk += sbi->cluster_size;
+ ei->i_size_aligned += sbi->cluster_size;
+ ei->flags = p_dir->flags;
+- inode->i_blocks += 1 << sbi->sect_per_clus_bits;
++ inode->i_blocks += sbi->cluster_size >> 9;
+ }
+
+ return dentry;
+diff --git a/fs/exfat/super.c b/fs/exfat/super.c
+index 35f0305cd493c..8c32460e031e8 100644
+--- a/fs/exfat/super.c
++++ b/fs/exfat/super.c
+@@ -373,8 +373,7 @@ static int exfat_read_root(struct inode *inode)
+ inode->i_op = &exfat_dir_inode_operations;
+ inode->i_fop = &exfat_dir_operations;
+
+- inode->i_blocks = round_up(i_size_read(inode), sbi->cluster_size) >>
+- inode->i_blkbits;
++ inode->i_blocks = round_up(i_size_read(inode), sbi->cluster_size) >> 9;
+ ei->i_pos = ((loff_t)sbi->root_dir << 32) | 0xffffffff;
+ ei->i_size_aligned = i_size_read(inode);
+ ei->i_size_ondisk = i_size_read(inode);
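
All four exfat hunks make the same correction: inode->i_blocks is counted in 512-byte units across the VFS, so the byte count must be shifted right by 9 rather than by the filesystem-specific i_blkbits or sector-per-cluster shift. A small worked sketch of the conversion, with made-up sizes:

#include <stdio.h>

int main(void)
{
	unsigned long long i_size = 70000;	/* file size in bytes */
	unsigned long long cluster = 32768;	/* hypothetical cluster size */
	unsigned long long rounded =
		(i_size + cluster - 1) / cluster * cluster;	/* 98304 */

	/* 512-byte units, independent of the fs block size */
	unsigned long long i_blocks = rounded >> 9;

	printf("i_blocks=%llu\n", i_blocks);	/* 98304 >> 9 = 192 */
	return 0;
}
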
+diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c
+index a2f04a3808db5..0c6b011a91b3f 100644
+--- a/fs/ext4/xattr.c
++++ b/fs/ext4/xattr.c
+@@ -1438,6 +1438,13 @@ static struct inode *ext4_xattr_inode_create(handle_t *handle,
+ uid_t owner[2] = { i_uid_read(inode), i_gid_read(inode) };
+ int err;
+
++ if (inode->i_sb->s_root == NULL) {
++ ext4_warning(inode->i_sb,
++ "refuse to create EA inode when umounting");
++ WARN_ON(1);
++ return ERR_PTR(-EINVAL);
++ }
++
+ /*
+ * Let the next inode be the goal, so we try and allocate the EA inode
+ * in the same group, or nearby one.
+@@ -2567,9 +2574,8 @@ static int ext4_xattr_move_to_block(handle_t *handle, struct inode *inode,
+
+ is = kzalloc(sizeof(struct ext4_xattr_ibody_find), GFP_NOFS);
+ bs = kzalloc(sizeof(struct ext4_xattr_block_find), GFP_NOFS);
+- buffer = kvmalloc(value_size, GFP_NOFS);
+ b_entry_name = kmalloc(entry->e_name_len + 1, GFP_NOFS);
+- if (!is || !bs || !buffer || !b_entry_name) {
++ if (!is || !bs || !b_entry_name) {
+ error = -ENOMEM;
+ goto out;
+ }
+@@ -2581,12 +2587,18 @@ static int ext4_xattr_move_to_block(handle_t *handle, struct inode *inode,
+
+ /* Save the entry name and the entry value */
+ if (entry->e_value_inum) {
++ buffer = kvmalloc(value_size, GFP_NOFS);
++ if (!buffer) {
++ error = -ENOMEM;
++ goto out;
++ }
++
+ error = ext4_xattr_inode_get(inode, entry, buffer, value_size);
+ if (error)
+ goto out;
+ } else {
+ size_t value_offs = le16_to_cpu(entry->e_value_offs);
+- memcpy(buffer, (void *)IFIRST(header) + value_offs, value_size);
++ buffer = (void *)IFIRST(header) + value_offs;
+ }
+
+ memcpy(b_entry_name, entry->e_name, entry->e_name_len);
+@@ -2601,25 +2613,26 @@ static int ext4_xattr_move_to_block(handle_t *handle, struct inode *inode,
+ if (error)
+ goto out;
+
+- /* Remove the chosen entry from the inode */
+- error = ext4_xattr_ibody_set(handle, inode, &i, is);
+- if (error)
+- goto out;
+-
+ i.value = buffer;
+ i.value_len = value_size;
+ error = ext4_xattr_block_find(inode, &i, bs);
+ if (error)
+ goto out;
+
+- /* Add entry which was removed from the inode into the block */
++ /* Move ea entry from the inode into the block */
+ error = ext4_xattr_block_set(handle, inode, &i, bs);
+ if (error)
+ goto out;
+- error = 0;
++
++ /* Remove the chosen entry from the inode */
++ i.value = NULL;
++ i.value_len = 0;
++ error = ext4_xattr_ibody_set(handle, inode, &i, is);
++
+ out:
+ kfree(b_entry_name);
+- kvfree(buffer);
++ if (entry->e_value_inum && buffer)
++ kvfree(buffer);
+ if (is)
+ brelse(is->iloc.bh);
+ if (bs)
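
After this change, buffer in ext4_xattr_move_to_block() has two possible owners: it is a fresh allocation only for EA-inode values and otherwise borrows storage inside the inode body, so the cleanup frees it only in the allocated case. The borrowed-versus-owned shape in a standalone sketch (names are illustrative):

#include <stdlib.h>
#include <string.h>

static char inline_value[] = "inline";

/* Returns either borrowed storage (must not be freed) or a fresh
 * allocation (caller owns it), depending on where the value lives. */
static char *get_value(int external, size_t *len)
{
	if (external) {
		char *buf;

		*len = 6;
		buf = malloc(*len);
		if (buf)
			memcpy(buf, "remote", *len);
		return buf;		/* owned by the caller */
	}
	*len = sizeof(inline_value) - 1;
	return inline_value;		/* borrowed */
}

int main(void)
{
	size_t len;
	int external = 1;
	char *v = get_value(external, &len);

	/* ... use v ... */
	if (external && v)
		free(v);		/* free only what was allocated */
	return 0;
}
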
+diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
+index 97e816590cd95..8cca566baf3ab 100644
+--- a/fs/f2fs/data.c
++++ b/fs/f2fs/data.c
+@@ -655,6 +655,9 @@ static void __f2fs_submit_merged_write(struct f2fs_sb_info *sbi,
+
+ f2fs_down_write(&io->io_rwsem);
+
++ if (!io->bio)
++ goto unlock_out;
++
+ /* change META to META_FLUSH in the checkpoint procedure */
+ if (type >= META_FLUSH) {
+ io->fio.type = META_FLUSH;
+@@ -663,6 +666,7 @@ static void __f2fs_submit_merged_write(struct f2fs_sb_info *sbi,
+ io->bio->bi_opf |= REQ_PREFLUSH | REQ_FUA;
+ }
+ __submit_merged_bio(io);
++unlock_out:
+ f2fs_up_write(&io->io_rwsem);
+ }
+
+@@ -741,7 +745,7 @@ int f2fs_submit_page_bio(struct f2fs_io_info *fio)
+ }
+
+ if (fio->io_wbc && !is_read_io(fio->op))
+- wbc_account_cgroup_owner(fio->io_wbc, page, PAGE_SIZE);
++ wbc_account_cgroup_owner(fio->io_wbc, fio->page, PAGE_SIZE);
+
+ inc_page_count(fio->sbi, is_read_io(fio->op) ?
+ __read_io_type(page) : WB_DATA_TYPE(fio->page));
+@@ -948,7 +952,7 @@ alloc_new:
+ }
+
+ if (fio->io_wbc)
+- wbc_account_cgroup_owner(fio->io_wbc, page, PAGE_SIZE);
++ wbc_account_cgroup_owner(fio->io_wbc, fio->page, PAGE_SIZE);
+
+ inc_page_count(fio->sbi, WB_DATA_TYPE(page));
+
+@@ -1022,7 +1026,7 @@ alloc_new:
+ }
+
+ if (fio->io_wbc)
+- wbc_account_cgroup_owner(fio->io_wbc, bio_page, PAGE_SIZE);
++ wbc_account_cgroup_owner(fio->io_wbc, fio->page, PAGE_SIZE);
+
+ io->last_block_in_bio = fio->new_blkaddr;
+
+diff --git a/fs/f2fs/inline.c b/fs/f2fs/inline.c
+index 21a495234ffd7..7e867dff681dc 100644
+--- a/fs/f2fs/inline.c
++++ b/fs/f2fs/inline.c
+@@ -422,18 +422,17 @@ static int f2fs_move_inline_dirents(struct inode *dir, struct page *ipage,
+
+ dentry_blk = page_address(page);
+
++ /*
++ * Start by zeroing the full block, to ensure that all unused space is
++ * zeroed and no uninitialized memory is leaked to disk.
++ */
++ memset(dentry_blk, 0, F2FS_BLKSIZE);
++
+ make_dentry_ptr_inline(dir, &src, inline_dentry);
+ make_dentry_ptr_block(dir, &dst, dentry_blk);
+
+ /* copy data from inline dentry block to new dentry block */
+ memcpy(dst.bitmap, src.bitmap, src.nr_bitmap);
+- memset(dst.bitmap + src.nr_bitmap, 0, dst.nr_bitmap - src.nr_bitmap);
+- /*
+- * we do not need to zero out remainder part of dentry and filename
+- * field, since we have used bitmap for marking the usage status of
+- * them, besides, we can also ignore copying/zeroing reserved space
+- * of dentry block, because them haven't been used so far.
+- */
+ memcpy(dst.dentry, src.dentry, SIZE_OF_DIR_ENTRY * src.max);
+ memcpy(dst.filename, src.filename, src.max * F2FS_SLOT_LEN);
+
+diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c
+index ff6cf66ed46b2..fb489f55fef3a 100644
+--- a/fs/f2fs/inode.c
++++ b/fs/f2fs/inode.c
+@@ -714,18 +714,19 @@ void f2fs_update_inode_page(struct inode *inode)
+ {
+ struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ struct page *node_page;
++ int count = 0;
+ retry:
+ node_page = f2fs_get_node_page(sbi, inode->i_ino);
+ if (IS_ERR(node_page)) {
+ int err = PTR_ERR(node_page);
+
+- if (err == -ENOMEM) {
+- cond_resched();
++ /* The node block was truncated. */
++ if (err == -ENOENT)
++ return;
++
++ if (err == -ENOMEM || ++count <= DEFAULT_RETRY_IO_COUNT)
+ goto retry;
+- } else if (err != -ENOENT) {
+- f2fs_stop_checkpoint(sbi, false,
+- STOP_CP_REASON_UPDATE_INODE);
+- }
++ f2fs_stop_checkpoint(sbi, false, STOP_CP_REASON_UPDATE_INODE);
+ return;
+ }
+ f2fs_update_inode(inode, node_page);
+diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
+index ae3c4e5474efa..b019f63fd5403 100644
+--- a/fs/f2fs/segment.c
++++ b/fs/f2fs/segment.c
+@@ -262,19 +262,24 @@ static void __complete_revoke_list(struct inode *inode, struct list_head *head,
+ bool revoke)
+ {
+ struct revoke_entry *cur, *tmp;
++ pgoff_t start_index = 0;
+ bool truncate = is_inode_flag_set(inode, FI_ATOMIC_REPLACE);
+
+ list_for_each_entry_safe(cur, tmp, head, list) {
+- if (revoke)
++ if (revoke) {
+ __replace_atomic_write_block(inode, cur->index,
+ cur->old_addr, NULL, true);
++ } else if (truncate) {
++ f2fs_truncate_hole(inode, start_index, cur->index);
++ start_index = cur->index + 1;
++ }
+
+ list_del(&cur->list);
+ kmem_cache_free(revoke_entry_slab, cur);
+ }
+
+ if (!revoke && truncate)
+- f2fs_do_truncate_blocks(inode, 0, false);
++ f2fs_do_truncate_blocks(inode, start_index * PAGE_SIZE, false);
+ }
+
+ static int __f2fs_commit_atomic_write(struct inode *inode)
+diff --git a/fs/fuse/ioctl.c b/fs/fuse/ioctl.c
+index fcce94ace2c23..8ba1545e01f95 100644
+--- a/fs/fuse/ioctl.c
++++ b/fs/fuse/ioctl.c
+@@ -419,6 +419,12 @@ static struct fuse_file *fuse_priv_ioctl_prepare(struct inode *inode)
+ struct fuse_mount *fm = get_fuse_mount(inode);
+ bool isdir = S_ISDIR(inode->i_mode);
+
++ if (!fuse_allow_current_process(fm->fc))
++ return ERR_PTR(-EACCES);
++
++ if (fuse_is_bad(inode))
++ return ERR_PTR(-EIO);
++
+ if (!S_ISREG(inode->i_mode) && !isdir)
+ return ERR_PTR(-ENOTTY);
+
+diff --git a/fs/gfs2/aops.c b/fs/gfs2/aops.c
+index e782b4f1d1043..2f04c0ff7470b 100644
+--- a/fs/gfs2/aops.c
++++ b/fs/gfs2/aops.c
+@@ -127,7 +127,6 @@ static int __gfs2_jdata_writepage(struct page *page, struct writeback_control *w
+ {
+ struct inode *inode = page->mapping->host;
+ struct gfs2_inode *ip = GFS2_I(inode);
+- struct gfs2_sbd *sdp = GFS2_SB(inode);
+
+ if (PageChecked(page)) {
+ ClearPageChecked(page);
+@@ -135,7 +134,7 @@ static int __gfs2_jdata_writepage(struct page *page, struct writeback_control *w
+ create_empty_buffers(page, inode->i_sb->s_blocksize,
+ BIT(BH_Dirty)|BIT(BH_Uptodate));
+ }
+- gfs2_page_add_databufs(ip, page, 0, sdp->sd_vfs->s_blocksize);
++ gfs2_page_add_databufs(ip, page, 0, PAGE_SIZE);
+ }
+ return gfs2_write_jdata_page(page, wbc);
+ }
+diff --git a/fs/gfs2/super.c b/fs/gfs2/super.c
+index 999cc146d7083..a07cf31f58ec3 100644
+--- a/fs/gfs2/super.c
++++ b/fs/gfs2/super.c
+@@ -138,8 +138,10 @@ int gfs2_make_fs_rw(struct gfs2_sbd *sdp)
+ return -EIO;
+
+ error = gfs2_find_jhead(sdp->sd_jdesc, &head, false);
+- if (error || gfs2_withdrawn(sdp))
++ if (error) {
++ gfs2_consist(sdp);
+ return error;
++ }
+
+ if (!(head.lh_flags & GFS2_LOG_HEAD_UNMOUNT)) {
+ gfs2_consist(sdp);
+@@ -151,7 +153,9 @@ int gfs2_make_fs_rw(struct gfs2_sbd *sdp)
+ gfs2_log_pointers_init(sdp, head.lh_blkno);
+
+ error = gfs2_quota_init(sdp);
+- if (!error && !gfs2_withdrawn(sdp))
++ if (!error && gfs2_withdrawn(sdp))
++ error = -EIO;
++ if (!error)
+ set_bit(SDF_JOURNAL_LIVE, &sdp->sd_flags);
+ return error;
+ }
+diff --git a/fs/hfs/bnode.c b/fs/hfs/bnode.c
+index 2015e42e752a6..6add6ebfef896 100644
+--- a/fs/hfs/bnode.c
++++ b/fs/hfs/bnode.c
+@@ -274,6 +274,7 @@ static struct hfs_bnode *__hfs_bnode_create(struct hfs_btree *tree, u32 cnid)
+ tree->node_hash[hash] = node;
+ tree->node_hash_cnt++;
+ } else {
++ hfs_bnode_get(node2);
+ spin_unlock(&tree->hash_lock);
+ kfree(node);
+ wait_event(node2->lock_wq, !test_bit(HFS_BNODE_NEW, &node2->flags));
+diff --git a/fs/hfsplus/super.c b/fs/hfsplus/super.c
+index 122ed89ebf9f2..1986b4f18a901 100644
+--- a/fs/hfsplus/super.c
++++ b/fs/hfsplus/super.c
+@@ -295,11 +295,11 @@ static void hfsplus_put_super(struct super_block *sb)
+ hfsplus_sync_fs(sb, 1);
+ }
+
++ iput(sbi->alloc_file);
++ iput(sbi->hidden_dir);
+ hfs_btree_close(sbi->attr_tree);
+ hfs_btree_close(sbi->cat_tree);
+ hfs_btree_close(sbi->ext_tree);
+- iput(sbi->alloc_file);
+- iput(sbi->hidden_dir);
+ kfree(sbi->s_vhdr_buf);
+ kfree(sbi->s_backup_vhdr_buf);
+ unload_nls(sbi->nls);
+diff --git a/fs/jbd2/transaction.c b/fs/jbd2/transaction.c
+index 6a404ac1c178f..15de1385012eb 100644
+--- a/fs/jbd2/transaction.c
++++ b/fs/jbd2/transaction.c
+@@ -1010,36 +1010,28 @@ repeat:
+ * ie. locked but not dirty) or tune2fs (which may actually have
+ * the buffer dirtied, ugh.) */
+
+- if (buffer_dirty(bh)) {
++ if (buffer_dirty(bh) && jh->b_transaction) {
++ warn_dirty_buffer(bh);
+ /*
+- * First question: is this buffer already part of the current
+- * transaction or the existing committing transaction?
+- */
+- if (jh->b_transaction) {
+- J_ASSERT_JH(jh,
+- jh->b_transaction == transaction ||
+- jh->b_transaction ==
+- journal->j_committing_transaction);
+- if (jh->b_next_transaction)
+- J_ASSERT_JH(jh, jh->b_next_transaction ==
+- transaction);
+- warn_dirty_buffer(bh);
+- }
+- /*
+- * In any case we need to clean the dirty flag and we must
+- * do it under the buffer lock to be sure we don't race
+- * with running write-out.
++ * We need to clean the dirty flag and we must do it under the
++ * buffer lock to be sure we don't race with running write-out.
+ */
+ JBUFFER_TRACE(jh, "Journalling dirty buffer");
+ clear_buffer_dirty(bh);
++ /*
++		 * The buffer is going to be added to the BJ_Reserved list now and
++		 * nothing guarantees jbd2_journal_dirty_metadata() will ever be
++		 * called for it. So we need to set the jbddirty bit here to
++ * make sure the buffer is dirtied and written out when the
++ * journaling machinery is done with it.
++ */
+ set_buffer_jbddirty(bh);
+ }
+
+- unlock_buffer(bh);
+-
+ error = -EROFS;
+ if (is_handle_aborted(handle)) {
+ spin_unlock(&jh->b_state_lock);
++ unlock_buffer(bh);
+ goto out;
+ }
+ error = 0;
+@@ -1049,8 +1041,10 @@ repeat:
+ * b_next_transaction points to it
+ */
+ if (jh->b_transaction == transaction ||
+- jh->b_next_transaction == transaction)
++ jh->b_next_transaction == transaction) {
++ unlock_buffer(bh);
+ goto done;
++ }
+
+ /*
+ * this is the first time this transaction is touching this buffer,
+@@ -1074,10 +1068,24 @@ repeat:
+ */
+ smp_wmb();
+ spin_lock(&journal->j_list_lock);
++ if (test_clear_buffer_dirty(bh)) {
++ /*
++		 * Execute the buffer dirty clearing and the jh->b_transaction
++		 * assignment with journal->j_list_lock held to
++		 * prevent the bh from being removed from the checkpoint
++		 * list while the buffer is in an intermediate state (not
++		 * dirty and jh->b_transaction is NULL).
++ */
++ JBUFFER_TRACE(jh, "Journalling dirty buffer");
++ set_buffer_jbddirty(bh);
++ }
+ __jbd2_journal_file_buffer(jh, transaction, BJ_Reserved);
+ spin_unlock(&journal->j_list_lock);
++ unlock_buffer(bh);
+ goto done;
+ }
++ unlock_buffer(bh);
++
+ /*
+ * If there is already a copy-out version of this buffer, then we don't
+ * need to make another one
+diff --git a/fs/ksmbd/smb2misc.c b/fs/ksmbd/smb2misc.c
+index 6e25ace365684..fbdde426dd01d 100644
+--- a/fs/ksmbd/smb2misc.c
++++ b/fs/ksmbd/smb2misc.c
+@@ -149,15 +149,11 @@ static int smb2_get_data_area_len(unsigned int *off, unsigned int *len,
+ break;
+ case SMB2_LOCK:
+ {
+- int lock_count;
++ unsigned short lock_count;
+
+- /*
+- * smb2_lock request size is 48 included single
+- * smb2_lock_element structure size.
+- */
+- lock_count = le16_to_cpu(((struct smb2_lock_req *)hdr)->LockCount) - 1;
++ lock_count = le16_to_cpu(((struct smb2_lock_req *)hdr)->LockCount);
+ if (lock_count > 0) {
+- *off = __SMB2_HEADER_STRUCTURE_SIZE + 48;
++ *off = offsetof(struct smb2_lock_req, locks);
+ *len = sizeof(struct smb2_lock_element) * lock_count;
+ }
+ break;
+@@ -412,20 +408,19 @@ int ksmbd_smb2_check_message(struct ksmbd_work *work)
+ goto validate_credit;
+
+ /*
+- * windows client also pad up to 8 bytes when compounding.
+- * If pad is longer than eight bytes, log the server behavior
+- * (once), since may indicate a problem but allow it and
+- * continue since the frame is parseable.
++ * SMB2 NEGOTIATE request will be validated when message
++ * handling proceeds.
+ */
+- if (clc_len < len) {
+- ksmbd_debug(SMB,
+- "cli req padded more than expected. Length %d not %d for cmd:%d mid:%llu\n",
+- len, clc_len, command,
+- le64_to_cpu(hdr->MessageId));
++ if (command == SMB2_NEGOTIATE_HE)
++ goto validate_credit;
++
++ /*
++	 * Allow a message that is padded to an 8-byte boundary.
++ */
++ if (clc_len < len && (len - clc_len) < 8)
+ goto validate_credit;
+- }
+
+- ksmbd_debug(SMB,
++ pr_err_ratelimited(
+ "cli req too short, len %d not %d. cmd:%d mid:%llu\n",
+ len, clc_len, command,
+ le64_to_cpu(hdr->MessageId));
+diff --git a/fs/ksmbd/smb2pdu.c b/fs/ksmbd/smb2pdu.c
+index d681f91947d92..875eecc6b95e7 100644
+--- a/fs/ksmbd/smb2pdu.c
++++ b/fs/ksmbd/smb2pdu.c
+@@ -6644,7 +6644,7 @@ int smb2_cancel(struct ksmbd_work *work)
+ struct ksmbd_conn *conn = work->conn;
+ struct smb2_hdr *hdr = smb2_get_msg(work->request_buf);
+ struct smb2_hdr *chdr;
+- struct ksmbd_work *cancel_work = NULL, *iter;
++ struct ksmbd_work *iter;
+ struct list_head *command_list;
+
+ ksmbd_debug(SMB, "smb2 cancel called on mid %llu, async flags 0x%x\n",
+@@ -6666,7 +6666,9 @@ int smb2_cancel(struct ksmbd_work *work)
+ "smb2 with AsyncId %llu cancelled command = 0x%x\n",
+ le64_to_cpu(hdr->Id.AsyncId),
+ le16_to_cpu(chdr->Command));
+- cancel_work = iter;
++ iter->state = KSMBD_WORK_CANCELLED;
++ if (iter->cancel_fn)
++ iter->cancel_fn(iter->cancel_argv);
+ break;
+ }
+ spin_unlock(&conn->request_lock);
+@@ -6685,18 +6687,12 @@ int smb2_cancel(struct ksmbd_work *work)
+ "smb2 with mid %llu cancelled command = 0x%x\n",
+ le64_to_cpu(hdr->MessageId),
+ le16_to_cpu(chdr->Command));
+- cancel_work = iter;
++ iter->state = KSMBD_WORK_CANCELLED;
+ break;
+ }
+ spin_unlock(&conn->request_lock);
+ }
+
+- if (cancel_work) {
+- cancel_work->state = KSMBD_WORK_CANCELLED;
+- if (cancel_work->cancel_fn)
+- cancel_work->cancel_fn(cancel_work->cancel_argv);
+- }
+-
+ /* For SMB2_CANCEL command itself send no response*/
+ work->send_no_response = 1;
+ return 0;
+@@ -7061,6 +7057,14 @@ skip:
+
+ ksmbd_vfs_posix_lock_wait(flock);
+
++ spin_lock(&work->conn->request_lock);
++ spin_lock(&fp->f_lock);
++ list_del(&work->fp_entry);
++ work->cancel_fn = NULL;
++ kfree(argv);
++ spin_unlock(&fp->f_lock);
++ spin_unlock(&work->conn->request_lock);
++
+ if (work->state != KSMBD_WORK_ACTIVE) {
+ list_del(&smb_lock->llist);
+ spin_lock(&work->conn->llist_lock);
+@@ -7069,9 +7073,6 @@ skip:
+ locks_free_lock(flock);
+
+ if (work->state == KSMBD_WORK_CANCELLED) {
+- spin_lock(&fp->f_lock);
+- list_del(&work->fp_entry);
+- spin_unlock(&fp->f_lock);
+ rsp->hdr.Status =
+ STATUS_CANCELLED;
+ kfree(smb_lock);
+@@ -7093,9 +7094,6 @@ skip:
+ list_del(&smb_lock->clist);
+ spin_unlock(&work->conn->llist_lock);
+
+- spin_lock(&fp->f_lock);
+- list_del(&work->fp_entry);
+- spin_unlock(&fp->f_lock);
+ goto retry;
+ } else if (!rc) {
+ spin_lock(&work->conn->llist_lock);
+diff --git a/fs/ksmbd/vfs_cache.c b/fs/ksmbd/vfs_cache.c
+index da9163b003503..0ae5dd0829e92 100644
+--- a/fs/ksmbd/vfs_cache.c
++++ b/fs/ksmbd/vfs_cache.c
+@@ -364,12 +364,11 @@ static void __put_fd_final(struct ksmbd_work *work, struct ksmbd_file *fp)
+
+ static void set_close_state_blocked_works(struct ksmbd_file *fp)
+ {
+- struct ksmbd_work *cancel_work, *ctmp;
++ struct ksmbd_work *cancel_work;
+
+ spin_lock(&fp->f_lock);
+- list_for_each_entry_safe(cancel_work, ctmp, &fp->blocked_works,
++ list_for_each_entry(cancel_work, &fp->blocked_works,
+ fp_entry) {
+- list_del(&cancel_work->fp_entry);
+ cancel_work->state = KSMBD_WORK_CLOSED;
+ cancel_work->cancel_fn(cancel_work->cancel_argv);
+ }
+diff --git a/fs/lockd/svc.c b/fs/lockd/svc.c
+index 59ef8a1f843f3..914ea1c3537d1 100644
+--- a/fs/lockd/svc.c
++++ b/fs/lockd/svc.c
+@@ -496,7 +496,7 @@ static struct ctl_table nlm_sysctls[] = {
+ {
+ .procname = "nsm_use_hostnames",
+ .data = &nsm_use_hostnames,
+- .maxlen = sizeof(int),
++ .maxlen = sizeof(bool),
+ .mode = 0644,
+ .proc_handler = proc_dobool,
+ },
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 40d749f29ed3f..4214286e01450 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -10604,7 +10604,9 @@ static void nfs4_disable_swap(struct inode *inode)
+ /* The state manager thread will now exit once it is
+ * woken.
+ */
+- wake_up_var(&NFS_SERVER(inode)->nfs_client->cl_state);
++ struct nfs_client *clp = NFS_SERVER(inode)->nfs_client;
++
++ nfs4_schedule_state_manager(clp);
+ }
+
+ static const struct inode_operations nfs4_dir_inode_operations = {
+diff --git a/fs/nfs/nfs4trace.h b/fs/nfs/nfs4trace.h
+index 214bc56f92d2b..d27919d7241d3 100644
+--- a/fs/nfs/nfs4trace.h
++++ b/fs/nfs/nfs4trace.h
+@@ -292,32 +292,34 @@ TRACE_DEFINE_ENUM(NFS4CLNT_MOVED);
+ TRACE_DEFINE_ENUM(NFS4CLNT_LEASE_MOVED);
+ TRACE_DEFINE_ENUM(NFS4CLNT_DELEGATION_EXPIRED);
+ TRACE_DEFINE_ENUM(NFS4CLNT_RUN_MANAGER);
++TRACE_DEFINE_ENUM(NFS4CLNT_MANAGER_AVAILABLE);
+ TRACE_DEFINE_ENUM(NFS4CLNT_RECALL_RUNNING);
+ TRACE_DEFINE_ENUM(NFS4CLNT_RECALL_ANY_LAYOUT_READ);
+ TRACE_DEFINE_ENUM(NFS4CLNT_RECALL_ANY_LAYOUT_RW);
++TRACE_DEFINE_ENUM(NFS4CLNT_DELEGRETURN_DELAYED);
+
+ #define show_nfs4_clp_state(state) \
+ __print_flags(state, "|", \
+- { NFS4CLNT_MANAGER_RUNNING, "MANAGER_RUNNING" }, \
+- { NFS4CLNT_CHECK_LEASE, "CHECK_LEASE" }, \
+- { NFS4CLNT_LEASE_EXPIRED, "LEASE_EXPIRED" }, \
+- { NFS4CLNT_RECLAIM_REBOOT, "RECLAIM_REBOOT" }, \
+- { NFS4CLNT_RECLAIM_NOGRACE, "RECLAIM_NOGRACE" }, \
+- { NFS4CLNT_DELEGRETURN, "DELEGRETURN" }, \
+- { NFS4CLNT_SESSION_RESET, "SESSION_RESET" }, \
+- { NFS4CLNT_LEASE_CONFIRM, "LEASE_CONFIRM" }, \
+- { NFS4CLNT_SERVER_SCOPE_MISMATCH, \
+- "SERVER_SCOPE_MISMATCH" }, \
+- { NFS4CLNT_PURGE_STATE, "PURGE_STATE" }, \
+- { NFS4CLNT_BIND_CONN_TO_SESSION, \
+- "BIND_CONN_TO_SESSION" }, \
+- { NFS4CLNT_MOVED, "MOVED" }, \
+- { NFS4CLNT_LEASE_MOVED, "LEASE_MOVED" }, \
+- { NFS4CLNT_DELEGATION_EXPIRED, "DELEGATION_EXPIRED" }, \
+- { NFS4CLNT_RUN_MANAGER, "RUN_MANAGER" }, \
+- { NFS4CLNT_RECALL_RUNNING, "RECALL_RUNNING" }, \
+- { NFS4CLNT_RECALL_ANY_LAYOUT_READ, "RECALL_ANY_LAYOUT_READ" }, \
+- { NFS4CLNT_RECALL_ANY_LAYOUT_RW, "RECALL_ANY_LAYOUT_RW" })
++ { BIT(NFS4CLNT_MANAGER_RUNNING), "MANAGER_RUNNING" }, \
++ { BIT(NFS4CLNT_CHECK_LEASE), "CHECK_LEASE" }, \
++ { BIT(NFS4CLNT_LEASE_EXPIRED), "LEASE_EXPIRED" }, \
++ { BIT(NFS4CLNT_RECLAIM_REBOOT), "RECLAIM_REBOOT" }, \
++ { BIT(NFS4CLNT_RECLAIM_NOGRACE), "RECLAIM_NOGRACE" }, \
++ { BIT(NFS4CLNT_DELEGRETURN), "DELEGRETURN" }, \
++ { BIT(NFS4CLNT_SESSION_RESET), "SESSION_RESET" }, \
++ { BIT(NFS4CLNT_LEASE_CONFIRM), "LEASE_CONFIRM" }, \
++ { BIT(NFS4CLNT_SERVER_SCOPE_MISMATCH), "SERVER_SCOPE_MISMATCH" }, \
++ { BIT(NFS4CLNT_PURGE_STATE), "PURGE_STATE" }, \
++ { BIT(NFS4CLNT_BIND_CONN_TO_SESSION), "BIND_CONN_TO_SESSION" }, \
++ { BIT(NFS4CLNT_MOVED), "MOVED" }, \
++ { BIT(NFS4CLNT_LEASE_MOVED), "LEASE_MOVED" }, \
++ { BIT(NFS4CLNT_DELEGATION_EXPIRED), "DELEGATION_EXPIRED" }, \
++ { BIT(NFS4CLNT_RUN_MANAGER), "RUN_MANAGER" }, \
++ { BIT(NFS4CLNT_MANAGER_AVAILABLE), "MANAGER_AVAILABLE" }, \
++ { BIT(NFS4CLNT_RECALL_RUNNING), "RECALL_RUNNING" }, \
++ { BIT(NFS4CLNT_RECALL_ANY_LAYOUT_READ), "RECALL_ANY_LAYOUT_READ" }, \
++ { BIT(NFS4CLNT_RECALL_ANY_LAYOUT_RW), "RECALL_ANY_LAYOUT_RW" }, \
++ { BIT(NFS4CLNT_DELEGRETURN_DELAYED), "DELEGRETURN_DELAYED" })
+
+ TRACE_EVENT(nfs4_state_mgr,
+ TP_PROTO(
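
The nfs4trace.h rewrite above is needed because the NFS4CLNT_* constants
are bit *numbers* (used with test_bit() and friends), while __print_flags()
compares its table entries as *masks* against the state word, so every
entry has to be wrapped in BIT(). The distinction in plain C:

#include <stddef.h>
#include <stdio.h>

#define BIT(n) (1UL << (n))

enum { CLNT_RUNNING, CLNT_CHECK_LEASE };   /* bit numbers: 0, 1 */

static void print_flags(unsigned long state)
{
	static const struct { unsigned long mask; const char *name; } tbl[] = {
		{ BIT(CLNT_RUNNING),     "MANAGER_RUNNING" },  /* mask 0x1 */
		{ BIT(CLNT_CHECK_LEASE), "CHECK_LEASE" },      /* mask 0x2 */
	};
	for (size_t i = 0; i < sizeof(tbl) / sizeof(tbl[0]); i++)
		if (state & tbl[i].mask)
			printf("%s|", tbl[i].name);
	printf("\n");
}

int main(void)
{
	/* a bare bit number 1 would wrongly match mask 0x1 (MANAGER_RUNNING) */
	print_flags(BIT(CLNT_CHECK_LEASE));
	return 0;
}
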
+diff --git a/fs/nfsd/filecache.c b/fs/nfsd/filecache.c
+index c0950edb26b0d..697acf5c3c681 100644
+--- a/fs/nfsd/filecache.c
++++ b/fs/nfsd/filecache.c
+@@ -331,37 +331,27 @@ nfsd_file_alloc(struct nfsd_file_lookup_key *key, unsigned int may)
+ return nf;
+ }
+
++/**
++ * nfsd_file_check_write_error - check for writeback errors on a file
++ * @nf: nfsd_file to check for writeback errors
++ *
++ * Check whether an nfsd_file has an unseen error. Reset the write
++ * verifier if so.
++ */
+ static void
+-nfsd_file_fsync(struct nfsd_file *nf)
+-{
+- struct file *file = nf->nf_file;
+- int ret;
+-
+- if (!file || !(file->f_mode & FMODE_WRITE))
+- return;
+- ret = vfs_fsync(file, 1);
+- trace_nfsd_file_fsync(nf, ret);
+- if (ret)
+- nfsd_reset_write_verifier(net_generic(nf->nf_net, nfsd_net_id));
+-}
+-
+-static int
+ nfsd_file_check_write_error(struct nfsd_file *nf)
+ {
+ struct file *file = nf->nf_file;
+
+- if (!file || !(file->f_mode & FMODE_WRITE))
+- return 0;
+- return filemap_check_wb_err(file->f_mapping, READ_ONCE(file->f_wb_err));
++ if ((file->f_mode & FMODE_WRITE) &&
++ filemap_check_wb_err(file->f_mapping, READ_ONCE(file->f_wb_err)))
++ nfsd_reset_write_verifier(net_generic(nf->nf_net, nfsd_net_id));
+ }
+
+ static void
+ nfsd_file_hash_remove(struct nfsd_file *nf)
+ {
+ trace_nfsd_file_unhash(nf);
+-
+- if (nfsd_file_check_write_error(nf))
+- nfsd_reset_write_verifier(net_generic(nf->nf_net, nfsd_net_id));
+ rhashtable_remove_fast(&nfsd_file_rhash_tbl, &nf->nf_rhash,
+ nfsd_file_rhash_params);
+ }
+@@ -387,23 +377,12 @@ nfsd_file_free(struct nfsd_file *nf)
+ this_cpu_add(nfsd_file_total_age, age);
+
+ nfsd_file_unhash(nf);
+-
+- /*
+- * We call fsync here in order to catch writeback errors. It's not
+- * strictly required by the protocol, but an nfsd_file could get
+- * evicted from the cache before a COMMIT comes in. If another
+- * task were to open that file in the interim and scrape the error,
+- * then the client may never see it. By calling fsync here, we ensure
+- * that writeback happens before the entry is freed, and that any
+- * errors reported result in the write verifier changing.
+- */
+- nfsd_file_fsync(nf);
+-
+ if (nf->nf_mark)
+ nfsd_file_mark_put(nf->nf_mark);
+ if (nf->nf_file) {
+ get_file(nf->nf_file);
+ filp_close(nf->nf_file, NULL);
++ nfsd_file_check_write_error(nf);
+ fput(nf->nf_file);
+ }
+
+@@ -1159,6 +1138,7 @@ wait_for_construction:
+ out:
+ if (status == nfs_ok) {
+ this_cpu_inc(nfsd_file_acquisitions);
++ nfsd_file_check_write_error(nf);
+ *pnf = nf;
+ } else {
+ if (refcount_dec_and_test(&nf->nf_ref))
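
Instead of an unconditional vfs_fsync() at free time, the filecache now
leans on the kernel's errseq mechanism: filemap_check_wb_err() reports
whether the mapping saw a writeback error after the sampled f_wb_err
cookie, and the write verifier is reset only then (both at close and at
acquisition). A much-simplified userspace analogue of that "error cookie"
idea (the real errseq_t also packs a seen flag and uses atomics):

#include <stdbool.h>
#include <stdio.h>

struct mapping { unsigned err_seq; };                 /* bumped on each new error */
struct handle  { struct mapping *m; unsigned seen; }; /* per-open cursor */

/* true exactly once per error that happened after this handle last checked */
static bool check_wb_err(struct handle *h)
{
	if (h->m->err_seq == h->seen)
		return false;
	h->seen = h->m->err_seq;
	return true;
}

int main(void)
{
	struct mapping m = { 0 };
	struct handle h = { &m, 0 };
	m.err_seq++;                       /* a writeback error happens */
	printf("%d %d\n", check_wb_err(&h), check_wb_err(&h)); /* 1 0 */
	return 0;
}
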
+diff --git a/fs/nfsd/nfs4layouts.c b/fs/nfsd/nfs4layouts.c
+index 3564d1c6f6104..e8a80052cb1ba 100644
+--- a/fs/nfsd/nfs4layouts.c
++++ b/fs/nfsd/nfs4layouts.c
+@@ -323,11 +323,11 @@ nfsd4_recall_file_layout(struct nfs4_layout_stateid *ls)
+ if (ls->ls_recalled)
+ goto out_unlock;
+
+- ls->ls_recalled = true;
+- atomic_inc(&ls->ls_stid.sc_file->fi_lo_recalls);
+ if (list_empty(&ls->ls_layouts))
+ goto out_unlock;
+
++ ls->ls_recalled = true;
++ atomic_inc(&ls->ls_stid.sc_file->fi_lo_recalls);
+ trace_nfsd_layout_recall(&ls->ls_stid.sc_stateid);
+
+ refcount_inc(&ls->ls_stid.sc_count);
+diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c
+index f189ba7995f5a..e02ff76fad82c 100644
+--- a/fs/nfsd/nfs4proc.c
++++ b/fs/nfsd/nfs4proc.c
+@@ -1214,8 +1214,10 @@ out:
+ return status;
+ out_put_dst:
+ nfsd_file_put(*dst);
++ *dst = NULL;
+ out_put_src:
+ nfsd_file_put(*src);
++ *src = NULL;
+ goto out;
+ }
+
+@@ -1293,15 +1295,15 @@ extern void nfs_sb_deactive(struct super_block *sb);
+ * setup a work entry in the ssc delayed unmount list.
+ */
+ static __be32 nfsd4_ssc_setup_dul(struct nfsd_net *nn, char *ipaddr,
+- struct nfsd4_ssc_umount_item **retwork, struct vfsmount **ss_mnt)
++ struct nfsd4_ssc_umount_item **nsui)
+ {
+ struct nfsd4_ssc_umount_item *ni = NULL;
+ struct nfsd4_ssc_umount_item *work = NULL;
+ struct nfsd4_ssc_umount_item *tmp;
+ DEFINE_WAIT(wait);
++ __be32 status = 0;
+
+- *ss_mnt = NULL;
+- *retwork = NULL;
++ *nsui = NULL;
+ work = kzalloc(sizeof(*work), GFP_KERNEL);
+ try_again:
+ spin_lock(&nn->nfsd_ssc_lock);
+@@ -1325,12 +1327,12 @@ try_again:
+ finish_wait(&nn->nfsd_ssc_waitq, &wait);
+ goto try_again;
+ }
+- *ss_mnt = ni->nsui_vfsmount;
++ *nsui = ni;
+ refcount_inc(&ni->nsui_refcnt);
+ spin_unlock(&nn->nfsd_ssc_lock);
+ kfree(work);
+
+- /* return vfsmount in ss_mnt */
++ /* return vfsmount in (*nsui)->nsui_vfsmount */
+ return 0;
+ }
+ if (work) {
+@@ -1338,31 +1340,32 @@ try_again:
+ refcount_set(&work->nsui_refcnt, 2);
+ work->nsui_busy = true;
+ list_add_tail(&work->nsui_list, &nn->nfsd_ssc_mount_list);
+- *retwork = work;
+- }
++ *nsui = work;
++ } else
++ status = nfserr_resource;
+ spin_unlock(&nn->nfsd_ssc_lock);
+- return 0;
++ return status;
+ }
+
+-static void nfsd4_ssc_update_dul_work(struct nfsd_net *nn,
+- struct nfsd4_ssc_umount_item *work, struct vfsmount *ss_mnt)
++static void nfsd4_ssc_update_dul(struct nfsd_net *nn,
++ struct nfsd4_ssc_umount_item *nsui,
++ struct vfsmount *ss_mnt)
+ {
+- /* set nsui_vfsmount, clear busy flag and wakeup waiters */
+ spin_lock(&nn->nfsd_ssc_lock);
+- work->nsui_vfsmount = ss_mnt;
+- work->nsui_busy = false;
++ nsui->nsui_vfsmount = ss_mnt;
++ nsui->nsui_busy = false;
+ wake_up_all(&nn->nfsd_ssc_waitq);
+ spin_unlock(&nn->nfsd_ssc_lock);
+ }
+
+-static void nfsd4_ssc_cancel_dul_work(struct nfsd_net *nn,
+- struct nfsd4_ssc_umount_item *work)
++static void nfsd4_ssc_cancel_dul(struct nfsd_net *nn,
++ struct nfsd4_ssc_umount_item *nsui)
+ {
+ spin_lock(&nn->nfsd_ssc_lock);
+- list_del(&work->nsui_list);
++ list_del(&nsui->nsui_list);
+ wake_up_all(&nn->nfsd_ssc_waitq);
+ spin_unlock(&nn->nfsd_ssc_lock);
+- kfree(work);
++ kfree(nsui);
+ }
+
+ /*
+@@ -1370,7 +1373,7 @@ static void nfsd4_ssc_cancel_dul_work(struct nfsd_net *nn,
+ */
+ static __be32
+ nfsd4_interssc_connect(struct nl4_server *nss, struct svc_rqst *rqstp,
+- struct vfsmount **mount)
++ struct nfsd4_ssc_umount_item **nsui)
+ {
+ struct file_system_type *type;
+ struct vfsmount *ss_mnt;
+@@ -1381,7 +1384,6 @@ nfsd4_interssc_connect(struct nl4_server *nss, struct svc_rqst *rqstp,
+ char *ipaddr, *dev_name, *raw_data;
+ int len, raw_len;
+ __be32 status = nfserr_inval;
+- struct nfsd4_ssc_umount_item *work = NULL;
+ struct nfsd_net *nn = net_generic(SVC_NET(rqstp), nfsd_net_id);
+
+ naddr = &nss->u.nl4_addr;
+@@ -1389,6 +1391,7 @@ nfsd4_interssc_connect(struct nl4_server *nss, struct svc_rqst *rqstp,
+ naddr->addr_len,
+ (struct sockaddr *)&tmp_addr,
+ sizeof(tmp_addr));
++ *nsui = NULL;
+ if (tmp_addrlen == 0)
+ goto out_err;
+
+@@ -1431,10 +1434,10 @@ nfsd4_interssc_connect(struct nl4_server *nss, struct svc_rqst *rqstp,
+ goto out_free_rawdata;
+ snprintf(dev_name, len + 5, "%s%s%s:/", startsep, ipaddr, endsep);
+
+- status = nfsd4_ssc_setup_dul(nn, ipaddr, &work, &ss_mnt);
++ status = nfsd4_ssc_setup_dul(nn, ipaddr, nsui);
+ if (status)
+ goto out_free_devname;
+- if (ss_mnt)
++ if ((*nsui)->nsui_vfsmount)
+ goto out_done;
+
+ /* Use an 'internal' mount: SB_KERNMOUNT -> MNT_INTERNAL */
+@@ -1442,15 +1445,12 @@ nfsd4_interssc_connect(struct nl4_server *nss, struct svc_rqst *rqstp,
+ module_put(type->owner);
+ if (IS_ERR(ss_mnt)) {
+ status = nfserr_nodev;
+- if (work)
+- nfsd4_ssc_cancel_dul_work(nn, work);
++ nfsd4_ssc_cancel_dul(nn, *nsui);
+ goto out_free_devname;
+ }
+- if (work)
+- nfsd4_ssc_update_dul_work(nn, work, ss_mnt);
++ nfsd4_ssc_update_dul(nn, *nsui, ss_mnt);
+ out_done:
+ status = 0;
+- *mount = ss_mnt;
+
+ out_free_devname:
+ kfree(dev_name);
+@@ -1474,7 +1474,7 @@ out_err:
+ static __be32
+ nfsd4_setup_inter_ssc(struct svc_rqst *rqstp,
+ struct nfsd4_compound_state *cstate,
+- struct nfsd4_copy *copy, struct vfsmount **mount)
++ struct nfsd4_copy *copy)
+ {
+ struct svc_fh *s_fh = NULL;
+ stateid_t *s_stid = &copy->cp_src_stateid;
+@@ -1487,7 +1487,7 @@ nfsd4_setup_inter_ssc(struct svc_rqst *rqstp,
+ if (status)
+ goto out;
+
+- status = nfsd4_interssc_connect(copy->cp_src, rqstp, mount);
++ status = nfsd4_interssc_connect(copy->cp_src, rqstp, &copy->ss_nsui);
+ if (status)
+ goto out;
+
+@@ -1505,45 +1505,26 @@ out:
+ }
+
+ static void
+-nfsd4_cleanup_inter_ssc(struct vfsmount *ss_mnt, struct file *filp,
++nfsd4_cleanup_inter_ssc(struct nfsd4_ssc_umount_item *nsui, struct file *filp,
+ struct nfsd_file *dst)
+ {
+- bool found = false;
+- long timeout;
+- struct nfsd4_ssc_umount_item *tmp;
+- struct nfsd4_ssc_umount_item *ni = NULL;
+ struct nfsd_net *nn = net_generic(dst->nf_net, nfsd_net_id);
++ long timeout = msecs_to_jiffies(nfsd4_ssc_umount_timeout);
+
+ nfs42_ssc_close(filp);
+- nfsd_file_put(dst);
+ fput(filp);
+
+- if (!nn) {
+- mntput(ss_mnt);
+- return;
+- }
+ spin_lock(&nn->nfsd_ssc_lock);
+- timeout = msecs_to_jiffies(nfsd4_ssc_umount_timeout);
+- list_for_each_entry_safe(ni, tmp, &nn->nfsd_ssc_mount_list, nsui_list) {
+- if (ni->nsui_vfsmount->mnt_sb == ss_mnt->mnt_sb) {
+- list_del(&ni->nsui_list);
+- /*
+- * vfsmount can be shared by multiple exports,
+- * decrement refcnt. If the count drops to 1 it
+- * will be unmounted when nsui_expire expires.
+- */
+- refcount_dec(&ni->nsui_refcnt);
+- ni->nsui_expire = jiffies + timeout;
+- list_add_tail(&ni->nsui_list, &nn->nfsd_ssc_mount_list);
+- found = true;
+- break;
+- }
+- }
++ list_del(&nsui->nsui_list);
++ /*
++ * vfsmount can be shared by multiple exports,
++ * decrement refcnt. If the count drops to 1 it
++ * will be unmounted when nsui_expire expires.
++ */
++ refcount_dec(&nsui->nsui_refcnt);
++ nsui->nsui_expire = jiffies + timeout;
++ list_add_tail(&nsui->nsui_list, &nn->nfsd_ssc_mount_list);
+ spin_unlock(&nn->nfsd_ssc_lock);
+- if (!found) {
+- mntput(ss_mnt);
+- return;
+- }
+ }
+
+ #else /* CONFIG_NFSD_V4_2_INTER_SSC */
+@@ -1551,15 +1532,13 @@ nfsd4_cleanup_inter_ssc(struct vfsmount *ss_mnt, struct file *filp,
+ static __be32
+ nfsd4_setup_inter_ssc(struct svc_rqst *rqstp,
+ struct nfsd4_compound_state *cstate,
+- struct nfsd4_copy *copy,
+- struct vfsmount **mount)
++ struct nfsd4_copy *copy)
+ {
+- *mount = NULL;
+ return nfserr_inval;
+ }
+
+ static void
+-nfsd4_cleanup_inter_ssc(struct vfsmount *ss_mnt, struct file *filp,
++nfsd4_cleanup_inter_ssc(struct nfsd4_ssc_umount_item *nsui, struct file *filp,
+ struct nfsd_file *dst)
+ {
+ }
+@@ -1582,13 +1561,6 @@ nfsd4_setup_intra_ssc(struct svc_rqst *rqstp,
+ &copy->nf_dst);
+ }
+
+-static void
+-nfsd4_cleanup_intra_ssc(struct nfsd_file *src, struct nfsd_file *dst)
+-{
+- nfsd_file_put(src);
+- nfsd_file_put(dst);
+-}
+-
+ static void nfsd4_cb_offload_release(struct nfsd4_callback *cb)
+ {
+ struct nfsd4_cb_offload *cbo =
+@@ -1700,18 +1672,27 @@ static void dup_copy_fields(struct nfsd4_copy *src, struct nfsd4_copy *dst)
+ memcpy(dst->cp_src, src->cp_src, sizeof(struct nl4_server));
+ memcpy(&dst->stateid, &src->stateid, sizeof(src->stateid));
+ memcpy(&dst->c_fh, &src->c_fh, sizeof(src->c_fh));
+- dst->ss_mnt = src->ss_mnt;
++ dst->ss_nsui = src->ss_nsui;
++}
++
++static void release_copy_files(struct nfsd4_copy *copy)
++{
++ if (copy->nf_src)
++ nfsd_file_put(copy->nf_src);
++ if (copy->nf_dst)
++ nfsd_file_put(copy->nf_dst);
+ }
+
+ static void cleanup_async_copy(struct nfsd4_copy *copy)
+ {
+ nfs4_free_copy_state(copy);
+- nfsd_file_put(copy->nf_dst);
+- if (!nfsd4_ssc_is_inter(copy))
+- nfsd_file_put(copy->nf_src);
+- spin_lock(&copy->cp_clp->async_lock);
+- list_del(&copy->copies);
+- spin_unlock(&copy->cp_clp->async_lock);
++ release_copy_files(copy);
++ if (copy->cp_clp) {
++ spin_lock(&copy->cp_clp->async_lock);
++ if (!list_empty(&copy->copies))
++ list_del_init(&copy->copies);
++ spin_unlock(&copy->cp_clp->async_lock);
++ }
+ nfs4_put_copy(copy);
+ }
+
+@@ -1749,8 +1730,8 @@ static int nfsd4_do_async_copy(void *data)
+ if (nfsd4_ssc_is_inter(copy)) {
+ struct file *filp;
+
+- filp = nfs42_ssc_open(copy->ss_mnt, &copy->c_fh,
+- &copy->stateid);
++ filp = nfs42_ssc_open(copy->ss_nsui->nsui_vfsmount,
++ &copy->c_fh, &copy->stateid);
+ if (IS_ERR(filp)) {
+ switch (PTR_ERR(filp)) {
+ case -EBADF:
+@@ -1764,11 +1745,10 @@ static int nfsd4_do_async_copy(void *data)
+ }
+ nfserr = nfsd4_do_copy(copy, filp, copy->nf_dst->nf_file,
+ false);
+- nfsd4_cleanup_inter_ssc(copy->ss_mnt, filp, copy->nf_dst);
++ nfsd4_cleanup_inter_ssc(copy->ss_nsui, filp, copy->nf_dst);
+ } else {
+ nfserr = nfsd4_do_copy(copy, copy->nf_src->nf_file,
+ copy->nf_dst->nf_file, false);
+- nfsd4_cleanup_intra_ssc(copy->nf_src, copy->nf_dst);
+ }
+
+ do_callback:
+@@ -1790,8 +1770,7 @@ nfsd4_copy(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ status = nfserr_notsupp;
+ goto out;
+ }
+- status = nfsd4_setup_inter_ssc(rqstp, cstate, copy,
+- ©->ss_mnt);
++ status = nfsd4_setup_inter_ssc(rqstp, cstate, copy);
+ if (status)
+ return nfserr_offload_denied;
+ } else {
+@@ -1810,12 +1789,13 @@ nfsd4_copy(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ async_copy = kzalloc(sizeof(struct nfsd4_copy), GFP_KERNEL);
+ if (!async_copy)
+ goto out_err;
++ INIT_LIST_HEAD(&async_copy->copies);
++ refcount_set(&async_copy->refcount, 1);
+ async_copy->cp_src = kmalloc(sizeof(*async_copy->cp_src), GFP_KERNEL);
+ if (!async_copy->cp_src)
+ goto out_err;
+ if (!nfs4_init_copy_state(nn, copy))
+ goto out_err;
+- refcount_set(&async_copy->refcount, 1);
+ memcpy(&copy->cp_res.cb_stateid, &copy->cp_stateid.cs_stid,
+ sizeof(copy->cp_res.cb_stateid));
+ dup_copy_fields(copy, async_copy);
+@@ -1832,18 +1812,22 @@ nfsd4_copy(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ } else {
+ status = nfsd4_do_copy(copy, copy->nf_src->nf_file,
+ copy->nf_dst->nf_file, true);
+- nfsd4_cleanup_intra_ssc(copy->nf_src, copy->nf_dst);
+ }
+ out:
++ release_copy_files(copy);
+ return status;
+ out_err:
++ if (nfsd4_ssc_is_inter(copy)) {
++ /*
++ * Source's vfsmount of inter-copy will be unmounted
++ * by the laundromat. Use copy instead of async_copy
++ * since async_copy->ss_nsui might not be set yet.
++ */
++ refcount_dec(&copy->ss_nsui->nsui_refcnt);
++ }
+ if (async_copy)
+ cleanup_async_copy(async_copy);
+ status = nfserrno(-ENOMEM);
+- /*
+- * source's vfsmount of inter-copy will be unmounted
+- * by the laundromat
+- */
+ goto out;
+ }
+
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index c69f27d3adb79..8852a05126926 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -992,7 +992,6 @@ static int nfs4_init_cp_state(struct nfsd_net *nn, copy_stateid_t *stid,
+
+ stid->cs_stid.si_opaque.so_clid.cl_boot = (u32)nn->boot_time;
+ stid->cs_stid.si_opaque.so_clid.cl_id = nn->s2s_cp_cl_id;
+- stid->cs_type = cs_type;
+
+ idr_preload(GFP_KERNEL);
+ spin_lock(&nn->s2s_cp_lock);
+@@ -1003,6 +1002,7 @@ static int nfs4_init_cp_state(struct nfsd_net *nn, copy_stateid_t *stid,
+ idr_preload_end();
+ if (new_id < 0)
+ return 0;
++ stid->cs_type = cs_type;
+ return 1;
+ }
+
+@@ -1036,7 +1036,8 @@ void nfs4_free_copy_state(struct nfsd4_copy *copy)
+ {
+ struct nfsd_net *nn;
+
+- WARN_ON_ONCE(copy->cp_stateid.cs_type != NFS4_COPY_STID);
++ if (copy->cp_stateid.cs_type != NFS4_COPY_STID)
++ return;
+ nn = net_generic(copy->cp_clp->net, nfsd_net_id);
+ spin_lock(&nn->s2s_cp_lock);
+ idr_remove(&nn->s2s_cp_stateids,
+@@ -5298,16 +5299,17 @@ nfs4_upgrade_open(struct svc_rqst *rqstp, struct nfs4_file *fp,
+ /* test and set deny mode */
+ spin_lock(&fp->fi_lock);
+ status = nfs4_file_check_deny(fp, open->op_share_deny);
+- if (status == nfs_ok) {
+- if (status != nfserr_share_denied) {
+- set_deny(open->op_share_deny, stp);
+- fp->fi_share_deny |=
+- (open->op_share_deny & NFS4_SHARE_DENY_BOTH);
+- } else {
+- if (nfs4_resolve_deny_conflicts_locked(fp, false,
+- stp, open->op_share_deny, false))
+- status = nfserr_jukebox;
+- }
++ switch (status) {
++ case nfs_ok:
++ set_deny(open->op_share_deny, stp);
++ fp->fi_share_deny |=
++ (open->op_share_deny & NFS4_SHARE_DENY_BOTH);
++ break;
++ case nfserr_share_denied:
++ if (nfs4_resolve_deny_conflicts_locked(fp, false,
++ stp, open->op_share_deny, false))
++ status = nfserr_jukebox;
++ break;
+ }
+ spin_unlock(&fp->fi_lock);
+
+@@ -5438,6 +5440,23 @@ nfsd4_verify_deleg_dentry(struct nfsd4_open *open, struct nfs4_file *fp,
+ return 0;
+ }
+
++/*
++ * We avoid breaking delegations held by a client due to its own activity, but
++ * clearing setuid/setgid bits on a write is an implicit activity and the client
++ * may not notice and continue using the old mode. Avoid giving out a delegation
++ * on setuid/setgid files when the client is requesting an open for write.
++ */
++static int
++nfsd4_verify_setuid_write(struct nfsd4_open *open, struct nfsd_file *nf)
++{
++ struct inode *inode = file_inode(nf->nf_file);
++
++ if ((open->op_share_access & NFS4_SHARE_ACCESS_WRITE) &&
++ (inode->i_mode & (S_ISUID|S_ISGID)))
++ return -EAGAIN;
++ return 0;
++}
++
+ static struct nfs4_delegation *
+ nfs4_set_delegation(struct nfsd4_open *open, struct nfs4_ol_stateid *stp,
+ struct svc_fh *parent)
+@@ -5471,6 +5490,8 @@ nfs4_set_delegation(struct nfsd4_open *open, struct nfs4_ol_stateid *stp,
+ spin_lock(&fp->fi_lock);
+ if (nfs4_delegation_exists(clp, fp))
+ status = -EAGAIN;
++ else if (nfsd4_verify_setuid_write(open, nf))
++ status = -EAGAIN;
+ else if (!fp->fi_deleg_file) {
+ fp->fi_deleg_file = nf;
+ /* increment early to prevent fi_deleg_file from being
+@@ -5511,6 +5532,14 @@ nfs4_set_delegation(struct nfsd4_open *open, struct nfs4_ol_stateid *stp,
+ if (status)
+ goto out_unlock;
+
++ /*
++ * Now that the deleg is set, check again to ensure that nothing
++ * raced in and changed the mode while we weren't looking.
++ */
++ status = nfsd4_verify_setuid_write(open, fp->fi_deleg_file);
++ if (status)
++ goto out_unlock;
++
+ spin_lock(&state_lock);
+ spin_lock(&fp->fi_lock);
+ if (fp->fi_had_conflict)
+diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c
+index 325d3d3f12110..a0ecec54d3d7d 100644
+--- a/fs/nfsd/nfssvc.c
++++ b/fs/nfsd/nfssvc.c
+@@ -363,7 +363,7 @@ void nfsd_copy_write_verifier(__be32 verf[2], struct nfsd_net *nn)
+
+ do {
+ read_seqbegin_or_lock(&nn->writeverf_lock, &seq);
+- memcpy(verf, nn->writeverf, sizeof(*verf));
++ memcpy(verf, nn->writeverf, sizeof(nn->writeverf));
+ } while (need_seqretry(&nn->writeverf_lock, seq));
+ done_seqretry(&nn->writeverf_lock, seq);
+ }
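
The nfssvc.c one-liner fixes a classic C pitfall: the parameter is declared
"__be32 verf[2]", which decays to "__be32 *", so sizeof(*verf) is 4 and the
memcpy copied only half of the 8-byte write verifier. sizeof(nn->writeverf)
names the real array and yields 8. Demonstrated in plain C:

#include <stdio.h>
#include <string.h>

static void copy_verf(unsigned verf[2], const unsigned src[2])
{
	/* 'verf' is really 'unsigned *' here: sizeof(*verf) == sizeof(unsigned) */
	printf("sizeof(*verf) = %zu\n", sizeof(*verf));   /* 4, not 8 */
	memcpy(verf, src, 2 * sizeof(verf[0]));           /* full 8 bytes */
}

int main(void)
{
	unsigned src[2] = { 0x11111111, 0x22222222 }, dst[2] = { 0, 0 };
	copy_verf(dst, src);
	printf("%08x %08x\n", dst[0], dst[1]);
	return 0;
}
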
+diff --git a/fs/nfsd/trace.h b/fs/nfsd/trace.h
+index 8f9c82d9e075b..4183819ea0829 100644
+--- a/fs/nfsd/trace.h
++++ b/fs/nfsd/trace.h
+@@ -1202,37 +1202,6 @@ TRACE_EVENT(nfsd_file_close,
+ )
+ );
+
+-TRACE_EVENT(nfsd_file_fsync,
+- TP_PROTO(
+- const struct nfsd_file *nf,
+- int ret
+- ),
+- TP_ARGS(nf, ret),
+- TP_STRUCT__entry(
+- __field(void *, nf_inode)
+- __field(int, nf_ref)
+- __field(int, ret)
+- __field(unsigned long, nf_flags)
+- __field(unsigned char, nf_may)
+- __field(struct file *, nf_file)
+- ),
+- TP_fast_assign(
+- __entry->nf_inode = nf->nf_inode;
+- __entry->nf_ref = refcount_read(&nf->nf_ref);
+- __entry->ret = ret;
+- __entry->nf_flags = nf->nf_flags;
+- __entry->nf_may = nf->nf_may;
+- __entry->nf_file = nf->nf_file;
+- ),
+- TP_printk("inode=%p ref=%d flags=%s may=%s nf_file=%p ret=%d",
+- __entry->nf_inode,
+- __entry->nf_ref,
+- show_nf_flags(__entry->nf_flags),
+- show_nfsd_may_flags(__entry->nf_may),
+- __entry->nf_file, __entry->ret
+- )
+-);
+-
+ #include "cache.h"
+
+ TRACE_DEFINE_ENUM(RC_DROPIT);
+diff --git a/fs/nfsd/xdr4.h b/fs/nfsd/xdr4.h
+index 4fd2cf6d1d2dc..510978e602da6 100644
+--- a/fs/nfsd/xdr4.h
++++ b/fs/nfsd/xdr4.h
+@@ -571,7 +571,7 @@ struct nfsd4_copy {
+ struct task_struct *copy_task;
+ refcount_t refcount;
+
+- struct vfsmount *ss_mnt;
++ struct nfsd4_ssc_umount_item *ss_nsui;
+ struct nfs_fh c_fh;
+ nfs4_stateid stateid;
+ };
+diff --git a/fs/ocfs2/move_extents.c b/fs/ocfs2/move_extents.c
+index 192cad0662d8b..b1e32ec4a9d41 100644
+--- a/fs/ocfs2/move_extents.c
++++ b/fs/ocfs2/move_extents.c
+@@ -105,14 +105,6 @@ static int __ocfs2_move_extent(handle_t *handle,
+ */
+ replace_rec.e_flags = ext_flags & ~OCFS2_EXT_REFCOUNTED;
+
+- ret = ocfs2_journal_access_di(handle, INODE_CACHE(inode),
+- context->et.et_root_bh,
+- OCFS2_JOURNAL_ACCESS_WRITE);
+- if (ret) {
+- mlog_errno(ret);
+- goto out;
+- }
+-
+ ret = ocfs2_split_extent(handle, &context->et, path, index,
+ &replace_rec, context->meta_ac,
+ &context->dealloc);
+@@ -121,8 +113,6 @@ static int __ocfs2_move_extent(handle_t *handle,
+ goto out;
+ }
+
+- ocfs2_journal_dirty(handle, context->et.et_root_bh);
+-
+ context->new_phys_cpos = new_p_cpos;
+
+ /*
+@@ -444,7 +434,7 @@ static int ocfs2_find_victim_alloc_group(struct inode *inode,
+ bg = (struct ocfs2_group_desc *)gd_bh->b_data;
+
+ if (vict_blkno < (le64_to_cpu(bg->bg_blkno) +
+- le16_to_cpu(bg->bg_bits))) {
++ (le16_to_cpu(bg->bg_bits) << bits_per_unit))) {
+
+ *ret_bh = gd_bh;
+ *vict_bit = (vict_blkno - blkno) >>
+@@ -559,6 +549,7 @@ static void ocfs2_probe_alloc_group(struct inode *inode, struct buffer_head *bh,
+ last_free_bits++;
+
+ if (last_free_bits == move_len) {
++ i -= move_len;
+ *goal_bit = i;
+ *phys_cpos = base_cpos + i;
+ break;
+@@ -1030,18 +1021,19 @@ int ocfs2_ioctl_move_extents(struct file *filp, void __user *argp)
+
+ context->range = &range;
+
++ /*
++ * ok, the default threshold for the defragmentation
++ * is 1M, since our maximum clustersize was 1M also.
++ * any thought?
++ */
++ if (!range.me_threshold)
++ range.me_threshold = 1024 * 1024;
++
++ if (range.me_threshold > i_size_read(inode))
++ range.me_threshold = i_size_read(inode);
++
+ if (range.me_flags & OCFS2_MOVE_EXT_FL_AUTO_DEFRAG) {
+ context->auto_defrag = 1;
+- /*
+- * ok, the default theshold for the defragmentation
+- * is 1M, since our maximum clustersize was 1M also.
+- * any thought?
+- */
+- if (!range.me_threshold)
+- range.me_threshold = 1024 * 1024;
+-
+- if (range.me_threshold > i_size_read(inode))
+- range.me_threshold = i_size_read(inode);
+
+ if (range.me_flags & OCFS2_MOVE_EXT_FL_PART_DEFRAG)
+ context->partial = 1;
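
In the ocfs2_find_victim_alloc_group() fix above, bg->bg_bits counts
allocation *bits*, while vict_blkno is a *block* number; each bit covers
(1 << bits_per_unit) blocks, so the group's end in blocks is
bg_blkno + (bg_bits << bits_per_unit). Worked through with made-up numbers:

#include <stdio.h>

int main(void)
{
	unsigned long bg_blkno = 4096;  /* first block of the group */
	unsigned bg_bits = 2048;        /* allocation bits in the group */
	unsigned bits_per_unit = 3;     /* each bit covers 8 blocks */

	unsigned long wrong_end = bg_blkno + bg_bits;           /* 6144  */
	unsigned long right_end = bg_blkno +
		((unsigned long)bg_bits << bits_per_unit);      /* 20480 */
	printf("wrong=%lu right=%lu\n", wrong_end, right_end);
	return 0;
}
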
+diff --git a/fs/open.c b/fs/open.c
+index 82c1a28b33089..ceb88ac0ca3b2 100644
+--- a/fs/open.c
++++ b/fs/open.c
+@@ -1411,8 +1411,9 @@ int filp_close(struct file *filp, fl_owner_t id)
+ {
+ int retval = 0;
+
+- if (!file_count(filp)) {
+- printk(KERN_ERR "VFS: Close: file count is 0\n");
++ if (CHECK_DATA_CORRUPTION(file_count(filp) == 0,
++ "VFS: Close: file count is 0 (f_op=%ps)",
++ filp->f_op)) {
+ return 0;
+ }
+
+diff --git a/fs/proc/proc_sysctl.c b/fs/proc/proc_sysctl.c
+index 48f2d60bd78a2..436025e0f77a6 100644
+--- a/fs/proc/proc_sysctl.c
++++ b/fs/proc/proc_sysctl.c
+@@ -1124,6 +1124,11 @@ static int sysctl_check_table_array(const char *path, struct ctl_table *table)
+ err |= sysctl_err(path, table, "array not allowed");
+ }
+
++ if (table->proc_handler == proc_dobool) {
++ if (table->maxlen != sizeof(bool))
++ err |= sysctl_err(path, table, "array not allowed");
++ }
++
+ return err;
+ }
+
+@@ -1136,6 +1141,7 @@ static int sysctl_check_table(const char *path, struct ctl_table *table)
+ err |= sysctl_err(path, entry, "Not a file");
+
+ if ((entry->proc_handler == proc_dostring) ||
++ (entry->proc_handler == proc_dobool) ||
+ (entry->proc_handler == proc_dointvec) ||
+ (entry->proc_handler == proc_douintvec) ||
+ (entry->proc_handler == proc_douintvec_minmax) ||
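
Both sysctl changes above are about proc_dobool's storage width: the
handler stores through a bool, so the lockd table had to declare
maxlen = sizeof(bool) rather than sizeof(int), and sysctl_check_table_array()
now rejects any proc_dobool entry whose maxlen disagrees. The widths on a
typical ABI:

#include <stdbool.h>
#include <stdio.h>

int main(void)
{
	/* commonly 4 vs 1; writing sizeof(int) bytes through a bool *
	 * would clobber whatever sits after the flag */
	printf("sizeof(int)=%zu sizeof(bool)=%zu\n",
	       sizeof(int), sizeof(bool));
	return 0;
}
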
+diff --git a/fs/super.c b/fs/super.c
+index 12c08cb20405d..cf737ec2bd05c 100644
+--- a/fs/super.c
++++ b/fs/super.c
+@@ -491,10 +491,23 @@ void generic_shutdown_super(struct super_block *sb)
+ if (sop->put_super)
+ sop->put_super(sb);
+
+- if (!list_empty(&sb->s_inodes)) {
+- printk("VFS: Busy inodes after unmount of %s. "
+- "Self-destruct in 5 seconds. Have a nice day...\n",
+- sb->s_id);
++ if (CHECK_DATA_CORRUPTION(!list_empty(&sb->s_inodes),
++ "VFS: Busy inodes after unmount of %s (%s)",
++ sb->s_id, sb->s_type->name)) {
++ /*
++ * Adding a proper bailout path here would be hard, but
++ * we can at least make it more likely that a later
++ * iput_final() or such crashes cleanly.
++ */
++ struct inode *inode;
++
++ spin_lock(&sb->s_inode_list_lock);
++ list_for_each_entry(inode, &sb->s_inodes, i_sb_list) {
++ inode->i_op = VFS_PTR_POISON;
++ inode->i_sb = VFS_PTR_POISON;
++ inode->i_mapping = VFS_PTR_POISON;
++ }
++ spin_unlock(&sb->s_inode_list_lock);
+ }
+ }
+ spin_lock(&sb_lock);
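
Rather than only logging "busy inodes after unmount" and hoping, the
super.c hunk overwrites the leaked inodes' key pointers with VFS_PTR_POISON
so that any later use faults at a recognizable address instead of silently
corrupting reused memory. The pattern, with an illustrative poison value:

#include <stdio.h>

#define PTR_POISON ((void *)0xF5)  /* illustrative; the kernel adds
                                    * POISON_POINTER_DELTA to this */

struct obj { void *ops; };

static void poison(struct obj *o)
{
	/* a deterministic, non-dereferenceable address beats a stale
	 * pointer into memory that may already have been reused */
	o->ops = PTR_POISON;
}

int main(void)
{
	struct obj o = { .ops = &o };
	poison(&o);
	printf("ops now %p\n", o.ops);
	return 0;
}
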
+diff --git a/fs/udf/file.c b/fs/udf/file.c
+index 5c659e23e578f..8be51161f3e52 100644
+--- a/fs/udf/file.c
++++ b/fs/udf/file.c
+@@ -149,26 +149,24 @@ static ssize_t udf_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
+ goto out;
+
+ down_write(&iinfo->i_data_sem);
+- if (iinfo->i_alloc_type == ICBTAG_FLAG_AD_IN_ICB) {
+- loff_t end = iocb->ki_pos + iov_iter_count(from);
+-
+- if (inode->i_sb->s_blocksize <
+- (udf_file_entry_alloc_offset(inode) + end)) {
+- err = udf_expand_file_adinicb(inode);
+- if (err) {
+- inode_unlock(inode);
+- udf_debug("udf_expand_adinicb: err=%d\n", err);
+- return err;
+- }
+- } else {
+- iinfo->i_lenAlloc = max(end, inode->i_size);
+- up_write(&iinfo->i_data_sem);
++ if (iinfo->i_alloc_type == ICBTAG_FLAG_AD_IN_ICB &&
++ inode->i_sb->s_blocksize < (udf_file_entry_alloc_offset(inode) +
++ iocb->ki_pos + iov_iter_count(from))) {
++ err = udf_expand_file_adinicb(inode);
++ if (err) {
++ inode_unlock(inode);
++ udf_debug("udf_expand_adinicb: err=%d\n", err);
++ return err;
+ }
+ } else
+ up_write(&iinfo->i_data_sem);
+
+ retval = __generic_file_write_iter(iocb, from);
+ out:
++ down_write(&iinfo->i_data_sem);
++ if (iinfo->i_alloc_type == ICBTAG_FLAG_AD_IN_ICB && retval > 0)
++ iinfo->i_lenAlloc = inode->i_size;
++ up_write(&iinfo->i_data_sem);
+ inode_unlock(inode);
+
+ if (retval > 0) {
+diff --git a/fs/udf/inode.c b/fs/udf/inode.c
+index 34e416327dd4e..a1af2c2e1c295 100644
+--- a/fs/udf/inode.c
++++ b/fs/udf/inode.c
+@@ -521,8 +521,10 @@ static int udf_do_extend_file(struct inode *inode,
+ }
+
+ if (fake) {
+- udf_add_aext(inode, last_pos, &last_ext->extLocation,
+- last_ext->extLength, 1);
++ err = udf_add_aext(inode, last_pos, &last_ext->extLocation,
++ last_ext->extLength, 1);
++ if (err < 0)
++ goto out_err;
+ count++;
+ } else {
+ struct kernel_lb_addr tmploc;
+@@ -556,7 +558,7 @@ static int udf_do_extend_file(struct inode *inode,
+ err = udf_add_aext(inode, last_pos, &last_ext->extLocation,
+ last_ext->extLength, 1);
+ if (err)
+- return err;
++ goto out_err;
+ count++;
+ }
+ if (new_block_bytes) {
+@@ -565,7 +567,7 @@ static int udf_do_extend_file(struct inode *inode,
+ err = udf_add_aext(inode, last_pos, &last_ext->extLocation,
+ last_ext->extLength, 1);
+ if (err)
+- return err;
++ goto out_err;
+ count++;
+ }
+
+@@ -579,6 +581,11 @@ out:
+ return -EIO;
+
+ return count;
++out_err:
++ /* Remove extents we've created so far */
++ udf_clear_extent_cache(inode);
++ udf_truncate_extents(inode);
++ return err;
+ }
+
+ /* Extend the final block of the file to final_block_len bytes */
+@@ -792,19 +799,17 @@ static sector_t inode_getblk(struct inode *inode, sector_t block,
+ c = 0;
+ offset = 0;
+ count += ret;
+- /* We are not covered by a preallocated extent? */
+- if ((laarr[0].extLength & UDF_EXTENT_FLAG_MASK) !=
+- EXT_NOT_RECORDED_ALLOCATED) {
+- /* Is there any real extent? - otherwise we overwrite
+- * the fake one... */
+- if (count)
+- c = !c;
+- laarr[c].extLength = EXT_NOT_RECORDED_NOT_ALLOCATED |
+- inode->i_sb->s_blocksize;
+- memset(&laarr[c].extLocation, 0x00,
+- sizeof(struct kernel_lb_addr));
+- count++;
+- }
++ /*
++ * Is there any real extent? - otherwise we overwrite the fake
++ * one...
++ */
++ if (count)
++ c = !c;
++ laarr[c].extLength = EXT_NOT_RECORDED_NOT_ALLOCATED |
++ inode->i_sb->s_blocksize;
++ memset(&laarr[c].extLocation, 0x00,
++ sizeof(struct kernel_lb_addr));
++ count++;
+ endnum = c + 1;
+ lastblock = 1;
+ } else {
+@@ -1080,23 +1085,8 @@ static void udf_merge_extents(struct inode *inode, struct kernel_long_ad *laarr,
+ blocksize - 1) >> blocksize_bits)))) {
+
+ if (((li->extLength & UDF_EXTENT_LENGTH_MASK) +
+- (lip1->extLength & UDF_EXTENT_LENGTH_MASK) +
+- blocksize - 1) & ~UDF_EXTENT_LENGTH_MASK) {
+- lip1->extLength = (lip1->extLength -
+- (li->extLength &
+- UDF_EXTENT_LENGTH_MASK) +
+- UDF_EXTENT_LENGTH_MASK) &
+- ~(blocksize - 1);
+- li->extLength = (li->extLength &
+- UDF_EXTENT_FLAG_MASK) +
+- (UDF_EXTENT_LENGTH_MASK + 1) -
+- blocksize;
+- lip1->extLocation.logicalBlockNum =
+- li->extLocation.logicalBlockNum +
+- ((li->extLength &
+- UDF_EXTENT_LENGTH_MASK) >>
+- blocksize_bits);
+- } else {
++ (lip1->extLength & UDF_EXTENT_LENGTH_MASK) +
++ blocksize - 1) <= UDF_EXTENT_LENGTH_MASK) {
+ li->extLength = lip1->extLength +
+ (((li->extLength &
+ UDF_EXTENT_LENGTH_MASK) +
+@@ -1381,6 +1371,7 @@ reread:
+ ret = -EIO;
+ goto out;
+ }
++ iinfo->i_hidden = hidden_inode;
+ iinfo->i_unique = 0;
+ iinfo->i_lenEAttr = 0;
+ iinfo->i_lenExtents = 0;
+@@ -1716,8 +1707,12 @@ static int udf_update_inode(struct inode *inode, int do_sync)
+
+ if (S_ISDIR(inode->i_mode) && inode->i_nlink > 0)
+ fe->fileLinkCount = cpu_to_le16(inode->i_nlink - 1);
+- else
+- fe->fileLinkCount = cpu_to_le16(inode->i_nlink);
++ else {
++ if (iinfo->i_hidden)
++ fe->fileLinkCount = cpu_to_le16(0);
++ else
++ fe->fileLinkCount = cpu_to_le16(inode->i_nlink);
++ }
+
+ fe->informationLength = cpu_to_le64(inode->i_size);
+
+@@ -1888,8 +1883,13 @@ struct inode *__udf_iget(struct super_block *sb, struct kernel_lb_addr *ino,
+ if (!inode)
+ return ERR_PTR(-ENOMEM);
+
+- if (!(inode->i_state & I_NEW))
++ if (!(inode->i_state & I_NEW)) {
++ if (UDF_I(inode)->i_hidden != hidden_inode) {
++ iput(inode);
++ return ERR_PTR(-EFSCORRUPTED);
++ }
+ return inode;
++ }
+
+ memcpy(&UDF_I(inode)->i_location, ino, sizeof(struct kernel_lb_addr));
+ err = udf_read_inode(inode, hidden_inode);
+diff --git a/fs/udf/super.c b/fs/udf/super.c
+index 06eda8177b5f1..241b40e886b36 100644
+--- a/fs/udf/super.c
++++ b/fs/udf/super.c
+@@ -147,6 +147,7 @@ static struct inode *udf_alloc_inode(struct super_block *sb)
+ ei->i_next_alloc_goal = 0;
+ ei->i_strat4096 = 0;
+ ei->i_streamdir = 0;
++ ei->i_hidden = 0;
+ init_rwsem(&ei->i_data_sem);
+ ei->cached_extent.lstart = -1;
+ spin_lock_init(&ei->i_extent_cache_lock);
+diff --git a/fs/udf/udf_i.h b/fs/udf/udf_i.h
+index 06ff7006b8227..312b7c9ef10e2 100644
+--- a/fs/udf/udf_i.h
++++ b/fs/udf/udf_i.h
+@@ -44,7 +44,8 @@ struct udf_inode_info {
+ unsigned i_use : 1; /* unallocSpaceEntry */
+ unsigned i_strat4096 : 1;
+ unsigned i_streamdir : 1;
+- unsigned reserved : 25;
++ unsigned i_hidden : 1; /* hidden system inode */
++ unsigned reserved : 24;
+ __u8 *i_data;
+ struct kernel_lb_addr i_locStreamdir;
+ __u64 i_lenStreams;
+diff --git a/fs/udf/udf_sb.h b/fs/udf/udf_sb.h
+index 291b56dd011ee..6bccff3c70f54 100644
+--- a/fs/udf/udf_sb.h
++++ b/fs/udf/udf_sb.h
+@@ -55,6 +55,8 @@
+ #define MF_DUPLICATE_MD 0x01
+ #define MF_MIRROR_FE_LOADED 0x02
+
++#define EFSCORRUPTED EUCLEAN
++
+ struct udf_meta_data {
+ __u32 s_meta_file_loc;
+ __u32 s_mirror_file_loc;
+diff --git a/include/drm/drm_mipi_dsi.h b/include/drm/drm_mipi_dsi.h
+index 20b21b577deaa..9054a5185e1a9 100644
+--- a/include/drm/drm_mipi_dsi.h
++++ b/include/drm/drm_mipi_dsi.h
+@@ -296,6 +296,10 @@ int mipi_dsi_dcs_set_display_brightness(struct mipi_dsi_device *dsi,
+ u16 brightness);
+ int mipi_dsi_dcs_get_display_brightness(struct mipi_dsi_device *dsi,
+ u16 *brightness);
++int mipi_dsi_dcs_set_display_brightness_large(struct mipi_dsi_device *dsi,
++ u16 brightness);
++int mipi_dsi_dcs_get_display_brightness_large(struct mipi_dsi_device *dsi,
++ u16 *brightness);
+
+ /**
+ * mipi_dsi_dcs_write_seq - transmit a DCS command with payload
+diff --git a/include/drm/drm_print.h b/include/drm/drm_print.h
+index a44fb7ef257f6..094ded23534c7 100644
+--- a/include/drm/drm_print.h
++++ b/include/drm/drm_print.h
+@@ -521,7 +521,7 @@ __printf(1, 2)
+ void __drm_err(const char *format, ...);
+
+ #if !defined(CONFIG_DRM_USE_DYNAMIC_DEBUG)
+-#define __drm_dbg(fmt, ...) ___drm_dbg(NULL, fmt, ##__VA_ARGS__)
++#define __drm_dbg(cat, fmt, ...) ___drm_dbg(NULL, cat, fmt, ##__VA_ARGS__)
+ #else
+ #define __drm_dbg(cat, fmt, ...) \
+ _dynamic_func_call_cls(cat, fmt, ___drm_dbg, \
+diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
+index 43d4e073b1115..10ee92db680c9 100644
+--- a/include/linux/blkdev.h
++++ b/include/linux/blkdev.h
+@@ -484,6 +484,7 @@ struct request_queue {
+ DECLARE_BITMAP (blkcg_pols, BLKCG_MAX_POLS);
+ struct blkcg_gq *root_blkg;
+ struct list_head blkg_list;
++ struct mutex blkcg_mutex;
+ #endif
+
+ struct queue_limits limits;
+diff --git a/include/linux/bpf.h b/include/linux/bpf.h
+index 634d37a599fa7..cf0d88109e3f9 100644
+--- a/include/linux/bpf.h
++++ b/include/linux/bpf.h
+@@ -346,6 +346,13 @@ static inline void bpf_obj_init(const struct btf_field_offs *foffs, void *obj)
+ memset(obj + foffs->field_off[i], 0, foffs->field_sz[i]);
+ }
+
++/* 'dst' must be a temporary buffer and should not point to memory that is being
++ * used in parallel by a bpf program or bpf syscall, otherwise the access from
++ * the bpf program or bpf syscall may be corrupted by the reinitialization,
++ * leading to weird problems. Even if 'dst' is newly allocated from the
++ * bpf memory allocator, it is still possible for 'dst' to be used in
++ * parallel by a bpf program or bpf syscall.
++ */
+ static inline void check_and_init_map_value(struct bpf_map *map, void *dst)
+ {
+ bpf_obj_init(map->field_offs, dst);
+diff --git a/include/linux/compiler_attributes.h b/include/linux/compiler_attributes.h
+index 898b3458b24a0..b83126452c651 100644
+--- a/include/linux/compiler_attributes.h
++++ b/include/linux/compiler_attributes.h
+@@ -75,12 +75,6 @@
+ # define __assume_aligned(a, ...)
+ #endif
+
+-/*
+- * gcc: https://gcc.gnu.org/onlinedocs/gcc/Common-Function-Attributes.html#index-cold-function-attribute
+- * gcc: https://gcc.gnu.org/onlinedocs/gcc/Label-Attributes.html#index-cold-label-attribute
+- */
+-#define __cold __attribute__((__cold__))
+-
+ /*
+ * Note the long name.
+ *
+diff --git a/include/linux/compiler_types.h b/include/linux/compiler_types.h
+index 7c1afe0f4129c..aab34e30128e9 100644
+--- a/include/linux/compiler_types.h
++++ b/include/linux/compiler_types.h
+@@ -79,6 +79,33 @@ static inline void __chk_io_ptr(const volatile void __iomem *ptr) { }
+ /* Attributes */
+ #include <linux/compiler_attributes.h>
+
++#if CONFIG_FUNCTION_ALIGNMENT > 0
++#define __function_aligned __aligned(CONFIG_FUNCTION_ALIGNMENT)
++#else
++#define __function_aligned
++#endif
++
++/*
++ * gcc: https://gcc.gnu.org/onlinedocs/gcc/Common-Function-Attributes.html#index-cold-function-attribute
++ * gcc: https://gcc.gnu.org/onlinedocs/gcc/Label-Attributes.html#index-cold-label-attribute
++ *
++ * When -falign-functions=N is in use, we must avoid the cold attribute as
++ * contemporary versions of GCC drop the alignment for cold functions. Worse,
++ * GCC can implicitly mark callees of cold functions as cold themselves, so
++ * it's not sufficient to add __function_aligned here as that will not ensure
++ * that callees are correctly aligned.
++ *
++ * See:
++ *
++ * https://lore.kernel.org/lkml/Y77%2FqVgvaJidFpYt@FVFF77S0Q05N
++ * https://gcc.gnu.org/bugzilla/show_bug.cgi?id=88345#c9
++ */
++#if !defined(CONFIG_CC_IS_GCC) || (CONFIG_FUNCTION_ALIGNMENT == 0)
++#define __cold __attribute__((__cold__))
++#else
++#define __cold
++#endif
++
+ /* Builtins */
+
+ /*
+diff --git a/include/linux/context_tracking.h b/include/linux/context_tracking.h
+index dcef4a9e4d63e..d4afa8508a806 100644
+--- a/include/linux/context_tracking.h
++++ b/include/linux/context_tracking.h
+@@ -130,9 +130,36 @@ static __always_inline unsigned long ct_state_inc(int incby)
+ return arch_atomic_add_return(incby, this_cpu_ptr(&context_tracking.state));
+ }
+
++static __always_inline bool warn_rcu_enter(void)
++{
++ bool ret = false;
++
++ /*
++ * Horrible hack to shut up the recursive "RCU isn't watching" failure since
++ * lots of the actual reporting also relies on RCU.
++ */
++ preempt_disable_notrace();
++ if (rcu_dynticks_curr_cpu_in_eqs()) {
++ ret = true;
++ ct_state_inc(RCU_DYNTICKS_IDX);
++ }
++
++ return ret;
++}
++
++static __always_inline void warn_rcu_exit(bool rcu)
++{
++ if (rcu)
++ ct_state_inc(RCU_DYNTICKS_IDX);
++ preempt_enable_notrace();
++}
++
+ #else
+ static inline void ct_idle_enter(void) { }
+ static inline void ct_idle_exit(void) { }
++
++static __always_inline bool warn_rcu_enter(void) { return false; }
++static __always_inline void warn_rcu_exit(bool rcu) { }
+ #endif /* !CONFIG_CONTEXT_TRACKING_IDLE */
+
+ #endif
+diff --git a/include/linux/device.h b/include/linux/device.h
+index 44e3acae7b36e..f4d20655d2d7e 100644
+--- a/include/linux/device.h
++++ b/include/linux/device.h
+@@ -328,6 +328,7 @@ enum device_link_state {
+ #define DL_FLAG_MANAGED BIT(6)
+ #define DL_FLAG_SYNC_STATE_ONLY BIT(7)
+ #define DL_FLAG_INFERRED BIT(8)
++#define DL_FLAG_CYCLE BIT(9)
+
+ /**
+ * enum dl_dev_state - Device driver presence tracking information.
+diff --git a/include/linux/fwnode.h b/include/linux/fwnode.h
+index 89b9bdfca925c..5700451b300fb 100644
+--- a/include/linux/fwnode.h
++++ b/include/linux/fwnode.h
+@@ -18,7 +18,7 @@ struct fwnode_operations;
+ struct device;
+
+ /*
+- * fwnode link flags
++ * fwnode flags
+ *
+ * LINKS_ADDED: The fwnode has already been parsed to add fwnode links.
+ * NOT_DEVICE: The fwnode will never be populated as a struct device.
+@@ -36,6 +36,7 @@ struct device;
+ #define FWNODE_FLAG_INITIALIZED BIT(2)
+ #define FWNODE_FLAG_NEEDS_CHILD_BOUND_ON_ADD BIT(3)
+ #define FWNODE_FLAG_BEST_EFFORT BIT(4)
++#define FWNODE_FLAG_VISITED BIT(5)
+
+ struct fwnode_handle {
+ struct fwnode_handle *secondary;
+@@ -46,11 +47,19 @@ struct fwnode_handle {
+ u8 flags;
+ };
+
++/*
++ * fwnode link flags
++ *
++ * CYCLE: The fwnode link is part of a cycle. Don't defer probe.
++ */
++#define FWLINK_FLAG_CYCLE BIT(0)
++
+ struct fwnode_link {
+ struct fwnode_handle *supplier;
+ struct list_head s_hook;
+ struct fwnode_handle *consumer;
+ struct list_head c_hook;
++ u8 flags;
+ };
+
+ /**
+@@ -198,7 +207,6 @@ static inline void fwnode_dev_initialized(struct fwnode_handle *fwnode,
+ fwnode->flags &= ~FWNODE_FLAG_INITIALIZED;
+ }
+
+-extern u32 fw_devlink_get_flags(void);
+ extern bool fw_devlink_is_strict(void);
+ int fwnode_link_add(struct fwnode_handle *con, struct fwnode_handle *sup);
+ void fwnode_links_purge(struct fwnode_handle *fwnode);
+diff --git a/include/linux/hid.h b/include/linux/hid.h
+index 8677ae38599e4..48563dc09e171 100644
+--- a/include/linux/hid.h
++++ b/include/linux/hid.h
+@@ -619,6 +619,7 @@ struct hid_device { /* device report descriptor */
+ unsigned long status; /* see STAT flags above */
+ unsigned claimed; /* Claimed by hidinput, hiddev? */
+ unsigned quirks; /* Various quirks the device can pull on us */
++ unsigned initial_quirks; /* Initial set of quirks supplied when creating device */
+ bool io_started; /* If IO has started */
+
+ struct list_head inputs; /* The list of inputs */
+diff --git a/include/linux/ima.h b/include/linux/ima.h
+index 5a0b2a285a18a..d79fee67235ee 100644
+--- a/include/linux/ima.h
++++ b/include/linux/ima.h
+@@ -21,7 +21,8 @@ extern int ima_file_check(struct file *file, int mask);
+ extern void ima_post_create_tmpfile(struct user_namespace *mnt_userns,
+ struct inode *inode);
+ extern void ima_file_free(struct file *file);
+-extern int ima_file_mmap(struct file *file, unsigned long prot);
++extern int ima_file_mmap(struct file *file, unsigned long reqprot,
++ unsigned long prot, unsigned long flags);
+ extern int ima_file_mprotect(struct vm_area_struct *vma, unsigned long prot);
+ extern int ima_load_data(enum kernel_load_data_id id, bool contents);
+ extern int ima_post_load_data(char *buf, loff_t size,
+@@ -76,7 +77,8 @@ static inline void ima_file_free(struct file *file)
+ return;
+ }
+
+-static inline int ima_file_mmap(struct file *file, unsigned long prot)
++static inline int ima_file_mmap(struct file *file, unsigned long reqprot,
++ unsigned long prot, unsigned long flags)
+ {
+ return 0;
+ }
+diff --git a/include/linux/kernel_stat.h b/include/linux/kernel_stat.h
+index ddb5a358fd829..90e2fdc17d79f 100644
+--- a/include/linux/kernel_stat.h
++++ b/include/linux/kernel_stat.h
+@@ -75,7 +75,7 @@ extern unsigned int kstat_irqs_usr(unsigned int irq);
+ /*
+ * Number of interrupts per cpu, since bootup
+ */
+-static inline unsigned int kstat_cpu_irqs_sum(unsigned int cpu)
++static inline unsigned long kstat_cpu_irqs_sum(unsigned int cpu)
+ {
+ return kstat_cpu(cpu).irqs_sum;
+ }
+diff --git a/include/linux/kprobes.h b/include/linux/kprobes.h
+index a0b92be98984e..85a64cb95d755 100644
+--- a/include/linux/kprobes.h
++++ b/include/linux/kprobes.h
+@@ -378,6 +378,8 @@ extern void opt_pre_handler(struct kprobe *p, struct pt_regs *regs);
+ DEFINE_INSN_CACHE_OPS(optinsn);
+
+ extern void wait_for_kprobe_optimizer(void);
++bool optprobe_queued_unopt(struct optimized_kprobe *op);
++bool kprobe_disarmed(struct kprobe *p);
+ #else /* !CONFIG_OPTPROBES */
+ static inline void wait_for_kprobe_optimizer(void) { }
+ #endif /* CONFIG_OPTPROBES */
+diff --git a/include/linux/libnvdimm.h b/include/linux/libnvdimm.h
+index af38252ad7045..e772aae718431 100644
+--- a/include/linux/libnvdimm.h
++++ b/include/linux/libnvdimm.h
+@@ -41,6 +41,9 @@ enum {
+ */
+ NDD_INCOHERENT = 7,
+
++ /* dimm provider wants synchronous registration by __nvdimm_create() */
++ NDD_REGISTER_SYNC = 8,
++
+ /* need to set a limit somewhere, but yes, this is likely overkill */
+ ND_IOCTL_MAX_BUFLEN = SZ_4M,
+ ND_CMD_MAX_ELEM = 5,
+diff --git a/include/linux/mlx4/qp.h b/include/linux/mlx4/qp.h
+index 9db93e487496a..b6b626157b03a 100644
+--- a/include/linux/mlx4/qp.h
++++ b/include/linux/mlx4/qp.h
+@@ -446,6 +446,7 @@ enum {
+
+ struct mlx4_wqe_inline_seg {
+ __be32 byte_count;
++ __u8 data[];
+ };
+
+ enum mlx4_update_qp_attr {
+diff --git a/include/linux/msi.h b/include/linux/msi.h
+index a112b913fff94..15dd71817996f 100644
+--- a/include/linux/msi.h
++++ b/include/linux/msi.h
+@@ -631,6 +631,8 @@ int msi_domain_prepare_irqs(struct irq_domain *domain, struct device *dev,
+ int nvec, msi_alloc_info_t *args);
+ int msi_domain_populate_irqs(struct irq_domain *domain, struct device *dev,
+ int virq, int nvec, msi_alloc_info_t *args);
++void msi_domain_depopulate_descs(struct device *dev, int virq, int nvec);
++
+ struct irq_domain *
+ __platform_msi_create_device_domain(struct device *dev,
+ unsigned int nvec,
+diff --git a/include/linux/nfs_ssc.h b/include/linux/nfs_ssc.h
+index 75843c00f326a..22265b1ff0800 100644
+--- a/include/linux/nfs_ssc.h
++++ b/include/linux/nfs_ssc.h
+@@ -53,6 +53,7 @@ static inline void nfs42_ssc_close(struct file *filep)
+ if (nfs_ssc_client_tbl.ssc_nfs4_ops)
+ (*nfs_ssc_client_tbl.ssc_nfs4_ops->sco_close)(filep);
+ }
++#endif
+
+ struct nfsd4_ssc_umount_item {
+ struct list_head nsui_list;
+@@ -66,7 +67,6 @@ struct nfsd4_ssc_umount_item {
+ struct vfsmount *nsui_vfsmount;
+ char nsui_ipaddr[RPC_MAX_ADDRBUFLEN + 1];
+ };
+-#endif
+
+ /*
+ * NFS_FS
+diff --git a/include/linux/poison.h b/include/linux/poison.h
+index 2d3249eb0e62d..0e8a1f2ceb2f1 100644
+--- a/include/linux/poison.h
++++ b/include/linux/poison.h
+@@ -84,4 +84,7 @@
+ /********** kernel/bpf/ **********/
+ #define BPF_PTR_POISON ((void *)(0xeB9FUL + POISON_POINTER_DELTA))
+
++/********** VFS **********/
++#define VFS_PTR_POISON ((void *)(0xF5 + POISON_POINTER_DELTA))
++
+ #endif
+diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
+index 03abf883a281b..8d4bf695e7666 100644
+--- a/include/linux/rcupdate.h
++++ b/include/linux/rcupdate.h
+@@ -238,6 +238,7 @@ void synchronize_rcu_tasks_rude(void);
+
+ #define rcu_note_voluntary_context_switch(t) rcu_tasks_qs(t, false)
+ void exit_tasks_rcu_start(void);
++void exit_tasks_rcu_stop(void);
+ void exit_tasks_rcu_finish(void);
+ #else /* #ifdef CONFIG_TASKS_RCU_GENERIC */
+ #define rcu_tasks_classic_qs(t, preempt) do { } while (0)
+@@ -246,6 +247,7 @@ void exit_tasks_rcu_finish(void);
+ #define call_rcu_tasks call_rcu
+ #define synchronize_rcu_tasks synchronize_rcu
+ static inline void exit_tasks_rcu_start(void) { }
++static inline void exit_tasks_rcu_stop(void) { }
+ static inline void exit_tasks_rcu_finish(void) { }
+ #endif /* #else #ifdef CONFIG_TASKS_RCU_GENERIC */
+
+@@ -374,11 +376,18 @@ static inline int debug_lockdep_rcu_enabled(void)
+ * RCU_LOCKDEP_WARN - emit lockdep splat if specified condition is met
+ * @c: condition to check
+ * @s: informative message
++ *
++ * This checks debug_lockdep_rcu_enabled() before checking (c) to
++ * prevent early boot splats due to lockdep not yet being initialized,
++ * and rechecks it after checking (c) to prevent false-positive splats
++ * due to races with lockdep being disabled. See commit 3066820034b5dd
++ * ("rcu: Reject RCU_LOCKDEP_WARN() false positives") for more detail.
+ */
+ #define RCU_LOCKDEP_WARN(c, s) \
+ do { \
+ static bool __section(".data.unlikely") __warned; \
+- if ((c) && debug_lockdep_rcu_enabled() && !__warned) { \
++ if (debug_lockdep_rcu_enabled() && (c) && \
++ debug_lockdep_rcu_enabled() && !__warned) { \
+ __warned = true; \
+ lockdep_rcu_suspicious(__FILE__, __LINE__, s); \
+ } \
+diff --git a/include/linux/rmap.h b/include/linux/rmap.h
+index bd3504d11b155..2bdba700bc3e3 100644
+--- a/include/linux/rmap.h
++++ b/include/linux/rmap.h
+@@ -94,7 +94,7 @@ enum ttu_flags {
+ TTU_SPLIT_HUGE_PMD = 0x4, /* split huge PMD if any */
+ TTU_IGNORE_MLOCK = 0x8, /* ignore mlock */
+ TTU_SYNC = 0x10, /* avoid racy checks with PVMW_SYNC */
+- TTU_IGNORE_HWPOISON = 0x20, /* corrupted page is recoverable */
++ TTU_HWPOISON = 0x20, /* do convert pte to hwpoison entry */
+ TTU_BATCH_FLUSH = 0x40, /* Batch TLB flushes where possible
+ * and caller guarantees they will
+ * do a final flush if necessary */
+diff --git a/include/linux/transport_class.h b/include/linux/transport_class.h
+index 63076fb835e34..2efc271a96fa6 100644
+--- a/include/linux/transport_class.h
++++ b/include/linux/transport_class.h
+@@ -70,8 +70,14 @@ void transport_destroy_device(struct device *);
+ static inline int
+ transport_register_device(struct device *dev)
+ {
++ int ret;
++
+ transport_setup_device(dev);
+- return transport_add_device(dev);
++ ret = transport_add_device(dev);
++ if (ret)
++ transport_destroy_device(dev);
++
++ return ret;
+ }
+
+ static inline void
+diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
+index afb18f198843b..ab9728138ad67 100644
+--- a/include/linux/uaccess.h
++++ b/include/linux/uaccess.h
+@@ -329,6 +329,10 @@ copy_struct_from_user(void *dst, size_t ksize, const void __user *src,
+ size_t size = min(ksize, usize);
+ size_t rest = max(ksize, usize) - size;
+
++ /* Double check if ksize is larger than a known object size. */
++ if (WARN_ON_ONCE(ksize > __builtin_object_size(dst, 1)))
++ return -E2BIG;
++
+ /* Deal with trailing bytes. */
+ if (usize < ksize) {
+ memset(dst + size, 0, rest);
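
The copy_struct_from_user() guard above relies on
__builtin_object_size(dst, 1), which the compiler folds to the known size
of the destination (or (size_t)-1 when it cannot tell), so a ksize larger
than the object trips the warning before the memset/copy can overrun. The
builtin in isolation (GCC/Clang):

#include <stdio.h>

struct args { int a, b; };

int main(void)
{
	struct args a;
	/* mode 1: size of the closest surrounding subobject;
	 * (size_t)-1 if the compiler cannot determine it */
	printf("known:  %zu\n", __builtin_object_size(&a, 1));
	printf("sizeof: %zu\n", sizeof(a));
	return 0;
}
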
+diff --git a/include/net/sock.h b/include/net/sock.h
+index 5562097276336..c6584a3524638 100644
+--- a/include/net/sock.h
++++ b/include/net/sock.h
+@@ -1956,7 +1956,12 @@ void sk_common_release(struct sock *sk);
+ * Default socket callbacks and setup code
+ */
+
+-/* Initialise core socket variables */
++/* Initialise core socket variables using an explicit uid. */
++void sock_init_data_uid(struct socket *sock, struct sock *sk, kuid_t uid);
++
++/* Initialise core socket variables.
++ * Assumes struct socket *sock is embedded in a struct socket_alloc.
++ */
+ void sock_init_data(struct socket *sock, struct sock *sk);
+
+ /*
+diff --git a/include/sound/hda_codec.h b/include/sound/hda_codec.h
+index eba23daf2c290..bbb7805e85d8e 100644
+--- a/include/sound/hda_codec.h
++++ b/include/sound/hda_codec.h
+@@ -259,6 +259,7 @@ struct hda_codec {
+ unsigned int relaxed_resume:1; /* don't resume forcibly for jack */
+ unsigned int forced_resume:1; /* forced resume for jack */
+ unsigned int no_stream_clean_at_suspend:1; /* do not clean streams at suspend */
++ unsigned int ctl_dev_id:1; /* old control element id build behaviour */
+
+ #ifdef CONFIG_PM
+ unsigned long power_on_acct;
+diff --git a/include/sound/soc-dapm.h b/include/sound/soc-dapm.h
+index 77495e5988c12..64915ebd641ee 100644
+--- a/include/sound/soc-dapm.h
++++ b/include/sound/soc-dapm.h
+@@ -16,6 +16,7 @@
+ #include <sound/asoc.h>
+
+ struct device;
++struct snd_pcm_substream;
+ struct snd_soc_pcm_runtime;
+ struct soc_enum;
+
+diff --git a/include/trace/events/devlink.h b/include/trace/events/devlink.h
+index 24969184c5348..77ff7cfc6049a 100644
+--- a/include/trace/events/devlink.h
++++ b/include/trace/events/devlink.h
+@@ -88,7 +88,7 @@ TRACE_EVENT(devlink_health_report,
+ __string(bus_name, devlink_to_dev(devlink)->bus->name)
+ __string(dev_name, dev_name(devlink_to_dev(devlink)))
+ __string(driver_name, devlink_to_dev(devlink)->driver->name)
+- __string(reporter_name, msg)
++ __string(reporter_name, reporter_name)
+ __string(msg, msg)
+ ),
+
+diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
+index 2780bce62fafe..434f62e0fb72c 100644
+--- a/include/uapi/linux/io_uring.h
++++ b/include/uapi/linux/io_uring.h
+@@ -625,7 +625,7 @@ struct io_uring_buf_ring {
+ __u16 resv3;
+ __u16 tail;
+ };
+- struct io_uring_buf bufs[0];
++ __DECLARE_FLEX_ARRAY(struct io_uring_buf, bufs);
+ };
+ };
+
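
The io_uring_buf_ring change swaps the old zero-length-array idiom
(bufs[0]) for a real C99 flexible array member. A flexible array may not be
a struct's only member nor sit directly in a union, which is exactly what
__DECLARE_FLEX_ARRAY papers over by wrapping the array in an anonymous
struct with a zero-size placeholder. Roughly what the expansion looks like
(GNU C, since empty structs are an extension):

struct io_buf { unsigned long long addr; unsigned len; };

union ring_layout {
	struct {
		unsigned short head;
		unsigned short tail;
	};
	struct {
		struct { } __empty_bufs;  /* zero-size placeholder (GNU C) so
		                           * bufs[] is not the sole member */
		struct io_buf bufs[];
	};
};
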
+diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
+index 23105eb036fa6..0552e8dcf0cbf 100644
+--- a/include/uapi/linux/vfio.h
++++ b/include/uapi/linux/vfio.h
+@@ -49,7 +49,11 @@
+ /* Supports VFIO_DMA_UNMAP_FLAG_ALL */
+ #define VFIO_UNMAP_ALL 9
+
+-/* Supports the vaddr flag for DMA map and unmap */
++/*
++ * Supports the vaddr flag for DMA map and unmap. Not supported for mediated
++ * devices, so this capability is subject to change as groups are added or
++ * removed.
++ */
+ #define VFIO_UPDATE_VADDR 10
+
+ /*
+@@ -1343,8 +1347,7 @@ struct vfio_iommu_type1_info_dma_avail {
+ * Map process virtual addresses to IO virtual addresses using the
+ * provided struct vfio_dma_map. Caller sets argsz. READ &/ WRITE required.
+ *
+- * If flags & VFIO_DMA_MAP_FLAG_VADDR, update the base vaddr for iova, and
+- * unblock translation of host virtual addresses in the iova range. The vaddr
++ * If flags & VFIO_DMA_MAP_FLAG_VADDR, update the base vaddr for iova. The vaddr
+ * must have previously been invalidated with VFIO_DMA_UNMAP_FLAG_VADDR. To
+ * maintain memory consistency within the user application, the updated vaddr
+ * must address the same memory object as originally mapped. Failure to do so
+@@ -1395,9 +1398,9 @@ struct vfio_bitmap {
+ * must be 0. This cannot be combined with the get-dirty-bitmap flag.
+ *
+ * If flags & VFIO_DMA_UNMAP_FLAG_VADDR, do not unmap, but invalidate host
+- * virtual addresses in the iova range. Tasks that attempt to translate an
+- * iova's vaddr will block. DMA to already-mapped pages continues. This
+- * cannot be combined with the get-dirty-bitmap flag.
++ * virtual addresses in the iova range. DMA to already-mapped pages continues.
++ * Groups may not be added to the container while any addresses are invalid.
++ * This cannot be combined with the get-dirty-bitmap flag.
+ */
+ struct vfio_iommu_type1_dma_unmap {
+ __u32 argsz;
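
For reference, the two-step vaddr update these comments describe looks
roughly like this from userspace; container_fd, iova, size and
new_vaddr are assumed to be set up already:

    /* Invalidate the stale vaddr, then re-map the same iova range at
     * the new mapping of the same memory object. */
    struct vfio_iommu_type1_dma_unmap unmap = {
            .argsz = sizeof(unmap),
            .flags = VFIO_DMA_UNMAP_FLAG_VADDR,
            .iova = iova,
            .size = size,
    };
    ioctl(container_fd, VFIO_IOMMU_UNMAP_DMA, &unmap);

    struct vfio_iommu_type1_dma_map map = {
            .argsz = sizeof(map),
            .flags = VFIO_DMA_MAP_FLAG_VADDR | VFIO_DMA_MAP_FLAG_READ |
                     VFIO_DMA_MAP_FLAG_WRITE,
            .vaddr = (__u64)(uintptr_t)new_vaddr,
            .iova = iova,
            .size = size,
    };
    ioctl(container_fd, VFIO_IOMMU_MAP_DMA, &map);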
+diff --git a/include/ufs/ufshcd.h b/include/ufs/ufshcd.h
+index 727084cd79be4..97a09a14c6349 100644
+--- a/include/ufs/ufshcd.h
++++ b/include/ufs/ufshcd.h
+@@ -566,9 +566,9 @@ enum ufshcd_quirks {
+ UFSHCD_QUIRK_SKIP_DEF_UNIPRO_TIMEOUT_SETTING = 1 << 13,
+
+ /*
+- * This quirk allows only sg entries aligned with page size.
++ * Align DMA SG entries on a 4 KiB boundary.
+ */
+- UFSHCD_QUIRK_ALIGN_SG_WITH_PAGE_SIZE = 1 << 14,
++ UFSHCD_QUIRK_4KB_DMA_ALIGNMENT = 1 << 14,
+
+ /*
+ * This quirk needs to be enabled if the host controller does not
+diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
+index db623b3185c82..a4e9dbc7b67a8 100644
+--- a/io_uring/io_uring.c
++++ b/io_uring/io_uring.c
+@@ -1143,10 +1143,16 @@ static unsigned int handle_tw_list(struct llist_node *node,
+ /* if not contended, grab and improve batching */
+ *locked = mutex_trylock(&(*ctx)->uring_lock);
+ percpu_ref_get(&(*ctx)->refs);
+- }
++ } else if (!*locked)
++ *locked = mutex_trylock(&(*ctx)->uring_lock);
+ req->io_task_work.func(req, locked);
+ node = next;
+ count++;
++ if (unlikely(need_resched())) {
++ ctx_flush_and_put(*ctx, locked);
++ *ctx = NULL;
++ cond_resched();
++ }
+ }
+
+ return count;
+@@ -1722,7 +1728,7 @@ int io_req_prep_async(struct io_kiocb *req)
+ const struct io_op_def *def = &io_op_defs[req->opcode];
+
+ /* assign early for deferred execution for non-fixed file */
+- if (def->needs_file && !(req->flags & REQ_F_FIXED_FILE))
++ if (def->needs_file && !(req->flags & REQ_F_FIXED_FILE) && !req->file)
+ req->file = io_file_get_normal(req, req->cqe.fd);
+ if (!def->prep_async)
+ return 0;
+@@ -2790,7 +2796,7 @@ static __poll_t io_uring_poll(struct file *file, poll_table *wait)
+ * pushes them to do the flush.
+ */
+
+- if (io_cqring_events(ctx) || io_has_work(ctx))
++ if (__io_cqring_events_user(ctx) || io_has_work(ctx))
+ mask |= EPOLLIN | EPOLLRDNORM;
+
+ return mask;
+@@ -3053,6 +3059,7 @@ static __cold bool io_uring_try_cancel_requests(struct io_ring_ctx *ctx,
+ while (!wq_list_empty(&ctx->iopoll_list)) {
+ io_iopoll_try_reap_events(ctx);
+ ret = true;
++ cond_resched();
+ }
+ }
+
+diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
+index ab4b2a1c3b7e8..87426c0c6d3e5 100644
+--- a/io_uring/io_uring.h
++++ b/io_uring/io_uring.h
+@@ -3,6 +3,7 @@
+
+ #include <linux/errno.h>
+ #include <linux/lockdep.h>
++#include <linux/resume_user_mode.h>
+ #include <linux/io_uring_types.h>
+ #include <uapi/linux/eventpoll.h>
+ #include "io-wq.h"
+@@ -270,6 +271,15 @@ static inline int io_run_task_work(void)
+ */
+ if (test_thread_flag(TIF_NOTIFY_SIGNAL))
+ clear_notify_signal();
++ /*
++ * PF_IO_WORKER never returns to userspace, so check here if we have
++ * notify work that needs processing.
++ */
++ if (current->flags & PF_IO_WORKER &&
++ test_thread_flag(TIF_NOTIFY_RESUME)) {
++ __set_current_state(TASK_RUNNING);
++ resume_user_mode_work(NULL);
++ }
+ if (task_work_pending(current)) {
+ __set_current_state(TASK_RUNNING);
+ task_work_run();
+diff --git a/io_uring/net.c b/io_uring/net.c
+index 90326b2799657..02587f7d5908d 100644
+--- a/io_uring/net.c
++++ b/io_uring/net.c
+@@ -568,7 +568,7 @@ int io_recvmsg_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ sr->flags = READ_ONCE(sqe->ioprio);
+ if (sr->flags & ~(RECVMSG_FLAGS))
+ return -EINVAL;
+- sr->msg_flags = READ_ONCE(sqe->msg_flags) | MSG_NOSIGNAL;
++ sr->msg_flags = READ_ONCE(sqe->msg_flags);
+ if (sr->msg_flags & MSG_DONTWAIT)
+ req->flags |= REQ_F_NOWAIT;
+ if (sr->msg_flags & MSG_ERRQUEUE)
+diff --git a/io_uring/opdef.c b/io_uring/opdef.c
+index 3aa0d65c50e34..be45b76649a08 100644
+--- a/io_uring/opdef.c
++++ b/io_uring/opdef.c
+@@ -313,6 +313,7 @@ const struct io_op_def io_op_defs[] = {
+ },
+ [IORING_OP_MADVISE] = {
+ .name = "MADVISE",
++ .audit_skip = 1,
+ .prep = io_madvise_prep,
+ .issue = io_madvise,
+ },
+diff --git a/io_uring/poll.c b/io_uring/poll.c
+index 2ac1366adbd77..fea739eef56f4 100644
+--- a/io_uring/poll.c
++++ b/io_uring/poll.c
+@@ -650,6 +650,14 @@ static void io_async_queue_proc(struct file *file, struct wait_queue_head *head,
+ __io_queue_proc(&apoll->poll, pt, head, &apoll->double_poll);
+ }
+
++/*
++ * We can't reliably detect loops where a poll trigger keeps firing and
++ * the subsequent issue attempt keeps failing. Rather than fail such
++ * requests immediately, allow a certain number of retries before giving
++ * up. Since this condition should _rarely_ trigger even once, a larger
++ * value is fine.
++ */
++#define APOLL_MAX_RETRY 128
++
+ static struct async_poll *io_req_alloc_apoll(struct io_kiocb *req,
+ unsigned issue_flags)
+ {
+@@ -665,14 +673,18 @@ static struct async_poll *io_req_alloc_apoll(struct io_kiocb *req,
+ if (entry == NULL)
+ goto alloc_apoll;
+ apoll = container_of(entry, struct async_poll, cache);
++ apoll->poll.retries = APOLL_MAX_RETRY;
+ } else {
+ alloc_apoll:
+ apoll = kmalloc(sizeof(*apoll), GFP_ATOMIC);
+ if (unlikely(!apoll))
+ return NULL;
++ apoll->poll.retries = APOLL_MAX_RETRY;
+ }
+ apoll->double_poll = NULL;
+ req->apoll = apoll;
++ if (unlikely(!--apoll->poll.retries))
++ return NULL;
+ return apoll;
+ }
+
+@@ -694,8 +706,6 @@ int io_arm_poll_handler(struct io_kiocb *req, unsigned issue_flags)
+ return IO_APOLL_ABORTED;
+ if (!file_can_poll(req->file))
+ return IO_APOLL_ABORTED;
+- if ((req->flags & (REQ_F_POLLED|REQ_F_PARTIAL_IO)) == REQ_F_POLLED)
+- return IO_APOLL_ABORTED;
+ if (!(req->flags & REQ_F_APOLL_MULTISHOT))
+ mask |= EPOLLONESHOT;
+
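
The REQ_F_POLLED check removed just above used to abort a request on
its second poll arming; the retry counter relaxes that into a fixed
budget. Stripped of the io_uring types, the guard is just a seeded
countdown (names here are illustrative):

    #define MAX_RETRY       128     /* seeded at allocation time */

    /* Decrement on every arming attempt; once the budget hits zero the
     * caller gives up and the request is failed (IO_APOLL_ABORTED). */
    static bool may_rearm(int *retries)
    {
            return --(*retries) > 0;
    }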
+diff --git a/io_uring/poll.h b/io_uring/poll.h
+index 5f3bae50fc81a..b2393b403a2c2 100644
+--- a/io_uring/poll.h
++++ b/io_uring/poll.h
+@@ -12,6 +12,7 @@ struct io_poll {
+ struct file *file;
+ struct wait_queue_head *head;
+ __poll_t events;
++ int retries;
+ struct wait_queue_entry wait;
+ };
+
+diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
+index 18de10c68a151..4cbf3ad725d13 100644
+--- a/io_uring/rsrc.c
++++ b/io_uring/rsrc.c
+@@ -1162,14 +1162,17 @@ struct page **io_pin_pages(unsigned long ubuf, unsigned long len, int *npages)
+ pret = pin_user_pages(ubuf, nr_pages, FOLL_WRITE | FOLL_LONGTERM,
+ pages, vmas);
+ if (pret == nr_pages) {
++ struct file *file = vmas[0]->vm_file;
++
+ /* don't support file backed memory */
+ for (i = 0; i < nr_pages; i++) {
+- struct vm_area_struct *vma = vmas[i];
+-
+- if (vma_is_shmem(vma))
++ if (vmas[i]->vm_file != file) {
++ ret = -EINVAL;
++ break;
++ }
++ if (!file)
+ continue;
+- if (vma->vm_file &&
+- !is_file_hugepages(vma->vm_file)) {
++ if (!vma_is_shmem(vmas[i]) && !is_file_hugepages(file)) {
+ ret = -EOPNOTSUPP;
+ break;
+ }
+diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
+index b7017cae6fd1e..530e200fbc477 100644
+--- a/kernel/bpf/btf.c
++++ b/kernel/bpf/btf.c
+@@ -5573,6 +5573,7 @@ btf_get_prog_ctx_type(struct bpf_verifier_log *log, const struct btf *btf,
+ if (!ctx_struct)
+ /* should not happen */
+ return NULL;
++again:
+ ctx_tname = btf_name_by_offset(btf_vmlinux, ctx_struct->name_off);
+ if (!ctx_tname) {
+ /* should not happen */
+@@ -5586,8 +5587,16 @@ btf_get_prog_ctx_type(struct bpf_verifier_log *log, const struct btf *btf,
+ * int socket_filter_bpf_prog(struct __sk_buff *skb)
+ * { // no fields of skb are ever used }
+ */
+- if (strcmp(ctx_tname, tname))
+- return NULL;
++ if (strcmp(ctx_tname, tname)) {
++ /* bpf_user_pt_regs_t is a typedef, so resolve it to
++ * underlying struct and check name again
++ */
++ if (!btf_type_is_modifier(ctx_struct))
++ return NULL;
++ while (btf_type_is_modifier(ctx_struct))
++ ctx_struct = btf_type_by_id(btf_vmlinux, ctx_struct->type);
++ goto again;
++ }
+ return ctx_type;
+ }
+
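
The new `goto again` handles program context types that are typedefs
rather than structs (e.g. bpf_user_pt_regs_t): the verifier peels BTF
modifiers until it reaches the concrete type, then re-compares names.
The peeling idiom in isolation, as a hypothetical helper:

    /* Resolve a chain of BTF modifiers (typedef/const/volatile/
     * restrict) down to the concrete type they ultimately name. */
    static const struct btf_type *
    btf_resolve_modifiers(const struct btf *btf, const struct btf_type *t)
    {
            while (btf_type_is_modifier(t))
                    t = btf_type_by_id(btf, t->type);
            return t;
    }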
+diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
+index 66bded1443773..5dfcb5ad0d068 100644
+--- a/kernel/bpf/hashtab.c
++++ b/kernel/bpf/hashtab.c
+@@ -1004,8 +1004,6 @@ static struct htab_elem *alloc_htab_elem(struct bpf_htab *htab, void *key,
+ l_new = ERR_PTR(-ENOMEM);
+ goto dec_count;
+ }
+- check_and_init_map_value(&htab->map,
+- l_new->key + round_up(key_size, 8));
+ }
+
+ memcpy(l_new->key, key, key_size);
+@@ -1592,6 +1590,7 @@ static int __htab_map_lookup_and_delete_elem(struct bpf_map *map, void *key,
+ else
+ copy_map_value(map, value, l->key +
+ roundup_key_size);
++ /* Zeroing special fields in the temp buffer */
+ check_and_init_map_value(map, value);
+ }
+
+@@ -1792,6 +1791,7 @@ again_nocopy:
+ true);
+ else
+ copy_map_value(map, dst_val, value);
++ /* Zeroing special fields in the temp buffer */
+ check_and_init_map_value(map, dst_val);
+ }
+ if (do_delete) {
+diff --git a/kernel/bpf/memalloc.c b/kernel/bpf/memalloc.c
+index 1db156405b68b..7b784823a52ef 100644
+--- a/kernel/bpf/memalloc.c
++++ b/kernel/bpf/memalloc.c
+@@ -143,7 +143,7 @@ static void *__alloc(struct bpf_mem_cache *c, int node)
+ return obj;
+ }
+
+- return kmalloc_node(c->unit_size, flags, node);
++ return kmalloc_node(c->unit_size, flags | __GFP_ZERO, node);
+ }
+
+ static struct mem_cgroup *get_memcg(const struct bpf_mem_cache *c)
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 7ee2188272597..68455fd56eea5 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -638,11 +638,34 @@ static void print_liveness(struct bpf_verifier_env *env,
+ verbose(env, "D");
+ }
+
+-static int get_spi(s32 off)
++static int __get_spi(s32 off)
+ {
+ return (-off - 1) / BPF_REG_SIZE;
+ }
+
++static int dynptr_get_spi(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
++{
++ int off, spi;
++
++ if (!tnum_is_const(reg->var_off)) {
++ verbose(env, "dynptr has to be at a constant offset\n");
++ return -EINVAL;
++ }
++
++ off = reg->off + reg->var_off.value;
++ if (off % BPF_REG_SIZE) {
++ verbose(env, "cannot pass in dynptr at an offset=%d\n", off);
++ return -EINVAL;
++ }
++
++ spi = __get_spi(off);
++ if (spi < 1) {
++ verbose(env, "cannot pass in dynptr at an offset=%d\n", off);
++ return -EINVAL;
++ }
++ return spi;
++}
++
+ static bool is_spi_bounds_valid(struct bpf_func_state *state, int spi, int nr_slots)
+ {
+ int allocated_slots = state->allocated_stack / BPF_REG_SIZE;
+@@ -746,6 +769,8 @@ static void mark_dynptr_cb_reg(struct bpf_reg_state *reg,
+ __mark_dynptr_reg(reg, type, true);
+ }
+
++static int destroy_if_dynptr_stack_slot(struct bpf_verifier_env *env,
++ struct bpf_func_state *state, int spi);
+
+ static int mark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
+ enum bpf_arg_type arg_type, int insn_idx)
+@@ -754,7 +779,9 @@ static int mark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_
+ enum bpf_dynptr_type type;
+ int spi, i, id;
+
+- spi = get_spi(reg->off);
++ spi = dynptr_get_spi(env, reg);
++ if (spi < 0)
++ return spi;
+
+ if (!is_spi_bounds_valid(state, spi, BPF_DYNPTR_NR_SLOTS))
+ return -EINVAL;
+@@ -781,6 +808,9 @@ static int mark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_
+ state->stack[spi - 1].spilled_ptr.ref_obj_id = id;
+ }
+
++ state->stack[spi].spilled_ptr.live |= REG_LIVE_WRITTEN;
++ state->stack[spi - 1].spilled_ptr.live |= REG_LIVE_WRITTEN;
++
+ return 0;
+ }
+
+@@ -789,7 +819,9 @@ static int unmark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_re
+ struct bpf_func_state *state = func(env, reg);
+ int spi, i;
+
+- spi = get_spi(reg->off);
++ spi = dynptr_get_spi(env, reg);
++ if (spi < 0)
++ return spi;
+
+ if (!is_spi_bounds_valid(state, spi, BPF_DYNPTR_NR_SLOTS))
+ return -EINVAL;
+@@ -805,6 +837,80 @@ static int unmark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_re
+
+ __mark_reg_not_init(env, &state->stack[spi].spilled_ptr);
+ __mark_reg_not_init(env, &state->stack[spi - 1].spilled_ptr);
++
++ /* Why do we need to set REG_LIVE_WRITTEN for STACK_INVALID slot?
++ *
++ * While we don't allow reading STACK_INVALID, it is still possible to
++ * do <8 byte writes marking some but not all slots as STACK_MISC. Then,
++ * helpers or insns can do partial read of that part without failing,
++ * but check_stack_range_initialized, check_stack_read_var_off, and
++ * check_stack_read_fixed_off will do mark_reg_read for all 8-bytes of
++ * the slot conservatively. Hence we need to prevent those liveness
++ * marking walks.
++ *
++ * This was not a problem before because STACK_INVALID is only set by
++ * default (where the default reg state has its reg->parent as NULL), or
++ * in clean_live_states after REG_LIVE_DONE (at which point
++ * mark_reg_read won't walk reg->parent chain), but not randomly during
++ * verifier state exploration (like we did above). Hence, for our case
++ * parentage chain will still be live (i.e. reg->parent may be
++ * non-NULL), while earlier reg->parent was NULL, so we need
++ * REG_LIVE_WRITTEN to screen off read marker propagation when it is
++ * done later on reads or by mark_dynptr_read, which would otherwise
++ * unnecessarily mark registers in the verifier state.
++ */
++ state->stack[spi].spilled_ptr.live |= REG_LIVE_WRITTEN;
++ state->stack[spi - 1].spilled_ptr.live |= REG_LIVE_WRITTEN;
++
++ return 0;
++}
++
++static void __mark_reg_unknown(const struct bpf_verifier_env *env,
++ struct bpf_reg_state *reg);
++
++static int destroy_if_dynptr_stack_slot(struct bpf_verifier_env *env,
++ struct bpf_func_state *state, int spi)
++{
++ int i;
++
++ /* We always ensure that STACK_DYNPTR is never set partially,
++ * hence just checking for slot_type[0] is enough. This is
++ * different for STACK_SPILL, where it may be only set for
++ * 1 byte, so code has to use is_spilled_reg.
++ */
++ if (state->stack[spi].slot_type[0] != STACK_DYNPTR)
++ return 0;
++
++ /* Reposition spi to first slot */
++ if (!state->stack[spi].spilled_ptr.dynptr.first_slot)
++ spi = spi + 1;
++
++ if (dynptr_type_refcounted(state->stack[spi].spilled_ptr.dynptr.type)) {
++ verbose(env, "cannot overwrite referenced dynptr\n");
++ return -EINVAL;
++ }
++
++ mark_stack_slot_scratched(env, spi);
++ mark_stack_slot_scratched(env, spi - 1);
++
++ /* Writing partially to one dynptr stack slot destroys both. */
++ for (i = 0; i < BPF_REG_SIZE; i++) {
++ state->stack[spi].slot_type[i] = STACK_INVALID;
++ state->stack[spi - 1].slot_type[i] = STACK_INVALID;
++ }
++
++ /* TODO: Invalidate any slices associated with this dynptr */
++
++ /* Do not release reference state, we are destroying dynptr on stack,
++ * not using some helper to release it. Just reset register.
++ */
++ __mark_reg_not_init(env, &state->stack[spi].spilled_ptr);
++ __mark_reg_not_init(env, &state->stack[spi - 1].spilled_ptr);
++
++ /* Same reason as unmark_stack_slots_dynptr above */
++ state->stack[spi].spilled_ptr.live |= REG_LIVE_WRITTEN;
++ state->stack[spi - 1].spilled_ptr.live |= REG_LIVE_WRITTEN;
++
+ return 0;
+ }
+
+@@ -816,7 +922,11 @@ static bool is_dynptr_reg_valid_uninit(struct bpf_verifier_env *env, struct bpf_
+ if (reg->type == CONST_PTR_TO_DYNPTR)
+ return false;
+
+- spi = get_spi(reg->off);
++ spi = dynptr_get_spi(env, reg);
++ if (spi < 0)
++ return false;
++
++ /* We will do check_mem_access to check and update stack bounds later */
+ if (!is_spi_bounds_valid(state, spi, BPF_DYNPTR_NR_SLOTS))
+ return true;
+
+@@ -832,14 +942,15 @@ static bool is_dynptr_reg_valid_uninit(struct bpf_verifier_env *env, struct bpf_
+ static bool is_dynptr_reg_valid_init(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
+ {
+ struct bpf_func_state *state = func(env, reg);
+- int spi;
+- int i;
++ int spi, i;
+
+ /* This already represents first slot of initialized bpf_dynptr */
+ if (reg->type == CONST_PTR_TO_DYNPTR)
+ return true;
+
+- spi = get_spi(reg->off);
++ spi = dynptr_get_spi(env, reg);
++ if (spi < 0)
++ return false;
+ if (!is_spi_bounds_valid(state, spi, BPF_DYNPTR_NR_SLOTS) ||
+ !state->stack[spi].spilled_ptr.dynptr.first_slot)
+ return false;
+@@ -868,7 +979,9 @@ static bool is_dynptr_type_expected(struct bpf_verifier_env *env, struct bpf_reg
+ if (reg->type == CONST_PTR_TO_DYNPTR) {
+ return reg->dynptr.type == dynptr_type;
+ } else {
+- spi = get_spi(reg->off);
++ spi = dynptr_get_spi(env, reg);
++ if (spi < 0)
++ return false;
+ return state->stack[spi].spilled_ptr.dynptr.type == dynptr_type;
+ }
+ }
+@@ -2386,6 +2499,32 @@ static int mark_reg_read(struct bpf_verifier_env *env,
+ return 0;
+ }
+
++static int mark_dynptr_read(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
++{
++ struct bpf_func_state *state = func(env, reg);
++ int spi, ret;
++
++ /* For CONST_PTR_TO_DYNPTR, it must have already been done by
++ * check_reg_arg in check_helper_call and mark_btf_func_reg_size in
++ * check_kfunc_call.
++ */
++ if (reg->type == CONST_PTR_TO_DYNPTR)
++ return 0;
++ spi = dynptr_get_spi(env, reg);
++ if (spi < 0)
++ return spi;
++ /* Caller ensures dynptr is valid and initialized, which means spi is in
++ * bounds and spi is the first dynptr slot. Simply mark stack slot as
++ * read.
++ */
++ ret = mark_reg_read(env, &state->stack[spi].spilled_ptr,
++ state->stack[spi].spilled_ptr.parent, REG_LIVE_READ64);
++ if (ret)
++ return ret;
++ return mark_reg_read(env, &state->stack[spi - 1].spilled_ptr,
++ state->stack[spi - 1].spilled_ptr.parent, REG_LIVE_READ64);
++}
++
+ /* This function is supposed to be used by the following 32-bit optimization
+ * code only. It returns TRUE if the source or destination register operates
+ * on 64-bit, otherwise return FALSE.
+@@ -3318,6 +3457,10 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
+ env->insn_aux_data[insn_idx].sanitize_stack_spill = true;
+ }
+
++ err = destroy_if_dynptr_stack_slot(env, state, spi);
++ if (err)
++ return err;
++
+ mark_stack_slot_scratched(env, spi);
+ if (reg && !(off % BPF_REG_SIZE) && register_is_bounded(reg) &&
+ !register_is_null(reg) && env->bpf_capable) {
+@@ -3431,6 +3574,14 @@ static int check_stack_write_var_off(struct bpf_verifier_env *env,
+ if (err)
+ return err;
+
++ for (i = min_off; i < max_off; i++) {
++ int spi;
++
++ spi = __get_spi(i);
++ err = destroy_if_dynptr_stack_slot(env, state, spi);
++ if (err)
++ return err;
++ }
+
+ /* Variable offset writes destroy any spilled pointers in range. */
+ for (i = min_off; i < max_off; i++) {
+@@ -5458,6 +5609,31 @@ static int check_stack_range_initialized(
+ }
+
+ if (meta && meta->raw_mode) {
++ /* Ensure we won't be overwriting dynptrs when simulating byte
++ * by byte access in check_helper_call using meta.access_size.
++ * This would be a problem if we have a helper in the future
++ * which takes:
++ *
++ * helper(uninit_mem, len, dynptr)
++ *
++ * Now, uninit_mem may overlap with the dynptr pointer. Hence, it
++ * may end up writing to the dynptr itself when touching memory from
++ * arg 1. This can be relaxed on a case-by-case basis for known
++ * safe cases, but reject by default due to the possibility of
++ * aliasing.
++ */
++ for (i = min_off; i < max_off + access_size; i++) {
++ int stack_off = -i - 1;
++
++ spi = __get_spi(i);
++ /* raw_mode may write past allocated_stack */
++ if (state->allocated_stack <= stack_off)
++ continue;
++ if (state->stack[spi].slot_type[stack_off % BPF_REG_SIZE] == STACK_DYNPTR) {
++ verbose(env, "potential write to dynptr at off=%d disallowed\n", i);
++ return -EACCES;
++ }
++ }
+ meta->access_size = access_size;
+ meta->regno = regno;
+ return 0;
+@@ -5955,12 +6131,15 @@ int process_dynptr_func(struct bpf_verifier_env *env, int regno,
+ }
+ /* CONST_PTR_TO_DYNPTR already has fixed and var_off as 0 due to
+ * check_func_arg_reg_off's logic. We only need to check offset
+- * alignment for PTR_TO_STACK.
++ * and its alignment for PTR_TO_STACK.
+ */
+- if (reg->type == PTR_TO_STACK && (reg->off % BPF_REG_SIZE)) {
+- verbose(env, "cannot pass in dynptr at an offset=%d\n", reg->off);
+- return -EINVAL;
++ if (reg->type == PTR_TO_STACK) {
++ int err = dynptr_get_spi(env, reg);
++
++ if (err < 0)
++ return err;
+ }
++
+ /* MEM_UNINIT - Points to memory that is an appropriate candidate for
+ * constructing a mutable bpf_dynptr object.
+ *
+@@ -5992,6 +6171,8 @@ int process_dynptr_func(struct bpf_verifier_env *env, int regno,
+
+ meta->uninit_dynptr_regno = regno;
+ } else /* MEM_RDONLY and None case from above */ {
++ int err;
++
+ /* For the reg->type == PTR_TO_STACK case, bpf_dynptr is never const */
+ if (reg->type == CONST_PTR_TO_DYNPTR && !(arg_type & MEM_RDONLY)) {
+ verbose(env, "cannot pass pointer to const bpf_dynptr, the helper mutates it\n");
+@@ -6025,6 +6206,10 @@ int process_dynptr_func(struct bpf_verifier_env *env, int regno,
+ err_extra, regno);
+ return -EINVAL;
+ }
++
++ err = mark_dynptr_read(env, reg);
++ if (err)
++ return err;
+ }
+ return 0;
+ }
+@@ -6362,15 +6547,16 @@ int check_func_arg_reg_off(struct bpf_verifier_env *env,
+ }
+ }
+
+-static u32 dynptr_ref_obj_id(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
++static int dynptr_ref_obj_id(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
+ {
+ struct bpf_func_state *state = func(env, reg);
+ int spi;
+
+ if (reg->type == CONST_PTR_TO_DYNPTR)
+ return reg->ref_obj_id;
+-
+- spi = get_spi(reg->off);
++ spi = dynptr_get_spi(env, reg);
++ if (spi < 0)
++ return spi;
+ return state->stack[spi].spilled_ptr.ref_obj_id;
+ }
+
+@@ -6444,7 +6630,9 @@ skip_type_check:
+ * PTR_TO_STACK.
+ */
+ if (reg->type == PTR_TO_STACK) {
+- spi = get_spi(reg->off);
++ spi = dynptr_get_spi(env, reg);
++ if (spi < 0)
++ return spi;
+ if (!is_spi_bounds_valid(state, spi, BPF_DYNPTR_NR_SLOTS) ||
+ !state->stack[spi].spilled_ptr.ref_obj_id) {
+ verbose(env, "arg %d is an unacquired reference\n", regno);
+@@ -7933,13 +8121,19 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
+ for (i = 0; i < MAX_BPF_FUNC_REG_ARGS; i++) {
+ if (arg_type_is_dynptr(fn->arg_type[i])) {
+ struct bpf_reg_state *reg = &regs[BPF_REG_1 + i];
++ int ref_obj_id;
+
+ if (meta.ref_obj_id) {
+ verbose(env, "verifier internal error: meta.ref_obj_id already set\n");
+ return -EFAULT;
+ }
+
+- meta.ref_obj_id = dynptr_ref_obj_id(env, reg);
++ ref_obj_id = dynptr_ref_obj_id(env, reg);
++ if (ref_obj_id < 0) {
++ verbose(env, "verifier internal error: failed to obtain dynptr ref_obj_id\n");
++ return ref_obj_id;
++ }
++ meta.ref_obj_id = ref_obj_id;
+ break;
+ }
+ }
+@@ -13231,10 +13425,9 @@ static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
+ return false;
+ if (i % BPF_REG_SIZE != BPF_REG_SIZE - 1)
+ continue;
+- if (!is_spilled_reg(&old->stack[spi]))
+- continue;
+- if (!regsafe(env, &old->stack[spi].spilled_ptr,
+- &cur->stack[spi].spilled_ptr, idmap))
++ /* Both old and cur have the same slot_type */
++ switch (old->stack[spi].slot_type[BPF_REG_SIZE - 1]) {
++ case STACK_SPILL:
+ /* when explored and current stack slot are both storing
+ * spilled registers, check that stored pointers types
+ * are the same as well.
+@@ -13245,7 +13438,30 @@ static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
+ * such verifier states are not equivalent.
+ * return false to continue verification of this path
+ */
++ if (!regsafe(env, &old->stack[spi].spilled_ptr,
++ &cur->stack[spi].spilled_ptr, idmap))
++ return false;
++ break;
++ case STACK_DYNPTR:
++ {
++ const struct bpf_reg_state *old_reg, *cur_reg;
++
++ old_reg = &old->stack[spi].spilled_ptr;
++ cur_reg = &cur->stack[spi].spilled_ptr;
++ if (old_reg->dynptr.type != cur_reg->dynptr.type ||
++ old_reg->dynptr.first_slot != cur_reg->dynptr.first_slot ||
++ !check_ids(old_reg->ref_obj_id, cur_reg->ref_obj_id, idmap))
++ return false;
++ break;
++ }
++ case STACK_MISC:
++ case STACK_ZERO:
++ case STACK_INVALID:
++ continue;
++ /* Ensure that new unhandled slot types return false by default */
++ default:
+ return false;
++ }
+ }
+ return true;
+ }
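
To make the slot arithmetic in dynptr_get_spi() concrete: with
BPF_REG_SIZE == 8, a 16-byte dynptr at fp-16 has off = -16 (8-byte
aligned) and spi = (16 - 1) / 8 = 1, so it occupies slot 1 (fp-16 ..
fp-9) and slot 0 (fp-8 .. fp-1). A dynptr at fp-8 would yield spi = 0
and leave no room for its second slot, which is exactly what the
"spi < 1" rejection catches.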
+diff --git a/kernel/context_tracking.c b/kernel/context_tracking.c
+index 77978e3723771..a09f1c19336ae 100644
+--- a/kernel/context_tracking.c
++++ b/kernel/context_tracking.c
+@@ -510,7 +510,7 @@ void noinstr __ct_user_enter(enum ctx_state state)
+ * In this case we don't care about any concurrency/ordering.
+ */
+ if (!IS_ENABLED(CONFIG_CONTEXT_TRACKING_IDLE))
+- atomic_set(&ct->state, state);
++ arch_atomic_set(&ct->state, state);
+ } else {
+ /*
+ * Even if context tracking is disabled on this CPU, because it's outside
+@@ -527,7 +527,7 @@ void noinstr __ct_user_enter(enum ctx_state state)
+ */
+ if (!IS_ENABLED(CONFIG_CONTEXT_TRACKING_IDLE)) {
+ /* Tracking for vtime only, no concurrent RCU EQS accounting */
+- atomic_set(&ct->state, state);
++ arch_atomic_set(&ct->state, state);
+ } else {
+ /*
+ * Tracking for vtime and RCU EQS. Make sure we don't race
+@@ -535,7 +535,7 @@ void noinstr __ct_user_enter(enum ctx_state state)
+ * RCU only requires RCU_DYNTICKS_IDX increments to be fully
+ * ordered.
+ */
+- atomic_add(state, &ct->state);
++ arch_atomic_add(state, &ct->state);
+ }
+ }
+ }
+@@ -630,12 +630,12 @@ void noinstr __ct_user_exit(enum ctx_state state)
+ * In this case we don't care about any concurrency/ordering.
+ */
+ if (!IS_ENABLED(CONFIG_CONTEXT_TRACKING_IDLE))
+- atomic_set(&ct->state, CONTEXT_KERNEL);
++ arch_atomic_set(&ct->state, CONTEXT_KERNEL);
+
+ } else {
+ if (!IS_ENABLED(CONFIG_CONTEXT_TRACKING_IDLE)) {
+ /* Tracking for vtime only, no concurrent RCU EQS accounting */
+- atomic_set(&ct->state, CONTEXT_KERNEL);
++ arch_atomic_set(&ct->state, CONTEXT_KERNEL);
+ } else {
+ /*
+ * Tracking for vtime and RCU EQS. Make sure we don't race
+@@ -643,7 +643,7 @@ void noinstr __ct_user_exit(enum ctx_state state)
+ * RCU only requires RCU_DYNTICKS_IDX increments to be fully
+ * ordered.
+ */
+- atomic_sub(state, &ct->state);
++ arch_atomic_sub(state, &ct->state);
+ }
+ }
+ }
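
The switch to arch_atomic_*() matters because __ct_user_enter() and
__ct_user_exit() are noinstr: the plain atomic_*() wrappers may carry
KASAN/KCSAN instrumentation that must not run while context tracking
still considers the CPU to be in user or idle mode. Reduced to a sketch
(example_set_state() is hypothetical):

    /* noinstr code must avoid instrumented helpers; the raw
     * arch_atomic_*() variants skip the sanitizer hooks. */
    static noinstr void example_set_state(atomic_t *state, int val)
    {
            arch_atomic_set(state, val);
    }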
+diff --git a/kernel/exit.c b/kernel/exit.c
+index 15dc2ec80c467..f2afdb0add7c5 100644
+--- a/kernel/exit.c
++++ b/kernel/exit.c
+@@ -807,6 +807,8 @@ void __noreturn do_exit(long code)
+ struct task_struct *tsk = current;
+ int group_dead;
+
++ WARN_ON(irqs_disabled());
++
+ synchronize_group_exit(tsk, code);
+
+ WARN_ON(tsk->plug);
+@@ -938,6 +940,11 @@ void __noreturn make_task_dead(int signr)
+ if (unlikely(!tsk->pid))
+ panic("Attempted to kill the idle task!");
+
++ if (unlikely(irqs_disabled())) {
++ pr_info("note: %s[%d] exited with irqs disabled\n",
++ current->comm, task_pid_nr(current));
++ local_irq_enable();
++ }
+ if (unlikely(in_atomic())) {
+ pr_info("note: %s[%d] exited with preempt_count %d\n",
+ current->comm, task_pid_nr(current),
+@@ -1898,7 +1905,14 @@ bool thread_group_exited(struct pid *pid)
+ }
+ EXPORT_SYMBOL(thread_group_exited);
+
+-__weak void abort(void)
++/*
++ * This needs to be __function_aligned as GCC implicitly makes any
++ * implementation of abort() cold and drops alignment specified by
++ * -falign-functions=N.
++ *
++ * See https://gcc.gnu.org/bugzilla/show_bug.cgi?id=88345#c11
++ */
++__weak __function_aligned void abort(void)
+ {
+ BUG();
+
+diff --git a/kernel/irq/irqdomain.c b/kernel/irq/irqdomain.c
+index 798a9042421fc..8e14805c55083 100644
+--- a/kernel/irq/irqdomain.c
++++ b/kernel/irq/irqdomain.c
+@@ -25,6 +25,9 @@ static DEFINE_MUTEX(irq_domain_mutex);
+
+ static struct irq_domain *irq_default_domain;
+
++static int irq_domain_alloc_irqs_locked(struct irq_domain *domain, int irq_base,
++ unsigned int nr_irqs, int node, void *arg,
++ bool realloc, const struct irq_affinity_desc *affinity);
+ static void irq_domain_check_hierarchy(struct irq_domain *domain);
+
+ struct irqchip_fwid {
+@@ -123,23 +126,12 @@ void irq_domain_free_fwnode(struct fwnode_handle *fwnode)
+ }
+ EXPORT_SYMBOL_GPL(irq_domain_free_fwnode);
+
+-/**
+- * __irq_domain_add() - Allocate a new irq_domain data structure
+- * @fwnode: firmware node for the interrupt controller
+- * @size: Size of linear map; 0 for radix mapping only
+- * @hwirq_max: Maximum number of interrupts supported by controller
+- * @direct_max: Maximum value of direct maps; Use ~0 for no limit; 0 for no
+- * direct mapping
+- * @ops: domain callbacks
+- * @host_data: Controller private data pointer
+- *
+- * Allocates and initializes an irq_domain structure.
+- * Returns pointer to IRQ domain, or NULL on failure.
+- */
+-struct irq_domain *__irq_domain_add(struct fwnode_handle *fwnode, unsigned int size,
+- irq_hw_number_t hwirq_max, int direct_max,
+- const struct irq_domain_ops *ops,
+- void *host_data)
++static struct irq_domain *__irq_domain_create(struct fwnode_handle *fwnode,
++ unsigned int size,
++ irq_hw_number_t hwirq_max,
++ int direct_max,
++ const struct irq_domain_ops *ops,
++ void *host_data)
+ {
+ struct irqchip_fwid *fwid;
+ struct irq_domain *domain;
+@@ -227,12 +219,44 @@ struct irq_domain *__irq_domain_add(struct fwnode_handle *fwnode, unsigned int s
+
+ irq_domain_check_hierarchy(domain);
+
++ return domain;
++}
++
++static void __irq_domain_publish(struct irq_domain *domain)
++{
+ mutex_lock(&irq_domain_mutex);
+ debugfs_add_domain_dir(domain);
+ list_add(&domain->link, &irq_domain_list);
+ mutex_unlock(&irq_domain_mutex);
+
+ pr_debug("Added domain %s\n", domain->name);
++}
++
++/**
++ * __irq_domain_add() - Allocate a new irq_domain data structure
++ * @fwnode: firmware node for the interrupt controller
++ * @size: Size of linear map; 0 for radix mapping only
++ * @hwirq_max: Maximum number of interrupts supported by controller
++ * @direct_max: Maximum value of direct maps; Use ~0 for no limit; 0 for no
++ * direct mapping
++ * @ops: domain callbacks
++ * @host_data: Controller private data pointer
++ *
++ * Allocates and initializes an irq_domain structure.
++ * Returns pointer to IRQ domain, or NULL on failure.
++ */
++struct irq_domain *__irq_domain_add(struct fwnode_handle *fwnode, unsigned int size,
++ irq_hw_number_t hwirq_max, int direct_max,
++ const struct irq_domain_ops *ops,
++ void *host_data)
++{
++ struct irq_domain *domain;
++
++ domain = __irq_domain_create(fwnode, size, hwirq_max, direct_max,
++ ops, host_data);
++ if (domain)
++ __irq_domain_publish(domain);
++
+ return domain;
+ }
+ EXPORT_SYMBOL_GPL(__irq_domain_add);
+@@ -538,6 +562,9 @@ static void irq_domain_disassociate(struct irq_domain *domain, unsigned int irq)
+ return;
+
+ hwirq = irq_data->hwirq;
++
++ mutex_lock(&irq_domain_mutex);
++
+ irq_set_status_flags(irq, IRQ_NOREQUEST);
+
+ /* remove chip and handler */
+@@ -557,10 +584,12 @@ static void irq_domain_disassociate(struct irq_domain *domain, unsigned int irq)
+
+ /* Clear reverse map for this hwirq */
+ irq_domain_clear_mapping(domain, hwirq);
++
++ mutex_unlock(&irq_domain_mutex);
+ }
+
+-int irq_domain_associate(struct irq_domain *domain, unsigned int virq,
+- irq_hw_number_t hwirq)
++static int irq_domain_associate_locked(struct irq_domain *domain, unsigned int virq,
++ irq_hw_number_t hwirq)
+ {
+ struct irq_data *irq_data = irq_get_irq_data(virq);
+ int ret;
+@@ -573,7 +602,6 @@ int irq_domain_associate(struct irq_domain *domain, unsigned int virq,
+ if (WARN(irq_data->domain, "error: virq%i is already associated", virq))
+ return -EINVAL;
+
+- mutex_lock(&irq_domain_mutex);
+ irq_data->hwirq = hwirq;
+ irq_data->domain = domain;
+ if (domain->ops->map) {
+@@ -590,7 +618,6 @@ int irq_domain_associate(struct irq_domain *domain, unsigned int virq,
+ }
+ irq_data->domain = NULL;
+ irq_data->hwirq = 0;
+- mutex_unlock(&irq_domain_mutex);
+ return ret;
+ }
+
+@@ -601,12 +628,23 @@ int irq_domain_associate(struct irq_domain *domain, unsigned int virq,
+
+ domain->mapcount++;
+ irq_domain_set_mapping(domain, hwirq, irq_data);
+- mutex_unlock(&irq_domain_mutex);
+
+ irq_clear_status_flags(virq, IRQ_NOREQUEST);
+
+ return 0;
+ }
++
++int irq_domain_associate(struct irq_domain *domain, unsigned int virq,
++ irq_hw_number_t hwirq)
++{
++ int ret;
++
++ mutex_lock(&irq_domain_mutex);
++ ret = irq_domain_associate_locked(domain, virq, hwirq);
++ mutex_unlock(&irq_domain_mutex);
++
++ return ret;
++}
+ EXPORT_SYMBOL_GPL(irq_domain_associate);
+
+ void irq_domain_associate_many(struct irq_domain *domain, unsigned int irq_base,
+@@ -668,6 +706,34 @@ unsigned int irq_create_direct_mapping(struct irq_domain *domain)
+ EXPORT_SYMBOL_GPL(irq_create_direct_mapping);
+ #endif
+
++static unsigned int irq_create_mapping_affinity_locked(struct irq_domain *domain,
++ irq_hw_number_t hwirq,
++ const struct irq_affinity_desc *affinity)
++{
++ struct device_node *of_node = irq_domain_get_of_node(domain);
++ int virq;
++
++ pr_debug("irq_create_mapping(0x%p, 0x%lx)\n", domain, hwirq);
++
++ /* Allocate a virtual interrupt number */
++ virq = irq_domain_alloc_descs(-1, 1, hwirq, of_node_to_nid(of_node),
++ affinity);
++ if (virq <= 0) {
++ pr_debug("-> virq allocation failed\n");
++ return 0;
++ }
++
++ if (irq_domain_associate_locked(domain, virq, hwirq)) {
++ irq_free_desc(virq);
++ return 0;
++ }
++
++ pr_debug("irq %lu on domain %s mapped to virtual irq %u\n",
++ hwirq, of_node_full_name(of_node), virq);
++
++ return virq;
++}
++
+ /**
+ * irq_create_mapping_affinity() - Map a hardware interrupt into linux irq space
+ * @domain: domain owning this hardware interrupt or NULL for default domain
+@@ -680,14 +746,11 @@ EXPORT_SYMBOL_GPL(irq_create_direct_mapping);
+ * on the number returned from that call.
+ */
+ unsigned int irq_create_mapping_affinity(struct irq_domain *domain,
+- irq_hw_number_t hwirq,
+- const struct irq_affinity_desc *affinity)
++ irq_hw_number_t hwirq,
++ const struct irq_affinity_desc *affinity)
+ {
+- struct device_node *of_node;
+ int virq;
+
+- pr_debug("irq_create_mapping(0x%p, 0x%lx)\n", domain, hwirq);
+-
+ /* Look for default domain if necessary */
+ if (domain == NULL)
+ domain = irq_default_domain;
+@@ -695,32 +758,19 @@ unsigned int irq_create_mapping_affinity(struct irq_domain *domain,
+ WARN(1, "%s(, %lx) called with NULL domain\n", __func__, hwirq);
+ return 0;
+ }
+- pr_debug("-> using domain @%p\n", domain);
+
+- of_node = irq_domain_get_of_node(domain);
++ mutex_lock(&irq_domain_mutex);
+
+ /* Check if mapping already exists */
+ virq = irq_find_mapping(domain, hwirq);
+ if (virq) {
+- pr_debug("-> existing mapping on virq %d\n", virq);
+- return virq;
+- }
+-
+- /* Allocate a virtual interrupt number */
+- virq = irq_domain_alloc_descs(-1, 1, hwirq, of_node_to_nid(of_node),
+- affinity);
+- if (virq <= 0) {
+- pr_debug("-> virq allocation failed\n");
+- return 0;
++ pr_debug("existing mapping on virq %d\n", virq);
++ goto out;
+ }
+
+- if (irq_domain_associate(domain, virq, hwirq)) {
+- irq_free_desc(virq);
+- return 0;
+- }
+-
+- pr_debug("irq %lu on domain %s mapped to virtual irq %u\n",
+- hwirq, of_node_full_name(of_node), virq);
++ virq = irq_create_mapping_affinity_locked(domain, hwirq, affinity);
++out:
++ mutex_unlock(&irq_domain_mutex);
+
+ return virq;
+ }
+@@ -789,6 +839,8 @@ unsigned int irq_create_fwspec_mapping(struct irq_fwspec *fwspec)
+ if (WARN_ON(type & ~IRQ_TYPE_SENSE_MASK))
+ type &= IRQ_TYPE_SENSE_MASK;
+
++ mutex_lock(&irq_domain_mutex);
++
+ /*
+ * If we've already configured this interrupt,
+ * don't do it again, or hell will break loose.
+@@ -801,7 +853,7 @@ unsigned int irq_create_fwspec_mapping(struct irq_fwspec *fwspec)
+ * interrupt number.
+ */
+ if (type == IRQ_TYPE_NONE || type == irq_get_trigger_type(virq))
+- return virq;
++ goto out;
+
+ /*
+ * If the trigger type has not been set yet, then set
+@@ -809,40 +861,45 @@ unsigned int irq_create_fwspec_mapping(struct irq_fwspec *fwspec)
+ */
+ if (irq_get_trigger_type(virq) == IRQ_TYPE_NONE) {
+ irq_data = irq_get_irq_data(virq);
+- if (!irq_data)
+- return 0;
++ if (!irq_data) {
++ virq = 0;
++ goto out;
++ }
+
+ irqd_set_trigger_type(irq_data, type);
+- return virq;
++ goto out;
+ }
+
+ pr_warn("type mismatch, failed to map hwirq-%lu for %s!\n",
+ hwirq, of_node_full_name(to_of_node(fwspec->fwnode)));
+- return 0;
++ virq = 0;
++ goto out;
+ }
+
+ if (irq_domain_is_hierarchy(domain)) {
+- virq = irq_domain_alloc_irqs(domain, 1, NUMA_NO_NODE, fwspec);
+- if (virq <= 0)
+- return 0;
++ virq = irq_domain_alloc_irqs_locked(domain, -1, 1, NUMA_NO_NODE,
++ fwspec, false, NULL);
++ if (virq <= 0) {
++ virq = 0;
++ goto out;
++ }
+ } else {
+ /* Create mapping */
+- virq = irq_create_mapping(domain, hwirq);
++ virq = irq_create_mapping_affinity_locked(domain, hwirq, NULL);
+ if (!virq)
+- return virq;
++ goto out;
+ }
+
+ irq_data = irq_get_irq_data(virq);
+- if (!irq_data) {
+- if (irq_domain_is_hierarchy(domain))
+- irq_domain_free_irqs(virq, 1);
+- else
+- irq_dispose_mapping(virq);
+- return 0;
++ if (WARN_ON(!irq_data)) {
++ virq = 0;
++ goto out;
+ }
+
+ /* Store trigger type */
+ irqd_set_trigger_type(irq_data, type);
++out:
++ mutex_unlock(&irq_domain_mutex);
+
+ return virq;
+ }
+@@ -1102,12 +1159,15 @@ struct irq_domain *irq_domain_create_hierarchy(struct irq_domain *parent,
+ struct irq_domain *domain;
+
+ if (size)
+- domain = irq_domain_create_linear(fwnode, size, ops, host_data);
++ domain = __irq_domain_create(fwnode, size, size, 0, ops, host_data);
+ else
+- domain = irq_domain_create_tree(fwnode, ops, host_data);
++ domain = __irq_domain_create(fwnode, 0, ~0, 0, ops, host_data);
++
+ if (domain) {
+ domain->parent = parent;
+ domain->flags |= flags;
++
++ __irq_domain_publish(domain);
+ }
+
+ return domain;
+@@ -1426,40 +1486,12 @@ int irq_domain_alloc_irqs_hierarchy(struct irq_domain *domain,
+ return domain->ops->alloc(domain, irq_base, nr_irqs, arg);
+ }
+
+-/**
+- * __irq_domain_alloc_irqs - Allocate IRQs from domain
+- * @domain: domain to allocate from
+- * @irq_base: allocate specified IRQ number if irq_base >= 0
+- * @nr_irqs: number of IRQs to allocate
+- * @node: NUMA node id for memory allocation
+- * @arg: domain specific argument
+- * @realloc: IRQ descriptors have already been allocated if true
+- * @affinity: Optional irq affinity mask for multiqueue devices
+- *
+- * Allocate IRQ numbers and initialized all data structures to support
+- * hierarchy IRQ domains.
+- * Parameter @realloc is mainly to support legacy IRQs.
+- * Returns error code or allocated IRQ number
+- *
+- * The whole process to setup an IRQ has been split into two steps.
+- * The first step, __irq_domain_alloc_irqs(), is to allocate IRQ
+- * descriptor and required hardware resources. The second step,
+- * irq_domain_activate_irq(), is to program the hardware with preallocated
+- * resources. In this way, it's easier to rollback when failing to
+- * allocate resources.
+- */
+-int __irq_domain_alloc_irqs(struct irq_domain *domain, int irq_base,
+- unsigned int nr_irqs, int node, void *arg,
+- bool realloc, const struct irq_affinity_desc *affinity)
++static int irq_domain_alloc_irqs_locked(struct irq_domain *domain, int irq_base,
++ unsigned int nr_irqs, int node, void *arg,
++ bool realloc, const struct irq_affinity_desc *affinity)
+ {
+ int i, ret, virq;
+
+- if (domain == NULL) {
+- domain = irq_default_domain;
+- if (WARN(!domain, "domain is NULL; cannot allocate IRQ\n"))
+- return -EINVAL;
+- }
+-
+ if (realloc && irq_base >= 0) {
+ virq = irq_base;
+ } else {
+@@ -1478,24 +1510,18 @@ int __irq_domain_alloc_irqs(struct irq_domain *domain, int irq_base,
+ goto out_free_desc;
+ }
+
+- mutex_lock(&irq_domain_mutex);
+ ret = irq_domain_alloc_irqs_hierarchy(domain, virq, nr_irqs, arg);
+- if (ret < 0) {
+- mutex_unlock(&irq_domain_mutex);
++ if (ret < 0)
+ goto out_free_irq_data;
+- }
+
+ for (i = 0; i < nr_irqs; i++) {
+ ret = irq_domain_trim_hierarchy(virq + i);
+- if (ret) {
+- mutex_unlock(&irq_domain_mutex);
++ if (ret)
+ goto out_free_irq_data;
+- }
+ }
+-
++
+ for (i = 0; i < nr_irqs; i++)
+ irq_domain_insert_irq(virq + i);
+- mutex_unlock(&irq_domain_mutex);
+
+ return virq;
+
+@@ -1505,6 +1531,48 @@ out_free_desc:
+ irq_free_descs(virq, nr_irqs);
+ return ret;
+ }
++
++/**
++ * __irq_domain_alloc_irqs - Allocate IRQs from domain
++ * @domain: domain to allocate from
++ * @irq_base: allocate specified IRQ number if irq_base >= 0
++ * @nr_irqs: number of IRQs to allocate
++ * @node: NUMA node id for memory allocation
++ * @arg: domain specific argument
++ * @realloc: IRQ descriptors have already been allocated if true
++ * @affinity: Optional irq affinity mask for multiqueue devices
++ *
++ * Allocate IRQ numbers and initialize all data structures to support
++ * hierarchy IRQ domains.
++ * Parameter @realloc is mainly to support legacy IRQs.
++ * Returns error code or allocated IRQ number
++ *
++ * The whole process to setup an IRQ has been split into two steps.
++ * The first step, __irq_domain_alloc_irqs(), is to allocate IRQ
++ * descriptor and required hardware resources. The second step,
++ * irq_domain_activate_irq(), is to program the hardware with preallocated
++ * resources. In this way, it's easier to roll back when failing to
++ * allocate resources.
++ */
++int __irq_domain_alloc_irqs(struct irq_domain *domain, int irq_base,
++ unsigned int nr_irqs, int node, void *arg,
++ bool realloc, const struct irq_affinity_desc *affinity)
++{
++ int ret;
++
++ if (domain == NULL) {
++ domain = irq_default_domain;
++ if (WARN(!domain, "domain is NULL; cannot allocate IRQ\n"))
++ return -EINVAL;
++ }
++
++ mutex_lock(&irq_domain_mutex);
++ ret = irq_domain_alloc_irqs_locked(domain, irq_base, nr_irqs, node, arg,
++ realloc, affinity);
++ mutex_unlock(&irq_domain_mutex);
++
++ return ret;
++}
+ EXPORT_SYMBOL_GPL(__irq_domain_alloc_irqs);
+
+ /* The irq_data was moved, fix the revmap to refer to the new location */
+@@ -1865,6 +1933,13 @@ void irq_domain_set_info(struct irq_domain *domain, unsigned int virq,
+ irq_set_handler_data(virq, handler_data);
+ }
+
++static int irq_domain_alloc_irqs_locked(struct irq_domain *domain, int irq_base,
++ unsigned int nr_irqs, int node, void *arg,
++ bool realloc, const struct irq_affinity_desc *affinity)
++{
++ return -EINVAL;
++}
++
+ static void irq_domain_check_hierarchy(struct irq_domain *domain)
+ {
+ }
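
Structurally this is a create/publish split: __irq_domain_create()
builds the domain completely, and __irq_domain_publish() only then adds
it to irq_domain_list under irq_domain_mutex, so hierarchy domains are
never globally visible with ->parent or ->flags still unset. In
outline:

    /* Two-phase construction keeps half-initialized domains out of
     * the global list that domain lookups walk. */
    domain = __irq_domain_create(fwnode, size, size, 0, ops, host_data);
    if (domain) {
            domain->parent = parent;        /* finish setup first... */
            domain->flags |= flags;
            __irq_domain_publish(domain);   /* ...then make it visible */
    }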
+diff --git a/kernel/irq/msi.c b/kernel/irq/msi.c
+index 783a3e6a0b107..a020bc97021f3 100644
+--- a/kernel/irq/msi.c
++++ b/kernel/irq/msi.c
+@@ -1084,10 +1084,13 @@ int msi_domain_populate_irqs(struct irq_domain *domain, struct device *dev,
+ struct xarray *xa;
+ int ret, virq;
+
+- if (!msi_ctrl_valid(dev, &ctrl))
+- return -EINVAL;
+-
+ msi_lock_descs(dev);
++
++ if (!msi_ctrl_valid(dev, &ctrl)) {
++ ret = -EINVAL;
++ goto unlock;
++ }
++
+ ret = msi_domain_add_simple_msi_descs(dev, &ctrl);
+ if (ret)
+ goto unlock;
+@@ -1109,14 +1112,35 @@ int msi_domain_populate_irqs(struct irq_domain *domain, struct device *dev,
+ return 0;
+
+ fail:
+- for (--virq; virq >= virq_base; virq--)
++ for (--virq; virq >= virq_base; virq--) {
++ msi_domain_depopulate_descs(dev, virq, 1);
+ irq_domain_free_irqs_common(domain, virq, 1);
++ }
+ msi_domain_free_descs(dev, &ctrl);
+ unlock:
+ msi_unlock_descs(dev);
+ return ret;
+ }
+
++void msi_domain_depopulate_descs(struct device *dev, int virq_base, int nvec)
++{
++ struct msi_ctrl ctrl = {
++ .domid = MSI_DEFAULT_DOMAIN,
++ .first = virq_base,
++ .last = virq_base + nvec - 1,
++ };
++ struct msi_desc *desc;
++ struct xarray *xa;
++ unsigned long idx;
++
++ if (!msi_ctrl_valid(dev, &ctrl))
++ return;
++
++ xa = &dev->msi.data->__domains[ctrl.domid].store;
++ xa_for_each_range(xa, idx, desc, ctrl.first, ctrl.last)
++ desc->irq = 0;
++}
++
+ /*
+ * Carefully check whether the device can use reservation mode. If
+ * reservation mode is enabled then the early activation will assign a
+diff --git a/kernel/kprobes.c b/kernel/kprobes.c
+index 1c18ecf9f98b1..00e177de91ccd 100644
+--- a/kernel/kprobes.c
++++ b/kernel/kprobes.c
+@@ -458,7 +458,7 @@ static inline int kprobe_optready(struct kprobe *p)
+ }
+
+ /* Return true if the kprobe is disarmed. Note: p must be on hash list */
+-static inline bool kprobe_disarmed(struct kprobe *p)
++bool kprobe_disarmed(struct kprobe *p)
+ {
+ struct optimized_kprobe *op;
+
+@@ -555,17 +555,15 @@ static void do_unoptimize_kprobes(void)
+ /* See comment in do_optimize_kprobes() */
+ lockdep_assert_cpus_held();
+
+- /* Unoptimization must be done anytime */
+- if (list_empty(&unoptimizing_list))
+- return;
++ if (!list_empty(&unoptimizing_list))
++ arch_unoptimize_kprobes(&unoptimizing_list, &freeing_list);
+
+- arch_unoptimize_kprobes(&unoptimizing_list, &freeing_list);
+- /* Loop on 'freeing_list' for disarming */
++ /* Loop on 'freeing_list' for disarming and removing from kprobe hash list */
+ list_for_each_entry_safe(op, tmp, &freeing_list, list) {
+ /* Switching from detour code to origin */
+ op->kp.flags &= ~KPROBE_FLAG_OPTIMIZED;
+- /* Disarm probes if marked disabled */
+- if (kprobe_disabled(&op->kp))
++ /* Disarm probes if marked disabled and not gone */
++ if (kprobe_disabled(&op->kp) && !kprobe_gone(&op->kp))
+ arch_disarm_kprobe(&op->kp);
+ if (kprobe_unused(&op->kp)) {
+ /*
+@@ -662,7 +660,7 @@ void wait_for_kprobe_optimizer(void)
+ mutex_unlock(&kprobe_mutex);
+ }
+
+-static bool optprobe_queued_unopt(struct optimized_kprobe *op)
++bool optprobe_queued_unopt(struct optimized_kprobe *op)
+ {
+ struct optimized_kprobe *_op;
+
+@@ -797,14 +795,13 @@ static void kill_optimized_kprobe(struct kprobe *p)
+ op->kp.flags &= ~KPROBE_FLAG_OPTIMIZED;
+
+ if (kprobe_unused(p)) {
+- /* Enqueue if it is unused */
+- list_add(&op->list, &freeing_list);
+ /*
+- * Remove unused probes from the hash list. After waiting
+- * for synchronization, this probe is reclaimed.
+- * (reclaiming is done by do_free_cleaned_kprobes().)
++ * An unused kprobe is on the unoptimizing or freeing list. We move it
++ * to freeing_list and let the kprobe_optimizer() remove it from
++ * the kprobe hash list and free it.
+ */
+- hlist_del_rcu(&op->kp.hlist);
++ if (optprobe_queued_unopt(op))
++ list_move(&op->list, &freeing_list);
+ }
+
+ /* Don't touch the code, because it is already freed. */
+diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
+index e3375bc40dadc..50d4863974e7a 100644
+--- a/kernel/locking/lockdep.c
++++ b/kernel/locking/lockdep.c
+@@ -55,6 +55,7 @@
+ #include <linux/rcupdate.h>
+ #include <linux/kprobes.h>
+ #include <linux/lockdep.h>
++#include <linux/context_tracking.h>
+
+ #include <asm/sections.h>
+
+@@ -6555,6 +6556,7 @@ void lockdep_rcu_suspicious(const char *file, const int line, const char *s)
+ {
+ struct task_struct *curr = current;
+ int dl = READ_ONCE(debug_locks);
++ bool rcu = warn_rcu_enter();
+
+ /* Note: the following can be executed concurrently, so be careful. */
+ pr_warn("\n");
+@@ -6595,5 +6597,6 @@ void lockdep_rcu_suspicious(const char *file, const int line, const char *s)
+ lockdep_print_held_locks(curr);
+ pr_warn("\nstack backtrace:\n");
+ dump_stack();
++ warn_rcu_exit(rcu);
+ }
+ EXPORT_SYMBOL_GPL(lockdep_rcu_suspicious);
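
warn_rcu_enter()/warn_rcu_exit() bracket diagnostic paths so that RCU
is watching even when the warning fires from an extended quiescent
state (e.g. from NMI or idle). The same bracketing appears in
kernel/panic.c below; the pattern is simply:

    /* Make RCU watch across printk/dump work that may rely on it;
     * the boolean records whether we actually had to enter. */
    bool rcu = warn_rcu_enter();

    pr_warn("...diagnostics...");
    warn_rcu_exit(rcu);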
+diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
+index 44873594de031..84d5b649b95fe 100644
+--- a/kernel/locking/rwsem.c
++++ b/kernel/locking/rwsem.c
+@@ -624,18 +624,16 @@ static inline bool rwsem_try_write_lock(struct rw_semaphore *sem,
+ */
+ if (first->handoff_set && (waiter != first))
+ return false;
+-
+- /*
+- * First waiter can inherit a previously set handoff
+- * bit and spin on rwsem if lock acquisition fails.
+- */
+- if (waiter == first)
+- waiter->handoff_set = true;
+ }
+
+ new = count;
+
+ if (count & RWSEM_LOCK_MASK) {
++ /*
++ * A waiter (first or not) can set the handoff bit
++ * if it is an RT task or has waited in the wait
++ * queue for too long.
++ */
+ if (has_handoff || (!rt_task(waiter->task) &&
+ !time_after(jiffies, waiter->timeout)))
+ return false;
+@@ -651,11 +649,12 @@ static inline bool rwsem_try_write_lock(struct rw_semaphore *sem,
+ } while (!atomic_long_try_cmpxchg_acquire(&sem->count, &count, new));
+
+ /*
+- * We have either acquired the lock with handoff bit cleared or
+- * set the handoff bit.
++ * We have either acquired the lock with handoff bit cleared or set
++ * the handoff bit. Only the first waiter can have its handoff_set
++ * set here to enable optimistic spinning in slowpath loop.
+ */
+ if (new & RWSEM_FLAG_HANDOFF) {
+- waiter->handoff_set = true;
++ first->handoff_set = true;
+ lockevent_inc(rwsem_wlock_handoff);
+ return false;
+ }
+@@ -1092,7 +1091,7 @@ queue:
+ /* Ordered by sem->wait_lock against rwsem_mark_wake(). */
+ break;
+ }
+- schedule();
++ schedule_preempt_disabled();
+ lockevent_inc(rwsem_sleep_reader);
+ }
+
+@@ -1254,14 +1253,20 @@ static struct rw_semaphore *rwsem_downgrade_wake(struct rw_semaphore *sem)
+ */
+ static inline int __down_read_common(struct rw_semaphore *sem, int state)
+ {
++ int ret = 0;
+ long count;
+
++ preempt_disable();
+ if (!rwsem_read_trylock(sem, &count)) {
+- if (IS_ERR(rwsem_down_read_slowpath(sem, count, state)))
+- return -EINTR;
++ if (IS_ERR(rwsem_down_read_slowpath(sem, count, state))) {
++ ret = -EINTR;
++ goto out;
++ }
+ DEBUG_RWSEMS_WARN_ON(!is_rwsem_reader_owned(sem), sem);
+ }
+- return 0;
++out:
++ preempt_enable();
++ return ret;
+ }
+
+ static inline void __down_read(struct rw_semaphore *sem)
+@@ -1281,19 +1286,23 @@ static inline int __down_read_killable(struct rw_semaphore *sem)
+
+ static inline int __down_read_trylock(struct rw_semaphore *sem)
+ {
++ int ret = 0;
+ long tmp;
+
+ DEBUG_RWSEMS_WARN_ON(sem->magic != sem, sem);
+
++ preempt_disable();
+ tmp = atomic_long_read(&sem->count);
+ while (!(tmp & RWSEM_READ_FAILED_MASK)) {
+ if (atomic_long_try_cmpxchg_acquire(&sem->count, &tmp,
+ tmp + RWSEM_READER_BIAS)) {
+ rwsem_set_reader_owned(sem);
+- return 1;
++ ret = 1;
++ break;
+ }
+ }
+- return 0;
++ preempt_enable();
++ return ret;
+ }
+
+ /*
+@@ -1335,6 +1344,7 @@ static inline void __up_read(struct rw_semaphore *sem)
+ DEBUG_RWSEMS_WARN_ON(sem->magic != sem, sem);
+ DEBUG_RWSEMS_WARN_ON(!is_rwsem_reader_owned(sem), sem);
+
++ preempt_disable();
+ rwsem_clear_reader_owned(sem);
+ tmp = atomic_long_add_return_release(-RWSEM_READER_BIAS, &sem->count);
+ DEBUG_RWSEMS_WARN_ON(tmp < 0, sem);
+@@ -1343,6 +1353,7 @@ static inline void __up_read(struct rw_semaphore *sem)
+ clear_nonspinnable(sem);
+ rwsem_wake(sem);
+ }
++ preempt_enable();
+ }
+
+ /*
+@@ -1662,6 +1673,12 @@ void down_read_non_owner(struct rw_semaphore *sem)
+ {
+ might_sleep();
+ __down_read(sem);
++ /*
++ * The owner value for a reader-owned lock is mostly for debugging
++ * purposes only and is not critical to the correct functioning of
++ * rwsem. So it is perfectly fine to set it in a preempt-enabled
++ * context here.
++ */
+ __rwsem_set_reader_owned(sem, NULL);
+ }
+ EXPORT_SYMBOL(down_read_non_owner);
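
The reader-side changes all follow one pattern: preemption stays
disabled from before the reader publishes itself in sem->count until it
has finished updating the count, so a writer spinning on the handoff
bit can never be stuck behind a preempted reader. As a skeleton, with
the two helpers purely illustrative:

    preempt_disable();
    publish_reader_bias(sem);       /* add RWSEM_READER_BIAS to count */
    finish_acquire_or_release(sem); /* trylock fast path or wakeup */
    preempt_enable();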
+diff --git a/kernel/panic.c b/kernel/panic.c
+index 463c9295bc28a..5cfea8302d23a 100644
+--- a/kernel/panic.c
++++ b/kernel/panic.c
+@@ -34,6 +34,7 @@
+ #include <linux/ratelimit.h>
+ #include <linux/debugfs.h>
+ #include <linux/sysfs.h>
++#include <linux/context_tracking.h>
+ #include <trace/events/error_report.h>
+ #include <asm/sections.h>
+
+@@ -211,9 +212,6 @@ static void panic_print_sys_info(bool console_flush)
+ return;
+ }
+
+- if (panic_print & PANIC_PRINT_ALL_CPU_BT)
+- trigger_all_cpu_backtrace();
+-
+ if (panic_print & PANIC_PRINT_TASK_INFO)
+ show_state();
+
+@@ -243,6 +241,30 @@ void check_panic_on_warn(const char *origin)
+ origin, limit);
+ }
+
++/*
++ * Helper that triggers the NMI backtrace (if set in panic_print)
++ * and then shuts down the secondary CPUs - we cannot have
++ * the NMI backtrace after the CPUs are off!
++ */
++static void panic_other_cpus_shutdown(bool crash_kexec)
++{
++ if (panic_print & PANIC_PRINT_ALL_CPU_BT)
++ trigger_all_cpu_backtrace();
++
++ /*
++ * Note that smp_send_stop() is the usual SMP shutdown function,
++ * which unfortunately may not be hardened to work in a panic
++ * situation. If we want to do crash dump after notifier calls
++ * and kmsg_dump, we will need architecture dependent extra
++ * bits in addition to stopping other CPUs, hence we rely on
++ * crash_smp_send_stop() for that.
++ */
++ if (!crash_kexec)
++ smp_send_stop();
++ else
++ crash_smp_send_stop();
++}
++
+ /**
+ * panic - halt the system
+ * @fmt: The text string to print
+@@ -333,23 +355,10 @@ void panic(const char *fmt, ...)
+ *
+ * Bypass the panic_cpu check and call __crash_kexec directly.
+ */
+- if (!_crash_kexec_post_notifiers) {
++ if (!_crash_kexec_post_notifiers)
+ __crash_kexec(NULL);
+
+- /*
+- * Note smp_send_stop is the usual smp shutdown function, which
+- * unfortunately means it may not be hardened to work in a
+- * panic situation.
+- */
+- smp_send_stop();
+- } else {
+- /*
+- * If we want to do crash dump after notifier calls and
+- * kmsg_dump, we will need architecture dependent extra
+- * works in addition to stopping other CPUs.
+- */
+- crash_smp_send_stop();
+- }
++ panic_other_cpus_shutdown(_crash_kexec_post_notifiers);
+
+ /*
+ * Run any panic handlers, including those that might need to
+@@ -679,6 +688,7 @@ void __warn(const char *file, int line, void *caller, unsigned taint,
+ void warn_slowpath_fmt(const char *file, int line, unsigned taint,
+ const char *fmt, ...)
+ {
++ bool rcu = warn_rcu_enter();
+ struct warn_args args;
+
+ pr_warn(CUT_HERE);
+@@ -693,11 +703,13 @@ void warn_slowpath_fmt(const char *file, int line, unsigned taint,
+ va_start(args.args, fmt);
+ __warn(file, line, __builtin_return_address(0), taint, NULL, &args);
+ va_end(args.args);
++ warn_rcu_exit(rcu);
+ }
+ EXPORT_SYMBOL(warn_slowpath_fmt);
+ #else
+ void __warn_printk(const char *fmt, ...)
+ {
++ bool rcu = warn_rcu_enter();
+ va_list args;
+
+ pr_warn(CUT_HERE);
+@@ -705,6 +717,7 @@ void __warn_printk(const char *fmt, ...)
+ va_start(args, fmt);
+ vprintk(fmt, args);
+ va_end(args);
++ warn_rcu_exit(rcu);
+ }
+ EXPORT_SYMBOL(__warn_printk);
+ #endif
+diff --git a/kernel/pid_namespace.c b/kernel/pid_namespace.c
+index f4f8cb0435b45..fc21c5d5fd5de 100644
+--- a/kernel/pid_namespace.c
++++ b/kernel/pid_namespace.c
+@@ -244,7 +244,24 @@ void zap_pid_ns_processes(struct pid_namespace *pid_ns)
+ set_current_state(TASK_INTERRUPTIBLE);
+ if (pid_ns->pid_allocated == init_pids)
+ break;
++ /*
++ * Release tasks_rcu_exit_srcu to avoid following deadlock:
++ *
++ * 1) TASK A unshare(CLONE_NEWPID)
++ * 2) TASK A fork() twice -> TASK B (child reaper for new ns)
++ * and TASK C
++ * 3) TASK B exits, kills TASK C, waits for TASK A to reap it
++ * 4) TASK A calls synchronize_rcu_tasks()
++ * -> synchronize_srcu(tasks_rcu_exit_srcu)
++ * 5) *DEADLOCK*
++ *
++ * It is considered safe to release tasks_rcu_exit_srcu here
++ * because we assume the current task cannot be concurrently
++ * reaped at this point.
++ */
++ exit_tasks_rcu_stop();
+ schedule();
++ exit_tasks_rcu_start();
+ }
+ __set_current_state(TASK_RUNNING);
+
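
Distilled, the fix drops the tasks-RCU exit SRCU read-side section
across the sleep, so the synchronize_srcu(&tasks_rcu_exit_srcu) in
step 4 of the comment can complete, and re-enters it afterwards:

    /* Never sleep waiting on a task that may itself be waiting for
     * our SRCU read-side section to end. */
    exit_tasks_rcu_stop();          /* srcu_read_unlock() under the hood */
    schedule();                     /* wait for the namespace to empty */
    exit_tasks_rcu_start();         /* srcu_read_lock() again */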
+diff --git a/kernel/power/energy_model.c b/kernel/power/energy_model.c
+index f82111837b8d1..7b44f5b89fa15 100644
+--- a/kernel/power/energy_model.c
++++ b/kernel/power/energy_model.c
+@@ -87,10 +87,7 @@ static void em_debug_create_pd(struct device *dev)
+
+ static void em_debug_remove_pd(struct device *dev)
+ {
+- struct dentry *debug_dir;
+-
+- debug_dir = debugfs_lookup(dev_name(dev), rootdir);
+- debugfs_remove_recursive(debug_dir);
++ debugfs_lookup_and_remove(dev_name(dev), rootdir);
+ }
+
+ static int __init em_debug_init(void)
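
debugfs_lookup() returns its dentry with a reference that the old code
never dropped, so every energy-model probe/remove cycle leaked a
dentry. debugfs_lookup_and_remove() bundles the lookup, the (recursive)
removal and the dput(); its open-coded equivalent is roughly:

    struct dentry *dentry = debugfs_lookup(name, parent);

    if (dentry) {
            debugfs_remove(dentry); /* recursive removal */
            dput(dentry);           /* drop the lookup reference */
    }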
+diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
+index ca4b5dcec675b..16953784a0bdf 100644
+--- a/kernel/rcu/srcutree.c
++++ b/kernel/rcu/srcutree.c
+@@ -726,7 +726,7 @@ static void srcu_gp_start(struct srcu_struct *ssp)
+ int state;
+
+ if (smp_load_acquire(&ssp->srcu_size_state) < SRCU_SIZE_WAIT_BARRIER)
+- sdp = per_cpu_ptr(ssp->sda, 0);
++ sdp = per_cpu_ptr(ssp->sda, get_boot_cpu_id());
+ else
+ sdp = this_cpu_ptr(ssp->sda);
+ lockdep_assert_held(&ACCESS_PRIVATE(ssp, lock));
+@@ -837,7 +837,8 @@ static void srcu_gp_end(struct srcu_struct *ssp)
+ /* Initiate callback invocation as needed. */
+ ss_state = smp_load_acquire(&ssp->srcu_size_state);
+ if (ss_state < SRCU_SIZE_WAIT_BARRIER) {
+- srcu_schedule_cbs_sdp(per_cpu_ptr(ssp->sda, 0), cbdelay);
++ srcu_schedule_cbs_sdp(per_cpu_ptr(ssp->sda, get_boot_cpu_id()),
++ cbdelay);
+ } else {
+ idx = rcu_seq_ctr(gpseq) % ARRAY_SIZE(snp->srcu_have_cbs);
+ srcu_for_each_node_breadth_first(ssp, snp) {
+@@ -1161,7 +1162,7 @@ static unsigned long srcu_gp_start_if_needed(struct srcu_struct *ssp,
+ idx = __srcu_read_lock_nmisafe(ssp);
+ ss_state = smp_load_acquire(&ssp->srcu_size_state);
+ if (ss_state < SRCU_SIZE_WAIT_CALL)
+- sdp = per_cpu_ptr(ssp->sda, 0);
++ sdp = per_cpu_ptr(ssp->sda, get_boot_cpu_id());
+ else
+ sdp = raw_cpu_ptr(ssp->sda);
+ spin_lock_irqsave_sdp_contention(sdp, &flags);
+@@ -1497,7 +1498,7 @@ void srcu_barrier(struct srcu_struct *ssp)
+
+ idx = __srcu_read_lock_nmisafe(ssp);
+ if (smp_load_acquire(&ssp->srcu_size_state) < SRCU_SIZE_WAIT_BARRIER)
+- srcu_barrier_one_cpu(ssp, per_cpu_ptr(ssp->sda, 0));
++ srcu_barrier_one_cpu(ssp, per_cpu_ptr(ssp->sda, get_boot_cpu_id()));
+ else
+ for_each_possible_cpu(cpu)
+ srcu_barrier_one_cpu(ssp, per_cpu_ptr(ssp->sda, cpu));
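
The srcutree changes stop hardcoding CPU 0: the boot CPU is not guaranteed to have id 0 (a kdump kernel, for instance, can boot on another CPU), so the single-sdp fast paths must index per-CPU state by the CPU that actually booted. A small illustrative C model of why the hardcoded index is wrong (get_boot_cpu_id() and the layout are stand-ins, not the kernel's):

#include <assert.h>
#include <stdio.h>

#define NR_CPUS 8

struct srcu_data { int initialized; /* ... per-CPU state ... */ };

static struct srcu_data sda[NR_CPUS];
static int boot_cpu = 3;   /* e.g. a kdump kernel that booted on CPU 3 */

static int get_boot_cpu_id(void) { return boot_cpu; }

int main(void)
{
    sda[boot_cpu].initialized = 1;   /* only the boot CPU's slot is set up */

    /* Hardcoding slot 0 reaches state that was never initialized: */
    assert(!sda[0].initialized);

    /* The fix indexes by the CPU that actually booted the kernel: */
    assert(sda[get_boot_cpu_id()].initialized);
    puts("boot-CPU indexing ok");
    return 0;
}
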
+diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
+index fe9840d90e960..d5a4a129a85e9 100644
+--- a/kernel/rcu/tasks.h
++++ b/kernel/rcu/tasks.h
+@@ -384,6 +384,7 @@ static int rcu_tasks_need_gpcb(struct rcu_tasks *rtp)
+ {
+ int cpu;
+ unsigned long flags;
++ bool gpdone = poll_state_synchronize_rcu(rtp->percpu_dequeue_gpseq);
+ long n;
+ long ncbs = 0;
+ long ncbsnz = 0;
+@@ -425,21 +426,23 @@ static int rcu_tasks_need_gpcb(struct rcu_tasks *rtp)
+ WRITE_ONCE(rtp->percpu_enqueue_shift, order_base_2(nr_cpu_ids));
+ smp_store_release(&rtp->percpu_enqueue_lim, 1);
+ rtp->percpu_dequeue_gpseq = get_state_synchronize_rcu();
++ gpdone = false;
+ pr_info("Starting switch %s to CPU-0 callback queuing.\n", rtp->name);
+ }
+ raw_spin_unlock_irqrestore(&rtp->cbs_gbl_lock, flags);
+ }
+- if (rcu_task_cb_adjust && !ncbsnz &&
+- poll_state_synchronize_rcu(rtp->percpu_dequeue_gpseq)) {
++ if (rcu_task_cb_adjust && !ncbsnz && gpdone) {
+ raw_spin_lock_irqsave(&rtp->cbs_gbl_lock, flags);
+ if (rtp->percpu_enqueue_lim < rtp->percpu_dequeue_lim) {
+ WRITE_ONCE(rtp->percpu_dequeue_lim, 1);
+ pr_info("Completing switch %s to CPU-0 callback queuing.\n", rtp->name);
+ }
+- for (cpu = rtp->percpu_dequeue_lim; cpu < nr_cpu_ids; cpu++) {
+- struct rcu_tasks_percpu *rtpcp = per_cpu_ptr(rtp->rtpcpu, cpu);
++ if (rtp->percpu_dequeue_lim == 1) {
++ for (cpu = rtp->percpu_dequeue_lim; cpu < nr_cpu_ids; cpu++) {
++ struct rcu_tasks_percpu *rtpcp = per_cpu_ptr(rtp->rtpcpu, cpu);
+
+- WARN_ON_ONCE(rcu_segcblist_n_cbs(&rtpcp->cblist));
++ WARN_ON_ONCE(rcu_segcblist_n_cbs(&rtpcp->cblist));
++ }
+ }
+ raw_spin_unlock_irqrestore(&rtp->cbs_gbl_lock, flags);
+ }
+@@ -560,8 +563,9 @@ static int __noreturn rcu_tasks_kthread(void *arg)
+ static void synchronize_rcu_tasks_generic(struct rcu_tasks *rtp)
+ {
+ /* Complain if the scheduler has not started. */
+- WARN_ONCE(rcu_scheduler_active == RCU_SCHEDULER_INACTIVE,
+- "synchronize_rcu_tasks called too soon");
++ if (WARN_ONCE(rcu_scheduler_active == RCU_SCHEDULER_INACTIVE,
++ "synchronize_%s() called too soon", rtp->name))
++ return;
+
+ // If the grace-period kthread is running, use it.
+ if (READ_ONCE(rtp->kthread_ptr)) {
+@@ -827,11 +831,21 @@ static void rcu_tasks_pertask(struct task_struct *t, struct list_head *hop)
+ static void rcu_tasks_postscan(struct list_head *hop)
+ {
+ /*
+- * Wait for tasks that are in the process of exiting. This
+- * does only part of the job, ensuring that all tasks that were
+- * previously exiting reach the point where they have disabled
+- * preemption, allowing the later synchronize_rcu() to finish
+- * the job.
++ * Exiting tasks may escape the tasklist scan. Those are vulnerable
++ * until their final schedule() with TASK_DEAD state. To cope with
++	 * this, divide the fragile part of the exit path into two intersecting
++ * read side critical sections:
++ *
++ * 1) An _SRCU_ read side starting before calling exit_notify(),
++ * which may remove the task from the tasklist, and ending after
++ * the final preempt_disable() call in do_exit().
++ *
++ * 2) An _RCU_ read side starting with the final preempt_disable()
++ * call in do_exit() and ending with the final call to schedule()
++ * with TASK_DEAD state.
++ *
++	 * This handles part 1). Part 2) is handled by postgp with a
++ * call to synchronize_rcu().
+ */
+ synchronize_srcu(&tasks_rcu_exit_srcu);
+ }
+@@ -898,7 +912,10 @@ static void rcu_tasks_postgp(struct rcu_tasks *rtp)
+ *
+ * In addition, this synchronize_rcu() waits for exiting tasks
+ * to complete their final preempt_disable() region of execution,
+- * cleaning up after the synchronize_srcu() above.
++ * cleaning up after synchronize_srcu(&tasks_rcu_exit_srcu),
++	 * enforcing that the whole region from tasklist removal until
++	 * the final schedule() with TASK_DEAD state is an RCU Tasks
++	 * read side critical section.
+ */
+ synchronize_rcu();
+ }
+@@ -988,27 +1005,42 @@ void show_rcu_tasks_classic_gp_kthread(void)
+ EXPORT_SYMBOL_GPL(show_rcu_tasks_classic_gp_kthread);
+ #endif // !defined(CONFIG_TINY_RCU)
+
+-/* Do the srcu_read_lock() for the above synchronize_srcu(). */
++/*
++ * Help protect against the tasklist scan blind spot while the
++ * task is exiting and may be removed from the tasklist. See
++ * corresponding synchronize_srcu() for further details.
++ */
+ void exit_tasks_rcu_start(void) __acquires(&tasks_rcu_exit_srcu)
+ {
+- preempt_disable();
+ current->rcu_tasks_idx = __srcu_read_lock(&tasks_rcu_exit_srcu);
+- preempt_enable();
+ }
+
+-/* Do the srcu_read_unlock() for the above synchronize_srcu(). */
+-void exit_tasks_rcu_finish(void) __releases(&tasks_rcu_exit_srcu)
++/*
++ * Help protect against the tasklist scan blind spot while the
++ * task is exiting and may be removed from the tasklist. See
++ * corresponding synchronize_srcu() for further details.
++ */
++void exit_tasks_rcu_stop(void) __releases(&tasks_rcu_exit_srcu)
+ {
+ struct task_struct *t = current;
+
+- preempt_disable();
+ __srcu_read_unlock(&tasks_rcu_exit_srcu, t->rcu_tasks_idx);
+- preempt_enable();
+- exit_tasks_rcu_finish_trace(t);
++}
++
++/*
++ * Help protect against the tasklist scan blind spot while the
++ * task is exiting and may be removed from the tasklist. See
++ * corresponding synchronize_srcu() for further details.
++ */
++void exit_tasks_rcu_finish(void)
++{
++ exit_tasks_rcu_stop();
++ exit_tasks_rcu_finish_trace(current);
+ }
+
+ #else /* #ifdef CONFIG_TASKS_RCU */
+ void exit_tasks_rcu_start(void) { }
++void exit_tasks_rcu_stop(void) { }
+ void exit_tasks_rcu_finish(void) { exit_tasks_rcu_finish_trace(current); }
+ #endif /* #else #ifdef CONFIG_TASKS_RCU */
+
+@@ -1036,9 +1068,6 @@ static void rcu_tasks_be_rude(struct work_struct *work)
+ // Wait for one rude RCU-tasks grace period.
+ static void rcu_tasks_rude_wait_gp(struct rcu_tasks *rtp)
+ {
+- if (num_online_cpus() <= 1)
+- return; // Fastpath for only one CPU.
+-
+ rtp->n_ipis += cpumask_weight(cpu_online_mask);
+ schedule_on_each_cpu(rcu_tasks_be_rude);
+ }
+diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h
+index ed6c3cce28f23..927abaf6c822e 100644
+--- a/kernel/rcu/tree_exp.h
++++ b/kernel/rcu/tree_exp.h
+@@ -667,7 +667,9 @@ static void synchronize_rcu_expedited_wait(void)
+ mask = leaf_node_cpu_bit(rnp, cpu);
+ if (!(READ_ONCE(rnp->expmask) & mask))
+ continue;
++ preempt_disable(); // For smp_processor_id() in dump_cpu_task().
+ dump_cpu_task(cpu);
++ preempt_enable();
+ }
+ }
+ jiffies_stall = 3 * rcu_exp_jiffies_till_stall_check() + 3;
+diff --git a/kernel/resource.c b/kernel/resource.c
+index ddbbacb9fb508..b1763b2fd7ef3 100644
+--- a/kernel/resource.c
++++ b/kernel/resource.c
+@@ -1343,20 +1343,6 @@ retry:
+ continue;
+ }
+
+- /*
+- * All memory regions added from memory-hotplug path have the
+- * flag IORESOURCE_SYSTEM_RAM. If the resource does not have
+- * this flag, we know that we are dealing with a resource coming
+- * from HMM/devm. HMM/devm use another mechanism to add/release
+- * a resource. This goes via devm_request_mem_region and
+- * devm_release_mem_region.
+- * HMM/devm take care to release their resources when they want,
+- * so if we are dealing with them, let us just back off here.
+- */
+- if (!(res->flags & IORESOURCE_SYSRAM)) {
+- break;
+- }
+-
+ if (!(res->flags & IORESOURCE_MEM))
+ break;
+
+diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
+index ed2a47e4ddaec..0a11f44adee57 100644
+--- a/kernel/sched/rt.c
++++ b/kernel/sched/rt.c
+@@ -1777,6 +1777,8 @@ static struct sched_rt_entity *pick_next_rt_entity(struct rt_rq *rt_rq)
+ BUG_ON(idx >= MAX_RT_PRIO);
+
+ queue = array->queue + idx;
++ if (SCHED_WARN_ON(list_empty(queue)))
++ return NULL;
+ next = list_entry(queue->next, struct sched_rt_entity, run_list);
+
+ return next;
+@@ -1789,7 +1791,8 @@ static struct task_struct *_pick_next_task_rt(struct rq *rq)
+
+ do {
+ rt_se = pick_next_rt_entity(rt_rq);
+- BUG_ON(!rt_se);
++ if (unlikely(!rt_se))
++ return NULL;
+ rt_rq = group_rt_rq(rt_se);
+ } while (rt_rq);
+
+diff --git a/kernel/sysctl.c b/kernel/sysctl.c
+index 137d4abe3eda1..1c240d2c99bcb 100644
+--- a/kernel/sysctl.c
++++ b/kernel/sysctl.c
+@@ -425,21 +425,6 @@ static void proc_put_char(void **buf, size_t *size, char c)
+ }
+ }
+
+-static int do_proc_dobool_conv(bool *negp, unsigned long *lvalp,
+- int *valp,
+- int write, void *data)
+-{
+- if (write) {
+- *(bool *)valp = *lvalp;
+- } else {
+- int val = *(bool *)valp;
+-
+- *lvalp = (unsigned long)val;
+- *negp = false;
+- }
+- return 0;
+-}
+-
+ static int do_proc_dointvec_conv(bool *negp, unsigned long *lvalp,
+ int *valp,
+ int write, void *data)
+@@ -710,16 +695,36 @@ int do_proc_douintvec(struct ctl_table *table, int write,
+ * @lenp: the size of the user buffer
+ * @ppos: file position
+ *
+- * Reads/writes up to table->maxlen/sizeof(unsigned int) integer
+- * values from/to the user buffer, treated as an ASCII string.
++ * Reads/writes one integer value from/to the user buffer,
++ * treated as an ASCII string.
++ *
++ * table->data must point to a bool variable and table->maxlen must
++ * be sizeof(bool).
+ *
+ * Returns 0 on success.
+ */
+ int proc_dobool(struct ctl_table *table, int write, void *buffer,
+ size_t *lenp, loff_t *ppos)
+ {
+- return do_proc_dointvec(table, write, buffer, lenp, ppos,
+- do_proc_dobool_conv, NULL);
++ struct ctl_table tmp;
++ bool *data = table->data;
++ int res, val;
++
++ /* Do not support arrays yet. */
++ if (table->maxlen != sizeof(bool))
++ return -EINVAL;
++
++ tmp = *table;
++ tmp.maxlen = sizeof(val);
++ tmp.data = &val;
++
++ val = READ_ONCE(*data);
++ res = proc_dointvec(&tmp, write, buffer, lenp, ppos);
++ if (res)
++ return res;
++ if (write)
++ WRITE_ONCE(*data, val);
++ return 0;
+ }
+
+ /**
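
The new proc_dobool() reuses the generic integer parser by pointing a temporary table at an int shadow of the bool, then narrowing the result back on write. A hedged userspace sketch of the same widen-parse-narrow shape (text_to_int() stands in for proc_dointvec(); nothing here is kernel API):

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* Stand-in for proc_dointvec(): the generic parser only knows ints. */
static int text_to_int(const char *buf, int *out)
{
    char *end;
    long v = strtol(buf, &end, 10);

    if (end == buf || *end != '\0')
        return -EINVAL;
    *out = (int)v;
    return 0;
}

static int do_bool(bool *data, const char *buf, bool write)
{
    int val = *data;    /* widen: the shadow the parser operates on */
    int res;

    if (!write) {
        printf("%d\n", val);
        return 0;
    }
    res = text_to_int(buf, &val);
    if (res)
        return res;
    *data = val;        /* narrow: any nonzero input becomes true */
    return 0;
}

int main(void)
{
    bool enabled = false;

    do_bool(&enabled, "42", true);
    printf("enabled=%d\n", enabled);   /* prints 1 */
    return 0;
}
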
+diff --git a/kernel/time/clocksource.c b/kernel/time/clocksource.c
+index 9cf32ccda715d..8cd74b89d5776 100644
+--- a/kernel/time/clocksource.c
++++ b/kernel/time/clocksource.c
+@@ -384,6 +384,15 @@ void clocksource_verify_percpu(struct clocksource *cs)
+ }
+ EXPORT_SYMBOL_GPL(clocksource_verify_percpu);
+
++static inline void clocksource_reset_watchdog(void)
++{
++ struct clocksource *cs;
++
++ list_for_each_entry(cs, &watchdog_list, wd_list)
++ cs->flags &= ~CLOCK_SOURCE_WATCHDOG;
++}
++
+ static void clocksource_watchdog(struct timer_list *unused)
+ {
+ u64 csnow, wdnow, cslast, wdlast, delta;
+@@ -391,6 +400,7 @@ static void clocksource_watchdog(struct timer_list *unused)
+ int64_t wd_nsec, cs_nsec;
+ struct clocksource *cs;
+ enum wd_read_status read_ret;
++ unsigned long extra_wait = 0;
+ u32 md;
+
+ spin_lock(&watchdog_lock);
+@@ -410,13 +420,30 @@ static void clocksource_watchdog(struct timer_list *unused)
+
+ read_ret = cs_watchdog_read(cs, &csnow, &wdnow);
+
+- if (read_ret != WD_READ_SUCCESS) {
+- if (read_ret == WD_READ_UNSTABLE)
+- /* Clock readout unreliable, so give it up. */
+- __clocksource_unstable(cs);
++ if (read_ret == WD_READ_UNSTABLE) {
++ /* Clock readout unreliable, so give it up. */
++ __clocksource_unstable(cs);
+ continue;
+ }
+
++ /*
++ * When WD_READ_SKIP is returned, it means the system is likely
++		 * under very heavy load, where the latency of reading the
++		 * watchdog/clocksource is very high and affects the accuracy of
++		 * the watchdog check. So give the system some breathing room and
++		 * suspend the watchdog check for 5 minutes.
++ */
++ if (read_ret == WD_READ_SKIP) {
++ /*
++			 * As the watchdog timer will be suspended and
++			 * cs->last could remain unchanged for 5 minutes, reset
++ * the counters.
++ */
++ clocksource_reset_watchdog();
++ extra_wait = HZ * 300;
++ break;
++ }
++
+ /* Clocksource initialized ? */
+ if (!(cs->flags & CLOCK_SOURCE_WATCHDOG) ||
+ atomic_read(&watchdog_reset_pending)) {
+@@ -512,7 +539,7 @@ static void clocksource_watchdog(struct timer_list *unused)
+ * pair clocksource_stop_watchdog() clocksource_start_watchdog().
+ */
+ if (!timer_pending(&watchdog_timer)) {
+- watchdog_timer.expires += WATCHDOG_INTERVAL;
++ watchdog_timer.expires += WATCHDOG_INTERVAL + extra_wait;
+ add_timer_on(&watchdog_timer, next_cpu);
+ }
+ out:
+@@ -537,14 +564,6 @@ static inline void clocksource_stop_watchdog(void)
+ watchdog_running = 0;
+ }
+
+-static inline void clocksource_reset_watchdog(void)
+-{
+- struct clocksource *cs;
+-
+- list_for_each_entry(cs, &watchdog_list, wd_list)
+- cs->flags &= ~CLOCK_SOURCE_WATCHDOG;
+-}
+-
+ static void clocksource_resume_watchdog(void)
+ {
+ atomic_inc(&watchdog_reset_pending);
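
The WD_READ_SKIP handling above suspends the whole watchdog pass and resets every source's stored baseline, since those baselines would be meaningless after the long pause. A compact userspace model of that control flow (all names and the tick units are invented):

#include <stdbool.h>

#define INTERVAL  1     /* normal re-arm period, in ticks */
#define BACKOFF   300   /* extra wait after a skipped read */

enum read_status { READ_OK, READ_UNSTABLE, READ_SKIP };

struct source { const char *name; bool have_baseline; };

/* One timer callback, mirroring the patched clocksource_watchdog()
 * flow: READ_UNSTABLE condemns a single source, while READ_SKIP bails
 * out of the whole pass, invalidates every baseline (stale after the
 * pause), and re-arms far in the future. */
int watchdog_tick(struct source *srcs, int n,
                  enum read_status (*read)(struct source *))
{
    int extra_wait = 0;

    for (int i = 0; i < n; i++) {
        enum read_status ret = read(&srcs[i]);

        if (ret == READ_UNSTABLE)
            continue;               /* give up on this source only */
        if (ret == READ_SKIP) {
            for (int j = 0; j < n; j++)
                srcs[j].have_baseline = false;
            extra_wait = BACKOFF;
            break;                  /* suspend the whole check */
        }
        srcs[i].have_baseline = true;
        /* ...compare against the stored baseline here... */
    }
    return INTERVAL + extra_wait;   /* next timer expiry offset */
}
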
+diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
+index 3ae661ab62603..e4f0e3b0c4f4f 100644
+--- a/kernel/time/hrtimer.c
++++ b/kernel/time/hrtimer.c
+@@ -2126,6 +2126,7 @@ SYSCALL_DEFINE2(nanosleep, struct __kernel_timespec __user *, rqtp,
+ if (!timespec64_valid(&tu))
+ return -EINVAL;
+
++ current->restart_block.fn = do_no_restart_syscall;
+ current->restart_block.nanosleep.type = rmtp ? TT_NATIVE : TT_NONE;
+ current->restart_block.nanosleep.rmtp = rmtp;
+ return hrtimer_nanosleep(timespec64_to_ktime(tu), HRTIMER_MODE_REL,
+@@ -2147,6 +2148,7 @@ SYSCALL_DEFINE2(nanosleep_time32, struct old_timespec32 __user *, rqtp,
+ if (!timespec64_valid(&tu))
+ return -EINVAL;
+
++ current->restart_block.fn = do_no_restart_syscall;
+ current->restart_block.nanosleep.type = rmtp ? TT_COMPAT : TT_NONE;
+ current->restart_block.nanosleep.compat_rmtp = rmtp;
+ return hrtimer_nanosleep(timespec64_to_ktime(tu), HRTIMER_MODE_REL,
+diff --git a/kernel/time/posix-stubs.c b/kernel/time/posix-stubs.c
+index 90ea5f373e50e..828aeecbd1e8a 100644
+--- a/kernel/time/posix-stubs.c
++++ b/kernel/time/posix-stubs.c
+@@ -147,6 +147,7 @@ SYSCALL_DEFINE4(clock_nanosleep, const clockid_t, which_clock, int, flags,
+ return -EINVAL;
+ if (flags & TIMER_ABSTIME)
+ rmtp = NULL;
++ current->restart_block.fn = do_no_restart_syscall;
+ current->restart_block.nanosleep.type = rmtp ? TT_NATIVE : TT_NONE;
+ current->restart_block.nanosleep.rmtp = rmtp;
+ texp = timespec64_to_ktime(t);
+@@ -240,6 +241,7 @@ SYSCALL_DEFINE4(clock_nanosleep_time32, clockid_t, which_clock, int, flags,
+ return -EINVAL;
+ if (flags & TIMER_ABSTIME)
+ rmtp = NULL;
++ current->restart_block.fn = do_no_restart_syscall;
+ current->restart_block.nanosleep.type = rmtp ? TT_COMPAT : TT_NONE;
+ current->restart_block.nanosleep.compat_rmtp = rmtp;
+ texp = timespec64_to_ktime(t);
+diff --git a/kernel/time/posix-timers.c b/kernel/time/posix-timers.c
+index 5dead89308b74..0c8a87a11b39d 100644
+--- a/kernel/time/posix-timers.c
++++ b/kernel/time/posix-timers.c
+@@ -1270,6 +1270,7 @@ SYSCALL_DEFINE4(clock_nanosleep, const clockid_t, which_clock, int, flags,
+ return -EINVAL;
+ if (flags & TIMER_ABSTIME)
+ rmtp = NULL;
++ current->restart_block.fn = do_no_restart_syscall;
+ current->restart_block.nanosleep.type = rmtp ? TT_NATIVE : TT_NONE;
+ current->restart_block.nanosleep.rmtp = rmtp;
+
+@@ -1297,6 +1298,7 @@ SYSCALL_DEFINE4(clock_nanosleep_time32, clockid_t, which_clock, int, flags,
+ return -EINVAL;
+ if (flags & TIMER_ABSTIME)
+ rmtp = NULL;
++ current->restart_block.fn = do_no_restart_syscall;
+ current->restart_block.nanosleep.type = rmtp ? TT_COMPAT : TT_NONE;
+ current->restart_block.nanosleep.compat_rmtp = rmtp;
+
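
All of the nanosleep-style entry points patched here install do_no_restart_syscall before touching the rest of restart_block, so an early error return can never leave a stale restart function armed from a previously interrupted syscall. A userspace model of the reset-first discipline (types and error values are illustrative only):

#include <stdio.h>

struct restart_block {
    long (*fn)(struct restart_block *);
    long deadline;              /* state owned by whichever fn is set */
};

static long do_no_restart_syscall(struct restart_block *rb)
{
    (void)rb;
    return -514;                /* "do not restart" sentinel */
}

static long nanosleep_restart(struct restart_block *rb)
{
    return rb->deadline;        /* consumes state set up by the syscall */
}

static long sys_nanosleep(struct restart_block *rb, long ns)
{
    /* Reset FIRST, before any early return, so a later restart can
     * never dispatch through a stale fn/state pair. */
    rb->fn = do_no_restart_syscall;
    if (ns < 0)
        return -22;             /* -EINVAL: rb->fn is already safe */
    rb->deadline = ns;
    rb->fn = nanosleep_restart; /* armed only on the success path */
    return 0;
}

int main(void)
{
    struct restart_block rb;

    sys_nanosleep(&rb, -1);               /* fails early */
    printf("%ld\n", rb.fn(&rb));          /* -514, not a stale restart */
    return 0;
}
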
+diff --git a/kernel/time/test_udelay.c b/kernel/time/test_udelay.c
+index 13b11eb62685e..20d5df631570e 100644
+--- a/kernel/time/test_udelay.c
++++ b/kernel/time/test_udelay.c
+@@ -149,7 +149,7 @@ module_init(udelay_test_init);
+ static void __exit udelay_test_exit(void)
+ {
+ mutex_lock(&udelay_test_lock);
+- debugfs_remove(debugfs_lookup(DEBUGFS_FILENAME, NULL));
++ debugfs_lookup_and_remove(DEBUGFS_FILENAME, NULL);
+ mutex_unlock(&udelay_test_lock);
+ }
+
+diff --git a/kernel/torture.c b/kernel/torture.c
+index 789aeb0e1159c..9266ca168b8f5 100644
+--- a/kernel/torture.c
++++ b/kernel/torture.c
+@@ -915,7 +915,7 @@ void torture_kthread_stopping(char *title)
+ VERBOSE_TOROUT_STRING(buf);
+ while (!kthread_should_stop()) {
+ torture_shutdown_absorb(title);
+- schedule_timeout_uninterruptible(1);
++ schedule_timeout_uninterruptible(HZ / 20);
+ }
+ }
+ EXPORT_SYMBOL_GPL(torture_kthread_stopping);
+diff --git a/kernel/trace/blktrace.c b/kernel/trace/blktrace.c
+index 918a7d12df8ff..5743be5594153 100644
+--- a/kernel/trace/blktrace.c
++++ b/kernel/trace/blktrace.c
+@@ -320,8 +320,8 @@ static void blk_trace_free(struct request_queue *q, struct blk_trace *bt)
+ * under 'q->debugfs_dir', thus lookup and remove them.
+ */
+ if (!bt->dir) {
+- debugfs_remove(debugfs_lookup("dropped", q->debugfs_dir));
+- debugfs_remove(debugfs_lookup("msg", q->debugfs_dir));
++ debugfs_lookup_and_remove("dropped", q->debugfs_dir);
++ debugfs_lookup_and_remove("msg", q->debugfs_dir);
+ } else {
+ debugfs_remove(bt->dir);
+ }
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index c366a0a9ddba4..b641cab2745e9 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -1580,19 +1580,6 @@ static int rb_check_bpage(struct ring_buffer_per_cpu *cpu_buffer,
+ return 0;
+ }
+
+-/**
+- * rb_check_list - make sure a pointer to a list has the last bits zero
+- */
+-static int rb_check_list(struct ring_buffer_per_cpu *cpu_buffer,
+- struct list_head *list)
+-{
+- if (RB_WARN_ON(cpu_buffer, rb_list_head(list->prev) != list->prev))
+- return 1;
+- if (RB_WARN_ON(cpu_buffer, rb_list_head(list->next) != list->next))
+- return 1;
+- return 0;
+-}
+-
+ /**
+ * rb_check_pages - integrity check of buffer pages
+ * @cpu_buffer: CPU buffer with pages to test
+@@ -1602,36 +1589,27 @@ static int rb_check_list(struct ring_buffer_per_cpu *cpu_buffer,
+ */
+ static int rb_check_pages(struct ring_buffer_per_cpu *cpu_buffer)
+ {
+- struct list_head *head = cpu_buffer->pages;
+- struct buffer_page *bpage, *tmp;
++ struct list_head *head = rb_list_head(cpu_buffer->pages);
++ struct list_head *tmp;
+
+- /* Reset the head page if it exists */
+- if (cpu_buffer->head_page)
+- rb_set_head_page(cpu_buffer);
+-
+- rb_head_page_deactivate(cpu_buffer);
+-
+- if (RB_WARN_ON(cpu_buffer, head->next->prev != head))
+- return -1;
+- if (RB_WARN_ON(cpu_buffer, head->prev->next != head))
++ if (RB_WARN_ON(cpu_buffer,
++ rb_list_head(rb_list_head(head->next)->prev) != head))
+ return -1;
+
+- if (rb_check_list(cpu_buffer, head))
++ if (RB_WARN_ON(cpu_buffer,
++ rb_list_head(rb_list_head(head->prev)->next) != head))
+ return -1;
+
+- list_for_each_entry_safe(bpage, tmp, head, list) {
++ for (tmp = rb_list_head(head->next); tmp != head; tmp = rb_list_head(tmp->next)) {
+ if (RB_WARN_ON(cpu_buffer,
+- bpage->list.next->prev != &bpage->list))
++ rb_list_head(rb_list_head(tmp->next)->prev) != tmp))
+ return -1;
++
+ if (RB_WARN_ON(cpu_buffer,
+- bpage->list.prev->next != &bpage->list))
+- return -1;
+- if (rb_check_list(cpu_buffer, &bpage->list))
++ rb_list_head(rb_list_head(tmp->prev)->next) != tmp))
+ return -1;
+ }
+
+- rb_head_page_activate(cpu_buffer);
+-
+ return 0;
+ }
+
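
The rewritten rb_check_pages() walks bare list_head pointers and masks every link through rb_list_head() because the ring buffer stores flag bits in the low bits of ->next/->prev. A self-contained C illustration of validating such a flag-tagged circular list (RB_FLAG_MASK and the layout are assumptions for the demo, not the ring buffer's real encoding):

#include <stdint.h>
#include <stdio.h>

struct list_head { struct list_head *next, *prev; };

#define RB_FLAG_MASK 3UL

/* Every link must be masked before it is dereferenced or compared. */
static struct list_head *rb_list_head(struct list_head *p)
{
    return (struct list_head *)((uintptr_t)p & ~RB_FLAG_MASK);
}

/* Validate a circular list whose links may carry flag bits. */
static int check_pages(struct list_head *head)
{
    struct list_head *h = rb_list_head(head);

    for (struct list_head *tmp = rb_list_head(h->next);
         tmp != h;
         tmp = rb_list_head(tmp->next)) {
        if (rb_list_head(rb_list_head(tmp->next)->prev) != tmp)
            return -1;      /* forward/backward links disagree */
        if (rb_list_head(rb_list_head(tmp->prev)->next) != tmp)
            return -1;
    }
    return 0;
}

int main(void)
{
    struct list_head a, b;

    a.next = (struct list_head *)((uintptr_t)&b | 1);  /* flagged link */
    a.prev = &b;
    b.next = &a;
    b.prev = (struct list_head *)((uintptr_t)&a | 2);
    printf("check: %d\n", check_pages(&a));            /* 0: consistent */
    return 0;
}
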
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index c9e40f6926504..b677f8d61deb1 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -5598,7 +5598,7 @@ static const char readme_msg[] =
+ #ifdef CONFIG_HIST_TRIGGERS
+ "\t s:[synthetic/]<event> <field> [<field>]\n"
+ #endif
+- "\t e[:[<group>/][<event>]] <attached-group>.<attached-event> [<args>]\n"
++ "\t e[:[<group>/][<event>]] <attached-group>.<attached-event> [<args>] [if <filter>]\n"
+ "\t -:[<group>/][<event>]\n"
+ #ifdef CONFIG_KPROBE_EVENTS
+ "\t place: [<module>:]<symbol>[+<offset>]|<memaddr>\n"
+diff --git a/kernel/workqueue.c b/kernel/workqueue.c
+index 07895deca2711..76ea87b0251ce 100644
+--- a/kernel/workqueue.c
++++ b/kernel/workqueue.c
+@@ -326,7 +326,7 @@ static struct rcuwait manager_wait = __RCUWAIT_INITIALIZER(manager_wait);
+ static LIST_HEAD(workqueues); /* PR: list of all workqueues */
+ static bool workqueue_freezing; /* PL: have wqs started freezing? */
+
+-/* PL: allowable cpus for unbound wqs and work items */
++/* PL&A: allowable cpus for unbound wqs and work items */
+ static cpumask_var_t wq_unbound_cpumask;
+
+ /* CPU where unbound work was last round robin scheduled from this CPU */
+@@ -3952,7 +3952,8 @@ static void apply_wqattrs_cleanup(struct apply_wqattrs_ctx *ctx)
+ /* allocate the attrs and pwqs for later installation */
+ static struct apply_wqattrs_ctx *
+ apply_wqattrs_prepare(struct workqueue_struct *wq,
+- const struct workqueue_attrs *attrs)
++ const struct workqueue_attrs *attrs,
++ const cpumask_var_t unbound_cpumask)
+ {
+ struct apply_wqattrs_ctx *ctx;
+ struct workqueue_attrs *new_attrs, *tmp_attrs;
+@@ -3968,14 +3969,15 @@ apply_wqattrs_prepare(struct workqueue_struct *wq,
+ goto out_free;
+
+ /*
+- * Calculate the attrs of the default pwq.
++	 * Calculate the attrs of the default pwq with the passed-in
++	 * unbound_cpumask, which is either wq_unbound_cpumask itself
++	 * or the cpumask that is about to replace it.
+ * If the user configured cpumask doesn't overlap with the
+ * wq_unbound_cpumask, we fallback to the wq_unbound_cpumask.
+ */
+ copy_workqueue_attrs(new_attrs, attrs);
+- cpumask_and(new_attrs->cpumask, new_attrs->cpumask, wq_unbound_cpumask);
++ cpumask_and(new_attrs->cpumask, new_attrs->cpumask, unbound_cpumask);
+ if (unlikely(cpumask_empty(new_attrs->cpumask)))
+- cpumask_copy(new_attrs->cpumask, wq_unbound_cpumask);
++ cpumask_copy(new_attrs->cpumask, unbound_cpumask);
+
+ /*
+ * We may create multiple pwqs with differing cpumasks. Make a
+@@ -4072,7 +4074,7 @@ static int apply_workqueue_attrs_locked(struct workqueue_struct *wq,
+ wq->flags &= ~__WQ_ORDERED;
+ }
+
+- ctx = apply_wqattrs_prepare(wq, attrs);
++ ctx = apply_wqattrs_prepare(wq, attrs, wq_unbound_cpumask);
+ if (!ctx)
+ return -ENOMEM;
+
+@@ -5334,7 +5336,7 @@ out_unlock:
+ }
+ #endif /* CONFIG_FREEZER */
+
+-static int workqueue_apply_unbound_cpumask(void)
++static int workqueue_apply_unbound_cpumask(const cpumask_var_t unbound_cpumask)
+ {
+ LIST_HEAD(ctxs);
+ int ret = 0;
+@@ -5350,7 +5352,7 @@ static int workqueue_apply_unbound_cpumask(void)
+ if (wq->flags & __WQ_ORDERED)
+ continue;
+
+- ctx = apply_wqattrs_prepare(wq, wq->unbound_attrs);
++ ctx = apply_wqattrs_prepare(wq, wq->unbound_attrs, unbound_cpumask);
+ if (!ctx) {
+ ret = -ENOMEM;
+ break;
+@@ -5365,6 +5367,11 @@ static int workqueue_apply_unbound_cpumask(void)
+ apply_wqattrs_cleanup(ctx);
+ }
+
++ if (!ret) {
++ mutex_lock(&wq_pool_attach_mutex);
++ cpumask_copy(wq_unbound_cpumask, unbound_cpumask);
++ mutex_unlock(&wq_pool_attach_mutex);
++ }
+ return ret;
+ }
+
+@@ -5383,7 +5390,6 @@ static int workqueue_apply_unbound_cpumask(void)
+ int workqueue_set_unbound_cpumask(cpumask_var_t cpumask)
+ {
+ int ret = -EINVAL;
+- cpumask_var_t saved_cpumask;
+
+ /*
+ * Not excluding isolated cpus on purpose.
+@@ -5397,23 +5403,8 @@ int workqueue_set_unbound_cpumask(cpumask_var_t cpumask)
+ goto out_unlock;
+ }
+
+- if (!zalloc_cpumask_var(&saved_cpumask, GFP_KERNEL)) {
+- ret = -ENOMEM;
+- goto out_unlock;
+- }
+-
+- /* save the old wq_unbound_cpumask. */
+- cpumask_copy(saved_cpumask, wq_unbound_cpumask);
+-
+- /* update wq_unbound_cpumask at first and apply it to wqs. */
+- cpumask_copy(wq_unbound_cpumask, cpumask);
+- ret = workqueue_apply_unbound_cpumask();
+-
+- /* restore the wq_unbound_cpumask when failed. */
+- if (ret < 0)
+- cpumask_copy(wq_unbound_cpumask, saved_cpumask);
++ ret = workqueue_apply_unbound_cpumask(cpumask);
+
+- free_cpumask_var(saved_cpumask);
+ out_unlock:
+ apply_wqattrs_unlock();
+ }
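
The workqueue change converts "overwrite the global cpumask, apply, roll back on failure" into "prepare everything against the candidate mask, commit the global only on success", which is why saved_cpumask and its allocation-failure path disappear. A sketch of the prepare/commit shape in plain C (toy single-word cpumask; all names invented):

#include <errno.h>

typedef unsigned long cpumask_t;     /* toy one-word cpumask */

static cpumask_t wq_unbound_cpumask; /* the global being replaced */

/* Build per-workqueue state from 'candidate' without touching the
 * global (stand-in for apply_wqattrs_prepare(..., unbound_cpumask)). */
static int prepare_all(cpumask_t candidate, cpumask_t *out, int n)
{
    for (int i = 0; i < n; i++) {
        if (!candidate)
            return -EINVAL;          /* any failure: global untouched */
        out[i] = candidate;
    }
    return 0;
}

int set_unbound_cpumask(cpumask_t candidate, cpumask_t *per_wq, int n)
{
    int ret = prepare_all(candidate, per_wq, n);

    /* Commit point: the global changes only after every prepare step
     * has succeeded, so no rollback copy is ever needed. */
    if (!ret)
        wq_unbound_cpumask = candidate;
    return ret;
}
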
+diff --git a/lib/bug.c b/lib/bug.c
+index c223a2575b721..e0ff219899902 100644
+--- a/lib/bug.c
++++ b/lib/bug.c
+@@ -47,6 +47,7 @@
+ #include <linux/sched.h>
+ #include <linux/rculist.h>
+ #include <linux/ftrace.h>
++#include <linux/context_tracking.h>
+
+ extern struct bug_entry __start___bug_table[], __stop___bug_table[];
+
+@@ -153,7 +154,7 @@ struct bug_entry *find_bug(unsigned long bugaddr)
+ return module_find_bug(bugaddr);
+ }
+
+-enum bug_trap_type report_bug(unsigned long bugaddr, struct pt_regs *regs)
++static enum bug_trap_type __report_bug(unsigned long bugaddr, struct pt_regs *regs)
+ {
+ struct bug_entry *bug;
+ const char *file;
+@@ -209,6 +210,18 @@ enum bug_trap_type report_bug(unsigned long bugaddr, struct pt_regs *regs)
+ return BUG_TRAP_TYPE_BUG;
+ }
+
++enum bug_trap_type report_bug(unsigned long bugaddr, struct pt_regs *regs)
++{
++ enum bug_trap_type ret;
++ bool rcu = false;
++
++ rcu = warn_rcu_enter();
++ ret = __report_bug(bugaddr, regs);
++ warn_rcu_exit(rcu);
++
++ return ret;
++}
++
+ static void clear_once_table(struct bug_entry *start, struct bug_entry *end)
+ {
+ struct bug_entry *bug;
+diff --git a/lib/errname.c b/lib/errname.c
+index 05cbf731545f0..67739b174a8cc 100644
+--- a/lib/errname.c
++++ b/lib/errname.c
+@@ -21,6 +21,7 @@ static const char *names_0[] = {
+ E(EADDRNOTAVAIL),
+ E(EADV),
+ E(EAFNOSUPPORT),
++ E(EAGAIN), /* EWOULDBLOCK */
+ E(EALREADY),
+ E(EBADE),
+ E(EBADF),
+@@ -31,15 +32,17 @@ static const char *names_0[] = {
+ E(EBADSLT),
+ E(EBFONT),
+ E(EBUSY),
+-#ifdef ECANCELLED
+- E(ECANCELLED),
+-#endif
++ E(ECANCELED), /* ECANCELLED */
+ E(ECHILD),
+ E(ECHRNG),
+ E(ECOMM),
+ E(ECONNABORTED),
++ E(ECONNREFUSED), /* EREFUSED */
+ E(ECONNRESET),
++ E(EDEADLK), /* EDEADLOCK */
++#if EDEADLK != EDEADLOCK /* mips, sparc, powerpc */
+ E(EDEADLOCK),
++#endif
+ E(EDESTADDRREQ),
+ E(EDOM),
+ E(EDOTDOT),
+@@ -166,14 +169,17 @@ static const char *names_0[] = {
+ E(EUSERS),
+ E(EXDEV),
+ E(EXFULL),
+-
+- E(ECANCELED), /* ECANCELLED */
+- E(EAGAIN), /* EWOULDBLOCK */
+- E(ECONNREFUSED), /* EREFUSED */
+- E(EDEADLK), /* EDEADLOCK */
+ };
+ #undef E
+
++#ifdef EREFUSED /* parisc */
++static_assert(EREFUSED == ECONNREFUSED);
++#endif
++#ifdef ECANCELLED /* parisc */
++static_assert(ECANCELLED == ECANCELED);
++#endif
++static_assert(EAGAIN == EWOULDBLOCK); /* everywhere */
++
+ #define E(err) [err - 512 + BUILD_BUG_ON_ZERO(err < 512 || err > 550)] = "-" #err
+ static const char *names_512[] = {
+ E(ERESTARTSYS),
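
Rather than listing the aliased errno names conditionally, the table now keeps one entry per value and proves the aliasing at compile time. The same static_assert technique works in plain C11 userspace:

#include <assert.h>   /* static_assert (C11) */
#include <errno.h>
#include <stdio.h>

/* Compile-time proof that the alias shares a value with the canonical
 * name, so one table entry covers both -- the same check the patch
 * adds for EAGAIN/EWOULDBLOCK and the parisc-only spellings. */
static_assert(EAGAIN == EWOULDBLOCK, "expected EWOULDBLOCK to alias EAGAIN");

#if defined(EDEADLOCK) && EDEADLK != EDEADLOCK
/* mips/sparc/powerpc keep a distinct EDEADLOCK; it needs its own entry */
#endif

int main(void)
{
    printf("EAGAIN=%d EWOULDBLOCK=%d\n", EAGAIN, EWOULDBLOCK);
    return 0;
}
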
+diff --git a/lib/kobject.c b/lib/kobject.c
+index 985ee1c4f2c60..d20ce15eec2d0 100644
+--- a/lib/kobject.c
++++ b/lib/kobject.c
+@@ -112,7 +112,7 @@ static int get_kobj_path_length(const struct kobject *kobj)
+ return length;
+ }
+
+-static void fill_kobj_path(const struct kobject *kobj, char *path, int length)
++static int fill_kobj_path(const struct kobject *kobj, char *path, int length)
+ {
+ const struct kobject *parent;
+
+@@ -121,12 +121,16 @@ static void fill_kobj_path(const struct kobject *kobj, char *path, int length)
+ int cur = strlen(kobject_name(parent));
+ /* back up enough to print this name with '/' */
+ length -= cur;
++ if (length <= 0)
++ return -EINVAL;
+ memcpy(path + length, kobject_name(parent), cur);
+ *(path + --length) = '/';
+ }
+
+ pr_debug("kobject: '%s' (%p): %s: path = '%s'\n", kobject_name(kobj),
+ kobj, __func__, path);
++
++ return 0;
+ }
+
+ /**
+@@ -141,13 +145,17 @@ char *kobject_get_path(const struct kobject *kobj, gfp_t gfp_mask)
+ char *path;
+ int len;
+
++retry:
+ len = get_kobj_path_length(kobj);
+ if (len == 0)
+ return NULL;
+ path = kzalloc(len, gfp_mask);
+ if (!path)
+ return NULL;
+- fill_kobj_path(kobj, path, len);
++ if (fill_kobj_path(kobj, path, len)) {
++ kfree(path);
++ goto retry;
++ }
+
+ return path;
+ }
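
fill_kobj_path() can race with a rename that grows a path component between get_kobj_path_length() and the fill, so the fill is now bounds-checked and the caller re-measures and retries on overflow. A runnable userspace model of the measure/alloc/fill-with-retry loop (names and layout invented):

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static int path_length(char **names, int depth)
{
    int len = 1;                            /* trailing NUL */

    for (int i = 0; i < depth; i++)
        len += (int)strlen(names[i]) + 1;   /* "/name" */
    return len;
}

/* Fill from the end, bounds-checked: a concurrent rename between
 * measuring and filling can make the names longer than measured. */
static int fill_path(char **names, int depth, char *path, int length)
{
    path[--length] = '\0';
    for (int i = depth - 1; i >= 0; i--) {
        int cur = (int)strlen(names[i]);

        length -= cur;
        if (length <= 0)
            return -EINVAL;                 /* raced: caller re-measures */
        memcpy(path + length, names[i], (size_t)cur);
        path[--length] = '/';
    }
    return 0;
}

char *get_path(char **names, int depth)
{
    char *path;
    int len;
retry:
    len = path_length(names, depth);
    path = calloc(1, (size_t)len);
    if (!path)
        return NULL;
    if (fill_path(names, depth, path, len)) {
        free(path);
        goto retry;                         /* lengths changed under us */
    }
    return path;
}

int main(void)
{
    char *names[] = { "devices", "pci0000:00", "0000:00:1f.2" };
    char *p = get_path(names, 3);

    printf("%s\n", p);   /* /devices/pci0000:00/0000:00:1f.2 */
    free(p);
    return 0;
}
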
+diff --git a/lib/mpi/mpicoder.c b/lib/mpi/mpicoder.c
+index 39c4c67310946..3cb6bd148fa9e 100644
+--- a/lib/mpi/mpicoder.c
++++ b/lib/mpi/mpicoder.c
+@@ -504,7 +504,8 @@ MPI mpi_read_raw_from_sgl(struct scatterlist *sgl, unsigned int nbytes)
+
+ while (sg_miter_next(&miter)) {
+ buff = miter.addr;
+- len = miter.length;
++ len = min_t(unsigned, miter.length, nbytes);
++ nbytes -= len;
+
+ for (x = 0; x < len; x++) {
+ a <<= 8;
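
The mpicoder fix clamps each scatterlist chunk to the bytes still wanted, so a final chunk longer than the remaining MPI payload can no longer be over-read. The same clamp in a tiny self-contained C example:

#include <stdio.h>

#define MIN(a, b) ((a) < (b) ? (a) : (b))

/* Consume at most 'nbytes' from a sequence of buffers, even when the
 * final buffer is longer than what remains -- the same clamp the
 * patch adds to mpi_read_raw_from_sgl(). */
unsigned long consume(const unsigned char **bufs, const unsigned int *lens,
                      int n, unsigned int nbytes)
{
    unsigned long acc = 0;

    for (int i = 0; i < n && nbytes; i++) {
        unsigned int len = MIN(lens[i], nbytes);  /* never overrun nbytes */

        nbytes -= len;
        for (unsigned int x = 0; x < len; x++)
            acc = (acc << 8) | bufs[i][x];
    }
    return acc;
}

int main(void)
{
    const unsigned char a[] = { 0x12, 0x34 }, b[] = { 0x56, 0x78, 0x9a };
    const unsigned char *bufs[] = { a, b };
    const unsigned int lens[] = { 2, 3 };

    /* Only 3 bytes are wanted; the trailing 0x78 0x9a must be ignored. */
    printf("0x%lx\n", consume(bufs, lens, 2, 3));   /* 0x123456 */
    return 0;
}
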
+diff --git a/lib/sbitmap.c b/lib/sbitmap.c
+index 1fcede228fa25..888c51235bd3c 100644
+--- a/lib/sbitmap.c
++++ b/lib/sbitmap.c
+@@ -464,13 +464,10 @@ void sbitmap_queue_recalculate_wake_batch(struct sbitmap_queue *sbq,
+ unsigned int users)
+ {
+ unsigned int wake_batch;
+- unsigned int min_batch;
+ unsigned int depth = (sbq->sb.depth + users - 1) / users;
+
+- min_batch = sbq->sb.depth >= (4 * SBQ_WAIT_QUEUES) ? 4 : 1;
+-
+ wake_batch = clamp_val(depth / SBQ_WAIT_QUEUES,
+- min_batch, SBQ_WAKE_BATCH);
++ 1, SBQ_WAKE_BATCH);
+
+ WRITE_ONCE(sbq->wake_batch, wake_batch);
+ }
+@@ -521,11 +518,9 @@ unsigned long __sbitmap_queue_get_batch(struct sbitmap_queue *sbq, int nr_tags,
+
+ get_mask = ((1UL << nr_tags) - 1) << nr;
+ val = READ_ONCE(map->word);
+- do {
+- if ((val & ~get_mask) != val)
+- goto next;
+- } while (!atomic_long_try_cmpxchg(ptr, &val,
+- get_mask | val));
++ while (!atomic_long_try_cmpxchg(ptr, &val,
++ get_mask | val))
++ ;
+ get_mask = (get_mask & ~val) >> nr;
+ if (get_mask) {
+ *offset = nr + (index << sb->shift);
+diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c
+index e1a4315c4be6a..402d30b37aba9 100644
+--- a/mm/damon/paddr.c
++++ b/mm/damon/paddr.c
+@@ -219,12 +219,11 @@ static unsigned long damon_pa_pageout(struct damon_region *r)
+ put_page(page);
+ continue;
+ }
+- if (PageUnevictable(page)) {
++ if (PageUnevictable(page))
+ putback_lru_page(page);
+- } else {
++ else
+ list_add(&page->lru, &page_list);
+- put_page(page);
+- }
++ put_page(page);
+ }
+ applied = reclaim_pages(&page_list);
+ cond_resched();
+diff --git a/mm/huge_memory.c b/mm/huge_memory.c
+index 1b791b26d72d7..d6651be1aa520 100644
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -2837,6 +2837,9 @@ void deferred_split_huge_page(struct page *page)
+ if (PageSwapCache(page))
+ return;
+
++ if (!list_empty(page_deferred_list(page)))
++ return;
++
+ spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
+ if (list_empty(page_deferred_list(page))) {
+ count_vm_event(THP_DEFERRED_SPLIT_PAGE);
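
deferred_split_huge_page() now performs a lockless list_empty() pre-check and repeats it under the lock: the classic double-checked pattern, where the unlocked read is only an optimization and the locked one decides. A sketch of the shape (a pthread mutex stands in for the split-queue spinlock; the unlocked read is best-effort, as in the kernel):

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;

struct item { bool queued; };   /* stand-in for page_deferred_list(page) */

static void enqueue_locked(struct item *it) { it->queued = true; }

/* Mirrors the patched deferred_split_huge_page(): a cheap unlocked
 * check skips the lock entirely in the common already-queued case;
 * the check is repeated under the lock because the first read raced. */
void maybe_enqueue(struct item *it)
{
    if (it->queued)             /* unlocked fast path: best effort */
        return;

    pthread_mutex_lock(&queue_lock);
    if (!it->queued)            /* authoritative re-check under lock */
        enqueue_locked(it);
    pthread_mutex_unlock(&queue_lock);
}
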
+diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
+index 45e93a545dd7e..a559037cce00c 100644
+--- a/mm/hugetlb_vmemmap.c
++++ b/mm/hugetlb_vmemmap.c
+@@ -581,7 +581,7 @@ static struct ctl_table hugetlb_vmemmap_sysctls[] = {
+ {
+ .procname = "hugetlb_optimize_vmemmap",
+ .data = &vmemmap_optimize_enabled,
+- .maxlen = sizeof(int),
++ .maxlen = sizeof(vmemmap_optimize_enabled),
+ .mode = 0644,
+ .proc_handler = proc_dobool,
+ },
+diff --git a/mm/memcontrol.c b/mm/memcontrol.c
+index 73afff8062f9b..2eee092f8f119 100644
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -3914,6 +3914,10 @@ static int mem_cgroup_move_charge_write(struct cgroup_subsys_state *css,
+ {
+ struct mem_cgroup *memcg = mem_cgroup_from_css(css);
+
++ pr_warn_once("Cgroup memory moving (move_charge_at_immigrate) is deprecated. "
++ "Please report your usecase to linux-mm@kvack.org if you "
++ "depend on this functionality.\n");
++
+ if (val & ~MOVE_MASK)
+ return -EINVAL;
+
+diff --git a/mm/memory-failure.c b/mm/memory-failure.c
+index c77a9e37e27e0..89361306bfdba 100644
+--- a/mm/memory-failure.c
++++ b/mm/memory-failure.c
+@@ -1034,7 +1034,7 @@ static int me_pagecache_dirty(struct page_state *ps, struct page *p)
+ * cache and swap cache(ie. page is freshly swapped in). So it could be
+ * referenced concurrently by 2 types of PTEs:
+ * normal PTEs and swap PTEs. We try to handle them consistently by calling
+- * try_to_unmap(TTU_IGNORE_HWPOISON) to convert the normal PTEs to swap PTEs,
++ * try_to_unmap(!TTU_HWPOISON) to convert the normal PTEs to swap PTEs,
+ * and then
+ * - clear dirty bit to prevent IO
+ * - remove from LRU
+@@ -1415,7 +1415,7 @@ static bool hwpoison_user_mappings(struct page *p, unsigned long pfn,
+ int flags, struct page *hpage)
+ {
+ struct folio *folio = page_folio(hpage);
+- enum ttu_flags ttu = TTU_IGNORE_MLOCK | TTU_SYNC;
++ enum ttu_flags ttu = TTU_IGNORE_MLOCK | TTU_SYNC | TTU_HWPOISON;
+ struct address_space *mapping;
+ LIST_HEAD(tokill);
+ bool unmap_success;
+@@ -1445,7 +1445,7 @@ static bool hwpoison_user_mappings(struct page *p, unsigned long pfn,
+
+ if (PageSwapCache(p)) {
+ pr_err("%#lx: keeping poisoned page in swap cache\n", pfn);
+- ttu |= TTU_IGNORE_HWPOISON;
++ ttu &= ~TTU_HWPOISON;
+ }
+
+ /*
+@@ -1460,7 +1460,7 @@ static bool hwpoison_user_mappings(struct page *p, unsigned long pfn,
+ if (page_mkclean(hpage)) {
+ SetPageDirty(hpage);
+ } else {
+- ttu |= TTU_IGNORE_HWPOISON;
++ ttu &= ~TTU_HWPOISON;
+ pr_info("%#lx: corrupted page was clean: dropped without side effects\n",
+ pfn);
+ }
+diff --git a/mm/memory-tiers.c b/mm/memory-tiers.c
+index c734658c62424..e593e56e530b7 100644
+--- a/mm/memory-tiers.c
++++ b/mm/memory-tiers.c
+@@ -211,8 +211,8 @@ static struct memory_tier *find_create_memory_tier(struct memory_dev_type *memty
+
+ ret = device_register(&new_memtier->dev);
+ if (ret) {
+- list_del(&memtier->list);
+- put_device(&memtier->dev);
++ list_del(&new_memtier->list);
++ put_device(&new_memtier->dev);
+ return ERR_PTR(ret);
+ }
+ memtier = new_memtier;
+diff --git a/mm/rmap.c b/mm/rmap.c
+index b616870a09be8..3b45d049069e2 100644
+--- a/mm/rmap.c
++++ b/mm/rmap.c
+@@ -1615,7 +1615,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
+ /* Update high watermark before we lower rss */
+ update_hiwater_rss(mm);
+
+- if (PageHWPoison(subpage) && !(flags & TTU_IGNORE_HWPOISON)) {
++ if (PageHWPoison(subpage) && (flags & TTU_HWPOISON)) {
+ pteval = swp_entry_to_pte(make_hwpoison_entry(subpage));
+ if (folio_test_hugetlb(folio)) {
+ hugetlb_count_sub(folio_nr_pages(folio), mm);
+diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c
+index acf563fbdfd95..61a34801e61ea 100644
+--- a/net/bluetooth/hci_conn.c
++++ b/net/bluetooth/hci_conn.c
+@@ -1981,16 +1981,14 @@ static void hci_iso_qos_setup(struct hci_dev *hdev, struct hci_conn *conn,
+ qos->latency = conn->le_conn_latency;
+ }
+
+-static struct hci_conn *hci_bind_bis(struct hci_conn *conn,
+- struct bt_iso_qos *qos)
++static void hci_bind_bis(struct hci_conn *conn,
++ struct bt_iso_qos *qos)
+ {
+ /* Update LINK PHYs according to QoS preference */
+ conn->le_tx_phy = qos->out.phy;
+ conn->le_tx_phy = qos->out.phy;
+ conn->iso_qos = *qos;
+ conn->state = BT_BOUND;
+-
+- return conn;
+ }
+
+ static int create_big_sync(struct hci_dev *hdev, void *data)
+@@ -2119,11 +2117,7 @@ struct hci_conn *hci_connect_bis(struct hci_dev *hdev, bdaddr_t *dst,
+ if (IS_ERR(conn))
+ return conn;
+
+- conn = hci_bind_bis(conn, qos);
+- if (!conn) {
+- hci_conn_drop(conn);
+- return ERR_PTR(-ENOMEM);
+- }
++ hci_bind_bis(conn, qos);
+
+	/* Add Basic Announcement into Periodic Adv Data if BASE is set */
+ if (base_len && base) {
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index a3e0dc6a6e732..adfc3ea06d088 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -2683,14 +2683,6 @@ int l2cap_chan_send(struct l2cap_chan *chan, struct msghdr *msg, size_t len)
+ if (IS_ERR(skb))
+ return PTR_ERR(skb);
+
+- /* Channel lock is released before requesting new skb and then
+- * reacquired thus we need to recheck channel state.
+- */
+- if (chan->state != BT_CONNECTED) {
+- kfree_skb(skb);
+- return -ENOTCONN;
+- }
+-
+ l2cap_do_send(chan, skb);
+ return len;
+ }
+@@ -2735,14 +2727,6 @@ int l2cap_chan_send(struct l2cap_chan *chan, struct msghdr *msg, size_t len)
+ if (IS_ERR(skb))
+ return PTR_ERR(skb);
+
+- /* Channel lock is released before requesting new skb and then
+- * reacquired thus we need to recheck channel state.
+- */
+- if (chan->state != BT_CONNECTED) {
+- kfree_skb(skb);
+- return -ENOTCONN;
+- }
+-
+ l2cap_do_send(chan, skb);
+ err = len;
+ break;
+@@ -2763,14 +2747,6 @@ int l2cap_chan_send(struct l2cap_chan *chan, struct msghdr *msg, size_t len)
+ */
+ err = l2cap_segment_sdu(chan, &seg_queue, msg, len);
+
+- /* The channel could have been closed while segmenting,
+- * check that it is still connected.
+- */
+- if (chan->state != BT_CONNECTED) {
+- __skb_queue_purge(&seg_queue);
+- err = -ENOTCONN;
+- }
+-
+ if (err)
+ break;
+
+diff --git a/net/bluetooth/l2cap_sock.c b/net/bluetooth/l2cap_sock.c
+index ca8f07f3542b8..eebe256104bc0 100644
+--- a/net/bluetooth/l2cap_sock.c
++++ b/net/bluetooth/l2cap_sock.c
+@@ -1624,6 +1624,14 @@ static struct sk_buff *l2cap_sock_alloc_skb_cb(struct l2cap_chan *chan,
+ if (!skb)
+ return ERR_PTR(err);
+
++ /* Channel lock is released before requesting new skb and then
++ * reacquired thus we need to recheck channel state.
++ */
++ if (chan->state != BT_CONNECTED) {
++ kfree_skb(skb);
++ return ERR_PTR(-ENOTCONN);
++ }
++
+ skb->priority = sk->sk_priority;
+
+ bt_cb(skb)->l2cap.chan = chan;
+diff --git a/net/can/isotp.c b/net/can/isotp.c
+index fc81d77724a13..9bc344851704e 100644
+--- a/net/can/isotp.c
++++ b/net/can/isotp.c
+@@ -1220,6 +1220,9 @@ static int isotp_bind(struct socket *sock, struct sockaddr *uaddr, int len)
+ if (len < ISOTP_MIN_NAMELEN)
+ return -EINVAL;
+
++ if (addr->can_family != AF_CAN)
++ return -EINVAL;
++
+ /* sanitize tx CAN identifier */
+ if (tx_id & CAN_EFF_FLAG)
+ tx_id &= (CAN_EFF_FLAG | CAN_EFF_MASK);
+diff --git a/net/core/scm.c b/net/core/scm.c
+index 5c356f0dee30c..acb7d776fa6ec 100644
+--- a/net/core/scm.c
++++ b/net/core/scm.c
+@@ -229,6 +229,8 @@ int put_cmsg(struct msghdr * msg, int level, int type, int len, void *data)
+ if (msg->msg_control_is_user) {
+ struct cmsghdr __user *cm = msg->msg_control_user;
+
++ check_object_size(data, cmlen - sizeof(*cm), true);
++
+ if (!user_write_access_begin(cm, cmlen))
+ goto efault;
+
+diff --git a/net/core/sock.c b/net/core/sock.c
+index 6f27c24016fee..63680f999bf6d 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -3381,7 +3381,7 @@ void sk_stop_timer_sync(struct sock *sk, struct timer_list *timer)
+ }
+ EXPORT_SYMBOL(sk_stop_timer_sync);
+
+-void sock_init_data(struct socket *sock, struct sock *sk)
++void sock_init_data_uid(struct socket *sock, struct sock *sk, kuid_t uid)
+ {
+ sk_init_common(sk);
+ sk->sk_send_head = NULL;
+@@ -3401,11 +3401,10 @@ void sock_init_data(struct socket *sock, struct sock *sk)
+ sk->sk_type = sock->type;
+ RCU_INIT_POINTER(sk->sk_wq, &sock->wq);
+ sock->sk = sk;
+- sk->sk_uid = SOCK_INODE(sock)->i_uid;
+ } else {
+ RCU_INIT_POINTER(sk->sk_wq, NULL);
+- sk->sk_uid = make_kuid(sock_net(sk)->user_ns, 0);
+ }
++ sk->sk_uid = uid;
+
+ rwlock_init(&sk->sk_callback_lock);
+ if (sk->sk_kern_sock)
+@@ -3463,6 +3462,16 @@ void sock_init_data(struct socket *sock, struct sock *sk)
+ refcount_set(&sk->sk_refcnt, 1);
+ atomic_set(&sk->sk_drops, 0);
+ }
++EXPORT_SYMBOL(sock_init_data_uid);
++
++void sock_init_data(struct socket *sock, struct sock *sk)
++{
++ kuid_t uid = sock ?
++ SOCK_INODE(sock)->i_uid :
++ make_kuid(sock_net(sk)->user_ns, 0);
++
++ sock_init_data_uid(sock, sk, uid);
++}
+ EXPORT_SYMBOL(sock_init_data);
+
+ void lock_sock_nested(struct sock *sk, int subclass)
+diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c
+index f58d73888638b..7a13dd7f546b6 100644
+--- a/net/ipv4/inet_hashtables.c
++++ b/net/ipv4/inet_hashtables.c
+@@ -1008,17 +1008,7 @@ int __inet_hash_connect(struct inet_timewait_death_row *death_row,
+ u32 index;
+
+ if (port) {
+- head = &hinfo->bhash[inet_bhashfn(net, port,
+- hinfo->bhash_size)];
+- tb = inet_csk(sk)->icsk_bind_hash;
+- spin_lock_bh(&head->lock);
+- if (sk_head(&tb->owners) == sk && !sk->sk_bind_node.next) {
+- inet_ehash_nolisten(sk, NULL, NULL);
+- spin_unlock_bh(&head->lock);
+- return 0;
+- }
+- spin_unlock(&head->lock);
+- /* No definite answer... Walk to established hash table */
++ local_bh_disable();
+ ret = check_established(death_row, sk, port, NULL);
+ local_bh_enable();
+ return ret;
+diff --git a/net/l2tp/l2tp_ppp.c b/net/l2tp/l2tp_ppp.c
+index db2e584c625e5..f011af6601c9c 100644
+--- a/net/l2tp/l2tp_ppp.c
++++ b/net/l2tp/l2tp_ppp.c
+@@ -650,54 +650,22 @@ static int pppol2tp_tunnel_mtu(const struct l2tp_tunnel *tunnel)
+ return mtu - PPPOL2TP_HEADER_OVERHEAD;
+ }
+
+-/* connect() handler. Attach a PPPoX socket to a tunnel UDP socket
+- */
+-static int pppol2tp_connect(struct socket *sock, struct sockaddr *uservaddr,
+- int sockaddr_len, int flags)
++static struct l2tp_tunnel *pppol2tp_tunnel_get(struct net *net,
++ const struct l2tp_connect_info *info,
++ bool *new_tunnel)
+ {
+- struct sock *sk = sock->sk;
+- struct pppox_sock *po = pppox_sk(sk);
+- struct l2tp_session *session = NULL;
+- struct l2tp_connect_info info;
+ struct l2tp_tunnel *tunnel;
+- struct pppol2tp_session *ps;
+- struct l2tp_session_cfg cfg = { 0, };
+- bool drop_refcnt = false;
+- bool drop_tunnel = false;
+- bool new_session = false;
+- bool new_tunnel = false;
+ int error;
+
+- error = pppol2tp_sockaddr_get_info(uservaddr, sockaddr_len, &info);
+- if (error < 0)
+- return error;
++ *new_tunnel = false;
+
+- lock_sock(sk);
+-
+- /* Check for already bound sockets */
+- error = -EBUSY;
+- if (sk->sk_state & PPPOX_CONNECTED)
+- goto end;
+-
+- /* We don't supporting rebinding anyway */
+- error = -EALREADY;
+- if (sk->sk_user_data)
+- goto end; /* socket is already attached */
+-
+- /* Don't bind if tunnel_id is 0 */
+- error = -EINVAL;
+- if (!info.tunnel_id)
+- goto end;
+-
+- tunnel = l2tp_tunnel_get(sock_net(sk), info.tunnel_id);
+- if (tunnel)
+- drop_tunnel = true;
++ tunnel = l2tp_tunnel_get(net, info->tunnel_id);
+
+ /* Special case: create tunnel context if session_id and
+ * peer_session_id is 0. Otherwise look up tunnel using supplied
+ * tunnel id.
+ */
+- if (!info.session_id && !info.peer_session_id) {
++ if (!info->session_id && !info->peer_session_id) {
+ if (!tunnel) {
+ struct l2tp_tunnel_cfg tcfg = {
+ .encap = L2TP_ENCAPTYPE_UDP,
+@@ -706,40 +674,82 @@ static int pppol2tp_connect(struct socket *sock, struct sockaddr *uservaddr,
+ /* Prevent l2tp_tunnel_register() from trying to set up
+ * a kernel socket.
+ */
+- if (info.fd < 0) {
+- error = -EBADF;
+- goto end;
+- }
++ if (info->fd < 0)
++ return ERR_PTR(-EBADF);
+
+- error = l2tp_tunnel_create(info.fd,
+- info.version,
+- info.tunnel_id,
+- info.peer_tunnel_id, &tcfg,
++ error = l2tp_tunnel_create(info->fd,
++ info->version,
++ info->tunnel_id,
++ info->peer_tunnel_id, &tcfg,
+ &tunnel);
+ if (error < 0)
+- goto end;
++ return ERR_PTR(error);
+
+ l2tp_tunnel_inc_refcount(tunnel);
+- error = l2tp_tunnel_register(tunnel, sock_net(sk),
+- &tcfg);
++ error = l2tp_tunnel_register(tunnel, net, &tcfg);
+ if (error < 0) {
+ kfree(tunnel);
+- goto end;
++ return ERR_PTR(error);
+ }
+- drop_tunnel = true;
+- new_tunnel = true;
++
++ *new_tunnel = true;
+ }
+ } else {
+ /* Error if we can't find the tunnel */
+- error = -ENOENT;
+ if (!tunnel)
+- goto end;
++ return ERR_PTR(-ENOENT);
+
+ /* Error if socket is not prepped */
+- if (!tunnel->sock)
+- goto end;
++ if (!tunnel->sock) {
++ l2tp_tunnel_dec_refcount(tunnel);
++ return ERR_PTR(-ENOENT);
++ }
+ }
+
++ return tunnel;
++}
++
++/* connect() handler. Attach a PPPoX socket to a tunnel UDP socket
++ */
++static int pppol2tp_connect(struct socket *sock, struct sockaddr *uservaddr,
++ int sockaddr_len, int flags)
++{
++ struct sock *sk = sock->sk;
++ struct pppox_sock *po = pppox_sk(sk);
++ struct l2tp_session *session = NULL;
++ struct l2tp_connect_info info;
++ struct l2tp_tunnel *tunnel;
++ struct pppol2tp_session *ps;
++ struct l2tp_session_cfg cfg = { 0, };
++ bool drop_refcnt = false;
++ bool new_session = false;
++ bool new_tunnel = false;
++ int error;
++
++ error = pppol2tp_sockaddr_get_info(uservaddr, sockaddr_len, &info);
++ if (error < 0)
++ return error;
++
++ /* Don't bind if tunnel_id is 0 */
++ if (!info.tunnel_id)
++ return -EINVAL;
++
++ tunnel = pppol2tp_tunnel_get(sock_net(sk), &info, &new_tunnel);
++ if (IS_ERR(tunnel))
++ return PTR_ERR(tunnel);
++
++ lock_sock(sk);
++
++ /* Check for already bound sockets */
++ error = -EBUSY;
++ if (sk->sk_state & PPPOX_CONNECTED)
++ goto end;
++
++	/* We don't support rebinding anyway */
++ error = -EALREADY;
++ if (sk->sk_user_data)
++ goto end; /* socket is already attached */
++
+ if (tunnel->peer_tunnel_id == 0)
+ tunnel->peer_tunnel_id = info.peer_tunnel_id;
+
+@@ -840,8 +850,7 @@ end:
+ }
+ if (drop_refcnt)
+ l2tp_session_dec_refcount(session);
+- if (drop_tunnel)
+- l2tp_tunnel_dec_refcount(tunnel);
++ l2tp_tunnel_dec_refcount(tunnel);
+ release_sock(sk);
+
+ return error;
+diff --git a/net/mac80211/cfg.c b/net/mac80211/cfg.c
+index 672eff6f5d328..d611e15301839 100644
+--- a/net/mac80211/cfg.c
++++ b/net/mac80211/cfg.c
+@@ -4622,6 +4622,20 @@ unlock:
+ sdata_unlock(sdata);
+ }
+
++void ieee80211_color_collision_detection_work(struct work_struct *work)
++{
++ struct delayed_work *delayed_work = to_delayed_work(work);
++ struct ieee80211_link_data *link =
++ container_of(delayed_work, struct ieee80211_link_data,
++ color_collision_detect_work);
++ struct ieee80211_sub_if_data *sdata = link->sdata;
++
++ sdata_lock(sdata);
++ cfg80211_obss_color_collision_notify(sdata->dev, link->color_bitmap,
++ GFP_KERNEL);
++ sdata_unlock(sdata);
++}
++
+ void ieee80211_color_change_finish(struct ieee80211_vif *vif)
+ {
+ struct ieee80211_sub_if_data *sdata = vif_to_sdata(vif);
+@@ -4636,11 +4650,21 @@ ieeee80211_obss_color_collision_notify(struct ieee80211_vif *vif,
+ u64 color_bitmap, gfp_t gfp)
+ {
+ struct ieee80211_sub_if_data *sdata = vif_to_sdata(vif);
++ struct ieee80211_link_data *link = &sdata->deflink;
+
+ if (sdata->vif.bss_conf.color_change_active || sdata->vif.bss_conf.csa_active)
+ return;
+
+- cfg80211_obss_color_collision_notify(sdata->dev, color_bitmap, gfp);
++ if (delayed_work_pending(&link->color_collision_detect_work))
++ return;
++
++ link->color_bitmap = color_bitmap;
++ /* queue the color collision detection event every 500 ms in order to
++	 * avoid sending too many netlink messages to userspace.
++ */
++ ieee80211_queue_delayed_work(&sdata->local->hw,
++ &link->color_collision_detect_work,
++ msecs_to_jiffies(500));
+ }
+ EXPORT_SYMBOL_GPL(ieeee80211_obss_color_collision_notify);
+
+diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h
+index d16606e84e22d..7ca9bde3c6d25 100644
+--- a/net/mac80211/ieee80211_i.h
++++ b/net/mac80211/ieee80211_i.h
+@@ -974,6 +974,8 @@ struct ieee80211_link_data {
+ struct cfg80211_chan_def csa_chandef;
+
+ struct work_struct color_change_finalize_work;
++ struct delayed_work color_collision_detect_work;
++ u64 color_bitmap;
+
+ /* context reservation -- protected with chanctx_mtx */
+ struct ieee80211_chanctx *reserved_chanctx;
+@@ -1929,6 +1931,7 @@ int ieee80211_channel_switch(struct wiphy *wiphy, struct net_device *dev,
+
+ /* color change handling */
+ void ieee80211_color_change_finalize_work(struct work_struct *work);
++void ieee80211_color_collision_detection_work(struct work_struct *work);
+
+ /* interface handling */
+ #define MAC80211_SUPPORTED_FEATURES_TX (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM | \
+diff --git a/net/mac80211/link.c b/net/mac80211/link.c
+index d1f5a9f7c6470..8c8869cc1fb4c 100644
+--- a/net/mac80211/link.c
++++ b/net/mac80211/link.c
+@@ -39,6 +39,8 @@ void ieee80211_link_init(struct ieee80211_sub_if_data *sdata,
+ ieee80211_csa_finalize_work);
+ INIT_WORK(&link->color_change_finalize_work,
+ ieee80211_color_change_finalize_work);
++ INIT_DELAYED_WORK(&link->color_collision_detect_work,
++ ieee80211_color_collision_detection_work);
+ INIT_LIST_HEAD(&link->assigned_chanctx_list);
+ INIT_LIST_HEAD(&link->reserved_chanctx_list);
+ INIT_DELAYED_WORK(&link->dfs_cac_timer_work,
+@@ -66,6 +68,7 @@ void ieee80211_link_stop(struct ieee80211_link_data *link)
+ if (link->sdata->vif.type == NL80211_IFTYPE_STATION)
+ ieee80211_mgd_stop_link(link);
+
++ cancel_delayed_work_sync(&link->color_collision_detect_work);
+ ieee80211_link_release_channel(link);
+ }
+
+diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c
+index c6562a6d25035..1ed345d072b3f 100644
+--- a/net/mac80211/rx.c
++++ b/net/mac80211/rx.c
+@@ -4052,9 +4052,6 @@ static void ieee80211_invoke_rx_handlers(struct ieee80211_rx_data *rx)
+ static bool
+ ieee80211_rx_is_valid_sta_link_id(struct ieee80211_sta *sta, u8 link_id)
+ {
+- if (!sta->mlo)
+- return false;
+-
+ return !!(sta->valid_links & BIT(link_id));
+ }
+
+@@ -4076,13 +4073,8 @@ static bool ieee80211_rx_data_set_link(struct ieee80211_rx_data *rx,
+ }
+
+ static bool ieee80211_rx_data_set_sta(struct ieee80211_rx_data *rx,
+- struct ieee80211_sta *pubsta,
+- int link_id)
++ struct sta_info *sta, int link_id)
+ {
+- struct sta_info *sta;
+-
+- sta = container_of(pubsta, struct sta_info, sta);
+-
+ rx->link_id = link_id;
+ rx->sta = sta;
+
+@@ -4120,7 +4112,7 @@ void ieee80211_release_reorder_timeout(struct sta_info *sta, int tid)
+ if (sta->sta.valid_links)
+ link_id = ffs(sta->sta.valid_links) - 1;
+
+- if (!ieee80211_rx_data_set_sta(&rx, &sta->sta, link_id))
++ if (!ieee80211_rx_data_set_sta(&rx, sta, link_id))
+ return;
+
+ tid_agg_rx = rcu_dereference(sta->ampdu_mlme.tid_rx[tid]);
+@@ -4166,7 +4158,7 @@ void ieee80211_mark_rx_ba_filtered_frames(struct ieee80211_sta *pubsta, u8 tid,
+
+ sta = container_of(pubsta, struct sta_info, sta);
+
+- if (!ieee80211_rx_data_set_sta(&rx, pubsta, -1))
++ if (!ieee80211_rx_data_set_sta(&rx, sta, -1))
+ return;
+
+ rcu_read_lock();
+@@ -4843,7 +4835,8 @@ static bool ieee80211_prepare_and_rx_handle(struct ieee80211_rx_data *rx,
+ hdr = (struct ieee80211_hdr *)rx->skb->data;
+ }
+
+- if (unlikely(rx->sta && rx->sta->sta.mlo)) {
++ if (unlikely(rx->sta && rx->sta->sta.mlo) &&
++ is_unicast_ether_addr(hdr->addr1)) {
+ /* translate to MLD addresses */
+ if (ether_addr_equal(link->conf->addr, hdr->addr1))
+ ether_addr_copy(hdr->addr1, rx->sdata->vif.addr);
+@@ -4873,6 +4866,7 @@ static void __ieee80211_rx_handle_8023(struct ieee80211_hw *hw,
+ struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(skb);
+ struct ieee80211_fast_rx *fast_rx;
+ struct ieee80211_rx_data rx;
++ struct sta_info *sta;
+ int link_id = -1;
+
+ memset(&rx, 0, sizeof(rx));
+@@ -4900,7 +4894,8 @@ static void __ieee80211_rx_handle_8023(struct ieee80211_hw *hw,
+ * link_id is used only for stats purpose and updating the stats on
+ * the deflink is fine?
+ */
+- if (!ieee80211_rx_data_set_sta(&rx, pubsta, link_id))
++ sta = container_of(pubsta, struct sta_info, sta);
++ if (!ieee80211_rx_data_set_sta(&rx, sta, link_id))
+ goto drop;
+
+ fast_rx = rcu_dereference(rx.sta->fast_rx);
+@@ -4940,7 +4935,7 @@ static bool ieee80211_rx_for_interface(struct ieee80211_rx_data *rx,
+ link_id = status->link_id;
+ }
+
+- if (!ieee80211_rx_data_set_sta(rx, &sta->sta, link_id))
++ if (!ieee80211_rx_data_set_sta(rx, sta, link_id))
+ return false;
+
+ return ieee80211_prepare_and_rx_handle(rx, skb, consume);
+@@ -5007,7 +5002,8 @@ static void __ieee80211_rx_handle_packet(struct ieee80211_hw *hw,
+ link_id = status->link_id;
+
+ if (pubsta) {
+- if (!ieee80211_rx_data_set_sta(&rx, pubsta, link_id))
++ sta = container_of(pubsta, struct sta_info, sta);
++ if (!ieee80211_rx_data_set_sta(&rx, sta, link_id))
+ goto out;
+
+ /*
+@@ -5044,8 +5040,7 @@ static void __ieee80211_rx_handle_packet(struct ieee80211_hw *hw,
+ }
+
+ rx.sdata = prev_sta->sdata;
+- if (!ieee80211_rx_data_set_sta(&rx, &prev_sta->sta,
+- link_id))
++ if (!ieee80211_rx_data_set_sta(&rx, prev_sta, link_id))
+ goto out;
+
+ if (!status->link_valid && prev_sta->sta.mlo)
+@@ -5058,8 +5053,7 @@ static void __ieee80211_rx_handle_packet(struct ieee80211_hw *hw,
+
+ if (prev_sta) {
+ rx.sdata = prev_sta->sdata;
+- if (!ieee80211_rx_data_set_sta(&rx, &prev_sta->sta,
+- link_id))
++ if (!ieee80211_rx_data_set_sta(&rx, prev_sta, link_id))
+ goto out;
+
+ if (!status->link_valid && prev_sta->sta.mlo)
+diff --git a/net/mac80211/sta_info.c b/net/mac80211/sta_info.c
+index 04e0f132b1d9c..34cb833db25f5 100644
+--- a/net/mac80211/sta_info.c
++++ b/net/mac80211/sta_info.c
+@@ -2411,7 +2411,7 @@ static void sta_stats_decode_rate(struct ieee80211_local *local, u32 rate,
+
+ static int sta_set_rate_info_rx(struct sta_info *sta, struct rate_info *rinfo)
+ {
+- u16 rate = READ_ONCE(sta_get_last_rx_stats(sta)->last_rate);
++ u32 rate = READ_ONCE(sta_get_last_rx_stats(sta)->last_rate);
+
+ if (rate == STA_STATS_RATE_INVALID)
+ return -EINVAL;
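
Widening the local from u16 to u32 matters because last_rate encodes more than 16 bits; snapshotting it into a u16 silently drops the high bits before the STA_STATS_RATE_INVALID comparison and any flag extraction. A generic illustration of that truncation hazard (the bit layout below is invented, not mac80211's real encoding):

#include <stdint.h>
#include <stdio.h>

#define RATE_TYPE_SHIFT 16   /* assumed: type/flags live above bit 15 */

int main(void)
{
    uint32_t encoded = (2u << RATE_TYPE_SHIFT) | 120;  /* type 2, idx 120 */

    uint16_t narrow = (uint16_t)encoded;   /* old code: high bits lost */
    uint32_t wide = encoded;               /* fixed code */

    printf("narrow=0x%x wide=0x%x\n", narrow, (unsigned)wide);
    printf("narrow type=%u wide type=%u\n",
           (unsigned)(narrow >> RATE_TYPE_SHIFT),   /* 0: flags gone */
           (unsigned)(wide >> RATE_TYPE_SHIFT));    /* 2: preserved  */
    return 0;
}
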
+diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c
+index defe97a31724d..7699fb4106701 100644
+--- a/net/mac80211/tx.c
++++ b/net/mac80211/tx.c
+@@ -4434,7 +4434,7 @@ static void ieee80211_mlo_multicast_tx(struct net_device *dev,
+ u32 ctrl_flags = IEEE80211_TX_CTRL_MCAST_MLO_FIRST_TX;
+
+ if (hweight16(links) == 1) {
+- ctrl_flags |= u32_encode_bits(ffs(links) - 1,
++ ctrl_flags |= u32_encode_bits(__ffs(links),
+ IEEE80211_TX_CTRL_MLO_LINK);
+
+ __ieee80211_subif_start_xmit(skb, sdata->dev, 0, ctrl_flags,
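
ffs() is 1-based (returning 0 when no bit is set) while the kernel's __ffs() is 0-based and requires a nonzero argument, so ffs(links) - 1 and __ffs(links) agree only when a bit is known to be set, as hweight16(links) == 1 guarantees here. Demonstrated in portable C (my___ffs() is a local stand-in for the kernel helper):

#include <stdio.h>
#include <strings.h>   /* POSIX ffs(): 1-based, 0 when no bit is set */

/* Kernel-style __ffs(): index of the lowest set bit, 0-based.
 * Caller must guarantee the word is nonzero. */
unsigned int my___ffs(unsigned long word)
{
    unsigned int bit = 0;

    while (!(word & 1)) {
        word >>= 1;
        bit++;
    }
    return bit;
}

int main(void)
{
    unsigned long links = 1UL << 5;    /* only link 5 is set */

    /* ffs() is 1-based, so the old code subtracted one... */
    printf("ffs-1 : %d\n", ffs((int)links) - 1);   /* 5 */
    /* ...which __ffs() gives directly: */
    printf("__ffs : %u\n", my___ffs(links));       /* 5 */
    return 0;
}
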
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 8c09e4d12ac1e..fc8256b00b320 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -6999,6 +6999,9 @@ static int nf_tables_newobj(struct sk_buff *skb, const struct nfnl_info *info,
+ return -EOPNOTSUPP;
+
+ type = __nft_obj_type_get(objtype);
++ if (WARN_ON_ONCE(!type))
++ return -ENOENT;
++
+ nft_ctx_init(&ctx, net, skb, info->nlh, family, table, NULL, nla);
+
+ return nf_tables_updobj(&ctx, type, nla[NFTA_OBJ_DATA], obj);
+diff --git a/net/rds/message.c b/net/rds/message.c
+index c19c935612278..7af59d2443e5d 100644
+--- a/net/rds/message.c
++++ b/net/rds/message.c
+@@ -118,7 +118,7 @@ static void rds_rm_zerocopy_callback(struct rds_sock *rs,
+ ck = &info->zcookies;
+ memset(ck, 0, sizeof(*ck));
+ WARN_ON(!rds_zcookie_add(info, cookie));
+- list_add_tail(&q->zcookie_head, &info->rs_zcookie_next);
++ list_add_tail(&info->rs_zcookie_next, &q->zcookie_head);
+
+ spin_unlock_irqrestore(&q->lock, flags);
+ /* caller invokes rds_wake_sk_sleep() */
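
The rds fix is a swapped-argument bug: in Linux list convention, list_add_tail(new, head) queues the new entry at the tail of the list anchored at head, and reversing the arguments splices the queue head into the entry instead. A self-contained illustration with the usual two-pointer list (list_add_tail reimplemented here to match the kernel's semantics):

#include <stdio.h>

struct list_head { struct list_head *next, *prev; };

#define LIST_HEAD_INIT(name) { &(name), &(name) }

/* First argument is the NEW entry, second is the list HEAD. */
static void list_add_tail(struct list_head *new, struct list_head *head)
{
    struct list_head *prev = head->prev;

    prev->next = new;
    new->prev = prev;
    new->next = head;
    head->prev = new;
}

int main(void)
{
    struct list_head q = LIST_HEAD_INIT(q);   /* the zcookie queue head */
    struct list_head e1, e2;                  /* cookie entries */
    int n = 0;

    list_add_tail(&e1, &q);                   /* correct order: entry, head */
    list_add_tail(&e2, &q);

    for (struct list_head *p = q.next; p != &q; p = p->next)
        n++;
    printf("queued entries: %d\n", n);        /* 2 */
    return 0;
}
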
+diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c
+index f3c9f0201c156..7ce562f6dc8d5 100644
+--- a/net/rxrpc/call_object.c
++++ b/net/rxrpc/call_object.c
+@@ -54,12 +54,14 @@ void rxrpc_poke_call(struct rxrpc_call *call, enum rxrpc_call_poke_trace what)
+ spin_lock_bh(&local->lock);
+ busy = !list_empty(&call->attend_link);
+ trace_rxrpc_poke_call(call, busy, what);
++ if (!busy && !rxrpc_try_get_call(call, rxrpc_call_get_poke))
++ busy = true;
+ if (!busy) {
+- rxrpc_get_call(call, rxrpc_call_get_poke);
+ list_add_tail(&call->attend_link, &local->call_attend_q);
+ }
+ spin_unlock_bh(&local->lock);
+- rxrpc_wake_up_io_thread(local);
++ if (!busy)
++ rxrpc_wake_up_io_thread(local);
+ }
+ }
+
+diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
+index e12d4fa5aece6..d9413d43b1045 100644
+--- a/net/smc/af_smc.c
++++ b/net/smc/af_smc.c
+@@ -1826,8 +1826,10 @@ static int smcr_serv_conf_first_link(struct smc_sock *smc)
+ smc_llc_link_active(link);
+ smcr_lgr_set_type(link->lgr, SMC_LGR_SINGLE);
+
++ mutex_lock(&link->lgr->llc_conf_mutex);
+ /* initial contact - try to establish second link */
+ smc_llc_srv_add_link(link, NULL);
++ mutex_unlock(&link->lgr->llc_conf_mutex);
+ return 0;
+ }
+
+diff --git a/net/smc/smc_core.c b/net/smc/smc_core.c
+index c305d8dd23f80..c19d4b7c1f28a 100644
+--- a/net/smc/smc_core.c
++++ b/net/smc/smc_core.c
+@@ -1120,8 +1120,9 @@ static void smcr_buf_unuse(struct smc_buf_desc *buf_desc, bool is_rmb,
+
+ smc_buf_free(lgr, is_rmb, buf_desc);
+ } else {
+- buf_desc->used = 0;
+- memset(buf_desc->cpu_addr, 0, buf_desc->len);
++ /* memzero_explicit provides potential memory barrier semantics */
++ memzero_explicit(buf_desc->cpu_addr, buf_desc->len);
++ WRITE_ONCE(buf_desc->used, 0);
+ }
+ }
+
+@@ -1132,19 +1133,17 @@ static void smc_buf_unuse(struct smc_connection *conn,
+ if (!lgr->is_smcd && conn->sndbuf_desc->is_vm) {
+ smcr_buf_unuse(conn->sndbuf_desc, false, lgr);
+ } else {
+- conn->sndbuf_desc->used = 0;
+- memset(conn->sndbuf_desc->cpu_addr, 0,
+- conn->sndbuf_desc->len);
++ memzero_explicit(conn->sndbuf_desc->cpu_addr, conn->sndbuf_desc->len);
++ WRITE_ONCE(conn->sndbuf_desc->used, 0);
+ }
+ }
+ if (conn->rmb_desc) {
+ if (!lgr->is_smcd) {
+ smcr_buf_unuse(conn->rmb_desc, true, lgr);
+ } else {
+- conn->rmb_desc->used = 0;
+- memset(conn->rmb_desc->cpu_addr, 0,
+- conn->rmb_desc->len +
+- sizeof(struct smcd_cdc_msg));
++ memzero_explicit(conn->rmb_desc->cpu_addr,
++ conn->rmb_desc->len + sizeof(struct smcd_cdc_msg));
++ WRITE_ONCE(conn->rmb_desc->used, 0);
+ }
+ }
+ }
+diff --git a/net/socket.c b/net/socket.c
+index c12af3c84d3a6..b4cdc576afc3f 100644
+--- a/net/socket.c
++++ b/net/socket.c
+@@ -449,7 +449,9 @@ static struct file_system_type sock_fs_type = {
+ *
+ * Returns the &file bound with @sock, implicitly storing it
+ * in sock->file. If dname is %NULL, sets to "".
+- * On failure the return is a ERR pointer (see linux/err.h).
++ *
++ * On failure @sock is released, and an ERR pointer is returned.
++ *
+ * This function uses GFP_KERNEL internally.
+ */
+
+@@ -1613,7 +1615,6 @@ static struct socket *__sys_socket_create(int family, int type, int protocol)
+ struct file *__sys_socket_file(int family, int type, int protocol)
+ {
+ struct socket *sock;
+- struct file *file;
+ int flags;
+
+ sock = __sys_socket_create(family, type, protocol);
+@@ -1624,11 +1625,7 @@ struct file *__sys_socket_file(int family, int type, int protocol)
+ if (SOCK_NONBLOCK != O_NONBLOCK && (flags & SOCK_NONBLOCK))
+ flags = (flags & ~SOCK_NONBLOCK) | O_NONBLOCK;
+
+- file = sock_alloc_file(sock, flags, NULL);
+- if (IS_ERR(file))
+- sock_release(sock);
+-
+- return file;
++ return sock_alloc_file(sock, flags, NULL);
+ }
+
+ int __sys_socket(int family, int type, int protocol)
+diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
+index 0b0b9f1eed469..fd7e1c630493e 100644
+--- a/net/sunrpc/clnt.c
++++ b/net/sunrpc/clnt.c
+@@ -3350,6 +3350,8 @@ rpc_clnt_swap_deactivate_callback(struct rpc_clnt *clnt,
+ void
+ rpc_clnt_swap_deactivate(struct rpc_clnt *clnt)
+ {
++ while (clnt != clnt->cl_parent)
++ clnt = clnt->cl_parent;
+ if (atomic_dec_if_positive(&clnt->cl_swapper) == 0)
+ rpc_clnt_iterate_for_each_xprt(clnt,
+ rpc_clnt_swap_deactivate_callback, NULL);
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index 33a82ecab9d56..02b9a0280896c 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -13809,7 +13809,7 @@ static int nl80211_set_rekey_data(struct sk_buff *skb, struct genl_info *info)
+ return -ERANGE;
+ if (nla_len(tb[NL80211_REKEY_DATA_KCK]) != NL80211_KCK_LEN &&
+ !(rdev->wiphy.flags & WIPHY_FLAG_SUPPORTS_EXT_KEK_KCK &&
+- nla_len(tb[NL80211_REKEY_DATA_KEK]) == NL80211_KCK_EXT_LEN))
++ nla_len(tb[NL80211_REKEY_DATA_KCK]) == NL80211_KCK_EXT_LEN))
+ return -ERANGE;
+
+ rekey_data.kek = nla_data(tb[NL80211_REKEY_DATA_KEK]);
+diff --git a/net/wireless/sme.c b/net/wireless/sme.c
+index 4b5b6ee0fe013..4f813e346a8bc 100644
+--- a/net/wireless/sme.c
++++ b/net/wireless/sme.c
+@@ -285,6 +285,15 @@ void cfg80211_conn_work(struct work_struct *work)
+ wiphy_unlock(&rdev->wiphy);
+ }
+
++static void cfg80211_step_auth_next(struct cfg80211_conn *conn,
++ struct cfg80211_bss *bss)
++{
++ memcpy(conn->bssid, bss->bssid, ETH_ALEN);
++ conn->params.bssid = conn->bssid;
++ conn->params.channel = bss->channel;
++ conn->state = CFG80211_CONN_AUTHENTICATE_NEXT;
++}
++
+ /* Returned bss is reference counted and must be cleaned up appropriately. */
+ static struct cfg80211_bss *cfg80211_get_conn_bss(struct wireless_dev *wdev)
+ {
+@@ -302,10 +311,7 @@ static struct cfg80211_bss *cfg80211_get_conn_bss(struct wireless_dev *wdev)
+ if (!bss)
+ return NULL;
+
+- memcpy(wdev->conn->bssid, bss->bssid, ETH_ALEN);
+- wdev->conn->params.bssid = wdev->conn->bssid;
+- wdev->conn->params.channel = bss->channel;
+- wdev->conn->state = CFG80211_CONN_AUTHENTICATE_NEXT;
++ cfg80211_step_auth_next(wdev->conn, bss);
+ schedule_work(&rdev->conn_work);
+
+ return bss;
+@@ -597,7 +603,12 @@ static int cfg80211_sme_connect(struct wireless_dev *wdev,
+ wdev->conn->params.ssid_len = wdev->u.client.ssid_len;
+
+ /* see if we have the bss already */
+- bss = cfg80211_get_conn_bss(wdev);
++ bss = cfg80211_get_bss(wdev->wiphy, wdev->conn->params.channel,
++ wdev->conn->params.bssid,
++ wdev->conn->params.ssid,
++ wdev->conn->params.ssid_len,
++ wdev->conn_bss_type,
++ IEEE80211_PRIVACY(wdev->conn->params.privacy));
+
+ if (prev_bssid) {
+ memcpy(wdev->conn->prev_bssid, prev_bssid, ETH_ALEN);
+@@ -608,6 +619,7 @@ static int cfg80211_sme_connect(struct wireless_dev *wdev,
+ if (bss) {
+ enum nl80211_timeout_reason treason;
+
++ cfg80211_step_auth_next(wdev->conn, bss);
+ err = cfg80211_conn_do_work(wdev, &treason);
+ cfg80211_put_bss(wdev->wiphy, bss);
+ } else {
+@@ -724,6 +736,7 @@ void __cfg80211_connect_result(struct net_device *dev,
+ {
+ struct wireless_dev *wdev = dev->ieee80211_ptr;
+ const struct element *country_elem = NULL;
++ const struct element *ssid;
+ const u8 *country_data;
+ u8 country_datalen;
+ #ifdef CONFIG_CFG80211_WEXT
+@@ -883,6 +896,22 @@ void __cfg80211_connect_result(struct net_device *dev,
+ country_data, country_datalen);
+ kfree(country_data);
+
++ if (!wdev->u.client.ssid_len) {
++ rcu_read_lock();
++ for_each_valid_link(cr, link) {
++ ssid = ieee80211_bss_get_elem(cr->links[link].bss,
++ WLAN_EID_SSID);
++
++ if (!ssid || !ssid->datalen)
++ continue;
++
++ memcpy(wdev->u.client.ssid, ssid->data, ssid->datalen);
++ wdev->u.client.ssid_len = ssid->datalen;
++ break;
++ }
++ rcu_read_unlock();
++ }
++
+ return;
+ out:
+ for_each_valid_link(cr, link)
+@@ -1468,6 +1497,15 @@ int cfg80211_connect(struct cfg80211_registered_device *rdev,
+ } else {
+ if (WARN_ON(connkeys))
+ return -EINVAL;
++
++ /* connect can point to wdev->wext.connect which
++ * can hold key data from a previous connection
++ */
++ connect->key = NULL;
++ connect->key_len = 0;
++ connect->key_idx = 0;
++ connect->crypto.cipher_group = 0;
++ connect->crypto.n_ciphers_pairwise = 0;
+ }
+
+ wdev->connect_keys = connkeys;
+diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
+index 9f0561b67c12e..13f62d2402e71 100644
+--- a/net/xdp/xsk.c
++++ b/net/xdp/xsk.c
+@@ -511,7 +511,7 @@ static struct sk_buff *xsk_build_skb(struct xdp_sock *xs,
+ return skb;
+ }
+
+-static int xsk_generic_xmit(struct sock *sk)
++static int __xsk_generic_xmit(struct sock *sk)
+ {
+ struct xdp_sock *xs = xdp_sk(sk);
+ u32 max_batch = TX_BATCH_SIZE;
+@@ -594,22 +594,13 @@ out:
+ return err;
+ }
+
+-static int xsk_xmit(struct sock *sk)
++static int xsk_generic_xmit(struct sock *sk)
+ {
+- struct xdp_sock *xs = xdp_sk(sk);
+ int ret;
+
+- if (unlikely(!(xs->dev->flags & IFF_UP)))
+- return -ENETDOWN;
+- if (unlikely(!xs->tx))
+- return -ENOBUFS;
+-
+- if (xs->zc)
+- return xsk_wakeup(xs, XDP_WAKEUP_TX);
+-
+ /* Drop the RCU lock since the SKB path might sleep. */
+ rcu_read_unlock();
+- ret = xsk_generic_xmit(sk);
++ ret = __xsk_generic_xmit(sk);
+ /* Reaquire RCU lock before going into common code. */
+ rcu_read_lock();
+
+@@ -627,17 +618,31 @@ static bool xsk_no_wakeup(struct sock *sk)
+ #endif
+ }
+
++static int xsk_check_common(struct xdp_sock *xs)
++{
++ if (unlikely(!xsk_is_bound(xs)))
++ return -ENXIO;
++ if (unlikely(!(xs->dev->flags & IFF_UP)))
++ return -ENETDOWN;
++
++ return 0;
++}
++
+ static int __xsk_sendmsg(struct socket *sock, struct msghdr *m, size_t total_len)
+ {
+ bool need_wait = !(m->msg_flags & MSG_DONTWAIT);
+ struct sock *sk = sock->sk;
+ struct xdp_sock *xs = xdp_sk(sk);
+ struct xsk_buff_pool *pool;
++ int err;
+
+- if (unlikely(!xsk_is_bound(xs)))
+- return -ENXIO;
++ err = xsk_check_common(xs);
++ if (err)
++ return err;
+ if (unlikely(need_wait))
+ return -EOPNOTSUPP;
++ if (unlikely(!xs->tx))
++ return -ENOBUFS;
+
+ if (sk_can_busy_loop(sk)) {
+ if (xs->zc)
+@@ -649,8 +654,11 @@ static int __xsk_sendmsg(struct socket *sock, struct msghdr *m, size_t total_len
+ return 0;
+
+ pool = xs->pool;
+- if (pool->cached_need_wakeup & XDP_WAKEUP_TX)
+- return xsk_xmit(sk);
++ if (pool->cached_need_wakeup & XDP_WAKEUP_TX) {
++ if (xs->zc)
++ return xsk_wakeup(xs, XDP_WAKEUP_TX);
++ return xsk_generic_xmit(sk);
++ }
+ return 0;
+ }
+
+@@ -670,11 +678,11 @@ static int __xsk_recvmsg(struct socket *sock, struct msghdr *m, size_t len, int
+ bool need_wait = !(flags & MSG_DONTWAIT);
+ struct sock *sk = sock->sk;
+ struct xdp_sock *xs = xdp_sk(sk);
++ int err;
+
+- if (unlikely(!xsk_is_bound(xs)))
+- return -ENXIO;
+- if (unlikely(!(xs->dev->flags & IFF_UP)))
+- return -ENETDOWN;
++ err = xsk_check_common(xs);
++ if (err)
++ return err;
+ if (unlikely(!xs->rx))
+ return -ENOBUFS;
+ if (unlikely(need_wait))
+@@ -713,21 +721,20 @@ static __poll_t xsk_poll(struct file *file, struct socket *sock,
+ sock_poll_wait(file, sock, wait);
+
+ rcu_read_lock();
+- if (unlikely(!xsk_is_bound(xs))) {
+- rcu_read_unlock();
+- return mask;
+- }
++ if (xsk_check_common(xs))
++ goto skip_tx;
+
+ pool = xs->pool;
+
+ if (pool->cached_need_wakeup) {
+ if (xs->zc)
+ xsk_wakeup(xs, pool->cached_need_wakeup);
+- else
++ else if (xs->tx)
+ /* Poll needs to drive Tx also in copy mode */
+- xsk_xmit(sk);
++ xsk_generic_xmit(sk);
+ }
+
++skip_tx:
+ if (xs->rx && !xskq_prod_is_empty(xs->rx))
+ mask |= EPOLLIN | EPOLLRDNORM;
+ if (xs->tx && xsk_tx_writeable(xs))
+diff --git a/scripts/bpf_doc.py b/scripts/bpf_doc.py
+index e8d90829f23ed..38d51e05c7a2b 100755
+--- a/scripts/bpf_doc.py
++++ b/scripts/bpf_doc.py
+@@ -271,7 +271,7 @@ class HeaderParser(object):
+ if capture:
+ fn_defines_str += self.line
+ helper_name = capture.expand(r'bpf_\1')
+- self.helper_enum_vals[helper_name] = int(capture[2])
++ self.helper_enum_vals[helper_name] = int(capture.group(2))
+ self.helper_enum_pos[helper_name] = i
+ i += 1
+ else:
+diff --git a/scripts/gcc-plugins/Makefile b/scripts/gcc-plugins/Makefile
+index b34d11e226366..320afd3cf8e82 100644
+--- a/scripts/gcc-plugins/Makefile
++++ b/scripts/gcc-plugins/Makefile
+@@ -29,7 +29,7 @@ GCC_PLUGINS_DIR = $(shell $(CC) -print-file-name=plugin)
+ plugin_cxxflags = -Wp,-MMD,$(depfile) $(KBUILD_HOSTCXXFLAGS) -fPIC \
+ -include $(srctree)/include/linux/compiler-version.h \
+ -DPLUGIN_VERSION=$(call stringify,$(KERNELVERSION)) \
+- -I $(GCC_PLUGINS_DIR)/include -I $(obj) -std=gnu++11 \
++ -I $(GCC_PLUGINS_DIR)/include -I $(obj) \
+ -fno-rtti -fno-exceptions -fasynchronous-unwind-tables \
+ -ggdb -Wno-narrowing -Wno-unused-variable \
+ -Wno-format-diag
+diff --git a/scripts/package/mkdebian b/scripts/package/mkdebian
+index 6cf383225b8b5..c3bbef7a6754f 100755
+--- a/scripts/package/mkdebian
++++ b/scripts/package/mkdebian
+@@ -236,7 +236,7 @@ binary-arch: build-arch
+ KBUILD_BUILD_VERSION=${revision} -f \$(srctree)/Makefile intdeb-pkg
+
+ clean:
+- rm -rf debian/*tmp debian/files
++ rm -rf debian/files debian/linux-*
+ \$(MAKE) clean
+
+ binary: binary-arch
+diff --git a/security/integrity/ima/ima_api.c b/security/integrity/ima/ima_api.c
+index c1e76282b5ee5..1e3a7a4f8833f 100644
+--- a/security/integrity/ima/ima_api.c
++++ b/security/integrity/ima/ima_api.c
+@@ -292,7 +292,7 @@ int ima_collect_measurement(struct integrity_iint_cache *iint,
+ result = ima_calc_file_hash(file, &hash.hdr);
+ }
+
+- if (result == -ENOMEM)
++ if (result && result != -EBADF && result != -EINVAL)
+ goto out;
+
+ length = sizeof(hash.hdr) + hash.hdr.length;
+diff --git a/security/integrity/ima/ima_main.c b/security/integrity/ima/ima_main.c
+index 377300973e6c5..53dc438009204 100644
+--- a/security/integrity/ima/ima_main.c
++++ b/security/integrity/ima/ima_main.c
+@@ -337,7 +337,7 @@ static int process_measurement(struct file *file, const struct cred *cred,
+ hash_algo = ima_get_hash_algo(xattr_value, xattr_len);
+
+ rc = ima_collect_measurement(iint, file, buf, size, hash_algo, modsig);
+- if (rc == -ENOMEM)
++ if (rc != 0 && rc != -EBADF && rc != -EINVAL)
+ goto out_locked;
+
+ if (!pathbuf) /* ima_rdwr_violation possibly pre-fetched */
+@@ -397,7 +397,9 @@ out:
+ /**
+ * ima_file_mmap - based on policy, collect/store measurement.
+ * @file: pointer to the file to be measured (May be NULL)
+- * @prot: contains the protection that will be applied by the kernel.
++ * @reqprot: protection requested by the application
++ * @prot: protection that will be applied by the kernel
++ * @flags: operational flags
+ *
+ * Measure files being mmapped executable based on the ima_must_measure()
+ * policy decision.
+@@ -405,7 +407,8 @@ out:
+ * On success return 0. On integrity appraisal error, assuming the file
+ * is in policy and IMA-appraisal is in enforcing mode, return -EACCES.
+ */
+-int ima_file_mmap(struct file *file, unsigned long prot)
++int ima_file_mmap(struct file *file, unsigned long reqprot,
++ unsigned long prot, unsigned long flags)
+ {
+ u32 secid;
+
+diff --git a/security/security.c b/security/security.c
+index d1571900a8c7d..174afa4fad813 100644
+--- a/security/security.c
++++ b/security/security.c
+@@ -1661,12 +1661,13 @@ static inline unsigned long mmap_prot(struct file *file, unsigned long prot)
+ int security_mmap_file(struct file *file, unsigned long prot,
+ unsigned long flags)
+ {
++ unsigned long prot_adj = mmap_prot(file, prot);
+ int ret;
+- ret = call_int_hook(mmap_file, 0, file, prot,
+- mmap_prot(file, prot), flags);
++
++ ret = call_int_hook(mmap_file, 0, file, prot, prot_adj, flags);
+ if (ret)
+ return ret;
+- return ima_file_mmap(file, prot);
++ return ima_file_mmap(file, prot, prot_adj, flags);
+ }
+
+ int security_mmap_addr(unsigned long addr)
+diff --git a/sound/pci/hda/Kconfig b/sound/pci/hda/Kconfig
+index 06d304db4183c..886255a03e8b4 100644
+--- a/sound/pci/hda/Kconfig
++++ b/sound/pci/hda/Kconfig
+@@ -302,6 +302,20 @@ config SND_HDA_INTEL_HDMI_SILENT_STREAM
+ This feature can impact power consumption as resources
+ are kept reserved both at transmitter and receiver.
+
++config SND_HDA_CTL_DEV_ID
++ bool "Use the device identifier field for controls"
++ depends on SND_HDA_INTEL
++ help
++ Say Y to use the device identifier field for (mixer)
++ controls (the old behaviour until this option became available).
++
++ When enabled, multiple HDA codecs may set the device
++ field in control (mixer) element identifiers. The use of this
++ field is neither recommended nor defined for mixer controls.
++
++ The old behaviour (Y) is obsolete and will be removed. Consider
++ not enabling this option.
++
+ endif
+
+ endmenu
+diff --git a/sound/pci/hda/hda_codec.c b/sound/pci/hda/hda_codec.c
+index 2e728aad67713..9f79c0ac2bda7 100644
+--- a/sound/pci/hda/hda_codec.c
++++ b/sound/pci/hda/hda_codec.c
+@@ -3389,7 +3389,12 @@ int snd_hda_add_new_ctls(struct hda_codec *codec,
+ kctl = snd_ctl_new1(knew, codec);
+ if (!kctl)
+ return -ENOMEM;
+- if (addr > 0)
++ /* Do not use the id.device field for MIXER elements.
++ * This field is for real device numbers (like PCM) but codecs
++ * are components hidden from the user space view (unrelated
++ * to the mixer element identification).
++ */
++ if (addr > 0 && codec->ctl_dev_id)
+ kctl->id.device = addr;
+ if (idx > 0)
+ kctl->id.index = idx;
+@@ -3400,9 +3405,11 @@ int snd_hda_add_new_ctls(struct hda_codec *codec,
+ * the codec addr; if it still fails (or it's the
+ * primary codec), then try another control index
+ */
+- if (!addr && codec->core.addr)
++ if (!addr && codec->core.addr) {
+ addr = codec->core.addr;
+- else if (!idx && !knew->index) {
++ if (!codec->ctl_dev_id)
++ idx += 10 * addr;
++ } else if (!idx && !knew->index) {
+ idx = find_empty_mixer_ctl_idx(codec,
+ knew->name, 0);
+ if (idx <= 0)
+diff --git a/sound/pci/hda/hda_controller.c b/sound/pci/hda/hda_controller.c
+index 0ff286b7b66be..083df287c1a48 100644
+--- a/sound/pci/hda/hda_controller.c
++++ b/sound/pci/hda/hda_controller.c
+@@ -1231,6 +1231,7 @@ int azx_probe_codecs(struct azx *chip, unsigned int max_slots)
+ continue;
+ codec->jackpoll_interval = chip->jackpoll_interval;
+ codec->beep_mode = chip->beep_mode;
++ codec->ctl_dev_id = chip->ctl_dev_id;
+ codecs++;
+ }
+ }
+diff --git a/sound/pci/hda/hda_controller.h b/sound/pci/hda/hda_controller.h
+index f5bf295eb8307..8556031bcd68e 100644
+--- a/sound/pci/hda/hda_controller.h
++++ b/sound/pci/hda/hda_controller.h
+@@ -124,6 +124,7 @@ struct azx {
+ /* HD codec */
+ int codec_probe_mask; /* copied from probe_mask option */
+ unsigned int beep_mode;
++ bool ctl_dev_id;
+
+ #ifdef CONFIG_SND_HDA_PATCH_LOADER
+ const struct firmware *fw;
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index 87002670c0c92..81c4a45254ff2 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -50,6 +50,7 @@
+ #include <sound/intel-dsp-config.h>
+ #include <linux/vgaarb.h>
+ #include <linux/vga_switcheroo.h>
++#include <linux/apple-gmux.h>
+ #include <linux/firmware.h>
+ #include <sound/hda_codec.h>
+ #include "hda_controller.h"
+@@ -119,6 +120,7 @@ static bool beep_mode[SNDRV_CARDS] = {[0 ... (SNDRV_CARDS-1)] =
+ CONFIG_SND_HDA_INPUT_BEEP_MODE};
+ #endif
+ static bool dmic_detect = 1;
++static bool ctl_dev_id = IS_ENABLED(CONFIG_SND_HDA_CTL_DEV_ID) ? 1 : 0;
+
+ module_param_array(index, int, NULL, 0444);
+ MODULE_PARM_DESC(index, "Index value for Intel HD audio interface.");
+@@ -157,6 +159,8 @@ module_param(dmic_detect, bool, 0444);
+ MODULE_PARM_DESC(dmic_detect, "Allow DSP driver selection (bypass this driver) "
+ "(0=off, 1=on) (default=1); "
+ "deprecated, use snd-intel-dspcfg.dsp_driver option instead");
++module_param(ctl_dev_id, bool, 0444);
++MODULE_PARM_DESC(ctl_dev_id, "Use control device identifier (based on codec address).");
+
+ #ifdef CONFIG_PM
+ static int param_set_xint(const char *val, const struct kernel_param *kp);
+@@ -1463,7 +1467,7 @@ static struct pci_dev *get_bound_vga(struct pci_dev *pci)
+ * vgaswitcheroo.
+ */
+ if (((p->class >> 16) == PCI_BASE_CLASS_DISPLAY) &&
+- atpx_present())
++ (atpx_present() || apple_gmux_detect(NULL, NULL)))
+ return p;
+ pci_dev_put(p);
+ }
+@@ -2278,6 +2282,8 @@ static int azx_probe_continue(struct azx *chip)
+ chip->beep_mode = beep_mode[dev];
+ #endif
+
++ chip->ctl_dev_id = ctl_dev_id;
++
+ /* create codec instances */
+ if (bus->codec_mask) {
+ err = azx_probe_codecs(chip, azx_max_codecs[chip->driver_type]);
+diff --git a/sound/pci/hda/patch_ca0132.c b/sound/pci/hda/patch_ca0132.c
+index 0a292bf271f2e..acde4cd58785e 100644
+--- a/sound/pci/hda/patch_ca0132.c
++++ b/sound/pci/hda/patch_ca0132.c
+@@ -2455,7 +2455,7 @@ static int dspio_set_uint_param(struct hda_codec *codec, int mod_id,
+ static int dspio_alloc_dma_chan(struct hda_codec *codec, unsigned int *dma_chan)
+ {
+ int status = 0;
+- unsigned int size = sizeof(dma_chan);
++ unsigned int size = sizeof(*dma_chan);
+
+ codec_dbg(codec, " dspio_alloc_dma_chan() -- begin\n");
+ status = dspio_scp(codec, MASTERCONTROL, 0x20,
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index e103bb3693c06..d4819890374b5 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -11617,6 +11617,7 @@ static const struct snd_pci_quirk alc662_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1028, 0x0698, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1028, 0x069f, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x103c, 0x1632, "HP RP5800", ALC662_FIXUP_HP_RP5800),
++ SND_PCI_QUIRK(0x103c, 0x870c, "HP", ALC897_FIXUP_HP_HSMIC_VERB),
+ SND_PCI_QUIRK(0x103c, 0x8719, "HP", ALC897_FIXUP_HP_HSMIC_VERB),
+ SND_PCI_QUIRK(0x103c, 0x873e, "HP", ALC671_FIXUP_HP_HEADSET_MIC2),
+ SND_PCI_QUIRK(0x103c, 0x877e, "HP 288 Pro G6", ALC671_FIXUP_HP_HEADSET_MIC2),
+diff --git a/sound/pci/ice1712/aureon.c b/sound/pci/ice1712/aureon.c
+index 9a30f6d35d135..40a0e00950301 100644
+--- a/sound/pci/ice1712/aureon.c
++++ b/sound/pci/ice1712/aureon.c
+@@ -1892,6 +1892,7 @@ static int aureon_add_controls(struct snd_ice1712 *ice)
+ unsigned char id;
+ snd_ice1712_save_gpio_status(ice);
+ id = aureon_cs8415_get(ice, CS8415_ID);
++ snd_ice1712_restore_gpio_status(ice);
+ if (id != 0x41)
+ dev_info(ice->card->dev,
+ "No CS8415 chip. Skipping CS8415 controls.\n");
+@@ -1909,7 +1910,6 @@ static int aureon_add_controls(struct snd_ice1712 *ice)
+ kctl->id.device = ice->pcm->device;
+ }
+ }
+- snd_ice1712_restore_gpio_status(ice);
+ }
+
+ return 0;
+diff --git a/sound/soc/atmel/mchp-spdifrx.c b/sound/soc/atmel/mchp-spdifrx.c
+index ec0705cc40fab..76ce37f641ebd 100644
+--- a/sound/soc/atmel/mchp-spdifrx.c
++++ b/sound/soc/atmel/mchp-spdifrx.c
+@@ -217,7 +217,6 @@ struct mchp_spdifrx_ch_stat {
+ struct mchp_spdifrx_user_data {
+ unsigned char data[SPDIFRX_UD_BITS / 8];
+ struct completion done;
+- spinlock_t lock; /* protect access to user data */
+ };
+
+ struct mchp_spdifrx_mixer_control {
+@@ -231,13 +230,13 @@ struct mchp_spdifrx_mixer_control {
+ struct mchp_spdifrx_dev {
+ struct snd_dmaengine_dai_dma_data capture;
+ struct mchp_spdifrx_mixer_control control;
+- spinlock_t blockend_lock; /* protect access to blockend_refcount */
+- int blockend_refcount;
++ struct mutex mlock;
+ struct device *dev;
+ struct regmap *regmap;
+ struct clk *pclk;
+ struct clk *gclk;
+ unsigned int fmt;
++ unsigned int trigger_enabled;
+ unsigned int gclk_enabled:1;
+ };
+
+@@ -275,37 +274,11 @@ static void mchp_spdifrx_channel_user_data_read(struct mchp_spdifrx_dev *dev,
+ }
+ }
+
+-/* called from non-atomic context only */
+-static void mchp_spdifrx_isr_blockend_en(struct mchp_spdifrx_dev *dev)
+-{
+- unsigned long flags;
+-
+- spin_lock_irqsave(&dev->blockend_lock, flags);
+- dev->blockend_refcount++;
+- /* don't enable BLOCKEND interrupt if it's already enabled */
+- if (dev->blockend_refcount == 1)
+- regmap_write(dev->regmap, SPDIFRX_IER, SPDIFRX_IR_BLOCKEND);
+- spin_unlock_irqrestore(&dev->blockend_lock, flags);
+-}
+-
+-/* called from atomic/non-atomic context */
+-static void mchp_spdifrx_isr_blockend_dis(struct mchp_spdifrx_dev *dev)
+-{
+- unsigned long flags;
+-
+- spin_lock_irqsave(&dev->blockend_lock, flags);
+- dev->blockend_refcount--;
+- /* don't enable BLOCKEND interrupt if it's already enabled */
+- if (dev->blockend_refcount == 0)
+- regmap_write(dev->regmap, SPDIFRX_IDR, SPDIFRX_IR_BLOCKEND);
+- spin_unlock_irqrestore(&dev->blockend_lock, flags);
+-}
+-
+ static irqreturn_t mchp_spdif_interrupt(int irq, void *dev_id)
+ {
+ struct mchp_spdifrx_dev *dev = dev_id;
+ struct mchp_spdifrx_mixer_control *ctrl = &dev->control;
+- u32 sr, imr, pending, idr = 0;
++ u32 sr, imr, pending;
+ irqreturn_t ret = IRQ_NONE;
+ int ch;
+
+@@ -320,13 +293,10 @@ static irqreturn_t mchp_spdif_interrupt(int irq, void *dev_id)
+
+ if (pending & SPDIFRX_IR_BLOCKEND) {
+ for (ch = 0; ch < SPDIFRX_CHANNELS; ch++) {
+- spin_lock(&ctrl->user_data[ch].lock);
+ mchp_spdifrx_channel_user_data_read(dev, ch);
+- spin_unlock(&ctrl->user_data[ch].lock);
+-
+ complete(&ctrl->user_data[ch].done);
+ }
+- mchp_spdifrx_isr_blockend_dis(dev);
++ regmap_write(dev->regmap, SPDIFRX_IDR, SPDIFRX_IR_BLOCKEND);
+ ret = IRQ_HANDLED;
+ }
+
+@@ -334,7 +304,7 @@ static irqreturn_t mchp_spdif_interrupt(int irq, void *dev_id)
+ if (pending & SPDIFRX_IR_CSC(ch)) {
+ mchp_spdifrx_channel_status_read(dev, ch);
+ complete(&ctrl->ch_stat[ch].done);
+- idr |= SPDIFRX_IR_CSC(ch);
++ regmap_write(dev->regmap, SPDIFRX_IDR, SPDIFRX_IR_CSC(ch));
+ ret = IRQ_HANDLED;
+ }
+ }
+@@ -344,8 +314,6 @@ static irqreturn_t mchp_spdif_interrupt(int irq, void *dev_id)
+ ret = IRQ_HANDLED;
+ }
+
+- regmap_write(dev->regmap, SPDIFRX_IDR, idr);
+-
+ return ret;
+ }
+
+@@ -353,47 +321,40 @@ static int mchp_spdifrx_trigger(struct snd_pcm_substream *substream, int cmd,
+ struct snd_soc_dai *dai)
+ {
+ struct mchp_spdifrx_dev *dev = snd_soc_dai_get_drvdata(dai);
+- u32 mr;
+- int running;
+- int ret;
+-
+- regmap_read(dev->regmap, SPDIFRX_MR, &mr);
+- running = !!(mr & SPDIFRX_MR_RXEN_ENABLE);
++ int ret = 0;
+
+ switch (cmd) {
+ case SNDRV_PCM_TRIGGER_START:
+ case SNDRV_PCM_TRIGGER_RESUME:
+ case SNDRV_PCM_TRIGGER_PAUSE_RELEASE:
+- if (!running) {
+- mr &= ~SPDIFRX_MR_RXEN_MASK;
+- mr |= SPDIFRX_MR_RXEN_ENABLE;
+- /* enable overrun interrupts */
+- regmap_write(dev->regmap, SPDIFRX_IER,
+- SPDIFRX_IR_OVERRUN);
+- }
++ mutex_lock(&dev->mlock);
++ /* Enable overrun interrupts */
++ regmap_write(dev->regmap, SPDIFRX_IER, SPDIFRX_IR_OVERRUN);
++
++ /* Enable receiver. */
++ regmap_update_bits(dev->regmap, SPDIFRX_MR, SPDIFRX_MR_RXEN_MASK,
++ SPDIFRX_MR_RXEN_ENABLE);
++ dev->trigger_enabled = true;
++ mutex_unlock(&dev->mlock);
+ break;
+ case SNDRV_PCM_TRIGGER_STOP:
+ case SNDRV_PCM_TRIGGER_SUSPEND:
+ case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
+- if (running) {
+- mr &= ~SPDIFRX_MR_RXEN_MASK;
+- mr |= SPDIFRX_MR_RXEN_DISABLE;
+- /* disable overrun interrupts */
+- regmap_write(dev->regmap, SPDIFRX_IDR,
+- SPDIFRX_IR_OVERRUN);
+- }
++ mutex_lock(&dev->mlock);
++ /* Disable overrun interrupts */
++ regmap_write(dev->regmap, SPDIFRX_IDR, SPDIFRX_IR_OVERRUN);
++
++ /* Disable receiver. */
++ regmap_update_bits(dev->regmap, SPDIFRX_MR, SPDIFRX_MR_RXEN_MASK,
++ SPDIFRX_MR_RXEN_DISABLE);
++ dev->trigger_enabled = false;
++ mutex_unlock(&dev->mlock);
+ break;
+ default:
+- return -EINVAL;
+- }
+-
+- ret = regmap_write(dev->regmap, SPDIFRX_MR, mr);
+- if (ret) {
+- dev_err(dev->dev, "unable to enable/disable RX: %d\n", ret);
+- return ret;
++ ret = -EINVAL;
+ }
+
+- return 0;
++ return ret;
+ }
+
+ static int mchp_spdifrx_hw_params(struct snd_pcm_substream *substream,
+@@ -401,7 +362,7 @@ static int mchp_spdifrx_hw_params(struct snd_pcm_substream *substream,
+ struct snd_soc_dai *dai)
+ {
+ struct mchp_spdifrx_dev *dev = snd_soc_dai_get_drvdata(dai);
+- u32 mr;
++ u32 mr = 0;
+ int ret;
+
+ dev_dbg(dev->dev, "%s() rate=%u format=%#x width=%u channels=%u\n",
+@@ -413,13 +374,6 @@ static int mchp_spdifrx_hw_params(struct snd_pcm_substream *substream,
+ return -EINVAL;
+ }
+
+- regmap_read(dev->regmap, SPDIFRX_MR, &mr);
+-
+- if (mr & SPDIFRX_MR_RXEN_ENABLE) {
+- dev_err(dev->dev, "PCM already running\n");
+- return -EBUSY;
+- }
+-
+ if (params_channels(params) != SPDIFRX_CHANNELS) {
+ dev_err(dev->dev, "unsupported number of channels: %d\n",
+ params_channels(params));
+@@ -445,6 +399,13 @@ static int mchp_spdifrx_hw_params(struct snd_pcm_substream *substream,
+ return -EINVAL;
+ }
+
++ mutex_lock(&dev->mlock);
++ if (dev->trigger_enabled) {
++ dev_err(dev->dev, "PCM already running\n");
++ ret = -EBUSY;
++ goto unlock;
++ }
++
+ if (dev->gclk_enabled) {
+ clk_disable_unprepare(dev->gclk);
+ dev->gclk_enabled = 0;
+@@ -455,19 +416,24 @@ static int mchp_spdifrx_hw_params(struct snd_pcm_substream *substream,
+ dev_err(dev->dev,
+ "unable to set gclk min rate: rate %u * ratio %u + 1\n",
+ params_rate(params), SPDIFRX_GCLK_RATIO_MIN);
+- return ret;
++ goto unlock;
+ }
+ ret = clk_prepare_enable(dev->gclk);
+ if (ret) {
+ dev_err(dev->dev, "unable to enable gclk: %d\n", ret);
+- return ret;
++ goto unlock;
+ }
+ dev->gclk_enabled = 1;
+
+ dev_dbg(dev->dev, "GCLK range min set to %d\n",
+ params_rate(params) * SPDIFRX_GCLK_RATIO_MIN + 1);
+
+- return regmap_write(dev->regmap, SPDIFRX_MR, mr);
++ ret = regmap_write(dev->regmap, SPDIFRX_MR, mr);
++
++unlock:
++ mutex_unlock(&dev->mlock);
++
++ return ret;
+ }
+
+ static int mchp_spdifrx_hw_free(struct snd_pcm_substream *substream,
+@@ -475,10 +441,12 @@ static int mchp_spdifrx_hw_free(struct snd_pcm_substream *substream,
+ {
+ struct mchp_spdifrx_dev *dev = snd_soc_dai_get_drvdata(dai);
+
++ mutex_lock(&dev->mlock);
+ if (dev->gclk_enabled) {
+ clk_disable_unprepare(dev->gclk);
+ dev->gclk_enabled = 0;
+ }
++ mutex_unlock(&dev->mlock);
+ return 0;
+ }
+
+@@ -515,22 +483,51 @@ static int mchp_spdifrx_cs_get(struct mchp_spdifrx_dev *dev,
+ {
+ struct mchp_spdifrx_mixer_control *ctrl = &dev->control;
+ struct mchp_spdifrx_ch_stat *ch_stat = &ctrl->ch_stat[channel];
+- int ret;
+-
+- regmap_write(dev->regmap, SPDIFRX_IER, SPDIFRX_IR_CSC(channel));
+- /* check for new data available */
+- ret = wait_for_completion_interruptible_timeout(&ch_stat->done,
+- msecs_to_jiffies(100));
+- /* IP might not be started or valid stream might not be present */
+- if (ret < 0) {
+- dev_dbg(dev->dev, "channel status for channel %d timeout\n",
+- channel);
++ int ret = 0;
++
++ mutex_lock(&dev->mlock);
++
++ /*
++ * We may reach this point with both clocks enabled but the receiver
++ * still disabled. To avoid waiting for completion and returning
++ * with a timeout, check dev->trigger_enabled.
++ *
++ * To retrieve data:
++ * - if the receiver is enabled, the CSC IRQ will update the data in
++ * the software caches (ch_stat->data)
++ * - otherwise we just update the software caches here with the
++ * latest available information and return it; in this case we
++ * don't need spin locking as the IRQ is disabled and will not be
++ * raised from anywhere else.
++ */
++
++ if (dev->trigger_enabled) {
++ reinit_completion(&ch_stat->done);
++ regmap_write(dev->regmap, SPDIFRX_IER, SPDIFRX_IR_CSC(channel));
++ /* Check for new data available */
++ ret = wait_for_completion_interruptible_timeout(&ch_stat->done,
++ msecs_to_jiffies(100));
++ /* Valid stream might not be present */
++ if (ret <= 0) {
++ dev_dbg(dev->dev, "channel status for channel %d timeout\n",
++ channel);
++ regmap_write(dev->regmap, SPDIFRX_IDR, SPDIFRX_IR_CSC(channel));
++ ret = ret ? : -ETIMEDOUT;
++ goto unlock;
++ } else {
++ ret = 0;
++ }
++ } else {
++ /* Update software cache with latest channel status. */
++ mchp_spdifrx_channel_status_read(dev, channel);
+ }
+
+ memcpy(uvalue->value.iec958.status, ch_stat->data,
+ sizeof(ch_stat->data));
+
+- return 0;
++unlock:
++ mutex_unlock(&dev->mlock);
++ return ret;
+ }
+
+ static int mchp_spdifrx_cs1_get(struct snd_kcontrol *kcontrol,
+@@ -564,29 +561,49 @@ static int mchp_spdifrx_subcode_ch_get(struct mchp_spdifrx_dev *dev,
+ int channel,
+ struct snd_ctl_elem_value *uvalue)
+ {
+- unsigned long flags;
+ struct mchp_spdifrx_mixer_control *ctrl = &dev->control;
+ struct mchp_spdifrx_user_data *user_data = &ctrl->user_data[channel];
+- int ret;
+-
+- reinit_completion(&user_data->done);
+- mchp_spdifrx_isr_blockend_en(dev);
+- ret = wait_for_completion_interruptible_timeout(&user_data->done,
+- msecs_to_jiffies(100));
+- /* IP might not be started or valid stream might not be present */
+- if (ret <= 0) {
+- dev_dbg(dev->dev, "user data for channel %d timeout\n",
+- channel);
+- mchp_spdifrx_isr_blockend_dis(dev);
+- return ret;
++ int ret = 0;
++
++ mutex_lock(&dev->mlock);
++
++ /*
++ * We may reach this point with both clocks enabled but the receiver
++ * still disabled. To avoid waiting for the completion to just time
++ * out, we check the dev->trigger_enabled flag here.
++ *
++ * To retrieve data:
++ * - if the receiver is enabled we need to wait for the blockend IRQ
++ * to read the data and update the software caches for us
++ * - otherwise reading the SPDIFRX_CHUD() registers is enough.
++ */
++
++ if (dev->trigger_enabled) {
++ reinit_completion(&user_data->done);
++ regmap_write(dev->regmap, SPDIFRX_IER, SPDIFRX_IR_BLOCKEND);
++ ret = wait_for_completion_interruptible_timeout(&user_data->done,
++ msecs_to_jiffies(100));
++ /* Valid stream might not be present. */
++ if (ret <= 0) {
++ dev_dbg(dev->dev, "user data for channel %d timeout\n",
++ channel);
++ regmap_write(dev->regmap, SPDIFRX_IDR, SPDIFRX_IR_BLOCKEND);
++ ret = ret ? : -ETIMEDOUT;
++ goto unlock;
++ } else {
++ ret = 0;
++ }
++ } else {
++ /* Update software cache with last available data. */
++ mchp_spdifrx_channel_user_data_read(dev, channel);
+ }
+
+- spin_lock_irqsave(&user_data->lock, flags);
+ memcpy(uvalue->value.iec958.subcode, user_data->data,
+ sizeof(user_data->data));
+- spin_unlock_irqrestore(&user_data->lock, flags);
+
+- return 0;
++unlock:
++ mutex_unlock(&dev->mlock);
++ return ret;
+ }
+
+ static int mchp_spdifrx_subcode_ch1_get(struct snd_kcontrol *kcontrol,
+@@ -627,10 +644,24 @@ static int mchp_spdifrx_ulock_get(struct snd_kcontrol *kcontrol,
+ u32 val;
+ bool ulock_old = ctrl->ulock;
+
+- regmap_read(dev->regmap, SPDIFRX_RSR, &val);
+- ctrl->ulock = !(val & SPDIFRX_RSR_ULOCK);
++ mutex_lock(&dev->mlock);
++
++ /*
++ * The RSR.ULOCK bit has a wrong value if both pclk and gclk are
++ * enabled and the receiver is disabled. Thus we take
++ * dev->trigger_enabled into account here to return the real status.
++ */
++ if (dev->trigger_enabled) {
++ regmap_read(dev->regmap, SPDIFRX_RSR, &val);
++ ctrl->ulock = !(val & SPDIFRX_RSR_ULOCK);
++ } else {
++ ctrl->ulock = 0;
++ }
++
+ uvalue->value.integer.value[0] = ctrl->ulock;
+
++ mutex_unlock(&dev->mlock);
++
+ return ulock_old != ctrl->ulock;
+ }
+
+@@ -643,8 +674,22 @@ static int mchp_spdifrx_badf_get(struct snd_kcontrol *kcontrol,
+ u32 val;
+ bool badf_old = ctrl->badf;
+
+- regmap_read(dev->regmap, SPDIFRX_RSR, &val);
+- ctrl->badf = !!(val & SPDIFRX_RSR_BADF);
++ mutex_lock(&dev->mlock);
++
++ /*
++ * The RSR.ULOCK bit has a wrong value if both pclk and gclk are
++ * enabled and the receiver is disabled. Thus we take
++ * dev->trigger_enabled into account here to return the real status.
++ */
++ if (dev->trigger_enabled) {
++ regmap_read(dev->regmap, SPDIFRX_RSR, &val);
++ ctrl->badf = !!(val & SPDIFRX_RSR_BADF);
++ } else {
++ ctrl->badf = 0;
++ }
++
++ mutex_unlock(&dev->mlock);
++
+ uvalue->value.integer.value[0] = ctrl->badf;
+
+ return badf_old != ctrl->badf;
+@@ -656,11 +701,48 @@ static int mchp_spdifrx_signal_get(struct snd_kcontrol *kcontrol,
+ struct snd_soc_dai *dai = snd_kcontrol_chip(kcontrol);
+ struct mchp_spdifrx_dev *dev = snd_soc_dai_get_drvdata(dai);
+ struct mchp_spdifrx_mixer_control *ctrl = &dev->control;
+- u32 val;
++ u32 val = ~0U, loops = 10;
++ int ret;
+ bool signal_old = ctrl->signal;
+
+- regmap_read(dev->regmap, SPDIFRX_RSR, &val);
+- ctrl->signal = !(val & SPDIFRX_RSR_NOSIGNAL);
++ mutex_lock(&dev->mlock);
++
++ /*
++ * To get the signal we need to have the receiver enabled. It
++ * could also be enabled from the trigger() function, thus we need
++ * to take care not to disable the receiver while it is running.
++ */
++ if (!dev->trigger_enabled) {
++ ret = clk_prepare_enable(dev->gclk);
++ if (ret)
++ goto unlock;
++
++ regmap_update_bits(dev->regmap, SPDIFRX_MR, SPDIFRX_MR_RXEN_MASK,
++ SPDIFRX_MR_RXEN_ENABLE);
++
++ /* Wait for RSR.ULOCK bit. */
++ while (--loops) {
++ regmap_read(dev->regmap, SPDIFRX_RSR, &val);
++ if (!(val & SPDIFRX_RSR_ULOCK))
++ break;
++ usleep_range(100, 150);
++ }
++
++ regmap_update_bits(dev->regmap, SPDIFRX_MR, SPDIFRX_MR_RXEN_MASK,
++ SPDIFRX_MR_RXEN_DISABLE);
++
++ clk_disable_unprepare(dev->gclk);
++ } else {
++ regmap_read(dev->regmap, SPDIFRX_RSR, &val);
++ }
++
++unlock:
++ mutex_unlock(&dev->mlock);
++
++ if (!(val & SPDIFRX_RSR_ULOCK))
++ ctrl->signal = !(val & SPDIFRX_RSR_NOSIGNAL);
++ else
++ ctrl->signal = 0;
+ uvalue->value.integer.value[0] = ctrl->signal;
+
+ return signal_old != ctrl->signal;
+@@ -685,18 +767,32 @@ static int mchp_spdifrx_rate_get(struct snd_kcontrol *kcontrol,
+ u32 val;
+ int rate;
+
+- regmap_read(dev->regmap, SPDIFRX_RSR, &val);
+-
+- /* if the receiver is not locked, ISF data is invalid */
+- if (val & SPDIFRX_RSR_ULOCK || !(val & SPDIFRX_RSR_IFS_MASK)) {
++ mutex_lock(&dev->mlock);
++
++ /*
++ * The RSR.ULOCK bit has a wrong value if both pclk and gclk are
++ * enabled and the receiver is disabled. Thus we take
++ * dev->trigger_enabled into account here to return the real status.
++ */
++ if (dev->trigger_enabled) {
++ regmap_read(dev->regmap, SPDIFRX_RSR, &val);
++ /* If the receiver is not locked, IFS data is invalid. */
++ if (val & SPDIFRX_RSR_ULOCK || !(val & SPDIFRX_RSR_IFS_MASK)) {
++ ucontrol->value.integer.value[0] = 0;
++ goto unlock;
++ }
++ } else {
++ /* Receiver is not locked, IFS data is invalid. */
+ ucontrol->value.integer.value[0] = 0;
+- return 0;
++ goto unlock;
+ }
+
+ rate = clk_get_rate(dev->gclk);
+
+ ucontrol->value.integer.value[0] = rate / (32 * SPDIFRX_RSR_IFS(val));
+
++unlock:
++ mutex_unlock(&dev->mlock);
+ return 0;
+ }
+
+@@ -808,11 +904,9 @@ static int mchp_spdifrx_dai_probe(struct snd_soc_dai *dai)
+ SPDIFRX_MR_AUTORST_NOACTION |
+ SPDIFRX_MR_PACK_DISABLED);
+
+- dev->blockend_refcount = 0;
+ for (ch = 0; ch < SPDIFRX_CHANNELS; ch++) {
+ init_completion(&ctrl->ch_stat[ch].done);
+ init_completion(&ctrl->user_data[ch].done);
+- spin_lock_init(&ctrl->user_data[ch].lock);
+ }
+
+ /* Add controls */
+@@ -827,7 +921,7 @@ static int mchp_spdifrx_dai_remove(struct snd_soc_dai *dai)
+ struct mchp_spdifrx_dev *dev = snd_soc_dai_get_drvdata(dai);
+
+ /* Disable interrupts */
+- regmap_write(dev->regmap, SPDIFRX_IDR, 0xFF);
++ regmap_write(dev->regmap, SPDIFRX_IDR, GENMASK(14, 0));
+
+ clk_disable_unprepare(dev->pclk);
+
+@@ -913,7 +1007,17 @@ static int mchp_spdifrx_probe(struct platform_device *pdev)
+ "failed to get the PMC generated clock: %d\n", err);
+ return err;
+ }
+- spin_lock_init(&dev->blockend_lock);
++
++ /*
++ * Signal control needs a valid rate on gclk. hw_params() configures
++ * it properly, but requesting the signal before any hw_params() has
++ * been called leads to an invalid value being returned for the
++ * signal. Thus, configure gclk at a valid rate here, during
++ * initialization, to simplify the control path.
++ */
++ clk_set_min_rate(dev->gclk, 48000 * SPDIFRX_GCLK_RATIO_MIN + 1);
++
++ mutex_init(&dev->mlock);
+
+ dev->dev = &pdev->dev;
+ dev->regmap = regmap;
+diff --git a/sound/soc/codecs/lpass-rx-macro.c b/sound/soc/codecs/lpass-rx-macro.c
+index a9ef9d5ffcc5c..8621cfabcf5b6 100644
+--- a/sound/soc/codecs/lpass-rx-macro.c
++++ b/sound/soc/codecs/lpass-rx-macro.c
+@@ -366,7 +366,7 @@
+ #define CDC_RX_DSD1_CFG2 (0x0F8C)
+ #define RX_MAX_OFFSET (0x0F8C)
+
+-#define MCLK_FREQ 9600000
++#define MCLK_FREQ 19200000
+
+ #define RX_MACRO_RATES (SNDRV_PCM_RATE_8000 | SNDRV_PCM_RATE_16000 |\
+ SNDRV_PCM_RATE_32000 | SNDRV_PCM_RATE_48000 |\
+@@ -3579,7 +3579,7 @@ static int rx_macro_probe(struct platform_device *pdev)
+
+ /* set MCLK and NPL rates */
+ clk_set_rate(rx->mclk, MCLK_FREQ);
+- clk_set_rate(rx->npl, 2 * MCLK_FREQ);
++ clk_set_rate(rx->npl, MCLK_FREQ);
+
+ ret = clk_prepare_enable(rx->macro);
+ if (ret)
+@@ -3601,10 +3601,6 @@ static int rx_macro_probe(struct platform_device *pdev)
+ if (ret)
+ goto err_fsgen;
+
+- ret = rx_macro_register_mclk_output(rx);
+- if (ret)
+- goto err_clkout;
+-
+ ret = devm_snd_soc_register_component(dev, &rx_macro_component_drv,
+ rx_macro_dai,
+ ARRAY_SIZE(rx_macro_dai));
+@@ -3618,6 +3614,10 @@ static int rx_macro_probe(struct platform_device *pdev)
+ pm_runtime_set_active(dev);
+ pm_runtime_enable(dev);
+
++ ret = rx_macro_register_mclk_output(rx);
++ if (ret)
++ goto err_clkout;
++
+ return 0;
+
+ err_clkout:
+diff --git a/sound/soc/codecs/lpass-tx-macro.c b/sound/soc/codecs/lpass-tx-macro.c
+index 2ef62d6edc302..2449a2df66df0 100644
+--- a/sound/soc/codecs/lpass-tx-macro.c
++++ b/sound/soc/codecs/lpass-tx-macro.c
+@@ -203,7 +203,7 @@
+ #define TX_MACRO_AMIC_UNMUTE_DELAY_MS 100
+ #define TX_MACRO_DMIC_HPF_DELAY_MS 300
+ #define TX_MACRO_AMIC_HPF_DELAY_MS 300
+-#define MCLK_FREQ 9600000
++#define MCLK_FREQ 19200000
+
+ enum {
+ TX_MACRO_AIF_INVALID = 0,
+@@ -2014,7 +2014,7 @@ static int tx_macro_probe(struct platform_device *pdev)
+
+ /* set MCLK and NPL rates */
+ clk_set_rate(tx->mclk, MCLK_FREQ);
+- clk_set_rate(tx->npl, 2 * MCLK_FREQ);
++ clk_set_rate(tx->npl, MCLK_FREQ);
+
+ ret = clk_prepare_enable(tx->macro);
+ if (ret)
+@@ -2036,10 +2036,6 @@ static int tx_macro_probe(struct platform_device *pdev)
+ if (ret)
+ goto err_fsgen;
+
+- ret = tx_macro_register_mclk_output(tx);
+- if (ret)
+- goto err_clkout;
+-
+ ret = devm_snd_soc_register_component(dev, &tx_macro_component_drv,
+ tx_macro_dai,
+ ARRAY_SIZE(tx_macro_dai));
+@@ -2052,6 +2048,10 @@ static int tx_macro_probe(struct platform_device *pdev)
+ pm_runtime_set_active(dev);
+ pm_runtime_enable(dev);
+
++ ret = tx_macro_register_mclk_output(tx);
++ if (ret)
++ goto err_clkout;
++
+ return 0;
+
+ err_clkout:
+diff --git a/sound/soc/codecs/lpass-va-macro.c b/sound/soc/codecs/lpass-va-macro.c
+index b0b6cf29cba30..1623ba78ddb3d 100644
+--- a/sound/soc/codecs/lpass-va-macro.c
++++ b/sound/soc/codecs/lpass-va-macro.c
+@@ -1524,16 +1524,6 @@ static int va_macro_probe(struct platform_device *pdev)
+ if (ret)
+ goto err_mclk;
+
+- ret = va_macro_register_fsgen_output(va);
+- if (ret)
+- goto err_clkout;
+-
+- va->fsgen = clk_hw_get_clk(&va->hw, "fsgen");
+- if (IS_ERR(va->fsgen)) {
+- ret = PTR_ERR(va->fsgen);
+- goto err_clkout;
+- }
+-
+ if (va->has_swr_master) {
+ /* Set default CLK div to 1 */
+ regmap_update_bits(va->regmap, CDC_VA_TOP_CSR_SWR_MIC_CTL0,
+@@ -1560,6 +1550,16 @@ static int va_macro_probe(struct platform_device *pdev)
+ pm_runtime_set_active(dev);
+ pm_runtime_enable(dev);
+
++ ret = va_macro_register_fsgen_output(va);
++ if (ret)
++ goto err_clkout;
++
++ va->fsgen = clk_hw_get_clk(&va->hw, "fsgen");
++ if (IS_ERR(va->fsgen)) {
++ ret = PTR_ERR(va->fsgen);
++ goto err_clkout;
++ }
++
+ return 0;
+
+ err_clkout:
+diff --git a/sound/soc/codecs/lpass-wsa-macro.c b/sound/soc/codecs/lpass-wsa-macro.c
+index 5cfe96f6e430e..c0b86d69c72e3 100644
+--- a/sound/soc/codecs/lpass-wsa-macro.c
++++ b/sound/soc/codecs/lpass-wsa-macro.c
+@@ -2451,11 +2451,6 @@ static int wsa_macro_probe(struct platform_device *pdev)
+ if (ret)
+ goto err_fsgen;
+
+- ret = wsa_macro_register_mclk_output(wsa);
+- if (ret)
+- goto err_clkout;
+-
+-
+ ret = devm_snd_soc_register_component(dev, &wsa_macro_component_drv,
+ wsa_macro_dai,
+ ARRAY_SIZE(wsa_macro_dai));
+@@ -2468,6 +2463,10 @@ static int wsa_macro_probe(struct platform_device *pdev)
+ pm_runtime_set_active(dev);
+ pm_runtime_enable(dev);
+
++ ret = wsa_macro_register_mclk_output(wsa);
++ if (ret)
++ goto err_clkout;
++
+ return 0;
+
+ err_clkout:
+diff --git a/sound/soc/codecs/tlv320adcx140.c b/sound/soc/codecs/tlv320adcx140.c
+index 91a22d9279158..530f321d08e9c 100644
+--- a/sound/soc/codecs/tlv320adcx140.c
++++ b/sound/soc/codecs/tlv320adcx140.c
+@@ -925,7 +925,7 @@ static int adcx140_configure_gpio(struct adcx140_priv *adcx140)
+
+ gpio_count = device_property_count_u32(adcx140->dev,
+ "ti,gpio-config");
+- if (gpio_count == 0)
++ if (gpio_count <= 0)
+ return 0;
+
+ if (gpio_count != ADCX140_NUM_GPIO_CFGS)
+diff --git a/sound/soc/fsl/fsl_sai.c b/sound/soc/fsl/fsl_sai.c
+index 35a52c3a020d1..4967f2daa6d97 100644
+--- a/sound/soc/fsl/fsl_sai.c
++++ b/sound/soc/fsl/fsl_sai.c
+@@ -281,6 +281,7 @@ static int fsl_sai_set_dai_fmt_tr(struct snd_soc_dai *cpu_dai,
+ val_cr4 |= FSL_SAI_CR4_MF;
+
+ sai->is_pdm_mode = false;
++ sai->is_dsp_mode = false;
+ /* DAI mode */
+ switch (fmt & SND_SOC_DAIFMT_FORMAT_MASK) {
+ case SND_SOC_DAIFMT_I2S:
+diff --git a/sound/soc/kirkwood/kirkwood-dma.c b/sound/soc/kirkwood/kirkwood-dma.c
+index 700a18561a940..640cebd2983e2 100644
+--- a/sound/soc/kirkwood/kirkwood-dma.c
++++ b/sound/soc/kirkwood/kirkwood-dma.c
+@@ -86,7 +86,7 @@ kirkwood_dma_conf_mbus_windows(void __iomem *base, int win,
+
+ /* try to find matching cs for current dma address */
+ for (i = 0; i < dram->num_cs; i++) {
+- const struct mbus_dram_window *cs = dram->cs + i;
++ const struct mbus_dram_window *cs = &dram->cs[i];
+ if ((cs->base & 0xffff0000) < (dma & 0xffff0000)) {
+ writel(cs->base & 0xffff0000,
+ base + KIRKWOOD_AUDIO_WIN_BASE_REG(win));
+diff --git a/sound/soc/qcom/qdsp6/q6apm-dai.c b/sound/soc/qcom/qdsp6/q6apm-dai.c
+index ee59ef36b85a6..7f02f5b2c33fd 100644
+--- a/sound/soc/qcom/qdsp6/q6apm-dai.c
++++ b/sound/soc/qcom/qdsp6/q6apm-dai.c
+@@ -8,6 +8,7 @@
+ #include <linux/slab.h>
+ #include <sound/soc.h>
+ #include <sound/soc-dapm.h>
++#include <linux/spinlock.h>
+ #include <sound/pcm.h>
+ #include <asm/dma.h>
+ #include <linux/dma-mapping.h>
+@@ -53,6 +54,7 @@ struct q6apm_dai_rtd {
+ uint16_t session_id;
+ enum stream_state state;
+ struct q6apm_graph *graph;
++ spinlock_t lock;
+ };
+
+ struct q6apm_dai_data {
+@@ -62,7 +64,8 @@ struct q6apm_dai_data {
+ static struct snd_pcm_hardware q6apm_dai_hardware_capture = {
+ .info = (SNDRV_PCM_INFO_MMAP | SNDRV_PCM_INFO_BLOCK_TRANSFER |
+ SNDRV_PCM_INFO_MMAP_VALID | SNDRV_PCM_INFO_INTERLEAVED |
+- SNDRV_PCM_INFO_PAUSE | SNDRV_PCM_INFO_RESUME),
++ SNDRV_PCM_INFO_PAUSE | SNDRV_PCM_INFO_RESUME |
++ SNDRV_PCM_INFO_BATCH),
+ .formats = (SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_S24_LE),
+ .rates = SNDRV_PCM_RATE_8000_48000,
+ .rate_min = 8000,
+@@ -80,7 +83,8 @@ static struct snd_pcm_hardware q6apm_dai_hardware_capture = {
+ static struct snd_pcm_hardware q6apm_dai_hardware_playback = {
+ .info = (SNDRV_PCM_INFO_MMAP | SNDRV_PCM_INFO_BLOCK_TRANSFER |
+ SNDRV_PCM_INFO_MMAP_VALID | SNDRV_PCM_INFO_INTERLEAVED |
+- SNDRV_PCM_INFO_PAUSE | SNDRV_PCM_INFO_RESUME),
++ SNDRV_PCM_INFO_PAUSE | SNDRV_PCM_INFO_RESUME |
++ SNDRV_PCM_INFO_BATCH),
+ .formats = (SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_S24_LE),
+ .rates = SNDRV_PCM_RATE_8000_192000,
+ .rate_min = 8000,
+@@ -99,20 +103,25 @@ static void event_handler(uint32_t opcode, uint32_t token, uint32_t *payload, vo
+ {
+ struct q6apm_dai_rtd *prtd = priv;
+ struct snd_pcm_substream *substream = prtd->substream;
++ unsigned long flags;
+
+ switch (opcode) {
+ case APM_CLIENT_EVENT_CMD_EOS_DONE:
+ prtd->state = Q6APM_STREAM_STOPPED;
+ break;
+ case APM_CLIENT_EVENT_DATA_WRITE_DONE:
++ spin_lock_irqsave(&prtd->lock, flags);
+ prtd->pos += prtd->pcm_count;
++ spin_unlock_irqrestore(&prtd->lock, flags);
+ snd_pcm_period_elapsed(substream);
+ if (prtd->state == Q6APM_STREAM_RUNNING)
+ q6apm_write_async(prtd->graph, prtd->pcm_count, 0, 0, 0);
+
+ break;
+ case APM_CLIENT_EVENT_DATA_READ_DONE:
++ spin_lock_irqsave(&prtd->lock, flags);
+ prtd->pos += prtd->pcm_count;
++ spin_unlock_irqrestore(&prtd->lock, flags);
+ snd_pcm_period_elapsed(substream);
+ if (prtd->state == Q6APM_STREAM_RUNNING)
+ q6apm_read(prtd->graph);
+@@ -253,6 +262,7 @@ static int q6apm_dai_open(struct snd_soc_component *component,
+ if (prtd == NULL)
+ return -ENOMEM;
+
++ spin_lock_init(&prtd->lock);
+ prtd->substream = substream;
+ prtd->graph = q6apm_graph_open(dev, (q6apm_cb)event_handler, prtd, graph_id);
+ if (IS_ERR(prtd->graph)) {
+@@ -332,11 +342,17 @@ static snd_pcm_uframes_t q6apm_dai_pointer(struct snd_soc_component *component,
+ {
+ struct snd_pcm_runtime *runtime = substream->runtime;
+ struct q6apm_dai_rtd *prtd = runtime->private_data;
++ snd_pcm_uframes_t ptr;
++ unsigned long flags;
+
++ spin_lock_irqsave(&prtd->lock, flags);
+ if (prtd->pos == prtd->pcm_size)
+ prtd->pos = 0;
+
+- return bytes_to_frames(runtime, prtd->pos);
++ ptr = bytes_to_frames(runtime, prtd->pos);
++ spin_unlock_irqrestore(&prtd->lock, flags);
++
++ return ptr;
+ }
+
+ static int q6apm_dai_hw_params(struct snd_soc_component *component,
+diff --git a/sound/soc/qcom/qdsp6/q6apm-lpass-dais.c b/sound/soc/qcom/qdsp6/q6apm-lpass-dais.c
+index ce9e5646d8f3a..23d23bc6fbaa7 100644
+--- a/sound/soc/qcom/qdsp6/q6apm-lpass-dais.c
++++ b/sound/soc/qcom/qdsp6/q6apm-lpass-dais.c
+@@ -127,6 +127,11 @@ static int q6apm_lpass_dai_prepare(struct snd_pcm_substream *substream, struct s
+ int graph_id = dai->id;
+ int rc;
+
++ if (dai_data->is_port_started[dai->id]) {
++ q6apm_graph_stop(dai_data->graph[dai->id]);
++ dai_data->is_port_started[dai->id] = false;
++ }
++
+ /**
+ * It is recommend to load DSP with source graph first and then sink
+ * graph, so sequence for playback and capture will be different
+diff --git a/sound/soc/sh/rcar/rsnd.h b/sound/soc/sh/rcar/rsnd.h
+index d9cd190d7e198..f8ef6836ef84e 100644
+--- a/sound/soc/sh/rcar/rsnd.h
++++ b/sound/soc/sh/rcar/rsnd.h
+@@ -901,8 +901,6 @@ void rsnd_mod_make_sure(struct rsnd_mod *mod, enum rsnd_mod_type type);
+ if (!IS_BUILTIN(RSND_DEBUG_NO_DAI_CALL)) \
+ dev_dbg(dev, param)
+
+-#endif
+-
+ #ifdef CONFIG_DEBUG_FS
+ int rsnd_debugfs_probe(struct snd_soc_component *component);
+ void rsnd_debugfs_reg_show(struct seq_file *m, phys_addr_t _addr,
+@@ -913,3 +911,5 @@ void rsnd_debugfs_mod_reg_show(struct seq_file *m, struct rsnd_mod *mod,
+ #else
+ #define rsnd_debugfs_probe NULL
+ #endif
++
++#endif /* RSND_H */
+diff --git a/sound/soc/soc-compress.c b/sound/soc/soc-compress.c
+index 870f13e1d389c..e7aa6f360cabe 100644
+--- a/sound/soc/soc-compress.c
++++ b/sound/soc/soc-compress.c
+@@ -149,6 +149,8 @@ static int soc_compr_open_fe(struct snd_compr_stream *cstream)
+ if (ret < 0)
+ goto be_err;
+
++ mutex_lock_nested(&fe->card->pcm_mutex, fe->card->pcm_subclass);
++
+ /* calculate valid and active FE <-> BE dpcms */
+ dpcm_process_paths(fe, stream, &list, 1);
+ fe->dpcm[stream].runtime = fe_substream->runtime;
+@@ -184,7 +186,6 @@ static int soc_compr_open_fe(struct snd_compr_stream *cstream)
+ fe->dpcm[stream].state = SND_SOC_DPCM_STATE_OPEN;
+ fe->dpcm[stream].runtime_update = SND_SOC_DPCM_UPDATE_NO;
+
+- mutex_lock_nested(&fe->card->pcm_mutex, fe->card->pcm_subclass);
+ snd_soc_runtime_activate(fe, stream);
+ mutex_unlock(&fe->card->pcm_mutex);
+
+@@ -215,7 +216,6 @@ static int soc_compr_free_fe(struct snd_compr_stream *cstream)
+
+ mutex_lock_nested(&fe->card->pcm_mutex, fe->card->pcm_subclass);
+ snd_soc_runtime_deactivate(fe, stream);
+- mutex_unlock(&fe->card->pcm_mutex);
+
+ fe->dpcm[stream].runtime_update = SND_SOC_DPCM_UPDATE_FE;
+
+@@ -234,6 +234,8 @@ static int soc_compr_free_fe(struct snd_compr_stream *cstream)
+
+ dpcm_be_disconnect(fe, stream);
+
++ mutex_unlock(&fe->card->pcm_mutex);
++
+ fe->dpcm[stream].runtime = NULL;
+
+ snd_soc_link_compr_shutdown(cstream, 0);
+@@ -409,8 +411,9 @@ static int soc_compr_set_params_fe(struct snd_compr_stream *cstream,
+ ret = snd_soc_link_compr_set_params(cstream);
+ if (ret < 0)
+ goto out;
+-
++ mutex_lock_nested(&fe->card->pcm_mutex, fe->card->pcm_subclass);
+ dpcm_dapm_stream_event(fe, stream, SND_SOC_DAPM_STREAM_START);
++ mutex_unlock(&fe->card->pcm_mutex);
+ fe->dpcm[stream].state = SND_SOC_DPCM_STATE_PREPARE;
+
+ out:
+@@ -623,7 +626,7 @@ int snd_soc_new_compress(struct snd_soc_pcm_runtime *rtd, int num)
+ rtd->fe_compr = 1;
+ if (rtd->dai_link->dpcm_playback)
+ be_pcm->streams[SNDRV_PCM_STREAM_PLAYBACK].substream->private_data = rtd;
+- else if (rtd->dai_link->dpcm_capture)
++ if (rtd->dai_link->dpcm_capture)
+ be_pcm->streams[SNDRV_PCM_STREAM_CAPTURE].substream->private_data = rtd;
+ memcpy(compr->ops, &soc_compr_dyn_ops, sizeof(soc_compr_dyn_ops));
+ } else {
+diff --git a/sound/soc/soc-topology.c b/sound/soc/soc-topology.c
+index a79a2fb260b87..d68c48555a7e3 100644
+--- a/sound/soc/soc-topology.c
++++ b/sound/soc/soc-topology.c
+@@ -2408,7 +2408,7 @@ static int soc_valid_header(struct soc_tplg *tplg,
+ return -EINVAL;
+ }
+
+- if (soc_tplg_get_hdr_offset(tplg) + hdr->payload_size >= tplg->fw->size) {
++ if (soc_tplg_get_hdr_offset(tplg) + le32_to_cpu(hdr->payload_size) >= tplg->fw->size) {
+ dev_err(tplg->dev,
+ "ASoC: invalid header of type %d at offset %ld payload_size %d\n",
+ le32_to_cpu(hdr->type), soc_tplg_get_hdr_offset(tplg),
+diff --git a/tools/bootconfig/scripts/ftrace2bconf.sh b/tools/bootconfig/scripts/ftrace2bconf.sh
+index 6183b36c68466..1603801cf1264 100755
+--- a/tools/bootconfig/scripts/ftrace2bconf.sh
++++ b/tools/bootconfig/scripts/ftrace2bconf.sh
+@@ -93,7 +93,7 @@ referred_vars() {
+ }
+
+ event_is_enabled() { # enable-file
+- test -f $1 & grep -q "1" $1
++ test -f $1 && grep -q "1" $1
+ }
+
+ per_event_options() { # event-dir
+diff --git a/tools/bpf/bpftool/Makefile b/tools/bpf/bpftool/Makefile
+index f610e184ce02a..270066aff8bf1 100644
+--- a/tools/bpf/bpftool/Makefile
++++ b/tools/bpf/bpftool/Makefile
+@@ -215,7 +215,8 @@ $(OUTPUT)%.bpf.o: skeleton/%.bpf.c $(OUTPUT)vmlinux.h $(LIBBPF_BOOTSTRAP)
+ -I$(or $(OUTPUT),.) \
+ -I$(srctree)/tools/include/uapi/ \
+ -I$(LIBBPF_BOOTSTRAP_INCLUDE) \
+- -g -O2 -Wall -target bpf -c $< -o $@
++ -g -O2 -Wall -fno-stack-protector \
++ -target bpf -c $< -o $@
+ $(Q)$(LLVM_STRIP) -g $@
+
+ $(OUTPUT)%.skel.h: $(OUTPUT)%.bpf.o $(BPFTOOL_BOOTSTRAP)
+diff --git a/tools/bpf/bpftool/prog.c b/tools/bpf/bpftool/prog.c
+index cfc9fdc1e8634..e87738dbffc10 100644
+--- a/tools/bpf/bpftool/prog.c
++++ b/tools/bpf/bpftool/prog.c
+@@ -2233,10 +2233,38 @@ static void profile_close_perf_events(struct profiler_bpf *obj)
+ profile_perf_event_cnt = 0;
+ }
+
++static int profile_open_perf_event(int mid, int cpu, int map_fd)
++{
++ int pmu_fd;
++
++ pmu_fd = syscall(__NR_perf_event_open, &metrics[mid].attr,
++ -1 /*pid*/, cpu, -1 /*group_fd*/, 0);
++ if (pmu_fd < 0) {
++ if (errno == ENODEV) {
++ p_info("cpu %d may be offline, skip %s profiling.",
++ cpu, metrics[mid].name);
++ profile_perf_event_cnt++;
++ return 0;
++ }
++ return -1;
++ }
++
++ if (bpf_map_update_elem(map_fd,
++ &profile_perf_event_cnt,
++ &pmu_fd, BPF_ANY) ||
++ ioctl(pmu_fd, PERF_EVENT_IOC_ENABLE, 0)) {
++ close(pmu_fd);
++ return -1;
++ }
++
++ profile_perf_events[profile_perf_event_cnt++] = pmu_fd;
++ return 0;
++}
++
+ static int profile_open_perf_events(struct profiler_bpf *obj)
+ {
+ unsigned int cpu, m;
+- int map_fd, pmu_fd;
++ int map_fd;
+
+ profile_perf_events = calloc(
+ sizeof(int), obj->rodata->num_cpu * obj->rodata->num_metric);
+@@ -2255,17 +2283,11 @@ static int profile_open_perf_events(struct profiler_bpf *obj)
+ if (!metrics[m].selected)
+ continue;
+ for (cpu = 0; cpu < obj->rodata->num_cpu; cpu++) {
+- pmu_fd = syscall(__NR_perf_event_open, &metrics[m].attr,
+- -1/*pid*/, cpu, -1/*group_fd*/, 0);
+- if (pmu_fd < 0 ||
+- bpf_map_update_elem(map_fd, &profile_perf_event_cnt,
+- &pmu_fd, BPF_ANY) ||
+- ioctl(pmu_fd, PERF_EVENT_IOC_ENABLE, 0)) {
++ if (profile_open_perf_event(m, cpu, map_fd)) {
+ p_err("failed to create event %s on cpu %d",
+ metrics[m].name, cpu);
+ return -1;
+ }
+- profile_perf_events[profile_perf_event_cnt++] = pmu_fd;
+ }
+ }
+ return 0;
+diff --git a/tools/lib/bpf/bpf_tracing.h b/tools/lib/bpf/bpf_tracing.h
+index 2972dc25ff722..9c1b1689068d1 100644
+--- a/tools/lib/bpf/bpf_tracing.h
++++ b/tools/lib/bpf/bpf_tracing.h
+@@ -137,7 +137,7 @@ struct pt_regs___s390 {
+ #define __PT_PARM3_REG gprs[4]
+ #define __PT_PARM4_REG gprs[5]
+ #define __PT_PARM5_REG gprs[6]
+-#define __PT_RET_REG grps[14]
++#define __PT_RET_REG gprs[14]
+ #define __PT_FP_REG gprs[11] /* Works only with CONFIG_FRAME_POINTER */
+ #define __PT_RC_REG gprs[2]
+ #define __PT_SP_REG gprs[15]
+diff --git a/tools/lib/bpf/btf.c b/tools/lib/bpf/btf.c
+index 71e165b09ed59..8cbcef959456d 100644
+--- a/tools/lib/bpf/btf.c
++++ b/tools/lib/bpf/btf.c
+@@ -688,8 +688,21 @@ int btf__align_of(const struct btf *btf, __u32 id)
+ if (align <= 0)
+ return libbpf_err(align);
+ max_align = max(max_align, align);
++
++ /* if field offset isn't aligned according to field
++ * type's alignment, then struct must be packed
++ */
++ if (btf_member_bitfield_size(t, i) == 0 &&
++ (m->offset % (8 * align)) != 0)
++ return 1;
+ }
+
++ /* if struct/union size isn't a multiple of its alignment,
++ * then struct must be packed
++ */
++ if ((t->size % max_align) != 0)
++ return 1;
++
+ return max_align;
+ }
+ default:
+diff --git a/tools/lib/bpf/btf_dump.c b/tools/lib/bpf/btf_dump.c
+index deb2bc9a0a7b0..69e80ee5f70e2 100644
+--- a/tools/lib/bpf/btf_dump.c
++++ b/tools/lib/bpf/btf_dump.c
+@@ -959,9 +959,12 @@ static void btf_dump_emit_struct_def(struct btf_dump *d,
+ * Keep `struct empty {}` on a single line,
+ * only print newline when there are regular or padding fields.
+ */
+- if (vlen || t->size)
++ if (vlen || t->size) {
+ btf_dump_printf(d, "\n");
+- btf_dump_printf(d, "%s}", pfx(lvl));
++ btf_dump_printf(d, "%s}", pfx(lvl));
++ } else {
++ btf_dump_printf(d, "}");
++ }
+ if (packed)
+ btf_dump_printf(d, " __attribute__((packed))");
+ }
+diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
+index 2a82f49ce16f3..adf818da35dda 100644
+--- a/tools/lib/bpf/libbpf.c
++++ b/tools/lib/bpf/libbpf.c
+@@ -7355,7 +7355,7 @@ static int bpf_object__sanitize_maps(struct bpf_object *obj)
+ if (!bpf_map__is_internal(m))
+ continue;
+ if (!kernel_supports(obj, FEAT_ARRAY_MMAP))
+- m->def.map_flags ^= BPF_F_MMAPABLE;
++ m->def.map_flags &= ~BPF_F_MMAPABLE;
+ }
+
+ return 0;
+diff --git a/tools/lib/bpf/nlattr.c b/tools/lib/bpf/nlattr.c
+index 3900d052ed19e..975e265eab3bf 100644
+--- a/tools/lib/bpf/nlattr.c
++++ b/tools/lib/bpf/nlattr.c
+@@ -178,7 +178,7 @@ int libbpf_nla_dump_errormsg(struct nlmsghdr *nlh)
+ hlen += nlmsg_len(&err->msg);
+
+ attr = (struct nlattr *) ((void *) err + hlen);
+- alen = nlh->nlmsg_len - hlen;
++ alen = (void *)nlh + nlh->nlmsg_len - (void *)attr;
+
+ if (libbpf_nla_parse(tb, NLMSGERR_ATTR_MAX, attr, alen,
+ extack_policy) != 0) {
+diff --git a/tools/lib/thermal/sampling.c b/tools/lib/thermal/sampling.c
+index ee818f4e9654d..70577423a9f0c 100644
+--- a/tools/lib/thermal/sampling.c
++++ b/tools/lib/thermal/sampling.c
+@@ -54,7 +54,7 @@ int thermal_sampling_fd(struct thermal_handler *th)
+ thermal_error_t thermal_sampling_exit(struct thermal_handler *th)
+ {
+ if (nl_unsubscribe_thermal(th->sk_sampling, th->cb_sampling,
+- THERMAL_GENL_EVENT_GROUP_NAME))
++ THERMAL_GENL_SAMPLING_GROUP_NAME))
+ return THERMAL_ERROR;
+
+ nl_thermal_disconnect(th->sk_sampling, th->cb_sampling);
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index 4b7c8b33069e5..b1a5f658673f0 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -1186,6 +1186,8 @@ static const char *uaccess_safe_builtin[] = {
+ "__tsan_atomic64_compare_exchange_val",
+ "__tsan_atomic_thread_fence",
+ "__tsan_atomic_signal_fence",
++ "__tsan_unaligned_read16",
++ "__tsan_unaligned_write16",
+ /* KCOV */
+ "write_comp_data",
+ "check_kcov_mode",
+diff --git a/tools/perf/Documentation/perf-intel-pt.txt b/tools/perf/Documentation/perf-intel-pt.txt
+index 7b6ccd2fa3bf1..9d485a9cdb198 100644
+--- a/tools/perf/Documentation/perf-intel-pt.txt
++++ b/tools/perf/Documentation/perf-intel-pt.txt
+@@ -1821,6 +1821,36 @@ Can be compiled and traced:
+ $
+
+
++Pipe mode
++---------
++Pipe mode is a problem for Intel PT and possibly other auxtrace users.
++It's not recommended to use a pipe as data output with Intel PT for
++the following reason.
++
++Essentially the auxtrace buffers do not behave like the regular perf
++event buffers. That is because the head and tail are updated by
++software, but in the auxtrace case the data is written by hardware.
++So the head and tail do not get updated as data is written.
++
++In the Intel PT case, the head and tail are updated only when the trace
++is disabled by software, for example:
++ - full-trace, system wide : when buffer passes watermark
++ - full-trace, not system-wide : when buffer passes watermark or
++ context switches
++ - snapshot mode : as above but also when a snapshot is made
++ - sample mode : as above but also when a sample is made
++
++That means finished-round ordering doesn't work. An auxtrace buffer
++can turn up whose data extends back in time, possibly to the very
++beginning of tracing.
++
++For a perf.data file, that problem is solved by going through the trace
++and queuing up the auxtrace buffers in advance.
++
++For pipe mode, that workaround is not available, so the order of
++events and timestamps can end up incorrect.
++
++
+ EXAMPLE
+ -------
+
+diff --git a/tools/perf/builtin-inject.c b/tools/perf/builtin-inject.c
+index 3f4e4dd5abf31..f8182417b7341 100644
+--- a/tools/perf/builtin-inject.c
++++ b/tools/perf/builtin-inject.c
+@@ -215,14 +215,14 @@ static int perf_event__repipe_event_update(struct perf_tool *tool,
+
+ #ifdef HAVE_AUXTRACE_SUPPORT
+
+-static int copy_bytes(struct perf_inject *inject, int fd, off_t size)
++static int copy_bytes(struct perf_inject *inject, struct perf_data *data, off_t size)
+ {
+ char buf[4096];
+ ssize_t ssz;
+ int ret;
+
+ while (size > 0) {
+- ssz = read(fd, buf, min(size, (off_t)sizeof(buf)));
++ ssz = perf_data__read(data, buf, min(size, (off_t)sizeof(buf)));
+ if (ssz < 0)
+ return -errno;
+ ret = output_bytes(inject, buf, ssz);
+@@ -260,7 +260,7 @@ static s64 perf_event__repipe_auxtrace(struct perf_session *session,
+ ret = output_bytes(inject, event, event->header.size);
+ if (ret < 0)
+ return ret;
+- ret = copy_bytes(inject, perf_data__fd(session->data),
++ ret = copy_bytes(inject, session->data,
+ event->auxtrace.size);
+ } else {
+ ret = output_bytes(inject, event,
+diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
+index 29dcd454b8e21..8374117e66f6e 100644
+--- a/tools/perf/builtin-record.c
++++ b/tools/perf/builtin-record.c
+@@ -154,6 +154,7 @@ struct record {
+ struct perf_tool tool;
+ struct record_opts opts;
+ u64 bytes_written;
++ u64 thread_bytes_written;
+ struct perf_data data;
+ struct auxtrace_record *itr;
+ struct evlist *evlist;
+@@ -226,14 +227,7 @@ static bool switch_output_time(struct record *rec)
+
+ static u64 record__bytes_written(struct record *rec)
+ {
+- int t;
+- u64 bytes_written = rec->bytes_written;
+- struct record_thread *thread_data = rec->thread_data;
+-
+- for (t = 0; t < rec->nr_threads; t++)
+- bytes_written += thread_data[t].bytes_written;
+-
+- return bytes_written;
++ return rec->bytes_written + rec->thread_bytes_written;
+ }
+
+ static bool record__output_max_size_exceeded(struct record *rec)
+@@ -255,10 +249,12 @@ static int record__write(struct record *rec, struct mmap *map __maybe_unused,
+ return -1;
+ }
+
+- if (map && map->file)
++ if (map && map->file) {
+ thread->bytes_written += size;
+- else
++ rec->thread_bytes_written += size;
++ } else {
+ rec->bytes_written += size;
++ }
+
+ if (record__output_max_size_exceeded(rec) && !done) {
+ fprintf(stderr, "[ perf record: perf size limit reached (%" PRIu64 " KB),"
+diff --git a/tools/perf/perf-completion.sh b/tools/perf/perf-completion.sh
+index fdf75d45efff7..978249d7868c2 100644
+--- a/tools/perf/perf-completion.sh
++++ b/tools/perf/perf-completion.sh
+@@ -165,7 +165,12 @@ __perf_main ()
+
+ local cur1=${COMP_WORDS[COMP_CWORD]}
+ local raw_evts=$($cmd list --raw-dump)
+- local arr s tmp result
++ local arr s tmp result cpu_evts
++
++ # aarch64 doesn't have /sys/bus/event_source/devices/cpu/events
++ if [[ `uname -m` != aarch64 ]]; then
++ cpu_evts=$(ls /sys/bus/event_source/devices/cpu/events)
++ fi
+
+ if [[ "$cur1" == */* && ${cur1#*/} =~ ^[A-Z] ]]; then
+ OLD_IFS="$IFS"
+@@ -183,9 +188,9 @@ __perf_main ()
+ fi
+ done
+
+- evts=${result}" "$(ls /sys/bus/event_source/devices/cpu/events)
++ evts=${result}" "${cpu_evts}
+ else
+- evts=${raw_evts}" "$(ls /sys/bus/event_source/devices/cpu/events)
++ evts=${raw_evts}" "${cpu_evts}
+ fi
+
+ if [[ "$cur1" == , ]]; then
+diff --git a/tools/perf/pmu-events/metric_test.py b/tools/perf/pmu-events/metric_test.py
+index 15315d0f716ca..6980f452df0ad 100644
+--- a/tools/perf/pmu-events/metric_test.py
++++ b/tools/perf/pmu-events/metric_test.py
+@@ -87,8 +87,8 @@ class TestMetricExpressions(unittest.TestCase):
+ after = r'min((a + b if c > 1 else c + d), e + f)'
+ self.assertEqual(ParsePerfJson(before).ToPerfJson(), after)
+
+- before =3D r'a if b else c if d else e'
+- after =3D r'(a if b else (c if d else e))'
++ before = r'a if b else c if d else e'
++ after = r'(a if b else (c if d else e))'
+ self.assertEqual(ParsePerfJson(before).ToPerfJson(), after)
+
+ def test_ToPython(self):
+diff --git a/tools/perf/tests/bpf.c b/tools/perf/tests/bpf.c
+index 17c023823713d..6a4235a9cf57e 100644
+--- a/tools/perf/tests/bpf.c
++++ b/tools/perf/tests/bpf.c
+@@ -126,6 +126,10 @@ static int do_test(struct bpf_object *obj, int (*func)(void),
+
+ err = parse_events_load_bpf_obj(&parse_state, &parse_state.list, obj, NULL);
+ parse_events_error__exit(&parse_error);
++ if (err == -ENODATA) {
++ pr_debug("Failed to add events selected by BPF, debuginfo package not installed\n");
++ return TEST_SKIP;
++ }
+ if (err || list_empty(&parse_state.list)) {
+ pr_debug("Failed to add events selected by BPF\n");
+ return TEST_FAIL;
+@@ -368,7 +372,7 @@ static struct test_case bpf_tests[] = {
+ "clang isn't installed or environment missing BPF support"),
+ #ifdef HAVE_BPF_PROLOGUE
+ TEST_CASE_REASON("BPF prologue generation", bpf_prologue_test,
+- "clang isn't installed or environment missing BPF support"),
++ "clang/debuginfo isn't installed or environment missing BPF support"),
+ #else
+ TEST_CASE_REASON("BPF prologue generation", bpf_prologue_test, "not compiled in"),
+ #endif
+diff --git a/tools/perf/tests/shell/stat_all_metrics.sh b/tools/perf/tests/shell/stat_all_metrics.sh
+index 6e79349e42bef..22e9cb294b40e 100755
+--- a/tools/perf/tests/shell/stat_all_metrics.sh
++++ b/tools/perf/tests/shell/stat_all_metrics.sh
+@@ -11,7 +11,7 @@ for m in $(perf list --raw-dump metrics); do
+ continue
+ fi
+ # Failed so try system wide.
+- result=$(perf stat -M "$m" -a true 2>&1)
++ result=$(perf stat -M "$m" -a sleep 0.01 2>&1)
+ if [[ "$result" =~ "${m:0:50}" ]]
+ then
+ continue
+diff --git a/tools/perf/util/auxtrace.c b/tools/perf/util/auxtrace.c
+index c2e323cd7d496..d4b04fa07a119 100644
+--- a/tools/perf/util/auxtrace.c
++++ b/tools/perf/util/auxtrace.c
+@@ -1133,6 +1133,9 @@ int auxtrace_queue_data(struct perf_session *session, bool samples, bool events)
+ if (auxtrace__dont_decode(session))
+ return 0;
+
++ if (perf_data__is_pipe(session->data))
++ return 0;
++
+ if (!session->auxtrace || !session->auxtrace->queue_data)
+ return -EINVAL;
+
+diff --git a/tools/perf/util/intel-pt.c b/tools/perf/util/intel-pt.c
+index 6d3921627e332..b8b29756fbf13 100644
+--- a/tools/perf/util/intel-pt.c
++++ b/tools/perf/util/intel-pt.c
+@@ -4379,6 +4379,12 @@ int intel_pt_process_auxtrace_info(union perf_event *event,
+
+ intel_pt_setup_pebs_events(pt);
+
++ if (perf_data__is_pipe(session->data)) {
++ pr_warning("WARNING: Intel PT with pipe mode is not recommended.\n"
++			   "         The output cannot be relied upon. In particular,\n"
++ " timestamps and the order of events may be incorrect.\n");
++ }
++
+ if (pt->sampling_mode || list_empty(&session->auxtrace_index))
+ err = auxtrace_queue_data(session, true, true);
+ else
+diff --git a/tools/perf/util/llvm-utils.c b/tools/perf/util/llvm-utils.c
+index 650ffe336f3aa..4e8e243a6e4bd 100644
+--- a/tools/perf/util/llvm-utils.c
++++ b/tools/perf/util/llvm-utils.c
+@@ -531,14 +531,37 @@ int llvm__compile_bpf(const char *path, void **p_obj_buf,
+
+ pr_debug("llvm compiling command template: %s\n", template);
+
++ /*
++ * Below, substitute control characters for values that can cause the
++ * echo to misbehave, then substitute the values back.
++ */
+ err = -ENOMEM;
+- if (asprintf(&command_echo, "echo -n \"%s\"", template) < 0)
++ if (asprintf(&command_echo, "echo -n \a%s\a", template) < 0)
+ goto errout;
+
++#define SWAP_CHAR(a, b) do { if (*p == a) *p = b; } while (0)
++ for (char *p = command_echo; *p; p++) {
++ SWAP_CHAR('<', '\001');
++ SWAP_CHAR('>', '\002');
++ SWAP_CHAR('"', '\003');
++ SWAP_CHAR('\'', '\004');
++ SWAP_CHAR('|', '\005');
++ SWAP_CHAR('&', '\006');
++ SWAP_CHAR('\a', '"');
++ }
+ err = read_from_pipe(command_echo, (void **) &command_out, NULL);
+ if (err)
+ goto errout;
+
++ for (char *p = command_out; *p; p++) {
++ SWAP_CHAR('\001', '<');
++ SWAP_CHAR('\002', '>');
++ SWAP_CHAR('\003', '"');
++ SWAP_CHAR('\004', '\'');
++ SWAP_CHAR('\005', '|');
++ SWAP_CHAR('\006', '&');
++ }
++#undef SWAP_CHAR
+ pr_debug("llvm compiling command : %s\n", command_out);
+
+ err = read_from_pipe(template, &obj_buf, &obj_buf_sz);
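
The substitution above works because the chosen control characters cannot
occur in the command template, so hiding the shell metacharacters before
the echo and restoring them afterwards round-trips the string exactly. A
standalone round-trip check (the template string is hypothetical, not the
perf code):

  #include <assert.h>
  #include <stdio.h>
  #include <string.h>

  #define SWAP_CHAR(a, b) do { if (*p == (a)) *p = (b); } while (0)

  int main(void)
  {
          char buf[] = "clang -target bpf <opts> | tee";  /* hypothetical */
          char orig[sizeof(buf)];

          strcpy(orig, buf);

          for (char *p = buf; *p; p++) {  /* hide shell metacharacters */
                  SWAP_CHAR('<', '\001');
                  SWAP_CHAR('>', '\002');
                  SWAP_CHAR('|', '\005');
          }
          assert(strchr(buf, '<') == NULL);

          for (char *p = buf; *p; p++) {  /* restore them afterwards */
                  SWAP_CHAR('\001', '<');
                  SWAP_CHAR('\002', '>');
                  SWAP_CHAR('\005', '|');
          }
          assert(strcmp(buf, orig) == 0);
          printf("round-trip ok: %s\n", buf);
          return 0;
  }
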
+diff --git a/tools/perf/util/stat-display.c b/tools/perf/util/stat-display.c
+index 8bd8b0142630c..1b5cb20efd237 100644
+--- a/tools/perf/util/stat-display.c
++++ b/tools/perf/util/stat-display.c
+@@ -787,6 +787,51 @@ static void uniquify_counter(struct perf_stat_config *config, struct evsel *coun
+ uniquify_event_name(counter);
+ }
+
++/**
++ * should_skip_zero_counter() - Check if the event should print 0 values.
++ * @config: The perf stat configuration (including aggregation mode).
++ * @counter: The evsel with its associated cpumap.
++ * @id: The aggregation id that is being queried.
++ *
++ * Due to a mismatch between the event cpumap or thread-map and the
++ * aggregation mode, the counter is sometimes iterated with a map
++ * that does not contain any value.
++ *
++ * For example, uncore events have dedicated CPUs to manage them;
++ * the result for other CPUs should be zero and is skipped.
++ *
++ * Return: %true if the value should NOT be printed, %false if the value
++ * needs to be printed, e.g. as "<not counted>" or "<not supported>".
++ */
++static bool should_skip_zero_counter(struct perf_stat_config *config,
++ struct evsel *counter,
++ const struct aggr_cpu_id *id)
++{
++ struct perf_cpu cpu;
++ int idx;
++
++ /*
++	 * Skip value 0 when --per-thread is enabled globally,
++	 * otherwise there would be too much zero output.
++ */
++ if (config->aggr_mode == AGGR_THREAD && config->system_wide)
++ return true;
++ /*
++ * Skip value 0 when it's an uncore event and the given aggr id
++ * does not belong to the PMU cpumask.
++ */
++ if (!counter->pmu || !counter->pmu->is_uncore)
++ return false;
++
++ perf_cpu_map__for_each_cpu(cpu, idx, counter->pmu->cpus) {
++ struct aggr_cpu_id own_id = config->aggr_get_id(config, cpu);
++
++ if (aggr_cpu_id__equal(id, &own_id))
++ return false;
++ }
++ return true;
++}
++
+ static void print_counter_aggrdata(struct perf_stat_config *config,
+ struct evsel *counter, int s,
+ struct outstate *os)
+@@ -814,11 +859,7 @@ static void print_counter_aggrdata(struct perf_stat_config *config,
+ ena = aggr->counts.ena;
+ run = aggr->counts.run;
+
+- /*
+- * Skip value 0 when enabling --per-thread globally, otherwise it will
+- * have too many 0 output.
+- */
+- if (val == 0 && config->aggr_mode == AGGR_THREAD && config->system_wide)
++ if (val == 0 && should_skip_zero_counter(config, counter, &id))
+ return;
+
+ if (!metric_only) {
+diff --git a/tools/perf/util/stat-shadow.c b/tools/perf/util/stat-shadow.c
+index cadb2df23c878..4cd05d9205e3b 100644
+--- a/tools/perf/util/stat-shadow.c
++++ b/tools/perf/util/stat-shadow.c
+@@ -311,7 +311,7 @@ void perf_stat__update_shadow_stats(struct evsel *counter, u64 count,
+ update_stats(&v->stats, count);
+ if (counter->metric_leader)
+ v->metric_total += count;
+- } else if (counter->metric_leader) {
++ } else if (counter->metric_leader && !counter->merged_stat) {
+ v = saved_value_lookup(counter->metric_leader,
+ map_idx, true, STAT_NONE, 0, st, rsd.cgrp);
+ v->metric_total += count;
+diff --git a/tools/power/x86/intel-speed-select/isst-config.c b/tools/power/x86/intel-speed-select/isst-config.c
+index a160bad291eb7..be3668d37d654 100644
+--- a/tools/power/x86/intel-speed-select/isst-config.c
++++ b/tools/power/x86/intel-speed-select/isst-config.c
+@@ -110,7 +110,7 @@ int is_skx_based_platform(void)
+
+ int is_spr_platform(void)
+ {
+- if (cpu_model == 0x8F)
++ if (cpu_model == 0x8F || cpu_model == 0xCF)
+ return 1;
+
+ return 0;
+diff --git a/tools/testing/ktest/ktest.pl b/tools/testing/ktest/ktest.pl
+index ac59999ed3ded..822794ca40292 100755
+--- a/tools/testing/ktest/ktest.pl
++++ b/tools/testing/ktest/ktest.pl
+@@ -178,6 +178,7 @@ my $store_failures;
+ my $store_successes;
+ my $test_name;
+ my $timeout;
++my $run_timeout;
+ my $connect_timeout;
+ my $config_bisect_exec;
+ my $booted_timeout;
+@@ -340,6 +341,7 @@ my %option_map = (
+ "STORE_SUCCESSES" => \$store_successes,
+ "TEST_NAME" => \$test_name,
+ "TIMEOUT" => \$timeout,
++ "RUN_TIMEOUT" => \$run_timeout,
+ "CONNECT_TIMEOUT" => \$connect_timeout,
+ "CONFIG_BISECT_EXEC" => \$config_bisect_exec,
+ "BOOTED_TIMEOUT" => \$booted_timeout,
+@@ -1495,7 +1497,8 @@ sub reboot {
+
+ # Still need to wait for the reboot to finish
+ wait_for_monitor($time, $reboot_success_line);
+-
++ }
++ if ($powercycle || $time) {
+ end_monitor;
+ }
+ }
+@@ -1857,6 +1860,14 @@ sub run_command {
+ $command =~ s/\$SSH_USER/$ssh_user/g;
+ $command =~ s/\$MACHINE/$machine/g;
+
++ if (!defined($timeout)) {
++ $timeout = $run_timeout;
++ }
++
++ if (!defined($timeout)) {
++ $timeout = -1; # tell wait_for_input to wait indefinitely
++ }
++
+ doprint("$command ... ");
+ $start_time = time;
+
+@@ -1883,13 +1894,10 @@ sub run_command {
+
+ while (1) {
+ my $fp = \*CMD;
+- if (defined($timeout)) {
+- doprint "timeout = $timeout\n";
+- }
+ my $line = wait_for_input($fp, $timeout);
+ if (!defined($line)) {
+ my $now = time;
+- if (defined($timeout) && (($now - $start_time) >= $timeout)) {
++ if ($timeout >= 0 && (($now - $start_time) >= $timeout)) {
+ doprint "Hit timeout of $timeout, killing process\n";
+ $hit_timeout = 1;
+ kill 9, $pid;
+@@ -2061,6 +2069,11 @@ sub wait_for_input {
+ $time = $timeout;
+ }
+
++ if ($time < 0) {
++ # Negative number means wait indefinitely
++ undef $time;
++ }
++
+ $rin = '';
+ vec($rin, fileno($fp), 1) = 1;
+ vec($rin, fileno(\*STDIN), 1) = 1;
+@@ -4200,6 +4213,9 @@ sub send_email {
+ }
+
+ sub cancel_test {
++ if ($monitor_cnt) {
++ end_monitor;
++ }
+ if ($email_when_canceled) {
+ my $name = get_test_name;
+ send_email("KTEST: Your [$name] test was cancelled",
+diff --git a/tools/testing/ktest/sample.conf b/tools/testing/ktest/sample.conf
+index 2d0fe15a096dd..f43477a9b8574 100644
+--- a/tools/testing/ktest/sample.conf
++++ b/tools/testing/ktest/sample.conf
+@@ -817,6 +817,11 @@
+ # is issued instead of a reboot.
+ # CONNECT_TIMEOUT = 25
+
++# The timeout in seconds for how long to wait for any running command
++# to complete. If not defined, commands are allowed to run indefinitely.
++# (default undefined)
++#RUN_TIMEOUT = 600
++
+ # In between tests, a reboot of the box may occur, and this
+ # is the time to wait for the console after it stops producing
+ # output. Some machines may not produce a large lag on reboot
+diff --git a/tools/testing/selftests/Makefile b/tools/testing/selftests/Makefile
+index 41b649452560c..06578963f4f1d 100644
+--- a/tools/testing/selftests/Makefile
++++ b/tools/testing/selftests/Makefile
+@@ -236,8 +236,8 @@ ifdef INSTALL_PATH
+ @# included in the generated runlist.
+ for TARGET in $(TARGETS); do \
+ BUILD_TARGET=$$BUILD/$$TARGET; \
+- [ ! -d $(INSTALL_PATH)/$$TARGET ] && echo "Skipping non-existent dir: $$TARGET" && continue; \
+- echo -ne "Emit Tests for $$TARGET\n"; \
++ [ ! -d $(INSTALL_PATH)/$$TARGET ] && printf "Skipping non-existent dir: $$TARGET\n" && continue; \
++ printf "Emit Tests for $$TARGET\n"; \
+ $(MAKE) -s --no-print-directory OUTPUT=$$BUILD_TARGET COLLECTION=$$TARGET \
+ -C $$TARGET emit_tests >> $(TEST_LIST); \
+ done;
+diff --git a/tools/testing/selftests/arm64/abi/syscall-abi.c b/tools/testing/selftests/arm64/abi/syscall-abi.c
+index dd7ebe536d05f..ffe719b50c215 100644
+--- a/tools/testing/selftests/arm64/abi/syscall-abi.c
++++ b/tools/testing/selftests/arm64/abi/syscall-abi.c
+@@ -390,6 +390,10 @@ static void test_one_syscall(struct syscall_cfg *cfg)
+
+ sme_vl &= PR_SME_VL_LEN_MASK;
+
++ /* Found lowest VL */
++ if (sve_vq_from_vl(sme_vl) > sme_vq)
++ break;
++
+ if (sme_vq != sve_vq_from_vl(sme_vl))
+ sme_vq = sve_vq_from_vl(sme_vl);
+
+@@ -461,6 +465,10 @@ int sme_count_vls(void)
+
+ vl &= PR_SME_VL_LEN_MASK;
+
++ /* Found lowest VL */
++ if (sve_vq_from_vl(vl) > vq)
++ break;
++
+ if (vq != sve_vq_from_vl(vl))
+ vq = sve_vq_from_vl(vl);
+
+diff --git a/tools/testing/selftests/arm64/fp/Makefile b/tools/testing/selftests/arm64/fp/Makefile
+index 36db61358ed5b..932ec8792316d 100644
+--- a/tools/testing/selftests/arm64/fp/Makefile
++++ b/tools/testing/selftests/arm64/fp/Makefile
+@@ -3,7 +3,7 @@
+ # A proper top_srcdir is needed by KSFT(lib.mk)
+ top_srcdir = $(realpath ../../../../../)
+
+-CFLAGS += -I$(top_srcdir)/usr/include/
++CFLAGS += $(KHDR_INCLUDES)
+
+ TEST_GEN_PROGS := fp-stress \
+ sve-ptrace sve-probe-vls \
+diff --git a/tools/testing/selftests/arm64/signal/testcases/ssve_regs.c b/tools/testing/selftests/arm64/signal/testcases/ssve_regs.c
+index d0a178945b1a8..c6b17c47cac4c 100644
+--- a/tools/testing/selftests/arm64/signal/testcases/ssve_regs.c
++++ b/tools/testing/selftests/arm64/signal/testcases/ssve_regs.c
+@@ -34,6 +34,10 @@ static bool sme_get_vls(struct tdescr *td)
+
+ vl &= PR_SME_VL_LEN_MASK;
+
++ /* Did we find the lowest supported VL? */
++ if (vq < sve_vq_from_vl(vl))
++ break;
++
+ /* Skip missing VLs */
+ vq = sve_vq_from_vl(vl);
+
+diff --git a/tools/testing/selftests/arm64/signal/testcases/za_regs.c b/tools/testing/selftests/arm64/signal/testcases/za_regs.c
+index ea45acb115d5b..174ad66566964 100644
+--- a/tools/testing/selftests/arm64/signal/testcases/za_regs.c
++++ b/tools/testing/selftests/arm64/signal/testcases/za_regs.c
+@@ -34,6 +34,10 @@ static bool sme_get_vls(struct tdescr *td)
+
+ vl &= PR_SME_VL_LEN_MASK;
+
++ /* Did we find the lowest supported VL? */
++ if (vq < sve_vq_from_vl(vl))
++ break;
++
+ /* Skip missing VLs */
+ vq = sve_vq_from_vl(vl);
+
+diff --git a/tools/testing/selftests/arm64/tags/Makefile b/tools/testing/selftests/arm64/tags/Makefile
+index 41cb750705117..6d29cfde43a21 100644
+--- a/tools/testing/selftests/arm64/tags/Makefile
++++ b/tools/testing/selftests/arm64/tags/Makefile
+@@ -1,6 +1,6 @@
+ # SPDX-License-Identifier: GPL-2.0
+
+-CFLAGS += -I../../../../../usr/include/
++CFLAGS += $(KHDR_INCLUDES)
+ TEST_GEN_PROGS := tags_test
+ TEST_PROGS := run_tags_test.sh
+
+diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
+index c22c43bbee194..43c559b7729b5 100644
+--- a/tools/testing/selftests/bpf/Makefile
++++ b/tools/testing/selftests/bpf/Makefile
+@@ -149,8 +149,6 @@ endif
+ # NOTE: Semicolon at the end is critical to override lib.mk's default static
+ # rule for binaries.
+ $(notdir $(TEST_GEN_PROGS) \
+- $(TEST_PROGS) \
+- $(TEST_PROGS_EXTENDED) \
+ $(TEST_GEN_PROGS_EXTENDED) \
+ $(TEST_CUSTOM_PROGS)): %: $(OUTPUT)/% ;
+
+@@ -181,14 +179,15 @@ endif
+ # do not fail. Static builds leave urandom_read relying on system-wide shared libraries.
+ $(OUTPUT)/liburandom_read.so: urandom_read_lib1.c urandom_read_lib2.c
+ $(call msg,LIB,,$@)
+- $(Q)$(CLANG) $(filter-out -static,$(CFLAGS) $(LDFLAGS)) $^ $(LDLIBS) \
++ $(Q)$(CLANG) $(filter-out -static,$(CFLAGS) $(LDFLAGS)) \
++ $^ $(filter-out -static,$(LDLIBS)) \
+ -fuse-ld=$(LLD) -Wl,-znoseparate-code -Wl,--build-id=sha1 \
+ -fPIC -shared -o $@
+
+ $(OUTPUT)/urandom_read: urandom_read.c urandom_read_aux.c $(OUTPUT)/liburandom_read.so
+ $(call msg,BINARY,,$@)
+ $(Q)$(CLANG) $(filter-out -static,$(CFLAGS) $(LDFLAGS)) $(filter %.c,$^) \
+- liburandom_read.so $(LDLIBS) \
++ liburandom_read.so $(filter-out -static,$(LDLIBS)) \
+ -fuse-ld=$(LLD) -Wl,-znoseparate-code -Wl,--build-id=sha1 \
+ -Wl,-rpath=. -o $@
+
+diff --git a/tools/testing/selftests/bpf/prog_tests/kfunc_dynptr_param.c b/tools/testing/selftests/bpf/prog_tests/kfunc_dynptr_param.c
+index a9229260a6cec..72800b1e8395a 100644
+--- a/tools/testing/selftests/bpf/prog_tests/kfunc_dynptr_param.c
++++ b/tools/testing/selftests/bpf/prog_tests/kfunc_dynptr_param.c
+@@ -18,7 +18,7 @@ static struct {
+ const char *expected_verifier_err_msg;
+ int expected_runtime_err;
+ } kfunc_dynptr_tests[] = {
+- {"not_valid_dynptr", "Expected an initialized dynptr as arg #1", 0},
++ {"not_valid_dynptr", "cannot pass in dynptr at an offset=-8", 0},
+ {"not_ptr_to_stack", "arg#0 expected pointer to stack or dynptr_ptr", 0},
+ {"dynptr_data_null", NULL, -EBADMSG},
+ };
+diff --git a/tools/testing/selftests/bpf/prog_tests/xdp_do_redirect.c b/tools/testing/selftests/bpf/prog_tests/xdp_do_redirect.c
+index a50971c6cf4a5..ac70e871d62f8 100644
+--- a/tools/testing/selftests/bpf/prog_tests/xdp_do_redirect.c
++++ b/tools/testing/selftests/bpf/prog_tests/xdp_do_redirect.c
+@@ -65,7 +65,11 @@ static int attach_tc_prog(struct bpf_tc_hook *hook, int fd)
+ /* The maximum permissible size is: PAGE_SIZE - sizeof(struct xdp_page_head) -
+ * sizeof(struct skb_shared_info) - XDP_PACKET_HEADROOM = 3368 bytes
+ */
++#if defined(__s390x__)
++#define MAX_PKT_SIZE 3176
++#else
+ #define MAX_PKT_SIZE 3368
++#endif
+ static void test_max_pkt_size(int fd)
+ {
+ char data[MAX_PKT_SIZE + 1] = {};
+diff --git a/tools/testing/selftests/bpf/progs/dynptr_fail.c b/tools/testing/selftests/bpf/progs/dynptr_fail.c
+index 78debc1b38207..9dc3f23a82707 100644
+--- a/tools/testing/selftests/bpf/progs/dynptr_fail.c
++++ b/tools/testing/selftests/bpf/progs/dynptr_fail.c
+@@ -382,7 +382,7 @@ int invalid_helper1(void *ctx)
+
+ /* A dynptr can't be passed into a helper function at a non-zero offset */
+ SEC("?raw_tp")
+-__failure __msg("Expected an initialized dynptr as arg #3")
++__failure __msg("cannot pass in dynptr at an offset=-8")
+ int invalid_helper2(void *ctx)
+ {
+ struct bpf_dynptr ptr;
+@@ -420,7 +420,7 @@ int invalid_write1(void *ctx)
+ * offset
+ */
+ SEC("?raw_tp")
+-__failure __msg("Expected an initialized dynptr as arg #3")
++__failure __msg("cannot overwrite referenced dynptr")
+ int invalid_write2(void *ctx)
+ {
+ struct bpf_dynptr ptr;
+@@ -444,7 +444,7 @@ int invalid_write2(void *ctx)
+ * non-const offset
+ */
+ SEC("?raw_tp")
+-__failure __msg("Expected an initialized dynptr as arg #1")
++__failure __msg("cannot overwrite referenced dynptr")
+ int invalid_write3(void *ctx)
+ {
+ struct bpf_dynptr ptr;
+@@ -476,7 +476,7 @@ static int invalid_write4_callback(__u32 index, void *data)
+ * be invalidated as a dynptr
+ */
+ SEC("?raw_tp")
+-__failure __msg("arg 1 is an unacquired reference")
++__failure __msg("cannot overwrite referenced dynptr")
+ int invalid_write4(void *ctx)
+ {
+ struct bpf_dynptr ptr;
+@@ -584,7 +584,7 @@ int invalid_read4(void *ctx)
+
+ /* Initializing a dynptr on an offset should fail */
+ SEC("?raw_tp")
+-__failure __msg("invalid write to stack")
++__failure __msg("cannot pass in dynptr at an offset=0")
+ int invalid_offset(void *ctx)
+ {
+ struct bpf_dynptr ptr;
+diff --git a/tools/testing/selftests/bpf/progs/map_kptr.c b/tools/testing/selftests/bpf/progs/map_kptr.c
+index eb82178034934..228ec45365a8d 100644
+--- a/tools/testing/selftests/bpf/progs/map_kptr.c
++++ b/tools/testing/selftests/bpf/progs/map_kptr.c
+@@ -62,21 +62,23 @@ extern struct prog_test_ref_kfunc *
+ bpf_kfunc_call_test_kptr_get(struct prog_test_ref_kfunc **p, int a, int b) __ksym;
+ extern void bpf_kfunc_call_test_release(struct prog_test_ref_kfunc *p) __ksym;
+
++#define WRITE_ONCE(x, val) ((*(volatile typeof(x) *) &(x)) = (val))
++
+ static void test_kptr_unref(struct map_value *v)
+ {
+ struct prog_test_ref_kfunc *p;
+
+ p = v->unref_ptr;
+ /* store untrusted_ptr_or_null_ */
+- v->unref_ptr = p;
++ WRITE_ONCE(v->unref_ptr, p);
+ if (!p)
+ return;
+ if (p->a + p->b > 100)
+ return;
+ /* store untrusted_ptr_ */
+- v->unref_ptr = p;
++ WRITE_ONCE(v->unref_ptr, p);
+ /* store NULL */
+- v->unref_ptr = NULL;
++ WRITE_ONCE(v->unref_ptr, NULL);
+ }
+
+ static void test_kptr_ref(struct map_value *v)
+@@ -85,7 +87,7 @@ static void test_kptr_ref(struct map_value *v)
+
+ p = v->ref_ptr;
+ /* store ptr_or_null_ */
+- v->unref_ptr = p;
++ WRITE_ONCE(v->unref_ptr, p);
+ if (!p)
+ return;
+ if (p->a + p->b > 100)
+@@ -99,7 +101,7 @@ static void test_kptr_ref(struct map_value *v)
+ return;
+ }
+ /* store ptr_ */
+- v->unref_ptr = p;
++ WRITE_ONCE(v->unref_ptr, p);
+ bpf_kfunc_call_test_release(p);
+
+ p = bpf_kfunc_call_test_acquire(&(unsigned long){0});
+diff --git a/tools/testing/selftests/bpf/progs/test_bpf_nf.c b/tools/testing/selftests/bpf/progs/test_bpf_nf.c
+index 227e85e85ddaf..9fc603c9d673e 100644
+--- a/tools/testing/selftests/bpf/progs/test_bpf_nf.c
++++ b/tools/testing/selftests/bpf/progs/test_bpf_nf.c
+@@ -34,6 +34,11 @@ __be16 dport = 0;
+ int test_exist_lookup = -ENOENT;
+ u32 test_exist_lookup_mark = 0;
+
++enum nf_nat_manip_type___local {
++ NF_NAT_MANIP_SRC___local,
++ NF_NAT_MANIP_DST___local
++};
++
+ struct nf_conn;
+
+ struct bpf_ct_opts___local {
+@@ -58,7 +63,7 @@ int bpf_ct_change_timeout(struct nf_conn *, u32) __ksym;
+ int bpf_ct_set_status(struct nf_conn *, u32) __ksym;
+ int bpf_ct_change_status(struct nf_conn *, u32) __ksym;
+ int bpf_ct_set_nat_info(struct nf_conn *, union nf_inet_addr *,
+- int port, enum nf_nat_manip_type) __ksym;
++ int port, enum nf_nat_manip_type___local) __ksym;
+
+ static __always_inline void
+ nf_ct_test(struct nf_conn *(*lookup_fn)(void *, struct bpf_sock_tuple *, u32,
+@@ -157,10 +162,10 @@ nf_ct_test(struct nf_conn *(*lookup_fn)(void *, struct bpf_sock_tuple *, u32,
+
+ /* snat */
+ saddr.ip = bpf_get_prandom_u32();
+- bpf_ct_set_nat_info(ct, &saddr, sport, NF_NAT_MANIP_SRC);
++ bpf_ct_set_nat_info(ct, &saddr, sport, NF_NAT_MANIP_SRC___local);
+ /* dnat */
+ daddr.ip = bpf_get_prandom_u32();
+- bpf_ct_set_nat_info(ct, &daddr, dport, NF_NAT_MANIP_DST);
++ bpf_ct_set_nat_info(ct, &daddr, dport, NF_NAT_MANIP_DST___local);
+
+ ct_ins = bpf_ct_insert_entry(ct);
+ if (ct_ins) {
+diff --git a/tools/testing/selftests/bpf/xdp_synproxy.c b/tools/testing/selftests/bpf/xdp_synproxy.c
+index 410a1385a01dd..6dbe0b7451985 100644
+--- a/tools/testing/selftests/bpf/xdp_synproxy.c
++++ b/tools/testing/selftests/bpf/xdp_synproxy.c
+@@ -116,6 +116,7 @@ static void parse_options(int argc, char *argv[], unsigned int *ifindex, __u32 *
+ *tcpipopts = 0;
+ *ports = NULL;
+ *single = false;
++ *tc = false;
+
+ while (true) {
+ int opt;
+diff --git a/tools/testing/selftests/bpf/xskxceiver.c b/tools/testing/selftests/bpf/xskxceiver.c
+index 162d3a516f2ca..1b9f48daa2257 100644
+--- a/tools/testing/selftests/bpf/xskxceiver.c
++++ b/tools/testing/selftests/bpf/xskxceiver.c
+@@ -350,7 +350,7 @@ static bool ifobj_zc_avail(struct ifobject *ifobject)
+ umem = calloc(1, sizeof(struct xsk_umem_info));
+ if (!umem) {
+ munmap(bufs, umem_sz);
+- exit_with_error(-ENOMEM);
++ exit_with_error(ENOMEM);
+ }
+ umem->frame_size = XSK_UMEM__DEFAULT_FRAME_SIZE;
+ ret = xsk_configure_umem(umem, bufs, umem_sz);
+@@ -767,7 +767,7 @@ static void pkt_dump(void *pkt, u32 len)
+ struct ethhdr *ethhdr;
+ struct udphdr *udphdr;
+ struct iphdr *iphdr;
+- int payload, i;
++ u32 payload, i;
+
+ ethhdr = pkt;
+ iphdr = pkt + sizeof(*ethhdr);
+@@ -792,7 +792,7 @@ static void pkt_dump(void *pkt, u32 len)
+ fprintf(stdout, "DEBUG>> L4: udp_hdr->src: %d\n", ntohs(udphdr->source));
+ fprintf(stdout, "DEBUG>> L4: udp_hdr->dst: %d\n", ntohs(udphdr->dest));
+ /*extract L5 frame */
+- payload = *((uint32_t *)(pkt + PKT_HDR_SIZE));
++ payload = ntohl(*((u32 *)(pkt + PKT_HDR_SIZE)));
+
+ fprintf(stdout, "DEBUG>> L5: payload: %d\n", payload);
+ fprintf(stdout, "---------------------------------------\n");
+@@ -936,7 +936,7 @@ static int receive_pkts(struct test_spec *test, struct pollfd *fds)
+ if (ifobj->use_poll) {
+ ret = poll(fds, 1, POLL_TMOUT);
+ if (ret < 0)
+- exit_with_error(-ret);
++ exit_with_error(errno);
+
+ if (!ret) {
+ if (!is_umem_valid(test->ifobj_tx))
+@@ -963,7 +963,7 @@ static int receive_pkts(struct test_spec *test, struct pollfd *fds)
+ if (xsk_ring_prod__needs_wakeup(&umem->fq)) {
+ ret = poll(fds, 1, POLL_TMOUT);
+ if (ret < 0)
+- exit_with_error(-ret);
++ exit_with_error(errno);
+ }
+ ret = xsk_ring_prod__reserve(&umem->fq, rcvd, &idx_fq);
+ }
+@@ -1015,7 +1015,7 @@ static int __send_pkts(struct ifobject *ifobject, u32 *pkt_nb, struct pollfd *fd
+ if (timeout) {
+ if (ret < 0) {
+ ksft_print_msg("ERROR: [%s] Poll error %d\n",
+- __func__, ret);
++ __func__, errno);
+ return TEST_FAILURE;
+ }
+ if (ret == 0)
+@@ -1024,7 +1024,7 @@ static int __send_pkts(struct ifobject *ifobject, u32 *pkt_nb, struct pollfd *fd
+ }
+ if (ret <= 0) {
+ ksft_print_msg("ERROR: [%s] Poll error %d\n",
+- __func__, ret);
++ __func__, errno);
+ return TEST_FAILURE;
+ }
+ }
+@@ -1323,18 +1323,18 @@ static void thread_common_ops(struct test_spec *test, struct ifobject *ifobject)
+ if (ifobject->xdp_flags & XDP_FLAGS_SKB_MODE) {
+ if (opts.attach_mode != XDP_ATTACHED_SKB) {
+ ksft_print_msg("ERROR: [%s] XDP prog not in SKB mode\n");
+- exit_with_error(-EINVAL);
++ exit_with_error(EINVAL);
+ }
+ } else if (ifobject->xdp_flags & XDP_FLAGS_DRV_MODE) {
+ if (opts.attach_mode != XDP_ATTACHED_DRV) {
+ ksft_print_msg("ERROR: [%s] XDP prog not in DRV mode\n");
+- exit_with_error(-EINVAL);
++ exit_with_error(EINVAL);
+ }
+ }
+
+ ret = xsk_socket__update_xskmap(ifobject->xsk->xsk, ifobject->xsk_map_fd);
+ if (ret)
+- exit_with_error(-ret);
++ exit_with_error(errno);
+ }
+
+ static void *worker_testapp_validate_tx(void *arg)
+@@ -1541,7 +1541,7 @@ static void swap_xsk_resources(struct ifobject *ifobj_tx, struct ifobject *ifobj
+
+ ret = xsk_socket__update_xskmap(ifobj_rx->xsk->xsk, ifobj_rx->xsk_map_fd);
+ if (ret)
+- exit_with_error(-ret);
++ exit_with_error(errno);
+ }
+
+ static void testapp_bpf_res(struct test_spec *test)
+diff --git a/tools/testing/selftests/clone3/Makefile b/tools/testing/selftests/clone3/Makefile
+index 79b19a2863a0b..84832c369a2ea 100644
+--- a/tools/testing/selftests/clone3/Makefile
++++ b/tools/testing/selftests/clone3/Makefile
+@@ -1,5 +1,5 @@
+ # SPDX-License-Identifier: GPL-2.0
+-CFLAGS += -g -std=gnu99 -I../../../../usr/include/
++CFLAGS += -g -std=gnu99 $(KHDR_INCLUDES)
+ LDLIBS += -lcap
+
+ TEST_GEN_PROGS := clone3 clone3_clear_sighand clone3_set_tid \
+diff --git a/tools/testing/selftests/core/Makefile b/tools/testing/selftests/core/Makefile
+index f6f2d6f473c6a..ce262d0972699 100644
+--- a/tools/testing/selftests/core/Makefile
++++ b/tools/testing/selftests/core/Makefile
+@@ -1,5 +1,5 @@
+ # SPDX-License-Identifier: GPL-2.0-only
+-CFLAGS += -g -I../../../../usr/include/
++CFLAGS += -g $(KHDR_INCLUDES)
+
+ TEST_GEN_PROGS := close_range_test
+
+diff --git a/tools/testing/selftests/dmabuf-heaps/Makefile b/tools/testing/selftests/dmabuf-heaps/Makefile
+index 604b43ece15f5..9e7e158d5fa32 100644
+--- a/tools/testing/selftests/dmabuf-heaps/Makefile
++++ b/tools/testing/selftests/dmabuf-heaps/Makefile
+@@ -1,5 +1,5 @@
+ # SPDX-License-Identifier: GPL-2.0
+-CFLAGS += -static -O3 -Wl,-no-as-needed -Wall
++CFLAGS += -static -O3 -Wl,-no-as-needed -Wall $(KHDR_INCLUDES)
+
+ TEST_GEN_PROGS = dmabuf-heap
+
+diff --git a/tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c b/tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c
+index 29af27acd40ea..890a8236a8ba7 100644
+--- a/tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c
++++ b/tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c
+@@ -13,10 +13,9 @@
+ #include <sys/types.h>
+
+ #include <linux/dma-buf.h>
++#include <linux/dma-heap.h>
+ #include <drm/drm.h>
+
+-#include "../../../../include/uapi/linux/dma-heap.h"
+-
+ #define DEVPATH "/dev/dma_heap"
+
+ static int check_vgem(int fd)
+diff --git a/tools/testing/selftests/drivers/dma-buf/Makefile b/tools/testing/selftests/drivers/dma-buf/Makefile
+index 79cb16b4e01a9..441407bb0e801 100644
+--- a/tools/testing/selftests/drivers/dma-buf/Makefile
++++ b/tools/testing/selftests/drivers/dma-buf/Makefile
+@@ -1,5 +1,5 @@
+ # SPDX-License-Identifier: GPL-2.0-only
+-CFLAGS += -I../../../../../usr/include/
++CFLAGS += $(KHDR_INCLUDES)
+
+ TEST_GEN_PROGS := udmabuf
+
+diff --git a/tools/testing/selftests/drivers/net/netdevsim/devlink.sh b/tools/testing/selftests/drivers/net/netdevsim/devlink.sh
+index a08c02abde121..7f7d20f222070 100755
+--- a/tools/testing/selftests/drivers/net/netdevsim/devlink.sh
++++ b/tools/testing/selftests/drivers/net/netdevsim/devlink.sh
+@@ -17,6 +17,18 @@ SYSFS_NET_DIR=/sys/bus/netdevsim/devices/$DEV_NAME/net/
+ DEBUGFS_DIR=/sys/kernel/debug/netdevsim/$DEV_NAME/
+ DL_HANDLE=netdevsim/$DEV_NAME
+
++wait_for_devlink()
++{
++ "$@" | grep -q $DL_HANDLE
++}
++
++devlink_wait()
++{
++ local timeout=$1
++
++ busywait "$timeout" wait_for_devlink devlink dev
++}
++
+ fw_flash_test()
+ {
+ RET=0
+@@ -256,6 +268,9 @@ netns_reload_test()
+ ip netns del testns2
+ ip netns del testns1
+
++ # Wait until netns async cleanup is done.
++ devlink_wait 2000
++
+ log_test "netns reload test"
+ }
+
+@@ -348,6 +363,9 @@ resource_test()
+ ip netns del testns2
+ ip netns del testns1
+
++ # Wait until netns async cleanup is done.
++ devlink_wait 2000
++
+ log_test "resource test"
+ }
+
+diff --git a/tools/testing/selftests/drivers/s390x/uvdevice/Makefile b/tools/testing/selftests/drivers/s390x/uvdevice/Makefile
+index 891215a7dc8a1..755d164384c46 100644
+--- a/tools/testing/selftests/drivers/s390x/uvdevice/Makefile
++++ b/tools/testing/selftests/drivers/s390x/uvdevice/Makefile
+@@ -11,10 +11,9 @@ else
+ TEST_GEN_PROGS := test_uvdevice
+
+ top_srcdir ?= ../../../../../..
+-khdr_dir = $(top_srcdir)/usr/include
+ LINUX_TOOL_ARCH_INCLUDE = $(top_srcdir)/tools/arch/$(ARCH)/include
+
+-CFLAGS += -Wall -Werror -static -I$(khdr_dir) -I$(LINUX_TOOL_ARCH_INCLUDE)
++CFLAGS += -Wall -Werror -static $(KHDR_INCLUDES) -I$(LINUX_TOOL_ARCH_INCLUDE)
+
+ include ../../../lib.mk
+
+diff --git a/tools/testing/selftests/filesystems/Makefile b/tools/testing/selftests/filesystems/Makefile
+index 129880fb42d34..c647fd6a0446a 100644
+--- a/tools/testing/selftests/filesystems/Makefile
++++ b/tools/testing/selftests/filesystems/Makefile
+@@ -1,6 +1,6 @@
+ # SPDX-License-Identifier: GPL-2.0
+
+-CFLAGS += -I../../../../usr/include/
++CFLAGS += $(KHDR_INCLUDES)
+ TEST_GEN_PROGS := devpts_pts
+ TEST_GEN_PROGS_EXTENDED := dnotify_test
+
+diff --git a/tools/testing/selftests/filesystems/binderfs/Makefile b/tools/testing/selftests/filesystems/binderfs/Makefile
+index 8af25ae960498..c2f7cef919c04 100644
+--- a/tools/testing/selftests/filesystems/binderfs/Makefile
++++ b/tools/testing/selftests/filesystems/binderfs/Makefile
+@@ -1,6 +1,6 @@
+ # SPDX-License-Identifier: GPL-2.0
+
+-CFLAGS += -I../../../../../usr/include/ -pthread
++CFLAGS += $(KHDR_INCLUDES) -pthread
+ TEST_GEN_PROGS := binderfs_test
+
+ binderfs_test: binderfs_test.c ../../kselftest.h ../../kselftest_harness.h
+diff --git a/tools/testing/selftests/filesystems/epoll/Makefile b/tools/testing/selftests/filesystems/epoll/Makefile
+index 78ae4aaf7141a..0788a7dc80042 100644
+--- a/tools/testing/selftests/filesystems/epoll/Makefile
++++ b/tools/testing/selftests/filesystems/epoll/Makefile
+@@ -1,6 +1,6 @@
+ # SPDX-License-Identifier: GPL-2.0
+
+-CFLAGS += -I../../../../../usr/include/
++CFLAGS += $(KHDR_INCLUDES)
+ LDLIBS += -lpthread
+ TEST_GEN_PROGS := epoll_wakeup_test
+
+diff --git a/tools/testing/selftests/ftrace/test.d/dynevent/eprobes_syntax_errors.tc b/tools/testing/selftests/ftrace/test.d/dynevent/eprobes_syntax_errors.tc
+index fc1daac7f0668..4f5e8c6651562 100644
+--- a/tools/testing/selftests/ftrace/test.d/dynevent/eprobes_syntax_errors.tc
++++ b/tools/testing/selftests/ftrace/test.d/dynevent/eprobes_syntax_errors.tc
+@@ -22,6 +22,8 @@ check_error 'e:foo/^bar.1 syscalls/sys_enter_openat' # BAD_EVENT_NAME
+ check_error 'e:foo/bar syscalls/sys_enter_openat arg=^dfd' # BAD_FETCH_ARG
+ check_error 'e:foo/bar syscalls/sys_enter_openat ^arg=$foo' # BAD_ATTACH_ARG
+
+-check_error 'e:foo/bar syscalls/sys_enter_openat if ^' # NO_EP_FILTER
++if grep -q '<attached-group>\.<attached-event>.*\[if <filter>\]' README; then
++ check_error 'e:foo/bar syscalls/sys_enter_openat if ^' # NO_EP_FILTER
++fi
+
+ exit 0
+diff --git a/tools/testing/selftests/ftrace/test.d/ftrace/func_event_triggers.tc b/tools/testing/selftests/ftrace/test.d/ftrace/func_event_triggers.tc
+index 3eea2abf68f9e..2ad7d4b501cc1 100644
+--- a/tools/testing/selftests/ftrace/test.d/ftrace/func_event_triggers.tc
++++ b/tools/testing/selftests/ftrace/test.d/ftrace/func_event_triggers.tc
+@@ -42,7 +42,7 @@ test_event_enabled() {
+
+ while [ $check_times -ne 0 ]; do
+ e=`cat $EVENT_ENABLE`
+- if [ "$e" == $val ]; then
++ if [ "$e" = $val ]; then
+ return 0
+ fi
+ sleep $SLEEP_TIME
+diff --git a/tools/testing/selftests/ftrace/test.d/kprobe/probepoint.tc b/tools/testing/selftests/ftrace/test.d/kprobe/probepoint.tc
+index 624269c8d5343..68425987a5dd9 100644
+--- a/tools/testing/selftests/ftrace/test.d/kprobe/probepoint.tc
++++ b/tools/testing/selftests/ftrace/test.d/kprobe/probepoint.tc
+@@ -21,7 +21,7 @@ set_offs() { # prev target next
+
+ # We have to decode symbol addresses to get correct offsets.
+ # If the offset is not an instruction boundary, it cause -EILSEQ.
+-set_offs `grep -A1 -B1 ${TARGET_FUNC} /proc/kallsyms | cut -f 1 -d " " | xargs`
++set_offs `grep -v __pfx_ /proc/kallsyms | grep -A1 -B1 ${TARGET_FUNC} | cut -f 1 -d " " | xargs`
+
+ UINT_TEST=no
+ # printf "%x" -1 returns (unsigned long)-1.
+diff --git a/tools/testing/selftests/futex/functional/Makefile b/tools/testing/selftests/futex/functional/Makefile
+index 5a0e0df8de9b3..a392d0917b4e5 100644
+--- a/tools/testing/selftests/futex/functional/Makefile
++++ b/tools/testing/selftests/futex/functional/Makefile
+@@ -1,5 +1,5 @@
+ # SPDX-License-Identifier: GPL-2.0
+-INCLUDES := -I../include -I../../ -I../../../../../usr/include/
++INCLUDES := -I../include -I../../ $(KHDR_INCLUDES)
+ CFLAGS := $(CFLAGS) -g -O2 -Wall -D_GNU_SOURCE -pthread $(INCLUDES) $(KHDR_INCLUDES)
+ LDLIBS := -lpthread -lrt
+
+diff --git a/tools/testing/selftests/gpio/Makefile b/tools/testing/selftests/gpio/Makefile
+index 616ed40196554..e0884390447dc 100644
+--- a/tools/testing/selftests/gpio/Makefile
++++ b/tools/testing/selftests/gpio/Makefile
+@@ -3,6 +3,6 @@
+ TEST_PROGS := gpio-mockup.sh gpio-sim.sh
+ TEST_FILES := gpio-mockup-sysfs.sh
+ TEST_GEN_PROGS_EXTENDED := gpio-mockup-cdev gpio-chip-info gpio-line-name
+-CFLAGS += -O2 -g -Wall -I../../../../usr/include/ $(KHDR_INCLUDES)
++CFLAGS += -O2 -g -Wall $(KHDR_INCLUDES)
+
+ include ../lib.mk
+diff --git a/tools/testing/selftests/iommu/iommufd.c b/tools/testing/selftests/iommu/iommufd.c
+index 8aa8a346cf221..fa08209268c42 100644
+--- a/tools/testing/selftests/iommu/iommufd.c
++++ b/tools/testing/selftests/iommu/iommufd.c
+@@ -1259,7 +1259,7 @@ TEST_F(iommufd_mock_domain, user_copy)
+
+ test_cmd_destroy_access_pages(
+ access_cmd.id, access_cmd.access_pages.out_access_pages_id);
+- test_cmd_destroy_access(access_cmd.id) test_ioctl_destroy(ioas_id);
++ test_cmd_destroy_access(access_cmd.id);
+
+ test_ioctl_destroy(ioas_id);
+ }
+diff --git a/tools/testing/selftests/ipc/Makefile b/tools/testing/selftests/ipc/Makefile
+index 1c4448a843a41..50e9c299fc4ae 100644
+--- a/tools/testing/selftests/ipc/Makefile
++++ b/tools/testing/selftests/ipc/Makefile
+@@ -10,7 +10,7 @@ ifeq ($(ARCH),x86_64)
+ CFLAGS := -DCONFIG_X86_64 -D__x86_64__
+ endif
+
+-CFLAGS += -I../../../../usr/include/
++CFLAGS += $(KHDR_INCLUDES)
+
+ TEST_GEN_PROGS := msgque
+
+diff --git a/tools/testing/selftests/kcmp/Makefile b/tools/testing/selftests/kcmp/Makefile
+index b4d39f6b5124d..59a1e53790181 100644
+--- a/tools/testing/selftests/kcmp/Makefile
++++ b/tools/testing/selftests/kcmp/Makefile
+@@ -1,5 +1,5 @@
+ # SPDX-License-Identifier: GPL-2.0-only
+-CFLAGS += -I../../../../usr/include/
++CFLAGS += $(KHDR_INCLUDES)
+
+ TEST_GEN_PROGS := kcmp_test
+
+diff --git a/tools/testing/selftests/landlock/fs_test.c b/tools/testing/selftests/landlock/fs_test.c
+index d5dab986f6125..b6c4be3faf7a9 100644
+--- a/tools/testing/selftests/landlock/fs_test.c
++++ b/tools/testing/selftests/landlock/fs_test.c
+@@ -11,6 +11,7 @@
+ #include <fcntl.h>
+ #include <linux/landlock.h>
+ #include <sched.h>
++#include <stdio.h>
+ #include <string.h>
+ #include <sys/capability.h>
+ #include <sys/mount.h>
+@@ -89,6 +90,40 @@ static const char dir_s3d3[] = TMP_DIR "/s3d1/s3d2/s3d3";
+ * └── s3d3
+ */
+
++static bool fgrep(FILE *const inf, const char *const str)
++{
++ char line[32];
++ const int slen = strlen(str);
++
++ while (!feof(inf)) {
++ if (!fgets(line, sizeof(line), inf))
++ break;
++ if (strncmp(line, str, slen))
++ continue;
++
++ return true;
++ }
++
++ return false;
++}
++
++static bool supports_overlayfs(void)
++{
++ bool res;
++ FILE *const inf = fopen("/proc/filesystems", "r");
++
++ /*
++ * Consider that the filesystem is supported if we cannot get the
++ * supported ones.
++ */
++ if (!inf)
++ return true;
++
++ res = fgrep(inf, "nodev\toverlay\n");
++ fclose(inf);
++ return res;
++}
++
+ static void mkdir_parents(struct __test_metadata *const _metadata,
+ const char *const path)
+ {
+@@ -4001,6 +4036,9 @@ FIXTURE(layout2_overlay) {};
+
+ FIXTURE_SETUP(layout2_overlay)
+ {
++ if (!supports_overlayfs())
++ SKIP(return, "overlayfs is not supported");
++
+ prepare_layout(_metadata);
+
+ create_directory(_metadata, LOWER_BASE);
+@@ -4037,6 +4075,9 @@ FIXTURE_SETUP(layout2_overlay)
+
+ FIXTURE_TEARDOWN(layout2_overlay)
+ {
++ if (!supports_overlayfs())
++ SKIP(return, "overlayfs is not supported");
++
+ EXPECT_EQ(0, remove_path(lower_do1_fl3));
+ EXPECT_EQ(0, remove_path(lower_dl1_fl2));
+ EXPECT_EQ(0, remove_path(lower_fl1));
+@@ -4068,6 +4109,9 @@ FIXTURE_TEARDOWN(layout2_overlay)
+
+ TEST_F_FORK(layout2_overlay, no_restriction)
+ {
++ if (!supports_overlayfs())
++ SKIP(return, "overlayfs is not supported");
++
+ ASSERT_EQ(0, test_open(lower_fl1, O_RDONLY));
+ ASSERT_EQ(0, test_open(lower_dl1, O_RDONLY));
+ ASSERT_EQ(0, test_open(lower_dl1_fl2, O_RDONLY));
+@@ -4231,6 +4275,9 @@ TEST_F_FORK(layout2_overlay, same_content_different_file)
+ size_t i;
+ const char *path_entry;
+
++ if (!supports_overlayfs())
++ SKIP(return, "overlayfs is not supported");
++
+ /* Sets rules on base directories (i.e. outside overlay scope). */
+ ruleset_fd = create_ruleset(_metadata, ACCESS_RW, layer1_base);
+ ASSERT_LE(0, ruleset_fd);
+diff --git a/tools/testing/selftests/landlock/ptrace_test.c b/tools/testing/selftests/landlock/ptrace_test.c
+index c28ef98ff3ac1..55e7871631a19 100644
+--- a/tools/testing/selftests/landlock/ptrace_test.c
++++ b/tools/testing/selftests/landlock/ptrace_test.c
+@@ -19,6 +19,12 @@
+
+ #include "common.h"
+
++/* Copied from security/yama/yama_lsm.c */
++#define YAMA_SCOPE_DISABLED 0
++#define YAMA_SCOPE_RELATIONAL 1
++#define YAMA_SCOPE_CAPABILITY 2
++#define YAMA_SCOPE_NO_ATTACH 3
++
+ static void create_domain(struct __test_metadata *const _metadata)
+ {
+ int ruleset_fd;
+@@ -60,6 +66,25 @@ static int test_ptrace_read(const pid_t pid)
+ return 0;
+ }
+
++static int get_yama_ptrace_scope(void)
++{
++ int ret;
++ char buf[2] = {};
++ const int fd = open("/proc/sys/kernel/yama/ptrace_scope", O_RDONLY);
++
++ if (fd < 0)
++ return 0;
++
++ if (read(fd, buf, 1) < 0) {
++ close(fd);
++ return -1;
++ }
++
++ ret = atoi(buf);
++ close(fd);
++ return ret;
++}
++
+ /* clang-format off */
+ FIXTURE(hierarchy) {};
+ /* clang-format on */
+@@ -232,8 +257,51 @@ TEST_F(hierarchy, trace)
+ pid_t child, parent;
+ int status, err_proc_read;
+ int pipe_child[2], pipe_parent[2];
++ int yama_ptrace_scope;
+ char buf_parent;
+ long ret;
++ bool can_read_child, can_trace_child, can_read_parent, can_trace_parent;
++
++ yama_ptrace_scope = get_yama_ptrace_scope();
++ ASSERT_LE(0, yama_ptrace_scope);
++
++ if (yama_ptrace_scope > YAMA_SCOPE_DISABLED)
++ TH_LOG("Incomplete tests due to Yama restrictions (scope %d)",
++ yama_ptrace_scope);
++
++ /*
++ * can_read_child is true if a parent process can read its child
++ * process, which is only the case when the parent process is not
++ * isolated from the child with a dedicated Landlock domain.
++ */
++ can_read_child = !variant->domain_parent;
++
++ /*
++ * can_trace_child is true if a parent process can trace its child
++ * process. This depends on two conditions:
++ * - The parent process is not isolated from the child with a dedicated
++ * Landlock domain.
++ * - Yama allows tracing children (up to YAMA_SCOPE_RELATIONAL).
++ */
++ can_trace_child = can_read_child &&
++ yama_ptrace_scope <= YAMA_SCOPE_RELATIONAL;
++
++ /*
++ * can_read_parent is true if a child process can read its parent
++ * process, which is only the case when the child process is not
++ * isolated from the parent with a dedicated Landlock domain.
++ */
++ can_read_parent = !variant->domain_child;
++
++ /*
++ * can_trace_parent is true if a child process can trace its parent
++ * process. This depends on two conditions:
++ * - The child process is not isolated from the parent with a dedicated
++ * Landlock domain.
++ * - Yama is disabled (YAMA_SCOPE_DISABLED).
++ */
++ can_trace_parent = can_read_parent &&
++ yama_ptrace_scope <= YAMA_SCOPE_DISABLED;
+
+ /*
+ * Removes all effective and permitted capabilities to not interfere
+@@ -264,16 +332,21 @@ TEST_F(hierarchy, trace)
+ /* Waits for the parent to be in a domain, if any. */
+ ASSERT_EQ(1, read(pipe_parent[0], &buf_child, 1));
+
+- /* Tests PTRACE_ATTACH and PTRACE_MODE_READ on the parent. */
++ /* Tests PTRACE_MODE_READ on the parent. */
+ err_proc_read = test_ptrace_read(parent);
++ if (can_read_parent) {
++ EXPECT_EQ(0, err_proc_read);
++ } else {
++ EXPECT_EQ(EACCES, err_proc_read);
++ }
++
++ /* Tests PTRACE_ATTACH on the parent. */
+ ret = ptrace(PTRACE_ATTACH, parent, NULL, 0);
+- if (variant->domain_child) {
++ if (can_trace_parent) {
++ EXPECT_EQ(0, ret);
++ } else {
+ EXPECT_EQ(-1, ret);
+ EXPECT_EQ(EPERM, errno);
+- EXPECT_EQ(EACCES, err_proc_read);
+- } else {
+- EXPECT_EQ(0, ret);
+- EXPECT_EQ(0, err_proc_read);
+ }
+ if (ret == 0) {
+ ASSERT_EQ(parent, waitpid(parent, &status, 0));
+@@ -283,11 +356,11 @@ TEST_F(hierarchy, trace)
+
+ /* Tests child PTRACE_TRACEME. */
+ ret = ptrace(PTRACE_TRACEME);
+- if (variant->domain_parent) {
++ if (can_trace_child) {
++ EXPECT_EQ(0, ret);
++ } else {
+ EXPECT_EQ(-1, ret);
+ EXPECT_EQ(EPERM, errno);
+- } else {
+- EXPECT_EQ(0, ret);
+ }
+
+ /*
+@@ -296,7 +369,7 @@ TEST_F(hierarchy, trace)
+ */
+ ASSERT_EQ(1, write(pipe_child[1], ".", 1));
+
+- if (!variant->domain_parent) {
++ if (can_trace_child) {
+ ASSERT_EQ(0, raise(SIGSTOP));
+ }
+
+@@ -321,7 +394,7 @@ TEST_F(hierarchy, trace)
+ ASSERT_EQ(1, read(pipe_child[0], &buf_parent, 1));
+
+ /* Tests child PTRACE_TRACEME. */
+- if (!variant->domain_parent) {
++ if (can_trace_child) {
+ ASSERT_EQ(child, waitpid(child, &status, 0));
+ ASSERT_EQ(1, WIFSTOPPED(status));
+ ASSERT_EQ(0, ptrace(PTRACE_DETACH, child, NULL, 0));
+@@ -331,17 +404,23 @@ TEST_F(hierarchy, trace)
+ EXPECT_EQ(ESRCH, errno);
+ }
+
+- /* Tests PTRACE_ATTACH and PTRACE_MODE_READ on the child. */
++ /* Tests PTRACE_MODE_READ on the child. */
+ err_proc_read = test_ptrace_read(child);
++ if (can_read_child) {
++ EXPECT_EQ(0, err_proc_read);
++ } else {
++ EXPECT_EQ(EACCES, err_proc_read);
++ }
++
++ /* Tests PTRACE_ATTACH on the child. */
+ ret = ptrace(PTRACE_ATTACH, child, NULL, 0);
+- if (variant->domain_parent) {
++ if (can_trace_child) {
++ EXPECT_EQ(0, ret);
++ } else {
+ EXPECT_EQ(-1, ret);
+ EXPECT_EQ(EPERM, errno);
+- EXPECT_EQ(EACCES, err_proc_read);
+- } else {
+- EXPECT_EQ(0, ret);
+- EXPECT_EQ(0, err_proc_read);
+ }
++
+ if (ret == 0) {
+ ASSERT_EQ(child, waitpid(child, &status, 0));
+ ASSERT_EQ(1, WIFSTOPPED(status));
+diff --git a/tools/testing/selftests/media_tests/Makefile b/tools/testing/selftests/media_tests/Makefile
+index 60826d7d37d49..471d83e61d95e 100644
+--- a/tools/testing/selftests/media_tests/Makefile
++++ b/tools/testing/selftests/media_tests/Makefile
+@@ -1,6 +1,6 @@
+ # SPDX-License-Identifier: GPL-2.0
+ #
+-CFLAGS += -I../ -I../../../../usr/include/
++CFLAGS += -I../ $(KHDR_INCLUDES)
+ TEST_GEN_PROGS := media_device_test media_device_open video_device_test
+
+ include ../lib.mk
+diff --git a/tools/testing/selftests/membarrier/Makefile b/tools/testing/selftests/membarrier/Makefile
+index 34d1c81a2324a..fc840e06ff565 100644
+--- a/tools/testing/selftests/membarrier/Makefile
++++ b/tools/testing/selftests/membarrier/Makefile
+@@ -1,5 +1,5 @@
+ # SPDX-License-Identifier: GPL-2.0-only
+-CFLAGS += -g -I../../../../usr/include/
++CFLAGS += -g $(KHDR_INCLUDES)
+ LDLIBS += -lpthread
+
+ TEST_GEN_PROGS := membarrier_test_single_thread \
+diff --git a/tools/testing/selftests/mount_setattr/Makefile b/tools/testing/selftests/mount_setattr/Makefile
+index 2250f7dcb81e3..fde72df01b118 100644
+--- a/tools/testing/selftests/mount_setattr/Makefile
++++ b/tools/testing/selftests/mount_setattr/Makefile
+@@ -1,6 +1,6 @@
+ # SPDX-License-Identifier: GPL-2.0
+ # Makefile for mount selftests.
+-CFLAGS = -g -I../../../../usr/include/ -Wall -O2 -pthread
++CFLAGS = -g $(KHDR_INCLUDES) -Wall -O2 -pthread
+
+ TEST_GEN_FILES += mount_setattr_test
+
+diff --git a/tools/testing/selftests/move_mount_set_group/Makefile b/tools/testing/selftests/move_mount_set_group/Makefile
+index 80c2d86812b06..94235846b6f9b 100644
+--- a/tools/testing/selftests/move_mount_set_group/Makefile
++++ b/tools/testing/selftests/move_mount_set_group/Makefile
+@@ -1,6 +1,6 @@
+ # SPDX-License-Identifier: GPL-2.0
+ # Makefile for mount selftests.
+-CFLAGS = -g -I../../../../usr/include/ -Wall -O2
++CFLAGS = -g $(KHDR_INCLUDES) -Wall -O2
+
+ TEST_GEN_FILES += move_mount_set_group_test
+
+diff --git a/tools/testing/selftests/net/fib_tests.sh b/tools/testing/selftests/net/fib_tests.sh
+index 5637b5dadabdb..70ea8798b1f60 100755
+--- a/tools/testing/selftests/net/fib_tests.sh
++++ b/tools/testing/selftests/net/fib_tests.sh
+@@ -2065,6 +2065,8 @@ EOF
+ ################################################################################
+ # main
+
++trap cleanup EXIT
++
+ while getopts :t:pPhv o
+ do
+ case $o in
+diff --git a/tools/testing/selftests/net/udpgso_bench_rx.c b/tools/testing/selftests/net/udpgso_bench_rx.c
+index 4058c7451e70d..f35a924d4a303 100644
+--- a/tools/testing/selftests/net/udpgso_bench_rx.c
++++ b/tools/testing/selftests/net/udpgso_bench_rx.c
+@@ -214,11 +214,10 @@ static void do_verify_udp(const char *data, int len)
+
+ static int recv_msg(int fd, char *buf, int len, int *gso_size)
+ {
+- char control[CMSG_SPACE(sizeof(uint16_t))] = {0};
++ char control[CMSG_SPACE(sizeof(int))] = {0};
+ struct msghdr msg = {0};
+ struct iovec iov = {0};
+ struct cmsghdr *cmsg;
+- uint16_t *gsosizeptr;
+ int ret;
+
+ iov.iov_base = buf;
+@@ -237,8 +236,7 @@ static int recv_msg(int fd, char *buf, int len, int *gso_size)
+ cmsg = CMSG_NXTHDR(&msg, cmsg)) {
+ if (cmsg->cmsg_level == SOL_UDP
+ && cmsg->cmsg_type == UDP_GRO) {
+- gsosizeptr = (uint16_t *) CMSG_DATA(cmsg);
+- *gso_size = *gsosizeptr;
++ *gso_size = *(int *)CMSG_DATA(cmsg);
+ break;
+ }
+ }
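
The type change matters because the UDP_GRO control message payload is
wider than 16 bits (an int, as the fixed test now reads it); pulling it
through a uint16_t pointer also under-sizes the control buffer and only
appears to work on little-endian hosts. A sketch of the difference, with
the payload layout assumed:

  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
          /* Assumed: the cmsg payload carries the GRO segment size as an
           * int, matching the fixed test above. */
          int wire = 1400;
          unsigned char payload[sizeof(int)];
          unsigned short narrow;
          int full;

          memcpy(payload, &wire, sizeof(wire));
          memcpy(&narrow, payload, sizeof(narrow)); /* old, truncating read */
          memcpy(&full, payload, sizeof(full));     /* fixed, full-width read */

          printf("narrow=%hu full=%d (equal only on little-endian)\n",
                 narrow, full);
          return 0;
  }
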
+diff --git a/tools/testing/selftests/perf_events/Makefile b/tools/testing/selftests/perf_events/Makefile
+index fcafa5f0d34c0..db93c4ff081a4 100644
+--- a/tools/testing/selftests/perf_events/Makefile
++++ b/tools/testing/selftests/perf_events/Makefile
+@@ -1,5 +1,5 @@
+ # SPDX-License-Identifier: GPL-2.0
+-CFLAGS += -Wl,-no-as-needed -Wall -I../../../../usr/include
++CFLAGS += -Wl,-no-as-needed -Wall $(KHDR_INCLUDES)
+ LDFLAGS += -lpthread
+
+ TEST_GEN_PROGS := sigtrap_threads remove_on_exec
+diff --git a/tools/testing/selftests/pid_namespace/Makefile b/tools/testing/selftests/pid_namespace/Makefile
+index edafaca1aeb39..9286a1d22cd3a 100644
+--- a/tools/testing/selftests/pid_namespace/Makefile
++++ b/tools/testing/selftests/pid_namespace/Makefile
+@@ -1,5 +1,5 @@
+ # SPDX-License-Identifier: GPL-2.0
+-CFLAGS += -g -I../../../../usr/include/
++CFLAGS += -g $(KHDR_INCLUDES)
+
+ TEST_GEN_PROGS = regression_enomem
+
+diff --git a/tools/testing/selftests/pidfd/Makefile b/tools/testing/selftests/pidfd/Makefile
+index 778b6cdc8aed8..d731e3e76d5bf 100644
+--- a/tools/testing/selftests/pidfd/Makefile
++++ b/tools/testing/selftests/pidfd/Makefile
+@@ -1,5 +1,5 @@
+ # SPDX-License-Identifier: GPL-2.0-only
+-CFLAGS += -g -I../../../../usr/include/ -pthread -Wall
++CFLAGS += -g $(KHDR_INCLUDES) -pthread -Wall
+
+ TEST_GEN_PROGS := pidfd_test pidfd_fdinfo_test pidfd_open_test \
+ pidfd_poll_test pidfd_wait pidfd_getfd_test pidfd_setns_test
+diff --git a/tools/testing/selftests/powerpc/ptrace/Makefile b/tools/testing/selftests/powerpc/ptrace/Makefile
+index 2f02cb54224dc..cbeeaeae8837a 100644
+--- a/tools/testing/selftests/powerpc/ptrace/Makefile
++++ b/tools/testing/selftests/powerpc/ptrace/Makefile
+@@ -33,7 +33,7 @@ TESTS_64 := $(patsubst %,$(OUTPUT)/%,$(TESTS_64))
+ $(TESTS_64): CFLAGS += -m64
+ $(TM_TESTS): CFLAGS += -I../tm -mhtm
+
+-CFLAGS += -I../../../../../usr/include -fno-pie
++CFLAGS += $(KHDR_INCLUDES) -fno-pie
+
+ $(OUTPUT)/ptrace-gpr: ptrace-gpr.S
+ $(OUTPUT)/ptrace-pkey $(OUTPUT)/core-pkey: LDLIBS += -pthread
+diff --git a/tools/testing/selftests/powerpc/security/Makefile b/tools/testing/selftests/powerpc/security/Makefile
+index 7488315fd8474..e0d979ab02040 100644
+--- a/tools/testing/selftests/powerpc/security/Makefile
++++ b/tools/testing/selftests/powerpc/security/Makefile
+@@ -5,7 +5,7 @@ TEST_PROGS := mitigation-patching.sh
+
+ top_srcdir = ../../../../..
+
+-CFLAGS += -I../../../../../usr/include
++CFLAGS += $(KHDR_INCLUDES)
+
+ include ../../lib.mk
+
+diff --git a/tools/testing/selftests/powerpc/syscalls/Makefile b/tools/testing/selftests/powerpc/syscalls/Makefile
+index b63f8459c704e..d1f2648b112b6 100644
+--- a/tools/testing/selftests/powerpc/syscalls/Makefile
++++ b/tools/testing/selftests/powerpc/syscalls/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0-only
+ TEST_GEN_PROGS := ipc_unmuxed rtas_filter
+
+-CFLAGS += -I../../../../../usr/include
++CFLAGS += $(KHDR_INCLUDES)
+
+ top_srcdir = ../../../../..
+ include ../../lib.mk
+diff --git a/tools/testing/selftests/powerpc/tm/Makefile b/tools/testing/selftests/powerpc/tm/Makefile
+index 5881e97c73c13..3876805c2f312 100644
+--- a/tools/testing/selftests/powerpc/tm/Makefile
++++ b/tools/testing/selftests/powerpc/tm/Makefile
+@@ -17,7 +17,7 @@ $(TEST_GEN_PROGS): ../harness.c ../utils.c
+ CFLAGS += -mhtm
+
+ $(OUTPUT)/tm-syscall: tm-syscall-asm.S
+-$(OUTPUT)/tm-syscall: CFLAGS += -I../../../../../usr/include
++$(OUTPUT)/tm-syscall: CFLAGS += $(KHDR_INCLUDES)
+ $(OUTPUT)/tm-tmspr: CFLAGS += -pthread
+ $(OUTPUT)/tm-vmx-unavail: CFLAGS += -pthread -m64
+ $(OUTPUT)/tm-resched-dscr: ../pmu/lib.c
+diff --git a/tools/testing/selftests/ptp/Makefile b/tools/testing/selftests/ptp/Makefile
+index ef06de0898b73..eeab44cc68638 100644
+--- a/tools/testing/selftests/ptp/Makefile
++++ b/tools/testing/selftests/ptp/Makefile
+@@ -1,5 +1,5 @@
+ # SPDX-License-Identifier: GPL-2.0
+-CFLAGS += -I../../../../usr/include/
++CFLAGS += $(KHDR_INCLUDES)
+ TEST_PROGS := testptp
+ LDLIBS += -lrt
+ all: $(TEST_PROGS)
+diff --git a/tools/testing/selftests/rseq/Makefile b/tools/testing/selftests/rseq/Makefile
+index 215e1067f0376..3a173e184566c 100644
+--- a/tools/testing/selftests/rseq/Makefile
++++ b/tools/testing/selftests/rseq/Makefile
+@@ -4,7 +4,7 @@ ifneq ($(shell $(CC) --version 2>&1 | head -n 1 | grep clang),)
+ CLANG_FLAGS += -no-integrated-as
+ endif
+
+-CFLAGS += -O2 -Wall -g -I./ -I../../../../usr/include/ -L$(OUTPUT) -Wl,-rpath=./ \
++CFLAGS += -O2 -Wall -g -I./ $(KHDR_INCLUDES) -L$(OUTPUT) -Wl,-rpath=./ \
+ $(CLANG_FLAGS)
+ LDLIBS += -lpthread -ldl
+
+diff --git a/tools/testing/selftests/sched/Makefile b/tools/testing/selftests/sched/Makefile
+index 10c72f14fea9d..099ee9213557a 100644
+--- a/tools/testing/selftests/sched/Makefile
++++ b/tools/testing/selftests/sched/Makefile
+@@ -4,7 +4,7 @@ ifneq ($(shell $(CC) --version 2>&1 | head -n 1 | grep clang),)
+ CLANG_FLAGS += -no-integrated-as
+ endif
+
+-CFLAGS += -O2 -Wall -g -I./ -I../../../../usr/include/ -Wl,-rpath=./ \
++CFLAGS += -O2 -Wall -g -I./ $(KHDR_INCLUDES) -Wl,-rpath=./ \
+ $(CLANG_FLAGS)
+ LDLIBS += -lpthread
+
+diff --git a/tools/testing/selftests/seccomp/Makefile b/tools/testing/selftests/seccomp/Makefile
+index f017c382c0369..584fba4870372 100644
+--- a/tools/testing/selftests/seccomp/Makefile
++++ b/tools/testing/selftests/seccomp/Makefile
+@@ -1,5 +1,5 @@
+ # SPDX-License-Identifier: GPL-2.0
+-CFLAGS += -Wl,-no-as-needed -Wall -isystem ../../../../usr/include/
++CFLAGS += -Wl,-no-as-needed -Wall $(KHDR_INCLUDES)
+ LDFLAGS += -lpthread
+ LDLIBS += -lcap
+
+diff --git a/tools/testing/selftests/sync/Makefile b/tools/testing/selftests/sync/Makefile
+index d0121a8a3523a..df0f91bf6890d 100644
+--- a/tools/testing/selftests/sync/Makefile
++++ b/tools/testing/selftests/sync/Makefile
+@@ -1,6 +1,6 @@
+ # SPDX-License-Identifier: GPL-2.0
+ CFLAGS += -O2 -g -std=gnu89 -pthread -Wall -Wextra
+-CFLAGS += -I../../../../usr/include/
++CFLAGS += $(KHDR_INCLUDES)
+ LDFLAGS += -pthread
+
+ .PHONY: all clean
+diff --git a/tools/testing/selftests/user_events/Makefile b/tools/testing/selftests/user_events/Makefile
+index c765d8635d9af..87d54c6400681 100644
+--- a/tools/testing/selftests/user_events/Makefile
++++ b/tools/testing/selftests/user_events/Makefile
+@@ -1,5 +1,5 @@
+ # SPDX-License-Identifier: GPL-2.0
+-CFLAGS += -Wl,-no-as-needed -Wall -I../../../../usr/include
++CFLAGS += -Wl,-no-as-needed -Wall $(KHDR_INCLUDES)
+ LDLIBS += -lrt -lpthread -lm
+
+ TEST_GEN_PROGS = ftrace_test dyn_test perf_test
+diff --git a/tools/testing/selftests/vm/Makefile b/tools/testing/selftests/vm/Makefile
+index 89c14e41bd437..ac9366065fd26 100644
+--- a/tools/testing/selftests/vm/Makefile
++++ b/tools/testing/selftests/vm/Makefile
+@@ -25,7 +25,7 @@ MACHINE ?= $(shell echo $(uname_M) | sed -e 's/aarch64.*/arm64/' -e 's/ppc64.*/p
+ # LDLIBS.
+ MAKEFLAGS += --no-builtin-rules
+
+-CFLAGS = -Wall -I $(top_srcdir) -I $(top_srcdir)/usr/include $(EXTRA_CFLAGS) $(KHDR_INCLUDES)
++CFLAGS = -Wall -I $(top_srcdir) $(EXTRA_CFLAGS) $(KHDR_INCLUDES)
+ LDLIBS = -lrt -lpthread
+ TEST_GEN_FILES = cow
+ TEST_GEN_FILES += compaction_test
+diff --git a/tools/testing/selftests/x86/Makefile b/tools/testing/selftests/x86/Makefile
+index 0388c4d60af0e..ca9374b56ead1 100644
+--- a/tools/testing/selftests/x86/Makefile
++++ b/tools/testing/selftests/x86/Makefile
+@@ -34,7 +34,7 @@ BINARIES_64 := $(TARGETS_C_64BIT_ALL:%=%_64)
+ BINARIES_32 := $(patsubst %,$(OUTPUT)/%,$(BINARIES_32))
+ BINARIES_64 := $(patsubst %,$(OUTPUT)/%,$(BINARIES_64))
+
+-CFLAGS := -O2 -g -std=gnu99 -pthread -Wall
++CFLAGS := -O2 -g -std=gnu99 -pthread -Wall $(KHDR_INCLUDES)
+
+ # call32_from_64 in thunks.S uses absolute addresses.
+ ifeq ($(CAN_BUILD_WITH_NOPIE),1)
+diff --git a/tools/tracing/rtla/src/osnoise_hist.c b/tools/tracing/rtla/src/osnoise_hist.c
+index 5d7ea479ac89f..fe34452fc4ec0 100644
+--- a/tools/tracing/rtla/src/osnoise_hist.c
++++ b/tools/tracing/rtla/src/osnoise_hist.c
+@@ -121,6 +121,7 @@ static void osnoise_hist_update_multiple(struct osnoise_tool *tool, int cpu,
+ {
+ struct osnoise_hist_params *params = tool->params;
+ struct osnoise_hist_data *data = tool->data;
++ unsigned long long total_duration;
+ int entries = data->entries;
+ int bucket;
+ int *hist;
+@@ -131,10 +132,12 @@ static void osnoise_hist_update_multiple(struct osnoise_tool *tool, int cpu,
+ if (data->bucket_size)
+ bucket = duration / data->bucket_size;
+
++ total_duration = duration * count;
++
+ hist = data->hist[cpu].samples;
+ data->hist[cpu].count += count;
+ update_min(&data->hist[cpu].min_sample, &duration);
+- update_sum(&data->hist[cpu].sum_sample, &duration);
++ update_sum(&data->hist[cpu].sum_sample, &total_duration);
+ update_max(&data->hist[cpu].max_sample, &duration);
+
+ if (bucket < entries)
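[Annotation] The rtla hunk above fixes the osnoise histogram's running sum. osnoise_hist_update_multiple() receives one duration together with a count of identical samples; the old code passed the bare duration to update_sum(), so any bucket with count > 1 under-counted and the reported sums skewed low. The fix accumulates duration * count. A self-contained sketch of the corrected accounting (hypothetical names, not the rtla sources):

#include <stdio.h>

/* A sample value observed count times must contribute
 * duration * count to the running sum, not duration once. */
static void account_sample(unsigned long long *sum,
			   unsigned long long duration,
			   unsigned long long count)
{
	*sum += duration * count;
}

int main(void)
{
	unsigned long long sum = 0;

	account_sample(&sum, 10, 3);	/* 10us seen 3 times */
	printf("sum = %llu\n", sum);	/* prints 30, not 10 */
	return 0;
}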
+diff --git a/virt/kvm/coalesced_mmio.c b/virt/kvm/coalesced_mmio.c
+index 0be80c213f7f2..5ef88f5a08640 100644
+--- a/virt/kvm/coalesced_mmio.c
++++ b/virt/kvm/coalesced_mmio.c
+@@ -187,15 +187,17 @@ int kvm_vm_ioctl_unregister_coalesced_mmio(struct kvm *kvm,
+ r = kvm_io_bus_unregister_dev(kvm,
+ zone->pio ? KVM_PIO_BUS : KVM_MMIO_BUS, &dev->dev);
+
++ kvm_iodevice_destructor(&dev->dev);
++
+ /*
+ * On failure, unregister destroys all devices on the
+ * bus _except_ the target device, i.e. coalesced_zones
+- * has been modified. No need to restart the walk as
+- * there aren't any zones left.
++ * has been modified. Bail after destroying the target
++ * device, there's no need to restart the walk as there
++ * aren't any zones left.
+ */
+ if (r)
+ break;
+- kvm_iodevice_destructor(&dev->dev);
+ }
+ }
+
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index 9c60384b5ae0b..07aae60288f92 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -5995,12 +5995,6 @@ int kvm_init(void *opaque, unsigned vcpu_size, unsigned vcpu_align,
+
+ kvm_chardev_ops.owner = module;
+
+- r = misc_register(&kvm_dev);
+- if (r) {
+- pr_err("kvm: misc device register failed\n");
+- goto out_unreg;
+- }
+-
+ register_syscore_ops(&kvm_syscore_ops);
+
+ kvm_preempt_ops.sched_in = kvm_sched_in;
+@@ -6009,11 +6003,24 @@ int kvm_init(void *opaque, unsigned vcpu_size, unsigned vcpu_align,
+ kvm_init_debug();
+
+ r = kvm_vfio_ops_init();
+- WARN_ON(r);
++ if (WARN_ON_ONCE(r))
++ goto err_vfio;
++
++ /*
++ * Registration _must_ be the very last thing done, as this exposes
++ * /dev/kvm to userspace, i.e. all infrastructure must be setup!
++ */
++ r = misc_register(&kvm_dev);
++ if (r) {
++ pr_err("kvm: misc device register failed\n");
++ goto err_register;
++ }
+
+ return 0;
+
+-out_unreg:
++err_register:
++ kvm_vfio_ops_exit();
++err_vfio:
+ kvm_async_pf_deinit();
+ out_free_4:
+ for_each_possible_cpu(cpu)
+@@ -6039,8 +6046,14 @@ void kvm_exit(void)
+ {
+ int cpu;
+
+- debugfs_remove_recursive(kvm_debugfs_dir);
++ /*
++ * Note, unregistering /dev/kvm doesn't strictly need to come first,
++ * fops_get(), a.k.a. try_module_get(), prevents acquiring references
++ * to KVM while the module is being stopped.
++ */
+ misc_deregister(&kvm_dev);
++
++ debugfs_remove_recursive(kvm_debugfs_dir);
+ for_each_possible_cpu(cpu)
+ free_cpumask_var(per_cpu(cpu_kick_mask, cpu));
+ kmem_cache_destroy(kvm_vcpu_cache);
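[Annotation] The two kvm_main.c hunks above enforce a symmetric lifetime rule: kvm_init() now calls misc_register() only after all other infrastructure is up (with matching err_register/err_vfio unwind labels), and kvm_exit() deregisters /dev/kvm before tearing anything else down. Registering the device early left a window where userspace could open /dev/kvm against half-initialized state. A generic, self-contained sketch of that ordering (hypothetical helpers, not the KVM symbols):

#include <stdio.h>

/* Hypothetical stand-ins for internal setup and the user-visible device. */
static int  setup_internals(void)    { return 0; }
static void teardown_internals(void) { }
static int  expose_device(void)      { puts("device visible"); return 0; }
static void hide_device(void)        { puts("device hidden"); }

int subsystem_init(void)
{
	int r = setup_internals();

	if (r)
		return r;

	/* Last step: only now can userspace reach the subsystem. */
	r = expose_device();
	if (r)
		teardown_internals();	/* unwind on failure */
	return r;
}

void subsystem_exit(void)
{
	hide_device();		/* first step: cut off new users */
	teardown_internals();
}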