From: "Arisu Tachibana" <alicef@gentoo.org>
To: gentoo-commits@lists.gentoo.org
Subject: [gentoo-commits] proj/linux-patches:6.1 commit in: /
Date: Sat, 16 Aug 2025 03:11:22 +0000 (UTC)
Message-ID: <1755313867.921b812d612f64110af3fda43828ef3b7746acb6.alicef@gentoo>
commit: 921b812d612f64110af3fda43828ef3b7746acb6
Author: Arisu Tachibana <alicef <AT> gentoo <DOT> org>
AuthorDate: Sat Aug 16 03:11:07 2025 +0000
Commit: Arisu Tachibana <alicef <AT> gentoo <DOT> org>
CommitDate: Sat Aug 16 03:11:07 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=921b812d612f64110af3fda43828ef3b7746acb6
Linux patch 6.1.148
Signed-off-by: Arisu Tachibana <alicef <AT> gentoo.org>
0000_README | 4 +
1147_linux-6.1.148.patch | 9536 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 9540 insertions(+)
diff --git a/0000_README b/0000_README
index 8ee62915..e804a41d 100644
--- a/0000_README
+++ b/0000_README
@@ -631,6 +631,10 @@ Patch: 1146_linux-6.1.147.patch
From: https://www.kernel.org
Desc: Linux 6.1.147
+Patch: 1147_linux-6.1.148.patch
+From: https://www.kernel.org
+Desc: Linux 6.1.148
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1147_linux-6.1.148.patch b/1147_linux-6.1.148.patch
new file mode 100644
index 00000000..e611c973
--- /dev/null
+++ b/1147_linux-6.1.148.patch
@@ -0,0 +1,9536 @@
+diff --git a/Documentation/filesystems/f2fs.rst b/Documentation/filesystems/f2fs.rst
+index 17df9a02ccff14..0625d5bce59685 100644
+--- a/Documentation/filesystems/f2fs.rst
++++ b/Documentation/filesystems/f2fs.rst
+@@ -230,9 +230,9 @@ usrjquota=<file> Appoint specified file and type during mount, so that quota
+ grpjquota=<file> information can be properly updated during recovery flow,
+ prjjquota=<file> <quota file>: must be in root directory;
+ jqfmt=<quota type> <quota type>: [vfsold,vfsv0,vfsv1].
+-offusrjquota Turn off user journalled quota.
+-offgrpjquota Turn off group journalled quota.
+-offprjjquota Turn off project journalled quota.
++usrjquota= Turn off user journalled quota.
++grpjquota= Turn off group journalled quota.
++prjjquota= Turn off project journalled quota.
+ quota Enable plain user disk quota accounting.
+ noquota Disable all plain disk quota option.
+ alloc_mode=%s Adjust block allocation policy, which supports "reuse"
+diff --git a/Makefile b/Makefile
+index 0e07b0834a7c18..dd9e4faf0fd5f6 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 1
+-SUBLEVEL = 147
++SUBLEVEL = 148
+ EXTRAVERSION =
+ NAME = Curry Ramen
+
+diff --git a/arch/arm/boot/dts/am335x-boneblack.dts b/arch/arm/boot/dts/am335x-boneblack.dts
+index b956e2f60fe070..d20b935c0b69fb 100644
+--- a/arch/arm/boot/dts/am335x-boneblack.dts
++++ b/arch/arm/boot/dts/am335x-boneblack.dts
+@@ -34,7 +34,7 @@ &gpio0 {
+ "P9_18 [spi0_d1]",
+ "P9_17 [spi0_cs0]",
+ "[mmc0_cd]",
+- "P8_42A [ecappwm0]",
++ "P9_42A [ecappwm0]",
+ "P8_35 [lcd d12]",
+ "P8_33 [lcd d13]",
+ "P8_31 [lcd d14]",
+diff --git a/arch/arm/boot/dts/imx6ul-kontron-bl-common.dtsi b/arch/arm/boot/dts/imx6ul-kontron-bl-common.dtsi
+index 43868311f48a5d..bb324725411cfb 100644
+--- a/arch/arm/boot/dts/imx6ul-kontron-bl-common.dtsi
++++ b/arch/arm/boot/dts/imx6ul-kontron-bl-common.dtsi
+@@ -169,7 +169,6 @@ &uart2 {
+ pinctrl-0 = <&pinctrl_uart2>;
+ linux,rs485-enabled-at-boot-time;
+ rs485-rx-during-tx;
+- rs485-rts-active-low;
+ uart-has-rtscts;
+ status = "okay";
+ };
+diff --git a/arch/arm/boot/dts/vfxxx.dtsi b/arch/arm/boot/dts/vfxxx.dtsi
+index d53f9c9db8bfda..eb7973fb47133b 100644
+--- a/arch/arm/boot/dts/vfxxx.dtsi
++++ b/arch/arm/boot/dts/vfxxx.dtsi
+@@ -617,7 +617,7 @@ usbmisc1: usb@400b4800 {
+
+ ftm: ftm@400b8000 {
+ compatible = "fsl,ftm-timer";
+- reg = <0x400b8000 0x1000 0x400b9000 0x1000>;
++ reg = <0x400b8000 0x1000>, <0x400b9000 0x1000>;
+ interrupts = <44 IRQ_TYPE_LEVEL_HIGH>;
+ clock-names = "ftm-evt", "ftm-src",
+ "ftm-evt-counter-en", "ftm-src-counter-en";
+diff --git a/arch/arm/crypto/aes-neonbs-glue.c b/arch/arm/crypto/aes-neonbs-glue.c
+index 0ca94b90bc4ec5..ba98daeb119cd9 100644
+--- a/arch/arm/crypto/aes-neonbs-glue.c
++++ b/arch/arm/crypto/aes-neonbs-glue.c
+@@ -245,7 +245,7 @@ static int ctr_encrypt(struct skcipher_request *req)
+ while (walk.nbytes > 0) {
+ const u8 *src = walk.src.virt.addr;
+ u8 *dst = walk.dst.virt.addr;
+- int bytes = walk.nbytes;
++ unsigned int bytes = walk.nbytes;
+
+ if (unlikely(bytes < AES_BLOCK_SIZE))
+ src = dst = memcpy(buf + sizeof(buf) - bytes,
+diff --git a/arch/arm64/boot/dts/freescale/imx8mm-beacon-som.dtsi b/arch/arm64/boot/dts/freescale/imx8mm-beacon-som.dtsi
+index 140e251094fa46..94bec023868c0b 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mm-beacon-som.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mm-beacon-som.dtsi
+@@ -284,6 +284,8 @@ &usdhc3 {
+ pinctrl-0 = <&pinctrl_usdhc3>;
+ pinctrl-1 = <&pinctrl_usdhc3_100mhz>;
+ pinctrl-2 = <&pinctrl_usdhc3_200mhz>;
++ assigned-clocks = <&clk IMX8MM_CLK_USDHC3>;
++ assigned-clock-rates = <400000000>;
+ bus-width = <8>;
+ non-removable;
+ status = "okay";
+diff --git a/arch/arm64/boot/dts/freescale/imx8mn-beacon-som.dtsi b/arch/arm64/boot/dts/freescale/imx8mn-beacon-som.dtsi
+index c4b1c6029c9a93..ef138c867fc8b7 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mn-beacon-som.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mn-beacon-som.dtsi
+@@ -295,6 +295,8 @@ &usdhc3 {
+ pinctrl-0 = <&pinctrl_usdhc3>;
+ pinctrl-1 = <&pinctrl_usdhc3_100mhz>;
+ pinctrl-2 = <&pinctrl_usdhc3_200mhz>;
++ assigned-clocks = <&clk IMX8MN_CLK_USDHC3>;
++ assigned-clock-rates = <400000000>;
+ bus-width = <8>;
+ non-removable;
+ status = "okay";
+diff --git a/arch/arm64/boot/dts/qcom/sc7180.dtsi b/arch/arm64/boot/dts/qcom/sc7180.dtsi
+index a9f937b0684797..41f6f9abf52f42 100644
+--- a/arch/arm64/boot/dts/qcom/sc7180.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7180.dtsi
+@@ -3244,18 +3244,18 @@ spmi_bus: spmi@c440000 {
+ cell-index = <0>;
+ };
+
+- sram@146aa000 {
++ sram@14680000 {
+ compatible = "qcom,sc7180-imem", "syscon", "simple-mfd";
+- reg = <0 0x146aa000 0 0x2000>;
++ reg = <0 0x14680000 0 0x2e000>;
+
+ #address-cells = <1>;
+ #size-cells = <1>;
+
+- ranges = <0 0 0x146aa000 0x2000>;
++ ranges = <0 0 0x14680000 0x2e000>;
+
+- pil-reloc@94c {
++ pil-reloc@2a94c {
+ compatible = "qcom,pil-reloc-info";
+- reg = <0x94c 0xc8>;
++ reg = <0x2a94c 0xc8>;
+ };
+ };
+
+diff --git a/arch/arm64/boot/dts/qcom/sdm845.dtsi b/arch/arm64/boot/dts/qcom/sdm845.dtsi
+index a5df310ce7f39b..b77f65a612a13f 100644
+--- a/arch/arm64/boot/dts/qcom/sdm845.dtsi
++++ b/arch/arm64/boot/dts/qcom/sdm845.dtsi
+@@ -4915,18 +4915,18 @@ spmi_bus: spmi@c440000 {
+ cell-index = <0>;
+ };
+
+- sram@146bf000 {
++ sram@14680000 {
+ compatible = "qcom,sdm845-imem", "syscon", "simple-mfd";
+- reg = <0 0x146bf000 0 0x1000>;
++ reg = <0 0x14680000 0 0x40000>;
+
+ #address-cells = <1>;
+ #size-cells = <1>;
+
+- ranges = <0 0 0x146bf000 0x1000>;
++ ranges = <0 0 0x14680000 0x40000>;
+
+- pil-reloc@94c {
++ pil-reloc@3f94c {
+ compatible = "qcom,pil-reloc-info";
+- reg = <0x94c 0xc8>;
++ reg = <0x3f94c 0xc8>;
+ };
+ };
+
+diff --git a/arch/m68k/Kconfig.debug b/arch/m68k/Kconfig.debug
+index 465e28be0ce46a..6a6f0ed7713f64 100644
+--- a/arch/m68k/Kconfig.debug
++++ b/arch/m68k/Kconfig.debug
+@@ -10,7 +10,7 @@ config BOOTPARAM_STRING
+
+ config EARLY_PRINTK
+ bool "Early printk"
+- depends on !(SUN3 || M68000 || COLDFIRE)
++ depends on MMU_MOTOROLA
+ help
+ Write kernel log output directly to a serial port.
+ Where implemented, output goes to the framebuffer as well.
+diff --git a/arch/m68k/kernel/early_printk.c b/arch/m68k/kernel/early_printk.c
+index f11ef9f1f56fcf..521cbb8a150c99 100644
+--- a/arch/m68k/kernel/early_printk.c
++++ b/arch/m68k/kernel/early_printk.c
+@@ -16,25 +16,10 @@
+ #include "../mvme147/mvme147.h"
+ #include "../mvme16x/mvme16x.h"
+
+-asmlinkage void __init debug_cons_nputs(const char *s, unsigned n);
+-
+-static void __ref debug_cons_write(struct console *c,
+- const char *s, unsigned n)
+-{
+-#if !(defined(CONFIG_SUN3) || defined(CONFIG_M68000) || \
+- defined(CONFIG_COLDFIRE))
+- if (MACH_IS_MVME147)
+- mvme147_scc_write(c, s, n);
+- else if (MACH_IS_MVME16x)
+- mvme16x_cons_write(c, s, n);
+- else
+- debug_cons_nputs(s, n);
+-#endif
+-}
++asmlinkage void __init debug_cons_nputs(struct console *c, const char *s, unsigned int n);
+
+ static struct console early_console_instance = {
+ .name = "debug",
+- .write = debug_cons_write,
+ .flags = CON_PRINTBUFFER | CON_BOOT,
+ .index = -1
+ };
+@@ -44,6 +29,12 @@ static int __init setup_early_printk(char *buf)
+ if (early_console || buf)
+ return 0;
+
++ if (MACH_IS_MVME147)
++ early_console_instance.write = mvme147_scc_write;
++ else if (MACH_IS_MVME16x)
++ early_console_instance.write = mvme16x_cons_write;
++ else
++ early_console_instance.write = debug_cons_nputs;
+ early_console = &early_console_instance;
+ register_console(early_console);
+
+@@ -51,20 +42,15 @@ static int __init setup_early_printk(char *buf)
+ }
+ early_param("earlyprintk", setup_early_printk);
+
+-/*
+- * debug_cons_nputs() defined in arch/m68k/kernel/head.S cannot be called
+- * after init sections are discarded (for platforms that use it).
+- */
+-#if !(defined(CONFIG_SUN3) || defined(CONFIG_M68000) || \
+- defined(CONFIG_COLDFIRE))
+-
+ static int __init unregister_early_console(void)
+ {
+- if (!early_console || MACH_IS_MVME16x)
+- return 0;
++ /*
++ * debug_cons_nputs() defined in arch/m68k/kernel/head.S cannot be
++ * called after init sections are discarded (for platforms that use it).
++ */
++ if (early_console && early_console->write == debug_cons_nputs)
++ return unregister_console(early_console);
+
+- return unregister_console(early_console);
++ return 0;
+ }
+ late_initcall(unregister_early_console);
+-
+-#endif
+diff --git a/arch/m68k/kernel/head.S b/arch/m68k/kernel/head.S
+index 9e812d8606be83..397114962a1427 100644
+--- a/arch/m68k/kernel/head.S
++++ b/arch/m68k/kernel/head.S
+@@ -3267,8 +3267,8 @@ func_return putn
+ * turns around and calls the internal routines. This routine
+ * is used by the boot console.
+ *
+- * The calling parameters are:
+- * void debug_cons_nputs(const char *str, unsigned length)
++ * The function signature is -
++ * void debug_cons_nputs(struct console *c, const char *s, unsigned int n)
+ *
+ * This routine does NOT understand variable arguments only
+ * simple strings!
+@@ -3277,8 +3277,8 @@ ENTRY(debug_cons_nputs)
+ moveml %d0/%d1/%a0,%sp@-
+ movew %sr,%sp@-
+ ori #0x0700,%sr
+- movel %sp@(18),%a0 /* fetch parameter */
+- movel %sp@(22),%d1 /* fetch parameter */
++ movel %sp@(22),%a0 /* char *s */
++ movel %sp@(26),%d1 /* unsigned int n */
+ jra 2f
+ 1:
+ #ifdef CONSOLE_DEBUG
+diff --git a/arch/mips/mm/tlb-r4k.c b/arch/mips/mm/tlb-r4k.c
+index 1b939abbe4caaf..2e987b6e42bc16 100644
+--- a/arch/mips/mm/tlb-r4k.c
++++ b/arch/mips/mm/tlb-r4k.c
+@@ -498,6 +498,60 @@ static int __init set_ntlb(char *str)
+
+ __setup("ntlb=", set_ntlb);
+
++/* Initialise all TLB entries with unique values */
++static void r4k_tlb_uniquify(void)
++{
++ int entry = num_wired_entries();
++
++ htw_stop();
++ write_c0_entrylo0(0);
++ write_c0_entrylo1(0);
++
++ while (entry < current_cpu_data.tlbsize) {
++ unsigned long asid_mask = cpu_asid_mask(¤t_cpu_data);
++ unsigned long asid = 0;
++ int idx;
++
++ /* Skip wired MMID to make ginvt_mmid work */
++ if (cpu_has_mmid)
++ asid = MMID_KERNEL_WIRED + 1;
++
++ /* Check for match before using UNIQUE_ENTRYHI */
++ do {
++ if (cpu_has_mmid) {
++ write_c0_memorymapid(asid);
++ write_c0_entryhi(UNIQUE_ENTRYHI(entry));
++ } else {
++ write_c0_entryhi(UNIQUE_ENTRYHI(entry) | asid);
++ }
++ mtc0_tlbw_hazard();
++ tlb_probe();
++ tlb_probe_hazard();
++ idx = read_c0_index();
++ /* No match or match is on current entry */
++ if (idx < 0 || idx == entry)
++ break;
++ /*
++ * If we hit a match, we need to try again with
++ * a different ASID.
++ */
++ asid++;
++ } while (asid < asid_mask);
++
++ if (idx >= 0 && idx != entry)
++ panic("Unable to uniquify TLB entry %d", idx);
++
++ write_c0_index(entry);
++ mtc0_tlbw_hazard();
++ tlb_write_indexed();
++ entry++;
++ }
++
++ tlbw_use_hazard();
++ htw_start();
++ flush_micro_tlb();
++}
++
+ /*
+ * Configure TLB (for init or after a CPU has been powered off).
+ */
+@@ -537,7 +591,7 @@ static void r4k_tlb_configure(void)
+ temp_tlb_entry = current_cpu_data.tlbsize - 1;
+
+ /* From this point on the ARC firmware is dead. */
+- local_flush_tlb_all();
++ r4k_tlb_uniquify();
+
+ /* Did I tell you that ARC SUCKS? */
+ }
+diff --git a/arch/powerpc/configs/ppc6xx_defconfig b/arch/powerpc/configs/ppc6xx_defconfig
+index d23deb94b36e75..4a1c19cb6ea883 100644
+--- a/arch/powerpc/configs/ppc6xx_defconfig
++++ b/arch/powerpc/configs/ppc6xx_defconfig
+@@ -263,7 +263,6 @@ CONFIG_NET_SCH_DSMARK=m
+ CONFIG_NET_SCH_NETEM=m
+ CONFIG_NET_SCH_INGRESS=m
+ CONFIG_NET_CLS_BASIC=m
+-CONFIG_NET_CLS_TCINDEX=m
+ CONFIG_NET_CLS_ROUTE4=m
+ CONFIG_NET_CLS_FW=m
+ CONFIG_NET_CLS_U32=m
+diff --git a/arch/powerpc/kernel/eeh.c b/arch/powerpc/kernel/eeh.c
+index 2e286bba2f6456..82626363a3090c 100644
+--- a/arch/powerpc/kernel/eeh.c
++++ b/arch/powerpc/kernel/eeh.c
+@@ -1130,6 +1130,7 @@ int eeh_unfreeze_pe(struct eeh_pe *pe)
+
+ return ret;
+ }
++EXPORT_SYMBOL_GPL(eeh_unfreeze_pe);
+
+
+ static struct pci_device_id eeh_reset_ids[] = {
+diff --git a/arch/powerpc/kernel/eeh_driver.c b/arch/powerpc/kernel/eeh_driver.c
+index f279295179bdfe..429abaecad4160 100644
+--- a/arch/powerpc/kernel/eeh_driver.c
++++ b/arch/powerpc/kernel/eeh_driver.c
+@@ -257,13 +257,12 @@ static void eeh_pe_report_edev(struct eeh_dev *edev, eeh_report_fn fn,
+ struct pci_driver *driver;
+ enum pci_ers_result new_result;
+
+- pci_lock_rescan_remove();
+ pdev = edev->pdev;
+ if (pdev)
+ get_device(&pdev->dev);
+- pci_unlock_rescan_remove();
+ if (!pdev) {
+ eeh_edev_info(edev, "no device");
++ *result = PCI_ERS_RESULT_DISCONNECT;
+ return;
+ }
+ device_lock(&pdev->dev);
+@@ -304,8 +303,9 @@ static void eeh_pe_report(const char *name, struct eeh_pe *root,
+ struct eeh_dev *edev, *tmp;
+
+ pr_info("EEH: Beginning: '%s'\n", name);
+- eeh_for_each_pe(root, pe) eeh_pe_for_each_dev(pe, edev, tmp)
+- eeh_pe_report_edev(edev, fn, result);
++ eeh_for_each_pe(root, pe)
++ eeh_pe_for_each_dev(pe, edev, tmp)
++ eeh_pe_report_edev(edev, fn, result);
+ if (result)
+ pr_info("EEH: Finished:'%s' with aggregate recovery state:'%s'\n",
+ name, pci_ers_result_name(*result));
+@@ -383,6 +383,8 @@ static void eeh_dev_restore_state(struct eeh_dev *edev, void *userdata)
+ if (!edev)
+ return;
+
++ pci_lock_rescan_remove();
++
+ /*
+ * The content in the config space isn't saved because
+ * the blocked config space on some adapters. We have
+@@ -393,14 +395,19 @@ static void eeh_dev_restore_state(struct eeh_dev *edev, void *userdata)
+ if (list_is_last(&edev->entry, &edev->pe->edevs))
+ eeh_pe_restore_bars(edev->pe);
+
++ pci_unlock_rescan_remove();
+ return;
+ }
+
+ pdev = eeh_dev_to_pci_dev(edev);
+- if (!pdev)
++ if (!pdev) {
++ pci_unlock_rescan_remove();
+ return;
++ }
+
+ pci_restore_state(pdev);
++
++ pci_unlock_rescan_remove();
+ }
+
+ /**
+@@ -647,9 +654,7 @@ static int eeh_reset_device(struct eeh_pe *pe, struct pci_bus *bus,
+ if (any_passed || driver_eeh_aware || (pe->type & EEH_PE_VF)) {
+ eeh_pe_dev_traverse(pe, eeh_rmv_device, rmv_data);
+ } else {
+- pci_lock_rescan_remove();
+ pci_hp_remove_devices(bus);
+- pci_unlock_rescan_remove();
+ }
+
+ /*
+@@ -665,8 +670,6 @@ static int eeh_reset_device(struct eeh_pe *pe, struct pci_bus *bus,
+ if (rc)
+ return rc;
+
+- pci_lock_rescan_remove();
+-
+ /* Restore PE */
+ eeh_ops->configure_bridge(pe);
+ eeh_pe_restore_bars(pe);
+@@ -674,7 +677,6 @@ static int eeh_reset_device(struct eeh_pe *pe, struct pci_bus *bus,
+ /* Clear frozen state */
+ rc = eeh_clear_pe_frozen_state(pe, false);
+ if (rc) {
+- pci_unlock_rescan_remove();
+ return rc;
+ }
+
+@@ -709,7 +711,6 @@ static int eeh_reset_device(struct eeh_pe *pe, struct pci_bus *bus,
+ pe->tstamp = tstamp;
+ pe->freeze_count = cnt;
+
+- pci_unlock_rescan_remove();
+ return 0;
+ }
+
+@@ -843,10 +844,13 @@ void eeh_handle_normal_event(struct eeh_pe *pe)
+ {LIST_HEAD_INIT(rmv_data.removed_vf_list), 0};
+ int devices = 0;
+
++ pci_lock_rescan_remove();
++
+ bus = eeh_pe_bus_get(pe);
+ if (!bus) {
+ pr_err("%s: Cannot find PCI bus for PHB#%x-PE#%x\n",
+ __func__, pe->phb->global_number, pe->addr);
++ pci_unlock_rescan_remove();
+ return;
+ }
+
+@@ -1085,10 +1089,15 @@ void eeh_handle_normal_event(struct eeh_pe *pe)
+ eeh_pe_state_clear(pe, EEH_PE_PRI_BUS, true);
+ eeh_pe_dev_mode_mark(pe, EEH_DEV_REMOVED);
+
+- pci_lock_rescan_remove();
+- pci_hp_remove_devices(bus);
+- pci_unlock_rescan_remove();
++ bus = eeh_pe_bus_get(pe);
++ if (bus)
++ pci_hp_remove_devices(bus);
++ else
++ pr_err("%s: PCI bus for PHB#%x-PE#%x disappeared\n",
++ __func__, pe->phb->global_number, pe->addr);
++
+ /* The passed PE should no longer be used */
++ pci_unlock_rescan_remove();
+ return;
+ }
+
+@@ -1105,6 +1114,8 @@ void eeh_handle_normal_event(struct eeh_pe *pe)
+ eeh_clear_slot_attention(edev->pdev);
+
+ eeh_pe_state_clear(pe, EEH_PE_RECOVERING, true);
++
++ pci_unlock_rescan_remove();
+ }
+
+ /**
+@@ -1123,6 +1134,7 @@ void eeh_handle_special_event(void)
+ unsigned long flags;
+ int rc;
+
++ pci_lock_rescan_remove();
+
+ do {
+ rc = eeh_ops->next_error(&pe);
+@@ -1162,10 +1174,12 @@ void eeh_handle_special_event(void)
+
+ break;
+ case EEH_NEXT_ERR_NONE:
++ pci_unlock_rescan_remove();
+ return;
+ default:
+ pr_warn("%s: Invalid value %d from next_error()\n",
+ __func__, rc);
++ pci_unlock_rescan_remove();
+ return;
+ }
+
+@@ -1177,7 +1191,9 @@ void eeh_handle_special_event(void)
+ if (rc == EEH_NEXT_ERR_FROZEN_PE ||
+ rc == EEH_NEXT_ERR_FENCED_PHB) {
+ eeh_pe_state_mark(pe, EEH_PE_RECOVERING);
++ pci_unlock_rescan_remove();
+ eeh_handle_normal_event(pe);
++ pci_lock_rescan_remove();
+ } else {
+ eeh_for_each_pe(pe, tmp_pe)
+ eeh_pe_for_each_dev(tmp_pe, edev, tmp_edev)
+@@ -1190,7 +1206,6 @@ void eeh_handle_special_event(void)
+ "error_detected(permanent failure)", pe,
+ eeh_report_failure, NULL);
+
+- pci_lock_rescan_remove();
+ list_for_each_entry(hose, &hose_list, list_node) {
+ phb_pe = eeh_phb_pe_get(hose);
+ if (!phb_pe ||
+@@ -1209,7 +1224,6 @@ void eeh_handle_special_event(void)
+ }
+ pci_hp_remove_devices(bus);
+ }
+- pci_unlock_rescan_remove();
+ }
+
+ /*
+@@ -1219,4 +1233,6 @@ void eeh_handle_special_event(void)
+ if (rc == EEH_NEXT_ERR_DEAD_IOC)
+ break;
+ } while (rc != EEH_NEXT_ERR_NONE);
++
++ pci_unlock_rescan_remove();
+ }
+diff --git a/arch/powerpc/kernel/eeh_pe.c b/arch/powerpc/kernel/eeh_pe.c
+index e4624d78962944..08095aeba5c983 100644
+--- a/arch/powerpc/kernel/eeh_pe.c
++++ b/arch/powerpc/kernel/eeh_pe.c
+@@ -671,11 +671,12 @@ static void eeh_bridge_check_link(struct eeh_dev *edev)
+ eeh_ops->write_config(edev, cap + PCI_EXP_LNKCTL, 2, val);
+
+ /* Check link */
+- eeh_ops->read_config(edev, cap + PCI_EXP_LNKCAP, 4, &val);
+- if (!(val & PCI_EXP_LNKCAP_DLLLARC)) {
+- eeh_edev_dbg(edev, "No link reporting capability (0x%08x) \n", val);
+- msleep(1000);
+- return;
++ if (edev->pdev) {
++ if (!edev->pdev->link_active_reporting) {
++ eeh_edev_dbg(edev, "No link reporting capability\n");
++ msleep(1000);
++ return;
++ }
+ }
+
+ /* Wait the link is up until timeout (5s) */
+diff --git a/arch/powerpc/kernel/pci-hotplug.c b/arch/powerpc/kernel/pci-hotplug.c
+index 0fe251c6ac2ce7..ac70e85b0df85d 100644
+--- a/arch/powerpc/kernel/pci-hotplug.c
++++ b/arch/powerpc/kernel/pci-hotplug.c
+@@ -111,6 +111,9 @@ void pci_hp_add_devices(struct pci_bus *bus)
+ struct pci_controller *phb;
+ struct device_node *dn = pci_bus_to_OF_node(bus);
+
++ if (!dn)
++ return;
++
+ phb = pci_bus_to_host(bus);
+
+ mode = PCI_PROBE_NORMAL;
+diff --git a/arch/sh/Makefile b/arch/sh/Makefile
+index 5c8776482530c3..22c47e4ad5725d 100644
+--- a/arch/sh/Makefile
++++ b/arch/sh/Makefile
+@@ -103,16 +103,16 @@ UTS_MACHINE := sh
+ LDFLAGS_vmlinux += -e _stext
+
+ ifdef CONFIG_CPU_LITTLE_ENDIAN
+-ld-bfd := elf32-sh-linux
+-LDFLAGS_vmlinux += --defsym jiffies=jiffies_64 --oformat $(ld-bfd)
++ld_bfd := elf32-sh-linux
++LDFLAGS_vmlinux += --defsym jiffies=jiffies_64 --oformat $(ld_bfd)
+ KBUILD_LDFLAGS += -EL
+ else
+-ld-bfd := elf32-shbig-linux
+-LDFLAGS_vmlinux += --defsym jiffies=jiffies_64+4 --oformat $(ld-bfd)
++ld_bfd := elf32-shbig-linux
++LDFLAGS_vmlinux += --defsym jiffies=jiffies_64+4 --oformat $(ld_bfd)
+ KBUILD_LDFLAGS += -EB
+ endif
+
+-export ld-bfd
++export ld_bfd
+
+ # Mach groups
+ machdir-$(CONFIG_SOLUTION_ENGINE) += mach-se
+diff --git a/arch/sh/boot/compressed/Makefile b/arch/sh/boot/compressed/Makefile
+index 591125c42d49df..05542eb2013619 100644
+--- a/arch/sh/boot/compressed/Makefile
++++ b/arch/sh/boot/compressed/Makefile
+@@ -36,7 +36,7 @@ endif
+
+ ccflags-remove-$(CONFIG_MCOUNT) += -pg
+
+-LDFLAGS_vmlinux := --oformat $(ld-bfd) -Ttext $(IMAGE_OFFSET) -e startup \
++LDFLAGS_vmlinux := --oformat $(ld_bfd) -Ttext $(IMAGE_OFFSET) -e startup \
+ -T $(obj)/../../kernel/vmlinux.lds
+
+ KBUILD_CFLAGS += -DDISABLE_BRANCH_PROFILING
+@@ -60,7 +60,7 @@ $(obj)/vmlinux.bin.lzo: $(obj)/vmlinux.bin FORCE
+
+ OBJCOPYFLAGS += -R .empty_zero_page
+
+-LDFLAGS_piggy.o := -r --format binary --oformat $(ld-bfd) -T
++LDFLAGS_piggy.o := -r --format binary --oformat $(ld_bfd) -T
+
+ $(obj)/piggy.o: $(obj)/vmlinux.scr $(obj)/vmlinux.bin.$(suffix_y) FORCE
+ $(call if_changed,ld)
+diff --git a/arch/sh/boot/romimage/Makefile b/arch/sh/boot/romimage/Makefile
+index c7c8be58400cd9..17b03df0a8de4d 100644
+--- a/arch/sh/boot/romimage/Makefile
++++ b/arch/sh/boot/romimage/Makefile
+@@ -13,7 +13,7 @@ mmcif-obj-$(CONFIG_CPU_SUBTYPE_SH7724) := $(obj)/mmcif-sh7724.o
+ load-$(CONFIG_ROMIMAGE_MMCIF) := $(mmcif-load-y)
+ obj-$(CONFIG_ROMIMAGE_MMCIF) := $(mmcif-obj-y)
+
+-LDFLAGS_vmlinux := --oformat $(ld-bfd) -Ttext $(load-y) -e romstart \
++LDFLAGS_vmlinux := --oformat $(ld_bfd) -Ttext $(load-y) -e romstart \
+ -T $(obj)/../../kernel/vmlinux.lds
+
+ $(obj)/vmlinux: $(obj)/head.o $(obj-y) $(obj)/piggy.o FORCE
+@@ -24,7 +24,7 @@ OBJCOPYFLAGS += -j .empty_zero_page
+ $(obj)/zeropage.bin: vmlinux FORCE
+ $(call if_changed,objcopy)
+
+-LDFLAGS_piggy.o := -r --format binary --oformat $(ld-bfd) -T
++LDFLAGS_piggy.o := -r --format binary --oformat $(ld_bfd) -T
+
+ $(obj)/piggy.o: $(obj)/vmlinux.scr $(obj)/zeropage.bin arch/sh/boot/zImage FORCE
+ $(call if_changed,ld)
+diff --git a/arch/um/drivers/rtc_user.c b/arch/um/drivers/rtc_user.c
+index 7c3cec4c68cffe..006a5a164ea91d 100644
+--- a/arch/um/drivers/rtc_user.c
++++ b/arch/um/drivers/rtc_user.c
+@@ -28,7 +28,7 @@ int uml_rtc_start(bool timetravel)
+ int err;
+
+ if (timetravel) {
+- int err = os_pipe(uml_rtc_irq_fds, 1, 1);
++ err = os_pipe(uml_rtc_irq_fds, 1, 1);
+ if (err)
+ goto fail;
+ } else {
+diff --git a/arch/x86/boot/compressed/sev.c b/arch/x86/boot/compressed/sev.c
+index 3c5d5c97f8f73b..4f61d48f257595 100644
+--- a/arch/x86/boot/compressed/sev.c
++++ b/arch/x86/boot/compressed/sev.c
+@@ -164,6 +164,13 @@ static void __page_state_change(unsigned long paddr, enum psc_op op)
+ */
+ if (op == SNP_PAGE_STATE_PRIVATE && pvalidate(paddr, RMP_PG_SIZE_4K, 1))
+ sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_PVALIDATE);
++
++ /*
++ * If validating memory (making it private) and affected by the
++ * cache-coherency vulnerability, perform the cache eviction mitigation.
++ */
++ if (op == SNP_PAGE_STATE_PRIVATE && !has_cpuflag(X86_FEATURE_COHERENCY_SFW_NO))
++ sev_evict_cache((void *)paddr, 1);
+ }
+
+ void snp_set_page_private(unsigned long paddr)
+diff --git a/arch/x86/boot/cpuflags.c b/arch/x86/boot/cpuflags.c
+index a83d67ec627d17..aa4943432dcf69 100644
+--- a/arch/x86/boot/cpuflags.c
++++ b/arch/x86/boot/cpuflags.c
+@@ -124,5 +124,18 @@ void get_cpuflags(void)
+ cpuid(0x80000001, &ignored, &ignored, &cpu.flags[6],
+ &cpu.flags[1]);
+ }
++
++ if (max_amd_level >= 0x8000001f) {
++ u32 ebx;
++
++ /*
++ * The X86_FEATURE_COHERENCY_SFW_NO feature bit is in
++ * the virtualization flags entry (word 8) and set by
++ * scattered.c, so the bit needs to be explicitly set.
++ */
++ cpuid(0x8000001f, &ignored, &ebx, &ignored, &ignored);
++ if (ebx & BIT(31))
++ set_bit(X86_FEATURE_COHERENCY_SFW_NO, cpu.flags);
++ }
+ }
+ }
+diff --git a/arch/x86/hyperv/irqdomain.c b/arch/x86/hyperv/irqdomain.c
+index 42c70d28ef272d..865ae4be233b37 100644
+--- a/arch/x86/hyperv/irqdomain.c
++++ b/arch/x86/hyperv/irqdomain.c
+@@ -192,7 +192,6 @@ static void hv_irq_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
+ struct pci_dev *dev;
+ struct hv_interrupt_entry out_entry, *stored_entry;
+ struct irq_cfg *cfg = irqd_cfg(data);
+- const cpumask_t *affinity;
+ int cpu;
+ u64 status;
+
+@@ -204,8 +203,7 @@ static void hv_irq_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
+ return;
+ }
+
+- affinity = irq_data_get_effective_affinity_mask(data);
+- cpu = cpumask_first_and(affinity, cpu_online_mask);
++ cpu = cpumask_first(irq_data_get_effective_affinity_mask(data));
+
+ if (data->chip_data) {
+ /*
+diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
+index 6f6ea3b9a95e03..c48a9733e906ab 100644
+--- a/arch/x86/include/asm/cpufeatures.h
++++ b/arch/x86/include/asm/cpufeatures.h
+@@ -230,6 +230,7 @@
+ #define X86_FEATURE_FLEXPRIORITY ( 8*32+ 2) /* Intel FlexPriority */
+ #define X86_FEATURE_EPT ( 8*32+ 3) /* Intel Extended Page Table */
+ #define X86_FEATURE_VPID ( 8*32+ 4) /* Intel Virtual Processor ID */
++#define X86_FEATURE_COHERENCY_SFW_NO ( 8*32+ 5) /* "" SNP cache coherency software work around not needed */
+
+ #define X86_FEATURE_VMMCALL ( 8*32+15) /* Prefer VMMCALL to VMCALL */
+ #define X86_FEATURE_XENPV ( 8*32+16) /* "" Xen paravirtual guest */
+diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
+index 4785d41558d61b..2d71c329b3475c 100644
+--- a/arch/x86/kernel/cpu/amd.c
++++ b/arch/x86/kernel/cpu/amd.c
+@@ -563,6 +563,8 @@ static bool amd_check_tsa_microcode(void)
+ p.model = c->x86_model;
+ p.ext_model = c->x86_model >> 4;
+ p.stepping = c->x86_stepping;
++ /* reserved bits are expected to be 0 in test below */
++ p.__reserved = 0;
+
+ if (c->x86 == 0x19) {
+ switch (p.ucode_rev >> 8) {
+diff --git a/arch/x86/kernel/cpu/scattered.c b/arch/x86/kernel/cpu/scattered.c
+index b9e39c9eb274c1..0d019e6972bec2 100644
+--- a/arch/x86/kernel/cpu/scattered.c
++++ b/arch/x86/kernel/cpu/scattered.c
+@@ -45,6 +45,7 @@ static const struct cpuid_bit cpuid_bits[] = {
+ { X86_FEATURE_CPB, CPUID_EDX, 9, 0x80000007, 0 },
+ { X86_FEATURE_PROC_FEEDBACK, CPUID_EDX, 11, 0x80000007, 0 },
+ { X86_FEATURE_MBA, CPUID_EBX, 6, 0x80000008, 0 },
++ { X86_FEATURE_COHERENCY_SFW_NO, CPUID_EBX, 31, 0x8000001f, 0 },
+ { X86_FEATURE_TSA_SQ_NO, CPUID_ECX, 1, 0x80000021, 0 },
+ { X86_FEATURE_TSA_L1_NO, CPUID_ECX, 2, 0x80000021, 0 },
+ { X86_FEATURE_PERFMON_V2, CPUID_EAX, 0, 0x80000022, 0 },
+diff --git a/arch/x86/kernel/sev-shared.c b/arch/x86/kernel/sev-shared.c
+index 3fe76bf17d95e9..e658e83c62aeed 100644
+--- a/arch/x86/kernel/sev-shared.c
++++ b/arch/x86/kernel/sev-shared.c
+@@ -1064,3 +1064,21 @@ static void __head setup_cpuid_table(const struct cc_blob_sev_info *cc_info)
+ RIP_REL_REF(cpuid_ext_range_max) = fn->eax;
+ }
+ }
++
++static inline void sev_evict_cache(void *va, int npages)
++{
++ volatile u8 val __always_unused;
++ u8 *bytes = va;
++ int page_idx;
++
++ /*
++ * For SEV guests, a read from the first/last cache-lines of a 4K page
++ * using the guest key is sufficient to cause a flush of all cache-lines
++ * associated with that 4K page without incurring all the overhead of a
++ * full CLFLUSH sequence.
++ */
++ for (page_idx = 0; page_idx < npages; page_idx++) {
++ val = bytes[page_idx * PAGE_SIZE];
++ val = bytes[page_idx * PAGE_SIZE + PAGE_SIZE - 1];
++ }
++}
+diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
+index f8a8249ae11779..7b7fa85d154792 100644
+--- a/arch/x86/kernel/sev.c
++++ b/arch/x86/kernel/sev.c
+@@ -676,10 +676,12 @@ static u64 __init get_jump_table_addr(void)
+
+ static void pvalidate_pages(unsigned long vaddr, unsigned long npages, bool validate)
+ {
+- unsigned long vaddr_end;
++ unsigned long vaddr_begin, vaddr_end;
+ int rc;
+
+ vaddr = vaddr & PAGE_MASK;
++
++ vaddr_begin = vaddr;
+ vaddr_end = vaddr + (npages << PAGE_SHIFT);
+
+ while (vaddr < vaddr_end) {
+@@ -689,6 +691,13 @@ static void pvalidate_pages(unsigned long vaddr, unsigned long npages, bool vali
+
+ vaddr = vaddr + PAGE_SIZE;
+ }
++
++ /*
++ * If validating memory (making it private) and affected by the
++ * cache-coherency vulnerability, perform the cache eviction mitigation.
++ */
++ if (validate && !cpu_feature_enabled(X86_FEATURE_COHERENCY_SFW_NO))
++ sev_evict_cache((void *)vaddr_begin, npages);
+ }
+
+ static void __head early_set_pages_state(unsigned long paddr, unsigned long npages, enum psc_op op)
+diff --git a/arch/x86/mm/extable.c b/arch/x86/mm/extable.c
+index 60814e110a54ca..fcb8e7af5b4dc3 100644
+--- a/arch/x86/mm/extable.c
++++ b/arch/x86/mm/extable.c
+@@ -121,13 +121,12 @@ static bool ex_handler_sgx(const struct exception_table_entry *fixup,
+ static bool ex_handler_fprestore(const struct exception_table_entry *fixup,
+ struct pt_regs *regs)
+ {
+- regs->ip = ex_fixup_addr(fixup);
+-
+ WARN_ONCE(1, "Bad FPU state detected at %pB, reinitializing FPU registers.",
+ (void *)instruction_pointer(regs));
+
+ fpu_reset_from_exception_fixup();
+- return true;
++
++ return ex_handler_default(fixup, regs);
+ }
+
+ static bool ex_handler_uaccess(const struct exception_table_entry *fixup,
+diff --git a/drivers/base/regmap/regmap.c b/drivers/base/regmap/regmap.c
+index f0e314abcafc53..168532931c86da 100644
+--- a/drivers/base/regmap/regmap.c
++++ b/drivers/base/regmap/regmap.c
+@@ -1266,6 +1266,8 @@ struct regmap *__regmap_init(struct device *dev,
+ err_map:
+ kfree(map);
+ err:
++ if (bus && bus->free_on_exit)
++ kfree(bus);
+ return ERR_PTR(ret);
+ }
+ EXPORT_SYMBOL_GPL(__regmap_init);
+diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
+index f2a99e5d304dd0..3a7c42f76d894a 100644
+--- a/drivers/block/ublk_drv.c
++++ b/drivers/block/ublk_drv.c
+@@ -1391,7 +1391,7 @@ static void ublk_deinit_queues(struct ublk_device *ub)
+
+ for (i = 0; i < nr_queues; i++)
+ ublk_deinit_queue(ub, i);
+- kfree(ub->__queues);
++ kvfree(ub->__queues);
+ }
+
+ static int ublk_init_queues(struct ublk_device *ub)
+@@ -1402,7 +1402,7 @@ static int ublk_init_queues(struct ublk_device *ub)
+ int i, ret = -ENOMEM;
+
+ ub->queue_size = ubq_size;
+- ub->__queues = kcalloc(nr_queues, ubq_size, GFP_KERNEL);
++ ub->__queues = kvcalloc(nr_queues, ubq_size, GFP_KERNEL);
+ if (!ub->__queues)
+ return ret;
+
+diff --git a/drivers/bus/fsl-mc/fsl-mc-bus.c b/drivers/bus/fsl-mc/fsl-mc-bus.c
+index 6e4556530df585..3447930240b85e 100644
+--- a/drivers/bus/fsl-mc/fsl-mc-bus.c
++++ b/drivers/bus/fsl-mc/fsl-mc-bus.c
+@@ -947,6 +947,7 @@ struct fsl_mc_device *fsl_mc_get_endpoint(struct fsl_mc_device *mc_dev,
+ struct fsl_mc_obj_desc endpoint_desc = {{ 0 }};
+ struct dprc_endpoint endpoint1 = {{ 0 }};
+ struct dprc_endpoint endpoint2 = {{ 0 }};
++ struct fsl_mc_bus *mc_bus;
+ int state, err;
+
+ mc_bus_dev = to_fsl_mc_device(mc_dev->dev.parent);
+@@ -970,6 +971,8 @@ struct fsl_mc_device *fsl_mc_get_endpoint(struct fsl_mc_device *mc_dev,
+ strcpy(endpoint_desc.type, endpoint2.type);
+ endpoint_desc.id = endpoint2.id;
+ endpoint = fsl_mc_device_lookup(&endpoint_desc, mc_bus_dev);
++ if (endpoint)
++ return endpoint;
+
+ /*
+ * We know that the device has an endpoint because we verified by
+@@ -977,17 +980,13 @@ struct fsl_mc_device *fsl_mc_get_endpoint(struct fsl_mc_device *mc_dev,
+ * yet discovered by the fsl-mc bus, thus the lookup returned NULL.
+ * Force a rescan of the devices in this container and retry the lookup.
+ */
+- if (!endpoint) {
+- struct fsl_mc_bus *mc_bus = to_fsl_mc_bus(mc_bus_dev);
+-
+- if (mutex_trylock(&mc_bus->scan_mutex)) {
+- err = dprc_scan_objects(mc_bus_dev, true);
+- mutex_unlock(&mc_bus->scan_mutex);
+- }
+-
+- if (err < 0)
+- return ERR_PTR(err);
++ mc_bus = to_fsl_mc_bus(mc_bus_dev);
++ if (mutex_trylock(&mc_bus->scan_mutex)) {
++ err = dprc_scan_objects(mc_bus_dev, true);
++ mutex_unlock(&mc_bus->scan_mutex);
+ }
++ if (err < 0)
++ return ERR_PTR(err);
+
+ endpoint = fsl_mc_device_lookup(&endpoint_desc, mc_bus_dev);
+ /*
+diff --git a/drivers/char/hw_random/mtk-rng.c b/drivers/char/hw_random/mtk-rng.c
+index 3e00506543b69c..72269d0f2a4ecd 100644
+--- a/drivers/char/hw_random/mtk-rng.c
++++ b/drivers/char/hw_random/mtk-rng.c
+@@ -142,7 +142,9 @@ static int mtk_rng_probe(struct platform_device *pdev)
+ dev_set_drvdata(&pdev->dev, priv);
+ pm_runtime_set_autosuspend_delay(&pdev->dev, RNG_AUTOSUSPEND_TIMEOUT);
+ pm_runtime_use_autosuspend(&pdev->dev);
+- devm_pm_runtime_enable(&pdev->dev);
++ ret = devm_pm_runtime_enable(&pdev->dev);
++ if (ret)
++ return ret;
+
+ dev_info(&pdev->dev, "registered RNG driver\n");
+
+diff --git a/drivers/clk/clk-axi-clkgen.c b/drivers/clk/clk-axi-clkgen.c
+index bb5cd9d3899307..df9a4c77835148 100644
+--- a/drivers/clk/clk-axi-clkgen.c
++++ b/drivers/clk/clk-axi-clkgen.c
+@@ -118,7 +118,7 @@ static const struct axi_clkgen_limits axi_clkgen_zynqmp_default_limits = {
+
+ static const struct axi_clkgen_limits axi_clkgen_zynq_default_limits = {
+ .fpfd_min = 10000,
+- .fpfd_max = 300000,
++ .fpfd_max = 450000,
+ .fvco_min = 600000,
+ .fvco_max = 1200000,
+ };
+diff --git a/drivers/clk/davinci/psc.c b/drivers/clk/davinci/psc.c
+index 42a59dbd49c8bc..ecb111be56f740 100644
+--- a/drivers/clk/davinci/psc.c
++++ b/drivers/clk/davinci/psc.c
+@@ -278,6 +278,11 @@ davinci_lpsc_clk_register(struct device *dev, const char *name,
+
+ lpsc->pm_domain.name = devm_kasprintf(dev, GFP_KERNEL, "%s: %s",
+ best_dev_name(dev), name);
++ if (!lpsc->pm_domain.name) {
++ clk_hw_unregister(&lpsc->hw);
++ kfree(lpsc);
++ return ERR_PTR(-ENOMEM);
++ }
+ lpsc->pm_domain.attach_dev = davinci_psc_genpd_attach_dev;
+ lpsc->pm_domain.detach_dev = davinci_psc_genpd_detach_dev;
+ lpsc->pm_domain.flags = GENPD_FLAG_PM_CLK;
+diff --git a/drivers/clk/sunxi-ng/ccu-sun8i-v3s.c b/drivers/clk/sunxi-ng/ccu-sun8i-v3s.c
+index fbb3529f0d3ef7..8263beac203ba1 100644
+--- a/drivers/clk/sunxi-ng/ccu-sun8i-v3s.c
++++ b/drivers/clk/sunxi-ng/ccu-sun8i-v3s.c
+@@ -347,8 +347,7 @@ static SUNXI_CCU_GATE(dram_ohci_clk, "dram-ohci", "dram",
+
+ static const char * const de_parents[] = { "pll-video", "pll-periph0" };
+ static SUNXI_CCU_M_WITH_MUX_GATE(de_clk, "de", de_parents,
+- 0x104, 0, 4, 24, 2, BIT(31),
+- CLK_SET_RATE_PARENT);
++ 0x104, 0, 4, 24, 3, BIT(31), 0);
+
+ static const char * const tcon_parents[] = { "pll-video" };
+ static SUNXI_CCU_M_WITH_MUX_GATE(tcon_clk, "tcon", tcon_parents,
+diff --git a/drivers/clk/xilinx/xlnx_vcu.c b/drivers/clk/xilinx/xlnx_vcu.c
+index d66b1315114e65..292d50ba01125d 100644
+--- a/drivers/clk/xilinx/xlnx_vcu.c
++++ b/drivers/clk/xilinx/xlnx_vcu.c
+@@ -587,8 +587,8 @@ static void xvcu_unregister_clock_provider(struct xvcu_device *xvcu)
+ xvcu_clk_hw_unregister_leaf(hws[CLK_XVCU_ENC_MCU]);
+ if (!IS_ERR_OR_NULL(hws[CLK_XVCU_ENC_CORE]))
+ xvcu_clk_hw_unregister_leaf(hws[CLK_XVCU_ENC_CORE]);
+-
+- clk_hw_unregister_fixed_factor(xvcu->pll_post);
++ if (!IS_ERR_OR_NULL(xvcu->pll_post))
++ clk_hw_unregister_fixed_factor(xvcu->pll_post);
+ }
+
+ /**
+diff --git a/drivers/comedi/drivers/comedi_test.c b/drivers/comedi/drivers/comedi_test.c
+index 626d53bf9146ac..aecb5f193be1b8 100644
+--- a/drivers/comedi/drivers/comedi_test.c
++++ b/drivers/comedi/drivers/comedi_test.c
+@@ -788,7 +788,7 @@ static void waveform_detach(struct comedi_device *dev)
+ {
+ struct waveform_private *devpriv = dev->private;
+
+- if (devpriv) {
++ if (devpriv && dev->n_subdevices) {
+ del_timer_sync(&devpriv->ai_timer);
+ del_timer_sync(&devpriv->ao_timer);
+ }
+diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
+index 3f35ce19c7b64f..805b4d26e9d21c 100644
+--- a/drivers/cpufreq/cpufreq.c
++++ b/drivers/cpufreq/cpufreq.c
+@@ -1239,6 +1239,8 @@ static struct cpufreq_policy *cpufreq_policy_alloc(unsigned int cpu)
+ goto err_free_real_cpus;
+ }
+
++ init_rwsem(&policy->rwsem);
++
+ freq_constraints_init(&policy->constraints);
+
+ policy->nb_min.notifier_call = cpufreq_notifier_min;
+@@ -1261,7 +1263,6 @@ static struct cpufreq_policy *cpufreq_policy_alloc(unsigned int cpu)
+ }
+
+ INIT_LIST_HEAD(&policy->policy_list);
+- init_rwsem(&policy->rwsem);
+ spin_lock_init(&policy->transition_lock);
+ init_waitqueue_head(&policy->transition_wait);
+ INIT_WORK(&policy->update, handle_update);
+@@ -2882,15 +2883,6 @@ int cpufreq_register_driver(struct cpufreq_driver *driver_data)
+ cpufreq_driver = driver_data;
+ write_unlock_irqrestore(&cpufreq_driver_lock, flags);
+
+- /*
+- * Mark support for the scheduler's frequency invariance engine for
+- * drivers that implement target(), target_index() or fast_switch().
+- */
+- if (!cpufreq_driver->setpolicy) {
+- static_branch_enable_cpuslocked(&cpufreq_freq_invariance);
+- pr_debug("supports frequency invariance");
+- }
+-
+ if (driver_data->setpolicy)
+ driver_data->flags |= CPUFREQ_CONST_LOOPS;
+
+@@ -2921,6 +2913,15 @@ int cpufreq_register_driver(struct cpufreq_driver *driver_data)
+ hp_online = ret;
+ ret = 0;
+
++ /*
++ * Mark support for the scheduler's frequency invariance engine for
++ * drivers that implement target(), target_index() or fast_switch().
++ */
++ if (!cpufreq_driver->setpolicy) {
++ static_branch_enable_cpuslocked(&cpufreq_freq_invariance);
++ pr_debug("supports frequency invariance");
++ }
++
+ pr_debug("driver %s up and running\n", driver_data->name);
+ goto out;
+
+diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
+index d471d74df3bbbc..ee676ae1bc4882 100644
+--- a/drivers/cpufreq/intel_pstate.c
++++ b/drivers/cpufreq/intel_pstate.c
+@@ -2867,8 +2867,8 @@ static int intel_cpufreq_update_pstate(struct cpufreq_policy *policy,
+ int max_pstate = policy->strict_target ?
+ target_pstate : cpu->max_perf_ratio;
+
+- intel_cpufreq_hwp_update(cpu, target_pstate, max_pstate, 0,
+- fast_switch);
++ intel_cpufreq_hwp_update(cpu, target_pstate, max_pstate,
++ target_pstate, fast_switch);
+ } else if (target_pstate != old_pstate) {
+ intel_cpufreq_perf_ctl_update(cpu, target_pstate, fast_switch);
+ }
+diff --git a/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-cipher.c b/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-cipher.c
+index 4c6afc7367235a..f7b6dfc3170a81 100644
+--- a/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-cipher.c
++++ b/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-cipher.c
+@@ -260,8 +260,8 @@ static int sun8i_ce_cipher_prepare(struct crypto_engine *engine, void *async_req
+ }
+
+ chan->timeout = areq->cryptlen;
+- rctx->nr_sgs = nr_sgs;
+- rctx->nr_sgd = nr_sgd;
++ rctx->nr_sgs = ns;
++ rctx->nr_sgd = nd;
+ return 0;
+
+ theend_sgs:
+diff --git a/drivers/crypto/ccp/ccp-debugfs.c b/drivers/crypto/ccp/ccp-debugfs.c
+index a1055554b47a24..dc26bc22c91d1d 100644
+--- a/drivers/crypto/ccp/ccp-debugfs.c
++++ b/drivers/crypto/ccp/ccp-debugfs.c
+@@ -319,5 +319,8 @@ void ccp5_debugfs_setup(struct ccp_device *ccp)
+
+ void ccp5_debugfs_destroy(void)
+ {
++ mutex_lock(&ccp_debugfs_lock);
+ debugfs_remove_recursive(ccp_debugfs_dir);
++ ccp_debugfs_dir = NULL;
++ mutex_unlock(&ccp_debugfs_lock);
+ }
+diff --git a/drivers/crypto/img-hash.c b/drivers/crypto/img-hash.c
+index 9629e98bd68b70..0de49efb9ef9a7 100644
+--- a/drivers/crypto/img-hash.c
++++ b/drivers/crypto/img-hash.c
+@@ -436,7 +436,7 @@ static int img_hash_write_via_dma_stop(struct img_hash_dev *hdev)
+ struct img_hash_request_ctx *ctx = ahash_request_ctx(hdev->req);
+
+ if (ctx->flags & DRIVER_FLAGS_SG)
+- dma_unmap_sg(hdev->dev, ctx->sg, ctx->dma_ct, DMA_TO_DEVICE);
++ dma_unmap_sg(hdev->dev, ctx->sg, 1, DMA_TO_DEVICE);
+
+ return 0;
+ }
+diff --git a/drivers/crypto/inside-secure/safexcel_hash.c b/drivers/crypto/inside-secure/safexcel_hash.c
+index ecf64cc35fffc7..08227d44a27ba2 100644
+--- a/drivers/crypto/inside-secure/safexcel_hash.c
++++ b/drivers/crypto/inside-secure/safexcel_hash.c
+@@ -249,7 +249,9 @@ static int safexcel_handle_req_result(struct safexcel_crypto_priv *priv,
+ safexcel_complete(priv, ring);
+
+ if (sreq->nents) {
+- dma_unmap_sg(priv->dev, areq->src, sreq->nents, DMA_TO_DEVICE);
++ dma_unmap_sg(priv->dev, areq->src,
++ sg_nents_for_len(areq->src, areq->nbytes),
++ DMA_TO_DEVICE);
+ sreq->nents = 0;
+ }
+
+@@ -497,7 +499,9 @@ static int safexcel_ahash_send_req(struct crypto_async_request *async, int ring,
+ DMA_FROM_DEVICE);
+ unmap_sg:
+ if (req->nents) {
+- dma_unmap_sg(priv->dev, areq->src, req->nents, DMA_TO_DEVICE);
++ dma_unmap_sg(priv->dev, areq->src,
++ sg_nents_for_len(areq->src, areq->nbytes),
++ DMA_TO_DEVICE);
+ req->nents = 0;
+ }
+ cdesc_rollback:
+diff --git a/drivers/crypto/keembay/keembay-ocs-hcu-core.c b/drivers/crypto/keembay/keembay-ocs-hcu-core.c
+index 0379dbf32a4c46..6b46c37f00ae10 100644
+--- a/drivers/crypto/keembay/keembay-ocs-hcu-core.c
++++ b/drivers/crypto/keembay/keembay-ocs-hcu-core.c
+@@ -68,6 +68,7 @@ struct ocs_hcu_ctx {
+ * @sg_data_total: Total data in the SG list at any time.
+ * @sg_data_offset: Offset into the data of the current individual SG node.
+ * @sg_dma_nents: Number of sg entries mapped in dma_list.
++ * @nents: Number of entries in the scatterlist.
+ */
+ struct ocs_hcu_rctx {
+ struct ocs_hcu_dev *hcu_dev;
+@@ -91,6 +92,7 @@ struct ocs_hcu_rctx {
+ unsigned int sg_data_total;
+ unsigned int sg_data_offset;
+ unsigned int sg_dma_nents;
++ unsigned int nents;
+ };
+
+ /**
+@@ -199,7 +201,7 @@ static void kmb_ocs_hcu_dma_cleanup(struct ahash_request *req,
+
+ /* Unmap req->src (if mapped). */
+ if (rctx->sg_dma_nents) {
+- dma_unmap_sg(dev, req->src, rctx->sg_dma_nents, DMA_TO_DEVICE);
++ dma_unmap_sg(dev, req->src, rctx->nents, DMA_TO_DEVICE);
+ rctx->sg_dma_nents = 0;
+ }
+
+@@ -260,6 +262,10 @@ static int kmb_ocs_dma_prepare(struct ahash_request *req)
+ rc = -ENOMEM;
+ goto cleanup;
+ }
++
++ /* Save the value of nents to pass to dma_unmap_sg. */
++ rctx->nents = nents;
++
+ /*
+ * The value returned by dma_map_sg() can be < nents; so update
+ * nents accordingly.
+diff --git a/drivers/crypto/marvell/cesa/cipher.c b/drivers/crypto/marvell/cesa/cipher.c
+index 3876e3ce822f44..eabed9d977df6c 100644
+--- a/drivers/crypto/marvell/cesa/cipher.c
++++ b/drivers/crypto/marvell/cesa/cipher.c
+@@ -75,9 +75,12 @@ mv_cesa_skcipher_dma_cleanup(struct skcipher_request *req)
+ static inline void mv_cesa_skcipher_cleanup(struct skcipher_request *req)
+ {
+ struct mv_cesa_skcipher_req *creq = skcipher_request_ctx(req);
++ struct mv_cesa_engine *engine = creq->base.engine;
+
+ if (mv_cesa_req_get_type(&creq->base) == CESA_DMA_REQ)
+ mv_cesa_skcipher_dma_cleanup(req);
++
++ atomic_sub(req->cryptlen, &engine->load);
+ }
+
+ static void mv_cesa_skcipher_std_step(struct skcipher_request *req)
+@@ -212,7 +215,6 @@ mv_cesa_skcipher_complete(struct crypto_async_request *req)
+ struct mv_cesa_engine *engine = creq->base.engine;
+ unsigned int ivsize;
+
+- atomic_sub(skreq->cryptlen, &engine->load);
+ ivsize = crypto_skcipher_ivsize(crypto_skcipher_reqtfm(skreq));
+
+ if (mv_cesa_req_get_type(&creq->base) == CESA_DMA_REQ) {
+diff --git a/drivers/crypto/marvell/cesa/hash.c b/drivers/crypto/marvell/cesa/hash.c
+index 72b0f863dee072..66ebe26e59cb07 100644
+--- a/drivers/crypto/marvell/cesa/hash.c
++++ b/drivers/crypto/marvell/cesa/hash.c
+@@ -110,9 +110,12 @@ static inline void mv_cesa_ahash_dma_cleanup(struct ahash_request *req)
+ static inline void mv_cesa_ahash_cleanup(struct ahash_request *req)
+ {
+ struct mv_cesa_ahash_req *creq = ahash_request_ctx(req);
++ struct mv_cesa_engine *engine = creq->base.engine;
+
+ if (mv_cesa_req_get_type(&creq->base) == CESA_DMA_REQ)
+ mv_cesa_ahash_dma_cleanup(req);
++
++ atomic_sub(req->nbytes, &engine->load);
+ }
+
+ static void mv_cesa_ahash_last_cleanup(struct ahash_request *req)
+@@ -395,8 +398,6 @@ static void mv_cesa_ahash_complete(struct crypto_async_request *req)
+ }
+ }
+ }
+-
+- atomic_sub(ahashreq->nbytes, &engine->load);
+ }
+
+ static void mv_cesa_ahash_prepare(struct crypto_async_request *req,
+diff --git a/drivers/crypto/qat/qat_common/adf_transport_debug.c b/drivers/crypto/qat/qat_common/adf_transport_debug.c
+index e2dd568b87b519..621b5d3dfcef91 100644
+--- a/drivers/crypto/qat/qat_common/adf_transport_debug.c
++++ b/drivers/crypto/qat/qat_common/adf_transport_debug.c
+@@ -31,8 +31,10 @@ static void *adf_ring_next(struct seq_file *sfile, void *v, loff_t *pos)
+ struct adf_etr_ring_data *ring = sfile->private;
+
+ if (*pos >= (ADF_SIZE_TO_RING_SIZE_IN_BYTES(ring->ring_size) /
+- ADF_MSG_SIZE_TO_BYTES(ring->msg_size)))
++ ADF_MSG_SIZE_TO_BYTES(ring->msg_size))) {
++ (*pos)++;
+ return NULL;
++ }
+
+ return ring->base_addr +
+ (ADF_MSG_SIZE_TO_BYTES(ring->msg_size) * (*pos)++);
+diff --git a/drivers/devfreq/devfreq.c b/drivers/devfreq/devfreq.c
+index 344e276165e41c..9ab97164443eed 100644
+--- a/drivers/devfreq/devfreq.c
++++ b/drivers/devfreq/devfreq.c
+@@ -1381,15 +1381,11 @@ int devfreq_remove_governor(struct devfreq_governor *governor)
+ int ret;
+ struct device *dev = devfreq->dev.parent;
+
++ if (!devfreq->governor)
++ continue;
++
+ if (!strncmp(devfreq->governor->name, governor->name,
+ DEVFREQ_NAME_LEN)) {
+- /* we should have a devfreq governor! */
+- if (!devfreq->governor) {
+- dev_warn(dev, "%s: Governor %s NOT present\n",
+- __func__, governor->name);
+- continue;
+- /* Fall through */
+- }
+ ret = devfreq->governor->event_handler(devfreq,
+ DEVFREQ_GOV_STOP, NULL);
+ if (ret) {
+diff --git a/drivers/dma/mv_xor.c b/drivers/dma/mv_xor.c
+index ea48661e87ea70..ca0ba1d462832d 100644
+--- a/drivers/dma/mv_xor.c
++++ b/drivers/dma/mv_xor.c
+@@ -1061,8 +1061,16 @@ mv_xor_channel_add(struct mv_xor_device *xordev,
+ */
+ mv_chan->dummy_src_addr = dma_map_single(dma_dev->dev,
+ mv_chan->dummy_src, MV_XOR_MIN_BYTE_COUNT, DMA_FROM_DEVICE);
++ if (dma_mapping_error(dma_dev->dev, mv_chan->dummy_src_addr))
++ return ERR_PTR(-ENOMEM);
++
+ mv_chan->dummy_dst_addr = dma_map_single(dma_dev->dev,
+ mv_chan->dummy_dst, MV_XOR_MIN_BYTE_COUNT, DMA_TO_DEVICE);
++ if (dma_mapping_error(dma_dev->dev, mv_chan->dummy_dst_addr)) {
++ ret = -ENOMEM;
++ goto err_unmap_src;
++ }
++
+
+ /* allocate coherent memory for hardware descriptors
+ * note: writecombine gives slightly better performance, but
+@@ -1071,8 +1079,10 @@ mv_xor_channel_add(struct mv_xor_device *xordev,
+ mv_chan->dma_desc_pool_virt =
+ dma_alloc_wc(&pdev->dev, MV_XOR_POOL_SIZE, &mv_chan->dma_desc_pool,
+ GFP_KERNEL);
+- if (!mv_chan->dma_desc_pool_virt)
+- return ERR_PTR(-ENOMEM);
++ if (!mv_chan->dma_desc_pool_virt) {
++ ret = -ENOMEM;
++ goto err_unmap_dst;
++ }
+
+ /* discover transaction capabilites from the platform data */
+ dma_dev->cap_mask = cap_mask;
+@@ -1155,6 +1165,13 @@ mv_xor_channel_add(struct mv_xor_device *xordev,
+ err_free_dma:
+ dma_free_coherent(&pdev->dev, MV_XOR_POOL_SIZE,
+ mv_chan->dma_desc_pool_virt, mv_chan->dma_desc_pool);
++err_unmap_dst:
++ dma_unmap_single(dma_dev->dev, mv_chan->dummy_dst_addr,
++ MV_XOR_MIN_BYTE_COUNT, DMA_TO_DEVICE);
++err_unmap_src:
++ dma_unmap_single(dma_dev->dev, mv_chan->dummy_src_addr,
++ MV_XOR_MIN_BYTE_COUNT, DMA_FROM_DEVICE);
++
+ return ERR_PTR(ret);
+ }
+
+diff --git a/drivers/dma/nbpfaxi.c b/drivers/dma/nbpfaxi.c
+index e389945e36f252..256ae956b55e93 100644
+--- a/drivers/dma/nbpfaxi.c
++++ b/drivers/dma/nbpfaxi.c
+@@ -712,6 +712,9 @@ static int nbpf_desc_page_alloc(struct nbpf_channel *chan)
+ list_add_tail(&ldesc->node, &lhead);
+ ldesc->hwdesc_dma_addr = dma_map_single(dchan->device->dev,
+ hwdesc, sizeof(*hwdesc), DMA_TO_DEVICE);
++ if (dma_mapping_error(dchan->device->dev,
++ ldesc->hwdesc_dma_addr))
++ goto unmap_error;
+
+ dev_dbg(dev, "%s(): mapped 0x%p to %pad\n", __func__,
+ hwdesc, &ldesc->hwdesc_dma_addr);
+@@ -738,6 +741,16 @@ static int nbpf_desc_page_alloc(struct nbpf_channel *chan)
+ spin_unlock_irq(&chan->lock);
+
+ return ARRAY_SIZE(dpage->desc);
++
++unmap_error:
++ while (i--) {
++ ldesc--; hwdesc--;
++
++ dma_unmap_single(dchan->device->dev, ldesc->hwdesc_dma_addr,
++ sizeof(hwdesc), DMA_TO_DEVICE);
++ }
++
++ return -ENOMEM;
+ }
+
+ static void nbpf_desc_put(struct nbpf_desc *desc)
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
+index 7fa5e70f1aacea..09ce90cf6b532f 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
+@@ -1076,13 +1076,12 @@ svm_range_split_head(struct svm_range *prange,
+ }
+
+ static void
+-svm_range_add_child(struct svm_range *prange, struct mm_struct *mm,
+- struct svm_range *pchild, enum svm_work_list_ops op)
++svm_range_add_child(struct svm_range *prange, struct svm_range *pchild, enum svm_work_list_ops op)
+ {
+ pr_debug("add child 0x%p [0x%lx 0x%lx] to prange 0x%p child list %d\n",
+ pchild, pchild->start, pchild->last, prange, op);
+
+- pchild->work_item.mm = mm;
++ pchild->work_item.mm = NULL;
+ pchild->work_item.op = op;
+ list_add_tail(&pchild->child_list, &prange->child_list);
+ }
+@@ -1128,14 +1127,14 @@ svm_range_split_by_granularity(struct kfd_process *p, struct mm_struct *mm,
+ r = svm_range_split(prange, start, prange->last, &head);
+ if (r)
+ return r;
+- svm_range_add_child(parent, mm, head, SVM_OP_ADD_RANGE);
++ svm_range_add_child(parent, head, SVM_OP_ADD_RANGE);
+ }
+
+ if (last < prange->last) {
+ r = svm_range_split(prange, prange->start, last, &tail);
+ if (r)
+ return r;
+- svm_range_add_child(parent, mm, tail, SVM_OP_ADD_RANGE);
++ svm_range_add_child(parent, tail, SVM_OP_ADD_RANGE);
+ }
+
+ /* xnack on, update mapping on GPUs with ACCESS_IN_PLACE */
+@@ -2265,15 +2264,17 @@ svm_range_add_list_work(struct svm_range_list *svms, struct svm_range *prange,
+ prange->work_item.op != SVM_OP_UNMAP_RANGE)
+ prange->work_item.op = op;
+ } else {
+- prange->work_item.op = op;
+-
+- /* Pairs with mmput in deferred_list_work */
+- mmget(mm);
+- prange->work_item.mm = mm;
+- list_add_tail(&prange->deferred_list,
+- &prange->svms->deferred_range_list);
+- pr_debug("add prange 0x%p [0x%lx 0x%lx] to work list op %d\n",
+- prange, prange->start, prange->last, op);
++ /* Pairs with mmput in deferred_list_work.
++ * If process is exiting and mm is gone, don't update mmu notifier.
++ */
++ if (mmget_not_zero(mm)) {
++ prange->work_item.mm = mm;
++ prange->work_item.op = op;
++ list_add_tail(&prange->deferred_list,
++ &prange->svms->deferred_range_list);
++ pr_debug("add prange 0x%p [0x%lx 0x%lx] to work list op %d\n",
++ prange, prange->start, prange->last, op);
++ }
+ }
+ spin_unlock(&svms->deferred_list_lock);
+ }
+@@ -2287,8 +2288,7 @@ void schedule_deferred_list_work(struct svm_range_list *svms)
+ }
+
+ static void
+-svm_range_unmap_split(struct mm_struct *mm, struct svm_range *parent,
+- struct svm_range *prange, unsigned long start,
++svm_range_unmap_split(struct svm_range *parent, struct svm_range *prange, unsigned long start,
+ unsigned long last)
+ {
+ struct svm_range *head;
+@@ -2309,12 +2309,12 @@ svm_range_unmap_split(struct mm_struct *mm, struct svm_range *parent,
+ svm_range_split(tail, last + 1, tail->last, &head);
+
+ if (head != prange && tail != prange) {
+- svm_range_add_child(parent, mm, head, SVM_OP_UNMAP_RANGE);
+- svm_range_add_child(parent, mm, tail, SVM_OP_ADD_RANGE);
++ svm_range_add_child(parent, head, SVM_OP_UNMAP_RANGE);
++ svm_range_add_child(parent, tail, SVM_OP_ADD_RANGE);
+ } else if (tail != prange) {
+- svm_range_add_child(parent, mm, tail, SVM_OP_UNMAP_RANGE);
++ svm_range_add_child(parent, tail, SVM_OP_UNMAP_RANGE);
+ } else if (head != prange) {
+- svm_range_add_child(parent, mm, head, SVM_OP_UNMAP_RANGE);
++ svm_range_add_child(parent, head, SVM_OP_UNMAP_RANGE);
+ } else if (parent != prange) {
+ prange->work_item.op = SVM_OP_UNMAP_RANGE;
+ }
+@@ -2353,14 +2353,14 @@ svm_range_unmap_from_cpu(struct mm_struct *mm, struct svm_range *prange,
+ l = min(last, pchild->last);
+ if (l >= s)
+ svm_range_unmap_from_gpus(pchild, s, l, trigger);
+- svm_range_unmap_split(mm, prange, pchild, start, last);
++ svm_range_unmap_split(prange, pchild, start, last);
+ mutex_unlock(&pchild->lock);
+ }
+ s = max(start, prange->start);
+ l = min(last, prange->last);
+ if (l >= s)
+ svm_range_unmap_from_gpus(prange, s, l, trigger);
+- svm_range_unmap_split(mm, prange, prange, start, last);
++ svm_range_unmap_split(prange, prange, start, last);
+
+ if (unmap_parent)
+ svm_range_add_list_work(svms, prange, mm, SVM_OP_UNMAP_RANGE);
+@@ -2403,8 +2403,6 @@ svm_range_cpu_invalidate_pagetables(struct mmu_interval_notifier *mni,
+
+ if (range->event == MMU_NOTIFY_RELEASE)
+ return true;
+- if (!mmget_not_zero(mni->mm))
+- return true;
+
+ start = mni->interval_tree.start;
+ last = mni->interval_tree.last;
+@@ -2431,7 +2429,6 @@ svm_range_cpu_invalidate_pagetables(struct mmu_interval_notifier *mni,
+ }
+
+ svm_range_unlock(prange);
+- mmput(mni->mm);
+
+ return true;
+ }
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu_helper.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu_helper.c
+index d0b1ab6c452312..54d191b2dc202f 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu_helper.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu_helper.c
+@@ -149,7 +149,7 @@ int phm_wait_on_indirect_register(struct pp_hwmgr *hwmgr,
+ }
+
+ cgs_write_register(hwmgr->device, indirect_port, index);
+- return phm_wait_on_register(hwmgr, indirect_port + 1, mask, value);
++ return phm_wait_on_register(hwmgr, indirect_port + 1, value, mask);
+ }
+
+ int phm_wait_for_register_unequal(struct pp_hwmgr *hwmgr,
+diff --git a/drivers/gpu/drm/bridge/ti-sn65dsi86.c b/drivers/gpu/drm/bridge/ti-sn65dsi86.c
+index 26a064624d9761..6595f954135ad7 100644
+--- a/drivers/gpu/drm/bridge/ti-sn65dsi86.c
++++ b/drivers/gpu/drm/bridge/ti-sn65dsi86.c
+@@ -1333,7 +1333,7 @@ static int ti_sn_bridge_probe(struct auxiliary_device *adev,
+ regmap_update_bits(pdata->regmap, SN_HPD_DISABLE_REG,
+ HPD_DISABLE, 0);
+ mutex_unlock(&pdata->comms_mutex);
+- };
++ }
+
+ drm_bridge_add(&pdata->bridge);
+
+diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
+index 3f65d890b8a904..9120e367a91327 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp.c
++++ b/drivers/gpu/drm/i915/display/intel_dp.c
+@@ -1139,6 +1139,12 @@ int intel_dp_rate_select(struct intel_dp *intel_dp, int rate)
+ void intel_dp_compute_rate(struct intel_dp *intel_dp, int port_clock,
+ u8 *link_bw, u8 *rate_select)
+ {
++ struct drm_i915_private *i915 = dp_to_i915(intel_dp);
++
++ /* FIXME g4x can't generate an exact 2.7GHz with the 96MHz non-SSC refclk */
++ if (IS_G4X(i915) && port_clock == 268800)
++ port_clock = 270000;
++
+ /* eDP 1.4 rate select method. */
+ if (intel_dp->use_rate_select) {
+ *link_bw = 0;
+diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_fb.c b/drivers/gpu/drm/rockchip/rockchip_drm_fb.c
+index 092bf863110b75..9545ee64a90a9f 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_drm_fb.c
++++ b/drivers/gpu/drm/rockchip/rockchip_drm_fb.c
+@@ -82,16 +82,9 @@ rockchip_fb_create(struct drm_device *dev, struct drm_file *file,
+ }
+
+ if (drm_is_afbc(mode_cmd->modifier[0])) {
+- int ret, i;
+-
+ ret = drm_gem_fb_afbc_init(dev, mode_cmd, afbc_fb);
+ if (ret) {
+- struct drm_gem_object **obj = afbc_fb->base.obj;
+-
+- for (i = 0; i < info->num_planes; ++i)
+- drm_gem_object_put(obj[i]);
+-
+- kfree(afbc_fb);
++ drm_framebuffer_put(&afbc_fb->base);
+ return ERR_PTR(ret);
+ }
+ }
+diff --git a/drivers/i2c/busses/i2c-qup.c b/drivers/i2c/busses/i2c-qup.c
+index 82de4651d18f01..855aa23c96d111 100644
+--- a/drivers/i2c/busses/i2c-qup.c
++++ b/drivers/i2c/busses/i2c-qup.c
+@@ -452,8 +452,10 @@ static int qup_i2c_bus_active(struct qup_i2c_dev *qup, int len)
+ if (!(status & I2C_STATUS_BUS_ACTIVE))
+ break;
+
+- if (time_after(jiffies, timeout))
++ if (time_after(jiffies, timeout)) {
+ ret = -ETIMEDOUT;
++ break;
++ }
+
+ usleep_range(len, len * 2);
+ }
+diff --git a/drivers/i2c/busses/i2c-tegra.c b/drivers/i2c/busses/i2c-tegra.c
+index b8726167cf739b..850d76d9114c4b 100644
+--- a/drivers/i2c/busses/i2c-tegra.c
++++ b/drivers/i2c/busses/i2c-tegra.c
+@@ -623,7 +623,6 @@ static int tegra_i2c_wait_for_config_load(struct tegra_i2c_dev *i2c_dev)
+ static int tegra_i2c_init(struct tegra_i2c_dev *i2c_dev)
+ {
+ u32 val, clk_divisor, clk_multiplier, tsu_thd, tlow, thigh, non_hs_mode;
+- acpi_handle handle = ACPI_HANDLE(i2c_dev->dev);
+ struct i2c_timings *t = &i2c_dev->timings;
+ int err;
+
+@@ -635,11 +634,7 @@ static int tegra_i2c_init(struct tegra_i2c_dev *i2c_dev)
+ * emit a noisy warning on error, which won't stay unnoticed and
+ * won't hose machine entirely.
+ */
+- if (handle)
+- err = acpi_evaluate_object(handle, "_RST", NULL, NULL);
+- else
+- err = reset_control_reset(i2c_dev->rst);
+-
++ err = device_reset(i2c_dev->dev);
+ WARN_ON_ONCE(err);
+
+ if (IS_DVC(i2c_dev))
+@@ -1696,19 +1691,6 @@ static void tegra_i2c_parse_dt(struct tegra_i2c_dev *i2c_dev)
+ i2c_dev->is_vi = true;
+ }
+
+-static int tegra_i2c_init_reset(struct tegra_i2c_dev *i2c_dev)
+-{
+- if (ACPI_HANDLE(i2c_dev->dev))
+- return 0;
+-
+- i2c_dev->rst = devm_reset_control_get_exclusive(i2c_dev->dev, "i2c");
+- if (IS_ERR(i2c_dev->rst))
+- return dev_err_probe(i2c_dev->dev, PTR_ERR(i2c_dev->rst),
+- "failed to get reset control\n");
+-
+- return 0;
+-}
+-
+ static int tegra_i2c_init_clocks(struct tegra_i2c_dev *i2c_dev)
+ {
+ int err;
+@@ -1818,10 +1800,6 @@ static int tegra_i2c_probe(struct platform_device *pdev)
+
+ tegra_i2c_parse_dt(i2c_dev);
+
+- err = tegra_i2c_init_reset(i2c_dev);
+- if (err)
+- return err;
+-
+ err = tegra_i2c_init_clocks(i2c_dev);
+ if (err)
+ return err;
+diff --git a/drivers/i2c/busses/i2c-virtio.c b/drivers/i2c/busses/i2c-virtio.c
+index 4b9536f508006d..12e317434f781e 100644
+--- a/drivers/i2c/busses/i2c-virtio.c
++++ b/drivers/i2c/busses/i2c-virtio.c
+@@ -116,15 +116,16 @@ static int virtio_i2c_complete_reqs(struct virtqueue *vq,
+ for (i = 0; i < num; i++) {
+ struct virtio_i2c_req *req = &reqs[i];
+
+- wait_for_completion(&req->completion);
+-
+- if (!failed && req->in_hdr.status != VIRTIO_I2C_MSG_OK)
+- failed = true;
++ if (!failed) {
++ if (wait_for_completion_interruptible(&req->completion))
++ failed = true;
++ else if (req->in_hdr.status != VIRTIO_I2C_MSG_OK)
++ failed = true;
++ else
++ j++;
++ }
+
+ i2c_put_dma_safe_msg_buf(reqs[i].buf, &msgs[i], !failed);
+-
+- if (!failed)
+- j++;
+ }
+
+ return j;
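The virtio-i2c change above stops waiting on later requests once one fails (or the wait is interrupted by a signal), while still releasing every DMA-safe buffer. A userspace sketch of that accounting, with made-up types:

    #include <stdbool.h>
    #include <stdio.h>

    struct req { bool ok; };    /* ok models "completed with MSG_OK" */

    static int complete_reqs(struct req *reqs, int num)
    {
        bool failed = false;
        int done = 0;

        for (int i = 0; i < num; i++) {
            if (!failed) {
                if (!reqs[i].ok)    /* wait/status check stand-in */
                    failed = true;
                else
                    done++;
            }
            /* buffer for reqs[i] is released here either way */
        }
        return done;    /* count of fully transferred messages */
    }

    int main(void)
    {
        struct req r[3] = { { true }, { false }, { true } };
        printf("%d\n", complete_reqs(r, 3));    /* prints 1 */
        return 0;
    }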
+diff --git a/drivers/iio/adc/ad7949.c b/drivers/iio/adc/ad7949.c
+index edd0c3a35ab73c..202561cad4012b 100644
+--- a/drivers/iio/adc/ad7949.c
++++ b/drivers/iio/adc/ad7949.c
+@@ -308,7 +308,6 @@ static void ad7949_disable_reg(void *reg)
+
+ static int ad7949_spi_probe(struct spi_device *spi)
+ {
+- u32 spi_ctrl_mask = spi->controller->bits_per_word_mask;
+ struct device *dev = &spi->dev;
+ const struct ad7949_adc_spec *spec;
+ struct ad7949_adc_chip *ad7949_adc;
+@@ -337,11 +336,11 @@ static int ad7949_spi_probe(struct spi_device *spi)
+ ad7949_adc->resolution = spec->resolution;
+
+ /* Set SPI bits per word */
+- if (spi_ctrl_mask & SPI_BPW_MASK(ad7949_adc->resolution)) {
++ if (spi_is_bpw_supported(spi, ad7949_adc->resolution)) {
+ spi->bits_per_word = ad7949_adc->resolution;
+- } else if (spi_ctrl_mask == SPI_BPW_MASK(16)) {
++ } else if (spi_is_bpw_supported(spi, 16)) {
+ spi->bits_per_word = 16;
+- } else if (spi_ctrl_mask == SPI_BPW_MASK(8)) {
++ } else if (spi_is_bpw_supported(spi, 8)) {
+ spi->bits_per_word = 8;
+ } else {
+ dev_err(dev, "unable to find common BPW with spi controller\n");
+diff --git a/drivers/infiniband/core/cache.c b/drivers/infiniband/core/cache.c
+index 873988e5c5280f..0023aad0e7e436 100644
+--- a/drivers/infiniband/core/cache.c
++++ b/drivers/infiniband/core/cache.c
+@@ -582,8 +582,8 @@ static int __ib_cache_gid_add(struct ib_device *ib_dev, u32 port,
+ out_unlock:
+ mutex_unlock(&table->lock);
+ if (ret)
+- pr_warn("%s: unable to add gid %pI6 error=%d\n",
+- __func__, gid->raw, ret);
++ pr_warn_ratelimited("%s: unable to add gid %pI6 error=%d\n",
++ __func__, gid->raw, ret);
+ return ret;
+ }
+
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+index be5d7a8ab4d433..72c719805af321 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+@@ -5331,11 +5331,10 @@ static int hns_roce_v2_modify_qp(struct ib_qp *ibqp,
+ {
+ struct hns_roce_dev *hr_dev = to_hr_dev(ibqp->device);
+ struct hns_roce_qp *hr_qp = to_hr_qp(ibqp);
+- struct hns_roce_v2_qp_context ctx[2];
+- struct hns_roce_v2_qp_context *context = ctx;
+- struct hns_roce_v2_qp_context *qpc_mask = ctx + 1;
++ struct hns_roce_v2_qp_context *context;
++ struct hns_roce_v2_qp_context *qpc_mask;
+ struct ib_device *ibdev = &hr_dev->ib_dev;
+- int ret;
++ int ret = -ENOMEM;
+
+ if (attr_mask & ~IB_QP_ATTR_STANDARD_BITS)
+ return -EOPNOTSUPP;
+@@ -5346,7 +5345,11 @@ static int hns_roce_v2_modify_qp(struct ib_qp *ibqp,
+ * we should set all bits of the relevant fields in context mask to
+ * 0 at the same time, else set them to 0x1.
+ */
+- memset(context, 0, hr_dev->caps.qpc_sz);
++ context = kvzalloc(sizeof(*context), GFP_KERNEL);
++ qpc_mask = kvzalloc(sizeof(*qpc_mask), GFP_KERNEL);
++ if (!context || !qpc_mask)
++ goto out;
++
+ memset(qpc_mask, 0xff, hr_dev->caps.qpc_sz);
+
+ ret = hns_roce_v2_set_abs_fields(ibqp, attr, attr_mask, cur_state,
+@@ -5388,6 +5391,8 @@ static int hns_roce_v2_modify_qp(struct ib_qp *ibqp,
+ clear_qp(hr_qp);
+
+ out:
++ kvfree(qpc_mask);
++ kvfree(context);
+ return ret;
+ }
+
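The hns_roce hunk above moves two QP context structures (each hr_dev->caps.qpc_sz bytes) off the kernel stack. A sketch of the allocate-both-or-bail shape it adopts, using calloc/free as userspace stand-ins for kvzalloc/kvfree (both tolerate NULL on free):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static int modify_ctx(size_t ctx_sz)
    {
        unsigned char *context = calloc(1, ctx_sz);
        unsigned char *mask = calloc(1, ctx_sz);
        int ret = -1;   /* assume failure, the -ENOMEM analog */

        if (!context || !mask)
            goto out;

        memset(mask, 0xff, ctx_sz);    /* "modify every field" mask */
        /* ... fill context and post the command here ... */
        ret = 0;
    out:
        free(mask);
        free(context);
        return ret;
    }

    int main(void) { printf("%d\n", modify_ctx(512)); return 0; }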
+diff --git a/drivers/infiniband/hw/mlx5/dm.c b/drivers/infiniband/hw/mlx5/dm.c
+index 3669c90b2dadc6..672e5cfd2fca5d 100644
+--- a/drivers/infiniband/hw/mlx5/dm.c
++++ b/drivers/infiniband/hw/mlx5/dm.c
+@@ -282,7 +282,7 @@ static struct ib_dm *handle_alloc_dm_memic(struct ib_ucontext *ctx,
+ int err;
+ u64 address;
+
+- if (!MLX5_CAP_DEV_MEM(dm_db->dev, memic))
++ if (!dm_db || !MLX5_CAP_DEV_MEM(dm_db->dev, memic))
+ return ERR_PTR(-EOPNOTSUPP);
+
+ dm = kzalloc(sizeof(*dm), GFP_KERNEL);
+diff --git a/drivers/input/keyboard/gpio_keys.c b/drivers/input/keyboard/gpio_keys.c
+index b55306cb354aea..f8ae4b08668bbb 100644
+--- a/drivers/input/keyboard/gpio_keys.c
++++ b/drivers/input/keyboard/gpio_keys.c
+@@ -495,7 +495,7 @@ static irqreturn_t gpio_keys_irq_isr(int irq, void *dev_id)
+ if (bdata->release_delay)
+ hrtimer_start(&bdata->release_timer,
+ ms_to_ktime(bdata->release_delay),
+- HRTIMER_MODE_REL_HARD);
++ HRTIMER_MODE_REL);
+ out:
+ spin_unlock_irqrestore(&bdata->lock, flags);
+ return IRQ_HANDLED;
+@@ -635,7 +635,7 @@ static int gpio_keys_setup_key(struct platform_device *pdev,
+
+ bdata->release_delay = button->debounce_interval;
+ hrtimer_init(&bdata->release_timer,
+- CLOCK_REALTIME, HRTIMER_MODE_REL_HARD);
++ CLOCK_REALTIME, HRTIMER_MODE_REL);
+ bdata->release_timer.function = gpio_keys_irq_timer;
+
+ isr = gpio_keys_irq_isr;
+diff --git a/drivers/interconnect/qcom/sc7280.c b/drivers/interconnect/qcom/sc7280.c
+index 3c39edd21b6cae..79794d8fd711fa 100644
+--- a/drivers/interconnect/qcom/sc7280.c
++++ b/drivers/interconnect/qcom/sc7280.c
+@@ -164,6 +164,7 @@ static struct qcom_icc_node xm_pcie3_1 = {
+ .id = SC7280_MASTER_PCIE_1,
+ .channels = 1,
+ .buswidth = 8,
++ .num_links = 1,
+ .links = { SC7280_SLAVE_ANOC_PCIE_GEM_NOC },
+ };
+
+diff --git a/drivers/interconnect/qcom/sc8180x.c b/drivers/interconnect/qcom/sc8180x.c
+index d9ee193fb18bdc..df2fdce42f93ed 100644
+--- a/drivers/interconnect/qcom/sc8180x.c
++++ b/drivers/interconnect/qcom/sc8180x.c
+@@ -1507,22 +1507,26 @@ static struct qcom_icc_bcm bcm_sh3 = {
+
+ static struct qcom_icc_bcm bcm_sn0 = {
+ .name = "SN0",
++ .num_nodes = 1,
+ .nodes = { &slv_qns_gemnoc_sf }
+ };
+
+ static struct qcom_icc_bcm bcm_sn1 = {
+ .name = "SN1",
++ .num_nodes = 1,
+ .nodes = { &slv_qxs_imem }
+ };
+
+ static struct qcom_icc_bcm bcm_sn2 = {
+ .name = "SN2",
+ .keepalive = true,
++ .num_nodes = 1,
+ .nodes = { &slv_qns_gemnoc_gc }
+ };
+
+ static struct qcom_icc_bcm bcm_co2 = {
+ .name = "CO2",
++ .num_nodes = 1,
+ .nodes = { &mas_qnm_npu }
+ };
+
+@@ -1534,12 +1538,14 @@ static struct qcom_icc_bcm bcm_ip0 = {
+ static struct qcom_icc_bcm bcm_sn3 = {
+ .name = "SN3",
+ .keepalive = true,
++ .num_nodes = 2,
+ .nodes = { &slv_srvc_aggre1_noc,
+ &slv_qns_cnoc }
+ };
+
+ static struct qcom_icc_bcm bcm_sn4 = {
+ .name = "SN4",
++ .num_nodes = 1,
+ .nodes = { &slv_qxs_pimem }
+ };
+
+diff --git a/drivers/interconnect/qcom/sc8280xp.c b/drivers/interconnect/qcom/sc8280xp.c
+index 489f259a02e5b0..d759b04e33910d 100644
+--- a/drivers/interconnect/qcom/sc8280xp.c
++++ b/drivers/interconnect/qcom/sc8280xp.c
+@@ -47,6 +47,7 @@ static struct qcom_icc_node qnm_a1noc_cfg = {
+ .id = SC8280XP_MASTER_A1NOC_CFG,
+ .channels = 1,
+ .buswidth = 4,
++ .num_links = 1,
+ .links = { SC8280XP_SLAVE_SERVICE_A1NOC },
+ };
+
+diff --git a/drivers/irqchip/Kconfig b/drivers/irqchip/Kconfig
+index a29a426e4eed77..eae221b283970c 100644
+--- a/drivers/irqchip/Kconfig
++++ b/drivers/irqchip/Kconfig
+@@ -486,6 +486,7 @@ config IMX_MU_MSI
+ tristate "i.MX MU used as MSI controller"
+ depends on OF && HAS_IOMEM
+ depends on ARCH_MXC || COMPILE_TEST
++ depends on ARM || ARM64
+ default m if ARCH_MXC
+ select IRQ_DOMAIN
+ select IRQ_DOMAIN_HIERARCHY
+diff --git a/drivers/media/v4l2-core/v4l2-ctrls-core.c b/drivers/media/v4l2-core/v4l2-ctrls-core.c
+index 29169170880a69..ad5a40e4c2d570 100644
+--- a/drivers/media/v4l2-core/v4l2-ctrls-core.c
++++ b/drivers/media/v4l2-core/v4l2-ctrls-core.c
+@@ -651,12 +651,12 @@ static int std_validate_compound(const struct v4l2_ctrl *ctrl, u32 idx,
+
+ p_h264_sps->flags &=
+ ~V4L2_H264_SPS_FLAG_QPPRIME_Y_ZERO_TRANSFORM_BYPASS;
+-
+- if (p_h264_sps->chroma_format_idc < 3)
+- p_h264_sps->flags &=
+- ~V4L2_H264_SPS_FLAG_SEPARATE_COLOUR_PLANE;
+ }
+
++ if (p_h264_sps->chroma_format_idc < 3)
++ p_h264_sps->flags &=
++ ~V4L2_H264_SPS_FLAG_SEPARATE_COLOUR_PLANE;
++
+ if (p_h264_sps->flags & V4L2_H264_SPS_FLAG_FRAME_MBS_ONLY)
+ p_h264_sps->flags &=
+ ~V4L2_H264_SPS_FLAG_MB_ADAPTIVE_FRAME_FIELD;
+diff --git a/drivers/mtd/ftl.c b/drivers/mtd/ftl.c
+index 8c22064ead3870..f2bd1984609ccc 100644
+--- a/drivers/mtd/ftl.c
++++ b/drivers/mtd/ftl.c
+@@ -344,7 +344,7 @@ static int erase_xfer(partition_t *part,
+ return -ENOMEM;
+
+ erase->addr = xfer->Offset;
+- erase->len = 1 << part->header.EraseUnitSize;
++ erase->len = 1ULL << part->header.EraseUnitSize;
+
+ ret = mtd_erase(part->mbd.mtd, erase);
+ if (!ret) {
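The ftl.c one-liner above widens a shift. A runnable demonstration of why: with a plain int constant, the shift is undefined once the count reaches the width of int, so a larger EraseUnitSize could never produce the intended 64-bit length.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned int shift = 32;    /* hypothetical EraseUnitSize */

        /* `1 << shift` would be undefined behaviour here (shift count
         * equals the width of int); 1ULL stays well-defined up to 63. */
        uint64_t len = 1ULL << shift;

        printf("erase len = %llu bytes\n", (unsigned long long)len);
        return 0;
    }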
+diff --git a/drivers/mtd/nand/raw/atmel/nand-controller.c b/drivers/mtd/nand/raw/atmel/nand-controller.c
+index 41c6bd6e2d72ca..710d1d73eb352c 100644
+--- a/drivers/mtd/nand/raw/atmel/nand-controller.c
++++ b/drivers/mtd/nand/raw/atmel/nand-controller.c
+@@ -373,7 +373,7 @@ static int atmel_nand_dma_transfer(struct atmel_nand_controller *nc,
+ dma_cookie_t cookie;
+
+ buf_dma = dma_map_single(nc->dev, buf, len, dir);
+- if (dma_mapping_error(nc->dev, dev_dma)) {
++ if (dma_mapping_error(nc->dev, buf_dma)) {
+ dev_err(nc->dev,
+ "Failed to prepare a buffer for DMA access\n");
+ goto err;
+diff --git a/drivers/mtd/nand/raw/atmel/pmecc.c b/drivers/mtd/nand/raw/atmel/pmecc.c
+index 3c7dee1be21df1..0b402823b619cf 100644
+--- a/drivers/mtd/nand/raw/atmel/pmecc.c
++++ b/drivers/mtd/nand/raw/atmel/pmecc.c
+@@ -143,6 +143,7 @@ struct atmel_pmecc_caps {
+ int nstrengths;
+ int el_offset;
+ bool correct_erased_chunks;
++ bool clk_ctrl;
+ };
+
+ struct atmel_pmecc {
+@@ -843,6 +844,10 @@ static struct atmel_pmecc *atmel_pmecc_create(struct platform_device *pdev,
+ if (IS_ERR(pmecc->regs.errloc))
+ return ERR_CAST(pmecc->regs.errloc);
+
++ /* pmecc data setup time */
++ if (caps->clk_ctrl)
++ writel(PMECC_CLK_133MHZ, pmecc->regs.base + ATMEL_PMECC_CLK);
++
+ /* Disable all interrupts before registering the PMECC handler. */
+ writel(0xffffffff, pmecc->regs.base + ATMEL_PMECC_IDR);
+ atmel_pmecc_reset(pmecc);
+@@ -896,6 +901,7 @@ static struct atmel_pmecc_caps at91sam9g45_caps = {
+ .strengths = atmel_pmecc_strengths,
+ .nstrengths = 5,
+ .el_offset = 0x8c,
++ .clk_ctrl = true,
+ };
+
+ static struct atmel_pmecc_caps sama5d4_caps = {
+diff --git a/drivers/mtd/nand/raw/rockchip-nand-controller.c b/drivers/mtd/nand/raw/rockchip-nand-controller.c
+index d8456b849c13d1..1efe97fd659503 100644
+--- a/drivers/mtd/nand/raw/rockchip-nand-controller.c
++++ b/drivers/mtd/nand/raw/rockchip-nand-controller.c
+@@ -657,9 +657,16 @@ static int rk_nfc_write_page_hwecc(struct nand_chip *chip, const u8 *buf,
+
+ dma_data = dma_map_single(nfc->dev, (void *)nfc->page_buf,
+ mtd->writesize, DMA_TO_DEVICE);
++ if (dma_mapping_error(nfc->dev, dma_data))
++ return -ENOMEM;
++
+ dma_oob = dma_map_single(nfc->dev, nfc->oob_buf,
+ ecc->steps * oob_step,
+ DMA_TO_DEVICE);
++ if (dma_mapping_error(nfc->dev, dma_oob)) {
++ dma_unmap_single(nfc->dev, dma_data, mtd->writesize, DMA_TO_DEVICE);
++ return -ENOMEM;
++ }
+
+ reinit_completion(&nfc->done);
+ writel(INT_DMA, nfc->regs + nfc->cfg->int_en_off);
+@@ -773,9 +780,17 @@ static int rk_nfc_read_page_hwecc(struct nand_chip *chip, u8 *buf, int oob_on,
+ dma_data = dma_map_single(nfc->dev, nfc->page_buf,
+ mtd->writesize,
+ DMA_FROM_DEVICE);
++ if (dma_mapping_error(nfc->dev, dma_data))
++ return -ENOMEM;
++
+ dma_oob = dma_map_single(nfc->dev, nfc->oob_buf,
+ ecc->steps * oob_step,
+ DMA_FROM_DEVICE);
++ if (dma_mapping_error(nfc->dev, dma_oob)) {
++ dma_unmap_single(nfc->dev, dma_data, mtd->writesize,
++ DMA_FROM_DEVICE);
++ return -ENOMEM;
++ }
+
+ /*
+ * The first blocks (4, 8 or 16 depending on the device)
+diff --git a/drivers/net/can/dev/dev.c b/drivers/net/can/dev/dev.c
+index 43125ce96f1aa5..89f80d74f27e3d 100644
+--- a/drivers/net/can/dev/dev.c
++++ b/drivers/net/can/dev/dev.c
+@@ -125,13 +125,16 @@ void can_change_state(struct net_device *dev, struct can_frame *cf,
+ EXPORT_SYMBOL_GPL(can_change_state);
+
+ /* CAN device restart for bus-off recovery */
+-static void can_restart(struct net_device *dev)
++static int can_restart(struct net_device *dev)
+ {
+ struct can_priv *priv = netdev_priv(dev);
+ struct sk_buff *skb;
+ struct can_frame *cf;
+ int err;
+
++ if (!priv->do_set_mode)
++ return -EOPNOTSUPP;
++
+ if (netif_carrier_ok(dev))
+ netdev_err(dev, "Attempt to restart for bus-off recovery, but carrier is OK?\n");
+
+@@ -142,24 +145,25 @@ static void can_restart(struct net_device *dev)
+
+ /* send restart message upstream */
+ skb = alloc_can_err_skb(dev, &cf);
+- if (!skb)
+- goto restart;
+-
+- cf->can_id |= CAN_ERR_RESTARTED;
+-
+- netif_rx(skb);
+-
+-restart:
+- netdev_dbg(dev, "restarted\n");
+- priv->can_stats.restarts++;
++ if (skb) {
++ cf->can_id |= CAN_ERR_RESTARTED;
++ netif_rx(skb);
++ }
+
+ /* Now restart the device */
+ netif_carrier_on(dev);
+ err = priv->do_set_mode(dev, CAN_MODE_START);
+ if (err) {
+- netdev_err(dev, "Error %d during restart", err);
++ netdev_err(dev, "Restart failed, error %pe\n", ERR_PTR(err));
+ netif_carrier_off(dev);
++
++ return err;
++ } else {
++ netdev_dbg(dev, "Restarted\n");
++ priv->can_stats.restarts++;
+ }
++
++ return 0;
+ }
+
+ static void can_restart_work(struct work_struct *work)
+@@ -184,9 +188,8 @@ int can_restart_now(struct net_device *dev)
+ return -EBUSY;
+
+ cancel_delayed_work_sync(&priv->restart_work);
+- can_restart(dev);
+
+- return 0;
++ return can_restart(dev);
+ }
+
+ /* CAN bus-off
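The can/dev hunk above lets can_restart() report failure and refuse outright when the driver has no do_set_mode hook. The guard shape, sketched with illustrative types:

    #include <stdio.h>

    struct dev { int (*do_set_mode)(struct dev *d); };

    static int restart(struct dev *d)
    {
        if (!d->do_set_mode)
            return -95;    /* -EOPNOTSUPP on Linux */
        return d->do_set_mode(d);
    }

    int main(void)
    {
        struct dev d = { 0 };           /* driver without the hook */
        printf("%d\n", restart(&d));    /* prints -95 */
        return 0;
    }

The netlink.c hunks below reject IFLA_CAN_RESTART and IFLA_CAN_RESTART_MS up front for the same reason.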
+diff --git a/drivers/net/can/dev/netlink.c b/drivers/net/can/dev/netlink.c
+index 053d375eae4f53..7425db9d34dd92 100644
+--- a/drivers/net/can/dev/netlink.c
++++ b/drivers/net/can/dev/netlink.c
+@@ -252,6 +252,12 @@ static int can_changelink(struct net_device *dev, struct nlattr *tb[],
+ }
+
+ if (data[IFLA_CAN_RESTART_MS]) {
++ if (!priv->do_set_mode) {
++ NL_SET_ERR_MSG(extack,
++ "Device doesn't support restart from Bus Off");
++ return -EOPNOTSUPP;
++ }
++
+ /* Do not allow changing restart delay while running */
+ if (dev->flags & IFF_UP)
+ return -EBUSY;
+@@ -259,6 +265,12 @@ static int can_changelink(struct net_device *dev, struct nlattr *tb[],
+ }
+
+ if (data[IFLA_CAN_RESTART]) {
++ if (!priv->do_set_mode) {
++ NL_SET_ERR_MSG(extack,
++ "Device doesn't support restart from Bus Off");
++ return -EOPNOTSUPP;
++ }
++
+ /* Do not allow a restart while not running */
+ if (!(dev->flags & IFF_UP))
+ return -EINVAL;
+diff --git a/drivers/net/can/kvaser_pciefd.c b/drivers/net/can/kvaser_pciefd.c
+index 74a47244f1291c..c6406fc1b0d5e6 100644
+--- a/drivers/net/can/kvaser_pciefd.c
++++ b/drivers/net/can/kvaser_pciefd.c
+@@ -966,6 +966,7 @@ static int kvaser_pciefd_setup_can_ctrls(struct kvaser_pciefd *pcie)
+ can->err_rep_cnt = 0;
+ can->bec.txerr = 0;
+ can->bec.rxerr = 0;
++ can->can.dev->dev_port = i;
+
+ init_completion(&can->start_comp);
+ init_completion(&can->flush_comp);
+diff --git a/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c b/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c
+index 65dd57247c62e2..57e5cb3c39c572 100644
+--- a/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c
++++ b/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c
+@@ -858,6 +858,7 @@ static int kvaser_usb_init_one(struct kvaser_usb *dev, int channel)
+ }
+ SET_NETDEV_DEV(netdev, &dev->intf->dev);
+ netdev->dev_id = channel;
++ netdev->dev_port = channel;
+
+ dev->nets[channel] = priv;
+
+diff --git a/drivers/net/can/usb/peak_usb/pcan_usb_fd.c b/drivers/net/can/usb/peak_usb/pcan_usb_fd.c
+index 2ea1500df393f9..a203b7fca2f363 100644
+--- a/drivers/net/can/usb/peak_usb/pcan_usb_fd.c
++++ b/drivers/net/can/usb/peak_usb/pcan_usb_fd.c
+@@ -49,7 +49,7 @@ struct __packed pcan_ufd_fw_info {
+ __le32 ser_no; /* S/N */
+ __le32 flags; /* special functions */
+
+- /* extended data when type == PCAN_USBFD_TYPE_EXT */
++ /* extended data when type >= PCAN_USBFD_TYPE_EXT */
+ u8 cmd_out_ep; /* ep for cmd */
+ u8 cmd_in_ep; /* ep for replies */
+ u8 data_out_ep[2]; /* ep for CANx TX */
+@@ -939,10 +939,11 @@ static int pcan_usb_fd_init(struct peak_usb_device *dev)
+ dev->can.ctrlmode |= CAN_CTRLMODE_FD_NON_ISO;
+ }
+
+- /* if vendor rsp is of type 2, then it contains EP numbers to
+- * use for cmds pipes. If not, then default EP should be used.
++ /* if vendor rsp type is greater than or equal to 2, then it
++	 * contains EP numbers to use for the cmd pipes. If not, then
++	 * the default EPs should be used.

+ */
+- if (fw_info->type != cpu_to_le16(PCAN_USBFD_TYPE_EXT)) {
++ if (le16_to_cpu(fw_info->type) < PCAN_USBFD_TYPE_EXT) {
+ fw_info->cmd_out_ep = PCAN_USBPRO_EP_CMDOUT;
+ fw_info->cmd_in_ep = PCAN_USBPRO_EP_CMDIN;
+ }
+@@ -975,11 +976,11 @@ static int pcan_usb_fd_init(struct peak_usb_device *dev)
+ dev->device_number =
+ le32_to_cpu(pdev->usb_if->fw_info.dev_id[dev->ctrl_idx]);
+
+- /* if vendor rsp is of type 2, then it contains EP numbers to
+- * use for data pipes. If not, then statically defined EP are used
+- * (see peak_usb_create_dev()).
++ /* if vendor rsp type is greater than or equal to 2, then it contains EP
++	 * numbers to use for data pipes. If not, then statically defined EPs are
++	 * used (see peak_usb_create_dev()).
+ */
+- if (fw_info->type == cpu_to_le16(PCAN_USBFD_TYPE_EXT)) {
++ if (le16_to_cpu(fw_info->type) >= PCAN_USBFD_TYPE_EXT) {
+ dev->ep_msg_in = fw_info->data_in_ep;
+ dev->ep_msg_out = fw_info->data_out_ep[dev->ctrl_idx];
+ }
+diff --git a/drivers/net/ethernet/emulex/benet/be_cmds.c b/drivers/net/ethernet/emulex/benet/be_cmds.c
+index 17098cd89dfff6..e764d2be4948a7 100644
+--- a/drivers/net/ethernet/emulex/benet/be_cmds.c
++++ b/drivers/net/ethernet/emulex/benet/be_cmds.c
+@@ -3851,8 +3851,8 @@ int be_cmd_set_mac_list(struct be_adapter *adapter, u8 *mac_array,
+ status = be_mcc_notify_wait(adapter);
+
+ err:
+- dma_free_coherent(&adapter->pdev->dev, cmd.size, cmd.va, cmd.dma);
+ spin_unlock_bh(&adapter->mcc_lock);
++ dma_free_coherent(&adapter->pdev->dev, cmd.size, cmd.va, cmd.dma);
+ return status;
+ }
+
+diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+index 5ef117c9d0ecac..dbc40e4514f0a6 100644
+--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
++++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+@@ -4446,12 +4446,19 @@ static int dpaa2_eth_connect_mac(struct dpaa2_eth_priv *priv)
+ if (PTR_ERR(dpmac_dev) == -EPROBE_DEFER)
+ return PTR_ERR(dpmac_dev);
+
+- if (IS_ERR(dpmac_dev) || dpmac_dev->dev.type != &fsl_mc_bus_dpmac_type)
++ if (IS_ERR(dpmac_dev))
+ return 0;
+
++ if (dpmac_dev->dev.type != &fsl_mc_bus_dpmac_type) {
++ err = 0;
++ goto out_put_device;
++ }
++
+ mac = kzalloc(sizeof(struct dpaa2_mac), GFP_KERNEL);
+- if (!mac)
+- return -ENOMEM;
++ if (!mac) {
++ err = -ENOMEM;
++ goto out_put_device;
++ }
+
+ mac->mc_dev = dpmac_dev;
+ mac->mc_io = priv->mc_io;
+@@ -4478,6 +4485,8 @@ static int dpaa2_eth_connect_mac(struct dpaa2_eth_priv *priv)
+ priv->mac = NULL;
+ err_free_mac:
+ kfree(mac);
++out_put_device:
++ put_device(&dpmac_dev->dev);
+ return err;
+ }
+
+diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-switch.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-switch.c
+index d6c871f227947e..732fd2e389c417 100644
+--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-switch.c
++++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-switch.c
+@@ -1439,12 +1439,19 @@ static int dpaa2_switch_port_connect_mac(struct ethsw_port_priv *port_priv)
+ if (PTR_ERR(dpmac_dev) == -EPROBE_DEFER)
+ return PTR_ERR(dpmac_dev);
+
+- if (IS_ERR(dpmac_dev) || dpmac_dev->dev.type != &fsl_mc_bus_dpmac_type)
++ if (IS_ERR(dpmac_dev))
+ return 0;
+
++ if (dpmac_dev->dev.type != &fsl_mc_bus_dpmac_type) {
++ err = 0;
++ goto out_put_device;
++ }
++
+ mac = kzalloc(sizeof(*mac), GFP_KERNEL);
+- if (!mac)
+- return -ENOMEM;
++ if (!mac) {
++ err = -ENOMEM;
++ goto out_put_device;
++ }
+
+ mac->mc_dev = dpmac_dev;
+ mac->mc_io = port_priv->ethsw_data->mc_io;
+@@ -1472,6 +1479,8 @@ static int dpaa2_switch_port_connect_mac(struct ethsw_port_priv *port_priv)
+ port_priv->mac = NULL;
+ err_free_mac:
+ kfree(mac);
++out_put_device:
++ put_device(&dpmac_dev->dev);
+ return err;
+ }
+
+diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
+index 7e7890334ff608..4fee466a8e903b 100644
+--- a/drivers/net/ethernet/google/gve/gve_main.c
++++ b/drivers/net/ethernet/google/gve/gve_main.c
+@@ -1124,49 +1124,56 @@ static void gve_turnup(struct gve_priv *priv)
+ gve_set_napi_enabled(priv);
+ }
+
+-static void gve_tx_timeout(struct net_device *dev, unsigned int txqueue)
++static struct gve_notify_block *gve_get_tx_notify_block(struct gve_priv *priv,
++ unsigned int txqueue)
+ {
+- struct gve_notify_block *block;
+- struct gve_tx_ring *tx = NULL;
+- struct gve_priv *priv;
+- u32 last_nic_done;
+- u32 current_time;
+ u32 ntfy_idx;
+
+- netdev_info(dev, "Timeout on tx queue, %d", txqueue);
+- priv = netdev_priv(dev);
+ if (txqueue > priv->tx_cfg.num_queues)
+- goto reset;
++ return NULL;
+
+ ntfy_idx = gve_tx_idx_to_ntfy(priv, txqueue);
+ if (ntfy_idx >= priv->num_ntfy_blks)
+- goto reset;
++ return NULL;
++
++ return &priv->ntfy_blocks[ntfy_idx];
++}
++
++static bool gve_tx_timeout_try_q_kick(struct gve_priv *priv,
++ unsigned int txqueue)
++{
++ struct gve_notify_block *block;
++ u32 current_time;
+
+- block = &priv->ntfy_blocks[ntfy_idx];
+- tx = block->tx;
++ block = gve_get_tx_notify_block(priv, txqueue);
++
++ if (!block)
++ return false;
+
+ current_time = jiffies_to_msecs(jiffies);
+- if (tx->last_kick_msec + MIN_TX_TIMEOUT_GAP > current_time)
+- goto reset;
++ if (block->tx->last_kick_msec + MIN_TX_TIMEOUT_GAP > current_time)
++ return false;
+
+- /* Check to see if there are missed completions, which will allow us to
+- * kick the queue.
+- */
+- last_nic_done = gve_tx_load_event_counter(priv, tx);
+- if (last_nic_done - tx->done) {
+- netdev_info(dev, "Kicking queue %d", txqueue);
+- iowrite32be(GVE_IRQ_MASK, gve_irq_doorbell(priv, block));
+- napi_schedule(&block->napi);
+- tx->last_kick_msec = current_time;
+- goto out;
+- } // Else reset.
++ netdev_info(priv->dev, "Kicking queue %d", txqueue);
++ napi_schedule(&block->napi);
++ block->tx->last_kick_msec = current_time;
++ return true;
++}
+
+-reset:
+- gve_schedule_reset(priv);
++static void gve_tx_timeout(struct net_device *dev, unsigned int txqueue)
++{
++ struct gve_notify_block *block;
++ struct gve_priv *priv;
++
++ netdev_info(dev, "Timeout on tx queue, %d", txqueue);
++ priv = netdev_priv(dev);
++
++ if (!gve_tx_timeout_try_q_kick(priv, txqueue))
++ gve_schedule_reset(priv);
+
+-out:
+- if (tx)
+- tx->queue_timeout++;
++ block = gve_get_tx_notify_block(priv, txqueue);
++ if (block)
++ block->tx->queue_timeout++;
+ priv->tx_timeo_cnt++;
+ }
+
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+index ed1b49a360165f..c509c1e12109fc 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+@@ -9599,33 +9599,36 @@ static bool hclge_need_enable_vport_vlan_filter(struct hclge_vport *vport)
+ return false;
+ }
+
+-int hclge_enable_vport_vlan_filter(struct hclge_vport *vport, bool request_en)
++static int __hclge_enable_vport_vlan_filter(struct hclge_vport *vport,
++ bool request_en)
+ {
+- struct hclge_dev *hdev = vport->back;
+ bool need_en;
+ int ret;
+
+- mutex_lock(&hdev->vport_lock);
+-
+- vport->req_vlan_fltr_en = request_en;
+-
+ need_en = hclge_need_enable_vport_vlan_filter(vport);
+- if (need_en == vport->cur_vlan_fltr_en) {
+- mutex_unlock(&hdev->vport_lock);
++ if (need_en == vport->cur_vlan_fltr_en)
+ return 0;
+- }
+
+ ret = hclge_set_vport_vlan_filter(vport, need_en);
+- if (ret) {
+- mutex_unlock(&hdev->vport_lock);
++ if (ret)
+ return ret;
+- }
+
+ vport->cur_vlan_fltr_en = need_en;
+
++ return 0;
++}
++
++int hclge_enable_vport_vlan_filter(struct hclge_vport *vport, bool request_en)
++{
++ struct hclge_dev *hdev = vport->back;
++ int ret;
++
++ mutex_lock(&hdev->vport_lock);
++ vport->req_vlan_fltr_en = request_en;
++ ret = __hclge_enable_vport_vlan_filter(vport, request_en);
+ mutex_unlock(&hdev->vport_lock);
+
+- return 0;
++ return ret;
+ }
+
+ static int hclge_enable_vlan_filter(struct hnae3_handle *handle, bool enable)
+@@ -10646,16 +10649,19 @@ static void hclge_sync_vlan_fltr_state(struct hclge_dev *hdev)
+ &vport->state))
+ continue;
+
+- ret = hclge_enable_vport_vlan_filter(vport,
+- vport->req_vlan_fltr_en);
++ mutex_lock(&hdev->vport_lock);
++ ret = __hclge_enable_vport_vlan_filter(vport,
++ vport->req_vlan_fltr_en);
+ if (ret) {
+ dev_err(&hdev->pdev->dev,
+ "failed to sync vlan filter state for vport%u, ret = %d\n",
+ vport->vport_id, ret);
+ set_bit(HCLGE_VPORT_STATE_VLAN_FLTR_CHANGE,
+ &vport->state);
++ mutex_unlock(&hdev->vport_lock);
+ return;
+ }
++ mutex_unlock(&hdev->vport_lock);
+ }
+ }
+
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_ptp.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_ptp.c
+index b7cf9fbf97183b..6d7aeac6001282 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_ptp.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_ptp.c
+@@ -509,14 +509,14 @@ int hclge_ptp_init(struct hclge_dev *hdev)
+ if (ret) {
+ dev_err(&hdev->pdev->dev,
+ "failed to init freq, ret = %d\n", ret);
+- goto out;
++ goto out_clear_int;
+ }
+
+ ret = hclge_ptp_set_ts_mode(hdev, &hdev->ptp->ts_cfg);
+ if (ret) {
+ dev_err(&hdev->pdev->dev,
+ "failed to init ts mode, ret = %d\n", ret);
+- goto out;
++ goto out_clear_int;
+ }
+
+ ktime_get_real_ts64(&ts);
+@@ -524,7 +524,7 @@ int hclge_ptp_init(struct hclge_dev *hdev)
+ if (ret) {
+ dev_err(&hdev->pdev->dev,
+ "failed to init ts time, ret = %d\n", ret);
+- goto out;
++ goto out_clear_int;
+ }
+
+ set_bit(HCLGE_STATE_PTP_EN, &hdev->state);
+@@ -532,6 +532,9 @@ int hclge_ptp_init(struct hclge_dev *hdev)
+
+ return 0;
+
++out_clear_int:
++ clear_bit(HCLGE_PTP_FLAG_EN, &hdev->ptp->flags);
++ hclge_ptp_int_en(hdev, false);
+ out:
+ hclge_ptp_destroy_clock(hdev);
+
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+index b11d38a6093f83..cff8654354e6d4 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+@@ -3086,11 +3086,7 @@ static void hclgevf_uninit_ae_dev(struct hnae3_ae_dev *ae_dev)
+
+ static u32 hclgevf_get_max_channels(struct hclgevf_dev *hdev)
+ {
+- struct hnae3_handle *nic = &hdev->nic;
+- struct hnae3_knic_private_info *kinfo = &nic->kinfo;
+-
+- return min_t(u32, hdev->rss_size_max,
+- hdev->num_tqps / kinfo->tc_info.num_tc);
++ return min(hdev->rss_size_max, hdev->num_tqps);
+ }
+
+ /**
+diff --git a/drivers/net/ethernet/intel/e1000e/defines.h b/drivers/net/ethernet/intel/e1000e/defines.h
+index 63c3c79380a1b6..32e6d16b2dcf2d 100644
+--- a/drivers/net/ethernet/intel/e1000e/defines.h
++++ b/drivers/net/ethernet/intel/e1000e/defines.h
+@@ -638,6 +638,9 @@
+ /* For checksumming, the sum of all words in the NVM should equal 0xBABA. */
+ #define NVM_SUM 0xBABA
+
++/* Uninitialized ("empty") checksum word value */
++#define NVM_CHECKSUM_UNINITIALIZED 0xFFFF
++
+ /* PBA (printed board assembly) number words */
+ #define NVM_PBA_OFFSET_0 8
+ #define NVM_PBA_OFFSET_1 9
+diff --git a/drivers/net/ethernet/intel/e1000e/ich8lan.c b/drivers/net/ethernet/intel/e1000e/ich8lan.c
+index 9466f65a6da774..0cb7dce57cce64 100644
+--- a/drivers/net/ethernet/intel/e1000e/ich8lan.c
++++ b/drivers/net/ethernet/intel/e1000e/ich8lan.c
+@@ -4146,6 +4146,8 @@ static s32 e1000_validate_nvm_checksum_ich8lan(struct e1000_hw *hw)
+ ret_val = e1000e_update_nvm_checksum(hw);
+ if (ret_val)
+ return ret_val;
++ } else if (hw->mac.type == e1000_pch_tgp) {
++ return 0;
+ }
+ }
+
+diff --git a/drivers/net/ethernet/intel/e1000e/nvm.c b/drivers/net/ethernet/intel/e1000e/nvm.c
+index e609f4df86f455..16369e6d245a4a 100644
+--- a/drivers/net/ethernet/intel/e1000e/nvm.c
++++ b/drivers/net/ethernet/intel/e1000e/nvm.c
+@@ -558,6 +558,12 @@ s32 e1000e_validate_nvm_checksum_generic(struct e1000_hw *hw)
+ checksum += nvm_data;
+ }
+
++ if (hw->mac.type == e1000_pch_tgp &&
++ nvm_data == NVM_CHECKSUM_UNINITIALIZED) {
++ e_dbg("Uninitialized NVM Checksum on TGP platform - ignoring\n");
++ return 0;
++ }
++
+ if (checksum != (u16)NVM_SUM) {
+ e_dbg("NVM Checksum Invalid\n");
+ return -E1000_ERR_NVM;
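The e1000e hunks above special-case an all-ones checksum word on TGP, where the NVM image ships uninitialized. For reference, the validation rule itself: all 16-bit NVM words, including the stored checksum word, must sum (mod 2^16) to NVM_SUM. A self-contained model with made-up word values:

    #include <stdint.h>
    #include <stdio.h>

    #define NVM_SUM 0xBABAu

    static int validate(const uint16_t *words, int n)
    {
        uint16_t sum = 0;

        for (int i = 0; i < n; i++)
            sum += words[i];    /* wraps at 16 bits, as in the driver */
        return sum == NVM_SUM ? 0 : -1;
    }

    int main(void)
    {
        uint16_t nvm[4] = { 0x1111, 0x2222, 0x3333, 0 };

        nvm[3] = NVM_SUM - (0x1111 + 0x2222 + 0x3333);  /* checksum word */
        printf("%d\n", validate(nvm, 4));               /* prints 0 */
        return 0;
    }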
+diff --git a/drivers/net/ethernet/intel/fm10k/fm10k.h b/drivers/net/ethernet/intel/fm10k/fm10k.h
+index 6119a410883815..65a2816142d962 100644
+--- a/drivers/net/ethernet/intel/fm10k/fm10k.h
++++ b/drivers/net/ethernet/intel/fm10k/fm10k.h
+@@ -189,13 +189,14 @@ struct fm10k_q_vector {
+ struct fm10k_ring_container rx, tx;
+
+ struct napi_struct napi;
++ struct rcu_head rcu; /* to avoid race with update stats on free */
++
+ cpumask_t affinity_mask;
+ char name[IFNAMSIZ + 9];
+
+ #ifdef CONFIG_DEBUG_FS
+ struct dentry *dbg_q_vector;
+ #endif /* CONFIG_DEBUG_FS */
+- struct rcu_head rcu; /* to avoid race with update stats on free */
+
+ /* for dynamic allocation of rings associated with this q_vector */
+ struct fm10k_ring ring[] ____cacheline_internodealigned_in_smp;
+diff --git a/drivers/net/ethernet/intel/i40e/i40e.h b/drivers/net/ethernet/intel/i40e/i40e.h
+index 22ac8c48ca340f..61590e92f3abcb 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e.h
++++ b/drivers/net/ethernet/intel/i40e/i40e.h
+@@ -980,6 +980,7 @@ struct i40e_q_vector {
+ u16 reg_idx; /* register index of the interrupt */
+
+ struct napi_struct napi;
++ struct rcu_head rcu; /* to avoid race with update stats on free */
+
+ struct i40e_ring_container rx;
+ struct i40e_ring_container tx;
+@@ -990,7 +991,6 @@ struct i40e_q_vector {
+ cpumask_t affinity_mask;
+ struct irq_affinity_notify affinity_notify;
+
+- struct rcu_head rcu; /* to avoid race with update stats on free */
+ char name[I40E_INT_NAME_STR_LEN];
+ bool arm_wb_state;
+ bool in_busy_poll;
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+index 107bcca7db8c96..9b5044cfea872e 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+@@ -245,6 +245,7 @@ static const struct i40e_stats i40e_gstrings_net_stats[] = {
+ I40E_NETDEV_STAT(rx_errors),
+ I40E_NETDEV_STAT(tx_errors),
+ I40E_NETDEV_STAT(rx_dropped),
++ I40E_NETDEV_STAT(rx_missed_errors),
+ I40E_NETDEV_STAT(tx_dropped),
+ I40E_NETDEV_STAT(collisions),
+ I40E_NETDEV_STAT(rx_length_errors),
+@@ -321,7 +322,7 @@ static const struct i40e_stats i40e_gstrings_stats[] = {
+ I40E_PF_STAT("port.rx_broadcast", stats.eth.rx_broadcast),
+ I40E_PF_STAT("port.tx_broadcast", stats.eth.tx_broadcast),
+ I40E_PF_STAT("port.tx_errors", stats.eth.tx_errors),
+- I40E_PF_STAT("port.rx_dropped", stats.eth.rx_discards),
++ I40E_PF_STAT("port.rx_discards", stats.eth.rx_discards),
+ I40E_PF_STAT("port.tx_dropped_link_down", stats.tx_dropped_link_down),
+ I40E_PF_STAT("port.rx_crc_errors", stats.crc_errors),
+ I40E_PF_STAT("port.illegal_bytes", stats.illegal_bytes),
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index 3b165d8f03dc27..37d83b4bca7fda 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -495,6 +495,7 @@ static void i40e_get_netdev_stats_struct(struct net_device *netdev,
+ stats->tx_dropped = vsi_stats->tx_dropped;
+ stats->rx_errors = vsi_stats->rx_errors;
+ stats->rx_dropped = vsi_stats->rx_dropped;
++ stats->rx_missed_errors = vsi_stats->rx_missed_errors;
+ stats->rx_crc_errors = vsi_stats->rx_crc_errors;
+ stats->rx_length_errors = vsi_stats->rx_length_errors;
+ }
+@@ -686,17 +687,13 @@ i40e_stats_update_rx_discards(struct i40e_vsi *vsi, struct i40e_hw *hw,
+ struct i40e_eth_stats *stat_offset,
+ struct i40e_eth_stats *stat)
+ {
+- u64 rx_rdpc, rx_rxerr;
+-
+ i40e_stat_update32(hw, I40E_GLV_RDPC(stat_idx), offset_loaded,
+- &stat_offset->rx_discards, &rx_rdpc);
++ &stat_offset->rx_discards, &stat->rx_discards);
+ i40e_stat_update64(hw,
+ I40E_GL_RXERR1H(i40e_compute_pci_to_hw_id(vsi, hw)),
+ I40E_GL_RXERR1L(i40e_compute_pci_to_hw_id(vsi, hw)),
+ offset_loaded, &stat_offset->rx_discards_other,
+- &rx_rxerr);
+-
+- stat->rx_discards = rx_rdpc + rx_rxerr;
++ &stat->rx_discards_other);
+ }
+
+ /**
+@@ -718,9 +715,6 @@ void i40e_update_eth_stats(struct i40e_vsi *vsi)
+ i40e_stat_update32(hw, I40E_GLV_TEPC(stat_idx),
+ vsi->stat_offsets_loaded,
+ &oes->tx_errors, &es->tx_errors);
+- i40e_stat_update32(hw, I40E_GLV_RDPC(stat_idx),
+- vsi->stat_offsets_loaded,
+- &oes->rx_discards, &es->rx_discards);
+ i40e_stat_update32(hw, I40E_GLV_RUPP(stat_idx),
+ vsi->stat_offsets_loaded,
+ &oes->rx_unknown_protocol, &es->rx_unknown_protocol);
+@@ -977,8 +971,10 @@ static void i40e_update_vsi_stats(struct i40e_vsi *vsi)
+ ns->tx_errors = es->tx_errors;
+ ons->multicast = oes->rx_multicast;
+ ns->multicast = es->rx_multicast;
+- ons->rx_dropped = oes->rx_discards;
+- ns->rx_dropped = es->rx_discards;
++ ons->rx_dropped = oes->rx_discards_other;
++ ns->rx_dropped = es->rx_discards_other;
++ ons->rx_missed_errors = oes->rx_discards;
++ ns->rx_missed_errors = es->rx_discards;
+ ons->tx_dropped = oes->tx_discards;
+ ns->tx_dropped = es->tx_discards;
+
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+index ff4f1c4f3829b4..7cfcb16c309114 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+@@ -3076,10 +3076,10 @@ static int i40e_vc_del_mac_addr_msg(struct i40e_vf *vf, u8 *msg)
+ const u8 *addr = al->list[i].addr;
+
+ /* Allow to delete VF primary MAC only if it was not set
+- * administratively by PF or if VF is trusted.
++ * administratively by PF.
+ */
+ if (ether_addr_equal(addr, vf->default_lan_addr.addr)) {
+- if (i40e_can_vf_change_mac(vf))
++ if (!vf->pf_set_mac)
+ was_unimac_deleted = true;
+ else
+ continue;
+@@ -4934,8 +4934,8 @@ int i40e_get_vf_stats(struct net_device *netdev, int vf_id,
+ vf_stats->tx_bytes = stats->tx_bytes;
+ vf_stats->broadcast = stats->rx_broadcast;
+ vf_stats->multicast = stats->rx_multicast;
+- vf_stats->rx_dropped = stats->rx_discards;
+- vf_stats->tx_dropped = stats->tx_discards;
++ vf_stats->rx_dropped = stats->rx_discards + stats->rx_discards_other;
++ vf_stats->tx_dropped = stats->tx_errors;
+
+ return 0;
+ }
+diff --git a/drivers/net/ethernet/intel/ice/ice_flex_pipe.c b/drivers/net/ethernet/intel/ice/ice_flex_pipe.c
+index 4b3bb19e1d06a9..93a1b6b90856fa 100644
+--- a/drivers/net/ethernet/intel/ice/ice_flex_pipe.c
++++ b/drivers/net/ethernet/intel/ice/ice_flex_pipe.c
+@@ -1753,6 +1753,8 @@ ice_copy_and_init_pkg(struct ice_hw *hw, const u8 *buf, u32 len)
+ return ICE_DDP_PKG_ERR;
+
+ buf_copy = devm_kmemdup(ice_hw_to_dev(hw), buf, len, GFP_KERNEL);
++ if (!buf_copy)
++ return ICE_DDP_PKG_ERR;
+
+ state = ice_init_pkg(hw, buf_copy, len);
+ if (!ice_is_init_pkg_successful(state)) {
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe.h b/drivers/net/ethernet/intel/ixgbe/ixgbe.h
+index 2bf387e52e202c..f49b99b175ef43 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe.h
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe.h
+@@ -498,9 +498,10 @@ struct ixgbe_q_vector {
+ struct ixgbe_ring_container rx, tx;
+
+ struct napi_struct napi;
++ struct rcu_head rcu; /* to avoid race with update stats on free */
++
+ cpumask_t affinity_mask;
+ int numa_node;
+- struct rcu_head rcu; /* to avoid race with update stats on free */
+ char name[IFNAMSIZ + 9];
+
+ /* for dynamic allocation of rings associated with this q_vector */
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+index 6dbb4021fd2fac..c83523395d5ee4 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+@@ -1913,8 +1913,8 @@ static int cmd_exec(struct mlx5_core_dev *dev, void *in, int in_size, void *out,
+
+ err = mlx5_cmd_invoke(dev, inb, outb, out, out_size, callback, context,
+ pages_queue, token, force_polling);
+- if (callback)
+- return err;
++ if (callback && !err)
++ return 0;
+
+ if (err > 0) /* Failed in FW, command didn't execute */
+ err = deliv_status_to_err(err);
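The mlx5 cmd.c fix above narrows the async early-return to successful posts only, so a failed submission still flows through the error translation beneath it. The convention, sketched with a dummy translator (names are illustrative):

    #include <stdio.h>

    /* Positive values model FW-reported status codes; they are not
     * kernel errnos and must be translated. */
    static int fw_status_to_errno(int status) { return status ? -5 : 0; }

    static int exec_cmd(int err, int have_callback)
    {
        if (have_callback && !err)
            return 0;    /* async path: completion arrives via callback */
        if (err > 0)     /* failed in FW, command didn't execute */
            err = fw_status_to_errno(err);
        return err;
    }

    int main(void)
    {
        printf("%d %d\n", exec_cmd(0, 1), exec_cmd(3, 1));  /* 0 -5 */
        return 0;
    }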
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+index 74dc45d9c242ed..2768eab89eada5 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+@@ -1428,6 +1428,7 @@ static inline void mlx5e_build_rx_skb(struct mlx5_cqe64 *cqe,
+ unsigned int hdrlen = mlx5e_lro_update_hdr(skb, cqe, cqe_bcnt);
+
+ skb_shinfo(skb)->gso_size = DIV_ROUND_UP(cqe_bcnt - hdrlen, lro_num_seg);
++ skb_shinfo(skb)->gso_segs = lro_num_seg;
+ /* Subtract one since we already counted this as one
+ * "regular" packet in mlx5e_complete_rx_cqe()
+ */
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/dm.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/dm.c
+index 9482e51ac82a58..bdbbfaf504d988 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/dm.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/dm.c
+@@ -28,7 +28,7 @@ struct mlx5_dm *mlx5_dm_create(struct mlx5_core_dev *dev)
+
+ dm = kzalloc(sizeof(*dm), GFP_KERNEL);
+ if (!dm)
+- return ERR_PTR(-ENOMEM);
++ return NULL;
+
+ spin_lock_init(&dm->lock);
+
+@@ -80,7 +80,7 @@ struct mlx5_dm *mlx5_dm_create(struct mlx5_core_dev *dev)
+ err_steering:
+ kfree(dm);
+
+- return ERR_PTR(-ENOMEM);
++ return NULL;
+ }
+
+ void mlx5_dm_cleanup(struct mlx5_core_dev *dev)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+index 8c9633f740b485..a53f222e3feed4 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+@@ -1022,9 +1022,6 @@ static int mlx5_init_once(struct mlx5_core_dev *dev)
+ }
+
+ dev->dm = mlx5_dm_create(dev);
+- if (IS_ERR(dev->dm))
+- mlx5_core_warn(dev, "Failed to init device memory %ld\n", PTR_ERR(dev->dm));
+-
+ dev->tracer = mlx5_fw_tracer_create(dev);
+ dev->hv_vhca = mlx5_hv_vhca_create(dev);
+ dev->rsc_dump = mlx5_rsc_dump_create(dev);
+diff --git a/drivers/net/phy/mscc/mscc_ptp.c b/drivers/net/phy/mscc/mscc_ptp.c
+index 7e7ce79eadffb9..d0bd6ab45ebed7 100644
+--- a/drivers/net/phy/mscc/mscc_ptp.c
++++ b/drivers/net/phy/mscc/mscc_ptp.c
+@@ -897,6 +897,7 @@ static int vsc85xx_eth1_conf(struct phy_device *phydev, enum ts_blk blk,
+ get_unaligned_be32(ptp_multicast));
+ } else {
+ val |= ANA_ETH1_FLOW_ADDR_MATCH2_ANY_MULTICAST;
++ val |= ANA_ETH1_FLOW_ADDR_MATCH2_ANY_UNICAST;
+ vsc85xx_ts_write_csr(phydev, blk,
+ MSCC_ANA_ETH1_FLOW_ADDR_MATCH2(0), val);
+ vsc85xx_ts_write_csr(phydev, blk,
+diff --git a/drivers/net/phy/mscc/mscc_ptp.h b/drivers/net/phy/mscc/mscc_ptp.h
+index da3465360e9018..ae9ad925bfa8c0 100644
+--- a/drivers/net/phy/mscc/mscc_ptp.h
++++ b/drivers/net/phy/mscc/mscc_ptp.h
+@@ -98,6 +98,7 @@
+ #define MSCC_ANA_ETH1_FLOW_ADDR_MATCH2(x) (MSCC_ANA_ETH1_FLOW_ENA(x) + 3)
+ #define ANA_ETH1_FLOW_ADDR_MATCH2_MASK_MASK GENMASK(22, 20)
+ #define ANA_ETH1_FLOW_ADDR_MATCH2_ANY_MULTICAST 0x400000
++#define ANA_ETH1_FLOW_ADDR_MATCH2_ANY_UNICAST 0x200000
+ #define ANA_ETH1_FLOW_ADDR_MATCH2_FULL_ADDR 0x100000
+ #define ANA_ETH1_FLOW_ADDR_MATCH2_SRC_DEST_MASK GENMASK(17, 16)
+ #define ANA_ETH1_FLOW_ADDR_MATCH2_SRC_DEST 0x020000
+diff --git a/drivers/net/ppp/pptp.c b/drivers/net/ppp/pptp.c
+index 32183f24e63ff7..bf011bbb610589 100644
+--- a/drivers/net/ppp/pptp.c
++++ b/drivers/net/ppp/pptp.c
+@@ -159,19 +159,17 @@ static int pptp_xmit(struct ppp_channel *chan, struct sk_buff *skb)
+ int len;
+ unsigned char *data;
+ __u32 seq_recv;
+-
+-
+ struct rtable *rt;
+ struct net_device *tdev;
+ struct iphdr *iph;
+ int max_headroom;
+
+ if (sk_pppox(po)->sk_state & PPPOX_DEAD)
+- goto tx_error;
++ goto tx_drop;
+
+ rt = pptp_route_output(po, &fl4);
+ if (IS_ERR(rt))
+- goto tx_error;
++ goto tx_drop;
+
+ tdev = rt->dst.dev;
+
+@@ -179,16 +177,20 @@ static int pptp_xmit(struct ppp_channel *chan, struct sk_buff *skb)
+
+ if (skb_headroom(skb) < max_headroom || skb_cloned(skb) || skb_shared(skb)) {
+ struct sk_buff *new_skb = skb_realloc_headroom(skb, max_headroom);
+- if (!new_skb) {
+- ip_rt_put(rt);
++
++ if (!new_skb)
+ goto tx_error;
+- }
++
+ if (skb->sk)
+ skb_set_owner_w(new_skb, skb->sk);
+ consume_skb(skb);
+ skb = new_skb;
+ }
+
++ /* Ensure we can safely access protocol field and LCP code */
++ if (!pskb_may_pull(skb, 3))
++ goto tx_error;
++
+ data = skb->data;
+ islcp = ((data[0] << 8) + data[1]) == PPP_LCP && 1 <= data[2] && data[2] <= 7;
+
+@@ -262,6 +264,8 @@ static int pptp_xmit(struct ppp_channel *chan, struct sk_buff *skb)
+ return 1;
+
+ tx_error:
++ ip_rt_put(rt);
++tx_drop:
+ kfree_skb(skb);
+ return 1;
+ }
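The pptp hunk above adds a pskb_may_pull() before the transmit path reads three header bytes, and splits the error label so the route is only released once it is actually held. The bounds check, as a userspace analog:

    #include <stddef.h>
    #include <stdio.h>

    #define PPP_LCP 0xc021

    static int is_lcp_ctrl(const unsigned char *data, size_t len)
    {
        if (len < 3)    /* the pskb_may_pull(skb, 3) analog */
            return 0;
        return ((data[0] << 8) | data[1]) == PPP_LCP &&
               data[2] >= 1 && data[2] <= 7;
    }

    int main(void)
    {
        unsigned char pkt[] = { 0xc0, 0x21, 0x01 };  /* LCP Configure-Request */
        printf("%d\n", is_lcp_ctrl(pkt, sizeof(pkt)));  /* prints 1 */
        return 0;
    }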
+diff --git a/drivers/net/usb/usbnet.c b/drivers/net/usb/usbnet.c
+index 51d93422d09c67..a68fead887207a 100644
+--- a/drivers/net/usb/usbnet.c
++++ b/drivers/net/usb/usbnet.c
+@@ -1115,6 +1115,9 @@ static void __handle_link_change(struct usbnet *dev)
+ if (!test_bit(EVENT_DEV_OPEN, &dev->flags))
+ return;
+
++ if (test_and_clear_bit(EVENT_LINK_CARRIER_ON, &dev->flags))
++ netif_carrier_on(dev->net);
++
+ if (!netif_carrier_ok(dev->net)) {
+ /* kill URBs for reading packets to save bus bandwidth */
+ unlink_urbs(dev, &dev->rxq);
+@@ -2010,10 +2013,12 @@ EXPORT_SYMBOL(usbnet_manage_power);
+ void usbnet_link_change(struct usbnet *dev, bool link, bool need_reset)
+ {
+	/* update link after link is reset */
+- if (link && !need_reset)
+- netif_carrier_on(dev->net);
+- else
++ if (link && !need_reset) {
++ set_bit(EVENT_LINK_CARRIER_ON, &dev->flags);
++ } else {
++ clear_bit(EVENT_LINK_CARRIER_ON, &dev->flags);
+ netif_carrier_off(dev->net);
++ }
+
+ if (need_reset && link)
+ usbnet_defer_kevent(dev, EVENT_LINK_RESET);
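The usbnet change above defers netif_carrier_on() to the link-change worker by recording the event in a flag and consuming it with an atomic test-and-clear. A userspace analog of that hand-off:

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    static atomic_bool carrier_on_pending;

    /* May be called from any context; only records the request. */
    static void link_change(bool link)
    {
        atomic_store(&carrier_on_pending, link);
    }

    /* Runs in the single worker that owns the carrier state. */
    static void handle_link_change(void)
    {
        if (atomic_exchange(&carrier_on_pending, false))
            puts("netif_carrier_on()");
    }

    int main(void)
    {
        link_change(true);
        handle_link_change();    /* prints once */
        handle_link_change();    /* flag already consumed */
        return 0;
    }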
+diff --git a/drivers/net/vrf.c b/drivers/net/vrf.c
+index c8a1009d659e9d..75e95f6dd816c6 100644
+--- a/drivers/net/vrf.c
++++ b/drivers/net/vrf.c
+@@ -1355,6 +1355,8 @@ static void vrf_ip6_input_dst(struct sk_buff *skb, struct net_device *vrf_dev,
+ struct net *net = dev_net(vrf_dev);
+ struct rt6_info *rt6;
+
++ skb_dst_drop(skb);
++
+ rt6 = vrf_ip6_route_lookup(net, vrf_dev, &fl6, ifindex, skb,
+ RT6_LOOKUP_F_HAS_SADDR | RT6_LOOKUP_F_IFACE);
+ if (unlikely(!rt6))
+diff --git a/drivers/net/wireless/ath/ath11k/hal.c b/drivers/net/wireless/ath/ath11k/hal.c
+index a5028efbdd2e37..ec64fbf9aa8269 100644
+--- a/drivers/net/wireless/ath/ath11k/hal.c
++++ b/drivers/net/wireless/ath/ath11k/hal.c
+@@ -1314,6 +1314,10 @@ EXPORT_SYMBOL(ath11k_hal_srng_init);
+ void ath11k_hal_srng_deinit(struct ath11k_base *ab)
+ {
+ struct ath11k_hal *hal = &ab->hal;
++ int i;
++
++ for (i = 0; i < HAL_SRNG_RING_ID_MAX; i++)
++ ab->hal.srng_list[i].initialized = 0;
+
+ ath11k_hal_unregister_srng_key(ab);
+ ath11k_hal_free_cont_rdp(ab);
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+index 24a3d5a593f15e..8b6e3cbaf4632d 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+@@ -1200,10 +1200,6 @@ brcmf_cfg80211_scan(struct wiphy *wiphy, struct cfg80211_scan_request *request)
+ return -EAGAIN;
+ }
+
+- /* If scan req comes for p2p0, send it over primary I/F */
+- if (vif == cfg->p2p.bss_idx[P2PAPI_BSSCFG_DEVICE].vif)
+- vif = cfg->p2p.bss_idx[P2PAPI_BSSCFG_PRIMARY].vif;
+-
+ brcmf_dbg(SCAN, "START ESCAN\n");
+
+ cfg->scan_request = request;
+@@ -1219,6 +1215,10 @@ brcmf_cfg80211_scan(struct wiphy *wiphy, struct cfg80211_scan_request *request)
+ if (err)
+ goto scan_out;
+
++ /* If scan req comes for p2p0, send it over primary I/F */
++ if (vif == cfg->p2p.bss_idx[P2PAPI_BSSCFG_DEVICE].vif)
++ vif = cfg->p2p.bss_idx[P2PAPI_BSSCFG_PRIMARY].vif;
++
+ err = brcmf_do_escan(vif->ifp, request);
+ if (err)
+ goto scan_out;
+diff --git a/drivers/net/wireless/intel/iwlwifi/dvm/main.c b/drivers/net/wireless/intel/iwlwifi/dvm/main.c
+index a873be109f4399..b490a88b97ca75 100644
+--- a/drivers/net/wireless/intel/iwlwifi/dvm/main.c
++++ b/drivers/net/wireless/intel/iwlwifi/dvm/main.c
+@@ -1048,9 +1048,11 @@ static void iwl_bg_restart(struct work_struct *data)
+ *
+ *****************************************************************************/
+
+-static void iwl_setup_deferred_work(struct iwl_priv *priv)
++static int iwl_setup_deferred_work(struct iwl_priv *priv)
+ {
+ priv->workqueue = alloc_ordered_workqueue(DRV_NAME, 0);
++ if (!priv->workqueue)
++ return -ENOMEM;
+
+ INIT_WORK(&priv->restart, iwl_bg_restart);
+ INIT_WORK(&priv->beacon_update, iwl_bg_beacon_update);
+@@ -1067,6 +1069,8 @@ static void iwl_setup_deferred_work(struct iwl_priv *priv)
+ timer_setup(&priv->statistics_periodic, iwl_bg_statistics_periodic, 0);
+
+ timer_setup(&priv->ucode_trace, iwl_bg_ucode_trace, 0);
++
++ return 0;
+ }
+
+ void iwl_cancel_deferred_work(struct iwl_priv *priv)
+@@ -1456,7 +1460,9 @@ static struct iwl_op_mode *iwl_op_mode_dvm_start(struct iwl_trans *trans,
+ /********************
+ * 6. Setup services
+ ********************/
+- iwl_setup_deferred_work(priv);
++ if (iwl_setup_deferred_work(priv))
++ goto out_uninit_drv;
++
+ iwl_setup_rx_handlers(priv);
+
+ iwl_power_initialize(priv);
+@@ -1494,6 +1500,7 @@ static struct iwl_op_mode *iwl_op_mode_dvm_start(struct iwl_trans *trans,
+ iwl_cancel_deferred_work(priv);
+ destroy_workqueue(priv->workqueue);
+ priv->workqueue = NULL;
++out_uninit_drv:
+ iwl_uninit_drv(priv);
+ out_free_eeprom_blob:
+ kfree(priv->eeprom_blob);
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
+index 0a11ee347bf321..64cc1e1bbb479f 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
+@@ -64,8 +64,10 @@ static int __init iwl_mvm_init(void)
+ }
+
+ ret = iwl_opmode_register("iwlmvm", &iwl_mvm_ops);
+- if (ret)
++ if (ret) {
+ pr_err("Unable to register MVM op_mode: %d\n", ret);
++ iwl_mvm_rate_control_unregister();
++ }
+
+ return ret;
+ }
+diff --git a/drivers/net/wireless/marvell/mwl8k.c b/drivers/net/wireless/marvell/mwl8k.c
+index 61697dad4ea614..cc3a9543d255e0 100644
+--- a/drivers/net/wireless/marvell/mwl8k.c
++++ b/drivers/net/wireless/marvell/mwl8k.c
+@@ -1222,6 +1222,10 @@ static int rxq_refill(struct ieee80211_hw *hw, int index, int limit)
+
+ addr = dma_map_single(&priv->pdev->dev, skb->data,
+ MWL8K_RX_MAXSZ, DMA_FROM_DEVICE);
++ if (dma_mapping_error(&priv->pdev->dev, addr)) {
++ kfree_skb(skb);
++ break;
++ }
+
+ rxq->rxd_count++;
+ rx = rxq->tail++;
+diff --git a/drivers/net/wireless/purelifi/plfxlc/mac.c b/drivers/net/wireless/purelifi/plfxlc/mac.c
+index 70d6f5244e5e46..7585ded553dd47 100644
+--- a/drivers/net/wireless/purelifi/plfxlc/mac.c
++++ b/drivers/net/wireless/purelifi/plfxlc/mac.c
+@@ -100,11 +100,6 @@ int plfxlc_mac_init_hw(struct ieee80211_hw *hw)
+ return r;
+ }
+
+-void plfxlc_mac_release(struct plfxlc_mac *mac)
+-{
+- plfxlc_chip_release(&mac->chip);
+-}
+-
+ int plfxlc_op_start(struct ieee80211_hw *hw)
+ {
+ plfxlc_hw_mac(hw)->chip.usb.initialized = 1;
+@@ -751,3 +746,9 @@ struct ieee80211_hw *plfxlc_mac_alloc_hw(struct usb_interface *intf)
+ SET_IEEE80211_DEV(hw, &intf->dev);
+ return hw;
+ }
++
++void plfxlc_mac_release_hw(struct ieee80211_hw *hw)
++{
++ plfxlc_chip_release(&plfxlc_hw_mac(hw)->chip);
++ ieee80211_free_hw(hw);
++}
+diff --git a/drivers/net/wireless/purelifi/plfxlc/mac.h b/drivers/net/wireless/purelifi/plfxlc/mac.h
+index 49b92413729bfa..c0445932b2e8a5 100644
+--- a/drivers/net/wireless/purelifi/plfxlc/mac.h
++++ b/drivers/net/wireless/purelifi/plfxlc/mac.h
+@@ -168,7 +168,7 @@ static inline u8 *plfxlc_mac_get_perm_addr(struct plfxlc_mac *mac)
+ }
+
+ struct ieee80211_hw *plfxlc_mac_alloc_hw(struct usb_interface *intf);
+-void plfxlc_mac_release(struct plfxlc_mac *mac);
++void plfxlc_mac_release_hw(struct ieee80211_hw *hw);
+
+ int plfxlc_mac_preinit_hw(struct ieee80211_hw *hw, const u8 *hw_address);
+ int plfxlc_mac_init_hw(struct ieee80211_hw *hw);
+diff --git a/drivers/net/wireless/purelifi/plfxlc/usb.c b/drivers/net/wireless/purelifi/plfxlc/usb.c
+index 8151bc5e00ccc8..901e0139969e8d 100644
+--- a/drivers/net/wireless/purelifi/plfxlc/usb.c
++++ b/drivers/net/wireless/purelifi/plfxlc/usb.c
+@@ -604,7 +604,7 @@ static int probe(struct usb_interface *intf,
+ r = plfxlc_upload_mac_and_serial(intf, hw_address, serial_number);
+ if (r) {
+ dev_err(&intf->dev, "MAC and Serial upload failed (%d)\n", r);
+- goto error;
++ goto error_free_hw;
+ }
+
+ chip->unit_type = STA;
+@@ -613,13 +613,13 @@ static int probe(struct usb_interface *intf,
+ r = plfxlc_mac_preinit_hw(hw, hw_address);
+ if (r) {
+ dev_err(&intf->dev, "Init mac failed (%d)\n", r);
+- goto error;
++ goto error_free_hw;
+ }
+
+ r = ieee80211_register_hw(hw);
+ if (r) {
+ dev_err(&intf->dev, "Register device failed (%d)\n", r);
+- goto error;
++ goto error_free_hw;
+ }
+
+ if ((le16_to_cpu(interface_to_usbdev(intf)->descriptor.idVendor) ==
+@@ -632,7 +632,7 @@ static int probe(struct usb_interface *intf,
+ }
+ if (r != 0) {
+ dev_err(&intf->dev, "FPGA download failed (%d)\n", r);
+- goto error;
++ goto error_unreg_hw;
+ }
+
+ tx->mac_fifo_full = 0;
+@@ -642,21 +642,21 @@ static int probe(struct usb_interface *intf,
+ r = plfxlc_usb_init_hw(usb);
+ if (r < 0) {
+ dev_err(&intf->dev, "usb_init_hw failed (%d)\n", r);
+- goto error;
++ goto error_unreg_hw;
+ }
+
+ msleep(PLF_MSLEEP_TIME);
+ r = plfxlc_chip_switch_radio(chip, PLFXLC_RADIO_ON);
+ if (r < 0) {
+ dev_dbg(&intf->dev, "chip_switch_radio_on failed (%d)\n", r);
+- goto error;
++ goto error_unreg_hw;
+ }
+
+ msleep(PLF_MSLEEP_TIME);
+ r = plfxlc_chip_set_rate(chip, 8);
+ if (r < 0) {
+ dev_dbg(&intf->dev, "chip_set_rate failed (%d)\n", r);
+- goto error;
++ goto error_unreg_hw;
+ }
+
+ msleep(PLF_MSLEEP_TIME);
+@@ -664,7 +664,7 @@ static int probe(struct usb_interface *intf,
+ hw_address, ETH_ALEN, USB_REQ_MAC_WR);
+ if (r < 0) {
+ dev_dbg(&intf->dev, "MAC_WR failure (%d)\n", r);
+- goto error;
++ goto error_unreg_hw;
+ }
+
+ plfxlc_chip_enable_rxtx(chip);
+@@ -691,12 +691,12 @@ static int probe(struct usb_interface *intf,
+ plfxlc_mac_init_hw(hw);
+ usb->initialized = true;
+ return 0;
++
++error_unreg_hw:
++ ieee80211_unregister_hw(hw);
++error_free_hw:
++ plfxlc_mac_release_hw(hw);
+ error:
+- if (hw) {
+- plfxlc_mac_release(plfxlc_hw_mac(hw));
+- ieee80211_unregister_hw(hw);
+- ieee80211_free_hw(hw);
+- }
+ dev_err(&intf->dev, "pureLifi:Device error");
+ return r;
+ }
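The plfxlc probe rework above replaces a single catch-all cleanup with a labelled unwind ladder, so each failure point undoes exactly what has been acquired so far, in reverse order. The shape in miniature, with illustrative step names:

    #include <stdio.h>

    static int step(int ok) { return ok ? 0 : -1; }

    static int probe(void)
    {
        int r;

        r = step(1);    /* e.g. ieee80211_register_hw() succeeds */
        if (r)
            goto err_free;
        r = step(0);    /* e.g. hardware init fails */
        if (r)
            goto err_unregister;
        return 0;

    err_unregister:
        puts("ieee80211_unregister_hw()");
    err_free:
        puts("plfxlc_mac_release_hw()");
        return r;
    }

    int main(void) { return probe() ? 1 : 0; }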
+@@ -730,8 +730,7 @@ static void disconnect(struct usb_interface *intf)
+ */
+ usb_reset_device(interface_to_usbdev(intf));
+
+- plfxlc_mac_release(mac);
+- ieee80211_free_hw(hw);
++ plfxlc_mac_release_hw(hw);
+ }
+
+ static void plfxlc_usb_resume(struct plfxlc_usb *usb)
+diff --git a/drivers/net/wireless/realtek/rtl818x/rtl8187/dev.c b/drivers/net/wireless/realtek/rtl818x/rtl8187/dev.c
+index c0f6e9c6d03e8a..fa3fb93f4485d8 100644
+--- a/drivers/net/wireless/realtek/rtl818x/rtl8187/dev.c
++++ b/drivers/net/wireless/realtek/rtl818x/rtl8187/dev.c
+@@ -1041,10 +1041,11 @@ static void rtl8187_stop(struct ieee80211_hw *dev)
+ rtl818x_iowrite8(priv, &priv->map->CONFIG4, reg | RTL818X_CONFIG4_VCOOFF);
+ rtl818x_iowrite8(priv, &priv->map->EEPROM_CMD, RTL818X_EEPROM_CMD_NORMAL);
+
++ usb_kill_anchored_urbs(&priv->anchored);
++
+ while ((skb = skb_dequeue(&priv->b_tx_status.queue)))
+ dev_kfree_skb_any(skb);
+
+- usb_kill_anchored_urbs(&priv->anchored);
+ mutex_unlock(&priv->conf_mutex);
+
+ if (!priv->is_rtl8187b)
+diff --git a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
+index cd22c756acc694..c1bc55f0e4c01c 100644
+--- a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
++++ b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
+@@ -5858,7 +5858,7 @@ static int rtl8xxxu_submit_rx_urb(struct rtl8xxxu_priv *priv,
+ skb_size = fops->rx_agg_buf_size;
+ skb_size += (rx_desc_sz + sizeof(struct rtl8723au_phy_stats));
+ } else {
+- skb_size = IEEE80211_MAX_FRAME_LEN;
++ skb_size = IEEE80211_MAX_FRAME_LEN + rx_desc_sz;
+ }
+
+ skb = __netdev_alloc_skb(NULL, skb_size, GFP_KERNEL);
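The rtl8xxxu fix above accounts for the RX descriptor the device writes in front of each 802.11 frame; sizing the buffer to the frame maximum alone under-allocates by rx_desc_sz bytes. Back-of-envelope, with stand-in sizes:

    #include <stdio.h>

    int main(void)
    {
        unsigned int max_frame = 2352;  /* stand-in for IEEE80211_MAX_FRAME_LEN */
        unsigned int rx_desc_sz = 24;   /* illustrative descriptor size */

        printf("old=%u new=%u (short by %u bytes)\n",
               max_frame, max_frame + rx_desc_sz, rx_desc_sz);
        return 0;
    }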
+diff --git a/drivers/pci/controller/pcie-rockchip-host.c b/drivers/pci/controller/pcie-rockchip-host.c
+index 7352b5ff8d3598..bd982390e04c4d 100644
+--- a/drivers/pci/controller/pcie-rockchip-host.c
++++ b/drivers/pci/controller/pcie-rockchip-host.c
+@@ -439,7 +439,7 @@ static irqreturn_t rockchip_pcie_subsys_irq_handler(int irq, void *arg)
+ dev_dbg(dev, "malformed TLP received from the link\n");
+
+ if (sub_reg & PCIE_CORE_INT_UCR)
+- dev_dbg(dev, "malformed TLP received from the link\n");
++ dev_dbg(dev, "Unexpected Completion received from the link\n");
+
+ if (sub_reg & PCIE_CORE_INT_FCE)
+ dev_dbg(dev, "an error was observed in the flow control advertisements from the other side\n");
+diff --git a/drivers/pci/endpoint/functions/pci-epf-vntb.c b/drivers/pci/endpoint/functions/pci-epf-vntb.c
+index 6708d2e789cb49..d057537781f60d 100644
+--- a/drivers/pci/endpoint/functions/pci-epf-vntb.c
++++ b/drivers/pci/endpoint/functions/pci-epf-vntb.c
+@@ -534,7 +534,7 @@ static int epf_ntb_db_bar_init(struct epf_ntb *ntb)
+ struct device *dev = &ntb->epf->dev;
+ int ret;
+ struct pci_epf_bar *epf_bar;
+- void __iomem *mw_addr;
++ void *mw_addr;
+ enum pci_barno barno;
+ size_t size = 4 * ntb->db_count;
+
+@@ -714,7 +714,7 @@ static int epf_ntb_init_epc_bar(struct epf_ntb *ntb)
+ barno = pci_epc_get_next_free_bar(epc_features, barno);
+ if (barno < 0) {
+ dev_err(dev, "Fail to get NTB function BAR\n");
+- return barno;
++ return -ENOENT;
+ }
+ ntb->epf_ntb_bar[bar] = barno;
+ }
+diff --git a/drivers/pci/hotplug/pnv_php.c b/drivers/pci/hotplug/pnv_php.c
+index 092c9ac0d26d27..ec7828ad666179 100644
+--- a/drivers/pci/hotplug/pnv_php.c
++++ b/drivers/pci/hotplug/pnv_php.c
+@@ -3,11 +3,14 @@
+ * PCI Hotplug Driver for PowerPC PowerNV platform.
+ *
+ * Copyright Gavin Shan, IBM Corporation 2016.
++ * Copyright (C) 2025 Raptor Engineering, LLC
++ * Copyright (C) 2025 Raptor Computing Systems, LLC
+ */
+
+ #include <linux/libfdt.h>
+ #include <linux/module.h>
+ #include <linux/pci.h>
++#include <linux/delay.h>
+ #include <linux/pci_hotplug.h>
+ #include <linux/of_fdt.h>
+
+@@ -35,8 +38,10 @@ static void pnv_php_register(struct device_node *dn);
+ static void pnv_php_unregister_one(struct device_node *dn);
+ static void pnv_php_unregister(struct device_node *dn);
+
++static void pnv_php_enable_irq(struct pnv_php_slot *php_slot);
++
+ static void pnv_php_disable_irq(struct pnv_php_slot *php_slot,
+- bool disable_device)
++ bool disable_device, bool disable_msi)
+ {
+ struct pci_dev *pdev = php_slot->pdev;
+ u16 ctrl;
+@@ -52,19 +57,15 @@ static void pnv_php_disable_irq(struct pnv_php_slot *php_slot,
+ php_slot->irq = 0;
+ }
+
+- if (php_slot->wq) {
+- destroy_workqueue(php_slot->wq);
+- php_slot->wq = NULL;
+- }
+-
+- if (disable_device) {
++ if (disable_device || disable_msi) {
+ if (pdev->msix_enabled)
+ pci_disable_msix(pdev);
+ else if (pdev->msi_enabled)
+ pci_disable_msi(pdev);
++ }
+
++ if (disable_device)
+ pci_disable_device(pdev);
+- }
+ }
+
+ static void pnv_php_free_slot(struct kref *kref)
+@@ -73,7 +74,8 @@ static void pnv_php_free_slot(struct kref *kref)
+ struct pnv_php_slot, kref);
+
+ WARN_ON(!list_empty(&php_slot->children));
+- pnv_php_disable_irq(php_slot, false);
++ pnv_php_disable_irq(php_slot, false, false);
++ destroy_workqueue(php_slot->wq);
+ kfree(php_slot->name);
+ kfree(php_slot);
+ }
+@@ -390,6 +392,20 @@ static int pnv_php_get_power_state(struct hotplug_slot *slot, u8 *state)
+ return 0;
+ }
+
++static int pcie_check_link_active(struct pci_dev *pdev)
++{
++ u16 lnk_status;
++ int ret;
++
++ ret = pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &lnk_status);
++ if (ret == PCIBIOS_DEVICE_NOT_FOUND || PCI_POSSIBLE_ERROR(lnk_status))
++ return -ENODEV;
++
++ ret = !!(lnk_status & PCI_EXP_LNKSTA_DLLLA);
++
++ return ret;
++}
++
+ static int pnv_php_get_adapter_state(struct hotplug_slot *slot, u8 *state)
+ {
+ struct pnv_php_slot *php_slot = to_pnv_php_slot(slot);
+@@ -402,6 +418,19 @@ static int pnv_php_get_adapter_state(struct hotplug_slot *slot, u8 *state)
+ */
+ ret = pnv_pci_get_presence_state(php_slot->id, &presence);
+ if (ret >= 0) {
++ if (pci_pcie_type(php_slot->pdev) == PCI_EXP_TYPE_DOWNSTREAM &&
++ presence == OPAL_PCI_SLOT_EMPTY) {
++ /*
++ * Similar to pciehp_hpc, check whether the Link Active
++ * bit is set to account for broken downstream bridges
++ * that don't properly assert Presence Detect State, as
++ * was observed on the Microsemi Switchtec PM8533 PFX
++ * [11f8:8533].
++ */
++ if (pcie_check_link_active(php_slot->pdev) > 0)
++ presence = OPAL_PCI_SLOT_PRESENT;
++ }
++
+ *state = presence;
+ ret = 0;
+ } else {
+@@ -441,6 +470,61 @@ static int pnv_php_set_attention_state(struct hotplug_slot *slot, u8 state)
+ return 0;
+ }
+
++static int pnv_php_activate_slot(struct pnv_php_slot *php_slot,
++ struct hotplug_slot *slot)
++{
++ int ret, i;
++
++ /*
++ * Issue initial slot activation command to firmware
++ *
++ * Firmware will power slot on, attempt to train the link, and
++ * discover any downstream devices. If this process fails, firmware
++ * will return an error code and an invalid device tree. Failure
++ * can occur for multiple reasons, including a faulty
++ * downstream device, a poor connection to the downstream device, or
++ * a previously latched PHB fence. On failure, issue a fundamental
++ * reset up to three times before aborting.
++ */
++ ret = pnv_php_set_slot_power_state(slot, OPAL_PCI_SLOT_POWER_ON);
++ if (ret) {
++ SLOT_WARN(
++ php_slot,
++ "PCI slot activation failed with error code %d, possible frozen PHB",
++ ret);
++ SLOT_WARN(
++ php_slot,
++ "Attempting complete PHB reset before retrying slot activation\n");
++ for (i = 0; i < 3; i++) {
++ /*
++ * Slot activation failed, PHB may be fenced from a
++ * prior device failure.
++ *
++ * Use the OPAL fundamental reset call to both try a
++ * device reset and clear any potentially active PHB
++ * fence / freeze.
++ */
++ SLOT_WARN(php_slot, "Try %d...\n", i + 1);
++ pci_set_pcie_reset_state(php_slot->pdev,
++ pcie_warm_reset);
++ msleep(250);
++ pci_set_pcie_reset_state(php_slot->pdev,
++ pcie_deassert_reset);
++
++ ret = pnv_php_set_slot_power_state(
++ slot, OPAL_PCI_SLOT_POWER_ON);
++ if (!ret)
++ break;
++ }
++
++ if (i >= 3)
++ SLOT_WARN(php_slot,
++ "Failed to bring slot online, aborting!\n");
++ }
++
++ return ret;
++}
++
+ static int pnv_php_enable(struct pnv_php_slot *php_slot, bool rescan)
+ {
+ struct hotplug_slot *slot = &php_slot->slot;
+@@ -503,7 +587,7 @@ static int pnv_php_enable(struct pnv_php_slot *php_slot, bool rescan)
+ goto scan;
+
+ /* Power is off, turn it on and then scan the slot */
+- ret = pnv_php_set_slot_power_state(slot, OPAL_PCI_SLOT_POWER_ON);
++ ret = pnv_php_activate_slot(php_slot, slot);
+ if (ret)
+ return ret;
+
+@@ -560,8 +644,58 @@ static int pnv_php_reset_slot(struct hotplug_slot *slot, bool probe)
+ static int pnv_php_enable_slot(struct hotplug_slot *slot)
+ {
+ struct pnv_php_slot *php_slot = to_pnv_php_slot(slot);
++ u32 prop32;
++ int ret;
++
++ ret = pnv_php_enable(php_slot, true);
++ if (ret)
++ return ret;
++
++ /* (Re-)enable interrupt if the slot supports surprise hotplug */
++ ret = of_property_read_u32(php_slot->dn, "ibm,slot-surprise-pluggable",
++ &prop32);
++ if (!ret && prop32)
++ pnv_php_enable_irq(php_slot);
++
++ return 0;
++}
++
++/*
++ * Disable any hotplug interrupts for all slots on the provided bus, as well as
++ * all downstream slots in preparation for a hot unplug.
++ */
++static int pnv_php_disable_all_irqs(struct pci_bus *bus)
++{
++ struct pci_bus *child_bus;
++ struct pci_slot *slot;
++
++ /* First go down child buses */
++ list_for_each_entry(child_bus, &bus->children, node)
++ pnv_php_disable_all_irqs(child_bus);
++
++ /* Disable IRQs for all pnv_php slots on this bus */
++ list_for_each_entry(slot, &bus->slots, list) {
++ struct pnv_php_slot *php_slot = to_pnv_php_slot(slot->hotplug);
+
+- return pnv_php_enable(php_slot, true);
++ pnv_php_disable_irq(php_slot, false, true);
++ }
++
++ return 0;
++}
++
++/*
++ * Disable any hotplug interrupts for all downstream slots on the provided
++ * bus in preparation for a hot unplug.
++ */
++static int pnv_php_disable_all_downstream_irqs(struct pci_bus *bus)
++{
++ struct pci_bus *child_bus;
++
++ /* Go down child buses, recursively deactivating their IRQs */
++ list_for_each_entry(child_bus, &bus->children, node)
++ pnv_php_disable_all_irqs(child_bus);
++
++ return 0;
+ }
+
+ static int pnv_php_disable_slot(struct hotplug_slot *slot)
+@@ -578,6 +712,13 @@ static int pnv_php_disable_slot(struct hotplug_slot *slot)
+ php_slot->state != PNV_PHP_STATE_REGISTERED)
+ return 0;
+
++ /*
++ * Free all IRQ resources from all child slots before removal.
++ * Note that we do not disable the root slot IRQ here as that
++ * would also deactivate the slot hot (re)plug interrupt!
++ */
++ pnv_php_disable_all_downstream_irqs(php_slot->bus);
++
+ /* Remove all devices behind the slot */
+ pci_lock_rescan_remove();
+ pci_hp_remove_devices(php_slot->bus);
+@@ -646,6 +787,15 @@ static struct pnv_php_slot *pnv_php_alloc_slot(struct device_node *dn)
+ return NULL;
+ }
+
++ /* Allocate workqueue for this slot's interrupt handling */
++ php_slot->wq = alloc_workqueue("pciehp-%s", 0, 0, php_slot->name);
++ if (!php_slot->wq) {
++ SLOT_WARN(php_slot, "Cannot alloc workqueue\n");
++ kfree(php_slot->name);
++ kfree(php_slot);
++ return NULL;
++ }
++
+ if (dn->child && PCI_DN(dn->child))
+ php_slot->slot_no = PCI_SLOT(PCI_DN(dn->child)->devfn);
+ else
+@@ -744,16 +894,63 @@ static int pnv_php_enable_msix(struct pnv_php_slot *php_slot)
+ return entry.vector;
+ }
+
++static void
++pnv_php_detect_clear_suprise_removal_freeze(struct pnv_php_slot *php_slot)
++{
++ struct pci_dev *pdev = php_slot->pdev;
++ struct eeh_dev *edev;
++ struct eeh_pe *pe;
++ int i, rc;
++
++ /*
++ * When a device is surprise removed from a downstream bridge slot,
++ * the upstream bridge port can still end up frozen due to related EEH
++ * events, which will in turn block the MSI interrupts for slot hotplug
++ * detection.
++ *
++ * Detect and thaw any frozen upstream PE after slot deactivation.
++ */
++ edev = pci_dev_to_eeh_dev(pdev);
++ pe = edev ? edev->pe : NULL;
++ rc = eeh_pe_get_state(pe);
++ if ((rc == -ENODEV) || (rc == -ENOENT)) {
++ SLOT_WARN(
++ php_slot,
++ "Upstream bridge PE state unknown, hotplug detect may fail\n");
++ } else {
++ if (pe->state & EEH_PE_ISOLATED) {
++ SLOT_WARN(
++ php_slot,
++ "Upstream bridge PE %02x frozen, thawing...\n",
++ pe->addr);
++ for (i = 0; i < 3; i++)
++ if (!eeh_unfreeze_pe(pe))
++ break;
++ if (i >= 3)
++ SLOT_WARN(
++ php_slot,
++ "Unable to thaw PE %02x, hotplug detect will fail!\n",
++ pe->addr);
++ else
++ SLOT_WARN(php_slot,
++ "PE %02x thawed successfully\n",
++ pe->addr);
++ }
++ }
++}
++
+ static void pnv_php_event_handler(struct work_struct *work)
+ {
+ struct pnv_php_event *event =
+ container_of(work, struct pnv_php_event, work);
+ struct pnv_php_slot *php_slot = event->php_slot;
+
+- if (event->added)
++ if (event->added) {
+ pnv_php_enable_slot(&php_slot->slot);
+- else
++ } else {
+ pnv_php_disable_slot(&php_slot->slot);
++ pnv_php_detect_clear_suprise_removal_freeze(php_slot);
++ }
+
+ kfree(event);
+ }
+@@ -842,14 +1039,6 @@ static void pnv_php_init_irq(struct pnv_php_slot *php_slot, int irq)
+ u16 sts, ctrl;
+ int ret;
+
+- /* Allocate workqueue */
+- php_slot->wq = alloc_workqueue("pciehp-%s", 0, 0, php_slot->name);
+- if (!php_slot->wq) {
+- SLOT_WARN(php_slot, "Cannot alloc workqueue\n");
+- pnv_php_disable_irq(php_slot, true);
+- return;
+- }
+-
+ /* Check PDC (Presence Detection Change) is broken or not */
+ ret = of_property_read_u32(php_slot->dn, "ibm,slot-broken-pdc",
+ &broken_pdc);
+@@ -868,7 +1057,7 @@ static void pnv_php_init_irq(struct pnv_php_slot *php_slot, int irq)
+ ret = request_irq(irq, pnv_php_interrupt, IRQF_SHARED,
+ php_slot->name, php_slot);
+ if (ret) {
+- pnv_php_disable_irq(php_slot, true);
++ pnv_php_disable_irq(php_slot, true, true);
+ SLOT_WARN(php_slot, "Error %d enabling IRQ %d\n", ret, irq);
+ return;
+ }
+diff --git a/drivers/pinctrl/sunxi/pinctrl-sunxi.c b/drivers/pinctrl/sunxi/pinctrl-sunxi.c
+index 6c04027d0dd977..df2e721297fc9d 100644
+--- a/drivers/pinctrl/sunxi/pinctrl-sunxi.c
++++ b/drivers/pinctrl/sunxi/pinctrl-sunxi.c
+@@ -396,6 +396,7 @@ static int sunxi_pctrl_dt_node_to_map(struct pinctrl_dev *pctldev,
+ const char *function, *pin_prop;
+ const char *group;
+ int ret, npins, nmaps, configlen = 0, i = 0;
++ struct pinctrl_map *new_map;
+
+ *map = NULL;
+ *num_maps = 0;
+@@ -470,9 +471,13 @@ static int sunxi_pctrl_dt_node_to_map(struct pinctrl_dev *pctldev,
+ * We now have the number of maps we need, so we can resize our
+ * map array
+ */
+- *map = krealloc(*map, i * sizeof(struct pinctrl_map), GFP_KERNEL);
+- if (!*map)
+- return -ENOMEM;
++ new_map = krealloc(*map, i * sizeof(struct pinctrl_map), GFP_KERNEL);
++ if (!new_map) {
++ ret = -ENOMEM;
++ goto err_free_map;
++ }
++
++ *map = new_map;
+
+ return 0;
+
+diff --git a/drivers/platform/x86/ideapad-laptop.c b/drivers/platform/x86/ideapad-laptop.c
+index bddd240d68abc3..54ee8b170733ff 100644
+--- a/drivers/platform/x86/ideapad-laptop.c
++++ b/drivers/platform/x86/ideapad-laptop.c
+@@ -1498,7 +1498,7 @@ static int ideapad_kbd_bl_init(struct ideapad_private *priv)
+ priv->kbd_bl.led.max_brightness = 1;
+ priv->kbd_bl.led.brightness_get = ideapad_kbd_bl_led_cdev_brightness_get;
+ priv->kbd_bl.led.brightness_set_blocking = ideapad_kbd_bl_led_cdev_brightness_set;
+- priv->kbd_bl.led.flags = LED_BRIGHT_HW_CHANGED;
++ priv->kbd_bl.led.flags = LED_BRIGHT_HW_CHANGED | LED_RETAIN_AT_SHUTDOWN;
+
+ err = led_classdev_register(&priv->platform_device->dev, &priv->kbd_bl.led);
+ if (err)
+diff --git a/drivers/power/supply/cpcap-charger.c b/drivers/power/supply/cpcap-charger.c
+index be9764541d52a5..05a3f4b208a420 100644
+--- a/drivers/power/supply/cpcap-charger.c
++++ b/drivers/power/supply/cpcap-charger.c
+@@ -689,9 +689,8 @@ static void cpcap_usb_detect(struct work_struct *work)
+ struct power_supply *battery;
+
+ battery = power_supply_get_by_name("battery");
+- if (IS_ERR_OR_NULL(battery)) {
+- dev_err(ddata->dev, "battery power_supply not available %li\n",
+- PTR_ERR(battery));
++ if (!battery) {
++ dev_err(ddata->dev, "battery power_supply not available\n");
+ return;
+ }
+
+diff --git a/drivers/power/supply/max14577_charger.c b/drivers/power/supply/max14577_charger.c
+index f244cd902eb947..e4461caecea33a 100644
+--- a/drivers/power/supply/max14577_charger.c
++++ b/drivers/power/supply/max14577_charger.c
+@@ -501,7 +501,7 @@ static struct max14577_charger_platform_data *max14577_charger_dt_init(
+ static struct max14577_charger_platform_data *max14577_charger_dt_init(
+ struct platform_device *pdev)
+ {
+- return NULL;
++ return ERR_PTR(-ENODATA);
+ }
+ #endif /* CONFIG_OF */
+
+@@ -572,7 +572,7 @@ static int max14577_charger_probe(struct platform_device *pdev)
+ chg->max14577 = max14577;
+
+ chg->pdata = max14577_charger_dt_init(pdev);
+- if (IS_ERR_OR_NULL(chg->pdata))
++ if (IS_ERR(chg->pdata))
+ return PTR_ERR(chg->pdata);
+
+ ret = max14577_charger_reg_init(chg);
+diff --git a/drivers/powercap/dtpm_cpu.c b/drivers/powercap/dtpm_cpu.c
+index ae7ee611978ba1..99a82060ead984 100644
+--- a/drivers/powercap/dtpm_cpu.c
++++ b/drivers/powercap/dtpm_cpu.c
+@@ -93,6 +93,8 @@ static u64 get_pd_power_uw(struct dtpm *dtpm)
+ int i;
+
+ pd = em_cpu_get(dtpm_cpu->cpu);
++ if (!pd)
++ return 0;
+
+ pd_mask = em_span_cpus(pd);
+
+diff --git a/drivers/pps/pps.c b/drivers/pps/pps.c
+index 2d008e0d116ab5..ea966fc67d2870 100644
+--- a/drivers/pps/pps.c
++++ b/drivers/pps/pps.c
+@@ -41,6 +41,9 @@ static __poll_t pps_cdev_poll(struct file *file, poll_table *wait)
+
+ poll_wait(file, &pps->queue, wait);
+
++ if (pps->last_fetched_ev == pps->last_ev)
++ return 0;
++
+ return EPOLLIN | EPOLLRDNORM;
+ }
+
+@@ -186,9 +189,11 @@ static long pps_cdev_ioctl(struct file *file,
+ if (err)
+ return err;
+
+- /* Return the fetched timestamp */
++ /* Return the fetched timestamp and save last fetched event */
+ spin_lock_irq(&pps->lock);
+
++ pps->last_fetched_ev = pps->last_ev;
++
+ fdata.info.assert_sequence = pps->assert_sequence;
+ fdata.info.clear_sequence = pps->clear_sequence;
+ fdata.info.assert_tu = pps->assert_tu;
+@@ -272,9 +277,11 @@ static long pps_cdev_compat_ioctl(struct file *file,
+ if (err)
+ return err;
+
+- /* Return the fetched timestamp */
++ /* Return the fetched timestamp and save last fetched event */
+ spin_lock_irq(&pps->lock);
+
++ pps->last_fetched_ev = pps->last_ev;
++
+ compat.info.assert_sequence = pps->assert_sequence;
+ compat.info.clear_sequence = pps->clear_sequence;
+ compat.info.current_mode = pps->current_mode;
+diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
+index 29c9171e923a29..7e6ff7e72784bd 100644
+--- a/drivers/regulator/core.c
++++ b/drivers/regulator/core.c
+@@ -5423,6 +5423,7 @@ static void regulator_remove_coupling(struct regulator_dev *rdev)
+ ERR_PTR(err));
+ }
+
++ rdev->coupling_desc.n_coupled = 0;
+ kfree(rdev->coupling_desc.coupled_rdevs);
+ rdev->coupling_desc.coupled_rdevs = NULL;
+ }
+diff --git a/drivers/rtc/rtc-ds1307.c b/drivers/rtc/rtc-ds1307.c
+index b7f8b3f9b0595c..73f2dd3af4d49a 100644
+--- a/drivers/rtc/rtc-ds1307.c
++++ b/drivers/rtc/rtc-ds1307.c
+@@ -1461,7 +1461,7 @@ static long ds3231_clk_sqw_round_rate(struct clk_hw *hw, unsigned long rate,
+ return ds3231_clk_sqw_rates[i];
+ }
+
+- return 0;
++ return ds3231_clk_sqw_rates[ARRAY_SIZE(ds3231_clk_sqw_rates) - 1];
+ }
+
+ static int ds3231_clk_sqw_set_rate(struct clk_hw *hw, unsigned long rate,
+diff --git a/drivers/rtc/rtc-hym8563.c b/drivers/rtc/rtc-hym8563.c
+index cc710d682121bd..2f52aabd129a9d 100644
+--- a/drivers/rtc/rtc-hym8563.c
++++ b/drivers/rtc/rtc-hym8563.c
+@@ -294,7 +294,7 @@ static long hym8563_clkout_round_rate(struct clk_hw *hw, unsigned long rate,
+ if (clkout_rates[i] <= rate)
+ return clkout_rates[i];
+
+- return 0;
++ return clkout_rates[0];
+ }
+
+ static int hym8563_clkout_set_rate(struct clk_hw *hw, unsigned long rate,
+diff --git a/drivers/rtc/rtc-nct3018y.c b/drivers/rtc/rtc-nct3018y.c
+index 108eced8f0030f..43b01f3e640ad6 100644
+--- a/drivers/rtc/rtc-nct3018y.c
++++ b/drivers/rtc/rtc-nct3018y.c
+@@ -342,7 +342,7 @@ static long nct3018y_clkout_round_rate(struct clk_hw *hw, unsigned long rate,
+ if (clkout_rates[i] <= rate)
+ return clkout_rates[i];
+
+- return 0;
++ return clkout_rates[0];
+ }
+
+ static int nct3018y_clkout_set_rate(struct clk_hw *hw, unsigned long rate,
+diff --git a/drivers/rtc/rtc-pcf85063.c b/drivers/rtc/rtc-pcf85063.c
+index 4a29b44e75e6a3..b095663d5ebccf 100644
+--- a/drivers/rtc/rtc-pcf85063.c
++++ b/drivers/rtc/rtc-pcf85063.c
+@@ -410,7 +410,7 @@ static long pcf85063_clkout_round_rate(struct clk_hw *hw, unsigned long rate,
+ if (clkout_rates[i] <= rate)
+ return clkout_rates[i];
+
+- return 0;
++ return clkout_rates[0];
+ }
+
+ static int pcf85063_clkout_set_rate(struct clk_hw *hw, unsigned long rate,
+diff --git a/drivers/rtc/rtc-pcf8563.c b/drivers/rtc/rtc-pcf8563.c
+index 11fa9788558bea..dd27acae137cb8 100644
+--- a/drivers/rtc/rtc-pcf8563.c
++++ b/drivers/rtc/rtc-pcf8563.c
+@@ -386,7 +386,7 @@ static long pcf8563_clkout_round_rate(struct clk_hw *hw, unsigned long rate,
+ if (clkout_rates[i] <= rate)
+ return clkout_rates[i];
+
+- return 0;
++ return clkout_rates[0];
+ }
+
+ static int pcf8563_clkout_set_rate(struct clk_hw *hw, unsigned long rate,
+diff --git a/drivers/rtc/rtc-rv3028.c b/drivers/rtc/rtc-rv3028.c
+index dd170e3efd83ed..436523605f8fb7 100644
+--- a/drivers/rtc/rtc-rv3028.c
++++ b/drivers/rtc/rtc-rv3028.c
+@@ -738,7 +738,7 @@ static long rv3028_clkout_round_rate(struct clk_hw *hw, unsigned long rate,
+ if (clkout_rates[i] <= rate)
+ return clkout_rates[i];
+
+- return 0;
++ return clkout_rates[0];
+ }
+
+ static int rv3028_clkout_set_rate(struct clk_hw *hw, unsigned long rate,
+diff --git a/drivers/scsi/elx/efct/efct_lio.c b/drivers/scsi/elx/efct/efct_lio.c
+index be4b5c1ee32dc1..150d059e51ac75 100644
+--- a/drivers/scsi/elx/efct/efct_lio.c
++++ b/drivers/scsi/elx/efct/efct_lio.c
+@@ -396,7 +396,7 @@ efct_lio_sg_unmap(struct efct_io *io)
+ return;
+
+ dma_unmap_sg(&io->efct->pci->dev, cmd->t_data_sg,
+- ocp->seg_map_cnt, cmd->data_direction);
++ cmd->t_data_nents, cmd->data_direction);
+ ocp->seg_map_cnt = 0;
+ }
+
+diff --git a/drivers/scsi/ibmvscsi_tgt/libsrp.c b/drivers/scsi/ibmvscsi_tgt/libsrp.c
+index 8a0e28aec928e4..0ecad398ed3db0 100644
+--- a/drivers/scsi/ibmvscsi_tgt/libsrp.c
++++ b/drivers/scsi/ibmvscsi_tgt/libsrp.c
+@@ -184,7 +184,8 @@ static int srp_direct_data(struct ibmvscsis_cmd *cmd, struct srp_direct_buf *md,
+ err = rdma_io(cmd, sg, nsg, md, 1, dir, len);
+
+ if (dma_map)
+- dma_unmap_sg(iue->target->dev, sg, nsg, DMA_BIDIRECTIONAL);
++ dma_unmap_sg(iue->target->dev, sg, cmd->se_cmd.t_data_nents,
++ DMA_BIDIRECTIONAL);
+
+ return err;
+ }
+@@ -256,7 +257,8 @@ static int srp_indirect_data(struct ibmvscsis_cmd *cmd, struct srp_cmd *srp_cmd,
+ err = rdma_io(cmd, sg, nsg, md, nmd, dir, len);
+
+ if (dma_map)
+- dma_unmap_sg(iue->target->dev, sg, nsg, DMA_BIDIRECTIONAL);
++ dma_unmap_sg(iue->target->dev, sg, cmd->se_cmd.t_data_nents,
++ DMA_BIDIRECTIONAL);
+
+ free_mem:
+ if (token && dma_map) {
+diff --git a/drivers/scsi/isci/request.c b/drivers/scsi/isci/request.c
+index 0f0732d56800d9..46d30a725d7e37 100644
+--- a/drivers/scsi/isci/request.c
++++ b/drivers/scsi/isci/request.c
+@@ -2907,7 +2907,7 @@ static void isci_request_io_request_complete(struct isci_host *ihost,
+ task->total_xfer_len, task->data_dir);
+ else /* unmap the sgl dma addresses */
+ dma_unmap_sg(&ihost->pdev->dev, task->scatter,
+- request->num_sg_entries, task->data_dir);
++ task->num_scatter, task->data_dir);
+ break;
+ case SAS_PROTOCOL_SMP: {
+ struct scatterlist *sg = &task->smp_task.smp_req;
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_scsih.c b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+index 31768da482a574..b5b77b82d69f18 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_scsih.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+@@ -10819,8 +10819,7 @@ _mpt3sas_fw_work(struct MPT3SAS_ADAPTER *ioc, struct fw_event_work *fw_event)
+ break;
+ case MPI2_EVENT_PCIE_TOPOLOGY_CHANGE_LIST:
+ _scsih_pcie_topology_change_event(ioc, fw_event);
+- ioc->current_event = NULL;
+- return;
++ break;
+ }
+ out:
+ fw_event_work_put(fw_event);
+diff --git a/drivers/scsi/mvsas/mv_sas.c b/drivers/scsi/mvsas/mv_sas.c
+index a6867dae0e7c21..1275f3be530f78 100644
+--- a/drivers/scsi/mvsas/mv_sas.c
++++ b/drivers/scsi/mvsas/mv_sas.c
+@@ -831,7 +831,7 @@ static int mvs_task_prep(struct sas_task *task, struct mvs_info *mvi, int is_tmf
+ dev_printk(KERN_ERR, mvi->dev, "mvsas prep failed[%d]!\n", rc);
+ if (!sas_protocol_ata(task->task_proto))
+ if (n_elem)
+- dma_unmap_sg(mvi->dev, task->scatter, n_elem,
++ dma_unmap_sg(mvi->dev, task->scatter, task->num_scatter,
+ task->data_dir);
+ prep_out:
+ return rc;
+@@ -877,7 +877,7 @@ static void mvs_slot_task_free(struct mvs_info *mvi, struct sas_task *task,
+ if (!sas_protocol_ata(task->task_proto))
+ if (slot->n_elem)
+ dma_unmap_sg(mvi->dev, task->scatter,
+- slot->n_elem, task->data_dir);
++ task->num_scatter, task->data_dir);
+
+ switch (task->task_proto) {
+ case SAS_PROTOCOL_SMP:
+diff --git a/drivers/scsi/scsi_transport_iscsi.c b/drivers/scsi/scsi_transport_iscsi.c
+index c44103752b6995..ef20fea959578a 100644
+--- a/drivers/scsi/scsi_transport_iscsi.c
++++ b/drivers/scsi/scsi_transport_iscsi.c
+@@ -2168,6 +2168,8 @@ static int iscsi_iter_destroy_conn_fn(struct device *dev, void *data)
+ return 0;
+
+ iscsi_remove_conn(iscsi_dev_to_conn(dev));
++ iscsi_put_conn(iscsi_dev_to_conn(dev));
++
+ return 0;
+ }
+
+diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
+index c3006524eb039c..3b481779af3516 100644
+--- a/drivers/scsi/sd.c
++++ b/drivers/scsi/sd.c
+@@ -3771,7 +3771,9 @@ static void sd_shutdown(struct device *dev)
+ if ((system_state != SYSTEM_RESTART &&
+ sdkp->device->manage_system_start_stop) ||
+ (system_state == SYSTEM_POWER_OFF &&
+- sdkp->device->manage_shutdown)) {
++ sdkp->device->manage_shutdown) ||
++ (system_state == SYSTEM_RUNNING &&
++ sdkp->device->manage_runtime_start_stop)) {
+ sd_printk(KERN_NOTICE, sdkp, "Stopping disk\n");
+ sd_start_stop_device(sdkp, 0);
+ }
+diff --git a/drivers/soc/qcom/qmi_encdec.c b/drivers/soc/qcom/qmi_encdec.c
+index 5c7161b18b7240..645c4ee24f5b4f 100644
+--- a/drivers/soc/qcom/qmi_encdec.c
++++ b/drivers/soc/qcom/qmi_encdec.c
+@@ -304,6 +304,8 @@ static int qmi_encode(const struct qmi_elem_info *ei_array, void *out_buf,
+ const void *buf_src;
+ int encode_tlv = 0;
+ int rc;
++ u8 val8;
++ u16 val16;
+
+ if (!ei_array)
+ return 0;
+@@ -338,7 +340,6 @@ static int qmi_encode(const struct qmi_elem_info *ei_array, void *out_buf,
+ break;
+
+ case QMI_DATA_LEN:
+- memcpy(&data_len_value, buf_src, temp_ei->elem_size);
+ data_len_sz = temp_ei->elem_size == sizeof(u8) ?
+ sizeof(u8) : sizeof(u16);
+ /* Check to avoid out of range buffer access */
+@@ -348,8 +349,17 @@ static int qmi_encode(const struct qmi_elem_info *ei_array, void *out_buf,
+ __func__);
+ return -ETOOSMALL;
+ }
+- rc = qmi_encode_basic_elem(buf_dst, &data_len_value,
+- 1, data_len_sz);
++ if (data_len_sz == sizeof(u8)) {
++ val8 = *(u8 *)buf_src;
++ data_len_value = (u32)val8;
++ rc = qmi_encode_basic_elem(buf_dst, &val8,
++ 1, data_len_sz);
++ } else {
++ val16 = *(u16 *)buf_src;
++ data_len_value = (u32)le16_to_cpu(val16);
++ rc = qmi_encode_basic_elem(buf_dst, &val16,
++ 1, data_len_sz);
++ }
+ UPDATE_ENCODE_VARIABLES(temp_ei, buf_dst,
+ encoded_bytes, tlv_len,
+ encode_tlv, rc);
+@@ -523,14 +533,23 @@ static int qmi_decode_string_elem(const struct qmi_elem_info *ei_array,
+ u32 string_len = 0;
+ u32 string_len_sz = 0;
+ const struct qmi_elem_info *temp_ei = ei_array;
++ u8 val8;
++ u16 val16;
+
+ if (dec_level == 1) {
+ string_len = tlv_len;
+ } else {
+ string_len_sz = temp_ei->elem_len <= U8_MAX ?
+ sizeof(u8) : sizeof(u16);
+- rc = qmi_decode_basic_elem(&string_len, buf_src,
+- 1, string_len_sz);
++ if (string_len_sz == sizeof(u8)) {
++ rc = qmi_decode_basic_elem(&val8, buf_src,
++ 1, string_len_sz);
++ string_len = (u32)val8;
++ } else {
++ rc = qmi_decode_basic_elem(&val16, buf_src,
++ 1, string_len_sz);
++ string_len = (u32)val16;
++ }
+ decoded_bytes += rc;
+ }
+
+@@ -604,6 +623,9 @@ static int qmi_decode(const struct qmi_elem_info *ei_array, void *out_c_struct,
+ u32 decoded_bytes = 0;
+ const void *buf_src = in_buf;
+ int rc;
++ u8 val8;
++ u16 val16;
++ u32 val32;
+
+ while (decoded_bytes < in_buf_len) {
+ if (dec_level >= 2 && temp_ei->data_type == QMI_EOTI)
+@@ -642,9 +664,17 @@ static int qmi_decode(const struct qmi_elem_info *ei_array, void *out_c_struct,
+ if (temp_ei->data_type == QMI_DATA_LEN) {
+ data_len_sz = temp_ei->elem_size == sizeof(u8) ?
+ sizeof(u8) : sizeof(u16);
+- rc = qmi_decode_basic_elem(&data_len_value, buf_src,
+- 1, data_len_sz);
+- memcpy(buf_dst, &data_len_value, sizeof(u32));
++ if (data_len_sz == sizeof(u8)) {
++ rc = qmi_decode_basic_elem(&val8, buf_src,
++ 1, data_len_sz);
++ data_len_value = (u32)val8;
++ } else {
++ rc = qmi_decode_basic_elem(&val16, buf_src,
++ 1, data_len_sz);
++ data_len_value = (u32)val16;
++ }
++ val32 = cpu_to_le32(data_len_value);
++ memcpy(buf_dst, &val32, sizeof(u32));
+ temp_ei = temp_ei + 1;
+ buf_dst = out_c_struct + temp_ei->offset;
+ tlv_len -= data_len_sz;
+diff --git a/drivers/soc/tegra/cbb/tegra234-cbb.c b/drivers/soc/tegra/cbb/tegra234-cbb.c
+index f33d094e5ea60c..5813c55222ca36 100644
+--- a/drivers/soc/tegra/cbb/tegra234-cbb.c
++++ b/drivers/soc/tegra/cbb/tegra234-cbb.c
+@@ -189,6 +189,8 @@ static void tegra234_cbb_error_clear(struct tegra_cbb *cbb)
+ {
+ struct tegra234_cbb *priv = to_tegra234_cbb(cbb);
+
++ writel(0, priv->mon + FABRIC_MN_MASTER_ERR_FORCE_0);
++
+ writel(0x3f, priv->mon + FABRIC_MN_MASTER_ERR_STATUS_0);
+ dsb(sy);
+ }
+diff --git a/drivers/soundwire/stream.c b/drivers/soundwire/stream.c
+index 2624441d2fa92c..3f23d9f0f72655 100644
+--- a/drivers/soundwire/stream.c
++++ b/drivers/soundwire/stream.c
+@@ -1402,7 +1402,7 @@ static int _sdw_prepare_stream(struct sdw_stream_runtime *stream,
+ if (ret < 0) {
+ dev_err(bus->dev, "Prepare port(s) failed ret = %d\n",
+ ret);
+- return ret;
++ goto restore_params;
+ }
+ }
+
+diff --git a/drivers/staging/fbtft/fbtft-core.c b/drivers/staging/fbtft/fbtft-core.c
+index afaba94d1d1ca2..7b802b02262738 100644
+--- a/drivers/staging/fbtft/fbtft-core.c
++++ b/drivers/staging/fbtft/fbtft-core.c
+@@ -744,6 +744,7 @@ struct fb_info *fbtft_framebuffer_alloc(struct fbtft_display *display,
+ return info;
+
+ release_framebuf:
++ fb_deferred_io_cleanup(info);
+ framebuffer_release(info);
+
+ alloc_fail:
+diff --git a/drivers/staging/nvec/nvec_power.c b/drivers/staging/nvec/nvec_power.c
+index b1ef196e1cfe89..622d99ea955529 100644
+--- a/drivers/staging/nvec/nvec_power.c
++++ b/drivers/staging/nvec/nvec_power.c
+@@ -194,7 +194,7 @@ static int nvec_power_bat_notifier(struct notifier_block *nb,
+ break;
+ case MANUFACTURER:
+ memcpy(power->bat_manu, &res->plc, res->length - 2);
+- power->bat_model[res->length - 2] = '\0';
++ power->bat_manu[res->length - 2] = '\0';
+ break;
+ case MODEL:
+ memcpy(power->bat_model, &res->plc, res->length - 2);
+diff --git a/drivers/staging/vc04_services/bcm2835-audio/bcm2835-vchiq.c b/drivers/staging/vc04_services/bcm2835-audio/bcm2835-vchiq.c
+index f4c2c9506d863e..2a5a43e7ff295c 100644
+--- a/drivers/staging/vc04_services/bcm2835-audio/bcm2835-vchiq.c
++++ b/drivers/staging/vc04_services/bcm2835-audio/bcm2835-vchiq.c
+@@ -101,7 +101,7 @@ static enum vchiq_status audio_vchi_callback(struct vchiq_instance *vchiq_instan
+ struct vc_audio_msg *m;
+
+ if (reason != VCHIQ_MESSAGE_AVAILABLE)
+- return VCHIQ_SUCCESS;
++ return 0;
+
+ m = (void *)header->data;
+ if (m->type == VC_AUDIO_MSG_TYPE_RESULT) {
+@@ -119,7 +119,7 @@ static enum vchiq_status audio_vchi_callback(struct vchiq_instance *vchiq_instan
+ }
+
+ vchiq_release_message(vchiq_instance, instance->service_handle, header);
+- return VCHIQ_SUCCESS;
++ return 0;
+ }
+
+ static int
+diff --git a/drivers/staging/vc04_services/include/linux/raspberrypi/vchiq.h b/drivers/staging/vc04_services/include/linux/raspberrypi/vchiq.h
+index 690ab7165b2c18..842bec937bd901 100644
+--- a/drivers/staging/vc04_services/include/linux/raspberrypi/vchiq.h
++++ b/drivers/staging/vc04_services/include/linux/raspberrypi/vchiq.h
+@@ -18,8 +18,6 @@ enum vchiq_reason {
+ };
+
+ enum vchiq_status {
+- VCHIQ_ERROR = -1,
+- VCHIQ_SUCCESS = 0,
+ VCHIQ_RETRY = 1
+ };
+
+diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
+index 456a9508fb911e..3fafc94deb476d 100644
+--- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
++++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
+@@ -496,7 +496,7 @@ static int vchiq_platform_init(struct platform_device *pdev, struct vchiq_state
+
+ vchiq_slot_zero = vchiq_init_slots(slot_mem, slot_mem_size);
+ if (!vchiq_slot_zero)
+- return -EINVAL;
++ return -ENOMEM;
+
+ vchiq_slot_zero->platform_data[VCHIQ_PLATFORM_FRAGMENTS_OFFSET_IDX] =
+ (int)slot_phys + slot_mem_size;
+@@ -711,11 +711,10 @@ void free_bulk_waiter(struct vchiq_instance *instance)
+
+ enum vchiq_status vchiq_shutdown(struct vchiq_instance *instance)
+ {
+- enum vchiq_status status = VCHIQ_SUCCESS;
++ int status = 0;
+ struct vchiq_state *state = instance->state;
+
+- if (mutex_lock_killable(&state->mutex))
+- return VCHIQ_RETRY;
++ mutex_lock(&state->mutex);
+
+ /* Remove all services */
+ vchiq_shutdown_internal(state, instance);
+@@ -743,12 +742,12 @@ enum vchiq_status vchiq_connect(struct vchiq_instance *instance)
+
+ if (mutex_lock_killable(&state->mutex)) {
+ vchiq_log_trace(vchiq_core_log_level, "%s: call to mutex_lock failed", __func__);
+- status = VCHIQ_RETRY;
++ status = -EAGAIN;
+ goto failed;
+ }
+ status = vchiq_connect_internal(state, instance);
+
+- if (status == VCHIQ_SUCCESS)
++ if (!status)
+ instance->connected = 1;
+
+ mutex_unlock(&state->mutex);
+@@ -780,9 +779,9 @@ vchiq_add_service(struct vchiq_instance *instance,
+
+ if (service) {
+ *phandle = service->handle;
+- status = VCHIQ_SUCCESS;
++ status = 0;
+ } else {
+- status = VCHIQ_ERROR;
++ status = -EINVAL;
+ }
+
+ vchiq_log_trace(vchiq_core_log_level, "%s(%p): returning %d", __func__, instance, status);
+@@ -795,7 +794,7 @@ vchiq_open_service(struct vchiq_instance *instance,
+ const struct vchiq_service_params_kernel *params,
+ unsigned int *phandle)
+ {
+- enum vchiq_status status = VCHIQ_ERROR;
++ int status = -EINVAL;
+ struct vchiq_state *state = instance->state;
+ struct vchiq_service *service = NULL;
+
+@@ -809,7 +808,7 @@ vchiq_open_service(struct vchiq_instance *instance,
+ if (service) {
+ *phandle = service->handle;
+ status = vchiq_open_service_internal(service, current->pid);
+- if (status != VCHIQ_SUCCESS) {
++ if (status) {
+ vchiq_remove_service(instance, service->handle);
+ *phandle = VCHIQ_SERVICE_HANDLE_INVALID;
+ }
+@@ -842,15 +841,15 @@ vchiq_bulk_transmit(struct vchiq_instance *instance, unsigned int handle, const
+ VCHIQ_BULK_TRANSMIT);
+ break;
+ default:
+- return VCHIQ_ERROR;
++ return -EINVAL;
+ }
+
+ /*
+- * vchiq_*_bulk_transfer() may return VCHIQ_RETRY, so we need
++ * vchiq_*_bulk_transfer() may return -EAGAIN, so we need
+ * to implement a retry mechanism since this function is
+ * supposed to block until queued
+ */
+- if (status != VCHIQ_RETRY)
++ if (status != -EAGAIN)
+ break;
+
+ msleep(1);
+@@ -879,15 +878,15 @@ enum vchiq_status vchiq_bulk_receive(struct vchiq_instance *instance, unsigned i
+ VCHIQ_BULK_RECEIVE);
+ break;
+ default:
+- return VCHIQ_ERROR;
++ return -EINVAL;
+ }
+
+ /*
+- * vchiq_*_bulk_transfer() may return VCHIQ_RETRY, so we need
++ * vchiq_*_bulk_transfer() may return -EAGAIN, so we need
+ * to implement a retry mechanism since this function is
+ * supposed to block until queued
+ */
+- if (status != VCHIQ_RETRY)
++ if (status != -EAGAIN)
+ break;
+
+ msleep(1);
+@@ -907,7 +906,7 @@ vchiq_blocking_bulk_transfer(struct vchiq_instance *instance, unsigned int handl
+
+ service = find_service_by_handle(instance, handle);
+ if (!service)
+- return VCHIQ_ERROR;
++ return -EINVAL;
+
+ vchiq_service_put(service);
+
+@@ -941,14 +940,14 @@ vchiq_blocking_bulk_transfer(struct vchiq_instance *instance, unsigned int handl
+ waiter = kzalloc(sizeof(*waiter), GFP_KERNEL);
+ if (!waiter) {
+ vchiq_log_error(vchiq_core_log_level, "%s - out of memory", __func__);
+- return VCHIQ_ERROR;
++ return -ENOMEM;
+ }
+ }
+
+ status = vchiq_bulk_transfer(instance, handle, data, NULL, size,
+ &waiter->bulk_waiter,
+ VCHIQ_BULK_MODE_BLOCKING, dir);
+- if ((status != VCHIQ_RETRY) || fatal_signal_pending(current) || !waiter->bulk_waiter.bulk) {
++ if ((status != -EAGAIN) || fatal_signal_pending(current) || !waiter->bulk_waiter.bulk) {
+ struct vchiq_bulk *bulk = waiter->bulk_waiter.bulk;
+
+ if (bulk) {
+@@ -988,10 +987,10 @@ add_completion(struct vchiq_instance *instance, enum vchiq_reason reason,
+ DEBUG_COUNT(COMPLETION_QUEUE_FULL_COUNT);
+ if (wait_for_completion_interruptible(&instance->remove_event)) {
+ vchiq_log_info(vchiq_arm_log_level, "service_callback interrupted");
+- return VCHIQ_RETRY;
++ return -EAGAIN;
+ } else if (instance->closing) {
+ vchiq_log_info(vchiq_arm_log_level, "service_callback closing");
+- return VCHIQ_SUCCESS;
++ return 0;
+ }
+ DEBUG_TRACE(SERVICE_CALLBACK_LINE);
+ }
+@@ -1028,7 +1027,7 @@ add_completion(struct vchiq_instance *instance, enum vchiq_reason reason,
+
+ complete(&instance->insert_event);
+
+- return VCHIQ_SUCCESS;
++ return 0;
+ }
+
+ enum vchiq_status
+@@ -1053,14 +1052,14 @@ service_callback(struct vchiq_instance *instance, enum vchiq_reason reason,
+ service = handle_to_service(instance, handle);
+ if (WARN_ON(!service)) {
+ rcu_read_unlock();
+- return VCHIQ_SUCCESS;
++ return 0;
+ }
+
+ user_service = (struct user_service *)service->base.userdata;
+
+ if (!instance || instance->closing) {
+ rcu_read_unlock();
+- return VCHIQ_SUCCESS;
++ return 0;
+ }
+
+ /*
+@@ -1097,7 +1096,7 @@ service_callback(struct vchiq_instance *instance, enum vchiq_reason reason,
+ DEBUG_TRACE(SERVICE_CALLBACK_LINE);
+ status = add_completion(instance, reason, NULL, user_service,
+ bulk_userdata);
+- if (status != VCHIQ_SUCCESS) {
++ if (status) {
+ DEBUG_TRACE(SERVICE_CALLBACK_LINE);
+ vchiq_service_put(service);
+ return status;
+@@ -1109,12 +1108,12 @@ service_callback(struct vchiq_instance *instance, enum vchiq_reason reason,
+ vchiq_log_info(vchiq_arm_log_level, "%s interrupted", __func__);
+ DEBUG_TRACE(SERVICE_CALLBACK_LINE);
+ vchiq_service_put(service);
+- return VCHIQ_RETRY;
++ return -EAGAIN;
+ } else if (instance->closing) {
+ vchiq_log_info(vchiq_arm_log_level, "%s closing", __func__);
+ DEBUG_TRACE(SERVICE_CALLBACK_LINE);
+ vchiq_service_put(service);
+- return VCHIQ_ERROR;
++ return -EINVAL;
+ }
+ DEBUG_TRACE(SERVICE_CALLBACK_LINE);
+ spin_lock(&msg_queue_spinlock);
+@@ -1145,7 +1144,7 @@ service_callback(struct vchiq_instance *instance, enum vchiq_reason reason,
+ vchiq_service_put(service);
+
+ if (skip_completion)
+- return VCHIQ_SUCCESS;
++ return 0;
+
+ return add_completion(instance, reason, header, user_service,
+ bulk_userdata);
+@@ -1337,14 +1336,14 @@ vchiq_keepalive_thread_func(void *v)
+ }
+
+ status = vchiq_connect(instance);
+- if (status != VCHIQ_SUCCESS) {
++ if (status) {
+ vchiq_log_error(vchiq_susp_log_level, "%s vchiq_connect failed %d", __func__,
+ status);
+ goto shutdown;
+ }
+
+ status = vchiq_add_service(instance, ¶ms, &ka_handle);
+- if (status != VCHIQ_SUCCESS) {
++ if (status) {
+ vchiq_log_error(vchiq_susp_log_level, "%s vchiq_open_service failed %d", __func__,
+ status);
+ goto shutdown;
+@@ -1373,14 +1372,14 @@ vchiq_keepalive_thread_func(void *v)
+ while (uc--) {
+ atomic_inc(&arm_state->ka_use_ack_count);
+ status = vchiq_use_service(instance, ka_handle);
+- if (status != VCHIQ_SUCCESS) {
++ if (status) {
+ vchiq_log_error(vchiq_susp_log_level,
+ "%s vchiq_use_service error %d", __func__, status);
+ }
+ }
+ while (rc--) {
+ status = vchiq_release_service(instance, ka_handle);
+- if (status != VCHIQ_SUCCESS) {
++ if (status) {
+ vchiq_log_error(vchiq_susp_log_level,
+ "%s vchiq_release_service error %d", __func__,
+ status);
+@@ -1433,13 +1432,13 @@ vchiq_use_internal(struct vchiq_state *state, struct vchiq_service *service,
+ write_unlock_bh(&arm_state->susp_res_lock);
+
+ if (!ret) {
+- enum vchiq_status status = VCHIQ_SUCCESS;
++ int status = 0;
+ long ack_cnt = atomic_xchg(&arm_state->ka_use_ack_count, 0);
+
+- while (ack_cnt && (status == VCHIQ_SUCCESS)) {
++ while (ack_cnt && !status) {
+ /* Send the use notify to videocore */
+ status = vchiq_send_remote_use_active(state);
+- if (status == VCHIQ_SUCCESS)
++ if (!status)
+ ack_cnt--;
+ else
+ atomic_add(ack_cnt, &arm_state->ka_use_ack_count);
+@@ -1577,7 +1576,7 @@ vchiq_instance_set_trace(struct vchiq_instance *instance, int trace)
+ enum vchiq_status
+ vchiq_use_service(struct vchiq_instance *instance, unsigned int handle)
+ {
+- enum vchiq_status ret = VCHIQ_ERROR;
++ int ret = -EINVAL;
+ struct vchiq_service *service = find_service_by_handle(instance, handle);
+
+ if (service) {
+@@ -1591,7 +1590,7 @@ EXPORT_SYMBOL(vchiq_use_service);
+ enum vchiq_status
+ vchiq_release_service(struct vchiq_instance *instance, unsigned int handle)
+ {
+- enum vchiq_status ret = VCHIQ_ERROR;
++ int ret = -EINVAL;
+ struct vchiq_service *service = find_service_by_handle(instance, handle);
+
+ if (service) {
+@@ -1686,7 +1685,7 @@ enum vchiq_status
+ vchiq_check_service(struct vchiq_service *service)
+ {
+ struct vchiq_arm_state *arm_state;
+- enum vchiq_status ret = VCHIQ_ERROR;
++ int ret = -EINVAL;
+
+ if (!service || !service->state)
+ goto out;
+@@ -1695,10 +1694,10 @@ vchiq_check_service(struct vchiq_service *service)
+
+ read_lock_bh(&arm_state->susp_res_lock);
+ if (service->service_use_count)
+- ret = VCHIQ_SUCCESS;
++ ret = 0;
+ read_unlock_bh(&arm_state->susp_res_lock);
+
+- if (ret == VCHIQ_ERROR) {
++ if (ret) {
+ vchiq_log_error(vchiq_susp_log_level,
+ "%s ERROR - %c%c%c%c:%d service count %d, state count %d", __func__,
+ VCHIQ_FOURCC_AS_4CHARS(service->base.fourcc), service->client_id,
+diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c
+index 45ed30bfdbf561..da85f9d165c70a 100644
+--- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c
++++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c
+@@ -467,18 +467,18 @@ static inline enum vchiq_status
+ make_service_callback(struct vchiq_service *service, enum vchiq_reason reason,
+ struct vchiq_header *header, void *bulk_userdata)
+ {
+- enum vchiq_status status;
++ int status;
+
+ vchiq_log_trace(vchiq_core_log_level, "%d: callback:%d (%s, %pK, %pK)",
+ service->state->id, service->localport, reason_names[reason],
+ header, bulk_userdata);
+ status = service->base.callback(service->instance, reason, header, service->handle,
+ bulk_userdata);
+- if (status == VCHIQ_ERROR) {
++ if (status && (status != -EAGAIN)) {
+ vchiq_log_warning(vchiq_core_log_level,
+ "%d: ignoring ERROR from callback to service %x",
+ service->state->id, service->handle);
+- status = VCHIQ_SUCCESS;
++ status = 0;
+ }
+
+ if (reason != VCHIQ_MESSAGE_AVAILABLE)
+@@ -922,7 +922,7 @@ queue_message(struct vchiq_state *state, struct vchiq_service *service,
+
+ if (!(flags & QMFLAGS_NO_MUTEX_LOCK) &&
+ mutex_lock_killable(&state->slot_mutex))
+- return VCHIQ_RETRY;
++ return -EAGAIN;
+
+ if (type == VCHIQ_MSG_DATA) {
+ int tx_end_index;
+@@ -930,7 +930,7 @@ queue_message(struct vchiq_state *state, struct vchiq_service *service,
+ if (!service) {
+ WARN(1, "%s: service is NULL\n", __func__);
+ mutex_unlock(&state->slot_mutex);
+- return VCHIQ_ERROR;
++ return -EINVAL;
+ }
+
+ WARN_ON(flags & (QMFLAGS_NO_MUTEX_LOCK |
+@@ -939,7 +939,7 @@ queue_message(struct vchiq_state *state, struct vchiq_service *service,
+ if (service->closing) {
+ /* The service has been closed */
+ mutex_unlock(&state->slot_mutex);
+- return VCHIQ_ERROR;
++ return -EHOSTDOWN;
+ }
+
+ quota = &state->service_quotas[service->localport];
+@@ -963,7 +963,7 @@ queue_message(struct vchiq_state *state, struct vchiq_service *service,
+ mutex_unlock(&state->slot_mutex);
+
+ if (wait_for_completion_interruptible(&state->data_quota_event))
+- return VCHIQ_RETRY;
++ return -EAGAIN;
+
+ mutex_lock(&state->slot_mutex);
+ spin_lock("a_spinlock);
+@@ -987,15 +987,15 @@ queue_message(struct vchiq_state *state, struct vchiq_service *service,
+ VCHIQ_SERVICE_STATS_INC(service, quota_stalls);
+ mutex_unlock(&state->slot_mutex);
+ if (wait_for_completion_interruptible("a->quota_event))
+- return VCHIQ_RETRY;
++ return -EAGAIN;
+ if (service->closing)
+- return VCHIQ_ERROR;
++ return -EHOSTDOWN;
+ if (mutex_lock_killable(&state->slot_mutex))
+- return VCHIQ_RETRY;
++ return -EAGAIN;
+ if (service->srvstate != VCHIQ_SRVSTATE_OPEN) {
+ /* The service has been closed */
+ mutex_unlock(&state->slot_mutex);
+- return VCHIQ_ERROR;
++ return -EHOSTDOWN;
+ }
+ spin_lock("a_spinlock);
+ tx_end_index = SLOT_QUEUE_INDEX_FROM_POS(state->local_tx_pos + stride - 1);
+@@ -1015,7 +1015,7 @@ queue_message(struct vchiq_state *state, struct vchiq_service *service,
+ */
+ if (!(flags & QMFLAGS_NO_MUTEX_LOCK))
+ mutex_unlock(&state->slot_mutex);
+- return VCHIQ_RETRY;
++ return -EAGAIN;
+ }
+
+ if (type == VCHIQ_MSG_DATA) {
+@@ -1037,7 +1037,7 @@ queue_message(struct vchiq_state *state, struct vchiq_service *service,
+ if (callback_result < 0) {
+ mutex_unlock(&state->slot_mutex);
+ VCHIQ_SERVICE_STATS_INC(service, error_count);
+- return VCHIQ_ERROR;
++ return -EINVAL;
+ }
+
+ if (SRVTRACE_ENABLED(service,
+@@ -1135,7 +1135,7 @@ queue_message(struct vchiq_state *state, struct vchiq_service *service,
+
+ remote_event_signal(&state->remote->trigger);
+
+- return VCHIQ_SUCCESS;
++ return 0;
+ }
+
+ /* Called by the slot handler and application threads */
+@@ -1154,7 +1154,7 @@ queue_message_sync(struct vchiq_state *state, struct vchiq_service *service,
+
+ if (VCHIQ_MSG_TYPE(msgid) != VCHIQ_MSG_RESUME &&
+ mutex_lock_killable(&state->sync_mutex))
+- return VCHIQ_RETRY;
++ return -EAGAIN;
+
+ remote_event_wait(&state->sync_release_event, &local->sync_release);
+
+@@ -1185,7 +1185,7 @@ queue_message_sync(struct vchiq_state *state, struct vchiq_service *service,
+ if (callback_result < 0) {
+ mutex_unlock(&state->slot_mutex);
+ VCHIQ_SERVICE_STATS_INC(service, error_count);
+- return VCHIQ_ERROR;
++ return -EINVAL;
+ }
+
+ if (service) {
+@@ -1223,7 +1223,7 @@ queue_message_sync(struct vchiq_state *state, struct vchiq_service *service,
+ if (VCHIQ_MSG_TYPE(msgid) != VCHIQ_MSG_PAUSE)
+ mutex_unlock(&state->sync_mutex);
+
+- return VCHIQ_SUCCESS;
++ return 0;
+ }
+
+ static inline void
+@@ -1303,7 +1303,7 @@ static enum vchiq_status
+ notify_bulks(struct vchiq_service *service, struct vchiq_bulk_queue *queue,
+ int retry_poll)
+ {
+- enum vchiq_status status = VCHIQ_SUCCESS;
++ int status = 0;
+
+ vchiq_log_trace(vchiq_core_log_level, "%d: nb:%d %cx - p=%x rn=%x r=%x", service->state->id,
+ service->localport, (queue == &service->bulk_tx) ? 't' : 'r',
+@@ -1348,7 +1348,7 @@ notify_bulks(struct vchiq_service *service, struct vchiq_bulk_queue *queue,
+ get_bulk_reason(bulk);
+ status = make_service_callback(service, reason, NULL,
+ bulk->userdata);
+- if (status == VCHIQ_RETRY)
++ if (status == -EAGAIN)
+ break;
+ }
+ }
+@@ -1357,9 +1357,9 @@ notify_bulks(struct vchiq_service *service, struct vchiq_bulk_queue *queue,
+ complete(&service->bulk_remove_event);
+ }
+ if (!retry_poll)
+- status = VCHIQ_SUCCESS;
++ status = 0;
+
+- if (status == VCHIQ_RETRY)
++ if (status == -EAGAIN)
+ request_poll(service->state, service, (queue == &service->bulk_tx) ?
+ VCHIQ_POLL_TXNOTIFY : VCHIQ_POLL_RXNOTIFY);
+
+@@ -1398,13 +1398,12 @@ poll_services_of_group(struct vchiq_state *state, int group)
+ */
+ service->public_fourcc = VCHIQ_FOURCC_INVALID;
+
+- if (vchiq_close_service_internal(service, NO_CLOSE_RECVD) !=
+- VCHIQ_SUCCESS)
++ if (vchiq_close_service_internal(service, NO_CLOSE_RECVD))
+ request_poll(state, service, VCHIQ_POLL_REMOVE);
+ } else if (service_flags & BIT(VCHIQ_POLL_TERMINATE)) {
+ vchiq_log_info(vchiq_core_log_level, "%d: ps - terminate %d<->%d",
+ state->id, service->localport, service->remoteport);
+- if (vchiq_close_service_internal(service, NO_CLOSE_RECVD) != VCHIQ_SUCCESS)
++ if (vchiq_close_service_internal(service, NO_CLOSE_RECVD))
+ request_poll(state, service, VCHIQ_POLL_TERMINATE);
+ }
+ if (service_flags & BIT(VCHIQ_POLL_TXNOTIFY))
+@@ -1527,14 +1526,14 @@ parse_open(struct vchiq_state *state, struct vchiq_header *header)
+ /* Acknowledge the OPEN */
+ if (service->sync) {
+ if (queue_message_sync(state, NULL, openack_id, memcpy_copy_callback,
+- &ack_payload, sizeof(ack_payload), 0) == VCHIQ_RETRY)
++ &ack_payload, sizeof(ack_payload), 0) == -EAGAIN)
+ goto bail_not_ready;
+
+ /* The service is now open */
+ set_service_state(service, VCHIQ_SRVSTATE_OPENSYNC);
+ } else {
+ if (queue_message(state, NULL, openack_id, memcpy_copy_callback,
+- &ack_payload, sizeof(ack_payload), 0) == VCHIQ_RETRY)
++ &ack_payload, sizeof(ack_payload), 0) == -EAGAIN)
+ goto bail_not_ready;
+
+ /* The service is now open */
+@@ -1549,7 +1548,7 @@ parse_open(struct vchiq_state *state, struct vchiq_header *header)
+ fail_open:
+ /* No available service, or an invalid request - send a CLOSE */
+ if (queue_message(state, NULL, MAKE_CLOSE(0, VCHIQ_MSG_SRCPORT(msgid)),
+- NULL, NULL, 0, 0) == VCHIQ_RETRY)
++ NULL, NULL, 0, 0) == -EAGAIN)
+ goto bail_not_ready;
+
+ return 1;
+@@ -1688,7 +1687,7 @@ parse_message(struct vchiq_state *state, struct vchiq_header *header)
+
+ mark_service_closing_internal(service, 1);
+
+- if (vchiq_close_service_internal(service, CLOSE_RECVD) == VCHIQ_RETRY)
++ if (vchiq_close_service_internal(service, CLOSE_RECVD) == -EAGAIN)
+ goto bail_not_ready;
+
+ vchiq_log_info(vchiq_core_log_level, "Close Service %c%c%c%c s:%u d:%d",
+@@ -1705,7 +1704,7 @@ parse_message(struct vchiq_state *state, struct vchiq_header *header)
+ claim_slot(state->rx_info);
+ DEBUG_TRACE(PARSE_LINE);
+ if (make_service_callback(service, VCHIQ_MESSAGE_AVAILABLE, header,
+- NULL) == VCHIQ_RETRY) {
++ NULL) == -EAGAIN) {
+ DEBUG_TRACE(PARSE_LINE);
+ goto bail_not_ready;
+ }
+@@ -1803,7 +1802,7 @@ parse_message(struct vchiq_state *state, struct vchiq_header *header)
+ if (state->conn_state != VCHIQ_CONNSTATE_PAUSE_SENT) {
+ /* Send a PAUSE in response */
+ if (queue_message(state, NULL, MAKE_PAUSE, NULL, NULL, 0,
+- QMFLAGS_NO_MUTEX_UNLOCK) == VCHIQ_RETRY)
++ QMFLAGS_NO_MUTEX_UNLOCK) == -EAGAIN)
+ goto bail_not_ready;
+ }
+ /* At this point slot_mutex is held */
+@@ -1920,7 +1919,7 @@ handle_poll(struct vchiq_state *state)
+
+ case VCHIQ_CONNSTATE_PAUSING:
+ if (queue_message(state, NULL, MAKE_PAUSE, NULL, NULL, 0,
+- QMFLAGS_NO_MUTEX_UNLOCK) != VCHIQ_RETRY) {
++ QMFLAGS_NO_MUTEX_UNLOCK) != -EAGAIN) {
+ vchiq_set_conn_state(state, VCHIQ_CONNSTATE_PAUSE_SENT);
+ } else {
+ /* Retry later */
+@@ -1930,7 +1929,7 @@ handle_poll(struct vchiq_state *state)
+
+ case VCHIQ_CONNSTATE_RESUMING:
+ if (queue_message(state, NULL, MAKE_RESUME, NULL, NULL, 0,
+- QMFLAGS_NO_MUTEX_LOCK) != VCHIQ_RETRY) {
++ QMFLAGS_NO_MUTEX_LOCK) != -EAGAIN) {
+ vchiq_set_conn_state(state, VCHIQ_CONNSTATE_CONNECTED);
+ } else {
+ /*
+@@ -2086,9 +2085,9 @@ sync_func(void *v)
+ if ((service->remoteport == remoteport) &&
+ (service->srvstate == VCHIQ_SRVSTATE_OPENSYNC)) {
+ if (make_service_callback(service, VCHIQ_MESSAGE_AVAILABLE, header,
+- NULL) == VCHIQ_RETRY)
++ NULL) == -EAGAIN)
+ vchiq_log_error(vchiq_sync_log_level,
+- "synchronous callback to service %d returns VCHIQ_RETRY",
++ "synchronous callback to service %d returns -EAGAIN",
+ localport);
+ }
+ break;
+@@ -2495,7 +2494,7 @@ vchiq_open_service_internal(struct vchiq_service *service, int client_id)
+ service->version,
+ service->version_min
+ };
+- enum vchiq_status status = VCHIQ_SUCCESS;
++ int status = 0;
+
+ service->client_id = client_id;
+ vchiq_use_service_internal(service);
+@@ -2506,12 +2505,12 @@ vchiq_open_service_internal(struct vchiq_service *service, int client_id)
+ sizeof(payload),
+ QMFLAGS_IS_BLOCKING);
+
+- if (status != VCHIQ_SUCCESS)
++ if (status)
+ return status;
+
+ /* Wait for the ACK/NAK */
+ if (wait_for_completion_interruptible(&service->remove_event)) {
+- status = VCHIQ_RETRY;
++ status = -EAGAIN;
+ vchiq_release_service_internal(service);
+ } else if ((service->srvstate != VCHIQ_SRVSTATE_OPEN) &&
+ (service->srvstate != VCHIQ_SRVSTATE_OPENSYNC)) {
+@@ -2521,7 +2520,7 @@ vchiq_open_service_internal(struct vchiq_service *service, int client_id)
+ service->state->id,
+ srvstate_names[service->srvstate],
+ kref_read(&service->ref_count));
+- status = VCHIQ_ERROR;
++ status = -EINVAL;
+ VCHIQ_SERVICE_STATS_INC(service, error_count);
+ vchiq_release_service_internal(service);
+ }
+@@ -2602,11 +2601,11 @@ do_abort_bulks(struct vchiq_service *service)
+ mutex_unlock(&service->bulk_mutex);
+
+ status = notify_bulks(service, &service->bulk_tx, NO_RETRY_POLL);
+- if (status != VCHIQ_SUCCESS)
++ if (status)
+ return 0;
+
+ status = notify_bulks(service, &service->bulk_rx, NO_RETRY_POLL);
+- return (status == VCHIQ_SUCCESS);
++ return !status;
+ }
+
+ static enum vchiq_status
+@@ -2639,12 +2638,12 @@ close_service_complete(struct vchiq_service *service, int failstate)
+ vchiq_log_error(vchiq_core_log_level, "%s(%x) called in state %s", __func__,
+ service->handle, srvstate_names[service->srvstate]);
+ WARN(1, "%s in unexpected state\n", __func__);
+- return VCHIQ_ERROR;
++ return -EINVAL;
+ }
+
+ status = make_service_callback(service, VCHIQ_SERVICE_CLOSED, NULL, NULL);
+
+- if (status != VCHIQ_RETRY) {
++ if (status != -EAGAIN) {
+ int uc = service->service_use_count;
+ int i;
+ /* Complete the close process */
+@@ -2678,7 +2677,7 @@ enum vchiq_status
+ vchiq_close_service_internal(struct vchiq_service *service, int close_recvd)
+ {
+ struct vchiq_state *state = service->state;
+- enum vchiq_status status = VCHIQ_SUCCESS;
++ int status = 0;
+ int is_server = (service->public_fourcc != VCHIQ_FOURCC_INVALID);
+ int close_id = MAKE_CLOSE(service->localport,
+ VCHIQ_MSG_DSTPORT(service->remoteport));
+@@ -2696,7 +2695,7 @@ vchiq_close_service_internal(struct vchiq_service *service, int close_recvd)
+ __func__, srvstate_names[service->srvstate]);
+ } else if (is_server) {
+ if (service->srvstate == VCHIQ_SRVSTATE_LISTENING) {
+- status = VCHIQ_ERROR;
++ status = -EINVAL;
+ } else {
+ service->client_id = 0;
+ service->remoteport = VCHIQ_PORT_FREE;
+@@ -2725,16 +2724,16 @@ vchiq_close_service_internal(struct vchiq_service *service, int close_recvd)
+ case VCHIQ_SRVSTATE_OPEN:
+ if (close_recvd) {
+ if (!do_abort_bulks(service))
+- status = VCHIQ_RETRY;
++ status = -EAGAIN;
+ }
+
+ release_service_messages(service);
+
+- if (status == VCHIQ_SUCCESS)
++ if (!status)
+ status = queue_message(state, service, close_id, NULL,
+ NULL, 0, QMFLAGS_NO_MUTEX_UNLOCK);
+
+- if (status != VCHIQ_SUCCESS) {
++ if (status) {
+ if (service->srvstate == VCHIQ_SRVSTATE_OPENSYNC)
+ mutex_unlock(&state->sync_mutex);
+ break;
+@@ -2764,11 +2763,11 @@ vchiq_close_service_internal(struct vchiq_service *service, int close_recvd)
+ break;
+
+ if (!do_abort_bulks(service)) {
+- status = VCHIQ_RETRY;
++ status = -EAGAIN;
+ break;
+ }
+
+- if (status == VCHIQ_SUCCESS)
++ if (!status)
+ status = close_service_complete(service, VCHIQ_SRVSTATE_CLOSERECVD);
+ break;
+
+@@ -2848,21 +2847,21 @@ vchiq_connect_internal(struct vchiq_state *state, struct vchiq_instance *instanc
+
+ if (state->conn_state == VCHIQ_CONNSTATE_DISCONNECTED) {
+ if (queue_message(state, NULL, MAKE_CONNECT, NULL, NULL, 0,
+- QMFLAGS_IS_BLOCKING) == VCHIQ_RETRY)
+- return VCHIQ_RETRY;
++ QMFLAGS_IS_BLOCKING) == -EAGAIN)
++ return -EAGAIN;
+
+ vchiq_set_conn_state(state, VCHIQ_CONNSTATE_CONNECTING);
+ }
+
+ if (state->conn_state == VCHIQ_CONNSTATE_CONNECTING) {
+ if (wait_for_completion_interruptible(&state->connect))
+- return VCHIQ_RETRY;
++ return -EAGAIN;
+
+ vchiq_set_conn_state(state, VCHIQ_CONNSTATE_CONNECTED);
+ complete(&state->connect);
+ }
+
+- return VCHIQ_SUCCESS;
++ return 0;
+ }
+
+ void
+@@ -2884,10 +2883,10 @@ vchiq_close_service(struct vchiq_instance *instance, unsigned int handle)
+ {
+ /* Unregister the service */
+ struct vchiq_service *service = find_service_by_handle(instance, handle);
+- enum vchiq_status status = VCHIQ_SUCCESS;
++ int status = 0;
+
+ if (!service)
+- return VCHIQ_ERROR;
++ return -EINVAL;
+
+ vchiq_log_info(vchiq_core_log_level, "%d: close_service:%d",
+ service->state->id, service->localport);
+@@ -2896,14 +2895,14 @@ vchiq_close_service(struct vchiq_instance *instance, unsigned int handle)
+ (service->srvstate == VCHIQ_SRVSTATE_LISTENING) ||
+ (service->srvstate == VCHIQ_SRVSTATE_HIDDEN)) {
+ vchiq_service_put(service);
+- return VCHIQ_ERROR;
++ return -EINVAL;
+ }
+
+ mark_service_closing(service);
+
+ if (current == service->state->slot_handler_thread) {
+ status = vchiq_close_service_internal(service, NO_CLOSE_RECVD);
+- WARN_ON(status == VCHIQ_RETRY);
++ WARN_ON(status == -EAGAIN);
+ } else {
+ /* Mark the service for termination by the slot handler */
+ request_poll(service->state, service, VCHIQ_POLL_TERMINATE);
+@@ -2911,7 +2910,7 @@ vchiq_close_service(struct vchiq_instance *instance, unsigned int handle)
+
+ while (1) {
+ if (wait_for_completion_interruptible(&service->remove_event)) {
+- status = VCHIQ_RETRY;
++ status = -EAGAIN;
+ break;
+ }
+
+@@ -2926,10 +2925,10 @@ vchiq_close_service(struct vchiq_instance *instance, unsigned int handle)
+ srvstate_names[service->srvstate]);
+ }
+
+- if ((status == VCHIQ_SUCCESS) &&
++ if (!status &&
+ (service->srvstate != VCHIQ_SRVSTATE_FREE) &&
+ (service->srvstate != VCHIQ_SRVSTATE_LISTENING))
+- status = VCHIQ_ERROR;
++ status = -EINVAL;
+
+ vchiq_service_put(service);
+
+@@ -2942,17 +2941,17 @@ vchiq_remove_service(struct vchiq_instance *instance, unsigned int handle)
+ {
+ /* Unregister the service */
+ struct vchiq_service *service = find_service_by_handle(instance, handle);
+- enum vchiq_status status = VCHIQ_SUCCESS;
++ int status = 0;
+
+ if (!service)
+- return VCHIQ_ERROR;
++ return -EINVAL;
+
+ vchiq_log_info(vchiq_core_log_level, "%d: remove_service:%d",
+ service->state->id, service->localport);
+
+ if (service->srvstate == VCHIQ_SRVSTATE_FREE) {
+ vchiq_service_put(service);
+- return VCHIQ_ERROR;
++ return -EINVAL;
+ }
+
+ mark_service_closing(service);
+@@ -2966,14 +2965,14 @@ vchiq_remove_service(struct vchiq_instance *instance, unsigned int handle)
+ service->public_fourcc = VCHIQ_FOURCC_INVALID;
+
+ status = vchiq_close_service_internal(service, NO_CLOSE_RECVD);
+- WARN_ON(status == VCHIQ_RETRY);
++ WARN_ON(status == -EAGAIN);
+ } else {
+ /* Mark the service for removal by the slot handler */
+ request_poll(service->state, service, VCHIQ_POLL_REMOVE);
+ }
+ while (1) {
+ if (wait_for_completion_interruptible(&service->remove_event)) {
+- status = VCHIQ_RETRY;
++ status = -EAGAIN;
+ break;
+ }
+
+@@ -2987,9 +2986,8 @@ vchiq_remove_service(struct vchiq_instance *instance, unsigned int handle)
+ srvstate_names[service->srvstate]);
+ }
+
+- if ((status == VCHIQ_SUCCESS) &&
+- (service->srvstate != VCHIQ_SRVSTATE_FREE))
+- status = VCHIQ_ERROR;
++ if (!status && (service->srvstate != VCHIQ_SRVSTATE_FREE))
++ status = -EINVAL;
+
+ vchiq_service_put(service);
+
+@@ -2998,7 +2996,7 @@ vchiq_remove_service(struct vchiq_instance *instance, unsigned int handle)
+
+ /*
+ * This function may be called by kernel threads or user threads.
+- * User threads may receive VCHIQ_RETRY to indicate that a signal has been
++ * User threads may receive -EAGAIN to indicate that a signal has been
+ * received and the call should be retried after being returned to user
+ * context.
+ * When called in blocking mode, the userdata field points to a bulk_waiter
+@@ -3016,7 +3014,7 @@ enum vchiq_status vchiq_bulk_transfer(struct vchiq_instance *instance, unsigned
+ const char dir_char = (dir == VCHIQ_BULK_TRANSMIT) ? 't' : 'r';
+ const int dir_msgtype = (dir == VCHIQ_BULK_TRANSMIT) ?
+ VCHIQ_MSG_BULK_TX : VCHIQ_MSG_BULK_RX;
+- enum vchiq_status status = VCHIQ_ERROR;
++ int status = -EINVAL;
+ int payload[2];
+
+ if (!service)
+@@ -3028,7 +3026,7 @@ enum vchiq_status vchiq_bulk_transfer(struct vchiq_instance *instance, unsigned
+ if (!offset && !uoffset)
+ goto error_exit;
+
+- if (vchiq_check_service(service) != VCHIQ_SUCCESS)
++ if (vchiq_check_service(service))
+ goto error_exit;
+
+ switch (mode) {
+@@ -3055,7 +3053,7 @@ enum vchiq_status vchiq_bulk_transfer(struct vchiq_instance *instance, unsigned
+ &service->bulk_tx : &service->bulk_rx;
+
+ if (mutex_lock_killable(&service->bulk_mutex)) {
+- status = VCHIQ_RETRY;
++ status = -EAGAIN;
+ goto error_exit;
+ }
+
+@@ -3064,11 +3062,11 @@ enum vchiq_status vchiq_bulk_transfer(struct vchiq_instance *instance, unsigned
+ do {
+ mutex_unlock(&service->bulk_mutex);
+ if (wait_for_completion_interruptible(&service->bulk_remove_event)) {
+- status = VCHIQ_RETRY;
++ status = -EAGAIN;
+ goto error_exit;
+ }
+ if (mutex_lock_killable(&service->bulk_mutex)) {
+- status = VCHIQ_RETRY;
++ status = -EAGAIN;
+ goto error_exit;
+ }
+ } while (queue->local_insert == queue->remove +
+@@ -3101,7 +3099,7 @@ enum vchiq_status vchiq_bulk_transfer(struct vchiq_instance *instance, unsigned
+ * claim it here to ensure that isn't happening
+ */
+ if (mutex_lock_killable(&state->slot_mutex)) {
+- status = VCHIQ_RETRY;
++ status = -EAGAIN;
+ goto cancel_bulk_error_exit;
+ }
+
+@@ -3121,7 +3119,7 @@ enum vchiq_status vchiq_bulk_transfer(struct vchiq_instance *instance, unsigned
+ QMFLAGS_IS_BLOCKING |
+ QMFLAGS_NO_MUTEX_LOCK |
+ QMFLAGS_NO_MUTEX_UNLOCK);
+- if (status != VCHIQ_SUCCESS)
++ if (status)
+ goto unlock_both_error_exit;
+
+ queue->local_insert++;
+@@ -3136,14 +3134,14 @@ enum vchiq_status vchiq_bulk_transfer(struct vchiq_instance *instance, unsigned
+ waiting:
+ vchiq_service_put(service);
+
+- status = VCHIQ_SUCCESS;
++ status = 0;
+
+ if (bulk_waiter) {
+ bulk_waiter->bulk = bulk;
+ if (wait_for_completion_interruptible(&bulk_waiter->event))
+- status = VCHIQ_RETRY;
++ status = -EAGAIN;
+ else if (bulk_waiter->actual == VCHIQ_BULK_ACTUAL_ABORTED)
+- status = VCHIQ_ERROR;
++ status = -EINVAL;
+ }
+
+ return status;
+@@ -3169,13 +3167,13 @@ vchiq_queue_message(struct vchiq_instance *instance, unsigned int handle,
+ size_t size)
+ {
+ struct vchiq_service *service = find_service_by_handle(instance, handle);
+- enum vchiq_status status = VCHIQ_ERROR;
++ int status = -EINVAL;
+ int data_id;
+
+ if (!service)
+ goto error_exit;
+
+- if (vchiq_check_service(service) != VCHIQ_SUCCESS)
++ if (vchiq_check_service(service))
+ goto error_exit;
+
+ if (!size) {
+@@ -3200,7 +3198,7 @@ vchiq_queue_message(struct vchiq_instance *instance, unsigned int handle,
+ copy_callback, context, size, 1);
+ break;
+ default:
+- status = VCHIQ_ERROR;
++ status = -EINVAL;
+ break;
+ }
+
+@@ -3221,11 +3219,11 @@ int vchiq_queue_kernel_message(struct vchiq_instance *instance, unsigned int han
+ data, size);
+
+ /*
+- * vchiq_queue_message() may return VCHIQ_RETRY, so we need to
++ * vchiq_queue_message() may return -EAGAIN, so we need to
+ * implement a retry mechanism since this function is supposed
+ * to block until queued
+ */
+- if (status != VCHIQ_RETRY)
++ if (status != -EAGAIN)
+ break;
+
+ msleep(1);
+@@ -3280,20 +3278,20 @@ release_message_sync(struct vchiq_state *state, struct vchiq_header *header)
+ enum vchiq_status
+ vchiq_get_peer_version(struct vchiq_instance *instance, unsigned int handle, short *peer_version)
+ {
+- enum vchiq_status status = VCHIQ_ERROR;
++ int status = -EINVAL;
+ struct vchiq_service *service = find_service_by_handle(instance, handle);
+
+ if (!service)
+ goto exit;
+
+- if (vchiq_check_service(service) != VCHIQ_SUCCESS)
++ if (vchiq_check_service(service))
+ goto exit;
+
+ if (!peer_version)
+ goto exit;
+
+ *peer_version = service->peer_version;
+- status = VCHIQ_SUCCESS;
++ status = 0;
+
+ exit:
+ if (service)
+diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_dev.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_dev.c
+index 7e297494437e1b..841e1a535642a4 100644
+--- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_dev.c
++++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_dev.c
+@@ -112,7 +112,7 @@ vchiq_ioc_queue_message(struct vchiq_instance *instance, unsigned int handle,
+ struct vchiq_element *elements, unsigned long count)
+ {
+ struct vchiq_io_copy_callback_context context;
+- enum vchiq_status status = VCHIQ_SUCCESS;
++ int status = 0;
+ unsigned long i;
+ size_t total_size = 0;
+
+@@ -130,9 +130,9 @@ vchiq_ioc_queue_message(struct vchiq_instance *instance, unsigned int handle,
+ status = vchiq_queue_message(instance, handle, vchiq_ioc_copy_element_data,
+ &context, total_size);
+
+- if (status == VCHIQ_ERROR)
++ if (status == -EINVAL)
+ return -EIO;
+- else if (status == VCHIQ_RETRY)
++ else if (status == -EAGAIN)
+ return -EINTR;
+ return 0;
+ }
+@@ -142,7 +142,7 @@ static int vchiq_ioc_create_service(struct vchiq_instance *instance,
+ {
+ struct user_service *user_service = NULL;
+ struct vchiq_service *service;
+- enum vchiq_status status = VCHIQ_SUCCESS;
++ int status = 0;
+ struct vchiq_service_params_kernel params;
+ int srvstate;
+
+@@ -190,9 +190,9 @@ static int vchiq_ioc_create_service(struct vchiq_instance *instance,
+
+ if (args->is_open) {
+ status = vchiq_open_service_internal(service, instance->pid);
+- if (status != VCHIQ_SUCCESS) {
++ if (status) {
+ vchiq_remove_service(instance, service->handle);
+- return (status == VCHIQ_RETRY) ?
++ return (status == -EAGAIN) ?
+ -EINTR : -EIO;
+ }
+ }
+@@ -338,7 +338,7 @@ static int vchiq_irq_queue_bulk_tx_rx(struct vchiq_instance *instance,
+ goto out;
+ }
+
+- if ((status != VCHIQ_RETRY) || fatal_signal_pending(current) ||
++ if ((status != -EAGAIN) || fatal_signal_pending(current) ||
+ !waiter->bulk_waiter.bulk) {
+ if (waiter->bulk_waiter.bulk) {
+ /* Cancel the signal when the transfer completes. */
+@@ -364,9 +364,9 @@ static int vchiq_irq_queue_bulk_tx_rx(struct vchiq_instance *instance,
+ vchiq_service_put(service);
+ if (ret)
+ return ret;
+- else if (status == VCHIQ_ERROR)
++ else if (status == -EINVAL)
+ return -EIO;
+- else if (status == VCHIQ_RETRY)
++ else if (status == -EAGAIN)
+ return -EINTR;
+ return 0;
+ }
+@@ -577,7 +577,7 @@ static long
+ vchiq_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ {
+ struct vchiq_instance *instance = file->private_data;
+- enum vchiq_status status = VCHIQ_SUCCESS;
++ int status = 0;
+ struct vchiq_service *service = NULL;
+ long ret = 0;
+ int i, rc;
+@@ -598,12 +598,12 @@ vchiq_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ instance, &i))) {
+ status = vchiq_remove_service(instance, service->handle);
+ vchiq_service_put(service);
+- if (status != VCHIQ_SUCCESS)
++ if (status)
+ break;
+ }
+ service = NULL;
+
+- if (status == VCHIQ_SUCCESS) {
++ if (!status) {
+ /* Wake the completion thread and ask it to exit */
+ instance->closing = 1;
+ complete(&instance->insert_event);
+@@ -627,7 +627,7 @@ vchiq_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ status = vchiq_connect_internal(instance->state, instance);
+ mutex_unlock(&instance->state->mutex);
+
+- if (status == VCHIQ_SUCCESS)
++ if (!status)
+ instance->connected = 1;
+ else
+ vchiq_log_error(vchiq_arm_log_level,
+@@ -675,7 +675,7 @@ vchiq_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ status = (cmd == VCHIQ_IOC_CLOSE_SERVICE) ?
+ vchiq_close_service(instance, service->handle) :
+ vchiq_remove_service(instance, service->handle);
+- if (status != VCHIQ_SUCCESS)
++ if (status)
+ break;
+ }
+
+@@ -686,7 +686,7 @@ vchiq_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ */
+ if (user_service->close_pending &&
+ wait_for_completion_interruptible(&user_service->close_event))
+- status = VCHIQ_RETRY;
++ status = -EAGAIN;
+ break;
+ }
+
+@@ -862,13 +862,13 @@ vchiq_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ vchiq_service_put(service);
+
+ if (ret == 0) {
+- if (status == VCHIQ_ERROR)
++ if (status == -EINVAL)
+ ret = -EIO;
+- else if (status == VCHIQ_RETRY)
++ else if (status == -EAGAIN)
+ ret = -EINTR;
+ }
+
+- if ((status == VCHIQ_SUCCESS) && (ret < 0) && (ret != -EINTR) && (ret != -EWOULDBLOCK))
++ if (!status && (ret < 0) && (ret != -EINTR) && (ret != -EWOULDBLOCK))
+ vchiq_log_info(vchiq_arm_log_level,
+ " ioctl instance %pK, cmd %s -> status %d, %ld",
+ instance, (_IOC_NR(cmd) <= VCHIQ_IOC_MAX) ?
+diff --git a/drivers/staging/vc04_services/vchiq-mmal/mmal-vchiq.c b/drivers/staging/vc04_services/vchiq-mmal/mmal-vchiq.c
+index 90eb4c5936f382..e6dea0c8eecd2c 100644
+--- a/drivers/staging/vc04_services/vchiq-mmal/mmal-vchiq.c
++++ b/drivers/staging/vc04_services/vchiq-mmal/mmal-vchiq.c
+@@ -560,7 +560,7 @@ static enum vchiq_status service_callback(struct vchiq_instance *vchiq_instance,
+
+ if (!instance) {
+ pr_err("Message callback passed NULL instance\n");
+- return VCHIQ_SUCCESS;
++ return 0;
+ }
+
+ switch (reason) {
+@@ -644,7 +644,7 @@ static enum vchiq_status service_callback(struct vchiq_instance *vchiq_instance,
+ break;
+ }
+
+- return VCHIQ_SUCCESS;
++ return 0;
+ }
+
+ static int send_synchronous_mmal_msg(struct vchiq_mmal_instance *instance,
+diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
+index dc17ae1dfe260e..f9adb11067470e 100644
+--- a/drivers/ufs/core/ufshcd.c
++++ b/drivers/ufs/core/ufshcd.c
+@@ -4125,7 +4125,7 @@ static int ufshcd_uic_pwr_ctrl(struct ufs_hba *hba, struct uic_command *cmd)
+ hba->uic_async_done = NULL;
+ if (reenable_intr)
+ ufshcd_enable_intr(hba, UIC_COMMAND_COMPL);
+- if (ret) {
++ if (ret && !hba->pm_op_in_progress) {
+ ufshcd_set_link_broken(hba);
+ ufshcd_schedule_eh_work(hba);
+ }
+@@ -4133,6 +4133,14 @@ static int ufshcd_uic_pwr_ctrl(struct ufs_hba *hba, struct uic_command *cmd)
+ spin_unlock_irqrestore(hba->host->host_lock, flags);
+ mutex_unlock(&hba->uic_cmd_mutex);
+
++ /*
++ * If the h8 exit fails during the runtime resume process, it becomes
++ * stuck and cannot be recovered through the error handler. To fix
++ * this, use link recovery instead of the error handler.
++ */
++ if (ret && hba->pm_op_in_progress)
++ ret = ufshcd_link_recovery(hba);
++
+ return ret;
+ }
+
+diff --git a/drivers/usb/chipidea/ci.h b/drivers/usb/chipidea/ci.h
+index 2ff83911219f85..de23e10470d387 100644
+--- a/drivers/usb/chipidea/ci.h
++++ b/drivers/usb/chipidea/ci.h
+@@ -278,8 +278,19 @@ static inline int ci_role_start(struct ci_hdrc *ci, enum ci_role role)
+ return -ENXIO;
+
+ ret = ci->roles[role]->start(ci);
+- if (!ret)
+- ci->role = role;
++ if (ret)
++ return ret;
++
++ ci->role = role;
++
++ if (ci->usb_phy) {
++ if (role == CI_ROLE_HOST)
++ usb_phy_set_event(ci->usb_phy, USB_EVENT_ID);
++ else
++			/* in device mode but vbus is invalid */
++ usb_phy_set_event(ci->usb_phy, USB_EVENT_NONE);
++ }
++
+ return ret;
+ }
+
+@@ -293,6 +304,9 @@ static inline void ci_role_stop(struct ci_hdrc *ci)
+ ci->role = CI_ROLE_END;
+
+ ci->roles[role]->stop(ci);
++
++ if (ci->usb_phy)
++ usb_phy_set_event(ci->usb_phy, USB_EVENT_NONE);
+ }
+
+ static inline enum usb_role ci_role_to_usb_role(struct ci_hdrc *ci)
+diff --git a/drivers/usb/chipidea/udc.c b/drivers/usb/chipidea/udc.c
+index 3795c70a31555c..e7a02d9e1c079b 100644
+--- a/drivers/usb/chipidea/udc.c
++++ b/drivers/usb/chipidea/udc.c
+@@ -1724,6 +1724,13 @@ static int ci_udc_vbus_session(struct usb_gadget *_gadget, int is_active)
+ ret = ci->platdata->notify_event(ci,
+ CI_HDRC_CONTROLLER_VBUS_EVENT);
+
++ if (ci->usb_phy) {
++ if (is_active)
++ usb_phy_set_event(ci->usb_phy, USB_EVENT_VBUS);
++ else
++ usb_phy_set_event(ci->usb_phy, USB_EVENT_NONE);
++ }
++
+ if (ci->driver)
+ ci_hdrc_gadget_connect(_gadget, is_active);
+
+@@ -2040,6 +2047,9 @@ static irqreturn_t udc_irq(struct ci_hdrc *ci)
+ if (USBi_PCI & intr) {
+ ci->gadget.speed = hw_port_is_high_speed(ci) ?
+ USB_SPEED_HIGH : USB_SPEED_FULL;
++ if (ci->usb_phy)
++ usb_phy_set_event(ci->usb_phy,
++ USB_EVENT_ENUMERATED);
+ if (ci->suspended) {
+ if (ci->driver->resume) {
+ spin_unlock(&ci->lock);
+diff --git a/drivers/usb/early/xhci-dbc.c b/drivers/usb/early/xhci-dbc.c
+index 7ef0a4b397620c..a4e1390ed4fd73 100644
+--- a/drivers/usb/early/xhci-dbc.c
++++ b/drivers/usb/early/xhci-dbc.c
+@@ -682,6 +682,10 @@ int __init early_xdbc_setup_hardware(void)
+
+ xdbc.table_base = NULL;
+ xdbc.out_buf = NULL;
++
++ early_iounmap(xdbc.xhci_base, xdbc.xhci_length);
++ xdbc.xhci_base = NULL;
++ xdbc.xhci_length = 0;
+ }
+
+ return ret;
+diff --git a/drivers/usb/gadget/composite.c b/drivers/usb/gadget/composite.c
+index 87404340763da5..c64e7d30db7d2f 100644
+--- a/drivers/usb/gadget/composite.c
++++ b/drivers/usb/gadget/composite.c
+@@ -2366,6 +2366,11 @@ int composite_os_desc_req_prepare(struct usb_composite_dev *cdev,
+ if (!cdev->os_desc_req->buf) {
+ ret = -ENOMEM;
+ usb_ep_free_request(ep0, cdev->os_desc_req);
++ /*
++ * Set os_desc_req to NULL so that composite_dev_cleanup()
++ * will not try to free it again.
++ */
++ cdev->os_desc_req = NULL;
+ goto end;
+ }
+ cdev->os_desc_req->context = cdev;
+diff --git a/drivers/usb/host/xhci-plat.c b/drivers/usb/host/xhci-plat.c
+index 6704bd76e157eb..7ec4c38c3ceec4 100644
+--- a/drivers/usb/host/xhci-plat.c
++++ b/drivers/usb/host/xhci-plat.c
+@@ -184,7 +184,7 @@ static int xhci_plat_probe(struct platform_device *pdev)
+ int ret;
+ int irq;
+ struct xhci_plat_priv *priv = NULL;
+- bool of_match;
++ const struct of_device_id *of_match;
+
+ if (usb_disabled())
+ return -ENODEV;
+diff --git a/drivers/usb/misc/apple-mfi-fastcharge.c b/drivers/usb/misc/apple-mfi-fastcharge.c
+index ac8695195c13c8..8e852f4b8262e6 100644
+--- a/drivers/usb/misc/apple-mfi-fastcharge.c
++++ b/drivers/usb/misc/apple-mfi-fastcharge.c
+@@ -44,6 +44,7 @@ MODULE_DEVICE_TABLE(usb, mfi_fc_id_table);
+ struct mfi_device {
+ struct usb_device *udev;
+ struct power_supply *battery;
++ struct power_supply_desc battery_desc;
+ int charge_type;
+ };
+
+@@ -178,6 +179,7 @@ static int mfi_fc_probe(struct usb_device *udev)
+ {
+ struct power_supply_config battery_cfg = {};
+ struct mfi_device *mfi = NULL;
++ char *battery_name;
+ int err;
+
+ if (!mfi_fc_match(udev))
+@@ -187,23 +189,38 @@ static int mfi_fc_probe(struct usb_device *udev)
+ if (!mfi)
+ return -ENOMEM;
+
++ battery_name = kasprintf(GFP_KERNEL, "apple_mfi_fastcharge_%d-%d",
++ udev->bus->busnum, udev->devnum);
++ if (!battery_name) {
++ err = -ENOMEM;
++ goto err_free_mfi;
++ }
++
++ mfi->battery_desc = apple_mfi_fc_desc;
++ mfi->battery_desc.name = battery_name;
++
+ battery_cfg.drv_data = mfi;
+
+ mfi->charge_type = POWER_SUPPLY_CHARGE_TYPE_TRICKLE;
+ mfi->battery = power_supply_register(&udev->dev,
+- &apple_mfi_fc_desc,
++ &mfi->battery_desc,
+ &battery_cfg);
+ if (IS_ERR(mfi->battery)) {
+ dev_err(&udev->dev, "Can't register battery\n");
+ err = PTR_ERR(mfi->battery);
+- kfree(mfi);
+- return err;
++ goto err_free_name;
+ }
+
+ mfi->udev = usb_get_dev(udev);
+ dev_set_drvdata(&udev->dev, mfi);
+
+ return 0;
++
++err_free_name:
++ kfree(battery_name);
++err_free_mfi:
++ kfree(mfi);
++ return err;
+ }
+
+ static void mfi_fc_disconnect(struct usb_device *udev)
+@@ -213,6 +230,7 @@ static void mfi_fc_disconnect(struct usb_device *udev)
+ mfi = dev_get_drvdata(&udev->dev);
+ if (mfi->battery)
+ power_supply_unregister(mfi->battery);
++ kfree(mfi->battery_desc.name);
+ dev_set_drvdata(&udev->dev, NULL);
+ usb_put_dev(mfi->udev);
+ kfree(mfi);
+diff --git a/drivers/usb/phy/phy-mxs-usb.c b/drivers/usb/phy/phy-mxs-usb.c
+index 160c9264339f0b..f3075ff6cd20f1 100644
+--- a/drivers/usb/phy/phy-mxs-usb.c
++++ b/drivers/usb/phy/phy-mxs-usb.c
+@@ -394,6 +394,7 @@ static bool mxs_phy_is_otg_host(struct mxs_phy *mxs_phy)
+ static void mxs_phy_disconnect_line(struct mxs_phy *mxs_phy, bool on)
+ {
+ bool vbus_is_on = false;
++ enum usb_phy_events last_event = mxs_phy->phy.last_event;
+
+ /* If the SoCs don't need to disconnect line without vbus, quit */
+ if (!(mxs_phy->data->flags & MXS_PHY_DISCONNECT_LINE_WITHOUT_VBUS))
+@@ -405,7 +406,8 @@ static void mxs_phy_disconnect_line(struct mxs_phy *mxs_phy, bool on)
+
+ vbus_is_on = mxs_phy_get_vbus_status(mxs_phy);
+
+- if (on && !vbus_is_on && !mxs_phy_is_otg_host(mxs_phy))
++ if (on && ((!vbus_is_on && !mxs_phy_is_otg_host(mxs_phy))
++ || (last_event == USB_EVENT_VBUS)))
+ __mxs_phy_disconnect_line(mxs_phy, true);
+ else
+ __mxs_phy_disconnect_line(mxs_phy, false);
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 1c31ae9fd162f6..2a3bf8718efcad 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -2346,6 +2346,8 @@ static const struct usb_device_id option_ids[] = {
+ .driver_info = RSVD(3) },
+ { USB_DEVICE_INTERFACE_CLASS(0x0489, 0xe145, 0xff), /* Foxconn T99W651 RNDIS */
+ .driver_info = RSVD(5) | RSVD(6) },
++ { USB_DEVICE_INTERFACE_CLASS(0x0489, 0xe15f, 0xff), /* Foxconn T99W709 */
++ .driver_info = RSVD(5) },
+ { USB_DEVICE_INTERFACE_CLASS(0x0489, 0xe167, 0xff), /* Foxconn T99W640 MBIM */
+ .driver_info = RSVD(3) },
+ { USB_DEVICE(0x1508, 0x1001), /* Fibocom NL668 (IOT version) */
+diff --git a/drivers/usb/typec/tcpm/tcpm.c b/drivers/usb/typec/tcpm/tcpm.c
+index f40eabb7e24f7a..9d8fcfac57610c 100644
+--- a/drivers/usb/typec/tcpm/tcpm.c
++++ b/drivers/usb/typec/tcpm/tcpm.c
+@@ -1032,7 +1032,7 @@ static int tcpm_set_attached_state(struct tcpm_port *port, bool attached)
+ port->data_role);
+ }
+
+-static int tcpm_set_roles(struct tcpm_port *port, bool attached,
++static int tcpm_set_roles(struct tcpm_port *port, bool attached, int state,
+ enum typec_role role, enum typec_data_role data)
+ {
+ enum typec_orientation orientation;
+@@ -1069,7 +1069,7 @@ static int tcpm_set_roles(struct tcpm_port *port, bool attached,
+ }
+ }
+
+- ret = tcpm_mux_set(port, TYPEC_STATE_USB, usb_role, orientation);
++ ret = tcpm_mux_set(port, state, usb_role, orientation);
+ if (ret < 0)
+ return ret;
+
+@@ -3686,16 +3686,6 @@ static int tcpm_src_attach(struct tcpm_port *port)
+
+ tcpm_enable_auto_vbus_discharge(port, true);
+
+- ret = tcpm_set_roles(port, true, TYPEC_SOURCE, tcpm_data_role_for_source(port));
+- if (ret < 0)
+- return ret;
+-
+- if (port->pd_supported) {
+- ret = port->tcpc->set_pd_rx(port->tcpc, true);
+- if (ret < 0)
+- goto out_disable_mux;
+- }
+-
+ /*
+ * USB Type-C specification, version 1.2,
+ * chapter 4.5.2.2.8.1 (Attached.SRC Requirements)
+@@ -3705,13 +3695,24 @@ static int tcpm_src_attach(struct tcpm_port *port)
+ (polarity == TYPEC_POLARITY_CC2 && port->cc1 == TYPEC_CC_RA)) {
+ ret = tcpm_set_vconn(port, true);
+ if (ret < 0)
+- goto out_disable_pd;
++ return ret;
+ }
+
+ ret = tcpm_set_vbus(port, true);
+ if (ret < 0)
+ goto out_disable_vconn;
+
++ ret = tcpm_set_roles(port, true, TYPEC_STATE_USB, TYPEC_SOURCE,
++ tcpm_data_role_for_source(port));
++ if (ret < 0)
++ goto out_disable_vbus;
++
++ if (port->pd_supported) {
++ ret = port->tcpc->set_pd_rx(port->tcpc, true);
++ if (ret < 0)
++ goto out_disable_mux;
++ }
++
+ port->pd_capable = false;
+
+ port->partner = NULL;
+@@ -3721,14 +3722,14 @@ static int tcpm_src_attach(struct tcpm_port *port)
+
+ return 0;
+
+-out_disable_vconn:
+- tcpm_set_vconn(port, false);
+-out_disable_pd:
+- if (port->pd_supported)
+- port->tcpc->set_pd_rx(port->tcpc, false);
+ out_disable_mux:
+ tcpm_mux_set(port, TYPEC_STATE_SAFE, USB_ROLE_NONE,
+ TYPEC_ORIENTATION_NONE);
++out_disable_vbus:
++ tcpm_set_vbus(port, false);
++out_disable_vconn:
++ tcpm_set_vconn(port, false);
++
+ return ret;
+ }
+
+@@ -3844,7 +3845,8 @@ static int tcpm_snk_attach(struct tcpm_port *port)
+
+ tcpm_enable_auto_vbus_discharge(port, true);
+
+- ret = tcpm_set_roles(port, true, TYPEC_SINK, tcpm_data_role_for_sink(port));
++ ret = tcpm_set_roles(port, true, TYPEC_STATE_USB,
++ TYPEC_SINK, tcpm_data_role_for_sink(port));
+ if (ret < 0)
+ return ret;
+
+@@ -3866,12 +3868,24 @@ static void tcpm_snk_detach(struct tcpm_port *port)
+ static int tcpm_acc_attach(struct tcpm_port *port)
+ {
+ int ret;
++ enum typec_role role;
++ enum typec_data_role data;
++ int state = TYPEC_STATE_USB;
+
+ if (port->attached)
+ return 0;
+
+- ret = tcpm_set_roles(port, true, TYPEC_SOURCE,
+- tcpm_data_role_for_source(port));
++ role = tcpm_port_is_sink(port) ? TYPEC_SINK : TYPEC_SOURCE;
++ data = tcpm_port_is_sink(port) ? tcpm_data_role_for_sink(port)
++ : tcpm_data_role_for_source(port);
++
++ if (tcpm_port_is_audio(port))
++ state = TYPEC_MODE_AUDIO;
++
++ if (tcpm_port_is_debug(port))
++ state = TYPEC_MODE_DEBUG;
++
++ ret = tcpm_set_roles(port, true, state, role, data);
+ if (ret < 0)
+ return ret;
+
+@@ -4551,7 +4565,7 @@ static void run_state_machine(struct tcpm_port *port)
+ */
+ tcpm_set_vconn(port, false);
+ tcpm_set_vbus(port, false);
+- tcpm_set_roles(port, port->self_powered, TYPEC_SOURCE,
++ tcpm_set_roles(port, port->self_powered, TYPEC_STATE_USB, TYPEC_SOURCE,
+ tcpm_data_role_for_source(port));
+ /*
+ * If tcpc fails to notify vbus off, TCPM will wait for PD_T_SAFE_0V +
+@@ -4583,7 +4597,7 @@ static void run_state_machine(struct tcpm_port *port)
+ tcpm_set_vconn(port, false);
+ if (port->pd_capable)
+ tcpm_set_charge(port, false);
+- tcpm_set_roles(port, port->self_powered, TYPEC_SINK,
++ tcpm_set_roles(port, port->self_powered, TYPEC_STATE_USB, TYPEC_SINK,
+ tcpm_data_role_for_sink(port));
+ /*
+ * VBUS may or may not toggle, depending on the adapter.
+@@ -4688,10 +4702,10 @@ static void run_state_machine(struct tcpm_port *port)
+ case DR_SWAP_CHANGE_DR:
+ tcpm_unregister_altmodes(port);
+ if (port->data_role == TYPEC_HOST)
+- tcpm_set_roles(port, true, port->pwr_role,
++ tcpm_set_roles(port, true, TYPEC_STATE_USB, port->pwr_role,
+ TYPEC_DEVICE);
+ else
+- tcpm_set_roles(port, true, port->pwr_role,
++ tcpm_set_roles(port, true, TYPEC_STATE_USB, port->pwr_role,
+ TYPEC_HOST);
+ tcpm_ams_finish(port);
+ tcpm_set_state(port, ready_state(port), 0);
+diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c
+index aa362b434413a9..13c223228c31e0 100644
+--- a/drivers/vfio/pci/vfio_pci_core.c
++++ b/drivers/vfio/pci/vfio_pci_core.c
+@@ -2144,7 +2144,7 @@ int vfio_pci_core_register_device(struct vfio_pci_core_device *vdev)
+ return -EBUSY;
+ }
+
+- if (pci_is_root_bus(pdev->bus)) {
++ if (pci_is_root_bus(pdev->bus) || pdev->is_virtfn) {
+ ret = vfio_assign_device_set(&vdev->vdev, vdev);
+ } else if (!pci_probe_reset_slot(pdev->slot)) {
+ ret = vfio_assign_device_set(&vdev->vdev, pdev->slot);
+diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
+index 87f2f56fd20abd..de6f108a50a9d6 100644
+--- a/drivers/vhost/scsi.c
++++ b/drivers/vhost/scsi.c
+@@ -913,10 +913,8 @@ vhost_scsi_get_req(struct vhost_virtqueue *vq, struct vhost_scsi_ctx *vc,
+ /* validated at handler entry */
+ vs_tpg = vhost_vq_get_backend(vq);
+ tpg = READ_ONCE(vs_tpg[*vc->target]);
+- if (unlikely(!tpg)) {
+- vq_err(vq, "Target 0x%x does not exist\n", *vc->target);
++ if (unlikely(!tpg))
+ goto out;
+- }
+ }
+
+ if (tpgp)
+diff --git a/drivers/video/fbdev/core/fbcon.c b/drivers/video/fbdev/core/fbcon.c
+index 1a17274187112d..194889e1cc34e8 100644
+--- a/drivers/video/fbdev/core/fbcon.c
++++ b/drivers/video/fbdev/core/fbcon.c
+@@ -935,13 +935,13 @@ static const char *fbcon_startup(void)
+ int rows, cols;
+
+ /*
+- * If num_registered_fb is zero, this is a call for the dummy part.
++ * If fbcon_num_registered_fb is zero, this is a call for the dummy part.
+ * The frame buffer devices weren't initialized yet.
+ */
+ if (!fbcon_num_registered_fb || info_idx == -1)
+ return display_desc;
+ /*
+- * Instead of blindly using registered_fb[0], we use info_idx, set by
++ * Instead of blindly using fbcon_registered_fb[0], we use info_idx, set by
+ * fbcon_fb_registered();
+ */
+ info = fbcon_registered_fb[info_idx];
+diff --git a/drivers/video/fbdev/imxfb.c b/drivers/video/fbdev/imxfb.c
+index 32b8374abeca5e..3770225a0b9084 100644
+--- a/drivers/video/fbdev/imxfb.c
++++ b/drivers/video/fbdev/imxfb.c
+@@ -1011,8 +1011,13 @@ static int imxfb_probe(struct platform_device *pdev)
+ info->fix.smem_start = fbi->map_dma;
+
+ INIT_LIST_HEAD(&info->modelist);
+- for (i = 0; i < fbi->num_modes; i++)
+- fb_add_videomode(&fbi->mode[i].mode, &info->modelist);
++ for (i = 0; i < fbi->num_modes; i++) {
++ ret = fb_add_videomode(&fbi->mode[i].mode, &info->modelist);
++ if (ret) {
++ dev_err(&pdev->dev, "Failed to add videomode\n");
++ goto failed_cmap;
++ }
++ }
+
+ /*
+ * This makes sure that our colour bitfield
+diff --git a/drivers/watchdog/ziirave_wdt.c b/drivers/watchdog/ziirave_wdt.c
+index d0e88875443ae9..06d59805c9c09d 100644
+--- a/drivers/watchdog/ziirave_wdt.c
++++ b/drivers/watchdog/ziirave_wdt.c
+@@ -302,6 +302,9 @@ static int ziirave_firm_verify(struct watchdog_device *wdd,
+ const u16 len = be16_to_cpu(rec->len);
+ const u32 addr = be32_to_cpu(rec->addr);
+
++ if (len > sizeof(data))
++ return -EINVAL;
++
+ if (ziirave_firm_addr_readonly(addr))
+ continue;
+
+diff --git a/drivers/xen/gntdev-common.h b/drivers/xen/gntdev-common.h
+index 9c286b2a190016..ac8ce3179ba2e9 100644
+--- a/drivers/xen/gntdev-common.h
++++ b/drivers/xen/gntdev-common.h
+@@ -26,6 +26,10 @@ struct gntdev_priv {
+ /* lock protects maps and freeable_maps. */
+ struct mutex lock;
+
++ /* Free instances of struct gntdev_copy_batch. */
++ struct gntdev_copy_batch *batch;
++ struct mutex batch_lock;
++
+ #ifdef CONFIG_XEN_GRANT_DMA_ALLOC
+ /* Device for which DMA memory is allocated. */
+ struct device *dma_dev;
+diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
+index 4d9a3050de6a3f..de8a36502aa2a0 100644
+--- a/drivers/xen/gntdev.c
++++ b/drivers/xen/gntdev.c
+@@ -56,6 +56,18 @@ MODULE_AUTHOR("Derek G. Murray <Derek.Murray@cl.cam.ac.uk>, "
+ "Gerd Hoffmann <kraxel@redhat.com>");
+ MODULE_DESCRIPTION("User-space granted page access driver");
+
++#define GNTDEV_COPY_BATCH 16
++
++struct gntdev_copy_batch {
++ struct gnttab_copy ops[GNTDEV_COPY_BATCH];
++ struct page *pages[GNTDEV_COPY_BATCH];
++ s16 __user *status[GNTDEV_COPY_BATCH];
++ unsigned int nr_ops;
++ unsigned int nr_pages;
++ bool writeable;
++ struct gntdev_copy_batch *next;
++};
++
+ static unsigned int limit = 64*1024;
+ module_param(limit, uint, 0644);
+ MODULE_PARM_DESC(limit,
+@@ -584,6 +596,8 @@ static int gntdev_open(struct inode *inode, struct file *flip)
+ INIT_LIST_HEAD(&priv->maps);
+ mutex_init(&priv->lock);
+
++ mutex_init(&priv->batch_lock);
++
+ #ifdef CONFIG_XEN_GNTDEV_DMABUF
+ priv->dmabuf_priv = gntdev_dmabuf_init(flip);
+ if (IS_ERR(priv->dmabuf_priv)) {
+@@ -608,6 +622,7 @@ static int gntdev_release(struct inode *inode, struct file *flip)
+ {
+ struct gntdev_priv *priv = flip->private_data;
+ struct gntdev_grant_map *map;
++ struct gntdev_copy_batch *batch;
+
+ pr_debug("priv %p\n", priv);
+
+@@ -620,6 +635,14 @@ static int gntdev_release(struct inode *inode, struct file *flip)
+ }
+ mutex_unlock(&priv->lock);
+
++ mutex_lock(&priv->batch_lock);
++ while (priv->batch) {
++ batch = priv->batch;
++ priv->batch = batch->next;
++ kfree(batch);
++ }
++ mutex_unlock(&priv->batch_lock);
++
+ #ifdef CONFIG_XEN_GNTDEV_DMABUF
+ gntdev_dmabuf_fini(priv->dmabuf_priv);
+ #endif
+@@ -785,17 +808,6 @@ static long gntdev_ioctl_notify(struct gntdev_priv *priv, void __user *u)
+ return rc;
+ }
+
+-#define GNTDEV_COPY_BATCH 16
+-
+-struct gntdev_copy_batch {
+- struct gnttab_copy ops[GNTDEV_COPY_BATCH];
+- struct page *pages[GNTDEV_COPY_BATCH];
+- s16 __user *status[GNTDEV_COPY_BATCH];
+- unsigned int nr_ops;
+- unsigned int nr_pages;
+- bool writeable;
+-};
+-
+ static int gntdev_get_page(struct gntdev_copy_batch *batch, void __user *virt,
+ unsigned long *gfn)
+ {
+@@ -953,36 +965,53 @@ static int gntdev_grant_copy_seg(struct gntdev_copy_batch *batch,
+ static long gntdev_ioctl_grant_copy(struct gntdev_priv *priv, void __user *u)
+ {
+ struct ioctl_gntdev_grant_copy copy;
+- struct gntdev_copy_batch batch;
++ struct gntdev_copy_batch *batch;
+ unsigned int i;
+ int ret = 0;
+
+	if (copy_from_user(&copy, u, sizeof(copy)))
+ return -EFAULT;
+
+- batch.nr_ops = 0;
+- batch.nr_pages = 0;
++ mutex_lock(&priv->batch_lock);
++ if (!priv->batch) {
++ batch = kmalloc(sizeof(*batch), GFP_KERNEL);
++ } else {
++ batch = priv->batch;
++ priv->batch = batch->next;
++ }
++ mutex_unlock(&priv->batch_lock);
++ if (!batch)
++ return -ENOMEM;
++
++ batch->nr_ops = 0;
++ batch->nr_pages = 0;
+
+ for (i = 0; i < copy.count; i++) {
+ struct gntdev_grant_copy_segment seg;
+
+		if (copy_from_user(&seg, &copy.segments[i], sizeof(seg))) {
+ ret = -EFAULT;
++ gntdev_put_pages(batch);
+ goto out;
+ }
+
+-		ret = gntdev_grant_copy_seg(&batch, &seg, &copy.segments[i].status);
+- if (ret < 0)
++		ret = gntdev_grant_copy_seg(batch, &seg, &copy.segments[i].status);
++ if (ret < 0) {
++ gntdev_put_pages(batch);
+ goto out;
++ }
+
+ cond_resched();
+ }
+- if (batch.nr_ops)
+- ret = gntdev_copy(&batch);
+- return ret;
++ if (batch->nr_ops)
++ ret = gntdev_copy(batch);
++
++ out:
++ mutex_lock(&priv->batch_lock);
++ batch->next = priv->batch;
++ priv->batch = batch;
++ mutex_unlock(&priv->batch_lock);
+
+- out:
+- gntdev_put_pages(&batch);
+ return ret;
+ }
+
+diff --git a/fs/erofs/decompressor.c b/fs/erofs/decompressor.c
+index 0eaa9e495346d3..e524c0b432f393 100644
+--- a/fs/erofs/decompressor.c
++++ b/fs/erofs/decompressor.c
+@@ -323,7 +323,7 @@ static int z_erofs_transform_plain(struct z_erofs_decompress_req *rq,
+ const unsigned int lefthalf = rq->outputsize - righthalf;
+ const unsigned int interlaced_offset =
+ rq->alg == Z_EROFS_COMPRESSION_SHIFTED ? 0 : rq->pageofs_out;
+- unsigned char *src, *dst;
++ u8 *src;
+
+ if (outpages > 2 && rq->alg == Z_EROFS_COMPRESSION_SHIFTED) {
+ DBG_BUGON(1);
+@@ -336,23 +336,18 @@ static int z_erofs_transform_plain(struct z_erofs_decompress_req *rq,
+ }
+
+ src = kmap_local_page(rq->in[inpages - 1]) + rq->pageofs_in;
+- if (rq->out[0]) {
+- dst = kmap_local_page(rq->out[0]);
+- memcpy(dst + rq->pageofs_out, src + interlaced_offset,
+- righthalf);
+- kunmap_local(dst);
+- }
++ if (rq->out[0])
++ memcpy_to_page(rq->out[0], rq->pageofs_out,
++ src + interlaced_offset, righthalf);
+
+ if (outpages > inpages) {
+ DBG_BUGON(!rq->out[outpages - 1]);
+- if (rq->out[outpages - 1] != rq->in[inpages - 1]) {
+- dst = kmap_local_page(rq->out[outpages - 1]);
+- memcpy(dst, interlaced_offset ? src :
+- (src + righthalf), lefthalf);
+- kunmap_local(dst);
+- } else if (!interlaced_offset) {
++ if (rq->out[outpages - 1] != rq->in[inpages - 1])
++ memcpy_to_page(rq->out[outpages - 1], 0, src +
++ (interlaced_offset ? 0 : righthalf),
++ lefthalf);
++ else if (!interlaced_offset)
+ memmove(src, src + righthalf, lefthalf);
+- }
+ }
+ kunmap_local(src);
+ return 0;
+diff --git a/fs/erofs/dir.c b/fs/erofs/dir.c
+index 966a88cc529ebb..963bbed0b69949 100644
+--- a/fs/erofs/dir.c
++++ b/fs/erofs/dir.c
+@@ -6,21 +6,6 @@
+ */
+ #include "internal.h"
+
+-static void debug_one_dentry(unsigned char d_type, const char *de_name,
+- unsigned int de_namelen)
+-{
+-#ifdef CONFIG_EROFS_FS_DEBUG
+- /* since the on-disk name could not have the trailing '\0' */
+- unsigned char dbg_namebuf[EROFS_NAME_LEN + 1];
+-
+- memcpy(dbg_namebuf, de_name, de_namelen);
+- dbg_namebuf[de_namelen] = '\0';
+-
+- erofs_dbg("found dirent %s de_len %u d_type %d", dbg_namebuf,
+- de_namelen, d_type);
+-#endif
+-}
+-
+ static int erofs_fill_dentries(struct inode *dir, struct dir_context *ctx,
+ void *dentry_blk, struct erofs_dirent *de,
+ unsigned int nameoff, unsigned int maxsize)
+@@ -52,10 +37,8 @@ static int erofs_fill_dentries(struct inode *dir, struct dir_context *ctx,
+ return -EFSCORRUPTED;
+ }
+
+- debug_one_dentry(d_type, de_name, de_namelen);
+ if (!dir_emit(ctx, de_name, de_namelen,
+ le64_to_cpu(de->nid), d_type))
+- /* stopped by some reason */
+ return 1;
+ ++de;
+ ctx->pos += sizeof(struct erofs_dirent);
+diff --git a/fs/erofs/inode.c b/fs/erofs/inode.c
+index 7dcf350b9fef9e..3cbef6318b7b1a 100644
+--- a/fs/erofs/inode.c
++++ b/fs/erofs/inode.c
+@@ -26,9 +26,6 @@ static void *erofs_read_inode(struct erofs_buf *buf,
+ blkaddr = erofs_blknr(sb, inode_loc);
+ *ofs = erofs_blkoff(sb, inode_loc);
+
+- erofs_dbg("%s, reading inode nid %llu at %u of blkaddr %u",
+- __func__, vi->nid, *ofs, blkaddr);
+-
+ kaddr = erofs_read_metabuf(buf, sb, blkaddr, EROFS_KMAP);
+ if (IS_ERR(kaddr)) {
+ erofs_err(sb, "failed to get inode (nid: %llu) page, err %ld",
+diff --git a/fs/erofs/internal.h b/fs/erofs/internal.h
+index d7cd1e619d46f2..1269709328056f 100644
+--- a/fs/erofs/internal.h
++++ b/fs/erofs/internal.h
+@@ -32,10 +32,8 @@ __printf(3, 4) void _erofs_info(struct super_block *sb,
+ #define erofs_info(sb, fmt, ...) \
+ _erofs_info(sb, __func__, fmt "\n", ##__VA_ARGS__)
+ #ifdef CONFIG_EROFS_FS_DEBUG
+-#define erofs_dbg(x, ...) pr_debug(x "\n", ##__VA_ARGS__)
+ #define DBG_BUGON BUG_ON
+ #else
+-#define erofs_dbg(x, ...) ((void)0)
+ #define DBG_BUGON(x) ((void)(x))
+ #endif /* !CONFIG_EROFS_FS_DEBUG */
+
+diff --git a/fs/erofs/namei.c b/fs/erofs/namei.c
+index 8332428b780cd3..c0d5ffb62420a3 100644
+--- a/fs/erofs/namei.c
++++ b/fs/erofs/namei.c
+@@ -203,16 +203,13 @@ static struct dentry *erofs_lookup(struct inode *dir, struct dentry *dentry,
+
+ err = erofs_namei(dir, &dentry->d_name, &nid, &d_type);
+
+- if (err == -ENOENT) {
++ if (err == -ENOENT)
+ /* negative dentry */
+ inode = NULL;
+- } else if (err) {
++ else if (err)
+ inode = ERR_PTR(err);
+- } else {
+- erofs_dbg("%s, %pd (nid %llu) found, d_type %u", __func__,
+- dentry, nid, d_type);
++ else
+ inode = erofs_iget(dir->i_sb, nid);
+- }
+ return d_splice_alias(inode, dentry);
+ }
+
+diff --git a/fs/erofs/zdata.c b/fs/erofs/zdata.c
+index 32ca6d3e373abb..5e658021731816 100644
+--- a/fs/erofs/zdata.c
++++ b/fs/erofs/zdata.c
+@@ -123,9 +123,11 @@ static inline unsigned int z_erofs_pclusterpages(struct z_erofs_pcluster *pcl)
+
+ /*
+ * bit 30: I/O error occurred on this page
++ * bit 29: CPU has dirty data in D-cache (needs aliasing handling);
+ * bit 0 - 29: remaining parts to complete this page
+ */
+-#define Z_EROFS_PAGE_EIO (1 << 30)
++#define Z_EROFS_ONLINEPAGE_EIO 30
++#define Z_EROFS_ONLINEPAGE_DIRTY 29
+
+ static inline void z_erofs_onlinepage_init(struct page *page)
+ {
+@@ -144,29 +146,28 @@ static inline void z_erofs_onlinepage_split(struct page *page)
+ atomic_inc((atomic_t *)&page->private);
+ }
+
+-static inline void z_erofs_page_mark_eio(struct page *page)
++static void z_erofs_onlinepage_end(struct page *page, int err, bool dirty)
+ {
+- int orig;
++ int orig, v;
++
++ DBG_BUGON(!PagePrivate(page));
+
+ do {
+ orig = atomic_read((atomic_t *)&page->private);
+- } while (atomic_cmpxchg((atomic_t *)&page->private, orig,
+- orig | Z_EROFS_PAGE_EIO) != orig);
+-}
+-
+-static inline void z_erofs_onlinepage_endio(struct page *page)
+-{
+- unsigned int v;
++ DBG_BUGON(orig <= 0);
++ v = dirty << Z_EROFS_ONLINEPAGE_DIRTY;
++ v |= (orig - 1) | (!!err << Z_EROFS_ONLINEPAGE_EIO);
++ } while (atomic_cmpxchg((atomic_t *)&page->private, orig, v) != orig);
+
+- DBG_BUGON(!PagePrivate(page));
+- v = atomic_dec_return((atomic_t *)&page->private);
+- if (!(v & ~Z_EROFS_PAGE_EIO)) {
+- set_page_private(page, 0);
+- ClearPagePrivate(page);
+- if (!(v & Z_EROFS_PAGE_EIO))
+- SetPageUptodate(page);
+- unlock_page(page);
+- }
++ if (v & (BIT(Z_EROFS_ONLINEPAGE_DIRTY) - 1))
++ return;
++ set_page_private(page, 0);
++ ClearPagePrivate(page);
++ if (v & BIT(Z_EROFS_ONLINEPAGE_DIRTY))
++ flush_dcache_page(page);
++ if (!(v & BIT(Z_EROFS_ONLINEPAGE_EIO)))
++ SetPageUptodate(page);
++ unlock_page(page);
+ }
+
+ #define Z_EROFS_ONSTACK_PAGES 32
+@@ -818,8 +819,6 @@ static int z_erofs_do_read_page(struct z_erofs_decompress_frontend *fe,
+
+ if (offset + cur < map->m_la ||
+ offset + cur >= map->m_la + map->m_llen) {
+- erofs_dbg("out-of-range map @ pos %llu", offset + cur);
+-
+ if (z_erofs_collector_end(fe))
+ fe->backmost = false;
+ map->m_la = offset + cur;
+@@ -932,12 +931,7 @@ static int z_erofs_do_read_page(struct z_erofs_decompress_frontend *fe,
+ goto repeat;
+
+ out:
+- if (err)
+- z_erofs_page_mark_eio(page);
+- z_erofs_onlinepage_endio(page);
+-
+- erofs_dbg("%s, finish page: %pK spiltted: %u map->m_llen %llu",
+- __func__, page, spiltted, map->m_llen);
++ z_erofs_onlinepage_end(page, err, false);
+ return err;
+ }
+
+@@ -1040,9 +1034,7 @@ static void z_erofs_fill_other_copies(struct z_erofs_decompress_backend *be,
+ cur += len;
+ }
+ kunmap_local(dst);
+- if (err)
+- z_erofs_page_mark_eio(bvi->bvec.page);
+- z_erofs_onlinepage_endio(bvi->bvec.page);
++ z_erofs_onlinepage_end(bvi->bvec.page, err, true);
+ list_del(p);
+ kfree(bvi);
+ }
+@@ -1210,9 +1202,7 @@ static int z_erofs_decompress_pcluster(struct z_erofs_decompress_backend *be,
+ /* recycle all individual short-lived pages */
+ if (z_erofs_put_shortlivedpage(be->pagepool, page))
+ continue;
+- if (err)
+- z_erofs_page_mark_eio(page);
+- z_erofs_onlinepage_endio(page);
++ z_erofs_onlinepage_end(page, err, true);
+ }
+
+ if (be->decompressed_pages != be->onstack_pages)
+diff --git a/fs/erofs/zmap.c b/fs/erofs/zmap.c
+index 2cd70cf4c8b270..d2d7fe826091f6 100644
+--- a/fs/erofs/zmap.c
++++ b/fs/erofs/zmap.c
+@@ -603,9 +603,6 @@ static int z_erofs_do_map_blocks(struct inode *inode,
+
+ unmap_out:
+ erofs_unmap_metabuf(&m.map->buf);
+- erofs_dbg("%s, m_la %llu m_pa %llu m_llen %llu m_plen %llu m_flags 0%o",
+- __func__, map->m_la, map->m_pa,
+- map->m_llen, map->m_plen, map->m_flags);
+ return err;
+ }
+
+diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
+index 7b65766d365f1e..dc8f283f210cc6 100644
+--- a/fs/f2fs/data.c
++++ b/fs/f2fs/data.c
+@@ -273,7 +273,7 @@ static void f2fs_read_end_io(struct bio *bio)
+ {
+ struct f2fs_sb_info *sbi = F2FS_P_SB(bio_first_page_all(bio));
+ struct bio_post_read_ctx *ctx;
+- bool intask = in_task();
++ bool intask = in_task() && !irqs_disabled();
+
+ iostat_update_and_unbind_ctx(bio, 0);
+ ctx = bio->bi_private;
+diff --git a/fs/f2fs/extent_cache.c b/fs/f2fs/extent_cache.c
+index f13143efc4b1c2..a5c63c7da29943 100644
+--- a/fs/f2fs/extent_cache.c
++++ b/fs/f2fs/extent_cache.c
+@@ -440,7 +440,7 @@ void f2fs_init_read_extent_tree(struct inode *inode, struct page *ipage)
+ struct f2fs_extent *i_ext = &F2FS_INODE(ipage)->i_ext;
+ struct extent_tree *et;
+ struct extent_node *en;
+- struct extent_info ei;
++ struct extent_info ei = {0};
+
+ if (!__may_extent_tree(inode, EX_READ)) {
+ /* drop largest read extent */
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index ef9149bd398ae9..1ad9669666e8b3 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -1226,7 +1226,7 @@ struct f2fs_bio_info {
+ #define RDEV(i) (raw_super->devs[i])
+ struct f2fs_dev_info {
+ struct block_device *bdev;
+- char path[MAX_PATH_LEN];
++ char path[MAX_PATH_LEN + 1];
+ unsigned int total_segments;
+ block_t start_blk;
+ block_t end_blk;
+diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c
+index b8296b0414fcb9..c02b5ea43f07c4 100644
+--- a/fs/f2fs/inode.c
++++ b/fs/f2fs/inode.c
+@@ -831,6 +831,19 @@ void f2fs_evict_inode(struct inode *inode)
+ f2fs_update_inode_page(inode);
+ if (dquot_initialize_needed(inode))
+ set_sbi_flag(sbi, SBI_QUOTA_NEED_REPAIR);
++
++ /*
++ * If both f2fs_truncate() and f2fs_update_inode_page() failed
++ * due to fuzzed corrupted inode, call f2fs_inode_synced() to
++ * avoid triggering later f2fs_bug_on().
++ */
++ if (is_inode_flag_set(inode, FI_DIRTY_INODE)) {
++ f2fs_warn(sbi,
++ "f2fs_evict_inode: inode is dirty, ino:%lu",
++ inode->i_ino);
++ f2fs_inode_synced(inode);
++ set_sbi_flag(sbi, SBI_NEED_FSCK);
++ }
+ }
+ if (!is_sbi_flag_set(sbi, SBI_IS_FREEZING))
+ sb_end_intwrite(inode->i_sb);
+@@ -847,8 +860,12 @@ void f2fs_evict_inode(struct inode *inode)
+ if (likely(!f2fs_cp_error(sbi) &&
+ !is_sbi_flag_set(sbi, SBI_CP_DISABLED)))
+ f2fs_bug_on(sbi, is_inode_flag_set(inode, FI_DIRTY_INODE));
+- else
+- f2fs_inode_synced(inode);
++
++ /*
++ * anyway, it needs to remove the inode from sbi->inode_list[DIRTY_META]
++ * list to avoid UAF in f2fs_sync_inode_meta() during checkpoint.
++ */
++ f2fs_inode_synced(inode);
+
+ /* for the case f2fs_new_inode() was failed, .i_ino is zero, skip it */
+ if (inode->i_ino)
+diff --git a/fs/f2fs/segment.h b/fs/f2fs/segment.h
+index 5ef5a88f47a0a8..20d4387c661d7b 100644
+--- a/fs/f2fs/segment.h
++++ b/fs/f2fs/segment.h
+@@ -625,8 +625,7 @@ static inline void __get_secs_required(struct f2fs_sb_info *sbi,
+ unsigned int dent_blocks = total_dent_blocks % CAP_BLKS_PER_SEC(sbi);
+ unsigned int data_blocks = 0;
+
+- if (f2fs_lfs_mode(sbi) &&
+- unlikely(is_sbi_flag_set(sbi, SBI_CP_DISABLED))) {
++ if (f2fs_lfs_mode(sbi)) {
+ total_data_blocks = get_pages(sbi, F2FS_DIRTY_DATA);
+ data_secs = total_data_blocks / CAP_BLKS_PER_SEC(sbi);
+ data_blocks = total_data_blocks % CAP_BLKS_PER_SEC(sbi);
+@@ -635,7 +634,7 @@ static inline void __get_secs_required(struct f2fs_sb_info *sbi,
+ if (lower_p)
+ *lower_p = node_secs + dent_secs + data_secs;
+ if (upper_p)
+- *upper_p = node_secs + dent_secs +
++ *upper_p = node_secs + dent_secs + data_secs +
+ (node_blocks ? 1 : 0) + (dent_blocks ? 1 : 0) +
+ (data_blocks ? 1 : 0);
+ if (curseg_p)
+diff --git a/fs/hfsplus/extents.c b/fs/hfsplus/extents.c
+index 91354e769642f8..839bf83448c34d 100644
+--- a/fs/hfsplus/extents.c
++++ b/fs/hfsplus/extents.c
+@@ -342,9 +342,6 @@ static int hfsplus_free_extents(struct super_block *sb,
+ int i;
+ int err = 0;
+
+- /* Mapping the allocation file may lock the extent tree */
+- WARN_ON(mutex_is_locked(&HFSPLUS_SB(sb)->ext_tree->tree_lock));
+-
+ hfsplus_dump_extent(extent);
+ for (i = 0; i < 8; extent++, i++) {
+ count = be32_to_cpu(extent->block_count);
+diff --git a/fs/jfs/jfs_dmap.c b/fs/jfs/jfs_dmap.c
+index 32ae408ee69977..c761291f59ac54 100644
+--- a/fs/jfs/jfs_dmap.c
++++ b/fs/jfs/jfs_dmap.c
+@@ -1809,8 +1809,10 @@ dbAllocCtl(struct bmap * bmp, s64 nblocks, int l2nb, s64 blkno, s64 * results)
+ return -EIO;
+ dp = (struct dmap *) mp->data;
+
+- if (dp->tree.budmin < 0)
++ if (dp->tree.budmin < 0) {
++ release_metapage(mp);
+ return -EIO;
++ }
+
+ /* try to allocate the blocks.
+ */
+diff --git a/fs/jfs/jfs_imap.c b/fs/jfs/jfs_imap.c
+index 9adb29e7862cfc..1f2e452a767644 100644
+--- a/fs/jfs/jfs_imap.c
++++ b/fs/jfs/jfs_imap.c
+@@ -3029,14 +3029,23 @@ static void duplicateIXtree(struct super_block *sb, s64 blkno,
+ *
+ * RETURN VALUES:
+ * 0 - success
+- * -ENOMEM - insufficient memory
++ * -EINVAL - unexpected inode type
+ */
+ static int copy_from_dinode(struct dinode * dip, struct inode *ip)
+ {
+ struct jfs_inode_info *jfs_ip = JFS_IP(ip);
+ struct jfs_sb_info *sbi = JFS_SBI(ip->i_sb);
++ int fileset = le32_to_cpu(dip->di_fileset);
++
++ switch (fileset) {
++ case AGGR_RESERVED_I: case AGGREGATE_I: case BMAP_I:
++ case LOG_I: case BADBLOCK_I: case FILESYSTEM_I:
++ break;
++ default:
++ return -EINVAL;
++ }
+
+- jfs_ip->fileset = le32_to_cpu(dip->di_fileset);
++ jfs_ip->fileset = fileset;
+ jfs_ip->mode2 = le32_to_cpu(dip->di_mode);
+ jfs_set_inode_flags(ip);
+
+diff --git a/fs/nfs/dir.c b/fs/nfs/dir.c
+index 1876978107ca10..3c98049912dfda 100644
+--- a/fs/nfs/dir.c
++++ b/fs/nfs/dir.c
+@@ -1825,9 +1825,7 @@ static void block_revalidate(struct dentry *dentry)
+
+ static void unblock_revalidate(struct dentry *dentry)
+ {
+- /* store_release ensures wait_var_event() sees the update */
+- smp_store_release(&dentry->d_fsdata, NULL);
+- wake_up_var(&dentry->d_fsdata);
++ store_release_wake_up(&dentry->d_fsdata, NULL);
+ }
+
+ /*
+diff --git a/fs/nfs/export.c b/fs/nfs/export.c
+index 9fe9586a51b713..aacf6220ab44e1 100644
+--- a/fs/nfs/export.c
++++ b/fs/nfs/export.c
+@@ -66,14 +66,21 @@ nfs_fh_to_dentry(struct super_block *sb, struct fid *fid,
+ {
+ struct nfs_fattr *fattr = NULL;
+ struct nfs_fh *server_fh = nfs_exp_embedfh(fid->raw);
+- size_t fh_size = offsetof(struct nfs_fh, data) + server_fh->size;
++ size_t fh_size = offsetof(struct nfs_fh, data);
+ const struct nfs_rpc_ops *rpc_ops;
+ struct dentry *dentry;
+ struct inode *inode;
+- int len = EMBED_FH_OFF + XDR_QUADLEN(fh_size);
++ int len = EMBED_FH_OFF;
+ u32 *p = fid->raw;
+ int ret;
+
++ /* Initial check of bounds */
++ if (fh_len < len + XDR_QUADLEN(fh_size) ||
++ fh_len > XDR_QUADLEN(NFS_MAXFHSIZE))
++ return NULL;
++ /* Calculate embedded filehandle size */
++ fh_size += server_fh->size;
++ len += XDR_QUADLEN(fh_size);
+ /* NULL translates to ESTALE */
+ if (fh_len < len || fh_type != len)
+ return NULL;
+diff --git a/fs/nfs/flexfilelayout/flexfilelayout.c b/fs/nfs/flexfilelayout/flexfilelayout.c
+index aa55b5df065bcc..5dd16f4ae74d19 100644
+--- a/fs/nfs/flexfilelayout/flexfilelayout.c
++++ b/fs/nfs/flexfilelayout/flexfilelayout.c
+@@ -745,14 +745,14 @@ ff_layout_choose_ds_for_read(struct pnfs_layout_segment *lseg,
+ {
+ struct nfs4_ff_layout_segment *fls = FF_LAYOUT_LSEG(lseg);
+ struct nfs4_ff_layout_mirror *mirror;
+- struct nfs4_pnfs_ds *ds;
++ struct nfs4_pnfs_ds *ds = ERR_PTR(-EAGAIN);
+ u32 idx;
+
+ /* mirrors are initially sorted by efficiency */
+ for (idx = start_idx; idx < fls->mirror_array_cnt; idx++) {
+ mirror = FF_LAYOUT_COMP(lseg, idx);
+ ds = nfs4_ff_layout_prepare_ds(lseg, mirror, false);
+- if (!ds)
++ if (IS_ERR(ds))
+ continue;
+
+ if (check_device &&
+@@ -760,10 +760,10 @@ ff_layout_choose_ds_for_read(struct pnfs_layout_segment *lseg,
+ continue;
+
+ *best_idx = idx;
+- return ds;
++ break;
+ }
+
+- return NULL;
++ return ds;
+ }
+
+ static struct nfs4_pnfs_ds *
+@@ -933,7 +933,7 @@ ff_layout_pg_init_write(struct nfs_pageio_descriptor *pgio,
+ for (i = 0; i < pgio->pg_mirror_count; i++) {
+ mirror = FF_LAYOUT_COMP(pgio->pg_lseg, i);
+ ds = nfs4_ff_layout_prepare_ds(pgio->pg_lseg, mirror, true);
+- if (!ds) {
++ if (IS_ERR(ds)) {
+ if (!ff_layout_no_fallback_to_mds(pgio->pg_lseg))
+ goto out_mds;
+ pnfs_generic_pg_cleanup(pgio);
+@@ -1839,6 +1839,7 @@ ff_layout_read_pagelist(struct nfs_pgio_header *hdr)
+ u32 idx = hdr->pgio_mirror_idx;
+ int vers;
+ struct nfs_fh *fh;
++ bool ds_fatal_error = false;
+
+ dprintk("--> %s ino %lu pgbase %u req %zu@%llu\n",
+ __func__, hdr->inode->i_ino,
+@@ -1846,8 +1847,10 @@ ff_layout_read_pagelist(struct nfs_pgio_header *hdr)
+
+ mirror = FF_LAYOUT_COMP(lseg, idx);
+ ds = nfs4_ff_layout_prepare_ds(lseg, mirror, false);
+- if (!ds)
++ if (IS_ERR(ds)) {
++ ds_fatal_error = nfs_error_is_fatal(PTR_ERR(ds));
+ goto out_failed;
++ }
+
+ ds_clnt = nfs4_ff_find_or_create_ds_client(mirror, ds->ds_clp,
+ hdr->inode);
+@@ -1888,7 +1891,7 @@ ff_layout_read_pagelist(struct nfs_pgio_header *hdr)
+ return PNFS_ATTEMPTED;
+
+ out_failed:
+- if (ff_layout_avoid_mds_available_ds(lseg))
++ if (ff_layout_avoid_mds_available_ds(lseg) && !ds_fatal_error)
+ return PNFS_TRY_AGAIN;
+ trace_pnfs_mds_fallback_read_pagelist(hdr->inode,
+ hdr->args.offset, hdr->args.count,
+@@ -1909,11 +1912,14 @@ ff_layout_write_pagelist(struct nfs_pgio_header *hdr, int sync)
+ int vers;
+ struct nfs_fh *fh;
+ u32 idx = hdr->pgio_mirror_idx;
++ bool ds_fatal_error = false;
+
+ mirror = FF_LAYOUT_COMP(lseg, idx);
+ ds = nfs4_ff_layout_prepare_ds(lseg, mirror, true);
+- if (!ds)
++ if (IS_ERR(ds)) {
++ ds_fatal_error = nfs_error_is_fatal(PTR_ERR(ds));
+ goto out_failed;
++ }
+
+ ds_clnt = nfs4_ff_find_or_create_ds_client(mirror, ds->ds_clp,
+ hdr->inode);
+@@ -1956,7 +1962,7 @@ ff_layout_write_pagelist(struct nfs_pgio_header *hdr, int sync)
+ return PNFS_ATTEMPTED;
+
+ out_failed:
+- if (ff_layout_avoid_mds_available_ds(lseg))
++ if (ff_layout_avoid_mds_available_ds(lseg) && !ds_fatal_error)
+ return PNFS_TRY_AGAIN;
+ trace_pnfs_mds_fallback_write_pagelist(hdr->inode,
+ hdr->args.offset, hdr->args.count,
+@@ -1998,7 +2004,7 @@ static int ff_layout_initiate_commit(struct nfs_commit_data *data, int how)
+ idx = calc_ds_index_from_commit(lseg, data->ds_commit_index);
+ mirror = FF_LAYOUT_COMP(lseg, idx);
+ ds = nfs4_ff_layout_prepare_ds(lseg, mirror, true);
+- if (!ds)
++ if (IS_ERR(ds))
+ goto out_err;
+
+ ds_clnt = nfs4_ff_find_or_create_ds_client(mirror, ds->ds_clp,
+diff --git a/fs/nfs/flexfilelayout/flexfilelayoutdev.c b/fs/nfs/flexfilelayout/flexfilelayoutdev.c
+index d21c5ecfbf1cc3..95d5dca6714563 100644
+--- a/fs/nfs/flexfilelayout/flexfilelayoutdev.c
++++ b/fs/nfs/flexfilelayout/flexfilelayoutdev.c
+@@ -370,11 +370,11 @@ nfs4_ff_layout_prepare_ds(struct pnfs_layout_segment *lseg,
+ struct nfs4_ff_layout_mirror *mirror,
+ bool fail_return)
+ {
+- struct nfs4_pnfs_ds *ds = NULL;
++ struct nfs4_pnfs_ds *ds;
+ struct inode *ino = lseg->pls_layout->plh_inode;
+ struct nfs_server *s = NFS_SERVER(ino);
+ unsigned int max_payload;
+- int status;
++ int status = -EAGAIN;
+
+ if (!ff_layout_init_mirror_ds(lseg->pls_layout, mirror))
+ goto noconnect;
+@@ -412,7 +412,7 @@ nfs4_ff_layout_prepare_ds(struct pnfs_layout_segment *lseg,
+ ff_layout_send_layouterror(lseg);
+ if (fail_return || !ff_layout_has_available_ds(lseg))
+ pnfs_error_mark_layout_for_return(ino, lseg);
+- ds = NULL;
++ ds = ERR_PTR(status);
+ out:
+ return ds;
+ }
+diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h
+index 84361674bffc7f..6ea10abfa851ad 100644
+--- a/fs/nfs/internal.h
++++ b/fs/nfs/internal.h
+@@ -598,9 +598,12 @@ nfs_write_match_verf(const struct nfs_writeverf *verf,
+
+ static inline gfp_t nfs_io_gfp_mask(void)
+ {
+- if (current->flags & PF_WQ_WORKER)
+- return GFP_KERNEL | __GFP_NORETRY | __GFP_NOWARN;
+- return GFP_KERNEL;
++ gfp_t ret = current_gfp_context(GFP_KERNEL);
++
++ /* For workers __GFP_NORETRY only with __GFP_IO or __GFP_FS */
++ if ((current->flags & PF_WQ_WORKER) && ret == GFP_KERNEL)
++ ret |= __GFP_NORETRY | __GFP_NOWARN;
++ return ret;
+ }
+
+ /*
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 29f8a2df2c11a7..4abac68a4f0f2d 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -10635,7 +10635,7 @@ const struct nfs4_minor_version_ops *nfs_v4_minor_ops[] = {
+
+ static ssize_t nfs4_listxattr(struct dentry *dentry, char *list, size_t size)
+ {
+- ssize_t error, error2, error3, error4;
++ ssize_t error, error2, error3, error4 = 0;
+ size_t left = size;
+
+ error = generic_listxattr(dentry, list, left);
+@@ -10663,9 +10663,11 @@ static ssize_t nfs4_listxattr(struct dentry *dentry, char *list, size_t size)
+ left -= error3;
+ }
+
+- error4 = security_inode_listsecurity(d_inode(dentry), list, left);
+- if (error4 < 0)
+- return error4;
++ if (!nfs_server_capable(d_inode(dentry), NFS_CAP_SECURITY_LABEL)) {
++ error4 = security_inode_listsecurity(d_inode(dentry), list, left);
++ if (error4 < 0)
++ return error4;
++ }
+
+ error += error2 + error3 + error4;
+ if (size && error > size)
+diff --git a/fs/nilfs2/inode.c b/fs/nilfs2/inode.c
+index 452fb23d2e4c42..1eb6c90fb7f4ce 100644
+--- a/fs/nilfs2/inode.c
++++ b/fs/nilfs2/inode.c
+@@ -517,11 +517,18 @@ static int __nilfs_read_inode(struct super_block *sb,
+ inode->i_op = &nilfs_symlink_inode_operations;
+ inode_nohighmem(inode);
+ inode->i_mapping->a_ops = &nilfs_aops;
+- } else {
++ } else if (S_ISCHR(inode->i_mode) || S_ISBLK(inode->i_mode) ||
++ S_ISFIFO(inode->i_mode) || S_ISSOCK(inode->i_mode)) {
+ inode->i_op = &nilfs_special_inode_operations;
+ init_special_inode(
+ inode, inode->i_mode,
+ huge_decode_dev(le64_to_cpu(raw_inode->i_device_code)));
++ } else {
++ nilfs_error(sb,
++ "invalid file type bits in mode 0%o for inode %lu",
++ inode->i_mode, ino);
++ err = -EIO;
++ goto failed_unmap;
+ }
+ nilfs_ifile_unmap_inode(root->ifile, ino, bh);
+ brelse(bh);
+diff --git a/fs/ntfs3/file.c b/fs/ntfs3/file.c
+index 46eec986ec9ca7..6d9c1dfe9b1b64 100644
+--- a/fs/ntfs3/file.c
++++ b/fs/ntfs3/file.c
+@@ -317,7 +317,10 @@ static int ntfs_file_mmap(struct file *file, struct vm_area_struct *vma)
+ }
+
+ if (ni->i_valid < to) {
+- inode_lock(inode);
++ if (!inode_trylock(inode)) {
++ err = -EAGAIN;
++ goto out;
++ }
+ err = ntfs_extend_initialized_size(file, ni,
+ ni->i_valid, to);
+ inode_unlock(inode);
+diff --git a/fs/orangefs/orangefs-debugfs.c b/fs/orangefs/orangefs-debugfs.c
+index fa41db08848802..b57140ebfad0f7 100644
+--- a/fs/orangefs/orangefs-debugfs.c
++++ b/fs/orangefs/orangefs-debugfs.c
+@@ -728,8 +728,8 @@ static void do_k_string(void *k_mask, int index)
+
+ if (*mask & s_kmod_keyword_mask_map[index].mask_val) {
+ if ((strlen(kernel_debug_string) +
+- strlen(s_kmod_keyword_mask_map[index].keyword))
+- < ORANGEFS_MAX_DEBUG_STRING_LEN - 1) {
++ strlen(s_kmod_keyword_mask_map[index].keyword) + 1)
++ < ORANGEFS_MAX_DEBUG_STRING_LEN) {
+ strcat(kernel_debug_string,
+ s_kmod_keyword_mask_map[index].keyword);
+ strcat(kernel_debug_string, ",");
+@@ -756,7 +756,7 @@ static void do_c_string(void *c_mask, int index)
+ (mask->mask2 & cdm_array[index].mask2)) {
+ if ((strlen(client_debug_string) +
+ strlen(cdm_array[index].keyword) + 1)
+- < ORANGEFS_MAX_DEBUG_STRING_LEN - 2) {
++ < ORANGEFS_MAX_DEBUG_STRING_LEN) {
+ strcat(client_debug_string,
+ cdm_array[index].keyword);
+ strcat(client_debug_string, ",");
+diff --git a/fs/proc/generic.c b/fs/proc/generic.c
+index b721bb88b4a6a4..c3a809e1d7198c 100644
+--- a/fs/proc/generic.c
++++ b/fs/proc/generic.c
+@@ -568,6 +568,8 @@ static void pde_set_flags(struct proc_dir_entry *pde)
+ if (pde->proc_ops->proc_compat_ioctl)
+ pde->flags |= PROC_ENTRY_proc_compat_ioctl;
+ #endif
++ if (pde->proc_ops->proc_lseek)
++ pde->flags |= PROC_ENTRY_proc_lseek;
+ }
+
+ struct proc_dir_entry *proc_create_data(const char *name, umode_t mode,
+diff --git a/fs/proc/inode.c b/fs/proc/inode.c
+index bc4011901c901a..623aa0d97a6d4c 100644
+--- a/fs/proc/inode.c
++++ b/fs/proc/inode.c
+@@ -494,7 +494,7 @@ static int proc_reg_open(struct inode *inode, struct file *file)
+ typeof_member(struct proc_ops, proc_release) release;
+ struct pde_opener *pdeo;
+
+- if (!pde->proc_ops->proc_lseek)
++ if (!pde_has_proc_lseek(pde))
+ file->f_mode &= ~FMODE_LSEEK;
+
+ if (pde_is_permanent(pde)) {
+diff --git a/fs/proc/internal.h b/fs/proc/internal.h
+index d115d22c01d498..019137261a039b 100644
+--- a/fs/proc/internal.h
++++ b/fs/proc/internal.h
+@@ -98,6 +98,11 @@ static inline bool pde_has_proc_compat_ioctl(const struct proc_dir_entry *pde)
+ #endif
+ }
+
++static inline bool pde_has_proc_lseek(const struct proc_dir_entry *pde)
++{
++ return pde->flags & PROC_ENTRY_proc_lseek;
++}
++
+ extern struct kmem_cache *proc_dir_entry_cache;
+ void pde_free(struct proc_dir_entry *pde);
+
+diff --git a/fs/smb/client/smbdirect.c b/fs/smb/client/smbdirect.c
+index cf923f211c512c..d47eae133a202e 100644
+--- a/fs/smb/client/smbdirect.c
++++ b/fs/smb/client/smbdirect.c
+@@ -455,7 +455,6 @@ static void recv_done(struct ib_cq *cq, struct ib_wc *wc)
+ if (wc->status != IB_WC_SUCCESS || wc->opcode != IB_WC_RECV) {
+ log_rdma_recv(INFO, "wc->status=%d opcode=%d\n",
+ wc->status, wc->opcode);
+- smbd_disconnect_rdma_connection(info);
+ goto error;
+ }
+
+@@ -472,8 +471,9 @@ static void recv_done(struct ib_cq *cq, struct ib_wc *wc)
+ info->full_packet_received = true;
+ info->negotiate_done =
+ process_negotiation_response(response, wc->byte_len);
++ put_receive_buffer(info, response);
+ complete(&info->negotiate_completion);
+- break;
++ return;
+
+ /* SMBD data transfer packet */
+ case SMBD_TRANSFER_DATA:
+@@ -530,14 +530,16 @@ static void recv_done(struct ib_cq *cq, struct ib_wc *wc)
+ }
+
+ return;
+-
+- default:
+- log_rdma_recv(ERR,
+- "unexpected response type=%d\n", response->type);
+ }
+
++ /*
++ * This is an internal error!
++ */
++ log_rdma_recv(ERR, "unexpected response type=%d\n", response->type);
++ WARN_ON_ONCE(response->type != SMBD_TRANSFER_DATA);
+ error:
+ put_receive_buffer(info, response);
++ smbd_disconnect_rdma_connection(info);
+ }
+
+ static struct rdma_cm_id *smbd_create_id(
+diff --git a/fs/smb/server/connection.h b/fs/smb/server/connection.h
+index 0e04cf8b1d896a..0e72be594e910b 100644
+--- a/fs/smb/server/connection.h
++++ b/fs/smb/server/connection.h
+@@ -45,6 +45,7 @@ struct ksmbd_conn {
+ struct mutex srv_mutex;
+ int status;
+ unsigned int cli_cap;
++ __be32 inet_addr;
+ char *request_buf;
+ struct ksmbd_transport *transport;
+ struct nls_table *local_nls;
+diff --git a/fs/smb/server/smb2pdu.c b/fs/smb/server/smb2pdu.c
+index a04413095b23be..3e2cd22fb2bd1e 100644
+--- a/fs/smb/server/smb2pdu.c
++++ b/fs/smb/server/smb2pdu.c
+@@ -1621,11 +1621,24 @@ static int krb5_authenticate(struct ksmbd_work *work,
+
+ rsp->SecurityBufferLength = cpu_to_le16(out_len);
+
+- if ((conn->sign || server_conf.enforced_signing) ||
++ /*
++ * If session state is SMB2_SESSION_VALID, We can assume
++ * that it is reauthentication. And the user/password
++ * has been verified, so return it here.
++ */
++ if (sess->state == SMB2_SESSION_VALID) {
++ if (conn->binding)
++ goto binding_session;
++ return 0;
++ }
++
++ if ((rsp->SessionFlags != SMB2_SESSION_FLAG_IS_GUEST_LE &&
++ (conn->sign || server_conf.enforced_signing)) ||
+ (req->SecurityMode & SMB2_NEGOTIATE_SIGNING_REQUIRED))
+ sess->sign = true;
+
+- if (smb3_encryption_negotiated(conn)) {
++ if (smb3_encryption_negotiated(conn) &&
++ !(req->Flags & SMB2_SESSION_REQ_FLAG_BINDING)) {
+ retval = conn->ops->generate_encryptionkey(conn, sess);
+ if (retval) {
+ ksmbd_debug(SMB,
+@@ -1638,6 +1651,7 @@ static int krb5_authenticate(struct ksmbd_work *work,
+ sess->sign = false;
+ }
+
++binding_session:
+ if (conn->dialect >= SMB30_PROT_ID) {
+ chann = lookup_chann_list(sess, conn);
+ if (!chann) {
+@@ -1828,8 +1842,6 @@ int smb2_sess_setup(struct ksmbd_work *work)
+ ksmbd_conn_set_good(conn);
+ sess->state = SMB2_SESSION_VALID;
+ }
+- kfree(sess->Preauth_HashValue);
+- sess->Preauth_HashValue = NULL;
+ } else if (conn->preferred_auth_mech == KSMBD_AUTH_NTLMSSP) {
+ if (negblob->MessageType == NtLmNegotiate) {
+ rc = ntlm_negotiate(work, negblob, negblob_len, rsp);
+@@ -1856,8 +1868,6 @@ int smb2_sess_setup(struct ksmbd_work *work)
+ kfree(preauth_sess);
+ }
+ }
+- kfree(sess->Preauth_HashValue);
+- sess->Preauth_HashValue = NULL;
+ } else {
+ pr_info_ratelimited("Unknown NTLMSSP message type : 0x%x\n",
+ le32_to_cpu(negblob->MessageType));
+diff --git a/fs/smb/server/smb_common.c b/fs/smb/server/smb_common.c
+index 7134abeeb53ec4..2850802f4a508f 100644
+--- a/fs/smb/server/smb_common.c
++++ b/fs/smb/server/smb_common.c
+@@ -508,7 +508,7 @@ int ksmbd_extract_shortname(struct ksmbd_conn *conn, const char *longname,
+
+ p = strrchr(longname, '.');
+ if (p == longname) { /*name starts with a dot*/
+- strscpy(extension, "___", strlen("___"));
++ strscpy(extension, "___", sizeof(extension));
+ } else {
+ if (p) {
+ p++;
+diff --git a/fs/smb/server/transport_rdma.c b/fs/smb/server/transport_rdma.c
+index 7b6639949c250c..7d59ed6e138312 100644
+--- a/fs/smb/server/transport_rdma.c
++++ b/fs/smb/server/transport_rdma.c
+@@ -128,9 +128,6 @@ struct smb_direct_transport {
+ spinlock_t recvmsg_queue_lock;
+ struct list_head recvmsg_queue;
+
+- spinlock_t empty_recvmsg_queue_lock;
+- struct list_head empty_recvmsg_queue;
+-
+ int send_credit_target;
+ atomic_t send_credits;
+ spinlock_t lock_new_recv_credits;
+@@ -266,40 +263,19 @@ smb_direct_recvmsg *get_free_recvmsg(struct smb_direct_transport *t)
+ static void put_recvmsg(struct smb_direct_transport *t,
+ struct smb_direct_recvmsg *recvmsg)
+ {
+- ib_dma_unmap_single(t->cm_id->device, recvmsg->sge.addr,
+- recvmsg->sge.length, DMA_FROM_DEVICE);
++ if (likely(recvmsg->sge.length != 0)) {
++ ib_dma_unmap_single(t->cm_id->device,
++ recvmsg->sge.addr,
++ recvmsg->sge.length,
++ DMA_FROM_DEVICE);
++ recvmsg->sge.length = 0;
++ }
+
+ spin_lock(&t->recvmsg_queue_lock);
+ list_add(&recvmsg->list, &t->recvmsg_queue);
+ spin_unlock(&t->recvmsg_queue_lock);
+ }
+
+-static struct
+-smb_direct_recvmsg *get_empty_recvmsg(struct smb_direct_transport *t)
+-{
+- struct smb_direct_recvmsg *recvmsg = NULL;
+-
+- spin_lock(&t->empty_recvmsg_queue_lock);
+- if (!list_empty(&t->empty_recvmsg_queue)) {
+- recvmsg = list_first_entry(&t->empty_recvmsg_queue,
+- struct smb_direct_recvmsg, list);
+- list_del(&recvmsg->list);
+- }
+- spin_unlock(&t->empty_recvmsg_queue_lock);
+- return recvmsg;
+-}
+-
+-static void put_empty_recvmsg(struct smb_direct_transport *t,
+- struct smb_direct_recvmsg *recvmsg)
+-{
+- ib_dma_unmap_single(t->cm_id->device, recvmsg->sge.addr,
+- recvmsg->sge.length, DMA_FROM_DEVICE);
+-
+- spin_lock(&t->empty_recvmsg_queue_lock);
+- list_add_tail(&recvmsg->list, &t->empty_recvmsg_queue);
+- spin_unlock(&t->empty_recvmsg_queue_lock);
+-}
+-
+ static void enqueue_reassembly(struct smb_direct_transport *t,
+ struct smb_direct_recvmsg *recvmsg,
+ int data_length)
+@@ -384,9 +360,6 @@ static struct smb_direct_transport *alloc_transport(struct rdma_cm_id *cm_id)
+ spin_lock_init(&t->recvmsg_queue_lock);
+ INIT_LIST_HEAD(&t->recvmsg_queue);
+
+- spin_lock_init(&t->empty_recvmsg_queue_lock);
+- INIT_LIST_HEAD(&t->empty_recvmsg_queue);
+-
+ init_waitqueue_head(&t->wait_send_pending);
+ atomic_set(&t->send_pending, 0);
+
+@@ -542,13 +515,13 @@ static void recv_done(struct ib_cq *cq, struct ib_wc *wc)
+ t = recvmsg->transport;
+
+ if (wc->status != IB_WC_SUCCESS || wc->opcode != IB_WC_RECV) {
++ put_recvmsg(t, recvmsg);
+ if (wc->status != IB_WC_WR_FLUSH_ERR) {
+ pr_err("Recv error. status='%s (%d)' opcode=%d\n",
+ ib_wc_status_msg(wc->status), wc->status,
+ wc->opcode);
+ smb_direct_disconnect_rdma_connection(t);
+ }
+- put_empty_recvmsg(t, recvmsg);
+ return;
+ }
+
+@@ -562,7 +535,8 @@ static void recv_done(struct ib_cq *cq, struct ib_wc *wc)
+ switch (recvmsg->type) {
+ case SMB_DIRECT_MSG_NEGOTIATE_REQ:
+ if (wc->byte_len < sizeof(struct smb_direct_negotiate_req)) {
+- put_empty_recvmsg(t, recvmsg);
++ put_recvmsg(t, recvmsg);
++ smb_direct_disconnect_rdma_connection(t);
+ return;
+ }
+ t->negotiation_requested = true;
+@@ -570,7 +544,7 @@ static void recv_done(struct ib_cq *cq, struct ib_wc *wc)
+ t->status = SMB_DIRECT_CS_CONNECTED;
+ enqueue_reassembly(t, recvmsg, 0);
+ wake_up_interruptible(&t->wait_status);
+- break;
++ return;
+ case SMB_DIRECT_MSG_DATA_TRANSFER: {
+ struct smb_direct_data_transfer *data_transfer =
+ (struct smb_direct_data_transfer *)recvmsg->packet;
+@@ -579,7 +553,8 @@ static void recv_done(struct ib_cq *cq, struct ib_wc *wc)
+
+ if (wc->byte_len <
+ offsetof(struct smb_direct_data_transfer, padding)) {
+- put_empty_recvmsg(t, recvmsg);
++ put_recvmsg(t, recvmsg);
++ smb_direct_disconnect_rdma_connection(t);
+ return;
+ }
+
+@@ -587,7 +562,8 @@ static void recv_done(struct ib_cq *cq, struct ib_wc *wc)
+ if (data_length) {
+ if (wc->byte_len < sizeof(struct smb_direct_data_transfer) +
+ (u64)data_length) {
+- put_empty_recvmsg(t, recvmsg);
++ put_recvmsg(t, recvmsg);
++ smb_direct_disconnect_rdma_connection(t);
+ return;
+ }
+
+@@ -599,16 +575,11 @@ static void recv_done(struct ib_cq *cq, struct ib_wc *wc)
+ else
+ t->full_packet_received = true;
+
+- enqueue_reassembly(t, recvmsg, (int)data_length);
+- wake_up_interruptible(&t->wait_reassembly_queue);
+-
+ spin_lock(&t->receive_credit_lock);
+ receive_credits = --(t->recv_credits);
+ avail_recvmsg_count = t->count_avail_recvmsg;
+ spin_unlock(&t->receive_credit_lock);
+ } else {
+- put_empty_recvmsg(t, recvmsg);
+-
+ spin_lock(&t->receive_credit_lock);
+ receive_credits = --(t->recv_credits);
+ avail_recvmsg_count = ++(t->count_avail_recvmsg);
+@@ -630,11 +601,23 @@ static void recv_done(struct ib_cq *cq, struct ib_wc *wc)
+ if (is_receive_credit_post_required(receive_credits, avail_recvmsg_count))
+ mod_delayed_work(smb_direct_wq,
+ &t->post_recv_credits_work, 0);
+- break;
++
++ if (data_length) {
++ enqueue_reassembly(t, recvmsg, (int)data_length);
++ wake_up_interruptible(&t->wait_reassembly_queue);
++ } else
++ put_recvmsg(t, recvmsg);
++
++ return;
+ }
+- default:
+- break;
+ }
++
++ /*
++ * This is an internal error!
++ */
++ WARN_ON_ONCE(recvmsg->type != SMB_DIRECT_MSG_DATA_TRANSFER);
++ put_recvmsg(t, recvmsg);
++ smb_direct_disconnect_rdma_connection(t);
+ }
+
+ static int smb_direct_post_recv(struct smb_direct_transport *t,
+@@ -664,6 +647,7 @@ static int smb_direct_post_recv(struct smb_direct_transport *t,
+ ib_dma_unmap_single(t->cm_id->device,
+ recvmsg->sge.addr, recvmsg->sge.length,
+ DMA_FROM_DEVICE);
++ recvmsg->sge.length = 0;
+ smb_direct_disconnect_rdma_connection(t);
+ return ret;
+ }
+@@ -805,7 +789,6 @@ static void smb_direct_post_recv_credits(struct work_struct *work)
+ struct smb_direct_recvmsg *recvmsg;
+ int receive_credits, credits = 0;
+ int ret;
+- int use_free = 1;
+
+ spin_lock(&t->receive_credit_lock);
+ receive_credits = t->recv_credits;
+@@ -813,18 +796,9 @@ static void smb_direct_post_recv_credits(struct work_struct *work)
+
+ if (receive_credits < t->recv_credit_target) {
+ while (true) {
+- if (use_free)
+- recvmsg = get_free_recvmsg(t);
+- else
+- recvmsg = get_empty_recvmsg(t);
+- if (!recvmsg) {
+- if (use_free) {
+- use_free = 0;
+- continue;
+- } else {
+- break;
+- }
+- }
++ recvmsg = get_free_recvmsg(t);
++ if (!recvmsg)
++ break;
+
+ recvmsg->type = SMB_DIRECT_MSG_DATA_TRANSFER;
+ recvmsg->first_segment = false;
+@@ -1800,8 +1774,6 @@ static void smb_direct_destroy_pools(struct smb_direct_transport *t)
+
+ while ((recvmsg = get_free_recvmsg(t)))
+ mempool_free(recvmsg, t->recvmsg_mempool);
+- while ((recvmsg = get_empty_recvmsg(t)))
+- mempool_free(recvmsg, t->recvmsg_mempool);
+
+ mempool_destroy(t->recvmsg_mempool);
+ t->recvmsg_mempool = NULL;
+@@ -1857,6 +1829,7 @@ static int smb_direct_create_pools(struct smb_direct_transport *t)
+ if (!recvmsg)
+ goto err;
+ recvmsg->transport = t;
++ recvmsg->sge.length = 0;
+ list_add(&recvmsg->list, &t->recvmsg_queue);
+ }
+ t->count_avail_recvmsg = t->recv_credit_max;
+diff --git a/fs/smb/server/transport_tcp.c b/fs/smb/server/transport_tcp.c
+index 25f7c86ba9b984..1222cf6be5efab 100644
+--- a/fs/smb/server/transport_tcp.c
++++ b/fs/smb/server/transport_tcp.c
+@@ -87,6 +87,7 @@ static struct tcp_transport *alloc_transport(struct socket *client_sk)
+ return NULL;
+ }
+
++ conn->inet_addr = inet_sk(client_sk->sk)->inet_daddr;
+ conn->transport = KSMBD_TRANS(t);
+ KSMBD_TRANS(t)->conn = conn;
+ KSMBD_TRANS(t)->ops = &ksmbd_tcp_transport_ops;
+@@ -226,6 +227,8 @@ static int ksmbd_kthread_fn(void *p)
+ {
+ struct socket *client_sk = NULL;
+ struct interface *iface = (struct interface *)p;
++ struct inet_sock *csk_inet;
++ struct ksmbd_conn *conn;
+ int ret;
+
+ while (!kthread_should_stop()) {
+@@ -244,6 +247,20 @@ static int ksmbd_kthread_fn(void *p)
+ continue;
+ }
+
++ /*
++ * Limits repeated connections from clients with the same IP.
++ */
++ csk_inet = inet_sk(client_sk->sk);
++ down_read(&conn_list_lock);
++ list_for_each_entry(conn, &conn_list, conns_list)
++ if (csk_inet->inet_daddr == conn->inet_addr) {
++ ret = -EAGAIN;
++ break;
++ }
++ up_read(&conn_list_lock);
++ if (ret == -EAGAIN)
++ continue;
++
+ if (server_conf.max_connections &&
+ atomic_inc_return(&active_num_conn) >= server_conf.max_connections) {
+ pr_info_ratelimited("Limit the maximum number of connections(%u)\n",
+diff --git a/fs/smb/server/vfs.c b/fs/smb/server/vfs.c
+index 63276a752373ed..871c0d8e5012ab 100644
+--- a/fs/smb/server/vfs.c
++++ b/fs/smb/server/vfs.c
+@@ -562,7 +562,8 @@ int ksmbd_vfs_getattr(const struct path *path, struct kstat *stat)
+ {
+ int err;
+
+- err = vfs_getattr(path, stat, STATX_BTIME, AT_STATX_SYNC_AS_STAT);
++ err = vfs_getattr(path, stat, STATX_BASIC_STATS | STATX_BTIME,
++ AT_STATX_SYNC_AS_STAT);
+ if (err)
+ pr_err("getattr failed, err %d\n", err);
+ return err;
+diff --git a/include/linux/fs_context.h b/include/linux/fs_context.h
+index 13fa6f3df8e465..c861b2c894ba34 100644
+--- a/include/linux/fs_context.h
++++ b/include/linux/fs_context.h
+@@ -209,7 +209,7 @@ void logfc(struct fc_log *log, const char *prefix, char level, const char *fmt,
+ */
+ #define infof(fc, fmt, ...) __logfc(fc, 'i', fmt, ## __VA_ARGS__)
+ #define info_plog(p, fmt, ...) __plog(p, 'i', fmt, ## __VA_ARGS__)
+-#define infofc(p, fmt, ...) __plog((&(fc)->log), 'i', fmt, ## __VA_ARGS__)
++#define infofc(fc, fmt, ...) __plog((&(fc)->log), 'i', fmt, ## __VA_ARGS__)
+
+ /**
+ * warnf - Store supplementary warning message
+diff --git a/include/linux/moduleparam.h b/include/linux/moduleparam.h
+index 962cd41a2cb5af..061e19c94a6bc6 100644
+--- a/include/linux/moduleparam.h
++++ b/include/linux/moduleparam.h
+@@ -282,10 +282,9 @@ struct kparam_array
+ #define __moduleparam_const const
+ #endif
+
+-/* This is the fundamental function for registering boot/module
+- parameters. */
++/* This is the fundamental function for registering boot/module parameters. */
+ #define __module_param_call(prefix, name, ops, arg, perm, level, flags) \
+- /* Default value instead of permissions? */ \
++ static_assert(sizeof(""prefix) - 1 <= MAX_PARAM_PREFIX_LEN); \
+ static const char __param_str_##name[] = prefix #name; \
+ static struct kernel_param __moduleparam_const __param_##name \
+ __used __section("__param") \
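
The new static_assert() line rejects oversized prefixes at compile time; sizeof(""prefix) - 1 is the length of the string literal, and the ""-concatenation also forces prefix to actually be a literal. A standalone C11 sketch of the same idea — the MAX_PARAM_PREFIX_LEN value here is made up for illustration, not the kernel's:

    #include <stdio.h>

    #define MAX_PARAM_PREFIX_LEN 15 /* hypothetical bound */

    /* sizeof("" pfx) - 1 is the literal's length, checked at
     * compile time exactly as in the patch above. */
    #define CHECK_PREFIX(pfx) \
        _Static_assert(sizeof("" pfx) - 1 <= MAX_PARAM_PREFIX_LEN, \
                   "prefix too long")

    CHECK_PREFIX("mymodule.");  /* compiles */
    /* CHECK_PREFIX("a-very-long-module-name-prefix."); fails to build */

    int main(void)
    {
        puts("prefix length verified at compile time");
        return 0;
    }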
+diff --git a/include/linux/pps_kernel.h b/include/linux/pps_kernel.h
+index c7abce28ed2995..aab0aebb529e02 100644
+--- a/include/linux/pps_kernel.h
++++ b/include/linux/pps_kernel.h
+@@ -52,6 +52,7 @@ struct pps_device {
+ int current_mode; /* PPS mode at event time */
+
+ unsigned int last_ev; /* last PPS event id */
++ unsigned int last_fetched_ev; /* last fetched PPS event id */
+ wait_queue_head_t queue; /* PPS event queue */
+
+ unsigned int id; /* PPS source unique ID */
+diff --git a/include/linux/proc_fs.h b/include/linux/proc_fs.h
+index 39532c19aa2861..ca9cd8a2569e94 100644
+--- a/include/linux/proc_fs.h
++++ b/include/linux/proc_fs.h
+@@ -27,6 +27,7 @@ enum {
+
+ PROC_ENTRY_proc_read_iter = 1U << 1,
+ PROC_ENTRY_proc_compat_ioctl = 1U << 2,
++ PROC_ENTRY_proc_lseek = 1U << 3,
+ };
+
+ struct proc_ops {
+diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
+index 2b546620488824..8014a335414e41 100644
+--- a/include/linux/skbuff.h
++++ b/include/linux/skbuff.h
+@@ -2850,6 +2850,29 @@ static inline void skb_reset_transport_header(struct sk_buff *skb)
+ skb->transport_header = skb->data - skb->head;
+ }
+
++/**
++ * skb_reset_transport_header_careful - conditionally reset transport header
++ * @skb: buffer to alter
++ *
++ * Hardened version of skb_reset_transport_header().
++ *
++ * Returns: true if the operation was a success.
++ */
++static inline bool __must_check
++skb_reset_transport_header_careful(struct sk_buff *skb)
++{
++ long offset = skb->data - skb->head;
++
++ if (unlikely(offset != (typeof(skb->transport_header))offset))
++ return false;
++
++ if (unlikely(offset == (typeof(skb->transport_header))~0U))
++ return false;
++
++ skb->transport_header = offset;
++ return true;
++}
++
+ static inline void skb_set_transport_header(struct sk_buff *skb,
+ const int offset)
+ {
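
skb_reset_transport_header_careful() guards against the computed offset being truncated when stored into the narrower transport_header field. The round-trip cast test generalizes to any narrow field; a self-contained sketch of the pattern (struct buf and the 16-bit field width are illustrative, not the real sk_buff layout):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    struct buf {
        unsigned char *head;
        unsigned char *data;
        uint16_t transport_header; /* narrow, like skb's field */
    };

    static bool set_transport_header_careful(struct buf *b)
    {
        long offset = b->data - b->head;

        /* Round-trip check: would the narrowing cast change it? */
        if (offset != (uint16_t)offset)
            return false;
        /* ~0 is commonly reserved as "unset"; reject it too. */
        if (offset == (uint16_t)~0U)
            return false;
        b->transport_header = (uint16_t)offset;
        return true;
    }

    int main(void)
    {
        unsigned char mem[128];
        struct buf b = { .head = mem, .data = mem + 64 };

        printf("%d\n", set_transport_header_careful(&b)); /* 1 */
        return 0;
    }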
+diff --git a/include/linux/usb/usbnet.h b/include/linux/usb/usbnet.h
+index 0b9f1e598e3a6b..4bc6bb01a0eb8b 100644
+--- a/include/linux/usb/usbnet.h
++++ b/include/linux/usb/usbnet.h
+@@ -76,6 +76,7 @@ struct usbnet {
+ # define EVENT_LINK_CHANGE 11
+ # define EVENT_SET_RX_MODE 12
+ # define EVENT_NO_IP_ALIGN 13
++# define EVENT_LINK_CARRIER_ON 14
+ /* This one is special, as it indicates that the device is going away
+ * there are cyclic dependencies between tasklet, timer and bh
+ * that must be broken
+diff --git a/include/linux/wait_bit.h b/include/linux/wait_bit.h
+index 7725b7579b7819..2209c227e85920 100644
+--- a/include/linux/wait_bit.h
++++ b/include/linux/wait_bit.h
+@@ -335,4 +335,64 @@ static inline void clear_and_wake_up_bit(int bit, void *word)
+ wake_up_bit(word, bit);
+ }
+
++/**
++ * test_and_clear_wake_up_bit - clear a bit if it was set: wake up anyone waiting on that bit
++ * @bit: the bit of the word being waited on
++ * @word: the address of memory containing that bit
++ *
++ * If the bit is set and can be atomically cleared, any tasks waiting in
++ * wait_on_bit() or similar will be woken. This call has the same
++ * complete ordering semantics as test_and_clear_bit(). Any changes to
++ * memory made before this call are guaranteed to be visible after the
++ * corresponding wait_on_bit() completes.
++ *
++ * Returns %true if the bit was successfully cleared and the wake up was sent.
++ */
++static inline bool test_and_clear_wake_up_bit(int bit, unsigned long *word)
++{
++ if (!test_and_clear_bit(bit, word))
++ return false;
++ /* no extra barrier required */
++ wake_up_bit(word, bit);
++ return true;
++}
++
++/**
++ * atomic_dec_and_wake_up - decrement an atomic_t and if zero, wake up waiters
++ * @var: the variable to dec and test
++ *
++ * Decrements the atomic variable and, if it reaches zero, sends a wake_up to any
++ * processes waiting on the variable.
++ *
++ * This function has the same complete ordering semantics as atomic_dec_and_test.
++ *
++ * Returns %true if the variable reaches zero and the wake up was sent.
++ */
++
++static inline bool atomic_dec_and_wake_up(atomic_t *var)
++{
++ if (!atomic_dec_and_test(var))
++ return false;
++ /* No extra barrier required */
++ wake_up_var(var);
++ return true;
++}
++
++/**
++ * store_release_wake_up - update a variable and send a wake_up
++ * @var: the address of the variable to be updated and woken
++ * @val: the value to store in the variable.
++ *
++ * Store the given value in the variable and send a wake up to any tasks
++ * waiting on the variable. All necessary barriers are included to ensure
++ * the task calling wait_var_event() sees the new value and all values
++ * written to memory before this call.
++ */
++#define store_release_wake_up(var, val) \
++do { \
++ smp_store_release(var, val); \
++ smp_mb(); \
++ wake_up_var(var); \
++} while (0)
++
+ #endif /* _LINUX_WAIT_BIT_H */
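
store_release_wake_up() pairs a release store with a wakeup so that a waiter observing the new value also observes everything written before it. A rough userspace analogue using C11 atomics and <threads.h> (where available) — a spinning waiter stands in for the kernel's wait queue, which real code would use instead:

    #include <stdatomic.h>
    #include <stdio.h>
    #include <threads.h>

    static atomic_int ready;

    static int waiter(void *arg)
    {
        (void)arg;
        /* acquire load pairs with the release store in main() */
        while (atomic_load_explicit(&ready, memory_order_acquire) == 0)
            thrd_yield();
        puts("saw the published value");
        return 0;
    }

    int main(void)
    {
        thrd_t t;

        thrd_create(&t, waiter, NULL);
        /* publish, then "wake": release ordering guarantees the
         * waiter sees all prior writes once it sees ready == 1 */
        atomic_store_explicit(&ready, 1, memory_order_release);
        thrd_join(t, NULL);
        return 0;
    }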
+diff --git a/include/net/tc_act/tc_ctinfo.h b/include/net/tc_act/tc_ctinfo.h
+index f071c1d70a25e1..a04bcac7adf4b6 100644
+--- a/include/net/tc_act/tc_ctinfo.h
++++ b/include/net/tc_act/tc_ctinfo.h
+@@ -18,9 +18,9 @@ struct tcf_ctinfo_params {
+ struct tcf_ctinfo {
+ struct tc_action common;
+ struct tcf_ctinfo_params __rcu *params;
+- u64 stats_dscp_set;
+- u64 stats_dscp_error;
+- u64 stats_cpmark_set;
++ atomic64_t stats_dscp_set;
++ atomic64_t stats_dscp_error;
++ atomic64_t stats_cpmark_set;
+ };
+
+ enum {
+diff --git a/include/net/udp.h b/include/net/udp.h
+index fa4cdbe55552cf..aba442bd1439ab 100644
+--- a/include/net/udp.h
++++ b/include/net/udp.h
+@@ -460,6 +460,16 @@ static inline struct sk_buff *udp_rcv_segment(struct sock *sk,
+ {
+ netdev_features_t features = NETIF_F_SG;
+ struct sk_buff *segs;
++ int drop_count;
++
++ /*
++ * Segmentation in the UDP receive path is only for UDP GRO;
++ * drop UDP fragmentation offload (UFO) packets.
++ */
++ if (skb_shinfo(skb)->gso_type & SKB_GSO_UDP) {
++ drop_count = 1;
++ goto drop;
++ }
+
+ /* Avoid csum recalculation by skb_segment unless userspace explicitly
+ * asks for the final checksum values
+@@ -483,16 +493,18 @@ static inline struct sk_buff *udp_rcv_segment(struct sock *sk,
+ */
+ segs = __skb_gso_segment(skb, features, false);
+ if (IS_ERR_OR_NULL(segs)) {
+- int segs_nr = skb_shinfo(skb)->gso_segs;
+-
+- atomic_add(segs_nr, &sk->sk_drops);
+- SNMP_ADD_STATS(__UDPX_MIB(sk, ipv4), UDP_MIB_INERRORS, segs_nr);
+- kfree_skb(skb);
+- return NULL;
++ drop_count = skb_shinfo(skb)->gso_segs;
++ goto drop;
+ }
+
+ consume_skb(skb);
+ return segs;
++
++drop:
++ atomic_add(drop_count, &sk->sk_drops);
++ SNMP_ADD_STATS(__UDPX_MIB(sk, ipv4), UDP_MIB_INERRORS, drop_count);
++ kfree_skb(skb);
++ return NULL;
+ }
+
+ static inline void udp_post_segment_fix_csum(struct sk_buff *skb)
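
The udp_rcv_segment() change funnels both failure cases (UFO packets and failed segmentation) through a single drop label so the counter and SNMP accounting live in one place. A generic sketch of that single-exit idiom, with illustrative names:

    #include <stdio.h>
    #include <stdlib.h>

    static long drops;

    static char *process(int bad_early, int bad_late)
    {
        char *buf = malloc(64);
        int drop_count;

        if (!buf)
            return NULL;
        if (bad_early) {
            drop_count = 1;
            goto drop;
        }
        if (bad_late) {
            drop_count = 4; /* e.g. per-segment accounting */
            goto drop;
        }
        return buf;

    drop:
        /* one place to account and release, however we got here */
        drops += drop_count;
        free(buf);
        return NULL;
    }

    int main(void)
    {
        process(1, 0);
        process(0, 1);
        printf("drops=%ld\n", drops); /* 5 */
        return 0;
    }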
+diff --git a/kernel/bpf/preload/Kconfig b/kernel/bpf/preload/Kconfig
+index c9d45c9d6918d1..f9b11d01c3b50d 100644
+--- a/kernel/bpf/preload/Kconfig
++++ b/kernel/bpf/preload/Kconfig
+@@ -10,7 +10,6 @@ menuconfig BPF_PRELOAD
+ # The dependency on !COMPILE_TEST prevents it from being enabled
+ # in allmodconfig or allyesconfig configurations
+ depends on !COMPILE_TEST
+- select USERMODE_DRIVER
+ help
+ This builds kernel module with several embedded BPF programs that are
+ pinned into BPF FS mount point as human readable files that are
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index f815b808db20a5..4d7bf0536348f2 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -6285,11 +6285,21 @@ static void perf_mmap_close(struct vm_area_struct *vma)
+ ring_buffer_put(rb); /* could be last */
+ }
+
++static int perf_mmap_may_split(struct vm_area_struct *vma, unsigned long addr)
++{
++ /*
++ * Forbid splitting perf mappings to prevent refcount leaks due to
++ * the resulting non-matching offsets and sizes. See open()/close().
++ */
++ return -EINVAL;
++}
++
+ static const struct vm_operations_struct perf_mmap_vmops = {
+ .open = perf_mmap_open,
+ .close = perf_mmap_close, /* non mergeable */
+ .fault = perf_mmap_fault,
+ .page_mkwrite = perf_mmap_fault,
++ .may_split = perf_mmap_may_split,
+ };
+
+ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
+@@ -6381,9 +6391,7 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
+ goto unlock;
+ }
+
+- atomic_set(&rb->aux_mmap_count, 1);
+ user_extra = nr_pages;
+-
+ goto accounting;
+ }
+
+@@ -6485,8 +6493,10 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
+ } else {
+ ret = rb_alloc_aux(rb, event, vma->vm_pgoff, nr_pages,
+ event->attr.aux_watermark, flags);
+- if (!ret)
++ if (!ret) {
++ atomic_set(&rb->aux_mmap_count, 1);
+ rb->aux_mmap_locked = extra;
++ }
+ }
+
+ unlock:
+@@ -6496,6 +6506,7 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
+
+ atomic_inc(&event->mmap_count);
+ } else if (rb) {
++ /* AUX allocation failed */
+ atomic_dec(&rb->mmap_count);
+ }
+ aux_unlock:
+@@ -6503,6 +6514,9 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
+ mutex_unlock(aux_mutex);
+ mutex_unlock(&event->mmap_mutex);
+
++ if (ret)
++ return ret;
++
+ /*
+ * Since pinned accounting is per vm we cannot allow fork() to copy our
+ * vma.
+diff --git a/kernel/kcsan/kcsan_test.c b/kernel/kcsan/kcsan_test.c
+index a60c561724be96..fb5e7b65f79923 100644
+--- a/kernel/kcsan/kcsan_test.c
++++ b/kernel/kcsan/kcsan_test.c
+@@ -530,7 +530,7 @@ static void test_barrier_nothreads(struct kunit *test)
+ struct kcsan_scoped_access *reorder_access = NULL;
+ #endif
+ arch_spinlock_t arch_spinlock = __ARCH_SPIN_LOCK_UNLOCKED;
+- atomic_t dummy;
++ atomic_t dummy = ATOMIC_INIT(0);
+
+ KCSAN_TEST_REQUIRES(test, reorder_access != NULL);
+ KCSAN_TEST_REQUIRES(test, IS_ENABLED(CONFIG_SMP));
+diff --git a/kernel/trace/preemptirq_delay_test.c b/kernel/trace/preemptirq_delay_test.c
+index cb0871fbdb07f0..8af92dbe98f07b 100644
+--- a/kernel/trace/preemptirq_delay_test.c
++++ b/kernel/trace/preemptirq_delay_test.c
+@@ -119,12 +119,15 @@ static int preemptirq_delay_run(void *data)
+ {
+ int i;
+ int s = MIN(burst_size, NR_TEST_FUNCS);
+- struct cpumask cpu_mask;
++ cpumask_var_t cpu_mask;
++
++ if (!alloc_cpumask_var(&cpu_mask, GFP_KERNEL))
++ return -ENOMEM;
+
+ if (cpu_affinity > -1) {
+- cpumask_clear(&cpu_mask);
+- cpumask_set_cpu(cpu_affinity, &cpu_mask);
+- if (set_cpus_allowed_ptr(current, &cpu_mask))
++ cpumask_clear(cpu_mask);
++ cpumask_set_cpu(cpu_affinity, cpu_mask);
++ if (set_cpus_allowed_ptr(current, cpu_mask))
+ pr_err("cpu_affinity:%d, failed\n", cpu_affinity);
+ }
+
+@@ -141,6 +144,8 @@ static int preemptirq_delay_run(void *data)
+
+ __set_current_state(TASK_RUNNING);
+
++ free_cpumask_var(cpu_mask);
++
+ return 0;
+ }
+
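
struct cpumask can be kilobytes with a large CONFIG_NR_CPUS, so the fix above moves it off the kthread's stack via cpumask_var_t. A userspace sketch of the same size-driven choice — the bitmap type and the NBITS value are illustrative only:

    #include <stdio.h>
    #include <stdlib.h>

    #define NBITS 8192 /* hypothetical, like a big CONFIG_NR_CPUS */
    #define BITS_PER_WORD (8 * sizeof(unsigned long))

    typedef unsigned long *bitmap_var_t;

    /* Allocate instead of declaring on the stack: at 8192 bits
     * this is 1 KiB, too big for a small kernel stack frame. */
    static int alloc_bitmap_var(bitmap_var_t *mask)
    {
        size_t words = (NBITS + BITS_PER_WORD - 1) / BITS_PER_WORD;

        *mask = calloc(words, sizeof(unsigned long));
        return *mask ? 0 : -1;
    }

    int main(void)
    {
        bitmap_var_t mask;
        size_t bit = 3;

        if (alloc_bitmap_var(&mask))
            return 1;
        mask[bit / BITS_PER_WORD] |= 1UL << (bit % BITS_PER_WORD);
        printf("bit %zu set\n", bit);
        free(mask);
        return 0;
    }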
+diff --git a/kernel/ucount.c b/kernel/ucount.c
+index ced99a4bb56245..8afa2878422d58 100644
+--- a/kernel/ucount.c
++++ b/kernel/ucount.c
+@@ -212,7 +212,7 @@ void put_ucounts(struct ucounts *ucounts)
+ }
+ }
+
+-static inline bool atomic_long_inc_below(atomic_long_t *v, int u)
++static inline bool atomic_long_inc_below(atomic_long_t *v, long u)
+ {
+ long c, old;
+ c = atomic_long_read(v);
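
Widening the bound parameter from int to long matters because callers can pass limits above INT_MAX, which the old signature silently truncated. A self-contained C11 sketch of the fixed helper's compare-and-swap loop (the kernel version uses its own atomic_long_t API, not <stdatomic.h>):

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Increment v only while it stays below u; u must be long so a
     * bound above INT_MAX is not truncated before the compare. */
    static bool atomic_long_inc_below(atomic_long *v, long u)
    {
        long c = atomic_load(v);

        for (;;) {
            if (c >= u)
                return false;
            /* on failure, c is reloaded with the current value */
            if (atomic_compare_exchange_weak(v, &c, c + 1))
                return true;
        }
    }

    int main(void)
    {
        atomic_long v = 0;

        while (atomic_long_inc_below(&v, 3))
            ;
        printf("stopped at %ld\n", (long)atomic_load(&v)); /* 3 */
        return 0;
    }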
+diff --git a/mm/hmm.c b/mm/hmm.c
+index 3850fb625dda18..2cb3626fd1034e 100644
+--- a/mm/hmm.c
++++ b/mm/hmm.c
+@@ -173,6 +173,7 @@ static inline unsigned long hmm_pfn_flags_order(unsigned long order)
+ return order << HMM_PFN_ORDER_SHIFT;
+ }
+
++#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+ static inline unsigned long pmd_to_hmm_pfn_flags(struct hmm_range *range,
+ pmd_t pmd)
+ {
+@@ -183,7 +184,6 @@ static inline unsigned long pmd_to_hmm_pfn_flags(struct hmm_range *range,
+ hmm_pfn_flags_order(PMD_SHIFT - PAGE_SHIFT);
+ }
+
+-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+ static int hmm_vma_handle_pmd(struct mm_walk *walk, unsigned long addr,
+ unsigned long end, unsigned long hmm_pfns[],
+ pmd_t pmd)
+diff --git a/mm/kasan/report.c b/mm/kasan/report.c
+index d21d216f838a26..240560ec8d99a8 100644
+--- a/mm/kasan/report.c
++++ b/mm/kasan/report.c
+@@ -337,7 +337,9 @@ static void print_address_description(void *addr, u8 tag,
+ }
+
+ if (is_vmalloc_addr(addr)) {
+- pr_err("The buggy address %px belongs to a vmalloc virtual mapping\n", addr);
++ pr_err("The buggy address belongs to a");
++ if (!vmalloc_dump_obj(addr))
++ pr_cont(" vmalloc virtual mapping\n");
+ page = vmalloc_to_page(addr);
+ }
+
+diff --git a/mm/khugepaged.c b/mm/khugepaged.c
+index 085fca1fa27af0..eb46acfd3d2057 100644
+--- a/mm/khugepaged.c
++++ b/mm/khugepaged.c
+@@ -2345,7 +2345,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
+ VM_BUG_ON(khugepaged_scan.address < hstart ||
+ khugepaged_scan.address + HPAGE_PMD_SIZE >
+ hend);
+- if (IS_ENABLED(CONFIG_SHMEM) && vma->vm_file) {
++ if (IS_ENABLED(CONFIG_SHMEM) && !vma_is_anonymous(vma)) {
+ struct file *file = get_file(vma->vm_file);
+ pgoff_t pgoff = linear_page_index(vma,
+ khugepaged_scan.address);
+@@ -2694,7 +2694,7 @@ int madvise_collapse(struct vm_area_struct *vma, struct vm_area_struct **prev,
+ mmap_assert_locked(mm);
+ memset(cc->node_load, 0, sizeof(cc->node_load));
+ nodes_clear(cc->alloc_nmask);
+- if (IS_ENABLED(CONFIG_SHMEM) && vma->vm_file) {
++ if (IS_ENABLED(CONFIG_SHMEM) && !vma_is_anonymous(vma)) {
+ struct file *file = get_file(vma->vm_file);
+ pgoff_t pgoff = linear_page_index(vma, addr);
+
+diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
+index 37f755c9a1b70b..af130e2dcea289 100644
+--- a/mm/zsmalloc.c
++++ b/mm/zsmalloc.c
+@@ -1049,6 +1049,9 @@ static struct zspage *alloc_zspage(struct zs_pool *pool,
+ if (!zspage)
+ return NULL;
+
++ if (!IS_ENABLED(CONFIG_COMPACTION))
++ gfp &= ~__GFP_MOVABLE;
++
+ zspage->magic = ZSPAGE_MAGIC;
+ migrate_lock_init(zspage);
+
+diff --git a/net/appletalk/aarp.c b/net/appletalk/aarp.c
+index c7236daa24152a..0d7c14a4966819 100644
+--- a/net/appletalk/aarp.c
++++ b/net/appletalk/aarp.c
+@@ -35,6 +35,7 @@
+ #include <linux/seq_file.h>
+ #include <linux/export.h>
+ #include <linux/etherdevice.h>
++#include <linux/refcount.h>
+
+ int sysctl_aarp_expiry_time = AARP_EXPIRY_TIME;
+ int sysctl_aarp_tick_time = AARP_TICK_TIME;
+@@ -44,6 +45,7 @@ int sysctl_aarp_resolve_time = AARP_RESOLVE_TIME;
+ /* Lists of aarp entries */
+ /**
+ * struct aarp_entry - AARP entry
++ * @refcnt: Reference count
+ * @last_sent: Last time we xmitted the aarp request
+ * @packet_queue: Queue of frames wait for resolution
+ * @status: Used for proxy AARP
+@@ -55,6 +57,7 @@ int sysctl_aarp_resolve_time = AARP_RESOLVE_TIME;
+ * @next: Next entry in chain
+ */
+ struct aarp_entry {
++ refcount_t refcnt;
+ /* These first two are only used for unresolved entries */
+ unsigned long last_sent;
+ struct sk_buff_head packet_queue;
+@@ -79,6 +82,17 @@ static DEFINE_RWLOCK(aarp_lock);
+ /* Used to walk the list and purge/kick entries. */
+ static struct timer_list aarp_timer;
+
++static inline void aarp_entry_get(struct aarp_entry *a)
++{
++ refcount_inc(&a->refcnt);
++}
++
++static inline void aarp_entry_put(struct aarp_entry *a)
++{
++ if (refcount_dec_and_test(&a->refcnt))
++ kfree(a);
++}
++
+ /*
+ * Delete an aarp queue
+ *
+@@ -87,7 +101,7 @@ static struct timer_list aarp_timer;
+ static void __aarp_expire(struct aarp_entry *a)
+ {
+ skb_queue_purge(&a->packet_queue);
+- kfree(a);
++ aarp_entry_put(a);
+ }
+
+ /*
+@@ -380,9 +394,11 @@ static void aarp_purge(void)
+ static struct aarp_entry *aarp_alloc(void)
+ {
+ struct aarp_entry *a = kmalloc(sizeof(*a), GFP_ATOMIC);
++ if (!a)
++ return NULL;
+
+- if (a)
+- skb_queue_head_init(&a->packet_queue);
++ refcount_set(&a->refcnt, 1);
++ skb_queue_head_init(&a->packet_queue);
+ return a;
+ }
+
+@@ -508,6 +524,7 @@ int aarp_proxy_probe_network(struct atalk_iface *atif, struct atalk_addr *sa)
+ entry->dev = atif->dev;
+
+ write_lock_bh(&aarp_lock);
++ aarp_entry_get(entry);
+
+ hash = sa->s_node % (AARP_HASH_SIZE - 1);
+ entry->next = proxies[hash];
+@@ -533,6 +550,7 @@ int aarp_proxy_probe_network(struct atalk_iface *atif, struct atalk_addr *sa)
+ retval = 1;
+ }
+
++ aarp_entry_put(entry);
+ write_unlock_bh(&aarp_lock);
+ out:
+ return retval;
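
The AARP change converts a bare kfree() into get/put reference counting so an entry can outlive the hash-table lock that found it. A minimal userspace analogue with C11 atomics — the kernel's refcount_t additionally saturates on overflow, which this sketch omits:

    #include <stdatomic.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct entry {
        atomic_int refcnt;
        int data;
    };

    static struct entry *entry_alloc(void)
    {
        struct entry *e = calloc(1, sizeof(*e));

        if (e)
            atomic_store(&e->refcnt, 1); /* creator's reference */
        return e;
    }

    static void entry_get(struct entry *e)
    {
        atomic_fetch_add(&e->refcnt, 1);
    }

    static void entry_put(struct entry *e)
    {
        /* fetch_sub returns the old value: 1 means we were last */
        if (atomic_fetch_sub(&e->refcnt, 1) == 1)
            free(e);
    }

    int main(void)
    {
        struct entry *e = entry_alloc();

        entry_get(e); /* second user, e.g. while a lock is held */
        entry_put(e); /* second user done */
        entry_put(e); /* last reference: freed here, exactly once */
        puts("freed exactly once");
        return 0;
    }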
+diff --git a/net/caif/cfctrl.c b/net/caif/cfctrl.c
+index 8480684f276251..10eeace2278b72 100644
+--- a/net/caif/cfctrl.c
++++ b/net/caif/cfctrl.c
+@@ -351,17 +351,154 @@ int cfctrl_cancel_req(struct cflayer *layr, struct cflayer *adap_layer)
+ return found;
+ }
+
++static int cfctrl_link_setup(struct cfctrl *cfctrl, struct cfpkt *pkt, u8 cmdrsp)
++{
++ u8 len;
++ u8 linkid = 0;
++ enum cfctrl_srv serv;
++ enum cfctrl_srv servtype;
++ u8 endpoint;
++ u8 physlinkid;
++ u8 prio;
++ u8 tmp;
++ u8 *cp;
++ int i;
++ struct cfctrl_link_param linkparam;
++ struct cfctrl_request_info rsp, *req;
++
++ memset(&linkparam, 0, sizeof(linkparam));
++
++ tmp = cfpkt_extr_head_u8(pkt);
++
++ serv = tmp & CFCTRL_SRV_MASK;
++ linkparam.linktype = serv;
++
++ servtype = tmp >> 4;
++ linkparam.chtype = servtype;
++
++ tmp = cfpkt_extr_head_u8(pkt);
++ physlinkid = tmp & 0x07;
++ prio = tmp >> 3;
++
++ linkparam.priority = prio;
++ linkparam.phyid = physlinkid;
++ endpoint = cfpkt_extr_head_u8(pkt);
++ linkparam.endpoint = endpoint & 0x03;
++
++ switch (serv) {
++ case CFCTRL_SRV_VEI:
++ case CFCTRL_SRV_DBG:
++ if (CFCTRL_ERR_BIT & cmdrsp)
++ break;
++ /* Link ID */
++ linkid = cfpkt_extr_head_u8(pkt);
++ break;
++ case CFCTRL_SRV_VIDEO:
++ tmp = cfpkt_extr_head_u8(pkt);
++ linkparam.u.video.connid = tmp;
++ if (CFCTRL_ERR_BIT & cmdrsp)
++ break;
++ /* Link ID */
++ linkid = cfpkt_extr_head_u8(pkt);
++ break;
++
++ case CFCTRL_SRV_DATAGRAM:
++ linkparam.u.datagram.connid = cfpkt_extr_head_u32(pkt);
++ if (CFCTRL_ERR_BIT & cmdrsp)
++ break;
++ /* Link ID */
++ linkid = cfpkt_extr_head_u8(pkt);
++ break;
++ case CFCTRL_SRV_RFM:
++ /* Construct a frame, convert
++ * DatagramConnectionID
++ * to network format long and copy it out...
++ */
++ linkparam.u.rfm.connid = cfpkt_extr_head_u32(pkt);
++ cp = (u8 *) linkparam.u.rfm.volume;
++ for (tmp = cfpkt_extr_head_u8(pkt);
++ cfpkt_more(pkt) && tmp != '\0';
++ tmp = cfpkt_extr_head_u8(pkt))
++ *cp++ = tmp;
++ *cp = '\0';
++
++ if (CFCTRL_ERR_BIT & cmdrsp)
++ break;
++ /* Link ID */
++ linkid = cfpkt_extr_head_u8(pkt);
++
++ break;
++ case CFCTRL_SRV_UTIL:
++ /* Construct a frame, convert
++ * DatagramConnectionID
++ * to network format long and copy it out...
++ */
++ /* Fifosize KB */
++ linkparam.u.utility.fifosize_kb = cfpkt_extr_head_u16(pkt);
++ /* Fifosize bufs */
++ linkparam.u.utility.fifosize_bufs = cfpkt_extr_head_u16(pkt);
++ /* name */
++ cp = (u8 *) linkparam.u.utility.name;
++ caif_assert(sizeof(linkparam.u.utility.name)
++ >= UTILITY_NAME_LENGTH);
++ for (i = 0; i < UTILITY_NAME_LENGTH && cfpkt_more(pkt); i++) {
++ tmp = cfpkt_extr_head_u8(pkt);
++ *cp++ = tmp;
++ }
++ /* Length */
++ len = cfpkt_extr_head_u8(pkt);
++ linkparam.u.utility.paramlen = len;
++ /* Param Data */
++ cp = linkparam.u.utility.params;
++ while (cfpkt_more(pkt) && len--) {
++ tmp = cfpkt_extr_head_u8(pkt);
++ *cp++ = tmp;
++ }
++ if (CFCTRL_ERR_BIT & cmdrsp)
++ break;
++ /* Link ID */
++ linkid = cfpkt_extr_head_u8(pkt);
++ /* Length */
++ len = cfpkt_extr_head_u8(pkt);
++ /* Param Data */
++ cfpkt_extr_head(pkt, NULL, len);
++ break;
++ default:
++ pr_warn("Request setup, invalid type (%d)\n", serv);
++ return -1;
++ }
++
++ rsp.cmd = CFCTRL_CMD_LINK_SETUP;
++ rsp.param = linkparam;
++ spin_lock_bh(&cfctrl->info_list_lock);
++ req = cfctrl_remove_req(cfctrl, &rsp);
++
++ if (CFCTRL_ERR_BIT == (CFCTRL_ERR_BIT & cmdrsp) ||
++ cfpkt_erroneous(pkt)) {
++ pr_err("Invalid O/E bit or parse error "
++ "on CAIF control channel\n");
++ cfctrl->res.reject_rsp(cfctrl->serv.layer.up, 0,
++ req ? req->client_layer : NULL);
++ } else {
++ cfctrl->res.linksetup_rsp(cfctrl->serv.layer.up, linkid,
++ serv, physlinkid,
++ req ? req->client_layer : NULL);
++ }
++
++ kfree(req);
++
++ spin_unlock_bh(&cfctrl->info_list_lock);
++
++ return 0;
++}
++
+ static int cfctrl_recv(struct cflayer *layer, struct cfpkt *pkt)
+ {
+ u8 cmdrsp;
+ u8 cmd;
+- int ret = -1;
+- u8 len;
+- u8 param[255];
++ int ret = 0;
+ u8 linkid = 0;
+ struct cfctrl *cfctrl = container_obj(layer);
+- struct cfctrl_request_info rsp, *req;
+-
+
+ cmdrsp = cfpkt_extr_head_u8(pkt);
+ cmd = cmdrsp & CFCTRL_CMD_MASK;
+@@ -374,150 +511,7 @@ static int cfctrl_recv(struct cflayer *layer, struct cfpkt *pkt)
+
+ switch (cmd) {
+ case CFCTRL_CMD_LINK_SETUP:
+- {
+- enum cfctrl_srv serv;
+- enum cfctrl_srv servtype;
+- u8 endpoint;
+- u8 physlinkid;
+- u8 prio;
+- u8 tmp;
+- u8 *cp;
+- int i;
+- struct cfctrl_link_param linkparam;
+- memset(&linkparam, 0, sizeof(linkparam));
+-
+- tmp = cfpkt_extr_head_u8(pkt);
+-
+- serv = tmp & CFCTRL_SRV_MASK;
+- linkparam.linktype = serv;
+-
+- servtype = tmp >> 4;
+- linkparam.chtype = servtype;
+-
+- tmp = cfpkt_extr_head_u8(pkt);
+- physlinkid = tmp & 0x07;
+- prio = tmp >> 3;
+-
+- linkparam.priority = prio;
+- linkparam.phyid = physlinkid;
+- endpoint = cfpkt_extr_head_u8(pkt);
+- linkparam.endpoint = endpoint & 0x03;
+-
+- switch (serv) {
+- case CFCTRL_SRV_VEI:
+- case CFCTRL_SRV_DBG:
+- if (CFCTRL_ERR_BIT & cmdrsp)
+- break;
+- /* Link ID */
+- linkid = cfpkt_extr_head_u8(pkt);
+- break;
+- case CFCTRL_SRV_VIDEO:
+- tmp = cfpkt_extr_head_u8(pkt);
+- linkparam.u.video.connid = tmp;
+- if (CFCTRL_ERR_BIT & cmdrsp)
+- break;
+- /* Link ID */
+- linkid = cfpkt_extr_head_u8(pkt);
+- break;
+-
+- case CFCTRL_SRV_DATAGRAM:
+- linkparam.u.datagram.connid =
+- cfpkt_extr_head_u32(pkt);
+- if (CFCTRL_ERR_BIT & cmdrsp)
+- break;
+- /* Link ID */
+- linkid = cfpkt_extr_head_u8(pkt);
+- break;
+- case CFCTRL_SRV_RFM:
+- /* Construct a frame, convert
+- * DatagramConnectionID
+- * to network format long and copy it out...
+- */
+- linkparam.u.rfm.connid =
+- cfpkt_extr_head_u32(pkt);
+- cp = (u8 *) linkparam.u.rfm.volume;
+- for (tmp = cfpkt_extr_head_u8(pkt);
+- cfpkt_more(pkt) && tmp != '\0';
+- tmp = cfpkt_extr_head_u8(pkt))
+- *cp++ = tmp;
+- *cp = '\0';
+-
+- if (CFCTRL_ERR_BIT & cmdrsp)
+- break;
+- /* Link ID */
+- linkid = cfpkt_extr_head_u8(pkt);
+-
+- break;
+- case CFCTRL_SRV_UTIL:
+- /* Construct a frame, convert
+- * DatagramConnectionID
+- * to network format long and copy it out...
+- */
+- /* Fifosize KB */
+- linkparam.u.utility.fifosize_kb =
+- cfpkt_extr_head_u16(pkt);
+- /* Fifosize bufs */
+- linkparam.u.utility.fifosize_bufs =
+- cfpkt_extr_head_u16(pkt);
+- /* name */
+- cp = (u8 *) linkparam.u.utility.name;
+- caif_assert(sizeof(linkparam.u.utility.name)
+- >= UTILITY_NAME_LENGTH);
+- for (i = 0;
+- i < UTILITY_NAME_LENGTH
+- && cfpkt_more(pkt); i++) {
+- tmp = cfpkt_extr_head_u8(pkt);
+- *cp++ = tmp;
+- }
+- /* Length */
+- len = cfpkt_extr_head_u8(pkt);
+- linkparam.u.utility.paramlen = len;
+- /* Param Data */
+- cp = linkparam.u.utility.params;
+- while (cfpkt_more(pkt) && len--) {
+- tmp = cfpkt_extr_head_u8(pkt);
+- *cp++ = tmp;
+- }
+- if (CFCTRL_ERR_BIT & cmdrsp)
+- break;
+- /* Link ID */
+- linkid = cfpkt_extr_head_u8(pkt);
+- /* Length */
+- len = cfpkt_extr_head_u8(pkt);
+- /* Param Data */
+- cfpkt_extr_head(pkt, ¶m, len);
+- break;
+- default:
+- pr_warn("Request setup, invalid type (%d)\n",
+- serv);
+- goto error;
+- }
+-
+- rsp.cmd = cmd;
+- rsp.param = linkparam;
+- spin_lock_bh(&cfctrl->info_list_lock);
+- req = cfctrl_remove_req(cfctrl, &rsp);
+-
+- if (CFCTRL_ERR_BIT == (CFCTRL_ERR_BIT & cmdrsp) ||
+- cfpkt_erroneous(pkt)) {
+- pr_err("Invalid O/E bit or parse error "
+- "on CAIF control channel\n");
+- cfctrl->res.reject_rsp(cfctrl->serv.layer.up,
+- 0,
+- req ? req->client_layer
+- : NULL);
+- } else {
+- cfctrl->res.linksetup_rsp(cfctrl->serv.
+- layer.up, linkid,
+- serv, physlinkid,
+- req ? req->
+- client_layer : NULL);
+- }
+-
+- kfree(req);
+-
+- spin_unlock_bh(&cfctrl->info_list_lock);
+- }
++ ret = cfctrl_link_setup(cfctrl, pkt, cmdrsp);
+ break;
+ case CFCTRL_CMD_LINK_DESTROY:
+ linkid = cfpkt_extr_head_u8(pkt);
+@@ -544,9 +538,9 @@ static int cfctrl_recv(struct cflayer *layer, struct cfpkt *pkt)
+ break;
+ default:
+ pr_err("Unrecognized Control Frame\n");
++ ret = -1;
+ goto error;
+ }
+- ret = 0;
+ error:
+ cfpkt_destroy(pkt);
+ return ret;
+diff --git a/net/core/filter.c b/net/core/filter.c
+index 4c806ce62739d9..cd0c28e94979a4 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -9270,6 +9270,9 @@ static bool flow_dissector_is_valid_access(int off, int size,
+ if (off < 0 || off >= sizeof(struct __sk_buff))
+ return false;
+
++ if (off % size != 0)
++ return false;
++
+ if (type == BPF_WRITE)
+ return false;
+
+diff --git a/net/core/netpoll.c b/net/core/netpoll.c
+index 657abbb7d0d7e6..89f5358d7a1beb 100644
+--- a/net/core/netpoll.c
++++ b/net/core/netpoll.c
+@@ -800,6 +800,13 @@ int netpoll_setup(struct netpoll *np)
+ goto put;
+ netdev_tracker_alloc(ndev, &np->dev_tracker, GFP_KERNEL);
+ rtnl_unlock();
++
++ /* Make sure all NAPI polls which started before dev->npinfo
++ * was visible have exited before we start calling NAPI poll.
++ * NAPI skips locking if dev->npinfo is NULL.
++ */
++ synchronize_rcu();
++
+ return 0;
+
+ put:
+diff --git a/net/core/skmsg.c b/net/core/skmsg.c
+index 2aa6262f19e845..01ca497fe2cd61 100644
+--- a/net/core/skmsg.c
++++ b/net/core/skmsg.c
+@@ -654,6 +654,13 @@ static void sk_psock_backlog(struct work_struct *work)
+ bool ingress;
+ int ret;
+
++ /* If sk is quickly removed from the map and then added back, the old
++ * psock should not be scheduled, because there are now two psocks
++ * pointing to the same sk.
++ */
++ if (!sk_psock_test_state(psock, SK_PSOCK_TX_ENABLED))
++ return;
++
+ /* Increment the psock refcnt to synchronize with close(fd) path in
+ * sock_map_close(), ensuring we wait for backlog thread completion
+ * before sk_socket freed. If refcnt increment fails, it indicates
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index 222b829f33f429..5ee1e1c2082cfe 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -4834,8 +4834,9 @@ static void tcp_ofo_queue(struct sock *sk)
+
+ if (before(TCP_SKB_CB(skb)->seq, dsack_high)) {
+ __u32 dsack = dsack_high;
++
+ if (before(TCP_SKB_CB(skb)->end_seq, dsack_high))
+- dsack_high = TCP_SKB_CB(skb)->end_seq;
++ dsack = TCP_SKB_CB(skb)->end_seq;
+ tcp_dsack_extend(sk, TCP_SKB_CB(skb)->seq, dsack);
+ }
+ p = rb_next(p);
+diff --git a/net/ipv6/ip6_fib.c b/net/ipv6/ip6_fib.c
+index b6a7cbd6bee0db..bb51a911a6ce76 100644
+--- a/net/ipv6/ip6_fib.c
++++ b/net/ipv6/ip6_fib.c
+@@ -438,15 +438,17 @@ struct fib6_dump_arg {
+ static int fib6_rt_dump(struct fib6_info *rt, struct fib6_dump_arg *arg)
+ {
+ enum fib_event_type fib_event = FIB_EVENT_ENTRY_REPLACE;
++ unsigned int nsiblings;
+ int err;
+
+ if (!rt || rt == arg->net->ipv6.fib6_null_entry)
+ return 0;
+
+- if (rt->fib6_nsiblings)
++ nsiblings = READ_ONCE(rt->fib6_nsiblings);
++ if (nsiblings)
+ err = call_fib6_multipath_entry_notifier(arg->nb, fib_event,
+ rt,
+- rt->fib6_nsiblings,
++ nsiblings,
+ arg->extack);
+ else
+ err = call_fib6_entry_notifier(arg->nb, fib_event, rt,
+@@ -1119,7 +1121,7 @@ static int fib6_add_rt2node(struct fib6_node *fn, struct fib6_info *rt,
+
+ if (rt6_duplicate_nexthop(iter, rt)) {
+ if (rt->fib6_nsiblings)
+- rt->fib6_nsiblings = 0;
++ WRITE_ONCE(rt->fib6_nsiblings, 0);
+ if (!(iter->fib6_flags & RTF_EXPIRES))
+ return -EEXIST;
+ if (!(rt->fib6_flags & RTF_EXPIRES))
+@@ -1145,7 +1147,8 @@ static int fib6_add_rt2node(struct fib6_node *fn, struct fib6_info *rt,
+ */
+ if (rt_can_ecmp &&
+ rt6_qualify_for_ecmp(iter))
+- rt->fib6_nsiblings++;
++ WRITE_ONCE(rt->fib6_nsiblings,
++ rt->fib6_nsiblings + 1);
+ }
+
+ if (iter->fib6_metric > rt->fib6_metric)
+@@ -1195,7 +1198,8 @@ static int fib6_add_rt2node(struct fib6_node *fn, struct fib6_info *rt,
+ fib6_nsiblings = 0;
+ list_for_each_entry_safe(sibling, temp_sibling,
+ &rt->fib6_siblings, fib6_siblings) {
+- sibling->fib6_nsiblings++;
++ WRITE_ONCE(sibling->fib6_nsiblings,
++ sibling->fib6_nsiblings + 1);
+ BUG_ON(sibling->fib6_nsiblings != rt->fib6_nsiblings);
+ fib6_nsiblings++;
+ }
+@@ -1240,8 +1244,9 @@ static int fib6_add_rt2node(struct fib6_node *fn, struct fib6_info *rt,
+ list_for_each_entry_safe(sibling, next_sibling,
+ &rt->fib6_siblings,
+ fib6_siblings)
+- sibling->fib6_nsiblings--;
+- rt->fib6_nsiblings = 0;
++ WRITE_ONCE(sibling->fib6_nsiblings,
++ sibling->fib6_nsiblings - 1);
++ WRITE_ONCE(rt->fib6_nsiblings, 0);
+ list_del_rcu(&rt->fib6_siblings);
+ rt6_multipath_rebalance(next_sibling);
+ return err;
+@@ -1953,8 +1958,9 @@ static void fib6_del_route(struct fib6_table *table, struct fib6_node *fn,
+ notify_del = true;
+ list_for_each_entry_safe(sibling, next_sibling,
+ &rt->fib6_siblings, fib6_siblings)
+- sibling->fib6_nsiblings--;
+- rt->fib6_nsiblings = 0;
++ WRITE_ONCE(sibling->fib6_nsiblings,
++ sibling->fib6_nsiblings - 1);
++ WRITE_ONCE(rt->fib6_nsiblings, 0);
+ list_del_rcu(&rt->fib6_siblings);
+ rt6_multipath_rebalance(next_sibling);
+ }
+diff --git a/net/ipv6/ip6_offload.c b/net/ipv6/ip6_offload.c
+index 3ee345672849a8..171a5c1afefeaa 100644
+--- a/net/ipv6/ip6_offload.c
++++ b/net/ipv6/ip6_offload.c
+@@ -134,7 +134,9 @@ static struct sk_buff *ipv6_gso_segment(struct sk_buff *skb,
+
+ ops = rcu_dereference(inet6_offloads[proto]);
+ if (likely(ops && ops->callbacks.gso_segment)) {
+- skb_reset_transport_header(skb);
++ if (!skb_reset_transport_header_careful(skb))
++ goto out;
++
+ segs = ops->callbacks.gso_segment(skb, features);
+ if (!segs)
+ skb->network_header = skb_mac_header(skb) + nhoff - skb->head;
+diff --git a/net/ipv6/ip6mr.c b/net/ipv6/ip6mr.c
+index 138f6aee70afcc..06f66531628fec 100644
+--- a/net/ipv6/ip6mr.c
++++ b/net/ipv6/ip6mr.c
+@@ -2045,6 +2045,7 @@ static int ip6mr_forward2(struct net *net, struct mr_table *mrt,
+ struct sk_buff *skb, int vifi)
+ {
+ struct vif_device *vif = &mrt->vif_table[vifi];
++ struct net_device *indev = skb->dev;
+ struct net_device *vif_dev;
+ struct ipv6hdr *ipv6h;
+ struct dst_entry *dst;
+@@ -2107,7 +2108,7 @@ static int ip6mr_forward2(struct net *net, struct mr_table *mrt,
+ IP6CB(skb)->flags |= IP6SKB_FORWARDED;
+
+ return NF_HOOK(NFPROTO_IPV6, NF_INET_FORWARD,
+- net, NULL, skb, skb->dev, vif_dev,
++ net, NULL, skb, indev, skb->dev,
+ ip6mr_forward2_finish);
+
+ out_free:
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index 4e6b833dc40bb4..07e3d59c24059b 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -5233,7 +5233,8 @@ static void ip6_route_mpath_notify(struct fib6_info *rt,
+ */
+ rcu_read_lock();
+
+- if ((nlflags & NLM_F_APPEND) && rt_last && rt_last->fib6_nsiblings) {
++ if ((nlflags & NLM_F_APPEND) && rt_last &&
++ READ_ONCE(rt_last->fib6_nsiblings)) {
+ rt = list_first_or_null_rcu(&rt_last->fib6_siblings,
+ struct fib6_info,
+ fib6_siblings);
+@@ -5580,32 +5581,34 @@ static int rt6_nh_nlmsg_size(struct fib6_nh *nh, void *arg)
+
+ static size_t rt6_nlmsg_size(struct fib6_info *f6i)
+ {
++ struct fib6_info *sibling;
++ struct fib6_nh *nh;
+ int nexthop_len;
+
+ if (f6i->nh) {
+ nexthop_len = nla_total_size(4); /* RTA_NH_ID */
+ nexthop_for_each_fib6_nh(f6i->nh, rt6_nh_nlmsg_size,
+ &nexthop_len);
+- } else {
+- struct fib6_nh *nh = f6i->fib6_nh;
+- struct fib6_info *sibling;
+-
+- nexthop_len = 0;
+- if (f6i->fib6_nsiblings) {
+- rt6_nh_nlmsg_size(nh, &nexthop_len);
+-
+- rcu_read_lock();
++ goto common;
++ }
+
+- list_for_each_entry_rcu(sibling, &f6i->fib6_siblings,
+- fib6_siblings) {
+- rt6_nh_nlmsg_size(sibling->fib6_nh, &nexthop_len);
+- }
++ rcu_read_lock();
++retry:
++ nh = f6i->fib6_nh;
++ nexthop_len = 0;
++ if (READ_ONCE(f6i->fib6_nsiblings)) {
++ rt6_nh_nlmsg_size(nh, &nexthop_len);
+
+- rcu_read_unlock();
++ list_for_each_entry_rcu(sibling, &f6i->fib6_siblings,
++ fib6_siblings) {
++ rt6_nh_nlmsg_size(sibling->fib6_nh, &nexthop_len);
++ if (!READ_ONCE(f6i->fib6_nsiblings))
++ goto retry;
+ }
+- nexthop_len += lwtunnel_get_encap_size(nh->fib_nh_lws);
+ }
+-
++ rcu_read_unlock();
++ nexthop_len += lwtunnel_get_encap_size(nh->fib_nh_lws);
++common:
+ return NLMSG_ALIGN(sizeof(struct rtmsg))
+ + nla_total_size(16) /* RTA_SRC */
+ + nla_total_size(16) /* RTA_DST */
+@@ -5764,7 +5767,7 @@ static int rt6_fill_node(struct net *net, struct sk_buff *skb,
+ if (dst->lwtstate &&
+ lwtunnel_fill_encap(skb, dst->lwtstate, RTA_ENCAP, RTA_ENCAP_TYPE) < 0)
+ goto nla_put_failure;
+- } else if (rt->fib6_nsiblings) {
++ } else if (READ_ONCE(rt->fib6_nsiblings)) {
+ struct fib6_info *sibling;
+ struct nlattr *mp;
+
+@@ -5866,16 +5869,21 @@ static bool fib6_info_uses_dev(const struct fib6_info *f6i,
+ if (f6i->fib6_nh->fib_nh_dev == dev)
+ return true;
+
+- if (f6i->fib6_nsiblings) {
+- struct fib6_info *sibling, *next_sibling;
++ if (READ_ONCE(f6i->fib6_nsiblings)) {
++ const struct fib6_info *sibling;
+
+- list_for_each_entry_safe(sibling, next_sibling,
+- &f6i->fib6_siblings, fib6_siblings) {
+- if (sibling->fib6_nh->fib_nh_dev == dev)
++ rcu_read_lock();
++ list_for_each_entry_rcu(sibling, &f6i->fib6_siblings,
++ fib6_siblings) {
++ if (sibling->fib6_nh->fib_nh_dev == dev) {
++ rcu_read_unlock();
+ return true;
++ }
++ if (!READ_ONCE(f6i->fib6_nsiblings))
++ break;
+ }
++ rcu_read_unlock();
+ }
+-
+ return false;
+ }
+
+diff --git a/net/mac80211/tdls.c b/net/mac80211/tdls.c
+index f4b4d25eef95f1..04531d18fa931c 100644
+--- a/net/mac80211/tdls.c
++++ b/net/mac80211/tdls.c
+@@ -1351,7 +1351,7 @@ int ieee80211_tdls_oper(struct wiphy *wiphy, struct net_device *dev,
+ if (!(wiphy->flags & WIPHY_FLAG_SUPPORTS_TDLS))
+ return -ENOTSUPP;
+
+- if (sdata->vif.type != NL80211_IFTYPE_STATION)
++ if (sdata->vif.type != NL80211_IFTYPE_STATION || !sdata->vif.cfg.assoc)
+ return -EINVAL;
+
+ switch (oper) {
+diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c
+index 62b2817df2ba90..e6cf5ab928a638 100644
+--- a/net/mac80211/tx.c
++++ b/net/mac80211/tx.c
+@@ -644,6 +644,12 @@ ieee80211_tx_h_select_key(struct ieee80211_tx_data *tx)
+ else
+ tx->key = NULL;
+
++ if (info->flags & IEEE80211_TX_CTL_HW_80211_ENCAP) {
++ if (tx->key && tx->key->flags & KEY_FLAG_UPLOADED_TO_HARDWARE)
++ info->control.hw_key = &tx->key->conf;
++ return TX_CONTINUE;
++ }
++
+ if (tx->key) {
+ bool skip_hw = false;
+
+@@ -1467,7 +1473,7 @@ static void ieee80211_txq_enqueue(struct ieee80211_local *local,
+ {
+ struct fq *fq = &local->fq;
+ struct fq_tin *tin = &txqi->tin;
+- u32 flow_idx = fq_flow_idx(fq, skb);
++ u32 flow_idx;
+
+ ieee80211_set_skb_enqueue_time(skb);
+
+@@ -1483,6 +1489,7 @@ static void ieee80211_txq_enqueue(struct ieee80211_local *local,
+ IEEE80211_TX_INTCFL_NEED_TXPROCESSING;
+ __skb_queue_tail(&txqi->frags, skb);
+ } else {
++ flow_idx = fq_flow_idx(fq, skb);
+ fq_tin_enqueue(fq, tin, flow_idx, skb,
+ fq_skb_free_func);
+ }
+@@ -3800,6 +3807,7 @@ struct sk_buff *ieee80211_tx_dequeue(struct ieee80211_hw *hw,
+ * The key can be removed while the packet was queued, so need to call
+ * this here to get the current key.
+ */
++ info->control.hw_key = NULL;
+ r = ieee80211_tx_h_select_key(&tx);
+ if (r != TX_CONTINUE) {
+ ieee80211_free_txskb(&local->hw, skb);
+@@ -4024,7 +4032,9 @@ void __ieee80211_schedule_txq(struct ieee80211_hw *hw,
+
+ spin_lock_bh(&local->active_txq_lock[txq->ac]);
+
+- has_queue = force || txq_has_queue(txq);
++ has_queue = force ||
++ (!test_bit(IEEE80211_TXQ_STOP, &txqi->flags) &&
++ txq_has_queue(txq));
+ if (list_empty(&txqi->schedule_order) &&
+ (has_queue || ieee80211_txq_keep_active(txqi))) {
+ /* If airtime accounting is active, always enqueue STAs at the
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 0bf347a0a1dd6e..df83224bef06ce 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -3513,7 +3513,7 @@ void nf_tables_rule_destroy(const struct nft_ctx *ctx, struct nft_rule *rule)
+ /* can only be used if rule is no longer visible to dumps */
+ static void nf_tables_rule_release(const struct nft_ctx *ctx, struct nft_rule *rule)
+ {
+- lockdep_commit_lock_is_held(ctx->net);
++ WARN_ON_ONCE(!lockdep_commit_lock_is_held(ctx->net));
+
+ nft_rule_expr_deactivate(ctx, rule, NFT_TRANS_RELEASE);
+ nf_tables_rule_destroy(ctx, rule);
+@@ -5250,7 +5250,7 @@ void nf_tables_deactivate_set(const struct nft_ctx *ctx, struct nft_set *set,
+ struct nft_set_binding *binding,
+ enum nft_trans_phase phase)
+ {
+- lockdep_commit_lock_is_held(ctx->net);
++ WARN_ON_ONCE(!lockdep_commit_lock_is_held(ctx->net));
+
+ switch (phase) {
+ case NFT_TRANS_PREPARE_ERROR:
+diff --git a/net/netfilter/xt_nfacct.c b/net/netfilter/xt_nfacct.c
+index 7c6bf1c168131a..0ca1cdfc4095b6 100644
+--- a/net/netfilter/xt_nfacct.c
++++ b/net/netfilter/xt_nfacct.c
+@@ -38,8 +38,8 @@ nfacct_mt_checkentry(const struct xt_mtchk_param *par)
+
+ nfacct = nfnl_acct_find_get(par->net, info->name);
+ if (nfacct == NULL) {
+- pr_info_ratelimited("accounting object `%s' does not exists\n",
+- info->name);
++ pr_info_ratelimited("accounting object `%.*s' does not exist\n",
++ NFACCT_NAME_MAX, info->name);
+ return -ENOENT;
+ }
+ info->nfacct = nfacct;
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index 4753a796cf4c77..8c06e3e6b52b59 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -4541,10 +4541,10 @@ static int packet_set_ring(struct sock *sk, union tpacket_req_u *req_u,
+ spin_lock(&po->bind_lock);
+ was_running = po->running;
+ num = po->num;
+- if (was_running) {
+- WRITE_ONCE(po->num, 0);
++ WRITE_ONCE(po->num, 0);
++ if (was_running)
+ __unregister_prot_hook(sk, false);
+- }
++
+ spin_unlock(&po->bind_lock);
+
+ synchronize_net();
+@@ -4576,10 +4576,10 @@ static int packet_set_ring(struct sock *sk, union tpacket_req_u *req_u,
+ mutex_unlock(&po->pg_vec_lock);
+
+ spin_lock(&po->bind_lock);
+- if (was_running) {
+- WRITE_ONCE(po->num, num);
++ WRITE_ONCE(po->num, num);
++ if (was_running)
+ register_prot_hook(sk);
+- }
++
+ spin_unlock(&po->bind_lock);
+ if (pg_vec && (po->tp_version > TPACKET_V2)) {
+ /* Because we don't support block-based V3 on tx-ring */
+diff --git a/net/sched/act_ctinfo.c b/net/sched/act_ctinfo.c
+index 7275ad869f8ea5..34a1fb617a0fe5 100644
+--- a/net/sched/act_ctinfo.c
++++ b/net/sched/act_ctinfo.c
+@@ -43,9 +43,9 @@ static void tcf_ctinfo_dscp_set(struct nf_conn *ct, struct tcf_ctinfo *ca,
+ ipv4_change_dsfield(ip_hdr(skb),
+ INET_ECN_MASK,
+ newdscp);
+- ca->stats_dscp_set++;
++ atomic64_inc(&ca->stats_dscp_set);
+ } else {
+- ca->stats_dscp_error++;
++ atomic64_inc(&ca->stats_dscp_error);
+ }
+ }
+ break;
+@@ -56,9 +56,9 @@ static void tcf_ctinfo_dscp_set(struct nf_conn *ct, struct tcf_ctinfo *ca,
+ ipv6_change_dsfield(ipv6_hdr(skb),
+ INET_ECN_MASK,
+ newdscp);
+- ca->stats_dscp_set++;
++ atomic64_inc(&ca->stats_dscp_set);
+ } else {
+- ca->stats_dscp_error++;
++ atomic64_inc(&ca->stats_dscp_error);
+ }
+ }
+ break;
+@@ -71,7 +71,7 @@ static void tcf_ctinfo_cpmark_set(struct nf_conn *ct, struct tcf_ctinfo *ca,
+ struct tcf_ctinfo_params *cp,
+ struct sk_buff *skb)
+ {
+- ca->stats_cpmark_set++;
++ atomic64_inc(&ca->stats_cpmark_set);
+ skb->mark = READ_ONCE(ct->mark) & cp->cpmarkmask;
+ }
+
+@@ -321,15 +321,18 @@ static int tcf_ctinfo_dump(struct sk_buff *skb, struct tc_action *a,
+ }
+
+ if (nla_put_u64_64bit(skb, TCA_CTINFO_STATS_DSCP_SET,
+- ci->stats_dscp_set, TCA_CTINFO_PAD))
++ atomic64_read(&ci->stats_dscp_set),
++ TCA_CTINFO_PAD))
+ goto nla_put_failure;
+
+ if (nla_put_u64_64bit(skb, TCA_CTINFO_STATS_DSCP_ERROR,
+- ci->stats_dscp_error, TCA_CTINFO_PAD))
++ atomic64_read(&ci->stats_dscp_error),
++ TCA_CTINFO_PAD))
+ goto nla_put_failure;
+
+ if (nla_put_u64_64bit(skb, TCA_CTINFO_STATS_CPMARK_SET,
+- ci->stats_cpmark_set, TCA_CTINFO_PAD))
++ atomic64_read(&ci->stats_cpmark_set),
++ TCA_CTINFO_PAD))
+ goto nla_put_failure;
+
+ spin_unlock_bh(&ci->tcf_lock);
+diff --git a/net/sched/sch_netem.c b/net/sched/sch_netem.c
+index cb38e58ee771d8..2613353defde7a 100644
+--- a/net/sched/sch_netem.c
++++ b/net/sched/sch_netem.c
+@@ -962,6 +962,41 @@ static int parse_attr(struct nlattr *tb[], int maxtype, struct nlattr *nla,
+ return 0;
+ }
+
++static const struct Qdisc_class_ops netem_class_ops;
++
++static int check_netem_in_tree(struct Qdisc *sch, bool duplicates,
++ struct netlink_ext_ack *extack)
++{
++ struct Qdisc *root, *q;
++ unsigned int i;
++
++ root = qdisc_root_sleeping(sch);
++
++ if (sch != root && root->ops->cl_ops == &netem_class_ops) {
++ if (duplicates ||
++ ((struct netem_sched_data *)qdisc_priv(root))->duplicate)
++ goto err;
++ }
++
++ if (!qdisc_dev(root))
++ return 0;
++
++ hash_for_each(qdisc_dev(root)->qdisc_hash, i, q, hash) {
++ if (sch != q && q->ops->cl_ops == &netem_class_ops) {
++ if (duplicates ||
++ ((struct netem_sched_data *)qdisc_priv(q))->duplicate)
++ goto err;
++ }
++ }
++
++ return 0;
++
++err:
++ NL_SET_ERR_MSG(extack,
++ "netem: cannot mix duplicating netems with other netems in tree");
++ return -EINVAL;
++}
++
+ /* Parse netlink message to set options */
+ static int netem_change(struct Qdisc *sch, struct nlattr *opt,
+ struct netlink_ext_ack *extack)
+@@ -1020,6 +1055,11 @@ static int netem_change(struct Qdisc *sch, struct nlattr *opt,
+ q->gap = qopt->gap;
+ q->counter = 0;
+ q->loss = qopt->loss;
++
++ ret = check_netem_in_tree(sch, qopt->duplicate, extack);
++ if (ret)
++ goto unlock;
++
+ q->duplicate = qopt->duplicate;
+
+ /* for compatibility with earlier versions.
+diff --git a/net/sched/sch_qfq.c b/net/sched/sch_qfq.c
+index f2692c9173f79c..2f2863ae18ad59 100644
+--- a/net/sched/sch_qfq.c
++++ b/net/sched/sch_qfq.c
+@@ -540,9 +540,6 @@ static int qfq_change_class(struct Qdisc *sch, u32 classid, u32 parentid,
+
+ static void qfq_destroy_class(struct Qdisc *sch, struct qfq_class *cl)
+ {
+- struct qfq_sched *q = qdisc_priv(sch);
+-
+- qfq_rm_from_agg(q, cl);
+ gen_kill_estimator(&cl->rate_est);
+ qdisc_put(cl->qdisc);
+ kfree(cl);
+@@ -561,10 +558,11 @@ static int qfq_delete_class(struct Qdisc *sch, unsigned long arg,
+
+ qdisc_purge_queue(cl->qdisc);
+ qdisc_class_hash_remove(&q->clhash, &cl->common);
+- qfq_destroy_class(sch, cl);
++ qfq_rm_from_agg(q, cl);
+
+ sch_tree_unlock(sch);
+
++ qfq_destroy_class(sch, cl);
+ return 0;
+ }
+
+@@ -1505,6 +1503,7 @@ static void qfq_destroy_qdisc(struct Qdisc *sch)
+ for (i = 0; i < q->clhash.hashsize; i++) {
+ hlist_for_each_entry_safe(cl, next, &q->clhash.hash[i],
+ common.hnode) {
++ qfq_rm_from_agg(q, cl);
+ qfq_destroy_class(sch, cl);
+ }
+ }
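
The qfq change unlinks the class while the qdisc tree lock is held but defers the teardown — qdisc_put() may sleep or re-enter the tree — until after sch_tree_unlock(). A generic sketch of that unlink-under-lock, free-outside idiom using pthreads (build with -pthread; names are illustrative):

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct node {
        struct node *next;
        int key;
    };

    static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;
    static struct node *head;

    static void delete_key(int key)
    {
        struct node **pp, *victim = NULL;

        pthread_mutex_lock(&list_lock);
        for (pp = &head; *pp; pp = &(*pp)->next) {
            if ((*pp)->key == key) {
                victim = *pp;
                *pp = victim->next; /* unlink while locked */
                break;
            }
        }
        pthread_mutex_unlock(&list_lock);

        /* teardown may be slow or reentrant: do it unlocked */
        free(victim);
    }

    int main(void)
    {
        struct node *n = calloc(1, sizeof(*n));

        n->key = 42;
        head = n;
        delete_key(42);
        printf("head=%p\n", (void *)head); /* (nil) */
        return 0;
    }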
+diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
+index 5f95f837dfc7fd..6ac3dcbe87b5cb 100644
+--- a/net/tls/tls_sw.c
++++ b/net/tls/tls_sw.c
+@@ -868,6 +868,19 @@ static int bpf_exec_tx_verdict(struct sk_msg *msg, struct sock *sk,
+ delta = msg->sg.size;
+ psock->eval = sk_psock_msg_verdict(sk, psock, msg);
+ delta -= msg->sg.size;
++
++ if ((s32)delta > 0) {
++ /* It indicates that we executed bpf_msg_pop_data(),
++ * causing the plaintext data size to decrease.
++ * Therefore the encrypted data size also needs to
++ * correspondingly decrease. We only need to subtract
++ * delta to calculate the new ciphertext length since
++ * ktls does not support block encryption.
++ */
++ struct sk_msg *enc = &ctx->open_rec->msg_encrypted;
++
++ sk_msg_trim(sk, enc, enc->sg.size - delta);
++ }
+ }
+ if (msg->cork_bytes && msg->cork_bytes > msg->sg.size &&
+ !enospc && !full_record) {
+diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
+index 678b809affe03e..4184e8110f5632 100644
+--- a/net/vmw_vsock/af_vsock.c
++++ b/net/vmw_vsock/af_vsock.c
+@@ -686,7 +686,8 @@ static int __vsock_bind_connectible(struct vsock_sock *vsk,
+ unsigned int i;
+
+ for (i = 0; i < MAX_PORT_RETRIES; i++) {
+- if (port <= LAST_RESERVED_PORT)
++ if (port == VMADDR_PORT_ANY ||
++ port <= LAST_RESERVED_PORT)
+ port = LAST_RESERVED_PORT + 1;
+
+ new_addr.svm_port = port++;
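
The vsock fix handles the auto-bind counter wrapping: after port++ wraps past the top of the range it restarts inside the reserved ports, and VMADDR_PORT_ANY (all ones) must itself never be handed out. A small sketch of the corrected check, with illustrative constant values:

    #include <stdint.h>
    #include <stdio.h>

    #define LAST_RESERVED_PORT 1023u
    #define PORT_ANY UINT32_MAX /* stands in for VMADDR_PORT_ANY */

    /* Both the "no port yet" sentinel and a wrapped counter must
     * land back at the start of the dynamic range. */
    static uint32_t next_port(uint32_t port)
    {
        if (port == PORT_ANY || port <= LAST_RESERVED_PORT)
            port = LAST_RESERVED_PORT + 1;
        return port;
    }

    int main(void)
    {
        printf("%u\n", next_port(PORT_ANY)); /* 1024: sentinel */
        printf("%u\n", next_port(0));        /* 1024: wrapped */
        printf("%u\n", next_port(50000));    /* 50000: untouched */
        return 0;
    }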
+diff --git a/net/xfrm/xfrm_interface_core.c b/net/xfrm/xfrm_interface_core.c
+index 85501b77f4e371..45466fa4ace434 100644
+--- a/net/xfrm/xfrm_interface_core.c
++++ b/net/xfrm/xfrm_interface_core.c
+@@ -871,7 +871,7 @@ static int xfrmi_changelink(struct net_device *dev, struct nlattr *tb[],
+ return -EINVAL;
+ }
+
+- if (p.collect_md) {
++ if (p.collect_md || xi->p.collect_md) {
+ NL_SET_ERR_MSG(extack, "collect_md can't be changed");
+ return -EINVAL;
+ }
+@@ -882,11 +882,6 @@ static int xfrmi_changelink(struct net_device *dev, struct nlattr *tb[],
+ } else {
+ if (xi->dev != dev)
+ return -EEXIST;
+- if (xi->p.collect_md) {
+- NL_SET_ERR_MSG(extack,
+- "device can't be changed to collect_md");
+- return -EINVAL;
+- }
+ }
+
+ return xfrmi_update(xi, &p);
+diff --git a/samples/mei/mei-amt-version.c b/samples/mei/mei-amt-version.c
+index 867debd3b9124c..1d7254bcb44cb7 100644
+--- a/samples/mei/mei-amt-version.c
++++ b/samples/mei/mei-amt-version.c
+@@ -69,11 +69,11 @@
+ #include <string.h>
+ #include <fcntl.h>
+ #include <sys/ioctl.h>
++#include <sys/time.h>
+ #include <unistd.h>
+ #include <errno.h>
+ #include <stdint.h>
+ #include <stdbool.h>
+-#include <bits/wordsize.h>
+ #include <linux/mei.h>
+
+ /*****************************************************************************
+diff --git a/scripts/kconfig/qconf.cc b/scripts/kconfig/qconf.cc
+index 61b679f6c2f2a1..c31dead186cca2 100644
+--- a/scripts/kconfig/qconf.cc
++++ b/scripts/kconfig/qconf.cc
+@@ -478,7 +478,7 @@ void ConfigList::updateListAllForAll()
+ while (it.hasNext()) {
+ ConfigList *list = it.next();
+
+- list->updateList();
++ list->updateListAll();
+ }
+ }
+
+diff --git a/security/apparmor/include/match.h b/security/apparmor/include/match.h
+index 8844895905881b..29306ec87fd1ab 100644
+--- a/security/apparmor/include/match.h
++++ b/security/apparmor/include/match.h
+@@ -141,7 +141,8 @@ unsigned int aa_dfa_matchn_until(struct aa_dfa *dfa, unsigned int start,
+
+ void aa_dfa_free_kref(struct kref *kref);
+
+-#define WB_HISTORY_SIZE 24
++/* This needs to be a power of 2 */
++#define WB_HISTORY_SIZE 32
+ struct match_workbuf {
+ unsigned int count;
+ unsigned int pos;
+diff --git a/security/apparmor/match.c b/security/apparmor/match.c
+index 3e9e1eaf990ed7..0e683ee323e3cf 100644
+--- a/security/apparmor/match.c
++++ b/security/apparmor/match.c
+@@ -672,6 +672,7 @@ unsigned int aa_dfa_matchn_until(struct aa_dfa *dfa, unsigned int start,
+
+ #define inc_wb_pos(wb) \
+ do { \
++ BUILD_BUG_ON_NOT_POWER_OF_2(WB_HISTORY_SIZE); \
+ wb->pos = (wb->pos + 1) & (WB_HISTORY_SIZE - 1); \
+ wb->len = (wb->len + 1) & (WB_HISTORY_SIZE - 1); \
+ } while (0)
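
inc_wb_pos() wraps indices with "& (WB_HISTORY_SIZE - 1)", which only equals "mod WB_HISTORY_SIZE" when the size is a power of two — with the old value of 24 the mask silently produced wrong positions, hence both the new value and the compile-time check. A standalone demonstration:

    #include <stdio.h>

    #define WB_HISTORY_SIZE 32

    /* Same guarantee as BUILD_BUG_ON_NOT_POWER_OF_2() above:
     * a power of two has no bits in common with (itself - 1). */
    _Static_assert((WB_HISTORY_SIZE & (WB_HISTORY_SIZE - 1)) == 0,
               "WB_HISTORY_SIZE must be a power of 2");

    int main(void)
    {
        unsigned int pos = 0;

        for (int i = 0; i < 40; i++)
            pos = (pos + 1) & (WB_HISTORY_SIZE - 1);
        printf("pos=%u\n", pos); /* 8, i.e. 40 mod 32 */
        return 0;
    }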
+diff --git a/sound/pci/hda/hda_tegra.c b/sound/pci/hda/hda_tegra.c
+index 976a112c7d0061..b921cc180d83b5 100644
+--- a/sound/pci/hda/hda_tegra.c
++++ b/sound/pci/hda/hda_tegra.c
+@@ -71,6 +71,10 @@
+ struct hda_tegra_soc {
+ bool has_hda2codec_2x_reset;
+ bool has_hda2hdmi;
++ bool has_hda2codec_2x;
++ bool input_stream;
++ bool always_on;
++ bool requires_init;
+ };
+
+ struct hda_tegra {
+@@ -186,7 +190,9 @@ static int __maybe_unused hda_tegra_runtime_resume(struct device *dev)
+ if (rc != 0)
+ return rc;
+ if (chip->running) {
+- hda_tegra_init(hda);
++ if (hda->soc->requires_init)
++ hda_tegra_init(hda);
++
+ azx_init_chip(chip, 1);
+ /* disable controller wake up event*/
+ azx_writew(chip, WAKEEN, azx_readw(chip, WAKEEN) &
+@@ -251,7 +257,8 @@ static int hda_tegra_init_chip(struct azx *chip, struct platform_device *pdev)
+ bus->remap_addr = hda->regs + HDA_BAR0;
+ bus->addr = res->start + HDA_BAR0;
+
+- hda_tegra_init(hda);
++ if (hda->soc->requires_init)
++ hda_tegra_init(hda);
+
+ return 0;
+ }
+@@ -324,7 +331,7 @@ static int hda_tegra_first_init(struct azx *chip, struct platform_device *pdev)
+ * starts with offset 0 which is wrong as HW register for output stream
+ * offset starts with 4.
+ */
+- if (of_device_is_compatible(np, "nvidia,tegra234-hda"))
++ if (!hda->soc->input_stream)
+ chip->capture_streams = 4;
+
+ chip->playback_streams = (gcap >> 12) & 0x0f;
+@@ -420,7 +427,6 @@ static int hda_tegra_create(struct snd_card *card,
+ chip->driver_caps = driver_caps;
+ chip->driver_type = driver_caps & 0xff;
+ chip->dev_index = 0;
+- chip->jackpoll_interval = msecs_to_jiffies(5000);
+ INIT_LIST_HEAD(&chip->pcm_list);
+
+ chip->codec_probe_mask = -1;
+@@ -437,7 +443,16 @@ static int hda_tegra_create(struct snd_card *card,
+ chip->bus.core.sync_write = 0;
+ chip->bus.core.needs_damn_long_delay = 1;
+ chip->bus.core.aligned_mmio = 1;
+- chip->bus.jackpoll_in_suspend = 1;
++
++ /*
++ * HDA power domain and clocks are always on for Tegra264 and
++ * the jack detection logic would work always, so no need of
++ * jack polling mechanism running.
++ */
++ if (!hda->soc->always_on) {
++ chip->jackpoll_interval = msecs_to_jiffies(5000);
++ chip->bus.jackpoll_in_suspend = 1;
++ }
+
+ err = snd_device_new(card, SNDRV_DEV_LOWLEVEL, chip, &ops);
+ if (err < 0) {
+@@ -451,22 +466,44 @@ static int hda_tegra_create(struct snd_card *card,
+ static const struct hda_tegra_soc tegra30_data = {
+ .has_hda2codec_2x_reset = true,
+ .has_hda2hdmi = true,
++ .has_hda2codec_2x = true,
++ .input_stream = true,
++ .always_on = false,
++ .requires_init = true,
+ };
+
+ static const struct hda_tegra_soc tegra194_data = {
+ .has_hda2codec_2x_reset = false,
+ .has_hda2hdmi = true,
++ .has_hda2codec_2x = true,
++ .input_stream = true,
++ .always_on = false,
++ .requires_init = true,
+ };
+
+ static const struct hda_tegra_soc tegra234_data = {
+ .has_hda2codec_2x_reset = true,
+ .has_hda2hdmi = false,
++ .has_hda2codec_2x = true,
++ .input_stream = false,
++ .always_on = false,
++ .requires_init = true,
++};
++
++static const struct hda_tegra_soc tegra264_data = {
++ .has_hda2codec_2x_reset = true,
++ .has_hda2hdmi = false,
++ .has_hda2codec_2x = false,
++ .input_stream = false,
++ .always_on = true,
++ .requires_init = false,
+ };
+
+ static const struct of_device_id hda_tegra_match[] = {
+ { .compatible = "nvidia,tegra30-hda", .data = &tegra30_data },
+ { .compatible = "nvidia,tegra194-hda", .data = &tegra194_data },
+ { .compatible = "nvidia,tegra234-hda", .data = &tegra234_data },
++ { .compatible = "nvidia,tegra264-hda", .data = &tegra264_data },
+ {},
+ };
+ MODULE_DEVICE_TABLE(of, hda_tegra_match);
+@@ -521,7 +558,9 @@ static int hda_tegra_probe(struct platform_device *pdev)
+ hda->clocks[hda->nclocks++].id = "hda";
+ if (hda->soc->has_hda2hdmi)
+ hda->clocks[hda->nclocks++].id = "hda2hdmi";
+- hda->clocks[hda->nclocks++].id = "hda2codec_2x";
++
++ if (hda->soc->has_hda2codec_2x)
++ hda->clocks[hda->nclocks++].id = "hda2codec_2x";
+
+ err = devm_clk_bulk_get(&pdev->dev, hda->nclocks, hda->clocks);
+ if (err < 0)
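
All of the Tegra behavioral differences above (optional hda2codec_2x clock, missing input streams, always-on power domain, init requirement) are driven by one per-SoC capability table selected via the compatible string, rather than by compatible checks scattered through the driver. A plain C sketch of that table-lookup pattern, with illustrative entries and flags:

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    struct soc_data {
        const char *compatible;
        bool has_hda2codec_2x;
        bool always_on;
    };

    static const struct soc_data socs[] = {
        { "nvidia,tegra234-hda", .has_hda2codec_2x = true,  .always_on = false },
        { "nvidia,tegra264-hda", .has_hda2codec_2x = false, .always_on = true  },
    };

    static const struct soc_data *match(const char *compat)
    {
        for (size_t i = 0; i < sizeof(socs) / sizeof(socs[0]); i++)
            if (!strcmp(socs[i].compatible, compat))
                return &socs[i];
        return NULL;
    }

    int main(void)
    {
        const struct soc_data *soc = match("nvidia,tegra264-hda");

        if (soc && soc->always_on)
            puts("jack detection always works; skip polling");
        else
            puts("enable jack polling");
        return 0;
    }

Adding a new SoC then becomes a single table entry, which is exactly what the tegra264_data addition does.
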
+diff --git a/sound/pci/hda/patch_ca0132.c b/sound/pci/hda/patch_ca0132.c
+index 748a3c40966e97..d825fcce05eefc 100644
+--- a/sound/pci/hda/patch_ca0132.c
++++ b/sound/pci/hda/patch_ca0132.c
+@@ -4791,7 +4791,8 @@ static int ca0132_alt_select_out(struct hda_codec *codec)
+ if (err < 0)
+ goto exit;
+
+- if (ca0132_alt_select_out_quirk_set(codec) < 0)
++ err = ca0132_alt_select_out_quirk_set(codec);
++ if (err < 0)
+ goto exit;
+
+ switch (spec->cur_out_type) {
+@@ -4881,6 +4882,8 @@ static int ca0132_alt_select_out(struct hda_codec *codec)
+ spec->bass_redirection_val);
+ else
+ err = ca0132_alt_surround_set_bass_redirection(codec, 0);
++ if (err < 0)
++ goto exit;
+
+ /* Unmute DSP now that we're done with output selection. */
+ err = dspio_set_uint_param(codec, 0x96,
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index 0ffacc779cd66e..3388e407e44e21 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -4556,6 +4556,9 @@ HDA_CODEC_ENTRY(0x10de002e, "Tegra186 HDMI/DP1", patch_tegra_hdmi),
+ HDA_CODEC_ENTRY(0x10de002f, "Tegra194 HDMI/DP2", patch_tegra_hdmi),
+ HDA_CODEC_ENTRY(0x10de0030, "Tegra194 HDMI/DP3", patch_tegra_hdmi),
+ HDA_CODEC_ENTRY(0x10de0031, "Tegra234 HDMI/DP", patch_tegra234_hdmi),
++HDA_CODEC_ENTRY(0x10de0033, "SoC 33 HDMI/DP", patch_tegra234_hdmi),
++HDA_CODEC_ENTRY(0x10de0034, "Tegra264 HDMI/DP", patch_tegra234_hdmi),
++HDA_CODEC_ENTRY(0x10de0035, "SoC 35 HDMI/DP", patch_tegra234_hdmi),
+ HDA_CODEC_ENTRY(0x10de0040, "GPU 40 HDMI/DP", patch_nvhdmi),
+ HDA_CODEC_ENTRY(0x10de0041, "GPU 41 HDMI/DP", patch_nvhdmi),
+ HDA_CODEC_ENTRY(0x10de0042, "GPU 42 HDMI/DP", patch_nvhdmi),
+@@ -4594,15 +4597,32 @@ HDA_CODEC_ENTRY(0x10de0097, "GPU 97 HDMI/DP", patch_nvhdmi),
+ HDA_CODEC_ENTRY(0x10de0098, "GPU 98 HDMI/DP", patch_nvhdmi),
+ HDA_CODEC_ENTRY(0x10de0099, "GPU 99 HDMI/DP", patch_nvhdmi),
+ HDA_CODEC_ENTRY(0x10de009a, "GPU 9a HDMI/DP", patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de009b, "GPU 9b HDMI/DP", patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de009c, "GPU 9c HDMI/DP", patch_nvhdmi),
+ HDA_CODEC_ENTRY(0x10de009d, "GPU 9d HDMI/DP", patch_nvhdmi),
+ HDA_CODEC_ENTRY(0x10de009e, "GPU 9e HDMI/DP", patch_nvhdmi),
+ HDA_CODEC_ENTRY(0x10de009f, "GPU 9f HDMI/DP", patch_nvhdmi),
+ HDA_CODEC_ENTRY(0x10de00a0, "GPU a0 HDMI/DP", patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de00a1, "GPU a1 HDMI/DP", patch_nvhdmi),
+ HDA_CODEC_ENTRY(0x10de00a3, "GPU a3 HDMI/DP", patch_nvhdmi),
+ HDA_CODEC_ENTRY(0x10de00a4, "GPU a4 HDMI/DP", patch_nvhdmi),
+ HDA_CODEC_ENTRY(0x10de00a5, "GPU a5 HDMI/DP", patch_nvhdmi),
+ HDA_CODEC_ENTRY(0x10de00a6, "GPU a6 HDMI/DP", patch_nvhdmi),
+ HDA_CODEC_ENTRY(0x10de00a7, "GPU a7 HDMI/DP", patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de00a8, "GPU a8 HDMI/DP", patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de00a9, "GPU a9 HDMI/DP", patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de00aa, "GPU aa HDMI/DP", patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de00ab, "GPU ab HDMI/DP", patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de00ad, "GPU ad HDMI/DP", patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de00ae, "GPU ae HDMI/DP", patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de00af, "GPU af HDMI/DP", patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de00b0, "GPU b0 HDMI/DP", patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de00b1, "GPU b1 HDMI/DP", patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de00c0, "GPU c0 HDMI/DP", patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de00c1, "GPU c1 HDMI/DP", patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de00c3, "GPU c3 HDMI/DP", patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de00c4, "GPU c4 HDMI/DP", patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de00c5, "GPU c5 HDMI/DP", patch_nvhdmi),
+ HDA_CODEC_ENTRY(0x10de8001, "MCP73 HDMI", patch_nvhdmi_2ch),
+ HDA_CODEC_ENTRY(0x10de8067, "MCP67/68 HDMI", patch_nvhdmi_2ch),
+ HDA_CODEC_ENTRY(0x67663d82, "Arise 82 HDMI/DP", patch_gf_hdmi),
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index f0c67b6af33ae9..43265f4d42a53b 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -9947,6 +9947,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x103c, 0x8788, "HP OMEN 15", ALC285_FIXUP_HP_MUTE_LED),
+ SND_PCI_QUIRK(0x103c, 0x87b7, "HP Laptop 14-fq0xxx", ALC236_FIXUP_HP_MUTE_LED_COEFBIT2),
+ SND_PCI_QUIRK(0x103c, 0x87c8, "HP", ALC287_FIXUP_HP_GPIO_LED),
++ SND_PCI_QUIRK(0x103c, 0x87cc, "HP Pavilion 15-eg0xxx", ALC287_FIXUP_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x87d3, "HP Laptop 15-gw0xxx", ALC236_FIXUP_HP_MUTE_LED_COEFBIT2),
+ SND_PCI_QUIRK(0x103c, 0x87df, "HP ProBook 430 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x87e5, "HP ProBook 440 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED),
+diff --git a/sound/soc/amd/yc/acp6x-mach.c b/sound/soc/amd/yc/acp6x-mach.c
+index 1f4c43bf817e48..fce918a089e37a 100644
+--- a/sound/soc/amd/yc/acp6x-mach.c
++++ b/sound/soc/amd/yc/acp6x-mach.c
+@@ -458,6 +458,13 @@ static const struct dmi_system_id yc_acp_quirk_table[] = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "OMEN by HP Gaming Laptop 16z-n000"),
+ }
+ },
++ {
++ .driver_data = &acp6x_card,
++ .matches = {
++ DMI_MATCH(DMI_BOARD_VENDOR, "HP"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "Victus by HP Gaming Laptop 15-fb1xxx"),
++ }
++ },
+ {
+ .driver_data = &acp6x_card,
+ .matches = {
+diff --git a/sound/soc/intel/boards/Kconfig b/sound/soc/intel/boards/Kconfig
+index ca49cc49c378c2..109efda305bfa1 100644
+--- a/sound/soc/intel/boards/Kconfig
++++ b/sound/soc/intel/boards/Kconfig
+@@ -11,7 +11,7 @@ menuconfig SND_SOC_INTEL_MACH
+ kernel: saying N will just cause the configurator to skip all
+ the questions about Intel ASoC machine drivers.
+
+-if SND_SOC_INTEL_MACH
++if SND_SOC_INTEL_MACH && (SND_SOC_SOF_INTEL_COMMON || !SND_SOC_SOF_INTEL_COMMON)
+
+ config SND_SOC_INTEL_USER_FRIENDLY_LONG_NAMES
+ bool "Use more user friendly long card names"
+diff --git a/sound/soc/soc-dai.c b/sound/soc/soc-dai.c
+index ba38b6e6b26494..ba8a99124869b6 100644
+--- a/sound/soc/soc-dai.c
++++ b/sound/soc/soc-dai.c
+@@ -268,13 +268,15 @@ int snd_soc_dai_set_tdm_slot(struct snd_soc_dai *dai,
+ {
+ int ret = -ENOTSUPP;
+
+- if (dai->driver->ops &&
+- dai->driver->ops->xlate_tdm_slot_mask)
+- ret = dai->driver->ops->xlate_tdm_slot_mask(slots, &tx_mask, &rx_mask);
+- else
+- ret = snd_soc_xlate_tdm_slot_mask(slots, &tx_mask, &rx_mask);
+- if (ret)
+- goto err;
++ if (slots) {
++ if (dai->driver->ops &&
++ dai->driver->ops->xlate_tdm_slot_mask)
++ ret = dai->driver->ops->xlate_tdm_slot_mask(slots, &tx_mask, &rx_mask);
++ else
++ ret = snd_soc_xlate_tdm_slot_mask(slots, &tx_mask, &rx_mask);
++ if (ret)
++ goto err;
++ }
+
+ dai->tx_mask = tx_mask;
+ dai->rx_mask = rx_mask;
+diff --git a/sound/soc/soc-ops.c b/sound/soc/soc-ops.c
+index eff1355cc3df00..5be32c37bb8a09 100644
+--- a/sound/soc/soc-ops.c
++++ b/sound/soc/soc-ops.c
+@@ -641,28 +641,32 @@ EXPORT_SYMBOL_GPL(snd_soc_get_volsw_range);
+ static int snd_soc_clip_to_platform_max(struct snd_kcontrol *kctl)
+ {
+ struct soc_mixer_control *mc = (struct soc_mixer_control *)kctl->private_value;
+- struct snd_ctl_elem_value uctl;
++ struct snd_ctl_elem_value *uctl;
+ int ret;
+
+ if (!mc->platform_max)
+ return 0;
+
+- ret = kctl->get(kctl, &uctl);
++ uctl = kzalloc(sizeof(*uctl), GFP_KERNEL);
++ if (!uctl)
++ return -ENOMEM;
++
++ ret = kctl->get(kctl, uctl);
+ if (ret < 0)
+- return ret;
++ goto out;
+
+- if (uctl.value.integer.value[0] > mc->platform_max)
+- uctl.value.integer.value[0] = mc->platform_max;
++ if (uctl->value.integer.value[0] > mc->platform_max)
++ uctl->value.integer.value[0] = mc->platform_max;
+
+ if (snd_soc_volsw_is_stereo(mc) &&
+- uctl.value.integer.value[1] > mc->platform_max)
+- uctl.value.integer.value[1] = mc->platform_max;
++ uctl->value.integer.value[1] > mc->platform_max)
++ uctl->value.integer.value[1] = mc->platform_max;
+
+- ret = kctl->put(kctl, &uctl);
+- if (ret < 0)
+- return ret;
++ ret = kctl->put(kctl, uctl);
+
+- return 0;
++out:
++ kfree(uctl);
++ return ret;
+ }
+
+ /**
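
snd_soc_clip_to_platform_max() is converted from a stack-allocated struct snd_ctl_elem_value to a heap allocation: the struct is large enough to be risky on a fixed-size kernel stack, and the single out: label keeps the free on every exit path. A userspace C sketch of the same pattern, with an illustrative struct size:

    #include <stdio.h>
    #include <stdlib.h>

    struct big { long value[128]; };    /* ~1 KiB: too large for a tight stack */

    static int clip_to_max(long max)
    {
        /* heap allocation instead of `struct big v;` on the stack */
        struct big *v = calloc(1, sizeof(*v));
        int ret = -1;

        if (!v)
            return ret;

        v->value[0] = max + 5;
        if (v->value[0] > max)
            v->value[0] = max;    /* clip to the platform max */

        printf("clipped to %ld\n", v->value[0]);
        ret = 0;
        free(v);    /* one exit path owns the free, like the out: label */
        return ret;
    }

    int main(void)
    {
        return clip_to_max(100);
    }
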
+diff --git a/sound/usb/mixer_scarlett2.c b/sound/usb/mixer_scarlett2.c
+index bcb8b761740651..fa2bfa102ab56f 100644
+--- a/sound/usb/mixer_scarlett2.c
++++ b/sound/usb/mixer_scarlett2.c
+@@ -1279,6 +1279,8 @@ static int scarlett2_usb(
+ struct scarlett2_usb_packet *req, *resp = NULL;
+ size_t req_buf_size = struct_size(req, data, req_size);
+ size_t resp_buf_size = struct_size(resp, data, resp_size);
++ int retries = 0;
++ const int max_retries = 5;
+ int err;
+
+ req = kmalloc(req_buf_size, GFP_KERNEL);
+@@ -1302,10 +1304,15 @@ static int scarlett2_usb(
+ if (req_size)
+ memcpy(req->data, req_data, req_size);
+
++retry:
+ err = scarlett2_usb_tx(dev, private->bInterfaceNumber,
+ req, req_buf_size);
+
+ if (err != req_buf_size) {
++ if (err == -EPROTO && ++retries <= max_retries) {
++ msleep(5 * (1 << (retries - 1)));
++ goto retry;
++ }
+ usb_audio_err(
+ mixer->chip,
+ "%s USB request result cmd %x was %d\n",
+diff --git a/sound/x86/intel_hdmi_audio.c b/sound/x86/intel_hdmi_audio.c
+index ab95fb34a63584..7b9292cf839f27 100644
+--- a/sound/x86/intel_hdmi_audio.c
++++ b/sound/x86/intel_hdmi_audio.c
+@@ -1766,7 +1766,7 @@ static int __hdmi_lpe_audio_probe(struct platform_device *pdev)
+ /* setup private data which can be retrieved when required */
+ pcm->private_data = ctx;
+ pcm->info_flags = 0;
+- strscpy(pcm->name, card->shortname, strlen(card->shortname));
++ strscpy(pcm->name, card->shortname, sizeof(pcm->name));
+ /* setup the ops for playback */
+ snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_PLAYBACK, &had_pcm_ops);
+
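
The intel_hdmi_audio fix replaces strlen(card->shortname) with sizeof(pcm->name) as the strscpy() bound: sizing the copy by the source defeats the destination-capacity check (the source may exceed the buffer) and, even when it fits, drops the final character to make room for the NUL terminator. A short C illustration of the same off-by-source bug using snprintf(), with illustrative buffer names:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char name[16];
        const char *shortname = "HDMI card";

        /* Wrong: bounded by the source length; the last character is
         * lost to the NUL, and a longer source could overrun name[]. */
        snprintf(name, strlen(shortname), "%s", shortname);
        printf("wrong: \"%s\"\n", name);    /* "HDMI car" */

        /* Right: bounded by the destination's capacity. */
        snprintf(name, sizeof(name), "%s", shortname);
        printf("right: \"%s\"\n", name);    /* "HDMI card" */
        return 0;
    }
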
+diff --git a/tools/bpf/bpftool/net.c b/tools/bpf/bpftool/net.c
+index 526a332c48e6eb..7c9e86faab6ce8 100644
+--- a/tools/bpf/bpftool/net.c
++++ b/tools/bpf/bpftool/net.c
+@@ -353,17 +353,18 @@ static int dump_link_nlmsg(void *cookie, void *msg, struct nlattr **tb)
+ {
+ struct bpf_netdev_t *netinfo = cookie;
+ struct ifinfomsg *ifinfo = msg;
++ struct ip_devname_ifindex *tmp;
+
+ if (netinfo->filter_idx > 0 && netinfo->filter_idx != ifinfo->ifi_index)
+ return 0;
+
+ if (netinfo->used_len == netinfo->array_len) {
+- netinfo->devices = realloc(netinfo->devices,
+- (netinfo->array_len + 16) *
+- sizeof(struct ip_devname_ifindex));
+- if (!netinfo->devices)
++ tmp = realloc(netinfo->devices,
++ (netinfo->array_len + 16) * sizeof(struct ip_devname_ifindex));
++ if (!tmp)
+ return -ENOMEM;
+
++ netinfo->devices = tmp;
+ netinfo->array_len += 16;
+ }
+ netinfo->devices[netinfo->used_len].ifindex = ifinfo->ifi_index;
+@@ -382,6 +383,7 @@ static int dump_class_qdisc_nlmsg(void *cookie, void *msg, struct nlattr **tb)
+ {
+ struct bpf_tcinfo_t *tcinfo = cookie;
+ struct tcmsg *info = msg;
++ struct tc_kind_handle *tmp;
+
+ if (tcinfo->is_qdisc) {
+ /* skip clsact qdisc */
+@@ -393,11 +395,12 @@ static int dump_class_qdisc_nlmsg(void *cookie, void *msg, struct nlattr **tb)
+ }
+
+ if (tcinfo->used_len == tcinfo->array_len) {
+- tcinfo->handle_array = realloc(tcinfo->handle_array,
++ tmp = realloc(tcinfo->handle_array,
+ (tcinfo->array_len + 16) * sizeof(struct tc_kind_handle));
+- if (!tcinfo->handle_array)
++ if (!tmp)
+ return -ENOMEM;
+
++ tcinfo->handle_array = tmp;
+ tcinfo->array_len += 16;
+ }
+ tcinfo->handle_array[tcinfo->used_len].handle = info->tcm_handle;
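
Both bpftool hunks fix the classic realloc() leak: assigning the result straight back to the only pointer loses the original allocation when realloc() fails. The safe idiom in plain C, with illustrative array contents:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        size_t len = 16;
        int *devices = malloc(len * sizeof(*devices));
        int *tmp;

        if (!devices)
            return 1;

        /* Wrong: devices = realloc(devices, ...) would leak the old
         * block on failure. Grow through a temporary instead: */
        tmp = realloc(devices, (len + 16) * sizeof(*devices));
        if (!tmp) {
            free(devices);    /* original block is still valid */
            return 1;
        }
        devices = tmp;
        len += 16;

        printf("grew to %zu entries\n", len);
        free(devices);
        return 0;
    }
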
+diff --git a/tools/perf/builtin-sched.c b/tools/perf/builtin-sched.c
+index e440a00b1613e8..3ffb41fa82b814 100644
+--- a/tools/perf/builtin-sched.c
++++ b/tools/perf/builtin-sched.c
+@@ -1125,6 +1125,21 @@ add_sched_in_event(struct work_atoms *atoms, u64 timestamp)
+ atoms->nb_atoms++;
+ }
+
++static void free_work_atoms(struct work_atoms *atoms)
++{
++ struct work_atom *atom, *tmp;
++
++ if (atoms == NULL)
++ return;
++
++ list_for_each_entry_safe(atom, tmp, &atoms->work_list, list) {
++ list_del(&atom->list);
++ free(atom);
++ }
++ thread__zput(atoms->thread);
++ free(atoms);
++}
++
+ static int latency_switch_event(struct perf_sched *sched,
+ struct evsel *evsel,
+ struct perf_sample *sample,
+@@ -1929,6 +1944,16 @@ static u64 evsel__get_time(struct evsel *evsel, u32 cpu)
+ return r->last_time[cpu];
+ }
+
++static void timehist__evsel_priv_destructor(void *priv)
++{
++ struct evsel_runtime *r = priv;
++
++ if (r) {
++ free(r->last_time);
++ free(r);
++ }
++}
++
+ static int comm_width = 30;
+
+ static char *timehist_get_commstr(struct thread *thread)
+@@ -3068,6 +3093,8 @@ static int perf_sched__timehist(struct perf_sched *sched)
+
+ setup_pager();
+
++ evsel__set_priv_destructor(timehist__evsel_priv_destructor);
++
+ /* prefer sched_waking if it is captured */
+ if (evlist__find_tracepoint_by_name(session->evlist, "sched:sched_waking"))
+ handlers[1].handler = timehist_sched_wakeup_ignore;
+@@ -3168,13 +3195,13 @@ static void __merge_work_atoms(struct rb_root_cached *root, struct work_atoms *d
+ this->total_runtime += data->total_runtime;
+ this->nb_atoms += data->nb_atoms;
+ this->total_lat += data->total_lat;
+- list_splice(&data->work_list, &this->work_list);
++ list_splice_init(&data->work_list, &this->work_list);
+ if (this->max_lat < data->max_lat) {
+ this->max_lat = data->max_lat;
+ this->max_lat_start = data->max_lat_start;
+ this->max_lat_end = data->max_lat_end;
+ }
+- zfree(&data);
++ free_work_atoms(data);
+ return;
+ }
+ }
+@@ -3253,7 +3280,6 @@ static int perf_sched__lat(struct perf_sched *sched)
+ work_list = rb_entry(next, struct work_atoms, node);
+ output_lat_thread(sched, work_list);
+ next = rb_next(next);
+- thread__zput(work_list->thread);
+ }
+
+ printf(" -----------------------------------------------------------------------------------------------------------------\n");
+@@ -3267,6 +3293,13 @@ static int perf_sched__lat(struct perf_sched *sched)
+
+ rc = 0;
+
++ while ((next = rb_first_cached(&sched->sorted_atom_root))) {
++ struct work_atoms *data;
++
++ data = rb_entry(next, struct work_atoms, node);
++ rb_erase_cached(next, &sched->sorted_atom_root);
++ free_work_atoms(data);
++ }
+ out_free_cpus_switch_event:
+ free_cpus_switch_event(sched);
+ return rc;
+diff --git a/tools/perf/tests/bp_account.c b/tools/perf/tests/bp_account.c
+index 6f921db33cf90e..855b81c3326c7c 100644
+--- a/tools/perf/tests/bp_account.c
++++ b/tools/perf/tests/bp_account.c
+@@ -102,6 +102,7 @@ static int bp_accounting(int wp_cnt, int share)
+ fd_wp = wp_event((void *)&the_var, &attr_new);
+ TEST_ASSERT_VAL("failed to create max wp\n", fd_wp != -1);
+ pr_debug("wp max created\n");
++ close(fd_wp);
+ }
+
+ for (i = 0; i < wp_cnt; i++)
+diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
+index 7db35dbdfcefe7..22969cc00a5fc3 100644
+--- a/tools/perf/util/evsel.c
++++ b/tools/perf/util/evsel.c
+@@ -1486,6 +1486,15 @@ static void evsel__free_config_terms(struct evsel *evsel)
+ free_config_terms(&evsel->config_terms);
+ }
+
++static void (*evsel__priv_destructor)(void *priv);
++
++void evsel__set_priv_destructor(void (*destructor)(void *priv))
++{
++ assert(evsel__priv_destructor == NULL);
++
++ evsel__priv_destructor = destructor;
++}
++
+ void evsel__exit(struct evsel *evsel)
+ {
+ assert(list_empty(&evsel->core.node));
+@@ -1508,6 +1517,8 @@ void evsel__exit(struct evsel *evsel)
+ hashmap__free(evsel->per_pkg_mask);
+ evsel->per_pkg_mask = NULL;
+ zfree(&evsel->metric_events);
++ if (evsel__priv_destructor)
++ evsel__priv_destructor(evsel->priv);
+ perf_evsel__object.fini(evsel);
+ }
+
+diff --git a/tools/perf/util/evsel.h b/tools/perf/util/evsel.h
+index 8ce30329a0772c..fabf0697c36a47 100644
+--- a/tools/perf/util/evsel.h
++++ b/tools/perf/util/evsel.h
+@@ -246,6 +246,8 @@ void evsel__init(struct evsel *evsel, struct perf_event_attr *attr, int idx);
+ void evsel__exit(struct evsel *evsel);
+ void evsel__delete(struct evsel *evsel);
+
++void evsel__set_priv_destructor(void (*destructor)(void *priv));
++
+ struct callchain_param;
+
+ void evsel__config(struct evsel *evsel, struct record_opts *opts,
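
The new evsel__set_priv_destructor() lets a tool register one process-wide callback that evsel__exit() invokes on each event's priv pointer, so per-event allocations (here, perf sched's struct evsel_runtime) are freed without evsel knowing their layout; the assert keeps it a register-once hook. A standalone C sketch of the pattern, with illustrative type and function names:

    #include <assert.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct evsel { void *priv; };

    static void (*priv_destructor)(void *priv);

    static void set_priv_destructor(void (*d)(void *priv))
    {
        assert(priv_destructor == NULL);    /* register once */
        priv_destructor = d;
    }

    static void evsel_exit(struct evsel *e)
    {
        if (priv_destructor)
            priv_destructor(e->priv);    /* owner-defined cleanup */
    }

    static void my_destructor(void *priv)
    {
        free(priv);
        puts("priv freed");
    }

    int main(void)
    {
        struct evsel e = { .priv = malloc(32) };

        set_priv_destructor(my_destructor);
        evsel_exit(&e);
        return 0;
    }
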
+diff --git a/tools/testing/selftests/arm64/fp/sve-ptrace.c b/tools/testing/selftests/arm64/fp/sve-ptrace.c
+index 8c48479775837c..91dd31629ffedd 100644
+--- a/tools/testing/selftests/arm64/fp/sve-ptrace.c
++++ b/tools/testing/selftests/arm64/fp/sve-ptrace.c
+@@ -241,7 +241,7 @@ static void ptrace_set_get_vl(pid_t child, const struct vec_type *type,
+ return;
+ }
+
+- ksft_test_result(new_sve->vl = prctl_vl, "Set %s VL %u\n",
++ ksft_test_result(new_sve->vl == prctl_vl, "Set %s VL %u\n",
+ type->name, vl);
+
+ free(new_sve);
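
The sve-ptrace fix is a one-character change from assignment to comparison: with '=', the "Set VL" check passed whenever prctl_vl was nonzero, regardless of what the kernel actually returned. A two-case C illustration of the pitfall (most compilers flag the first form with -Wparentheses):

    #include <stdio.h>

    int main(void)
    {
        int got = 16;
        const int want = 32;

        if (got = want)         /* bug: assigns, then tests 32 (truthy) */
            printf("\"passes\" even though got was 16\n");

        got = 16;
        if (got == want)        /* fix: a real comparison, false here */
            printf("never printed\n");
        else
            printf("correctly detected got != want\n");
        return 0;
    }
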
+diff --git a/tools/testing/selftests/ftrace/test.d/event/subsystem-enable.tc b/tools/testing/selftests/ftrace/test.d/event/subsystem-enable.tc
+index b7c8f29c09a978..65916bb55dfbbf 100644
+--- a/tools/testing/selftests/ftrace/test.d/event/subsystem-enable.tc
++++ b/tools/testing/selftests/ftrace/test.d/event/subsystem-enable.tc
+@@ -14,11 +14,35 @@ fail() { #msg
+ exit_fail
+ }
+
++# As reading trace can last forever, simply look for 3 different
++# events then exit out of reading the file. If there's not 3 different
++# events, then the test has failed.
++check_unique() {
++ cat trace | grep -v '^#' | awk '
++ BEGIN { cnt = 0; }
++ {
++ for (i = 0; i < cnt; i++) {
++ if (event[i] == $5) {
++ break;
++ }
++ }
++ if (i == cnt) {
++ event[cnt++] = $5;
++ if (cnt > 2) {
++ exit;
++ }
++ }
++ }
++ END {
++ printf "%d", cnt;
++ }'
++}
++
+ echo 'sched:*' > set_event
+
+ yield
+
+-count=`head -n 100 trace | grep -v ^# | awk '{ print $5 }' | sort -u | wc -l`
++count=`check_unique`
+ if [ $count -lt 3 ]; then
+ fail "at least fork, exec and exit events should be recorded"
+ fi
+@@ -29,7 +53,7 @@ echo 1 > events/sched/enable
+
+ yield
+
+-count=`head -n 100 trace | grep -v ^# | awk '{ print $5 }' | sort -u | wc -l`
++count=`check_unique`
+ if [ $count -lt 3 ]; then
+ fail "at least fork, exec and exit events should be recorded"
+ fi
+diff --git a/tools/testing/selftests/net/mptcp/Makefile b/tools/testing/selftests/net/mptcp/Makefile
+index 7b936a92685949..3c2fb9efb0b140 100644
+--- a/tools/testing/selftests/net/mptcp/Makefile
++++ b/tools/testing/selftests/net/mptcp/Makefile
+@@ -4,7 +4,8 @@ top_srcdir = ../../../../..
+
+ CFLAGS = -Wall -Wl,--no-as-needed -O2 -g -I$(top_srcdir)/usr/include $(KHDR_INCLUDES)
+
+-TEST_PROGS := mptcp_connect.sh pm_netlink.sh mptcp_join.sh diag.sh \
++TEST_PROGS := mptcp_connect.sh mptcp_connect_mmap.sh mptcp_connect_sendfile.sh \
++ mptcp_connect_checksum.sh pm_netlink.sh mptcp_join.sh diag.sh \
+ simult_flows.sh mptcp_sockopt.sh userspace_pm.sh
+
+ TEST_GEN_FILES = mptcp_connect pm_nl_ctl mptcp_sockopt mptcp_inq
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_connect_checksum.sh b/tools/testing/selftests/net/mptcp/mptcp_connect_checksum.sh
+new file mode 100644
+index 00000000000000..ce93ec2f107fba
+--- /dev/null
++++ b/tools/testing/selftests/net/mptcp/mptcp_connect_checksum.sh
+@@ -0,0 +1,5 @@
++#!/bin/bash
++# SPDX-License-Identifier: GPL-2.0
++
++MPTCP_LIB_KSFT_TEST="$(basename "${0}" .sh)" \
++ "$(dirname "${0}")/mptcp_connect.sh" -C "${@}"
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_connect_mmap.sh b/tools/testing/selftests/net/mptcp/mptcp_connect_mmap.sh
+new file mode 100644
+index 00000000000000..5dd30f9394af6a
+--- /dev/null
++++ b/tools/testing/selftests/net/mptcp/mptcp_connect_mmap.sh
+@@ -0,0 +1,5 @@
++#!/bin/bash
++# SPDX-License-Identifier: GPL-2.0
++
++MPTCP_LIB_KSFT_TEST="$(basename "${0}" .sh)" \
++ "$(dirname "${0}")/mptcp_connect.sh" -m mmap "${@}"
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_connect_sendfile.sh b/tools/testing/selftests/net/mptcp/mptcp_connect_sendfile.sh
+new file mode 100644
+index 00000000000000..1d16fb1cc9bb6d
+--- /dev/null
++++ b/tools/testing/selftests/net/mptcp/mptcp_connect_sendfile.sh
+@@ -0,0 +1,5 @@
++#!/bin/bash
++# SPDX-License-Identifier: GPL-2.0
++
++MPTCP_LIB_KSFT_TEST="$(basename "${0}" .sh)" \
++ "$(dirname "${0}")/mptcp_connect.sh" -m sendfile "${@}"
+diff --git a/tools/testing/selftests/net/rtnetlink.sh b/tools/testing/selftests/net/rtnetlink.sh
+index ff1242f2afaacc..7d2164e0a39d4d 100755
+--- a/tools/testing/selftests/net/rtnetlink.sh
++++ b/tools/testing/selftests/net/rtnetlink.sh
+@@ -746,6 +746,11 @@ kci_test_ipsec_offload()
+ sysfsf=$sysfsd/ipsec
+ sysfsnet=/sys/bus/netdevsim/devices/netdevsim0/net/
+ probed=false
++ esp4_offload_probed_default=false
++
++ if lsmod | grep -q esp4_offload; then
++ esp4_offload_probed_default=true
++ fi
+
+ # setup netdevsim since dummydev doesn't have offload support
+ if [ ! -w /sys/bus/netdevsim/new_device ] ; then
+@@ -835,6 +840,7 @@ EOF
+ fi
+
+ # clean up any leftovers
++ ! "$esp4_offload_probed_default" && lsmod | grep -q esp4_offload && rmmod esp4_offload
+ echo 0 > /sys/bus/netdevsim/del_device
+ $probed && rmmod netdevsim
+
+diff --git a/tools/testing/selftests/perf_events/.gitignore b/tools/testing/selftests/perf_events/.gitignore
+index 790c47001e77e3..4858977dd55b5f 100644
+--- a/tools/testing/selftests/perf_events/.gitignore
++++ b/tools/testing/selftests/perf_events/.gitignore
+@@ -1,3 +1,4 @@
+ # SPDX-License-Identifier: GPL-2.0-only
+ sigtrap_threads
+ remove_on_exec
++mmap
+diff --git a/tools/testing/selftests/perf_events/Makefile b/tools/testing/selftests/perf_events/Makefile
+index db93c4ff081a45..913854914ae499 100644
+--- a/tools/testing/selftests/perf_events/Makefile
++++ b/tools/testing/selftests/perf_events/Makefile
+@@ -2,5 +2,5 @@
+ CFLAGS += -Wl,-no-as-needed -Wall $(KHDR_INCLUDES)
+ LDFLAGS += -lpthread
+
+-TEST_GEN_PROGS := sigtrap_threads remove_on_exec
++TEST_GEN_PROGS := sigtrap_threads remove_on_exec mmap
+ include ../lib.mk
+diff --git a/tools/testing/selftests/perf_events/mmap.c b/tools/testing/selftests/perf_events/mmap.c
+new file mode 100644
+index 00000000000000..ea0427aac1f98f
+--- /dev/null
++++ b/tools/testing/selftests/perf_events/mmap.c
+@@ -0,0 +1,236 @@
++// SPDX-License-Identifier: GPL-2.0-only
++#define _GNU_SOURCE
++
++#include <dirent.h>
++#include <sched.h>
++#include <stdbool.h>
++#include <stdio.h>
++#include <unistd.h>
++
++#include <sys/ioctl.h>
++#include <sys/mman.h>
++#include <sys/syscall.h>
++#include <sys/types.h>
++
++#include <linux/perf_event.h>
++
++#include "../kselftest_harness.h"
++
++#define RB_SIZE 0x3000
++#define AUX_SIZE 0x10000
++#define AUX_OFFS 0x4000
++
++#define HOLE_SIZE 0x1000
++
++/* Reserve space for rb, aux with space for shrink-beyond-vma testing. */
++#define REGION_SIZE (2 * RB_SIZE + 2 * AUX_SIZE)
++#define REGION_AUX_OFFS (2 * RB_SIZE)
++
++#define MAP_BASE 1
++#define MAP_AUX 2
++
++#define EVENT_SRC_DIR "/sys/bus/event_source/devices"
++
++FIXTURE(perf_mmap)
++{
++ int fd;
++ void *ptr;
++ void *region;
++};
++
++FIXTURE_VARIANT(perf_mmap)
++{
++ bool aux;
++ unsigned long ptr_size;
++};
++
++FIXTURE_VARIANT_ADD(perf_mmap, rb)
++{
++ .aux = false,
++ .ptr_size = RB_SIZE,
++};
++
++FIXTURE_VARIANT_ADD(perf_mmap, aux)
++{
++ .aux = true,
++ .ptr_size = AUX_SIZE,
++};
++
++static bool read_event_type(struct dirent *dent, __u32 *type)
++{
++ char typefn[512];
++ FILE *fp;
++ int res;
++
++ snprintf(typefn, sizeof(typefn), "%s/%s/type", EVENT_SRC_DIR, dent->d_name);
++ fp = fopen(typefn, "r");
++ if (!fp)
++ return false;
++
++ res = fscanf(fp, "%u", type);
++ fclose(fp);
++ return res > 0;
++}
++
++FIXTURE_SETUP(perf_mmap)
++{
++ struct perf_event_attr attr = {
++ .size = sizeof(attr),
++ .disabled = 1,
++ .exclude_kernel = 1,
++ .exclude_hv = 1,
++ };
++ struct perf_event_attr attr_ok = {};
++ unsigned int eacces = 0, map = 0;
++ struct perf_event_mmap_page *rb;
++ struct dirent *dent;
++ void *aux, *region;
++ DIR *dir;
++
++ self->ptr = NULL;
++
++ dir = opendir(EVENT_SRC_DIR);
++ if (!dir)
++ SKIP(return, "perf not available.");
++
++ region = mmap(NULL, REGION_SIZE, PROT_NONE, MAP_ANON | MAP_PRIVATE, -1, 0);
++ ASSERT_NE(region, MAP_FAILED);
++ self->region = region;
++
++ // Try to find a suitable event on this system
++ while ((dent = readdir(dir))) {
++ int fd;
++
++ if (!read_event_type(dent, &attr.type))
++ continue;
++
++ fd = syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
++ if (fd < 0) {
++ if (errno == EACCES)
++ eacces++;
++ continue;
++ }
++
++ // Check whether the event supports mmap()
++ rb = mmap(region, RB_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED | MAP_FIXED, fd, 0);
++ if (rb == MAP_FAILED) {
++ close(fd);
++ continue;
++ }
++
++ if (!map) {
++ // Save the event in case that no AUX capable event is found
++ attr_ok = attr;
++ map = MAP_BASE;
++ }
++
++ if (!variant->aux)
++ continue;
++
++ rb->aux_offset = AUX_OFFS;
++ rb->aux_size = AUX_SIZE;
++
++ // Check whether it supports a AUX buffer
++ aux = mmap(region + REGION_AUX_OFFS, AUX_SIZE, PROT_READ | PROT_WRITE,
++ MAP_SHARED | MAP_FIXED, fd, AUX_OFFS);
++ if (aux == MAP_FAILED) {
++ munmap(rb, RB_SIZE);
++ close(fd);
++ continue;
++ }
++
++ attr_ok = attr;
++ map = MAP_AUX;
++ munmap(aux, AUX_SIZE);
++ munmap(rb, RB_SIZE);
++ close(fd);
++ break;
++ }
++ closedir(dir);
++
++ if (!map) {
++ if (!eacces)
++ SKIP(return, "No mappable perf event found.");
++ else
++ SKIP(return, "No permissions for perf_event_open()");
++ }
++
++ self->fd = syscall(SYS_perf_event_open, &attr_ok, 0, -1, -1, 0);
++ ASSERT_NE(self->fd, -1);
++
++ rb = mmap(region, RB_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED | MAP_FIXED, self->fd, 0);
++ ASSERT_NE(rb, MAP_FAILED);
++
++ if (!variant->aux) {
++ self->ptr = rb;
++ return;
++ }
++
++ if (map != MAP_AUX)
++ SKIP(return, "No AUX event found.");
++
++ rb->aux_offset = AUX_OFFS;
++ rb->aux_size = AUX_SIZE;
++ aux = mmap(region + REGION_AUX_OFFS, AUX_SIZE, PROT_READ | PROT_WRITE,
++ MAP_SHARED | MAP_FIXED, self->fd, AUX_OFFS);
++ ASSERT_NE(aux, MAP_FAILED);
++ self->ptr = aux;
++}
++
++FIXTURE_TEARDOWN(perf_mmap)
++{
++ ASSERT_EQ(munmap(self->region, REGION_SIZE), 0);
++ if (self->fd != -1)
++ ASSERT_EQ(close(self->fd), 0);
++}
++
++TEST_F(perf_mmap, remap)
++{
++ void *tmp, *ptr = self->ptr;
++ unsigned long size = variant->ptr_size;
++
++ // Test the invalid remaps
++ ASSERT_EQ(mremap(ptr, size, HOLE_SIZE, MREMAP_MAYMOVE), MAP_FAILED);
++ ASSERT_EQ(mremap(ptr + HOLE_SIZE, size, HOLE_SIZE, MREMAP_MAYMOVE), MAP_FAILED);
++ ASSERT_EQ(mremap(ptr + size - HOLE_SIZE, HOLE_SIZE, size, MREMAP_MAYMOVE), MAP_FAILED);
++ // Shrink the end of the mapping such that we only unmap past end of the VMA,
++ // which should succeed and poke a hole into the PROT_NONE region
++ ASSERT_NE(mremap(ptr + size - HOLE_SIZE, size, HOLE_SIZE, MREMAP_MAYMOVE), MAP_FAILED);
++
++ // Remap the whole buffer to a new address
++ tmp = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED | MAP_ANONYMOUS, -1, 0);
++ ASSERT_NE(tmp, MAP_FAILED);
++
++ // Try splitting offset 1 hole size into VMA, this should fail
++ ASSERT_EQ(mremap(ptr + HOLE_SIZE, size - HOLE_SIZE, size - HOLE_SIZE,
++ MREMAP_MAYMOVE | MREMAP_FIXED, tmp), MAP_FAILED);
++ // Remapping the whole thing should succeed fine
++ ptr = mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_FIXED, tmp);
++ ASSERT_EQ(ptr, tmp);
++ ASSERT_EQ(munmap(tmp, size), 0);
++}
++
++TEST_F(perf_mmap, unmap)
++{
++ unsigned long size = variant->ptr_size;
++
++ // Try to poke holes into the mappings
++ ASSERT_NE(munmap(self->ptr, HOLE_SIZE), 0);
++ ASSERT_NE(munmap(self->ptr + HOLE_SIZE, HOLE_SIZE), 0);
++ ASSERT_NE(munmap(self->ptr + size - HOLE_SIZE, HOLE_SIZE), 0);
++}
++
++TEST_F(perf_mmap, map)
++{
++ unsigned long size = variant->ptr_size;
++
++ // Try to poke holes into the mappings by mapping anonymous memory over it
++ ASSERT_EQ(mmap(self->ptr, HOLE_SIZE, PROT_READ | PROT_WRITE,
++ MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0), MAP_FAILED);
++ ASSERT_EQ(mmap(self->ptr + HOLE_SIZE, HOLE_SIZE, PROT_READ | PROT_WRITE,
++ MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0), MAP_FAILED);
++ ASSERT_EQ(mmap(self->ptr + size - HOLE_SIZE, HOLE_SIZE, PROT_READ | PROT_WRITE,
++ MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0), MAP_FAILED);
++}
++
++TEST_HARNESS_MAIN
+diff --git a/tools/testing/selftests/syscall_user_dispatch/sud_test.c b/tools/testing/selftests/syscall_user_dispatch/sud_test.c
+index d975a67673299f..48cf01aeec3e77 100644
+--- a/tools/testing/selftests/syscall_user_dispatch/sud_test.c
++++ b/tools/testing/selftests/syscall_user_dispatch/sud_test.c
+@@ -79,6 +79,21 @@ TEST_SIGNAL(dispatch_trigger_sigsys, SIGSYS)
+ }
+ }
+
++static void prctl_valid(struct __test_metadata *_metadata,
++ unsigned long op, unsigned long off,
++ unsigned long size, void *sel)
++{
++ EXPECT_EQ(0, prctl(PR_SET_SYSCALL_USER_DISPATCH, op, off, size, sel));
++}
++
++static void prctl_invalid(struct __test_metadata *_metadata,
++ unsigned long op, unsigned long off,
++ unsigned long size, void *sel, int err)
++{
++ EXPECT_EQ(-1, prctl(PR_SET_SYSCALL_USER_DISPATCH, op, off, size, sel));
++ EXPECT_EQ(err, errno);
++}
++
+ TEST(bad_prctl_param)
+ {
+ char sel = SYSCALL_DISPATCH_FILTER_ALLOW;
+@@ -86,57 +101,42 @@ TEST(bad_prctl_param)
+
+ /* Invalid op */
+ op = -1;
+- prctl(PR_SET_SYSCALL_USER_DISPATCH, op, 0, 0, &sel);
+- ASSERT_EQ(EINVAL, errno);
++ prctl_invalid(_metadata, op, 0, 0, &sel, EINVAL);
+
+ /* PR_SYS_DISPATCH_OFF */
+ op = PR_SYS_DISPATCH_OFF;
+
+ /* offset != 0 */
+- prctl(PR_SET_SYSCALL_USER_DISPATCH, op, 0x1, 0x0, 0);
+- EXPECT_EQ(EINVAL, errno);
++ prctl_invalid(_metadata, op, 0x1, 0x0, 0, EINVAL);
+
+ /* len != 0 */
+- prctl(PR_SET_SYSCALL_USER_DISPATCH, op, 0x0, 0xff, 0);
+- EXPECT_EQ(EINVAL, errno);
++ prctl_invalid(_metadata, op, 0x0, 0xff, 0, EINVAL);
+
+ /* sel != NULL */
+- prctl(PR_SET_SYSCALL_USER_DISPATCH, op, 0x0, 0x0, &sel);
+- EXPECT_EQ(EINVAL, errno);
++ prctl_invalid(_metadata, op, 0x0, 0x0, &sel, EINVAL);
+
+ /* Valid parameter */
+- errno = 0;
+- prctl(PR_SET_SYSCALL_USER_DISPATCH, op, 0x0, 0x0, 0x0);
+- EXPECT_EQ(0, errno);
++ prctl_valid(_metadata, op, 0x0, 0x0, 0x0);
+
+ /* PR_SYS_DISPATCH_ON */
+ op = PR_SYS_DISPATCH_ON;
+
+ /* Dispatcher region is bad (offset > 0 && len == 0) */
+- prctl(PR_SET_SYSCALL_USER_DISPATCH, op, 0x1, 0x0, &sel);
+- EXPECT_EQ(EINVAL, errno);
+- prctl(PR_SET_SYSCALL_USER_DISPATCH, op, -1L, 0x0, &sel);
+- EXPECT_EQ(EINVAL, errno);
++ prctl_invalid(_metadata, op, 0x1, 0x0, &sel, EINVAL);
++ prctl_invalid(_metadata, op, -1L, 0x0, &sel, EINVAL);
+
+ /* Invalid selector */
+- prctl(PR_SET_SYSCALL_USER_DISPATCH, op, 0x0, 0x1, (void *) -1);
+- ASSERT_EQ(EFAULT, errno);
++ prctl_invalid(_metadata, op, 0x0, 0x1, (void *) -1, EFAULT);
+
+ /*
+ * Dispatcher range overflows unsigned long
+ */
+- prctl(PR_SET_SYSCALL_USER_DISPATCH, PR_SYS_DISPATCH_ON, 1, -1L, &sel);
+- ASSERT_EQ(EINVAL, errno) {
+- TH_LOG("Should reject bad syscall range");
+- }
++ prctl_invalid(_metadata, PR_SYS_DISPATCH_ON, 1, -1L, &sel, EINVAL);
+
+ /*
+ * Allowed range overflows usigned long
+ */
+- prctl(PR_SET_SYSCALL_USER_DISPATCH, PR_SYS_DISPATCH_ON, -1L, 0x1, &sel);
+- ASSERT_EQ(EINVAL, errno) {
+- TH_LOG("Should reject bad syscall range");
+- }
++ prctl_invalid(_metadata, PR_SYS_DISPATCH_ON, -1L, 0x1, &sel, EINVAL);
+ }
+
+ /*