From: "Arisu Tachibana" <alicef@gentoo.org>
To: gentoo-commits@lists.gentoo.org
Subject: [gentoo-commits] proj/linux-patches:5.10 commit in: /
Date: Thu, 28 Aug 2025 16:55:51 +0000 (UTC)
Message-ID: <1756394986.2c5751ebb312d8b238e09a3407da09bf4b1182c0.alicef@gentoo>
commit: 2c5751ebb312d8b238e09a3407da09bf4b1182c0
Author: Arisu Tachibana <alicef <AT> gentoo <DOT> org>
AuthorDate: Thu Aug 28 15:29:46 2025 +0000
Commit: Arisu Tachibana <alicef <AT> gentoo <DOT> org>
CommitDate: Thu Aug 28 15:29:46 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=2c5751eb
Linux patch 5.10.241
Signed-off-by: Arisu Tachibana <alicef <AT> gentoo.org>
0000_README | 4 +
1240_linux-5.10.241.patch | 17784 ++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 17788 insertions(+)
diff --git a/0000_README b/0000_README
index ae0aa0e6..24501f38 100644
--- a/0000_README
+++ b/0000_README
@@ -1003,6 +1003,10 @@ Patch: 1239_linux-5.10.240.patch
From: https://www.kernel.org
Desc: Linux 5.10.240
+Patch: 1240_linux-5.10.241.patch
+From: https://www.kernel.org
+Desc: Linux 5.10.241
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1240_linux-5.10.241.patch b/1240_linux-5.10.241.patch
new file mode 100644
index 00000000..c01e8ffb
--- /dev/null
+++ b/1240_linux-5.10.241.patch
@@ -0,0 +1,17784 @@
+diff --git a/Documentation/filesystems/f2fs.rst b/Documentation/filesystems/f2fs.rst
+index de2bacc418fee9..483573166ac978 100644
+--- a/Documentation/filesystems/f2fs.rst
++++ b/Documentation/filesystems/f2fs.rst
+@@ -203,9 +203,9 @@ usrjquota=<file> Appoint specified file and type during mount, so that quota
+ grpjquota=<file> information can be properly updated during recovery flow,
+ prjjquota=<file> <quota file>: must be in root directory;
+ jqfmt=<quota type> <quota type>: [vfsold,vfsv0,vfsv1].
+-offusrjquota Turn off user journalled quota.
+-offgrpjquota Turn off group journalled quota.
+-offprjjquota Turn off project journalled quota.
++usrjquota= Turn off user journalled quota.
++grpjquota= Turn off group journalled quota.
++prjjquota= Turn off project journalled quota.
+ quota Enable plain user disk quota accounting.
+ noquota Disable all plain disk quota option.
+ whint_mode=%s Control which write hints are passed down to block
+diff --git a/Documentation/firmware-guide/acpi/i2c-muxes.rst b/Documentation/firmware-guide/acpi/i2c-muxes.rst
+index 3a8997ccd7c4b6..f366539acd792a 100644
+--- a/Documentation/firmware-guide/acpi/i2c-muxes.rst
++++ b/Documentation/firmware-guide/acpi/i2c-muxes.rst
+@@ -14,7 +14,7 @@ Consider this topology::
+ | | | 0x70 |--CH01--> i2c client B (0x50)
+ +------+ +------+
+
+-which corresponds to the following ASL::
++which corresponds to the following ASL (in the scope of \_SB)::
+
+ Device (SMB1)
+ {
+@@ -24,7 +24,7 @@ which corresponds to the following ASL::
+ Name (_HID, ...)
+ Name (_CRS, ResourceTemplate () {
+ I2cSerialBus (0x70, ControllerInitiated, I2C_SPEED,
+- AddressingMode7Bit, "^SMB1", 0x00,
++ AddressingMode7Bit, "\\_SB.SMB1", 0x00,
+ ResourceConsumer,,)
+ }
+
+@@ -37,7 +37,7 @@ which corresponds to the following ASL::
+ Name (_HID, ...)
+ Name (_CRS, ResourceTemplate () {
+ I2cSerialBus (0x50, ControllerInitiated, I2C_SPEED,
+- AddressingMode7Bit, "^CH00", 0x00,
++ AddressingMode7Bit, "\\_SB.SMB1.CH00", 0x00,
+ ResourceConsumer,,)
+ }
+ }
+@@ -52,7 +52,7 @@ which corresponds to the following ASL::
+ Name (_HID, ...)
+ Name (_CRS, ResourceTemplate () {
+ I2cSerialBus (0x50, ControllerInitiated, I2C_SPEED,
+- AddressingMode7Bit, "^CH01", 0x00,
++ AddressingMode7Bit, "\\_SB.SMB1.CH01", 0x00,
+ ResourceConsumer,,)
+ }
+ }
+diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt
+index 17c8e0c2deb46d..774816cf15be77 100644
+--- a/Documentation/memory-barriers.txt
++++ b/Documentation/memory-barriers.txt
+@@ -1894,6 +1894,7 @@ There are some more advanced barrier functions:
+
+ (*) dma_wmb();
+ (*) dma_rmb();
++ (*) dma_mb();
+
+ These are for use with consistent memory to guarantee the ordering
+ of writes or reads of shared memory accessible to both the CPU and a
+@@ -1925,11 +1926,11 @@ There are some more advanced barrier functions:
+ The dma_rmb() allows us guarantee the device has released ownership
+ before we read the data from the descriptor, and the dma_wmb() allows
+ us to guarantee the data is written to the descriptor before the device
+- can see it now has ownership. Note that, when using writel(), a prior
+- wmb() is not needed to guarantee that the cache coherent memory writes
+- have completed before writing to the MMIO region. The cheaper
+- writel_relaxed() does not provide this guarantee and must not be used
+- here.
++ can see it now has ownership. The dma_mb() implies both a dma_rmb() and
++ a dma_wmb(). Note that, when using writel(), a prior wmb() is not needed
++ to guarantee that the cache coherent memory writes have completed before
++ writing to the MMIO region. The cheaper writel_relaxed() does not provide
++ this guarantee and must not be used here.
+
+ See the subsection "Kernel I/O barrier effects" for more information on
+ relaxed I/O accessors and the Documentation/core-api/dma-api.rst file for
+diff --git a/Makefile b/Makefile
+index cff26a5d22bbed..3f7fa3e35285f2 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 240
++SUBLEVEL = 241
+ EXTRAVERSION =
+ NAME = Dare mighty things
+
+@@ -1037,7 +1037,7 @@ KBUILD_USERCFLAGS += $(filter -m32 -m64 --target=%, $(KBUILD_CPPFLAGS) $(KBUILD
+ KBUILD_USERLDFLAGS += $(filter -m32 -m64 --target=%, $(KBUILD_CPPFLAGS) $(KBUILD_CFLAGS))
+
+ # userspace programs are linked via the compiler, use the correct linker
+-ifeq ($(CONFIG_CC_IS_CLANG)$(CONFIG_LD_IS_LLD),yy)
++ifdef CONFIG_CC_IS_CLANG
+ KBUILD_USERLDFLAGS += $(call cc-option, --ld-path=$(LD))
+ endif
+
+diff --git a/arch/arm/Makefile b/arch/arm/Makefile
+index 0e5a8765e60b3f..37646ba4feae6b 100644
+--- a/arch/arm/Makefile
++++ b/arch/arm/Makefile
+@@ -126,7 +126,7 @@ endif
+
+ # Need -Uarm for gcc < 3.x
+ KBUILD_CFLAGS +=$(CFLAGS_ABI) $(CFLAGS_ISA) $(arch-y) $(tune-y) $(call cc-option,-mshort-load-bytes,$(call cc-option,-malignment-traps,)) -msoft-float -Uarm
+-KBUILD_AFLAGS +=$(CFLAGS_ABI) $(AFLAGS_ISA) $(arch-y) $(tune-y) -include asm/unified.h -msoft-float
++KBUILD_AFLAGS +=$(CFLAGS_ABI) $(AFLAGS_ISA) $(arch-y) $(tune-y) -include $(srctree)/arch/arm/include/asm/unified.h -msoft-float
+
+ CHECKFLAGS += -D__arm__
+
+diff --git a/arch/arm/boot/dts/imx6ul-kontron-n6x1x-s.dtsi b/arch/arm/boot/dts/imx6ul-kontron-n6x1x-s.dtsi
+index 770f59b23102c2..44477206ba0f69 100644
+--- a/arch/arm/boot/dts/imx6ul-kontron-n6x1x-s.dtsi
++++ b/arch/arm/boot/dts/imx6ul-kontron-n6x1x-s.dtsi
+@@ -170,7 +170,6 @@ &uart2 {
+ pinctrl-0 = <&pinctrl_uart2>;
+ linux,rs485-enabled-at-boot-time;
+ rs485-rx-during-tx;
+- rs485-rts-active-low;
+ uart-has-rtscts;
+ status = "okay";
+ };
+diff --git a/arch/arm/boot/dts/vfxxx.dtsi b/arch/arm/boot/dts/vfxxx.dtsi
+index 2259d11af721e8..fb7709b8a3344a 100644
+--- a/arch/arm/boot/dts/vfxxx.dtsi
++++ b/arch/arm/boot/dts/vfxxx.dtsi
+@@ -617,7 +617,7 @@ usbmisc1: usb@400b4800 {
+
+ ftm: ftm@400b8000 {
+ compatible = "fsl,ftm-timer";
+- reg = <0x400b8000 0x1000 0x400b9000 0x1000>;
++ reg = <0x400b8000 0x1000>, <0x400b9000 0x1000>;
+ interrupts = <44 IRQ_TYPE_LEVEL_HIGH>;
+ clock-names = "ftm-evt", "ftm-src",
+ "ftm-evt-counter-en", "ftm-src-counter-en";
+diff --git a/arch/arm/mach-rockchip/platsmp.c b/arch/arm/mach-rockchip/platsmp.c
+index d60856898d97ac..17aee4701e81ac 100644
+--- a/arch/arm/mach-rockchip/platsmp.c
++++ b/arch/arm/mach-rockchip/platsmp.c
+@@ -279,11 +279,6 @@ static void __init rockchip_smp_prepare_cpus(unsigned int max_cpus)
+ }
+
+ if (read_cpuid_part() == ARM_CPU_PART_CORTEX_A9) {
+- if (rockchip_smp_prepare_sram(node)) {
+- of_node_put(node);
+- return;
+- }
+-
+ /* enable the SCU power domain */
+ pmu_set_power_domain(PMU_PWRDN_SCU, true);
+
+@@ -316,11 +311,19 @@ static void __init rockchip_smp_prepare_cpus(unsigned int max_cpus)
+ asm ("mrc p15, 1, %0, c9, c0, 2\n" : "=r" (l2ctlr));
+ ncores = ((l2ctlr >> 24) & 0x3) + 1;
+ }
+- of_node_put(node);
+
+ /* Make sure that all cores except the first are really off */
+ for (i = 1; i < ncores; i++)
+ pmu_set_power_domain(0 + i, false);
++
++ if (read_cpuid_part() == ARM_CPU_PART_CORTEX_A9) {
++ if (rockchip_smp_prepare_sram(node)) {
++ of_node_put(node);
++ return;
++ }
++ }
++
++ of_node_put(node);
+ }
+
+ static void __init rk3036_smp_prepare_cpus(unsigned int max_cpus)
+diff --git a/arch/arm/mach-tegra/reset.c b/arch/arm/mach-tegra/reset.c
+index d5c805adf7a82b..ea706fac63587a 100644
+--- a/arch/arm/mach-tegra/reset.c
++++ b/arch/arm/mach-tegra/reset.c
+@@ -63,7 +63,7 @@ static void __init tegra_cpu_reset_handler_enable(void)
+ BUG_ON(is_enabled);
+ BUG_ON(tegra_cpu_reset_handler_size > TEGRA_IRAM_RESET_HANDLER_SIZE);
+
+- memcpy(iram_base, (void *)__tegra_cpu_reset_handler_start,
++ memcpy_toio(iram_base, (void *)__tegra_cpu_reset_handler_start,
+ tegra_cpu_reset_handler_size);
+
+ err = call_firmware_op(set_cpu_boot_addr, 0, reset_address);
+diff --git a/arch/arm64/boot/dts/freescale/imx8mm-beacon-som.dtsi b/arch/arm64/boot/dts/freescale/imx8mm-beacon-som.dtsi
+index 34b2e862b7083e..f97e8a8fd16ff2 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mm-beacon-som.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mm-beacon-som.dtsi
+@@ -246,6 +246,8 @@ &usdhc3 {
+ pinctrl-0 = <&pinctrl_usdhc3>;
+ pinctrl-1 = <&pinctrl_usdhc3_100mhz>;
+ pinctrl-2 = <&pinctrl_usdhc3_200mhz>;
++ assigned-clocks = <&clk IMX8MM_CLK_USDHC3>;
++ assigned-clock-rates = <400000000>;
+ bus-width = <8>;
+ non-removable;
+ status = "okay";
+diff --git a/arch/arm64/include/asm/acpi.h b/arch/arm64/include/asm/acpi.h
+index 702587fda70cf7..8cbbd08cc8c54f 100644
+--- a/arch/arm64/include/asm/acpi.h
++++ b/arch/arm64/include/asm/acpi.h
+@@ -128,7 +128,7 @@ acpi_set_mailbox_entry(int cpu, struct acpi_madt_generic_interrupt *processor)
+ {}
+ #endif
+
+-static inline const char *acpi_get_enable_method(int cpu)
++static __always_inline const char *acpi_get_enable_method(int cpu)
+ {
+ if (acpi_psci_present())
+ return "psci";
+diff --git a/arch/arm64/mm/ptdump_debugfs.c b/arch/arm64/mm/ptdump_debugfs.c
+index d29d722ec3ec6e..457bf5f03771cf 100644
+--- a/arch/arm64/mm/ptdump_debugfs.c
++++ b/arch/arm64/mm/ptdump_debugfs.c
+@@ -1,6 +1,5 @@
+ // SPDX-License-Identifier: GPL-2.0
+ #include <linux/debugfs.h>
+-#include <linux/memory_hotplug.h>
+ #include <linux/seq_file.h>
+
+ #include <asm/ptdump.h>
+@@ -9,9 +8,7 @@ static int ptdump_show(struct seq_file *m, void *v)
+ {
+ struct ptdump_info *info = m->private;
+
+- get_online_mems();
+ ptdump_walk(m, info);
+- put_online_mems();
+ return 0;
+ }
+ DEFINE_SHOW_ATTRIBUTE(ptdump);
+diff --git a/arch/m68k/Kconfig.debug b/arch/m68k/Kconfig.debug
+index 11b306bdd7886a..5a3713170a619e 100644
+--- a/arch/m68k/Kconfig.debug
++++ b/arch/m68k/Kconfig.debug
+@@ -10,7 +10,7 @@ config BOOTPARAM_STRING
+
+ config EARLY_PRINTK
+ bool "Early printk"
+- depends on !(SUN3 || M68000 || COLDFIRE)
++ depends on MMU_MOTOROLA
+ help
+ Write kernel log output directly to a serial port.
+ Where implemented, output goes to the framebuffer as well.
+diff --git a/arch/m68k/kernel/early_printk.c b/arch/m68k/kernel/early_printk.c
+index f11ef9f1f56fcf..521cbb8a150c99 100644
+--- a/arch/m68k/kernel/early_printk.c
++++ b/arch/m68k/kernel/early_printk.c
+@@ -16,25 +16,10 @@
+ #include "../mvme147/mvme147.h"
+ #include "../mvme16x/mvme16x.h"
+
+-asmlinkage void __init debug_cons_nputs(const char *s, unsigned n);
+-
+-static void __ref debug_cons_write(struct console *c,
+- const char *s, unsigned n)
+-{
+-#if !(defined(CONFIG_SUN3) || defined(CONFIG_M68000) || \
+- defined(CONFIG_COLDFIRE))
+- if (MACH_IS_MVME147)
+- mvme147_scc_write(c, s, n);
+- else if (MACH_IS_MVME16x)
+- mvme16x_cons_write(c, s, n);
+- else
+- debug_cons_nputs(s, n);
+-#endif
+-}
++asmlinkage void __init debug_cons_nputs(struct console *c, const char *s, unsigned int n);
+
+ static struct console early_console_instance = {
+ .name = "debug",
+- .write = debug_cons_write,
+ .flags = CON_PRINTBUFFER | CON_BOOT,
+ .index = -1
+ };
+@@ -44,6 +29,12 @@ static int __init setup_early_printk(char *buf)
+ if (early_console || buf)
+ return 0;
+
++ if (MACH_IS_MVME147)
++ early_console_instance.write = mvme147_scc_write;
++ else if (MACH_IS_MVME16x)
++ early_console_instance.write = mvme16x_cons_write;
++ else
++ early_console_instance.write = debug_cons_nputs;
+ early_console = &early_console_instance;
+ register_console(early_console);
+
+@@ -51,20 +42,15 @@ static int __init setup_early_printk(char *buf)
+ }
+ early_param("earlyprintk", setup_early_printk);
+
+-/*
+- * debug_cons_nputs() defined in arch/m68k/kernel/head.S cannot be called
+- * after init sections are discarded (for platforms that use it).
+- */
+-#if !(defined(CONFIG_SUN3) || defined(CONFIG_M68000) || \
+- defined(CONFIG_COLDFIRE))
+-
+ static int __init unregister_early_console(void)
+ {
+- if (!early_console || MACH_IS_MVME16x)
+- return 0;
++ /*
++ * debug_cons_nputs() defined in arch/m68k/kernel/head.S cannot be
++ * called after init sections are discarded (for platforms that use it).
++ */
++ if (early_console && early_console->write == debug_cons_nputs)
++ return unregister_console(early_console);
+
+- return unregister_console(early_console);
++ return 0;
+ }
+ late_initcall(unregister_early_console);
+-
+-#endif
+diff --git a/arch/m68k/kernel/head.S b/arch/m68k/kernel/head.S
+index 493c95db0e512d..094afc355d89b5 100644
+--- a/arch/m68k/kernel/head.S
++++ b/arch/m68k/kernel/head.S
+@@ -3242,8 +3242,8 @@ func_return putn
+ * turns around and calls the internal routines. This routine
+ * is used by the boot console.
+ *
+- * The calling parameters are:
+- * void debug_cons_nputs(const char *str, unsigned length)
++ * The function signature is -
++ * void debug_cons_nputs(struct console *c, const char *s, unsigned int n)
+ *
+ * This routine does NOT understand variable arguments only
+ * simple strings!
+@@ -3252,8 +3252,8 @@ ENTRY(debug_cons_nputs)
+ moveml %d0/%d1/%a0,%sp@-
+ movew %sr,%sp@-
+ ori #0x0700,%sr
+- movel %sp@(18),%a0 /* fetch parameter */
+- movel %sp@(22),%d1 /* fetch parameter */
++ movel %sp@(22),%a0 /* char *s */
++ movel %sp@(26),%d1 /* unsigned int n */
+ jra 2f
+ 1:
+ #ifdef CONSOLE_DEBUG
+@@ -3379,6 +3379,7 @@ L(console_clear_loop):
+
+ movel %d4,%d1 /* screen height in pixels */
+ divul %a0@(FONT_DESC_HEIGHT),%d1 /* d1 = max num rows */
++ subql #1,%d1 /* row range is 0 to num - 1 */
+
+ movel %d0,%a2@(Lconsole_struct_num_columns)
+ movel %d1,%a2@(Lconsole_struct_num_rows)
+@@ -3525,15 +3526,14 @@ func_start console_putc,%a0/%a1/%d0-%d7
+ cmpib #10,%d7
+ jne L(console_not_lf)
+ movel %a0@(Lconsole_struct_cur_row),%d0
+- addil #1,%d0
+- movel %d0,%a0@(Lconsole_struct_cur_row)
+ movel %a0@(Lconsole_struct_num_rows),%d1
+ cmpl %d1,%d0
+ jcs 1f
+- subil #1,%d0
+- movel %d0,%a0@(Lconsole_struct_cur_row)
+ console_scroll
++ jra L(console_exit)
+ 1:
++ addql #1,%d0
++ movel %d0,%a0@(Lconsole_struct_cur_row)
+ jra L(console_exit)
+
+ L(console_not_lf):
+@@ -3560,12 +3560,6 @@ L(console_not_cr):
+ */
+ L(console_not_home):
+ movel %a0@(Lconsole_struct_cur_column),%d0
+- addql #1,%a0@(Lconsole_struct_cur_column)
+- movel %a0@(Lconsole_struct_num_columns),%d1
+- cmpl %d1,%d0
+- jcs 1f
+- console_putc #'\n' /* recursion is OK! */
+-1:
+ movel %a0@(Lconsole_struct_cur_row),%d1
+
+ /*
+@@ -3612,6 +3606,23 @@ L(console_do_font_scanline):
+ addq #1,%d1
+ dbra %d7,L(console_read_char_scanline)
+
++ /*
++ * Register usage in the code below:
++ * a0 = pointer to console globals
++ * d0 = cursor column
++ * d1 = cursor column limit
++ */
++
++ lea %pc@(L(console_globals)),%a0
++
++ movel %a0@(Lconsole_struct_cur_column),%d0
++ addql #1,%d0
++ movel %d0,%a0@(Lconsole_struct_cur_column) /* Update cursor pos */
++ movel %a0@(Lconsole_struct_num_columns),%d1
++ cmpl %d1,%d0
++ jcs L(console_exit)
++ console_putc #'\n' /* Line wrap using tail recursion */
++
+ L(console_exit):
+ func_return console_putc
+
+diff --git a/arch/mips/crypto/chacha-core.S b/arch/mips/crypto/chacha-core.S
+index 5755f69cfe0074..706aeb850fb0d6 100644
+--- a/arch/mips/crypto/chacha-core.S
++++ b/arch/mips/crypto/chacha-core.S
+@@ -55,17 +55,13 @@
+ #if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+ #define MSB 0
+ #define LSB 3
+-#define ROTx rotl
+-#define ROTR(n) rotr n, 24
+ #define CPU_TO_LE32(n) \
+- wsbh n; \
++ wsbh n, n; \
+ rotr n, 16;
+ #else
+ #define MSB 3
+ #define LSB 0
+-#define ROTx rotr
+ #define CPU_TO_LE32(n)
+-#define ROTR(n)
+ #endif
+
+ #define FOR_EACH_WORD(x) \
+@@ -192,10 +188,10 @@ CONCAT3(.Lchacha_mips_xor_aligned_, PLUS_ONE(x), _b: ;) \
+ xor X(W), X(B); \
+ xor X(Y), X(C); \
+ xor X(Z), X(D); \
+- rotl X(V), S; \
+- rotl X(W), S; \
+- rotl X(Y), S; \
+- rotl X(Z), S;
++ rotr X(V), 32 - S; \
++ rotr X(W), 32 - S; \
++ rotr X(Y), 32 - S; \
++ rotr X(Z), 32 - S;
+
+ .text
+ .set reorder
+@@ -372,21 +368,19 @@ chacha_crypt_arch:
+ /* First byte */
+ lbu T1, 0(IN)
+ addiu $at, BYTES, 1
+- CPU_TO_LE32(SAVED_X)
+- ROTR(SAVED_X)
+ xor T1, SAVED_X
+ sb T1, 0(OUT)
+ beqz $at, .Lchacha_mips_xor_done
+ /* Second byte */
+ lbu T1, 1(IN)
+ addiu $at, BYTES, 2
+- ROTx SAVED_X, 8
++ rotr SAVED_X, 8
+ xor T1, SAVED_X
+ sb T1, 1(OUT)
+ beqz $at, .Lchacha_mips_xor_done
+ /* Third byte */
+ lbu T1, 2(IN)
+- ROTx SAVED_X, 8
++ rotr SAVED_X, 8
+ xor T1, SAVED_X
+ sb T1, 2(OUT)
+ b .Lchacha_mips_xor_done
+diff --git a/arch/mips/include/asm/vpe.h b/arch/mips/include/asm/vpe.h
+index 012731546cf607..3de6b0ff1627c5 100644
+--- a/arch/mips/include/asm/vpe.h
++++ b/arch/mips/include/asm/vpe.h
+@@ -126,4 +126,12 @@ void cleanup_tc(struct tc *tc);
+
+ int __init vpe_module_init(void);
+ void __exit vpe_module_exit(void);
++
++#ifdef CONFIG_MIPS_VPE_LOADER_MT
++void *vpe_alloc(void);
++int vpe_start(void *vpe, unsigned long start);
++int vpe_stop(void *vpe);
++int vpe_free(void *vpe);
++#endif /* CONFIG_MIPS_VPE_LOADER_MT */
++
+ #endif /* _ASM_VPE_H */
+diff --git a/arch/mips/kernel/process.c b/arch/mips/kernel/process.c
+index 98ecaf6f3edb0c..0a5710a4d69651 100644
+--- a/arch/mips/kernel/process.c
++++ b/arch/mips/kernel/process.c
+@@ -675,18 +675,20 @@ unsigned long mips_stack_top(void)
+ }
+
+ /* Space for the VDSO, data page & GIC user page */
+- top -= PAGE_ALIGN(current->thread.abi->vdso->size);
+- top -= PAGE_SIZE;
+- top -= mips_gic_present() ? PAGE_SIZE : 0;
++ if (current->thread.abi) {
++ top -= PAGE_ALIGN(current->thread.abi->vdso->size);
++ top -= PAGE_SIZE;
++ top -= mips_gic_present() ? PAGE_SIZE : 0;
++
++ /* Space to randomize the VDSO base */
++ if (current->flags & PF_RANDOMIZE)
++ top -= VDSO_RANDOMIZE_SIZE;
++ }
+
+ /* Space for cache colour alignment */
+ if (cpu_has_dc_aliases)
+ top -= shm_align_mask + 1;
+
+- /* Space to randomize the VDSO base */
+- if (current->flags & PF_RANDOMIZE)
+- top -= VDSO_RANDOMIZE_SIZE;
+-
+ return top;
+ }
+
+diff --git a/arch/mips/mm/tlb-r4k.c b/arch/mips/mm/tlb-r4k.c
+index 1b939abbe4caaf..2e987b6e42bc16 100644
+--- a/arch/mips/mm/tlb-r4k.c
++++ b/arch/mips/mm/tlb-r4k.c
+@@ -498,6 +498,60 @@ static int __init set_ntlb(char *str)
+
+ __setup("ntlb=", set_ntlb);
+
++/* Initialise all TLB entries with unique values */
++static void r4k_tlb_uniquify(void)
++{
++ int entry = num_wired_entries();
++
++ htw_stop();
++ write_c0_entrylo0(0);
++ write_c0_entrylo1(0);
++
++ while (entry < current_cpu_data.tlbsize) {
++		unsigned long asid_mask = cpu_asid_mask(&current_cpu_data);
++ unsigned long asid = 0;
++ int idx;
++
++ /* Skip wired MMID to make ginvt_mmid work */
++ if (cpu_has_mmid)
++ asid = MMID_KERNEL_WIRED + 1;
++
++ /* Check for match before using UNIQUE_ENTRYHI */
++ do {
++ if (cpu_has_mmid) {
++ write_c0_memorymapid(asid);
++ write_c0_entryhi(UNIQUE_ENTRYHI(entry));
++ } else {
++ write_c0_entryhi(UNIQUE_ENTRYHI(entry) | asid);
++ }
++ mtc0_tlbw_hazard();
++ tlb_probe();
++ tlb_probe_hazard();
++ idx = read_c0_index();
++ /* No match or match is on current entry */
++ if (idx < 0 || idx == entry)
++ break;
++ /*
++ * If we hit a match, we need to try again with
++ * a different ASID.
++ */
++ asid++;
++ } while (asid < asid_mask);
++
++ if (idx >= 0 && idx != entry)
++ panic("Unable to uniquify TLB entry %d", idx);
++
++ write_c0_index(entry);
++ mtc0_tlbw_hazard();
++ tlb_write_indexed();
++ entry++;
++ }
++
++ tlbw_use_hazard();
++ htw_start();
++ flush_micro_tlb();
++}
++
+ /*
+ * Configure TLB (for init or after a CPU has been powered off).
+ */
+@@ -537,7 +591,7 @@ static void r4k_tlb_configure(void)
+ temp_tlb_entry = current_cpu_data.tlbsize - 1;
+
+ /* From this point on the ARC firmware is dead. */
+- local_flush_tlb_all();
++ r4k_tlb_uniquify();
+
+ /* Did I tell you that ARC SUCKS? */
+ }
+diff --git a/arch/parisc/Makefile b/arch/parisc/Makefile
+index 0cf86ed2b7c17f..3b615c1840a880 100644
+--- a/arch/parisc/Makefile
++++ b/arch/parisc/Makefile
+@@ -141,7 +141,7 @@ palo lifimage: vmlinuz
+ fi
+ @if test ! -f "$(PALOCONF)"; then \
+ cp $(srctree)/arch/parisc/defpalo.conf $(objtree)/palo.conf; \
+- echo 'A generic palo config file ($(objree)/palo.conf) has been created for you.'; \
++ echo 'A generic palo config file ($(objtree)/palo.conf) has been created for you.'; \
+ echo 'You should check it and re-run "make palo".'; \
+ echo 'WARNING: the "lifimage" file is now placed in this directory by default!'; \
+ false; \
+diff --git a/arch/powerpc/configs/ppc6xx_defconfig b/arch/powerpc/configs/ppc6xx_defconfig
+index 021da6736570e8..7c14fecc7154f2 100644
+--- a/arch/powerpc/configs/ppc6xx_defconfig
++++ b/arch/powerpc/configs/ppc6xx_defconfig
+@@ -265,7 +265,6 @@ CONFIG_NET_SCH_DSMARK=m
+ CONFIG_NET_SCH_NETEM=m
+ CONFIG_NET_SCH_INGRESS=m
+ CONFIG_NET_CLS_BASIC=m
+-CONFIG_NET_CLS_TCINDEX=m
+ CONFIG_NET_CLS_ROUTE4=m
+ CONFIG_NET_CLS_FW=m
+ CONFIG_NET_CLS_U32=m
+diff --git a/arch/powerpc/kernel/eeh.c b/arch/powerpc/kernel/eeh.c
+index fbc6eaaf10e1fa..cd5364e8fe3d2a 100644
+--- a/arch/powerpc/kernel/eeh.c
++++ b/arch/powerpc/kernel/eeh.c
+@@ -1140,6 +1140,7 @@ int eeh_unfreeze_pe(struct eeh_pe *pe)
+
+ return ret;
+ }
++EXPORT_SYMBOL_GPL(eeh_unfreeze_pe);
+
+
+ static struct pci_device_id eeh_reset_ids[] = {
+diff --git a/arch/powerpc/kernel/eeh_driver.c b/arch/powerpc/kernel/eeh_driver.c
+index 665d847ef9b5a5..ed5be1bff60ca6 100644
+--- a/arch/powerpc/kernel/eeh_driver.c
++++ b/arch/powerpc/kernel/eeh_driver.c
+@@ -258,13 +258,12 @@ static void eeh_pe_report_edev(struct eeh_dev *edev, eeh_report_fn fn,
+ struct pci_driver *driver;
+ enum pci_ers_result new_result;
+
+- pci_lock_rescan_remove();
+ pdev = edev->pdev;
+ if (pdev)
+ get_device(&pdev->dev);
+- pci_unlock_rescan_remove();
+ if (!pdev) {
+ eeh_edev_info(edev, "no device");
++ *result = PCI_ERS_RESULT_DISCONNECT;
+ return;
+ }
+ device_lock(&pdev->dev);
+@@ -305,8 +304,9 @@ static void eeh_pe_report(const char *name, struct eeh_pe *root,
+ struct eeh_dev *edev, *tmp;
+
+ pr_info("EEH: Beginning: '%s'\n", name);
+- eeh_for_each_pe(root, pe) eeh_pe_for_each_dev(pe, edev, tmp)
+- eeh_pe_report_edev(edev, fn, result);
++ eeh_for_each_pe(root, pe)
++ eeh_pe_for_each_dev(pe, edev, tmp)
++ eeh_pe_report_edev(edev, fn, result);
+ if (result)
+ pr_info("EEH: Finished:'%s' with aggregate recovery state:'%s'\n",
+ name, pci_ers_result_name(*result));
+@@ -384,6 +384,8 @@ static void eeh_dev_restore_state(struct eeh_dev *edev, void *userdata)
+ if (!edev)
+ return;
+
++ pci_lock_rescan_remove();
++
+ /*
+ * The content in the config space isn't saved because
+ * the blocked config space on some adapters. We have
+@@ -394,14 +396,19 @@ static void eeh_dev_restore_state(struct eeh_dev *edev, void *userdata)
+ if (list_is_last(&edev->entry, &edev->pe->edevs))
+ eeh_pe_restore_bars(edev->pe);
+
++ pci_unlock_rescan_remove();
+ return;
+ }
+
+ pdev = eeh_dev_to_pci_dev(edev);
+- if (!pdev)
++ if (!pdev) {
++ pci_unlock_rescan_remove();
+ return;
++ }
+
+ pci_restore_state(pdev);
++
++ pci_unlock_rescan_remove();
+ }
+
+ /**
+@@ -648,9 +655,7 @@ static int eeh_reset_device(struct eeh_pe *pe, struct pci_bus *bus,
+ if (any_passed || driver_eeh_aware || (pe->type & EEH_PE_VF)) {
+ eeh_pe_dev_traverse(pe, eeh_rmv_device, rmv_data);
+ } else {
+- pci_lock_rescan_remove();
+ pci_hp_remove_devices(bus);
+- pci_unlock_rescan_remove();
+ }
+
+ /*
+@@ -666,8 +671,6 @@ static int eeh_reset_device(struct eeh_pe *pe, struct pci_bus *bus,
+ if (rc)
+ return rc;
+
+- pci_lock_rescan_remove();
+-
+ /* Restore PE */
+ eeh_ops->configure_bridge(pe);
+ eeh_pe_restore_bars(pe);
+@@ -675,7 +678,6 @@ static int eeh_reset_device(struct eeh_pe *pe, struct pci_bus *bus,
+ /* Clear frozen state */
+ rc = eeh_clear_pe_frozen_state(pe, false);
+ if (rc) {
+- pci_unlock_rescan_remove();
+ return rc;
+ }
+
+@@ -710,7 +712,6 @@ static int eeh_reset_device(struct eeh_pe *pe, struct pci_bus *bus,
+ pe->tstamp = tstamp;
+ pe->freeze_count = cnt;
+
+- pci_unlock_rescan_remove();
+ return 0;
+ }
+
+@@ -844,10 +845,13 @@ void eeh_handle_normal_event(struct eeh_pe *pe)
+ {LIST_HEAD_INIT(rmv_data.removed_vf_list), 0};
+ int devices = 0;
+
++ pci_lock_rescan_remove();
++
+ bus = eeh_pe_bus_get(pe);
+ if (!bus) {
+ pr_err("%s: Cannot find PCI bus for PHB#%x-PE#%x\n",
+ __func__, pe->phb->global_number, pe->addr);
++ pci_unlock_rescan_remove();
+ return;
+ }
+
+@@ -1089,10 +1093,15 @@ void eeh_handle_normal_event(struct eeh_pe *pe)
+ eeh_pe_state_clear(pe, EEH_PE_PRI_BUS, true);
+ eeh_pe_dev_mode_mark(pe, EEH_DEV_REMOVED);
+
+- pci_lock_rescan_remove();
+- pci_hp_remove_devices(bus);
+- pci_unlock_rescan_remove();
++ bus = eeh_pe_bus_get(pe);
++ if (bus)
++ pci_hp_remove_devices(bus);
++ else
++ pr_err("%s: PCI bus for PHB#%x-PE#%x disappeared\n",
++ __func__, pe->phb->global_number, pe->addr);
++
+ /* The passed PE should no longer be used */
++ pci_unlock_rescan_remove();
+ return;
+ }
+
+@@ -1109,6 +1118,8 @@ void eeh_handle_normal_event(struct eeh_pe *pe)
+ eeh_clear_slot_attention(edev->pdev);
+
+ eeh_pe_state_clear(pe, EEH_PE_RECOVERING, true);
++
++ pci_unlock_rescan_remove();
+ }
+
+ /**
+@@ -1127,6 +1138,7 @@ void eeh_handle_special_event(void)
+ unsigned long flags;
+ int rc;
+
++ pci_lock_rescan_remove();
+
+ do {
+ rc = eeh_ops->next_error(&pe);
+@@ -1166,10 +1178,12 @@ void eeh_handle_special_event(void)
+
+ break;
+ case EEH_NEXT_ERR_NONE:
++ pci_unlock_rescan_remove();
+ return;
+ default:
+ pr_warn("%s: Invalid value %d from next_error()\n",
+ __func__, rc);
++ pci_unlock_rescan_remove();
+ return;
+ }
+
+@@ -1181,7 +1195,9 @@ void eeh_handle_special_event(void)
+ if (rc == EEH_NEXT_ERR_FROZEN_PE ||
+ rc == EEH_NEXT_ERR_FENCED_PHB) {
+ eeh_pe_state_mark(pe, EEH_PE_RECOVERING);
++ pci_unlock_rescan_remove();
+ eeh_handle_normal_event(pe);
++ pci_lock_rescan_remove();
+ } else {
+ eeh_for_each_pe(pe, tmp_pe)
+ eeh_pe_for_each_dev(tmp_pe, edev, tmp_edev)
+@@ -1194,7 +1210,6 @@ void eeh_handle_special_event(void)
+ eeh_report_failure, NULL);
+ eeh_set_channel_state(pe, pci_channel_io_perm_failure);
+
+- pci_lock_rescan_remove();
+ list_for_each_entry(hose, &hose_list, list_node) {
+ phb_pe = eeh_phb_pe_get(hose);
+ if (!phb_pe ||
+@@ -1213,7 +1228,6 @@ void eeh_handle_special_event(void)
+ }
+ pci_hp_remove_devices(bus);
+ }
+- pci_unlock_rescan_remove();
+ }
+
+ /*
+@@ -1223,4 +1237,6 @@ void eeh_handle_special_event(void)
+ if (rc == EEH_NEXT_ERR_DEAD_IOC)
+ break;
+ } while (rc != EEH_NEXT_ERR_NONE);
++
++ pci_unlock_rescan_remove();
+ }
+diff --git a/arch/powerpc/kernel/eeh_pe.c b/arch/powerpc/kernel/eeh_pe.c
+index a856d9ba42d206..fea58e9546f981 100644
+--- a/arch/powerpc/kernel/eeh_pe.c
++++ b/arch/powerpc/kernel/eeh_pe.c
+@@ -670,11 +670,12 @@ static void eeh_bridge_check_link(struct eeh_dev *edev)
+ eeh_ops->write_config(edev, cap + PCI_EXP_LNKCTL, 2, val);
+
+ /* Check link */
+- eeh_ops->read_config(edev, cap + PCI_EXP_LNKCAP, 4, &val);
+- if (!(val & PCI_EXP_LNKCAP_DLLLARC)) {
+- eeh_edev_dbg(edev, "No link reporting capability (0x%08x) \n", val);
+- msleep(1000);
+- return;
++ if (edev->pdev) {
++ if (!edev->pdev->link_active_reporting) {
++ eeh_edev_dbg(edev, "No link reporting capability\n");
++ msleep(1000);
++ return;
++ }
+ }
+
+ /* Wait the link is up until timeout (5s) */
+diff --git a/arch/powerpc/kernel/pci-hotplug.c b/arch/powerpc/kernel/pci-hotplug.c
+index 2fc12198ec071c..62de678f9f50e4 100644
+--- a/arch/powerpc/kernel/pci-hotplug.c
++++ b/arch/powerpc/kernel/pci-hotplug.c
+@@ -110,6 +110,9 @@ void pci_hp_add_devices(struct pci_bus *bus)
+ struct pci_controller *phb;
+ struct device_node *dn = pci_bus_to_OF_node(bus);
+
++ if (!dn)
++ return;
++
+ phb = pci_bus_to_host(bus);
+
+ mode = PCI_PROBE_NORMAL;
+diff --git a/arch/powerpc/platforms/512x/mpc512x_lpbfifo.c b/arch/powerpc/platforms/512x/mpc512x_lpbfifo.c
+index 04bf6ecf7d55d6..85e0fa7d902b75 100644
+--- a/arch/powerpc/platforms/512x/mpc512x_lpbfifo.c
++++ b/arch/powerpc/platforms/512x/mpc512x_lpbfifo.c
+@@ -240,10 +240,8 @@ static int mpc512x_lpbfifo_kick(void)
+ dma_conf.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
+
+ /* Make DMA channel work with LPB FIFO data register */
+- if (dma_dev->device_config(lpbfifo.chan, &dma_conf)) {
+- ret = -EINVAL;
+- goto err_dma_prep;
+- }
++ if (dma_dev->device_config(lpbfifo.chan, &dma_conf))
++ return -EINVAL;
+
+ sg_init_table(&sg, 1);
+
+diff --git a/arch/s390/hypfs/hypfs_dbfs.c b/arch/s390/hypfs/hypfs_dbfs.c
+index f4c7dbfaf8ee9d..5848f2e374a6dc 100644
+--- a/arch/s390/hypfs/hypfs_dbfs.c
++++ b/arch/s390/hypfs/hypfs_dbfs.c
+@@ -6,6 +6,7 @@
+ * Author(s): Michael Holzheu <holzheu@linux.vnet.ibm.com>
+ */
+
++#include <linux/security.h>
+ #include <linux/slab.h>
+ #include "hypfs.h"
+
+@@ -64,24 +65,28 @@ static long dbfs_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ long rc;
+
+ mutex_lock(&df->lock);
+- if (df->unlocked_ioctl)
+- rc = df->unlocked_ioctl(file, cmd, arg);
+- else
+- rc = -ENOTTY;
++ rc = df->unlocked_ioctl(file, cmd, arg);
+ mutex_unlock(&df->lock);
+ return rc;
+ }
+
+-static const struct file_operations dbfs_ops = {
++static const struct file_operations dbfs_ops_ioctl = {
+ .read = dbfs_read,
+ .llseek = no_llseek,
+ .unlocked_ioctl = dbfs_ioctl,
+ };
+
++static const struct file_operations dbfs_ops = {
++ .read = dbfs_read,
++};
++
+ void hypfs_dbfs_create_file(struct hypfs_dbfs_file *df)
+ {
+- df->dentry = debugfs_create_file(df->name, 0400, dbfs_dir, df,
+- &dbfs_ops);
++ const struct file_operations *fops = &dbfs_ops;
++
++ if (df->unlocked_ioctl && !security_locked_down(LOCKDOWN_DEBUGFS))
++ fops = &dbfs_ops_ioctl;
++ df->dentry = debugfs_create_file(df->name, 0400, dbfs_dir, df, fops);
+ mutex_init(&df->lock);
+ }
+
+diff --git a/arch/s390/include/asm/timex.h b/arch/s390/include/asm/timex.h
+index 588aa0f2c842c3..d0260a1ec298bf 100644
+--- a/arch/s390/include/asm/timex.h
++++ b/arch/s390/include/asm/timex.h
+@@ -167,13 +167,6 @@ static inline unsigned long long get_tod_clock_fast(void)
+ return get_tod_clock();
+ #endif
+ }
+-
+-static inline cycles_t get_cycles(void)
+-{
+- return (cycles_t) get_tod_clock() >> 2;
+-}
+-#define get_cycles get_cycles
+-
+ int get_phys_clock(unsigned long *clock);
+ void init_cpu_timer(void);
+
+@@ -196,6 +189,12 @@ static inline unsigned long long get_tod_clock_monotonic(void)
+ return tod;
+ }
+
++static inline cycles_t get_cycles(void)
++{
++ return (cycles_t)get_tod_clock_monotonic() >> 2;
++}
++#define get_cycles get_cycles
++
+ /**
+ * tod_to_ns - convert a TOD format value to nanoseconds
+ * @todval: to be converted TOD format value
+diff --git a/arch/s390/kernel/time.c b/arch/s390/kernel/time.c
+index b6517453fa2348..cd79e9cd785fd3 100644
+--- a/arch/s390/kernel/time.c
++++ b/arch/s390/kernel/time.c
+@@ -587,7 +587,7 @@ static int stp_sync_clock(void *data)
+ atomic_dec(&sync->cpus);
+ /* Wait for in_sync to be set. */
+ while (READ_ONCE(sync->in_sync) == 0)
+- __udelay(1);
++ ;
+ }
+ if (sync->in_sync != 1)
+ /* Didn't work. Clear per-cpu in sync bit again. */
+diff --git a/arch/s390/mm/dump_pagetables.c b/arch/s390/mm/dump_pagetables.c
+index 8f9ff7e7187d7e..b83c684ad6bbc0 100644
+--- a/arch/s390/mm/dump_pagetables.c
++++ b/arch/s390/mm/dump_pagetables.c
+@@ -218,11 +218,9 @@ static int ptdump_show(struct seq_file *m, void *v)
+ .marker = address_markers,
+ };
+
+- get_online_mems();
+ mutex_lock(&cpa_mutex);
+ ptdump_walk_pgd(&st.ptdump, &init_mm, NULL);
+ mutex_unlock(&cpa_mutex);
+- put_online_mems();
+ return 0;
+ }
+ DEFINE_SHOW_ATTRIBUTE(ptdump);
+diff --git a/arch/sh/Makefile b/arch/sh/Makefile
+index 2faebfd72ecab5..8e8e24227fffd8 100644
+--- a/arch/sh/Makefile
++++ b/arch/sh/Makefile
+@@ -103,16 +103,16 @@ UTS_MACHINE := sh
+ LDFLAGS_vmlinux += -e _stext
+
+ ifdef CONFIG_CPU_LITTLE_ENDIAN
+-ld-bfd := elf32-sh-linux
+-LDFLAGS_vmlinux += --defsym jiffies=jiffies_64 --oformat $(ld-bfd)
++ld_bfd := elf32-sh-linux
++LDFLAGS_vmlinux += --defsym jiffies=jiffies_64 --oformat $(ld_bfd)
+ KBUILD_LDFLAGS += -EL
+ else
+-ld-bfd := elf32-shbig-linux
+-LDFLAGS_vmlinux += --defsym jiffies=jiffies_64+4 --oformat $(ld-bfd)
++ld_bfd := elf32-shbig-linux
++LDFLAGS_vmlinux += --defsym jiffies=jiffies_64+4 --oformat $(ld_bfd)
+ KBUILD_LDFLAGS += -EB
+ endif
+
+-export ld-bfd
++export ld_bfd
+
+ head-y := arch/sh/kernel/head_32.o
+
+diff --git a/arch/sh/boot/compressed/Makefile b/arch/sh/boot/compressed/Makefile
+index 589d2d8a573dbd..d4baaaace17f57 100644
+--- a/arch/sh/boot/compressed/Makefile
++++ b/arch/sh/boot/compressed/Makefile
+@@ -30,7 +30,7 @@ endif
+
+ ccflags-remove-$(CONFIG_MCOUNT) += -pg
+
+-LDFLAGS_vmlinux := --oformat $(ld-bfd) -Ttext $(IMAGE_OFFSET) -e startup \
++LDFLAGS_vmlinux := --oformat $(ld_bfd) -Ttext $(IMAGE_OFFSET) -e startup \
+ -T $(obj)/../../kernel/vmlinux.lds
+
+ #
+@@ -68,7 +68,7 @@ $(obj)/vmlinux.bin.lzo: $(vmlinux.bin.all-y) FORCE
+
+ OBJCOPYFLAGS += -R .empty_zero_page
+
+-LDFLAGS_piggy.o := -r --format binary --oformat $(ld-bfd) -T
++LDFLAGS_piggy.o := -r --format binary --oformat $(ld_bfd) -T
+
+ $(obj)/piggy.o: $(obj)/vmlinux.scr $(obj)/vmlinux.bin.$(suffix-y) FORCE
+ $(call if_changed,ld)
+diff --git a/arch/sh/boot/romimage/Makefile b/arch/sh/boot/romimage/Makefile
+index c7c8be58400cd9..17b03df0a8de4d 100644
+--- a/arch/sh/boot/romimage/Makefile
++++ b/arch/sh/boot/romimage/Makefile
+@@ -13,7 +13,7 @@ mmcif-obj-$(CONFIG_CPU_SUBTYPE_SH7724) := $(obj)/mmcif-sh7724.o
+ load-$(CONFIG_ROMIMAGE_MMCIF) := $(mmcif-load-y)
+ obj-$(CONFIG_ROMIMAGE_MMCIF) := $(mmcif-obj-y)
+
+-LDFLAGS_vmlinux := --oformat $(ld-bfd) -Ttext $(load-y) -e romstart \
++LDFLAGS_vmlinux := --oformat $(ld_bfd) -Ttext $(load-y) -e romstart \
+ -T $(obj)/../../kernel/vmlinux.lds
+
+ $(obj)/vmlinux: $(obj)/head.o $(obj-y) $(obj)/piggy.o FORCE
+@@ -24,7 +24,7 @@ OBJCOPYFLAGS += -j .empty_zero_page
+ $(obj)/zeropage.bin: vmlinux FORCE
+ $(call if_changed,objcopy)
+
+-LDFLAGS_piggy.o := -r --format binary --oformat $(ld-bfd) -T
++LDFLAGS_piggy.o := -r --format binary --oformat $(ld_bfd) -T
+
+ $(obj)/piggy.o: $(obj)/vmlinux.scr $(obj)/zeropage.bin arch/sh/boot/zImage FORCE
+ $(call if_changed,ld)
+diff --git a/arch/x86/include/asm/xen/hypercall.h b/arch/x86/include/asm/xen/hypercall.h
+index 89cd98693efc53..019fc7f78d5361 100644
+--- a/arch/x86/include/asm/xen/hypercall.h
++++ b/arch/x86/include/asm/xen/hypercall.h
+@@ -37,6 +37,7 @@
+ #include <linux/spinlock.h>
+ #include <linux/errno.h>
+ #include <linux/string.h>
++#include <linux/stringify.h>
+ #include <linux/types.h>
+ #include <linux/pgtable.h>
+ #include <linux/instrumentation.h>
+@@ -94,12 +95,13 @@ DECLARE_STATIC_CALL(xen_hypercall, xen_hypercall_func);
+ #ifdef MODULE
+ #define __ADDRESSABLE_xen_hypercall
+ #else
+-#define __ADDRESSABLE_xen_hypercall __ADDRESSABLE_ASM_STR(__SCK__xen_hypercall)
++#define __ADDRESSABLE_xen_hypercall \
++ __stringify(.global STATIC_CALL_KEY(xen_hypercall);)
+ #endif
+
+ #define __HYPERCALL \
+ __ADDRESSABLE_xen_hypercall \
+- "call __SCT__xen_hypercall"
++ __stringify(call STATIC_CALL_TRAMP(xen_hypercall))
+
+ #define __HYPERCALL_ENTRY(x) "a" (x)
+
+diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
+index e67d7603449b71..bf07b2c5418a00 100644
+--- a/arch/x86/kernel/cpu/amd.c
++++ b/arch/x86/kernel/cpu/amd.c
+@@ -599,6 +599,8 @@ static bool amd_check_tsa_microcode(void)
+ p.model = c->x86_model;
+ p.ext_model = c->x86_model >> 4;
+ p.stepping = c->x86_stepping;
++ /* reserved bits are expected to be 0 in test below */
++ p.__reserved = 0;
+
+ if (c->x86 == 0x19) {
+ switch (p.ucode_rev >> 8) {
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index 7c269dcb7cecee..6ff9fd836d874a 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -67,10 +67,9 @@ void (*x86_return_thunk)(void) __ro_after_init = &__x86_return_thunk;
+
+ static void __init set_return_thunk(void *thunk)
+ {
+- if (x86_return_thunk != __x86_return_thunk)
+- pr_warn("x86/bugs: return thunk changed\n");
+-
+ x86_return_thunk = thunk;
++
++ pr_info("active return thunk: %ps\n", thunk);
+ }
+
+ /* Update SPEC_CTRL MSR and its cached copy unconditionally */
+diff --git a/arch/x86/kernel/cpu/mce/amd.c b/arch/x86/kernel/cpu/mce/amd.c
+index c011fe79f0249a..2bd22090a159dc 100644
+--- a/arch/x86/kernel/cpu/mce/amd.c
++++ b/arch/x86/kernel/cpu/mce/amd.c
+@@ -1215,13 +1215,20 @@ static const char *get_name(unsigned int bank, struct threshold_block *b)
+ }
+
+ bank_type = smca_get_bank_type(bank);
+- if (bank_type >= N_SMCA_BANK_TYPES)
+- return NULL;
+
+ if (b && bank_type == SMCA_UMC) {
+ if (b->block < ARRAY_SIZE(smca_umc_block_names))
+ return smca_umc_block_names[b->block];
+- return NULL;
++ }
++
++ if (b && b->block) {
++ snprintf(buf_mcatype, MAX_MCATYPE_NAME_LEN, "th_block_%u", b->block);
++ return buf_mcatype;
++ }
++
++ if (bank_type >= N_SMCA_BANK_TYPES) {
++ snprintf(buf_mcatype, MAX_MCATYPE_NAME_LEN, "th_bank_%u", bank);
++ return buf_mcatype;
+ }
+
+ if (smca_banks[bank].hwid->count == 1)
+diff --git a/arch/x86/mm/extable.c b/arch/x86/mm/extable.c
+index b93d6cd08a7ff0..1ccf065f7af866 100644
+--- a/arch/x86/mm/extable.c
++++ b/arch/x86/mm/extable.c
+@@ -60,13 +60,12 @@ __visible bool ex_handler_fprestore(const struct exception_table_entry *fixup,
+ unsigned long error_code,
+ unsigned long fault_addr)
+ {
+- regs->ip = ex_fixup_addr(fixup);
+-
+ WARN_ONCE(1, "Bad FPU state detected at %pB, reinitializing FPU registers.",
+ (void *)instruction_pointer(regs));
+
+ __copy_kernel_to_fpregs(&init_fpstate, -1);
+- return true;
++
++ return ex_handler_default(fixup, regs, trapnr, error_code, fault_addr);
+ }
+ EXPORT_SYMBOL_GPL(ex_handler_fprestore);
+
+diff --git a/block/bio.c b/block/bio.c
+index 88a09c31095fb7..7851f54edc7634 100644
+--- a/block/bio.c
++++ b/block/bio.c
+@@ -1430,7 +1430,7 @@ void bio_endio(struct bio *bio)
+ if (!bio_integrity_endio(bio))
+ return;
+
+- if (bio->bi_disk)
++ if (bio->bi_disk && bio_flagged(bio, BIO_TRACKED))
+ rq_qos_done_bio(bio->bi_disk->queue, bio);
+
+ /*
+diff --git a/block/blk-settings.c b/block/blk-settings.c
+index ebd373469c8079..18855d4bfda234 100644
+--- a/block/blk-settings.c
++++ b/block/blk-settings.c
+@@ -605,7 +605,7 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
+ }
+
+ /* chunk_sectors a multiple of the physical block size? */
+- if ((t->chunk_sectors << 9) & (t->physical_block_size - 1)) {
++ if (t->chunk_sectors % (t->physical_block_size >> SECTOR_SHIFT)) {
+ t->chunk_sectors = 0;
+ t->misaligned = 1;
+ ret = -1;
+diff --git a/drivers/acpi/acpi_processor.c b/drivers/acpi/acpi_processor.c
+index 707b2c37e5ee6e..74e949d340a19b 100644
+--- a/drivers/acpi/acpi_processor.c
++++ b/drivers/acpi/acpi_processor.c
+@@ -228,7 +228,7 @@ static inline int acpi_processor_hotadd_init(struct acpi_processor *pr)
+
+ static int acpi_processor_get_info(struct acpi_device *device)
+ {
+- union acpi_object object = { 0 };
++ union acpi_object object = { .processor = { 0 } };
+ struct acpi_buffer buffer = { sizeof(union acpi_object), &object };
+ struct acpi_processor *pr = acpi_driver_data(device);
+ int device_declaration = 0;
+diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
+index 72087e05b5a5f2..250ea9ec5f0c25 100644
+--- a/drivers/acpi/apei/ghes.c
++++ b/drivers/acpi/apei/ghes.c
+@@ -860,6 +860,8 @@ static void __ghes_panic(struct ghes *ghes,
+
+ __ghes_print_estatus(KERN_EMERG, ghes->generic, estatus);
+
++ add_taint(TAINT_MACHINE_CHECK, LOCKDEP_STILL_OK);
++
+ ghes_clear_estatus(ghes, estatus, buf_paddr, fixmap_idx);
+
+ if (!panic_timeout)
+diff --git a/drivers/acpi/processor_idle.c b/drivers/acpi/processor_idle.c
+index 42c7bdb352d202..949efdd1b9a18a 100644
+--- a/drivers/acpi/processor_idle.c
++++ b/drivers/acpi/processor_idle.c
+@@ -1128,7 +1128,9 @@ static int acpi_processor_get_lpi_info(struct acpi_processor *pr)
+
+ status = acpi_get_parent(handle, &pr_ahandle);
+ while (ACPI_SUCCESS(status)) {
+- acpi_bus_get_device(pr_ahandle, &d);
++ if (acpi_bus_get_device(pr_ahandle, &d))
++ break;
++
+ handle = pr_ahandle;
+
+ if (strcmp(acpi_device_hid(d), ACPI_PROCESSOR_CONTAINER_HID))
+diff --git a/drivers/acpi/processor_perflib.c b/drivers/acpi/processor_perflib.c
+index fc42d649c7e4fb..b67c2ba7b161c9 100644
+--- a/drivers/acpi/processor_perflib.c
++++ b/drivers/acpi/processor_perflib.c
+@@ -176,6 +176,9 @@ void acpi_processor_ppc_init(struct cpufreq_policy *policy)
+ {
+ unsigned int cpu;
+
++ if (ignore_ppc == 1)
++ return;
++
+ for_each_cpu(cpu, policy->related_cpus) {
+ struct acpi_processor *pr = per_cpu(processors, cpu);
+ int ret;
+@@ -196,6 +199,14 @@ void acpi_processor_ppc_init(struct cpufreq_policy *policy)
+ if (ret < 0)
+ pr_err("Failed to add freq constraint for CPU%d (%d)\n",
+ cpu, ret);
++
++ if (!pr->performance)
++ continue;
++
++ ret = acpi_processor_get_platform_limit(pr);
++ if (ret)
++ pr_err("Failed to update freq constraint for CPU%d (%d)\n",
++ cpu, ret);
+ }
+ }
+
+diff --git a/drivers/ata/Kconfig b/drivers/ata/Kconfig
+index 030cb32da980fc..a752253f7a5e79 100644
+--- a/drivers/ata/Kconfig
++++ b/drivers/ata/Kconfig
+@@ -117,22 +117,39 @@ config SATA_AHCI
+
+ config SATA_MOBILE_LPM_POLICY
+ int "Default SATA Link Power Management policy for mobile chipsets"
+- range 0 4
++ range 0 5
+ default 0
+ depends on SATA_AHCI
+ help
+ Select the Default SATA Link Power Management (LPM) policy to use
+ for mobile / laptop variants of chipsets / "South Bridges".
+
+- The value set has the following meanings:
++ Each policy combines power saving states and features:
++ - Partial: The Phy logic is powered but is in a reduced power
++ state. The exit latency from this state is no longer than
++ 10us.
++ - Slumber: The Phy logic is powered but is in an even lower power
++ state. The exit latency from this state is potentially
++ longer, but no longer than 10ms.
++ - DevSleep: The Phy logic may be powered down. The exit latency from
++ this state is no longer than 20 ms, unless otherwise
++ specified by DETO in the device Identify Device Data log.
++ - HIPM: Host Initiated Power Management (host automatically
++ transitions to partial and slumber).
++ - DIPM: Device Initiated Power Management (device automatically
++ transitions to partial and slumber).
++
++ The possible values for the default SATA link power management
++ policies are:
+ 0 => Keep firmware settings
+- 1 => Maximum performance
+- 2 => Medium power
+- 3 => Medium power with Device Initiated PM enabled
+- 4 => Minimum power
+-
+- Note "Minimum power" is known to cause issues, including disk
+- corruption, with some disks and should not be used.
++ 1 => No power savings (maximum performance)
++ 2 => HIPM (Partial)
++ 3 => HIPM (Partial) and DIPM (Partial and Slumber)
++ 4 => HIPM (Partial and DevSleep) and DIPM (Partial and Slumber)
++ 5 => HIPM (Slumber and DevSleep) and DIPM (Partial and Slumber)
++
++ Excluding the value 0, higher values represent policies with higher
++ power savings.
+
+ config SATA_AHCI_PLATFORM
+ tristate "Platform AHCI SATA support"
+diff --git a/drivers/ata/libata-sata.c b/drivers/ata/libata-sata.c
+index 45656067c547a8..d5c97dba2dd419 100644
+--- a/drivers/ata/libata-sata.c
++++ b/drivers/ata/libata-sata.c
+@@ -815,6 +815,11 @@ static ssize_t ata_scsi_lpm_store(struct device *device,
+
+ spin_lock_irqsave(ap->lock, flags);
+
++ if (ap->flags & ATA_FLAG_NO_LPM) {
++ count = -EOPNOTSUPP;
++ goto out_unlock;
++ }
++
+ ata_for_each_link(link, ap, EDGE) {
+ ata_for_each_dev(dev, &ap->link, ENABLED) {
+ if (dev->horkage & ATA_HORKAGE_NOLPM) {
+diff --git a/drivers/ata/libata-scsi.c b/drivers/ata/libata-scsi.c
+index 36f32fa052df39..23f158601c8cb5 100644
+--- a/drivers/ata/libata-scsi.c
++++ b/drivers/ata/libata-scsi.c
+@@ -766,18 +766,14 @@ static void ata_to_sense_error(unsigned id, u8 drv_stat, u8 drv_err, u8 *sk,
+ {0xFF, 0xFF, 0xFF, 0xFF}, // END mark
+ };
+ static const unsigned char stat_table[][4] = {
+- /* Must be first because BUSY means no other bits valid */
+- {0x80, ABORTED_COMMAND, 0x47, 0x00},
+- // Busy, fake parity for now
+- {0x40, ILLEGAL_REQUEST, 0x21, 0x04},
+- // Device ready, unaligned write command
+- {0x20, HARDWARE_ERROR, 0x44, 0x00},
+- // Device fault, internal target failure
+- {0x08, ABORTED_COMMAND, 0x47, 0x00},
+- // Timed out in xfer, fake parity for now
+- {0x04, RECOVERED_ERROR, 0x11, 0x00},
+- // Recovered ECC error Medium error, recovered
+- {0xFF, 0xFF, 0xFF, 0xFF}, // END mark
++ /* Busy: must be first because BUSY means no other bits valid */
++ { ATA_BUSY, ABORTED_COMMAND, 0x00, 0x00 },
++ /* Device fault: INTERNAL TARGET FAILURE */
++ { ATA_DF, HARDWARE_ERROR, 0x44, 0x00 },
++ /* Corrected data error */
++ { ATA_CORR, RECOVERED_ERROR, 0x00, 0x00 },
++
++ { 0xFF, 0xFF, 0xFF, 0xFF }, /* END mark */
+ };
+
+ /*
+diff --git a/drivers/base/power/domain_governor.c b/drivers/base/power/domain_governor.c
+index 490ed7deb99a79..99427e18e6237b 100644
+--- a/drivers/base/power/domain_governor.c
++++ b/drivers/base/power/domain_governor.c
+@@ -8,6 +8,7 @@
+ #include <linux/pm_domain.h>
+ #include <linux/pm_qos.h>
+ #include <linux/hrtimer.h>
++#include <linux/cpu.h>
+ #include <linux/cpuidle.h>
+ #include <linux/cpumask.h>
+ #include <linux/ktime.h>
+@@ -254,6 +255,8 @@ static bool cpu_power_down_ok(struct dev_pm_domain *pd)
+ struct generic_pm_domain *genpd = pd_to_genpd(pd);
+ struct cpuidle_device *dev;
+ ktime_t domain_wakeup, next_hrtimer;
++ struct device *cpu_dev;
++ s64 cpu_constraint, global_constraint;
+ s64 idle_duration_ns;
+ int cpu, i;
+
+@@ -264,6 +267,7 @@ static bool cpu_power_down_ok(struct dev_pm_domain *pd)
+ if (!(genpd->flags & GENPD_FLAG_CPU_DOMAIN))
+ return true;
+
++ global_constraint = cpu_latency_qos_limit();
+ /*
+ * Find the next wakeup for any of the online CPUs within the PM domain
+ * and its subdomains. Note, we only need the genpd->cpus, as it already
+@@ -277,8 +281,16 @@ static bool cpu_power_down_ok(struct dev_pm_domain *pd)
+ if (ktime_before(next_hrtimer, domain_wakeup))
+ domain_wakeup = next_hrtimer;
+ }
++
++ cpu_dev = get_cpu_device(cpu);
++ if (cpu_dev) {
++ cpu_constraint = dev_pm_qos_raw_resume_latency(cpu_dev);
++ if (cpu_constraint < global_constraint)
++ global_constraint = cpu_constraint;
++ }
+ }
+
++ global_constraint *= NSEC_PER_USEC;
+ /* The minimum idle duration is from now - until the next wakeup. */
+ idle_duration_ns = ktime_to_ns(ktime_sub(domain_wakeup, ktime_get()));
+ if (idle_duration_ns <= 0)
+@@ -291,8 +303,10 @@ static bool cpu_power_down_ok(struct dev_pm_domain *pd)
+ */
+ i = genpd->state_idx;
+ do {
+- if (idle_duration_ns >= (genpd->states[i].residency_ns +
+- genpd->states[i].power_off_latency_ns)) {
++ if ((idle_duration_ns >= (genpd->states[i].residency_ns +
++ genpd->states[i].power_off_latency_ns)) &&
++ (global_constraint >= (genpd->states[i].power_on_latency_ns +
++ genpd->states[i].power_off_latency_ns))) {
+ genpd->state_idx = i;
+ return true;
+ }
+diff --git a/drivers/base/power/runtime.c b/drivers/base/power/runtime.c
+index 58d376b1cd680d..7e912d2ed4270e 100644
+--- a/drivers/base/power/runtime.c
++++ b/drivers/base/power/runtime.c
+@@ -1716,6 +1716,11 @@ void pm_runtime_reinit(struct device *dev)
+ pm_runtime_put(dev->parent);
+ }
+ }
++ /*
++ * Clear power.needs_force_resume in case it has been set by
++ * pm_runtime_force_suspend() invoked from a driver remove callback.
++ */
++ dev->power.needs_force_resume = false;
+ }
+
+ /**
+diff --git a/drivers/block/drbd/drbd_receiver.c b/drivers/block/drbd/drbd_receiver.c
+index 405e09575f08a2..b4d4e4a41b0860 100644
+--- a/drivers/block/drbd/drbd_receiver.c
++++ b/drivers/block/drbd/drbd_receiver.c
+@@ -2532,7 +2532,11 @@ static int handle_write_conflicts(struct drbd_device *device,
+ peer_req->w.cb = superseded ? e_send_superseded :
+ e_send_retry_write;
+ list_add_tail(&peer_req->w.list, &device->done_ee);
+- queue_work(connection->ack_sender, &peer_req->peer_device->send_acks_work);
++ /* put is in drbd_send_acks_wf() */
++ kref_get(&device->kref);
++ if (!queue_work(connection->ack_sender,
++ &peer_req->peer_device->send_acks_work))
++ kref_put(&device->kref, drbd_destroy_device);
+
+ err = -ENOENT;
+ goto out;
+diff --git a/drivers/block/sunvdc.c b/drivers/block/sunvdc.c
+index d9e41d3bbe7170..9cf0b858f7b853 100644
+--- a/drivers/block/sunvdc.c
++++ b/drivers/block/sunvdc.c
+@@ -968,8 +968,10 @@ static bool vdc_port_mpgroup_check(struct vio_dev *vdev)
+ dev = device_find_child(vdev->dev.parent, &port_data,
+ vdc_device_probed);
+
+- if (dev)
++ if (dev) {
++ put_device(dev);
+ return true;
++ }
+
+ return false;
+ }
+diff --git a/drivers/bus/mhi/host/boot.c b/drivers/bus/mhi/host/boot.c
+index 24422f5c3d808d..1612078e67a97e 100644
+--- a/drivers/bus/mhi/host/boot.c
++++ b/drivers/bus/mhi/host/boot.c
+@@ -30,8 +30,8 @@ void mhi_rddm_prepare(struct mhi_controller *mhi_cntrl,
+ unsigned int i;
+
+ for (i = 0; i < img_info->entries - 1; i++, mhi_buf++, bhi_vec++) {
+- bhi_vec->dma_addr = mhi_buf->dma_addr;
+- bhi_vec->size = mhi_buf->len;
++ bhi_vec->dma_addr = cpu_to_le64(mhi_buf->dma_addr);
++ bhi_vec->size = cpu_to_le64(mhi_buf->len);
+ }
+
+ dev_dbg(dev, "BHIe programming for RDDM\n");
+@@ -372,8 +372,8 @@ static void mhi_firmware_copy(struct mhi_controller *mhi_cntrl,
+ while (remainder) {
+ to_cpy = min(remainder, mhi_buf->len);
+ memcpy(mhi_buf->buf, buf, to_cpy);
+- bhi_vec->dma_addr = mhi_buf->dma_addr;
+- bhi_vec->size = to_cpy;
++ bhi_vec->dma_addr = cpu_to_le64(mhi_buf->dma_addr);
++ bhi_vec->size = cpu_to_le64(to_cpy);
+
+ buf += to_cpy;
+ remainder -= to_cpy;
+diff --git a/drivers/bus/mhi/host/internal.h b/drivers/bus/mhi/host/internal.h
+index 7989269ddd9635..cdf7ae183d2ead 100644
+--- a/drivers/bus/mhi/host/internal.h
++++ b/drivers/bus/mhi/host/internal.h
+@@ -263,8 +263,8 @@ struct mhi_tre {
+ };
+
+ struct bhi_vec_entry {
+- u64 dma_addr;
+- u64 size;
++ __le64 dma_addr;
++ __le64 size;
+ };
+
+ enum mhi_cmd_type {
+diff --git a/drivers/char/hw_random/mtk-rng.c b/drivers/char/hw_random/mtk-rng.c
+index 3e00506543b69c..72269d0f2a4ecd 100644
+--- a/drivers/char/hw_random/mtk-rng.c
++++ b/drivers/char/hw_random/mtk-rng.c
+@@ -142,7 +142,9 @@ static int mtk_rng_probe(struct platform_device *pdev)
+ dev_set_drvdata(&pdev->dev, priv);
+ pm_runtime_set_autosuspend_delay(&pdev->dev, RNG_AUTOSUSPEND_TIMEOUT);
+ pm_runtime_use_autosuspend(&pdev->dev);
+- devm_pm_runtime_enable(&pdev->dev);
++ ret = devm_pm_runtime_enable(&pdev->dev);
++ if (ret)
++ return ret;
+
+ dev_info(&pdev->dev, "registered RNG driver\n");
+
+diff --git a/drivers/char/ipmi/ipmi_msghandler.c b/drivers/char/ipmi/ipmi_msghandler.c
+index b89f300751b1b5..5b01985aed229f 100644
+--- a/drivers/char/ipmi/ipmi_msghandler.c
++++ b/drivers/char/ipmi/ipmi_msghandler.c
+@@ -4307,10 +4307,10 @@ static int handle_one_recv_msg(struct ipmi_smi *intf,
+ * The NetFN and Command in the response is not even
+ * marginally correct.
+ */
+- dev_warn(intf->si_dev,
+- "BMC returned incorrect response, expected netfn %x cmd %x, got netfn %x cmd %x\n",
+- (msg->data[0] >> 2) | 1, msg->data[1],
+- msg->rsp[0] >> 2, msg->rsp[1]);
++ dev_warn_ratelimited(intf->si_dev,
++ "BMC returned incorrect response, expected netfn %x cmd %x, got netfn %x cmd %x\n",
++ (msg->data[0] >> 2) | 1, msg->data[1],
++ msg->rsp[0] >> 2, msg->rsp[1]);
+
+ /* Generate an error response for the message. */
+ msg->rsp[0] = msg->data[0] | (1 << 2);
+diff --git a/drivers/char/ipmi/ipmi_watchdog.c b/drivers/char/ipmi/ipmi_watchdog.c
+index 883b4a3410122b..56be20f7485b3d 100644
+--- a/drivers/char/ipmi/ipmi_watchdog.c
++++ b/drivers/char/ipmi/ipmi_watchdog.c
+@@ -1198,14 +1198,8 @@ static struct ipmi_smi_watcher smi_watcher = {
+ .smi_gone = ipmi_smi_gone
+ };
+
+-static int action_op(const char *inval, char *outval)
++static int action_op_set_val(const char *inval)
+ {
+- if (outval)
+- strcpy(outval, action);
+-
+- if (!inval)
+- return 0;
+-
+ if (strcmp(inval, "reset") == 0)
+ action_val = WDOG_TIMEOUT_RESET;
+ else if (strcmp(inval, "none") == 0)
+@@ -1216,18 +1210,26 @@ static int action_op(const char *inval, char *outval)
+ action_val = WDOG_TIMEOUT_POWER_DOWN;
+ else
+ return -EINVAL;
+- strcpy(action, inval);
+ return 0;
+ }
+
+-static int preaction_op(const char *inval, char *outval)
++static int action_op(const char *inval, char *outval)
+ {
++ int rv;
++
+ if (outval)
+- strcpy(outval, preaction);
++ strcpy(outval, action);
+
+ if (!inval)
+ return 0;
++ rv = action_op_set_val(inval);
++ if (!rv)
++ strcpy(action, inval);
++ return rv;
++}
+
++static int preaction_op_set_val(const char *inval)
++{
+ if (strcmp(inval, "pre_none") == 0)
+ preaction_val = WDOG_PRETIMEOUT_NONE;
+ else if (strcmp(inval, "pre_smi") == 0)
+@@ -1240,18 +1242,26 @@ static int preaction_op(const char *inval, char *outval)
+ preaction_val = WDOG_PRETIMEOUT_MSG_INT;
+ else
+ return -EINVAL;
+- strcpy(preaction, inval);
+ return 0;
+ }
+
+-static int preop_op(const char *inval, char *outval)
++static int preaction_op(const char *inval, char *outval)
+ {
++ int rv;
++
+ if (outval)
+- strcpy(outval, preop);
++ strcpy(outval, preaction);
+
+ if (!inval)
+ return 0;
++ rv = preaction_op_set_val(inval);
++ if (!rv)
++ strcpy(preaction, inval);
++ return rv;
++}
+
++static int preop_op_set_val(const char *inval)
++{
+ if (strcmp(inval, "preop_none") == 0)
+ preop_val = WDOG_PREOP_NONE;
+ else if (strcmp(inval, "preop_panic") == 0)
+@@ -1260,7 +1270,22 @@ static int preop_op(const char *inval, char *outval)
+ preop_val = WDOG_PREOP_GIVE_DATA;
+ else
+ return -EINVAL;
+- strcpy(preop, inval);
++ return 0;
++}
++
++static int preop_op(const char *inval, char *outval)
++{
++ int rv;
++
++ if (outval)
++ strcpy(outval, preop);
++
++ if (!inval)
++ return 0;
++
++ rv = preop_op_set_val(inval);
++ if (!rv)
++ strcpy(preop, inval);
+ return 0;
+ }
+
+@@ -1297,18 +1322,18 @@ static int __init ipmi_wdog_init(void)
+ {
+ int rv;
+
+- if (action_op(action, NULL)) {
++ if (action_op_set_val(action)) {
+ action_op("reset", NULL);
+ pr_info("Unknown action '%s', defaulting to reset\n", action);
+ }
+
+- if (preaction_op(preaction, NULL)) {
++ if (preaction_op_set_val(preaction)) {
+ preaction_op("pre_none", NULL);
+ pr_info("Unknown preaction '%s', defaulting to none\n",
+ preaction);
+ }
+
+- if (preop_op(preop, NULL)) {
++ if (preop_op_set_val(preop)) {
+ preop_op("preop_none", NULL);
+ pr_info("Unknown preop '%s', defaulting to none\n", preop);
+ }
+diff --git a/drivers/clk/davinci/psc.c b/drivers/clk/davinci/psc.c
+index 7387e7f6276eb4..4e1abfc1e564c7 100644
+--- a/drivers/clk/davinci/psc.c
++++ b/drivers/clk/davinci/psc.c
+@@ -278,6 +278,11 @@ davinci_lpsc_clk_register(struct device *dev, const char *name,
+
+ lpsc->pm_domain.name = devm_kasprintf(dev, GFP_KERNEL, "%s: %s",
+ best_dev_name(dev), name);
++ if (!lpsc->pm_domain.name) {
++ clk_hw_unregister(&lpsc->hw);
++ kfree(lpsc);
++ return ERR_PTR(-ENOMEM);
++ }
+ lpsc->pm_domain.attach_dev = davinci_psc_genpd_attach_dev;
+ lpsc->pm_domain.detach_dev = davinci_psc_genpd_detach_dev;
+ lpsc->pm_domain.flags = GENPD_FLAG_PM_CLK;
+diff --git a/drivers/clk/sunxi-ng/ccu-sun8i-v3s.c b/drivers/clk/sunxi-ng/ccu-sun8i-v3s.c
+index 0e36ca3bf3d528..4fddb489cdce62 100644
+--- a/drivers/clk/sunxi-ng/ccu-sun8i-v3s.c
++++ b/drivers/clk/sunxi-ng/ccu-sun8i-v3s.c
+@@ -334,8 +334,7 @@ static SUNXI_CCU_GATE(dram_ohci_clk, "dram-ohci", "dram",
+
+ static const char * const de_parents[] = { "pll-video", "pll-periph0" };
+ static SUNXI_CCU_M_WITH_MUX_GATE(de_clk, "de", de_parents,
+- 0x104, 0, 4, 24, 2, BIT(31),
+- CLK_SET_RATE_PARENT);
++ 0x104, 0, 4, 24, 3, BIT(31), 0);
+
+ static const char * const tcon_parents[] = { "pll-video" };
+ static SUNXI_CCU_M_WITH_MUX_GATE(tcon_clk, "tcon", tcon_parents,
+diff --git a/drivers/cpufreq/armada-8k-cpufreq.c b/drivers/cpufreq/armada-8k-cpufreq.c
+index b0fc5e84f8570f..cab86a9be6bd33 100644
+--- a/drivers/cpufreq/armada-8k-cpufreq.c
++++ b/drivers/cpufreq/armada-8k-cpufreq.c
+@@ -96,7 +96,7 @@ static void armada_8k_cpufreq_free_table(struct freq_table *freq_tables)
+ {
+ int opps_index, nb_cpus = num_possible_cpus();
+
+- for (opps_index = 0 ; opps_index <= nb_cpus; opps_index++) {
++ for (opps_index = 0 ; opps_index < nb_cpus; opps_index++) {
+ int i;
+
+ /* If cpu_dev is NULL then we reached the end of the array */
+diff --git a/drivers/cpufreq/cppc_cpufreq.c b/drivers/cpufreq/cppc_cpufreq.c
+index f29e8d0553a824..17e71265109023 100644
+--- a/drivers/cpufreq/cppc_cpufreq.c
++++ b/drivers/cpufreq/cppc_cpufreq.c
+@@ -396,7 +396,7 @@ static int cppc_cpufreq_set_boost(struct cpufreq_policy *policy, int state)
+ }
+
+ static struct cpufreq_driver cppc_cpufreq_driver = {
+- .flags = CPUFREQ_CONST_LOOPS,
++ .flags = CPUFREQ_CONST_LOOPS | CPUFREQ_NEED_UPDATE_LIMITS,
+ .verify = cppc_verify_policy,
+ .target = cppc_cpufreq_set_target,
+ .get = cppc_cpufreq_get_rate,
+diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
+index 6294e10657b46b..149ba2e39a9659 100644
+--- a/drivers/cpufreq/cpufreq.c
++++ b/drivers/cpufreq/cpufreq.c
+@@ -1227,6 +1227,8 @@ static struct cpufreq_policy *cpufreq_policy_alloc(unsigned int cpu)
+ goto err_free_real_cpus;
+ }
+
++ init_rwsem(&policy->rwsem);
++
+ freq_constraints_init(&policy->constraints);
+
+ policy->nb_min.notifier_call = cpufreq_notifier_min;
+@@ -1249,7 +1251,6 @@ static struct cpufreq_policy *cpufreq_policy_alloc(unsigned int cpu)
+ }
+
+ INIT_LIST_HEAD(&policy->policy_list);
+- init_rwsem(&policy->rwsem);
+ spin_lock_init(&policy->transition_lock);
+ init_waitqueue_head(&policy->transition_wait);
+ INIT_WORK(&policy->update, handle_update);
+@@ -2544,10 +2545,12 @@ static int cpufreq_set_policy(struct cpufreq_policy *policy,
+ pr_debug("starting governor %s failed\n", policy->governor->name);
+ if (old_gov) {
+ policy->governor = old_gov;
+- if (cpufreq_init_governor(policy))
++ if (cpufreq_init_governor(policy)) {
+ policy->governor = NULL;
+- else
+- cpufreq_start_governor(policy);
++ } else if (cpufreq_start_governor(policy)) {
++ cpufreq_exit_governor(policy);
++ policy->governor = NULL;
++ }
+ }
+
+ return ret;
+@@ -2780,15 +2783,6 @@ int cpufreq_register_driver(struct cpufreq_driver *driver_data)
+ cpufreq_driver = driver_data;
+ write_unlock_irqrestore(&cpufreq_driver_lock, flags);
+
+- /*
+- * Mark support for the scheduler's frequency invariance engine for
+- * drivers that implement target(), target_index() or fast_switch().
+- */
+- if (!cpufreq_driver->setpolicy) {
+- static_branch_enable_cpuslocked(&cpufreq_freq_invariance);
+- pr_debug("supports frequency invariance");
+- }
+-
+ if (driver_data->setpolicy)
+ driver_data->flags |= CPUFREQ_CONST_LOOPS;
+
+@@ -2820,6 +2814,15 @@ int cpufreq_register_driver(struct cpufreq_driver *driver_data)
+ hp_online = ret;
+ ret = 0;
+
++ /*
++ * Mark support for the scheduler's frequency invariance engine for
++ * drivers that implement target(), target_index() or fast_switch().
++ */
++ if (!cpufreq_driver->setpolicy) {
++ static_branch_enable_cpuslocked(&cpufreq_freq_invariance);
++ pr_debug("supports frequency invariance");
++ }
++
+ pr_debug("driver %s up and running\n", driver_data->name);
+ goto out;
+
+diff --git a/drivers/cpuidle/governors/menu.c b/drivers/cpuidle/governors/menu.c
+index a95cc8f024fde8..d34463f96848b3 100644
+--- a/drivers/cpuidle/governors/menu.c
++++ b/drivers/cpuidle/governors/menu.c
+@@ -158,6 +158,14 @@ static inline int performance_multiplier(unsigned long nr_iowaiters)
+
+ static DEFINE_PER_CPU(struct menu_device, menu_devices);
+
++static void menu_update_intervals(struct menu_device *data, unsigned int interval_us)
++{
++ /* Update the repeating-pattern data. */
++ data->intervals[data->interval_ptr++] = interval_us;
++ if (data->interval_ptr >= INTERVALS)
++ data->interval_ptr = 0;
++}
++
+ static void menu_update(struct cpuidle_driver *drv, struct cpuidle_device *dev);
+
+ /*
+@@ -288,6 +296,14 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
+ if (data->needs_update) {
+ menu_update(drv, dev);
+ data->needs_update = 0;
++ } else if (!dev->last_residency_ns) {
++ /*
++ * This happens when the driver rejects the previously selected
++ * idle state and returns an error, so update the recent
++ * intervals table to prevent invalid information from being
++ * used going forward.
++ */
++ menu_update_intervals(data, UINT_MAX);
+ }
+
+ /* determine the expected residency time, round up */
+@@ -537,10 +553,7 @@ static void menu_update(struct cpuidle_driver *drv, struct cpuidle_device *dev)
+
+ data->correction_factor[data->bucket] = new_factor;
+
+- /* update the repeating-pattern data */
+- data->intervals[data->interval_ptr++] = ktime_to_us(measured_ns);
+- if (data->interval_ptr >= INTERVALS)
+- data->interval_ptr = 0;
++ menu_update_intervals(data, ktime_to_us(measured_ns));
+ }
+
+ /**
+diff --git a/drivers/crypto/ccp/ccp-debugfs.c b/drivers/crypto/ccp/ccp-debugfs.c
+index a1055554b47a24..dc26bc22c91d1d 100644
+--- a/drivers/crypto/ccp/ccp-debugfs.c
++++ b/drivers/crypto/ccp/ccp-debugfs.c
+@@ -319,5 +319,8 @@ void ccp5_debugfs_setup(struct ccp_device *ccp)
+
+ void ccp5_debugfs_destroy(void)
+ {
++ mutex_lock(&ccp_debugfs_lock);
+ debugfs_remove_recursive(ccp_debugfs_dir);
++ ccp_debugfs_dir = NULL;
++ mutex_unlock(&ccp_debugfs_lock);
+ }
+diff --git a/drivers/crypto/img-hash.c b/drivers/crypto/img-hash.c
+index cecae50d0f58d7..87eed86ef3fedf 100644
+--- a/drivers/crypto/img-hash.c
++++ b/drivers/crypto/img-hash.c
+@@ -435,7 +435,7 @@ static int img_hash_write_via_dma_stop(struct img_hash_dev *hdev)
+ struct img_hash_request_ctx *ctx = ahash_request_ctx(hdev->req);
+
+ if (ctx->flags & DRIVER_FLAGS_SG)
+- dma_unmap_sg(hdev->dev, ctx->sg, ctx->dma_ct, DMA_TO_DEVICE);
++ dma_unmap_sg(hdev->dev, ctx->sg, 1, DMA_TO_DEVICE);
+
+ return 0;
+ }
+diff --git a/drivers/crypto/inside-secure/safexcel_hash.c b/drivers/crypto/inside-secure/safexcel_hash.c
+index 1c9af02eb63b6a..bdb60810ec72ef 100644
+--- a/drivers/crypto/inside-secure/safexcel_hash.c
++++ b/drivers/crypto/inside-secure/safexcel_hash.c
+@@ -247,7 +247,9 @@ static int safexcel_handle_req_result(struct safexcel_crypto_priv *priv,
+ safexcel_complete(priv, ring);
+
+ if (sreq->nents) {
+- dma_unmap_sg(priv->dev, areq->src, sreq->nents, DMA_TO_DEVICE);
++ dma_unmap_sg(priv->dev, areq->src,
++ sg_nents_for_len(areq->src, areq->nbytes),
++ DMA_TO_DEVICE);
+ sreq->nents = 0;
+ }
+
+@@ -495,7 +497,9 @@ static int safexcel_ahash_send_req(struct crypto_async_request *async, int ring,
+ DMA_FROM_DEVICE);
+ unmap_sg:
+ if (req->nents) {
+- dma_unmap_sg(priv->dev, areq->src, req->nents, DMA_TO_DEVICE);
++ dma_unmap_sg(priv->dev, areq->src,
++ sg_nents_for_len(areq->src, areq->nbytes),
++ DMA_TO_DEVICE);
+ req->nents = 0;
+ }
+ cdesc_rollback:
+diff --git a/drivers/crypto/marvell/cesa/cipher.c b/drivers/crypto/marvell/cesa/cipher.c
+index 051a661a63eeb6..e9411c84db7436 100644
+--- a/drivers/crypto/marvell/cesa/cipher.c
++++ b/drivers/crypto/marvell/cesa/cipher.c
+@@ -75,9 +75,12 @@ mv_cesa_skcipher_dma_cleanup(struct skcipher_request *req)
+ static inline void mv_cesa_skcipher_cleanup(struct skcipher_request *req)
+ {
+ struct mv_cesa_skcipher_req *creq = skcipher_request_ctx(req);
++ struct mv_cesa_engine *engine = creq->base.engine;
+
+ if (mv_cesa_req_get_type(&creq->base) == CESA_DMA_REQ)
+ mv_cesa_skcipher_dma_cleanup(req);
++
++ atomic_sub(req->cryptlen, &engine->load);
+ }
+
+ static void mv_cesa_skcipher_std_step(struct skcipher_request *req)
+@@ -205,7 +208,6 @@ mv_cesa_skcipher_complete(struct crypto_async_request *req)
+ struct mv_cesa_engine *engine = creq->base.engine;
+ unsigned int ivsize;
+
+- atomic_sub(skreq->cryptlen, &engine->load);
+ ivsize = crypto_skcipher_ivsize(crypto_skcipher_reqtfm(skreq));
+
+ if (mv_cesa_req_get_type(&creq->base) == CESA_DMA_REQ) {
+diff --git a/drivers/crypto/marvell/cesa/hash.c b/drivers/crypto/marvell/cesa/hash.c
+index 823a8fb114bbbb..3c4f4f704c6448 100644
+--- a/drivers/crypto/marvell/cesa/hash.c
++++ b/drivers/crypto/marvell/cesa/hash.c
+@@ -109,9 +109,12 @@ static inline void mv_cesa_ahash_dma_cleanup(struct ahash_request *req)
+ static inline void mv_cesa_ahash_cleanup(struct ahash_request *req)
+ {
+ struct mv_cesa_ahash_req *creq = ahash_request_ctx(req);
++ struct mv_cesa_engine *engine = creq->base.engine;
+
+ if (mv_cesa_req_get_type(&creq->base) == CESA_DMA_REQ)
+ mv_cesa_ahash_dma_cleanup(req);
++
++ atomic_sub(req->nbytes, &engine->load);
+ }
+
+ static void mv_cesa_ahash_last_cleanup(struct ahash_request *req)
+@@ -371,8 +374,6 @@ static void mv_cesa_ahash_complete(struct crypto_async_request *req)
+ }
+ }
+ }
+-
+- atomic_sub(ahashreq->nbytes, &engine->load);
+ }
+
+ static void mv_cesa_ahash_prepare(struct crypto_async_request *req,
+diff --git a/drivers/crypto/qat/qat_common/adf_transport_debug.c b/drivers/crypto/qat/qat_common/adf_transport_debug.c
+index e6bdbd3c9b1f26..b0a553d680dc30 100644
+--- a/drivers/crypto/qat/qat_common/adf_transport_debug.c
++++ b/drivers/crypto/qat/qat_common/adf_transport_debug.c
+@@ -31,8 +31,10 @@ static void *adf_ring_next(struct seq_file *sfile, void *v, loff_t *pos)
+ struct adf_etr_ring_data *ring = sfile->private;
+
+ if (*pos >= (ADF_SIZE_TO_RING_SIZE_IN_BYTES(ring->ring_size) /
+- ADF_MSG_SIZE_TO_BYTES(ring->msg_size)))
++ ADF_MSG_SIZE_TO_BYTES(ring->msg_size))) {
++ (*pos)++;
+ return NULL;
++ }
+
+ return ring->base_addr +
+ (ADF_MSG_SIZE_TO_BYTES(ring->msg_size) * (*pos)++);
+diff --git a/drivers/devfreq/governor_userspace.c b/drivers/devfreq/governor_userspace.c
+index 8a9cf8220808e8..82c60dedcffd2b 100644
+--- a/drivers/devfreq/governor_userspace.c
++++ b/drivers/devfreq/governor_userspace.c
+@@ -9,6 +9,7 @@
+ #include <linux/slab.h>
+ #include <linux/device.h>
+ #include <linux/devfreq.h>
++#include <linux/kstrtox.h>
+ #include <linux/pm.h>
+ #include <linux/mutex.h>
+ #include <linux/module.h>
+@@ -39,10 +40,13 @@ static ssize_t store_freq(struct device *dev, struct device_attribute *attr,
+ unsigned long wanted;
+ int err = 0;
+
++ err = kstrtoul(buf, 0, &wanted);
++ if (err)
++ return err;
++
+ mutex_lock(&devfreq->lock);
+ data = devfreq->governor_data;
+
+- sscanf(buf, "%lu", &wanted);
+ data->user_frequency = wanted;
+ data->valid = true;
+ err = update_devfreq(devfreq);
+diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
+index 539cb4e0433865..72e87380b61a81 100644
+--- a/drivers/dma-buf/dma-resv.c
++++ b/drivers/dma-buf/dma-resv.c
+@@ -290,8 +290,9 @@ void dma_resv_add_shared_fence(struct dma_resv *obj, struct dma_fence *fence)
+
+ replace:
+ RCU_INIT_POINTER(fobj->shared[i], fence);
+- /* pointer update must be visible before we extend the shared_count */
+- smp_store_mb(fobj->shared_count, count);
++ /* fence update must be visible before we extend the shared_count */
++ smp_wmb();
++ fobj->shared_count = count;
+
+ write_seqcount_end(&obj->seq);
+ dma_fence_put(old);
+diff --git a/drivers/dma/mv_xor.c b/drivers/dma/mv_xor.c
+index 65a7db8bb71bea..94a12f3267c14c 100644
+--- a/drivers/dma/mv_xor.c
++++ b/drivers/dma/mv_xor.c
+@@ -1061,8 +1061,16 @@ mv_xor_channel_add(struct mv_xor_device *xordev,
+ */
+ mv_chan->dummy_src_addr = dma_map_single(dma_dev->dev,
+ mv_chan->dummy_src, MV_XOR_MIN_BYTE_COUNT, DMA_FROM_DEVICE);
++ if (dma_mapping_error(dma_dev->dev, mv_chan->dummy_src_addr))
++ return ERR_PTR(-ENOMEM);
++
+ mv_chan->dummy_dst_addr = dma_map_single(dma_dev->dev,
+ mv_chan->dummy_dst, MV_XOR_MIN_BYTE_COUNT, DMA_TO_DEVICE);
++ if (dma_mapping_error(dma_dev->dev, mv_chan->dummy_dst_addr)) {
++ ret = -ENOMEM;
++ goto err_unmap_src;
++ }
++
+
+ /* allocate coherent memory for hardware descriptors
+ * note: writecombine gives slightly better performance, but
+@@ -1071,8 +1079,10 @@ mv_xor_channel_add(struct mv_xor_device *xordev,
+ mv_chan->dma_desc_pool_virt =
+ dma_alloc_wc(&pdev->dev, MV_XOR_POOL_SIZE, &mv_chan->dma_desc_pool,
+ GFP_KERNEL);
+- if (!mv_chan->dma_desc_pool_virt)
+- return ERR_PTR(-ENOMEM);
++ if (!mv_chan->dma_desc_pool_virt) {
++ ret = -ENOMEM;
++ goto err_unmap_dst;
++ }
+
+ /* discover transaction capabilites from the platform data */
+ dma_dev->cap_mask = cap_mask;
+@@ -1155,6 +1165,13 @@ mv_xor_channel_add(struct mv_xor_device *xordev,
+ err_free_dma:
+ dma_free_coherent(&pdev->dev, MV_XOR_POOL_SIZE,
+ mv_chan->dma_desc_pool_virt, mv_chan->dma_desc_pool);
++err_unmap_dst:
++ dma_unmap_single(dma_dev->dev, mv_chan->dummy_dst_addr,
++ MV_XOR_MIN_BYTE_COUNT, DMA_TO_DEVICE);
++err_unmap_src:
++ dma_unmap_single(dma_dev->dev, mv_chan->dummy_src_addr,
++ MV_XOR_MIN_BYTE_COUNT, DMA_FROM_DEVICE);
++
+ return ERR_PTR(ret);
+ }
+
+diff --git a/drivers/dma/nbpfaxi.c b/drivers/dma/nbpfaxi.c
+index 9c52c57919c615..94e7e3290691a5 100644
+--- a/drivers/dma/nbpfaxi.c
++++ b/drivers/dma/nbpfaxi.c
+@@ -712,6 +712,9 @@ static int nbpf_desc_page_alloc(struct nbpf_channel *chan)
+ list_add_tail(&ldesc->node, &lhead);
+ ldesc->hwdesc_dma_addr = dma_map_single(dchan->device->dev,
+ hwdesc, sizeof(*hwdesc), DMA_TO_DEVICE);
++ if (dma_mapping_error(dchan->device->dev,
++ ldesc->hwdesc_dma_addr))
++ goto unmap_error;
+
+ dev_dbg(dev, "%s(): mapped 0x%p to %pad\n", __func__,
+ hwdesc, &ldesc->hwdesc_dma_addr);
+@@ -738,6 +741,16 @@ static int nbpf_desc_page_alloc(struct nbpf_channel *chan)
+ spin_unlock_irq(&chan->lock);
+
+ return ARRAY_SIZE(dpage->desc);
++
++unmap_error:
++ while (i--) {
++ ldesc--; hwdesc--;
++
++ dma_unmap_single(dchan->device->dev, ldesc->hwdesc_dma_addr,
++ sizeof(hwdesc), DMA_TO_DEVICE);
++ }
++
++ return -ENOMEM;
+ }
+
+ static void nbpf_desc_put(struct nbpf_desc *desc)
+@@ -1356,7 +1369,7 @@ static int nbpf_probe(struct platform_device *pdev)
+ if (irqs == 1) {
+ eirq = irqbuf[0];
+
+- for (i = 0; i <= num_channels; i++)
++ for (i = 0; i < num_channels; i++)
+ nbpf->chan[i].irq = irqbuf[0];
+ } else {
+ eirq = platform_get_irq_byname(pdev, "error");
+@@ -1366,16 +1379,15 @@ static int nbpf_probe(struct platform_device *pdev)
+ if (irqs == num_channels + 1) {
+ struct nbpf_channel *chan;
+
+- for (i = 0, chan = nbpf->chan; i <= num_channels;
++ for (i = 0, chan = nbpf->chan; i < num_channels;
+ i++, chan++) {
+ /* Skip the error IRQ */
+ if (irqbuf[i] == eirq)
+ i++;
++ if (i >= ARRAY_SIZE(irqbuf))
++ return -EINVAL;
+ chan->irq = irqbuf[i];
+ }
+-
+- if (chan != nbpf->chan + num_channels)
+- return -EINVAL;
+ } else {
+ /* 2 IRQs and more than one channel */
+ if (irqbuf[0] == eirq)
+@@ -1383,7 +1395,7 @@ static int nbpf_probe(struct platform_device *pdev)
+ else
+ irq = irqbuf[0];
+
+- for (i = 0; i <= num_channels; i++)
++ for (i = 0; i < num_channels; i++)
+ nbpf->chan[i].irq = irq;
+ }
+ }
+diff --git a/drivers/fpga/zynq-fpga.c b/drivers/fpga/zynq-fpga.c
+index 07fa8d9ec67500..cdb2bea8b6c22c 100644
+--- a/drivers/fpga/zynq-fpga.c
++++ b/drivers/fpga/zynq-fpga.c
+@@ -405,12 +405,12 @@ static int zynq_fpga_ops_write(struct fpga_manager *mgr, struct sg_table *sgt)
+ }
+ }
+
+- priv->dma_nelms =
+- dma_map_sg(mgr->dev.parent, sgt->sgl, sgt->nents, DMA_TO_DEVICE);
+- if (priv->dma_nelms == 0) {
++ err = dma_map_sgtable(mgr->dev.parent, sgt, DMA_TO_DEVICE, 0);
++ if (err) {
+ dev_err(&mgr->dev, "Unable to DMA map (TO_DEVICE)\n");
+- return -ENOMEM;
++ return err;
+ }
++ priv->dma_nelms = sgt->nents;
+
+ /* enable clock */
+ err = clk_enable(priv->clk);
+@@ -478,7 +478,7 @@ static int zynq_fpga_ops_write(struct fpga_manager *mgr, struct sg_table *sgt)
+ clk_disable(priv->clk);
+
+ out_free:
+- dma_unmap_sg(mgr->dev.parent, sgt->sgl, sgt->nents, DMA_TO_DEVICE);
++ dma_unmap_sgtable(mgr->dev.parent, sgt, DMA_TO_DEVICE, 0);
+ return err;
+ }
+
+diff --git a/drivers/gpio/gpio-rcar.c b/drivers/gpio/gpio-rcar.c
+index 80bf2a84f296c9..d5db55d78304e4 100644
+--- a/drivers/gpio/gpio-rcar.c
++++ b/drivers/gpio/gpio-rcar.c
+@@ -34,7 +34,7 @@ struct gpio_rcar_bank_info {
+
+ struct gpio_rcar_priv {
+ void __iomem *base;
+- spinlock_t lock;
++ raw_spinlock_t lock;
+ struct device *dev;
+ struct gpio_chip gpio_chip;
+ struct irq_chip irq_chip;
+@@ -114,7 +114,7 @@ static void gpio_rcar_config_interrupt_input_mode(struct gpio_rcar_priv *p,
+ * "Setting Level-Sensitive Interrupt Input Mode"
+ */
+
+- spin_lock_irqsave(&p->lock, flags);
++ raw_spin_lock_irqsave(&p->lock, flags);
+
+ /* Configure positive or negative logic in POSNEG */
+ gpio_rcar_modify_bit(p, POSNEG, hwirq, !active_high_rising_edge);
+@@ -133,7 +133,7 @@ static void gpio_rcar_config_interrupt_input_mode(struct gpio_rcar_priv *p,
+ if (!level_trigger)
+ gpio_rcar_write(p, INTCLR, BIT(hwirq));
+
+- spin_unlock_irqrestore(&p->lock, flags);
++ raw_spin_unlock_irqrestore(&p->lock, flags);
+ }
+
+ static int gpio_rcar_irq_set_type(struct irq_data *d, unsigned int type)
+@@ -226,7 +226,7 @@ static void gpio_rcar_config_general_input_output_mode(struct gpio_chip *chip,
+ * "Setting General Input Mode"
+ */
+
+- spin_lock_irqsave(&p->lock, flags);
++ raw_spin_lock_irqsave(&p->lock, flags);
+
+ /* Configure positive logic in POSNEG */
+ gpio_rcar_modify_bit(p, POSNEG, gpio, false);
+@@ -241,7 +241,7 @@ static void gpio_rcar_config_general_input_output_mode(struct gpio_chip *chip,
+ if (p->has_outdtsel && output)
+ gpio_rcar_modify_bit(p, OUTDTSEL, gpio, false);
+
+- spin_unlock_irqrestore(&p->lock, flags);
++ raw_spin_unlock_irqrestore(&p->lock, flags);
+ }
+
+ static int gpio_rcar_request(struct gpio_chip *chip, unsigned offset)
+@@ -310,9 +310,9 @@ static void gpio_rcar_set(struct gpio_chip *chip, unsigned offset, int value)
+ struct gpio_rcar_priv *p = gpiochip_get_data(chip);
+ unsigned long flags;
+
+- spin_lock_irqsave(&p->lock, flags);
++ raw_spin_lock_irqsave(&p->lock, flags);
+ gpio_rcar_modify_bit(p, OUTDT, offset, value);
+- spin_unlock_irqrestore(&p->lock, flags);
++ raw_spin_unlock_irqrestore(&p->lock, flags);
+ }
+
+ static void gpio_rcar_set_multiple(struct gpio_chip *chip, unsigned long *mask,
+@@ -329,12 +329,12 @@ static void gpio_rcar_set_multiple(struct gpio_chip *chip, unsigned long *mask,
+ if (!bankmask)
+ return;
+
+- spin_lock_irqsave(&p->lock, flags);
++ raw_spin_lock_irqsave(&p->lock, flags);
+ val = gpio_rcar_read(p, OUTDT);
+ val &= ~bankmask;
+ val |= (bankmask & bits[0]);
+ gpio_rcar_write(p, OUTDT, val);
+- spin_unlock_irqrestore(&p->lock, flags);
++ raw_spin_unlock_irqrestore(&p->lock, flags);
+ }
+
+ static int gpio_rcar_direction_output(struct gpio_chip *chip, unsigned offset,
+@@ -454,7 +454,7 @@ static int gpio_rcar_probe(struct platform_device *pdev)
+ return -ENOMEM;
+
+ p->dev = dev;
+- spin_lock_init(&p->lock);
++ raw_spin_lock_init(&p->lock);
+
+ /* Get device configuration from DT node */
+ ret = gpio_rcar_parse_dt(p, &npins);
+diff --git a/drivers/gpio/gpio-tps65912.c b/drivers/gpio/gpio-tps65912.c
+index 510d9ed9fd2a35..a8a2ad13e09e08 100644
+--- a/drivers/gpio/gpio-tps65912.c
++++ b/drivers/gpio/gpio-tps65912.c
+@@ -49,10 +49,13 @@ static int tps65912_gpio_direction_output(struct gpio_chip *gc,
+ unsigned offset, int value)
+ {
+ struct tps65912_gpio *gpio = gpiochip_get_data(gc);
++ int ret;
+
+ /* Set the initial value */
+- regmap_update_bits(gpio->tps->regmap, TPS65912_GPIO1 + offset,
+- GPIO_SET_MASK, value ? GPIO_SET_MASK : 0);
++ ret = regmap_update_bits(gpio->tps->regmap, TPS65912_GPIO1 + offset,
++ GPIO_SET_MASK, value ? GPIO_SET_MASK : 0);
++ if (ret)
++ return ret;
+
+ return regmap_update_bits(gpio->tps->regmap, TPS65912_GPIO1 + offset,
+ GPIO_CFG_MASK, GPIO_CFG_MASK);
+diff --git a/drivers/gpio/gpio-wcd934x.c b/drivers/gpio/gpio-wcd934x.c
+index 97e6caedf1f33e..c00968ce7a5692 100644
+--- a/drivers/gpio/gpio-wcd934x.c
++++ b/drivers/gpio/gpio-wcd934x.c
+@@ -45,9 +45,12 @@ static int wcd_gpio_direction_output(struct gpio_chip *chip, unsigned int pin,
+ int val)
+ {
+ struct wcd_gpio_data *data = gpiochip_get_data(chip);
++ int ret;
+
+- regmap_update_bits(data->map, WCD_REG_DIR_CTL_OFFSET,
+- WCD_PIN_MASK(pin), WCD_PIN_MASK(pin));
++ ret = regmap_update_bits(data->map, WCD_REG_DIR_CTL_OFFSET,
++ WCD_PIN_MASK(pin), WCD_PIN_MASK(pin));
++ if (ret)
++ return ret;
+
+ return regmap_update_bits(data->map, WCD_REG_VAL_CTL_OFFSET,
+ WCD_PIN_MASK(pin),
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+index ff5555353eb4f2..683bbefc39c15b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+@@ -997,6 +997,7 @@ struct amdgpu_device {
+
+ bool in_pci_err_recovery;
+ struct pci_saved_state *pci_state;
++ pci_channel_state_t pci_channel_state;
+ };
+
+ static inline struct amdgpu_device *drm_to_adev(struct drm_device *ddev)
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_csa.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_csa.c
+index 08047bc4d58860..2df9e81e2b490e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_csa.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_csa.c
+@@ -94,8 +94,8 @@ int amdgpu_map_static_csa(struct amdgpu_device *adev, struct amdgpu_vm *vm,
+ }
+
+ r = amdgpu_vm_bo_map(adev, *bo_va, csa_addr, 0, size,
+- AMDGPU_PTE_READABLE | AMDGPU_PTE_WRITEABLE |
+- AMDGPU_PTE_EXECUTABLE);
++ AMDGPU_VM_PAGE_READABLE | AMDGPU_VM_PAGE_WRITEABLE |
++ AMDGPU_VM_PAGE_EXECUTABLE);
+
+ if (r) {
+ DRM_ERROR("failed to do bo_map on static CSA, err=%d\n", r);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index 40d2f0ed1c75fd..8efd3ee2621f0c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -4944,6 +4944,8 @@ pci_ers_result_t amdgpu_pci_error_detected(struct pci_dev *pdev, pci_channel_sta
+ return PCI_ERS_RESULT_DISCONNECT;
+ }
+
++ adev->pci_channel_state = state;
++
+ switch (state) {
+ case pci_channel_io_normal:
+ return PCI_ERS_RESULT_CAN_RECOVER;
+@@ -5079,6 +5081,10 @@ void amdgpu_pci_resume(struct pci_dev *pdev)
+
+ DRM_INFO("PCI error: resume callback!!\n");
+
++ /* Only continue execution for the case of pci_channel_io_frozen */
++ if (adev->pci_channel_state != pci_channel_io_frozen)
++ return;
++
+ for (i = 0; i < AMDGPU_MAX_RINGS; ++i) {
+ struct amdgpu_ring *ring = adev->rings[i];
+
+diff --git a/drivers/gpu/drm/amd/display/dc/bios/command_table.c b/drivers/gpu/drm/amd/display/dc/bios/command_table.c
+index afc10b954ffa7b..c45fe45ae564bd 100644
+--- a/drivers/gpu/drm/amd/display/dc/bios/command_table.c
++++ b/drivers/gpu/drm/amd/display/dc/bios/command_table.c
+@@ -992,7 +992,7 @@ static enum bp_result set_pixel_clock_v3(
+ allocation.sPCLKInput.usFbDiv =
+ cpu_to_le16((uint16_t)bp_params->feedback_divider);
+ allocation.sPCLKInput.ucFracFbDiv =
+- (uint8_t)bp_params->fractional_feedback_divider;
++ (uint8_t)(bp_params->fractional_feedback_divider / 100000);
+ allocation.sPCLKInput.ucPostDiv =
+ (uint8_t)bp_params->pixel_clock_post_divider;
+
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dce100/dce_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dce100/dce_clk_mgr.c
+index b210f8e9d592d1..6a1ac3568c8749 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dce100/dce_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dce100/dce_clk_mgr.c
+@@ -384,8 +384,6 @@ static void dce_pplib_apply_display_requirements(
+ {
+ struct dm_pp_display_configuration *pp_display_cfg = &context->pp_display_cfg;
+
+- pp_display_cfg->avail_mclk_switch_time_us = dce110_get_min_vblank_time_us(context);
+-
+ dce110_fill_display_configs(context, pp_display_cfg);
+
+ if (memcmp(&dc->current_state->pp_display_cfg, pp_display_cfg, sizeof(*pp_display_cfg)) != 0)
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dce110/dce110_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dce110/dce110_clk_mgr.c
+index b1e657e137a98b..061cc12c97520f 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dce110/dce110_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dce110/dce110_clk_mgr.c
+@@ -120,9 +120,15 @@ void dce110_fill_display_configs(
+ const struct dc_state *context,
+ struct dm_pp_display_configuration *pp_display_cfg)
+ {
++ struct dc *dc = context->clk_mgr->ctx->dc;
+ int j;
+ int num_cfgs = 0;
+
++ pp_display_cfg->avail_mclk_switch_time_us = dce110_get_min_vblank_time_us(context);
++ pp_display_cfg->disp_clk_khz = dc->clk_mgr->clks.dispclk_khz;
++ pp_display_cfg->avail_mclk_switch_time_in_disp_active_us = 0;
++ pp_display_cfg->crtc_index = dc->res_pool->res_cap->num_timing_generator;
++
+ for (j = 0; j < context->stream_count; j++) {
+ int k;
+
+@@ -164,6 +170,23 @@ void dce110_fill_display_configs(
+ cfg->v_refresh /= stream->timing.h_total;
+ cfg->v_refresh = (cfg->v_refresh + stream->timing.v_total / 2)
+ / stream->timing.v_total;
++
++ /* Find first CRTC index and calculate its line time.
++ * This is necessary for DPM on SI GPUs.
++ */
++ if (cfg->pipe_idx < pp_display_cfg->crtc_index) {
++ const struct dc_crtc_timing *timing =
++ &context->streams[0]->timing;
++
++ pp_display_cfg->crtc_index = cfg->pipe_idx;
++ pp_display_cfg->line_time_in_us =
++ timing->h_total * 10000 / timing->pix_clk_100hz;
++ }
++ }
++
++ if (!num_cfgs) {
++ pp_display_cfg->crtc_index = 0;
++ pp_display_cfg->line_time_in_us = 0;
+ }
+
+ pp_display_cfg->display_count = num_cfgs;
+@@ -222,25 +245,8 @@ void dce11_pplib_apply_display_requirements(
+ pp_display_cfg->min_engine_clock_deep_sleep_khz
+ = context->bw_ctx.bw.dce.sclk_deep_sleep_khz;
+
+- pp_display_cfg->avail_mclk_switch_time_us =
+- dce110_get_min_vblank_time_us(context);
+- /* TODO: dce11.2*/
+- pp_display_cfg->avail_mclk_switch_time_in_disp_active_us = 0;
+-
+- pp_display_cfg->disp_clk_khz = dc->clk_mgr->clks.dispclk_khz;
+-
+ dce110_fill_display_configs(context, pp_display_cfg);
+
+- /* TODO: is this still applicable?*/
+- if (pp_display_cfg->display_count == 1) {
+- const struct dc_crtc_timing *timing =
+- &context->streams[0]->timing;
+-
+- pp_display_cfg->crtc_index =
+- pp_display_cfg->disp_configs[0].pipe_idx;
+- pp_display_cfg->line_time_in_us = timing->h_total * 10000 / timing->pix_clk_100hz;
+- }
+-
+ if (memcmp(&dc->current_state->pp_display_cfg, pp_display_cfg, sizeof(*pp_display_cfg)) != 0)
+ dm_pp_apply_display_requirements(dc->ctx, pp_display_cfg);
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dce60/dce60_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dce60/dce60_clk_mgr.c
+index 0267644717b27a..ffd0f4a7631023 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dce60/dce60_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dce60/dce60_clk_mgr.c
+@@ -83,22 +83,13 @@ static const struct state_dependent_clocks dce60_max_clks_by_state[] = {
+ static int dce60_get_dp_ref_freq_khz(struct clk_mgr *clk_mgr_base)
+ {
+ struct clk_mgr_internal *clk_mgr = TO_CLK_MGR_INTERNAL(clk_mgr_base);
+- int dprefclk_wdivider;
+- int dp_ref_clk_khz;
+- int target_div;
++ struct dc_context *ctx = clk_mgr_base->ctx;
++ int dp_ref_clk_khz = 0;
+
+- /* DCE6 has no DPREFCLK_CNTL to read DP Reference Clock source */
+-
+- /* Read the mmDENTIST_DISPCLK_CNTL to get the currently
+- * programmed DID DENTIST_DPREFCLK_WDIVIDER*/
+- REG_GET(DENTIST_DISPCLK_CNTL, DENTIST_DPREFCLK_WDIVIDER, &dprefclk_wdivider);
+-
+- /* Convert DENTIST_DPREFCLK_WDIVIDERto actual divider*/
+- target_div = dentist_get_divider_from_did(dprefclk_wdivider);
+-
+- /* Calculate the current DFS clock, in kHz.*/
+- dp_ref_clk_khz = (DENTIST_DIVIDER_RANGE_SCALE_FACTOR
+- * clk_mgr->base.dentist_vco_freq_khz) / target_div;
++ if (ASIC_REV_IS_TAHITI_P(ctx->asic_id.hw_internal_rev))
++ dp_ref_clk_khz = ctx->dc_bios->fw_info.default_display_engine_pll_frequency;
++ else
++ dp_ref_clk_khz = clk_mgr_base->clks.dispclk_khz;
+
+ return dce_adjust_dp_ref_freq_for_ss(clk_mgr, dp_ref_clk_khz);
+ }
+@@ -109,8 +100,6 @@ static void dce60_pplib_apply_display_requirements(
+ {
+ struct dm_pp_display_configuration *pp_display_cfg = &context->pp_display_cfg;
+
+- pp_display_cfg->avail_mclk_switch_time_us = dce110_get_min_vblank_time_us(context);
+-
+ dce110_fill_display_configs(context, pp_display_cfg);
+
+ if (memcmp(&dc->current_state->pp_display_cfg, pp_display_cfg, sizeof(*pp_display_cfg)) != 0)
+@@ -123,11 +112,9 @@ static void dce60_update_clocks(struct clk_mgr *clk_mgr_base,
+ {
+ struct clk_mgr_internal *clk_mgr_dce = TO_CLK_MGR_INTERNAL(clk_mgr_base);
+ struct dm_pp_power_level_change_request level_change_req;
+- int patched_disp_clk = context->bw_ctx.bw.dce.dispclk_khz;
+-
+- /*TODO: W/A for dal3 linux, investigate why this works */
+- if (!clk_mgr_dce->dfs_bypass_active)
+- patched_disp_clk = patched_disp_clk * 115 / 100;
++ const int max_disp_clk =
++ clk_mgr_dce->max_clks_by_state[DM_PP_CLOCKS_STATE_PERFORMANCE].display_clk_khz;
++ int patched_disp_clk = min(max_disp_clk, context->bw_ctx.bw.dce.dispclk_khz);
+
+ level_change_req.power_level = dce_get_required_clocks_state(clk_mgr_base, context);
+ /* get max clock state from PPLIB */
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
+index c6c4888c666511..402d65759e730e 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
+@@ -159,14 +159,13 @@ static void dcn20_setup_gsl_group_as_lock(
+ }
+
+ /* at this point we want to program whether it's to enable or disable */
+- if (pipe_ctx->stream_res.tg->funcs->set_gsl != NULL &&
+- pipe_ctx->stream_res.tg->funcs->set_gsl_source_select != NULL) {
++ if (pipe_ctx->stream_res.tg->funcs->set_gsl != NULL) {
+ pipe_ctx->stream_res.tg->funcs->set_gsl(
+ pipe_ctx->stream_res.tg,
+ &gsl);
+-
+- pipe_ctx->stream_res.tg->funcs->set_gsl_source_select(
+- pipe_ctx->stream_res.tg, group_idx, enable ? 4 : 0);
++ if (pipe_ctx->stream_res.tg->funcs->set_gsl_source_select != NULL)
++ pipe_ctx->stream_res.tg->funcs->set_gsl_source_select(
++ pipe_ctx->stream_res.tg, group_idx, enable ? 4 : 0);
+ } else
+ BREAK_TO_DEBUGGER();
+ }
+@@ -736,7 +735,7 @@ enum dc_status dcn20_enable_stream_timing(
+ return DC_ERROR_UNEXPECTED;
+ }
+
+- hws->funcs.wait_for_blank_complete(pipe_ctx->stream_res.opp);
++ fsleep(stream->timing.v_total * (stream->timing.h_total * 10000u / stream->timing.pix_clk_100hz));
+
+ params.vertical_total_min = stream->adjust.v_total_min;
+ params.vertical_total_max = stream->adjust.v_total_max;
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu_helper.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu_helper.c
+index 60b5ca974356aa..80d5fb3880037e 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu_helper.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu_helper.c
+@@ -149,7 +149,7 @@ int phm_wait_on_indirect_register(struct pp_hwmgr *hwmgr,
+ }
+
+ cgs_write_register(hwmgr->device, indirect_port, index);
+- return phm_wait_on_register(hwmgr, indirect_port + 1, mask, value);
++ return phm_wait_on_register(hwmgr, indirect_port + 1, value, mask);
+ }
+
+ int phm_wait_for_register_unequal(struct pp_hwmgr *hwmgr,
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+index ee27970cfff952..3cedfd4851f271 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+@@ -1332,6 +1332,12 @@ static int smu_resume(void *handle)
+
+ adev->pm.dpm_enabled = true;
+
++ if (smu->current_power_limit) {
++ ret = smu_set_power_limit(smu, smu->current_power_limit);
++ if (ret && ret != -EOPNOTSUPP)
++ return ret;
++ }
++
+ dev_info(adev->dev, "SMU is resumed successfully!\n");
+
+ return 0;
+diff --git a/drivers/gpu/drm/drm_dp_helper.c b/drivers/gpu/drm/drm_dp_helper.c
+index 6ba16db7750031..ba8ab1dc4912e9 100644
+--- a/drivers/gpu/drm/drm_dp_helper.c
++++ b/drivers/gpu/drm/drm_dp_helper.c
+@@ -299,7 +299,7 @@ ssize_t drm_dp_dpcd_read(struct drm_dp_aux *aux, unsigned int offset,
+ * monitor doesn't power down exactly after the throw away read.
+ */
+ if (!aux->is_remote) {
+- ret = drm_dp_dpcd_access(aux, DP_AUX_NATIVE_READ, DP_DPCD_REV,
++ ret = drm_dp_dpcd_access(aux, DP_AUX_NATIVE_READ, DP_LANE0_1_STATUS,
+ buffer, 1);
+ if (ret != 1)
+ goto out;
+diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_fb.c b/drivers/gpu/drm/rockchip/rockchip_drm_fb.c
+index 3aa37e177667ea..b386c17e866862 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_drm_fb.c
++++ b/drivers/gpu/drm/rockchip/rockchip_drm_fb.c
+@@ -81,16 +81,9 @@ rockchip_fb_create(struct drm_device *dev, struct drm_file *file,
+ }
+
+ if (drm_is_afbc(mode_cmd->modifier[0])) {
+- int ret, i;
+-
+ ret = drm_gem_fb_afbc_init(dev, mode_cmd, afbc_fb);
+ if (ret) {
+- struct drm_gem_object **obj = afbc_fb->base.obj;
+-
+- for (i = 0; i < info->num_planes; ++i)
+- drm_gem_object_put(obj[i]);
+-
+- kfree(afbc_fb);
++ drm_framebuffer_put(&afbc_fb->base);
+ return ERR_PTR(ret);
+ }
+ }
+diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
+index 750d91370c8ab7..259bcb3aec7ddc 100644
+--- a/drivers/gpu/drm/scheduler/sched_entity.c
++++ b/drivers/gpu/drm/scheduler/sched_entity.c
+@@ -313,19 +313,6 @@ void drm_sched_entity_destroy(struct drm_sched_entity *entity)
+ }
+ EXPORT_SYMBOL(drm_sched_entity_destroy);
+
+-/**
+- * drm_sched_entity_clear_dep - callback to clear the entities dependency
+- */
+-static void drm_sched_entity_clear_dep(struct dma_fence *f,
+- struct dma_fence_cb *cb)
+-{
+- struct drm_sched_entity *entity =
+- container_of(cb, struct drm_sched_entity, cb);
+-
+- entity->dependency = NULL;
+- dma_fence_put(f);
+-}
+-
+ /**
+ * drm_sched_entity_clear_dep - callback to clear the entities dependency and
+ * wake up scheduler
+@@ -336,7 +323,8 @@ static void drm_sched_entity_wakeup(struct dma_fence *f,
+ struct drm_sched_entity *entity =
+ container_of(cb, struct drm_sched_entity, cb);
+
+- drm_sched_entity_clear_dep(f, cb);
++ entity->dependency = NULL;
++ dma_fence_put(f);
+ drm_sched_wakeup(entity->rq->sched);
+ }
+
+@@ -392,13 +380,6 @@ static bool drm_sched_entity_add_dependency_cb(struct drm_sched_entity *entity)
+ fence = dma_fence_get(&s_fence->scheduled);
+ dma_fence_put(entity->dependency);
+ entity->dependency = fence;
+- if (!dma_fence_add_callback(fence, &entity->cb,
+- drm_sched_entity_clear_dep))
+- return true;
+-
+- /* Ignore it when it is already scheduled */
+- dma_fence_put(fence);
+- return false;
+ }
+
+ if (!dma_fence_add_callback(entity->dependency, &entity->cb,
+diff --git a/drivers/gpu/drm/ttm/ttm_resource.c b/drivers/gpu/drm/ttm/ttm_resource.c
+index b325b9264203ca..e6db512ff581c0 100644
+--- a/drivers/gpu/drm/ttm/ttm_resource.c
++++ b/drivers/gpu/drm/ttm/ttm_resource.c
+@@ -113,6 +113,9 @@ int ttm_resource_manager_force_list_clean(struct ttm_bo_device *bdev,
+ }
+ spin_unlock(&glob->lru_lock);
+
++ if (ret && ret != -ENOENT)
++ return ret;
++
+ spin_lock(&man->move_lock);
+ fence = dma_fence_get(man->move);
+ spin_unlock(&man->move_lock);
+diff --git a/drivers/hid/hid-core.c b/drivers/hid/hid-core.c
+index 126acf5441c8cb..89aa7a0e51dee1 100644
+--- a/drivers/hid/hid-core.c
++++ b/drivers/hid/hid-core.c
+@@ -1662,9 +1662,12 @@ u8 *hid_alloc_report_buf(struct hid_report *report, gfp_t flags)
+ /*
+ * 7 extra bytes are necessary to achieve proper functionality
+ * of implement() working on 8 byte chunks
++ * 1 extra byte for the report ID if it is null (not used) so
++ * we can reserve that extra byte in the first position of the buffer
++ * when sending it to .raw_request()
+ */
+
+- u32 len = hid_report_len(report) + 7;
++ u32 len = hid_report_len(report) + 7 + (report->id == 0);
+
+ return kzalloc(len, flags);
+ }
+@@ -1727,7 +1730,7 @@ static struct hid_report *hid_get_report(struct hid_report_enum *report_enum,
+ int __hid_request(struct hid_device *hid, struct hid_report *report,
+ int reqtype)
+ {
+- char *buf;
++ char *buf, *data_buf;
+ int ret;
+ u32 len;
+
+@@ -1735,13 +1738,19 @@ int __hid_request(struct hid_device *hid, struct hid_report *report,
+ if (!buf)
+ return -ENOMEM;
+
++ data_buf = buf;
+ len = hid_report_len(report);
+
++ if (report->id == 0) {
++ /* reserve the first byte for the report ID */
++ data_buf++;
++ len++;
++ }
++
+ if (reqtype == HID_REQ_SET_REPORT)
+- hid_output_report(report, buf);
++ hid_output_report(report, data_buf);
+
+- ret = hid->ll_driver->raw_request(hid, report->id, buf, len,
+- report->type, reqtype);
++ ret = hid_hw_raw_request(hid, report->id, buf, len, report->type, reqtype);
+ if (ret < 0) {
+ dbg_hid("unable to complete request: %d\n", ret);
+ goto out;
+diff --git a/drivers/hwmon/corsair-cpro.c b/drivers/hwmon/corsair-cpro.c
+index 05df31cab2e522..074f812332e89b 100644
+--- a/drivers/hwmon/corsair-cpro.c
++++ b/drivers/hwmon/corsair-cpro.c
+@@ -84,6 +84,7 @@ struct ccp_device {
+ struct mutex mutex; /* whenever buffer is used, lock before send_usb_cmd */
+ u8 *cmd_buffer;
+ u8 *buffer;
++ int buffer_recv_size; /* number of received bytes in buffer */
+ int target[6];
+ DECLARE_BITMAP(temp_cnct, NUM_TEMP_SENSORS);
+ DECLARE_BITMAP(fan_cnct, NUM_FANS);
+@@ -139,6 +140,9 @@ static int send_usb_cmd(struct ccp_device *ccp, u8 command, u8 byte1, u8 byte2,
+ if (!t)
+ return -ETIMEDOUT;
+
++ if (ccp->buffer_recv_size != IN_BUFFER_SIZE)
++ return -EPROTO;
++
+ return ccp_get_errno(ccp);
+ }
+
+@@ -150,6 +154,7 @@ static int ccp_raw_event(struct hid_device *hdev, struct hid_report *report, u8
+ spin_lock(&ccp->wait_input_report_lock);
+ if (!completion_done(&ccp->wait_input_report)) {
+ memcpy(ccp->buffer, data, min(IN_BUFFER_SIZE, size));
++ ccp->buffer_recv_size = size;
+ complete_all(&ccp->wait_input_report);
+ }
+ spin_unlock(&ccp->wait_input_report_lock);
+diff --git a/drivers/hwmon/gsc-hwmon.c b/drivers/hwmon/gsc-hwmon.c
+index 89d036bf88df7d..cded881ac01f54 100644
+--- a/drivers/hwmon/gsc-hwmon.c
++++ b/drivers/hwmon/gsc-hwmon.c
+@@ -65,7 +65,7 @@ static ssize_t pwm_auto_point_temp_show(struct device *dev,
+ return ret;
+
+ ret = regs[0] | regs[1] << 8;
+- return sprintf(buf, "%d\n", ret * 10);
++ return sprintf(buf, "%d\n", ret * 100);
+ }
+
+ static ssize_t pwm_auto_point_temp_store(struct device *dev,
+@@ -100,7 +100,7 @@ static ssize_t pwm_auto_point_pwm_show(struct device *dev,
+ {
+ struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr);
+
+- return sprintf(buf, "%d\n", 255 * (50 + (attr->index * 10)));
++ return sprintf(buf, "%d\n", 255 * (50 + (attr->index * 10)) / 100);
+ }
+
+ static SENSOR_DEVICE_ATTR_RO(pwm1_auto_point1_pwm, pwm_auto_point_pwm, 0);
+diff --git a/drivers/i2c/busses/i2c-qup.c b/drivers/i2c/busses/i2c-qup.c
+index 8fd51de64d96b0..49001b2458d4f8 100644
+--- a/drivers/i2c/busses/i2c-qup.c
++++ b/drivers/i2c/busses/i2c-qup.c
+@@ -452,8 +452,10 @@ static int qup_i2c_bus_active(struct qup_i2c_dev *qup, int len)
+ if (!(status & I2C_STATUS_BUS_ACTIVE))
+ break;
+
+- if (time_after(jiffies, timeout))
++ if (time_after(jiffies, timeout)) {
+ ret = -ETIMEDOUT;
++ break;
++ }
+
+ usleep_range(len, len * 2);
+ }
+diff --git a/drivers/i2c/busses/i2c-stm32.c b/drivers/i2c/busses/i2c-stm32.c
+index 157c64e27d0bd3..f84ec056e36dfe 100644
+--- a/drivers/i2c/busses/i2c-stm32.c
++++ b/drivers/i2c/busses/i2c-stm32.c
+@@ -102,7 +102,6 @@ int stm32_i2c_prep_dma_xfer(struct device *dev, struct stm32_i2c_dma *dma,
+ void *dma_async_param)
+ {
+ struct dma_async_tx_descriptor *txdesc;
+- struct device *chan_dev;
+ int ret;
+
+ if (rd_wr) {
+@@ -116,11 +115,10 @@ int stm32_i2c_prep_dma_xfer(struct device *dev, struct stm32_i2c_dma *dma,
+ }
+
+ dma->dma_len = len;
+- chan_dev = dma->chan_using->device->dev;
+
+- dma->dma_buf = dma_map_single(chan_dev, buf, dma->dma_len,
++ dma->dma_buf = dma_map_single(dev, buf, dma->dma_len,
+ dma->dma_data_dir);
+- if (dma_mapping_error(chan_dev, dma->dma_buf)) {
++ if (dma_mapping_error(dev, dma->dma_buf)) {
+ dev_err(dev, "DMA mapping failed\n");
+ return -EINVAL;
+ }
+@@ -150,7 +148,7 @@ int stm32_i2c_prep_dma_xfer(struct device *dev, struct stm32_i2c_dma *dma,
+ return 0;
+
+ err:
+- dma_unmap_single(chan_dev, dma->dma_buf, dma->dma_len,
++ dma_unmap_single(dev, dma->dma_buf, dma->dma_len,
+ dma->dma_data_dir);
+ return ret;
+ }
+diff --git a/drivers/i2c/busses/i2c-stm32f7.c b/drivers/i2c/busses/i2c-stm32f7.c
+index 0b4e73e6382087..c33c655bc41cce 100644
+--- a/drivers/i2c/busses/i2c-stm32f7.c
++++ b/drivers/i2c/busses/i2c-stm32f7.c
+@@ -700,10 +700,10 @@ static void stm32f7_i2c_dma_callback(void *arg)
+ {
+ struct stm32f7_i2c_dev *i2c_dev = (struct stm32f7_i2c_dev *)arg;
+ struct stm32_i2c_dma *dma = i2c_dev->dma;
+- struct device *dev = dma->chan_using->device->dev;
+
+ stm32f7_i2c_disable_dma_req(i2c_dev);
+- dma_unmap_single(dev, dma->dma_buf, dma->dma_len, dma->dma_data_dir);
++ dma_unmap_single(i2c_dev->dev, dma->dma_buf, dma->dma_len,
++ dma->dma_data_dir);
+ complete(&dma->dma_complete);
+ }
+
+diff --git a/drivers/i2c/i2c-core-acpi.c b/drivers/i2c/i2c-core-acpi.c
+index e7aed9442d56d9..197b89f0b3e09c 100644
+--- a/drivers/i2c/i2c-core-acpi.c
++++ b/drivers/i2c/i2c-core-acpi.c
+@@ -314,6 +314,7 @@ static const struct acpi_device_id i2c_acpi_force_100khz_device_ids[] = {
+ * the device works without issues on Windows at what is expected to be
+ * a 400KHz frequency. The root cause of the issue is not known.
+ */
++ { "DLL0945", 0 },
+ { "ELAN06FA", 0 },
+ {}
+ };
+diff --git a/drivers/i3c/internals.h b/drivers/i3c/internals.h
+index 86b7b44cfca28b..1906c711f38ab0 100644
+--- a/drivers/i3c/internals.h
++++ b/drivers/i3c/internals.h
+@@ -9,6 +9,7 @@
+ #define I3C_INTERNALS_H
+
+ #include <linux/i3c/master.h>
++#include <linux/io.h>
+
+ extern struct bus_type i3c_bus_type;
+
+diff --git a/drivers/i3c/master.c b/drivers/i3c/master.c
+index 6d56d23d642988..203b7497b52dce 100644
+--- a/drivers/i3c/master.c
++++ b/drivers/i3c/master.c
+@@ -1308,7 +1308,7 @@ static int i3c_master_retrieve_dev_info(struct i3c_dev_desc *dev)
+
+ if (dev->info.bcr & I3C_BCR_HDR_CAP) {
+ ret = i3c_master_gethdrcap_locked(master, &dev->info);
+- if (ret)
++ if (ret && ret != -ENOTSUPP)
+ return ret;
+ }
+
+diff --git a/drivers/idle/intel_idle.c b/drivers/idle/intel_idle.c
+index 1cead368f961ab..f6a2211ca4ef69 100644
+--- a/drivers/idle/intel_idle.c
++++ b/drivers/idle/intel_idle.c
+@@ -1154,7 +1154,7 @@ static const struct x86_cpu_id intel_idle_ids[] __initconst = {
+ };
+
+ static const struct x86_cpu_id intel_mwait_ids[] __initconst = {
+- X86_MATCH_VENDOR_FAM_FEATURE(INTEL, 6, X86_FEATURE_MWAIT, NULL),
++ X86_MATCH_VENDOR_FAM_FEATURE(INTEL, X86_FAMILY_ANY, X86_FEATURE_MWAIT, NULL),
+ {}
+ };
+
+diff --git a/drivers/iio/adc/ad7768-1.c b/drivers/iio/adc/ad7768-1.c
+index 9580a7f7f73d28..883399ad80e03d 100644
+--- a/drivers/iio/adc/ad7768-1.c
++++ b/drivers/iio/adc/ad7768-1.c
+@@ -202,6 +202,24 @@ static int ad7768_spi_reg_write(struct ad7768_state *st,
+ return spi_write(st->spi, st->data.d8, 2);
+ }
+
++static int ad7768_send_sync_pulse(struct ad7768_state *st)
++{
++ /*
++ * The datasheet specifies a minimum SYNC_IN pulse width of 1.5 × Tmclk,
++ * where Tmclk is the MCLK period. The supported MCLK frequencies range
++ * from 0.6 MHz to 17 MHz, which corresponds to a minimum SYNC_IN pulse
++ * width of approximately 2.5 µs in the worst-case scenario (0.6 MHz).
++ *
++ * Add a delay to ensure the pulse width is always sufficient to
++ * trigger synchronization.
++ */
++ gpiod_set_value_cansleep(st->gpio_sync_in, 1);
++ fsleep(3);
++ gpiod_set_value_cansleep(st->gpio_sync_in, 0);
++
++ return 0;
++}
++
+ static int ad7768_set_mode(struct ad7768_state *st,
+ enum ad7768_conv_mode mode)
+ {
+@@ -287,10 +305,7 @@ static int ad7768_set_dig_fil(struct ad7768_state *st,
+ return ret;
+
+ /* A sync-in pulse is required every time the filter dec rate changes */
+- gpiod_set_value(st->gpio_sync_in, 1);
+- gpiod_set_value(st->gpio_sync_in, 0);
+-
+- return 0;
++ return ad7768_send_sync_pulse(st);
+ }
+
+ static int ad7768_set_freq(struct ad7768_state *st,
+diff --git a/drivers/iio/adc/ad_sigma_delta.c b/drivers/iio/adc/ad_sigma_delta.c
+index 496cb2b26bfda6..5dd0debb089a83 100644
+--- a/drivers/iio/adc/ad_sigma_delta.c
++++ b/drivers/iio/adc/ad_sigma_delta.c
+@@ -371,7 +371,7 @@ static int ad_sd_buffer_postenable(struct iio_dev *indio_dev)
+ return ret;
+ }
+
+-static int ad_sd_buffer_postdisable(struct iio_dev *indio_dev)
++static int ad_sd_buffer_predisable(struct iio_dev *indio_dev)
+ {
+ struct ad_sigma_delta *sigma_delta = iio_device_get_drvdata(indio_dev);
+
+@@ -432,7 +432,7 @@ static irqreturn_t ad_sd_trigger_handler(int irq, void *p)
+
+ static const struct iio_buffer_setup_ops ad_sd_buffer_setup_ops = {
+ .postenable = &ad_sd_buffer_postenable,
+- .postdisable = &ad_sd_buffer_postdisable,
++ .predisable = &ad_sd_buffer_predisable,
+ .validate_scan_mask = &iio_validate_scan_mask_onehot,
+ };
+
+diff --git a/drivers/iio/adc/max1363.c b/drivers/iio/adc/max1363.c
+index f2b576c69949e0..29c1824162cd02 100644
+--- a/drivers/iio/adc/max1363.c
++++ b/drivers/iio/adc/max1363.c
+@@ -513,10 +513,10 @@ static const struct iio_event_spec max1363_events[] = {
+ MAX1363_CHAN_U(1, _s1, 1, bits, ev_spec, num_ev_spec), \
+ MAX1363_CHAN_U(2, _s2, 2, bits, ev_spec, num_ev_spec), \
+ MAX1363_CHAN_U(3, _s3, 3, bits, ev_spec, num_ev_spec), \
+- MAX1363_CHAN_B(0, 1, d0m1, 4, bits, ev_spec, num_ev_spec), \
+- MAX1363_CHAN_B(2, 3, d2m3, 5, bits, ev_spec, num_ev_spec), \
+- MAX1363_CHAN_B(1, 0, d1m0, 6, bits, ev_spec, num_ev_spec), \
+- MAX1363_CHAN_B(3, 2, d3m2, 7, bits, ev_spec, num_ev_spec), \
++ MAX1363_CHAN_B(0, 1, d0m1, 12, bits, ev_spec, num_ev_spec), \
++ MAX1363_CHAN_B(2, 3, d2m3, 13, bits, ev_spec, num_ev_spec), \
++ MAX1363_CHAN_B(1, 0, d1m0, 18, bits, ev_spec, num_ev_spec), \
++ MAX1363_CHAN_B(3, 2, d3m2, 19, bits, ev_spec, num_ev_spec), \
+ IIO_CHAN_SOFT_TIMESTAMP(8) \
+ }
+
+@@ -534,23 +534,23 @@ static const struct iio_chan_spec max1363_channels[] =
+ /* Applies to max1236, max1237 */
+ static const enum max1363_modes max1236_mode_list[] = {
+ _s0, _s1, _s2, _s3,
+- s0to1, s0to2, s0to3,
++ s0to1, s0to2, s2to3, s0to3,
+ d0m1, d2m3, d1m0, d3m2,
+ d0m1to2m3, d1m0to3m2,
+- s2to3,
+ };
+
+ /* Applies to max1238, max1239 */
+ static const enum max1363_modes max1238_mode_list[] = {
+ _s0, _s1, _s2, _s3, _s4, _s5, _s6, _s7, _s8, _s9, _s10, _s11,
+ s0to1, s0to2, s0to3, s0to4, s0to5, s0to6,
++ s6to7, s6to8, s6to9, s6to10, s6to11,
+ s0to7, s0to8, s0to9, s0to10, s0to11,
+ d0m1, d2m3, d4m5, d6m7, d8m9, d10m11,
+ d1m0, d3m2, d5m4, d7m6, d9m8, d11m10,
+- d0m1to2m3, d0m1to4m5, d0m1to6m7, d0m1to8m9, d0m1to10m11,
+- d1m0to3m2, d1m0to5m4, d1m0to7m6, d1m0to9m8, d1m0to11m10,
+- s6to7, s6to8, s6to9, s6to10, s6to11,
+- d6m7to8m9, d6m7to10m11, d7m6to9m8, d7m6to11m10,
++ d0m1to2m3, d0m1to4m5, d0m1to6m7, d6m7to8m9,
++ d0m1to8m9, d6m7to10m11, d0m1to10m11, d1m0to3m2,
++ d1m0to5m4, d1m0to7m6, d7m6to9m8, d1m0to9m8,
++ d7m6to11m10, d1m0to11m10,
+ };
+
+ #define MAX1363_12X_CHANS(bits) { \
+@@ -586,16 +586,15 @@ static const struct iio_chan_spec max1238_channels[] = MAX1363_12X_CHANS(12);
+
+ static const enum max1363_modes max11607_mode_list[] = {
+ _s0, _s1, _s2, _s3,
+- s0to1, s0to2, s0to3,
+- s2to3,
++ s0to1, s0to2, s2to3,
++ s0to3,
+ d0m1, d2m3, d1m0, d3m2,
+ d0m1to2m3, d1m0to3m2,
+ };
+
+ static const enum max1363_modes max11608_mode_list[] = {
+ _s0, _s1, _s2, _s3, _s4, _s5, _s6, _s7,
+- s0to1, s0to2, s0to3, s0to4, s0to5, s0to6, s0to7,
+- s6to7,
++ s0to1, s0to2, s0to3, s0to4, s0to5, s0to6, s6to7, s0to7,
+ d0m1, d2m3, d4m5, d6m7,
+ d1m0, d3m2, d5m4, d7m6,
+ d0m1to2m3, d0m1to4m5, d0m1to6m7,
+@@ -611,14 +610,14 @@ static const enum max1363_modes max11608_mode_list[] = {
+ MAX1363_CHAN_U(5, _s5, 5, bits, NULL, 0), \
+ MAX1363_CHAN_U(6, _s6, 6, bits, NULL, 0), \
+ MAX1363_CHAN_U(7, _s7, 7, bits, NULL, 0), \
+- MAX1363_CHAN_B(0, 1, d0m1, 8, bits, NULL, 0), \
+- MAX1363_CHAN_B(2, 3, d2m3, 9, bits, NULL, 0), \
+- MAX1363_CHAN_B(4, 5, d4m5, 10, bits, NULL, 0), \
+- MAX1363_CHAN_B(6, 7, d6m7, 11, bits, NULL, 0), \
+- MAX1363_CHAN_B(1, 0, d1m0, 12, bits, NULL, 0), \
+- MAX1363_CHAN_B(3, 2, d3m2, 13, bits, NULL, 0), \
+- MAX1363_CHAN_B(5, 4, d5m4, 14, bits, NULL, 0), \
+- MAX1363_CHAN_B(7, 6, d7m6, 15, bits, NULL, 0), \
++ MAX1363_CHAN_B(0, 1, d0m1, 12, bits, NULL, 0), \
++ MAX1363_CHAN_B(2, 3, d2m3, 13, bits, NULL, 0), \
++ MAX1363_CHAN_B(4, 5, d4m5, 14, bits, NULL, 0), \
++ MAX1363_CHAN_B(6, 7, d6m7, 15, bits, NULL, 0), \
++ MAX1363_CHAN_B(1, 0, d1m0, 18, bits, NULL, 0), \
++ MAX1363_CHAN_B(3, 2, d3m2, 19, bits, NULL, 0), \
++ MAX1363_CHAN_B(5, 4, d5m4, 20, bits, NULL, 0), \
++ MAX1363_CHAN_B(7, 6, d7m6, 21, bits, NULL, 0), \
+ IIO_CHAN_SOFT_TIMESTAMP(16) \
+ }
+ static const struct iio_chan_spec max11602_channels[] = MAX1363_8X_CHANS(8);
+diff --git a/drivers/iio/adc/stm32-adc-core.c b/drivers/iio/adc/stm32-adc-core.c
+index 20fc867e399860..d7db30b11fb17c 100644
+--- a/drivers/iio/adc/stm32-adc-core.c
++++ b/drivers/iio/adc/stm32-adc-core.c
+@@ -391,10 +391,9 @@ static int stm32_adc_irq_probe(struct platform_device *pdev,
+ return -ENOMEM;
+ }
+
+- for (i = 0; i < priv->cfg->num_irqs; i++) {
+- irq_set_chained_handler(priv->irq[i], stm32_adc_irq_handler);
+- irq_set_handler_data(priv->irq[i], priv);
+- }
++ for (i = 0; i < priv->cfg->num_irqs; i++)
++ irq_set_chained_handler_and_data(priv->irq[i],
++ stm32_adc_irq_handler, priv);
+
+ return 0;
+ }
+diff --git a/drivers/iio/imu/inv_icm42600/inv_icm42600_temp.c b/drivers/iio/imu/inv_icm42600/inv_icm42600_temp.c
+index 91f0f381082bda..8926b48d76614e 100644
+--- a/drivers/iio/imu/inv_icm42600/inv_icm42600_temp.c
++++ b/drivers/iio/imu/inv_icm42600/inv_icm42600_temp.c
+@@ -32,8 +32,12 @@ static int inv_icm42600_temp_read(struct inv_icm42600_state *st, int16_t *temp)
+ goto exit;
+
+ *temp = (int16_t)be16_to_cpup(raw);
++ /*
++ * Temperature data is invalid if both accel and gyro are off.
++ * Return -EBUSY in this case.
++ */
+ if (*temp == INV_ICM42600_DATA_INVALID)
+- ret = -EINVAL;
++ ret = -EBUSY;
+
+ exit:
+ mutex_unlock(&st->lock);
+diff --git a/drivers/iio/light/as73211.c b/drivers/iio/light/as73211.c
+index 76b334dc5fbf35..dfeaa786b148c0 100644
+--- a/drivers/iio/light/as73211.c
++++ b/drivers/iio/light/as73211.c
+@@ -574,7 +574,7 @@ static irqreturn_t as73211_trigger_handler(int irq __always_unused, void *p)
+ struct {
+ __le16 chan[4];
+ s64 ts __aligned(8);
+- } scan;
++ } scan = { };
+ int data_result, ret;
+
+ mutex_lock(&data->mutex);
+diff --git a/drivers/iio/light/hid-sensor-prox.c b/drivers/iio/light/hid-sensor-prox.c
+index e9e00ce0c6d4d4..9109e5d2de36bc 100644
+--- a/drivers/iio/light/hid-sensor-prox.c
++++ b/drivers/iio/light/hid-sensor-prox.c
+@@ -101,8 +101,7 @@ static int prox_read_raw(struct iio_dev *indio_dev,
+ ret_type = prox_state->scale_precision;
+ break;
+ case IIO_CHAN_INFO_OFFSET:
+- *val = hid_sensor_convert_exponent(
+- prox_state->prox_attr.unit_expo);
++ *val = 0;
+ ret_type = IIO_VAL_INT;
+ break;
+ case IIO_CHAN_INFO_SAMP_FREQ:
+diff --git a/drivers/iio/pressure/bmp280-core.c b/drivers/iio/pressure/bmp280-core.c
+index 919a338d91814a..81670c2fb6eabf 100644
+--- a/drivers/iio/pressure/bmp280-core.c
++++ b/drivers/iio/pressure/bmp280-core.c
+@@ -1064,11 +1064,12 @@ int bmp280_common_probe(struct device *dev,
+
+ /* Bring chip out of reset if there is an assigned GPIO line */
+ gpiod = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_HIGH);
++ if (IS_ERR(gpiod))
++ return dev_err_probe(dev, PTR_ERR(gpiod), "failed to get reset GPIO\n");
++
+ /* Deassert the signal */
+- if (gpiod) {
+- dev_info(dev, "release reset\n");
+- gpiod_set_value(gpiod, 0);
+- }
++ dev_info(dev, "release reset\n");
++ gpiod_set_value(gpiod, 0);
+
+ data->regmap = regmap;
+ ret = regmap_read(regmap, BMP280_REG_ID, &chip_id);
+diff --git a/drivers/iio/proximity/isl29501.c b/drivers/iio/proximity/isl29501.c
+index 5b6ea783795d91..3ccc95cf645c42 100644
+--- a/drivers/iio/proximity/isl29501.c
++++ b/drivers/iio/proximity/isl29501.c
+@@ -938,12 +938,18 @@ static irqreturn_t isl29501_trigger_handler(int irq, void *p)
+ struct iio_dev *indio_dev = pf->indio_dev;
+ struct isl29501_private *isl29501 = iio_priv(indio_dev);
+ const unsigned long *active_mask = indio_dev->active_scan_mask;
+- u32 buffer[4] __aligned(8) = {}; /* 1x16-bit + naturally aligned ts */
+-
+- if (test_bit(ISL29501_DISTANCE_SCAN_INDEX, active_mask))
+- isl29501_register_read(isl29501, REG_DISTANCE, buffer);
++ u32 value;
++ struct {
++ u16 data;
++ aligned_s64 ts;
++ } scan = { };
++
++ if (test_bit(ISL29501_DISTANCE_SCAN_INDEX, active_mask)) {
++ isl29501_register_read(isl29501, REG_DISTANCE, &value);
++ scan.data = value;
++ }
+
+- iio_push_to_buffers_with_timestamp(indio_dev, buffer, pf->timestamp);
++ iio_push_to_buffers_with_timestamp(indio_dev, &scan, pf->timestamp);
+ iio_trigger_notify_done(indio_dev->trig);
+
+ return IRQ_HANDLED;
+diff --git a/drivers/infiniband/core/cache.c b/drivers/infiniband/core/cache.c
+index 7989b7e1d1c046..2bd9fb3195f5e3 100644
+--- a/drivers/infiniband/core/cache.c
++++ b/drivers/infiniband/core/cache.c
+@@ -582,8 +582,8 @@ static int __ib_cache_gid_add(struct ib_device *ib_dev, u8 port,
+ out_unlock:
+ mutex_unlock(&table->lock);
+ if (ret)
+- pr_warn("%s: unable to add gid %pI6 error=%d\n",
+- __func__, gid->raw, ret);
++ pr_warn_ratelimited("%s: unable to add gid %pI6 error=%d\n",
++ __func__, gid->raw, ret);
+ return ret;
+ }
+
+diff --git a/drivers/infiniband/core/nldev.c b/drivers/infiniband/core/nldev.c
+index f8dfec7ad7cc44..1475069aa4285d 100644
+--- a/drivers/infiniband/core/nldev.c
++++ b/drivers/infiniband/core/nldev.c
+@@ -1240,10 +1240,11 @@ static const struct nldev_fill_res_entry fill_entries[RDMA_RESTRACK_MAX] = {
+ },
+ };
+
+-static int res_get_common_doit(struct sk_buff *skb, struct nlmsghdr *nlh,
+- struct netlink_ext_ack *extack,
+- enum rdma_restrack_type res_type,
+- res_fill_func_t fill_func)
++static noinline_for_stack int
++res_get_common_doit(struct sk_buff *skb, struct nlmsghdr *nlh,
++ struct netlink_ext_ack *extack,
++ enum rdma_restrack_type res_type,
++ res_fill_func_t fill_func)
+ {
+ const struct nldev_fill_res_entry *fe = &fill_entries[res_type];
+ struct nlattr *tb[RDMA_NLDEV_ATTR_MAX];
+@@ -1877,10 +1878,10 @@ static int nldev_stat_del_doit(struct sk_buff *skb, struct nlmsghdr *nlh,
+ return ret;
+ }
+
+-static int stat_get_doit_default_counter(struct sk_buff *skb,
+- struct nlmsghdr *nlh,
+- struct netlink_ext_ack *extack,
+- struct nlattr *tb[])
++static noinline_for_stack int
++stat_get_doit_default_counter(struct sk_buff *skb, struct nlmsghdr *nlh,
++ struct netlink_ext_ack *extack,
++ struct nlattr *tb[])
+ {
+ struct rdma_hw_stats *stats;
+ struct nlattr *table_attr;
+@@ -1970,8 +1971,9 @@ static int stat_get_doit_default_counter(struct sk_buff *skb,
+ return ret;
+ }
+
+-static int stat_get_doit_qp(struct sk_buff *skb, struct nlmsghdr *nlh,
+- struct netlink_ext_ack *extack, struct nlattr *tb[])
++static noinline_for_stack int
++stat_get_doit_qp(struct sk_buff *skb, struct nlmsghdr *nlh,
++ struct netlink_ext_ack *extack, struct nlattr *tb[])
+
+ {
+ static enum rdma_nl_counter_mode mode;
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_res.c b/drivers/infiniband/hw/bnxt_re/qplib_res.c
+index af23e57fc78ed0..be98b23488b455 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_res.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_res.c
+@@ -121,6 +121,7 @@ static int __alloc_pbl(struct bnxt_qplib_res *res,
+ pbl->pg_arr = vmalloc(pages * sizeof(void *));
+ if (!pbl->pg_arr)
+ return -ENOMEM;
++ memset(pbl->pg_arr, 0, pages * sizeof(void *));
+
+ pbl->pg_map_arr = vmalloc(pages * sizeof(dma_addr_t));
+ if (!pbl->pg_map_arr) {
+@@ -128,6 +129,7 @@ static int __alloc_pbl(struct bnxt_qplib_res *res,
+ pbl->pg_arr = NULL;
+ return -ENOMEM;
+ }
++ memset(pbl->pg_map_arr, 0, pages * sizeof(dma_addr_t));
+ pbl->pg_count = 0;
+ pbl->pg_size = sginfo->pgsize;
+
+diff --git a/drivers/infiniband/hw/hfi1/affinity.c b/drivers/infiniband/hw/hfi1/affinity.c
+index d5a8d0173709ac..5eaf61784788d5 100644
+--- a/drivers/infiniband/hw/hfi1/affinity.c
++++ b/drivers/infiniband/hw/hfi1/affinity.c
+@@ -1008,31 +1008,35 @@ static void find_hw_thread_mask(uint hw_thread_no, cpumask_var_t hw_thread_mask,
+ struct hfi1_affinity_node_list *affinity)
+ {
+ int possible, curr_cpu, i;
+- uint num_cores_per_socket = node_affinity.num_online_cpus /
++ uint num_cores_per_socket;
++
++ cpumask_copy(hw_thread_mask, &affinity->proc.mask);
++
++ if (affinity->num_core_siblings == 0)
++ return;
++
++ num_cores_per_socket = node_affinity.num_online_cpus /
+ affinity->num_core_siblings /
+ node_affinity.num_online_nodes;
+
+- cpumask_copy(hw_thread_mask, &affinity->proc.mask);
+- if (affinity->num_core_siblings > 0) {
+- /* Removing other siblings not needed for now */
+- possible = cpumask_weight(hw_thread_mask);
+- curr_cpu = cpumask_first(hw_thread_mask);
+- for (i = 0;
+- i < num_cores_per_socket * node_affinity.num_online_nodes;
+- i++)
+- curr_cpu = cpumask_next(curr_cpu, hw_thread_mask);
+-
+- for (; i < possible; i++) {
+- cpumask_clear_cpu(curr_cpu, hw_thread_mask);
+- curr_cpu = cpumask_next(curr_cpu, hw_thread_mask);
+- }
++ /* Removing other siblings not needed for now */
++ possible = cpumask_weight(hw_thread_mask);
++ curr_cpu = cpumask_first(hw_thread_mask);
++ for (i = 0;
++ i < num_cores_per_socket * node_affinity.num_online_nodes;
++ i++)
++ curr_cpu = cpumask_next(curr_cpu, hw_thread_mask);
+
+- /* Identifying correct HW threads within physical cores */
+- cpumask_shift_left(hw_thread_mask, hw_thread_mask,
+- num_cores_per_socket *
+- node_affinity.num_online_nodes *
+- hw_thread_no);
++ for (; i < possible; i++) {
++ cpumask_clear_cpu(curr_cpu, hw_thread_mask);
++ curr_cpu = cpumask_next(curr_cpu, hw_thread_mask);
+ }
++
++ /* Identifying correct HW threads within physical cores */
++ cpumask_shift_left(hw_thread_mask, hw_thread_mask,
++ num_cores_per_socket *
++ node_affinity.num_online_nodes *
++ hw_thread_no);
+ }
+
+ int hfi1_get_proc_affinity(int node)
+diff --git a/drivers/infiniband/sw/rxe/rxe_comp.c b/drivers/infiniband/sw/rxe/rxe_comp.c
+index a54d800043429e..b7645de067f3c1 100644
+--- a/drivers/infiniband/sw/rxe/rxe_comp.c
++++ b/drivers/infiniband/sw/rxe/rxe_comp.c
+@@ -346,13 +346,15 @@ static inline enum comp_state do_read(struct rxe_qp *qp,
+ ret = copy_data(qp->pd, IB_ACCESS_LOCAL_WRITE,
+ &wqe->dma, payload_addr(pkt),
+ payload_size(pkt), to_mem_obj, NULL);
+- if (ret)
++ if (ret) {
++ wqe->status = IB_WC_LOC_PROT_ERR;
+ return COMPST_ERROR;
++ }
+
+ if (wqe->dma.resid == 0 && (pkt->mask & RXE_END_MASK))
+ return COMPST_COMP_ACK;
+- else
+- return COMPST_UPDATE_COMP;
++
++ return COMPST_UPDATE_COMP;
+ }
+
+ static inline enum comp_state do_atomic(struct rxe_qp *qp,
+@@ -366,10 +368,12 @@ static inline enum comp_state do_atomic(struct rxe_qp *qp,
+ ret = copy_data(qp->pd, IB_ACCESS_LOCAL_WRITE,
+ &wqe->dma, &atomic_orig,
+ sizeof(u64), to_mem_obj, NULL);
+- if (ret)
++ if (ret) {
++ wqe->status = IB_WC_LOC_PROT_ERR;
+ return COMPST_ERROR;
+- else
+- return COMPST_COMP_ACK;
++ }
++
++ return COMPST_COMP_ACK;
+ }
+
+ static void make_send_cqe(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
+diff --git a/drivers/input/joystick/xpad.c b/drivers/input/joystick/xpad.c
+index a0362201b5d35b..31f7d09c71dc8b 100644
+--- a/drivers/input/joystick/xpad.c
++++ b/drivers/input/joystick/xpad.c
+@@ -142,12 +142,12 @@ static const struct xpad_device {
+ { 0x046d, 0xca88, "Logitech Compact Controller for Xbox", 0, XTYPE_XBOX },
+ { 0x046d, 0xca8a, "Logitech Precision Vibration Feedback Wheel", 0, XTYPE_XBOX },
+ { 0x046d, 0xcaa3, "Logitech DriveFx Racing Wheel", 0, XTYPE_XBOX360 },
++ { 0x0502, 0x1305, "Acer NGR200", 0, XTYPE_XBOX360 },
+ { 0x056e, 0x2004, "Elecom JC-U3613M", 0, XTYPE_XBOX360 },
+ { 0x05fd, 0x1007, "Mad Catz Controller (unverified)", 0, XTYPE_XBOX },
+ { 0x05fd, 0x107a, "InterAct 'PowerPad Pro' X-Box pad (Germany)", 0, XTYPE_XBOX },
+ { 0x05fe, 0x3030, "Chic Controller", 0, XTYPE_XBOX },
+ { 0x05fe, 0x3031, "Chic Controller", 0, XTYPE_XBOX },
+- { 0x0502, 0x1305, "Acer NGR200", 0, XTYPE_XBOX },
+ { 0x062a, 0x0020, "Logic3 Xbox GamePad", 0, XTYPE_XBOX },
+ { 0x062a, 0x0033, "Competition Pro Steering Wheel", 0, XTYPE_XBOX },
+ { 0x06a3, 0x0200, "Saitek Racing Wheel", 0, XTYPE_XBOX },
+diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c
+index e09391ab3deb01..1ba6adb5b91241 100644
+--- a/drivers/iommu/amd/init.c
++++ b/drivers/iommu/amd/init.c
+@@ -3186,7 +3186,7 @@ static int __init parse_ivrs_acpihid(char *str)
+ {
+ u32 seg = 0, bus, dev, fn;
+ char *hid, *uid, *p, *addr;
+- char acpiid[ACPIID_LEN] = {0};
++ char acpiid[ACPIID_LEN + 1] = { }; /* size with NULL terminator */
+ int i;
+
+ addr = strchr(str, '@');
+@@ -3212,7 +3212,7 @@ static int __init parse_ivrs_acpihid(char *str)
+ /* We have the '@', make it the terminator to get just the acpiid */
+ *addr++ = 0;
+
+- if (strlen(str) > ACPIID_LEN + 1)
++ if (strlen(str) > ACPIID_LEN)
+ goto not_found;
+
+ if (sscanf(str, "=%s", acpiid) != 1)
+diff --git a/drivers/leds/leds-lp50xx.c b/drivers/leds/leds-lp50xx.c
+index d4529082935b8a..279f3958e0ab5a 100644
+--- a/drivers/leds/leds-lp50xx.c
++++ b/drivers/leds/leds-lp50xx.c
+@@ -493,6 +493,7 @@ static int lp50xx_probe_dt(struct lp50xx *priv)
+ }
+
+ fwnode_for_each_child_node(child, led_node) {
++ int multi_index;
+ ret = fwnode_property_read_u32(led_node, "color",
+ &color_id);
+ if (ret) {
+@@ -500,8 +501,16 @@ static int lp50xx_probe_dt(struct lp50xx *priv)
+ dev_err(priv->dev, "Cannot read color\n");
+ goto child_out;
+ }
++ ret = fwnode_property_read_u32(led_node, "reg", &multi_index);
++ if (ret != 0) {
++ dev_err(priv->dev, "reg must be set\n");
++ return -EINVAL;
++ } else if (multi_index >= LP50XX_LEDS_PER_MODULE) {
++ dev_err(priv->dev, "reg %i out of range\n", multi_index);
++ return -EINVAL;
++ }
+
+- mc_led_info[num_colors].color_index = color_id;
++ mc_led_info[multi_index].color_index = color_id;
+ num_colors++;
+ }
+
+diff --git a/drivers/md/dm-core.h b/drivers/md/dm-core.h
+index ff73b2c17be538..99b2d2e2cf59a7 100644
+--- a/drivers/md/dm-core.h
++++ b/drivers/md/dm-core.h
+@@ -124,6 +124,19 @@ struct mapped_device {
+ struct srcu_struct io_barrier;
+ };
+
++/*
++ * Bits for the flags field of struct mapped_device.
++ */
++#define DMF_BLOCK_IO_FOR_SUSPEND 0
++#define DMF_SUSPENDED 1
++#define DMF_FROZEN 2
++#define DMF_FREEING 3
++#define DMF_DELETING 4
++#define DMF_NOFLUSH_SUSPENDING 5
++#define DMF_DEFERRED_REMOVE 6
++#define DMF_SUSPENDED_INTERNALLY 7
++#define DMF_POST_SUSPENDING 8
++
+ void disable_discard(struct mapped_device *md);
+ void disable_write_same(struct mapped_device *md);
+ void disable_write_zeroes(struct mapped_device *md);
+@@ -177,6 +190,45 @@ struct dm_table {
+ struct dm_md_mempools *mempools;
+ };
+
++/*
++ * One of these is allocated per clone bio.
++ */
++#define DM_TIO_MAGIC 7282014
++struct dm_target_io {
++ unsigned int magic;
++ struct dm_io *io;
++ struct dm_target *ti;
++ unsigned int target_bio_nr;
++ unsigned int *len_ptr;
++ bool inside_dm_io;
++ struct bio clone;
++};
++
++/*
++ * One of these is allocated per original bio.
++ * It contains the first clone used for that original.
++ */
++#define DM_IO_MAGIC 5191977
++struct dm_io {
++ unsigned int magic;
++ struct mapped_device *md;
++ blk_status_t status;
++ atomic_t io_count;
++ struct bio *orig_bio;
++ unsigned long start_time;
++ spinlock_t endio_lock;
++ struct dm_stats_aux stats_aux;
++ /* last member of dm_target_io is 'struct bio' */
++ struct dm_target_io tio;
++};
++
++static inline void dm_io_inc_pending(struct dm_io *io)
++{
++ atomic_inc(&io->io_count);
++}
++
++void dm_io_dec_pending(struct dm_io *io, blk_status_t error);
++
+ static inline struct completion *dm_get_completion_from_kobject(struct kobject *kobj)
+ {
+ return &container_of(kobj, struct dm_kobject_holder, kobj)->completion;
+diff --git a/drivers/md/dm-historical-service-time.c b/drivers/md/dm-historical-service-time.c
+index 06fe43c13ba38b..2d23de6742fb82 100644
+--- a/drivers/md/dm-historical-service-time.c
++++ b/drivers/md/dm-historical-service-time.c
+@@ -537,8 +537,10 @@ static int __init dm_hst_init(void)
+ {
+ int r = dm_register_path_selector(&hst_ps);
+
+- if (r < 0)
++ if (r < 0) {
+ DMERR("register failed %d", r);
++ return r;
++ }
+
+ DMINFO("version " HST_VERSION " loaded");
+
+diff --git a/drivers/md/dm-queue-length.c b/drivers/md/dm-queue-length.c
+index 5fd018d184187a..cbb72039005adc 100644
+--- a/drivers/md/dm-queue-length.c
++++ b/drivers/md/dm-queue-length.c
+@@ -256,8 +256,10 @@ static int __init dm_ql_init(void)
+ {
+ int r = dm_register_path_selector(&ql_ps);
+
+- if (r < 0)
++ if (r < 0) {
+ DMERR("register failed %d", r);
++ return r;
++ }
+
+ DMINFO("version " QL_VERSION " loaded");
+
+diff --git a/drivers/md/dm-round-robin.c b/drivers/md/dm-round-robin.c
+index bdbb7e6e8212bf..fa7205f8f0b4b0 100644
+--- a/drivers/md/dm-round-robin.c
++++ b/drivers/md/dm-round-robin.c
+@@ -212,8 +212,10 @@ static int __init dm_rr_init(void)
+ {
+ int r = dm_register_path_selector(&rr_ps);
+
+- if (r < 0)
++ if (r < 0) {
+ DMERR("register failed %d", r);
++ return r;
++ }
+
+ DMINFO("version " RR_VERSION " loaded");
+
+diff --git a/drivers/md/dm-rq.c b/drivers/md/dm-rq.c
+index 7762bde40963ee..a6ea77432e34ca 100644
+--- a/drivers/md/dm-rq.c
++++ b/drivers/md/dm-rq.c
+@@ -490,6 +490,14 @@ static blk_status_t dm_mq_queue_rq(struct blk_mq_hw_ctx *hctx,
+ struct mapped_device *md = tio->md;
+ struct dm_target *ti = md->immutable_target;
+
++ /*
++ * blk-mq's unquiesce may come from outside events, such as
++ * elevator switch, updating nr_requests or others, and request may
++ * come during suspend, so simply ask for blk-mq to requeue it.
++ */
++ if (unlikely(test_bit(DMF_BLOCK_IO_FOR_SUSPEND, &md->flags)))
++ return BLK_STS_RESOURCE;
++
+ if (unlikely(!ti)) {
+ int srcu_idx;
+ struct dm_table *map;
+diff --git a/drivers/md/dm-service-time.c b/drivers/md/dm-service-time.c
+index 9cfda665e9ebd3..563bd9e4d16fa4 100644
+--- a/drivers/md/dm-service-time.c
++++ b/drivers/md/dm-service-time.c
+@@ -338,8 +338,10 @@ static int __init dm_st_init(void)
+ {
+ int r = dm_register_path_selector(&st_ps);
+
+- if (r < 0)
++ if (r < 0) {
+ DMERR("register failed %d", r);
++ return r;
++ }
+
+ DMINFO("version " ST_VERSION " loaded");
+
+diff --git a/drivers/md/dm-zoned-target.c b/drivers/md/dm-zoned-target.c
+index 48fc723f1ac81c..e5f61a9080e4c1 100644
+--- a/drivers/md/dm-zoned-target.c
++++ b/drivers/md/dm-zoned-target.c
+@@ -1066,7 +1066,7 @@ static int dmz_iterate_devices(struct dm_target *ti,
+ struct dmz_target *dmz = ti->private;
+ unsigned int zone_nr_sectors = dmz_zone_nr_sectors(dmz->metadata);
+ sector_t capacity;
+- int i, r;
++ int i, r = 0;
+
+ for (i = 0; i < dmz->nr_ddevs; i++) {
+ capacity = dmz->dev[i].capacity & ~(zone_nr_sectors - 1);
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index 4fdf0e666777a6..0868358a7a8d27 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -73,38 +73,6 @@ struct clone_info {
+ unsigned sector_count;
+ };
+
+-/*
+- * One of these is allocated per clone bio.
+- */
+-#define DM_TIO_MAGIC 7282014
+-struct dm_target_io {
+- unsigned magic;
+- struct dm_io *io;
+- struct dm_target *ti;
+- unsigned target_bio_nr;
+- unsigned *len_ptr;
+- bool inside_dm_io;
+- struct bio clone;
+-};
+-
+-/*
+- * One of these is allocated per original bio.
+- * It contains the first clone used for that original.
+- */
+-#define DM_IO_MAGIC 5191977
+-struct dm_io {
+- unsigned magic;
+- struct mapped_device *md;
+- blk_status_t status;
+- atomic_t io_count;
+- struct bio *orig_bio;
+- unsigned long start_time;
+- spinlock_t endio_lock;
+- struct dm_stats_aux stats_aux;
+- /* last member of dm_target_io is 'struct bio' */
+- struct dm_target_io tio;
+-};
+-
+ void *dm_per_bio_data(struct bio *bio, size_t data_size)
+ {
+ struct dm_target_io *tio = container_of(bio, struct dm_target_io, clone);
+@@ -132,19 +100,6 @@ EXPORT_SYMBOL_GPL(dm_bio_get_target_bio_nr);
+
+ #define MINOR_ALLOCED ((void *)-1)
+
+-/*
+- * Bits for the md->flags field.
+- */
+-#define DMF_BLOCK_IO_FOR_SUSPEND 0
+-#define DMF_SUSPENDED 1
+-#define DMF_FROZEN 2
+-#define DMF_FREEING 3
+-#define DMF_DELETING 4
+-#define DMF_NOFLUSH_SUSPENDING 5
+-#define DMF_DEFERRED_REMOVE 6
+-#define DMF_SUSPENDED_INTERNALLY 7
+-#define DMF_POST_SUSPENDING 8
+-
+ #define DM_NUMA_NODE NUMA_NO_NODE
+ static int dm_numa_node = DM_NUMA_NODE;
+
+@@ -897,7 +852,7 @@ static int __noflush_suspending(struct mapped_device *md)
+ * Decrements the number of outstanding ios that a bio has been
+ * cloned into, completing the original io if necc.
+ */
+-static void dec_pending(struct dm_io *io, blk_status_t error)
++void dm_io_dec_pending(struct dm_io *io, blk_status_t error)
+ {
+ unsigned long flags;
+ blk_status_t io_error;
+@@ -1041,7 +996,7 @@ static void clone_endio(struct bio *bio)
+ }
+
+ free_tio(tio);
+- dec_pending(io, error);
++ dm_io_dec_pending(io, error);
+ }
+
+ /*
+@@ -1309,7 +1264,7 @@ static blk_qc_t __map_bio(struct dm_target_io *tio)
+ * anything, the target has assumed ownership of
+ * this io.
+ */
+- atomic_inc(&io->io_count);
++ dm_io_inc_pending(io);
+ sector = clone->bi_iter.bi_sector;
+
+ if (unlikely(swap_bios_limit(ti, clone))) {
+@@ -1336,7 +1291,7 @@ static blk_qc_t __map_bio(struct dm_target_io *tio)
+ up(&md->swap_bios_semaphore);
+ }
+ free_tio(tio);
+- dec_pending(io, BLK_STS_IOERR);
++ dm_io_dec_pending(io, BLK_STS_IOERR);
+ break;
+ case DM_MAPIO_REQUEUE:
+ if (unlikely(swap_bios_limit(ti, clone))) {
+@@ -1344,7 +1299,7 @@ static blk_qc_t __map_bio(struct dm_target_io *tio)
+ up(&md->swap_bios_semaphore);
+ }
+ free_tio(tio);
+- dec_pending(io, BLK_STS_DM_REQUEUE);
++ dm_io_dec_pending(io, BLK_STS_DM_REQUEUE);
+ break;
+ default:
+ DMWARN("unimplemented target map return value: %d", r);
+@@ -1640,7 +1595,7 @@ static blk_qc_t __split_and_process_bio(struct mapped_device *md,
+
+ if (bio->bi_opf & REQ_PREFLUSH) {
+ error = __send_empty_flush(&ci);
+- /* dec_pending submits any data associated with flush */
++ /* dm_io_dec_pending submits any data associated with flush */
+ } else if (op_is_zone_mgmt(bio_op(bio))) {
+ ci.bio = bio;
+ ci.sector_count = 0;
+@@ -1684,7 +1639,7 @@ static blk_qc_t __split_and_process_bio(struct mapped_device *md,
+ }
+
+ /* drop the extra reference count */
+- dec_pending(ci.io, errno_to_blk_status(error));
++ dm_io_dec_pending(ci.io, errno_to_blk_status(error));
+ return ret;
+ }
+
+diff --git a/drivers/media/cec/usb/rainshadow/rainshadow-cec.c b/drivers/media/cec/usb/rainshadow/rainshadow-cec.c
+index ee870ea1a88601..6f8d6797c61459 100644
+--- a/drivers/media/cec/usb/rainshadow/rainshadow-cec.c
++++ b/drivers/media/cec/usb/rainshadow/rainshadow-cec.c
+@@ -171,11 +171,12 @@ static irqreturn_t rain_interrupt(struct serio *serio, unsigned char data,
+ {
+ struct rain *rain = serio_get_drvdata(serio);
+
++ spin_lock(&rain->buf_lock);
+ if (rain->buf_len == DATA_SIZE) {
++ spin_unlock(&rain->buf_lock);
+ dev_warn_once(rain->dev, "buffer overflow\n");
+ return IRQ_HANDLED;
+ }
+- spin_lock(&rain->buf_lock);
+ rain->buf_len++;
+ rain->buf[rain->buf_wr_idx] = data;
+ rain->buf_wr_idx = (rain->buf_wr_idx + 1) & 0xff;
+diff --git a/drivers/media/dvb-frontends/dib7000p.c b/drivers/media/dvb-frontends/dib7000p.c
+index 8c426baf76ee3e..08b3ac8ff1083a 100644
+--- a/drivers/media/dvb-frontends/dib7000p.c
++++ b/drivers/media/dvb-frontends/dib7000p.c
+@@ -2198,6 +2198,8 @@ static int w7090p_tuner_write_serpar(struct i2c_adapter *i2c_adap, struct i2c_ms
+ struct dib7000p_state *state = i2c_get_adapdata(i2c_adap);
+ u8 n_overflow = 1;
+ u16 i = 1000;
++ if (msg[0].len < 3)
++ return -EOPNOTSUPP;
+ u16 serpar_num = msg[0].buf[0];
+
+ while (n_overflow == 1 && i) {
+@@ -2217,6 +2219,8 @@ static int w7090p_tuner_read_serpar(struct i2c_adapter *i2c_adap, struct i2c_msg
+ struct dib7000p_state *state = i2c_get_adapdata(i2c_adap);
+ u8 n_overflow = 1, n_empty = 1;
+ u16 i = 1000;
++ if (msg[0].len < 1 || msg[1].len < 2)
++ return -EOPNOTSUPP;
+ u16 serpar_num = msg[0].buf[0];
+ u16 read_word;
+
+@@ -2261,8 +2265,12 @@ static int dib7090p_rw_on_apb(struct i2c_adapter *i2c_adap,
+ u16 word;
+
+ if (num == 1) { /* write */
++ if (msg[0].len < 3)
++ return -EOPNOTSUPP;
+ dib7000p_write_word(state, apb_address, ((msg[0].buf[1] << 8) | (msg[0].buf[2])));
+ } else {
++ if (msg[1].len < 2)
++ return -EOPNOTSUPP;
+ word = dib7000p_read_word(state, apb_address);
+ msg[1].buf[0] = (word >> 8) & 0xff;
+ msg[1].buf[1] = (word) & 0xff;
+diff --git a/drivers/media/i2c/hi556.c b/drivers/media/i2c/hi556.c
+index c66cd1446c0fd7..e15a9ce1e3b044 100644
+--- a/drivers/media/i2c/hi556.c
++++ b/drivers/media/i2c/hi556.c
+@@ -602,21 +602,23 @@ static int hi556_test_pattern(struct hi556 *hi556, u32 pattern)
+ int ret;
+ u32 val;
+
+- if (pattern) {
+- ret = hi556_read_reg(hi556, HI556_REG_ISP,
+- HI556_REG_VALUE_08BIT, &val);
+- if (ret)
+- return ret;
++ ret = hi556_read_reg(hi556, HI556_REG_ISP,
++ HI556_REG_VALUE_08BIT, &val);
++ if (ret)
++ return ret;
+
+- ret = hi556_write_reg(hi556, HI556_REG_ISP,
+- HI556_REG_VALUE_08BIT,
+- val | HI556_REG_ISP_TPG_EN);
+- if (ret)
+- return ret;
+- }
++ val = pattern ? (val | HI556_REG_ISP_TPG_EN) :
++ (val & ~HI556_REG_ISP_TPG_EN);
++
++ ret = hi556_write_reg(hi556, HI556_REG_ISP,
++ HI556_REG_VALUE_08BIT, val);
++ if (ret)
++ return ret;
++
++ val = pattern ? BIT(pattern - 1) : 0;
+
+ return hi556_write_reg(hi556, HI556_REG_TEST_PATTERN,
+- HI556_REG_VALUE_08BIT, pattern);
++ HI556_REG_VALUE_08BIT, val);
+ }
+
+ static int hi556_set_ctrl(struct v4l2_ctrl *ctrl)
+diff --git a/drivers/media/i2c/ov2659.c b/drivers/media/i2c/ov2659.c
+index fb78a1cedc03b7..e2a0a887728846 100644
+--- a/drivers/media/i2c/ov2659.c
++++ b/drivers/media/i2c/ov2659.c
+@@ -1482,14 +1482,15 @@ static int ov2659_probe(struct i2c_client *client)
+ V4L2_CID_TEST_PATTERN,
+ ARRAY_SIZE(ov2659_test_pattern_menu) - 1,
+ 0, 0, ov2659_test_pattern_menu);
+- ov2659->sd.ctrl_handler = &ov2659->ctrls;
+
+ if (ov2659->ctrls.error) {
+ dev_err(&client->dev, "%s: control initialization error %d\n",
+ __func__, ov2659->ctrls.error);
++ v4l2_ctrl_handler_free(&ov2659->ctrls);
+ return ov2659->ctrls.error;
+ }
+
++ ov2659->sd.ctrl_handler = &ov2659->ctrls;
+ sd = &ov2659->sd;
+ client->flags |= I2C_CLIENT_SCCB;
+ #ifdef CONFIG_VIDEO_V4L2_SUBDEV_API
+diff --git a/drivers/media/i2c/tc358743.c b/drivers/media/i2c/tc358743.c
+index 1b3441510b6fa9..9cc52beb3b5e17 100644
+--- a/drivers/media/i2c/tc358743.c
++++ b/drivers/media/i2c/tc358743.c
+@@ -110,7 +110,7 @@ static inline struct tc358743_state *to_state(struct v4l2_subdev *sd)
+
+ /* --------------- I2C --------------- */
+
+-static void i2c_rd(struct v4l2_subdev *sd, u16 reg, u8 *values, u32 n)
++static int i2c_rd(struct v4l2_subdev *sd, u16 reg, u8 *values, u32 n)
+ {
+ struct tc358743_state *state = to_state(sd);
+ struct i2c_client *client = state->i2c_client;
+@@ -136,6 +136,7 @@ static void i2c_rd(struct v4l2_subdev *sd, u16 reg, u8 *values, u32 n)
+ v4l2_err(sd, "%s: reading register 0x%x from 0x%x failed\n",
+ __func__, reg, client->addr);
+ }
++ return err != ARRAY_SIZE(msgs);
+ }
+
+ static void i2c_wr(struct v4l2_subdev *sd, u16 reg, u8 *values, u32 n)
+@@ -192,15 +193,24 @@ static void i2c_wr(struct v4l2_subdev *sd, u16 reg, u8 *values, u32 n)
+ }
+ }
+
+-static noinline u32 i2c_rdreg(struct v4l2_subdev *sd, u16 reg, u32 n)
++static noinline u32 i2c_rdreg_err(struct v4l2_subdev *sd, u16 reg, u32 n,
++ int *err)
+ {
++ int error;
+ __le32 val = 0;
+
+- i2c_rd(sd, reg, (u8 __force *)&val, n);
++ error = i2c_rd(sd, reg, (u8 __force *)&val, n);
++ if (err)
++ *err = error;
+
+ return le32_to_cpu(val);
+ }
+
++static inline u32 i2c_rdreg(struct v4l2_subdev *sd, u16 reg, u32 n)
++{
++ return i2c_rdreg_err(sd, reg, n, NULL);
++}
++
+ static noinline void i2c_wrreg(struct v4l2_subdev *sd, u16 reg, u32 val, u32 n)
+ {
+ __le32 raw = cpu_to_le32(val);
+@@ -229,6 +239,13 @@ static u16 i2c_rd16(struct v4l2_subdev *sd, u16 reg)
+ return i2c_rdreg(sd, reg, 2);
+ }
+
++static int i2c_rd16_err(struct v4l2_subdev *sd, u16 reg, u16 *value)
++{
++ int err;
++ *value = i2c_rdreg_err(sd, reg, 2, &err);
++ return err;
++}
++
+ static void i2c_wr16(struct v4l2_subdev *sd, u16 reg, u16 val)
+ {
+ i2c_wrreg(sd, reg, val, 2);
+@@ -1669,12 +1686,23 @@ static int tc358743_enum_mbus_code(struct v4l2_subdev *sd,
+ return 0;
+ }
+
++static u32 tc358743_g_colorspace(u32 code)
++{
++ switch (code) {
++ case MEDIA_BUS_FMT_RGB888_1X24:
++ return V4L2_COLORSPACE_SRGB;
++ case MEDIA_BUS_FMT_UYVY8_1X16:
++ return V4L2_COLORSPACE_SMPTE170M;
++ default:
++ return 0;
++ }
++}
++
+ static int tc358743_get_fmt(struct v4l2_subdev *sd,
+ struct v4l2_subdev_pad_config *cfg,
+ struct v4l2_subdev_format *format)
+ {
+ struct tc358743_state *state = to_state(sd);
+- u8 vi_rep = i2c_rd8(sd, VI_REP);
+
+ if (format->pad != 0)
+ return -EINVAL;
+@@ -1684,23 +1712,7 @@ static int tc358743_get_fmt(struct v4l2_subdev *sd,
+ format->format.height = state->timings.bt.height;
+ format->format.field = V4L2_FIELD_NONE;
+
+- switch (vi_rep & MASK_VOUT_COLOR_SEL) {
+- case MASK_VOUT_COLOR_RGB_FULL:
+- case MASK_VOUT_COLOR_RGB_LIMITED:
+- format->format.colorspace = V4L2_COLORSPACE_SRGB;
+- break;
+- case MASK_VOUT_COLOR_601_YCBCR_LIMITED:
+- case MASK_VOUT_COLOR_601_YCBCR_FULL:
+- format->format.colorspace = V4L2_COLORSPACE_SMPTE170M;
+- break;
+- case MASK_VOUT_COLOR_709_YCBCR_FULL:
+- case MASK_VOUT_COLOR_709_YCBCR_LIMITED:
+- format->format.colorspace = V4L2_COLORSPACE_REC709;
+- break;
+- default:
+- format->format.colorspace = 0;
+- break;
+- }
++ format->format.colorspace = tc358743_g_colorspace(format->format.code);
+
+ return 0;
+ }
+@@ -1714,19 +1726,14 @@ static int tc358743_set_fmt(struct v4l2_subdev *sd,
+ u32 code = format->format.code; /* is overwritten by get_fmt */
+ int ret = tc358743_get_fmt(sd, cfg, format);
+
+- format->format.code = code;
++ if (code == MEDIA_BUS_FMT_RGB888_1X24 ||
++ code == MEDIA_BUS_FMT_UYVY8_1X16)
++ format->format.code = code;
++ format->format.colorspace = tc358743_g_colorspace(format->format.code);
+
+ if (ret)
+ return ret;
+
+- switch (code) {
+- case MEDIA_BUS_FMT_RGB888_1X24:
+- case MEDIA_BUS_FMT_UYVY8_1X16:
+- break;
+- default:
+- return -EINVAL;
+- }
+-
+ if (format->which == V4L2_SUBDEV_FORMAT_TRY)
+ return 0;
+
+@@ -1953,8 +1960,19 @@ static int tc358743_probe_of(struct tc358743_state *state)
+ state->pdata.refclk_hz = clk_get_rate(refclk);
+ state->pdata.ddc5v_delay = DDC5V_DELAY_100_MS;
+ state->pdata.enable_hdcp = false;
+- /* A FIFO level of 16 should be enough for 2-lane 720p60 at 594 MHz. */
+- state->pdata.fifo_level = 16;
++ /*
++ * Ideally the FIFO trigger level should be set based on the input and
++ * output data rates, but the calculations required are buried in
++ * Toshiba's register settings spreadsheet.
++ * A value of 16 works with a 594Mbps data rate for 720p60 (using 2
++ * lanes) and 1080p60 (using 4 lanes), but fails when the data rate
++ * is increased, or a lower pixel clock is used that result in CSI
++ * reading out faster than the data is arriving.
++ *
++ * A value of 374 works with both those modes at 594Mbps, and with most
++ * modes on 972Mbps.
++ */
++ state->pdata.fifo_level = 374;
+ /*
+ * The PLL input clock is obtained by dividing refclk by pll_prd.
+ * It must be between 6 MHz and 40 MHz, lower frequency is better.
+@@ -2042,6 +2060,7 @@ static int tc358743_probe(struct i2c_client *client)
+ struct tc358743_platform_data *pdata = client->dev.platform_data;
+ struct v4l2_subdev *sd;
+ u16 irq_mask = MASK_HDMI_MSK | MASK_CSI_MSK;
++ u16 chipid;
+ int err;
+
+ if (!i2c_check_functionality(client->adapter, I2C_FUNC_SMBUS_BYTE_DATA))
+@@ -2073,7 +2092,8 @@ static int tc358743_probe(struct i2c_client *client)
+ sd->flags |= V4L2_SUBDEV_FL_HAS_DEVNODE | V4L2_SUBDEV_FL_HAS_EVENTS;
+
+ /* i2c access */
+- if ((i2c_rd16(sd, CHIPID) & MASK_CHIPID) != 0) {
++ if (i2c_rd16_err(sd, CHIPID, &chipid) ||
++ (chipid & MASK_CHIPID) != 0) {
+ v4l2_info(sd, "not a TC358743 on address 0x%x\n",
+ client->addr << 1);
+ return -ENODEV;
+diff --git a/drivers/media/platform/qcom/camss/camss.c b/drivers/media/platform/qcom/camss/camss.c
+index d074f426980dd5..a4d3cb61b9d044 100644
+--- a/drivers/media/platform/qcom/camss/camss.c
++++ b/drivers/media/platform/qcom/camss/camss.c
+@@ -888,7 +888,7 @@ static int camss_probe(struct platform_device *pdev)
+ ret = v4l2_device_register(camss->dev, &camss->v4l2_dev);
+ if (ret < 0) {
+ dev_err(dev, "Failed to register V4L2 device: %d\n", ret);
+- goto err_cleanup;
++ goto err_media_device_cleanup;
+ }
+
+ ret = camss_register_entities(camss);
+@@ -945,6 +945,8 @@ static int camss_probe(struct platform_device *pdev)
+ camss_unregister_entities(camss);
+ err_register_entities:
+ v4l2_device_unregister(&camss->v4l2_dev);
++err_media_device_cleanup:
++ media_device_cleanup(&camss->media_dev);
+ err_cleanup:
+ v4l2_async_notifier_cleanup(&camss->notifier);
+ err_free:
+diff --git a/drivers/media/platform/qcom/venus/core.c b/drivers/media/platform/qcom/venus/core.c
+index b8bbd9d71b790d..bdf58cc192903b 100644
+--- a/drivers/media/platform/qcom/venus/core.c
++++ b/drivers/media/platform/qcom/venus/core.c
+@@ -220,6 +220,19 @@ static void venus_assign_register_offsets(struct venus_core *core)
+ core->wrapper_base = core->base + WRAPPER_BASE;
+ }
+
++static irqreturn_t venus_isr_thread(int irq, void *dev_id)
++{
++ struct venus_core *core = dev_id;
++ irqreturn_t ret;
++
++ ret = hfi_isr_thread(irq, dev_id);
++
++ if (ret == IRQ_HANDLED && venus_fault_inject_ssr())
++ hfi_core_trigger_ssr(core, HFI_TEST_SSR_SW_ERR_FATAL);
++
++ return ret;
++}
++
+ static int venus_probe(struct platform_device *pdev)
+ {
+ struct device *dev = &pdev->dev;
+@@ -276,13 +289,13 @@ static int venus_probe(struct platform_device *pdev)
+ mutex_init(&core->lock);
+ INIT_DELAYED_WORK(&core->work, venus_sys_error_handler);
+
+- ret = devm_request_threaded_irq(dev, core->irq, hfi_isr, hfi_isr_thread,
+- IRQF_TRIGGER_HIGH | IRQF_ONESHOT,
+- "venus", core);
++ ret = hfi_create(core, &venus_core_ops);
+ if (ret)
+ goto err_core_put;
+
+- ret = hfi_create(core, &venus_core_ops);
++ ret = devm_request_threaded_irq(dev, core->irq, hfi_isr, venus_isr_thread,
++ IRQF_TRIGGER_HIGH | IRQF_ONESHOT,
++ "venus", core);
+ if (ret)
+ goto err_core_put;
+
+diff --git a/drivers/media/platform/qcom/venus/core.h b/drivers/media/platform/qcom/venus/core.h
+index 785a5bbb19c3ce..0abb152c681f24 100644
+--- a/drivers/media/platform/qcom/venus/core.h
++++ b/drivers/media/platform/qcom/venus/core.h
+@@ -25,6 +25,8 @@
+ #define VIDC_VCODEC_CLKS_NUM_MAX 2
+ #define VIDC_PMDOMAINS_NUM_MAX 3
+
++#define VENUS_MAX_FPS 240
++
+ extern int venus_fw_debug;
+
+ struct freq_tbl {
+diff --git a/drivers/media/platform/qcom/venus/dbgfs.c b/drivers/media/platform/qcom/venus/dbgfs.c
+index 52de47f2ca8893..726f4b730e69bc 100644
+--- a/drivers/media/platform/qcom/venus/dbgfs.c
++++ b/drivers/media/platform/qcom/venus/dbgfs.c
+@@ -4,13 +4,22 @@
+ */
+
+ #include <linux/debugfs.h>
++#include <linux/fault-inject.h>
+
+ #include "core.h"
+
++#ifdef CONFIG_FAULT_INJECTION
++DECLARE_FAULT_ATTR(venus_ssr_attr);
++#endif
++
+ void venus_dbgfs_init(struct venus_core *core)
+ {
+ core->root = debugfs_create_dir("venus", NULL);
+ debugfs_create_x32("fw_level", 0644, core->root, &venus_fw_debug);
++
++#ifdef CONFIG_FAULT_INJECTION
++ fault_create_debugfs_attr("fail_ssr", core->root, &venus_ssr_attr);
++#endif
+ }
+
+ void venus_dbgfs_deinit(struct venus_core *core)
+diff --git a/drivers/media/platform/qcom/venus/dbgfs.h b/drivers/media/platform/qcom/venus/dbgfs.h
+index b7b621a8472f1a..c87c1355d03924 100644
+--- a/drivers/media/platform/qcom/venus/dbgfs.h
++++ b/drivers/media/platform/qcom/venus/dbgfs.h
+@@ -4,8 +4,21 @@
+ #ifndef __VENUS_DBGFS_H__
+ #define __VENUS_DBGFS_H__
+
++#include <linux/fault-inject.h>
++
+ struct venus_core;
+
++#ifdef CONFIG_FAULT_INJECTION
++extern struct fault_attr venus_ssr_attr;
++static inline bool venus_fault_inject_ssr(void)
++{
++ return should_fail(&venus_ssr_attr, 1);
++}
++#else
++static inline bool venus_fault_inject_ssr(void) { return false; }
++#endif
++
++
+ void venus_dbgfs_init(struct venus_core *core);
+ void venus_dbgfs_deinit(struct venus_core *core);
+
+diff --git a/drivers/media/platform/qcom/venus/hfi_venus.c b/drivers/media/platform/qcom/venus/hfi_venus.c
+index 91584d197af99e..4f25703e55a7fa 100644
+--- a/drivers/media/platform/qcom/venus/hfi_venus.c
++++ b/drivers/media/platform/qcom/venus/hfi_venus.c
+@@ -240,6 +240,7 @@ static int venus_write_queue(struct venus_hfi_device *hdev,
+ static int venus_read_queue(struct venus_hfi_device *hdev,
+ struct iface_queue *queue, void *pkt, u32 *tx_req)
+ {
++ struct hfi_pkt_hdr *pkt_hdr = NULL;
+ struct hfi_queue_header *qhdr;
+ u32 dwords, new_rd_idx;
+ u32 rd_idx, wr_idx, type, qsize;
+@@ -305,6 +306,9 @@ static int venus_read_queue(struct venus_hfi_device *hdev,
+ memcpy(pkt, rd_ptr, len);
+ memcpy(pkt + len, queue->qmem.kva, new_rd_idx << 2);
+ }
++ pkt_hdr = (struct hfi_pkt_hdr *)(pkt);
++ if ((pkt_hdr->size >> 2) != dwords)
++ return -EINVAL;
+ } else {
+ /* bad packet received, dropping */
+ new_rd_idx = qhdr->write_idx;
+@@ -1067,12 +1071,15 @@ static irqreturn_t venus_isr(struct venus_core *core)
+ {
+ struct venus_hfi_device *hdev = to_hfi_priv(core);
+ u32 status;
+- void __iomem *cpu_cs_base = hdev->core->cpu_cs_base;
+- void __iomem *wrapper_base = hdev->core->wrapper_base;
++ void __iomem *cpu_cs_base;
++ void __iomem *wrapper_base;
+
+ if (!hdev)
+ return IRQ_NONE;
+
++ cpu_cs_base = hdev->core->cpu_cs_base;
++ wrapper_base = hdev->core->wrapper_base;
++
+ status = readl(wrapper_base + WRAPPER_INTR_STATUS);
+
+ if (status & WRAPPER_INTR_STATUS_A2H_MASK ||
+@@ -1609,10 +1616,11 @@ void venus_hfi_destroy(struct venus_core *core)
+ {
+ struct venus_hfi_device *hdev = to_hfi_priv(core);
+
++ core->priv = NULL;
+ venus_interface_queues_release(hdev);
+ mutex_destroy(&hdev->lock);
+ kfree(hdev);
+- core->priv = NULL;
++ disable_irq(core->irq);
+ core->ops = NULL;
+ }
+
+diff --git a/drivers/media/platform/qcom/venus/vdec.c b/drivers/media/platform/qcom/venus/vdec.c
+index 68390143d37df4..b18459c5290c46 100644
+--- a/drivers/media/platform/qcom/venus/vdec.c
++++ b/drivers/media/platform/qcom/venus/vdec.c
+@@ -427,11 +427,10 @@ static int vdec_s_parm(struct file *file, void *fh, struct v4l2_streamparm *a)
+ us_per_frame = timeperframe->numerator * (u64)USEC_PER_SEC;
+ do_div(us_per_frame, timeperframe->denominator);
+
+- if (!us_per_frame)
+- return -EINVAL;
+-
++ us_per_frame = clamp(us_per_frame, 1, USEC_PER_SEC);
+ fps = (u64)USEC_PER_SEC;
+ do_div(fps, us_per_frame);
++ fps = min(VENUS_MAX_FPS, fps);
+
+ inst->fps = fps;
+ inst->timeperframe = *timeperframe;
+diff --git a/drivers/media/usb/gspca/vicam.c b/drivers/media/usb/gspca/vicam.c
+index d98343fd33fe34..91e177aa8136fd 100644
+--- a/drivers/media/usb/gspca/vicam.c
++++ b/drivers/media/usb/gspca/vicam.c
+@@ -227,6 +227,7 @@ static int sd_init(struct gspca_dev *gspca_dev)
+ const struct ihex_binrec *rec;
+ const struct firmware *fw;
+ u8 *firmware_buf;
++ int len;
+
+ ret = request_ihex_firmware(&fw, VICAM_FIRMWARE,
+ &gspca_dev->dev->dev);
+@@ -241,9 +242,14 @@ static int sd_init(struct gspca_dev *gspca_dev)
+ goto exit;
+ }
+ for (rec = (void *)fw->data; rec; rec = ihex_next_binrec(rec)) {
+- memcpy(firmware_buf, rec->data, be16_to_cpu(rec->len));
++ len = be16_to_cpu(rec->len);
++ if (len > PAGE_SIZE) {
++ ret = -EINVAL;
++ break;
++ }
++ memcpy(firmware_buf, rec->data, len);
+ ret = vicam_control_msg(gspca_dev, 0xff, 0, 0, firmware_buf,
+- be16_to_cpu(rec->len));
++ len);
+ if (ret < 0)
+ break;
+ }
+diff --git a/drivers/media/usb/hdpvr/hdpvr-i2c.c b/drivers/media/usb/hdpvr/hdpvr-i2c.c
+index 070559b01b01b8..54956a8ff15e86 100644
+--- a/drivers/media/usb/hdpvr/hdpvr-i2c.c
++++ b/drivers/media/usb/hdpvr/hdpvr-i2c.c
+@@ -165,10 +165,16 @@ static const struct i2c_algorithm hdpvr_algo = {
+ .functionality = hdpvr_functionality,
+ };
+
++/* prevent invalid 0-length usb_control_msg */
++static const struct i2c_adapter_quirks hdpvr_quirks = {
++ .flags = I2C_AQ_NO_ZERO_LEN_READ,
++};
++
+ static const struct i2c_adapter hdpvr_i2c_adapter_template = {
+ .name = "Hauppauge HD PVR I2C",
+ .owner = THIS_MODULE,
+ .algo = &hdpvr_algo,
++ .quirks = &hdpvr_quirks,
+ };
+
+ static int hdpvr_activate_ir(struct hdpvr_device *dev)
+diff --git a/drivers/media/usb/usbtv/usbtv-video.c b/drivers/media/usb/usbtv/usbtv-video.c
+index f2aaec0f77c8d4..89f4b55a79c1bb 100644
+--- a/drivers/media/usb/usbtv/usbtv-video.c
++++ b/drivers/media/usb/usbtv/usbtv-video.c
+@@ -73,6 +73,10 @@ static int usbtv_configure_for_norm(struct usbtv *usbtv, v4l2_std_id norm)
+ }
+
+ if (params) {
++ if (vb2_is_busy(&usbtv->vb2q) &&
++ (usbtv->width != params->cap_width ||
++ usbtv->height != params->cap_height))
++ return -EBUSY;
+ usbtv->width = params->cap_width;
+ usbtv->height = params->cap_height;
+ usbtv->n_chunks = usbtv->width * usbtv->height
+diff --git a/drivers/media/usb/uvc/uvc_driver.c b/drivers/media/usb/uvc/uvc_driver.c
+index f3f91635d67b67..419fbdbb7a3b8f 100644
+--- a/drivers/media/usb/uvc/uvc_driver.c
++++ b/drivers/media/usb/uvc/uvc_driver.c
+@@ -511,6 +511,9 @@ static int uvc_parse_format(struct uvc_device *dev,
+ unsigned int i, n;
+ u8 ftype;
+
++ if (buflen < 4)
++ return -EINVAL;
++
+ format->type = buffer[2];
+ format->index = buffer[3];
+
+diff --git a/drivers/media/usb/uvc/uvc_video.c b/drivers/media/usb/uvc/uvc_video.c
+index f6e97ff7a8e4b5..66bed1b64ac99e 100644
+--- a/drivers/media/usb/uvc/uvc_video.c
++++ b/drivers/media/usb/uvc/uvc_video.c
+@@ -228,6 +228,15 @@ static void uvc_fixup_video_ctrl(struct uvc_streaming *stream,
+
+ ctrl->dwMaxPayloadTransferSize = bandwidth;
+ }
++
++ if (stream->intf->num_altsetting > 1 &&
++ ctrl->dwMaxPayloadTransferSize > stream->maxpsize) {
++ dev_warn_ratelimited(&stream->intf->dev,
++ "UVC non compliance: the max payload transmission size (%u) exceeds the size of the ep max packet (%u). Using the max size.\n",
++ ctrl->dwMaxPayloadTransferSize,
++ stream->maxpsize);
++ ctrl->dwMaxPayloadTransferSize = stream->maxpsize;
++ }
+ }
+
+ static size_t uvc_video_ctrl_size(struct uvc_streaming *stream)
+@@ -1300,12 +1309,6 @@ static void uvc_video_decode_meta(struct uvc_streaming *stream,
+ if (!meta_buf || length == 2)
+ return;
+
+- if (meta_buf->length - meta_buf->bytesused <
+- length + sizeof(meta->ns) + sizeof(meta->sof)) {
+- meta_buf->error = 1;
+- return;
+- }
+-
+ has_pts = mem[1] & UVC_STREAM_PTS;
+ has_scr = mem[1] & UVC_STREAM_SCR;
+
+@@ -1326,6 +1329,12 @@ static void uvc_video_decode_meta(struct uvc_streaming *stream,
+ !memcmp(scr, stream->clock.last_scr, 6)))
+ return;
+
++ if (meta_buf->length - meta_buf->bytesused <
++ length + sizeof(meta->ns) + sizeof(meta->sof)) {
++ meta_buf->error = 1;
++ return;
++ }
++
+ meta = (struct uvc_meta_buf *)((u8 *)meta_buf->mem + meta_buf->bytesused);
+ local_irq_save(flags);
+ time = uvc_video_get_time();
+diff --git a/drivers/media/v4l2-core/v4l2-ctrls.c b/drivers/media/v4l2-core/v4l2-ctrls.c
+index 41f8410d08d65f..e754bb2a8a4e96 100644
+--- a/drivers/media/v4l2-core/v4l2-ctrls.c
++++ b/drivers/media/v4l2-core/v4l2-ctrls.c
+@@ -2388,7 +2388,6 @@ void v4l2_ctrl_handler_free(struct v4l2_ctrl_handler *hdl)
+ kvfree(hdl->buckets);
+ hdl->buckets = NULL;
+ hdl->cached = NULL;
+- hdl->error = 0;
+ mutex_unlock(hdl->lock);
+ mutex_destroy(&hdl->_lock);
+ }
+@@ -3767,8 +3766,19 @@ v4l2_ctrls_find_req_obj(struct v4l2_ctrl_handler *hdl,
+ obj = media_request_object_find(req, &req_ops, hdl);
+ if (obj)
+ return obj;
++ /*
++ * If there are no controls in this completed request,
++ * then that can only happen if:
++ *
++ * 1) no controls were present in the queued request, and
++ * 2) v4l2_ctrl_request_complete() could not allocate a
++ * control handler object to store the completed state in.
++ *
++ * So return ENOMEM to indicate that there was an out-of-memory
++ * error.
++ */
+ if (!set)
+- return ERR_PTR(-ENOENT);
++ return ERR_PTR(-ENOMEM);
+
+ new_hdl = kzalloc(sizeof(*new_hdl), GFP_KERNEL);
+ if (!new_hdl)
+@@ -3779,8 +3789,8 @@ v4l2_ctrls_find_req_obj(struct v4l2_ctrl_handler *hdl,
+ if (!ret)
+ ret = v4l2_ctrl_request_bind(req, new_hdl, hdl);
+ if (ret) {
++ v4l2_ctrl_handler_free(new_hdl);
+ kfree(new_hdl);
+-
+ return ERR_PTR(ret);
+ }
+
+@@ -4369,8 +4379,25 @@ void v4l2_ctrl_request_complete(struct media_request *req,
+ * wants to leave the controls unchanged.
+ */
+ obj = media_request_object_find(req, &req_ops, main_hdl);
+- if (!obj)
+- return;
++ if (!obj) {
++ int ret;
++
++ /* Create a new request so the driver can return controls */
++ hdl = kzalloc(sizeof(*hdl), GFP_KERNEL);
++ if (!hdl)
++ return;
++
++ ret = v4l2_ctrl_handler_init(hdl, (main_hdl->nr_of_buckets - 1) * 8);
++ if (!ret)
++ ret = v4l2_ctrl_request_bind(req, hdl, main_hdl);
++ if (ret) {
++ v4l2_ctrl_handler_free(hdl);
++ kfree(hdl);
++ return;
++ }
++ hdl->request_is_queued = true;
++ obj = media_request_object_find(req, &req_ops, main_hdl);
++ }
+ hdl = container_of(obj, struct v4l2_ctrl_handler, req_obj);
+
+ list_for_each_entry(ref, &hdl->ctrl_refs, node) {
+diff --git a/drivers/memstick/core/memstick.c b/drivers/memstick/core/memstick.c
+index 1c7a9dcfed6589..e24ab362e51a91 100644
+--- a/drivers/memstick/core/memstick.c
++++ b/drivers/memstick/core/memstick.c
+@@ -324,7 +324,7 @@ EXPORT_SYMBOL(memstick_init_req);
+ static int h_memstick_read_dev_id(struct memstick_dev *card,
+ struct memstick_request **mrq)
+ {
+- struct ms_id_register id_reg;
++ struct ms_id_register id_reg = {};
+
+ if (!(*mrq)) {
+ memstick_init_req(&card->current_mrq, MS_TPC_READ_REG, &id_reg,
+@@ -550,7 +550,6 @@ EXPORT_SYMBOL(memstick_add_host);
+ */
+ void memstick_remove_host(struct memstick_host *host)
+ {
+- host->removing = 1;
+ flush_workqueue(workqueue);
+ mutex_lock(&host->lock);
+ if (host->card)
+diff --git a/drivers/memstick/host/rtsx_usb_ms.c b/drivers/memstick/host/rtsx_usb_ms.c
+index dec279845a752d..43ec4948daa206 100644
+--- a/drivers/memstick/host/rtsx_usb_ms.c
++++ b/drivers/memstick/host/rtsx_usb_ms.c
+@@ -812,6 +812,7 @@ static int rtsx_usb_ms_drv_remove(struct platform_device *pdev)
+ int err;
+
+ host->eject = true;
++ msh->removing = true;
+ cancel_work_sync(&host->handle_req);
+ cancel_delayed_work_sync(&host->poll_card);
+
+diff --git a/drivers/misc/cardreader/rtsx_usb.c b/drivers/misc/cardreader/rtsx_usb.c
+index f150d8769f1986..f546b050cb495f 100644
+--- a/drivers/misc/cardreader/rtsx_usb.c
++++ b/drivers/misc/cardreader/rtsx_usb.c
+@@ -698,6 +698,12 @@ static void rtsx_usb_disconnect(struct usb_interface *intf)
+ }
+
+ #ifdef CONFIG_PM
++static int rtsx_usb_resume_child(struct device *dev, void *data)
++{
++ pm_request_resume(dev);
++ return 0;
++}
++
+ static int rtsx_usb_suspend(struct usb_interface *intf, pm_message_t message)
+ {
+ struct rtsx_ucr *ucr =
+@@ -713,8 +719,10 @@ static int rtsx_usb_suspend(struct usb_interface *intf, pm_message_t message)
+ mutex_unlock(&ucr->dev_mutex);
+
+ /* Defer the autosuspend if card exists */
+- if (val & (SD_CD | MS_CD))
++ if (val & (SD_CD | MS_CD)) {
++ device_for_each_child(&intf->dev, NULL, rtsx_usb_resume_child);
+ return -EAGAIN;
++ }
+ } else {
+ /* There is an ongoing operation*/
+ return -EAGAIN;
+@@ -724,12 +732,6 @@ static int rtsx_usb_suspend(struct usb_interface *intf, pm_message_t message)
+ return 0;
+ }
+
+-static int rtsx_usb_resume_child(struct device *dev, void *data)
+-{
+- pm_request_resume(dev);
+- return 0;
+-}
+-
+ static int rtsx_usb_resume(struct usb_interface *intf)
+ {
+ device_for_each_child(&intf->dev, NULL, rtsx_usb_resume_child);
+diff --git a/drivers/mmc/host/bcm2835.c b/drivers/mmc/host/bcm2835.c
+index 985079943be764..f9d1e7bba2c8b2 100644
+--- a/drivers/mmc/host/bcm2835.c
++++ b/drivers/mmc/host/bcm2835.c
+@@ -507,7 +507,8 @@ void bcm2835_prepare_dma(struct bcm2835_host *host, struct mmc_data *data)
+ DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
+
+ if (!desc) {
+- dma_unmap_sg(dma_chan->device->dev, data->sg, sg_len, dir_data);
++ dma_unmap_sg(dma_chan->device->dev, data->sg, data->sg_len,
++ dir_data);
+ return;
+ }
+
+diff --git a/drivers/mmc/host/rtsx_usb_sdmmc.c b/drivers/mmc/host/rtsx_usb_sdmmc.c
+index 1be3a355f10d54..ab7023d956ebc3 100644
+--- a/drivers/mmc/host/rtsx_usb_sdmmc.c
++++ b/drivers/mmc/host/rtsx_usb_sdmmc.c
+@@ -1032,9 +1032,7 @@ static int sd_set_power_mode(struct rtsx_usb_sdmmc *host,
+ err = sd_power_on(host);
+ }
+
+- if (!err)
+- host->power_mode = power_mode;
+-
++ host->power_mode = power_mode;
+ return err;
+ }
+
+diff --git a/drivers/mmc/host/sdhci-msm.c b/drivers/mmc/host/sdhci-msm.c
+index c9298a986ef0a6..183617d56b446b 100644
+--- a/drivers/mmc/host/sdhci-msm.c
++++ b/drivers/mmc/host/sdhci-msm.c
+@@ -1544,6 +1544,7 @@ static void sdhci_msm_check_power_status(struct sdhci_host *host, u32 req_type)
+ {
+ struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
+ struct sdhci_msm_host *msm_host = sdhci_pltfm_priv(pltfm_host);
++ struct mmc_host *mmc = host->mmc;
+ bool done = false;
+ u32 val = SWITCHABLE_SIGNALING_VOLTAGE;
+ const struct sdhci_msm_offset *msm_offset =
+@@ -1601,6 +1602,12 @@ static void sdhci_msm_check_power_status(struct sdhci_host *host, u32 req_type)
+ "%s: pwr_irq for req: (%d) timed out\n",
+ mmc_hostname(host->mmc), req_type);
+ }
++
++ if ((req_type & REQ_BUS_ON) && mmc->card && !mmc->ops->get_cd(mmc)) {
++ sdhci_writeb(host, 0, SDHCI_POWER_CONTROL);
++ host->pwr = 0;
++ }
++
+ pr_debug("%s: %s: request %d done\n", mmc_hostname(host->mmc),
+ __func__, req_type);
+ }
+@@ -1659,6 +1666,13 @@ static void sdhci_msm_handle_pwr_irq(struct sdhci_host *host, int irq)
+ udelay(10);
+ }
+
++ if ((irq_status & CORE_PWRCTL_BUS_ON) && mmc->card &&
++ !mmc->ops->get_cd(mmc)) {
++ msm_host_writel(msm_host, CORE_PWRCTL_BUS_FAIL, host,
++ msm_offset->core_pwrctl_ctl);
++ return;
++ }
++
+ /* Handle BUS ON/OFF*/
+ if (irq_status & CORE_PWRCTL_BUS_ON) {
+ pwr_state = REQ_BUS_ON;
+diff --git a/drivers/mmc/host/sdhci-pci-core.c b/drivers/mmc/host/sdhci-pci-core.c
+index 3769595693531a..f55fdd7468b63d 100644
+--- a/drivers/mmc/host/sdhci-pci-core.c
++++ b/drivers/mmc/host/sdhci-pci-core.c
+@@ -969,7 +969,8 @@ static bool glk_broken_cqhci(struct sdhci_pci_slot *slot)
+ {
+ return slot->chip->pdev->device == PCI_DEVICE_ID_INTEL_GLK_EMMC &&
+ (dmi_match(DMI_BIOS_VENDOR, "LENOVO") ||
+- dmi_match(DMI_SYS_VENDOR, "IRBIS"));
++ dmi_match(DMI_SYS_VENDOR, "IRBIS") ||
++ dmi_match(DMI_SYS_VENDOR, "Positivo Tecnologia SA"));
+ }
+
+ static bool jsl_broken_hs400es(struct sdhci_pci_slot *slot)
+diff --git a/drivers/mmc/host/sdhci-pci-gli.c b/drivers/mmc/host/sdhci-pci-gli.c
+index 23b89b4cad0887..3a171221f97c45 100644
+--- a/drivers/mmc/host/sdhci-pci-gli.c
++++ b/drivers/mmc/host/sdhci-pci-gli.c
+@@ -755,7 +755,7 @@ static void sdhci_gl9763e_reset(struct sdhci_host *host, u8 mask)
+ sdhci_reset(host, mask);
+ }
+
+-static void gli_set_gl9763e(struct sdhci_pci_slot *slot)
++static void gl9763e_hw_setting(struct sdhci_pci_slot *slot)
+ {
+ struct pci_dev *pdev = slot->chip->pdev;
+ u32 value;
+@@ -797,7 +797,7 @@ static int gli_probe_slot_gl9763e(struct sdhci_pci_slot *slot)
+ gli_pcie_enable_msi(slot);
+ host->mmc_host_ops.hs400_enhanced_strobe =
+ gl9763e_hs400_enhanced_strobe;
+- gli_set_gl9763e(slot);
++ gl9763e_hw_setting(slot);
+ sdhci_enable_v4_mode(host);
+
+ return 0;
+diff --git a/drivers/mmc/host/sdhci_am654.c b/drivers/mmc/host/sdhci_am654.c
+index 9d74ee989cb724..6ed4ed6071e4fe 100644
+--- a/drivers/mmc/host/sdhci_am654.c
++++ b/drivers/mmc/host/sdhci_am654.c
+@@ -558,7 +558,8 @@ static struct sdhci_ops sdhci_am654_ops = {
+ static const struct sdhci_pltfm_data sdhci_am654_pdata = {
+ .ops = &sdhci_am654_ops,
+ .quirks = SDHCI_QUIRK_MULTIBLOCK_READ_ACMD12,
+- .quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN,
++ .quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN |
++ SDHCI_QUIRK2_DISABLE_HW_TIMEOUT,
+ };
+
+ static const struct sdhci_am654_driver_data sdhci_am654_sr1_drvdata = {
+@@ -588,7 +589,8 @@ static struct sdhci_ops sdhci_j721e_8bit_ops = {
+ static const struct sdhci_pltfm_data sdhci_j721e_8bit_pdata = {
+ .ops = &sdhci_j721e_8bit_ops,
+ .quirks = SDHCI_QUIRK_MULTIBLOCK_READ_ACMD12,
+- .quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN,
++ .quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN |
++ SDHCI_QUIRK2_DISABLE_HW_TIMEOUT,
+ };
+
+ static const struct sdhci_am654_driver_data sdhci_j721e_8bit_drvdata = {
+@@ -612,7 +614,8 @@ static struct sdhci_ops sdhci_j721e_4bit_ops = {
+ static const struct sdhci_pltfm_data sdhci_j721e_4bit_pdata = {
+ .ops = &sdhci_j721e_4bit_ops,
+ .quirks = SDHCI_QUIRK_MULTIBLOCK_READ_ACMD12,
+- .quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN,
++ .quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN |
++ SDHCI_QUIRK2_DISABLE_HW_TIMEOUT,
+ };
+
+ static const struct sdhci_am654_driver_data sdhci_j721e_4bit_drvdata = {
+diff --git a/drivers/most/core.c b/drivers/most/core.c
+index 353ab277cbc6bc..0cf0bbd2abfaac 100644
+--- a/drivers/most/core.c
++++ b/drivers/most/core.c
+@@ -538,8 +538,8 @@ static struct most_channel *get_channel(char *mdev, char *mdev_ch)
+ dev = bus_find_device_by_name(&mostbus, NULL, mdev);
+ if (!dev)
+ return NULL;
+- put_device(dev);
+ iface = dev_get_drvdata(dev);
++ put_device(dev);
+ list_for_each_entry_safe(c, tmp, &iface->p->channel_list, list) {
+ if (!strcmp(dev_name(&c->dev), mdev_ch))
+ return c;
+diff --git a/drivers/mtd/ftl.c b/drivers/mtd/ftl.c
+index 2578f27914ef50..ffe89209cf4bae 100644
+--- a/drivers/mtd/ftl.c
++++ b/drivers/mtd/ftl.c
+@@ -344,7 +344,7 @@ static int erase_xfer(partition_t *part,
+ return -ENOMEM;
+
+ erase->addr = xfer->Offset;
+- erase->len = 1 << part->header.EraseUnitSize;
++ erase->len = 1ULL << part->header.EraseUnitSize;
+
+ ret = mtd_erase(part->mbd.mtd, erase);
+ if (!ret) {
+diff --git a/drivers/mtd/nand/raw/atmel/nand-controller.c b/drivers/mtd/nand/raw/atmel/nand-controller.c
+index 0d84f8156d8e49..3468cc3293992a 100644
+--- a/drivers/mtd/nand/raw/atmel/nand-controller.c
++++ b/drivers/mtd/nand/raw/atmel/nand-controller.c
+@@ -373,7 +373,7 @@ static int atmel_nand_dma_transfer(struct atmel_nand_controller *nc,
+ dma_cookie_t cookie;
+
+ buf_dma = dma_map_single(nc->dev, buf, len, dir);
+- if (dma_mapping_error(nc->dev, dev_dma)) {
++ if (dma_mapping_error(nc->dev, buf_dma)) {
+ dev_err(nc->dev,
+ "Failed to prepare a buffer for DMA access\n");
+ goto err;
+diff --git a/drivers/mtd/nand/raw/atmel/pmecc.c b/drivers/mtd/nand/raw/atmel/pmecc.c
+index d1ed5878b3b1f3..28ed65dd3d43f3 100644
+--- a/drivers/mtd/nand/raw/atmel/pmecc.c
++++ b/drivers/mtd/nand/raw/atmel/pmecc.c
+@@ -143,6 +143,7 @@ struct atmel_pmecc_caps {
+ int nstrengths;
+ int el_offset;
+ bool correct_erased_chunks;
++ bool clk_ctrl;
+ };
+
+ struct atmel_pmecc {
+@@ -846,6 +847,10 @@ static struct atmel_pmecc *atmel_pmecc_create(struct platform_device *pdev,
+ if (IS_ERR(pmecc->regs.errloc))
+ return ERR_CAST(pmecc->regs.errloc);
+
++ /* pmecc data setup time */
++ if (caps->clk_ctrl)
++ writel(PMECC_CLK_133MHZ, pmecc->regs.base + ATMEL_PMECC_CLK);
++
+ /* Disable all interrupts before registering the PMECC handler. */
+ writel(0xffffffff, pmecc->regs.base + ATMEL_PMECC_IDR);
+ atmel_pmecc_reset(pmecc);
+@@ -899,6 +904,7 @@ static struct atmel_pmecc_caps at91sam9g45_caps = {
+ .strengths = atmel_pmecc_strengths,
+ .nstrengths = 5,
+ .el_offset = 0x8c,
++ .clk_ctrl = true,
+ };
+
+ static struct atmel_pmecc_caps sama5d4_caps = {
+diff --git a/drivers/mtd/nand/raw/fsmc_nand.c b/drivers/mtd/nand/raw/fsmc_nand.c
+index 3da66e95e5b7ec..7fe5418c3e3b32 100644
+--- a/drivers/mtd/nand/raw/fsmc_nand.c
++++ b/drivers/mtd/nand/raw/fsmc_nand.c
+@@ -492,6 +492,8 @@ static int dma_xfer(struct fsmc_nand_data *host, void *buffer, int len,
+
+ dma_dev = chan->device;
+ dma_addr = dma_map_single(dma_dev->dev, buffer, len, direction);
++ if (dma_mapping_error(dma_dev->dev, dma_addr))
++ return -EINVAL;
+
+ if (direction == DMA_TO_DEVICE) {
+ dma_src = dma_addr;
+diff --git a/drivers/net/can/kvaser_pciefd.c b/drivers/net/can/kvaser_pciefd.c
+index 197390dfc6abcb..42c2b56d783e2f 100644
+--- a/drivers/net/can/kvaser_pciefd.c
++++ b/drivers/net/can/kvaser_pciefd.c
+@@ -955,6 +955,7 @@ static int kvaser_pciefd_setup_can_ctrls(struct kvaser_pciefd *pcie)
+ can->err_rep_cnt = 0;
+ can->bec.txerr = 0;
+ can->bec.rxerr = 0;
++ can->can.dev->dev_port = i;
+
+ init_completion(&can->start_comp);
+ init_completion(&can->flush_comp);
+diff --git a/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c b/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c
+index a96b2239840705..602f0b3bbcdf61 100644
+--- a/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c
++++ b/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c
+@@ -813,6 +813,7 @@ static int kvaser_usb_init_one(struct kvaser_usb *dev, int channel)
+
+ SET_NETDEV_DEV(netdev, &dev->intf->dev);
+ netdev->dev_id = channel;
++ netdev->dev_port = channel;
+
+ dev->nets[channel] = priv;
+
+diff --git a/drivers/net/dsa/b53/b53_common.c b/drivers/net/dsa/b53/b53_common.c
+index 39a56cedbc1f42..361f9be65386e6 100644
+--- a/drivers/net/dsa/b53/b53_common.c
++++ b/drivers/net/dsa/b53/b53_common.c
+@@ -339,18 +339,23 @@ static void b53_set_forwarding(struct b53_device *dev, int enable)
+
+ b53_write8(dev, B53_CTRL_PAGE, B53_SWITCH_MODE, mgmt);
+
+- /* Include IMP port in dumb forwarding mode
+- */
+- b53_read8(dev, B53_CTRL_PAGE, B53_SWITCH_CTRL, &mgmt);
+- mgmt |= B53_MII_DUMB_FWDG_EN;
+- b53_write8(dev, B53_CTRL_PAGE, B53_SWITCH_CTRL, mgmt);
+-
+- /* Look at B53_UC_FWD_EN and B53_MC_FWD_EN to decide whether
+- * frames should be flooded or not.
+- */
+- b53_read8(dev, B53_CTRL_PAGE, B53_IP_MULTICAST_CTRL, &mgmt);
+- mgmt |= B53_UC_FWD_EN | B53_MC_FWD_EN | B53_IPMC_FWD_EN;
+- b53_write8(dev, B53_CTRL_PAGE, B53_IP_MULTICAST_CTRL, mgmt);
++ if (!is5325(dev)) {
++ /* Include IMP port in dumb forwarding mode */
++ b53_read8(dev, B53_CTRL_PAGE, B53_SWITCH_CTRL, &mgmt);
++ mgmt |= B53_MII_DUMB_FWDG_EN;
++ b53_write8(dev, B53_CTRL_PAGE, B53_SWITCH_CTRL, mgmt);
++
++ /* Look at B53_UC_FWD_EN and B53_MC_FWD_EN to decide whether
++ * frames should be flooded or not.
++ */
++ b53_read8(dev, B53_CTRL_PAGE, B53_IP_MULTICAST_CTRL, &mgmt);
++ mgmt |= B53_UC_FWD_EN | B53_MC_FWD_EN | B53_IPMC_FWD_EN;
++ b53_write8(dev, B53_CTRL_PAGE, B53_IP_MULTICAST_CTRL, mgmt);
++ } else {
++ b53_read8(dev, B53_CTRL_PAGE, B53_IP_MULTICAST_CTRL, &mgmt);
++ mgmt |= B53_IP_MCAST_25;
++ b53_write8(dev, B53_CTRL_PAGE, B53_IP_MULTICAST_CTRL, mgmt);
++ }
+ }
+
+ static void b53_enable_vlan(struct b53_device *dev, bool enable,
+@@ -504,6 +509,10 @@ void b53_imp_vlan_setup(struct dsa_switch *ds, int cpu_port)
+ unsigned int i;
+ u16 pvlan;
+
++ /* BCM5325 CPU port is at 8 */
++ if ((is5325(dev) || is5365(dev)) && cpu_port == B53_CPU_PORT_25)
++ cpu_port = B53_CPU_PORT;
++
+ /* Enable the IMP port to be in the same VLAN as the other ports
+ * on a per-port basis such that we only have Port i and IMP in
+ * the same VLAN.
+@@ -1102,6 +1111,8 @@ static void b53_force_link(struct b53_device *dev, int port, int link)
+ if (port == dev->imp_port) {
+ off = B53_PORT_OVERRIDE_CTRL;
+ val = PORT_OVERRIDE_EN;
++ } else if (is5325(dev)) {
++ return;
+ } else {
+ off = B53_GMII_PORT_OVERRIDE_CTRL(port);
+ val = GMII_PO_EN;
+@@ -1126,6 +1137,8 @@ static void b53_force_port_config(struct b53_device *dev, int port,
+ if (port == dev->imp_port) {
+ off = B53_PORT_OVERRIDE_CTRL;
+ val = PORT_OVERRIDE_EN;
++ } else if (is5325(dev)) {
++ return;
+ } else {
+ off = B53_GMII_PORT_OVERRIDE_CTRL(port);
+ val = GMII_PO_EN;
+@@ -1156,10 +1169,19 @@ static void b53_force_port_config(struct b53_device *dev, int port,
+ return;
+ }
+
+- if (rx_pause)
+- reg |= PORT_OVERRIDE_RX_FLOW;
+- if (tx_pause)
+- reg |= PORT_OVERRIDE_TX_FLOW;
++ if (rx_pause) {
++ if (is5325(dev))
++ reg |= PORT_OVERRIDE_LP_FLOW_25;
++ else
++ reg |= PORT_OVERRIDE_RX_FLOW;
++ }
++
++ if (tx_pause) {
++ if (is5325(dev))
++ reg |= PORT_OVERRIDE_LP_FLOW_25;
++ else
++ reg |= PORT_OVERRIDE_TX_FLOW;
++ }
+
+ b53_write8(dev, B53_CTRL_PAGE, off, reg);
+ }
+diff --git a/drivers/net/dsa/b53/b53_regs.h b/drivers/net/dsa/b53/b53_regs.h
+index b2c539a4215450..77fb7ae660b8c8 100644
+--- a/drivers/net/dsa/b53/b53_regs.h
++++ b/drivers/net/dsa/b53/b53_regs.h
+@@ -92,6 +92,7 @@
+ #define PORT_OVERRIDE_SPEED_10M (0 << PORT_OVERRIDE_SPEED_S)
+ #define PORT_OVERRIDE_SPEED_100M (1 << PORT_OVERRIDE_SPEED_S)
+ #define PORT_OVERRIDE_SPEED_1000M (2 << PORT_OVERRIDE_SPEED_S)
++#define PORT_OVERRIDE_LP_FLOW_25 BIT(3) /* BCM5325 only */
+ #define PORT_OVERRIDE_RV_MII_25 BIT(4) /* BCM5325 only */
+ #define PORT_OVERRIDE_RX_FLOW BIT(4)
+ #define PORT_OVERRIDE_TX_FLOW BIT(5)
+@@ -103,6 +104,7 @@
+
+ /* IP Multicast control (8 bit) */
+ #define B53_IP_MULTICAST_CTRL 0x21
++#define B53_IP_MCAST_25 BIT(0)
+ #define B53_IPMC_FWD_EN BIT(1)
+ #define B53_UC_FWD_EN BIT(6)
+ #define B53_MC_FWD_EN BIT(7)
+diff --git a/drivers/net/ethernet/agere/et131x.c b/drivers/net/ethernet/agere/et131x.c
+index 41f8821f792d30..fa04e37de08961 100644
+--- a/drivers/net/ethernet/agere/et131x.c
++++ b/drivers/net/ethernet/agere/et131x.c
+@@ -2460,6 +2460,10 @@ static int nic_send_packet(struct et131x_adapter *adapter, struct tcb *tcb)
+ skb->data,
+ skb_headlen(skb),
+ DMA_TO_DEVICE);
++ if (dma_mapping_error(&adapter->pdev->dev,
++ dma_addr))
++ return -ENOMEM;
++
+ desc[frag].addr_lo = lower_32_bits(dma_addr);
+ desc[frag].addr_hi = upper_32_bits(dma_addr);
+ frag++;
+@@ -2469,6 +2473,10 @@ static int nic_send_packet(struct et131x_adapter *adapter, struct tcb *tcb)
+ skb->data,
+ skb_headlen(skb) / 2,
+ DMA_TO_DEVICE);
++ if (dma_mapping_error(&adapter->pdev->dev,
++ dma_addr))
++ return -ENOMEM;
++
+ desc[frag].addr_lo = lower_32_bits(dma_addr);
+ desc[frag].addr_hi = upper_32_bits(dma_addr);
+ frag++;
+@@ -2479,6 +2487,10 @@ static int nic_send_packet(struct et131x_adapter *adapter, struct tcb *tcb)
+ skb_headlen(skb) / 2,
+ skb_headlen(skb) / 2,
+ DMA_TO_DEVICE);
++ if (dma_mapping_error(&adapter->pdev->dev,
++ dma_addr))
++ goto unmap_first_out;
++
+ desc[frag].addr_lo = lower_32_bits(dma_addr);
+ desc[frag].addr_hi = upper_32_bits(dma_addr);
+ frag++;
+@@ -2490,6 +2502,9 @@ static int nic_send_packet(struct et131x_adapter *adapter, struct tcb *tcb)
+ 0,
+ desc[frag].len_vlan,
+ DMA_TO_DEVICE);
++ if (dma_mapping_error(&adapter->pdev->dev, dma_addr))
++ goto unmap_out;
++
+ desc[frag].addr_lo = lower_32_bits(dma_addr);
+ desc[frag].addr_hi = upper_32_bits(dma_addr);
+ frag++;
+@@ -2579,6 +2594,27 @@ static int nic_send_packet(struct et131x_adapter *adapter, struct tcb *tcb)
+ &adapter->regs->global.watchdog_timer);
+ }
+ return 0;
++
++unmap_out:
++ // Unmap the body of the packet with map_page
++ while (--i) {
++ frag--;
++ dma_addr = desc[frag].addr_lo;
++ dma_addr |= (u64)desc[frag].addr_hi << 32;
++ dma_unmap_page(&adapter->pdev->dev, dma_addr,
++ desc[frag].len_vlan, DMA_TO_DEVICE);
++ }
++
++unmap_first_out:
++ // Unmap the header with map_single
++ while (frag--) {
++ dma_addr = desc[frag].addr_lo;
++ dma_addr |= (u64)desc[frag].addr_hi << 32;
++ dma_unmap_single(&adapter->pdev->dev, dma_addr,
++ desc[frag].len_vlan, DMA_TO_DEVICE);
++ }
++
++ return -ENOMEM;
+ }
+
+ static int send_packet(struct sk_buff *skb, struct et131x_adapter *adapter)
+diff --git a/drivers/net/ethernet/atheros/ag71xx.c b/drivers/net/ethernet/atheros/ag71xx.c
+index fe3ca3af431a4e..67409a53d5100f 100644
+--- a/drivers/net/ethernet/atheros/ag71xx.c
++++ b/drivers/net/ethernet/atheros/ag71xx.c
+@@ -1275,6 +1275,11 @@ static bool ag71xx_fill_rx_buf(struct ag71xx *ag, struct ag71xx_buf *buf,
+ buf->rx.rx_buf = data;
+ buf->rx.dma_addr = dma_map_single(&ag->pdev->dev, data, ag->rx_buf_size,
+ DMA_FROM_DEVICE);
++ if (dma_mapping_error(&ag->pdev->dev, buf->rx.dma_addr)) {
++ skb_free_frag(data);
++ buf->rx.rx_buf = NULL;
++ return false;
++ }
+ desc->data = (u32)buf->rx.dma_addr + offset;
+ return true;
+ }
+@@ -1573,6 +1578,10 @@ static netdev_tx_t ag71xx_hard_start_xmit(struct sk_buff *skb,
+
+ dma_addr = dma_map_single(&ag->pdev->dev, skb->data, skb->len,
+ DMA_TO_DEVICE);
++ if (dma_mapping_error(&ag->pdev->dev, dma_addr)) {
++ netif_dbg(ag, tx_err, ndev, "DMA mapping error\n");
++ goto err_drop;
++ }
+
+ i = ring->curr & ring_mask;
+ desc = ag71xx_ring_desc(ring, i);
+diff --git a/drivers/net/ethernet/cavium/thunder/thunder_bgx.c b/drivers/net/ethernet/cavium/thunder/thunder_bgx.c
+index f0e48b9373d6dc..0a71909bb2ee50 100644
+--- a/drivers/net/ethernet/cavium/thunder/thunder_bgx.c
++++ b/drivers/net/ethernet/cavium/thunder/thunder_bgx.c
+@@ -1430,9 +1430,9 @@ static acpi_status bgx_acpi_match_id(acpi_handle handle, u32 lvl,
+ {
+ struct acpi_buffer string = { ACPI_ALLOCATE_BUFFER, NULL };
+ struct bgx *bgx = context;
+- char bgx_sel[5];
++ char bgx_sel[7];
+
+- snprintf(bgx_sel, 5, "BGX%d", bgx->bgx_id);
++ snprintf(bgx_sel, sizeof(bgx_sel), "BGX%d", bgx->bgx_id);
+ if (ACPI_FAILURE(acpi_get_name(handle, ACPI_SINGLE_NAME, &string))) {
+ pr_warn("Invalid link device\n");
+ return AE_OK;
+diff --git a/drivers/net/ethernet/emulex/benet/be_cmds.c b/drivers/net/ethernet/emulex/benet/be_cmds.c
+index d9bceb26f4e5b0..d6984c179bae0c 100644
+--- a/drivers/net/ethernet/emulex/benet/be_cmds.c
++++ b/drivers/net/ethernet/emulex/benet/be_cmds.c
+@@ -3851,8 +3851,8 @@ int be_cmd_set_mac_list(struct be_adapter *adapter, u8 *mac_array,
+ status = be_mcc_notify_wait(adapter);
+
+ err:
+- dma_free_coherent(&adapter->pdev->dev, cmd.size, cmd.va, cmd.dma);
+ spin_unlock_bh(&adapter->mcc_lock);
++ dma_free_coherent(&adapter->pdev->dev, cmd.size, cmd.va, cmd.dma);
+ return status;
+ }
+
+diff --git a/drivers/net/ethernet/emulex/benet/be_main.c b/drivers/net/ethernet/emulex/benet/be_main.c
+index ff3ea24d2e3f9a..97cbe7737eb4b0 100644
+--- a/drivers/net/ethernet/emulex/benet/be_main.c
++++ b/drivers/net/ethernet/emulex/benet/be_main.c
+@@ -1466,10 +1466,10 @@ static void be_tx_timeout(struct net_device *netdev, unsigned int txqueue)
+ ntohs(tcphdr->source));
+ dev_info(dev, "TCP dest port %d\n",
+ ntohs(tcphdr->dest));
+- dev_info(dev, "TCP sequence num %d\n",
+- ntohs(tcphdr->seq));
+- dev_info(dev, "TCP ack_seq %d\n",
+- ntohs(tcphdr->ack_seq));
++ dev_info(dev, "TCP sequence num %u\n",
++ ntohl(tcphdr->seq));
++ dev_info(dev, "TCP ack_seq %u\n",
++ ntohl(tcphdr->ack_seq));
+ } else if (ip_hdr(skb)->protocol ==
+ IPPROTO_UDP) {
+ udphdr = udp_hdr(skb);
+diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c b/drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c
+index d8cb0b99684ad3..6542abeca78779 100644
+--- a/drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c
++++ b/drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c
+@@ -499,8 +499,10 @@ static int dpaa_get_ts_info(struct net_device *net_dev,
+ of_node_put(ptp_node);
+ }
+
+- if (ptp_dev)
++ if (ptp_dev) {
+ ptp = platform_get_drvdata(ptp_dev);
++ put_device(&ptp_dev->dev);
++ }
+
+ if (ptp)
+ info->phc_index = ptp->phc_index;
+diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+index 776f624e3b8eed..0a1a7d94583b45 100644
+--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
++++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+@@ -1765,7 +1765,7 @@ static int dpaa2_eth_link_state_update(struct dpaa2_eth_priv *priv)
+ /* When we manage the MAC/PHY using phylink there is no need
+ * to manually update the netif_carrier.
+ */
+- if (priv->mac)
++ if (dpaa2_eth_is_type_phy(priv))
+ goto out;
+
+ /* Chech link state; speed / duplex changes are not treated yet */
+@@ -1804,7 +1804,7 @@ static int dpaa2_eth_open(struct net_device *net_dev)
+ priv->dpbp_dev->obj_desc.id, priv->bpid);
+ }
+
+- if (!priv->mac) {
++ if (!dpaa2_eth_is_type_phy(priv)) {
+ /* We'll only start the txqs when the link is actually ready;
+ * make sure we don't race against the link up notification,
+ * which may come immediately after dpni_enable();
+@@ -1826,7 +1826,7 @@ static int dpaa2_eth_open(struct net_device *net_dev)
+ goto enable_err;
+ }
+
+- if (priv->mac)
++ if (dpaa2_eth_is_type_phy(priv))
+ phylink_start(priv->mac->phylink);
+
+ return 0;
+@@ -1900,11 +1900,11 @@ static int dpaa2_eth_stop(struct net_device *net_dev)
+ int dpni_enabled = 0;
+ int retries = 10;
+
+- if (!priv->mac) {
++ if (dpaa2_eth_is_type_phy(priv)) {
++ phylink_stop(priv->mac->phylink);
++ } else {
+ netif_tx_stop_all_queues(net_dev);
+ netif_carrier_off(net_dev);
+- } else {
+- phylink_stop(priv->mac->phylink);
+ }
+
+ /* On dpni_disable(), the MC firmware will:
+@@ -2192,7 +2192,7 @@ static int dpaa2_eth_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
+ if (cmd == SIOCSHWTSTAMP)
+ return dpaa2_eth_ts_ioctl(dev, rq, cmd);
+
+- if (priv->mac)
++ if (dpaa2_eth_is_type_phy(priv))
+ return phylink_mii_ioctl(priv->mac->phylink, rq, cmd);
+
+ return -EOPNOTSUPP;
+@@ -4134,37 +4134,59 @@ static int dpaa2_eth_connect_mac(struct dpaa2_eth_priv *priv)
+
+ dpni_dev = to_fsl_mc_device(priv->net_dev->dev.parent);
+ dpmac_dev = fsl_mc_get_endpoint(dpni_dev);
+- if (IS_ERR_OR_NULL(dpmac_dev) || dpmac_dev->dev.type != &fsl_mc_bus_dpmac_type)
+- return 0;
+
+- if (dpaa2_mac_is_type_fixed(dpmac_dev, priv->mc_io))
++ if (PTR_ERR(dpmac_dev) == -EPROBE_DEFER)
++ return PTR_ERR(dpmac_dev);
++
++ if (IS_ERR(dpmac_dev))
+ return 0;
+
++ if (dpmac_dev->dev.type != &fsl_mc_bus_dpmac_type) {
++ err = 0;
++ goto out_put_device;
++ }
++
+ mac = kzalloc(sizeof(struct dpaa2_mac), GFP_KERNEL);
+- if (!mac)
+- return -ENOMEM;
++ if (!mac) {
++ err = -ENOMEM;
++ goto out_put_device;
++ }
+
+ mac->mc_dev = dpmac_dev;
+ mac->mc_io = priv->mc_io;
+ mac->net_dev = priv->net_dev;
+
+- err = dpaa2_mac_connect(mac);
+- if (err) {
+- netdev_err(priv->net_dev, "Error connecting to the MAC endpoint\n");
+- kfree(mac);
+- return err;
+- }
++ err = dpaa2_mac_open(mac);
++ if (err)
++ goto err_free_mac;
+ priv->mac = mac;
+
++ if (dpaa2_eth_is_type_phy(priv)) {
++ err = dpaa2_mac_connect(mac);
++ if (err) {
++ netdev_err(priv->net_dev, "Error connecting to the MAC endpoint\n");
++ goto err_close_mac;
++ }
++ }
++
+ return 0;
++
++err_close_mac:
++ dpaa2_mac_close(mac);
++ priv->mac = NULL;
++err_free_mac:
++ kfree(mac);
++out_put_device:
++ put_device(&dpmac_dev->dev);
++ return err;
+ }
+
+ static void dpaa2_eth_disconnect_mac(struct dpaa2_eth_priv *priv)
+ {
+- if (!priv->mac)
+- return;
++ if (dpaa2_eth_is_type_phy(priv))
++ dpaa2_mac_disconnect(priv->mac);
+
+- dpaa2_mac_disconnect(priv->mac);
++ dpaa2_mac_close(priv->mac);
+ kfree(priv->mac);
+ priv->mac = NULL;
+ }
+@@ -4193,7 +4215,7 @@ static irqreturn_t dpni_irq0_handler_thread(int irq_num, void *arg)
+ dpaa2_eth_update_tx_fqids(priv);
+
+ rtnl_lock();
+- if (priv->mac)
++ if (dpaa2_eth_has_mac(priv))
+ dpaa2_eth_disconnect_mac(priv);
+ else
+ dpaa2_eth_connect_mac(priv);
+diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.h b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.h
+index 5934b1b4ee9732..77b1d39dd5c519 100644
+--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.h
++++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.h
+@@ -705,6 +705,19 @@ static inline unsigned int dpaa2_eth_rx_head_room(struct dpaa2_eth_priv *priv)
+ return priv->tx_data_offset - DPAA2_ETH_RX_HWA_SIZE;
+ }
+
++static inline bool dpaa2_eth_is_type_phy(struct dpaa2_eth_priv *priv)
++{
++ if (priv->mac && priv->mac->attr.link_type == DPMAC_LINK_TYPE_PHY)
++ return true;
++
++ return false;
++}
++
++static inline bool dpaa2_eth_has_mac(struct dpaa2_eth_priv *priv)
++{
++ return priv->mac ? true : false;
++}
++
+ int dpaa2_eth_set_hash(struct net_device *net_dev, u64 flags);
+ int dpaa2_eth_set_cls(struct net_device *net_dev, u64 key);
+ int dpaa2_eth_cls_key_size(u64 key);
+diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-ethtool.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-ethtool.c
+index d7de60049700f2..f65179ed4f33a5 100644
+--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-ethtool.c
++++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-ethtool.c
+@@ -85,7 +85,7 @@ static int dpaa2_eth_nway_reset(struct net_device *net_dev)
+ {
+ struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
+
+- if (priv->mac)
++ if (dpaa2_eth_is_type_phy(priv))
+ return phylink_ethtool_nway_reset(priv->mac->phylink);
+
+ return -EOPNOTSUPP;
+@@ -97,7 +97,7 @@ dpaa2_eth_get_link_ksettings(struct net_device *net_dev,
+ {
+ struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
+
+- if (priv->mac)
++ if (dpaa2_eth_is_type_phy(priv))
+ return phylink_ethtool_ksettings_get(priv->mac->phylink,
+ link_settings);
+
+@@ -115,7 +115,7 @@ dpaa2_eth_set_link_ksettings(struct net_device *net_dev,
+ {
+ struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
+
+- if (!priv->mac)
++ if (!dpaa2_eth_is_type_phy(priv))
+ return -ENOTSUPP;
+
+ return phylink_ethtool_ksettings_set(priv->mac->phylink, link_settings);
+@@ -127,7 +127,7 @@ static void dpaa2_eth_get_pauseparam(struct net_device *net_dev,
+ struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
+ u64 link_options = priv->link_state.options;
+
+- if (priv->mac) {
++ if (dpaa2_eth_is_type_phy(priv)) {
+ phylink_ethtool_get_pauseparam(priv->mac->phylink, pause);
+ return;
+ }
+@@ -150,7 +150,7 @@ static int dpaa2_eth_set_pauseparam(struct net_device *net_dev,
+ return -EOPNOTSUPP;
+ }
+
+- if (priv->mac)
++ if (dpaa2_eth_is_type_phy(priv))
+ return phylink_ethtool_set_pauseparam(priv->mac->phylink,
+ pause);
+ if (pause->autoneg)
+@@ -198,7 +198,7 @@ static void dpaa2_eth_get_strings(struct net_device *netdev, u32 stringset,
+ strlcpy(p, dpaa2_ethtool_extras[i], ETH_GSTRING_LEN);
+ p += ETH_GSTRING_LEN;
+ }
+- if (priv->mac)
++ if (dpaa2_eth_has_mac(priv))
+ dpaa2_mac_get_strings(p);
+ break;
+ }
+@@ -211,7 +211,7 @@ static int dpaa2_eth_get_sset_count(struct net_device *net_dev, int sset)
+
+ switch (sset) {
+ case ETH_SS_STATS: /* ethtool_get_stats(), ethtool_get_drvinfo() */
+- if (priv->mac)
++ if (dpaa2_eth_has_mac(priv))
+ num_ss_stats += dpaa2_mac_get_sset_count();
+ return num_ss_stats;
+ default:
+@@ -311,7 +311,7 @@ static void dpaa2_eth_get_ethtool_stats(struct net_device *net_dev,
+ }
+ *(data + i++) = buf_cnt;
+
+- if (priv->mac)
++ if (dpaa2_eth_has_mac(priv))
+ dpaa2_mac_get_ethtool_stats(priv->mac, data + i);
+ }
+
+diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-mac.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-mac.c
+index 828c177df03d59..81b2822a7dc9f5 100644
+--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-mac.c
++++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-mac.c
+@@ -228,32 +228,6 @@ static const struct phylink_mac_ops dpaa2_mac_phylink_ops = {
+ .mac_link_down = dpaa2_mac_link_down,
+ };
+
+-bool dpaa2_mac_is_type_fixed(struct fsl_mc_device *dpmac_dev,
+- struct fsl_mc_io *mc_io)
+-{
+- struct dpmac_attr attr;
+- bool fixed = false;
+- u16 mc_handle = 0;
+- int err;
+-
+- err = dpmac_open(mc_io, 0, dpmac_dev->obj_desc.id,
+- &mc_handle);
+- if (err || !mc_handle)
+- return false;
+-
+- err = dpmac_get_attributes(mc_io, 0, mc_handle, &attr);
+- if (err)
+- goto out;
+-
+- if (attr.link_type == DPMAC_LINK_TYPE_FIXED)
+- fixed = true;
+-
+-out:
+- dpmac_close(mc_io, 0, mc_handle);
+-
+- return fixed;
+-}
+-
+ static int dpaa2_pcs_create(struct dpaa2_mac *mac,
+ struct device_node *dpmac_node, int id)
+ {
+@@ -302,36 +276,20 @@ static void dpaa2_pcs_destroy(struct dpaa2_mac *mac)
+
+ int dpaa2_mac_connect(struct dpaa2_mac *mac)
+ {
+- struct fsl_mc_device *dpmac_dev = mac->mc_dev;
+ struct net_device *net_dev = mac->net_dev;
+ struct device_node *dpmac_node;
+ struct phylink *phylink;
+- struct dpmac_attr attr;
+ int err;
+
+- err = dpmac_open(mac->mc_io, 0, dpmac_dev->obj_desc.id,
+- &dpmac_dev->mc_handle);
+- if (err || !dpmac_dev->mc_handle) {
+- netdev_err(net_dev, "dpmac_open() = %d\n", err);
+- return -ENODEV;
+- }
+-
+- err = dpmac_get_attributes(mac->mc_io, 0, dpmac_dev->mc_handle, &attr);
+- if (err) {
+- netdev_err(net_dev, "dpmac_get_attributes() = %d\n", err);
+- goto err_close_dpmac;
+- }
++ mac->if_link_type = mac->attr.link_type;
+
+- mac->if_link_type = attr.link_type;
+-
+- dpmac_node = dpaa2_mac_get_node(attr.id);
++ dpmac_node = dpaa2_mac_get_node(mac->attr.id);
+ if (!dpmac_node) {
+- netdev_err(net_dev, "No dpmac@%d node found.\n", attr.id);
+- err = -ENODEV;
+- goto err_close_dpmac;
++ netdev_err(net_dev, "No dpmac@%d node found.\n", mac->attr.id);
++ return -ENODEV;
+ }
+
+- err = dpaa2_mac_get_if_mode(dpmac_node, attr);
++ err = dpaa2_mac_get_if_mode(dpmac_node, mac->attr);
+ if (err < 0) {
+ err = -EINVAL;
+ goto err_put_node;
+@@ -351,9 +309,9 @@ int dpaa2_mac_connect(struct dpaa2_mac *mac)
+ goto err_put_node;
+ }
+
+- if (attr.link_type == DPMAC_LINK_TYPE_PHY &&
+- attr.eth_if != DPMAC_ETH_IF_RGMII) {
+- err = dpaa2_pcs_create(mac, dpmac_node, attr.id);
++ if (mac->attr.link_type == DPMAC_LINK_TYPE_PHY &&
++ mac->attr.eth_if != DPMAC_ETH_IF_RGMII) {
++ err = dpaa2_pcs_create(mac, dpmac_node, mac->attr.id);
+ if (err)
+ goto err_put_node;
+ }
+@@ -389,8 +347,7 @@ int dpaa2_mac_connect(struct dpaa2_mac *mac)
+ dpaa2_pcs_destroy(mac);
+ err_put_node:
+ of_node_put(dpmac_node);
+-err_close_dpmac:
+- dpmac_close(mac->mc_io, 0, dpmac_dev->mc_handle);
++
+ return err;
+ }
+
+@@ -402,8 +359,40 @@ void dpaa2_mac_disconnect(struct dpaa2_mac *mac)
+ phylink_disconnect_phy(mac->phylink);
+ phylink_destroy(mac->phylink);
+ dpaa2_pcs_destroy(mac);
++}
+
+- dpmac_close(mac->mc_io, 0, mac->mc_dev->mc_handle);
++int dpaa2_mac_open(struct dpaa2_mac *mac)
++{
++ struct fsl_mc_device *dpmac_dev = mac->mc_dev;
++ struct net_device *net_dev = mac->net_dev;
++ int err;
++
++ err = dpmac_open(mac->mc_io, 0, dpmac_dev->obj_desc.id,
++ &dpmac_dev->mc_handle);
++ if (err || !dpmac_dev->mc_handle) {
++ netdev_err(net_dev, "dpmac_open() = %d\n", err);
++ return -ENODEV;
++ }
++
++ err = dpmac_get_attributes(mac->mc_io, 0, dpmac_dev->mc_handle,
++ &mac->attr);
++ if (err) {
++ netdev_err(net_dev, "dpmac_get_attributes() = %d\n", err);
++ goto err_close_dpmac;
++ }
++
++ return 0;
++
++err_close_dpmac:
++ dpmac_close(mac->mc_io, 0, dpmac_dev->mc_handle);
++ return err;
++}
++
++void dpaa2_mac_close(struct dpaa2_mac *mac)
++{
++ struct fsl_mc_device *dpmac_dev = mac->mc_dev;
++
++ dpmac_close(mac->mc_io, 0, dpmac_dev->mc_handle);
+ }
+
+ static char dpaa2_mac_ethtool_stats[][ETH_GSTRING_LEN] = {
+diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-mac.h b/drivers/net/ethernet/freescale/dpaa2/dpaa2-mac.h
+index 955a52856210f4..13d42dd58ec90a 100644
+--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-mac.h
++++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-mac.h
+@@ -17,6 +17,7 @@ struct dpaa2_mac {
+ struct dpmac_link_state state;
+ struct net_device *net_dev;
+ struct fsl_mc_io *mc_io;
++ struct dpmac_attr attr;
+
+ struct phylink_config phylink_config;
+ struct phylink *phylink;
+@@ -28,6 +29,10 @@ struct dpaa2_mac {
+ bool dpaa2_mac_is_type_fixed(struct fsl_mc_device *dpmac_dev,
+ struct fsl_mc_io *mc_io);
+
++int dpaa2_mac_open(struct dpaa2_mac *mac);
++
++void dpaa2_mac_close(struct dpaa2_mac *mac);
++
+ int dpaa2_mac_connect(struct dpaa2_mac *mac);
+
+ void dpaa2_mac_disconnect(struct dpaa2_mac *mac);
+diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
+index 805434ba3035bb..adf70a1650f4d7 100644
+--- a/drivers/net/ethernet/freescale/fec_main.c
++++ b/drivers/net/ethernet/freescale/fec_main.c
+@@ -2630,27 +2630,25 @@ static int fec_enet_us_to_itr_clock(struct net_device *ndev, int us)
+ static void fec_enet_itr_coal_set(struct net_device *ndev)
+ {
+ struct fec_enet_private *fep = netdev_priv(ndev);
+- int rx_itr, tx_itr;
++ u32 rx_itr = 0, tx_itr = 0;
++ int rx_ictt, tx_ictt;
+
+- /* Must be greater than zero to avoid unpredictable behavior */
+- if (!fep->rx_time_itr || !fep->rx_pkts_itr ||
+- !fep->tx_time_itr || !fep->tx_pkts_itr)
+- return;
+-
+- /* Select enet system clock as Interrupt Coalescing
+- * timer Clock Source
+- */
+- rx_itr = FEC_ITR_CLK_SEL;
+- tx_itr = FEC_ITR_CLK_SEL;
++ rx_ictt = fec_enet_us_to_itr_clock(ndev, fep->rx_time_itr);
++ tx_ictt = fec_enet_us_to_itr_clock(ndev, fep->tx_time_itr);
+
+- /* set ICFT and ICTT */
+- rx_itr |= FEC_ITR_ICFT(fep->rx_pkts_itr);
+- rx_itr |= FEC_ITR_ICTT(fec_enet_us_to_itr_clock(ndev, fep->rx_time_itr));
+- tx_itr |= FEC_ITR_ICFT(fep->tx_pkts_itr);
+- tx_itr |= FEC_ITR_ICTT(fec_enet_us_to_itr_clock(ndev, fep->tx_time_itr));
++ if (rx_ictt > 0 && fep->rx_pkts_itr > 1) {
++ /* Enable with enet system clock as Interrupt Coalescing timer Clock Source */
++ rx_itr = FEC_ITR_EN | FEC_ITR_CLK_SEL;
++ rx_itr |= FEC_ITR_ICFT(fep->rx_pkts_itr);
++ rx_itr |= FEC_ITR_ICTT(rx_ictt);
++ }
+
+- rx_itr |= FEC_ITR_EN;
+- tx_itr |= FEC_ITR_EN;
++ if (tx_ictt > 0 && fep->tx_pkts_itr > 1) {
++ /* Enable with enet system clock as Interrupt Coalescing timer Clock Source */
++ tx_itr = FEC_ITR_EN | FEC_ITR_CLK_SEL;
++ tx_itr |= FEC_ITR_ICFT(fep->tx_pkts_itr);
++ tx_itr |= FEC_ITR_ICTT(tx_ictt);
++ }
+
+ writel(tx_itr, fep->hwp + FEC_TXIC0);
+ writel(rx_itr, fep->hwp + FEC_RXIC0);
+diff --git a/drivers/net/ethernet/freescale/gianfar_ethtool.c b/drivers/net/ethernet/freescale/gianfar_ethtool.c
+index 799a1486f586db..0004138088dd42 100644
+--- a/drivers/net/ethernet/freescale/gianfar_ethtool.c
++++ b/drivers/net/ethernet/freescale/gianfar_ethtool.c
+@@ -1457,8 +1457,10 @@ static int gfar_get_ts_info(struct net_device *dev,
+ if (ptp_node) {
+ ptp_dev = of_find_device_by_node(ptp_node);
+ of_node_put(ptp_node);
+- if (ptp_dev)
++ if (ptp_dev) {
+ ptp = platform_get_drvdata(ptp_dev);
++ put_device(&ptp_dev->dev);
++ }
+ }
+
+ if (ptp)
+diff --git a/drivers/net/ethernet/google/gve/gve_adminq.c b/drivers/net/ethernet/google/gve/gve_adminq.c
+index 67f2b9a61463ad..f02b3f01a557d6 100644
+--- a/drivers/net/ethernet/google/gve/gve_adminq.c
++++ b/drivers/net/ethernet/google/gve/gve_adminq.c
+@@ -246,6 +246,7 @@ static int gve_adminq_issue_cmd(struct gve_priv *priv,
+ break;
+ default:
+ dev_err(&priv->pdev->dev, "unknown AQ command opcode %d\n", opcode);
++ return -EINVAL;
+ }
+
+ return 0;
+diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
+index f458a97dd7910c..c409e46e3cfd8d 100644
+--- a/drivers/net/ethernet/google/gve/gve_main.c
++++ b/drivers/net/ethernet/google/gve/gve_main.c
+@@ -944,49 +944,56 @@ static void gve_turnup(struct gve_priv *priv)
+ gve_set_napi_enabled(priv);
+ }
+
+-static void gve_tx_timeout(struct net_device *dev, unsigned int txqueue)
++static struct gve_notify_block *gve_get_tx_notify_block(struct gve_priv *priv,
++ unsigned int txqueue)
+ {
+- struct gve_notify_block *block;
+- struct gve_tx_ring *tx = NULL;
+- struct gve_priv *priv;
+- u32 last_nic_done;
+- u32 current_time;
+ u32 ntfy_idx;
+
+- netdev_info(dev, "Timeout on tx queue, %d", txqueue);
+- priv = netdev_priv(dev);
+ if (txqueue > priv->tx_cfg.num_queues)
+- goto reset;
++ return NULL;
+
+ ntfy_idx = gve_tx_idx_to_ntfy(priv, txqueue);
+ if (ntfy_idx >= priv->num_ntfy_blks)
+- goto reset;
++ return NULL;
++
++ return &priv->ntfy_blocks[ntfy_idx];
++}
++
++static bool gve_tx_timeout_try_q_kick(struct gve_priv *priv,
++ unsigned int txqueue)
++{
++ struct gve_notify_block *block;
++ u32 current_time;
+
+- block = &priv->ntfy_blocks[ntfy_idx];
+- tx = block->tx;
++ block = gve_get_tx_notify_block(priv, txqueue);
++
++ if (!block)
++ return false;
+
+ current_time = jiffies_to_msecs(jiffies);
+- if (tx->last_kick_msec + MIN_TX_TIMEOUT_GAP > current_time)
+- goto reset;
++ if (block->tx->last_kick_msec + MIN_TX_TIMEOUT_GAP > current_time)
++ return false;
+
+- /* Check to see if there are missed completions, which will allow us to
+- * kick the queue.
+- */
+- last_nic_done = gve_tx_load_event_counter(priv, tx);
+- if (last_nic_done - tx->done) {
+- netdev_info(dev, "Kicking queue %d", txqueue);
+- iowrite32be(GVE_IRQ_MASK, gve_irq_doorbell(priv, block));
+- napi_schedule(&block->napi);
+- tx->last_kick_msec = current_time;
+- goto out;
+- } // Else reset.
++ netdev_info(priv->dev, "Kicking queue %d", txqueue);
++ napi_schedule(&block->napi);
++ block->tx->last_kick_msec = current_time;
++ return true;
++}
+
+-reset:
+- gve_schedule_reset(priv);
++static void gve_tx_timeout(struct net_device *dev, unsigned int txqueue)
++{
++ struct gve_notify_block *block;
++ struct gve_priv *priv;
++
++ netdev_info(dev, "Timeout on tx queue, %d", txqueue);
++ priv = netdev_priv(dev);
++
++ if (!gve_tx_timeout_try_q_kick(priv, txqueue))
++ gve_schedule_reset(priv);
+
+-out:
+- if (tx)
+- tx->queue_timeout++;
++ block = gve_get_tx_notify_block(priv, txqueue);
++ if (block)
++ block->tx->queue_timeout++;
+ priv->tx_timeo_cnt++;
+ }
+
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hnae3.h b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
+index 25b6b4f780f1df..b0b5324e7f9902 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hnae3.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
+@@ -656,15 +656,16 @@ struct hnae3_ae_algo {
+ #define HNAE3_INT_NAME_LEN 32
+ #define HNAE3_ITR_COUNTDOWN_START 100
+
++#define HNAE3_MAX_TC 8
++#define HNAE3_MAX_USER_PRIO 8
+ struct hnae3_tc_info {
+- u16 tqp_offset; /* TQP offset from base TQP */
+- u16 tqp_count; /* Total TQPs */
+- u8 tc; /* TC index */
+- bool enable; /* If this TC is enable or not */
++ u8 prio_tc[HNAE3_MAX_USER_PRIO]; /* TC indexed by prio */
++ u16 tqp_count[HNAE3_MAX_TC];
++ u16 tqp_offset[HNAE3_MAX_TC];
++ unsigned long tc_en; /* bitmap of TC enabled */
++ u8 num_tc; /* Total number of enabled TCs */
+ };
+
+-#define HNAE3_MAX_TC 8
+-#define HNAE3_MAX_USER_PRIO 8
+ struct hnae3_knic_private_info {
+ struct net_device *netdev; /* Set by KNIC client when init instance */
+ u16 rss_size; /* Allocated RSS queues */
+@@ -673,9 +674,7 @@ struct hnae3_knic_private_info {
+ u16 num_tx_desc;
+ u16 num_rx_desc;
+
+- u8 num_tc; /* Total number of enabled TCs */
+- u8 prio_tc[HNAE3_MAX_USER_PRIO]; /* TC indexed by prio */
+- struct hnae3_tc_info tc_info[HNAE3_MAX_TC]; /* Idx of array is HW TC */
++ struct hnae3_tc_info tc_info;
+
+ u16 num_tqps; /* total number of TQPs in this handle */
+ struct hnae3_queue **tqp; /* array base of all TQPs in this instance */
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c b/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c
+index dc9a85745e62b2..d299787eae29a3 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c
+@@ -347,7 +347,8 @@ static void hns3_dbg_dev_specs(struct hnae3_handle *h)
+ dev_info(priv->dev, "RX buffer length: %u\n", kinfo->rx_buf_len);
+ dev_info(priv->dev, "Desc num per TX queue: %u\n", kinfo->num_tx_desc);
+ dev_info(priv->dev, "Desc num per RX queue: %u\n", kinfo->num_rx_desc);
+- dev_info(priv->dev, "Total number of enabled TCs: %u\n", kinfo->num_tc);
++ dev_info(priv->dev, "Total number of enabled TCs: %u\n",
++ kinfo->tc_info.num_tc);
+ dev_info(priv->dev, "MAX INT QL: %u\n", dev_specs->int_ql_max);
+ }
+
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+index 110baa9949a0ce..727b18cd6c4a95 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+@@ -273,13 +273,14 @@ static int hns3_nic_set_real_num_queue(struct net_device *netdev)
+ {
+ struct hnae3_handle *h = hns3_get_handle(netdev);
+ struct hnae3_knic_private_info *kinfo = &h->kinfo;
+- unsigned int queue_size = kinfo->rss_size * kinfo->num_tc;
++ struct hnae3_tc_info *tc_info = &kinfo->tc_info;
++ unsigned int queue_size = kinfo->rss_size * tc_info->num_tc;
+ int i, ret;
+
+- if (kinfo->num_tc <= 1) {
++ if (tc_info->num_tc <= 1) {
+ netdev_reset_tc(netdev);
+ } else {
+- ret = netdev_set_num_tc(netdev, kinfo->num_tc);
++ ret = netdev_set_num_tc(netdev, tc_info->num_tc);
+ if (ret) {
+ netdev_err(netdev,
+ "netdev_set_num_tc fail, ret=%d!\n", ret);
+@@ -287,13 +288,11 @@ static int hns3_nic_set_real_num_queue(struct net_device *netdev)
+ }
+
+ for (i = 0; i < HNAE3_MAX_TC; i++) {
+- if (!kinfo->tc_info[i].enable)
++ if (!test_bit(i, &tc_info->tc_en))
+ continue;
+
+- netdev_set_tc_queue(netdev,
+- kinfo->tc_info[i].tc,
+- kinfo->tc_info[i].tqp_count,
+- kinfo->tc_info[i].tqp_offset);
++ netdev_set_tc_queue(netdev, i, tc_info->tqp_count[i],
++ tc_info->tqp_offset[i]);
+ }
+ }
+
+@@ -319,7 +318,7 @@ static u16 hns3_get_max_available_channels(struct hnae3_handle *h)
+ u16 alloc_tqps, max_rss_size, rss_size;
+
+ h->ae_algo->ops->get_tqps_and_rss_info(h, &alloc_tqps, &max_rss_size);
+- rss_size = alloc_tqps / h->kinfo.num_tc;
++ rss_size = alloc_tqps / h->kinfo.tc_info.num_tc;
+
+ return min_t(u16, rss_size, max_rss_size);
+ }
+@@ -463,7 +462,7 @@ static int hns3_nic_net_open(struct net_device *netdev)
+
+ kinfo = &h->kinfo;
+ for (i = 0; i < HNAE3_MAX_USER_PRIO; i++)
+- netdev_set_prio_tc_map(netdev, i, kinfo->prio_tc[i]);
++ netdev_set_prio_tc_map(netdev, i, kinfo->tc_info.prio_tc[i]);
+
+ if (h->ae_algo->ops->set_timer_task)
+ h->ae_algo->ops->set_timer_task(priv->ae_handle, true);
+@@ -3914,21 +3913,20 @@ static void hns3_init_ring_hw(struct hns3_enet_ring *ring)
+ static void hns3_init_tx_ring_tc(struct hns3_nic_priv *priv)
+ {
+ struct hnae3_knic_private_info *kinfo = &priv->ae_handle->kinfo;
++ struct hnae3_tc_info *tc_info = &kinfo->tc_info;
+ int i;
+
+ for (i = 0; i < HNAE3_MAX_TC; i++) {
+- struct hnae3_tc_info *tc_info = &kinfo->tc_info[i];
+ int j;
+
+- if (!tc_info->enable)
++ if (!test_bit(i, &tc_info->tc_en))
+ continue;
+
+- for (j = 0; j < tc_info->tqp_count; j++) {
++ for (j = 0; j < tc_info->tqp_count[i]; j++) {
+ struct hnae3_queue *q;
+
+- q = priv->ring[tc_info->tqp_offset + j].tqp;
+- hns3_write_dev(q, HNS3_RING_TX_RING_TC_REG,
+- tc_info->tc);
++ q = priv->ring[tc_info->tqp_offset[i] + j].tqp;
++ hns3_write_dev(q, HNS3_RING_TX_RING_TC_REG, i);
+ }
+ }
+ }
+@@ -4056,7 +4054,8 @@ static void hns3_info_show(struct hns3_nic_priv *priv)
+ dev_info(priv->dev, "RX buffer length: %u\n", kinfo->rx_buf_len);
+ dev_info(priv->dev, "Desc num per TX queue: %u\n", kinfo->num_tx_desc);
+ dev_info(priv->dev, "Desc num per RX queue: %u\n", kinfo->num_rx_desc);
+- dev_info(priv->dev, "Total number of enabled TCs: %u\n", kinfo->num_tc);
++ dev_info(priv->dev, "Total number of enabled TCs: %u\n",
++ kinfo->tc_info.num_tc);
+ dev_info(priv->dev, "Max mtu size: %u\n", priv->netdev->max_mtu);
+ }
+
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_debugfs.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_debugfs.c
+index 16df050e72cfc7..9688b394634f17 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_debugfs.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_debugfs.c
+@@ -1418,7 +1418,7 @@ static void hclge_dbg_dump_qs_shaper_all(struct hclge_dev *hdev)
+
+ dev_info(&hdev->pdev->dev, "qs cfg of vport%d:\n", vport_id);
+
+- for (i = 0; i < kinfo->num_tc; i++) {
++ for (i = 0; i < kinfo->tc_info.num_tc; i++) {
+ u16 qsid = vport->qs_offset + i;
+
+ hclge_dbg_dump_qs_shaper_single(hdev, qsid);
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+index ec918f2981ec21..aa987cad7cadf5 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+@@ -10682,7 +10682,7 @@ static u32 hclge_get_max_channels(struct hnae3_handle *handle)
+ struct hclge_dev *hdev = vport->back;
+
+ return min_t(u32, hdev->rss_size_max,
+- vport->alloc_tqps / kinfo->num_tc);
++ vport->alloc_tqps / kinfo->tc_info.num_tc);
+ }
+
+ static void hclge_get_channels(struct hnae3_handle *handle,
+@@ -10769,7 +10769,7 @@ static int hclge_set_channels(struct hnae3_handle *handle, u32 new_tqps_num,
+ dev_info(&hdev->pdev->dev,
+ "Channels changed, rss_size from %u to %u, tqps from %u to %u",
+ cur_rss_size, kinfo->rss_size,
+- cur_tqps, kinfo->rss_size * kinfo->num_tc);
++ cur_tqps, kinfo->rss_size * kinfo->tc_info.num_tc);
+
+ return ret;
+ }
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
+index 9969714d1133dc..cdfa04bc45988a 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
+@@ -423,7 +423,7 @@ static void hclge_get_vf_tcinfo(struct hclge_vport *vport,
+ struct hnae3_knic_private_info *kinfo = &vport->nic.kinfo;
+ unsigned int i;
+
+- for (i = 0; i < kinfo->num_tc; i++)
++ for (i = 0; i < kinfo->tc_info.num_tc; i++)
+ resp_msg->data[0] |= BIT(i);
+
+ resp_msg->len = sizeof(u8);
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
+index 8c5c5562c0a736..df42458d909b33 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
+@@ -535,7 +535,7 @@ int hclge_tm_qs_shaper_cfg(struct hclge_vport *vport, int max_tx_rate)
+ HCLGE_SHAPER_BS_U_DEF,
+ HCLGE_SHAPER_BS_S_DEF);
+
+- for (i = 0; i < kinfo->num_tc; i++) {
++ for (i = 0; i < kinfo->tc_info.num_tc; i++) {
+ hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_QCN_SHAPPING_CFG,
+ false);
+
+@@ -566,13 +566,13 @@ static void hclge_tm_vport_tc_info_update(struct hclge_vport *vport)
+ /* TC configuration is shared by PF/VF in one port, only allow
+ * one tc for VF for simplicity. VF's vport_id is non zero.
+ */
+- kinfo->num_tc = vport->vport_id ? 1 :
++ kinfo->tc_info.num_tc = vport->vport_id ? 1 :
+ min_t(u16, vport->alloc_tqps, hdev->tm_info.num_tc);
+ vport->qs_offset = (vport->vport_id ? HNAE3_MAX_TC : 0) +
+ (vport->vport_id ? (vport->vport_id - 1) : 0);
+
+ max_rss_size = min_t(u16, hdev->rss_size_max,
+- vport->alloc_tqps / kinfo->num_tc);
++ vport->alloc_tqps / kinfo->tc_info.num_tc);
+
+ /* Set to user value, no larger than max_rss_size. */
+ if (kinfo->req_rss_size != kinfo->rss_size && kinfo->req_rss_size &&
+@@ -589,34 +589,32 @@ static void hclge_tm_vport_tc_info_update(struct hclge_vport *vport)
+ if (!kinfo->req_rss_size)
+ max_rss_size = min_t(u16, max_rss_size,
+ (hdev->num_nic_msi - 1) /
+- kinfo->num_tc);
++ kinfo->tc_info.num_tc);
+
+ /* Set to the maximum specification value (max_rss_size). */
+ kinfo->rss_size = max_rss_size;
+ }
+
+- kinfo->num_tqps = kinfo->num_tc * kinfo->rss_size;
++ kinfo->num_tqps = kinfo->tc_info.num_tc * kinfo->rss_size;
+ vport->dwrr = 100; /* 100 percent as init */
+ vport->alloc_rss_size = kinfo->rss_size;
+ vport->bw_limit = hdev->tm_info.pg_info[0].bw_limit;
+
+ for (i = 0; i < HNAE3_MAX_TC; i++) {
+- if (hdev->hw_tc_map & BIT(i) && i < kinfo->num_tc) {
+- kinfo->tc_info[i].enable = true;
+- kinfo->tc_info[i].tqp_offset = i * kinfo->rss_size;
+- kinfo->tc_info[i].tqp_count = kinfo->rss_size;
+- kinfo->tc_info[i].tc = i;
++ if (hdev->hw_tc_map & BIT(i) && i < kinfo->tc_info.num_tc) {
++ set_bit(i, &kinfo->tc_info.tc_en);
++ kinfo->tc_info.tqp_offset[i] = i * kinfo->rss_size;
++ kinfo->tc_info.tqp_count[i] = kinfo->rss_size;
+ } else {
+ /* Set to default queue if TC is disable */
+- kinfo->tc_info[i].enable = false;
+- kinfo->tc_info[i].tqp_offset = 0;
+- kinfo->tc_info[i].tqp_count = 1;
+- kinfo->tc_info[i].tc = 0;
++ clear_bit(i, &kinfo->tc_info.tc_en);
++ kinfo->tc_info.tqp_offset[i] = 0;
++ kinfo->tc_info.tqp_count[i] = 1;
+ }
+ }
+
+- memcpy(kinfo->prio_tc, hdev->tm_info.prio_tc,
+- sizeof_field(struct hnae3_knic_private_info, prio_tc));
++ memcpy(kinfo->tc_info.prio_tc, hdev->tm_info.prio_tc,
++ sizeof_field(struct hnae3_tc_info, prio_tc));
+ }
+
+ static void hclge_tm_vport_info_update(struct hclge_dev *hdev)
+@@ -815,15 +813,14 @@ static int hclge_vport_q_to_qs_map(struct hclge_dev *hdev,
+ struct hclge_vport *vport)
+ {
+ struct hnae3_knic_private_info *kinfo = &vport->nic.kinfo;
++ struct hnae3_tc_info *tc_info = &kinfo->tc_info;
+ struct hnae3_queue **tqp = kinfo->tqp;
+- struct hnae3_tc_info *v_tc_info;
+ u32 i, j;
+ int ret;
+
+- for (i = 0; i < kinfo->num_tc; i++) {
+- v_tc_info = &kinfo->tc_info[i];
+- for (j = 0; j < v_tc_info->tqp_count; j++) {
+- struct hnae3_queue *q = tqp[v_tc_info->tqp_offset + j];
++ for (i = 0; i < tc_info->num_tc; i++) {
++ for (j = 0; j < tc_info->tqp_count[i]; j++) {
++ struct hnae3_queue *q = tqp[tc_info->tqp_offset[i] + j];
+
+ ret = hclge_tm_q_to_qs_map_cfg(hdev,
+ hclge_get_queue_id(q),
+@@ -848,7 +845,7 @@ static int hclge_tm_pri_q_qs_cfg(struct hclge_dev *hdev)
+ struct hnae3_knic_private_info *kinfo =
+ &vport[k].nic.kinfo;
+
+- for (i = 0; i < kinfo->num_tc; i++) {
++ for (i = 0; i < kinfo->tc_info.num_tc; i++) {
+ ret = hclge_tm_qs_to_pri_map_cfg(
+ hdev, vport[k].qs_offset + i, i);
+ if (ret)
+@@ -959,7 +956,7 @@ static int hclge_tm_pri_vnet_base_shaper_qs_cfg(struct hclge_vport *vport)
+ u32 i;
+ int ret;
+
+- for (i = 0; i < kinfo->num_tc; i++) {
++ for (i = 0; i < kinfo->tc_info.num_tc; i++) {
+ ret = hclge_shaper_para_calc(hdev->tm_info.tc_info[i].bw_limit,
+ HCLGE_SHAPER_LVL_QSET,
+ &ir_para, max_tm_rate);
+@@ -1074,7 +1071,7 @@ static int hclge_tm_pri_vnet_base_dwrr_pri_cfg(struct hclge_vport *vport)
+ return ret;
+
+ /* Qset dwrr */
+- for (i = 0; i < kinfo->num_tc; i++) {
++ for (i = 0; i < kinfo->tc_info.num_tc; i++) {
+ ret = hclge_tm_qs_weight_cfg(
+ hdev, vport->qs_offset + i,
+ hdev->tm_info.pg_info[0].tc_dwrr[i]);
+@@ -1205,7 +1202,7 @@ static int hclge_tm_schd_mode_vnet_base_cfg(struct hclge_vport *vport)
+ if (ret)
+ return ret;
+
+- for (i = 0; i < kinfo->num_tc; i++) {
++ for (i = 0; i < kinfo->tc_info.num_tc; i++) {
+ u8 sch_mode = hdev->tm_info.tc_info[i].tc_sch_mode;
+
+ ret = hclge_tm_qs_schd_mode_cfg(hdev, vport->qs_offset + i,
+@@ -1428,7 +1425,7 @@ void hclge_tm_prio_tc_info_update(struct hclge_dev *hdev, u8 *prio_tc)
+
+ for (k = 0; k < hdev->num_alloc_vport; k++) {
+ kinfo = &vport[k].nic.kinfo;
+- kinfo->prio_tc[i] = prio_tc[i];
++ kinfo->tc_info.prio_tc[i] = prio_tc[i];
+ }
+ }
+ }
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+index aa026eb5cf5884..15dca78fd736c5 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+@@ -418,19 +418,20 @@ static int hclgevf_knic_setup(struct hclgevf_dev *hdev)
+ struct hnae3_knic_private_info *kinfo;
+ u16 new_tqps = hdev->num_tqps;
+ unsigned int i;
++ u8 num_tc = 0;
+
+ kinfo = &nic->kinfo;
+- kinfo->num_tc = 0;
+ kinfo->num_tx_desc = hdev->num_tx_desc;
+ kinfo->num_rx_desc = hdev->num_rx_desc;
+ kinfo->rx_buf_len = hdev->rx_buf_len;
+ for (i = 0; i < HCLGEVF_MAX_TC_NUM; i++)
+ if (hdev->hw_tc_map & BIT(i))
+- kinfo->num_tc++;
++ num_tc++;
+
+- kinfo->rss_size
+- = min_t(u16, hdev->rss_size_max, new_tqps / kinfo->num_tc);
+- new_tqps = kinfo->rss_size * kinfo->num_tc;
++ num_tc = num_tc ? num_tc : 1;
++ kinfo->tc_info.num_tc = num_tc;
++ kinfo->rss_size = min_t(u16, hdev->rss_size_max, new_tqps / num_tc);
++ new_tqps = kinfo->rss_size * num_tc;
+ kinfo->num_tqps = min(new_tqps, hdev->num_tqps);
+
+ kinfo->tqp = devm_kcalloc(&hdev->pdev->dev, kinfo->num_tqps,
+@@ -448,7 +449,7 @@ static int hclgevf_knic_setup(struct hclgevf_dev *hdev)
+ * and rss size with the actual vector numbers
+ */
+ kinfo->num_tqps = min_t(u16, hdev->num_nic_msix - 1, kinfo->num_tqps);
+- kinfo->rss_size = min_t(u16, kinfo->num_tqps / kinfo->num_tc,
++ kinfo->rss_size = min_t(u16, kinfo->num_tqps / num_tc,
+ kinfo->rss_size);
+
+ return 0;
+@@ -3345,11 +3346,7 @@ static void hclgevf_uninit_ae_dev(struct hnae3_ae_dev *ae_dev)
+
+ static u32 hclgevf_get_max_channels(struct hclgevf_dev *hdev)
+ {
+- struct hnae3_handle *nic = &hdev->nic;
+- struct hnae3_knic_private_info *kinfo = &nic->kinfo;
+-
+- return min_t(u32, hdev->rss_size_max,
+- hdev->num_tqps / kinfo->num_tc);
++ return min(hdev->rss_size_max, hdev->num_tqps);
+ }
+
+ /**
+@@ -3392,7 +3389,7 @@ static void hclgevf_update_rss_size(struct hnae3_handle *handle,
+ kinfo->req_rss_size = new_tqps_num;
+
+ max_rss_size = min_t(u16, hdev->rss_size_max,
+- hdev->num_tqps / kinfo->num_tc);
++ hdev->num_tqps / kinfo->tc_info.num_tc);
+
+ /* Use the user's configuration when it is not larger than
+ * max_rss_size, otherwise, use the maximum specification value.
+@@ -3404,7 +3401,7 @@ static void hclgevf_update_rss_size(struct hnae3_handle *handle,
+ (!kinfo->req_rss_size && kinfo->rss_size < max_rss_size))
+ kinfo->rss_size = max_rss_size;
+
+- kinfo->num_tqps = kinfo->num_tc * kinfo->rss_size;
++ kinfo->num_tqps = kinfo->tc_info.num_tc * kinfo->rss_size;
+ }
+
+ static int hclgevf_set_channels(struct hnae3_handle *handle, u32 new_tqps_num,
+@@ -3450,7 +3447,7 @@ static int hclgevf_set_channels(struct hnae3_handle *handle, u32 new_tqps_num,
+ dev_info(&hdev->pdev->dev,
+ "Channels changed, rss_size from %u to %u, tqps from %u to %u",
+ cur_rss_size, kinfo->rss_size,
+- cur_tqps, kinfo->rss_size * kinfo->num_tc);
++ cur_tqps, kinfo->rss_size * kinfo->tc_info.num_tc);
+
+ return ret;
+ }
+diff --git a/drivers/net/ethernet/intel/e1000e/defines.h b/drivers/net/ethernet/intel/e1000e/defines.h
+index 63c3c79380a1b6..32e6d16b2dcf2d 100644
+--- a/drivers/net/ethernet/intel/e1000e/defines.h
++++ b/drivers/net/ethernet/intel/e1000e/defines.h
+@@ -638,6 +638,9 @@
+ /* For checksumming, the sum of all words in the NVM should equal 0xBABA. */
+ #define NVM_SUM 0xBABA
+
++/* Uninitialized ("empty") checksum word value */
++#define NVM_CHECKSUM_UNINITIALIZED 0xFFFF
++
+ /* PBA (printed board assembly) number words */
+ #define NVM_PBA_OFFSET_0 8
+ #define NVM_PBA_OFFSET_1 9
+diff --git a/drivers/net/ethernet/intel/e1000e/ich8lan.c b/drivers/net/ethernet/intel/e1000e/ich8lan.c
+index 1f51252b465a6f..88a1d47f900501 100644
+--- a/drivers/net/ethernet/intel/e1000e/ich8lan.c
++++ b/drivers/net/ethernet/intel/e1000e/ich8lan.c
+@@ -4144,6 +4144,8 @@ static s32 e1000_validate_nvm_checksum_ich8lan(struct e1000_hw *hw)
+ ret_val = e1000e_update_nvm_checksum(hw);
+ if (ret_val)
+ return ret_val;
++ } else if (hw->mac.type == e1000_pch_tgp) {
++ return 0;
+ }
+ }
+
+diff --git a/drivers/net/ethernet/intel/e1000e/nvm.c b/drivers/net/ethernet/intel/e1000e/nvm.c
+index e609f4df86f455..16369e6d245a4a 100644
+--- a/drivers/net/ethernet/intel/e1000e/nvm.c
++++ b/drivers/net/ethernet/intel/e1000e/nvm.c
+@@ -558,6 +558,12 @@ s32 e1000e_validate_nvm_checksum_generic(struct e1000_hw *hw)
+ checksum += nvm_data;
+ }
+
++ if (hw->mac.type == e1000_pch_tgp &&
++ nvm_data == NVM_CHECKSUM_UNINITIALIZED) {
++ e_dbg("Uninitialized NVM Checksum on TGP platform - ignoring\n");
++ return 0;
++ }
++
+ if (checksum != (u16)NVM_SUM) {
+ e_dbg("NVM Checksum Invalid\n");
+ return -E1000_ERR_NVM;
+diff --git a/drivers/net/ethernet/intel/fm10k/fm10k.h b/drivers/net/ethernet/intel/fm10k/fm10k.h
+index 6119a410883815..65a2816142d962 100644
+--- a/drivers/net/ethernet/intel/fm10k/fm10k.h
++++ b/drivers/net/ethernet/intel/fm10k/fm10k.h
+@@ -189,13 +189,14 @@ struct fm10k_q_vector {
+ struct fm10k_ring_container rx, tx;
+
+ struct napi_struct napi;
++ struct rcu_head rcu; /* to avoid race with update stats on free */
++
+ cpumask_t affinity_mask;
+ char name[IFNAMSIZ + 9];
+
+ #ifdef CONFIG_DEBUG_FS
+ struct dentry *dbg_q_vector;
+ #endif /* CONFIG_DEBUG_FS */
+- struct rcu_head rcu; /* to avoid race with update stats on free */
+
+ /* for dynamic allocation of rings associated with this q_vector */
+ struct fm10k_ring ring[] ____cacheline_internodealigned_in_smp;
+diff --git a/drivers/net/ethernet/intel/i40e/i40e.h b/drivers/net/ethernet/intel/i40e/i40e.h
+index dd630b6bc74bd4..add9a3107d9a0c 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e.h
++++ b/drivers/net/ethernet/intel/i40e/i40e.h
+@@ -863,6 +863,7 @@ struct i40e_q_vector {
+ u16 reg_idx; /* register index of the interrupt */
+
+ struct napi_struct napi;
++ struct rcu_head rcu; /* to avoid race with update stats on free */
+
+ struct i40e_ring_container rx;
+ struct i40e_ring_container tx;
+@@ -873,7 +874,6 @@ struct i40e_q_vector {
+ cpumask_t affinity_mask;
+ struct irq_affinity_notify affinity_notify;
+
+- struct rcu_head rcu; /* to avoid race with update stats on free */
+ char name[I40E_INT_NAME_STR_LEN];
+ bool arm_wb_state;
+ } ____cacheline_internodealigned_in_smp;
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+index 520929f4d535f8..7f8fc9b3b105f2 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+@@ -247,6 +247,7 @@ static const struct i40e_stats i40e_gstrings_net_stats[] = {
+ I40E_NETDEV_STAT(rx_errors),
+ I40E_NETDEV_STAT(tx_errors),
+ I40E_NETDEV_STAT(rx_dropped),
++ I40E_NETDEV_STAT(rx_missed_errors),
+ I40E_NETDEV_STAT(tx_dropped),
+ I40E_NETDEV_STAT(collisions),
+ I40E_NETDEV_STAT(rx_length_errors),
+@@ -317,7 +318,7 @@ static const struct i40e_stats i40e_gstrings_stats[] = {
+ I40E_PF_STAT("port.rx_broadcast", stats.eth.rx_broadcast),
+ I40E_PF_STAT("port.tx_broadcast", stats.eth.tx_broadcast),
+ I40E_PF_STAT("port.tx_errors", stats.eth.tx_errors),
+- I40E_PF_STAT("port.rx_dropped", stats.eth.rx_discards),
++ I40E_PF_STAT("port.rx_discards", stats.eth.rx_discards),
+ I40E_PF_STAT("port.tx_dropped_link_down", stats.tx_dropped_link_down),
+ I40E_PF_STAT("port.rx_crc_errors", stats.crc_errors),
+ I40E_PF_STAT("port.illegal_bytes", stats.illegal_bytes),
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index 35a903f6df2151..aa24d1808c9815 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -492,6 +492,7 @@ static void i40e_get_netdev_stats_struct(struct net_device *netdev,
+ stats->tx_dropped = vsi_stats->tx_dropped;
+ stats->rx_errors = vsi_stats->rx_errors;
+ stats->rx_dropped = vsi_stats->rx_dropped;
++ stats->rx_missed_errors = vsi_stats->rx_missed_errors;
+ stats->rx_crc_errors = vsi_stats->rx_crc_errors;
+ stats->rx_length_errors = vsi_stats->rx_length_errors;
+ }
+@@ -683,17 +684,13 @@ i40e_stats_update_rx_discards(struct i40e_vsi *vsi, struct i40e_hw *hw,
+ struct i40e_eth_stats *stat_offset,
+ struct i40e_eth_stats *stat)
+ {
+- u64 rx_rdpc, rx_rxerr;
+-
+ i40e_stat_update32(hw, I40E_GLV_RDPC(stat_idx), offset_loaded,
+- &stat_offset->rx_discards, &rx_rdpc);
++ &stat_offset->rx_discards, &stat->rx_discards);
+ i40e_stat_update64(hw,
+ I40E_GL_RXERR1H(i40e_compute_pci_to_hw_id(vsi, hw)),
+ I40E_GL_RXERR1L(i40e_compute_pci_to_hw_id(vsi, hw)),
+ offset_loaded, &stat_offset->rx_discards_other,
+- &rx_rxerr);
+-
+- stat->rx_discards = rx_rdpc + rx_rxerr;
++ &stat->rx_discards_other);
+ }
+
+ /**
+@@ -715,9 +712,6 @@ void i40e_update_eth_stats(struct i40e_vsi *vsi)
+ i40e_stat_update32(hw, I40E_GLV_TEPC(stat_idx),
+ vsi->stat_offsets_loaded,
+ &oes->tx_errors, &es->tx_errors);
+- i40e_stat_update32(hw, I40E_GLV_RDPC(stat_idx),
+- vsi->stat_offsets_loaded,
+- &oes->rx_discards, &es->rx_discards);
+ i40e_stat_update32(hw, I40E_GLV_RUPP(stat_idx),
+ vsi->stat_offsets_loaded,
+ &oes->rx_unknown_protocol, &es->rx_unknown_protocol);
+@@ -958,8 +952,10 @@ static void i40e_update_vsi_stats(struct i40e_vsi *vsi)
+ ns->tx_errors = es->tx_errors;
+ ons->multicast = oes->rx_multicast;
+ ns->multicast = es->rx_multicast;
+- ons->rx_dropped = oes->rx_discards;
+- ns->rx_dropped = es->rx_discards;
++ ons->rx_dropped = oes->rx_discards_other;
++ ns->rx_dropped = es->rx_discards_other;
++ ons->rx_missed_errors = oes->rx_discards;
++ ns->rx_missed_errors = es->rx_discards;
+ ons->tx_dropped = oes->tx_discards;
+ ns->tx_dropped = es->tx_discards;
+
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+index 852ece241a2780..c86c429e9a3a32 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+@@ -4835,8 +4835,8 @@ int i40e_get_vf_stats(struct net_device *netdev, int vf_id,
+ vf_stats->tx_bytes = stats->tx_bytes;
+ vf_stats->broadcast = stats->rx_broadcast;
+ vf_stats->multicast = stats->rx_multicast;
+- vf_stats->rx_dropped = stats->rx_discards;
+- vf_stats->tx_dropped = stats->tx_discards;
++ vf_stats->rx_dropped = stats->rx_discards + stats->rx_discards_other;
++ vf_stats->tx_dropped = stats->tx_errors;
+
+ return 0;
+ }
+diff --git a/drivers/net/ethernet/intel/ice/ice_flex_pipe.c b/drivers/net/ethernet/intel/ice/ice_flex_pipe.c
+index a81be917f6538f..da2906720c63e5 100644
+--- a/drivers/net/ethernet/intel/ice/ice_flex_pipe.c
++++ b/drivers/net/ethernet/intel/ice/ice_flex_pipe.c
+@@ -1449,6 +1449,8 @@ enum ice_status ice_copy_and_init_pkg(struct ice_hw *hw, const u8 *buf, u32 len)
+ return ICE_ERR_PARAM;
+
+ buf_copy = devm_kmemdup(ice_hw_to_dev(hw), buf, len, GFP_KERNEL);
++ if (!buf_copy)
++ return ICE_ERR_NO_MEMORY;
+
+ status = ice_init_pkg(hw, buf_copy, len);
+ if (status) {
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe.h b/drivers/net/ethernet/intel/ixgbe/ixgbe.h
+index 18251edbfabfb1..3ea7095fc04f5f 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe.h
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe.h
+@@ -457,9 +457,10 @@ struct ixgbe_q_vector {
+ struct ixgbe_ring_container rx, tx;
+
+ struct napi_struct napi;
++ struct rcu_head rcu; /* to avoid race with update stats on free */
++
+ cpumask_t affinity_mask;
+ int numa_node;
+- struct rcu_head rcu; /* to avoid race with update stats on free */
+ char name[IFNAMSIZ + 9];
+
+ /* for dynamic allocation of rings associated with this q_vector */
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
+index ca1a428b278e0d..54351d6742d070 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
+@@ -390,7 +390,7 @@ static bool ixgbe_xmit_zc(struct ixgbe_ring *xdp_ring, unsigned int budget)
+ dma_addr_t dma;
+ u32 cmd_type;
+
+- while (budget-- > 0) {
++ while (likely(budget)) {
+ if (unlikely(!ixgbe_desc_unused(xdp_ring))) {
+ work_done = false;
+ break;
+@@ -425,6 +425,8 @@ static bool ixgbe_xmit_zc(struct ixgbe_ring *xdp_ring, unsigned int budget)
+ xdp_ring->next_to_use++;
+ if (xdp_ring->next_to_use == xdp_ring->count)
+ xdp_ring->next_to_use = 0;
++
++ budget--;
+ }
+
+ if (tx_desc) {
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
+index 4110e15c22c799..8ab7e591b66af9 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
+@@ -2222,6 +2222,8 @@ static const struct mlxsw_listener mlxsw_sp_listener[] = {
+ ROUTER_EXP, false),
+ MLXSW_SP_RXL_NO_MARK(DISCARD_ING_ROUTER_DIP_LINK_LOCAL, FORWARD,
+ ROUTER_EXP, false),
++ MLXSW_SP_RXL_NO_MARK(DISCARD_ING_ROUTER_SIP_LINK_LOCAL, FORWARD,
++ ROUTER_EXP, false),
+ /* Multicast Router Traps */
+ MLXSW_SP_RXL_MARK(ACL1, TRAP_TO_CPU, MULTICAST, false),
+ MLXSW_SP_RXL_L3_MARK(ACL2, TRAP_TO_CPU, MULTICAST, false),
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/trap.h b/drivers/net/ethernet/mellanox/mlxsw/trap.h
+index 57f9e24602d04c..93ca6f90f3203b 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/trap.h
++++ b/drivers/net/ethernet/mellanox/mlxsw/trap.h
+@@ -92,6 +92,7 @@ enum {
+ MLXSW_TRAP_ID_DISCARD_ING_ROUTER_IPV4_SIP_BC = 0x16A,
+ MLXSW_TRAP_ID_DISCARD_ING_ROUTER_IPV4_DIP_LOCAL_NET = 0x16B,
+ MLXSW_TRAP_ID_DISCARD_ING_ROUTER_DIP_LINK_LOCAL = 0x16C,
++ MLXSW_TRAP_ID_DISCARD_ING_ROUTER_SIP_LINK_LOCAL = 0x16D,
+ MLXSW_TRAP_ID_DISCARD_ROUTER_IRIF_EN = 0x178,
+ MLXSW_TRAP_ID_DISCARD_ROUTER_ERIF_EN = 0x179,
+ MLXSW_TRAP_ID_DISCARD_ROUTER_LPM4 = 0x17B,
+diff --git a/drivers/net/ethernet/xilinx/xilinx_emaclite.c b/drivers/net/ethernet/xilinx/xilinx_emaclite.c
+index 02b95afe250660..c8bd4880b609d6 100644
+--- a/drivers/net/ethernet/xilinx/xilinx_emaclite.c
++++ b/drivers/net/ethernet/xilinx/xilinx_emaclite.c
+@@ -293,7 +293,7 @@ static void xemaclite_aligned_read(u32 *src_ptr, u8 *dest_ptr,
+
+ /* Read the remaining data */
+ for (; length > 0; length--)
+- *to_u8_ptr = *from_u8_ptr;
++ *to_u8_ptr++ = *from_u8_ptr++;
+ }
+ }
+
+diff --git a/drivers/net/hyperv/hyperv_net.h b/drivers/net/hyperv/hyperv_net.h
+index 15652d7951f9e7..4ee5ee2eb85290 100644
+--- a/drivers/net/hyperv/hyperv_net.h
++++ b/drivers/net/hyperv/hyperv_net.h
+@@ -985,6 +985,7 @@ struct net_device_context {
+ struct net_device __rcu *vf_netdev;
+ struct netvsc_vf_pcpu_stats __percpu *vf_stats;
+ struct delayed_work vf_takeover;
++ struct delayed_work vfns_work;
+
+ /* 1: allocated, serial number is valid. 0: not allocated */
+ u32 vf_alloc;
+@@ -999,6 +1000,8 @@ struct net_device_context {
+ struct netvsc_device_info *saved_netvsc_dev_info;
+ };
+
++void netvsc_vfns_work(struct work_struct *w);
++
+ /* Per channel data */
+ struct netvsc_channel {
+ struct vmbus_channel *channel;
+diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c
+index 9ae4f88ab45546..f9f10800968b50 100644
+--- a/drivers/net/hyperv/netvsc_drv.c
++++ b/drivers/net/hyperv/netvsc_drv.c
+@@ -2541,6 +2541,7 @@ static int netvsc_probe(struct hv_device *dev,
+ spin_lock_init(&net_device_ctx->lock);
+ INIT_LIST_HEAD(&net_device_ctx->reconfig_events);
+ INIT_DELAYED_WORK(&net_device_ctx->vf_takeover, netvsc_vf_setup);
++ INIT_DELAYED_WORK(&net_device_ctx->vfns_work, netvsc_vfns_work);
+
+ net_device_ctx->vf_stats
+ = netdev_alloc_pcpu_stats(struct netvsc_vf_pcpu_stats);
+@@ -2679,6 +2680,8 @@ static int netvsc_remove(struct hv_device *dev)
+ cancel_delayed_work_sync(&ndev_ctx->dwork);
+
+ rtnl_lock();
++ cancel_delayed_work_sync(&ndev_ctx->vfns_work);
++
+ nvdev = rtnl_dereference(ndev_ctx->nvdev);
+ if (nvdev) {
+ cancel_work_sync(&nvdev->subchan_work);
+@@ -2721,6 +2724,7 @@ static int netvsc_suspend(struct hv_device *dev)
+ cancel_delayed_work_sync(&ndev_ctx->dwork);
+
+ rtnl_lock();
++ cancel_delayed_work_sync(&ndev_ctx->vfns_work);
+
+ nvdev = rtnl_dereference(ndev_ctx->nvdev);
+ if (nvdev == NULL) {
+@@ -2814,6 +2818,27 @@ static void netvsc_event_set_vf_ns(struct net_device *ndev)
+ }
+ }
+
++void netvsc_vfns_work(struct work_struct *w)
++{
++ struct net_device_context *ndev_ctx =
++ container_of(w, struct net_device_context, vfns_work.work);
++ struct net_device *ndev;
++
++ if (!rtnl_trylock()) {
++ schedule_delayed_work(&ndev_ctx->vfns_work, 1);
++ return;
++ }
++
++ ndev = hv_get_drvdata(ndev_ctx->device_ctx);
++ if (!ndev)
++ goto out;
++
++ netvsc_event_set_vf_ns(ndev);
++
++out:
++ rtnl_unlock();
++}
++
+ /*
+ * On Hyper-V, every VF interface is matched with a corresponding
+ * synthetic interface. The synthetic interface is presented first
+@@ -2824,10 +2849,12 @@ static int netvsc_netdev_event(struct notifier_block *this,
+ unsigned long event, void *ptr)
+ {
+ struct net_device *event_dev = netdev_notifier_info_to_dev(ptr);
++ struct net_device_context *ndev_ctx;
+ int ret = 0;
+
+ if (event_dev->netdev_ops == &device_ops && event == NETDEV_REGISTER) {
+- netvsc_event_set_vf_ns(event_dev);
++ ndev_ctx = netdev_priv(event_dev);
++ schedule_delayed_work(&ndev_ctx->vfns_work, 0);
+ return NOTIFY_DONE;
+ }
+
+diff --git a/drivers/net/phy/mscc/mscc_ptp.c b/drivers/net/phy/mscc/mscc_ptp.c
+index 85102e895665e5..4a00b82d3ca5c2 100644
+--- a/drivers/net/phy/mscc/mscc_ptp.c
++++ b/drivers/net/phy/mscc/mscc_ptp.c
+@@ -897,6 +897,7 @@ static int vsc85xx_eth1_conf(struct phy_device *phydev, enum ts_blk blk,
+ get_unaligned_be32(ptp_multicast));
+ } else {
+ val |= ANA_ETH1_FLOW_ADDR_MATCH2_ANY_MULTICAST;
++ val |= ANA_ETH1_FLOW_ADDR_MATCH2_ANY_UNICAST;
+ vsc85xx_ts_write_csr(phydev, blk,
+ MSCC_ANA_ETH1_FLOW_ADDR_MATCH2(0), val);
+ vsc85xx_ts_write_csr(phydev, blk,
+diff --git a/drivers/net/phy/mscc/mscc_ptp.h b/drivers/net/phy/mscc/mscc_ptp.h
+index 3ea163af0f4f4b..7e3809e4aa5f52 100644
+--- a/drivers/net/phy/mscc/mscc_ptp.h
++++ b/drivers/net/phy/mscc/mscc_ptp.h
+@@ -98,6 +98,7 @@
+ #define MSCC_ANA_ETH1_FLOW_ADDR_MATCH2(x) (MSCC_ANA_ETH1_FLOW_ENA(x) + 3)
+ #define ANA_ETH1_FLOW_ADDR_MATCH2_MASK_MASK GENMASK(22, 20)
+ #define ANA_ETH1_FLOW_ADDR_MATCH2_ANY_MULTICAST 0x400000
++#define ANA_ETH1_FLOW_ADDR_MATCH2_ANY_UNICAST 0x200000
+ #define ANA_ETH1_FLOW_ADDR_MATCH2_FULL_ADDR 0x100000
+ #define ANA_ETH1_FLOW_ADDR_MATCH2_SRC_DEST_MASK GENMASK(17, 16)
+ #define ANA_ETH1_FLOW_ADDR_MATCH2_SRC_DEST 0x020000
+diff --git a/drivers/net/phy/smsc.c b/drivers/net/phy/smsc.c
+index d860a2626b13be..c799c650576779 100644
+--- a/drivers/net/phy/smsc.c
++++ b/drivers/net/phy/smsc.c
+@@ -427,6 +427,7 @@ static struct phy_driver smsc_phy_driver[] = {
+
+ /* PHY_BASIC_FEATURES */
+
++ .flags = PHY_RST_AFTER_CLK_EN,
+ .probe = smsc_phy_probe,
+ .remove = smsc_phy_remove,
+
+diff --git a/drivers/net/ppp/pptp.c b/drivers/net/ppp/pptp.c
+index 05a75b5a8b6803..3c1e7155e2bf75 100644
+--- a/drivers/net/ppp/pptp.c
++++ b/drivers/net/ppp/pptp.c
+@@ -159,19 +159,17 @@ static int pptp_xmit(struct ppp_channel *chan, struct sk_buff *skb)
+ int len;
+ unsigned char *data;
+ __u32 seq_recv;
+-
+-
+ struct rtable *rt;
+ struct net_device *tdev;
+ struct iphdr *iph;
+ int max_headroom;
+
+ if (sk_pppox(po)->sk_state & PPPOX_DEAD)
+- goto tx_error;
++ goto tx_drop;
+
+ rt = pptp_route_output(po, &fl4);
+ if (IS_ERR(rt))
+- goto tx_error;
++ goto tx_drop;
+
+ tdev = rt->dst.dev;
+
+@@ -179,16 +177,20 @@ static int pptp_xmit(struct ppp_channel *chan, struct sk_buff *skb)
+
+ if (skb_headroom(skb) < max_headroom || skb_cloned(skb) || skb_shared(skb)) {
+ struct sk_buff *new_skb = skb_realloc_headroom(skb, max_headroom);
+- if (!new_skb) {
+- ip_rt_put(rt);
++
++ if (!new_skb)
+ goto tx_error;
+- }
++
+ if (skb->sk)
+ skb_set_owner_w(new_skb, skb->sk);
+ consume_skb(skb);
+ skb = new_skb;
+ }
+
++ /* Ensure we can safely access protocol field and LCP code */
++ if (!pskb_may_pull(skb, 3))
++ goto tx_error;
++
+ data = skb->data;
+ islcp = ((data[0] << 8) + data[1]) == PPP_LCP && 1 <= data[2] && data[2] <= 7;
+
+@@ -262,6 +264,8 @@ static int pptp_xmit(struct ppp_channel *chan, struct sk_buff *skb)
+ return 1;
+
+ tx_error:
++ ip_rt_put(rt);
++tx_drop:
+ kfree_skb(skb);
+ return 1;
+ }
+diff --git a/drivers/net/usb/sierra_net.c b/drivers/net/usb/sierra_net.c
+index 777f672f288cbf..cfc519bc454515 100644
+--- a/drivers/net/usb/sierra_net.c
++++ b/drivers/net/usb/sierra_net.c
+@@ -689,6 +689,10 @@ static int sierra_net_bind(struct usbnet *dev, struct usb_interface *intf)
+ status);
+ return -ENODEV;
+ }
++ if (!dev->status) {
++ dev_err(&dev->udev->dev, "No status endpoint found");
++ return -ENODEV;
++ }
+ /* Initialize sierra private data */
+ priv = kzalloc(sizeof *priv, GFP_KERNEL);
+ if (!priv)
+diff --git a/drivers/net/usb/usbnet.c b/drivers/net/usb/usbnet.c
+index ad425e09c75f51..ac439f9ccfd465 100644
+--- a/drivers/net/usb/usbnet.c
++++ b/drivers/net/usb/usbnet.c
+@@ -1071,6 +1071,9 @@ static void __handle_link_change(struct usbnet *dev)
+ if (!test_bit(EVENT_DEV_OPEN, &dev->flags))
+ return;
+
++ if (test_and_clear_bit(EVENT_LINK_CARRIER_ON, &dev->flags))
++ netif_carrier_on(dev->net);
++
+ if (!netif_carrier_ok(dev->net)) {
+ /* kill URBs for reading packets to save bus bandwidth */
+ unlink_urbs(dev, &dev->rxq);
+@@ -1960,10 +1963,12 @@ EXPORT_SYMBOL(usbnet_manage_power);
+ void usbnet_link_change(struct usbnet *dev, bool link, bool need_reset)
+ {
+ /* update link after link is reseted */
+- if (link && !need_reset)
+- netif_carrier_on(dev->net);
+- else
++ if (link && !need_reset) {
++ set_bit(EVENT_LINK_CARRIER_ON, &dev->flags);
++ } else {
++ clear_bit(EVENT_LINK_CARRIER_ON, &dev->flags);
+ netif_carrier_off(dev->net);
++ }
+
+ if (need_reset && link)
+ usbnet_defer_kevent(dev, EVENT_LINK_RESET);
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index 99dea89b267888..5698683779eed6 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -394,6 +394,26 @@ static unsigned int mergeable_ctx_to_truesize(void *mrg_ctx)
+ return (unsigned long)mrg_ctx & ((1 << MRG_CTX_HEADER_SHIFT) - 1);
+ }
+
++static int check_mergeable_len(struct net_device *dev, void *mrg_ctx,
++ unsigned int len)
++{
++ unsigned int headroom, tailroom, room, truesize;
++
++ truesize = mergeable_ctx_to_truesize(mrg_ctx);
++ headroom = mergeable_ctx_to_headroom(mrg_ctx);
++ tailroom = headroom ? sizeof(struct skb_shared_info) : 0;
++ room = SKB_DATA_ALIGN(headroom + tailroom);
++
++ if (len > truesize - room) {
++ pr_debug("%s: rx error: len %u exceeds truesize %lu\n",
++ dev->name, len, (unsigned long)(truesize - room));
++ dev->stats.rx_length_errors++;
++ return -1;
++ }
++
++ return 0;
++}
++
+ /* Called from bottom half context */
+ static struct sk_buff *page_to_skb(struct virtnet_info *vi,
+ struct receive_queue *rq,
+@@ -639,7 +659,8 @@ static unsigned int virtnet_get_headroom(struct virtnet_info *vi)
+ * across multiple buffers (num_buf > 1), and we make sure buffers
+ * have enough headroom.
+ */
+-static struct page *xdp_linearize_page(struct receive_queue *rq,
++static struct page *xdp_linearize_page(struct net_device *dev,
++ struct receive_queue *rq,
+ u16 *num_buf,
+ struct page *p,
+ int offset,
+@@ -659,18 +680,27 @@ static struct page *xdp_linearize_page(struct receive_queue *rq,
+ memcpy(page_address(page) + page_off, page_address(p) + offset, *len);
+ page_off += *len;
+
++ /* Only mergeable mode can go inside this while loop. In small mode,
++ * *num_buf == 1, so it cannot go inside.
++ */
+ while (--*num_buf) {
+ unsigned int buflen;
+ void *buf;
++ void *ctx;
+ int off;
+
+- buf = virtqueue_get_buf(rq->vq, &buflen);
++ buf = virtqueue_get_buf_ctx(rq->vq, &buflen, &ctx);
+ if (unlikely(!buf))
+ goto err_buf;
+
+ p = virt_to_head_page(buf);
+ off = buf - page_address(p);
+
++ if (check_mergeable_len(dev, ctx, buflen)) {
++ put_page(p);
++ goto err_buf;
++ }
++
+ /* guard against a misconfigured or uncooperative backend that
+ * is sending packet larger than the MTU.
+ */
+@@ -745,7 +775,7 @@ static struct sk_buff *receive_small(struct net_device *dev,
+ headroom = vi->hdr_len + header_offset;
+ buflen = SKB_DATA_ALIGN(GOOD_PACKET_LEN + headroom) +
+ SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
+- xdp_page = xdp_linearize_page(rq, &num_buf, page,
++ xdp_page = xdp_linearize_page(dev, rq, &num_buf, page,
+ offset, header_offset,
+ &tlen);
+ if (!xdp_page)
+@@ -916,7 +946,7 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
+ if (unlikely(num_buf > 1 ||
+ headroom < virtnet_get_headroom(vi))) {
+ /* linearize data for XDP */
+- xdp_page = xdp_linearize_page(rq, &num_buf,
++ xdp_page = xdp_linearize_page(dev, rq, &num_buf,
+ page, offset,
+ VIRTIO_XDP_HEADROOM,
+ &len);
+diff --git a/drivers/net/vrf.c b/drivers/net/vrf.c
+index c801185ade2b1e..b43e8041fda349 100644
+--- a/drivers/net/vrf.c
++++ b/drivers/net/vrf.c
+@@ -1316,6 +1316,8 @@ static void vrf_ip6_input_dst(struct sk_buff *skb, struct net_device *vrf_dev,
+ struct net *net = dev_net(vrf_dev);
+ struct rt6_info *rt6;
+
++ skb_dst_drop(skb);
++
+ rt6 = vrf_ip6_route_lookup(net, vrf_dev, &fl6, ifindex, skb,
+ RT6_LOOKUP_F_HAS_SADDR | RT6_LOOKUP_F_IFACE);
+ if (unlikely(!rt6))
+diff --git a/drivers/net/wireless/ath/ath11k/hal.c b/drivers/net/wireless/ath/ath11k/hal.c
+index f3b9108ab6bd07..d5921805af636c 100644
+--- a/drivers/net/wireless/ath/ath11k/hal.c
++++ b/drivers/net/wireless/ath/ath11k/hal.c
+@@ -833,7 +833,6 @@ void ath11k_hal_srng_access_end(struct ath11k_base *ab, struct hal_srng *srng)
+ {
+ lockdep_assert_held(&srng->lock);
+
+- /* TODO: See if we need a write memory barrier here */
+ if (srng->flags & HAL_SRNG_FLAGS_LMAC_RING) {
+ /* For LMAC rings, ring pointer updates are done through FW and
+ * hence written to a shared memory location that is read by FW
+@@ -841,21 +840,37 @@ void ath11k_hal_srng_access_end(struct ath11k_base *ab, struct hal_srng *srng)
+ if (srng->ring_dir == HAL_SRNG_DIR_SRC) {
+ srng->u.src_ring.last_tp =
+ *(volatile u32 *)srng->u.src_ring.tp_addr;
+- *srng->u.src_ring.hp_addr = srng->u.src_ring.hp;
++ /* Make sure descriptor is written before updating the
++ * head pointer.
++ */
++ dma_wmb();
++ WRITE_ONCE(*srng->u.src_ring.hp_addr, srng->u.src_ring.hp);
+ } else {
+ srng->u.dst_ring.last_hp = *srng->u.dst_ring.hp_addr;
+- *srng->u.dst_ring.tp_addr = srng->u.dst_ring.tp;
++ /* Make sure descriptor is read before updating the
++ * tail pointer.
++ */
++ dma_mb();
++ WRITE_ONCE(*srng->u.dst_ring.tp_addr, srng->u.dst_ring.tp);
+ }
+ } else {
+ if (srng->ring_dir == HAL_SRNG_DIR_SRC) {
+ srng->u.src_ring.last_tp =
+ *(volatile u32 *)srng->u.src_ring.tp_addr;
++ /* Assume implementation use an MMIO write accessor
++ * which has the required wmb() so that the descriptor
++ * is written before the updating the head pointer.
++ */
+ ath11k_hif_write32(ab,
+ (unsigned long)srng->u.src_ring.hp_addr -
+ (unsigned long)ab->mem,
+ srng->u.src_ring.hp);
+ } else {
+ srng->u.dst_ring.last_hp = *srng->u.dst_ring.hp_addr;
++ /* Make sure descriptor is read before updating the
++ * tail pointer.
++ */
++ mb();
+ ath11k_hif_write32(ab,
+ (unsigned long)srng->u.dst_ring.tp_addr -
+ (unsigned long)ab->mem,
+@@ -1290,6 +1305,10 @@ EXPORT_SYMBOL(ath11k_hal_srng_init);
+ void ath11k_hal_srng_deinit(struct ath11k_base *ab)
+ {
+ struct ath11k_hal *hal = &ab->hal;
++ int i;
++
++ for (i = 0; i < HAL_SRNG_RING_ID_MAX; i++)
++ ab->hal.srng_list[i].initialized = 0;
+
+ ath11k_hal_unregister_srng_key(ab);
+ ath11k_hal_free_cont_rdp(ab);
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+index fbb5e29530e3dd..af06f31db0e280 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+@@ -1199,10 +1199,6 @@ brcmf_cfg80211_scan(struct wiphy *wiphy, struct cfg80211_scan_request *request)
+ return -EAGAIN;
+ }
+
+- /* If scan req comes for p2p0, send it over primary I/F */
+- if (vif == cfg->p2p.bss_idx[P2PAPI_BSSCFG_DEVICE].vif)
+- vif = cfg->p2p.bss_idx[P2PAPI_BSSCFG_PRIMARY].vif;
+-
+ brcmf_dbg(SCAN, "START ESCAN\n");
+
+ cfg->scan_request = request;
+@@ -1218,6 +1214,10 @@ brcmf_cfg80211_scan(struct wiphy *wiphy, struct cfg80211_scan_request *request)
+ if (err)
+ goto scan_out;
+
++ /* If scan req comes for p2p0, send it over primary I/F */
++ if (vif == cfg->p2p.bss_idx[P2PAPI_BSSCFG_DEVICE].vif)
++ vif = cfg->p2p.bss_idx[P2PAPI_BSSCFG_PRIMARY].vif;
++
+ err = brcmf_do_escan(vif->ifp, request);
+ if (err)
+ goto scan_out;
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_lcn.c b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_lcn.c
+index 47c0e8e429e544..3064e603e7e3e0 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_lcn.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_lcn.c
+@@ -919,7 +919,7 @@ void wlc_lcnphy_read_table(struct brcms_phy *pi, struct phytbl_info *pti)
+
+ static void
+ wlc_lcnphy_common_read_table(struct brcms_phy *pi, u32 tbl_id,
+- const u16 *tbl_ptr, u32 tbl_len,
++ u16 *tbl_ptr, u32 tbl_len,
+ u32 tbl_width, u32 tbl_offset)
+ {
+ struct phytbl_info tab;
+diff --git a/drivers/net/wireless/intel/iwlegacy/4965-mac.c b/drivers/net/wireless/intel/iwlegacy/4965-mac.c
+index 2549902552e1dc..6e5decf79a06b4 100644
+--- a/drivers/net/wireless/intel/iwlegacy/4965-mac.c
++++ b/drivers/net/wireless/intel/iwlegacy/4965-mac.c
+@@ -1574,8 +1574,11 @@ il4965_tx_cmd_build_rate(struct il_priv *il,
+ || rate_idx > RATE_COUNT_LEGACY)
+ rate_idx = rate_lowest_index(&il->bands[info->band], sta);
+ /* For 5 GHZ band, remap mac80211 rate indices into driver indices */
+- if (info->band == NL80211_BAND_5GHZ)
++ if (info->band == NL80211_BAND_5GHZ) {
+ rate_idx += IL_FIRST_OFDM_RATE;
++ if (rate_idx > IL_LAST_OFDM_RATE)
++ rate_idx = IL_LAST_OFDM_RATE;
++ }
+ /* Get PLCP rate for tx_cmd->rate_n_flags */
+ rate_plcp = il_rates[rate_idx].plcp;
+ /* Zero out flags for this packet */
+diff --git a/drivers/net/wireless/intel/iwlwifi/dvm/main.c b/drivers/net/wireless/intel/iwlwifi/dvm/main.c
+index 6a19fc4c68604e..54fef25a11a167 100644
+--- a/drivers/net/wireless/intel/iwlwifi/dvm/main.c
++++ b/drivers/net/wireless/intel/iwlwifi/dvm/main.c
+@@ -1054,9 +1054,11 @@ static void iwl_bg_restart(struct work_struct *data)
+ *
+ *****************************************************************************/
+
+-static void iwl_setup_deferred_work(struct iwl_priv *priv)
++static int iwl_setup_deferred_work(struct iwl_priv *priv)
+ {
+ priv->workqueue = alloc_ordered_workqueue(DRV_NAME, 0);
++ if (!priv->workqueue)
++ return -ENOMEM;
+
+ INIT_WORK(&priv->restart, iwl_bg_restart);
+ INIT_WORK(&priv->beacon_update, iwl_bg_beacon_update);
+@@ -1073,6 +1075,8 @@ static void iwl_setup_deferred_work(struct iwl_priv *priv)
+ timer_setup(&priv->statistics_periodic, iwl_bg_statistics_periodic, 0);
+
+ timer_setup(&priv->ucode_trace, iwl_bg_ucode_trace, 0);
++
++ return 0;
+ }
+
+ void iwl_cancel_deferred_work(struct iwl_priv *priv)
+@@ -1462,7 +1466,9 @@ static struct iwl_op_mode *iwl_op_mode_dvm_start(struct iwl_trans *trans,
+ /********************
+ * 6. Setup services
+ ********************/
+- iwl_setup_deferred_work(priv);
++ if (iwl_setup_deferred_work(priv))
++ goto out_uninit_drv;
++
+ iwl_setup_rx_handlers(priv);
+
+ iwl_power_initialize(priv);
+@@ -1500,6 +1506,7 @@ static struct iwl_op_mode *iwl_op_mode_dvm_start(struct iwl_trans *trans,
+ iwl_cancel_deferred_work(priv);
+ destroy_workqueue(priv->workqueue);
+ priv->workqueue = NULL;
++out_uninit_drv:
+ iwl_uninit_drv(priv);
+ out_free_eeprom_blob:
+ kfree(priv->eeprom_blob);
+diff --git a/drivers/net/wireless/intel/iwlwifi/dvm/rs.c b/drivers/net/wireless/intel/iwlwifi/dvm/rs.c
+index 958bfc38d39006..f44448a13172ad 100644
+--- a/drivers/net/wireless/intel/iwlwifi/dvm/rs.c
++++ b/drivers/net/wireless/intel/iwlwifi/dvm/rs.c
+@@ -2926,7 +2926,7 @@ static void rs_fill_link_cmd(struct iwl_priv *priv,
+ /* Repeat initial/next rate.
+ * For legacy IWL_NUMBER_TRY == 1, this loop will not execute.
+ * For HT IWL_HT_NUMBER_TRY == 3, this executes twice. */
+- while (repeat_rate > 0 && (index < LINK_QUAL_MAX_RETRY_NUM)) {
++ while (repeat_rate > 0 && index < (LINK_QUAL_MAX_RETRY_NUM - 1)) {
+ if (is_legacy(tbl_type.lq_type)) {
+ if (ant_toggle_cnt < NUM_TRY_BEFORE_ANT_TOGGLE)
+ ant_toggle_cnt++;
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/dbg.c b/drivers/net/wireless/intel/iwlwifi/fw/dbg.c
+index 558caf78a56da7..37c1158b9225d8 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/dbg.c
++++ b/drivers/net/wireless/intel/iwlwifi/fw/dbg.c
+@@ -2507,6 +2507,7 @@ int iwl_fw_dbg_collect(struct iwl_fw_runtime *fwrt,
+ struct iwl_fw_dump_desc *desc;
+ unsigned int delay = 0;
+ bool monitor_only = false;
++ int ret;
+
+ if (trigger) {
+ u16 occurrences = le16_to_cpu(trigger->occurrences) - 1;
+@@ -2537,7 +2538,11 @@ int iwl_fw_dbg_collect(struct iwl_fw_runtime *fwrt,
+ desc->trig_desc.type = cpu_to_le32(trig);
+ memcpy(desc->trig_desc.data, str, len);
+
+- return iwl_fw_dbg_collect_desc(fwrt, desc, monitor_only, delay);
++ ret = iwl_fw_dbg_collect_desc(fwrt, desc, monitor_only, delay);
++ if (ret)
++ kfree(desc);
++
++ return ret;
+ }
+ IWL_EXPORT_SYMBOL(iwl_fw_dbg_collect);
+
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
+index 9b1a1455a7d517..1f14636d6a3a8e 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
+@@ -116,8 +116,10 @@ static int __init iwl_mvm_init(void)
+ }
+
+ ret = iwl_opmode_register("iwlmvm", &iwl_mvm_ops);
+- if (ret)
++ if (ret) {
+ pr_err("Unable to register MVM op_mode: %d\n", ret);
++ iwl_mvm_rate_control_unregister();
++ }
+
+ return ret;
+ }
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/scan.c b/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
+index a52af491eed584..6e6325717c0ae0 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
+@@ -876,7 +876,7 @@ static inline bool iwl_mvm_scan_fits(struct iwl_mvm *mvm, int n_ssids,
+ int n_channels)
+ {
+ return ((n_ssids <= PROBE_OPTION_MAX) &&
+- (n_channels <= mvm->fw->ucode_capa.n_scan_channels) &
++ (n_channels <= mvm->fw->ucode_capa.n_scan_channels) &&
+ (ies->common_ie_len +
+ ies->len[NL80211_BAND_2GHZ] +
+ ies->len[NL80211_BAND_5GHZ] <=
+diff --git a/drivers/net/wireless/marvell/mwl8k.c b/drivers/net/wireless/marvell/mwl8k.c
+index dd72e9f8b4079f..194087e6a7649d 100644
+--- a/drivers/net/wireless/marvell/mwl8k.c
++++ b/drivers/net/wireless/marvell/mwl8k.c
+@@ -1220,6 +1220,10 @@ static int rxq_refill(struct ieee80211_hw *hw, int index, int limit)
+
+ addr = pci_map_single(priv->pdev, skb->data,
+ MWL8K_RX_MAXSZ, DMA_FROM_DEVICE);
++ if (dma_mapping_error(&priv->pdev->dev, addr)) {
++ kfree_skb(skb);
++ break;
++ }
+
+ rxq->rxd_count++;
+ rx = rxq->tail++;
+diff --git a/drivers/net/wireless/realtek/rtl818x/rtl8187/dev.c b/drivers/net/wireless/realtek/rtl818x/rtl8187/dev.c
+index eb68b2d3caa121..c9df185dc3f4f0 100644
+--- a/drivers/net/wireless/realtek/rtl818x/rtl8187/dev.c
++++ b/drivers/net/wireless/realtek/rtl818x/rtl8187/dev.c
+@@ -1041,10 +1041,11 @@ static void rtl8187_stop(struct ieee80211_hw *dev)
+ rtl818x_iowrite8(priv, &priv->map->CONFIG4, reg | RTL818X_CONFIG4_VCOOFF);
+ rtl818x_iowrite8(priv, &priv->map->EEPROM_CMD, RTL818X_EEPROM_CMD_NORMAL);
+
++ usb_kill_anchored_urbs(&priv->anchored);
++
+ while ((skb = skb_dequeue(&priv->b_tx_status.queue)))
+ dev_kfree_skb_any(skb);
+
+- usb_kill_anchored_urbs(&priv->anchored);
+ mutex_unlock(&priv->conf_mutex);
+
+ if (!priv->is_rtl8187b)
+diff --git a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
+index 5b27c22e7e5816..7cf2693619c92b 100644
+--- a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
++++ b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
+@@ -5794,7 +5794,7 @@ static int rtl8xxxu_submit_rx_urb(struct rtl8xxxu_priv *priv,
+ skb_size = fops->rx_agg_buf_size;
+ skb_size += (rx_desc_sz + sizeof(struct rtl8723au_phy_stats));
+ } else {
+- skb_size = IEEE80211_MAX_FRAME_LEN;
++ skb_size = IEEE80211_MAX_FRAME_LEN + rx_desc_sz;
+ }
+
+ skb = __netdev_alloc_skb(NULL, skb_size, GFP_KERNEL);
+diff --git a/drivers/net/wireless/realtek/rtlwifi/pci.c b/drivers/net/wireless/realtek/rtlwifi/pci.c
+index f024533d34a94a..02821588673e53 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/pci.c
++++ b/drivers/net/wireless/realtek/rtlwifi/pci.c
+@@ -573,8 +573,11 @@ static int _rtl_pci_init_one_rxdesc(struct ieee80211_hw *hw,
+ dma_map_single(&rtlpci->pdev->dev, skb_tail_pointer(skb),
+ rtlpci->rxbuffersize, DMA_FROM_DEVICE);
+ bufferaddress = *((dma_addr_t *)skb->cb);
+- if (dma_mapping_error(&rtlpci->pdev->dev, bufferaddress))
++ if (dma_mapping_error(&rtlpci->pdev->dev, bufferaddress)) {
++ if (!new_skb)
++ kfree_skb(skb);
+ return 0;
++ }
+ rtlpci->rx_ring[rxring_idx].rx_buf[desc_idx] = skb;
+ if (rtlpriv->use_new_trx_flow) {
+ /* skb->cb may be 64 bit address */
+@@ -803,13 +806,19 @@ static void _rtl_pci_rx_interrupt(struct ieee80211_hw *hw)
+ skb = new_skb;
+ no_new:
+ if (rtlpriv->use_new_trx_flow) {
+- _rtl_pci_init_one_rxdesc(hw, skb, (u8 *)buffer_desc,
+- rxring_idx,
+- rtlpci->rx_ring[rxring_idx].idx);
++ if (!_rtl_pci_init_one_rxdesc(hw, skb, (u8 *)buffer_desc,
++ rxring_idx,
++ rtlpci->rx_ring[rxring_idx].idx)) {
++ if (new_skb)
++ dev_kfree_skb_any(skb);
++ }
+ } else {
+- _rtl_pci_init_one_rxdesc(hw, skb, (u8 *)pdesc,
+- rxring_idx,
+- rtlpci->rx_ring[rxring_idx].idx);
++ if (!_rtl_pci_init_one_rxdesc(hw, skb, (u8 *)pdesc,
++ rxring_idx,
++ rtlpci->rx_ring[rxring_idx].idx)) {
++ if (new_skb)
++ dev_kfree_skb_any(skb);
++ }
+ if (rtlpci->rx_ring[rxring_idx].idx ==
+ rtlpci->rxringcount - 1)
+ rtlpriv->cfg->ops->set_desc(hw, (u8 *)pdesc,
+diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
+index bad9e549d533ee..34c4770bf55502 100644
+--- a/drivers/net/xen-netfront.c
++++ b/drivers/net/xen-netfront.c
+@@ -638,8 +638,6 @@ static int xennet_xdp_xmit_one(struct net_device *dev,
+ tx_stats->packets++;
+ u64_stats_update_end(&tx_stats->syncp);
+
+- xennet_tx_buf_gc(queue);
+-
+ return 0;
+ }
+
+@@ -851,9 +849,6 @@ static netdev_tx_t xennet_start_xmit(struct sk_buff *skb, struct net_device *dev
+ tx_stats->packets++;
+ u64_stats_update_end(&tx_stats->syncp);
+
+- /* Note: It is not safe to access skb after xennet_tx_buf_gc()! */
+- xennet_tx_buf_gc(queue);
+-
+ if (!netfront_tx_slot_available(queue))
+ netif_tx_stop_queue(netdev_get_tx_queue(dev, queue->id));
+
+diff --git a/drivers/pci/controller/pcie-rockchip-host.c b/drivers/pci/controller/pcie-rockchip-host.c
+index 0d6df73bb91816..86bb4f82048a31 100644
+--- a/drivers/pci/controller/pcie-rockchip-host.c
++++ b/drivers/pci/controller/pcie-rockchip-host.c
+@@ -442,7 +442,7 @@ static irqreturn_t rockchip_pcie_subsys_irq_handler(int irq, void *arg)
+ dev_dbg(dev, "malformed TLP received from the link\n");
+
+ if (sub_reg & PCIE_CORE_INT_UCR)
+- dev_dbg(dev, "malformed TLP received from the link\n");
++ dev_dbg(dev, "Unexpected Completion received from the link\n");
+
+ if (sub_reg & PCIE_CORE_INT_FCE)
+ dev_dbg(dev, "an error was observed in the flow control advertisements from the other side\n");
+diff --git a/drivers/pci/endpoint/pci-ep-cfs.c b/drivers/pci/endpoint/pci-ep-cfs.c
+index 3710adf51912de..e9642721e2ff9a 100644
+--- a/drivers/pci/endpoint/pci-ep-cfs.c
++++ b/drivers/pci/endpoint/pci-ep-cfs.c
+@@ -464,6 +464,7 @@ void pci_ep_cfs_remove_epf_group(struct config_group *group)
+ if (IS_ERR_OR_NULL(group))
+ return;
+
++ list_del(&group->group_entry);
+ configfs_unregister_default_group(group);
+ }
+ EXPORT_SYMBOL(pci_ep_cfs_remove_epf_group);
+diff --git a/drivers/pci/endpoint/pci-epf-core.c b/drivers/pci/endpoint/pci-epf-core.c
+index c977cf9dce5614..365ea23f928517 100644
+--- a/drivers/pci/endpoint/pci-epf-core.c
++++ b/drivers/pci/endpoint/pci-epf-core.c
+@@ -147,7 +147,7 @@ static void pci_epf_remove_cfs(struct pci_epf_driver *driver)
+ mutex_lock(&pci_epf_mutex);
+ list_for_each_entry_safe(group, tmp, &driver->epf_group, group_entry)
+ pci_ep_cfs_remove_epf_group(group);
+- list_del(&driver->epf_group);
++ WARN_ON(!list_empty(&driver->epf_group));
+ mutex_unlock(&pci_epf_mutex);
+ }
+
+diff --git a/drivers/pci/hotplug/pnv_php.c b/drivers/pci/hotplug/pnv_php.c
+index cf9c0e75f0be48..03c4d0d7f9b151 100644
+--- a/drivers/pci/hotplug/pnv_php.c
++++ b/drivers/pci/hotplug/pnv_php.c
+@@ -3,11 +3,14 @@
+ * PCI Hotplug Driver for PowerPC PowerNV platform.
+ *
+ * Copyright Gavin Shan, IBM Corporation 2016.
++ * Copyright (C) 2025 Raptor Engineering, LLC
++ * Copyright (C) 2025 Raptor Computing Systems, LLC
+ */
+
+ #include <linux/libfdt.h>
+ #include <linux/module.h>
+ #include <linux/pci.h>
++#include <linux/delay.h>
+ #include <linux/pci_hotplug.h>
+
+ #include <asm/opal.h>
+@@ -34,8 +37,10 @@ static void pnv_php_register(struct device_node *dn);
+ static void pnv_php_unregister_one(struct device_node *dn);
+ static void pnv_php_unregister(struct device_node *dn);
+
++static void pnv_php_enable_irq(struct pnv_php_slot *php_slot);
++
+ static void pnv_php_disable_irq(struct pnv_php_slot *php_slot,
+- bool disable_device)
++ bool disable_device, bool disable_msi)
+ {
+ struct pci_dev *pdev = php_slot->pdev;
+ u16 ctrl;
+@@ -51,19 +56,15 @@ static void pnv_php_disable_irq(struct pnv_php_slot *php_slot,
+ php_slot->irq = 0;
+ }
+
+- if (php_slot->wq) {
+- destroy_workqueue(php_slot->wq);
+- php_slot->wq = NULL;
+- }
+-
+- if (disable_device) {
++ if (disable_device || disable_msi) {
+ if (pdev->msix_enabled)
+ pci_disable_msix(pdev);
+ else if (pdev->msi_enabled)
+ pci_disable_msi(pdev);
++ }
+
++ if (disable_device)
+ pci_disable_device(pdev);
+- }
+ }
+
+ static void pnv_php_free_slot(struct kref *kref)
+@@ -72,7 +73,8 @@ static void pnv_php_free_slot(struct kref *kref)
+ struct pnv_php_slot, kref);
+
+ WARN_ON(!list_empty(&php_slot->children));
+- pnv_php_disable_irq(php_slot, false);
++ pnv_php_disable_irq(php_slot, false, false);
++ destroy_workqueue(php_slot->wq);
+ kfree(php_slot->name);
+ kfree(php_slot);
+ }
+@@ -389,6 +391,20 @@ static int pnv_php_get_power_state(struct hotplug_slot *slot, u8 *state)
+ return 0;
+ }
+
++static int pcie_check_link_active(struct pci_dev *pdev)
++{
++ u16 lnk_status;
++ int ret;
++
++ ret = pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &lnk_status);
++ if (ret == PCIBIOS_DEVICE_NOT_FOUND || PCI_POSSIBLE_ERROR(lnk_status))
++ return -ENODEV;
++
++ ret = !!(lnk_status & PCI_EXP_LNKSTA_DLLLA);
++
++ return ret;
++}
++
+ static int pnv_php_get_adapter_state(struct hotplug_slot *slot, u8 *state)
+ {
+ struct pnv_php_slot *php_slot = to_pnv_php_slot(slot);
+@@ -401,6 +417,19 @@ static int pnv_php_get_adapter_state(struct hotplug_slot *slot, u8 *state)
+ */
+ ret = pnv_pci_get_presence_state(php_slot->id, &presence);
+ if (ret >= 0) {
++ if (pci_pcie_type(php_slot->pdev) == PCI_EXP_TYPE_DOWNSTREAM &&
++ presence == OPAL_PCI_SLOT_EMPTY) {
++ /*
++ * Similar to pciehp_hpc, check whether the Link Active
++ * bit is set to account for broken downstream bridges
++ * that don't properly assert Presence Detect State, as
++ * was observed on the Microsemi Switchtec PM8533 PFX
++ * [11f8:8533].
++ */
++ if (pcie_check_link_active(php_slot->pdev) > 0)
++ presence = OPAL_PCI_SLOT_PRESENT;
++ }
++
+ *state = presence;
+ ret = 0;
+ } else {
+@@ -440,6 +469,61 @@ static int pnv_php_set_attention_state(struct hotplug_slot *slot, u8 state)
+ return 0;
+ }
+
++static int pnv_php_activate_slot(struct pnv_php_slot *php_slot,
++ struct hotplug_slot *slot)
++{
++ int ret, i;
++
++ /*
++ * Issue initial slot activation command to firmware
++ *
++ * Firmware will power slot on, attempt to train the link, and
++ * discover any downstream devices. If this process fails, firmware
++ * will return an error code and an invalid device tree. Failure
++ * can be caused for multiple reasons, including a faulty
++ * downstream device, poor connection to the downstream device, or
++ * a previously latched PHB fence. On failure, issue fundamental
++ * reset up to three times before aborting.
++ */
++ ret = pnv_php_set_slot_power_state(slot, OPAL_PCI_SLOT_POWER_ON);
++ if (ret) {
++ SLOT_WARN(
++ php_slot,
++ "PCI slot activation failed with error code %d, possible frozen PHB",
++ ret);
++ SLOT_WARN(
++ php_slot,
++ "Attempting complete PHB reset before retrying slot activation\n");
++ for (i = 0; i < 3; i++) {
++ /*
++ * Slot activation failed, PHB may be fenced from a
++ * prior device failure.
++ *
++ * Use the OPAL fundamental reset call to both try a
++ * device reset and clear any potentially active PHB
++ * fence / freeze.
++ */
++ SLOT_WARN(php_slot, "Try %d...\n", i + 1);
++ pci_set_pcie_reset_state(php_slot->pdev,
++ pcie_warm_reset);
++ msleep(250);
++ pci_set_pcie_reset_state(php_slot->pdev,
++ pcie_deassert_reset);
++
++ ret = pnv_php_set_slot_power_state(
++ slot, OPAL_PCI_SLOT_POWER_ON);
++ if (!ret)
++ break;
++ }
++
++ if (i >= 3)
++ SLOT_WARN(php_slot,
++ "Failed to bring slot online, aborting!\n");
++ }
++
++ return ret;
++}
++
+ static int pnv_php_enable(struct pnv_php_slot *php_slot, bool rescan)
+ {
+ struct hotplug_slot *slot = &php_slot->slot;
+@@ -502,7 +586,7 @@ static int pnv_php_enable(struct pnv_php_slot *php_slot, bool rescan)
+ goto scan;
+
+ /* Power is off, turn it on and then scan the slot */
+- ret = pnv_php_set_slot_power_state(slot, OPAL_PCI_SLOT_POWER_ON);
++ ret = pnv_php_activate_slot(php_slot, slot);
+ if (ret)
+ return ret;
+
+@@ -559,8 +643,58 @@ static int pnv_php_reset_slot(struct hotplug_slot *slot, int probe)
+ static int pnv_php_enable_slot(struct hotplug_slot *slot)
+ {
+ struct pnv_php_slot *php_slot = to_pnv_php_slot(slot);
++ u32 prop32;
++ int ret;
++
++ ret = pnv_php_enable(php_slot, true);
++ if (ret)
++ return ret;
++
++ /* (Re-)enable interrupt if the slot supports surprise hotplug */
++ ret = of_property_read_u32(php_slot->dn, "ibm,slot-surprise-pluggable",
++ &prop32);
++ if (!ret && prop32)
++ pnv_php_enable_irq(php_slot);
++
++ return 0;
++}
++
++/*
++ * Disable any hotplug interrupts for all slots on the provided bus, as well as
++ * all downstream slots in preparation for a hot unplug.
++ */
++static int pnv_php_disable_all_irqs(struct pci_bus *bus)
++{
++ struct pci_bus *child_bus;
++ struct pci_slot *slot;
++
++ /* First go down child buses */
++ list_for_each_entry(child_bus, &bus->children, node)
++ pnv_php_disable_all_irqs(child_bus);
++
++ /* Disable IRQs for all pnv_php slots on this bus */
++ list_for_each_entry(slot, &bus->slots, list) {
++ struct pnv_php_slot *php_slot = to_pnv_php_slot(slot->hotplug);
+
+- return pnv_php_enable(php_slot, true);
++ pnv_php_disable_irq(php_slot, false, true);
++ }
++
++ return 0;
++}
++
++/*
++ * Disable any hotplug interrupts for all downstream slots on the provided
++ * bus in preparation for a hot unplug.
++ */
++static int pnv_php_disable_all_downstream_irqs(struct pci_bus *bus)
++{
++ struct pci_bus *child_bus;
++
++ /* Go down child buses, recursively deactivating their IRQs */
++ list_for_each_entry(child_bus, &bus->children, node)
++ pnv_php_disable_all_irqs(child_bus);
++
++ return 0;
+ }
+
+ static int pnv_php_disable_slot(struct hotplug_slot *slot)
+@@ -577,6 +711,13 @@ static int pnv_php_disable_slot(struct hotplug_slot *slot)
+ php_slot->state != PNV_PHP_STATE_REGISTERED)
+ return 0;
+
++ /*
++ * Free all IRQ resources from all child slots before remove.
++ * Note that we do not disable the root slot IRQ here as that
++ * would also deactivate the slot hot (re)plug interrupt!
++ */
++ pnv_php_disable_all_downstream_irqs(php_slot->bus);
++
+ /* Remove all devices behind the slot */
+ pci_lock_rescan_remove();
+ pci_hp_remove_devices(php_slot->bus);
+@@ -645,6 +786,15 @@ static struct pnv_php_slot *pnv_php_alloc_slot(struct device_node *dn)
+ return NULL;
+ }
+
++ /* Allocate workqueue for this slot's interrupt handling */
++ php_slot->wq = alloc_workqueue("pciehp-%s", 0, 0, php_slot->name);
++ if (!php_slot->wq) {
++ SLOT_WARN(php_slot, "Cannot alloc workqueue\n");
++ kfree(php_slot->name);
++ kfree(php_slot);
++ return NULL;
++ }
++
+ if (dn->child && PCI_DN(dn->child))
+ php_slot->slot_no = PCI_SLOT(PCI_DN(dn->child)->devfn);
+ else
+@@ -743,16 +893,63 @@ static int pnv_php_enable_msix(struct pnv_php_slot *php_slot)
+ return entry.vector;
+ }
+
++static void
++pnv_php_detect_clear_suprise_removal_freeze(struct pnv_php_slot *php_slot)
++{
++ struct pci_dev *pdev = php_slot->pdev;
++ struct eeh_dev *edev;
++ struct eeh_pe *pe;
++ int i, rc;
++
++ /*
++ * When a device is surprise removed from a downstream bridge slot,
++ * the upstream bridge port can still end up frozen due to related EEH
++ * events, which will in turn block the MSI interrupts for slot hotplug
++ * detection.
++ *
++ * Detect and thaw any frozen upstream PE after slot deactivation.
++ */
++ edev = pci_dev_to_eeh_dev(pdev);
++ pe = edev ? edev->pe : NULL;
++ rc = eeh_pe_get_state(pe);
++ if ((rc == -ENODEV) || (rc == -ENOENT)) {
++ SLOT_WARN(
++ php_slot,
++ "Upstream bridge PE state unknown, hotplug detect may fail\n");
++ } else {
++ if (pe->state & EEH_PE_ISOLATED) {
++ SLOT_WARN(
++ php_slot,
++ "Upstream bridge PE %02x frozen, thawing...\n",
++ pe->addr);
++ for (i = 0; i < 3; i++)
++ if (!eeh_unfreeze_pe(pe))
++ break;
++ if (i >= 3)
++ SLOT_WARN(
++ php_slot,
++ "Unable to thaw PE %02x, hotplug detect will fail!\n",
++ pe->addr);
++ else
++ SLOT_WARN(php_slot,
++ "PE %02x thawed successfully\n",
++ pe->addr);
++ }
++ }
++}
++
+ static void pnv_php_event_handler(struct work_struct *work)
+ {
+ struct pnv_php_event *event =
+ container_of(work, struct pnv_php_event, work);
+ struct pnv_php_slot *php_slot = event->php_slot;
+
+- if (event->added)
++ if (event->added) {
+ pnv_php_enable_slot(&php_slot->slot);
+- else
++ } else {
+ pnv_php_disable_slot(&php_slot->slot);
++ pnv_php_detect_clear_suprise_removal_freeze(php_slot);
++ }
+
+ kfree(event);
+ }
+@@ -841,14 +1038,6 @@ static void pnv_php_init_irq(struct pnv_php_slot *php_slot, int irq)
+ u16 sts, ctrl;
+ int ret;
+
+- /* Allocate workqueue */
+- php_slot->wq = alloc_workqueue("pciehp-%s", 0, 0, php_slot->name);
+- if (!php_slot->wq) {
+- SLOT_WARN(php_slot, "Cannot alloc workqueue\n");
+- pnv_php_disable_irq(php_slot, true);
+- return;
+- }
+-
+ /* Check PDC (Presence Detection Change) is broken or not */
+ ret = of_property_read_u32(php_slot->dn, "ibm,slot-broken-pdc",
+ &broken_pdc);
+@@ -867,7 +1056,7 @@ static void pnv_php_init_irq(struct pnv_php_slot *php_slot, int irq)
+ ret = request_irq(irq, pnv_php_interrupt, IRQF_SHARED,
+ php_slot->name, php_slot);
+ if (ret) {
+- pnv_php_disable_irq(php_slot, true);
++ pnv_php_disable_irq(php_slot, true, true);
+ SLOT_WARN(php_slot, "Error %d enabling IRQ %d\n", ret, irq);
+ return;
+ }
+diff --git a/drivers/pci/pci-acpi.c b/drivers/pci/pci-acpi.c
+index bda45c5241879d..f8afce7ba73ce3 100644
+--- a/drivers/pci/pci-acpi.c
++++ b/drivers/pci/pci-acpi.c
+@@ -791,13 +791,11 @@ int pci_acpi_program_hp_params(struct pci_dev *dev)
+ bool pciehp_is_native(struct pci_dev *bridge)
+ {
+ const struct pci_host_bridge *host;
+- u32 slot_cap;
+
+ if (!IS_ENABLED(CONFIG_HOTPLUG_PCI_PCIE))
+ return false;
+
+- pcie_capability_read_dword(bridge, PCI_EXP_SLTCAP, &slot_cap);
+- if (!(slot_cap & PCI_EXP_SLTCAP_HPC))
++ if (!bridge->is_pciehp)
+ return false;
+
+ if (pcie_ports_native)
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index 31bcda363cbb60..15618b87bc4b91 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -2860,8 +2860,12 @@ static const struct dmi_system_id bridge_d3_blacklist[] = {
+ * pci_bridge_d3_possible - Is it possible to put the bridge into D3
+ * @bridge: Bridge to check
+ *
+- * This function checks if it is possible to move the bridge to D3.
+- * Currently we only allow D3 for recent enough PCIe ports and Thunderbolt.
++ * Currently we only allow D3 for some PCIe ports and for Thunderbolt.
++ *
++ * Return: Whether it is possible to move the bridge to D3.
++ *
++ * The return value is guaranteed to be constant across the entire lifetime
++ * of the bridge, including its hot-removal.
+ */
+ bool pci_bridge_d3_possible(struct pci_dev *bridge)
+ {
+diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
+index 7f3d10957eca7f..0b1ef4f2c90dd6 100644
+--- a/drivers/pci/probe.c
++++ b/drivers/pci/probe.c
+@@ -1582,7 +1582,7 @@ void set_pcie_hotplug_bridge(struct pci_dev *pdev)
+
+ pcie_capability_read_dword(pdev, PCI_EXP_SLTCAP, ®32);
+ if (reg32 & PCI_EXP_SLTCAP_HPC)
+- pdev->is_hotplug_bridge = 1;
++ pdev->is_hotplug_bridge = pdev->is_pciehp = 1;
+ }
+
+ static void set_pcie_thunderbolt(struct pci_dev *dev)
+diff --git a/drivers/phy/tegra/xusb-tegra186.c b/drivers/phy/tegra/xusb-tegra186.c
+index 5aeffe79ba7fbe..b3fd0370cd0fb1 100644
+--- a/drivers/phy/tegra/xusb-tegra186.c
++++ b/drivers/phy/tegra/xusb-tegra186.c
+@@ -328,13 +328,15 @@ static int tegra186_xusb_padctl_vbus_override(struct tegra_xusb_padctl *padctl,
+ }
+
+ static int tegra186_xusb_padctl_id_override(struct tegra_xusb_padctl *padctl,
+- bool status)
++ struct tegra_xusb_usb2_port *port, bool status)
+ {
+- u32 value;
++ u32 value, id_override;
++ int err = 0;
+
+ dev_dbg(padctl->dev, "%s id override\n", status ? "set" : "clear");
+
+ value = padctl_readl(padctl, USB2_VBUS_ID);
++ id_override = value & ID_OVERRIDE(~0);
+
+ if (status) {
+ if (value & VBUS_OVERRIDE) {
+@@ -345,15 +347,35 @@ static int tegra186_xusb_padctl_id_override(struct tegra_xusb_padctl *padctl,
+ value = padctl_readl(padctl, USB2_VBUS_ID);
+ }
+
+- value &= ~ID_OVERRIDE(~0);
+- value |= ID_OVERRIDE_GROUNDED;
++ if (id_override != ID_OVERRIDE_GROUNDED) {
++ value &= ~ID_OVERRIDE(~0);
++ value |= ID_OVERRIDE_GROUNDED;
++ padctl_writel(padctl, value, USB2_VBUS_ID);
++
++ err = regulator_enable(port->supply);
++ if (err) {
++ dev_err(padctl->dev, "Failed to enable regulator: %d\n", err);
++ return err;
++ }
++ }
+ } else {
+- value &= ~ID_OVERRIDE(~0);
+- value |= ID_OVERRIDE_FLOATING;
++ if (id_override == ID_OVERRIDE_GROUNDED) {
++ /*
++ * The regulator is disabled only when the role transitions
++ * from USB_ROLE_HOST to USB_ROLE_NONE.
++ */
++ err = regulator_disable(port->supply);
++ if (err) {
++ dev_err(padctl->dev, "Failed to disable regulator: %d\n", err);
++ return err;
++ }
++
++ value &= ~ID_OVERRIDE(~0);
++ value |= ID_OVERRIDE_FLOATING;
++ padctl_writel(padctl, value, USB2_VBUS_ID);
++ }
+ }
+
+- padctl_writel(padctl, value, USB2_VBUS_ID);
+-
+ return 0;
+ }
+
+@@ -372,27 +394,20 @@ static int tegra186_utmi_phy_set_mode(struct phy *phy, enum phy_mode mode,
+
+ if (mode == PHY_MODE_USB_OTG) {
+ if (submode == USB_ROLE_HOST) {
+- tegra186_xusb_padctl_id_override(padctl, true);
+-
+- err = regulator_enable(port->supply);
++ err = tegra186_xusb_padctl_id_override(padctl, port, true);
++ if (err)
++ goto out;
+ } else if (submode == USB_ROLE_DEVICE) {
+ tegra186_xusb_padctl_vbus_override(padctl, true);
+ } else if (submode == USB_ROLE_NONE) {
+- /*
+- * When port is peripheral only or role transitions to
+- * USB_ROLE_NONE from USB_ROLE_DEVICE, regulator is not
+- * enabled.
+- */
+- if (regulator_is_enabled(port->supply))
+- regulator_disable(port->supply);
+-
+- tegra186_xusb_padctl_id_override(padctl, false);
++ err = tegra186_xusb_padctl_id_override(padctl, port, false);
++ if (err)
++ goto out;
+ tegra186_xusb_padctl_vbus_override(padctl, false);
+ }
+ }
+-
++out:
+ mutex_unlock(&padctl->lock);
+-
+ return err;
+ }
+
+diff --git a/drivers/pinctrl/stm32/pinctrl-stm32.c b/drivers/pinctrl/stm32/pinctrl-stm32.c
+index 2d852f15cc501d..6b6fdb71165901 100644
+--- a/drivers/pinctrl/stm32/pinctrl-stm32.c
++++ b/drivers/pinctrl/stm32/pinctrl-stm32.c
+@@ -412,6 +412,7 @@ static struct irq_chip stm32_gpio_irq_chip = {
+ .irq_set_wake = irq_chip_set_wake_parent,
+ .irq_request_resources = stm32_gpio_irq_request_resources,
+ .irq_release_resources = stm32_gpio_irq_release_resources,
++ .irq_set_affinity = IS_ENABLED(CONFIG_SMP) ? irq_chip_set_affinity_parent : NULL,
+ };
+
+ static int stm32_gpio_domain_translate(struct irq_domain *d,
+diff --git a/drivers/pinctrl/sunxi/pinctrl-sunxi.c b/drivers/pinctrl/sunxi/pinctrl-sunxi.c
+index e4b41cc6c58609..0a50f37c63f403 100644
+--- a/drivers/pinctrl/sunxi/pinctrl-sunxi.c
++++ b/drivers/pinctrl/sunxi/pinctrl-sunxi.c
+@@ -335,6 +335,7 @@ static int sunxi_pctrl_dt_node_to_map(struct pinctrl_dev *pctldev,
+ const char *function, *pin_prop;
+ const char *group;
+ int ret, npins, nmaps, configlen = 0, i = 0;
++ struct pinctrl_map *new_map;
+
+ *map = NULL;
+ *num_maps = 0;
+@@ -409,9 +410,13 @@ static int sunxi_pctrl_dt_node_to_map(struct pinctrl_dev *pctldev,
+ * We know have the number of maps we need, we can resize our
+ * map array
+ */
+- *map = krealloc(*map, i * sizeof(struct pinctrl_map), GFP_KERNEL);
+- if (!*map)
+- return -ENOMEM;
++ new_map = krealloc(*map, i * sizeof(struct pinctrl_map), GFP_KERNEL);
++ if (!new_map) {
++ ret = -ENOMEM;
++ goto err_free_map;
++ }
++
++ *map = new_map;
+
+ return 0;
+
+diff --git a/drivers/platform/chrome/cros_ec.c b/drivers/platform/chrome/cros_ec.c
+index 5a622666a07552..a0b6cec9bfee70 100644
+--- a/drivers/platform/chrome/cros_ec.c
++++ b/drivers/platform/chrome/cros_ec.c
+@@ -186,12 +186,14 @@ int cros_ec_register(struct cros_ec_device *ec_dev)
+ if (!ec_dev->dout)
+ return -ENOMEM;
+
++ lockdep_register_key(&ec_dev->lockdep_key);
+ mutex_init(&ec_dev->lock);
++ lockdep_set_class(&ec_dev->lock, &ec_dev->lockdep_key);
+
+ err = cros_ec_query_all(ec_dev);
+ if (err) {
+ dev_err(dev, "Cannot identify the EC: error %d\n", err);
+- return err;
++ goto exit;
+ }
+
+ if (ec_dev->irq > 0) {
+@@ -203,7 +205,7 @@ int cros_ec_register(struct cros_ec_device *ec_dev)
+ if (err) {
+ dev_err(dev, "Failed to request IRQ %d: %d",
+ ec_dev->irq, err);
+- return err;
++ goto exit;
+ }
+ }
+
+@@ -214,7 +216,8 @@ int cros_ec_register(struct cros_ec_device *ec_dev)
+ if (IS_ERR(ec_dev->ec)) {
+ dev_err(ec_dev->dev,
+ "Failed to create CrOS EC platform device\n");
+- return PTR_ERR(ec_dev->ec);
++ err = PTR_ERR(ec_dev->ec);
++ goto exit;
+ }
+
+ if (ec_dev->max_passthru) {
+@@ -273,6 +276,8 @@ int cros_ec_register(struct cros_ec_device *ec_dev)
+ exit:
+ platform_device_unregister(ec_dev->ec);
+ platform_device_unregister(ec_dev->pd);
++ mutex_destroy(&ec_dev->lock);
++ lockdep_unregister_key(&ec_dev->lockdep_key);
+ return err;
+ }
+ EXPORT_SYMBOL(cros_ec_register);
+@@ -285,13 +290,15 @@ EXPORT_SYMBOL(cros_ec_register);
+ *
+ * Return: 0 on success or negative error code.
+ */
+-int cros_ec_unregister(struct cros_ec_device *ec_dev)
++void cros_ec_unregister(struct cros_ec_device *ec_dev)
+ {
+- if (ec_dev->pd)
+- platform_device_unregister(ec_dev->pd);
++ if (ec_dev->mkbp_event_supported)
++ blocking_notifier_chain_unregister(&ec_dev->event_notifier,
++ &ec_dev->notifier_ready);
++ platform_device_unregister(ec_dev->pd);
+ platform_device_unregister(ec_dev->ec);
+-
+- return 0;
++ mutex_destroy(&ec_dev->lock);
++ lockdep_unregister_key(&ec_dev->lockdep_key);
+ }
+ EXPORT_SYMBOL(cros_ec_unregister);
+
+diff --git a/drivers/platform/chrome/cros_ec.h b/drivers/platform/chrome/cros_ec.h
+index e69fc1ff68b408..8ed455edbaeb25 100644
+--- a/drivers/platform/chrome/cros_ec.h
++++ b/drivers/platform/chrome/cros_ec.h
+@@ -9,7 +9,7 @@
+ #define __CROS_EC_H
+
+ int cros_ec_register(struct cros_ec_device *ec_dev);
+-int cros_ec_unregister(struct cros_ec_device *ec_dev);
++void cros_ec_unregister(struct cros_ec_device *ec_dev);
+
+ int cros_ec_suspend(struct cros_ec_device *ec_dev);
+ int cros_ec_resume(struct cros_ec_device *ec_dev);
+diff --git a/drivers/platform/chrome/cros_ec_i2c.c b/drivers/platform/chrome/cros_ec_i2c.c
+index 30c8938c27d54e..22feb0fd4ce71a 100644
+--- a/drivers/platform/chrome/cros_ec_i2c.c
++++ b/drivers/platform/chrome/cros_ec_i2c.c
+@@ -313,7 +313,9 @@ static int cros_ec_i2c_remove(struct i2c_client *client)
+ {
+ struct cros_ec_device *ec_dev = i2c_get_clientdata(client);
+
+- return cros_ec_unregister(ec_dev);
++ cros_ec_unregister(ec_dev);
++
++ return 0;
+ }
+
+ #ifdef CONFIG_PM_SLEEP
+diff --git a/drivers/platform/chrome/cros_ec_lpc.c b/drivers/platform/chrome/cros_ec_lpc.c
+index 1f78619440448d..8527a1bac765a4 100644
+--- a/drivers/platform/chrome/cros_ec_lpc.c
++++ b/drivers/platform/chrome/cros_ec_lpc.c
+@@ -439,7 +439,9 @@ static int cros_ec_lpc_remove(struct platform_device *pdev)
+ acpi_remove_notify_handler(adev->handle, ACPI_ALL_NOTIFY,
+ cros_ec_lpc_acpi_notify);
+
+- return cros_ec_unregister(ec_dev);
++ cros_ec_unregister(ec_dev);
++
++ return 0;
+ }
+
+ static const struct acpi_device_id cros_ec_lpc_acpi_device_ids[] = {
+diff --git a/drivers/platform/chrome/cros_ec_spi.c b/drivers/platform/chrome/cros_ec_spi.c
+index f9df218fc2bbe2..2f2c07e8f95a81 100644
+--- a/drivers/platform/chrome/cros_ec_spi.c
++++ b/drivers/platform/chrome/cros_ec_spi.c
+@@ -791,7 +791,9 @@ static int cros_ec_spi_remove(struct spi_device *spi)
+ {
+ struct cros_ec_device *ec_dev = spi_get_drvdata(spi);
+
+- return cros_ec_unregister(ec_dev);
++ cros_ec_unregister(ec_dev);
++
++ return 0;
+ }
+
+ #ifdef CONFIG_PM_SLEEP
+diff --git a/drivers/platform/chrome/cros_ec_typec.c b/drivers/platform/chrome/cros_ec_typec.c
+index 983daa220ee31f..137f99c1848efb 100644
+--- a/drivers/platform/chrome/cros_ec_typec.c
++++ b/drivers/platform/chrome/cros_ec_typec.c
+@@ -716,8 +716,8 @@ static int cros_typec_probe(struct platform_device *pdev)
+
+ typec->ec = dev_get_drvdata(pdev->dev.parent);
+ if (!typec->ec) {
+- dev_err(dev, "couldn't find parent EC device\n");
+- return -ENODEV;
++ dev_warn(dev, "couldn't find parent EC device\n");
++ return -EPROBE_DEFER;
+ }
+
+ platform_set_drvdata(pdev, typec);
+diff --git a/drivers/platform/x86/thinkpad_acpi.c b/drivers/platform/x86/thinkpad_acpi.c
+index 5a8434da60e782..d18b6ddba9829c 100644
+--- a/drivers/platform/x86/thinkpad_acpi.c
++++ b/drivers/platform/x86/thinkpad_acpi.c
+@@ -515,12 +515,12 @@ static unsigned long __init tpacpi_check_quirks(
+ return 0;
+ }
+
+-static inline bool __pure __init tpacpi_is_lenovo(void)
++static __always_inline bool __pure __init tpacpi_is_lenovo(void)
+ {
+ return thinkpad_id.vendor == PCI_VENDOR_ID_LENOVO;
+ }
+
+-static inline bool __pure __init tpacpi_is_ibm(void)
++static __always_inline bool __pure __init tpacpi_is_ibm(void)
+ {
+ return thinkpad_id.vendor == PCI_VENDOR_ID_IBM;
+ }
+diff --git a/drivers/power/supply/max14577_charger.c b/drivers/power/supply/max14577_charger.c
+index dcedae18d7be07..5436e2818ec1fc 100644
+--- a/drivers/power/supply/max14577_charger.c
++++ b/drivers/power/supply/max14577_charger.c
+@@ -501,7 +501,7 @@ static struct max14577_charger_platform_data *max14577_charger_dt_init(
+ static struct max14577_charger_platform_data *max14577_charger_dt_init(
+ struct platform_device *pdev)
+ {
+- return NULL;
++ return ERR_PTR(-ENODATA);
+ }
+ #endif /* CONFIG_OF */
+
+@@ -572,7 +572,7 @@ static int max14577_charger_probe(struct platform_device *pdev)
+ chg->max14577 = max14577;
+
+ chg->pdata = max14577_charger_dt_init(pdev);
+- if (IS_ERR_OR_NULL(chg->pdata))
++ if (IS_ERR(chg->pdata))
+ return PTR_ERR(chg->pdata);
+
+ ret = max14577_charger_reg_init(chg);
+diff --git a/drivers/pps/pps.c b/drivers/pps/pps.c
+index 2d008e0d116ab5..ea966fc67d2870 100644
+--- a/drivers/pps/pps.c
++++ b/drivers/pps/pps.c
+@@ -41,6 +41,9 @@ static __poll_t pps_cdev_poll(struct file *file, poll_table *wait)
+
+ poll_wait(file, &pps->queue, wait);
+
++ if (pps->last_fetched_ev == pps->last_ev)
++ return 0;
++
+ return EPOLLIN | EPOLLRDNORM;
+ }
+
+@@ -186,9 +189,11 @@ static long pps_cdev_ioctl(struct file *file,
+ if (err)
+ return err;
+
+- /* Return the fetched timestamp */
++ /* Return the fetched timestamp and save last fetched event */
+ spin_lock_irq(&pps->lock);
+
++ pps->last_fetched_ev = pps->last_ev;
++
+ fdata.info.assert_sequence = pps->assert_sequence;
+ fdata.info.clear_sequence = pps->clear_sequence;
+ fdata.info.assert_tu = pps->assert_tu;
+@@ -272,9 +277,11 @@ static long pps_cdev_compat_ioctl(struct file *file,
+ if (err)
+ return err;
+
+- /* Return the fetched timestamp */
++ /* Return the fetched timestamp and save last fetched event */
+ spin_lock_irq(&pps->lock);
+
++ pps->last_fetched_ev = pps->last_ev;
++
+ compat.info.assert_sequence = pps->assert_sequence;
+ compat.info.clear_sequence = pps->clear_sequence;
+ compat.info.current_mode = pps->current_mode;
+diff --git a/drivers/ptp/ptp_clock.c b/drivers/ptp/ptp_clock.c
+index c895e26b1f1716..869023f0987efc 100644
+--- a/drivers/ptp/ptp_clock.c
++++ b/drivers/ptp/ptp_clock.c
+@@ -283,15 +283,20 @@ struct ptp_clock *ptp_clock_register(struct ptp_clock_info *info,
+ /* Create a posix clock and link it to the device. */
+ err = posix_clock_register(&ptp->clock, &ptp->dev);
+ if (err) {
++ if (ptp->pps_source)
++ pps_unregister_source(ptp->pps_source);
++
++ if (ptp->kworker)
++ kthread_destroy_worker(ptp->kworker);
++
++ put_device(&ptp->dev);
++
+ pr_err("failed to create posix clock\n");
+- goto no_clock;
++ return ERR_PTR(err);
+ }
+
+ return ptp;
+
+-no_clock:
+- if (ptp->pps_source)
+- pps_unregister_source(ptp->pps_source);
+ no_pps:
+ ptp_cleanup_pin_groups(ptp);
+ no_pin_groups:
+diff --git a/drivers/pwm/pwm-imx-tpm.c b/drivers/pwm/pwm-imx-tpm.c
+index 6e8db0acf71db0..b7307acfce33c9 100644
+--- a/drivers/pwm/pwm-imx-tpm.c
++++ b/drivers/pwm/pwm-imx-tpm.c
+@@ -202,6 +202,15 @@ static int pwm_imx_tpm_apply_hw(struct pwm_chip *chip,
+ val |= FIELD_PREP(PWM_IMX_TPM_SC_PS, p->prescale);
+ writel(val, tpm->base + PWM_IMX_TPM_SC);
+
++ /*
++ * if the counter is disabled (CMOD == 0), programming the new
++ * period length (MOD) will not reset the counter (CNT). If
++ * CNT.COUNT happens to be bigger than the new MOD value then
++ * the counter will end up being reset way too late. Therefore,
++ * manually reset it to 0.
++ */
++ if (!cmod)
++ writel(0x0, tpm->base + PWM_IMX_TPM_CNT);
+ /*
+ * set period count:
+ * if the PWM is disabled (CMOD[1:0] = 2b00), then MOD register
+diff --git a/drivers/pwm/pwm-mediatek.c b/drivers/pwm/pwm-mediatek.c
+index 239eb052f40be0..870fdf8f1d9223 100644
+--- a/drivers/pwm/pwm-mediatek.c
++++ b/drivers/pwm/pwm-mediatek.c
+@@ -120,6 +120,26 @@ static inline void pwm_mediatek_writel(struct pwm_mediatek_chip *chip,
+ writel(value, chip->regs + pwm_mediatek_reg_offset[num] + offset);
+ }
+
++static void pwm_mediatek_enable(struct pwm_chip *chip, struct pwm_device *pwm)
++{
++ struct pwm_mediatek_chip *pc = to_pwm_mediatek_chip(chip);
++ u32 value;
++
++ value = readl(pc->regs);
++ value |= BIT(pwm->hwpwm);
++ writel(value, pc->regs);
++}
++
++static void pwm_mediatek_disable(struct pwm_chip *chip, struct pwm_device *pwm)
++{
++ struct pwm_mediatek_chip *pc = to_pwm_mediatek_chip(chip);
++ u32 value;
++
++ value = readl(pc->regs);
++ value &= ~BIT(pwm->hwpwm);
++ writel(value, pc->regs);
++}
++
+ static int pwm_mediatek_config(struct pwm_chip *chip, struct pwm_device *pwm,
+ int duty_ns, int period_ns)
+ {
+@@ -149,7 +169,10 @@ static int pwm_mediatek_config(struct pwm_chip *chip, struct pwm_device *pwm,
+ do_div(resolution, clk_rate);
+
+ cnt_period = DIV_ROUND_CLOSEST_ULL((u64)period_ns * 1000, resolution);
+- while (cnt_period > 8191) {
++ if (!cnt_period)
++ return -EINVAL;
++
++ while (cnt_period > 8192) {
+ resolution *= 2;
+ clkdiv++;
+ cnt_period = DIV_ROUND_CLOSEST_ULL((u64)period_ns * 1000,
+@@ -172,9 +195,16 @@ static int pwm_mediatek_config(struct pwm_chip *chip, struct pwm_device *pwm,
+ }
+
+ cnt_duty = DIV_ROUND_CLOSEST_ULL((u64)duty_ns * 1000, resolution);
++
+ pwm_mediatek_writel(pc, pwm->hwpwm, PWMCON, BIT(15) | clkdiv);
+- pwm_mediatek_writel(pc, pwm->hwpwm, reg_width, cnt_period);
+- pwm_mediatek_writel(pc, pwm->hwpwm, reg_thres, cnt_duty);
++ pwm_mediatek_writel(pc, pwm->hwpwm, reg_width, cnt_period - 1);
++
++ if (cnt_duty) {
++ pwm_mediatek_writel(pc, pwm->hwpwm, reg_thres, cnt_duty - 1);
++ pwm_mediatek_enable(chip, pwm);
++ } else {
++ pwm_mediatek_disable(chip, pwm);
++ }
+
+ out:
+ pwm_mediatek_clk_disable(chip, pwm);
+@@ -182,39 +212,35 @@ static int pwm_mediatek_config(struct pwm_chip *chip, struct pwm_device *pwm,
+ return ret;
+ }
+
+-static int pwm_mediatek_enable(struct pwm_chip *chip, struct pwm_device *pwm)
++static int pwm_mediatek_apply(struct pwm_chip *chip, struct pwm_device *pwm,
++ const struct pwm_state *state)
+ {
+- struct pwm_mediatek_chip *pc = to_pwm_mediatek_chip(chip);
+- u32 value;
+- int ret;
++ int err;
+
+- ret = pwm_mediatek_clk_enable(chip, pwm);
+- if (ret < 0)
+- return ret;
++ if (state->polarity != PWM_POLARITY_NORMAL)
++ return -EINVAL;
+
+- value = readl(pc->regs);
+- value |= BIT(pwm->hwpwm);
+- writel(value, pc->regs);
++ if (!state->enabled) {
++ if (pwm->state.enabled) {
++ pwm_mediatek_disable(chip, pwm);
++ pwm_mediatek_clk_disable(chip, pwm);
++ }
+
+- return 0;
+-}
++ return 0;
++ }
+
+-static void pwm_mediatek_disable(struct pwm_chip *chip, struct pwm_device *pwm)
+-{
+- struct pwm_mediatek_chip *pc = to_pwm_mediatek_chip(chip);
+- u32 value;
++ err = pwm_mediatek_config(pwm->chip, pwm, state->duty_cycle, state->period);
++ if (err)
++ return err;
+
+- value = readl(pc->regs);
+- value &= ~BIT(pwm->hwpwm);
+- writel(value, pc->regs);
++ if (!pwm->state.enabled)
++ err = pwm_mediatek_clk_enable(chip, pwm);
+
+- pwm_mediatek_clk_disable(chip, pwm);
++ return err;
+ }
+
+ static const struct pwm_ops pwm_mediatek_ops = {
+- .config = pwm_mediatek_config,
+- .enable = pwm_mediatek_enable,
+- .disable = pwm_mediatek_disable,
++ .apply = pwm_mediatek_apply,
+ .owner = THIS_MODULE,
+ };
+
+diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
+index a0cc907a76c18c..b2d866d6065124 100644
+--- a/drivers/regulator/core.c
++++ b/drivers/regulator/core.c
+@@ -5198,6 +5198,7 @@ static void regulator_remove_coupling(struct regulator_dev *rdev)
+ ERR_PTR(err));
+ }
+
++ rdev->coupling_desc.n_coupled = 0;
+ kfree(rdev->coupling_desc.coupled_rdevs);
+ rdev->coupling_desc.coupled_rdevs = NULL;
+ }
+diff --git a/drivers/reset/Kconfig b/drivers/reset/Kconfig
+index 147543ad303f25..315324dcdac412 100644
+--- a/drivers/reset/Kconfig
++++ b/drivers/reset/Kconfig
+@@ -43,8 +43,8 @@ config RESET_BERLIN
+
+ config RESET_BRCMSTB
+ tristate "Broadcom STB reset controller"
+- depends on ARCH_BRCMSTB || COMPILE_TEST
+- default ARCH_BRCMSTB
++ depends on ARCH_BRCMSTB || ARCH_BCM2835 || COMPILE_TEST
++ default ARCH_BRCMSTB || ARCH_BCM2835
+ help
+ This enables the reset controller driver for Broadcom STB SoCs using
+ a SUN_TOP_CTRL_SW_INIT style controller.
+@@ -52,11 +52,11 @@ config RESET_BRCMSTB
+ config RESET_BRCMSTB_RESCAL
+ bool "Broadcom STB RESCAL reset controller"
+ depends on HAS_IOMEM
+- depends on ARCH_BRCMSTB || COMPILE_TEST
+- default ARCH_BRCMSTB
++ depends on ARCH_BRCMSTB || ARCH_BCM2835 || COMPILE_TEST
++ default ARCH_BRCMSTB || ARCH_BCM2835
+ help
+ This enables the RESCAL reset controller for SATA, PCIe0, or PCIe1 on
+- BCM7216.
++ BCM7216 or the BCM2712.
+
+ config RESET_HSDK
+ bool "Synopsys HSDK Reset Driver"
+diff --git a/drivers/rtc/rtc-ds1307.c b/drivers/rtc/rtc-ds1307.c
+index 3a2401ce2ec9c0..ecfd6c27ba542c 100644
+--- a/drivers/rtc/rtc-ds1307.c
++++ b/drivers/rtc/rtc-ds1307.c
+@@ -275,6 +275,13 @@ static int ds1307_get_time(struct device *dev, struct rtc_time *t)
+ if (tmp & DS1340_BIT_OSF)
+ return -EINVAL;
+ break;
++ case ds_1341:
++ ret = regmap_read(ds1307->regmap, DS1337_REG_STATUS, &tmp);
++ if (ret)
++ return ret;
++ if (tmp & DS1337_BIT_OSF)
++ return -EINVAL;
++ break;
+ case ds_1388:
+ ret = regmap_read(ds1307->regmap, DS1388_REG_FLAG, &tmp);
+ if (ret)
+@@ -373,6 +380,10 @@ static int ds1307_set_time(struct device *dev, struct rtc_time *t)
+ regmap_update_bits(ds1307->regmap, DS1340_REG_FLAG,
+ DS1340_BIT_OSF, 0);
+ break;
++ case ds_1341:
++ regmap_update_bits(ds1307->regmap, DS1337_REG_STATUS,
++ DS1337_BIT_OSF, 0);
++ break;
+ case ds_1388:
+ regmap_update_bits(ds1307->regmap, DS1388_REG_FLAG,
+ DS1388_BIT_OSF, 0);
+@@ -1518,7 +1529,7 @@ static long ds3231_clk_sqw_round_rate(struct clk_hw *hw, unsigned long rate,
+ return ds3231_clk_sqw_rates[i];
+ }
+
+- return 0;
++ return ds3231_clk_sqw_rates[ARRAY_SIZE(ds3231_clk_sqw_rates) - 1];
+ }
+
+ static int ds3231_clk_sqw_set_rate(struct clk_hw *hw, unsigned long rate,
+@@ -1870,10 +1881,8 @@ static int ds1307_probe(struct i2c_client *client,
+ regmap_write(ds1307->regmap, DS1337_REG_CONTROL,
+ regs[0]);
+
+- /* oscillator fault? clear flag, and warn */
++ /* oscillator fault? warn */
+ if (regs[1] & DS1337_BIT_OSF) {
+- regmap_write(ds1307->regmap, DS1337_REG_STATUS,
+- regs[1] & ~DS1337_BIT_OSF);
+ dev_warn(ds1307->dev, "SET TIME!\n");
+ }
+ break;
+diff --git a/drivers/rtc/rtc-hym8563.c b/drivers/rtc/rtc-hym8563.c
+index 0fb79c4afb4639..b5e76d6ee64bea 100644
+--- a/drivers/rtc/rtc-hym8563.c
++++ b/drivers/rtc/rtc-hym8563.c
+@@ -312,7 +312,7 @@ static long hym8563_clkout_round_rate(struct clk_hw *hw, unsigned long rate,
+ if (clkout_rates[i] <= rate)
+ return clkout_rates[i];
+
+- return 0;
++ return clkout_rates[0];
+ }
+
+ static int hym8563_clkout_set_rate(struct clk_hw *hw, unsigned long rate,
+diff --git a/drivers/rtc/rtc-pcf85063.c b/drivers/rtc/rtc-pcf85063.c
+index dd3336cbb7921b..0c957144e8ec16 100644
+--- a/drivers/rtc/rtc-pcf85063.c
++++ b/drivers/rtc/rtc-pcf85063.c
+@@ -415,7 +415,7 @@ static long pcf85063_clkout_round_rate(struct clk_hw *hw, unsigned long rate,
+ if (clkout_rates[i] <= rate)
+ return clkout_rates[i];
+
+- return 0;
++ return clkout_rates[0];
+ }
+
+ static int pcf85063_clkout_set_rate(struct clk_hw *hw, unsigned long rate,
+diff --git a/drivers/rtc/rtc-pcf8563.c b/drivers/rtc/rtc-pcf8563.c
+index 2dc30eafa63961..129bd2f5177933 100644
+--- a/drivers/rtc/rtc-pcf8563.c
++++ b/drivers/rtc/rtc-pcf8563.c
+@@ -399,7 +399,7 @@ static long pcf8563_clkout_round_rate(struct clk_hw *hw, unsigned long rate,
+ if (clkout_rates[i] <= rate)
+ return clkout_rates[i];
+
+- return 0;
++ return clkout_rates[0];
+ }
+
+ static int pcf8563_clkout_set_rate(struct clk_hw *hw, unsigned long rate,
+diff --git a/drivers/rtc/rtc-rv3028.c b/drivers/rtc/rtc-rv3028.c
+index fa226f0fe67d71..56fa66b6cadf30 100644
+--- a/drivers/rtc/rtc-rv3028.c
++++ b/drivers/rtc/rtc-rv3028.c
+@@ -672,7 +672,7 @@ static long rv3028_clkout_round_rate(struct clk_hw *hw, unsigned long rate,
+ if (clkout_rates[i] <= rate)
+ return clkout_rates[i];
+
+- return 0;
++ return clkout_rates[0];
+ }
+
+ static int rv3028_clkout_set_rate(struct clk_hw *hw, unsigned long rate,
+diff --git a/drivers/scsi/aacraid/comminit.c b/drivers/scsi/aacraid/comminit.c
+index 34e45c87cae03c..7b520e824c2939 100644
+--- a/drivers/scsi/aacraid/comminit.c
++++ b/drivers/scsi/aacraid/comminit.c
+@@ -481,8 +481,7 @@ void aac_define_int_mode(struct aac_dev *dev)
+ pci_find_capability(dev->pdev, PCI_CAP_ID_MSIX)) {
+ min_msix = 2;
+ i = pci_alloc_irq_vectors(dev->pdev,
+- min_msix, msi_count,
+- PCI_IRQ_MSIX | PCI_IRQ_AFFINITY);
++ min_msix, msi_count, PCI_IRQ_MSIX);
+ if (i > 0) {
+ dev->msi_enabled = 1;
+ msi_count = i;
+diff --git a/drivers/scsi/bfa/bfad_im.c b/drivers/scsi/bfa/bfad_im.c
+index 22f06be2606f56..6dcf1094e01b23 100644
+--- a/drivers/scsi/bfa/bfad_im.c
++++ b/drivers/scsi/bfa/bfad_im.c
+@@ -707,6 +707,7 @@ bfad_im_probe(struct bfad_s *bfad)
+
+ if (bfad_thread_workq(bfad) != BFA_STATUS_OK) {
+ kfree(im);
++ bfad->im = NULL;
+ return BFA_STATUS_FAILED;
+ }
+
+diff --git a/drivers/scsi/ibmvscsi_tgt/libsrp.c b/drivers/scsi/ibmvscsi_tgt/libsrp.c
+index 8a0e28aec928e4..0ecad398ed3db0 100644
+--- a/drivers/scsi/ibmvscsi_tgt/libsrp.c
++++ b/drivers/scsi/ibmvscsi_tgt/libsrp.c
+@@ -184,7 +184,8 @@ static int srp_direct_data(struct ibmvscsis_cmd *cmd, struct srp_direct_buf *md,
+ err = rdma_io(cmd, sg, nsg, md, 1, dir, len);
+
+ if (dma_map)
+- dma_unmap_sg(iue->target->dev, sg, nsg, DMA_BIDIRECTIONAL);
++ dma_unmap_sg(iue->target->dev, sg, cmd->se_cmd.t_data_nents,
++ DMA_BIDIRECTIONAL);
+
+ return err;
+ }
+@@ -256,7 +257,8 @@ static int srp_indirect_data(struct ibmvscsis_cmd *cmd, struct srp_cmd *srp_cmd,
+ err = rdma_io(cmd, sg, nsg, md, nmd, dir, len);
+
+ if (dma_map)
+- dma_unmap_sg(iue->target->dev, sg, nsg, DMA_BIDIRECTIONAL);
++ dma_unmap_sg(iue->target->dev, sg, cmd->se_cmd.t_data_nents,
++ DMA_BIDIRECTIONAL);
+
+ free_mem:
+ if (token && dma_map) {
+diff --git a/drivers/scsi/isci/request.c b/drivers/scsi/isci/request.c
+index a4129e456efa01..b375245ce2cd71 100644
+--- a/drivers/scsi/isci/request.c
++++ b/drivers/scsi/isci/request.c
+@@ -2914,7 +2914,7 @@ static void isci_request_io_request_complete(struct isci_host *ihost,
+ task->total_xfer_len, task->data_dir);
+ else /* unmap the sgl dma addresses */
+ dma_unmap_sg(&ihost->pdev->dev, task->scatter,
+- request->num_sg_entries, task->data_dir);
++ task->num_scatter, task->data_dir);
+ break;
+ case SAS_PROTOCOL_SMP: {
+ struct scatterlist *sg = &task->smp_task.smp_req;
+diff --git a/drivers/scsi/libiscsi.c b/drivers/scsi/libiscsi.c
+index 05799b41974d5b..bad5730bf7ab3a 100644
+--- a/drivers/scsi/libiscsi.c
++++ b/drivers/scsi/libiscsi.c
+@@ -2949,7 +2949,8 @@ iscsi_conn_setup(struct iscsi_cls_session *cls_session, int dd_size,
+ conn = cls_conn->dd_data;
+ memset(conn, 0, sizeof(*conn) + dd_size);
+
+- conn->dd_data = cls_conn->dd_data + sizeof(*conn);
++ if (dd_size)
++ conn->dd_data = cls_conn->dd_data + sizeof(*conn);
+ conn->session = session;
+ conn->cls_conn = cls_conn;
+ conn->c_stage = ISCSI_CONN_INITIAL_STAGE;
+diff --git a/drivers/scsi/lpfc/lpfc_debugfs.c b/drivers/scsi/lpfc/lpfc_debugfs.c
+index f91eee01ce95ee..c3e77db18945f1 100644
+--- a/drivers/scsi/lpfc/lpfc_debugfs.c
++++ b/drivers/scsi/lpfc/lpfc_debugfs.c
+@@ -6104,7 +6104,6 @@ lpfc_debugfs_initialize(struct lpfc_vport *vport)
+ }
+ phba->nvmeio_trc_on = 1;
+ phba->nvmeio_trc_output_idx = 0;
+- phba->nvmeio_trc = NULL;
+ } else {
+ nvmeio_off:
+ phba->nvmeio_trc_size = 0;
+diff --git a/drivers/scsi/lpfc/lpfc_scsi.c b/drivers/scsi/lpfc/lpfc_scsi.c
+index 2121534838747c..a35426409a6fba 100644
+--- a/drivers/scsi/lpfc/lpfc_scsi.c
++++ b/drivers/scsi/lpfc/lpfc_scsi.c
+@@ -469,6 +469,10 @@ lpfc_sli4_vport_delete_fcp_xri_aborted(struct lpfc_vport *vport)
+ if (!(vport->cfg_enable_fc4_type & LPFC_ENABLE_FCP))
+ return;
+
++ /* may be called before queues established if hba_setup fails */
++ if (!phba->sli4_hba.hdwq)
++ return;
++
+ spin_lock_irqsave(&phba->hbalock, iflag);
+ for (idx = 0; idx < phba->cfg_hdw_queue; idx++) {
+ qp = &phba->sli4_hba.hdwq[idx];
+diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
+index ff39c596f00079..49931577da38ba 100644
+--- a/drivers/scsi/lpfc/lpfc_sli.c
++++ b/drivers/scsi/lpfc/lpfc_sli.c
+@@ -11432,10 +11432,12 @@ lpfc_sli_abort_iotag_issue(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
+ if (cmdiocb->iocb_flag & LPFC_IO_FOF)
+ abtsiocbp->iocb_flag |= LPFC_IO_FOF;
+
+- if (phba->link_state >= LPFC_LINK_UP)
+- iabt->ulpCommand = CMD_ABORT_XRI_CN;
+- else
++ if (phba->link_state < LPFC_LINK_UP ||
++ (phba->sli_rev == LPFC_SLI_REV4 &&
++ phba->sli4_hba.link_state.status == LPFC_FC_LA_TYPE_LINK_DOWN))
+ iabt->ulpCommand = CMD_CLOSE_XRI_CN;
++ else
++ iabt->ulpCommand = CMD_ABORT_XRI_CN;
+
+ abtsiocbp->iocb_cmpl = lpfc_sli_abort_els_cmpl;
+ abtsiocbp->vport = vport;
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_scsih.c b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+index e797f6e3982cf5..4f4c2a20f47aef 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_scsih.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+@@ -181,6 +181,14 @@ struct sense_info {
+ #define MPT3SAS_PORT_ENABLE_COMPLETE (0xFFFD)
+ #define MPT3SAS_ABRT_TASK_SET (0xFFFE)
+ #define MPT3SAS_REMOVE_UNRESPONDING_DEVICES (0xFFFF)
++
++/*
++ * SAS Log info code for a NCQ collateral abort after an NCQ error:
++ * IOC_LOGINFO_PREFIX_PL | PL_LOGINFO_CODE_SATA_NCQ_FAIL_ALL_CMDS_AFTR_ERR
++ * See: drivers/message/fusion/lsi/mpi_log_sas.h
++ */
++#define IOC_LOGINFO_SATA_NCQ_FAIL_AFTER_ERR 0x31080000
++
+ /**
+ * struct fw_event_work - firmware event struct
+ * @list: link list framework
+@@ -5628,6 +5636,17 @@ _scsih_io_done(struct MPT3SAS_ADAPTER *ioc, u16 smid, u8 msix_index, u32 reply)
+ scmd->result = DID_TRANSPORT_DISRUPTED << 16;
+ goto out;
+ }
++ if (log_info == IOC_LOGINFO_SATA_NCQ_FAIL_AFTER_ERR) {
++ /*
++ * This is a ATA NCQ command aborted due to another NCQ
++ * command failure. We must retry this command
++ * immediately but without incrementing its retry
++ * counter.
++ */
++ WARN_ON_ONCE(xfer_cnt != 0);
++ scmd->result = DID_IMM_RETRY << 16;
++ break;
++ }
+ if (log_info == 0x31110630) {
+ if (scmd->retries > 2) {
+ scmd->result = DID_NO_CONNECT << 16;
+diff --git a/drivers/scsi/mvsas/mv_sas.c b/drivers/scsi/mvsas/mv_sas.c
+index a2a13969c686e6..239b81ab924f70 100644
+--- a/drivers/scsi/mvsas/mv_sas.c
++++ b/drivers/scsi/mvsas/mv_sas.c
+@@ -829,7 +829,7 @@ static int mvs_task_prep(struct sas_task *task, struct mvs_info *mvi, int is_tmf
+ dev_printk(KERN_ERR, mvi->dev, "mvsas prep failed[%d]!\n", rc);
+ if (!sas_protocol_ata(task->task_proto))
+ if (n_elem)
+- dma_unmap_sg(mvi->dev, task->scatter, n_elem,
++ dma_unmap_sg(mvi->dev, task->scatter, task->num_scatter,
+ task->data_dir);
+ prep_out:
+ return rc;
+@@ -880,7 +880,7 @@ static void mvs_slot_task_free(struct mvs_info *mvi, struct sas_task *task,
+ if (!sas_protocol_ata(task->task_proto))
+ if (slot->n_elem)
+ dma_unmap_sg(mvi->dev, task->scatter,
+- slot->n_elem, task->data_dir);
++ task->num_scatter, task->data_dir);
+
+ switch (task->task_proto) {
+ case SAS_PROTOCOL_SMP:
+diff --git a/drivers/scsi/pm8001/pm8001_init.c b/drivers/scsi/pm8001/pm8001_init.c
+index f40db6f40b1dc8..45bffa49f8766d 100644
+--- a/drivers/scsi/pm8001/pm8001_init.c
++++ b/drivers/scsi/pm8001/pm8001_init.c
+@@ -1166,6 +1166,7 @@ pm8001_init_ccb_tag(struct pm8001_hba_info *pm8001_ha, struct Scsi_Host *shost,
+ goto err_out;
+
+ /* Memory region for ccb_info*/
++ pm8001_ha->ccb_count = ccb_count;
+ pm8001_ha->ccb_info = (struct pm8001_ccb_info *)
+ kcalloc(ccb_count, sizeof(struct pm8001_ccb_info), GFP_KERNEL);
+ if (!pm8001_ha->ccb_info) {
+@@ -1226,6 +1227,16 @@ static void pm8001_pci_remove(struct pci_dev *pdev)
+ tasklet_kill(&pm8001_ha->tasklet[j]);
+ #endif
+ scsi_host_put(pm8001_ha->shost);
++
++ for (i = 0; i < pm8001_ha->ccb_count; i++) {
++ dma_free_coherent(&pm8001_ha->pdev->dev,
++ sizeof(struct pm8001_prd) * PM8001_MAX_DMA_SG,
++ pm8001_ha->ccb_info[i].buf_prd,
++ pm8001_ha->ccb_info[i].ccb_dma_handle);
++ }
++ kfree(pm8001_ha->ccb_info);
++ kfree(pm8001_ha->devices);
++
+ pm8001_free(pm8001_ha);
+ kfree(sha->sas_phy);
+ kfree(sha->sas_port);
+diff --git a/drivers/scsi/pm8001/pm8001_sas.h b/drivers/scsi/pm8001/pm8001_sas.h
+index 5cd6fe6a7d2d9f..74099d82e43649 100644
+--- a/drivers/scsi/pm8001/pm8001_sas.h
++++ b/drivers/scsi/pm8001/pm8001_sas.h
+@@ -515,6 +515,7 @@ struct pm8001_hba_info {
+ u32 iomb_size; /* SPC and SPCV IOMB size */
+ struct pm8001_device *devices;
+ struct pm8001_ccb_info *ccb_info;
++ u32 ccb_count;
+ #ifdef PM8001_USE_MSIX
+ int number_of_intr;/*will be used in remove()*/
+ char intr_drvname[PM8001_MAX_MSIX_VEC]
+diff --git a/drivers/scsi/qla4xxx/ql4_os.c b/drivers/scsi/qla4xxx/ql4_os.c
+index f02d8bbea3e511..fc938283343542 100644
+--- a/drivers/scsi/qla4xxx/ql4_os.c
++++ b/drivers/scsi/qla4xxx/ql4_os.c
+@@ -6619,6 +6619,8 @@ static struct iscsi_endpoint *qla4xxx_get_ep_fwdb(struct scsi_qla_host *ha,
+
+ ep = qla4xxx_ep_connect(ha->host, (struct sockaddr *)dst_addr, 0);
+ vfree(dst_addr);
++ if (IS_ERR(ep))
++ return NULL;
+ return ep;
+ }
+
+diff --git a/drivers/scsi/scsi_scan.c b/drivers/scsi/scsi_scan.c
+index e8703b043805e5..1dea44c1c568d3 100644
+--- a/drivers/scsi/scsi_scan.c
++++ b/drivers/scsi/scsi_scan.c
+@@ -1686,7 +1686,7 @@ int scsi_scan_host_selected(struct Scsi_Host *shost, unsigned int channel,
+
+ return 0;
+ }
+-
++EXPORT_SYMBOL(scsi_scan_host_selected);
+ static void scsi_sysfs_add_devices(struct Scsi_Host *shost)
+ {
+ struct scsi_device *sdev;
+diff --git a/drivers/scsi/scsi_transport_sas.c b/drivers/scsi/scsi_transport_sas.c
+index c6256fdc24b10c..1eb58f8765e22d 100644
+--- a/drivers/scsi/scsi_transport_sas.c
++++ b/drivers/scsi/scsi_transport_sas.c
+@@ -41,6 +41,8 @@
+ #include <scsi/scsi_transport_sas.h>
+
+ #include "scsi_sas_internal.h"
++#include "scsi_priv.h"
++
+ struct sas_host_attrs {
+ struct list_head rphy_list;
+ struct mutex lock;
+@@ -1652,32 +1654,66 @@ int scsi_is_sas_rphy(const struct device *dev)
+ }
+ EXPORT_SYMBOL(scsi_is_sas_rphy);
+
+-
+-/*
+- * SCSI scan helper
+- */
+-
+-static int sas_user_scan(struct Scsi_Host *shost, uint channel,
+- uint id, u64 lun)
++static void scan_channel_zero(struct Scsi_Host *shost, uint id, u64 lun)
+ {
+ struct sas_host_attrs *sas_host = to_sas_host_attrs(shost);
+ struct sas_rphy *rphy;
+
+- mutex_lock(&sas_host->lock);
+ list_for_each_entry(rphy, &sas_host->rphy_list, list) {
+ if (rphy->identify.device_type != SAS_END_DEVICE ||
+ rphy->scsi_target_id == -1)
+ continue;
+
+- if ((channel == SCAN_WILD_CARD || channel == 0) &&
+- (id == SCAN_WILD_CARD || id == rphy->scsi_target_id)) {
++ if (id == SCAN_WILD_CARD || id == rphy->scsi_target_id) {
+ scsi_scan_target(&rphy->dev, 0, rphy->scsi_target_id,
+ lun, SCSI_SCAN_MANUAL);
+ }
+ }
+- mutex_unlock(&sas_host->lock);
++}
+
+- return 0;
++/*
++ * SCSI scan helper
++ */
++
++static int sas_user_scan(struct Scsi_Host *shost, uint channel,
++ uint id, u64 lun)
++{
++ struct sas_host_attrs *sas_host = to_sas_host_attrs(shost);
++ int res = 0;
++ int i;
++
++ switch (channel) {
++ case 0:
++ mutex_lock(&sas_host->lock);
++ scan_channel_zero(shost, id, lun);
++ mutex_unlock(&sas_host->lock);
++ break;
++
++ case SCAN_WILD_CARD:
++ mutex_lock(&sas_host->lock);
++ scan_channel_zero(shost, id, lun);
++ mutex_unlock(&sas_host->lock);
++
++ for (i = 1; i <= shost->max_channel; i++) {
++ res = scsi_scan_host_selected(shost, i, id, lun,
++ SCSI_SCAN_MANUAL);
++ if (res)
++ goto exit_scan;
++ }
++ break;
++
++ default:
++ if (channel < shost->max_channel) {
++ res = scsi_scan_host_selected(shost, channel, id, lun,
++ SCSI_SCAN_MANUAL);
++ } else {
++ res = -EINVAL;
++ }
++ break;
++ }
++
++exit_scan:
++ return res;
+ }
+
+
+diff --git a/drivers/scsi/ufs/ufs-exynos.c b/drivers/scsi/ufs/ufs-exynos.c
+index 3bc7121921ce6b..3fd024c487ef30 100644
+--- a/drivers/scsi/ufs/ufs-exynos.c
++++ b/drivers/scsi/ufs/ufs-exynos.c
+@@ -850,8 +850,8 @@ static int exynos_ufs_post_link(struct ufs_hba *hba)
+ hci_writel(ufs, 0xa, HCI_DATA_REORDER);
+ hci_writel(ufs, PRDT_SET_SIZE(12), HCI_TXPRDT_ENTRY_SIZE);
+ hci_writel(ufs, PRDT_SET_SIZE(12), HCI_RXPRDT_ENTRY_SIZE);
+- hci_writel(ufs, (1 << hba->nutrs) - 1, HCI_UTRL_NEXUS_TYPE);
+- hci_writel(ufs, (1 << hba->nutmrs) - 1, HCI_UTMRL_NEXUS_TYPE);
++ hci_writel(ufs, BIT(hba->nutrs) - 1, HCI_UTRL_NEXUS_TYPE);
++ hci_writel(ufs, BIT(hba->nutmrs) - 1, HCI_UTMRL_NEXUS_TYPE);
+ hci_writel(ufs, 0xf, HCI_AXIDMA_RWDATA_BURST_LEN);
+
+ if (ufs->opts & EXYNOS_UFS_OPT_SKIP_CONNECTION_ESTAB)
+diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
+index a4c70fbc809f1b..a212e6ad11d5f6 100644
+--- a/drivers/scsi/ufs/ufshcd.c
++++ b/drivers/scsi/ufs/ufshcd.c
+@@ -3872,7 +3872,7 @@ static int ufshcd_uic_pwr_ctrl(struct ufs_hba *hba, struct uic_command *cmd)
+ hba->uic_async_done = NULL;
+ if (reenable_intr)
+ ufshcd_enable_intr(hba, UIC_COMMAND_COMPL);
+- if (ret) {
++ if (ret && !hba->pm_op_in_progress) {
+ ufshcd_set_link_broken(hba);
+ ufshcd_schedule_eh_work(hba);
+ }
+@@ -3880,6 +3880,14 @@ static int ufshcd_uic_pwr_ctrl(struct ufs_hba *hba, struct uic_command *cmd)
+ spin_unlock_irqrestore(hba->host->host_lock, flags);
+ mutex_unlock(&hba->uic_cmd_mutex);
+
++ /*
++ * If the h8 exit fails during the runtime resume process, it becomes
++ * stuck and cannot be recovered through the error handler. To fix
++ * this, use link recovery instead of the error handler.
++ */
++ if (ret && hba->pm_op_in_progress)
++ ret = ufshcd_link_recovery(hba);
++
+ return ret;
+ }
+
+diff --git a/drivers/soc/aspeed/aspeed-lpc-snoop.c b/drivers/soc/aspeed/aspeed-lpc-snoop.c
+index 43e30937fc9da2..f3462ce22a0bdc 100644
+--- a/drivers/soc/aspeed/aspeed-lpc-snoop.c
++++ b/drivers/soc/aspeed/aspeed-lpc-snoop.c
+@@ -60,6 +60,7 @@ struct aspeed_lpc_snoop_model_data {
+ };
+
+ struct aspeed_lpc_snoop_channel {
++ bool enabled;
+ struct kfifo fifo;
+ wait_queue_head_t wq;
+ struct miscdevice miscdev;
+@@ -192,6 +193,9 @@ static int aspeed_lpc_enable_snoop(struct aspeed_lpc_snoop *lpc_snoop,
+ const struct aspeed_lpc_snoop_model_data *model_data =
+ of_device_get_match_data(dev);
+
++ if (WARN_ON(lpc_snoop->chan[channel].enabled))
++ return -EBUSY;
++
+ init_waitqueue_head(&lpc_snoop->chan[channel].wq);
+ /* Create FIFO datastructure */
+ rc = kfifo_alloc(&lpc_snoop->chan[channel].fifo,
+@@ -238,6 +242,8 @@ static int aspeed_lpc_enable_snoop(struct aspeed_lpc_snoop *lpc_snoop,
+ regmap_update_bits(lpc_snoop->regmap, HICRB,
+ hicrb_en, hicrb_en);
+
++ lpc_snoop->chan[channel].enabled = true;
++
+ return 0;
+
+ err_misc_deregister:
+@@ -250,6 +256,9 @@ static int aspeed_lpc_enable_snoop(struct aspeed_lpc_snoop *lpc_snoop,
+ static void aspeed_lpc_disable_snoop(struct aspeed_lpc_snoop *lpc_snoop,
+ int channel)
+ {
++ if (!lpc_snoop->chan[channel].enabled)
++ return;
++
+ switch (channel) {
+ case 0:
+ regmap_update_bits(lpc_snoop->regmap, HICR5,
+@@ -265,8 +274,10 @@ static void aspeed_lpc_disable_snoop(struct aspeed_lpc_snoop *lpc_snoop,
+ return;
+ }
+
+- kfifo_free(&lpc_snoop->chan[channel].fifo);
++ lpc_snoop->chan[channel].enabled = false;
++ /* Consider improving safety wrt concurrent reader(s) */
+ misc_deregister(&lpc_snoop->chan[channel].miscdev);
++ kfifo_free(&lpc_snoop->chan[channel].fifo);
+ }
+
+ static int aspeed_lpc_snoop_probe(struct platform_device *pdev)
+diff --git a/drivers/soc/qcom/mdt_loader.c b/drivers/soc/qcom/mdt_loader.c
+index 6034cd8992b0ea..c2bbde533e66a4 100644
+--- a/drivers/soc/qcom/mdt_loader.c
++++ b/drivers/soc/qcom/mdt_loader.c
+@@ -12,11 +12,43 @@
+ #include <linux/firmware.h>
+ #include <linux/kernel.h>
+ #include <linux/module.h>
++#include <linux/overflow.h>
+ #include <linux/qcom_scm.h>
+ #include <linux/sizes.h>
+ #include <linux/slab.h>
+ #include <linux/soc/qcom/mdt_loader.h>
+
++static bool mdt_header_valid(const struct firmware *fw)
++{
++ const struct elf32_hdr *ehdr;
++ size_t phend;
++ size_t shend;
++
++ if (fw->size < sizeof(*ehdr))
++ return false;
++
++ ehdr = (struct elf32_hdr *)fw->data;
++
++ if (memcmp(ehdr->e_ident, ELFMAG, SELFMAG))
++ return false;
++
++ if (ehdr->e_phentsize != sizeof(struct elf32_phdr))
++ return false;
++
++ phend = size_add(size_mul(sizeof(struct elf32_phdr), ehdr->e_phnum), ehdr->e_phoff);
++ if (phend > fw->size)
++ return false;
++
++ if (ehdr->e_shentsize != sizeof(struct elf32_shdr))
++ return false;
++
++ shend = size_add(size_mul(sizeof(struct elf32_shdr), ehdr->e_shnum), ehdr->e_shoff);
++ if (shend > fw->size)
++ return false;
++
++ return true;
++}
++
+ static bool mdt_phdr_valid(const struct elf32_phdr *phdr)
+ {
+ if (phdr->p_type != PT_LOAD)
+@@ -46,6 +78,9 @@ ssize_t qcom_mdt_get_size(const struct firmware *fw)
+ phys_addr_t max_addr = 0;
+ int i;
+
++ if (!mdt_header_valid(fw))
++ return -EINVAL;
++
+ ehdr = (struct elf32_hdr *)fw->data;
+ phdrs = (struct elf32_phdr *)(ehdr + 1);
+
+@@ -92,6 +127,9 @@ void *qcom_mdt_read_metadata(const struct firmware *fw, size_t *data_len)
+ size_t ehdr_size;
+ void *data;
+
++ if (!mdt_header_valid(fw))
++ return ERR_PTR(-EINVAL);
++
+ ehdr = (struct elf32_hdr *)fw->data;
+ phdrs = (struct elf32_phdr *)(ehdr + 1);
+
+@@ -151,6 +189,9 @@ static int __qcom_mdt_load(struct device *dev, const struct firmware *fw,
+ if (!fw || !mem_region || !mem_phys || !mem_size)
+ return -EINVAL;
+
++ if (!mdt_header_valid(fw))
++ return -EINVAL;
++
+ ehdr = (struct elf32_hdr *)fw->data;
+ phdrs = (struct elf32_phdr *)(ehdr + 1);
+
+diff --git a/drivers/soc/tegra/pmc.c b/drivers/soc/tegra/pmc.c
+index 5726c232e61d5b..f2666fe8e8067d 100644
+--- a/drivers/soc/tegra/pmc.c
++++ b/drivers/soc/tegra/pmc.c
+@@ -1064,7 +1064,7 @@ static int tegra_powergate_of_get_clks(struct tegra_powergate *pg,
+ }
+
+ static int tegra_powergate_of_get_resets(struct tegra_powergate *pg,
+- struct device_node *np, bool off)
++ struct device_node *np)
+ {
+ struct device *dev = pg->pmc->dev;
+ int err;
+@@ -1079,22 +1079,6 @@ static int tegra_powergate_of_get_resets(struct tegra_powergate *pg,
+ err = reset_control_acquire(pg->reset);
+ if (err < 0) {
+ pr_err("failed to acquire resets: %d\n", err);
+- goto out;
+- }
+-
+- if (off) {
+- err = reset_control_assert(pg->reset);
+- } else {
+- err = reset_control_deassert(pg->reset);
+- if (err < 0)
+- goto out;
+-
+- reset_control_release(pg->reset);
+- }
+-
+-out:
+- if (err) {
+- reset_control_release(pg->reset);
+ reset_control_put(pg->reset);
+ }
+
+@@ -1139,20 +1123,43 @@ static int tegra_powergate_add(struct tegra_pmc *pmc, struct device_node *np)
+ goto set_available;
+ }
+
+- err = tegra_powergate_of_get_resets(pg, np, off);
++ err = tegra_powergate_of_get_resets(pg, np);
+ if (err < 0) {
+ dev_err(dev, "failed to get resets for %pOFn: %d\n", np, err);
+ goto remove_clks;
+ }
+
+- if (!IS_ENABLED(CONFIG_PM_GENERIC_DOMAINS)) {
+- if (off)
+- WARN_ON(tegra_powergate_power_up(pg, true));
++ /*
++ * If the power-domain is off, then ensure the resets are asserted.
++ * If the power-domain is on, then power down to ensure that when is
++ * it turned on the power-domain, clocks and resets are all in the
++ * expected state.
++ */
++ if (off) {
++ err = reset_control_assert(pg->reset);
++ if (err) {
++ pr_err("failed to assert resets: %d\n", err);
++ goto remove_resets;
++ }
++ } else {
++ err = tegra_powergate_power_down(pg);
++ if (err) {
++ dev_err(dev, "failed to turn off PM domain %s: %d\n",
++ pg->genpd.name, err);
++ goto remove_resets;
++ }
++ }
+
++ /*
++ * If PM_GENERIC_DOMAINS is not enabled, power-on
++ * the domain and skip the genpd registration.
++ */
++ if (!IS_ENABLED(CONFIG_PM_GENERIC_DOMAINS)) {
++ WARN_ON(tegra_powergate_power_up(pg, true));
+ goto remove_resets;
+ }
+
+- err = pm_genpd_init(&pg->genpd, NULL, off);
++ err = pm_genpd_init(&pg->genpd, NULL, true);
+ if (err < 0) {
+ dev_err(dev, "failed to initialise PM domain %pOFn: %d\n", np,
+ err);
+diff --git a/drivers/soundwire/stream.c b/drivers/soundwire/stream.c
+index a377c3d02c5592..e4ceaea331a292 100644
+--- a/drivers/soundwire/stream.c
++++ b/drivers/soundwire/stream.c
+@@ -1555,7 +1555,7 @@ static int _sdw_prepare_stream(struct sdw_stream_runtime *stream,
+ if (ret < 0) {
+ dev_err(bus->dev, "Prepare port(s) failed ret = %d\n",
+ ret);
+- return ret;
++ goto restore_params;
+ }
+ }
+
+diff --git a/drivers/staging/comedi/comedi_fops.c b/drivers/staging/comedi/comedi_fops.c
+index 8f896e6208a8d1..854b8bdc57a1b7 100644
+--- a/drivers/staging/comedi/comedi_fops.c
++++ b/drivers/staging/comedi/comedi_fops.c
+@@ -783,6 +783,7 @@ static int is_device_busy(struct comedi_device *dev)
+ struct comedi_subdevice *s;
+ int i;
+
++ lockdep_assert_held_write(&dev->attach_lock);
+ lockdep_assert_held(&dev->mutex);
+ if (!dev->attached)
+ return 0;
+@@ -791,7 +792,16 @@ static int is_device_busy(struct comedi_device *dev)
+ s = &dev->subdevices[i];
+ if (s->busy)
+ return 1;
+- if (s->async && comedi_buf_is_mmapped(s))
++ if (!s->async)
++ continue;
++ if (comedi_buf_is_mmapped(s))
++ return 1;
++ /*
++ * There may be tasks still waiting on the subdevice's wait
++ * queue, although they should already be about to be removed
++ * from it since the subdevice has no active async command.
++ */
++ if (wq_has_sleeper(&s->async->wait_head))
+ return 1;
+ }
+
+@@ -821,15 +831,22 @@ static int do_devconfig_ioctl(struct comedi_device *dev,
+ return -EPERM;
+
+ if (!arg) {
+- if (is_device_busy(dev))
+- return -EBUSY;
++ int rc = 0;
++
+ if (dev->attached) {
+- struct module *driver_module = dev->driver->module;
++ down_write(&dev->attach_lock);
++ if (is_device_busy(dev)) {
++ rc = -EBUSY;
++ } else {
++ struct module *driver_module =
++ dev->driver->module;
+
+- comedi_device_detach(dev);
+- module_put(driver_module);
++ comedi_device_detach_locked(dev);
++ module_put(driver_module);
++ }
++ up_write(&dev->attach_lock);
+ }
+- return 0;
++ return rc;
+ }
+
+ if (copy_from_user(&it, arg, sizeof(it)))
+@@ -1551,21 +1568,27 @@ static int do_insnlist_ioctl(struct comedi_device *dev,
+ }
+
+ for (i = 0; i < n_insns; ++i) {
++ unsigned int n = insns[i].n;
++
+ if (insns[i].insn & INSN_MASK_WRITE) {
+ if (copy_from_user(data, insns[i].data,
+- insns[i].n * sizeof(unsigned int))) {
++ n * sizeof(unsigned int))) {
+ dev_dbg(dev->class_dev,
+ "copy_from_user failed\n");
+ ret = -EFAULT;
+ goto error;
+ }
++ if (n < MIN_SAMPLES) {
++ memset(&data[n], 0, (MIN_SAMPLES - n) *
++ sizeof(unsigned int));
++ }
+ }
+ ret = parse_insn(dev, insns + i, data, file);
+ if (ret < 0)
+ goto error;
+ if (insns[i].insn & INSN_MASK_READ) {
+ if (copy_to_user(insns[i].data, data,
+- insns[i].n * sizeof(unsigned int))) {
++ n * sizeof(unsigned int))) {
+ dev_dbg(dev->class_dev,
+ "copy_to_user failed\n");
+ ret = -EFAULT;
+@@ -1584,6 +1607,16 @@ static int do_insnlist_ioctl(struct comedi_device *dev,
+ return i;
+ }
+
++#define MAX_INSNS MAX_SAMPLES
++static int check_insnlist_len(struct comedi_device *dev, unsigned int n_insns)
++{
++ if (n_insns > MAX_INSNS) {
++ dev_dbg(dev->class_dev, "insnlist length too large\n");
++ return -EINVAL;
++ }
++ return 0;
++}
++
+ /*
+ * COMEDI_INSN ioctl
+ * synchronous instruction
+@@ -1628,6 +1661,10 @@ static int do_insn_ioctl(struct comedi_device *dev,
+ ret = -EFAULT;
+ goto error;
+ }
++ if (insn->n < MIN_SAMPLES) {
++ memset(&data[insn->n], 0,
++ (MIN_SAMPLES - insn->n) * sizeof(unsigned int));
++ }
+ }
+ ret = parse_insn(dev, insn, data, file);
+ if (ret < 0)
+@@ -2234,6 +2271,9 @@ static long comedi_unlocked_ioctl(struct file *file, unsigned int cmd,
+ rc = -EFAULT;
+ break;
+ }
++ rc = check_insnlist_len(dev, insnlist.n_insns);
++ if (rc)
++ break;
+ insns = kcalloc(insnlist.n_insns, sizeof(*insns), GFP_KERNEL);
+ if (!insns) {
+ rc = -ENOMEM;
+@@ -3085,6 +3125,9 @@ static int compat_insnlist(struct file *file, unsigned long arg)
+ if (copy_from_user(&insnlist32, compat_ptr(arg), sizeof(insnlist32)))
+ return -EFAULT;
+
++ rc = check_insnlist_len(dev, insnlist32.n_insns);
++ if (rc)
++ return rc;
+ insns = kcalloc(insnlist32.n_insns, sizeof(*insns), GFP_KERNEL);
+ if (!insns)
+ return -ENOMEM;
+diff --git a/drivers/staging/comedi/comedi_internal.h b/drivers/staging/comedi/comedi_internal.h
+index 9b3631a654c895..cf10ba016ebc81 100644
+--- a/drivers/staging/comedi/comedi_internal.h
++++ b/drivers/staging/comedi/comedi_internal.h
+@@ -50,6 +50,7 @@ extern struct mutex comedi_drivers_list_lock;
+ int insn_inval(struct comedi_device *dev, struct comedi_subdevice *s,
+ struct comedi_insn *insn, unsigned int *data);
+
++void comedi_device_detach_locked(struct comedi_device *dev);
+ void comedi_device_detach(struct comedi_device *dev);
+ int comedi_device_attach(struct comedi_device *dev,
+ struct comedi_devconfig *it);
+diff --git a/drivers/staging/comedi/drivers.c b/drivers/staging/comedi/drivers.c
+index 750a6ff3c03c26..fd098e62a308ce 100644
+--- a/drivers/staging/comedi/drivers.c
++++ b/drivers/staging/comedi/drivers.c
+@@ -159,7 +159,7 @@ static void comedi_device_detach_cleanup(struct comedi_device *dev)
+ int i;
+ struct comedi_subdevice *s;
+
+- lockdep_assert_held(&dev->attach_lock);
++ lockdep_assert_held_write(&dev->attach_lock);
+ lockdep_assert_held(&dev->mutex);
+ if (dev->subdevices) {
+ for (i = 0; i < dev->n_subdevices; i++) {
+@@ -196,16 +196,23 @@ static void comedi_device_detach_cleanup(struct comedi_device *dev)
+ comedi_clear_hw_dev(dev);
+ }
+
+-void comedi_device_detach(struct comedi_device *dev)
++void comedi_device_detach_locked(struct comedi_device *dev)
+ {
++ lockdep_assert_held_write(&dev->attach_lock);
+ lockdep_assert_held(&dev->mutex);
+ comedi_device_cancel_all(dev);
+- down_write(&dev->attach_lock);
+ dev->attached = false;
+ dev->detach_count++;
+ if (dev->driver)
+ dev->driver->detach(dev);
+ comedi_device_detach_cleanup(dev);
++}
++
++void comedi_device_detach(struct comedi_device *dev)
++{
++ lockdep_assert_held(&dev->mutex);
++ down_write(&dev->attach_lock);
++ comedi_device_detach_locked(dev);
+ up_write(&dev->attach_lock);
+ }
+
+@@ -339,10 +346,10 @@ int comedi_dio_insn_config(struct comedi_device *dev,
+ unsigned int *data,
+ unsigned int mask)
+ {
+- unsigned int chan_mask = 1 << CR_CHAN(insn->chanspec);
++ unsigned int chan = CR_CHAN(insn->chanspec);
+
+- if (!mask)
+- mask = chan_mask;
++ if (!mask && chan < 32)
++ mask = 1U << chan;
+
+ switch (data[0]) {
+ case INSN_CONFIG_DIO_INPUT:
+@@ -382,7 +389,7 @@ EXPORT_SYMBOL_GPL(comedi_dio_insn_config);
+ unsigned int comedi_dio_update_state(struct comedi_subdevice *s,
+ unsigned int *data)
+ {
+- unsigned int chanmask = (s->n_chan < 32) ? ((1 << s->n_chan) - 1)
++ unsigned int chanmask = (s->n_chan < 32) ? ((1U << s->n_chan) - 1)
+ : 0xffffffff;
+ unsigned int mask = data[0] & chanmask;
+ unsigned int bits = data[1];
+@@ -615,6 +622,9 @@ static int insn_rw_emulate_bits(struct comedi_device *dev,
+ unsigned int _data[2];
+ int ret;
+
++ if (insn->n == 0)
++ return 0;
++
+ memset(_data, 0, sizeof(_data));
+ memset(&_insn, 0, sizeof(_insn));
+ _insn.insn = INSN_BITS;
+@@ -625,8 +635,8 @@ static int insn_rw_emulate_bits(struct comedi_device *dev,
+ if (insn->insn == INSN_WRITE) {
+ if (!(s->subdev_flags & SDF_WRITABLE))
+ return -EINVAL;
+- _data[0] = 1 << (chan - base_chan); /* mask */
+- _data[1] = data[0] ? (1 << (chan - base_chan)) : 0; /* bits */
++ _data[0] = 1U << (chan - base_chan); /* mask */
++ _data[1] = data[0] ? (1U << (chan - base_chan)) : 0; /* bits */
+ }
+
+ ret = s->insn_bits(dev, s, &_insn, _data);
+@@ -709,7 +719,7 @@ static int __comedi_device_postconfig(struct comedi_device *dev)
+
+ if (s->type == COMEDI_SUBD_DO) {
+ if (s->n_chan < 32)
+- s->io_bits = (1 << s->n_chan) - 1;
++ s->io_bits = (1U << s->n_chan) - 1;
+ else
+ s->io_bits = 0xffffffff;
+ }
+diff --git a/drivers/staging/comedi/drivers/aio_iiro_16.c b/drivers/staging/comedi/drivers/aio_iiro_16.c
+index fe38762350752f..60c9c683906bb1 100644
+--- a/drivers/staging/comedi/drivers/aio_iiro_16.c
++++ b/drivers/staging/comedi/drivers/aio_iiro_16.c
+@@ -178,7 +178,8 @@ static int aio_iiro_16_attach(struct comedi_device *dev,
+ * Digital input change of state interrupts are optionally supported
+ * using IRQ 2-7, 10-12, 14, or 15.
+ */
+- if ((1 << it->options[1]) & 0xdcfc) {
++ if (it->options[1] > 0 && it->options[1] < 16 &&
++ (1 << it->options[1]) & 0xdcfc) {
+ ret = request_irq(it->options[1], aio_iiro_16_cos, 0,
+ dev->board_name, dev);
+ if (ret == 0)
+diff --git a/drivers/staging/comedi/drivers/comedi_test.c b/drivers/staging/comedi/drivers/comedi_test.c
+index bea9a3adf08c85..f5199474c0e935 100644
+--- a/drivers/staging/comedi/drivers/comedi_test.c
++++ b/drivers/staging/comedi/drivers/comedi_test.c
+@@ -790,7 +790,7 @@ static void waveform_detach(struct comedi_device *dev)
+ {
+ struct waveform_private *devpriv = dev->private;
+
+- if (devpriv) {
++ if (devpriv && dev->n_subdevices) {
+ del_timer_sync(&devpriv->ai_timer);
+ del_timer_sync(&devpriv->ao_timer);
+ }
+diff --git a/drivers/staging/comedi/drivers/das16m1.c b/drivers/staging/comedi/drivers/das16m1.c
+index 75f3dbbe97ac8b..0d54387a1c2659 100644
+--- a/drivers/staging/comedi/drivers/das16m1.c
++++ b/drivers/staging/comedi/drivers/das16m1.c
+@@ -523,7 +523,8 @@ static int das16m1_attach(struct comedi_device *dev,
+ devpriv->extra_iobase = dev->iobase + DAS16M1_8255_IOBASE;
+
+ /* only irqs 2, 3, 4, 5, 6, 7, 10, 11, 12, 14, and 15 are valid */
+- if ((1 << it->options[1]) & 0xdcfc) {
++ if (it->options[1] >= 2 && it->options[1] <= 15 &&
++ (1 << it->options[1]) & 0xdcfc) {
+ ret = request_irq(it->options[1], das16m1_interrupt, 0,
+ dev->board_name, dev);
+ if (ret == 0)
+diff --git a/drivers/staging/comedi/drivers/das6402.c b/drivers/staging/comedi/drivers/das6402.c
+index 96f4107b8054df..927d4b832ecc66 100644
+--- a/drivers/staging/comedi/drivers/das6402.c
++++ b/drivers/staging/comedi/drivers/das6402.c
+@@ -569,7 +569,8 @@ static int das6402_attach(struct comedi_device *dev,
+ das6402_reset(dev);
+
+ /* IRQs 2,3,5,6,7, 10,11,15 are valid for "enhanced" mode */
+- if ((1 << it->options[1]) & 0x8cec) {
++ if (it->options[1] > 0 && it->options[1] < 16 &&
++ (1 << it->options[1]) & 0x8cec) {
+ ret = request_irq(it->options[1], das6402_interrupt, 0,
+ dev->board_name, dev);
+ if (ret == 0) {
+diff --git a/drivers/staging/comedi/drivers/pcl812.c b/drivers/staging/comedi/drivers/pcl812.c
+index b87ab3840eeef2..fc06f284ba746b 100644
+--- a/drivers/staging/comedi/drivers/pcl812.c
++++ b/drivers/staging/comedi/drivers/pcl812.c
+@@ -1151,7 +1151,8 @@ static int pcl812_attach(struct comedi_device *dev, struct comedi_devconfig *it)
+ if (!dev->pacer)
+ return -ENOMEM;
+
+- if ((1 << it->options[1]) & board->irq_bits) {
++ if (it->options[1] > 0 && it->options[1] < 16 &&
++ (1 << it->options[1]) & board->irq_bits) {
+ ret = request_irq(it->options[1], pcl812_interrupt, 0,
+ dev->board_name, dev);
+ if (ret == 0)
+diff --git a/drivers/staging/fbtft/fbtft-core.c b/drivers/staging/fbtft/fbtft-core.c
+index d0c8d85f3db0f5..2c04fcff0e1c51 100644
+--- a/drivers/staging/fbtft/fbtft-core.c
++++ b/drivers/staging/fbtft/fbtft-core.c
+@@ -745,6 +745,7 @@ struct fb_info *fbtft_framebuffer_alloc(struct fbtft_display *display,
+ return info;
+
+ release_framebuf:
++ fb_deferred_io_cleanup(info);
+ framebuffer_release(info);
+
+ alloc_fail:
+diff --git a/drivers/staging/media/imx/imx-media-csc-scaler.c b/drivers/staging/media/imx/imx-media-csc-scaler.c
+index 939843b895440b..7ffd9e80a1d42a 100644
+--- a/drivers/staging/media/imx/imx-media-csc-scaler.c
++++ b/drivers/staging/media/imx/imx-media-csc-scaler.c
+@@ -914,7 +914,7 @@ imx_media_csc_scaler_device_init(struct imx_media_dev *md)
+ return &priv->vdev;
+
+ err_m2m:
+- video_set_drvdata(vfd, NULL);
++ video_device_release(vfd);
+ err_vfd:
+ kfree(priv);
+ return ERR_PTR(ret);
+diff --git a/drivers/staging/nvec/nvec_power.c b/drivers/staging/nvec/nvec_power.c
+index 0e861c4bfcbfa0..590b801c5992b0 100644
+--- a/drivers/staging/nvec/nvec_power.c
++++ b/drivers/staging/nvec/nvec_power.c
+@@ -194,7 +194,7 @@ static int nvec_power_bat_notifier(struct notifier_block *nb,
+ break;
+ case MANUFACTURER:
+ memcpy(power->bat_manu, &res->plc, res->length - 2);
+- power->bat_model[res->length - 2] = '\0';
++ power->bat_manu[res->length - 2] = '\0';
+ break;
+ case MODEL:
+ memcpy(power->bat_model, &res->plc, res->length - 2);
+diff --git a/drivers/thermal/thermal_sysfs.c b/drivers/thermal/thermal_sysfs.c
+index 05e9a3de80b597..d64af62abcc675 100644
+--- a/drivers/thermal/thermal_sysfs.c
++++ b/drivers/thermal/thermal_sysfs.c
+@@ -39,10 +39,13 @@ temp_show(struct device *dev, struct device_attribute *attr, char *buf)
+
+ ret = thermal_zone_get_temp(tz, &temperature);
+
+- if (ret)
+- return ret;
++ if (!ret)
++ return sprintf(buf, "%d\n", temperature);
+
+- return sprintf(buf, "%d\n", temperature);
++ if (ret == -EAGAIN)
++ return -ENODATA;
++
++ return ret;
+ }
+
+ static ssize_t
+diff --git a/drivers/thunderbolt/domain.c b/drivers/thunderbolt/domain.c
+index f0de94f7acbf9f..4414d44953d7e1 100644
+--- a/drivers/thunderbolt/domain.c
++++ b/drivers/thunderbolt/domain.c
+@@ -38,7 +38,7 @@ static bool match_service_id(const struct tb_service_id *id,
+ return false;
+ }
+
+- if (id->match_flags & TBSVC_MATCH_PROTOCOL_VERSION) {
++ if (id->match_flags & TBSVC_MATCH_PROTOCOL_REVISION) {
+ if (id->protocol_revision != svc->prtcrevs)
+ return false;
+ }
+diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
+index f6580d11c0fe46..3a65650c82cec7 100644
+--- a/drivers/thunderbolt/switch.c
++++ b/drivers/thunderbolt/switch.c
+@@ -1179,7 +1179,7 @@ int tb_dp_port_set_hops(struct tb_port *port, unsigned int video,
+ return ret;
+
+ data[0] &= ~ADP_DP_CS_0_VIDEO_HOPID_MASK;
+- data[1] &= ~ADP_DP_CS_1_AUX_RX_HOPID_MASK;
++ data[1] &= ~ADP_DP_CS_1_AUX_TX_HOPID_MASK;
+ data[1] &= ~ADP_DP_CS_1_AUX_RX_HOPID_MASK;
+
+ data[0] |= (video << ADP_DP_CS_0_VIDEO_HOPID_SHIFT) &
+diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c
+index 0042ac7e713b72..c65a190ac060a0 100644
+--- a/drivers/tty/serial/8250/8250_port.c
++++ b/drivers/tty/serial/8250/8250_port.c
+@@ -2329,9 +2329,8 @@ int serial8250_do_startup(struct uart_port *port)
+ /*
+ * Now, initialize the UART
+ */
+- serial_port_out(port, UART_LCR, UART_LCR_WLEN8);
+-
+ spin_lock_irqsave(&port->lock, flags);
++ serial_port_out(port, UART_LCR, UART_LCR_WLEN8);
+ if (up->port.flags & UPF_FOURPORT) {
+ if (!up->port.irq)
+ up->port.mctrl |= TIOCM_OUT1;
+diff --git a/drivers/tty/serial/pch_uart.c b/drivers/tty/serial/pch_uart.c
+index cb68a1028090a2..e96aff583dc86e 100644
+--- a/drivers/tty/serial/pch_uart.c
++++ b/drivers/tty/serial/pch_uart.c
+@@ -1018,7 +1018,7 @@ static unsigned int dma_handle_tx(struct eg20t_port *priv)
+ __func__);
+ return 0;
+ }
+- dma_sync_sg_for_device(port->dev, priv->sg_tx_p, nent, DMA_TO_DEVICE);
++ dma_sync_sg_for_device(port->dev, priv->sg_tx_p, num, DMA_TO_DEVICE);
+ priv->desc_tx = desc;
+ desc->callback = pch_dma_tx_complete;
+ desc->callback_param = priv;
+diff --git a/drivers/tty/vt/defkeymap.c_shipped b/drivers/tty/vt/defkeymap.c_shipped
+index c7095fb7d2d15a..14e742b0d1c248 100644
+--- a/drivers/tty/vt/defkeymap.c_shipped
++++ b/drivers/tty/vt/defkeymap.c_shipped
+@@ -23,6 +23,22 @@ u_short plain_map[NR_KEYS] = {
+ 0xf118, 0xf601, 0xf602, 0xf117, 0xf600, 0xf119, 0xf115, 0xf116,
+ 0xf11a, 0xf10c, 0xf10d, 0xf11b, 0xf11c, 0xf110, 0xf311, 0xf11d,
+ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
+ };
+
+ u_short shift_map[NR_KEYS] = {
+@@ -42,6 +58,22 @@ u_short shift_map[NR_KEYS] = {
+ 0xf20b, 0xf601, 0xf602, 0xf117, 0xf600, 0xf20a, 0xf115, 0xf116,
+ 0xf11a, 0xf10c, 0xf10d, 0xf11b, 0xf11c, 0xf110, 0xf311, 0xf11d,
+ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
+ };
+
+ u_short altgr_map[NR_KEYS] = {
+@@ -61,6 +93,22 @@ u_short altgr_map[NR_KEYS] = {
+ 0xf118, 0xf601, 0xf602, 0xf117, 0xf600, 0xf119, 0xf115, 0xf116,
+ 0xf11a, 0xf10c, 0xf10d, 0xf11b, 0xf11c, 0xf110, 0xf311, 0xf11d,
+ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
+ };
+
+ u_short ctrl_map[NR_KEYS] = {
+@@ -80,6 +128,22 @@ u_short ctrl_map[NR_KEYS] = {
+ 0xf118, 0xf601, 0xf602, 0xf117, 0xf600, 0xf119, 0xf115, 0xf116,
+ 0xf11a, 0xf10c, 0xf10d, 0xf11b, 0xf11c, 0xf110, 0xf311, 0xf11d,
+ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
+ };
+
+ u_short shift_ctrl_map[NR_KEYS] = {
+@@ -99,6 +163,22 @@ u_short shift_ctrl_map[NR_KEYS] = {
+ 0xf118, 0xf601, 0xf602, 0xf117, 0xf600, 0xf119, 0xf115, 0xf116,
+ 0xf11a, 0xf10c, 0xf10d, 0xf11b, 0xf11c, 0xf110, 0xf311, 0xf11d,
+ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
+ };
+
+ u_short alt_map[NR_KEYS] = {
+@@ -118,6 +198,22 @@ u_short alt_map[NR_KEYS] = {
+ 0xf118, 0xf210, 0xf211, 0xf117, 0xf600, 0xf119, 0xf115, 0xf116,
+ 0xf11a, 0xf10c, 0xf10d, 0xf11b, 0xf11c, 0xf110, 0xf311, 0xf11d,
+ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
+ };
+
+ u_short ctrl_alt_map[NR_KEYS] = {
+@@ -137,6 +233,22 @@ u_short ctrl_alt_map[NR_KEYS] = {
+ 0xf118, 0xf601, 0xf602, 0xf117, 0xf600, 0xf119, 0xf115, 0xf20c,
+ 0xf11a, 0xf10c, 0xf10d, 0xf11b, 0xf11c, 0xf110, 0xf311, 0xf11d,
+ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
+ };
+
+ ushort *key_maps[MAX_NR_KEYMAPS] = {
+diff --git a/drivers/tty/vt/keyboard.c b/drivers/tty/vt/keyboard.c
+index aa0026a9839c88..1c60976b2f86b6 100644
+--- a/drivers/tty/vt/keyboard.c
++++ b/drivers/tty/vt/keyboard.c
+@@ -1461,7 +1461,7 @@ static void kbd_keycode(unsigned int keycode, int down, int hw_raw)
+ rc = atomic_notifier_call_chain(&keyboard_notifier_list,
+ KBD_UNICODE, &param);
+ if (rc != NOTIFY_STOP)
+- if (down && !raw_mode)
++ if (down && !(raw_mode || kbd->kbdmode == VC_OFF))
+ k_unicode(vc, keysym, !down);
+ return;
+ }
+diff --git a/drivers/uio/uio_hv_generic.c b/drivers/uio/uio_hv_generic.c
+index 6625d340f3ac5d..865a5b289e0a5d 100644
+--- a/drivers/uio/uio_hv_generic.c
++++ b/drivers/uio/uio_hv_generic.c
+@@ -306,7 +306,7 @@ hv_uio_probe(struct hv_device *dev,
+ pdata->recv_buf = vzalloc(RECV_BUFFER_SIZE);
+ if (pdata->recv_buf == NULL) {
+ ret = -ENOMEM;
+- goto fail_close;
++ goto fail_free_ring;
+ }
+
+ ret = vmbus_establish_gpadl(channel, pdata->recv_buf,
+@@ -366,6 +366,8 @@ hv_uio_probe(struct hv_device *dev,
+
+ fail_close:
+ hv_uio_cleanup(dev, pdata);
++fail_free_ring:
++ vmbus_free_ring(dev->channel);
+
+ return ret;
+ }
+diff --git a/drivers/usb/atm/cxacru.c b/drivers/usb/atm/cxacru.c
+index a30ee59d7c05e2..65a1bfc62ae604 100644
+--- a/drivers/usb/atm/cxacru.c
++++ b/drivers/usb/atm/cxacru.c
+@@ -983,25 +983,60 @@ static int cxacru_fw(struct usb_device *usb_dev, enum cxacru_fw_request fw,
+ return ret;
+ }
+
+-static void cxacru_upload_firmware(struct cxacru_data *instance,
+- const struct firmware *fw,
+- const struct firmware *bp)
++
++static int cxacru_find_firmware(struct cxacru_data *instance,
++ char *phase, const struct firmware **fw_p)
+ {
+- int ret;
++ struct usbatm_data *usbatm = instance->usbatm;
++ struct device *dev = &usbatm->usb_intf->dev;
++ char buf[16];
++
++ sprintf(buf, "cxacru-%s.bin", phase);
++ usb_dbg(usbatm, "cxacru_find_firmware: looking for %s\n", buf);
++
++ if (request_firmware(fw_p, buf, dev)) {
++ usb_dbg(usbatm, "no stage %s firmware found\n", phase);
++ return -ENOENT;
++ }
++
++ usb_info(usbatm, "found firmware %s\n", buf);
++
++ return 0;
++}
++
++static int cxacru_heavy_init(struct usbatm_data *usbatm_instance,
++ struct usb_interface *usb_intf)
++{
++ const struct firmware *fw, *bp;
++ struct cxacru_data *instance = usbatm_instance->driver_data;
+ struct usbatm_data *usbatm = instance->usbatm;
+ struct usb_device *usb_dev = usbatm->usb_dev;
+ __le16 signature[] = { usb_dev->descriptor.idVendor,
+ usb_dev->descriptor.idProduct };
+ __le32 val;
++ int ret;
+
+- usb_dbg(usbatm, "%s\n", __func__);
++ ret = cxacru_find_firmware(instance, "fw", &fw);
++ if (ret) {
++ usb_warn(usbatm_instance, "firmware (cxacru-fw.bin) unavailable (system misconfigured?)\n");
++ return ret;
++ }
++
++ if (instance->modem_type->boot_rom_patch) {
++ ret = cxacru_find_firmware(instance, "bp", &bp);
++ if (ret) {
++ usb_warn(usbatm_instance, "boot ROM patch (cxacru-bp.bin) unavailable (system misconfigured?)\n");
++ release_firmware(fw);
++ return ret;
++ }
++ }
+
+ /* FirmwarePllFClkValue */
+ val = cpu_to_le32(instance->modem_type->pll_f_clk);
+ ret = cxacru_fw(usb_dev, FW_WRITE_MEM, 0x2, 0x0, PLLFCLK_ADDR, (u8 *) &val, 4);
+ if (ret) {
+ usb_err(usbatm, "FirmwarePllFClkValue failed: %d\n", ret);
+- return;
++ goto done;
+ }
+
+ /* FirmwarePllBClkValue */
+@@ -1009,7 +1044,7 @@ static void cxacru_upload_firmware(struct cxacru_data *instance,
+ ret = cxacru_fw(usb_dev, FW_WRITE_MEM, 0x2, 0x0, PLLBCLK_ADDR, (u8 *) &val, 4);
+ if (ret) {
+ usb_err(usbatm, "FirmwarePllBClkValue failed: %d\n", ret);
+- return;
++ goto done;
+ }
+
+ /* Enable SDRAM */
+@@ -1017,7 +1052,7 @@ static void cxacru_upload_firmware(struct cxacru_data *instance,
+ ret = cxacru_fw(usb_dev, FW_WRITE_MEM, 0x2, 0x0, SDRAMEN_ADDR, (u8 *) &val, 4);
+ if (ret) {
+ usb_err(usbatm, "Enable SDRAM failed: %d\n", ret);
+- return;
++ goto done;
+ }
+
+ /* Firmware */
+@@ -1025,7 +1060,7 @@ static void cxacru_upload_firmware(struct cxacru_data *instance,
+ ret = cxacru_fw(usb_dev, FW_WRITE_MEM, 0x2, 0x0, FW_ADDR, fw->data, fw->size);
+ if (ret) {
+ usb_err(usbatm, "Firmware upload failed: %d\n", ret);
+- return;
++ goto done;
+ }
+
+ /* Boot ROM patch */
+@@ -1034,7 +1069,7 @@ static void cxacru_upload_firmware(struct cxacru_data *instance,
+ ret = cxacru_fw(usb_dev, FW_WRITE_MEM, 0x2, 0x0, BR_ADDR, bp->data, bp->size);
+ if (ret) {
+ usb_err(usbatm, "Boot ROM patching failed: %d\n", ret);
+- return;
++ goto done;
+ }
+ }
+
+@@ -1042,7 +1077,7 @@ static void cxacru_upload_firmware(struct cxacru_data *instance,
+ ret = cxacru_fw(usb_dev, FW_WRITE_MEM, 0x2, 0x0, SIG_ADDR, (u8 *) signature, 4);
+ if (ret) {
+ usb_err(usbatm, "Signature storing failed: %d\n", ret);
+- return;
++ goto done;
+ }
+
+ usb_info(usbatm, "starting device\n");
+@@ -1054,7 +1089,7 @@ static void cxacru_upload_firmware(struct cxacru_data *instance,
+ }
+ if (ret) {
+ usb_err(usbatm, "Passing control to firmware failed: %d\n", ret);
+- return;
++ goto done;
+ }
+
+ /* Delay to allow firmware to start up. */
+@@ -1068,53 +1103,10 @@ static void cxacru_upload_firmware(struct cxacru_data *instance,
+ ret = cxacru_cm(instance, CM_REQUEST_CARD_GET_STATUS, NULL, 0, NULL, 0);
+ if (ret < 0) {
+ usb_err(usbatm, "modem failed to initialize: %d\n", ret);
+- return;
+- }
+-}
+-
+-static int cxacru_find_firmware(struct cxacru_data *instance,
+- char *phase, const struct firmware **fw_p)
+-{
+- struct usbatm_data *usbatm = instance->usbatm;
+- struct device *dev = &usbatm->usb_intf->dev;
+- char buf[16];
+-
+- sprintf(buf, "cxacru-%s.bin", phase);
+- usb_dbg(usbatm, "cxacru_find_firmware: looking for %s\n", buf);
+-
+- if (request_firmware(fw_p, buf, dev)) {
+- usb_dbg(usbatm, "no stage %s firmware found\n", phase);
+- return -ENOENT;
+- }
+-
+- usb_info(usbatm, "found firmware %s\n", buf);
+-
+- return 0;
+-}
+-
+-static int cxacru_heavy_init(struct usbatm_data *usbatm_instance,
+- struct usb_interface *usb_intf)
+-{
+- const struct firmware *fw, *bp;
+- struct cxacru_data *instance = usbatm_instance->driver_data;
+- int ret = cxacru_find_firmware(instance, "fw", &fw);
+-
+- if (ret) {
+- usb_warn(usbatm_instance, "firmware (cxacru-fw.bin) unavailable (system misconfigured?)\n");
+- return ret;
++ goto done;
+ }
+
+- if (instance->modem_type->boot_rom_patch) {
+- ret = cxacru_find_firmware(instance, "bp", &bp);
+- if (ret) {
+- usb_warn(usbatm_instance, "boot ROM patch (cxacru-bp.bin) unavailable (system misconfigured?)\n");
+- release_firmware(fw);
+- return ret;
+- }
+- }
+-
+- cxacru_upload_firmware(instance, fw, bp);
+-
++done:
+ if (instance->modem_type->boot_rom_patch)
+ release_firmware(bp);
+ release_firmware(fw);
+diff --git a/drivers/usb/chipidea/ci.h b/drivers/usb/chipidea/ci.h
+index 7b00b93dad9b8d..f12e177bfc551a 100644
+--- a/drivers/usb/chipidea/ci.h
++++ b/drivers/usb/chipidea/ci.h
+@@ -278,8 +278,19 @@ static inline int ci_role_start(struct ci_hdrc *ci, enum ci_role role)
+ return -ENXIO;
+
+ ret = ci->roles[role]->start(ci);
+- if (!ret)
+- ci->role = role;
++ if (ret)
++ return ret;
++
++ ci->role = role;
++
++ if (ci->usb_phy) {
++ if (role == CI_ROLE_HOST)
++ usb_phy_set_event(ci->usb_phy, USB_EVENT_ID);
++ else
++ /* in device mode but vbus is invalid*/
++ usb_phy_set_event(ci->usb_phy, USB_EVENT_NONE);
++ }
++
+ return ret;
+ }
+
+@@ -293,6 +304,9 @@ static inline void ci_role_stop(struct ci_hdrc *ci)
+ ci->role = CI_ROLE_END;
+
+ ci->roles[role]->stop(ci);
++
++ if (ci->usb_phy)
++ usb_phy_set_event(ci->usb_phy, USB_EVENT_NONE);
+ }
+
+ static inline enum usb_role ci_role_to_usb_role(struct ci_hdrc *ci)
+diff --git a/drivers/usb/chipidea/udc.c b/drivers/usb/chipidea/udc.c
+index 1c7af91bf03af4..122d2d82c67c7d 100644
+--- a/drivers/usb/chipidea/udc.c
++++ b/drivers/usb/chipidea/udc.c
+@@ -1697,6 +1697,13 @@ static int ci_udc_vbus_session(struct usb_gadget *_gadget, int is_active)
+ ret = ci->platdata->notify_event(ci,
+ CI_HDRC_CONTROLLER_VBUS_EVENT);
+
++ if (ci->usb_phy) {
++ if (is_active)
++ usb_phy_set_event(ci->usb_phy, USB_EVENT_VBUS);
++ else
++ usb_phy_set_event(ci->usb_phy, USB_EVENT_NONE);
++ }
++
+ if (ci->driver)
+ ci_hdrc_gadget_connect(_gadget, is_active);
+
+@@ -2012,6 +2019,9 @@ static irqreturn_t udc_irq(struct ci_hdrc *ci)
+ if (USBi_PCI & intr) {
+ ci->gadget.speed = hw_port_is_high_speed(ci) ?
+ USB_SPEED_HIGH : USB_SPEED_FULL;
++ if (ci->usb_phy)
++ usb_phy_set_event(ci->usb_phy,
++ USB_EVENT_ENUMERATED);
+ if (ci->suspended) {
+ if (ci->driver->resume) {
+ spin_unlock(&ci->lock);
+diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
+index 571b70b9231cf8..07543731bfa575 100644
+--- a/drivers/usb/class/cdc-acm.c
++++ b/drivers/usb/class/cdc-acm.c
+@@ -1510,8 +1510,6 @@ static int acm_probe(struct usb_interface *intf,
+ acm->nb_index = 0;
+ acm->nb_size = 0;
+
+- dev_info(&intf->dev, "ttyACM%d: USB ACM device\n", minor);
+-
+ acm->line.dwDTERate = cpu_to_le32(9600);
+ acm->line.bDataBits = 8;
+ acm_set_line(acm, &acm->line);
+@@ -1519,6 +1517,12 @@ static int acm_probe(struct usb_interface *intf,
+ usb_driver_claim_interface(&acm_driver, data_interface, acm);
+ usb_set_intfdata(data_interface, acm);
+
++ if (quirks & CLEAR_HALT_CONDITIONS) {
++ /* errors intentionally ignored */
++ usb_clear_halt(usb_dev, acm->in);
++ usb_clear_halt(usb_dev, acm->out);
++ }
++
+ tty_dev = tty_port_register_device(&acm->port, acm_tty_driver, minor,
+ &control_interface->dev);
+ if (IS_ERR(tty_dev)) {
+@@ -1526,10 +1530,7 @@ static int acm_probe(struct usb_interface *intf,
+ goto alloc_fail6;
+ }
+
+- if (quirks & CLEAR_HALT_CONDITIONS) {
+- usb_clear_halt(usb_dev, acm->in);
+- usb_clear_halt(usb_dev, acm->out);
+- }
++ dev_info(&intf->dev, "ttyACM%d: USB ACM device\n", minor);
+
+ return 0;
+ alloc_fail6:
+diff --git a/drivers/usb/core/config.c b/drivers/usb/core/config.c
+index 1508e0f00dbc69..93eaeea8766095 100644
+--- a/drivers/usb/core/config.c
++++ b/drivers/usb/core/config.c
+@@ -81,8 +81,14 @@ static void usb_parse_ss_endpoint_companion(struct device *ddev, int cfgno,
+ */
+ desc = (struct usb_ss_ep_comp_descriptor *) buffer;
+
+- if (desc->bDescriptorType != USB_DT_SS_ENDPOINT_COMP ||
+- size < USB_DT_SS_EP_COMP_SIZE) {
++ if (size < USB_DT_SS_EP_COMP_SIZE) {
++ dev_notice(ddev,
++ "invalid SuperSpeed endpoint companion descriptor "
++ "of length %d, skipping\n", size);
++ return;
++ }
++
++ if (desc->bDescriptorType != USB_DT_SS_ENDPOINT_COMP) {
+ dev_notice(ddev, "No SuperSpeed endpoint companion for config %d "
+ " interface %d altsetting %d ep %d: "
+ "using minimum values\n",
+diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
+index b88e3a5e861683..667ab60a18dbee 100644
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -53,6 +53,12 @@
+ #define USB_TP_TRANSMISSION_DELAY_MAX 65535 /* ns */
+ #define USB_PING_RESPONSE_TIME 400 /* ns */
+
++/*
++ * Give SS hubs 200ms time after wake to train downstream links before
++ * assuming no port activity and allowing hub to runtime suspend back.
++ */
++#define USB_SS_PORT_U0_WAKE_TIME 200 /* ms */
++
+ /* Protect struct usb_device->state and ->children members
+ * Note: Both are also protected by ->dev.sem, except that ->state can
+ * change to USB_STATE_NOTATTACHED even when the semaphore isn't held. */
+@@ -1052,6 +1058,7 @@ static void hub_activate(struct usb_hub *hub, enum hub_activation_type type)
+ goto init2;
+ goto init3;
+ }
++
+ hub_get(hub);
+
+ /* The superspeed hub except for root hub has to use Hub Depth
+@@ -1300,6 +1307,17 @@ static void hub_activate(struct usb_hub *hub, enum hub_activation_type type)
+ device_unlock(&hdev->dev);
+ }
+
++ if (type == HUB_RESUME && hub_is_superspeed(hub->hdev)) {
++ /* give usb3 downstream links training time after hub resume */
++ usb_autopm_get_interface_no_resume(
++ to_usb_interface(hub->intfdev));
++
++ queue_delayed_work(system_power_efficient_wq,
++ &hub->post_resume_work,
++ msecs_to_jiffies(USB_SS_PORT_U0_WAKE_TIME));
++ return;
++ }
++
+ hub_put(hub);
+ }
+
+@@ -1318,6 +1336,14 @@ static void hub_init_func3(struct work_struct *ws)
+ hub_activate(hub, HUB_INIT3);
+ }
+
++static void hub_post_resume(struct work_struct *ws)
++{
++ struct usb_hub *hub = container_of(ws, struct usb_hub, post_resume_work.work);
++
++ usb_autopm_put_interface_async(to_usb_interface(hub->intfdev));
++ hub_put(hub);
++}
++
+ enum hub_quiescing_type {
+ HUB_DISCONNECT, HUB_PRE_RESET, HUB_SUSPEND
+ };
+@@ -1343,6 +1369,7 @@ static void hub_quiesce(struct usb_hub *hub, enum hub_quiescing_type type)
+
+ /* Stop hub_wq and related activity */
+ del_timer_sync(&hub->irq_urb_retry);
++ flush_delayed_work(&hub->post_resume_work);
+ usb_kill_urb(hub->urb);
+ if (hub->has_indicators)
+ cancel_delayed_work_sync(&hub->leds);
+@@ -1899,6 +1926,7 @@ static int hub_probe(struct usb_interface *intf, const struct usb_device_id *id)
+ hub->hdev = hdev;
+ INIT_DELAYED_WORK(&hub->leds, led_work);
+ INIT_DELAYED_WORK(&hub->init_work, NULL);
++ INIT_DELAYED_WORK(&hub->post_resume_work, hub_post_resume);
+ INIT_WORK(&hub->events, hub_event);
+ spin_lock_init(&hub->irq_urb_lock);
+ timer_setup(&hub->irq_urb_retry, hub_retry_irq_urb, 0);
+@@ -2856,6 +2884,8 @@ static unsigned hub_is_wusb(struct usb_hub *hub)
+ #define PORT_INIT_TRIES 4
+ #endif /* CONFIG_USB_FEW_INIT_RETRIES */
+
++#define DETECT_DISCONNECT_TRIES 5
++
+ #define HUB_ROOT_RESET_TIME 60 /* times are in msec */
+ #define HUB_SHORT_RESET_TIME 10
+ #define HUB_BH_RESET_TIME 50
+@@ -5657,6 +5687,8 @@ static void port_event(struct usb_hub *hub, int port1)
+ struct usb_device *udev = port_dev->child;
+ struct usb_device *hdev = hub->hdev;
+ u16 portstatus, portchange;
++ int i = 0;
++ int err;
+
+ connect_change = test_bit(port1, hub->change_bits);
+ clear_bit(port1, hub->event_bits);
+@@ -5733,17 +5765,30 @@ static void port_event(struct usb_hub *hub, int port1)
+ connect_change = 1;
+
+ /*
+- * Warm reset a USB3 protocol port if it's in
+- * SS.Inactive state.
++ * Avoid trying to recover a USB3 SS.Inactive port with a warm reset if
++ * the device was disconnected. A 12ms disconnect detect timer in
++ * SS.Inactive state transitions the port to RxDetect automatically.
++ * SS.Inactive link error state is common during device disconnect.
+ */
+- if (hub_port_warm_reset_required(hub, port1, portstatus)) {
+- dev_dbg(&port_dev->dev, "do warm reset\n");
+- if (!udev || !(portstatus & USB_PORT_STAT_CONNECTION)
++ while (hub_port_warm_reset_required(hub, port1, portstatus)) {
++ if ((i++ < DETECT_DISCONNECT_TRIES) && udev) {
++ u16 unused;
++
++ msleep(20);
++ hub_port_status(hub, port1, &portstatus, &unused);
++ dev_dbg(&port_dev->dev, "Wait for inactive link disconnect detect\n");
++ continue;
++ } else if (!udev || !(portstatus & USB_PORT_STAT_CONNECTION)
+ || udev->state == USB_STATE_NOTATTACHED) {
+- if (hub_port_reset(hub, port1, NULL,
+- HUB_BH_RESET_TIME, true) < 0)
++ dev_dbg(&port_dev->dev, "do warm reset, port only\n");
++ err = hub_port_reset(hub, port1, NULL,
++ HUB_BH_RESET_TIME, true);
++ if (!udev && err == -ENOTCONN)
++ connect_change = 0;
++ else if (err < 0)
+ hub_port_disable(hub, port1, 1);
+ } else {
++ dev_dbg(&port_dev->dev, "do warm reset, full device\n");
+ usb_unlock_port(port_dev);
+ usb_lock_device(udev);
+ usb_reset_device(udev);
+@@ -5751,6 +5796,7 @@ static void port_event(struct usb_hub *hub, int port1)
+ usb_lock_port(port_dev);
+ connect_change = 0;
+ }
++ break;
+ }
+
+ if (connect_change)
+diff --git a/drivers/usb/core/hub.h b/drivers/usb/core/hub.h
+index dd049bc85f88c6..9144e02dfe5c55 100644
+--- a/drivers/usb/core/hub.h
++++ b/drivers/usb/core/hub.h
+@@ -69,6 +69,7 @@ struct usb_hub {
+ u8 indicator[USB_MAXCHILDREN];
+ struct delayed_work leds;
+ struct delayed_work init_work;
++ struct delayed_work post_resume_work;
+ struct work_struct events;
+ spinlock_t irq_urb_lock;
+ struct timer_list irq_urb_retry;
+diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
+index ff3b5131903ace..f5894cb1668670 100644
+--- a/drivers/usb/core/quirks.c
++++ b/drivers/usb/core/quirks.c
+@@ -368,6 +368,7 @@ static const struct usb_device_id usb_quirk_list[] = {
+ { USB_DEVICE(0x0781, 0x5591), .driver_info = USB_QUIRK_NO_LPM },
+
+ /* SanDisk Corp. SanDisk 3.2Gen1 */
++ { USB_DEVICE(0x0781, 0x5596), .driver_info = USB_QUIRK_DELAY_INIT },
+ { USB_DEVICE(0x0781, 0x55a3), .driver_info = USB_QUIRK_DELAY_INIT },
+
+ /* SanDisk Extreme 55AE */
+diff --git a/drivers/usb/core/urb.c b/drivers/usb/core/urb.c
+index 9c285026f82767..c41b25bc585cf0 100644
+--- a/drivers/usb/core/urb.c
++++ b/drivers/usb/core/urb.c
+@@ -490,7 +490,7 @@ int usb_submit_urb(struct urb *urb, gfp_t mem_flags)
+
+ /* Check that the pipe's type matches the endpoint's type */
+ if (usb_pipe_type_check(urb->dev, urb->pipe))
+- dev_WARN(&dev->dev, "BOGUS urb xfer, pipe %x != type %x\n",
++ dev_warn_once(&dev->dev, "BOGUS urb xfer, pipe %x != type %x\n",
+ usb_pipetype(urb->pipe), pipetypes[xfertype]);
+
+ /* Check against a simple/standard policy */
+diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
+index 1264683d45f20b..118ab77903667f 100644
+--- a/drivers/usb/dwc3/core.c
++++ b/drivers/usb/dwc3/core.c
+@@ -1735,16 +1735,13 @@ static int dwc3_core_init_for_resume(struct dwc3 *dwc)
+
+ static int dwc3_suspend_common(struct dwc3 *dwc, pm_message_t msg)
+ {
+- unsigned long flags;
+ u32 reg;
+
+ switch (dwc->current_dr_role) {
+ case DWC3_GCTL_PRTCAP_DEVICE:
+ if (pm_runtime_suspended(dwc->dev))
+ break;
+- spin_lock_irqsave(&dwc->lock, flags);
+ dwc3_gadget_suspend(dwc);
+- spin_unlock_irqrestore(&dwc->lock, flags);
+ synchronize_irq(dwc->irq_gadget);
+ dwc3_core_exit(dwc);
+ break;
+@@ -1775,9 +1772,7 @@ static int dwc3_suspend_common(struct dwc3 *dwc, pm_message_t msg)
+ break;
+
+ if (dwc->current_otg_role == DWC3_OTG_ROLE_DEVICE) {
+- spin_lock_irqsave(&dwc->lock, flags);
+ dwc3_gadget_suspend(dwc);
+- spin_unlock_irqrestore(&dwc->lock, flags);
+ synchronize_irq(dwc->irq_gadget);
+ }
+
+@@ -1794,7 +1789,6 @@ static int dwc3_suspend_common(struct dwc3 *dwc, pm_message_t msg)
+
+ static int dwc3_resume_common(struct dwc3 *dwc, pm_message_t msg)
+ {
+- unsigned long flags;
+ int ret;
+ u32 reg;
+
+@@ -1805,9 +1799,7 @@ static int dwc3_resume_common(struct dwc3 *dwc, pm_message_t msg)
+ return ret;
+
+ dwc3_set_prtcap(dwc, DWC3_GCTL_PRTCAP_DEVICE);
+- spin_lock_irqsave(&dwc->lock, flags);
+ dwc3_gadget_resume(dwc);
+- spin_unlock_irqrestore(&dwc->lock, flags);
+ break;
+ case DWC3_GCTL_PRTCAP_HOST:
+ if (!PMSG_IS_AUTO(msg)) {
+@@ -1845,9 +1837,7 @@ static int dwc3_resume_common(struct dwc3 *dwc, pm_message_t msg)
+ if (dwc->current_otg_role == DWC3_OTG_ROLE_HOST) {
+ dwc3_otg_host_init(dwc);
+ } else if (dwc->current_otg_role == DWC3_OTG_ROLE_DEVICE) {
+- spin_lock_irqsave(&dwc->lock, flags);
+ dwc3_gadget_resume(dwc);
+- spin_unlock_irqrestore(&dwc->lock, flags);
+ }
+
+ break;
+diff --git a/drivers/usb/dwc3/dwc3-meson-g12a.c b/drivers/usb/dwc3/dwc3-meson-g12a.c
+index 2d3ca6e8eb654e..a970eae93a205b 100644
+--- a/drivers/usb/dwc3/dwc3-meson-g12a.c
++++ b/drivers/usb/dwc3/dwc3-meson-g12a.c
+@@ -844,6 +844,9 @@ static int dwc3_meson_g12a_remove(struct platform_device *pdev)
+ if (priv->drvdata->otg_switch_supported)
+ usb_role_switch_unregister(priv->role_switch);
+
++ put_device(priv->switch_desc.udc);
++ put_device(priv->switch_desc.usb2_port);
++
+ of_platform_depopulate(dev);
+
+ for (i = 0 ; i < PHY_COUNT ; ++i) {
+diff --git a/drivers/usb/dwc3/dwc3-qcom.c b/drivers/usb/dwc3/dwc3-qcom.c
+index db3559a1020776..d013d774edc139 100644
+--- a/drivers/usb/dwc3/dwc3-qcom.c
++++ b/drivers/usb/dwc3/dwc3-qcom.c
+@@ -786,13 +786,13 @@ static int dwc3_qcom_probe(struct platform_device *pdev)
+ ret = reset_control_deassert(qcom->resets);
+ if (ret) {
+ dev_err(&pdev->dev, "failed to deassert resets, err=%d\n", ret);
+- goto reset_assert;
++ return ret;
+ }
+
+ ret = dwc3_qcom_clk_init(qcom, of_clk_get_parent_count(np));
+ if (ret) {
+ dev_err(dev, "failed to get clocks\n");
+- goto reset_assert;
++ return ret;
+ }
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+@@ -892,8 +892,6 @@ static int dwc3_qcom_probe(struct platform_device *pdev)
+ clk_disable_unprepare(qcom->clks[i]);
+ clk_put(qcom->clks[i]);
+ }
+-reset_assert:
+- reset_control_assert(qcom->resets);
+
+ return ret;
+ }
+@@ -921,7 +919,6 @@ static int dwc3_qcom_remove(struct platform_device *pdev)
+ qcom->num_clocks = 0;
+
+ dwc3_qcom_interconnect_exit(qcom);
+- reset_control_assert(qcom->resets);
+
+ pm_runtime_allow(dev);
+ pm_runtime_disable(dev);
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index f3103baa745903..a2aa37ca6ad2ca 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -3107,6 +3107,15 @@ static void dwc3_gadget_endpoint_transfer_complete(struct dwc3_ep *dep,
+ static void dwc3_gadget_endpoint_transfer_not_ready(struct dwc3_ep *dep,
+ const struct dwc3_event_depevt *event)
+ {
++ /*
++ * During a device-initiated disconnect, a late xferNotReady event can
++ * be generated after the End Transfer command resets the event filter,
++ * but before the controller is halted. Ignore it to prevent a new
++ * transfer from starting.
++ */
++ if (!dep->dwc->connected)
++ return;
++
+ dwc3_gadget_endpoint_frame_from_event(dep, event);
+
+ /*
+@@ -4106,12 +4115,17 @@ void dwc3_gadget_exit(struct dwc3 *dwc)
+
+ int dwc3_gadget_suspend(struct dwc3 *dwc)
+ {
++ unsigned long flags;
++
+ if (!dwc->gadget_driver)
+ return 0;
+
+ dwc3_gadget_run_stop(dwc, false, false);
++
++ spin_lock_irqsave(&dwc->lock, flags);
+ dwc3_disconnect_gadget(dwc);
+ __dwc3_gadget_stop(dwc);
++ spin_unlock_irqrestore(&dwc->lock, flags);
+
+ return 0;
+ }
+diff --git a/drivers/usb/early/xhci-dbc.c b/drivers/usb/early/xhci-dbc.c
+index 6c0434100e38cf..7f832c98699c70 100644
+--- a/drivers/usb/early/xhci-dbc.c
++++ b/drivers/usb/early/xhci-dbc.c
+@@ -679,6 +679,10 @@ int __init early_xdbc_setup_hardware(void)
+
+ xdbc.table_base = NULL;
+ xdbc.out_buf = NULL;
++
++ early_iounmap(xdbc.xhci_base, xdbc.xhci_length);
++ xdbc.xhci_base = NULL;
++ xdbc.xhci_length = 0;
+ }
+
+ return ret;
+diff --git a/drivers/usb/gadget/composite.c b/drivers/usb/gadget/composite.c
+index 8adb54886443d3..f694823f67a14e 100644
+--- a/drivers/usb/gadget/composite.c
++++ b/drivers/usb/gadget/composite.c
+@@ -2241,6 +2241,11 @@ int composite_os_desc_req_prepare(struct usb_composite_dev *cdev,
+ if (!cdev->os_desc_req->buf) {
+ ret = -ENOMEM;
+ usb_ep_free_request(ep0, cdev->os_desc_req);
++ /*
++ * Set os_desc_req to NULL so that composite_dev_cleanup()
++ * will not try to free it again.
++ */
++ cdev->os_desc_req = NULL;
+ goto end;
+ }
+ cdev->os_desc_req->context = cdev;
+diff --git a/drivers/usb/gadget/configfs.c b/drivers/usb/gadget/configfs.c
+index 6bb69a4e64704f..d810b96a7ba43e 100644
+--- a/drivers/usb/gadget/configfs.c
++++ b/drivers/usb/gadget/configfs.c
+@@ -855,6 +855,8 @@ static ssize_t os_desc_qw_sign_store(struct config_item *item, const char *page,
+ struct gadget_info *gi = os_desc_item_to_gadget_info(item);
+ int res, l;
+
++ if (!len)
++ return len;
+ l = min((int)len, OS_STRING_QW_SIGN_LEN >> 1);
+ if (page[l - 1] == '\n')
+ --l;
+diff --git a/drivers/usb/gadget/udc/renesas_usb3.c b/drivers/usb/gadget/udc/renesas_usb3.c
+index d888741d3e2f97..3cc65f1d2a0665 100644
+--- a/drivers/usb/gadget/udc/renesas_usb3.c
++++ b/drivers/usb/gadget/udc/renesas_usb3.c
+@@ -2566,6 +2566,7 @@ static int renesas_usb3_remove(struct platform_device *pdev)
+ struct renesas_usb3 *usb3 = platform_get_drvdata(pdev);
+
+ debugfs_remove_recursive(usb3->dentry);
++ put_device(usb3->host_dev);
+ device_remove_file(&pdev->dev, &dev_attr_role);
+
+ cancel_work_sync(&usb3->role_work);
+diff --git a/drivers/usb/host/xhci-hub.c b/drivers/usb/host/xhci-hub.c
+index e92f920256b2e0..05f119e7178c33 100644
+--- a/drivers/usb/host/xhci-hub.c
++++ b/drivers/usb/host/xhci-hub.c
+@@ -630,8 +630,7 @@ static int xhci_enter_test_mode(struct xhci_hcd *xhci,
+ if (!xhci->devs[i])
+ continue;
+
+- retval = xhci_disable_slot(xhci, i);
+- xhci_free_virt_device(xhci, i);
++ retval = xhci_disable_and_free_slot(xhci, i);
+ if (retval)
+ xhci_err(xhci, "Failed to disable slot %d, %d. Enter test mode anyway\n",
+ i, retval);
+diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
+index 5b0e0097832299..e681c0dd9fbfe7 100644
+--- a/drivers/usb/host/xhci-mem.c
++++ b/drivers/usb/host/xhci-mem.c
+@@ -888,21 +888,20 @@ int xhci_alloc_tt_info(struct xhci_hcd *xhci,
+ * will be manipulated by the configure endpoint, allocate device, or update
+ * hub functions while this function is removing the TT entries from the list.
+ */
+-void xhci_free_virt_device(struct xhci_hcd *xhci, int slot_id)
++void xhci_free_virt_device(struct xhci_hcd *xhci, struct xhci_virt_device *dev,
++ int slot_id)
+ {
+- struct xhci_virt_device *dev;
+ int i;
+ int old_active_eps = 0;
+
+ /* Slot ID 0 is reserved */
+- if (slot_id == 0 || !xhci->devs[slot_id])
++ if (slot_id == 0 || !dev)
+ return;
+
+- dev = xhci->devs[slot_id];
+-
+- xhci->dcbaa->dev_context_ptrs[slot_id] = 0;
+- if (!dev)
+- return;
++ /* If device ctx array still points to _this_ device, clear it */
++ if (dev->out_ctx &&
++ xhci->dcbaa->dev_context_ptrs[slot_id] == cpu_to_le64(dev->out_ctx->dma))
++ xhci->dcbaa->dev_context_ptrs[slot_id] = 0;
+
+ trace_xhci_free_virt_device(dev);
+
+@@ -941,8 +940,9 @@ void xhci_free_virt_device(struct xhci_hcd *xhci, int slot_id)
+
+ if (dev->udev && dev->udev->slot_id)
+ dev->udev->slot_id = 0;
+- kfree(xhci->devs[slot_id]);
+- xhci->devs[slot_id] = NULL;
++ if (xhci->devs[slot_id] == dev)
++ xhci->devs[slot_id] = NULL;
++ kfree(dev);
+ }
+
+ /*
+@@ -984,7 +984,7 @@ static void xhci_free_virt_devices_depth_first(struct xhci_hcd *xhci, int slot_i
+ out:
+ /* we are now at a leaf device */
+ xhci_debugfs_remove_slot(xhci, slot_id);
+- xhci_free_virt_device(xhci, slot_id);
++ xhci_free_virt_device(xhci, vdev, slot_id);
+ }
+
+ int xhci_alloc_virt_device(struct xhci_hcd *xhci, int slot_id,
+@@ -1227,6 +1227,8 @@ int xhci_setup_addressable_virt_dev(struct xhci_hcd *xhci, struct usb_device *ud
+ ep0_ctx->deq = cpu_to_le64(dev->eps[0].ring->first_seg->dma |
+ dev->eps[0].ring->cycle_state);
+
++ ep0_ctx->tx_info = cpu_to_le32(EP_AVG_TRB_LENGTH(8));
++
+ trace_xhci_setup_addressable_virt_device(dev);
+
+ /* Steps 7 and 8 were done in xhci_alloc_virt_device() */
+diff --git a/drivers/usb/host/xhci-pci-renesas.c b/drivers/usb/host/xhci-pci-renesas.c
+index 01ad6fc1adcaf6..68ca59b2918367 100644
+--- a/drivers/usb/host/xhci-pci-renesas.c
++++ b/drivers/usb/host/xhci-pci-renesas.c
+@@ -47,8 +47,9 @@
+ #define RENESAS_ROM_ERASE_MAGIC 0x5A65726F
+ #define RENESAS_ROM_WRITE_MAGIC 0x53524F4D
+
+-#define RENESAS_RETRY 10000
+-#define RENESAS_DELAY 10
++#define RENESAS_RETRY 50000 /* 50000 * RENESAS_DELAY ~= 500ms */
++#define RENESAS_CHIP_ERASE_RETRY 500000 /* 500000 * RENESAS_DELAY ~= 5s */
++#define RENESAS_DELAY 10
+
+ static int renesas_fw_download_image(struct pci_dev *dev,
+ const u32 *fw, size_t step, bool rom)
+@@ -409,7 +410,7 @@ static void renesas_rom_erase(struct pci_dev *pdev)
+ /* sleep a bit while ROM is erased */
+ msleep(20);
+
+- for (i = 0; i < RENESAS_RETRY; i++) {
++ for (i = 0; i < RENESAS_CHIP_ERASE_RETRY; i++) {
+ retval = pci_read_config_byte(pdev, RENESAS_ROM_STATUS,
+ &status);
+ status &= RENESAS_ROM_STATUS_ERASE;
+diff --git a/drivers/usb/host/xhci-plat.c b/drivers/usb/host/xhci-plat.c
+index 83c7dffa945c31..daf93bee7669b2 100644
+--- a/drivers/usb/host/xhci-plat.c
++++ b/drivers/usb/host/xhci-plat.c
+@@ -361,7 +361,8 @@ static int xhci_plat_probe(struct platform_device *pdev)
+ if (ret)
+ goto disable_usb_phy;
+
+- if (HCC_MAX_PSA(xhci->hcc_params) >= 4)
++ if (HCC_MAX_PSA(xhci->hcc_params) >= 4 &&
++ !(xhci->quirks & XHCI_BROKEN_STREAMS))
+ xhci->shared_hcd->can_do_streams = 1;
+
+ ret = usb_add_hcd(xhci->shared_hcd, irq, IRQF_SHARED);
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index 954cd962e113c3..bf2787bb04eaf0 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -1094,12 +1094,15 @@ static void xhci_kill_endpoint_urbs(struct xhci_hcd *xhci,
+ */
+ void xhci_hc_died(struct xhci_hcd *xhci)
+ {
++ bool notify;
+ int i, j;
+
+ if (xhci->xhc_state & XHCI_STATE_DYING)
+ return;
+
+- xhci_err(xhci, "xHCI host controller not responding, assume dead\n");
++ notify = !(xhci->xhc_state & XHCI_STATE_REMOVING);
++ if (notify)
++ xhci_err(xhci, "xHCI host controller not responding, assume dead\n");
+ xhci->xhc_state |= XHCI_STATE_DYING;
+
+ xhci_cleanup_command_queue(xhci);
+@@ -1113,7 +1116,7 @@ void xhci_hc_died(struct xhci_hcd *xhci)
+ }
+
+ /* inform usb core hc died if PCI remove isn't already handling it */
+- if (!(xhci->xhc_state & XHCI_STATE_REMOVING))
++ if (notify)
+ usb_hc_died(xhci_to_hcd(xhci));
+ }
+
+@@ -1381,7 +1384,8 @@ static void xhci_handle_cmd_enable_slot(struct xhci_hcd *xhci, int slot_id,
+ command->slot_id = 0;
+ }
+
+-static void xhci_handle_cmd_disable_slot(struct xhci_hcd *xhci, int slot_id)
++static void xhci_handle_cmd_disable_slot(struct xhci_hcd *xhci, int slot_id,
++ u32 cmd_comp_code)
+ {
+ struct xhci_virt_device *virt_dev;
+ struct xhci_slot_ctx *slot_ctx;
+@@ -1396,6 +1400,10 @@ static void xhci_handle_cmd_disable_slot(struct xhci_hcd *xhci, int slot_id)
+ if (xhci->quirks & XHCI_EP_LIMIT_QUIRK)
+ /* Delete default control endpoint resources */
+ xhci_free_device_endpoint_resources(xhci, virt_dev, true);
++ if (cmd_comp_code == COMP_SUCCESS) {
++ xhci->dcbaa->dev_context_ptrs[slot_id] = 0;
++ xhci->devs[slot_id] = NULL;
++ }
+ }
+
+ static void xhci_handle_cmd_config_ep(struct xhci_hcd *xhci, int slot_id,
+@@ -1635,7 +1643,7 @@ static void handle_cmd_completion(struct xhci_hcd *xhci,
+ xhci_handle_cmd_enable_slot(xhci, slot_id, cmd, cmd_comp_code);
+ break;
+ case TRB_DISABLE_SLOT:
+- xhci_handle_cmd_disable_slot(xhci, slot_id);
++ xhci_handle_cmd_disable_slot(xhci, slot_id, cmd_comp_code);
+ break;
+ case TRB_CONFIG_EP:
+ if (!cmd->completion)
+@@ -4183,7 +4191,8 @@ static int queue_command(struct xhci_hcd *xhci, struct xhci_command *cmd,
+
+ if ((xhci->xhc_state & XHCI_STATE_DYING) ||
+ (xhci->xhc_state & XHCI_STATE_HALTED)) {
+- xhci_dbg(xhci, "xHCI dying or halted, can't queue_command\n");
++ xhci_dbg(xhci, "xHCI dying or halted, can't queue_command. state: 0x%x\n",
++ xhci->xhc_state);
+ return -ESHUTDOWN;
+ }
+
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index 908445cff24f80..7eb06019747484 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -118,7 +118,8 @@ int xhci_halt(struct xhci_hcd *xhci)
+ ret = xhci_handshake(&xhci->op_regs->status,
+ STS_HALT, STS_HALT, XHCI_MAX_HALT_USEC);
+ if (ret) {
+- xhci_warn(xhci, "Host halt failed, %d\n", ret);
++ if (!(xhci->xhc_state & XHCI_STATE_DYING))
++ xhci_warn(xhci, "Host halt failed, %d\n", ret);
+ return ret;
+ }
+ xhci->xhc_state |= XHCI_STATE_HALTED;
+@@ -175,7 +176,8 @@ int xhci_reset(struct xhci_hcd *xhci, u64 timeout_us)
+ state = readl(&xhci->op_regs->status);
+
+ if (state == ~(u32)0) {
+- xhci_warn(xhci, "Host not accessible, reset failed.\n");
++ if (!(xhci->xhc_state & XHCI_STATE_DYING))
++ xhci_warn(xhci, "Host not accessible, reset failed.\n");
+ return -ENODEV;
+ }
+
+@@ -3956,7 +3958,7 @@ static void xhci_free_dev(struct usb_hcd *hcd, struct usb_device *udev)
+ xhci_disable_slot(xhci, udev->slot_id);
+
+ spin_lock_irqsave(&xhci->lock, flags);
+- xhci_free_virt_device(xhci, udev->slot_id);
++ xhci_free_virt_device(xhci, virt_dev, udev->slot_id);
+ spin_unlock_irqrestore(&xhci->lock, flags);
+
+ }
+@@ -4005,6 +4007,16 @@ int xhci_disable_slot(struct xhci_hcd *xhci, u32 slot_id)
+ return ret;
+ }
+
++int xhci_disable_and_free_slot(struct xhci_hcd *xhci, u32 slot_id)
++{
++ struct xhci_virt_device *vdev = xhci->devs[slot_id];
++ int ret;
++
++ ret = xhci_disable_slot(xhci, slot_id);
++ xhci_free_virt_device(xhci, vdev, slot_id);
++ return ret;
++}
++
+ /*
+ * Checks if we have enough host controller resources for the default control
+ * endpoint.
+@@ -4110,8 +4122,7 @@ int xhci_alloc_dev(struct usb_hcd *hcd, struct usb_device *udev)
+ return 1;
+
+ disable_slot:
+- xhci_disable_slot(xhci, udev->slot_id);
+- xhci_free_virt_device(xhci, udev->slot_id);
++ xhci_disable_and_free_slot(xhci, udev->slot_id);
+
+ return 0;
+ }
+@@ -4247,8 +4258,7 @@ static int xhci_setup_device(struct usb_hcd *hcd, struct usb_device *udev,
+ dev_warn(&udev->dev, "Device not responding to setup %s.\n", act);
+
+ mutex_unlock(&xhci->mutex);
+- ret = xhci_disable_slot(xhci, udev->slot_id);
+- xhci_free_virt_device(xhci, udev->slot_id);
++ ret = xhci_disable_and_free_slot(xhci, udev->slot_id);
+ if (!ret) {
+ if (xhci_alloc_dev(hcd, udev) == 1)
+ xhci_setup_addressable_virt_dev(xhci, udev);
+diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
+index 2d2e9c59add625..6bb1ddc3918c6b 100644
+--- a/drivers/usb/host/xhci.h
++++ b/drivers/usb/host/xhci.h
+@@ -2012,7 +2012,7 @@ void xhci_dbg_trace(struct xhci_hcd *xhci, void (*trace)(struct va_format *),
+ /* xHCI memory management */
+ void xhci_mem_cleanup(struct xhci_hcd *xhci);
+ int xhci_mem_init(struct xhci_hcd *xhci, gfp_t flags);
+-void xhci_free_virt_device(struct xhci_hcd *xhci, int slot_id);
++void xhci_free_virt_device(struct xhci_hcd *xhci, struct xhci_virt_device *dev, int slot_id);
+ int xhci_alloc_virt_device(struct xhci_hcd *xhci, int slot_id, struct usb_device *udev, gfp_t flags);
+ int xhci_setup_addressable_virt_dev(struct xhci_hcd *xhci, struct usb_device *udev);
+ void xhci_copy_ep0_dequeue_into_input_ctx(struct xhci_hcd *xhci,
+@@ -2104,6 +2104,7 @@ void xhci_reset_bandwidth(struct usb_hcd *hcd, struct usb_device *udev);
+ int xhci_update_hub_device(struct usb_hcd *hcd, struct usb_device *hdev,
+ struct usb_tt *tt, gfp_t mem_flags);
+ int xhci_disable_slot(struct xhci_hcd *xhci, u32 slot_id);
++int xhci_disable_and_free_slot(struct xhci_hcd *xhci, u32 slot_id);
+ int xhci_ext_cap_init(struct xhci_hcd *xhci);
+
+ int xhci_suspend(struct xhci_hcd *xhci, bool do_wakeup);
+diff --git a/drivers/usb/musb/musb_core.c b/drivers/usb/musb/musb_core.c
+index 4c8f0112481f37..0e13621f743a94 100644
+--- a/drivers/usb/musb/musb_core.c
++++ b/drivers/usb/musb/musb_core.c
+@@ -501,7 +501,7 @@ int musb_set_host(struct musb *musb)
+
+ init_data:
+ musb->is_active = 1;
+- musb->xceiv->otg->state = OTG_STATE_A_IDLE;
++ musb_set_state(musb, OTG_STATE_A_IDLE);
+ MUSB_HST_MODE(musb);
+
+ return error;
+@@ -548,7 +548,7 @@ int musb_set_peripheral(struct musb *musb)
+
+ init_data:
+ musb->is_active = 0;
+- musb->xceiv->otg->state = OTG_STATE_B_IDLE;
++ musb_set_state(musb, OTG_STATE_B_IDLE);
+ MUSB_DEV_MODE(musb);
+
+ return error;
+@@ -598,12 +598,12 @@ static void musb_otg_timer_func(struct timer_list *t)
+ unsigned long flags;
+
+ spin_lock_irqsave(&musb->lock, flags);
+- switch (musb->xceiv->otg->state) {
++ switch (musb_get_state(musb)) {
+ case OTG_STATE_B_WAIT_ACON:
+ musb_dbg(musb,
+ "HNP: b_wait_acon timeout; back to b_peripheral");
+ musb_g_disconnect(musb);
+- musb->xceiv->otg->state = OTG_STATE_B_PERIPHERAL;
++ musb_set_state(musb, OTG_STATE_B_PERIPHERAL);
+ musb->is_active = 0;
+ break;
+ case OTG_STATE_A_SUSPEND:
+@@ -611,7 +611,7 @@ static void musb_otg_timer_func(struct timer_list *t)
+ musb_dbg(musb, "HNP: %s timeout",
+ usb_otg_state_string(musb->xceiv->otg->state));
+ musb_platform_set_vbus(musb, 0);
+- musb->xceiv->otg->state = OTG_STATE_A_WAIT_VFALL;
++ musb_set_state(musb, OTG_STATE_A_WAIT_VFALL);
+ break;
+ default:
+ musb_dbg(musb, "HNP: Unhandled mode %s",
+@@ -632,7 +632,7 @@ void musb_hnp_stop(struct musb *musb)
+ musb_dbg(musb, "HNP: stop from %s",
+ usb_otg_state_string(musb->xceiv->otg->state));
+
+- switch (musb->xceiv->otg->state) {
++ switch (musb_get_state(musb)) {
+ case OTG_STATE_A_PERIPHERAL:
+ musb_g_disconnect(musb);
+ musb_dbg(musb, "HNP: back to %s",
+@@ -642,7 +642,7 @@ void musb_hnp_stop(struct musb *musb)
+ musb_dbg(musb, "HNP: Disabling HR");
+ if (hcd)
+ hcd->self.is_b_host = 0;
+- musb->xceiv->otg->state = OTG_STATE_B_PERIPHERAL;
++ musb_set_state(musb, OTG_STATE_B_PERIPHERAL);
+ MUSB_DEV_MODE(musb);
+ reg = musb_readb(mbase, MUSB_POWER);
+ reg |= MUSB_POWER_SUSPENDM;
+@@ -670,7 +670,7 @@ static void musb_handle_intr_resume(struct musb *musb, u8 devctl)
+ usb_otg_state_string(musb->xceiv->otg->state));
+
+ if (devctl & MUSB_DEVCTL_HM) {
+- switch (musb->xceiv->otg->state) {
++ switch (musb_get_state(musb)) {
+ case OTG_STATE_A_SUSPEND:
+ /* remote wakeup? */
+ musb->port1_status |=
+@@ -678,14 +678,14 @@ static void musb_handle_intr_resume(struct musb *musb, u8 devctl)
+ | MUSB_PORT_STAT_RESUME;
+ musb->rh_timer = jiffies
+ + msecs_to_jiffies(USB_RESUME_TIMEOUT);
+- musb->xceiv->otg->state = OTG_STATE_A_HOST;
++ musb_set_state(musb, OTG_STATE_A_HOST);
+ musb->is_active = 1;
+ musb_host_resume_root_hub(musb);
+ schedule_delayed_work(&musb->finish_resume_work,
+ msecs_to_jiffies(USB_RESUME_TIMEOUT));
+ break;
+ case OTG_STATE_B_WAIT_ACON:
+- musb->xceiv->otg->state = OTG_STATE_B_PERIPHERAL;
++ musb_set_state(musb, OTG_STATE_B_PERIPHERAL);
+ musb->is_active = 1;
+ MUSB_DEV_MODE(musb);
+ break;
+@@ -695,10 +695,10 @@ static void musb_handle_intr_resume(struct musb *musb, u8 devctl)
+ usb_otg_state_string(musb->xceiv->otg->state));
+ }
+ } else {
+- switch (musb->xceiv->otg->state) {
++ switch (musb_get_state(musb)) {
+ case OTG_STATE_A_SUSPEND:
+ /* possibly DISCONNECT is upcoming */
+- musb->xceiv->otg->state = OTG_STATE_A_HOST;
++ musb_set_state(musb, OTG_STATE_A_HOST);
+ musb_host_resume_root_hub(musb);
+ break;
+ case OTG_STATE_B_WAIT_ACON:
+@@ -749,7 +749,7 @@ static irqreturn_t musb_handle_intr_sessreq(struct musb *musb, u8 devctl)
+ */
+ musb_writeb(mbase, MUSB_DEVCTL, MUSB_DEVCTL_SESSION);
+ musb->ep0_stage = MUSB_EP0_START;
+- musb->xceiv->otg->state = OTG_STATE_A_IDLE;
++ musb_set_state(musb, OTG_STATE_A_IDLE);
+ MUSB_HST_MODE(musb);
+ musb_platform_set_vbus(musb, 1);
+
+@@ -776,7 +776,7 @@ static void musb_handle_intr_vbuserr(struct musb *musb, u8 devctl)
+ * REVISIT: do delays from lots of DEBUG_KERNEL checks
+ * make trouble here, keeping VBUS < 4.4V ?
+ */
+- switch (musb->xceiv->otg->state) {
++ switch (musb_get_state(musb)) {
+ case OTG_STATE_A_HOST:
+ /* recovery is dicey once we've gotten past the
+ * initial stages of enumeration, but if VBUS
+@@ -832,7 +832,7 @@ static void musb_handle_intr_suspend(struct musb *musb, u8 devctl)
+ musb_dbg(musb, "SUSPEND (%s) devctl %02x",
+ usb_otg_state_string(musb->xceiv->otg->state), devctl);
+
+- switch (musb->xceiv->otg->state) {
++ switch (musb_get_state(musb)) {
+ case OTG_STATE_A_PERIPHERAL:
+ /* We also come here if the cable is removed, since
+ * this silicon doesn't report ID-no-longer-grounded.
+@@ -857,7 +857,7 @@ static void musb_handle_intr_suspend(struct musb *musb, u8 devctl)
+ musb_g_suspend(musb);
+ musb->is_active = musb->g.b_hnp_enable;
+ if (musb->is_active) {
+- musb->xceiv->otg->state = OTG_STATE_B_WAIT_ACON;
++ musb_set_state(musb, OTG_STATE_B_WAIT_ACON);
+ musb_dbg(musb, "HNP: Setting timer for b_ase0_brst");
+ mod_timer(&musb->otg_timer, jiffies
+ + msecs_to_jiffies(
+@@ -870,7 +870,7 @@ static void musb_handle_intr_suspend(struct musb *musb, u8 devctl)
+ + msecs_to_jiffies(musb->a_wait_bcon));
+ break;
+ case OTG_STATE_A_HOST:
+- musb->xceiv->otg->state = OTG_STATE_A_SUSPEND;
++ musb_set_state(musb, OTG_STATE_A_SUSPEND);
+ musb->is_active = musb->hcd->self.b_hnp_enable;
+ break;
+ case OTG_STATE_B_HOST:
+@@ -908,7 +908,7 @@ static void musb_handle_intr_connect(struct musb *musb, u8 devctl, u8 int_usb)
+ musb->port1_status |= USB_PORT_STAT_LOW_SPEED;
+
+ /* indicate new connection to OTG machine */
+- switch (musb->xceiv->otg->state) {
++ switch (musb_get_state(musb)) {
+ case OTG_STATE_B_PERIPHERAL:
+ if (int_usb & MUSB_INTR_SUSPEND) {
+ musb_dbg(musb, "HNP: SUSPEND+CONNECT, now b_host");
+@@ -920,7 +920,7 @@ static void musb_handle_intr_connect(struct musb *musb, u8 devctl, u8 int_usb)
+ case OTG_STATE_B_WAIT_ACON:
+ musb_dbg(musb, "HNP: CONNECT, now b_host");
+ b_host:
+- musb->xceiv->otg->state = OTG_STATE_B_HOST;
++ musb_set_state(musb, OTG_STATE_B_HOST);
+ if (musb->hcd)
+ musb->hcd->self.is_b_host = 1;
+ del_timer(&musb->otg_timer);
+@@ -928,7 +928,7 @@ static void musb_handle_intr_connect(struct musb *musb, u8 devctl, u8 int_usb)
+ default:
+ if ((devctl & MUSB_DEVCTL_VBUS)
+ == (3 << MUSB_DEVCTL_VBUS_SHIFT)) {
+- musb->xceiv->otg->state = OTG_STATE_A_HOST;
++ musb_set_state(musb, OTG_STATE_A_HOST);
+ if (hcd)
+ hcd->self.is_b_host = 0;
+ }
+@@ -947,7 +947,7 @@ static void musb_handle_intr_disconnect(struct musb *musb, u8 devctl)
+ usb_otg_state_string(musb->xceiv->otg->state),
+ MUSB_MODE(musb), devctl);
+
+- switch (musb->xceiv->otg->state) {
++ switch (musb_get_state(musb)) {
+ case OTG_STATE_A_HOST:
+ case OTG_STATE_A_SUSPEND:
+ musb_host_resume_root_hub(musb);
+@@ -965,7 +965,7 @@ static void musb_handle_intr_disconnect(struct musb *musb, u8 devctl)
+ musb_root_disconnect(musb);
+ if (musb->hcd)
+ musb->hcd->self.is_b_host = 0;
+- musb->xceiv->otg->state = OTG_STATE_B_PERIPHERAL;
++ musb_set_state(musb, OTG_STATE_B_PERIPHERAL);
+ MUSB_DEV_MODE(musb);
+ musb_g_disconnect(musb);
+ break;
+@@ -1005,7 +1005,7 @@ static void musb_handle_intr_reset(struct musb *musb)
+ } else {
+ musb_dbg(musb, "BUS RESET as %s",
+ usb_otg_state_string(musb->xceiv->otg->state));
+- switch (musb->xceiv->otg->state) {
++ switch (musb_get_state(musb)) {
+ case OTG_STATE_A_SUSPEND:
+ musb_g_reset(musb);
+ fallthrough;
+@@ -1024,11 +1024,11 @@ static void musb_handle_intr_reset(struct musb *musb)
+ case OTG_STATE_B_WAIT_ACON:
+ musb_dbg(musb, "HNP: RESET (%s), to b_peripheral",
+ usb_otg_state_string(musb->xceiv->otg->state));
+- musb->xceiv->otg->state = OTG_STATE_B_PERIPHERAL;
++ musb_set_state(musb, OTG_STATE_B_PERIPHERAL);
+ musb_g_reset(musb);
+ break;
+ case OTG_STATE_B_IDLE:
+- musb->xceiv->otg->state = OTG_STATE_B_PERIPHERAL;
++ musb_set_state(musb, OTG_STATE_B_PERIPHERAL);
+ fallthrough;
+ case OTG_STATE_B_PERIPHERAL:
+ musb_g_reset(musb);
+@@ -1215,8 +1215,8 @@ void musb_start(struct musb *musb)
+ * (c) peripheral initiates, using SRP
+ */
+ if (musb->port_mode != MUSB_HOST &&
+- musb->xceiv->otg->state != OTG_STATE_A_WAIT_BCON &&
+- (devctl & MUSB_DEVCTL_VBUS) == MUSB_DEVCTL_VBUS) {
++ musb_get_state(musb) != OTG_STATE_A_WAIT_BCON &&
++ (devctl & MUSB_DEVCTL_VBUS) == MUSB_DEVCTL_VBUS) {
+ musb->is_active = 1;
+ } else {
+ devctl |= MUSB_DEVCTL_SESSION;
+@@ -1907,7 +1907,7 @@ vbus_store(struct device *dev, struct device_attribute *attr,
+ spin_lock_irqsave(&musb->lock, flags);
+ /* force T(a_wait_bcon) to be zero/unlimited *OR* valid */
+ musb->a_wait_bcon = val ? max_t(int, val, OTG_TIME_A_WAIT_BCON) : 0 ;
+- if (musb->xceiv->otg->state == OTG_STATE_A_WAIT_BCON)
++ if (musb_get_state(musb) == OTG_STATE_A_WAIT_BCON)
+ musb->is_active = 0;
+ musb_platform_try_idle(musb, jiffies + msecs_to_jiffies(val));
+ spin_unlock_irqrestore(&musb->lock, flags);
+@@ -2078,8 +2078,8 @@ static void musb_irq_work(struct work_struct *data)
+
+ musb_pm_runtime_check_session(musb);
+
+- if (musb->xceiv->otg->state != musb->xceiv_old_state) {
+- musb->xceiv_old_state = musb->xceiv->otg->state;
++ if (musb_get_state(musb) != musb->xceiv_old_state) {
++ musb->xceiv_old_state = musb_get_state(musb);
+ sysfs_notify(&musb->controller->kobj, NULL, "mode");
+ }
+
+@@ -2521,7 +2521,7 @@ musb_init_controller(struct device *dev, int nIrq, void __iomem *ctrl)
+ }
+
+ MUSB_DEV_MODE(musb);
+- musb->xceiv->otg->state = OTG_STATE_B_IDLE;
++ musb_set_state(musb, OTG_STATE_B_IDLE);
+
+ switch (musb->port_mode) {
+ case MUSB_HOST:
+diff --git a/drivers/usb/musb/musb_core.h b/drivers/usb/musb/musb_core.h
+index dbe5623db1e03f..8c0fec972b6573 100644
+--- a/drivers/usb/musb/musb_core.h
++++ b/drivers/usb/musb/musb_core.h
+@@ -592,6 +592,17 @@ static inline void musb_platform_clear_ep_rxintr(struct musb *musb, int epnum)
+ musb->ops->clear_ep_rxintr(musb, epnum);
+ }
+
++static inline void musb_set_state(struct musb *musb,
++ enum usb_otg_state otg_state)
++{
++ musb->xceiv->otg->state = otg_state;
++}
++
++static inline enum usb_otg_state musb_get_state(struct musb *musb)
++{
++ return musb->xceiv->otg->state;
++}
++
+ /*
+ * gets the "dr_mode" property from DT and converts it into musb_mode
+ * if the property is not found or not recognized returns MUSB_OTG
+diff --git a/drivers/usb/musb/musb_debugfs.c b/drivers/usb/musb/musb_debugfs.c
+index 5401ae66894eb3..2d623284edf63f 100644
+--- a/drivers/usb/musb/musb_debugfs.c
++++ b/drivers/usb/musb/musb_debugfs.c
+@@ -235,7 +235,7 @@ static int musb_softconnect_show(struct seq_file *s, void *unused)
+ u8 reg;
+ int connect;
+
+- switch (musb->xceiv->otg->state) {
++ switch (musb_get_state(musb)) {
+ case OTG_STATE_A_HOST:
+ case OTG_STATE_A_WAIT_BCON:
+ pm_runtime_get_sync(musb->controller);
+@@ -275,7 +275,7 @@ static ssize_t musb_softconnect_write(struct file *file,
+
+ pm_runtime_get_sync(musb->controller);
+ if (!strncmp(buf, "0", 1)) {
+- switch (musb->xceiv->otg->state) {
++ switch (musb_get_state(musb)) {
+ case OTG_STATE_A_HOST:
+ musb_root_disconnect(musb);
+ reg = musb_readb(musb->mregs, MUSB_DEVCTL);
+@@ -286,7 +286,7 @@ static ssize_t musb_softconnect_write(struct file *file,
+ break;
+ }
+ } else if (!strncmp(buf, "1", 1)) {
+- switch (musb->xceiv->otg->state) {
++ switch (musb_get_state(musb)) {
+ case OTG_STATE_A_WAIT_BCON:
+ /*
+ * musb_save_context() called in musb_runtime_suspend()
+diff --git a/drivers/usb/musb/musb_gadget.c b/drivers/usb/musb/musb_gadget.c
+index 8dc657c71541ca..421fe1645e3200 100644
+--- a/drivers/usb/musb/musb_gadget.c
++++ b/drivers/usb/musb/musb_gadget.c
+@@ -1523,7 +1523,7 @@ static int musb_gadget_wakeup(struct usb_gadget *gadget)
+
+ spin_lock_irqsave(&musb->lock, flags);
+
+- switch (musb->xceiv->otg->state) {
++ switch (musb_get_state(musb)) {
+ case OTG_STATE_B_PERIPHERAL:
+ /* NOTE: OTG state machine doesn't include B_SUSPENDED;
+ * that's part of the standard usb 1.1 state machine, and
+@@ -1785,7 +1785,7 @@ int musb_gadget_setup(struct musb *musb)
+ musb->g.speed = USB_SPEED_UNKNOWN;
+
+ MUSB_DEV_MODE(musb);
+- musb->xceiv->otg->state = OTG_STATE_B_IDLE;
++ musb_set_state(musb, OTG_STATE_B_IDLE);
+
+ /* this "gadget" abstracts/virtualizes the controller */
+ musb->g.name = musb_driver_name;
+@@ -1850,7 +1850,7 @@ static int musb_gadget_start(struct usb_gadget *g,
+ musb->is_active = 1;
+
+ otg_set_peripheral(otg, &musb->g);
+- musb->xceiv->otg->state = OTG_STATE_B_IDLE;
++ musb_set_state(musb, OTG_STATE_B_IDLE);
+ spin_unlock_irqrestore(&musb->lock, flags);
+
+ musb_start(musb);
+@@ -1895,7 +1895,7 @@ static int musb_gadget_stop(struct usb_gadget *g)
+
+ (void) musb_gadget_vbus_draw(&musb->g, 0);
+
+- musb->xceiv->otg->state = OTG_STATE_UNDEFINED;
++ musb_set_state(musb, OTG_STATE_UNDEFINED);
+ musb_stop(musb);
+ otg_set_peripheral(musb->xceiv->otg, NULL);
+
+@@ -1909,6 +1909,7 @@ static int musb_gadget_stop(struct usb_gadget *g)
+ * gadget driver here and have everything work;
+ * that currently misbehaves.
+ */
++ usb_gadget_set_state(g, USB_STATE_NOTATTACHED);
+
+ /* Force check of devctl register for PM runtime */
+ schedule_delayed_work(&musb->irq_work, 0);
+@@ -1926,7 +1927,7 @@ static int musb_gadget_stop(struct usb_gadget *g)
+ void musb_g_resume(struct musb *musb)
+ {
+ musb->is_suspended = 0;
+- switch (musb->xceiv->otg->state) {
++ switch (musb_get_state(musb)) {
+ case OTG_STATE_B_IDLE:
+ break;
+ case OTG_STATE_B_WAIT_ACON:
+@@ -1952,10 +1953,10 @@ void musb_g_suspend(struct musb *musb)
+ devctl = musb_readb(musb->mregs, MUSB_DEVCTL);
+ musb_dbg(musb, "musb_g_suspend: devctl %02x", devctl);
+
+- switch (musb->xceiv->otg->state) {
++ switch (musb_get_state(musb)) {
+ case OTG_STATE_B_IDLE:
+ if ((devctl & MUSB_DEVCTL_VBUS) == MUSB_DEVCTL_VBUS)
+- musb->xceiv->otg->state = OTG_STATE_B_PERIPHERAL;
++ musb_set_state(musb, OTG_STATE_B_PERIPHERAL);
+ break;
+ case OTG_STATE_B_PERIPHERAL:
+ musb->is_suspended = 1;
+@@ -2001,22 +2002,23 @@ void musb_g_disconnect(struct musb *musb)
+ spin_lock(&musb->lock);
+ }
+
+- switch (musb->xceiv->otg->state) {
++ switch (musb_get_state(musb)) {
+ default:
+ musb_dbg(musb, "Unhandled disconnect %s, setting a_idle",
+ usb_otg_state_string(musb->xceiv->otg->state));
+- musb->xceiv->otg->state = OTG_STATE_A_IDLE;
++ musb_set_state(musb, OTG_STATE_A_IDLE);
+ MUSB_HST_MODE(musb);
+ break;
+ case OTG_STATE_A_PERIPHERAL:
+- musb->xceiv->otg->state = OTG_STATE_A_WAIT_BCON;
++ musb_set_state(musb, OTG_STATE_A_WAIT_BCON);
+ MUSB_HST_MODE(musb);
+ break;
+ case OTG_STATE_B_WAIT_ACON:
+ case OTG_STATE_B_HOST:
+ case OTG_STATE_B_PERIPHERAL:
+ case OTG_STATE_B_IDLE:
+- musb->xceiv->otg->state = OTG_STATE_B_IDLE;
++ musb_set_state(musb, OTG_STATE_B_IDLE);
++ usb_gadget_set_state(&musb->g, USB_STATE_NOTATTACHED);
+ break;
+ case OTG_STATE_B_SRP_INIT:
+ break;
+@@ -2080,13 +2082,13 @@ __acquires(musb->lock)
+ * In that case, do not rely on devctl for setting
+ * peripheral mode.
+ */
+- musb->xceiv->otg->state = OTG_STATE_B_PERIPHERAL;
++ musb_set_state(musb, OTG_STATE_B_PERIPHERAL);
+ musb->g.is_a_peripheral = 0;
+ } else if (devctl & MUSB_DEVCTL_BDEVICE) {
+- musb->xceiv->otg->state = OTG_STATE_B_PERIPHERAL;
++ musb_set_state(musb, OTG_STATE_B_PERIPHERAL);
+ musb->g.is_a_peripheral = 0;
+ } else {
+- musb->xceiv->otg->state = OTG_STATE_A_PERIPHERAL;
++ musb_set_state(musb, OTG_STATE_A_PERIPHERAL);
+ musb->g.is_a_peripheral = 1;
+ }
+
+diff --git a/drivers/usb/musb/musb_host.c b/drivers/usb/musb/musb_host.c
+index 1880b0f20df005..a89011c946dd24 100644
+--- a/drivers/usb/musb/musb_host.c
++++ b/drivers/usb/musb/musb_host.c
+@@ -2514,7 +2514,7 @@ static int musb_bus_suspend(struct usb_hcd *hcd)
+ if (!is_host_active(musb))
+ return 0;
+
+- switch (musb->xceiv->otg->state) {
++ switch (musb_get_state(musb)) {
+ case OTG_STATE_A_SUSPEND:
+ return 0;
+ case OTG_STATE_A_WAIT_VRISE:
+@@ -2524,7 +2524,7 @@ static int musb_bus_suspend(struct usb_hcd *hcd)
+ */
+ devctl = musb_readb(musb->mregs, MUSB_DEVCTL);
+ if ((devctl & MUSB_DEVCTL_VBUS) == MUSB_DEVCTL_VBUS)
+- musb->xceiv->otg->state = OTG_STATE_A_WAIT_BCON;
++ musb_set_state(musb, OTG_STATE_A_WAIT_BCON);
+ break;
+ default:
+ break;
+@@ -2733,7 +2733,7 @@ int musb_host_setup(struct musb *musb, int power_budget)
+
+ if (musb->port_mode == MUSB_HOST) {
+ MUSB_HST_MODE(musb);
+- musb->xceiv->otg->state = OTG_STATE_A_IDLE;
++ musb_set_state(musb, OTG_STATE_A_IDLE);
+ }
+ otg_set_host(musb->xceiv->otg, &hcd->self);
+ /* don't support otg protocols */
+diff --git a/drivers/usb/musb/musb_virthub.c b/drivers/usb/musb/musb_virthub.c
+index cafc69536e1d78..d1cfd45d69e367 100644
+--- a/drivers/usb/musb/musb_virthub.c
++++ b/drivers/usb/musb/musb_virthub.c
+@@ -43,7 +43,7 @@ void musb_host_finish_resume(struct work_struct *work)
+ musb->port1_status |= USB_PORT_STAT_C_SUSPEND << 16;
+ usb_hcd_poll_rh_status(musb->hcd);
+ /* NOTE: it might really be A_WAIT_BCON ... */
+- musb->xceiv->otg->state = OTG_STATE_A_HOST;
++ musb_set_state(musb, OTG_STATE_A_HOST);
+
+ spin_unlock_irqrestore(&musb->lock, flags);
+ }
+@@ -85,9 +85,9 @@ int musb_port_suspend(struct musb *musb, bool do_suspend)
+ musb_dbg(musb, "Root port suspended, power %02x", power);
+
+ musb->port1_status |= USB_PORT_STAT_SUSPEND;
+- switch (musb->xceiv->otg->state) {
++ switch (musb_get_state(musb)) {
+ case OTG_STATE_A_HOST:
+- musb->xceiv->otg->state = OTG_STATE_A_SUSPEND;
++ musb_set_state(musb, OTG_STATE_A_SUSPEND);
+ musb->is_active = otg->host->b_hnp_enable;
+ if (musb->is_active)
+ mod_timer(&musb->otg_timer, jiffies
+@@ -96,7 +96,7 @@ int musb_port_suspend(struct musb *musb, bool do_suspend)
+ musb_platform_try_idle(musb, 0);
+ break;
+ case OTG_STATE_B_HOST:
+- musb->xceiv->otg->state = OTG_STATE_B_WAIT_ACON;
++ musb_set_state(musb, OTG_STATE_B_WAIT_ACON);
+ musb->is_active = otg->host->b_hnp_enable;
+ musb_platform_try_idle(musb, 0);
+ break;
+@@ -123,7 +123,7 @@ void musb_port_reset(struct musb *musb, bool do_reset)
+ u8 power;
+ void __iomem *mbase = musb->mregs;
+
+- if (musb->xceiv->otg->state == OTG_STATE_B_IDLE) {
++ if (musb_get_state(musb) == OTG_STATE_B_IDLE) {
+ musb_dbg(musb, "HNP: Returning from HNP; no hub reset from b_idle");
+ musb->port1_status &= ~USB_PORT_STAT_RESET;
+ return;
+@@ -204,20 +204,20 @@ void musb_root_disconnect(struct musb *musb)
+ usb_hcd_poll_rh_status(musb->hcd);
+ musb->is_active = 0;
+
+- switch (musb->xceiv->otg->state) {
++ switch (musb_get_state(musb)) {
+ case OTG_STATE_A_SUSPEND:
+ if (otg->host->b_hnp_enable) {
+- musb->xceiv->otg->state = OTG_STATE_A_PERIPHERAL;
++ musb_set_state(musb, OTG_STATE_A_PERIPHERAL);
+ musb->g.is_a_peripheral = 1;
+ break;
+ }
+ fallthrough;
+ case OTG_STATE_A_HOST:
+- musb->xceiv->otg->state = OTG_STATE_A_WAIT_BCON;
++ musb_set_state(musb, OTG_STATE_A_WAIT_BCON);
+ musb->is_active = 0;
+ break;
+ case OTG_STATE_A_WAIT_VFALL:
+- musb->xceiv->otg->state = OTG_STATE_B_IDLE;
++ musb_set_state(musb, OTG_STATE_B_IDLE);
+ break;
+ default:
+ musb_dbg(musb, "host disconnect (%s)",
+diff --git a/drivers/usb/musb/omap2430.c b/drivers/usb/musb/omap2430.c
+index 1d435e4ee857d3..baa95e22e602d7 100644
+--- a/drivers/usb/musb/omap2430.c
++++ b/drivers/usb/musb/omap2430.c
+@@ -403,13 +403,13 @@ static int omap2430_probe(struct platform_device *pdev)
+ ARRAY_SIZE(musb_resources));
+ if (ret) {
+ dev_err(&pdev->dev, "failed to add resources\n");
+- goto err2;
++ goto err_put_control_otghs;
+ }
+
+ ret = platform_device_add_data(musb, pdata, sizeof(*pdata));
+ if (ret) {
+ dev_err(&pdev->dev, "failed to add platform_data\n");
+- goto err2;
++ goto err_put_control_otghs;
+ }
+
+ pm_runtime_enable(glue->dev);
+@@ -424,7 +424,9 @@ static int omap2430_probe(struct platform_device *pdev)
+
+ err3:
+ pm_runtime_disable(glue->dev);
+-
++err_put_control_otghs:
++ if (!IS_ERR(glue->control_otghs))
++ put_device(glue->control_otghs);
+ err2:
+ platform_device_put(musb);
+
+@@ -438,6 +440,8 @@ static int omap2430_remove(struct platform_device *pdev)
+
+ platform_device_unregister(glue->musb);
+ pm_runtime_disable(glue->dev);
++ if (!IS_ERR(glue->control_otghs))
++ put_device(glue->control_otghs);
+
+ return 0;
+ }
+diff --git a/drivers/usb/phy/phy-mxs-usb.c b/drivers/usb/phy/phy-mxs-usb.c
+index e3cddcac3252ce..214db28f3c151c 100644
+--- a/drivers/usb/phy/phy-mxs-usb.c
++++ b/drivers/usb/phy/phy-mxs-usb.c
+@@ -394,6 +394,7 @@ static bool mxs_phy_is_otg_host(struct mxs_phy *mxs_phy)
+ static void mxs_phy_disconnect_line(struct mxs_phy *mxs_phy, bool on)
+ {
+ bool vbus_is_on = false;
++ enum usb_phy_events last_event = mxs_phy->phy.last_event;
+
+ /* If the SoCs don't need to disconnect line without vbus, quit */
+ if (!(mxs_phy->data->flags & MXS_PHY_DISCONNECT_LINE_WITHOUT_VBUS))
+@@ -405,7 +406,8 @@ static void mxs_phy_disconnect_line(struct mxs_phy *mxs_phy, bool on)
+
+ vbus_is_on = mxs_phy_get_vbus_status(mxs_phy);
+
+- if (on && !vbus_is_on && !mxs_phy_is_otg_host(mxs_phy))
++ if (on && ((!vbus_is_on && !mxs_phy_is_otg_host(mxs_phy))
++ || (last_event == USB_EVENT_VBUS)))
+ __mxs_phy_disconnect_line(mxs_phy, true);
+ else
+ __mxs_phy_disconnect_line(mxs_phy, false);
+diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c
+index 749c8630cd0cba..ece0d3e2350067 100644
+--- a/drivers/usb/serial/ftdi_sio.c
++++ b/drivers/usb/serial/ftdi_sio.c
+@@ -781,6 +781,8 @@ static const struct usb_device_id id_table_combined[] = {
+ .driver_info = (kernel_ulong_t)&ftdi_NDI_device_quirk },
+ { USB_DEVICE(FTDI_VID, FTDI_NDI_AURORA_SCU_PID),
+ .driver_info = (kernel_ulong_t)&ftdi_NDI_device_quirk },
++ { USB_DEVICE(FTDI_NDI_VID, FTDI_NDI_EMGUIDE_GEMINI_PID),
++ .driver_info = (kernel_ulong_t)&ftdi_NDI_device_quirk },
+ { USB_DEVICE(TELLDUS_VID, TELLDUS_TELLSTICK_PID) },
+ { USB_DEVICE(NOVITUS_VID, NOVITUS_BONO_E_PID) },
+ { USB_DEVICE(FTDI_VID, RTSYSTEMS_USB_VX8_PID) },
+diff --git a/drivers/usb/serial/ftdi_sio_ids.h b/drivers/usb/serial/ftdi_sio_ids.h
+index 9c95ca876baeaf..324065cc352cdd 100644
+--- a/drivers/usb/serial/ftdi_sio_ids.h
++++ b/drivers/usb/serial/ftdi_sio_ids.h
+@@ -197,6 +197,9 @@
+ #define FTDI_NDI_FUTURE_3_PID 0xDA73 /* NDI future device #3 */
+ #define FTDI_NDI_AURORA_SCU_PID 0xDA74 /* NDI Aurora SCU */
+
++#define FTDI_NDI_VID 0x23F2
++#define FTDI_NDI_EMGUIDE_GEMINI_PID 0x0003 /* NDI Emguide Gemini */
++
+ /*
+ * ChamSys Limited (www.chamsys.co.uk) USB wing/interface product IDs
+ */
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 99fa4fb5692081..f1a99519bbd551 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -1415,6 +1415,9 @@ static const struct usb_device_id option_ids[] = {
+ .driver_info = NCTRL(5) },
+ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d0, 0xff, 0xff, 0x40) },
+ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d0, 0xff, 0xff, 0x60) },
++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10c7, 0xff, 0xff, 0x30), /* Telit FE910C04 (ECM) */
++ .driver_info = NCTRL(4) },
++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10c7, 0xff, 0xff, 0x40) },
+ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d1, 0xff, 0xff, 0x30), /* Telit FN990B (MBIM) */
+ .driver_info = NCTRL(6) },
+ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d1, 0xff, 0xff, 0x40) },
+@@ -2343,6 +2346,10 @@ static const struct usb_device_id option_ids[] = {
+ .driver_info = RSVD(3) },
+ { USB_DEVICE_INTERFACE_CLASS(0x0489, 0xe145, 0xff), /* Foxconn T99W651 RNDIS */
+ .driver_info = RSVD(5) | RSVD(6) },
++ { USB_DEVICE_INTERFACE_CLASS(0x0489, 0xe15f, 0xff), /* Foxconn T99W709 */
++ .driver_info = RSVD(5) },
++ { USB_DEVICE_INTERFACE_CLASS(0x0489, 0xe167, 0xff), /* Foxconn T99W640 MBIM */
++ .driver_info = RSVD(3) },
+ { USB_DEVICE(0x1508, 0x1001), /* Fibocom NL668 (IOT version) */
+ .driver_info = RSVD(4) | RSVD(5) | RSVD(6) },
+ { USB_DEVICE(0x1782, 0x4d10) }, /* Fibocom L610 (AT mode) */
+diff --git a/drivers/usb/storage/realtek_cr.c b/drivers/usb/storage/realtek_cr.c
+index 0c423916d7bfa4..a026c6cb6e684b 100644
+--- a/drivers/usb/storage/realtek_cr.c
++++ b/drivers/usb/storage/realtek_cr.c
+@@ -252,7 +252,7 @@ static int rts51x_bulk_transport(struct us_data *us, u8 lun,
+ return USB_STOR_TRANSPORT_ERROR;
+ }
+
+- residue = bcs->Residue;
++ residue = le32_to_cpu(bcs->Residue);
+ if (bcs->Tag != us->tag)
+ return USB_STOR_TRANSPORT_ERROR;
+
+diff --git a/drivers/usb/storage/unusual_devs.h b/drivers/usb/storage/unusual_devs.h
+index e7f45e60812d1e..b7bc46c104890b 100644
+--- a/drivers/usb/storage/unusual_devs.h
++++ b/drivers/usb/storage/unusual_devs.h
+@@ -934,6 +934,13 @@ UNUSUAL_DEV( 0x05e3, 0x0723, 0x9451, 0x9451,
+ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+ US_FL_SANE_SENSE ),
+
++/* Added by Maël GUERIN <mael.guerin@murena.io> */
++UNUSUAL_DEV( 0x0603, 0x8611, 0x0000, 0xffff,
++ "Novatek",
++ "NTK96550-based camera",
++ USB_SC_SCSI, USB_PR_BULK, NULL,
++ US_FL_BULK_IGNORE_TAG ),
++
+ /*
+ * Reported by Hanno Boeck <hanno@gmx.de>
+ * Taken from the Lycoris Kernel
+@@ -1483,6 +1490,28 @@ UNUSUAL_DEV( 0x0bc2, 0x3332, 0x0000, 0x9999,
+ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+ US_FL_NO_WP_DETECT ),
+
++/*
++ * Reported by Zenm Chen <zenmchen@gmail.com>
++ * Ignore driver CD mode, otherwise usb_modeswitch may fail to switch
++ * the device into Wi-Fi mode.
++ */
++UNUSUAL_DEV( 0x0bda, 0x1a2b, 0x0000, 0xffff,
++ "Realtek",
++ "DISK",
++ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
++ US_FL_IGNORE_DEVICE ),
++
++/*
++ * Reported by Zenm Chen <zenmchen@gmail.com>
++ * Ignore driver CD mode, otherwise usb_modeswitch may fail to switch
++ * the device into Wi-Fi mode.
++ */
++UNUSUAL_DEV( 0x0bda, 0xa192, 0x0000, 0xffff,
++ "Realtek",
++ "DISK",
++ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
++ US_FL_IGNORE_DEVICE ),
++
+ UNUSUAL_DEV( 0x0d49, 0x7310, 0x0000, 0x9999,
+ "Maxtor",
+ "USB to SATA",
+diff --git a/drivers/usb/typec/mux/intel_pmc_mux.c b/drivers/usb/typec/mux/intel_pmc_mux.c
+index 1276112edeff9e..9b4963450fe889 100644
+--- a/drivers/usb/typec/mux/intel_pmc_mux.c
++++ b/drivers/usb/typec/mux/intel_pmc_mux.c
+@@ -644,7 +644,7 @@ static int pmc_usb_probe(struct platform_device *pdev)
+
+ pmc->ipc = devm_intel_scu_ipc_dev_get(&pdev->dev);
+ if (!pmc->ipc)
+- return -ENODEV;
++ return -EPROBE_DEFER;
+
+ pmc->dev = &pdev->dev;
+
+diff --git a/drivers/usb/typec/tcpm/fusb302.c b/drivers/usb/typec/tcpm/fusb302.c
+index 700e38e9215238..f0eb65cf939328 100644
+--- a/drivers/usb/typec/tcpm/fusb302.c
++++ b/drivers/usb/typec/tcpm/fusb302.c
+@@ -103,6 +103,7 @@ struct fusb302_chip {
+ bool vconn_on;
+ bool vbus_on;
+ bool charge_on;
++ bool pd_rx_on;
+ bool vbus_present;
+ enum typec_cc_polarity cc_polarity;
+ enum typec_cc_status cc1;
+@@ -841,6 +842,11 @@ static int tcpm_set_pd_rx(struct tcpc_dev *dev, bool on)
+ int ret = 0;
+
+ mutex_lock(&chip->lock);
++ if (chip->pd_rx_on == on) {
++ fusb302_log(chip, "pd is already %s", on ? "on" : "off");
++ goto done;
++ }
++
+ ret = fusb302_pd_rx_flush(chip);
+ if (ret < 0) {
+ fusb302_log(chip, "cannot flush pd rx buffer, ret=%d", ret);
+@@ -863,6 +869,8 @@ static int tcpm_set_pd_rx(struct tcpc_dev *dev, bool on)
+ on ? "on" : "off", ret);
+ goto done;
+ }
++
++ chip->pd_rx_on = on;
+ fusb302_log(chip, "pd := %s", on ? "on" : "off");
+ done:
+ mutex_unlock(&chip->lock);
+diff --git a/drivers/usb/typec/ucsi/psy.c b/drivers/usb/typec/ucsi/psy.c
+index 571a51e1623461..ba5f797156dcb4 100644
+--- a/drivers/usb/typec/ucsi/psy.c
++++ b/drivers/usb/typec/ucsi/psy.c
+@@ -142,7 +142,7 @@ static int ucsi_psy_get_current_max(struct ucsi_connector *con,
+ case UCSI_CONSTAT_PWR_OPMODE_DEFAULT:
+ /* UCSI can't tell b/w DCP/CDP or USB2/3x1/3x2 SDP chargers */
+ default:
+- val->intval = 0;
++ val->intval = UCSI_TYPEC_DEFAULT_CURRENT * 1000;
+ break;
+ }
+ return 0;
+diff --git a/drivers/usb/typec/ucsi/ucsi.c b/drivers/usb/typec/ucsi/ucsi.c
+index ee625f47029ae7..0851d93d590961 100644
+--- a/drivers/usb/typec/ucsi/ucsi.c
++++ b/drivers/usb/typec/ucsi/ucsi.c
+@@ -779,6 +779,7 @@ static void ucsi_handle_connector_change(struct work_struct *work)
+
+ if (con->status.change & UCSI_CONSTAT_CONNECT_CHANGE) {
+ typec_set_pwr_role(con->port, role);
++ ucsi_port_psy_changed(con);
+
+ switch (UCSI_CONSTAT_PARTNER_TYPE(con->status.flags)) {
+ case UCSI_CONSTAT_PARTNER_TYPE_UFP:
+diff --git a/drivers/usb/typec/ucsi/ucsi.h b/drivers/usb/typec/ucsi/ucsi.h
+index f75b1e2c05fec7..ed8fcd7ecf2182 100644
+--- a/drivers/usb/typec/ucsi/ucsi.h
++++ b/drivers/usb/typec/ucsi/ucsi.h
+@@ -305,9 +305,10 @@ struct ucsi {
+ #define UCSI_MAX_SVID 5
+ #define UCSI_MAX_ALTMODES (UCSI_MAX_SVID * 6)
+
+-#define UCSI_TYPEC_VSAFE5V 5000
+-#define UCSI_TYPEC_1_5_CURRENT 1500
+-#define UCSI_TYPEC_3_0_CURRENT 3000
++#define UCSI_TYPEC_VSAFE5V 5000
++#define UCSI_TYPEC_DEFAULT_CURRENT 100
++#define UCSI_TYPEC_1_5_CURRENT 1500
++#define UCSI_TYPEC_3_0_CURRENT 3000
+
+ struct ucsi_connector {
+ int num;
+diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
+index fcde3752b4f1b3..6956b4e0b9be71 100644
+--- a/drivers/vhost/scsi.c
++++ b/drivers/vhost/scsi.c
+@@ -927,10 +927,8 @@ vhost_scsi_get_req(struct vhost_virtqueue *vq, struct vhost_scsi_ctx *vc,
+ /* validated at handler entry */
+ vs_tpg = vhost_vq_get_backend(vq);
+ tpg = READ_ONCE(vs_tpg[*vc->target]);
+- if (unlikely(!tpg)) {
+- vq_err(vq, "Target 0x%x does not exist\n", *vc->target);
++ if (unlikely(!tpg))
+ goto out;
+- }
+ }
+
+ if (tpgp)
+diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
+index 8ed9c9b63eb167..97e00c481870e7 100644
+--- a/drivers/vhost/vhost.c
++++ b/drivers/vhost/vhost.c
+@@ -2416,6 +2416,9 @@ int vhost_add_used_n(struct vhost_virtqueue *vq, struct vring_used_elem *heads,
+ }
+ r = __vhost_add_used_n(vq, heads, count);
+
++ if (r < 0)
++ return r;
++
+ /* Make sure buffer is written before we update index. */
+ smp_wmb();
+ if (vhost_put_used_idx(vq)) {
+diff --git a/drivers/video/console/vgacon.c b/drivers/video/console/vgacon.c
+index ae555fa5f583b8..042e166d926824 100644
+--- a/drivers/video/console/vgacon.c
++++ b/drivers/video/console/vgacon.c
+@@ -1200,7 +1200,7 @@ static bool vgacon_scroll(struct vc_data *c, unsigned int t, unsigned int b,
+ c->vc_screenbuf_size - delta);
+ c->vc_origin = vga_vram_end - c->vc_screenbuf_size;
+ vga_rolled_over = 0;
+- } else if (oldo - delta >= (unsigned long)c->vc_screenbuf)
++ } else
+ c->vc_origin -= delta;
+ c->vc_scr_end = c->vc_origin + c->vc_screenbuf_size;
+ scr_memsetw((u16 *) (c->vc_origin), c->vc_video_erase_char,
+diff --git a/drivers/video/fbdev/imxfb.c b/drivers/video/fbdev/imxfb.c
+index d663e080b15717..91f041f3a56f2c 100644
+--- a/drivers/video/fbdev/imxfb.c
++++ b/drivers/video/fbdev/imxfb.c
+@@ -1007,8 +1007,13 @@ static int imxfb_probe(struct platform_device *pdev)
+
+
+ INIT_LIST_HEAD(&info->modelist);
+- for (i = 0; i < fbi->num_modes; i++)
+- fb_add_videomode(&fbi->mode[i].mode, &info->modelist);
++ for (i = 0; i < fbi->num_modes; i++) {
++ ret = fb_add_videomode(&fbi->mode[i].mode, &info->modelist);
++ if (ret) {
++ dev_err(&pdev->dev, "Failed to add videomode\n");
++ goto failed_cmap;
++ }
++ }
+
+ /*
+ * This makes sure that our colour bitfield
+diff --git a/drivers/watchdog/dw_wdt.c b/drivers/watchdog/dw_wdt.c
+index 3cd11828198096..d18530bafc4e3c 100644
+--- a/drivers/watchdog/dw_wdt.c
++++ b/drivers/watchdog/dw_wdt.c
+@@ -661,6 +661,8 @@ static int dw_wdt_drv_probe(struct platform_device *pdev)
+ } else {
+ wdd->timeout = DW_WDT_DEFAULT_SECONDS;
+ watchdog_init_timeout(wdd, 0, dev);
++ /* Limit timeout value to hardware constraints. */
++ dw_wdt_set_timeout(wdd, wdd->timeout);
+ }
+
+ platform_set_drvdata(pdev, dw_wdt);
+diff --git a/drivers/watchdog/ziirave_wdt.c b/drivers/watchdog/ziirave_wdt.c
+index cab86a08456bcd..3cfab859e5074d 100644
+--- a/drivers/watchdog/ziirave_wdt.c
++++ b/drivers/watchdog/ziirave_wdt.c
+@@ -306,6 +306,9 @@ static int ziirave_firm_verify(struct watchdog_device *wdd,
+ const u16 len = be16_to_cpu(rec->len);
+ const u32 addr = be32_to_cpu(rec->addr);
+
++ if (len > sizeof(data))
++ return -EINVAL;
++
+ if (ziirave_firm_addr_readonly(addr))
+ continue;
+
+diff --git a/drivers/xen/gntdev-common.h b/drivers/xen/gntdev-common.h
+index 9c286b2a190016..ac8ce3179ba2e9 100644
+--- a/drivers/xen/gntdev-common.h
++++ b/drivers/xen/gntdev-common.h
+@@ -26,6 +26,10 @@ struct gntdev_priv {
+ /* lock protects maps and freeable_maps. */
+ struct mutex lock;
+
++ /* Free instances of struct gntdev_copy_batch. */
++ struct gntdev_copy_batch *batch;
++ struct mutex batch_lock;
++
+ #ifdef CONFIG_XEN_GRANT_DMA_ALLOC
+ /* Device for which DMA memory is allocated. */
+ struct device *dma_dev;
+diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
+index 16acddaff9aead..8b1fa03ac1e52c 100644
+--- a/drivers/xen/gntdev.c
++++ b/drivers/xen/gntdev.c
+@@ -56,6 +56,18 @@ MODULE_AUTHOR("Derek G. Murray <Derek.Murray@cl.cam.ac.uk>, "
+ "Gerd Hoffmann <kraxel@redhat.com>");
+ MODULE_DESCRIPTION("User-space granted page access driver");
+
++#define GNTDEV_COPY_BATCH 16
++
++struct gntdev_copy_batch {
++ struct gnttab_copy ops[GNTDEV_COPY_BATCH];
++ struct page *pages[GNTDEV_COPY_BATCH];
++ s16 __user *status[GNTDEV_COPY_BATCH];
++ unsigned int nr_ops;
++ unsigned int nr_pages;
++ bool writeable;
++ struct gntdev_copy_batch *next;
++};
++
+ static unsigned int limit = 64*1024;
+ module_param(limit, uint, 0644);
+ MODULE_PARM_DESC(limit,
+@@ -574,6 +586,8 @@ static int gntdev_open(struct inode *inode, struct file *flip)
+ INIT_LIST_HEAD(&priv->maps);
+ mutex_init(&priv->lock);
+
++ mutex_init(&priv->batch_lock);
++
+ #ifdef CONFIG_XEN_GNTDEV_DMABUF
+ priv->dmabuf_priv = gntdev_dmabuf_init(flip);
+ if (IS_ERR(priv->dmabuf_priv)) {
+@@ -598,6 +612,7 @@ static int gntdev_release(struct inode *inode, struct file *flip)
+ {
+ struct gntdev_priv *priv = flip->private_data;
+ struct gntdev_grant_map *map;
++ struct gntdev_copy_batch *batch;
+
+ pr_debug("priv %p\n", priv);
+
+@@ -610,6 +625,14 @@ static int gntdev_release(struct inode *inode, struct file *flip)
+ }
+ mutex_unlock(&priv->lock);
+
++ mutex_lock(&priv->batch_lock);
++ while (priv->batch) {
++ batch = priv->batch;
++ priv->batch = batch->next;
++ kfree(batch);
++ }
++ mutex_unlock(&priv->batch_lock);
++
+ #ifdef CONFIG_XEN_GNTDEV_DMABUF
+ gntdev_dmabuf_fini(priv->dmabuf_priv);
+ #endif
+@@ -775,17 +798,6 @@ static long gntdev_ioctl_notify(struct gntdev_priv *priv, void __user *u)
+ return rc;
+ }
+
+-#define GNTDEV_COPY_BATCH 16
+-
+-struct gntdev_copy_batch {
+- struct gnttab_copy ops[GNTDEV_COPY_BATCH];
+- struct page *pages[GNTDEV_COPY_BATCH];
+- s16 __user *status[GNTDEV_COPY_BATCH];
+- unsigned int nr_ops;
+- unsigned int nr_pages;
+- bool writeable;
+-};
+-
+ static int gntdev_get_page(struct gntdev_copy_batch *batch, void __user *virt,
+ unsigned long *gfn)
+ {
+@@ -943,36 +955,53 @@ static int gntdev_grant_copy_seg(struct gntdev_copy_batch *batch,
+ static long gntdev_ioctl_grant_copy(struct gntdev_priv *priv, void __user *u)
+ {
+ struct ioctl_gntdev_grant_copy copy;
+- struct gntdev_copy_batch batch;
++ struct gntdev_copy_batch *batch;
+ unsigned int i;
+ int ret = 0;
+
+ if (copy_from_user(&copy, u, sizeof(copy)))
+ return -EFAULT;
+
+- batch.nr_ops = 0;
+- batch.nr_pages = 0;
++ mutex_lock(&priv->batch_lock);
++ if (!priv->batch) {
++ batch = kmalloc(sizeof(*batch), GFP_KERNEL);
++ } else {
++ batch = priv->batch;
++ priv->batch = batch->next;
++ }
++ mutex_unlock(&priv->batch_lock);
++ if (!batch)
++ return -ENOMEM;
++
++ batch->nr_ops = 0;
++ batch->nr_pages = 0;
+
+ for (i = 0; i < copy.count; i++) {
+ struct gntdev_grant_copy_segment seg;
+
+ if (copy_from_user(&seg, &copy.segments[i], sizeof(seg))) {
+ ret = -EFAULT;
++ gntdev_put_pages(batch);
+ goto out;
+ }
+
+- ret = gntdev_grant_copy_seg(&batch, &seg, &copy.segments[i].status);
+- if (ret < 0)
++ ret = gntdev_grant_copy_seg(batch, &seg, &copy.segments[i].status);
++ if (ret < 0) {
++ gntdev_put_pages(batch);
+ goto out;
++ }
+
+ cond_resched();
+ }
+- if (batch.nr_ops)
+- ret = gntdev_copy(&batch);
+- return ret;
++ if (batch->nr_ops)
++ ret = gntdev_copy(batch);
++
++ out:
++ mutex_lock(&priv->batch_lock);
++ batch->next = priv->batch;
++ priv->batch = batch;
++ mutex_unlock(&priv->batch_lock);
+
+- out:
+- gntdev_put_pages(&batch);
+ return ret;
+ }
+
+diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
+index 7ad3091db57151..d9d6a57acafef6 100644
+--- a/fs/btrfs/ctree.h
++++ b/fs/btrfs/ctree.h
+@@ -3013,7 +3013,7 @@ int btrfs_truncate_inode_items(struct btrfs_trans_handle *trans,
+ struct inode *inode, u64 new_size,
+ u32 min_type);
+
+-int btrfs_start_delalloc_snapshot(struct btrfs_root *root);
++int btrfs_start_delalloc_snapshot(struct btrfs_root *root, bool in_reclaim_context);
+ int btrfs_start_delalloc_roots(struct btrfs_fs_info *fs_info, u64 nr,
+ bool in_reclaim_context);
+ int btrfs_set_extent_delalloc(struct btrfs_inode *inode, u64 start, u64 end,
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 82805ac91b06cc..7e66ebb91af785 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -9566,7 +9566,7 @@ static int start_delalloc_inodes(struct btrfs_root *root,
+ return ret;
+ }
+
+-int btrfs_start_delalloc_snapshot(struct btrfs_root *root)
++int btrfs_start_delalloc_snapshot(struct btrfs_root *root, bool in_reclaim_context)
+ {
+ struct writeback_control wbc = {
+ .nr_to_write = LONG_MAX,
+@@ -9579,7 +9579,7 @@ int btrfs_start_delalloc_snapshot(struct btrfs_root *root)
+ if (test_bit(BTRFS_FS_STATE_ERROR, &fs_info->fs_state))
+ return -EROFS;
+
+- return start_delalloc_inodes(root, &wbc, true, false);
++ return start_delalloc_inodes(root, &wbc, true, in_reclaim_context);
+ }
+
+ int btrfs_start_delalloc_roots(struct btrfs_fs_info *fs_info, u64 nr,
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index 24c4d059cfabba..9d5dfcec22de10 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -1030,7 +1030,7 @@ static noinline int btrfs_mksnapshot(const struct path *parent,
+ */
+ btrfs_drew_read_lock(&root->snapshot_lock);
+
+- ret = btrfs_start_delalloc_snapshot(root);
++ ret = btrfs_start_delalloc_snapshot(root, false);
+ if (ret)
+ goto out;
+
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index 95a39d535a8213..bc1feb97698c9d 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -3704,7 +3704,7 @@ static int try_flush_qgroup(struct btrfs_root *root)
+ return 0;
+ }
+
+- ret = btrfs_start_delalloc_snapshot(root);
++ ret = btrfs_start_delalloc_snapshot(root, true);
+ if (ret < 0)
+ goto out;
+ btrfs_wait_ordered_extents(root, U64_MAX, 0, (u64)-1);
+diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
+index 3e7bb24eb22764..d86b4d13cae484 100644
+--- a/fs/btrfs/send.c
++++ b/fs/btrfs/send.c
+@@ -7207,7 +7207,7 @@ static int flush_delalloc_roots(struct send_ctx *sctx)
+ int i;
+
+ if (root) {
+- ret = btrfs_start_delalloc_snapshot(root);
++ ret = btrfs_start_delalloc_snapshot(root, false);
+ if (ret)
+ return ret;
+ btrfs_wait_ordered_extents(root, U64_MAX, 0, U64_MAX);
+@@ -7215,7 +7215,7 @@ static int flush_delalloc_roots(struct send_ctx *sctx)
+
+ for (i = 0; i < sctx->clone_roots_cnt; i++) {
+ root = sctx->clone_roots[i].root;
+- ret = btrfs_start_delalloc_snapshot(root);
++ ret = btrfs_start_delalloc_snapshot(root, false);
+ if (ret)
+ return ret;
+ btrfs_wait_ordered_extents(root, U64_MAX, 0, U64_MAX);
+diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c
+index 21a5a963c70e3c..f68cfcc1f8300f 100644
+--- a/fs/btrfs/transaction.c
++++ b/fs/btrfs/transaction.c
+@@ -2045,7 +2045,7 @@ static inline int btrfs_start_delalloc_flush(struct btrfs_trans_handle *trans)
+ list_for_each_entry(pending, head, list) {
+ int ret;
+
+- ret = btrfs_start_delalloc_snapshot(pending->root);
++ ret = btrfs_start_delalloc_snapshot(pending->root, false);
+ if (ret)
+ return ret;
+ }
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index dd1c40019412cb..6d715bb773643c 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -272,8 +272,7 @@ struct walk_control {
+
+ /*
+ * Ignore any items from the inode currently being processed. Needs
+- * to be set every time we find a BTRFS_INODE_ITEM_KEY and we are in
+- * the LOG_WALK_REPLAY_INODES stage.
++ * to be set every time we find a BTRFS_INODE_ITEM_KEY.
+ */
+ bool ignore_cur_inode;
+
+@@ -2581,23 +2580,30 @@ static int replay_one_buffer(struct btrfs_root *log, struct extent_buffer *eb,
+
+ nritems = btrfs_header_nritems(eb);
+ for (i = 0; i < nritems; i++) {
+- btrfs_item_key_to_cpu(eb, &key, i);
++ struct btrfs_inode_item *inode_item;
+
+- /* inode keys are done during the first stage */
+- if (key.type == BTRFS_INODE_ITEM_KEY &&
+- wc->stage == LOG_WALK_REPLAY_INODES) {
+- struct btrfs_inode_item *inode_item;
+- u32 mode;
++ btrfs_item_key_to_cpu(eb, &key, i);
+
+- inode_item = btrfs_item_ptr(eb, i,
+- struct btrfs_inode_item);
++ if (key.type == BTRFS_INODE_ITEM_KEY) {
++ inode_item = btrfs_item_ptr(eb, i, struct btrfs_inode_item);
+ /*
+- * If we have a tmpfile (O_TMPFILE) that got fsync'ed
+- * and never got linked before the fsync, skip it, as
+- * replaying it is pointless since it would be deleted
+- * later. We skip logging tmpfiles, but it's always
+- * possible we are replaying a log created with a kernel
+- * that used to log tmpfiles.
++ * An inode with no links is either:
++ *
++ * 1) A tmpfile (O_TMPFILE) that got fsync'ed and never
++ * got linked before the fsync, skip it, as replaying
++ * it is pointless since it would be deleted later.
++ * We skip logging tmpfiles, but it's always possible
++ * we are replaying a log created with a kernel that
++ * used to log tmpfiles;
++ *
++ * 2) A non-tmpfile which got its last link deleted
++ * while holding an open fd on it and later got
++ * fsynced through that fd. We always log the
++ * parent inodes when inode->last_unlink_trans is
++ * set to the current transaction, so ignore all the
++ * inode items for this inode. We will delete the
++ * inode when processing the parent directory with
++ * replay_dir_deletes().
+ */
+ if (btrfs_inode_nlink(eb, inode_item) == 0) {
+ wc->ignore_cur_inode = true;
+@@ -2605,8 +2611,14 @@ static int replay_one_buffer(struct btrfs_root *log, struct extent_buffer *eb,
+ } else {
+ wc->ignore_cur_inode = false;
+ }
+- ret = replay_xattr_deletes(wc->trans, root, log,
+- path, key.objectid);
++ }
++
++ /* Inode keys are done during the first stage. */
++ if (key.type == BTRFS_INODE_ITEM_KEY &&
++ wc->stage == LOG_WALK_REPLAY_INODES) {
++ u32 mode;
++
++ ret = replay_xattr_deletes(wc->trans, root, log, path, key.objectid);
+ if (ret)
+ break;
+ mode = btrfs_inode_mode(eb, inode_item);
+@@ -3909,6 +3921,11 @@ static void fill_inode_item(struct btrfs_trans_handle *trans,
+ btrfs_set_token_timespec_nsec(&token, &item->ctime,
+ inode->i_ctime.tv_nsec);
+
++ btrfs_set_token_timespec_sec(&token, &item->otime,
++ BTRFS_I(inode)->i_otime.tv_sec);
++ btrfs_set_token_timespec_nsec(&token, &item->otime,
++ BTRFS_I(inode)->i_otime.tv_nsec);
++
+ btrfs_set_token_inode_nbytes(&token, item, inode_get_bytes(inode));
+
+ btrfs_set_token_inode_sequence(&token, item, inode_peek_iversion(inode));
+diff --git a/fs/buffer.c b/fs/buffer.c
+index ee66abadcbc2b9..9c41306e8d821f 100644
+--- a/fs/buffer.c
++++ b/fs/buffer.c
+@@ -156,8 +156,8 @@ static void __end_buffer_read_notouch(struct buffer_head *bh, int uptodate)
+ */
+ void end_buffer_read_sync(struct buffer_head *bh, int uptodate)
+ {
+- __end_buffer_read_notouch(bh, uptodate);
+ put_bh(bh);
++ __end_buffer_read_notouch(bh, uptodate);
+ }
+ EXPORT_SYMBOL(end_buffer_read_sync);
+
+diff --git a/fs/cifs/cifssmb.c b/fs/cifs/cifssmb.c
+index 95992c93bbe347..a19e5e7c7d0f4f 100644
+--- a/fs/cifs/cifssmb.c
++++ b/fs/cifs/cifssmb.c
+@@ -4470,6 +4470,12 @@ CIFSFindFirst(const unsigned int xid, struct cifs_tcon *tcon,
+ pSMB->FileName[name_len] = 0;
+ pSMB->FileName[name_len+1] = 0;
+ name_len += 2;
++ } else if (!searchName[0]) {
++ pSMB->FileName[0] = CIFS_DIR_SEP(cifs_sb);
++ pSMB->FileName[1] = 0;
++ pSMB->FileName[2] = 0;
++ pSMB->FileName[3] = 0;
++ name_len = 4;
+ }
+ } else {
+ name_len = copy_path_name(pSMB->FileName, searchName);
+@@ -4481,6 +4487,10 @@ CIFSFindFirst(const unsigned int xid, struct cifs_tcon *tcon,
+ pSMB->FileName[name_len] = '*';
+ pSMB->FileName[name_len+1] = 0;
+ name_len += 2;
++ } else if (!searchName[0]) {
++ pSMB->FileName[0] = CIFS_DIR_SEP(cifs_sb);
++ pSMB->FileName[1] = 0;
++ name_len = 2;
+ }
+ }
+
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index aa1acc698caa2f..1297acb5bf8e56 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -4302,6 +4302,7 @@ crypt_message(struct TCP_Server_Info *server, int num_rqst,
+ u8 key[SMB3_ENC_DEC_KEY_SIZE];
+ struct aead_request *req;
+ u8 *iv;
++ DECLARE_CRYPTO_WAIT(wait);
+ unsigned int crypt_len = le32_to_cpu(tr_hdr->OriginalMessageSize);
+ void *creq;
+
+@@ -4350,7 +4351,11 @@ crypt_message(struct TCP_Server_Info *server, int num_rqst,
+ aead_request_set_crypt(req, sg, sg, crypt_len, iv);
+ aead_request_set_ad(req, assoc_data_len);
+
+- rc = enc ? crypto_aead_encrypt(req) : crypto_aead_decrypt(req);
++ aead_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
++ crypto_req_done, &wait);
++
++ rc = crypto_wait_req(enc ? crypto_aead_encrypt(req)
++ : crypto_aead_decrypt(req), &wait);
+
+ if (!rc && enc)
+ memcpy(&tr_hdr->Signature, sign, SMB2_SIGNATURE_SIZE);
+diff --git a/fs/cifs/smbdirect.c b/fs/cifs/smbdirect.c
+index 7d18b92688176f..ae332f3771f634 100644
+--- a/fs/cifs/smbdirect.c
++++ b/fs/cifs/smbdirect.c
+@@ -454,7 +454,6 @@ static void recv_done(struct ib_cq *cq, struct ib_wc *wc)
+ if (wc->status != IB_WC_SUCCESS || wc->opcode != IB_WC_RECV) {
+ log_rdma_recv(INFO, "wc->status=%d opcode=%d\n",
+ wc->status, wc->opcode);
+- smbd_disconnect_rdma_connection(info);
+ goto error;
+ }
+
+@@ -471,8 +470,9 @@ static void recv_done(struct ib_cq *cq, struct ib_wc *wc)
+ info->full_packet_received = true;
+ info->negotiate_done =
+ process_negotiation_response(response, wc->byte_len);
++ put_receive_buffer(info, response);
+ complete(&info->negotiate_completion);
+- break;
++ return;
+
+ /* SMBD data transfer packet */
+ case SMBD_TRANSFER_DATA:
+@@ -529,14 +529,16 @@ static void recv_done(struct ib_cq *cq, struct ib_wc *wc)
+ }
+
+ return;
+-
+- default:
+- log_rdma_recv(ERR,
+- "unexpected response type=%d\n", response->type);
+ }
+
++ /*
++ * This is an internal error!
++ */
++ log_rdma_recv(ERR, "unexpected response type=%d\n", response->type);
++ WARN_ON_ONCE(response->type != SMBD_TRANSFER_DATA);
+ error:
+ put_receive_buffer(info, response);
++ smbd_disconnect_rdma_connection(info);
+ }
+
+ static struct rdma_cm_id *smbd_create_id(
+diff --git a/fs/ext4/fsmap.c b/fs/ext4/fsmap.c
+index 53a05b8292f033..1b68586f73f3fe 100644
+--- a/fs/ext4/fsmap.c
++++ b/fs/ext4/fsmap.c
+@@ -393,6 +393,14 @@ static unsigned int ext4_getfsmap_find_sb(struct super_block *sb,
+ /* Reserved GDT blocks */
+ if (!ext4_has_feature_meta_bg(sb) || metagroup < first_meta_bg) {
+ len = le16_to_cpu(sbi->s_es->s_reserved_gdt_blocks);
++
++ /*
++ * mkfs.ext4 can set s_reserved_gdt_blocks as 0 in some cases,
++ * check for that.
++ */
++ if (!len)
++ return 0;
++
+ error = ext4_getfsmap_fill(meta_list, fsb, len,
+ EXT4_FMR_OWN_RESV_GDT);
+ if (error)
+@@ -526,6 +534,7 @@ static int ext4_getfsmap_datadev(struct super_block *sb,
+ ext4_group_t end_ag;
+ ext4_grpblk_t first_cluster;
+ ext4_grpblk_t last_cluster;
++ struct ext4_fsmap irec;
+ int error = 0;
+
+ bofs = le32_to_cpu(sbi->s_es->s_first_data_block);
+@@ -609,10 +618,18 @@ static int ext4_getfsmap_datadev(struct super_block *sb,
+ goto err;
+ }
+
+- /* Report any gaps at the end of the bg */
++ /*
++ * The dummy record below will cause ext4_getfsmap_helper() to report
++ * any allocated blocks at the end of the range.
++ */
++ irec.fmr_device = 0;
++ irec.fmr_physical = end_fsb + 1;
++ irec.fmr_length = 0;
++ irec.fmr_owner = EXT4_FMR_OWN_FREE;
++ irec.fmr_flags = 0;
++
+ info->gfi_last = true;
+- error = ext4_getfsmap_datadev_helper(sb, end_ag, last_cluster + 1,
+- 0, info);
++ error = ext4_getfsmap_helper(sb, info, &irec);
+ if (error)
+ goto err;
+
+diff --git a/fs/ext4/indirect.c b/fs/ext4/indirect.c
+index c2bb2ff3fbb6bf..48b368f13d306b 100644
+--- a/fs/ext4/indirect.c
++++ b/fs/ext4/indirect.c
+@@ -537,7 +537,7 @@ int ext4_ind_map_blocks(handle_t *handle, struct inode *inode,
+ int indirect_blks;
+ int blocks_to_boundary = 0;
+ int depth;
+- int count = 0;
++ u64 count = 0;
+ ext4_fsblk_t first_block = 0;
+
+ trace_ext4_ind_map_blocks_enter(inode, map->m_lblk, map->m_len, flags);
+@@ -586,7 +586,7 @@ int ext4_ind_map_blocks(handle_t *handle, struct inode *inode,
+ count++;
+ /* Fill in size of a hole we found */
+ map->m_pblk = 0;
+- map->m_len = min_t(unsigned int, map->m_len, count);
++ map->m_len = umin(map->m_len, count);
+ goto cleanup;
+ }
+
+diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c
+index 8ccbb3703954b3..f02fcaa62804f7 100644
+--- a/fs/ext4/inline.c
++++ b/fs/ext4/inline.c
+@@ -296,7 +296,11 @@ static int ext4_create_inline_data(handle_t *handle,
+ if (error)
+ goto out;
+
+- BUG_ON(!is.s.not_found);
++ if (!is.s.not_found) {
++ EXT4_ERROR_INODE(inode, "unexpected inline data xattr");
++ error = -EFSCORRUPTED;
++ goto out;
++ }
+
+ error = ext4_xattr_ibody_set(handle, inode, &i, &is);
+ if (error) {
+@@ -347,7 +351,11 @@ static int ext4_update_inline_data(handle_t *handle, struct inode *inode,
+ if (error)
+ goto out;
+
+- BUG_ON(is.s.not_found);
++ if (is.s.not_found) {
++ EXT4_ERROR_INODE(inode, "missing inline data xattr");
++ error = -EFSCORRUPTED;
++ goto out;
++ }
+
+ len -= EXT4_MIN_INLINE_DATA_SIZE;
+ value = kzalloc(len, GFP_NOFS);
+@@ -1939,7 +1947,12 @@ int ext4_inline_data_truncate(struct inode *inode, int *has_inline)
+ if ((err = ext4_xattr_ibody_find(inode, &i, &is)) != 0)
+ goto out_error;
+
+- BUG_ON(is.s.not_found);
++ if (is.s.not_found) {
++ EXT4_ERROR_INODE(inode,
++ "missing inline data xattr");
++ err = -EFSCORRUPTED;
++ goto out_error;
++ }
+
+ value_len = le32_to_cpu(is.s.here->e_value_size);
+ value = kmalloc(value_len, GFP_NOFS);
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index ef39d6b141a6e2..8fe4aa6b459302 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -148,7 +148,7 @@ static int ext4_meta_trans_blocks(struct inode *inode, int lblocks,
+ */
+ int ext4_inode_is_fast_symlink(struct inode *inode)
+ {
+- if (!(EXT4_I(inode)->i_flags & EXT4_EA_INODE_FL)) {
++ if (!ext4_has_feature_ea_inode(inode->i_sb)) {
+ int ea_blocks = EXT4_I(inode)->i_file_acl ?
+ EXT4_CLUSTER_SIZE(inode->i_sb) >> 9 : 0;
+
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index 4e42ca56da86a8..622e8a816f7e60 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -1154,7 +1154,7 @@ struct f2fs_bio_info {
+ #define RDEV(i) (raw_super->devs[i])
+ struct f2fs_dev_info {
+ struct block_device *bdev;
+- char path[MAX_PATH_LEN];
++ char path[MAX_PATH_LEN + 1];
+ unsigned int total_segments;
+ block_t start_blk;
+ block_t end_blk;
+diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c
+index 452c0240cc11eb..1b504bdf39c26f 100644
+--- a/fs/f2fs/inode.c
++++ b/fs/f2fs/inode.c
+@@ -222,6 +222,13 @@ static bool sanity_check_inode(struct inode *inode, struct page *node_page)
+ return false;
+ }
+
++ if (ino_of_node(node_page) == fi->i_xattr_nid) {
++ set_sbi_flag(sbi, SBI_NEED_FSCK);
++ f2fs_warn(sbi, "%s: corrupted inode i_ino=%lx, xnid=%x, run fsck to fix.",
++ __func__, inode->i_ino, fi->i_xattr_nid);
++ return false;
++ }
++
+ if (f2fs_sb_has_flexible_inline_xattr(sbi)
+ && !f2fs_has_extra_attr(inode)) {
+ set_sbi_flag(sbi, SBI_NEED_FSCK);
+@@ -798,6 +805,19 @@ void f2fs_evict_inode(struct inode *inode)
+ f2fs_update_inode_page(inode);
+ if (dquot_initialize_needed(inode))
+ set_sbi_flag(sbi, SBI_QUOTA_NEED_REPAIR);
++
++ /*
++ * If both f2fs_truncate() and f2fs_update_inode_page() failed
++ * due to fuzzed corrupted inode, call f2fs_inode_synced() to
++ * avoid triggering later f2fs_bug_on().
++ */
++ if (is_inode_flag_set(inode, FI_DIRTY_INODE)) {
++ f2fs_warn(sbi,
++ "f2fs_evict_inode: inode is dirty, ino:%lu",
++ inode->i_ino);
++ f2fs_inode_synced(inode);
++ set_sbi_flag(sbi, SBI_NEED_FSCK);
++ }
+ }
+ sb_end_intwrite(inode->i_sb);
+ no_delete:
+@@ -813,8 +833,12 @@ void f2fs_evict_inode(struct inode *inode)
+ if (likely(!f2fs_cp_error(sbi) &&
+ !is_sbi_flag_set(sbi, SBI_CP_DISABLED)))
+ f2fs_bug_on(sbi, is_inode_flag_set(inode, FI_DIRTY_INODE));
+- else
+- f2fs_inode_synced(inode);
++
++ /*
++ * anyway, it needs to remove the inode from sbi->inode_list[DIRTY_META]
++ * list to avoid UAF in f2fs_sync_inode_meta() during checkpoint.
++ */
++ f2fs_inode_synced(inode);
+
+ /* for the case f2fs_new_inode() was failed, .i_ino is zero, skip it */
+ if (inode->i_ino)
+diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
+index 57baaba17174d0..1dddb65e249adc 100644
+--- a/fs/f2fs/node.c
++++ b/fs/f2fs/node.c
+@@ -760,6 +760,16 @@ int f2fs_get_dnode_of_data(struct dnode_of_data *dn, pgoff_t index, int mode)
+ for (i = 1; i <= level; i++) {
+ bool done = false;
+
++ if (nids[i] && nids[i] == dn->inode->i_ino) {
++ err = -EFSCORRUPTED;
++ f2fs_err(sbi,
++ "inode mapping table is corrupted, run fsck to fix it, "
++ "ino:%lu, nid:%u, level:%d, offset:%d",
++ dn->inode->i_ino, nids[i], level, offset[level]);
++ set_sbi_flag(sbi, SBI_NEED_FSCK);
++ goto release_pages;
++ }
++
+ if (!nids[i] && mode == ALLOC_NODE) {
+ /* alloc new node */
+ if (!f2fs_alloc_nid(sbi, &(nids[i]))) {
+diff --git a/fs/file.c b/fs/file.c
+index 975b1227a2f6d9..c8fff3d7933676 100644
+--- a/fs/file.c
++++ b/fs/file.c
+@@ -90,18 +90,11 @@ static void copy_fdtable(struct fdtable *nfdt, struct fdtable *ofdt)
+ * 'unsigned long' in some places, but simply because that is how the Linux
+ * kernel bitmaps are defined to work: they are not "bits in an array of bytes",
+ * they are very much "bits in an array of unsigned long".
+- *
+- * The ALIGN(nr, BITS_PER_LONG) here is for clarity: since we just multiplied
+- * by that "1024/sizeof(ptr)" before, we already know there are sufficient
+- * clear low bits. Clang seems to realize that, gcc ends up being confused.
+- *
+- * On a 128-bit machine, the ALIGN() would actually matter. In the meantime,
+- * let's consider it documentation (and maybe a test-case for gcc to improve
+- * its code generation ;)
+ */
+-static struct fdtable * alloc_fdtable(unsigned int nr)
++static struct fdtable *alloc_fdtable(unsigned int slots_wanted)
+ {
+ struct fdtable *fdt;
++ unsigned int nr;
+ void *data;
+
+ /*
+@@ -109,22 +102,47 @@ static struct fdtable * alloc_fdtable(unsigned int nr)
+ * Allocation steps are keyed to the size of the fdarray, since it
+ * grows far faster than any of the other dynamic data. We try to fit
+ * the fdarray into comfortable page-tuned chunks: starting at 1024B
+- * and growing in powers of two from there on.
++ * and growing in powers of two from there on. Since we called only
++ * with slots_wanted > BITS_PER_LONG (embedded instance in files->fdtab
++ * already gives BITS_PER_LONG slots), the above boils down to
++ * 1. use the smallest power of two large enough to give us that many
++ * slots.
++ * 2. on 32bit skip 64 and 128 - the minimal capacity we want there is
++ * 256 slots (i.e. 1Kb fd array).
++ * 3. on 64bit don't skip anything, 1Kb fd array means 128 slots there
++ * and we are never going to be asked for 64 or less.
+ */
+- nr /= (1024 / sizeof(struct file *));
+- nr = roundup_pow_of_two(nr + 1);
+- nr *= (1024 / sizeof(struct file *));
+- nr = ALIGN(nr, BITS_PER_LONG);
++ if (IS_ENABLED(CONFIG_32BIT) && slots_wanted < 256)
++ nr = 256;
++ else
++ nr = roundup_pow_of_two(slots_wanted);
+ /*
+ * Note that this can drive nr *below* what we had passed if sysctl_nr_open
+- * had been set lower between the check in expand_files() and here. Deal
+- * with that in caller, it's cheaper that way.
++ * had been set lower between the check in expand_files() and here.
+ *
+ * We make sure that nr remains a multiple of BITS_PER_LONG - otherwise
+ * bitmaps handling below becomes unpleasant, to put it mildly...
+ */
+- if (unlikely(nr > sysctl_nr_open))
+- nr = ((sysctl_nr_open - 1) | (BITS_PER_LONG - 1)) + 1;
++ if (unlikely(nr > sysctl_nr_open)) {
++ nr = round_down(sysctl_nr_open, BITS_PER_LONG);
++ if (nr < slots_wanted)
++ return ERR_PTR(-EMFILE);
++ }
++
++ /*
++ * Check if the allocation size would exceed INT_MAX. kvmalloc_array()
++ * and kvmalloc() will warn if the allocation size is greater than
++ * INT_MAX, as filp_cachep objects are not __GFP_NOWARN.
++ *
++ * This can happen when sysctl_nr_open is set to a very high value and
++ * a process tries to use a file descriptor near that limit. For example,
++ * if sysctl_nr_open is set to 1073741816 (0x3ffffff8) - which is what
++ * systemd typically sets it to - then trying to use a file descriptor
++ * close to that value will require allocating a file descriptor table
++ * that exceeds 8GB in size.
++ */
++ if (unlikely(nr > INT_MAX / sizeof(struct file *)))
++ return ERR_PTR(-EMFILE);
+
+ fdt = kmalloc(sizeof(struct fdtable), GFP_KERNEL_ACCOUNT);
+ if (!fdt)
+@@ -153,7 +171,7 @@ static struct fdtable * alloc_fdtable(unsigned int nr)
+ out_fdt:
+ kfree(fdt);
+ out:
+- return NULL;
++ return ERR_PTR(-ENOMEM);
+ }
+
+ /*
+@@ -170,7 +188,7 @@ static int expand_fdtable(struct files_struct *files, unsigned int nr)
+ struct fdtable *new_fdt, *cur_fdt;
+
+ spin_unlock(&files->file_lock);
+- new_fdt = alloc_fdtable(nr);
++ new_fdt = alloc_fdtable(nr + 1);
+
+ /* make sure all fd_install() have seen resize_in_progress
+ * or have finished their rcu_read_lock_sched() section.
+@@ -179,16 +197,8 @@ static int expand_fdtable(struct files_struct *files, unsigned int nr)
+ synchronize_rcu();
+
+ spin_lock(&files->file_lock);
+- if (!new_fdt)
+- return -ENOMEM;
+- /*
+- * extremely unlikely race - sysctl_nr_open decreased between the check in
+- * caller and alloc_fdtable(). Cheaper to catch it here...
+- */
+- if (unlikely(new_fdt->max_fds <= nr)) {
+- __free_fdtable(new_fdt);
+- return -EMFILE;
+- }
++ if (IS_ERR(new_fdt))
++ return PTR_ERR(new_fdt);
+ cur_fdt = files_fdtable(files);
+ BUG_ON(nr < cur_fdt->max_fds);
+ copy_fdtable(new_fdt, cur_fdt);
+@@ -348,16 +358,9 @@ struct files_struct *dup_fd(struct files_struct *oldf, unsigned int max_fds, int
+ if (new_fdt != &newf->fdtab)
+ __free_fdtable(new_fdt);
+
+- new_fdt = alloc_fdtable(open_files - 1);
+- if (!new_fdt) {
+- *errorp = -ENOMEM;
+- goto out_release;
+- }
+-
+- /* beyond sysctl_nr_open; nothing to do */
+- if (unlikely(new_fdt->max_fds < open_files)) {
+- __free_fdtable(new_fdt);
+- *errorp = -EMFILE;
++ new_fdt = alloc_fdtable(open_files);
++ if (IS_ERR(new_fdt)) {
++ *errorp = PTR_ERR(new_fdt);
+ goto out_release;
+ }
+
+diff --git a/fs/hfs/bnode.c b/fs/hfs/bnode.c
+index 2251286cd83f0f..219e3b8fd6a847 100644
+--- a/fs/hfs/bnode.c
++++ b/fs/hfs/bnode.c
+@@ -15,6 +15,48 @@
+
+ #include "btree.h"
+
++static inline
++bool is_bnode_offset_valid(struct hfs_bnode *node, int off)
++{
++ bool is_valid = off < node->tree->node_size;
++
++ if (!is_valid) {
++ pr_err("requested invalid offset: "
++ "NODE: id %u, type %#x, height %u, "
++ "node_size %u, offset %d\n",
++ node->this, node->type, node->height,
++ node->tree->node_size, off);
++ }
++
++ return is_valid;
++}
++
++static inline
++int check_and_correct_requested_length(struct hfs_bnode *node, int off, int len)
++{
++ unsigned int node_size;
++
++ if (!is_bnode_offset_valid(node, off))
++ return 0;
++
++ node_size = node->tree->node_size;
++
++ if ((off + len) > node_size) {
++ int new_len = (int)node_size - off;
++
++ pr_err("requested length has been corrected: "
++ "NODE: id %u, type %#x, height %u, "
++ "node_size %u, offset %d, "
++ "requested_len %d, corrected_len %d\n",
++ node->this, node->type, node->height,
++ node->tree->node_size, off, len, new_len);
++
++ return new_len;
++ }
++
++ return len;
++}
++
+ void hfs_bnode_read(struct hfs_bnode *node, void *buf, int off, int len)
+ {
+ struct page *page;
+@@ -23,6 +65,20 @@ void hfs_bnode_read(struct hfs_bnode *node, void *buf, int off, int len)
+ int bytes_to_read;
+ void *vaddr;
+
++ if (!is_bnode_offset_valid(node, off))
++ return;
++
++ if (len == 0) {
++ pr_err("requested zero length: "
++ "NODE: id %u, type %#x, height %u, "
++ "node_size %u, offset %d, len %d\n",
++ node->this, node->type, node->height,
++ node->tree->node_size, off, len);
++ return;
++ }
++
++ len = check_and_correct_requested_length(node, off, len);
++
+ off += node->page_offset;
+ pagenum = off >> PAGE_SHIFT;
+ off &= ~PAGE_MASK; /* compute page offset for the first page */
+@@ -83,6 +139,20 @@ void hfs_bnode_write(struct hfs_bnode *node, void *buf, int off, int len)
+ {
+ struct page *page;
+
++ if (!is_bnode_offset_valid(node, off))
++ return;
++
++ if (len == 0) {
++ pr_err("requested zero length: "
++ "NODE: id %u, type %#x, height %u, "
++ "node_size %u, offset %d, len %d\n",
++ node->this, node->type, node->height,
++ node->tree->node_size, off, len);
++ return;
++ }
++
++ len = check_and_correct_requested_length(node, off, len);
++
+ off += node->page_offset;
+ page = node->page[0];
+
+@@ -108,6 +178,20 @@ void hfs_bnode_clear(struct hfs_bnode *node, int off, int len)
+ {
+ struct page *page;
+
++ if (!is_bnode_offset_valid(node, off))
++ return;
++
++ if (len == 0) {
++ pr_err("requested zero length: "
++ "NODE: id %u, type %#x, height %u, "
++ "node_size %u, offset %d, len %d\n",
++ node->this, node->type, node->height,
++ node->tree->node_size, off, len);
++ return;
++ }
++
++ len = check_and_correct_requested_length(node, off, len);
++
+ off += node->page_offset;
+ page = node->page[0];
+
+@@ -124,6 +208,10 @@ void hfs_bnode_copy(struct hfs_bnode *dst_node, int dst,
+ hfs_dbg(BNODE_MOD, "copybytes: %u,%u,%u\n", dst, src, len);
+ if (!len)
+ return;
++
++ len = check_and_correct_requested_length(src_node, src, len);
++ len = check_and_correct_requested_length(dst_node, dst, len);
++
+ src += src_node->page_offset;
+ dst += dst_node->page_offset;
+ src_page = src_node->page[0];
+@@ -143,6 +231,10 @@ void hfs_bnode_move(struct hfs_bnode *node, int dst, int src, int len)
+ hfs_dbg(BNODE_MOD, "movebytes: %u,%u,%u\n", dst, src, len);
+ if (!len)
+ return;
++
++ len = check_and_correct_requested_length(node, src, len);
++ len = check_and_correct_requested_length(node, dst, len);
++
+ src += node->page_offset;
+ dst += node->page_offset;
+ page = node->page[0];
+@@ -494,6 +586,7 @@ void hfs_bnode_put(struct hfs_bnode *node)
+ if (test_bit(HFS_BNODE_DELETED, &node->flags)) {
+ hfs_bnode_unhash(node);
+ spin_unlock(&tree->hash_lock);
++ hfs_bnode_clear(node, 0, tree->node_size);
+ hfs_bmap_free(node);
+ hfs_bnode_free(node);
+ return;
+diff --git a/fs/hfsplus/bnode.c b/fs/hfsplus/bnode.c
+index cf6e5de7b9da91..c9c38fddf505bf 100644
+--- a/fs/hfsplus/bnode.c
++++ b/fs/hfsplus/bnode.c
+@@ -18,12 +18,68 @@
+ #include "hfsplus_fs.h"
+ #include "hfsplus_raw.h"
+
++static inline
++bool is_bnode_offset_valid(struct hfs_bnode *node, int off)
++{
++ bool is_valid = off < node->tree->node_size;
++
++ if (!is_valid) {
++ pr_err("requested invalid offset: "
++ "NODE: id %u, type %#x, height %u, "
++ "node_size %u, offset %d\n",
++ node->this, node->type, node->height,
++ node->tree->node_size, off);
++ }
++
++ return is_valid;
++}
++
++static inline
++int check_and_correct_requested_length(struct hfs_bnode *node, int off, int len)
++{
++ unsigned int node_size;
++
++ if (!is_bnode_offset_valid(node, off))
++ return 0;
++
++ node_size = node->tree->node_size;
++
++ if ((off + len) > node_size) {
++ int new_len = (int)node_size - off;
++
++ pr_err("requested length has been corrected: "
++ "NODE: id %u, type %#x, height %u, "
++ "node_size %u, offset %d, "
++ "requested_len %d, corrected_len %d\n",
++ node->this, node->type, node->height,
++ node->tree->node_size, off, len, new_len);
++
++ return new_len;
++ }
++
++ return len;
++}
++
+ /* Copy a specified range of bytes from the raw data of a node */
+ void hfs_bnode_read(struct hfs_bnode *node, void *buf, int off, int len)
+ {
+ struct page **pagep;
+ int l;
+
++ if (!is_bnode_offset_valid(node, off))
++ return;
++
++ if (len == 0) {
++ pr_err("requested zero length: "
++ "NODE: id %u, type %#x, height %u, "
++ "node_size %u, offset %d, len %d\n",
++ node->this, node->type, node->height,
++ node->tree->node_size, off, len);
++ return;
++ }
++
++ len = check_and_correct_requested_length(node, off, len);
++
+ off += node->page_offset;
+ pagep = node->page + (off >> PAGE_SHIFT);
+ off &= ~PAGE_MASK;
+@@ -83,6 +139,20 @@ void hfs_bnode_write(struct hfs_bnode *node, void *buf, int off, int len)
+ struct page **pagep;
+ int l;
+
++ if (!is_bnode_offset_valid(node, off))
++ return;
++
++ if (len == 0) {
++ pr_err("requested zero length: "
++ "NODE: id %u, type %#x, height %u, "
++ "node_size %u, offset %d, len %d\n",
++ node->this, node->type, node->height,
++ node->tree->node_size, off, len);
++ return;
++ }
++
++ len = check_and_correct_requested_length(node, off, len);
++
+ off += node->page_offset;
+ pagep = node->page + (off >> PAGE_SHIFT);
+ off &= ~PAGE_MASK;
+@@ -113,6 +183,20 @@ void hfs_bnode_clear(struct hfs_bnode *node, int off, int len)
+ struct page **pagep;
+ int l;
+
++ if (!is_bnode_offset_valid(node, off))
++ return;
++
++ if (len == 0) {
++ pr_err("requested zero length: "
++ "NODE: id %u, type %#x, height %u, "
++ "node_size %u, offset %d, len %d\n",
++ node->this, node->type, node->height,
++ node->tree->node_size, off, len);
++ return;
++ }
++
++ len = check_and_correct_requested_length(node, off, len);
++
+ off += node->page_offset;
+ pagep = node->page + (off >> PAGE_SHIFT);
+ off &= ~PAGE_MASK;
+@@ -139,6 +223,10 @@ void hfs_bnode_copy(struct hfs_bnode *dst_node, int dst,
+ hfs_dbg(BNODE_MOD, "copybytes: %u,%u,%u\n", dst, src, len);
+ if (!len)
+ return;
++
++ len = check_and_correct_requested_length(src_node, src, len);
++ len = check_and_correct_requested_length(dst_node, dst, len);
++
+ src += src_node->page_offset;
+ dst += dst_node->page_offset;
+ src_page = src_node->page + (src >> PAGE_SHIFT);
+@@ -196,6 +284,10 @@ void hfs_bnode_move(struct hfs_bnode *node, int dst, int src, int len)
+ hfs_dbg(BNODE_MOD, "movebytes: %u,%u,%u\n", dst, src, len);
+ if (!len)
+ return;
++
++ len = check_and_correct_requested_length(node, src, len);
++ len = check_and_correct_requested_length(node, dst, len);
++
+ src += node->page_offset;
+ dst += node->page_offset;
+ if (dst > src) {
+diff --git a/fs/hfsplus/extents.c b/fs/hfsplus/extents.c
+index c95a2f0ed4a74e..fad1c250f150b8 100644
+--- a/fs/hfsplus/extents.c
++++ b/fs/hfsplus/extents.c
+@@ -342,9 +342,6 @@ static int hfsplus_free_extents(struct super_block *sb,
+ int i;
+ int err = 0;
+
+- /* Mapping the allocation file may lock the extent tree */
+- WARN_ON(mutex_is_locked(&HFSPLUS_SB(sb)->ext_tree->tree_lock));
+-
+ hfsplus_dump_extent(extent);
+ for (i = 0; i < 8; extent++, i++) {
+ count = be32_to_cpu(extent->block_count);
+diff --git a/fs/hfsplus/unicode.c b/fs/hfsplus/unicode.c
+index 73342c925a4b6e..36b6cf2a3abba4 100644
+--- a/fs/hfsplus/unicode.c
++++ b/fs/hfsplus/unicode.c
+@@ -132,7 +132,14 @@ int hfsplus_uni2asc(struct super_block *sb,
+
+ op = astr;
+ ip = ustr->unicode;
++
+ ustrlen = be16_to_cpu(ustr->length);
++ if (ustrlen > HFSPLUS_MAX_STRLEN) {
++ ustrlen = HFSPLUS_MAX_STRLEN;
++ pr_err("invalid length %u has been corrected to %d\n",
++ be16_to_cpu(ustr->length), ustrlen);
++ }
++
+ len = *len_p;
+ ce1 = NULL;
+ compose = !test_bit(HFSPLUS_SB_NODECOMPOSE, &HFSPLUS_SB(sb)->flags);
+diff --git a/fs/hfsplus/xattr.c b/fs/hfsplus/xattr.c
+index d91f76ef18d9bb..2438cd7596206c 100644
+--- a/fs/hfsplus/xattr.c
++++ b/fs/hfsplus/xattr.c
+@@ -172,7 +172,11 @@ static int hfsplus_create_attributes_file(struct super_block *sb)
+ return PTR_ERR(attr_file);
+ }
+
+- BUG_ON(i_size_read(attr_file) != 0);
++ if (i_size_read(attr_file) != 0) {
++ err = -EIO;
++ pr_err("detected inconsistent attributes file, running fsck.hfsplus is recommended.\n");
++ goto end_attr_file_creation;
++ }
+
+ hip = HFSPLUS_I(attr_file);
+
+diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
+index bf3cda4989623c..6e97a54ffda124 100644
+--- a/fs/hugetlbfs/inode.c
++++ b/fs/hugetlbfs/inode.c
+@@ -148,7 +148,7 @@ static int hugetlbfs_file_mmap(struct file *file, struct vm_area_struct *vma)
+ vma->vm_flags |= VM_HUGETLB | VM_DONTEXPAND;
+ vma->vm_ops = &hugetlb_vm_ops;
+
+- ret = seal_check_future_write(info->seals, vma);
++ ret = seal_check_write(info->seals, vma);
+ if (ret)
+ return ret;
+
+diff --git a/fs/isofs/inode.c b/fs/isofs/inode.c
+index 4c763f573faf31..a5385b91275f8c 100644
+--- a/fs/isofs/inode.c
++++ b/fs/isofs/inode.c
+@@ -1492,9 +1492,16 @@ static int isofs_read_inode(struct inode *inode, int relocated)
+ inode->i_op = &page_symlink_inode_operations;
+ inode_nohighmem(inode);
+ inode->i_data.a_ops = &isofs_symlink_aops;
+- } else
++ } else if (S_ISCHR(inode->i_mode) || S_ISBLK(inode->i_mode) ||
++ S_ISFIFO(inode->i_mode) || S_ISSOCK(inode->i_mode)) {
+ /* XXX - parse_rock_ridge_inode() had already set i_rdev. */
+ init_special_inode(inode, inode->i_mode, inode->i_rdev);
++ } else {
++ printk(KERN_DEBUG "ISOFS: Invalid file type 0%04o for inode %lu.\n",
++ inode->i_mode, inode->i_ino);
++ ret = -EIO;
++ goto fail;
++ }
+
+ ret = 0;
+ out:
+diff --git a/fs/jbd2/checkpoint.c b/fs/jbd2/checkpoint.c
+index 0aaff82ecd1c55..8a2446d44b03ae 100644
+--- a/fs/jbd2/checkpoint.c
++++ b/fs/jbd2/checkpoint.c
+@@ -321,6 +321,7 @@ int jbd2_log_do_checkpoint(journal_t *journal)
+ retry:
+ if (batch_count)
+ __flush_batch(journal, &batch_count);
++ cond_resched();
+ spin_lock(&journal->j_list_lock);
+ goto restart;
+ }
+diff --git a/fs/jfs/file.c b/fs/jfs/file.c
+index 930d2701f2062b..44872daeca015f 100644
+--- a/fs/jfs/file.c
++++ b/fs/jfs/file.c
+@@ -44,6 +44,9 @@ static int jfs_open(struct inode *inode, struct file *file)
+ {
+ int rc;
+
++ if (S_ISREG(inode->i_mode) && inode->i_size < 0)
++ return -EIO;
++
+ if ((rc = dquot_file_open(inode, file)))
+ return rc;
+
+diff --git a/fs/jfs/inode.c b/fs/jfs/inode.c
+index 980aa3300f106f..2472b33e3a2d86 100644
+--- a/fs/jfs/inode.c
++++ b/fs/jfs/inode.c
+@@ -145,9 +145,9 @@ void jfs_evict_inode(struct inode *inode)
+ if (!inode->i_nlink && !is_bad_inode(inode)) {
+ dquot_initialize(inode);
+
++ truncate_inode_pages_final(&inode->i_data);
+ if (JFS_IP(inode)->fileset == FILESYSTEM_I) {
+ struct inode *ipimap = JFS_SBI(inode->i_sb)->ipimap;
+- truncate_inode_pages_final(&inode->i_data);
+
+ if (test_cflag(COMMIT_Freewmap, inode))
+ jfs_free_zero_link(inode);
+diff --git a/fs/jfs/jfs_dmap.c b/fs/jfs/jfs_dmap.c
+index 37888187b97738..f34025cc9b057e 100644
+--- a/fs/jfs/jfs_dmap.c
++++ b/fs/jfs/jfs_dmap.c
+@@ -1457,6 +1457,12 @@ dbAllocAG(struct bmap * bmp, int agno, s64 nblocks, int l2nb, s64 * results)
+ (1 << (L2LPERCTL - (bmp->db_agheight << 1))) / bmp->db_agwidth;
+ ti = bmp->db_agstart + bmp->db_agwidth * (agno & (agperlev - 1));
+
++ if (ti < 0 || ti >= le32_to_cpu(dcp->nleafs)) {
++ jfs_error(bmp->db_ipbmap->i_sb, "Corrupt dmapctl page\n");
++ release_metapage(mp);
++ return -EIO;
++ }
++
+ /* dmap control page trees fan-out by 4 and a single allocation
+ * group may be described by 1 or 2 subtrees within the ag level
+ * dmap control page, depending upon the ag size. examine the ag's
+@@ -1877,8 +1883,10 @@ dbAllocCtl(struct bmap * bmp, s64 nblocks, int l2nb, s64 blkno, s64 * results)
+ return -EIO;
+ dp = (struct dmap *) mp->data;
+
+- if (dp->tree.budmin < 0)
++ if (dp->tree.budmin < 0) {
++ release_metapage(mp);
+ return -EIO;
++ }
+
+ /* try to allocate the blocks.
+ */
+diff --git a/fs/libfs.c b/fs/libfs.c
+index aa0fbd720409a2..c6ed6c58dee6a5 100644
+--- a/fs/libfs.c
++++ b/fs/libfs.c
+@@ -272,7 +272,7 @@ void simple_recursive_removal(struct dentry *dentry,
+ struct dentry *victim = NULL, *child;
+ struct inode *inode = this->d_inode;
+
+- inode_lock(inode);
++ inode_lock_nested(inode, I_MUTEX_CHILD);
+ if (d_is_dir(this))
+ inode->i_flags |= S_DEAD;
+ while ((child = find_next_child(this, victim)) == NULL) {
+@@ -284,7 +284,7 @@ void simple_recursive_removal(struct dentry *dentry,
+ victim = this;
+ this = this->d_parent;
+ inode = this->d_inode;
+- inode_lock(inode);
++ inode_lock_nested(inode, I_MUTEX_CHILD);
+ if (simple_positive(victim)) {
+ d_invalidate(victim); // avoid lost mounts
+ if (d_is_dir(victim))
+diff --git a/fs/namespace.c b/fs/namespace.c
+index ee6d139f75292d..d1751f9b6f1ce4 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -2272,6 +2272,19 @@ static int graft_tree(struct mount *mnt, struct mount *p, struct mountpoint *mp)
+ return attach_recursive_mnt(mnt, p, mp, false);
+ }
+
++static int may_change_propagation(const struct mount *m)
++{
++ struct mnt_namespace *ns = m->mnt_ns;
++
++ // it must be mounted in some namespace
++ if (IS_ERR_OR_NULL(ns)) // is_mounted()
++ return -EINVAL;
++ // and the caller must be admin in userns of that namespace
++ if (!ns_capable(ns->user_ns, CAP_SYS_ADMIN))
++ return -EPERM;
++ return 0;
++}
++
+ /*
+ * Sanity check the flags to change_mnt_propagation.
+ */
+@@ -2308,10 +2321,10 @@ static int do_change_type(struct path *path, int ms_flags)
+ return -EINVAL;
+
+ namespace_lock();
+- if (!check_mnt(mnt)) {
+- err = -EINVAL;
++ err = may_change_propagation(mnt);
++ if (err)
+ goto out_unlock;
+- }
++
+ if (type == MS_SHARED) {
+ err = invent_group_ids(mnt, recurse);
+ if (err)
+@@ -2692,6 +2705,71 @@ static bool check_for_nsfs_mounts(struct mount *subtree)
+ return ret;
+ }
+
++static int do_set_group(struct path *from_path, struct path *to_path)
++{
++ struct mount *from, *to;
++ int err;
++
++ from = real_mount(from_path->mnt);
++ to = real_mount(to_path->mnt);
++
++ namespace_lock();
++
++ err = may_change_propagation(from);
++ if (err)
++ goto out;
++ err = may_change_propagation(to);
++ if (err)
++ goto out;
++
++ err = -EINVAL;
++ /* To and From paths should be mount roots */
++ if (from_path->dentry != from_path->mnt->mnt_root)
++ goto out;
++ if (to_path->dentry != to_path->mnt->mnt_root)
++ goto out;
++
++ /* Setting sharing groups is only allowed across same superblock */
++ if (from->mnt.mnt_sb != to->mnt.mnt_sb)
++ goto out;
++
++ /* From mount root should be wider than To mount root */
++ if (!is_subdir(to->mnt.mnt_root, from->mnt.mnt_root))
++ goto out;
++
++ /* From mount should not have locked children in place of To's root */
++ if (has_locked_children(from, to->mnt.mnt_root))
++ goto out;
++
++ /* Setting sharing groups is only allowed on private mounts */
++ if (IS_MNT_SHARED(to) || IS_MNT_SLAVE(to))
++ goto out;
++
++ /* From should not be private */
++ if (!IS_MNT_SHARED(from) && !IS_MNT_SLAVE(from))
++ goto out;
++
++ if (IS_MNT_SLAVE(from)) {
++ struct mount *m = from->mnt_master;
++
++ list_add(&to->mnt_slave, &m->mnt_slave_list);
++ to->mnt_master = m;
++ }
++
++ if (IS_MNT_SHARED(from)) {
++ to->mnt_group_id = from->mnt_group_id;
++ list_add(&to->mnt_share, &from->mnt_share);
++ lock_mount_hash();
++ set_mnt_shared(to);
++ unlock_mount_hash();
++ }
++
++ err = 0;
++out:
++ namespace_unlock();
++ return err;
++}
++
+ static int do_move_mount(struct path *old_path, struct path *new_path)
+ {
+ struct mnt_namespace *ns;
+@@ -3667,7 +3745,10 @@ SYSCALL_DEFINE5(move_mount,
+ if (ret < 0)
+ goto out_to;
+
+- ret = do_move_mount(&from_path, &to_path);
++ if (flags & MOVE_MOUNT_SET_GROUP)
++ ret = do_set_group(&from_path, &to_path);
++ else
++ ret = do_move_mount(&from_path, &to_path);
+
+ out_to:
+ path_put(&to_path);
+diff --git a/fs/nfs/blocklayout/blocklayout.c b/fs/nfs/blocklayout/blocklayout.c
+index a9e563145e0c28..a853711bcad274 100644
+--- a/fs/nfs/blocklayout/blocklayout.c
++++ b/fs/nfs/blocklayout/blocklayout.c
+@@ -171,8 +171,8 @@ do_add_page_to_bio(struct bio *bio, int npg, int rw, sector_t isect,
+
+ /* limit length to what the device mapping allows */
+ end = disk_addr + *len;
+- if (end >= map->start + map->len)
+- *len = map->start + map->len - disk_addr;
++ if (end >= map->disk_offset + map->len)
++ *len = map->disk_offset + map->len - disk_addr;
+
+ retry:
+ if (!bio) {
+diff --git a/fs/nfs/blocklayout/dev.c b/fs/nfs/blocklayout/dev.c
+index 16412d6636e860..4e176d7d704d12 100644
+--- a/fs/nfs/blocklayout/dev.c
++++ b/fs/nfs/blocklayout/dev.c
+@@ -199,10 +199,11 @@ static bool bl_map_stripe(struct pnfs_block_dev *dev, u64 offset,
+ struct pnfs_block_dev *child;
+ u64 chunk;
+ u32 chunk_idx;
++ u64 disk_chunk;
+ u64 disk_offset;
+
+ chunk = div_u64(offset, dev->chunk_size);
+- div_u64_rem(chunk, dev->nr_children, &chunk_idx);
++ disk_chunk = div_u64_rem(chunk, dev->nr_children, &chunk_idx);
+
+ if (chunk_idx >= dev->nr_children) {
+ dprintk("%s: invalid chunk idx %d (%lld/%lld)\n",
+@@ -215,7 +216,7 @@ static bool bl_map_stripe(struct pnfs_block_dev *dev, u64 offset,
+ offset = chunk * dev->chunk_size;
+
+ /* disk offset of the stripe */
+- disk_offset = div_u64(offset, dev->nr_children);
++ disk_offset = disk_chunk * dev->chunk_size;
+
+ child = &dev->children[chunk_idx];
+ child->map(child, disk_offset, map);
+diff --git a/fs/nfs/blocklayout/extent_tree.c b/fs/nfs/blocklayout/extent_tree.c
+index 8f7cff7a42938e..0add0f329816b0 100644
+--- a/fs/nfs/blocklayout/extent_tree.c
++++ b/fs/nfs/blocklayout/extent_tree.c
+@@ -552,6 +552,15 @@ static int ext_tree_encode_commit(struct pnfs_block_layout *bl, __be32 *p,
+ return ret;
+ }
+
++/**
++ * ext_tree_prepare_commit - encode extents that need to be committed
++ * @arg: layout commit data
++ *
++ * Return values:
++ * %0: Success, all required extents are encoded
++ * %-ENOSPC: Some extents are encoded, but not all, due to RPC size limit
++ * %-ENOMEM: Out of memory, extents not encoded
++ */
+ int
+ ext_tree_prepare_commit(struct nfs4_layoutcommit_args *arg)
+ {
+@@ -568,12 +577,12 @@ ext_tree_prepare_commit(struct nfs4_layoutcommit_args *arg)
+ start_p = page_address(arg->layoutupdate_page);
+ arg->layoutupdate_pages = &arg->layoutupdate_page;
+
+-retry:
+- ret = ext_tree_encode_commit(bl, start_p + 1, buffer_size, &count, &arg->lastbytewritten);
++ ret = ext_tree_encode_commit(bl, start_p + 1, buffer_size,
++ &count, &arg->lastbytewritten);
+ if (unlikely(ret)) {
+ ext_tree_free_commitdata(arg, buffer_size);
+
+- buffer_size = ext_tree_layoutupdate_size(bl, count);
++ buffer_size = NFS_SERVER(arg->inode)->wsize;
+ count = 0;
+
+ arg->layoutupdate_pages =
+@@ -588,7 +597,8 @@ ext_tree_prepare_commit(struct nfs4_layoutcommit_args *arg)
+ return -ENOMEM;
+ }
+
+- goto retry;
++ ret = ext_tree_encode_commit(bl, start_p + 1, buffer_size,
++ &count, &arg->lastbytewritten);
+ }
+
+ *start_p = cpu_to_be32(count);
+@@ -608,7 +618,7 @@ ext_tree_prepare_commit(struct nfs4_layoutcommit_args *arg)
+ }
+
+ dprintk("%s found %zu ranges\n", __func__, count);
+- return 0;
++ return ret;
+ }
+
+ void
+diff --git a/fs/nfs/client.c b/fs/nfs/client.c
+index 6afb66b8855e9b..ac2fbbba1521ac 100644
+--- a/fs/nfs/client.c
++++ b/fs/nfs/client.c
+@@ -661,6 +661,44 @@ struct nfs_client *nfs_init_client(struct nfs_client *clp,
+ }
+ EXPORT_SYMBOL_GPL(nfs_init_client);
+
++static void nfs4_server_set_init_caps(struct nfs_server *server)
++{
++#if IS_ENABLED(CONFIG_NFS_V4)
++ /* Set the basic capabilities */
++ server->caps = server->nfs_client->cl_mvops->init_caps;
++ if (server->flags & NFS_MOUNT_NORDIRPLUS)
++ server->caps &= ~NFS_CAP_READDIRPLUS;
++ if (server->nfs_client->cl_proto == XPRT_TRANSPORT_RDMA)
++ server->caps &= ~NFS_CAP_READ_PLUS;
++
++ /*
++ * Don't use NFS uid/gid mapping if we're using AUTH_SYS or lower
++ * authentication.
++ */
++ if (nfs4_disable_idmapping &&
++ server->client->cl_auth->au_flavor == RPC_AUTH_UNIX)
++ server->caps |= NFS_CAP_UIDGID_NOMAP;
++#endif
++}
++
++void nfs_server_set_init_caps(struct nfs_server *server)
++{
++ switch (server->nfs_client->rpc_ops->version) {
++ case 2:
++ server->caps = NFS_CAP_HARDLINKS | NFS_CAP_SYMLINKS;
++ break;
++ case 3:
++ server->caps = NFS_CAP_HARDLINKS | NFS_CAP_SYMLINKS;
++ if (!(server->flags & NFS_MOUNT_NORDIRPLUS))
++ server->caps |= NFS_CAP_READDIRPLUS;
++ break;
++ default:
++ nfs4_server_set_init_caps(server);
++ break;
++ }
++}
++EXPORT_SYMBOL_GPL(nfs_server_set_init_caps);
++
+ /*
+ * Create a version 2 or 3 client
+ */
+@@ -699,9 +737,6 @@ static int nfs_init_server(struct nfs_server *server,
+ /* Initialise the client representation from the mount data */
+ server->flags = ctx->flags;
+ server->options = ctx->options;
+- server->caps |= NFS_CAP_HARDLINKS|NFS_CAP_SYMLINKS|NFS_CAP_FILEID|
+- NFS_CAP_MODE|NFS_CAP_NLINK|NFS_CAP_OWNER|NFS_CAP_OWNER_GROUP|
+- NFS_CAP_ATIME|NFS_CAP_CTIME|NFS_CAP_MTIME;
+
+ if (ctx->rsize)
+ server->rsize = nfs_block_size(ctx->rsize, NULL);
+@@ -726,6 +761,8 @@ static int nfs_init_server(struct nfs_server *server,
+ if (error < 0)
+ goto error;
+
++ nfs_server_set_init_caps(server);
++
+ /* Preserve the values of mount_server-related mount options */
+ if (ctx->mount_server.addrlen) {
+ memcpy(&server->mountd_address, &ctx->mount_server.address,
+@@ -867,7 +904,6 @@ void nfs_server_copy_userdata(struct nfs_server *target, struct nfs_server *sour
+ target->acregmax = source->acregmax;
+ target->acdirmin = source->acdirmin;
+ target->acdirmax = source->acdirmax;
+- target->caps = source->caps;
+ target->options = source->options;
+ target->auth_info = source->auth_info;
+ target->port = source->port;
+@@ -1076,6 +1112,8 @@ struct nfs_server *nfs_clone_server(struct nfs_server *source,
+ if (error < 0)
+ goto out_free_server;
+
++ nfs_server_set_init_caps(server);
++
+ /* probe the filesystem info for this server filesystem */
+ error = nfs_probe_fsinfo(server, fh, fattr_fsinfo);
+ if (error < 0)
+diff --git a/fs/nfs/export.c b/fs/nfs/export.c
+index 993be63ab3015f..784d0f1cfb931b 100644
+--- a/fs/nfs/export.c
++++ b/fs/nfs/export.c
+@@ -67,14 +67,21 @@ nfs_fh_to_dentry(struct super_block *sb, struct fid *fid,
+ struct nfs4_label *label = NULL;
+ struct nfs_fattr *fattr = NULL;
+ struct nfs_fh *server_fh = nfs_exp_embedfh(fid->raw);
+- size_t fh_size = offsetof(struct nfs_fh, data) + server_fh->size;
++ size_t fh_size = offsetof(struct nfs_fh, data);
+ const struct nfs_rpc_ops *rpc_ops;
+ struct dentry *dentry;
+ struct inode *inode;
+- int len = EMBED_FH_OFF + XDR_QUADLEN(fh_size);
++ int len = EMBED_FH_OFF;
+ u32 *p = fid->raw;
+ int ret;
+
++ /* Initial check of bounds */
++ if (fh_len < len + XDR_QUADLEN(fh_size) ||
++ fh_len > XDR_QUADLEN(NFS_MAXFHSIZE))
++ return NULL;
++ /* Calculate embedded filehandle size */
++ fh_size += server_fh->size;
++ len += XDR_QUADLEN(fh_size);
+ /* NULL translates to ESTALE */
+ if (fh_len < len || fh_type != len)
+ return NULL;
+diff --git a/fs/nfs/flexfilelayout/flexfilelayout.c b/fs/nfs/flexfilelayout/flexfilelayout.c
+index f8962eaec87bc4..57150b27c0fd27 100644
+--- a/fs/nfs/flexfilelayout/flexfilelayout.c
++++ b/fs/nfs/flexfilelayout/flexfilelayout.c
+@@ -739,18 +739,14 @@ ff_layout_choose_ds_for_read(struct pnfs_layout_segment *lseg,
+ {
+ struct nfs4_ff_layout_segment *fls = FF_LAYOUT_LSEG(lseg);
+ struct nfs4_ff_layout_mirror *mirror;
+- struct nfs4_pnfs_ds *ds;
+- bool fail_return = false;
++ struct nfs4_pnfs_ds *ds = ERR_PTR(-EAGAIN);
+ u32 idx;
+
+ /* mirrors are initially sorted by efficiency */
+ for (idx = start_idx; idx < fls->mirror_array_cnt; idx++) {
+- if (idx+1 == fls->mirror_array_cnt)
+- fail_return = !check_device;
+-
+ mirror = FF_LAYOUT_COMP(lseg, idx);
+- ds = nfs4_ff_layout_prepare_ds(lseg, mirror, fail_return);
+- if (!ds)
++ ds = nfs4_ff_layout_prepare_ds(lseg, mirror, false);
++ if (IS_ERR(ds))
+ continue;
+
+ if (check_device &&
+@@ -758,10 +754,10 @@ ff_layout_choose_ds_for_read(struct pnfs_layout_segment *lseg,
+ continue;
+
+ *best_idx = idx;
+- return ds;
++ break;
+ }
+
+- return NULL;
++ return ds;
+ }
+
+ static struct nfs4_pnfs_ds *
+@@ -937,7 +933,7 @@ ff_layout_pg_init_write(struct nfs_pageio_descriptor *pgio,
+ for (i = 0; i < pgio->pg_mirror_count; i++) {
+ mirror = FF_LAYOUT_COMP(pgio->pg_lseg, i);
+ ds = nfs4_ff_layout_prepare_ds(pgio->pg_lseg, mirror, true);
+- if (!ds) {
++ if (IS_ERR(ds)) {
+ if (!ff_layout_no_fallback_to_mds(pgio->pg_lseg))
+ goto out_mds;
+ pnfs_generic_pg_cleanup(pgio);
+@@ -1824,6 +1820,7 @@ ff_layout_read_pagelist(struct nfs_pgio_header *hdr)
+ u32 idx = hdr->pgio_mirror_idx;
+ int vers;
+ struct nfs_fh *fh;
++ bool ds_fatal_error = false;
+
+ dprintk("--> %s ino %lu pgbase %u req %zu@%llu\n",
+ __func__, hdr->inode->i_ino,
+@@ -1831,8 +1828,10 @@ ff_layout_read_pagelist(struct nfs_pgio_header *hdr)
+
+ mirror = FF_LAYOUT_COMP(lseg, idx);
+ ds = nfs4_ff_layout_prepare_ds(lseg, mirror, false);
+- if (!ds)
++ if (IS_ERR(ds)) {
++ ds_fatal_error = nfs_error_is_fatal(PTR_ERR(ds));
+ goto out_failed;
++ }
+
+ ds_clnt = nfs4_ff_find_or_create_ds_client(mirror, ds->ds_clp,
+ hdr->inode);
+@@ -1873,7 +1872,7 @@ ff_layout_read_pagelist(struct nfs_pgio_header *hdr)
+ return PNFS_ATTEMPTED;
+
+ out_failed:
+- if (ff_layout_avoid_mds_available_ds(lseg))
++ if (ff_layout_avoid_mds_available_ds(lseg) && !ds_fatal_error)
+ return PNFS_TRY_AGAIN;
+ trace_pnfs_mds_fallback_read_pagelist(hdr->inode,
+ hdr->args.offset, hdr->args.count,
+@@ -1894,11 +1893,14 @@ ff_layout_write_pagelist(struct nfs_pgio_header *hdr, int sync)
+ int vers;
+ struct nfs_fh *fh;
+ u32 idx = hdr->pgio_mirror_idx;
++ bool ds_fatal_error = false;
+
+ mirror = FF_LAYOUT_COMP(lseg, idx);
+ ds = nfs4_ff_layout_prepare_ds(lseg, mirror, true);
+- if (!ds)
++ if (IS_ERR(ds)) {
++ ds_fatal_error = nfs_error_is_fatal(PTR_ERR(ds));
+ goto out_failed;
++ }
+
+ ds_clnt = nfs4_ff_find_or_create_ds_client(mirror, ds->ds_clp,
+ hdr->inode);
+@@ -1941,7 +1943,7 @@ ff_layout_write_pagelist(struct nfs_pgio_header *hdr, int sync)
+ return PNFS_ATTEMPTED;
+
+ out_failed:
+- if (ff_layout_avoid_mds_available_ds(lseg))
++ if (ff_layout_avoid_mds_available_ds(lseg) && !ds_fatal_error)
+ return PNFS_TRY_AGAIN;
+ trace_pnfs_mds_fallback_write_pagelist(hdr->inode,
+ hdr->args.offset, hdr->args.count,
+@@ -1983,7 +1985,7 @@ static int ff_layout_initiate_commit(struct nfs_commit_data *data, int how)
+ idx = calc_ds_index_from_commit(lseg, data->ds_commit_index);
+ mirror = FF_LAYOUT_COMP(lseg, idx);
+ ds = nfs4_ff_layout_prepare_ds(lseg, mirror, true);
+- if (!ds)
++ if (IS_ERR(ds))
+ goto out_err;
+
+ ds_clnt = nfs4_ff_find_or_create_ds_client(mirror, ds->ds_clp,
+diff --git a/fs/nfs/flexfilelayout/flexfilelayoutdev.c b/fs/nfs/flexfilelayout/flexfilelayoutdev.c
+index 4b0cdddce6eb37..11777d33a85e87 100644
+--- a/fs/nfs/flexfilelayout/flexfilelayoutdev.c
++++ b/fs/nfs/flexfilelayout/flexfilelayoutdev.c
+@@ -368,11 +368,11 @@ nfs4_ff_layout_prepare_ds(struct pnfs_layout_segment *lseg,
+ struct nfs4_ff_layout_mirror *mirror,
+ bool fail_return)
+ {
+- struct nfs4_pnfs_ds *ds = NULL;
++ struct nfs4_pnfs_ds *ds;
+ struct inode *ino = lseg->pls_layout->plh_inode;
+ struct nfs_server *s = NFS_SERVER(ino);
+ unsigned int max_payload;
+- int status;
++ int status = -EAGAIN;
+
+ if (!ff_layout_init_mirror_ds(lseg->pls_layout, mirror))
+ goto noconnect;
+@@ -410,7 +410,7 @@ nfs4_ff_layout_prepare_ds(struct pnfs_layout_segment *lseg,
+ ff_layout_send_layouterror(lseg);
+ if (fail_return || !ff_layout_has_available_ds(lseg))
+ pnfs_error_mark_layout_for_return(ino, lseg);
+- ds = NULL;
++ ds = ERR_PTR(status);
+ out:
+ return ds;
+ }
+diff --git a/fs/nfs/inode.c b/fs/nfs/inode.c
+index 3e3114a9d19375..da8d727eb09dd9 100644
+--- a/fs/nfs/inode.c
++++ b/fs/nfs/inode.c
+@@ -217,11 +217,12 @@ static void nfs_set_cache_invalid(struct inode *inode, unsigned long flags)
+ flags &= ~NFS_INO_INVALID_OTHER;
+ flags &= ~(NFS_INO_INVALID_CHANGE
+ | NFS_INO_INVALID_SIZE
+- | NFS_INO_REVAL_PAGECACHE
+ | NFS_INO_INVALID_XATTR);
+ } else if (flags & NFS_INO_REVAL_PAGECACHE)
+ flags |= NFS_INO_INVALID_CHANGE | NFS_INO_INVALID_SIZE;
+
++ flags &= ~NFS_INO_REVAL_PAGECACHE;
++
+ if (!nfs_has_xattr_cache(nfsi))
+ flags &= ~NFS_INO_INVALID_XATTR;
+ if (inode->i_mapping->nrpages == 0)
+@@ -1900,7 +1901,6 @@ static int nfs_update_inode(struct inode *inode, struct nfs_fattr *fattr)
+ nfsi->cache_validity &= ~(NFS_INO_INVALID_ATTR
+ | NFS_INO_INVALID_ATIME
+ | NFS_INO_REVAL_FORCED
+- | NFS_INO_REVAL_PAGECACHE
+ | NFS_INO_INVALID_BLOCKS);
+
+ /* Do atomic weak cache consistency updates */
+@@ -1942,7 +1942,6 @@ static int nfs_update_inode(struct inode *inode, struct nfs_fattr *fattr)
+ } else {
+ nfsi->cache_validity |= save_cache_validity &
+ (NFS_INO_INVALID_CHANGE
+- | NFS_INO_REVAL_PAGECACHE
+ | NFS_INO_REVAL_FORCED);
+ cache_revalidated = false;
+ }
+@@ -1988,7 +1987,6 @@ static int nfs_update_inode(struct inode *inode, struct nfs_fattr *fattr)
+ } else {
+ nfsi->cache_validity |= save_cache_validity &
+ (NFS_INO_INVALID_SIZE
+- | NFS_INO_REVAL_PAGECACHE
+ | NFS_INO_REVAL_FORCED);
+ cache_revalidated = false;
+ }
+diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h
+index 2fdc7c2a17fe8a..838f3a3744851a 100644
+--- a/fs/nfs/internal.h
++++ b/fs/nfs/internal.h
+@@ -222,6 +222,7 @@ extern struct nfs_client *
+ nfs4_find_client_sessionid(struct net *, const struct sockaddr *,
+ struct nfs4_sessionid *, u32);
+ extern struct nfs_server *nfs_create_server(struct fs_context *);
++extern void nfs_server_set_init_caps(struct nfs_server *);
+ extern struct nfs_server *nfs4_create_server(struct fs_context *);
+ extern struct nfs_server *nfs4_create_referral_server(struct fs_context *);
+ extern int nfs4_update_server(struct nfs_server *server, const char *hostname,
+@@ -588,9 +589,12 @@ nfs_write_match_verf(const struct nfs_writeverf *verf,
+
+ static inline gfp_t nfs_io_gfp_mask(void)
+ {
+- if (current->flags & PF_WQ_WORKER)
+- return GFP_KERNEL | __GFP_NORETRY | __GFP_NOWARN;
+- return GFP_KERNEL;
++ gfp_t ret = current_gfp_context(GFP_KERNEL);
++
++ /* For workers __GFP_NORETRY only with __GFP_IO or __GFP_FS */
++ if ((current->flags & PF_WQ_WORKER) && ret == GFP_KERNEL)
++ ret |= __GFP_NORETRY | __GFP_NOWARN;
++ return ret;
+ }
+
+ /* unlink.c */
+diff --git a/fs/nfs/nfs4client.c b/fs/nfs/nfs4client.c
+index 252c99c76a42d3..89835457b7fdaf 100644
+--- a/fs/nfs/nfs4client.c
++++ b/fs/nfs/nfs4client.c
+@@ -1044,20 +1044,7 @@ static int nfs4_server_common_setup(struct nfs_server *server,
+ if (error < 0)
+ goto out;
+
+- /* Set the basic capabilities */
+- server->caps |= server->nfs_client->cl_mvops->init_caps;
+- if (server->flags & NFS_MOUNT_NORDIRPLUS)
+- server->caps &= ~NFS_CAP_READDIRPLUS;
+- if (server->nfs_client->cl_proto == XPRT_TRANSPORT_RDMA)
+- server->caps &= ~NFS_CAP_READ_PLUS;
+- /*
+- * Don't use NFS uid/gid mapping if we're using AUTH_SYS or lower
+- * authentication.
+- */
+- if (nfs4_disable_idmapping &&
+- server->client->cl_auth->au_flavor == RPC_AUTH_UNIX)
+- server->caps |= NFS_CAP_UIDGID_NOMAP;
+-
++ nfs_server_set_init_caps(server);
+
+ /* Probe the root fh to retrieve its FSID and filehandle */
+ error = nfs4_get_rootfh(server, mntfh, auth_probe);
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 77cc1c4219e15b..973b708ff33284 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -292,7 +292,7 @@ const u32 nfs4_fs_locations_bitmap[3] = {
+ };
+
+ static void nfs4_bitmap_copy_adjust(__u32 *dst, const __u32 *src,
+- struct inode *inode)
++ struct inode *inode, unsigned long flags)
+ {
+ unsigned long cache_validity;
+
+@@ -300,22 +300,19 @@ static void nfs4_bitmap_copy_adjust(__u32 *dst, const __u32 *src,
+ if (!inode || !nfs4_have_delegation(inode, FMODE_READ))
+ return;
+
+- cache_validity = READ_ONCE(NFS_I(inode)->cache_validity);
+- if (!(cache_validity & NFS_INO_REVAL_FORCED))
+- cache_validity &= ~(NFS_INO_INVALID_CHANGE
+- | NFS_INO_INVALID_SIZE);
++ cache_validity = READ_ONCE(NFS_I(inode)->cache_validity) | flags;
+
++ /* Remove the attributes over which we have full control */
++ dst[1] &= ~FATTR4_WORD1_RAWDEV;
+ if (!(cache_validity & NFS_INO_INVALID_SIZE))
+ dst[0] &= ~FATTR4_WORD0_SIZE;
+
+ if (!(cache_validity & NFS_INO_INVALID_CHANGE))
+ dst[0] &= ~FATTR4_WORD0_CHANGE;
+-}
+
+-static void nfs4_bitmap_copy_adjust_setattr(__u32 *dst,
+- const __u32 *src, struct inode *inode)
+-{
+- nfs4_bitmap_copy_adjust(dst, src, inode);
++ if (!(cache_validity & NFS_INO_INVALID_OTHER))
++ dst[1] &= ~(FATTR4_WORD1_MODE | FATTR4_WORD1_OWNER |
++ FATTR4_WORD1_OWNER_GROUP);
+ }
+
+ static void nfs4_setup_readdir(u64 cookie, __be32 *verifier, struct dentry *dentry,
+@@ -1213,7 +1210,6 @@ nfs4_update_changeattr_locked(struct inode *inode,
+ | cache_validity;
+
+ if (cinfo->atomic && cinfo->before == inode_peek_iversion_raw(inode)) {
+- nfsi->cache_validity &= ~NFS_INO_REVAL_PAGECACHE;
+ nfsi->attrtimeo_timestamp = jiffies;
+ } else {
+ if (S_ISDIR(inode->i_mode)) {
+@@ -3380,12 +3376,15 @@ static int nfs4_do_setattr(struct inode *inode, const struct cred *cred,
+ .inode = inode,
+ .stateid = &arg.stateid,
+ };
++ unsigned long adjust_flags = NFS_INO_INVALID_CHANGE;
+ int err;
+
++ if (sattr->ia_valid & (ATTR_MODE|ATTR_UID|ATTR_GID))
++ adjust_flags |= NFS_INO_INVALID_OTHER;
++
+ do {
+- nfs4_bitmap_copy_adjust_setattr(bitmask,
+- nfs4_bitmask(server, olabel),
+- inode);
++ nfs4_bitmap_copy_adjust(bitmask, nfs4_bitmask(server, olabel),
++ inode, adjust_flags);
+
+ err = _nfs4_do_setattr(inode, &arg, &res, cred, ctx);
+ switch (err) {
+@@ -3935,6 +3934,8 @@ int nfs4_server_capabilities(struct nfs_server *server, struct nfs_fh *fhandle)
+ .interruptible = true,
+ };
+ int err;
++
++ nfs_server_set_init_caps(server);
+ do {
+ err = nfs4_handle_exception(server,
+ _nfs4_server_capabilities(server, fhandle),
+@@ -4193,8 +4194,7 @@ static int _nfs4_proc_getattr(struct nfs_server *server, struct nfs_fh *fhandle,
+ if (inode && (server->flags & NFS_MOUNT_SOFTREVAL))
+ task_flags |= RPC_TASK_TIMEOUT;
+
+- nfs4_bitmap_copy_adjust(bitmask, nfs4_bitmask(server, label), inode);
+-
++ nfs4_bitmap_copy_adjust(bitmask, nfs4_bitmask(server, label), inode, 0);
+ nfs_fattr_init(fattr);
+ nfs4_init_sequence(&args.seq_args, &res.seq_res, 0, 0);
+ return nfs4_do_call_sync(server->client, server, &msg,
+@@ -4796,8 +4796,8 @@ static int _nfs4_proc_link(struct inode *inode, struct inode *dir, const struct
+ }
+
+ nfs4_inode_make_writeable(inode);
+- nfs4_bitmap_copy_adjust_setattr(bitmask, nfs4_bitmask(server, res.label), inode);
+-
++ nfs4_bitmap_copy_adjust(bitmask, nfs4_bitmask(server, res.label), inode,
++ NFS_INO_INVALID_CHANGE);
+ status = nfs4_call_sync(server->client, server, &msg, &arg.seq_args, &res.seq_res, 1);
+ if (!status) {
+ nfs4_update_changeattr(dir, &res.cinfo, res.fattr->time_start,
+@@ -10378,7 +10378,7 @@ const struct nfs4_minor_version_ops *nfs_v4_minor_ops[] = {
+
+ static ssize_t nfs4_listxattr(struct dentry *dentry, char *list, size_t size)
+ {
+- ssize_t error, error2, error3, error4;
++ ssize_t error, error2, error3, error4 = 0;
+ size_t left = size;
+
+ error = generic_listxattr(dentry, list, left);
+@@ -10406,9 +10406,11 @@ static ssize_t nfs4_listxattr(struct dentry *dentry, char *list, size_t size)
+ left -= error3;
+ }
+
+- error4 = security_inode_listsecurity(d_inode(dentry), list, left);
+- if (error4 < 0)
+- return error4;
++ if (!nfs_server_capable(d_inode(dentry), NFS_CAP_SECURITY_LABEL)) {
++ error4 = security_inode_listsecurity(d_inode(dentry), list, left);
++ if (error4 < 0)
++ return error4;
++ }
+
+ error += error2 + error3 + error4;
+ if (size && error > size)
+diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
+index 758689877d85d7..e14cf7140bab46 100644
+--- a/fs/nfs/pnfs.c
++++ b/fs/nfs/pnfs.c
+@@ -3219,6 +3219,7 @@ pnfs_layoutcommit_inode(struct inode *inode, bool sync)
+ struct nfs_inode *nfsi = NFS_I(inode);
+ loff_t end_pos;
+ int status;
++ bool mark_as_dirty = false;
+
+ if (!pnfs_layoutcommit_outstanding(inode))
+ return 0;
+@@ -3270,19 +3271,23 @@ pnfs_layoutcommit_inode(struct inode *inode, bool sync)
+ if (ld->prepare_layoutcommit) {
+ status = ld->prepare_layoutcommit(&data->args);
+ if (status) {
+- put_cred(data->cred);
++ if (status != -ENOSPC)
++ put_cred(data->cred);
+ spin_lock(&inode->i_lock);
+ set_bit(NFS_INO_LAYOUTCOMMIT, &nfsi->flags);
+ if (end_pos > nfsi->layout->plh_lwb)
+ nfsi->layout->plh_lwb = end_pos;
+- goto out_unlock;
++ if (status != -ENOSPC)
++ goto out_unlock;
++ spin_unlock(&inode->i_lock);
++ mark_as_dirty = true;
+ }
+ }
+
+
+ status = nfs4_proc_layoutcommit(data, sync);
+ out:
+- if (status)
++ if (status || mark_as_dirty)
+ mark_inode_dirty_sync(inode);
+ dprintk("<-- %s status %d\n", __func__, status);
+ return status;
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index bf78745b19ca90..202e89613aa893 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -4284,10 +4284,16 @@ nfsd4_setclientid_confirm(struct svc_rqst *rqstp,
+ }
+ status = nfs_ok;
+ if (conf) {
+- old = unconf;
+- unhash_client_locked(old);
+- nfsd4_change_callback(conf, &unconf->cl_cb_conn);
+- } else {
++ if (get_client_locked(conf) == nfs_ok) {
++ old = unconf;
++ unhash_client_locked(old);
++ nfsd4_change_callback(conf, &unconf->cl_cb_conn);
++ } else {
++ conf = NULL;
++ }
++ }
++
++ if (!conf) {
+ old = find_confirmed_client_by_name(&unconf->cl_name, nn);
+ if (old) {
+ status = nfserr_clid_inuse;
+@@ -4304,10 +4310,14 @@ nfsd4_setclientid_confirm(struct svc_rqst *rqstp,
+ }
+ trace_nfsd_clid_replaced(&old->cl_clientid);
+ }
++ status = get_client_locked(unconf);
++ if (status != nfs_ok) {
++ old = NULL;
++ goto out;
++ }
+ move_to_confirmed(unconf);
+ conf = unconf;
+ }
+- get_client_locked(conf);
+ spin_unlock(&nn->client_lock);
+ if (conf == unconf)
+ fsnotify_dentry(conf->cl_nfsd_info_dentry, FS_MODIFY);
+@@ -5722,6 +5732,20 @@ nfsd4_process_open2(struct svc_rqst *rqstp, struct svc_fh *current_fh, struct nf
+ status = nfs4_check_deleg(cl, open, &dp);
+ if (status)
+ goto out;
++ if (dp && nfsd4_is_deleg_cur(open) &&
++ (dp->dl_stid.sc_file != fp)) {
++ /*
++ * RFC8881 section 8.2.4 mandates the server to return
++ * NFS4ERR_BAD_STATEID if the selected table entry does
++ * not match the current filehandle. However returning
++ * NFS4ERR_BAD_STATEID in the OPEN can cause the client
++ * to repeatedly retry the operation with the same
++ * stateid, since the stateid itself is valid. To avoid
++ * this situation NFSD returns NFS4ERR_INVAL instead.
++ */
++ status = nfserr_inval;
++ goto out;
++ }
+ stp = nfsd4_find_and_lock_existing_open(fp, open);
+ } else {
+ open->op_file = NULL;
+diff --git a/fs/nilfs2/inode.c b/fs/nilfs2/inode.c
+index b7873d2fb4eff1..11201919aa4461 100644
+--- a/fs/nilfs2/inode.c
++++ b/fs/nilfs2/inode.c
+@@ -517,11 +517,18 @@ static int __nilfs_read_inode(struct super_block *sb,
+ inode->i_op = &nilfs_symlink_inode_operations;
+ inode_nohighmem(inode);
+ inode->i_mapping->a_ops = &nilfs_aops;
+- } else {
++ } else if (S_ISCHR(inode->i_mode) || S_ISBLK(inode->i_mode) ||
++ S_ISFIFO(inode->i_mode) || S_ISSOCK(inode->i_mode)) {
+ inode->i_op = &nilfs_special_inode_operations;
+ init_special_inode(
+ inode, inode->i_mode,
+ huge_decode_dev(le64_to_cpu(raw_inode->i_device_code)));
++ } else {
++ nilfs_error(sb,
++ "invalid file type bits in mode 0%o for inode %lu",
++ inode->i_mode, ino);
++ err = -EIO;
++ goto failed_unmap;
+ }
+ nilfs_ifile_unmap_inode(root->ifile, ino, bh);
+ brelse(bh);
+diff --git a/fs/orangefs/orangefs-debugfs.c b/fs/orangefs/orangefs-debugfs.c
+index fa41db08848802..cd4bfd92ebd6e8 100644
+--- a/fs/orangefs/orangefs-debugfs.c
++++ b/fs/orangefs/orangefs-debugfs.c
+@@ -354,7 +354,7 @@ static ssize_t orangefs_debug_read(struct file *file,
+ goto out;
+
+ mutex_lock(&orangefs_debug_lock);
+- sprintf_ret = sprintf(buf, "%s", (char *)file->private_data);
++ sprintf_ret = scnprintf(buf, ORANGEFS_MAX_DEBUG_STRING_LEN, "%s", (char *)file->private_data);
+ mutex_unlock(&orangefs_debug_lock);
+
+ read_ret = simple_read_from_buffer(ubuf, count, ppos, buf, sprintf_ret);
+@@ -728,8 +728,8 @@ static void do_k_string(void *k_mask, int index)
+
+ if (*mask & s_kmod_keyword_mask_map[index].mask_val) {
+ if ((strlen(kernel_debug_string) +
+- strlen(s_kmod_keyword_mask_map[index].keyword))
+- < ORANGEFS_MAX_DEBUG_STRING_LEN - 1) {
++ strlen(s_kmod_keyword_mask_map[index].keyword) + 1)
++ < ORANGEFS_MAX_DEBUG_STRING_LEN) {
+ strcat(kernel_debug_string,
+ s_kmod_keyword_mask_map[index].keyword);
+ strcat(kernel_debug_string, ",");
+@@ -756,7 +756,7 @@ static void do_c_string(void *c_mask, int index)
+ (mask->mask2 & cdm_array[index].mask2)) {
+ if ((strlen(client_debug_string) +
+ strlen(cdm_array[index].keyword) + 1)
+- < ORANGEFS_MAX_DEBUG_STRING_LEN - 2) {
++ < ORANGEFS_MAX_DEBUG_STRING_LEN) {
+ strcat(client_debug_string,
+ cdm_array[index].keyword);
+ strcat(client_debug_string, ",");
+diff --git a/fs/squashfs/super.c b/fs/squashfs/super.c
+index 5a47b5c2fdc000..7b640c8fe80482 100644
+--- a/fs/squashfs/super.c
++++ b/fs/squashfs/super.c
+@@ -74,10 +74,15 @@ static int squashfs_fill_super(struct super_block *sb, struct fs_context *fc)
+ unsigned short flags;
+ unsigned int fragments;
+ u64 lookup_table_start, xattr_id_table_start, next_table;
+- int err;
++ int err, devblksize = sb_min_blocksize(sb, SQUASHFS_DEVBLK_SIZE);
+
+ TRACE("Entered squashfs_fill_superblock\n");
+
++ if (!devblksize) {
++ errorf(fc, "squashfs: unable to set blocksize\n");
++ return -EINVAL;
++ }
++
+ sb->s_fs_info = kzalloc(sizeof(*msblk), GFP_KERNEL);
+ if (sb->s_fs_info == NULL) {
+ ERROR("Failed to allocate squashfs_sb_info\n");
+@@ -85,12 +90,7 @@ static int squashfs_fill_super(struct super_block *sb, struct fs_context *fc)
+ }
+ msblk = sb->s_fs_info;
+
+- msblk->devblksize = sb_min_blocksize(sb, SQUASHFS_DEVBLK_SIZE);
+- if (!msblk->devblksize) {
+- errorf(fc, "squashfs: unable to set blocksize\n");
+- return -EINVAL;
+- }
+-
++ msblk->devblksize = devblksize;
+ msblk->devblksize_log2 = ffz(~msblk->devblksize);
+
+ mutex_init(&msblk->meta_index_mutex);
+diff --git a/fs/udf/super.c b/fs/udf/super.c
+index 8dae5e73a00bce..723184b1201f8a 100644
+--- a/fs/udf/super.c
++++ b/fs/udf/super.c
+@@ -1410,7 +1410,7 @@ static int udf_load_logicalvol(struct super_block *sb, sector_t block,
+ struct genericPartitionMap *gpm;
+ uint16_t ident;
+ struct buffer_head *bh;
+- unsigned int table_len;
++ unsigned int table_len, part_map_count;
+ int ret;
+
+ bh = udf_read_tagged(sb, block, block, &ident);
+@@ -1431,7 +1431,16 @@ static int udf_load_logicalvol(struct super_block *sb, sector_t block,
+ "logical volume");
+ if (ret)
+ goto out_bh;
+- ret = udf_sb_alloc_partition_maps(sb, le32_to_cpu(lvd->numPartitionMaps));
++
++ part_map_count = le32_to_cpu(lvd->numPartitionMaps);
++ if (part_map_count > table_len / sizeof(struct genericPartitionMap1)) {
++ udf_err(sb, "error loading logical volume descriptor: "
++ "Too many partition maps (%u > %u)\n", part_map_count,
++ table_len / (unsigned)sizeof(struct genericPartitionMap1));
++ ret = -EIO;
++ goto out_bh;
++ }
++ ret = udf_sb_alloc_partition_maps(sb, part_map_count);
+ if (ret)
+ goto out_bh;
+
+diff --git a/include/asm-generic/barrier.h b/include/asm-generic/barrier.h
+index 640f09479bdf79..ce75c53dd33804 100644
+--- a/include/asm-generic/barrier.h
++++ b/include/asm-generic/barrier.h
+@@ -20,6 +20,35 @@
+ #define nop() asm volatile ("nop")
+ #endif
+
++/*
++ * Architectures that want generic instrumentation can define __ prefixed
++ * variants of all barriers.
++ */
++
++#ifdef __mb
++#define mb() do { kcsan_mb(); __mb(); } while (0)
++#endif
++
++#ifdef __rmb
++#define rmb() do { kcsan_rmb(); __rmb(); } while (0)
++#endif
++
++#ifdef __wmb
++#define wmb() do { kcsan_wmb(); __wmb(); } while (0)
++#endif
++
++#ifdef __dma_mb
++#define dma_mb() do { kcsan_mb(); __dma_mb(); } while (0)
++#endif
++
++#ifdef __dma_rmb
++#define dma_rmb() do { kcsan_rmb(); __dma_rmb(); } while (0)
++#endif
++
++#ifdef __dma_wmb
++#define dma_wmb() do { kcsan_wmb(); __dma_wmb(); } while (0)
++#endif
++
+ /*
+ * Force strict CPU ordering. And yes, this is required on UP too when we're
+ * talking to devices.
+@@ -39,6 +68,10 @@
+ #define wmb() mb()
+ #endif
+
++#ifndef dma_mb
++#define dma_mb() mb()
++#endif
++
+ #ifndef dma_rmb
+ #define dma_rmb() rmb()
+ #endif
+diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
+index d9b69bbde5cc54..4c7b7c5c821698 100644
+--- a/include/linux/blk_types.h
++++ b/include/linux/blk_types.h
+@@ -353,13 +353,13 @@ enum req_opf {
+ /* Close a zone */
+ REQ_OP_ZONE_CLOSE = 11,
+ /* Transition a zone to full */
+- REQ_OP_ZONE_FINISH = 12,
++ REQ_OP_ZONE_FINISH = 13,
+ /* write data at the current zone write pointer */
+- REQ_OP_ZONE_APPEND = 13,
++ REQ_OP_ZONE_APPEND = 15,
+ /* reset a zone write pointer */
+- REQ_OP_ZONE_RESET = 15,
++ REQ_OP_ZONE_RESET = 17,
+ /* reset all the zone present on the device */
+- REQ_OP_ZONE_RESET_ALL = 17,
++ REQ_OP_ZONE_RESET_ALL = 19,
+
+ /* SCSI passthrough using struct scsi_request */
+ REQ_OP_SCSI_IN = 32,
+diff --git a/include/linux/compiler.h b/include/linux/compiler.h
+index 13a43651984fbd..bbd74420fa214f 100644
+--- a/include/linux/compiler.h
++++ b/include/linux/compiler.h
+@@ -242,14 +242,6 @@ static inline void *offset_to_ptr(const int *off)
+ static void * __section(".discard.addressable") __used \
+ __UNIQUE_ID(__PASTE(__addressable_,sym)) = (void *)&sym;
+
+-#define __ADDRESSABLE_ASM(sym) \
+- .pushsection .discard.addressable,"aw"; \
+- .align ARCH_SEL(8,4); \
+- ARCH_SEL(.quad, .long) __stringify(sym); \
+- .popsection;
+-
+-#define __ADDRESSABLE_ASM_STR(sym) __stringify(__ADDRESSABLE_ASM(sym))
+-
+ /* &a[0] degrades to a pointer: a different type from an array */
+ #define __must_be_array(a) BUILD_BUG_ON_ZERO(__same_type((a), &(a)[0]))
+
+diff --git a/include/linux/cpuset.h b/include/linux/cpuset.h
+index b70224370832fc..e0139d9747d40b 100644
+--- a/include/linux/cpuset.h
++++ b/include/linux/cpuset.h
+@@ -33,6 +33,8 @@
+ */
+ extern struct static_key_false cpusets_pre_enable_key;
+ extern struct static_key_false cpusets_enabled_key;
++extern struct static_key_false cpusets_insane_config_key;
++
+ static inline bool cpusets_enabled(void)
+ {
+ return static_branch_unlikely(&cpusets_enabled_key);
+@@ -50,6 +52,19 @@ static inline void cpuset_dec(void)
+ static_branch_dec_cpuslocked(&cpusets_pre_enable_key);
+ }
+
++/*
++ * This will get enabled whenever a cpuset configuration is considered
++ * unsupportable in general. E.g. movable only node which cannot satisfy
++ * any non movable allocations (see update_nodemask). Page allocator
++ * needs to make additional checks for those configurations and this
++ * check is meant to guard those checks without any overhead for sane
++ * configurations.
++ */
++static inline bool cpusets_insane_config(void)
++{
++ return static_branch_unlikely(&cpusets_insane_config_key);
++}
++
+ extern int cpuset_init(void);
+ extern void cpuset_init_smp(void);
+ extern void cpuset_force_rebuild(void);
+@@ -168,6 +183,8 @@ static inline void set_mems_allowed(nodemask_t nodemask)
+
+ static inline bool cpusets_enabled(void) { return false; }
+
++static inline bool cpusets_insane_config(void) { return false; }
++
+ static inline int cpuset_init(void) { return 0; }
+ static inline void cpuset_init_smp(void) {}
+
+diff --git a/include/linux/fs.h b/include/linux/fs.h
+index 9463dddce6bf7d..11294a89a53bdf 100644
+--- a/include/linux/fs.h
++++ b/include/linux/fs.h
+@@ -436,7 +436,7 @@ int pagecache_write_end(struct file *, struct address_space *mapping,
+ * @host: Owner, either the inode or the block_device.
+ * @i_pages: Cached pages.
+ * @gfp_mask: Memory allocation flags to use for allocating pages.
+- * @i_mmap_writable: Number of VM_SHARED mappings.
++ * @i_mmap_writable: Number of VM_SHARED, VM_MAYWRITE mappings.
+ * @nr_thps: Number of THPs in the pagecache (non-shmem only).
+ * @i_mmap: Tree of private and shared mappings.
+ * @i_mmap_rwsem: Protects @i_mmap and @i_mmap_writable.
+@@ -535,7 +535,7 @@ static inline int mapping_mapped(struct address_space *mapping)
+
+ /*
+ * Might pages of this file have been modified in userspace?
+- * Note that i_mmap_writable counts all VM_SHARED vmas: do_mmap
++ * Note that i_mmap_writable counts all VM_SHARED, VM_MAYWRITE vmas: do_mmap
+ * marks vma as VM_SHARED if it is shared, and the file was opened for
+ * writing i.e. vma may be mprotected writable even if now readonly.
+ *
+diff --git a/include/linux/fs_context.h b/include/linux/fs_context.h
+index e869ce3ae6600f..40dd74bdd9fbd9 100644
+--- a/include/linux/fs_context.h
++++ b/include/linux/fs_context.h
+@@ -207,7 +207,7 @@ void logfc(struct fc_log *log, const char *prefix, char level, const char *fmt,
+ */
+ #define infof(fc, fmt, ...) __logfc(fc, 'i', fmt, ## __VA_ARGS__)
+ #define info_plog(p, fmt, ...) __plog(p, 'i', fmt, ## __VA_ARGS__)
+-#define infofc(p, fmt, ...) __plog((&(fc)->log), 'i', fmt, ## __VA_ARGS__)
++#define infofc(fc, fmt, ...) __plog((&(fc)->log), 'i', fmt, ## __VA_ARGS__)
+
+ /**
+ * warnf - Store supplementary warning message
+diff --git a/include/linux/if_vlan.h b/include/linux/if_vlan.h
+index 64cfe7cd292cb1..3728e3978f8333 100644
+--- a/include/linux/if_vlan.h
++++ b/include/linux/if_vlan.h
+@@ -248,19 +248,19 @@ vlan_for_each(struct net_device *dev,
+
+ static inline struct net_device *vlan_dev_real_dev(const struct net_device *dev)
+ {
+- BUG();
++ WARN_ON_ONCE(1);
+ return NULL;
+ }
+
+ static inline u16 vlan_dev_vlan_id(const struct net_device *dev)
+ {
+- BUG();
++ WARN_ON_ONCE(1);
+ return 0;
+ }
+
+ static inline __be16 vlan_dev_vlan_proto(const struct net_device *dev)
+ {
+- BUG();
++ WARN_ON_ONCE(1);
+ return 0;
+ }
+
+diff --git a/include/linux/memfd.h b/include/linux/memfd.h
+index 4f1600413f916e..5d06bba9d7e5c8 100644
+--- a/include/linux/memfd.h
++++ b/include/linux/memfd.h
+@@ -6,11 +6,25 @@
+
+ #ifdef CONFIG_MEMFD_CREATE
+ extern long memfd_fcntl(struct file *file, unsigned int cmd, unsigned long arg);
++unsigned int *memfd_file_seals_ptr(struct file *file);
+ #else
+ static inline long memfd_fcntl(struct file *f, unsigned int c, unsigned long a)
+ {
+ return -EINVAL;
+ }
++
++static inline unsigned int *memfd_file_seals_ptr(struct file *file)
++{
++ return NULL;
++}
+ #endif
+
++/* Retrieve memfd seals associated with the file, if any. */
++static inline unsigned int memfd_file_seals(struct file *file)
++{
++ unsigned int *sealsp = memfd_file_seals_ptr(file);
++
++ return sealsp ? *sealsp : 0;
++}
++
+ #endif /* __LINUX_MEMFD_H */
+diff --git a/include/linux/minmax.h b/include/linux/minmax.h
+index 5433c08fcc6858..1aea34b8f19bf9 100644
+--- a/include/linux/minmax.h
++++ b/include/linux/minmax.h
+@@ -51,6 +51,23 @@
+ */
+ #define max(x, y) __careful_cmp(x, y, >)
+
++/**
++ * umin - return minimum of two non-negative values
++ * Signed types are zero extended to match a larger unsigned type.
++ * @x: first value
++ * @y: second value
++ */
++#define umin(x, y) \
++ __careful_cmp((x) + 0u + 0ul + 0ull, (y) + 0u + 0ul + 0ull, <)
++
++/**
++ * umax - return maximum of two non-negative values
++ * @x: first value
++ * @y: second value
++ */
++#define umax(x, y) \
++ __careful_cmp((x) + 0u + 0ul + 0ull, (y) + 0u + 0ul + 0ull, >)
++
+ /**
+ * min3 - return minimum of three values
+ * @x: first value
+diff --git a/include/linux/mm.h b/include/linux/mm.h
+index e159a11424f1a7..e168d87d6f2eef 100644
+--- a/include/linux/mm.h
++++ b/include/linux/mm.h
+@@ -666,6 +666,17 @@ static inline bool vma_is_accessible(struct vm_area_struct *vma)
+ return vma->vm_flags & VM_ACCESS_FLAGS;
+ }
+
++static inline bool is_shared_maywrite(vm_flags_t vm_flags)
++{
++ return (vm_flags & (VM_SHARED | VM_MAYWRITE)) ==
++ (VM_SHARED | VM_MAYWRITE);
++}
++
++static inline bool vma_is_shared_maywrite(struct vm_area_struct *vma)
++{
++ return is_shared_maywrite(vma->vm_flags);
++}
++
+ #ifdef CONFIG_SHMEM
+ /*
+ * The vma_is_shmem is not inline because it is used only by slow
+@@ -3189,34 +3200,57 @@ unsigned long wp_shared_mapping_range(struct address_space *mapping,
+
+ extern int sysctl_nr_trim_pages;
+
++static inline bool is_write_sealed(int seals)
++{
++ return seals & (F_SEAL_WRITE | F_SEAL_FUTURE_WRITE);
++}
++
+ /**
+- * seal_check_future_write - Check for F_SEAL_FUTURE_WRITE flag and handle it
++ * is_readonly_sealed - Checks whether write-sealed but mapped read-only,
++ * in which case writes should be disallowing moving
++ * forwards.
++ * @seals: the seals to check
++ * @vm_flags: the VMA flags to check
++ *
++ * Returns whether readonly sealed, in which case writess should be disallowed
++ * going forward.
++ */
++static inline bool is_readonly_sealed(int seals, vm_flags_t vm_flags)
++{
++ /*
++ * Since an F_SEAL_[FUTURE_]WRITE sealed memfd can be mapped as
++ * MAP_SHARED and read-only, take care to not allow mprotect to
++ * revert protections on such mappings. Do this only for shared
++ * mappings. For private mappings, don't need to mask
++ * VM_MAYWRITE as we still want them to be COW-writable.
++ */
++ if (is_write_sealed(seals) &&
++ ((vm_flags & (VM_SHARED | VM_WRITE)) == VM_SHARED))
++ return true;
++
++ return false;
++}
++
++/**
++ * seal_check_write - Check for F_SEAL_WRITE or F_SEAL_FUTURE_WRITE flags and
++ * handle them.
+ * @seals: the seals to check
+ * @vma: the vma to operate on
+ *
+- * Check whether F_SEAL_FUTURE_WRITE is set; if so, do proper check/handling on
+- * the vma flags. Return 0 if check pass, or <0 for errors.
++ * Check whether F_SEAL_WRITE or F_SEAL_FUTURE_WRITE are set; if so, do proper
++ * check/handling on the vma flags. Return 0 if check pass, or <0 for errors.
+ */
+-static inline int seal_check_future_write(int seals, struct vm_area_struct *vma)
+-{
+- if (seals & F_SEAL_FUTURE_WRITE) {
+- /*
+- * New PROT_WRITE and MAP_SHARED mmaps are not allowed when
+- * "future write" seal active.
+- */
+- if ((vma->vm_flags & VM_SHARED) && (vma->vm_flags & VM_WRITE))
+- return -EPERM;
+-
+- /*
+- * Since an F_SEAL_FUTURE_WRITE sealed memfd can be mapped as
+- * MAP_SHARED and read-only, take care to not allow mprotect to
+- * revert protections on such mappings. Do this only for shared
+- * mappings. For private mappings, don't need to mask
+- * VM_MAYWRITE as we still want them to be COW-writable.
+- */
+- if (vma->vm_flags & VM_SHARED)
+- vma->vm_flags &= ~(VM_MAYWRITE);
+- }
++static inline int seal_check_write(int seals, struct vm_area_struct *vma)
++{
++ if (!is_write_sealed(seals))
++ return 0;
++
++ /*
++ * New PROT_WRITE and MAP_SHARED mmaps are not allowed when
++ * write seals are active.
++ */
++ if ((vma->vm_flags & VM_SHARED) && (vma->vm_flags & VM_WRITE))
++ return -EPERM;
+
+ return 0;
+ }
+diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
+index 71150fb1cb2ad9..e0106e13f74f57 100644
+--- a/include/linux/mmzone.h
++++ b/include/linux/mmzone.h
+@@ -1128,6 +1128,28 @@ static inline struct zoneref *first_zones_zonelist(struct zonelist *zonelist,
+ #define for_each_zone_zonelist(zone, z, zlist, highidx) \
+ for_each_zone_zonelist_nodemask(zone, z, zlist, highidx, NULL)
+
++/* Whether the 'nodes' are all movable nodes */
++static inline bool movable_only_nodes(nodemask_t *nodes)
++{
++ struct zonelist *zonelist;
++ struct zoneref *z;
++ int nid;
++
++ if (nodes_empty(*nodes))
++ return false;
++
++ /*
++ * We can chose arbitrary node from the nodemask to get a
++ * zonelist as they are interlinked. We just need to find
++ * at least one zone that can satisfy kernel allocations.
++ */
++ nid = first_node(*nodes);
++ zonelist = &NODE_DATA(nid)->node_zonelists[ZONELIST_FALLBACK];
++ z = first_zones_zonelist(zonelist, ZONE_NORMAL, nodes);
++ return (!z->zone) ? true : false;
++}
++
++
+ #ifdef CONFIG_SPARSEMEM
+ #include <asm/sparsemem.h>
+ #endif
+diff --git a/include/linux/moduleparam.h b/include/linux/moduleparam.h
+index f25a1c48439034..2c9d43eed9c7a5 100644
+--- a/include/linux/moduleparam.h
++++ b/include/linux/moduleparam.h
+@@ -282,10 +282,9 @@ struct kparam_array
+ #define __moduleparam_const const
+ #endif
+
+-/* This is the fundamental function for registering boot/module
+- parameters. */
++/* This is the fundamental function for registering boot/module parameters. */
+ #define __module_param_call(prefix, name, ops, arg, perm, level, flags) \
+- /* Default value instead of permissions? */ \
++ static_assert(sizeof(""prefix) - 1 <= MAX_PARAM_PREFIX_LEN); \
+ static const char __param_str_##name[] = prefix #name; \
+ static struct kernel_param __moduleparam_const __param_##name \
+ __used \
+diff --git a/include/linux/pci.h b/include/linux/pci.h
+index d3d84eb466f020..8360b87ca4d53b 100644
+--- a/include/linux/pci.h
++++ b/include/linux/pci.h
+@@ -439,6 +439,7 @@ struct pci_dev {
+ unsigned int is_virtfn:1;
+ unsigned int reset_fn:1;
+ unsigned int is_hotplug_bridge:1;
++ unsigned int is_pciehp:1;
+ unsigned int shpc_managed:1; /* SHPC owned by shpchp */
+ unsigned int is_thunderbolt:1; /* Thunderbolt controller */
+ /*
+diff --git a/include/linux/platform_data/cros_ec_proto.h b/include/linux/platform_data/cros_ec_proto.h
+index 7f03e02c48cd41..4e78365bad830b 100644
+--- a/include/linux/platform_data/cros_ec_proto.h
++++ b/include/linux/platform_data/cros_ec_proto.h
+@@ -9,6 +9,7 @@
+ #define __LINUX_CROS_EC_PROTO_H
+
+ #include <linux/device.h>
++#include <linux/lockdep_types.h>
+ #include <linux/mutex.h>
+ #include <linux/notifier.h>
+
+@@ -114,6 +115,8 @@ struct cros_ec_command {
+ * command. The caller should check msg.result for the EC's result
+ * code.
+ * @pkt_xfer: Send packet to EC and get response.
++ * @lockdep_key: Lockdep class for each instance. Unused if CONFIG_LOCKDEP is
++ * not enabled.
+ * @lock: One transaction at a time.
+ * @mkbp_event_supported: 0 if MKBP not supported. Otherwise its value is
+ * the maximum supported version of the MKBP host event
+@@ -159,6 +162,7 @@ struct cros_ec_device {
+ struct cros_ec_command *msg);
+ int (*pkt_xfer)(struct cros_ec_device *ec,
+ struct cros_ec_command *msg);
++ struct lock_class_key lockdep_key;
+ struct mutex lock;
+ u8 mkbp_event_supported;
+ bool host_sleep_v1;
+diff --git a/include/linux/pps_kernel.h b/include/linux/pps_kernel.h
+index c7abce28ed2995..aab0aebb529e02 100644
+--- a/include/linux/pps_kernel.h
++++ b/include/linux/pps_kernel.h
+@@ -52,6 +52,7 @@ struct pps_device {
+ int current_mode; /* PPS mode at event time */
+
+ unsigned int last_ev; /* last PPS event id */
++ unsigned int last_fetched_ev; /* last fetched PPS event id */
+ wait_queue_head_t queue; /* PPS event queue */
+
+ unsigned int id; /* PPS source unique ID */
+diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
+index e3e5e149b00e6b..a856c4478d8cc8 100644
+--- a/include/linux/sched/mm.h
++++ b/include/linux/sched/mm.h
+@@ -189,6 +189,22 @@ static inline void fs_reclaim_acquire(gfp_t gfp_mask) { }
+ static inline void fs_reclaim_release(gfp_t gfp_mask) { }
+ #endif
+
++/**
++ * might_alloc - Mark possible allocation sites
++ * @gfp_mask: gfp_t flags that would be used to allocate
++ *
++ * Similar to might_sleep() and other annotations, this can be used in functions
++ * that might allocate, but often don't. Compiles to nothing without
++ * CONFIG_LOCKDEP. Includes a conditional might_sleep() if @gfp allows blocking.
++ */
++static inline void might_alloc(gfp_t gfp_mask)
++{
++ fs_reclaim_acquire(gfp_mask);
++ fs_reclaim_release(gfp_mask);
++
++ might_sleep_if(gfpflags_allow_blocking(gfp_mask));
++}
++
+ /**
+ * memalloc_noio_save - Marks implicit GFP_NOIO allocation scope.
+ *
+diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
+index 3248e4aeec0371..4b5731245bf15b 100644
+--- a/include/linux/skbuff.h
++++ b/include/linux/skbuff.h
+@@ -2519,6 +2519,29 @@ static inline void skb_reset_transport_header(struct sk_buff *skb)
+ skb->transport_header = skb->data - skb->head;
+ }
+
++/**
++ * skb_reset_transport_header_careful - conditionally reset transport header
++ * @skb: buffer to alter
++ *
++ * Hardened version of skb_reset_transport_header().
++ *
++ * Returns: true if the operation was a success.
++ */
++static inline bool __must_check
++skb_reset_transport_header_careful(struct sk_buff *skb)
++{
++ long offset = skb->data - skb->head;
++
++ if (unlikely(offset != (typeof(skb->transport_header))offset))
++ return false;
++
++ if (unlikely(offset == (typeof(skb->transport_header))~0U))
++ return false;
++
++ skb->transport_header = offset;
++ return true;
++}
++
+ static inline void skb_set_transport_header(struct sk_buff *skb,
+ const int offset)
+ {
+@@ -3081,7 +3104,13 @@ static inline void *skb_frag_address(const skb_frag_t *frag)
+ */
+ static inline void *skb_frag_address_safe(const skb_frag_t *frag)
+ {
+- void *ptr = page_address(skb_frag_page(frag));
++ struct page *page = skb_frag_page(frag);
++ void *ptr;
++
++ if (!page)
++ return NULL;
++
++ ptr = page_address(page);
+ if (unlikely(!ptr))
+ return NULL;
+
+diff --git a/include/linux/usb/usbnet.h b/include/linux/usb/usbnet.h
+index 8110c29fab42d5..6aaef1a3e16c8e 100644
+--- a/include/linux/usb/usbnet.h
++++ b/include/linux/usb/usbnet.h
+@@ -83,6 +83,7 @@ struct usbnet {
+ # define EVENT_LINK_CHANGE 11
+ # define EVENT_SET_RX_MODE 12
+ # define EVENT_NO_IP_ALIGN 13
++# define EVENT_LINK_CARRIER_ON 14
+ u32 rx_speed; /* in bps - NOT Mbps */
+ u32 tx_speed; /* in bps - NOT Mbps */
+ };
+diff --git a/include/linux/xarray.h b/include/linux/xarray.h
+index 92c0160b335231..05c025c5c100dc 100644
+--- a/include/linux/xarray.h
++++ b/include/linux/xarray.h
+@@ -15,6 +15,7 @@
+ #include <linux/kconfig.h>
+ #include <linux/kernel.h>
+ #include <linux/rcupdate.h>
++#include <linux/sched/mm.h>
+ #include <linux/spinlock.h>
+ #include <linux/types.h>
+
+@@ -583,6 +584,7 @@ static inline void *xa_store_bh(struct xarray *xa, unsigned long index,
+ {
+ void *curr;
+
++ might_alloc(gfp);
+ xa_lock_bh(xa);
+ curr = __xa_store(xa, index, entry, gfp);
+ xa_unlock_bh(xa);
+@@ -609,6 +611,7 @@ static inline void *xa_store_irq(struct xarray *xa, unsigned long index,
+ {
+ void *curr;
+
++ might_alloc(gfp);
+ xa_lock_irq(xa);
+ curr = __xa_store(xa, index, entry, gfp);
+ xa_unlock_irq(xa);
+@@ -684,6 +687,7 @@ static inline void *xa_cmpxchg(struct xarray *xa, unsigned long index,
+ {
+ void *curr;
+
++ might_alloc(gfp);
+ xa_lock(xa);
+ curr = __xa_cmpxchg(xa, index, old, entry, gfp);
+ xa_unlock(xa);
+@@ -711,6 +715,7 @@ static inline void *xa_cmpxchg_bh(struct xarray *xa, unsigned long index,
+ {
+ void *curr;
+
++ might_alloc(gfp);
+ xa_lock_bh(xa);
+ curr = __xa_cmpxchg(xa, index, old, entry, gfp);
+ xa_unlock_bh(xa);
+@@ -738,6 +743,7 @@ static inline void *xa_cmpxchg_irq(struct xarray *xa, unsigned long index,
+ {
+ void *curr;
+
++ might_alloc(gfp);
+ xa_lock_irq(xa);
+ curr = __xa_cmpxchg(xa, index, old, entry, gfp);
+ xa_unlock_irq(xa);
+@@ -767,6 +773,7 @@ static inline int __must_check xa_insert(struct xarray *xa,
+ {
+ int err;
+
++ might_alloc(gfp);
+ xa_lock(xa);
+ err = __xa_insert(xa, index, entry, gfp);
+ xa_unlock(xa);
+@@ -796,6 +803,7 @@ static inline int __must_check xa_insert_bh(struct xarray *xa,
+ {
+ int err;
+
++ might_alloc(gfp);
+ xa_lock_bh(xa);
+ err = __xa_insert(xa, index, entry, gfp);
+ xa_unlock_bh(xa);
+@@ -825,6 +833,7 @@ static inline int __must_check xa_insert_irq(struct xarray *xa,
+ {
+ int err;
+
++ might_alloc(gfp);
+ xa_lock_irq(xa);
+ err = __xa_insert(xa, index, entry, gfp);
+ xa_unlock_irq(xa);
+@@ -854,6 +863,7 @@ static inline __must_check int xa_alloc(struct xarray *xa, u32 *id,
+ {
+ int err;
+
++ might_alloc(gfp);
+ xa_lock(xa);
+ err = __xa_alloc(xa, id, entry, limit, gfp);
+ xa_unlock(xa);
+@@ -883,6 +893,7 @@ static inline int __must_check xa_alloc_bh(struct xarray *xa, u32 *id,
+ {
+ int err;
+
++ might_alloc(gfp);
+ xa_lock_bh(xa);
+ err = __xa_alloc(xa, id, entry, limit, gfp);
+ xa_unlock_bh(xa);
+@@ -912,6 +923,7 @@ static inline int __must_check xa_alloc_irq(struct xarray *xa, u32 *id,
+ {
+ int err;
+
++ might_alloc(gfp);
+ xa_lock_irq(xa);
+ err = __xa_alloc(xa, id, entry, limit, gfp);
+ xa_unlock_irq(xa);
+@@ -945,6 +957,7 @@ static inline int xa_alloc_cyclic(struct xarray *xa, u32 *id, void *entry,
+ {
+ int err;
+
++ might_alloc(gfp);
+ xa_lock(xa);
+ err = __xa_alloc_cyclic(xa, id, entry, limit, next, gfp);
+ xa_unlock(xa);
+@@ -978,6 +991,7 @@ static inline int xa_alloc_cyclic_bh(struct xarray *xa, u32 *id, void *entry,
+ {
+ int err;
+
++ might_alloc(gfp);
+ xa_lock_bh(xa);
+ err = __xa_alloc_cyclic(xa, id, entry, limit, next, gfp);
+ xa_unlock_bh(xa);
+@@ -1011,6 +1025,7 @@ static inline int xa_alloc_cyclic_irq(struct xarray *xa, u32 *id, void *entry,
+ {
+ int err;
+
++ might_alloc(gfp);
+ xa_lock_irq(xa);
+ err = __xa_alloc_cyclic(xa, id, entry, limit, next, gfp);
+ xa_unlock_irq(xa);
+diff --git a/include/net/cfg80211.h b/include/net/cfg80211.h
+index 4536a122c4bc55..5595c2a949397f 100644
+--- a/include/net/cfg80211.h
++++ b/include/net/cfg80211.h
+@@ -510,7 +510,7 @@ ieee80211_get_sband_iftype_data(const struct ieee80211_supported_band *sband,
+ {
+ int i;
+
+- if (WARN_ON(iftype >= NL80211_IFTYPE_MAX))
++ if (WARN_ON(iftype >= NUM_NL80211_IFTYPES))
+ return NULL;
+
+ if (iftype == NL80211_IFTYPE_AP_VLAN)
+diff --git a/include/net/tc_act/tc_ctinfo.h b/include/net/tc_act/tc_ctinfo.h
+index f071c1d70a25e1..a04bcac7adf4b6 100644
+--- a/include/net/tc_act/tc_ctinfo.h
++++ b/include/net/tc_act/tc_ctinfo.h
+@@ -18,9 +18,9 @@ struct tcf_ctinfo_params {
+ struct tcf_ctinfo {
+ struct tc_action common;
+ struct tcf_ctinfo_params __rcu *params;
+- u64 stats_dscp_set;
+- u64 stats_dscp_error;
+- u64 stats_cpmark_set;
++ atomic64_t stats_dscp_set;
++ atomic64_t stats_dscp_error;
++ atomic64_t stats_cpmark_set;
+ };
+
+ enum {
+diff --git a/include/net/udp.h b/include/net/udp.h
+index db599b15b6304d..5e5e8c7c67770d 100644
+--- a/include/net/udp.h
++++ b/include/net/udp.h
+@@ -479,6 +479,16 @@ static inline struct sk_buff *udp_rcv_segment(struct sock *sk,
+ {
+ netdev_features_t features = NETIF_F_SG;
+ struct sk_buff *segs;
++ int drop_count;
++
++ /*
++ * Segmentation in UDP receive path is only for UDP GRO, drop udp
++ * fragmentation offload (UFO) packets.
++ */
++ if (skb_shinfo(skb)->gso_type & SKB_GSO_UDP) {
++ drop_count = 1;
++ goto drop;
++ }
+
+ /* Avoid csum recalculation by skb_segment unless userspace explicitly
+ * asks for the final checksum values
+@@ -502,16 +512,18 @@ static inline struct sk_buff *udp_rcv_segment(struct sock *sk,
+ */
+ segs = __skb_gso_segment(skb, features, false);
+ if (IS_ERR_OR_NULL(segs)) {
+- int segs_nr = skb_shinfo(skb)->gso_segs;
+-
+- atomic_add(segs_nr, &sk->sk_drops);
+- SNMP_ADD_STATS(__UDPX_MIB(sk, ipv4), UDP_MIB_INERRORS, segs_nr);
+- kfree_skb(skb);
+- return NULL;
++ drop_count = skb_shinfo(skb)->gso_segs;
++ goto drop;
+ }
+
+ consume_skb(skb);
+ return segs;
++
++drop:
++ atomic_add(drop_count, &sk->sk_drops);
++ SNMP_ADD_STATS(__UDPX_MIB(sk, ipv4), UDP_MIB_INERRORS, drop_count);
++ kfree_skb(skb);
++ return NULL;
+ }
+
+ #ifdef CONFIG_BPF_SYSCALL
+diff --git a/include/uapi/linux/in6.h b/include/uapi/linux/in6.h
+index 5ad396a57eb327..327fd76c09629a 100644
+--- a/include/uapi/linux/in6.h
++++ b/include/uapi/linux/in6.h
+@@ -151,7 +151,6 @@ struct in6_flowlabel_req {
+ /*
+ * IPV6 socket options
+ */
+-#if __UAPI_DEF_IPV6_OPTIONS
+ #define IPV6_ADDRFORM 1
+ #define IPV6_2292PKTINFO 2
+ #define IPV6_2292HOPOPTS 3
+@@ -168,8 +167,10 @@ struct in6_flowlabel_req {
+ #define IPV6_MULTICAST_IF 17
+ #define IPV6_MULTICAST_HOPS 18
+ #define IPV6_MULTICAST_LOOP 19
++#if __UAPI_DEF_IPV6_OPTIONS
+ #define IPV6_ADD_MEMBERSHIP 20
+ #define IPV6_DROP_MEMBERSHIP 21
++#endif
+ #define IPV6_ROUTER_ALERT 22
+ #define IPV6_MTU_DISCOVER 23
+ #define IPV6_MTU 24
+@@ -202,7 +203,6 @@ struct in6_flowlabel_req {
+ #define IPV6_IPSEC_POLICY 34
+ #define IPV6_XFRM_POLICY 35
+ #define IPV6_HDRINCL 36
+-#endif
+
+ /*
+ * Multicast:
+diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
+index 6481db93700287..a4f27ccb3ae4c4 100644
+--- a/include/uapi/linux/io_uring.h
++++ b/include/uapi/linux/io_uring.h
+@@ -29,7 +29,7 @@ struct io_uring_sqe {
+ };
+ __u32 len; /* buffer size or number of iovecs */
+ union {
+- __kernel_rwf_t rw_flags;
++ __u32 rw_flags;
+ __u32 fsync_flags;
+ __u16 poll_events; /* compatibility */
+ __u32 poll32_events; /* word-reversed for BE */
+diff --git a/include/uapi/linux/mount.h b/include/uapi/linux/mount.h
+index dd8306ea336c19..fc6a2e63130b79 100644
+--- a/include/uapi/linux/mount.h
++++ b/include/uapi/linux/mount.h
+@@ -71,7 +71,8 @@
+ #define MOVE_MOUNT_T_SYMLINKS 0x00000010 /* Follow symlinks on to path */
+ #define MOVE_MOUNT_T_AUTOMOUNTS 0x00000020 /* Follow automounts on to path */
+ #define MOVE_MOUNT_T_EMPTY_PATH 0x00000040 /* Empty to path permitted */
+-#define MOVE_MOUNT__MASK 0x00000077
++#define MOVE_MOUNT_SET_GROUP 0x00000100 /* Set sharing group instead */
++#define MOVE_MOUNT__MASK 0x00000177
+
+ /*
+ * fsopen() flags.
+diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
+index 731547a0d057a5..efe9785c6c1391 100644
+--- a/kernel/cgroup/cpuset.c
++++ b/kernel/cgroup/cpuset.c
+@@ -71,6 +71,13 @@
+ DEFINE_STATIC_KEY_FALSE(cpusets_pre_enable_key);
+ DEFINE_STATIC_KEY_FALSE(cpusets_enabled_key);
+
++/*
++ * There could be abnormal cpuset configurations for cpu or memory
++ * node binding, add this key to provide a quick low-cost judgement
++ * of the situation.
++ */
++DEFINE_STATIC_KEY_FALSE(cpusets_insane_config_key);
++
+ /* See "Frequency meter" comments, below. */
+
+ struct fmeter {
+@@ -381,6 +388,17 @@ static DECLARE_WORK(cpuset_hotplug_work, cpuset_hotplug_workfn);
+
+ static DECLARE_WAIT_QUEUE_HEAD(cpuset_attach_wq);
+
++static inline void check_insane_mems_config(nodemask_t *nodes)
++{
++ if (!cpusets_insane_config() &&
++ movable_only_nodes(nodes)) {
++ static_branch_enable_cpuslocked(&cpusets_insane_config_key);
++ pr_info("Unsupported (movable nodes only) cpuset configuration detected (nmask=%*pbl)!\n"
++ "Cpuset allocations might fail even with a lot of memory available.\n",
++ nodemask_pr_args(nodes));
++ }
++}
++
+ /*
+ * Cgroup v2 behavior is used on the "cpus" and "mems" control files when
+ * on default hierarchy or when the cpuset_v2_mode flag is set by mounting
+@@ -1878,6 +1896,8 @@ static int update_nodemask(struct cpuset *cs, struct cpuset *trialcs,
+ if (retval < 0)
+ goto done;
+
++ check_insane_mems_config(&trialcs->mems_allowed);
++
+ spin_lock_irq(&callback_lock);
+ cs->mems_allowed = trialcs->mems_allowed;
+ spin_unlock_irq(&callback_lock);
+@@ -3215,6 +3235,9 @@ static void cpuset_hotplug_update_tasks(struct cpuset *cs, struct tmpmasks *tmp)
+ cpus_updated = !cpumask_equal(&new_cpus, cs->effective_cpus);
+ mems_updated = !nodes_equal(new_mems, cs->effective_mems);
+
++ if (mems_updated)
++ check_insane_mems_config(&new_mems);
++
+ if (is_in_v2_mode())
+ hotplug_update_tasks(cs, &new_cpus, &new_mems,
+ cpus_updated, mems_updated);
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index bf9f9eab6f67f8..c9cd1f622a1fcb 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -6194,11 +6194,21 @@ static void perf_mmap_close(struct vm_area_struct *vma)
+ ring_buffer_put(rb); /* could be last */
+ }
+
++static int perf_mmap_may_split(struct vm_area_struct *vma, unsigned long addr)
++{
++ /*
++ * Forbid splitting perf mappings to prevent refcount leaks due to
++ * the resulting non-matching offsets and sizes. See open()/close().
++ */
++ return -EINVAL;
++}
++
+ static const struct vm_operations_struct perf_mmap_vmops = {
+ .open = perf_mmap_open,
+ .close = perf_mmap_close, /* non mergeable */
+ .fault = perf_mmap_fault,
+ .page_mkwrite = perf_mmap_fault,
++ .split = perf_mmap_may_split,
+ };
+
+ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
+@@ -6290,9 +6300,7 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
+ goto unlock;
+ }
+
+- atomic_set(&rb->aux_mmap_count, 1);
+ user_extra = nr_pages;
+-
+ goto accounting;
+ }
+
+@@ -6394,8 +6402,10 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
+ } else {
+ ret = rb_alloc_aux(rb, event, vma->vm_pgoff, nr_pages,
+ event->attr.aux_watermark, flags);
+- if (!ret)
++ if (!ret) {
++ atomic_set(&rb->aux_mmap_count, 1);
+ rb->aux_mmap_locked = extra;
++ }
+ }
+
+ unlock:
+@@ -6405,6 +6415,7 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
+
+ atomic_inc(&event->mmap_count);
+ } else if (rb) {
++ /* AUX allocation failed */
+ atomic_dec(&rb->mmap_count);
+ }
+ aux_unlock:
+@@ -6412,6 +6423,9 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
+ mutex_unlock(aux_mutex);
+ mutex_unlock(&event->mmap_mutex);
+
++ if (ret)
++ return ret;
++
+ /*
+ * Since pinned accounting is per vm we cannot allow fork() to copy our
+ * vma.
+diff --git a/kernel/fork.c b/kernel/fork.c
+index 6ece27056fe9d6..cdf28ac4487997 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -561,7 +561,7 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
+ if (tmp->vm_flags & VM_DENYWRITE)
+ put_write_access(inode);
+ i_mmap_lock_write(mapping);
+- if (tmp->vm_flags & VM_SHARED)
++ if (vma_is_shared_maywrite(tmp))
+ mapping_allow_writable(mapping);
+ flush_dcache_mmap_lock(mapping);
+ /* insert tmp into the share list, just after mpnt */
+diff --git a/kernel/power/console.c b/kernel/power/console.c
+index fcdf0e14a47d47..19c48aa5355d2b 100644
+--- a/kernel/power/console.c
++++ b/kernel/power/console.c
+@@ -16,6 +16,7 @@
+ #define SUSPEND_CONSOLE (MAX_NR_CONSOLES-1)
+
+ static int orig_fgconsole, orig_kmsg;
++static bool vt_switch_done;
+
+ static DEFINE_MUTEX(vt_switch_mutex);
+
+@@ -136,17 +137,21 @@ void pm_prepare_console(void)
+ if (orig_fgconsole < 0)
+ return;
+
++ vt_switch_done = true;
++
+ orig_kmsg = vt_kmsg_redirect(SUSPEND_CONSOLE);
+ return;
+ }
+
+ void pm_restore_console(void)
+ {
+- if (!pm_vt_switch())
++ if (!pm_vt_switch() && !vt_switch_done)
+ return;
+
+ if (orig_fgconsole >= 0) {
+ vt_move_to_console(orig_fgconsole, 0);
+ vt_kmsg_redirect(orig_kmsg);
+ }
++
++ vt_switch_done = false;
+ }
+diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
+index c07a841971739d..ed17deba8b1858 100644
+--- a/kernel/rcu/tree_plugin.h
++++ b/kernel/rcu/tree_plugin.h
+@@ -585,10 +585,13 @@ static void rcu_preempt_deferred_qs(struct task_struct *t)
+ */
+ static void rcu_preempt_deferred_qs_handler(struct irq_work *iwp)
+ {
++ unsigned long flags;
+ struct rcu_data *rdp;
+
+ rdp = container_of(iwp, struct rcu_data, defer_qs_iw);
++ local_irq_save(flags);
+ rdp->defer_qs_iw_pending = false;
++ local_irq_restore(flags);
+ }
+
+ /*
+diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
+index 5a33ee30b40ffa..e62bbeb94faf1c 100644
+--- a/kernel/trace/ftrace.c
++++ b/kernel/trace/ftrace.c
+@@ -3829,13 +3829,17 @@ ftrace_regex_open(struct ftrace_ops *ops, int flag,
+ } else {
+ iter->hash = alloc_and_copy_ftrace_hash(size_bits, hash);
+ }
++ } else {
++ if (hash)
++ iter->hash = alloc_and_copy_ftrace_hash(hash->size_bits, hash);
++ else
++ iter->hash = EMPTY_HASH;
++ }
+
+- if (!iter->hash) {
+- trace_parser_put(&iter->parser);
+- goto out_unlock;
+- }
+- } else
+- iter->hash = hash;
++ if (!iter->hash) {
++ trace_parser_put(&iter->parser);
++ goto out_unlock;
++ }
+
+ ret = 0;
+
+@@ -5707,9 +5711,6 @@ int ftrace_regex_release(struct inode *inode, struct file *file)
+ ret = ftrace_hash_move_and_update_ops(iter->ops, orig_hash,
+ iter->hash, filter_hash);
+ mutex_unlock(&ftrace_lock);
+- } else {
+- /* For read only, the hash is the ops hash */
+- iter->hash = NULL;
+ }
+
+ mutex_unlock(&iter->ops->func_hash->regex_lock);
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 42c38a26c8010a..0c7aa47fb4d3b0 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -1613,7 +1613,7 @@ int trace_get_user(struct trace_parser *parser, const char __user *ubuf,
+
+ ret = get_user(ch, ubuf++);
+ if (ret)
+- goto out;
++ goto fail;
+
+ read++;
+ cnt--;
+@@ -1627,7 +1627,7 @@ int trace_get_user(struct trace_parser *parser, const char __user *ubuf,
+ while (cnt && isspace(ch)) {
+ ret = get_user(ch, ubuf++);
+ if (ret)
+- goto out;
++ goto fail;
+ read++;
+ cnt--;
+ }
+@@ -1637,8 +1637,7 @@ int trace_get_user(struct trace_parser *parser, const char __user *ubuf,
+ /* only spaces were written */
+ if (isspace(ch) || !ch) {
+ *ppos += read;
+- ret = read;
+- goto out;
++ return read;
+ }
+ }
+
+@@ -1648,11 +1647,12 @@ int trace_get_user(struct trace_parser *parser, const char __user *ubuf,
+ parser->buffer[parser->idx++] = ch;
+ else {
+ ret = -EINVAL;
+- goto out;
++ goto fail;
+ }
++
+ ret = get_user(ch, ubuf++);
+ if (ret)
+- goto out;
++ goto fail;
+ read++;
+ cnt--;
+ }
+@@ -1668,13 +1668,13 @@ int trace_get_user(struct trace_parser *parser, const char __user *ubuf,
+ parser->buffer[parser->idx] = 0;
+ } else {
+ ret = -EINVAL;
+- goto out;
++ goto fail;
+ }
+
+ *ppos += read;
+- ret = read;
+-
+-out:
++ return read;
++fail:
++ trace_parser_fail(parser);
+ return ret;
+ }
+
+@@ -2139,10 +2139,10 @@ int __init register_tracer(struct tracer *type)
+ mutex_unlock(&trace_types_lock);
+
+ if (ret || !default_bootup_tracer)
+- goto out_unlock;
++ return ret;
+
+ if (strncmp(default_bootup_tracer, type->name, MAX_TRACER_SIZE))
+- goto out_unlock;
++ return 0;
+
+ printk(KERN_INFO "Starting tracer '%s'\n", type->name);
+ /* Do we want this tracer to start on bootup? */
+@@ -2154,8 +2154,7 @@ int __init register_tracer(struct tracer *type)
+ /* disable other selftests, since this will break it. */
+ disable_tracing_selftest("running a tracer");
+
+- out_unlock:
+- return ret;
++ return 0;
+ }
+
+ static void tracing_reset_cpu(struct array_buffer *buf, int cpu)
+@@ -8240,11 +8239,10 @@ ftrace_trace_snapshot_callback(struct trace_array *tr, struct ftrace_hash *hash,
+ out_reg:
+ ret = tracing_alloc_snapshot_instance(tr);
+ if (ret < 0)
+- goto out;
++ return ret;
+
+ ret = register_ftrace_function_probe(glob, tr, ops, count);
+
+- out:
+ return ret < 0 ? ret : 0;
+ }
+
+@@ -9746,7 +9744,7 @@ __init static int tracer_alloc_buffers(void)
+ BUILD_BUG_ON(TRACE_ITER_LAST_BIT > TRACE_FLAGS_MAX_SIZE);
+
+ if (!alloc_cpumask_var(&tracing_buffer_mask, GFP_KERNEL))
+- goto out;
++ return -ENOMEM;
+
+ if (!alloc_cpumask_var(&global_trace.tracing_cpumask, GFP_KERNEL))
+ goto out_free_buffer_mask;
+@@ -9857,7 +9855,6 @@ __init static int tracer_alloc_buffers(void)
+ free_cpumask_var(global_trace.tracing_cpumask);
+ out_free_buffer_mask:
+ free_cpumask_var(tracing_buffer_mask);
+-out:
+ return ret;
+ }
+
+diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
+index f47938d8401a22..2f5558a097e9bb 100644
+--- a/kernel/trace/trace.h
++++ b/kernel/trace/trace.h
+@@ -1269,6 +1269,7 @@ bool ftrace_event_is_function(struct trace_event_call *call);
+ */
+ struct trace_parser {
+ bool cont;
++ bool fail;
+ char *buffer;
+ unsigned idx;
+ unsigned size;
+@@ -1276,7 +1277,7 @@ struct trace_parser {
+
+ static inline bool trace_parser_loaded(struct trace_parser *parser)
+ {
+- return (parser->idx != 0);
++ return !parser->fail && parser->idx != 0;
+ }
+
+ static inline bool trace_parser_cont(struct trace_parser *parser)
+@@ -1290,6 +1291,11 @@ static inline void trace_parser_clear(struct trace_parser *parser)
+ parser->idx = 0;
+ }
+
++static inline void trace_parser_fail(struct trace_parser *parser)
++{
++ parser->fail = true;
++}
++
+ extern int trace_parser_get_init(struct trace_parser *parser, int size);
+ extern void trace_parser_put(struct trace_parser *parser);
+ extern int trace_get_user(struct trace_parser *parser, const char __user *ubuf,
+diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
+index 92693e2140a941..9cd97b274e6c1d 100644
+--- a/kernel/trace/trace_events.c
++++ b/kernel/trace/trace_events.c
+@@ -2324,7 +2324,10 @@ __register_event(struct trace_event_call *call, struct module *mod)
+ if (ret < 0)
+ return ret;
+
++ down_write(&trace_event_sem);
+ list_add(&call->list, &ftrace_events);
++ up_write(&trace_event_sem);
++
+ call->mod = mod;
+
+ return 0;
+@@ -2710,6 +2713,8 @@ __trace_add_event_dirs(struct trace_array *tr)
+ struct trace_event_call *call;
+ int ret;
+
++ lockdep_assert_held(&trace_event_sem);
++
+ list_for_each_entry(call, &ftrace_events, list) {
+ ret = __trace_add_new_event(call, tr);
+ if (ret < 0)
+diff --git a/mm/filemap.c b/mm/filemap.c
+index 3b0d8c6dd58704..b98af5680bb9c5 100644
+--- a/mm/filemap.c
++++ b/mm/filemap.c
+@@ -2959,7 +2959,7 @@ int generic_file_mmap(struct file * file, struct vm_area_struct * vma)
+ */
+ int generic_file_readonly_mmap(struct file *file, struct vm_area_struct *vma)
+ {
+- if ((vma->vm_flags & VM_SHARED) && (vma->vm_flags & VM_MAYWRITE))
++ if (vma_is_shared_maywrite(vma))
+ return -EINVAL;
+ return generic_file_mmap(file, vma);
+ }
+diff --git a/mm/hmm.c b/mm/hmm.c
+index cbe9d0c6665045..0bd308210da767 100644
+--- a/mm/hmm.c
++++ b/mm/hmm.c
+@@ -171,6 +171,7 @@ static inline unsigned long hmm_pfn_flags_order(unsigned long order)
+ return order << HMM_PFN_ORDER_SHIFT;
+ }
+
++#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+ static inline unsigned long pmd_to_hmm_pfn_flags(struct hmm_range *range,
+ pmd_t pmd)
+ {
+@@ -181,7 +182,6 @@ static inline unsigned long pmd_to_hmm_pfn_flags(struct hmm_range *range,
+ hmm_pfn_flags_order(PMD_SHIFT - PAGE_SHIFT);
+ }
+
+-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+ static int hmm_vma_handle_pmd(struct mm_walk *walk, unsigned long addr,
+ unsigned long end, unsigned long hmm_pfns[],
+ pmd_t pmd)
+diff --git a/mm/kmemleak.c b/mm/kmemleak.c
+index 4801751cb6b6d6..b977c8b6af7e5f 100644
+--- a/mm/kmemleak.c
++++ b/mm/kmemleak.c
+@@ -417,6 +417,7 @@ static struct kmemleak_object *mem_pool_alloc(gfp_t gfp)
+ {
+ unsigned long flags;
+ struct kmemleak_object *object;
++ bool warn = false;
+
+ /* try the slab allocator first */
+ if (object_cache) {
+@@ -434,8 +435,10 @@ static struct kmemleak_object *mem_pool_alloc(gfp_t gfp)
+ else if (mem_pool_free_count)
+ object = &mem_pool[--mem_pool_free_count];
+ else
+- pr_warn_once("Memory pool empty, consider increasing CONFIG_DEBUG_KMEMLEAK_MEM_POOL_SIZE\n");
++ warn = true;
+ raw_spin_unlock_irqrestore(&kmemleak_lock, flags);
++ if (warn)
++ pr_warn_once("Memory pool empty, consider increasing CONFIG_DEBUG_KMEMLEAK_MEM_POOL_SIZE\n");
+
+ return object;
+ }
+@@ -1851,6 +1854,7 @@ static const struct file_operations kmemleak_fops = {
+ static void __kmemleak_do_cleanup(void)
+ {
+ struct kmemleak_object *object, *tmp;
++ unsigned int cnt = 0;
+
+ /*
+ * Kmemleak has already been disabled, no need for RCU list traversal
+@@ -1859,6 +1863,10 @@ static void __kmemleak_do_cleanup(void)
+ list_for_each_entry_safe(object, tmp, &object_list, object_list) {
+ __remove_object(object);
+ __delete_object(object);
++
++ /* Call cond_resched() once per 64 iterations to avoid soft lockup */
++ if (!(++cnt & 0x3f))
++ cond_resched();
+ }
+ }
+
+diff --git a/mm/madvise.c b/mm/madvise.c
+index a63aa04ec7fa31..370d0ef719ebec 100644
+--- a/mm/madvise.c
++++ b/mm/madvise.c
+@@ -848,7 +848,7 @@ static long madvise_remove(struct vm_area_struct *vma,
+ return -EINVAL;
+ }
+
+- if ((vma->vm_flags & (VM_SHARED|VM_WRITE)) != (VM_SHARED|VM_WRITE))
++ if (!vma_is_shared_maywrite(vma))
+ return -EACCES;
+
+ offset = (loff_t)(start - vma->vm_start)
+diff --git a/mm/memfd.c b/mm/memfd.c
+index 278e5636623e6a..8ce796ca5bfa46 100644
+--- a/mm/memfd.c
++++ b/mm/memfd.c
+@@ -133,7 +133,7 @@ static int memfd_wait_for_pins(struct address_space *mapping)
+ return error;
+ }
+
+-static unsigned int *memfd_file_seals_ptr(struct file *file)
++unsigned int *memfd_file_seals_ptr(struct file *file)
+ {
+ if (shmem_file(file))
+ return &SHMEM_I(file_inode(file))->seals;
+diff --git a/mm/mmap.c b/mm/mmap.c
+index 8c188ed3738ac8..4d5e9d085f0a30 100644
+--- a/mm/mmap.c
++++ b/mm/mmap.c
+@@ -47,6 +47,7 @@
+ #include <linux/pkeys.h>
+ #include <linux/oom.h>
+ #include <linux/sched/mm.h>
++#include <linux/memfd.h>
+
+ #include <linux/uaccess.h>
+ #include <asm/cacheflush.h>
+@@ -144,7 +145,7 @@ static void __remove_shared_vm_struct(struct vm_area_struct *vma,
+ {
+ if (vma->vm_flags & VM_DENYWRITE)
+ allow_write_access(file);
+- if (vma->vm_flags & VM_SHARED)
++ if (vma_is_shared_maywrite(vma))
+ mapping_unmap_writable(mapping);
+
+ flush_dcache_mmap_lock(mapping);
+@@ -663,7 +664,7 @@ static void __vma_link_file(struct vm_area_struct *vma)
+
+ if (vma->vm_flags & VM_DENYWRITE)
+ put_write_access(file_inode(file));
+- if (vma->vm_flags & VM_SHARED)
++ if (vma_is_shared_maywrite(vma))
+ mapping_allow_writable(mapping);
+
+ flush_dcache_mmap_lock(mapping);
+@@ -1488,6 +1489,7 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
+
+ if (file) {
+ struct inode *inode = file_inode(file);
++ unsigned int seals = memfd_file_seals(file);
+ unsigned long flags_mask;
+
+ if (!file_mmap_ok(file, inode, pgoff, len))
+@@ -1532,6 +1534,8 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
+ vm_flags |= VM_SHARED | VM_MAYSHARE;
+ if (!(file->f_mode & FMODE_WRITE))
+ vm_flags &= ~(VM_MAYWRITE | VM_SHARED);
++ else if (is_readonly_sealed(seals, vm_flags))
++ vm_flags &= ~VM_MAYWRITE;
+ fallthrough;
+ case MAP_PRIVATE:
+ if (!(file->f_mode & FMODE_READ))
+@@ -2942,7 +2946,7 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
+ return -EINVAL;
+
+ /* Map writable and ensure this isn't a sealed memfd. */
+- if (file && (vm_flags & VM_SHARED)) {
++ if (file && is_shared_maywrite(vm_flags)) {
+ int error = mapping_map_writable(file->f_mapping);
+
+ if (error)
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index 59e1fcc055664c..d906c6b9618158 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -4691,6 +4691,19 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
+ if (!ac->preferred_zoneref->zone)
+ goto nopage;
+
++ /*
++ * Check for insane configurations where the cpuset doesn't contain
++ * any suitable zone to satisfy the request - e.g. non-movable
++ * GFP_HIGHUSER allocations from MOVABLE nodes only.
++ */
++ if (cpusets_insane_config() && (gfp_mask & __GFP_HARDWALL)) {
++ struct zoneref *z = first_zones_zonelist(ac->zonelist,
++ ac->highest_zoneidx,
++ &cpuset_current_mems_allowed);
++ if (!z->zone)
++ goto nopage;
++ }
++
+ if (alloc_flags & ALLOC_KSWAPD)
+ wake_all_kswapds(order, gfp_mask, ac);
+
+diff --git a/mm/ptdump.c b/mm/ptdump.c
+index a917bf55c61ea9..adbc141083c77e 100644
+--- a/mm/ptdump.c
++++ b/mm/ptdump.c
+@@ -141,6 +141,7 @@ void ptdump_walk_pgd(struct ptdump_state *st, struct mm_struct *mm, pgd_t *pgd)
+ {
+ const struct ptdump_range *range = st->range;
+
++ get_online_mems();
+ mmap_write_lock(mm);
+ while (range->start != range->end) {
+ walk_page_range_novma(mm, range->start, range->end,
+@@ -148,6 +149,7 @@ void ptdump_walk_pgd(struct ptdump_state *st, struct mm_struct *mm, pgd_t *pgd)
+ range++;
+ }
+ mmap_write_unlock(mm);
++ put_online_mems();
+
+ /* Flush out the last page */
+ st->note_page(st, 0, -1, 0);
+diff --git a/mm/shmem.c b/mm/shmem.c
+index 6666114ed53bbe..5f8d8899bd0e75 100644
+--- a/mm/shmem.c
++++ b/mm/shmem.c
+@@ -2263,7 +2263,7 @@ static int shmem_mmap(struct file *file, struct vm_area_struct *vma)
+ struct shmem_inode_info *info = SHMEM_I(file_inode(file));
+ int ret;
+
+- ret = seal_check_future_write(info->seals, vma);
++ ret = seal_check_write(info->seals, vma);
+ if (ret)
+ return ret;
+
+diff --git a/mm/slab.h b/mm/slab.h
+index 6952e10cf33b44..4b70cf4493e64c 100644
+--- a/mm/slab.h
++++ b/mm/slab.h
+@@ -507,10 +507,7 @@ static inline struct kmem_cache *slab_pre_alloc_hook(struct kmem_cache *s,
+ {
+ flags &= gfp_allowed_mask;
+
+- fs_reclaim_acquire(flags);
+- fs_reclaim_release(flags);
+-
+- might_sleep_if(gfpflags_allow_blocking(flags));
++ might_alloc(flags);
+
+ if (should_failslab(s, flags))
+ return NULL;
+diff --git a/mm/slob.c b/mm/slob.c
+index 7cc9805c809115..8d4bfa46247f47 100644
+--- a/mm/slob.c
++++ b/mm/slob.c
+@@ -474,8 +474,7 @@ __do_kmalloc_node(size_t size, gfp_t gfp, int node, unsigned long caller)
+
+ gfp &= gfp_allowed_mask;
+
+- fs_reclaim_acquire(gfp);
+- fs_reclaim_release(gfp);
++ might_alloc(gfp);
+
+ if (size < PAGE_SIZE - minalign) {
+ int align = minalign;
+@@ -597,8 +596,7 @@ static void *slob_alloc_node(struct kmem_cache *c, gfp_t flags, int node)
+
+ flags &= gfp_allowed_mask;
+
+- fs_reclaim_acquire(flags);
+- fs_reclaim_release(flags);
++ might_alloc(flags);
+
+ if (c->size < PAGE_SIZE) {
+ b = slob_alloc(c->size, flags, c->align, node, 0);
+diff --git a/mm/vmalloc.c b/mm/vmalloc.c
+index fd1c8f51aa530e..66c24ed6e20195 100644
+--- a/mm/vmalloc.c
++++ b/mm/vmalloc.c
+@@ -193,6 +193,7 @@ static int vmap_pte_range(pmd_t *pmd, unsigned long addr,
+ unsigned long end, pgprot_t prot, struct page **pages, int *nr,
+ pgtbl_mod_mask *mask)
+ {
++ int err = 0;
+ pte_t *pte;
+
+ /*
+@@ -206,15 +207,20 @@ static int vmap_pte_range(pmd_t *pmd, unsigned long addr,
+ do {
+ struct page *page = pages[*nr];
+
+- if (WARN_ON(!pte_none(*pte)))
+- return -EBUSY;
+- if (WARN_ON(!page))
+- return -ENOMEM;
++ if (WARN_ON(!pte_none(*pte))) {
++ err = -EBUSY;
++ break;
++ }
++ if (WARN_ON(!page)) {
++ err = -ENOMEM;
++ break;
++ }
+ set_pte_at(&init_mm, addr, pte, mk_pte(page, prot));
+ (*nr)++;
+ } while (pte++, addr += PAGE_SIZE, addr != end);
+ *mask |= PGTBL_PTE_MODIFIED;
+- return 0;
++
++ return err;
+ }
+
+ static int vmap_pmd_range(pud_t *pud, unsigned long addr,
+diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
+index c18dc8e61d3528..f5f80981ac9866 100644
+--- a/mm/zsmalloc.c
++++ b/mm/zsmalloc.c
+@@ -357,7 +357,7 @@ static void cache_free_handle(struct zs_pool *pool, unsigned long handle)
+
+ static struct zspage *cache_alloc_zspage(struct zs_pool *pool, gfp_t flags)
+ {
+- return kmem_cache_alloc(pool->zspage_cachep,
++ return kmem_cache_zalloc(pool->zspage_cachep,
+ flags & ~(__GFP_HIGHMEM|__GFP_MOVABLE));
+ }
+
+@@ -1067,7 +1067,9 @@ static struct zspage *alloc_zspage(struct zs_pool *pool,
+ if (!zspage)
+ return NULL;
+
+- memset(zspage, 0, sizeof(struct zspage));
++ if (!IS_ENABLED(CONFIG_COMPACTION))
++ gfp &= ~__GFP_MOVABLE;
++
+ zspage->magic = ZSPAGE_MAGIC;
+ migrate_lock_init(zspage);
+
+diff --git a/net/8021q/vlan.c b/net/8021q/vlan.c
+index b45b9c9b12684f..07b829d19e01e2 100644
+--- a/net/8021q/vlan.c
++++ b/net/8021q/vlan.c
+@@ -356,6 +356,35 @@ static int __vlan_device_event(struct net_device *dev, unsigned long event)
+ return err;
+ }
+
++static void vlan_vid0_add(struct net_device *dev)
++{
++ struct vlan_info *vlan_info;
++ int err;
++
++ if (!(dev->features & NETIF_F_HW_VLAN_CTAG_FILTER))
++ return;
++
++ pr_info("adding VLAN 0 to HW filter on device %s\n", dev->name);
++
++ err = vlan_vid_add(dev, htons(ETH_P_8021Q), 0);
++ if (err)
++ return;
++
++ vlan_info = rtnl_dereference(dev->vlan_info);
++ vlan_info->auto_vid0 = true;
++}
++
++static void vlan_vid0_del(struct net_device *dev)
++{
++ struct vlan_info *vlan_info = rtnl_dereference(dev->vlan_info);
++
++ if (!vlan_info || !vlan_info->auto_vid0)
++ return;
++
++ vlan_info->auto_vid0 = false;
++ vlan_vid_del(dev, htons(ETH_P_8021Q), 0);
++}
++
+ static int vlan_device_event(struct notifier_block *unused, unsigned long event,
+ void *ptr)
+ {
+@@ -377,15 +406,10 @@ static int vlan_device_event(struct notifier_block *unused, unsigned long event,
+ return notifier_from_errno(err);
+ }
+
+- if ((event == NETDEV_UP) &&
+- (dev->features & NETIF_F_HW_VLAN_CTAG_FILTER)) {
+- pr_info("adding VLAN 0 to HW filter on device %s\n",
+- dev->name);
+- vlan_vid_add(dev, htons(ETH_P_8021Q), 0);
+- }
+- if (event == NETDEV_DOWN &&
+- (dev->features & NETIF_F_HW_VLAN_CTAG_FILTER))
+- vlan_vid_del(dev, htons(ETH_P_8021Q), 0);
++ if (event == NETDEV_UP)
++ vlan_vid0_add(dev);
++ else if (event == NETDEV_DOWN)
++ vlan_vid0_del(dev);
+
+ vlan_info = rtnl_dereference(dev->vlan_info);
+ if (!vlan_info)
+diff --git a/net/8021q/vlan.h b/net/8021q/vlan.h
+index c3734927711469..2633b7616526fc 100644
+--- a/net/8021q/vlan.h
++++ b/net/8021q/vlan.h
+@@ -33,6 +33,7 @@ struct vlan_info {
+ struct vlan_group grp;
+ struct list_head vid_list;
+ unsigned int nr_vids;
++ bool auto_vid0;
+ struct rcu_head rcu;
+ };
+
+diff --git a/net/appletalk/aarp.c b/net/appletalk/aarp.c
+index 45f584171de79f..17d9cb380e7bd0 100644
+--- a/net/appletalk/aarp.c
++++ b/net/appletalk/aarp.c
+@@ -35,6 +35,7 @@
+ #include <linux/seq_file.h>
+ #include <linux/export.h>
+ #include <linux/etherdevice.h>
++#include <linux/refcount.h>
+
+ int sysctl_aarp_expiry_time = AARP_EXPIRY_TIME;
+ int sysctl_aarp_tick_time = AARP_TICK_TIME;
+@@ -44,17 +45,19 @@ int sysctl_aarp_resolve_time = AARP_RESOLVE_TIME;
+ /* Lists of aarp entries */
+ /**
+ * struct aarp_entry - AARP entry
+- * @last_sent - Last time we xmitted the aarp request
+- * @packet_queue - Queue of frames wait for resolution
+- * @status - Used for proxy AARP
+- * expires_at - Entry expiry time
+- * target_addr - DDP Address
+- * dev - Device to use
+- * hwaddr - Physical i/f address of target/router
+- * xmit_count - When this hits 10 we give up
+- * next - Next entry in chain
++ * @refcnt: Reference count
++ * @last_sent: Last time we xmitted the aarp request
++ * @packet_queue: Queue of frames wait for resolution
++ * @status: Used for proxy AARP
++ * @expires_at: Entry expiry time
++ * @target_addr: DDP Address
++ * @dev: Device to use
++ * @hwaddr: Physical i/f address of target/router
++ * @xmit_count: When this hits 10 we give up
++ * @next: Next entry in chain
+ */
+ struct aarp_entry {
++ refcount_t refcnt;
+ /* These first two are only used for unresolved entries */
+ unsigned long last_sent;
+ struct sk_buff_head packet_queue;
+@@ -79,6 +82,17 @@ static DEFINE_RWLOCK(aarp_lock);
+ /* Used to walk the list and purge/kick entries. */
+ static struct timer_list aarp_timer;
+
++static inline void aarp_entry_get(struct aarp_entry *a)
++{
++ refcount_inc(&a->refcnt);
++}
++
++static inline void aarp_entry_put(struct aarp_entry *a)
++{
++ if (refcount_dec_and_test(&a->refcnt))
++ kfree(a);
++}
++
+ /*
+ * Delete an aarp queue
+ *
+@@ -87,7 +101,7 @@ static struct timer_list aarp_timer;
+ static void __aarp_expire(struct aarp_entry *a)
+ {
+ skb_queue_purge(&a->packet_queue);
+- kfree(a);
++ aarp_entry_put(a);
+ }
+
+ /*
+@@ -380,9 +394,11 @@ static void aarp_purge(void)
+ static struct aarp_entry *aarp_alloc(void)
+ {
+ struct aarp_entry *a = kmalloc(sizeof(*a), GFP_ATOMIC);
++ if (!a)
++ return NULL;
+
+- if (a)
+- skb_queue_head_init(&a->packet_queue);
++ refcount_set(&a->refcnt, 1);
++ skb_queue_head_init(&a->packet_queue);
+ return a;
+ }
+
+@@ -508,6 +524,7 @@ int aarp_proxy_probe_network(struct atalk_iface *atif, struct atalk_addr *sa)
+ entry->dev = atif->dev;
+
+ write_lock_bh(&aarp_lock);
++ aarp_entry_get(entry);
+
+ hash = sa->s_node % (AARP_HASH_SIZE - 1);
+ entry->next = proxies[hash];
+@@ -533,6 +550,7 @@ int aarp_proxy_probe_network(struct atalk_iface *atif, struct atalk_addr *sa)
+ retval = 1;
+ }
+
++ aarp_entry_put(entry);
+ write_unlock_bh(&aarp_lock);
+ out:
+ return retval;
+diff --git a/net/appletalk/ddp.c b/net/appletalk/ddp.c
+index c9edfca153c99e..f38b170a51af09 100644
+--- a/net/appletalk/ddp.c
++++ b/net/appletalk/ddp.c
+@@ -1408,9 +1408,10 @@ static int atalk_route_packet(struct sk_buff *skb, struct net_device *dev,
+
+ /**
+ * atalk_rcv - Receive a packet (in skb) from device dev
+- * @skb - packet received
+- * @dev - network device where the packet comes from
+- * @pt - packet type
++ * @skb: packet received
++ * @dev: network device where the packet comes from
++ * @pt: packet type
++ * @orig_dev: the original receive net device
+ *
+ * Receive a packet (in skb) from device dev. This has come from the SNAP
+ * decoder, and on entry skb->transport_header is the DDP header, skb->len
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index 8c8631e609f6bf..b6345996fc022c 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -3682,12 +3682,28 @@ static int l2cap_parse_conf_req(struct l2cap_chan *chan, void *data, size_t data
+ /* Configure output options and let the other side know
+ * which ones we don't like. */
+
+- /* If MTU is not provided in configure request, use the most recently
+- * explicitly or implicitly accepted value for the other direction,
+- * or the default value.
++ /* If MTU is not provided in configure request, try adjusting it
++ * to the current output MTU if it has been set
++ *
++ * Bluetooth Core 6.1, Vol 3, Part A, Section 4.5
++ *
++ * Each configuration parameter value (if any is present) in an
++ * L2CAP_CONFIGURATION_RSP packet reflects an ‘adjustment’ to a
++ * configuration parameter value that has been sent (or, in case
++ * of default values, implied) in the corresponding
++ * L2CAP_CONFIGURATION_REQ packet.
+ */
+- if (mtu == 0)
+- mtu = chan->imtu ? chan->imtu : L2CAP_DEFAULT_MTU;
++ if (!mtu) {
++ /* Only adjust for ERTM channels as for older modes the
++ * remote stack may not be able to detect that the
++ * adjustment causing it to silently drop packets.
++ */
++ if (chan->mode == L2CAP_MODE_ERTM &&
++ chan->omtu && chan->omtu != L2CAP_DEFAULT_MTU)
++ mtu = chan->omtu;
++ else
++ mtu = L2CAP_DEFAULT_MTU;
++ }
+
+ if (mtu < L2CAP_DEFAULT_MIN_MTU)
+ result = L2CAP_CONF_UNACCEPT;
+diff --git a/net/bluetooth/l2cap_sock.c b/net/bluetooth/l2cap_sock.c
+index 49564c61ad4a14..7d7f4ba60a208f 100644
+--- a/net/bluetooth/l2cap_sock.c
++++ b/net/bluetooth/l2cap_sock.c
+@@ -1666,6 +1666,9 @@ static void l2cap_sock_resume_cb(struct l2cap_chan *chan)
+ {
+ struct sock *sk = chan->data;
+
++ if (!sk)
++ return;
++
+ if (test_and_clear_bit(FLAG_PENDING_SECURITY, &chan->flags)) {
+ sk->sk_state = BT_CONNECTED;
+ chan->state = BT_CONNECTED;
+diff --git a/net/bluetooth/smp.c b/net/bluetooth/smp.c
+index 8f9566f37498eb..fc896d39a6d952 100644
+--- a/net/bluetooth/smp.c
++++ b/net/bluetooth/smp.c
+@@ -1374,7 +1374,7 @@ static void smp_timeout(struct work_struct *work)
+
+ bt_dev_dbg(conn->hcon->hdev, "conn %p", conn);
+
+- hci_disconnect(conn->hcon, HCI_ERROR_REMOTE_USER_TERM);
++ hci_disconnect(conn->hcon, HCI_ERROR_AUTH_FAILURE);
+ }
+
+ static struct smp_chan *smp_chan_create(struct l2cap_conn *conn)
+@@ -2972,8 +2972,25 @@ static int smp_sig_channel(struct l2cap_chan *chan, struct sk_buff *skb)
+ if (code > SMP_CMD_MAX)
+ goto drop;
+
+- if (smp && !test_and_clear_bit(code, &smp->allow_cmd))
++ if (smp && !test_and_clear_bit(code, &smp->allow_cmd)) {
++ /* If there is a context and the command is not allowed consider
++ * it a failure so the session is cleanup properly.
++ */
++ switch (code) {
++ case SMP_CMD_IDENT_INFO:
++ case SMP_CMD_IDENT_ADDR_INFO:
++ case SMP_CMD_SIGN_INFO:
++ /* 3.6.1. Key distribution and generation
++ *
++ * A device may reject a distributed key by sending the
++ * Pairing Failed command with the reason set to
++ * "Key Rejected".
++ */
++ smp_failure(conn, SMP_KEY_REJECTED);
++ break;
++ }
+ goto drop;
++ }
+
+ /* If we don't have a context the only allowed commands are
+ * pairing request and security request.
+diff --git a/net/bluetooth/smp.h b/net/bluetooth/smp.h
+index 5fe68e255cb29c..bad594642a53dc 100644
+--- a/net/bluetooth/smp.h
++++ b/net/bluetooth/smp.h
+@@ -138,6 +138,7 @@ struct smp_cmd_keypress_notify {
+ #define SMP_NUMERIC_COMP_FAILED 0x0c
+ #define SMP_BREDR_PAIRING_IN_PROGRESS 0x0d
+ #define SMP_CROSS_TRANSP_NOT_ALLOWED 0x0e
++#define SMP_KEY_REJECTED 0x0f
+
+ #define SMP_MIN_ENC_KEY_SIZE 7
+ #define SMP_MAX_ENC_KEY_SIZE 16
+diff --git a/net/bridge/netfilter/nft_reject_bridge.c b/net/bridge/netfilter/nft_reject_bridge.c
+index deae2c9a0f6966..9ffadcd524f83a 100644
+--- a/net/bridge/netfilter/nft_reject_bridge.c
++++ b/net/bridge/netfilter/nft_reject_bridge.c
+@@ -364,69 +364,13 @@ static int nft_reject_bridge_validate(const struct nft_ctx *ctx,
+ (1 << NF_BR_LOCAL_IN));
+ }
+
+-static int nft_reject_bridge_init(const struct nft_ctx *ctx,
+- const struct nft_expr *expr,
+- const struct nlattr * const tb[])
+-{
+- struct nft_reject *priv = nft_expr_priv(expr);
+- int icmp_code;
+-
+- if (tb[NFTA_REJECT_TYPE] == NULL)
+- return -EINVAL;
+-
+- priv->type = ntohl(nla_get_be32(tb[NFTA_REJECT_TYPE]));
+- switch (priv->type) {
+- case NFT_REJECT_ICMP_UNREACH:
+- case NFT_REJECT_ICMPX_UNREACH:
+- if (tb[NFTA_REJECT_ICMP_CODE] == NULL)
+- return -EINVAL;
+-
+- icmp_code = nla_get_u8(tb[NFTA_REJECT_ICMP_CODE]);
+- if (priv->type == NFT_REJECT_ICMPX_UNREACH &&
+- icmp_code > NFT_REJECT_ICMPX_MAX)
+- return -EINVAL;
+-
+- priv->icmp_code = icmp_code;
+- break;
+- case NFT_REJECT_TCP_RST:
+- break;
+- default:
+- return -EINVAL;
+- }
+- return 0;
+-}
+-
+-static int nft_reject_bridge_dump(struct sk_buff *skb,
+- const struct nft_expr *expr)
+-{
+- const struct nft_reject *priv = nft_expr_priv(expr);
+-
+- if (nla_put_be32(skb, NFTA_REJECT_TYPE, htonl(priv->type)))
+- goto nla_put_failure;
+-
+- switch (priv->type) {
+- case NFT_REJECT_ICMP_UNREACH:
+- case NFT_REJECT_ICMPX_UNREACH:
+- if (nla_put_u8(skb, NFTA_REJECT_ICMP_CODE, priv->icmp_code))
+- goto nla_put_failure;
+- break;
+- default:
+- break;
+- }
+-
+- return 0;
+-
+-nla_put_failure:
+- return -1;
+-}
+-
+ static struct nft_expr_type nft_reject_bridge_type;
+ static const struct nft_expr_ops nft_reject_bridge_ops = {
+ .type = &nft_reject_bridge_type,
+ .size = NFT_EXPR_SIZE(sizeof(struct nft_reject)),
+ .eval = nft_reject_bridge_eval,
+- .init = nft_reject_bridge_init,
+- .dump = nft_reject_bridge_dump,
++ .init = nft_reject_init,
++ .dump = nft_reject_dump,
+ .validate = nft_reject_bridge_validate,
+ };
+
+diff --git a/net/caif/cfctrl.c b/net/caif/cfctrl.c
+index d8cb4b2a076b46..3eec293ab22f95 100644
+--- a/net/caif/cfctrl.c
++++ b/net/caif/cfctrl.c
+@@ -351,17 +351,154 @@ int cfctrl_cancel_req(struct cflayer *layr, struct cflayer *adap_layer)
+ return found;
+ }
+
++static int cfctrl_link_setup(struct cfctrl *cfctrl, struct cfpkt *pkt, u8 cmdrsp)
++{
++ u8 len;
++ u8 linkid = 0;
++ enum cfctrl_srv serv;
++ enum cfctrl_srv servtype;
++ u8 endpoint;
++ u8 physlinkid;
++ u8 prio;
++ u8 tmp;
++ u8 *cp;
++ int i;
++ struct cfctrl_link_param linkparam;
++ struct cfctrl_request_info rsp, *req;
++
++ memset(&linkparam, 0, sizeof(linkparam));
++
++ tmp = cfpkt_extr_head_u8(pkt);
++
++ serv = tmp & CFCTRL_SRV_MASK;
++ linkparam.linktype = serv;
++
++ servtype = tmp >> 4;
++ linkparam.chtype = servtype;
++
++ tmp = cfpkt_extr_head_u8(pkt);
++ physlinkid = tmp & 0x07;
++ prio = tmp >> 3;
++
++ linkparam.priority = prio;
++ linkparam.phyid = physlinkid;
++ endpoint = cfpkt_extr_head_u8(pkt);
++ linkparam.endpoint = endpoint & 0x03;
++
++ switch (serv) {
++ case CFCTRL_SRV_VEI:
++ case CFCTRL_SRV_DBG:
++ if (CFCTRL_ERR_BIT & cmdrsp)
++ break;
++ /* Link ID */
++ linkid = cfpkt_extr_head_u8(pkt);
++ break;
++ case CFCTRL_SRV_VIDEO:
++ tmp = cfpkt_extr_head_u8(pkt);
++ linkparam.u.video.connid = tmp;
++ if (CFCTRL_ERR_BIT & cmdrsp)
++ break;
++ /* Link ID */
++ linkid = cfpkt_extr_head_u8(pkt);
++ break;
++
++ case CFCTRL_SRV_DATAGRAM:
++ linkparam.u.datagram.connid = cfpkt_extr_head_u32(pkt);
++ if (CFCTRL_ERR_BIT & cmdrsp)
++ break;
++ /* Link ID */
++ linkid = cfpkt_extr_head_u8(pkt);
++ break;
++ case CFCTRL_SRV_RFM:
++ /* Construct a frame, convert
++ * DatagramConnectionID
++ * to network format long and copy it out...
++ */
++ linkparam.u.rfm.connid = cfpkt_extr_head_u32(pkt);
++ cp = (u8 *) linkparam.u.rfm.volume;
++ for (tmp = cfpkt_extr_head_u8(pkt);
++ cfpkt_more(pkt) && tmp != '\0';
++ tmp = cfpkt_extr_head_u8(pkt))
++ *cp++ = tmp;
++ *cp = '\0';
++
++ if (CFCTRL_ERR_BIT & cmdrsp)
++ break;
++ /* Link ID */
++ linkid = cfpkt_extr_head_u8(pkt);
++
++ break;
++ case CFCTRL_SRV_UTIL:
++ /* Construct a frame, convert
++ * DatagramConnectionID
++ * to network format long and copy it out...
++ */
++ /* Fifosize KB */
++ linkparam.u.utility.fifosize_kb = cfpkt_extr_head_u16(pkt);
++ /* Fifosize bufs */
++ linkparam.u.utility.fifosize_bufs = cfpkt_extr_head_u16(pkt);
++ /* name */
++ cp = (u8 *) linkparam.u.utility.name;
++ caif_assert(sizeof(linkparam.u.utility.name)
++ >= UTILITY_NAME_LENGTH);
++ for (i = 0; i < UTILITY_NAME_LENGTH && cfpkt_more(pkt); i++) {
++ tmp = cfpkt_extr_head_u8(pkt);
++ *cp++ = tmp;
++ }
++ /* Length */
++ len = cfpkt_extr_head_u8(pkt);
++ linkparam.u.utility.paramlen = len;
++ /* Param Data */
++ cp = linkparam.u.utility.params;
++ while (cfpkt_more(pkt) && len--) {
++ tmp = cfpkt_extr_head_u8(pkt);
++ *cp++ = tmp;
++ }
++ if (CFCTRL_ERR_BIT & cmdrsp)
++ break;
++ /* Link ID */
++ linkid = cfpkt_extr_head_u8(pkt);
++ /* Length */
++ len = cfpkt_extr_head_u8(pkt);
++ /* Param Data */
++ cfpkt_extr_head(pkt, NULL, len);
++ break;
++ default:
++ pr_warn("Request setup, invalid type (%d)\n", serv);
++ return -1;
++ }
++
++ rsp.cmd = CFCTRL_CMD_LINK_SETUP;
++ rsp.param = linkparam;
++ spin_lock_bh(&cfctrl->info_list_lock);
++ req = cfctrl_remove_req(cfctrl, &rsp);
++
++ if (CFCTRL_ERR_BIT == (CFCTRL_ERR_BIT & cmdrsp) ||
++ cfpkt_erroneous(pkt)) {
++ pr_err("Invalid O/E bit or parse error "
++ "on CAIF control channel\n");
++ cfctrl->res.reject_rsp(cfctrl->serv.layer.up, 0,
++ req ? req->client_layer : NULL);
++ } else {
++ cfctrl->res.linksetup_rsp(cfctrl->serv.layer.up, linkid,
++ serv, physlinkid,
++ req ? req->client_layer : NULL);
++ }
++
++ kfree(req);
++
++ spin_unlock_bh(&cfctrl->info_list_lock);
++
++ return 0;
++}
++
+ static int cfctrl_recv(struct cflayer *layer, struct cfpkt *pkt)
+ {
+ u8 cmdrsp;
+ u8 cmd;
+- int ret = -1;
+- u8 len;
+- u8 param[255];
++ int ret = 0;
+ u8 linkid = 0;
+ struct cfctrl *cfctrl = container_obj(layer);
+- struct cfctrl_request_info rsp, *req;
+-
+
+ cmdrsp = cfpkt_extr_head_u8(pkt);
+ cmd = cmdrsp & CFCTRL_CMD_MASK;
+@@ -374,150 +511,7 @@ static int cfctrl_recv(struct cflayer *layer, struct cfpkt *pkt)
+
+ switch (cmd) {
+ case CFCTRL_CMD_LINK_SETUP:
+- {
+- enum cfctrl_srv serv;
+- enum cfctrl_srv servtype;
+- u8 endpoint;
+- u8 physlinkid;
+- u8 prio;
+- u8 tmp;
+- u8 *cp;
+- int i;
+- struct cfctrl_link_param linkparam;
+- memset(&linkparam, 0, sizeof(linkparam));
+-
+- tmp = cfpkt_extr_head_u8(pkt);
+-
+- serv = tmp & CFCTRL_SRV_MASK;
+- linkparam.linktype = serv;
+-
+- servtype = tmp >> 4;
+- linkparam.chtype = servtype;
+-
+- tmp = cfpkt_extr_head_u8(pkt);
+- physlinkid = tmp & 0x07;
+- prio = tmp >> 3;
+-
+- linkparam.priority = prio;
+- linkparam.phyid = physlinkid;
+- endpoint = cfpkt_extr_head_u8(pkt);
+- linkparam.endpoint = endpoint & 0x03;
+-
+- switch (serv) {
+- case CFCTRL_SRV_VEI:
+- case CFCTRL_SRV_DBG:
+- if (CFCTRL_ERR_BIT & cmdrsp)
+- break;
+- /* Link ID */
+- linkid = cfpkt_extr_head_u8(pkt);
+- break;
+- case CFCTRL_SRV_VIDEO:
+- tmp = cfpkt_extr_head_u8(pkt);
+- linkparam.u.video.connid = tmp;
+- if (CFCTRL_ERR_BIT & cmdrsp)
+- break;
+- /* Link ID */
+- linkid = cfpkt_extr_head_u8(pkt);
+- break;
+-
+- case CFCTRL_SRV_DATAGRAM:
+- linkparam.u.datagram.connid =
+- cfpkt_extr_head_u32(pkt);
+- if (CFCTRL_ERR_BIT & cmdrsp)
+- break;
+- /* Link ID */
+- linkid = cfpkt_extr_head_u8(pkt);
+- break;
+- case CFCTRL_SRV_RFM:
+- /* Construct a frame, convert
+- * DatagramConnectionID
+- * to network format long and copy it out...
+- */
+- linkparam.u.rfm.connid =
+- cfpkt_extr_head_u32(pkt);
+- cp = (u8 *) linkparam.u.rfm.volume;
+- for (tmp = cfpkt_extr_head_u8(pkt);
+- cfpkt_more(pkt) && tmp != '\0';
+- tmp = cfpkt_extr_head_u8(pkt))
+- *cp++ = tmp;
+- *cp = '\0';
+-
+- if (CFCTRL_ERR_BIT & cmdrsp)
+- break;
+- /* Link ID */
+- linkid = cfpkt_extr_head_u8(pkt);
+-
+- break;
+- case CFCTRL_SRV_UTIL:
+- /* Construct a frame, convert
+- * DatagramConnectionID
+- * to network format long and copy it out...
+- */
+- /* Fifosize KB */
+- linkparam.u.utility.fifosize_kb =
+- cfpkt_extr_head_u16(pkt);
+- /* Fifosize bufs */
+- linkparam.u.utility.fifosize_bufs =
+- cfpkt_extr_head_u16(pkt);
+- /* name */
+- cp = (u8 *) linkparam.u.utility.name;
+- caif_assert(sizeof(linkparam.u.utility.name)
+- >= UTILITY_NAME_LENGTH);
+- for (i = 0;
+- i < UTILITY_NAME_LENGTH
+- && cfpkt_more(pkt); i++) {
+- tmp = cfpkt_extr_head_u8(pkt);
+- *cp++ = tmp;
+- }
+- /* Length */
+- len = cfpkt_extr_head_u8(pkt);
+- linkparam.u.utility.paramlen = len;
+- /* Param Data */
+- cp = linkparam.u.utility.params;
+- while (cfpkt_more(pkt) && len--) {
+- tmp = cfpkt_extr_head_u8(pkt);
+- *cp++ = tmp;
+- }
+- if (CFCTRL_ERR_BIT & cmdrsp)
+- break;
+- /* Link ID */
+- linkid = cfpkt_extr_head_u8(pkt);
+- /* Length */
+- len = cfpkt_extr_head_u8(pkt);
+- /* Param Data */
+-			cfpkt_extr_head(pkt, &param, len);
+- break;
+- default:
+- pr_warn("Request setup, invalid type (%d)\n",
+- serv);
+- goto error;
+- }
+-
+- rsp.cmd = cmd;
+- rsp.param = linkparam;
+- spin_lock_bh(&cfctrl->info_list_lock);
+- req = cfctrl_remove_req(cfctrl, &rsp);
+-
+- if (CFCTRL_ERR_BIT == (CFCTRL_ERR_BIT & cmdrsp) ||
+- cfpkt_erroneous(pkt)) {
+- pr_err("Invalid O/E bit or parse error "
+- "on CAIF control channel\n");
+- cfctrl->res.reject_rsp(cfctrl->serv.layer.up,
+- 0,
+- req ? req->client_layer
+- : NULL);
+- } else {
+- cfctrl->res.linksetup_rsp(cfctrl->serv.
+- layer.up, linkid,
+- serv, physlinkid,
+- req ? req->
+- client_layer : NULL);
+- }
+-
+- kfree(req);
+-
+- spin_unlock_bh(&cfctrl->info_list_lock);
+- }
++ ret = cfctrl_link_setup(cfctrl, pkt, cmdrsp);
+ break;
+ case CFCTRL_CMD_LINK_DESTROY:
+ linkid = cfpkt_extr_head_u8(pkt);
+@@ -544,9 +538,9 @@ static int cfctrl_recv(struct cflayer *layer, struct cfpkt *pkt)
+ break;
+ default:
+ pr_err("Unrecognized Control Frame\n");
++ ret = -1;
+ goto error;
+ }
+- ret = 0;
+ error:
+ cfpkt_destroy(pkt);
+ return ret;
+diff --git a/net/core/filter.c b/net/core/filter.c
+index 2018001d16bff2..076b317c359497 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -8318,6 +8318,9 @@ static bool flow_dissector_is_valid_access(int off, int size,
+ if (off < 0 || off >= sizeof(struct __sk_buff))
+ return false;
+
++ if (off % size != 0)
++ return false;
++
+ if (type == BPF_WRITE)
+ return false;
+
+diff --git a/net/core/netpoll.c b/net/core/netpoll.c
+index 66a6f624123939..db18154aa23830 100644
+--- a/net/core/netpoll.c
++++ b/net/core/netpoll.c
+@@ -812,6 +812,13 @@ int netpoll_setup(struct netpoll *np)
+ goto put;
+
+ rtnl_unlock();
++
++ /* Make sure all NAPI polls which started before dev->npinfo
++ * was visible have exited before we start calling NAPI poll.
++ * NAPI skips locking if dev->npinfo is NULL.
++ */
++ synchronize_rcu();
++
+ return 0;
+
+ put:
+diff --git a/net/hsr/hsr_slave.c b/net/hsr/hsr_slave.c
+index aecc05a28fa198..03b64ecb9dca26 100644
+--- a/net/hsr/hsr_slave.c
++++ b/net/hsr/hsr_slave.c
+@@ -60,8 +60,14 @@ static rx_handler_result_t hsr_handle_frame(struct sk_buff **pskb)
+ skb_push(skb, ETH_HLEN);
+ skb_reset_mac_header(skb);
+ if ((!hsr->prot_version && protocol == htons(ETH_P_PRP)) ||
+- protocol == htons(ETH_P_HSR))
++ protocol == htons(ETH_P_HSR)) {
++ if (!pskb_may_pull(skb, ETH_HLEN + HSR_HLEN)) {
++ kfree_skb(skb);
++ goto finish_consume;
++ }
++
+ skb_set_network_header(skb, ETH_HLEN + HSR_HLEN);
++ }
+ skb_reset_mac_len(skb);
+
+ hsr_forward_skb(skb, port);
+diff --git a/net/ipv4/netfilter/nf_reject_ipv4.c b/net/ipv4/netfilter/nf_reject_ipv4.c
+index efe14a6a5d9b8a..e89a4cbd9f5df9 100644
+--- a/net/ipv4/netfilter/nf_reject_ipv4.c
++++ b/net/ipv4/netfilter/nf_reject_ipv4.c
+@@ -125,7 +125,7 @@ void nf_send_reset(struct net *net, struct sock *sk, struct sk_buff *oldskb,
+ if (!oth)
+ return;
+
+- if (hook == NF_INET_PRE_ROUTING && nf_reject_fill_skb_dst(oldskb))
++ if (!skb_dst(oldskb) && nf_reject_fill_skb_dst(oldskb) < 0)
+ return;
+
+ if (skb_rtable(oldskb)->rt_flags & (RTCF_BROADCAST | RTCF_MULTICAST))
+@@ -193,7 +193,7 @@ void nf_send_unreach(struct sk_buff *skb_in, int code, int hook)
+ if (iph->frag_off & htons(IP_OFFSET))
+ return;
+
+- if (hook == NF_INET_PRE_ROUTING && nf_reject_fill_skb_dst(skb_in))
++ if (!skb_dst(skb_in) && nf_reject_fill_skb_dst(skb_in) < 0)
+ return;
+
+ if (skb_csum_unnecessary(skb_in) || !nf_reject_verify_csum(proto)) {
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index 815b6b0089c29c..7c4479adbf3255 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -2465,7 +2465,6 @@ static struct rtable *__mkroute_output(const struct fib_result *res,
+ do_cache = true;
+ if (type == RTN_BROADCAST) {
+ flags |= RTCF_BROADCAST | RTCF_LOCAL;
+- fi = NULL;
+ } else if (type == RTN_MULTICAST) {
+ flags |= RTCF_MULTICAST | RTCF_LOCAL;
+ if (!ip_check_mc_rcu(in_dev, fl4->daddr, fl4->saddr,
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index 82382ac1514f9b..64a87a39287a1b 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -4769,8 +4769,9 @@ static void tcp_ofo_queue(struct sock *sk)
+
+ if (before(TCP_SKB_CB(skb)->seq, dsack_high)) {
+ __u32 dsack = dsack_high;
++
+ if (before(TCP_SKB_CB(skb)->end_seq, dsack_high))
+- dsack_high = TCP_SKB_CB(skb)->end_seq;
++ dsack = TCP_SKB_CB(skb)->end_seq;
+ tcp_dsack_extend(sk, TCP_SKB_CB(skb)->seq, dsack);
+ }
+ p = rb_next(p);
+diff --git a/net/ipv4/udp_offload.c b/net/ipv4/udp_offload.c
+index 73beaa7e2d7038..5d4413fe41952d 100644
+--- a/net/ipv4/udp_offload.c
++++ b/net/ipv4/udp_offload.c
+@@ -58,7 +58,7 @@ static struct sk_buff *__skb_udp_tunnel_segment(struct sk_buff *skb,
+ remcsum = !!(skb_shinfo(skb)->gso_type & SKB_GSO_TUNNEL_REMCSUM);
+ skb->remcsum_offload = remcsum;
+
+- need_ipsec = skb_dst(skb) && dst_xfrm(skb_dst(skb));
++ need_ipsec = (skb_dst(skb) && dst_xfrm(skb_dst(skb))) || skb_sec_path(skb);
+ /* Try to offload checksum if possible */
+ offload_csum = !!(need_csum &&
+ !need_ipsec &&
+diff --git a/net/ipv6/ip6_offload.c b/net/ipv6/ip6_offload.c
+index 673f02ea62aae2..c145be2fd6e423 100644
+--- a/net/ipv6/ip6_offload.c
++++ b/net/ipv6/ip6_offload.c
+@@ -111,7 +111,9 @@ static struct sk_buff *ipv6_gso_segment(struct sk_buff *skb,
+
+ ops = rcu_dereference(inet6_offloads[proto]);
+ if (likely(ops && ops->callbacks.gso_segment)) {
+- skb_reset_transport_header(skb);
++ if (!skb_reset_transport_header_careful(skb))
++ goto out;
++
+ segs = ops->callbacks.gso_segment(skb, features);
+ if (!segs)
+ skb->network_header = skb_mac_header(skb) + nhoff - skb->head;
+diff --git a/net/ipv6/netfilter/nf_reject_ipv6.c b/net/ipv6/netfilter/nf_reject_ipv6.c
+index df572724f254d8..5384b73e318eaf 100644
+--- a/net/ipv6/netfilter/nf_reject_ipv6.c
++++ b/net/ipv6/netfilter/nf_reject_ipv6.c
+@@ -161,7 +161,7 @@ void nf_send_reset6(struct net *net, struct sock *sk, struct sk_buff *oldskb,
+ fl6.fl6_sport = otcph->dest;
+ fl6.fl6_dport = otcph->source;
+
+- if (hook == NF_INET_PRE_ROUTING) {
++ if (!skb_dst(oldskb)) {
+ nf_ip6_route(net, &dst, flowi6_to_flowi(&fl6), false);
+ if (!dst)
+ return;
+@@ -259,7 +259,7 @@ void nf_send_unreach6(struct net *net, struct sk_buff *skb_in,
+ if (hooknum == NF_INET_LOCAL_OUT && skb_in->dev == NULL)
+ skb_in->dev = net->loopback_dev;
+
+- if (hooknum == NF_INET_PRE_ROUTING && nf_reject6_fill_skb_dst(skb_in))
++ if (!skb_dst(skb_in) && nf_reject6_fill_skb_dst(skb_in) < 0)
+ return;
+
+ icmpv6_send(skb_in, ICMPV6_DEST_UNREACH, code, 0);
+diff --git a/net/ipv6/rpl_iptunnel.c b/net/ipv6/rpl_iptunnel.c
+index 5d47948c036425..b849d2a13f87c4 100644
+--- a/net/ipv6/rpl_iptunnel.c
++++ b/net/ipv6/rpl_iptunnel.c
+@@ -129,13 +129,13 @@ static int rpl_do_srh_inline(struct sk_buff *skb, const struct rpl_lwt *rlwt,
+ struct dst_entry *cache_dst)
+ {
+ struct ipv6_rpl_sr_hdr *isrh, *csrh;
+- const struct ipv6hdr *oldhdr;
++ struct ipv6hdr oldhdr;
+ struct ipv6hdr *hdr;
+ unsigned char *buf;
+ size_t hdrlen;
+ int err;
+
+- oldhdr = ipv6_hdr(skb);
++ memcpy(&oldhdr, ipv6_hdr(skb), sizeof(oldhdr));
+
+ buf = kcalloc(struct_size(srh, segments.addr, srh->segments_left), 2, GFP_ATOMIC);
+ if (!buf)
+@@ -147,7 +147,7 @@ static int rpl_do_srh_inline(struct sk_buff *skb, const struct rpl_lwt *rlwt,
+ memcpy(isrh, srh, sizeof(*isrh));
+ memcpy(isrh->rpl_segaddr, &srh->rpl_segaddr[1],
+ (srh->segments_left - 1) * 16);
+- isrh->rpl_segaddr[srh->segments_left - 1] = oldhdr->daddr;
++ isrh->rpl_segaddr[srh->segments_left - 1] = oldhdr.daddr;
+
+ ipv6_rpl_srh_compress(csrh, isrh, &srh->rpl_segaddr[0],
+ isrh->segments_left - 1);
+@@ -169,7 +169,7 @@ static int rpl_do_srh_inline(struct sk_buff *skb, const struct rpl_lwt *rlwt,
+ skb_mac_header_rebuild(skb);
+
+ hdr = ipv6_hdr(skb);
+- memmove(hdr, oldhdr, sizeof(*hdr));
++ memmove(hdr, &oldhdr, sizeof(*hdr));
+ isrh = (void *)hdr + sizeof(*hdr);
+ memcpy(isrh, csrh, hdrlen);
+
+diff --git a/net/ipv6/seg6_hmac.c b/net/ipv6/seg6_hmac.c
+index 2e2b94ae635522..4a3f7bb027ed27 100644
+--- a/net/ipv6/seg6_hmac.c
++++ b/net/ipv6/seg6_hmac.c
+@@ -294,6 +294,9 @@ int seg6_hmac_info_add(struct net *net, u32 key, struct seg6_hmac_info *hinfo)
+ struct seg6_pernet_data *sdata = seg6_pernet(net);
+ int err;
+
++ if (!__hmac_get_algo(hinfo->alg_id))
++ return -EINVAL;
++
+ err = rhashtable_lookup_insert_fast(&sdata->hmac_infos, &hinfo->node,
+ rht_params);
+
+diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c
+index 0d6d12fc3c07ef..30ad46cfcad86b 100644
+--- a/net/mac80211/tx.c
++++ b/net/mac80211/tx.c
+@@ -620,6 +620,12 @@ ieee80211_tx_h_select_key(struct ieee80211_tx_data *tx)
+ else
+ tx->key = NULL;
+
++ if (info->flags & IEEE80211_TX_CTL_HW_80211_ENCAP) {
++ if (tx->key && tx->key->flags & KEY_FLAG_UPLOADED_TO_HARDWARE)
++ info->control.hw_key = &tx->key->conf;
++ return TX_CONTINUE;
++ }
++
+ if (tx->key) {
+ bool skip_hw = false;
+
+@@ -3691,6 +3697,7 @@ struct sk_buff *ieee80211_tx_dequeue(struct ieee80211_hw *hw,
+ * The key can be removed while the packet was queued, so need to call
+ * this here to get the current key.
+ */
++ info->control.hw_key = NULL;
+ r = ieee80211_tx_h_select_key(&tx);
+ if (r != TX_CONTINUE) {
+ ieee80211_free_txskb(&local->hw, skb);
+diff --git a/net/ncsi/internal.h b/net/ncsi/internal.h
+index c61d2e2e93adc3..6ebf9e55c04640 100644
+--- a/net/ncsi/internal.h
++++ b/net/ncsi/internal.h
+@@ -107,7 +107,7 @@ struct ncsi_channel_version {
+ u8 update; /* NCSI version update */
+ char alpha1; /* NCSI version alpha1 */
+ char alpha2; /* NCSI version alpha2 */
+- u8 fw_name[12]; /* Firmware name string */
++ u8 fw_name[12 + 1]; /* Firmware name string */
+ u32 fw_version; /* Firmware version */
+ u16 pci_ids[4]; /* PCI identification */
+ u32 mf_id; /* Manufacture ID */
+diff --git a/net/ncsi/ncsi-rsp.c b/net/ncsi/ncsi-rsp.c
+index 88fb86cf7b2081..c1d42bbfdc7e8b 100644
+--- a/net/ncsi/ncsi-rsp.c
++++ b/net/ncsi/ncsi-rsp.c
+@@ -782,6 +782,7 @@ static int ncsi_rsp_handler_gvi(struct ncsi_request *nr)
+ ncv->alpha1 = rsp->alpha1;
+ ncv->alpha2 = rsp->alpha2;
+ memcpy(ncv->fw_name, rsp->fw_name, 12);
++ ncv->fw_name[12] = '\0';
+ ncv->fw_version = ntohl(rsp->fw_version);
+ for (i = 0; i < ARRAY_SIZE(ncv->pci_ids); i++)
+ ncv->pci_ids[i] = ntohs(rsp->pci_ids[i]);
+diff --git a/net/netfilter/nf_conntrack_netlink.c b/net/netfilter/nf_conntrack_netlink.c
+index b2b06033ef2c9c..f622fcad3f503e 100644
+--- a/net/netfilter/nf_conntrack_netlink.c
++++ b/net/netfilter/nf_conntrack_netlink.c
+@@ -839,8 +839,6 @@ ctnetlink_conntrack_event(unsigned int events, struct nf_ct_event *item)
+
+ static int ctnetlink_done(struct netlink_callback *cb)
+ {
+- if (cb->args[1])
+- nf_ct_put((struct nf_conn *)cb->args[1]);
+ kfree(cb->data);
+ return 0;
+ }
+@@ -1112,19 +1110,26 @@ static int ctnetlink_filter_match(struct nf_conn *ct, void *data)
+ return 0;
+ }
+
++static unsigned long ctnetlink_get_id(const struct nf_conn *ct)
++{
++ unsigned long id = nf_ct_get_id(ct);
++
++ return id ? id : 1;
++}
++
+ static int
+ ctnetlink_dump_table(struct sk_buff *skb, struct netlink_callback *cb)
+ {
+ unsigned int flags = cb->data ? NLM_F_DUMP_FILTERED : 0;
+ struct net *net = sock_net(skb->sk);
+- struct nf_conn *ct, *last;
++ unsigned long last_id = cb->args[1];
+ struct nf_conntrack_tuple_hash *h;
+ struct hlist_nulls_node *n;
+ struct nf_conn *nf_ct_evict[8];
++ struct nf_conn *ct;
+ int res, i;
+ spinlock_t *lockp;
+
+- last = (struct nf_conn *)cb->args[1];
+ i = 0;
+
+ local_bh_disable();
+@@ -1160,7 +1165,7 @@ ctnetlink_dump_table(struct sk_buff *skb, struct netlink_callback *cb)
+ continue;
+
+ if (cb->args[1]) {
+- if (ct != last)
++ if (ctnetlink_get_id(ct) != last_id)
+ continue;
+ cb->args[1] = 0;
+ }
+@@ -1173,8 +1178,7 @@ ctnetlink_dump_table(struct sk_buff *skb, struct netlink_callback *cb)
+ NFNL_MSG_TYPE(cb->nlh->nlmsg_type),
+ ct, true, flags);
+ if (res < 0) {
+- nf_conntrack_get(&ct->ct_general);
+- cb->args[1] = (unsigned long)ct;
++ cb->args[1] = ctnetlink_get_id(ct);
+ spin_unlock(lockp);
+ goto out;
+ }
+@@ -1187,12 +1191,10 @@ ctnetlink_dump_table(struct sk_buff *skb, struct netlink_callback *cb)
+ }
+ out:
+ local_bh_enable();
+- if (last) {
++ if (last_id) {
+ /* nf ct hash resize happened, now clear the leftover. */
+- if ((struct nf_conn *)cb->args[1] == last)
++ if (cb->args[1] == last_id)
+ cb->args[1] = 0;
+-
+- nf_ct_put(last);
+ }
+
+ while (i) {
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index ff419ecb268a80..8e799848cbcc1c 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -3347,7 +3347,7 @@ void nf_tables_rule_destroy(const struct nft_ctx *ctx, struct nft_rule *rule)
+ /* can only be used if rule is no longer visible to dumps */
+ static void nf_tables_rule_release(const struct nft_ctx *ctx, struct nft_rule *rule)
+ {
+- lockdep_commit_lock_is_held(ctx->net);
++ WARN_ON_ONCE(!lockdep_commit_lock_is_held(ctx->net));
+
+ nft_rule_expr_deactivate(ctx, rule, NFT_TRANS_RELEASE);
+ nf_tables_rule_destroy(ctx, rule);
+@@ -4860,7 +4860,7 @@ void nf_tables_deactivate_set(const struct nft_ctx *ctx, struct nft_set *set,
+ struct nft_set_binding *binding,
+ enum nft_trans_phase phase)
+ {
+- lockdep_commit_lock_is_held(ctx->net);
++ WARN_ON_ONCE(!lockdep_commit_lock_is_held(ctx->net));
+
+ switch (phase) {
+ case NFT_TRANS_PREPARE_ERROR:
+diff --git a/net/netfilter/nft_reject.c b/net/netfilter/nft_reject.c
+index 61fb7e8afbf0a8..927ff8459bd90e 100644
+--- a/net/netfilter/nft_reject.c
++++ b/net/netfilter/nft_reject.c
+@@ -40,6 +40,7 @@ int nft_reject_init(const struct nft_ctx *ctx,
+ const struct nlattr * const tb[])
+ {
+ struct nft_reject *priv = nft_expr_priv(expr);
++ int icmp_code;
+
+ if (tb[NFTA_REJECT_TYPE] == NULL)
+ return -EINVAL;
+@@ -47,9 +48,17 @@ int nft_reject_init(const struct nft_ctx *ctx,
+ priv->type = ntohl(nla_get_be32(tb[NFTA_REJECT_TYPE]));
+ switch (priv->type) {
+ case NFT_REJECT_ICMP_UNREACH:
++ case NFT_REJECT_ICMPX_UNREACH:
+ if (tb[NFTA_REJECT_ICMP_CODE] == NULL)
+ return -EINVAL;
+- priv->icmp_code = nla_get_u8(tb[NFTA_REJECT_ICMP_CODE]);
++
++ icmp_code = nla_get_u8(tb[NFTA_REJECT_ICMP_CODE]);
++ if (priv->type == NFT_REJECT_ICMPX_UNREACH &&
++ icmp_code > NFT_REJECT_ICMPX_MAX)
++ return -EINVAL;
++
++ priv->icmp_code = icmp_code;
++ break;
+ case NFT_REJECT_TCP_RST:
+ break;
+ default:
+@@ -69,6 +78,7 @@ int nft_reject_dump(struct sk_buff *skb, const struct nft_expr *expr)
+
+ switch (priv->type) {
+ case NFT_REJECT_ICMP_UNREACH:
++ case NFT_REJECT_ICMPX_UNREACH:
+ if (nla_put_u8(skb, NFTA_REJECT_ICMP_CODE, priv->icmp_code))
+ goto nla_put_failure;
+ break;
+diff --git a/net/netfilter/nft_reject_inet.c b/net/netfilter/nft_reject_inet.c
+index c00b94a1668245..554caf967baa20 100644
+--- a/net/netfilter/nft_reject_inet.c
++++ b/net/netfilter/nft_reject_inet.c
+@@ -60,60 +60,16 @@ static void nft_reject_inet_eval(const struct nft_expr *expr,
+ regs->verdict.code = NF_DROP;
+ }
+
+-static int nft_reject_inet_init(const struct nft_ctx *ctx,
+- const struct nft_expr *expr,
+- const struct nlattr * const tb[])
++static int nft_reject_inet_validate(const struct nft_ctx *ctx,
++ const struct nft_expr *expr,
++ const struct nft_data **data)
+ {
+- struct nft_reject *priv = nft_expr_priv(expr);
+- int icmp_code;
+-
+- if (tb[NFTA_REJECT_TYPE] == NULL)
+- return -EINVAL;
+-
+- priv->type = ntohl(nla_get_be32(tb[NFTA_REJECT_TYPE]));
+- switch (priv->type) {
+- case NFT_REJECT_ICMP_UNREACH:
+- case NFT_REJECT_ICMPX_UNREACH:
+- if (tb[NFTA_REJECT_ICMP_CODE] == NULL)
+- return -EINVAL;
+-
+- icmp_code = nla_get_u8(tb[NFTA_REJECT_ICMP_CODE]);
+- if (priv->type == NFT_REJECT_ICMPX_UNREACH &&
+- icmp_code > NFT_REJECT_ICMPX_MAX)
+- return -EINVAL;
+-
+- priv->icmp_code = icmp_code;
+- break;
+- case NFT_REJECT_TCP_RST:
+- break;
+- default:
+- return -EINVAL;
+- }
+- return 0;
+-}
+-
+-static int nft_reject_inet_dump(struct sk_buff *skb,
+- const struct nft_expr *expr)
+-{
+- const struct nft_reject *priv = nft_expr_priv(expr);
+-
+- if (nla_put_be32(skb, NFTA_REJECT_TYPE, htonl(priv->type)))
+- goto nla_put_failure;
+-
+- switch (priv->type) {
+- case NFT_REJECT_ICMP_UNREACH:
+- case NFT_REJECT_ICMPX_UNREACH:
+- if (nla_put_u8(skb, NFTA_REJECT_ICMP_CODE, priv->icmp_code))
+- goto nla_put_failure;
+- break;
+- default:
+- break;
+- }
+-
+- return 0;
+-
+-nla_put_failure:
+- return -1;
++ return nft_chain_validate_hooks(ctx->chain,
++ (1 << NF_INET_LOCAL_IN) |
++ (1 << NF_INET_FORWARD) |
++ (1 << NF_INET_LOCAL_OUT) |
++ (1 << NF_INET_PRE_ROUTING) |
++ (1 << NF_INET_INGRESS));
+ }
+
+ static struct nft_expr_type nft_reject_inet_type;
+@@ -121,9 +77,9 @@ static const struct nft_expr_ops nft_reject_inet_ops = {
+ .type = &nft_reject_inet_type,
+ .size = NFT_EXPR_SIZE(sizeof(struct nft_reject)),
+ .eval = nft_reject_inet_eval,
+- .init = nft_reject_inet_init,
+- .dump = nft_reject_inet_dump,
+- .validate = nft_reject_validate,
++ .init = nft_reject_init,
++ .dump = nft_reject_dump,
++ .validate = nft_reject_inet_validate,
+ };
+
+ static struct nft_expr_type nft_reject_inet_type __read_mostly = {
+diff --git a/net/netfilter/xt_nfacct.c b/net/netfilter/xt_nfacct.c
+index a97c2259bbc8d5..dd72f6fc57aaec 100644
+--- a/net/netfilter/xt_nfacct.c
++++ b/net/netfilter/xt_nfacct.c
+@@ -38,8 +38,8 @@ nfacct_mt_checkentry(const struct xt_mtchk_param *par)
+
+ nfacct = nfnl_acct_find_get(par->net, info->name);
+ if (nfacct == NULL) {
+- pr_info_ratelimited("accounting object `%s' does not exists\n",
+- info->name);
++ pr_info_ratelimited("accounting object `%.*s' does not exist\n",
++ NFACCT_NAME_MAX, info->name);
+ return -ENOENT;
+ }
+ info->nfacct = nfacct;
+diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
+index 77631cb74a192a..552682a5ff243f 100644
+--- a/net/netlink/af_netlink.c
++++ b/net/netlink/af_netlink.c
+@@ -1213,7 +1213,7 @@ int netlink_attachskb(struct sock *sk, struct sk_buff *skb,
+ nlk = nlk_sk(sk);
+ rmem = atomic_add_return(skb->truesize, &sk->sk_rmem_alloc);
+
+- if ((rmem == skb->truesize || rmem < READ_ONCE(sk->sk_rcvbuf)) &&
++ if ((rmem == skb->truesize || rmem <= READ_ONCE(sk->sk_rcvbuf)) &&
+ !test_bit(NETLINK_S_CONGESTED, &nlk->state)) {
+ netlink_skb_set_owner_r(skb, sk);
+ return 0;
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index 2f69cf5270db6d..4614fae54ed7f0 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -2739,7 +2739,7 @@ static int tpacket_snd(struct packet_sock *po, struct msghdr *msg)
+ int len_sum = 0;
+ int status = TP_STATUS_AVAILABLE;
+ int hlen, tlen, copylen = 0;
+- long timeo = 0;
++ long timeo;
+
+ mutex_lock(&po->pg_vec_lock);
+
+@@ -2793,22 +2793,28 @@ static int tpacket_snd(struct packet_sock *po, struct msghdr *msg)
+ if ((size_max > dev->mtu + reserve + VLAN_HLEN) && !po->has_vnet_hdr)
+ size_max = dev->mtu + reserve + VLAN_HLEN;
+
++ timeo = sock_sndtimeo(&po->sk, msg->msg_flags & MSG_DONTWAIT);
+ reinit_completion(&po->skb_completion);
+
+ do {
+ ph = packet_current_frame(po, &po->tx_ring,
+ TP_STATUS_SEND_REQUEST);
+ if (unlikely(ph == NULL)) {
+- if (need_wait && skb) {
+- timeo = sock_sndtimeo(&po->sk, msg->msg_flags & MSG_DONTWAIT);
++ /* Note: packet_read_pending() might be slow if we
++ * have to call it as it's per_cpu variable, but in
++ * fast-path we don't have to call it, only when ph
++ * is NULL, we need to check the pending_refcnt.
++ */
++ if (need_wait && packet_read_pending(&po->tx_ring)) {
+ timeo = wait_for_completion_interruptible_timeout(&po->skb_completion, timeo);
+ if (timeo <= 0) {
+ err = !timeo ? -ETIMEDOUT : -ERESTARTSYS;
+ goto out_put;
+ }
+- }
+- /* check for additional frames */
+- continue;
++ /* check for additional frames */
++ continue;
++ } else
++ break;
+ }
+
+ skb = NULL;
+@@ -2898,14 +2904,7 @@ static int tpacket_snd(struct packet_sock *po, struct msghdr *msg)
+ }
+ packet_increment_head(&po->tx_ring);
+ len_sum += tp_len;
+- } while (likely((ph != NULL) ||
+- /* Note: packet_read_pending() might be slow if we have
+- * to call it as it's per_cpu variable, but in fast-path
+- * we already short-circuit the loop with the first
+- * condition, and luckily don't have to go that path
+- * anyway.
+- */
+- (need_wait && packet_read_pending(&po->tx_ring))));
++ } while (1);
+
+ err = len_sum;
+ goto out_put;
+@@ -4515,10 +4514,10 @@ static int packet_set_ring(struct sock *sk, union tpacket_req_u *req_u,
+ spin_lock(&po->bind_lock);
+ was_running = po->running;
+ num = po->num;
+- if (was_running) {
+- WRITE_ONCE(po->num, 0);
++ WRITE_ONCE(po->num, 0);
++ if (was_running)
+ __unregister_prot_hook(sk, false);
+- }
++
+ spin_unlock(&po->bind_lock);
+
+ synchronize_net();
+@@ -4550,10 +4549,10 @@ static int packet_set_ring(struct sock *sk, union tpacket_req_u *req_u,
+ mutex_unlock(&po->pg_vec_lock);
+
+ spin_lock(&po->bind_lock);
+- if (was_running) {
+- WRITE_ONCE(po->num, num);
++ WRITE_ONCE(po->num, num);
++ if (was_running)
+ register_prot_hook(sk);
+- }
++
+ spin_unlock(&po->bind_lock);
+ if (pg_vec && (po->tp_version > TPACKET_V2)) {
+ /* Because we don't support block-based V3 on tx-ring */
+diff --git a/net/phonet/pep.c b/net/phonet/pep.c
+index 65d463ad877077..87066e3d488701 100644
+--- a/net/phonet/pep.c
++++ b/net/phonet/pep.c
+@@ -825,6 +825,7 @@ static struct sock *pep_sock_accept(struct sock *sk, int flags, int *errp,
+ }
+
+ /* Check for duplicate pipe handle */
++ pn_skb_get_dst_sockaddr(skb, &dst);
+ newsk = pep_find_pipe(&pn->hlist, &dst, pipe_handle);
+ if (unlikely(newsk)) {
+ __sock_put(newsk);
+@@ -849,7 +850,6 @@ static struct sock *pep_sock_accept(struct sock *sk, int flags, int *errp,
+ newsk->sk_destruct = pipe_destruct;
+
+ newpn = pep_sk(newsk);
+- pn_skb_get_dst_sockaddr(skb, &dst);
+ pn_skb_get_src_sockaddr(skb, &src);
+ newpn->pn_sk.sobject = pn_sockaddr_get_object(&dst);
+ newpn->pn_sk.dobject = pn_sockaddr_get_object(&src);
+diff --git a/net/sched/act_ctinfo.c b/net/sched/act_ctinfo.c
+index 5aa005835c066e..9e7dab17c97831 100644
+--- a/net/sched/act_ctinfo.c
++++ b/net/sched/act_ctinfo.c
+@@ -44,9 +44,9 @@ static void tcf_ctinfo_dscp_set(struct nf_conn *ct, struct tcf_ctinfo *ca,
+ ipv4_change_dsfield(ip_hdr(skb),
+ INET_ECN_MASK,
+ newdscp);
+- ca->stats_dscp_set++;
++ atomic64_inc(&ca->stats_dscp_set);
+ } else {
+- ca->stats_dscp_error++;
++ atomic64_inc(&ca->stats_dscp_error);
+ }
+ }
+ break;
+@@ -57,9 +57,9 @@ static void tcf_ctinfo_dscp_set(struct nf_conn *ct, struct tcf_ctinfo *ca,
+ ipv6_change_dsfield(ipv6_hdr(skb),
+ INET_ECN_MASK,
+ newdscp);
+- ca->stats_dscp_set++;
++ atomic64_inc(&ca->stats_dscp_set);
+ } else {
+- ca->stats_dscp_error++;
++ atomic64_inc(&ca->stats_dscp_error);
+ }
+ }
+ break;
+@@ -72,7 +72,7 @@ static void tcf_ctinfo_cpmark_set(struct nf_conn *ct, struct tcf_ctinfo *ca,
+ struct tcf_ctinfo_params *cp,
+ struct sk_buff *skb)
+ {
+- ca->stats_cpmark_set++;
++ atomic64_inc(&ca->stats_cpmark_set);
+ skb->mark = READ_ONCE(ct->mark) & cp->cpmarkmask;
+ }
+
+@@ -322,15 +322,18 @@ static int tcf_ctinfo_dump(struct sk_buff *skb, struct tc_action *a,
+ }
+
+ if (nla_put_u64_64bit(skb, TCA_CTINFO_STATS_DSCP_SET,
+- ci->stats_dscp_set, TCA_CTINFO_PAD))
++ atomic64_read(&ci->stats_dscp_set),
++ TCA_CTINFO_PAD))
+ goto nla_put_failure;
+
+ if (nla_put_u64_64bit(skb, TCA_CTINFO_STATS_DSCP_ERROR,
+- ci->stats_dscp_error, TCA_CTINFO_PAD))
++ atomic64_read(&ci->stats_dscp_error),
++ TCA_CTINFO_PAD))
+ goto nla_put_failure;
+
+ if (nla_put_u64_64bit(skb, TCA_CTINFO_STATS_CPMARK_SET,
+- ci->stats_cpmark_set, TCA_CTINFO_PAD))
++ atomic64_read(&ci->stats_cpmark_set),
++ TCA_CTINFO_PAD))
+ goto nla_put_failure;
+
+ spin_unlock_bh(&ci->tcf_lock);
+diff --git a/net/sched/sch_cake.c b/net/sched/sch_cake.c
+index d9535129f4e9ca..6dabe5eaa3be53 100644
+--- a/net/sched/sch_cake.c
++++ b/net/sched/sch_cake.c
+@@ -1761,7 +1761,7 @@ static s32 cake_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+ ktime_t now = ktime_get();
+ struct cake_tin_data *b;
+ struct cake_flow *flow;
+- u32 idx;
++ u32 idx, tin;
+
+ /* choose flow to insert into */
+ idx = cake_classify(sch, &b, skb, q->flow_mode, &ret);
+@@ -1771,6 +1771,7 @@ static s32 cake_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+ __qdisc_drop(skb, to_free);
+ return ret;
+ }
++ tin = (u32)(b - q->tins);
+ idx--;
+ flow = &b->flows[idx];
+
+@@ -1938,13 +1939,22 @@ static s32 cake_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+ q->buffer_max_used = q->buffer_used;
+
+ if (q->buffer_used > q->buffer_limit) {
++ bool same_flow = false;
+ u32 dropped = 0;
++ u32 drop_id;
+
+ while (q->buffer_used > q->buffer_limit) {
+ dropped++;
+- cake_drop(sch, to_free);
++ drop_id = cake_drop(sch, to_free);
++
++ if ((drop_id >> 16) == tin &&
++ (drop_id & 0xFFFF) == idx)
++ same_flow = true;
+ }
+ b->drop_overlimit += dropped;
++
++ if (same_flow)
++ return NET_XMIT_CN;
+ }
+ return NET_XMIT_SUCCESS;
+ }
+diff --git a/net/sched/sch_codel.c b/net/sched/sch_codel.c
+index d99c7386e24e6b..0d4228bfd1a02f 100644
+--- a/net/sched/sch_codel.c
++++ b/net/sched/sch_codel.c
+@@ -95,10 +95,7 @@ static struct sk_buff *codel_qdisc_dequeue(struct Qdisc *sch)
+ &q->stats, qdisc_pkt_len, codel_get_enqueue_time,
+ drop_func, dequeue_func);
+
+- /* We cant call qdisc_tree_reduce_backlog() if our qlen is 0,
+- * or HTB crashes. Defer it for next round.
+- */
+- if (q->stats.drop_count && sch->q.qlen) {
++ if (q->stats.drop_count) {
+ qdisc_tree_reduce_backlog(sch, q->stats.drop_count, q->stats.drop_len);
+ q->stats.drop_count = 0;
+ q->stats.drop_len = 0;
+diff --git a/net/sched/sch_drr.c b/net/sched/sch_drr.c
+index 7ddf73f5a41817..7200895d98c01e 100644
+--- a/net/sched/sch_drr.c
++++ b/net/sched/sch_drr.c
+@@ -111,6 +111,7 @@ static int drr_change_class(struct Qdisc *sch, u32 classid, u32 parentid,
+ if (cl == NULL)
+ return -ENOBUFS;
+
++ INIT_LIST_HEAD(&cl->alist);
+ cl->common.classid = classid;
+ cl->quantum = quantum;
+ cl->qdisc = qdisc_create_dflt(sch->dev_queue,
+@@ -234,7 +235,7 @@ static void drr_qlen_notify(struct Qdisc *csh, unsigned long arg)
+ {
+ struct drr_class *cl = (struct drr_class *)arg;
+
+- list_del(&cl->alist);
++ list_del_init(&cl->alist);
+ }
+
+ static int drr_dump_class(struct Qdisc *sch, unsigned long arg,
+@@ -401,7 +402,7 @@ static struct sk_buff *drr_dequeue(struct Qdisc *sch)
+ if (unlikely(skb == NULL))
+ goto out;
+ if (cl->qdisc->q.qlen == 0)
+- list_del(&cl->alist);
++ list_del_init(&cl->alist);
+
+ bstats_update(&cl->bstats, skb);
+ qdisc_bstats_update(sch, skb);
+@@ -442,7 +443,7 @@ static void drr_reset_qdisc(struct Qdisc *sch)
+ for (i = 0; i < q->clhash.hashsize; i++) {
+ hlist_for_each_entry(cl, &q->clhash.hash[i], common.hnode) {
+ if (cl->qdisc->q.qlen)
+- list_del(&cl->alist);
++ list_del_init(&cl->alist);
+ qdisc_reset(cl->qdisc);
+ }
+ }
+diff --git a/net/sched/sch_ets.c b/net/sched/sch_ets.c
+index 4f4da11a2c7798..e38879e5987216 100644
+--- a/net/sched/sch_ets.c
++++ b/net/sched/sch_ets.c
+@@ -664,24 +664,24 @@ static int ets_qdisc_change(struct Qdisc *sch, struct nlattr *opt,
+
+ sch_tree_lock(sch);
+
+- q->nbands = nbands;
++ for (i = nbands; i < oldbands; i++) {
++ if (i >= q->nstrict && q->classes[i].qdisc->q.qlen)
++ list_del_init(&q->classes[i].alist);
++ qdisc_purge_queue(q->classes[i].qdisc);
++ }
++
++ WRITE_ONCE(q->nbands, nbands);
+ for (i = nstrict; i < q->nstrict; i++) {
+- INIT_LIST_HEAD(&q->classes[i].alist);
+ if (q->classes[i].qdisc->q.qlen) {
+ list_add_tail(&q->classes[i].alist, &q->active);
+ q->classes[i].deficit = quanta[i];
+ }
+ }
+- for (i = q->nbands; i < oldbands; i++) {
+- if (i >= q->nstrict && q->classes[i].qdisc->q.qlen)
+- list_del_init(&q->classes[i].alist);
+- qdisc_purge_queue(q->classes[i].qdisc);
+- }
+- q->nstrict = nstrict;
++ WRITE_ONCE(q->nstrict, nstrict);
+ memcpy(q->prio2band, priomap, sizeof(priomap));
+
+ for (i = 0; i < q->nbands; i++)
+- q->classes[i].quantum = quanta[i];
++ WRITE_ONCE(q->classes[i].quantum, quanta[i]);
+
+ for (i = oldbands; i < q->nbands; i++) {
+ q->classes[i].qdisc = queues[i];
+@@ -694,7 +694,11 @@ static int ets_qdisc_change(struct Qdisc *sch, struct nlattr *opt,
+ ets_offload_change(sch);
+ for (i = q->nbands; i < oldbands; i++) {
+ qdisc_put(q->classes[i].qdisc);
+- memset(&q->classes[i], 0, sizeof(q->classes[i]));
++ q->classes[i].qdisc = NULL;
++ WRITE_ONCE(q->classes[i].quantum, 0);
++ q->classes[i].deficit = 0;
++ memset(&q->classes[i].bstats, 0, sizeof(q->classes[i].bstats));
++ memset(&q->classes[i].qstats, 0, sizeof(q->classes[i].qstats));
+ }
+ return 0;
+ }
+@@ -703,7 +707,7 @@ static int ets_qdisc_init(struct Qdisc *sch, struct nlattr *opt,
+ struct netlink_ext_ack *extack)
+ {
+ struct ets_sched *q = qdisc_priv(sch);
+- int err;
++ int err, i;
+
+ if (!opt)
+ return -EINVAL;
+@@ -713,6 +717,9 @@ static int ets_qdisc_init(struct Qdisc *sch, struct nlattr *opt,
+ return err;
+
+ INIT_LIST_HEAD(&q->active);
++ for (i = 0; i < TCQ_ETS_MAX_BANDS; i++)
++ INIT_LIST_HEAD(&q->classes[i].alist);
++
+ return ets_qdisc_change(sch, opt, extack);
+ }
+
+@@ -745,6 +752,7 @@ static int ets_qdisc_dump(struct Qdisc *sch, struct sk_buff *skb)
+ struct ets_sched *q = qdisc_priv(sch);
+ struct nlattr *opts;
+ struct nlattr *nest;
++ u8 nbands, nstrict;
+ int band;
+ int prio;
+ int err;
+@@ -757,21 +765,22 @@ static int ets_qdisc_dump(struct Qdisc *sch, struct sk_buff *skb)
+ if (!opts)
+ goto nla_err;
+
+- if (nla_put_u8(skb, TCA_ETS_NBANDS, q->nbands))
++ nbands = READ_ONCE(q->nbands);
++ if (nla_put_u8(skb, TCA_ETS_NBANDS, nbands))
+ goto nla_err;
+
+- if (q->nstrict &&
+- nla_put_u8(skb, TCA_ETS_NSTRICT, q->nstrict))
++ nstrict = READ_ONCE(q->nstrict);
++ if (nstrict && nla_put_u8(skb, TCA_ETS_NSTRICT, nstrict))
+ goto nla_err;
+
+- if (q->nbands > q->nstrict) {
++ if (nbands > nstrict) {
+ nest = nla_nest_start(skb, TCA_ETS_QUANTA);
+ if (!nest)
+ goto nla_err;
+
+- for (band = q->nstrict; band < q->nbands; band++) {
++ for (band = nstrict; band < nbands; band++) {
+ if (nla_put_u32(skb, TCA_ETS_QUANTA_BAND,
+- q->classes[band].quantum))
++ READ_ONCE(q->classes[band].quantum)))
+ goto nla_err;
+ }
+
+@@ -783,7 +792,8 @@ static int ets_qdisc_dump(struct Qdisc *sch, struct sk_buff *skb)
+ goto nla_err;
+
+ for (prio = 0; prio <= TC_PRIO_MAX; prio++) {
+- if (nla_put_u8(skb, TCA_ETS_PRIOMAP_BAND, q->prio2band[prio]))
++ if (nla_put_u8(skb, TCA_ETS_PRIOMAP_BAND,
++ READ_ONCE(q->prio2band[prio])))
+ goto nla_err;
+ }
+
+diff --git a/net/sched/sch_fq_codel.c b/net/sched/sch_fq_codel.c
+index 60dbc549e99103..3c1efe360defad 100644
+--- a/net/sched/sch_fq_codel.c
++++ b/net/sched/sch_fq_codel.c
+@@ -314,10 +314,8 @@ static struct sk_buff *fq_codel_dequeue(struct Qdisc *sch)
+ }
+ qdisc_bstats_update(sch, skb);
+ flow->deficit -= qdisc_pkt_len(skb);
+- /* We cant call qdisc_tree_reduce_backlog() if our qlen is 0,
+- * or HTB crashes. Defer it for next round.
+- */
+- if (q->cstats.drop_count && sch->q.qlen) {
++
++ if (q->cstats.drop_count) {
+ qdisc_tree_reduce_backlog(sch, q->cstats.drop_count,
+ q->cstats.drop_len);
+ q->cstats.drop_count = 0;
+diff --git a/net/sched/sch_hfsc.c b/net/sched/sch_hfsc.c
+index 443db2c08a0958..2454bafbbb11fd 100644
+--- a/net/sched/sch_hfsc.c
++++ b/net/sched/sch_hfsc.c
+@@ -209,7 +209,10 @@ eltree_insert(struct hfsc_class *cl)
+ static inline void
+ eltree_remove(struct hfsc_class *cl)
+ {
+- rb_erase(&cl->el_node, &cl->sched->eligible);
++ if (!RB_EMPTY_NODE(&cl->el_node)) {
++ rb_erase(&cl->el_node, &cl->sched->eligible);
++ RB_CLEAR_NODE(&cl->el_node);
++ }
+ }
+
+ static inline void
+@@ -1230,7 +1233,8 @@ hfsc_qlen_notify(struct Qdisc *sch, unsigned long arg)
+ /* vttree is now handled in update_vf() so that update_vf(cl, 0, 0)
+ * needs to be called explicitly to remove a class from vttree.
+ */
+- update_vf(cl, 0, 0);
++ if (cl->cl_nactive)
++ update_vf(cl, 0, 0);
+ if (cl->cl_flags & HFSC_RSC)
+ eltree_remove(cl);
+ }
+diff --git a/net/sched/sch_htb.c b/net/sched/sch_htb.c
+index ff84ed531199a8..b301efa41c1ced 100644
+--- a/net/sched/sch_htb.c
++++ b/net/sched/sch_htb.c
+@@ -331,7 +331,8 @@ static void htb_add_to_wait_tree(struct htb_sched *q,
+ */
+ static inline void htb_next_rb_node(struct rb_node **n)
+ {
+- *n = rb_next(*n);
++ if (*n)
++ *n = rb_next(*n);
+ }
+
+ /**
+@@ -557,7 +558,7 @@ htb_change_class_mode(struct htb_sched *q, struct htb_class *cl, s64 *diff)
+ */
+ static inline void htb_activate(struct htb_sched *q, struct htb_class *cl)
+ {
+- WARN_ON(cl->level || !cl->leaf.q || !cl->leaf.q->q.qlen);
++ WARN_ON(cl->level || !cl->leaf.q);
+
+ if (!cl->prio_activity) {
+ cl->prio_activity = 1 << cl->prio;
+@@ -573,8 +574,8 @@ static inline void htb_activate(struct htb_sched *q, struct htb_class *cl)
+ */
+ static inline void htb_deactivate(struct htb_sched *q, struct htb_class *cl)
+ {
+- WARN_ON(!cl->prio_activity);
+-
++ if (!cl->prio_activity)
++ return;
+ htb_deactivate_prios(q, cl);
+ cl->prio_activity = 0;
+ }
+@@ -775,7 +776,9 @@ static struct htb_class *htb_lookup_leaf(struct htb_prio *hprio, const int prio)
+ u32 *pid;
+ } stk[TC_HTB_MAXDEPTH], *sp = stk;
+
+- BUG_ON(!hprio->row.rb_node);
++ if (unlikely(!hprio->row.rb_node))
++ return NULL;
++
+ sp->root = hprio->row.rb_node;
+ sp->pptr = &hprio->ptr;
+ sp->pid = &hprio->last_ptr_id;
+@@ -1278,8 +1281,7 @@ static int htb_delete(struct Qdisc *sch, unsigned long arg)
+ if (cl->parent)
+ cl->parent->children--;
+
+- if (cl->prio_activity)
+- htb_deactivate(q, cl);
++ htb_deactivate(q, cl);
+
+ if (cl->cmode != HTB_CAN_SEND)
+ htb_safe_rb_erase(&cl->pq_node,
+@@ -1404,8 +1406,7 @@ static int htb_change_class(struct Qdisc *sch, u32 classid,
+ /* turn parent into inner node */
+ qdisc_purge_queue(parent->leaf.q);
+ parent_qdisc = parent->leaf.q;
+- if (parent->prio_activity)
+- htb_deactivate(q, parent);
++ htb_deactivate(q, parent);
+
+ /* remove from evt list because of level change */
+ if (parent->cmode != HTB_CAN_SEND) {
+diff --git a/net/sched/sch_netem.c b/net/sched/sch_netem.c
+index 22f5d9421f6a61..951156d7e5485a 100644
+--- a/net/sched/sch_netem.c
++++ b/net/sched/sch_netem.c
+@@ -962,6 +962,41 @@ static int parse_attr(struct nlattr *tb[], int maxtype, struct nlattr *nla,
+ return 0;
+ }
+
++static const struct Qdisc_class_ops netem_class_ops;
++
++static int check_netem_in_tree(struct Qdisc *sch, bool duplicates,
++ struct netlink_ext_ack *extack)
++{
++ struct Qdisc *root, *q;
++ unsigned int i;
++
++ root = qdisc_root_sleeping(sch);
++
++ if (sch != root && root->ops->cl_ops == &netem_class_ops) {
++ if (duplicates ||
++ ((struct netem_sched_data *)qdisc_priv(root))->duplicate)
++ goto err;
++ }
++
++ if (!qdisc_dev(root))
++ return 0;
++
++ hash_for_each(qdisc_dev(root)->qdisc_hash, i, q, hash) {
++ if (sch != q && q->ops->cl_ops == &netem_class_ops) {
++ if (duplicates ||
++ ((struct netem_sched_data *)qdisc_priv(q))->duplicate)
++ goto err;
++ }
++ }
++
++ return 0;
++
++err:
++ NL_SET_ERR_MSG(extack,
++ "netem: cannot mix duplicating netems with other netems in tree");
++ return -EINVAL;
++}
++
+ /* Parse netlink message to set options */
+ static int netem_change(struct Qdisc *sch, struct nlattr *opt,
+ struct netlink_ext_ack *extack)
+@@ -1023,6 +1058,11 @@ static int netem_change(struct Qdisc *sch, struct nlattr *opt,
+ q->gap = qopt->gap;
+ q->counter = 0;
+ q->loss = qopt->loss;
++
++ ret = check_netem_in_tree(sch, qopt->duplicate, extack);
++ if (ret)
++ goto unlock;
++
+ q->duplicate = qopt->duplicate;
+
+ /* for compatibility with earlier versions.
+diff --git a/net/sched/sch_qfq.c b/net/sched/sch_qfq.c
+index 1ee15db5fcc8c9..e85b56a9a39ef6 100644
+--- a/net/sched/sch_qfq.c
++++ b/net/sched/sch_qfq.c
+@@ -354,7 +354,7 @@ static void qfq_deactivate_class(struct qfq_sched *q, struct qfq_class *cl)
+ struct qfq_aggregate *agg = cl->agg;
+
+
+- list_del(&cl->alist); /* remove from RR queue of the aggregate */
++ list_del_init(&cl->alist); /* remove from RR queue of the aggregate */
+ if (list_empty(&agg->active)) /* agg is now inactive */
+ qfq_deactivate_agg(q, agg);
+ }
+@@ -414,7 +414,7 @@ static int qfq_change_class(struct Qdisc *sch, u32 classid, u32 parentid,
+ bool existing = false;
+ struct nlattr *tb[TCA_QFQ_MAX + 1];
+ struct qfq_aggregate *new_agg = NULL;
+- u32 weight, lmax, inv_w;
++ u32 weight, lmax, inv_w, old_weight, old_lmax;
+ int err;
+ int delta_w;
+
+@@ -448,12 +448,16 @@ static int qfq_change_class(struct Qdisc *sch, u32 classid, u32 parentid,
+ inv_w = ONE_FP / weight;
+ weight = ONE_FP / inv_w;
+
+- if (cl != NULL &&
+- lmax == cl->agg->lmax &&
+- weight == cl->agg->class_weight)
+- return 0; /* nothing to change */
++ if (cl != NULL) {
++ sch_tree_lock(sch);
++ old_weight = cl->agg->class_weight;
++ old_lmax = cl->agg->lmax;
++ sch_tree_unlock(sch);
++ if (lmax == old_lmax && weight == old_weight)
++ return 0; /* nothing to change */
++ }
+
+- delta_w = weight - (cl ? cl->agg->class_weight : 0);
++ delta_w = weight - (cl ? old_weight : 0);
+
+ if (q->wsum + delta_w > QFQ_MAX_WSUM) {
+ pr_notice("qfq: total weight out of range (%d + %u)\n",
+@@ -482,6 +486,7 @@ static int qfq_change_class(struct Qdisc *sch, u32 classid, u32 parentid,
+
+ cl->common.classid = classid;
+ cl->deficit = lmax;
++ INIT_LIST_HEAD(&cl->alist);
+
+ cl->qdisc = qdisc_create_dflt(sch->dev_queue, &pfifo_qdisc_ops,
+ classid, NULL);
+@@ -534,9 +539,6 @@ static int qfq_change_class(struct Qdisc *sch, u32 classid, u32 parentid,
+
+ static void qfq_destroy_class(struct Qdisc *sch, struct qfq_class *cl)
+ {
+- struct qfq_sched *q = qdisc_priv(sch);
+-
+- qfq_rm_from_agg(q, cl);
+ gen_kill_estimator(&cl->rate_est);
+ qdisc_put(cl->qdisc);
+ kfree(cl);
+@@ -554,6 +556,7 @@ static int qfq_delete_class(struct Qdisc *sch, unsigned long arg)
+
+ qdisc_purge_queue(cl->qdisc);
+ qdisc_class_hash_remove(&q->clhash, &cl->common);
++ qfq_rm_from_agg(q, cl);
+
+ sch_tree_unlock(sch);
+
+@@ -624,6 +627,7 @@ static int qfq_dump_class(struct Qdisc *sch, unsigned long arg,
+ {
+ struct qfq_class *cl = (struct qfq_class *)arg;
+ struct nlattr *nest;
++ u32 class_weight, lmax;
+
+ tcm->tcm_parent = TC_H_ROOT;
+ tcm->tcm_handle = cl->common.classid;
+@@ -632,8 +636,13 @@ static int qfq_dump_class(struct Qdisc *sch, unsigned long arg,
+ nest = nla_nest_start_noflag(skb, TCA_OPTIONS);
+ if (nest == NULL)
+ goto nla_put_failure;
+- if (nla_put_u32(skb, TCA_QFQ_WEIGHT, cl->agg->class_weight) ||
+- nla_put_u32(skb, TCA_QFQ_LMAX, cl->agg->lmax))
++
++ sch_tree_lock(sch);
++ class_weight = cl->agg->class_weight;
++ lmax = cl->agg->lmax;
++ sch_tree_unlock(sch);
++ if (nla_put_u32(skb, TCA_QFQ_WEIGHT, class_weight) ||
++ nla_put_u32(skb, TCA_QFQ_LMAX, lmax))
+ goto nla_put_failure;
+ return nla_nest_end(skb, nest);
+
+@@ -650,8 +659,10 @@ static int qfq_dump_class_stats(struct Qdisc *sch, unsigned long arg,
+
+ memset(&xstats, 0, sizeof(xstats));
+
++ sch_tree_lock(sch);
+ xstats.weight = cl->agg->class_weight;
+ xstats.lmax = cl->agg->lmax;
++ sch_tree_unlock(sch);
+
+ if (gnet_stats_copy_basic(qdisc_root_sleeping_running(sch),
+ d, NULL, &cl->bstats) < 0 ||
+@@ -995,7 +1006,7 @@ static struct sk_buff *agg_dequeue(struct qfq_aggregate *agg,
+ cl->deficit -= (int) len;
+
+ if (cl->qdisc->q.qlen == 0) /* no more packets, remove from list */
+- list_del(&cl->alist);
++ list_del_init(&cl->alist);
+ else if (cl->deficit < qdisc_pkt_len(cl->qdisc->ops->peek(cl->qdisc))) {
+ cl->deficit += agg->lmax;
+ list_move_tail(&cl->alist, &agg->active);
+@@ -1427,6 +1438,8 @@ static void qfq_qlen_notify(struct Qdisc *sch, unsigned long arg)
+ struct qfq_sched *q = qdisc_priv(sch);
+ struct qfq_class *cl = (struct qfq_class *)arg;
+
++ if (list_empty(&cl->alist))
++ return;
+ qfq_deactivate_class(q, cl);
+ }
+
+@@ -1497,6 +1510,7 @@ static void qfq_destroy_qdisc(struct Qdisc *sch)
+ for (i = 0; i < q->clhash.hashsize; i++) {
+ hlist_for_each_entry_safe(cl, next, &q->clhash.hash[i],
+ common.hnode) {
++ qfq_rm_from_agg(q, cl);
+ qfq_destroy_class(sch, cl);
+ }
+ }
+diff --git a/net/sctp/input.c b/net/sctp/input.c
+index 8fe1a74f0618d3..079b1bfc7d3106 100644
+--- a/net/sctp/input.c
++++ b/net/sctp/input.c
+@@ -114,7 +114,7 @@ int sctp_rcv(struct sk_buff *skb)
+ * it's better to just linearize it otherwise crc computing
+ * takes longer.
+ */
+- if ((!is_gso && skb_linearize(skb)) ||
++ if (((!is_gso || skb_cloned(skb)) && skb_linearize(skb)) ||
+ !pskb_may_pull(skb, sizeof(struct sctphdr)))
+ goto discard_it;
+
+diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
+index 7a448fd96f81c6..e519a0160668eb 100644
+--- a/net/tls/tls_sw.c
++++ b/net/tls/tls_sw.c
+@@ -825,6 +825,19 @@ static int bpf_exec_tx_verdict(struct sk_msg *msg, struct sock *sk,
+ delta = msg->sg.size;
+ psock->eval = sk_psock_msg_verdict(sk, psock, msg);
+ delta -= msg->sg.size;
++
++ if ((s32)delta > 0) {
++ /* It indicates that we executed bpf_msg_pop_data(),
++ * causing the plaintext data size to decrease.
++ * Therefore the encrypted data size also needs to
++ * correspondingly decrease. We only need to subtract
++ * delta to calculate the new ciphertext length since
++ * ktls does not support block encryption.
++ */
++ struct sk_msg *enc = &ctx->open_rec->msg_encrypted;
++
++ sk_msg_trim(sk, enc, enc->sg.size - delta);
++ }
+ }
+ if (msg->cork_bytes && msg->cork_bytes > msg->sg.size &&
+ !enospc && !full_record) {
+diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
+index 56bbc2970ffef0..f04b39c601f89e 100644
+--- a/net/vmw_vsock/af_vsock.c
++++ b/net/vmw_vsock/af_vsock.c
+@@ -674,7 +674,8 @@ static int __vsock_bind_stream(struct vsock_sock *vsk,
+ unsigned int i;
+
+ for (i = 0; i < MAX_PORT_RETRIES; i++) {
+- if (port <= LAST_RESERVED_PORT)
++ if (port == VMADDR_PORT_ANY ||
++ port <= LAST_RESERVED_PORT)
+ port = LAST_RESERVED_PORT + 1;
+
+ new_addr.svm_port = port++;
+diff --git a/net/wireless/mlme.c b/net/wireless/mlme.c
+index 6dcfc5a3487425..8fce621a3f018d 100644
+--- a/net/wireless/mlme.c
++++ b/net/wireless/mlme.c
+@@ -657,7 +657,8 @@ int cfg80211_mlme_mgmt_tx(struct cfg80211_registered_device *rdev,
+
+ mgmt = (const struct ieee80211_mgmt *)params->buf;
+
+- if (!ieee80211_is_mgmt(mgmt->frame_control))
++ if (!ieee80211_is_mgmt(mgmt->frame_control) ||
++ ieee80211_has_order(mgmt->frame_control))
+ return -EINVAL;
+
+ stype = le16_to_cpu(mgmt->frame_control) & IEEE80211_FCTL_STYPE;
+diff --git a/samples/mei/mei-amt-version.c b/samples/mei/mei-amt-version.c
+index ad3e56042f9611..892a221b44b440 100644
+--- a/samples/mei/mei-amt-version.c
++++ b/samples/mei/mei-amt-version.c
+@@ -69,11 +69,11 @@
+ #include <string.h>
+ #include <fcntl.h>
+ #include <sys/ioctl.h>
++#include <sys/time.h>
+ #include <unistd.h>
+ #include <errno.h>
+ #include <stdint.h>
+ #include <stdbool.h>
+-#include <bits/wordsize.h>
+ #include <linux/mei.h>
+
+ /*****************************************************************************
+diff --git a/scripts/kconfig/gconf.c b/scripts/kconfig/gconf.c
+index 40979991273155..e092bb686f45b9 100644
+--- a/scripts/kconfig/gconf.c
++++ b/scripts/kconfig/gconf.c
+@@ -787,7 +787,7 @@ static void renderer_edited(GtkCellRendererText * cell,
+ struct symbol *sym;
+
+ if (!gtk_tree_model_get_iter(model2, &iter, path))
+- return;
++ goto free;
+
+ gtk_tree_model_get(model2, &iter, COL_MENU, &menu, -1);
+ sym = menu->sym;
+@@ -799,6 +799,7 @@ static void renderer_edited(GtkCellRendererText * cell,
+
+ update_tree(&rootmenu, NULL);
+
++free:
+ gtk_tree_path_free(path);
+ }
+
+@@ -981,13 +982,14 @@ on_treeview2_key_press_event(GtkWidget * widget,
+ void
+ on_treeview2_cursor_changed(GtkTreeView * treeview, gpointer user_data)
+ {
++ GtkTreeModel *model = gtk_tree_view_get_model(treeview);
+ GtkTreeSelection *selection;
+ GtkTreeIter iter;
+ struct menu *menu;
+
+ selection = gtk_tree_view_get_selection(treeview);
+- if (gtk_tree_selection_get_selected(selection, &model2, &iter)) {
+- gtk_tree_model_get(model2, &iter, COL_MENU, &menu, -1);
++ if (gtk_tree_selection_get_selected(selection, &model, &iter)) {
++ gtk_tree_model_get(model, &iter, COL_MENU, &menu, -1);
+ text_insert_help(menu);
+ }
+ }
+diff --git a/scripts/kconfig/lxdialog/inputbox.c b/scripts/kconfig/lxdialog/inputbox.c
+index 1dcfb288ee6363..327b60cdb8dace 100644
+--- a/scripts/kconfig/lxdialog/inputbox.c
++++ b/scripts/kconfig/lxdialog/inputbox.c
+@@ -39,8 +39,10 @@ int dialog_inputbox(const char *title, const char *prompt, int height, int width
+
+ if (!init)
+ instr[0] = '\0';
+- else
+- strcpy(instr, init);
++ else {
++ strncpy(instr, init, sizeof(dialog_input_result) - 1);
++ instr[sizeof(dialog_input_result) - 1] = '\0';
++ }
+
+ do_resize:
+ if (getmaxy(stdscr) <= (height - INPUTBOX_HEIGTH_MIN))
+diff --git a/scripts/kconfig/lxdialog/menubox.c b/scripts/kconfig/lxdialog/menubox.c
+index 58c2f8afe59bbd..7e10e919fbdc1b 100644
+--- a/scripts/kconfig/lxdialog/menubox.c
++++ b/scripts/kconfig/lxdialog/menubox.c
+@@ -272,7 +272,7 @@ int dialog_menu(const char *title, const char *prompt,
+ if (key < 256 && isalpha(key))
+ key = tolower(key);
+
+- if (strchr("ynmh", key))
++ if (strchr("ynmh ", key))
+ i = max_choice;
+ else {
+ for (i = choice + 1; i < max_choice; i++) {
+diff --git a/scripts/kconfig/nconf.c b/scripts/kconfig/nconf.c
+index af814b39b87656..cdbd60a3ae16a8 100644
+--- a/scripts/kconfig/nconf.c
++++ b/scripts/kconfig/nconf.c
+@@ -581,6 +581,8 @@ static void item_add_str(const char *fmt, ...)
+ tmp_str,
+ sizeof(k_menu_items[index].str));
+
++ k_menu_items[index].str[sizeof(k_menu_items[index].str) - 1] = '\0';
++
+ free_item(curses_menu_items[index]);
+ curses_menu_items[index] = new_item(
+ k_menu_items[index].str,
+diff --git a/scripts/kconfig/nconf.gui.c b/scripts/kconfig/nconf.gui.c
+index 77f525a8617c27..8b3e9bc893a72d 100644
+--- a/scripts/kconfig/nconf.gui.c
++++ b/scripts/kconfig/nconf.gui.c
+@@ -398,6 +398,7 @@ int dialog_inputbox(WINDOW *main_window,
+ x = (columns-win_cols)/2;
+
+ strncpy(result, init, *result_len);
++ result[*result_len - 1] = '\0';
+
+ /* create the windows */
+ win = newwin(win_lines, win_cols, y, x);
+diff --git a/scripts/kconfig/qconf.cc b/scripts/kconfig/qconf.cc
+index b889fe604e422e..1c44c83f61a6c9 100644
+--- a/scripts/kconfig/qconf.cc
++++ b/scripts/kconfig/qconf.cc
+@@ -476,7 +476,7 @@ void ConfigList::updateListAllForAll()
+ while (it.hasNext()) {
+ ConfigList *list = it.next();
+
+- list->updateList();
++ list->updateListAll();
+ }
+ }
+
+diff --git a/security/apparmor/include/match.h b/security/apparmor/include/match.h
+index 8844895905881b..29306ec87fd1ab 100644
+--- a/security/apparmor/include/match.h
++++ b/security/apparmor/include/match.h
+@@ -141,7 +141,8 @@ unsigned int aa_dfa_matchn_until(struct aa_dfa *dfa, unsigned int start,
+
+ void aa_dfa_free_kref(struct kref *kref);
+
+-#define WB_HISTORY_SIZE 24
++/* This needs to be a power of 2 */
++#define WB_HISTORY_SIZE 32
+ struct match_workbuf {
+ unsigned int count;
+ unsigned int pos;
+diff --git a/security/apparmor/match.c b/security/apparmor/match.c
+index 3e9e1eaf990ed7..0e683ee323e3cf 100644
+--- a/security/apparmor/match.c
++++ b/security/apparmor/match.c
+@@ -672,6 +672,7 @@ unsigned int aa_dfa_matchn_until(struct aa_dfa *dfa, unsigned int start,
+
+ #define inc_wb_pos(wb) \
+ do { \
++ BUILD_BUG_ON_NOT_POWER_OF_2(WB_HISTORY_SIZE); \
+ wb->pos = (wb->pos + 1) & (WB_HISTORY_SIZE - 1); \
+ wb->len = (wb->len + 1) & (WB_HISTORY_SIZE - 1); \
+ } while (0)
+diff --git a/security/inode.c b/security/inode.c
+index 6c326939750d1a..e6e07787eec9d9 100644
+--- a/security/inode.c
++++ b/security/inode.c
+@@ -159,7 +159,6 @@ static struct dentry *securityfs_create_dentry(const char *name, umode_t mode,
+ inode->i_fop = fops;
+ }
+ d_instantiate(dentry, inode);
+- dget(dentry);
+ inode_unlock(dir);
+ return dentry;
+
+@@ -306,7 +305,6 @@ void securityfs_remove(struct dentry *dentry)
+ simple_rmdir(dir, dentry);
+ else
+ simple_unlink(dir, dentry);
+- dput(dentry);
+ }
+ inode_unlock(dir);
+ simple_release_fs(&mount, &mount_count);
+diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c
+index 98bd6fe850d30d..145e5157515f6a 100644
+--- a/sound/core/pcm_native.c
++++ b/sound/core/pcm_native.c
+@@ -24,6 +24,7 @@
+ #include <sound/minors.h>
+ #include <linux/uio.h>
+ #include <linux/delay.h>
++#include <linux/bitops.h>
+
+ #include "pcm_local.h"
+
+@@ -3094,13 +3095,23 @@ struct snd_pcm_sync_ptr32 {
+ static snd_pcm_uframes_t recalculate_boundary(struct snd_pcm_runtime *runtime)
+ {
+ snd_pcm_uframes_t boundary;
++ snd_pcm_uframes_t border;
++ int order;
+
+ if (! runtime->buffer_size)
+ return 0;
+- boundary = runtime->buffer_size;
+- while (boundary * 2 <= 0x7fffffffUL - runtime->buffer_size)
+- boundary *= 2;
+- return boundary;
++
++ border = 0x7fffffffUL - runtime->buffer_size;
++ if (runtime->buffer_size > border)
++ return runtime->buffer_size;
++
++ order = __fls(border) - __fls(runtime->buffer_size);
++ boundary = runtime->buffer_size << order;
++
++ if (boundary <= border)
++ return boundary;
++ else
++ return boundary / 2;
+ }
+
+ static int snd_pcm_ioctl_sync_ptr_compat(struct snd_pcm_substream *substream,
+diff --git a/sound/pci/hda/patch_ca0132.c b/sound/pci/hda/patch_ca0132.c
+index 6d67cca4cfa698..3acb4066b5eac4 100644
+--- a/sound/pci/hda/patch_ca0132.c
++++ b/sound/pci/hda/patch_ca0132.c
+@@ -4279,7 +4279,7 @@ static int add_tuning_control(struct hda_codec *codec,
+ }
+ knew.private_value =
+ HDA_COMPOSE_AMP_VAL(nid, 1, 0, type);
+- sprintf(namestr, "%s %s Volume", name, dirstr[dir]);
++ snprintf(namestr, sizeof(namestr), "%s %s Volume", name, dirstr[dir]);
+ return snd_hda_ctl_add(codec, nid, snd_ctl_new1(&knew, codec));
+ }
+
+@@ -4671,7 +4671,8 @@ static int ca0132_alt_select_out(struct hda_codec *codec)
+ if (err < 0)
+ goto exit;
+
+- if (ca0132_alt_select_out_quirk_set(codec) < 0)
++ err = ca0132_alt_select_out_quirk_set(codec);
++ if (err < 0)
+ goto exit;
+
+ switch (spec->cur_out_type) {
+@@ -4761,6 +4762,8 @@ static int ca0132_alt_select_out(struct hda_codec *codec)
+ spec->bass_redirection_val);
+ else
+ err = ca0132_alt_surround_set_bass_redirection(codec, 0);
++ if (err < 0)
++ goto exit;
+
+ /* Unmute DSP now that we're done with output selection. */
+ err = dspio_set_uint_param(codec, 0x96,
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index 0ffab5541de816..426b0db21dd09f 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -4354,6 +4354,8 @@ HDA_CODEC_ENTRY(0x10de002d, "Tegra186 HDMI/DP0", patch_tegra_hdmi),
+ HDA_CODEC_ENTRY(0x10de002e, "Tegra186 HDMI/DP1", patch_tegra_hdmi),
+ HDA_CODEC_ENTRY(0x10de002f, "Tegra194 HDMI/DP2", patch_tegra_hdmi),
+ HDA_CODEC_ENTRY(0x10de0030, "Tegra194 HDMI/DP3", patch_tegra_hdmi),
++HDA_CODEC_ENTRY(0x10de0033, "SoC 33 HDMI/DP", patch_tegra_hdmi),
++HDA_CODEC_ENTRY(0x10de0035, "SoC 35 HDMI/DP", patch_tegra_hdmi),
+ HDA_CODEC_ENTRY(0x10de0040, "GPU 40 HDMI/DP", patch_nvhdmi),
+ HDA_CODEC_ENTRY(0x10de0041, "GPU 41 HDMI/DP", patch_nvhdmi),
+ HDA_CODEC_ENTRY(0x10de0042, "GPU 42 HDMI/DP", patch_nvhdmi),
+@@ -4392,15 +4394,32 @@ HDA_CODEC_ENTRY(0x10de0097, "GPU 97 HDMI/DP", patch_nvhdmi),
+ HDA_CODEC_ENTRY(0x10de0098, "GPU 98 HDMI/DP", patch_nvhdmi),
+ HDA_CODEC_ENTRY(0x10de0099, "GPU 99 HDMI/DP", patch_nvhdmi),
+ HDA_CODEC_ENTRY(0x10de009a, "GPU 9a HDMI/DP", patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de009b, "GPU 9b HDMI/DP", patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de009c, "GPU 9c HDMI/DP", patch_nvhdmi),
+ HDA_CODEC_ENTRY(0x10de009d, "GPU 9d HDMI/DP", patch_nvhdmi),
+ HDA_CODEC_ENTRY(0x10de009e, "GPU 9e HDMI/DP", patch_nvhdmi),
+ HDA_CODEC_ENTRY(0x10de009f, "GPU 9f HDMI/DP", patch_nvhdmi),
+ HDA_CODEC_ENTRY(0x10de00a0, "GPU a0 HDMI/DP", patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de00a1, "GPU a1 HDMI/DP", patch_nvhdmi),
+ HDA_CODEC_ENTRY(0x10de00a3, "GPU a3 HDMI/DP", patch_nvhdmi),
+ HDA_CODEC_ENTRY(0x10de00a4, "GPU a4 HDMI/DP", patch_nvhdmi),
+ HDA_CODEC_ENTRY(0x10de00a5, "GPU a5 HDMI/DP", patch_nvhdmi),
+ HDA_CODEC_ENTRY(0x10de00a6, "GPU a6 HDMI/DP", patch_nvhdmi),
+ HDA_CODEC_ENTRY(0x10de00a7, "GPU a7 HDMI/DP", patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de00a8, "GPU a8 HDMI/DP", patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de00a9, "GPU a9 HDMI/DP", patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de00aa, "GPU aa HDMI/DP", patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de00ab, "GPU ab HDMI/DP", patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de00ad, "GPU ad HDMI/DP", patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de00ae, "GPU ae HDMI/DP", patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de00af, "GPU af HDMI/DP", patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de00b0, "GPU b0 HDMI/DP", patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de00b1, "GPU b1 HDMI/DP", patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de00c0, "GPU c0 HDMI/DP", patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de00c1, "GPU c1 HDMI/DP", patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de00c3, "GPU c3 HDMI/DP", patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de00c4, "GPU c4 HDMI/DP", patch_nvhdmi),
++HDA_CODEC_ENTRY(0x10de00c5, "GPU c5 HDMI/DP", patch_nvhdmi),
+ HDA_CODEC_ENTRY(0x10de8001, "MCP73 HDMI", patch_nvhdmi_2ch),
+ HDA_CODEC_ENTRY(0x10de8067, "MCP67/68 HDMI", patch_nvhdmi_2ch),
+ HDA_CODEC_ENTRY(0x67663d82, "Arise 82 HDMI/DP", patch_gf_hdmi),
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 2c289a42d17852..a78ca3f97967a5 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -9256,6 +9256,8 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x103c, 0x84e7, "HP Pavilion 15", ALC269_FIXUP_HP_MUTE_LED_MIC3),
+ SND_PCI_QUIRK(0x103c, 0x8519, "HP Spectre x360 15-df0xxx", ALC285_FIXUP_HP_SPECTRE_X360),
+ SND_PCI_QUIRK(0x103c, 0x8537, "HP ProBook 440 G6", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
++ SND_PCI_QUIRK(0x103c, 0x8548, "HP EliteBook x360 830 G6", ALC285_FIXUP_HP_GPIO_LED),
++ SND_PCI_QUIRK(0x103c, 0x854a, "HP EliteBook 830 G6", ALC285_FIXUP_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x85c6, "HP Pavilion x360 Convertible 14-dy1xxx", ALC295_FIXUP_HP_MUTE_LED_COEFBIT11),
+ SND_PCI_QUIRK(0x103c, 0x85de, "HP Envy x360 13-ar0xxx", ALC285_FIXUP_HP_ENVY_X360),
+ SND_PCI_QUIRK(0x103c, 0x860f, "HP ZBook 15 G6", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+@@ -9287,6 +9289,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x103c, 0x8788, "HP OMEN 15", ALC285_FIXUP_HP_MUTE_LED),
+ SND_PCI_QUIRK(0x103c, 0x87b7, "HP Laptop 14-fq0xxx", ALC236_FIXUP_HP_MUTE_LED_COEFBIT2),
+ SND_PCI_QUIRK(0x103c, 0x87c8, "HP", ALC287_FIXUP_HP_GPIO_LED),
++ SND_PCI_QUIRK(0x103c, 0x87cc, "HP Pavilion 15-eg0xxx", ALC287_FIXUP_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x87d3, "HP Laptop 15-gw0xxx", ALC236_FIXUP_HP_MUTE_LED_COEFBIT2),
+ SND_PCI_QUIRK(0x103c, 0x87df, "HP ProBook 430 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x87e5, "HP ProBook 440 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED),
+diff --git a/sound/pci/intel8x0.c b/sound/pci/intel8x0.c
+index 3707dc27324d20..324d9df7f1114a 100644
+--- a/sound/pci/intel8x0.c
++++ b/sound/pci/intel8x0.c
+@@ -2270,7 +2270,7 @@ static int snd_intel8x0_mixer(struct intel8x0 *chip, int ac97_clock,
+ tmp |= chip->ac97_sdin[0] << ICH_DI1L_SHIFT;
+ for (i = 1; i < 4; i++) {
+ if (pcm->r[0].codec[i]) {
+- tmp |= chip->ac97_sdin[pcm->r[0].codec[1]->num] << ICH_DI2L_SHIFT;
++ tmp |= chip->ac97_sdin[pcm->r[0].codec[i]->num] << ICH_DI2L_SHIFT;
+ break;
+ }
+ }
+diff --git a/sound/soc/codecs/hdac_hdmi.c b/sound/soc/codecs/hdac_hdmi.c
+index 6de3e47b92d852..76e06c88f27967 100644
+--- a/sound/soc/codecs/hdac_hdmi.c
++++ b/sound/soc/codecs/hdac_hdmi.c
+@@ -1230,7 +1230,8 @@ static int hdac_hdmi_parse_eld(struct hdac_device *hdev,
+ >> DRM_ELD_VER_SHIFT;
+
+ if (ver != ELD_VER_CEA_861D && ver != ELD_VER_PARTIAL) {
+- dev_err(&hdev->dev, "HDMI: Unknown ELD version %d\n", ver);
++ dev_err_ratelimited(&hdev->dev,
++ "HDMI: Unknown ELD version %d\n", ver);
+ return -EINVAL;
+ }
+
+@@ -1238,7 +1239,8 @@ static int hdac_hdmi_parse_eld(struct hdac_device *hdev,
+ DRM_ELD_MNL_MASK) >> DRM_ELD_MNL_SHIFT;
+
+ if (mnl > ELD_MAX_MNL) {
+- dev_err(&hdev->dev, "HDMI: MNL Invalid %d\n", mnl);
++ dev_err_ratelimited(&hdev->dev,
++ "HDMI: MNL Invalid %d\n", mnl);
+ return -EINVAL;
+ }
+
+@@ -1297,8 +1299,8 @@ static void hdac_hdmi_present_sense(struct hdac_hdmi_pin *pin,
+
+ if (!port->eld.monitor_present || !port->eld.eld_valid) {
+
+- dev_err(&hdev->dev, "%s: disconnect for pin:port %d:%d\n",
+- __func__, pin->nid, port->id);
++ dev_dbg(&hdev->dev, "%s: disconnect for pin:port %d:%d\n",
++ __func__, pin->nid, port->id);
+
+ /*
+ * PCMs are not registered during device probe, so don't
+diff --git a/sound/soc/codecs/rt5640.c b/sound/soc/codecs/rt5640.c
+index a5674c227b3a60..c12966025cfae1 100644
+--- a/sound/soc/codecs/rt5640.c
++++ b/sound/soc/codecs/rt5640.c
+@@ -2810,6 +2810,11 @@ static int rt5640_i2c_probe(struct i2c_client *i2c,
+ }
+
+ regmap_read(rt5640->regmap, RT5640_VENDOR_ID2, &val);
++ if (val != RT5640_DEVICE_ID) {
++ usleep_range(60000, 100000);
++ regmap_read(rt5640->regmap, RT5640_VENDOR_ID2, &val);
++ }
++
+ if (val != RT5640_DEVICE_ID) {
+ dev_err(&i2c->dev,
+ "Device with ID register %#x is not rt5640/39\n", val);
+diff --git a/sound/soc/fsl/fsl_sai.c b/sound/soc/fsl/fsl_sai.c
+index 998102711da096..0314d4257b2de3 100644
+--- a/sound/soc/fsl/fsl_sai.c
++++ b/sound/soc/fsl/fsl_sai.c
+@@ -572,13 +572,15 @@ static void fsl_sai_config_disable(struct fsl_sai *sai, int dir)
+ * anymore. Add software reset to fix this issue.
+ * This is a hardware bug, and will be fix in the
+ * next sai version.
++ *
++ * In consumer mode, this can happen even after a
++ * single open/close, especially if both tx and rx
++ * are running concurrently.
+ */
+- if (!sai->is_slave_mode) {
+- /* Software Reset */
+- regmap_write(sai->regmap, FSL_SAI_xCSR(tx, ofs), FSL_SAI_CSR_SR);
+- /* Clear SR bit to finish the reset */
+- regmap_write(sai->regmap, FSL_SAI_xCSR(tx, ofs), 0);
+- }
++ /* Software Reset */
++ regmap_update_bits(sai->regmap, FSL_SAI_xCSR(tx, ofs), FSL_SAI_CSR_SR, FSL_SAI_CSR_SR);
++ /* Clear SR bit to finish the reset */
++ regmap_update_bits(sai->regmap, FSL_SAI_xCSR(tx, ofs), FSL_SAI_CSR_SR, 0);
+ }
+
+ static int fsl_sai_trigger(struct snd_pcm_substream *substream, int cmd,
+@@ -708,11 +710,11 @@ static int fsl_sai_dai_probe(struct snd_soc_dai *cpu_dai)
+ unsigned int ofs = sai->soc_data->reg_offset;
+
+ /* Software Reset for both Tx and Rx */
+- regmap_write(sai->regmap, FSL_SAI_TCSR(ofs), FSL_SAI_CSR_SR);
+- regmap_write(sai->regmap, FSL_SAI_RCSR(ofs), FSL_SAI_CSR_SR);
++ regmap_update_bits(sai->regmap, FSL_SAI_TCSR(ofs), FSL_SAI_CSR_SR, FSL_SAI_CSR_SR);
++ regmap_update_bits(sai->regmap, FSL_SAI_RCSR(ofs), FSL_SAI_CSR_SR, FSL_SAI_CSR_SR);
+ /* Clear SR bit to finish the reset */
+- regmap_write(sai->regmap, FSL_SAI_TCSR(ofs), 0);
+- regmap_write(sai->regmap, FSL_SAI_RCSR(ofs), 0);
++ regmap_update_bits(sai->regmap, FSL_SAI_TCSR(ofs), FSL_SAI_CSR_SR, 0);
++ regmap_update_bits(sai->regmap, FSL_SAI_RCSR(ofs), FSL_SAI_CSR_SR, 0);
+
+ regmap_update_bits(sai->regmap, FSL_SAI_TCR1(ofs),
+ FSL_SAI_CR1_RFW_MASK(sai->soc_data->fifo_depth),
+@@ -1254,11 +1256,11 @@ static int fsl_sai_runtime_resume(struct device *dev)
+
+ regcache_cache_only(sai->regmap, false);
+ regcache_mark_dirty(sai->regmap);
+- regmap_write(sai->regmap, FSL_SAI_TCSR(ofs), FSL_SAI_CSR_SR);
+- regmap_write(sai->regmap, FSL_SAI_RCSR(ofs), FSL_SAI_CSR_SR);
++ regmap_update_bits(sai->regmap, FSL_SAI_TCSR(ofs), FSL_SAI_CSR_SR, FSL_SAI_CSR_SR);
++ regmap_update_bits(sai->regmap, FSL_SAI_RCSR(ofs), FSL_SAI_CSR_SR, FSL_SAI_CSR_SR);
+ usleep_range(1000, 2000);
+- regmap_write(sai->regmap, FSL_SAI_TCSR(ofs), 0);
+- regmap_write(sai->regmap, FSL_SAI_RCSR(ofs), 0);
++ regmap_update_bits(sai->regmap, FSL_SAI_TCSR(ofs), FSL_SAI_CSR_SR, 0);
++ regmap_update_bits(sai->regmap, FSL_SAI_RCSR(ofs), FSL_SAI_CSR_SR, 0);
+
+ ret = regcache_sync(sai->regmap);
+ if (ret)
+diff --git a/sound/soc/intel/boards/Kconfig b/sound/soc/intel/boards/Kconfig
+index dddb672a6d5533..0e1166c4f89b69 100644
+--- a/sound/soc/intel/boards/Kconfig
++++ b/sound/soc/intel/boards/Kconfig
+@@ -11,7 +11,7 @@ menuconfig SND_SOC_INTEL_MACH
+ kernel: saying N will just cause the configurator to skip all
+ the questions about Intel ASoC machine drivers.
+
+-if SND_SOC_INTEL_MACH
++if SND_SOC_INTEL_MACH && (SND_SOC_SOF_INTEL_COMMON || !SND_SOC_SOF_INTEL_COMMON)
+
+ config SND_SOC_INTEL_USER_FRIENDLY_LONG_NAMES
+ bool "Use more user friendly long card names"
+diff --git a/sound/soc/soc-core.c b/sound/soc/soc-core.c
+index e9da95ebccc837..1120d669fe2e33 100644
+--- a/sound/soc/soc-core.c
++++ b/sound/soc/soc-core.c
+@@ -958,6 +958,9 @@ static int soc_dai_link_sanity_check(struct snd_soc_card *card,
+ void snd_soc_remove_pcm_runtime(struct snd_soc_card *card,
+ struct snd_soc_pcm_runtime *rtd)
+ {
++ if (!rtd)
++ return;
++
+ lockdep_assert_held(&client_mutex);
+
+ /* release machine specific resources */
+diff --git a/sound/soc/soc-dai.c b/sound/soc/soc-dai.c
+index f9aba413e4952b..583b18d0f44654 100644
+--- a/sound/soc/soc-dai.c
++++ b/sound/soc/soc-dai.c
+@@ -206,13 +206,15 @@ int snd_soc_dai_set_tdm_slot(struct snd_soc_dai *dai,
+ {
+ int ret = -ENOTSUPP;
+
+- if (dai->driver->ops &&
+- dai->driver->ops->xlate_tdm_slot_mask)
+- ret = dai->driver->ops->xlate_tdm_slot_mask(slots, &tx_mask, &rx_mask);
+- else
+- ret = snd_soc_xlate_tdm_slot_mask(slots, &tx_mask, &rx_mask);
+- if (ret)
+- goto err;
++ if (slots) {
++ if (dai->driver->ops &&
++ dai->driver->ops->xlate_tdm_slot_mask)
++ ret = dai->driver->ops->xlate_tdm_slot_mask(slots, &tx_mask, &rx_mask);
++ else
++ ret = snd_soc_xlate_tdm_slot_mask(slots, &tx_mask, &rx_mask);
++ if (ret)
++ goto err;
++ }
+
+ dai->tx_mask = tx_mask;
+ dai->rx_mask = rx_mask;
+@@ -486,6 +488,9 @@ int snd_soc_pcm_dai_probe(struct snd_soc_pcm_runtime *rtd, int order)
+ if (dai->driver->probe_order != order)
+ continue;
+
++ if (dai->probed)
++ continue;
++
+ if (dai->driver->probe) {
+ int ret = dai->driver->probe(dai);
+
+diff --git a/sound/soc/soc-dapm.c b/sound/soc/soc-dapm.c
+index acb46e1f9c0ae6..175c8c264b62bc 100644
+--- a/sound/soc/soc-dapm.c
++++ b/sound/soc/soc-dapm.c
+@@ -743,6 +743,10 @@ static int snd_soc_dapm_set_bias_level(struct snd_soc_dapm_context *dapm,
+ out:
+ trace_snd_soc_bias_level_done(card, level);
+
++ /* success */
++ if (ret == 0)
++ snd_soc_dapm_init_bias_level(dapm, level);
++
+ return ret;
+ }
+
+diff --git a/sound/soc/soc-ops.c b/sound/soc/soc-ops.c
+index 55f7c7999330a3..91afd73cdd1399 100644
+--- a/sound/soc/soc-ops.c
++++ b/sound/soc/soc-ops.c
+@@ -618,28 +618,32 @@ EXPORT_SYMBOL_GPL(snd_soc_get_volsw_range);
+ static int snd_soc_clip_to_platform_max(struct snd_kcontrol *kctl)
+ {
+ struct soc_mixer_control *mc = (struct soc_mixer_control *)kctl->private_value;
+- struct snd_ctl_elem_value uctl;
++ struct snd_ctl_elem_value *uctl;
+ int ret;
+
+ if (!mc->platform_max)
+ return 0;
+
+- ret = kctl->get(kctl, &uctl);
++ uctl = kzalloc(sizeof(*uctl), GFP_KERNEL);
++ if (!uctl)
++ return -ENOMEM;
++
++ ret = kctl->get(kctl, uctl);
+ if (ret < 0)
+- return ret;
++ goto out;
+
+- if (uctl.value.integer.value[0] > mc->platform_max)
+- uctl.value.integer.value[0] = mc->platform_max;
++ if (uctl->value.integer.value[0] > mc->platform_max)
++ uctl->value.integer.value[0] = mc->platform_max;
+
+ if (snd_soc_volsw_is_stereo(mc) &&
+- uctl.value.integer.value[1] > mc->platform_max)
+- uctl.value.integer.value[1] = mc->platform_max;
++ uctl->value.integer.value[1] > mc->platform_max)
++ uctl->value.integer.value[1] = mc->platform_max;
+
+- ret = kctl->put(kctl, &uctl);
+- if (ret < 0)
+- return ret;
++ ret = kctl->put(kctl, uctl);
+
+- return 0;
++out:
++ kfree(uctl);
++ return ret;
+ }
+
+ /**
+diff --git a/sound/usb/mixer_quirks.c b/sound/usb/mixer_quirks.c
+index eee5a3ce9471f7..c0948922562b6e 100644
+--- a/sound/usb/mixer_quirks.c
++++ b/sound/usb/mixer_quirks.c
+@@ -1991,15 +1991,15 @@ static int dell_dock_mixer_init(struct usb_mixer_interface *mixer)
+ #define SND_RME_CLK_FREQMUL_SHIFT 18
+ #define SND_RME_CLK_FREQMUL_MASK 0x7
+ #define SND_RME_CLK_SYSTEM(x) \
+- ((x >> SND_RME_CLK_SYSTEM_SHIFT) & SND_RME_CLK_SYSTEM_MASK)
++ (((x) >> SND_RME_CLK_SYSTEM_SHIFT) & SND_RME_CLK_SYSTEM_MASK)
+ #define SND_RME_CLK_AES(x) \
+- ((x >> SND_RME_CLK_AES_SHIFT) & SND_RME_CLK_AES_SPDIF_MASK)
++ (((x) >> SND_RME_CLK_AES_SHIFT) & SND_RME_CLK_AES_SPDIF_MASK)
+ #define SND_RME_CLK_SPDIF(x) \
+- ((x >> SND_RME_CLK_SPDIF_SHIFT) & SND_RME_CLK_AES_SPDIF_MASK)
++ (((x) >> SND_RME_CLK_SPDIF_SHIFT) & SND_RME_CLK_AES_SPDIF_MASK)
+ #define SND_RME_CLK_SYNC(x) \
+- ((x >> SND_RME_CLK_SYNC_SHIFT) & SND_RME_CLK_SYNC_MASK)
++ (((x) >> SND_RME_CLK_SYNC_SHIFT) & SND_RME_CLK_SYNC_MASK)
+ #define SND_RME_CLK_FREQMUL(x) \
+- ((x >> SND_RME_CLK_FREQMUL_SHIFT) & SND_RME_CLK_FREQMUL_MASK)
++ (((x) >> SND_RME_CLK_FREQMUL_SHIFT) & SND_RME_CLK_FREQMUL_MASK)
+ #define SND_RME_CLK_AES_LOCK 0x1
+ #define SND_RME_CLK_AES_SYNC 0x4
+ #define SND_RME_CLK_SPDIF_LOCK 0x2
+@@ -2008,9 +2008,9 @@ static int dell_dock_mixer_init(struct usb_mixer_interface *mixer)
+ #define SND_RME_SPDIF_FORMAT_SHIFT 5
+ #define SND_RME_BINARY_MASK 0x1
+ #define SND_RME_SPDIF_IF(x) \
+- ((x >> SND_RME_SPDIF_IF_SHIFT) & SND_RME_BINARY_MASK)
++ (((x) >> SND_RME_SPDIF_IF_SHIFT) & SND_RME_BINARY_MASK)
+ #define SND_RME_SPDIF_FORMAT(x) \
+- ((x >> SND_RME_SPDIF_FORMAT_SHIFT) & SND_RME_BINARY_MASK)
++ (((x) >> SND_RME_SPDIF_FORMAT_SHIFT) & SND_RME_BINARY_MASK)
+
+ static const u32 snd_rme_rate_table[] = {
+ 32000, 44100, 48000, 50000,
+diff --git a/sound/usb/mixer_scarlett_gen2.c b/sound/usb/mixer_scarlett_gen2.c
+index 1b7c7b754c3855..1c31a9d21d42ca 100644
+--- a/sound/usb/mixer_scarlett_gen2.c
++++ b/sound/usb/mixer_scarlett_gen2.c
+@@ -95,6 +95,7 @@
+ #include <linux/slab.h>
+ #include <linux/usb.h>
+ #include <linux/moduleparam.h>
++#include <linux/delay.h>
+
+ #include <sound/control.h>
+ #include <sound/tlv.h>
+@@ -591,6 +592,8 @@ static int scarlett2_usb(
+ u16 req_buf_size = sizeof(struct scarlett2_usb_packet) + req_size;
+ u16 resp_buf_size = sizeof(struct scarlett2_usb_packet) + resp_size;
+ struct scarlett2_usb_packet *req = NULL, *resp = NULL;
++ int retries = 0;
++ const int max_retries = 5;
+ int err = 0;
+
+ req = kmalloc(req_buf_size, GFP_KERNEL);
+@@ -614,6 +617,7 @@ static int scarlett2_usb(
+ if (req_size)
+ memcpy(req->data, req_data, req_size);
+
++retry:
+ err = snd_usb_ctl_msg(mixer->chip->dev,
+ usb_sndctrlpipe(mixer->chip->dev, 0),
+ SCARLETT2_USB_VENDOR_SPECIFIC_CMD_REQ,
+@@ -624,6 +628,10 @@ static int scarlett2_usb(
+ req_buf_size);
+
+ if (err != req_buf_size) {
++ if (err == -EPROTO && ++retries <= max_retries) {
++ msleep(5 * (1 << (retries - 1)));
++ goto retry;
++ }
+ usb_audio_err(
+ mixer->chip,
+ "Scarlett Gen 2 USB request result cmd %x was %d\n",
+diff --git a/sound/usb/stream.c b/sound/usb/stream.c
+index d6d3ce9e963739..1bdb6a2f559667 100644
+--- a/sound/usb/stream.c
++++ b/sound/usb/stream.c
+@@ -342,20 +342,28 @@ snd_pcm_chmap_elem *convert_chmap_v3(struct uac3_cluster_header_descriptor
+
+ len = le16_to_cpu(cluster->wLength);
+ c = 0;
+- p += sizeof(struct uac3_cluster_header_descriptor);
++ p += sizeof(*cluster);
++ len -= sizeof(*cluster);
+
+- while (((p - (void *)cluster) < len) && (c < channels)) {
++ while (len > 0 && (c < channels)) {
+ struct uac3_cluster_segment_descriptor *cs_desc = p;
+ u16 cs_len;
+ u8 cs_type;
+
++ if (len < sizeof(*cs_desc))
++ break;
+ cs_len = le16_to_cpu(cs_desc->wLength);
++ if (len < cs_len)
++ break;
+ cs_type = cs_desc->bSegmentType;
+
+ if (cs_type == UAC3_CHANNEL_INFORMATION) {
+ struct uac3_cluster_information_segment_descriptor *is = p;
+ unsigned char map;
+
++ if (cs_len < sizeof(*is))
++ break;
++
+ /*
+ * TODO: this conversion is not complete, update it
+ * after adding UAC3 values to asound.h
+@@ -457,6 +465,7 @@ snd_pcm_chmap_elem *convert_chmap_v3(struct uac3_cluster_header_descriptor
+ chmap->map[c++] = map;
+ }
+ p += cs_len;
++ len -= cs_len;
+ }
+
+ if (channels < c)
+@@ -877,7 +886,7 @@ snd_usb_get_audioformat_uac3(struct snd_usb_audio *chip,
+ u64 badd_formats = 0;
+ unsigned int num_channels;
+ struct audioformat *fp;
+- u16 cluster_id, wLength;
++ u16 cluster_id, wLength, cluster_wLength;
+ int clock = 0;
+ int err;
+
+@@ -1006,6 +1015,16 @@ snd_usb_get_audioformat_uac3(struct snd_usb_audio *chip,
+ return ERR_PTR(-EIO);
+ }
+
++ cluster_wLength = le16_to_cpu(cluster->wLength);
++ if (cluster_wLength < sizeof(*cluster) ||
++ cluster_wLength > wLength) {
++ dev_err(&dev->dev,
++ "%u:%d : invalid Cluster Descriptor size\n",
++ iface_no, altno);
++ kfree(cluster);
++ return ERR_PTR(-EIO);
++ }
++
+ num_channels = cluster->bNrChannels;
+ chmap = convert_chmap_v3(cluster);
+ kfree(cluster);
+diff --git a/sound/usb/validate.c b/sound/usb/validate.c
+index 6fe206f6e91105..a0d55b77c9941d 100644
+--- a/sound/usb/validate.c
++++ b/sound/usb/validate.c
+@@ -221,6 +221,17 @@ static bool validate_uac3_feature_unit(const void *p,
+ return d->bLength >= sizeof(*d) + 4 + 2;
+ }
+
++static bool validate_uac3_power_domain_unit(const void *p,
++ const struct usb_desc_validator *v)
++{
++ const struct uac3_power_domain_descriptor *d = p;
++
++ if (d->bLength < sizeof(*d))
++ return false;
++ /* baEntities[] + wPDomainDescrStr */
++ return d->bLength >= sizeof(*d) + d->bNrEntities + 2;
++}
++
+ static bool validate_midi_out_jack(const void *p,
+ const struct usb_desc_validator *v)
+ {
+@@ -274,7 +285,7 @@ static const struct usb_desc_validator audio_validators[] = {
+ /* UAC_VERSION_3, UAC3_EXTENDED_TERMINAL: not implemented yet */
+ FUNC(UAC_VERSION_3, UAC3_MIXER_UNIT, validate_mixer_unit),
+ FUNC(UAC_VERSION_3, UAC3_SELECTOR_UNIT, validate_selector_unit),
+- FUNC(UAC_VERSION_3, UAC_FEATURE_UNIT, validate_uac3_feature_unit),
++ FUNC(UAC_VERSION_3, UAC3_FEATURE_UNIT, validate_uac3_feature_unit),
+ /* UAC_VERSION_3, UAC3_EFFECT_UNIT: not implemented yet */
+ FUNC(UAC_VERSION_3, UAC3_PROCESSING_UNIT, validate_processing_unit),
+ FUNC(UAC_VERSION_3, UAC3_EXTENSION_UNIT, validate_processing_unit),
+@@ -285,6 +296,7 @@ static const struct usb_desc_validator audio_validators[] = {
+ struct uac3_clock_multiplier_descriptor),
+ /* UAC_VERSION_3, UAC3_SAMPLE_RATE_CONVERTER: not implemented yet */
+ /* UAC_VERSION_3, UAC3_CONNECTORS: not implemented yet */
++ FUNC(UAC_VERSION_3, UAC3_POWER_DOMAIN, validate_uac3_power_domain_unit),
+ { } /* terminator */
+ };
+
+diff --git a/tools/bpf/bpftool/net.c b/tools/bpf/bpftool/net.c
+index ff3aa0cf399787..7f0421713e1cda 100644
+--- a/tools/bpf/bpftool/net.c
++++ b/tools/bpf/bpftool/net.c
+@@ -353,17 +353,18 @@ static int dump_link_nlmsg(void *cookie, void *msg, struct nlattr **tb)
+ {
+ struct bpf_netdev_t *netinfo = cookie;
+ struct ifinfomsg *ifinfo = msg;
++ struct ip_devname_ifindex *tmp;
+
+ if (netinfo->filter_idx > 0 && netinfo->filter_idx != ifinfo->ifi_index)
+ return 0;
+
+ if (netinfo->used_len == netinfo->array_len) {
+- netinfo->devices = realloc(netinfo->devices,
+- (netinfo->array_len + 16) *
+- sizeof(struct ip_devname_ifindex));
+- if (!netinfo->devices)
++ tmp = realloc(netinfo->devices,
++ (netinfo->array_len + 16) * sizeof(struct ip_devname_ifindex));
++ if (!tmp)
+ return -ENOMEM;
+
++ netinfo->devices = tmp;
+ netinfo->array_len += 16;
+ }
+ netinfo->devices[netinfo->used_len].ifindex = ifinfo->ifi_index;
+@@ -382,6 +383,7 @@ static int dump_class_qdisc_nlmsg(void *cookie, void *msg, struct nlattr **tb)
+ {
+ struct bpf_tcinfo_t *tcinfo = cookie;
+ struct tcmsg *info = msg;
++ struct tc_kind_handle *tmp;
+
+ if (tcinfo->is_qdisc) {
+ /* skip clsact qdisc */
+@@ -393,11 +395,12 @@ static int dump_class_qdisc_nlmsg(void *cookie, void *msg, struct nlattr **tb)
+ }
+
+ if (tcinfo->used_len == tcinfo->array_len) {
+- tcinfo->handle_array = realloc(tcinfo->handle_array,
++ tmp = realloc(tcinfo->handle_array,
+ (tcinfo->array_len + 16) * sizeof(struct tc_kind_handle));
+- if (!tcinfo->handle_array)
++ if (!tmp)
+ return -ENOMEM;
+
++ tcinfo->handle_array = tmp;
+ tcinfo->array_len += 16;
+ }
+ tcinfo->handle_array[tcinfo->used_len].handle = info->tcm_handle;
+diff --git a/tools/include/linux/sched/mm.h b/tools/include/linux/sched/mm.h
+index c8d9f19c1f357f..967294b8edcfcd 100644
+--- a/tools/include/linux/sched/mm.h
++++ b/tools/include/linux/sched/mm.h
+@@ -1,4 +1,6 @@
+ #ifndef _TOOLS_PERF_LINUX_SCHED_MM_H
+ #define _TOOLS_PERF_LINUX_SCHED_MM_H
+
++#define might_alloc(gfp) do { } while (0)
++
+ #endif /* _TOOLS_PERF_LINUX_SCHED_MM_H */
+diff --git a/tools/perf/tests/bp_account.c b/tools/perf/tests/bp_account.c
+index 489b50604cf274..ac39f4947fd886 100644
+--- a/tools/perf/tests/bp_account.c
++++ b/tools/perf/tests/bp_account.c
+@@ -89,6 +89,7 @@ static int bp_accounting(int wp_cnt, int share)
+ fd_wp = wp_event((void *)&the_var, &attr_new);
+ TEST_ASSERT_VAL("failed to create max wp\n", fd_wp != -1);
+ pr_debug("wp max created\n");
++ close(fd_wp);
+ }
+
+ for (i = 0; i < wp_cnt; i++)
+diff --git a/tools/power/cpupower/utils/idle_monitor/mperf_monitor.c b/tools/power/cpupower/utils/idle_monitor/mperf_monitor.c
+index 08a399b0be286c..6ab9139f16af90 100644
+--- a/tools/power/cpupower/utils/idle_monitor/mperf_monitor.c
++++ b/tools/power/cpupower/utils/idle_monitor/mperf_monitor.c
+@@ -240,9 +240,9 @@ static int mperf_stop(void)
+ int cpu;
+
+ for (cpu = 0; cpu < cpu_count; cpu++) {
+- mperf_measure_stats(cpu);
+- mperf_get_tsc(&tsc_at_measure_end[cpu]);
+ clock_gettime(CLOCK_REALTIME, &time_end[cpu]);
++ mperf_get_tsc(&tsc_at_measure_end[cpu]);
++ mperf_measure_stats(cpu);
+ }
+
+ return 0;
+diff --git a/tools/testing/ktest/ktest.pl b/tools/testing/ktest/ktest.pl
+index 8ac30e2ac3acca..512a3cc586fdd0 100755
+--- a/tools/testing/ktest/ktest.pl
++++ b/tools/testing/ktest/ktest.pl
+@@ -1291,7 +1291,10 @@ sub __eval_option {
+ # If a variable contains itself, use the default var
+ if (($var eq $name) && defined($opt{$var})) {
+ $o = $opt{$var};
+- $retval = "$retval$o";
++ # Only append if the default doesn't contain itself
++ if ($o !~ m/\$\{$var\}/) {
++ $retval = "$retval$o";
++ }
+ } elsif (defined($opt{$o})) {
+ $o = $opt{$o};
+ $retval = "$retval$o";
+diff --git a/tools/testing/selftests/ftrace/test.d/event/subsystem-enable.tc b/tools/testing/selftests/ftrace/test.d/event/subsystem-enable.tc
+index b7c8f29c09a978..65916bb55dfbbf 100644
+--- a/tools/testing/selftests/ftrace/test.d/event/subsystem-enable.tc
++++ b/tools/testing/selftests/ftrace/test.d/event/subsystem-enable.tc
+@@ -14,11 +14,35 @@ fail() { #msg
+ exit_fail
+ }
+
++# As reading trace can last forever, simply look for 3 different
++# events then exit out of reading the file. If there's not 3 different
++# events, then the test has failed.
++check_unique() {
++ cat trace | grep -v '^#' | awk '
++ BEGIN { cnt = 0; }
++ {
++ for (i = 0; i < cnt; i++) {
++ if (event[i] == $5) {
++ break;
++ }
++ }
++ if (i == cnt) {
++ event[cnt++] = $5;
++ if (cnt > 2) {
++ exit;
++ }
++ }
++ }
++ END {
++ printf "%d", cnt;
++ }'
++}
++
+ echo 'sched:*' > set_event
+
+ yield
+
+-count=`head -n 100 trace | grep -v ^# | awk '{ print $5 }' | sort -u | wc -l`
++count=`check_unique`
+ if [ $count -lt 3 ]; then
+ fail "at least fork, exec and exit events should be recorded"
+ fi
+@@ -29,7 +53,7 @@ echo 1 > events/sched/enable
+
+ yield
+
+-count=`head -n 100 trace | grep -v ^# | awk '{ print $5 }' | sort -u | wc -l`
++count=`check_unique`
+ if [ $count -lt 3 ]; then
+ fail "at least fork, exec and exit events should be recorded"
+ fi
+diff --git a/tools/testing/selftests/ftrace/test.d/ftrace/func-filter-glob.tc b/tools/testing/selftests/ftrace/test.d/ftrace/func-filter-glob.tc
+index 4b994b6df5ac30..ed81eaf2afd6d9 100644
+--- a/tools/testing/selftests/ftrace/test.d/ftrace/func-filter-glob.tc
++++ b/tools/testing/selftests/ftrace/test.d/ftrace/func-filter-glob.tc
+@@ -29,7 +29,7 @@ ftrace_filter_check 'schedule*' '^schedule.*$'
+ ftrace_filter_check '*pin*lock' '.*pin.*lock$'
+
+ # filter by start*mid*
+-ftrace_filter_check 'mutex*try*' '^mutex.*try.*'
++ftrace_filter_check 'mutex*unl*' '^mutex.*unl.*'
+
+ # Advanced full-glob matching feature is recently supported.
+ # Skip the tests if we are sure the kernel does not support it.
+diff --git a/tools/testing/selftests/futex/include/futextest.h b/tools/testing/selftests/futex/include/futextest.h
+index ddbcfc9b7bac4a..7a5fd1d5355e7e 100644
+--- a/tools/testing/selftests/futex/include/futextest.h
++++ b/tools/testing/selftests/futex/include/futextest.h
+@@ -47,6 +47,17 @@ typedef volatile u_int32_t futex_t;
+ FUTEX_PRIVATE_FLAG)
+ #endif
+
++/*
++ * SYS_futex is expected from system C library, in glibc some 32-bit
++ * architectures (e.g. RV32) are using 64-bit time_t, therefore it doesn't have
++ * SYS_futex defined but just SYS_futex_time64. Define SYS_futex as
++ * SYS_futex_time64 in this situation to ensure the compilation and the
++ * compatibility.
++ */
++#if !defined(SYS_futex) && defined(SYS_futex_time64)
++#define SYS_futex SYS_futex_time64
++#endif
++
+ /**
+ * futex() - SYS_futex syscall wrapper
+ * @uaddr: address of first futex
+diff --git a/tools/testing/selftests/memfd/memfd_test.c b/tools/testing/selftests/memfd/memfd_test.c
+index fba322d1c67a17..5d1ad547416ac5 100644
+--- a/tools/testing/selftests/memfd/memfd_test.c
++++ b/tools/testing/selftests/memfd/memfd_test.c
+@@ -186,6 +186,24 @@ static void *mfd_assert_mmap_shared(int fd)
+ return p;
+ }
+
++static void *mfd_assert_mmap_read_shared(int fd)
++{
++ void *p;
++
++ p = mmap(NULL,
++ mfd_def_size,
++ PROT_READ,
++ MAP_SHARED,
++ fd,
++ 0);
++ if (p == MAP_FAILED) {
++ printf("mmap() failed: %m\n");
++ abort();
++ }
++
++ return p;
++}
++
+ static void *mfd_assert_mmap_private(int fd)
+ {
+ void *p;
+@@ -802,6 +820,30 @@ static void test_seal_future_write(void)
+ close(fd);
+ }
+
++static void test_seal_write_map_read_shared(void)
++{
++ int fd;
++ void *p;
++
++ printf("%s SEAL-WRITE-MAP-READ\n", memfd_str);
++
++ fd = mfd_assert_new("kern_memfd_seal_write_map_read",
++ mfd_def_size,
++ MFD_CLOEXEC | MFD_ALLOW_SEALING);
++
++ mfd_assert_add_seals(fd, F_SEAL_WRITE);
++ mfd_assert_has_seals(fd, F_SEAL_WRITE);
++
++ p = mfd_assert_mmap_read_shared(fd);
++
++ mfd_assert_read(fd);
++ mfd_assert_read_shared(fd);
++ mfd_fail_write(fd);
++
++ munmap(p, mfd_def_size);
++ close(fd);
++}
++
+ /*
+ * Test SEAL_SHRINK
+ * Test whether SEAL_SHRINK actually prevents shrinking
+@@ -1056,6 +1098,7 @@ int main(int argc, char **argv)
+
+ test_seal_write();
+ test_seal_future_write();
++ test_seal_write_map_read_shared();
+ test_seal_shrink();
+ test_seal_grow();
+ test_seal_resize();
+diff --git a/tools/testing/selftests/net/mptcp/Makefile b/tools/testing/selftests/net/mptcp/Makefile
+index 7072ef1c0ae744..c77ce687ae2d44 100644
+--- a/tools/testing/selftests/net/mptcp/Makefile
++++ b/tools/testing/selftests/net/mptcp/Makefile
+@@ -5,7 +5,8 @@ KSFT_KHDR_INSTALL := 1
+
+ CFLAGS = -Wall -Wl,--no-as-needed -O2 -g -I$(top_srcdir)/usr/include
+
+-TEST_PROGS := mptcp_connect.sh pm_netlink.sh mptcp_join.sh diag.sh \
++TEST_PROGS := mptcp_connect.sh mptcp_connect_mmap.sh \
++ pm_netlink.sh mptcp_join.sh diag.sh \
+ simult_flows.sh
+
+ TEST_GEN_FILES = mptcp_connect pm_nl_ctl
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_connect_mmap.sh b/tools/testing/selftests/net/mptcp/mptcp_connect_mmap.sh
+new file mode 100644
+index 00000000000000..5dd30f9394af6a
+--- /dev/null
++++ b/tools/testing/selftests/net/mptcp/mptcp_connect_mmap.sh
+@@ -0,0 +1,5 @@
++#!/bin/bash
++# SPDX-License-Identifier: GPL-2.0
++
++MPTCP_LIB_KSFT_TEST="$(basename "${0}" .sh)" \
++ "$(dirname "${0}")/mptcp_connect.sh" -m mmap "${@}"
+diff --git a/tools/testing/selftests/net/mptcp/pm_netlink.sh b/tools/testing/selftests/net/mptcp/pm_netlink.sh
+index fff6f74ebe1606..7d194f5c293938 100755
+--- a/tools/testing/selftests/net/mptcp/pm_netlink.sh
++++ b/tools/testing/selftests/net/mptcp/pm_netlink.sh
+@@ -130,6 +130,7 @@ ip netns exec $ns1 ./pm_nl_ctl limits 1 9
+ check "ip netns exec $ns1 ./pm_nl_ctl limits" "$default_limits" "subflows above hard limit"
+
+ ip netns exec $ns1 ./pm_nl_ctl limits 8 8
++ip netns exec $ns1 ./pm_nl_ctl flush
+ check "ip netns exec $ns1 ./pm_nl_ctl limits" "accept 8
+ subflows 8" "set limits"
+
+diff --git a/tools/testing/selftests/net/rtnetlink.sh b/tools/testing/selftests/net/rtnetlink.sh
+index a3597b3e579fed..0a6212a964155d 100755
+--- a/tools/testing/selftests/net/rtnetlink.sh
++++ b/tools/testing/selftests/net/rtnetlink.sh
+@@ -746,6 +746,11 @@ kci_test_ipsec_offload()
+ sysfsf=$sysfsd/ipsec
+ sysfsnet=/sys/bus/netdevsim/devices/netdevsim0/net/
+ probed=false
++ esp4_offload_probed_default=false
++
++ if lsmod | grep -q esp4_offload; then
++ esp4_offload_probed_default=true
++ fi
+
+ # setup netdevsim since dummydev doesn't have offload support
+ if [ ! -w /sys/bus/netdevsim/new_device ] ; then
+@@ -835,6 +840,7 @@ EOF
+ fi
+
+ # clean up any leftovers
++ ! "$esp4_offload_probed_default" && lsmod | grep -q esp4_offload && rmmod esp4_offload
+ echo 0 > /sys/bus/netdevsim/del_device
+ $probed && rmmod netdevsim
+
2022-03-02 13:06 Mike Pagano
2022-02-26 20:27 Mike Pagano
2022-02-23 12:37 Mike Pagano
2022-02-16 12:46 Mike Pagano
2022-02-11 12:35 Mike Pagano
2022-02-08 17:54 Mike Pagano
2022-02-05 19:04 Mike Pagano
2022-02-05 12:13 Mike Pagano
2022-02-01 17:23 Mike Pagano
2022-01-31 12:25 Mike Pagano
2022-01-29 17:43 Mike Pagano
2022-01-27 11:37 Mike Pagano
2022-01-20 10:00 Mike Pagano
2022-01-16 10:21 Mike Pagano
2022-01-11 14:50 Mike Pagano
2022-01-05 12:53 Mike Pagano
2021-12-29 13:06 Mike Pagano
2021-12-22 14:05 Mike Pagano
2021-12-21 19:37 Mike Pagano
2021-12-17 11:55 Mike Pagano
2021-12-16 16:04 Mike Pagano
2021-12-14 12:51 Mike Pagano
2021-12-14 12:12 Mike Pagano
2021-12-08 12:53 Mike Pagano
2021-12-01 12:49 Mike Pagano
2021-11-26 11:57 Mike Pagano
2021-11-21 20:42 Mike Pagano
2021-11-18 15:33 Mike Pagano
2021-11-12 14:18 Mike Pagano
2021-11-06 13:36 Mike Pagano
2021-11-02 19:30 Mike Pagano
2021-10-27 14:55 Mike Pagano
2021-10-27 11:57 Mike Pagano
2021-10-20 13:23 Mike Pagano
2021-10-18 21:17 Mike Pagano
2021-10-17 13:11 Mike Pagano
2021-10-13 9:35 Alice Ferrazzi
2021-10-09 21:31 Mike Pagano
2021-10-06 14:18 Mike Pagano
2021-09-30 10:48 Mike Pagano
2021-09-26 14:12 Mike Pagano
2021-09-22 11:38 Mike Pagano
2021-09-20 22:02 Mike Pagano
2021-09-18 16:07 Mike Pagano
2021-09-17 12:50 Mike Pagano
2021-09-17 12:46 Mike Pagano
2021-09-16 11:20 Mike Pagano
2021-09-15 12:00 Mike Pagano
2021-09-12 14:38 Mike Pagano
2021-09-08 13:00 Alice Ferrazzi
2021-09-03 11:47 Mike Pagano
2021-09-03 11:20 Mike Pagano
2021-08-26 14:34 Mike Pagano
2021-08-25 16:23 Mike Pagano
2021-08-24 21:33 Mike Pagano
2021-08-24 21:32 Mike Pagano
2021-08-21 14:17 Mike Pagano
2021-08-19 11:56 Mike Pagano
2021-08-18 12:46 Mike Pagano
2021-08-15 20:05 Mike Pagano
2021-08-12 11:53 Mike Pagano
2021-08-10 11:49 Mike Pagano
2021-08-10 11:49 Mike Pagano
2021-08-08 13:36 Mike Pagano
2021-08-04 11:52 Mike Pagano
2021-08-03 11:03 Mike Pagano
2021-08-02 22:35 Mike Pagano
2021-07-31 10:30 Alice Ferrazzi
2021-07-28 13:22 Mike Pagano
2021-07-25 17:28 Mike Pagano
2021-07-25 17:26 Mike Pagano
2021-07-20 15:44 Alice Ferrazzi
2021-07-19 11:17 Mike Pagano
2021-07-14 16:31 Mike Pagano
2021-07-14 16:21 Mike Pagano
2021-07-13 12:37 Mike Pagano
2021-07-12 17:25 Mike Pagano
2021-07-11 15:11 Mike Pagano
2021-07-11 14:43 Mike Pagano
2021-07-08 12:27 Mike Pagano
2021-07-08 3:27 Alice Ferrazzi
2021-07-07 13:13 Mike Pagano
2021-07-02 19:38 Mike Pagano
2021-07-01 14:32 Mike Pagano
2021-06-30 14:23 Mike Pagano
2021-06-23 15:12 Mike Pagano
2021-06-18 11:37 Mike Pagano
2021-06-16 12:24 Mike Pagano
2021-06-11 17:34 Mike Pagano
2021-06-10 13:14 Mike Pagano
2021-06-10 12:09 Mike Pagano
2021-06-08 22:42 Mike Pagano
2021-06-03 10:26 Alice Ferrazzi
2021-05-28 12:15 Alice Ferrazzi
2021-05-26 12:07 Mike Pagano
2021-05-22 16:59 Mike Pagano
2021-05-19 12:24 Mike Pagano
2021-05-14 14:07 Alice Ferrazzi
2021-05-11 14:20 Mike Pagano
2021-05-07 11:27 Alice Ferrazzi
2021-05-02 16:03 Mike Pagano
2021-04-30 18:58 Mike Pagano
2021-04-28 12:03 Alice Ferrazzi
2021-04-21 11:42 Mike Pagano
2021-04-16 11:02 Alice Ferrazzi
2021-04-14 11:07 Alice Ferrazzi
2021-04-10 13:26 Mike Pagano
2021-04-07 13:27 Mike Pagano
2021-03-30 12:57 Alice Ferrazzi
2021-03-25 9:04 Alice Ferrazzi
2021-03-22 15:57 Mike Pagano
2021-03-20 14:35 Mike Pagano
2021-03-17 17:00 Mike Pagano
2021-03-11 15:08 Mike Pagano
2021-03-09 12:18 Mike Pagano
2021-03-07 15:17 Mike Pagano
2021-03-04 12:04 Alice Ferrazzi
2021-02-26 13:22 Mike Pagano
2021-02-26 10:42 Alice Ferrazzi
2021-02-23 15:16 Alice Ferrazzi
2021-02-18 20:45 Mike Pagano
2021-02-18 14:48 Mike Pagano
2021-02-17 11:14 Alice Ferrazzi
2021-02-13 15:51 Mike Pagano
2021-02-13 15:48 Mike Pagano
2021-02-13 14:42 Alice Ferrazzi
2021-02-10 10:23 Alice Ferrazzi
2021-02-10 9:51 Alice Ferrazzi
2021-02-09 19:10 Mike Pagano
2021-02-07 15:20 Alice Ferrazzi
2021-02-03 23:43 Alice Ferrazzi
2021-01-30 13:27 Alice Ferrazzi
2021-01-27 11:29 Mike Pagano
2021-01-23 16:38 Mike Pagano
2021-01-19 20:31 Mike Pagano
2021-01-17 16:18 Mike Pagano
2021-01-12 20:03 Mike Pagano
2021-01-09 17:58 Mike Pagano
2021-01-09 0:14 Mike Pagano
2021-01-06 14:54 Mike Pagano
2020-12-30 12:54 Mike Pagano
2020-12-26 15:32 Mike Pagano
2020-12-26 15:29 Mike Pagano
2020-12-21 13:26 Mike Pagano
2020-12-18 16:08 Mike Pagano
2020-12-14 20:45 Mike Pagano
2020-12-13 16:09 Mike Pagano
2020-11-19 13:03 Mike Pagano