From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from smtp.gentoo.org (woodpecker.gentoo.org [140.211.166.183]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) by finch.gentoo.org (Postfix) with ESMTPS id 553471581FD for ; Wed, 10 Sep 2025 05:32:07 +0000 (UTC) Received: from lists.gentoo.org (bobolink.gentoo.org [140.211.166.189]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange x25519) (No client certificate requested) (Authenticated sender: relay-lists.gentoo.org@gentoo.org) by smtp.gentoo.org (Postfix) with ESMTPSA id 3FF92340F55 for ; Wed, 10 Sep 2025 05:32:07 +0000 (UTC) Received: from bobolink.gentoo.org (localhost [127.0.0.1]) by bobolink.gentoo.org (Postfix) with ESMTP id D9772110377; Wed, 10 Sep 2025 05:32:05 +0000 (UTC) Received: from smtp.gentoo.org (mail.gentoo.org [IPv6:2001:470:ea4a:1:5054:ff:fec7:86e4]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange x25519) (No client certificate requested) by bobolink.gentoo.org (Postfix) with ESMTPS id D1829110377 for ; Wed, 10 Sep 2025 05:32:05 +0000 (UTC) Received: from oystercatcher.gentoo.org (oystercatcher.gentoo.org [148.251.78.52]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange x25519) (No client certificate requested) by smtp.gentoo.org (Postfix) with ESMTPS id E71AB340F50 for ; Wed, 10 Sep 2025 05:32:04 +0000 (UTC) Received: from localhost.localdomain (localhost [IPv6:::1]) by oystercatcher.gentoo.org (Postfix) with ESMTP id 7A98C394F for ; Wed, 10 Sep 2025 05:32:03 +0000 (UTC) From: "Arisu Tachibana" To: gentoo-commits@lists.gentoo.org Content-Transfer-Encoding: 8bit Content-type: text/plain; charset=UTF-8 Reply-To: gentoo-dev@lists.gentoo.org, "Arisu Tachibana" Message-ID: <1757482308.b4acb855cc74c9ea6f8e1428afdb197919880708.alicef@gentoo> Subject: [gentoo-commits] proj/linux-patches:6.6 commit in: / X-VCS-Repository: proj/linux-patches X-VCS-Files: 0000_README 1104_linux-6.6.105.patch X-VCS-Directories: / X-VCS-Committer: alicef X-VCS-Committer-Name: Arisu Tachibana X-VCS-Revision: b4acb855cc74c9ea6f8e1428afdb197919880708 X-VCS-Branch: 6.6 Date: Wed, 10 Sep 2025 05:32:03 +0000 (UTC) Precedence: bulk List-Post: List-Help: List-Unsubscribe: List-Subscribe: List-Id: Gentoo Linux mail X-BeenThere: gentoo-commits@lists.gentoo.org X-Auto-Response-Suppress: DR, RN, NRN, OOF, AutoReply X-Archives-Salt: 5e73fb33-a3f1-4f27-a706-202ffcbfae7f X-Archives-Hash: a7d7f3a6c2e44ad00db76f9e0b0709e8 commit: b4acb855cc74c9ea6f8e1428afdb197919880708 Author: Arisu Tachibana gentoo org> AuthorDate: Wed Sep 10 05:31:48 2025 +0000 Commit: Arisu Tachibana gentoo org> CommitDate: Wed Sep 10 05:31:48 2025 +0000 URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=b4acb855 Linux patch 6.6.105 Signed-off-by: Arisu Tachibana gentoo.org> 0000_README | 4 + 1104_linux-6.6.105.patch | 5454 ++++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 5458 insertions(+) diff --git a/0000_README b/0000_README index f1c91278..e61b3981 100644 --- a/0000_README +++ b/0000_README @@ -459,6 +459,10 @@ Patch: 1103_linux-6.6.104.patch From: https://www.kernel.org Desc: Linux 6.6.104 +Patch: 1104_linux-6.6.105.patch +From: https://www.kernel.org +Desc: Linux 6.6.105 + Patch: 1510_fs-enable-link-security-restrictions-by-default.patch From: 
http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch Desc: Enable link security restrictions by default. diff --git a/1104_linux-6.6.105.patch b/1104_linux-6.6.105.patch new file mode 100644 index 00000000..6394a1e5 --- /dev/null +++ b/1104_linux-6.6.105.patch @@ -0,0 +1,5454 @@ +diff --git a/Documentation/userspace-api/netlink/specs.rst b/Documentation/userspace-api/netlink/specs.rst +index cc4e2430997ef8..a8218284e67a42 100644 +--- a/Documentation/userspace-api/netlink/specs.rst ++++ b/Documentation/userspace-api/netlink/specs.rst +@@ -408,10 +408,21 @@ This section describes the attribute types supported by the ``genetlink`` + compatibility level. Refer to documentation of different levels for additional + attribute types. + +-Scalar integer types ++Common integer types + -------------------- + +-Fixed-width integer types: ++``sint`` and ``uint`` represent signed and unsigned 64 bit integers. ++If the value can fit on 32 bits only 32 bits are carried in netlink ++messages, otherwise full 64 bits are carried. Note that the payload ++is only aligned to 4B, so the full 64 bit value may be unaligned! ++ ++Common integer types should be preferred over fix-width types in majority ++of cases. ++ ++Fix-width integer types ++----------------------- ++ ++Fixed-width integer types include: + ``u8``, ``u16``, ``u32``, ``u64``, ``s8``, ``s16``, ``s32``, ``s64``. + + Note that types smaller than 32 bit should be avoided as using them +@@ -421,6 +432,9 @@ See :ref:`pad_type` for padding of 64 bit attributes. + The payload of the attribute is the integer in host order unless ``byte-order`` + specifies otherwise. + ++64 bit values are usually aligned by the kernel but it is recommended ++that the user space is able to deal with unaligned values. ++ + .. 
_pad_type: + + pad +diff --git a/Makefile b/Makefile +index ae57f816375ebd..2b7f67d7b641ce 100644 +--- a/Makefile ++++ b/Makefile +@@ -1,7 +1,7 @@ + # SPDX-License-Identifier: GPL-2.0 + VERSION = 6 + PATCHLEVEL = 6 +-SUBLEVEL = 104 ++SUBLEVEL = 105 + EXTRAVERSION = + NAME = Pinguïn Aangedreven + +diff --git a/arch/arm64/boot/dts/freescale/imx8mp-data-modul-edm-sbc.dts b/arch/arm64/boot/dts/freescale/imx8mp-data-modul-edm-sbc.dts +index cd44bf83745cae..678ecc9f81dbb7 100644 +--- a/arch/arm64/boot/dts/freescale/imx8mp-data-modul-edm-sbc.dts ++++ b/arch/arm64/boot/dts/freescale/imx8mp-data-modul-edm-sbc.dts +@@ -442,6 +442,7 @@ &usdhc2 { + pinctrl-2 = <&pinctrl_usdhc2_200mhz>, <&pinctrl_usdhc2_gpio>; + cd-gpios = <&gpio2 12 GPIO_ACTIVE_LOW>; + vmmc-supply = <®_usdhc2_vmmc>; ++ vqmmc-supply = <&ldo5>; + bus-width = <4>; + status = "okay"; + }; +diff --git a/arch/arm64/boot/dts/freescale/imx8mp-dhcom-som.dtsi b/arch/arm64/boot/dts/freescale/imx8mp-dhcom-som.dtsi +index eae39c1cb98568..2e93d922c86111 100644 +--- a/arch/arm64/boot/dts/freescale/imx8mp-dhcom-som.dtsi ++++ b/arch/arm64/boot/dts/freescale/imx8mp-dhcom-som.dtsi +@@ -571,6 +571,7 @@ &usdhc2 { + pinctrl-2 = <&pinctrl_usdhc2_200mhz>, <&pinctrl_usdhc2_gpio>; + cd-gpios = <&gpio2 12 GPIO_ACTIVE_LOW>; + vmmc-supply = <®_usdhc2_vmmc>; ++ vqmmc-supply = <&ldo5>; + bus-width = <4>; + status = "okay"; + }; +diff --git a/arch/arm64/boot/dts/rockchip/rk3399-pinebook-pro.dts b/arch/arm64/boot/dts/rockchip/rk3399-pinebook-pro.dts +index f5e124b235c83c..fb3012a6c9fc30 100644 +--- a/arch/arm64/boot/dts/rockchip/rk3399-pinebook-pro.dts ++++ b/arch/arm64/boot/dts/rockchip/rk3399-pinebook-pro.dts +@@ -967,6 +967,7 @@ spiflash: flash@0 { + reg = <0>; + m25p,fast-read; + spi-max-frequency = <10000000>; ++ vcc-supply = <&vcc_3v0>; + }; + }; + +diff --git a/arch/arm64/include/asm/module.h b/arch/arm64/include/asm/module.h +index bfa6638b4c930c..8f7ac23c404d99 100644 +--- a/arch/arm64/include/asm/module.h ++++ b/arch/arm64/include/asm/module.h +@@ -19,6 +19,7 @@ struct mod_arch_specific { + + /* for CONFIG_DYNAMIC_FTRACE */ + struct plt_entry *ftrace_trampolines; ++ struct plt_entry *init_ftrace_trampolines; + }; + + u64 module_emit_plt_entry(struct module *mod, Elf64_Shdr *sechdrs, +diff --git a/arch/arm64/include/asm/module.lds.h b/arch/arm64/include/asm/module.lds.h +index b9ae8349e35dbb..fb944b46846dae 100644 +--- a/arch/arm64/include/asm/module.lds.h ++++ b/arch/arm64/include/asm/module.lds.h +@@ -2,6 +2,7 @@ SECTIONS { + .plt 0 : { BYTE(0) } + .init.plt 0 : { BYTE(0) } + .text.ftrace_trampoline 0 : { BYTE(0) } ++ .init.text.ftrace_trampoline 0 : { BYTE(0) } + + #ifdef CONFIG_KASAN_SW_TAGS + /* +diff --git a/arch/arm64/kernel/ftrace.c b/arch/arm64/kernel/ftrace.c +index a650f5e11fc5d8..b657f058bf4d50 100644 +--- a/arch/arm64/kernel/ftrace.c ++++ b/arch/arm64/kernel/ftrace.c +@@ -195,10 +195,17 @@ int ftrace_update_ftrace_func(ftrace_func_t func) + return ftrace_modify_code(pc, 0, new, false); + } + +-static struct plt_entry *get_ftrace_plt(struct module *mod) ++static struct plt_entry *get_ftrace_plt(struct module *mod, unsigned long addr) + { + #ifdef CONFIG_MODULES +- struct plt_entry *plt = mod->arch.ftrace_trampolines; ++ struct plt_entry *plt = NULL; ++ ++ if (within_module_mem_type(addr, mod, MOD_INIT_TEXT)) ++ plt = mod->arch.init_ftrace_trampolines; ++ else if (within_module_mem_type(addr, mod, MOD_TEXT)) ++ plt = mod->arch.ftrace_trampolines; ++ else ++ return NULL; + + return &plt[FTRACE_PLT_IDX]; + #else +@@ -270,7 +277,7 @@ static bool 
ftrace_find_callable_addr(struct dyn_ftrace *rec, + if (WARN_ON(!mod)) + return false; + +- plt = get_ftrace_plt(mod); ++ plt = get_ftrace_plt(mod, pc); + if (!plt) { + pr_err("ftrace: no module PLT for %ps\n", (void *)*addr); + return false; +diff --git a/arch/arm64/kernel/module-plts.c b/arch/arm64/kernel/module-plts.c +index 79200f21e12393..e4ddb1642ee22d 100644 +--- a/arch/arm64/kernel/module-plts.c ++++ b/arch/arm64/kernel/module-plts.c +@@ -284,7 +284,7 @@ int module_frob_arch_sections(Elf_Ehdr *ehdr, Elf_Shdr *sechdrs, + unsigned long core_plts = 0; + unsigned long init_plts = 0; + Elf64_Sym *syms = NULL; +- Elf_Shdr *pltsec, *tramp = NULL; ++ Elf_Shdr *pltsec, *tramp = NULL, *init_tramp = NULL; + int i; + + /* +@@ -299,6 +299,9 @@ int module_frob_arch_sections(Elf_Ehdr *ehdr, Elf_Shdr *sechdrs, + else if (!strcmp(secstrings + sechdrs[i].sh_name, + ".text.ftrace_trampoline")) + tramp = sechdrs + i; ++ else if (!strcmp(secstrings + sechdrs[i].sh_name, ++ ".init.text.ftrace_trampoline")) ++ init_tramp = sechdrs + i; + else if (sechdrs[i].sh_type == SHT_SYMTAB) + syms = (Elf64_Sym *)sechdrs[i].sh_addr; + } +@@ -364,5 +367,12 @@ int module_frob_arch_sections(Elf_Ehdr *ehdr, Elf_Shdr *sechdrs, + tramp->sh_size = NR_FTRACE_PLTS * sizeof(struct plt_entry); + } + ++ if (init_tramp) { ++ init_tramp->sh_type = SHT_NOBITS; ++ init_tramp->sh_flags = SHF_EXECINSTR | SHF_ALLOC; ++ init_tramp->sh_addralign = __alignof__(struct plt_entry); ++ init_tramp->sh_size = NR_FTRACE_PLTS * sizeof(struct plt_entry); ++ } ++ + return 0; + } +diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c +index dd851297596e5e..adaf2920773b37 100644 +--- a/arch/arm64/kernel/module.c ++++ b/arch/arm64/kernel/module.c +@@ -579,6 +579,17 @@ static int module_init_ftrace_plt(const Elf_Ehdr *hdr, + __init_plt(&plts[FTRACE_PLT_IDX], FTRACE_ADDR); + + mod->arch.ftrace_trampolines = plts; ++ ++ s = find_section(hdr, sechdrs, ".init.text.ftrace_trampoline"); ++ if (!s) ++ return -ENOEXEC; ++ ++ plts = (void *)s->sh_addr; ++ ++ __init_plt(&plts[FTRACE_PLT_IDX], FTRACE_ADDR); ++ ++ mod->arch.init_ftrace_trampolines = plts; ++ + #endif + return 0; + } +diff --git a/arch/loongarch/kernel/signal.c b/arch/loongarch/kernel/signal.c +index 4a3686d1334949..0e90cd2df0ea3a 100644 +--- a/arch/loongarch/kernel/signal.c ++++ b/arch/loongarch/kernel/signal.c +@@ -697,6 +697,11 @@ static int setup_sigcontext(struct pt_regs *regs, struct sigcontext __user *sc, + for (i = 1; i < 32; i++) + err |= __put_user(regs->regs[i], &sc->sc_regs[i]); + ++#ifdef CONFIG_CPU_HAS_LBT ++ if (extctx->lbt.addr) ++ err |= protected_save_lbt_context(extctx); ++#endif ++ + if (extctx->lasx.addr) + err |= protected_save_lasx_context(extctx); + else if (extctx->lsx.addr) +@@ -704,11 +709,6 @@ static int setup_sigcontext(struct pt_regs *regs, struct sigcontext __user *sc, + else if (extctx->fpu.addr) + err |= protected_save_fpu_context(extctx); + +-#ifdef CONFIG_CPU_HAS_LBT +- if (extctx->lbt.addr) +- err |= protected_save_lbt_context(extctx); +-#endif +- + /* Set the "end" magic */ + info = (struct sctx_info *)extctx->end.addr; + err |= __put_user(0, &info->magic); +diff --git a/arch/riscv/include/asm/asm.h b/arch/riscv/include/asm/asm.h +index b5b84c6be01e16..da818b39a24cc4 100644 +--- a/arch/riscv/include/asm/asm.h ++++ b/arch/riscv/include/asm/asm.h +@@ -90,7 +90,7 @@ + #endif + + .macro asm_per_cpu dst sym tmp +- REG_L \tmp, TASK_TI_CPU_NUM(tp) ++ lw \tmp, TASK_TI_CPU_NUM(tp) + slli \tmp, \tmp, PER_CPU_OFFSET_SHIFT + la \dst, __per_cpu_offset + add 
\dst, \dst, \tmp +diff --git a/arch/x86/include/asm/pgtable_64_types.h b/arch/x86/include/asm/pgtable_64_types.h +index 35c416f061552b..ea95303ab15b81 100644 +--- a/arch/x86/include/asm/pgtable_64_types.h ++++ b/arch/x86/include/asm/pgtable_64_types.h +@@ -41,6 +41,9 @@ static inline bool pgtable_l5_enabled(void) + #define pgtable_l5_enabled() 0 + #endif /* CONFIG_X86_5LEVEL */ + ++#define ARCH_PAGE_TABLE_SYNC_MASK \ ++ (pgtable_l5_enabled() ? PGTBL_PGD_MODIFIED : PGTBL_P4D_MODIFIED) ++ + extern unsigned int pgdir_shift; + extern unsigned int ptrs_per_p4d; + +diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c +index 11eb93e13ce175..cf080c96b7dd66 100644 +--- a/arch/x86/mm/init_64.c ++++ b/arch/x86/mm/init_64.c +@@ -223,6 +223,24 @@ static void sync_global_pgds(unsigned long start, unsigned long end) + sync_global_pgds_l4(start, end); + } + ++/* ++ * Make kernel mappings visible in all page tables in the system. ++ * This is necessary except when the init task populates kernel mappings ++ * during the boot process. In that case, all processes originating from ++ * the init task copies the kernel mappings, so there is no issue. ++ * Otherwise, missing synchronization could lead to kernel crashes due ++ * to missing page table entries for certain kernel mappings. ++ * ++ * Synchronization is performed at the top level, which is the PGD in ++ * 5-level paging systems. But in 4-level paging systems, however, ++ * pgd_populate() is a no-op, so synchronization is done at the P4D level. ++ * sync_global_pgds() handles this difference between paging levels. ++ */ ++void arch_sync_kernel_mappings(unsigned long start, unsigned long end) ++{ ++ sync_global_pgds(start, end); ++} ++ + /* + * NOTE: This function is marked __ref because it calls __init function + * (alloc_bootmem_pages). It's safe to do it ONLY when after_bootmem == 0. 
+diff --git a/drivers/acpi/arm64/iort.c b/drivers/acpi/arm64/iort.c +index 1a31106a14e446..6ee8382a32302e 100644 +--- a/drivers/acpi/arm64/iort.c ++++ b/drivers/acpi/arm64/iort.c +@@ -937,8 +937,10 @@ static u32 *iort_rmr_alloc_sids(u32 *sids, u32 count, u32 id_start, + + new_sids = krealloc_array(sids, count + new_count, + sizeof(*new_sids), GFP_KERNEL); +- if (!new_sids) ++ if (!new_sids) { ++ kfree(sids); + return NULL; ++ } + + for (i = count; i < total_count; i++) + new_sids[i] = id_start++; +diff --git a/drivers/bluetooth/hci_vhci.c b/drivers/bluetooth/hci_vhci.c +index 4bfc78f9781ede..0935045051699f 100644 +--- a/drivers/bluetooth/hci_vhci.c ++++ b/drivers/bluetooth/hci_vhci.c +@@ -380,6 +380,28 @@ static const struct file_operations force_devcoredump_fops = { + .write = force_devcd_write, + }; + ++static void vhci_debugfs_init(struct vhci_data *data) ++{ ++ struct hci_dev *hdev = data->hdev; ++ ++ debugfs_create_file("force_suspend", 0644, hdev->debugfs, data, ++ &force_suspend_fops); ++ ++ debugfs_create_file("force_wakeup", 0644, hdev->debugfs, data, ++ &force_wakeup_fops); ++ ++ if (IS_ENABLED(CONFIG_BT_MSFTEXT)) ++ debugfs_create_file("msft_opcode", 0644, hdev->debugfs, data, ++ &msft_opcode_fops); ++ ++ if (IS_ENABLED(CONFIG_BT_AOSPEXT)) ++ debugfs_create_file("aosp_capable", 0644, hdev->debugfs, data, ++ &aosp_capable_fops); ++ ++ debugfs_create_file("force_devcoredump", 0644, hdev->debugfs, data, ++ &force_devcoredump_fops); ++} ++ + static int __vhci_create_device(struct vhci_data *data, __u8 opcode) + { + struct hci_dev *hdev; +@@ -435,22 +457,8 @@ static int __vhci_create_device(struct vhci_data *data, __u8 opcode) + return -EBUSY; + } + +- debugfs_create_file("force_suspend", 0644, hdev->debugfs, data, +- &force_suspend_fops); +- +- debugfs_create_file("force_wakeup", 0644, hdev->debugfs, data, +- &force_wakeup_fops); +- +- if (IS_ENABLED(CONFIG_BT_MSFTEXT)) +- debugfs_create_file("msft_opcode", 0644, hdev->debugfs, data, +- &msft_opcode_fops); +- +- if (IS_ENABLED(CONFIG_BT_AOSPEXT)) +- debugfs_create_file("aosp_capable", 0644, hdev->debugfs, data, +- &aosp_capable_fops); +- +- debugfs_create_file("force_devcoredump", 0644, hdev->debugfs, data, +- &force_devcoredump_fops); ++ if (!IS_ERR_OR_NULL(hdev->debugfs)) ++ vhci_debugfs_init(data); + + hci_skb_pkt_type(skb) = HCI_VENDOR_PKT; + +@@ -652,6 +660,21 @@ static int vhci_open(struct inode *inode, struct file *file) + return 0; + } + ++static void vhci_debugfs_remove(struct hci_dev *hdev) ++{ ++ debugfs_lookup_and_remove("force_suspend", hdev->debugfs); ++ ++ debugfs_lookup_and_remove("force_wakeup", hdev->debugfs); ++ ++ if (IS_ENABLED(CONFIG_BT_MSFTEXT)) ++ debugfs_lookup_and_remove("msft_opcode", hdev->debugfs); ++ ++ if (IS_ENABLED(CONFIG_BT_AOSPEXT)) ++ debugfs_lookup_and_remove("aosp_capable", hdev->debugfs); ++ ++ debugfs_lookup_and_remove("force_devcoredump", hdev->debugfs); ++} ++ + static int vhci_release(struct inode *inode, struct file *file) + { + struct vhci_data *data = file->private_data; +@@ -663,6 +686,8 @@ static int vhci_release(struct inode *inode, struct file *file) + hdev = data->hdev; + + if (hdev) { ++ if (!IS_ERR_OR_NULL(hdev->debugfs)) ++ vhci_debugfs_remove(hdev); + hci_unregister_dev(hdev); + hci_free_dev(hdev); + } +diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c +index 4f1206ff0a10e9..ed782c0b48af25 100644 +--- a/drivers/cpufreq/intel_pstate.c ++++ b/drivers/cpufreq/intel_pstate.c +@@ -172,7 +172,6 @@ struct vid_data { + * based on the MSR_IA32_MISC_ENABLE value 
and whether or + * not the maximum reported turbo P-state is different from + * the maximum reported non-turbo one. +- * @turbo_disabled_mf: The @turbo_disabled value reflected by cpuinfo.max_freq. + * @min_perf_pct: Minimum capacity limit in percent of the maximum turbo + * P-state capacity. + * @max_perf_pct: Maximum capacity limit in percent of the maximum turbo +@@ -181,7 +180,6 @@ struct vid_data { + struct global_params { + bool no_turbo; + bool turbo_disabled; +- bool turbo_disabled_mf; + int max_perf_pct; + int min_perf_pct; + }; +@@ -592,16 +590,16 @@ static void intel_pstate_hybrid_hwp_adjust(struct cpudata *cpu) + cpu->pstate.min_pstate = intel_pstate_freq_to_hwp(cpu, freq); + } + +-static inline void update_turbo_state(void) ++static bool turbo_is_disabled(void) + { + u64 misc_en; +- struct cpudata *cpu; + +- cpu = all_cpu_data[0]; ++ if (!cpu_feature_enabled(X86_FEATURE_IDA)) ++ return true; ++ + rdmsrl(MSR_IA32_MISC_ENABLE, misc_en); +- global.turbo_disabled = +- (misc_en & MSR_IA32_MISC_ENABLE_TURBO_DISABLE || +- cpu->pstate.max_pstate == cpu->pstate.turbo_pstate); ++ ++ return !!(misc_en & MSR_IA32_MISC_ENABLE_TURBO_DISABLE); + } + + static int min_perf_pct_min(void) +@@ -1156,40 +1154,16 @@ static void intel_pstate_update_policies(void) + static void __intel_pstate_update_max_freq(struct cpudata *cpudata, + struct cpufreq_policy *policy) + { +- policy->cpuinfo.max_freq = global.turbo_disabled_mf ? ++ policy->cpuinfo.max_freq = global.turbo_disabled ? + cpudata->pstate.max_freq : cpudata->pstate.turbo_freq; + refresh_frequency_limits(policy); + } + +-static void intel_pstate_update_max_freq(unsigned int cpu) +-{ +- struct cpufreq_policy *policy = cpufreq_cpu_acquire(cpu); +- +- if (!policy) +- return; +- +- __intel_pstate_update_max_freq(all_cpu_data[cpu], policy); +- +- cpufreq_cpu_release(policy); +-} +- + static void intel_pstate_update_limits(unsigned int cpu) + { + mutex_lock(&intel_pstate_driver_lock); + +- update_turbo_state(); +- /* +- * If turbo has been turned on or off globally, policy limits for +- * all CPUs need to be updated to reflect that. 
+- */ +- if (global.turbo_disabled_mf != global.turbo_disabled) { +- global.turbo_disabled_mf = global.turbo_disabled; +- arch_set_max_freq_ratio(global.turbo_disabled); +- for_each_possible_cpu(cpu) +- intel_pstate_update_max_freq(cpu); +- } else { +- cpufreq_update_policy(cpu); +- } ++ cpufreq_update_policy(cpu); + + mutex_unlock(&intel_pstate_driver_lock); + } +@@ -1289,11 +1263,7 @@ static ssize_t show_no_turbo(struct kobject *kobj, + return -EAGAIN; + } + +- update_turbo_state(); +- if (global.turbo_disabled) +- ret = sprintf(buf, "%u\n", global.turbo_disabled); +- else +- ret = sprintf(buf, "%u\n", global.no_turbo); ++ ret = sprintf(buf, "%u\n", global.no_turbo); + + mutex_unlock(&intel_pstate_driver_lock); + +@@ -1304,32 +1274,39 @@ static ssize_t store_no_turbo(struct kobject *a, struct kobj_attribute *b, + const char *buf, size_t count) + { + unsigned int input; +- int ret; ++ bool no_turbo; + +- ret = sscanf(buf, "%u", &input); +- if (ret != 1) ++ if (sscanf(buf, "%u", &input) != 1) + return -EINVAL; + + mutex_lock(&intel_pstate_driver_lock); + + if (!intel_pstate_driver) { +- mutex_unlock(&intel_pstate_driver_lock); +- return -EAGAIN; ++ count = -EAGAIN; ++ goto unlock_driver; + } + +- mutex_lock(&intel_pstate_limits_lock); ++ no_turbo = !!clamp_t(int, input, 0, 1); + +- update_turbo_state(); +- if (global.turbo_disabled) { +- pr_notice_once("Turbo disabled by BIOS or unavailable on processor\n"); +- mutex_unlock(&intel_pstate_limits_lock); +- mutex_unlock(&intel_pstate_driver_lock); +- return -EPERM; ++ WRITE_ONCE(global.turbo_disabled, turbo_is_disabled()); ++ if (global.turbo_disabled && !no_turbo) { ++ pr_notice("Turbo disabled by BIOS or unavailable on processor\n"); ++ count = -EPERM; ++ if (global.no_turbo) ++ goto unlock_driver; ++ else ++ no_turbo = 1; + } + +- global.no_turbo = clamp_t(int, input, 0, 1); ++ if (no_turbo == global.no_turbo) { ++ goto unlock_driver; ++ } + +- if (global.no_turbo) { ++ WRITE_ONCE(global.no_turbo, no_turbo); ++ ++ mutex_lock(&intel_pstate_limits_lock); ++ ++ if (no_turbo) { + struct cpudata *cpu = all_cpu_data[0]; + int pct = cpu->pstate.max_pstate * 100 / cpu->pstate.turbo_pstate; + +@@ -1341,8 +1318,9 @@ static ssize_t store_no_turbo(struct kobject *a, struct kobj_attribute *b, + mutex_unlock(&intel_pstate_limits_lock); + + intel_pstate_update_policies(); +- arch_set_max_freq_ratio(global.no_turbo); ++ arch_set_max_freq_ratio(no_turbo); + ++unlock_driver: + mutex_unlock(&intel_pstate_driver_lock); + + return count; +@@ -1793,7 +1771,7 @@ static u64 atom_get_val(struct cpudata *cpudata, int pstate) + u32 vid; + + val = (u64)pstate << 8; +- if (global.no_turbo && !global.turbo_disabled) ++ if (READ_ONCE(global.no_turbo) && !READ_ONCE(global.turbo_disabled)) + val |= (u64)1 << 32; + + vid_fp = cpudata->vid.min + mul_fp( +@@ -1958,7 +1936,7 @@ static u64 core_get_val(struct cpudata *cpudata, int pstate) + u64 val; + + val = (u64)pstate << 8; +- if (global.no_turbo && !global.turbo_disabled) ++ if (READ_ONCE(global.no_turbo) && !READ_ONCE(global.turbo_disabled)) + val |= (u64)1 << 32; + + return val; +@@ -2031,14 +2009,6 @@ static void intel_pstate_set_min_pstate(struct cpudata *cpu) + intel_pstate_set_pstate(cpu, cpu->pstate.min_pstate); + } + +-static void intel_pstate_max_within_limits(struct cpudata *cpu) +-{ +- int pstate = max(cpu->pstate.min_pstate, cpu->max_perf_ratio); +- +- update_turbo_state(); +- intel_pstate_set_pstate(cpu, pstate); +-} +- + static void intel_pstate_get_cpu_pstates(struct cpudata *cpu) + { + int perf_ctl_max_phys 
= pstate_funcs.get_max_physical(cpu->cpu); +@@ -2264,7 +2234,7 @@ static inline int32_t get_target_pstate(struct cpudata *cpu) + + sample->busy_scaled = busy_frac * 100; + +- target = global.no_turbo || global.turbo_disabled ? ++ target = READ_ONCE(global.no_turbo) ? + cpu->pstate.max_pstate : cpu->pstate.turbo_pstate; + target += target >> 2; + target = mul_fp(target, busy_frac); +@@ -2308,8 +2278,6 @@ static void intel_pstate_adjust_pstate(struct cpudata *cpu) + struct sample *sample; + int target_pstate; + +- update_turbo_state(); +- + target_pstate = get_target_pstate(cpu); + target_pstate = intel_pstate_prepare_request(cpu, target_pstate); + trace_cpu_frequency(target_pstate * cpu->pstate.scaling, cpu->cpu); +@@ -2527,7 +2495,7 @@ static void intel_pstate_clear_update_util_hook(unsigned int cpu) + + static int intel_pstate_get_max_freq(struct cpudata *cpu) + { +- return global.turbo_disabled || global.no_turbo ? ++ return READ_ONCE(global.no_turbo) ? + cpu->pstate.max_freq : cpu->pstate.turbo_freq; + } + +@@ -2612,12 +2580,14 @@ static int intel_pstate_set_policy(struct cpufreq_policy *policy) + intel_pstate_update_perf_limits(cpu, policy->min, policy->max); + + if (cpu->policy == CPUFREQ_POLICY_PERFORMANCE) { ++ int pstate = max(cpu->pstate.min_pstate, cpu->max_perf_ratio); ++ + /* + * NOHZ_FULL CPUs need this as the governor callback may not + * be invoked on them. + */ + intel_pstate_clear_update_util_hook(policy->cpu); +- intel_pstate_max_within_limits(cpu); ++ intel_pstate_set_pstate(cpu, pstate); + } else { + intel_pstate_set_update_util_hook(policy->cpu); + } +@@ -2660,10 +2630,9 @@ static void intel_pstate_verify_cpu_policy(struct cpudata *cpu, + { + int max_freq; + +- update_turbo_state(); + if (hwp_active) { + intel_pstate_get_hwp_cap(cpu); +- max_freq = global.no_turbo || global.turbo_disabled ? ++ max_freq = READ_ONCE(global.no_turbo) ? + cpu->pstate.max_freq : cpu->pstate.turbo_freq; + } else { + max_freq = intel_pstate_get_max_freq(cpu); +@@ -2757,8 +2726,6 @@ static int __intel_pstate_cpu_init(struct cpufreq_policy *policy) + + /* cpuinfo and default policy values */ + policy->cpuinfo.min_freq = cpu->pstate.min_freq; +- update_turbo_state(); +- global.turbo_disabled_mf = global.turbo_disabled; + policy->cpuinfo.max_freq = global.turbo_disabled ? + cpu->pstate.max_freq : cpu->pstate.turbo_freq; + +@@ -2924,8 +2891,6 @@ static int intel_cpufreq_target(struct cpufreq_policy *policy, + struct cpufreq_freqs freqs; + int target_pstate; + +- update_turbo_state(); +- + freqs.old = policy->cur; + freqs.new = target_freq; + +@@ -2947,8 +2912,6 @@ static unsigned int intel_cpufreq_fast_switch(struct cpufreq_policy *policy, + struct cpudata *cpu = all_cpu_data[policy->cpu]; + int target_pstate; + +- update_turbo_state(); +- + target_pstate = intel_pstate_freq_to_hwp(cpu, target_freq); + + target_pstate = intel_cpufreq_update_pstate(policy, target_pstate, true); +@@ -2966,7 +2929,6 @@ static void intel_cpufreq_adjust_perf(unsigned int cpunum, + int old_pstate = cpu->pstate.current_pstate; + int cap_pstate, min_pstate, max_pstate, target_pstate; + +- update_turbo_state(); + cap_pstate = global.turbo_disabled ? 
HWP_GUARANTEED_PERF(hwp_cap) : + HWP_HIGHEST_PERF(hwp_cap); + +@@ -3156,6 +3118,10 @@ static int intel_pstate_register_driver(struct cpufreq_driver *driver) + + memset(&global, 0, sizeof(global)); + global.max_perf_pct = 100; ++ global.turbo_disabled = turbo_is_disabled(); ++ global.no_turbo = global.turbo_disabled; ++ ++ arch_set_max_freq_ratio(global.turbo_disabled); + + intel_pstate_driver = driver; + ret = cpufreq_register_driver(intel_pstate_driver); +diff --git a/drivers/dma/mediatek/mtk-cqdma.c b/drivers/dma/mediatek/mtk-cqdma.c +index 324b7387b1b922..525bb92ced8f82 100644 +--- a/drivers/dma/mediatek/mtk-cqdma.c ++++ b/drivers/dma/mediatek/mtk-cqdma.c +@@ -420,15 +420,11 @@ static struct virt_dma_desc *mtk_cqdma_find_active_desc(struct dma_chan *c, + { + struct mtk_cqdma_vchan *cvc = to_cqdma_vchan(c); + struct virt_dma_desc *vd; +- unsigned long flags; + +- spin_lock_irqsave(&cvc->pc->lock, flags); + list_for_each_entry(vd, &cvc->pc->queue, node) + if (vd->tx.cookie == cookie) { +- spin_unlock_irqrestore(&cvc->pc->lock, flags); + return vd; + } +- spin_unlock_irqrestore(&cvc->pc->lock, flags); + + list_for_each_entry(vd, &cvc->vc.desc_issued, node) + if (vd->tx.cookie == cookie) +@@ -452,9 +448,11 @@ static enum dma_status mtk_cqdma_tx_status(struct dma_chan *c, + if (ret == DMA_COMPLETE || !txstate) + return ret; + +- spin_lock_irqsave(&cvc->vc.lock, flags); ++ spin_lock_irqsave(&cvc->pc->lock, flags); ++ spin_lock(&cvc->vc.lock); + vd = mtk_cqdma_find_active_desc(c, cookie); +- spin_unlock_irqrestore(&cvc->vc.lock, flags); ++ spin_unlock(&cvc->vc.lock); ++ spin_unlock_irqrestore(&cvc->pc->lock, flags); + + if (vd) { + cvd = to_cqdma_vdesc(vd); +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c +index ffa5e72a84ebcb..c83445c2e37f3d 100644 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c +@@ -291,21 +291,22 @@ static int psp_memory_training_init(struct psp_context *psp) + struct psp_memory_training_context *ctx = &psp->mem_train_ctx; + + if (ctx->init != PSP_MEM_TRAIN_RESERVE_SUCCESS) { +- DRM_DEBUG("memory training is not supported!\n"); ++ dev_dbg(psp->adev->dev, "memory training is not supported!\n"); + return 0; + } + + ctx->sys_cache = kzalloc(ctx->train_data_size, GFP_KERNEL); + if (ctx->sys_cache == NULL) { +- DRM_ERROR("alloc mem_train_ctx.sys_cache failed!\n"); ++ dev_err(psp->adev->dev, "alloc mem_train_ctx.sys_cache failed!\n"); + ret = -ENOMEM; + goto Err_out; + } + +- DRM_DEBUG("train_data_size:%llx,p2c_train_data_offset:%llx,c2p_train_data_offset:%llx.\n", +- ctx->train_data_size, +- ctx->p2c_train_data_offset, +- ctx->c2p_train_data_offset); ++ dev_dbg(psp->adev->dev, ++ "train_data_size:%llx,p2c_train_data_offset:%llx,c2p_train_data_offset:%llx.\n", ++ ctx->train_data_size, ++ ctx->p2c_train_data_offset, ++ ctx->c2p_train_data_offset); + ctx->init = PSP_MEM_TRAIN_INIT_SUCCESS; + return 0; + +@@ -407,8 +408,8 @@ static int psp_sw_init(void *handle) + + psp->cmd = kzalloc(sizeof(struct psp_gfx_cmd_resp), GFP_KERNEL); + if (!psp->cmd) { +- DRM_ERROR("Failed to allocate memory to command buffer!\n"); +- ret = -ENOMEM; ++ dev_err(adev->dev, "Failed to allocate memory to command buffer!\n"); ++ return -ENOMEM; + } + + adev->psp.xgmi_context.supports_extended_data = +@@ -454,13 +455,13 @@ static int psp_sw_init(void *handle) + if (mem_training_ctx->enable_mem_training) { + ret = psp_memory_training_init(psp); + if (ret) { +- DRM_ERROR("Failed to initialize memory training!\n"); ++ 
dev_err(adev->dev, "Failed to initialize memory training!\n"); + return ret; + } + + ret = psp_mem_training(psp, PSP_MEM_TRAIN_COLD_BOOT); + if (ret) { +- DRM_ERROR("Failed to process memory training!\n"); ++ dev_err(adev->dev, "Failed to process memory training!\n"); + return ret; + } + } +@@ -674,9 +675,11 @@ psp_cmd_submit_buf(struct psp_context *psp, + */ + if (!skip_unsupport && (psp->cmd_buf_mem->resp.status || !timeout) && !ras_intr) { + if (ucode) +- DRM_WARN("failed to load ucode %s(0x%X) ", +- amdgpu_ucode_name(ucode->ucode_id), ucode->ucode_id); +- DRM_WARN("psp gfx command %s(0x%X) failed and response status is (0x%X)\n", ++ dev_warn(psp->adev->dev, ++ "failed to load ucode %s(0x%X) ", ++ amdgpu_ucode_name(ucode->ucode_id), ucode->ucode_id); ++ dev_warn(psp->adev->dev, ++ "psp gfx command %s(0x%X) failed and response status is (0x%X)\n", + psp_gfx_cmd_name(psp->cmd_buf_mem->cmd_id), psp->cmd_buf_mem->cmd_id, + psp->cmd_buf_mem->resp.status); + /* If any firmware (including CAP) load fails under SRIOV, it should +@@ -806,7 +809,7 @@ static int psp_tmr_init(struct psp_context *psp) + psp->fw_pri_buf) { + ret = psp_load_toc(psp, &tmr_size); + if (ret) { +- DRM_ERROR("Failed to load toc\n"); ++ dev_err(psp->adev->dev, "Failed to load toc\n"); + return ret; + } + } +@@ -854,7 +857,7 @@ static int psp_tmr_load(struct psp_context *psp) + + psp_prep_tmr_cmd_buf(psp, cmd, psp->tmr_mc_addr, psp->tmr_bo); + if (psp->tmr_bo) +- DRM_INFO("reserve 0x%lx from 0x%llx for PSP TMR\n", ++ dev_info(psp->adev->dev, "reserve 0x%lx from 0x%llx for PSP TMR\n", + amdgpu_bo_size(psp->tmr_bo), psp->tmr_mc_addr); + + ret = psp_cmd_submit_buf(psp, NULL, cmd, +@@ -1112,7 +1115,7 @@ int psp_reg_program(struct psp_context *psp, enum psp_reg_prog_id reg, + psp_prep_reg_prog_cmd_buf(cmd, reg, value); + ret = psp_cmd_submit_buf(psp, NULL, cmd, psp->fence_buf_mc_addr); + if (ret) +- DRM_ERROR("PSP failed to program reg id %d", reg); ++ dev_err(psp->adev->dev, "PSP failed to program reg id %d\n", reg); + + release_psp_cmd_buf(psp); + +@@ -1492,22 +1495,22 @@ static void psp_ras_ta_check_status(struct psp_context *psp) + switch (ras_cmd->ras_status) { + case TA_RAS_STATUS__ERROR_UNSUPPORTED_IP: + dev_warn(psp->adev->dev, +- "RAS WARNING: cmd failed due to unsupported ip\n"); ++ "RAS WARNING: cmd failed due to unsupported ip\n"); + break; + case TA_RAS_STATUS__ERROR_UNSUPPORTED_ERROR_INJ: + dev_warn(psp->adev->dev, +- "RAS WARNING: cmd failed due to unsupported error injection\n"); ++ "RAS WARNING: cmd failed due to unsupported error injection\n"); + break; + case TA_RAS_STATUS__SUCCESS: + break; + case TA_RAS_STATUS__TEE_ERROR_ACCESS_DENIED: + if (ras_cmd->cmd_id == TA_RAS_COMMAND__TRIGGER_ERROR) + dev_warn(psp->adev->dev, +- "RAS WARNING: Inject error to critical region is not allowed\n"); ++ "RAS WARNING: Inject error to critical region is not allowed\n"); + break; + default: + dev_warn(psp->adev->dev, +- "RAS WARNING: ras status = 0x%X\n", ras_cmd->ras_status); ++ "RAS WARNING: ras status = 0x%X\n", ras_cmd->ras_status); + break; + } + } +@@ -1531,7 +1534,7 @@ int psp_ras_invoke(struct psp_context *psp, uint32_t ta_cmd_id) + return ret; + + if (ras_cmd->if_version > RAS_TA_HOST_IF_VER) { +- DRM_WARN("RAS: Unsupported Interface"); ++ dev_warn(psp->adev->dev, "RAS: Unsupported Interface\n"); + return -EINVAL; + } + +@@ -1681,7 +1684,7 @@ int psp_ras_initialize(struct psp_context *psp) + psp->ras_context.context.initialized = true; + else { + if (ras_cmd->ras_status) +- dev_warn(psp->adev->dev, "RAS Init Status: 
0x%X\n", ras_cmd->ras_status); ++ dev_warn(adev->dev, "RAS Init Status: 0x%X\n", ras_cmd->ras_status); + + /* fail to load RAS TA */ + psp->ras_context.context.initialized = false; +@@ -2101,7 +2104,7 @@ static int psp_hw_start(struct psp_context *psp) + (psp->funcs->bootloader_load_kdb != NULL)) { + ret = psp_bootloader_load_kdb(psp); + if (ret) { +- DRM_ERROR("PSP load kdb failed!\n"); ++ dev_err(adev->dev, "PSP load kdb failed!\n"); + return ret; + } + } +@@ -2110,7 +2113,7 @@ static int psp_hw_start(struct psp_context *psp) + (psp->funcs->bootloader_load_spl != NULL)) { + ret = psp_bootloader_load_spl(psp); + if (ret) { +- DRM_ERROR("PSP load spl failed!\n"); ++ dev_err(adev->dev, "PSP load spl failed!\n"); + return ret; + } + } +@@ -2119,7 +2122,7 @@ static int psp_hw_start(struct psp_context *psp) + (psp->funcs->bootloader_load_sysdrv != NULL)) { + ret = psp_bootloader_load_sysdrv(psp); + if (ret) { +- DRM_ERROR("PSP load sys drv failed!\n"); ++ dev_err(adev->dev, "PSP load sys drv failed!\n"); + return ret; + } + } +@@ -2128,7 +2131,7 @@ static int psp_hw_start(struct psp_context *psp) + (psp->funcs->bootloader_load_soc_drv != NULL)) { + ret = psp_bootloader_load_soc_drv(psp); + if (ret) { +- DRM_ERROR("PSP load soc drv failed!\n"); ++ dev_err(adev->dev, "PSP load soc drv failed!\n"); + return ret; + } + } +@@ -2137,7 +2140,7 @@ static int psp_hw_start(struct psp_context *psp) + (psp->funcs->bootloader_load_intf_drv != NULL)) { + ret = psp_bootloader_load_intf_drv(psp); + if (ret) { +- DRM_ERROR("PSP load intf drv failed!\n"); ++ dev_err(adev->dev, "PSP load intf drv failed!\n"); + return ret; + } + } +@@ -2146,7 +2149,7 @@ static int psp_hw_start(struct psp_context *psp) + (psp->funcs->bootloader_load_dbg_drv != NULL)) { + ret = psp_bootloader_load_dbg_drv(psp); + if (ret) { +- DRM_ERROR("PSP load dbg drv failed!\n"); ++ dev_err(adev->dev, "PSP load dbg drv failed!\n"); + return ret; + } + } +@@ -2155,7 +2158,7 @@ static int psp_hw_start(struct psp_context *psp) + (psp->funcs->bootloader_load_ras_drv != NULL)) { + ret = psp_bootloader_load_ras_drv(psp); + if (ret) { +- DRM_ERROR("PSP load ras_drv failed!\n"); ++ dev_err(adev->dev, "PSP load ras_drv failed!\n"); + return ret; + } + } +@@ -2164,7 +2167,7 @@ static int psp_hw_start(struct psp_context *psp) + (psp->funcs->bootloader_load_sos != NULL)) { + ret = psp_bootloader_load_sos(psp); + if (ret) { +- DRM_ERROR("PSP load sos failed!\n"); ++ dev_err(adev->dev, "PSP load sos failed!\n"); + return ret; + } + } +@@ -2172,7 +2175,7 @@ static int psp_hw_start(struct psp_context *psp) + + ret = psp_ring_create(psp, PSP_RING_TYPE__KM); + if (ret) { +- DRM_ERROR("PSP create ring failed!\n"); ++ dev_err(adev->dev, "PSP create ring failed!\n"); + return ret; + } + +@@ -2182,7 +2185,7 @@ static int psp_hw_start(struct psp_context *psp) + if (!psp_boottime_tmr(psp)) { + ret = psp_tmr_init(psp); + if (ret) { +- DRM_ERROR("PSP tmr init failed!\n"); ++ dev_err(adev->dev, "PSP tmr init failed!\n"); + return ret; + } + } +@@ -2201,7 +2204,7 @@ static int psp_hw_start(struct psp_context *psp) + + ret = psp_tmr_load(psp); + if (ret) { +- DRM_ERROR("PSP load tmr failed!\n"); ++ dev_err(adev->dev, "PSP load tmr failed!\n"); + return ret; + } + +@@ -2448,7 +2451,8 @@ static void psp_print_fw_hdr(struct psp_context *psp, + } + } + +-static int psp_prep_load_ip_fw_cmd_buf(struct amdgpu_firmware_info *ucode, ++static int psp_prep_load_ip_fw_cmd_buf(struct psp_context *psp, ++ struct amdgpu_firmware_info *ucode, + struct psp_gfx_cmd_resp *cmd) + { + int 
ret; +@@ -2461,7 +2465,7 @@ static int psp_prep_load_ip_fw_cmd_buf(struct amdgpu_firmware_info *ucode, + + ret = psp_get_fw_type(ucode, &cmd->cmd.cmd_load_ip_fw.fw_type); + if (ret) +- DRM_ERROR("Unknown firmware type\n"); ++ dev_err(psp->adev->dev, "Unknown firmware type\n"); + + return ret; + } +@@ -2472,7 +2476,7 @@ int psp_execute_ip_fw_load(struct psp_context *psp, + int ret = 0; + struct psp_gfx_cmd_resp *cmd = acquire_psp_cmd_buf(psp); + +- ret = psp_prep_load_ip_fw_cmd_buf(ucode, cmd); ++ ret = psp_prep_load_ip_fw_cmd_buf(psp, ucode, cmd); + if (!ret) { + ret = psp_cmd_submit_buf(psp, ucode, cmd, + psp->fence_buf_mc_addr); +@@ -2507,13 +2511,13 @@ static int psp_load_smu_fw(struct psp_context *psp) + adev->ip_versions[MP0_HWIP][0] == IP_VERSION(11, 0, 2)))) { + ret = amdgpu_dpm_set_mp1_state(adev, PP_MP1_STATE_UNLOAD); + if (ret) +- DRM_WARN("Failed to set MP1 state prepare for reload\n"); ++ dev_err(adev->dev, "Failed to set MP1 state prepare for reload\n"); + } + + ret = psp_execute_ip_fw_load(psp, ucode); + + if (ret) +- DRM_ERROR("PSP load smu failed!\n"); ++ dev_err(adev->dev, "PSP load smu failed!\n"); + + return ret; + } +@@ -2609,7 +2613,7 @@ static int psp_load_non_psp_fw(struct psp_context *psp) + adev->virt.autoload_ucode_id : AMDGPU_UCODE_ID_RLC_G)) { + ret = psp_rlc_autoload_start(psp); + if (ret) { +- DRM_ERROR("Failed to start rlc autoload\n"); ++ dev_err(adev->dev, "Failed to start rlc autoload\n"); + return ret; + } + } +@@ -2631,7 +2635,7 @@ static int psp_load_fw(struct amdgpu_device *adev) + + ret = psp_ring_init(psp, PSP_RING_TYPE__KM); + if (ret) { +- DRM_ERROR("PSP ring init failed!\n"); ++ dev_err(adev->dev, "PSP ring init failed!\n"); + goto failed; + } + } +@@ -2646,13 +2650,13 @@ static int psp_load_fw(struct amdgpu_device *adev) + + ret = psp_asd_initialize(psp); + if (ret) { +- DRM_ERROR("PSP load asd failed!\n"); ++ dev_err(adev->dev, "PSP load asd failed!\n"); + goto failed1; + } + + ret = psp_rl_load(adev); + if (ret) { +- DRM_ERROR("PSP load RL failed!\n"); ++ dev_err(adev->dev, "PSP load RL failed!\n"); + goto failed1; + } + +@@ -2672,7 +2676,7 @@ static int psp_load_fw(struct amdgpu_device *adev) + ret = psp_ras_initialize(psp); + if (ret) + dev_err(psp->adev->dev, +- "RAS: Failed to initialize RAS\n"); ++ "RAS: Failed to initialize RAS\n"); + + ret = psp_hdcp_initialize(psp); + if (ret) +@@ -2725,7 +2729,7 @@ static int psp_hw_init(void *handle) + + ret = psp_load_fw(adev); + if (ret) { +- DRM_ERROR("PSP firmware loading failed\n"); ++ dev_err(adev->dev, "PSP firmware loading failed\n"); + goto failed; + } + +@@ -2772,7 +2776,7 @@ static int psp_suspend(void *handle) + psp->xgmi_context.context.initialized) { + ret = psp_xgmi_terminate(psp); + if (ret) { +- DRM_ERROR("Failed to terminate xgmi ta\n"); ++ dev_err(adev->dev, "Failed to terminate xgmi ta\n"); + goto out; + } + } +@@ -2780,46 +2784,46 @@ static int psp_suspend(void *handle) + if (psp->ta_fw) { + ret = psp_ras_terminate(psp); + if (ret) { +- DRM_ERROR("Failed to terminate ras ta\n"); ++ dev_err(adev->dev, "Failed to terminate ras ta\n"); + goto out; + } + ret = psp_hdcp_terminate(psp); + if (ret) { +- DRM_ERROR("Failed to terminate hdcp ta\n"); ++ dev_err(adev->dev, "Failed to terminate hdcp ta\n"); + goto out; + } + ret = psp_dtm_terminate(psp); + if (ret) { +- DRM_ERROR("Failed to terminate dtm ta\n"); ++ dev_err(adev->dev, "Failed to terminate dtm ta\n"); + goto out; + } + ret = psp_rap_terminate(psp); + if (ret) { +- DRM_ERROR("Failed to terminate rap ta\n"); ++ dev_err(adev->dev, 
"Failed to terminate rap ta\n"); + goto out; + } + ret = psp_securedisplay_terminate(psp); + if (ret) { +- DRM_ERROR("Failed to terminate securedisplay ta\n"); ++ dev_err(adev->dev, "Failed to terminate securedisplay ta\n"); + goto out; + } + } + + ret = psp_asd_terminate(psp); + if (ret) { +- DRM_ERROR("Failed to terminate asd\n"); ++ dev_err(adev->dev, "Failed to terminate asd\n"); + goto out; + } + + ret = psp_tmr_terminate(psp); + if (ret) { +- DRM_ERROR("Failed to terminate tmr\n"); ++ dev_err(adev->dev, "Failed to terminate tmr\n"); + goto out; + } + + ret = psp_ring_stop(psp, PSP_RING_TYPE__KM); + if (ret) +- DRM_ERROR("PSP ring stop failed\n"); ++ dev_err(adev->dev, "PSP ring stop failed\n"); + + out: + return ret; +@@ -2831,12 +2835,12 @@ static int psp_resume(void *handle) + struct amdgpu_device *adev = (struct amdgpu_device *)handle; + struct psp_context *psp = &adev->psp; + +- DRM_INFO("PSP is resuming...\n"); ++ dev_info(adev->dev, "PSP is resuming...\n"); + + if (psp->mem_train_ctx.enable_mem_training) { + ret = psp_mem_training(psp, PSP_MEM_TRAIN_RESUME); + if (ret) { +- DRM_ERROR("Failed to process memory training!\n"); ++ dev_err(adev->dev, "Failed to process memory training!\n"); + return ret; + } + } +@@ -2853,7 +2857,7 @@ static int psp_resume(void *handle) + + ret = psp_asd_initialize(psp); + if (ret) { +- DRM_ERROR("PSP load asd failed!\n"); ++ dev_err(adev->dev, "PSP load asd failed!\n"); + goto failed; + } + +@@ -2877,7 +2881,7 @@ static int psp_resume(void *handle) + ret = psp_ras_initialize(psp); + if (ret) + dev_err(psp->adev->dev, +- "RAS: Failed to initialize RAS\n"); ++ "RAS: Failed to initialize RAS\n"); + + ret = psp_hdcp_initialize(psp); + if (ret) +@@ -2905,7 +2909,7 @@ static int psp_resume(void *handle) + return 0; + + failed: +- DRM_ERROR("PSP resume failed\n"); ++ dev_err(adev->dev, "PSP resume failed\n"); + mutex_unlock(&adev->firmware.mutex); + return ret; + } +@@ -2966,9 +2970,11 @@ int psp_ring_cmd_submit(struct psp_context *psp, + write_frame = ring_buffer_start + (psp_write_ptr_reg / rb_frame_size_dw); + /* Check invalid write_frame ptr address */ + if ((write_frame < ring_buffer_start) || (ring_buffer_end < write_frame)) { +- DRM_ERROR("ring_buffer_start = %p; ring_buffer_end = %p; write_frame = %p\n", +- ring_buffer_start, ring_buffer_end, write_frame); +- DRM_ERROR("write_frame is pointing to address out of bounds\n"); ++ dev_err(adev->dev, ++ "ring_buffer_start = %p; ring_buffer_end = %p; write_frame = %p\n", ++ ring_buffer_start, ring_buffer_end, write_frame); ++ dev_err(adev->dev, ++ "write_frame is pointing to address out of bounds\n"); + return -EINVAL; + } + +@@ -3495,7 +3501,7 @@ static ssize_t psp_usbc_pd_fw_sysfs_read(struct device *dev, + int ret; + + if (!adev->ip_blocks[AMD_IP_BLOCK_TYPE_PSP].status.late_initialized) { +- DRM_INFO("PSP block is not ready yet."); ++ dev_info(adev->dev, "PSP block is not ready yet\n."); + return -EBUSY; + } + +@@ -3504,7 +3510,7 @@ static ssize_t psp_usbc_pd_fw_sysfs_read(struct device *dev, + mutex_unlock(&adev->psp.mutex); + + if (ret) { +- DRM_ERROR("Failed to read USBC PD FW, err = %d", ret); ++ dev_err(adev->dev, "Failed to read USBC PD FW, err = %d\n", ret); + return ret; + } + +@@ -3526,7 +3532,7 @@ static ssize_t psp_usbc_pd_fw_sysfs_write(struct device *dev, + void *fw_pri_cpu_addr; + + if (!adev->ip_blocks[AMD_IP_BLOCK_TYPE_PSP].status.late_initialized) { +- DRM_INFO("PSP block is not ready yet."); ++ dev_err(adev->dev, "PSP block is not ready yet."); + return -EBUSY; + } + +@@ -3559,7 
+3565,7 @@ static ssize_t psp_usbc_pd_fw_sysfs_write(struct device *dev, + release_firmware(usbc_pd_fw); + fail: + if (ret) { +- DRM_ERROR("Failed to load USBC PD FW, err = %d", ret); ++ dev_err(adev->dev, "Failed to load USBC PD FW, err = %d", ret); + count = ret; + } + +@@ -3606,7 +3612,7 @@ static ssize_t amdgpu_psp_vbflash_write(struct file *filp, struct kobject *kobj, + + /* Safeguard against memory drain */ + if (adev->psp.vbflash_image_size > AMD_VBIOS_FILE_MAX_SIZE_B) { +- dev_err(adev->dev, "File size cannot exceed %u", AMD_VBIOS_FILE_MAX_SIZE_B); ++ dev_err(adev->dev, "File size cannot exceed %u\n", AMD_VBIOS_FILE_MAX_SIZE_B); + kvfree(adev->psp.vbflash_tmp_buf); + adev->psp.vbflash_tmp_buf = NULL; + adev->psp.vbflash_image_size = 0; +@@ -3625,7 +3631,7 @@ static ssize_t amdgpu_psp_vbflash_write(struct file *filp, struct kobject *kobj, + adev->psp.vbflash_image_size += count; + mutex_unlock(&adev->psp.mutex); + +- dev_dbg(adev->dev, "IFWI staged for update"); ++ dev_dbg(adev->dev, "IFWI staged for update\n"); + + return count; + } +@@ -3645,7 +3651,7 @@ static ssize_t amdgpu_psp_vbflash_read(struct file *filp, struct kobject *kobj, + if (adev->psp.vbflash_image_size == 0) + return -EINVAL; + +- dev_dbg(adev->dev, "PSP IFWI flash process initiated"); ++ dev_dbg(adev->dev, "PSP IFWI flash process initiated\n"); + + ret = amdgpu_bo_create_kernel(adev, adev->psp.vbflash_image_size, + AMDGPU_GPU_PAGE_SIZE, +@@ -3670,11 +3676,11 @@ static ssize_t amdgpu_psp_vbflash_read(struct file *filp, struct kobject *kobj, + adev->psp.vbflash_image_size = 0; + + if (ret) { +- dev_err(adev->dev, "Failed to load IFWI, err = %d", ret); ++ dev_err(adev->dev, "Failed to load IFWI, err = %d\n", ret); + return ret; + } + +- dev_dbg(adev->dev, "PSP IFWI flash process done"); ++ dev_dbg(adev->dev, "PSP IFWI flash process done\n"); + return 0; + } + +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c +index fded8902346f5d..2992ce494e000c 100644 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c +@@ -2125,11 +2125,13 @@ void amdgpu_vm_adjust_size(struct amdgpu_device *adev, uint32_t min_vm_size, + */ + long amdgpu_vm_wait_idle(struct amdgpu_vm *vm, long timeout) + { +- timeout = drm_sched_entity_flush(&vm->immediate, timeout); ++ timeout = dma_resv_wait_timeout(vm->root.bo->tbo.base.resv, ++ DMA_RESV_USAGE_BOOKKEEP, ++ true, timeout); + if (timeout <= 0) + return timeout; + +- return drm_sched_entity_flush(&vm->delayed, timeout); ++ return dma_fence_wait_timeout(vm->last_unlocked, true, timeout); + } + + /** +diff --git a/drivers/gpu/drm/amd/amdgpu/dce_v10_0.c b/drivers/gpu/drm/amd/amdgpu/dce_v10_0.c +index 584cd5277f9272..e2dd7d4361cf31 100644 +--- a/drivers/gpu/drm/amd/amdgpu/dce_v10_0.c ++++ b/drivers/gpu/drm/amd/amdgpu/dce_v10_0.c +@@ -1459,17 +1459,12 @@ static int dce_v10_0_audio_init(struct amdgpu_device *adev) + + static void dce_v10_0_audio_fini(struct amdgpu_device *adev) + { +- int i; +- + if (!amdgpu_audio) + return; + + if (!adev->mode_info.audio.enabled) + return; + +- for (i = 0; i < adev->mode_info.audio.num_pins; i++) +- dce_v10_0_audio_enable(adev, &adev->mode_info.audio.pin[i], false); +- + adev->mode_info.audio.enabled = false; + } + +diff --git a/drivers/gpu/drm/amd/amdgpu/dce_v11_0.c b/drivers/gpu/drm/amd/amdgpu/dce_v11_0.c +index c14b70350a51ae..7ce89654e12b42 100644 +--- a/drivers/gpu/drm/amd/amdgpu/dce_v11_0.c ++++ b/drivers/gpu/drm/amd/amdgpu/dce_v11_0.c +@@ -1508,17 +1508,12 @@ static int 
dce_v11_0_audio_init(struct amdgpu_device *adev) + + static void dce_v11_0_audio_fini(struct amdgpu_device *adev) + { +- int i; +- + if (!amdgpu_audio) + return; + + if (!adev->mode_info.audio.enabled) + return; + +- for (i = 0; i < adev->mode_info.audio.num_pins; i++) +- dce_v11_0_audio_enable(adev, &adev->mode_info.audio.pin[i], false); +- + adev->mode_info.audio.enabled = false; + } + +diff --git a/drivers/gpu/drm/amd/amdgpu/dce_v6_0.c b/drivers/gpu/drm/amd/amdgpu/dce_v6_0.c +index 7f85ba5b726f68..c3d05ab7b12ff1 100644 +--- a/drivers/gpu/drm/amd/amdgpu/dce_v6_0.c ++++ b/drivers/gpu/drm/amd/amdgpu/dce_v6_0.c +@@ -1377,17 +1377,12 @@ static int dce_v6_0_audio_init(struct amdgpu_device *adev) + + static void dce_v6_0_audio_fini(struct amdgpu_device *adev) + { +- int i; +- + if (!amdgpu_audio) + return; + + if (!adev->mode_info.audio.enabled) + return; + +- for (i = 0; i < adev->mode_info.audio.num_pins; i++) +- dce_v6_0_audio_enable(adev, &adev->mode_info.audio.pin[i], false); +- + adev->mode_info.audio.enabled = false; + } + +diff --git a/drivers/gpu/drm/amd/amdgpu/dce_v8_0.c b/drivers/gpu/drm/amd/amdgpu/dce_v8_0.c +index f2b3cb5ed6bec2..ce2300c3c36b40 100644 +--- a/drivers/gpu/drm/amd/amdgpu/dce_v8_0.c ++++ b/drivers/gpu/drm/amd/amdgpu/dce_v8_0.c +@@ -1426,17 +1426,12 @@ static int dce_v8_0_audio_init(struct amdgpu_device *adev) + + static void dce_v8_0_audio_fini(struct amdgpu_device *adev) + { +- int i; +- + if (!amdgpu_audio) + return; + + if (!adev->mode_info.audio.enabled) + return; + +- for (i = 0; i < adev->mode_info.audio.num_pins; i++) +- dce_v8_0_audio_enable(adev, &adev->mode_info.audio.pin[i], false); +- + adev->mode_info.audio.enabled = false; + } + +diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_link_encoder.c b/drivers/gpu/drm/amd/display/dc/dce/dce_link_encoder.c +index 136bd93c3b6554..0a33f8f117e921 100644 +--- a/drivers/gpu/drm/amd/display/dc/dce/dce_link_encoder.c ++++ b/drivers/gpu/drm/amd/display/dc/dce/dce_link_encoder.c +@@ -896,13 +896,13 @@ void dce110_link_encoder_construct( + enc110->base.id, &bp_cap_info); + + /* Override features with DCE-specific values */ +- if (BP_RESULT_OK == result) { ++ if (result == BP_RESULT_OK) { + enc110->base.features.flags.bits.IS_HBR2_CAPABLE = + bp_cap_info.DP_HBR2_EN; + enc110->base.features.flags.bits.IS_HBR3_CAPABLE = + bp_cap_info.DP_HBR3_EN; + enc110->base.features.flags.bits.HDMI_6GB_EN = bp_cap_info.HDMI_6GB_EN; +- } else { ++ } else if (result != BP_RESULT_NORECORD) { + DC_LOG_WARNING("%s: Failed to get encoder_cap_info from VBIOS with error code %d!\n", + __func__, + result); +@@ -1795,13 +1795,13 @@ void dce60_link_encoder_construct( + enc110->base.id, &bp_cap_info); + + /* Override features with DCE-specific values */ +- if (BP_RESULT_OK == result) { ++ if (result == BP_RESULT_OK) { + enc110->base.features.flags.bits.IS_HBR2_CAPABLE = + bp_cap_info.DP_HBR2_EN; + enc110->base.features.flags.bits.IS_HBR3_CAPABLE = + bp_cap_info.DP_HBR3_EN; + enc110->base.features.flags.bits.HDMI_6GB_EN = bp_cap_info.HDMI_6GB_EN; +- } else { ++ } else if (result != BP_RESULT_NORECORD) { + DC_LOG_WARNING("%s: Failed to get encoder_cap_info from VBIOS with error code %d!\n", + __func__, + result); +diff --git a/drivers/gpu/drm/bridge/ti-sn65dsi86.c b/drivers/gpu/drm/bridge/ti-sn65dsi86.c +index 59cbff209acd6e..560935f2e8cbe1 100644 +--- a/drivers/gpu/drm/bridge/ti-sn65dsi86.c ++++ b/drivers/gpu/drm/bridge/ti-sn65dsi86.c +@@ -375,6 +375,17 @@ static int __maybe_unused ti_sn65dsi86_resume(struct device *dev) + + 
gpiod_set_value_cansleep(pdata->enable_gpio, 1); + ++ /* ++ * After EN is deasserted and an external clock is detected, the bridge ++ * will sample GPIO3:1 to determine its frequency. The driver will ++ * overwrite this setting in ti_sn_bridge_set_refclk_freq(). But this is ++ * racy. Thus we have to wait a couple of us. According to the datasheet ++ * the GPIO lines has to be stable at least 5 us (td5) but it seems that ++ * is not enough and the refclk frequency value is still lost or ++ * overwritten by the bridge itself. Waiting for 20us seems to work. ++ */ ++ usleep_range(20, 30); ++ + /* + * If we have a reference clock we can enable communication w/ the + * panel (including the aux channel) w/out any need for an input clock +diff --git a/drivers/gpu/drm/mediatek/mtk_drm_drv.c b/drivers/gpu/drm/mediatek/mtk_drm_drv.c +index ef4fa70119de1a..bfa1070a5f08e2 100644 +--- a/drivers/gpu/drm/mediatek/mtk_drm_drv.c ++++ b/drivers/gpu/drm/mediatek/mtk_drm_drv.c +@@ -352,6 +352,7 @@ static bool mtk_drm_get_all_drm_priv(struct device *dev) + { + struct mtk_drm_private *drm_priv = dev_get_drvdata(dev); + struct mtk_drm_private *all_drm_priv[MAX_CRTC]; ++ struct mtk_drm_private *temp_drm_priv; + struct device_node *phandle = dev->parent->of_node; + const struct of_device_id *of_id; + struct device_node *node; +@@ -364,24 +365,41 @@ static bool mtk_drm_get_all_drm_priv(struct device *dev) + + of_id = of_match_node(mtk_drm_of_ids, node); + if (!of_id) +- continue; ++ goto next_put_node; + + pdev = of_find_device_by_node(node); + if (!pdev) +- continue; ++ goto next_put_node; + + drm_dev = device_find_child(&pdev->dev, NULL, mtk_drm_match); +- if (!drm_dev || !dev_get_drvdata(drm_dev)) +- continue; ++ if (!drm_dev) ++ goto next_put_device_pdev_dev; ++ ++ temp_drm_priv = dev_get_drvdata(drm_dev); ++ if (!temp_drm_priv) ++ goto next_put_device_drm_dev; ++ ++ if (temp_drm_priv->data->main_len) ++ all_drm_priv[CRTC_MAIN] = temp_drm_priv; ++ else if (temp_drm_priv->data->ext_len) ++ all_drm_priv[CRTC_EXT] = temp_drm_priv; ++ else if (temp_drm_priv->data->third_len) ++ all_drm_priv[CRTC_THIRD] = temp_drm_priv; + +- all_drm_priv[cnt] = dev_get_drvdata(drm_dev); +- if (all_drm_priv[cnt] && all_drm_priv[cnt]->mtk_drm_bound) ++ if (temp_drm_priv->mtk_drm_bound) + cnt++; + +- if (cnt == MAX_CRTC) { +- of_node_put(node); ++next_put_device_drm_dev: ++ put_device(drm_dev); ++ ++next_put_device_pdev_dev: ++ put_device(&pdev->dev); ++ ++next_put_node: ++ of_node_put(node); ++ ++ if (cnt == MAX_CRTC) + break; +- } + } + + if (drm_priv->data->mmsys_dev_num == cnt) { +@@ -475,21 +493,21 @@ static int mtk_drm_kms_init(struct drm_device *drm) + for (j = 0; j < private->data->mmsys_dev_num; j++) { + priv_n = private->all_drm_private[j]; + +- if (i == 0 && priv_n->data->main_len) { ++ if (i == CRTC_MAIN && priv_n->data->main_len) { + ret = mtk_drm_crtc_create(drm, priv_n->data->main_path, + priv_n->data->main_len, j); + if (ret) + goto err_component_unbind; + + continue; +- } else if (i == 1 && priv_n->data->ext_len) { ++ } else if (i == CRTC_EXT && priv_n->data->ext_len) { + ret = mtk_drm_crtc_create(drm, priv_n->data->ext_path, + priv_n->data->ext_len, j); + if (ret) + goto err_component_unbind; + + continue; +- } else if (i == 2 && priv_n->data->third_len) { ++ } else if (i == CRTC_THIRD && priv_n->data->third_len) { + ret = mtk_drm_crtc_create(drm, priv_n->data->third_path, + priv_n->data->third_len, j); + if (ret) +diff --git a/drivers/gpu/drm/mediatek/mtk_drm_drv.h b/drivers/gpu/drm/mediatek/mtk_drm_drv.h +index 
eb2fd45941f09d..f4de8bb2768503 100644 +--- a/drivers/gpu/drm/mediatek/mtk_drm_drv.h ++++ b/drivers/gpu/drm/mediatek/mtk_drm_drv.h +@@ -9,11 +9,17 @@ + #include + #include "mtk_drm_ddp_comp.h" + +-#define MAX_CRTC 3 + #define MAX_CONNECTOR 2 + #define DDP_COMPONENT_DRM_OVL_ADAPTOR (DDP_COMPONENT_ID_MAX + 1) + #define DDP_COMPONENT_DRM_ID_MAX (DDP_COMPONENT_DRM_OVL_ADAPTOR + 1) + ++enum mtk_drm_crtc_path { ++ CRTC_MAIN, ++ CRTC_EXT, ++ CRTC_THIRD, ++ MAX_CRTC, ++}; ++ + struct device; + struct device_node; + struct drm_crtc; +diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/fifo/base.c b/drivers/gpu/drm/nouveau/nvkm/engine/fifo/base.c +index 5db37247dc29b2..572c54a3709139 100644 +--- a/drivers/gpu/drm/nouveau/nvkm/engine/fifo/base.c ++++ b/drivers/gpu/drm/nouveau/nvkm/engine/fifo/base.c +@@ -348,6 +348,8 @@ nvkm_fifo_dtor(struct nvkm_engine *engine) + nvkm_chid_unref(&fifo->chid); + + nvkm_event_fini(&fifo->nonstall.event); ++ if (fifo->func->nonstall_dtor) ++ fifo->func->nonstall_dtor(fifo); + mutex_destroy(&fifo->mutex); + return fifo; + } +diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/fifo/ga100.c b/drivers/gpu/drm/nouveau/nvkm/engine/fifo/ga100.c +index c56d2a839efbaf..686a2c9fec46d8 100644 +--- a/drivers/gpu/drm/nouveau/nvkm/engine/fifo/ga100.c ++++ b/drivers/gpu/drm/nouveau/nvkm/engine/fifo/ga100.c +@@ -516,19 +516,11 @@ ga100_fifo_nonstall_intr(struct nvkm_inth *inth) + static void + ga100_fifo_nonstall_block(struct nvkm_event *event, int type, int index) + { +- struct nvkm_fifo *fifo = container_of(event, typeof(*fifo), nonstall.event); +- struct nvkm_runl *runl = nvkm_runl_get(fifo, index, 0); +- +- nvkm_inth_block(&runl->nonstall.inth); + } + + static void + ga100_fifo_nonstall_allow(struct nvkm_event *event, int type, int index) + { +- struct nvkm_fifo *fifo = container_of(event, typeof(*fifo), nonstall.event); +- struct nvkm_runl *runl = nvkm_runl_get(fifo, index, 0); +- +- nvkm_inth_allow(&runl->nonstall.inth); + } + + const struct nvkm_event_func +@@ -559,12 +551,26 @@ ga100_fifo_nonstall_ctor(struct nvkm_fifo *fifo) + if (ret) + return ret; + ++ nvkm_inth_allow(&runl->nonstall.inth); ++ + nr = max(nr, runl->id + 1); + } + + return nr; + } + ++void ++ga100_fifo_nonstall_dtor(struct nvkm_fifo *fifo) ++{ ++ struct nvkm_runl *runl; ++ ++ nvkm_runl_foreach(runl, fifo) { ++ if (runl->nonstall.vector < 0) ++ continue; ++ nvkm_inth_block(&runl->nonstall.inth); ++ } ++} ++ + int + ga100_fifo_runl_ctor(struct nvkm_fifo *fifo) + { +@@ -594,6 +600,7 @@ ga100_fifo = { + .runl_ctor = ga100_fifo_runl_ctor, + .mmu_fault = &tu102_fifo_mmu_fault, + .nonstall_ctor = ga100_fifo_nonstall_ctor, ++ .nonstall_dtor = ga100_fifo_nonstall_dtor, + .nonstall = &ga100_fifo_nonstall, + .runl = &ga100_runl, + .runq = &ga100_runq, +diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/fifo/ga102.c b/drivers/gpu/drm/nouveau/nvkm/engine/fifo/ga102.c +index 2cdf5da339b60b..dccf38101fd9e7 100644 +--- a/drivers/gpu/drm/nouveau/nvkm/engine/fifo/ga102.c ++++ b/drivers/gpu/drm/nouveau/nvkm/engine/fifo/ga102.c +@@ -28,6 +28,7 @@ ga102_fifo = { + .runl_ctor = ga100_fifo_runl_ctor, + .mmu_fault = &tu102_fifo_mmu_fault, + .nonstall_ctor = ga100_fifo_nonstall_ctor, ++ .nonstall_dtor = ga100_fifo_nonstall_dtor, + .nonstall = &ga100_fifo_nonstall, + .runl = &ga100_runl, + .runq = &ga100_runq, +diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/fifo/priv.h b/drivers/gpu/drm/nouveau/nvkm/engine/fifo/priv.h +index 4d448be19224a8..b4ccf6b8bd21a1 100644 +--- a/drivers/gpu/drm/nouveau/nvkm/engine/fifo/priv.h ++++ 
b/drivers/gpu/drm/nouveau/nvkm/engine/fifo/priv.h +@@ -38,6 +38,7 @@ struct nvkm_fifo_func { + void (*start)(struct nvkm_fifo *, unsigned long *); + + int (*nonstall_ctor)(struct nvkm_fifo *); ++ void (*nonstall_dtor)(struct nvkm_fifo *); + const struct nvkm_event_func *nonstall; + + const struct nvkm_runl_func *runl; +@@ -194,6 +195,7 @@ extern const struct nvkm_fifo_func_mmu_fault tu102_fifo_mmu_fault; + + int ga100_fifo_runl_ctor(struct nvkm_fifo *); + int ga100_fifo_nonstall_ctor(struct nvkm_fifo *); ++void ga100_fifo_nonstall_dtor(struct nvkm_fifo *); + extern const struct nvkm_event_func ga100_fifo_nonstall; + extern const struct nvkm_runl_func ga100_runl; + extern const struct nvkm_runq_func ga100_runq; +diff --git a/drivers/hwmon/mlxreg-fan.c b/drivers/hwmon/mlxreg-fan.c +index a5f89aab3fb4d2..c25a54d5b39ad5 100644 +--- a/drivers/hwmon/mlxreg-fan.c ++++ b/drivers/hwmon/mlxreg-fan.c +@@ -561,15 +561,14 @@ static int mlxreg_fan_cooling_config(struct device *dev, struct mlxreg_fan *fan) + if (!pwm->connected) + continue; + pwm->fan = fan; ++ /* Set minimal PWM speed. */ ++ pwm->last_hwmon_state = MLXREG_FAN_PWM_DUTY2STATE(MLXREG_FAN_MIN_DUTY); + pwm->cdev = devm_thermal_of_cooling_device_register(dev, NULL, mlxreg_fan_name[i], + pwm, &mlxreg_fan_cooling_ops); + if (IS_ERR(pwm->cdev)) { + dev_err(dev, "Failed to register cooling device\n"); + return PTR_ERR(pwm->cdev); + } +- +- /* Set minimal PWM speed. */ +- pwm->last_hwmon_state = MLXREG_FAN_PWM_DUTY2STATE(MLXREG_FAN_MIN_DUTY); + } + + return 0; +diff --git a/drivers/iio/chemical/pms7003.c b/drivers/iio/chemical/pms7003.c +index e9857d93b307e4..70c92cbfc9f141 100644 +--- a/drivers/iio/chemical/pms7003.c ++++ b/drivers/iio/chemical/pms7003.c +@@ -5,7 +5,6 @@ + * Copyright (c) Tomasz Duszynski + */ + +-#include + #include + #include + #include +@@ -19,6 +18,8 @@ + #include + #include + #include ++#include ++#include + + #define PMS7003_DRIVER_NAME "pms7003" + +@@ -76,7 +77,7 @@ struct pms7003_state { + /* Used to construct scan to push to the IIO buffer */ + struct { + u16 data[3]; /* PM1, PM2P5, PM10 */ +- s64 ts; ++ aligned_s64 ts; + } scan; + }; + +diff --git a/drivers/iio/imu/inv_mpu6050/inv_mpu_ring.c b/drivers/iio/imu/inv_mpu6050/inv_mpu_ring.c +index d4f9b5d8d28d6d..ace3ce4faea73a 100644 +--- a/drivers/iio/imu/inv_mpu6050/inv_mpu_ring.c ++++ b/drivers/iio/imu/inv_mpu6050/inv_mpu_ring.c +@@ -52,7 +52,7 @@ irqreturn_t inv_mpu6050_read_fifo(int irq, void *p) + u16 fifo_count; + u32 fifo_period; + s64 timestamp; +- u8 data[INV_MPU6050_OUTPUT_DATA_SIZE]; ++ u8 data[INV_MPU6050_OUTPUT_DATA_SIZE] __aligned(8); + int int_status; + size_t i, nb; + +diff --git a/drivers/iio/light/opt3001.c b/drivers/iio/light/opt3001.c +index dc529cbe3805e2..25a45c4251fbd0 100644 +--- a/drivers/iio/light/opt3001.c ++++ b/drivers/iio/light/opt3001.c +@@ -692,8 +692,9 @@ static irqreturn_t opt3001_irq(int irq, void *_iio) + struct opt3001 *opt = iio_priv(iio); + int ret; + bool wake_result_ready_queue = false; ++ bool ok_to_ignore_lock = opt->ok_to_ignore_lock; + +- if (!opt->ok_to_ignore_lock) ++ if (!ok_to_ignore_lock) + mutex_lock(&opt->lock); + + ret = i2c_smbus_read_word_swapped(opt->client, OPT3001_CONFIGURATION); +@@ -730,7 +731,7 @@ static irqreturn_t opt3001_irq(int irq, void *_iio) + } + + out: +- if (!opt->ok_to_ignore_lock) ++ if (!ok_to_ignore_lock) + mutex_unlock(&opt->lock); + + if (wake_result_ready_queue) +diff --git a/drivers/iio/pressure/mprls0025pa.c b/drivers/iio/pressure/mprls0025pa.c +index e3f0de020a40c9..829c472812e49b 100644 +--- 
a/drivers/iio/pressure/mprls0025pa.c ++++ b/drivers/iio/pressure/mprls0025pa.c +@@ -87,11 +87,6 @@ static const struct mpr_func_spec mpr_func_spec[] = { + [MPR_FUNCTION_C] = {.output_min = 3355443, .output_max = 13421773}, + }; + +-struct mpr_chan { +- s32 pres; /* pressure value */ +- s64 ts; /* timestamp */ +-}; +- + struct mpr_data { + struct i2c_client *client; + struct mutex lock; /* +@@ -120,7 +115,10 @@ struct mpr_data { + * loop until data is ready + */ + struct completion completion; /* handshake from irq to read */ +- struct mpr_chan chan; /* ++ struct { ++ s32 pres; /* pressure value */ ++ aligned_s64 ts; /* timestamp */ ++ } chan; /* + * channel values for buffered + * mode + */ +diff --git a/drivers/isdn/mISDN/dsp_hwec.c b/drivers/isdn/mISDN/dsp_hwec.c +index 0b3f29195330ac..0cd216e28f0090 100644 +--- a/drivers/isdn/mISDN/dsp_hwec.c ++++ b/drivers/isdn/mISDN/dsp_hwec.c +@@ -51,14 +51,14 @@ void dsp_hwec_enable(struct dsp *dsp, const char *arg) + goto _do; + + { +- char *dup, *tok, *name, *val; ++ char *dup, *next, *tok, *name, *val; + int tmp; + +- dup = kstrdup(arg, GFP_ATOMIC); ++ dup = next = kstrdup(arg, GFP_ATOMIC); + if (!dup) + return; + +- while ((tok = strsep(&dup, ","))) { ++ while ((tok = strsep(&next, ","))) { + if (!strlen(tok)) + continue; + name = strsep(&tok, "="); +diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c +index 534e7f7bca4c2f..b836ab2a649a2b 100644 +--- a/drivers/net/ethernet/cadence/macb_main.c ++++ b/drivers/net/ethernet/cadence/macb_main.c +@@ -1234,11 +1234,12 @@ static int macb_tx_complete(struct macb_queue *queue, int budget) + { + struct macb *bp = queue->bp; + u16 queue_index = queue - bp->queues; ++ unsigned long flags; + unsigned int tail; + unsigned int head; + int packets = 0; + +- spin_lock(&queue->tx_ptr_lock); ++ spin_lock_irqsave(&queue->tx_ptr_lock, flags); + head = queue->tx_head; + for (tail = queue->tx_tail; tail != head && packets < budget; tail++) { + struct macb_tx_skb *tx_skb; +@@ -1297,7 +1298,7 @@ static int macb_tx_complete(struct macb_queue *queue, int budget) + CIRC_CNT(queue->tx_head, queue->tx_tail, + bp->tx_ring_size) <= MACB_TX_WAKEUP_THRESH(bp)) + netif_wake_subqueue(bp->dev, queue_index); +- spin_unlock(&queue->tx_ptr_lock); ++ spin_unlock_irqrestore(&queue->tx_ptr_lock, flags); + + return packets; + } +@@ -1713,8 +1714,9 @@ static void macb_tx_restart(struct macb_queue *queue) + { + struct macb *bp = queue->bp; + unsigned int head_idx, tbqp; ++ unsigned long flags; + +- spin_lock(&queue->tx_ptr_lock); ++ spin_lock_irqsave(&queue->tx_ptr_lock, flags); + + if (queue->tx_head == queue->tx_tail) + goto out_tx_ptr_unlock; +@@ -1726,19 +1728,20 @@ static void macb_tx_restart(struct macb_queue *queue) + if (tbqp == head_idx) + goto out_tx_ptr_unlock; + +- spin_lock_irq(&bp->lock); ++ spin_lock(&bp->lock); + macb_writel(bp, NCR, macb_readl(bp, NCR) | MACB_BIT(TSTART)); +- spin_unlock_irq(&bp->lock); ++ spin_unlock(&bp->lock); + + out_tx_ptr_unlock: +- spin_unlock(&queue->tx_ptr_lock); ++ spin_unlock_irqrestore(&queue->tx_ptr_lock, flags); + } + + static bool macb_tx_complete_pending(struct macb_queue *queue) + { + bool retval = false; ++ unsigned long flags; + +- spin_lock(&queue->tx_ptr_lock); ++ spin_lock_irqsave(&queue->tx_ptr_lock, flags); + if (queue->tx_head != queue->tx_tail) { + /* Make hw descriptor updates visible to CPU */ + rmb(); +@@ -1746,7 +1749,7 @@ static bool macb_tx_complete_pending(struct macb_queue *queue) + if (macb_tx_desc(queue, queue->tx_tail)->ctrl & 
MACB_BIT(TX_USED)) + retval = true; + } +- spin_unlock(&queue->tx_ptr_lock); ++ spin_unlock_irqrestore(&queue->tx_ptr_lock, flags); + return retval; + } + +@@ -2314,6 +2317,7 @@ static netdev_tx_t macb_start_xmit(struct sk_buff *skb, struct net_device *dev) + struct macb_queue *queue = &bp->queues[queue_index]; + unsigned int desc_cnt, nr_frags, frag_size, f; + unsigned int hdrlen; ++ unsigned long flags; + bool is_lso; + netdev_tx_t ret = NETDEV_TX_OK; + +@@ -2374,7 +2378,7 @@ static netdev_tx_t macb_start_xmit(struct sk_buff *skb, struct net_device *dev) + desc_cnt += DIV_ROUND_UP(frag_size, bp->max_tx_length); + } + +- spin_lock_bh(&queue->tx_ptr_lock); ++ spin_lock_irqsave(&queue->tx_ptr_lock, flags); + + /* This is a hard error, log it. */ + if (CIRC_SPACE(queue->tx_head, queue->tx_tail, +@@ -2396,15 +2400,15 @@ static netdev_tx_t macb_start_xmit(struct sk_buff *skb, struct net_device *dev) + wmb(); + skb_tx_timestamp(skb); + +- spin_lock_irq(&bp->lock); ++ spin_lock(&bp->lock); + macb_writel(bp, NCR, macb_readl(bp, NCR) | MACB_BIT(TSTART)); +- spin_unlock_irq(&bp->lock); ++ spin_unlock(&bp->lock); + + if (CIRC_SPACE(queue->tx_head, queue->tx_tail, bp->tx_ring_size) < 1) + netif_stop_subqueue(dev, queue_index); + + unlock: +- spin_unlock_bh(&queue->tx_ptr_lock); ++ spin_unlock_irqrestore(&queue->tx_ptr_lock, flags); + + return ret; + } +diff --git a/drivers/net/ethernet/cavium/thunder/thunder_bgx.c b/drivers/net/ethernet/cavium/thunder/thunder_bgx.c +index 087d4c2b3efd1a..a423a938821156 100644 +--- a/drivers/net/ethernet/cavium/thunder/thunder_bgx.c ++++ b/drivers/net/ethernet/cavium/thunder/thunder_bgx.c +@@ -1491,13 +1491,17 @@ static int bgx_init_of_phy(struct bgx *bgx) + * this cortina phy, for which there is no driver + * support, ignore it. + */ +- if (phy_np && +- !of_device_is_compatible(phy_np, "cortina,cs4223-slice")) { +- /* Wait until the phy drivers are available */ +- pd = of_phy_find_device(phy_np); +- if (!pd) +- goto defer; +- bgx->lmac[lmac].phydev = pd; ++ if (phy_np) { ++ if (!of_device_is_compatible(phy_np, "cortina,cs4223-slice")) { ++ /* Wait until the phy drivers are available */ ++ pd = of_phy_find_device(phy_np); ++ if (!pd) { ++ of_node_put(phy_np); ++ goto defer; ++ } ++ bgx->lmac[lmac].phydev = pd; ++ } ++ of_node_put(phy_np); + } + + lmac++; +@@ -1513,11 +1517,11 @@ static int bgx_init_of_phy(struct bgx *bgx) + * for phy devices we may have already found. 
+ */ + while (lmac) { ++ lmac--; + if (bgx->lmac[lmac].phydev) { + put_device(&bgx->lmac[lmac].phydev->mdio.dev); + bgx->lmac[lmac].phydev = NULL; + } +- lmac--; + } + of_node_put(node); + return -EPROBE_DEFER; +diff --git a/drivers/net/ethernet/intel/e1000e/ethtool.c b/drivers/net/ethernet/intel/e1000e/ethtool.c +index fc0f98ea61332f..a1abc51584a183 100644 +--- a/drivers/net/ethernet/intel/e1000e/ethtool.c ++++ b/drivers/net/ethernet/intel/e1000e/ethtool.c +@@ -567,12 +567,12 @@ static int e1000_set_eeprom(struct net_device *netdev, + { + struct e1000_adapter *adapter = netdev_priv(netdev); + struct e1000_hw *hw = &adapter->hw; ++ size_t total_len, max_len; + u16 *eeprom_buff; +- void *ptr; +- int max_len; ++ int ret_val = 0; + int first_word; + int last_word; +- int ret_val = 0; ++ void *ptr; + u16 i; + + if (eeprom->len == 0) +@@ -587,6 +587,10 @@ static int e1000_set_eeprom(struct net_device *netdev, + + max_len = hw->nvm.word_size * 2; + ++ if (check_add_overflow(eeprom->offset, eeprom->len, &total_len) || ++ total_len > max_len) ++ return -EFBIG; ++ + first_word = eeprom->offset >> 1; + last_word = (eeprom->offset + eeprom->len - 1) >> 1; + eeprom_buff = kmalloc(max_len, GFP_KERNEL); +diff --git a/drivers/net/ethernet/intel/i40e/i40e_client.c b/drivers/net/ethernet/intel/i40e/i40e_client.c +index 306758428aefd7..a569d2fcc90af4 100644 +--- a/drivers/net/ethernet/intel/i40e/i40e_client.c ++++ b/drivers/net/ethernet/intel/i40e/i40e_client.c +@@ -361,8 +361,8 @@ static void i40e_client_add_instance(struct i40e_pf *pf) + if (i40e_client_get_params(vsi, &cdev->lan_info.params)) + goto free_cdev; + +- mac = list_first_entry(&cdev->lan_info.netdev->dev_addrs.list, +- struct netdev_hw_addr, list); ++ mac = list_first_entry_or_null(&cdev->lan_info.netdev->dev_addrs.list, ++ struct netdev_hw_addr, list); + if (mac) + ether_addr_copy(cdev->lan_info.lanmac, mac->addr); + else +diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c +index cb8efc952dfda9..aefe2af6f01d41 100644 +--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c ++++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c +@@ -1586,6 +1586,13 @@ static netdev_tx_t mtk_start_xmit(struct sk_buff *skb, struct net_device *dev) + bool gso = false; + int tx_num; + ++ if (skb_vlan_tag_present(skb) && ++ !eth_proto_is_802_3(eth_hdr(skb)->h_proto)) { ++ skb = __vlan_hwaccel_push_inside(skb); ++ if (!skb) ++ goto dropped; ++ } ++ + /* normally we can rely on the stack not calling this more than once, + * however we have 2 queues running on the same ring so we need to lock + * the ring access +@@ -1631,8 +1638,9 @@ static netdev_tx_t mtk_start_xmit(struct sk_buff *skb, struct net_device *dev) + + drop: + spin_unlock(ð->page_lock); +- stats->tx_dropped++; + dev_kfree_skb_any(skb); ++dropped: ++ stats->tx_dropped++; + return NETDEV_TX_OK; + } + +diff --git a/drivers/net/ethernet/xircom/xirc2ps_cs.c b/drivers/net/ethernet/xircom/xirc2ps_cs.c +index 9f505cf02d9651..2dc1cfcd7ce99b 100644 +--- a/drivers/net/ethernet/xircom/xirc2ps_cs.c ++++ b/drivers/net/ethernet/xircom/xirc2ps_cs.c +@@ -1578,7 +1578,7 @@ do_reset(struct net_device *dev, int full) + msleep(40); /* wait 40 msec to let it complete */ + } + if (full_duplex) +- PutByte(XIRCREG1_ECR, GetByte(XIRCREG1_ECR | FullDuplex)); ++ PutByte(XIRCREG1_ECR, GetByte(XIRCREG1_ECR) | FullDuplex); + } else { /* No MII */ + SelectPage(0); + value = GetByte(XIRCREG_ESR); /* read the ESR */ +diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c +index 
767053d6c6b6f9..af6cc3e90ef7ce 100644 +--- a/drivers/net/macsec.c ++++ b/drivers/net/macsec.c +@@ -1840,7 +1840,7 @@ static int macsec_add_rxsa(struct sk_buff *skb, struct genl_info *info) + + if (tb_sa[MACSEC_SA_ATTR_PN]) { + spin_lock_bh(&rx_sa->lock); +- rx_sa->next_pn = nla_get_u64(tb_sa[MACSEC_SA_ATTR_PN]); ++ rx_sa->next_pn = nla_get_uint(tb_sa[MACSEC_SA_ATTR_PN]); + spin_unlock_bh(&rx_sa->lock); + } + +@@ -2082,7 +2082,7 @@ static int macsec_add_txsa(struct sk_buff *skb, struct genl_info *info) + } + + spin_lock_bh(&tx_sa->lock); +- tx_sa->next_pn = nla_get_u64(tb_sa[MACSEC_SA_ATTR_PN]); ++ tx_sa->next_pn = nla_get_uint(tb_sa[MACSEC_SA_ATTR_PN]); + spin_unlock_bh(&tx_sa->lock); + + if (tb_sa[MACSEC_SA_ATTR_ACTIVE]) +@@ -2394,7 +2394,7 @@ static int macsec_upd_txsa(struct sk_buff *skb, struct genl_info *info) + + spin_lock_bh(&tx_sa->lock); + prev_pn = tx_sa->next_pn_halves; +- tx_sa->next_pn = nla_get_u64(tb_sa[MACSEC_SA_ATTR_PN]); ++ tx_sa->next_pn = nla_get_uint(tb_sa[MACSEC_SA_ATTR_PN]); + spin_unlock_bh(&tx_sa->lock); + } + +@@ -2492,7 +2492,7 @@ static int macsec_upd_rxsa(struct sk_buff *skb, struct genl_info *info) + + spin_lock_bh(&rx_sa->lock); + prev_pn = rx_sa->next_pn_halves; +- rx_sa->next_pn = nla_get_u64(tb_sa[MACSEC_SA_ATTR_PN]); ++ rx_sa->next_pn = nla_get_uint(tb_sa[MACSEC_SA_ATTR_PN]); + spin_unlock_bh(&rx_sa->lock); + } + +diff --git a/drivers/net/pcs/pcs-rzn1-miic.c b/drivers/net/pcs/pcs-rzn1-miic.c +index 97139c07130fc9..b65682b8b6cd9a 100644 +--- a/drivers/net/pcs/pcs-rzn1-miic.c ++++ b/drivers/net/pcs/pcs-rzn1-miic.c +@@ -19,7 +19,7 @@ + #define MIIC_PRCMD 0x0 + #define MIIC_ESID_CODE 0x4 + +-#define MIIC_MODCTRL 0x20 ++#define MIIC_MODCTRL 0x8 + #define MIIC_MODCTRL_SW_MODE GENMASK(4, 0) + + #define MIIC_CONVCTRL(port) (0x100 + (port) * 4) +diff --git a/drivers/net/phy/mscc/mscc_ptp.c b/drivers/net/phy/mscc/mscc_ptp.c +index 1f6237705b44b7..939a8a17595ef9 100644 +--- a/drivers/net/phy/mscc/mscc_ptp.c ++++ b/drivers/net/phy/mscc/mscc_ptp.c +@@ -455,12 +455,12 @@ static void vsc85xx_dequeue_skb(struct vsc85xx_ptp *ptp) + *p++ = (reg >> 24) & 0xff; + } + +- len = skb_queue_len(&ptp->tx_queue); ++ len = skb_queue_len_lockless(&ptp->tx_queue); + if (len < 1) + return; + + while (len--) { +- skb = __skb_dequeue(&ptp->tx_queue); ++ skb = skb_dequeue(&ptp->tx_queue); + if (!skb) + return; + +@@ -485,7 +485,7 @@ static void vsc85xx_dequeue_skb(struct vsc85xx_ptp *ptp) + * packet in the FIFO right now, reschedule it for later + * packets. 
+ */ +- __skb_queue_tail(&ptp->tx_queue, skb); ++ skb_queue_tail(&ptp->tx_queue, skb); + } + } + +@@ -1067,6 +1067,7 @@ static int vsc85xx_hwtstamp(struct mii_timestamper *mii_ts, struct ifreq *ifr) + case HWTSTAMP_TX_ON: + break; + case HWTSTAMP_TX_OFF: ++ skb_queue_purge(&vsc8531->ptp->tx_queue); + break; + default: + return -ERANGE; +@@ -1091,9 +1092,6 @@ static int vsc85xx_hwtstamp(struct mii_timestamper *mii_ts, struct ifreq *ifr) + + mutex_lock(&vsc8531->ts_lock); + +- __skb_queue_purge(&vsc8531->ptp->tx_queue); +- __skb_queue_head_init(&vsc8531->ptp->tx_queue); +- + /* Disable predictor while configuring the 1588 block */ + val = vsc85xx_ts_read_csr(phydev, PROCESSOR, + MSCC_PHY_PTP_INGR_PREDICTOR); +@@ -1179,9 +1177,7 @@ static void vsc85xx_txtstamp(struct mii_timestamper *mii_ts, + + skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS; + +- mutex_lock(&vsc8531->ts_lock); +- __skb_queue_tail(&vsc8531->ptp->tx_queue, skb); +- mutex_unlock(&vsc8531->ts_lock); ++ skb_queue_tail(&vsc8531->ptp->tx_queue, skb); + return; + + out: +@@ -1547,6 +1543,7 @@ void vsc8584_ptp_deinit(struct phy_device *phydev) + if (vsc8531->ptp->ptp_clock) { + ptp_clock_unregister(vsc8531->ptp->ptp_clock); + skb_queue_purge(&vsc8531->rx_skbs_list); ++ skb_queue_purge(&vsc8531->ptp->tx_queue); + } + } + +@@ -1570,7 +1567,7 @@ irqreturn_t vsc8584_handle_ts_interrupt(struct phy_device *phydev) + if (rc & VSC85XX_1588_INT_FIFO_ADD) { + vsc85xx_get_tx_ts(priv->ptp); + } else if (rc & VSC85XX_1588_INT_FIFO_OVERFLOW) { +- __skb_queue_purge(&priv->ptp->tx_queue); ++ skb_queue_purge(&priv->ptp->tx_queue); + vsc85xx_ts_reset_fifo(phydev); + } + +@@ -1590,6 +1587,7 @@ int vsc8584_ptp_probe(struct phy_device *phydev) + mutex_init(&vsc8531->phc_lock); + mutex_init(&vsc8531->ts_lock); + skb_queue_head_init(&vsc8531->rx_skbs_list); ++ skb_queue_head_init(&vsc8531->ptp->tx_queue); + + /* Retrieve the shared load/save GPIO. Request it as non exclusive as + * the same GPIO can be requested by all the PHYs of the same package. 
+diff --git a/drivers/net/ppp/ppp_generic.c b/drivers/net/ppp/ppp_generic.c +index 28b894bcd7a93d..46ac51217114bd 100644 +--- a/drivers/net/ppp/ppp_generic.c ++++ b/drivers/net/ppp/ppp_generic.c +@@ -1753,7 +1753,6 @@ pad_compress_skb(struct ppp *ppp, struct sk_buff *skb) + */ + if (net_ratelimit()) + netdev_err(ppp->dev, "ppp: compressor dropped pkt\n"); +- kfree_skb(skb); + consume_skb(new_skb); + new_skb = NULL; + } +@@ -1855,9 +1854,10 @@ ppp_send_frame(struct ppp *ppp, struct sk_buff *skb) + "down - pkt dropped.\n"); + goto drop; + } +- skb = pad_compress_skb(ppp, skb); +- if (!skb) ++ new_skb = pad_compress_skb(ppp, skb); ++ if (!new_skb) + goto drop; ++ skb = new_skb; + } + + /* +diff --git a/drivers/net/usb/cdc_ncm.c b/drivers/net/usb/cdc_ncm.c +index d9792fd515a904..22554daaf6ff17 100644 +--- a/drivers/net/usb/cdc_ncm.c ++++ b/drivers/net/usb/cdc_ncm.c +@@ -2043,6 +2043,13 @@ static const struct usb_device_id cdc_devs[] = { + .driver_info = (unsigned long)&wwan_info, + }, + ++ /* Intel modem (label from OEM reads Fibocom L850-GL) */ ++ { USB_DEVICE_AND_INTERFACE_INFO(0x8087, 0x095a, ++ USB_CLASS_COMM, ++ USB_CDC_SUBCLASS_NCM, USB_CDC_PROTO_NONE), ++ .driver_info = (unsigned long)&wwan_info, ++ }, ++ + /* DisplayLink docking stations */ + { .match_flags = USB_DEVICE_ID_MATCH_INT_INFO + | USB_DEVICE_ID_MATCH_VENDOR, +diff --git a/drivers/net/vmxnet3/vmxnet3_drv.c b/drivers/net/vmxnet3/vmxnet3_drv.c +index afd78324f3aa39..6e4023791b4761 100644 +--- a/drivers/net/vmxnet3/vmxnet3_drv.c ++++ b/drivers/net/vmxnet3/vmxnet3_drv.c +@@ -3483,8 +3483,6 @@ vmxnet3_change_mtu(struct net_device *netdev, int new_mtu) + struct vmxnet3_adapter *adapter = netdev_priv(netdev); + int err = 0; + +- netdev->mtu = new_mtu; +- + /* + * Reset_work may be in the middle of resetting the device, wait for its + * completion. 
+@@ -3498,6 +3496,7 @@ vmxnet3_change_mtu(struct net_device *netdev, int new_mtu) + + /* we need to re-create the rx queue based on the new mtu */ + vmxnet3_rq_destroy_all(adapter); ++ netdev->mtu = new_mtu; + vmxnet3_adjust_rx_ring_size(adapter); + err = vmxnet3_rq_create_all(adapter); + if (err) { +@@ -3514,6 +3513,8 @@ vmxnet3_change_mtu(struct net_device *netdev, int new_mtu) + "Closing it\n", err); + goto out; + } ++ } else { ++ netdev->mtu = new_mtu; + } + + out: +diff --git a/drivers/net/wireless/ath/ath11k/core.h b/drivers/net/wireless/ath/ath11k/core.h +index 812a174f74c0b3..4bb36dc6ae08ba 100644 +--- a/drivers/net/wireless/ath/ath11k/core.h ++++ b/drivers/net/wireless/ath/ath11k/core.h +@@ -365,6 +365,8 @@ struct ath11k_vif { + struct ieee80211_chanctx_conf chanctx; + struct ath11k_arp_ns_offload arp_ns_offload; + struct ath11k_rekey_data rekey_data; ++ u32 num_stations; ++ bool reinstall_group_keys; + + #ifdef CONFIG_ATH11K_DEBUGFS + struct dentry *debugfs_twt; +@@ -1234,6 +1236,11 @@ static inline struct ath11k_vif *ath11k_vif_to_arvif(struct ieee80211_vif *vif) + return (struct ath11k_vif *)vif->drv_priv; + } + ++static inline struct ath11k_sta *ath11k_sta_to_arsta(struct ieee80211_sta *sta) ++{ ++ return (struct ath11k_sta *)sta->drv_priv; ++} ++ + static inline struct ath11k *ath11k_ab_to_ar(struct ath11k_base *ab, + int mac_id) + { +diff --git a/drivers/net/wireless/ath/ath11k/debugfs.c b/drivers/net/wireless/ath/ath11k/debugfs.c +index 50bc17127e68a3..4304fed44d5839 100644 +--- a/drivers/net/wireless/ath/ath11k/debugfs.c ++++ b/drivers/net/wireless/ath/ath11k/debugfs.c +@@ -1452,7 +1452,7 @@ static void ath11k_reset_peer_ps_duration(void *data, + struct ieee80211_sta *sta) + { + struct ath11k *ar = data; +- struct ath11k_sta *arsta = (struct ath11k_sta *)sta->drv_priv; ++ struct ath11k_sta *arsta = ath11k_sta_to_arsta(sta); + + spin_lock_bh(&ar->data_lock); + arsta->ps_total_duration = 0; +@@ -1503,7 +1503,7 @@ static void ath11k_peer_ps_state_disable(void *data, + struct ieee80211_sta *sta) + { + struct ath11k *ar = data; +- struct ath11k_sta *arsta = (struct ath11k_sta *)sta->drv_priv; ++ struct ath11k_sta *arsta = ath11k_sta_to_arsta(sta); + + spin_lock_bh(&ar->data_lock); + arsta->peer_ps_state = WMI_PEER_PS_STATE_DISABLED; +diff --git a/drivers/net/wireless/ath/ath11k/debugfs_sta.c b/drivers/net/wireless/ath/ath11k/debugfs_sta.c +index 168879a380cb2d..f56a24b6c8da21 100644 +--- a/drivers/net/wireless/ath/ath11k/debugfs_sta.c ++++ b/drivers/net/wireless/ath/ath11k/debugfs_sta.c +@@ -137,7 +137,7 @@ static ssize_t ath11k_dbg_sta_dump_tx_stats(struct file *file, + size_t count, loff_t *ppos) + { + struct ieee80211_sta *sta = file->private_data; +- struct ath11k_sta *arsta = (struct ath11k_sta *)sta->drv_priv; ++ struct ath11k_sta *arsta = ath11k_sta_to_arsta(sta); + struct ath11k *ar = arsta->arvif->ar; + struct ath11k_htt_data_stats *stats; + static const char *str_name[ATH11K_STATS_TYPE_MAX] = {"succ", "fail", +@@ -244,7 +244,7 @@ static ssize_t ath11k_dbg_sta_dump_rx_stats(struct file *file, + size_t count, loff_t *ppos) + { + struct ieee80211_sta *sta = file->private_data; +- struct ath11k_sta *arsta = (struct ath11k_sta *)sta->drv_priv; ++ struct ath11k_sta *arsta = ath11k_sta_to_arsta(sta); + struct ath11k *ar = arsta->arvif->ar; + struct ath11k_rx_peer_stats *rx_stats = arsta->rx_stats; + int len = 0, i, retval = 0; +@@ -341,7 +341,7 @@ static int + ath11k_dbg_sta_open_htt_peer_stats(struct inode *inode, struct file *file) + { + struct ieee80211_sta *sta = 
inode->i_private; +- struct ath11k_sta *arsta = (struct ath11k_sta *)sta->drv_priv; ++ struct ath11k_sta *arsta = ath11k_sta_to_arsta(sta); + struct ath11k *ar = arsta->arvif->ar; + struct debug_htt_stats_req *stats_req; + int type = ar->debug.htt_stats.type; +@@ -377,7 +377,7 @@ static int + ath11k_dbg_sta_release_htt_peer_stats(struct inode *inode, struct file *file) + { + struct ieee80211_sta *sta = inode->i_private; +- struct ath11k_sta *arsta = (struct ath11k_sta *)sta->drv_priv; ++ struct ath11k_sta *arsta = ath11k_sta_to_arsta(sta); + struct ath11k *ar = arsta->arvif->ar; + + mutex_lock(&ar->conf_mutex); +@@ -414,7 +414,7 @@ static ssize_t ath11k_dbg_sta_write_peer_pktlog(struct file *file, + size_t count, loff_t *ppos) + { + struct ieee80211_sta *sta = file->private_data; +- struct ath11k_sta *arsta = (struct ath11k_sta *)sta->drv_priv; ++ struct ath11k_sta *arsta = ath11k_sta_to_arsta(sta); + struct ath11k *ar = arsta->arvif->ar; + int ret, enable; + +@@ -454,7 +454,7 @@ static ssize_t ath11k_dbg_sta_read_peer_pktlog(struct file *file, + size_t count, loff_t *ppos) + { + struct ieee80211_sta *sta = file->private_data; +- struct ath11k_sta *arsta = (struct ath11k_sta *)sta->drv_priv; ++ struct ath11k_sta *arsta = ath11k_sta_to_arsta(sta); + struct ath11k *ar = arsta->arvif->ar; + char buf[32] = {0}; + int len; +@@ -481,7 +481,7 @@ static ssize_t ath11k_dbg_sta_write_delba(struct file *file, + size_t count, loff_t *ppos) + { + struct ieee80211_sta *sta = file->private_data; +- struct ath11k_sta *arsta = (struct ath11k_sta *)sta->drv_priv; ++ struct ath11k_sta *arsta = ath11k_sta_to_arsta(sta); + struct ath11k *ar = arsta->arvif->ar; + u32 tid, initiator, reason; + int ret; +@@ -532,7 +532,7 @@ static ssize_t ath11k_dbg_sta_write_addba_resp(struct file *file, + size_t count, loff_t *ppos) + { + struct ieee80211_sta *sta = file->private_data; +- struct ath11k_sta *arsta = (struct ath11k_sta *)sta->drv_priv; ++ struct ath11k_sta *arsta = ath11k_sta_to_arsta(sta); + struct ath11k *ar = arsta->arvif->ar; + u32 tid, status; + int ret; +@@ -582,7 +582,7 @@ static ssize_t ath11k_dbg_sta_write_addba(struct file *file, + size_t count, loff_t *ppos) + { + struct ieee80211_sta *sta = file->private_data; +- struct ath11k_sta *arsta = (struct ath11k_sta *)sta->drv_priv; ++ struct ath11k_sta *arsta = ath11k_sta_to_arsta(sta); + struct ath11k *ar = arsta->arvif->ar; + u32 tid, buf_size; + int ret; +@@ -633,7 +633,7 @@ static ssize_t ath11k_dbg_sta_read_aggr_mode(struct file *file, + size_t count, loff_t *ppos) + { + struct ieee80211_sta *sta = file->private_data; +- struct ath11k_sta *arsta = (struct ath11k_sta *)sta->drv_priv; ++ struct ath11k_sta *arsta = ath11k_sta_to_arsta(sta); + struct ath11k *ar = arsta->arvif->ar; + char buf[64]; + int len = 0; +@@ -653,7 +653,7 @@ static ssize_t ath11k_dbg_sta_write_aggr_mode(struct file *file, + size_t count, loff_t *ppos) + { + struct ieee80211_sta *sta = file->private_data; +- struct ath11k_sta *arsta = (struct ath11k_sta *)sta->drv_priv; ++ struct ath11k_sta *arsta = ath11k_sta_to_arsta(sta); + struct ath11k *ar = arsta->arvif->ar; + u32 aggr_mode; + int ret; +@@ -698,7 +698,7 @@ ath11k_write_htt_peer_stats_reset(struct file *file, + size_t count, loff_t *ppos) + { + struct ieee80211_sta *sta = file->private_data; +- struct ath11k_sta *arsta = (struct ath11k_sta *)sta->drv_priv; ++ struct ath11k_sta *arsta = ath11k_sta_to_arsta(sta); + struct ath11k *ar = arsta->arvif->ar; + struct htt_ext_stats_cfg_params cfg_params = { 0 }; + int ret; +@@ -757,7 
+757,7 @@ static ssize_t ath11k_dbg_sta_read_peer_ps_state(struct file *file, + size_t count, loff_t *ppos) + { + struct ieee80211_sta *sta = file->private_data; +- struct ath11k_sta *arsta = (struct ath11k_sta *)sta->drv_priv; ++ struct ath11k_sta *arsta = ath11k_sta_to_arsta(sta); + struct ath11k *ar = arsta->arvif->ar; + char buf[20]; + int len; +@@ -784,7 +784,7 @@ static ssize_t ath11k_dbg_sta_read_current_ps_duration(struct file *file, + loff_t *ppos) + { + struct ieee80211_sta *sta = file->private_data; +- struct ath11k_sta *arsta = (struct ath11k_sta *)sta->drv_priv; ++ struct ath11k_sta *arsta = ath11k_sta_to_arsta(sta); + struct ath11k *ar = arsta->arvif->ar; + u64 time_since_station_in_power_save; + char buf[20]; +@@ -818,7 +818,7 @@ static ssize_t ath11k_dbg_sta_read_total_ps_duration(struct file *file, + size_t count, loff_t *ppos) + { + struct ieee80211_sta *sta = file->private_data; +- struct ath11k_sta *arsta = (struct ath11k_sta *)sta->drv_priv; ++ struct ath11k_sta *arsta = ath11k_sta_to_arsta(sta); + struct ath11k *ar = arsta->arvif->ar; + char buf[20]; + u64 power_save_duration; +diff --git a/drivers/net/wireless/ath/ath11k/dp_rx.c b/drivers/net/wireless/ath/ath11k/dp_rx.c +index 33b9764eaa9167..8cc51ab699de78 100644 +--- a/drivers/net/wireless/ath/ath11k/dp_rx.c ++++ b/drivers/net/wireless/ath/ath11k/dp_rx.c +@@ -1100,7 +1100,7 @@ int ath11k_dp_rx_ampdu_start(struct ath11k *ar, + struct ieee80211_ampdu_params *params) + { + struct ath11k_base *ab = ar->ab; +- struct ath11k_sta *arsta = (void *)params->sta->drv_priv; ++ struct ath11k_sta *arsta = ath11k_sta_to_arsta(params->sta); + int vdev_id = arsta->arvif->vdev_id; + int ret; + +@@ -1118,7 +1118,7 @@ int ath11k_dp_rx_ampdu_stop(struct ath11k *ar, + { + struct ath11k_base *ab = ar->ab; + struct ath11k_peer *peer; +- struct ath11k_sta *arsta = (void *)params->sta->drv_priv; ++ struct ath11k_sta *arsta = ath11k_sta_to_arsta(params->sta); + int vdev_id = arsta->arvif->vdev_id; + dma_addr_t paddr; + bool active; +@@ -1460,7 +1460,7 @@ ath11k_update_per_peer_tx_stats(struct ath11k *ar, + } + + sta = peer->sta; +- arsta = (struct ath11k_sta *)sta->drv_priv; ++ arsta = ath11k_sta_to_arsta(sta); + + memset(&arsta->txrate, 0, sizeof(arsta->txrate)); + +@@ -5269,7 +5269,7 @@ int ath11k_dp_rx_process_mon_status(struct ath11k_base *ab, int mac_id, + goto next_skb; + } + +- arsta = (struct ath11k_sta *)peer->sta->drv_priv; ++ arsta = ath11k_sta_to_arsta(peer->sta); + ath11k_dp_rx_update_peer_stats(arsta, ppdu_info); + + if (ath11k_debugfs_is_pktlog_peer_valid(ar, peer->addr)) +diff --git a/drivers/net/wireless/ath/ath11k/dp_tx.c b/drivers/net/wireless/ath/ath11k/dp_tx.c +index 7dd1ee58980177..c1072e66e3e8fd 100644 +--- a/drivers/net/wireless/ath/ath11k/dp_tx.c ++++ b/drivers/net/wireless/ath/ath11k/dp_tx.c +@@ -467,7 +467,7 @@ void ath11k_dp_tx_update_txcompl(struct ath11k *ar, struct hal_tx_status *ts) + } + + sta = peer->sta; +- arsta = (struct ath11k_sta *)sta->drv_priv; ++ arsta = ath11k_sta_to_arsta(sta); + + memset(&arsta->txrate, 0, sizeof(arsta->txrate)); + pkt_type = FIELD_GET(HAL_TX_RATE_STATS_INFO0_PKT_TYPE, +@@ -627,7 +627,7 @@ static void ath11k_dp_tx_complete_msdu(struct ath11k *ar, + ieee80211_free_txskb(ar->hw, msdu); + return; + } +- arsta = (struct ath11k_sta *)peer->sta->drv_priv; ++ arsta = ath11k_sta_to_arsta(peer->sta); + status.sta = peer->sta; + status.skb = msdu; + status.info = info; +diff --git a/drivers/net/wireless/ath/ath11k/mac.c b/drivers/net/wireless/ath/ath11k/mac.c +index 
9df3f6449f7689..2921be9bd530cf 100644 +--- a/drivers/net/wireless/ath/ath11k/mac.c ++++ b/drivers/net/wireless/ath/ath11k/mac.c +@@ -254,9 +254,6 @@ static const u32 ath11k_smps_map[] = { + [WLAN_HT_CAP_SM_PS_DISABLED] = WMI_PEER_SMPS_PS_NONE, + }; + +-static int ath11k_start_vdev_delay(struct ieee80211_hw *hw, +- struct ieee80211_vif *vif); +- + enum nl80211_he_ru_alloc ath11k_mac_phy_he_ru_to_nl80211_he_ru_alloc(u16 ru_phy) + { + enum nl80211_he_ru_alloc ret; +@@ -2828,7 +2825,7 @@ static void ath11k_peer_assoc_prepare(struct ath11k *ar, + + lockdep_assert_held(&ar->conf_mutex); + +- arsta = (struct ath11k_sta *)sta->drv_priv; ++ arsta = ath11k_sta_to_arsta(sta); + + memset(arg, 0, sizeof(*arg)); + +@@ -4208,6 +4205,40 @@ static int ath11k_clear_peer_keys(struct ath11k_vif *arvif, + return first_errno; + } + ++static int ath11k_set_group_keys(struct ath11k_vif *arvif) ++{ ++ struct ath11k *ar = arvif->ar; ++ struct ath11k_base *ab = ar->ab; ++ const u8 *addr = arvif->bssid; ++ int i, ret, first_errno = 0; ++ struct ath11k_peer *peer; ++ ++ spin_lock_bh(&ab->base_lock); ++ peer = ath11k_peer_find(ab, arvif->vdev_id, addr); ++ spin_unlock_bh(&ab->base_lock); ++ ++ if (!peer) ++ return -ENOENT; ++ ++ for (i = 0; i < ARRAY_SIZE(peer->keys); i++) { ++ struct ieee80211_key_conf *key = peer->keys[i]; ++ ++ if (!key || (key->flags & IEEE80211_KEY_FLAG_PAIRWISE)) ++ continue; ++ ++ ret = ath11k_install_key(arvif, key, SET_KEY, addr, ++ WMI_KEY_GROUP); ++ if (ret < 0 && first_errno == 0) ++ first_errno = ret; ++ ++ if (ret < 0) ++ ath11k_warn(ab, "failed to set group key of idx %d for vdev %d: %d\n", ++ i, arvif->vdev_id, ret); ++ } ++ ++ return first_errno; ++} ++ + static int ath11k_mac_op_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd, + struct ieee80211_vif *vif, struct ieee80211_sta *sta, + struct ieee80211_key_conf *key) +@@ -4217,6 +4248,7 @@ static int ath11k_mac_op_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd, + struct ath11k_vif *arvif = ath11k_vif_to_arvif(vif); + struct ath11k_peer *peer; + struct ath11k_sta *arsta; ++ bool is_ap_with_no_sta; + const u8 *peer_addr; + int ret = 0; + u32 flags = 0; +@@ -4277,16 +4309,57 @@ static int ath11k_mac_op_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd, + else + flags |= WMI_KEY_GROUP; + +- ret = ath11k_install_key(arvif, key, cmd, peer_addr, flags); +- if (ret) { +- ath11k_warn(ab, "ath11k_install_key failed (%d)\n", ret); +- goto exit; +- } ++ ath11k_dbg(ar->ab, ATH11K_DBG_MAC, ++ "%s for peer %pM on vdev %d flags 0x%X, type = %d, num_sta %d\n", ++ cmd == SET_KEY ? "SET_KEY" : "DEL_KEY", peer_addr, arvif->vdev_id, ++ flags, arvif->vdev_type, arvif->num_stations); ++ ++ /* Allow group key clearing only in AP mode when no stations are ++ * associated. There is a known race condition in firmware where ++ * group addressed packets may be dropped if the key is cleared ++ * and immediately set again during rekey. ++ * ++ * During GTK rekey, mac80211 issues a clear key (if the old key ++ * exists) followed by an install key operation for same key ++ * index. This causes ath11k to send two WMI commands in quick ++ * succession: one to clear the old key and another to install the ++ * new key in the same slot. ++ * ++ * Under certain conditions—especially under high load or time ++ * sensitive scenarios, firmware may process these commands ++ * asynchronously in a way that firmware assumes the key is ++ * cleared whereas hardware has a valid key. 
This inconsistency ++ * between hardware and firmware leads to group addressed packet ++ * drops after rekey. ++ * Only setting the same key again can restore a valid key in ++ * firmware and allow packets to be transmitted. ++ * ++ * There is a use case where an AP can transition from Secure mode ++ * to open mode without a vdev restart by just deleting all ++ * associated peers and clearing key, Hence allow clear key for ++ * that case alone. Mark arvif->reinstall_group_keys in such cases ++ * and reinstall the same key when the first peer is added, ++ * allowing firmware to recover from the race if it had occurred. ++ */ + +- ret = ath11k_dp_peer_rx_pn_replay_config(arvif, peer_addr, cmd, key); +- if (ret) { +- ath11k_warn(ab, "failed to offload PN replay detection %d\n", ret); +- goto exit; ++ is_ap_with_no_sta = (vif->type == NL80211_IFTYPE_AP && ++ !arvif->num_stations); ++ if ((flags & WMI_KEY_PAIRWISE) || cmd == SET_KEY || is_ap_with_no_sta) { ++ ret = ath11k_install_key(arvif, key, cmd, peer_addr, flags); ++ if (ret) { ++ ath11k_warn(ab, "ath11k_install_key failed (%d)\n", ret); ++ goto exit; ++ } ++ ++ ret = ath11k_dp_peer_rx_pn_replay_config(arvif, peer_addr, cmd, key); ++ if (ret) { ++ ath11k_warn(ab, "failed to offload PN replay detection %d\n", ++ ret); ++ goto exit; ++ } ++ ++ if ((flags & WMI_KEY_GROUP) && cmd == SET_KEY && is_ap_with_no_sta) ++ arvif->reinstall_group_keys = true; + } + + spin_lock_bh(&ab->base_lock); +@@ -4311,7 +4384,7 @@ static int ath11k_mac_op_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd, + ath11k_warn(ab, "peer %pM disappeared!\n", peer_addr); + + if (sta) { +- arsta = (struct ath11k_sta *)sta->drv_priv; ++ arsta = ath11k_sta_to_arsta(sta); + + switch (key->cipher) { + case WLAN_CIPHER_SUITE_TKIP: +@@ -4879,6 +4952,7 @@ static int ath11k_mac_inc_num_stations(struct ath11k_vif *arvif, + return -ENOBUFS; + + ar->num_stations++; ++ arvif->num_stations++; + + return 0; + } +@@ -4894,100 +4968,7 @@ static void ath11k_mac_dec_num_stations(struct ath11k_vif *arvif, + return; + + ar->num_stations--; +-} +- +-static int ath11k_mac_station_add(struct ath11k *ar, +- struct ieee80211_vif *vif, +- struct ieee80211_sta *sta) +-{ +- struct ath11k_base *ab = ar->ab; +- struct ath11k_vif *arvif = ath11k_vif_to_arvif(vif); +- struct ath11k_sta *arsta = (struct ath11k_sta *)sta->drv_priv; +- struct peer_create_params peer_param; +- int ret; +- +- lockdep_assert_held(&ar->conf_mutex); +- +- ret = ath11k_mac_inc_num_stations(arvif, sta); +- if (ret) { +- ath11k_warn(ab, "refusing to associate station: too many connected already (%d)\n", +- ar->max_num_stations); +- goto exit; +- } +- +- arsta->rx_stats = kzalloc(sizeof(*arsta->rx_stats), GFP_KERNEL); +- if (!arsta->rx_stats) { +- ret = -ENOMEM; +- goto dec_num_station; +- } +- +- peer_param.vdev_id = arvif->vdev_id; +- peer_param.peer_addr = sta->addr; +- peer_param.peer_type = WMI_PEER_TYPE_DEFAULT; +- +- ret = ath11k_peer_create(ar, arvif, sta, &peer_param); +- if (ret) { +- ath11k_warn(ab, "Failed to add peer: %pM for VDEV: %d\n", +- sta->addr, arvif->vdev_id); +- goto free_rx_stats; +- } +- +- ath11k_dbg(ab, ATH11K_DBG_MAC, "Added peer: %pM for VDEV: %d\n", +- sta->addr, arvif->vdev_id); +- +- if (ath11k_debugfs_is_extd_tx_stats_enabled(ar)) { +- arsta->tx_stats = kzalloc(sizeof(*arsta->tx_stats), GFP_KERNEL); +- if (!arsta->tx_stats) { +- ret = -ENOMEM; +- goto free_peer; +- } +- } +- +- if (ieee80211_vif_is_mesh(vif)) { +- ath11k_dbg(ab, ATH11K_DBG_MAC, +- "setting USE_4ADDR for mesh STA %pM\n", sta->addr); 
+- ret = ath11k_wmi_set_peer_param(ar, sta->addr, +- arvif->vdev_id, +- WMI_PEER_USE_4ADDR, 1); +- if (ret) { +- ath11k_warn(ab, "failed to set mesh STA %pM 4addr capability: %d\n", +- sta->addr, ret); +- goto free_tx_stats; +- } +- } +- +- ret = ath11k_dp_peer_setup(ar, arvif->vdev_id, sta->addr); +- if (ret) { +- ath11k_warn(ab, "failed to setup dp for peer %pM on vdev %i (%d)\n", +- sta->addr, arvif->vdev_id, ret); +- goto free_tx_stats; +- } +- +- if (ab->hw_params.vdev_start_delay && +- !arvif->is_started && +- arvif->vdev_type != WMI_VDEV_TYPE_AP) { +- ret = ath11k_start_vdev_delay(ar->hw, vif); +- if (ret) { +- ath11k_warn(ab, "failed to delay vdev start: %d\n", ret); +- goto free_tx_stats; +- } +- } +- +- ewma_avg_rssi_init(&arsta->avg_rssi); +- return 0; +- +-free_tx_stats: +- kfree(arsta->tx_stats); +- arsta->tx_stats = NULL; +-free_peer: +- ath11k_peer_delete(ar, arvif->vdev_id, sta->addr); +-free_rx_stats: +- kfree(arsta->rx_stats); +- arsta->rx_stats = NULL; +-dec_num_station: +- ath11k_mac_dec_num_stations(arvif, sta); +-exit: +- return ret; ++ arvif->num_stations--; + } + + static u32 ath11k_mac_ieee80211_sta_bw_to_wmi(struct ath11k *ar, +@@ -5018,140 +4999,6 @@ static u32 ath11k_mac_ieee80211_sta_bw_to_wmi(struct ath11k *ar, + return bw; + } + +-static int ath11k_mac_op_sta_state(struct ieee80211_hw *hw, +- struct ieee80211_vif *vif, +- struct ieee80211_sta *sta, +- enum ieee80211_sta_state old_state, +- enum ieee80211_sta_state new_state) +-{ +- struct ath11k *ar = hw->priv; +- struct ath11k_vif *arvif = ath11k_vif_to_arvif(vif); +- struct ath11k_sta *arsta = (struct ath11k_sta *)sta->drv_priv; +- struct ath11k_peer *peer; +- int ret = 0; +- +- /* cancel must be done outside the mutex to avoid deadlock */ +- if ((old_state == IEEE80211_STA_NONE && +- new_state == IEEE80211_STA_NOTEXIST)) { +- cancel_work_sync(&arsta->update_wk); +- cancel_work_sync(&arsta->set_4addr_wk); +- } +- +- mutex_lock(&ar->conf_mutex); +- +- if (old_state == IEEE80211_STA_NOTEXIST && +- new_state == IEEE80211_STA_NONE) { +- memset(arsta, 0, sizeof(*arsta)); +- arsta->arvif = arvif; +- arsta->peer_ps_state = WMI_PEER_PS_STATE_DISABLED; +- INIT_WORK(&arsta->update_wk, ath11k_sta_rc_update_wk); +- INIT_WORK(&arsta->set_4addr_wk, ath11k_sta_set_4addr_wk); +- +- ret = ath11k_mac_station_add(ar, vif, sta); +- if (ret) +- ath11k_warn(ar->ab, "Failed to add station: %pM for VDEV: %d\n", +- sta->addr, arvif->vdev_id); +- } else if ((old_state == IEEE80211_STA_NONE && +- new_state == IEEE80211_STA_NOTEXIST)) { +- bool skip_peer_delete = ar->ab->hw_params.vdev_start_delay && +- vif->type == NL80211_IFTYPE_STATION; +- +- ath11k_dp_peer_cleanup(ar, arvif->vdev_id, sta->addr); +- +- if (!skip_peer_delete) { +- ret = ath11k_peer_delete(ar, arvif->vdev_id, sta->addr); +- if (ret) +- ath11k_warn(ar->ab, +- "Failed to delete peer: %pM for VDEV: %d\n", +- sta->addr, arvif->vdev_id); +- else +- ath11k_dbg(ar->ab, +- ATH11K_DBG_MAC, +- "Removed peer: %pM for VDEV: %d\n", +- sta->addr, arvif->vdev_id); +- } +- +- ath11k_mac_dec_num_stations(arvif, sta); +- mutex_lock(&ar->ab->tbl_mtx_lock); +- spin_lock_bh(&ar->ab->base_lock); +- peer = ath11k_peer_find(ar->ab, arvif->vdev_id, sta->addr); +- if (skip_peer_delete && peer) { +- peer->sta = NULL; +- } else if (peer && peer->sta == sta) { +- ath11k_warn(ar->ab, "Found peer entry %pM n vdev %i after it was supposedly removed\n", +- vif->addr, arvif->vdev_id); +- ath11k_peer_rhash_delete(ar->ab, peer); +- peer->sta = NULL; +- list_del(&peer->list); +- kfree(peer); +- 
ar->num_peers--; +- } +- spin_unlock_bh(&ar->ab->base_lock); +- mutex_unlock(&ar->ab->tbl_mtx_lock); +- +- kfree(arsta->tx_stats); +- arsta->tx_stats = NULL; +- +- kfree(arsta->rx_stats); +- arsta->rx_stats = NULL; +- } else if (old_state == IEEE80211_STA_AUTH && +- new_state == IEEE80211_STA_ASSOC && +- (vif->type == NL80211_IFTYPE_AP || +- vif->type == NL80211_IFTYPE_MESH_POINT || +- vif->type == NL80211_IFTYPE_ADHOC)) { +- ret = ath11k_station_assoc(ar, vif, sta, false); +- if (ret) +- ath11k_warn(ar->ab, "Failed to associate station: %pM\n", +- sta->addr); +- +- spin_lock_bh(&ar->data_lock); +- /* Set arsta bw and prev bw */ +- arsta->bw = ath11k_mac_ieee80211_sta_bw_to_wmi(ar, sta); +- arsta->bw_prev = arsta->bw; +- spin_unlock_bh(&ar->data_lock); +- } else if (old_state == IEEE80211_STA_ASSOC && +- new_state == IEEE80211_STA_AUTHORIZED) { +- spin_lock_bh(&ar->ab->base_lock); +- +- peer = ath11k_peer_find(ar->ab, arvif->vdev_id, sta->addr); +- if (peer) +- peer->is_authorized = true; +- +- spin_unlock_bh(&ar->ab->base_lock); +- +- if (vif->type == NL80211_IFTYPE_STATION && arvif->is_up) { +- ret = ath11k_wmi_set_peer_param(ar, sta->addr, +- arvif->vdev_id, +- WMI_PEER_AUTHORIZE, +- 1); +- if (ret) +- ath11k_warn(ar->ab, "Unable to authorize peer %pM vdev %d: %d\n", +- sta->addr, arvif->vdev_id, ret); +- } +- } else if (old_state == IEEE80211_STA_AUTHORIZED && +- new_state == IEEE80211_STA_ASSOC) { +- spin_lock_bh(&ar->ab->base_lock); +- +- peer = ath11k_peer_find(ar->ab, arvif->vdev_id, sta->addr); +- if (peer) +- peer->is_authorized = false; +- +- spin_unlock_bh(&ar->ab->base_lock); +- } else if (old_state == IEEE80211_STA_ASSOC && +- new_state == IEEE80211_STA_AUTH && +- (vif->type == NL80211_IFTYPE_AP || +- vif->type == NL80211_IFTYPE_MESH_POINT || +- vif->type == NL80211_IFTYPE_ADHOC)) { +- ret = ath11k_station_disassoc(ar, vif, sta); +- if (ret) +- ath11k_warn(ar->ab, "Failed to disassociate station: %pM\n", +- sta->addr); +- } +- +- mutex_unlock(&ar->conf_mutex); +- return ret; +-} +- + static int ath11k_mac_op_sta_set_txpwr(struct ieee80211_hw *hw, + struct ieee80211_vif *vif, + struct ieee80211_sta *sta) +@@ -5192,7 +5039,7 @@ static void ath11k_mac_op_sta_set_4addr(struct ieee80211_hw *hw, + struct ieee80211_sta *sta, bool enabled) + { + struct ath11k *ar = hw->priv; +- struct ath11k_sta *arsta = (struct ath11k_sta *)sta->drv_priv; ++ struct ath11k_sta *arsta = ath11k_sta_to_arsta(sta); + + if (enabled && !arsta->use_4addr_set) { + ieee80211_queue_work(ar->hw, &arsta->set_4addr_wk); +@@ -5206,7 +5053,7 @@ static void ath11k_mac_op_sta_rc_update(struct ieee80211_hw *hw, + u32 changed) + { + struct ath11k *ar = hw->priv; +- struct ath11k_sta *arsta = (struct ath11k_sta *)sta->drv_priv; ++ struct ath11k_sta *arsta = ath11k_sta_to_arsta(sta); + struct ath11k_vif *arvif = ath11k_vif_to_arvif(vif); + struct ath11k_peer *peer; + u32 bw, smps; +@@ -6204,7 +6051,7 @@ static void ath11k_mac_op_tx(struct ieee80211_hw *hw, + } + + if (control->sta) +- arsta = (struct ath11k_sta *)control->sta->drv_priv; ++ arsta = ath11k_sta_to_arsta(control->sta); + + ret = ath11k_dp_tx(ar, arvif, arsta, skb); + if (unlikely(ret)) { +@@ -7546,8 +7393,8 @@ static void ath11k_mac_op_change_chanctx(struct ieee80211_hw *hw, + mutex_unlock(&ar->conf_mutex); + } + +-static int ath11k_start_vdev_delay(struct ieee80211_hw *hw, +- struct ieee80211_vif *vif) ++static int ath11k_mac_start_vdev_delay(struct ieee80211_hw *hw, ++ struct ieee80211_vif *vif) + { + struct ath11k *ar = hw->priv; + struct ath11k_base *ab 
= ar->ab; +@@ -8228,7 +8075,7 @@ static void ath11k_mac_set_bitrate_mask_iter(void *data, + struct ieee80211_sta *sta) + { + struct ath11k_vif *arvif = data; +- struct ath11k_sta *arsta = (struct ath11k_sta *)sta->drv_priv; ++ struct ath11k_sta *arsta = ath11k_sta_to_arsta(sta); + struct ath11k *ar = arvif->ar; + + spin_lock_bh(&ar->data_lock); +@@ -8632,7 +8479,7 @@ static void ath11k_mac_op_sta_statistics(struct ieee80211_hw *hw, + struct ieee80211_sta *sta, + struct station_info *sinfo) + { +- struct ath11k_sta *arsta = (struct ath11k_sta *)sta->drv_priv; ++ struct ath11k_sta *arsta = ath11k_sta_to_arsta(sta); + struct ath11k *ar = arsta->arvif->ar; + s8 signal; + bool db2dbm = test_bit(WMI_TLV_SERVICE_HW_DB2DBM_CONVERSION_SUPPORT, +@@ -9099,6 +8946,249 @@ static int ath11k_mac_op_get_txpower(struct ieee80211_hw *hw, + return 0; + } + ++static int ath11k_mac_station_add(struct ath11k *ar, ++ struct ieee80211_vif *vif, ++ struct ieee80211_sta *sta) ++{ ++ struct ath11k_base *ab = ar->ab; ++ struct ath11k_vif *arvif = ath11k_vif_to_arvif(vif); ++ struct ath11k_sta *arsta = ath11k_sta_to_arsta(sta); ++ struct peer_create_params peer_param; ++ int ret; ++ ++ lockdep_assert_held(&ar->conf_mutex); ++ ++ ret = ath11k_mac_inc_num_stations(arvif, sta); ++ if (ret) { ++ ath11k_warn(ab, "refusing to associate station: too many connected already (%d)\n", ++ ar->max_num_stations); ++ goto exit; ++ } ++ ++ /* Driver allows the DEL KEY followed by SET KEY sequence for ++ * group keys for only when there is no clients associated, if at ++ * all firmware has entered the race during that window, ++ * reinstalling the same key when the first sta connects will allow ++ * firmware to recover from the race. ++ */ ++ if (arvif->num_stations == 1 && arvif->reinstall_group_keys) { ++ ath11k_dbg(ab, ATH11K_DBG_MAC, "set group keys on 1st station add for vdev %d\n", ++ arvif->vdev_id); ++ ret = ath11k_set_group_keys(arvif); ++ if (ret) ++ goto dec_num_station; ++ arvif->reinstall_group_keys = false; ++ } ++ ++ arsta->rx_stats = kzalloc(sizeof(*arsta->rx_stats), GFP_KERNEL); ++ if (!arsta->rx_stats) { ++ ret = -ENOMEM; ++ goto dec_num_station; ++ } ++ ++ peer_param.vdev_id = arvif->vdev_id; ++ peer_param.peer_addr = sta->addr; ++ peer_param.peer_type = WMI_PEER_TYPE_DEFAULT; ++ ++ ret = ath11k_peer_create(ar, arvif, sta, &peer_param); ++ if (ret) { ++ ath11k_warn(ab, "Failed to add peer: %pM for VDEV: %d\n", ++ sta->addr, arvif->vdev_id); ++ goto free_rx_stats; ++ } ++ ++ ath11k_dbg(ab, ATH11K_DBG_MAC, "Added peer: %pM for VDEV: %d\n", ++ sta->addr, arvif->vdev_id); ++ ++ if (ath11k_debugfs_is_extd_tx_stats_enabled(ar)) { ++ arsta->tx_stats = kzalloc(sizeof(*arsta->tx_stats), GFP_KERNEL); ++ if (!arsta->tx_stats) { ++ ret = -ENOMEM; ++ goto free_peer; ++ } ++ } ++ ++ if (ieee80211_vif_is_mesh(vif)) { ++ ath11k_dbg(ab, ATH11K_DBG_MAC, ++ "setting USE_4ADDR for mesh STA %pM\n", sta->addr); ++ ret = ath11k_wmi_set_peer_param(ar, sta->addr, ++ arvif->vdev_id, ++ WMI_PEER_USE_4ADDR, 1); ++ if (ret) { ++ ath11k_warn(ab, "failed to set mesh STA %pM 4addr capability: %d\n", ++ sta->addr, ret); ++ goto free_tx_stats; ++ } ++ } ++ ++ ret = ath11k_dp_peer_setup(ar, arvif->vdev_id, sta->addr); ++ if (ret) { ++ ath11k_warn(ab, "failed to setup dp for peer %pM on vdev %i (%d)\n", ++ sta->addr, arvif->vdev_id, ret); ++ goto free_tx_stats; ++ } ++ ++ if (ab->hw_params.vdev_start_delay && ++ !arvif->is_started && ++ arvif->vdev_type != WMI_VDEV_TYPE_AP) { ++ ret = ath11k_mac_start_vdev_delay(ar->hw, vif); ++ if (ret) { ++ 
ath11k_warn(ab, "failed to delay vdev start: %d\n", ret); ++ goto free_tx_stats; ++ } ++ } ++ ++ ewma_avg_rssi_init(&arsta->avg_rssi); ++ return 0; ++ ++free_tx_stats: ++ kfree(arsta->tx_stats); ++ arsta->tx_stats = NULL; ++free_peer: ++ ath11k_peer_delete(ar, arvif->vdev_id, sta->addr); ++free_rx_stats: ++ kfree(arsta->rx_stats); ++ arsta->rx_stats = NULL; ++dec_num_station: ++ ath11k_mac_dec_num_stations(arvif, sta); ++exit: ++ return ret; ++} ++ ++static int ath11k_mac_op_sta_state(struct ieee80211_hw *hw, ++ struct ieee80211_vif *vif, ++ struct ieee80211_sta *sta, ++ enum ieee80211_sta_state old_state, ++ enum ieee80211_sta_state new_state) ++{ ++ struct ath11k *ar = hw->priv; ++ struct ath11k_vif *arvif = ath11k_vif_to_arvif(vif); ++ struct ath11k_sta *arsta = ath11k_sta_to_arsta(sta); ++ struct ath11k_peer *peer; ++ int ret = 0; ++ ++ /* cancel must be done outside the mutex to avoid deadlock */ ++ if ((old_state == IEEE80211_STA_NONE && ++ new_state == IEEE80211_STA_NOTEXIST)) { ++ cancel_work_sync(&arsta->update_wk); ++ cancel_work_sync(&arsta->set_4addr_wk); ++ } ++ ++ mutex_lock(&ar->conf_mutex); ++ ++ if (old_state == IEEE80211_STA_NOTEXIST && ++ new_state == IEEE80211_STA_NONE) { ++ memset(arsta, 0, sizeof(*arsta)); ++ arsta->arvif = arvif; ++ arsta->peer_ps_state = WMI_PEER_PS_STATE_DISABLED; ++ INIT_WORK(&arsta->update_wk, ath11k_sta_rc_update_wk); ++ INIT_WORK(&arsta->set_4addr_wk, ath11k_sta_set_4addr_wk); ++ ++ ret = ath11k_mac_station_add(ar, vif, sta); ++ if (ret) ++ ath11k_warn(ar->ab, "Failed to add station: %pM for VDEV: %d\n", ++ sta->addr, arvif->vdev_id); ++ } else if ((old_state == IEEE80211_STA_NONE && ++ new_state == IEEE80211_STA_NOTEXIST)) { ++ bool skip_peer_delete = ar->ab->hw_params.vdev_start_delay && ++ vif->type == NL80211_IFTYPE_STATION; ++ ++ ath11k_dp_peer_cleanup(ar, arvif->vdev_id, sta->addr); ++ ++ if (!skip_peer_delete) { ++ ret = ath11k_peer_delete(ar, arvif->vdev_id, sta->addr); ++ if (ret) ++ ath11k_warn(ar->ab, ++ "Failed to delete peer: %pM for VDEV: %d\n", ++ sta->addr, arvif->vdev_id); ++ else ++ ath11k_dbg(ar->ab, ++ ATH11K_DBG_MAC, ++ "Removed peer: %pM for VDEV: %d\n", ++ sta->addr, arvif->vdev_id); ++ } ++ ++ ath11k_mac_dec_num_stations(arvif, sta); ++ mutex_lock(&ar->ab->tbl_mtx_lock); ++ spin_lock_bh(&ar->ab->base_lock); ++ peer = ath11k_peer_find(ar->ab, arvif->vdev_id, sta->addr); ++ if (skip_peer_delete && peer) { ++ peer->sta = NULL; ++ } else if (peer && peer->sta == sta) { ++ ath11k_warn(ar->ab, "Found peer entry %pM n vdev %i after it was supposedly removed\n", ++ vif->addr, arvif->vdev_id); ++ ath11k_peer_rhash_delete(ar->ab, peer); ++ peer->sta = NULL; ++ list_del(&peer->list); ++ kfree(peer); ++ ar->num_peers--; ++ } ++ spin_unlock_bh(&ar->ab->base_lock); ++ mutex_unlock(&ar->ab->tbl_mtx_lock); ++ ++ kfree(arsta->tx_stats); ++ arsta->tx_stats = NULL; ++ ++ kfree(arsta->rx_stats); ++ arsta->rx_stats = NULL; ++ } else if (old_state == IEEE80211_STA_AUTH && ++ new_state == IEEE80211_STA_ASSOC && ++ (vif->type == NL80211_IFTYPE_AP || ++ vif->type == NL80211_IFTYPE_MESH_POINT || ++ vif->type == NL80211_IFTYPE_ADHOC)) { ++ ret = ath11k_station_assoc(ar, vif, sta, false); ++ if (ret) ++ ath11k_warn(ar->ab, "Failed to associate station: %pM\n", ++ sta->addr); ++ ++ spin_lock_bh(&ar->data_lock); ++ /* Set arsta bw and prev bw */ ++ arsta->bw = ath11k_mac_ieee80211_sta_bw_to_wmi(ar, sta); ++ arsta->bw_prev = arsta->bw; ++ spin_unlock_bh(&ar->data_lock); ++ } else if (old_state == IEEE80211_STA_ASSOC && ++ new_state == 
IEEE80211_STA_AUTHORIZED) { ++ spin_lock_bh(&ar->ab->base_lock); ++ ++ peer = ath11k_peer_find(ar->ab, arvif->vdev_id, sta->addr); ++ if (peer) ++ peer->is_authorized = true; ++ ++ spin_unlock_bh(&ar->ab->base_lock); ++ ++ if (vif->type == NL80211_IFTYPE_STATION && arvif->is_up) { ++ ret = ath11k_wmi_set_peer_param(ar, sta->addr, ++ arvif->vdev_id, ++ WMI_PEER_AUTHORIZE, ++ 1); ++ if (ret) ++ ath11k_warn(ar->ab, "Unable to authorize peer %pM vdev %d: %d\n", ++ sta->addr, arvif->vdev_id, ret); ++ } ++ } else if (old_state == IEEE80211_STA_AUTHORIZED && ++ new_state == IEEE80211_STA_ASSOC) { ++ spin_lock_bh(&ar->ab->base_lock); ++ ++ peer = ath11k_peer_find(ar->ab, arvif->vdev_id, sta->addr); ++ if (peer) ++ peer->is_authorized = false; ++ ++ spin_unlock_bh(&ar->ab->base_lock); ++ } else if (old_state == IEEE80211_STA_ASSOC && ++ new_state == IEEE80211_STA_AUTH && ++ (vif->type == NL80211_IFTYPE_AP || ++ vif->type == NL80211_IFTYPE_MESH_POINT || ++ vif->type == NL80211_IFTYPE_ADHOC)) { ++ ret = ath11k_station_disassoc(ar, vif, sta); ++ if (ret) ++ ath11k_warn(ar->ab, "Failed to disassociate station: %pM\n", ++ sta->addr); ++ } ++ ++ mutex_unlock(&ar->conf_mutex); ++ return ret; ++} ++ + static const struct ieee80211_ops ath11k_ops = { + .tx = ath11k_mac_op_tx, + .wake_tx_queue = ieee80211_handle_wake_tx_queue, +diff --git a/drivers/net/wireless/ath/ath11k/peer.c b/drivers/net/wireless/ath/ath11k/peer.c +index ca719eb3f7f829..6d0126c3930185 100644 +--- a/drivers/net/wireless/ath/ath11k/peer.c ++++ b/drivers/net/wireless/ath/ath11k/peer.c +@@ -446,7 +446,7 @@ int ath11k_peer_create(struct ath11k *ar, struct ath11k_vif *arvif, + peer->sec_type_grp = HAL_ENCRYPT_TYPE_OPEN; + + if (sta) { +- arsta = (struct ath11k_sta *)sta->drv_priv; ++ arsta = ath11k_sta_to_arsta(sta); + arsta->tcl_metadata |= FIELD_PREP(HTT_TCL_META_DATA_TYPE, 0) | + FIELD_PREP(HTT_TCL_META_DATA_PEER_ID, + peer->peer_id); +diff --git a/drivers/net/wireless/ath/ath11k/wmi.c b/drivers/net/wireless/ath/ath11k/wmi.c +index 9a829b8282420a..31dbabc9eaf330 100644 +--- a/drivers/net/wireless/ath/ath11k/wmi.c ++++ b/drivers/net/wireless/ath/ath11k/wmi.c +@@ -6452,7 +6452,7 @@ static int ath11k_wmi_tlv_rssi_chain_parse(struct ath11k_base *ab, + goto exit; + } + +- arsta = (struct ath11k_sta *)sta->drv_priv; ++ arsta = ath11k_sta_to_arsta(sta); + + BUILD_BUG_ON(ARRAY_SIZE(arsta->chain_signal) > + ARRAY_SIZE(stats_rssi->rssi_avg_beacon)); +@@ -6540,7 +6540,7 @@ static int ath11k_wmi_tlv_fw_stats_data_parse(struct ath11k_base *ab, + arvif->bssid, + NULL); + if (sta) { +- arsta = (struct ath11k_sta *)sta->drv_priv; ++ arsta = ath11k_sta_to_arsta(sta); + arsta->rssi_beacon = src->beacon_snr; + ath11k_dbg(ab, ATH11K_DBG_WMI, + "stats vdev id %d snr %d\n", +@@ -7469,7 +7469,7 @@ static void ath11k_wmi_event_peer_sta_ps_state_chg(struct ath11k_base *ab, + goto exit; + } + +- arsta = (struct ath11k_sta *)sta->drv_priv; ++ arsta = ath11k_sta_to_arsta(sta); + + spin_lock_bh(&ar->data_lock); + +diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/btcoex.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/btcoex.c +index 00794086cc7c97..bf80675667ba38 100644 +--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/btcoex.c ++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/btcoex.c +@@ -392,10 +392,8 @@ void brcmf_btcoex_detach(struct brcmf_cfg80211_info *cfg) + if (!cfg->btcoex) + return; + +- if (cfg->btcoex->timer_on) { +- cfg->btcoex->timer_on = false; +- timer_shutdown_sync(&cfg->btcoex->timer); +- } ++ 
timer_shutdown_sync(&cfg->btcoex->timer); ++ cfg->btcoex->timer_on = false; + + cancel_work_sync(&cfg->btcoex->work); + +diff --git a/drivers/net/wireless/marvell/libertas/cfg.c b/drivers/net/wireless/marvell/libertas/cfg.c +index b700c213d10c4f..38ad49033d0bad 100644 +--- a/drivers/net/wireless/marvell/libertas/cfg.c ++++ b/drivers/net/wireless/marvell/libertas/cfg.c +@@ -1150,10 +1150,13 @@ static int lbs_associate(struct lbs_private *priv, + /* add SSID TLV */ + rcu_read_lock(); + ssid_eid = ieee80211_bss_get_ie(bss, WLAN_EID_SSID); +- if (ssid_eid) +- pos += lbs_add_ssid_tlv(pos, ssid_eid + 2, ssid_eid[1]); +- else ++ if (ssid_eid) { ++ u32 ssid_len = min(ssid_eid[1], IEEE80211_MAX_SSID_LEN); ++ ++ pos += lbs_add_ssid_tlv(pos, ssid_eid + 2, ssid_len); ++ } else { + lbs_deb_assoc("no SSID\n"); ++ } + rcu_read_unlock(); + + /* add DS param TLV */ +diff --git a/drivers/net/wireless/marvell/mwifiex/cfg80211.c b/drivers/net/wireless/marvell/mwifiex/cfg80211.c +index b7ead0cd004508..69eea0628e670e 100644 +--- a/drivers/net/wireless/marvell/mwifiex/cfg80211.c ++++ b/drivers/net/wireless/marvell/mwifiex/cfg80211.c +@@ -4316,8 +4316,9 @@ int mwifiex_init_channel_scan_gap(struct mwifiex_adapter *adapter) + * additional active scan request for hidden SSIDs on passive channels. + */ + adapter->num_in_chan_stats = 2 * (n_channels_bg + n_channels_a); +- adapter->chan_stats = vmalloc(array_size(sizeof(*adapter->chan_stats), +- adapter->num_in_chan_stats)); ++ adapter->chan_stats = kcalloc(adapter->num_in_chan_stats, ++ sizeof(*adapter->chan_stats), ++ GFP_KERNEL); + + if (!adapter->chan_stats) + return -ENOMEM; +diff --git a/drivers/net/wireless/marvell/mwifiex/main.c b/drivers/net/wireless/marvell/mwifiex/main.c +index 6c60a4c21a3128..685dcab11a488f 100644 +--- a/drivers/net/wireless/marvell/mwifiex/main.c ++++ b/drivers/net/wireless/marvell/mwifiex/main.c +@@ -664,7 +664,7 @@ static int _mwifiex_fw_dpc(const struct firmware *firmware, void *context) + goto done; + + err_add_intf: +- vfree(adapter->chan_stats); ++ kfree(adapter->chan_stats); + err_init_chan_scan: + wiphy_unregister(adapter->wiphy); + wiphy_free(adapter->wiphy); +@@ -1481,7 +1481,7 @@ static void mwifiex_uninit_sw(struct mwifiex_adapter *adapter) + wiphy_free(adapter->wiphy); + adapter->wiphy = NULL; + +- vfree(adapter->chan_stats); ++ kfree(adapter->chan_stats); + mwifiex_free_cmd_buffers(adapter); + } + +diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7996/mcu.c +index 65a5f24e53136b..8ab55fc705f076 100644 +--- a/drivers/net/wireless/mediatek/mt76/mt7996/mcu.c ++++ b/drivers/net/wireless/mediatek/mt76/mt7996/mcu.c +@@ -1616,8 +1616,8 @@ mt7996_mcu_get_mmps_mode(enum ieee80211_smps_mode smps) + int mt7996_mcu_set_fixed_rate_ctrl(struct mt7996_dev *dev, + void *data, u16 version) + { ++ struct uni_header hdr = {}; + struct ra_fixed_rate *req; +- struct uni_header hdr; + struct sk_buff *skb; + struct tlv *tlv; + int len; +@@ -2638,7 +2638,7 @@ int mt7996_mcu_set_hdr_trans(struct mt7996_dev *dev, bool hdr_trans) + { + struct { + u8 __rsv[4]; +- } __packed hdr; ++ } __packed hdr = {}; + struct hdr_trans_blacklist *req_blacklist; + struct hdr_trans_en *req_en; + struct sk_buff *skb; +diff --git a/drivers/net/wireless/st/cw1200/sta.c b/drivers/net/wireless/st/cw1200/sta.c +index 8ef1d06b9bbddb..121d810c8839e5 100644 +--- a/drivers/net/wireless/st/cw1200/sta.c ++++ b/drivers/net/wireless/st/cw1200/sta.c +@@ -1290,7 +1290,7 @@ static void cw1200_do_join(struct cw1200_common *priv) + 
rcu_read_lock(); + ssidie = ieee80211_bss_get_ie(bss, WLAN_EID_SSID); + if (ssidie) { +- join.ssid_len = ssidie[1]; ++ join.ssid_len = min(ssidie[1], IEEE80211_MAX_SSID_LEN); + memcpy(join.ssid, &ssidie[2], join.ssid_len); + } + rcu_read_unlock(); +diff --git a/drivers/pci/msi/msi.c b/drivers/pci/msi/msi.c +index 053bb9fac6e3e1..b638731aa5ff2f 100644 +--- a/drivers/pci/msi/msi.c ++++ b/drivers/pci/msi/msi.c +@@ -610,6 +610,9 @@ void msix_prepare_msi_desc(struct pci_dev *dev, struct msi_desc *desc) + if (desc->pci.msi_attrib.can_mask) { + void __iomem *addr = pci_msix_desc_addr(desc); + ++ /* Workaround for SUN NIU insanity, which requires write before read */ ++ if (dev->dev_flags & PCI_DEV_FLAGS_MSIX_TOUCH_ENTRY_DATA_FIRST) ++ writel(0, addr + PCI_MSIX_ENTRY_DATA); + desc->pci.msix_ctrl = readl(addr + PCI_MSIX_ENTRY_VECTOR_CTRL); + } + } +diff --git a/drivers/pcmcia/omap_cf.c b/drivers/pcmcia/omap_cf.c +index e613818dc0bc90..25382612e48acb 100644 +--- a/drivers/pcmcia/omap_cf.c ++++ b/drivers/pcmcia/omap_cf.c +@@ -215,6 +215,8 @@ static int __init omap_cf_probe(struct platform_device *pdev) + return -EINVAL; + + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); ++ if (!res) ++ return -EINVAL; + + cf = kzalloc(sizeof *cf, GFP_KERNEL); + if (!cf) +diff --git a/drivers/pcmcia/rsrc_iodyn.c b/drivers/pcmcia/rsrc_iodyn.c +index b04b16496b0c4b..2677b577c1f858 100644 +--- a/drivers/pcmcia/rsrc_iodyn.c ++++ b/drivers/pcmcia/rsrc_iodyn.c +@@ -62,6 +62,9 @@ static struct resource *__iodyn_find_io_region(struct pcmcia_socket *s, + unsigned long min = base; + int ret; + ++ if (!res) ++ return NULL; ++ + data.mask = align - 1; + data.offset = base & data.mask; + +diff --git a/drivers/pcmcia/rsrc_nonstatic.c b/drivers/pcmcia/rsrc_nonstatic.c +index bf9d070a44966d..da494fe451baf0 100644 +--- a/drivers/pcmcia/rsrc_nonstatic.c ++++ b/drivers/pcmcia/rsrc_nonstatic.c +@@ -375,7 +375,9 @@ static int do_validate_mem(struct pcmcia_socket *s, + + if (validate && !s->fake_cis) { + /* move it to the validated data set */ +- add_interval(&s_data->mem_db_valid, base, size); ++ ret = add_interval(&s_data->mem_db_valid, base, size); ++ if (ret) ++ return ret; + sub_interval(&s_data->mem_db, base, size); + } + +diff --git a/drivers/platform/x86/amd/pmc/pmc-quirks.c b/drivers/platform/x86/amd/pmc/pmc-quirks.c +index 04686ae1e976bd..6f5437d210a617 100644 +--- a/drivers/platform/x86/amd/pmc/pmc-quirks.c ++++ b/drivers/platform/x86/amd/pmc/pmc-quirks.c +@@ -242,6 +242,20 @@ static const struct dmi_system_id fwbug_list[] = { + DMI_MATCH(DMI_PRODUCT_NAME, "Lafite Pro V 14M"), + } + }, ++ { ++ .ident = "TUXEDO InfinityBook Pro 14/15 AMD Gen10", ++ .driver_data = &quirk_spurious_8042, ++ .matches = { ++ DMI_MATCH(DMI_BOARD_NAME, "XxHP4NAx"), ++ } ++ }, ++ { ++ .ident = "TUXEDO InfinityBook Pro 14/15 AMD Gen10", ++ .driver_data = &quirk_spurious_8042, ++ .matches = { ++ DMI_MATCH(DMI_BOARD_NAME, "XxKK4NAx_XxSP4NAx"), ++ } ++ }, + {} + }; + +diff --git a/drivers/scsi/lpfc/lpfc_nvmet.c b/drivers/scsi/lpfc/lpfc_nvmet.c +index d41fea53e41e90..502be061cc658c 100644 +--- a/drivers/scsi/lpfc/lpfc_nvmet.c ++++ b/drivers/scsi/lpfc/lpfc_nvmet.c +@@ -1243,7 +1243,7 @@ lpfc_nvmet_defer_rcv(struct nvmet_fc_target_port *tgtport, + struct lpfc_nvmet_tgtport *tgtp; + struct lpfc_async_xchg_ctx *ctxp = + container_of(rsp, struct lpfc_async_xchg_ctx, hdlrctx.fcp_req); +- struct rqb_dmabuf *nvmebuf = ctxp->rqb_buffer; ++ struct rqb_dmabuf *nvmebuf; + struct lpfc_hba *phba = ctxp->phba; + unsigned long iflag; + +@@ -1251,13 +1251,18 @@ 
lpfc_nvmet_defer_rcv(struct nvmet_fc_target_port *tgtport, + lpfc_nvmeio_data(phba, "NVMET DEFERRCV: xri x%x sz %d CPU %02x\n", + ctxp->oxid, ctxp->size, raw_smp_processor_id()); + ++ spin_lock_irqsave(&ctxp->ctxlock, iflag); ++ nvmebuf = ctxp->rqb_buffer; + if (!nvmebuf) { ++ spin_unlock_irqrestore(&ctxp->ctxlock, iflag); + lpfc_printf_log(phba, KERN_INFO, LOG_NVME_IOERR, + "6425 Defer rcv: no buffer oxid x%x: " + "flg %x ste %x\n", + ctxp->oxid, ctxp->flag, ctxp->state); + return; + } ++ ctxp->rqb_buffer = NULL; ++ spin_unlock_irqrestore(&ctxp->ctxlock, iflag); + + tgtp = phba->targetport->private; + if (tgtp) +@@ -1265,9 +1270,6 @@ lpfc_nvmet_defer_rcv(struct nvmet_fc_target_port *tgtport, + + /* Free the nvmebuf since a new buffer already replaced it */ + nvmebuf->hrq->rqbp->rqb_free_buffer(phba, nvmebuf); +- spin_lock_irqsave(&ctxp->ctxlock, iflag); +- ctxp->rqb_buffer = NULL; +- spin_unlock_irqrestore(&ctxp->ctxlock, iflag); + } + + /** +diff --git a/drivers/soc/qcom/mdt_loader.c b/drivers/soc/qcom/mdt_loader.c +index a6773075bfe3ef..0afecda0bfaa38 100644 +--- a/drivers/soc/qcom/mdt_loader.c ++++ b/drivers/soc/qcom/mdt_loader.c +@@ -38,12 +38,14 @@ static bool mdt_header_valid(const struct firmware *fw) + if (phend > fw->size) + return false; + +- if (ehdr->e_shentsize != sizeof(struct elf32_shdr)) +- return false; ++ if (ehdr->e_shentsize || ehdr->e_shnum) { ++ if (ehdr->e_shentsize != sizeof(struct elf32_shdr)) ++ return false; + +- shend = size_add(size_mul(sizeof(struct elf32_shdr), ehdr->e_shnum), ehdr->e_shoff); +- if (shend > fw->size) +- return false; ++ shend = size_add(size_mul(sizeof(struct elf32_shdr), ehdr->e_shnum), ehdr->e_shoff); ++ if (shend > fw->size) ++ return false; ++ } + + return true; + } +diff --git a/drivers/spi/spi-cadence-quadspi.c b/drivers/spi/spi-cadence-quadspi.c +index 7c17b8c0425e3c..bf9b816637d02e 100644 +--- a/drivers/spi/spi-cadence-quadspi.c ++++ b/drivers/spi/spi-cadence-quadspi.c +@@ -1868,8 +1868,6 @@ static int cqspi_probe(struct platform_device *pdev) + goto probe_setup_failed; + } + +- pm_runtime_enable(dev); +- + ret = spi_register_controller(host); + if (ret) { + dev_err(&pdev->dev, "failed to register SPI ctlr %d\n", ret); +@@ -1879,7 +1877,6 @@ static int cqspi_probe(struct platform_device *pdev) + return 0; + probe_setup_failed: + cqspi_controller_enable(cqspi, 0); +- pm_runtime_disable(dev); + probe_reset_failed: + if (cqspi->is_jh7110) + cqspi_jh7110_disable_clk(pdev, cqspi); +@@ -1901,8 +1898,7 @@ static void cqspi_remove(struct platform_device *pdev) + if (cqspi->rx_chan) + dma_release_channel(cqspi->rx_chan); + +- if (pm_runtime_get_sync(&pdev->dev) >= 0) +- clk_disable(cqspi->clk); ++ clk_disable_unprepare(cqspi->clk); + + if (cqspi->is_jh7110) + cqspi_jh7110_disable_clk(pdev, cqspi); +diff --git a/drivers/spi/spi-fsl-lpspi.c b/drivers/spi/spi-fsl-lpspi.c +index fa899ab2014c6a..8ef82a11ebb0fa 100644 +--- a/drivers/spi/spi-fsl-lpspi.c ++++ b/drivers/spi/spi-fsl-lpspi.c +@@ -3,8 +3,9 @@ + // Freescale i.MX7ULP LPSPI driver + // + // Copyright 2016 Freescale Semiconductor, Inc. 
+-// Copyright 2018 NXP Semiconductors ++// Copyright 2018, 2023, 2025 NXP + ++#include + #include + #include + #include +@@ -70,7 +71,7 @@ + #define DER_TDDE BIT(0) + #define CFGR1_PCSCFG BIT(27) + #define CFGR1_PINCFG (BIT(24)|BIT(25)) +-#define CFGR1_PCSPOL BIT(8) ++#define CFGR1_PCSPOL_MASK GENMASK(11, 8) + #define CFGR1_NOSTALL BIT(3) + #define CFGR1_HOST BIT(0) + #define FSR_TXCOUNT (0xFF) +@@ -82,6 +83,8 @@ + #define TCR_RXMSK BIT(19) + #define TCR_TXMSK BIT(18) + ++#define SR_CLEAR_MASK GENMASK(13, 8) ++ + struct fsl_lpspi_devtype_data { + u8 prescale_max; + }; +@@ -420,7 +423,9 @@ static int fsl_lpspi_config(struct fsl_lpspi_data *fsl_lpspi) + else + temp = CFGR1_PINCFG; + if (fsl_lpspi->config.mode & SPI_CS_HIGH) +- temp |= CFGR1_PCSPOL; ++ temp |= FIELD_PREP(CFGR1_PCSPOL_MASK, ++ BIT(fsl_lpspi->config.chip_select)); ++ + writel(temp, fsl_lpspi->base + IMX7ULP_CFGR1); + + temp = readl(fsl_lpspi->base + IMX7ULP_CR); +@@ -529,14 +534,13 @@ static int fsl_lpspi_reset(struct fsl_lpspi_data *fsl_lpspi) + fsl_lpspi_intctrl(fsl_lpspi, 0); + } + +- /* W1C for all flags in SR */ +- temp = 0x3F << 8; +- writel(temp, fsl_lpspi->base + IMX7ULP_SR); +- + /* Clear FIFO and disable module */ + temp = CR_RRF | CR_RTF; + writel(temp, fsl_lpspi->base + IMX7ULP_CR); + ++ /* W1C for all flags in SR */ ++ writel(SR_CLEAR_MASK, fsl_lpspi->base + IMX7ULP_SR); ++ + return 0; + } + +@@ -727,12 +731,10 @@ static int fsl_lpspi_pio_transfer(struct spi_controller *controller, + fsl_lpspi_write_tx_fifo(fsl_lpspi); + + ret = fsl_lpspi_wait_for_completion(controller); +- if (ret) +- return ret; + + fsl_lpspi_reset(fsl_lpspi); + +- return 0; ++ return ret; + } + + static int fsl_lpspi_transfer_one(struct spi_controller *controller, +@@ -780,7 +782,7 @@ static irqreturn_t fsl_lpspi_isr(int irq, void *dev_id) + if (temp_SR & SR_MBF || + readl(fsl_lpspi->base + IMX7ULP_FSR) & FSR_TXCOUNT) { + writel(SR_FCF, fsl_lpspi->base + IMX7ULP_SR); +- fsl_lpspi_intctrl(fsl_lpspi, IER_FCIE); ++ fsl_lpspi_intctrl(fsl_lpspi, IER_FCIE | (temp_IER & IER_TDIE)); + return IRQ_HANDLED; + } + +diff --git a/drivers/spi/spi-fsl-qspi.c b/drivers/spi/spi-fsl-qspi.c +index 79bac30e79af64..21e357966d2a22 100644 +--- a/drivers/spi/spi-fsl-qspi.c ++++ b/drivers/spi/spi-fsl-qspi.c +@@ -839,6 +839,19 @@ static const struct spi_controller_mem_ops fsl_qspi_mem_ops = { + .get_name = fsl_qspi_get_name, + }; + ++static void fsl_qspi_cleanup(void *data) ++{ ++ struct fsl_qspi *q = data; ++ ++ /* disable the hardware */ ++ qspi_writel(q, QUADSPI_MCR_MDIS_MASK, q->iobase + QUADSPI_MCR); ++ qspi_writel(q, 0x0, q->iobase + QUADSPI_RSER); ++ ++ fsl_qspi_clk_disable_unprep(q); ++ ++ mutex_destroy(&q->lock); ++} ++ + static int fsl_qspi_probe(struct platform_device *pdev) + { + struct spi_controller *ctlr; +@@ -928,15 +941,16 @@ static int fsl_qspi_probe(struct platform_device *pdev) + + ctlr->dev.of_node = np; + ++ ret = devm_add_action_or_reset(dev, fsl_qspi_cleanup, q); ++ if (ret) ++ goto err_put_ctrl; ++ + ret = devm_spi_register_controller(dev, ctlr); + if (ret) +- goto err_destroy_mutex; ++ goto err_put_ctrl; + + return 0; + +-err_destroy_mutex: +- mutex_destroy(&q->lock); +- + err_disable_clk: + fsl_qspi_clk_disable_unprep(q); + +@@ -947,19 +961,6 @@ static int fsl_qspi_probe(struct platform_device *pdev) + return ret; + } + +-static void fsl_qspi_remove(struct platform_device *pdev) +-{ +- struct fsl_qspi *q = platform_get_drvdata(pdev); +- +- /* disable the hardware */ +- qspi_writel(q, QUADSPI_MCR_MDIS_MASK, q->iobase + QUADSPI_MCR); +- 
qspi_writel(q, 0x0, q->iobase + QUADSPI_RSER); +- +- fsl_qspi_clk_disable_unprep(q); +- +- mutex_destroy(&q->lock); +-} +- + static int fsl_qspi_suspend(struct device *dev) + { + return 0; +@@ -997,7 +998,6 @@ static struct platform_driver fsl_qspi_driver = { + .pm = &fsl_qspi_pm_ops, + }, + .probe = fsl_qspi_probe, +- .remove_new = fsl_qspi_remove, + }; + module_platform_driver(fsl_qspi_driver); + +diff --git a/drivers/tee/optee/ffa_abi.c b/drivers/tee/optee/ffa_abi.c +index b8ba360e863edf..927c3d7947f9cf 100644 +--- a/drivers/tee/optee/ffa_abi.c ++++ b/drivers/tee/optee/ffa_abi.c +@@ -653,7 +653,7 @@ static int optee_ffa_do_call_with_arg(struct tee_context *ctx, + * with a matching configuration. + */ + +-static bool optee_ffa_api_is_compatbile(struct ffa_device *ffa_dev, ++static bool optee_ffa_api_is_compatible(struct ffa_device *ffa_dev, + const struct ffa_ops *ops) + { + const struct ffa_msg_ops *msg_ops = ops->msg_ops; +@@ -804,7 +804,7 @@ static int optee_ffa_probe(struct ffa_device *ffa_dev) + + ffa_ops = ffa_dev->ops; + +- if (!optee_ffa_api_is_compatbile(ffa_dev, ffa_ops)) ++ if (!optee_ffa_api_is_compatible(ffa_dev, ffa_ops)) + return -EINVAL; + + if (!optee_ffa_exchange_caps(ffa_dev, ffa_ops, &sec_caps, +diff --git a/drivers/tee/tee_shm.c b/drivers/tee/tee_shm.c +index 673cf035949483..426b818f2dd795 100644 +--- a/drivers/tee/tee_shm.c ++++ b/drivers/tee/tee_shm.c +@@ -489,9 +489,13 @@ EXPORT_SYMBOL_GPL(tee_shm_get_from_id); + */ + void tee_shm_put(struct tee_shm *shm) + { +- struct tee_device *teedev = shm->ctx->teedev; ++ struct tee_device *teedev; + bool do_release = false; + ++ if (!shm || !shm->ctx || !shm->ctx->teedev) ++ return; ++ ++ teedev = shm->ctx->teedev; + mutex_lock(&teedev->mutex); + if (refcount_dec_and_test(&shm->refcount)) { + /* +diff --git a/drivers/thermal/mediatek/lvts_thermal.c b/drivers/thermal/mediatek/lvts_thermal.c +index 8d0ccf494ba224..603b37ce1eb8e6 100644 +--- a/drivers/thermal/mediatek/lvts_thermal.c ++++ b/drivers/thermal/mediatek/lvts_thermal.c +@@ -67,10 +67,14 @@ + #define LVTS_CALSCALE_CONF 0x300 + #define LVTS_MONINT_CONF 0x8300318C + +-#define LVTS_MONINT_OFFSET_SENSOR0 0xC +-#define LVTS_MONINT_OFFSET_SENSOR1 0x180 +-#define LVTS_MONINT_OFFSET_SENSOR2 0x3000 +-#define LVTS_MONINT_OFFSET_SENSOR3 0x3000000 ++#define LVTS_MONINT_OFFSET_HIGH_INTEN_SENSOR0 BIT(3) ++#define LVTS_MONINT_OFFSET_HIGH_INTEN_SENSOR1 BIT(8) ++#define LVTS_MONINT_OFFSET_HIGH_INTEN_SENSOR2 BIT(13) ++#define LVTS_MONINT_OFFSET_HIGH_INTEN_SENSOR3 BIT(25) ++#define LVTS_MONINT_OFFSET_LOW_INTEN_SENSOR0 BIT(2) ++#define LVTS_MONINT_OFFSET_LOW_INTEN_SENSOR1 BIT(7) ++#define LVTS_MONINT_OFFSET_LOW_INTEN_SENSOR2 BIT(12) ++#define LVTS_MONINT_OFFSET_LOW_INTEN_SENSOR3 BIT(24) + + #define LVTS_INT_SENSOR0 0x0009001F + #define LVTS_INT_SENSOR1 0x001203E0 +@@ -308,23 +312,41 @@ static int lvts_get_temp(struct thermal_zone_device *tz, int *temp) + + static void lvts_update_irq_mask(struct lvts_ctrl *lvts_ctrl) + { +- u32 masks[] = { +- LVTS_MONINT_OFFSET_SENSOR0, +- LVTS_MONINT_OFFSET_SENSOR1, +- LVTS_MONINT_OFFSET_SENSOR2, +- LVTS_MONINT_OFFSET_SENSOR3, ++ static const u32 high_offset_inten_masks[] = { ++ LVTS_MONINT_OFFSET_HIGH_INTEN_SENSOR0, ++ LVTS_MONINT_OFFSET_HIGH_INTEN_SENSOR1, ++ LVTS_MONINT_OFFSET_HIGH_INTEN_SENSOR2, ++ LVTS_MONINT_OFFSET_HIGH_INTEN_SENSOR3, ++ }; ++ static const u32 low_offset_inten_masks[] = { ++ LVTS_MONINT_OFFSET_LOW_INTEN_SENSOR0, ++ LVTS_MONINT_OFFSET_LOW_INTEN_SENSOR1, ++ LVTS_MONINT_OFFSET_LOW_INTEN_SENSOR2, ++ 
LVTS_MONINT_OFFSET_LOW_INTEN_SENSOR3, + }; + u32 value = 0; + int i; + + value = readl(LVTS_MONINT(lvts_ctrl->base)); + +- for (i = 0; i < ARRAY_SIZE(masks); i++) { ++ for (i = 0; i < ARRAY_SIZE(high_offset_inten_masks); i++) { + if (lvts_ctrl->sensors[i].high_thresh == lvts_ctrl->high_thresh +- && lvts_ctrl->sensors[i].low_thresh == lvts_ctrl->low_thresh) +- value |= masks[i]; +- else +- value &= ~masks[i]; ++ && lvts_ctrl->sensors[i].low_thresh == lvts_ctrl->low_thresh) { ++ /* ++ * The minimum threshold needs to be configured in the ++ * OFFSETL register to get working interrupts, but we ++ * don't actually want to generate interrupts when ++ * crossing it. ++ */ ++ if (lvts_ctrl->low_thresh == -INT_MAX) { ++ value &= ~low_offset_inten_masks[i]; ++ value |= high_offset_inten_masks[i]; ++ } else { ++ value |= low_offset_inten_masks[i] | high_offset_inten_masks[i]; ++ } ++ } else { ++ value &= ~(low_offset_inten_masks[i] | high_offset_inten_masks[i]); ++ } + } + + writel(value, LVTS_MONINT(lvts_ctrl->base)); +diff --git a/fs/btrfs/btrfs_inode.h b/fs/btrfs/btrfs_inode.h +index c4968efc3fc464..a2e471d51a8f0b 100644 +--- a/fs/btrfs/btrfs_inode.h ++++ b/fs/btrfs/btrfs_inode.h +@@ -179,7 +179,7 @@ struct btrfs_inode { + u64 new_delalloc_bytes; + /* + * The offset of the last dir index key that was logged. +- * This is used only for directories. ++ * This is used only for directories. Protected by 'log_mutex'. + */ + u64 last_dir_index_offset; + }; +diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c +index ed08d8e5639f59..48b06459bc485a 100644 +--- a/fs/btrfs/extent_io.c ++++ b/fs/btrfs/extent_io.c +@@ -1742,7 +1742,7 @@ static int submit_eb_subpage(struct page *page, struct writeback_control *wbc) + subpage->bitmaps)) { + spin_unlock_irqrestore(&subpage->lock, flags); + spin_unlock(&page->mapping->private_lock); +- bit_start++; ++ bit_start += sectors_per_node; + continue; + } + +diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c +index 4502a474a81dab..ee5ffeab85bb78 100644 +--- a/fs/btrfs/inode.c ++++ b/fs/btrfs/inode.c +@@ -8525,6 +8525,7 @@ struct inode *btrfs_alloc_inode(struct super_block *sb) + ei->last_sub_trans = 0; + ei->logged_trans = 0; + ei->delalloc_bytes = 0; ++ /* new_delalloc_bytes and last_dir_index_offset are in a union. */ + ei->new_delalloc_bytes = 0; + ei->defrag_bytes = 0; + ei->disk_i_size = 0; +diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c +index 9439abf415ae36..e5d6bc1bb5e5da 100644 +--- a/fs/btrfs/tree-log.c ++++ b/fs/btrfs/tree-log.c +@@ -3356,6 +3356,31 @@ int btrfs_free_log_root_tree(struct btrfs_trans_handle *trans, + return 0; + } + ++static bool mark_inode_as_not_logged(const struct btrfs_trans_handle *trans, ++ struct btrfs_inode *inode) ++{ ++ bool ret = false; ++ ++ /* ++ * Do this only if ->logged_trans is still 0 to prevent races with ++ * concurrent logging as we may see the inode not logged when ++ * inode_logged() is called but it gets logged after inode_logged() did ++ * not find it in the log tree and we end up setting ->logged_trans to a ++ * value less than trans->transid after the concurrent logging task has ++ * set it to trans->transid. As a consequence, subsequent rename, unlink ++ * and link operations may end up not logging new names and removing old ++ * names from the log. 
++ */ ++ spin_lock(&inode->lock); ++ if (inode->logged_trans == 0) ++ inode->logged_trans = trans->transid - 1; ++ else if (inode->logged_trans == trans->transid) ++ ret = true; ++ spin_unlock(&inode->lock); ++ ++ return ret; ++} ++ + /* + * Check if an inode was logged in the current transaction. This correctly deals + * with the case where the inode was logged but has a logged_trans of 0, which +@@ -3373,15 +3398,32 @@ static int inode_logged(const struct btrfs_trans_handle *trans, + struct btrfs_key key; + int ret; + +- if (inode->logged_trans == trans->transid) ++ /* ++ * Quick lockless call, since once ->logged_trans is set to the current ++ * transaction, we never set it to a lower value anywhere else. ++ */ ++ if (data_race(inode->logged_trans) == trans->transid) + return 1; + + /* +- * If logged_trans is not 0, then we know the inode logged was not logged +- * in this transaction, so we can return false right away. ++ * If logged_trans is not 0 and not trans->transid, then we know the ++ * inode was not logged in this transaction, so we can return false ++ * right away. We take the lock to avoid a race caused by load/store ++ * tearing with a concurrent btrfs_log_inode() call or a concurrent task ++ * in this function further below - an update to trans->transid can be ++ * teared into two 32 bits updates for example, in which case we could ++ * see a positive value that is not trans->transid and assume the inode ++ * was not logged when it was. + */ +- if (inode->logged_trans > 0) ++ spin_lock(&inode->lock); ++ if (inode->logged_trans == trans->transid) { ++ spin_unlock(&inode->lock); ++ return 1; ++ } else if (inode->logged_trans > 0) { ++ spin_unlock(&inode->lock); + return 0; ++ } ++ spin_unlock(&inode->lock); + + /* + * If no log tree was created for this root in this transaction, then +@@ -3390,10 +3432,8 @@ static int inode_logged(const struct btrfs_trans_handle *trans, + * transaction's ID, to avoid the search below in a future call in case + * a log tree gets created after this. + */ +- if (!test_bit(BTRFS_ROOT_HAS_LOG_TREE, &inode->root->state)) { +- inode->logged_trans = trans->transid - 1; +- return 0; +- } ++ if (!test_bit(BTRFS_ROOT_HAS_LOG_TREE, &inode->root->state)) ++ return mark_inode_as_not_logged(trans, inode); + + /* + * We have a log tree and the inode's logged_trans is 0. We can't tell +@@ -3447,8 +3487,7 @@ static int inode_logged(const struct btrfs_trans_handle *trans, + * Set logged_trans to a value greater than 0 and less then the + * current transaction to avoid doing the search in future calls. + */ +- inode->logged_trans = trans->transid - 1; +- return 0; ++ return mark_inode_as_not_logged(trans, inode); + } + + /* +@@ -3456,20 +3495,9 @@ static int inode_logged(const struct btrfs_trans_handle *trans, + * the current transacion's ID, to avoid future tree searches as long as + * the inode is not evicted again. + */ ++ spin_lock(&inode->lock); + inode->logged_trans = trans->transid; +- +- /* +- * If it's a directory, then we must set last_dir_index_offset to the +- * maximum possible value, so that the next attempt to log the inode does +- * not skip checking if dir index keys found in modified subvolume tree +- * leaves have been logged before, otherwise it would result in attempts +- * to insert duplicate dir index keys in the log tree. This must be done +- * because last_dir_index_offset is an in-memory only field, not persisted +- * in the inode item or any other on-disk structure, so its value is lost +- * once the inode is evicted. 
+- */ +- if (S_ISDIR(inode->vfs_inode.i_mode)) +- inode->last_dir_index_offset = (u64)-1; ++ spin_unlock(&inode->lock); + + return 1; + } +@@ -4041,7 +4069,7 @@ static noinline int log_dir_items(struct btrfs_trans_handle *trans, + + /* + * If the inode was logged before and it was evicted, then its +- * last_dir_index_offset is (u64)-1, so we don't the value of the last index ++ * last_dir_index_offset is 0, so we don't know the value of the last index + * key offset. If that's the case, search for it and update the inode. This + * is to avoid lookups in the log tree every time we try to insert a dir index + * key from a leaf changed in the current transaction, and to allow us to always +@@ -4057,7 +4085,7 @@ static int update_last_dir_index_offset(struct btrfs_inode *inode, + + lockdep_assert_held(&inode->log_mutex); + +- if (inode->last_dir_index_offset != (u64)-1) ++ if (inode->last_dir_index_offset != 0) + return 0; + + if (!ctx->logged_before) { +diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c +index 0a498bc60f5573..ed110568d6127f 100644 +--- a/fs/fs-writeback.c ++++ b/fs/fs-writeback.c +@@ -2536,10 +2536,6 @@ void __mark_inode_dirty(struct inode *inode, int flags) + wakeup_bdi = inode_io_list_move_locked(inode, wb, + dirty_list); + +- spin_unlock(&wb->list_lock); +- spin_unlock(&inode->i_lock); +- trace_writeback_dirty_inode_enqueue(inode); +- + /* + * If this is the first dirty inode for this bdi, + * we have to wake-up the corresponding bdi thread +@@ -2549,6 +2545,11 @@ void __mark_inode_dirty(struct inode *inode, int flags) + if (wakeup_bdi && + (wb->bdi->capabilities & BDI_CAP_WRITEBACK)) + wb_wakeup_delayed(wb); ++ ++ spin_unlock(&wb->list_lock); ++ spin_unlock(&inode->i_lock); ++ trace_writeback_dirty_inode_enqueue(inode); ++ + return; + } + } +diff --git a/fs/ocfs2/inode.c b/fs/ocfs2/inode.c +index 999111bfc27178..c561a8a6493e7c 100644 +--- a/fs/ocfs2/inode.c ++++ b/fs/ocfs2/inode.c +@@ -1205,6 +1205,9 @@ static void ocfs2_clear_inode(struct inode *inode) + * the journal is flushed before journal shutdown. Thus it is safe to + * have inodes get cleaned up after journal shutdown. 
+ */ ++ if (!osb->journal) ++ return; ++ + jbd2_journal_release_jbd_inode(osb->journal->j_journal, + &oi->ip_jinode); + } +diff --git a/fs/proc/generic.c b/fs/proc/generic.c +index 2187d9ca351ced..db3f2c6abc162a 100644 +--- a/fs/proc/generic.c ++++ b/fs/proc/generic.c +@@ -362,6 +362,25 @@ static const struct inode_operations proc_dir_inode_operations = { + .setattr = proc_notify_change, + }; + ++static void pde_set_flags(struct proc_dir_entry *pde) ++{ ++ const struct proc_ops *proc_ops = pde->proc_ops; ++ ++ if (!proc_ops) ++ return; ++ ++ if (proc_ops->proc_flags & PROC_ENTRY_PERMANENT) ++ pde->flags |= PROC_ENTRY_PERMANENT; ++ if (proc_ops->proc_read_iter) ++ pde->flags |= PROC_ENTRY_proc_read_iter; ++#ifdef CONFIG_COMPAT ++ if (proc_ops->proc_compat_ioctl) ++ pde->flags |= PROC_ENTRY_proc_compat_ioctl; ++#endif ++ if (proc_ops->proc_lseek) ++ pde->flags |= PROC_ENTRY_proc_lseek; ++} ++ + /* returns the registered entry, or frees dp and returns NULL on failure */ + struct proc_dir_entry *proc_register(struct proc_dir_entry *dir, + struct proc_dir_entry *dp) +@@ -369,6 +388,8 @@ struct proc_dir_entry *proc_register(struct proc_dir_entry *dir, + if (proc_alloc_inum(&dp->low_ino)) + goto out_free_entry; + ++ pde_set_flags(dp); ++ + write_lock(&proc_subdir_lock); + dp->parent = dir; + if (pde_subdir_insert(dir, dp) == false) { +@@ -557,20 +578,6 @@ struct proc_dir_entry *proc_create_reg(const char *name, umode_t mode, + return p; + } + +-static void pde_set_flags(struct proc_dir_entry *pde) +-{ +- if (pde->proc_ops->proc_flags & PROC_ENTRY_PERMANENT) +- pde->flags |= PROC_ENTRY_PERMANENT; +- if (pde->proc_ops->proc_read_iter) +- pde->flags |= PROC_ENTRY_proc_read_iter; +-#ifdef CONFIG_COMPAT +- if (pde->proc_ops->proc_compat_ioctl) +- pde->flags |= PROC_ENTRY_proc_compat_ioctl; +-#endif +- if (pde->proc_ops->proc_lseek) +- pde->flags |= PROC_ENTRY_proc_lseek; +-} +- + struct proc_dir_entry *proc_create_data(const char *name, umode_t mode, + struct proc_dir_entry *parent, + const struct proc_ops *proc_ops, void *data) +@@ -581,7 +588,6 @@ struct proc_dir_entry *proc_create_data(const char *name, umode_t mode, + if (!p) + return NULL; + p->proc_ops = proc_ops; +- pde_set_flags(p); + return proc_register(parent, p); + } + EXPORT_SYMBOL(proc_create_data); +@@ -632,7 +638,6 @@ struct proc_dir_entry *proc_create_seq_private(const char *name, umode_t mode, + p->proc_ops = &proc_seq_ops; + p->seq_ops = ops; + p->state_size = state_size; +- pde_set_flags(p); + return proc_register(parent, p); + } + EXPORT_SYMBOL(proc_create_seq_private); +@@ -663,7 +668,6 @@ struct proc_dir_entry *proc_create_single_data(const char *name, umode_t mode, + return NULL; + p->proc_ops = &proc_single_ops; + p->single_show = show; +- pde_set_flags(p); + return proc_register(parent, p); + } + EXPORT_SYMBOL(proc_create_single_data); +diff --git a/fs/smb/client/cifs_unicode.c b/fs/smb/client/cifs_unicode.c +index 4cc6e0896fad37..f8659d36793f17 100644 +--- a/fs/smb/client/cifs_unicode.c ++++ b/fs/smb/client/cifs_unicode.c +@@ -629,6 +629,9 @@ cifs_strndup_to_utf16(const char *src, const int maxlen, int *utf16_len, + int len; + __le16 *dst; + ++ if (!src) ++ return NULL; ++ + len = cifs_local_to_utf16_bytes(src, maxlen, cp); + len += 2; /* NULL */ + dst = kmalloc(len, GFP_KERNEL); +diff --git a/include/linux/bpf-cgroup.h b/include/linux/bpf-cgroup.h +index 2331cd8174fe3f..684c4822f76a3f 100644 +--- a/include/linux/bpf-cgroup.h ++++ b/include/linux/bpf-cgroup.h +@@ -72,9 +72,6 @@ to_cgroup_bpf_attach_type(enum bpf_attach_type 
attach_type) + extern struct static_key_false cgroup_bpf_enabled_key[MAX_CGROUP_BPF_ATTACH_TYPE]; + #define cgroup_bpf_enabled(atype) static_branch_unlikely(&cgroup_bpf_enabled_key[atype]) + +-#define for_each_cgroup_storage_type(stype) \ +- for (stype = 0; stype < MAX_BPF_CGROUP_STORAGE_TYPE; stype++) +- + struct bpf_cgroup_storage_map; + + struct bpf_storage_buffer { +@@ -500,8 +497,6 @@ static inline int bpf_percpu_cgroup_storage_update(struct bpf_map *map, + #define BPF_CGROUP_RUN_PROG_SETSOCKOPT(sock, level, optname, optval, optlen, \ + kernel_optval) ({ 0; }) + +-#define for_each_cgroup_storage_type(stype) for (; false; ) +- + #endif /* CONFIG_CGROUP_BPF */ + + #endif /* _BPF_CGROUP_H */ +diff --git a/include/linux/bpf.h b/include/linux/bpf.h +index 17de12a98f858a..83da9c81fa86ad 100644 +--- a/include/linux/bpf.h ++++ b/include/linux/bpf.h +@@ -194,6 +194,20 @@ enum btf_field_type { + BPF_REFCOUNT = (1 << 8), + }; + ++enum bpf_cgroup_storage_type { ++ BPF_CGROUP_STORAGE_SHARED, ++ BPF_CGROUP_STORAGE_PERCPU, ++ __BPF_CGROUP_STORAGE_MAX ++#define MAX_BPF_CGROUP_STORAGE_TYPE __BPF_CGROUP_STORAGE_MAX ++}; ++ ++#ifdef CONFIG_CGROUP_BPF ++# define for_each_cgroup_storage_type(stype) \ ++ for (stype = 0; stype < MAX_BPF_CGROUP_STORAGE_TYPE; stype++) ++#else ++# define for_each_cgroup_storage_type(stype) for (; false; ) ++#endif /* CONFIG_CGROUP_BPF */ ++ + typedef void (*btf_dtor_kfunc_t)(void *); + + struct btf_field_kptr { +@@ -244,6 +258,19 @@ struct bpf_list_node_kern { + void *owner; + } __attribute__((aligned(8))); + ++/* 'Ownership' of program-containing map is claimed by the first program ++ * that is going to use this map or by the first program which FD is ++ * stored in the map to make sure that all callers and callees have the ++ * same prog type, JITed flag and xdp_has_frags flag. ++ */ ++struct bpf_map_owner { ++ enum bpf_prog_type type; ++ bool jited; ++ bool xdp_has_frags; ++ u64 storage_cookie[MAX_BPF_CGROUP_STORAGE_TYPE]; ++ const struct btf_type *attach_func_proto; ++}; ++ + struct bpf_map { + /* The first two cachelines with read-mostly members of which some + * are also accessed in fast-path (e.g. ops, max_entries). +@@ -282,24 +309,15 @@ struct bpf_map { + }; + struct mutex freeze_mutex; + atomic64_t writecnt; +- /* 'Ownership' of program-containing map is claimed by the first program +- * that is going to use this map or by the first program which FD is +- * stored in the map to make sure that all callers and callees have the +- * same prog type, JITed flag and xdp_has_frags flag. +- */ +- struct { +- const struct btf_type *attach_func_proto; +- spinlock_t lock; +- enum bpf_prog_type type; +- bool jited; +- bool xdp_has_frags; +- } owner; ++ spinlock_t owner_lock; ++ struct bpf_map_owner *owner; + bool bypass_spec_v1; + bool frozen; /* write-once; write-protected by freeze_mutex */ + bool free_after_mult_rcu_gp; + bool free_after_rcu_gp; + atomic64_t sleepable_refcnt; + s64 __percpu *elem_count; ++ u64 cookie; /* write-once */ + }; + + static inline const char *btf_field_type_name(enum btf_field_type type) +@@ -994,14 +1012,6 @@ struct bpf_prog_offload { + u32 jited_len; + }; + +-enum bpf_cgroup_storage_type { +- BPF_CGROUP_STORAGE_SHARED, +- BPF_CGROUP_STORAGE_PERCPU, +- __BPF_CGROUP_STORAGE_MAX +-}; +- +-#define MAX_BPF_CGROUP_STORAGE_TYPE __BPF_CGROUP_STORAGE_MAX +- + /* The longest tracepoint has 12 args. 
+ * See include/trace/bpf_probe.h + */ +@@ -1811,6 +1821,16 @@ static inline bool bpf_map_flags_access_ok(u32 access_flags) + (BPF_F_RDONLY_PROG | BPF_F_WRONLY_PROG); + } + ++static inline struct bpf_map_owner *bpf_map_owner_alloc(struct bpf_map *map) ++{ ++ return kzalloc(sizeof(*map->owner), GFP_ATOMIC); ++} ++ ++static inline void bpf_map_owner_free(struct bpf_map *map) ++{ ++ kfree(map->owner); ++} ++ + struct bpf_event_entry { + struct perf_event *event; + struct file *perf_file; +diff --git a/include/linux/pci.h b/include/linux/pci.h +index ac5bd1718af241..0511f6f9a4e6ad 100644 +--- a/include/linux/pci.h ++++ b/include/linux/pci.h +@@ -245,6 +245,8 @@ enum pci_dev_flags { + PCI_DEV_FLAGS_NO_RELAXED_ORDERING = (__force pci_dev_flags_t) (1 << 11), + /* Device does honor MSI masking despite saying otherwise */ + PCI_DEV_FLAGS_HAS_MSI_MASKING = (__force pci_dev_flags_t) (1 << 12), ++ /* Device requires write to PCI_MSIX_ENTRY_DATA before any MSIX reads */ ++ PCI_DEV_FLAGS_MSIX_TOUCH_ENTRY_DATA_FIRST = (__force pci_dev_flags_t) (1 << 13), + }; + + enum pci_irq_reroute_variant { +diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h +index e2c9a0c259df3b..e42388b6998b17 100644 +--- a/include/linux/pgtable.h ++++ b/include/linux/pgtable.h +@@ -1465,6 +1465,22 @@ static inline int pmd_protnone(pmd_t pmd) + } + #endif /* CONFIG_NUMA_BALANCING */ + ++/* ++ * Architectures can set this mask to a combination of PGTBL_P?D_MODIFIED values ++ * and let generic vmalloc and ioremap code know when arch_sync_kernel_mappings() ++ * needs to be called. ++ */ ++#ifndef ARCH_PAGE_TABLE_SYNC_MASK ++#define ARCH_PAGE_TABLE_SYNC_MASK 0 ++#endif ++ ++/* ++ * There is no default implementation for arch_sync_kernel_mappings(). It is ++ * relied upon the compiler to optimize calls out if ARCH_PAGE_TABLE_SYNC_MASK ++ * is 0. ++ */ ++void arch_sync_kernel_mappings(unsigned long start, unsigned long end); ++ + #endif /* CONFIG_MMU */ + + #ifdef CONFIG_HAVE_ARCH_HUGE_VMAP +diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h +index c720be70c8ddde..897f2109f6ada8 100644 +--- a/include/linux/vmalloc.h ++++ b/include/linux/vmalloc.h +@@ -173,22 +173,6 @@ extern int remap_vmalloc_range_partial(struct vm_area_struct *vma, + extern int remap_vmalloc_range(struct vm_area_struct *vma, void *addr, + unsigned long pgoff); + +-/* +- * Architectures can set this mask to a combination of PGTBL_P?D_MODIFIED values +- * and let generic vmalloc and ioremap code know when arch_sync_kernel_mappings() +- * needs to be called. +- */ +-#ifndef ARCH_PAGE_TABLE_SYNC_MASK +-#define ARCH_PAGE_TABLE_SYNC_MASK 0 +-#endif +- +-/* +- * There is no default implementation for arch_sync_kernel_mappings(). It is +- * relied upon the compiler to optimize calls out if ARCH_PAGE_TABLE_SYNC_MASK +- * is 0. +- */ +-void arch_sync_kernel_mappings(unsigned long start, unsigned long end); +- + /* + * Lowlevel-APIs (not for driver use!) 
+ */ +diff --git a/include/net/netlink.h b/include/net/netlink.h +index 8a7cd1170e1f7b..aba2b162a2260b 100644 +--- a/include/net/netlink.h ++++ b/include/net/netlink.h +@@ -128,6 +128,8 @@ + * nla_len(nla) length of attribute payload + * + * Attribute Payload Access for Basic Types: ++ * nla_get_uint(nla) get payload for a uint attribute ++ * nla_get_sint(nla) get payload for a sint attribute + * nla_get_u8(nla) get payload for a u8 attribute + * nla_get_u16(nla) get payload for a u16 attribute + * nla_get_u32(nla) get payload for a u32 attribute +@@ -183,6 +185,8 @@ enum { + NLA_REJECT, + NLA_BE16, + NLA_BE32, ++ NLA_SINT, ++ NLA_UINT, + __NLA_TYPE_MAX, + }; + +@@ -229,6 +233,7 @@ enum nla_policy_validation { + * nested header (or empty); len field is used if + * nested_policy is also used, for the max attr + * number in the nested policy. ++ * NLA_SINT, NLA_UINT, + * NLA_U8, NLA_U16, + * NLA_U32, NLA_U64, + * NLA_S8, NLA_S16, +@@ -260,12 +265,14 @@ enum nla_policy_validation { + * while an array has the nested attributes at another + * level down and the attribute types directly in the + * nesting don't matter. ++ * NLA_UINT, + * NLA_U8, + * NLA_U16, + * NLA_U32, + * NLA_U64, + * NLA_BE16, + * NLA_BE32, ++ * NLA_SINT, + * NLA_S8, + * NLA_S16, + * NLA_S32, +@@ -280,6 +287,7 @@ enum nla_policy_validation { + * or NLA_POLICY_FULL_RANGE_SIGNED() macros instead. + * Use the NLA_POLICY_MIN(), NLA_POLICY_MAX() and + * NLA_POLICY_RANGE() macros. ++ * NLA_UINT, + * NLA_U8, + * NLA_U16, + * NLA_U32, +@@ -288,6 +296,7 @@ enum nla_policy_validation { + * to a struct netlink_range_validation that indicates + * the min/max values. + * Use NLA_POLICY_FULL_RANGE(). ++ * NLA_SINT, + * NLA_S8, + * NLA_S16, + * NLA_S32, +@@ -377,9 +386,11 @@ struct nla_policy { + + #define __NLA_IS_UINT_TYPE(tp) \ + (tp == NLA_U8 || tp == NLA_U16 || tp == NLA_U32 || \ +- tp == NLA_U64 || tp == NLA_BE16 || tp == NLA_BE32) ++ tp == NLA_U64 || tp == NLA_UINT || \ ++ tp == NLA_BE16 || tp == NLA_BE32) + #define __NLA_IS_SINT_TYPE(tp) \ +- (tp == NLA_S8 || tp == NLA_S16 || tp == NLA_S32 || tp == NLA_S64) ++ (tp == NLA_S8 || tp == NLA_S16 || tp == NLA_S32 || tp == NLA_S64 || \ ++ tp == NLA_SINT) + + #define __NLA_ENSURE(condition) BUILD_BUG_ON_ZERO(!(condition)) + #define NLA_ENSURE_UINT_TYPE(tp) \ +@@ -1357,6 +1368,22 @@ static inline int nla_put_u32(struct sk_buff *skb, int attrtype, u32 value) + return nla_put(skb, attrtype, sizeof(u32), &tmp); + } + ++/** ++ * nla_put_uint - Add a variable-size unsigned int to a socket buffer ++ * @skb: socket buffer to add attribute to ++ * @attrtype: attribute type ++ * @value: numeric value ++ */ ++static inline int nla_put_uint(struct sk_buff *skb, int attrtype, u64 value) ++{ ++ u64 tmp64 = value; ++ u32 tmp32 = value; ++ ++ if (tmp64 == tmp32) ++ return nla_put_u32(skb, attrtype, tmp32); ++ return nla_put(skb, attrtype, sizeof(u64), &tmp64); ++} ++ + /** + * nla_put_be32 - Add a __be32 netlink attribute to a socket buffer + * @skb: socket buffer to add attribute to +@@ -1511,6 +1538,22 @@ static inline int nla_put_s64(struct sk_buff *skb, int attrtype, s64 value, + return nla_put_64bit(skb, attrtype, sizeof(s64), &tmp, padattr); + } + ++/** ++ * nla_put_sint - Add a variable-size signed int to a socket buffer ++ * @skb: socket buffer to add attribute to ++ * @attrtype: attribute type ++ * @value: numeric value ++ */ ++static inline int nla_put_sint(struct sk_buff *skb, int attrtype, s64 value) ++{ ++ s64 tmp64 = value; ++ s32 tmp32 = value; ++ ++ if (tmp64 == tmp32) ++ return 
nla_put_s32(skb, attrtype, tmp32); ++ return nla_put(skb, attrtype, sizeof(s64), &tmp64); ++} ++ + /** + * nla_put_string - Add a string netlink attribute to a socket buffer + * @skb: socket buffer to add attribute to +@@ -1667,6 +1710,17 @@ static inline u64 nla_get_u64(const struct nlattr *nla) + return tmp; + } + ++/** ++ * nla_get_uint - return payload of uint attribute ++ * @nla: uint netlink attribute ++ */ ++static inline u64 nla_get_uint(const struct nlattr *nla) ++{ ++ if (nla_len(nla) == sizeof(u32)) ++ return nla_get_u32(nla); ++ return nla_get_u64(nla); ++} ++ + /** + * nla_get_be64 - return payload of __be64 attribute + * @nla: __be64 netlink attribute +@@ -1729,6 +1783,17 @@ static inline s64 nla_get_s64(const struct nlattr *nla) + return tmp; + } + ++/** ++ * nla_get_sint - return payload of uint attribute ++ * @nla: uint netlink attribute ++ */ ++static inline s64 nla_get_sint(const struct nlattr *nla) ++{ ++ if (nla_len(nla) == sizeof(s32)) ++ return nla_get_s32(nla); ++ return nla_get_s64(nla); ++} ++ + /** + * nla_get_flag - return payload of flag attribute + * @nla: flag netlink attribute +diff --git a/include/uapi/linux/netlink.h b/include/uapi/linux/netlink.h +index e2ae82e3f9f718..f87aaf28a6491d 100644 +--- a/include/uapi/linux/netlink.h ++++ b/include/uapi/linux/netlink.h +@@ -298,6 +298,8 @@ struct nla_bitfield32 { + * entry has attributes again, the policy for those inner ones + * and the corresponding maxtype may be specified. + * @NL_ATTR_TYPE_BITFIELD32: &struct nla_bitfield32 attribute ++ * @NL_ATTR_TYPE_SINT: 32-bit or 64-bit signed attribute, aligned to 4B ++ * @NL_ATTR_TYPE_UINT: 32-bit or 64-bit unsigned attribute, aligned to 4B + */ + enum netlink_attribute_type { + NL_ATTR_TYPE_INVALID, +@@ -322,6 +324,9 @@ enum netlink_attribute_type { + NL_ATTR_TYPE_NESTED_ARRAY, + + NL_ATTR_TYPE_BITFIELD32, ++ ++ NL_ATTR_TYPE_SINT, ++ NL_ATTR_TYPE_UINT, + }; + + /** +diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c +index 5eaaf95048abc1..3618be05fc3527 100644 +--- a/kernel/bpf/core.c ++++ b/kernel/bpf/core.c +@@ -2262,28 +2262,44 @@ static bool __bpf_prog_map_compatible(struct bpf_map *map, + const struct bpf_prog *fp) + { + enum bpf_prog_type prog_type = resolve_prog_type(fp); +- bool ret; + struct bpf_prog_aux *aux = fp->aux; ++ enum bpf_cgroup_storage_type i; ++ bool ret = false; ++ u64 cookie; + + if (fp->kprobe_override) +- return false; ++ return ret; + +- spin_lock(&map->owner.lock); +- if (!map->owner.type) { +- /* There's no owner yet where we could check for +- * compatibility. +- */ +- map->owner.type = prog_type; +- map->owner.jited = fp->jited; +- map->owner.xdp_has_frags = aux->xdp_has_frags; +- map->owner.attach_func_proto = aux->attach_func_proto; ++ spin_lock(&map->owner_lock); ++ /* There's no owner yet where we could check for compatibility. */ ++ if (!map->owner) { ++ map->owner = bpf_map_owner_alloc(map); ++ if (!map->owner) ++ goto err; ++ map->owner->type = prog_type; ++ map->owner->jited = fp->jited; ++ map->owner->xdp_has_frags = aux->xdp_has_frags; ++ map->owner->attach_func_proto = aux->attach_func_proto; ++ for_each_cgroup_storage_type(i) { ++ map->owner->storage_cookie[i] = ++ aux->cgroup_storage[i] ? 
++ aux->cgroup_storage[i]->cookie : 0; ++ } + ret = true; + } else { +- ret = map->owner.type == prog_type && +- map->owner.jited == fp->jited && +- map->owner.xdp_has_frags == aux->xdp_has_frags; ++ ret = map->owner->type == prog_type && ++ map->owner->jited == fp->jited && ++ map->owner->xdp_has_frags == aux->xdp_has_frags; ++ for_each_cgroup_storage_type(i) { ++ if (!ret) ++ break; ++ cookie = aux->cgroup_storage[i] ? ++ aux->cgroup_storage[i]->cookie : 0; ++ ret = map->owner->storage_cookie[i] == cookie || ++ !cookie; ++ } + if (ret && +- map->owner.attach_func_proto != aux->attach_func_proto) { ++ map->owner->attach_func_proto != aux->attach_func_proto) { + switch (prog_type) { + case BPF_PROG_TYPE_TRACING: + case BPF_PROG_TYPE_LSM: +@@ -2296,8 +2312,8 @@ static bool __bpf_prog_map_compatible(struct bpf_map *map, + } + } + } +- spin_unlock(&map->owner.lock); +- ++err: ++ spin_unlock(&map->owner_lock); + return ret; + } + +diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c +index b66349f892f25e..98f3f206d112e1 100644 +--- a/kernel/bpf/syscall.c ++++ b/kernel/bpf/syscall.c +@@ -35,6 +35,7 @@ + #include + #include + #include ++#include + #include + + #include +@@ -50,6 +51,7 @@ + #define BPF_OBJ_FLAG_MASK (BPF_F_RDONLY | BPF_F_WRONLY) + + DEFINE_PER_CPU(int, bpf_prog_active); ++DEFINE_COOKIE(bpf_map_cookie); + static DEFINE_IDR(prog_idr); + static DEFINE_SPINLOCK(prog_idr_lock); + static DEFINE_IDR(map_idr); +@@ -696,6 +698,7 @@ static void bpf_map_free_deferred(struct work_struct *work) + + security_bpf_map_free(map); + bpf_map_release_memcg(map); ++ bpf_map_owner_free(map); + /* implementation dependent freeing */ + map->ops->map_free(map); + /* Delay freeing of btf_record for maps, as map_free +@@ -713,7 +716,6 @@ static void bpf_map_free_deferred(struct work_struct *work) + */ + btf_put(btf); + } +- + static void bpf_map_put_uref(struct bpf_map *map) + { + if (atomic64_dec_and_test(&map->usercnt)) { +@@ -805,12 +807,12 @@ static void bpf_map_show_fdinfo(struct seq_file *m, struct file *filp) + struct bpf_map *map = filp->private_data; + u32 type = 0, jited = 0; + +- if (map_type_contains_progs(map)) { +- spin_lock(&map->owner.lock); +- type = map->owner.type; +- jited = map->owner.jited; +- spin_unlock(&map->owner.lock); ++ spin_lock(&map->owner_lock); ++ if (map->owner) { ++ type = map->owner->type; ++ jited = map->owner->jited; + } ++ spin_unlock(&map->owner_lock); + + seq_printf(m, + "map_type:\t%u\n" +@@ -1253,10 +1255,14 @@ static int map_create(union bpf_attr *attr) + if (err < 0) + goto free_map; + ++ preempt_disable(); ++ map->cookie = gen_cookie_next(&bpf_map_cookie); ++ preempt_enable(); ++ + atomic64_set(&map->refcnt, 1); + atomic64_set(&map->usercnt, 1); + mutex_init(&map->freeze_mutex); +- spin_lock_init(&map->owner.lock); ++ spin_lock_init(&map->owner_lock); + + if (attr->btf_key_type_id || attr->btf_value_type_id || + /* Even the map's value is a kernel's struct, +diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c +index c61698cff0f3a8..b87426b74eec28 100644 +--- a/kernel/sched/topology.c ++++ b/kernel/sched/topology.c +@@ -2140,6 +2140,8 @@ int sched_numa_find_nth_cpu(const struct cpumask *cpus, int cpu, int node) + goto unlock; + + hop_masks = bsearch(&k, k.masks, sched_domains_numa_levels, sizeof(k.masks[0]), hop_cmp); ++ if (!hop_masks) ++ goto unlock; + hop = hop_masks - k.masks; + + ret = hop ? 
+diff --git a/lib/nlattr.c b/lib/nlattr.c +index ba698a097fc810..0319e811bb10a3 100644 +--- a/lib/nlattr.c ++++ b/lib/nlattr.c +@@ -138,6 +138,7 @@ void nla_get_range_unsigned(const struct nla_policy *pt, + range->max = U32_MAX; + break; + case NLA_U64: ++ case NLA_UINT: + case NLA_MSECS: + range->max = U64_MAX; + break; +@@ -187,6 +188,9 @@ static int nla_validate_range_unsigned(const struct nla_policy *pt, + case NLA_U64: + value = nla_get_u64(nla); + break; ++ case NLA_UINT: ++ value = nla_get_uint(nla); ++ break; + case NLA_MSECS: + value = nla_get_u64(nla); + break; +@@ -252,6 +256,7 @@ void nla_get_range_signed(const struct nla_policy *pt, + range->max = S32_MAX; + break; + case NLA_S64: ++ case NLA_SINT: + range->min = S64_MIN; + range->max = S64_MAX; + break; +@@ -299,6 +304,9 @@ static int nla_validate_int_range_signed(const struct nla_policy *pt, + case NLA_S64: + value = nla_get_s64(nla); + break; ++ case NLA_SINT: ++ value = nla_get_sint(nla); ++ break; + default: + return -EINVAL; + } +@@ -324,6 +332,7 @@ static int nla_validate_int_range(const struct nla_policy *pt, + case NLA_U16: + case NLA_U32: + case NLA_U64: ++ case NLA_UINT: + case NLA_MSECS: + case NLA_BINARY: + case NLA_BE16: +@@ -333,6 +342,7 @@ static int nla_validate_int_range(const struct nla_policy *pt, + case NLA_S16: + case NLA_S32: + case NLA_S64: ++ case NLA_SINT: + return nla_validate_int_range_signed(pt, nla, extack); + default: + WARN_ON(1); +@@ -359,6 +369,9 @@ static int nla_validate_mask(const struct nla_policy *pt, + case NLA_U64: + value = nla_get_u64(nla); + break; ++ case NLA_UINT: ++ value = nla_get_uint(nla); ++ break; + case NLA_BE16: + value = ntohs(nla_get_be16(nla)); + break; +@@ -437,6 +450,15 @@ static int validate_nla(const struct nlattr *nla, int maxtype, + goto out_err; + break; + ++ case NLA_SINT: ++ case NLA_UINT: ++ if (attrlen != sizeof(u32) && attrlen != sizeof(u64)) { ++ NL_SET_ERR_MSG_ATTR_POL(extack, nla, pt, ++ "invalid attribute length"); ++ return -EINVAL; ++ } ++ break; ++ + case NLA_BITFIELD32: + if (attrlen != sizeof(struct nla_bitfield32)) + goto out_err; +diff --git a/mm/slub.c b/mm/slub.c +index d2544c88a5c43c..400563c45266e6 100644 +--- a/mm/slub.c ++++ b/mm/slub.c +@@ -771,19 +771,19 @@ static struct track *get_track(struct kmem_cache *s, void *object, + } + + #ifdef CONFIG_STACKDEPOT +-static noinline depot_stack_handle_t set_track_prepare(void) ++static noinline depot_stack_handle_t set_track_prepare(gfp_t gfp_flags) + { + depot_stack_handle_t handle; + unsigned long entries[TRACK_ADDRS_COUNT]; + unsigned int nr_entries; + + nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 3); +- handle = stack_depot_save(entries, nr_entries, GFP_NOWAIT); ++ handle = stack_depot_save(entries, nr_entries, gfp_flags); + + return handle; + } + #else +-static inline depot_stack_handle_t set_track_prepare(void) ++static inline depot_stack_handle_t set_track_prepare(gfp_t gfp_flags) + { + return 0; + } +@@ -805,9 +805,9 @@ static void set_track_update(struct kmem_cache *s, void *object, + } + + static __always_inline void set_track(struct kmem_cache *s, void *object, +- enum track_item alloc, unsigned long addr) ++ enum track_item alloc, unsigned long addr, gfp_t gfp_flags) + { +- depot_stack_handle_t handle = set_track_prepare(); ++ depot_stack_handle_t handle = set_track_prepare(gfp_flags); + + set_track_update(s, object, alloc, addr, handle); + } +@@ -988,7 +988,12 @@ static void object_err(struct kmem_cache *s, struct slab *slab, + return; + + slab_bug(s, "%s", reason); +- 
print_trailer(s, slab, object); ++ if (!object || !check_valid_pointer(s, slab, object)) { ++ print_slab_info(slab); ++ pr_err("Invalid pointer 0x%p\n", object); ++ } else { ++ print_trailer(s, slab, object); ++ } + add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE); + } + +@@ -1733,9 +1738,9 @@ static inline bool free_debug_processing(struct kmem_cache *s, + static inline void slab_pad_check(struct kmem_cache *s, struct slab *slab) {} + static inline int check_object(struct kmem_cache *s, struct slab *slab, + void *object, u8 val) { return 1; } +-static inline depot_stack_handle_t set_track_prepare(void) { return 0; } ++static inline depot_stack_handle_t set_track_prepare(gfp_t gfp_flags) { return 0; } + static inline void set_track(struct kmem_cache *s, void *object, +- enum track_item alloc, unsigned long addr) {} ++ enum track_item alloc, unsigned long addr, gfp_t gfp_flags) {} + static inline void add_full(struct kmem_cache *s, struct kmem_cache_node *n, + struct slab *slab) {} + static inline void remove_full(struct kmem_cache *s, struct kmem_cache_node *n, +@@ -3223,8 +3228,26 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, + pc.slab = &slab; + pc.orig_size = orig_size; + freelist = get_partial(s, node, &pc); +- if (freelist) +- goto check_new_slab; ++ if (freelist) { ++ if (kmem_cache_debug(s)) { ++ /* ++ * For debug caches here we had to go through ++ * alloc_single_from_partial() so just store the ++ * tracking info and return the object. ++ * ++ * Due to disabled preemption we need to disallow ++ * blocking. The flags are further adjusted by ++ * gfp_nested_mask() in stack_depot itself. ++ */ ++ if (s->flags & SLAB_STORE_USER) ++ set_track(s, freelist, TRACK_ALLOC, addr, ++ gfpflags & ~(__GFP_DIRECT_RECLAIM)); ++ ++ return freelist; ++ } ++ ++ goto retry_load_slab; ++ } + + slub_put_cpu_ptr(s->cpu_slab); + slab = new_slab(s, gfpflags, node); +@@ -3244,7 +3267,8 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, + goto new_objects; + + if (s->flags & SLAB_STORE_USER) +- set_track(s, freelist, TRACK_ALLOC, addr); ++ set_track(s, freelist, TRACK_ALLOC, addr, ++ gfpflags & ~(__GFP_DIRECT_RECLAIM)); + + return freelist; + } +@@ -3260,20 +3284,6 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, + + inc_slabs_node(s, slab_nid(slab), slab->objects); + +-check_new_slab: +- +- if (kmem_cache_debug(s)) { +- /* +- * For debug caches here we had to go through +- * alloc_single_from_partial() so just store the tracking info +- * and return the object +- */ +- if (s->flags & SLAB_STORE_USER) +- set_track(s, freelist, TRACK_ALLOC, addr); +- +- return freelist; +- } +- + if (unlikely(!pfmemalloc_match(slab, gfpflags))) { + /* + * For !pfmemalloc_match() case we don't load freelist so that +@@ -3546,8 +3556,12 @@ static noinline void free_to_partial_list( + unsigned long flags; + depot_stack_handle_t handle = 0; + ++ /* ++ * We cannot use GFP_NOWAIT as there are callsites where waking up ++ * kswapd could deadlock ++ */ + if (s->flags & SLAB_STORE_USER) +- handle = set_track_prepare(); ++ handle = set_track_prepare(__GFP_NOWARN); + + spin_lock_irqsave(&n->list_lock, flags); + +diff --git a/net/atm/resources.c b/net/atm/resources.c +index b19d851e1f4439..7c6fdedbcf4e5c 100644 +--- a/net/atm/resources.c ++++ b/net/atm/resources.c +@@ -112,7 +112,9 @@ struct atm_dev *atm_dev_register(const char *type, struct device *parent, + + if (atm_proc_dev_register(dev) < 0) { + pr_err("atm_proc_dev_register failed for dev 
%s\n", type); +- goto out_fail; ++ mutex_unlock(&atm_dev_mutex); ++ kfree(dev); ++ return NULL; + } + + if (atm_register_sysfs(dev, parent) < 0) { +@@ -128,7 +130,7 @@ struct atm_dev *atm_dev_register(const char *type, struct device *parent, + return dev; + + out_fail: +- kfree(dev); ++ put_device(&dev->class_dev); + dev = NULL; + goto out; + } +diff --git a/net/ax25/ax25_in.c b/net/ax25/ax25_in.c +index 1cac25aca63784..f2d66af8635957 100644 +--- a/net/ax25/ax25_in.c ++++ b/net/ax25/ax25_in.c +@@ -433,6 +433,10 @@ static int ax25_rcv(struct sk_buff *skb, struct net_device *dev, + int ax25_kiss_rcv(struct sk_buff *skb, struct net_device *dev, + struct packet_type *ptype, struct net_device *orig_dev) + { ++ skb = skb_share_check(skb, GFP_ATOMIC); ++ if (!skb) ++ return NET_RX_DROP; ++ + skb_orphan(skb); + + if (!net_eq(dev_net(dev), &init_net)) { +diff --git a/net/batman-adv/network-coding.c b/net/batman-adv/network-coding.c +index 71ebd0284f95d2..0adc783fb83ca2 100644 +--- a/net/batman-adv/network-coding.c ++++ b/net/batman-adv/network-coding.c +@@ -1687,7 +1687,12 @@ batadv_nc_skb_decode_packet(struct batadv_priv *bat_priv, struct sk_buff *skb, + + coding_len = ntohs(coded_packet_tmp.coded_len); + +- if (coding_len > skb->len) ++ /* ensure dst buffer is large enough (payload only) */ ++ if (coding_len + h_size > skb->len) ++ return NULL; ++ ++ /* ensure src buffer is large enough (payload only) */ ++ if (coding_len + h_size > nc_packet->skb->len) + return NULL; + + /* Here the magic is reversed: +diff --git a/net/bluetooth/hci_sync.c b/net/bluetooth/hci_sync.c +index 020f1809fc9946..7f3f700faebc24 100644 +--- a/net/bluetooth/hci_sync.c ++++ b/net/bluetooth/hci_sync.c +@@ -3354,7 +3354,7 @@ static int hci_powered_update_adv_sync(struct hci_dev *hdev) + * advertising data. This also applies to the case + * where BR/EDR was toggled during the AUTO_OFF phase. 
+ */ +- if (hci_dev_test_flag(hdev, HCI_ADVERTISING) || ++ if (hci_dev_test_flag(hdev, HCI_ADVERTISING) && + list_empty(&hdev->adv_instances)) { + if (ext_adv_capable(hdev)) { + err = hci_setup_ext_adv_instance_sync(hdev, 0x00); +diff --git a/net/bluetooth/l2cap_sock.c b/net/bluetooth/l2cap_sock.c +index 9a906977c8723c..59630dbeda20d6 100644 +--- a/net/bluetooth/l2cap_sock.c ++++ b/net/bluetooth/l2cap_sock.c +@@ -1406,7 +1406,10 @@ static int l2cap_sock_release(struct socket *sock) + if (!sk) + return 0; + ++ lock_sock_nested(sk, L2CAP_NESTING_PARENT); + l2cap_sock_cleanup_listen(sk); ++ release_sock(sk); ++ + bt_sock_unlink(&l2cap_sk_list, sk); + + err = l2cap_sock_shutdown(sock, SHUT_RDWR); +diff --git a/net/bridge/br_netfilter_hooks.c b/net/bridge/br_netfilter_hooks.c +index 2a4958e995f2d9..e6962d693359b3 100644 +--- a/net/bridge/br_netfilter_hooks.c ++++ b/net/bridge/br_netfilter_hooks.c +@@ -648,9 +648,6 @@ static unsigned int br_nf_local_in(void *priv, + break; + } + +- ct = container_of(nfct, struct nf_conn, ct_general); +- WARN_ON_ONCE(!nf_ct_is_confirmed(ct)); +- + return ret; + } + #endif +diff --git a/net/dsa/tag_ksz.c b/net/dsa/tag_ksz.c +index ea100bd25939b4..0a16c04c4bfc49 100644 +--- a/net/dsa/tag_ksz.c ++++ b/net/dsa/tag_ksz.c +@@ -139,7 +139,12 @@ static struct sk_buff *ksz8795_xmit(struct sk_buff *skb, struct net_device *dev) + + static struct sk_buff *ksz8795_rcv(struct sk_buff *skb, struct net_device *dev) + { +- u8 *tag = skb_tail_pointer(skb) - KSZ_EGRESS_TAG_LEN; ++ u8 *tag; ++ ++ if (skb_linearize(skb)) ++ return NULL; ++ ++ tag = skb_tail_pointer(skb) - KSZ_EGRESS_TAG_LEN; + + return ksz_common_rcv(skb, dev, tag[0] & 7, KSZ_EGRESS_TAG_LEN); + } +@@ -176,8 +181,9 @@ MODULE_ALIAS_DSA_TAG_DRIVER(DSA_TAG_PROTO_KSZ8795, KSZ8795_NAME); + + #define KSZ9477_INGRESS_TAG_LEN 2 + #define KSZ9477_PTP_TAG_LEN 4 +-#define KSZ9477_PTP_TAG_INDICATION 0x80 ++#define KSZ9477_PTP_TAG_INDICATION BIT(7) + ++#define KSZ9477_TAIL_TAG_EG_PORT_M GENMASK(2, 0) + #define KSZ9477_TAIL_TAG_PRIO GENMASK(8, 7) + #define KSZ9477_TAIL_TAG_OVERRIDE BIT(9) + #define KSZ9477_TAIL_TAG_LOOKUP BIT(10) +@@ -300,10 +306,16 @@ static struct sk_buff *ksz9477_xmit(struct sk_buff *skb, + + static struct sk_buff *ksz9477_rcv(struct sk_buff *skb, struct net_device *dev) + { +- /* Tag decoding */ +- u8 *tag = skb_tail_pointer(skb) - KSZ_EGRESS_TAG_LEN; +- unsigned int port = tag[0] & 7; + unsigned int len = KSZ_EGRESS_TAG_LEN; ++ unsigned int port; ++ u8 *tag; ++ ++ if (skb_linearize(skb)) ++ return NULL; ++ ++ /* Tag decoding */ ++ tag = skb_tail_pointer(skb) - KSZ_EGRESS_TAG_LEN; ++ port = tag[0] & KSZ9477_TAIL_TAG_EG_PORT_M; + + /* Extra 4-bytes PTP timestamp */ + if (tag[0] & KSZ9477_PTP_TAG_INDICATION) { +diff --git a/net/ipv4/devinet.c b/net/ipv4/devinet.c +index c33b1ecc591e4e..798497c8b1923e 100644 +--- a/net/ipv4/devinet.c ++++ b/net/ipv4/devinet.c +@@ -336,14 +336,13 @@ static void inetdev_destroy(struct in_device *in_dev) + + static int __init inet_blackhole_dev_init(void) + { +- int err = 0; ++ struct in_device *in_dev; + + rtnl_lock(); +- if (!inetdev_init(blackhole_netdev)) +- err = -ENOMEM; ++ in_dev = inetdev_init(blackhole_netdev); + rtnl_unlock(); + +- return err; ++ return PTR_ERR_OR_ZERO(in_dev); + } + late_initcall(inet_blackhole_dev_init); + +diff --git a/net/ipv4/icmp.c b/net/ipv4/icmp.c +index 94501bb30c431b..b17549c4e5de8a 100644 +--- a/net/ipv4/icmp.c ++++ b/net/ipv4/icmp.c +@@ -801,11 +801,12 @@ void icmp_ndo_send(struct sk_buff *skb_in, int type, int code, __be32 info) + struct 
sk_buff *cloned_skb = NULL;
+ struct ip_options opts = { 0 };
+ enum ip_conntrack_info ctinfo;
++ enum ip_conntrack_dir dir;
+ struct nf_conn *ct;
+ __be32 orig_ip;
+
+ ct = nf_ct_get(skb_in, &ctinfo);
+- if (!ct || !(ct->status & IPS_SRC_NAT)) {
++ if (!ct || !(READ_ONCE(ct->status) & IPS_NAT_MASK)) {
+ __icmp_send(skb_in, type, code, info, &opts);
+ return;
+ }
+@@ -820,7 +821,8 @@ void icmp_ndo_send(struct sk_buff *skb_in, int type, int code, __be32 info)
+ goto out;
+
+ orig_ip = ip_hdr(skb_in)->saddr;
+- ip_hdr(skb_in)->saddr = ct->tuplehash[0].tuple.src.u3.ip;
++ dir = CTINFO2DIR(ctinfo);
++ ip_hdr(skb_in)->saddr = ct->tuplehash[dir].tuple.src.u3.ip;
+ __icmp_send(skb_in, type, code, info, &opts);
+ ip_hdr(skb_in)->saddr = orig_ip;
+ out:
+diff --git a/net/ipv6/ip6_icmp.c b/net/ipv6/ip6_icmp.c
+index 9e3574880cb03e..233914b63bdb82 100644
+--- a/net/ipv6/ip6_icmp.c
++++ b/net/ipv6/ip6_icmp.c
+@@ -54,11 +54,12 @@ void icmpv6_ndo_send(struct sk_buff *skb_in, u8 type, u8 code, __u32 info)
+ struct inet6_skb_parm parm = { 0 };
+ struct sk_buff *cloned_skb = NULL;
+ enum ip_conntrack_info ctinfo;
++ enum ip_conntrack_dir dir;
+ struct in6_addr orig_ip;
+ struct nf_conn *ct;
+
+ ct = nf_ct_get(skb_in, &ctinfo);
+- if (!ct || !(ct->status & IPS_SRC_NAT)) {
++ if (!ct || !(READ_ONCE(ct->status) & IPS_NAT_MASK)) {
+ __icmpv6_send(skb_in, type, code, info, &parm);
+ return;
+ }
+@@ -73,7 +74,8 @@ void icmpv6_ndo_send(struct sk_buff *skb_in, u8 type, u8 code, __u32 info)
+ goto out;
+
+ orig_ip = ipv6_hdr(skb_in)->saddr;
+- ipv6_hdr(skb_in)->saddr = ct->tuplehash[0].tuple.src.u3.in6;
++ dir = CTINFO2DIR(ctinfo);
++ ipv6_hdr(skb_in)->saddr = ct->tuplehash[dir].tuple.src.u3.in6;
+ __icmpv6_send(skb_in, type, code, info, &parm);
+ ipv6_hdr(skb_in)->saddr = orig_ip;
+ out:
+diff --git a/net/mctp/af_mctp.c b/net/mctp/af_mctp.c
+index 5f9592fb57add2..805f7376cebe3f 100644
+--- a/net/mctp/af_mctp.c
++++ b/net/mctp/af_mctp.c
+@@ -346,7 +346,7 @@ static int mctp_getsockopt(struct socket *sock, int level, int optname,
+ return 0;
+ }
+
+- return -EINVAL;
++ return -ENOPROTOOPT;
+ }
+
+ static int mctp_ioctl_alloctag(struct mctp_sock *msk, unsigned long arg)
+diff --git a/net/netfilter/nf_conntrack_helper.c b/net/netfilter/nf_conntrack_helper.c
+index f22691f8385363..10f72b5b4e1ad7 100644
+--- a/net/netfilter/nf_conntrack_helper.c
++++ b/net/netfilter/nf_conntrack_helper.c
+@@ -373,7 +373,7 @@ int nf_conntrack_helper_register(struct nf_conntrack_helper *me)
+ (cur->tuple.src.l3num == NFPROTO_UNSPEC ||
+ cur->tuple.src.l3num == me->tuple.src.l3num) &&
+ cur->tuple.dst.protonum == me->tuple.dst.protonum) {
+- ret = -EEXIST;
++ ret = -EBUSY;
+ goto out;
+ }
+ }
+@@ -384,7 +384,7 @@ int nf_conntrack_helper_register(struct nf_conntrack_helper *me)
+ hlist_for_each_entry(cur, &nf_ct_helper_hash[h], hnode) {
+ if (nf_ct_tuple_src_mask_cmp(&cur->tuple, &me->tuple,
+ &mask)) {
+- ret = -EEXIST;
++ ret = -EBUSY;
+ goto out;
+ }
+ }
+diff --git a/net/netlink/policy.c b/net/netlink/policy.c
+index 87e3de0fde8963..ef542a142b9800 100644
+--- a/net/netlink/policy.c
++++ b/net/netlink/policy.c
+@@ -229,6 +229,8 @@ int netlink_policy_dump_attr_size_estimate(const struct nla_policy *pt)
+ case NLA_S16:
+ case NLA_S32:
+ case NLA_S64:
++ case NLA_SINT:
++ case NLA_UINT:
+ /* maximum is common, u64 min/max with padding */
+ return common +
+ 2 * (nla_attr_size(0) + nla_attr_size(sizeof(u64)));
+@@ -287,6 +289,7 @@ __netlink_policy_dump_write_attr(struct netlink_policy_dump_state *state,
+ case NLA_U16:
+ case NLA_U32:
+ case NLA_U64:
++ case NLA_UINT:
+ case NLA_MSECS: {
+ struct netlink_range_validation range;
+
+@@ -296,8 +299,10 @@ __netlink_policy_dump_write_attr(struct netlink_policy_dump_state *state,
+ type = NL_ATTR_TYPE_U16;
+ else if (pt->type == NLA_U32)
+ type = NL_ATTR_TYPE_U32;
+- else
++ else if (pt->type == NLA_U64)
+ type = NL_ATTR_TYPE_U64;
++ else
++ type = NL_ATTR_TYPE_UINT;
+
+ if (pt->validation_type == NLA_VALIDATE_MASK) {
+ if (nla_put_u64_64bit(skb, NL_POLICY_TYPE_ATTR_MASK,
+@@ -319,7 +324,8 @@ __netlink_policy_dump_write_attr(struct netlink_policy_dump_state *state,
+ case NLA_S8:
+ case NLA_S16:
+ case NLA_S32:
+- case NLA_S64: {
++ case NLA_S64:
++ case NLA_SINT: {
+ struct netlink_range_validation_signed range;
+
+ if (pt->type == NLA_S8)
+@@ -328,8 +334,10 @@ __netlink_policy_dump_write_attr(struct netlink_policy_dump_state *state,
+ type = NL_ATTR_TYPE_S16;
+ else if (pt->type == NLA_S32)
+ type = NL_ATTR_TYPE_S32;
+- else
++ else if (pt->type == NLA_S64)
+ type = NL_ATTR_TYPE_S64;
++ else
++ type = NL_ATTR_TYPE_SINT;
+
+ nla_get_range_signed(pt, &range);
+
+diff --git a/net/smc/smc_clc.c b/net/smc/smc_clc.c
+index dbce904c03cf73..4f485b9b31b288 100644
+--- a/net/smc/smc_clc.c
++++ b/net/smc/smc_clc.c
+@@ -426,8 +426,6 @@ smc_clc_msg_decl_valid(struct smc_clc_msg_decline *dclc)
+ {
+ struct smc_clc_msg_hdr *hdr = &dclc->hdr;
+
+- if (hdr->typev1 != SMC_TYPE_R && hdr->typev1 != SMC_TYPE_D)
+- return false;
+ if (hdr->version == SMC_V1) {
+ if (ntohs(hdr->length) != sizeof(struct smc_clc_msg_decline))
+ return false;
+diff --git a/net/smc/smc_ib.c b/net/smc/smc_ib.c
+index 598ac9ead64b72..6df543e083fb34 100644
+--- a/net/smc/smc_ib.c
++++ b/net/smc/smc_ib.c
+@@ -743,6 +743,9 @@ bool smc_ib_is_sg_need_sync(struct smc_link *lnk,
+ unsigned int i;
+ bool ret = false;
+
++ if (!lnk->smcibdev->ibdev->dma_device)
++ return ret;
++
+ /* for now there is just one DMA address */
+ for_each_sg(buf_slot->sgt[lnk->link_idx].sgl, sg,
+ buf_slot->sgt[lnk->link_idx].nents, i) {
+diff --git a/net/wireless/scan.c b/net/wireless/scan.c
+index 6db8c9a2a7a2b8..c1d64e25045484 100644
+--- a/net/wireless/scan.c
++++ b/net/wireless/scan.c
+@@ -1807,7 +1807,8 @@ cfg80211_update_known_bss(struct cfg80211_registered_device *rdev,
+ */
+
+ f = rcu_access_pointer(new->pub.beacon_ies);
+- kfree_rcu((struct cfg80211_bss_ies *)f, rcu_head);
++ if (!new->pub.hidden_beacon_bss)
++ kfree_rcu((struct cfg80211_bss_ies *)f, rcu_head);
+ return false;
+ }
+
+diff --git a/net/wireless/sme.c b/net/wireless/sme.c
+index 70881782c25c6c..5904c869085c80 100644
+--- a/net/wireless/sme.c
++++ b/net/wireless/sme.c
+@@ -915,13 +915,16 @@ void __cfg80211_connect_result(struct net_device *dev,
+ if (!wdev->u.client.ssid_len) {
+ rcu_read_lock();
+ for_each_valid_link(cr, link) {
++ u32 ssid_len;
++
+ ssid = ieee80211_bss_get_elem(cr->links[link].bss,
+ WLAN_EID_SSID);
+
+ if (!ssid || !ssid->datalen)
+ continue;
+
+- memcpy(wdev->u.client.ssid, ssid->data, ssid->datalen);
++ ssid_len = min(ssid->datalen, IEEE80211_MAX_SSID_LEN);
++ memcpy(wdev->u.client.ssid, ssid->data, ssid_len);
+ wdev->u.client.ssid_len = ssid->datalen;
+ break;
+ }
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index f2c03fbf892f1b..80c015af09efde 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -1991,6 +1991,7 @@ static int hdmi_add_cvt(struct hda_codec *codec, hda_nid_t cvt_nid)
+ static const struct snd_pci_quirk force_connect_list[] = {
+ SND_PCI_QUIRK(0x103c, 0x83e2, "HP EliteDesk 800 G4", 1),
+ SND_PCI_QUIRK(0x103c, 0x83ef, "HP MP9 G4 Retail System AMS", 1),
++ SND_PCI_QUIRK(0x103c, 0x845a, "HP EliteDesk 800 G4 DM 65W", 1),
+ SND_PCI_QUIRK(0x103c, 0x870f, "HP", 1),
+ SND_PCI_QUIRK(0x103c, 0x871a, "HP", 1),
+ SND_PCI_QUIRK(0x103c, 0x8711, "HP", 1),
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index d4bc80780a1f91..6aae06223f2664 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -10249,6 +10249,9 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x103c, 0x8e18, "HP ZBook Firefly 14 G12A", ALC285_FIXUP_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x8e19, "HP ZBook Firefly 14 G12A", ALC285_FIXUP_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x8e1a, "HP ZBook Firefly 14 G12A", ALC285_FIXUP_HP_GPIO_LED),
++ SND_PCI_QUIRK(0x103c, 0x8e1d, "HP ZBook X Gli 16 G12", ALC236_FIXUP_HP_GPIO_LED),
++ SND_PCI_QUIRK(0x103c, 0x8e3a, "HP Agusta", ALC287_FIXUP_CS35L41_I2C_2),
++ SND_PCI_QUIRK(0x103c, 0x8e3b, "HP Agusta", ALC287_FIXUP_CS35L41_I2C_2),
+ SND_PCI_QUIRK(0x1043, 0x103e, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC),
+ SND_PCI_QUIRK(0x1043, 0x103f, "ASUS TX300", ALC282_FIXUP_ASUS_TX300),
+ SND_PCI_QUIRK(0x1043, 0x1054, "ASUS G614FH/FM/FP", ALC287_FIXUP_CS35L41_I2C_2),
+@@ -10632,6 +10635,8 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1d05, 0x121b, "TongFang GMxAGxx", ALC269_FIXUP_NO_SHUTUP),
+ SND_PCI_QUIRK(0x1d05, 0x1387, "TongFang GMxIXxx", ALC2XX_FIXUP_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1d05, 0x1409, "TongFang GMxIXxx", ALC2XX_FIXUP_HEADSET_MIC),
++ SND_PCI_QUIRK(0x1d05, 0x300f, "TongFang X6AR5xxY", ALC2XX_FIXUP_HEADSET_MIC),
++ SND_PCI_QUIRK(0x1d05, 0x3019, "TongFang X6FR5xxY", ALC2XX_FIXUP_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1d17, 0x3288, "Haier Boyue G42", ALC269VC_FIXUP_ACER_VCOPPERBOX_PINS),
+ SND_PCI_QUIRK(0x1d72, 0x1602, "RedmiBook", ALC255_FIXUP_XIAOMI_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1d72, 0x1701, "XiaomiNotebook Pro", ALC298_FIXUP_DELL1_MIC_NO_PRESENCE),
+diff --git a/sound/usb/mixer_quirks.c b/sound/usb/mixer_quirks.c
+index f2cce15be4e271..68c82e344d3baf 100644
+--- a/sound/usb/mixer_quirks.c
++++ b/sound/usb/mixer_quirks.c
+@@ -3631,9 +3631,11 @@ void snd_usb_mixer_fu_apply_quirk(struct usb_mixer_interface *mixer,
+ snd_dragonfly_quirk_db_scale(mixer, cval, kctl);
+ break;
+ /* lowest playback value is muted on some devices */
++ case USB_ID(0x0572, 0x1b09): /* Conexant Systems (Rockwell), Inc. */
+ case USB_ID(0x0d8c, 0x000c): /* C-Media */
+ case USB_ID(0x0d8c, 0x0014): /* C-Media */
+ case USB_ID(0x19f7, 0x0003): /* RODE NT-USB */
++ case USB_ID(0x2d99, 0x0026): /* HECATE G2 GAMING HEADSET */
+ if (strstr(kctl->id.name, "Playback"))
+ cval->min_mute = 1;
+ break;
+diff --git a/tools/gpio/Makefile b/tools/gpio/Makefile
+index d29c9c49e2512a..342e056c8c665a 100644
+--- a/tools/gpio/Makefile
++++ b/tools/gpio/Makefile
+@@ -77,8 +77,8 @@ $(OUTPUT)gpio-watch: $(GPIO_WATCH_IN)
+
+ clean:
+ rm -f $(ALL_PROGRAMS)
+- rm -f $(OUTPUT)include/linux/gpio.h
+- find $(or $(OUTPUT),.) -name '*.o' -delete -o -name '\.*.d' -delete
++ rm -rf $(OUTPUT)include
++ find $(or $(OUTPUT),.) -name '*.o' -delete -o -name '\.*.d' -delete -o -name '\.*.cmd' -delete
+
+ install: $(ALL_PROGRAMS)
+ install -d -m 755 $(DESTDIR)$(bindir); \
+diff --git a/tools/perf/util/bpf-event.c b/tools/perf/util/bpf-event.c
+index b94b4f16a60a54..1573d6b6478d28 100644
+--- a/tools/perf/util/bpf-event.c
++++ b/tools/perf/util/bpf-event.c
+@@ -289,9 +289,15 @@ static int perf_event__synthesize_one_bpf_prog(struct perf_session *session,
+
+ info_node->info_linear = info_linear;
+ if (!perf_env__insert_bpf_prog_info(env, info_node)) {
+- free(info_linear);
++ /*
++ * Insert failed, likely because of a duplicate event
++ * made by the sideband thread. Ignore synthesizing the
++ * metadata.
++ */
+ free(info_node);
++ goto out;
+ }
++ /* info_linear is now owned by info_node and shouldn't be freed below. */
+ info_linear = NULL;
+
+ /*
+@@ -447,18 +453,18 @@ int perf_event__synthesize_bpf_events(struct perf_session *session,
+ return err;
+ }
+
+-static void perf_env__add_bpf_info(struct perf_env *env, u32 id)
++static int perf_env__add_bpf_info(struct perf_env *env, u32 id)
+ {
+ struct bpf_prog_info_node *info_node;
+ struct perf_bpil *info_linear;
+ struct btf *btf = NULL;
+ u64 arrays;
+ u32 btf_id;
+- int fd;
++ int fd, err = 0;
+
+ fd = bpf_prog_get_fd_by_id(id);
+ if (fd < 0)
+- return;
++ return -EINVAL;
+
+ arrays = 1UL << PERF_BPIL_JITED_KSYMS;
+ arrays |= 1UL << PERF_BPIL_JITED_FUNC_LENS;
+@@ -471,6 +477,7 @@ static void perf_env__add_bpf_info(struct perf_env *env, u32 id)
+ info_linear = get_bpf_prog_info_linear(fd, arrays);
+ if (IS_ERR_OR_NULL(info_linear)) {
+ pr_debug("%s: failed to get BPF program info. aborting\n", __func__);
++ err = PTR_ERR(info_linear);
+ goto out;
+ }
+
+@@ -480,38 +487,46 @@ static void perf_env__add_bpf_info(struct perf_env *env, u32 id)
+ if (info_node) {
+ info_node->info_linear = info_linear;
+ if (!perf_env__insert_bpf_prog_info(env, info_node)) {
++ pr_debug("%s: duplicate add bpf info request for id %u\n",
++ __func__, btf_id);
+ free(info_linear);
+ free(info_node);
++ goto out;
+ }
+- } else
++ } else {
+ free(info_linear);
++ err = -ENOMEM;
++ goto out;
++ }
+
+ if (btf_id == 0)
+ goto out;
+
+ btf = btf__load_from_kernel_by_id(btf_id);
+- if (libbpf_get_error(btf)) {
+- pr_debug("%s: failed to get BTF of id %u, aborting\n",
+- __func__, btf_id);
+- goto out;
++ if (!btf) {
++ err = -errno;
++ pr_debug("%s: failed to get BTF of id %u %d\n", __func__, btf_id, err);
++ } else {
++ perf_env__fetch_btf(env, btf_id, btf);
+ }
+- perf_env__fetch_btf(env, btf_id, btf);
+
+ out:
+ btf__free(btf);
+ close(fd);
++ return err;
+ }
+
+ static int bpf_event__sb_cb(union perf_event *event, void *data)
+ {
+ struct perf_env *env = data;
++ int ret = 0;
+
+ if (event->header.type != PERF_RECORD_BPF_EVENT)
+ return -1;
+
+ switch (event->bpf.type) {
+ case PERF_BPF_EVENT_PROG_LOAD:
+- perf_env__add_bpf_info(env, event->bpf.id);
++ ret = perf_env__add_bpf_info(env, event->bpf.id);
+
+ case PERF_BPF_EVENT_PROG_UNLOAD:
+ /*
+@@ -525,7 +540,7 @@ static int bpf_event__sb_cb(union perf_event *event, void *data)
+ break;
+ }
+
+- return 0;
++ return ret;
+ }
+
+ int evlist__add_bpf_sb_event(struct evlist *evlist, struct perf_env *env)
+diff --git a/tools/power/cpupower/utils/cpupower-set.c b/tools/power/cpupower/utils/cpupower-set.c
+index 0677b58374abf1..59ace394cf3ef9 100644
+--- a/tools/power/cpupower/utils/cpupower-set.c
++++ b/tools/power/cpupower/utils/cpupower-set.c
+@@ -62,8 +62,8 @@ int cmd_set(int argc, char **argv)
+
+ params.params = 0;
+ /* parameter parsing */
+- while ((ret = getopt_long(argc, argv, "b:e:m:",
+- set_opts, NULL)) != -1) {
++ while ((ret = getopt_long(argc, argv, "b:e:m:t:",
++ set_opts, NULL)) != -1) {
+ switch (ret) {
+ case 'b':
+ if (params.perf_bias)
+diff --git a/tools/testing/selftests/net/bind_bhash.c b/tools/testing/selftests/net/bind_bhash.c
+index 57ff67a3751eb3..da04b0b19b73ca 100644
+--- a/tools/testing/selftests/net/bind_bhash.c
++++ b/tools/testing/selftests/net/bind_bhash.c
+@@ -75,7 +75,7 @@ static void *setup(void *arg)
+ int *array = (int *)arg;
+
+ for (i = 0; i < MAX_CONNECTIONS; i++) {
+- sock_fd = bind_socket(SO_REUSEADDR | SO_REUSEPORT, setup_addr);
++ sock_fd = bind_socket(SO_REUSEPORT, setup_addr);
+ if (sock_fd < 0) {
+ ret = sock_fd;
+ pthread_exit(&ret);
+@@ -103,7 +103,7 @@ int main(int argc, const char *argv[])
+
+ setup_addr = use_v6 ? setup_addr_v6 : setup_addr_v4;
+
+- listener_fd = bind_socket(SO_REUSEADDR | SO_REUSEPORT, setup_addr);
++ listener_fd = bind_socket(SO_REUSEPORT, setup_addr);
+ if (listen(listener_fd, 100) < 0) {
+ perror("listen failed");
+ return -1;