From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from smtp.gentoo.org (woodpecker.gentoo.org [140.211.166.183])
	(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
	key-exchange X25519 server-signature RSA-PSS (4096 bits))
	(No client certificate requested)
	by finch.gentoo.org (Postfix) with ESMTPS id 3FEA21581EE
	for ; Thu, 10 Apr 2025 13:15:30 +0000 (UTC)
Received: from lists.gentoo.org (bobolink.gentoo.org [140.211.166.189])
	(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
	key-exchange X25519 server-signature RSA-PSS (4096 bits))
	(No client certificate requested)
	(Authenticated sender: relay-lists.gentoo.org@gentoo.org)
	by smtp.gentoo.org (Postfix) with ESMTPSA id 2802034331B
	for ; Thu, 10 Apr 2025 13:15:30 +0000 (UTC)
Received: from bobolink.gentoo.org (localhost [127.0.0.1])
	by bobolink.gentoo.org (Postfix) with ESMTP id 87AB91102B2;
	Thu, 10 Apr 2025 13:15:28 +0000 (UTC)
Received: from smtp.gentoo.org (woodpecker.gentoo.org [140.211.166.183])
	(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
	key-exchange X25519 server-signature RSA-PSS (4096 bits))
	(No client certificate requested)
	by bobolink.gentoo.org (Postfix) with ESMTPS id 7B23D1102B2
	for ; Thu, 10 Apr 2025 13:15:28 +0000 (UTC)
Received: from oystercatcher.gentoo.org (oystercatcher.gentoo.org [148.251.78.52])
	(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
	key-exchange X25519 server-signature RSA-PSS (4096 bits))
	(No client certificate requested)
	by smtp.gentoo.org (Postfix) with ESMTPS id 7294E3432E6
	for ; Thu, 10 Apr 2025 13:15:27 +0000 (UTC)
Received: from localhost.localdomain (localhost [IPv6:::1])
	by oystercatcher.gentoo.org (Postfix) with ESMTP id CD1751436
	for ; Thu, 10 Apr 2025 13:15:25 +0000 (UTC)
From: "Mike Pagano"
To: gentoo-commits@lists.gentoo.org
Content-Transfer-Encoding: 8bit
Content-type: text/plain; charset=UTF-8
Reply-To: gentoo-dev@lists.gentoo.org, "Mike Pagano"
Message-ID:
 <1744290911.f555e6dfc7c46043087214ae740177c40f693a5f.mpagano@gentoo>
Subject: [gentoo-commits] proj/linux-patches:5.15 commit in: /
X-VCS-Repository: proj/linux-patches
X-VCS-Files: 0000_README 1179_linux-5.15.180.patch
X-VCS-Directories: /
X-VCS-Committer: mpagano
X-VCS-Committer-Name: Mike Pagano
X-VCS-Revision: f555e6dfc7c46043087214ae740177c40f693a5f
X-VCS-Branch: 5.15
Date: Thu, 10 Apr 2025 13:15:25 +0000 (UTC)
Precedence: bulk
List-Post:
List-Help:
List-Unsubscribe:
List-Subscribe:
List-Id: Gentoo Linux mail
X-BeenThere: gentoo-commits@lists.gentoo.org
X-Auto-Response-Suppress: DR, RN, NRN, OOF, AutoReply
X-Archives-Salt: 1fc02b66-25a4-4b60-a063-7092c395f24d
X-Archives-Hash: 7be0c47d498ec9787f35a205dd5f57fa

commit:     f555e6dfc7c46043087214ae740177c40f693a5f
Author:     Mike Pagano gentoo org>
AuthorDate: Thu Apr 10 13:15:11 2025 +0000
Commit:     Mike Pagano gentoo org>
CommitDate: Thu Apr 10 13:15:11 2025 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=f555e6df

Linux patch 5.15.180

Signed-off-by: Mike Pagano gentoo.org>

 0000_README               |    4 +
 1179_linux-5.15.180.patch | 8694 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 8698 insertions(+)

diff --git a/0000_README b/0000_README
index e9daa194..91543dab 100644
--- a/0000_README
+++ b/0000_README
@@ -759,6 +759,10 @@ Patch: 1178_linux-5.15.179.patch
 From: https://www.kernel.org
 Desc: Linux 5.15.179
 
+Patch: 1179_linux-5.15.180.patch
+From: https://www.kernel.org
+Desc: Linux 5.15.180
+
 Patch: 1500_XATTR_USER_PREFIX.patch
 From: https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1179_linux-5.15.180.patch b/1179_linux-5.15.180.patch new file mode 100644 index 00000000..2a81666a --- /dev/null +++ b/1179_linux-5.15.180.patch @@ -0,0 +1,8694 @@ +diff --git a/Documentation/timers/no_hz.rst b/Documentation/timers/no_hz.rst +index 6cadad7c3aad45..3e52928a9dfb01 100644 +--- a/Documentation/timers/no_hz.rst ++++ b/Documentation/timers/no_hz.rst +@@ -129,11 +129,8 @@ adaptive-tick CPUs: At least one non-adaptive-tick CPU must remain + online to handle timekeeping tasks in order to ensure that system + calls like gettimeofday() returns accurate values on adaptive-tick CPUs. + (This is not an issue for CONFIG_NO_HZ_IDLE=y because there are no running +-user processes to observe slight drifts in clock rate.) Therefore, the +-boot CPU is prohibited from entering adaptive-ticks mode. Specifying a +-"nohz_full=" mask that includes the boot CPU will result in a boot-time +-error message, and the boot CPU will be removed from the mask. Note that +-this means that your system must have at least two CPUs in order for ++user processes to observe slight drifts in clock rate.) Note that this ++means that your system must have at least two CPUs in order for + CONFIG_NO_HZ_FULL=y to do anything for you. + + Finally, adaptive-ticks CPUs must have their RCU callbacks offloaded. +diff --git a/Makefile b/Makefile +index 6b3a24466e2882..619f4ec0f613e3 100644 +--- a/Makefile ++++ b/Makefile +@@ -1,7 +1,7 @@ + # SPDX-License-Identifier: GPL-2.0 + VERSION = 5 + PATCHLEVEL = 15 +-SUBLEVEL = 179 ++SUBLEVEL = 180 + EXTRAVERSION = + NAME = Trick or Treat + +diff --git a/arch/alpha/include/asm/elf.h b/arch/alpha/include/asm/elf.h +index 8049997fa372ae..2039a8c8d5473a 100644 +--- a/arch/alpha/include/asm/elf.h ++++ b/arch/alpha/include/asm/elf.h +@@ -74,7 +74,7 @@ typedef elf_fpreg_t elf_fpregset_t[ELF_NFPREG]; + /* + * This is used to ensure we don't load something for the wrong architecture. 
+ */ +-#define elf_check_arch(x) ((x)->e_machine == EM_ALPHA) ++#define elf_check_arch(x) (((x)->e_machine == EM_ALPHA) && !((x)->e_flags & EF_ALPHA_32BIT)) + + /* + * These are used to set parameters in the core dumps. +@@ -145,10 +145,6 @@ extern int dump_elf_task_fp(elf_fpreg_t *dest, struct task_struct *task); + : amask (AMASK_CIX) ? "ev6" : "ev67"); \ + }) + +-#define SET_PERSONALITY(EX) \ +- set_personality(((EX).e_flags & EF_ALPHA_32BIT) \ +- ? PER_LINUX_32BIT : PER_LINUX) +- + extern int alpha_l1i_cacheshape; + extern int alpha_l1d_cacheshape; + extern int alpha_l2_cacheshape; +diff --git a/arch/alpha/include/asm/pgtable.h b/arch/alpha/include/asm/pgtable.h +index 02f0429f1068ab..8e3cf3c9f913d4 100644 +--- a/arch/alpha/include/asm/pgtable.h ++++ b/arch/alpha/include/asm/pgtable.h +@@ -340,7 +340,7 @@ extern inline pte_t mk_swap_pte(unsigned long type, unsigned long offset) + + extern void paging_init(void); + +-/* We have our own get_unmapped_area to cope with ADDR_LIMIT_32BIT. */ ++/* We have our own get_unmapped_area */ + #define HAVE_ARCH_UNMAPPED_AREA + + #endif /* _ALPHA_PGTABLE_H */ +diff --git a/arch/alpha/include/asm/processor.h b/arch/alpha/include/asm/processor.h +index 6100431da07a3b..d27db62c3247dc 100644 +--- a/arch/alpha/include/asm/processor.h ++++ b/arch/alpha/include/asm/processor.h +@@ -8,23 +8,19 @@ + #ifndef __ASM_ALPHA_PROCESSOR_H + #define __ASM_ALPHA_PROCESSOR_H + +-#include /* for ADDR_LIMIT_32BIT */ +- + /* + * We have a 42-bit user address space: 4TB user VM... + */ + #define TASK_SIZE (0x40000000000UL) + +-#define STACK_TOP \ +- (current->personality & ADDR_LIMIT_32BIT ? 0x80000000 : 0x00120000000UL) ++#define STACK_TOP (0x00120000000UL) + + #define STACK_TOP_MAX 0x00120000000UL + + /* This decides where the kernel will search for a free chunk of vm + * space during mmap's. + */ +-#define TASK_UNMAPPED_BASE \ +- ((current->personality & ADDR_LIMIT_32BIT) ? 
0x40000000 : TASK_SIZE / 2) ++#define TASK_UNMAPPED_BASE (TASK_SIZE / 2) + + typedef struct { + unsigned long seg; +diff --git a/arch/alpha/kernel/osf_sys.c b/arch/alpha/kernel/osf_sys.c +index 8bbeebb73cf033..2dfb69a2ae43a7 100644 +--- a/arch/alpha/kernel/osf_sys.c ++++ b/arch/alpha/kernel/osf_sys.c +@@ -1212,8 +1212,7 @@ SYSCALL_DEFINE1(old_adjtimex, struct timex32 __user *, txc_p) + return ret; + } + +-/* Get an address range which is currently unmapped. Similar to the +- generic version except that we know how to honor ADDR_LIMIT_32BIT. */ ++/* Get an address range which is currently unmapped. */ + + static unsigned long + arch_get_unmapped_area_1(unsigned long addr, unsigned long len, +@@ -1235,13 +1234,7 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr, + unsigned long len, unsigned long pgoff, + unsigned long flags) + { +- unsigned long limit; +- +- /* "32 bit" actually means 31 bit, since pointers sign extend. */ +- if (current->personality & ADDR_LIMIT_32BIT) +- limit = 0x80000000; +- else +- limit = TASK_SIZE; ++ unsigned long limit = TASK_SIZE; + + if (len > limit) + return -ENOMEM; +diff --git a/arch/arm/boot/dts/bcm2711.dtsi b/arch/arm/boot/dts/bcm2711.dtsi +index 89af57482bc8fa..da0eb870260288 100644 +--- a/arch/arm/boot/dts/bcm2711.dtsi ++++ b/arch/arm/boot/dts/bcm2711.dtsi +@@ -133,7 +133,7 @@ uart2: serial@7e201400 { + clocks = <&clocks BCM2835_CLOCK_UART>, + <&clocks BCM2835_CLOCK_VPU>; + clock-names = "uartclk", "apb_pclk"; +- arm,primecell-periphid = <0x00241011>; ++ arm,primecell-periphid = <0x00341011>; + status = "disabled"; + }; + +@@ -144,7 +144,7 @@ uart3: serial@7e201600 { + clocks = <&clocks BCM2835_CLOCK_UART>, + <&clocks BCM2835_CLOCK_VPU>; + clock-names = "uartclk", "apb_pclk"; +- arm,primecell-periphid = <0x00241011>; ++ arm,primecell-periphid = <0x00341011>; + status = "disabled"; + }; + +@@ -155,7 +155,7 @@ uart4: serial@7e201800 { + clocks = <&clocks BCM2835_CLOCK_UART>, + <&clocks BCM2835_CLOCK_VPU>; + clock-names 
= "uartclk", "apb_pclk"; +- arm,primecell-periphid = <0x00241011>; ++ arm,primecell-periphid = <0x00341011>; + status = "disabled"; + }; + +@@ -166,7 +166,7 @@ uart5: serial@7e201a00 { + clocks = <&clocks BCM2835_CLOCK_UART>, + <&clocks BCM2835_CLOCK_VPU>; + clock-names = "uartclk", "apb_pclk"; +- arm,primecell-periphid = <0x00241011>; ++ arm,primecell-periphid = <0x00341011>; + status = "disabled"; + }; + +@@ -450,8 +450,6 @@ IRQ_TYPE_LEVEL_LOW)>, + IRQ_TYPE_LEVEL_LOW)>, + ; +- /* This only applies to the ARMv7 stub */ +- arm,cpu-registers-not-fw-configured; + }; + + cpus: cpus { +@@ -1142,6 +1140,7 @@ &txp { + }; + + &uart0 { ++ arm,primecell-periphid = <0x00341011>; + interrupts = ; + }; + +diff --git a/arch/arm/mach-shmobile/headsmp.S b/arch/arm/mach-shmobile/headsmp.S +index 9466ae61f56abd..b45c68d8827588 100644 +--- a/arch/arm/mach-shmobile/headsmp.S ++++ b/arch/arm/mach-shmobile/headsmp.S +@@ -136,6 +136,7 @@ ENDPROC(shmobile_smp_sleep) + .long shmobile_smp_arg - 1b + + .bss ++ .align 2 + .globl shmobile_smp_mpidr + shmobile_smp_mpidr: + .space NR_CPUS * 4 +diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c +index af5177801fb107..bf1577216ffabb 100644 +--- a/arch/arm/mm/fault.c ++++ b/arch/arm/mm/fault.c +@@ -26,6 +26,13 @@ + + #ifdef CONFIG_MMU + ++bool copy_from_kernel_nofault_allowed(const void *unsafe_src, size_t size) ++{ ++ unsigned long addr = (unsigned long)unsafe_src; ++ ++ return addr >= TASK_SIZE && ULONG_MAX - addr >= size; ++} ++ + /* + * This is useful to dump out the page tables associated with + * 'addr' in mm 'mm'. 
+@@ -552,6 +559,7 @@ do_PrefetchAbort(unsigned long addr, unsigned int ifsr, struct pt_regs *regs) + if (!inf->fn(addr, ifsr | FSR_LNX_PF, regs)) + return; + ++ pr_alert("8<--- cut here ---\n"); + pr_alert("Unhandled prefetch abort: %s (0x%03x) at 0x%08lx\n", + inf->name, ifsr, addr); + +diff --git a/arch/arm64/boot/dts/rockchip/rk3399-nanopi-r4s.dts b/arch/arm64/boot/dts/rockchip/rk3399-nanopi-r4s.dts +index cef4d18b599dd1..a992a6ac5e9f0a 100644 +--- a/arch/arm64/boot/dts/rockchip/rk3399-nanopi-r4s.dts ++++ b/arch/arm64/boot/dts/rockchip/rk3399-nanopi-r4s.dts +@@ -117,7 +117,7 @@ &u2phy0_host { + }; + + &u2phy1_host { +- status = "disabled"; ++ phy-supply = <&vdd_5v>; + }; + + &uart0 { +diff --git a/arch/powerpc/platforms/cell/spufs/inode.c b/arch/powerpc/platforms/cell/spufs/inode.c +index ed37a93bf858a7..ea3082f2f9d1d6 100644 +--- a/arch/powerpc/platforms/cell/spufs/inode.c ++++ b/arch/powerpc/platforms/cell/spufs/inode.c +@@ -190,8 +190,10 @@ static int spufs_fill_dir(struct dentry *dir, + return -ENOMEM; + ret = spufs_new_file(dir->d_sb, dentry, files->ops, + files->mode & mode, files->size, ctx); +- if (ret) ++ if (ret) { ++ dput(dentry); + return ret; ++ } + files++; + } + return 0; +@@ -434,8 +436,11 @@ spufs_create_context(struct inode *inode, struct dentry *dentry, + } + + ret = spufs_mkdir(inode, dentry, flags, mode & 0777); +- if (ret) ++ if (ret) { ++ if (neighbor) ++ put_spu_context(neighbor); + goto out_aff_unlock; ++ } + + if (affinity) { + spufs_set_affinity(flags, SPUFS_I(d_inode(dentry))->i_ctx, +diff --git a/arch/riscv/include/asm/ftrace.h b/arch/riscv/include/asm/ftrace.h +index d47d87c2d7e3d0..195f4ebd71f2c8 100644 +--- a/arch/riscv/include/asm/ftrace.h ++++ b/arch/riscv/include/asm/ftrace.h +@@ -77,7 +77,7 @@ struct dyn_arch_ftrace { + #define make_call_t0(caller, callee, call) \ + do { \ + unsigned int offset = \ +- (unsigned long) callee - (unsigned long) caller; \ ++ (unsigned long) (callee) - (unsigned long) (caller); \ + call[0] = 
to_auipc_t0(offset); \ + call[1] = to_jalr_t0(offset); \ + } while (0) +@@ -93,7 +93,7 @@ do { \ + #define make_call_ra(caller, callee, call) \ + do { \ + unsigned int offset = \ +- (unsigned long) callee - (unsigned long) caller; \ ++ (unsigned long) (callee) - (unsigned long) (caller); \ + call[0] = to_auipc_ra(offset); \ + call[1] = to_jalr_ra(offset); \ + } while (0) +diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig +index 90ac8d84389cfc..de6a66ad3fa624 100644 +--- a/arch/x86/Kconfig ++++ b/arch/x86/Kconfig +@@ -194,7 +194,7 @@ config X86 + select HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS + select HAVE_EBPF_JIT + select HAVE_EFFICIENT_UNALIGNED_ACCESS +- select HAVE_EISA ++ select HAVE_EISA if X86_32 + select HAVE_EXIT_THREAD + select HAVE_FAST_GUP + select HAVE_FENTRY if X86_64 || DYNAMIC_FTRACE +diff --git a/arch/x86/entry/calling.h b/arch/x86/entry/calling.h +index b00a3a95fbfabe..16e12b45b151a4 100644 +--- a/arch/x86/entry/calling.h ++++ b/arch/x86/entry/calling.h +@@ -70,6 +70,8 @@ For 32-bit we have the following conventions - kernel is built with + pushq %rsi /* pt_regs->si */ + movq 8(%rsp), %rsi /* temporarily store the return address in %rsi */ + movq %rdi, 8(%rsp) /* pt_regs->di (overwriting original return address) */ ++ /* We just clobbered the return address - use the IRET frame for unwinding: */ ++ UNWIND_HINT_IRET_REGS offset=3*8 + .else + pushq %rdi /* pt_regs->di */ + pushq %rsi /* pt_regs->si */ +diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h +index 22b93e35fa8861..d4857798f232a0 100644 +--- a/arch/x86/include/asm/tlbflush.h ++++ b/arch/x86/include/asm/tlbflush.h +@@ -227,7 +227,7 @@ void flush_tlb_multi(const struct cpumask *cpumask, + flush_tlb_mm_range((vma)->vm_mm, start, end, \ + ((vma)->vm_flags & VM_HUGETLB) \ + ? 
huge_page_shift(hstate_vma(vma)) \ +- : PAGE_SHIFT, false) ++ : PAGE_SHIFT, true) + + extern void flush_tlb_all(void); + extern void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start, +diff --git a/arch/x86/kernel/cpu/microcode/amd.c b/arch/x86/kernel/cpu/microcode/amd.c +index 6a95a52d08daa8..8143089c9b98b1 100644 +--- a/arch/x86/kernel/cpu/microcode/amd.c ++++ b/arch/x86/kernel/cpu/microcode/amd.c +@@ -861,7 +861,7 @@ static enum ucode_state load_microcode_amd(u8 family, const u8 *data, size_t siz + return ret; + } + +- for_each_node(nid) { ++ for_each_node_with_cpus(nid) { + cpu = cpumask_first(cpumask_of_node(nid)); + c = &cpu_data(cpu); + +diff --git a/arch/x86/kernel/cpu/mshyperv.c b/arch/x86/kernel/cpu/mshyperv.c +index 19762b47fbecd3..b6424322f5d3bf 100644 +--- a/arch/x86/kernel/cpu/mshyperv.c ++++ b/arch/x86/kernel/cpu/mshyperv.c +@@ -16,7 +16,6 @@ + #include + #include + #include +-#include + #include + #include + #include +@@ -445,16 +444,6 @@ static void __init ms_hyperv_init_platform(void) + if (efi_enabled(EFI_BOOT)) + x86_platform.get_nmi_reason = hv_get_nmi_reason; + +- /* +- * Hyper-V VMs have a PIT emulation quirk such that zeroing the +- * counter register during PIT shutdown restarts the PIT. So it +- * continues to interrupt @18.2 HZ. Setting i8253_clear_counter +- * to false tells pit_shutdown() not to zero the counter so that +- * the PIT really is shutdown. Generation 2 VMs don't have a PIT, +- * and setting this value has no effect. +- */ +- i8253_clear_counter_on_shutdown = false; +- + #if IS_ENABLED(CONFIG_HYPERV) + /* + * Setup the hook to get control post apic initialization. 
+diff --git a/arch/x86/kernel/cpu/sgx/driver.c b/arch/x86/kernel/cpu/sgx/driver.c +index aa9b8b8688676f..afccb69cd9a2ca 100644 +--- a/arch/x86/kernel/cpu/sgx/driver.c ++++ b/arch/x86/kernel/cpu/sgx/driver.c +@@ -150,13 +150,15 @@ int __init sgx_drv_init(void) + u64 xfrm_mask; + int ret; + +- if (!cpu_feature_enabled(X86_FEATURE_SGX_LC)) ++ if (!cpu_feature_enabled(X86_FEATURE_SGX_LC)) { ++ pr_info("SGX disabled: SGX launch control CPU feature is not available, /dev/sgx_enclave disabled.\n"); + return -ENODEV; ++ } + + cpuid_count(SGX_CPUID, 0, &eax, &ebx, &ecx, &edx); + + if (!(eax & 1)) { +- pr_err("SGX disabled: SGX1 instruction support not available.\n"); ++ pr_info("SGX disabled: SGX1 instruction support not available, /dev/sgx_enclave disabled.\n"); + return -ENODEV; + } + +@@ -173,8 +175,10 @@ int __init sgx_drv_init(void) + } + + ret = misc_register(&sgx_dev_enclave); +- if (ret) ++ if (ret) { ++ pr_info("SGX disabled: Unable to register the /dev/sgx_enclave driver (%d).\n", ret); + return ret; ++ } + + return 0; + } +diff --git a/arch/x86/kernel/dumpstack.c b/arch/x86/kernel/dumpstack.c +index 92b33c7eaf3f9d..8a8660074284f8 100644 +--- a/arch/x86/kernel/dumpstack.c ++++ b/arch/x86/kernel/dumpstack.c +@@ -195,6 +195,7 @@ static void show_trace_log_lvl(struct task_struct *task, struct pt_regs *regs, + printk("%sCall Trace:\n", log_lvl); + + unwind_start(&state, task, regs, stack); ++ stack = stack ?: get_stack_pointer(task, regs); + regs = unwind_get_entry_regs(&state, &partial); + + /* +@@ -213,9 +214,7 @@ static void show_trace_log_lvl(struct task_struct *task, struct pt_regs *regs, + * - hardirq stack + * - entry stack + */ +- for (stack = stack ?: get_stack_pointer(task, regs); +- stack; +- stack = stack_info.next_sp) { ++ for (; stack; stack = stack_info.next_sp) { + const char *stack_name; + + stack = PTR_ALIGN(stack, sizeof(long)); +diff --git a/arch/x86/kernel/irq.c b/arch/x86/kernel/irq.c +index 766ffe3ba31371..439fdb3f5fdf1c 100644 +--- 
a/arch/x86/kernel/irq.c ++++ b/arch/x86/kernel/irq.c +@@ -23,8 +23,10 @@ + #include + #include + ++#if defined(CONFIG_X86_LOCAL_APIC) || defined(CONFIG_X86_THERMAL_VECTOR) + #define CREATE_TRACE_POINTS + #include ++#endif + + DEFINE_PER_CPU_SHARED_ALIGNED(irq_cpustat_t, irq_stat); + EXPORT_PER_CPU_SYMBOL(irq_stat); +diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c +index 720d99520316ad..72eb0df1a1a5f4 100644 +--- a/arch/x86/kernel/process.c ++++ b/arch/x86/kernel/process.c +@@ -83,7 +83,12 @@ EXPORT_PER_CPU_SYMBOL_GPL(__tss_limit_invalid); + */ + int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src) + { +- memcpy(dst, src, arch_task_struct_size); ++ /* init_task is not dynamically sized (incomplete FPU state) */ ++ if (unlikely(src == &init_task)) ++ memcpy_and_pad(dst, arch_task_struct_size, src, sizeof(init_task), 0); ++ else ++ memcpy(dst, src, arch_task_struct_size); ++ + #ifdef CONFIG_VM86 + dst->thread.vm86 = NULL; + #endif +diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c +index a698196377be9b..693cb785357bd8 100644 +--- a/arch/x86/kernel/tsc.c ++++ b/arch/x86/kernel/tsc.c +@@ -920,7 +920,7 @@ static unsigned long long cyc2ns_suspend; + + void tsc_save_sched_clock_state(void) + { +- if (!sched_clock_stable()) ++ if (!static_branch_likely(&__use_tsc) && !sched_clock_stable()) + return; + + cyc2ns_suspend = sched_clock(); +@@ -940,7 +940,7 @@ void tsc_restore_sched_clock_state(void) + unsigned long flags; + int cpu; + +- if (!sched_clock_stable()) ++ if (!static_branch_likely(&__use_tsc) && !sched_clock_stable()) + return; + + local_irq_save(flags); +diff --git a/arch/x86/mm/pat/cpa-test.c b/arch/x86/mm/pat/cpa-test.c +index 0612a73638a812..7641cff719bd02 100644 +--- a/arch/x86/mm/pat/cpa-test.c ++++ b/arch/x86/mm/pat/cpa-test.c +@@ -183,7 +183,7 @@ static int pageattr_test(void) + break; + + case 1: +- err = change_page_attr_set(addrs, len[1], PAGE_CPA_TEST, 1); ++ err = change_page_attr_set(addrs, len[i], 
PAGE_CPA_TEST, 1); + break; + + case 2: +diff --git a/block/bio.c b/block/bio.c +index 92399883bc5e11..029dba492ac2d8 100644 +--- a/block/bio.c ++++ b/block/bio.c +@@ -73,7 +73,7 @@ struct bio_slab { + struct kmem_cache *slab; + unsigned int slab_ref; + unsigned int slab_size; +- char name[8]; ++ char name[12]; + }; + static DEFINE_MUTEX(bio_slab_lock); + static DEFINE_XARRAY(bio_slabs); +diff --git a/drivers/acpi/nfit/core.c b/drivers/acpi/nfit/core.c +index 9cc83df22de495..2a6fdce3c2e6ba 100644 +--- a/drivers/acpi/nfit/core.c ++++ b/drivers/acpi/nfit/core.c +@@ -485,7 +485,7 @@ int acpi_nfit_ctl(struct nvdimm_bus_descriptor *nd_desc, struct nvdimm *nvdimm, + cmd_mask = nd_desc->cmd_mask; + if (cmd == ND_CMD_CALL && call_pkg->nd_family) { + family = call_pkg->nd_family; +- if (family > NVDIMM_BUS_FAMILY_MAX || ++ if (call_pkg->nd_family > NVDIMM_BUS_FAMILY_MAX || + !test_bit(family, &nd_desc->bus_family_mask)) + return -EINVAL; + family = array_index_nospec(family, +diff --git a/drivers/acpi/processor_idle.c b/drivers/acpi/processor_idle.c +index 5289c344de900b..469a2e5eb6e8cb 100644 +--- a/drivers/acpi/processor_idle.c ++++ b/drivers/acpi/processor_idle.c +@@ -265,6 +265,10 @@ static int acpi_processor_get_power_info_fadt(struct acpi_processor *pr) + ACPI_CX_DESC_LEN, "ACPI P_LVL3 IOPORT 0x%x", + pr->power.states[ACPI_STATE_C3].address); + ++ if (!pr->power.states[ACPI_STATE_C2].address && ++ !pr->power.states[ACPI_STATE_C3].address) ++ return -ENODEV; ++ + return 0; + } + +diff --git a/drivers/acpi/resource.c b/drivers/acpi/resource.c +index 56bbdd2f9a40db..7c08cf69ca3119 100644 +--- a/drivers/acpi/resource.c ++++ b/drivers/acpi/resource.c +@@ -442,6 +442,13 @@ static const struct dmi_system_id asus_laptop[] = { + DMI_MATCH(DMI_BOARD_NAME, "B1502CBA"), + }, + }, ++ { ++ /* Asus Vivobook X1404VAP */ ++ .matches = { ++ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."), ++ DMI_MATCH(DMI_BOARD_NAME, "X1404VAP"), ++ }, ++ }, + { + /* Asus Vivobook X1504VAP */ + 
.matches = { +@@ -556,6 +563,12 @@ static const struct dmi_system_id maingear_laptop[] = { + DMI_MATCH(DMI_BOARD_NAME, "RP-15"), + }, + }, ++ { ++ .matches = { ++ DMI_MATCH(DMI_SYS_VENDOR, "Eluktronics Inc."), ++ DMI_MATCH(DMI_BOARD_NAME, "MECH-17"), ++ }, ++ }, + { + /* TongFang GM6XGxX/TUXEDO Stellaris 16 Gen5 AMD */ + .matches = { +diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c +index 185ea0d93a5e5f..d77ab224b861a9 100644 +--- a/drivers/base/power/main.c ++++ b/drivers/base/power/main.c +@@ -897,6 +897,9 @@ static void __device_resume(struct device *dev, pm_message_t state, bool async) + if (dev->power.syscore) + goto Complete; + ++ if (!dev->power.is_suspended) ++ goto Complete; ++ + if (dev->power.direct_complete) { + /* Match the pm_runtime_disable() in __device_suspend(). */ + pm_runtime_enable(dev); +@@ -915,9 +918,6 @@ static void __device_resume(struct device *dev, pm_message_t state, bool async) + */ + dev->power.is_prepared = false; + +- if (!dev->power.is_suspended) +- goto Unlock; +- + if (dev->pm_domain) { + info = "power domain "; + callback = pm_op(&dev->pm_domain->ops, state); +@@ -957,7 +957,6 @@ static void __device_resume(struct device *dev, pm_message_t state, bool async) + error = dpm_run_callback(callback, dev, state, info); + dev->power.is_suspended = false; + +- Unlock: + device_unlock(dev); + dpm_watchdog_clear(&wd); + +@@ -1239,14 +1238,13 @@ static int __device_suspend_noirq(struct device *dev, pm_message_t state, bool a + dev->power.is_noirq_suspended = true; + + /* +- * Skipping the resume of devices that were in use right before the +- * system suspend (as indicated by their PM-runtime usage counters) +- * would be suboptimal. Also resume them if doing that is not allowed +- * to be skipped. 
++ * Devices must be resumed unless they are explicitly allowed to be left ++ * in suspend, but even in that case skipping the resume of devices that ++ * were in use right before the system suspend (as indicated by their ++ * runtime PM usage counters and child counters) would be suboptimal. + */ +- if (atomic_read(&dev->power.usage_count) > 1 || +- !(dev_pm_test_driver_flags(dev, DPM_FLAG_MAY_SKIP_RESUME) && +- dev->power.may_skip_resume)) ++ if (!(dev_pm_test_driver_flags(dev, DPM_FLAG_MAY_SKIP_RESUME) && ++ dev->power.may_skip_resume) || !pm_runtime_need_not_resume(dev)) + dev->power.must_resume = true; + + if (dev->power.must_resume) +@@ -1643,6 +1641,7 @@ static int __device_suspend(struct device *dev, pm_message_t state, bool async) + pm_runtime_disable(dev); + if (pm_runtime_status_suspended(dev)) { + pm_dev_dbg(dev, state, "direct-complete "); ++ dev->power.is_suspended = true; + goto Complete; + } + +diff --git a/drivers/base/power/runtime.c b/drivers/base/power/runtime.c +index 6699096ff2fa66..edee7f1af1cec1 100644 +--- a/drivers/base/power/runtime.c ++++ b/drivers/base/power/runtime.c +@@ -1809,7 +1809,7 @@ void pm_runtime_drop_link(struct device_link *link) + pm_request_idle(link->supplier); + } + +-static bool pm_runtime_need_not_resume(struct device *dev) ++bool pm_runtime_need_not_resume(struct device *dev) + { + return atomic_read(&dev->power.usage_count) <= 1 && + (atomic_read(&dev->power.child_count) == 0 || +diff --git a/drivers/clk/meson/g12a.c b/drivers/clk/meson/g12a.c +index 310accf94830bc..d13a60fefc1b86 100644 +--- a/drivers/clk/meson/g12a.c ++++ b/drivers/clk/meson/g12a.c +@@ -1136,8 +1136,18 @@ static struct clk_regmap g12a_cpu_clk_div16_en = { + .hw.init = &(struct clk_init_data) { + .name = "cpu_clk_div16_en", + .ops = &clk_regmap_gate_ro_ops, +- .parent_hws = (const struct clk_hw *[]) { +- &g12a_cpu_clk.hw ++ .parent_data = &(const struct clk_parent_data) { ++ /* ++ * Note: ++ * G12A and G12B have different cpu clocks (with ++ * 
different struct clk_hw). We fallback to the global ++ * naming string mechanism so this clock picks ++ * up the appropriate one. Same goes for the other ++ * clock using cpu cluster A clock output and present ++ * on both G12 variant. ++ */ ++ .name = "cpu_clk", ++ .index = -1, + }, + .num_parents = 1, + /* +@@ -1202,7 +1212,10 @@ static struct clk_regmap g12a_cpu_clk_apb_div = { + .hw.init = &(struct clk_init_data){ + .name = "cpu_clk_apb_div", + .ops = &clk_regmap_divider_ro_ops, +- .parent_hws = (const struct clk_hw *[]) { &g12a_cpu_clk.hw }, ++ .parent_data = &(const struct clk_parent_data) { ++ .name = "cpu_clk", ++ .index = -1, ++ }, + .num_parents = 1, + }, + }; +@@ -1236,7 +1249,10 @@ static struct clk_regmap g12a_cpu_clk_atb_div = { + .hw.init = &(struct clk_init_data){ + .name = "cpu_clk_atb_div", + .ops = &clk_regmap_divider_ro_ops, +- .parent_hws = (const struct clk_hw *[]) { &g12a_cpu_clk.hw }, ++ .parent_data = &(const struct clk_parent_data) { ++ .name = "cpu_clk", ++ .index = -1, ++ }, + .num_parents = 1, + }, + }; +@@ -1270,7 +1286,10 @@ static struct clk_regmap g12a_cpu_clk_axi_div = { + .hw.init = &(struct clk_init_data){ + .name = "cpu_clk_axi_div", + .ops = &clk_regmap_divider_ro_ops, +- .parent_hws = (const struct clk_hw *[]) { &g12a_cpu_clk.hw }, ++ .parent_data = &(const struct clk_parent_data) { ++ .name = "cpu_clk", ++ .index = -1, ++ }, + .num_parents = 1, + }, + }; +@@ -1305,13 +1324,6 @@ static struct clk_regmap g12a_cpu_clk_trace_div = { + .name = "cpu_clk_trace_div", + .ops = &clk_regmap_divider_ro_ops, + .parent_data = &(const struct clk_parent_data) { +- /* +- * Note: +- * G12A and G12B have different cpu_clks (with +- * different struct clk_hw). We fallback to the global +- * naming string mechanism so cpu_clk_trace_div picks +- * up the appropriate one. 
+- */ + .name = "cpu_clk", + .index = -1, + }, +@@ -4187,7 +4199,7 @@ static MESON_GATE(g12a_spicc_1, HHI_GCLK_MPEG0, 14); + static MESON_GATE(g12a_hiu_reg, HHI_GCLK_MPEG0, 19); + static MESON_GATE(g12a_mipi_dsi_phy, HHI_GCLK_MPEG0, 20); + static MESON_GATE(g12a_assist_misc, HHI_GCLK_MPEG0, 23); +-static MESON_GATE(g12a_emmc_a, HHI_GCLK_MPEG0, 4); ++static MESON_GATE(g12a_emmc_a, HHI_GCLK_MPEG0, 24); + static MESON_GATE(g12a_emmc_b, HHI_GCLK_MPEG0, 25); + static MESON_GATE(g12a_emmc_c, HHI_GCLK_MPEG0, 26); + static MESON_GATE(g12a_audio_codec, HHI_GCLK_MPEG0, 28); +diff --git a/drivers/clk/meson/gxbb.c b/drivers/clk/meson/gxbb.c +index 608e0e8ca49a8b..35bc13e73c0ddf 100644 +--- a/drivers/clk/meson/gxbb.c ++++ b/drivers/clk/meson/gxbb.c +@@ -1270,14 +1270,13 @@ static struct clk_regmap gxbb_cts_i958 = { + }, + }; + ++/* ++ * This table skips a clock named 'cts_slow_oscin' in the documentation ++ * This clock does not exist yet in this controller or the AO one ++ */ ++static u32 gxbb_32k_clk_parents_val_table[] = { 0, 2, 3 }; + static const struct clk_parent_data gxbb_32k_clk_parent_data[] = { + { .fw_name = "xtal", }, +- /* +- * FIXME: This clock is provided by the ao clock controller but the +- * clock is not yet part of the binding of this controller, so string +- * name must be use to set this parent. 
+- */ +- { .name = "cts_slow_oscin", .index = -1 }, + { .hw = &gxbb_fclk_div3.hw }, + { .hw = &gxbb_fclk_div5.hw }, + }; +@@ -1287,6 +1286,7 @@ static struct clk_regmap gxbb_32k_clk_sel = { + .offset = HHI_32K_CLK_CNTL, + .mask = 0x3, + .shift = 16, ++ .table = gxbb_32k_clk_parents_val_table, + }, + .hw.init = &(struct clk_init_data){ + .name = "32k_clk_sel", +@@ -1310,7 +1310,7 @@ static struct clk_regmap gxbb_32k_clk_div = { + &gxbb_32k_clk_sel.hw + }, + .num_parents = 1, +- .flags = CLK_SET_RATE_PARENT | CLK_DIVIDER_ROUND_CLOSEST, ++ .flags = CLK_SET_RATE_PARENT, + }, + }; + +diff --git a/drivers/clk/qcom/gcc-msm8953.c b/drivers/clk/qcom/gcc-msm8953.c +index 49513f1366ffda..9d11f993843dbe 100644 +--- a/drivers/clk/qcom/gcc-msm8953.c ++++ b/drivers/clk/qcom/gcc-msm8953.c +@@ -3771,7 +3771,7 @@ static struct clk_branch gcc_venus0_axi_clk = { + + static struct clk_branch gcc_venus0_core0_vcodec0_clk = { + .halt_reg = 0x4c02c, +- .halt_check = BRANCH_HALT, ++ .halt_check = BRANCH_HALT_SKIP, + .clkr = { + .enable_reg = 0x4c02c, + .enable_mask = BIT(0), +diff --git a/drivers/clk/qcom/mmcc-sdm660.c b/drivers/clk/qcom/mmcc-sdm660.c +index 941993bc610dfe..04e2b0801ee426 100644 +--- a/drivers/clk/qcom/mmcc-sdm660.c ++++ b/drivers/clk/qcom/mmcc-sdm660.c +@@ -2544,7 +2544,7 @@ static struct clk_branch video_core_clk = { + + static struct clk_branch video_subcore0_clk = { + .halt_reg = 0x1048, +- .halt_check = BRANCH_HALT, ++ .halt_check = BRANCH_HALT_SKIP, + .clkr = { + .enable_reg = 0x1048, + .enable_mask = BIT(0), +diff --git a/drivers/clk/rockchip/clk-rk3328.c b/drivers/clk/rockchip/clk-rk3328.c +index 267ab54937d3df..a3587c500de28c 100644 +--- a/drivers/clk/rockchip/clk-rk3328.c ++++ b/drivers/clk/rockchip/clk-rk3328.c +@@ -201,7 +201,7 @@ PNAME(mux_aclk_peri_pre_p) = { "cpll_peri", + "gpll_peri", + "hdmiphy_peri" }; + PNAME(mux_ref_usb3otg_src_p) = { "xin24m", +- "clk_usb3otg_ref" }; ++ "clk_ref_usb3otg_src" }; + PNAME(mux_xin24m_32k_p) = { "xin24m", + "clk_rtc32k" }; 
+ PNAME(mux_mac2io_src_p) = { "clk_mac2io_src", +diff --git a/drivers/clk/samsung/clk.c b/drivers/clk/samsung/clk.c +index 1949ae7851b2e5..0468ce5506aef5 100644 +--- a/drivers/clk/samsung/clk.c ++++ b/drivers/clk/samsung/clk.c +@@ -64,11 +64,11 @@ struct samsung_clk_provider *__init samsung_clk_init(struct device_node *np, + if (!ctx) + panic("could not allocate clock provider context.\n"); + ++ ctx->clk_data.num = nr_clks; + for (i = 0; i < nr_clks; ++i) + ctx->clk_data.hws[i] = ERR_PTR(-ENOENT); + + ctx->reg_base = base; +- ctx->clk_data.num = nr_clks; + spin_lock_init(&ctx->lock); + + return ctx; +diff --git a/drivers/clocksource/i8253.c b/drivers/clocksource/i8253.c +index cb215e6f2e8344..39f7c2d736d169 100644 +--- a/drivers/clocksource/i8253.c ++++ b/drivers/clocksource/i8253.c +@@ -20,13 +20,6 @@ + DEFINE_RAW_SPINLOCK(i8253_lock); + EXPORT_SYMBOL(i8253_lock); + +-/* +- * Handle PIT quirk in pit_shutdown() where zeroing the counter register +- * restarts the PIT, negating the shutdown. On platforms with the quirk, +- * platform specific code can set this to false. +- */ +-bool i8253_clear_counter_on_shutdown __ro_after_init = true; +- + #ifdef CONFIG_CLKSRC_I8253 + /* + * Since the PIT overflows every tick, its not very useful +@@ -112,12 +105,33 @@ void clockevent_i8253_disable(void) + { + raw_spin_lock(&i8253_lock); + ++ /* ++ * Writing the MODE register should stop the counter, according to ++ * the datasheet. This appears to work on real hardware (well, on ++ * modern Intel and AMD boxes; I didn't dig the Pegasos out of the ++ * shed). ++ * ++ * However, some virtual implementations differ, and the MODE change ++ * doesn't have any effect until either the counter is written (KVM ++ * in-kernel PIT) or the next interrupt (QEMU). And in those cases, ++ * it may not stop the *count*, only the interrupts. 
Although in ++ * the virt case, that probably doesn't matter, as the value of the ++ * counter will only be calculated on demand if the guest reads it; ++ * it's the interrupts which cause steal time. ++ * ++ * Hyper-V apparently has a bug where even in mode 0, the IRQ keeps ++ * firing repeatedly if the counter is running. But it *does* do the ++ * right thing when the MODE register is written. ++ * ++ * So: write the MODE and then load the counter, which ensures that ++ * the IRQ is stopped on those buggy virt implementations. And then ++ * write the MODE again, which is the right way to stop it. ++ */ + outb_p(0x30, PIT_MODE); ++ outb_p(0, PIT_CH0); ++ outb_p(0, PIT_CH0); + +- if (i8253_clear_counter_on_shutdown) { +- outb_p(0, PIT_CH0); +- outb_p(0, PIT_CH0); +- } ++ outb_p(0x30, PIT_MODE); + + raw_spin_unlock(&i8253_lock); + } +diff --git a/drivers/counter/microchip-tcb-capture.c b/drivers/counter/microchip-tcb-capture.c +index 96a9c32239c01b..2625823152ead9 100644 +--- a/drivers/counter/microchip-tcb-capture.c ++++ b/drivers/counter/microchip-tcb-capture.c +@@ -370,6 +370,25 @@ static int mchp_tc_probe(struct platform_device *pdev) + channel); + } + ++ /* Disable Quadrature Decoder and position measure */ ++ ret = regmap_update_bits(regmap, ATMEL_TC_BMR, ATMEL_TC_QDEN | ATMEL_TC_POSEN, 0); ++ if (ret) ++ return ret; ++ ++ /* Setup the period capture mode */ ++ ret = regmap_update_bits(regmap, ATMEL_TC_REG(priv->channel[0], CMR), ++ ATMEL_TC_WAVE | ATMEL_TC_ABETRG | ATMEL_TC_CMR_MASK | ++ ATMEL_TC_TCCLKS, ++ ATMEL_TC_CMR_MASK); ++ if (ret) ++ return ret; ++ ++ /* Enable clock and trigger counter */ ++ ret = regmap_write(regmap, ATMEL_TC_REG(priv->channel[0], CCR), ++ ATMEL_TC_CLKEN | ATMEL_TC_SWTRG); ++ if (ret) ++ return ret; ++ + priv->tc_cfg = tcb_config; + priv->regmap = regmap; + priv->counter.name = dev_name(&pdev->dev); +diff --git a/drivers/counter/stm32-lptimer-cnt.c b/drivers/counter/stm32-lptimer-cnt.c +index 637b3f0b4fa34d..ac9cb0f690b3b5 100644 
+--- a/drivers/counter/stm32-lptimer-cnt.c ++++ b/drivers/counter/stm32-lptimer-cnt.c +@@ -59,37 +59,43 @@ static int stm32_lptim_set_enable_state(struct stm32_lptim_cnt *priv, + return 0; + } + ++ ret = clk_enable(priv->clk); ++ if (ret) ++ goto disable_cnt; ++ + /* LP timer must be enabled before writing CMP & ARR */ + ret = regmap_write(priv->regmap, STM32_LPTIM_ARR, priv->ceiling); + if (ret) +- return ret; ++ goto disable_clk; + + ret = regmap_write(priv->regmap, STM32_LPTIM_CMP, 0); + if (ret) +- return ret; ++ goto disable_clk; + + /* ensure CMP & ARR registers are properly written */ + ret = regmap_read_poll_timeout(priv->regmap, STM32_LPTIM_ISR, val, + (val & STM32_LPTIM_CMPOK_ARROK) == STM32_LPTIM_CMPOK_ARROK, + 100, 1000); + if (ret) +- return ret; ++ goto disable_clk; + + ret = regmap_write(priv->regmap, STM32_LPTIM_ICR, + STM32_LPTIM_CMPOKCF_ARROKCF); + if (ret) +- return ret; ++ goto disable_clk; + +- ret = clk_enable(priv->clk); +- if (ret) { +- regmap_write(priv->regmap, STM32_LPTIM_CR, 0); +- return ret; +- } + priv->enabled = true; + + /* Start LP timer in continuous mode */ + return regmap_update_bits(priv->regmap, STM32_LPTIM_CR, + STM32_LPTIM_CNTSTRT, STM32_LPTIM_CNTSTRT); ++ ++disable_clk: ++ clk_disable(priv->clk); ++disable_cnt: ++ regmap_write(priv->regmap, STM32_LPTIM_CR, 0); ++ ++ return ret; + } + + static int stm32_lptim_setup(struct stm32_lptim_cnt *priv, int enable) +diff --git a/drivers/cpufreq/cpufreq_governor.c b/drivers/cpufreq/cpufreq_governor.c +index 55c80319d26843..5981e3ef9ce0ee 100644 +--- a/drivers/cpufreq/cpufreq_governor.c ++++ b/drivers/cpufreq/cpufreq_governor.c +@@ -145,7 +145,23 @@ unsigned int dbs_update(struct cpufreq_policy *policy) + time_elapsed = update_time - j_cdbs->prev_update_time; + j_cdbs->prev_update_time = update_time; + +- idle_time = cur_idle_time - j_cdbs->prev_cpu_idle; ++ /* ++ * cur_idle_time could be smaller than j_cdbs->prev_cpu_idle if ++ * it's obtained from get_cpu_idle_time_jiffy() when NOHZ 
is ++ * off, where idle_time is calculated by the difference between ++ * time elapsed in jiffies and "busy time" obtained from CPU ++ * statistics. If a CPU is 100% busy, the time elapsed and busy ++ * time should grow with the same amount in two consecutive ++ * samples, but in practice there could be a tiny difference, ++ * making the accumulated idle time decrease sometimes. Hence, ++ * in this case, idle_time should be regarded as 0 in order to ++ * make the further process correct. ++ */ ++ if (cur_idle_time > j_cdbs->prev_cpu_idle) ++ idle_time = cur_idle_time - j_cdbs->prev_cpu_idle; ++ else ++ idle_time = 0; ++ + j_cdbs->prev_cpu_idle = cur_idle_time; + + if (ignore_nice) { +@@ -162,7 +178,7 @@ unsigned int dbs_update(struct cpufreq_policy *policy) + * calls, so the previous load value can be used then. + */ + load = j_cdbs->prev_load; +- } else if (unlikely((int)idle_time > 2 * sampling_rate && ++ } else if (unlikely(idle_time > 2 * sampling_rate && + j_cdbs->prev_load)) { + /* + * If the CPU had gone completely idle and a task has +@@ -189,30 +205,15 @@ unsigned int dbs_update(struct cpufreq_policy *policy) + load = j_cdbs->prev_load; + j_cdbs->prev_load = 0; + } else { +- if (time_elapsed >= idle_time) { ++ if (time_elapsed > idle_time) + load = 100 * (time_elapsed - idle_time) / time_elapsed; +- } else { +- /* +- * That can happen if idle_time is returned by +- * get_cpu_idle_time_jiffy(). In that case +- * idle_time is roughly equal to the difference +- * between time_elapsed and "busy time" obtained +- * from CPU statistics. Then, the "busy time" +- * can end up being greater than time_elapsed +- * (for example, if jiffies_64 and the CPU +- * statistics are updated by different CPUs), +- * so idle_time may in fact be negative. That +- * means, though, that the CPU was busy all +- * the time (on the rough average) during the +- * last sampling interval and 100 can be +- * returned as the load. +- */ +- load = (int)idle_time < 0 ? 
100 : 0; +- } ++ else ++ load = 0; ++ + j_cdbs->prev_load = load; + } + +- if (unlikely((int)idle_time > 2 * sampling_rate)) { ++ if (unlikely(idle_time > 2 * sampling_rate)) { + unsigned int periods = idle_time / sampling_rate; + + if (periods < idle_periods) +diff --git a/drivers/cpufreq/scpi-cpufreq.c b/drivers/cpufreq/scpi-cpufreq.c +index 763692e327b182..35b20c74dbfc7e 100644 +--- a/drivers/cpufreq/scpi-cpufreq.c ++++ b/drivers/cpufreq/scpi-cpufreq.c +@@ -47,8 +47,9 @@ static unsigned int scpi_cpufreq_get_rate(unsigned int cpu) + static int + scpi_cpufreq_set_target(struct cpufreq_policy *policy, unsigned int index) + { +- u64 rate = policy->freq_table[index].frequency * 1000; ++ unsigned long freq_khz = policy->freq_table[index].frequency; + struct scpi_data *priv = policy->driver_data; ++ unsigned long rate = freq_khz * 1000; + int ret; + + ret = clk_set_rate(priv->clk, rate); +@@ -56,7 +57,7 @@ scpi_cpufreq_set_target(struct cpufreq_policy *policy, unsigned int index) + if (ret) + return ret; + +- if (clk_get_rate(priv->clk) != rate) ++ if (clk_get_rate(priv->clk) / 1000 != freq_khz) + return -EIO; + + return 0; +diff --git a/drivers/crypto/hisilicon/sec2/sec_crypto.c b/drivers/crypto/hisilicon/sec2/sec_crypto.c +index 32150e05a27955..915333deae6f0d 100644 +--- a/drivers/crypto/hisilicon/sec2/sec_crypto.c ++++ b/drivers/crypto/hisilicon/sec2/sec_crypto.c +@@ -55,7 +55,6 @@ + #define SEC_TYPE_MASK 0x0F + #define SEC_DONE_MASK 0x0001 + #define SEC_ICV_MASK 0x000E +-#define SEC_SQE_LEN_RATE_MASK 0x3 + + #define SEC_TOTAL_IV_SZ (SEC_IV_SIZE * QM_Q_DEPTH) + #define SEC_SGL_SGE_NR 128 +@@ -77,16 +76,16 @@ + #define SEC_TOTAL_PBUF_SZ (PAGE_SIZE * SEC_PBUF_PAGE_NUM + \ + SEC_PBUF_LEFT_SZ) + +-#define SEC_SQE_LEN_RATE 4 + #define SEC_SQE_CFLAG 2 + #define SEC_SQE_AEAD_FLAG 3 + #define SEC_SQE_DONE 0x1 + #define SEC_ICV_ERR 0x2 +-#define MIN_MAC_LEN 4 + #define MAC_LEN_MASK 0x1U + #define MAX_INPUT_DATA_LEN 0xFFFE00 + #define BITS_MASK 0xFF ++#define WORD_MASK 0x3 + 
#define BYTE_BITS 0x8 ++#define BYTES_TO_WORDS(bcount) ((bcount) >> 2) + #define SEC_XTS_NAME_SZ 0x3 + #define IV_CM_CAL_NUM 2 + #define IV_CL_MASK 0x7 +@@ -1048,11 +1047,6 @@ static int sec_aead_auth_set_key(struct sec_auth_ctx *ctx, + struct crypto_shash *hash_tfm = ctx->hash_tfm; + int blocksize, digestsize, ret; + +- if (!keys->authkeylen) { +- pr_err("hisi_sec2: aead auth key error!\n"); +- return -EINVAL; +- } +- + blocksize = crypto_shash_blocksize(hash_tfm); + digestsize = crypto_shash_digestsize(hash_tfm); + if (keys->authkeylen > blocksize) { +@@ -1064,7 +1058,8 @@ static int sec_aead_auth_set_key(struct sec_auth_ctx *ctx, + } + ctx->a_key_len = digestsize; + } else { +- memcpy(ctx->a_key, keys->authkey, keys->authkeylen); ++ if (keys->authkeylen) ++ memcpy(ctx->a_key, keys->authkey, keys->authkeylen); + ctx->a_key_len = keys->authkeylen; + } + +@@ -1133,7 +1128,7 @@ static int sec_aead_setkey(struct crypto_aead *tfm, const u8 *key, + goto bad_key; + } + +- if (ctx->a_ctx.a_key_len & SEC_SQE_LEN_RATE_MASK) { ++ if (ctx->a_ctx.a_key_len & WORD_MASK) { + ret = -EINVAL; + dev_err(dev, "AUTH key length error!\n"); + goto bad_key; +@@ -1538,11 +1533,10 @@ static void sec_auth_bd_fill_ex(struct sec_auth_ctx *ctx, int dir, + + sec_sqe->type2.a_key_addr = cpu_to_le64(ctx->a_key_dma); + +- sec_sqe->type2.mac_key_alg = cpu_to_le32(authsize / SEC_SQE_LEN_RATE); ++ sec_sqe->type2.mac_key_alg = cpu_to_le32(BYTES_TO_WORDS(authsize)); + + sec_sqe->type2.mac_key_alg |= +- cpu_to_le32((u32)((ctx->a_key_len) / +- SEC_SQE_LEN_RATE) << SEC_AKEY_OFFSET); ++ cpu_to_le32((u32)BYTES_TO_WORDS(ctx->a_key_len) << SEC_AKEY_OFFSET); + + sec_sqe->type2.mac_key_alg |= + cpu_to_le32((u32)(ctx->a_alg) << SEC_AEAD_ALG_OFFSET); +@@ -1594,12 +1588,10 @@ static void sec_auth_bd_fill_ex_v3(struct sec_auth_ctx *ctx, int dir, + sqe3->a_key_addr = cpu_to_le64(ctx->a_key_dma); + + sqe3->auth_mac_key |= +- cpu_to_le32((u32)(authsize / +- SEC_SQE_LEN_RATE) << SEC_MAC_OFFSET_V3); ++ 
cpu_to_le32(BYTES_TO_WORDS(authsize) << SEC_MAC_OFFSET_V3); + + sqe3->auth_mac_key |= +- cpu_to_le32((u32)(ctx->a_key_len / +- SEC_SQE_LEN_RATE) << SEC_AKEY_OFFSET_V3); ++ cpu_to_le32((u32)BYTES_TO_WORDS(ctx->a_key_len) << SEC_AKEY_OFFSET_V3); + + sqe3->auth_mac_key |= + cpu_to_le32((u32)(ctx->a_alg) << SEC_AUTH_ALG_OFFSET_V3); +@@ -2205,8 +2197,8 @@ static int sec_aead_spec_check(struct sec_ctx *ctx, struct sec_req *sreq) + struct device *dev = ctx->dev; + int ret; + +- /* Hardware does not handle cases where authsize is less than 4 bytes */ +- if (unlikely(sz < MIN_MAC_LEN)) { ++ /* Hardware does not handle cases where authsize is not 4 bytes aligned */ ++ if (c_mode == SEC_CMODE_CBC && (sz & WORD_MASK)) { + sreq->aead_req.fallback = true; + return -EINVAL; + } +diff --git a/drivers/crypto/nx/nx-common-pseries.c b/drivers/crypto/nx/nx-common-pseries.c +index 4e304f6081e47f..f24d58b6eb1e50 100644 +--- a/drivers/crypto/nx/nx-common-pseries.c ++++ b/drivers/crypto/nx/nx-common-pseries.c +@@ -1142,6 +1142,7 @@ static void __init nxcop_get_capabilities(void) + { + struct hv_vas_all_caps *hv_caps; + struct hv_nx_cop_caps *hv_nxc; ++ u64 feat; + int rc; + + hv_caps = kmalloc(sizeof(*hv_caps), GFP_KERNEL); +@@ -1152,27 +1153,26 @@ static void __init nxcop_get_capabilities(void) + */ + rc = h_query_vas_capabilities(H_QUERY_NX_CAPABILITIES, 0, + (u64)virt_to_phys(hv_caps)); ++ if (!rc) ++ feat = be64_to_cpu(hv_caps->feat_type); ++ kfree(hv_caps); + if (rc) +- goto out; ++ return; ++ if (!(feat & VAS_NX_GZIP_FEAT_BIT)) ++ return; + +- caps_feat = be64_to_cpu(hv_caps->feat_type); + /* + * NX-GZIP feature available + */ +- if (caps_feat & VAS_NX_GZIP_FEAT_BIT) { +- hv_nxc = kmalloc(sizeof(*hv_nxc), GFP_KERNEL); +- if (!hv_nxc) +- goto out; +- /* +- * Get capabilities for NX-GZIP feature +- */ +- rc = h_query_vas_capabilities(H_QUERY_NX_CAPABILITIES, +- VAS_NX_GZIP_FEAT, +- (u64)virt_to_phys(hv_nxc)); +- } else { +- pr_err("NX-GZIP feature is not available\n"); +- rc = 
-EINVAL; +- } ++ hv_nxc = kmalloc(sizeof(*hv_nxc), GFP_KERNEL); ++ if (!hv_nxc) ++ return; ++ /* ++ * Get capabilities for NX-GZIP feature ++ */ ++ rc = h_query_vas_capabilities(H_QUERY_NX_CAPABILITIES, ++ VAS_NX_GZIP_FEAT, ++ (u64)virt_to_phys(hv_nxc)); + + if (!rc) { + nx_cop_caps.descriptor = be64_to_cpu(hv_nxc->descriptor); +@@ -1182,13 +1182,10 @@ static void __init nxcop_get_capabilities(void) + be64_to_cpu(hv_nxc->min_compress_len); + nx_cop_caps.min_decompress_len = + be64_to_cpu(hv_nxc->min_decompress_len); +- } else { +- caps_feat = 0; ++ caps_feat = feat; + } + + kfree(hv_nxc); +-out: +- kfree(hv_caps); + } + + static const struct vio_device_id nx842_vio_driver_ids[] = { +diff --git a/drivers/edac/ie31200_edac.c b/drivers/edac/ie31200_edac.c +index 9a9ff5ad611acf..acb011cfd8c4fd 100644 +--- a/drivers/edac/ie31200_edac.c ++++ b/drivers/edac/ie31200_edac.c +@@ -83,8 +83,6 @@ + (((did) & PCI_DEVICE_ID_INTEL_IE31200_HB_CFL_MASK) == \ + PCI_DEVICE_ID_INTEL_IE31200_HB_CFL_MASK)) + +-#define IE31200_DIMMS 4 +-#define IE31200_RANKS 8 + #define IE31200_RANKS_PER_CHANNEL 4 + #define IE31200_DIMMS_PER_CHANNEL 2 + #define IE31200_CHANNELS 2 +@@ -156,6 +154,7 @@ + #define IE31200_MAD_DIMM_0_OFFSET 0x5004 + #define IE31200_MAD_DIMM_0_OFFSET_SKL 0x500C + #define IE31200_MAD_DIMM_SIZE GENMASK_ULL(7, 0) ++#define IE31200_MAD_DIMM_SIZE_SKL GENMASK_ULL(5, 0) + #define IE31200_MAD_DIMM_A_RANK BIT(17) + #define IE31200_MAD_DIMM_A_RANK_SHIFT 17 + #define IE31200_MAD_DIMM_A_RANK_SKL BIT(10) +@@ -369,7 +368,7 @@ static void __iomem *ie31200_map_mchbar(struct pci_dev *pdev) + static void __skl_populate_dimm_info(struct dimm_data *dd, u32 addr_decode, + int chan) + { +- dd->size = (addr_decode >> (chan << 4)) & IE31200_MAD_DIMM_SIZE; ++ dd->size = (addr_decode >> (chan << 4)) & IE31200_MAD_DIMM_SIZE_SKL; + dd->dual_rank = (addr_decode & (IE31200_MAD_DIMM_A_RANK_SKL << (chan << 4))) ? 
1 : 0; + dd->x16_width = ((addr_decode & (IE31200_MAD_DIMM_A_WIDTH_SKL << (chan << 4))) >> + (IE31200_MAD_DIMM_A_WIDTH_SKL_SHIFT + (chan << 4))); +@@ -418,7 +417,7 @@ static int ie31200_probe1(struct pci_dev *pdev, int dev_idx) + + nr_channels = how_many_channels(pdev); + layers[0].type = EDAC_MC_LAYER_CHIP_SELECT; +- layers[0].size = IE31200_DIMMS; ++ layers[0].size = IE31200_RANKS_PER_CHANNEL; + layers[0].is_virt_csrow = true; + layers[1].type = EDAC_MC_LAYER_CHANNEL; + layers[1].size = nr_channels; +@@ -608,7 +607,7 @@ static int __init ie31200_init(void) + + pci_rc = pci_register_driver(&ie31200_driver); + if (pci_rc < 0) +- goto fail0; ++ return pci_rc; + + if (!mci_pdev) { + ie31200_registered = 0; +@@ -619,11 +618,13 @@ static int __init ie31200_init(void) + if (mci_pdev) + break; + } ++ + if (!mci_pdev) { + edac_dbg(0, "ie31200 pci_get_device fail\n"); + pci_rc = -ENODEV; +- goto fail1; ++ goto fail0; + } ++ + pci_rc = ie31200_init_one(mci_pdev, &ie31200_pci_tbl[i]); + if (pci_rc < 0) { + edac_dbg(0, "ie31200 init fail\n"); +@@ -631,12 +632,12 @@ static int __init ie31200_init(void) + goto fail1; + } + } +- return 0; + ++ return 0; + fail1: +- pci_unregister_driver(&ie31200_driver); +-fail0: + pci_dev_put(mci_pdev); ++fail0: ++ pci_unregister_driver(&ie31200_driver); + + return pci_rc; + } +diff --git a/drivers/firmware/imx/imx-scu.c b/drivers/firmware/imx/imx-scu.c +index dca79caccd01c1..fa25c082109acc 100644 +--- a/drivers/firmware/imx/imx-scu.c ++++ b/drivers/firmware/imx/imx-scu.c +@@ -279,6 +279,7 @@ static int imx_scu_probe(struct platform_device *pdev) + return ret; + + sc_ipc->fast_ipc = of_device_is_compatible(args.np, "fsl,imx8-mu-scu"); ++ of_node_put(args.np); + + num_channel = sc_ipc->fast_ipc ? 
2 : SCU_MU_CHAN_NUM; + for (i = 0; i < num_channel; i++) { +diff --git a/drivers/firmware/iscsi_ibft.c b/drivers/firmware/iscsi_ibft.c +index 6e9788324fea55..371f24569b3b22 100644 +--- a/drivers/firmware/iscsi_ibft.c ++++ b/drivers/firmware/iscsi_ibft.c +@@ -310,7 +310,10 @@ static ssize_t ibft_attr_show_nic(void *data, int type, char *buf) + str += sprintf_ipaddr(str, nic->ip_addr); + break; + case ISCSI_BOOT_ETH_SUBNET_MASK: +- val = cpu_to_be32(~((1 << (32-nic->subnet_mask_prefix))-1)); ++ if (nic->subnet_mask_prefix > 32) ++ val = cpu_to_be32(~0); ++ else ++ val = cpu_to_be32(~((1 << (32-nic->subnet_mask_prefix))-1)); + str += sprintf(str, "%pI4", &val); + break; + case ISCSI_BOOT_ETH_PREFIX_LEN: +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c +index 57943e90087107..adcf3adc5ca51c 100644 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c +@@ -2294,7 +2294,6 @@ static int amdgpu_pmops_freeze(struct device *dev) + + adev->in_s4 = true; + r = amdgpu_device_suspend(drm_dev, true); +- adev->in_s4 = false; + if (r) + return r; + return amdgpu_asic_reset(adev); +@@ -2303,8 +2302,13 @@ static int amdgpu_pmops_freeze(struct device *dev) + static int amdgpu_pmops_thaw(struct device *dev) + { + struct drm_device *drm_dev = dev_get_drvdata(dev); ++ struct amdgpu_device *adev = drm_to_adev(drm_dev); ++ int r; + +- return amdgpu_device_resume(drm_dev, true); ++ r = amdgpu_device_resume(drm_dev, true); ++ adev->in_s4 = false; ++ ++ return r; + } + + static int amdgpu_pmops_poweroff(struct device *dev) +@@ -2317,6 +2321,9 @@ static int amdgpu_pmops_poweroff(struct device *dev) + static int amdgpu_pmops_restore(struct device *dev) + { + struct drm_device *drm_dev = dev_get_drvdata(dev); ++ struct amdgpu_device *adev = drm_to_adev(drm_dev); ++ ++ adev->in_s4 = false; + + return amdgpu_device_resume(drm_dev, true); + } +diff --git a/drivers/gpu/drm/amd/amdgpu/nv.c 
b/drivers/gpu/drm/amd/amdgpu/nv.c +index 947e8c09493dcf..9563a3ad7cf521 100644 +--- a/drivers/gpu/drm/amd/amdgpu/nv.c ++++ b/drivers/gpu/drm/amd/amdgpu/nv.c +@@ -87,7 +87,7 @@ static const struct amdgpu_video_codec_info nv_video_codecs_decode_array[] = + {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 4906, 52)}, + {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_VC1, 4096, 4906, 4)}, + {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_HEVC, 8192, 4352, 186)}, +- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_JPEG, 4096, 4096, 0)}, ++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_JPEG, 8192, 8192, 0)}, + {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_VP9, 8192, 4352, 0)}, + }; + +diff --git a/drivers/gpu/drm/amd/amdgpu/soc15.c b/drivers/gpu/drm/amd/amdgpu/soc15.c +index ef5b3eedc86155..719cf8ba416d22 100644 +--- a/drivers/gpu/drm/amd/amdgpu/soc15.c ++++ b/drivers/gpu/drm/amd/amdgpu/soc15.c +@@ -123,7 +123,7 @@ static const struct amdgpu_video_codec_info rv_video_codecs_decode_array[] = + {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 4906, 52)}, + {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_VC1, 4096, 4906, 4)}, + {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_HEVC, 4096, 4096, 186)}, +- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_JPEG, 4096, 4096, 0)}, ++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_JPEG, 8192, 8192, 0)}, + {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_VP9, 4096, 4096, 0)}, + }; + +diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +index bfa15d89595530..a33ca712a89c58 100644 +--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c ++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +@@ -218,6 +218,10 @@ amd_get_format_info(const struct drm_mode_fb_cmd2 *cmd); + + static void handle_hpd_irq_helper(struct amdgpu_dm_connector *aconnector); + ++static void amdgpu_dm_backlight_set_level(struct 
amdgpu_display_manager *dm, ++ int bl_idx, ++ u32 user_brightness); ++ + static bool + is_timing_unchanged_for_freesync(struct drm_crtc_state *old_crtc_state, + struct drm_crtc_state *new_crtc_state); +@@ -2698,8 +2702,19 @@ static int dm_resume(void *handle) + + mutex_unlock(&dm->dc_lock); + ++ /* set the backlight after a reset */ ++ for (i = 0; i < dm->num_of_edps; i++) { ++ if (dm->backlight_dev[i]) ++ amdgpu_dm_backlight_set_level(dm, i, dm->brightness[i]); ++ } ++ + return 0; + } ++ ++ /* leave display off for S4 sequence */ ++ if (adev->in_s4) ++ return 0; ++ + /* Recreate dc_state - DC invalidates it when setting power state to S3. */ + dc_release_state(dm_state->context); + dm_state->context = dc_create_state(dm->dc); +diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c +index c5f1dc3b596143..2248975f9aefee 100644 +--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c ++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c +@@ -396,6 +396,7 @@ void hdcp_destroy(struct kobject *kobj, struct hdcp_workqueue *hdcp_work) + for (i = 0; i < hdcp_work->max_link; i++) { + cancel_delayed_work_sync(&hdcp_work[i].callback_dwork); + cancel_delayed_work_sync(&hdcp_work[i].watchdog_timer_dwork); ++ cancel_delayed_work_sync(&hdcp_work[i].property_validate_dwork); + } + + sysfs_remove_bin_file(kobj, &hdcp_work[0].attr); +diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c +index 3d126acaf525e3..a84280b65821b2 100644 +--- a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c ++++ b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c +@@ -1025,6 +1025,16 @@ bool resource_build_scaling_params(struct pipe_ctx *pipe_ctx) + bool res = false; + DC_LOGGER_INIT(pipe_ctx->stream->ctx->logger); + ++ /* Invalid input */ ++ if (!plane_state || ++ !plane_state->dst_rect.width || ++ !plane_state->dst_rect.height || ++ 
!plane_state->src_rect.width || ++ !plane_state->src_rect.height) { ++ ASSERT(0); ++ return false; ++ } ++ + pipe_ctx->plane_res.scl_data.format = convert_pixel_format_to_dalsurface( + pipe_ctx->plane_state->format); + +@@ -1901,10 +1911,13 @@ static int get_norm_pix_clk(const struct dc_crtc_timing *timing) + break; + case COLOR_DEPTH_121212: + normalized_pix_clk = (pix_clk * 36) / 24; +- break; ++ break; ++ case COLOR_DEPTH_141414: ++ normalized_pix_clk = (pix_clk * 42) / 24; ++ break; + case COLOR_DEPTH_161616: + normalized_pix_clk = (pix_clk * 48) / 24; +- break; ++ break; + default: + ASSERT(0); + break; +diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c b/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c +index de0fa87b301a5d..5c0d49d4eb8eca 100644 +--- a/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c ++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c +@@ -283,10 +283,10 @@ static void CalculateDynamicMetadataParameters( + double DISPCLK, + double DCFClkDeepSleep, + double PixelClock, +- long HTotal, +- long VBlank, +- long DynamicMetadataTransmittedBytes, +- long DynamicMetadataLinesBeforeActiveRequired, ++ unsigned int HTotal, ++ unsigned int VBlank, ++ unsigned int DynamicMetadataTransmittedBytes, ++ int DynamicMetadataLinesBeforeActiveRequired, + int InterlaceEnable, + bool ProgressiveToInterlaceUnitInOPP, + double *Tsetup, +@@ -3375,8 +3375,8 @@ static double CalculateWriteBackDelay( + + + static void CalculateDynamicMetadataParameters(int MaxInterDCNTileRepeaters, double DPPCLK, double DISPCLK, +- double DCFClkDeepSleep, double PixelClock, long HTotal, long VBlank, long DynamicMetadataTransmittedBytes, +- long DynamicMetadataLinesBeforeActiveRequired, int InterlaceEnable, bool ProgressiveToInterlaceUnitInOPP, ++ double DCFClkDeepSleep, double PixelClock, unsigned int HTotal, unsigned int VBlank, unsigned int DynamicMetadataTransmittedBytes, ++ int 
DynamicMetadataLinesBeforeActiveRequired, int InterlaceEnable, bool ProgressiveToInterlaceUnitInOPP, + double *Tsetup, double *Tdmbf, double *Tdmec, double *Tdmsks) + { + double TotalRepeaterDelayTime = 0; +diff --git a/drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.c b/drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.c +index 0fad15020c7401..2beca0b069254d 100644 +--- a/drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.c ++++ b/drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.c +@@ -867,11 +867,30 @@ static unsigned int CursorBppEnumToBits(enum cursor_bpp ebpp) + } + } + ++static unsigned int get_pipe_idx(struct display_mode_lib *mode_lib, unsigned int plane_idx) ++{ ++ int pipe_idx = -1; ++ int i; ++ ++ ASSERT(plane_idx < DC__NUM_DPP__MAX); ++ ++ for (i = 0; i < DC__NUM_DPP__MAX ; i++) { ++ if (plane_idx == mode_lib->vba.pipe_plane[i]) { ++ pipe_idx = i; ++ break; ++ } ++ } ++ ASSERT(pipe_idx >= 0); ++ ++ return pipe_idx; ++} ++ + void ModeSupportAndSystemConfiguration(struct display_mode_lib *mode_lib) + { + soc_bounding_box_st *soc = &mode_lib->vba.soc; + unsigned int k; + unsigned int total_pipes = 0; ++ unsigned int pipe_idx = 0; + + mode_lib->vba.VoltageLevel = mode_lib->vba.cache_pipes[0].clks_cfg.voltage; + mode_lib->vba.ReturnBW = mode_lib->vba.ReturnBWPerState[mode_lib->vba.VoltageLevel][mode_lib->vba.maxMpcComb]; +@@ -892,6 +911,11 @@ void ModeSupportAndSystemConfiguration(struct display_mode_lib *mode_lib) + + // Total Available Pipes Support Check + for (k = 0; k < mode_lib->vba.NumberOfActivePlanes; ++k) { ++ pipe_idx = get_pipe_idx(mode_lib, k); ++ if (pipe_idx == -1) { ++ ASSERT(0); ++ continue; // skip inactive planes ++ } + total_pipes += mode_lib->vba.DPPPerPlane[k]; + } + ASSERT(total_pipes <= DC__NUM_DPP__MAX); +diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c +index dfba0bc7320731..9f5dcfaebe6306 100644 +--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c ++++ 
b/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c +@@ -1231,19 +1231,22 @@ static int navi10_get_current_clk_freq_by_table(struct smu_context *smu, + value); + } + +-static bool navi10_is_support_fine_grained_dpm(struct smu_context *smu, enum smu_clk_type clk_type) ++static int navi10_is_support_fine_grained_dpm(struct smu_context *smu, enum smu_clk_type clk_type) + { + PPTable_t *pptable = smu->smu_table.driver_pptable; + DpmDescriptor_t *dpm_desc = NULL; +- uint32_t clk_index = 0; ++ int clk_index = 0; + + clk_index = smu_cmn_to_asic_specific_index(smu, + CMN2ASIC_MAPPING_CLK, + clk_type); ++ if (clk_index < 0) ++ return clk_index; ++ + dpm_desc = &pptable->DpmDescriptor[clk_index]; + + /* 0 - Fine grained DPM, 1 - Discrete DPM */ +- return dpm_desc->SnapToDiscrete == 0; ++ return dpm_desc->SnapToDiscrete == 0 ? 1 : 0; + } + + static inline bool navi10_od_feature_is_supported(struct smu_11_0_overdrive_table *od_table, enum SMU_11_0_ODFEATURE_CAP cap) +@@ -1299,7 +1302,11 @@ static int navi10_print_clk_levels(struct smu_context *smu, + if (ret) + return size; + +- if (!navi10_is_support_fine_grained_dpm(smu, clk_type)) { ++ ret = navi10_is_support_fine_grained_dpm(smu, clk_type); ++ if (ret < 0) ++ return ret; ++ ++ if (!ret) { + for (i = 0; i < count; i++) { + ret = smu_v11_0_get_dpm_freq_by_index(smu, clk_type, i, &value); + if (ret) +@@ -1468,7 +1475,11 @@ static int navi10_force_clk_levels(struct smu_context *smu, + case SMU_UCLK: + case SMU_FCLK: + /* There is only 2 levels for fine grained DPM */ +- if (navi10_is_support_fine_grained_dpm(smu, clk_type)) { ++ ret = navi10_is_support_fine_grained_dpm(smu, clk_type); ++ if (ret < 0) ++ return ret; ++ ++ if (ret) { + soft_max_level = (soft_max_level >= 1 ? 1 : 0); + soft_min_level = (soft_min_level >= 1 ? 
1 : 0); + } +diff --git a/drivers/gpu/drm/bridge/ti-sn65dsi86.c b/drivers/gpu/drm/bridge/ti-sn65dsi86.c +index b488c6cb8f1064..1c9dd62d3c47b1 100644 +--- a/drivers/gpu/drm/bridge/ti-sn65dsi86.c ++++ b/drivers/gpu/drm/bridge/ti-sn65dsi86.c +@@ -424,6 +424,7 @@ static int ti_sn65dsi86_add_aux_device(struct ti_sn65dsi86 *pdata, + const char *name) + { + struct device *dev = pdata->dev; ++ const struct i2c_client *client = to_i2c_client(dev); + struct auxiliary_device *aux; + int ret; + +@@ -432,6 +433,7 @@ static int ti_sn65dsi86_add_aux_device(struct ti_sn65dsi86 *pdata, + return -ENOMEM; + + aux->name = name; ++ aux->id = (client->adapter->nr << 10) | client->addr; + aux->dev.parent = dev; + aux->dev.release = ti_sn65dsi86_aux_device_release; + device_set_of_node_from_dev(&aux->dev, dev); +diff --git a/drivers/gpu/drm/drm_atomic_uapi.c b/drivers/gpu/drm/drm_atomic_uapi.c +index fb646b12af042e..476357fb5c74ad 100644 +--- a/drivers/gpu/drm/drm_atomic_uapi.c ++++ b/drivers/gpu/drm/drm_atomic_uapi.c +@@ -964,6 +964,10 @@ int drm_atomic_connector_commit_dpms(struct drm_atomic_state *state, + + if (mode != DRM_MODE_DPMS_ON) + mode = DRM_MODE_DPMS_OFF; ++ ++ if (connector->dpms == mode) ++ goto out; ++ + connector->dpms = mode; + + crtc = connector->state->crtc; +diff --git a/drivers/gpu/drm/drm_connector.c b/drivers/gpu/drm/drm_connector.c +index 1140292820bb16..fff2038e98035a 100644 +--- a/drivers/gpu/drm/drm_connector.c ++++ b/drivers/gpu/drm/drm_connector.c +@@ -980,6 +980,10 @@ static const struct drm_prop_enum_list dp_colorspaces[] = { + * callback. For atomic drivers the remapping to the "ACTIVE" property is + * implemented in the DRM core. + * ++ * On atomic drivers any DPMS setproperty ioctl where the value does not ++ * change is completely skipped, otherwise a full atomic commit will occur. ++ * On legacy drivers the exact behavior is driver specific. 
++ * + * Note that this property cannot be set through the MODE_ATOMIC ioctl, + * userspace must use "ACTIVE" on the CRTC instead. + * +diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c +index 86e1a61b6b6df3..70be5719e403b9 100644 +--- a/drivers/gpu/drm/drm_dp_mst_topology.c ++++ b/drivers/gpu/drm/drm_dp_mst_topology.c +@@ -178,13 +178,13 @@ static int + drm_dp_mst_rad_to_str(const u8 rad[8], u8 lct, char *out, size_t len) + { + int i; +- u8 unpacked_rad[16]; ++ u8 unpacked_rad[16] = {}; + +- for (i = 0; i < lct; i++) { ++ for (i = 1; i < lct; i++) { + if (i % 2) +- unpacked_rad[i] = rad[i / 2] >> 4; ++ unpacked_rad[i] = rad[(i - 1) / 2] >> 4; + else +- unpacked_rad[i] = rad[i / 2] & BIT_MASK(4); ++ unpacked_rad[i] = rad[(i - 1) / 2] & 0xF; + } + + /* TODO: Eventually add something to printk so we can format the rad +diff --git a/drivers/gpu/drm/gma500/mid_bios.c b/drivers/gpu/drm/gma500/mid_bios.c +index 68e787924ed04e..1ba33f9518da23 100644 +--- a/drivers/gpu/drm/gma500/mid_bios.c ++++ b/drivers/gpu/drm/gma500/mid_bios.c +@@ -280,6 +280,11 @@ static void mid_get_vbt_data(struct drm_psb_private *dev_priv) + 0, PCI_DEVFN(2, 0)); + int ret = -1; + ++ if (pci_gfx_root == NULL) { ++ WARN_ON(1); ++ return; ++ } ++ + /* Get the address of the platform config vbt */ + pci_read_config_dword(pci_gfx_root, 0xFC, &addr); + pci_dev_put(pci_gfx_root); +diff --git a/drivers/gpu/drm/mediatek/mtk_dsi.c b/drivers/gpu/drm/mediatek/mtk_dsi.c +index 3b55a83b7cdff5..cac98f0104257e 100644 +--- a/drivers/gpu/drm/mediatek/mtk_dsi.c ++++ b/drivers/gpu/drm/mediatek/mtk_dsi.c +@@ -920,12 +920,12 @@ static ssize_t mtk_dsi_host_transfer(struct mipi_dsi_host *host, + const struct mipi_dsi_msg *msg) + { + struct mtk_dsi *dsi = host_to_dsi(host); +- u32 recv_cnt, i; ++ ssize_t recv_cnt; + u8 read_data[16]; + void *src_addr; + u8 irq_flag = CMD_DONE_INT_FLAG; + u32 dsi_mode; +- int ret; ++ int ret, i; + + dsi_mode = readl(dsi->regs + DSI_MODE_CTRL); + if 
(dsi_mode & MODE) { +@@ -974,7 +974,7 @@ static ssize_t mtk_dsi_host_transfer(struct mipi_dsi_host *host, + if (recv_cnt) + memcpy(msg->rx_buf, src_addr, recv_cnt); + +- DRM_INFO("dsi get %d byte data from the panel address(0x%x)\n", ++ DRM_INFO("dsi get %zd byte data from the panel address(0x%x)\n", + recv_cnt, *((u8 *)(msg->tx_buf))); + + restore_dsi_mode: +diff --git a/drivers/gpu/drm/mediatek/mtk_hdmi.c b/drivers/gpu/drm/mediatek/mtk_hdmi.c +index 7613b0fa2be6ee..67ab6579daf72e 100644 +--- a/drivers/gpu/drm/mediatek/mtk_hdmi.c ++++ b/drivers/gpu/drm/mediatek/mtk_hdmi.c +@@ -138,7 +138,7 @@ enum hdmi_aud_channel_swap_type { + + struct hdmi_audio_param { + enum hdmi_audio_coding_type aud_codec; +- enum hdmi_audio_sample_size aud_sampe_size; ++ enum hdmi_audio_sample_size aud_sample_size; + enum hdmi_aud_input_type aud_input_type; + enum hdmi_aud_i2s_fmt aud_i2s_fmt; + enum hdmi_aud_mclk aud_mclk; +@@ -174,6 +174,7 @@ struct mtk_hdmi { + unsigned int sys_offset; + void __iomem *regs; + enum hdmi_colorspace csp; ++ struct platform_device *audio_pdev; + struct hdmi_audio_param aud_param; + bool audio_enable; + bool powered; +@@ -1075,7 +1076,7 @@ static int mtk_hdmi_output_init(struct mtk_hdmi *hdmi) + + hdmi->csp = HDMI_COLORSPACE_RGB; + aud_param->aud_codec = HDMI_AUDIO_CODING_TYPE_PCM; +- aud_param->aud_sampe_size = HDMI_AUDIO_SAMPLE_SIZE_16; ++ aud_param->aud_sample_size = HDMI_AUDIO_SAMPLE_SIZE_16; + aud_param->aud_input_type = HDMI_AUD_INPUT_I2S; + aud_param->aud_i2s_fmt = HDMI_I2S_MODE_I2S_24BIT; + aud_param->aud_mclk = HDMI_AUD_MCLK_128FS; +@@ -1576,14 +1577,14 @@ static int mtk_hdmi_audio_hw_params(struct device *dev, void *data, + switch (daifmt->fmt) { + case HDMI_I2S: + hdmi_params.aud_codec = HDMI_AUDIO_CODING_TYPE_PCM; +- hdmi_params.aud_sampe_size = HDMI_AUDIO_SAMPLE_SIZE_16; ++ hdmi_params.aud_sample_size = HDMI_AUDIO_SAMPLE_SIZE_16; + hdmi_params.aud_input_type = HDMI_AUD_INPUT_I2S; + hdmi_params.aud_i2s_fmt = HDMI_I2S_MODE_I2S_24BIT; + 
hdmi_params.aud_mclk = HDMI_AUD_MCLK_128FS; + break; + case HDMI_SPDIF: + hdmi_params.aud_codec = HDMI_AUDIO_CODING_TYPE_PCM; +- hdmi_params.aud_sampe_size = HDMI_AUDIO_SAMPLE_SIZE_16; ++ hdmi_params.aud_sample_size = HDMI_AUDIO_SAMPLE_SIZE_16; + hdmi_params.aud_input_type = HDMI_AUD_INPUT_SPDIF; + break; + default: +@@ -1667,6 +1668,11 @@ static const struct hdmi_codec_ops mtk_hdmi_audio_codec_ops = { + .no_capture_mute = 1, + }; + ++static void mtk_hdmi_unregister_audio_driver(void *data) ++{ ++ platform_device_unregister(data); ++} ++ + static int mtk_hdmi_register_audio_driver(struct device *dev) + { + struct mtk_hdmi *hdmi = dev_get_drvdata(dev); +@@ -1676,13 +1682,20 @@ static int mtk_hdmi_register_audio_driver(struct device *dev) + .i2s = 1, + .data = hdmi, + }; +- struct platform_device *pdev; ++ int ret; + +- pdev = platform_device_register_data(dev, HDMI_CODEC_DRV_NAME, +- PLATFORM_DEVID_AUTO, &codec_data, +- sizeof(codec_data)); +- if (IS_ERR(pdev)) +- return PTR_ERR(pdev); ++ hdmi->audio_pdev = platform_device_register_data(dev, ++ HDMI_CODEC_DRV_NAME, ++ PLATFORM_DEVID_AUTO, ++ &codec_data, ++ sizeof(codec_data)); ++ if (IS_ERR(hdmi->audio_pdev)) ++ return PTR_ERR(hdmi->audio_pdev); ++ ++ ret = devm_add_action_or_reset(dev, mtk_hdmi_unregister_audio_driver, ++ hdmi->audio_pdev); ++ if (ret) ++ return ret; + + DRM_INFO("%s driver bound to HDMI\n", HDMI_CODEC_DRV_NAME); + return 0; +diff --git a/drivers/gpu/drm/nouveau/nouveau_connector.c b/drivers/gpu/drm/nouveau/nouveau_connector.c +index ac9eb92059bc3e..30f871be52cb38 100644 +--- a/drivers/gpu/drm/nouveau/nouveau_connector.c ++++ b/drivers/gpu/drm/nouveau/nouveau_connector.c +@@ -754,7 +754,6 @@ nouveau_connector_force(struct drm_connector *connector) + if (!nv_encoder) { + NV_ERROR(drm, "can't find encoder to force %s on!\n", + connector->name); +- connector->status = connector_status_disconnected; + return; + } + +diff --git a/drivers/gpu/drm/radeon/radeon_vce.c b/drivers/gpu/drm/radeon/radeon_vce.c 
+index 511a942e851db8..4213aef9197209 100644 +--- a/drivers/gpu/drm/radeon/radeon_vce.c ++++ b/drivers/gpu/drm/radeon/radeon_vce.c +@@ -557,7 +557,7 @@ int radeon_vce_cs_parse(struct radeon_cs_parser *p) + { + int session_idx = -1; + bool destroyed = false, created = false, allocated = false; +- uint32_t tmp, handle = 0; ++ uint32_t tmp = 0, handle = 0; + uint32_t *size = &tmp; + int i, r = 0; + +diff --git a/drivers/gpu/drm/v3d/v3d_sched.c b/drivers/gpu/drm/v3d/v3d_sched.c +index dd7fcc36d726a2..c357229256b73a 100644 +--- a/drivers/gpu/drm/v3d/v3d_sched.c ++++ b/drivers/gpu/drm/v3d/v3d_sched.c +@@ -202,11 +202,15 @@ v3d_tfu_job_run(struct drm_sched_job *sched_job) + struct drm_device *dev = &v3d->drm; + struct dma_fence *fence; + ++ if (unlikely(job->base.base.s_fence->finished.error)) ++ return NULL; ++ ++ v3d->tfu_job = job; ++ + fence = v3d_fence_create(v3d, V3D_TFU); + if (IS_ERR(fence)) + return NULL; + +- v3d->tfu_job = job; + if (job->base.irq_fence) + dma_fence_put(job->base.irq_fence); + job->base.irq_fence = dma_fence_get(fence); +@@ -240,6 +244,9 @@ v3d_csd_job_run(struct drm_sched_job *sched_job) + struct dma_fence *fence; + int i; + ++ if (unlikely(job->base.base.s_fence->finished.error)) ++ return NULL; ++ + v3d->csd_job = job; + + v3d_invalidate_caches(v3d); +diff --git a/drivers/gpu/drm/vkms/vkms_drv.c b/drivers/gpu/drm/vkms/vkms_drv.c +index f716c5796f5fc7..09025ff3b1961a 100644 +--- a/drivers/gpu/drm/vkms/vkms_drv.c ++++ b/drivers/gpu/drm/vkms/vkms_drv.c +@@ -226,17 +226,19 @@ static int __init vkms_init(void) + if (!config) + return -ENOMEM; + +- default_config = config; +- + config->cursor = enable_cursor; + config->writeback = enable_writeback; + config->overlay = enable_overlay; + + ret = vkms_create(config); +- if (ret) ++ if (ret) { + kfree(config); ++ return ret; ++ } + +- return ret; ++ default_config = config; ++ ++ return 0; + } + + static void vkms_destroy(struct vkms_config *config) +@@ -260,9 +262,10 @@ static void 
vkms_destroy(struct vkms_config *config) + + static void __exit vkms_exit(void) + { +- if (default_config->dev) +- vkms_destroy(default_config); ++ if (!default_config) ++ return; + ++ vkms_destroy(default_config); + kfree(default_config); + } + +diff --git a/drivers/gpu/drm/xlnx/zynqmp_dpsub.c b/drivers/gpu/drm/xlnx/zynqmp_dpsub.c +index 5bb42d0a2de98d..78b7dd210d89c7 100644 +--- a/drivers/gpu/drm/xlnx/zynqmp_dpsub.c ++++ b/drivers/gpu/drm/xlnx/zynqmp_dpsub.c +@@ -204,6 +204,8 @@ static int zynqmp_dpsub_probe(struct platform_device *pdev) + if (ret) + return ret; + ++ dma_set_max_seg_size(&pdev->dev, DMA_BIT_MASK(32)); ++ + /* Try the reserved memory. Proceed if there's none. */ + of_reserved_mem_device_init(&pdev->dev); + +diff --git a/drivers/hid/Makefile b/drivers/hid/Makefile +index e29efcb1c0402c..9fc46db0a3da67 100644 +--- a/drivers/hid/Makefile ++++ b/drivers/hid/Makefile +@@ -144,7 +144,6 @@ obj-$(CONFIG_USB_KBD) += usbhid/ + obj-$(CONFIG_I2C_HID_CORE) += i2c-hid/ + + obj-$(CONFIG_INTEL_ISH_HID) += intel-ish-hid/ +-obj-$(INTEL_ISH_FIRMWARE_DOWNLOADER) += intel-ish-hid/ + + obj-$(CONFIG_AMD_SFH_HID) += amd-sfh-hid/ + +diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h +index 81db294dda4081..44825a916eeb2e 100644 +--- a/drivers/hid/hid-ids.h ++++ b/drivers/hid/hid-ids.h +@@ -1037,6 +1037,7 @@ + #define USB_DEVICE_ID_QUANTA_OPTICAL_TOUCH_3001 0x3001 + #define USB_DEVICE_ID_QUANTA_OPTICAL_TOUCH_3003 0x3003 + #define USB_DEVICE_ID_QUANTA_OPTICAL_TOUCH_3008 0x3008 ++#define USB_DEVICE_ID_QUANTA_HP_5MP_CAMERA_5473 0x5473 + + #define I2C_VENDOR_ID_RAYDIUM 0x2386 + #define I2C_PRODUCT_ID_RAYDIUM_4B33 0x4b33 +diff --git a/drivers/hid/hid-plantronics.c b/drivers/hid/hid-plantronics.c +index 25cfd964dc25d9..acb9eb18f7ccfe 100644 +--- a/drivers/hid/hid-plantronics.c ++++ b/drivers/hid/hid-plantronics.c +@@ -6,9 +6,6 @@ + * Copyright (c) 2015-2018 Terry Junge + */ + +-/* +- */ +- + #include "hid-ids.h" + + #include +@@ -23,30 +20,28 @@ + + #define PLT_VOL_UP 
0x00b1 + #define PLT_VOL_DOWN 0x00b2 ++#define PLT_MIC_MUTE 0x00b5 + + #define PLT1_VOL_UP (PLT_HID_1_0_PAGE | PLT_VOL_UP) + #define PLT1_VOL_DOWN (PLT_HID_1_0_PAGE | PLT_VOL_DOWN) ++#define PLT1_MIC_MUTE (PLT_HID_1_0_PAGE | PLT_MIC_MUTE) + #define PLT2_VOL_UP (PLT_HID_2_0_PAGE | PLT_VOL_UP) + #define PLT2_VOL_DOWN (PLT_HID_2_0_PAGE | PLT_VOL_DOWN) ++#define PLT2_MIC_MUTE (PLT_HID_2_0_PAGE | PLT_MIC_MUTE) ++#define HID_TELEPHONY_MUTE (HID_UP_TELEPHONY | 0x2f) ++#define HID_CONSUMER_MUTE (HID_UP_CONSUMER | 0xe2) + + #define PLT_DA60 0xda60 + #define PLT_BT300_MIN 0x0413 + #define PLT_BT300_MAX 0x0418 + +- +-#define PLT_ALLOW_CONSUMER (field->application == HID_CP_CONSUMERCONTROL && \ +- (usage->hid & HID_USAGE_PAGE) == HID_UP_CONSUMER) +- +-#define PLT_QUIRK_DOUBLE_VOLUME_KEYS BIT(0) +-#define PLT_QUIRK_FOLLOWED_OPPOSITE_VOLUME_KEYS BIT(1) +- + #define PLT_DOUBLE_KEY_TIMEOUT 5 /* ms */ +-#define PLT_FOLLOWED_OPPOSITE_KEY_TIMEOUT 220 /* ms */ + + struct plt_drv_data { + unsigned long device_type; +- unsigned long last_volume_key_ts; +- u32 quirks; ++ unsigned long last_key_ts; ++ unsigned long double_key_to; ++ __u16 last_key; + }; + + static int plantronics_input_mapping(struct hid_device *hdev, +@@ -58,34 +53,43 @@ static int plantronics_input_mapping(struct hid_device *hdev, + unsigned short mapped_key; + struct plt_drv_data *drv_data = hid_get_drvdata(hdev); + unsigned long plt_type = drv_data->device_type; ++ int allow_mute = usage->hid == HID_TELEPHONY_MUTE; ++ int allow_consumer = field->application == HID_CP_CONSUMERCONTROL && ++ (usage->hid & HID_USAGE_PAGE) == HID_UP_CONSUMER && ++ usage->hid != HID_CONSUMER_MUTE; + + /* special case for PTT products */ + if (field->application == HID_GD_JOYSTICK) + goto defaulted; + +- /* handle volume up/down mapping */ + /* non-standard types or multi-HID interfaces - plt_type is PID */ + if (!(plt_type & HID_USAGE_PAGE)) { + switch (plt_type) { + case PLT_DA60: +- if (PLT_ALLOW_CONSUMER) ++ if (allow_consumer) + goto 
defaulted; +- goto ignored; ++ if (usage->hid == HID_CONSUMER_MUTE) { ++ mapped_key = KEY_MICMUTE; ++ goto mapped; ++ } ++ break; + default: +- if (PLT_ALLOW_CONSUMER) ++ if (allow_consumer || allow_mute) + goto defaulted; + } ++ goto ignored; + } +- /* handle standard types - plt_type is 0xffa0uuuu or 0xffa2uuuu */ +- /* 'basic telephony compliant' - allow default consumer page map */ +- else if ((plt_type & HID_USAGE) >= PLT_BASIC_TELEPHONY && +- (plt_type & HID_USAGE) != PLT_BASIC_EXCEPTION) { +- if (PLT_ALLOW_CONSUMER) +- goto defaulted; +- } +- /* not 'basic telephony' - apply legacy mapping */ +- /* only map if the field is in the device's primary vendor page */ +- else if (!((field->application ^ plt_type) & HID_USAGE_PAGE)) { ++ ++ /* handle standard consumer control mapping */ ++ /* and standard telephony mic mute mapping */ ++ if (allow_consumer || allow_mute) ++ goto defaulted; ++ ++ /* handle vendor unique types - plt_type is 0xffa0uuuu or 0xffa2uuuu */ ++ /* if not 'basic telephony compliant' - map vendor unique controls */ ++ if (!((plt_type & HID_USAGE) >= PLT_BASIC_TELEPHONY && ++ (plt_type & HID_USAGE) != PLT_BASIC_EXCEPTION) && ++ !((field->application ^ plt_type) & HID_USAGE_PAGE)) + switch (usage->hid) { + case PLT1_VOL_UP: + case PLT2_VOL_UP: +@@ -95,8 +99,11 @@ static int plantronics_input_mapping(struct hid_device *hdev, + case PLT2_VOL_DOWN: + mapped_key = KEY_VOLUMEDOWN; + goto mapped; ++ case PLT1_MIC_MUTE: ++ case PLT2_MIC_MUTE: ++ mapped_key = KEY_MICMUTE; ++ goto mapped; + } +- } + + /* + * Future mapping of call control or other usages, +@@ -105,6 +112,8 @@ static int plantronics_input_mapping(struct hid_device *hdev, + */ + + ignored: ++ hid_dbg(hdev, "usage: %08x (appl: %08x) - ignored\n", ++ usage->hid, field->application); + return -1; + + defaulted: +@@ -123,38 +132,26 @@ static int plantronics_event(struct hid_device *hdev, struct hid_field *field, + struct hid_usage *usage, __s32 value) + { + struct plt_drv_data *drv_data = 
hid_get_drvdata(hdev); ++ unsigned long prev_tsto, cur_ts; ++ __u16 prev_key, cur_key; + +- if (drv_data->quirks & PLT_QUIRK_DOUBLE_VOLUME_KEYS) { +- unsigned long prev_ts, cur_ts; ++ /* Usages are filtered in plantronics_usages. */ + +- /* Usages are filtered in plantronics_usages. */ ++ /* HZ too low for ms resolution - double key detection disabled */ ++ /* or it is a key release - handle key presses only. */ ++ if (!drv_data->double_key_to || !value) ++ return 0; + +- if (!value) /* Handle key presses only. */ +- return 0; ++ prev_tsto = drv_data->last_key_ts + drv_data->double_key_to; ++ cur_ts = drv_data->last_key_ts = jiffies; ++ prev_key = drv_data->last_key; ++ cur_key = drv_data->last_key = usage->code; + +- prev_ts = drv_data->last_volume_key_ts; +- cur_ts = jiffies; +- if (jiffies_to_msecs(cur_ts - prev_ts) <= PLT_DOUBLE_KEY_TIMEOUT) +- return 1; /* Ignore the repeated key. */ +- +- drv_data->last_volume_key_ts = cur_ts; ++ /* If the same key occurs in <= double_key_to -- ignore it */ ++ if (prev_key == cur_key && time_before_eq(cur_ts, prev_tsto)) { ++ hid_dbg(hdev, "double key %d ignored\n", cur_key); ++ return 1; /* Ignore the repeated key. */ + } +- if (drv_data->quirks & PLT_QUIRK_FOLLOWED_OPPOSITE_VOLUME_KEYS) { +- unsigned long prev_ts, cur_ts; +- +- /* Usages are filtered in plantronics_usages. */ +- +- if (!value) /* Handle key presses only. */ +- return 0; +- +- prev_ts = drv_data->last_volume_key_ts; +- cur_ts = jiffies; +- if (jiffies_to_msecs(cur_ts - prev_ts) <= PLT_FOLLOWED_OPPOSITE_KEY_TIMEOUT) +- return 1; /* Ignore the followed opposite volume key. 
*/ +- +- drv_data->last_volume_key_ts = cur_ts; +- } +- + return 0; + } + +@@ -196,12 +193,16 @@ static int plantronics_probe(struct hid_device *hdev, + ret = hid_parse(hdev); + if (ret) { + hid_err(hdev, "parse failed\n"); +- goto err; ++ return ret; + } + + drv_data->device_type = plantronics_device_type(hdev); +- drv_data->quirks = id->driver_data; +- drv_data->last_volume_key_ts = jiffies - msecs_to_jiffies(PLT_DOUBLE_KEY_TIMEOUT); ++ drv_data->double_key_to = msecs_to_jiffies(PLT_DOUBLE_KEY_TIMEOUT); ++ drv_data->last_key_ts = jiffies - drv_data->double_key_to; ++ ++ /* if HZ does not allow ms resolution - disable double key detection */ ++ if (drv_data->double_key_to < PLT_DOUBLE_KEY_TIMEOUT) ++ drv_data->double_key_to = 0; + + hid_set_drvdata(hdev, drv_data); + +@@ -210,29 +211,10 @@ static int plantronics_probe(struct hid_device *hdev, + if (ret) + hid_err(hdev, "hw start failed\n"); + +-err: + return ret; + } + + static const struct hid_device_id plantronics_devices[] = { +- { HID_USB_DEVICE(USB_VENDOR_ID_PLANTRONICS, +- USB_DEVICE_ID_PLANTRONICS_BLACKWIRE_3210_SERIES), +- .driver_data = PLT_QUIRK_DOUBLE_VOLUME_KEYS }, +- { HID_USB_DEVICE(USB_VENDOR_ID_PLANTRONICS, +- USB_DEVICE_ID_PLANTRONICS_BLACKWIRE_3220_SERIES), +- .driver_data = PLT_QUIRK_DOUBLE_VOLUME_KEYS }, +- { HID_USB_DEVICE(USB_VENDOR_ID_PLANTRONICS, +- USB_DEVICE_ID_PLANTRONICS_BLACKWIRE_3215_SERIES), +- .driver_data = PLT_QUIRK_DOUBLE_VOLUME_KEYS }, +- { HID_USB_DEVICE(USB_VENDOR_ID_PLANTRONICS, +- USB_DEVICE_ID_PLANTRONICS_BLACKWIRE_3225_SERIES), +- .driver_data = PLT_QUIRK_DOUBLE_VOLUME_KEYS }, +- { HID_USB_DEVICE(USB_VENDOR_ID_PLANTRONICS, +- USB_DEVICE_ID_PLANTRONICS_BLACKWIRE_3325_SERIES), +- .driver_data = PLT_QUIRK_FOLLOWED_OPPOSITE_VOLUME_KEYS }, +- { HID_USB_DEVICE(USB_VENDOR_ID_PLANTRONICS, +- USB_DEVICE_ID_PLANTRONICS_ENCOREPRO_500_SERIES), +- .driver_data = PLT_QUIRK_FOLLOWED_OPPOSITE_VOLUME_KEYS }, + { HID_USB_DEVICE(USB_VENDOR_ID_PLANTRONICS, HID_ANY_ID) }, + { } + }; +@@ -241,6 
+223,14 @@ MODULE_DEVICE_TABLE(hid, plantronics_devices); + static const struct hid_usage_id plantronics_usages[] = { + { HID_CP_VOLUMEUP, EV_KEY, HID_ANY_ID }, + { HID_CP_VOLUMEDOWN, EV_KEY, HID_ANY_ID }, ++ { HID_TELEPHONY_MUTE, EV_KEY, HID_ANY_ID }, ++ { HID_CONSUMER_MUTE, EV_KEY, HID_ANY_ID }, ++ { PLT2_VOL_UP, EV_KEY, HID_ANY_ID }, ++ { PLT2_VOL_DOWN, EV_KEY, HID_ANY_ID }, ++ { PLT2_MIC_MUTE, EV_KEY, HID_ANY_ID }, ++ { PLT1_VOL_UP, EV_KEY, HID_ANY_ID }, ++ { PLT1_VOL_DOWN, EV_KEY, HID_ANY_ID }, ++ { PLT1_MIC_MUTE, EV_KEY, HID_ANY_ID }, + { HID_TERMINATOR, HID_TERMINATOR, HID_TERMINATOR } + }; + +diff --git a/drivers/hid/hid-quirks.c b/drivers/hid/hid-quirks.c +index 80e4247a768bdd..b5ad4c87daacfc 100644 +--- a/drivers/hid/hid-quirks.c ++++ b/drivers/hid/hid-quirks.c +@@ -871,6 +871,7 @@ static const struct hid_device_id hid_ignore_list[] = { + { HID_USB_DEVICE(USB_VENDOR_ID_SYNAPTICS, USB_DEVICE_ID_SYNAPTICS_DPAD) }, + #endif + { HID_USB_DEVICE(USB_VENDOR_ID_YEALINK, USB_DEVICE_ID_YEALINK_P1K_P4K_B2K) }, ++ { HID_USB_DEVICE(USB_VENDOR_ID_QUANTA, USB_DEVICE_ID_QUANTA_HP_5MP_CAMERA_5473) }, + { } + }; + +diff --git a/drivers/hid/intel-ish-hid/ipc/ipc.c b/drivers/hid/intel-ish-hid/ipc/ipc.c +index ba45605fc6b52b..a48f7cd514b0ff 100644 +--- a/drivers/hid/intel-ish-hid/ipc/ipc.c ++++ b/drivers/hid/intel-ish-hid/ipc/ipc.c +@@ -577,14 +577,14 @@ static void fw_reset_work_fn(struct work_struct *unused) + static void _ish_sync_fw_clock(struct ishtp_device *dev) + { + static unsigned long prev_sync; +- uint64_t usec; ++ struct ipc_time_update_msg time = {}; + + if (prev_sync && jiffies - prev_sync < 20 * HZ) + return; + + prev_sync = jiffies; +- usec = ktime_to_us(ktime_get_boottime()); +- ipc_send_mng_msg(dev, MNG_SYNC_FW_CLOCK, &usec, sizeof(uint64_t)); ++ /* The fields of time would be updated while sending message */ ++ ipc_send_mng_msg(dev, MNG_SYNC_FW_CLOCK, &time, sizeof(time)); + } + + /** +diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c +index 
02aeb192e36712..cb3a5b13c3ec2d 100644 +--- a/drivers/hv/vmbus_drv.c ++++ b/drivers/hv/vmbus_drv.c +@@ -2419,12 +2419,25 @@ void vmbus_free_mmio(resource_size_t start, resource_size_t size) + struct resource *iter; + + mutex_lock(&hyperv_mmio_lock); ++ ++ /* ++ * If all bytes of the MMIO range to be released are within the ++ * special case fb_mmio shadow region, skip releasing the shadow ++ * region since no corresponding __request_region() was done ++ * in vmbus_allocate_mmio(). ++ */ ++ if (fb_mmio && start >= fb_mmio->start && ++ (start + size - 1 <= fb_mmio->end)) ++ goto skip_shadow_release; ++ + for (iter = hyperv_mmio; iter; iter = iter->sibling) { + if ((iter->start >= start + size) || (iter->end <= start)) + continue; + + __release_region(iter, start, size); + } ++ ++skip_shadow_release: + release_mem_region(start, size); + mutex_unlock(&hyperv_mmio_lock); + +diff --git a/drivers/hwmon/nct6775.c b/drivers/hwmon/nct6775.c +index 3645a19cdaf4db..71cfc1c5bd12e1 100644 +--- a/drivers/hwmon/nct6775.c ++++ b/drivers/hwmon/nct6775.c +@@ -420,8 +420,8 @@ static const s8 NCT6776_BEEP_BITS[] = { + static const u16 NCT6776_REG_TOLERANCE_H[] = { + 0x10c, 0x20c, 0x30c, 0x80c, 0x90c, 0xa0c, 0xb0c }; + +-static const u8 NCT6776_REG_PWM_MODE[] = { 0x04, 0, 0, 0, 0, 0 }; +-static const u8 NCT6776_PWM_MODE_MASK[] = { 0x01, 0, 0, 0, 0, 0 }; ++static const u8 NCT6776_REG_PWM_MODE[] = { 0x04, 0, 0, 0, 0, 0, 0 }; ++static const u8 NCT6776_PWM_MODE_MASK[] = { 0x01, 0, 0, 0, 0, 0, 0 }; + + static const u16 NCT6776_REG_FAN_MIN[] = { + 0x63a, 0x63c, 0x63e, 0x640, 0x642, 0x64a, 0x64c }; +diff --git a/drivers/hwtracing/coresight/coresight-catu.c b/drivers/hwtracing/coresight/coresight-catu.c +index e0740c6dbd5423..29c74a49053668 100644 +--- a/drivers/hwtracing/coresight/coresight-catu.c ++++ b/drivers/hwtracing/coresight/coresight-catu.c +@@ -267,7 +267,7 @@ catu_init_sg_table(struct device *catu_dev, int node, + * Each table can address upto 1MB and we can have + * 
CATU_PAGES_PER_SYSPAGE tables in a system page. + */ +- nr_tpages = DIV_ROUND_UP(size, SZ_1M) / CATU_PAGES_PER_SYSPAGE; ++ nr_tpages = DIV_ROUND_UP(size, CATU_PAGES_PER_SYSPAGE * SZ_1M); + catu_table = tmc_alloc_sg_table(catu_dev, node, nr_tpages, + size >> PAGE_SHIFT, pages); + if (IS_ERR(catu_table)) +diff --git a/drivers/i2c/busses/i2c-ali1535.c b/drivers/i2c/busses/i2c-ali1535.c +index ee83c4581bce05..cd2c8afebe798d 100644 +--- a/drivers/i2c/busses/i2c-ali1535.c ++++ b/drivers/i2c/busses/i2c-ali1535.c +@@ -490,6 +490,8 @@ MODULE_DEVICE_TABLE(pci, ali1535_ids); + + static int ali1535_probe(struct pci_dev *dev, const struct pci_device_id *id) + { ++ int ret; ++ + if (ali1535_setup(dev)) { + dev_warn(&dev->dev, + "ALI1535 not detected, module not inserted.\n"); +@@ -501,7 +503,15 @@ static int ali1535_probe(struct pci_dev *dev, const struct pci_device_id *id) + + snprintf(ali1535_adapter.name, sizeof(ali1535_adapter.name), + "SMBus ALI1535 adapter at %04x", ali1535_offset); +- return i2c_add_adapter(&ali1535_adapter); ++ ret = i2c_add_adapter(&ali1535_adapter); ++ if (ret) ++ goto release_region; ++ ++ return 0; ++ ++release_region: ++ release_region(ali1535_smba, ALI1535_SMB_IOSIZE); ++ return ret; + } + + static void ali1535_remove(struct pci_dev *dev) +diff --git a/drivers/i2c/busses/i2c-ali15x3.c b/drivers/i2c/busses/i2c-ali15x3.c +index cc58feacd08212..28a57cb6efb99b 100644 +--- a/drivers/i2c/busses/i2c-ali15x3.c ++++ b/drivers/i2c/busses/i2c-ali15x3.c +@@ -473,6 +473,8 @@ MODULE_DEVICE_TABLE (pci, ali15x3_ids); + + static int ali15x3_probe(struct pci_dev *dev, const struct pci_device_id *id) + { ++ int ret; ++ + if (ali15x3_setup(dev)) { + dev_err(&dev->dev, + "ALI15X3 not detected, module not inserted.\n"); +@@ -484,7 +486,15 @@ static int ali15x3_probe(struct pci_dev *dev, const struct pci_device_id *id) + + snprintf(ali15x3_adapter.name, sizeof(ali15x3_adapter.name), + "SMBus ALI15X3 adapter at %04x", ali15x3_smba); +- return 
i2c_add_adapter(&ali15x3_adapter); ++ ret = i2c_add_adapter(&ali15x3_adapter); ++ if (ret) ++ goto release_region; ++ ++ return 0; ++ ++release_region: ++ release_region(ali15x3_smba, ALI15X3_SMB_IOSIZE); ++ return ret; + } + + static void ali15x3_remove(struct pci_dev *dev) +diff --git a/drivers/i2c/busses/i2c-omap.c b/drivers/i2c/busses/i2c-omap.c +index 8955f62b497e67..5c045c4da660bd 100644 +--- a/drivers/i2c/busses/i2c-omap.c ++++ b/drivers/i2c/busses/i2c-omap.c +@@ -1049,23 +1049,6 @@ static int omap_i2c_transmit_data(struct omap_i2c_dev *omap, u8 num_bytes, + return 0; + } + +-static irqreturn_t +-omap_i2c_isr(int irq, void *dev_id) +-{ +- struct omap_i2c_dev *omap = dev_id; +- irqreturn_t ret = IRQ_HANDLED; +- u16 mask; +- u16 stat; +- +- stat = omap_i2c_read_reg(omap, OMAP_I2C_STAT_REG); +- mask = omap_i2c_read_reg(omap, OMAP_I2C_IE_REG) & ~OMAP_I2C_STAT_NACK; +- +- if (stat & mask) +- ret = IRQ_WAKE_THREAD; +- +- return ret; +-} +- + static int omap_i2c_xfer_data(struct omap_i2c_dev *omap) + { + u16 bits; +@@ -1096,8 +1079,13 @@ static int omap_i2c_xfer_data(struct omap_i2c_dev *omap) + } + + if (stat & OMAP_I2C_STAT_NACK) { +- err |= OMAP_I2C_STAT_NACK; ++ omap->cmd_err |= OMAP_I2C_STAT_NACK; + omap_i2c_ack_stat(omap, OMAP_I2C_STAT_NACK); ++ ++ if (!(stat & ~OMAP_I2C_STAT_NACK)) { ++ err = -EAGAIN; ++ break; ++ } + } + + if (stat & OMAP_I2C_STAT_AL) { +@@ -1475,7 +1463,7 @@ omap_i2c_probe(struct platform_device *pdev) + IRQF_NO_SUSPEND, pdev->name, omap); + else + r = devm_request_threaded_irq(&pdev->dev, omap->irq, +- omap_i2c_isr, omap_i2c_isr_thread, ++ NULL, omap_i2c_isr_thread, + IRQF_NO_SUSPEND | IRQF_ONESHOT, + pdev->name, omap); + +diff --git a/drivers/i2c/busses/i2c-sis630.c b/drivers/i2c/busses/i2c-sis630.c +index cfb8e04a2a8310..6befa6ff83f268 100644 +--- a/drivers/i2c/busses/i2c-sis630.c ++++ b/drivers/i2c/busses/i2c-sis630.c +@@ -509,6 +509,8 @@ MODULE_DEVICE_TABLE(pci, sis630_ids); + + static int sis630_probe(struct pci_dev *dev, const 
struct pci_device_id *id) + { ++ int ret; ++ + if (sis630_setup(dev)) { + dev_err(&dev->dev, + "SIS630 compatible bus not detected, " +@@ -522,7 +524,15 @@ static int sis630_probe(struct pci_dev *dev, const struct pci_device_id *id) + snprintf(sis630_adapter.name, sizeof(sis630_adapter.name), + "SMBus SIS630 adapter at %04x", smbus_base + SMB_STS); + +- return i2c_add_adapter(&sis630_adapter); ++ ret = i2c_add_adapter(&sis630_adapter); ++ if (ret) ++ goto release_region; ++ ++ return 0; ++ ++release_region: ++ release_region(smbus_base + SMB_STS, SIS630_SMB_IOREGION); ++ return ret; + } + + static void sis630_remove(struct pci_dev *dev) +diff --git a/drivers/i3c/master/svc-i3c-master.c b/drivers/i3c/master/svc-i3c-master.c +index 7fc82b003b9618..29440a1266b8a4 100644 +--- a/drivers/i3c/master/svc-i3c-master.c ++++ b/drivers/i3c/master/svc-i3c-master.c +@@ -807,7 +807,7 @@ static int svc_i3c_update_ibirules(struct svc_i3c_master *master) + + /* Create the IBIRULES register for both cases */ + i3c_bus_for_each_i3cdev(&master->base.bus, dev) { +- if (I3C_BCR_DEVICE_ROLE(dev->info.bcr) == I3C_BCR_I3C_MASTER) ++ if (!(dev->info.bcr & I3C_BCR_IBI_REQ_CAP)) + continue; + + if (dev->info.bcr & I3C_BCR_IBI_PAYLOAD) { +diff --git a/drivers/iio/accel/mma8452.c b/drivers/iio/accel/mma8452.c +index 1f46a73aafeacd..a7168803408f68 100644 +--- a/drivers/iio/accel/mma8452.c ++++ b/drivers/iio/accel/mma8452.c +@@ -709,7 +709,7 @@ static int mma8452_write_raw(struct iio_dev *indio_dev, + int val, int val2, long mask) + { + struct mma8452_data *data = iio_priv(indio_dev); +- int i, ret; ++ int i, j, ret; + + ret = iio_device_claim_direct_mode(indio_dev); + if (ret) +@@ -769,14 +769,18 @@ static int mma8452_write_raw(struct iio_dev *indio_dev, + break; + + case IIO_CHAN_INFO_OVERSAMPLING_RATIO: +- ret = mma8452_get_odr_index(data); ++ j = mma8452_get_odr_index(data); + + for (i = 0; i < ARRAY_SIZE(mma8452_os_ratio); i++) { +- if (mma8452_os_ratio[i][ret] == val) { ++ if 
(mma8452_os_ratio[i][j] == val) { + ret = mma8452_set_power_mode(data, i); + break; + } + } ++ if (i == ARRAY_SIZE(mma8452_os_ratio)) { ++ ret = -EINVAL; ++ break; ++ } + break; + default: + ret = -EINVAL; +diff --git a/drivers/iio/adc/ad7124.c b/drivers/iio/adc/ad7124.c +index 93f32bba73f62e..31c8cb3bf811b6 100644 +--- a/drivers/iio/adc/ad7124.c ++++ b/drivers/iio/adc/ad7124.c +@@ -144,7 +144,11 @@ struct ad7124_chip_info { + struct ad7124_channel_config { + bool live; + unsigned int cfg_slot; +- /* Following fields are used to compare equality. */ ++ /* ++ * Following fields are used to compare for equality. If you ++ * make adaptations in it, you most likely also have to adapt ++ * ad7124_find_similar_live_cfg(), too. ++ */ + struct_group(config_props, + enum ad7124_ref_sel refsel; + bool bipolar; +@@ -331,15 +335,38 @@ static struct ad7124_channel_config *ad7124_find_similar_live_cfg(struct ad7124_ + struct ad7124_channel_config *cfg) + { + struct ad7124_channel_config *cfg_aux; +- ptrdiff_t cmp_size; + int i; + +- cmp_size = sizeof_field(struct ad7124_channel_config, config_props); ++ /* ++ * This is just to make sure that the comparison is adapted after ++ * struct ad7124_channel_config was changed. 
++ */ ++ static_assert(sizeof_field(struct ad7124_channel_config, config_props) == ++ sizeof(struct { ++ enum ad7124_ref_sel refsel; ++ bool bipolar; ++ bool buf_positive; ++ bool buf_negative; ++ unsigned int vref_mv; ++ unsigned int pga_bits; ++ unsigned int odr; ++ unsigned int odr_sel_bits; ++ unsigned int filter_type; ++ })); ++ + for (i = 0; i < st->num_channels; i++) { + cfg_aux = &st->channels[i].cfg; + + if (cfg_aux->live && +- !memcmp(&cfg->config_props, &cfg_aux->config_props, cmp_size)) ++ cfg->refsel == cfg_aux->refsel && ++ cfg->bipolar == cfg_aux->bipolar && ++ cfg->buf_positive == cfg_aux->buf_positive && ++ cfg->buf_negative == cfg_aux->buf_negative && ++ cfg->vref_mv == cfg_aux->vref_mv && ++ cfg->pga_bits == cfg_aux->pga_bits && ++ cfg->odr == cfg_aux->odr && ++ cfg->odr_sel_bits == cfg_aux->odr_sel_bits && ++ cfg->filter_type == cfg_aux->filter_type) + return cfg_aux; + } + +diff --git a/drivers/infiniband/core/device.c b/drivers/infiniband/core/device.c +index 5d1ce55fda71ef..241245e25f004c 100644 +--- a/drivers/infiniband/core/device.c ++++ b/drivers/infiniband/core/device.c +@@ -542,6 +542,8 @@ static struct class ib_class = { + static void rdma_init_coredev(struct ib_core_device *coredev, + struct ib_device *dev, struct net *net) + { ++ bool is_full_dev = &dev->coredev == coredev; ++ + /* This BUILD_BUG_ON is intended to catch layout change + * of union of ib_core_device and device. + * dev must be the first element as ib_core and providers +@@ -553,6 +555,13 @@ static void rdma_init_coredev(struct ib_core_device *coredev, + + coredev->dev.class = &ib_class; + coredev->dev.groups = dev->groups; ++ ++ /* ++ * Don't expose hw counters outside of the init namespace. 
++ */ ++ if (!is_full_dev && dev->hw_stats_attr_index) ++ coredev->dev.groups[dev->hw_stats_attr_index] = NULL; ++ + device_initialize(&coredev->dev); + coredev->owner = dev; + INIT_LIST_HEAD(&coredev->port_list); +diff --git a/drivers/infiniband/core/mad.c b/drivers/infiniband/core/mad.c +index 58befbaaf0ad54..242434c09e8d8f 100644 +--- a/drivers/infiniband/core/mad.c ++++ b/drivers/infiniband/core/mad.c +@@ -2671,11 +2671,11 @@ static int ib_mad_post_receive_mads(struct ib_mad_qp_info *qp_info, + struct ib_mad_private *mad) + { + unsigned long flags; +- int post, ret; + struct ib_mad_private *mad_priv; + struct ib_sge sg_list; + struct ib_recv_wr recv_wr; + struct ib_mad_queue *recv_queue = &qp_info->recv_queue; ++ int ret = 0; + + /* Initialize common scatter list fields */ + sg_list.lkey = qp_info->port_priv->pd->local_dma_lkey; +@@ -2685,7 +2685,7 @@ static int ib_mad_post_receive_mads(struct ib_mad_qp_info *qp_info, + recv_wr.sg_list = &sg_list; + recv_wr.num_sge = 1; + +- do { ++ while (true) { + /* Allocate and map receive buffer */ + if (mad) { + mad_priv = mad; +@@ -2693,10 +2693,8 @@ static int ib_mad_post_receive_mads(struct ib_mad_qp_info *qp_info, + } else { + mad_priv = alloc_mad_private(port_mad_size(qp_info->port_priv), + GFP_ATOMIC); +- if (!mad_priv) { +- ret = -ENOMEM; +- break; +- } ++ if (!mad_priv) ++ return -ENOMEM; + } + sg_list.length = mad_priv_dma_size(mad_priv); + sg_list.addr = ib_dma_map_single(qp_info->port_priv->device, +@@ -2705,37 +2703,41 @@ static int ib_mad_post_receive_mads(struct ib_mad_qp_info *qp_info, + DMA_FROM_DEVICE); + if (unlikely(ib_dma_mapping_error(qp_info->port_priv->device, + sg_list.addr))) { +- kfree(mad_priv); + ret = -ENOMEM; +- break; ++ goto free_mad_priv; + } + mad_priv->header.mapping = sg_list.addr; + mad_priv->header.mad_list.mad_queue = recv_queue; + mad_priv->header.mad_list.cqe.done = ib_mad_recv_done; + recv_wr.wr_cqe = &mad_priv->header.mad_list.cqe; +- +- /* Post receive WR */ + 
spin_lock_irqsave(&recv_queue->lock, flags); +- post = (++recv_queue->count < recv_queue->max_active); +- list_add_tail(&mad_priv->header.mad_list.list, &recv_queue->list); ++ if (recv_queue->count >= recv_queue->max_active) { ++ /* Fully populated the receive queue */ ++ spin_unlock_irqrestore(&recv_queue->lock, flags); ++ break; ++ } ++ recv_queue->count++; ++ list_add_tail(&mad_priv->header.mad_list.list, ++ &recv_queue->list); + spin_unlock_irqrestore(&recv_queue->lock, flags); ++ + ret = ib_post_recv(qp_info->qp, &recv_wr, NULL); + if (ret) { + spin_lock_irqsave(&recv_queue->lock, flags); + list_del(&mad_priv->header.mad_list.list); + recv_queue->count--; + spin_unlock_irqrestore(&recv_queue->lock, flags); +- ib_dma_unmap_single(qp_info->port_priv->device, +- mad_priv->header.mapping, +- mad_priv_dma_size(mad_priv), +- DMA_FROM_DEVICE); +- kfree(mad_priv); + dev_err(&qp_info->port_priv->device->dev, + "ib_post_recv failed: %d\n", ret); + break; + } +- } while (post); ++ } + ++ ib_dma_unmap_single(qp_info->port_priv->device, ++ mad_priv->header.mapping, ++ mad_priv_dma_size(mad_priv), DMA_FROM_DEVICE); ++free_mad_priv: ++ kfree(mad_priv); + return ret; + } + +diff --git a/drivers/infiniband/core/sysfs.c b/drivers/infiniband/core/sysfs.c +index afc59048c40c88..f68673c370d2e1 100644 +--- a/drivers/infiniband/core/sysfs.c ++++ b/drivers/infiniband/core/sysfs.c +@@ -976,6 +976,7 @@ int ib_setup_device_attrs(struct ib_device *ibdev) + for (i = 0; i != ARRAY_SIZE(ibdev->groups); i++) + if (!ibdev->groups[i]) { + ibdev->groups[i] = &data->group; ++ ibdev->hw_stats_attr_index = i; + return 0; + } + WARN(true, "struct ib_device->groups is too small"); +diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.c b/drivers/infiniband/hw/bnxt_re/qplib_fp.c +index 27cf6e62422aa7..3725f05ad297ba 100644 +--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.c ++++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.c +@@ -1181,8 +1181,6 @@ static void __modify_flags_from_init_state(struct 
bnxt_qplib_qp *qp) + qp->path_mtu = + CMDQ_MODIFY_QP_PATH_MTU_MTU_2048; + } +- qp->modify_flags &= +- ~CMDQ_MODIFY_QP_MODIFY_MASK_VLAN_ID; + /* Bono FW require the max_dest_rd_atomic to be >= 1 */ + if (qp->max_dest_rd_atomic < 1) + qp->max_dest_rd_atomic = 1; +diff --git a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h +index aaf06cd939e691..08ea2925127970 100644 +--- a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h ++++ b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h +@@ -220,9 +220,10 @@ int bnxt_qplib_deinit_rcfw(struct bnxt_qplib_rcfw *rcfw); + int bnxt_qplib_init_rcfw(struct bnxt_qplib_rcfw *rcfw, + struct bnxt_qplib_ctx *ctx, int is_virtfn); + void bnxt_qplib_mark_qp_error(void *qp_handle); ++ + static inline u32 map_qp_id_to_tbl_indx(u32 qid, struct bnxt_qplib_rcfw *rcfw) + { + /* Last index of the qp_tbl is for QP1 ie. qp_tbl_size - 1*/ +- return (qid == 1) ? rcfw->qp_tbl_size - 1 : qid % rcfw->qp_tbl_size - 2; ++ return (qid == 1) ? rcfw->qp_tbl_size - 1 : (qid % (rcfw->qp_tbl_size - 2)); + } + #endif /* __BNXT_QPLIB_RCFW_H__ */ +diff --git a/drivers/infiniband/hw/hns/hns_roce_hem.c b/drivers/infiniband/hw/hns/hns_roce_hem.c +index 99708a7bcda785..64ee875cc6d1c8 100644 +--- a/drivers/infiniband/hw/hns/hns_roce_hem.c ++++ b/drivers/infiniband/hw/hns/hns_roce_hem.c +@@ -1410,6 +1410,11 @@ static int hem_list_alloc_root_bt(struct hns_roce_dev *hr_dev, + return ret; + } + ++/* This is the bottom bt pages number of a 100G MR on 4K OS, assuming ++ * the bt page size is not expanded by cal_best_bt_pg_sz() ++ */ ++#define RESCHED_LOOP_CNT_THRESHOLD_ON_4K 12800 ++ + /* construct the base address table and link them by address hop config */ + int hns_roce_hem_list_request(struct hns_roce_dev *hr_dev, + struct hns_roce_hem_list *hem_list, +@@ -1418,6 +1423,7 @@ int hns_roce_hem_list_request(struct hns_roce_dev *hr_dev, + { + const struct hns_roce_buf_region *r; + int ofs, end; ++ int loop; + int unit; + int ret; + int i; +@@ -1435,7 
+1441,10 @@ int hns_roce_hem_list_request(struct hns_roce_dev *hr_dev, + continue; + + end = r->offset + r->count; +- for (ofs = r->offset; ofs < end; ofs += unit) { ++ for (ofs = r->offset, loop = 1; ofs < end; ofs += unit, loop++) { ++ if (!(loop % RESCHED_LOOP_CNT_THRESHOLD_ON_4K)) ++ cond_resched(); ++ + ret = hem_list_alloc_mid_bt(hr_dev, r, unit, ofs, + hem_list->mid_bt[i], + &hem_list->btm_bt); +@@ -1487,19 +1496,22 @@ void hns_roce_hem_list_init(struct hns_roce_hem_list *hem_list) + + void *hns_roce_hem_list_find_mtt(struct hns_roce_dev *hr_dev, + struct hns_roce_hem_list *hem_list, +- int offset, int *mtt_cnt, u64 *phy_addr) ++ int offset, int *mtt_cnt) + { + struct list_head *head = &hem_list->btm_bt; + struct hns_roce_hem_item *hem, *temp_hem; + void *cpu_base = NULL; +- u64 phy_base = 0; ++ int loop = 1; + int nr = 0; + + list_for_each_entry_safe(hem, temp_hem, head, sibling) { ++ if (!(loop % RESCHED_LOOP_CNT_THRESHOLD_ON_4K)) ++ cond_resched(); ++ loop++; ++ + if (hem_list_page_is_in_range(hem, offset)) { + nr = offset - hem->start; + cpu_base = hem->addr + nr * BA_BYTE_LEN; +- phy_base = hem->dma_addr + nr * BA_BYTE_LEN; + nr = hem->end + 1 - offset; + break; + } +@@ -1508,8 +1520,5 @@ void *hns_roce_hem_list_find_mtt(struct hns_roce_dev *hr_dev, + if (mtt_cnt) + *mtt_cnt = nr; + +- if (phy_addr) +- *phy_addr = phy_base; +- + return cpu_base; + } +diff --git a/drivers/infiniband/hw/hns/hns_roce_hem.h b/drivers/infiniband/hw/hns/hns_roce_hem.h +index fa84ce33076aca..150922b22eaa6b 100644 +--- a/drivers/infiniband/hw/hns/hns_roce_hem.h ++++ b/drivers/infiniband/hw/hns/hns_roce_hem.h +@@ -132,7 +132,7 @@ void hns_roce_hem_list_release(struct hns_roce_dev *hr_dev, + struct hns_roce_hem_list *hem_list); + void *hns_roce_hem_list_find_mtt(struct hns_roce_dev *hr_dev, + struct hns_roce_hem_list *hem_list, +- int offset, int *mtt_cnt, u64 *phy_addr); ++ int offset, int *mtt_cnt); + + static inline void hns_roce_hem_first(struct hns_roce_hem *hem, + struct 
hns_roce_hem_iter *iter) +diff --git a/drivers/infiniband/hw/hns/hns_roce_main.c b/drivers/infiniband/hw/hns/hns_roce_main.c +index e2d2f8f2bdbcfc..83a6b8fbe10f06 100644 +--- a/drivers/infiniband/hw/hns/hns_roce_main.c ++++ b/drivers/infiniband/hw/hns/hns_roce_main.c +@@ -185,7 +185,7 @@ static int hns_roce_query_device(struct ib_device *ib_dev, + IB_DEVICE_RC_RNR_NAK_GEN; + props->max_send_sge = hr_dev->caps.max_sq_sg; + props->max_recv_sge = hr_dev->caps.max_rq_sg; +- props->max_sge_rd = 1; ++ props->max_sge_rd = hr_dev->caps.max_sq_sg; + props->max_cq = hr_dev->caps.num_cqs; + props->max_cqe = hr_dev->caps.max_cqes; + props->max_mr = hr_dev->caps.num_mtpts; +diff --git a/drivers/infiniband/hw/hns/hns_roce_mr.c b/drivers/infiniband/hw/hns/hns_roce_mr.c +index 604dd38b5c8fd9..791a45802d6b28 100644 +--- a/drivers/infiniband/hw/hns/hns_roce_mr.c ++++ b/drivers/infiniband/hw/hns/hns_roce_mr.c +@@ -614,7 +614,7 @@ static int mtr_map_region(struct hns_roce_dev *hr_dev, struct hns_roce_mtr *mtr, + while (offset < end && npage < max_count) { + count = 0; + mtts = hns_roce_hem_list_find_mtt(hr_dev, &mtr->hem_list, +- offset, &count, NULL); ++ offset, &count); + if (!mtts) + return -ENOBUFS; + +@@ -864,7 +864,7 @@ int hns_roce_mtr_find(struct hns_roce_dev *hr_dev, struct hns_roce_mtr *mtr, + mtt_count = 0; + mtts = hns_roce_hem_list_find_mtt(hr_dev, &mtr->hem_list, + start_index + total, +- &mtt_count, NULL); ++ &mtt_count); + if (!mtts || !mtt_count) + goto done; + +diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c +index ff019e32c4552e..3875563abf3741 100644 +--- a/drivers/infiniband/hw/hns/hns_roce_qp.c ++++ b/drivers/infiniband/hw/hns/hns_roce_qp.c +@@ -842,12 +842,14 @@ static int alloc_user_qp_db(struct hns_roce_dev *hr_dev, + struct hns_roce_ib_create_qp *ucmd, + struct hns_roce_ib_create_qp_resp *resp) + { ++ bool has_sdb = user_qp_has_sdb(hr_dev, init_attr, udata, resp, ucmd); + struct hns_roce_ucontext *uctx = 
rdma_udata_to_drv_context(udata, + struct hns_roce_ucontext, ibucontext); ++ bool has_rdb = user_qp_has_rdb(hr_dev, init_attr, udata, resp); + struct ib_device *ibdev = &hr_dev->ib_dev; + int ret; + +- if (user_qp_has_sdb(hr_dev, init_attr, udata, resp, ucmd)) { ++ if (has_sdb) { + ret = hns_roce_db_map_user(uctx, ucmd->sdb_addr, &hr_qp->sdb); + if (ret) { + ibdev_err(ibdev, +@@ -858,7 +860,7 @@ static int alloc_user_qp_db(struct hns_roce_dev *hr_dev, + hr_qp->en_flags |= HNS_ROCE_QP_CAP_SQ_RECORD_DB; + } + +- if (user_qp_has_rdb(hr_dev, init_attr, udata, resp)) { ++ if (has_rdb) { + ret = hns_roce_db_map_user(uctx, ucmd->db_addr, &hr_qp->rdb); + if (ret) { + ibdev_err(ibdev, +@@ -872,7 +874,7 @@ static int alloc_user_qp_db(struct hns_roce_dev *hr_dev, + return 0; + + err_sdb: +- if (hr_qp->en_flags & HNS_ROCE_QP_CAP_SQ_RECORD_DB) ++ if (has_sdb) + hns_roce_db_unmap_user(uctx, &hr_qp->sdb); + err_out: + return ret; +@@ -1115,7 +1117,7 @@ static int hns_roce_create_qp_common(struct hns_roce_dev *hr_dev, + min(udata->outlen, sizeof(resp))); + if (ret) { + ibdev_err(ibdev, "copy qp resp failed!\n"); +- goto err_store; ++ goto err_flow_ctrl; + } + } + +diff --git a/drivers/infiniband/hw/mlx5/cq.c b/drivers/infiniband/hw/mlx5/cq.c +index a190fb58159134..f3becb506125e1 100644 +--- a/drivers/infiniband/hw/mlx5/cq.c ++++ b/drivers/infiniband/hw/mlx5/cq.c +@@ -481,7 +481,7 @@ static int mlx5_poll_one(struct mlx5_ib_cq *cq, + } + + qpn = ntohl(cqe64->sop_drop_qpn) & 0xffffff; +- if (!*cur_qp || (qpn != (*cur_qp)->ibqp.qp_num)) { ++ if (!*cur_qp || (qpn != (*cur_qp)->trans_qp.base.mqp.qpn)) { + /* We do not have to take the QP table lock here, + * because CQs will be locked while QPs are removed + * from the table. 
+diff --git a/drivers/media/dvb-frontends/dib8000.c b/drivers/media/dvb-frontends/dib8000.c +index 02cb48223dc679..a28cbbd9e475c9 100644 +--- a/drivers/media/dvb-frontends/dib8000.c ++++ b/drivers/media/dvb-frontends/dib8000.c +@@ -2701,8 +2701,11 @@ static void dib8000_set_dds(struct dib8000_state *state, s32 offset_khz) + u8 ratio; + + if (state->revision == 0x8090) { ++ u32 internal = dib8000_read32(state, 23) / 1000; ++ + ratio = 4; +- unit_khz_dds_val = (1<<26) / (dib8000_read32(state, 23) / 1000); ++ ++ unit_khz_dds_val = (1<<26) / (internal ?: 1); + if (offset_khz < 0) + dds = (1 << 26) - (abs_offset_khz * unit_khz_dds_val); + else +diff --git a/drivers/media/i2c/et8ek8/et8ek8_driver.c b/drivers/media/i2c/et8ek8/et8ek8_driver.c +index 873d614339bbb9..2910842705bc11 100644 +--- a/drivers/media/i2c/et8ek8/et8ek8_driver.c ++++ b/drivers/media/i2c/et8ek8/et8ek8_driver.c +@@ -1460,7 +1460,7 @@ static int et8ek8_probe(struct i2c_client *client) + return ret; + } + +-static int __exit et8ek8_remove(struct i2c_client *client) ++static int et8ek8_remove(struct i2c_client *client) + { + struct v4l2_subdev *subdev = i2c_get_clientdata(client); + struct et8ek8_sensor *sensor = to_et8ek8_sensor(subdev); +@@ -1504,7 +1504,7 @@ static struct i2c_driver et8ek8_i2c_driver = { + .of_match_table = et8ek8_of_table, + }, + .probe_new = et8ek8_probe, +- .remove = __exit_p(et8ek8_remove), ++ .remove = et8ek8_remove, + .id_table = et8ek8_id_table, + }; + +diff --git a/drivers/media/platform/allegro-dvt/allegro-core.c b/drivers/media/platform/allegro-dvt/allegro-core.c +index 881c5bbf61568c..f472eb19cd92f9 100644 +--- a/drivers/media/platform/allegro-dvt/allegro-core.c ++++ b/drivers/media/platform/allegro-dvt/allegro-core.c +@@ -3740,6 +3740,7 @@ static int allegro_probe(struct platform_device *pdev) + if (ret < 0) { + v4l2_err(&dev->v4l2_dev, + "failed to request firmware: %d\n", ret); ++ v4l2_device_unregister(&dev->v4l2_dev); + return ret; + } + +diff --git 
a/drivers/memstick/host/rtsx_usb_ms.c b/drivers/memstick/host/rtsx_usb_ms.c +index 29271ad4728a2f..dec279845a752d 100644 +--- a/drivers/memstick/host/rtsx_usb_ms.c ++++ b/drivers/memstick/host/rtsx_usb_ms.c +@@ -813,6 +813,7 @@ static int rtsx_usb_ms_drv_remove(struct platform_device *pdev) + + host->eject = true; + cancel_work_sync(&host->handle_req); ++ cancel_delayed_work_sync(&host->poll_card); + + mutex_lock(&host->host_mutex); + if (host->req) { +diff --git a/drivers/mfd/sm501.c b/drivers/mfd/sm501.c +index 3ac4508a6742ac..78dcbf8e2c15d3 100644 +--- a/drivers/mfd/sm501.c ++++ b/drivers/mfd/sm501.c +@@ -920,7 +920,7 @@ static void sm501_gpio_set(struct gpio_chip *chip, unsigned offset, int value) + { + struct sm501_gpio_chip *smchip = gpiochip_get_data(chip); + struct sm501_gpio *smgpio = smchip->ourgpio; +- unsigned long bit = 1 << offset; ++ unsigned long bit = BIT(offset); + void __iomem *regs = smchip->regbase; + unsigned long save; + unsigned long val; +@@ -946,7 +946,7 @@ static int sm501_gpio_input(struct gpio_chip *chip, unsigned offset) + struct sm501_gpio_chip *smchip = gpiochip_get_data(chip); + struct sm501_gpio *smgpio = smchip->ourgpio; + void __iomem *regs = smchip->regbase; +- unsigned long bit = 1 << offset; ++ unsigned long bit = BIT(offset); + unsigned long save; + unsigned long ddr; + +@@ -971,7 +971,7 @@ static int sm501_gpio_output(struct gpio_chip *chip, + { + struct sm501_gpio_chip *smchip = gpiochip_get_data(chip); + struct sm501_gpio *smgpio = smchip->ourgpio; +- unsigned long bit = 1 << offset; ++ unsigned long bit = BIT(offset); + void __iomem *regs = smchip->regbase; + unsigned long save; + unsigned long val; +diff --git a/drivers/mmc/host/atmel-mci.c b/drivers/mmc/host/atmel-mci.c +index 493ed8c8241951..7c9eddf08f7b9d 100644 +--- a/drivers/mmc/host/atmel-mci.c ++++ b/drivers/mmc/host/atmel-mci.c +@@ -2507,8 +2507,10 @@ static int atmci_probe(struct platform_device *pdev) + /* Get MCI capabilities and set operations according to it 
*/ + atmci_get_cap(host); + ret = atmci_configure_dma(host); +- if (ret == -EPROBE_DEFER) ++ if (ret == -EPROBE_DEFER) { ++ clk_disable_unprepare(host->mck); + goto err_dma_probe_defer; ++ } + if (ret == 0) { + host->prepare_data = &atmci_prepare_data_dma; + host->submit_data = &atmci_submit_data_dma; +diff --git a/drivers/mmc/host/sdhci-brcmstb.c b/drivers/mmc/host/sdhci-brcmstb.c +index 4d42b1810acea5..05b06fcc90bf67 100644 +--- a/drivers/mmc/host/sdhci-brcmstb.c ++++ b/drivers/mmc/host/sdhci-brcmstb.c +@@ -32,6 +32,8 @@ + struct sdhci_brcmstb_priv { + void __iomem *cfg_regs; + unsigned int flags; ++ struct clk *base_clk; ++ u32 base_freq_hz; + }; + + struct brcmstb_match_priv { +@@ -251,9 +253,11 @@ static int sdhci_brcmstb_probe(struct platform_device *pdev) + struct sdhci_pltfm_host *pltfm_host; + const struct of_device_id *match; + struct sdhci_brcmstb_priv *priv; ++ u32 actual_clock_mhz; + struct sdhci_host *host; + struct resource *iomem; + struct clk *clk; ++ struct clk *base_clk = NULL; + int res; + + match = of_match_node(sdhci_brcm_of_match, pdev->dev.of_node); +@@ -331,6 +335,35 @@ static int sdhci_brcmstb_probe(struct platform_device *pdev) + if (match_priv->flags & BRCMSTB_MATCH_FLAGS_BROKEN_TIMEOUT) + host->quirks |= SDHCI_QUIRK_BROKEN_TIMEOUT_VAL; + ++ /* Change the base clock frequency if the DT property exists */ ++ if (device_property_read_u32(&pdev->dev, "clock-frequency", ++ &priv->base_freq_hz) != 0) ++ goto add_host; ++ ++ base_clk = devm_clk_get_optional(&pdev->dev, "sdio_freq"); ++ if (IS_ERR(base_clk)) { ++ dev_warn(&pdev->dev, "Clock for \"sdio_freq\" not found\n"); ++ goto add_host; ++ } ++ ++ res = clk_prepare_enable(base_clk); ++ if (res) ++ goto err; ++ ++ /* set improved clock rate */ ++ clk_set_rate(base_clk, priv->base_freq_hz); ++ actual_clock_mhz = clk_get_rate(base_clk) / 1000000; ++ ++ host->caps &= ~SDHCI_CLOCK_V3_BASE_MASK; ++ host->caps |= (actual_clock_mhz << SDHCI_CLOCK_BASE_SHIFT); ++ /* Disable presets because they are 
now incorrect */ ++ host->quirks2 |= SDHCI_QUIRK2_PRESET_VALUE_BROKEN; ++ ++ dev_dbg(&pdev->dev, "Base Clock Frequency changed to %dMHz\n", ++ actual_clock_mhz); ++ priv->base_clk = base_clk; ++ ++add_host: + res = sdhci_brcmstb_add_host(host, priv); + if (res) + goto err; +@@ -341,6 +374,7 @@ static int sdhci_brcmstb_probe(struct platform_device *pdev) + err: + sdhci_pltfm_free(pdev); + err_clk: ++ clk_disable_unprepare(base_clk); + clk_disable_unprepare(clk); + return res; + } +@@ -352,11 +386,61 @@ static void sdhci_brcmstb_shutdown(struct platform_device *pdev) + + MODULE_DEVICE_TABLE(of, sdhci_brcm_of_match); + ++#ifdef CONFIG_PM_SLEEP ++static int sdhci_brcmstb_suspend(struct device *dev) ++{ ++ struct sdhci_host *host = dev_get_drvdata(dev); ++ struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host); ++ struct sdhci_brcmstb_priv *priv = sdhci_pltfm_priv(pltfm_host); ++ int ret; ++ ++ clk_disable_unprepare(priv->base_clk); ++ if (host->mmc->caps2 & MMC_CAP2_CQE) { ++ ret = cqhci_suspend(host->mmc); ++ if (ret) ++ return ret; ++ } ++ ++ return sdhci_pltfm_suspend(dev); ++} ++ ++static int sdhci_brcmstb_resume(struct device *dev) ++{ ++ struct sdhci_host *host = dev_get_drvdata(dev); ++ struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host); ++ struct sdhci_brcmstb_priv *priv = sdhci_pltfm_priv(pltfm_host); ++ int ret; ++ ++ ret = sdhci_pltfm_resume(dev); ++ if (!ret && priv->base_freq_hz) { ++ ret = clk_prepare_enable(priv->base_clk); ++ /* ++ * Note: using clk_get_rate() below as clk_get_rate() ++ * honors CLK_GET_RATE_NOCACHE attribute, but clk_set_rate() ++ * may do implicit get_rate() calls that do not honor ++ * CLK_GET_RATE_NOCACHE. 
++ */ ++ if (!ret && ++ (clk_get_rate(priv->base_clk) != priv->base_freq_hz)) ++ ret = clk_set_rate(priv->base_clk, priv->base_freq_hz); ++ } ++ ++ if (host->mmc->caps2 & MMC_CAP2_CQE) ++ ret = cqhci_resume(host->mmc); ++ ++ return ret; ++} ++#endif ++ ++static const struct dev_pm_ops sdhci_brcmstb_pmops = { ++ SET_SYSTEM_SLEEP_PM_OPS(sdhci_brcmstb_suspend, sdhci_brcmstb_resume) ++}; ++ + static struct platform_driver sdhci_brcmstb_driver = { + .driver = { + .name = "sdhci-brcmstb", + .probe_type = PROBE_PREFER_ASYNCHRONOUS, +- .pm = &sdhci_pltfm_pmops, ++ .pm = &sdhci_brcmstb_pmops, + .of_match_table = of_match_ptr(sdhci_brcm_of_match), + }, + .probe = sdhci_brcmstb_probe, +diff --git a/drivers/mmc/host/sdhci-pxav3.c b/drivers/mmc/host/sdhci-pxav3.c +index a6d89a3f194651..e59bfd31630596 100644 +--- a/drivers/mmc/host/sdhci-pxav3.c ++++ b/drivers/mmc/host/sdhci-pxav3.c +@@ -401,6 +401,7 @@ static int sdhci_pxav3_probe(struct platform_device *pdev) + if (!IS_ERR(pxa->clk_core)) + clk_prepare_enable(pxa->clk_core); + ++ host->mmc->caps |= MMC_CAP_NEED_RSP_BUSY; + /* enable 1/8V DDR capable */ + host->mmc->caps |= MMC_CAP_1_8V_DDR; + +diff --git a/drivers/net/arcnet/com20020-pci.c b/drivers/net/arcnet/com20020-pci.c +index 9d9e4200064f9d..00a80f0adece45 100644 +--- a/drivers/net/arcnet/com20020-pci.c ++++ b/drivers/net/arcnet/com20020-pci.c +@@ -250,18 +250,33 @@ static int com20020pci_probe(struct pci_dev *pdev, + card->tx_led.default_trigger = devm_kasprintf(&pdev->dev, + GFP_KERNEL, "arc%d-%d-tx", + dev->dev_id, i); ++ if (!card->tx_led.default_trigger) { ++ ret = -ENOMEM; ++ goto err_free_arcdev; ++ } + card->tx_led.name = devm_kasprintf(&pdev->dev, GFP_KERNEL, + "pci:green:tx:%d-%d", + dev->dev_id, i); +- ++ if (!card->tx_led.name) { ++ ret = -ENOMEM; ++ goto err_free_arcdev; ++ } + card->tx_led.dev = &dev->dev; + card->recon_led.brightness_set = led_recon_set; + card->recon_led.default_trigger = devm_kasprintf(&pdev->dev, + GFP_KERNEL, "arc%d-%d-recon", + 
dev->dev_id, i); ++ if (!card->recon_led.default_trigger) { ++ ret = -ENOMEM; ++ goto err_free_arcdev; ++ } + card->recon_led.name = devm_kasprintf(&pdev->dev, GFP_KERNEL, + "pci:red:recon:%d-%d", + dev->dev_id, i); ++ if (!card->recon_led.name) { ++ ret = -ENOMEM; ++ goto err_free_arcdev; ++ } + card->recon_led.dev = &dev->dev; + + ret = devm_led_classdev_register(&pdev->dev, &card->tx_led); +diff --git a/drivers/net/can/flexcan.c b/drivers/net/can/flexcan.c +index 837bca7347594c..6f4e3e0330a45b 100644 +--- a/drivers/net/can/flexcan.c ++++ b/drivers/net/can/flexcan.c +@@ -2337,14 +2337,19 @@ static int __maybe_unused flexcan_suspend(struct device *device) + + flexcan_chip_interrupts_disable(dev); + ++ err = flexcan_transceiver_disable(priv); ++ if (err) ++ return err; ++ + err = pinctrl_pm_select_sleep_state(device); + if (err) + return err; + } + netif_stop_queue(dev); + netif_device_detach(dev); ++ ++ priv->can.state = CAN_STATE_SLEEPING; + } +- priv->can.state = CAN_STATE_SLEEPING; + + return 0; + } +@@ -2355,7 +2360,6 @@ static int __maybe_unused flexcan_resume(struct device *device) + struct flexcan_priv *priv = netdev_priv(dev); + int err; + +- priv->can.state = CAN_STATE_ERROR_ACTIVE; + if (netif_running(dev)) { + netif_device_attach(dev); + netif_start_queue(dev); +@@ -2369,12 +2373,20 @@ static int __maybe_unused flexcan_resume(struct device *device) + if (err) + return err; + +- err = flexcan_chip_start(dev); ++ err = flexcan_transceiver_enable(priv); + if (err) + return err; + ++ err = flexcan_chip_start(dev); ++ if (err) { ++ flexcan_transceiver_disable(priv); ++ return err; ++ } ++ + flexcan_chip_interrupts_enable(dev); + } ++ ++ priv->can.state = CAN_STATE_ERROR_ACTIVE; + } + + return 0; +diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c +index 2a55ecceab8c60..07a3f12e02dd1e 100644 +--- a/drivers/net/dsa/mv88e6xxx/chip.c ++++ b/drivers/net/dsa/mv88e6xxx/chip.c +@@ -1775,13 +1775,11 @@ mv88e6xxx_port_vlan_prepare(struct 
dsa_switch *ds, int port, + return err; + } + +-static int mv88e6xxx_port_db_load_purge(struct mv88e6xxx_chip *chip, int port, +- const unsigned char *addr, u16 vid, +- u8 state) ++static int mv88e6xxx_port_db_get(struct mv88e6xxx_chip *chip, ++ const unsigned char *addr, u16 vid, ++ u16 *fid, struct mv88e6xxx_atu_entry *entry) + { +- struct mv88e6xxx_atu_entry entry; + struct mv88e6xxx_vtu_entry vlan; +- u16 fid; + int err; + + /* Ports have two private address databases: one for when the port is +@@ -1792,7 +1790,7 @@ static int mv88e6xxx_port_db_load_purge(struct mv88e6xxx_chip *chip, int port, + * VLAN ID into the port's database used for VLAN-unaware bridging. + */ + if (vid == 0) { +- fid = MV88E6XXX_FID_BRIDGED; ++ *fid = MV88E6XXX_FID_BRIDGED; + } else { + err = mv88e6xxx_vtu_get(chip, vid, &vlan); + if (err) +@@ -1802,14 +1800,39 @@ static int mv88e6xxx_port_db_load_purge(struct mv88e6xxx_chip *chip, int port, + if (!vlan.valid) + return -EOPNOTSUPP; + +- fid = vlan.fid; ++ *fid = vlan.fid; + } + +- entry.state = 0; +- ether_addr_copy(entry.mac, addr); +- eth_addr_dec(entry.mac); ++ entry->state = 0; ++ ether_addr_copy(entry->mac, addr); ++ eth_addr_dec(entry->mac); ++ ++ return mv88e6xxx_g1_atu_getnext(chip, *fid, entry); ++} ++ ++static bool mv88e6xxx_port_db_find(struct mv88e6xxx_chip *chip, ++ const unsigned char *addr, u16 vid) ++{ ++ struct mv88e6xxx_atu_entry entry; ++ u16 fid; ++ int err; + +- err = mv88e6xxx_g1_atu_getnext(chip, fid, &entry); ++ err = mv88e6xxx_port_db_get(chip, addr, vid, &fid, &entry); ++ if (err) ++ return false; ++ ++ return entry.state && ether_addr_equal(entry.mac, addr); ++} ++ ++static int mv88e6xxx_port_db_load_purge(struct mv88e6xxx_chip *chip, int port, ++ const unsigned char *addr, u16 vid, ++ u8 state) ++{ ++ struct mv88e6xxx_atu_entry entry; ++ u16 fid; ++ int err; ++ ++ err = mv88e6xxx_port_db_get(chip, addr, vid, &fid, &entry); + if (err) + return err; + +@@ -2324,6 +2347,13 @@ static int 
mv88e6xxx_port_fdb_add(struct dsa_switch *ds, int port, + mv88e6xxx_reg_lock(chip); + err = mv88e6xxx_port_db_load_purge(chip, port, addr, vid, + MV88E6XXX_G1_ATU_DATA_STATE_UC_STATIC); ++ if (err) ++ goto out; ++ ++ if (!mv88e6xxx_port_db_find(chip, addr, vid)) ++ err = -ENOSPC; ++ ++out: + mv88e6xxx_reg_unlock(chip); + + return err; +@@ -5878,6 +5908,13 @@ static int mv88e6xxx_port_mdb_add(struct dsa_switch *ds, int port, + mv88e6xxx_reg_lock(chip); + err = mv88e6xxx_port_db_load_purge(chip, port, mdb->addr, mdb->vid, + MV88E6XXX_G1_ATU_DATA_STATE_MC_STATIC); ++ if (err) ++ goto out; ++ ++ if (!mv88e6xxx_port_db_find(chip, mdb->addr, mdb->vid)) ++ err = -ENOSPC; ++ ++out: + mv88e6xxx_reg_unlock(chip); + + return err; +diff --git a/drivers/net/ethernet/intel/ice/ice_arfs.c b/drivers/net/ethernet/intel/ice/ice_arfs.c +index 88d98c9e5f914d..9cebae92364ebb 100644 +--- a/drivers/net/ethernet/intel/ice/ice_arfs.c ++++ b/drivers/net/ethernet/intel/ice/ice_arfs.c +@@ -510,7 +510,7 @@ void ice_init_arfs(struct ice_vsi *vsi) + struct hlist_head *arfs_fltr_list; + unsigned int i; + +- if (!vsi || vsi->type != ICE_VSI_PF) ++ if (!vsi || vsi->type != ICE_VSI_PF || ice_is_arfs_active(vsi)) + return; + + arfs_fltr_list = kzalloc(sizeof(*arfs_fltr_list) * ICE_MAX_ARFS_LIST, +diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2.h b/drivers/net/ethernet/marvell/mvpp2/mvpp2.h +index 24a8c9b8126b74..8732134cb33c9d 100644 +--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2.h ++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2.h +@@ -1108,6 +1108,9 @@ struct mvpp2 { + + /* Spinlocks for CM3 shared memory configuration */ + spinlock_t mss_spinlock; ++ ++ /* Spinlock for shared PRS parser memory and shadow table */ ++ spinlock_t prs_spinlock; + }; + + struct mvpp2_pcpu_stats { +diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c +index 2a60f949d95323..7fa880e62d0961 100644 +--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c ++++ 
b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c +@@ -7589,8 +7589,9 @@ static int mvpp2_probe(struct platform_device *pdev) + if (mvpp2_read(priv, MVPP2_VER_ID_REG) == MVPP2_VER_PP23) + priv->hw_version = MVPP23; + +- /* Init mss lock */ ++ /* Init locks for shared packet processor resources */ + spin_lock_init(&priv->mss_spinlock); ++ spin_lock_init(&priv->prs_spinlock); + + /* Initialize network controller */ + err = mvpp2_init(pdev, priv); +diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c +index 9af22f497a40f5..93e978bdf303c4 100644 +--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c ++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c +@@ -23,6 +23,8 @@ static int mvpp2_prs_hw_write(struct mvpp2 *priv, struct mvpp2_prs_entry *pe) + { + int i; + ++ lockdep_assert_held(&priv->prs_spinlock); ++ + if (pe->index > MVPP2_PRS_TCAM_SRAM_SIZE - 1) + return -EINVAL; + +@@ -43,11 +45,13 @@ static int mvpp2_prs_hw_write(struct mvpp2 *priv, struct mvpp2_prs_entry *pe) + } + + /* Initialize tcam entry from hw */ +-int mvpp2_prs_init_from_hw(struct mvpp2 *priv, struct mvpp2_prs_entry *pe, +- int tid) ++static int __mvpp2_prs_init_from_hw(struct mvpp2 *priv, ++ struct mvpp2_prs_entry *pe, int tid) + { + int i; + ++ lockdep_assert_held(&priv->prs_spinlock); ++ + if (tid > MVPP2_PRS_TCAM_SRAM_SIZE - 1) + return -EINVAL; + +@@ -73,6 +77,18 @@ int mvpp2_prs_init_from_hw(struct mvpp2 *priv, struct mvpp2_prs_entry *pe, + return 0; + } + ++int mvpp2_prs_init_from_hw(struct mvpp2 *priv, struct mvpp2_prs_entry *pe, ++ int tid) ++{ ++ int err; ++ ++ spin_lock_bh(&priv->prs_spinlock); ++ err = __mvpp2_prs_init_from_hw(priv, pe, tid); ++ spin_unlock_bh(&priv->prs_spinlock); ++ ++ return err; ++} ++ + /* Invalidate tcam hw entry */ + static void mvpp2_prs_hw_inv(struct mvpp2 *priv, int index) + { +@@ -374,7 +390,7 @@ static int mvpp2_prs_flow_find(struct mvpp2 *priv, int flow) + priv->prs_shadow[tid].lu != MVPP2_PRS_LU_FLOWS) 
+ continue; + +- mvpp2_prs_init_from_hw(priv, &pe, tid); ++ __mvpp2_prs_init_from_hw(priv, &pe, tid); + bits = mvpp2_prs_sram_ai_get(&pe); + + /* Sram store classification lookup ID in AI bits [5:0] */ +@@ -441,7 +457,7 @@ static void mvpp2_prs_mac_drop_all_set(struct mvpp2 *priv, int port, bool add) + + if (priv->prs_shadow[MVPP2_PE_DROP_ALL].valid) { + /* Entry exist - update port only */ +- mvpp2_prs_init_from_hw(priv, &pe, MVPP2_PE_DROP_ALL); ++ __mvpp2_prs_init_from_hw(priv, &pe, MVPP2_PE_DROP_ALL); + } else { + /* Entry doesn't exist - create new */ + memset(&pe, 0, sizeof(pe)); +@@ -469,14 +485,17 @@ static void mvpp2_prs_mac_drop_all_set(struct mvpp2 *priv, int port, bool add) + } + + /* Set port to unicast or multicast promiscuous mode */ +-void mvpp2_prs_mac_promisc_set(struct mvpp2 *priv, int port, +- enum mvpp2_prs_l2_cast l2_cast, bool add) ++static void __mvpp2_prs_mac_promisc_set(struct mvpp2 *priv, int port, ++ enum mvpp2_prs_l2_cast l2_cast, ++ bool add) + { + struct mvpp2_prs_entry pe; + unsigned char cast_match; + unsigned int ri; + int tid; + ++ lockdep_assert_held(&priv->prs_spinlock); ++ + if (l2_cast == MVPP2_PRS_L2_UNI_CAST) { + cast_match = MVPP2_PRS_UCAST_VAL; + tid = MVPP2_PE_MAC_UC_PROMISCUOUS; +@@ -489,7 +508,7 @@ void mvpp2_prs_mac_promisc_set(struct mvpp2 *priv, int port, + + /* promiscuous mode - Accept unknown unicast or multicast packets */ + if (priv->prs_shadow[tid].valid) { +- mvpp2_prs_init_from_hw(priv, &pe, tid); ++ __mvpp2_prs_init_from_hw(priv, &pe, tid); + } else { + memset(&pe, 0, sizeof(pe)); + mvpp2_prs_tcam_lu_set(&pe, MVPP2_PRS_LU_MAC); +@@ -522,6 +541,14 @@ void mvpp2_prs_mac_promisc_set(struct mvpp2 *priv, int port, + mvpp2_prs_hw_write(priv, &pe); + } + ++void mvpp2_prs_mac_promisc_set(struct mvpp2 *priv, int port, ++ enum mvpp2_prs_l2_cast l2_cast, bool add) ++{ ++ spin_lock_bh(&priv->prs_spinlock); ++ __mvpp2_prs_mac_promisc_set(priv, port, l2_cast, add); ++ spin_unlock_bh(&priv->prs_spinlock); ++} ++ + /* Set 
entry for dsa packets */ + static void mvpp2_prs_dsa_tag_set(struct mvpp2 *priv, int port, bool add, + bool tagged, bool extend) +@@ -539,7 +566,7 @@ static void mvpp2_prs_dsa_tag_set(struct mvpp2 *priv, int port, bool add, + + if (priv->prs_shadow[tid].valid) { + /* Entry exist - update port only */ +- mvpp2_prs_init_from_hw(priv, &pe, tid); ++ __mvpp2_prs_init_from_hw(priv, &pe, tid); + } else { + /* Entry doesn't exist - create new */ + memset(&pe, 0, sizeof(pe)); +@@ -610,7 +637,7 @@ static void mvpp2_prs_dsa_tag_ethertype_set(struct mvpp2 *priv, int port, + + if (priv->prs_shadow[tid].valid) { + /* Entry exist - update port only */ +- mvpp2_prs_init_from_hw(priv, &pe, tid); ++ __mvpp2_prs_init_from_hw(priv, &pe, tid); + } else { + /* Entry doesn't exist - create new */ + memset(&pe, 0, sizeof(pe)); +@@ -673,7 +700,7 @@ static int mvpp2_prs_vlan_find(struct mvpp2 *priv, unsigned short tpid, int ai) + priv->prs_shadow[tid].lu != MVPP2_PRS_LU_VLAN) + continue; + +- mvpp2_prs_init_from_hw(priv, &pe, tid); ++ __mvpp2_prs_init_from_hw(priv, &pe, tid); + match = mvpp2_prs_tcam_data_cmp(&pe, 0, tpid); + if (!match) + continue; +@@ -726,7 +753,7 @@ static int mvpp2_prs_vlan_add(struct mvpp2 *priv, unsigned short tpid, int ai, + priv->prs_shadow[tid_aux].lu != MVPP2_PRS_LU_VLAN) + continue; + +- mvpp2_prs_init_from_hw(priv, &pe, tid_aux); ++ __mvpp2_prs_init_from_hw(priv, &pe, tid_aux); + ri_bits = mvpp2_prs_sram_ri_get(&pe); + if ((ri_bits & MVPP2_PRS_RI_VLAN_MASK) == + MVPP2_PRS_RI_VLAN_DOUBLE) +@@ -760,7 +787,7 @@ static int mvpp2_prs_vlan_add(struct mvpp2 *priv, unsigned short tpid, int ai, + + mvpp2_prs_shadow_set(priv, pe.index, MVPP2_PRS_LU_VLAN); + } else { +- mvpp2_prs_init_from_hw(priv, &pe, tid); ++ __mvpp2_prs_init_from_hw(priv, &pe, tid); + } + /* Update ports' mask */ + mvpp2_prs_tcam_port_map_set(&pe, port_map); +@@ -800,7 +827,7 @@ static int mvpp2_prs_double_vlan_find(struct mvpp2 *priv, unsigned short tpid1, + priv->prs_shadow[tid].lu != 
MVPP2_PRS_LU_VLAN) + continue; + +- mvpp2_prs_init_from_hw(priv, &pe, tid); ++ __mvpp2_prs_init_from_hw(priv, &pe, tid); + + match = mvpp2_prs_tcam_data_cmp(&pe, 0, tpid1) && + mvpp2_prs_tcam_data_cmp(&pe, 4, tpid2); +@@ -849,7 +876,7 @@ static int mvpp2_prs_double_vlan_add(struct mvpp2 *priv, unsigned short tpid1, + priv->prs_shadow[tid_aux].lu != MVPP2_PRS_LU_VLAN) + continue; + +- mvpp2_prs_init_from_hw(priv, &pe, tid_aux); ++ __mvpp2_prs_init_from_hw(priv, &pe, tid_aux); + ri_bits = mvpp2_prs_sram_ri_get(&pe); + ri_bits &= MVPP2_PRS_RI_VLAN_MASK; + if (ri_bits == MVPP2_PRS_RI_VLAN_SINGLE || +@@ -880,7 +907,7 @@ static int mvpp2_prs_double_vlan_add(struct mvpp2 *priv, unsigned short tpid1, + + mvpp2_prs_shadow_set(priv, pe.index, MVPP2_PRS_LU_VLAN); + } else { +- mvpp2_prs_init_from_hw(priv, &pe, tid); ++ __mvpp2_prs_init_from_hw(priv, &pe, tid); + } + + /* Update ports' mask */ +@@ -1213,8 +1240,8 @@ static void mvpp2_prs_mac_init(struct mvpp2 *priv) + /* Create dummy entries for drop all and promiscuous modes */ + mvpp2_prs_drop_fc(priv); + mvpp2_prs_mac_drop_all_set(priv, 0, false); +- mvpp2_prs_mac_promisc_set(priv, 0, MVPP2_PRS_L2_UNI_CAST, false); +- mvpp2_prs_mac_promisc_set(priv, 0, MVPP2_PRS_L2_MULTI_CAST, false); ++ __mvpp2_prs_mac_promisc_set(priv, 0, MVPP2_PRS_L2_UNI_CAST, false); ++ __mvpp2_prs_mac_promisc_set(priv, 0, MVPP2_PRS_L2_MULTI_CAST, false); + } + + /* Set default entries for various types of dsa packets */ +@@ -1533,12 +1560,6 @@ static int mvpp2_prs_vlan_init(struct platform_device *pdev, struct mvpp2 *priv) + struct mvpp2_prs_entry pe; + int err; + +- priv->prs_double_vlans = devm_kcalloc(&pdev->dev, sizeof(bool), +- MVPP2_PRS_DBL_VLANS_MAX, +- GFP_KERNEL); +- if (!priv->prs_double_vlans) +- return -ENOMEM; +- + /* Double VLAN: 0x88A8, 0x8100 */ + err = mvpp2_prs_double_vlan_add(priv, ETH_P_8021AD, ETH_P_8021Q, + MVPP2_PRS_PORT_MASK); +@@ -1941,7 +1962,7 @@ static int mvpp2_prs_vid_range_find(struct mvpp2_port *port, u16 vid, u16 mask) 
+ port->priv->prs_shadow[tid].lu != MVPP2_PRS_LU_VID) + continue; + +- mvpp2_prs_init_from_hw(port->priv, &pe, tid); ++ __mvpp2_prs_init_from_hw(port->priv, &pe, tid); + + mvpp2_prs_tcam_data_byte_get(&pe, 2, &byte[0], &enable[0]); + mvpp2_prs_tcam_data_byte_get(&pe, 3, &byte[1], &enable[1]); +@@ -1970,6 +1991,8 @@ int mvpp2_prs_vid_entry_add(struct mvpp2_port *port, u16 vid) + + memset(&pe, 0, sizeof(pe)); + ++ spin_lock_bh(&priv->prs_spinlock); ++ + /* Scan TCAM and see if entry with this already exist */ + tid = mvpp2_prs_vid_range_find(port, vid, mask); + +@@ -1988,8 +2011,10 @@ int mvpp2_prs_vid_entry_add(struct mvpp2_port *port, u16 vid) + MVPP2_PRS_VLAN_FILT_MAX_ENTRY); + + /* There isn't room for a new VID filter */ +- if (tid < 0) ++ if (tid < 0) { ++ spin_unlock_bh(&priv->prs_spinlock); + return tid; ++ } + + mvpp2_prs_tcam_lu_set(&pe, MVPP2_PRS_LU_VID); + pe.index = tid; +@@ -1997,7 +2022,7 @@ int mvpp2_prs_vid_entry_add(struct mvpp2_port *port, u16 vid) + /* Mask all ports */ + mvpp2_prs_tcam_port_map_set(&pe, 0); + } else { +- mvpp2_prs_init_from_hw(priv, &pe, tid); ++ __mvpp2_prs_init_from_hw(priv, &pe, tid); + } + + /* Enable the current port */ +@@ -2019,6 +2044,7 @@ int mvpp2_prs_vid_entry_add(struct mvpp2_port *port, u16 vid) + mvpp2_prs_shadow_set(priv, pe.index, MVPP2_PRS_LU_VID); + mvpp2_prs_hw_write(priv, &pe); + ++ spin_unlock_bh(&priv->prs_spinlock); + return 0; + } + +@@ -2028,15 +2054,16 @@ void mvpp2_prs_vid_entry_remove(struct mvpp2_port *port, u16 vid) + struct mvpp2 *priv = port->priv; + int tid; + +- /* Scan TCAM and see if entry with this already exist */ +- tid = mvpp2_prs_vid_range_find(port, vid, 0xfff); ++ spin_lock_bh(&priv->prs_spinlock); + +- /* No such entry */ +- if (tid < 0) +- return; ++ /* Invalidate TCAM entry with this , if it exists */ ++ tid = mvpp2_prs_vid_range_find(port, vid, 0xfff); ++ if (tid >= 0) { ++ mvpp2_prs_hw_inv(priv, tid); ++ priv->prs_shadow[tid].valid = false; ++ } + +- mvpp2_prs_hw_inv(priv, tid); +- 
priv->prs_shadow[tid].valid = false; ++ spin_unlock_bh(&priv->prs_spinlock); + } + + /* Remove all existing VID filters on this port */ +@@ -2045,6 +2072,8 @@ void mvpp2_prs_vid_remove_all(struct mvpp2_port *port) + struct mvpp2 *priv = port->priv; + int tid; + ++ spin_lock_bh(&priv->prs_spinlock); ++ + for (tid = MVPP2_PRS_VID_PORT_FIRST(port->id); + tid <= MVPP2_PRS_VID_PORT_LAST(port->id); tid++) { + if (priv->prs_shadow[tid].valid) { +@@ -2052,6 +2081,8 @@ void mvpp2_prs_vid_remove_all(struct mvpp2_port *port) + priv->prs_shadow[tid].valid = false; + } + } ++ ++ spin_unlock_bh(&priv->prs_spinlock); + } + + /* Remove VID filering entry for this port */ +@@ -2060,10 +2091,14 @@ void mvpp2_prs_vid_disable_filtering(struct mvpp2_port *port) + unsigned int tid = MVPP2_PRS_VID_PORT_DFLT(port->id); + struct mvpp2 *priv = port->priv; + ++ spin_lock_bh(&priv->prs_spinlock); ++ + /* Invalidate the guard entry */ + mvpp2_prs_hw_inv(priv, tid); + + priv->prs_shadow[tid].valid = false; ++ ++ spin_unlock_bh(&priv->prs_spinlock); + } + + /* Add guard entry that drops packets when no VID is matched on this port */ +@@ -2079,6 +2114,8 @@ void mvpp2_prs_vid_enable_filtering(struct mvpp2_port *port) + + memset(&pe, 0, sizeof(pe)); + ++ spin_lock_bh(&priv->prs_spinlock); ++ + pe.index = tid; + + reg_val = mvpp2_read(priv, MVPP2_MH_REG(port->id)); +@@ -2111,6 +2148,8 @@ void mvpp2_prs_vid_enable_filtering(struct mvpp2_port *port) + /* Update shadow table */ + mvpp2_prs_shadow_set(priv, pe.index, MVPP2_PRS_LU_VID); + mvpp2_prs_hw_write(priv, &pe); ++ ++ spin_unlock_bh(&priv->prs_spinlock); + } + + /* Parser default initialization */ +@@ -2118,6 +2157,20 @@ int mvpp2_prs_default_init(struct platform_device *pdev, struct mvpp2 *priv) + { + int err, index, i; + ++ priv->prs_shadow = devm_kcalloc(&pdev->dev, MVPP2_PRS_TCAM_SRAM_SIZE, ++ sizeof(*priv->prs_shadow), ++ GFP_KERNEL); ++ if (!priv->prs_shadow) ++ return -ENOMEM; ++ ++ priv->prs_double_vlans = devm_kcalloc(&pdev->dev, 
sizeof(bool), ++ MVPP2_PRS_DBL_VLANS_MAX, ++ GFP_KERNEL); ++ if (!priv->prs_double_vlans) ++ return -ENOMEM; ++ ++ spin_lock_bh(&priv->prs_spinlock); ++ + /* Enable tcam table */ + mvpp2_write(priv, MVPP2_PRS_TCAM_CTRL_REG, MVPP2_PRS_TCAM_EN_MASK); + +@@ -2136,12 +2189,6 @@ int mvpp2_prs_default_init(struct platform_device *pdev, struct mvpp2 *priv) + for (index = 0; index < MVPP2_PRS_TCAM_SRAM_SIZE; index++) + mvpp2_prs_hw_inv(priv, index); + +- priv->prs_shadow = devm_kcalloc(&pdev->dev, MVPP2_PRS_TCAM_SRAM_SIZE, +- sizeof(*priv->prs_shadow), +- GFP_KERNEL); +- if (!priv->prs_shadow) +- return -ENOMEM; +- + /* Always start from lookup = 0 */ + for (index = 0; index < MVPP2_MAX_PORTS; index++) + mvpp2_prs_hw_port_init(priv, index, MVPP2_PRS_LU_MH, +@@ -2158,26 +2205,13 @@ int mvpp2_prs_default_init(struct platform_device *pdev, struct mvpp2 *priv) + mvpp2_prs_vid_init(priv); + + err = mvpp2_prs_etype_init(priv); +- if (err) +- return err; +- +- err = mvpp2_prs_vlan_init(pdev, priv); +- if (err) +- return err; +- +- err = mvpp2_prs_pppoe_init(priv); +- if (err) +- return err; +- +- err = mvpp2_prs_ip6_init(priv); +- if (err) +- return err; +- +- err = mvpp2_prs_ip4_init(priv); +- if (err) +- return err; ++ err = err ? : mvpp2_prs_vlan_init(pdev, priv); ++ err = err ? : mvpp2_prs_pppoe_init(priv); ++ err = err ? : mvpp2_prs_ip6_init(priv); ++ err = err ? 
: mvpp2_prs_ip4_init(priv); + +- return 0; ++ spin_unlock_bh(&priv->prs_spinlock); ++ return err; + } + + /* Compare MAC DA with tcam entry data */ +@@ -2217,7 +2251,7 @@ mvpp2_prs_mac_da_range_find(struct mvpp2 *priv, int pmap, const u8 *da, + (priv->prs_shadow[tid].udf != udf_type)) + continue; + +- mvpp2_prs_init_from_hw(priv, &pe, tid); ++ __mvpp2_prs_init_from_hw(priv, &pe, tid); + entry_pmap = mvpp2_prs_tcam_port_map_get(&pe); + + if (mvpp2_prs_mac_range_equals(&pe, da, mask) && +@@ -2229,7 +2263,8 @@ mvpp2_prs_mac_da_range_find(struct mvpp2 *priv, int pmap, const u8 *da, + } + + /* Update parser's mac da entry */ +-int mvpp2_prs_mac_da_accept(struct mvpp2_port *port, const u8 *da, bool add) ++static int __mvpp2_prs_mac_da_accept(struct mvpp2_port *port, ++ const u8 *da, bool add) + { + unsigned char mask[ETH_ALEN] = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff }; + struct mvpp2 *priv = port->priv; +@@ -2261,7 +2296,7 @@ int mvpp2_prs_mac_da_accept(struct mvpp2_port *port, const u8 *da, bool add) + /* Mask all ports */ + mvpp2_prs_tcam_port_map_set(&pe, 0); + } else { +- mvpp2_prs_init_from_hw(priv, &pe, tid); ++ __mvpp2_prs_init_from_hw(priv, &pe, tid); + } + + mvpp2_prs_tcam_lu_set(&pe, MVPP2_PRS_LU_MAC); +@@ -2317,6 +2352,17 @@ int mvpp2_prs_mac_da_accept(struct mvpp2_port *port, const u8 *da, bool add) + return 0; + } + ++int mvpp2_prs_mac_da_accept(struct mvpp2_port *port, const u8 *da, bool add) ++{ ++ int err; ++ ++ spin_lock_bh(&port->priv->prs_spinlock); ++ err = __mvpp2_prs_mac_da_accept(port, da, add); ++ spin_unlock_bh(&port->priv->prs_spinlock); ++ ++ return err; ++} ++ + int mvpp2_prs_update_mac_da(struct net_device *dev, const u8 *da) + { + struct mvpp2_port *port = netdev_priv(dev); +@@ -2345,6 +2391,8 @@ void mvpp2_prs_mac_del_all(struct mvpp2_port *port) + unsigned long pmap; + int index, tid; + ++ spin_lock_bh(&priv->prs_spinlock); ++ + for (tid = MVPP2_PE_MAC_RANGE_START; + tid <= MVPP2_PE_MAC_RANGE_END; tid++) { + unsigned char da[ETH_ALEN], 
da_mask[ETH_ALEN]; +@@ -2354,7 +2402,7 @@ void mvpp2_prs_mac_del_all(struct mvpp2_port *port) + (priv->prs_shadow[tid].udf != MVPP2_PRS_UDF_MAC_DEF)) + continue; + +- mvpp2_prs_init_from_hw(priv, &pe, tid); ++ __mvpp2_prs_init_from_hw(priv, &pe, tid); + + pmap = mvpp2_prs_tcam_port_map_get(&pe); + +@@ -2375,14 +2423,17 @@ void mvpp2_prs_mac_del_all(struct mvpp2_port *port) + continue; + + /* Remove entry from TCAM */ +- mvpp2_prs_mac_da_accept(port, da, false); ++ __mvpp2_prs_mac_da_accept(port, da, false); + } ++ ++ spin_unlock_bh(&priv->prs_spinlock); + } + + int mvpp2_prs_tag_mode_set(struct mvpp2 *priv, int port, int type) + { + switch (type) { + case MVPP2_TAG_TYPE_EDSA: ++ spin_lock_bh(&priv->prs_spinlock); + /* Add port to EDSA entries */ + mvpp2_prs_dsa_tag_set(priv, port, true, + MVPP2_PRS_TAGGED, MVPP2_PRS_EDSA); +@@ -2393,9 +2444,11 @@ int mvpp2_prs_tag_mode_set(struct mvpp2 *priv, int port, int type) + MVPP2_PRS_TAGGED, MVPP2_PRS_DSA); + mvpp2_prs_dsa_tag_set(priv, port, false, + MVPP2_PRS_UNTAGGED, MVPP2_PRS_DSA); ++ spin_unlock_bh(&priv->prs_spinlock); + break; + + case MVPP2_TAG_TYPE_DSA: ++ spin_lock_bh(&priv->prs_spinlock); + /* Add port to DSA entries */ + mvpp2_prs_dsa_tag_set(priv, port, true, + MVPP2_PRS_TAGGED, MVPP2_PRS_DSA); +@@ -2406,10 +2459,12 @@ int mvpp2_prs_tag_mode_set(struct mvpp2 *priv, int port, int type) + MVPP2_PRS_TAGGED, MVPP2_PRS_EDSA); + mvpp2_prs_dsa_tag_set(priv, port, false, + MVPP2_PRS_UNTAGGED, MVPP2_PRS_EDSA); ++ spin_unlock_bh(&priv->prs_spinlock); + break; + + case MVPP2_TAG_TYPE_MH: + case MVPP2_TAG_TYPE_NONE: ++ spin_lock_bh(&priv->prs_spinlock); + /* Remove port form EDSA and DSA entries */ + mvpp2_prs_dsa_tag_set(priv, port, false, + MVPP2_PRS_TAGGED, MVPP2_PRS_DSA); +@@ -2419,6 +2474,7 @@ int mvpp2_prs_tag_mode_set(struct mvpp2 *priv, int port, int type) + MVPP2_PRS_TAGGED, MVPP2_PRS_EDSA); + mvpp2_prs_dsa_tag_set(priv, port, false, + MVPP2_PRS_UNTAGGED, MVPP2_PRS_EDSA); ++ spin_unlock_bh(&priv->prs_spinlock); + 
break; + + default: +@@ -2437,11 +2493,15 @@ int mvpp2_prs_add_flow(struct mvpp2 *priv, int flow, u32 ri, u32 ri_mask) + + memset(&pe, 0, sizeof(pe)); + ++ spin_lock_bh(&priv->prs_spinlock); ++ + tid = mvpp2_prs_tcam_first_free(priv, + MVPP2_PE_LAST_FREE_TID, + MVPP2_PE_FIRST_FREE_TID); +- if (tid < 0) ++ if (tid < 0) { ++ spin_unlock_bh(&priv->prs_spinlock); + return tid; ++ } + + pe.index = tid; + +@@ -2461,6 +2521,7 @@ int mvpp2_prs_add_flow(struct mvpp2 *priv, int flow, u32 ri, u32 ri_mask) + mvpp2_prs_tcam_port_map_set(&pe, MVPP2_PRS_PORT_MASK); + mvpp2_prs_hw_write(priv, &pe); + ++ spin_unlock_bh(&priv->prs_spinlock); + return 0; + } + +@@ -2472,6 +2533,8 @@ int mvpp2_prs_def_flow(struct mvpp2_port *port) + + memset(&pe, 0, sizeof(pe)); + ++ spin_lock_bh(&port->priv->prs_spinlock); ++ + tid = mvpp2_prs_flow_find(port->priv, port->id); + + /* Such entry not exist */ +@@ -2480,8 +2543,10 @@ int mvpp2_prs_def_flow(struct mvpp2_port *port) + tid = mvpp2_prs_tcam_first_free(port->priv, + MVPP2_PE_LAST_FREE_TID, + MVPP2_PE_FIRST_FREE_TID); +- if (tid < 0) ++ if (tid < 0) { ++ spin_unlock_bh(&port->priv->prs_spinlock); + return tid; ++ } + + pe.index = tid; + +@@ -2492,13 +2557,14 @@ int mvpp2_prs_def_flow(struct mvpp2_port *port) + /* Update shadow table */ + mvpp2_prs_shadow_set(port->priv, pe.index, MVPP2_PRS_LU_FLOWS); + } else { +- mvpp2_prs_init_from_hw(port->priv, &pe, tid); ++ __mvpp2_prs_init_from_hw(port->priv, &pe, tid); + } + + mvpp2_prs_tcam_lu_set(&pe, MVPP2_PRS_LU_FLOWS); + mvpp2_prs_tcam_port_map_set(&pe, (1 << port->id)); + mvpp2_prs_hw_write(port->priv, &pe); + ++ spin_unlock_bh(&port->priv->prs_spinlock); + return 0; + } + +@@ -2509,11 +2575,14 @@ int mvpp2_prs_hits(struct mvpp2 *priv, int index) + if (index > MVPP2_PRS_TCAM_SRAM_SIZE) + return -EINVAL; + ++ spin_lock_bh(&priv->prs_spinlock); ++ + mvpp2_write(priv, MVPP2_PRS_TCAM_HIT_IDX_REG, index); + + val = mvpp2_read(priv, MVPP2_PRS_TCAM_HIT_CNT_REG); + + val &= MVPP2_PRS_TCAM_HIT_CNT_MASK; + 
++ spin_unlock_bh(&priv->prs_spinlock); + return val; + } +diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c +index bc8187e3f3393d..0863fa06c06d16 100644 +--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c ++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c +@@ -2469,7 +2469,7 @@ static irqreturn_t rvu_mbox_intr_handler(int irq, void *rvu_irq) + rvupf_write64(rvu, RVU_PF_VFPF_MBOX_INTX(1), intr); + + rvu_queue_work(&rvu->afvf_wq_info, 64, vfs, intr); +- vfs -= 64; ++ vfs = 64; + } + + intr = rvupf_read64(rvu, RVU_PF_VFPF_MBOX_INTX(0)); +diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_devlink.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_devlink.c +index 40fbda152533b6..c5e3ef6b41a87a 100644 +--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_devlink.c ++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_devlink.c +@@ -216,7 +216,7 @@ static void rvu_nix_unregister_interrupts(struct rvu *rvu) + rvu->irq_allocated[offs + NIX_AF_INT_VEC_RVU] = false; + } + +- for (i = NIX_AF_INT_VEC_AF_ERR; i < NIX_AF_INT_VEC_CNT; i++) ++ for (i = NIX_AF_INT_VEC_GEN; i < NIX_AF_INT_VEC_CNT; i++) + if (rvu->irq_allocated[offs + i]) { + free_irq(pci_irq_vector(rvu->pdev, offs + i), rvu_dl); + rvu->irq_allocated[offs + i] = false; +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bridge.c b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bridge.c +index 291bd59639044e..28c3667e323f5b 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bridge.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bridge.c +@@ -48,15 +48,10 @@ mlx5_esw_bridge_lag_rep_get(struct net_device *dev, struct mlx5_eswitch *esw) + struct list_head *iter; + + netdev_for_each_lower_dev(dev, lower, iter) { +- struct mlx5_core_dev *mdev; +- struct mlx5e_priv *priv; +- + if (!mlx5e_eswitch_rep(lower)) + continue; + +- priv = netdev_priv(lower); +- mdev = priv->mdev; +- if (mlx5_lag_is_shared_fdb(mdev) && 
mlx5_esw_bridge_dev_same_esw(lower, esw)) ++ if (mlx5_esw_bridge_dev_same_esw(lower, esw)) + return lower; + } + +@@ -121,7 +116,7 @@ static bool mlx5_esw_bridge_is_local(struct net_device *dev, struct net_device * + priv = netdev_priv(rep); + mdev = priv->mdev; + if (netif_is_lag_master(dev)) +- return mlx5_lag_is_shared_fdb(mdev) && mlx5_lag_is_master(mdev); ++ return mlx5_lag_is_master(mdev); + return true; + } + +@@ -430,6 +425,9 @@ static int mlx5_esw_bridge_switchdev_event(struct notifier_block *nb, + if (!rep) + return NOTIFY_DONE; + ++ if (netif_is_lag_master(dev) && !mlx5_lag_is_shared_fdb(esw->dev)) ++ return NOTIFY_DONE; ++ + switch (event) { + case SWITCHDEV_FDB_ADD_TO_BRIDGE: + fdb_info = container_of(info, +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c +index a0870da4145385..321441e6ad328d 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c +@@ -4186,11 +4186,9 @@ static int mlx5e_bridge_getlink(struct sk_buff *skb, u32 pid, u32 seq, + struct mlx5e_priv *priv = netdev_priv(dev); + struct mlx5_core_dev *mdev = priv->mdev; + u8 mode, setting; +- int err; + +- err = mlx5_eswitch_get_vepa(mdev->priv.eswitch, &setting); +- if (err) +- return err; ++ if (mlx5_eswitch_get_vepa(mdev->priv.eswitch, &setting)) ++ return -EOPNOTSUPP; + mode = setting ? 
BRIDGE_MODE_VEPA : BRIDGE_MODE_VEB; + return ndo_dflt_bridge_getlink(skb, pid, seq, dev, + mode, +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c +index df58cba37930ae..64c1071bece8dd 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c +@@ -196,6 +196,11 @@ mlx5_chains_create_table(struct mlx5_fs_chains *chains, + ns = mlx5_get_flow_namespace(chains->dev, chains->ns); + } + ++ if (!ns) { ++ mlx5_core_warn(chains->dev, "Failed to get flow namespace\n"); ++ return ERR_PTR(-EOPNOTSUPP); ++ } ++ + ft_attr.autogroup.num_reserved_entries = 2; + ft_attr.autogroup.max_num_groups = chains->group_num; + ft = mlx5_create_auto_grouped_flow_table(ns, &ft_attr); +diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_common.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_common.c +index df9b84f6600fe0..d7c93c409a7765 100644 +--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_common.c ++++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_common.c +@@ -454,8 +454,10 @@ static int qlcnic_sriov_set_guest_vlan_mode(struct qlcnic_adapter *adapter, + + num_vlans = sriov->num_allowed_vlans; + sriov->allowed_vlans = kcalloc(num_vlans, sizeof(u16), GFP_KERNEL); +- if (!sriov->allowed_vlans) ++ if (!sriov->allowed_vlans) { ++ qlcnic_sriov_free_vlans(adapter); + return -ENOMEM; ++ } + + vlans = (u16 *)&cmd->rsp.arg[3]; + for (i = 0; i < num_vlans; i++) +@@ -2168,8 +2170,10 @@ int qlcnic_sriov_alloc_vlans(struct qlcnic_adapter *adapter) + vf = &sriov->vf_info[i]; + vf->sriov_vlans = kcalloc(sriov->num_allowed_vlans, + sizeof(*vf->sriov_vlans), GFP_KERNEL); +- if (!vf->sriov_vlans) ++ if (!vf->sriov_vlans) { ++ qlcnic_sriov_free_vlans(adapter); + return -ENOMEM; ++ } + } + + return 0; +diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c +index 96cbc8a7ee9b21..600a190f221287 100644 +--- 
a/drivers/net/usb/qmi_wwan.c ++++ b/drivers/net/usb/qmi_wwan.c +@@ -1367,9 +1367,11 @@ static const struct usb_device_id products[] = { + {QMI_QUIRK_SET_DTR(0x1bc7, 0x10a0, 0)}, /* Telit FN920C04 */ + {QMI_QUIRK_SET_DTR(0x1bc7, 0x10a4, 0)}, /* Telit FN920C04 */ + {QMI_QUIRK_SET_DTR(0x1bc7, 0x10a9, 0)}, /* Telit FN920C04 */ ++ {QMI_QUIRK_SET_DTR(0x1bc7, 0x10b0, 0)}, /* Telit FE990B */ + {QMI_QUIRK_SET_DTR(0x1bc7, 0x10c0, 0)}, /* Telit FE910C04 */ + {QMI_QUIRK_SET_DTR(0x1bc7, 0x10c4, 0)}, /* Telit FE910C04 */ + {QMI_QUIRK_SET_DTR(0x1bc7, 0x10c8, 0)}, /* Telit FE910C04 */ ++ {QMI_QUIRK_SET_DTR(0x1bc7, 0x10d0, 0)}, /* Telit FN990B */ + {QMI_FIXED_INTF(0x1bc7, 0x1100, 3)}, /* Telit ME910 */ + {QMI_FIXED_INTF(0x1bc7, 0x1101, 3)}, /* Telit ME910 dual modem */ + {QMI_FIXED_INTF(0x1bc7, 0x1200, 5)}, /* Telit LE920 */ +diff --git a/drivers/net/usb/usbnet.c b/drivers/net/usb/usbnet.c +index f66975c452aa1b..c32e9714428425 100644 +--- a/drivers/net/usb/usbnet.c ++++ b/drivers/net/usb/usbnet.c +@@ -181,6 +181,17 @@ int usbnet_get_ethernet_addr(struct usbnet *dev, int iMACAddress) + } + EXPORT_SYMBOL_GPL(usbnet_get_ethernet_addr); + ++static bool usbnet_needs_usb_name_format(struct usbnet *dev, struct net_device *net) ++{ ++ /* Point to point devices which don't have a real MAC address ++ * (or report a fake local one) have historically used the usb%d ++ * naming. Preserve this.. 
++ */ ++ return (dev->driver_info->flags & FLAG_POINTTOPOINT) != 0 && ++ (is_zero_ether_addr(net->dev_addr) || ++ is_local_ether_addr(net->dev_addr)); ++} ++ + static void intr_complete (struct urb *urb) + { + struct usbnet *dev = urb->context; +@@ -522,7 +533,8 @@ static int rx_submit (struct usbnet *dev, struct urb *urb, gfp_t flags) + netif_device_present (dev->net) && + test_bit(EVENT_DEV_OPEN, &dev->flags) && + !test_bit (EVENT_RX_HALT, &dev->flags) && +- !test_bit (EVENT_DEV_ASLEEP, &dev->flags)) { ++ !test_bit (EVENT_DEV_ASLEEP, &dev->flags) && ++ !usbnet_going_away(dev)) { + switch (retval = usb_submit_urb (urb, GFP_ATOMIC)) { + case -EPIPE: + usbnet_defer_kevent (dev, EVENT_RX_HALT); +@@ -543,8 +555,7 @@ static int rx_submit (struct usbnet *dev, struct urb *urb, gfp_t flags) + tasklet_schedule (&dev->bh); + break; + case 0: +- if (!usbnet_going_away(dev)) +- __usbnet_queue_skb(&dev->rxq, skb, rx_start); ++ __usbnet_queue_skb(&dev->rxq, skb, rx_start); + } + } else { + netif_dbg(dev, ifdown, dev->net, "rx: stopped\n"); +@@ -1764,13 +1775,11 @@ usbnet_probe (struct usb_interface *udev, const struct usb_device_id *prod) + if (status < 0) + goto out1; + +- // heuristic: "usb%d" for links we know are two-host, +- // else "eth%d" when there's reasonable doubt. userspace +- // can rename the link if it knows better. 
++ /* heuristic: rename to "eth%d" if we are not sure this link ++ * is two-host (these links keep "usb%d") ++ */ + if ((dev->driver_info->flags & FLAG_ETHER) != 0 && +- ((dev->driver_info->flags & FLAG_POINTTOPOINT) == 0 || +- /* somebody touched it*/ +- !is_zero_ether_addr(net->dev_addr))) ++ !usbnet_needs_usb_name_format(dev, net)) + strscpy(net->name, "eth%d", sizeof(net->name)); + /* WLAN devices should always be named "wlan%d" */ + if ((dev->driver_info->flags & FLAG_WLAN) != 0) +diff --git a/drivers/net/wireless/intel/iwlwifi/fw/dbg.c b/drivers/net/wireless/intel/iwlwifi/fw/dbg.c +index fc630a0d9c83c9..f1d07ddb3f8317 100644 +--- a/drivers/net/wireless/intel/iwlwifi/fw/dbg.c ++++ b/drivers/net/wireless/intel/iwlwifi/fw/dbg.c +@@ -558,41 +558,71 @@ static void iwl_dump_prph(struct iwl_fw_runtime *fwrt, + } + + /* +- * alloc_sgtable - allocates scallerlist table in the given size, +- * fills it with pages and returns it ++ * alloc_sgtable - allocates (chained) scatterlist in the given size, ++ * fills it with pages and returns it + * @size: the size (in bytes) of the table +-*/ +-static struct scatterlist *alloc_sgtable(int size) ++ */ ++static struct scatterlist *alloc_sgtable(ssize_t size) + { +- int alloc_size, nents, i; +- struct page *new_page; +- struct scatterlist *iter; +- struct scatterlist *table; ++ struct scatterlist *result = NULL, *prev; ++ int nents, i, n_prev; + + nents = DIV_ROUND_UP(size, PAGE_SIZE); +- table = kcalloc(nents, sizeof(*table), GFP_KERNEL); +- if (!table) +- return NULL; +- sg_init_table(table, nents); +- iter = table; +- for_each_sg(table, iter, sg_nents(table), i) { +- new_page = alloc_page(GFP_KERNEL); +- if (!new_page) { +- /* release all previous allocated pages in the table */ +- iter = table; +- for_each_sg(table, iter, sg_nents(table), i) { +- new_page = sg_page(iter); +- if (new_page) +- __free_page(new_page); +- } +- kfree(table); ++ ++#define N_ENTRIES_PER_PAGE (PAGE_SIZE / sizeof(*result)) ++ /* ++ * We need an 
additional entry for table chaining, ++ * this ensures the loop can finish i.e. we can ++ * fit at least two entries per page (obviously, ++ * many more really fit.) ++ */ ++ BUILD_BUG_ON(N_ENTRIES_PER_PAGE < 2); ++ ++ while (nents > 0) { ++ struct scatterlist *new, *iter; ++ int n_fill, n_alloc; ++ ++ if (nents <= N_ENTRIES_PER_PAGE) { ++ /* last needed table */ ++ n_fill = nents; ++ n_alloc = nents; ++ nents = 0; ++ } else { ++ /* fill a page with entries */ ++ n_alloc = N_ENTRIES_PER_PAGE; ++ /* reserve one for chaining */ ++ n_fill = n_alloc - 1; ++ nents -= n_fill; ++ } ++ ++ new = kcalloc(n_alloc, sizeof(*new), GFP_KERNEL); ++ if (!new) { ++ if (result) ++ _devcd_free_sgtable(result); + return NULL; + } +- alloc_size = min_t(int, size, PAGE_SIZE); +- size -= PAGE_SIZE; +- sg_set_page(iter, new_page, alloc_size, 0); ++ sg_init_table(new, n_alloc); ++ ++ if (!result) ++ result = new; ++ else ++ sg_chain(prev, n_prev, new); ++ prev = new; ++ n_prev = n_alloc; ++ ++ for_each_sg(new, iter, n_fill, i) { ++ struct page *new_page = alloc_page(GFP_KERNEL); ++ ++ if (!new_page) { ++ _devcd_free_sgtable(result); ++ return NULL; ++ } ++ ++ sg_set_page(iter, new_page, PAGE_SIZE, 0); ++ } + } +- return table; ++ ++ return result; + } + + static void iwl_fw_get_prph_len(struct iwl_fw_runtime *fwrt, +diff --git a/drivers/net/wwan/mhi_wwan_mbim.c b/drivers/net/wwan/mhi_wwan_mbim.c +index ef70bb7c88ad61..43c20deab3189c 100644 +--- a/drivers/net/wwan/mhi_wwan_mbim.c ++++ b/drivers/net/wwan/mhi_wwan_mbim.c +@@ -209,7 +209,7 @@ static int mbim_rx_verify_nth16(struct mhi_mbim_context *mbim, struct sk_buff *s + if (mbim->rx_seq + 1 != le16_to_cpu(nth16->wSequence) && + (mbim->rx_seq || le16_to_cpu(nth16->wSequence)) && + !(mbim->rx_seq == 0xffff && !le16_to_cpu(nth16->wSequence))) { +- net_err_ratelimited("sequence number glitch prev=%d curr=%d\n", ++ net_dbg_ratelimited("sequence number glitch prev=%d curr=%d\n", + mbim->rx_seq, le16_to_cpu(nth16->wSequence)); + } + mbim->rx_seq = 
le16_to_cpu(nth16->wSequence); +diff --git a/drivers/ntb/hw/intel/ntb_hw_gen3.c b/drivers/ntb/hw/intel/ntb_hw_gen3.c +index ffcfc3e02c3532..a5aa96a31f4a64 100644 +--- a/drivers/ntb/hw/intel/ntb_hw_gen3.c ++++ b/drivers/ntb/hw/intel/ntb_hw_gen3.c +@@ -215,6 +215,9 @@ static int gen3_init_ntb(struct intel_ntb_dev *ndev) + } + + ndev->db_valid_mask = BIT_ULL(ndev->db_count) - 1; ++ /* Make sure we are not using DB's used for link status */ ++ if (ndev->hwerr_flags & NTB_HWERR_MSIX_VECTOR32_BAD) ++ ndev->db_valid_mask &= ~ndev->db_link_mask; + + ndev->reg->db_iowrite(ndev->db_valid_mask, + ndev->self_mmio + +diff --git a/drivers/ntb/hw/mscc/ntb_hw_switchtec.c b/drivers/ntb/hw/mscc/ntb_hw_switchtec.c +index 759248415b5c2e..c9351063aaf15b 100644 +--- a/drivers/ntb/hw/mscc/ntb_hw_switchtec.c ++++ b/drivers/ntb/hw/mscc/ntb_hw_switchtec.c +@@ -288,7 +288,7 @@ static int switchtec_ntb_mw_set_trans(struct ntb_dev *ntb, int pidx, int widx, + if (size != 0 && xlate_pos < 12) + return -EINVAL; + +- if (!IS_ALIGNED(addr, BIT_ULL(xlate_pos))) { ++ if (xlate_pos >= 0 && !IS_ALIGNED(addr, BIT_ULL(xlate_pos))) { + /* + * In certain circumstances we can get a buffer that is + * not aligned to its size. 
(Most of the time +diff --git a/drivers/ntb/test/ntb_perf.c b/drivers/ntb/test/ntb_perf.c +index 5a7a02408166ef..04690897a06769 100644 +--- a/drivers/ntb/test/ntb_perf.c ++++ b/drivers/ntb/test/ntb_perf.c +@@ -839,10 +839,8 @@ static int perf_copy_chunk(struct perf_thread *pthr, + dma_set_unmap(tx, unmap); + + ret = dma_submit_error(dmaengine_submit(tx)); +- if (ret) { +- dmaengine_unmap_put(unmap); ++ if (ret) + goto err_free_resource; +- } + + dmaengine_unmap_put(unmap); + +diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c +index 7f744aa4d120c6..6748532c776b86 100644 +--- a/drivers/nvme/host/core.c ++++ b/drivers/nvme/host/core.c +@@ -444,8 +444,6 @@ bool nvme_change_ctrl_state(struct nvme_ctrl *ctrl, + switch (new_state) { + case NVME_CTRL_LIVE: + switch (old_state) { +- case NVME_CTRL_NEW: +- case NVME_CTRL_RESETTING: + case NVME_CTRL_CONNECTING: + changed = true; + fallthrough; +diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c +index 8dfd317509aa6d..ebe8c2f147a33b 100644 +--- a/drivers/nvme/host/fc.c ++++ b/drivers/nvme/host/fc.c +@@ -3547,8 +3547,7 @@ nvme_fc_init_ctrl(struct device *dev, struct nvmf_ctrl_options *opts, + list_add_tail(&ctrl->ctrl_list, &rport->ctrl_list); + spin_unlock_irqrestore(&rport->lock, flags); + +- if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_RESETTING) || +- !nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_CONNECTING)) { ++ if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_CONNECTING)) { + dev_err(ctrl->ctrl.device, + "NVME-FC{%d}: failed to init ctrl state\n", ctrl->cnum); + goto fail_ctrl; +diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c +index 2eb692876f6959..a3c5af95e8f3ea 100644 +--- a/drivers/nvme/host/pci.c ++++ b/drivers/nvme/host/pci.c +@@ -1849,6 +1849,18 @@ static void nvme_map_cmb(struct nvme_dev *dev) + if (offset > bar_size) + return; + ++ /* ++ * Controllers may support a CMB size larger than their BAR, for ++ * example, due to being behind a bridge. 
Reduce the CMB to the ++ * reported size of the BAR ++ */ ++ size = min(size, bar_size - offset); ++ ++ if (!IS_ALIGNED(size, memremap_compat_align()) || ++ !IS_ALIGNED(pci_resource_start(pdev, bar), ++ memremap_compat_align())) ++ return; ++ + /* + * Tell the controller about the host side address mapping the CMB, + * and enable CMB decoding for the NVMe 1.4+ scheme: +@@ -1859,17 +1871,10 @@ static void nvme_map_cmb(struct nvme_dev *dev) + dev->bar + NVME_REG_CMBMSC); + } + +- /* +- * Controllers may support a CMB size larger than their BAR, +- * for example, due to being behind a bridge. Reduce the CMB to +- * the reported size of the BAR +- */ +- if (size > bar_size - offset) +- size = bar_size - offset; +- + if (pci_p2pdma_add_resource(pdev, bar, size, offset)) { + dev_warn(dev->ctrl.device, + "failed to register the CMB\n"); ++ hi_lo_writeq(0, dev->bar + NVME_REG_CMBMSC); + return; + } + +diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c +index 4ca7ef9416002c..0fc5aba88bc15b 100644 +--- a/drivers/nvme/host/tcp.c ++++ b/drivers/nvme/host/tcp.c +@@ -2491,6 +2491,7 @@ static int nvme_tcp_poll(struct blk_mq_hw_ctx *hctx) + { + struct nvme_tcp_queue *queue = hctx->driver_data; + struct sock *sk = queue->sock->sk; ++ int ret; + + if (!test_bit(NVME_TCP_Q_LIVE, &queue->flags)) + return 0; +@@ -2498,9 +2499,9 @@ static int nvme_tcp_poll(struct blk_mq_hw_ctx *hctx) + set_bit(NVME_TCP_Q_POLLING, &queue->flags); + if (sk_can_busy_loop(sk) && skb_queue_empty_lockless(&sk->sk_receive_queue)) + sk_busy_loop(sk, true); +- nvme_tcp_try_recv(queue); ++ ret = nvme_tcp_try_recv(queue); + clear_bit(NVME_TCP_Q_POLLING, &queue->flags); +- return queue->nr_cqe; ++ return ret < 0 ? 
ret : queue->nr_cqe; + } + + static const struct blk_mq_ops nvme_tcp_mq_ops = { +diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c +index 9561ba3d431388..3b4b2d134cf81a 100644 +--- a/drivers/nvme/target/rdma.c ++++ b/drivers/nvme/target/rdma.c +@@ -996,6 +996,27 @@ static void nvmet_rdma_handle_command(struct nvmet_rdma_queue *queue, + nvmet_req_complete(&cmd->req, status); + } + ++static bool nvmet_rdma_recv_not_live(struct nvmet_rdma_queue *queue, ++ struct nvmet_rdma_rsp *rsp) ++{ ++ unsigned long flags; ++ bool ret = true; ++ ++ spin_lock_irqsave(&queue->state_lock, flags); ++ /* ++ * recheck queue state is not live to prevent a race condition ++ * with RDMA_CM_EVENT_ESTABLISHED handler. ++ */ ++ if (queue->state == NVMET_RDMA_Q_LIVE) ++ ret = false; ++ else if (queue->state == NVMET_RDMA_Q_CONNECTING) ++ list_add_tail(&rsp->wait_list, &queue->rsp_wait_list); ++ else ++ nvmet_rdma_put_rsp(rsp); ++ spin_unlock_irqrestore(&queue->state_lock, flags); ++ return ret; ++} ++ + static void nvmet_rdma_recv_done(struct ib_cq *cq, struct ib_wc *wc) + { + struct nvmet_rdma_cmd *cmd = +@@ -1037,17 +1058,9 @@ static void nvmet_rdma_recv_done(struct ib_cq *cq, struct ib_wc *wc) + rsp->req.port = queue->port; + rsp->n_rdma = 0; + +- if (unlikely(queue->state != NVMET_RDMA_Q_LIVE)) { +- unsigned long flags; +- +- spin_lock_irqsave(&queue->state_lock, flags); +- if (queue->state == NVMET_RDMA_Q_CONNECTING) +- list_add_tail(&rsp->wait_list, &queue->rsp_wait_list); +- else +- nvmet_rdma_put_rsp(rsp); +- spin_unlock_irqrestore(&queue->state_lock, flags); ++ if (unlikely(queue->state != NVMET_RDMA_Q_LIVE) && ++ nvmet_rdma_recv_not_live(queue, rsp)) + return; +- } + + nvmet_rdma_handle_command(queue, rsp); + } +diff --git a/drivers/pci/controller/cadence/pcie-cadence-ep.c b/drivers/pci/controller/cadence/pcie-cadence-ep.c +index 18e32b8ffd5ef2..90d1e2ac774e04 100644 +--- a/drivers/pci/controller/cadence/pcie-cadence-ep.c ++++ 
b/drivers/pci/controller/cadence/pcie-cadence-ep.c +@@ -354,8 +354,7 @@ static void cdns_pcie_ep_assert_intx(struct cdns_pcie_ep *ep, u8 fn, u8 intx, + spin_unlock_irqrestore(&ep->lock, flags); + + offset = CDNS_PCIE_NORMAL_MSG_ROUTING(MSG_ROUTING_LOCAL) | +- CDNS_PCIE_NORMAL_MSG_CODE(msg_code) | +- CDNS_PCIE_MSG_NO_DATA; ++ CDNS_PCIE_NORMAL_MSG_CODE(msg_code); + writel(0, ep->irq_cpu_addr + offset); + } + +diff --git a/drivers/pci/controller/cadence/pcie-cadence.h b/drivers/pci/controller/cadence/pcie-cadence.h +index 262421e5d91788..ef649c807af4e3 100644 +--- a/drivers/pci/controller/cadence/pcie-cadence.h ++++ b/drivers/pci/controller/cadence/pcie-cadence.h +@@ -235,7 +235,7 @@ struct cdns_pcie_rp_ib_bar { + #define CDNS_PCIE_NORMAL_MSG_CODE_MASK GENMASK(15, 8) + #define CDNS_PCIE_NORMAL_MSG_CODE(code) \ + (((code) << 8) & CDNS_PCIE_NORMAL_MSG_CODE_MASK) +-#define CDNS_PCIE_MSG_NO_DATA BIT(16) ++#define CDNS_PCIE_MSG_DATA BIT(16) + + struct cdns_pcie; + +diff --git a/drivers/pci/controller/pcie-brcmstb.c b/drivers/pci/controller/pcie-brcmstb.c +index cc30215f5a4336..c3c1d700f519a7 100644 +--- a/drivers/pci/controller/pcie-brcmstb.c ++++ b/drivers/pci/controller/pcie-brcmstb.c +@@ -410,10 +410,10 @@ static int brcm_pcie_set_ssc(struct brcm_pcie *pcie) + static void brcm_pcie_set_gen(struct brcm_pcie *pcie, int gen) + { + u16 lnkctl2 = readw(pcie->base + BRCM_PCIE_CAP_REGS + PCI_EXP_LNKCTL2); +- u32 lnkcap = readl(pcie->base + BRCM_PCIE_CAP_REGS + PCI_EXP_LNKCAP); ++ u32 lnkcap = readl(pcie->base + PCIE_RC_CFG_PRIV1_LINK_CAPABILITY); + + lnkcap = (lnkcap & ~PCI_EXP_LNKCAP_SLS) | gen; +- writel(lnkcap, pcie->base + BRCM_PCIE_CAP_REGS + PCI_EXP_LNKCAP); ++ writel(lnkcap, pcie->base + PCIE_RC_CFG_PRIV1_LINK_CAPABILITY); + + lnkctl2 = (lnkctl2 & ~0xf) | gen; + writew(lnkctl2, pcie->base + BRCM_PCIE_CAP_REGS + PCI_EXP_LNKCTL2); +diff --git a/drivers/pci/controller/pcie-xilinx-cpm.c b/drivers/pci/controller/pcie-xilinx-cpm.c +index 95426df0320001..1e72cea8563f38 100644 
+--- a/drivers/pci/controller/pcie-xilinx-cpm.c ++++ b/drivers/pci/controller/pcie-xilinx-cpm.c +@@ -556,13 +556,15 @@ static int xilinx_cpm_pcie_probe(struct platform_device *pdev) + return err; + + bus = resource_list_first_type(&bridge->windows, IORESOURCE_BUS); +- if (!bus) +- return -ENODEV; ++ if (!bus) { ++ err = -ENODEV; ++ goto err_free_irq_domains; ++ } + + err = xilinx_cpm_pcie_parse_dt(port, bus->res); + if (err) { + dev_err(dev, "Parsing DT failed\n"); +- goto err_parse_dt; ++ goto err_free_irq_domains; + } + + xilinx_cpm_pcie_init_port(port); +@@ -586,7 +588,7 @@ static int xilinx_cpm_pcie_probe(struct platform_device *pdev) + xilinx_cpm_free_interrupts(port); + err_setup_irq: + pci_ecam_free(port->cfg); +-err_parse_dt: ++err_free_irq_domains: + xilinx_cpm_free_irq_domains(port); + return err; + } +diff --git a/drivers/pci/hotplug/pciehp_hpc.c b/drivers/pci/hotplug/pciehp_hpc.c +index 7773009b8b32ee..6647ade09f0546 100644 +--- a/drivers/pci/hotplug/pciehp_hpc.c ++++ b/drivers/pci/hotplug/pciehp_hpc.c +@@ -840,7 +840,9 @@ void pcie_enable_interrupt(struct controller *ctrl) + { + u16 mask; + +- mask = PCI_EXP_SLTCTL_HPIE | PCI_EXP_SLTCTL_DLLSCE; ++ mask = PCI_EXP_SLTCTL_DLLSCE; ++ if (!pciehp_poll_mode) ++ mask |= PCI_EXP_SLTCTL_HPIE; + pcie_write_cmd(ctrl, mask, mask); + } + +diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c +index fbaf9af62bd674..6a5f53f968c3d7 100644 +--- a/drivers/pci/pci.c ++++ b/drivers/pci/pci.c +@@ -5484,6 +5484,8 @@ static bool pci_bus_resetable(struct pci_bus *bus) + return false; + + list_for_each_entry(dev, &bus->devices, bus_list) { ++ if (!pci_reset_supported(dev)) ++ return false; + if (dev->dev_flags & PCI_DEV_FLAGS_NO_BUS_RESET || + (dev->subordinate && !pci_bus_resetable(dev->subordinate))) + return false; +@@ -5560,6 +5562,8 @@ static bool pci_slot_resetable(struct pci_slot *slot) + list_for_each_entry(dev, &slot->bus->devices, bus_list) { + if (!dev->slot || dev->slot != slot) + continue; ++ if 
(!pci_reset_supported(dev)) ++ return false; + if (dev->dev_flags & PCI_DEV_FLAGS_NO_BUS_RESET || + (dev->subordinate && !pci_bus_resetable(dev->subordinate))) + return false; +diff --git a/drivers/pci/pcie/aspm.c b/drivers/pci/pcie/aspm.c +index e3f81948ce724a..521340126b3395 100644 +--- a/drivers/pci/pcie/aspm.c ++++ b/drivers/pci/pcie/aspm.c +@@ -1014,16 +1014,16 @@ void pcie_aspm_exit_link_state(struct pci_dev *pdev) + parent_link = link->parent; + + /* +- * link->downstream is a pointer to the pci_dev of function 0. If +- * we remove that function, the pci_dev is about to be deallocated, +- * so we can't use link->downstream again. Free the link state to +- * avoid this. ++ * Free the parent link state, no later than function 0 (i.e. ++ * link->downstream) being removed. + * +- * If we're removing a non-0 function, it's possible we could +- * retain the link state, but PCIe r6.0, sec 7.5.3.7, recommends +- * programming the same ASPM Control value for all functions of +- * multi-function devices, so disable ASPM for all of them. ++ * Do not free the link state any earlier. If function 0 is a ++ * switch upstream port, this link state is parent_link to all ++ * subordinate ones. + */ ++ if (pdev != link->downstream) ++ goto out; ++ + pcie_config_aspm_link(link, 0); + list_del(&link->sibling); + free_link_state(link); +@@ -1034,6 +1034,7 @@ void pcie_aspm_exit_link_state(struct pci_dev *pdev) + pcie_config_aspm_path(parent_link); + } + ++ out: + mutex_unlock(&aspm_lock); + up_read(&pci_bus_sem); + } +diff --git a/drivers/pci/pcie/portdrv_core.c b/drivers/pci/pcie/portdrv_core.c +index 604feeb84ee402..3e5274ad60f101 100644 +--- a/drivers/pci/pcie/portdrv_core.c ++++ b/drivers/pci/pcie/portdrv_core.c +@@ -214,10 +214,12 @@ static int get_port_device_capability(struct pci_dev *dev) + + /* + * Disable hot-plug interrupts in case they have been enabled +- * by the BIOS and the hot-plug service driver is not loaded. 
++ * by the BIOS and the hot-plug service driver won't be loaded ++ * to handle them. + */ +- pcie_capability_clear_word(dev, PCI_EXP_SLTCTL, +- PCI_EXP_SLTCTL_CCIE | PCI_EXP_SLTCTL_HPIE); ++ if (!IS_ENABLED(CONFIG_HOTPLUG_PCI_PCIE)) ++ pcie_capability_clear_word(dev, PCI_EXP_SLTCTL, ++ PCI_EXP_SLTCTL_CCIE | PCI_EXP_SLTCTL_HPIE); + } + + #ifdef CONFIG_PCIEAER +diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c +index dd2134c7c4192b..51615e4d28f41d 100644 +--- a/drivers/pci/probe.c ++++ b/drivers/pci/probe.c +@@ -926,10 +926,9 @@ static int pci_register_host_bridge(struct pci_host_bridge *bridge) + goto free; + + err = device_add(&bridge->dev); +- if (err) { +- put_device(&bridge->dev); ++ if (err) + goto free; +- } ++ + bus->bridge = get_device(&bridge->dev); + device_enable_async_suspend(bus->bridge); + pci_set_bus_of_node(bus); +diff --git a/drivers/pinctrl/bcm/pinctrl-bcm281xx.c b/drivers/pinctrl/bcm/pinctrl-bcm281xx.c +index 9ab1f427286a7f..fbfddcc39d5ccb 100644 +--- a/drivers/pinctrl/bcm/pinctrl-bcm281xx.c ++++ b/drivers/pinctrl/bcm/pinctrl-bcm281xx.c +@@ -981,7 +981,7 @@ static const struct regmap_config bcm281xx_pinctrl_regmap_config = { + .reg_bits = 32, + .reg_stride = 4, + .val_bits = 32, +- .max_register = BCM281XX_PIN_VC_CAM3_SDA, ++ .max_register = BCM281XX_PIN_VC_CAM3_SDA * 4, + }; + + static int bcm281xx_pinctrl_get_groups_count(struct pinctrl_dev *pctldev) +diff --git a/drivers/pinctrl/renesas/pinctrl-rza2.c b/drivers/pinctrl/renesas/pinctrl-rza2.c +index ddd8ee6b604ef2..1fd3191d9f8d96 100644 +--- a/drivers/pinctrl/renesas/pinctrl-rza2.c ++++ b/drivers/pinctrl/renesas/pinctrl-rza2.c +@@ -253,6 +253,8 @@ static int rza2_gpio_register(struct rza2_pinctrl_priv *priv) + return ret; + } + ++ of_node_put(of_args.np); ++ + if ((of_args.args[0] != 0) || + (of_args.args[1] != 0) || + (of_args.args[2] != priv->npins)) { +diff --git a/drivers/pinctrl/renesas/pinctrl-rzg2l.c b/drivers/pinctrl/renesas/pinctrl-rzg2l.c +index 20b2af889ca96b..f839bd3d0927ab 
100644 +--- a/drivers/pinctrl/renesas/pinctrl-rzg2l.c ++++ b/drivers/pinctrl/renesas/pinctrl-rzg2l.c +@@ -979,6 +979,8 @@ static int rzg2l_gpio_register(struct rzg2l_pinctrl *pctrl) + return ret; + } + ++ of_node_put(of_args.np); ++ + if (of_args.args[0] != 0 || of_args.args[1] != 0 || + of_args.args[2] != ARRAY_SIZE(rzg2l_gpio_names)) { + dev_err(pctrl->dev, "gpio-ranges does not match selected SOC\n"); +diff --git a/drivers/pinctrl/tegra/pinctrl-tegra.c b/drivers/pinctrl/tegra/pinctrl-tegra.c +index 195cfe557511bd..90de78e4175c93 100644 +--- a/drivers/pinctrl/tegra/pinctrl-tegra.c ++++ b/drivers/pinctrl/tegra/pinctrl-tegra.c +@@ -270,6 +270,9 @@ static int tegra_pinctrl_set_mux(struct pinctrl_dev *pctldev, + val = pmx_readl(pmx, g->mux_bank, g->mux_reg); + val &= ~(0x3 << g->mux_bit); + val |= i << g->mux_bit; ++ /* Set the SFIO/GPIO selection to SFIO when under pinmux control*/ ++ if (pmx->soc->sfsel_in_mux) ++ val |= (1 << g->sfsel_bit); + pmx_writel(pmx, val, g->mux_bank, g->mux_reg); + + return 0; +diff --git a/drivers/platform/x86/intel/speed_select_if/isst_if_common.c b/drivers/platform/x86/intel/speed_select_if/isst_if_common.c +index 380e36953e311d..595db17ca129f3 100644 +--- a/drivers/platform/x86/intel/speed_select_if/isst_if_common.c ++++ b/drivers/platform/x86/intel/speed_select_if/isst_if_common.c +@@ -77,7 +77,7 @@ static DECLARE_HASHTABLE(isst_hash, 8); + static DEFINE_MUTEX(isst_hash_lock); + + static int isst_store_new_cmd(int cmd, u32 cpu, int mbox_cmd_type, u32 param, +- u32 data) ++ u64 data) + { + struct isst_cmd *sst_cmd; + +diff --git a/drivers/power/supply/max77693_charger.c b/drivers/power/supply/max77693_charger.c +index a2c5c9858639fd..ef3482fa4023e6 100644 +--- a/drivers/power/supply/max77693_charger.c ++++ b/drivers/power/supply/max77693_charger.c +@@ -556,7 +556,7 @@ static int max77693_set_charge_input_threshold_volt(struct max77693_charger *chg + case 4700000: + case 4800000: + case 4900000: +- data = (uvolt - 4700000) / 100000; ++ 
data = ((uvolt - 4700000) / 100000) + 1; + break; + default: + dev_err(chg->dev, "Wrong value for charge input voltage regulation threshold\n"); +diff --git a/drivers/powercap/powercap_sys.c b/drivers/powercap/powercap_sys.c +index ff736b006198fd..fd475e463d1fa9 100644 +--- a/drivers/powercap/powercap_sys.c ++++ b/drivers/powercap/powercap_sys.c +@@ -626,8 +626,7 @@ struct powercap_control_type *powercap_register_control_type( + dev_set_name(&control_type->dev, "%s", name); + result = device_register(&control_type->dev); + if (result) { +- if (control_type->allocated) +- kfree(control_type); ++ put_device(&control_type->dev); + return ERR_PTR(result); + } + idr_init(&control_type->idr); +diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c +index 879f4a77e91dbd..18f13b9601b5b6 100644 +--- a/drivers/regulator/core.c ++++ b/drivers/regulator/core.c +@@ -2047,6 +2047,10 @@ static int regulator_resolve_supply(struct regulator_dev *rdev) + + if (have_full_constraints()) { + r = dummy_regulator_rdev; ++ if (!r) { ++ ret = -EPROBE_DEFER; ++ goto out; ++ } + get_device(&r->dev); + } else { + dev_err(dev, "Failed to resolve %s-supply for %s\n", +@@ -2064,6 +2068,10 @@ static int regulator_resolve_supply(struct regulator_dev *rdev) + goto out; + } + r = dummy_regulator_rdev; ++ if (!r) { ++ ret = -EPROBE_DEFER; ++ goto out; ++ } + get_device(&r->dev); + } + +@@ -2172,8 +2180,10 @@ struct regulator *_regulator_get(struct device *dev, const char *id, + * enabled, even if it isn't hooked up, and just + * provide a dummy. 
+ */ +- dev_warn(dev, "supply %s not found, using dummy regulator\n", id); + rdev = dummy_regulator_rdev; ++ if (!rdev) ++ return ERR_PTR(-EPROBE_DEFER); ++ dev_warn(dev, "supply %s not found, using dummy regulator\n", id); + get_device(&rdev->dev); + break; + +diff --git a/drivers/remoteproc/qcom_q6v5_mss.c b/drivers/remoteproc/qcom_q6v5_mss.c +index 7fe1f2c5480af1..7acd60de18c8f5 100644 +--- a/drivers/remoteproc/qcom_q6v5_mss.c ++++ b/drivers/remoteproc/qcom_q6v5_mss.c +@@ -1594,6 +1594,13 @@ static int q6v5_pds_attach(struct device *dev, struct device **devs, + while (pd_names[num_pds]) + num_pds++; + ++ /* Handle single power domain */ ++ if (num_pds == 1 && dev->pm_domain) { ++ devs[0] = dev; ++ pm_runtime_enable(dev); ++ return 1; ++ } ++ + for (i = 0; i < num_pds; i++) { + devs[i] = dev_pm_domain_attach_by_name(dev, pd_names[i]); + if (IS_ERR_OR_NULL(devs[i])) { +@@ -1614,8 +1621,15 @@ static int q6v5_pds_attach(struct device *dev, struct device **devs, + static void q6v5_pds_detach(struct q6v5 *qproc, struct device **pds, + size_t pd_count) + { ++ struct device *dev = qproc->dev; + int i; + ++ /* Handle single power domain */ ++ if (pd_count == 1 && dev->pm_domain) { ++ pm_runtime_disable(dev); ++ return; ++ } ++ + for (i = 0; i < pd_count; i++) + dev_pm_domain_detach(pds[i], false); + } +@@ -2081,13 +2095,13 @@ static const struct rproc_hexagon_res msm8974_mss = { + .supply = "pll", + .uA = 100000, + }, +- {} +- }, +- .fallback_proxy_supply = (struct qcom_mss_reg_res[]) { + { + .supply = "mx", + .uV = 1050000, + }, ++ {} ++ }, ++ .fallback_proxy_supply = (struct qcom_mss_reg_res[]) { + { + .supply = "cx", + .uA = 100000, +@@ -2113,7 +2127,6 @@ static const struct rproc_hexagon_res msm8974_mss = { + NULL + }, + .proxy_pd_names = (char*[]){ +- "mx", + "cx", + NULL + }, +diff --git a/drivers/remoteproc/qcom_q6v5_pas.c b/drivers/remoteproc/qcom_q6v5_pas.c +index fbcbc00f2e6459..776319ab1baf9b 100644 +--- a/drivers/remoteproc/qcom_q6v5_pas.c ++++ 
b/drivers/remoteproc/qcom_q6v5_pas.c +@@ -330,16 +330,16 @@ static int adsp_pds_attach(struct device *dev, struct device **devs, + if (!pd_names) + return 0; + ++ while (pd_names[num_pds]) ++ num_pds++; ++ + /* Handle single power domain */ +- if (dev->pm_domain) { ++ if (num_pds == 1 && dev->pm_domain) { + devs[0] = dev; + pm_runtime_enable(dev); + return 1; + } + +- while (pd_names[num_pds]) +- num_pds++; +- + for (i = 0; i < num_pds; i++) { + devs[i] = dev_pm_domain_attach_by_name(dev, pd_names[i]); + if (IS_ERR_OR_NULL(devs[i])) { +@@ -364,7 +364,7 @@ static void adsp_pds_detach(struct qcom_adsp *adsp, struct device **pds, + int i; + + /* Handle single power domain */ +- if (dev->pm_domain && pd_count) { ++ if (pd_count == 1 && dev->pm_domain) { + pm_runtime_disable(dev); + return; + } +diff --git a/drivers/remoteproc/remoteproc_core.c b/drivers/remoteproc/remoteproc_core.c +index 97e59f7461261c..9e6d0dda64a99f 100644 +--- a/drivers/remoteproc/remoteproc_core.c ++++ b/drivers/remoteproc/remoteproc_core.c +@@ -2102,6 +2102,7 @@ void rproc_shutdown(struct rproc *rproc) + kfree(rproc->cached_table); + rproc->cached_table = NULL; + rproc->table_ptr = NULL; ++ rproc->table_sz = 0; + out: + mutex_unlock(&rproc->lock); + } +diff --git a/drivers/s390/cio/chp.c b/drivers/s390/cio/chp.c +index 1097e76982a5d0..6b0f1b8bf2790b 100644 +--- a/drivers/s390/cio/chp.c ++++ b/drivers/s390/cio/chp.c +@@ -661,7 +661,8 @@ static int info_update(void) + if (time_after(jiffies, chp_info_expires)) { + /* Data is too old, update. 
*/ + rc = sclp_chp_read_info(&chp_info); +- chp_info_expires = jiffies + CHP_INFO_UPDATE_INTERVAL ; ++ if (!rc) ++ chp_info_expires = jiffies + CHP_INFO_UPDATE_INTERVAL; + } + mutex_unlock(&info_lock); + +diff --git a/drivers/scsi/qla1280.c b/drivers/scsi/qla1280.c +index d0b4e063bfe1e4..eb8e9c54837e09 100644 +--- a/drivers/scsi/qla1280.c ++++ b/drivers/scsi/qla1280.c +@@ -2875,7 +2875,7 @@ qla1280_64bit_start_scsi(struct scsi_qla_host *ha, struct srb * sp) + dprintk(3, "S/G Segment phys_addr=%x %x, len=0x%x\n", + cpu_to_le32(upper_32_bits(dma_handle)), + cpu_to_le32(lower_32_bits(dma_handle)), +- cpu_to_le32(sg_dma_len(sg_next(s)))); ++ cpu_to_le32(sg_dma_len(s))); + remseg--; + } + dprintk(5, "qla1280_64bit_start_scsi: Scatter/gather " +diff --git a/drivers/scsi/scsi_scan.c b/drivers/scsi/scsi_scan.c +index 9c155d5768144a..f00b4624e46b5e 100644 +--- a/drivers/scsi/scsi_scan.c ++++ b/drivers/scsi/scsi_scan.c +@@ -229,7 +229,7 @@ static int scsi_realloc_sdev_budget_map(struct scsi_device *sdev, + } + ret = sbitmap_init_node(&sdev->budget_map, + scsi_device_max_queue_depth(sdev), +- new_shift, GFP_KERNEL, ++ new_shift, GFP_NOIO, + sdev->request_queue->node, false, true); + if (!ret) + sbitmap_resize(&sdev->budget_map, depth); +diff --git a/drivers/soc/qcom/pdr_interface.c b/drivers/soc/qcom/pdr_interface.c +index e20d97d7fb658d..97813b8c6dea82 100644 +--- a/drivers/soc/qcom/pdr_interface.c ++++ b/drivers/soc/qcom/pdr_interface.c +@@ -74,7 +74,6 @@ static int pdr_locator_new_server(struct qmi_handle *qmi, + { + struct pdr_handle *pdr = container_of(qmi, struct pdr_handle, + locator_hdl); +- struct pdr_service *pds; + + mutex_lock(&pdr->lock); + /* Create a local client port for QMI communication */ +@@ -86,12 +85,7 @@ static int pdr_locator_new_server(struct qmi_handle *qmi, + mutex_unlock(&pdr->lock); + + /* Service pending lookup requests */ +- mutex_lock(&pdr->list_lock); +- list_for_each_entry(pds, &pdr->lookups, node) { +- if (pds->need_locator_lookup) +- 
schedule_work(&pdr->locator_work); +- } +- mutex_unlock(&pdr->list_lock); ++ schedule_work(&pdr->locator_work); + + return 0; + } +diff --git a/drivers/soundwire/slave.c b/drivers/soundwire/slave.c +index 25e76b5d4a1a30..a5a9118612de22 100644 +--- a/drivers/soundwire/slave.c ++++ b/drivers/soundwire/slave.c +@@ -12,6 +12,7 @@ static void sdw_slave_release(struct device *dev) + { + struct sdw_slave *slave = dev_to_sdw_dev(dev); + ++ of_node_put(slave->dev.of_node); + mutex_destroy(&slave->sdw_dev_lock); + kfree(slave); + } +diff --git a/drivers/thermal/cpufreq_cooling.c b/drivers/thermal/cpufreq_cooling.c +index 12a60415af955e..8171c806f5f6f3 100644 +--- a/drivers/thermal/cpufreq_cooling.c ++++ b/drivers/thermal/cpufreq_cooling.c +@@ -56,8 +56,6 @@ struct time_in_idle { + * @max_level: maximum cooling level. One less than total number of valid + * cpufreq frequencies. + * @em: Reference on the Energy Model of the device +- * @cdev: thermal_cooling_device pointer to keep track of the +- * registered cooling device. + * @policy: cpufreq policy. 
+ * @idle_time: idle time stats + * @qos_req: PM QoS contraint to apply +diff --git a/drivers/thermal/intel/int340x_thermal/int3402_thermal.c b/drivers/thermal/intel/int340x_thermal/int3402_thermal.c +index 43fa351e2b9ec1..b7fdf25bfd237e 100644 +--- a/drivers/thermal/intel/int340x_thermal/int3402_thermal.c ++++ b/drivers/thermal/intel/int340x_thermal/int3402_thermal.c +@@ -45,6 +45,9 @@ static int int3402_thermal_probe(struct platform_device *pdev) + struct int3402_thermal_data *d; + int ret; + ++ if (!adev) ++ return -ENODEV; ++ + if (!acpi_has_method(adev->handle, "_TMP")) + return -ENODEV; + +diff --git a/drivers/tty/serial/8250/8250_dma.c b/drivers/tty/serial/8250/8250_dma.c +index 7a92e3397908ef..e3125f23044695 100644 +--- a/drivers/tty/serial/8250/8250_dma.c ++++ b/drivers/tty/serial/8250/8250_dma.c +@@ -150,7 +150,7 @@ void serial8250_tx_dma_flush(struct uart_8250_port *p) + */ + dma->tx_size = 0; + +- dmaengine_terminate_async(dma->rxchan); ++ dmaengine_terminate_async(dma->txchan); + } + + int serial8250_rx_dma(struct uart_8250_port *p) +diff --git a/drivers/tty/serial/8250/8250_pci.c b/drivers/tty/serial/8250/8250_pci.c +index 9248d83489b1fd..32bc4ec6ebab4c 100644 +--- a/drivers/tty/serial/8250/8250_pci.c ++++ b/drivers/tty/serial/8250/8250_pci.c +@@ -2868,6 +2868,22 @@ static struct pci_serial_quirk pci_serial_quirks[] = { + .init = pci_oxsemi_tornado_init, + .setup = pci_oxsemi_tornado_setup, + }, ++ { ++ .vendor = PCI_VENDOR_ID_INTASHIELD, ++ .device = 0x4026, ++ .subvendor = PCI_ANY_ID, ++ .subdevice = PCI_ANY_ID, ++ .init = pci_oxsemi_tornado_init, ++ .setup = pci_oxsemi_tornado_setup, ++ }, ++ { ++ .vendor = PCI_VENDOR_ID_INTASHIELD, ++ .device = 0x4021, ++ .subvendor = PCI_ANY_ID, ++ .subdevice = PCI_ANY_ID, ++ .init = pci_oxsemi_tornado_init, ++ .setup = pci_oxsemi_tornado_setup, ++ }, + { + .vendor = PCI_VENDOR_ID_INTEL, + .device = 0x8811, +@@ -5536,6 +5552,14 @@ static const struct pci_device_id serial_pci_tbl[] = { + PCI_ANY_ID, PCI_ANY_ID, + 
0, 0, + pbn_b2_2_115200 }, ++ { PCI_VENDOR_ID_INTASHIELD, 0x0BA2, ++ PCI_ANY_ID, PCI_ANY_ID, ++ 0, 0, ++ pbn_b2_2_115200 }, ++ { PCI_VENDOR_ID_INTASHIELD, 0x0BA3, ++ PCI_ANY_ID, PCI_ANY_ID, ++ 0, 0, ++ pbn_b2_2_115200 }, + /* + * Brainboxes UC-235/246 + */ +@@ -5656,6 +5680,14 @@ static const struct pci_device_id serial_pci_tbl[] = { + PCI_ANY_ID, PCI_ANY_ID, + 0, 0, + pbn_b2_4_115200 }, ++ { PCI_VENDOR_ID_INTASHIELD, 0x0C42, ++ PCI_ANY_ID, PCI_ANY_ID, ++ 0, 0, ++ pbn_b2_4_115200 }, ++ { PCI_VENDOR_ID_INTASHIELD, 0x0C43, ++ PCI_ANY_ID, PCI_ANY_ID, ++ 0, 0, ++ pbn_b2_4_115200 }, + /* + * Brainboxes UC-420 + */ +@@ -5882,6 +5914,20 @@ static const struct pci_device_id serial_pci_tbl[] = { + PCI_ANY_ID, PCI_ANY_ID, + 0, 0, + pbn_oxsemi_1_15625000 }, ++ /* ++ * Brainboxes XC-235 ++ */ ++ { PCI_VENDOR_ID_INTASHIELD, 0x4026, ++ PCI_ANY_ID, PCI_ANY_ID, ++ 0, 0, ++ pbn_oxsemi_1_15625000 }, ++ /* ++ * Brainboxes XC-475 ++ */ ++ { PCI_VENDOR_ID_INTASHIELD, 0x4021, ++ PCI_ANY_ID, PCI_ANY_ID, ++ 0, 0, ++ pbn_oxsemi_1_15625000 }, + + /* + * Perle PCI-RAS cards +diff --git a/drivers/tty/serial/fsl_lpuart.c b/drivers/tty/serial/fsl_lpuart.c +index 9f1be9ce47e038..6684f6512fb16b 100644 +--- a/drivers/tty/serial/fsl_lpuart.c ++++ b/drivers/tty/serial/fsl_lpuart.c +@@ -1396,7 +1396,20 @@ static int lpuart32_config_rs485(struct uart_port *port, + struct lpuart_port, port); + + unsigned long modem = lpuart32_read(&sport->port, UARTMODIR) +- & ~(UARTMODEM_TXRTSPOL | UARTMODEM_TXRTSE); ++ & ~(UARTMODIR_TXRTSPOL | UARTMODIR_TXRTSE); ++ u32 ctrl; ++ ++ /* TXRTSE and TXRTSPOL only can be changed when transmitter is disabled. 
*/ ++ ctrl = lpuart32_read(&sport->port, UARTCTRL); ++ if (ctrl & UARTCTRL_TE) { ++ /* wait for the transmit engine to complete */ ++ lpuart32_wait_bit_set(&sport->port, UARTSTAT, UARTSTAT_TC); ++ lpuart32_write(&sport->port, ctrl & ~UARTCTRL_TE, UARTCTRL); ++ ++ while (lpuart32_read(&sport->port, UARTCTRL) & UARTCTRL_TE) ++ cpu_relax(); ++ } ++ + lpuart32_write(&sport->port, modem, UARTMODIR); + + /* clear unsupported configurations */ +@@ -1406,7 +1419,7 @@ static int lpuart32_config_rs485(struct uart_port *port, + + if (rs485->flags & SER_RS485_ENABLED) { + /* Enable auto RS-485 RTS mode */ +- modem |= UARTMODEM_TXRTSE; ++ modem |= UARTMODIR_TXRTSE; + + /* + * RTS needs to be logic HIGH either during transfer _or_ after +@@ -1428,15 +1441,19 @@ static int lpuart32_config_rs485(struct uart_port *port, + * Note: UART is assumed to be active high. + */ + if (rs485->flags & SER_RS485_RTS_ON_SEND) +- modem |= UARTMODEM_TXRTSPOL; ++ modem |= UARTMODIR_TXRTSPOL; + else if (rs485->flags & SER_RS485_RTS_AFTER_SEND) +- modem &= ~UARTMODEM_TXRTSPOL; ++ modem &= ~UARTMODIR_TXRTSPOL; + } + + /* Store the new configuration */ + sport->port.rs485 = *rs485; + + lpuart32_write(&sport->port, modem, UARTMODIR); ++ ++ if (ctrl & UARTCTRL_TE) ++ lpuart32_write(&sport->port, ctrl, UARTCTRL); ++ + return 0; + } + +diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c +index 1915b92c388fe4..2e1e7e4625b6b9 100644 +--- a/drivers/usb/serial/ftdi_sio.c ++++ b/drivers/usb/serial/ftdi_sio.c +@@ -1057,6 +1057,20 @@ static const struct usb_device_id id_table_combined[] = { + .driver_info = (kernel_ulong_t)&ftdi_jtag_quirk }, + /* GMC devices */ + { USB_DEVICE(GMC_VID, GMC_Z216C_PID) }, ++ /* Altera USB Blaster 3 */ ++ { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_6022_PID, 1) }, ++ { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_6025_PID, 2) }, ++ { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_6026_PID, 2) }, ++ { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, 
ALTERA_UB3_6026_PID, 3) }, ++ { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_6029_PID, 2) }, ++ { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_602A_PID, 2) }, ++ { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_602A_PID, 3) }, ++ { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_602C_PID, 1) }, ++ { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_602D_PID, 1) }, ++ { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_602D_PID, 2) }, ++ { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_602E_PID, 1) }, ++ { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_602E_PID, 2) }, ++ { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_602E_PID, 3) }, + { } /* Terminating entry */ + }; + +diff --git a/drivers/usb/serial/ftdi_sio_ids.h b/drivers/usb/serial/ftdi_sio_ids.h +index b2aec1106678ad..f4d729562355c3 100644 +--- a/drivers/usb/serial/ftdi_sio_ids.h ++++ b/drivers/usb/serial/ftdi_sio_ids.h +@@ -1605,3 +1605,16 @@ + */ + #define GMC_VID 0x1cd7 + #define GMC_Z216C_PID 0x0217 /* GMC Z216C Adapter IR-USB */ ++ ++/* ++ * Altera USB Blaster 3 (http://www.altera.com). 
++ */ ++#define ALTERA_VID 0x09fb ++#define ALTERA_UB3_6022_PID 0x6022 ++#define ALTERA_UB3_6025_PID 0x6025 ++#define ALTERA_UB3_6026_PID 0x6026 ++#define ALTERA_UB3_6029_PID 0x6029 ++#define ALTERA_UB3_602A_PID 0x602a ++#define ALTERA_UB3_602C_PID 0x602c ++#define ALTERA_UB3_602D_PID 0x602d ++#define ALTERA_UB3_602E_PID 0x602e +diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c +index 7ca07ba1a13999..715738d70cbf5c 100644 +--- a/drivers/usb/serial/option.c ++++ b/drivers/usb/serial/option.c +@@ -1368,13 +1368,13 @@ static const struct usb_device_id option_ids[] = { + .driver_info = NCTRL(0) | RSVD(1) }, + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1075, 0xff), /* Telit FN990A (PCIe) */ + .driver_info = RSVD(0) }, +- { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1080, 0xff), /* Telit FE990 (rmnet) */ ++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1080, 0xff), /* Telit FE990A (rmnet) */ + .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) }, +- { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1081, 0xff), /* Telit FE990 (MBIM) */ ++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1081, 0xff), /* Telit FE990A (MBIM) */ + .driver_info = NCTRL(0) | RSVD(1) }, +- { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1082, 0xff), /* Telit FE990 (RNDIS) */ ++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1082, 0xff), /* Telit FE990A (RNDIS) */ + .driver_info = NCTRL(2) | RSVD(3) }, +- { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1083, 0xff), /* Telit FE990 (ECM) */ ++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1083, 0xff), /* Telit FE990A (ECM) */ + .driver_info = NCTRL(0) | RSVD(1) }, + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10a0, 0xff), /* Telit FN20C04 (rmnet) */ + .driver_info = RSVD(0) | NCTRL(3) }, +@@ -1388,28 +1388,44 @@ static const struct usb_device_id option_ids[] = { + .driver_info = RSVD(0) | NCTRL(2) | RSVD(3) | RSVD(4) }, + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10aa, 0xff), /* Telit FN920C04 (MBIM) */ + 
.driver_info = NCTRL(3) | RSVD(4) | RSVD(5) }, ++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10b0, 0xff, 0xff, 0x30), /* Telit FE990B (rmnet) */ ++ .driver_info = NCTRL(5) }, ++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10b0, 0xff, 0xff, 0x40) }, ++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10b0, 0xff, 0xff, 0x60) }, ++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10b1, 0xff, 0xff, 0x30), /* Telit FE990B (MBIM) */ ++ .driver_info = NCTRL(6) }, ++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10b1, 0xff, 0xff, 0x40) }, ++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10b1, 0xff, 0xff, 0x60) }, ++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10b2, 0xff, 0xff, 0x30), /* Telit FE990B (RNDIS) */ ++ .driver_info = NCTRL(6) }, ++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10b2, 0xff, 0xff, 0x40) }, ++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10b2, 0xff, 0xff, 0x60) }, ++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10b3, 0xff, 0xff, 0x30), /* Telit FE990B (ECM) */ ++ .driver_info = NCTRL(6) }, ++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10b3, 0xff, 0xff, 0x40) }, ++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10b3, 0xff, 0xff, 0x60) }, + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10c0, 0xff), /* Telit FE910C04 (rmnet) */ + .driver_info = RSVD(0) | NCTRL(3) }, + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10c4, 0xff), /* Telit FE910C04 (rmnet) */ + .driver_info = RSVD(0) | NCTRL(3) }, + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10c8, 0xff), /* Telit FE910C04 (rmnet) */ + .driver_info = RSVD(0) | NCTRL(2) | RSVD(3) | RSVD(4) }, +- { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d0, 0x60) }, /* Telit FN990B (rmnet) */ +- { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d0, 0x40) }, +- { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d0, 0x30), ++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d0, 0xff, 0xff, 0x30), /* Telit FN990B (rmnet) */ + 
.driver_info = NCTRL(5) }, +- { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d1, 0x60) }, /* Telit FN990B (MBIM) */ +- { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d1, 0x40) }, +- { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d1, 0x30), ++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d0, 0xff, 0xff, 0x40) }, ++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d0, 0xff, 0xff, 0x60) }, ++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d1, 0xff, 0xff, 0x30), /* Telit FN990B (MBIM) */ + .driver_info = NCTRL(6) }, +- { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d2, 0x60) }, /* Telit FN990B (RNDIS) */ +- { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d2, 0x40) }, +- { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d2, 0x30), ++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d1, 0xff, 0xff, 0x40) }, ++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d1, 0xff, 0xff, 0x60) }, ++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d2, 0xff, 0xff, 0x30), /* Telit FN990B (RNDIS) */ + .driver_info = NCTRL(6) }, +- { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d3, 0x60) }, /* Telit FN990B (ECM) */ +- { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d3, 0x40) }, +- { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d3, 0x30), ++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d2, 0xff, 0xff, 0x40) }, ++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d2, 0xff, 0xff, 0x60) }, ++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d3, 0xff, 0xff, 0x30), /* Telit FN990B (ECM) */ + .driver_info = NCTRL(6) }, ++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d3, 0xff, 0xff, 0x40) }, ++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d3, 0xff, 0xff, 0x60) }, + { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910), + .driver_info = NCTRL(0) | RSVD(1) | RSVD(3) }, + { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910_DUAL_MODEM), +diff --git a/drivers/video/console/Kconfig 
b/drivers/video/console/Kconfig +index fcc46380e7c914..390280ce7ea3e8 100644 +--- a/drivers/video/console/Kconfig ++++ b/drivers/video/console/Kconfig +@@ -23,7 +23,7 @@ config VGA_CONSOLE + Say Y. + + config MDA_CONSOLE +- depends on !M68K && !PARISC && ISA ++ depends on VGA_CONSOLE && ISA + tristate "MDA text console (dual-headed)" + help + Say Y here if you have an old MDA or monochrome Hercules graphics +diff --git a/drivers/video/fbdev/au1100fb.c b/drivers/video/fbdev/au1100fb.c +index 37a6512feda0fb..abb769824840eb 100644 +--- a/drivers/video/fbdev/au1100fb.c ++++ b/drivers/video/fbdev/au1100fb.c +@@ -137,13 +137,15 @@ static int au1100fb_fb_blank(int blank_mode, struct fb_info *fbi) + */ + int au1100fb_setmode(struct au1100fb_device *fbdev) + { +- struct fb_info *info = &fbdev->info; ++ struct fb_info *info; + u32 words; + int index; + + if (!fbdev) + return -EINVAL; + ++ info = &fbdev->info; ++ + /* Update var-dependent FB info */ + if (panel_is_active(fbdev->panel) || panel_is_color(fbdev->panel)) { + if (info->var.bits_per_pixel <= 8) { +diff --git a/drivers/video/fbdev/hyperv_fb.c b/drivers/video/fbdev/hyperv_fb.c +index 6a881cfd7f5c0c..5fd1b33d11238c 100644 +--- a/drivers/video/fbdev/hyperv_fb.c ++++ b/drivers/video/fbdev/hyperv_fb.c +@@ -1130,7 +1130,7 @@ static void hvfb_putmem(struct hv_device *hdev, struct fb_info *info) + + if (par->need_docopy) { + vfree(par->dio_vp); +- iounmap(info->screen_base); ++ iounmap(par->mmio_vp); + vmbus_free_mmio(par->mem->start, screen_fb_size); + } else { + hvfb_release_phymem(hdev, info->fix.smem_start, +diff --git a/drivers/video/fbdev/sm501fb.c b/drivers/video/fbdev/sm501fb.c +index 6a52eba645596a..3c46838651b061 100644 +--- a/drivers/video/fbdev/sm501fb.c ++++ b/drivers/video/fbdev/sm501fb.c +@@ -326,6 +326,13 @@ static int sm501fb_check_var(struct fb_var_screeninfo *var, + if (var->xres_virtual > 4096 || var->yres_virtual > 2048) + return -EINVAL; + ++ /* geometry sanity checks */ ++ if (var->xres + var->xoffset 
> var->xres_virtual) ++ return -EINVAL; ++ ++ if (var->yres + var->yoffset > var->yres_virtual) ++ return -EINVAL; ++ + /* can cope with 8,16 or 32bpp */ + + if (var->bits_per_pixel <= 8) +diff --git a/fs/affs/file.c b/fs/affs/file.c +index 25d480ea797bdc..2000241431d550 100644 +--- a/fs/affs/file.c ++++ b/fs/affs/file.c +@@ -598,7 +598,7 @@ affs_extent_file_ofs(struct inode *inode, u32 newsize) + BUG_ON(tmp > bsize); + AFFS_DATA_HEAD(bh)->ptype = cpu_to_be32(T_DATA); + AFFS_DATA_HEAD(bh)->key = cpu_to_be32(inode->i_ino); +- AFFS_DATA_HEAD(bh)->sequence = cpu_to_be32(bidx); ++ AFFS_DATA_HEAD(bh)->sequence = cpu_to_be32(bidx + 1); + AFFS_DATA_HEAD(bh)->size = cpu_to_be32(tmp); + affs_fix_checksum(sb, bh); + bh->b_state &= ~(1UL << BH_New); +@@ -726,7 +726,8 @@ static int affs_write_end_ofs(struct file *file, struct address_space *mapping, + tmp = min(bsize - boff, to - from); + BUG_ON(boff + tmp > bsize || tmp > bsize); + memcpy(AFFS_DATA(bh) + boff, data + from, tmp); +- be32_add_cpu(&AFFS_DATA_HEAD(bh)->size, tmp); ++ AFFS_DATA_HEAD(bh)->size = cpu_to_be32( ++ max(boff + tmp, be32_to_cpu(AFFS_DATA_HEAD(bh)->size))); + affs_fix_checksum(sb, bh); + mark_buffer_dirty_inode(bh, inode); + written += tmp; +@@ -748,7 +749,7 @@ static int affs_write_end_ofs(struct file *file, struct address_space *mapping, + if (buffer_new(bh)) { + AFFS_DATA_HEAD(bh)->ptype = cpu_to_be32(T_DATA); + AFFS_DATA_HEAD(bh)->key = cpu_to_be32(inode->i_ino); +- AFFS_DATA_HEAD(bh)->sequence = cpu_to_be32(bidx); ++ AFFS_DATA_HEAD(bh)->sequence = cpu_to_be32(bidx + 1); + AFFS_DATA_HEAD(bh)->size = cpu_to_be32(bsize); + AFFS_DATA_HEAD(bh)->next = 0; + bh->b_state &= ~(1UL << BH_New); +@@ -782,7 +783,7 @@ static int affs_write_end_ofs(struct file *file, struct address_space *mapping, + if (buffer_new(bh)) { + AFFS_DATA_HEAD(bh)->ptype = cpu_to_be32(T_DATA); + AFFS_DATA_HEAD(bh)->key = cpu_to_be32(inode->i_ino); +- AFFS_DATA_HEAD(bh)->sequence = cpu_to_be32(bidx); ++ AFFS_DATA_HEAD(bh)->sequence = 
cpu_to_be32(bidx + 1); + AFFS_DATA_HEAD(bh)->size = cpu_to_be32(tmp); + AFFS_DATA_HEAD(bh)->next = 0; + bh->b_state &= ~(1UL << BH_New); +diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c +index e9659e29d657d1..551faae77bc313 100644 +--- a/fs/btrfs/extent-tree.c ++++ b/fs/btrfs/extent-tree.c +@@ -5461,7 +5461,10 @@ static noinline int walk_up_proc(struct btrfs_trans_handle *trans, + ret = btrfs_dec_ref(trans, root, eb, 1); + else + ret = btrfs_dec_ref(trans, root, eb, 0); +- BUG_ON(ret); /* -ENOMEM */ ++ if (ret) { ++ btrfs_abort_transaction(trans, ret); ++ return ret; ++ } + if (is_fstree(root->root_key.objectid)) { + ret = btrfs_qgroup_trace_leaf_items(trans, eb); + if (ret) { +diff --git a/fs/cifs/cifs_debug.c b/fs/cifs/cifs_debug.c +index e7501533c2ec9a..8eb91bd18439b6 100644 +--- a/fs/cifs/cifs_debug.c ++++ b/fs/cifs/cifs_debug.c +@@ -183,6 +183,8 @@ static int cifs_debug_files_proc_show(struct seq_file *m, void *v) + list_for_each_entry(server, &cifs_tcp_ses_list, tcp_ses_list) { + list_for_each(tmp, &server->smb_ses_list) { + ses = list_entry(tmp, struct cifs_ses, smb_ses_list); ++ if (cifs_ses_exiting(ses)) ++ continue; + list_for_each(tmp1, &ses->tcon_list) { + tcon = list_entry(tmp1, struct cifs_tcon, tcon_list); + spin_lock(&tcon->open_file_lock); +diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h +index 2ee67a27020d9e..7b57cc5d702213 100644 +--- a/fs/cifs/cifsglob.h ++++ b/fs/cifs/cifsglob.h +@@ -2041,4 +2041,12 @@ static inline struct scatterlist *cifs_sg_set_buf(struct scatterlist *sg, + return sg; + } + ++static inline bool cifs_ses_exiting(struct cifs_ses *ses) ++{ ++ bool ret; ++ ++ ret = ses->status == CifsExiting; ++ return ret; ++} ++ + #endif /* _CIFS_GLOB_H */ +diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c +index 1cbfb74c53804a..96788385e1e735 100644 +--- a/fs/cifs/connect.c ++++ b/fs/cifs/connect.c +@@ -1582,9 +1582,8 @@ cifs_get_tcp_session(struct smb3_fs_context *ctx) + + static int match_session(struct cifs_ses *ses, 
struct smb3_fs_context *ctx) + { +- if (ctx->sectype != Unspecified && +- ctx->sectype != ses->sectype) +- return 0; ++ struct TCP_Server_Info *server = ses->server; ++ enum securityEnum ctx_sec, ses_sec; + + /* + * If an existing session is limited to less channels than +@@ -1597,11 +1596,19 @@ static int match_session(struct cifs_ses *ses, struct smb3_fs_context *ctx) + } + spin_unlock(&ses->chan_lock); + +- switch (ses->sectype) { ++ ctx_sec = server->ops->select_sectype(server, ctx->sectype); ++ ses_sec = server->ops->select_sectype(server, ses->sectype); ++ ++ if (ctx_sec != ses_sec) ++ return 0; ++ ++ switch (ctx_sec) { + case Kerberos: + if (!uid_eq(ctx->cred_uid, ses->cred_uid)) + return 0; + break; ++ case NTLMv2: ++ case RawNTLMSSP: + default: + /* NULL username means anonymous session */ + if (ses->user_name == NULL) { +diff --git a/fs/cifs/fs_context.c b/fs/cifs/fs_context.c +index fb3651513f83aa..24c42043a22718 100644 +--- a/fs/cifs/fs_context.c ++++ b/fs/cifs/fs_context.c +@@ -1055,21 +1055,21 @@ static int smb3_fs_context_parse_param(struct fs_context *fc, + ctx->got_wsize = true; + break; + case Opt_acregmax: +- ctx->acregmax = HZ * result.uint_32; +- if (ctx->acregmax > CIFS_MAX_ACTIMEO) { ++ if (result.uint_32 > CIFS_MAX_ACTIMEO / HZ) { + cifs_errorf(fc, "acregmax too large\n"); + goto cifs_parse_mount_err; + } ++ ctx->acregmax = HZ * result.uint_32; + break; + case Opt_acdirmax: +- ctx->acdirmax = HZ * result.uint_32; +- if (ctx->acdirmax > CIFS_MAX_ACTIMEO) { ++ if (result.uint_32 > CIFS_MAX_ACTIMEO / HZ) { + cifs_errorf(fc, "acdirmax too large\n"); + goto cifs_parse_mount_err; + } ++ ctx->acdirmax = HZ * result.uint_32; + break; + case Opt_actimeo: +- if (HZ * result.uint_32 > CIFS_MAX_ACTIMEO) { ++ if (result.uint_32 > CIFS_MAX_ACTIMEO / HZ) { + cifs_errorf(fc, "timeout too large\n"); + goto cifs_parse_mount_err; + } +@@ -1081,11 +1081,11 @@ static int smb3_fs_context_parse_param(struct fs_context *fc, + ctx->acdirmax = ctx->acregmax = HZ * 
result.uint_32; + break; + case Opt_closetimeo: +- ctx->closetimeo = HZ * result.uint_32; +- if (ctx->closetimeo > SMB3_MAX_DCLOSETIMEO) { ++ if (result.uint_32 > SMB3_MAX_DCLOSETIMEO / HZ) { + cifs_errorf(fc, "closetimeo too large\n"); + goto cifs_parse_mount_err; + } ++ ctx->closetimeo = HZ * result.uint_32; + break; + case Opt_echo_interval: + ctx->echo_interval = result.uint_32; +diff --git a/fs/exfat/fatent.c b/fs/exfat/fatent.c +index 9c116a58544dab..c5f6015a947cec 100644 +--- a/fs/exfat/fatent.c ++++ b/fs/exfat/fatent.c +@@ -264,7 +264,7 @@ int exfat_find_last_cluster(struct super_block *sb, struct exfat_chain *p_chain, + clu = next; + if (exfat_ent_get(sb, clu, &next)) + return -EIO; +- } while (next != EXFAT_EOF_CLUSTER); ++ } while (next != EXFAT_EOF_CLUSTER && count <= p_chain->size); + + if (p_chain->size != count) { + exfat_fs_error(sb, +diff --git a/fs/ext4/dir.c b/fs/ext4/dir.c +index 4720c30312d0e1..4d360368919894 100644 +--- a/fs/ext4/dir.c ++++ b/fs/ext4/dir.c +@@ -104,6 +104,9 @@ int __ext4_check_dir_entry(const char *function, unsigned int line, + else if (unlikely(le32_to_cpu(de->inode) > + le32_to_cpu(EXT4_SB(dir->i_sb)->s_es->s_inodes_count))) + error_msg = "inode out of bounds"; ++ else if (unlikely(next_offset == size && de->name_len == 1 && ++ de->name[0] == '.')) ++ error_msg = "'.' directory cannot be the last in data block"; + else + return 0; + +diff --git a/fs/ext4/super.c b/fs/ext4/super.c +index 126b582d85fce2..541cfd118fbc77 100644 +--- a/fs/ext4/super.c ++++ b/fs/ext4/super.c +@@ -6087,22 +6087,29 @@ static int ext4_statfs_project(struct super_block *sb, + dquot->dq_dqb.dqb_bhardlimit); + limit >>= sb->s_blocksize_bits; + +- if (limit && buf->f_blocks > limit) { ++ if (limit) { ++ uint64_t remaining = 0; ++ + curblock = (dquot->dq_dqb.dqb_curspace + + dquot->dq_dqb.dqb_rsvspace) >> sb->s_blocksize_bits; +- buf->f_blocks = limit; +- buf->f_bfree = buf->f_bavail = +- (buf->f_blocks > curblock) ? 
+- (buf->f_blocks - curblock) : 0; ++ if (limit > curblock) ++ remaining = limit - curblock; ++ ++ buf->f_blocks = min(buf->f_blocks, limit); ++ buf->f_bfree = min(buf->f_bfree, remaining); ++ buf->f_bavail = min(buf->f_bavail, remaining); + } + + limit = min_not_zero(dquot->dq_dqb.dqb_isoftlimit, + dquot->dq_dqb.dqb_ihardlimit); +- if (limit && buf->f_files > limit) { +- buf->f_files = limit; +- buf->f_ffree = +- (buf->f_files > dquot->dq_dqb.dqb_curinodes) ? +- (buf->f_files - dquot->dq_dqb.dqb_curinodes) : 0; ++ if (limit) { ++ uint64_t remaining = 0; ++ ++ if (limit > dquot->dq_dqb.dqb_curinodes) ++ remaining = limit - dquot->dq_dqb.dqb_curinodes; ++ ++ buf->f_files = min(buf->f_files, limit); ++ buf->f_ffree = min(buf->f_ffree, remaining); + } + + spin_unlock(&dquot->dq_dqb_lock); +diff --git a/fs/fuse/dax.c b/fs/fuse/dax.c +index 3e7aafe2e95339..d3ebb02626e2ff 100644 +--- a/fs/fuse/dax.c ++++ b/fs/fuse/dax.c +@@ -681,7 +681,6 @@ static int __fuse_dax_break_layouts(struct inode *inode, bool *retry, + 0, 0, fuse_wait_dax_page(inode)); + } + +-/* dmap_end == 0 leads to unmapping of whole file */ + int fuse_dax_break_layouts(struct inode *inode, u64 dmap_start, + u64 dmap_end) + { +diff --git a/fs/fuse/dir.c b/fs/fuse/dir.c +index 44d1c8cc58a42e..1b8bf81d6c16b5 100644 +--- a/fs/fuse/dir.c ++++ b/fs/fuse/dir.c +@@ -1333,7 +1333,7 @@ static const char *fuse_get_link(struct dentry *dentry, struct inode *inode, + goto out_err; + + if (fc->cache_symlinks) +- return page_get_link(dentry, inode, callback); ++ return page_get_link_raw(dentry, inode, callback); + + err = -ECHILD; + if (!dentry) +@@ -1600,7 +1600,7 @@ int fuse_do_setattr(struct dentry *dentry, struct iattr *attr, + if (FUSE_IS_DAX(inode) && is_truncate) { + filemap_invalidate_lock(mapping); + fault_blocked = true; +- err = fuse_dax_break_layouts(inode, 0, 0); ++ err = fuse_dax_break_layouts(inode, 0, -1); + if (err) { + filemap_invalidate_unlock(mapping); + return err; +diff --git a/fs/fuse/file.c 
b/fs/fuse/file.c +index 8702ef9ff8b9e1..40fdb4dac5bbc6 100644 +--- a/fs/fuse/file.c ++++ b/fs/fuse/file.c +@@ -242,7 +242,7 @@ int fuse_open_common(struct inode *inode, struct file *file, bool isdir) + + if (dax_truncate) { + filemap_invalidate_lock(inode->i_mapping); +- err = fuse_dax_break_layouts(inode, 0, 0); ++ err = fuse_dax_break_layouts(inode, 0, -1); + if (err) + goto out_inode_unlock; + } +@@ -2962,7 +2962,7 @@ static long fuse_file_fallocate(struct file *file, int mode, loff_t offset, + inode_lock(inode); + if (block_faults) { + filemap_invalidate_lock(inode->i_mapping); +- err = fuse_dax_break_layouts(inode, 0, 0); ++ err = fuse_dax_break_layouts(inode, 0, -1); + if (err) + goto out; + } +diff --git a/fs/isofs/dir.c b/fs/isofs/dir.c +index eb2f8273e6f15e..09df40b612fbf2 100644 +--- a/fs/isofs/dir.c ++++ b/fs/isofs/dir.c +@@ -147,7 +147,8 @@ static int do_isofs_readdir(struct inode *inode, struct file *file, + de = tmpde; + } + /* Basic sanity check, whether name doesn't exceed dir entry */ +- if (de_len < de->name_len[0] + ++ if (de_len < sizeof(struct iso_directory_record) || ++ de_len < de->name_len[0] + + sizeof(struct iso_directory_record)) { + printk(KERN_NOTICE "iso9660: Corrupted directory entry" + " in block %lu of inode %lu\n", block, +diff --git a/fs/jfs/jfs_dtree.c b/fs/jfs/jfs_dtree.c +index a3d1d560f4c865..417d1c2fc29112 100644 +--- a/fs/jfs/jfs_dtree.c ++++ b/fs/jfs/jfs_dtree.c +@@ -117,7 +117,8 @@ do { \ + if (!(RC)) { \ + if (((P)->header.nextindex > \ + (((BN) == 0) ? 
DTROOTMAXSLOT : (P)->header.maxslot)) || \ +- ((BN) && ((P)->header.maxslot > DTPAGEMAXSLOT))) { \ ++ ((BN) && (((P)->header.maxslot > DTPAGEMAXSLOT) || \ ++ ((P)->header.stblindex >= DTPAGEMAXSLOT)))) { \ + BT_PUTPAGE(MP); \ + jfs_error((IP)->i_sb, \ + "DT_GETPAGE: dtree page corrupt\n"); \ +diff --git a/fs/jfs/xattr.c b/fs/jfs/xattr.c +index 2b9b98ff2dd6f6..e6f2c619b30ac0 100644 +--- a/fs/jfs/xattr.c ++++ b/fs/jfs/xattr.c +@@ -559,11 +559,16 @@ static int ea_get(struct inode *inode, struct ea_buffer *ea_buf, int min_size) + + size_check: + if (EALIST_SIZE(ea_buf->xattr) != ea_size) { +- int size = clamp_t(int, ea_size, 0, EALIST_SIZE(ea_buf->xattr)); +- +- printk(KERN_ERR "ea_get: invalid extended attribute\n"); +- print_hex_dump(KERN_ERR, "", DUMP_PREFIX_ADDRESS, 16, 1, +- ea_buf->xattr, size, 1); ++ if (unlikely(EALIST_SIZE(ea_buf->xattr) > INT_MAX)) { ++ printk(KERN_ERR "ea_get: extended attribute size too large: %u > INT_MAX\n", ++ EALIST_SIZE(ea_buf->xattr)); ++ } else { ++ int size = clamp_t(int, ea_size, 0, EALIST_SIZE(ea_buf->xattr)); ++ ++ printk(KERN_ERR "ea_get: invalid extended attribute\n"); ++ print_hex_dump(KERN_ERR, "", DUMP_PREFIX_ADDRESS, 16, 1, ++ ea_buf->xattr, size, 1); ++ } + ea_release(inode, ea_buf); + rc = -EIO; + goto clean_up; +diff --git a/fs/ksmbd/auth.c b/fs/ksmbd/auth.c +index 3b776b5de7db6c..647692ca78a288 100644 +--- a/fs/ksmbd/auth.c ++++ b/fs/ksmbd/auth.c +@@ -1211,7 +1211,7 @@ int ksmbd_crypt_message(struct ksmbd_work *work, struct kvec *iov, + free_sg: + kfree(sg); + free_req: +- kfree(req); ++ aead_request_free(req); + free_ctx: + ksmbd_release_crypto_ctx(ctx); + return rc; +diff --git a/fs/ksmbd/mgmt/user_session.c b/fs/ksmbd/mgmt/user_session.c +index 1cee9733bdac54..f59714bfc819b7 100644 +--- a/fs/ksmbd/mgmt/user_session.c ++++ b/fs/ksmbd/mgmt/user_session.c +@@ -250,6 +250,22 @@ void ksmbd_sessions_deregister(struct ksmbd_conn *conn) + up_write(&conn->session_lock); + } + ++bool is_ksmbd_session_in_connection(struct 
ksmbd_conn *conn, ++ unsigned long long id) ++{ ++ struct ksmbd_session *sess; ++ ++ down_read(&conn->session_lock); ++ sess = xa_load(&conn->sessions, id); ++ if (sess) { ++ up_read(&conn->session_lock); ++ return true; ++ } ++ up_read(&conn->session_lock); ++ ++ return false; ++} ++ + struct ksmbd_session *ksmbd_session_lookup(struct ksmbd_conn *conn, + unsigned long long id) + { +diff --git a/fs/ksmbd/mgmt/user_session.h b/fs/ksmbd/mgmt/user_session.h +index ce91b1d698e710..f4da293c4dbb2c 100644 +--- a/fs/ksmbd/mgmt/user_session.h ++++ b/fs/ksmbd/mgmt/user_session.h +@@ -87,6 +87,8 @@ void ksmbd_session_destroy(struct ksmbd_session *sess); + struct ksmbd_session *ksmbd_session_lookup_slowpath(unsigned long long id); + struct ksmbd_session *ksmbd_session_lookup(struct ksmbd_conn *conn, + unsigned long long id); ++bool is_ksmbd_session_in_connection(struct ksmbd_conn *conn, ++ unsigned long long id); + int ksmbd_session_register(struct ksmbd_conn *conn, + struct ksmbd_session *sess); + void ksmbd_sessions_deregister(struct ksmbd_conn *conn); +diff --git a/fs/ksmbd/smb2pdu.c b/fs/ksmbd/smb2pdu.c +index 82b6be188ad4a6..3dfe0acf21a5da 100644 +--- a/fs/ksmbd/smb2pdu.c ++++ b/fs/ksmbd/smb2pdu.c +@@ -1725,44 +1725,38 @@ int smb2_sess_setup(struct ksmbd_work *work) + + if (conn->dialect != sess->dialect) { + rc = -EINVAL; +- ksmbd_user_session_put(sess); + goto out_err; + } + + if (!(req->hdr.Flags & SMB2_FLAGS_SIGNED)) { + rc = -EINVAL; +- ksmbd_user_session_put(sess); + goto out_err; + } + + if (strncmp(conn->ClientGUID, sess->ClientGUID, + SMB2_CLIENT_GUID_SIZE)) { + rc = -ENOENT; +- ksmbd_user_session_put(sess); + goto out_err; + } + + if (sess->state == SMB2_SESSION_IN_PROGRESS) { + rc = -EACCES; +- ksmbd_user_session_put(sess); + goto out_err; + } + + if (sess->state == SMB2_SESSION_EXPIRED) { + rc = -EFAULT; +- ksmbd_user_session_put(sess); + goto out_err; + } +- ksmbd_user_session_put(sess); + + if (ksmbd_conn_need_reconnect(conn)) { + rc = -EFAULT; ++ 
ksmbd_user_session_put(sess); + sess = NULL; + goto out_err; + } + +- sess = ksmbd_session_lookup(conn, sess_id); +- if (!sess) { ++ if (is_ksmbd_session_in_connection(conn, sess_id)) { + rc = -EACCES; + goto out_err; + } +@@ -1928,6 +1922,8 @@ int smb2_sess_setup(struct ksmbd_work *work) + + sess->last_active = jiffies; + sess->state = SMB2_SESSION_EXPIRED; ++ ksmbd_user_session_put(sess); ++ work->sess = NULL; + if (try_delay) { + ksmbd_conn_set_need_reconnect(conn); + ssleep(5); +diff --git a/fs/ksmbd/smbacl.c b/fs/ksmbd/smbacl.c +index 3a6c0abdb03527..ecf9db3d69c38a 100644 +--- a/fs/ksmbd/smbacl.c ++++ b/fs/ksmbd/smbacl.c +@@ -396,7 +396,9 @@ static void parse_dacl(struct user_namespace *user_ns, + if (num_aces <= 0) + return; + +- if (num_aces > ULONG_MAX / sizeof(struct smb_ace *)) ++ if (num_aces > (le16_to_cpu(pdacl->size) - sizeof(struct smb_acl)) / ++ (offsetof(struct smb_ace, sid) + ++ offsetof(struct smb_sid, sub_auth) + sizeof(__le16))) + return; + + ret = init_acl_state(&acl_state, num_aces); +@@ -430,6 +432,7 @@ static void parse_dacl(struct user_namespace *user_ns, + offsetof(struct smb_sid, sub_auth); + + if (end_of_acl - acl_base < acl_size || ++ ppace[i]->sid.num_subauth == 0 || + ppace[i]->sid.num_subauth > SID_MAX_SUB_AUTHORITIES || + (end_of_acl - acl_base < + acl_size + sizeof(__le32) * ppace[i]->sid.num_subauth) || +diff --git a/fs/namei.c b/fs/namei.c +index 05d45b9b59cb41..c188d525300d15 100644 +--- a/fs/namei.c ++++ b/fs/namei.c +@@ -5114,10 +5114,9 @@ const char *vfs_get_link(struct dentry *dentry, struct delayed_call *done) + EXPORT_SYMBOL(vfs_get_link); + + /* get the link contents into pagecache */ +-const char *page_get_link(struct dentry *dentry, struct inode *inode, +- struct delayed_call *callback) ++static char *__page_get_link(struct dentry *dentry, struct inode *inode, ++ struct delayed_call *callback) + { +- char *kaddr; + struct page *page; + struct address_space *mapping = inode->i_mapping; + +@@ -5136,8 +5135,23 @@ const 
char *page_get_link(struct dentry *dentry, struct inode *inode, + } + set_delayed_call(callback, page_put_link, page); + BUG_ON(mapping_gfp_mask(mapping) & __GFP_HIGHMEM); +- kaddr = page_address(page); +- nd_terminate_link(kaddr, inode->i_size, PAGE_SIZE - 1); ++ return page_address(page); ++} ++ ++const char *page_get_link_raw(struct dentry *dentry, struct inode *inode, ++ struct delayed_call *callback) ++{ ++ return __page_get_link(dentry, inode, callback); ++} ++EXPORT_SYMBOL_GPL(page_get_link_raw); ++ ++const char *page_get_link(struct dentry *dentry, struct inode *inode, ++ struct delayed_call *callback) ++{ ++ char *kaddr = __page_get_link(dentry, inode, callback); ++ ++ if (!IS_ERR(kaddr)) ++ nd_terminate_link(kaddr, inode->i_size, PAGE_SIZE - 1); + return kaddr; + } + +diff --git a/fs/nfs/delegation.c b/fs/nfs/delegation.c +index ac79ef0d43a73e..0c14ff09cfbe3c 100644 +--- a/fs/nfs/delegation.c ++++ b/fs/nfs/delegation.c +@@ -570,17 +570,6 @@ static bool nfs_delegation_need_return(struct nfs_delegation *delegation) + + if (test_and_clear_bit(NFS_DELEGATION_RETURN, &delegation->flags)) + ret = true; +- else if (test_bit(NFS_DELEGATION_RETURN_IF_CLOSED, &delegation->flags)) { +- struct inode *inode; +- +- spin_lock(&delegation->lock); +- inode = delegation->inode; +- if (inode && list_empty(&NFS_I(inode)->open_files)) +- ret = true; +- spin_unlock(&delegation->lock); +- } +- if (ret) +- clear_bit(NFS_DELEGATION_RETURN_IF_CLOSED, &delegation->flags); + if (test_bit(NFS_DELEGATION_RETURNING, &delegation->flags) || + test_bit(NFS_DELEGATION_RETURN_DELAYED, &delegation->flags) || + test_bit(NFS_DELEGATION_REVOKED, &delegation->flags)) +@@ -821,11 +810,25 @@ int nfs4_inode_make_writeable(struct inode *inode) + return nfs4_inode_return_delegation(inode); + } + +-static void nfs_mark_return_if_closed_delegation(struct nfs_server *server, +- struct nfs_delegation *delegation) ++static void ++nfs_mark_return_if_closed_delegation(struct nfs_server *server, ++ struct 
nfs_delegation *delegation) + { +- set_bit(NFS_DELEGATION_RETURN_IF_CLOSED, &delegation->flags); +- set_bit(NFS4CLNT_DELEGRETURN, &server->nfs_client->cl_state); ++ struct inode *inode; ++ ++ if (test_bit(NFS_DELEGATION_RETURN, &delegation->flags) || ++ test_bit(NFS_DELEGATION_RETURN_IF_CLOSED, &delegation->flags)) ++ return; ++ spin_lock(&delegation->lock); ++ inode = delegation->inode; ++ if (!inode) ++ goto out; ++ if (list_empty(&NFS_I(inode)->open_files)) ++ nfs_mark_return_delegation(server, delegation); ++ else ++ set_bit(NFS_DELEGATION_RETURN_IF_CLOSED, &delegation->flags); ++out: ++ spin_unlock(&delegation->lock); + } + + static bool nfs_server_mark_return_all_delegations(struct nfs_server *server) +diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c +index 1d18170d1f15cf..5e576b1f46c794 100644 +--- a/fs/nfsd/nfs4state.c ++++ b/fs/nfsd/nfs4state.c +@@ -1069,6 +1069,12 @@ static struct nfs4_ol_stateid * nfs4_alloc_open_stateid(struct nfs4_client *clp) + return openlockstateid(stid); + } + ++/* ++ * As the sc_free callback of deleg, this may be called by nfs4_put_stid ++ * in nfsd_break_one_deleg. ++ * Considering nfsd_break_one_deleg is called with the flc->flc_lock held, ++ * this function mustn't ever sleep. ++ */ + static void nfs4_free_deleg(struct nfs4_stid *stid) + { + struct nfs4_delegation *dp = delegstateid(stid); +@@ -4926,6 +4932,7 @@ static const struct nfsd4_callback_ops nfsd4_cb_recall_ops = { + + static void nfsd_break_one_deleg(struct nfs4_delegation *dp) + { ++ bool queued; + /* + * We're assuming the state code never drops its reference + * without first removing the lease. Since we're in this lease +@@ -4934,7 +4941,10 @@ static void nfsd_break_one_deleg(struct nfs4_delegation *dp) + * we know it's safe to take a reference. 
+ */ + refcount_inc(&dp->dl_stid.sc_count); +- WARN_ON_ONCE(!nfsd4_run_cb(&dp->dl_recall)); ++ queued = nfsd4_run_cb(&dp->dl_recall); ++ WARN_ON_ONCE(!queued); ++ if (!queued) ++ nfs4_put_stid(&dp->dl_stid); + } + + /* Called from break_lease() with flc_lock held. */ +@@ -6240,14 +6250,19 @@ deleg_reaper(struct nfsd_net *nn) + spin_lock(&nn->client_lock); + list_for_each_safe(pos, next, &nn->client_lru) { + clp = list_entry(pos, struct nfs4_client, cl_lru); +- if (clp->cl_state != NFSD4_ACTIVE || +- list_empty(&clp->cl_delegations) || +- atomic_read(&clp->cl_delegs_in_recall) || +- test_bit(NFSD4_CLIENT_CB_RECALL_ANY, &clp->cl_flags) || +- (ktime_get_boottime_seconds() - +- clp->cl_ra_time < 5)) { ++ ++ if (clp->cl_state != NFSD4_ACTIVE) ++ continue; ++ if (list_empty(&clp->cl_delegations)) ++ continue; ++ if (atomic_read(&clp->cl_delegs_in_recall)) ++ continue; ++ if (test_bit(NFSD4_CLIENT_CB_RECALL_ANY, &clp->cl_flags)) ++ continue; ++ if (ktime_get_boottime_seconds() - clp->cl_ra_time < 5) ++ continue; ++ if (clp->cl_cb_state != NFSD4_CB_UP) + continue; +- } + list_add(&clp->cl_ra_cblist, &cblist); + + /* release in nfsd4_cb_recall_any_release */ +diff --git a/fs/ntfs3/index.c b/fs/ntfs3/index.c +index 9cffd59e9735bb..cc2d29261859a5 100644 +--- a/fs/ntfs3/index.c ++++ b/fs/ntfs3/index.c +@@ -617,7 +617,7 @@ static bool index_hdr_check(const struct INDEX_HDR *hdr, u32 bytes) + u32 off = le32_to_cpu(hdr->de_off); + + if (!IS_ALIGNED(off, 8) || tot > bytes || end > tot || +- off + sizeof(struct NTFS_DE) > end) { ++ size_add(off, sizeof(struct NTFS_DE)) > end) { + /* incorrect index buffer. 
*/ + return false; + } +@@ -736,7 +736,7 @@ static struct NTFS_DE *hdr_find_e(const struct ntfs_index *indx, + if (end > total) + return NULL; + +- if (off + sizeof(struct NTFS_DE) > end) ++ if (size_add(off, sizeof(struct NTFS_DE)) > end) + return NULL; + + e = Add2Ptr(hdr, off); +diff --git a/fs/ocfs2/alloc.c b/fs/ocfs2/alloc.c +index 5d9ae17bd443f2..9c95d911a14b1a 100644 +--- a/fs/ocfs2/alloc.c ++++ b/fs/ocfs2/alloc.c +@@ -1796,6 +1796,14 @@ static int __ocfs2_find_path(struct ocfs2_caching_info *ci, + + el = root_el; + while (el->l_tree_depth) { ++ if (unlikely(le16_to_cpu(el->l_tree_depth) >= OCFS2_MAX_PATH_DEPTH)) { ++ ocfs2_error(ocfs2_metadata_cache_get_super(ci), ++ "Owner %llu has invalid tree depth %u in extent list\n", ++ (unsigned long long)ocfs2_metadata_cache_owner(ci), ++ le16_to_cpu(el->l_tree_depth)); ++ ret = -EROFS; ++ goto out; ++ } + if (le16_to_cpu(el->l_next_free_rec) == 0) { + ocfs2_error(ocfs2_metadata_cache_get_super(ci), + "Owner %llu has empty extent list at depth %u\n", +diff --git a/fs/proc/base.c b/fs/proc/base.c +index d0414e566d30a5..3405005199b613 100644 +--- a/fs/proc/base.c ++++ b/fs/proc/base.c +@@ -416,7 +416,7 @@ static const struct file_operations proc_pid_cmdline_ops = { + #ifdef CONFIG_KALLSYMS + /* + * Provides a wchan file via kallsyms in a proper one-value-per-file format. +- * Returns the resolved symbol. If that fails, simply return the address. ++ * Returns the resolved symbol to user space. 
+ */ + static int proc_pid_wchan(struct seq_file *m, struct pid_namespace *ns, + struct pid *pid, struct task_struct *task) +diff --git a/fs/proc/generic.c b/fs/proc/generic.c +index d32f69aaaa36f5..bf26fd08776ac1 100644 +--- a/fs/proc/generic.c ++++ b/fs/proc/generic.c +@@ -558,10 +558,16 @@ struct proc_dir_entry *proc_create_reg(const char *name, umode_t mode, + return p; + } + +-static inline void pde_set_flags(struct proc_dir_entry *pde) ++static void pde_set_flags(struct proc_dir_entry *pde) + { + if (pde->proc_ops->proc_flags & PROC_ENTRY_PERMANENT) + pde->flags |= PROC_ENTRY_PERMANENT; ++ if (pde->proc_ops->proc_read_iter) ++ pde->flags |= PROC_ENTRY_proc_read_iter; ++#ifdef CONFIG_COMPAT ++ if (pde->proc_ops->proc_compat_ioctl) ++ pde->flags |= PROC_ENTRY_proc_compat_ioctl; ++#endif + } + + struct proc_dir_entry *proc_create_data(const char *name, umode_t mode, +@@ -625,6 +631,7 @@ struct proc_dir_entry *proc_create_seq_private(const char *name, umode_t mode, + p->proc_ops = &proc_seq_ops; + p->seq_ops = ops; + p->state_size = state_size; ++ pde_set_flags(p); + return proc_register(parent, p); + } + EXPORT_SYMBOL(proc_create_seq_private); +@@ -655,6 +662,7 @@ struct proc_dir_entry *proc_create_single_data(const char *name, umode_t mode, + return NULL; + p->proc_ops = &proc_single_ops; + p->single_show = show; ++ pde_set_flags(p); + return proc_register(parent, p); + } + EXPORT_SYMBOL(proc_create_single_data); +diff --git a/fs/proc/inode.c b/fs/proc/inode.c +index 599eb724ff2d57..695471fa24fe5e 100644 +--- a/fs/proc/inode.c ++++ b/fs/proc/inode.c +@@ -670,13 +670,13 @@ struct inode *proc_get_inode(struct super_block *sb, struct proc_dir_entry *de) + + if (S_ISREG(inode->i_mode)) { + inode->i_op = de->proc_iops; +- if (de->proc_ops->proc_read_iter) ++ if (pde_has_proc_read_iter(de)) + inode->i_fop = &proc_iter_file_ops; + else + inode->i_fop = &proc_reg_file_ops; + #ifdef CONFIG_COMPAT +- if (de->proc_ops->proc_compat_ioctl) { +- if 
(de->proc_ops->proc_read_iter) ++ if (pde_has_proc_compat_ioctl(de)) { ++ if (pde_has_proc_read_iter(de)) + inode->i_fop = &proc_iter_file_ops_compat; + else + inode->i_fop = &proc_reg_file_ops_compat; +diff --git a/fs/proc/internal.h b/fs/proc/internal.h +index 03415f3fb3a818..407a3c54c27bd3 100644 +--- a/fs/proc/internal.h ++++ b/fs/proc/internal.h +@@ -79,6 +79,20 @@ static inline bool pde_is_permanent(const struct proc_dir_entry *pde) + return pde->flags & PROC_ENTRY_PERMANENT; + } + ++static inline bool pde_has_proc_read_iter(const struct proc_dir_entry *pde) ++{ ++ return pde->flags & PROC_ENTRY_proc_read_iter; ++} ++ ++static inline bool pde_has_proc_compat_ioctl(const struct proc_dir_entry *pde) ++{ ++#ifdef CONFIG_COMPAT ++ return pde->flags & PROC_ENTRY_proc_compat_ioctl; ++#else ++ return false; ++#endif ++} ++ + extern struct kmem_cache *proc_dir_entry_cache; + void pde_free(struct proc_dir_entry *pde); + +diff --git a/fs/vboxsf/super.c b/fs/vboxsf/super.c +index 44725007ccc2cc..20cfb2a9e87076 100644 +--- a/fs/vboxsf/super.c ++++ b/fs/vboxsf/super.c +@@ -21,7 +21,8 @@ + + #define VBOXSF_SUPER_MAGIC 0x786f4256 /* 'VBox' little endian */ + +-static const unsigned char VBSF_MOUNT_SIGNATURE[4] = "\000\377\376\375"; ++static const unsigned char VBSF_MOUNT_SIGNATURE[4] = { '\000', '\377', '\376', ++ '\375' }; + + static int follow_symlinks; + module_param(follow_symlinks, int, 0444); +diff --git a/include/drm/drm_dp_mst_helper.h b/include/drm/drm_dp_mst_helper.h +index ddb9231d030971..9911264e0b155c 100644 +--- a/include/drm/drm_dp_mst_helper.h ++++ b/include/drm/drm_dp_mst_helper.h +@@ -232,6 +232,13 @@ struct drm_dp_mst_branch { + */ + struct list_head destroy_next; + ++ /** ++ * @rad: Relative Address of the MST branch. ++ * For &drm_dp_mst_topology_mgr.mst_primary, it's rad[8] are all 0, ++ * unset and unused. 
For MST branches connected after mst_primary, ++ * in each element of rad[] the nibbles are ordered by the most ++ * signifcant 4 bits first and the least significant 4 bits second. ++ */ + u8 rad[8]; + u8 lct; + int num_ports; +diff --git a/include/linux/fs.h b/include/linux/fs.h +index d011dc742e3ef6..a111724982795b 100644 +--- a/include/linux/fs.h ++++ b/include/linux/fs.h +@@ -3387,6 +3387,8 @@ extern const struct file_operations generic_ro_fops; + + extern int readlink_copy(char __user *, int, const char *); + extern int page_readlink(struct dentry *, char __user *, int); ++extern const char *page_get_link_raw(struct dentry *, struct inode *, ++ struct delayed_call *); + extern const char *page_get_link(struct dentry *, struct inode *, + struct delayed_call *); + extern void page_put_link(void *); +diff --git a/include/linux/fwnode.h b/include/linux/fwnode.h +index 2d68606fb725de..f0833bafe6bd42 100644 +--- a/include/linux/fwnode.h ++++ b/include/linux/fwnode.h +@@ -67,7 +67,7 @@ struct fwnode_endpoint { + #define SWNODE_GRAPH_PORT_NAME_FMT "port@%u" + #define SWNODE_GRAPH_ENDPOINT_NAME_FMT "endpoint@%u" + +-#define NR_FWNODE_REFERENCE_ARGS 8 ++#define NR_FWNODE_REFERENCE_ARGS 16 + + /** + * struct fwnode_reference_args - Fwnode reference with additional arguments +diff --git a/include/linux/i8253.h b/include/linux/i8253.h +index bf169cfef7f12d..56c280eb2d4fd4 100644 +--- a/include/linux/i8253.h ++++ b/include/linux/i8253.h +@@ -21,7 +21,6 @@ + #define PIT_LATCH ((PIT_TICK_RATE + HZ/2) / HZ) + + extern raw_spinlock_t i8253_lock; +-extern bool i8253_clear_counter_on_shutdown; + extern struct clock_event_device i8253_clockevent; + extern void clockevent_i8253_init(bool oneshot); + extern void clockevent_i8253_disable(void); +diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h +index 1f22a30c09637b..976bca44bae0c9 100644 +--- a/include/linux/interrupt.h ++++ b/include/linux/interrupt.h +@@ -408,7 +408,7 @@ irq_calc_affinity_vectors(unsigned int 
minvec, unsigned int maxvec, + static inline void disable_irq_nosync_lockdep(unsigned int irq) + { + disable_irq_nosync(irq); +-#ifdef CONFIG_LOCKDEP ++#if defined(CONFIG_LOCKDEP) && !defined(CONFIG_PREEMPT_RT) + local_irq_disable(); + #endif + } +@@ -416,7 +416,7 @@ static inline void disable_irq_nosync_lockdep(unsigned int irq) + static inline void disable_irq_nosync_lockdep_irqsave(unsigned int irq, unsigned long *flags) + { + disable_irq_nosync(irq); +-#ifdef CONFIG_LOCKDEP ++#if defined(CONFIG_LOCKDEP) && !defined(CONFIG_PREEMPT_RT) + local_irq_save(*flags); + #endif + } +@@ -431,7 +431,7 @@ static inline void disable_irq_lockdep(unsigned int irq) + + static inline void enable_irq_lockdep(unsigned int irq) + { +-#ifdef CONFIG_LOCKDEP ++#if defined(CONFIG_LOCKDEP) && !defined(CONFIG_PREEMPT_RT) + local_irq_enable(); + #endif + enable_irq(irq); +@@ -439,7 +439,7 @@ static inline void enable_irq_lockdep(unsigned int irq) + + static inline void enable_irq_lockdep_irqrestore(unsigned int irq, unsigned long *flags) + { +-#ifdef CONFIG_LOCKDEP ++#if defined(CONFIG_LOCKDEP) && !defined(CONFIG_PREEMPT_RT) + local_irq_restore(*flags); + #endif + enable_irq(irq); +diff --git a/include/linux/pm_runtime.h b/include/linux/pm_runtime.h +index 9a10b6bac4a713..ed01ae76e2fa51 100644 +--- a/include/linux/pm_runtime.h ++++ b/include/linux/pm_runtime.h +@@ -46,6 +46,7 @@ static inline bool queue_pm_work(struct work_struct *work) + + extern int pm_generic_runtime_suspend(struct device *dev); + extern int pm_generic_runtime_resume(struct device *dev); ++extern bool pm_runtime_need_not_resume(struct device *dev); + extern int pm_runtime_force_suspend(struct device *dev); + extern int pm_runtime_force_resume(struct device *dev); + +@@ -234,6 +235,7 @@ static inline bool queue_pm_work(struct work_struct *work) { return false; } + + static inline int pm_generic_runtime_suspend(struct device *dev) { return 0; } + static inline int pm_generic_runtime_resume(struct device *dev) { return 0; 
} ++static inline bool pm_runtime_need_not_resume(struct device *dev) {return true; } + static inline int pm_runtime_force_suspend(struct device *dev) { return 0; } + static inline int pm_runtime_force_resume(struct device *dev) { return 0; } + +diff --git a/include/linux/proc_fs.h b/include/linux/proc_fs.h +index a2f25b26ae1ec9..47897617074635 100644 +--- a/include/linux/proc_fs.h ++++ b/include/linux/proc_fs.h +@@ -20,10 +20,13 @@ enum { + * If in doubt, ignore this flag. + */ + #ifdef MODULE +- PROC_ENTRY_PERMANENT = 0U, ++ PROC_ENTRY_PERMANENT = 0U, + #else +- PROC_ENTRY_PERMANENT = 1U << 0, ++ PROC_ENTRY_PERMANENT = 1U << 0, + #endif ++ ++ PROC_ENTRY_proc_read_iter = 1U << 1, ++ PROC_ENTRY_proc_compat_ioctl = 1U << 2, + }; + + struct proc_ops { +diff --git a/include/linux/sched/smt.h b/include/linux/sched/smt.h +index 59d3736c454cf8..737b50f40137bf 100644 +--- a/include/linux/sched/smt.h ++++ b/include/linux/sched/smt.h +@@ -12,7 +12,7 @@ static __always_inline bool sched_smt_active(void) + return static_branch_likely(&sched_smt_present); + } + #else +-static inline bool sched_smt_active(void) { return false; } ++static __always_inline bool sched_smt_active(void) { return false; } + #endif + + void arch_smt_update(void); +diff --git a/include/linux/slab.h b/include/linux/slab.h +index 083f3ce550bcae..3482c2ced139e0 100644 +--- a/include/linux/slab.h ++++ b/include/linux/slab.h +@@ -152,8 +152,8 @@ struct kmem_cache *kmem_cache_create_usercopy(const char *name, + slab_flags_t flags, + unsigned int useroffset, unsigned int usersize, + void (*ctor)(void *)); +-void kmem_cache_destroy(struct kmem_cache *); +-int kmem_cache_shrink(struct kmem_cache *); ++void kmem_cache_destroy(struct kmem_cache *s); ++int kmem_cache_shrink(struct kmem_cache *s); + + /* + * Please use this macro to create slab caches. 
Simply specify the +@@ -181,11 +181,25 @@ int kmem_cache_shrink(struct kmem_cache *); + /* + * Common kmalloc functions provided by all allocators + */ +-void * __must_check krealloc(const void *, size_t, gfp_t); +-void kfree(const void *); +-void kfree_sensitive(const void *); +-size_t __ksize(const void *); +-size_t ksize(const void *); ++void * __must_check krealloc(const void *objp, size_t new_size, gfp_t flags); ++void kfree(const void *objp); ++void kfree_sensitive(const void *objp); ++size_t __ksize(const void *objp); ++ ++/** ++ * ksize - Report actual allocation size of associated object ++ * ++ * @objp: Pointer returned from a prior kmalloc()-family allocation. ++ * ++ * This should not be used for writing beyond the originally requested ++ * allocation size. Either use krealloc() or round up the allocation size ++ * with kmalloc_size_roundup() prior to allocation. If this is used to ++ * access beyond the originally requested allocation size, UBSAN_BOUNDS ++ * and/or FORTIFY_SOURCE may trip, since they only know about the ++ * originally allocated size via the __alloc_size attribute. ++ */ ++size_t ksize(const void *objp); ++ + #ifdef CONFIG_PRINTK + bool kmem_valid_obj(void *object); + void kmem_dump_obj(void *object); +@@ -426,8 +440,8 @@ static __always_inline unsigned int __kmalloc_index(size_t size, + #endif /* !CONFIG_SLOB */ + + void *__kmalloc(size_t size, gfp_t flags) __assume_kmalloc_alignment __malloc; +-void *kmem_cache_alloc(struct kmem_cache *, gfp_t flags) __assume_slab_alignment __malloc; +-void kmem_cache_free(struct kmem_cache *, void *); ++void *kmem_cache_alloc(struct kmem_cache *s, gfp_t flags) __assume_slab_alignment __malloc; ++void kmem_cache_free(struct kmem_cache *s, void *objp); + + /* + * Bulk allocation and freeing operations. These are accelerated in an +@@ -436,8 +450,8 @@ void kmem_cache_free(struct kmem_cache *, void *); + * + * Note that interrupts must be enabled when calling these functions. 
+ */ +-void kmem_cache_free_bulk(struct kmem_cache *, size_t, void **); +-int kmem_cache_alloc_bulk(struct kmem_cache *, gfp_t, size_t, void **); ++void kmem_cache_free_bulk(struct kmem_cache *s, size_t size, void **p); ++int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size, void **p); + + /* + * Caller must not use kfree_bulk() on memory not originally allocated +@@ -450,7 +464,8 @@ static __always_inline void kfree_bulk(size_t size, void **p) + + #ifdef CONFIG_NUMA + void *__kmalloc_node(size_t size, gfp_t flags, int node) __assume_kmalloc_alignment __malloc; +-void *kmem_cache_alloc_node(struct kmem_cache *, gfp_t flags, int node) __assume_slab_alignment __malloc; ++void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t flags, int node) __assume_slab_alignment ++ __malloc; + #else + static __always_inline void *__kmalloc_node(size_t size, gfp_t flags, int node) + { +@@ -464,25 +479,24 @@ static __always_inline void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t f + #endif + + #ifdef CONFIG_TRACING +-extern void *kmem_cache_alloc_trace(struct kmem_cache *, gfp_t, size_t) __assume_slab_alignment __malloc; ++extern void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t flags, size_t size) ++ __assume_slab_alignment __malloc; + + #ifdef CONFIG_NUMA +-extern void *kmem_cache_alloc_node_trace(struct kmem_cache *s, +- gfp_t gfpflags, +- int node, size_t size) __assume_slab_alignment __malloc; ++extern void *kmem_cache_alloc_node_trace(struct kmem_cache *s, gfp_t gfpflags, ++ int node, size_t size) __assume_slab_alignment __malloc; + #else +-static __always_inline void * +-kmem_cache_alloc_node_trace(struct kmem_cache *s, +- gfp_t gfpflags, +- int node, size_t size) ++static __always_inline void *kmem_cache_alloc_node_trace(struct kmem_cache *s, ++ gfp_t gfpflags, int node, ++ size_t size) + { + return kmem_cache_alloc_trace(s, gfpflags, size); + } + #endif /* CONFIG_NUMA */ + + #else /* CONFIG_TRACING */ +-static __always_inline void 
*kmem_cache_alloc_trace(struct kmem_cache *s, +- gfp_t flags, size_t size) ++static __always_inline void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t flags, ++ size_t size) + { + void *ret = kmem_cache_alloc(s, flags); + +@@ -490,10 +504,8 @@ static __always_inline void *kmem_cache_alloc_trace(struct kmem_cache *s, + return ret; + } + +-static __always_inline void * +-kmem_cache_alloc_node_trace(struct kmem_cache *s, +- gfp_t gfpflags, +- int node, size_t size) ++static __always_inline void *kmem_cache_alloc_node_trace(struct kmem_cache *s, gfp_t gfpflags, ++ int node, size_t size) + { + void *ret = kmem_cache_alloc_node(s, gfpflags, node); + +@@ -502,13 +514,14 @@ kmem_cache_alloc_node_trace(struct kmem_cache *s, + } + #endif /* CONFIG_TRACING */ + +-extern void *kmalloc_order(size_t size, gfp_t flags, unsigned int order) __assume_page_alignment __malloc; ++extern void *kmalloc_order(size_t size, gfp_t flags, unsigned int order) __assume_page_alignment ++ __malloc; + + #ifdef CONFIG_TRACING +-extern void *kmalloc_order_trace(size_t size, gfp_t flags, unsigned int order) __assume_page_alignment __malloc; ++extern void *kmalloc_order_trace(size_t size, gfp_t flags, unsigned int order) ++ __assume_page_alignment __malloc; + #else +-static __always_inline void * +-kmalloc_order_trace(size_t size, gfp_t flags, unsigned int order) ++static __always_inline void *kmalloc_order_trace(size_t size, gfp_t flags, unsigned int order) + { + return kmalloc_order(size, flags, order); + } +@@ -638,8 +651,8 @@ static inline void *kmalloc_array(size_t n, size_t size, gfp_t flags) + * @new_size: new size of a single member of the array + * @flags: the type of memory to allocate (see kmalloc) + */ +-static __must_check inline void * +-krealloc_array(void *p, size_t new_n, size_t new_size, gfp_t flags) ++static inline void * __must_check krealloc_array(void *p, size_t new_n, size_t new_size, ++ gfp_t flags) + { + size_t bytes; + +@@ -668,7 +681,7 @@ static inline void 
*kcalloc(size_t n, size_t size, gfp_t flags) + * allocator where we care about the real place the memory allocation + * request comes from. + */ +-extern void *__kmalloc_track_caller(size_t, gfp_t, unsigned long); ++extern void *__kmalloc_track_caller(size_t size, gfp_t flags, unsigned long caller); + #define kmalloc_track_caller(size, flags) \ + __kmalloc_track_caller(size, flags, _RET_IP_) + +@@ -691,7 +704,8 @@ static inline void *kcalloc_node(size_t n, size_t size, gfp_t flags, int node) + + + #ifdef CONFIG_NUMA +-extern void *__kmalloc_node_track_caller(size_t, gfp_t, int, unsigned long); ++extern void *__kmalloc_node_track_caller(size_t size, gfp_t flags, int node, ++ unsigned long caller); + #define kmalloc_node_track_caller(size, flags, node) \ + __kmalloc_node_track_caller(size, flags, node, \ + _RET_IP_) +@@ -733,6 +747,23 @@ static inline void *kzalloc_node(size_t size, gfp_t flags, int node) + } + + unsigned int kmem_cache_size(struct kmem_cache *s); ++ ++/** ++ * kmalloc_size_roundup - Report allocation bucket size for the given size ++ * ++ * @size: Number of bytes to round up from. ++ * ++ * This returns the number of bytes that would be available in a kmalloc() ++ * allocation of @size bytes. For example, a 126 byte request would be ++ * rounded up to the next sized kmalloc bucket, 128 bytes. (This is strictly ++ * for the general-purpose kmalloc()-based allocations, and is not for the ++ * pre-sized kmem_cache_alloc()-based allocations.) ++ * ++ * Use this to kmalloc() the full bucket size ahead of time instead of using ++ * ksize() to query the size after an allocation. 
++ */ ++size_t kmalloc_size_roundup(size_t size); ++ + void __init kmem_cache_init_late(void); + + #if defined(CONFIG_SMP) && defined(CONFIG_SLAB) +diff --git a/include/net/ipv6.h b/include/net/ipv6.h +index 6b3309e55dcb17..608943944ce1a7 100644 +--- a/include/net/ipv6.h ++++ b/include/net/ipv6.h +@@ -1000,7 +1000,7 @@ int ip6_find_1stfragopt(struct sk_buff *skb, u8 **nexthdr); + int ip6_append_data(struct sock *sk, + int getfrag(void *from, char *to, int offset, int len, + int odd, struct sk_buff *skb), +- void *from, int length, int transhdrlen, ++ void *from, size_t length, int transhdrlen, + struct ipcm6_cookie *ipc6, struct flowi6 *fl6, + struct rt6_info *rt, unsigned int flags); + +@@ -1016,7 +1016,7 @@ struct sk_buff *__ip6_make_skb(struct sock *sk, struct sk_buff_head *queue, + struct sk_buff *ip6_make_skb(struct sock *sk, + int getfrag(void *from, char *to, int offset, + int len, int odd, struct sk_buff *skb), +- void *from, int length, int transhdrlen, ++ void *from, size_t length, int transhdrlen, + struct ipcm6_cookie *ipc6, struct flowi6 *fl6, + struct rt6_info *rt, unsigned int flags, + struct inet_cork_full *cork); +diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h +index fa13bf15feb3ea..f4257c2e96b6da 100644 +--- a/include/rdma/ib_verbs.h ++++ b/include/rdma/ib_verbs.h +@@ -2707,6 +2707,7 @@ struct ib_device { + * It is a NULL terminated array. 
+ */ + const struct attribute_group *groups[4]; ++ u8 hw_stats_attr_index; + + u64 uverbs_cmd_mask; + +diff --git a/include/sound/soc.h b/include/sound/soc.h +index 3f0369aae2faf4..42358dbc19b8ca 100644 +--- a/include/sound/soc.h ++++ b/include/sound/soc.h +@@ -1113,7 +1113,10 @@ void snd_soc_close_delayed_work(struct snd_soc_pcm_runtime *rtd); + + /* mixer control */ + struct soc_mixer_control { +- int min, max, platform_max; ++ /* Minimum and maximum specified as written to the hardware */ ++ int min, max; ++ /* Limited maximum value specified as presented through the control */ ++ int platform_max; + int reg, rreg; + unsigned int shift, rshift; + unsigned int sign_bit; +diff --git a/kernel/events/ring_buffer.c b/kernel/events/ring_buffer.c +index 98588e96b59193..3e1655374c2ed6 100644 +--- a/kernel/events/ring_buffer.c ++++ b/kernel/events/ring_buffer.c +@@ -19,7 +19,7 @@ + + static void perf_output_wakeup(struct perf_output_handle *handle) + { +- atomic_set(&handle->rb->poll, EPOLLIN); ++ atomic_set(&handle->rb->poll, EPOLLIN | EPOLLRDNORM); + + handle->event->pending_wakeup = 1; + irq_work_queue(&handle->event->pending_irq); +diff --git a/kernel/kexec_elf.c b/kernel/kexec_elf.c +index d3689632e8b90f..3a5c25b2adc94d 100644 +--- a/kernel/kexec_elf.c ++++ b/kernel/kexec_elf.c +@@ -390,7 +390,7 @@ int kexec_elf_load(struct kimage *image, struct elfhdr *ehdr, + struct kexec_buf *kbuf, + unsigned long *lowest_load_addr) + { +- unsigned long lowest_addr = UINT_MAX; ++ unsigned long lowest_addr = ULONG_MAX; + int ret; + size_t i; + +diff --git a/kernel/locking/semaphore.c b/kernel/locking/semaphore.c +index 9ee381e4d2a4d0..a26c915430ba0f 100644 +--- a/kernel/locking/semaphore.c ++++ b/kernel/locking/semaphore.c +@@ -29,6 +29,7 @@ + #include + #include + #include ++#include + #include + #include + #include +@@ -37,7 +38,7 @@ static noinline void __down(struct semaphore *sem); + static noinline int __down_interruptible(struct semaphore *sem); + static noinline int 
__down_killable(struct semaphore *sem);
+ static noinline int __down_timeout(struct semaphore *sem, long timeout);
+-static noinline void __up(struct semaphore *sem);
++static noinline void __up(struct semaphore *sem, struct wake_q_head *wake_q);
+ 
+ /**
+  * down - acquire the semaphore
+@@ -182,13 +183,16 @@ EXPORT_SYMBOL(down_timeout);
+ void up(struct semaphore *sem)
+ {
+ 	unsigned long flags;
++	DEFINE_WAKE_Q(wake_q);
+ 
+ 	raw_spin_lock_irqsave(&sem->lock, flags);
+ 	if (likely(list_empty(&sem->wait_list)))
+ 		sem->count++;
+ 	else
+-		__up(sem);
++		__up(sem, &wake_q);
+ 	raw_spin_unlock_irqrestore(&sem->lock, flags);
++	if (!wake_q_empty(&wake_q))
++		wake_up_q(&wake_q);
+ }
+ EXPORT_SYMBOL(up);
+ 
+@@ -256,11 +260,12 @@ static noinline int __sched __down_timeout(struct semaphore *sem, long timeout)
+ 	return __down_common(sem, TASK_UNINTERRUPTIBLE, timeout);
+ }
+ 
+-static noinline void __sched __up(struct semaphore *sem)
++static noinline void __sched __up(struct semaphore *sem,
++				  struct wake_q_head *wake_q)
+ {
+ 	struct semaphore_waiter *waiter = list_first_entry(&sem->wait_list,
+ 						struct semaphore_waiter, list);
+ 	list_del(&waiter->list);
+ 	waiter->up = true;
+-	wake_up_process(waiter->task);
++	wake_q_add(wake_q, waiter->task);
+ }
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index 70a7cf563f01e2..380938831b1302 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -941,9 +941,10 @@ void wake_up_q(struct wake_q_head *head)
+ 		struct task_struct *task;
+ 
+ 		task = container_of(node, struct task_struct, wake_q);
+-		/* Task can safely be re-inserted now: */
+ 		node = node->next;
+-		task->wake_q.next = NULL;
++		/* pairs with cmpxchg_relaxed() in __wake_q_add() */
++		WRITE_ONCE(task->wake_q.next, NULL);
++		/* Task can safely be re-inserted now.
*/ + + /* + * wake_up_process() executes a full barrier, which pairs with +diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c +index 0a6d6899be5bdb..66eb68c59f0bbe 100644 +--- a/kernel/sched/deadline.c ++++ b/kernel/sched/deadline.c +@@ -2610,7 +2610,7 @@ int sched_dl_global_validate(void) + * value smaller than the currently allocated bandwidth in + * any of the root_domains. + */ +- for_each_possible_cpu(cpu) { ++ for_each_online_cpu(cpu) { + rcu_read_lock_sched(); + + if (dl_bw_visited(cpu, gen)) +diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c +index 9e91f69012a737..2e4b63f3c6dda4 100644 +--- a/kernel/time/hrtimer.c ++++ b/kernel/time/hrtimer.c +@@ -144,11 +144,6 @@ static struct hrtimer_cpu_base migration_cpu_base = { + + #define migration_base migration_cpu_base.clock_base[0] + +-static inline bool is_migration_base(struct hrtimer_clock_base *base) +-{ +- return base == &migration_base; +-} +- + /* + * We are using hashed locking: holding per_cpu(hrtimer_bases)[n].lock + * means that all timers which are tied to this base via timer->base are +@@ -273,11 +268,6 @@ switch_hrtimer_base(struct hrtimer *timer, struct hrtimer_clock_base *base, + + #else /* CONFIG_SMP */ + +-static inline bool is_migration_base(struct hrtimer_clock_base *base) +-{ +- return false; +-} +- + static inline struct hrtimer_clock_base * + lock_hrtimer_base(const struct hrtimer *timer, unsigned long *flags) + { +@@ -1377,6 +1367,18 @@ static void hrtimer_sync_wait_running(struct hrtimer_cpu_base *cpu_base, + } + } + ++#ifdef CONFIG_SMP ++static __always_inline bool is_migration_base(struct hrtimer_clock_base *base) ++{ ++ return base == &migration_base; ++} ++#else ++static __always_inline bool is_migration_base(struct hrtimer_clock_base *base) ++{ ++ return false; ++} ++#endif ++ + /* + * This function is called on PREEMPT_RT kernels when the fast path + * deletion of a timer failed because the timer callback function was +diff --git a/kernel/trace/bpf_trace.c 
b/kernel/trace/bpf_trace.c +index 60acc3c76316f8..dba736defdfec9 100644 +--- a/kernel/trace/bpf_trace.c ++++ b/kernel/trace/bpf_trace.c +@@ -799,7 +799,7 @@ static int bpf_send_signal_common(u32 sig, enum pid_type type) + if (unlikely(is_global_init(current))) + return -EPERM; + +- if (!preemptible()) { ++ if (preempt_count() != 0 || irqs_disabled()) { + /* Do an early check on signal validity. Otherwise, + * the error is lost in deferred irq_work. + */ +diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c +index f9f0c198cb43cb..90a8dd91e2eb04 100644 +--- a/kernel/trace/ring_buffer.c ++++ b/kernel/trace/ring_buffer.c +@@ -5949,9 +5949,9 @@ static __init int rb_write_something(struct rb_test_data *data, bool nested) + /* Ignore dropped events before test starts. */ + if (started) { + if (nested) +- data->bytes_dropped += len; +- else + data->bytes_dropped_nested += len; ++ else ++ data->bytes_dropped += len; + } + return len; + } +diff --git a/kernel/trace/trace_events_synth.c b/kernel/trace/trace_events_synth.c +index 43da364737a040..ab54810bd8d99f 100644 +--- a/kernel/trace/trace_events_synth.c ++++ b/kernel/trace/trace_events_synth.c +@@ -293,7 +293,7 @@ static const char *synth_field_fmt(char *type) + else if (strcmp(type, "gfp_t") == 0) + fmt = "%x"; + else if (synth_field_is_string(type)) +- fmt = "%.*s"; ++ fmt = "%s"; + else if (synth_field_is_stack(type)) + fmt = "%s"; + +@@ -856,6 +856,39 @@ static struct trace_event_fields synth_event_fields_array[] = { + {} + }; + ++static int synth_event_reg(struct trace_event_call *call, ++ enum trace_reg type, void *data) ++{ ++ struct synth_event *event = container_of(call, struct synth_event, call); ++ int ret; ++ ++ switch (type) { ++#ifdef CONFIG_PERF_EVENTS ++ case TRACE_REG_PERF_REGISTER: ++#endif ++ case TRACE_REG_REGISTER: ++ if (!try_module_get(event->mod)) ++ return -EBUSY; ++ break; ++ default: ++ break; ++ } ++ ++ ret = trace_event_reg(call, type, data); ++ ++ switch (type) { ++#ifdef 
CONFIG_PERF_EVENTS ++ case TRACE_REG_PERF_UNREGISTER: ++#endif ++ case TRACE_REG_UNREGISTER: ++ module_put(event->mod); ++ break; ++ default: ++ break; ++ } ++ return ret; ++} ++ + static int register_synth_event(struct synth_event *event) + { + struct trace_event_call *call = &event->call; +@@ -885,7 +918,7 @@ static int register_synth_event(struct synth_event *event) + goto out; + } + call->flags = TRACE_EVENT_FL_TRACEPOINT; +- call->class->reg = trace_event_reg; ++ call->class->reg = synth_event_reg; + call->class->probe = trace_event_raw_event_synth; + call->data = event; + call->tp = event->tp; +diff --git a/kernel/trace/trace_functions_graph.c b/kernel/trace/trace_functions_graph.c +index 6b5ff3ba4251f5..e81b46a390aa9a 100644 +--- a/kernel/trace/trace_functions_graph.c ++++ b/kernel/trace/trace_functions_graph.c +@@ -1244,6 +1244,7 @@ void graph_trace_close(struct trace_iterator *iter) + if (data) { + free_percpu(data->cpu_data); + kfree(data); ++ iter->private = NULL; + } + } + +diff --git a/kernel/trace/trace_irqsoff.c b/kernel/trace/trace_irqsoff.c +index ba37f768e2f273..6c9db857fe0ea2 100644 +--- a/kernel/trace/trace_irqsoff.c ++++ b/kernel/trace/trace_irqsoff.c +@@ -231,8 +231,6 @@ static void irqsoff_trace_open(struct trace_iterator *iter) + { + if (is_graph(iter->tr)) + graph_trace_open(iter); +- else +- iter->private = NULL; + } + + static void irqsoff_trace_close(struct trace_iterator *iter) +diff --git a/kernel/trace/trace_osnoise.c b/kernel/trace/trace_osnoise.c +index 20a3e201157d61..5959b1d4ee4594 100644 +--- a/kernel/trace/trace_osnoise.c ++++ b/kernel/trace/trace_osnoise.c +@@ -1560,7 +1560,6 @@ static int start_kthread(unsigned int cpu) + + if (IS_ERR(kthread)) { + pr_err(BANNER "could not start sampling thread\n"); +- stop_per_cpu_kthreads(); + return -ENOMEM; + } + +diff --git a/kernel/trace/trace_sched_wakeup.c b/kernel/trace/trace_sched_wakeup.c +index b239bfaa51ae89..2402de520eca71 100644 +--- a/kernel/trace/trace_sched_wakeup.c ++++ 
b/kernel/trace/trace_sched_wakeup.c +@@ -168,8 +168,6 @@ static void wakeup_trace_open(struct trace_iterator *iter) + { + if (is_graph(iter->tr)) + graph_trace_open(iter); +- else +- iter->private = NULL; + } + + static void wakeup_trace_close(struct trace_iterator *iter) +diff --git a/kernel/watch_queue.c b/kernel/watch_queue.c +index ae31bf8d2feb15..6ed37bc95cb7bd 100644 +--- a/kernel/watch_queue.c ++++ b/kernel/watch_queue.c +@@ -274,6 +274,15 @@ long watch_queue_set_size(struct pipe_inode_info *pipe, unsigned int nr_notes) + if (ret < 0) + goto error; + ++ /* ++ * pipe_resize_ring() does not update nr_accounted for watch_queue ++ * pipes, because the above vastly overprovisions. Set nr_accounted on ++ * and max_usage this pipe to the number that was actually charged to ++ * the user above via account_pipe_buffers. ++ */ ++ pipe->max_usage = nr_pages; ++ pipe->nr_accounted = nr_pages; ++ + ret = -ENOMEM; + pages = kcalloc(sizeof(struct page *), nr_pages, GFP_KERNEL); + if (!pages) +diff --git a/lib/842/842_compress.c b/lib/842/842_compress.c +index c02baa4168e168..055356508d97c5 100644 +--- a/lib/842/842_compress.c ++++ b/lib/842/842_compress.c +@@ -532,6 +532,8 @@ int sw842_compress(const u8 *in, unsigned int ilen, + } + if (repeat_count) { + ret = add_repeat_template(p, repeat_count); ++ if (ret) ++ return ret; + repeat_count = 0; + if (next == last) /* reached max repeat bits */ + goto repeat; +diff --git a/lib/buildid.c b/lib/buildid.c +index cc5da016b23517..391382bd054138 100644 +--- a/lib/buildid.c ++++ b/lib/buildid.c +@@ -5,6 +5,7 @@ + #include + #include + #include ++#include + + #define BUILD_ID 3 + +@@ -157,6 +158,10 @@ int build_id_parse(struct vm_area_struct *vma, unsigned char *build_id, + if (!vma->vm_file) + return -EINVAL; + ++ /* reject secretmem folios created with memfd_secret() */ ++ if (vma_is_secretmem(vma)) ++ return -EFAULT; ++ + page = find_get_page(vma->vm_file->f_mapping, 0); + if (!page) + return -EFAULT; /* page not mapped */ +diff 
--git a/mm/slab.c b/mm/slab.c
+index f5b2246f832da5..e53e50d6c29bcd 100644
+--- a/mm/slab.c
++++ b/mm/slab.c
+@@ -4226,11 +4226,14 @@ void __check_heap_object(const void *ptr, unsigned long n, struct page *page,
+ #endif /* CONFIG_HARDENED_USERCOPY */
+ 
+ /**
+- * __ksize -- Uninstrumented ksize.
++ * __ksize -- Report full size of underlying allocation
+  * @objp: pointer to the object
+  *
+- * Unlike ksize(), __ksize() is uninstrumented, and does not provide the same
+- * safety checks as ksize() with KASAN instrumentation enabled.
++ * This should only be used internally to query the true size of allocations.
++ * It is not meant to be a way to discover the usable size of an allocation
++ * after the fact. Instead, use kmalloc_size_roundup(). Using memory beyond
++ * the originally requested allocation size may trigger KASAN, UBSAN_BOUNDS,
++ * and/or FORTIFY_SOURCE.
+  *
+  * Return: size of the actual memory used by @objp in bytes
+  */
+diff --git a/mm/slab_common.c b/mm/slab_common.c
+index f684b06649c3ee..315e83f5daea31 100644
+--- a/mm/slab_common.c
++++ b/mm/slab_common.c
+@@ -758,6 +758,26 @@ struct kmem_cache *kmalloc_slab(size_t size, gfp_t flags)
+ 	return kmalloc_caches[kmalloc_type(flags)][index];
+ }
+ 
++size_t kmalloc_size_roundup(size_t size)
++{
++	struct kmem_cache *c;
++
++	/* Short-circuit the 0 size case. */
++	if (unlikely(size == 0))
++		return 0;
++	/* Short-circuit saturated "too-large" case. */
++	if (unlikely(size == SIZE_MAX))
++		return SIZE_MAX;
++	/* Above the smaller buckets, size is a multiple of page size. */
++	if (size > KMALLOC_MAX_CACHE_SIZE)
++		return PAGE_SIZE << get_order(size);
++
++	/* The flags don't matter since size_index is common to all. */
++	c = kmalloc_slab(size, GFP_KERNEL);
++	return c ? c->object_size : 0;
++}
++EXPORT_SYMBOL(kmalloc_size_roundup);
++
+ #ifdef CONFIG_ZONE_DMA
+ #define KMALLOC_DMA_NAME(sz)	.name[KMALLOC_DMA] = "dma-kmalloc-" #sz,
+ #else
+@@ -1285,20 +1305,6 @@ void kfree_sensitive(const void *p)
+ }
+ EXPORT_SYMBOL(kfree_sensitive);
+ 
+-/**
+- * ksize - get the actual amount of memory allocated for a given object
+- * @objp: Pointer to the object
+- *
+- * kmalloc may internally round up allocations and return more memory
+- * than requested. ksize() can be used to determine the actual amount of
+- * memory allocated. The caller may use this additional memory, even though
+- * a smaller amount of memory was initially specified with the kmalloc call.
+- * The caller must guarantee that objp points to a valid object previously
+- * allocated with either kmalloc() or kmem_cache_alloc(). The object
+- * must not be freed during the duration of the call.
+- *
+- * Return: size of the actual memory used by @objp in bytes
+- */
+ size_t ksize(const void *objp)
+ {
+ 	size_t size;
+diff --git a/mm/slob.c b/mm/slob.c
+index f3fc15df971af4..d4c80bf1930d13 100644
+--- a/mm/slob.c
++++ b/mm/slob.c
+@@ -567,6 +567,20 @@ void kfree(const void *block)
+ }
+ EXPORT_SYMBOL(kfree);
+ 
++size_t kmalloc_size_roundup(size_t size)
++{
++	/* Short-circuit the 0 size case. */
++	if (unlikely(size == 0))
++		return 0;
++	/* Short-circuit saturated "too-large" case.
*/ ++ if (unlikely(size == SIZE_MAX)) ++ return SIZE_MAX; ++ ++ return ALIGN(size, ARCH_KMALLOC_MINALIGN); ++} ++ ++EXPORT_SYMBOL(kmalloc_size_roundup); ++ + /* can't use ksize for kmem_cache_alloc memory, only kmalloc */ + size_t __ksize(const void *block) + { +diff --git a/net/8021q/vlan_netlink.c b/net/8021q/vlan_netlink.c +index dca1ec705b6c62..a3b68243fd4b18 100644 +--- a/net/8021q/vlan_netlink.c ++++ b/net/8021q/vlan_netlink.c +@@ -186,10 +186,14 @@ static int vlan_newlink(struct net *src_net, struct net_device *dev, + else if (dev->mtu > max_mtu) + return -EINVAL; + ++ /* Note: If this initial vlan_changelink() fails, we need ++ * to call vlan_dev_free_egress_priority() to free memory. ++ */ + err = vlan_changelink(dev, tb, data, extack); +- if (err) +- return err; +- err = register_vlan_dev(dev, extack); ++ ++ if (!err) ++ err = register_vlan_dev(dev, extack); ++ + if (err) + vlan_dev_free_egress_priority(dev); + return err; +diff --git a/net/atm/lec.c b/net/atm/lec.c +index 7226c784dbe0c7..ca9952c52fb5c1 100644 +--- a/net/atm/lec.c ++++ b/net/atm/lec.c +@@ -181,6 +181,7 @@ static void + lec_send(struct atm_vcc *vcc, struct sk_buff *skb) + { + struct net_device *dev = skb->dev; ++ unsigned int len = skb->len; + + ATM_SKB(skb)->vcc = vcc; + atm_account_tx(vcc, skb); +@@ -191,7 +192,7 @@ lec_send(struct atm_vcc *vcc, struct sk_buff *skb) + } + + dev->stats.tx_packets++; +- dev->stats.tx_bytes += skb->len; ++ dev->stats.tx_bytes += len; + } + + static void lec_tx_timeout(struct net_device *dev, unsigned int txqueue) +diff --git a/net/atm/mpc.c b/net/atm/mpc.c +index 033871e718a34f..583c27131b7d64 100644 +--- a/net/atm/mpc.c ++++ b/net/atm/mpc.c +@@ -1314,6 +1314,8 @@ static void MPOA_cache_impos_rcvd(struct k_message *msg, + holding_time = msg->content.eg_info.holding_time; + dprintk("(%s) entry = %p, holding_time = %u\n", + mpc->dev->name, entry, holding_time); ++ if (entry == NULL && !holding_time) ++ return; + if (entry == NULL && holding_time) { + entry = 
mpc->eg_ops->add_entry(msg, mpc); + mpc->eg_ops->put(entry); +diff --git a/net/batman-adv/bat_iv_ogm.c b/net/batman-adv/bat_iv_ogm.c +index f94f538fa38274..db179c6a59124f 100644 +--- a/net/batman-adv/bat_iv_ogm.c ++++ b/net/batman-adv/bat_iv_ogm.c +@@ -325,8 +325,7 @@ batadv_iv_ogm_aggr_packet(int buff_pos, int packet_len, + /* check if there is enough space for the optional TVLV */ + next_buff_pos += ntohs(ogm_packet->tvlv_len); + +- return (next_buff_pos <= packet_len) && +- (next_buff_pos <= BATADV_MAX_AGGREGATION_BYTES); ++ return next_buff_pos <= packet_len; + } + + /* send a batman ogm to a given interface */ +diff --git a/net/batman-adv/bat_v_ogm.c b/net/batman-adv/bat_v_ogm.c +index 4fe6df68dfcb75..c357cf72396ebe 100644 +--- a/net/batman-adv/bat_v_ogm.c ++++ b/net/batman-adv/bat_v_ogm.c +@@ -840,8 +840,7 @@ batadv_v_ogm_aggr_packet(int buff_pos, int packet_len, + /* check if there is enough space for the optional TVLV */ + next_buff_pos += ntohs(ogm2_packet->tvlv_len); + +- return (next_buff_pos <= packet_len) && +- (next_buff_pos <= BATADV_MAX_AGGREGATION_BYTES); ++ return next_buff_pos <= packet_len; + } + + /** +diff --git a/net/bluetooth/6lowpan.c b/net/bluetooth/6lowpan.c +index 580b0940f067a7..c4a1b478cf3e92 100644 +--- a/net/bluetooth/6lowpan.c ++++ b/net/bluetooth/6lowpan.c +@@ -824,11 +824,16 @@ static struct sk_buff *chan_alloc_skb_cb(struct l2cap_chan *chan, + unsigned long hdr_len, + unsigned long len, int nb) + { ++ struct sk_buff *skb; ++ + /* Note that we must allocate using GFP_ATOMIC here as + * this function is called originally from netdev hard xmit + * function in atomic context. 
+ */ +- return bt_skb_alloc(hdr_len + len, GFP_ATOMIC); ++ skb = bt_skb_alloc(hdr_len + len, GFP_ATOMIC); ++ if (!skb) ++ return ERR_PTR(-ENOMEM); ++ return skb; + } + + static void chan_suspend_cb(struct l2cap_chan *chan) +diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c +index 50e21f67a73d4f..83af50c3838a9c 100644 +--- a/net/bluetooth/hci_event.c ++++ b/net/bluetooth/hci_event.c +@@ -4859,19 +4859,16 @@ static void hci_user_confirm_request_evt(struct hci_dev *hdev, + goto unlock; + } + +- /* If no side requires MITM protection; auto-accept */ ++ /* If no side requires MITM protection; use JUST_CFM method */ + if ((!loc_mitm || conn->remote_cap == HCI_IO_NO_INPUT_OUTPUT) && + (!rem_mitm || conn->io_capability == HCI_IO_NO_INPUT_OUTPUT)) { + +- /* If we're not the initiators request authorization to +- * proceed from user space (mgmt_user_confirm with +- * confirm_hint set to 1). The exception is if neither +- * side had MITM or if the local IO capability is +- * NoInputNoOutput, in which case we do auto-accept ++ /* If we're not the initiator of request authorization and the ++ * local IO capability is not NoInputNoOutput, use JUST_WORKS ++ * method (mgmt_user_confirm with confirm_hint set to 1). 
+ */ + if (!test_bit(HCI_CONN_AUTH_PEND, &conn->flags) && +- conn->io_capability != HCI_IO_NO_INPUT_OUTPUT && +- (loc_mitm || rem_mitm)) { ++ conn->io_capability != HCI_IO_NO_INPUT_OUTPUT) { + BT_DBG("Confirming auto-accept as acceptor"); + confirm_hint = 1; + goto confirm; +diff --git a/net/can/af_can.c b/net/can/af_can.c +index 4e728b3da40b33..edf01b73d2878c 100644 +--- a/net/can/af_can.c ++++ b/net/can/af_can.c +@@ -288,8 +288,8 @@ int can_send(struct sk_buff *skb, int loop) + netif_rx_ni(newskb); + + /* update statistics */ +- pkg_stats->tx_frames++; +- pkg_stats->tx_frames_delta++; ++ atomic_long_inc(&pkg_stats->tx_frames); ++ atomic_long_inc(&pkg_stats->tx_frames_delta); + + return 0; + +@@ -649,8 +649,8 @@ static void can_receive(struct sk_buff *skb, struct net_device *dev) + int matches; + + /* update statistics */ +- pkg_stats->rx_frames++; +- pkg_stats->rx_frames_delta++; ++ atomic_long_inc(&pkg_stats->rx_frames); ++ atomic_long_inc(&pkg_stats->rx_frames_delta); + + /* create non-zero unique skb identifier together with *skb */ + while (!(can_skb_prv(skb)->skbcnt)) +@@ -671,8 +671,8 @@ static void can_receive(struct sk_buff *skb, struct net_device *dev) + consume_skb(skb); + + if (matches > 0) { +- pkg_stats->matches++; +- pkg_stats->matches_delta++; ++ atomic_long_inc(&pkg_stats->matches); ++ atomic_long_inc(&pkg_stats->matches_delta); + } + } + +diff --git a/net/can/af_can.h b/net/can/af_can.h +index 7c2d9161e22457..22f3352c77fece 100644 +--- a/net/can/af_can.h ++++ b/net/can/af_can.h +@@ -66,9 +66,9 @@ struct receiver { + struct can_pkg_stats { + unsigned long jiffies_init; + +- unsigned long rx_frames; +- unsigned long tx_frames; +- unsigned long matches; ++ atomic_long_t rx_frames; ++ atomic_long_t tx_frames; ++ atomic_long_t matches; + + unsigned long total_rx_rate; + unsigned long total_tx_rate; +@@ -82,9 +82,9 @@ struct can_pkg_stats { + unsigned long max_tx_rate; + unsigned long max_rx_match_ratio; + +- unsigned long rx_frames_delta; +- unsigned 
long tx_frames_delta; +- unsigned long matches_delta; ++ atomic_long_t rx_frames_delta; ++ atomic_long_t tx_frames_delta; ++ atomic_long_t matches_delta; + }; + + /* persistent statistics */ +diff --git a/net/can/proc.c b/net/can/proc.c +index b3099f0a3cb824..0533a3c4ff0e18 100644 +--- a/net/can/proc.c ++++ b/net/can/proc.c +@@ -118,6 +118,13 @@ void can_stat_update(struct timer_list *t) + struct can_pkg_stats *pkg_stats = net->can.pkg_stats; + unsigned long j = jiffies; /* snapshot */ + ++ long rx_frames = atomic_long_read(&pkg_stats->rx_frames); ++ long tx_frames = atomic_long_read(&pkg_stats->tx_frames); ++ long matches = atomic_long_read(&pkg_stats->matches); ++ long rx_frames_delta = atomic_long_read(&pkg_stats->rx_frames_delta); ++ long tx_frames_delta = atomic_long_read(&pkg_stats->tx_frames_delta); ++ long matches_delta = atomic_long_read(&pkg_stats->matches_delta); ++ + /* restart counting in timer context on user request */ + if (user_reset) + can_init_stats(net); +@@ -127,35 +134,33 @@ void can_stat_update(struct timer_list *t) + can_init_stats(net); + + /* prevent overflow in calc_rate() */ +- if (pkg_stats->rx_frames > (ULONG_MAX / HZ)) ++ if (rx_frames > (LONG_MAX / HZ)) + can_init_stats(net); + + /* prevent overflow in calc_rate() */ +- if (pkg_stats->tx_frames > (ULONG_MAX / HZ)) ++ if (tx_frames > (LONG_MAX / HZ)) + can_init_stats(net); + + /* matches overflow - very improbable */ +- if (pkg_stats->matches > (ULONG_MAX / 100)) ++ if (matches > (LONG_MAX / 100)) + can_init_stats(net); + + /* calc total values */ +- if (pkg_stats->rx_frames) +- pkg_stats->total_rx_match_ratio = (pkg_stats->matches * 100) / +- pkg_stats->rx_frames; ++ if (rx_frames) ++ pkg_stats->total_rx_match_ratio = (matches * 100) / rx_frames; + + pkg_stats->total_tx_rate = calc_rate(pkg_stats->jiffies_init, j, +- pkg_stats->tx_frames); ++ tx_frames); + pkg_stats->total_rx_rate = calc_rate(pkg_stats->jiffies_init, j, +- pkg_stats->rx_frames); ++ rx_frames); + + /* calc current 
values */ +- if (pkg_stats->rx_frames_delta) ++ if (rx_frames_delta) + pkg_stats->current_rx_match_ratio = +- (pkg_stats->matches_delta * 100) / +- pkg_stats->rx_frames_delta; ++ (matches_delta * 100) / rx_frames_delta; + +- pkg_stats->current_tx_rate = calc_rate(0, HZ, pkg_stats->tx_frames_delta); +- pkg_stats->current_rx_rate = calc_rate(0, HZ, pkg_stats->rx_frames_delta); ++ pkg_stats->current_tx_rate = calc_rate(0, HZ, tx_frames_delta); ++ pkg_stats->current_rx_rate = calc_rate(0, HZ, rx_frames_delta); + + /* check / update maximum values */ + if (pkg_stats->max_tx_rate < pkg_stats->current_tx_rate) +@@ -168,9 +173,9 @@ void can_stat_update(struct timer_list *t) + pkg_stats->max_rx_match_ratio = pkg_stats->current_rx_match_ratio; + + /* clear values for 'current rate' calculation */ +- pkg_stats->tx_frames_delta = 0; +- pkg_stats->rx_frames_delta = 0; +- pkg_stats->matches_delta = 0; ++ atomic_long_set(&pkg_stats->tx_frames_delta, 0); ++ atomic_long_set(&pkg_stats->rx_frames_delta, 0); ++ atomic_long_set(&pkg_stats->matches_delta, 0); + + /* restart timer (one second) */ + mod_timer(&net->can.stattimer, round_jiffies(jiffies + HZ)); +@@ -214,9 +219,12 @@ static int can_stats_proc_show(struct seq_file *m, void *v) + struct can_rcv_lists_stats *rcv_lists_stats = net->can.rcv_lists_stats; + + seq_putc(m, '\n'); +- seq_printf(m, " %8ld transmitted frames (TXF)\n", pkg_stats->tx_frames); +- seq_printf(m, " %8ld received frames (RXF)\n", pkg_stats->rx_frames); +- seq_printf(m, " %8ld matched frames (RXMF)\n", pkg_stats->matches); ++ seq_printf(m, " %8ld transmitted frames (TXF)\n", ++ atomic_long_read(&pkg_stats->tx_frames)); ++ seq_printf(m, " %8ld received frames (RXF)\n", ++ atomic_long_read(&pkg_stats->rx_frames)); ++ seq_printf(m, " %8ld matched frames (RXMF)\n", ++ atomic_long_read(&pkg_stats->matches)); + + seq_putc(m, '\n'); + +diff --git a/net/core/neighbour.c b/net/core/neighbour.c +index 9549738b818428..b83878b5bf7887 100644 +--- a/net/core/neighbour.c 
++++ b/net/core/neighbour.c +@@ -2173,6 +2173,7 @@ static const struct nla_policy nl_neightbl_policy[NDTA_MAX+1] = { + static const struct nla_policy nl_ntbl_parm_policy[NDTPA_MAX+1] = { + [NDTPA_IFINDEX] = { .type = NLA_U32 }, + [NDTPA_QUEUE_LEN] = { .type = NLA_U32 }, ++ [NDTPA_QUEUE_LENBYTES] = { .type = NLA_U32 }, + [NDTPA_PROXY_QLEN] = { .type = NLA_U32 }, + [NDTPA_APP_PROBES] = { .type = NLA_U32 }, + [NDTPA_UCAST_PROBES] = { .type = NLA_U32 }, +diff --git a/net/core/netpoll.c b/net/core/netpoll.c +index 597e83e2bce861..87f5a837410c13 100644 +--- a/net/core/netpoll.c ++++ b/net/core/netpoll.c +@@ -326,6 +326,7 @@ static int netpoll_owner_active(struct net_device *dev) + static netdev_tx_t __netpoll_send_skb(struct netpoll *np, struct sk_buff *skb) + { + netdev_tx_t status = NETDEV_TX_BUSY; ++ netdev_tx_t ret = NET_XMIT_DROP; + struct net_device *dev; + unsigned long tries; + /* It is up to the caller to keep npinfo alive. */ +@@ -334,11 +335,12 @@ static netdev_tx_t __netpoll_send_skb(struct netpoll *np, struct sk_buff *skb) + lockdep_assert_irqs_disabled(); + + dev = np->dev; ++ rcu_read_lock(); + npinfo = rcu_dereference_bh(dev->npinfo); + + if (!npinfo || !netif_running(dev) || !netif_device_present(dev)) { + dev_kfree_skb_irq(skb); +- return NET_XMIT_DROP; ++ goto out; + } + + /* don't get messages out of order, and no recursion */ +@@ -377,7 +379,10 @@ static netdev_tx_t __netpoll_send_skb(struct netpoll *np, struct sk_buff *skb) + skb_queue_tail(&npinfo->txq, skb); + schedule_delayed_work(&npinfo->tx_work,0); + } +- return NETDEV_TX_OK; ++ ret = NETDEV_TX_OK; ++out: ++ rcu_read_unlock(); ++ return ret; + } + + netdev_tx_t netpoll_send_skb(struct netpoll *np, struct sk_buff *skb) +diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c +index 24795110b2ff39..46a97c915e93e8 100644 +--- a/net/core/rtnetlink.c ++++ b/net/core/rtnetlink.c +@@ -972,6 +972,9 @@ static inline int rtnl_vfinfo_size(const struct net_device *dev, + /* IFLA_VF_STATS_TX_DROPPED */ + 
nla_total_size_64bit(sizeof(__u64))); + } ++ if (dev->netdev_ops->ndo_get_vf_guid) ++ size += num_vfs * 2 * ++ nla_total_size(sizeof(struct ifla_vf_guid)); + return size; + } else + return 0; +diff --git a/net/core/sock_map.c b/net/core/sock_map.c +index f591ec106cd6c6..487a571c28c1db 100644 +--- a/net/core/sock_map.c ++++ b/net/core/sock_map.c +@@ -411,15 +411,14 @@ static void *sock_map_lookup_sys(struct bpf_map *map, void *key) + static int __sock_map_delete(struct bpf_stab *stab, struct sock *sk_test, + struct sock **psk) + { +- struct sock *sk; ++ struct sock *sk = NULL; + int err = 0; + + if (irqs_disabled()) + return -EOPNOTSUPP; /* locks here are hardirq-unsafe */ + + raw_spin_lock_bh(&stab->lock); +- sk = *psk; +- if (!sk_test || sk_test == sk) ++ if (!sk_test || sk_test == *psk) + sk = xchg(psk, NULL); + + if (likely(sk)) +diff --git a/net/ipv4/ip_tunnel_core.c b/net/ipv4/ip_tunnel_core.c +index 50ddbd7021f0e4..35189f1b361ea6 100644 +--- a/net/ipv4/ip_tunnel_core.c ++++ b/net/ipv4/ip_tunnel_core.c +@@ -415,7 +415,7 @@ int skb_tunnel_check_pmtu(struct sk_buff *skb, struct dst_entry *encap_dst, + + skb_dst_update_pmtu_no_confirm(skb, mtu); + +- if (!reply || skb->pkt_type == PACKET_HOST) ++ if (!reply) + return 0; + + if (skb->protocol == htons(ETH_P_IP)) +@@ -450,7 +450,7 @@ static const struct nla_policy + geneve_opt_policy[LWTUNNEL_IP_OPT_GENEVE_MAX + 1] = { + [LWTUNNEL_IP_OPT_GENEVE_CLASS] = { .type = NLA_U16 }, + [LWTUNNEL_IP_OPT_GENEVE_TYPE] = { .type = NLA_U8 }, +- [LWTUNNEL_IP_OPT_GENEVE_DATA] = { .type = NLA_BINARY, .len = 128 }, ++ [LWTUNNEL_IP_OPT_GENEVE_DATA] = { .type = NLA_BINARY, .len = 127 }, + }; + + static const struct nla_policy +diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c +index 3c85ecab144571..c1e624ca6a2505 100644 +--- a/net/ipv4/tcp.c ++++ b/net/ipv4/tcp.c +@@ -4514,13 +4514,9 @@ int tcp_abort(struct sock *sk, int err) + bh_lock_sock(sk); + + if (!sock_flag(sk, SOCK_DEAD)) { +- WRITE_ONCE(sk->sk_err, err); +- /* This barrier is 
coupled with smp_rmb() in tcp_poll() */ +- smp_wmb(); +- sk_error_report(sk); + if (tcp_need_reset(sk->sk_state)) + tcp_send_active_reset(sk, GFP_ATOMIC); +- tcp_done(sk); ++ tcp_done_with_error(sk, err); + } + + bh_unlock_sock(sk); +diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c +index 932a10f64adcb1..07b3487e3ae97a 100644 +--- a/net/ipv6/addrconf.c ++++ b/net/ipv6/addrconf.c +@@ -5689,6 +5689,27 @@ static void snmp6_fill_stats(u64 *stats, struct inet6_dev *idev, int attrtype, + } + } + ++static int inet6_fill_ifla6_stats_attrs(struct sk_buff *skb, ++ struct inet6_dev *idev) ++{ ++ struct nlattr *nla; ++ ++ nla = nla_reserve(skb, IFLA_INET6_STATS, IPSTATS_MIB_MAX * sizeof(u64)); ++ if (!nla) ++ goto nla_put_failure; ++ snmp6_fill_stats(nla_data(nla), idev, IFLA_INET6_STATS, nla_len(nla)); ++ ++ nla = nla_reserve(skb, IFLA_INET6_ICMP6STATS, ICMP6_MIB_MAX * sizeof(u64)); ++ if (!nla) ++ goto nla_put_failure; ++ snmp6_fill_stats(nla_data(nla), idev, IFLA_INET6_ICMP6STATS, nla_len(nla)); ++ ++ return 0; ++ ++nla_put_failure: ++ return -EMSGSIZE; ++} ++ + static int inet6_fill_ifla6_attrs(struct sk_buff *skb, struct inet6_dev *idev, + u32 ext_filter_mask) + { +@@ -5710,18 +5731,10 @@ static int inet6_fill_ifla6_attrs(struct sk_buff *skb, struct inet6_dev *idev, + + /* XXX - MC not implemented */ + +- if (ext_filter_mask & RTEXT_FILTER_SKIP_STATS) +- return 0; +- +- nla = nla_reserve(skb, IFLA_INET6_STATS, IPSTATS_MIB_MAX * sizeof(u64)); +- if (!nla) +- goto nla_put_failure; +- snmp6_fill_stats(nla_data(nla), idev, IFLA_INET6_STATS, nla_len(nla)); +- +- nla = nla_reserve(skb, IFLA_INET6_ICMP6STATS, ICMP6_MIB_MAX * sizeof(u64)); +- if (!nla) +- goto nla_put_failure; +- snmp6_fill_stats(nla_data(nla), idev, IFLA_INET6_ICMP6STATS, nla_len(nla)); ++ if (!(ext_filter_mask & RTEXT_FILTER_SKIP_STATS)) { ++ if (inet6_fill_ifla6_stats_attrs(skb, idev) < 0) ++ goto nla_put_failure; ++ } + + nla = nla_reserve(skb, IFLA_INET6_TOKEN, sizeof(struct in6_addr)); + if (!nla) 
+diff --git a/net/ipv6/calipso.c b/net/ipv6/calipso.c +index 1578ed9e97d892..c07e3da08d2a8b 100644 +--- a/net/ipv6/calipso.c ++++ b/net/ipv6/calipso.c +@@ -1075,8 +1075,13 @@ static int calipso_sock_getattr(struct sock *sk, + struct ipv6_opt_hdr *hop; + int opt_len, len, ret_val = -ENOMSG, offset; + unsigned char *opt; +- struct ipv6_txoptions *txopts = txopt_get(inet6_sk(sk)); ++ struct ipv6_pinfo *pinfo = inet6_sk(sk); ++ struct ipv6_txoptions *txopts; ++ ++ if (!pinfo) ++ return -EAFNOSUPPORT; + ++ txopts = txopt_get(pinfo); + if (!txopts || !txopts->hopopt) + goto done; + +@@ -1128,8 +1133,13 @@ static int calipso_sock_setattr(struct sock *sk, + { + int ret_val; + struct ipv6_opt_hdr *old, *new; +- struct ipv6_txoptions *txopts = txopt_get(inet6_sk(sk)); ++ struct ipv6_pinfo *pinfo = inet6_sk(sk); ++ struct ipv6_txoptions *txopts; ++ ++ if (!pinfo) ++ return -EAFNOSUPPORT; + ++ txopts = txopt_get(pinfo); + old = NULL; + if (txopts) + old = txopts->hopopt; +@@ -1156,8 +1166,13 @@ static int calipso_sock_setattr(struct sock *sk, + static void calipso_sock_delattr(struct sock *sk) + { + struct ipv6_opt_hdr *new_hop; +- struct ipv6_txoptions *txopts = txopt_get(inet6_sk(sk)); ++ struct ipv6_pinfo *pinfo = inet6_sk(sk); ++ struct ipv6_txoptions *txopts; ++ ++ if (!pinfo) ++ return; + ++ txopts = txopt_get(pinfo); + if (!txopts || !txopts->hopopt) + goto done; + +diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c +index 9899bac5e1508b..4fcff4fe5a98f4 100644 +--- a/net/ipv6/ip6_output.c ++++ b/net/ipv6/ip6_output.c +@@ -1461,7 +1461,7 @@ static int __ip6_append_data(struct sock *sk, + struct page_frag *pfrag, + int getfrag(void *from, char *to, int offset, + int len, int odd, struct sk_buff *skb), +- void *from, int length, int transhdrlen, ++ void *from, size_t length, int transhdrlen, + unsigned int flags, struct ipcm6_cookie *ipc6) + { + struct sk_buff *skb, *skb_prev = NULL; +@@ -1806,7 +1806,7 @@ static int __ip6_append_data(struct sock *sk, + int 
ip6_append_data(struct sock *sk, + int getfrag(void *from, char *to, int offset, int len, + int odd, struct sk_buff *skb), +- void *from, int length, int transhdrlen, ++ void *from, size_t length, int transhdrlen, + struct ipcm6_cookie *ipc6, struct flowi6 *fl6, + struct rt6_info *rt, unsigned int flags) + { +@@ -2000,7 +2000,7 @@ EXPORT_SYMBOL_GPL(ip6_flush_pending_frames); + struct sk_buff *ip6_make_skb(struct sock *sk, + int getfrag(void *from, char *to, int offset, + int len, int odd, struct sk_buff *skb), +- void *from, int length, int transhdrlen, ++ void *from, size_t length, int transhdrlen, + struct ipcm6_cookie *ipc6, struct flowi6 *fl6, + struct rt6_info *rt, unsigned int flags, + struct inet_cork_full *cork) +diff --git a/net/ipv6/netfilter/nf_socket_ipv6.c b/net/ipv6/netfilter/nf_socket_ipv6.c +index aa5bb8789ba0b4..697b9e60e24ed2 100644 +--- a/net/ipv6/netfilter/nf_socket_ipv6.c ++++ b/net/ipv6/netfilter/nf_socket_ipv6.c +@@ -103,6 +103,10 @@ struct sock *nf_sk_lookup_slow_v6(struct net *net, const struct sk_buff *skb, + struct sk_buff *data_skb = NULL; + int doff = 0; + int thoff = 0, tproto; ++#if IS_ENABLED(CONFIG_NF_CONNTRACK) ++ enum ip_conntrack_info ctinfo; ++ struct nf_conn const *ct; ++#endif + + tproto = ipv6_find_hdr(skb, &thoff, -1, NULL, NULL); + if (tproto < 0) { +@@ -136,6 +140,25 @@ struct sock *nf_sk_lookup_slow_v6(struct net *net, const struct sk_buff *skb, + return NULL; + } + ++#if IS_ENABLED(CONFIG_NF_CONNTRACK) ++ /* Do the lookup with the original socket address in ++ * case this is a reply packet of an established ++ * SNAT-ted connection. ++ */ ++ ct = nf_ct_get(skb, &ctinfo); ++ if (ct && ++ ((tproto != IPPROTO_ICMPV6 && ++ ctinfo == IP_CT_ESTABLISHED_REPLY) || ++ (tproto == IPPROTO_ICMPV6 && ++ ctinfo == IP_CT_RELATED_REPLY)) && ++ (ct->status & IPS_SRC_NAT_DONE)) { ++ daddr = &ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.src.u3.in6; ++ dport = (tproto == IPPROTO_TCP) ? 
++ ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.src.u.tcp.port : ++ ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.src.u.udp.port; ++ } ++#endif ++ + return nf_socket_get_sock_v6(net, data_skb, doff, tproto, saddr, daddr, + sport, dport, indev); + } +diff --git a/net/ipv6/route.c b/net/ipv6/route.c +index 94526436b91e83..f8b2fdaef67f47 100644 +--- a/net/ipv6/route.c ++++ b/net/ipv6/route.c +@@ -3630,7 +3630,8 @@ int fib6_nh_init(struct net *net, struct fib6_nh *fib6_nh, + in6_dev_put(idev); + + if (err) { +- lwtstate_put(fib6_nh->fib_nh_lws); ++ fib_nh_common_release(&fib6_nh->nh_common); ++ fib6_nh->nh_common.nhc_pcpu_rth_output = NULL; + fib6_nh->fib_nh_lws = NULL; + dev_put(dev); + } +@@ -3806,10 +3807,12 @@ static struct fib6_info *ip6_route_info_create(struct fib6_config *cfg, + if (nh) { + if (rt->fib6_src.plen) { + NL_SET_ERR_MSG(extack, "Nexthops can not be used with source routing"); ++ err = -EINVAL; + goto out_free; + } + if (!nexthop_get(nh)) { + NL_SET_ERR_MSG(extack, "Nexthop has been deleted"); ++ err = -ENOENT; + goto out_free; + } + rt->nh = nh; +diff --git a/net/mptcp/options.c b/net/mptcp/options.c +index bbdee9a9b4424b..d1443c5732c83a 100644 +--- a/net/mptcp/options.c ++++ b/net/mptcp/options.c +@@ -647,6 +647,7 @@ static bool mptcp_established_options_add_addr(struct sock *sk, struct sk_buff * + struct mptcp_sock *msk = mptcp_sk(subflow->conn); + bool drop_other_suboptions = false; + unsigned int opt_size = *size; ++ struct mptcp_addr_info addr; + bool echo; + int len; + +@@ -655,7 +656,7 @@ static bool mptcp_established_options_add_addr(struct sock *sk, struct sk_buff * + */ + if (!mptcp_pm_should_add_signal(msk) || + (opts->suboptions & (OPTION_MPTCP_MPJ_ACK | OPTION_MPTCP_MPC_ACK)) || +- !mptcp_pm_add_addr_signal(msk, skb, opt_size, remaining, &opts->addr, ++ !mptcp_pm_add_addr_signal(msk, skb, opt_size, remaining, &addr, + &echo, &drop_other_suboptions)) + return false; + +@@ -668,7 +669,7 @@ static bool mptcp_established_options_add_addr(struct sock 
*sk, struct sk_buff * + else if (opts->suboptions & OPTION_MPTCP_DSS) + return false; + +- len = mptcp_add_addr_len(opts->addr.family, echo, !!opts->addr.port); ++ len = mptcp_add_addr_len(addr.family, echo, !!addr.port); + if (remaining < len) + return false; + +@@ -685,6 +686,7 @@ static bool mptcp_established_options_add_addr(struct sock *sk, struct sk_buff * + opts->ahmac = 0; + *size -= opt_size; + } ++ opts->addr = addr; + opts->suboptions |= OPTION_MPTCP_ADD_ADDR; + if (!echo) { + opts->ahmac = add_addr_generate_hmac(msk->local_key, +diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h +index 83e93a7e9b40e2..cfb6aa72515e89 100644 +--- a/net/mptcp/protocol.h ++++ b/net/mptcp/protocol.h +@@ -872,6 +872,8 @@ static inline void __mptcp_do_fallback(struct mptcp_sock *msk) + pr_debug("TCP fallback already done (msk=%p)\n", msk); + return; + } ++ if (WARN_ON_ONCE(!READ_ONCE(msk->allow_infinite_fallback))) ++ return; + set_bit(MPTCP_FALLBACK_DONE, &msk->flags); + } + +diff --git a/net/netfilter/ipvs/ip_vs_ctl.c b/net/netfilter/ipvs/ip_vs_ctl.c +index d0b64c36471d55..fb9f1badeddbfd 100644 +--- a/net/netfilter/ipvs/ip_vs_ctl.c ++++ b/net/netfilter/ipvs/ip_vs_ctl.c +@@ -2852,12 +2852,12 @@ do_ip_vs_get_ctl(struct sock *sk, int cmd, void __user *user, int *len) + case IP_VS_SO_GET_SERVICES: + { + struct ip_vs_get_services *get; +- int size; ++ size_t size; + + get = (struct ip_vs_get_services *)arg; + size = struct_size(get, entrytable, get->num_services); + if (*len != size) { +- pr_err("length: %u != %u\n", *len, size); ++ pr_err("length: %u != %zu\n", *len, size); + ret = -EINVAL; + goto out; + } +@@ -2893,12 +2893,12 @@ do_ip_vs_get_ctl(struct sock *sk, int cmd, void __user *user, int *len) + case IP_VS_SO_GET_DESTS: + { + struct ip_vs_get_dests *get; +- int size; ++ size_t size; + + get = (struct ip_vs_get_dests *)arg; + size = struct_size(get, entrytable, get->num_dests); + if (*len != size) { +- pr_err("length: %u != %u\n", *len, size); ++ pr_err("length: %u 
!= %zu\n", *len, size); + ret = -EINVAL; + goto out; + } +diff --git a/net/netfilter/nf_conncount.c b/net/netfilter/nf_conncount.c +index 0ce12a33ffda40..a66a27fe7f458c 100644 +--- a/net/netfilter/nf_conncount.c ++++ b/net/netfilter/nf_conncount.c +@@ -366,6 +366,8 @@ insert_tree(struct net *net, + + conn->tuple = *tuple; + conn->zone = *zone; ++ conn->cpu = raw_smp_processor_id(); ++ conn->jiffies32 = (u32)jiffies; + memcpy(rbconn->key, key, sizeof(u32) * data->keylen); + + nf_conncount_list_init(&rbconn->list); +diff --git a/net/netfilter/nft_ct.c b/net/netfilter/nft_ct.c +index 69214993b5a2c8..83bb3f110ea846 100644 +--- a/net/netfilter/nft_ct.c ++++ b/net/netfilter/nft_ct.c +@@ -239,6 +239,7 @@ static void nft_ct_set_zone_eval(const struct nft_expr *expr, + enum ip_conntrack_info ctinfo; + u16 value = nft_reg_load16(®s->data[priv->sreg]); + struct nf_conn *ct; ++ int oldcnt; + + ct = nf_ct_get(skb, &ctinfo); + if (ct) /* already tracked */ +@@ -259,10 +260,11 @@ static void nft_ct_set_zone_eval(const struct nft_expr *expr, + + ct = this_cpu_read(nft_ct_pcpu_template); + +- if (likely(refcount_read(&ct->ct_general.use) == 1)) { +- refcount_inc(&ct->ct_general.use); ++ __refcount_inc(&ct->ct_general.use, &oldcnt); ++ if (likely(oldcnt == 1)) { + nf_ct_zone_add(ct, &zone); + } else { ++ refcount_dec(&ct->ct_general.use); + /* previous skb got queued to userspace, allocate temporary + * one until percpu template can be reused. 
+ */ +diff --git a/net/netfilter/nft_exthdr.c b/net/netfilter/nft_exthdr.c +index d1dcf5b2e92e16..7c2931e024bb05 100644 +--- a/net/netfilter/nft_exthdr.c ++++ b/net/netfilter/nft_exthdr.c +@@ -85,7 +85,6 @@ static int ipv4_find_option(struct net *net, struct sk_buff *skb, + unsigned char optbuf[sizeof(struct ip_options) + 40]; + struct ip_options *opt = (struct ip_options *)optbuf; + struct iphdr *iph, _iph; +- unsigned int start; + bool found = false; + __be32 info; + int optlen; +@@ -93,7 +92,6 @@ static int ipv4_find_option(struct net *net, struct sk_buff *skb, + iph = skb_header_pointer(skb, 0, sizeof(_iph), &_iph); + if (!iph) + return -EBADMSG; +- start = sizeof(struct iphdr); + + optlen = iph->ihl * 4 - (int)sizeof(struct iphdr); + if (optlen <= 0) +@@ -103,7 +101,7 @@ static int ipv4_find_option(struct net *net, struct sk_buff *skb, + /* Copy the options since __ip_options_compile() modifies + * the options. + */ +- if (skb_copy_bits(skb, start, opt->__data, optlen)) ++ if (skb_copy_bits(skb, sizeof(struct iphdr), opt->__data, optlen)) + return -EBADMSG; + opt->optlen = optlen; + +@@ -118,18 +116,18 @@ static int ipv4_find_option(struct net *net, struct sk_buff *skb, + found = target == IPOPT_SSRR ? 
opt->is_strictroute : + !opt->is_strictroute; + if (found) +- *offset = opt->srr + start; ++ *offset = opt->srr; + break; + case IPOPT_RR: + if (!opt->rr) + break; +- *offset = opt->rr + start; ++ *offset = opt->rr; + found = true; + break; + case IPOPT_RA: + if (!opt->router_alert) + break; +- *offset = opt->router_alert + start; ++ *offset = opt->router_alert; + found = true; + break; + default: +diff --git a/net/netfilter/nft_set_hash.c b/net/netfilter/nft_set_hash.c +index 5c4209b49bda7b..a592cca7a61f91 100644 +--- a/net/netfilter/nft_set_hash.c ++++ b/net/netfilter/nft_set_hash.c +@@ -310,7 +310,8 @@ static bool nft_rhash_expr_needs_gc_run(const struct nft_set *set, + + nft_setelem_expr_foreach(expr, elem_expr, size) { + if (expr->ops->gc && +- expr->ops->gc(read_pnet(&set->net), expr)) ++ expr->ops->gc(read_pnet(&set->net), expr) && ++ set->flags & NFT_SET_EVAL) + return true; + } + +diff --git a/net/netfilter/nft_tunnel.c b/net/netfilter/nft_tunnel.c +index c8822fa8196d97..cfe6cf1be4217f 100644 +--- a/net/netfilter/nft_tunnel.c ++++ b/net/netfilter/nft_tunnel.c +@@ -305,13 +305,13 @@ static int nft_tunnel_obj_erspan_init(const struct nlattr *attr, + static const struct nla_policy nft_tunnel_opts_geneve_policy[NFTA_TUNNEL_KEY_GENEVE_MAX + 1] = { + [NFTA_TUNNEL_KEY_GENEVE_CLASS] = { .type = NLA_U16 }, + [NFTA_TUNNEL_KEY_GENEVE_TYPE] = { .type = NLA_U8 }, +- [NFTA_TUNNEL_KEY_GENEVE_DATA] = { .type = NLA_BINARY, .len = 128 }, ++ [NFTA_TUNNEL_KEY_GENEVE_DATA] = { .type = NLA_BINARY, .len = 127 }, + }; + + static int nft_tunnel_obj_geneve_init(const struct nlattr *attr, + struct nft_tunnel_opts *opts) + { +- struct geneve_opt *opt = (struct geneve_opt *)opts->u.data + opts->len; ++ struct geneve_opt *opt = (struct geneve_opt *)(opts->u.data + opts->len); + struct nlattr *tb[NFTA_TUNNEL_KEY_GENEVE_MAX + 1]; + int err, data_len; + +@@ -592,7 +592,7 @@ static int nft_tunnel_opts_dump(struct sk_buff *skb, + if (!inner) + goto failure; + while (opts->len > offset) { +- 
opt = (struct geneve_opt *)opts->u.data + offset; ++ opt = (struct geneve_opt *)(opts->u.data + offset); + if (nla_put_be16(skb, NFTA_TUNNEL_KEY_GENEVE_CLASS, + opt->opt_class) || + nla_put_u8(skb, NFTA_TUNNEL_KEY_GENEVE_TYPE, +diff --git a/net/openvswitch/actions.c b/net/openvswitch/actions.c +index 85af0e9e0ac6d1..aca6e2b599c860 100644 +--- a/net/openvswitch/actions.c ++++ b/net/openvswitch/actions.c +@@ -924,12 +924,6 @@ static void do_output(struct datapath *dp, struct sk_buff *skb, int out_port, + pskb_trim(skb, ovs_mac_header_len(key)); + } + +- /* Need to set the pkt_type to involve the routing layer. The +- * packet movement through the OVS datapath doesn't generally +- * use routing, but this is needed for tunnel cases. +- */ +- skb->pkt_type = PACKET_OUTGOING; +- + if (likely(!mru || + (skb->len <= mru + vport->dev->hard_header_len))) { + ovs_vport_send(vport, skb, ovs_key_mac_proto(key)); +diff --git a/net/openvswitch/flow_netlink.c b/net/openvswitch/flow_netlink.c +index 1cf431d04a4682..d9bef3decd70cd 100644 +--- a/net/openvswitch/flow_netlink.c ++++ b/net/openvswitch/flow_netlink.c +@@ -2273,15 +2273,11 @@ int ovs_nla_put_mask(const struct sw_flow *flow, struct sk_buff *skb) + OVS_FLOW_ATTR_MASK, true, skb); + } + +-#define MAX_ACTIONS_BUFSIZE (32 * 1024) +- + static struct sw_flow_actions *nla_alloc_flow_actions(int size) + { + struct sw_flow_actions *sfa; + +- WARN_ON_ONCE(size > MAX_ACTIONS_BUFSIZE); +- +- sfa = kmalloc(sizeof(*sfa) + size, GFP_KERNEL); ++ sfa = kmalloc(kmalloc_size_roundup(sizeof(*sfa) + size), GFP_KERNEL); + if (!sfa) + return ERR_PTR(-ENOMEM); + +@@ -2436,15 +2432,6 @@ static struct nlattr *reserve_sfa_size(struct sw_flow_actions **sfa, + + new_acts_size = max(next_offset + req_size, ksize(*sfa) * 2); + +- if (new_acts_size > MAX_ACTIONS_BUFSIZE) { +- if ((next_offset + req_size) > MAX_ACTIONS_BUFSIZE) { +- OVS_NLERR(log, "Flow action size exceeds max %u", +- MAX_ACTIONS_BUFSIZE); +- return ERR_PTR(-EMSGSIZE); +- } +- 
new_acts_size = MAX_ACTIONS_BUFSIZE; +- } +- + acts = nla_alloc_flow_actions(new_acts_size); + if (IS_ERR(acts)) + return (void *)acts; +@@ -3463,7 +3450,7 @@ int ovs_nla_copy_actions(struct net *net, const struct nlattr *attr, + int err; + u32 mpls_label_count = 0; + +- *sfa = nla_alloc_flow_actions(min(nla_len(attr), MAX_ACTIONS_BUFSIZE)); ++ *sfa = nla_alloc_flow_actions(nla_len(attr)); + if (IS_ERR(*sfa)) + return PTR_ERR(*sfa); + +diff --git a/net/sched/act_tunnel_key.c b/net/sched/act_tunnel_key.c +index d9cd174eecb79b..64277ce3c5eb9b 100644 +--- a/net/sched/act_tunnel_key.c ++++ b/net/sched/act_tunnel_key.c +@@ -67,7 +67,7 @@ geneve_opt_policy[TCA_TUNNEL_KEY_ENC_OPT_GENEVE_MAX + 1] = { + [TCA_TUNNEL_KEY_ENC_OPT_GENEVE_CLASS] = { .type = NLA_U16 }, + [TCA_TUNNEL_KEY_ENC_OPT_GENEVE_TYPE] = { .type = NLA_U8 }, + [TCA_TUNNEL_KEY_ENC_OPT_GENEVE_DATA] = { .type = NLA_BINARY, +- .len = 128 }, ++ .len = 127 }, + }; + + static const struct nla_policy +diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c +index 35842b51a24e27..af437be93e25af 100644 +--- a/net/sched/cls_flower.c ++++ b/net/sched/cls_flower.c +@@ -723,7 +723,7 @@ geneve_opt_policy[TCA_FLOWER_KEY_ENC_OPT_GENEVE_MAX + 1] = { + [TCA_FLOWER_KEY_ENC_OPT_GENEVE_CLASS] = { .type = NLA_U16 }, + [TCA_FLOWER_KEY_ENC_OPT_GENEVE_TYPE] = { .type = NLA_U8 }, + [TCA_FLOWER_KEY_ENC_OPT_GENEVE_DATA] = { .type = NLA_BINARY, +- .len = 128 }, ++ .len = 127 }, + }; + + static const struct nla_policy +diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c +index 516874d943cd94..d9ce273ba43d84 100644 +--- a/net/sched/sch_api.c ++++ b/net/sched/sch_api.c +@@ -2164,6 +2164,12 @@ static int tc_ctl_tclass(struct sk_buff *skb, struct nlmsghdr *n, + return -EOPNOTSUPP; + } + ++ /* Prevent creation of traffic classes with classid TC_H_ROOT */ ++ if (clid == TC_H_ROOT) { ++ NL_SET_ERR_MSG(extack, "Cannot create traffic class with classid TC_H_ROOT"); ++ return -EINVAL; ++ } ++ + new_cl = cl; + err = -EOPNOTSUPP; + if 
(cops->change) +diff --git a/net/sched/sch_skbprio.c b/net/sched/sch_skbprio.c +index df72fb83d9c7d8..c9e422e4661596 100644 +--- a/net/sched/sch_skbprio.c ++++ b/net/sched/sch_skbprio.c +@@ -121,8 +121,6 @@ static int skbprio_enqueue(struct sk_buff *skb, struct Qdisc *sch, + /* Check to update highest and lowest priorities. */ + if (skb_queue_empty(lp_qdisc)) { + if (q->lowest_prio == q->highest_prio) { +- /* The incoming packet is the only packet in queue. */ +- BUG_ON(sch->q.qlen != 1); + q->lowest_prio = prio; + q->highest_prio = prio; + } else { +@@ -154,7 +152,6 @@ static struct sk_buff *skbprio_dequeue(struct Qdisc *sch) + /* Update highest priority field. */ + if (skb_queue_empty(hpq)) { + if (q->lowest_prio == q->highest_prio) { +- BUG_ON(sch->q.qlen); + q->highest_prio = 0; + q->lowest_prio = SKBPRIO_MAX_PRIORITY - 1; + } else { +diff --git a/net/sctp/stream.c b/net/sctp/stream.c +index ee6514af830f78..0527728aee986a 100644 +--- a/net/sctp/stream.c ++++ b/net/sctp/stream.c +@@ -735,7 +735,7 @@ struct sctp_chunk *sctp_process_strreset_tsnreq( + * value SHOULD be the smallest TSN not acknowledged by the + * receiver of the request plus 2^31. + */ +- init_tsn = sctp_tsnmap_get_ctsn(&asoc->peer.tsn_map) + (1 << 31); ++ init_tsn = sctp_tsnmap_get_ctsn(&asoc->peer.tsn_map) + (1U << 31); + sctp_tsnmap_init(&asoc->peer.tsn_map, SCTP_TSN_MAP_INITIAL, + init_tsn, GFP_ATOMIC); + +diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c +index 29ce6cc7b40113..036bdcc9d5c510 100644 +--- a/net/vmw_vsock/af_vsock.c ++++ b/net/vmw_vsock/af_vsock.c +@@ -1428,7 +1428,11 @@ static int vsock_connect(struct socket *sock, struct sockaddr *addr, + timeout = vsk->connect_timeout; + prepare_to_wait(sk_sleep(sk), &wait, TASK_INTERRUPTIBLE); + +- while (sk->sk_state != TCP_ESTABLISHED && sk->sk_err == 0) { ++ /* If the socket is already closing or it is in an error state, there ++ * is no point in waiting. 
++ */ ++ while (sk->sk_state != TCP_ESTABLISHED && ++ sk->sk_state != TCP_CLOSING && sk->sk_err == 0) { + if (flags & O_NONBLOCK) { + /* If we're not going to block, we schedule a timeout + * function to generate a timeout on the connection +diff --git a/net/xfrm/xfrm_output.c b/net/xfrm/xfrm_output.c +index 4dc4a7bbe51cf6..29ce7f6f16a091 100644 +--- a/net/xfrm/xfrm_output.c ++++ b/net/xfrm/xfrm_output.c +@@ -737,7 +737,7 @@ int xfrm_output(struct sock *sk, struct sk_buff *skb) + skb->encapsulation = 1; + + if (skb_is_gso(skb)) { +- if (skb->inner_protocol) ++ if (skb->inner_protocol && x->props.mode == XFRM_MODE_TUNNEL) + return xfrm_output_gso(net, sk, skb); + + skb_shinfo(skb)->gso_type |= SKB_GSO_ESP; +diff --git a/scripts/selinux/install_policy.sh b/scripts/selinux/install_policy.sh +index 20af56ce245c56..c68f0e045fb00f 100755 +--- a/scripts/selinux/install_policy.sh ++++ b/scripts/selinux/install_policy.sh +@@ -6,27 +6,24 @@ if [ `id -u` -ne 0 ]; then + exit 1 + fi + +-SF=`which setfiles` +-if [ $? -eq 1 ]; then ++SF=`which setfiles` || { + echo "Could not find setfiles" + echo "Do you have policycoreutils installed?" + exit 1 +-fi ++} + +-CP=`which checkpolicy` +-if [ $? -eq 1 ]; then ++CP=`which checkpolicy` || { + echo "Could not find checkpolicy" + echo "Do you have checkpolicy installed?" + exit 1 +-fi ++} + VERS=`$CP -V | awk '{print $1}'` + +-ENABLED=`which selinuxenabled` +-if [ $? -eq 1 ]; then ++ENABLED=`which selinuxenabled` || { + echo "Could not find selinuxenabled" + echo "Do you have libselinux-utils installed?" 
+ exit 1 +-fi ++} + + if selinuxenabled; then + echo "SELinux is already enabled" +diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c +index 6440a79f4d480b..e1de24c9f62651 100644 +--- a/sound/pci/hda/patch_realtek.c ++++ b/sound/pci/hda/patch_realtek.c +@@ -576,6 +576,9 @@ static void alc_shutup_pins(struct hda_codec *codec) + { + struct alc_spec *spec = codec->spec; + ++ if (spec->no_shutup_pins) ++ return; ++ + switch (codec->core.vendor_id) { + case 0x10ec0236: + case 0x10ec0256: +@@ -591,8 +594,7 @@ static void alc_shutup_pins(struct hda_codec *codec) + alc_headset_mic_no_shutup(codec); + break; + default: +- if (!spec->no_shutup_pins) +- snd_hda_shutup_pins(codec); ++ snd_hda_shutup_pins(codec); + break; + } + } +@@ -4757,6 +4759,21 @@ static void alc236_fixup_hp_coef_micmute_led(struct hda_codec *codec, + } + } + ++static void alc295_fixup_hp_mute_led_coefbit11(struct hda_codec *codec, ++ const struct hda_fixup *fix, int action) ++{ ++ struct alc_spec *spec = codec->spec; ++ ++ if (action == HDA_FIXUP_ACT_PRE_PROBE) { ++ spec->mute_led_polarity = 0; ++ spec->mute_led_coef.idx = 0xb; ++ spec->mute_led_coef.mask = 3 << 3; ++ spec->mute_led_coef.on = 1 << 3; ++ spec->mute_led_coef.off = 1 << 4; ++ snd_hda_gen_add_mute_led_cdev(codec, coef_mute_led_set); ++ } ++} ++ + static void alc285_fixup_hp_mute_led(struct hda_codec *codec, + const struct hda_fixup *fix, int action) + { +@@ -6996,6 +7013,7 @@ enum { + ALC290_FIXUP_MONO_SPEAKERS_HSJACK, + ALC290_FIXUP_SUBWOOFER, + ALC290_FIXUP_SUBWOOFER_HSJACK, ++ ALC295_FIXUP_HP_MUTE_LED_COEFBIT11, + ALC269_FIXUP_THINKPAD_ACPI, + ALC269_FIXUP_DMIC_THINKPAD_ACPI, + ALC269VB_FIXUP_INFINIX_ZERO_BOOK_13, +@@ -8542,6 +8560,10 @@ static const struct hda_fixup alc269_fixups[] = { + .chained = true, + .chain_id = ALC283_FIXUP_INT_MIC, + }, ++ [ALC295_FIXUP_HP_MUTE_LED_COEFBIT11] = { ++ .type = HDA_FIXUP_FUNC, ++ .v.func = alc295_fixup_hp_mute_led_coefbit11, ++ }, + [ALC298_FIXUP_SAMSUNG_AMP] = { + .type = 
HDA_FIXUP_FUNC, + .v.func = alc298_fixup_samsung_amp, +@@ -9256,6 +9278,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = { + SND_PCI_QUIRK(0x103c, 0x84e7, "HP Pavilion 15", ALC269_FIXUP_HP_MUTE_LED_MIC3), + SND_PCI_QUIRK(0x103c, 0x8519, "HP Spectre x360 15-df0xxx", ALC285_FIXUP_HP_SPECTRE_X360), + SND_PCI_QUIRK(0x103c, 0x8537, "HP ProBook 440 G6", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF), ++ SND_PCI_QUIRK(0x103c, 0x85c6, "HP Pavilion x360 Convertible 14-dy1xxx", ALC295_FIXUP_HP_MUTE_LED_COEFBIT11), + SND_PCI_QUIRK(0x103c, 0x85de, "HP Envy x360 13-ar0xxx", ALC285_FIXUP_HP_ENVY_X360), + SND_PCI_QUIRK(0x103c, 0x860f, "HP ZBook 15 G6", ALC285_FIXUP_HP_GPIO_AMP_INIT), + SND_PCI_QUIRK(0x103c, 0x861f, "HP Elite Dragonfly G1", ALC285_FIXUP_HP_GPIO_AMP_INIT), +@@ -9302,6 +9325,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = { + SND_PCI_QUIRK(0x103c, 0x8811, "HP Spectre x360 15-eb1xxx", ALC285_FIXUP_HP_SPECTRE_X360_EB1), + SND_PCI_QUIRK(0x103c, 0x8812, "HP Spectre x360 15-eb1xxx", ALC285_FIXUP_HP_SPECTRE_X360_EB1), + SND_PCI_QUIRK(0x103c, 0x881d, "HP 250 G8 Notebook PC", ALC236_FIXUP_HP_MUTE_LED_COEFBIT2), ++ SND_PCI_QUIRK(0x103c, 0x881e, "HP Laptop 15s-du3xxx", ALC236_FIXUP_HP_MUTE_LED_COEFBIT2), + SND_PCI_QUIRK(0x103c, 0x8846, "HP EliteBook 850 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED), + SND_PCI_QUIRK(0x103c, 0x8847, "HP EliteBook x360 830 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED), + SND_PCI_QUIRK(0x103c, 0x884b, "HP EliteBook 840 Aero G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED), +diff --git a/sound/soc/codecs/arizona.c b/sound/soc/codecs/arizona.c +index e32871b3f68ac9..be207350b71249 100644 +--- a/sound/soc/codecs/arizona.c ++++ b/sound/soc/codecs/arizona.c +@@ -967,7 +967,7 @@ int arizona_out_ev(struct snd_soc_dapm_widget *w, + case ARIZONA_OUT3L_ENA_SHIFT: + case ARIZONA_OUT3R_ENA_SHIFT: + priv->out_up_pending++; +- priv->out_up_delay += 17; ++ priv->out_up_delay += 17000; + break; + case ARIZONA_OUT4L_ENA_SHIFT: + case ARIZONA_OUT4R_ENA_SHIFT: 
+@@ -977,7 +977,7 @@ int arizona_out_ev(struct snd_soc_dapm_widget *w, + case WM8997: + break; + default: +- priv->out_up_delay += 10; ++ priv->out_up_delay += 10000; + break; + } + break; +@@ -999,7 +999,7 @@ int arizona_out_ev(struct snd_soc_dapm_widget *w, + if (!priv->out_up_pending && priv->out_up_delay) { + dev_dbg(component->dev, "Power up delay: %d\n", + priv->out_up_delay); +- msleep(priv->out_up_delay); ++ fsleep(priv->out_up_delay); + priv->out_up_delay = 0; + } + break; +@@ -1017,7 +1017,7 @@ int arizona_out_ev(struct snd_soc_dapm_widget *w, + case ARIZONA_OUT3L_ENA_SHIFT: + case ARIZONA_OUT3R_ENA_SHIFT: + priv->out_down_pending++; +- priv->out_down_delay++; ++ priv->out_down_delay += 1000; + break; + case ARIZONA_OUT4L_ENA_SHIFT: + case ARIZONA_OUT4R_ENA_SHIFT: +@@ -1028,10 +1028,10 @@ int arizona_out_ev(struct snd_soc_dapm_widget *w, + break; + case WM8998: + case WM1814: +- priv->out_down_delay += 5; ++ priv->out_down_delay += 5000; + break; + default: +- priv->out_down_delay++; ++ priv->out_down_delay += 1000; + break; + } + break; +@@ -1053,7 +1053,7 @@ int arizona_out_ev(struct snd_soc_dapm_widget *w, + if (!priv->out_down_pending && priv->out_down_delay) { + dev_dbg(component->dev, "Power down delay: %d\n", + priv->out_down_delay); +- msleep(priv->out_down_delay); ++ fsleep(priv->out_down_delay); + priv->out_down_delay = 0; + } + break; +diff --git a/sound/soc/codecs/madera.c b/sound/soc/codecs/madera.c +index fd4fa1d5d2d12d..5775898fc6f9e5 100644 +--- a/sound/soc/codecs/madera.c ++++ b/sound/soc/codecs/madera.c +@@ -2322,10 +2322,10 @@ int madera_out_ev(struct snd_soc_dapm_widget *w, + case CS42L92: + case CS47L92: + case CS47L93: +- out_up_delay = 6; ++ out_up_delay = 6000; + break; + default: +- out_up_delay = 17; ++ out_up_delay = 17000; + break; + } + +@@ -2356,7 +2356,7 @@ int madera_out_ev(struct snd_soc_dapm_widget *w, + case MADERA_OUT3R_ENA_SHIFT: + priv->out_up_pending--; + if (!priv->out_up_pending) { +- msleep(priv->out_up_delay); ++ 
++			fsleep(priv->out_up_delay);
+			priv->out_up_delay = 0;
+		}
+		break;
+@@ -2375,7 +2375,7 @@ int madera_out_ev(struct snd_soc_dapm_widget *w,
+	case MADERA_OUT3L_ENA_SHIFT:
+	case MADERA_OUT3R_ENA_SHIFT:
+		priv->out_down_pending++;
+-		priv->out_down_delay++;
++		priv->out_down_delay += 1000;
+		break;
+	default:
+		break;
+@@ -2392,7 +2392,7 @@ int madera_out_ev(struct snd_soc_dapm_widget *w,
+	case MADERA_OUT3R_ENA_SHIFT:
+		priv->out_down_pending--;
+		if (!priv->out_down_pending) {
+-			msleep(priv->out_down_delay);
++			fsleep(priv->out_down_delay);
+			priv->out_down_delay = 0;
+		}
+		break;
+diff --git a/sound/soc/codecs/tas2764.c b/sound/soc/codecs/tas2764.c
+index 1951bae95b315c..273bf4027a6e52 100644
+--- a/sound/soc/codecs/tas2764.c
++++ b/sound/soc/codecs/tas2764.c
+@@ -315,7 +315,7 @@ static int tas2764_set_fmt(struct snd_soc_dai *dai, unsigned int fmt)
+ {
+	struct snd_soc_component *component = dai->component;
+	struct tas2764_priv *tas2764 = snd_soc_component_get_drvdata(component);
+-	u8 tdm_rx_start_slot = 0, asi_cfg_0 = 0, asi_cfg_1 = 0;
++	u8 tdm_rx_start_slot = 0, asi_cfg_0 = 0, asi_cfg_1 = 0, asi_cfg_4 = 0;
+	int ret;
+
+	switch (fmt & SND_SOC_DAIFMT_INV_MASK) {
+@@ -324,12 +324,14 @@ static int tas2764_set_fmt(struct snd_soc_dai *dai, unsigned int fmt)
+		fallthrough;
+	case SND_SOC_DAIFMT_NB_NF:
+		asi_cfg_1 = TAS2764_TDM_CFG1_RX_RISING;
++		asi_cfg_4 = TAS2764_TDM_CFG4_TX_FALLING;
+		break;
+	case SND_SOC_DAIFMT_IB_IF:
+		asi_cfg_0 ^= TAS2764_TDM_CFG0_FRAME_START;
+		fallthrough;
+	case SND_SOC_DAIFMT_IB_NF:
+		asi_cfg_1 = TAS2764_TDM_CFG1_RX_FALLING;
++		asi_cfg_4 = TAS2764_TDM_CFG4_TX_RISING;
+		break;
+	}
+
+@@ -339,6 +341,12 @@ static int tas2764_set_fmt(struct snd_soc_dai *dai, unsigned int fmt)
+	if (ret < 0)
+		return ret;
+
++	ret = snd_soc_component_update_bits(component, TAS2764_TDM_CFG4,
++					    TAS2764_TDM_CFG4_TX_MASK,
++					    asi_cfg_4);
++	if (ret < 0)
++		return ret;
++
+	switch (fmt & SND_SOC_DAIFMT_FORMAT_MASK) {
+	case SND_SOC_DAIFMT_I2S:
+		asi_cfg_0 ^= TAS2764_TDM_CFG0_FRAME_START;
+diff --git a/sound/soc/codecs/tas2764.h b/sound/soc/codecs/tas2764.h
+index f015f22a083b56..337bc611bee967 100644
+--- a/sound/soc/codecs/tas2764.h
++++ b/sound/soc/codecs/tas2764.h
+@@ -25,7 +25,7 @@
+
+ /* Power Control */
+ #define TAS2764_PWR_CTRL		TAS2764_REG(0X0, 0x02)
+-#define TAS2764_PWR_CTRL_MASK		GENMASK(1, 0)
++#define TAS2764_PWR_CTRL_MASK		GENMASK(2, 0)
+ #define TAS2764_PWR_CTRL_ACTIVE	0x0
+ #define TAS2764_PWR_CTRL_MUTE		BIT(0)
+ #define TAS2764_PWR_CTRL_SHUTDOWN	BIT(1)
+@@ -75,6 +75,12 @@
+ #define TAS2764_TDM_CFG3_RXS_SHIFT	0x4
+ #define TAS2764_TDM_CFG3_MASK		GENMASK(3, 0)
+
++/* TDM Configuration Reg4 */
++#define TAS2764_TDM_CFG4		TAS2764_REG(0X0, 0x0d)
++#define TAS2764_TDM_CFG4_TX_MASK	BIT(0)
++#define TAS2764_TDM_CFG4_TX_RISING	0x0
++#define TAS2764_TDM_CFG4_TX_FALLING	BIT(0)
++
+ /* TDM Configuration Reg5 */
+ #define TAS2764_TDM_CFG5		TAS2764_REG(0X0, 0x0e)
+ #define TAS2764_TDM_CFG5_VSNS_MASK	BIT(6)
+diff --git a/sound/soc/codecs/tas2770.c b/sound/soc/codecs/tas2770.c
+index ec0df3b1ef615c..4e71dc1cf588f6 100644
+--- a/sound/soc/codecs/tas2770.c
++++ b/sound/soc/codecs/tas2770.c
+@@ -508,7 +508,7 @@ static int tas2770_codec_probe(struct snd_soc_component *component)
+ }
+
+ static DECLARE_TLV_DB_SCALE(tas2770_digital_tlv, 1100, 50, 0);
+-static DECLARE_TLV_DB_SCALE(tas2770_playback_volume, -12750, 50, 0);
++static DECLARE_TLV_DB_SCALE(tas2770_playback_volume, -10050, 50, 0);
+
+ static const struct snd_kcontrol_new tas2770_snd_controls[] = {
+	SOC_SINGLE_TLV("Speaker Playback Volume", TAS2770_PLAY_CFG_REG2,
+diff --git a/sound/soc/codecs/wm0010.c b/sound/soc/codecs/wm0010.c
+index 28b4656c4e1491..b2f87af1bfc876 100644
+--- a/sound/soc/codecs/wm0010.c
++++ b/sound/soc/codecs/wm0010.c
+@@ -952,7 +952,7 @@ static int wm0010_spi_probe(struct spi_device *spi)
+	if (ret) {
+		dev_err(wm0010->dev, "Failed to set IRQ %d as wake source: %d\n",
+			irq, ret);
+-		return ret;
++		goto free_irq;
+	}
+
+	if (spi->max_speed_hz)
+@@ -964,9 +964,18 @@ static int wm0010_spi_probe(struct spi_device *spi)
+				     &soc_component_dev_wm0010, wm0010_dai,
+				     ARRAY_SIZE(wm0010_dai));
+	if (ret < 0)
+-		return ret;
++		goto disable_irq_wake;
+
+	return 0;
++
++disable_irq_wake:
++	irq_set_irq_wake(wm0010->irq, 0);
++
++free_irq:
++	if (wm0010->irq)
++		free_irq(wm0010->irq, wm0010);
++
++	return ret;
+ }
+
+ static int wm0010_spi_remove(struct spi_device *spi)
+diff --git a/sound/soc/codecs/wm5110.c b/sound/soc/codecs/wm5110.c
+index 7c6e01720d6517..bc3dfb53ba9536 100644
+--- a/sound/soc/codecs/wm5110.c
++++ b/sound/soc/codecs/wm5110.c
+@@ -302,7 +302,7 @@ static int wm5110_hp_pre_enable(struct snd_soc_dapm_widget *w)
+		} else {
+			wseq = wm5110_no_dre_left_enable;
+			nregs = ARRAY_SIZE(wm5110_no_dre_left_enable);
+-			priv->out_up_delay += 10;
++			priv->out_up_delay += 10000;
+		}
+		break;
+	case ARIZONA_OUT1R_ENA_SHIFT:
+@@ -312,7 +312,7 @@ static int wm5110_hp_pre_enable(struct snd_soc_dapm_widget *w)
+		} else {
+			wseq = wm5110_no_dre_right_enable;
+			nregs = ARRAY_SIZE(wm5110_no_dre_right_enable);
+-			priv->out_up_delay += 10;
++			priv->out_up_delay += 10000;
+		}
+		break;
+	default:
+@@ -338,7 +338,7 @@ static int wm5110_hp_pre_disable(struct snd_soc_dapm_widget *w)
+			snd_soc_component_update_bits(component,
+						      ARIZONA_SPARE_TRIGGERS,
+						      ARIZONA_WS_TRG1, 0);
+-			priv->out_down_delay += 27;
++			priv->out_down_delay += 27000;
+		}
+		break;
+	case ARIZONA_OUT1R_ENA_SHIFT:
+@@ -350,7 +350,7 @@ static int wm5110_hp_pre_disable(struct snd_soc_dapm_widget *w)
+			snd_soc_component_update_bits(component,
+						      ARIZONA_SPARE_TRIGGERS,
+						      ARIZONA_WS_TRG2, 0);
+-			priv->out_down_delay += 27;
++			priv->out_down_delay += 27000;
+		}
+		break;
+	default:
+diff --git a/sound/soc/fsl/imx-card.c b/sound/soc/fsl/imx-card.c
+index 223234f6172b23..2b64c0384b6bbc 100644
+--- a/sound/soc/fsl/imx-card.c
++++ b/sound/soc/fsl/imx-card.c
+@@ -759,6 +759,8 @@ static int imx_card_probe(struct platform_device *pdev)
+			data->dapm_routes[i].sink =
+				devm_kasprintf(&pdev->dev, GFP_KERNEL, "%d %s",
+					       i + 1, "Playback");
++			if (!data->dapm_routes[i].sink)
++				return -ENOMEM;
+			data->dapm_routes[i].source = "CPU-Playback";
+		}
+	}
+@@ -776,6 +778,8 @@ static int imx_card_probe(struct platform_device *pdev)
+			data->dapm_routes[i].source =
+				devm_kasprintf(&pdev->dev, GFP_KERNEL, "%d %s",
+					       i + 1, "Capture");
++			if (!data->dapm_routes[i].source)
++				return -ENOMEM;
+			data->dapm_routes[i].sink = "CPU-Capture";
+		}
+	}
+diff --git a/sound/soc/sh/rcar/core.c b/sound/soc/sh/rcar/core.c
+index af8ef2a27d341f..65022ba5c5875a 100644
+--- a/sound/soc/sh/rcar/core.c
++++ b/sound/soc/sh/rcar/core.c
+@@ -1694,20 +1694,6 @@ int rsnd_kctrl_accept_anytime(struct rsnd_dai_stream *io)
+	return 1;
+ }
+
+-int rsnd_kctrl_accept_runtime(struct rsnd_dai_stream *io)
+-{
+-	struct snd_pcm_runtime *runtime = rsnd_io_to_runtime(io);
+-	struct rsnd_priv *priv = rsnd_io_to_priv(io);
+-	struct device *dev = rsnd_priv_to_dev(priv);
+-
+-	if (!runtime) {
+-		dev_warn(dev, "Can't update kctrl when idle\n");
+-		return 0;
+-	}
+-
+-	return 1;
+-}
+-
+ struct rsnd_kctrl_cfg *rsnd_kctrl_init_m(struct rsnd_kctrl_cfg_m *cfg)
+ {
+	cfg->cfg.val = cfg->val;
+diff --git a/sound/soc/sh/rcar/rsnd.h b/sound/soc/sh/rcar/rsnd.h
+index f8ef6836ef84e1..690f4932357c1f 100644
+--- a/sound/soc/sh/rcar/rsnd.h
++++ b/sound/soc/sh/rcar/rsnd.h
+@@ -742,7 +742,6 @@ struct rsnd_kctrl_cfg_s {
+ #define rsnd_kctrl_vals(x)	((x).val)	/* = (x).cfg.val[0] */
+
+ int rsnd_kctrl_accept_anytime(struct rsnd_dai_stream *io);
+-int rsnd_kctrl_accept_runtime(struct rsnd_dai_stream *io);
+ struct rsnd_kctrl_cfg *rsnd_kctrl_init_m(struct rsnd_kctrl_cfg_m *cfg);
+ struct rsnd_kctrl_cfg *rsnd_kctrl_init_s(struct rsnd_kctrl_cfg_s *cfg);
+ int rsnd_kctrl_new(struct rsnd_mod *mod,
+diff --git a/sound/soc/sh/rcar/src.c b/sound/soc/sh/rcar/src.c
+index f832165e46bc04..9893839666d7bc 100644
+--- a/sound/soc/sh/rcar/src.c
++++ b/sound/soc/sh/rcar/src.c
+@@ -530,6 +530,22 @@ static irqreturn_t rsnd_src_interrupt(int irq, void *data)
+	return IRQ_HANDLED;
+ }
+
++static int rsnd_src_kctrl_accept_runtime(struct rsnd_dai_stream *io)
++{
++	struct snd_pcm_runtime *runtime = rsnd_io_to_runtime(io);
++
++	if (!runtime) {
++		struct rsnd_priv *priv = rsnd_io_to_priv(io);
++		struct device *dev = rsnd_priv_to_dev(priv);
++
++		dev_warn(dev, "\"SRC Out Rate\" can use during running\n");
++
++		return 0;
++	}
++
++	return 1;
++}
++
+ static int rsnd_src_probe_(struct rsnd_mod *mod,
+			   struct rsnd_dai_stream *io,
+			   struct rsnd_priv *priv)
+@@ -593,7 +609,7 @@ static int rsnd_src_pcm_new(struct rsnd_mod *mod,
+			rsnd_io_is_play(io) ?
+				"SRC Out Rate" :
+				"SRC In Rate",
+-			rsnd_kctrl_accept_runtime,
++			rsnd_src_kctrl_accept_runtime,
+			rsnd_src_set_convert_rate,
+			&src->sync, 192000);
+
+diff --git a/sound/soc/soc-ops.c b/sound/soc/soc-ops.c
+index 57caa91a4376ed..d8d0a26a554de3 100644
+--- a/sound/soc/soc-ops.c
++++ b/sound/soc/soc-ops.c
+@@ -317,7 +317,7 @@ int snd_soc_put_volsw(struct snd_kcontrol *kcontrol,
+		mask = BIT(sign_bit + 1) - 1;
+
+	val = ucontrol->value.integer.value[0];
+-	if (mc->platform_max && ((int)val + min) > mc->platform_max)
++	if (mc->platform_max && val > mc->platform_max)
+		return -EINVAL;
+	if (val > max - min)
+		return -EINVAL;
+@@ -330,7 +330,7 @@ int snd_soc_put_volsw(struct snd_kcontrol *kcontrol,
+	val = val << shift;
+	if (snd_soc_volsw_is_stereo(mc)) {
+		val2 = ucontrol->value.integer.value[1];
+-		if (mc->platform_max && ((int)val2 + min) > mc->platform_max)
++		if (mc->platform_max && val2 > mc->platform_max)
+			return -EINVAL;
+		if (val2 > max - min)
+			return -EINVAL;
+@@ -485,17 +485,16 @@ int snd_soc_info_volsw_range(struct snd_kcontrol *kcontrol,
+ {
+	struct soc_mixer_control *mc =
+		(struct soc_mixer_control *)kcontrol->private_value;
+-	int platform_max;
+-	int min = mc->min;
++	int max;
+
+-	if (!mc->platform_max)
+-		mc->platform_max = mc->max;
+-	platform_max = mc->platform_max;
++	max = mc->max - mc->min;
++	if (mc->platform_max && mc->platform_max < max)
++		max = mc->platform_max;
+
+	uinfo->type = SNDRV_CTL_ELEM_TYPE_INTEGER;
+	uinfo->count = snd_soc_volsw_is_stereo(mc) ? 2 : 1;
+	uinfo->value.integer.min = 0;
+-	uinfo->value.integer.max = platform_max - min;
++	uinfo->value.integer.max = max;
+
+	return 0;
+ }
+diff --git a/sound/soc/sof/intel/hda-codec.c b/sound/soc/sof/intel/hda-codec.c
+index 6744318de612e5..0449e7a2669ff0 100644
+--- a/sound/soc/sof/intel/hda-codec.c
++++ b/sound/soc/sof/intel/hda-codec.c
+@@ -258,6 +258,7 @@ int hda_codec_i915_exit(struct snd_sof_dev *sdev)
+ }
+ EXPORT_SYMBOL_NS(hda_codec_i915_exit, SND_SOC_SOF_HDA_AUDIO_CODEC_I915);
+
++MODULE_SOFTDEP("pre: snd-hda-codec-hdmi");
+ #endif
+
+ MODULE_LICENSE("Dual BSD/GPL");
+diff --git a/sound/soc/ti/j721e-evm.c b/sound/soc/ti/j721e-evm.c
+index 149f4e2ce9998c..7f2734318452fe 100644
+--- a/sound/soc/ti/j721e-evm.c
++++ b/sound/soc/ti/j721e-evm.c
+@@ -182,6 +182,8 @@ static int j721e_configure_refclk(struct j721e_priv *priv,
+		clk_id = J721E_CLK_PARENT_48000;
+	else if (!(rate % 11025) && priv->pll_rates[J721E_CLK_PARENT_44100])
+		clk_id = J721E_CLK_PARENT_44100;
++	else if (!(rate % 11025) && priv->pll_rates[J721E_CLK_PARENT_48000])
++		clk_id = J721E_CLK_PARENT_48000;
+	else
+		return ret;
+
+diff --git a/sound/usb/mixer_quirks.c b/sound/usb/mixer_quirks.c
+index e45f3d3e11b434..5eccc8af839fc6 100644
+--- a/sound/usb/mixer_quirks.c
++++ b/sound/usb/mixer_quirks.c
+@@ -3553,6 +3553,52 @@ static void snd_dragonfly_quirk_db_scale(struct usb_mixer_interface *mixer,
+	}
+ }
+
++/*
++ * Some Plantronics headsets have control names that don't meet ALSA naming
++ * standards. This function fixes nonstandard source names. By the time
++ * this function is called the control name should look like one of these:
++ * "source names Playback Volume"
++ * "source names Playback Switch"
++ * "source names Capture Volume"
++ * "source names Capture Switch"
++ * If any of the trigger words are found in the name then the name will
++ * be changed to:
++ * "Headset Playback Volume"
++ * "Headset Playback Switch"
++ * "Headset Capture Volume"
++ * "Headset Capture Switch"
++ * depending on the current suffix.
++ */
++static void snd_fix_plt_name(struct snd_usb_audio *chip,
++			     struct snd_ctl_elem_id *id)
++{
++	/* no variant of "Sidetone" should be added to this list */
++	static const char * const trigger[] = {
++		"Earphone", "Microphone", "Receive", "Transmit"
++	};
++	static const char * const suffix[] = {
++		" Playback Volume", " Playback Switch",
++		" Capture Volume", " Capture Switch"
++	};
++	int i;
++
++	for (i = 0; i < ARRAY_SIZE(trigger); i++)
++		if (strstr(id->name, trigger[i]))
++			goto triggered;
++	usb_audio_dbg(chip, "no change in %s\n", id->name);
++	return;
++
++triggered:
++	for (i = 0; i < ARRAY_SIZE(suffix); i++)
++		if (strstr(id->name, suffix[i])) {
++			usb_audio_dbg(chip, "fixing kctl name %s\n", id->name);
++			snprintf(id->name, sizeof(id->name), "Headset%s",
++				 suffix[i]);
++			return;
++		}
++	usb_audio_dbg(chip, "something wrong in kctl name %s\n", id->name);
++}
++
+ void snd_usb_mixer_fu_apply_quirk(struct usb_mixer_interface *mixer,
+				  struct usb_mixer_elem_info *cval, int unitid,
+				  struct snd_kcontrol *kctl)
+@@ -3570,5 +3616,10 @@ void snd_usb_mixer_fu_apply_quirk(struct usb_mixer_interface *mixer,
+		cval->min_mute = 1;
+		break;
+	}
++
++	/* ALSA-ify some Plantronics headset control names */
++	if (USB_ID_VENDOR(mixer->chip->usb_id) == 0x047f &&
++	    (cval->control == UAC_FU_MUTE || cval->control == UAC_FU_VOLUME))
++		snd_fix_plt_name(mixer->chip, &kctl->id);
+ }
+
+diff --git a/tools/lib/bpf/linker.c b/tools/lib/bpf/linker.c
+index 2adf55f4874300..fc91814a35e8e2 100644
+--- a/tools/lib/bpf/linker.c
++++ b/tools/lib/bpf/linker.c
+@@ -1966,7 +1966,7 @@ static int linker_append_elf_sym(struct bpf_linker *linker, struct src_obj *obj,
+
+	obj->sym_map[src_sym_idx] = dst_sym_idx;
+
+-	if (sym_type == STT_SECTION && dst_sym) {
++	if (sym_type == STT_SECTION && dst_sec) {
+		dst_sec->sec_sym_idx = dst_sym_idx;
+		dst_sym->st_value = 0;
+	}
+diff --git a/tools/perf/util/python.c b/tools/perf/util/python.c
+index 8feef3a05af7b3..b9fd7edfbb3cc7 100644
+--- a/tools/perf/util/python.c
++++ b/tools/perf/util/python.c
+@@ -158,7 +158,7 @@ struct pyrf_event {
+ };
+
+ #define sample_members \
+-	sample_member_def(sample_ip, ip, T_ULONGLONG, "event type"), \
++	sample_member_def(sample_ip, ip, T_ULONGLONG, "event ip"), \
+	sample_member_def(sample_pid, pid, T_INT, "event pid"), \
+	sample_member_def(sample_tid, tid, T_INT, "event tid"), \
+	sample_member_def(sample_time, time, T_ULONGLONG, "event timestamp"), \
+@@ -585,6 +585,11 @@ static PyObject *pyrf_event__new(union perf_event *event)
+	      event->header.type == PERF_RECORD_SWITCH_CPU_WIDE))
+		return NULL;
+
++	// FIXME this better be dynamic or we need to parse everything
++	// before calling perf_mmap__consume(), including tracepoint fields.
++	if (sizeof(pevent->event) < event->header.size)
++		return NULL;
++
+	ptype = pyrf_event__type[event->header.type];
+	pevent = PyObject_New(struct pyrf_event, ptype);
+	if (pevent != NULL)
+@@ -1084,20 +1089,22 @@ static PyObject *pyrf_evlist__read_on_cpu(struct pyrf_evlist *pevlist,
+
+		evsel = evlist__event2evsel(evlist, event);
+		if (!evsel) {
++			Py_DECREF(pyevent);
+			Py_INCREF(Py_None);
+			return Py_None;
+		}
+
+		pevent->evsel = evsel;
+
+-		err = evsel__parse_sample(evsel, event, &pevent->sample);
+-
+-		/* Consume the even only after we parsed it out. */
+		perf_mmap__consume(&md->core);
+
+-		if (err)
++		err = evsel__parse_sample(evsel, &pevent->event, &pevent->sample);
++		if (err) {
++			Py_DECREF(pyevent);
+			return PyErr_Format(PyExc_OSError,
+					    "perf: can't parse sample, err=%d", err);
++		}
++
+		return pyevent;
+	}
+ end:
+diff --git a/tools/perf/util/units.c b/tools/perf/util/units.c
+index 32c39cfe209b3b..4c6a86e1cb54b2 100644
+--- a/tools/perf/util/units.c
++++ b/tools/perf/util/units.c
+@@ -64,7 +64,7 @@ unsigned long convert_unit(unsigned long value, char *unit)
+
+ int unit_number__scnprintf(char *buf, size_t size, u64 n)
+ {
+-	char unit[4] = "BKMG";
++	char unit[] = "BKMG";
+	int i = 0;
+
+	while (((n / 1024) > 1) && (i < 3)) {