From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from lists.gentoo.org (pigeon.gentoo.org [208.92.234.80])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by finch.gentoo.org (Postfix) with ESMTPS id A49C0138359
	for ; Fri, 21 Aug 2020 11:43:28 +0000 (UTC)
Received: from pigeon.gentoo.org (localhost [127.0.0.1])
	by pigeon.gentoo.org (Postfix) with SMTP id D4E04E0C34;
	Fri, 21 Aug 2020 11:43:27 +0000 (UTC)
Received: from smtp.gentoo.org (smtp.gentoo.org [140.211.166.183])
	(using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
	(No client certificate requested)
	by pigeon.gentoo.org (Postfix) with ESMTPS id 8129AE0C34
	for ; Fri, 21 Aug 2020 11:43:27 +0000 (UTC)
Received: from oystercatcher.gentoo.org (unknown [IPv6:2a01:4f8:202:4333:225:90ff:fed9:fc84])
	(using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
	(No client certificate requested)
	by smtp.gentoo.org (Postfix) with ESMTPS id 18F80340BBD
	for ; Fri, 21 Aug 2020 11:43:26 +0000 (UTC)
Received: from localhost.localdomain (localhost [IPv6:::1])
	by oystercatcher.gentoo.org (Postfix) with ESMTP id C6025319
	for ; Fri, 21 Aug 2020 11:43:24 +0000 (UTC)
From: "Alice Ferrazzi"
To: gentoo-commits@lists.gentoo.org
Content-Transfer-Encoding: 8bit
Content-type: text/plain; charset=UTF-8
Reply-To: gentoo-dev@lists.gentoo.org, "Alice Ferrazzi"
Message-ID: <1598010189.a02b235e592f91d1cc07f284754d422947791d7b.alicef@gentoo>
Subject: [gentoo-commits] proj/linux-patches:5.7 commit in: /
X-VCS-Repository: proj/linux-patches
X-VCS-Files: 0000_README 1016_linux-5.7.17.patch
X-VCS-Directories: /
X-VCS-Committer: alicef
X-VCS-Committer-Name: Alice Ferrazzi
X-VCS-Revision: a02b235e592f91d1cc07f284754d422947791d7b
X-VCS-Branch: 5.7
Date: Fri, 21 Aug 2020 11:43:24 +0000 (UTC)
Precedence: bulk
List-Post:
List-Help:
List-Unsubscribe:
List-Subscribe:
List-Id: Gentoo Linux mail
X-BeenThere: gentoo-commits@lists.gentoo.org
X-Auto-Response-Suppress:
DR, RN, NRN, OOF, AutoReply
X-Archives-Salt: c6a07976-0194-4b3c-9401-28b69495b29d
X-Archives-Hash: 7ff1dcc970b686a917532a72f4f4e155

commit: a02b235e592f91d1cc07f284754d422947791d7b
Author: Alice Ferrazzi  gentoo org>
AuthorDate: Fri Aug 21 11:43:03 2020 +0000
Commit: Alice Ferrazzi  gentoo org>
CommitDate: Fri Aug 21 11:43:09 2020 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=a02b235e

Linux patch 5.7.17

Signed-off-by: Alice Ferrazzi  gentoo.org>

 0000_README | 4 +
 1016_linux-5.7.17.patch | 7578 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 7582 insertions(+)

diff --git a/0000_README b/0000_README
index 1bfc2e2..18ff2b2 100644
--- a/0000_README
+++ b/0000_README
@@ -107,6 +107,10 @@ Patch: 1015_linux-5.7.16.patch
 From: http://www.kernel.org
 Desc: Linux 5.7.16
 
+Patch: 1016_linux-5.7.17.patch
+From: http://www.kernel.org
+Desc: Linux 5.7.17
+
 Patch: 1500_XATTR_USER_PREFIX.patch
 From: https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc: Support for namespace user.pax.* on tmpfs.

diff --git a/1016_linux-5.7.17.patch b/1016_linux-5.7.17.patch
new file mode 100644
index 0000000..e5861a0
--- /dev/null
+++ b/1016_linux-5.7.17.patch
@@ -0,0 +1,7578 @@
+diff --git a/Documentation/admin-guide/hw-vuln/multihit.rst b/Documentation/admin-guide/hw-vuln/multihit.rst
+index ba9988d8bce50..140e4cec38c33 100644
+--- a/Documentation/admin-guide/hw-vuln/multihit.rst
++++ b/Documentation/admin-guide/hw-vuln/multihit.rst
+@@ -80,6 +80,10 @@ The possible values in this file are:
+      - The processor is not vulnerable.
+    * - KVM: Mitigation: Split huge pages
+      - Software changes mitigate this issue.
++   * - KVM: Mitigation: VMX unsupported
++     - KVM is not vulnerable because Virtual Machine Extensions (VMX) is not supported.
++   * - KVM: Mitigation: VMX disabled
++     - KVM is not vulnerable because Virtual Machine Extensions (VMX) is disabled.
+ * - KVM: Vulnerable + - The processor is vulnerable, but no mitigation enabled + +diff --git a/Documentation/devicetree/bindings/iio/multiplexer/io-channel-mux.txt b/Documentation/devicetree/bindings/iio/multiplexer/io-channel-mux.txt +index c82794002595f..89647d7143879 100644 +--- a/Documentation/devicetree/bindings/iio/multiplexer/io-channel-mux.txt ++++ b/Documentation/devicetree/bindings/iio/multiplexer/io-channel-mux.txt +@@ -21,7 +21,7 @@ controller state. The mux controller state is described in + + Example: + mux: mux-controller { +- compatible = "mux-gpio"; ++ compatible = "gpio-mux"; + #mux-control-cells = <0>; + + mux-gpios = <&pioA 0 GPIO_ACTIVE_HIGH>, +diff --git a/Makefile b/Makefile +index 627657860aa54..c0d34d03ab5f1 100644 +--- a/Makefile ++++ b/Makefile +@@ -1,7 +1,7 @@ + # SPDX-License-Identifier: GPL-2.0 + VERSION = 5 + PATCHLEVEL = 7 +-SUBLEVEL = 16 ++SUBLEVEL = 17 + EXTRAVERSION = + NAME = Kleptomaniac Octopus + +diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c +index 4d7879484cecc..581602413a130 100644 +--- a/arch/arm64/kernel/perf_event.c ++++ b/arch/arm64/kernel/perf_event.c +@@ -155,7 +155,7 @@ armv8pmu_events_sysfs_show(struct device *dev, + + pmu_attr = container_of(attr, struct perf_pmu_events_attr, attr); + +- return sprintf(page, "event=0x%03llx\n", pmu_attr->id); ++ return sprintf(page, "event=0x%04llx\n", pmu_attr->id); + } + + #define ARMV8_EVENT_ATTR(name, config) \ +@@ -244,10 +244,13 @@ armv8pmu_event_attr_is_visible(struct kobject *kobj, + test_bit(pmu_attr->id, cpu_pmu->pmceid_bitmap)) + return attr->mode; + +- pmu_attr->id -= ARMV8_PMUV3_EXT_COMMON_EVENT_BASE; +- if (pmu_attr->id < ARMV8_PMUV3_MAX_COMMON_EVENTS && +- test_bit(pmu_attr->id, cpu_pmu->pmceid_ext_bitmap)) +- return attr->mode; ++ if (pmu_attr->id >= ARMV8_PMUV3_EXT_COMMON_EVENT_BASE) { ++ u64 id = pmu_attr->id - ARMV8_PMUV3_EXT_COMMON_EVENT_BASE; ++ ++ if (id < ARMV8_PMUV3_MAX_COMMON_EVENTS && ++ test_bit(id, 
cpu_pmu->pmceid_ext_bitmap)) ++ return attr->mode; ++ } + + return 0; + } +diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig +index 690718b3701af..1db782a08c6e5 100644 +--- a/arch/mips/Kconfig ++++ b/arch/mips/Kconfig +@@ -722,6 +722,7 @@ config SGI_IP27 + select SYS_SUPPORTS_NUMA + select SYS_SUPPORTS_SMP + select MIPS_L1_CACHE_SHIFT_7 ++ select NUMA + help + This are the SGI Origin 200, Origin 2000 and Onyx 2 Graphics + workstations. To compile a Linux kernel that runs on these, say Y +diff --git a/arch/mips/boot/dts/ingenic/qi_lb60.dts b/arch/mips/boot/dts/ingenic/qi_lb60.dts +index 7a371d9c5a33f..eda37fb516f0e 100644 +--- a/arch/mips/boot/dts/ingenic/qi_lb60.dts ++++ b/arch/mips/boot/dts/ingenic/qi_lb60.dts +@@ -69,7 +69,7 @@ + "Speaker", "OUTL", + "Speaker", "OUTR", + "INL", "LOUT", +- "INL", "ROUT"; ++ "INR", "ROUT"; + + simple-audio-card,aux-devs = <&>; + +diff --git a/arch/mips/kernel/topology.c b/arch/mips/kernel/topology.c +index cd3e1f82e1a5d..08ad6371fbe08 100644 +--- a/arch/mips/kernel/topology.c ++++ b/arch/mips/kernel/topology.c +@@ -20,7 +20,7 @@ static int __init topology_init(void) + for_each_present_cpu(i) { + struct cpu *c = &per_cpu(cpu_devices, i); + +- c->hotpluggable = 1; ++ c->hotpluggable = !!i; + ret = register_cpu(c, i); + if (ret) + printk(KERN_WARNING "topology_init: register_cpu %d " +diff --git a/arch/openrisc/kernel/stacktrace.c b/arch/openrisc/kernel/stacktrace.c +index 43f140a28bc72..54d38809e22cb 100644 +--- a/arch/openrisc/kernel/stacktrace.c ++++ b/arch/openrisc/kernel/stacktrace.c +@@ -13,6 +13,7 @@ + #include + #include + #include ++#include + #include + + #include +@@ -68,12 +69,25 @@ void save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace) + { + unsigned long *sp = NULL; + ++ if (!try_get_task_stack(tsk)) ++ return; ++ + if (tsk == current) + sp = (unsigned long *) &sp; +- else +- sp = (unsigned long *) KSTK_ESP(tsk); ++ else { ++ unsigned long ksp; ++ ++ /* Locate stack from kernel context */ ++ ksp = 
task_thread_info(tsk)->ksp; ++ ksp += STACK_FRAME_OVERHEAD; /* redzone */ ++ ksp += sizeof(struct pt_regs); ++ ++ sp = (unsigned long *) ksp; ++ } + + unwind_stack(trace, sp, save_stack_address_nosched); ++ ++ put_task_stack(tsk); + } + EXPORT_SYMBOL_GPL(save_stack_trace_tsk); + +diff --git a/arch/powerpc/include/asm/percpu.h b/arch/powerpc/include/asm/percpu.h +index dce863a7635cd..8e5b7d0b851c6 100644 +--- a/arch/powerpc/include/asm/percpu.h ++++ b/arch/powerpc/include/asm/percpu.h +@@ -10,8 +10,6 @@ + + #ifdef CONFIG_SMP + +-#include +- + #define __my_cpu_offset local_paca->data_offset + + #endif /* CONFIG_SMP */ +@@ -19,4 +17,6 @@ + + #include + ++#include ++ + #endif /* _ASM_POWERPC_PERCPU_H_ */ +diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c +index 84af6c8eecf71..0a539afc8f4fa 100644 +--- a/arch/powerpc/mm/fault.c ++++ b/arch/powerpc/mm/fault.c +@@ -241,6 +241,9 @@ static bool bad_kernel_fault(struct pt_regs *regs, unsigned long error_code, + return false; + } + ++// This comes from 64-bit struct rt_sigframe + __SIGNAL_FRAMESIZE ++#define SIGFRAME_MAX_SIZE (4096 + 128) ++ + static bool bad_stack_expansion(struct pt_regs *regs, unsigned long address, + struct vm_area_struct *vma, unsigned int flags, + bool *must_retry) +@@ -248,7 +251,7 @@ static bool bad_stack_expansion(struct pt_regs *regs, unsigned long address, + /* + * N.B. The POWER/Open ABI allows programs to access up to + * 288 bytes below the stack pointer. +- * The kernel signal delivery code writes up to about 1.5kB ++ * The kernel signal delivery code writes a bit over 4KB + * below the stack pointer (r1) before decrementing it. + * The exec code can write slightly over 640kB to the stack + * before setting the user r1. Thus we allow the stack to +@@ -273,7 +276,7 @@ static bool bad_stack_expansion(struct pt_regs *regs, unsigned long address, + * between the last mapped region and the stack will + * expand the stack rather than segfaulting. 
+ */ +- if (address + 2048 >= uregs->gpr[1]) ++ if (address + SIGFRAME_MAX_SIZE >= uregs->gpr[1]) + return false; + + if ((flags & FAULT_FLAG_WRITE) && (flags & FAULT_FLAG_USER) && +diff --git a/arch/powerpc/mm/ptdump/hashpagetable.c b/arch/powerpc/mm/ptdump/hashpagetable.c +index b6ed9578382ff..18f9586fbb935 100644 +--- a/arch/powerpc/mm/ptdump/hashpagetable.c ++++ b/arch/powerpc/mm/ptdump/hashpagetable.c +@@ -259,7 +259,7 @@ static int pseries_find(unsigned long ea, int psize, bool primary, u64 *v, u64 * + for (i = 0; i < HPTES_PER_GROUP; i += 4, hpte_group += 4) { + lpar_rc = plpar_pte_read_4(0, hpte_group, (void *)ptes); + +- if (lpar_rc != H_SUCCESS) ++ if (lpar_rc) + continue; + for (j = 0; j < 4; j++) { + if (HPTE_V_COMPARE(ptes[j].v, want_v) && +diff --git a/arch/powerpc/platforms/pseries/hotplug-memory.c b/arch/powerpc/platforms/pseries/hotplug-memory.c +index b2cde17323015..6d912db46deb7 100644 +--- a/arch/powerpc/platforms/pseries/hotplug-memory.c ++++ b/arch/powerpc/platforms/pseries/hotplug-memory.c +@@ -27,7 +27,7 @@ static bool rtas_hp_event; + unsigned long pseries_memory_block_size(void) + { + struct device_node *np; +- unsigned int memblock_size = MIN_MEMORY_BLOCK_SIZE; ++ u64 memblock_size = MIN_MEMORY_BLOCK_SIZE; + struct resource r; + + np = of_find_node_by_path("/ibm,dynamic-reconfiguration-memory"); +diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig +index ae01be202204b..03e491c103e76 100644 +--- a/arch/s390/Kconfig ++++ b/arch/s390/Kconfig +@@ -769,6 +769,7 @@ config VFIO_AP + def_tristate n + prompt "VFIO support for AP devices" + depends on S390_AP_IOMMU && VFIO_MDEV_DEVICE && KVM ++ depends on ZCRYPT + help + This driver grants access to Adjunct Processor (AP) devices + via the VFIO mediated device interface. 
+diff --git a/arch/s390/lib/test_unwind.c b/arch/s390/lib/test_unwind.c +index 32b7a30b2485d..b0b12b46bc572 100644 +--- a/arch/s390/lib/test_unwind.c ++++ b/arch/s390/lib/test_unwind.c +@@ -63,6 +63,7 @@ static noinline int test_unwind(struct task_struct *task, struct pt_regs *regs, + break; + if (state.reliable && !addr) { + pr_err("unwind state reliable but addr is 0\n"); ++ kfree(bt); + return -EINVAL; + } + sprint_symbol(sym, addr); +diff --git a/arch/sh/boards/mach-landisk/setup.c b/arch/sh/boards/mach-landisk/setup.c +index 16b4d8b0bb850..2c44b94f82fb2 100644 +--- a/arch/sh/boards/mach-landisk/setup.c ++++ b/arch/sh/boards/mach-landisk/setup.c +@@ -82,6 +82,9 @@ device_initcall(landisk_devices_setup); + + static void __init landisk_setup(char **cmdline_p) + { ++ /* I/O port identity mapping */ ++ __set_io_port_base(0); ++ + /* LED ON */ + __raw_writeb(__raw_readb(PA_LED) | 0x03, PA_LED); + +diff --git a/arch/x86/events/rapl.c b/arch/x86/events/rapl.c +index ece043fb7b494..fbc32b28f4cb8 100644 +--- a/arch/x86/events/rapl.c ++++ b/arch/x86/events/rapl.c +@@ -642,7 +642,7 @@ static const struct attribute_group *rapl_attr_update[] = { + &rapl_events_pkg_group, + &rapl_events_ram_group, + &rapl_events_gpu_group, +- &rapl_events_gpu_group, ++ &rapl_events_psys_group, + NULL, + }; + +diff --git a/arch/x86/kernel/apic/vector.c b/arch/x86/kernel/apic/vector.c +index 410363e60968f..3ee4830ebfd31 100644 +--- a/arch/x86/kernel/apic/vector.c ++++ b/arch/x86/kernel/apic/vector.c +@@ -560,6 +560,10 @@ static int x86_vector_alloc_irqs(struct irq_domain *domain, unsigned int virq, + * as that can corrupt the affinity move state. + */ + irqd_set_handle_enforce_irqctx(irqd); ++ ++ /* Don't invoke affinity setter on deactivated interrupts */ ++ irqd_set_affinity_on_activate(irqd); ++ + /* + * Legacy vectors are already assigned when the IOAPIC + * takes them over. They stay on the same vector. 
This is +diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c +index b53dcff21438c..8c963ea39f9df 100644 +--- a/arch/x86/kernel/cpu/bugs.c ++++ b/arch/x86/kernel/cpu/bugs.c +@@ -31,6 +31,7 @@ + #include + #include + #include ++#include + + #include "cpu.h" + +@@ -1556,7 +1557,12 @@ static ssize_t l1tf_show_state(char *buf) + + static ssize_t itlb_multihit_show_state(char *buf) + { +- if (itlb_multihit_kvm_mitigation) ++ if (!boot_cpu_has(X86_FEATURE_MSR_IA32_FEAT_CTL) || ++ !boot_cpu_has(X86_FEATURE_VMX)) ++ return sprintf(buf, "KVM: Mitigation: VMX unsupported\n"); ++ else if (!(cr4_read_shadow() & X86_CR4_VMXE)) ++ return sprintf(buf, "KVM: Mitigation: VMX disabled\n"); ++ else if (itlb_multihit_kvm_mitigation) + return sprintf(buf, "KVM: Mitigation: Split huge pages\n"); + else + return sprintf(buf, "KVM: Vulnerable\n"); +diff --git a/arch/x86/kernel/tsc_msr.c b/arch/x86/kernel/tsc_msr.c +index 4fec6f3a1858b..a654a9b4b77c0 100644 +--- a/arch/x86/kernel/tsc_msr.c ++++ b/arch/x86/kernel/tsc_msr.c +@@ -133,10 +133,15 @@ static const struct freq_desc freq_desc_ann = { + .mask = 0x0f, + }; + +-/* 24 MHz crystal? : 24 * 13 / 4 = 78 MHz */ ++/* ++ * 24 MHz crystal? : 24 * 13 / 4 = 78 MHz ++ * Frequency step for Lightning Mountain SoC is fixed to 78 MHz, ++ * so all the frequency entries are 78000. 
++ */ + static const struct freq_desc freq_desc_lgm = { + .use_msr_plat = true, +- .freqs = { 78000, 78000, 78000, 78000, 78000, 78000, 78000, 78000 }, ++ .freqs = { 78000, 78000, 78000, 78000, 78000, 78000, 78000, 78000, ++ 78000, 78000, 78000, 78000, 78000, 78000, 78000, 78000 }, + .mask = 0x0f, + }; + +diff --git a/arch/xtensa/include/asm/thread_info.h b/arch/xtensa/include/asm/thread_info.h +index f092cc3f4e66d..956d4d47c6cd1 100644 +--- a/arch/xtensa/include/asm/thread_info.h ++++ b/arch/xtensa/include/asm/thread_info.h +@@ -55,6 +55,10 @@ struct thread_info { + mm_segment_t addr_limit; /* thread address space */ + + unsigned long cpenable; ++#if XCHAL_HAVE_EXCLUSIVE ++ /* result of the most recent exclusive store */ ++ unsigned long atomctl8; ++#endif + + /* Allocate storage for extra user states and coprocessor states. */ + #if XTENSA_HAVE_COPROCESSORS +diff --git a/arch/xtensa/kernel/asm-offsets.c b/arch/xtensa/kernel/asm-offsets.c +index 33a257b33723a..dc5c83cad9be8 100644 +--- a/arch/xtensa/kernel/asm-offsets.c ++++ b/arch/xtensa/kernel/asm-offsets.c +@@ -93,6 +93,9 @@ int main(void) + DEFINE(THREAD_RA, offsetof (struct task_struct, thread.ra)); + DEFINE(THREAD_SP, offsetof (struct task_struct, thread.sp)); + DEFINE(THREAD_CPENABLE, offsetof (struct thread_info, cpenable)); ++#if XCHAL_HAVE_EXCLUSIVE ++ DEFINE(THREAD_ATOMCTL8, offsetof (struct thread_info, atomctl8)); ++#endif + #if XTENSA_HAVE_COPROCESSORS + DEFINE(THREAD_XTREGS_CP0, offsetof(struct thread_info, xtregs_cp.cp0)); + DEFINE(THREAD_XTREGS_CP1, offsetof(struct thread_info, xtregs_cp.cp1)); +diff --git a/arch/xtensa/kernel/entry.S b/arch/xtensa/kernel/entry.S +index 06fbb0a171f1e..26e2869d255b0 100644 +--- a/arch/xtensa/kernel/entry.S ++++ b/arch/xtensa/kernel/entry.S +@@ -374,6 +374,11 @@ common_exception: + s32i a2, a1, PT_LCOUNT + #endif + ++#if XCHAL_HAVE_EXCLUSIVE ++ /* Clear exclusive access monitor set by interrupted code */ ++ clrex ++#endif ++ + /* It is now save to restore the 
EXC_TABLE_FIXUP variable. */ + + rsr a2, exccause +@@ -2020,6 +2025,12 @@ ENTRY(_switch_to) + s32i a3, a4, THREAD_CPENABLE + #endif + ++#if XCHAL_HAVE_EXCLUSIVE ++ l32i a3, a5, THREAD_ATOMCTL8 ++ getex a3 ++ s32i a3, a4, THREAD_ATOMCTL8 ++#endif ++ + /* Flush register file. */ + + spill_registers_kernel +diff --git a/arch/xtensa/kernel/perf_event.c b/arch/xtensa/kernel/perf_event.c +index 9bae79f703013..86c9ba9631551 100644 +--- a/arch/xtensa/kernel/perf_event.c ++++ b/arch/xtensa/kernel/perf_event.c +@@ -401,7 +401,7 @@ static struct pmu xtensa_pmu = { + .read = xtensa_pmu_read, + }; + +-static int xtensa_pmu_setup(int cpu) ++static int xtensa_pmu_setup(unsigned int cpu) + { + unsigned i; + +diff --git a/crypto/af_alg.c b/crypto/af_alg.c +index 28fc323e3fe30..5882ed46f1adb 100644 +--- a/crypto/af_alg.c ++++ b/crypto/af_alg.c +@@ -635,6 +635,7 @@ void af_alg_pull_tsgl(struct sock *sk, size_t used, struct scatterlist *dst, + + if (!ctx->used) + ctx->merge = 0; ++ ctx->init = ctx->more; + } + EXPORT_SYMBOL_GPL(af_alg_pull_tsgl); + +@@ -734,9 +735,10 @@ EXPORT_SYMBOL_GPL(af_alg_wmem_wakeup); + * + * @sk socket of connection to user space + * @flags If MSG_DONTWAIT is set, then only report if function would sleep ++ * @min Set to minimum request size if partial requests are allowed. 
+ * @return 0 when writable memory is available, < 0 upon error + */ +-int af_alg_wait_for_data(struct sock *sk, unsigned flags) ++int af_alg_wait_for_data(struct sock *sk, unsigned flags, unsigned min) + { + DEFINE_WAIT_FUNC(wait, woken_wake_function); + struct alg_sock *ask = alg_sk(sk); +@@ -754,7 +756,9 @@ int af_alg_wait_for_data(struct sock *sk, unsigned flags) + if (signal_pending(current)) + break; + timeout = MAX_SCHEDULE_TIMEOUT; +- if (sk_wait_event(sk, &timeout, (ctx->used || !ctx->more), ++ if (sk_wait_event(sk, &timeout, ++ ctx->init && (!ctx->more || ++ (min && ctx->used >= min)), + &wait)) { + err = 0; + break; +@@ -843,10 +847,11 @@ int af_alg_sendmsg(struct socket *sock, struct msghdr *msg, size_t size, + } + + lock_sock(sk); +- if (!ctx->more && ctx->used) { ++ if (ctx->init && (init || !ctx->more)) { + err = -EINVAL; + goto unlock; + } ++ ctx->init = true; + + if (init) { + ctx->enc = enc; +diff --git a/crypto/algif_aead.c b/crypto/algif_aead.c +index 0ae000a61c7f5..43c6aa784858b 100644 +--- a/crypto/algif_aead.c ++++ b/crypto/algif_aead.c +@@ -106,8 +106,8 @@ static int _aead_recvmsg(struct socket *sock, struct msghdr *msg, + size_t usedpages = 0; /* [in] RX bufs to be used from user */ + size_t processed = 0; /* [in] TX bufs to be consumed */ + +- if (!ctx->used) { +- err = af_alg_wait_for_data(sk, flags); ++ if (!ctx->init || ctx->more) { ++ err = af_alg_wait_for_data(sk, flags, 0); + if (err) + return err; + } +@@ -558,12 +558,6 @@ static int aead_accept_parent_nokey(void *private, struct sock *sk) + + INIT_LIST_HEAD(&ctx->tsgl_list); + ctx->len = len; +- ctx->used = 0; +- atomic_set(&ctx->rcvused, 0); +- ctx->more = 0; +- ctx->merge = 0; +- ctx->enc = 0; +- ctx->aead_assoclen = 0; + crypto_init_wait(&ctx->wait); + + ask->private = ctx; +diff --git a/crypto/algif_skcipher.c b/crypto/algif_skcipher.c +index ec5567c87a6df..81c4022285a7c 100644 +--- a/crypto/algif_skcipher.c ++++ b/crypto/algif_skcipher.c +@@ -61,8 +61,8 @@ static int 
_skcipher_recvmsg(struct socket *sock, struct msghdr *msg, + int err = 0; + size_t len = 0; + +- if (!ctx->used) { +- err = af_alg_wait_for_data(sk, flags); ++ if (!ctx->init || (ctx->more && ctx->used < bs)) { ++ err = af_alg_wait_for_data(sk, flags, bs); + if (err) + return err; + } +@@ -333,6 +333,7 @@ static int skcipher_accept_parent_nokey(void *private, struct sock *sk) + ctx = sock_kmalloc(sk, len, GFP_KERNEL); + if (!ctx) + return -ENOMEM; ++ memset(ctx, 0, len); + + ctx->iv = sock_kmalloc(sk, crypto_skcipher_ivsize(tfm), + GFP_KERNEL); +@@ -340,16 +341,10 @@ static int skcipher_accept_parent_nokey(void *private, struct sock *sk) + sock_kfree_s(sk, ctx, len); + return -ENOMEM; + } +- + memset(ctx->iv, 0, crypto_skcipher_ivsize(tfm)); + + INIT_LIST_HEAD(&ctx->tsgl_list); + ctx->len = len; +- ctx->used = 0; +- atomic_set(&ctx->rcvused, 0); +- ctx->more = 0; +- ctx->merge = 0; +- ctx->enc = 0; + crypto_init_wait(&ctx->wait); + + ask->private = ctx; +diff --git a/drivers/base/dd.c b/drivers/base/dd.c +index 60bd0a9b9918b..da5a8e90a8852 100644 +--- a/drivers/base/dd.c ++++ b/drivers/base/dd.c +@@ -846,7 +846,9 @@ static int __device_attach(struct device *dev, bool allow_async) + int ret = 0; + + device_lock(dev); +- if (dev->driver) { ++ if (dev->p->dead) { ++ goto out_unlock; ++ } else if (dev->driver) { + if (device_is_bound(dev)) { + ret = 1; + goto out_unlock; +diff --git a/drivers/clk/actions/owl-s500.c b/drivers/clk/actions/owl-s500.c +index e2007ac4d235d..0eb83a0b70bcc 100644 +--- a/drivers/clk/actions/owl-s500.c ++++ b/drivers/clk/actions/owl-s500.c +@@ -183,7 +183,7 @@ static OWL_GATE(timer_clk, "timer_clk", "hosc", CMU_DEVCLKEN1, 27, 0, 0); + static OWL_GATE(hdmi_clk, "hdmi_clk", "hosc", CMU_DEVCLKEN1, 3, 0, 0); + + /* divider clocks */ +-static OWL_DIVIDER(h_clk, "h_clk", "ahbprevdiv_clk", CMU_BUSCLK1, 12, 2, NULL, 0, 0); ++static OWL_DIVIDER(h_clk, "h_clk", "ahbprediv_clk", CMU_BUSCLK1, 12, 2, NULL, 0, 0); + static OWL_DIVIDER(rmii_ref_clk, 
"rmii_ref_clk", "ethernet_pll_clk", CMU_ETHERNETPLL, 1, 1, rmii_ref_div_table, 0, 0); + + /* factor clocks */ +diff --git a/drivers/clk/bcm/clk-bcm2835.c b/drivers/clk/bcm/clk-bcm2835.c +index 7c845c293af00..798f0b419c79f 100644 +--- a/drivers/clk/bcm/clk-bcm2835.c ++++ b/drivers/clk/bcm/clk-bcm2835.c +@@ -314,6 +314,7 @@ struct bcm2835_cprman { + struct device *dev; + void __iomem *regs; + spinlock_t regs_lock; /* spinlock for all clocks */ ++ unsigned int soc; + + /* + * Real names of cprman clock parents looked up through +@@ -525,6 +526,20 @@ static int bcm2835_pll_is_on(struct clk_hw *hw) + A2W_PLL_CTRL_PRST_DISABLE; + } + ++static u32 bcm2835_pll_get_prediv_mask(struct bcm2835_cprman *cprman, ++ const struct bcm2835_pll_data *data) ++{ ++ /* ++ * On BCM2711 there isn't a pre-divisor available in the PLL feedback ++ * loop. Bits 13:14 of ANA1 (PLLA,PLLB,PLLC,PLLD) have been re-purposed ++ * for to for VCO RANGE bits. ++ */ ++ if (cprman->soc & SOC_BCM2711) ++ return 0; ++ ++ return data->ana->fb_prediv_mask; ++} ++ + static void bcm2835_pll_choose_ndiv_and_fdiv(unsigned long rate, + unsigned long parent_rate, + u32 *ndiv, u32 *fdiv) +@@ -582,7 +597,7 @@ static unsigned long bcm2835_pll_get_rate(struct clk_hw *hw, + ndiv = (a2wctrl & A2W_PLL_CTRL_NDIV_MASK) >> A2W_PLL_CTRL_NDIV_SHIFT; + pdiv = (a2wctrl & A2W_PLL_CTRL_PDIV_MASK) >> A2W_PLL_CTRL_PDIV_SHIFT; + using_prediv = cprman_read(cprman, data->ana_reg_base + 4) & +- data->ana->fb_prediv_mask; ++ bcm2835_pll_get_prediv_mask(cprman, data); + + if (using_prediv) { + ndiv *= 2; +@@ -665,6 +680,7 @@ static int bcm2835_pll_set_rate(struct clk_hw *hw, + struct bcm2835_pll *pll = container_of(hw, struct bcm2835_pll, hw); + struct bcm2835_cprman *cprman = pll->cprman; + const struct bcm2835_pll_data *data = pll->data; ++ u32 prediv_mask = bcm2835_pll_get_prediv_mask(cprman, data); + bool was_using_prediv, use_fb_prediv, do_ana_setup_first; + u32 ndiv, fdiv, a2w_ctl; + u32 ana[4]; +@@ -682,7 +698,7 @@ static int 
bcm2835_pll_set_rate(struct clk_hw *hw, + for (i = 3; i >= 0; i--) + ana[i] = cprman_read(cprman, data->ana_reg_base + i * 4); + +- was_using_prediv = ana[1] & data->ana->fb_prediv_mask; ++ was_using_prediv = ana[1] & prediv_mask; + + ana[0] &= ~data->ana->mask0; + ana[0] |= data->ana->set0; +@@ -692,10 +708,10 @@ static int bcm2835_pll_set_rate(struct clk_hw *hw, + ana[3] |= data->ana->set3; + + if (was_using_prediv && !use_fb_prediv) { +- ana[1] &= ~data->ana->fb_prediv_mask; ++ ana[1] &= ~prediv_mask; + do_ana_setup_first = true; + } else if (!was_using_prediv && use_fb_prediv) { +- ana[1] |= data->ana->fb_prediv_mask; ++ ana[1] |= prediv_mask; + do_ana_setup_first = false; + } else { + do_ana_setup_first = true; +@@ -2232,6 +2248,7 @@ static int bcm2835_clk_probe(struct platform_device *pdev) + platform_set_drvdata(pdev, cprman); + + cprman->onecell.num = asize; ++ cprman->soc = pdata->soc; + hws = cprman->onecell.hws; + + for (i = 0; i < asize; i++) { +diff --git a/drivers/clk/qcom/clk-alpha-pll.c b/drivers/clk/qcom/clk-alpha-pll.c +index 9b2dfa08acb2a..1325139173c95 100644 +--- a/drivers/clk/qcom/clk-alpha-pll.c ++++ b/drivers/clk/qcom/clk-alpha-pll.c +@@ -56,7 +56,6 @@ + #define PLL_STATUS(p) ((p)->offset + (p)->regs[PLL_OFF_STATUS]) + #define PLL_OPMODE(p) ((p)->offset + (p)->regs[PLL_OFF_OPMODE]) + #define PLL_FRAC(p) ((p)->offset + (p)->regs[PLL_OFF_FRAC]) +-#define PLL_CAL_VAL(p) ((p)->offset + (p)->regs[PLL_OFF_CAL_VAL]) + + const u8 clk_alpha_pll_regs[][PLL_OFF_MAX_REGS] = { + [CLK_ALPHA_PLL_TYPE_DEFAULT] = { +@@ -115,7 +114,6 @@ const u8 clk_alpha_pll_regs[][PLL_OFF_MAX_REGS] = { + [PLL_OFF_STATUS] = 0x30, + [PLL_OFF_OPMODE] = 0x38, + [PLL_OFF_ALPHA_VAL] = 0x40, +- [PLL_OFF_CAL_VAL] = 0x44, + }, + [CLK_ALPHA_PLL_TYPE_LUCID] = { + [PLL_OFF_L_VAL] = 0x04, +diff --git a/drivers/clk/qcom/gcc-sdm660.c b/drivers/clk/qcom/gcc-sdm660.c +index bf5730832ef3d..c6fb57cd576f5 100644 +--- a/drivers/clk/qcom/gcc-sdm660.c ++++ b/drivers/clk/qcom/gcc-sdm660.c +@@ 
-1715,6 +1715,9 @@ static struct clk_branch gcc_mss_cfg_ahb_clk = { + + static struct clk_branch gcc_mss_mnoc_bimc_axi_clk = { + .halt_reg = 0x8a004, ++ .halt_check = BRANCH_HALT, ++ .hwcg_reg = 0x8a004, ++ .hwcg_bit = 1, + .clkr = { + .enable_reg = 0x8a004, + .enable_mask = BIT(0), +diff --git a/drivers/clk/qcom/gcc-sm8150.c b/drivers/clk/qcom/gcc-sm8150.c +index 72524cf110487..55e9d6d75a0cd 100644 +--- a/drivers/clk/qcom/gcc-sm8150.c ++++ b/drivers/clk/qcom/gcc-sm8150.c +@@ -1617,6 +1617,7 @@ static struct clk_branch gcc_gpu_cfg_ahb_clk = { + }; + + static struct clk_branch gcc_gpu_gpll0_clk_src = { ++ .halt_check = BRANCH_HALT_SKIP, + .clkr = { + .enable_reg = 0x52004, + .enable_mask = BIT(15), +@@ -1632,13 +1633,14 @@ static struct clk_branch gcc_gpu_gpll0_clk_src = { + }; + + static struct clk_branch gcc_gpu_gpll0_div_clk_src = { ++ .halt_check = BRANCH_HALT_SKIP, + .clkr = { + .enable_reg = 0x52004, + .enable_mask = BIT(16), + .hw.init = &(struct clk_init_data){ + .name = "gcc_gpu_gpll0_div_clk_src", + .parent_hws = (const struct clk_hw *[]){ +- &gcc_gpu_gpll0_clk_src.clkr.hw }, ++ &gpll0_out_even.clkr.hw }, + .num_parents = 1, + .flags = CLK_SET_RATE_PARENT, + .ops = &clk_branch2_ops, +@@ -1729,6 +1731,7 @@ static struct clk_branch gcc_npu_cfg_ahb_clk = { + }; + + static struct clk_branch gcc_npu_gpll0_clk_src = { ++ .halt_check = BRANCH_HALT_SKIP, + .clkr = { + .enable_reg = 0x52004, + .enable_mask = BIT(18), +@@ -1744,13 +1747,14 @@ static struct clk_branch gcc_npu_gpll0_clk_src = { + }; + + static struct clk_branch gcc_npu_gpll0_div_clk_src = { ++ .halt_check = BRANCH_HALT_SKIP, + .clkr = { + .enable_reg = 0x52004, + .enable_mask = BIT(19), + .hw.init = &(struct clk_init_data){ + .name = "gcc_npu_gpll0_div_clk_src", + .parent_hws = (const struct clk_hw *[]){ +- &gcc_npu_gpll0_clk_src.clkr.hw }, ++ &gpll0_out_even.clkr.hw }, + .num_parents = 1, + .flags = CLK_SET_RATE_PARENT, + .ops = &clk_branch2_ops, +diff --git a/drivers/clk/sirf/clk-atlas6.c 
b/drivers/clk/sirf/clk-atlas6.c +index c84d5bab7ac28..b95483bb6a5ec 100644 +--- a/drivers/clk/sirf/clk-atlas6.c ++++ b/drivers/clk/sirf/clk-atlas6.c +@@ -135,7 +135,7 @@ static void __init atlas6_clk_init(struct device_node *np) + + for (i = pll1; i < maxclk; i++) { + atlas6_clks[i] = clk_register(NULL, atlas6_clk_hw_array[i]); +- BUG_ON(!atlas6_clks[i]); ++ BUG_ON(IS_ERR(atlas6_clks[i])); + } + clk_register_clkdev(atlas6_clks[cpu], NULL, "cpu"); + clk_register_clkdev(atlas6_clks[io], NULL, "io"); +diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c +index bf90a4fcabd1f..8149ac4d6ef22 100644 +--- a/drivers/crypto/caam/caamalg.c ++++ b/drivers/crypto/caam/caamalg.c +@@ -810,12 +810,6 @@ static int ctr_skcipher_setkey(struct crypto_skcipher *skcipher, + return skcipher_setkey(skcipher, key, keylen, ctx1_iv_off); + } + +-static int arc4_skcipher_setkey(struct crypto_skcipher *skcipher, +- const u8 *key, unsigned int keylen) +-{ +- return skcipher_setkey(skcipher, key, keylen, 0); +-} +- + static int des_skcipher_setkey(struct crypto_skcipher *skcipher, + const u8 *key, unsigned int keylen) + { +@@ -1967,21 +1961,6 @@ static struct caam_skcipher_alg driver_algs[] = { + }, + .caam.class1_alg_type = OP_ALG_ALGSEL_3DES | OP_ALG_AAI_ECB, + }, +- { +- .skcipher = { +- .base = { +- .cra_name = "ecb(arc4)", +- .cra_driver_name = "ecb-arc4-caam", +- .cra_blocksize = ARC4_BLOCK_SIZE, +- }, +- .setkey = arc4_skcipher_setkey, +- .encrypt = skcipher_encrypt, +- .decrypt = skcipher_decrypt, +- .min_keysize = ARC4_MIN_KEY_SIZE, +- .max_keysize = ARC4_MAX_KEY_SIZE, +- }, +- .caam.class1_alg_type = OP_ALG_ALGSEL_ARC4 | OP_ALG_AAI_ECB, +- }, + }; + + static struct caam_aead_alg driver_aeads[] = { +@@ -3457,7 +3436,6 @@ int caam_algapi_init(struct device *ctrldev) + struct caam_drv_private *priv = dev_get_drvdata(ctrldev); + int i = 0, err = 0; + u32 aes_vid, aes_inst, des_inst, md_vid, md_inst, ccha_inst, ptha_inst; +- u32 arc4_inst; + unsigned int md_limit = 
SHA512_DIGEST_SIZE; + bool registered = false, gcm_support; + +@@ -3477,8 +3455,6 @@ int caam_algapi_init(struct device *ctrldev) + CHA_ID_LS_DES_SHIFT; + aes_inst = cha_inst & CHA_ID_LS_AES_MASK; + md_inst = (cha_inst & CHA_ID_LS_MD_MASK) >> CHA_ID_LS_MD_SHIFT; +- arc4_inst = (cha_inst & CHA_ID_LS_ARC4_MASK) >> +- CHA_ID_LS_ARC4_SHIFT; + ccha_inst = 0; + ptha_inst = 0; + +@@ -3499,7 +3475,6 @@ int caam_algapi_init(struct device *ctrldev) + md_inst = mdha & CHA_VER_NUM_MASK; + ccha_inst = rd_reg32(&priv->ctrl->vreg.ccha) & CHA_VER_NUM_MASK; + ptha_inst = rd_reg32(&priv->ctrl->vreg.ptha) & CHA_VER_NUM_MASK; +- arc4_inst = rd_reg32(&priv->ctrl->vreg.afha) & CHA_VER_NUM_MASK; + + gcm_support = aesa & CHA_VER_MISC_AES_GCM; + } +@@ -3522,10 +3497,6 @@ int caam_algapi_init(struct device *ctrldev) + if (!aes_inst && (alg_sel == OP_ALG_ALGSEL_AES)) + continue; + +- /* Skip ARC4 algorithms if not supported by device */ +- if (!arc4_inst && alg_sel == OP_ALG_ALGSEL_ARC4) +- continue; +- + /* + * Check support for AES modes not available + * on LP devices. 
+diff --git a/drivers/crypto/caam/compat.h b/drivers/crypto/caam/compat.h +index 60e2a54c19f11..c3c22a8de4c00 100644 +--- a/drivers/crypto/caam/compat.h ++++ b/drivers/crypto/caam/compat.h +@@ -43,7 +43,6 @@ + #include + #include + #include +-#include + #include + #include + #include +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c +index 3c6f60c5b1a5a..088f43ebdceb6 100644 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c +@@ -1679,15 +1679,15 @@ static int psp_suspend(void *handle) + } + } + +- ret = psp_tmr_terminate(psp); ++ ret = psp_asd_unload(psp); + if (ret) { +- DRM_ERROR("Falied to terminate tmr\n"); ++ DRM_ERROR("Failed to unload asd\n"); + return ret; + } + +- ret = psp_asd_unload(psp); ++ ret = psp_tmr_terminate(psp); + if (ret) { +- DRM_ERROR("Failed to unload asd\n"); ++ DRM_ERROR("Falied to terminate tmr\n"); + return ret; + } + +diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +index d50751ae73f1b..7cb4fe479614e 100644 +--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c ++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +@@ -8458,6 +8458,29 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev, + if (ret) + goto fail; + ++ /* Check connector changes */ ++ for_each_oldnew_connector_in_state(state, connector, old_con_state, new_con_state, i) { ++ struct dm_connector_state *dm_old_con_state = to_dm_connector_state(old_con_state); ++ struct dm_connector_state *dm_new_con_state = to_dm_connector_state(new_con_state); ++ ++ /* Skip connectors that are disabled or part of modeset already. 
*/ ++ if (!old_con_state->crtc && !new_con_state->crtc) ++ continue; ++ ++ if (!new_con_state->crtc) ++ continue; ++ ++ new_crtc_state = drm_atomic_get_crtc_state(state, new_con_state->crtc); ++ if (IS_ERR(new_crtc_state)) { ++ ret = PTR_ERR(new_crtc_state); ++ goto fail; ++ } ++ ++ if (dm_old_con_state->abm_level != ++ dm_new_con_state->abm_level) ++ new_crtc_state->connectors_changed = true; ++ } ++ + #if defined(CONFIG_DRM_AMD_DC_DCN) + if (!compute_mst_dsc_configs_for_state(state, dm_state->context)) + goto fail; +diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn10/rv1_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn10/rv1_clk_mgr.c +index 3fab9296918ab..e133edc587d31 100644 +--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn10/rv1_clk_mgr.c ++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn10/rv1_clk_mgr.c +@@ -85,12 +85,77 @@ static int rv1_determine_dppclk_threshold(struct clk_mgr_internal *clk_mgr, stru + return disp_clk_threshold; + } + +-static void ramp_up_dispclk_with_dpp(struct clk_mgr_internal *clk_mgr, struct dc *dc, struct dc_clocks *new_clocks) ++static void ramp_up_dispclk_with_dpp( ++ struct clk_mgr_internal *clk_mgr, ++ struct dc *dc, ++ struct dc_clocks *new_clocks, ++ bool safe_to_lower) + { + int i; + int dispclk_to_dpp_threshold = rv1_determine_dppclk_threshold(clk_mgr, new_clocks); + bool request_dpp_div = new_clocks->dispclk_khz > new_clocks->dppclk_khz; + ++ /* this function is to change dispclk, dppclk and dprefclk according to ++ * bandwidth requirement. Its call stack is rv1_update_clocks --> ++ * update_clocks --> dcn10_prepare_bandwidth / dcn10_optimize_bandwidth ++ * --> prepare_bandwidth / optimize_bandwidth. before change dcn hw, ++ * prepare_bandwidth will be called first to allow enough clock, ++ * watermark for change, after end of dcn hw change, optimize_bandwidth ++ * is executed to lower clock to save power for new dcn hw settings. 
++ * ++ * below is sequence of commit_planes_for_stream: ++ * ++ * step 1: prepare_bandwidth - raise clock to have enough bandwidth ++ * step 2: lock_doublebuffer_enable ++ * step 3: pipe_control_lock(true) - make dchubp register change will ++ * not take effect right way ++ * step 4: apply_ctx_for_surface - program dchubp ++ * step 5: pipe_control_lock(false) - dchubp register change take effect ++ * step 6: optimize_bandwidth --> dc_post_update_surfaces_to_stream ++ * for full_date, optimize clock to save power ++ * ++ * at end of step 1, dcn clocks (dprefclk, dispclk, dppclk) may be ++ * changed for new dchubp configuration. but real dcn hub dchubps are ++ * still running with old configuration until end of step 5. this need ++ * clocks settings at step 1 should not less than that before step 1. ++ * this is checked by two conditions: 1. if (should_set_clock(safe_to_lower ++ * , new_clocks->dispclk_khz, clk_mgr_base->clks.dispclk_khz) || ++ * new_clocks->dispclk_khz == clk_mgr_base->clks.dispclk_khz) ++ * 2. request_dpp_div = new_clocks->dispclk_khz > new_clocks->dppclk_khz ++ * ++ * the second condition is based on new dchubp configuration. dppclk ++ * for new dchubp may be different from dppclk before step 1. ++ * for example, before step 1, dchubps are as below: ++ * pipe 0: recout=(0,40,1920,980) viewport=(0,0,1920,979) ++ * pipe 1: recout=(0,0,1920,1080) viewport=(0,0,1920,1080) ++ * for dppclk for pipe0 need dppclk = dispclk ++ * ++ * new dchubp pipe split configuration: ++ * pipe 0: recout=(0,0,960,1080) viewport=(0,0,960,1080) ++ * pipe 1: recout=(960,0,960,1080) viewport=(960,0,960,1080) ++ * dppclk only needs dppclk = dispclk /2. ++ * ++ * dispclk, dppclk are not lock by otg master lock. they take effect ++ * after step 1. during this transition, dispclk are the same, but ++ * dppclk is changed to half of previous clock for old dchubp ++ * configuration between step 1 and step 6. This may cause p-state ++ * warning intermittently. 
++ * ++ * for new_clocks->dispclk_khz == clk_mgr_base->clks.dispclk_khz, we ++ * need make sure dppclk are not changed to less between step 1 and 6. ++ * for new_clocks->dispclk_khz > clk_mgr_base->clks.dispclk_khz, ++ * new display clock is raised, but we do not know ratio of ++ * new_clocks->dispclk_khz and clk_mgr_base->clks.dispclk_khz, ++ * new_clocks->dispclk_khz /2 does not guarantee equal or higher than ++ * old dppclk. we could ignore power saving different between ++ * dppclk = displck and dppclk = dispclk / 2 between step 1 and step 6. ++ * as long as safe_to_lower = false, set dpclk = dispclk to simplify ++ * condition check. ++ * todo: review this change for other asic. ++ **/ ++ if (!safe_to_lower) ++ request_dpp_div = false; ++ + /* set disp clk to dpp clk threshold */ + + clk_mgr->funcs->set_dispclk(clk_mgr, dispclk_to_dpp_threshold); +@@ -209,7 +274,7 @@ static void rv1_update_clocks(struct clk_mgr *clk_mgr_base, + /* program dispclk on = as a w/a for sleep resume clock ramping issues */ + if (should_set_clock(safe_to_lower, new_clocks->dispclk_khz, clk_mgr_base->clks.dispclk_khz) + || new_clocks->dispclk_khz == clk_mgr_base->clks.dispclk_khz) { +- ramp_up_dispclk_with_dpp(clk_mgr, dc, new_clocks); ++ ramp_up_dispclk_with_dpp(clk_mgr, dc, new_clocks, safe_to_lower); + clk_mgr_base->clks.dispclk_khz = new_clocks->dispclk_khz; + send_request_to_lower = true; + } +diff --git a/drivers/gpu/drm/amd/powerplay/smumgr/ci_smumgr.c b/drivers/gpu/drm/amd/powerplay/smumgr/ci_smumgr.c +index 7c3e903230ca1..47eead0961297 100644 +--- a/drivers/gpu/drm/amd/powerplay/smumgr/ci_smumgr.c ++++ b/drivers/gpu/drm/amd/powerplay/smumgr/ci_smumgr.c +@@ -2725,7 +2725,10 @@ static int ci_initialize_mc_reg_table(struct pp_hwmgr *hwmgr) + + static bool ci_is_dpm_running(struct pp_hwmgr *hwmgr) + { +- return ci_is_smc_ram_running(hwmgr); ++ return (1 == PHM_READ_INDIRECT_FIELD(hwmgr->device, ++ CGS_IND_REG__SMC, FEATURE_STATUS, ++ VOLTAGE_CONTROLLER_ON)) ++ ? 
true : false; + } + + static int ci_smu_init(struct pp_hwmgr *hwmgr) +diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c +index abb1f358ec6df..252fc4b567007 100644 +--- a/drivers/gpu/drm/drm_dp_mst_topology.c ++++ b/drivers/gpu/drm/drm_dp_mst_topology.c +@@ -88,8 +88,8 @@ static int drm_dp_send_enum_path_resources(struct drm_dp_mst_topology_mgr *mgr, + static bool drm_dp_validate_guid(struct drm_dp_mst_topology_mgr *mgr, + u8 *guid); + +-static int drm_dp_mst_register_i2c_bus(struct drm_dp_aux *aux); +-static void drm_dp_mst_unregister_i2c_bus(struct drm_dp_aux *aux); ++static int drm_dp_mst_register_i2c_bus(struct drm_dp_mst_port *port); ++static void drm_dp_mst_unregister_i2c_bus(struct drm_dp_mst_port *port); + static void drm_dp_mst_kick_tx(struct drm_dp_mst_topology_mgr *mgr); + + #define DBG_PREFIX "[dp_mst]" +@@ -1981,7 +1981,7 @@ drm_dp_port_set_pdt(struct drm_dp_mst_port *port, u8 new_pdt, + } + + /* remove i2c over sideband */ +- drm_dp_mst_unregister_i2c_bus(&port->aux); ++ drm_dp_mst_unregister_i2c_bus(port); + } else { + mutex_lock(&mgr->lock); + drm_dp_mst_topology_put_mstb(port->mstb); +@@ -1996,7 +1996,7 @@ drm_dp_port_set_pdt(struct drm_dp_mst_port *port, u8 new_pdt, + if (port->pdt != DP_PEER_DEVICE_NONE) { + if (drm_dp_mst_is_end_device(port->pdt, port->mcs)) { + /* add i2c over sideband */ +- ret = drm_dp_mst_register_i2c_bus(&port->aux); ++ ret = drm_dp_mst_register_i2c_bus(port); + } else { + lct = drm_dp_calculate_rad(port, rad); + mstb = drm_dp_add_mst_branch_device(lct, rad); +@@ -4319,11 +4319,11 @@ bool drm_dp_mst_allocate_vcpi(struct drm_dp_mst_topology_mgr *mgr, + { + int ret; + +- port = drm_dp_mst_topology_get_port_validated(mgr, port); +- if (!port) ++ if (slots < 0) + return false; + +- if (slots < 0) ++ port = drm_dp_mst_topology_get_port_validated(mgr, port); ++ if (!port) + return false; + + if (port->vcpi.vcpi > 0) { +@@ -4339,6 +4339,7 @@ bool drm_dp_mst_allocate_vcpi(struct 
drm_dp_mst_topology_mgr *mgr, + if (ret) { + DRM_DEBUG_KMS("failed to init vcpi slots=%d max=63 ret=%d\n", + DIV_ROUND_UP(pbn, mgr->pbn_div), ret); ++ drm_dp_mst_topology_put_port(port); + goto out; + } + DRM_DEBUG_KMS("initing vcpi for pbn=%d slots=%d\n", +@@ -5406,22 +5407,26 @@ static const struct i2c_algorithm drm_dp_mst_i2c_algo = { + + /** + * drm_dp_mst_register_i2c_bus() - register an I2C adapter for I2C-over-AUX +- * @aux: DisplayPort AUX channel ++ * @port: The port to add the I2C bus on + * + * Returns 0 on success or a negative error code on failure. + */ +-static int drm_dp_mst_register_i2c_bus(struct drm_dp_aux *aux) ++static int drm_dp_mst_register_i2c_bus(struct drm_dp_mst_port *port) + { ++ struct drm_dp_aux *aux = &port->aux; ++ struct device *parent_dev = port->mgr->dev->dev; ++ + aux->ddc.algo = &drm_dp_mst_i2c_algo; + aux->ddc.algo_data = aux; + aux->ddc.retries = 3; + + aux->ddc.class = I2C_CLASS_DDC; + aux->ddc.owner = THIS_MODULE; +- aux->ddc.dev.parent = aux->dev; +- aux->ddc.dev.of_node = aux->dev->of_node; ++ /* FIXME: set the kdev of the port's connector as parent */ ++ aux->ddc.dev.parent = parent_dev; ++ aux->ddc.dev.of_node = parent_dev->of_node; + +- strlcpy(aux->ddc.name, aux->name ? aux->name : dev_name(aux->dev), ++ strlcpy(aux->ddc.name, aux->name ? 
aux->name : dev_name(parent_dev), + sizeof(aux->ddc.name)); + + return i2c_add_adapter(&aux->ddc); +@@ -5429,11 +5434,11 @@ static int drm_dp_mst_register_i2c_bus(struct drm_dp_aux *aux) + + /** + * drm_dp_mst_unregister_i2c_bus() - unregister an I2C-over-AUX adapter +- * @aux: DisplayPort AUX channel ++ * @port: The port to remove the I2C bus from + */ +-static void drm_dp_mst_unregister_i2c_bus(struct drm_dp_aux *aux) ++static void drm_dp_mst_unregister_i2c_bus(struct drm_dp_mst_port *port) + { +- i2c_del_adapter(&aux->ddc); ++ i2c_del_adapter(&port->aux.ddc); + } + + /** +diff --git a/drivers/gpu/drm/drm_panel_orientation_quirks.c b/drivers/gpu/drm/drm_panel_orientation_quirks.c +index d00ea384dcbfe..58f5dc2f6dd52 100644 +--- a/drivers/gpu/drm/drm_panel_orientation_quirks.c ++++ b/drivers/gpu/drm/drm_panel_orientation_quirks.c +@@ -121,6 +121,12 @@ static const struct dmi_system_id orientation_data[] = { + DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "T101HA"), + }, + .driver_data = (void *)&lcd800x1280_rightside_up, ++ }, { /* Asus T103HAF */ ++ .matches = { ++ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."), ++ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "T103HAF"), ++ }, ++ .driver_data = (void *)&lcd800x1280_rightside_up, + }, { /* GPD MicroPC (generic strings, also match on bios date) */ + .matches = { + DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Default string"), +diff --git a/drivers/gpu/drm/i915/gt/intel_gt.c b/drivers/gpu/drm/i915/gt/intel_gt.c +index d09f7596cb98b..1c2f7a5b1e94a 100644 +--- a/drivers/gpu/drm/i915/gt/intel_gt.c ++++ b/drivers/gpu/drm/i915/gt/intel_gt.c +@@ -656,6 +656,11 @@ void intel_gt_driver_unregister(struct intel_gt *gt) + void intel_gt_driver_release(struct intel_gt *gt) + { + struct i915_address_space *vm; ++ intel_wakeref_t wakeref; ++ ++ /* Scrub all HW state upon release */ ++ with_intel_runtime_pm(gt->uncore->rpm, wakeref) ++ __intel_gt_reset(gt, ALL_ENGINES); + + vm = fetch_and_zero(>->vm); + if (vm) /* FIXME being called twice on error paths :( 
*/ +diff --git a/drivers/gpu/drm/imx/imx-ldb.c b/drivers/gpu/drm/imx/imx-ldb.c +index 8e209117b049a..819a858764d93 100644 +--- a/drivers/gpu/drm/imx/imx-ldb.c ++++ b/drivers/gpu/drm/imx/imx-ldb.c +@@ -303,18 +303,19 @@ static void imx_ldb_encoder_disable(struct drm_encoder *encoder) + { + struct imx_ldb_channel *imx_ldb_ch = enc_to_imx_ldb_ch(encoder); + struct imx_ldb *ldb = imx_ldb_ch->ldb; ++ int dual = ldb->ldb_ctrl & LDB_SPLIT_MODE_EN; + int mux, ret; + + drm_panel_disable(imx_ldb_ch->panel); + +- if (imx_ldb_ch == &ldb->channel[0]) ++ if (imx_ldb_ch == &ldb->channel[0] || dual) + ldb->ldb_ctrl &= ~LDB_CH0_MODE_EN_MASK; +- else if (imx_ldb_ch == &ldb->channel[1]) ++ if (imx_ldb_ch == &ldb->channel[1] || dual) + ldb->ldb_ctrl &= ~LDB_CH1_MODE_EN_MASK; + + regmap_write(ldb->regmap, IOMUXC_GPR2, ldb->ldb_ctrl); + +- if (ldb->ldb_ctrl & LDB_SPLIT_MODE_EN) { ++ if (dual) { + clk_disable_unprepare(ldb->clk[0]); + clk_disable_unprepare(ldb->clk[1]); + } +diff --git a/drivers/gpu/drm/ingenic/ingenic-drm.c b/drivers/gpu/drm/ingenic/ingenic-drm.c +index 548cc25ea4abe..e525260c31b2b 100644 +--- a/drivers/gpu/drm/ingenic/ingenic-drm.c ++++ b/drivers/gpu/drm/ingenic/ingenic-drm.c +@@ -384,7 +384,7 @@ static void ingenic_drm_plane_atomic_update(struct drm_plane *plane, + addr = drm_fb_cma_get_gem_addr(state->fb, state, 0); + width = state->src_w >> 16; + height = state->src_h >> 16; +- cpp = state->fb->format->cpp[plane->index]; ++ cpp = state->fb->format->cpp[0]; + + priv->dma_hwdesc->addr = addr; + priv->dma_hwdesc->cmd = width * height * cpp / 4; +diff --git a/drivers/gpu/drm/omapdrm/dss/dispc.c b/drivers/gpu/drm/omapdrm/dss/dispc.c +index dbb90f2d2ccde..7782b163dd721 100644 +--- a/drivers/gpu/drm/omapdrm/dss/dispc.c ++++ b/drivers/gpu/drm/omapdrm/dss/dispc.c +@@ -4936,6 +4936,7 @@ static int dispc_runtime_resume(struct device *dev) + static const struct dev_pm_ops dispc_pm_ops = { + .runtime_suspend = dispc_runtime_suspend, + .runtime_resume = dispc_runtime_resume, ++ 
SET_LATE_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend, pm_runtime_force_resume) + }; + + struct platform_driver omap_dispchw_driver = { +diff --git a/drivers/gpu/drm/omapdrm/dss/dsi.c b/drivers/gpu/drm/omapdrm/dss/dsi.c +index 79ddfbfd1b588..eeccf40bae416 100644 +--- a/drivers/gpu/drm/omapdrm/dss/dsi.c ++++ b/drivers/gpu/drm/omapdrm/dss/dsi.c +@@ -5467,6 +5467,7 @@ static int dsi_runtime_resume(struct device *dev) + static const struct dev_pm_ops dsi_pm_ops = { + .runtime_suspend = dsi_runtime_suspend, + .runtime_resume = dsi_runtime_resume, ++ SET_LATE_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend, pm_runtime_force_resume) + }; + + struct platform_driver omap_dsihw_driver = { +diff --git a/drivers/gpu/drm/omapdrm/dss/dss.c b/drivers/gpu/drm/omapdrm/dss/dss.c +index 4d5739fa4a5d8..6ccbc29c4ce4b 100644 +--- a/drivers/gpu/drm/omapdrm/dss/dss.c ++++ b/drivers/gpu/drm/omapdrm/dss/dss.c +@@ -1614,6 +1614,7 @@ static int dss_runtime_resume(struct device *dev) + static const struct dev_pm_ops dss_pm_ops = { + .runtime_suspend = dss_runtime_suspend, + .runtime_resume = dss_runtime_resume, ++ SET_LATE_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend, pm_runtime_force_resume) + }; + + struct platform_driver omap_dsshw_driver = { +diff --git a/drivers/gpu/drm/omapdrm/dss/venc.c b/drivers/gpu/drm/omapdrm/dss/venc.c +index 766553bb2f87b..4d3e7a72435f3 100644 +--- a/drivers/gpu/drm/omapdrm/dss/venc.c ++++ b/drivers/gpu/drm/omapdrm/dss/venc.c +@@ -945,6 +945,7 @@ static int venc_runtime_resume(struct device *dev) + static const struct dev_pm_ops venc_pm_ops = { + .runtime_suspend = venc_runtime_suspend, + .runtime_resume = venc_runtime_resume, ++ SET_LATE_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend, pm_runtime_force_resume) + }; + + static const struct of_device_id venc_of_match[] = { +diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.c b/drivers/gpu/drm/panfrost/panfrost_gem.c +index 17b654e1eb942..556181ea4a073 100644 +--- a/drivers/gpu/drm/panfrost/panfrost_gem.c ++++ 
b/drivers/gpu/drm/panfrost/panfrost_gem.c +@@ -46,7 +46,7 @@ static void panfrost_gem_free_object(struct drm_gem_object *obj) + sg_free_table(&bo->sgts[i]); + } + } +- kfree(bo->sgts); ++ kvfree(bo->sgts); + } + + drm_gem_shmem_free_object(obj); +diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c +index ed28aeba6d59a..3c8ae7411c800 100644 +--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c ++++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c +@@ -486,7 +486,7 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as, + pages = kvmalloc_array(bo->base.base.size >> PAGE_SHIFT, + sizeof(struct page *), GFP_KERNEL | __GFP_ZERO); + if (!pages) { +- kfree(bo->sgts); ++ kvfree(bo->sgts); + bo->sgts = NULL; + mutex_unlock(&bo->base.pages_lock); + ret = -ENOMEM; +diff --git a/drivers/gpu/drm/tidss/tidss_kms.c b/drivers/gpu/drm/tidss/tidss_kms.c +index 7d419960b0309..74467f6eafee8 100644 +--- a/drivers/gpu/drm/tidss/tidss_kms.c ++++ b/drivers/gpu/drm/tidss/tidss_kms.c +@@ -154,7 +154,7 @@ static int tidss_dispc_modeset_init(struct tidss_device *tidss) + break; + case DISPC_VP_DPI: + enc_type = DRM_MODE_ENCODER_DPI; +- conn_type = DRM_MODE_CONNECTOR_LVDS; ++ conn_type = DRM_MODE_CONNECTOR_DPI; + break; + default: + WARN_ON(1); +diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c +index 04d66592f6050..b7a9cee69ea72 100644 +--- a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c ++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c +@@ -2578,7 +2578,7 @@ int vmw_kms_fbdev_init_data(struct vmw_private *dev_priv, + ++i; + } + +- if (i != unit) { ++ if (&con->head == &dev_priv->dev->mode_config.connector_list) { + DRM_ERROR("Could not find initial display unit.\n"); + ret = -EINVAL; + goto out_unlock; +@@ -2602,13 +2602,13 @@ int vmw_kms_fbdev_init_data(struct vmw_private *dev_priv, + break; + } + +- if (mode->type & DRM_MODE_TYPE_PREFERRED) +- *p_mode = mode; +- else { ++ if (&mode->head == &con->modes) { + 
WARN_ONCE(true, "Could not find initial preferred mode.\n"); + *p_mode = list_first_entry(&con->modes, + struct drm_display_mode, + head); ++ } else { ++ *p_mode = mode; + } + + out_unlock: +diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c b/drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c +index 16dafff5cab19..009f1742bed51 100644 +--- a/drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c ++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c +@@ -81,7 +81,7 @@ static int vmw_ldu_commit_list(struct vmw_private *dev_priv) + struct vmw_legacy_display_unit *entry; + struct drm_framebuffer *fb = NULL; + struct drm_crtc *crtc = NULL; +- int i = 0; ++ int i; + + /* If there is no display topology the host just assumes + * that the guest will set the same layout as the host. +@@ -92,12 +92,11 @@ static int vmw_ldu_commit_list(struct vmw_private *dev_priv) + crtc = &entry->base.crtc; + w = max(w, crtc->x + crtc->mode.hdisplay); + h = max(h, crtc->y + crtc->mode.vdisplay); +- i++; + } + + if (crtc == NULL) + return 0; +- fb = entry->base.crtc.primary->state->fb; ++ fb = crtc->primary->state->fb; + + return vmw_kms_write_svga(dev_priv, w, h, fb->pitches[0], + fb->format->cpp[0] * 8, +diff --git a/drivers/gpu/ipu-v3/ipu-image-convert.c b/drivers/gpu/ipu-v3/ipu-image-convert.c +index eeca50d9a1ee4..aa1d4b6d278f7 100644 +--- a/drivers/gpu/ipu-v3/ipu-image-convert.c ++++ b/drivers/gpu/ipu-v3/ipu-image-convert.c +@@ -137,6 +137,17 @@ struct ipu_image_convert_ctx; + struct ipu_image_convert_chan; + struct ipu_image_convert_priv; + ++enum eof_irq_mask { ++ EOF_IRQ_IN = BIT(0), ++ EOF_IRQ_ROT_IN = BIT(1), ++ EOF_IRQ_OUT = BIT(2), ++ EOF_IRQ_ROT_OUT = BIT(3), ++}; ++ ++#define EOF_IRQ_COMPLETE (EOF_IRQ_IN | EOF_IRQ_OUT) ++#define EOF_IRQ_ROT_COMPLETE (EOF_IRQ_IN | EOF_IRQ_OUT | \ ++ EOF_IRQ_ROT_IN | EOF_IRQ_ROT_OUT) ++ + struct ipu_image_convert_ctx { + struct ipu_image_convert_chan *chan; + +@@ -173,6 +184,9 @@ struct ipu_image_convert_ctx { + /* where to place converted tile in dest image */ + unsigned int 
out_tile_map[MAX_TILES]; + ++ /* mask of completed EOF irqs at every tile conversion */ ++ enum eof_irq_mask eof_mask; ++ + struct list_head list; + }; + +@@ -189,6 +203,8 @@ struct ipu_image_convert_chan { + struct ipuv3_channel *rotation_out_chan; + + /* the IPU end-of-frame irqs */ ++ int in_eof_irq; ++ int rot_in_eof_irq; + int out_eof_irq; + int rot_out_eof_irq; + +@@ -1380,6 +1396,9 @@ static int convert_start(struct ipu_image_convert_run *run, unsigned int tile) + dev_dbg(priv->ipu->dev, "%s: task %u: starting ctx %p run %p tile %u -> %u\n", + __func__, chan->ic_task, ctx, run, tile, dst_tile); + ++ /* clear EOF irq mask */ ++ ctx->eof_mask = 0; ++ + if (ipu_rot_mode_is_irt(ctx->rot_mode)) { + /* swap width/height for resizer */ + dest_width = d_image->tile[dst_tile].height; +@@ -1615,7 +1634,7 @@ static bool ic_settings_changed(struct ipu_image_convert_ctx *ctx) + } + + /* hold irqlock when calling */ +-static irqreturn_t do_irq(struct ipu_image_convert_run *run) ++static irqreturn_t do_tile_complete(struct ipu_image_convert_run *run) + { + struct ipu_image_convert_ctx *ctx = run->ctx; + struct ipu_image_convert_chan *chan = ctx->chan; +@@ -1700,6 +1719,7 @@ static irqreturn_t do_irq(struct ipu_image_convert_run *run) + ctx->cur_buf_num ^= 1; + } + ++ ctx->eof_mask = 0; /* clear EOF irq mask for next tile */ + ctx->next_tile++; + return IRQ_HANDLED; + done: +@@ -1709,13 +1729,15 @@ done: + return IRQ_WAKE_THREAD; + } + +-static irqreturn_t norotate_irq(int irq, void *data) ++static irqreturn_t eof_irq(int irq, void *data) + { + struct ipu_image_convert_chan *chan = data; ++ struct ipu_image_convert_priv *priv = chan->priv; + struct ipu_image_convert_ctx *ctx; + struct ipu_image_convert_run *run; ++ irqreturn_t ret = IRQ_HANDLED; ++ bool tile_complete = false; + unsigned long flags; +- irqreturn_t ret; + + spin_lock_irqsave(&chan->irqlock, flags); + +@@ -1728,46 +1750,33 @@ static irqreturn_t norotate_irq(int irq, void *data) + + ctx = run->ctx; + +- if 
(ipu_rot_mode_is_irt(ctx->rot_mode)) { +- /* this is a rotation operation, just ignore */ +- spin_unlock_irqrestore(&chan->irqlock, flags); +- return IRQ_HANDLED; +- } +- +- ret = do_irq(run); +-out: +- spin_unlock_irqrestore(&chan->irqlock, flags); +- return ret; +-} +- +-static irqreturn_t rotate_irq(int irq, void *data) +-{ +- struct ipu_image_convert_chan *chan = data; +- struct ipu_image_convert_priv *priv = chan->priv; +- struct ipu_image_convert_ctx *ctx; +- struct ipu_image_convert_run *run; +- unsigned long flags; +- irqreturn_t ret; +- +- spin_lock_irqsave(&chan->irqlock, flags); +- +- /* get current run and its context */ +- run = chan->current_run; +- if (!run) { ++ if (irq == chan->in_eof_irq) { ++ ctx->eof_mask |= EOF_IRQ_IN; ++ } else if (irq == chan->out_eof_irq) { ++ ctx->eof_mask |= EOF_IRQ_OUT; ++ } else if (irq == chan->rot_in_eof_irq || ++ irq == chan->rot_out_eof_irq) { ++ if (!ipu_rot_mode_is_irt(ctx->rot_mode)) { ++ /* this was NOT a rotation op, shouldn't happen */ ++ dev_err(priv->ipu->dev, ++ "Unexpected rotation interrupt\n"); ++ goto out; ++ } ++ ctx->eof_mask |= (irq == chan->rot_in_eof_irq) ? 
++ EOF_IRQ_ROT_IN : EOF_IRQ_ROT_OUT; ++ } else { ++ dev_err(priv->ipu->dev, "Received unknown irq %d\n", irq); + ret = IRQ_NONE; + goto out; + } + +- ctx = run->ctx; +- +- if (!ipu_rot_mode_is_irt(ctx->rot_mode)) { +- /* this was NOT a rotation operation, shouldn't happen */ +- dev_err(priv->ipu->dev, "Unexpected rotation interrupt\n"); +- spin_unlock_irqrestore(&chan->irqlock, flags); +- return IRQ_HANDLED; +- } ++ if (ipu_rot_mode_is_irt(ctx->rot_mode)) ++ tile_complete = (ctx->eof_mask == EOF_IRQ_ROT_COMPLETE); ++ else ++ tile_complete = (ctx->eof_mask == EOF_IRQ_COMPLETE); + +- ret = do_irq(run); ++ if (tile_complete) ++ ret = do_tile_complete(run); + out: + spin_unlock_irqrestore(&chan->irqlock, flags); + return ret; +@@ -1801,6 +1810,10 @@ static void force_abort(struct ipu_image_convert_ctx *ctx) + + static void release_ipu_resources(struct ipu_image_convert_chan *chan) + { ++ if (chan->in_eof_irq >= 0) ++ free_irq(chan->in_eof_irq, chan); ++ if (chan->rot_in_eof_irq >= 0) ++ free_irq(chan->rot_in_eof_irq, chan); + if (chan->out_eof_irq >= 0) + free_irq(chan->out_eof_irq, chan); + if (chan->rot_out_eof_irq >= 0) +@@ -1819,7 +1832,27 @@ static void release_ipu_resources(struct ipu_image_convert_chan *chan) + + chan->in_chan = chan->out_chan = chan->rotation_in_chan = + chan->rotation_out_chan = NULL; +- chan->out_eof_irq = chan->rot_out_eof_irq = -1; ++ chan->in_eof_irq = -1; ++ chan->rot_in_eof_irq = -1; ++ chan->out_eof_irq = -1; ++ chan->rot_out_eof_irq = -1; ++} ++ ++static int get_eof_irq(struct ipu_image_convert_chan *chan, ++ struct ipuv3_channel *channel) ++{ ++ struct ipu_image_convert_priv *priv = chan->priv; ++ int ret, irq; ++ ++ irq = ipu_idmac_channel_irq(priv->ipu, channel, IPU_IRQ_EOF); ++ ++ ret = request_threaded_irq(irq, eof_irq, do_bh, 0, "ipu-ic", chan); ++ if (ret < 0) { ++ dev_err(priv->ipu->dev, "could not acquire irq %d\n", irq); ++ return ret; ++ } ++ ++ return irq; + } + + static int get_ipu_resources(struct ipu_image_convert_chan 
*chan) +@@ -1855,31 +1888,33 @@ static int get_ipu_resources(struct ipu_image_convert_chan *chan) + } + + /* acquire the EOF interrupts */ +- chan->out_eof_irq = ipu_idmac_channel_irq(priv->ipu, +- chan->out_chan, +- IPU_IRQ_EOF); ++ ret = get_eof_irq(chan, chan->in_chan); ++ if (ret < 0) { ++ chan->in_eof_irq = -1; ++ goto err; ++ } ++ chan->in_eof_irq = ret; + +- ret = request_threaded_irq(chan->out_eof_irq, norotate_irq, do_bh, +- 0, "ipu-ic", chan); ++ ret = get_eof_irq(chan, chan->rotation_in_chan); + if (ret < 0) { +- dev_err(priv->ipu->dev, "could not acquire irq %d\n", +- chan->out_eof_irq); +- chan->out_eof_irq = -1; ++ chan->rot_in_eof_irq = -1; + goto err; + } ++ chan->rot_in_eof_irq = ret; + +- chan->rot_out_eof_irq = ipu_idmac_channel_irq(priv->ipu, +- chan->rotation_out_chan, +- IPU_IRQ_EOF); ++ ret = get_eof_irq(chan, chan->out_chan); ++ if (ret < 0) { ++ chan->out_eof_irq = -1; ++ goto err; ++ } ++ chan->out_eof_irq = ret; + +- ret = request_threaded_irq(chan->rot_out_eof_irq, rotate_irq, do_bh, +- 0, "ipu-ic", chan); ++ ret = get_eof_irq(chan, chan->rotation_out_chan); + if (ret < 0) { +- dev_err(priv->ipu->dev, "could not acquire irq %d\n", +- chan->rot_out_eof_irq); + chan->rot_out_eof_irq = -1; + goto err; + } ++ chan->rot_out_eof_irq = ret; + + return 0; + err: +@@ -2458,6 +2493,8 @@ int ipu_image_convert_init(struct ipu_soc *ipu, struct device *dev) + chan->ic_task = i; + chan->priv = priv; + chan->dma_ch = &image_convert_dma_chan[i]; ++ chan->in_eof_irq = -1; ++ chan->rot_in_eof_irq = -1; + chan->out_eof_irq = -1; + chan->rot_out_eof_irq = -1; + +diff --git a/drivers/i2c/busses/i2c-bcm-iproc.c b/drivers/i2c/busses/i2c-bcm-iproc.c +index d091a12596ad2..85aee6d365b40 100644 +--- a/drivers/i2c/busses/i2c-bcm-iproc.c ++++ b/drivers/i2c/busses/i2c-bcm-iproc.c +@@ -1074,7 +1074,7 @@ static int bcm_iproc_i2c_unreg_slave(struct i2c_client *slave) + if (!iproc_i2c->slave) + return -EINVAL; + +- iproc_i2c->slave = NULL; ++ disable_irq(iproc_i2c->irq); 
+ + /* disable all slave interrupts */ + tmp = iproc_i2c_rd_reg(iproc_i2c, IE_OFFSET); +@@ -1087,6 +1087,17 @@ static int bcm_iproc_i2c_unreg_slave(struct i2c_client *slave) + tmp &= ~BIT(S_CFG_EN_NIC_SMB_ADDR3_SHIFT); + iproc_i2c_wr_reg(iproc_i2c, S_CFG_SMBUS_ADDR_OFFSET, tmp); + ++ /* flush TX/RX FIFOs */ ++ tmp = (BIT(S_FIFO_RX_FLUSH_SHIFT) | BIT(S_FIFO_TX_FLUSH_SHIFT)); ++ iproc_i2c_wr_reg(iproc_i2c, S_FIFO_CTRL_OFFSET, tmp); ++ ++ /* clear all pending slave interrupts */ ++ iproc_i2c_wr_reg(iproc_i2c, IS_OFFSET, ISR_MASK_SLAVE); ++ ++ iproc_i2c->slave = NULL; ++ ++ enable_irq(iproc_i2c->irq); ++ + return 0; + } + +diff --git a/drivers/i2c/busses/i2c-rcar.c b/drivers/i2c/busses/i2c-rcar.c +index 50dd98803ca0c..5615e7c43b436 100644 +--- a/drivers/i2c/busses/i2c-rcar.c ++++ b/drivers/i2c/busses/i2c-rcar.c +@@ -583,13 +583,14 @@ static bool rcar_i2c_slave_irq(struct rcar_i2c_priv *priv) + rcar_i2c_write(priv, ICSIER, SDR | SSR | SAR); + } + +- rcar_i2c_write(priv, ICSSR, ~SAR & 0xff); ++ /* Clear SSR, too, because of old STOPs to other clients than us */ ++ rcar_i2c_write(priv, ICSSR, ~(SAR | SSR) & 0xff); + } + + /* master sent stop */ + if (ssr_filtered & SSR) { + i2c_slave_event(priv->slave, I2C_SLAVE_STOP, &value); +- rcar_i2c_write(priv, ICSIER, SAR | SSR); ++ rcar_i2c_write(priv, ICSIER, SAR); + rcar_i2c_write(priv, ICSSR, ~SSR & 0xff); + } + +@@ -853,7 +854,7 @@ static int rcar_reg_slave(struct i2c_client *slave) + priv->slave = slave; + rcar_i2c_write(priv, ICSAR, slave->addr); + rcar_i2c_write(priv, ICSSR, 0); +- rcar_i2c_write(priv, ICSIER, SAR | SSR); ++ rcar_i2c_write(priv, ICSIER, SAR); + rcar_i2c_write(priv, ICSCR, SIE | SDBS); + + return 0; +@@ -865,12 +866,14 @@ static int rcar_unreg_slave(struct i2c_client *slave) + + WARN_ON(!priv->slave); + +- /* disable irqs and ensure none is running before clearing ptr */ ++ /* ensure no irq is running before clearing ptr */ ++ disable_irq(priv->irq); + rcar_i2c_write(priv, ICSIER, 0); +- rcar_i2c_write(priv, 
ICSCR, 0); ++ rcar_i2c_write(priv, ICSSR, 0); ++ enable_irq(priv->irq); ++ rcar_i2c_write(priv, ICSCR, SDBS); + rcar_i2c_write(priv, ICSAR, 0); /* Gen2: must be 0 if not using slave */ + +- synchronize_irq(priv->irq); + priv->slave = NULL; + + pm_runtime_put(rcar_i2c_priv_to_dev(priv)); +diff --git a/drivers/iio/dac/ad5592r-base.c b/drivers/iio/dac/ad5592r-base.c +index e2110113e8848..6044711feea3c 100644 +--- a/drivers/iio/dac/ad5592r-base.c ++++ b/drivers/iio/dac/ad5592r-base.c +@@ -415,7 +415,7 @@ static int ad5592r_read_raw(struct iio_dev *iio_dev, + s64 tmp = *val * (3767897513LL / 25LL); + *val = div_s64_rem(tmp, 1000000000LL, val2); + +- ret = IIO_VAL_INT_PLUS_MICRO; ++ return IIO_VAL_INT_PLUS_MICRO; + } else { + int mult; + +@@ -446,7 +446,7 @@ static int ad5592r_read_raw(struct iio_dev *iio_dev, + ret = IIO_VAL_INT; + break; + default: +- ret = -EINVAL; ++ return -EINVAL; + } + + unlock: +diff --git a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx.h b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx.h +index 41cb20cb3809a..6c1fe72f2b807 100644 +--- a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx.h ++++ b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx.h +@@ -436,8 +436,7 @@ int st_lsm6dsx_update_watermark(struct st_lsm6dsx_sensor *sensor, + u16 watermark); + int st_lsm6dsx_update_fifo(struct st_lsm6dsx_sensor *sensor, bool enable); + int st_lsm6dsx_flush_fifo(struct st_lsm6dsx_hw *hw); +-int st_lsm6dsx_set_fifo_mode(struct st_lsm6dsx_hw *hw, +- enum st_lsm6dsx_fifo_mode fifo_mode); ++int st_lsm6dsx_resume_fifo(struct st_lsm6dsx_hw *hw); + int st_lsm6dsx_read_fifo(struct st_lsm6dsx_hw *hw); + int st_lsm6dsx_read_tagged_fifo(struct st_lsm6dsx_hw *hw); + int st_lsm6dsx_check_odr(struct st_lsm6dsx_sensor *sensor, u32 odr, u8 *val); +diff --git a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c +index afd00daeefb2d..7de10bd636ea0 100644 +--- a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c ++++ b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c +@@ 
-184,8 +184,8 @@ static int st_lsm6dsx_update_decimators(struct st_lsm6dsx_hw *hw) + return err; + } + +-int st_lsm6dsx_set_fifo_mode(struct st_lsm6dsx_hw *hw, +- enum st_lsm6dsx_fifo_mode fifo_mode) ++static int st_lsm6dsx_set_fifo_mode(struct st_lsm6dsx_hw *hw, ++ enum st_lsm6dsx_fifo_mode fifo_mode) + { + unsigned int data; + +@@ -302,6 +302,18 @@ static int st_lsm6dsx_reset_hw_ts(struct st_lsm6dsx_hw *hw) + return 0; + } + ++int st_lsm6dsx_resume_fifo(struct st_lsm6dsx_hw *hw) ++{ ++ int err; ++ ++ /* reset hw ts counter */ ++ err = st_lsm6dsx_reset_hw_ts(hw); ++ if (err < 0) ++ return err; ++ ++ return st_lsm6dsx_set_fifo_mode(hw, ST_LSM6DSX_FIFO_CONT); ++} ++ + /* + * Set max bulk read to ST_LSM6DSX_MAX_WORD_LEN/ST_LSM6DSX_MAX_TAGGED_WORD_LEN + * in order to avoid a kmalloc for each bus access +@@ -675,12 +687,7 @@ int st_lsm6dsx_update_fifo(struct st_lsm6dsx_sensor *sensor, bool enable) + goto out; + + if (fifo_mask) { +- /* reset hw ts counter */ +- err = st_lsm6dsx_reset_hw_ts(hw); +- if (err < 0) +- goto out; +- +- err = st_lsm6dsx_set_fifo_mode(hw, ST_LSM6DSX_FIFO_CONT); ++ err = st_lsm6dsx_resume_fifo(hw); + if (err < 0) + goto out; + } +diff --git a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c +index 4426524b59f28..fa02e90e95c37 100644 +--- a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c ++++ b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c +@@ -2451,7 +2451,7 @@ static int __maybe_unused st_lsm6dsx_resume(struct device *dev) + } + + if (hw->fifo_mask) +- err = st_lsm6dsx_set_fifo_mode(hw, ST_LSM6DSX_FIFO_CONT); ++ err = st_lsm6dsx_resume_fifo(hw); + + return err; + } +diff --git a/drivers/infiniband/core/counters.c b/drivers/infiniband/core/counters.c +index 738d1faf4bba5..417ebf4d8ba9b 100644 +--- a/drivers/infiniband/core/counters.c ++++ b/drivers/infiniband/core/counters.c +@@ -288,7 +288,7 @@ int rdma_counter_bind_qp_auto(struct ib_qp *qp, u8 port) + struct rdma_counter *counter; + int ret; + +- if 
(!qp->res.valid) ++ if (!qp->res.valid || rdma_is_kernel_res(&qp->res)) + return 0; + + if (!rdma_is_port_valid(dev, port)) +@@ -483,7 +483,7 @@ int rdma_counter_bind_qpn(struct ib_device *dev, u8 port, + goto err; + } + +- if (counter->res.task != qp->res.task) { ++ if (rdma_is_kernel_res(&counter->res) != rdma_is_kernel_res(&qp->res)) { + ret = -EINVAL; + goto err_task; + } +diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c +index d6e9cc94dd900..b2eb87d18e602 100644 +--- a/drivers/infiniband/core/uverbs_cmd.c ++++ b/drivers/infiniband/core/uverbs_cmd.c +@@ -772,6 +772,7 @@ static int ib_uverbs_reg_mr(struct uverbs_attr_bundle *attrs) + mr->uobject = uobj; + atomic_inc(&pd->usecnt); + mr->res.type = RDMA_RESTRACK_MR; ++ mr->iova = cmd.hca_va; + rdma_restrack_uadd(&mr->res); + + uobj->object = mr; +@@ -863,6 +864,9 @@ static int ib_uverbs_rereg_mr(struct uverbs_attr_bundle *attrs) + atomic_dec(&old_pd->usecnt); + } + ++ if (cmd.flags & IB_MR_REREG_TRANS) ++ mr->iova = cmd.hca_va; ++ + memset(&resp, 0, sizeof(resp)); + resp.lkey = mr->lkey; + resp.rkey = mr->rkey; +diff --git a/drivers/infiniband/hw/cxgb4/mem.c b/drivers/infiniband/hw/cxgb4/mem.c +index 962dc97a8ff2b..1e4f4e5255980 100644 +--- a/drivers/infiniband/hw/cxgb4/mem.c ++++ b/drivers/infiniband/hw/cxgb4/mem.c +@@ -399,7 +399,6 @@ static int finish_mem_reg(struct c4iw_mr *mhp, u32 stag) + mmid = stag >> 8; + mhp->ibmr.rkey = mhp->ibmr.lkey = stag; + mhp->ibmr.length = mhp->attr.len; +- mhp->ibmr.iova = mhp->attr.va_fbo; + mhp->ibmr.page_size = 1U << (mhp->attr.page_size + 12); + pr_debug("mmid 0x%x mhp %p\n", mmid, mhp); + return xa_insert_irq(&mhp->rhp->mrs, mmid, mhp, GFP_KERNEL); +diff --git a/drivers/infiniband/hw/mlx4/mr.c b/drivers/infiniband/hw/mlx4/mr.c +index b0121c90c561f..184a281f89ec8 100644 +--- a/drivers/infiniband/hw/mlx4/mr.c ++++ b/drivers/infiniband/hw/mlx4/mr.c +@@ -439,7 +439,6 @@ struct ib_mr *mlx4_ib_reg_user_mr(struct ib_pd *pd, u64 start, u64 
length, + + mr->ibmr.rkey = mr->ibmr.lkey = mr->mmr.key; + mr->ibmr.length = length; +- mr->ibmr.iova = virt_addr; + mr->ibmr.page_size = 1U << shift; + + return &mr->ibmr; +diff --git a/drivers/infiniband/ulp/ipoib/ipoib.h b/drivers/infiniband/ulp/ipoib/ipoib.h +index 9a3379c49541f..9ce6a36fe48ed 100644 +--- a/drivers/infiniband/ulp/ipoib/ipoib.h ++++ b/drivers/infiniband/ulp/ipoib/ipoib.h +@@ -515,7 +515,7 @@ void ipoib_ib_dev_cleanup(struct net_device *dev); + + int ipoib_ib_dev_open_default(struct net_device *dev); + int ipoib_ib_dev_open(struct net_device *dev); +-int ipoib_ib_dev_stop(struct net_device *dev); ++void ipoib_ib_dev_stop(struct net_device *dev); + void ipoib_ib_dev_up(struct net_device *dev); + void ipoib_ib_dev_down(struct net_device *dev); + int ipoib_ib_dev_stop_default(struct net_device *dev); +diff --git a/drivers/infiniband/ulp/ipoib/ipoib_ib.c b/drivers/infiniband/ulp/ipoib/ipoib_ib.c +index da3c5315bbb51..494f413dc3c6c 100644 +--- a/drivers/infiniband/ulp/ipoib/ipoib_ib.c ++++ b/drivers/infiniband/ulp/ipoib/ipoib_ib.c +@@ -670,13 +670,12 @@ int ipoib_send(struct net_device *dev, struct sk_buff *skb, + return rc; + } + +-static void __ipoib_reap_ah(struct net_device *dev) ++static void ipoib_reap_dead_ahs(struct ipoib_dev_priv *priv) + { +- struct ipoib_dev_priv *priv = ipoib_priv(dev); + struct ipoib_ah *ah, *tah; + unsigned long flags; + +- netif_tx_lock_bh(dev); ++ netif_tx_lock_bh(priv->dev); + spin_lock_irqsave(&priv->lock, flags); + + list_for_each_entry_safe(ah, tah, &priv->dead_ahs, list) +@@ -687,37 +686,37 @@ static void __ipoib_reap_ah(struct net_device *dev) + } + + spin_unlock_irqrestore(&priv->lock, flags); +- netif_tx_unlock_bh(dev); ++ netif_tx_unlock_bh(priv->dev); + } + + void ipoib_reap_ah(struct work_struct *work) + { + struct ipoib_dev_priv *priv = + container_of(work, struct ipoib_dev_priv, ah_reap_task.work); +- struct net_device *dev = priv->dev; + +- __ipoib_reap_ah(dev); ++ ipoib_reap_dead_ahs(priv); + + if 
(!test_bit(IPOIB_STOP_REAPER, &priv->flags)) + queue_delayed_work(priv->wq, &priv->ah_reap_task, + round_jiffies_relative(HZ)); + } + +-static void ipoib_flush_ah(struct net_device *dev) ++static void ipoib_start_ah_reaper(struct ipoib_dev_priv *priv) + { +- struct ipoib_dev_priv *priv = ipoib_priv(dev); +- +- cancel_delayed_work(&priv->ah_reap_task); +- flush_workqueue(priv->wq); +- ipoib_reap_ah(&priv->ah_reap_task.work); ++ clear_bit(IPOIB_STOP_REAPER, &priv->flags); ++ queue_delayed_work(priv->wq, &priv->ah_reap_task, ++ round_jiffies_relative(HZ)); + } + +-static void ipoib_stop_ah(struct net_device *dev) ++static void ipoib_stop_ah_reaper(struct ipoib_dev_priv *priv) + { +- struct ipoib_dev_priv *priv = ipoib_priv(dev); +- + set_bit(IPOIB_STOP_REAPER, &priv->flags); +- ipoib_flush_ah(dev); ++ cancel_delayed_work(&priv->ah_reap_task); ++ /* ++ * After ipoib_stop_ah_reaper() we always go through ++ * ipoib_reap_dead_ahs() which ensures the work is really stopped and ++ * does a final flush out of the dead_ah's list ++ */ + } + + static int recvs_pending(struct net_device *dev) +@@ -846,18 +845,6 @@ timeout: + return 0; + } + +-int ipoib_ib_dev_stop(struct net_device *dev) +-{ +- struct ipoib_dev_priv *priv = ipoib_priv(dev); +- +- priv->rn_ops->ndo_stop(dev); +- +- clear_bit(IPOIB_FLAG_INITIALIZED, &priv->flags); +- ipoib_flush_ah(dev); +- +- return 0; +-} +- + int ipoib_ib_dev_open_default(struct net_device *dev) + { + struct ipoib_dev_priv *priv = ipoib_priv(dev); +@@ -901,10 +888,7 @@ int ipoib_ib_dev_open(struct net_device *dev) + return -1; + } + +- clear_bit(IPOIB_STOP_REAPER, &priv->flags); +- queue_delayed_work(priv->wq, &priv->ah_reap_task, +- round_jiffies_relative(HZ)); +- ++ ipoib_start_ah_reaper(priv); + if (priv->rn_ops->ndo_open(dev)) { + pr_warn("%s: Failed to open dev\n", dev->name); + goto dev_stop; +@@ -915,13 +899,20 @@ int ipoib_ib_dev_open(struct net_device *dev) + return 0; + + dev_stop: +- set_bit(IPOIB_STOP_REAPER, &priv->flags); +- 
cancel_delayed_work(&priv->ah_reap_task); +- set_bit(IPOIB_FLAG_INITIALIZED, &priv->flags); +- ipoib_ib_dev_stop(dev); ++ ipoib_stop_ah_reaper(priv); + return -1; + } + ++void ipoib_ib_dev_stop(struct net_device *dev) ++{ ++ struct ipoib_dev_priv *priv = ipoib_priv(dev); ++ ++ priv->rn_ops->ndo_stop(dev); ++ ++ clear_bit(IPOIB_FLAG_INITIALIZED, &priv->flags); ++ ipoib_stop_ah_reaper(priv); ++} ++ + void ipoib_pkey_dev_check_presence(struct net_device *dev) + { + struct ipoib_dev_priv *priv = ipoib_priv(dev); +@@ -1232,7 +1223,7 @@ static void __ipoib_ib_dev_flush(struct ipoib_dev_priv *priv, + ipoib_mcast_dev_flush(dev); + if (oper_up) + set_bit(IPOIB_FLAG_OPER_UP, &priv->flags); +- ipoib_flush_ah(dev); ++ ipoib_reap_dead_ahs(priv); + } + + if (level >= IPOIB_FLUSH_NORMAL) +@@ -1307,7 +1298,7 @@ void ipoib_ib_dev_cleanup(struct net_device *dev) + * the neighbor garbage collection is stopped and reaped. + * That should all be done now, so make a final ah flush. + */ +- ipoib_stop_ah(dev); ++ ipoib_reap_dead_ahs(priv); + + clear_bit(IPOIB_PKEY_ASSIGNED, &priv->flags); + +diff --git a/drivers/infiniband/ulp/ipoib/ipoib_main.c b/drivers/infiniband/ulp/ipoib/ipoib_main.c +index ceec24d451858..29ad4129d2f48 100644 +--- a/drivers/infiniband/ulp/ipoib/ipoib_main.c ++++ b/drivers/infiniband/ulp/ipoib/ipoib_main.c +@@ -1975,6 +1975,8 @@ static void ipoib_ndo_uninit(struct net_device *dev) + + /* no more works over the priv->wq */ + if (priv->wq) { ++ /* See ipoib_mcast_carrier_on_task() */ ++ WARN_ON(test_bit(IPOIB_FLAG_OPER_UP, &priv->flags)); + flush_workqueue(priv->wq); + destroy_workqueue(priv->wq); + priv->wq = NULL; +diff --git a/drivers/input/mouse/sentelic.c b/drivers/input/mouse/sentelic.c +index e99d9bf1a267d..e78c4c7eda34d 100644 +--- a/drivers/input/mouse/sentelic.c ++++ b/drivers/input/mouse/sentelic.c +@@ -441,7 +441,7 @@ static ssize_t fsp_attr_set_setreg(struct psmouse *psmouse, void *data, + + fsp_reg_write_enable(psmouse, false); + +- return count; ++ 
return retval; + } + + PSMOUSE_DEFINE_WO_ATTR(setreg, S_IWUSR, NULL, fsp_attr_set_setreg); +diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c +index 2acf2842c3bd2..71a7605defdab 100644 +--- a/drivers/iommu/intel-iommu.c ++++ b/drivers/iommu/intel-iommu.c +@@ -2645,7 +2645,7 @@ static struct dmar_domain *dmar_insert_one_dev_info(struct intel_iommu *iommu, + } + + if (info->ats_supported && ecap_prs(iommu->ecap) && +- pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_PRI)) ++ pci_pri_supported(pdev)) + info->pri_supported = 1; + } + } +diff --git a/drivers/iommu/omap-iommu-debug.c b/drivers/iommu/omap-iommu-debug.c +index 8e19bfa94121e..a99afb5d9011c 100644 +--- a/drivers/iommu/omap-iommu-debug.c ++++ b/drivers/iommu/omap-iommu-debug.c +@@ -98,8 +98,11 @@ static ssize_t debug_read_regs(struct file *file, char __user *userbuf, + mutex_lock(&iommu_debug_lock); + + bytes = omap_iommu_dump_ctx(obj, p, count); ++ if (bytes < 0) ++ goto err; + bytes = simple_read_from_buffer(userbuf, count, ppos, buf, bytes); + ++err: + mutex_unlock(&iommu_debug_lock); + kfree(buf); + +diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c +index 237c832acdd77..0082192503d14 100644 +--- a/drivers/irqchip/irq-gic-v3-its.c ++++ b/drivers/irqchip/irq-gic-v3-its.c +@@ -3399,6 +3399,7 @@ static int its_irq_domain_alloc(struct irq_domain *domain, unsigned int virq, + msi_alloc_info_t *info = args; + struct its_device *its_dev = info->scratchpad[0].ptr; + struct its_node *its = its_dev->its; ++ struct irq_data *irqd; + irq_hw_number_t hwirq; + int err; + int i; +@@ -3418,7 +3419,9 @@ static int its_irq_domain_alloc(struct irq_domain *domain, unsigned int virq, + + irq_domain_set_hwirq_and_chip(domain, virq + i, + hwirq + i, &its_irq_chip, its_dev); +- irqd_set_single_target(irq_desc_get_irq_data(irq_to_desc(virq + i))); ++ irqd = irq_get_irq_data(virq + i); ++ irqd_set_single_target(irqd); ++ irqd_set_affinity_on_activate(irqd); + pr_debug("ID:%d pID:%d 
vID:%d\n", + (int)(hwirq + i - its_dev->event_map.lpi_base), + (int)(hwirq + i), virq + i); +@@ -3971,18 +3974,22 @@ static void its_vpe_4_1_deschedule(struct its_vpe *vpe, + static void its_vpe_4_1_invall(struct its_vpe *vpe) + { + void __iomem *rdbase; ++ unsigned long flags; + u64 val; ++ int cpu; + + val = GICR_INVALLR_V; + val |= FIELD_PREP(GICR_INVALLR_VPEID, vpe->vpe_id); + + /* Target the redistributor this vPE is currently known on */ +- raw_spin_lock(&gic_data_rdist_cpu(vpe->col_idx)->rd_lock); +- rdbase = per_cpu_ptr(gic_rdists->rdist, vpe->col_idx)->rd_base; ++ cpu = vpe_to_cpuid_lock(vpe, &flags); ++ raw_spin_lock(&gic_data_rdist_cpu(cpu)->rd_lock); ++ rdbase = per_cpu_ptr(gic_rdists->rdist, cpu)->rd_base; + gic_write_lpir(val, rdbase + GICR_INVALLR); + + wait_for_syncr(rdbase); +- raw_spin_unlock(&gic_data_rdist_cpu(vpe->col_idx)->rd_lock); ++ raw_spin_unlock(&gic_data_rdist_cpu(cpu)->rd_lock); ++ vpe_to_cpuid_unlock(vpe, flags); + } + + static int its_vpe_4_1_set_vcpu_affinity(struct irq_data *d, void *vcpu_info) +diff --git a/drivers/irqchip/irq-loongson-liointc.c b/drivers/irqchip/irq-loongson-liointc.c +index 6ef86a334c62d..9ed1bc4736634 100644 +--- a/drivers/irqchip/irq-loongson-liointc.c ++++ b/drivers/irqchip/irq-loongson-liointc.c +@@ -60,7 +60,7 @@ static void liointc_chained_handle_irq(struct irq_desc *desc) + if (!pending) { + /* Always blame LPC IRQ if we have that bug */ + if (handler->priv->has_lpc_irq_errata && +- (handler->parent_int_map & ~gc->mask_cache & ++ (handler->parent_int_map & gc->mask_cache & + BIT(LIOINTC_ERRATA_IRQ))) + pending = BIT(LIOINTC_ERRATA_IRQ); + else +@@ -132,11 +132,11 @@ static void liointc_resume(struct irq_chip_generic *gc) + irq_gc_lock_irqsave(gc, flags); + /* Disable all at first */ + writel(0xffffffff, gc->reg_base + LIOINTC_REG_INTC_DISABLE); +- /* Revert map cache */ ++ /* Restore map cache */ + for (i = 0; i < LIOINTC_CHIP_IRQ; i++) + writeb(priv->map_cache[i], gc->reg_base + i); +- /* Revert mask 
cache */ +- writel(~gc->mask_cache, gc->reg_base + LIOINTC_REG_INTC_ENABLE); ++ /* Restore mask cache */ ++ writel(gc->mask_cache, gc->reg_base + LIOINTC_REG_INTC_ENABLE); + irq_gc_unlock_irqrestore(gc, flags); + } + +@@ -244,7 +244,7 @@ int __init liointc_of_init(struct device_node *node, + ct->chip.irq_mask_ack = irq_gc_mask_disable_reg; + ct->chip.irq_set_type = liointc_set_type; + +- gc->mask_cache = 0xffffffff; ++ gc->mask_cache = 0; + priv->gc = gc; + + for (i = 0; i < LIOINTC_NUM_PARENT; i++) { +diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h +index 74a9849ea164a..756fc5425d9ba 100644 +--- a/drivers/md/bcache/bcache.h ++++ b/drivers/md/bcache/bcache.h +@@ -264,7 +264,7 @@ struct bcache_device { + #define BCACHE_DEV_UNLINK_DONE 2 + #define BCACHE_DEV_WB_RUNNING 3 + #define BCACHE_DEV_RATE_DW_RUNNING 4 +- unsigned int nr_stripes; ++ int nr_stripes; + unsigned int stripe_size; + atomic_t *stripe_sectors_dirty; + unsigned long *full_dirty_stripes; +diff --git a/drivers/md/bcache/bset.c b/drivers/md/bcache/bset.c +index 4385303836d8e..ae4cd74c8001e 100644 +--- a/drivers/md/bcache/bset.c ++++ b/drivers/md/bcache/bset.c +@@ -322,7 +322,7 @@ int bch_btree_keys_alloc(struct btree_keys *b, + + b->page_order = page_order; + +- t->data = (void *) __get_free_pages(gfp, b->page_order); ++ t->data = (void *) __get_free_pages(__GFP_COMP|gfp, b->page_order); + if (!t->data) + goto err; + +diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c +index fd1f288fd8015..2f68cc51bbc70 100644 +--- a/drivers/md/bcache/btree.c ++++ b/drivers/md/bcache/btree.c +@@ -785,7 +785,7 @@ int bch_btree_cache_alloc(struct cache_set *c) + mutex_init(&c->verify_lock); + + c->verify_ondisk = (void *) +- __get_free_pages(GFP_KERNEL, ilog2(bucket_pages(c))); ++ __get_free_pages(GFP_KERNEL|__GFP_COMP, ilog2(bucket_pages(c))); + + c->verify_data = mca_bucket_alloc(c, &ZERO_KEY, GFP_KERNEL); + +diff --git a/drivers/md/bcache/journal.c b/drivers/md/bcache/journal.c 
+index 0e3ff9745ac74..9179638b33874 100644 +--- a/drivers/md/bcache/journal.c ++++ b/drivers/md/bcache/journal.c +@@ -999,8 +999,8 @@ int bch_journal_alloc(struct cache_set *c) + j->w[1].c = c; + + if (!(init_fifo(&j->pin, JOURNAL_PIN, GFP_KERNEL)) || +- !(j->w[0].data = (void *) __get_free_pages(GFP_KERNEL, JSET_BITS)) || +- !(j->w[1].data = (void *) __get_free_pages(GFP_KERNEL, JSET_BITS))) ++ !(j->w[0].data = (void *) __get_free_pages(GFP_KERNEL|__GFP_COMP, JSET_BITS)) || ++ !(j->w[1].data = (void *) __get_free_pages(GFP_KERNEL|__GFP_COMP, JSET_BITS))) + return -ENOMEM; + + return 0; +diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c +index 7048370331c38..b4d23d9f30f9b 100644 +--- a/drivers/md/bcache/super.c ++++ b/drivers/md/bcache/super.c +@@ -1775,7 +1775,7 @@ void bch_cache_set_unregister(struct cache_set *c) + } + + #define alloc_bucket_pages(gfp, c) \ +- ((void *) __get_free_pages(__GFP_ZERO|gfp, ilog2(bucket_pages(c)))) ++ ((void *) __get_free_pages(__GFP_ZERO|__GFP_COMP|gfp, ilog2(bucket_pages(c)))) + + struct cache_set *bch_cache_set_alloc(struct cache_sb *sb) + { +diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c +index 3f7641fb28d53..c0b3c36bb040b 100644 +--- a/drivers/md/bcache/writeback.c ++++ b/drivers/md/bcache/writeback.c +@@ -523,15 +523,19 @@ void bcache_dev_sectors_dirty_add(struct cache_set *c, unsigned int inode, + uint64_t offset, int nr_sectors) + { + struct bcache_device *d = c->devices[inode]; +- unsigned int stripe_offset, stripe, sectors_dirty; ++ unsigned int stripe_offset, sectors_dirty; ++ int stripe; + + if (!d) + return; + ++ stripe = offset_to_stripe(d, offset); ++ if (stripe < 0) ++ return; ++ + if (UUID_FLASH_ONLY(&c->uuids[inode])) + atomic_long_add(nr_sectors, &c->flash_dev_dirty_sectors); + +- stripe = offset_to_stripe(d, offset); + stripe_offset = offset & (d->stripe_size - 1); + + while (nr_sectors) { +@@ -571,12 +575,12 @@ static bool dirty_pred(struct keybuf *buf, struct bkey *k) 
+ static void refill_full_stripes(struct cached_dev *dc) + { + struct keybuf *buf = &dc->writeback_keys; +- unsigned int start_stripe, stripe, next_stripe; ++ unsigned int start_stripe, next_stripe; ++ int stripe; + bool wrapped = false; + + stripe = offset_to_stripe(&dc->disk, KEY_OFFSET(&buf->last_scanned)); +- +- if (stripe >= dc->disk.nr_stripes) ++ if (stripe < 0) + stripe = 0; + + start_stripe = stripe; +diff --git a/drivers/md/bcache/writeback.h b/drivers/md/bcache/writeback.h +index b029843ce5b6f..3f1230e22de01 100644 +--- a/drivers/md/bcache/writeback.h ++++ b/drivers/md/bcache/writeback.h +@@ -52,10 +52,22 @@ static inline uint64_t bcache_dev_sectors_dirty(struct bcache_device *d) + return ret; + } + +-static inline unsigned int offset_to_stripe(struct bcache_device *d, ++static inline int offset_to_stripe(struct bcache_device *d, + uint64_t offset) + { + do_div(offset, d->stripe_size); ++ ++ /* d->nr_stripes is in range [1, INT_MAX] */ ++ if (unlikely(offset >= d->nr_stripes)) { ++ pr_err("Invalid stripe %llu (>= nr_stripes %d).\n", ++ offset, d->nr_stripes); ++ return -EINVAL; ++ } ++ ++ /* ++ * Here offset is definitly smaller than INT_MAX, ++ * return it as int will never overflow. 
++ */ + return offset; + } + +@@ -63,7 +75,10 @@ static inline bool bcache_dev_stripe_dirty(struct cached_dev *dc, + uint64_t offset, + unsigned int nr_sectors) + { +- unsigned int stripe = offset_to_stripe(&dc->disk, offset); ++ int stripe = offset_to_stripe(&dc->disk, offset); ++ ++ if (stripe < 0) ++ return false; + + while (1) { + if (atomic_read(dc->disk.stripe_sectors_dirty + stripe)) +diff --git a/drivers/md/dm-rq.c b/drivers/md/dm-rq.c +index 3f8577e2c13be..2bd2444ad99c6 100644 +--- a/drivers/md/dm-rq.c ++++ b/drivers/md/dm-rq.c +@@ -70,9 +70,6 @@ void dm_start_queue(struct request_queue *q) + + void dm_stop_queue(struct request_queue *q) + { +- if (blk_mq_queue_stopped(q)) +- return; +- + blk_mq_quiesce_queue(q); + } + +diff --git a/drivers/md/dm.c b/drivers/md/dm.c +index fabcc51b468c9..8d952bf059bea 100644 +--- a/drivers/md/dm.c ++++ b/drivers/md/dm.c +@@ -503,7 +503,8 @@ static int dm_blk_report_zones(struct gendisk *disk, sector_t sector, + } + + args.tgt = tgt; +- ret = tgt->type->report_zones(tgt, &args, nr_zones); ++ ret = tgt->type->report_zones(tgt, &args, ++ nr_zones - args.zone_idx); + if (ret < 0) + goto out; + } while (args.zone_idx < nr_zones && +diff --git a/drivers/md/md-cluster.c b/drivers/md/md-cluster.c +index 73fd50e779754..d50737ec40394 100644 +--- a/drivers/md/md-cluster.c ++++ b/drivers/md/md-cluster.c +@@ -1139,6 +1139,7 @@ static int resize_bitmaps(struct mddev *mddev, sector_t newsize, sector_t oldsiz + bitmap = get_bitmap_from_slot(mddev, i); + if (IS_ERR(bitmap)) { + pr_err("can't get bitmap from slot %d\n", i); ++ bitmap = NULL; + goto out; + } + counts = &bitmap->counts; +diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c +index 190dd70db514b..554e7f15325fe 100644 +--- a/drivers/md/raid5.c ++++ b/drivers/md/raid5.c +@@ -3604,6 +3604,7 @@ static int need_this_block(struct stripe_head *sh, struct stripe_head_state *s, + * is missing/faulty, then we need to read everything we can. 
+ */ + if (sh->raid_conf->level != 6 && ++ sh->raid_conf->rmw_level != PARITY_DISABLE_RMW && + sh->sector < sh->raid_conf->mddev->recovery_cp) + /* reconstruct-write isn't being forced */ + return 0; +@@ -4839,7 +4840,7 @@ static void handle_stripe(struct stripe_head *sh) + * or to load a block that is being partially written. + */ + if (s.to_read || s.non_overwrite +- || (conf->level == 6 && s.to_write && s.failed) ++ || (s.to_write && s.failed) + || (s.syncing && (s.uptodate + s.compute < disks)) + || s.replacing + || s.expanding) +diff --git a/drivers/media/platform/qcom/venus/pm_helpers.c b/drivers/media/platform/qcom/venus/pm_helpers.c +index abf93158857b9..531e7a41658f7 100644 +--- a/drivers/media/platform/qcom/venus/pm_helpers.c ++++ b/drivers/media/platform/qcom/venus/pm_helpers.c +@@ -496,6 +496,10 @@ min_loaded_core(struct venus_inst *inst, u32 *min_coreid, u32 *min_load) + list_for_each_entry(inst_pos, &core->instances, list) { + if (inst_pos == inst) + continue; ++ ++ if (inst_pos->state != INST_START) ++ continue; ++ + vpp_freq = inst_pos->clk_data.codec_freq_data->vpp_freq; + coreid = inst_pos->clk_data.core_id; + +diff --git a/drivers/media/platform/rockchip/rga/rga-hw.c b/drivers/media/platform/rockchip/rga/rga-hw.c +index 4be6dcf292fff..aaa96f256356b 100644 +--- a/drivers/media/platform/rockchip/rga/rga-hw.c ++++ b/drivers/media/platform/rockchip/rga/rga-hw.c +@@ -200,22 +200,25 @@ static void rga_cmd_set_trans_info(struct rga_ctx *ctx) + dst_info.data.format = ctx->out.fmt->hw_format; + dst_info.data.swap = ctx->out.fmt->color_swap; + +- if (ctx->in.fmt->hw_format >= RGA_COLOR_FMT_YUV422SP) { +- if (ctx->out.fmt->hw_format < RGA_COLOR_FMT_YUV422SP) { +- switch (ctx->in.colorspace) { +- case V4L2_COLORSPACE_REC709: +- src_info.data.csc_mode = +- RGA_SRC_CSC_MODE_BT709_R0; +- break; +- default: +- src_info.data.csc_mode = +- RGA_SRC_CSC_MODE_BT601_R0; +- break; +- } ++ /* ++ * CSC mode must only be set when the colorspace families differ between ++ 
* input and output. It must remain unset (zeroed) if both are the same. ++ */ ++ ++ if (RGA_COLOR_FMT_IS_YUV(ctx->in.fmt->hw_format) && ++ RGA_COLOR_FMT_IS_RGB(ctx->out.fmt->hw_format)) { ++ switch (ctx->in.colorspace) { ++ case V4L2_COLORSPACE_REC709: ++ src_info.data.csc_mode = RGA_SRC_CSC_MODE_BT709_R0; ++ break; ++ default: ++ src_info.data.csc_mode = RGA_SRC_CSC_MODE_BT601_R0; ++ break; + } + } + +- if (ctx->out.fmt->hw_format >= RGA_COLOR_FMT_YUV422SP) { ++ if (RGA_COLOR_FMT_IS_RGB(ctx->in.fmt->hw_format) && ++ RGA_COLOR_FMT_IS_YUV(ctx->out.fmt->hw_format)) { + switch (ctx->out.colorspace) { + case V4L2_COLORSPACE_REC709: + dst_info.data.csc_mode = RGA_SRC_CSC_MODE_BT709_R0; +diff --git a/drivers/media/platform/rockchip/rga/rga-hw.h b/drivers/media/platform/rockchip/rga/rga-hw.h +index 96cb0314dfa70..e8917e5630a48 100644 +--- a/drivers/media/platform/rockchip/rga/rga-hw.h ++++ b/drivers/media/platform/rockchip/rga/rga-hw.h +@@ -95,6 +95,11 @@ + #define RGA_COLOR_FMT_CP_8BPP 15 + #define RGA_COLOR_FMT_MASK 15 + ++#define RGA_COLOR_FMT_IS_YUV(fmt) \ ++ (((fmt) >= RGA_COLOR_FMT_YUV422SP) && ((fmt) < RGA_COLOR_FMT_CP_1BPP)) ++#define RGA_COLOR_FMT_IS_RGB(fmt) \ ++ ((fmt) < RGA_COLOR_FMT_YUV422SP) ++ + #define RGA_COLOR_NONE_SWAP 0 + #define RGA_COLOR_RB_SWAP 1 + #define RGA_COLOR_ALPHA_SWAP 2 +diff --git a/drivers/media/platform/vsp1/vsp1_dl.c b/drivers/media/platform/vsp1/vsp1_dl.c +index d7b43037e500a..e07b135613eb5 100644 +--- a/drivers/media/platform/vsp1/vsp1_dl.c ++++ b/drivers/media/platform/vsp1/vsp1_dl.c +@@ -431,6 +431,8 @@ vsp1_dl_cmd_pool_create(struct vsp1_device *vsp1, enum vsp1_extcmd_type type, + if (!pool) + return NULL; + ++ pool->vsp1 = vsp1; ++ + spin_lock_init(&pool->lock); + INIT_LIST_HEAD(&pool->free); + +diff --git a/drivers/mfd/arizona-core.c b/drivers/mfd/arizona-core.c +index f73cf76d1373d..a5e443110fc3d 100644 +--- a/drivers/mfd/arizona-core.c ++++ b/drivers/mfd/arizona-core.c +@@ -1426,6 +1426,15 @@ err_irq: + 
arizona_irq_exit(arizona); + err_pm: + pm_runtime_disable(arizona->dev); ++ ++ switch (arizona->pdata.clk32k_src) { ++ case ARIZONA_32KZ_MCLK1: ++ case ARIZONA_32KZ_MCLK2: ++ arizona_clk32k_disable(arizona); ++ break; ++ default: ++ break; ++ } + err_reset: + arizona_enable_reset(arizona); + regulator_disable(arizona->dcvdd); +@@ -1448,6 +1457,15 @@ int arizona_dev_exit(struct arizona *arizona) + regulator_disable(arizona->dcvdd); + regulator_put(arizona->dcvdd); + ++ switch (arizona->pdata.clk32k_src) { ++ case ARIZONA_32KZ_MCLK1: ++ case ARIZONA_32KZ_MCLK2: ++ arizona_clk32k_disable(arizona); ++ break; ++ default: ++ break; ++ } ++ + mfd_remove_devices(arizona->dev); + arizona_free_irq(arizona, ARIZONA_IRQ_UNDERCLOCKED, arizona); + arizona_free_irq(arizona, ARIZONA_IRQ_OVERCLOCKED, arizona); +diff --git a/drivers/mfd/dln2.c b/drivers/mfd/dln2.c +index 39276fa626d2b..83e676a096dc1 100644 +--- a/drivers/mfd/dln2.c ++++ b/drivers/mfd/dln2.c +@@ -287,7 +287,11 @@ static void dln2_rx(struct urb *urb) + len = urb->actual_length - sizeof(struct dln2_header); + + if (handle == DLN2_HANDLE_EVENT) { ++ unsigned long flags; ++ ++ spin_lock_irqsave(&dln2->event_cb_lock, flags); + dln2_run_event_callbacks(dln2, id, echo, data, len); ++ spin_unlock_irqrestore(&dln2->event_cb_lock, flags); + } else { + /* URB will be re-submitted in _dln2_transfer (free_rx_slot) */ + if (dln2_transfer_complete(dln2, urb, handle, echo)) +diff --git a/drivers/mmc/host/renesas_sdhi_internal_dmac.c b/drivers/mmc/host/renesas_sdhi_internal_dmac.c +index 47ac53e912411..201b8ed37f2e0 100644 +--- a/drivers/mmc/host/renesas_sdhi_internal_dmac.c ++++ b/drivers/mmc/host/renesas_sdhi_internal_dmac.c +@@ -229,15 +229,12 @@ static void renesas_sdhi_internal_dmac_issue_tasklet_fn(unsigned long arg) + DTRAN_CTRL_DM_START); + } + +-static void renesas_sdhi_internal_dmac_complete_tasklet_fn(unsigned long arg) ++static bool renesas_sdhi_internal_dmac_complete(struct tmio_mmc_host *host) + { +- struct 
tmio_mmc_host *host = (struct tmio_mmc_host *)arg; + enum dma_data_direction dir; + +- spin_lock_irq(&host->lock); +- + if (!host->data) +- goto out; ++ return false; + + if (host->data->flags & MMC_DATA_READ) + dir = DMA_FROM_DEVICE; +@@ -250,6 +247,17 @@ static void renesas_sdhi_internal_dmac_complete_tasklet_fn(unsigned long arg) + if (dir == DMA_FROM_DEVICE) + clear_bit(SDHI_INTERNAL_DMAC_RX_IN_USE, &global_flags); + ++ return true; ++} ++ ++static void renesas_sdhi_internal_dmac_complete_tasklet_fn(unsigned long arg) ++{ ++ struct tmio_mmc_host *host = (struct tmio_mmc_host *)arg; ++ ++ spin_lock_irq(&host->lock); ++ if (!renesas_sdhi_internal_dmac_complete(host)) ++ goto out; ++ + tmio_mmc_do_data_irq(host); + out: + spin_unlock_irq(&host->lock); +diff --git a/drivers/mtd/nand/raw/brcmnand/brcmnand.c b/drivers/mtd/nand/raw/brcmnand/brcmnand.c +index cdae2311a3b69..0b1ea965cba08 100644 +--- a/drivers/mtd/nand/raw/brcmnand/brcmnand.c ++++ b/drivers/mtd/nand/raw/brcmnand/brcmnand.c +@@ -1859,6 +1859,22 @@ static int brcmnand_edu_trans(struct brcmnand_host *host, u64 addr, u32 *buf, + edu_writel(ctrl, EDU_STOP, 0); /* force stop */ + edu_readl(ctrl, EDU_STOP); + ++ if (!ret && edu_cmd == EDU_CMD_READ) { ++ u64 err_addr = 0; ++ ++ /* ++ * check for ECC errors here, subpage ECC errors are ++ * retained in ECC error address register ++ */ ++ err_addr = brcmnand_get_uncorrecc_addr(ctrl); ++ if (!err_addr) { ++ err_addr = brcmnand_get_correcc_addr(ctrl); ++ if (err_addr) ++ ret = -EUCLEAN; ++ } else ++ ret = -EBADMSG; ++ } ++ + return ret; + } + +@@ -2065,6 +2081,7 @@ static int brcmnand_read(struct mtd_info *mtd, struct nand_chip *chip, + u64 err_addr = 0; + int err; + bool retry = true; ++ bool edu_err = false; + + dev_dbg(ctrl->dev, "read %llx -> %p\n", (unsigned long long)addr, buf); + +@@ -2082,6 +2099,10 @@ try_dmaread: + else + return -EIO; + } ++ ++ if (has_edu(ctrl) && err_addr) ++ edu_err = true; ++ + } else { + if (oob) + memset(oob, 0x99, mtd->oobsize); 
+@@ -2129,6 +2150,11 @@ try_dmaread: + if (mtd_is_bitflip(err)) { + unsigned int corrected = brcmnand_count_corrected(ctrl); + ++ /* in case of EDU correctable error we read again using PIO */ ++ if (edu_err) ++ err = brcmnand_read_by_pio(mtd, chip, addr, trans, buf, ++ oob, &err_addr); ++ + dev_dbg(ctrl->dev, "corrected error at 0x%llx\n", + (unsigned long long)err_addr); + mtd->ecc_stats.corrected += corrected; +diff --git a/drivers/mtd/nand/raw/fsl_upm.c b/drivers/mtd/nand/raw/fsl_upm.c +index f31fae3a4c689..6b8ec72686e29 100644 +--- a/drivers/mtd/nand/raw/fsl_upm.c ++++ b/drivers/mtd/nand/raw/fsl_upm.c +@@ -62,7 +62,6 @@ static int fun_chip_ready(struct nand_chip *chip) + static void fun_wait_rnb(struct fsl_upm_nand *fun) + { + if (fun->rnb_gpio[fun->mchip_number] >= 0) { +- struct mtd_info *mtd = nand_to_mtd(&fun->chip); + int cnt = 1000000; + + while (--cnt && !fun_chip_ready(&fun->chip)) +diff --git a/drivers/net/ethernet/marvell/octeontx2/af/common.h b/drivers/net/ethernet/marvell/octeontx2/af/common.h +index cd33c2e6ca5fc..f48eb66ed021b 100644 +--- a/drivers/net/ethernet/marvell/octeontx2/af/common.h ++++ b/drivers/net/ethernet/marvell/octeontx2/af/common.h +@@ -43,7 +43,7 @@ struct qmem { + void *base; + dma_addr_t iova; + int alloc_sz; +- u8 entry_sz; ++ u16 entry_sz; + u8 align; + u32 qsize; + }; +diff --git a/drivers/net/ethernet/qualcomm/emac/emac.c b/drivers/net/ethernet/qualcomm/emac/emac.c +index 18b0c7a2d6dcb..90e794c79f667 100644 +--- a/drivers/net/ethernet/qualcomm/emac/emac.c ++++ b/drivers/net/ethernet/qualcomm/emac/emac.c +@@ -473,13 +473,24 @@ static int emac_clks_phase1_init(struct platform_device *pdev, + + ret = clk_prepare_enable(adpt->clk[EMAC_CLK_CFG_AHB]); + if (ret) +- return ret; ++ goto disable_clk_axi; + + ret = clk_set_rate(adpt->clk[EMAC_CLK_HIGH_SPEED], 19200000); + if (ret) +- return ret; ++ goto disable_clk_cfg_ahb; ++ ++ ret = clk_prepare_enable(adpt->clk[EMAC_CLK_HIGH_SPEED]); ++ if (ret) ++ goto disable_clk_cfg_ahb; + +- 
return clk_prepare_enable(adpt->clk[EMAC_CLK_HIGH_SPEED]); ++ return 0; ++ ++disable_clk_cfg_ahb: ++ clk_disable_unprepare(adpt->clk[EMAC_CLK_CFG_AHB]); ++disable_clk_axi: ++ clk_disable_unprepare(adpt->clk[EMAC_CLK_AXI]); ++ ++ return ret; + } + + /* Enable clocks; needs emac_clks_phase1_init to be called before */ +diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-ipq806x.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-ipq806x.c +index 02102c781a8cf..bf3250e0e59ca 100644 +--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-ipq806x.c ++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-ipq806x.c +@@ -351,6 +351,7 @@ static int ipq806x_gmac_probe(struct platform_device *pdev) + plat_dat->has_gmac = true; + plat_dat->bsp_priv = gmac; + plat_dat->fix_mac_speed = ipq806x_gmac_fix_mac_speed; ++ plat_dat->multicast_filter_bins = 0; + + err = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res); + if (err) +diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac1000_core.c b/drivers/net/ethernet/stmicro/stmmac/dwmac1000_core.c +index efc6ec1b8027c..fc8759f146c7c 100644 +--- a/drivers/net/ethernet/stmicro/stmmac/dwmac1000_core.c ++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac1000_core.c +@@ -164,6 +164,9 @@ static void dwmac1000_set_filter(struct mac_device_info *hw, + value = GMAC_FRAME_FILTER_PR | GMAC_FRAME_FILTER_PCF; + } else if (dev->flags & IFF_ALLMULTI) { + value = GMAC_FRAME_FILTER_PM; /* pass all multi */ ++ } else if (!netdev_mc_empty(dev) && (mcbitslog2 == 0)) { ++ /* Fall back to all multicast if we've no filter */ ++ value = GMAC_FRAME_FILTER_PM; + } else if (!netdev_mc_empty(dev)) { + struct netdev_hw_addr *ha; + +diff --git a/drivers/net/wireless/realtek/rtw88/pci.c b/drivers/net/wireless/realtek/rtw88/pci.c +index d735f3127fe8f..6c24ddc2a9751 100644 +--- a/drivers/net/wireless/realtek/rtw88/pci.c ++++ b/drivers/net/wireless/realtek/rtw88/pci.c +@@ -14,8 +14,11 @@ + #include "debug.h" + + static bool rtw_disable_msi; ++static bool rtw_pci_disable_aspm; + 
module_param_named(disable_msi, rtw_disable_msi, bool, 0644); ++module_param_named(disable_aspm, rtw_pci_disable_aspm, bool, 0644); + MODULE_PARM_DESC(disable_msi, "Set Y to disable MSI interrupt support"); ++MODULE_PARM_DESC(disable_aspm, "Set Y to disable PCI ASPM support"); + + static u32 rtw_pci_tx_queue_idx_addr[] = { + [RTW_TX_QUEUE_BK] = RTK_PCI_TXBD_IDX_BKQ, +@@ -1189,6 +1192,9 @@ static void rtw_pci_clkreq_set(struct rtw_dev *rtwdev, bool enable) + u8 value; + int ret; + ++ if (rtw_pci_disable_aspm) ++ return; ++ + ret = rtw_dbi_read8(rtwdev, RTK_PCIE_LINK_CFG, &value); + if (ret) { + rtw_err(rtwdev, "failed to read CLKREQ_L1, ret=%d", ret); +@@ -1208,6 +1214,9 @@ static void rtw_pci_aspm_set(struct rtw_dev *rtwdev, bool enable) + u8 value; + int ret; + ++ if (rtw_pci_disable_aspm) ++ return; ++ + ret = rtw_dbi_read8(rtwdev, RTK_PCIE_LINK_CFG, &value); + if (ret) { + rtw_err(rtwdev, "failed to read ASPM, ret=%d", ret); +diff --git a/drivers/nvdimm/security.c b/drivers/nvdimm/security.c +index 89b85970912db..35d265014e1ec 100644 +--- a/drivers/nvdimm/security.c ++++ b/drivers/nvdimm/security.c +@@ -450,14 +450,19 @@ void __nvdimm_security_overwrite_query(struct nvdimm *nvdimm) + else + dev_dbg(&nvdimm->dev, "overwrite completed\n"); + +- if (nvdimm->sec.overwrite_state) +- sysfs_notify_dirent(nvdimm->sec.overwrite_state); ++ /* ++ * Mark the overwrite work done and update dimm security flags, ++ * then send a sysfs event notification to wake up userspace ++ * poll threads to picked up the changed state. 
++ */ + nvdimm->sec.overwrite_tmo = 0; + clear_bit(NDD_SECURITY_OVERWRITE, &nvdimm->flags); + clear_bit(NDD_WORK_PENDING, &nvdimm->flags); +- put_device(&nvdimm->dev); + nvdimm->sec.flags = nvdimm_security_flags(nvdimm, NVDIMM_USER); +- nvdimm->sec.flags = nvdimm_security_flags(nvdimm, NVDIMM_MASTER); ++ nvdimm->sec.ext_flags = nvdimm_security_flags(nvdimm, NVDIMM_MASTER); ++ if (nvdimm->sec.overwrite_state) ++ sysfs_notify_dirent(nvdimm->sec.overwrite_state); ++ put_device(&nvdimm->dev); + } + + void nvdimm_security_overwrite_query(struct work_struct *work) +diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c +index f7540a9e54fd2..ee67113d96b1b 100644 +--- a/drivers/nvme/host/core.c ++++ b/drivers/nvme/host/core.c +@@ -368,6 +368,16 @@ bool nvme_change_ctrl_state(struct nvme_ctrl *ctrl, + break; + } + break; ++ case NVME_CTRL_DELETING_NOIO: ++ switch (old_state) { ++ case NVME_CTRL_DELETING: ++ case NVME_CTRL_DEAD: ++ changed = true; ++ /* FALLTHRU */ ++ default: ++ break; ++ } ++ break; + case NVME_CTRL_DEAD: + switch (old_state) { + case NVME_CTRL_DELETING: +@@ -405,6 +415,7 @@ static bool nvme_state_terminal(struct nvme_ctrl *ctrl) + case NVME_CTRL_CONNECTING: + return false; + case NVME_CTRL_DELETING: ++ case NVME_CTRL_DELETING_NOIO: + case NVME_CTRL_DEAD: + return true; + default: +@@ -3280,6 +3291,7 @@ static ssize_t nvme_sysfs_show_state(struct device *dev, + [NVME_CTRL_RESETTING] = "resetting", + [NVME_CTRL_CONNECTING] = "connecting", + [NVME_CTRL_DELETING] = "deleting", ++ [NVME_CTRL_DELETING_NOIO]= "deleting (no IO)", + [NVME_CTRL_DEAD] = "dead", + }; + +@@ -3860,6 +3872,9 @@ void nvme_remove_namespaces(struct nvme_ctrl *ctrl) + if (ctrl->state == NVME_CTRL_DEAD) + nvme_kill_queues(ctrl); + ++ /* this is a no-op when called from the controller reset handler */ ++ nvme_change_ctrl_state(ctrl, NVME_CTRL_DELETING_NOIO); ++ + down_write(&ctrl->namespaces_rwsem); + list_splice_init(&ctrl->namespaces, &ns_list); + 
up_write(&ctrl->namespaces_rwsem); +diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c +index 2a6c8190eeb76..4ec4829d62334 100644 +--- a/drivers/nvme/host/fabrics.c ++++ b/drivers/nvme/host/fabrics.c +@@ -547,7 +547,7 @@ static struct nvmf_transport_ops *nvmf_lookup_transport( + blk_status_t nvmf_fail_nonready_command(struct nvme_ctrl *ctrl, + struct request *rq) + { +- if (ctrl->state != NVME_CTRL_DELETING && ++ if (ctrl->state != NVME_CTRL_DELETING_NOIO && + ctrl->state != NVME_CTRL_DEAD && + !blk_noretry_request(rq) && !(rq->cmd_flags & REQ_NVME_MPATH)) + return BLK_STS_RESOURCE; +diff --git a/drivers/nvme/host/fabrics.h b/drivers/nvme/host/fabrics.h +index a0ec40ab62eeb..a9c1e3b4585ec 100644 +--- a/drivers/nvme/host/fabrics.h ++++ b/drivers/nvme/host/fabrics.h +@@ -182,7 +182,8 @@ bool nvmf_ip_options_match(struct nvme_ctrl *ctrl, + static inline bool nvmf_check_ready(struct nvme_ctrl *ctrl, struct request *rq, + bool queue_live) + { +- if (likely(ctrl->state == NVME_CTRL_LIVE)) ++ if (likely(ctrl->state == NVME_CTRL_LIVE || ++ ctrl->state == NVME_CTRL_DELETING)) + return true; + return __nvmf_check_ready(ctrl, rq, queue_live); + } +diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c +index 564e3f220ac79..a70220df1f570 100644 +--- a/drivers/nvme/host/fc.c ++++ b/drivers/nvme/host/fc.c +@@ -800,6 +800,7 @@ nvme_fc_ctrl_connectivity_loss(struct nvme_fc_ctrl *ctrl) + break; + + case NVME_CTRL_DELETING: ++ case NVME_CTRL_DELETING_NOIO: + default: + /* no action to take - let it delete */ + break; +diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c +index d3914b7e8f52c..8f235fbfe44ee 100644 +--- a/drivers/nvme/host/multipath.c ++++ b/drivers/nvme/host/multipath.c +@@ -167,9 +167,18 @@ void nvme_mpath_clear_ctrl_paths(struct nvme_ctrl *ctrl) + + static bool nvme_path_is_disabled(struct nvme_ns *ns) + { +- return ns->ctrl->state != NVME_CTRL_LIVE || +- test_bit(NVME_NS_ANA_PENDING, &ns->flags) || +- 
test_bit(NVME_NS_REMOVING, &ns->flags); ++ /* ++ * We don't treat NVME_CTRL_DELETING as a disabled path as I/O should ++ * still be able to complete assuming that the controller is connected. ++ * Otherwise it will fail immediately and return to the requeue list. ++ */ ++ if (ns->ctrl->state != NVME_CTRL_LIVE && ++ ns->ctrl->state != NVME_CTRL_DELETING) ++ return true; ++ if (test_bit(NVME_NS_ANA_PENDING, &ns->flags) || ++ test_bit(NVME_NS_REMOVING, &ns->flags)) ++ return true; ++ return false; + } + + static struct nvme_ns *__nvme_find_path(struct nvme_ns_head *head, int node) +@@ -575,6 +584,9 @@ static void nvme_ana_work(struct work_struct *work) + { + struct nvme_ctrl *ctrl = container_of(work, struct nvme_ctrl, ana_work); + ++ if (ctrl->state != NVME_CTRL_LIVE) ++ return; ++ + nvme_read_ana_log(ctrl); + } + +diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h +index 8f1b0a30fd2a6..ff0b4079e8d6d 100644 +--- a/drivers/nvme/host/nvme.h ++++ b/drivers/nvme/host/nvme.h +@@ -183,6 +183,7 @@ enum nvme_ctrl_state { + NVME_CTRL_RESETTING, + NVME_CTRL_CONNECTING, + NVME_CTRL_DELETING, ++ NVME_CTRL_DELETING_NOIO, + NVME_CTRL_DEAD, + }; + +diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c +index 19c94080512cf..fdab0054cd809 100644 +--- a/drivers/nvme/host/rdma.c ++++ b/drivers/nvme/host/rdma.c +@@ -1023,11 +1023,12 @@ static int nvme_rdma_setup_ctrl(struct nvme_rdma_ctrl *ctrl, bool new) + changed = nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_LIVE); + if (!changed) { + /* +- * state change failure is ok if we're in DELETING state, ++ * state change failure is ok if we started ctrl delete, + * unless we're during creation of a new controller to + * avoid races with teardown flow. 
+ */ +- WARN_ON_ONCE(ctrl->ctrl.state != NVME_CTRL_DELETING); ++ WARN_ON_ONCE(ctrl->ctrl.state != NVME_CTRL_DELETING && ++ ctrl->ctrl.state != NVME_CTRL_DELETING_NOIO); + WARN_ON_ONCE(new); + ret = -EINVAL; + goto destroy_io; +@@ -1080,8 +1081,9 @@ static void nvme_rdma_error_recovery_work(struct work_struct *work) + blk_mq_unquiesce_queue(ctrl->ctrl.admin_q); + + if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_CONNECTING)) { +- /* state change failure is ok if we're in DELETING state */ +- WARN_ON_ONCE(ctrl->ctrl.state != NVME_CTRL_DELETING); ++ /* state change failure is ok if we started ctrl delete */ ++ WARN_ON_ONCE(ctrl->ctrl.state != NVME_CTRL_DELETING && ++ ctrl->ctrl.state != NVME_CTRL_DELETING_NOIO); + return; + } + +diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c +index 99eaa0474e10b..06d6c1c6de35b 100644 +--- a/drivers/nvme/host/tcp.c ++++ b/drivers/nvme/host/tcp.c +@@ -1938,11 +1938,12 @@ static int nvme_tcp_setup_ctrl(struct nvme_ctrl *ctrl, bool new) + + if (!nvme_change_ctrl_state(ctrl, NVME_CTRL_LIVE)) { + /* +- * state change failure is ok if we're in DELETING state, ++ * state change failure is ok if we started ctrl delete, + * unless we're during creation of a new controller to + * avoid races with teardown flow. 
+ */ +- WARN_ON_ONCE(ctrl->state != NVME_CTRL_DELETING); ++ WARN_ON_ONCE(ctrl->state != NVME_CTRL_DELETING && ++ ctrl->state != NVME_CTRL_DELETING_NOIO); + WARN_ON_ONCE(new); + ret = -EINVAL; + goto destroy_io; +@@ -1998,8 +1999,9 @@ static void nvme_tcp_error_recovery_work(struct work_struct *work) + blk_mq_unquiesce_queue(ctrl->admin_q); + + if (!nvme_change_ctrl_state(ctrl, NVME_CTRL_CONNECTING)) { +- /* state change failure is ok if we're in DELETING state */ +- WARN_ON_ONCE(ctrl->state != NVME_CTRL_DELETING); ++ /* state change failure is ok if we started ctrl delete */ ++ WARN_ON_ONCE(ctrl->state != NVME_CTRL_DELETING && ++ ctrl->state != NVME_CTRL_DELETING_NOIO); + return; + } + +@@ -2034,8 +2036,9 @@ static void nvme_reset_ctrl_work(struct work_struct *work) + nvme_tcp_teardown_ctrl(ctrl, false); + + if (!nvme_change_ctrl_state(ctrl, NVME_CTRL_CONNECTING)) { +- /* state change failure is ok if we're in DELETING state */ +- WARN_ON_ONCE(ctrl->state != NVME_CTRL_DELETING); ++ /* state change failure is ok if we started ctrl delete */ ++ WARN_ON_ONCE(ctrl->state != NVME_CTRL_DELETING && ++ ctrl->state != NVME_CTRL_DELETING_NOIO); + return; + } + +diff --git a/drivers/pci/ats.c b/drivers/pci/ats.c +index 390e92f2d8d1f..7ce08da1c6cb1 100644 +--- a/drivers/pci/ats.c ++++ b/drivers/pci/ats.c +@@ -309,6 +309,21 @@ int pci_prg_resp_pasid_required(struct pci_dev *pdev) + + return pdev->pasid_required; + } ++ ++/** ++ * pci_pri_supported - Check if PRI is supported. ++ * @pdev: PCI device structure ++ * ++ * Returns true if PRI capability is present, false otherwise. 
++ */ ++bool pci_pri_supported(struct pci_dev *pdev) ++{ ++ /* VFs share the PF PRI */ ++ if (pci_physfn(pdev)->pri_cap) ++ return true; ++ return false; ++} ++EXPORT_SYMBOL_GPL(pci_pri_supported); + #endif /* CONFIG_PCI_PRI */ + + #ifdef CONFIG_PCI_PASID +diff --git a/drivers/pci/bus.c b/drivers/pci/bus.c +index 8e40b3e6da77d..3cef835b375fd 100644 +--- a/drivers/pci/bus.c ++++ b/drivers/pci/bus.c +@@ -322,12 +322,8 @@ void pci_bus_add_device(struct pci_dev *dev) + + dev->match_driver = true; + retval = device_attach(&dev->dev); +- if (retval < 0 && retval != -EPROBE_DEFER) { ++ if (retval < 0 && retval != -EPROBE_DEFER) + pci_warn(dev, "device attach failed (%d)\n", retval); +- pci_proc_detach_device(dev); +- pci_remove_sysfs_dev_files(dev); +- return; +- } + + pci_dev_assign_added(dev, true); + } +diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c +index 138e1a2d21ccd..5dd1740855770 100644 +--- a/drivers/pci/controller/dwc/pcie-qcom.c ++++ b/drivers/pci/controller/dwc/pcie-qcom.c +@@ -45,7 +45,13 @@ + #define PCIE_CAP_CPL_TIMEOUT_DISABLE 0x10 + + #define PCIE20_PARF_PHY_CTRL 0x40 ++#define PHY_CTRL_PHY_TX0_TERM_OFFSET_MASK GENMASK(20, 16) ++#define PHY_CTRL_PHY_TX0_TERM_OFFSET(x) ((x) << 16) ++ + #define PCIE20_PARF_PHY_REFCLK 0x4C ++#define PHY_REFCLK_SSP_EN BIT(16) ++#define PHY_REFCLK_USE_PAD BIT(12) ++ + #define PCIE20_PARF_DBI_BASE_ADDR 0x168 + #define PCIE20_PARF_SLV_ADDR_SPACE_SIZE 0x16C + #define PCIE20_PARF_MHI_CLOCK_RESET_CTRL 0x174 +@@ -77,6 +83,18 @@ + #define DBI_RO_WR_EN 1 + + #define PERST_DELAY_US 1000 ++/* PARF registers */ ++#define PCIE20_PARF_PCS_DEEMPH 0x34 ++#define PCS_DEEMPH_TX_DEEMPH_GEN1(x) ((x) << 16) ++#define PCS_DEEMPH_TX_DEEMPH_GEN2_3_5DB(x) ((x) << 8) ++#define PCS_DEEMPH_TX_DEEMPH_GEN2_6DB(x) ((x) << 0) ++ ++#define PCIE20_PARF_PCS_SWING 0x38 ++#define PCS_SWING_TX_SWING_FULL(x) ((x) << 8) ++#define PCS_SWING_TX_SWING_LOW(x) ((x) << 0) ++ ++#define PCIE20_PARF_CONFIG_BITS 0x50 ++#define 
PHY_RX0_EQ(x) ((x) << 24) + + #define PCIE20_v3_PARF_SLV_ADDR_SPACE_SIZE 0x358 + #define SLV_ADDR_SPACE_SZ 0x10000000 +@@ -286,6 +304,7 @@ static int qcom_pcie_init_2_1_0(struct qcom_pcie *pcie) + struct qcom_pcie_resources_2_1_0 *res = &pcie->res.v2_1_0; + struct dw_pcie *pci = pcie->pci; + struct device *dev = pci->dev; ++ struct device_node *node = dev->of_node; + u32 val; + int ret; + +@@ -330,9 +349,29 @@ static int qcom_pcie_init_2_1_0(struct qcom_pcie *pcie) + val &= ~BIT(0); + writel(val, pcie->parf + PCIE20_PARF_PHY_CTRL); + ++ if (of_device_is_compatible(node, "qcom,pcie-ipq8064")) { ++ writel(PCS_DEEMPH_TX_DEEMPH_GEN1(24) | ++ PCS_DEEMPH_TX_DEEMPH_GEN2_3_5DB(24) | ++ PCS_DEEMPH_TX_DEEMPH_GEN2_6DB(34), ++ pcie->parf + PCIE20_PARF_PCS_DEEMPH); ++ writel(PCS_SWING_TX_SWING_FULL(120) | ++ PCS_SWING_TX_SWING_LOW(120), ++ pcie->parf + PCIE20_PARF_PCS_SWING); ++ writel(PHY_RX0_EQ(4), pcie->parf + PCIE20_PARF_CONFIG_BITS); ++ } ++ ++ if (of_device_is_compatible(node, "qcom,pcie-ipq8064")) { ++ /* set TX termination offset */ ++ val = readl(pcie->parf + PCIE20_PARF_PHY_CTRL); ++ val &= ~PHY_CTRL_PHY_TX0_TERM_OFFSET_MASK; ++ val |= PHY_CTRL_PHY_TX0_TERM_OFFSET(7); ++ writel(val, pcie->parf + PCIE20_PARF_PHY_CTRL); ++ } ++ + /* enable external reference clock */ + val = readl(pcie->parf + PCIE20_PARF_PHY_REFCLK); +- val |= BIT(16); ++ val &= ~PHY_REFCLK_USE_PAD; ++ val |= PHY_REFCLK_SSP_EN; + writel(val, pcie->parf + PCIE20_PARF_PHY_REFCLK); + + ret = reset_control_deassert(res->phy_reset); +diff --git a/drivers/pci/hotplug/acpiphp_glue.c b/drivers/pci/hotplug/acpiphp_glue.c +index b3869951c0eb7..6e60b4b1bf53b 100644 +--- a/drivers/pci/hotplug/acpiphp_glue.c ++++ b/drivers/pci/hotplug/acpiphp_glue.c +@@ -122,13 +122,21 @@ static struct acpiphp_context *acpiphp_grab_context(struct acpi_device *adev) + struct acpiphp_context *context; + + acpi_lock_hp_context(); ++ + context = acpiphp_get_context(adev); +- if (!context || context->func.parent->is_going_away) { +- 
acpi_unlock_hp_context(); +- return NULL; ++ if (!context) ++ goto unlock; ++ ++ if (context->func.parent->is_going_away) { ++ acpiphp_put_context(context); ++ context = NULL; ++ goto unlock; + } ++ + get_bridge(context->func.parent); + acpiphp_put_context(context); ++ ++unlock: + acpi_unlock_hp_context(); + return context; + } +diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c +index 5622603d96d4e..136d25acff567 100644 +--- a/drivers/pci/quirks.c ++++ b/drivers/pci/quirks.c +@@ -5207,7 +5207,8 @@ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SERVERWORKS, 0x0422, quirk_no_ext_tags); + */ + static void quirk_amd_harvest_no_ats(struct pci_dev *pdev) + { +- if (pdev->device == 0x7340 && pdev->revision != 0xc5) ++ if ((pdev->device == 0x7312 && pdev->revision != 0x00) || ++ (pdev->device == 0x7340 && pdev->revision != 0xc5)) + return; + + pci_info(pdev, "disabling ATS\n"); +@@ -5218,6 +5219,8 @@ static void quirk_amd_harvest_no_ats(struct pci_dev *pdev) + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x98e4, quirk_amd_harvest_no_ats); + /* AMD Iceland dGPU */ + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x6900, quirk_amd_harvest_no_ats); ++/* AMD Navi10 dGPU */ ++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x7312, quirk_amd_harvest_no_ats); + /* AMD Navi14 dGPU */ + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x7340, quirk_amd_harvest_no_ats); + #endif /* CONFIG_PCI_ATS */ +diff --git a/drivers/pinctrl/pinctrl-ingenic.c b/drivers/pinctrl/pinctrl-ingenic.c +index e5dcf77fe43de..fdfe549794f30 100644 +--- a/drivers/pinctrl/pinctrl-ingenic.c ++++ b/drivers/pinctrl/pinctrl-ingenic.c +@@ -1810,9 +1810,9 @@ static void ingenic_gpio_irq_ack(struct irq_data *irqd) + */ + high = ingenic_gpio_get_value(jzgc, irq); + if (high) +- irq_set_type(jzgc, irq, IRQ_TYPE_EDGE_FALLING); ++ irq_set_type(jzgc, irq, IRQ_TYPE_LEVEL_LOW); + else +- irq_set_type(jzgc, irq, IRQ_TYPE_EDGE_RISING); ++ irq_set_type(jzgc, irq, IRQ_TYPE_LEVEL_HIGH); + } + + if (jzgc->jzpc->info->version >= ID_JZ4760) +@@ 
-1848,7 +1848,7 @@ static int ingenic_gpio_irq_set_type(struct irq_data *irqd, unsigned int type) + */ + bool high = ingenic_gpio_get_value(jzgc, irqd->hwirq); + +- type = high ? IRQ_TYPE_EDGE_FALLING : IRQ_TYPE_EDGE_RISING; ++ type = high ? IRQ_TYPE_LEVEL_LOW : IRQ_TYPE_LEVEL_HIGH; + } + + irq_set_type(jzgc, irqd->hwirq, type); +@@ -1955,7 +1955,8 @@ static int ingenic_gpio_get_direction(struct gpio_chip *gc, unsigned int offset) + unsigned int pin = gc->base + offset; + + if (jzpc->info->version >= ID_JZ4760) { +- if (ingenic_get_pin_config(jzpc, pin, JZ4760_GPIO_PAT1)) ++ if (ingenic_get_pin_config(jzpc, pin, JZ4760_GPIO_INT) || ++ ingenic_get_pin_config(jzpc, pin, JZ4760_GPIO_PAT1)) + return GPIO_LINE_DIRECTION_IN; + return GPIO_LINE_DIRECTION_OUT; + } +diff --git a/drivers/platform/chrome/cros_ec_ishtp.c b/drivers/platform/chrome/cros_ec_ishtp.c +index 93a71e93a2f15..41d60af618c9d 100644 +--- a/drivers/platform/chrome/cros_ec_ishtp.c ++++ b/drivers/platform/chrome/cros_ec_ishtp.c +@@ -660,8 +660,10 @@ static int cros_ec_ishtp_probe(struct ishtp_cl_device *cl_device) + + /* Register croc_ec_dev mfd */ + rv = cros_ec_dev_init(client_data); +- if (rv) ++ if (rv) { ++ down_write(&init_lock); + goto end_cros_ec_dev_init_error; ++ } + + return 0; + +diff --git a/drivers/pwm/pwm-bcm-iproc.c b/drivers/pwm/pwm-bcm-iproc.c +index 1f829edd8ee70..d392a828fc493 100644 +--- a/drivers/pwm/pwm-bcm-iproc.c ++++ b/drivers/pwm/pwm-bcm-iproc.c +@@ -85,8 +85,6 @@ static void iproc_pwmc_get_state(struct pwm_chip *chip, struct pwm_device *pwm, + u64 tmp, multi, rate; + u32 value, prescale; + +- rate = clk_get_rate(ip->clk); +- + value = readl(ip->base + IPROC_PWM_CTRL_OFFSET); + + if (value & BIT(IPROC_PWM_CTRL_EN_SHIFT(pwm->hwpwm))) +@@ -99,6 +97,13 @@ static void iproc_pwmc_get_state(struct pwm_chip *chip, struct pwm_device *pwm, + else + state->polarity = PWM_POLARITY_INVERSED; + ++ rate = clk_get_rate(ip->clk); ++ if (rate == 0) { ++ state->period = 0; ++ state->duty_cycle = 0; 
++ return; ++ } ++ + value = readl(ip->base + IPROC_PWM_PRESCALE_OFFSET); + prescale = value >> IPROC_PWM_PRESCALE_SHIFT(pwm->hwpwm); + prescale &= IPROC_PWM_PRESCALE_MAX; +diff --git a/drivers/remoteproc/qcom_q6v5.c b/drivers/remoteproc/qcom_q6v5.c +index 111a442c993c4..fd6fd36268d93 100644 +--- a/drivers/remoteproc/qcom_q6v5.c ++++ b/drivers/remoteproc/qcom_q6v5.c +@@ -153,6 +153,8 @@ int qcom_q6v5_request_stop(struct qcom_q6v5 *q6v5) + { + int ret; + ++ q6v5->running = false; ++ + qcom_smem_state_update_bits(q6v5->state, + BIT(q6v5->stop_bit), BIT(q6v5->stop_bit)); + +diff --git a/drivers/remoteproc/qcom_q6v5_mss.c b/drivers/remoteproc/qcom_q6v5_mss.c +index 629abcee2c1d5..dc95cad40bd58 100644 +--- a/drivers/remoteproc/qcom_q6v5_mss.c ++++ b/drivers/remoteproc/qcom_q6v5_mss.c +@@ -408,6 +408,12 @@ static int q6v5_load(struct rproc *rproc, const struct firmware *fw) + { + struct q6v5 *qproc = rproc->priv; + ++ /* MBA is restricted to a maximum size of 1M */ ++ if (fw->size > qproc->mba_size || fw->size > SZ_1M) { ++ dev_err(qproc->dev, "MBA firmware load failed\n"); ++ return -EINVAL; ++ } ++ + memcpy(qproc->mba_region, fw->data, fw->size); + + return 0; +@@ -1139,15 +1145,14 @@ static int q6v5_mpss_load(struct q6v5 *qproc) + } else if (phdr->p_filesz) { + /* Replace "xxx.xxx" with "xxx.bxx" */ + sprintf(fw_name + fw_name_len - 3, "b%02d", i); +- ret = request_firmware(&seg_fw, fw_name, qproc->dev); ++ ret = request_firmware_into_buf(&seg_fw, fw_name, qproc->dev, ++ ptr, phdr->p_filesz); + if (ret) { + dev_err(qproc->dev, "failed to load %s\n", fw_name); + iounmap(ptr); + goto release_firmware; + } + +- memcpy(ptr, seg_fw->data, seg_fw->size); +- + release_firmware(seg_fw); + } + +diff --git a/drivers/rtc/rtc-cpcap.c b/drivers/rtc/rtc-cpcap.c +index a603f1f211250..800667d73a6fb 100644 +--- a/drivers/rtc/rtc-cpcap.c ++++ b/drivers/rtc/rtc-cpcap.c +@@ -261,7 +261,7 @@ static int cpcap_rtc_probe(struct platform_device *pdev) + return PTR_ERR(rtc->rtc_dev); + + 
rtc->rtc_dev->ops = &cpcap_rtc_ops; +- rtc->rtc_dev->range_max = (1 << 14) * SECS_PER_DAY - 1; ++ rtc->rtc_dev->range_max = (timeu64_t) (DAY_MASK + 1) * SECS_PER_DAY - 1; + + err = cpcap_get_vendor(dev, rtc->regmap, &rtc->vendor); + if (err) +diff --git a/drivers/rtc/rtc-pl031.c b/drivers/rtc/rtc-pl031.c +index 40d7450a1ce49..c6b89273feba8 100644 +--- a/drivers/rtc/rtc-pl031.c ++++ b/drivers/rtc/rtc-pl031.c +@@ -275,6 +275,7 @@ static int pl031_set_alarm(struct device *dev, struct rtc_wkalrm *alarm) + struct pl031_local *ldata = dev_get_drvdata(dev); + + writel(rtc_tm_to_time64(&alarm->time), ldata->base + RTC_MR); ++ pl031_alarm_irq_enable(dev, alarm->enabled); + + return 0; + } +diff --git a/drivers/scsi/lpfc/lpfc_nvmet.c b/drivers/scsi/lpfc/lpfc_nvmet.c +index 565419bf8d74a..40b2df6e304ad 100644 +--- a/drivers/scsi/lpfc/lpfc_nvmet.c ++++ b/drivers/scsi/lpfc/lpfc_nvmet.c +@@ -1914,7 +1914,7 @@ lpfc_nvmet_destroy_targetport(struct lpfc_hba *phba) + } + tgtp->tport_unreg_cmp = &tport_unreg_cmp; + nvmet_fc_unregister_targetport(phba->targetport); +- if (!wait_for_completion_timeout(tgtp->tport_unreg_cmp, ++ if (!wait_for_completion_timeout(&tport_unreg_cmp, + msecs_to_jiffies(LPFC_NVMET_WAIT_TMO))) + lpfc_printf_log(phba, KERN_ERR, LOG_NVME, + "6179 Unreg targetport x%px timeout " +diff --git a/drivers/staging/media/rkisp1/rkisp1-isp.c b/drivers/staging/media/rkisp1/rkisp1-isp.c +index fa53f05e37d81..31c5ae2aa29fb 100644 +--- a/drivers/staging/media/rkisp1/rkisp1-isp.c ++++ b/drivers/staging/media/rkisp1/rkisp1-isp.c +@@ -25,7 +25,6 @@ + + #define RKISP1_DIR_SRC BIT(0) + #define RKISP1_DIR_SINK BIT(1) +-#define RKISP1_DIR_SINK_SRC (RKISP1_DIR_SINK | RKISP1_DIR_SRC) + + /* + * NOTE: MIPI controller and input MUX are also configured in this file. 
+@@ -69,84 +68,84 @@ static const struct rkisp1_isp_mbus_info rkisp1_isp_formats[] = { + .mipi_dt = RKISP1_CIF_CSI2_DT_RAW10, + .bayer_pat = RKISP1_RAW_RGGB, + .bus_width = 10, +- .direction = RKISP1_DIR_SINK_SRC, ++ .direction = RKISP1_DIR_SINK | RKISP1_DIR_SRC, + }, { + .mbus_code = MEDIA_BUS_FMT_SBGGR10_1X10, + .fmt_type = RKISP1_FMT_BAYER, + .mipi_dt = RKISP1_CIF_CSI2_DT_RAW10, + .bayer_pat = RKISP1_RAW_BGGR, + .bus_width = 10, +- .direction = RKISP1_DIR_SINK_SRC, ++ .direction = RKISP1_DIR_SINK | RKISP1_DIR_SRC, + }, { + .mbus_code = MEDIA_BUS_FMT_SGBRG10_1X10, + .fmt_type = RKISP1_FMT_BAYER, + .mipi_dt = RKISP1_CIF_CSI2_DT_RAW10, + .bayer_pat = RKISP1_RAW_GBRG, + .bus_width = 10, +- .direction = RKISP1_DIR_SINK_SRC, ++ .direction = RKISP1_DIR_SINK | RKISP1_DIR_SRC, + }, { + .mbus_code = MEDIA_BUS_FMT_SGRBG10_1X10, + .fmt_type = RKISP1_FMT_BAYER, + .mipi_dt = RKISP1_CIF_CSI2_DT_RAW10, + .bayer_pat = RKISP1_RAW_GRBG, + .bus_width = 10, +- .direction = RKISP1_DIR_SINK_SRC, ++ .direction = RKISP1_DIR_SINK | RKISP1_DIR_SRC, + }, { + .mbus_code = MEDIA_BUS_FMT_SRGGB12_1X12, + .fmt_type = RKISP1_FMT_BAYER, + .mipi_dt = RKISP1_CIF_CSI2_DT_RAW12, + .bayer_pat = RKISP1_RAW_RGGB, + .bus_width = 12, +- .direction = RKISP1_DIR_SINK_SRC, ++ .direction = RKISP1_DIR_SINK | RKISP1_DIR_SRC, + }, { + .mbus_code = MEDIA_BUS_FMT_SBGGR12_1X12, + .fmt_type = RKISP1_FMT_BAYER, + .mipi_dt = RKISP1_CIF_CSI2_DT_RAW12, + .bayer_pat = RKISP1_RAW_BGGR, + .bus_width = 12, +- .direction = RKISP1_DIR_SINK_SRC, ++ .direction = RKISP1_DIR_SINK | RKISP1_DIR_SRC, + }, { + .mbus_code = MEDIA_BUS_FMT_SGBRG12_1X12, + .fmt_type = RKISP1_FMT_BAYER, + .mipi_dt = RKISP1_CIF_CSI2_DT_RAW12, + .bayer_pat = RKISP1_RAW_GBRG, + .bus_width = 12, +- .direction = RKISP1_DIR_SINK_SRC, ++ .direction = RKISP1_DIR_SINK | RKISP1_DIR_SRC, + }, { + .mbus_code = MEDIA_BUS_FMT_SGRBG12_1X12, + .fmt_type = RKISP1_FMT_BAYER, + .mipi_dt = RKISP1_CIF_CSI2_DT_RAW12, + .bayer_pat = RKISP1_RAW_GRBG, + .bus_width = 12, +- 
.direction = RKISP1_DIR_SINK_SRC, ++ .direction = RKISP1_DIR_SINK | RKISP1_DIR_SRC, + }, { + .mbus_code = MEDIA_BUS_FMT_SRGGB8_1X8, + .fmt_type = RKISP1_FMT_BAYER, + .mipi_dt = RKISP1_CIF_CSI2_DT_RAW8, + .bayer_pat = RKISP1_RAW_RGGB, + .bus_width = 8, +- .direction = RKISP1_DIR_SINK_SRC, ++ .direction = RKISP1_DIR_SINK | RKISP1_DIR_SRC, + }, { + .mbus_code = MEDIA_BUS_FMT_SBGGR8_1X8, + .fmt_type = RKISP1_FMT_BAYER, + .mipi_dt = RKISP1_CIF_CSI2_DT_RAW8, + .bayer_pat = RKISP1_RAW_BGGR, + .bus_width = 8, +- .direction = RKISP1_DIR_SINK_SRC, ++ .direction = RKISP1_DIR_SINK | RKISP1_DIR_SRC, + }, { + .mbus_code = MEDIA_BUS_FMT_SGBRG8_1X8, + .fmt_type = RKISP1_FMT_BAYER, + .mipi_dt = RKISP1_CIF_CSI2_DT_RAW8, + .bayer_pat = RKISP1_RAW_GBRG, + .bus_width = 8, +- .direction = RKISP1_DIR_SINK_SRC, ++ .direction = RKISP1_DIR_SINK | RKISP1_DIR_SRC, + }, { + .mbus_code = MEDIA_BUS_FMT_SGRBG8_1X8, + .fmt_type = RKISP1_FMT_BAYER, + .mipi_dt = RKISP1_CIF_CSI2_DT_RAW8, + .bayer_pat = RKISP1_RAW_GRBG, + .bus_width = 8, +- .direction = RKISP1_DIR_SINK_SRC, ++ .direction = RKISP1_DIR_SINK | RKISP1_DIR_SRC, + }, { + .mbus_code = MEDIA_BUS_FMT_YUYV8_1X16, + .fmt_type = RKISP1_FMT_YUV, +diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c +index 9ad44a96dfe3a..33f1cca7eaa61 100644 +--- a/drivers/usb/serial/ftdi_sio.c ++++ b/drivers/usb/serial/ftdi_sio.c +@@ -2480,12 +2480,11 @@ static int ftdi_prepare_write_buffer(struct usb_serial_port *port, + #define FTDI_RS_ERR_MASK (FTDI_RS_BI | FTDI_RS_PE | FTDI_RS_FE | FTDI_RS_OE) + + static int ftdi_process_packet(struct usb_serial_port *port, +- struct ftdi_private *priv, char *packet, int len) ++ struct ftdi_private *priv, unsigned char *buf, int len) + { ++ unsigned char status; + int i; +- char status; + char flag; +- char *ch; + + if (len < 2) { + dev_dbg(&port->dev, "malformed packet\n"); +@@ -2495,7 +2494,7 @@ static int ftdi_process_packet(struct usb_serial_port *port, + /* Compare new line status to the old one, 
signal if different/ + N.B. packet may be processed more than once, but differences + are only processed once. */ +- status = packet[0] & FTDI_STATUS_B0_MASK; ++ status = buf[0] & FTDI_STATUS_B0_MASK; + if (status != priv->prev_status) { + char diff_status = status ^ priv->prev_status; + +@@ -2521,13 +2520,12 @@ static int ftdi_process_packet(struct usb_serial_port *port, + } + + /* save if the transmitter is empty or not */ +- if (packet[1] & FTDI_RS_TEMT) ++ if (buf[1] & FTDI_RS_TEMT) + priv->transmit_empty = 1; + else + priv->transmit_empty = 0; + +- len -= 2; +- if (!len) ++ if (len == 2) + return 0; /* status only */ + + /* +@@ -2535,40 +2533,41 @@ static int ftdi_process_packet(struct usb_serial_port *port, + * data payload to avoid over-reporting. + */ + flag = TTY_NORMAL; +- if (packet[1] & FTDI_RS_ERR_MASK) { ++ if (buf[1] & FTDI_RS_ERR_MASK) { + /* Break takes precedence over parity, which takes precedence + * over framing errors */ +- if (packet[1] & FTDI_RS_BI) { ++ if (buf[1] & FTDI_RS_BI) { + flag = TTY_BREAK; + port->icount.brk++; + usb_serial_handle_break(port); +- } else if (packet[1] & FTDI_RS_PE) { ++ } else if (buf[1] & FTDI_RS_PE) { + flag = TTY_PARITY; + port->icount.parity++; +- } else if (packet[1] & FTDI_RS_FE) { ++ } else if (buf[1] & FTDI_RS_FE) { + flag = TTY_FRAME; + port->icount.frame++; + } + /* Overrun is special, not associated with a char */ +- if (packet[1] & FTDI_RS_OE) { ++ if (buf[1] & FTDI_RS_OE) { + port->icount.overrun++; + tty_insert_flip_char(&port->port, 0, TTY_OVERRUN); + } + } + +- port->icount.rx += len; +- ch = packet + 2; ++ port->icount.rx += len - 2; + + if (port->port.console && port->sysrq) { +- for (i = 0; i < len; i++, ch++) { +- if (!usb_serial_handle_sysrq_char(port, *ch)) +- tty_insert_flip_char(&port->port, *ch, flag); ++ for (i = 2; i < len; i++) { ++ if (usb_serial_handle_sysrq_char(port, buf[i])) ++ continue; ++ tty_insert_flip_char(&port->port, buf[i], flag); + } + } else { +- 
tty_insert_flip_string_fixed_flag(&port->port, ch, flag, len); ++ tty_insert_flip_string_fixed_flag(&port->port, buf + 2, flag, ++ len - 2); + } + +- return len; ++ return len - 2; + } + + static void ftdi_process_read_urb(struct urb *urb) +diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.c b/drivers/vdpa/vdpa_sim/vdpa_sim.c +index e2dc8edd680e0..4907c1cfe6671 100644 +--- a/drivers/vdpa/vdpa_sim/vdpa_sim.c ++++ b/drivers/vdpa/vdpa_sim/vdpa_sim.c +@@ -330,6 +330,7 @@ static struct vdpasim *vdpasim_create(void) + + INIT_WORK(&vdpasim->work, vdpasim_work); + spin_lock_init(&vdpasim->lock); ++ spin_lock_init(&vdpasim->iommu_lock); + + dev = &vdpasim->vdpa.dev; + dev->coherent_dma_mask = DMA_BIT_MASK(64); +@@ -520,7 +521,7 @@ static void vdpasim_get_config(struct vdpa_device *vdpa, unsigned int offset, + struct vdpasim *vdpasim = vdpa_to_sim(vdpa); + + if (offset + len < sizeof(struct virtio_net_config)) +- memcpy(buf, &vdpasim->config + offset, len); ++ memcpy(buf, (u8 *)&vdpasim->config + offset, len); + } + + static void vdpasim_set_config(struct vdpa_device *vdpa, unsigned int offset, +diff --git a/drivers/watchdog/f71808e_wdt.c b/drivers/watchdog/f71808e_wdt.c +index a3c44d75d80eb..26bf366aebc23 100644 +--- a/drivers/watchdog/f71808e_wdt.c ++++ b/drivers/watchdog/f71808e_wdt.c +@@ -690,9 +690,9 @@ static int __init watchdog_init(int sioaddr) + * into the module have been registered yet. + */ + watchdog.sioaddr = sioaddr; +- watchdog.ident.options = WDIOC_SETTIMEOUT +- | WDIOF_MAGICCLOSE +- | WDIOF_KEEPALIVEPING; ++ watchdog.ident.options = WDIOF_MAGICCLOSE ++ | WDIOF_KEEPALIVEPING ++ | WDIOF_CARDRESET; + + snprintf(watchdog.ident.identity, + sizeof(watchdog.ident.identity), "%s watchdog", +@@ -706,6 +706,13 @@ static int __init watchdog_init(int sioaddr) + wdt_conf = superio_inb(sioaddr, F71808FG_REG_WDT_CONF); + watchdog.caused_reboot = wdt_conf & BIT(F71808FG_FLAG_WDTMOUT_STS); + ++ /* ++ * We don't want WDTMOUT_STS to stick around till regular reboot. 
++ * Write 1 to the bit to clear it to zero. ++ */ ++ superio_outb(sioaddr, F71808FG_REG_WDT_CONF, ++ wdt_conf | BIT(F71808FG_FLAG_WDTMOUT_STS)); ++ + superio_exit(sioaddr); + + err = watchdog_set_timeout(timeout); +diff --git a/drivers/watchdog/rti_wdt.c b/drivers/watchdog/rti_wdt.c +index d456dd72d99a0..c904496fff65e 100644 +--- a/drivers/watchdog/rti_wdt.c ++++ b/drivers/watchdog/rti_wdt.c +@@ -211,6 +211,7 @@ static int rti_wdt_probe(struct platform_device *pdev) + + err_iomap: + pm_runtime_put_sync(&pdev->dev); ++ pm_runtime_disable(&pdev->dev); + + return ret; + } +@@ -221,6 +222,7 @@ static int rti_wdt_remove(struct platform_device *pdev) + + watchdog_unregister_device(&wdt->wdd); + pm_runtime_put(&pdev->dev); ++ pm_runtime_disable(&pdev->dev); + + return 0; + } +diff --git a/drivers/watchdog/watchdog_dev.c b/drivers/watchdog/watchdog_dev.c +index 7e4cd34a8c20e..b535f5fa279b9 100644 +--- a/drivers/watchdog/watchdog_dev.c ++++ b/drivers/watchdog/watchdog_dev.c +@@ -994,6 +994,15 @@ static int watchdog_cdev_register(struct watchdog_device *wdd) + if (IS_ERR_OR_NULL(watchdog_kworker)) + return -ENODEV; + ++ device_initialize(&wd_data->dev); ++ wd_data->dev.devt = MKDEV(MAJOR(watchdog_devt), wdd->id); ++ wd_data->dev.class = &watchdog_class; ++ wd_data->dev.parent = wdd->parent; ++ wd_data->dev.groups = wdd->groups; ++ wd_data->dev.release = watchdog_core_data_release; ++ dev_set_drvdata(&wd_data->dev, wdd); ++ dev_set_name(&wd_data->dev, "watchdog%d", wdd->id); ++ + kthread_init_work(&wd_data->work, watchdog_ping_work); + hrtimer_init(&wd_data->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD); + wd_data->timer.function = watchdog_timer_expired; +@@ -1014,15 +1023,6 @@ static int watchdog_cdev_register(struct watchdog_device *wdd) + } + } + +- device_initialize(&wd_data->dev); +- wd_data->dev.devt = MKDEV(MAJOR(watchdog_devt), wdd->id); +- wd_data->dev.class = &watchdog_class; +- wd_data->dev.parent = wdd->parent; +- wd_data->dev.groups = wdd->groups; +- 
wd_data->dev.release = watchdog_core_data_release; +- dev_set_drvdata(&wd_data->dev, wdd); +- dev_set_name(&wd_data->dev, "watchdog%d", wdd->id); +- + /* Fill in the data structures */ + cdev_init(&wd_data->cdev, &watchdog_fops); + +diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h +index 68bd89e3d4f09..562c1d61bb8b5 100644 +--- a/fs/btrfs/ctree.h ++++ b/fs/btrfs/ctree.h +@@ -1038,8 +1038,10 @@ struct btrfs_root { + wait_queue_head_t log_writer_wait; + wait_queue_head_t log_commit_wait[2]; + struct list_head log_ctxs[2]; ++ /* Used only for log trees of subvolumes, not for the log root tree */ + atomic_t log_writers; + atomic_t log_commit[2]; ++ /* Used only for log trees of subvolumes, not for the log root tree */ + atomic_t log_batch; + int log_transid; + /* No matter the commit succeeds or not*/ +@@ -3196,7 +3198,7 @@ do { \ + /* Report first abort since mount */ \ + if (!test_and_set_bit(BTRFS_FS_STATE_TRANS_ABORTED, \ + &((trans)->fs_info->fs_state))) { \ +- if ((errno) != -EIO) { \ ++ if ((errno) != -EIO && (errno) != -EROFS) { \ + WARN(1, KERN_DEBUG \ + "BTRFS: Transaction aborted (error %d)\n", \ + (errno)); \ +diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c +index f00e64fee5ddb..f35be66413f95 100644 +--- a/fs/btrfs/disk-io.c ++++ b/fs/btrfs/disk-io.c +@@ -1432,9 +1432,16 @@ static int btrfs_init_fs_root(struct btrfs_root *root) + spin_lock_init(&root->ino_cache_lock); + init_waitqueue_head(&root->ino_cache_wait); + +- ret = get_anon_bdev(&root->anon_dev); +- if (ret) +- goto fail; ++ /* ++ * Don't assign anonymous block device to roots that are not exposed to ++ * userspace, the id pool is limited to 1M ++ */ ++ if (is_fstree(root->root_key.objectid) && ++ btrfs_root_refs(&root->root_item) > 0) { ++ ret = get_anon_bdev(&root->anon_dev); ++ if (ret) ++ goto fail; ++ } + + mutex_lock(&root->objectid_mutex); + ret = btrfs_find_highest_objectid(root, +diff --git a/fs/btrfs/extent-io-tree.h b/fs/btrfs/extent-io-tree.h +index b6561455b3c42..8bbb734f3f514 
100644 +--- a/fs/btrfs/extent-io-tree.h ++++ b/fs/btrfs/extent-io-tree.h +@@ -34,6 +34,8 @@ struct io_failure_record; + */ + #define CHUNK_ALLOCATED EXTENT_DIRTY + #define CHUNK_TRIMMED EXTENT_DEFRAG ++#define CHUNK_STATE_MASK (CHUNK_ALLOCATED | \ ++ CHUNK_TRIMMED) + + enum { + IO_TREE_FS_PINNED_EXTENTS, +diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c +index 7c86188b33d43..1409bbbdeb664 100644 +--- a/fs/btrfs/extent-tree.c ++++ b/fs/btrfs/extent-tree.c +@@ -33,6 +33,7 @@ + #include "delalloc-space.h" + #include "block-group.h" + #include "discard.h" ++#include "rcu-string.h" + + #undef SCRAMBLE_DELAYED_REFS + +@@ -5313,7 +5314,14 @@ int btrfs_drop_snapshot(struct btrfs_root *root, int update_ref, int for_reloc) + goto out; + } + +- trans = btrfs_start_transaction(tree_root, 0); ++ /* ++ * Use join to avoid potential EINTR from transaction start. See ++ * wait_reserve_ticket and the whole reservation callchain. ++ */ ++ if (for_reloc) ++ trans = btrfs_join_transaction(tree_root); ++ else ++ trans = btrfs_start_transaction(tree_root, 0); + if (IS_ERR(trans)) { + err = PTR_ERR(trans); + goto out_free; +@@ -5678,6 +5686,19 @@ static int btrfs_trim_free_extents(struct btrfs_device *device, u64 *trimmed) + &start, &end, + CHUNK_TRIMMED | CHUNK_ALLOCATED); + ++ /* Check if there are any CHUNK_* bits left */ ++ if (start > device->total_bytes) { ++ WARN_ON(IS_ENABLED(CONFIG_BTRFS_DEBUG)); ++ btrfs_warn_in_rcu(fs_info, ++"ignoring attempt to trim beyond device size: offset %llu length %llu device %s device size %llu", ++ start, end - start + 1, ++ rcu_str_deref(device->name), ++ device->total_bytes); ++ mutex_unlock(&fs_info->chunk_mutex); ++ ret = 0; ++ break; ++ } ++ + /* Ensure we skip the reserved area in the first 1M */ + start = max_t(u64, start, SZ_1M); + +diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c +index 9d6d646e1eb08..e95aa02ad6396 100644 +--- a/fs/btrfs/extent_io.c ++++ b/fs/btrfs/extent_io.c +@@ -4110,7 +4110,7 @@ retry: + if 
(!test_bit(BTRFS_FS_STATE_ERROR, &fs_info->fs_state)) { + ret = flush_write_bio(&epd); + } else { +- ret = -EUCLEAN; ++ ret = -EROFS; + end_write_bio(&epd, ret); + } + return ret; +@@ -4504,15 +4504,25 @@ int try_release_extent_mapping(struct page *page, gfp_t mask) + free_extent_map(em); + break; + } +- if (!test_range_bit(tree, em->start, +- extent_map_end(em) - 1, +- EXTENT_LOCKED, 0, NULL)) { ++ if (test_range_bit(tree, em->start, ++ extent_map_end(em) - 1, ++ EXTENT_LOCKED, 0, NULL)) ++ goto next; ++ /* ++ * If it's not in the list of modified extents, used ++ * by a fast fsync, we can remove it. If it's being ++ * logged we can safely remove it since fsync took an ++ * extra reference on the em. ++ */ ++ if (list_empty(&em->list) || ++ test_bit(EXTENT_FLAG_LOGGING, &em->flags)) { + set_bit(BTRFS_INODE_NEEDS_FULL_SYNC, + &btrfs_inode->runtime_flags); + remove_extent_mapping(map, em); + /* once for the rb tree */ + free_extent_map(em); + } ++next: + start = extent_map_end(em); + write_unlock(&map->lock); + +diff --git a/fs/btrfs/free-space-cache.c b/fs/btrfs/free-space-cache.c +index 3613da065a737..e4f495d3cb894 100644 +--- a/fs/btrfs/free-space-cache.c ++++ b/fs/btrfs/free-space-cache.c +@@ -2286,7 +2286,7 @@ out: + static bool try_merge_free_space(struct btrfs_free_space_ctl *ctl, + struct btrfs_free_space *info, bool update_stat) + { +- struct btrfs_free_space *left_info; ++ struct btrfs_free_space *left_info = NULL; + struct btrfs_free_space *right_info; + bool merged = false; + u64 offset = info->offset; +@@ -2302,7 +2302,7 @@ static bool try_merge_free_space(struct btrfs_free_space_ctl *ctl, + if (right_info && rb_prev(&right_info->offset_index)) + left_info = rb_entry(rb_prev(&right_info->offset_index), + struct btrfs_free_space, offset_index); +- else ++ else if (!right_info) + left_info = tree_search_offset(ctl, offset - 1, 0, 0); + + /* See try_merge_free_space() comment. 
*/ +diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c +index 6cb3dc2748974..2ccfa424a892a 100644 +--- a/fs/btrfs/inode.c ++++ b/fs/btrfs/inode.c +@@ -650,12 +650,18 @@ cont: + page_error_op | + PAGE_END_WRITEBACK); + +- for (i = 0; i < nr_pages; i++) { +- WARN_ON(pages[i]->mapping); +- put_page(pages[i]); ++ /* ++ * Ensure we only free the compressed pages if we have ++ * them allocated, as we can still reach here with ++ * inode_need_compress() == false. ++ */ ++ if (pages) { ++ for (i = 0; i < nr_pages; i++) { ++ WARN_ON(pages[i]->mapping); ++ put_page(pages[i]); ++ } ++ kfree(pages); + } +- kfree(pages); +- + return 0; + } + } +@@ -4049,6 +4055,8 @@ int btrfs_delete_subvolume(struct inode *dir, struct dentry *dentry) + } + } + ++ free_anon_bdev(dest->anon_dev); ++ dest->anon_dev = 0; + out_end_trans: + trans->block_rsv = NULL; + trans->bytes_reserved = 0; +@@ -6632,7 +6640,7 @@ struct extent_map *btrfs_get_extent(struct btrfs_inode *inode, + extent_type == BTRFS_FILE_EXTENT_PREALLOC) { + /* Only regular file could have regular/prealloc extent */ + if (!S_ISREG(inode->vfs_inode.i_mode)) { +- ret = -EUCLEAN; ++ err = -EUCLEAN; + btrfs_crit(fs_info, + "regular/prealloc extent found for non-regular inode %llu", + btrfs_ino(inode)); +diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c +index 40b729dce91cd..92289adfee95a 100644 +--- a/fs/btrfs/ioctl.c ++++ b/fs/btrfs/ioctl.c +@@ -164,8 +164,11 @@ static int btrfs_ioctl_getflags(struct file *file, void __user *arg) + return 0; + } + +-/* Check if @flags are a supported and valid set of FS_*_FL flags */ +-static int check_fsflags(unsigned int flags) ++/* ++ * Check if @flags are a supported and valid set of FS_*_FL flags and that ++ * the old and new flags are not conflicting ++ */ ++static int check_fsflags(unsigned int old_flags, unsigned int flags) + { + if (flags & ~(FS_IMMUTABLE_FL | FS_APPEND_FL | \ + FS_NOATIME_FL | FS_NODUMP_FL | \ +@@ -174,9 +177,19 @@ static int check_fsflags(unsigned int flags) + FS_NOCOW_FL)) + 
return -EOPNOTSUPP; + ++ /* COMPR and NOCOMP on new/old are valid */ + if ((flags & FS_NOCOMP_FL) && (flags & FS_COMPR_FL)) + return -EINVAL; + ++ if ((flags & FS_COMPR_FL) && (flags & FS_NOCOW_FL)) ++ return -EINVAL; ++ ++ /* NOCOW and compression options are mutually exclusive */ ++ if ((old_flags & FS_NOCOW_FL) && (flags & (FS_COMPR_FL | FS_NOCOMP_FL))) ++ return -EINVAL; ++ if ((flags & FS_NOCOW_FL) && (old_flags & (FS_COMPR_FL | FS_NOCOMP_FL))) ++ return -EINVAL; ++ + return 0; + } + +@@ -190,7 +203,7 @@ static int btrfs_ioctl_setflags(struct file *file, void __user *arg) + unsigned int fsflags, old_fsflags; + int ret; + const char *comp = NULL; +- u32 binode_flags = binode->flags; ++ u32 binode_flags; + + if (!inode_owner_or_capable(inode)) + return -EPERM; +@@ -201,22 +214,23 @@ static int btrfs_ioctl_setflags(struct file *file, void __user *arg) + if (copy_from_user(&fsflags, arg, sizeof(fsflags))) + return -EFAULT; + +- ret = check_fsflags(fsflags); +- if (ret) +- return ret; +- + ret = mnt_want_write_file(file); + if (ret) + return ret; + + inode_lock(inode); +- + fsflags = btrfs_mask_fsflags_for_type(inode, fsflags); + old_fsflags = btrfs_inode_flags_to_fsflags(binode->flags); ++ + ret = vfs_ioc_setflags_prepare(inode, old_fsflags, fsflags); + if (ret) + goto out_unlock; + ++ ret = check_fsflags(old_fsflags, fsflags); ++ if (ret) ++ goto out_unlock; ++ ++ binode_flags = binode->flags; + if (fsflags & FS_SYNC_FL) + binode_flags |= BTRFS_INODE_SYNC; + else +@@ -3197,11 +3211,15 @@ static long btrfs_ioctl_fs_info(struct btrfs_fs_info *fs_info, + struct btrfs_ioctl_fs_info_args *fi_args; + struct btrfs_device *device; + struct btrfs_fs_devices *fs_devices = fs_info->fs_devices; ++ u64 flags_in; + int ret = 0; + +- fi_args = kzalloc(sizeof(*fi_args), GFP_KERNEL); +- if (!fi_args) +- return -ENOMEM; ++ fi_args = memdup_user(arg, sizeof(*fi_args)); ++ if (IS_ERR(fi_args)) ++ return PTR_ERR(fi_args); ++ ++ flags_in = fi_args->flags; ++ memset(fi_args, 0, 
sizeof(*fi_args)); + + rcu_read_lock(); + fi_args->num_devices = fs_devices->num_devices; +@@ -3217,6 +3235,12 @@ static long btrfs_ioctl_fs_info(struct btrfs_fs_info *fs_info, + fi_args->sectorsize = fs_info->sectorsize; + fi_args->clone_alignment = fs_info->sectorsize; + ++ if (flags_in & BTRFS_FS_INFO_FLAG_CSUM_INFO) { ++ fi_args->csum_type = btrfs_super_csum_type(fs_info->super_copy); ++ fi_args->csum_size = btrfs_super_csum_size(fs_info->super_copy); ++ fi_args->flags |= BTRFS_FS_INFO_FLAG_CSUM_INFO; ++ } ++ + if (copy_to_user(arg, fi_args, sizeof(*fi_args))) + ret = -EFAULT; + +diff --git a/fs/btrfs/ref-verify.c b/fs/btrfs/ref-verify.c +index 7887317033c98..452ca955eb75e 100644 +--- a/fs/btrfs/ref-verify.c ++++ b/fs/btrfs/ref-verify.c +@@ -286,6 +286,8 @@ static struct block_entry *add_block_entry(struct btrfs_fs_info *fs_info, + exist_re = insert_root_entry(&exist->roots, re); + if (exist_re) + kfree(re); ++ } else { ++ kfree(re); + } + kfree(be); + return exist; +diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c +index f67d736c27a12..8e9c2142c66a8 100644 +--- a/fs/btrfs/relocation.c ++++ b/fs/btrfs/relocation.c +@@ -2402,12 +2402,20 @@ static noinline_for_stack int merge_reloc_root(struct reloc_control *rc, + btrfs_unlock_up_safe(path, 0); + } + +- min_reserved = fs_info->nodesize * (BTRFS_MAX_LEVEL - 1) * 2; ++ /* ++ * In merge_reloc_root(), we modify the upper level pointer to swap the ++ * tree blocks between reloc tree and subvolume tree. Thus for tree ++ * block COW, we COW at most from level 1 to root level for each tree. ++ * ++ * Thus the needed metadata size is at most root_level * nodesize, ++ * and * 2 since we have two trees to COW. 
++ */ ++ min_reserved = fs_info->nodesize * btrfs_root_level(root_item) * 2; + memset(&next_key, 0, sizeof(next_key)); + + while (1) { + ret = btrfs_block_rsv_refill(root, rc->block_rsv, min_reserved, +- BTRFS_RESERVE_FLUSH_ALL); ++ BTRFS_RESERVE_FLUSH_LIMIT); + if (ret) { + err = ret; + goto out; +diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c +index 7c50ac5b68762..f2b9c4ec302d3 100644 +--- a/fs/btrfs/scrub.c ++++ b/fs/btrfs/scrub.c +@@ -3761,7 +3761,7 @@ static noinline_for_stack int scrub_supers(struct scrub_ctx *sctx, + struct btrfs_fs_info *fs_info = sctx->fs_info; + + if (test_bit(BTRFS_FS_STATE_ERROR, &fs_info->fs_state)) +- return -EIO; ++ return -EROFS; + + /* Seed devices of a new filesystem has their own generation. */ + if (scrub_dev->fs_devices != fs_info->fs_devices) +diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c +index 7932d8d07cffe..6ca9bc3f51be1 100644 +--- a/fs/btrfs/super.c ++++ b/fs/btrfs/super.c +@@ -440,6 +440,7 @@ int btrfs_parse_options(struct btrfs_fs_info *info, char *options, + char *compress_type; + bool compress_force = false; + enum btrfs_compression_type saved_compress_type; ++ int saved_compress_level; + bool saved_compress_force; + int no_compress = 0; + +@@ -522,6 +523,7 @@ int btrfs_parse_options(struct btrfs_fs_info *info, char *options, + info->compress_type : BTRFS_COMPRESS_NONE; + saved_compress_force = + btrfs_test_opt(info, FORCE_COMPRESS); ++ saved_compress_level = info->compress_level; + if (token == Opt_compress || + token == Opt_compress_force || + strncmp(args[0].from, "zlib", 4) == 0) { +@@ -566,6 +568,8 @@ int btrfs_parse_options(struct btrfs_fs_info *info, char *options, + no_compress = 0; + } else if (strncmp(args[0].from, "no", 2) == 0) { + compress_type = "no"; ++ info->compress_level = 0; ++ info->compress_type = 0; + btrfs_clear_opt(info->mount_opt, COMPRESS); + btrfs_clear_opt(info->mount_opt, FORCE_COMPRESS); + compress_force = false; +@@ -586,11 +590,11 @@ int btrfs_parse_options(struct btrfs_fs_info 
*info, char *options, + */ + btrfs_clear_opt(info->mount_opt, FORCE_COMPRESS); + } +- if ((btrfs_test_opt(info, COMPRESS) && +- (info->compress_type != saved_compress_type || +- compress_force != saved_compress_force)) || +- (!btrfs_test_opt(info, COMPRESS) && +- no_compress == 1)) { ++ if (no_compress == 1) { ++ btrfs_info(info, "use no compression"); ++ } else if ((info->compress_type != saved_compress_type) || ++ (compress_force != saved_compress_force) || ++ (info->compress_level != saved_compress_level)) { + btrfs_info(info, "%s %s compression, level %d", + (compress_force) ? "force" : "use", + compress_type, info->compress_level); +@@ -1310,6 +1314,7 @@ static int btrfs_show_options(struct seq_file *seq, struct dentry *dentry) + { + struct btrfs_fs_info *info = btrfs_sb(dentry->d_sb); + const char *compress_type; ++ const char *subvol_name; + + if (btrfs_test_opt(info, DEGRADED)) + seq_puts(seq, ",degraded"); +@@ -1396,8 +1401,13 @@ static int btrfs_show_options(struct seq_file *seq, struct dentry *dentry) + seq_puts(seq, ",ref_verify"); + seq_printf(seq, ",subvolid=%llu", + BTRFS_I(d_inode(dentry))->root->root_key.objectid); +- seq_puts(seq, ",subvol="); +- seq_dentry(seq, dentry, " \t\n\\"); ++ subvol_name = btrfs_get_subvol_name_from_objectid(info, ++ BTRFS_I(d_inode(dentry))->root->root_key.objectid); ++ if (!IS_ERR(subvol_name)) { ++ seq_puts(seq, ",subvol="); ++ seq_escape(seq, subvol_name, " \t\n\\"); ++ kfree(subvol_name); ++ } + return 0; + } + +@@ -1885,6 +1895,12 @@ static int btrfs_remount(struct super_block *sb, int *flags, char *data) + set_bit(BTRFS_FS_OPEN, &fs_info->flags); + } + out: ++ /* ++ * We need to set SB_I_VERSION here otherwise it'll get cleared by VFS, ++ * since the absence of the flag means it can be toggled off by remount. 
++ */ ++ *flags |= SB_I_VERSION; ++ + wake_up_process(fs_info->transaction_kthread); + btrfs_remount_cleanup(fs_info, old_opts); + return 0; +@@ -2294,9 +2310,7 @@ static int btrfs_unfreeze(struct super_block *sb) + static int btrfs_show_devname(struct seq_file *m, struct dentry *root) + { + struct btrfs_fs_info *fs_info = btrfs_sb(root->d_sb); +- struct btrfs_fs_devices *cur_devices; + struct btrfs_device *dev, *first_dev = NULL; +- struct list_head *head; + + /* + * Lightweight locking of the devices. We should not need +@@ -2306,18 +2320,13 @@ static int btrfs_show_devname(struct seq_file *m, struct dentry *root) + * least until the rcu_read_unlock. + */ + rcu_read_lock(); +- cur_devices = fs_info->fs_devices; +- while (cur_devices) { +- head = &cur_devices->devices; +- list_for_each_entry_rcu(dev, head, dev_list) { +- if (test_bit(BTRFS_DEV_STATE_MISSING, &dev->dev_state)) +- continue; +- if (!dev->name) +- continue; +- if (!first_dev || dev->devid < first_dev->devid) +- first_dev = dev; +- } +- cur_devices = cur_devices->seed; ++ list_for_each_entry_rcu(dev, &fs_info->fs_devices->devices, dev_list) { ++ if (test_bit(BTRFS_DEV_STATE_MISSING, &dev->dev_state)) ++ continue; ++ if (!dev->name) ++ continue; ++ if (!first_dev || dev->devid < first_dev->devid) ++ first_dev = dev; + } + + if (first_dev) +diff --git a/fs/btrfs/sysfs.c b/fs/btrfs/sysfs.c +index a39bff64ff24e..abc4a8fd6df65 100644 +--- a/fs/btrfs/sysfs.c ++++ b/fs/btrfs/sysfs.c +@@ -1273,7 +1273,9 @@ int btrfs_sysfs_add_devices_dir(struct btrfs_fs_devices *fs_devices, + { + int error = 0; + struct btrfs_device *dev; ++ unsigned int nofs_flag; + ++ nofs_flag = memalloc_nofs_save(); + list_for_each_entry(dev, &fs_devices->devices, dev_list) { + + if (one_device && one_device != dev) +@@ -1301,6 +1303,7 @@ int btrfs_sysfs_add_devices_dir(struct btrfs_fs_devices *fs_devices, + break; + } + } ++ memalloc_nofs_restore(nofs_flag); + + return error; + } +diff --git a/fs/btrfs/transaction.c 
b/fs/btrfs/transaction.c +index 96eb313a50801..7253f7a6a1e33 100644 +--- a/fs/btrfs/transaction.c ++++ b/fs/btrfs/transaction.c +@@ -937,7 +937,10 @@ static int __btrfs_end_transaction(struct btrfs_trans_handle *trans, + if (TRANS_ABORTED(trans) || + test_bit(BTRFS_FS_STATE_ERROR, &info->fs_state)) { + wake_up_process(info->transaction_kthread); +- err = -EIO; ++ if (TRANS_ABORTED(trans)) ++ err = trans->aborted; ++ else ++ err = -EROFS; + } + + kmem_cache_free(btrfs_trans_handle_cachep, trans); +diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c +index bdfc421494481..3795fede53ae0 100644 +--- a/fs/btrfs/tree-log.c ++++ b/fs/btrfs/tree-log.c +@@ -3125,29 +3125,17 @@ int btrfs_sync_log(struct btrfs_trans_handle *trans, + btrfs_init_log_ctx(&root_log_ctx, NULL); + + mutex_lock(&log_root_tree->log_mutex); +- atomic_inc(&log_root_tree->log_batch); +- atomic_inc(&log_root_tree->log_writers); + + index2 = log_root_tree->log_transid % 2; + list_add_tail(&root_log_ctx.list, &log_root_tree->log_ctxs[index2]); + root_log_ctx.log_transid = log_root_tree->log_transid; + +- mutex_unlock(&log_root_tree->log_mutex); +- +- mutex_lock(&log_root_tree->log_mutex); +- + /* + * Now we are safe to update the log_root_tree because we're under the + * log_mutex, and we're a current writer so we're holding the commit + * open until we drop the log_mutex. 
+ */ + ret = update_log_root(trans, log, &new_root_item); +- +- if (atomic_dec_and_test(&log_root_tree->log_writers)) { +- /* atomic_dec_and_test implies a barrier */ +- cond_wake_up_nomb(&log_root_tree->log_writer_wait); +- } +- + if (ret) { + if (!list_empty(&root_log_ctx.list)) + list_del_init(&root_log_ctx.list); +@@ -3193,8 +3181,6 @@ int btrfs_sync_log(struct btrfs_trans_handle *trans, + root_log_ctx.log_transid - 1); + } + +- wait_for_writer(log_root_tree); +- + /* + * now that we've moved on to the tree of log tree roots, + * check the full commit flag again +@@ -4054,11 +4040,8 @@ static noinline int copy_items(struct btrfs_trans_handle *trans, + fs_info->csum_root, + ds + cs, ds + cs + cl - 1, + &ordered_sums, 0); +- if (ret) { +- btrfs_release_path(dst_path); +- kfree(ins_data); +- return ret; +- } ++ if (ret) ++ break; + } + } + } +@@ -4071,7 +4054,6 @@ static noinline int copy_items(struct btrfs_trans_handle *trans, + * we have to do this after the loop above to avoid changing the + * log tree while trying to change the log tree. + */ +- ret = 0; + while (!list_empty(&ordered_sums)) { + struct btrfs_ordered_sum *sums = list_entry(ordered_sums.next, + struct btrfs_ordered_sum, +@@ -5151,14 +5133,13 @@ static int btrfs_log_inode(struct btrfs_trans_handle *trans, + const loff_t end, + struct btrfs_log_ctx *ctx) + { +- struct btrfs_fs_info *fs_info = root->fs_info; + struct btrfs_path *path; + struct btrfs_path *dst_path; + struct btrfs_key min_key; + struct btrfs_key max_key; + struct btrfs_root *log = root->log_root; + int err = 0; +- int ret; ++ int ret = 0; + bool fast_search = false; + u64 ino = btrfs_ino(inode); + struct extent_map_tree *em_tree = &inode->extent_tree; +@@ -5194,15 +5175,19 @@ static int btrfs_log_inode(struct btrfs_trans_handle *trans, + max_key.offset = (u64)-1; + + /* +- * Only run delayed items if we are a dir or a new file. 
+- * Otherwise commit the delayed inode only, which is needed in +- * order for the log replay code to mark inodes for link count +- * fixup (create temporary BTRFS_TREE_LOG_FIXUP_OBJECTID items). ++ * Only run delayed items if we are a directory. We want to make sure ++ * all directory indexes hit the fs/subvolume tree so we can find them ++ * and figure out which index ranges have to be logged. ++ * ++ * Otherwise commit the delayed inode only if the full sync flag is set, ++ * as we want to make sure an up to date version is in the subvolume ++ * tree so copy_inode_items_to_log() / copy_items() can find it and copy ++ * it to the log tree. For a non full sync, we always log the inode item ++ * based on the in-memory struct btrfs_inode which is always up to date. + */ +- if (S_ISDIR(inode->vfs_inode.i_mode) || +- inode->generation > fs_info->last_trans_committed) ++ if (S_ISDIR(inode->vfs_inode.i_mode)) + ret = btrfs_commit_inode_delayed_items(trans, inode); +- else ++ else if (test_bit(BTRFS_INODE_NEEDS_FULL_SYNC, &inode->runtime_flags)) + ret = btrfs_commit_inode_delayed_inode(inode); + + if (ret) { +diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c +index 45cf455f906dd..ac80297bcafe7 100644 +--- a/fs/btrfs/volumes.c ++++ b/fs/btrfs/volumes.c +@@ -245,7 +245,9 @@ static int __btrfs_map_block(struct btrfs_fs_info *fs_info, + * + * global::fs_devs - add, remove, updates to the global list + * +- * does not protect: manipulation of the fs_devices::devices list! ++ * does not protect: manipulation of the fs_devices::devices list in general ++ * but in mount context it could be used to exclude list modifications by eg. 
++ * scan ioctl + * + * btrfs_device::name - renames (write side), read is RCU + * +@@ -258,6 +260,9 @@ static int __btrfs_map_block(struct btrfs_fs_info *fs_info, + * may be used to exclude some operations from running concurrently without any + * modifications to the list (see write_all_supers) + * ++ * Is not required at mount and close times, because our device list is ++ * protected by the uuid_mutex at that point. ++ * + * balance_mutex + * ------------- + * protects balance structures (status, state) and context accessed from +@@ -603,6 +608,11 @@ static int btrfs_free_stale_devices(const char *path, + return ret; + } + ++/* ++ * This is only used on mount, and we are protected from competing things ++ * messing with our fs_devices by the uuid_mutex, thus we do not need the ++ * fs_devices->device_list_mutex here. ++ */ + static int btrfs_open_one_device(struct btrfs_fs_devices *fs_devices, + struct btrfs_device *device, fmode_t flags, + void *holder) +@@ -1232,8 +1242,14 @@ int btrfs_open_devices(struct btrfs_fs_devices *fs_devices, + int ret; + + lockdep_assert_held(&uuid_mutex); ++ /* ++ * The device_list_mutex cannot be taken here in case opening the ++ * underlying device takes further locks like bd_mutex. 
++ * ++ * We also don't need the lock here as this is called during mount and ++ * exclusion is provided by uuid_mutex ++ */ + +- mutex_lock(&fs_devices->device_list_mutex); + if (fs_devices->opened) { + fs_devices->opened++; + ret = 0; +@@ -1241,7 +1257,6 @@ int btrfs_open_devices(struct btrfs_fs_devices *fs_devices, + list_sort(NULL, &fs_devices->devices, devid_cmp); + ret = open_fs_devices(fs_devices, flags, holder); + } +- mutex_unlock(&fs_devices->device_list_mutex); + + return ret; + } +@@ -3235,7 +3250,7 @@ static int del_balance_item(struct btrfs_fs_info *fs_info) + if (!path) + return -ENOMEM; + +- trans = btrfs_start_transaction(root, 0); ++ trans = btrfs_start_transaction_fallback_global_rsv(root, 0); + if (IS_ERR(trans)) { + btrfs_free_path(path); + return PTR_ERR(trans); +@@ -4139,7 +4154,22 @@ int btrfs_balance(struct btrfs_fs_info *fs_info, + mutex_lock(&fs_info->balance_mutex); + if (ret == -ECANCELED && atomic_read(&fs_info->balance_pause_req)) + btrfs_info(fs_info, "balance: paused"); +- else if (ret == -ECANCELED && atomic_read(&fs_info->balance_cancel_req)) ++ /* ++ * Balance can be canceled by: ++ * ++ * - Regular cancel request ++ * Then ret == -ECANCELED and balance_cancel_req > 0 ++ * ++ * - Fatal signal to "btrfs" process ++ * Either the signal caught by wait_reserve_ticket() and callers ++ * got -EINTR, or caught by btrfs_should_cancel_balance() and ++ * got -ECANCELED. ++ * Either way, in this case balance_cancel_req = 0, and ++ * ret == -EINTR or ret == -ECANCELED. ++ * ++ * So here we only check the return value to catch canceled balance. 
++ */ ++ else if (ret == -ECANCELED || ret == -EINTR) + btrfs_info(fs_info, "balance: canceled"); + else + btrfs_info(fs_info, "balance: ended with status: %d", ret); +@@ -4694,6 +4724,10 @@ again: + } + + mutex_lock(&fs_info->chunk_mutex); ++ /* Clear all state bits beyond the shrunk device size */ ++ clear_extent_bits(&device->alloc_state, new_size, (u64)-1, ++ CHUNK_STATE_MASK); ++ + btrfs_device_set_disk_total_bytes(device, new_size); + if (list_empty(&device->post_commit_list)) + list_add_tail(&device->post_commit_list, +@@ -7053,7 +7087,6 @@ int btrfs_read_chunk_tree(struct btrfs_fs_info *fs_info) + * otherwise we don't need it. + */ + mutex_lock(&uuid_mutex); +- mutex_lock(&fs_info->chunk_mutex); + + /* + * It is possible for mount and umount to race in such a way that +@@ -7098,7 +7131,9 @@ int btrfs_read_chunk_tree(struct btrfs_fs_info *fs_info) + } else if (found_key.type == BTRFS_CHUNK_ITEM_KEY) { + struct btrfs_chunk *chunk; + chunk = btrfs_item_ptr(leaf, slot, struct btrfs_chunk); ++ mutex_lock(&fs_info->chunk_mutex); + ret = read_one_chunk(&found_key, leaf, chunk); ++ mutex_unlock(&fs_info->chunk_mutex); + if (ret) + goto error; + } +@@ -7128,7 +7163,6 @@ int btrfs_read_chunk_tree(struct btrfs_fs_info *fs_info) + } + ret = 0; + error: +- mutex_unlock(&fs_info->chunk_mutex); + mutex_unlock(&uuid_mutex); + + btrfs_free_path(path); +diff --git a/fs/ceph/dir.c b/fs/ceph/dir.c +index 4c4202c93b715..775fa63afdfd8 100644 +--- a/fs/ceph/dir.c ++++ b/fs/ceph/dir.c +@@ -924,6 +924,10 @@ static int ceph_symlink(struct inode *dir, struct dentry *dentry, + req->r_num_caps = 2; + req->r_dentry_drop = CEPH_CAP_FILE_SHARED | CEPH_CAP_AUTH_EXCL; + req->r_dentry_unless = CEPH_CAP_FILE_EXCL; ++ if (as_ctx.pagelist) { ++ req->r_pagelist = as_ctx.pagelist; ++ as_ctx.pagelist = NULL; ++ } + err = ceph_mdsc_do_request(mdsc, dir, req); + if (!err && !req->r_reply_info.head->is_dentry) + err = ceph_handle_notrace_create(dir, dentry); +diff --git a/fs/ceph/mds_client.c 
b/fs/ceph/mds_client.c +index 7c63abf5bea91..95272ae36b058 100644 +--- a/fs/ceph/mds_client.c ++++ b/fs/ceph/mds_client.c +@@ -3270,8 +3270,10 @@ static void handle_session(struct ceph_mds_session *session, + goto bad; + /* version >= 3, feature bits */ + ceph_decode_32_safe(&p, end, len, bad); +- ceph_decode_64_safe(&p, end, features, bad); +- p += len - sizeof(features); ++ if (len) { ++ ceph_decode_64_safe(&p, end, features, bad); ++ p += len - sizeof(features); ++ } + } + + mutex_lock(&mdsc->mutex); +diff --git a/fs/cifs/smb2misc.c b/fs/cifs/smb2misc.c +index 44fca24d993e2..c617091b02bf6 100644 +--- a/fs/cifs/smb2misc.c ++++ b/fs/cifs/smb2misc.c +@@ -508,15 +508,31 @@ cifs_ses_oplock_break(struct work_struct *work) + kfree(lw); + } + ++static void ++smb2_queue_pending_open_break(struct tcon_link *tlink, __u8 *lease_key, ++ __le32 new_lease_state) ++{ ++ struct smb2_lease_break_work *lw; ++ ++ lw = kmalloc(sizeof(struct smb2_lease_break_work), GFP_KERNEL); ++ if (!lw) { ++ cifs_put_tlink(tlink); ++ return; ++ } ++ ++ INIT_WORK(&lw->lease_break, cifs_ses_oplock_break); ++ lw->tlink = tlink; ++ lw->lease_state = new_lease_state; ++ memcpy(lw->lease_key, lease_key, SMB2_LEASE_KEY_SIZE); ++ queue_work(cifsiod_wq, &lw->lease_break); ++} ++ + static bool +-smb2_tcon_has_lease(struct cifs_tcon *tcon, struct smb2_lease_break *rsp, +- struct smb2_lease_break_work *lw) ++smb2_tcon_has_lease(struct cifs_tcon *tcon, struct smb2_lease_break *rsp) + { +- bool found; + __u8 lease_state; + struct list_head *tmp; + struct cifsFileInfo *cfile; +- struct cifs_pending_open *open; + struct cifsInodeInfo *cinode; + int ack_req = le32_to_cpu(rsp->Flags & + SMB2_NOTIFY_BREAK_LEASE_FLAG_ACK_REQUIRED); +@@ -546,22 +562,29 @@ smb2_tcon_has_lease(struct cifs_tcon *tcon, struct smb2_lease_break *rsp, + cfile->oplock_level = lease_state; + + cifs_queue_oplock_break(cfile); +- kfree(lw); + return true; + } + +- found = false; ++ return false; ++} ++ ++static struct cifs_pending_open * 
++smb2_tcon_find_pending_open_lease(struct cifs_tcon *tcon, ++ struct smb2_lease_break *rsp) ++{ ++ __u8 lease_state = le32_to_cpu(rsp->NewLeaseState); ++ int ack_req = le32_to_cpu(rsp->Flags & ++ SMB2_NOTIFY_BREAK_LEASE_FLAG_ACK_REQUIRED); ++ struct cifs_pending_open *open; ++ struct cifs_pending_open *found = NULL; ++ + list_for_each_entry(open, &tcon->pending_opens, olist) { + if (memcmp(open->lease_key, rsp->LeaseKey, + SMB2_LEASE_KEY_SIZE)) + continue; + + if (!found && ack_req) { +- found = true; +- memcpy(lw->lease_key, open->lease_key, +- SMB2_LEASE_KEY_SIZE); +- lw->tlink = cifs_get_tlink(open->tlink); +- queue_work(cifsiod_wq, &lw->lease_break); ++ found = open; + } + + cifs_dbg(FYI, "found in the pending open list\n"); +@@ -582,14 +605,7 @@ smb2_is_valid_lease_break(char *buffer) + struct TCP_Server_Info *server; + struct cifs_ses *ses; + struct cifs_tcon *tcon; +- struct smb2_lease_break_work *lw; +- +- lw = kmalloc(sizeof(struct smb2_lease_break_work), GFP_KERNEL); +- if (!lw) +- return false; +- +- INIT_WORK(&lw->lease_break, cifs_ses_oplock_break); +- lw->lease_state = rsp->NewLeaseState; ++ struct cifs_pending_open *open; + + cifs_dbg(FYI, "Checking for lease break\n"); + +@@ -607,11 +623,27 @@ smb2_is_valid_lease_break(char *buffer) + spin_lock(&tcon->open_file_lock); + cifs_stats_inc( + &tcon->stats.cifs_stats.num_oplock_brks); +- if (smb2_tcon_has_lease(tcon, rsp, lw)) { ++ if (smb2_tcon_has_lease(tcon, rsp)) { + spin_unlock(&tcon->open_file_lock); + spin_unlock(&cifs_tcp_ses_lock); + return true; + } ++ open = smb2_tcon_find_pending_open_lease(tcon, ++ rsp); ++ if (open) { ++ __u8 lease_key[SMB2_LEASE_KEY_SIZE]; ++ struct tcon_link *tlink; ++ ++ tlink = cifs_get_tlink(open->tlink); ++ memcpy(lease_key, open->lease_key, ++ SMB2_LEASE_KEY_SIZE); ++ spin_unlock(&tcon->open_file_lock); ++ spin_unlock(&cifs_tcp_ses_lock); ++ smb2_queue_pending_open_break(tlink, ++ lease_key, ++ rsp->NewLeaseState); ++ return true; ++ } + 
spin_unlock(&tcon->open_file_lock); + + if (tcon->crfid.is_valid && +@@ -629,7 +661,6 @@ smb2_is_valid_lease_break(char *buffer) + } + } + spin_unlock(&cifs_tcp_ses_lock); +- kfree(lw); + cifs_dbg(FYI, "Can not process lease break - no lease matched\n"); + return false; + } +diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c +index cdad4d933bce0..cac1eaa2a7183 100644 +--- a/fs/cifs/smb2pdu.c ++++ b/fs/cifs/smb2pdu.c +@@ -1347,6 +1347,8 @@ SMB2_auth_kerberos(struct SMB2_sess_data *sess_data) + spnego_key = cifs_get_spnego_key(ses); + if (IS_ERR(spnego_key)) { + rc = PTR_ERR(spnego_key); ++ if (rc == -ENOKEY) ++ cifs_dbg(VFS, "Verify user has a krb5 ticket and keyutils is installed\n"); + spnego_key = NULL; + goto out; + } +diff --git a/fs/ext2/ialloc.c b/fs/ext2/ialloc.c +index fda7d3f5b4be5..432c3febea6df 100644 +--- a/fs/ext2/ialloc.c ++++ b/fs/ext2/ialloc.c +@@ -80,6 +80,7 @@ static void ext2_release_inode(struct super_block *sb, int group, int dir) + if (dir) + le16_add_cpu(&desc->bg_used_dirs_count, -1); + spin_unlock(sb_bgl_lock(EXT2_SB(sb), group)); ++ percpu_counter_inc(&EXT2_SB(sb)->s_freeinodes_counter); + if (dir) + percpu_counter_dec(&EXT2_SB(sb)->s_dirs_counter); + mark_buffer_dirty(bh); +@@ -528,7 +529,7 @@ got: + goto fail; + } + +- percpu_counter_add(&sbi->s_freeinodes_counter, -1); ++ percpu_counter_dec(&sbi->s_freeinodes_counter); + if (S_ISDIR(mode)) + percpu_counter_inc(&sbi->s_dirs_counter); + +diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c +index a5b2e72174bb1..527d50edcb956 100644 +--- a/fs/f2fs/compress.c ++++ b/fs/f2fs/compress.c +@@ -1250,6 +1250,8 @@ int f2fs_write_multi_pages(struct compress_ctx *cc, + err = f2fs_write_compressed_pages(cc, submitted, + wbc, io_type); + cops->destroy_compress_ctx(cc); ++ kfree(cc->cpages); ++ cc->cpages = NULL; + if (!err) + return 0; + f2fs_bug_on(F2FS_I_SB(cc->inode), err != -EAGAIN); +diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c +index 10491ae1cb850..329afa55a581c 100644 +--- a/fs/f2fs/data.c ++++ 
b/fs/f2fs/data.c +@@ -3353,6 +3353,10 @@ static int f2fs_write_end(struct file *file, + if (f2fs_compressed_file(inode) && fsdata) { + f2fs_compress_write_end(inode, fsdata, page->index, copied); + f2fs_update_time(F2FS_I_SB(inode), REQ_TIME); ++ ++ if (pos + copied > i_size_read(inode) && ++ !f2fs_verity_in_progress(inode)) ++ f2fs_i_size_write(inode, pos + copied); + return copied; + } + #endif +diff --git a/fs/gfs2/bmap.c b/fs/gfs2/bmap.c +index 6306eaae378b2..6d2ea788d0a17 100644 +--- a/fs/gfs2/bmap.c ++++ b/fs/gfs2/bmap.c +@@ -1351,9 +1351,15 @@ int gfs2_extent_map(struct inode *inode, u64 lblock, int *new, u64 *dblock, unsi + return ret; + } + ++/* ++ * NOTE: Never call gfs2_block_zero_range with an open transaction because it ++ * uses iomap write to perform its actions, which begin their own transactions ++ * (iomap_begin, page_prepare, etc.) ++ */ + static int gfs2_block_zero_range(struct inode *inode, loff_t from, + unsigned int length) + { ++ BUG_ON(current->journal_info); + return iomap_zero_range(inode, from, length, NULL, &gfs2_iomap_ops); + } + +@@ -1414,6 +1420,16 @@ static int trunc_start(struct inode *inode, u64 newsize) + u64 oldsize = inode->i_size; + int error; + ++ if (!gfs2_is_stuffed(ip)) { ++ unsigned int blocksize = i_blocksize(inode); ++ unsigned int offs = newsize & (blocksize - 1); ++ if (offs) { ++ error = gfs2_block_zero_range(inode, newsize, ++ blocksize - offs); ++ if (error) ++ return error; ++ } ++ } + if (journaled) + error = gfs2_trans_begin(sdp, RES_DINODE + RES_JDATA, GFS2_JTRUNC_REVOKES); + else +@@ -1427,19 +1443,10 @@ static int trunc_start(struct inode *inode, u64 newsize) + + gfs2_trans_add_meta(ip->i_gl, dibh); + +- if (gfs2_is_stuffed(ip)) { ++ if (gfs2_is_stuffed(ip)) + gfs2_buffer_clear_tail(dibh, sizeof(struct gfs2_dinode) + newsize); +- } else { +- unsigned int blocksize = i_blocksize(inode); +- unsigned int offs = newsize & (blocksize - 1); +- if (offs) { +- error = gfs2_block_zero_range(inode, newsize, +- 
blocksize - offs); +- if (error) +- goto out; +- } ++ else + ip->i_diskflags |= GFS2_DIF_TRUNC_IN_PROG; +- } + + i_size_write(inode, newsize); + ip->i_inode.i_mtime = ip->i_inode.i_ctime = current_time(&ip->i_inode); +@@ -2448,25 +2455,7 @@ int __gfs2_punch_hole(struct file *file, loff_t offset, loff_t length) + loff_t start, end; + int error; + +- start = round_down(offset, blocksize); +- end = round_up(offset + length, blocksize) - 1; +- error = filemap_write_and_wait_range(inode->i_mapping, start, end); +- if (error) +- return error; +- +- if (gfs2_is_jdata(ip)) +- error = gfs2_trans_begin(sdp, RES_DINODE + 2 * RES_JDATA, +- GFS2_JTRUNC_REVOKES); +- else +- error = gfs2_trans_begin(sdp, RES_DINODE, 0); +- if (error) +- return error; +- +- if (gfs2_is_stuffed(ip)) { +- error = stuffed_zero_range(inode, offset, length); +- if (error) +- goto out; +- } else { ++ if (!gfs2_is_stuffed(ip)) { + unsigned int start_off, end_len; + + start_off = offset & (blocksize - 1); +@@ -2489,6 +2478,26 @@ int __gfs2_punch_hole(struct file *file, loff_t offset, loff_t length) + } + } + ++ start = round_down(offset, blocksize); ++ end = round_up(offset + length, blocksize) - 1; ++ error = filemap_write_and_wait_range(inode->i_mapping, start, end); ++ if (error) ++ return error; ++ ++ if (gfs2_is_jdata(ip)) ++ error = gfs2_trans_begin(sdp, RES_DINODE + 2 * RES_JDATA, ++ GFS2_JTRUNC_REVOKES); ++ else ++ error = gfs2_trans_begin(sdp, RES_DINODE, 0); ++ if (error) ++ return error; ++ ++ if (gfs2_is_stuffed(ip)) { ++ error = stuffed_zero_range(inode, offset, length); ++ if (error) ++ goto out; ++ } ++ + if (gfs2_is_jdata(ip)) { + BUG_ON(!current->journal_info); + gfs2_journaled_truncate_range(inode, offset, length); +diff --git a/fs/minix/inode.c b/fs/minix/inode.c +index 0dd929346f3f3..7b09a9158e401 100644 +--- a/fs/minix/inode.c ++++ b/fs/minix/inode.c +@@ -150,8 +150,10 @@ static int minix_remount (struct super_block * sb, int * flags, char * data) + return 0; + } + +-static bool 
minix_check_superblock(struct minix_sb_info *sbi) ++static bool minix_check_superblock(struct super_block *sb) + { ++ struct minix_sb_info *sbi = minix_sb(sb); ++ + if (sbi->s_imap_blocks == 0 || sbi->s_zmap_blocks == 0) + return false; + +@@ -161,7 +163,7 @@ static bool minix_check_superblock(struct minix_sb_info *sbi) + * of indirect blocks which places the limit well above U32_MAX. + */ + if (sbi->s_version == MINIX_V1 && +- sbi->s_max_size > (7 + 512 + 512*512) * BLOCK_SIZE) ++ sb->s_maxbytes > (7 + 512 + 512*512) * BLOCK_SIZE) + return false; + + return true; +@@ -202,7 +204,7 @@ static int minix_fill_super(struct super_block *s, void *data, int silent) + sbi->s_zmap_blocks = ms->s_zmap_blocks; + sbi->s_firstdatazone = ms->s_firstdatazone; + sbi->s_log_zone_size = ms->s_log_zone_size; +- sbi->s_max_size = ms->s_max_size; ++ s->s_maxbytes = ms->s_max_size; + s->s_magic = ms->s_magic; + if (s->s_magic == MINIX_SUPER_MAGIC) { + sbi->s_version = MINIX_V1; +@@ -233,7 +235,7 @@ static int minix_fill_super(struct super_block *s, void *data, int silent) + sbi->s_zmap_blocks = m3s->s_zmap_blocks; + sbi->s_firstdatazone = m3s->s_firstdatazone; + sbi->s_log_zone_size = m3s->s_log_zone_size; +- sbi->s_max_size = m3s->s_max_size; ++ s->s_maxbytes = m3s->s_max_size; + sbi->s_ninodes = m3s->s_ninodes; + sbi->s_nzones = m3s->s_zones; + sbi->s_dirsize = 64; +@@ -245,7 +247,7 @@ static int minix_fill_super(struct super_block *s, void *data, int silent) + } else + goto out_no_fs; + +- if (!minix_check_superblock(sbi)) ++ if (!minix_check_superblock(s)) + goto out_illegal_sb; + + /* +diff --git a/fs/minix/itree_v1.c b/fs/minix/itree_v1.c +index 046cc96ee7adb..1fed906042aa8 100644 +--- a/fs/minix/itree_v1.c ++++ b/fs/minix/itree_v1.c +@@ -29,12 +29,12 @@ static int block_to_path(struct inode * inode, long block, int offsets[DEPTH]) + if (block < 0) { + printk("MINIX-fs: block_to_path: block %ld < 0 on dev %pg\n", + block, inode->i_sb->s_bdev); +- } else if (block >= 
(minix_sb(inode->i_sb)->s_max_size/BLOCK_SIZE)) { +- if (printk_ratelimit()) +- printk("MINIX-fs: block_to_path: " +- "block %ld too big on dev %pg\n", +- block, inode->i_sb->s_bdev); +- } else if (block < 7) { ++ return 0; ++ } ++ if ((u64)block * BLOCK_SIZE >= inode->i_sb->s_maxbytes) ++ return 0; ++ ++ if (block < 7) { + offsets[n++] = block; + } else if ((block -= 7) < 512) { + offsets[n++] = 7; +diff --git a/fs/minix/itree_v2.c b/fs/minix/itree_v2.c +index f7fc7eccccccd..9d00f31a2d9d1 100644 +--- a/fs/minix/itree_v2.c ++++ b/fs/minix/itree_v2.c +@@ -32,13 +32,12 @@ static int block_to_path(struct inode * inode, long block, int offsets[DEPTH]) + if (block < 0) { + printk("MINIX-fs: block_to_path: block %ld < 0 on dev %pg\n", + block, sb->s_bdev); +- } else if ((u64)block * (u64)sb->s_blocksize >= +- minix_sb(sb)->s_max_size) { +- if (printk_ratelimit()) +- printk("MINIX-fs: block_to_path: " +- "block %ld too big on dev %pg\n", +- block, sb->s_bdev); +- } else if (block < DIRCOUNT) { ++ return 0; ++ } ++ if ((u64)block * (u64)sb->s_blocksize >= sb->s_maxbytes) ++ return 0; ++ ++ if (block < DIRCOUNT) { + offsets[n++] = block; + } else if ((block -= DIRCOUNT) < INDIRCOUNT(sb)) { + offsets[n++] = DIRCOUNT; +diff --git a/fs/minix/minix.h b/fs/minix/minix.h +index df081e8afcc3c..168d45d3de73e 100644 +--- a/fs/minix/minix.h ++++ b/fs/minix/minix.h +@@ -32,7 +32,6 @@ struct minix_sb_info { + unsigned long s_zmap_blocks; + unsigned long s_firstdatazone; + unsigned long s_log_zone_size; +- unsigned long s_max_size; + int s_dirsize; + int s_namelen; + struct buffer_head ** s_imap; +diff --git a/fs/nfs/file.c b/fs/nfs/file.c +index f96367a2463e3..63940a7a70be1 100644 +--- a/fs/nfs/file.c ++++ b/fs/nfs/file.c +@@ -140,6 +140,7 @@ static int + nfs_file_flush(struct file *file, fl_owner_t id) + { + struct inode *inode = file_inode(file); ++ errseq_t since; + + dprintk("NFS: flush(%pD2)\n", file); + +@@ -148,7 +149,9 @@ nfs_file_flush(struct file *file, fl_owner_t id) + 
return 0; + + /* Flush writes to the server and return any errors */ +- return nfs_wb_all(inode); ++ since = filemap_sample_wb_err(file->f_mapping); ++ nfs_wb_all(inode); ++ return filemap_check_wb_err(file->f_mapping, since); + } + + ssize_t +@@ -587,12 +590,14 @@ static const struct vm_operations_struct nfs_file_vm_ops = { + .page_mkwrite = nfs_vm_page_mkwrite, + }; + +-static int nfs_need_check_write(struct file *filp, struct inode *inode) ++static int nfs_need_check_write(struct file *filp, struct inode *inode, ++ int error) + { + struct nfs_open_context *ctx; + + ctx = nfs_file_open_context(filp); +- if (nfs_ctx_key_to_expire(ctx, inode)) ++ if (nfs_error_is_fatal_on_server(error) || ++ nfs_ctx_key_to_expire(ctx, inode)) + return 1; + return 0; + } +@@ -603,6 +608,8 @@ ssize_t nfs_file_write(struct kiocb *iocb, struct iov_iter *from) + struct inode *inode = file_inode(file); + unsigned long written = 0; + ssize_t result; ++ errseq_t since; ++ int error; + + result = nfs_key_timeout_notify(file, inode); + if (result) +@@ -627,6 +634,7 @@ ssize_t nfs_file_write(struct kiocb *iocb, struct iov_iter *from) + if (iocb->ki_pos > i_size_read(inode)) + nfs_revalidate_mapping(inode, file->f_mapping); + ++ since = filemap_sample_wb_err(file->f_mapping); + nfs_start_io_write(inode); + result = generic_write_checks(iocb, from); + if (result > 0) { +@@ -645,7 +653,8 @@ ssize_t nfs_file_write(struct kiocb *iocb, struct iov_iter *from) + goto out; + + /* Return error values */ +- if (nfs_need_check_write(file, inode)) { ++ error = filemap_check_wb_err(file->f_mapping, since); ++ if (nfs_need_check_write(file, inode, error)) { + int err = nfs_wb_all(inode); + if (err < 0) + result = err; +diff --git a/fs/nfs/flexfilelayout/flexfilelayout.c b/fs/nfs/flexfilelayout/flexfilelayout.c +index de03e440b7eef..048272d60a165 100644 +--- a/fs/nfs/flexfilelayout/flexfilelayout.c ++++ b/fs/nfs/flexfilelayout/flexfilelayout.c +@@ -790,6 +790,19 @@ ff_layout_choose_best_ds_for_read(struct 
pnfs_layout_segment *lseg, + return ff_layout_choose_any_ds_for_read(lseg, start_idx, best_idx); + } + ++static struct nfs4_pnfs_ds * ++ff_layout_get_ds_for_read(struct nfs_pageio_descriptor *pgio, int *best_idx) ++{ ++ struct pnfs_layout_segment *lseg = pgio->pg_lseg; ++ struct nfs4_pnfs_ds *ds; ++ ++ ds = ff_layout_choose_best_ds_for_read(lseg, pgio->pg_mirror_idx, ++ best_idx); ++ if (ds || !pgio->pg_mirror_idx) ++ return ds; ++ return ff_layout_choose_best_ds_for_read(lseg, 0, best_idx); ++} ++ + static void + ff_layout_pg_get_read(struct nfs_pageio_descriptor *pgio, + struct nfs_page *req, +@@ -840,7 +853,7 @@ retry: + goto out_nolseg; + } + +- ds = ff_layout_choose_best_ds_for_read(pgio->pg_lseg, 0, &ds_idx); ++ ds = ff_layout_get_ds_for_read(pgio, &ds_idx); + if (!ds) { + if (!ff_layout_no_fallback_to_mds(pgio->pg_lseg)) + goto out_mds; +@@ -1028,11 +1041,24 @@ static void ff_layout_reset_write(struct nfs_pgio_header *hdr, bool retry_pnfs) + } + } + ++static void ff_layout_resend_pnfs_read(struct nfs_pgio_header *hdr) ++{ ++ u32 idx = hdr->pgio_mirror_idx + 1; ++ int new_idx = 0; ++ ++ if (ff_layout_choose_any_ds_for_read(hdr->lseg, idx + 1, &new_idx)) ++ ff_layout_send_layouterror(hdr->lseg); ++ else ++ pnfs_error_mark_layout_for_return(hdr->inode, hdr->lseg); ++ pnfs_read_resend_pnfs(hdr, new_idx); ++} ++ + static void ff_layout_reset_read(struct nfs_pgio_header *hdr) + { + struct rpc_task *task = &hdr->task; + + pnfs_layoutcommit_inode(hdr->inode, false); ++ pnfs_error_mark_layout_for_return(hdr->inode, hdr->lseg); + + if (!test_and_set_bit(NFS_IOHDR_REDO, &hdr->flags)) { + dprintk("%s Reset task %5u for i/o through MDS " +@@ -1234,6 +1260,12 @@ static void ff_layout_io_track_ds_error(struct pnfs_layout_segment *lseg, + break; + case NFS4ERR_NXIO: + ff_layout_mark_ds_unreachable(lseg, idx); ++ /* ++ * Don't return the layout if this is a read and we still ++ * have layouts to try ++ */ ++ if (opnum == OP_READ) ++ break; + /* Fallthrough */ + default: + 
pnfs_error_mark_layout_for_return(lseg->pls_layout->plh_inode, +@@ -1247,7 +1279,6 @@ static void ff_layout_io_track_ds_error(struct pnfs_layout_segment *lseg, + static int ff_layout_read_done_cb(struct rpc_task *task, + struct nfs_pgio_header *hdr) + { +- int new_idx = hdr->pgio_mirror_idx; + int err; + + if (task->tk_status < 0) { +@@ -1267,10 +1298,6 @@ static int ff_layout_read_done_cb(struct rpc_task *task, + clear_bit(NFS_IOHDR_RESEND_MDS, &hdr->flags); + switch (err) { + case -NFS4ERR_RESET_TO_PNFS: +- if (ff_layout_choose_best_ds_for_read(hdr->lseg, +- hdr->pgio_mirror_idx + 1, +- &new_idx)) +- goto out_layouterror; + set_bit(NFS_IOHDR_RESEND_PNFS, &hdr->flags); + return task->tk_status; + case -NFS4ERR_RESET_TO_MDS: +@@ -1281,10 +1308,6 @@ static int ff_layout_read_done_cb(struct rpc_task *task, + } + + return 0; +-out_layouterror: +- ff_layout_read_record_layoutstats_done(task, hdr); +- ff_layout_send_layouterror(hdr->lseg); +- hdr->pgio_mirror_idx = new_idx; + out_eagain: + rpc_restart_call_prepare(task); + return -EAGAIN; +@@ -1411,10 +1434,9 @@ static void ff_layout_read_release(void *data) + struct nfs_pgio_header *hdr = data; + + ff_layout_read_record_layoutstats_done(&hdr->task, hdr); +- if (test_bit(NFS_IOHDR_RESEND_PNFS, &hdr->flags)) { +- ff_layout_send_layouterror(hdr->lseg); +- pnfs_read_resend_pnfs(hdr); +- } else if (test_bit(NFS_IOHDR_RESEND_MDS, &hdr->flags)) ++ if (test_bit(NFS_IOHDR_RESEND_PNFS, &hdr->flags)) ++ ff_layout_resend_pnfs_read(hdr); ++ else if (test_bit(NFS_IOHDR_RESEND_MDS, &hdr->flags)) + ff_layout_reset_read(hdr); + pnfs_generic_rw_release(data); + } +diff --git a/fs/nfs/nfs4file.c b/fs/nfs/nfs4file.c +index 8e5d6223ddd35..a339707654673 100644 +--- a/fs/nfs/nfs4file.c ++++ b/fs/nfs/nfs4file.c +@@ -110,6 +110,7 @@ static int + nfs4_file_flush(struct file *file, fl_owner_t id) + { + struct inode *inode = file_inode(file); ++ errseq_t since; + + dprintk("NFS: flush(%pD2)\n", file); + +@@ -125,7 +126,9 @@ nfs4_file_flush(struct 
file *file, fl_owner_t id) + return filemap_fdatawrite(file->f_mapping); + + /* Flush writes to the server and return any errors */ +- return nfs_wb_all(inode); ++ since = filemap_sample_wb_err(file->f_mapping); ++ nfs_wb_all(inode); ++ return filemap_check_wb_err(file->f_mapping, since); + } + + #ifdef CONFIG_NFS_V4_2 +diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c +index 2e2dac29a9e91..45e0585e0667c 100644 +--- a/fs/nfs/nfs4proc.c ++++ b/fs/nfs/nfs4proc.c +@@ -5845,8 +5845,6 @@ static int _nfs4_get_security_label(struct inode *inode, void *buf, + return ret; + if (!(fattr.valid & NFS_ATTR_FATTR_V4_SECURITY_LABEL)) + return -ENOENT; +- if (buflen < label.len) +- return -ERANGE; + return 0; + } + +diff --git a/fs/nfs/nfs4xdr.c b/fs/nfs/nfs4xdr.c +index 47817ef0aadb1..4e0d8a3b89b67 100644 +--- a/fs/nfs/nfs4xdr.c ++++ b/fs/nfs/nfs4xdr.c +@@ -4166,7 +4166,11 @@ static int decode_attr_security_label(struct xdr_stream *xdr, uint32_t *bitmap, + return -EIO; + if (len < NFS4_MAXLABELLEN) { + if (label) { +- memcpy(label->label, p, len); ++ if (label->len) { ++ if (label->len < len) ++ return -ERANGE; ++ memcpy(label->label, p, len); ++ } + label->len = len; + label->pi = pi; + label->lfs = lfs; +diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c +index d61dac48dff50..75e988caf3cd7 100644 +--- a/fs/nfs/pnfs.c ++++ b/fs/nfs/pnfs.c +@@ -2939,7 +2939,8 @@ pnfs_try_to_read_data(struct nfs_pgio_header *hdr, + } + + /* Resend all requests through pnfs. 
*/ +-void pnfs_read_resend_pnfs(struct nfs_pgio_header *hdr) ++void pnfs_read_resend_pnfs(struct nfs_pgio_header *hdr, ++ unsigned int mirror_idx) + { + struct nfs_pageio_descriptor pgio; + +@@ -2950,6 +2951,7 @@ void pnfs_read_resend_pnfs(struct nfs_pgio_header *hdr) + + nfs_pageio_init_read(&pgio, hdr->inode, false, + hdr->completion_ops); ++ pgio.pg_mirror_idx = mirror_idx; + hdr->task.tk_status = nfs_pageio_resend(&pgio, hdr); + } + } +diff --git a/fs/nfs/pnfs.h b/fs/nfs/pnfs.h +index 8e0ada581b92e..2661c44c62db4 100644 +--- a/fs/nfs/pnfs.h ++++ b/fs/nfs/pnfs.h +@@ -311,7 +311,7 @@ int _pnfs_return_layout(struct inode *); + int pnfs_commit_and_return_layout(struct inode *); + void pnfs_ld_write_done(struct nfs_pgio_header *); + void pnfs_ld_read_done(struct nfs_pgio_header *); +-void pnfs_read_resend_pnfs(struct nfs_pgio_header *); ++void pnfs_read_resend_pnfs(struct nfs_pgio_header *, unsigned int mirror_idx); + struct pnfs_layout_segment *pnfs_update_layout(struct inode *ino, + struct nfs_open_context *ctx, + loff_t pos, +diff --git a/fs/ocfs2/ocfs2.h b/fs/ocfs2/ocfs2.h +index 9461bd3e1c0c8..0a8cd8e59a92c 100644 +--- a/fs/ocfs2/ocfs2.h ++++ b/fs/ocfs2/ocfs2.h +@@ -326,8 +326,8 @@ struct ocfs2_super + spinlock_t osb_lock; + u32 s_next_generation; + unsigned long osb_flags; +- s16 s_inode_steal_slot; +- s16 s_meta_steal_slot; ++ u16 s_inode_steal_slot; ++ u16 s_meta_steal_slot; + atomic_t s_num_inodes_stolen; + atomic_t s_num_meta_stolen; + +diff --git a/fs/ocfs2/suballoc.c b/fs/ocfs2/suballoc.c +index 45745cc3408a5..8c8cf7f4eb34e 100644 +--- a/fs/ocfs2/suballoc.c ++++ b/fs/ocfs2/suballoc.c +@@ -879,9 +879,9 @@ static void __ocfs2_set_steal_slot(struct ocfs2_super *osb, int slot, int type) + { + spin_lock(&osb->osb_lock); + if (type == INODE_ALLOC_SYSTEM_INODE) +- osb->s_inode_steal_slot = slot; ++ osb->s_inode_steal_slot = (u16)slot; + else if (type == EXTENT_ALLOC_SYSTEM_INODE) +- osb->s_meta_steal_slot = slot; ++ osb->s_meta_steal_slot = (u16)slot; + 
spin_unlock(&osb->osb_lock); + } + +diff --git a/fs/ocfs2/super.c b/fs/ocfs2/super.c +index ac61eeaf38374..b74c5b25726f5 100644 +--- a/fs/ocfs2/super.c ++++ b/fs/ocfs2/super.c +@@ -78,7 +78,7 @@ struct mount_options + unsigned long commit_interval; + unsigned long mount_opt; + unsigned int atime_quantum; +- signed short slot; ++ unsigned short slot; + int localalloc_opt; + unsigned int resv_level; + int dir_resv_level; +@@ -1334,7 +1334,7 @@ static int ocfs2_parse_options(struct super_block *sb, + goto bail; + } + if (option) +- mopt->slot = (s16)option; ++ mopt->slot = (u16)option; + break; + case Opt_commit: + if (match_int(&args[0], &option)) { +diff --git a/fs/ubifs/journal.c b/fs/ubifs/journal.c +index e5ec1afe1c668..2cf05f87565c2 100644 +--- a/fs/ubifs/journal.c ++++ b/fs/ubifs/journal.c +@@ -539,7 +539,7 @@ int ubifs_jnl_update(struct ubifs_info *c, const struct inode *dir, + const struct fscrypt_name *nm, const struct inode *inode, + int deletion, int xent) + { +- int err, dlen, ilen, len, lnum, ino_offs, dent_offs; ++ int err, dlen, ilen, len, lnum, ino_offs, dent_offs, orphan_added = 0; + int aligned_dlen, aligned_ilen, sync = IS_DIRSYNC(dir); + int last_reference = !!(deletion && inode->i_nlink == 0); + struct ubifs_inode *ui = ubifs_inode(inode); +@@ -630,6 +630,7 @@ int ubifs_jnl_update(struct ubifs_info *c, const struct inode *dir, + goto out_finish; + } + ui->del_cmtno = c->cmt_no; ++ orphan_added = 1; + } + + err = write_head(c, BASEHD, dent, len, &lnum, &dent_offs, sync); +@@ -702,7 +703,7 @@ out_release: + kfree(dent); + out_ro: + ubifs_ro_mode(c, err); +- if (last_reference) ++ if (orphan_added) + ubifs_delete_orphan(c, inode->i_ino); + finish_reservation(c); + return err; +@@ -1218,7 +1219,7 @@ int ubifs_jnl_rename(struct ubifs_info *c, const struct inode *old_dir, + void *p; + union ubifs_key key; + struct ubifs_dent_node *dent, *dent2; +- int err, dlen1, dlen2, ilen, lnum, offs, len; ++ int err, dlen1, dlen2, ilen, lnum, offs, len, 
orphan_added = 0; + int aligned_dlen1, aligned_dlen2, plen = UBIFS_INO_NODE_SZ; + int last_reference = !!(new_inode && new_inode->i_nlink == 0); + int move = (old_dir != new_dir); +@@ -1334,6 +1335,7 @@ int ubifs_jnl_rename(struct ubifs_info *c, const struct inode *old_dir, + goto out_finish; + } + new_ui->del_cmtno = c->cmt_no; ++ orphan_added = 1; + } + + err = write_head(c, BASEHD, dent, len, &lnum, &offs, sync); +@@ -1415,7 +1417,7 @@ out_release: + release_head(c, BASEHD); + out_ro: + ubifs_ro_mode(c, err); +- if (last_reference) ++ if (orphan_added) + ubifs_delete_orphan(c, new_inode->i_ino); + out_finish: + finish_reservation(c); +diff --git a/fs/ufs/super.c b/fs/ufs/super.c +index 1da0be667409b..e3b69fb280e8c 100644 +--- a/fs/ufs/super.c ++++ b/fs/ufs/super.c +@@ -101,7 +101,7 @@ static struct inode *ufs_nfs_get_inode(struct super_block *sb, u64 ino, u32 gene + struct ufs_sb_private_info *uspi = UFS_SB(sb)->s_uspi; + struct inode *inode; + +- if (ino < UFS_ROOTINO || ino > uspi->s_ncg * uspi->s_ipg) ++ if (ino < UFS_ROOTINO || ino > (u64)uspi->s_ncg * uspi->s_ipg) + return ERR_PTR(-ESTALE); + + inode = ufs_iget(sb, ino); +diff --git a/include/crypto/if_alg.h b/include/crypto/if_alg.h +index 088c1ded27148..ee6412314f8f3 100644 +--- a/include/crypto/if_alg.h ++++ b/include/crypto/if_alg.h +@@ -135,6 +135,7 @@ struct af_alg_async_req { + * SG? + * @enc: Cryptographic operation to be performed when + * recvmsg is invoked. ++ * @init: True if metadata has been sent. + * @len: Length of memory allocated for this data structure. 
+ */ + struct af_alg_ctx { +@@ -151,6 +152,7 @@ struct af_alg_ctx { + bool more; + bool merge; + bool enc; ++ bool init; + + unsigned int len; + }; +@@ -226,7 +228,7 @@ unsigned int af_alg_count_tsgl(struct sock *sk, size_t bytes, size_t offset); + void af_alg_pull_tsgl(struct sock *sk, size_t used, struct scatterlist *dst, + size_t dst_offset); + void af_alg_wmem_wakeup(struct sock *sk); +-int af_alg_wait_for_data(struct sock *sk, unsigned flags); ++int af_alg_wait_for_data(struct sock *sk, unsigned flags, unsigned min); + int af_alg_sendmsg(struct socket *sock, struct msghdr *msg, size_t size, + unsigned int ivsize); + ssize_t af_alg_sendpage(struct socket *sock, struct page *page, +diff --git a/include/linux/fs.h b/include/linux/fs.h +index 45cc10cdf6ddd..db58786c660bf 100644 +--- a/include/linux/fs.h ++++ b/include/linux/fs.h +@@ -546,6 +546,16 @@ static inline void i_mmap_unlock_read(struct address_space *mapping) + up_read(&mapping->i_mmap_rwsem); + } + ++static inline void i_mmap_assert_locked(struct address_space *mapping) ++{ ++ lockdep_assert_held(&mapping->i_mmap_rwsem); ++} ++ ++static inline void i_mmap_assert_write_locked(struct address_space *mapping) ++{ ++ lockdep_assert_held_write(&mapping->i_mmap_rwsem); ++} ++ + /* + * Might pages of this file be mapped into userspace? 
+ */ +diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h +index 43a1cef8f0f16..214f509bcb88f 100644 +--- a/include/linux/hugetlb.h ++++ b/include/linux/hugetlb.h +@@ -165,7 +165,8 @@ pte_t *huge_pte_alloc(struct mm_struct *mm, + unsigned long addr, unsigned long sz); + pte_t *huge_pte_offset(struct mm_struct *mm, + unsigned long addr, unsigned long sz); +-int huge_pmd_unshare(struct mm_struct *mm, unsigned long *addr, pte_t *ptep); ++int huge_pmd_unshare(struct mm_struct *mm, struct vm_area_struct *vma, ++ unsigned long *addr, pte_t *ptep); + void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma, + unsigned long *start, unsigned long *end); + struct page *follow_huge_addr(struct mm_struct *mm, unsigned long address, +@@ -204,8 +205,9 @@ static inline struct address_space *hugetlb_page_mapping_lock_write( + return NULL; + } + +-static inline int huge_pmd_unshare(struct mm_struct *mm, unsigned long *addr, +- pte_t *ptep) ++static inline int huge_pmd_unshare(struct mm_struct *mm, ++ struct vm_area_struct *vma, ++ unsigned long *addr, pte_t *ptep) + { + return 0; + } +diff --git a/include/linux/intel-iommu.h b/include/linux/intel-iommu.h +index 64a5335046b00..bc1abbc041092 100644 +--- a/include/linux/intel-iommu.h ++++ b/include/linux/intel-iommu.h +@@ -363,8 +363,8 @@ enum { + + #define QI_DEV_EIOTLB_ADDR(a) ((u64)(a) & VTD_PAGE_MASK) + #define QI_DEV_EIOTLB_SIZE (((u64)1) << 11) +-#define QI_DEV_EIOTLB_GLOB(g) ((u64)g) +-#define QI_DEV_EIOTLB_PASID(p) (((u64)p) << 32) ++#define QI_DEV_EIOTLB_GLOB(g) ((u64)(g) & 0x1) ++#define QI_DEV_EIOTLB_PASID(p) ((u64)((p) & 0xfffff) << 32) + #define QI_DEV_EIOTLB_SID(sid) ((u64)((sid) & 0xffff) << 16) + #define QI_DEV_EIOTLB_QDEP(qd) ((u64)((qd) & 0x1f) << 4) + #define QI_DEV_EIOTLB_PFSID(pfsid) (((u64)(pfsid & 0xf) << 12) | \ +diff --git a/include/linux/irq.h b/include/linux/irq.h +index 8d5bc2c237d74..1b7f4dfee35b3 100644 +--- a/include/linux/irq.h ++++ b/include/linux/irq.h +@@ -213,6 +213,8 @@ 
struct irq_data { + * required + * IRQD_HANDLE_ENFORCE_IRQCTX - Enforce that handle_irq_*() is only invoked + * from actual interrupt context. ++ * IRQD_AFFINITY_ON_ACTIVATE - Affinity is set on activation. Don't call ++ * irq_chip::irq_set_affinity() when deactivated. + */ + enum { + IRQD_TRIGGER_MASK = 0xf, +@@ -237,6 +239,7 @@ enum { + IRQD_CAN_RESERVE = (1 << 26), + IRQD_MSI_NOMASK_QUIRK = (1 << 27), + IRQD_HANDLE_ENFORCE_IRQCTX = (1 << 28), ++ IRQD_AFFINITY_ON_ACTIVATE = (1 << 29), + }; + + #define __irqd_to_state(d) ACCESS_PRIVATE((d)->common, state_use_accessors) +@@ -421,6 +424,16 @@ static inline bool irqd_msi_nomask_quirk(struct irq_data *d) + return __irqd_to_state(d) & IRQD_MSI_NOMASK_QUIRK; + } + ++static inline void irqd_set_affinity_on_activate(struct irq_data *d) ++{ ++ __irqd_to_state(d) |= IRQD_AFFINITY_ON_ACTIVATE; ++} ++ ++static inline bool irqd_affinity_on_activate(struct irq_data *d) ++{ ++ return __irqd_to_state(d) & IRQD_AFFINITY_ON_ACTIVATE; ++} ++ + #undef __irqd_to_state + + static inline irq_hw_number_t irqd_to_hwirq(struct irq_data *d) +diff --git a/include/linux/pci-ats.h b/include/linux/pci-ats.h +index d08f0869f1213..54c57a523ccec 100644 +--- a/include/linux/pci-ats.h ++++ b/include/linux/pci-ats.h +@@ -25,6 +25,10 @@ int pci_enable_pri(struct pci_dev *pdev, u32 reqs); + void pci_disable_pri(struct pci_dev *pdev); + int pci_reset_pri(struct pci_dev *pdev); + int pci_prg_resp_pasid_required(struct pci_dev *pdev); ++bool pci_pri_supported(struct pci_dev *pdev); ++#else ++static inline bool pci_pri_supported(struct pci_dev *pdev) ++{ return false; } + #endif /* CONFIG_PCI_PRI */ + + #ifdef CONFIG_PCI_PASID +diff --git a/include/net/sock.h b/include/net/sock.h +index 46423e86dba50..2a1e8a683336e 100644 +--- a/include/net/sock.h ++++ b/include/net/sock.h +@@ -890,6 +890,8 @@ static inline int sk_memalloc_socks(void) + { + return static_branch_unlikely(&memalloc_socks_key); + } ++ ++void __receive_sock(struct file *file); + #else + + 
static inline int sk_memalloc_socks(void) +@@ -897,6 +899,8 @@ static inline int sk_memalloc_socks(void) + return 0; + } + ++static inline void __receive_sock(struct file *file) ++{ } + #endif + + static inline gfp_t sk_gfp_mask(const struct sock *sk, gfp_t gfp_mask) +diff --git a/include/uapi/linux/btrfs.h b/include/uapi/linux/btrfs.h +index e6b6cb0f8bc6a..24f6848ad78ec 100644 +--- a/include/uapi/linux/btrfs.h ++++ b/include/uapi/linux/btrfs.h +@@ -243,6 +243,13 @@ struct btrfs_ioctl_dev_info_args { + __u8 path[BTRFS_DEVICE_PATH_NAME_MAX]; /* out */ + }; + ++/* ++ * Retrieve information about the filesystem ++ */ ++ ++/* Request information about checksum type and size */ ++#define BTRFS_FS_INFO_FLAG_CSUM_INFO (1 << 0) ++ + struct btrfs_ioctl_fs_info_args { + __u64 max_id; /* out */ + __u64 num_devices; /* out */ +@@ -250,8 +257,11 @@ struct btrfs_ioctl_fs_info_args { + __u32 nodesize; /* out */ + __u32 sectorsize; /* out */ + __u32 clone_alignment; /* out */ +- __u32 reserved32; +- __u64 reserved[122]; /* pad to 1k */ ++ /* See BTRFS_FS_INFO_FLAG_* */ ++ __u16 csum_type; /* out */ ++ __u16 csum_size; /* out */ ++ __u64 flags; /* in/out */ ++ __u8 reserved[968]; /* pad to 1k */ + }; + + /* +diff --git a/init/main.c b/init/main.c +index 03371976d3872..567f7694b8044 100644 +--- a/init/main.c ++++ b/init/main.c +@@ -385,8 +385,6 @@ static int __init bootconfig_params(char *param, char *val, + { + if (strcmp(param, "bootconfig") == 0) { + bootconfig_found = true; +- } else if (strcmp(param, "--") == 0) { +- initargs_found = true; + } + return 0; + } +@@ -397,19 +395,23 @@ static void __init setup_boot_config(const char *cmdline) + const char *msg; + int pos; + u32 size, csum; +- char *data, *copy; ++ char *data, *copy, *err; + int ret; + + /* Cut out the bootconfig data even if we have no bootconfig option */ + data = get_boot_config_from_initrd(&size, &csum); + + strlcpy(tmp_cmdline, boot_command_line, COMMAND_LINE_SIZE); +- parse_args("bootconfig", tmp_cmdline, 
NULL, 0, 0, 0, NULL, +- bootconfig_params); ++ err = parse_args("bootconfig", tmp_cmdline, NULL, 0, 0, 0, NULL, ++ bootconfig_params); + +- if (!bootconfig_found) ++ if (IS_ERR(err) || !bootconfig_found) + return; + ++ /* parse_args() stops at '--' and returns an address */ ++ if (err) ++ initargs_found = true; ++ + if (!data) { + pr_err("'bootconfig' found on command line, but no bootconfig found\n"); + return; +diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c +index dc58fd245e798..c48864ae6413c 100644 +--- a/kernel/irq/manage.c ++++ b/kernel/irq/manage.c +@@ -320,12 +320,16 @@ static bool irq_set_affinity_deactivated(struct irq_data *data, + struct irq_desc *desc = irq_data_to_desc(data); + + /* ++ * Handle irq chips which can handle affinity only in activated ++ * state correctly ++ * + * If the interrupt is not yet activated, just store the affinity + * mask and do not call the chip driver at all. On activation the + * driver has to make sure anyway that the interrupt is in a + * useable state so startup works. 
+ */ +- if (!IS_ENABLED(CONFIG_IRQ_DOMAIN_HIERARCHY) || irqd_is_activated(data)) ++ if (!IS_ENABLED(CONFIG_IRQ_DOMAIN_HIERARCHY) || ++ irqd_is_activated(data) || !irqd_affinity_on_activate(data)) + return false; + + cpumask_copy(desc->irq_common_data.affinity, mask); +diff --git a/kernel/irq/pm.c b/kernel/irq/pm.c +index 8f557fa1f4fe4..c6c7e187ae748 100644 +--- a/kernel/irq/pm.c ++++ b/kernel/irq/pm.c +@@ -185,14 +185,18 @@ void rearm_wake_irq(unsigned int irq) + unsigned long flags; + struct irq_desc *desc = irq_get_desc_buslock(irq, &flags, IRQ_GET_DESC_CHECK_GLOBAL); + +- if (!desc || !(desc->istate & IRQS_SUSPENDED) || +- !irqd_is_wakeup_set(&desc->irq_data)) ++ if (!desc) + return; + ++ if (!(desc->istate & IRQS_SUSPENDED) || ++ !irqd_is_wakeup_set(&desc->irq_data)) ++ goto unlock; ++ + desc->istate &= ~IRQS_SUSPENDED; + irqd_set(&desc->irq_data, IRQD_WAKEUP_ARMED); + __enable_irq(desc); + ++unlock: + irq_put_desc_busunlock(desc, flags); + } + +diff --git a/kernel/kprobes.c b/kernel/kprobes.c +index 0a967db226d8a..bbff4bccb885d 100644 +--- a/kernel/kprobes.c ++++ b/kernel/kprobes.c +@@ -2104,6 +2104,13 @@ static void kill_kprobe(struct kprobe *p) + * the original probed function (which will be freed soon) any more. + */ + arch_remove_kprobe(p); ++ ++ /* ++ * The module is going away. We should disarm the kprobe which ++ * is using ftrace. 
++ */ ++ if (kprobe_ftrace(p)) ++ disarm_kprobe_ftrace(p); + } + + /* Disable one kprobe */ +diff --git a/kernel/module.c b/kernel/module.c +index af59c86f1547f..8814c21266384 100644 +--- a/kernel/module.c ++++ b/kernel/module.c +@@ -1517,18 +1517,34 @@ struct module_sect_attrs { + struct module_sect_attr attrs[]; + }; + ++#define MODULE_SECT_READ_SIZE (3 /* "0x", "\n" */ + (BITS_PER_LONG / 4)) + static ssize_t module_sect_read(struct file *file, struct kobject *kobj, + struct bin_attribute *battr, + char *buf, loff_t pos, size_t count) + { + struct module_sect_attr *sattr = + container_of(battr, struct module_sect_attr, battr); ++ char bounce[MODULE_SECT_READ_SIZE + 1]; ++ size_t wrote; + + if (pos != 0) + return -EINVAL; + +- return sprintf(buf, "0x%px\n", +- kallsyms_show_value(file->f_cred) ? (void *)sattr->address : NULL); ++ /* ++ * Since we're a binary read handler, we must account for the ++ * trailing NUL byte that sprintf will write: if "buf" is ++ * too small to hold the NUL, or the NUL is exactly the last ++ * byte, the read will look like it got truncated by one byte. ++ * Since there is no way to ask sprintf nicely to not write ++ * the NUL, we have to use a bounce buffer. ++ */ ++ wrote = scnprintf(bounce, sizeof(bounce), "0x%px\n", ++ kallsyms_show_value(file->f_cred) ++ ? 
(void *)sattr->address : NULL); ++ count = min(count, wrote); ++ memcpy(buf, bounce, count); ++ ++ return count; + } + + static void free_sect_attrs(struct module_sect_attrs *sect_attrs) +@@ -1577,7 +1593,7 @@ static void add_sect_attrs(struct module *mod, const struct load_info *info) + goto out; + sect_attrs->nsections++; + sattr->battr.read = module_sect_read; +- sattr->battr.size = 3 /* "0x", "\n" */ + (BITS_PER_LONG / 4); ++ sattr->battr.size = MODULE_SECT_READ_SIZE; + sattr->battr.attr.mode = 0400; + *(gattr++) = &(sattr++)->battr; + } +diff --git a/kernel/pid.c b/kernel/pid.c +index c835b844aca7c..5506efe93dd2f 100644 +--- a/kernel/pid.c ++++ b/kernel/pid.c +@@ -42,6 +42,7 @@ + #include + #include + #include ++#include + + struct pid init_struct_pid = { + .count = REFCOUNT_INIT(1), +@@ -624,10 +625,12 @@ static int pidfd_getfd(struct pid *pid, int fd) + } + + ret = get_unused_fd_flags(O_CLOEXEC); +- if (ret < 0) ++ if (ret < 0) { + fput(file); +- else ++ } else { ++ __receive_sock(file); + fd_install(ret, file); ++ } + + return ret; + } +diff --git a/kernel/sched/core.c b/kernel/sched/core.c +index 1bae86fc128b2..ebecf1cc3b788 100644 +--- a/kernel/sched/core.c ++++ b/kernel/sched/core.c +@@ -793,6 +793,26 @@ unsigned int sysctl_sched_uclamp_util_max = SCHED_CAPACITY_SCALE; + /* All clamps are required to be less or equal than these values */ + static struct uclamp_se uclamp_default[UCLAMP_CNT]; + ++/* ++ * This static key is used to reduce the uclamp overhead in the fast path. It ++ * primarily disables the call to uclamp_rq_{inc, dec}() in ++ * enqueue/dequeue_task(). ++ * ++ * This allows users to continue to enable uclamp in their kernel config with ++ * minimum uclamp overhead in the fast path. ++ * ++ * As soon as userspace modifies any of the uclamp knobs, the static key is ++ * enabled, since we have an actual users that make use of uclamp ++ * functionality. 
++ * ++ * The knobs that would enable this static key are: ++ * ++ * * A task modifying its uclamp value with sched_setattr(). ++ * * An admin modifying the sysctl_sched_uclamp_{min, max} via procfs. ++ * * An admin modifying the cgroup cpu.uclamp.{min, max} ++ */ ++DEFINE_STATIC_KEY_FALSE(sched_uclamp_used); ++ + /* Integer rounded range for each bucket */ + #define UCLAMP_BUCKET_DELTA DIV_ROUND_CLOSEST(SCHED_CAPACITY_SCALE, UCLAMP_BUCKETS) + +@@ -989,10 +1009,38 @@ static inline void uclamp_rq_dec_id(struct rq *rq, struct task_struct *p, + + lockdep_assert_held(&rq->lock); + ++ /* ++ * If sched_uclamp_used was enabled after task @p was enqueued, ++ * we could end up with unbalanced call to uclamp_rq_dec_id(). ++ * ++ * In this case the uc_se->active flag should be false since no uclamp ++ * accounting was performed at enqueue time and we can just return ++ * here. ++ * ++ * Need to be careful of the following enqeueue/dequeue ordering ++ * problem too ++ * ++ * enqueue(taskA) ++ * // sched_uclamp_used gets enabled ++ * enqueue(taskB) ++ * dequeue(taskA) ++ * // Must not decrement bukcet->tasks here ++ * dequeue(taskB) ++ * ++ * where we could end up with stale data in uc_se and ++ * bucket[uc_se->bucket_id]. ++ * ++ * The following check here eliminates the possibility of such race. ++ */ ++ if (unlikely(!uc_se->active)) ++ return; ++ + bucket = &uc_rq->bucket[uc_se->bucket_id]; ++ + SCHED_WARN_ON(!bucket->tasks); + if (likely(bucket->tasks)) + bucket->tasks--; ++ + uc_se->active = false; + + /* +@@ -1020,6 +1068,15 @@ static inline void uclamp_rq_inc(struct rq *rq, struct task_struct *p) + { + enum uclamp_id clamp_id; + ++ /* ++ * Avoid any overhead until uclamp is actually used by the userspace. ++ * ++ * The condition is constructed such that a NOP is generated when ++ * sched_uclamp_used is disabled. 
++ */ ++ if (!static_branch_unlikely(&sched_uclamp_used)) ++ return; ++ + if (unlikely(!p->sched_class->uclamp_enabled)) + return; + +@@ -1035,6 +1092,15 @@ static inline void uclamp_rq_dec(struct rq *rq, struct task_struct *p) + { + enum uclamp_id clamp_id; + ++ /* ++ * Avoid any overhead until uclamp is actually used by the userspace. ++ * ++ * The condition is constructed such that a NOP is generated when ++ * sched_uclamp_used is disabled. ++ */ ++ if (!static_branch_unlikely(&sched_uclamp_used)) ++ return; ++ + if (unlikely(!p->sched_class->uclamp_enabled)) + return; + +@@ -1144,8 +1210,10 @@ int sysctl_sched_uclamp_handler(struct ctl_table *table, int write, + update_root_tg = true; + } + +- if (update_root_tg) ++ if (update_root_tg) { ++ static_branch_enable(&sched_uclamp_used); + uclamp_update_root_tg(); ++ } + + /* + * We update all RUNNABLE tasks only when task groups are in use. +@@ -1180,6 +1248,15 @@ static int uclamp_validate(struct task_struct *p, + if (upper_bound > SCHED_CAPACITY_SCALE) + return -EINVAL; + ++ /* ++ * We have valid uclamp attributes; make sure uclamp is enabled. ++ * ++ * We need to do that here, because enabling static branches is a ++ * blocking operation which obviously cannot be done while holding ++ * scheduler locks. 
++ */ ++ static_branch_enable(&sched_uclamp_used); ++ + return 0; + } + +@@ -7306,6 +7383,8 @@ static ssize_t cpu_uclamp_write(struct kernfs_open_file *of, char *buf, + if (req.ret) + return req.ret; + ++ static_branch_enable(&sched_uclamp_used); ++ + mutex_lock(&uclamp_mutex); + rcu_read_lock(); + +diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c +index 7fbaee24c824f..dc6835bc64907 100644 +--- a/kernel/sched/cpufreq_schedutil.c ++++ b/kernel/sched/cpufreq_schedutil.c +@@ -210,7 +210,7 @@ unsigned long schedutil_cpu_util(int cpu, unsigned long util_cfs, + unsigned long dl_util, util, irq; + struct rq *rq = cpu_rq(cpu); + +- if (!IS_BUILTIN(CONFIG_UCLAMP_TASK) && ++ if (!uclamp_is_used() && + type == FREQUENCY_UTIL && rt_rq_is_runnable(&rq->rt)) { + return max; + } +diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h +index 1f58677a8f233..2a52710d2f526 100644 +--- a/kernel/sched/sched.h ++++ b/kernel/sched/sched.h +@@ -863,6 +863,8 @@ struct uclamp_rq { + unsigned int value; + struct uclamp_bucket bucket[UCLAMP_BUCKETS]; + }; ++ ++DECLARE_STATIC_KEY_FALSE(sched_uclamp_used); + #endif /* CONFIG_UCLAMP_TASK */ + + /* +@@ -2355,12 +2357,35 @@ static inline void cpufreq_update_util(struct rq *rq, unsigned int flags) {} + #ifdef CONFIG_UCLAMP_TASK + unsigned long uclamp_eff_value(struct task_struct *p, enum uclamp_id clamp_id); + ++/** ++ * uclamp_rq_util_with - clamp @util with @rq and @p effective uclamp values. ++ * @rq: The rq to clamp against. Must not be NULL. ++ * @util: The util value to clamp. ++ * @p: The task to clamp against. Can be NULL if you want to clamp ++ * against @rq only. ++ * ++ * Clamps the passed @util to the max(@rq, @p) effective uclamp values. ++ * ++ * If sched_uclamp_used static key is disabled, then just return the util ++ * without any clamping since uclamp aggregation at the rq level in the fast ++ * path is disabled, rendering this operation a NOP. 
++ * ++ * Use uclamp_eff_value() if you don't care about uclamp values at rq level. It ++ * will return the correct effective uclamp value of the task even if the ++ * static key is disabled. ++ */ + static __always_inline + unsigned long uclamp_rq_util_with(struct rq *rq, unsigned long util, + struct task_struct *p) + { +- unsigned long min_util = READ_ONCE(rq->uclamp[UCLAMP_MIN].value); +- unsigned long max_util = READ_ONCE(rq->uclamp[UCLAMP_MAX].value); ++ unsigned long min_util; ++ unsigned long max_util; ++ ++ if (!static_branch_likely(&sched_uclamp_used)) ++ return util; ++ ++ min_util = READ_ONCE(rq->uclamp[UCLAMP_MIN].value); ++ max_util = READ_ONCE(rq->uclamp[UCLAMP_MAX].value); + + if (p) { + min_util = max(min_util, uclamp_eff_value(p, UCLAMP_MIN)); +@@ -2377,6 +2402,19 @@ unsigned long uclamp_rq_util_with(struct rq *rq, unsigned long util, + + return clamp(util, min_util, max_util); + } ++ ++/* ++ * When uclamp is compiled in, the aggregation at rq level is 'turned off' ++ * by default in the fast path and only gets turned on once userspace performs ++ * an operation that requires it. ++ * ++ * Returns true if userspace opted-in to use uclamp and aggregation at rq level ++ * hence is active. 
++ */ ++static inline bool uclamp_is_used(void) ++{ ++ return static_branch_likely(&sched_uclamp_used); ++} + #else /* CONFIG_UCLAMP_TASK */ + static inline + unsigned long uclamp_rq_util_with(struct rq *rq, unsigned long util, +@@ -2384,6 +2422,11 @@ unsigned long uclamp_rq_util_with(struct rq *rq, unsigned long util, + { + return util; + } ++ ++static inline bool uclamp_is_used(void) ++{ ++ return false; ++} + #endif /* CONFIG_UCLAMP_TASK */ + + #ifdef arch_scale_freq_capacity +diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c +index baa7c050dc7bc..8fbe83b7f57ca 100644 +--- a/kernel/trace/ftrace.c ++++ b/kernel/trace/ftrace.c +@@ -6198,8 +6198,11 @@ static int referenced_filters(struct dyn_ftrace *rec) + int cnt = 0; + + for (ops = ftrace_ops_list; ops != &ftrace_list_end; ops = ops->next) { +- if (ops_references_rec(ops, rec)) +- cnt++; ++ if (ops_references_rec(ops, rec)) { ++ cnt++; ++ if (ops->flags & FTRACE_OPS_FL_SAVE_REGS) ++ rec->flags |= FTRACE_FL_REGS; ++ } + } + + return cnt; +@@ -6378,8 +6381,8 @@ void ftrace_module_enable(struct module *mod) + if (ftrace_start_up) + cnt += referenced_filters(rec); + +- /* This clears FTRACE_FL_DISABLED */ +- rec->flags = cnt; ++ rec->flags &= ~FTRACE_FL_DISABLED; ++ rec->flags += cnt; + + if (ftrace_start_up && cnt) { + int failed = __ftrace_replace_code(rec, 1); +@@ -6977,12 +6980,12 @@ void ftrace_pid_follow_fork(struct trace_array *tr, bool enable) + if (enable) { + register_trace_sched_process_fork(ftrace_pid_follow_sched_process_fork, + tr); +- register_trace_sched_process_exit(ftrace_pid_follow_sched_process_exit, ++ register_trace_sched_process_free(ftrace_pid_follow_sched_process_exit, + tr); + } else { + unregister_trace_sched_process_fork(ftrace_pid_follow_sched_process_fork, + tr); +- unregister_trace_sched_process_exit(ftrace_pid_follow_sched_process_exit, ++ unregister_trace_sched_process_free(ftrace_pid_follow_sched_process_exit, + tr); + } + } +diff --git a/kernel/trace/trace_events.c 
b/kernel/trace/trace_events.c +index 242f59e7f17d5..671f564c33c40 100644 +--- a/kernel/trace/trace_events.c ++++ b/kernel/trace/trace_events.c +@@ -538,12 +538,12 @@ void trace_event_follow_fork(struct trace_array *tr, bool enable) + if (enable) { + register_trace_prio_sched_process_fork(event_filter_pid_sched_process_fork, + tr, INT_MIN); +- register_trace_prio_sched_process_exit(event_filter_pid_sched_process_exit, ++ register_trace_prio_sched_process_free(event_filter_pid_sched_process_exit, + tr, INT_MAX); + } else { + unregister_trace_sched_process_fork(event_filter_pid_sched_process_fork, + tr); +- unregister_trace_sched_process_exit(event_filter_pid_sched_process_exit, ++ unregister_trace_sched_process_free(event_filter_pid_sched_process_exit, + tr); + } + } +diff --git a/kernel/trace/trace_hwlat.c b/kernel/trace/trace_hwlat.c +index e2be7bb7ef7e2..17e1e49e5b936 100644 +--- a/kernel/trace/trace_hwlat.c ++++ b/kernel/trace/trace_hwlat.c +@@ -283,6 +283,7 @@ static bool disable_migrate; + static void move_to_next_cpu(void) + { + struct cpumask *current_mask = &save_cpumask; ++ struct trace_array *tr = hwlat_trace; + int next_cpu; + + if (disable_migrate) +@@ -296,7 +297,7 @@ static void move_to_next_cpu(void) + goto disable; + + get_online_cpus(); +- cpumask_and(current_mask, cpu_online_mask, tracing_buffer_mask); ++ cpumask_and(current_mask, cpu_online_mask, tr->tracing_cpumask); + next_cpu = cpumask_next(smp_processor_id(), current_mask); + put_online_cpus(); + +@@ -373,7 +374,7 @@ static int start_kthread(struct trace_array *tr) + /* Just pick the first CPU on first iteration */ + current_mask = &save_cpumask; + get_online_cpus(); +- cpumask_and(current_mask, cpu_online_mask, tracing_buffer_mask); ++ cpumask_and(current_mask, cpu_online_mask, tr->tracing_cpumask); + put_online_cpus(); + next_cpu = cpumask_first(current_mask); + +diff --git a/lib/devres.c b/lib/devres.c +index 6ef51f159c54b..ca0d28727ccef 100644 +--- a/lib/devres.c ++++ b/lib/devres.c +@@ 
-119,6 +119,7 @@ __devm_ioremap_resource(struct device *dev, const struct resource *res, + { + resource_size_t size; + void __iomem *dest_ptr; ++ char *pretty_name; + + BUG_ON(!dev); + +@@ -129,7 +130,15 @@ __devm_ioremap_resource(struct device *dev, const struct resource *res, + + size = resource_size(res); + +- if (!devm_request_mem_region(dev, res->start, size, dev_name(dev))) { ++ if (res->name) ++ pretty_name = devm_kasprintf(dev, GFP_KERNEL, "%s %s", ++ dev_name(dev), res->name); ++ else ++ pretty_name = devm_kstrdup(dev, dev_name(dev), GFP_KERNEL); ++ if (!pretty_name) ++ return IOMEM_ERR_PTR(-ENOMEM); ++ ++ if (!devm_request_mem_region(dev, res->start, size, pretty_name)) { + dev_err(dev, "can't request region for resource %pR\n", res); + return IOMEM_ERR_PTR(-EBUSY); + } +diff --git a/lib/test_kmod.c b/lib/test_kmod.c +index e651c37d56dbd..eab52770070d6 100644 +--- a/lib/test_kmod.c ++++ b/lib/test_kmod.c +@@ -745,7 +745,7 @@ static int trigger_config_run_type(struct kmod_test_device *test_dev, + break; + case TEST_KMOD_FS_TYPE: + kfree_const(config->test_fs); +- config->test_driver = NULL; ++ config->test_fs = NULL; + copied = config_copy_test_fs(config, test_str, + strlen(test_str)); + break; +diff --git a/lib/test_lockup.c b/lib/test_lockup.c +index ea09ca335b214..69ef1c17edf64 100644 +--- a/lib/test_lockup.c ++++ b/lib/test_lockup.c +@@ -512,8 +512,8 @@ static int __init test_lockup_init(void) + if (test_file_path[0]) { + test_file = filp_open(test_file_path, O_RDONLY, 0); + if (IS_ERR(test_file)) { +- pr_err("cannot find file_path\n"); +- return -EINVAL; ++ pr_err("failed to open %s: %ld\n", test_file_path, PTR_ERR(test_file)); ++ return PTR_ERR(test_file); + } + test_inode = file_inode(test_file); + } else if (test_lock_inode || +diff --git a/mm/cma.c b/mm/cma.c +index 26ecff8188817..0963c0f9c5022 100644 +--- a/mm/cma.c ++++ b/mm/cma.c +@@ -93,17 +93,15 @@ static void cma_clear_bitmap(struct cma *cma, unsigned long pfn, + mutex_unlock(&cma->lock); + 
} + +-static int __init cma_activate_area(struct cma *cma) ++static void __init cma_activate_area(struct cma *cma) + { + unsigned long base_pfn = cma->base_pfn, pfn = base_pfn; + unsigned i = cma->count >> pageblock_order; + struct zone *zone; + + cma->bitmap = bitmap_zalloc(cma_bitmap_maxno(cma), GFP_KERNEL); +- if (!cma->bitmap) { +- cma->count = 0; +- return -ENOMEM; +- } ++ if (!cma->bitmap) ++ goto out_error; + + WARN_ON_ONCE(!pfn_valid(pfn)); + zone = page_zone(pfn_to_page(pfn)); +@@ -133,25 +131,22 @@ static int __init cma_activate_area(struct cma *cma) + spin_lock_init(&cma->mem_head_lock); + #endif + +- return 0; ++ return; + + not_in_zone: +- pr_err("CMA area %s could not be activated\n", cma->name); + bitmap_free(cma->bitmap); ++out_error: + cma->count = 0; +- return -EINVAL; ++ pr_err("CMA area %s could not be activated\n", cma->name); ++ return; + } + + static int __init cma_init_reserved_areas(void) + { + int i; + +- for (i = 0; i < cma_area_count; i++) { +- int ret = cma_activate_area(&cma_areas[i]); +- +- if (ret) +- return ret; +- } ++ for (i = 0; i < cma_area_count; i++) ++ cma_activate_area(&cma_areas[i]); + + return 0; + } +diff --git a/mm/hugetlb.c b/mm/hugetlb.c +index 461324757c750..e4599bc61e718 100644 +--- a/mm/hugetlb.c ++++ b/mm/hugetlb.c +@@ -3840,7 +3840,7 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma, + continue; + + ptl = huge_pte_lock(h, mm, ptep); +- if (huge_pmd_unshare(mm, &address, ptep)) { ++ if (huge_pmd_unshare(mm, vma, &address, ptep)) { + spin_unlock(ptl); + /* + * We just unmapped a page of PMDs by clearing a PUD. 
+@@ -4427,10 +4427,6 @@ vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma, + } else if (unlikely(is_hugetlb_entry_hwpoisoned(entry))) + return VM_FAULT_HWPOISON_LARGE | + VM_FAULT_SET_HINDEX(hstate_index(h)); +- } else { +- ptep = huge_pte_alloc(mm, haddr, huge_page_size(h)); +- if (!ptep) +- return VM_FAULT_OOM; + } + + /* +@@ -4907,7 +4903,7 @@ unsigned long hugetlb_change_protection(struct vm_area_struct *vma, + if (!ptep) + continue; + ptl = huge_pte_lock(h, mm, ptep); +- if (huge_pmd_unshare(mm, &address, ptep)) { ++ if (huge_pmd_unshare(mm, vma, &address, ptep)) { + pages++; + spin_unlock(ptl); + shared_pmd = true; +@@ -5201,25 +5197,21 @@ static bool vma_shareable(struct vm_area_struct *vma, unsigned long addr) + void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma, + unsigned long *start, unsigned long *end) + { +- unsigned long check_addr; ++ unsigned long a_start, a_end; + + if (!(vma->vm_flags & VM_MAYSHARE)) + return; + +- for (check_addr = *start; check_addr < *end; check_addr += PUD_SIZE) { +- unsigned long a_start = check_addr & PUD_MASK; +- unsigned long a_end = a_start + PUD_SIZE; ++ /* Extend the range to be PUD aligned for a worst case scenario */ ++ a_start = ALIGN_DOWN(*start, PUD_SIZE); ++ a_end = ALIGN(*end, PUD_SIZE); + +- /* +- * If sharing is possible, adjust start/end if necessary. 
+- */ +- if (range_in_vma(vma, a_start, a_end)) { +- if (a_start < *start) +- *start = a_start; +- if (a_end > *end) +- *end = a_end; +- } +- } ++ /* ++ * Intersect the range with the vma range, since pmd sharing won't be ++ * across vma after all ++ */ ++ *start = max(vma->vm_start, a_start); ++ *end = min(vma->vm_end, a_end); + } + + /* +@@ -5292,12 +5284,14 @@ out: + * returns: 1 successfully unmapped a shared pte page + * 0 the underlying pte page is not shared, or it is the last user + */ +-int huge_pmd_unshare(struct mm_struct *mm, unsigned long *addr, pte_t *ptep) ++int huge_pmd_unshare(struct mm_struct *mm, struct vm_area_struct *vma, ++ unsigned long *addr, pte_t *ptep) + { + pgd_t *pgd = pgd_offset(mm, *addr); + p4d_t *p4d = p4d_offset(pgd, *addr); + pud_t *pud = pud_offset(p4d, *addr); + ++ i_mmap_assert_write_locked(vma->vm_file->f_mapping); + BUG_ON(page_count(virt_to_page(ptep)) == 0); + if (page_count(virt_to_page(ptep)) == 1) + return 0; +@@ -5315,7 +5309,8 @@ pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud) + return NULL; + } + +-int huge_pmd_unshare(struct mm_struct *mm, unsigned long *addr, pte_t *ptep) ++int huge_pmd_unshare(struct mm_struct *mm, struct vm_area_struct *vma, ++ unsigned long *addr, pte_t *ptep) + { + return 0; + } +diff --git a/mm/khugepaged.c b/mm/khugepaged.c +index e9e7a5659d647..38874fe112d58 100644 +--- a/mm/khugepaged.c ++++ b/mm/khugepaged.c +@@ -1313,7 +1313,7 @@ void collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr) + { + unsigned long haddr = addr & HPAGE_PMD_MASK; + struct vm_area_struct *vma = find_vma(mm, haddr); +- struct page *hpage = NULL; ++ struct page *hpage; + pte_t *start_pte, *pte; + pmd_t *pmd, _pmd; + spinlock_t *ptl; +@@ -1333,9 +1333,17 @@ void collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr) + if (!hugepage_vma_check(vma, vma->vm_flags | VM_HUGEPAGE)) + return; + ++ hpage = find_lock_page(vma->vm_file->f_mapping, ++ linear_page_index(vma, 
haddr)); ++ if (!hpage) ++ return; ++ ++ if (!PageHead(hpage)) ++ goto drop_hpage; ++ + pmd = mm_find_pmd(mm, haddr); + if (!pmd) +- return; ++ goto drop_hpage; + + start_pte = pte_offset_map_lock(mm, pmd, haddr, &ptl); + +@@ -1354,30 +1362,11 @@ void collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr) + + page = vm_normal_page(vma, addr, *pte); + +- if (!page || !PageCompound(page)) +- goto abort; +- +- if (!hpage) { +- hpage = compound_head(page); +- /* +- * The mapping of the THP should not change. +- * +- * Note that uprobe, debugger, or MAP_PRIVATE may +- * change the page table, but the new page will +- * not pass PageCompound() check. +- */ +- if (WARN_ON(hpage->mapping != vma->vm_file->f_mapping)) +- goto abort; +- } +- + /* +- * Confirm the page maps to the correct subpage. +- * +- * Note that uprobe, debugger, or MAP_PRIVATE may change +- * the page table, but the new page will not pass +- * PageCompound() check. ++ * Note that uprobe, debugger, or MAP_PRIVATE may change the ++ * page table, but the new page will not be a subpage of hpage. + */ +- if (WARN_ON(hpage + i != page)) ++ if (hpage + i != page) + goto abort; + count++; + } +@@ -1396,21 +1385,26 @@ void collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr) + pte_unmap_unlock(start_pte, ptl); + + /* step 3: set proper refcount and mm_counters. 
*/ +- if (hpage) { ++ if (count) { + page_ref_sub(hpage, count); + add_mm_counter(vma->vm_mm, mm_counter_file(hpage), -count); + } + + /* step 4: collapse pmd */ + ptl = pmd_lock(vma->vm_mm, pmd); +- _pmd = pmdp_collapse_flush(vma, addr, pmd); ++ _pmd = pmdp_collapse_flush(vma, haddr, pmd); + spin_unlock(ptl); + mm_dec_nr_ptes(mm); + pte_free(mm, pmd_pgtable(_pmd)); ++ ++drop_hpage: ++ unlock_page(hpage); ++ put_page(hpage); + return; + + abort: + pte_unmap_unlock(start_pte, ptl); ++ goto drop_hpage; + } + + static int khugepaged_collapse_pte_mapped_thps(struct mm_slot *mm_slot) +@@ -1439,6 +1433,7 @@ out: + static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff) + { + struct vm_area_struct *vma; ++ struct mm_struct *mm; + unsigned long addr; + pmd_t *pmd, _pmd; + +@@ -1467,7 +1462,8 @@ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff) + continue; + if (vma->vm_end < addr + HPAGE_PMD_SIZE) + continue; +- pmd = mm_find_pmd(vma->vm_mm, addr); ++ mm = vma->vm_mm; ++ pmd = mm_find_pmd(mm, addr); + if (!pmd) + continue; + /* +@@ -1477,17 +1473,19 @@ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff) + * mmap_sem while holding page lock. Fault path does it in + * reverse order. Trylock is a way to avoid deadlock. 
+ */ +- if (down_write_trylock(&vma->vm_mm->mmap_sem)) { +- spinlock_t *ptl = pmd_lock(vma->vm_mm, pmd); +- /* assume page table is clear */ +- _pmd = pmdp_collapse_flush(vma, addr, pmd); +- spin_unlock(ptl); +- up_write(&vma->vm_mm->mmap_sem); +- mm_dec_nr_ptes(vma->vm_mm); +- pte_free(vma->vm_mm, pmd_pgtable(_pmd)); ++ if (down_write_trylock(&mm->mmap_sem)) { ++ if (!khugepaged_test_exit(mm)) { ++ spinlock_t *ptl = pmd_lock(mm, pmd); ++ /* assume page table is clear */ ++ _pmd = pmdp_collapse_flush(vma, addr, pmd); ++ spin_unlock(ptl); ++ mm_dec_nr_ptes(mm); ++ pte_free(mm, pmd_pgtable(_pmd)); ++ } ++ up_write(&mm->mmap_sem); + } else { + /* Try again later */ +- khugepaged_add_pte_mapped_thp(vma->vm_mm, addr); ++ khugepaged_add_pte_mapped_thp(mm, addr); + } + } + i_mmap_unlock_write(mapping); +diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c +index 744a3ea284b78..7f28c5f7e4bb8 100644 +--- a/mm/memory_hotplug.c ++++ b/mm/memory_hotplug.c +@@ -1745,7 +1745,7 @@ static int __ref try_remove_memory(int nid, u64 start, u64 size) + */ + rc = walk_memory_blocks(start, size, NULL, check_memblock_offlined_cb); + if (rc) +- goto done; ++ return rc; + + /* remove memmap entry */ + firmware_map_remove(start, start + size, "System RAM"); +@@ -1765,9 +1765,8 @@ static int __ref try_remove_memory(int nid, u64 start, u64 size) + + try_offline_node(nid); + +-done: + mem_hotplug_done(); +- return rc; ++ return 0; + } + + /** +diff --git a/mm/page_counter.c b/mm/page_counter.c +index c56db2d5e1592..b4663844c9b37 100644 +--- a/mm/page_counter.c ++++ b/mm/page_counter.c +@@ -72,7 +72,7 @@ void page_counter_charge(struct page_counter *counter, unsigned long nr_pages) + long new; + + new = atomic_long_add_return(nr_pages, &c->usage); +- propagate_protected_usage(counter, new); ++ propagate_protected_usage(c, new); + /* + * This is indeed racy, but we can live with some + * inaccuracy in the watermark. 
+@@ -116,7 +116,7 @@ bool page_counter_try_charge(struct page_counter *counter, + new = atomic_long_add_return(nr_pages, &c->usage); + if (new > c->max) { + atomic_long_sub(nr_pages, &c->usage); +- propagate_protected_usage(counter, new); ++ propagate_protected_usage(c, new); + /* + * This is racy, but we can live with some + * inaccuracy in the failcnt. +@@ -125,7 +125,7 @@ bool page_counter_try_charge(struct page_counter *counter, + *fail = c; + goto failed; + } +- propagate_protected_usage(counter, new); ++ propagate_protected_usage(c, new); + /* + * Just like with failcnt, we can live with some + * inaccuracy in the watermark. +diff --git a/mm/rmap.c b/mm/rmap.c +index f79a206b271a6..f3c5562bc5f40 100644 +--- a/mm/rmap.c ++++ b/mm/rmap.c +@@ -1458,7 +1458,7 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma, + * do this outside rmap routines. + */ + VM_BUG_ON(!(flags & TTU_RMAP_LOCKED)); +- if (huge_pmd_unshare(mm, &address, pvmw.pte)) { ++ if (huge_pmd_unshare(mm, vma, &address, pvmw.pte)) { + /* + * huge_pmd_unshare unmapped an entire PMD + * page. There is no way of knowing exactly +diff --git a/mm/shuffle.c b/mm/shuffle.c +index 44406d9977c77..dd13ab851b3ee 100644 +--- a/mm/shuffle.c ++++ b/mm/shuffle.c +@@ -58,25 +58,25 @@ module_param_call(shuffle, shuffle_store, shuffle_show, &shuffle_param, 0400); + * For two pages to be swapped in the shuffle, they must be free (on a + * 'free_area' lru), have the same order, and have the same migratetype. + */ +-static struct page * __meminit shuffle_valid_page(unsigned long pfn, int order) ++static struct page * __meminit shuffle_valid_page(struct zone *zone, ++ unsigned long pfn, int order) + { +- struct page *page; ++ struct page *page = pfn_to_online_page(pfn); + + /* + * Given we're dealing with randomly selected pfns in a zone we + * need to ask questions like... + */ + +- /* ...is the pfn even in the memmap? */ +- if (!pfn_valid_within(pfn)) ++ /* ... is the page managed by the buddy? 
*/ ++ if (!page) + return NULL; + +- /* ...is the pfn in a present section or a hole? */ +- if (!pfn_in_present_section(pfn)) ++ /* ... is the page assigned to the same zone? */ ++ if (page_zone(page) != zone) + return NULL; + + /* ...is the page free and currently on a free_area list? */ +- page = pfn_to_page(pfn); + if (!PageBuddy(page)) + return NULL; + +@@ -123,7 +123,7 @@ void __meminit __shuffle_zone(struct zone *z) + * page_j randomly selected in the span @zone_start_pfn to + * @spanned_pages. + */ +- page_i = shuffle_valid_page(i, order); ++ page_i = shuffle_valid_page(z, i, order); + if (!page_i) + continue; + +@@ -137,7 +137,7 @@ void __meminit __shuffle_zone(struct zone *z) + j = z->zone_start_pfn + + ALIGN_DOWN(get_random_long() % z->spanned_pages, + order_pages); +- page_j = shuffle_valid_page(j, order); ++ page_j = shuffle_valid_page(z, j, order); + if (page_j && page_j != page_i) + break; + } +diff --git a/net/compat.c b/net/compat.c +index 4bed96e84d9a6..32ea0a04a665c 100644 +--- a/net/compat.c ++++ b/net/compat.c +@@ -307,6 +307,7 @@ void scm_detach_fds_compat(struct msghdr *kmsg, struct scm_cookie *scm) + break; + } + /* Bump the usage count and install the file. */ ++ __receive_sock(fp[i]); + fd_install(new_fd, get_file(fp[i])); + } + +diff --git a/net/core/sock.c b/net/core/sock.c +index 7b0feeea61b6b..f97c5af8961ca 100644 +--- a/net/core/sock.c ++++ b/net/core/sock.c +@@ -2753,6 +2753,27 @@ int sock_no_mmap(struct file *file, struct socket *sock, struct vm_area_struct * + } + EXPORT_SYMBOL(sock_no_mmap); + ++/* ++ * When a file is received (via SCM_RIGHTS, etc), we must bump the ++ * various sock-based usage counts. ++ */ ++void __receive_sock(struct file *file) ++{ ++ struct socket *sock; ++ int error; ++ ++ /* ++ * The resulting value of "error" is ignored here since we only ++ * need to take action when the file is a socket and testing ++ * "sock" for NULL is sufficient. 
++ */ ++ sock = sock_from_file(file, &error); ++ if (sock) { ++ sock_update_netprioidx(&sock->sk->sk_cgrp_data); ++ sock_update_classid(&sock->sk->sk_cgrp_data); ++ } ++} ++ + ssize_t sock_no_sendpage(struct socket *sock, struct page *page, int offset, size_t size, int flags) + { + ssize_t res; +diff --git a/net/mac80211/sta_info.c b/net/mac80211/sta_info.c +index cd8487bc6fc2e..032ec0303a1f7 100644 +--- a/net/mac80211/sta_info.c ++++ b/net/mac80211/sta_info.c +@@ -1050,7 +1050,7 @@ static void __sta_info_destroy_part2(struct sta_info *sta) + might_sleep(); + lockdep_assert_held(&local->sta_mtx); + +- while (sta->sta_state == IEEE80211_STA_AUTHORIZED) { ++ if (sta->sta_state == IEEE80211_STA_AUTHORIZED) { + ret = sta_info_move_state(sta, IEEE80211_STA_ASSOC); + WARN_ON_ONCE(ret); + } +diff --git a/scripts/recordmcount.c b/scripts/recordmcount.c +index e59022b3f1254..b9c2ee7ab43fa 100644 +--- a/scripts/recordmcount.c ++++ b/scripts/recordmcount.c +@@ -42,6 +42,8 @@ + #define R_ARM_THM_CALL 10 + #define R_ARM_CALL 28 + ++#define R_AARCH64_CALL26 283 ++ + static int fd_map; /* File descriptor for file being modified. */ + static int mmap_failed; /* Boolean flag. 
*/ + static char gpfx; /* prefix for global symbol name (sometimes '_') */ +diff --git a/security/integrity/ima/ima_policy.c b/security/integrity/ima/ima_policy.c +index 3e3e568c81309..a59bf2f5b2d4f 100644 +--- a/security/integrity/ima/ima_policy.c ++++ b/security/integrity/ima/ima_policy.c +@@ -1035,6 +1035,11 @@ static bool ima_validate_rule(struct ima_rule_entry *entry) + return false; + } + ++ /* Ensure that combinations of flags are compatible with each other */ ++ if (entry->flags & IMA_CHECK_BLACKLIST && ++ !(entry->flags & IMA_MODSIG_ALLOWED)) ++ return false; ++ + return true; + } + +@@ -1371,9 +1376,17 @@ static int ima_parse_rule(char *rule, struct ima_rule_entry *entry) + result = -EINVAL; + break; + case Opt_appraise_flag: ++ if (entry->action != APPRAISE) { ++ result = -EINVAL; ++ break; ++ } ++ + ima_log_string(ab, "appraise_flag", args[0].from); +- if (strstr(args[0].from, "blacklist")) ++ if (IS_ENABLED(CONFIG_IMA_APPRAISE_MODSIG) && ++ strstr(args[0].from, "blacklist")) + entry->flags |= IMA_CHECK_BLACKLIST; ++ else ++ result = -EINVAL; + break; + case Opt_permit_directio: + entry->flags |= IMA_PERMIT_DIRECTIO; +diff --git a/sound/pci/echoaudio/echoaudio.c b/sound/pci/echoaudio/echoaudio.c +index 0941a7a17623a..456219a665a79 100644 +--- a/sound/pci/echoaudio/echoaudio.c ++++ b/sound/pci/echoaudio/echoaudio.c +@@ -2158,7 +2158,6 @@ static int snd_echo_resume(struct device *dev) + if (err < 0) { + kfree(commpage_bak); + dev_err(dev, "resume init_hw err=%d\n", err); +- snd_echo_free(chip); + return err; + } + +@@ -2185,7 +2184,6 @@ static int snd_echo_resume(struct device *dev) + if (request_irq(pci->irq, snd_echo_interrupt, IRQF_SHARED, + KBUILD_MODNAME, chip)) { + dev_err(chip->card->dev, "cannot grab irq\n"); +- snd_echo_free(chip); + return -EBUSY; + } + chip->irq = pci->irq; +diff --git a/sound/soc/tegra/tegra_alc5632.c b/sound/soc/tegra/tegra_alc5632.c +index ec39ecba1e8b8..2839c6cb8c386 100644 +--- a/sound/soc/tegra/tegra_alc5632.c ++++ 
b/sound/soc/tegra/tegra_alc5632.c +@@ -205,13 +205,11 @@ static int tegra_alc5632_probe(struct platform_device *pdev) + if (ret) { + dev_err(&pdev->dev, "snd_soc_register_card failed (%d)\n", + ret); +- goto err_fini_utils; ++ goto err_put_cpu_of_node; + } + + return 0; + +-err_fini_utils: +- tegra_asoc_utils_fini(&alc5632->util_data); + err_put_cpu_of_node: + of_node_put(tegra_alc5632_dai.cpus->of_node); + tegra_alc5632_dai.cpus->of_node = NULL; +@@ -226,12 +224,9 @@ err: + static int tegra_alc5632_remove(struct platform_device *pdev) + { + struct snd_soc_card *card = platform_get_drvdata(pdev); +- struct tegra_alc5632 *machine = snd_soc_card_get_drvdata(card); + + snd_soc_unregister_card(card); + +- tegra_asoc_utils_fini(&machine->util_data); +- + of_node_put(tegra_alc5632_dai.cpus->of_node); + tegra_alc5632_dai.cpus->of_node = NULL; + tegra_alc5632_dai.platforms->of_node = NULL; +diff --git a/sound/soc/tegra/tegra_asoc_utils.c b/sound/soc/tegra/tegra_asoc_utils.c +index 536a578e95126..587f62a288d14 100644 +--- a/sound/soc/tegra/tegra_asoc_utils.c ++++ b/sound/soc/tegra/tegra_asoc_utils.c +@@ -60,8 +60,6 @@ int tegra_asoc_utils_set_rate(struct tegra_asoc_utils_data *data, int srate, + data->set_mclk = 0; + + clk_disable_unprepare(data->clk_cdev1); +- clk_disable_unprepare(data->clk_pll_a_out0); +- clk_disable_unprepare(data->clk_pll_a); + + err = clk_set_rate(data->clk_pll_a, new_baseclock); + if (err) { +@@ -77,18 +75,6 @@ int tegra_asoc_utils_set_rate(struct tegra_asoc_utils_data *data, int srate, + + /* Don't set cdev1/extern1 rate; it's locked to pll_a_out0 */ + +- err = clk_prepare_enable(data->clk_pll_a); +- if (err) { +- dev_err(data->dev, "Can't enable pll_a: %d\n", err); +- return err; +- } +- +- err = clk_prepare_enable(data->clk_pll_a_out0); +- if (err) { +- dev_err(data->dev, "Can't enable pll_a_out0: %d\n", err); +- return err; +- } +- + err = clk_prepare_enable(data->clk_cdev1); + if (err) { + dev_err(data->dev, "Can't enable cdev1: %d\n", err); +@@ 
-109,8 +95,6 @@ int tegra_asoc_utils_set_ac97_rate(struct tegra_asoc_utils_data *data) + int err; + + clk_disable_unprepare(data->clk_cdev1); +- clk_disable_unprepare(data->clk_pll_a_out0); +- clk_disable_unprepare(data->clk_pll_a); + + /* + * AC97 rate is fixed at 24.576MHz and is used for both the host +@@ -130,18 +114,6 @@ int tegra_asoc_utils_set_ac97_rate(struct tegra_asoc_utils_data *data) + + /* Don't set cdev1/extern1 rate; it's locked to pll_a_out0 */ + +- err = clk_prepare_enable(data->clk_pll_a); +- if (err) { +- dev_err(data->dev, "Can't enable pll_a: %d\n", err); +- return err; +- } +- +- err = clk_prepare_enable(data->clk_pll_a_out0); +- if (err) { +- dev_err(data->dev, "Can't enable pll_a_out0: %d\n", err); +- return err; +- } +- + err = clk_prepare_enable(data->clk_cdev1); + if (err) { + dev_err(data->dev, "Can't enable cdev1: %d\n", err); +@@ -158,6 +130,7 @@ EXPORT_SYMBOL_GPL(tegra_asoc_utils_set_ac97_rate); + int tegra_asoc_utils_init(struct tegra_asoc_utils_data *data, + struct device *dev) + { ++ struct clk *clk_out_1, *clk_extern1; + int ret; + + data->dev = dev; +@@ -175,52 +148,78 @@ int tegra_asoc_utils_init(struct tegra_asoc_utils_data *data, + return -EINVAL; + } + +- data->clk_pll_a = clk_get(dev, "pll_a"); ++ data->clk_pll_a = devm_clk_get(dev, "pll_a"); + if (IS_ERR(data->clk_pll_a)) { + dev_err(data->dev, "Can't retrieve clk pll_a\n"); +- ret = PTR_ERR(data->clk_pll_a); +- goto err; ++ return PTR_ERR(data->clk_pll_a); + } + +- data->clk_pll_a_out0 = clk_get(dev, "pll_a_out0"); ++ data->clk_pll_a_out0 = devm_clk_get(dev, "pll_a_out0"); + if (IS_ERR(data->clk_pll_a_out0)) { + dev_err(data->dev, "Can't retrieve clk pll_a_out0\n"); +- ret = PTR_ERR(data->clk_pll_a_out0); +- goto err_put_pll_a; ++ return PTR_ERR(data->clk_pll_a_out0); + } + +- data->clk_cdev1 = clk_get(dev, "mclk"); ++ data->clk_cdev1 = devm_clk_get(dev, "mclk"); + if (IS_ERR(data->clk_cdev1)) { + dev_err(data->dev, "Can't retrieve clk cdev1\n"); +- ret = 
PTR_ERR(data->clk_cdev1); +- goto err_put_pll_a_out0; ++ return PTR_ERR(data->clk_cdev1); + } + +- ret = tegra_asoc_utils_set_rate(data, 44100, 256 * 44100); +- if (ret) +- goto err_put_cdev1; ++ /* ++ * If clock parents are not set in DT, configure here to use clk_out_1 ++ * as mclk and extern1 as parent for Tegra30 and higher. ++ */ ++ if (!of_find_property(dev->of_node, "assigned-clock-parents", NULL) && ++ data->soc > TEGRA_ASOC_UTILS_SOC_TEGRA20) { ++ dev_warn(data->dev, ++ "Configuring clocks for a legacy device-tree\n"); ++ dev_warn(data->dev, ++ "Please update DT to use assigned-clock-parents\n"); ++ clk_extern1 = devm_clk_get(dev, "extern1"); ++ if (IS_ERR(clk_extern1)) { ++ dev_err(data->dev, "Can't retrieve clk extern1\n"); ++ return PTR_ERR(clk_extern1); ++ } ++ ++ ret = clk_set_parent(clk_extern1, data->clk_pll_a_out0); ++ if (ret < 0) { ++ dev_err(data->dev, ++ "Set parent failed for clk extern1\n"); ++ return ret; ++ } ++ ++ clk_out_1 = devm_clk_get(dev, "pmc_clk_out_1"); ++ if (IS_ERR(clk_out_1)) { ++ dev_err(data->dev, "Can't retrieve pmc_clk_out_1\n"); ++ return PTR_ERR(clk_out_1); ++ } ++ ++ ret = clk_set_parent(clk_out_1, clk_extern1); ++ if (ret < 0) { ++ dev_err(data->dev, ++ "Set parent failed for pmc_clk_out_1\n"); ++ return ret; ++ } ++ ++ data->clk_cdev1 = clk_out_1; ++ } + +- return 0; ++ /* ++ * FIXME: There is some unknown dependency between audio mclk disable ++ * and suspend-resume functionality on Tegra30, although audio mclk is ++ * only needed for audio. 
++ */ ++ ret = clk_prepare_enable(data->clk_cdev1); ++ if (ret) { ++ dev_err(data->dev, "Can't enable cdev1: %d\n", ret); ++ return ret; ++ } + +-err_put_cdev1: +- clk_put(data->clk_cdev1); +-err_put_pll_a_out0: +- clk_put(data->clk_pll_a_out0); +-err_put_pll_a: +- clk_put(data->clk_pll_a); +-err: +- return ret; ++ return 0; + } + EXPORT_SYMBOL_GPL(tegra_asoc_utils_init); + +-void tegra_asoc_utils_fini(struct tegra_asoc_utils_data *data) +-{ +- clk_put(data->clk_cdev1); +- clk_put(data->clk_pll_a_out0); +- clk_put(data->clk_pll_a); +-} +-EXPORT_SYMBOL_GPL(tegra_asoc_utils_fini); +- + MODULE_AUTHOR("Stephen Warren "); + MODULE_DESCRIPTION("Tegra ASoC utility code"); + MODULE_LICENSE("GPL"); +diff --git a/sound/soc/tegra/tegra_asoc_utils.h b/sound/soc/tegra/tegra_asoc_utils.h +index 0c13818dee759..a34439587d59f 100644 +--- a/sound/soc/tegra/tegra_asoc_utils.h ++++ b/sound/soc/tegra/tegra_asoc_utils.h +@@ -34,6 +34,5 @@ int tegra_asoc_utils_set_rate(struct tegra_asoc_utils_data *data, int srate, + int tegra_asoc_utils_set_ac97_rate(struct tegra_asoc_utils_data *data); + int tegra_asoc_utils_init(struct tegra_asoc_utils_data *data, + struct device *dev); +-void tegra_asoc_utils_fini(struct tegra_asoc_utils_data *data); + + #endif +diff --git a/sound/soc/tegra/tegra_max98090.c b/sound/soc/tegra/tegra_max98090.c +index d800b62b36f83..ec9050516cd7e 100644 +--- a/sound/soc/tegra/tegra_max98090.c ++++ b/sound/soc/tegra/tegra_max98090.c +@@ -218,19 +218,18 @@ static int tegra_max98090_probe(struct platform_device *pdev) + + ret = snd_soc_of_parse_card_name(card, "nvidia,model"); + if (ret) +- goto err; ++ return ret; + + ret = snd_soc_of_parse_audio_routing(card, "nvidia,audio-routing"); + if (ret) +- goto err; ++ return ret; + + tegra_max98090_dai.codecs->of_node = of_parse_phandle(np, + "nvidia,audio-codec", 0); + if (!tegra_max98090_dai.codecs->of_node) { + dev_err(&pdev->dev, + "Property 'nvidia,audio-codec' missing or invalid\n"); +- ret = -EINVAL; +- goto err; ++ 
return -EINVAL; + } + + tegra_max98090_dai.cpus->of_node = of_parse_phandle(np, +@@ -238,40 +237,31 @@ static int tegra_max98090_probe(struct platform_device *pdev) + if (!tegra_max98090_dai.cpus->of_node) { + dev_err(&pdev->dev, + "Property 'nvidia,i2s-controller' missing or invalid\n"); +- ret = -EINVAL; +- goto err; ++ return -EINVAL; + } + + tegra_max98090_dai.platforms->of_node = tegra_max98090_dai.cpus->of_node; + + ret = tegra_asoc_utils_init(&machine->util_data, &pdev->dev); + if (ret) +- goto err; ++ return ret; + + ret = snd_soc_register_card(card); + if (ret) { + dev_err(&pdev->dev, "snd_soc_register_card failed (%d)\n", + ret); +- goto err_fini_utils; ++ return ret; + } + + return 0; +- +-err_fini_utils: +- tegra_asoc_utils_fini(&machine->util_data); +-err: +- return ret; + } + + static int tegra_max98090_remove(struct platform_device *pdev) + { + struct snd_soc_card *card = platform_get_drvdata(pdev); +- struct tegra_max98090 *machine = snd_soc_card_get_drvdata(card); + + snd_soc_unregister_card(card); + +- tegra_asoc_utils_fini(&machine->util_data); +- + return 0; + } + +diff --git a/sound/soc/tegra/tegra_rt5640.c b/sound/soc/tegra/tegra_rt5640.c +index 9878bc3eb89e9..201d132731f9b 100644 +--- a/sound/soc/tegra/tegra_rt5640.c ++++ b/sound/soc/tegra/tegra_rt5640.c +@@ -164,19 +164,18 @@ static int tegra_rt5640_probe(struct platform_device *pdev) + + ret = snd_soc_of_parse_card_name(card, "nvidia,model"); + if (ret) +- goto err; ++ return ret; + + ret = snd_soc_of_parse_audio_routing(card, "nvidia,audio-routing"); + if (ret) +- goto err; ++ return ret; + + tegra_rt5640_dai.codecs->of_node = of_parse_phandle(np, + "nvidia,audio-codec", 0); + if (!tegra_rt5640_dai.codecs->of_node) { + dev_err(&pdev->dev, + "Property 'nvidia,audio-codec' missing or invalid\n"); +- ret = -EINVAL; +- goto err; ++ return -EINVAL; + } + + tegra_rt5640_dai.cpus->of_node = of_parse_phandle(np, +@@ -184,40 +183,31 @@ static int tegra_rt5640_probe(struct platform_device *pdev) + 
if (!tegra_rt5640_dai.cpus->of_node) { + dev_err(&pdev->dev, + "Property 'nvidia,i2s-controller' missing or invalid\n"); +- ret = -EINVAL; +- goto err; ++ return -EINVAL; + } + + tegra_rt5640_dai.platforms->of_node = tegra_rt5640_dai.cpus->of_node; + + ret = tegra_asoc_utils_init(&machine->util_data, &pdev->dev); + if (ret) +- goto err; ++ return ret; + + ret = snd_soc_register_card(card); + if (ret) { + dev_err(&pdev->dev, "snd_soc_register_card failed (%d)\n", + ret); +- goto err_fini_utils; ++ return ret; + } + + return 0; +- +-err_fini_utils: +- tegra_asoc_utils_fini(&machine->util_data); +-err: +- return ret; + } + + static int tegra_rt5640_remove(struct platform_device *pdev) + { + struct snd_soc_card *card = platform_get_drvdata(pdev); +- struct tegra_rt5640 *machine = snd_soc_card_get_drvdata(card); + + snd_soc_unregister_card(card); + +- tegra_asoc_utils_fini(&machine->util_data); +- + return 0; + } + +diff --git a/sound/soc/tegra/tegra_rt5677.c b/sound/soc/tegra/tegra_rt5677.c +index 5821313db977a..8f71e21f6ee97 100644 +--- a/sound/soc/tegra/tegra_rt5677.c ++++ b/sound/soc/tegra/tegra_rt5677.c +@@ -270,13 +270,11 @@ static int tegra_rt5677_probe(struct platform_device *pdev) + if (ret) { + dev_err(&pdev->dev, "snd_soc_register_card failed (%d)\n", + ret); +- goto err_fini_utils; ++ goto err_put_cpu_of_node; + } + + return 0; + +-err_fini_utils: +- tegra_asoc_utils_fini(&machine->util_data); + err_put_cpu_of_node: + of_node_put(tegra_rt5677_dai.cpus->of_node); + tegra_rt5677_dai.cpus->of_node = NULL; +@@ -291,12 +289,9 @@ err: + static int tegra_rt5677_remove(struct platform_device *pdev) + { + struct snd_soc_card *card = platform_get_drvdata(pdev); +- struct tegra_rt5677 *machine = snd_soc_card_get_drvdata(card); + + snd_soc_unregister_card(card); + +- tegra_asoc_utils_fini(&machine->util_data); +- + tegra_rt5677_dai.platforms->of_node = NULL; + of_node_put(tegra_rt5677_dai.codecs->of_node); + tegra_rt5677_dai.codecs->of_node = NULL; +diff --git 
a/sound/soc/tegra/tegra_sgtl5000.c b/sound/soc/tegra/tegra_sgtl5000.c +index dc411ba2e36d5..692fcc3d7d6e6 100644 +--- a/sound/soc/tegra/tegra_sgtl5000.c ++++ b/sound/soc/tegra/tegra_sgtl5000.c +@@ -156,13 +156,11 @@ static int tegra_sgtl5000_driver_probe(struct platform_device *pdev) + if (ret) { + dev_err(&pdev->dev, "snd_soc_register_card failed (%d)\n", + ret); +- goto err_fini_utils; ++ goto err_put_cpu_of_node; + } + + return 0; + +-err_fini_utils: +- tegra_asoc_utils_fini(&machine->util_data); + err_put_cpu_of_node: + of_node_put(tegra_sgtl5000_dai.cpus->of_node); + tegra_sgtl5000_dai.cpus->of_node = NULL; +@@ -177,13 +175,10 @@ err: + static int tegra_sgtl5000_driver_remove(struct platform_device *pdev) + { + struct snd_soc_card *card = platform_get_drvdata(pdev); +- struct tegra_sgtl5000 *machine = snd_soc_card_get_drvdata(card); + int ret; + + ret = snd_soc_unregister_card(card); + +- tegra_asoc_utils_fini(&machine->util_data); +- + of_node_put(tegra_sgtl5000_dai.cpus->of_node); + tegra_sgtl5000_dai.cpus->of_node = NULL; + tegra_sgtl5000_dai.platforms->of_node = NULL; +diff --git a/sound/soc/tegra/tegra_wm8753.c b/sound/soc/tegra/tegra_wm8753.c +index 0d653a605358c..2ee2ed190872d 100644 +--- a/sound/soc/tegra/tegra_wm8753.c ++++ b/sound/soc/tegra/tegra_wm8753.c +@@ -127,19 +127,18 @@ static int tegra_wm8753_driver_probe(struct platform_device *pdev) + + ret = snd_soc_of_parse_card_name(card, "nvidia,model"); + if (ret) +- goto err; ++ return ret; + + ret = snd_soc_of_parse_audio_routing(card, "nvidia,audio-routing"); + if (ret) +- goto err; ++ return ret; + + tegra_wm8753_dai.codecs->of_node = of_parse_phandle(np, + "nvidia,audio-codec", 0); + if (!tegra_wm8753_dai.codecs->of_node) { + dev_err(&pdev->dev, + "Property 'nvidia,audio-codec' missing or invalid\n"); +- ret = -EINVAL; +- goto err; ++ return -EINVAL; + } + + tegra_wm8753_dai.cpus->of_node = of_parse_phandle(np, +@@ -147,40 +146,31 @@ static int tegra_wm8753_driver_probe(struct platform_device 
*pdev) + if (!tegra_wm8753_dai.cpus->of_node) { + dev_err(&pdev->dev, + "Property 'nvidia,i2s-controller' missing or invalid\n"); +- ret = -EINVAL; +- goto err; ++ return -EINVAL; + } + + tegra_wm8753_dai.platforms->of_node = tegra_wm8753_dai.cpus->of_node; + + ret = tegra_asoc_utils_init(&machine->util_data, &pdev->dev); + if (ret) +- goto err; ++ return ret; + + ret = snd_soc_register_card(card); + if (ret) { + dev_err(&pdev->dev, "snd_soc_register_card failed (%d)\n", + ret); +- goto err_fini_utils; ++ return ret; + } + + return 0; +- +-err_fini_utils: +- tegra_asoc_utils_fini(&machine->util_data); +-err: +- return ret; + } + + static int tegra_wm8753_driver_remove(struct platform_device *pdev) + { + struct snd_soc_card *card = platform_get_drvdata(pdev); +- struct tegra_wm8753 *machine = snd_soc_card_get_drvdata(card); + + snd_soc_unregister_card(card); + +- tegra_asoc_utils_fini(&machine->util_data); +- + return 0; + } + +diff --git a/sound/soc/tegra/tegra_wm8903.c b/sound/soc/tegra/tegra_wm8903.c +index 3aca354f9e08b..7bf159965c4dd 100644 +--- a/sound/soc/tegra/tegra_wm8903.c ++++ b/sound/soc/tegra/tegra_wm8903.c +@@ -323,19 +323,18 @@ static int tegra_wm8903_driver_probe(struct platform_device *pdev) + + ret = snd_soc_of_parse_card_name(card, "nvidia,model"); + if (ret) +- goto err; ++ return ret; + + ret = snd_soc_of_parse_audio_routing(card, "nvidia,audio-routing"); + if (ret) +- goto err; ++ return ret; + + tegra_wm8903_dai.codecs->of_node = of_parse_phandle(np, + "nvidia,audio-codec", 0); + if (!tegra_wm8903_dai.codecs->of_node) { + dev_err(&pdev->dev, + "Property 'nvidia,audio-codec' missing or invalid\n"); +- ret = -EINVAL; +- goto err; ++ return -EINVAL; + } + + tegra_wm8903_dai.cpus->of_node = of_parse_phandle(np, +@@ -343,40 +342,31 @@ static int tegra_wm8903_driver_probe(struct platform_device *pdev) + if (!tegra_wm8903_dai.cpus->of_node) { + dev_err(&pdev->dev, + "Property 'nvidia,i2s-controller' missing or invalid\n"); +- ret = -EINVAL; +- goto 
err; ++ return -EINVAL; + } + + tegra_wm8903_dai.platforms->of_node = tegra_wm8903_dai.cpus->of_node; + + ret = tegra_asoc_utils_init(&machine->util_data, &pdev->dev); + if (ret) +- goto err; ++ return ret; + + ret = snd_soc_register_card(card); + if (ret) { + dev_err(&pdev->dev, "snd_soc_register_card failed (%d)\n", + ret); +- goto err_fini_utils; ++ return ret; + } + + return 0; +- +-err_fini_utils: +- tegra_asoc_utils_fini(&machine->util_data); +-err: +- return ret; + } + + static int tegra_wm8903_driver_remove(struct platform_device *pdev) + { + struct snd_soc_card *card = platform_get_drvdata(pdev); +- struct tegra_wm8903 *machine = snd_soc_card_get_drvdata(card); + + snd_soc_unregister_card(card); + +- tegra_asoc_utils_fini(&machine->util_data); +- + return 0; + } + +diff --git a/sound/soc/tegra/tegra_wm9712.c b/sound/soc/tegra/tegra_wm9712.c +index b85bd9f890737..726edfa21a29d 100644 +--- a/sound/soc/tegra/tegra_wm9712.c ++++ b/sound/soc/tegra/tegra_wm9712.c +@@ -113,19 +113,17 @@ static int tegra_wm9712_driver_probe(struct platform_device *pdev) + + ret = tegra_asoc_utils_set_ac97_rate(&machine->util_data); + if (ret) +- goto asoc_utils_fini; ++ goto codec_unregister; + + ret = snd_soc_register_card(card); + if (ret) { + dev_err(&pdev->dev, "snd_soc_register_card failed (%d)\n", + ret); +- goto asoc_utils_fini; ++ goto codec_unregister; + } + + return 0; + +-asoc_utils_fini: +- tegra_asoc_utils_fini(&machine->util_data); + codec_unregister: + platform_device_del(machine->codec); + codec_put: +@@ -140,8 +138,6 @@ static int tegra_wm9712_driver_remove(struct platform_device *pdev) + + snd_soc_unregister_card(card); + +- tegra_asoc_utils_fini(&machine->util_data); +- + platform_device_unregister(machine->codec); + + return 0; +diff --git a/sound/soc/tegra/trimslice.c b/sound/soc/tegra/trimslice.c +index f9834afaa2e8b..6dca6836aa048 100644 +--- a/sound/soc/tegra/trimslice.c ++++ b/sound/soc/tegra/trimslice.c +@@ -125,8 +125,7 @@ static int 
tegra_snd_trimslice_probe(struct platform_device *pdev) + if (!trimslice_tlv320aic23_dai.codecs->of_node) { + dev_err(&pdev->dev, + "Property 'nvidia,audio-codec' missing or invalid\n"); +- ret = -EINVAL; +- goto err; ++ return -EINVAL; + } + + trimslice_tlv320aic23_dai.cpus->of_node = of_parse_phandle(np, +@@ -134,8 +133,7 @@ static int tegra_snd_trimslice_probe(struct platform_device *pdev) + if (!trimslice_tlv320aic23_dai.cpus->of_node) { + dev_err(&pdev->dev, + "Property 'nvidia,i2s-controller' missing or invalid\n"); +- ret = -EINVAL; +- goto err; ++ return -EINVAL; + } + + trimslice_tlv320aic23_dai.platforms->of_node = +@@ -143,32 +141,24 @@ static int tegra_snd_trimslice_probe(struct platform_device *pdev) + + ret = tegra_asoc_utils_init(&trimslice->util_data, &pdev->dev); + if (ret) +- goto err; ++ return ret; + + ret = snd_soc_register_card(card); + if (ret) { + dev_err(&pdev->dev, "snd_soc_register_card failed (%d)\n", + ret); +- goto err_fini_utils; ++ return ret; + } + + return 0; +- +-err_fini_utils: +- tegra_asoc_utils_fini(&trimslice->util_data); +-err: +- return ret; + } + + static int tegra_snd_trimslice_remove(struct platform_device *pdev) + { + struct snd_soc_card *card = platform_get_drvdata(pdev); +- struct tegra_trimslice *trimslice = snd_soc_card_get_drvdata(card); + + snd_soc_unregister_card(card); + +- tegra_asoc_utils_fini(&trimslice->util_data); +- + return 0; + } + +diff --git a/tools/build/Makefile.feature b/tools/build/Makefile.feature +index 3e0c019ef2971..82d6c43333fbd 100644 +--- a/tools/build/Makefile.feature ++++ b/tools/build/Makefile.feature +@@ -8,7 +8,7 @@ endif + + feature_check = $(eval $(feature_check_code)) + define feature_check_code +- feature-$(1) := $(shell $(MAKE) OUTPUT=$(OUTPUT_FEATURES) CFLAGS="$(EXTRA_CFLAGS) $(FEATURE_CHECK_CFLAGS-$(1))" CXXFLAGS="$(EXTRA_CXXFLAGS) $(FEATURE_CHECK_CXXFLAGS-$(1))" LDFLAGS="$(LDFLAGS) $(FEATURE_CHECK_LDFLAGS-$(1))" -C $(feature_dir) $(OUTPUT_FEATURES)test-$1.bin >/dev/null 
2>/dev/null && echo 1 || echo 0) ++ feature-$(1) := $(shell $(MAKE) OUTPUT=$(OUTPUT_FEATURES) CC="$(CC)" CXX="$(CXX)" CFLAGS="$(EXTRA_CFLAGS) $(FEATURE_CHECK_CFLAGS-$(1))" CXXFLAGS="$(EXTRA_CXXFLAGS) $(FEATURE_CHECK_CXXFLAGS-$(1))" LDFLAGS="$(LDFLAGS) $(FEATURE_CHECK_LDFLAGS-$(1))" -C $(feature_dir) $(OUTPUT_FEATURES)test-$1.bin >/dev/null 2>/dev/null && echo 1 || echo 0) + endef + + feature_set = $(eval $(feature_set_code)) +diff --git a/tools/build/feature/Makefile b/tools/build/feature/Makefile +index 92012381393ad..ef4ca6e408427 100644 +--- a/tools/build/feature/Makefile ++++ b/tools/build/feature/Makefile +@@ -73,8 +73,6 @@ FILES= \ + + FILES := $(addprefix $(OUTPUT),$(FILES)) + +-CC ?= $(CROSS_COMPILE)gcc +-CXX ?= $(CROSS_COMPILE)g++ + PKG_CONFIG ?= $(CROSS_COMPILE)pkg-config + LLVM_CONFIG ?= llvm-config + CLANG ?= clang +diff --git a/tools/perf/bench/mem-functions.c b/tools/perf/bench/mem-functions.c +index 9235b76501be8..19d45c377ac18 100644 +--- a/tools/perf/bench/mem-functions.c ++++ b/tools/perf/bench/mem-functions.c +@@ -223,12 +223,8 @@ static int bench_mem_common(int argc, const char **argv, struct bench_mem_info * + return 0; + } + +-static u64 do_memcpy_cycles(const struct function *r, size_t size, void *src, void *dst) ++static void memcpy_prefault(memcpy_t fn, size_t size, void *src, void *dst) + { +- u64 cycle_start = 0ULL, cycle_end = 0ULL; +- memcpy_t fn = r->fn.memcpy; +- int i; +- + /* Make sure to always prefault zero pages even if MMAP_THRESH is crossed: */ + memset(src, 0, size); + +@@ -237,6 +233,15 @@ static u64 do_memcpy_cycles(const struct function *r, size_t size, void *src, vo + * to not measure page fault overhead: + */ + fn(dst, src, size); ++} ++ ++static u64 do_memcpy_cycles(const struct function *r, size_t size, void *src, void *dst) ++{ ++ u64 cycle_start = 0ULL, cycle_end = 0ULL; ++ memcpy_t fn = r->fn.memcpy; ++ int i; ++ ++ memcpy_prefault(fn, size, src, dst); + + cycle_start = get_cycles(); + for (i = 0; i < nr_loops; ++i) 
+@@ -252,11 +257,7 @@ static double do_memcpy_gettimeofday(const struct function *r, size_t size, void + memcpy_t fn = r->fn.memcpy; + int i; + +- /* +- * We prefault the freshly allocated memory range here, +- * to not measure page fault overhead: +- */ +- fn(dst, src, size); ++ memcpy_prefault(fn, size, src, dst); + + BUG_ON(gettimeofday(&tv_start, NULL)); + for (i = 0; i < nr_loops; ++i) +diff --git a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c +index f8ccfd6be0eee..7ffcbd6fcd1ae 100644 +--- a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c ++++ b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c +@@ -1164,6 +1164,7 @@ static int intel_pt_walk_fup(struct intel_pt_decoder *decoder) + return 0; + if (err == -EAGAIN || + intel_pt_fup_with_nlip(decoder, &intel_pt_insn, ip, err)) { ++ decoder->pkt_state = INTEL_PT_STATE_IN_SYNC; + if (intel_pt_fup_event(decoder)) + return 0; + return -EAGAIN; +@@ -1942,17 +1943,13 @@ next: + } + if (decoder->set_fup_mwait) + no_tip = true; ++ if (no_tip) ++ decoder->pkt_state = INTEL_PT_STATE_FUP_NO_TIP; ++ else ++ decoder->pkt_state = INTEL_PT_STATE_FUP; + err = intel_pt_walk_fup(decoder); +- if (err != -EAGAIN) { +- if (err) +- return err; +- if (no_tip) +- decoder->pkt_state = +- INTEL_PT_STATE_FUP_NO_TIP; +- else +- decoder->pkt_state = INTEL_PT_STATE_FUP; +- return 0; +- } ++ if (err != -EAGAIN) ++ return err; + if (no_tip) { + no_tip = false; + break; +@@ -1980,8 +1977,10 @@ next: + * possibility of another CBR change that gets caught up + * in the PSB+. 
+ */ +- if (decoder->cbr != decoder->cbr_seen) ++ if (decoder->cbr != decoder->cbr_seen) { ++ decoder->state.type = 0; + return 0; ++ } + break; + + case INTEL_PT_PIP: +@@ -2022,8 +2021,10 @@ next: + + case INTEL_PT_CBR: + intel_pt_calc_cbr(decoder); +- if (decoder->cbr != decoder->cbr_seen) ++ if (decoder->cbr != decoder->cbr_seen) { ++ decoder->state.type = 0; + return 0; ++ } + break; + + case INTEL_PT_MODE_EXEC: +@@ -2599,15 +2600,11 @@ const struct intel_pt_state *intel_pt_decode(struct intel_pt_decoder *decoder) + err = intel_pt_walk_tip(decoder); + break; + case INTEL_PT_STATE_FUP: +- decoder->pkt_state = INTEL_PT_STATE_IN_SYNC; + err = intel_pt_walk_fup(decoder); + if (err == -EAGAIN) + err = intel_pt_walk_fup_tip(decoder); +- else if (!err) +- decoder->pkt_state = INTEL_PT_STATE_FUP; + break; + case INTEL_PT_STATE_FUP_NO_TIP: +- decoder->pkt_state = INTEL_PT_STATE_IN_SYNC; + err = intel_pt_walk_fup(decoder); + if (err == -EAGAIN) + err = intel_pt_walk_trace(decoder); +diff --git a/tools/perf/util/probe-finder.c b/tools/perf/util/probe-finder.c +index 55924255c5355..659024342e9ac 100644 +--- a/tools/perf/util/probe-finder.c ++++ b/tools/perf/util/probe-finder.c +@@ -1408,6 +1408,9 @@ static int fill_empty_trace_arg(struct perf_probe_event *pev, + char *type; + int i, j, ret; + ++ if (!ntevs) ++ return -ENOENT; ++ + for (i = 0; i < pev->nargs; i++) { + type = NULL; + for (j = 0; j < ntevs; j++) { +@@ -1464,7 +1467,7 @@ int debuginfo__find_trace_events(struct debuginfo *dbg, + if (ret >= 0 && tf.pf.skip_empty_arg) + ret = fill_empty_trace_arg(pev, tf.tevs, tf.ntevs); + +- if (ret < 0) { ++ if (ret < 0 || tf.ntevs == 0) { + for (i = 0; i < tf.ntevs; i++) + clear_probe_trace_event(&tf.tevs[i]); + zfree(tevs); +diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile +index af139d0e2e0c6..666b1b786bd29 100644 +--- a/tools/testing/selftests/bpf/Makefile ++++ b/tools/testing/selftests/bpf/Makefile +@@ -138,7 +138,9 @@ 
VMLINUX_BTF_PATHS := $(if $(O),$(O)/vmlinux) \ + /boot/vmlinux-$(shell uname -r) + VMLINUX_BTF := $(abspath $(firstword $(wildcard $(VMLINUX_BTF_PATHS)))) + +-$(OUTPUT)/runqslower: $(BPFOBJ) ++DEFAULT_BPFTOOL := $(SCRATCH_DIR)/sbin/bpftool ++ ++$(OUTPUT)/runqslower: $(BPFOBJ) | $(DEFAULT_BPFTOOL) + $(Q)$(MAKE) $(submake_extras) -C $(TOOLSDIR)/bpf/runqslower \ + OUTPUT=$(SCRATCH_DIR)/ VMLINUX_BTF=$(VMLINUX_BTF) \ + BPFOBJ=$(BPFOBJ) BPF_INCLUDE=$(INCLUDE_DIR) && \ +@@ -160,7 +162,6 @@ $(OUTPUT)/test_netcnt: cgroup_helpers.c + $(OUTPUT)/test_sock_fields: cgroup_helpers.c + $(OUTPUT)/test_sysctl: cgroup_helpers.c + +-DEFAULT_BPFTOOL := $(SCRATCH_DIR)/sbin/bpftool + BPFTOOL ?= $(DEFAULT_BPFTOOL) + $(DEFAULT_BPFTOOL): $(wildcard $(BPFTOOLDIR)/*.[ch] $(BPFTOOLDIR)/Makefile) \ + $(BPFOBJ) | $(BUILD_DIR)/bpftool +diff --git a/tools/testing/selftests/bpf/test_progs.c b/tools/testing/selftests/bpf/test_progs.c +index 93970ec1c9e94..c2eb58382113a 100644 +--- a/tools/testing/selftests/bpf/test_progs.c ++++ b/tools/testing/selftests/bpf/test_progs.c +@@ -12,6 +12,9 @@ + #include + #include /* backtrace */ + ++#define EXIT_NO_TEST 2 ++#define EXIT_ERR_SETUP_INFRA 3 ++ + /* defined in test_progs.h */ + struct test_env env = {}; + +@@ -111,13 +114,31 @@ static void reset_affinity() { + if (err < 0) { + stdio_restore(); + fprintf(stderr, "Failed to reset process affinity: %d!\n", err); +- exit(-1); ++ exit(EXIT_ERR_SETUP_INFRA); + } + err = pthread_setaffinity_np(pthread_self(), sizeof(cpuset), &cpuset); + if (err < 0) { + stdio_restore(); + fprintf(stderr, "Failed to reset thread affinity: %d!\n", err); +- exit(-1); ++ exit(EXIT_ERR_SETUP_INFRA); ++ } ++} ++ ++static void save_netns(void) ++{ ++ env.saved_netns_fd = open("/proc/self/ns/net", O_RDONLY); ++ if (env.saved_netns_fd == -1) { ++ perror("open(/proc/self/ns/net)"); ++ exit(EXIT_ERR_SETUP_INFRA); ++ } ++} ++ ++static void restore_netns(void) ++{ ++ if (setns(env.saved_netns_fd, CLONE_NEWNET) == -1) { ++ stdio_restore(); ++ 
perror("setns(CLONE_NEWNS)"); ++ exit(EXIT_ERR_SETUP_INFRA); + } + } + +@@ -138,8 +159,6 @@ void test__end_subtest() + test->test_num, test->subtest_num, + test->subtest_name, sub_error_cnt ? "FAIL" : "OK"); + +- reset_affinity(); +- + free(test->subtest_name); + test->subtest_name = NULL; + } +@@ -732,6 +751,7 @@ int main(int argc, char **argv) + return -1; + } + ++ save_netns(); + stdio_hijack(); + for (i = 0; i < prog_test_cnt; i++) { + struct prog_test_def *test = &prog_test_defs[i]; +@@ -762,6 +782,7 @@ int main(int argc, char **argv) + test->error_cnt ? "FAIL" : "OK"); + + reset_affinity(); ++ restore_netns(); + if (test->need_cgroup_cleanup) + cleanup_cgroup_environment(); + } +@@ -775,6 +796,10 @@ int main(int argc, char **argv) + free_str_set(&env.subtest_selector.blacklist); + free_str_set(&env.subtest_selector.whitelist); + free(env.subtest_selector.num_set); ++ close(env.saved_netns_fd); ++ ++ if (env.succ_cnt + env.fail_cnt + env.skip_cnt == 0) ++ return EXIT_NO_TEST; + + return env.fail_cnt ? EXIT_FAILURE : EXIT_SUCCESS; + } +diff --git a/tools/testing/selftests/bpf/test_progs.h b/tools/testing/selftests/bpf/test_progs.h +index f4aff6b8284be..3817667deb103 100644 +--- a/tools/testing/selftests/bpf/test_progs.h ++++ b/tools/testing/selftests/bpf/test_progs.h +@@ -77,6 +77,8 @@ struct test_env { + int sub_succ_cnt; /* successful sub-tests */ + int fail_cnt; /* total failed tests + sub-tests */ + int skip_cnt; /* skipped tests */ ++ ++ int saved_netns_fd; + }; + + extern struct test_env env; +diff --git a/tools/testing/selftests/powerpc/ptrace/ptrace-pkey.c b/tools/testing/selftests/powerpc/ptrace/ptrace-pkey.c +index bdbbbe8431e03..3694613f418f6 100644 +--- a/tools/testing/selftests/powerpc/ptrace/ptrace-pkey.c ++++ b/tools/testing/selftests/powerpc/ptrace/ptrace-pkey.c +@@ -44,7 +44,7 @@ struct shared_info { + unsigned long amr2; + + /* AMR value that ptrace should refuse to write to the child. 
*/ +- unsigned long amr3; ++ unsigned long invalid_amr; + + /* IAMR value the parent expects to read from the child. */ + unsigned long expected_iamr; +@@ -57,8 +57,8 @@ struct shared_info { + * (even though they're valid ones) because userspace doesn't have + * access to those registers. + */ +- unsigned long new_iamr; +- unsigned long new_uamor; ++ unsigned long invalid_iamr; ++ unsigned long invalid_uamor; + }; + + static int sys_pkey_alloc(unsigned long flags, unsigned long init_access_rights) +@@ -66,11 +66,6 @@ static int sys_pkey_alloc(unsigned long flags, unsigned long init_access_rights) + return syscall(__NR_pkey_alloc, flags, init_access_rights); + } + +-static int sys_pkey_free(int pkey) +-{ +- return syscall(__NR_pkey_free, pkey); +-} +- + static int child(struct shared_info *info) + { + unsigned long reg; +@@ -100,28 +95,32 @@ static int child(struct shared_info *info) + + info->amr1 |= 3ul << pkeyshift(pkey1); + info->amr2 |= 3ul << pkeyshift(pkey2); +- info->amr3 |= info->amr2 | 3ul << pkeyshift(pkey3); ++ /* ++ * invalid amr value where we try to force write ++ * things which are deined by a uamor setting. ++ */ ++ info->invalid_amr = info->amr2 | (~0x0UL & ~info->expected_uamor); + ++ /* ++ * if PKEY_DISABLE_EXECUTE succeeded we should update the expected_iamr ++ */ + if (disable_execute) + info->expected_iamr |= 1ul << pkeyshift(pkey1); + else + info->expected_iamr &= ~(1ul << pkeyshift(pkey1)); + +- info->expected_iamr &= ~(1ul << pkeyshift(pkey2) | 1ul << pkeyshift(pkey3)); +- +- info->expected_uamor |= 3ul << pkeyshift(pkey1) | +- 3ul << pkeyshift(pkey2); +- info->new_iamr |= 1ul << pkeyshift(pkey1) | 1ul << pkeyshift(pkey2); +- info->new_uamor |= 3ul << pkeyshift(pkey1); ++ /* ++ * We allocated pkey2 and pkey 3 above. Clear the IAMR bits. ++ */ ++ info->expected_iamr &= ~(1ul << pkeyshift(pkey2)); ++ info->expected_iamr &= ~(1ul << pkeyshift(pkey3)); + + /* +- * We won't use pkey3. 
We just want a plausible but invalid key to test +- * whether ptrace will let us write to AMR bits we are not supposed to. +- * +- * This also tests whether the kernel restores the UAMOR permissions +- * after a key is freed. ++ * Create an IAMR value different from expected value. ++ * Kernel will reject an IAMR and UAMOR change. + */ +- sys_pkey_free(pkey3); ++ info->invalid_iamr = info->expected_iamr | (1ul << pkeyshift(pkey1) | 1ul << pkeyshift(pkey2)); ++ info->invalid_uamor = info->expected_uamor & ~(0x3ul << pkeyshift(pkey1)); + + printf("%-30s AMR: %016lx pkey1: %d pkey2: %d pkey3: %d\n", + user_write, info->amr1, pkey1, pkey2, pkey3); +@@ -196,9 +195,9 @@ static int parent(struct shared_info *info, pid_t pid) + PARENT_SKIP_IF_UNSUPPORTED(ret, &info->child_sync); + PARENT_FAIL_IF(ret, &info->child_sync); + +- info->amr1 = info->amr2 = info->amr3 = regs[0]; +- info->expected_iamr = info->new_iamr = regs[1]; +- info->expected_uamor = info->new_uamor = regs[2]; ++ info->amr1 = info->amr2 = regs[0]; ++ info->expected_iamr = regs[1]; ++ info->expected_uamor = regs[2]; + + /* Wake up child so that it can set itself up. */ + ret = prod_child(&info->child_sync); +@@ -234,10 +233,10 @@ static int parent(struct shared_info *info, pid_t pid) + return ret; + + /* Write invalid AMR value in child. */ +- ret = ptrace_write_regs(pid, NT_PPC_PKEY, &info->amr3, 1); ++ ret = ptrace_write_regs(pid, NT_PPC_PKEY, &info->invalid_amr, 1); + PARENT_FAIL_IF(ret, &info->child_sync); + +- printf("%-30s AMR: %016lx\n", ptrace_write_running, info->amr3); ++ printf("%-30s AMR: %016lx\n", ptrace_write_running, info->invalid_amr); + + /* Wake up child so that it can verify it didn't change. */ + ret = prod_child(&info->child_sync); +@@ -249,7 +248,7 @@ static int parent(struct shared_info *info, pid_t pid) + + /* Try to write to IAMR. 
*/ + regs[0] = info->amr1; +- regs[1] = info->new_iamr; ++ regs[1] = info->invalid_iamr; + ret = ptrace_write_regs(pid, NT_PPC_PKEY, regs, 2); + PARENT_FAIL_IF(!ret, &info->child_sync); + +@@ -257,7 +256,7 @@ static int parent(struct shared_info *info, pid_t pid) + ptrace_write_running, regs[0], regs[1]); + + /* Try to write to IAMR and UAMOR. */ +- regs[2] = info->new_uamor; ++ regs[2] = info->invalid_uamor; + ret = ptrace_write_regs(pid, NT_PPC_PKEY, regs, 3); + PARENT_FAIL_IF(!ret, &info->child_sync); + +diff --git a/tools/testing/selftests/seccomp/seccomp_bpf.c b/tools/testing/selftests/seccomp/seccomp_bpf.c +index c84c7b50331c6..cdab315244540 100644 +--- a/tools/testing/selftests/seccomp/seccomp_bpf.c ++++ b/tools/testing/selftests/seccomp/seccomp_bpf.c +@@ -3257,6 +3257,11 @@ TEST(user_notification_with_tsync) + int ret; + unsigned int flags; + ++ ret = prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0); ++ ASSERT_EQ(0, ret) { ++ TH_LOG("Kernel does not support PR_SET_NO_NEW_PRIVS!"); ++ } ++ + /* these were exclusive */ + flags = SECCOMP_FILTER_FLAG_NEW_LISTENER | + SECCOMP_FILTER_FLAG_TSYNC;