From: "Mike Pagano" <mpagano@gentoo.org>
To: gentoo-commits@lists.gentoo.org
Subject: [gentoo-commits] proj/linux-patches:4.19 commit in: /
Date: Wed, 29 Apr 2020 17:57:50 +0000 (UTC)
Message-ID: <1588183052.27d7c2cb01376b49c16c731e901f2ce5bf8952ea.mpagano@gentoo>
commit: 27d7c2cb01376b49c16c731e901f2ce5bf8952ea
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Apr 29 17:57:32 2020 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Apr 29 17:57:32 2020 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=27d7c2cb
Linux patch 4.19.119
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1118_linux-4.19.119.patch | 5330 +++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 5334 insertions(+)
diff --git a/0000_README b/0000_README
index 5c6dcb8..36e1c22 100644
--- a/0000_README
+++ b/0000_README
@@ -511,6 +511,10 @@ Patch: 1117_linux-4.19.118.patch
From: https://www.kernel.org
Desc: Linux 4.19.118
+Patch: 1118_linux-4.19.119.patch
+From: https://www.kernel.org
+Desc: Linux 4.19.119
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1118_linux-4.19.119.patch b/1118_linux-4.19.119.patch
new file mode 100644
index 0000000..70f7790
--- /dev/null
+++ b/1118_linux-4.19.119.patch
@@ -0,0 +1,5330 @@
+diff --git a/Documentation/arm64/silicon-errata.txt b/Documentation/arm64/silicon-errata.txt
+index eeb3fc9d777b..667ea906266e 100644
+--- a/Documentation/arm64/silicon-errata.txt
++++ b/Documentation/arm64/silicon-errata.txt
+@@ -59,6 +59,7 @@ stable kernels.
+ | ARM | Cortex-A73 | #858921 | ARM64_ERRATUM_858921 |
+ | ARM | Cortex-A55 | #1024718 | ARM64_ERRATUM_1024718 |
+ | ARM | Cortex-A76 | #1463225 | ARM64_ERRATUM_1463225 |
++| ARM | Neoverse-N1 | #1542419 | ARM64_ERRATUM_1542419 |
+ | ARM | MMU-500 | #841119,#826419 | N/A |
+ | | | | |
+ | Cavium | ThunderX ITS | #22375, #24313 | CAVIUM_ERRATUM_22375 |
+diff --git a/Makefile b/Makefile
+index 72ae7e879077..69c95fe6fba5 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 19
+-SUBLEVEL = 118
++SUBLEVEL = 119
+ EXTRAVERSION =
+ NAME = "People's Front"
+
+diff --git a/arch/arm/mach-imx/Makefile b/arch/arm/mach-imx/Makefile
+index e9cfe8e86f33..02bf3eab4196 100644
+--- a/arch/arm/mach-imx/Makefile
++++ b/arch/arm/mach-imx/Makefile
+@@ -89,8 +89,10 @@ AFLAGS_suspend-imx6.o :=-Wa,-march=armv7-a
+ obj-$(CONFIG_SOC_IMX6) += suspend-imx6.o
+ obj-$(CONFIG_SOC_IMX53) += suspend-imx53.o
+ endif
++ifeq ($(CONFIG_ARM_CPU_SUSPEND),y)
+ AFLAGS_resume-imx6.o :=-Wa,-march=armv7-a
+ obj-$(CONFIG_SOC_IMX6) += resume-imx6.o
++endif
+ obj-$(CONFIG_SOC_IMX6) += pm-imx6.o
+
+ obj-$(CONFIG_SOC_IMX1) += mach-imx1.o
+diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
+index 51fe21f5d078..1fe3e5cb2927 100644
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -499,6 +499,22 @@ config ARM64_ERRATUM_1463225
+
+ If unsure, say Y.
+
++config ARM64_ERRATUM_1542419
++ bool "Neoverse-N1: workaround mis-ordering of instruction fetches"
++ default y
++ help
++ This option adds a workaround for ARM Neoverse-N1 erratum
++ 1542419.
++
++ Affected Neoverse-N1 cores could execute a stale instruction when
++ modified by another CPU. The workaround depends on a firmware
++ counterpart.
++
++ Workaround the issue by hiding the DIC feature from EL0. This
++ forces user-space to perform cache maintenance.
++
++ If unsure, say Y.
++
+ config CAVIUM_ERRATUM_22375
+ bool "Cavium erratum 22375, 24313"
+ default y
+diff --git a/arch/arm64/include/asm/cache.h b/arch/arm64/include/asm/cache.h
+index 5ee5bca8c24b..baa684782358 100644
+--- a/arch/arm64/include/asm/cache.h
++++ b/arch/arm64/include/asm/cache.h
+@@ -22,6 +22,7 @@
+ #define CTR_L1IP_MASK 3
+ #define CTR_DMINLINE_SHIFT 16
+ #define CTR_IMINLINE_SHIFT 0
++#define CTR_IMINLINE_MASK 0xf
+ #define CTR_ERG_SHIFT 20
+ #define CTR_CWG_SHIFT 24
+ #define CTR_CWG_MASK 15
+@@ -29,7 +30,7 @@
+ #define CTR_DIC_SHIFT 29
+
+ #define CTR_CACHE_MINLINE_MASK \
+- (0xf << CTR_DMINLINE_SHIFT | 0xf << CTR_IMINLINE_SHIFT)
++ (0xf << CTR_DMINLINE_SHIFT | CTR_IMINLINE_MASK << CTR_IMINLINE_SHIFT)
+
+ #define CTR_L1IP(ctr) (((ctr) >> CTR_L1IP_SHIFT) & CTR_L1IP_MASK)
+
+diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
+index c3de0bbf0e9a..df8fe8ecc37e 100644
+--- a/arch/arm64/include/asm/cpucaps.h
++++ b/arch/arm64/include/asm/cpucaps.h
+@@ -53,7 +53,8 @@
+ #define ARM64_HAS_STAGE2_FWB 32
+ #define ARM64_WORKAROUND_1463225 33
+ #define ARM64_SSBS 34
++#define ARM64_WORKAROUND_1542419 35
+
+-#define ARM64_NCAPS 35
++#define ARM64_NCAPS 36
+
+ #endif /* __ASM_CPUCAPS_H */
+diff --git a/arch/arm64/include/asm/cputype.h b/arch/arm64/include/asm/cputype.h
+index fa770c070fdd..3cd936b1c79c 100644
+--- a/arch/arm64/include/asm/cputype.h
++++ b/arch/arm64/include/asm/cputype.h
+@@ -80,6 +80,7 @@
+ #define ARM_CPU_PART_CORTEX_A35 0xD04
+ #define ARM_CPU_PART_CORTEX_A55 0xD05
+ #define ARM_CPU_PART_CORTEX_A76 0xD0B
++#define ARM_CPU_PART_NEOVERSE_N1 0xD0C
+
+ #define APM_CPU_PART_POTENZA 0x000
+
+@@ -107,6 +108,7 @@
+ #define MIDR_CORTEX_A35 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A35)
+ #define MIDR_CORTEX_A55 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A55)
+ #define MIDR_CORTEX_A76 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A76)
++#define MIDR_NEOVERSE_N1 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_N1)
+ #define MIDR_THUNDERX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX)
+ #define MIDR_THUNDERX_81XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_81XX)
+ #define MIDR_THUNDERX_83XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_83XX)
+diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
+index 71888808ded7..76490b0cefce 100644
+--- a/arch/arm64/kernel/cpu_errata.c
++++ b/arch/arm64/kernel/cpu_errata.c
+@@ -643,6 +643,18 @@ needs_tx2_tvm_workaround(const struct arm64_cpu_capabilities *entry,
+ return false;
+ }
+
++static bool __maybe_unused
++has_neoverse_n1_erratum_1542419(const struct arm64_cpu_capabilities *entry,
++ int scope)
++{
++ u32 midr = read_cpuid_id();
++ bool has_dic = read_cpuid_cachetype() & BIT(CTR_DIC_SHIFT);
++ const struct midr_range range = MIDR_ALL_VERSIONS(MIDR_NEOVERSE_N1);
++
++ WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
++ return is_midr_in_range(midr, &range) && has_dic;
++}
++
+ #ifdef CONFIG_HARDEN_EL2_VECTORS
+
+ static const struct midr_range arm64_harden_el2_vectors[] = {
+@@ -834,6 +846,16 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
+ ERRATA_MIDR_RANGE_LIST(tx2_family_cpus),
+ .matches = needs_tx2_tvm_workaround,
+ },
++#endif
++#ifdef CONFIG_ARM64_ERRATUM_1542419
++ {
++ /* we depend on the firmware portion for correctness */
++ .desc = "ARM erratum 1542419 (kernel portion)",
++ .capability = ARM64_WORKAROUND_1542419,
++ .type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
++ .matches = has_neoverse_n1_erratum_1542419,
++ .cpu_enable = cpu_enable_trap_ctr_access,
++ },
+ #endif
+ {
+ }
+diff --git a/arch/arm64/kernel/sys_compat.c b/arch/arm64/kernel/sys_compat.c
+index 010212d35700..3ef9d0a3ac1d 100644
+--- a/arch/arm64/kernel/sys_compat.c
++++ b/arch/arm64/kernel/sys_compat.c
+@@ -19,6 +19,7 @@
+ */
+
+ #include <linux/compat.h>
++#include <linux/cpufeature.h>
+ #include <linux/personality.h>
+ #include <linux/sched.h>
+ #include <linux/sched/signal.h>
+@@ -28,6 +29,7 @@
+
+ #include <asm/cacheflush.h>
+ #include <asm/system_misc.h>
++#include <asm/tlbflush.h>
+ #include <asm/unistd.h>
+
+ static long
+@@ -41,6 +43,15 @@ __do_compat_cache_op(unsigned long start, unsigned long end)
+ if (fatal_signal_pending(current))
+ return 0;
+
++ if (cpus_have_const_cap(ARM64_WORKAROUND_1542419)) {
++ /*
++ * The workaround requires an inner-shareable tlbi.
++ * We pick the reserved-ASID to minimise the impact.
++ */
++ __tlbi(aside1is, __TLBI_VADDR(0, 0));
++ dsb(ish);
++ }
++
+ ret = __flush_cache_user_range(start, start + chunk);
+ if (ret)
+ return ret;
+diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
+index c8dc3a3640e7..965595fe6804 100644
+--- a/arch/arm64/kernel/traps.c
++++ b/arch/arm64/kernel/traps.c
+@@ -481,6 +481,15 @@ static void ctr_read_handler(unsigned int esr, struct pt_regs *regs)
+ int rt = (esr & ESR_ELx_SYS64_ISS_RT_MASK) >> ESR_ELx_SYS64_ISS_RT_SHIFT;
+ unsigned long val = arm64_ftr_reg_user_value(&arm64_ftr_reg_ctrel0);
+
++ if (cpus_have_const_cap(ARM64_WORKAROUND_1542419)) {
++ /* Hide DIC so that we can trap the unnecessary maintenance...*/
++ val &= ~BIT(CTR_DIC_SHIFT);
++
++ /* ... and fake IminLine to reduce the number of traps. */
++ val &= ~CTR_IMINLINE_MASK;
++ val |= (PAGE_SHIFT - 2) & CTR_IMINLINE_MASK;
++ }
++
+ pt_regs_write_reg(regs, rt, val);
+
+ arm64_skip_faulting_instruction(regs, AARCH64_INSN_SIZE);
+diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
+index eaf7300be5ab..bd4996958b13 100644
+--- a/arch/powerpc/kernel/setup_64.c
++++ b/arch/powerpc/kernel/setup_64.c
+@@ -518,6 +518,8 @@ static bool __init parse_cache_info(struct device_node *np,
+ lsizep = of_get_property(np, propnames[3], NULL);
+ if (bsizep == NULL)
+ bsizep = lsizep;
++ if (lsizep == NULL)
++ lsizep = bsizep;
+ if (lsizep != NULL)
+ lsize = be32_to_cpu(*lsizep);
+ if (bsizep != NULL)
+diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c
+index 5449e76cf2df..f6c21f6af274 100644
+--- a/arch/powerpc/kernel/time.c
++++ b/arch/powerpc/kernel/time.c
+@@ -492,35 +492,6 @@ static inline void clear_irq_work_pending(void)
+ "i" (offsetof(struct paca_struct, irq_work_pending)));
+ }
+
+-void arch_irq_work_raise(void)
+-{
+- preempt_disable();
+- set_irq_work_pending_flag();
+- /*
+- * Non-nmi code running with interrupts disabled will replay
+- * irq_happened before it re-enables interrupts, so setthe
+- * decrementer there instead of causing a hardware exception
+- * which would immediately hit the masked interrupt handler
+- * and have the net effect of setting the decrementer in
+- * irq_happened.
+- *
+- * NMI interrupts can not check this when they return, so the
+- * decrementer hardware exception is raised, which will fire
+- * when interrupts are next enabled.
+- *
+- * BookE does not support this yet, it must audit all NMI
+- * interrupt handlers to ensure they call nmi_enter() so this
+- * check would be correct.
+- */
+- if (IS_ENABLED(CONFIG_BOOKE) || !irqs_disabled() || in_nmi()) {
+- set_dec(1);
+- } else {
+- hard_irq_disable();
+- local_paca->irq_happened |= PACA_IRQ_DEC;
+- }
+- preempt_enable();
+-}
+-
+ #else /* 32-bit */
+
+ DEFINE_PER_CPU(u8, irq_work_pending);
+@@ -529,16 +500,27 @@ DEFINE_PER_CPU(u8, irq_work_pending);
+ #define test_irq_work_pending() __this_cpu_read(irq_work_pending)
+ #define clear_irq_work_pending() __this_cpu_write(irq_work_pending, 0)
+
++#endif /* 32 vs 64 bit */
++
+ void arch_irq_work_raise(void)
+ {
++ /*
++ * 64-bit code that uses irq soft-mask can just cause an immediate
++ * interrupt here that gets soft masked, if this is called under
++ * local_irq_disable(). It might be possible to prevent that happening
++ * by noticing interrupts are disabled and setting decrementer pending
++ * to be replayed when irqs are enabled. The problem there is that
++ * tracing can call irq_work_raise, including in code that does low
++ * level manipulations of irq soft-mask state (e.g., trace_hardirqs_on)
++ * which could get tangled up if we're messing with the same state
++ * here.
++ */
+ preempt_disable();
+ set_irq_work_pending_flag();
+ set_dec(1);
+ preempt_enable();
+ }
+
+-#endif /* 32 vs 64 bit */
+-
+ #else /* CONFIG_IRQ_WORK */
+
+ #define test_irq_work_pending() 0
+diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
+index 11c3cd906ab4..18662c1a9361 100644
+--- a/arch/s390/kvm/kvm-s390.c
++++ b/arch/s390/kvm/kvm-s390.c
+@@ -1666,6 +1666,9 @@ static int gfn_to_memslot_approx(struct kvm_memslots *slots, gfn_t gfn)
+ start = slot + 1;
+ }
+
++ if (start >= slots->used_slots)
++ return slots->used_slots - 1;
++
+ if (gfn >= memslots[start].base_gfn &&
+ gfn < memslots[start].base_gfn + memslots[start].npages) {
+ atomic_set(&slots->lru_slot, start);
+diff --git a/arch/s390/lib/uaccess.c b/arch/s390/lib/uaccess.c
+index c4f8039a35e8..0267405ab7c6 100644
+--- a/arch/s390/lib/uaccess.c
++++ b/arch/s390/lib/uaccess.c
+@@ -64,10 +64,13 @@ mm_segment_t enable_sacf_uaccess(void)
+ {
+ mm_segment_t old_fs;
+ unsigned long asce, cr;
++ unsigned long flags;
+
+ old_fs = current->thread.mm_segment;
+ if (old_fs & 1)
+ return old_fs;
++ /* protect against a concurrent page table upgrade */
++ local_irq_save(flags);
+ current->thread.mm_segment |= 1;
+ asce = S390_lowcore.kernel_asce;
+ if (likely(old_fs == USER_DS)) {
+@@ -83,6 +86,7 @@ mm_segment_t enable_sacf_uaccess(void)
+ __ctl_load(asce, 7, 7);
+ set_cpu_flag(CIF_ASCE_SECONDARY);
+ }
++ local_irq_restore(flags);
+ return old_fs;
+ }
+ EXPORT_SYMBOL(enable_sacf_uaccess);
+diff --git a/arch/s390/mm/pgalloc.c b/arch/s390/mm/pgalloc.c
+index 814f26520aa2..f3bc9c9305da 100644
+--- a/arch/s390/mm/pgalloc.c
++++ b/arch/s390/mm/pgalloc.c
+@@ -72,8 +72,20 @@ static void __crst_table_upgrade(void *arg)
+ {
+ struct mm_struct *mm = arg;
+
+- if (current->active_mm == mm)
+- set_user_asce(mm);
++ /* we must change all active ASCEs to avoid the creation of new TLBs */
++ if (current->active_mm == mm) {
++ S390_lowcore.user_asce = mm->context.asce;
++ if (current->thread.mm_segment == USER_DS) {
++ __ctl_load(S390_lowcore.user_asce, 1, 1);
++ /* Mark user-ASCE present in CR1 */
++ clear_cpu_flag(CIF_ASCE_PRIMARY);
++ }
++ if (current->thread.mm_segment == USER_DS_SACF) {
++ __ctl_load(S390_lowcore.user_asce, 7, 7);
++ /* enable_sacf_uaccess does all or nothing */
++ WARN_ON(!test_cpu_flag(CIF_ASCE_SECONDARY));
++ }
++ }
+ __tlb_flush_local();
+ }
+
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index 5c99b9bfce04..33136395db8f 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -622,10 +622,10 @@ struct kvm_vcpu_arch {
+ bool pvclock_set_guest_stopped_request;
+
+ struct {
++ u8 preempted;
+ u64 msr_val;
+ u64 last_steal;
+- struct gfn_to_hva_cache stime;
+- struct kvm_steal_time steal;
++ struct gfn_to_pfn_cache cache;
+ } st;
+
+ u64 tsc_offset;
+diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
+index d37b48173e9c..fe5036641c59 100644
+--- a/arch/x86/kvm/vmx.c
++++ b/arch/x86/kvm/vmx.c
+@@ -7015,7 +7015,7 @@ static int handle_rmode_exception(struct kvm_vcpu *vcpu,
+ */
+ static void kvm_machine_check(void)
+ {
+-#if defined(CONFIG_X86_MCE) && defined(CONFIG_X86_64)
++#if defined(CONFIG_X86_MCE)
+ struct pt_regs regs = {
+ .cs = 3, /* Fake ring 3 no matter what the guest ran on */
+ .flags = X86_EFLAGS_IF,
+@@ -10841,6 +10841,15 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
+ "mov %%r13, %c[r13](%0) \n\t"
+ "mov %%r14, %c[r14](%0) \n\t"
+ "mov %%r15, %c[r15](%0) \n\t"
++
++ /*
++ * Clear all general purpose registers (except RSP, which is loaded by
++ * the CPU during VM-Exit) to prevent speculative use of the guest's
++ * values, even those that are saved/loaded via the stack. In theory,
++ * an L1 cache miss when restoring registers could lead to speculative
++ * execution with the guest's values. Zeroing XORs are dirt cheap,
++ * i.e. the extra paranoia is essentially free.
++ */
+ "xor %%r8d, %%r8d \n\t"
+ "xor %%r9d, %%r9d \n\t"
+ "xor %%r10d, %%r10d \n\t"
+@@ -10855,8 +10864,11 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
+
+ "xor %%eax, %%eax \n\t"
+ "xor %%ebx, %%ebx \n\t"
++ "xor %%ecx, %%ecx \n\t"
++ "xor %%edx, %%edx \n\t"
+ "xor %%esi, %%esi \n\t"
+ "xor %%edi, %%edi \n\t"
++ "xor %%ebp, %%ebp \n\t"
+ "pop %%" _ASM_BP "; pop %%" _ASM_DX " \n\t"
+ ".pushsection .rodata \n\t"
+ ".global vmx_return \n\t"
+@@ -12125,13 +12137,9 @@ static void prepare_vmcs02_full(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12)
+
+ set_cr4_guest_host_mask(vmx);
+
+- if (kvm_mpx_supported()) {
+- if (vmx->nested.nested_run_pending &&
+- (vmcs12->vm_entry_controls & VM_ENTRY_LOAD_BNDCFGS))
+- vmcs_write64(GUEST_BNDCFGS, vmcs12->guest_bndcfgs);
+- else
+- vmcs_write64(GUEST_BNDCFGS, vmx->nested.vmcs01_guest_bndcfgs);
+- }
++ if (kvm_mpx_supported() && vmx->nested.nested_run_pending &&
++ (vmcs12->vm_entry_controls & VM_ENTRY_LOAD_BNDCFGS))
++ vmcs_write64(GUEST_BNDCFGS, vmcs12->guest_bndcfgs);
+
+ if (enable_vpid) {
+ if (nested_cpu_has_vpid(vmcs12) && vmx->nested.vpid02)
+@@ -12195,6 +12203,9 @@ static int prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
+ kvm_set_dr(vcpu, 7, vcpu->arch.dr7);
+ vmcs_write64(GUEST_IA32_DEBUGCTL, vmx->nested.vmcs01_debugctl);
+ }
++ if (kvm_mpx_supported() && (!vmx->nested.nested_run_pending ||
++ !(vmcs12->vm_entry_controls & VM_ENTRY_LOAD_BNDCFGS)))
++ vmcs_write64(GUEST_BNDCFGS, vmx->nested.vmcs01_guest_bndcfgs);
+ if (vmx->nested.nested_run_pending) {
+ vmcs_write32(VM_ENTRY_INTR_INFO_FIELD,
+ vmcs12->vm_entry_intr_info_field);
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 1a6e1aa2fb29..6bfc9eaf8dee 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -2397,43 +2397,45 @@ static void kvm_vcpu_flush_tlb(struct kvm_vcpu *vcpu, bool invalidate_gpa)
+
+ static void record_steal_time(struct kvm_vcpu *vcpu)
+ {
++ struct kvm_host_map map;
++ struct kvm_steal_time *st;
++
+ if (!(vcpu->arch.st.msr_val & KVM_MSR_ENABLED))
+ return;
+
+- if (unlikely(kvm_read_guest_cached(vcpu->kvm, &vcpu->arch.st.stime,
+- &vcpu->arch.st.steal, sizeof(struct kvm_steal_time))))
++ /* -EAGAIN is returned in atomic context so we can just return. */
++ if (kvm_map_gfn(vcpu, vcpu->arch.st.msr_val >> PAGE_SHIFT,
++ &map, &vcpu->arch.st.cache, false))
+ return;
+
++ st = map.hva +
++ offset_in_page(vcpu->arch.st.msr_val & KVM_STEAL_VALID_BITS);
++
+ /*
+ * Doing a TLB flush here, on the guest's behalf, can avoid
+ * expensive IPIs.
+ */
+- if (xchg(&vcpu->arch.st.steal.preempted, 0) & KVM_VCPU_FLUSH_TLB)
++ if (xchg(&st->preempted, 0) & KVM_VCPU_FLUSH_TLB)
+ kvm_vcpu_flush_tlb(vcpu, false);
+
+- if (vcpu->arch.st.steal.version & 1)
+- vcpu->arch.st.steal.version += 1; /* first time write, random junk */
++ vcpu->arch.st.preempted = 0;
+
+- vcpu->arch.st.steal.version += 1;
++ if (st->version & 1)
++ st->version += 1; /* first time write, random junk */
+
+- kvm_write_guest_cached(vcpu->kvm, &vcpu->arch.st.stime,
+- &vcpu->arch.st.steal, sizeof(struct kvm_steal_time));
++ st->version += 1;
+
+ smp_wmb();
+
+- vcpu->arch.st.steal.steal += current->sched_info.run_delay -
++ st->steal += current->sched_info.run_delay -
+ vcpu->arch.st.last_steal;
+ vcpu->arch.st.last_steal = current->sched_info.run_delay;
+
+- kvm_write_guest_cached(vcpu->kvm, &vcpu->arch.st.stime,
+- &vcpu->arch.st.steal, sizeof(struct kvm_steal_time));
+-
+ smp_wmb();
+
+- vcpu->arch.st.steal.version += 1;
++ st->version += 1;
+
+- kvm_write_guest_cached(vcpu->kvm, &vcpu->arch.st.stime,
+- &vcpu->arch.st.steal, sizeof(struct kvm_steal_time));
++ kvm_unmap_gfn(vcpu, &map, &vcpu->arch.st.cache, true, false);
+ }
+
+ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+@@ -2575,11 +2577,6 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ if (data & KVM_STEAL_RESERVED_MASK)
+ return 1;
+
+- if (kvm_gfn_to_hva_cache_init(vcpu->kvm, &vcpu->arch.st.stime,
+- data & KVM_STEAL_VALID_BITS,
+- sizeof(struct kvm_steal_time)))
+- return 1;
+-
+ vcpu->arch.st.msr_val = data;
+
+ if (!(data & KVM_MSR_ENABLED))
+@@ -3272,18 +3269,25 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+
+ static void kvm_steal_time_set_preempted(struct kvm_vcpu *vcpu)
+ {
++ struct kvm_host_map map;
++ struct kvm_steal_time *st;
++
+ if (!(vcpu->arch.st.msr_val & KVM_MSR_ENABLED))
+ return;
+
+- if (vcpu->arch.st.steal.preempted)
++ if (vcpu->arch.st.preempted)
++ return;
++
++ if (kvm_map_gfn(vcpu, vcpu->arch.st.msr_val >> PAGE_SHIFT, &map,
++ &vcpu->arch.st.cache, true))
+ return;
+
+- vcpu->arch.st.steal.preempted = KVM_VCPU_PREEMPTED;
++ st = map.hva +
++ offset_in_page(vcpu->arch.st.msr_val & KVM_STEAL_VALID_BITS);
+
+- kvm_write_guest_offset_cached(vcpu->kvm, &vcpu->arch.st.stime,
+- &vcpu->arch.st.steal.preempted,
+- offsetof(struct kvm_steal_time, preempted),
+- sizeof(vcpu->arch.st.steal.preempted));
++ st->preempted = vcpu->arch.st.preempted = KVM_VCPU_PREEMPTED;
++
++ kvm_unmap_gfn(vcpu, &map, &vcpu->arch.st.cache, true, true);
+ }
+
+ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
+@@ -8634,6 +8638,9 @@ static void fx_init(struct kvm_vcpu *vcpu)
+ void kvm_arch_vcpu_free(struct kvm_vcpu *vcpu)
+ {
+ void *wbinvd_dirty_mask = vcpu->arch.wbinvd_dirty_mask;
++ struct gfn_to_pfn_cache *cache = &vcpu->arch.st.cache;
++
++ kvm_release_pfn(cache->pfn, cache->dirty, cache);
+
+ kvmclock_reset(vcpu);
+
+@@ -9298,11 +9305,18 @@ out_free:
+
+ void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen)
+ {
++ struct kvm_vcpu *vcpu;
++ int i;
++
+ /*
+ * memslots->generation has been incremented.
+ * mmio generation may have reached its maximum value.
+ */
+ kvm_mmu_invalidate_mmio_sptes(kvm, gen);
++
++ /* Force re-initialization of steal_time cache */
++ kvm_for_each_vcpu(i, vcpu, kvm)
++ kvm_vcpu_kick(vcpu);
+ }
+
+ int kvm_arch_prepare_memory_region(struct kvm *kvm,
+diff --git a/drivers/block/loop.c b/drivers/block/loop.c
+index 9cd231a27328..c1341c86bcde 100644
+--- a/drivers/block/loop.c
++++ b/drivers/block/loop.c
+@@ -426,11 +426,12 @@ static int lo_fallocate(struct loop_device *lo, struct request *rq, loff_t pos,
+ * information.
+ */
+ struct file *file = lo->lo_backing_file;
++ struct request_queue *q = lo->lo_queue;
+ int ret;
+
+ mode |= FALLOC_FL_KEEP_SIZE;
+
+- if ((!file->f_op->fallocate) || lo->lo_encrypt_key_size) {
++ if (!blk_queue_discard(q)) {
+ ret = -EOPNOTSUPP;
+ goto out;
+ }
+@@ -864,28 +865,47 @@ static void loop_config_discard(struct loop_device *lo)
+ struct inode *inode = file->f_mapping->host;
+ struct request_queue *q = lo->lo_queue;
+
++ /*
++ * If the backing device is a block device, mirror its zeroing
++ * capability. Set the discard sectors to the block device's zeroing
++ * capabilities because loop discards result in blkdev_issue_zeroout(),
++ * not blkdev_issue_discard(). This maintains consistent behavior with
++ * file-backed loop devices: discarded regions read back as zero.
++ */
++ if (S_ISBLK(inode->i_mode) && !lo->lo_encrypt_key_size) {
++ struct request_queue *backingq;
++
++ backingq = bdev_get_queue(inode->i_bdev);
++ blk_queue_max_discard_sectors(q,
++ backingq->limits.max_write_zeroes_sectors);
++
++ blk_queue_max_write_zeroes_sectors(q,
++ backingq->limits.max_write_zeroes_sectors);
++
+ /*
+ * We use punch hole to reclaim the free space used by the
+ * image a.k.a. discard. However we do not support discard if
+ * encryption is enabled, because it may give an attacker
+ * useful information.
+ */
+- if ((!file->f_op->fallocate) ||
+- lo->lo_encrypt_key_size) {
++ } else if (!file->f_op->fallocate || lo->lo_encrypt_key_size) {
+ q->limits.discard_granularity = 0;
+ q->limits.discard_alignment = 0;
+ blk_queue_max_discard_sectors(q, 0);
+ blk_queue_max_write_zeroes_sectors(q, 0);
+- blk_queue_flag_clear(QUEUE_FLAG_DISCARD, q);
+- return;
+- }
+
+- q->limits.discard_granularity = inode->i_sb->s_blocksize;
+- q->limits.discard_alignment = 0;
++ } else {
++ q->limits.discard_granularity = inode->i_sb->s_blocksize;
++ q->limits.discard_alignment = 0;
+
+- blk_queue_max_discard_sectors(q, UINT_MAX >> 9);
+- blk_queue_max_write_zeroes_sectors(q, UINT_MAX >> 9);
+- blk_queue_flag_set(QUEUE_FLAG_DISCARD, q);
++ blk_queue_max_discard_sectors(q, UINT_MAX >> 9);
++ blk_queue_max_write_zeroes_sectors(q, UINT_MAX >> 9);
++ }
++
++ if (q->limits.max_write_zeroes_sectors)
++ blk_queue_flag_set(QUEUE_FLAG_DISCARD, q);
++ else
++ blk_queue_flag_clear(QUEUE_FLAG_DISCARD, q);
+ }
+
+ static void loop_unprepare_queue(struct loop_device *lo)
+diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
+index 728c9a9609f0..9a3c2b14ac37 100644
+--- a/drivers/block/virtio_blk.c
++++ b/drivers/block/virtio_blk.c
+@@ -277,9 +277,14 @@ static blk_status_t virtio_queue_rq(struct blk_mq_hw_ctx *hctx,
+ if (err == -ENOSPC)
+ blk_mq_stop_hw_queue(hctx);
+ spin_unlock_irqrestore(&vblk->vqs[qid].lock, flags);
+- if (err == -ENOMEM || err == -ENOSPC)
++ switch (err) {
++ case -ENOSPC:
+ return BLK_STS_DEV_RESOURCE;
+- return BLK_STS_IOERR;
++ case -ENOMEM:
++ return BLK_STS_RESOURCE;
++ default:
++ return BLK_STS_IOERR;
++ }
+ }
+
+ if (bd->last && virtqueue_kick_prepare(vblk->vqs[qid].vq))
+diff --git a/drivers/char/tpm/tpm_ibmvtpm.c b/drivers/char/tpm/tpm_ibmvtpm.c
+index 77e47dc5aacc..569e93e1f06c 100644
+--- a/drivers/char/tpm/tpm_ibmvtpm.c
++++ b/drivers/char/tpm/tpm_ibmvtpm.c
+@@ -1,5 +1,5 @@
+ /*
+- * Copyright (C) 2012 IBM Corporation
++ * Copyright (C) 2012-2020 IBM Corporation
+ *
+ * Author: Ashley Lai <ashleydlai@gmail.com>
+ *
+@@ -140,6 +140,64 @@ static int tpm_ibmvtpm_recv(struct tpm_chip *chip, u8 *buf, size_t count)
+ return len;
+ }
+
++/**
++ * ibmvtpm_crq_send_init - Send a CRQ initialize message
++ * @ibmvtpm: vtpm device struct
++ *
++ * Return:
++ * 0 on success.
++ * Non-zero on failure.
++ */
++static int ibmvtpm_crq_send_init(struct ibmvtpm_dev *ibmvtpm)
++{
++ int rc;
++
++ rc = ibmvtpm_send_crq_word(ibmvtpm->vdev, INIT_CRQ_CMD);
++ if (rc != H_SUCCESS)
++ dev_err(ibmvtpm->dev,
++ "%s failed rc=%d\n", __func__, rc);
++
++ return rc;
++}
++
++/**
++ * tpm_ibmvtpm_resume - Resume from suspend
++ *
++ * @dev: device struct
++ *
++ * Return: Always 0.
++ */
++static int tpm_ibmvtpm_resume(struct device *dev)
++{
++ struct tpm_chip *chip = dev_get_drvdata(dev);
++ struct ibmvtpm_dev *ibmvtpm = dev_get_drvdata(&chip->dev);
++ int rc = 0;
++
++ do {
++ if (rc)
++ msleep(100);
++ rc = plpar_hcall_norets(H_ENABLE_CRQ,
++ ibmvtpm->vdev->unit_address);
++ } while (rc == H_IN_PROGRESS || rc == H_BUSY || H_IS_LONG_BUSY(rc));
++
++ if (rc) {
++ dev_err(dev, "Error enabling ibmvtpm rc=%d\n", rc);
++ return rc;
++ }
++
++ rc = vio_enable_interrupts(ibmvtpm->vdev);
++ if (rc) {
++ dev_err(dev, "Error vio_enable_interrupts rc=%d\n", rc);
++ return rc;
++ }
++
++ rc = ibmvtpm_crq_send_init(ibmvtpm);
++ if (rc)
++ dev_err(dev, "Error send_init rc=%d\n", rc);
++
++ return rc;
++}
++
+ /**
+ * tpm_ibmvtpm_send() - Send a TPM command
+ * @chip: tpm chip struct
+@@ -153,6 +211,7 @@ static int tpm_ibmvtpm_recv(struct tpm_chip *chip, u8 *buf, size_t count)
+ static int tpm_ibmvtpm_send(struct tpm_chip *chip, u8 *buf, size_t count)
+ {
+ struct ibmvtpm_dev *ibmvtpm = dev_get_drvdata(&chip->dev);
++ bool retry = true;
+ int rc, sig;
+
+ if (!ibmvtpm->rtce_buf) {
+@@ -186,18 +245,27 @@ static int tpm_ibmvtpm_send(struct tpm_chip *chip, u8 *buf, size_t count)
+ */
+ ibmvtpm->tpm_processing_cmd = true;
+
++again:
+ rc = ibmvtpm_send_crq(ibmvtpm->vdev,
+ IBMVTPM_VALID_CMD, VTPM_TPM_COMMAND,
+ count, ibmvtpm->rtce_dma_handle);
+ if (rc != H_SUCCESS) {
++ /*
++ * H_CLOSED can be returned after LPM resume. Call
++ * tpm_ibmvtpm_resume() to re-enable the CRQ then retry
++ * ibmvtpm_send_crq() once before failing.
++ */
++ if (rc == H_CLOSED && retry) {
++ tpm_ibmvtpm_resume(ibmvtpm->dev);
++ retry = false;
++ goto again;
++ }
+ dev_err(ibmvtpm->dev, "tpm_ibmvtpm_send failed rc=%d\n", rc);
+- rc = 0;
+ ibmvtpm->tpm_processing_cmd = false;
+- } else
+- rc = 0;
++ }
+
+ spin_unlock(&ibmvtpm->rtce_lock);
+- return rc;
++ return 0;
+ }
+
+ static void tpm_ibmvtpm_cancel(struct tpm_chip *chip)
+@@ -275,26 +343,6 @@ static int ibmvtpm_crq_send_init_complete(struct ibmvtpm_dev *ibmvtpm)
+ return rc;
+ }
+
+-/**
+- * ibmvtpm_crq_send_init - Send a CRQ initialize message
+- * @ibmvtpm: vtpm device struct
+- *
+- * Return:
+- * 0 on success.
+- * Non-zero on failure.
+- */
+-static int ibmvtpm_crq_send_init(struct ibmvtpm_dev *ibmvtpm)
+-{
+- int rc;
+-
+- rc = ibmvtpm_send_crq_word(ibmvtpm->vdev, INIT_CRQ_CMD);
+- if (rc != H_SUCCESS)
+- dev_err(ibmvtpm->dev,
+- "ibmvtpm_crq_send_init failed rc=%d\n", rc);
+-
+- return rc;
+-}
+-
+ /**
+ * tpm_ibmvtpm_remove - ibm vtpm remove entry point
+ * @vdev: vio device struct
+@@ -407,44 +455,6 @@ static int ibmvtpm_reset_crq(struct ibmvtpm_dev *ibmvtpm)
+ ibmvtpm->crq_dma_handle, CRQ_RES_BUF_SIZE);
+ }
+
+-/**
+- * tpm_ibmvtpm_resume - Resume from suspend
+- *
+- * @dev: device struct
+- *
+- * Return: Always 0.
+- */
+-static int tpm_ibmvtpm_resume(struct device *dev)
+-{
+- struct tpm_chip *chip = dev_get_drvdata(dev);
+- struct ibmvtpm_dev *ibmvtpm = dev_get_drvdata(&chip->dev);
+- int rc = 0;
+-
+- do {
+- if (rc)
+- msleep(100);
+- rc = plpar_hcall_norets(H_ENABLE_CRQ,
+- ibmvtpm->vdev->unit_address);
+- } while (rc == H_IN_PROGRESS || rc == H_BUSY || H_IS_LONG_BUSY(rc));
+-
+- if (rc) {
+- dev_err(dev, "Error enabling ibmvtpm rc=%d\n", rc);
+- return rc;
+- }
+-
+- rc = vio_enable_interrupts(ibmvtpm->vdev);
+- if (rc) {
+- dev_err(dev, "Error vio_enable_interrupts rc=%d\n", rc);
+- return rc;
+- }
+-
+- rc = ibmvtpm_crq_send_init(ibmvtpm);
+- if (rc)
+- dev_err(dev, "Error send_init rc=%d\n", rc);
+-
+- return rc;
+-}
+-
+ static bool tpm_ibmvtpm_req_canceled(struct tpm_chip *chip, u8 status)
+ {
+ return (status == 0);
+diff --git a/drivers/char/tpm/tpm_tis_core.c b/drivers/char/tpm/tpm_tis_core.c
+index 0eaea3a7b8f4..5d8f8f018984 100644
+--- a/drivers/char/tpm/tpm_tis_core.c
++++ b/drivers/char/tpm/tpm_tis_core.c
+@@ -437,6 +437,9 @@ static void disable_interrupts(struct tpm_chip *chip)
+ u32 intmask;
+ int rc;
+
++ if (priv->irq == 0)
++ return;
++
+ rc = tpm_tis_read32(priv, TPM_INT_ENABLE(priv->locality), &intmask);
+ if (rc < 0)
+ intmask = 0;
+@@ -984,9 +987,12 @@ int tpm_tis_core_init(struct device *dev, struct tpm_tis_data *priv, int irq,
+ if (irq) {
+ tpm_tis_probe_irq_single(chip, intmask, IRQF_SHARED,
+ irq);
+- if (!(chip->flags & TPM_CHIP_FLAG_IRQ))
++ if (!(chip->flags & TPM_CHIP_FLAG_IRQ)) {
+ dev_err(&chip->dev, FW_BUG
+ "TPM interrupt not working, polling instead\n");
++
++ disable_interrupts(chip);
++ }
+ } else {
+ tpm_tis_probe_irq(chip, intmask);
+ }
+diff --git a/drivers/crypto/mxs-dcp.c b/drivers/crypto/mxs-dcp.c
+index 5c5c504dacb6..b0c592073a4a 100644
+--- a/drivers/crypto/mxs-dcp.c
++++ b/drivers/crypto/mxs-dcp.c
+@@ -37,11 +37,11 @@
+ * Null hashes to align with hw behavior on imx6sl and ull
+ * these are flipped for consistency with hw output
+ */
+-const uint8_t sha1_null_hash[] =
++static const uint8_t sha1_null_hash[] =
+ "\x09\x07\xd8\xaf\x90\x18\x60\x95\xef\xbf"
+ "\x55\x32\x0d\x4b\x6b\x5e\xee\xa3\x39\xda";
+
+-const uint8_t sha256_null_hash[] =
++static const uint8_t sha256_null_hash[] =
+ "\x55\xb8\x52\x78\x1b\x99\x95\xa4"
+ "\x4c\x93\x9b\x64\xe4\x41\xae\x27"
+ "\x24\xb9\x6f\x99\xc8\xf4\xfb\x9a"
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index 2b2efe443c36..b64ad9e1f0c3 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -996,6 +996,26 @@ bool dc_commit_state(struct dc *dc, struct dc_state *context)
+ return (result == DC_OK);
+ }
+
++static bool is_flip_pending_in_pipes(struct dc *dc, struct dc_state *context)
++{
++ int i;
++ struct pipe_ctx *pipe;
++
++ for (i = 0; i < MAX_PIPES; i++) {
++ pipe = &context->res_ctx.pipe_ctx[i];
++
++ if (!pipe->plane_state)
++ continue;
++
++ /* Must set to false to start with, due to OR in update function */
++ pipe->plane_state->status.is_flip_pending = false;
++ dc->hwss.update_pending_status(pipe);
++ if (pipe->plane_state->status.is_flip_pending)
++ return true;
++ }
++ return false;
++}
++
+ bool dc_post_update_surfaces_to_stream(struct dc *dc)
+ {
+ int i;
+@@ -1003,6 +1023,9 @@ bool dc_post_update_surfaces_to_stream(struct dc *dc)
+
+ post_surface_trace(dc);
+
++ if (is_flip_pending_in_pipes(dc, context))
++ return true;
++
+ for (i = 0; i < dc->res_pool->pipe_count; i++)
+ if (context->res_ctx.pipe_ctx[i].stream == NULL ||
+ context->res_ctx.pipe_ctx[i].plane_state == NULL) {
+diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
+index e53b7cb2211d..7c0b30c955c3 100644
+--- a/drivers/gpu/drm/msm/msm_gem.c
++++ b/drivers/gpu/drm/msm/msm_gem.c
+@@ -61,7 +61,7 @@ static void sync_for_device(struct msm_gem_object *msm_obj)
+ {
+ struct device *dev = msm_obj->base.dev->dev;
+
+- if (get_dma_ops(dev)) {
++ if (get_dma_ops(dev) && IS_ENABLED(CONFIG_ARM64)) {
+ dma_sync_sg_for_device(dev, msm_obj->sgt->sgl,
+ msm_obj->sgt->nents, DMA_BIDIRECTIONAL);
+ } else {
+@@ -74,7 +74,7 @@ static void sync_for_cpu(struct msm_gem_object *msm_obj)
+ {
+ struct device *dev = msm_obj->base.dev->dev;
+
+- if (get_dma_ops(dev)) {
++ if (get_dma_ops(dev) && IS_ENABLED(CONFIG_ARM64)) {
+ dma_sync_sg_for_cpu(dev, msm_obj->sgt->sgl,
+ msm_obj->sgt->nents, DMA_BIDIRECTIONAL);
+ } else {
+diff --git a/drivers/iio/adc/stm32-adc.c b/drivers/iio/adc/stm32-adc.c
+index 0409dcf5b047..24d5d049567a 100644
+--- a/drivers/iio/adc/stm32-adc.c
++++ b/drivers/iio/adc/stm32-adc.c
+@@ -1308,8 +1308,30 @@ static unsigned int stm32_adc_dma_residue(struct stm32_adc *adc)
+ static void stm32_adc_dma_buffer_done(void *data)
+ {
+ struct iio_dev *indio_dev = data;
++ struct stm32_adc *adc = iio_priv(indio_dev);
++ int residue = stm32_adc_dma_residue(adc);
++
++ /*
++ * In DMA mode the trigger services of IIO are not used
++ * (e.g. no call to iio_trigger_poll).
++ * Calling irq handler associated to the hardware trigger is not
++ * relevant as the conversions have already been done. Data
++ * transfers are performed directly in DMA callback instead.
++ * This implementation avoids to call trigger irq handler that
++ * may sleep, in an atomic context (DMA irq handler context).
++ */
++ dev_dbg(&indio_dev->dev, "%s bufi=%d\n", __func__, adc->bufi);
+
+- iio_trigger_poll_chained(indio_dev->trig);
++ while (residue >= indio_dev->scan_bytes) {
++ u16 *buffer = (u16 *)&adc->rx_buf[adc->bufi];
++
++ iio_push_to_buffers(indio_dev, buffer);
++
++ residue -= indio_dev->scan_bytes;
++ adc->bufi += indio_dev->scan_bytes;
++ if (adc->bufi >= adc->rx_buf_sz)
++ adc->bufi = 0;
++ }
+ }
+
+ static int stm32_adc_dma_start(struct iio_dev *indio_dev)
+@@ -1703,6 +1725,7 @@ static int stm32_adc_probe(struct platform_device *pdev)
+ {
+ struct iio_dev *indio_dev;
+ struct device *dev = &pdev->dev;
++ irqreturn_t (*handler)(int irq, void *p) = NULL;
+ struct stm32_adc *adc;
+ int ret;
+
+@@ -1785,9 +1808,11 @@ static int stm32_adc_probe(struct platform_device *pdev)
+ if (ret < 0)
+ goto err_clk_disable;
+
++ if (!adc->dma_chan)
++ handler = &stm32_adc_trigger_handler;
++
+ ret = iio_triggered_buffer_setup(indio_dev,
+- &iio_pollfunc_store_time,
+- &stm32_adc_trigger_handler,
++ &iio_pollfunc_store_time, handler,
+ &stm32_adc_buffer_setup_ops);
+ if (ret) {
+ dev_err(&pdev->dev, "buffer setup failed\n");
+diff --git a/drivers/iio/adc/xilinx-xadc-core.c b/drivers/iio/adc/xilinx-xadc-core.c
+index 1ae86e7359f7..1b0046cc717b 100644
+--- a/drivers/iio/adc/xilinx-xadc-core.c
++++ b/drivers/iio/adc/xilinx-xadc-core.c
+@@ -103,6 +103,16 @@ static const unsigned int XADC_ZYNQ_UNMASK_TIMEOUT = 500;
+
+ #define XADC_FLAGS_BUFFERED BIT(0)
+
++/*
++ * The XADC hardware supports a samplerate of up to 1MSPS. Unfortunately it does
++ * not have a hardware FIFO. Which means an interrupt is generated for each
++ * conversion sequence. At 1MSPS sample rate the CPU in ZYNQ7000 is completely
++ * overloaded by the interrupts that it soft-lockups. For this reason the driver
++ * limits the maximum samplerate 150kSPS. At this rate the CPU is fairly busy,
++ * but still responsive.
++ */
++#define XADC_MAX_SAMPLERATE 150000
++
+ static void xadc_write_reg(struct xadc *xadc, unsigned int reg,
+ uint32_t val)
+ {
+@@ -675,7 +685,7 @@ static int xadc_trigger_set_state(struct iio_trigger *trigger, bool state)
+
+ spin_lock_irqsave(&xadc->lock, flags);
+ xadc_read_reg(xadc, XADC_AXI_REG_IPIER, &val);
+- xadc_write_reg(xadc, XADC_AXI_REG_IPISR, val & XADC_AXI_INT_EOS);
++ xadc_write_reg(xadc, XADC_AXI_REG_IPISR, XADC_AXI_INT_EOS);
+ if (state)
+ val |= XADC_AXI_INT_EOS;
+ else
+@@ -723,13 +733,14 @@ static int xadc_power_adc_b(struct xadc *xadc, unsigned int seq_mode)
+ {
+ uint16_t val;
+
++ /* Powerdown the ADC-B when it is not needed. */
+ switch (seq_mode) {
+ case XADC_CONF1_SEQ_SIMULTANEOUS:
+ case XADC_CONF1_SEQ_INDEPENDENT:
+- val = XADC_CONF2_PD_ADC_B;
++ val = 0;
+ break;
+ default:
+- val = 0;
++ val = XADC_CONF2_PD_ADC_B;
+ break;
+ }
+
+@@ -798,6 +809,16 @@ static int xadc_preenable(struct iio_dev *indio_dev)
+ if (ret)
+ goto err;
+
++ /*
++ * In simultaneous mode the upper and lower aux channels are samples at
++ * the same time. In this mode the upper 8 bits in the sequencer
++ * register are don't care and the lower 8 bits control two channels
++ * each. As such we must set the bit if either the channel in the lower
++ * group or the upper group is enabled.
++ */
++ if (seq_mode == XADC_CONF1_SEQ_SIMULTANEOUS)
++ scan_mask = ((scan_mask >> 8) | scan_mask) & 0xff0000;
++
+ ret = xadc_write_adc_reg(xadc, XADC_REG_SEQ(1), scan_mask >> 16);
+ if (ret)
+ goto err;
+@@ -824,11 +845,27 @@ static const struct iio_buffer_setup_ops xadc_buffer_ops = {
+ .postdisable = &xadc_postdisable,
+ };
+
++static int xadc_read_samplerate(struct xadc *xadc)
++{
++ unsigned int div;
++ uint16_t val16;
++ int ret;
++
++ ret = xadc_read_adc_reg(xadc, XADC_REG_CONF2, &val16);
++ if (ret)
++ return ret;
++
++ div = (val16 & XADC_CONF2_DIV_MASK) >> XADC_CONF2_DIV_OFFSET;
++ if (div < 2)
++ div = 2;
++
++ return xadc_get_dclk_rate(xadc) / div / 26;
++}
++
+ static int xadc_read_raw(struct iio_dev *indio_dev,
+ struct iio_chan_spec const *chan, int *val, int *val2, long info)
+ {
+ struct xadc *xadc = iio_priv(indio_dev);
+- unsigned int div;
+ uint16_t val16;
+ int ret;
+
+@@ -881,41 +918,31 @@ static int xadc_read_raw(struct iio_dev *indio_dev,
+ *val = -((273150 << 12) / 503975);
+ return IIO_VAL_INT;
+ case IIO_CHAN_INFO_SAMP_FREQ:
+- ret = xadc_read_adc_reg(xadc, XADC_REG_CONF2, &val16);
+- if (ret)
++ ret = xadc_read_samplerate(xadc);
++ if (ret < 0)
+ return ret;
+
+- div = (val16 & XADC_CONF2_DIV_MASK) >> XADC_CONF2_DIV_OFFSET;
+- if (div < 2)
+- div = 2;
+-
+- *val = xadc_get_dclk_rate(xadc) / div / 26;
+-
++ *val = ret;
+ return IIO_VAL_INT;
+ default:
+ return -EINVAL;
+ }
+ }
+
+-static int xadc_write_raw(struct iio_dev *indio_dev,
+- struct iio_chan_spec const *chan, int val, int val2, long info)
++static int xadc_write_samplerate(struct xadc *xadc, int val)
+ {
+- struct xadc *xadc = iio_priv(indio_dev);
+ unsigned long clk_rate = xadc_get_dclk_rate(xadc);
+ unsigned int div;
+
+ if (!clk_rate)
+ return -EINVAL;
+
+- if (info != IIO_CHAN_INFO_SAMP_FREQ)
+- return -EINVAL;
+-
+ if (val <= 0)
+ return -EINVAL;
+
+ /* Max. 150 kSPS */
+- if (val > 150000)
+- val = 150000;
++ if (val > XADC_MAX_SAMPLERATE)
++ val = XADC_MAX_SAMPLERATE;
+
+ val *= 26;
+
+@@ -928,7 +955,7 @@ static int xadc_write_raw(struct iio_dev *indio_dev,
+ * limit.
+ */
+ div = clk_rate / val;
+- if (clk_rate / div / 26 > 150000)
++ if (clk_rate / div / 26 > XADC_MAX_SAMPLERATE)
+ div++;
+ if (div < 2)
+ div = 2;
+@@ -939,6 +966,17 @@ static int xadc_write_raw(struct iio_dev *indio_dev,
+ div << XADC_CONF2_DIV_OFFSET);
+ }
+
++static int xadc_write_raw(struct iio_dev *indio_dev,
++ struct iio_chan_spec const *chan, int val, int val2, long info)
++{
++ struct xadc *xadc = iio_priv(indio_dev);
++
++ if (info != IIO_CHAN_INFO_SAMP_FREQ)
++ return -EINVAL;
++
++ return xadc_write_samplerate(xadc, val);
++}
++
+ static const struct iio_event_spec xadc_temp_events[] = {
+ {
+ .type = IIO_EV_TYPE_THRESH,
+@@ -1226,6 +1264,21 @@ static int xadc_probe(struct platform_device *pdev)
+ if (ret)
+ goto err_free_samplerate_trigger;
+
++ /*
++ * Make sure not to exceed the maximum samplerate since otherwise the
++ * resulting interrupt storm will soft-lock the system.
++ */
++ if (xadc->ops->flags & XADC_FLAGS_BUFFERED) {
++ ret = xadc_read_samplerate(xadc);
++ if (ret < 0)
++ goto err_free_samplerate_trigger;
++ if (ret > XADC_MAX_SAMPLERATE) {
++ ret = xadc_write_samplerate(xadc, XADC_MAX_SAMPLERATE);
++ if (ret < 0)
++ goto err_free_samplerate_trigger;
++ }
++ }
++
+ ret = request_irq(xadc->irq, xadc->ops->interrupt_handler, 0,
+ dev_name(&pdev->dev), indio_dev);
+ if (ret)
+diff --git a/drivers/iio/common/st_sensors/st_sensors_core.c b/drivers/iio/common/st_sensors/st_sensors_core.c
+index 26fbd1bd9413..09279e40c55c 100644
+--- a/drivers/iio/common/st_sensors/st_sensors_core.c
++++ b/drivers/iio/common/st_sensors/st_sensors_core.c
+@@ -93,7 +93,7 @@ int st_sensors_set_odr(struct iio_dev *indio_dev, unsigned int odr)
+ struct st_sensor_odr_avl odr_out = {0, 0};
+ struct st_sensor_data *sdata = iio_priv(indio_dev);
+
+- if (!sdata->sensor_settings->odr.addr)
++ if (!sdata->sensor_settings->odr.mask)
+ return 0;
+
+ err = st_sensors_match_odr(sdata->sensor_settings, odr, &odr_out);
+diff --git a/drivers/infiniband/core/addr.c b/drivers/infiniband/core/addr.c
+index 6e96a2fb97dc..df8f5ceea2dd 100644
+--- a/drivers/infiniband/core/addr.c
++++ b/drivers/infiniband/core/addr.c
+@@ -408,16 +408,15 @@ static int addr6_resolve(struct sockaddr_in6 *src_in,
+ struct flowi6 fl6;
+ struct dst_entry *dst;
+ struct rt6_info *rt;
+- int ret;
+
+ memset(&fl6, 0, sizeof fl6);
+ fl6.daddr = dst_in->sin6_addr;
+ fl6.saddr = src_in->sin6_addr;
+ fl6.flowi6_oif = addr->bound_dev_if;
+
+- ret = ipv6_stub->ipv6_dst_lookup(addr->net, NULL, &dst, &fl6);
+- if (ret < 0)
+- return ret;
++ dst = ipv6_stub->ipv6_dst_lookup_flow(addr->net, NULL, &fl6, NULL);
++ if (IS_ERR(dst))
++ return PTR_ERR(dst);
+
+ rt = (struct rt6_info *)dst;
+ if (ipv6_addr_any(&src_in->sin6_addr)) {
+diff --git a/drivers/infiniband/sw/rxe/rxe_net.c b/drivers/infiniband/sw/rxe/rxe_net.c
+index 54add70c22b5..7903bd5c639e 100644
+--- a/drivers/infiniband/sw/rxe/rxe_net.c
++++ b/drivers/infiniband/sw/rxe/rxe_net.c
+@@ -154,10 +154,12 @@ static struct dst_entry *rxe_find_route6(struct net_device *ndev,
+ memcpy(&fl6.daddr, daddr, sizeof(*daddr));
+ fl6.flowi6_proto = IPPROTO_UDP;
+
+- if (unlikely(ipv6_stub->ipv6_dst_lookup(sock_net(recv_sockets.sk6->sk),
+- recv_sockets.sk6->sk, &ndst, &fl6))) {
++ ndst = ipv6_stub->ipv6_dst_lookup_flow(sock_net(recv_sockets.sk6->sk),
++ recv_sockets.sk6->sk, &fl6,
++ NULL);
++ if (unlikely(IS_ERR(ndst))) {
+ pr_err_ratelimited("no route to %pI6\n", daddr);
+- goto put;
++ return NULL;
+ }
+
+ if (unlikely(ndst->error)) {
+diff --git a/drivers/net/dsa/b53/b53_common.c b/drivers/net/dsa/b53/b53_common.c
+index ac5d945b934a..11f3993ab7f3 100644
+--- a/drivers/net/dsa/b53/b53_common.c
++++ b/drivers/net/dsa/b53/b53_common.c
+@@ -1253,6 +1253,10 @@ static int b53_arl_rw_op(struct b53_device *dev, unsigned int op)
+ reg |= ARLTBL_RW;
+ else
+ reg &= ~ARLTBL_RW;
++ if (dev->vlan_enabled)
++ reg &= ~ARLTBL_IVL_SVL_SELECT;
++ else
++ reg |= ARLTBL_IVL_SVL_SELECT;
+ b53_write8(dev, B53_ARLIO_PAGE, B53_ARLTBL_RW_CTRL, reg);
+
+ return b53_arl_op_wait(dev);
+@@ -1262,6 +1266,7 @@ static int b53_arl_read(struct b53_device *dev, u64 mac,
+ u16 vid, struct b53_arl_entry *ent, u8 *idx,
+ bool is_valid)
+ {
++ DECLARE_BITMAP(free_bins, B53_ARLTBL_MAX_BIN_ENTRIES);
+ unsigned int i;
+ int ret;
+
+@@ -1269,6 +1274,8 @@ static int b53_arl_read(struct b53_device *dev, u64 mac,
+ if (ret)
+ return ret;
+
++ bitmap_zero(free_bins, dev->num_arl_entries);
++
+ /* Read the bins */
+ for (i = 0; i < dev->num_arl_entries; i++) {
+ u64 mac_vid;
+@@ -1280,13 +1287,24 @@ static int b53_arl_read(struct b53_device *dev, u64 mac,
+ B53_ARLTBL_DATA_ENTRY(i), &fwd_entry);
+ b53_arl_to_entry(ent, mac_vid, fwd_entry);
+
+- if (!(fwd_entry & ARLTBL_VALID))
++ if (!(fwd_entry & ARLTBL_VALID)) {
++ set_bit(i, free_bins);
+ continue;
++ }
+ if ((mac_vid & ARLTBL_MAC_MASK) != mac)
+ continue;
++ if (dev->vlan_enabled &&
++ ((mac_vid >> ARLTBL_VID_S) & ARLTBL_VID_MASK) != vid)
++ continue;
+ *idx = i;
++ return 0;
+ }
+
++ if (bitmap_weight(free_bins, dev->num_arl_entries) == 0)
++ return -ENOSPC;
++
++ *idx = find_first_bit(free_bins, dev->num_arl_entries);
++
+ return -ENOENT;
+ }
+
+@@ -1316,10 +1334,21 @@ static int b53_arl_op(struct b53_device *dev, int op, int port,
+ if (op)
+ return ret;
+
+- /* We could not find a matching MAC, so reset to a new entry */
+- if (ret) {
++ switch (ret) {
++ case -ENOSPC:
++ dev_dbg(dev->dev, "{%pM,%.4d} no space left in ARL\n",
++ addr, vid);
++ return is_valid ? ret : 0;
++ case -ENOENT:
++ /* We could not find a matching MAC, so reset to a new entry */
++ dev_dbg(dev->dev, "{%pM,%.4d} not found, using idx: %d\n",
++ addr, vid, idx);
+ fwd_entry = 0;
+- idx = 1;
++ break;
++ default:
++ dev_dbg(dev->dev, "{%pM,%.4d} found, using idx: %d\n",
++ addr, vid, idx);
++ break;
+ }
+
+ memset(&ent, 0, sizeof(ent));
+diff --git a/drivers/net/dsa/b53/b53_regs.h b/drivers/net/dsa/b53/b53_regs.h
+index 2a9f421680aa..c90985c294a2 100644
+--- a/drivers/net/dsa/b53/b53_regs.h
++++ b/drivers/net/dsa/b53/b53_regs.h
+@@ -292,6 +292,7 @@
+ /* ARL Table Read/Write Register (8 bit) */
+ #define B53_ARLTBL_RW_CTRL 0x00
+ #define ARLTBL_RW BIT(0)
++#define ARLTBL_IVL_SVL_SELECT BIT(6)
+ #define ARLTBL_START_DONE BIT(7)
+
+ /* MAC Address Index Register (48 bit) */
+@@ -304,7 +305,7 @@
+ *
+ * BCM5325 and BCM5365 share most definitions below
+ */
+-#define B53_ARLTBL_MAC_VID_ENTRY(n) (0x10 * (n))
++#define B53_ARLTBL_MAC_VID_ENTRY(n) ((0x10 * (n)) + 0x10)
+ #define ARLTBL_MAC_MASK 0xffffffffffffULL
+ #define ARLTBL_VID_S 48
+ #define ARLTBL_VID_MASK_25 0xff
+@@ -316,13 +317,16 @@
+ #define ARLTBL_VALID_25 BIT(63)
+
+ /* ARL Table Data Entry N Registers (32 bit) */
+-#define B53_ARLTBL_DATA_ENTRY(n) ((0x10 * (n)) + 0x08)
++#define B53_ARLTBL_DATA_ENTRY(n) ((0x10 * (n)) + 0x18)
+ #define ARLTBL_DATA_PORT_ID_MASK 0x1ff
+ #define ARLTBL_TC(tc) ((3 & tc) << 11)
+ #define ARLTBL_AGE BIT(14)
+ #define ARLTBL_STATIC BIT(15)
+ #define ARLTBL_VALID BIT(16)
+
++/* Maximum number of bin entries in the ARL for all switches */
++#define B53_ARLTBL_MAX_BIN_ENTRIES 4
++
+ /* ARL Search Control Register (8 bit) */
+ #define B53_ARL_SRCH_CTL 0x50
+ #define B53_ARL_SRCH_CTL_25 0x20
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+index 736a6a5fbd98..789c206b515e 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+@@ -998,6 +998,8 @@ static void bcmgenet_get_ethtool_stats(struct net_device *dev,
+ if (netif_running(dev))
+ bcmgenet_update_mib_counters(priv);
+
++ dev->netdev_ops->ndo_get_stats(dev);
++
+ for (i = 0; i < BCMGENET_STATS_LEN; i++) {
+ const struct bcmgenet_stats *s;
+ char *p;
+@@ -3211,6 +3213,7 @@ static struct net_device_stats *bcmgenet_get_stats(struct net_device *dev)
+ dev->stats.rx_packets = rx_packets;
+ dev->stats.rx_errors = rx_errors;
+ dev->stats.rx_missed_errors = rx_errors;
++ dev->stats.rx_dropped = rx_dropped;
+ return &dev->stats;
+ }
+
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cudbg_lib.c b/drivers/net/ethernet/chelsio/cxgb4/cudbg_lib.c
+index b766362031c3..5bc58429bb1c 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cudbg_lib.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cudbg_lib.c
+@@ -1065,9 +1065,9 @@ static void cudbg_t4_fwcache(struct cudbg_init *pdbg_init,
+ }
+ }
+
+-static unsigned long cudbg_mem_region_size(struct cudbg_init *pdbg_init,
+- struct cudbg_error *cudbg_err,
+- u8 mem_type)
++static int cudbg_mem_region_size(struct cudbg_init *pdbg_init,
++ struct cudbg_error *cudbg_err,
++ u8 mem_type, unsigned long *region_size)
+ {
+ struct adapter *padap = pdbg_init->adap;
+ struct cudbg_meminfo mem_info;
+@@ -1076,15 +1076,23 @@ static unsigned long cudbg_mem_region_size(struct cudbg_init *pdbg_init,
+
+ memset(&mem_info, 0, sizeof(struct cudbg_meminfo));
+ rc = cudbg_fill_meminfo(padap, &mem_info);
+- if (rc)
++ if (rc) {
++ cudbg_err->sys_err = rc;
+ return rc;
++ }
+
+ cudbg_t4_fwcache(pdbg_init, cudbg_err);
+ rc = cudbg_meminfo_get_mem_index(padap, &mem_info, mem_type, &mc_idx);
+- if (rc)
++ if (rc) {
++ cudbg_err->sys_err = rc;
+ return rc;
++ }
++
++ if (region_size)
++ *region_size = mem_info.avail[mc_idx].limit -
++ mem_info.avail[mc_idx].base;
+
+- return mem_info.avail[mc_idx].limit - mem_info.avail[mc_idx].base;
++ return 0;
+ }
+
+ static int cudbg_collect_mem_region(struct cudbg_init *pdbg_init,
+@@ -1092,7 +1100,12 @@ static int cudbg_collect_mem_region(struct cudbg_init *pdbg_init,
+ struct cudbg_error *cudbg_err,
+ u8 mem_type)
+ {
+- unsigned long size = cudbg_mem_region_size(pdbg_init, cudbg_err, mem_type);
++ unsigned long size = 0;
++ int rc;
++
++ rc = cudbg_mem_region_size(pdbg_init, cudbg_err, mem_type, &size);
++ if (rc)
++ return rc;
+
+ return cudbg_read_fw_mem(pdbg_init, dbg_buff, mem_type, size,
+ cudbg_err);
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_ptp.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_ptp.c
+index 758f2b836328..ff7e58a8c90f 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_ptp.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_ptp.c
+@@ -311,32 +311,17 @@ static int cxgb4_ptp_adjtime(struct ptp_clock_info *ptp, s64 delta)
+ */
+ static int cxgb4_ptp_gettime(struct ptp_clock_info *ptp, struct timespec64 *ts)
+ {
+- struct adapter *adapter = (struct adapter *)container_of(ptp,
+- struct adapter, ptp_clock_info);
+- struct fw_ptp_cmd c;
++ struct adapter *adapter = container_of(ptp, struct adapter,
++ ptp_clock_info);
+ u64 ns;
+- int err;
+-
+- memset(&c, 0, sizeof(c));
+- c.op_to_portid = cpu_to_be32(FW_CMD_OP_V(FW_PTP_CMD) |
+- FW_CMD_REQUEST_F |
+- FW_CMD_READ_F |
+- FW_PTP_CMD_PORTID_V(0));
+- c.retval_len16 = cpu_to_be32(FW_CMD_LEN16_V(sizeof(c) / 16));
+- c.u.ts.sc = FW_PTP_SC_GET_TIME;
+
+- err = t4_wr_mbox(adapter, adapter->mbox, &c, sizeof(c), &c);
+- if (err < 0) {
+- dev_err(adapter->pdev_dev,
+- "PTP: %s error %d\n", __func__, -err);
+- return err;
+- }
++ ns = t4_read_reg(adapter, T5_PORT_REG(0, MAC_PORT_PTP_SUM_LO_A));
++ ns |= (u64)t4_read_reg(adapter,
++ T5_PORT_REG(0, MAC_PORT_PTP_SUM_HI_A)) << 32;
+
+ /* convert to timespec*/
+- ns = be64_to_cpu(c.u.ts.tm);
+ *ts = ns_to_timespec64(ns);
+-
+- return err;
++ return 0;
+ }
+
+ /**
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/t4_regs.h b/drivers/net/ethernet/chelsio/cxgb4/t4_regs.h
+index eb222d40ddbf..a64eb6ac5c76 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/t4_regs.h
++++ b/drivers/net/ethernet/chelsio/cxgb4/t4_regs.h
+@@ -1896,6 +1896,9 @@
+
+ #define MAC_PORT_CFG2_A 0x818
+
++#define MAC_PORT_PTP_SUM_LO_A 0x990
++#define MAC_PORT_PTP_SUM_HI_A 0x994
++
+ #define MPS_CMN_CTL_A 0x9000
+
+ #define COUNTPAUSEMCRX_S 5
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+index c8928ce69185..3050853774ee 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+@@ -2217,12 +2217,11 @@ static int mlx5e_route_lookup_ipv6(struct mlx5e_priv *priv,
+ #if IS_ENABLED(CONFIG_INET) && IS_ENABLED(CONFIG_IPV6)
+ struct mlx5e_rep_priv *uplink_rpriv;
+ struct mlx5_eswitch *esw = priv->mdev->priv.eswitch;
+- int ret;
+
+- ret = ipv6_stub->ipv6_dst_lookup(dev_net(mirred_dev), NULL, &dst,
+- fl6);
+- if (ret < 0)
+- return ret;
++ dst = ipv6_stub->ipv6_dst_lookup_flow(dev_net(mirred_dev), NULL, fl6,
++ NULL);
++ if (IS_ERR(dst))
++ return PTR_ERR(dst);
+
+ if (!(*out_ttl))
+ *out_ttl = ip6_dst_hoplimit(dst);
+@@ -2428,7 +2427,7 @@ static int mlx5e_create_encap_header_ipv6(struct mlx5e_priv *priv,
+ int max_encap_size = MLX5_CAP_ESW(priv->mdev, max_encap_header_size);
+ int ipv6_encap_size = ETH_HLEN + sizeof(struct ipv6hdr) + VXLAN_HLEN;
+ struct ip_tunnel_key *tun_key = &e->tun_info.key;
+- struct net_device *out_dev;
++ struct net_device *out_dev = NULL;
+ struct neighbour *n = NULL;
+ struct flowi6 fl6 = {};
+ u8 nud_state, tos, ttl;
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/core_acl_flex_actions.c b/drivers/net/ethernet/mellanox/mlxsw/core_acl_flex_actions.c
+index c51b2adfc1e1..2cbfa5cfefab 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/core_acl_flex_actions.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/core_acl_flex_actions.c
+@@ -316,7 +316,7 @@ struct mlxsw_afa_block *mlxsw_afa_block_create(struct mlxsw_afa *mlxsw_afa)
+
+ block = kzalloc(sizeof(*block), GFP_KERNEL);
+ if (!block)
+- return NULL;
++ return ERR_PTR(-ENOMEM);
+ INIT_LIST_HEAD(&block->resource_list);
+ block->afa = mlxsw_afa;
+
+@@ -344,7 +344,7 @@ err_second_set_create:
+ mlxsw_afa_set_destroy(block->first_set);
+ err_first_set_create:
+ kfree(block);
+- return NULL;
++ return ERR_PTR(-ENOMEM);
+ }
+ EXPORT_SYMBOL(mlxsw_afa_block_create);
+
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum2_acl_tcam.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum2_acl_tcam.c
+index 8ca77f3e8f27..ffd4b055fead 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum2_acl_tcam.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum2_acl_tcam.c
+@@ -88,8 +88,8 @@ static int mlxsw_sp2_acl_tcam_init(struct mlxsw_sp *mlxsw_sp, void *priv,
+ * to be written using PEFA register to all indexes for all regions.
+ */
+ afa_block = mlxsw_afa_block_create(mlxsw_sp->afa);
+- if (!afa_block) {
+- err = -ENOMEM;
++ if (IS_ERR(afa_block)) {
++ err = PTR_ERR(afa_block);
+ goto err_afa_block;
+ }
+ err = mlxsw_afa_block_continue(afa_block);
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl.c
+index c4f9238591e6..c99f5542da1e 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl.c
+@@ -442,7 +442,8 @@ mlxsw_sp_acl_rulei_create(struct mlxsw_sp_acl *acl)
+
+ rulei = kzalloc(sizeof(*rulei), GFP_KERNEL);
+ if (!rulei)
+- return NULL;
++ return ERR_PTR(-ENOMEM);
++
+ rulei->act_block = mlxsw_afa_block_create(acl->mlxsw_sp->afa);
+ if (IS_ERR(rulei->act_block)) {
+ err = PTR_ERR(rulei->act_block);
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_mr_tcam.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_mr_tcam.c
+index 346f4a5fe053..221aa6a474eb 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_mr_tcam.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_mr_tcam.c
+@@ -199,8 +199,8 @@ mlxsw_sp_mr_tcam_afa_block_create(struct mlxsw_sp *mlxsw_sp,
+ int err;
+
+ afa_block = mlxsw_afa_block_create(mlxsw_sp->afa);
+- if (!afa_block)
+- return ERR_PTR(-ENOMEM);
++ if (IS_ERR(afa_block))
++ return afa_block;
+
+ err = mlxsw_afa_block_append_allocated_counter(afa_block,
+ counter_index);
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-meson8b.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-meson8b.c
+index 0a17535f13ae..03bda2e0b7a8 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-meson8b.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-meson8b.c
+@@ -125,6 +125,7 @@ static int meson8b_init_rgmii_tx_clk(struct meson8b_dwmac *dwmac)
+ { .div = 5, .val = 5, },
+ { .div = 6, .val = 6, },
+ { .div = 7, .val = 7, },
++ { /* end of array */ }
+ };
+
+ clk_configs = devm_kzalloc(dev, sizeof(*clk_configs), GFP_KERNEL);
+diff --git a/drivers/net/geneve.c b/drivers/net/geneve.c
+index ff83408733d4..36444de701cd 100644
+--- a/drivers/net/geneve.c
++++ b/drivers/net/geneve.c
+@@ -801,7 +801,9 @@ static struct dst_entry *geneve_get_v6_dst(struct sk_buff *skb,
+ if (dst)
+ return dst;
+ }
+- if (ipv6_stub->ipv6_dst_lookup(geneve->net, gs6->sock->sk, &dst, fl6)) {
++ dst = ipv6_stub->ipv6_dst_lookup_flow(geneve->net, gs6->sock->sk, fl6,
++ NULL);
++ if (IS_ERR(dst)) {
+ netdev_dbg(dev, "no route to %pI6\n", &fl6->daddr);
+ return ERR_PTR(-ENETUNREACH);
+ }
+diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
+index df7d6de7c59c..9e2612562981 100644
+--- a/drivers/net/macsec.c
++++ b/drivers/net/macsec.c
+@@ -3238,11 +3238,11 @@ static int macsec_newlink(struct net *net, struct net_device *dev,
+ struct netlink_ext_ack *extack)
+ {
+ struct macsec_dev *macsec = macsec_priv(dev);
++ rx_handler_func_t *rx_handler;
++ u8 icv_len = DEFAULT_ICV_LEN;
+ struct net_device *real_dev;
+- int err;
++ int err, mtu;
+ sci_t sci;
+- u8 icv_len = DEFAULT_ICV_LEN;
+- rx_handler_func_t *rx_handler;
+
+ if (!tb[IFLA_LINK])
+ return -EINVAL;
+@@ -3258,7 +3258,11 @@ static int macsec_newlink(struct net *net, struct net_device *dev,
+
+ if (data && data[IFLA_MACSEC_ICV_LEN])
+ icv_len = nla_get_u8(data[IFLA_MACSEC_ICV_LEN]);
+- dev->mtu = real_dev->mtu - icv_len - macsec_extra_len(true);
++ mtu = real_dev->mtu - icv_len - macsec_extra_len(true);
++ if (mtu < 0)
++ dev->mtu = 0;
++ else
++ dev->mtu = mtu;
+
+ rx_handler = rtnl_dereference(real_dev->rx_handler);
+ if (rx_handler && rx_handler != macsec_handle_frame)
+diff --git a/drivers/net/macvlan.c b/drivers/net/macvlan.c
+index a8f338dc0dfa..225bfc808112 100644
+--- a/drivers/net/macvlan.c
++++ b/drivers/net/macvlan.c
+@@ -1676,7 +1676,7 @@ static int macvlan_device_event(struct notifier_block *unused,
+ struct macvlan_dev,
+ list);
+
+- if (macvlan_sync_address(vlan->dev, dev->dev_addr))
++ if (vlan && macvlan_sync_address(vlan->dev, dev->dev_addr))
+ return NOTIFY_BAD;
+
+ break;
+diff --git a/drivers/net/team/team.c b/drivers/net/team/team.c
+index 95524c06e64c..53d9562a8818 100644
+--- a/drivers/net/team/team.c
++++ b/drivers/net/team/team.c
+@@ -475,6 +475,9 @@ static const struct team_mode *team_mode_get(const char *kind)
+ struct team_mode_item *mitem;
+ const struct team_mode *mode = NULL;
+
++ if (!try_module_get(THIS_MODULE))
++ return NULL;
++
+ spin_lock(&mode_list_lock);
+ mitem = __find_mode(kind);
+ if (!mitem) {
+@@ -490,6 +493,7 @@ static const struct team_mode *team_mode_get(const char *kind)
+ }
+
+ spin_unlock(&mode_list_lock);
++ module_put(THIS_MODULE);
+ return mode;
+ }
+
+diff --git a/drivers/net/vrf.c b/drivers/net/vrf.c
+index 9f895083bc0a..b55eeb8f8fa3 100644
+--- a/drivers/net/vrf.c
++++ b/drivers/net/vrf.c
+@@ -192,8 +192,8 @@ static netdev_tx_t vrf_process_v6_outbound(struct sk_buff *skb,
+ fl6.flowi6_proto = iph->nexthdr;
+ fl6.flowi6_flags = FLOWI_FLAG_SKIP_NH_OIF;
+
+- dst = ip6_route_output(net, NULL, &fl6);
+- if (dst == dst_null)
++ dst = ip6_dst_lookup_flow(net, NULL, &fl6, NULL);
++ if (IS_ERR(dst) || dst == dst_null)
+ goto err;
+
+ skb_dst_drop(skb);
+@@ -478,7 +478,8 @@ static struct sk_buff *vrf_ip6_out(struct net_device *vrf_dev,
+ if (rt6_need_strict(&ipv6_hdr(skb)->daddr))
+ return skb;
+
+- if (qdisc_tx_is_default(vrf_dev))
++ if (qdisc_tx_is_default(vrf_dev) ||
++ IP6CB(skb)->flags & IP6SKB_XFRM_TRANSFORMED)
+ return vrf_ip6_out_direct(vrf_dev, sk, skb);
+
+ return vrf_ip6_out_redirect(vrf_dev, skb);
+@@ -692,7 +693,8 @@ static struct sk_buff *vrf_ip_out(struct net_device *vrf_dev,
+ ipv4_is_lbcast(ip_hdr(skb)->daddr))
+ return skb;
+
+- if (qdisc_tx_is_default(vrf_dev))
++ if (qdisc_tx_is_default(vrf_dev) ||
++ IPCB(skb)->flags & IPSKB_XFRM_TRANSFORMED)
+ return vrf_ip_out_direct(vrf_dev, sk, skb);
+
+ return vrf_ip_out_redirect(vrf_dev, skb);
+diff --git a/drivers/net/vxlan.c b/drivers/net/vxlan.c
+index 64751b089482..7ee0bad18466 100644
+--- a/drivers/net/vxlan.c
++++ b/drivers/net/vxlan.c
+@@ -1963,7 +1963,6 @@ static struct dst_entry *vxlan6_get_route(struct vxlan_dev *vxlan,
+ bool use_cache = ip_tunnel_dst_cache_usable(skb, info);
+ struct dst_entry *ndst;
+ struct flowi6 fl6;
+- int err;
+
+ if (!sock6)
+ return ERR_PTR(-EIO);
+@@ -1986,10 +1985,9 @@ static struct dst_entry *vxlan6_get_route(struct vxlan_dev *vxlan,
+ fl6.fl6_dport = dport;
+ fl6.fl6_sport = sport;
+
+- err = ipv6_stub->ipv6_dst_lookup(vxlan->net,
+- sock6->sock->sk,
+- &ndst, &fl6);
+- if (unlikely(err < 0)) {
++ ndst = ipv6_stub->ipv6_dst_lookup_flow(vxlan->net, sock6->sock->sk,
++ &fl6, NULL);
++ if (unlikely(IS_ERR(ndst))) {
+ netdev_dbg(dev, "no route to %pI6\n", daddr);
+ return ERR_PTR(-ENETUNREACH);
+ }
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/rx.c b/drivers/net/wireless/intel/iwlwifi/mvm/rx.c
+index e6a67bc02209..bdb87d8e9644 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/rx.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/rx.c
+@@ -587,6 +587,7 @@ void iwl_mvm_rx_rx_mpdu(struct iwl_mvm *mvm, struct napi_struct *napi,
+
+ struct iwl_mvm_stat_data {
+ struct iwl_mvm *mvm;
++ __le32 flags;
+ __le32 mac_id;
+ u8 beacon_filter_average_energy;
+ void *general;
+@@ -630,6 +631,13 @@ static void iwl_mvm_stat_iterator(void *_data, u8 *mac,
+ }
+ }
+
++ /* make sure that beacon statistics don't go backwards with TCM
++ * request to clear statistics
++ */
++ if (le32_to_cpu(data->flags) & IWL_STATISTICS_REPLY_FLG_CLEAR)
++ mvmvif->beacon_stats.accu_num_beacons +=
++ mvmvif->beacon_stats.num_beacons;
++
+ if (mvmvif->id != id)
+ return;
+
+@@ -790,6 +798,7 @@ void iwl_mvm_handle_rx_statistics(struct iwl_mvm *mvm,
+
+ flags = stats->flag;
+ }
++ data.flags = flags;
+
+ iwl_mvm_rx_stats_check_trigger(mvm, pkt);
+
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c b/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c
+index 7b1dff92b709..93f396d7e684 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c
+@@ -1231,6 +1231,9 @@ void iwl_trans_pcie_dyn_txq_free(struct iwl_trans *trans, int queue)
+
+ iwl_pcie_gen2_txq_unmap(trans, queue);
+
++ iwl_pcie_gen2_txq_free_memory(trans, trans_pcie->txq[queue]);
++ trans_pcie->txq[queue] = NULL;
++
+ IWL_DEBUG_TX_QUEUES(trans, "Deactivate queue %d\n", queue);
+ }
+
+diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
+index e8bc25aed44c..588864beabd8 100644
+--- a/drivers/nvme/host/multipath.c
++++ b/drivers/nvme/host/multipath.c
+@@ -402,7 +402,7 @@ static int nvme_update_ana_state(struct nvme_ctrl *ctrl,
+ if (!nr_nsids)
+ return 0;
+
+- down_write(&ctrl->namespaces_rwsem);
++ down_read(&ctrl->namespaces_rwsem);
+ list_for_each_entry(ns, &ctrl->namespaces, list) {
+ unsigned nsid = le32_to_cpu(desc->nsids[n]);
+
+@@ -413,7 +413,7 @@ static int nvme_update_ana_state(struct nvme_ctrl *ctrl,
+ if (++n == nr_nsids)
+ break;
+ }
+- up_write(&ctrl->namespaces_rwsem);
++ up_read(&ctrl->namespaces_rwsem);
+ return 0;
+ }
+
+diff --git a/drivers/pci/pcie/aspm.c b/drivers/pci/pcie/aspm.c
+index af79a7168677..db2efa219028 100644
+--- a/drivers/pci/pcie/aspm.c
++++ b/drivers/pci/pcie/aspm.c
+@@ -67,6 +67,7 @@ struct pcie_link_state {
+ u32 clkpm_capable:1; /* Clock PM capable? */
+ u32 clkpm_enabled:1; /* Current Clock PM state */
+ u32 clkpm_default:1; /* Default Clock PM state by BIOS */
++ u32 clkpm_disable:1; /* Clock PM disabled */
+
+ /* Exit latencies */
+ struct aspm_latency latency_up; /* Upstream direction exit latency */
+@@ -164,8 +165,11 @@ static void pcie_set_clkpm_nocheck(struct pcie_link_state *link, int enable)
+
+ static void pcie_set_clkpm(struct pcie_link_state *link, int enable)
+ {
+- /* Don't enable Clock PM if the link is not Clock PM capable */
+- if (!link->clkpm_capable)
++ /*
++ * Don't enable Clock PM if the link is not Clock PM capable
++ * or Clock PM is disabled
++ */
++ if (!link->clkpm_capable || link->clkpm_disable)
+ enable = 0;
+ /* Need nothing if the specified equals to current state */
+ if (link->clkpm_enabled == enable)
+@@ -195,7 +199,8 @@ static void pcie_clkpm_cap_init(struct pcie_link_state *link, int blacklist)
+ }
+ link->clkpm_enabled = enabled;
+ link->clkpm_default = enabled;
+- link->clkpm_capable = (blacklist) ? 0 : capable;
++ link->clkpm_capable = capable;
++ link->clkpm_disable = blacklist ? 1 : 0;
+ }
+
+ static bool pcie_retrain_link(struct pcie_link_state *link)
+@@ -1106,10 +1111,9 @@ static void __pci_disable_link_state(struct pci_dev *pdev, int state, bool sem)
+ link->aspm_disable |= ASPM_STATE_L1;
+ pcie_config_aspm_link(link, policy_to_aspm_state(link));
+
+- if (state & PCIE_LINK_STATE_CLKPM) {
+- link->clkpm_capable = 0;
+- pcie_set_clkpm(link, 0);
+- }
++ if (state & PCIE_LINK_STATE_CLKPM)
++ link->clkpm_disable = 1;
++ pcie_set_clkpm(link, policy_to_clkpm_state(link));
+ mutex_unlock(&aspm_lock);
+ if (sem)
+ up_read(&pci_bus_sem);
+diff --git a/drivers/pwm/pwm-bcm2835.c b/drivers/pwm/pwm-bcm2835.c
+index db001cba937f..e340ad79a1ec 100644
+--- a/drivers/pwm/pwm-bcm2835.c
++++ b/drivers/pwm/pwm-bcm2835.c
+@@ -166,6 +166,7 @@ static int bcm2835_pwm_probe(struct platform_device *pdev)
+
+ pc->chip.dev = &pdev->dev;
+ pc->chip.ops = &bcm2835_pwm_ops;
++ pc->chip.base = -1;
+ pc->chip.npwm = 2;
+ pc->chip.of_xlate = of_pwm_xlate_with_flags;
+ pc->chip.of_pwm_n_cells = 3;
+diff --git a/drivers/pwm/pwm-rcar.c b/drivers/pwm/pwm-rcar.c
+index 748f614d5375..b7d71bf297d6 100644
+--- a/drivers/pwm/pwm-rcar.c
++++ b/drivers/pwm/pwm-rcar.c
+@@ -232,24 +232,28 @@ static int rcar_pwm_probe(struct platform_device *pdev)
+ rcar_pwm->chip.base = -1;
+ rcar_pwm->chip.npwm = 1;
+
++ pm_runtime_enable(&pdev->dev);
++
+ ret = pwmchip_add(&rcar_pwm->chip);
+ if (ret < 0) {
+ dev_err(&pdev->dev, "failed to register PWM chip: %d\n", ret);
++ pm_runtime_disable(&pdev->dev);
+ return ret;
+ }
+
+- pm_runtime_enable(&pdev->dev);
+-
+ return 0;
+ }
+
+ static int rcar_pwm_remove(struct platform_device *pdev)
+ {
+ struct rcar_pwm_chip *rcar_pwm = platform_get_drvdata(pdev);
++ int ret;
++
++ ret = pwmchip_remove(&rcar_pwm->chip);
+
+ pm_runtime_disable(&pdev->dev);
+
+- return pwmchip_remove(&rcar_pwm->chip);
++ return ret;
+ }
+
+ static const struct of_device_id rcar_pwm_of_table[] = {
+diff --git a/drivers/pwm/pwm-renesas-tpu.c b/drivers/pwm/pwm-renesas-tpu.c
+index 29267d12fb4c..9c7962f2f0aa 100644
+--- a/drivers/pwm/pwm-renesas-tpu.c
++++ b/drivers/pwm/pwm-renesas-tpu.c
+@@ -423,16 +423,17 @@ static int tpu_probe(struct platform_device *pdev)
+ tpu->chip.base = -1;
+ tpu->chip.npwm = TPU_CHANNEL_MAX;
+
++ pm_runtime_enable(&pdev->dev);
++
+ ret = pwmchip_add(&tpu->chip);
+ if (ret < 0) {
+ dev_err(&pdev->dev, "failed to register PWM chip\n");
++ pm_runtime_disable(&pdev->dev);
+ return ret;
+ }
+
+ dev_info(&pdev->dev, "TPU PWM %d registered\n", tpu->pdev->id);
+
+- pm_runtime_enable(&pdev->dev);
+-
+ return 0;
+ }
+
+@@ -442,12 +443,10 @@ static int tpu_remove(struct platform_device *pdev)
+ int ret;
+
+ ret = pwmchip_remove(&tpu->chip);
+- if (ret)
+- return ret;
+
+ pm_runtime_disable(&pdev->dev);
+
+- return 0;
++ return ret;
+ }
+
+ #ifdef CONFIG_OF
+diff --git a/drivers/s390/cio/device.c b/drivers/s390/cio/device.c
+index 1540229a37bb..c9bc9a6bd73b 100644
+--- a/drivers/s390/cio/device.c
++++ b/drivers/s390/cio/device.c
+@@ -827,8 +827,10 @@ static void io_subchannel_register(struct ccw_device *cdev)
+ * Now we know this subchannel will stay, we can throw
+ * our delayed uevent.
+ */
+- dev_set_uevent_suppress(&sch->dev, 0);
+- kobject_uevent(&sch->dev.kobj, KOBJ_ADD);
++ if (dev_get_uevent_suppress(&sch->dev)) {
++ dev_set_uevent_suppress(&sch->dev, 0);
++ kobject_uevent(&sch->dev.kobj, KOBJ_ADD);
++ }
+ /* make it known to the system */
+ ret = ccw_device_add(cdev);
+ if (ret) {
+@@ -1036,8 +1038,11 @@ static int io_subchannel_probe(struct subchannel *sch)
+ * Throw the delayed uevent for the subchannel, register
+ * the ccw_device and exit.
+ */
+- dev_set_uevent_suppress(&sch->dev, 0);
+- kobject_uevent(&sch->dev.kobj, KOBJ_ADD);
++ if (dev_get_uevent_suppress(&sch->dev)) {
++ /* should always be the case for the console */
++ dev_set_uevent_suppress(&sch->dev, 0);
++ kobject_uevent(&sch->dev.kobj, KOBJ_ADD);
++ }
+ cdev = sch_get_cdev(sch);
+ rc = ccw_device_add(cdev);
+ if (rc) {
+diff --git a/drivers/scsi/lpfc/lpfc_nvme.c b/drivers/scsi/lpfc/lpfc_nvme.c
+index f73726e55e44..e3013858937b 100644
+--- a/drivers/scsi/lpfc/lpfc_nvme.c
++++ b/drivers/scsi/lpfc/lpfc_nvme.c
+@@ -342,13 +342,15 @@ lpfc_nvme_remoteport_delete(struct nvme_fc_remote_port *remoteport)
+ if (ndlp->upcall_flags & NLP_WAIT_FOR_UNREG) {
+ ndlp->nrport = NULL;
+ ndlp->upcall_flags &= ~NLP_WAIT_FOR_UNREG;
+- }
+- spin_unlock_irq(&vport->phba->hbalock);
++ spin_unlock_irq(&vport->phba->hbalock);
+
+- /* Remove original register reference. The host transport
+- * won't reference this rport/remoteport any further.
+- */
+- lpfc_nlp_put(ndlp);
++ /* Remove original register reference. The host transport
++ * won't reference this rport/remoteport any further.
++ */
++ lpfc_nlp_put(ndlp);
++ } else {
++ spin_unlock_irq(&vport->phba->hbalock);
++ }
+
+ rport_err:
+ return;
+diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
+index a801917d3c19..a56a939792ac 100644
+--- a/drivers/scsi/lpfc/lpfc_sli.c
++++ b/drivers/scsi/lpfc/lpfc_sli.c
+@@ -2472,6 +2472,8 @@ lpfc_sli_def_mbox_cmpl(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
+ !pmb->u.mb.mbxStatus) {
+ rpi = pmb->u.mb.un.varWords[0];
+ vpi = pmb->u.mb.un.varRegLogin.vpi;
++ if (phba->sli_rev == LPFC_SLI_REV4)
++ vpi -= phba->sli4_hba.max_cfg_param.vpi_base;
+ lpfc_unreg_login(phba, vpi, rpi, pmb);
+ pmb->vport = vport;
+ pmb->mbox_cmpl = lpfc_sli_def_mbox_cmpl;
+diff --git a/drivers/scsi/scsi_transport_iscsi.c b/drivers/scsi/scsi_transport_iscsi.c
+index c0fb9e789080..04d095488c76 100644
+--- a/drivers/scsi/scsi_transport_iscsi.c
++++ b/drivers/scsi/scsi_transport_iscsi.c
+@@ -2010,7 +2010,7 @@ static void __iscsi_unbind_session(struct work_struct *work)
+ if (session->target_id == ISCSI_MAX_TARGET) {
+ spin_unlock_irqrestore(&session->lock, flags);
+ mutex_unlock(&ihost->mutex);
+- return;
++ goto unbind_session_exit;
+ }
+
+ target_id = session->target_id;
+@@ -2022,6 +2022,8 @@ static void __iscsi_unbind_session(struct work_struct *work)
+ ida_simple_remove(&iscsi_sess_ida, target_id);
+
+ scsi_remove_target(&session->dev);
++
++unbind_session_exit:
+ iscsi_session_event(session, ISCSI_KEVENT_UNBIND_SESSION);
+ ISCSI_DBG_TRANS_SESSION(session, "Completed target removal\n");
+ }
+diff --git a/drivers/scsi/smartpqi/smartpqi_sas_transport.c b/drivers/scsi/smartpqi/smartpqi_sas_transport.c
+index b209a35e482e..01dfb97b0778 100644
+--- a/drivers/scsi/smartpqi/smartpqi_sas_transport.c
++++ b/drivers/scsi/smartpqi/smartpqi_sas_transport.c
+@@ -50,9 +50,9 @@ static void pqi_free_sas_phy(struct pqi_sas_phy *pqi_sas_phy)
+ struct sas_phy *phy = pqi_sas_phy->phy;
+
+ sas_port_delete_phy(pqi_sas_phy->parent_port->port, phy);
+- sas_phy_free(phy);
+ if (pqi_sas_phy->added_to_port)
+ list_del(&pqi_sas_phy->phy_list_entry);
++ sas_phy_delete(phy);
+ kfree(pqi_sas_phy);
+ }
+
+diff --git a/drivers/staging/comedi/comedi_fops.c b/drivers/staging/comedi/comedi_fops.c
+index e18b61cdbdeb..636988248da2 100644
+--- a/drivers/staging/comedi/comedi_fops.c
++++ b/drivers/staging/comedi/comedi_fops.c
+@@ -2594,8 +2594,10 @@ static int comedi_open(struct inode *inode, struct file *file)
+ }
+
+ cfp = kzalloc(sizeof(*cfp), GFP_KERNEL);
+- if (!cfp)
++ if (!cfp) {
++ comedi_dev_put(dev);
+ return -ENOMEM;
++ }
+
+ cfp->dev = dev;
+
+diff --git a/drivers/staging/comedi/drivers/dt2815.c b/drivers/staging/comedi/drivers/dt2815.c
+index 83026ba63d1c..78a7c1b3448a 100644
+--- a/drivers/staging/comedi/drivers/dt2815.c
++++ b/drivers/staging/comedi/drivers/dt2815.c
+@@ -92,6 +92,7 @@ static int dt2815_ao_insn(struct comedi_device *dev, struct comedi_subdevice *s,
+ int ret;
+
+ for (i = 0; i < insn->n; i++) {
++ /* FIXME: lo bit 0 chooses voltage output or current output */
+ lo = ((data[i] & 0x0f) << 4) | (chan << 1) | 0x01;
+ hi = (data[i] & 0xff0) >> 4;
+
+@@ -105,6 +106,8 @@ static int dt2815_ao_insn(struct comedi_device *dev, struct comedi_subdevice *s,
+ if (ret)
+ return ret;
+
++ outb(hi, dev->iobase + DT2815_DATA);
++
+ devpriv->ao_readback[chan] = data[i];
+ }
+ return i;
+diff --git a/drivers/staging/vt6656/int.c b/drivers/staging/vt6656/int.c
+index af0060c74530..43f461f75b97 100644
+--- a/drivers/staging/vt6656/int.c
++++ b/drivers/staging/vt6656/int.c
+@@ -143,7 +143,8 @@ void vnt_int_process_data(struct vnt_private *priv)
+ priv->wake_up_count =
+ priv->hw->conf.listen_interval;
+
+- --priv->wake_up_count;
++ if (priv->wake_up_count)
++ --priv->wake_up_count;
+
+ /* Turn on wake up to listen next beacon */
+ if (priv->wake_up_count == 1)
+diff --git a/drivers/staging/vt6656/key.c b/drivers/staging/vt6656/key.c
+index 91dede54cc1f..be8dbf6c2c2f 100644
+--- a/drivers/staging/vt6656/key.c
++++ b/drivers/staging/vt6656/key.c
+@@ -81,9 +81,6 @@ static int vnt_set_keymode(struct ieee80211_hw *hw, u8 *mac_addr,
+ case VNT_KEY_PAIRWISE:
+ key_mode |= mode;
+ key_inx = 4;
+- /* Don't save entry for pairwise key for station mode */
+- if (priv->op_mode == NL80211_IFTYPE_STATION)
+- clear_bit(entry, &priv->key_entry_inuse);
+ break;
+ default:
+ return -EINVAL;
+@@ -107,7 +104,6 @@ static int vnt_set_keymode(struct ieee80211_hw *hw, u8 *mac_addr,
+ int vnt_set_keys(struct ieee80211_hw *hw, struct ieee80211_sta *sta,
+ struct ieee80211_vif *vif, struct ieee80211_key_conf *key)
+ {
+- struct ieee80211_bss_conf *conf = &vif->bss_conf;
+ struct vnt_private *priv = hw->priv;
+ u8 *mac_addr = NULL;
+ u8 key_dec_mode = 0;
+@@ -149,16 +145,12 @@ int vnt_set_keys(struct ieee80211_hw *hw, struct ieee80211_sta *sta,
+ key->flags |= IEEE80211_KEY_FLAG_GENERATE_IV;
+ }
+
+- if (key->flags & IEEE80211_KEY_FLAG_PAIRWISE) {
++ if (key->flags & IEEE80211_KEY_FLAG_PAIRWISE)
+ vnt_set_keymode(hw, mac_addr, key, VNT_KEY_PAIRWISE,
+ key_dec_mode, true);
+- } else {
+- vnt_set_keymode(hw, mac_addr, key, VNT_KEY_DEFAULTKEY,
++ else
++ vnt_set_keymode(hw, mac_addr, key, VNT_KEY_GROUP_ADDRESS,
+ key_dec_mode, true);
+
+- vnt_set_keymode(hw, (u8 *)conf->bssid, key,
+- VNT_KEY_GROUP_ADDRESS, key_dec_mode, true);
+- }
+-
+ return 0;
+ }
+diff --git a/drivers/staging/vt6656/main_usb.c b/drivers/staging/vt6656/main_usb.c
+index 36562ac94c1f..586a3d331511 100644
+--- a/drivers/staging/vt6656/main_usb.c
++++ b/drivers/staging/vt6656/main_usb.c
+@@ -595,8 +595,6 @@ static int vnt_add_interface(struct ieee80211_hw *hw, struct ieee80211_vif *vif)
+
+ priv->op_mode = vif->type;
+
+- vnt_set_bss_mode(priv);
+-
+ /* LED blink on TX */
+ vnt_mac_set_led(priv, LEDSTS_STS, LEDSTS_INTER);
+
+@@ -683,7 +681,6 @@ static void vnt_bss_info_changed(struct ieee80211_hw *hw,
+ priv->basic_rates = conf->basic_rates;
+
+ vnt_update_top_rates(priv);
+- vnt_set_bss_mode(priv);
+
+ dev_dbg(&priv->usb->dev, "basic rates %x\n", conf->basic_rates);
+ }
+@@ -712,11 +709,14 @@ static void vnt_bss_info_changed(struct ieee80211_hw *hw,
+ priv->short_slot_time = false;
+
+ vnt_set_short_slot_time(priv);
+- vnt_update_ifs(priv);
+ vnt_set_vga_gain_offset(priv, priv->bb_vga[0]);
+ vnt_update_pre_ed_threshold(priv, false);
+ }
+
++ if (changed & (BSS_CHANGED_BASIC_RATES | BSS_CHANGED_ERP_PREAMBLE |
++ BSS_CHANGED_ERP_SLOT))
++ vnt_set_bss_mode(priv);
++
+ if (changed & BSS_CHANGED_TXPOWER)
+ vnt_rf_setpower(priv, priv->current_rate,
+ conf->chandef.chan->hw_value);
+@@ -740,12 +740,15 @@ static void vnt_bss_info_changed(struct ieee80211_hw *hw,
+ vnt_mac_reg_bits_on(priv, MAC_REG_TFTCTL,
+ TFTCTL_TSFCNTREN);
+
+- vnt_adjust_tsf(priv, conf->beacon_rate->hw_value,
+- conf->sync_tsf, priv->current_tsf);
+-
+ vnt_mac_set_beacon_interval(priv, conf->beacon_int);
+
+ vnt_reset_next_tbtt(priv, conf->beacon_int);
++
++ vnt_adjust_tsf(priv, conf->beacon_rate->hw_value,
++ conf->sync_tsf, priv->current_tsf);
++
++ vnt_update_next_tbtt(priv,
++ conf->sync_tsf, conf->beacon_int);
+ } else {
+ vnt_clear_current_tsf(priv);
+
+@@ -780,15 +783,11 @@ static void vnt_configure(struct ieee80211_hw *hw,
+ {
+ struct vnt_private *priv = hw->priv;
+ u8 rx_mode = 0;
+- int rc;
+
+ *total_flags &= FIF_ALLMULTI | FIF_OTHER_BSS | FIF_BCN_PRBRESP_PROMISC;
+
+- rc = vnt_control_in(priv, MESSAGE_TYPE_READ, MAC_REG_RCR,
+- MESSAGE_REQUEST_MACREG, sizeof(u8), &rx_mode);
+-
+- if (!rc)
+- rx_mode = RCR_MULTICAST | RCR_BROADCAST;
++ vnt_control_in(priv, MESSAGE_TYPE_READ, MAC_REG_RCR,
++ MESSAGE_REQUEST_MACREG, sizeof(u8), &rx_mode);
+
+ dev_dbg(&priv->usb->dev, "rx mode in = %x\n", rx_mode);
+
+@@ -829,8 +828,12 @@ static int vnt_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
+ return -EOPNOTSUPP;
+ break;
+ case DISABLE_KEY:
+- if (test_bit(key->hw_key_idx, &priv->key_entry_inuse))
++ if (test_bit(key->hw_key_idx, &priv->key_entry_inuse)) {
+ clear_bit(key->hw_key_idx, &priv->key_entry_inuse);
++
++ vnt_mac_disable_keyentry(priv, key->hw_key_idx);
++ }
++
+ default:
+ break;
+ }
+diff --git a/drivers/tty/hvc/hvc_console.c b/drivers/tty/hvc/hvc_console.c
+index 27284a2dcd2b..436cc51c92c3 100644
+--- a/drivers/tty/hvc/hvc_console.c
++++ b/drivers/tty/hvc/hvc_console.c
+@@ -302,10 +302,6 @@ int hvc_instantiate(uint32_t vtermno, int index, const struct hv_ops *ops)
+ vtermnos[index] = vtermno;
+ cons_ops[index] = ops;
+
+- /* reserve all indices up to and including this index */
+- if (last_hvc < index)
+- last_hvc = index;
+-
+ /* check if we need to re-register the kernel console */
+ hvc_check_console(index);
+
+@@ -960,13 +956,22 @@ struct hvc_struct *hvc_alloc(uint32_t vtermno, int data,
+ cons_ops[i] == hp->ops)
+ break;
+
+- /* no matching slot, just use a counter */
+- if (i >= MAX_NR_HVC_CONSOLES)
+- i = ++last_hvc;
++ if (i >= MAX_NR_HVC_CONSOLES) {
++
++ /* find 'empty' slot for console */
++ for (i = 0; i < MAX_NR_HVC_CONSOLES && vtermnos[i] != -1; i++) {
++ }
++
++ /* no matching slot, just use a counter */
++ if (i == MAX_NR_HVC_CONSOLES)
++ i = ++last_hvc + MAX_NR_HVC_CONSOLES;
++ }
+
+ hp->index = i;
+- cons_ops[i] = ops;
+- vtermnos[i] = vtermno;
++ if (i < MAX_NR_HVC_CONSOLES) {
++ cons_ops[i] = ops;
++ vtermnos[i] = vtermno;
++ }
+
+ list_add_tail(&(hp->next), &hvc_structs);
+ mutex_unlock(&hvc_structs_mutex);
+diff --git a/drivers/tty/rocket.c b/drivers/tty/rocket.c
+index 27aeca30eeae..6133830f52a3 100644
+--- a/drivers/tty/rocket.c
++++ b/drivers/tty/rocket.c
+@@ -632,18 +632,21 @@ init_r_port(int board, int aiop, int chan, struct pci_dev *pci_dev)
+ tty_port_init(&info->port);
+ info->port.ops = &rocket_port_ops;
+ info->flags &= ~ROCKET_MODE_MASK;
+- switch (pc104[board][line]) {
+- case 422:
+- info->flags |= ROCKET_MODE_RS422;
+- break;
+- case 485:
+- info->flags |= ROCKET_MODE_RS485;
+- break;
+- case 232:
+- default:
++ if (board < ARRAY_SIZE(pc104) && line < ARRAY_SIZE(pc104_1))
++ switch (pc104[board][line]) {
++ case 422:
++ info->flags |= ROCKET_MODE_RS422;
++ break;
++ case 485:
++ info->flags |= ROCKET_MODE_RS485;
++ break;
++ case 232:
++ default:
++ info->flags |= ROCKET_MODE_RS232;
++ break;
++ }
++ else
+ info->flags |= ROCKET_MODE_RS232;
+- break;
+- }
+
+ info->intmask = RXF_TRIG | TXFIFO_MT | SRC_INT | DELTA_CD | DELTA_CTS | DELTA_DSR;
+ if (sInitChan(ctlp, &info->channel, aiop, chan) == 0) {
+diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c
+index 9e1a6af23ca2..8aaa7900927a 100644
+--- a/drivers/tty/serial/sh-sci.c
++++ b/drivers/tty/serial/sh-sci.c
+@@ -873,9 +873,16 @@ static void sci_receive_chars(struct uart_port *port)
+ tty_insert_flip_char(tport, c, TTY_NORMAL);
+ } else {
+ for (i = 0; i < count; i++) {
+- char c = serial_port_in(port, SCxRDR);
+-
+- status = serial_port_in(port, SCxSR);
++ char c;
++
++ if (port->type == PORT_SCIF ||
++ port->type == PORT_HSCIF) {
++ status = serial_port_in(port, SCxSR);
++ c = serial_port_in(port, SCxRDR);
++ } else {
++ c = serial_port_in(port, SCxRDR);
++ status = serial_port_in(port, SCxSR);
++ }
+ if (uart_handle_sysrq_char(port, c)) {
+ count--; i--;
+ continue;
+diff --git a/drivers/tty/vt/vt.c b/drivers/tty/vt/vt.c
+index 36c6f1b98372..ca8c6ddc1ca8 100644
+--- a/drivers/tty/vt/vt.c
++++ b/drivers/tty/vt/vt.c
+@@ -81,6 +81,7 @@
+ #include <linux/errno.h>
+ #include <linux/kd.h>
+ #include <linux/slab.h>
++#include <linux/vmalloc.h>
+ #include <linux/major.h>
+ #include <linux/mm.h>
+ #include <linux/console.h>
+@@ -350,7 +351,7 @@ static struct uni_screen *vc_uniscr_alloc(unsigned int cols, unsigned int rows)
+ /* allocate everything in one go */
+ memsize = cols * rows * sizeof(char32_t);
+ memsize += rows * sizeof(char32_t *);
+- p = kmalloc(memsize, GFP_KERNEL);
++ p = vmalloc(memsize);
+ if (!p)
+ return NULL;
+
+@@ -366,7 +367,7 @@ static struct uni_screen *vc_uniscr_alloc(unsigned int cols, unsigned int rows)
+
+ static void vc_uniscr_set(struct vc_data *vc, struct uni_screen *new_uniscr)
+ {
+- kfree(vc->vc_uni_screen);
++ vfree(vc->vc_uni_screen);
+ vc->vc_uni_screen = new_uniscr;
+ }
+
+@@ -1209,7 +1210,7 @@ static int vc_do_resize(struct tty_struct *tty, struct vc_data *vc,
+ if (new_cols == vc->vc_cols && new_rows == vc->vc_rows)
+ return 0;
+
+- if (new_screen_size > (4 << 20))
++ if (new_screen_size > KMALLOC_MAX_SIZE)
+ return -EINVAL;
+ newscreen = kzalloc(new_screen_size, GFP_USER);
+ if (!newscreen)
+diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
+index 6e0b41861735..10ba1b4f0dbf 100644
+--- a/drivers/usb/class/cdc-acm.c
++++ b/drivers/usb/class/cdc-acm.c
+@@ -412,9 +412,12 @@ static void acm_ctrl_irq(struct urb *urb)
+
+ exit:
+ retval = usb_submit_urb(urb, GFP_ATOMIC);
+- if (retval && retval != -EPERM)
++ if (retval && retval != -EPERM && retval != -ENODEV)
+ dev_err(&acm->control->dev,
+ "%s - usb_submit_urb failed: %d\n", __func__, retval);
++ else
++ dev_vdbg(&acm->control->dev,
++ "control resubmission terminated %d\n", retval);
+ }
+
+ static int acm_submit_read_urb(struct acm *acm, int index, gfp_t mem_flags)
+@@ -430,6 +433,8 @@ static int acm_submit_read_urb(struct acm *acm, int index, gfp_t mem_flags)
+ dev_err(&acm->data->dev,
+ "urb %d failed submission with %d\n",
+ index, res);
++ } else {
++ dev_vdbg(&acm->data->dev, "intended failure %d\n", res);
+ }
+ set_bit(index, &acm->read_urbs_free);
+ return res;
+@@ -472,6 +477,7 @@ static void acm_read_bulk_callback(struct urb *urb)
+ int status = urb->status;
+ bool stopped = false;
+ bool stalled = false;
++ bool cooldown = false;
+
+ dev_vdbg(&acm->data->dev, "got urb %d, len %d, status %d\n",
+ rb->index, urb->actual_length, status);
+@@ -498,6 +504,14 @@ static void acm_read_bulk_callback(struct urb *urb)
+ __func__, status);
+ stopped = true;
+ break;
++ case -EOVERFLOW:
++ case -EPROTO:
++ dev_dbg(&acm->data->dev,
++ "%s - cooling babbling device\n", __func__);
++ usb_mark_last_busy(acm->dev);
++ set_bit(rb->index, &acm->urbs_in_error_delay);
++ cooldown = true;
++ break;
+ default:
+ dev_dbg(&acm->data->dev,
+ "%s - nonzero urb status received: %d\n",
+@@ -519,9 +533,11 @@ static void acm_read_bulk_callback(struct urb *urb)
+ */
+ smp_mb__after_atomic();
+
+- if (stopped || stalled) {
++ if (stopped || stalled || cooldown) {
+ if (stalled)
+ schedule_work(&acm->work);
++ else if (cooldown)
++ schedule_delayed_work(&acm->dwork, HZ / 2);
+ return;
+ }
+
+@@ -563,14 +579,20 @@ static void acm_softint(struct work_struct *work)
+ struct acm *acm = container_of(work, struct acm, work);
+
+ if (test_bit(EVENT_RX_STALL, &acm->flags)) {
+- if (!(usb_autopm_get_interface(acm->data))) {
++ smp_mb(); /* against acm_suspend() */
++ if (!acm->susp_count) {
+ for (i = 0; i < acm->rx_buflimit; i++)
+ usb_kill_urb(acm->read_urbs[i]);
+ usb_clear_halt(acm->dev, acm->in);
+ acm_submit_read_urbs(acm, GFP_KERNEL);
+- usb_autopm_put_interface(acm->data);
++ clear_bit(EVENT_RX_STALL, &acm->flags);
+ }
+- clear_bit(EVENT_RX_STALL, &acm->flags);
++ }
++
++ if (test_and_clear_bit(ACM_ERROR_DELAY, &acm->flags)) {
++ for (i = 0; i < ACM_NR; i++)
++ if (test_and_clear_bit(i, &acm->urbs_in_error_delay))
++ acm_submit_read_urb(acm, i, GFP_NOIO);
+ }
+
+ if (test_and_clear_bit(EVENT_TTY_WAKEUP, &acm->flags))
+@@ -1365,6 +1387,7 @@ made_compressed_probe:
+ acm->readsize = readsize;
+ acm->rx_buflimit = num_rx_buf;
+ INIT_WORK(&acm->work, acm_softint);
++ INIT_DELAYED_WORK(&acm->dwork, acm_softint);
+ init_waitqueue_head(&acm->wioctl);
+ spin_lock_init(&acm->write_lock);
+ spin_lock_init(&acm->read_lock);
+@@ -1574,6 +1597,7 @@ static void acm_disconnect(struct usb_interface *intf)
+
+ acm_kill_urbs(acm);
+ cancel_work_sync(&acm->work);
++ cancel_delayed_work_sync(&acm->dwork);
+
+ tty_unregister_device(acm_tty_driver, acm->minor);
+
+@@ -1616,6 +1640,8 @@ static int acm_suspend(struct usb_interface *intf, pm_message_t message)
+
+ acm_kill_urbs(acm);
+ cancel_work_sync(&acm->work);
++ cancel_delayed_work_sync(&acm->dwork);
++ acm->urbs_in_error_delay = 0;
+
+ return 0;
+ }
+diff --git a/drivers/usb/class/cdc-acm.h b/drivers/usb/class/cdc-acm.h
+index 515aad0847ee..30380d28a504 100644
+--- a/drivers/usb/class/cdc-acm.h
++++ b/drivers/usb/class/cdc-acm.h
+@@ -108,8 +108,11 @@ struct acm {
+ unsigned long flags;
+ # define EVENT_TTY_WAKEUP 0
+ # define EVENT_RX_STALL 1
++# define ACM_ERROR_DELAY 3
++ unsigned long urbs_in_error_delay; /* these need to be restarted after a delay */
+ struct usb_cdc_line_coding line; /* bits, stop, parity */
+- struct work_struct work; /* work queue entry for line discipline waking up */
++ struct work_struct work; /* work queue entry for various purposes*/
++ struct delayed_work dwork; /* for cool downs needed in error recovery */
+ unsigned int ctrlin; /* input control lines (DCD, DSR, RI, break, overruns) */
+ unsigned int ctrlout; /* output control lines (DTR, RTS) */
+ struct async_icount iocount; /* counters for control line changes */
+diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
+index 8cf2d2a5e266..fffe544d9e9f 100644
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -1196,6 +1196,11 @@ static void hub_activate(struct usb_hub *hub, enum hub_activation_type type)
+ #ifdef CONFIG_PM
+ udev->reset_resume = 1;
+ #endif
++ /* Don't set the change_bits when the device
++ * was powered off.
++ */
++ if (test_bit(port1, hub->power_bits))
++ set_bit(port1, hub->change_bits);
+
+ } else {
+ /* The power session is gone; tell hub_wq */
+@@ -3051,6 +3056,15 @@ static int check_port_resume_type(struct usb_device *udev,
+ if (portchange & USB_PORT_STAT_C_ENABLE)
+ usb_clear_port_feature(hub->hdev, port1,
+ USB_PORT_FEAT_C_ENABLE);
++
++ /*
++ * Whatever made this reset-resume necessary may have
++ * turned on the port1 bit in hub->change_bits. But after
++ * a successful reset-resume we want the bit to be clear;
++ * if it was on it would indicate that something happened
++ * following the reset-resume.
++ */
++ clear_bit(port1, hub->change_bits);
+ }
+
+ return status;
+diff --git a/drivers/usb/core/message.c b/drivers/usb/core/message.c
+index 0d3fd2083165..fcf84bfc08e3 100644
+--- a/drivers/usb/core/message.c
++++ b/drivers/usb/core/message.c
+@@ -588,12 +588,13 @@ void usb_sg_cancel(struct usb_sg_request *io)
+ int i, retval;
+
+ spin_lock_irqsave(&io->lock, flags);
+- if (io->status) {
++ if (io->status || io->count == 0) {
+ spin_unlock_irqrestore(&io->lock, flags);
+ return;
+ }
+ /* shut everything down */
+ io->status = -ECONNRESET;
++ io->count++; /* Keep the request alive until we're done */
+ spin_unlock_irqrestore(&io->lock, flags);
+
+ for (i = io->entries - 1; i >= 0; --i) {
+@@ -607,6 +608,12 @@ void usb_sg_cancel(struct usb_sg_request *io)
+ dev_warn(&io->dev->dev, "%s, unlink --> %d\n",
+ __func__, retval);
+ }
++
++ spin_lock_irqsave(&io->lock, flags);
++ io->count--;
++ if (!io->count)
++ complete(&io->complete);
++ spin_unlock_irqrestore(&io->lock, flags);
+ }
+ EXPORT_SYMBOL_GPL(usb_sg_cancel);
+
+diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
+index da30b5664ff3..3e8efe759c3e 100644
+--- a/drivers/usb/core/quirks.c
++++ b/drivers/usb/core/quirks.c
+@@ -430,6 +430,10 @@ static const struct usb_device_id usb_quirk_list[] = {
+ /* Corsair K70 LUX */
+ { USB_DEVICE(0x1b1c, 0x1b36), .driver_info = USB_QUIRK_DELAY_INIT },
+
++ /* Corsair K70 RGB RAPDIFIRE */
++ { USB_DEVICE(0x1b1c, 0x1b38), .driver_info = USB_QUIRK_DELAY_INIT |
++ USB_QUIRK_DELAY_CTRL_MSG },
++
+ /* MIDI keyboard WORLDE MINI */
+ { USB_DEVICE(0x1c75, 0x0204), .driver_info =
+ USB_QUIRK_CONFIG_INTF_STRINGS },
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 8a4455d0af8b..8222e674c777 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -2280,14 +2280,7 @@ static int dwc3_gadget_ep_reclaim_trb_linear(struct dwc3_ep *dep,
+
+ static bool dwc3_gadget_ep_request_completed(struct dwc3_request *req)
+ {
+- /*
+- * For OUT direction, host may send less than the setup
+- * length. Return true for all OUT requests.
+- */
+- if (!req->direction)
+- return true;
+-
+- return req->request.actual == req->request.length;
++ return req->num_pending_sgs == 0;
+ }
+
+ static int dwc3_gadget_ep_cleanup_completed_request(struct dwc3_ep *dep,
+@@ -2311,8 +2304,7 @@ static int dwc3_gadget_ep_cleanup_completed_request(struct dwc3_ep *dep,
+
+ req->request.actual = req->request.length - req->remaining;
+
+- if (!dwc3_gadget_ep_request_completed(req) ||
+- req->num_pending_sgs) {
++ if (!dwc3_gadget_ep_request_completed(req)) {
+ __dwc3_gadget_kick_transfer(dep);
+ goto out;
+ }
+diff --git a/drivers/usb/early/xhci-dbc.c b/drivers/usb/early/xhci-dbc.c
+index e15e896f356c..97885eb57be6 100644
+--- a/drivers/usb/early/xhci-dbc.c
++++ b/drivers/usb/early/xhci-dbc.c
+@@ -735,19 +735,19 @@ static void xdbc_handle_tx_event(struct xdbc_trb *evt_trb)
+ case COMP_USB_TRANSACTION_ERROR:
+ case COMP_STALL_ERROR:
+ default:
+- if (ep_id == XDBC_EPID_OUT)
++ if (ep_id == XDBC_EPID_OUT || ep_id == XDBC_EPID_OUT_INTEL)
+ xdbc.flags |= XDBC_FLAGS_OUT_STALL;
+- if (ep_id == XDBC_EPID_IN)
++ if (ep_id == XDBC_EPID_IN || ep_id == XDBC_EPID_IN_INTEL)
+ xdbc.flags |= XDBC_FLAGS_IN_STALL;
+
+ xdbc_trace("endpoint %d stalled\n", ep_id);
+ break;
+ }
+
+- if (ep_id == XDBC_EPID_IN) {
++ if (ep_id == XDBC_EPID_IN || ep_id == XDBC_EPID_IN_INTEL) {
+ xdbc.flags &= ~XDBC_FLAGS_IN_PROCESS;
+ xdbc_bulk_transfer(NULL, XDBC_MAX_PACKET, true);
+- } else if (ep_id == XDBC_EPID_OUT) {
++ } else if (ep_id == XDBC_EPID_OUT || ep_id == XDBC_EPID_OUT_INTEL) {
+ xdbc.flags &= ~XDBC_FLAGS_OUT_PROCESS;
+ } else {
+ xdbc_trace("invalid endpoint id %d\n", ep_id);
+diff --git a/drivers/usb/early/xhci-dbc.h b/drivers/usb/early/xhci-dbc.h
+index 673686eeddd7..6e2b7266a695 100644
+--- a/drivers/usb/early/xhci-dbc.h
++++ b/drivers/usb/early/xhci-dbc.h
+@@ -120,8 +120,22 @@ struct xdbc_ring {
+ u32 cycle_state;
+ };
+
+-#define XDBC_EPID_OUT 2
+-#define XDBC_EPID_IN 3
++/*
++ * These are the "Endpoint ID" (also known as "Context Index") values for the
++ * OUT Transfer Ring and the IN Transfer Ring of a Debug Capability Context data
++ * structure.
++ * According to the "eXtensible Host Controller Interface for Universal Serial
++ * Bus (xHCI)" specification, section "7.6.3.2 Endpoint Contexts and Transfer
++ * Rings", these should be 0 and 1, and those are the values AMD machines give
++ * you; but Intel machines seem to use the formula from section "4.5.1 Device
++ * Context Index", which is supposed to be used for the Device Context only.
++ * Luckily the values from Intel don't overlap with those from AMD, so we can
++ * just test for both.
++ */
++#define XDBC_EPID_OUT 0
++#define XDBC_EPID_IN 1
++#define XDBC_EPID_OUT_INTEL 2
++#define XDBC_EPID_IN_INTEL 3
+
+ struct xdbc_state {
+ u16 vendor;
+diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c
+index 31b3dda3089c..11a501d0664c 100644
+--- a/drivers/usb/gadget/function/f_fs.c
++++ b/drivers/usb/gadget/function/f_fs.c
+@@ -1737,6 +1737,10 @@ static void ffs_data_reset(struct ffs_data *ffs)
+ ffs->state = FFS_READ_DESCRIPTORS;
+ ffs->setup_state = FFS_NO_SETUP;
+ ffs->flags = 0;
++
++ ffs->ms_os_descs_ext_prop_count = 0;
++ ffs->ms_os_descs_ext_prop_name_len = 0;
++ ffs->ms_os_descs_ext_prop_data_len = 0;
+ }
+
+
+diff --git a/drivers/usb/host/xhci-hub.c b/drivers/usb/host/xhci-hub.c
+index a024230f00e2..a58ef53e4ae1 100644
+--- a/drivers/usb/host/xhci-hub.c
++++ b/drivers/usb/host/xhci-hub.c
+@@ -1266,7 +1266,16 @@ int xhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ xhci_set_link_state(xhci, ports[wIndex], link_state);
+
+ spin_unlock_irqrestore(&xhci->lock, flags);
+- msleep(20); /* wait device to enter */
++ if (link_state == USB_SS_PORT_LS_U3) {
++ int retries = 16;
++
++ while (retries--) {
++ usleep_range(4000, 8000);
++ temp = readl(ports[wIndex]->addr);
++ if ((temp & PORT_PLS_MASK) == XDEV_U3)
++ break;
++ }
++ }
+ spin_lock_irqsave(&xhci->lock, flags);
+
+ temp = readl(ports[wIndex]->addr);
+@@ -1472,6 +1481,8 @@ int xhci_hub_status_data(struct usb_hcd *hcd, char *buf)
+ }
+ if ((temp & PORT_RC))
+ reset_change = true;
++ if (temp & PORT_OC)
++ status = 1;
+ }
+ if (!status && !reset_change) {
+ xhci_dbg(xhci, "%s: stopping port polling.\n", __func__);
+@@ -1537,6 +1548,13 @@ retry:
+ port_index);
+ goto retry;
+ }
++ /* bail out if port detected a over-current condition */
++ if (t1 & PORT_OC) {
++ bus_state->bus_suspended = 0;
++ spin_unlock_irqrestore(&xhci->lock, flags);
++ xhci_dbg(xhci, "Bus suspend bailout, port over-current detected\n");
++ return -EBUSY;
++ }
+ /* suspend ports in U0, or bail out for new connect changes */
+ if ((t1 & PORT_PE) && (t1 & PORT_PLS_MASK) == XDEV_U0) {
+ if ((t1 & PORT_CSC) && wake_enabled) {
+diff --git a/drivers/usb/misc/sisusbvga/sisusb.c b/drivers/usb/misc/sisusbvga/sisusb.c
+index c4f6ac5f035e..6376be1f5fd2 100644
+--- a/drivers/usb/misc/sisusbvga/sisusb.c
++++ b/drivers/usb/misc/sisusbvga/sisusb.c
+@@ -1199,18 +1199,18 @@ static int sisusb_read_mem_bulk(struct sisusb_usb_data *sisusb, u32 addr,
+ /* High level: Gfx (indexed) register access */
+
+ #ifdef INCL_SISUSB_CON
+-int sisusb_setreg(struct sisusb_usb_data *sisusb, int port, u8 data)
++int sisusb_setreg(struct sisusb_usb_data *sisusb, u32 port, u8 data)
+ {
+ return sisusb_write_memio_byte(sisusb, SISUSB_TYPE_IO, port, data);
+ }
+
+-int sisusb_getreg(struct sisusb_usb_data *sisusb, int port, u8 *data)
++int sisusb_getreg(struct sisusb_usb_data *sisusb, u32 port, u8 *data)
+ {
+ return sisusb_read_memio_byte(sisusb, SISUSB_TYPE_IO, port, data);
+ }
+ #endif
+
+-int sisusb_setidxreg(struct sisusb_usb_data *sisusb, int port,
++int sisusb_setidxreg(struct sisusb_usb_data *sisusb, u32 port,
+ u8 index, u8 data)
+ {
+ int ret;
+@@ -1220,7 +1220,7 @@ int sisusb_setidxreg(struct sisusb_usb_data *sisusb, int port,
+ return ret;
+ }
+
+-int sisusb_getidxreg(struct sisusb_usb_data *sisusb, int port,
++int sisusb_getidxreg(struct sisusb_usb_data *sisusb, u32 port,
+ u8 index, u8 *data)
+ {
+ int ret;
+@@ -1230,7 +1230,7 @@ int sisusb_getidxreg(struct sisusb_usb_data *sisusb, int port,
+ return ret;
+ }
+
+-int sisusb_setidxregandor(struct sisusb_usb_data *sisusb, int port, u8 idx,
++int sisusb_setidxregandor(struct sisusb_usb_data *sisusb, u32 port, u8 idx,
+ u8 myand, u8 myor)
+ {
+ int ret;
+@@ -1245,7 +1245,7 @@ int sisusb_setidxregandor(struct sisusb_usb_data *sisusb, int port, u8 idx,
+ }
+
+ static int sisusb_setidxregmask(struct sisusb_usb_data *sisusb,
+- int port, u8 idx, u8 data, u8 mask)
++ u32 port, u8 idx, u8 data, u8 mask)
+ {
+ int ret;
+ u8 tmp;
+@@ -1258,13 +1258,13 @@ static int sisusb_setidxregmask(struct sisusb_usb_data *sisusb,
+ return ret;
+ }
+
+-int sisusb_setidxregor(struct sisusb_usb_data *sisusb, int port,
++int sisusb_setidxregor(struct sisusb_usb_data *sisusb, u32 port,
+ u8 index, u8 myor)
+ {
+ return sisusb_setidxregandor(sisusb, port, index, 0xff, myor);
+ }
+
+-int sisusb_setidxregand(struct sisusb_usb_data *sisusb, int port,
++int sisusb_setidxregand(struct sisusb_usb_data *sisusb, u32 port,
+ u8 idx, u8 myand)
+ {
+ return sisusb_setidxregandor(sisusb, port, idx, myand, 0x00);
+@@ -2787,8 +2787,8 @@ static loff_t sisusb_lseek(struct file *file, loff_t offset, int orig)
+ static int sisusb_handle_command(struct sisusb_usb_data *sisusb,
+ struct sisusb_command *y, unsigned long arg)
+ {
+- int retval, port, length;
+- u32 address;
++ int retval, length;
++ u32 port, address;
+
+ /* All our commands require the device
+ * to be initialized.
+diff --git a/drivers/usb/misc/sisusbvga/sisusb_init.h b/drivers/usb/misc/sisusbvga/sisusb_init.h
+index 1782c759c4ad..ace09985dae4 100644
+--- a/drivers/usb/misc/sisusbvga/sisusb_init.h
++++ b/drivers/usb/misc/sisusbvga/sisusb_init.h
+@@ -812,17 +812,17 @@ static const struct SiS_VCLKData SiSUSB_VCLKData[] = {
+ int SiSUSBSetMode(struct SiS_Private *SiS_Pr, unsigned short ModeNo);
+ int SiSUSBSetVESAMode(struct SiS_Private *SiS_Pr, unsigned short VModeNo);
+
+-extern int sisusb_setreg(struct sisusb_usb_data *sisusb, int port, u8 data);
+-extern int sisusb_getreg(struct sisusb_usb_data *sisusb, int port, u8 * data);
+-extern int sisusb_setidxreg(struct sisusb_usb_data *sisusb, int port,
++extern int sisusb_setreg(struct sisusb_usb_data *sisusb, u32 port, u8 data);
++extern int sisusb_getreg(struct sisusb_usb_data *sisusb, u32 port, u8 * data);
++extern int sisusb_setidxreg(struct sisusb_usb_data *sisusb, u32 port,
+ u8 index, u8 data);
+-extern int sisusb_getidxreg(struct sisusb_usb_data *sisusb, int port,
++extern int sisusb_getidxreg(struct sisusb_usb_data *sisusb, u32 port,
+ u8 index, u8 * data);
+-extern int sisusb_setidxregandor(struct sisusb_usb_data *sisusb, int port,
++extern int sisusb_setidxregandor(struct sisusb_usb_data *sisusb, u32 port,
+ u8 idx, u8 myand, u8 myor);
+-extern int sisusb_setidxregor(struct sisusb_usb_data *sisusb, int port,
++extern int sisusb_setidxregor(struct sisusb_usb_data *sisusb, u32 port,
+ u8 index, u8 myor);
+-extern int sisusb_setidxregand(struct sisusb_usb_data *sisusb, int port,
++extern int sisusb_setidxregand(struct sisusb_usb_data *sisusb, u32 port,
+ u8 idx, u8 myand);
+
+ void sisusb_delete(struct kref *kref);
+diff --git a/drivers/usb/storage/uas.c b/drivers/usb/storage/uas.c
+index 62ca8e29da48..27d8b4b6ff59 100644
+--- a/drivers/usb/storage/uas.c
++++ b/drivers/usb/storage/uas.c
+@@ -81,6 +81,19 @@ static void uas_free_streams(struct uas_dev_info *devinfo);
+ static void uas_log_cmd_state(struct scsi_cmnd *cmnd, const char *prefix,
+ int status);
+
++/*
++ * This driver needs its own workqueue, as we need to control memory allocation.
++ *
++ * In the course of error handling and power management uas_wait_for_pending_cmnds()
++ * needs to flush pending work items. In these contexts we cannot allocate memory
++ * by doing block IO as we would deadlock. For the same reason we cannot wait
++ * for anything allocating memory not heeding these constraints.
++ *
++ * So we have to control all work items that can be on the workqueue we flush.
++ * Hence we cannot share a queue and need our own.
++ */
++static struct workqueue_struct *workqueue;
++
+ static void uas_do_work(struct work_struct *work)
+ {
+ struct uas_dev_info *devinfo =
+@@ -109,7 +122,7 @@ static void uas_do_work(struct work_struct *work)
+ if (!err)
+ cmdinfo->state &= ~IS_IN_WORK_LIST;
+ else
+- schedule_work(&devinfo->work);
++ queue_work(workqueue, &devinfo->work);
+ }
+ out:
+ spin_unlock_irqrestore(&devinfo->lock, flags);
+@@ -134,7 +147,7 @@ static void uas_add_work(struct uas_cmd_info *cmdinfo)
+
+ lockdep_assert_held(&devinfo->lock);
+ cmdinfo->state |= IS_IN_WORK_LIST;
+- schedule_work(&devinfo->work);
++ queue_work(workqueue, &devinfo->work);
+ }
+
+ static void uas_zap_pending(struct uas_dev_info *devinfo, int result)
+@@ -190,6 +203,9 @@ static void uas_log_cmd_state(struct scsi_cmnd *cmnd, const char *prefix,
+ struct uas_cmd_info *ci = (void *)&cmnd->SCp;
+ struct uas_cmd_info *cmdinfo = (void *)&cmnd->SCp;
+
++ if (status == -ENODEV) /* too late */
++ return;
++
+ scmd_printk(KERN_INFO, cmnd,
+ "%s %d uas-tag %d inflight:%s%s%s%s%s%s%s%s%s%s%s%s ",
+ prefix, status, cmdinfo->uas_tag,
+@@ -1233,7 +1249,31 @@ static struct usb_driver uas_driver = {
+ .id_table = uas_usb_ids,
+ };
+
+-module_usb_driver(uas_driver);
++static int __init uas_init(void)
++{
++ int rv;
++
++ workqueue = alloc_workqueue("uas", WQ_MEM_RECLAIM, 0);
++ if (!workqueue)
++ return -ENOMEM;
++
++ rv = usb_register(&uas_driver);
++ if (rv) {
++ destroy_workqueue(workqueue);
++ return -ENOMEM;
++ }
++
++ return 0;
++}
++
++static void __exit uas_exit(void)
++{
++ usb_deregister(&uas_driver);
++ destroy_workqueue(workqueue);
++}
++
++module_init(uas_init);
++module_exit(uas_exit);
+
+ MODULE_LICENSE("GPL");
+ MODULE_AUTHOR(
+diff --git a/drivers/usb/storage/unusual_devs.h b/drivers/usb/storage/unusual_devs.h
+index 1880f3e13f57..f6c3681fa2e9 100644
+--- a/drivers/usb/storage/unusual_devs.h
++++ b/drivers/usb/storage/unusual_devs.h
+@@ -2323,6 +2323,13 @@ UNUSUAL_DEV( 0x3340, 0xffff, 0x0000, 0x0000,
+ USB_SC_DEVICE,USB_PR_DEVICE,NULL,
+ US_FL_MAX_SECTORS_64 ),
+
++/* Reported by Cyril Roelandt <tipecaml@gmail.com> */
++UNUSUAL_DEV( 0x357d, 0x7788, 0x0114, 0x0114,
++ "JMicron",
++ "USB to ATA/ATAPI Bridge",
++ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
++ US_FL_BROKEN_FUA ),
++
+ /* Reported by Andrey Rahmatullin <wrar@altlinux.org> */
+ UNUSUAL_DEV( 0x4102, 0x1020, 0x0100, 0x0100,
+ "iRiver",
+diff --git a/drivers/watchdog/watchdog_dev.c b/drivers/watchdog/watchdog_dev.c
+index e64aa88e99da..10b2090f3e5e 100644
+--- a/drivers/watchdog/watchdog_dev.c
++++ b/drivers/watchdog/watchdog_dev.c
+@@ -264,6 +264,7 @@ static int watchdog_start(struct watchdog_device *wdd)
+ if (err == 0) {
+ set_bit(WDOG_ACTIVE, &wdd->status);
+ wd_data->last_keepalive = started_at;
++ wd_data->last_hw_keepalive = started_at;
+ watchdog_update_worker(wdd);
+ }
+
+diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
+index 4c0b220e20ba..5241102b81a8 100644
+--- a/fs/ceph/caps.c
++++ b/fs/ceph/caps.c
+@@ -1972,8 +1972,12 @@ retry_locked:
+ }
+
+ /* want more caps from mds? */
+- if (want & ~(cap->mds_wanted | cap->issued))
+- goto ack;
++ if (want & ~cap->mds_wanted) {
++ if (want & ~(cap->mds_wanted | cap->issued))
++ goto ack;
++ if (!__cap_is_valid(cap))
++ goto ack;
++ }
+
+ /* things we might delay */
+ if ((cap->issued & ~retain) == 0 &&
+diff --git a/fs/ceph/export.c b/fs/ceph/export.c
+index 3c59ad180ef0..4cfe1154d4c7 100644
+--- a/fs/ceph/export.c
++++ b/fs/ceph/export.c
+@@ -151,6 +151,11 @@ static struct dentry *__get_parent(struct super_block *sb,
+
+ req->r_num_caps = 1;
+ err = ceph_mdsc_do_request(mdsc, NULL, req);
++ if (err) {
++ ceph_mdsc_put_request(req);
++ return ERR_PTR(err);
++ }
++
+ inode = req->r_target_inode;
+ if (inode)
+ ihold(inode);
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index a289f4bcee45..6e8049031c1a 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -498,6 +498,30 @@ int ext4_ext_check_inode(struct inode *inode)
+ return ext4_ext_check(inode, ext_inode_hdr(inode), ext_depth(inode), 0);
+ }
+
++static void ext4_cache_extents(struct inode *inode,
++ struct ext4_extent_header *eh)
++{
++ struct ext4_extent *ex = EXT_FIRST_EXTENT(eh);
++ ext4_lblk_t prev = 0;
++ int i;
++
++ for (i = le16_to_cpu(eh->eh_entries); i > 0; i--, ex++) {
++ unsigned int status = EXTENT_STATUS_WRITTEN;
++ ext4_lblk_t lblk = le32_to_cpu(ex->ee_block);
++ int len = ext4_ext_get_actual_len(ex);
++
++ if (prev && (prev != lblk))
++ ext4_es_cache_extent(inode, prev, lblk - prev, ~0,
++ EXTENT_STATUS_HOLE);
++
++ if (ext4_ext_is_unwritten(ex))
++ status = EXTENT_STATUS_UNWRITTEN;
++ ext4_es_cache_extent(inode, lblk, len,
++ ext4_ext_pblock(ex), status);
++ prev = lblk + len;
++ }
++}
++
+ static struct buffer_head *
+ __read_extent_tree_block(const char *function, unsigned int line,
+ struct inode *inode, ext4_fsblk_t pblk, int depth,
+@@ -532,26 +556,7 @@ __read_extent_tree_block(const char *function, unsigned int line,
+ */
+ if (!(flags & EXT4_EX_NOCACHE) && depth == 0) {
+ struct ext4_extent_header *eh = ext_block_hdr(bh);
+- struct ext4_extent *ex = EXT_FIRST_EXTENT(eh);
+- ext4_lblk_t prev = 0;
+- int i;
+-
+- for (i = le16_to_cpu(eh->eh_entries); i > 0; i--, ex++) {
+- unsigned int status = EXTENT_STATUS_WRITTEN;
+- ext4_lblk_t lblk = le32_to_cpu(ex->ee_block);
+- int len = ext4_ext_get_actual_len(ex);
+-
+- if (prev && (prev != lblk))
+- ext4_es_cache_extent(inode, prev,
+- lblk - prev, ~0,
+- EXTENT_STATUS_HOLE);
+-
+- if (ext4_ext_is_unwritten(ex))
+- status = EXTENT_STATUS_UNWRITTEN;
+- ext4_es_cache_extent(inode, lblk, len,
+- ext4_ext_pblock(ex), status);
+- prev = lblk + len;
+- }
++ ext4_cache_extents(inode, eh);
+ }
+ return bh;
+ errout:
+@@ -899,6 +904,8 @@ ext4_find_extent(struct inode *inode, ext4_lblk_t block,
+ path[0].p_bh = NULL;
+
+ i = depth;
++ if (!(flags & EXT4_EX_NOCACHE) && depth == 0)
++ ext4_cache_extents(inode, eh);
+ /* walk through the tree */
+ while (i) {
+ ext_debug("depth %d: num %d, max %d\n",
+diff --git a/fs/f2fs/xattr.c b/fs/f2fs/xattr.c
+index 1dae74f7ccca..201e9da1692a 100644
+--- a/fs/f2fs/xattr.c
++++ b/fs/f2fs/xattr.c
+@@ -538,8 +538,9 @@ out:
+ ssize_t f2fs_listxattr(struct dentry *dentry, char *buffer, size_t buffer_size)
+ {
+ struct inode *inode = d_inode(dentry);
++ nid_t xnid = F2FS_I(inode)->i_xattr_nid;
+ struct f2fs_xattr_entry *entry;
+- void *base_addr;
++ void *base_addr, *last_base_addr;
+ int error = 0;
+ size_t rest = buffer_size;
+
+@@ -549,6 +550,8 @@ ssize_t f2fs_listxattr(struct dentry *dentry, char *buffer, size_t buffer_size)
+ if (error)
+ return error;
+
++ last_base_addr = (void *)base_addr + XATTR_SIZE(xnid, inode);
++
+ list_for_each_xattr(entry, base_addr) {
+ const struct xattr_handler *handler =
+ f2fs_xattr_handler(entry->e_name_index);
+@@ -556,6 +559,16 @@ ssize_t f2fs_listxattr(struct dentry *dentry, char *buffer, size_t buffer_size)
+ size_t prefix_len;
+ size_t size;
+
++ if ((void *)(entry) + sizeof(__u32) > last_base_addr ||
++ (void *)XATTR_NEXT_ENTRY(entry) > last_base_addr) {
++ f2fs_msg(dentry->d_sb, KERN_ERR,
++ "inode (%lu) has corrupted xattr",
++ inode->i_ino);
++ set_sbi_flag(F2FS_I_SB(inode), SBI_NEED_FSCK);
++ error = -EFSCORRUPTED;
++ goto cleanup;
++ }
++
+ if (!handler || (handler->list && !handler->list(dentry)))
+ continue;
+
+diff --git a/fs/namespace.c b/fs/namespace.c
+index 1fce41ba3535..741f40cd955e 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -3142,8 +3142,8 @@ SYSCALL_DEFINE2(pivot_root, const char __user *, new_root,
+ /* make certain new is below the root */
+ if (!is_path_reachable(new_mnt, new.dentry, &root))
+ goto out4;
+- root_mp->m_count++; /* pin it so it won't go away */
+ lock_mount_hash();
++ root_mp->m_count++; /* pin it so it won't go away */
+ detach_mnt(new_mnt, &parent_path);
+ detach_mnt(root_mnt, &root_parent);
+ if (root_mnt->mnt.mnt_flags & MNT_LOCKED) {
+diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c
+index 5c5f161763c8..c4147e50af98 100644
+--- a/fs/proc/vmcore.c
++++ b/fs/proc/vmcore.c
+@@ -250,7 +250,8 @@ static int vmcoredd_mmap_dumps(struct vm_area_struct *vma, unsigned long dst,
+ if (start < offset + dump->size) {
+ tsz = min(offset + (u64)dump->size - start, (u64)size);
+ buf = dump->buf + start - offset;
+- if (remap_vmalloc_range_partial(vma, dst, buf, tsz)) {
++ if (remap_vmalloc_range_partial(vma, dst, buf, 0,
++ tsz)) {
+ ret = -EFAULT;
+ goto out_unlock;
+ }
+@@ -607,7 +608,7 @@ static int mmap_vmcore(struct file *file, struct vm_area_struct *vma)
+ tsz = min(elfcorebuf_sz + elfnotes_sz - (size_t)start, size);
+ kaddr = elfnotes_buf + start - elfcorebuf_sz - vmcoredd_orig_sz;
+ if (remap_vmalloc_range_partial(vma, vma->vm_start + len,
+- kaddr, tsz))
++ kaddr, 0, tsz))
+ goto fail;
+
+ size -= tsz;
+diff --git a/fs/xfs/xfs_inode.c b/fs/xfs/xfs_inode.c
+index 5ed84d6c7059..f2d06e1e4906 100644
+--- a/fs/xfs/xfs_inode.c
++++ b/fs/xfs/xfs_inode.c
+@@ -2949,7 +2949,8 @@ xfs_rename(
+ spaceres);
+
+ /*
+- * Set up the target.
++ * Check for expected errors before we dirty the transaction
++ * so we can return an error without a transaction abort.
+ */
+ if (target_ip == NULL) {
+ /*
+@@ -2961,6 +2962,46 @@ xfs_rename(
+ if (error)
+ goto out_trans_cancel;
+ }
++ } else {
++ /*
++ * If target exists and it's a directory, check that whether
++ * it can be destroyed.
++ */
++ if (S_ISDIR(VFS_I(target_ip)->i_mode) &&
++ (!xfs_dir_isempty(target_ip) ||
++ (VFS_I(target_ip)->i_nlink > 2))) {
++ error = -EEXIST;
++ goto out_trans_cancel;
++ }
++ }
++
++ /*
++ * Directory entry creation below may acquire the AGF. Remove
++ * the whiteout from the unlinked list first to preserve correct
++ * AGI/AGF locking order. This dirties the transaction so failures
++ * after this point will abort and log recovery will clean up the
++ * mess.
++ *
++ * For whiteouts, we need to bump the link count on the whiteout
++ * inode. After this point, we have a real link, clear the tmpfile
++ * state flag from the inode so it doesn't accidentally get misused
++ * in future.
++ */
++ if (wip) {
++ ASSERT(VFS_I(wip)->i_nlink == 0);
++ error = xfs_iunlink_remove(tp, wip);
++ if (error)
++ goto out_trans_cancel;
++
++ xfs_bumplink(tp, wip);
++ xfs_trans_log_inode(tp, wip, XFS_ILOG_CORE);
++ VFS_I(wip)->i_state &= ~I_LINKABLE;
++ }
++
++ /*
++ * Set up the target.
++ */
++ if (target_ip == NULL) {
+ /*
+ * If target does not exist and the rename crosses
+ * directories, adjust the target directory link count
+@@ -2980,22 +3021,6 @@ xfs_rename(
+ goto out_trans_cancel;
+ }
+ } else { /* target_ip != NULL */
+- /*
+- * If target exists and it's a directory, check that both
+- * target and source are directories and that target can be
+- * destroyed, or that neither is a directory.
+- */
+- if (S_ISDIR(VFS_I(target_ip)->i_mode)) {
+- /*
+- * Make sure target dir is empty.
+- */
+- if (!(xfs_dir_isempty(target_ip)) ||
+- (VFS_I(target_ip)->i_nlink > 2)) {
+- error = -EEXIST;
+- goto out_trans_cancel;
+- }
+- }
+-
+ /*
+ * Link the source inode under the target name.
+ * If the source inode is a directory and we are moving
+@@ -3086,32 +3111,6 @@ xfs_rename(
+ if (error)
+ goto out_trans_cancel;
+
+- /*
+- * For whiteouts, we need to bump the link count on the whiteout inode.
+- * This means that failures all the way up to this point leave the inode
+- * on the unlinked list and so cleanup is a simple matter of dropping
+- * the remaining reference to it. If we fail here after bumping the link
+- * count, we're shutting down the filesystem so we'll never see the
+- * intermediate state on disk.
+- */
+- if (wip) {
+- ASSERT(VFS_I(wip)->i_nlink == 0);
+- error = xfs_bumplink(tp, wip);
+- if (error)
+- goto out_trans_cancel;
+- error = xfs_iunlink_remove(tp, wip);
+- if (error)
+- goto out_trans_cancel;
+- xfs_trans_log_inode(tp, wip, XFS_ILOG_CORE);
+-
+- /*
+- * Now we have a real link, clear the "I'm a tmpfile" state
+- * flag from the inode so it doesn't accidentally get misused in
+- * future.
+- */
+- VFS_I(wip)->i_state &= ~I_LINKABLE;
+- }
+-
+ xfs_trans_ichgtime(tp, src_dp, XFS_ICHGTIME_MOD | XFS_ICHGTIME_CHG);
+ xfs_trans_log_inode(tp, src_dp, XFS_ILOG_CORE);
+ if (new_parent)
+diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
+index 6e67aeb56928..745b2d0dcf78 100644
+--- a/include/linux/blkdev.h
++++ b/include/linux/blkdev.h
+@@ -624,7 +624,7 @@ struct request_queue {
+ unsigned int sg_reserved_size;
+ int node;
+ #ifdef CONFIG_BLK_DEV_IO_TRACE
+- struct blk_trace *blk_trace;
++ struct blk_trace __rcu *blk_trace;
+ struct mutex blk_trace_mutex;
+ #endif
+ /*
+diff --git a/include/linux/blktrace_api.h b/include/linux/blktrace_api.h
+index 7bb2d8de9f30..3b6ff5902edc 100644
+--- a/include/linux/blktrace_api.h
++++ b/include/linux/blktrace_api.h
+@@ -51,9 +51,13 @@ void __trace_note_message(struct blk_trace *, struct blkcg *blkcg, const char *f
+ **/
+ #define blk_add_cgroup_trace_msg(q, cg, fmt, ...) \
+ do { \
+- struct blk_trace *bt = (q)->blk_trace; \
++ struct blk_trace *bt; \
++ \
++ rcu_read_lock(); \
++ bt = rcu_dereference((q)->blk_trace); \
+ if (unlikely(bt)) \
+ __trace_note_message(bt, cg, fmt, ##__VA_ARGS__);\
++ rcu_read_unlock(); \
+ } while (0)
+ #define blk_add_trace_msg(q, fmt, ...) \
+ blk_add_cgroup_trace_msg(q, NULL, fmt, ##__VA_ARGS__)
+@@ -61,10 +65,14 @@ void __trace_note_message(struct blk_trace *, struct blkcg *blkcg, const char *f
+
+ static inline bool blk_trace_note_message_enabled(struct request_queue *q)
+ {
+- struct blk_trace *bt = q->blk_trace;
+- if (likely(!bt))
+- return false;
+- return bt->act_mask & BLK_TC_NOTIFY;
++ struct blk_trace *bt;
++ bool ret;
++
++ rcu_read_lock();
++ bt = rcu_dereference(q->blk_trace);
++ ret = bt && (bt->act_mask & BLK_TC_NOTIFY);
++ rcu_read_unlock();
++ return ret;
+ }
+
+ extern void blk_add_driver_data(struct request_queue *q, struct request *rq,
+diff --git a/include/linux/iio/iio.h b/include/linux/iio/iio.h
+index a74cb177dc6f..136ce51548a8 100644
+--- a/include/linux/iio/iio.h
++++ b/include/linux/iio/iio.h
+@@ -599,7 +599,7 @@ void iio_device_unregister(struct iio_dev *indio_dev);
+ * 0 on success, negative error number on failure.
+ */
+ #define devm_iio_device_register(dev, indio_dev) \
+- __devm_iio_device_register((dev), (indio_dev), THIS_MODULE);
++ __devm_iio_device_register((dev), (indio_dev), THIS_MODULE)
+ int __devm_iio_device_register(struct device *dev, struct iio_dev *indio_dev,
+ struct module *this_mod);
+ void devm_iio_device_unregister(struct device *dev, struct iio_dev *indio_dev);
+diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
+index 0f99ecc01bc7..92c6f80e6327 100644
+--- a/include/linux/kvm_host.h
++++ b/include/linux/kvm_host.h
+@@ -206,6 +206,32 @@ enum {
+ READING_SHADOW_PAGE_TABLES,
+ };
+
++#define KVM_UNMAPPED_PAGE ((void *) 0x500 + POISON_POINTER_DELTA)
++
++struct kvm_host_map {
++ /*
++ * Only valid if the 'pfn' is managed by the host kernel (i.e. There is
++ * a 'struct page' for it. When using mem= kernel parameter some memory
++ * can be used as guest memory but they are not managed by host
++ * kernel).
++ * If 'pfn' is not managed by the host kernel, this field is
++ * initialized to KVM_UNMAPPED_PAGE.
++ */
++ struct page *page;
++ void *hva;
++ kvm_pfn_t pfn;
++ kvm_pfn_t gfn;
++};
++
++/*
++ * Used to check if the mapping is valid or not. Never use 'kvm_host_map'
++ * directly to check for that.
++ */
++static inline bool kvm_vcpu_mapped(struct kvm_host_map *map)
++{
++ return !!map->hva;
++}
++
+ /*
+ * Sometimes a large or cross-page mmio needs to be broken up into separate
+ * exits for userspace servicing.
+@@ -682,6 +708,7 @@ void kvm_set_pfn_dirty(kvm_pfn_t pfn);
+ void kvm_set_pfn_accessed(kvm_pfn_t pfn);
+ void kvm_get_pfn(kvm_pfn_t pfn);
+
++void kvm_release_pfn(kvm_pfn_t pfn, bool dirty, struct gfn_to_pfn_cache *cache);
+ int kvm_read_guest_page(struct kvm *kvm, gfn_t gfn, void *data, int offset,
+ int len);
+ int kvm_read_guest_atomic(struct kvm *kvm, gpa_t gpa, void *data,
+@@ -711,7 +738,13 @@ struct kvm_memslots *kvm_vcpu_memslots(struct kvm_vcpu *vcpu);
+ struct kvm_memory_slot *kvm_vcpu_gfn_to_memslot(struct kvm_vcpu *vcpu, gfn_t gfn);
+ kvm_pfn_t kvm_vcpu_gfn_to_pfn_atomic(struct kvm_vcpu *vcpu, gfn_t gfn);
+ kvm_pfn_t kvm_vcpu_gfn_to_pfn(struct kvm_vcpu *vcpu, gfn_t gfn);
++int kvm_vcpu_map(struct kvm_vcpu *vcpu, gpa_t gpa, struct kvm_host_map *map);
++int kvm_map_gfn(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map,
++ struct gfn_to_pfn_cache *cache, bool atomic);
+ struct page *kvm_vcpu_gfn_to_page(struct kvm_vcpu *vcpu, gfn_t gfn);
++void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map, bool dirty);
++int kvm_unmap_gfn(struct kvm_vcpu *vcpu, struct kvm_host_map *map,
++ struct gfn_to_pfn_cache *cache, bool dirty, bool atomic);
+ unsigned long kvm_vcpu_gfn_to_hva(struct kvm_vcpu *vcpu, gfn_t gfn);
+ unsigned long kvm_vcpu_gfn_to_hva_prot(struct kvm_vcpu *vcpu, gfn_t gfn, bool *writable);
+ int kvm_vcpu_read_guest_page(struct kvm_vcpu *vcpu, gfn_t gfn, void *data, int offset,
+@@ -966,7 +999,7 @@ search_memslots(struct kvm_memslots *slots, gfn_t gfn)
+ start = slot + 1;
+ }
+
+- if (gfn >= memslots[start].base_gfn &&
++ if (start < slots->used_slots && gfn >= memslots[start].base_gfn &&
+ gfn < memslots[start].base_gfn + memslots[start].npages) {
+ atomic_set(&slots->lru_slot, start);
+ return &memslots[start];
+diff --git a/include/linux/kvm_types.h b/include/linux/kvm_types.h
+index 8bf259dae9f6..a38729c8296f 100644
+--- a/include/linux/kvm_types.h
++++ b/include/linux/kvm_types.h
+@@ -32,7 +32,7 @@ struct kvm_memslots;
+
+ enum kvm_mr_change;
+
+-#include <asm/types.h>
++#include <linux/types.h>
+
+ /*
+ * Address types:
+@@ -63,4 +63,11 @@ struct gfn_to_hva_cache {
+ struct kvm_memory_slot *memslot;
+ };
+
++struct gfn_to_pfn_cache {
++ u64 generation;
++ gfn_t gfn;
++ kvm_pfn_t pfn;
++ bool dirty;
++};
++
+ #endif /* __KVM_TYPES_H__ */
+diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
+index 6ae8dd1d784f..206957b1b54d 100644
+--- a/include/linux/vmalloc.h
++++ b/include/linux/vmalloc.h
+@@ -103,7 +103,7 @@ extern void vunmap(const void *addr);
+
+ extern int remap_vmalloc_range_partial(struct vm_area_struct *vma,
+ unsigned long uaddr, void *kaddr,
+- unsigned long size);
++ unsigned long pgoff, unsigned long size);
+
+ extern int remap_vmalloc_range(struct vm_area_struct *vma, void *addr,
+ unsigned long pgoff);
+diff --git a/include/net/addrconf.h b/include/net/addrconf.h
+index 6def0351bcc3..c8d5bb8b3616 100644
+--- a/include/net/addrconf.h
++++ b/include/net/addrconf.h
+@@ -235,8 +235,10 @@ struct ipv6_stub {
+ const struct in6_addr *addr);
+ int (*ipv6_sock_mc_drop)(struct sock *sk, int ifindex,
+ const struct in6_addr *addr);
+- int (*ipv6_dst_lookup)(struct net *net, struct sock *sk,
+- struct dst_entry **dst, struct flowi6 *fl6);
++ struct dst_entry *(*ipv6_dst_lookup_flow)(struct net *net,
++ const struct sock *sk,
++ struct flowi6 *fl6,
++ const struct in6_addr *final_dst);
+
+ struct fib6_table *(*fib6_get_table)(struct net *net, u32 id);
+ struct fib6_info *(*fib6_lookup)(struct net *net, int oif,
+diff --git a/include/net/ipv6.h b/include/net/ipv6.h
+index ff33f498c137..4c2e40882e88 100644
+--- a/include/net/ipv6.h
++++ b/include/net/ipv6.h
+@@ -959,7 +959,7 @@ static inline struct sk_buff *ip6_finish_skb(struct sock *sk)
+
+ int ip6_dst_lookup(struct net *net, struct sock *sk, struct dst_entry **dst,
+ struct flowi6 *fl6);
+-struct dst_entry *ip6_dst_lookup_flow(const struct sock *sk, struct flowi6 *fl6,
++struct dst_entry *ip6_dst_lookup_flow(struct net *net, const struct sock *sk, struct flowi6 *fl6,
+ const struct in6_addr *final_dst);
+ struct dst_entry *ip6_sk_dst_lookup_flow(struct sock *sk, struct flowi6 *fl6,
+ const struct in6_addr *final_dst,
+diff --git a/include/net/tcp.h b/include/net/tcp.h
+index 918bfd0d7d1f..e43df898d3db 100644
+--- a/include/net/tcp.h
++++ b/include/net/tcp.h
+@@ -53,7 +53,7 @@ extern struct inet_hashinfo tcp_hashinfo;
+ extern struct percpu_counter tcp_orphan_count;
+ void tcp_time_wait(struct sock *sk, int state, int timeo);
+
+-#define MAX_TCP_HEADER (128 + MAX_HEADER)
++#define MAX_TCP_HEADER L1_CACHE_ALIGN(128 + MAX_HEADER)
+ #define MAX_TCP_OPTION_SPACE 40
+ #define TCP_MIN_SND_MSS 48
+ #define TCP_MIN_GSO_SIZE (TCP_MIN_SND_MSS - MAX_TCP_OPTION_SPACE)
+diff --git a/ipc/util.c b/ipc/util.c
+index 0af05752969f..b111e792b312 100644
+--- a/ipc/util.c
++++ b/ipc/util.c
+@@ -735,13 +735,13 @@ static struct kern_ipc_perm *sysvipc_find_ipc(struct ipc_ids *ids, loff_t pos,
+ total++;
+ }
+
++ *new_pos = pos + 1;
+ if (total >= ids->in_use)
+ return NULL;
+
+ for (; pos < IPCMNI; pos++) {
+ ipc = idr_find(&ids->ipcs_idr, pos);
+ if (ipc != NULL) {
+- *new_pos = pos + 1;
+ rcu_read_lock();
+ ipc_lock_object(ipc);
+ return ipc;
+diff --git a/kernel/audit.c b/kernel/audit.c
+index 1f08c38e604a..7afec5f43c63 100644
+--- a/kernel/audit.c
++++ b/kernel/audit.c
+@@ -1331,6 +1331,9 @@ static int audit_receive_msg(struct sk_buff *skb, struct nlmsghdr *nlh)
+ case AUDIT_FIRST_USER_MSG2 ... AUDIT_LAST_USER_MSG2:
+ if (!audit_enabled && msg_type != AUDIT_USER_AVC)
+ return 0;
++ /* exit early if there isn't at least one character to print */
++ if (data_len < 2)
++ return -EINVAL;
+
+ err = audit_filter(msg_type, AUDIT_FILTER_USER);
+ if (err == 1) { /* match or error */
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index 8c70ee23fbe9..00fb2fe92c4d 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -6411,9 +6411,12 @@ static u64 perf_virt_to_phys(u64 virt)
+ * Try IRQ-safe __get_user_pages_fast first.
+ * If failed, leave phys_addr as 0.
+ */
+- if ((current->mm != NULL) &&
+- (__get_user_pages_fast(virt, 1, 0, &p) == 1))
+- phys_addr = page_to_phys(p) + virt % PAGE_SIZE;
++ if (current->mm != NULL) {
++ pagefault_disable();
++ if (__get_user_pages_fast(virt, 1, 0, &p) == 1)
++ phys_addr = page_to_phys(p) + virt % PAGE_SIZE;
++ pagefault_enable();
++ }
+
+ if (p)
+ put_page(p);
+diff --git a/kernel/gcov/fs.c b/kernel/gcov/fs.c
+index 6e40ff6be083..291e0797125b 100644
+--- a/kernel/gcov/fs.c
++++ b/kernel/gcov/fs.c
+@@ -109,9 +109,9 @@ static void *gcov_seq_next(struct seq_file *seq, void *data, loff_t *pos)
+ {
+ struct gcov_iterator *iter = data;
+
++ (*pos)++;
+ if (gcov_iter_next(iter))
+ return NULL;
+- (*pos)++;
+
+ return iter;
+ }
+diff --git a/kernel/trace/blktrace.c b/kernel/trace/blktrace.c
+index 2868d85f1fb1..6cea8bbca03c 100644
+--- a/kernel/trace/blktrace.c
++++ b/kernel/trace/blktrace.c
+@@ -336,6 +336,7 @@ static void put_probe_ref(void)
+
+ static void blk_trace_cleanup(struct blk_trace *bt)
+ {
++ synchronize_rcu();
+ blk_trace_free(bt);
+ put_probe_ref();
+ }
+@@ -636,8 +637,10 @@ static int compat_blk_trace_setup(struct request_queue *q, char *name,
+ static int __blk_trace_startstop(struct request_queue *q, int start)
+ {
+ int ret;
+- struct blk_trace *bt = q->blk_trace;
++ struct blk_trace *bt;
+
++ bt = rcu_dereference_protected(q->blk_trace,
++ lockdep_is_held(&q->blk_trace_mutex));
+ if (bt == NULL)
+ return -EINVAL;
+
+@@ -746,8 +749,8 @@ int blk_trace_ioctl(struct block_device *bdev, unsigned cmd, char __user *arg)
+ void blk_trace_shutdown(struct request_queue *q)
+ {
+ mutex_lock(&q->blk_trace_mutex);
+-
+- if (q->blk_trace) {
++ if (rcu_dereference_protected(q->blk_trace,
++ lockdep_is_held(&q->blk_trace_mutex))) {
+ __blk_trace_startstop(q, 0);
+ __blk_trace_remove(q);
+ }
+@@ -759,8 +762,10 @@ void blk_trace_shutdown(struct request_queue *q)
+ static union kernfs_node_id *
+ blk_trace_bio_get_cgid(struct request_queue *q, struct bio *bio)
+ {
+- struct blk_trace *bt = q->blk_trace;
++ struct blk_trace *bt;
+
++ /* We don't use the 'bt' value here except as an optimization... */
++ bt = rcu_dereference_protected(q->blk_trace, 1);
+ if (!bt || !(blk_tracer_flags.val & TRACE_BLK_OPT_CGROUP))
+ return NULL;
+
+@@ -805,10 +810,14 @@ static void blk_add_trace_rq(struct request *rq, int error,
+ unsigned int nr_bytes, u32 what,
+ union kernfs_node_id *cgid)
+ {
+- struct blk_trace *bt = rq->q->blk_trace;
++ struct blk_trace *bt;
+
+- if (likely(!bt))
++ rcu_read_lock();
++ bt = rcu_dereference(rq->q->blk_trace);
++ if (likely(!bt)) {
++ rcu_read_unlock();
+ return;
++ }
+
+ if (blk_rq_is_passthrough(rq))
+ what |= BLK_TC_ACT(BLK_TC_PC);
+@@ -817,6 +826,7 @@ static void blk_add_trace_rq(struct request *rq, int error,
+
+ __blk_add_trace(bt, blk_rq_trace_sector(rq), nr_bytes, req_op(rq),
+ rq->cmd_flags, what, error, 0, NULL, cgid);
++ rcu_read_unlock();
+ }
+
+ static void blk_add_trace_rq_insert(void *ignore,
+@@ -862,14 +872,19 @@ static void blk_add_trace_rq_complete(void *ignore, struct request *rq,
+ static void blk_add_trace_bio(struct request_queue *q, struct bio *bio,
+ u32 what, int error)
+ {
+- struct blk_trace *bt = q->blk_trace;
++ struct blk_trace *bt;
+
+- if (likely(!bt))
++ rcu_read_lock();
++ bt = rcu_dereference(q->blk_trace);
++ if (likely(!bt)) {
++ rcu_read_unlock();
+ return;
++ }
+
+ __blk_add_trace(bt, bio->bi_iter.bi_sector, bio->bi_iter.bi_size,
+ bio_op(bio), bio->bi_opf, what, error, 0, NULL,
+ blk_trace_bio_get_cgid(q, bio));
++ rcu_read_unlock();
+ }
+
+ static void blk_add_trace_bio_bounce(void *ignore,
+@@ -914,11 +929,14 @@ static void blk_add_trace_getrq(void *ignore,
+ if (bio)
+ blk_add_trace_bio(q, bio, BLK_TA_GETRQ, 0);
+ else {
+- struct blk_trace *bt = q->blk_trace;
++ struct blk_trace *bt;
+
++ rcu_read_lock();
++ bt = rcu_dereference(q->blk_trace);
+ if (bt)
+ __blk_add_trace(bt, 0, 0, rw, 0, BLK_TA_GETRQ, 0, 0,
+ NULL, NULL);
++ rcu_read_unlock();
+ }
+ }
+
+@@ -930,27 +948,35 @@ static void blk_add_trace_sleeprq(void *ignore,
+ if (bio)
+ blk_add_trace_bio(q, bio, BLK_TA_SLEEPRQ, 0);
+ else {
+- struct blk_trace *bt = q->blk_trace;
++ struct blk_trace *bt;
+
++ rcu_read_lock();
++ bt = rcu_dereference(q->blk_trace);
+ if (bt)
+ __blk_add_trace(bt, 0, 0, rw, 0, BLK_TA_SLEEPRQ,
+ 0, 0, NULL, NULL);
++ rcu_read_unlock();
+ }
+ }
+
+ static void blk_add_trace_plug(void *ignore, struct request_queue *q)
+ {
+- struct blk_trace *bt = q->blk_trace;
++ struct blk_trace *bt;
+
++ rcu_read_lock();
++ bt = rcu_dereference(q->blk_trace);
+ if (bt)
+ __blk_add_trace(bt, 0, 0, 0, 0, BLK_TA_PLUG, 0, 0, NULL, NULL);
++ rcu_read_unlock();
+ }
+
+ static void blk_add_trace_unplug(void *ignore, struct request_queue *q,
+ unsigned int depth, bool explicit)
+ {
+- struct blk_trace *bt = q->blk_trace;
++ struct blk_trace *bt;
+
++ rcu_read_lock();
++ bt = rcu_dereference(q->blk_trace);
+ if (bt) {
+ __be64 rpdu = cpu_to_be64(depth);
+ u32 what;
+@@ -962,14 +988,17 @@ static void blk_add_trace_unplug(void *ignore, struct request_queue *q,
+
+ __blk_add_trace(bt, 0, 0, 0, 0, what, 0, sizeof(rpdu), &rpdu, NULL);
+ }
++ rcu_read_unlock();
+ }
+
+ static void blk_add_trace_split(void *ignore,
+ struct request_queue *q, struct bio *bio,
+ unsigned int pdu)
+ {
+- struct blk_trace *bt = q->blk_trace;
++ struct blk_trace *bt;
+
++ rcu_read_lock();
++ bt = rcu_dereference(q->blk_trace);
+ if (bt) {
+ __be64 rpdu = cpu_to_be64(pdu);
+
+@@ -978,6 +1007,7 @@ static void blk_add_trace_split(void *ignore,
+ BLK_TA_SPLIT, bio->bi_status, sizeof(rpdu),
+ &rpdu, blk_trace_bio_get_cgid(q, bio));
+ }
++ rcu_read_unlock();
+ }
+
+ /**
+@@ -997,11 +1027,15 @@ static void blk_add_trace_bio_remap(void *ignore,
+ struct request_queue *q, struct bio *bio,
+ dev_t dev, sector_t from)
+ {
+- struct blk_trace *bt = q->blk_trace;
++ struct blk_trace *bt;
+ struct blk_io_trace_remap r;
+
+- if (likely(!bt))
++ rcu_read_lock();
++ bt = rcu_dereference(q->blk_trace);
++ if (likely(!bt)) {
++ rcu_read_unlock();
+ return;
++ }
+
+ r.device_from = cpu_to_be32(dev);
+ r.device_to = cpu_to_be32(bio_dev(bio));
+@@ -1010,6 +1044,7 @@ static void blk_add_trace_bio_remap(void *ignore,
+ __blk_add_trace(bt, bio->bi_iter.bi_sector, bio->bi_iter.bi_size,
+ bio_op(bio), bio->bi_opf, BLK_TA_REMAP, bio->bi_status,
+ sizeof(r), &r, blk_trace_bio_get_cgid(q, bio));
++ rcu_read_unlock();
+ }
+
+ /**
+@@ -1030,11 +1065,15 @@ static void blk_add_trace_rq_remap(void *ignore,
+ struct request *rq, dev_t dev,
+ sector_t from)
+ {
+- struct blk_trace *bt = q->blk_trace;
++ struct blk_trace *bt;
+ struct blk_io_trace_remap r;
+
+- if (likely(!bt))
++ rcu_read_lock();
++ bt = rcu_dereference(q->blk_trace);
++ if (likely(!bt)) {
++ rcu_read_unlock();
+ return;
++ }
+
+ r.device_from = cpu_to_be32(dev);
+ r.device_to = cpu_to_be32(disk_devt(rq->rq_disk));
+@@ -1043,6 +1082,7 @@ static void blk_add_trace_rq_remap(void *ignore,
+ __blk_add_trace(bt, blk_rq_pos(rq), blk_rq_bytes(rq),
+ rq_data_dir(rq), 0, BLK_TA_REMAP, 0,
+ sizeof(r), &r, blk_trace_request_get_cgid(q, rq));
++ rcu_read_unlock();
+ }
+
+ /**
+@@ -1060,14 +1100,19 @@ void blk_add_driver_data(struct request_queue *q,
+ struct request *rq,
+ void *data, size_t len)
+ {
+- struct blk_trace *bt = q->blk_trace;
++ struct blk_trace *bt;
+
+- if (likely(!bt))
++ rcu_read_lock();
++ bt = rcu_dereference(q->blk_trace);
++ if (likely(!bt)) {
++ rcu_read_unlock();
+ return;
++ }
+
+ __blk_add_trace(bt, blk_rq_trace_sector(rq), blk_rq_bytes(rq), 0, 0,
+ BLK_TA_DRV_DATA, 0, len, data,
+ blk_trace_request_get_cgid(q, rq));
++ rcu_read_unlock();
+ }
+ EXPORT_SYMBOL_GPL(blk_add_driver_data);
+
+@@ -1594,6 +1639,7 @@ static int blk_trace_remove_queue(struct request_queue *q)
+ return -EINVAL;
+
+ put_probe_ref();
++ synchronize_rcu();
+ blk_trace_free(bt);
+ return 0;
+ }
+@@ -1755,6 +1801,7 @@ static ssize_t sysfs_blk_trace_attr_show(struct device *dev,
+ struct hd_struct *p = dev_to_part(dev);
+ struct request_queue *q;
+ struct block_device *bdev;
++ struct blk_trace *bt;
+ ssize_t ret = -ENXIO;
+
+ bdev = bdget(part_devt(p));
+@@ -1767,21 +1814,23 @@ static ssize_t sysfs_blk_trace_attr_show(struct device *dev,
+
+ mutex_lock(&q->blk_trace_mutex);
+
++ bt = rcu_dereference_protected(q->blk_trace,
++ lockdep_is_held(&q->blk_trace_mutex));
+ if (attr == &dev_attr_enable) {
+- ret = sprintf(buf, "%u\n", !!q->blk_trace);
++ ret = sprintf(buf, "%u\n", !!bt);
+ goto out_unlock_bdev;
+ }
+
+- if (q->blk_trace == NULL)
++ if (bt == NULL)
+ ret = sprintf(buf, "disabled\n");
+ else if (attr == &dev_attr_act_mask)
+- ret = blk_trace_mask2str(buf, q->blk_trace->act_mask);
++ ret = blk_trace_mask2str(buf, bt->act_mask);
+ else if (attr == &dev_attr_pid)
+- ret = sprintf(buf, "%u\n", q->blk_trace->pid);
++ ret = sprintf(buf, "%u\n", bt->pid);
+ else if (attr == &dev_attr_start_lba)
+- ret = sprintf(buf, "%llu\n", q->blk_trace->start_lba);
++ ret = sprintf(buf, "%llu\n", bt->start_lba);
+ else if (attr == &dev_attr_end_lba)
+- ret = sprintf(buf, "%llu\n", q->blk_trace->end_lba);
++ ret = sprintf(buf, "%llu\n", bt->end_lba);
+
+ out_unlock_bdev:
+ mutex_unlock(&q->blk_trace_mutex);
+@@ -1798,6 +1847,7 @@ static ssize_t sysfs_blk_trace_attr_store(struct device *dev,
+ struct block_device *bdev;
+ struct request_queue *q;
+ struct hd_struct *p;
++ struct blk_trace *bt;
+ u64 value;
+ ssize_t ret = -EINVAL;
+
+@@ -1828,8 +1878,10 @@ static ssize_t sysfs_blk_trace_attr_store(struct device *dev,
+
+ mutex_lock(&q->blk_trace_mutex);
+
++ bt = rcu_dereference_protected(q->blk_trace,
++ lockdep_is_held(&q->blk_trace_mutex));
+ if (attr == &dev_attr_enable) {
+- if (!!value == !!q->blk_trace) {
++ if (!!value == !!bt) {
+ ret = 0;
+ goto out_unlock_bdev;
+ }
+@@ -1841,18 +1893,21 @@ static ssize_t sysfs_blk_trace_attr_store(struct device *dev,
+ }
+
+ ret = 0;
+- if (q->blk_trace == NULL)
++ if (bt == NULL) {
+ ret = blk_trace_setup_queue(q, bdev);
++ bt = rcu_dereference_protected(q->blk_trace,
++ lockdep_is_held(&q->blk_trace_mutex));
++ }
+
+ if (ret == 0) {
+ if (attr == &dev_attr_act_mask)
+- q->blk_trace->act_mask = value;
++ bt->act_mask = value;
+ else if (attr == &dev_attr_pid)
+- q->blk_trace->pid = value;
++ bt->pid = value;
+ else if (attr == &dev_attr_start_lba)
+- q->blk_trace->start_lba = value;
++ bt->start_lba = value;
+ else if (attr == &dev_attr_end_lba)
+- q->blk_trace->end_lba = value;
++ bt->end_lba = value;
+ }
+
+ out_unlock_bdev:
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index 6f4ce9547658..e068c7f75a84 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -4820,8 +4820,8 @@ pte_t *huge_pte_offset(struct mm_struct *mm,
+ {
+ pgd_t *pgd;
+ p4d_t *p4d;
+- pud_t *pud;
+- pmd_t *pmd;
++ pud_t *pud, pud_entry;
++ pmd_t *pmd, pmd_entry;
+
+ pgd = pgd_offset(mm, addr);
+ if (!pgd_present(*pgd))
+@@ -4831,17 +4831,19 @@ pte_t *huge_pte_offset(struct mm_struct *mm,
+ return NULL;
+
+ pud = pud_offset(p4d, addr);
+- if (sz != PUD_SIZE && pud_none(*pud))
++ pud_entry = READ_ONCE(*pud);
++ if (sz != PUD_SIZE && pud_none(pud_entry))
+ return NULL;
+ /* hugepage or swap? */
+- if (pud_huge(*pud) || !pud_present(*pud))
++ if (pud_huge(pud_entry) || !pud_present(pud_entry))
+ return (pte_t *)pud;
+
+ pmd = pmd_offset(pud, addr);
+- if (sz != PMD_SIZE && pmd_none(*pmd))
++ pmd_entry = READ_ONCE(*pmd);
++ if (sz != PMD_SIZE && pmd_none(pmd_entry))
+ return NULL;
+ /* hugepage or swap? */
+- if (pmd_huge(*pmd) || !pmd_present(*pmd))
++ if (pmd_huge(pmd_entry) || !pmd_present(pmd_entry))
+ return (pte_t *)pmd;
+
+ return NULL;
+diff --git a/mm/ksm.c b/mm/ksm.c
+index b3ea0f0316eb..d021bcf94c41 100644
+--- a/mm/ksm.c
++++ b/mm/ksm.c
+@@ -2106,8 +2106,16 @@ static void cmp_and_merge_page(struct page *page, struct rmap_item *rmap_item)
+
+ down_read(&mm->mmap_sem);
+ vma = find_mergeable_vma(mm, rmap_item->address);
+- err = try_to_merge_one_page(vma, page,
+- ZERO_PAGE(rmap_item->address));
++ if (vma) {
++ err = try_to_merge_one_page(vma, page,
++ ZERO_PAGE(rmap_item->address));
++ } else {
++ /*
++ * If the vma is out of date, we do not need to
++ * continue.
++ */
++ err = 0;
++ }
+ up_read(&mm->mmap_sem);
+ /*
+ * In case of failure, the page was not really empty, so we
+diff --git a/mm/vmalloc.c b/mm/vmalloc.c
+index be65161f9753..11d0f0b6ec79 100644
+--- a/mm/vmalloc.c
++++ b/mm/vmalloc.c
+@@ -31,6 +31,7 @@
+ #include <linux/compiler.h>
+ #include <linux/llist.h>
+ #include <linux/bitops.h>
++#include <linux/overflow.h>
+
+ #include <linux/uaccess.h>
+ #include <asm/tlbflush.h>
+@@ -2228,6 +2229,7 @@ finished:
+ * @vma: vma to cover
+ * @uaddr: target user address to start at
+ * @kaddr: virtual address of vmalloc kernel memory
++ * @pgoff: offset from @kaddr to start at
+ * @size: size of map area
+ *
+ * Returns: 0 for success, -Exxx on failure
+@@ -2240,9 +2242,15 @@ finished:
+ * Similar to remap_pfn_range() (see mm/memory.c)
+ */
+ int remap_vmalloc_range_partial(struct vm_area_struct *vma, unsigned long uaddr,
+- void *kaddr, unsigned long size)
++ void *kaddr, unsigned long pgoff,
++ unsigned long size)
+ {
+ struct vm_struct *area;
++ unsigned long off;
++ unsigned long end_index;
++
++ if (check_shl_overflow(pgoff, PAGE_SHIFT, &off))
++ return -EINVAL;
+
+ size = PAGE_ALIGN(size);
+
+@@ -2256,8 +2264,10 @@ int remap_vmalloc_range_partial(struct vm_area_struct *vma, unsigned long uaddr,
+ if (!(area->flags & VM_USERMAP))
+ return -EINVAL;
+
+- if (kaddr + size > area->addr + get_vm_area_size(area))
++ if (check_add_overflow(size, off, &end_index) ||
++ end_index > get_vm_area_size(area))
+ return -EINVAL;
++ kaddr += off;
+
+ do {
+ struct page *page = vmalloc_to_page(kaddr);
+@@ -2296,7 +2306,7 @@ int remap_vmalloc_range(struct vm_area_struct *vma, void *addr,
+ unsigned long pgoff)
+ {
+ return remap_vmalloc_range_partial(vma, vma->vm_start,
+- addr + (pgoff << PAGE_SHIFT),
++ addr, pgoff,
+ vma->vm_end - vma->vm_start);
+ }
+ EXPORT_SYMBOL(remap_vmalloc_range);
+diff --git a/net/dccp/ipv6.c b/net/dccp/ipv6.c
+index 58a401e9cf09..b438bed6749d 100644
+--- a/net/dccp/ipv6.c
++++ b/net/dccp/ipv6.c
+@@ -211,7 +211,7 @@ static int dccp_v6_send_response(const struct sock *sk, struct request_sock *req
+ final_p = fl6_update_dst(&fl6, rcu_dereference(np->opt), &final);
+ rcu_read_unlock();
+
+- dst = ip6_dst_lookup_flow(sk, &fl6, final_p);
++ dst = ip6_dst_lookup_flow(sock_net(sk), sk, &fl6, final_p);
+ if (IS_ERR(dst)) {
+ err = PTR_ERR(dst);
+ dst = NULL;
+@@ -282,7 +282,7 @@ static void dccp_v6_ctl_send_reset(const struct sock *sk, struct sk_buff *rxskb)
+ security_skb_classify_flow(rxskb, flowi6_to_flowi(&fl6));
+
+ /* sk = NULL, but it is safe for now. RST socket required. */
+- dst = ip6_dst_lookup_flow(ctl_sk, &fl6, NULL);
++ dst = ip6_dst_lookup_flow(sock_net(ctl_sk), ctl_sk, &fl6, NULL);
+ if (!IS_ERR(dst)) {
+ skb_dst_set(skb, dst);
+ ip6_xmit(ctl_sk, skb, &fl6, 0, NULL, 0);
+@@ -912,7 +912,7 @@ static int dccp_v6_connect(struct sock *sk, struct sockaddr *uaddr,
+ opt = rcu_dereference_protected(np->opt, lockdep_sock_is_held(sk));
+ final_p = fl6_update_dst(&fl6, opt, &final);
+
+- dst = ip6_dst_lookup_flow(sk, &fl6, final_p);
++ dst = ip6_dst_lookup_flow(sock_net(sk), sk, &fl6, final_p);
+ if (IS_ERR(dst)) {
+ err = PTR_ERR(dst);
+ goto failure;
+diff --git a/net/ipv4/ip_vti.c b/net/ipv4/ip_vti.c
+index ccb1d97dfa05..d4c4eabd02b6 100644
+--- a/net/ipv4/ip_vti.c
++++ b/net/ipv4/ip_vti.c
+@@ -677,10 +677,8 @@ static int __init vti_init(void)
+
+ msg = "ipip tunnel";
+ err = xfrm4_tunnel_register(&ipip_handler, AF_INET);
+- if (err < 0) {
+- pr_info("%s: cant't register tunnel\n",__func__);
++ if (err < 0)
+ goto xfrm_tunnel_failed;
+- }
+
+ msg = "netlink interface";
+ err = rtnl_link_register(&vti_link_ops);
+diff --git a/net/ipv4/xfrm4_output.c b/net/ipv4/xfrm4_output.c
+index be980c195fc5..510d2ec4c76a 100644
+--- a/net/ipv4/xfrm4_output.c
++++ b/net/ipv4/xfrm4_output.c
+@@ -77,9 +77,7 @@ int xfrm4_output_finish(struct sock *sk, struct sk_buff *skb)
+ {
+ memset(IPCB(skb), 0, sizeof(*IPCB(skb)));
+
+-#ifdef CONFIG_NETFILTER
+ IPCB(skb)->flags |= IPSKB_XFRM_TRANSFORMED;
+-#endif
+
+ return xfrm_output(sk, skb);
+ }
+diff --git a/net/ipv6/addrconf_core.c b/net/ipv6/addrconf_core.c
+index 5cd0029d930e..66a1a0eb2ed0 100644
+--- a/net/ipv6/addrconf_core.c
++++ b/net/ipv6/addrconf_core.c
+@@ -127,11 +127,12 @@ int inet6addr_validator_notifier_call_chain(unsigned long val, void *v)
+ }
+ EXPORT_SYMBOL(inet6addr_validator_notifier_call_chain);
+
+-static int eafnosupport_ipv6_dst_lookup(struct net *net, struct sock *u1,
+- struct dst_entry **u2,
+- struct flowi6 *u3)
++static struct dst_entry *eafnosupport_ipv6_dst_lookup_flow(struct net *net,
++ const struct sock *sk,
++ struct flowi6 *fl6,
++ const struct in6_addr *final_dst)
+ {
+- return -EAFNOSUPPORT;
++ return ERR_PTR(-EAFNOSUPPORT);
+ }
+
+ static struct fib6_table *eafnosupport_fib6_get_table(struct net *net, u32 id)
+@@ -169,7 +170,7 @@ eafnosupport_ip6_mtu_from_fib6(struct fib6_info *f6i, struct in6_addr *daddr,
+ }
+
+ const struct ipv6_stub *ipv6_stub __read_mostly = &(struct ipv6_stub) {
+- .ipv6_dst_lookup = eafnosupport_ipv6_dst_lookup,
++ .ipv6_dst_lookup_flow = eafnosupport_ipv6_dst_lookup_flow,
+ .fib6_get_table = eafnosupport_fib6_get_table,
+ .fib6_table_lookup = eafnosupport_fib6_table_lookup,
+ .fib6_lookup = eafnosupport_fib6_lookup,
+diff --git a/net/ipv6/af_inet6.c b/net/ipv6/af_inet6.c
+index 79fcd9550fd2..5c2351deedc8 100644
+--- a/net/ipv6/af_inet6.c
++++ b/net/ipv6/af_inet6.c
+@@ -740,7 +740,7 @@ int inet6_sk_rebuild_header(struct sock *sk)
+ &final);
+ rcu_read_unlock();
+
+- dst = ip6_dst_lookup_flow(sk, &fl6, final_p);
++ dst = ip6_dst_lookup_flow(sock_net(sk), sk, &fl6, final_p);
+ if (IS_ERR(dst)) {
+ sk->sk_route_caps = 0;
+ sk->sk_err_soft = -PTR_ERR(dst);
+@@ -904,7 +904,7 @@ static struct pernet_operations inet6_net_ops = {
+ static const struct ipv6_stub ipv6_stub_impl = {
+ .ipv6_sock_mc_join = ipv6_sock_mc_join,
+ .ipv6_sock_mc_drop = ipv6_sock_mc_drop,
+- .ipv6_dst_lookup = ip6_dst_lookup,
++ .ipv6_dst_lookup_flow = ip6_dst_lookup_flow,
+ .fib6_get_table = fib6_get_table,
+ .fib6_table_lookup = fib6_table_lookup,
+ .fib6_lookup = fib6_lookup,
+diff --git a/net/ipv6/datagram.c b/net/ipv6/datagram.c
+index 971a0fdf1fbc..727f958dd869 100644
+--- a/net/ipv6/datagram.c
++++ b/net/ipv6/datagram.c
+@@ -89,7 +89,7 @@ int ip6_datagram_dst_update(struct sock *sk, bool fix_sk_saddr)
+ final_p = fl6_update_dst(&fl6, opt, &final);
+ rcu_read_unlock();
+
+- dst = ip6_dst_lookup_flow(sk, &fl6, final_p);
++ dst = ip6_dst_lookup_flow(sock_net(sk), sk, &fl6, final_p);
+ if (IS_ERR(dst)) {
+ err = PTR_ERR(dst);
+ goto out;
+diff --git a/net/ipv6/inet6_connection_sock.c b/net/ipv6/inet6_connection_sock.c
+index 890adadcda16..92fe9e565da0 100644
+--- a/net/ipv6/inet6_connection_sock.c
++++ b/net/ipv6/inet6_connection_sock.c
+@@ -52,7 +52,7 @@ struct dst_entry *inet6_csk_route_req(const struct sock *sk,
+ fl6->flowi6_uid = sk->sk_uid;
+ security_req_classify_flow(req, flowi6_to_flowi(fl6));
+
+- dst = ip6_dst_lookup_flow(sk, fl6, final_p);
++ dst = ip6_dst_lookup_flow(sock_net(sk), sk, fl6, final_p);
+ if (IS_ERR(dst))
+ return NULL;
+
+@@ -107,7 +107,7 @@ static struct dst_entry *inet6_csk_route_socket(struct sock *sk,
+
+ dst = __inet6_csk_dst_check(sk, np->dst_cookie);
+ if (!dst) {
+- dst = ip6_dst_lookup_flow(sk, fl6, final_p);
++ dst = ip6_dst_lookup_flow(sock_net(sk), sk, fl6, final_p);
+
+ if (!IS_ERR(dst))
+ ip6_dst_store(sk, dst, NULL, NULL);
+diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
+index 9886a84c2511..22665e3638ac 100644
+--- a/net/ipv6/ip6_output.c
++++ b/net/ipv6/ip6_output.c
+@@ -1071,19 +1071,19 @@ EXPORT_SYMBOL_GPL(ip6_dst_lookup);
+ * It returns a valid dst pointer on success, or a pointer encoded
+ * error code.
+ */
+-struct dst_entry *ip6_dst_lookup_flow(const struct sock *sk, struct flowi6 *fl6,
++struct dst_entry *ip6_dst_lookup_flow(struct net *net, const struct sock *sk, struct flowi6 *fl6,
+ const struct in6_addr *final_dst)
+ {
+ struct dst_entry *dst = NULL;
+ int err;
+
+- err = ip6_dst_lookup_tail(sock_net(sk), sk, &dst, fl6);
++ err = ip6_dst_lookup_tail(net, sk, &dst, fl6);
+ if (err)
+ return ERR_PTR(err);
+ if (final_dst)
+ fl6->daddr = *final_dst;
+
+- return xfrm_lookup_route(sock_net(sk), dst, flowi6_to_flowi(fl6), sk, 0);
++ return xfrm_lookup_route(net, dst, flowi6_to_flowi(fl6), sk, 0);
+ }
+ EXPORT_SYMBOL_GPL(ip6_dst_lookup_flow);
+
+@@ -1115,7 +1115,7 @@ struct dst_entry *ip6_sk_dst_lookup_flow(struct sock *sk, struct flowi6 *fl6,
+ if (dst)
+ return dst;
+
+- dst = ip6_dst_lookup_flow(sk, fl6, final_dst);
++ dst = ip6_dst_lookup_flow(sock_net(sk), sk, fl6, final_dst);
+ if (connected && !IS_ERR(dst))
+ ip6_sk_dst_store_flow(sk, dst_clone(dst), fl6);
+
+diff --git a/net/ipv6/ipv6_sockglue.c b/net/ipv6/ipv6_sockglue.c
+index a20be08f0e0b..231c489128e4 100644
+--- a/net/ipv6/ipv6_sockglue.c
++++ b/net/ipv6/ipv6_sockglue.c
+@@ -185,15 +185,14 @@ static int do_ipv6_setsockopt(struct sock *sk, int level, int optname,
+ retv = -EBUSY;
+ break;
+ }
+- } else if (sk->sk_protocol == IPPROTO_TCP) {
+- if (sk->sk_prot != &tcpv6_prot) {
+- retv = -EBUSY;
+- break;
+- }
+- break;
+- } else {
++ }
++ if (sk->sk_protocol == IPPROTO_TCP &&
++ sk->sk_prot != &tcpv6_prot) {
++ retv = -EBUSY;
+ break;
+ }
++ if (sk->sk_protocol != IPPROTO_TCP)
++ break;
+ if (sk->sk_state != TCP_ESTABLISHED) {
+ retv = -ENOTCONN;
+ break;
+diff --git a/net/ipv6/raw.c b/net/ipv6/raw.c
+index a41156a00dd4..8d19729f8516 100644
+--- a/net/ipv6/raw.c
++++ b/net/ipv6/raw.c
+@@ -928,7 +928,7 @@ static int rawv6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+
+ fl6.flowlabel = ip6_make_flowinfo(ipc6.tclass, fl6.flowlabel);
+
+- dst = ip6_dst_lookup_flow(sk, &fl6, final_p);
++ dst = ip6_dst_lookup_flow(sock_net(sk), sk, &fl6, final_p);
+ if (IS_ERR(dst)) {
+ err = PTR_ERR(dst);
+ goto out;
+diff --git a/net/ipv6/syncookies.c b/net/ipv6/syncookies.c
+index e997141aed8c..a377be8a9fb4 100644
+--- a/net/ipv6/syncookies.c
++++ b/net/ipv6/syncookies.c
+@@ -240,7 +240,7 @@ struct sock *cookie_v6_check(struct sock *sk, struct sk_buff *skb)
+ fl6.flowi6_uid = sk->sk_uid;
+ security_req_classify_flow(req, flowi6_to_flowi(&fl6));
+
+- dst = ip6_dst_lookup_flow(sk, &fl6, final_p);
++ dst = ip6_dst_lookup_flow(sock_net(sk), sk, &fl6, final_p);
+ if (IS_ERR(dst))
+ goto out_free;
+ }
+diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
+index 7b0c2498f461..2e76ebfdc907 100644
+--- a/net/ipv6/tcp_ipv6.c
++++ b/net/ipv6/tcp_ipv6.c
+@@ -268,7 +268,7 @@ static int tcp_v6_connect(struct sock *sk, struct sockaddr *uaddr,
+
+ security_sk_classify_flow(sk, flowi6_to_flowi(&fl6));
+
+- dst = ip6_dst_lookup_flow(sk, &fl6, final_p);
++ dst = ip6_dst_lookup_flow(sock_net(sk), sk, &fl6, final_p);
+ if (IS_ERR(dst)) {
+ err = PTR_ERR(dst);
+ goto failure;
+@@ -885,7 +885,7 @@ static void tcp_v6_send_response(const struct sock *sk, struct sk_buff *skb, u32
+ * Underlying function will use this to retrieve the network
+ * namespace
+ */
+- dst = ip6_dst_lookup_flow(ctl_sk, &fl6, NULL);
++ dst = ip6_dst_lookup_flow(sock_net(ctl_sk), ctl_sk, &fl6, NULL);
+ if (!IS_ERR(dst)) {
+ skb_dst_set(buff, dst);
+ ip6_xmit(ctl_sk, buff, &fl6, fl6.flowi6_mark, NULL, tclass);
+diff --git a/net/ipv6/xfrm6_output.c b/net/ipv6/xfrm6_output.c
+index 6a74080005cf..71d022704923 100644
+--- a/net/ipv6/xfrm6_output.c
++++ b/net/ipv6/xfrm6_output.c
+@@ -130,9 +130,7 @@ int xfrm6_output_finish(struct sock *sk, struct sk_buff *skb)
+ {
+ memset(IP6CB(skb), 0, sizeof(*IP6CB(skb)));
+
+-#ifdef CONFIG_NETFILTER
+ IP6CB(skb)->flags |= IP6SKB_XFRM_TRANSFORMED;
+-#endif
+
+ return xfrm_output(sk, skb);
+ }
+diff --git a/net/l2tp/l2tp_ip6.c b/net/l2tp/l2tp_ip6.c
+index 37a69df17cab..2f28f9910b92 100644
+--- a/net/l2tp/l2tp_ip6.c
++++ b/net/l2tp/l2tp_ip6.c
+@@ -619,7 +619,7 @@ static int l2tp_ip6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+
+ fl6.flowlabel = ip6_make_flowinfo(ipc6.tclass, fl6.flowlabel);
+
+- dst = ip6_dst_lookup_flow(sk, &fl6, final_p);
++ dst = ip6_dst_lookup_flow(sock_net(sk), sk, &fl6, final_p);
+ if (IS_ERR(dst)) {
+ err = PTR_ERR(dst);
+ goto out;
+diff --git a/net/mpls/af_mpls.c b/net/mpls/af_mpls.c
+index d5a4db5b3fe7..7623d9aec636 100644
+--- a/net/mpls/af_mpls.c
++++ b/net/mpls/af_mpls.c
+@@ -618,16 +618,15 @@ static struct net_device *inet6_fib_lookup_dev(struct net *net,
+ struct net_device *dev;
+ struct dst_entry *dst;
+ struct flowi6 fl6;
+- int err;
+
+ if (!ipv6_stub)
+ return ERR_PTR(-EAFNOSUPPORT);
+
+ memset(&fl6, 0, sizeof(fl6));
+ memcpy(&fl6.daddr, addr, sizeof(struct in6_addr));
+- err = ipv6_stub->ipv6_dst_lookup(net, NULL, &dst, &fl6);
+- if (err)
+- return ERR_PTR(err);
++ dst = ipv6_stub->ipv6_dst_lookup_flow(net, NULL, &fl6, NULL);
++ if (IS_ERR(dst))
++ return ERR_CAST(dst);
+
+ dev = dst->dev;
+ dev_hold(dev);
+diff --git a/net/netrom/nr_route.c b/net/netrom/nr_route.c
+index b76aa668a94b..53ced34a1fdd 100644
+--- a/net/netrom/nr_route.c
++++ b/net/netrom/nr_route.c
+@@ -211,6 +211,7 @@ static int __must_check nr_add_node(ax25_address *nr, const char *mnemonic,
+ /* refcount initialized at 1 */
+ spin_unlock_bh(&nr_node_list_lock);
+
++ nr_neigh_put(nr_neigh);
+ return 0;
+ }
+ nr_node_lock(nr_node);
+diff --git a/net/sched/sch_etf.c b/net/sched/sch_etf.c
+index 1538d6fa8165..2278f3d420cd 100644
+--- a/net/sched/sch_etf.c
++++ b/net/sched/sch_etf.c
+@@ -77,7 +77,7 @@ static bool is_packet_valid(struct Qdisc *sch, struct sk_buff *nskb)
+ struct sock *sk = nskb->sk;
+ ktime_t now;
+
+- if (!sk)
++ if (!sk || !sk_fullsock(sk))
+ return false;
+
+ if (!sock_flag(sk, SOCK_TXTIME))
+@@ -129,8 +129,9 @@ static void report_sock_error(struct sk_buff *skb, u32 err, u8 code)
+ struct sock_exterr_skb *serr;
+ struct sk_buff *clone;
+ ktime_t txtime = skb->tstamp;
++ struct sock *sk = skb->sk;
+
+- if (!skb->sk || !(skb->sk->sk_txtime_report_errors))
++ if (!sk || !sk_fullsock(sk) || !(sk->sk_txtime_report_errors))
+ return;
+
+ clone = skb_clone(skb, GFP_ATOMIC);
+@@ -146,7 +147,7 @@ static void report_sock_error(struct sk_buff *skb, u32 err, u8 code)
+ serr->ee.ee_data = (txtime >> 32); /* high part of tstamp */
+ serr->ee.ee_info = txtime; /* low part of tstamp */
+
+- if (sock_queue_err_skb(skb->sk, clone))
++ if (sock_queue_err_skb(sk, clone))
+ kfree_skb(clone);
+ }
+
+diff --git a/net/sctp/ipv6.c b/net/sctp/ipv6.c
+index 7657194f396e..736d8ca9821b 100644
+--- a/net/sctp/ipv6.c
++++ b/net/sctp/ipv6.c
+@@ -288,7 +288,7 @@ static void sctp_v6_get_dst(struct sctp_transport *t, union sctp_addr *saddr,
+ final_p = fl6_update_dst(fl6, rcu_dereference(np->opt), &final);
+ rcu_read_unlock();
+
+- dst = ip6_dst_lookup_flow(sk, fl6, final_p);
++ dst = ip6_dst_lookup_flow(sock_net(sk), sk, fl6, final_p);
+ if (!asoc || saddr) {
+ t->dst = dst;
+ memcpy(fl, &_fl, sizeof(_fl));
+@@ -346,7 +346,7 @@ static void sctp_v6_get_dst(struct sctp_transport *t, union sctp_addr *saddr,
+ fl6->saddr = laddr->a.v6.sin6_addr;
+ fl6->fl6_sport = laddr->a.v6.sin6_port;
+ final_p = fl6_update_dst(fl6, rcu_dereference(np->opt), &final);
+- bdst = ip6_dst_lookup_flow(sk, fl6, final_p);
++ bdst = ip6_dst_lookup_flow(sock_net(sk), sk, fl6, final_p);
+
+ if (IS_ERR(bdst))
+ continue;
+diff --git a/net/tipc/udp_media.c b/net/tipc/udp_media.c
+index 382c84d9339d..1d6235479706 100644
+--- a/net/tipc/udp_media.c
++++ b/net/tipc/udp_media.c
+@@ -189,10 +189,13 @@ static int tipc_udp_xmit(struct net *net, struct sk_buff *skb,
+ .saddr = src->ipv6,
+ .flowi6_proto = IPPROTO_UDP
+ };
+- err = ipv6_stub->ipv6_dst_lookup(net, ub->ubsock->sk, &ndst,
+- &fl6);
+- if (err)
++ ndst = ipv6_stub->ipv6_dst_lookup_flow(net,
++ ub->ubsock->sk,
++ &fl6, NULL);
++ if (IS_ERR(ndst)) {
++ err = PTR_ERR(ndst);
+ goto tx_error;
++ }
+ ttl = ip6_dst_hoplimit(ndst);
+ err = udp_tunnel6_xmit_skb(ndst, ub->ubsock->sk, skb, NULL,
+ &src->ipv6, &dst->ipv6, 0, ttl, 0,
+diff --git a/net/x25/x25_dev.c b/net/x25/x25_dev.c
+index 39231237e1c3..30f71620d4e3 100644
+--- a/net/x25/x25_dev.c
++++ b/net/x25/x25_dev.c
+@@ -120,8 +120,10 @@ int x25_lapb_receive_frame(struct sk_buff *skb, struct net_device *dev,
+ goto drop;
+ }
+
+- if (!pskb_may_pull(skb, 1))
++ if (!pskb_may_pull(skb, 1)) {
++ x25_neigh_put(nb);
+ return 0;
++ }
+
+ switch (skb->data[0]) {
+
+diff --git a/samples/vfio-mdev/mdpy.c b/samples/vfio-mdev/mdpy.c
+index 96e7969c473a..d774717cd906 100644
+--- a/samples/vfio-mdev/mdpy.c
++++ b/samples/vfio-mdev/mdpy.c
+@@ -418,7 +418,7 @@ static int mdpy_mmap(struct mdev_device *mdev, struct vm_area_struct *vma)
+ return -EINVAL;
+
+ return remap_vmalloc_range_partial(vma, vma->vm_start,
+- mdev_state->memblk,
++ mdev_state->memblk, 0,
+ vma->vm_end - vma->vm_start);
+ }
+
+diff --git a/scripts/kconfig/qconf.cc b/scripts/kconfig/qconf.cc
+index ef4310f2558b..8f004db6f603 100644
+--- a/scripts/kconfig/qconf.cc
++++ b/scripts/kconfig/qconf.cc
+@@ -627,7 +627,7 @@ void ConfigList::updateMenuList(ConfigItem *parent, struct menu* menu)
+ last = item;
+ continue;
+ }
+- hide:
++hide:
+ if (item && item->menu == child) {
+ last = parent->firstChild();
+ if (last == item)
+@@ -692,7 +692,7 @@ void ConfigList::updateMenuList(ConfigList *parent, struct menu* menu)
+ last = item;
+ continue;
+ }
+- hide:
++hide:
+ if (item && item->menu == child) {
+ last = (ConfigItem*)parent->topLevelItem(0);
+ if (last == item)
+@@ -1225,10 +1225,11 @@ QMenu* ConfigInfoView::createStandardContextMenu(const QPoint & pos)
+ {
+ QMenu* popup = Parent::createStandardContextMenu(pos);
+ QAction* action = new QAction("Show Debug Info", popup);
+- action->setCheckable(true);
+- connect(action, SIGNAL(toggled(bool)), SLOT(setShowDebug(bool)));
+- connect(this, SIGNAL(showDebugChanged(bool)), action, SLOT(setOn(bool)));
+- action->setChecked(showDebug());
++
++ action->setCheckable(true);
++ connect(action, SIGNAL(toggled(bool)), SLOT(setShowDebug(bool)));
++ connect(this, SIGNAL(showDebugChanged(bool)), action, SLOT(setOn(bool)));
++ action->setChecked(showDebug());
+ popup->addSeparator();
+ popup->addAction(action);
+ return popup;
+diff --git a/security/keys/internal.h b/security/keys/internal.h
+index a02742621c8d..eb50212fbbf8 100644
+--- a/security/keys/internal.h
++++ b/security/keys/internal.h
+@@ -20,6 +20,8 @@
+ #include <linux/keyctl.h>
+ #include <linux/refcount.h>
+ #include <linux/compat.h>
++#include <linux/mm.h>
++#include <linux/vmalloc.h>
+
+ struct iovec;
+
+@@ -305,4 +307,14 @@ static inline void key_check(const struct key *key)
+
+ #endif
+
++/*
++ * Helper function to clear and free a kvmalloc'ed memory object.
++ */
++static inline void __kvzfree(const void *addr, size_t len)
++{
++ if (addr) {
++ memset((void *)addr, 0, len);
++ kvfree(addr);
++ }
++}
+ #endif /* _INTERNAL_H */
+diff --git a/security/keys/keyctl.c b/security/keys/keyctl.c
+index 4b6a084e323b..c07c2e2b2478 100644
+--- a/security/keys/keyctl.c
++++ b/security/keys/keyctl.c
+@@ -330,7 +330,7 @@ long keyctl_update_key(key_serial_t id,
+ payload = NULL;
+ if (plen) {
+ ret = -ENOMEM;
+- payload = kmalloc(plen, GFP_KERNEL);
++ payload = kvmalloc(plen, GFP_KERNEL);
+ if (!payload)
+ goto error;
+
+@@ -351,7 +351,7 @@ long keyctl_update_key(key_serial_t id,
+
+ key_ref_put(key_ref);
+ error2:
+- kzfree(payload);
++ __kvzfree(payload, plen);
+ error:
+ return ret;
+ }
+@@ -772,7 +772,8 @@ long keyctl_read_key(key_serial_t keyid, char __user *buffer, size_t buflen)
+ struct key *key;
+ key_ref_t key_ref;
+ long ret;
+- char *key_data;
++ char *key_data = NULL;
++ size_t key_data_len;
+
+ /* find the key first */
+ key_ref = lookup_user_key(keyid, 0, 0);
+@@ -823,24 +824,51 @@ can_read_key:
+ * Allocating a temporary buffer to hold the keys before
+ * transferring them to user buffer to avoid potential
+ * deadlock involving page fault and mmap_sem.
++ *
++ * key_data_len = (buflen <= PAGE_SIZE)
++ * ? buflen : actual length of key data
++ *
++ * This prevents allocating arbitrary large buffer which can
++ * be much larger than the actual key length. In the latter case,
++ * at least 2 passes of this loop is required.
+ */
+- key_data = kmalloc(buflen, GFP_KERNEL);
++ key_data_len = (buflen <= PAGE_SIZE) ? buflen : 0;
++ for (;;) {
++ if (key_data_len) {
++ key_data = kvmalloc(key_data_len, GFP_KERNEL);
++ if (!key_data) {
++ ret = -ENOMEM;
++ goto key_put_out;
++ }
++ }
+
+- if (!key_data) {
+- ret = -ENOMEM;
+- goto key_put_out;
+- }
+- ret = __keyctl_read_key(key, key_data, buflen);
++ ret = __keyctl_read_key(key, key_data, key_data_len);
++
++ /*
++ * Read methods will just return the required length without
++ * any copying if the provided length isn't large enough.
++ */
++ if (ret <= 0 || ret > buflen)
++ break;
++
++ /*
++ * The key may change (unlikely) in between 2 consecutive
++ * __keyctl_read_key() calls. In this case, we reallocate
++ * a larger buffer and redo the key read when
++ * key_data_len < ret <= buflen.
++ */
++ if (ret > key_data_len) {
++ if (unlikely(key_data))
++ __kvzfree(key_data, key_data_len);
++ key_data_len = ret;
++ continue; /* Allocate buffer */
++ }
+
+- /*
+- * Read methods will just return the required length without
+- * any copying if the provided length isn't large enough.
+- */
+- if (ret > 0 && ret <= buflen) {
+ if (copy_to_user(buffer, key_data, ret))
+ ret = -EFAULT;
++ break;
+ }
+- kzfree(key_data);
++ __kvzfree(key_data, key_data_len);
+
+ key_put_out:
+ key_put(key);
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index 54a9b391ecce..0502042c1616 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -2215,7 +2215,6 @@ static const struct hdac_io_ops pci_hda_io_ops = {
+ * should be ignored from the beginning.
+ */
+ static const struct snd_pci_quirk driver_blacklist[] = {
+- SND_PCI_QUIRK(0x1043, 0x874f, "ASUS ROG Zenith II / Strix", 0),
+ SND_PCI_QUIRK(0x1462, 0xcb59, "MSI TRX40 Creator", 0),
+ SND_PCI_QUIRK(0x1462, 0xcb60, "MSI TRX40", 0),
+ {}
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index ea439bee8e6f..9620a8461d91 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -380,6 +380,7 @@ static void alc_fill_eapd_coef(struct hda_codec *codec)
+ case 0x10ec0233:
+ case 0x10ec0235:
+ case 0x10ec0236:
++ case 0x10ec0245:
+ case 0x10ec0255:
+ case 0x10ec0256:
+ case 0x10ec0257:
+@@ -801,9 +802,11 @@ static void alc_ssid_check(struct hda_codec *codec, const hda_nid_t *ports)
+ {
+ if (!alc_subsystem_id(codec, ports)) {
+ struct alc_spec *spec = codec->spec;
+- codec_dbg(codec,
+- "realtek: Enable default setup for auto mode as fallback\n");
+- spec->init_amp = ALC_INIT_DEFAULT;
++ if (spec->init_amp == ALC_INIT_UNDEFINED) {
++ codec_dbg(codec,
++ "realtek: Enable default setup for auto mode as fallback\n");
++ spec->init_amp = ALC_INIT_DEFAULT;
++ }
+ }
+ }
+
+@@ -7790,6 +7793,7 @@ static int patch_alc269(struct hda_codec *codec)
+ spec->gen.mixer_nid = 0;
+ break;
+ case 0x10ec0215:
++ case 0x10ec0245:
+ case 0x10ec0285:
+ case 0x10ec0289:
+ spec->codec_variant = ALC269_TYPE_ALC215;
+@@ -8911,6 +8915,7 @@ static const struct hda_device_id snd_hda_id_realtek[] = {
+ HDA_CODEC_ENTRY(0x10ec0234, "ALC234", patch_alc269),
+ HDA_CODEC_ENTRY(0x10ec0235, "ALC233", patch_alc269),
+ HDA_CODEC_ENTRY(0x10ec0236, "ALC236", patch_alc269),
++ HDA_CODEC_ENTRY(0x10ec0245, "ALC245", patch_alc269),
+ HDA_CODEC_ENTRY(0x10ec0255, "ALC255", patch_alc269),
+ HDA_CODEC_ENTRY(0x10ec0256, "ALC256", patch_alc269),
+ HDA_CODEC_ENTRY(0x10ec0257, "ALC257", patch_alc269),
+diff --git a/sound/soc/intel/atom/sst-atom-controls.c b/sound/soc/intel/atom/sst-atom-controls.c
+index 737f5d553313..a1d7f93a0805 100644
+--- a/sound/soc/intel/atom/sst-atom-controls.c
++++ b/sound/soc/intel/atom/sst-atom-controls.c
+@@ -974,7 +974,9 @@ static int sst_set_be_modules(struct snd_soc_dapm_widget *w,
+ dev_dbg(c->dev, "Enter: widget=%s\n", w->name);
+
+ if (SND_SOC_DAPM_EVENT_ON(event)) {
++ mutex_lock(&drv->lock);
+ ret = sst_send_slot_map(drv);
++ mutex_unlock(&drv->lock);
+ if (ret)
+ return ret;
+ ret = sst_send_pipe_module_params(w, k);
+diff --git a/sound/soc/intel/boards/bytcr_rt5640.c b/sound/soc/intel/boards/bytcr_rt5640.c
+index e58240e18b30..f29014a7d672 100644
+--- a/sound/soc/intel/boards/bytcr_rt5640.c
++++ b/sound/soc/intel/boards/bytcr_rt5640.c
+@@ -588,6 +588,17 @@ static const struct dmi_system_id byt_rt5640_quirk_table[] = {
+ BYT_RT5640_SSP0_AIF1 |
+ BYT_RT5640_MCLK_EN),
+ },
++ {
++ /* MPMAN MPWIN895CL */
++ .matches = {
++ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "MPMAN"),
++ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "MPWIN8900CL"),
++ },
++ .driver_data = (void *)(BYTCR_INPUT_DEFAULTS |
++ BYT_RT5640_MONO_SPEAKER |
++ BYT_RT5640_SSP0_AIF1 |
++ BYT_RT5640_MCLK_EN),
++ },
+ { /* MSI S100 tablet */
+ .matches = {
+ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Micro-Star International Co., Ltd."),
+diff --git a/sound/soc/soc-dapm.c b/sound/soc/soc-dapm.c
+index d61e954417d0..96800b7c82f6 100644
+--- a/sound/soc/soc-dapm.c
++++ b/sound/soc/soc-dapm.c
+@@ -410,7 +410,7 @@ static int dapm_kcontrol_data_alloc(struct snd_soc_dapm_widget *widget,
+
+ memset(&template, 0, sizeof(template));
+ template.reg = e->reg;
+- template.mask = e->mask << e->shift_l;
++ template.mask = e->mask;
+ template.shift = e->shift_l;
+ template.off_val = snd_soc_enum_item_to_val(e, 0);
+ template.on_val = template.off_val;
+@@ -536,8 +536,22 @@ static bool dapm_kcontrol_set_value(const struct snd_kcontrol *kcontrol,
+ if (data->value == value)
+ return false;
+
+- if (data->widget)
+- data->widget->on_val = value;
++ if (data->widget) {
++ switch (dapm_kcontrol_get_wlist(kcontrol)->widgets[0]->id) {
++ case snd_soc_dapm_switch:
++ case snd_soc_dapm_mixer:
++ case snd_soc_dapm_mixer_named_ctl:
++ data->widget->on_val = value & data->widget->mask;
++ break;
++ case snd_soc_dapm_demux:
++ case snd_soc_dapm_mux:
++ data->widget->on_val = value >> data->widget->shift;
++ break;
++ default:
++ data->widget->on_val = value;
++ break;
++ }
++ }
+
+ data->value = value;
+
+diff --git a/sound/usb/format.c b/sound/usb/format.c
+index 9d27429ed403..c8207b52c651 100644
+--- a/sound/usb/format.c
++++ b/sound/usb/format.c
+@@ -237,6 +237,52 @@ static int parse_audio_format_rates_v1(struct snd_usb_audio *chip, struct audiof
+ return 0;
+ }
+
++/*
++ * Many Focusrite devices supports a limited set of sampling rates per
++ * altsetting. Maximum rate is exposed in the last 4 bytes of Format Type
++ * descriptor which has a non-standard bLength = 10.
++ */
++static bool focusrite_valid_sample_rate(struct snd_usb_audio *chip,
++ struct audioformat *fp,
++ unsigned int rate)
++{
++ struct usb_interface *iface;
++ struct usb_host_interface *alts;
++ unsigned char *fmt;
++ unsigned int max_rate;
++
++ iface = usb_ifnum_to_if(chip->dev, fp->iface);
++ if (!iface)
++ return true;
++
++ alts = &iface->altsetting[fp->altset_idx];
++ fmt = snd_usb_find_csint_desc(alts->extra, alts->extralen,
++ NULL, UAC_FORMAT_TYPE);
++ if (!fmt)
++ return true;
++
++ if (fmt[0] == 10) { /* bLength */
++ max_rate = combine_quad(&fmt[6]);
++
++ /* Validate max rate */
++ if (max_rate != 48000 &&
++ max_rate != 96000 &&
++ max_rate != 192000 &&
++ max_rate != 384000) {
++
++ usb_audio_info(chip,
++ "%u:%d : unexpected max rate: %u\n",
++ fp->iface, fp->altsetting, max_rate);
++
++ return true;
++ }
++
++ return rate <= max_rate;
++ }
++
++ return true;
++}
++
+ /*
+ * Helper function to walk the array of sample rate triplets reported by
+ * the device. The problem is that we need to parse whole array first to
+@@ -273,6 +319,11 @@ static int parse_uac2_sample_rate_range(struct snd_usb_audio *chip,
+ }
+
+ for (rate = min; rate <= max; rate += res) {
++ /* Filter out invalid rates on Focusrite devices */
++ if (USB_ID_VENDOR(chip->usb_id) == 0x1235 &&
++ !focusrite_valid_sample_rate(chip, fp, rate))
++ goto skip_rate;
++
+ if (fp->rate_table)
+ fp->rate_table[nr_rates] = rate;
+ if (!fp->rate_min || rate < fp->rate_min)
+@@ -287,6 +338,7 @@ static int parse_uac2_sample_rate_range(struct snd_usb_audio *chip,
+ break;
+ }
+
++skip_rate:
+ /* avoid endless loop */
+ if (res == 0)
+ break;
+diff --git a/sound/usb/mixer.c b/sound/usb/mixer.c
+index 257da95a4ea6..7a5c665cf4e4 100644
+--- a/sound/usb/mixer.c
++++ b/sound/usb/mixer.c
+@@ -1770,8 +1770,10 @@ static void build_connector_control(struct usb_mixer_interface *mixer,
+ {
+ struct snd_kcontrol *kctl;
+ struct usb_mixer_elem_info *cval;
++ const struct usbmix_name_map *map;
+
+- if (check_ignored_ctl(find_map(imap, term->id, 0)))
++ map = find_map(imap, term->id, 0);
++ if (check_ignored_ctl(map))
+ return;
+
+ cval = kzalloc(sizeof(*cval), GFP_KERNEL);
+@@ -1803,8 +1805,12 @@ static void build_connector_control(struct usb_mixer_interface *mixer,
+ usb_mixer_elem_info_free(cval);
+ return;
+ }
+- get_connector_control_name(mixer, term, is_input, kctl->id.name,
+- sizeof(kctl->id.name));
++
++ if (check_mapped_name(map, kctl->id.name, sizeof(kctl->id.name)))
++ strlcat(kctl->id.name, " Jack", sizeof(kctl->id.name));
++ else
++ get_connector_control_name(mixer, term, is_input, kctl->id.name,
++ sizeof(kctl->id.name));
+ kctl->private_free = snd_usb_mixer_elem_free;
+ snd_usb_mixer_add_control(&cval->head, kctl);
+ }
+@@ -3109,6 +3115,7 @@ static int snd_usb_mixer_controls(struct usb_mixer_interface *mixer)
+ if (map->id == state.chip->usb_id) {
+ state.map = map->map;
+ state.selector_map = map->selector_map;
++ mixer->connector_map = map->connector_map;
+ mixer->ignore_ctl_error |= map->ignore_ctl_error;
+ break;
+ }
+@@ -3190,10 +3197,32 @@ static int snd_usb_mixer_controls(struct usb_mixer_interface *mixer)
+ return 0;
+ }
+
++static int delegate_notify(struct usb_mixer_interface *mixer, int unitid,
++ u8 *control, u8 *channel)
++{
++ const struct usbmix_connector_map *map = mixer->connector_map;
++
++ if (!map)
++ return unitid;
++
++ for (; map->id; map++) {
++ if (map->id == unitid) {
++ if (control && map->control)
++ *control = map->control;
++ if (channel && map->channel)
++ *channel = map->channel;
++ return map->delegated_id;
++ }
++ }
++ return unitid;
++}
++
+ void snd_usb_mixer_notify_id(struct usb_mixer_interface *mixer, int unitid)
+ {
+ struct usb_mixer_elem_list *list;
+
++ unitid = delegate_notify(mixer, unitid, NULL, NULL);
++
+ for_each_mixer_elem(list, mixer, unitid) {
+ struct usb_mixer_elem_info *info =
+ mixer_elem_list_to_info(list);
+@@ -3263,6 +3292,8 @@ static void snd_usb_mixer_interrupt_v2(struct usb_mixer_interface *mixer,
+ return;
+ }
+
++ unitid = delegate_notify(mixer, unitid, &control, &channel);
++
+ for_each_mixer_elem(list, mixer, unitid)
+ count++;
+
+diff --git a/sound/usb/mixer.h b/sound/usb/mixer.h
+index 3d12af8bf191..15ec90e96d4d 100644
+--- a/sound/usb/mixer.h
++++ b/sound/usb/mixer.h
+@@ -4,6 +4,13 @@
+
+ #include <sound/info.h>
+
++struct usbmix_connector_map {
++ u8 id;
++ u8 delegated_id;
++ u8 control;
++ u8 channel;
++};
++
+ struct usb_mixer_interface {
+ struct snd_usb_audio *chip;
+ struct usb_host_interface *hostif;
+@@ -16,6 +23,9 @@ struct usb_mixer_interface {
+ /* the usb audio specification version this interface complies to */
+ int protocol;
+
++ /* optional connector delegation map */
++ const struct usbmix_connector_map *connector_map;
++
+ /* Sound Blaster remote control stuff */
+ const struct rc_config *rc_cfg;
+ u32 rc_code;
+diff --git a/sound/usb/mixer_maps.c b/sound/usb/mixer_maps.c
+index bf000e54461b..1689e4f242df 100644
+--- a/sound/usb/mixer_maps.c
++++ b/sound/usb/mixer_maps.c
+@@ -41,6 +41,7 @@ struct usbmix_ctl_map {
+ u32 id;
+ const struct usbmix_name_map *map;
+ const struct usbmix_selector_map *selector_map;
++ const struct usbmix_connector_map *connector_map;
+ int ignore_ctl_error;
+ };
+
+@@ -373,6 +374,33 @@ static const struct usbmix_name_map asus_rog_map[] = {
+ {}
+ };
+
++/* TRX40 mobos with Realtek ALC1220-VB */
++static const struct usbmix_name_map trx40_mobo_map[] = {
++ { 18, NULL }, /* OT, IEC958 - broken response, disabled */
++ { 19, NULL, 12 }, /* FU, Input Gain Pad - broken response, disabled */
++ { 16, "Speaker" }, /* OT */
++ { 22, "Speaker Playback" }, /* FU */
++ { 7, "Line" }, /* IT */
++ { 19, "Line Capture" }, /* FU */
++ { 17, "Front Headphone" }, /* OT */
++ { 23, "Front Headphone Playback" }, /* FU */
++ { 8, "Mic" }, /* IT */
++ { 20, "Mic Capture" }, /* FU */
++ { 9, "Front Mic" }, /* IT */
++ { 21, "Front Mic Capture" }, /* FU */
++ { 24, "IEC958 Playback" }, /* FU */
++ {}
++};
++
++static const struct usbmix_connector_map trx40_mobo_connector_map[] = {
++ { 10, 16 }, /* (Back) Speaker */
++ { 11, 17 }, /* Front Headphone */
++ { 13, 7 }, /* Line */
++ { 14, 8 }, /* Mic */
++ { 15, 9 }, /* Front Mic */
++ {}
++};
++
+ /*
+ * Control map entries
+ */
+@@ -494,7 +522,8 @@ static struct usbmix_ctl_map usbmix_ctl_maps[] = {
+ },
+ { /* Gigabyte TRX40 Aorus Pro WiFi */
+ .id = USB_ID(0x0414, 0xa002),
+- .map = asus_rog_map,
++ .map = trx40_mobo_map,
++ .connector_map = trx40_mobo_connector_map,
+ },
+ { /* ASUS ROG Zenith II */
+ .id = USB_ID(0x0b05, 0x1916),
+@@ -506,11 +535,13 @@ static struct usbmix_ctl_map usbmix_ctl_maps[] = {
+ },
+ { /* MSI TRX40 Creator */
+ .id = USB_ID(0x0db0, 0x0d64),
+- .map = asus_rog_map,
++ .map = trx40_mobo_map,
++ .connector_map = trx40_mobo_connector_map,
+ },
+ { /* MSI TRX40 */
+ .id = USB_ID(0x0db0, 0x543d),
+- .map = asus_rog_map,
++ .map = trx40_mobo_map,
++ .connector_map = trx40_mobo_connector_map,
+ },
+ { 0 } /* terminator */
+ };
+diff --git a/sound/usb/mixer_quirks.c b/sound/usb/mixer_quirks.c
+index 10c6971cf477..983e8a3ebfcf 100644
+--- a/sound/usb/mixer_quirks.c
++++ b/sound/usb/mixer_quirks.c
+@@ -1519,11 +1519,15 @@ static int snd_microii_spdif_default_get(struct snd_kcontrol *kcontrol,
+
+ /* use known values for that card: interface#1 altsetting#1 */
+ iface = usb_ifnum_to_if(chip->dev, 1);
+- if (!iface || iface->num_altsetting < 2)
+- return -EINVAL;
++ if (!iface || iface->num_altsetting < 2) {
++ err = -EINVAL;
++ goto end;
++ }
+ alts = &iface->altsetting[1];
+- if (get_iface_desc(alts)->bNumEndpoints < 1)
+- return -EINVAL;
++ if (get_iface_desc(alts)->bNumEndpoints < 1) {
++ err = -EINVAL;
++ goto end;
++ }
+ ep = get_endpoint(alts, 0)->bEndpointAddress;
+
+ err = snd_usb_ctl_msg(chip->dev,
+diff --git a/sound/usb/quirks-table.h b/sound/usb/quirks-table.h
+index 90d4f61cc230..774aeedde071 100644
+--- a/sound/usb/quirks-table.h
++++ b/sound/usb/quirks-table.h
+@@ -3400,4 +3400,18 @@ AU0828_DEVICE(0x2040, 0x7270, "Hauppauge", "HVR-950Q"),
+ }
+ },
+
++#define ALC1220_VB_DESKTOP(vend, prod) { \
++ USB_DEVICE(vend, prod), \
++ .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) { \
++ .vendor_name = "Realtek", \
++ .product_name = "ALC1220-VB-DT", \
++ .profile_name = "Realtek-ALC1220-VB-Desktop", \
++ .ifnum = QUIRK_NO_INTERFACE \
++ } \
++}
++ALC1220_VB_DESKTOP(0x0414, 0xa002), /* Gigabyte TRX40 Aorus Pro WiFi */
++ALC1220_VB_DESKTOP(0x0db0, 0x0d64), /* MSI TRX40 Creator */
++ALC1220_VB_DESKTOP(0x0db0, 0x543d), /* MSI TRX40 */
++#undef ALC1220_VB_DESKTOP
++
+ #undef USB_DEVICE_VENDOR_SPEC
+diff --git a/sound/usb/usx2y/usbusx2yaudio.c b/sound/usb/usx2y/usbusx2yaudio.c
+index 2b833054e3b0..bdb28e0229e6 100644
+--- a/sound/usb/usx2y/usbusx2yaudio.c
++++ b/sound/usb/usx2y/usbusx2yaudio.c
+@@ -695,6 +695,8 @@ static int usX2Y_rate_set(struct usX2Ydev *usX2Y, int rate)
+ us->submitted = 2*NOOF_SETRATE_URBS;
+ for (i = 0; i < NOOF_SETRATE_URBS; ++i) {
+ struct urb *urb = us->urb[i];
++ if (!urb)
++ continue;
+ if (urb->status) {
+ if (!err)
+ err = -ENODEV;
+diff --git a/tools/bpf/bpftool/btf_dumper.c b/tools/bpf/bpftool/btf_dumper.c
+index ff0cc3c17141..1e7c619228a2 100644
+--- a/tools/bpf/bpftool/btf_dumper.c
++++ b/tools/bpf/bpftool/btf_dumper.c
+@@ -26,7 +26,7 @@ static void btf_dumper_ptr(const void *data, json_writer_t *jw,
+ bool is_plain_text)
+ {
+ if (is_plain_text)
+- jsonw_printf(jw, "%p", data);
++ jsonw_printf(jw, "%p", *(void **)data);
+ else
+ jsonw_printf(jw, "%lu", *(unsigned long *)data);
+ }
+diff --git a/tools/testing/selftests/ftrace/settings b/tools/testing/selftests/ftrace/settings
+new file mode 100644
+index 000000000000..e7b9417537fb
+--- /dev/null
++++ b/tools/testing/selftests/ftrace/settings
+@@ -0,0 +1 @@
++timeout=0
+diff --git a/tools/testing/selftests/kmod/kmod.sh b/tools/testing/selftests/kmod/kmod.sh
+index 0a76314b4414..1f118916a83e 100755
+--- a/tools/testing/selftests/kmod/kmod.sh
++++ b/tools/testing/selftests/kmod/kmod.sh
+@@ -505,18 +505,23 @@ function test_num()
+ fi
+ }
+
+-function get_test_count()
++function get_test_data()
+ {
+ test_num $1
+- TEST_DATA=$(echo $ALL_TESTS | awk '{print $'$1'}')
++ local field_num=$(echo $1 | sed 's/^0*//')
++ echo $ALL_TESTS | awk '{print $'$field_num'}'
++}
++
++function get_test_count()
++{
++ TEST_DATA=$(get_test_data $1)
+ LAST_TWO=${TEST_DATA#*:*}
+ echo ${LAST_TWO%:*}
+ }
+
+ function get_test_enabled()
+ {
+- test_num $1
+- TEST_DATA=$(echo $ALL_TESTS | awk '{print $'$1'}')
++ TEST_DATA=$(get_test_data $1)
+ echo ${TEST_DATA#*:*:}
+ }
+
+diff --git a/tools/vm/Makefile b/tools/vm/Makefile
+index 20f6cf04377f..9860622cbb15 100644
+--- a/tools/vm/Makefile
++++ b/tools/vm/Makefile
+@@ -1,6 +1,8 @@
+ # SPDX-License-Identifier: GPL-2.0
+ # Makefile for vm tools
+ #
++include ../scripts/Makefile.include
++
+ TARGETS=page-types slabinfo page_owner_sort
+
+ LIB_DIR = ../lib/api
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index 4e499b78569b..aca15bd1cc4c 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -52,9 +52,9 @@
+ #include <linux/sort.h>
+ #include <linux/bsearch.h>
+ #include <linux/kthread.h>
++#include <linux/io.h>
+
+ #include <asm/processor.h>
+-#include <asm/io.h>
+ #include <asm/ioctl.h>
+ #include <linux/uaccess.h>
+ #include <asm/pgtable.h>
+@@ -1705,6 +1705,153 @@ struct page *gfn_to_page(struct kvm *kvm, gfn_t gfn)
+ }
+ EXPORT_SYMBOL_GPL(gfn_to_page);
+
++void kvm_release_pfn(kvm_pfn_t pfn, bool dirty, struct gfn_to_pfn_cache *cache)
++{
++ if (pfn == 0)
++ return;
++
++ if (cache)
++ cache->pfn = cache->gfn = 0;
++
++ if (dirty)
++ kvm_release_pfn_dirty(pfn);
++ else
++ kvm_release_pfn_clean(pfn);
++}
++
++static void kvm_cache_gfn_to_pfn(struct kvm_memory_slot *slot, gfn_t gfn,
++ struct gfn_to_pfn_cache *cache, u64 gen)
++{
++ kvm_release_pfn(cache->pfn, cache->dirty, cache);
++
++ cache->pfn = gfn_to_pfn_memslot(slot, gfn);
++ cache->gfn = gfn;
++ cache->dirty = false;
++ cache->generation = gen;
++}
++
++static int __kvm_map_gfn(struct kvm_memslots *slots, gfn_t gfn,
++ struct kvm_host_map *map,
++ struct gfn_to_pfn_cache *cache,
++ bool atomic)
++{
++ kvm_pfn_t pfn;
++ void *hva = NULL;
++ struct page *page = KVM_UNMAPPED_PAGE;
++ struct kvm_memory_slot *slot = __gfn_to_memslot(slots, gfn);
++ u64 gen = slots->generation;
++
++ if (!map)
++ return -EINVAL;
++
++ if (cache) {
++ if (!cache->pfn || cache->gfn != gfn ||
++ cache->generation != gen) {
++ if (atomic)
++ return -EAGAIN;
++ kvm_cache_gfn_to_pfn(slot, gfn, cache, gen);
++ }
++ pfn = cache->pfn;
++ } else {
++ if (atomic)
++ return -EAGAIN;
++ pfn = gfn_to_pfn_memslot(slot, gfn);
++ }
++ if (is_error_noslot_pfn(pfn))
++ return -EINVAL;
++
++ if (pfn_valid(pfn)) {
++ page = pfn_to_page(pfn);
++ if (atomic)
++ hva = kmap_atomic(page);
++ else
++ hva = kmap(page);
++#ifdef CONFIG_HAS_IOMEM
++ } else if (!atomic) {
++ hva = memremap(pfn_to_hpa(pfn), PAGE_SIZE, MEMREMAP_WB);
++ } else {
++ return -EINVAL;
++#endif
++ }
++
++ if (!hva)
++ return -EFAULT;
++
++ map->page = page;
++ map->hva = hva;
++ map->pfn = pfn;
++ map->gfn = gfn;
++
++ return 0;
++}
++
++int kvm_map_gfn(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map,
++ struct gfn_to_pfn_cache *cache, bool atomic)
++{
++ return __kvm_map_gfn(kvm_memslots(vcpu->kvm), gfn, map,
++ cache, atomic);
++}
++EXPORT_SYMBOL_GPL(kvm_map_gfn);
++
++int kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map)
++{
++ return __kvm_map_gfn(kvm_vcpu_memslots(vcpu), gfn, map,
++ NULL, false);
++}
++EXPORT_SYMBOL_GPL(kvm_vcpu_map);
++
++static void __kvm_unmap_gfn(struct kvm_memory_slot *memslot,
++ struct kvm_host_map *map,
++ struct gfn_to_pfn_cache *cache,
++ bool dirty, bool atomic)
++{
++ if (!map)
++ return;
++
++ if (!map->hva)
++ return;
++
++ if (map->page != KVM_UNMAPPED_PAGE) {
++ if (atomic)
++ kunmap_atomic(map->hva);
++ else
++ kunmap(map->page);
++ }
++#ifdef CONFIG_HAS_IOMEM
++ else if (!atomic)
++ memunmap(map->hva);
++ else
++ WARN_ONCE(1, "Unexpected unmapping in atomic context");
++#endif
++
++ if (dirty)
++ mark_page_dirty_in_slot(memslot, map->gfn);
++
++ if (cache)
++ cache->dirty |= dirty;
++ else
++ kvm_release_pfn(map->pfn, dirty, NULL);
++
++ map->hva = NULL;
++ map->page = NULL;
++}
++
++int kvm_unmap_gfn(struct kvm_vcpu *vcpu, struct kvm_host_map *map,
++ struct gfn_to_pfn_cache *cache, bool dirty, bool atomic)
++{
++ __kvm_unmap_gfn(gfn_to_memslot(vcpu->kvm, map->gfn), map,
++ cache, dirty, atomic);
++ return 0;
++}
++EXPORT_SYMBOL_GPL(kvm_unmap_gfn);
++
++void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map, bool dirty)
++{
++ __kvm_unmap_gfn(kvm_vcpu_gfn_to_memslot(vcpu, map->gfn), map, NULL,
++ dirty, false);
++}
++EXPORT_SYMBOL_GPL(kvm_vcpu_unmap);
++
+ struct page *kvm_vcpu_gfn_to_page(struct kvm_vcpu *vcpu, gfn_t gfn)
+ {
+ kvm_pfn_t pfn;
next reply other threads:[~2020-04-29 17:57 UTC|newest]
Thread overview: 332+ messages / expand[flat|nested] mbox.gz Atom feed top
2020-04-29 17:57 Mike Pagano [this message]
-- strict thread matches above, loose matches on Subject: below --
2024-04-18 3:06 [gentoo-commits] proj/linux-patches:4.19 commit in: / Alice Ferrazzi
2023-09-02 9:59 Mike Pagano
2023-08-30 15:00 Mike Pagano
2023-08-16 16:59 Mike Pagano
2023-08-11 11:58 Mike Pagano
2023-08-08 18:43 Mike Pagano
2023-07-24 20:30 Mike Pagano
2023-06-28 10:29 Mike Pagano
2023-06-21 14:55 Alice Ferrazzi
2023-06-14 10:21 Mike Pagano
2023-06-09 11:32 Mike Pagano
2023-05-30 12:57 Mike Pagano
2023-05-17 11:14 Mike Pagano
2023-05-17 11:01 Mike Pagano
2023-05-10 17:59 Mike Pagano
2023-04-26 9:35 Alice Ferrazzi
2023-04-20 11:17 Alice Ferrazzi
2023-04-05 11:41 Mike Pagano
2023-03-22 14:16 Alice Ferrazzi
2023-03-17 10:46 Mike Pagano
2023-03-13 11:35 Alice Ferrazzi
2023-03-11 16:01 Mike Pagano
2023-03-03 12:31 Mike Pagano
2023-02-25 11:41 Mike Pagano
2023-02-24 3:19 Alice Ferrazzi
2023-02-24 3:15 Alice Ferrazzi
2023-02-22 14:51 Alice Ferrazzi
2023-02-06 12:49 Mike Pagano
2023-01-24 7:16 Alice Ferrazzi
2023-01-18 11:11 Mike Pagano
2022-12-14 12:15 Mike Pagano
2022-12-08 12:14 Alice Ferrazzi
2022-11-25 17:04 Mike Pagano
2022-11-23 9:39 Alice Ferrazzi
2022-11-10 17:58 Mike Pagano
2022-11-03 15:11 Mike Pagano
2022-11-01 19:48 Mike Pagano
2022-10-26 11:41 Mike Pagano
2022-10-05 11:59 Mike Pagano
2022-09-28 9:18 Mike Pagano
2022-09-20 12:03 Mike Pagano
2022-09-15 11:09 Mike Pagano
2022-09-05 12:06 Mike Pagano
2022-08-25 10:35 Mike Pagano
2022-08-11 12:36 Mike Pagano
2022-07-29 15:28 Mike Pagano
2022-07-21 20:12 Mike Pagano
2022-07-12 16:01 Mike Pagano
2022-07-07 16:18 Mike Pagano
2022-07-02 16:07 Mike Pagano
2022-06-25 10:22 Mike Pagano
2022-06-16 11:40 Mike Pagano
2022-06-14 16:02 Mike Pagano
2022-06-06 11:05 Mike Pagano
2022-05-27 12:24 Mike Pagano
2022-05-25 11:55 Mike Pagano
2022-05-18 9:50 Mike Pagano
2022-05-15 22:12 Mike Pagano
2022-05-12 11:30 Mike Pagano
2022-05-01 17:04 Mike Pagano
2022-04-27 12:03 Mike Pagano
2022-04-20 12:09 Mike Pagano
2022-04-15 13:11 Mike Pagano
2022-04-12 19:24 Mike Pagano
2022-03-28 10:59 Mike Pagano
2022-03-23 11:57 Mike Pagano
2022-03-16 13:27 Mike Pagano
2022-03-11 10:56 Mike Pagano
2022-03-08 18:30 Mike Pagano
2022-03-02 13:08 Mike Pagano
2022-02-26 21:14 Mike Pagano
2022-02-23 12:39 Mike Pagano
2022-02-16 12:47 Mike Pagano
2022-02-11 12:53 Mike Pagano
2022-02-11 12:46 Mike Pagano
2022-02-11 12:45 Mike Pagano
2022-02-11 12:37 Mike Pagano
2022-02-08 17:56 Mike Pagano
2022-01-29 17:45 Mike Pagano
2022-01-27 11:39 Mike Pagano
2022-01-11 13:14 Mike Pagano
2022-01-05 12:55 Mike Pagano
2021-12-29 13:11 Mike Pagano
2021-12-22 14:07 Mike Pagano
2021-12-14 10:36 Mike Pagano
2021-12-08 12:55 Mike Pagano
2021-12-01 12:51 Mike Pagano
2021-11-26 11:59 Mike Pagano
2021-11-12 14:16 Mike Pagano
2021-11-06 13:26 Mike Pagano
2021-11-02 19:32 Mike Pagano
2021-10-27 11:59 Mike Pagano
2021-10-20 13:26 Mike Pagano
2021-10-17 13:12 Mike Pagano
2021-10-13 15:00 Alice Ferrazzi
2021-10-09 21:33 Mike Pagano
2021-10-06 14:06 Mike Pagano
2021-09-26 14:13 Mike Pagano
2021-09-22 11:40 Mike Pagano
2021-09-20 22:05 Mike Pagano
2021-09-03 11:22 Mike Pagano
2021-09-03 10:08 Alice Ferrazzi
2021-08-26 14:06 Mike Pagano
2021-08-25 22:45 Mike Pagano
2021-08-25 20:41 Mike Pagano
2021-08-15 20:07 Mike Pagano
2021-08-12 11:51 Mike Pagano
2021-08-08 13:39 Mike Pagano
2021-08-04 11:54 Mike Pagano
2021-08-03 12:26 Mike Pagano
2021-07-31 10:34 Alice Ferrazzi
2021-07-28 12:37 Mike Pagano
2021-07-20 15:35 Alice Ferrazzi
2021-07-13 12:38 Mike Pagano
2021-07-11 14:45 Mike Pagano
2021-06-30 14:25 Mike Pagano
2021-06-16 12:22 Mike Pagano
2021-06-10 11:46 Mike Pagano
2021-06-03 10:32 Alice Ferrazzi
2021-05-26 12:05 Mike Pagano
2021-05-22 10:03 Mike Pagano
2021-05-07 11:40 Alice Ferrazzi
2021-04-30 19:02 Mike Pagano
2021-04-28 18:31 Mike Pagano
2021-04-28 11:44 Alice Ferrazzi
2021-04-16 11:15 Alice Ferrazzi
2021-04-14 11:22 Alice Ferrazzi
2021-04-10 13:24 Mike Pagano
2021-04-07 12:21 Mike Pagano
2021-03-30 14:17 Mike Pagano
2021-03-24 12:08 Mike Pagano
2021-03-22 15:50 Mike Pagano
2021-03-20 14:26 Mike Pagano
2021-03-17 16:21 Mike Pagano
2021-03-11 14:05 Mike Pagano
2021-03-07 15:15 Mike Pagano
2021-03-04 12:08 Mike Pagano
2021-02-23 14:31 Alice Ferrazzi
2021-02-13 15:28 Alice Ferrazzi
2021-02-10 10:03 Alice Ferrazzi
2021-02-07 14:40 Alice Ferrazzi
2021-02-03 23:43 Mike Pagano
2021-01-30 13:34 Alice Ferrazzi
2021-01-27 11:15 Mike Pagano
2021-01-23 16:36 Mike Pagano
2021-01-19 20:34 Mike Pagano
2021-01-17 16:20 Mike Pagano
2021-01-12 20:06 Mike Pagano
2021-01-09 12:57 Mike Pagano
2021-01-06 14:15 Mike Pagano
2020-12-30 12:52 Mike Pagano
2020-12-11 12:56 Mike Pagano
2020-12-08 12:06 Mike Pagano
2020-12-02 12:49 Mike Pagano
2020-11-24 14:40 Mike Pagano
2020-11-22 19:26 Mike Pagano
2020-11-18 19:56 Mike Pagano
2020-11-11 15:43 Mike Pagano
2020-11-10 13:56 Mike Pagano
2020-11-05 12:35 Mike Pagano
2020-11-01 20:29 Mike Pagano
2020-10-29 11:18 Mike Pagano
2020-10-17 10:17 Mike Pagano
2020-10-14 20:36 Mike Pagano
2020-10-07 12:50 Mike Pagano
2020-10-01 12:45 Mike Pagano
2020-09-26 22:07 Mike Pagano
2020-09-26 22:00 Mike Pagano
2020-09-24 15:58 Mike Pagano
2020-09-23 12:07 Mike Pagano
2020-09-17 15:01 Mike Pagano
2020-09-17 14:55 Mike Pagano
2020-09-12 17:59 Mike Pagano
2020-09-09 17:59 Mike Pagano
2020-09-03 11:37 Mike Pagano
2020-08-26 11:15 Mike Pagano
2020-08-21 10:49 Alice Ferrazzi
2020-08-19 9:36 Alice Ferrazzi
2020-08-12 23:36 Alice Ferrazzi
2020-08-07 19:16 Mike Pagano
2020-08-05 14:51 Thomas Deutschmann
2020-07-31 18:00 Mike Pagano
2020-07-29 12:33 Mike Pagano
2020-07-22 12:42 Mike Pagano
2020-07-16 11:17 Mike Pagano
2020-07-09 12:12 Mike Pagano
2020-07-01 12:14 Mike Pagano
2020-06-29 17:41 Mike Pagano
2020-06-25 15:07 Mike Pagano
2020-06-22 14:47 Mike Pagano
2020-06-10 21:27 Mike Pagano
2020-06-07 21:52 Mike Pagano
2020-06-03 11:41 Mike Pagano
2020-05-27 16:25 Mike Pagano
2020-05-20 11:30 Mike Pagano
2020-05-20 11:27 Mike Pagano
2020-05-14 11:30 Mike Pagano
2020-05-13 12:33 Mike Pagano
2020-05-11 22:50 Mike Pagano
2020-05-09 22:20 Mike Pagano
2020-05-06 11:46 Mike Pagano
2020-05-02 19:24 Mike Pagano
2020-04-23 11:44 Mike Pagano
2020-04-21 11:15 Mike Pagano
2020-04-17 11:45 Mike Pagano
2020-04-15 17:09 Mike Pagano
2020-04-13 11:34 Mike Pagano
2020-04-02 15:24 Mike Pagano
2020-03-25 14:58 Mike Pagano
2020-03-20 11:57 Mike Pagano
2020-03-18 14:21 Mike Pagano
2020-03-16 12:23 Mike Pagano
2020-03-11 17:20 Mike Pagano
2020-03-05 16:23 Mike Pagano
2020-02-28 16:38 Mike Pagano
2020-02-24 11:06 Mike Pagano
2020-02-19 23:45 Mike Pagano
2020-02-14 23:52 Mike Pagano
2020-02-11 16:20 Mike Pagano
2020-02-05 17:05 Mike Pagano
2020-02-01 10:37 Mike Pagano
2020-02-01 10:30 Mike Pagano
2020-01-29 16:16 Mike Pagano
2020-01-27 14:25 Mike Pagano
2020-01-23 11:07 Mike Pagano
2020-01-17 19:56 Mike Pagano
2020-01-14 22:30 Mike Pagano
2020-01-12 15:00 Mike Pagano
2020-01-09 11:15 Mike Pagano
2020-01-04 19:50 Mike Pagano
2019-12-31 17:46 Mike Pagano
2019-12-21 15:03 Mike Pagano
2019-12-17 21:56 Mike Pagano
2019-12-13 12:35 Mike Pagano
2019-12-05 12:03 Alice Ferrazzi
2019-12-01 14:06 Thomas Deutschmann
2019-11-24 15:44 Mike Pagano
2019-11-20 19:36 Mike Pagano
2019-11-12 21:00 Mike Pagano
2019-11-10 16:20 Mike Pagano
2019-11-06 14:26 Mike Pagano
2019-10-29 12:04 Mike Pagano
2019-10-17 22:27 Mike Pagano
2019-10-11 17:04 Mike Pagano
2019-10-07 17:42 Mike Pagano
2019-10-05 11:42 Mike Pagano
2019-10-01 10:10 Mike Pagano
2019-09-21 17:11 Mike Pagano
2019-09-19 12:34 Mike Pagano
2019-09-19 10:04 Mike Pagano
2019-09-16 12:26 Mike Pagano
2019-09-10 11:12 Mike Pagano
2019-09-06 17:25 Mike Pagano
2019-08-29 14:15 Mike Pagano
2019-08-25 17:37 Mike Pagano
2019-08-23 22:18 Mike Pagano
2019-08-16 12:26 Mike Pagano
2019-08-16 12:13 Mike Pagano
2019-08-09 17:45 Mike Pagano
2019-08-06 19:19 Mike Pagano
2019-08-04 16:15 Mike Pagano
2019-07-31 15:09 Mike Pagano
2019-07-31 10:22 Mike Pagano
2019-07-28 16:27 Mike Pagano
2019-07-26 11:35 Mike Pagano
2019-07-21 14:41 Mike Pagano
2019-07-14 15:44 Mike Pagano
2019-07-10 11:05 Mike Pagano
2019-07-03 11:34 Mike Pagano
2019-06-25 10:53 Mike Pagano
2019-06-22 19:06 Mike Pagano
2019-06-19 17:17 Thomas Deutschmann
2019-06-17 19:22 Mike Pagano
2019-06-15 15:07 Mike Pagano
2019-06-11 12:42 Mike Pagano
2019-06-10 19:43 Mike Pagano
2019-06-09 16:19 Mike Pagano
2019-06-04 11:11 Mike Pagano
2019-05-31 15:02 Mike Pagano
2019-05-26 17:10 Mike Pagano
2019-05-22 11:02 Mike Pagano
2019-05-16 23:03 Mike Pagano
2019-05-14 21:00 Mike Pagano
2019-05-10 19:40 Mike Pagano
2019-05-08 10:06 Mike Pagano
2019-05-05 13:42 Mike Pagano
2019-05-04 18:28 Mike Pagano
2019-05-02 10:13 Mike Pagano
2019-04-27 17:36 Mike Pagano
2019-04-20 11:09 Mike Pagano
2019-04-19 19:51 Mike Pagano
2019-04-05 21:46 Mike Pagano
2019-04-03 10:59 Mike Pagano
2019-03-27 10:22 Mike Pagano
2019-03-23 20:23 Mike Pagano
2019-03-19 16:58 Mike Pagano
2019-03-13 22:08 Mike Pagano
2019-03-10 14:15 Mike Pagano
2019-03-06 19:06 Mike Pagano
2019-03-05 18:04 Mike Pagano
2019-02-27 11:23 Mike Pagano
2019-02-23 11:35 Mike Pagano
2019-02-23 0:46 Mike Pagano
2019-02-20 11:19 Mike Pagano
2019-02-16 0:42 Mike Pagano
2019-02-15 12:39 Mike Pagano
2019-02-12 20:53 Mike Pagano
2019-02-06 17:08 Mike Pagano
2019-01-31 11:28 Mike Pagano
2019-01-26 15:09 Mike Pagano
2019-01-22 23:06 Mike Pagano
2019-01-16 23:32 Mike Pagano
2019-01-13 19:29 Mike Pagano
2019-01-09 17:54 Mike Pagano
2018-12-29 18:55 Mike Pagano
2018-12-29 1:08 Mike Pagano
2018-12-21 14:58 Mike Pagano
2018-12-19 19:09 Mike Pagano
2018-12-17 11:42 Mike Pagano
2018-12-13 11:40 Mike Pagano
2018-12-08 13:17 Mike Pagano
2018-12-08 13:17 Mike Pagano
2018-12-05 20:16 Mike Pagano
2018-12-01 15:08 Mike Pagano
2018-11-27 16:16 Mike Pagano
2018-11-23 12:42 Mike Pagano
2018-11-21 12:30 Mike Pagano
2018-11-14 0:47 Mike Pagano
2018-11-14 0:47 Mike Pagano
2018-11-13 20:44 Mike Pagano
2018-11-04 16:22 Alice Ferrazzi
Reply instructions:
You may reply publicly to this message via plain-text email
using any one of the following methods:
* Save the following mbox file, import it into your mail client,
and reply-to-all from there: mbox
Avoid top-posting and favor interleaved quoting:
https://en.wikipedia.org/wiki/Posting_style#Interleaved_style
* Reply using the --to, --cc, and --in-reply-to
switches of git-send-email(1):
git send-email \
--in-reply-to=1588183052.27d7c2cb01376b49c16c731e901f2ce5bf8952ea.mpagano@gentoo \
--to=mpagano@gentoo.org \
--cc=gentoo-commits@lists.gentoo.org \
--cc=gentoo-dev@lists.gentoo.org \
/path/to/YOUR_REPLY
https://kernel.org/pub/software/scm/git/docs/git-send-email.html
* If your mail client supports setting the In-Reply-To header
via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line
before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox