From: "Mike Pagano" <mpagano@gentoo.org>
To: gentoo-commits@lists.gentoo.org
Content-Transfer-Encoding: 8bit
Content-type: text/plain; charset=UTF-8
Reply-To: gentoo-dev@lists.gentoo.org, "Mike Pagano" <mpagano@gentoo.org>
Message-ID: <1561230996.6937a558748348df369ee6cf6dbe2918e736ccd4.mpagano@gentoo>
Subject: [gentoo-commits] proj/linux-patches:5.1 commit in: /
X-VCS-Repository: proj/linux-patches
X-VCS-Files: 0000_README 1012_linux-5.1.13.patch 1013_linux-5.1.14.patch
X-VCS-Directories: /
X-VCS-Committer: mpagano
X-VCS-Committer-Name: Mike Pagano
X-VCS-Revision: 6937a558748348df369ee6cf6dbe2918e736ccd4
X-VCS-Branch: 5.1
Date: Sat, 22 Jun 2019 19:16:57 +0000 (UTC)

commit:     6937a558748348df369ee6cf6dbe2918e736ccd4
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Jun 22 19:16:36 2019 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Jun 22 19:16:36 2019 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=6937a558

Linux patches 5.1.13 and 5.1.14

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    8 +
 1012_linux-5.1.13.patch | 3413 +++++++++++++++++++++++++++++++++++++++++++++++
 1013_linux-5.1.14.patch |   27 +
 3 files changed, 3448 insertions(+)
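
The largest self-contained change in the 5.1.13 patch below is the arm64
syscall rework: every syscall table entry, including the not-implemented
handler, becomes a real wrapper taking const struct pt_regs *, so the
function-pointer casts in sys.c and sys32.c can be dropped. A minimal
standalone C sketch of that dispatch pattern follows; the names and values
are hypothetical stand-ins, and it uses the GNU C range initializer the
kernel relies on ([0 ... N-1] =), so build it with gcc or clang:

#include <stdio.h>

/* Stand-in for the saved register file; mirrors how arm64's
 * syscall_fn_t takes const struct pt_regs * after this patch. */
struct pt_regs { long regs[8]; };
typedef long (*syscall_fn_t)(const struct pt_regs *regs);

static long sketch_sys_getpid(const struct pt_regs *regs)
{
	(void)regs;
	return 42;		/* stand-in result */
}

/* Unimplemented slots get a wrapper of the same type,
 * so the table needs no casts at all. */
static long sketch_sys_ni_syscall(const struct pt_regs *regs)
{
	(void)regs;
	return -38;		/* -ENOSYS */
}

#define NR_SYSCALLS 4
static const syscall_fn_t table[NR_SYSCALLS] = {
	[0 ... NR_SYSCALLS - 1] = sketch_sys_ni_syscall, /* GNU range init */
	[1] = sketch_sys_getpid,
};

int main(void)
{
	struct pt_regs regs = { { 0 } };

	/* nr 1 hits the real handler, nr 2 falls through to -ENOSYS. */
	printf("nr 1 -> %ld, nr 2 -> %ld\n", table[1](&regs), table[2](&regs));
	return 0;
}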

diff --git a/0000_README b/0000_README
index 540b4c1..3443ce1 100644
--- a/0000_README
+++ b/0000_README
@@ -91,6 +91,14 @@ Patch:  1011_linux-5.1.12.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.1.12
 
+Patch:  1012_linux-5.1.13.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.1.13
+
+Patch:  1013_linux-5.1.14.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.1.14
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1012_linux-5.1.13.patch b/1012_linux-5.1.13.patch
new file mode 100644
index 0000000..069e6f6
--- /dev/null
+++ b/1012_linux-5.1.13.patch
@@ -0,0 +1,3413 @@
+diff --git a/Makefile b/Makefile
+index 6d7bfe9fcd7d..dfcd51a35824 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 1
+-SUBLEVEL = 12
++SUBLEVEL = 13
+ EXTRAVERSION =
+ NAME = Shy Crocodile
+ 
+diff --git a/arch/arm64/include/asm/syscall.h b/arch/arm64/include/asm/syscall.h
+index a179df3674a1..6206ab9bfcfc 100644
+--- a/arch/arm64/include/asm/syscall.h
++++ b/arch/arm64/include/asm/syscall.h
+@@ -20,7 +20,7 @@
+ #include <linux/compat.h>
+ #include <linux/err.h>
+ 
+-typedef long (*syscall_fn_t)(struct pt_regs *regs);
++typedef long (*syscall_fn_t)(const struct pt_regs *regs);
+ 
+ extern const syscall_fn_t sys_call_table[];
+ 
+diff --git a/arch/arm64/include/asm/syscall_wrapper.h b/arch/arm64/include/asm/syscall_wrapper.h
+index a4477e515b79..507d0ee6bc69 100644
+--- a/arch/arm64/include/asm/syscall_wrapper.h
++++ b/arch/arm64/include/asm/syscall_wrapper.h
+@@ -30,10 +30,10 @@
+ 	}										\
+ 	static inline long __do_compat_sys##name(__MAP(x,__SC_DECL,__VA_ARGS__))
+ 
+-#define COMPAT_SYSCALL_DEFINE0(sname)					\
+-	asmlinkage long __arm64_compat_sys_##sname(void);		\
+-	ALLOW_ERROR_INJECTION(__arm64_compat_sys_##sname, ERRNO);	\
+-	asmlinkage long __arm64_compat_sys_##sname(void)
++#define COMPAT_SYSCALL_DEFINE0(sname)							\
++	asmlinkage long __arm64_compat_sys_##sname(const struct pt_regs *__unused);	\
++	ALLOW_ERROR_INJECTION(__arm64_compat_sys_##sname, ERRNO);			\
++	asmlinkage long __arm64_compat_sys_##sname(const struct pt_regs *__unused)
+ 
+ #define COND_SYSCALL_COMPAT(name) \
+ 	cond_syscall(__arm64_compat_sys_##name);
+@@ -62,11 +62,11 @@
+ 	static inline long __do_sys##name(__MAP(x,__SC_DECL,__VA_ARGS__))
+ 
+ #ifndef SYSCALL_DEFINE0
+-#define SYSCALL_DEFINE0(sname)					\
+-	SYSCALL_METADATA(_##sname, 0);				\
+-	asmlinkage long __arm64_sys_##sname(void);		\
+-	ALLOW_ERROR_INJECTION(__arm64_sys_##sname, ERRNO);	\
+-	asmlinkage long __arm64_sys_##sname(void)
++#define SYSCALL_DEFINE0(sname)							\
++	SYSCALL_METADATA(_##sname, 0);						\
++	asmlinkage long __arm64_sys_##sname(const struct pt_regs *__unused);	\
++	ALLOW_ERROR_INJECTION(__arm64_sys_##sname, ERRNO);			\
++	asmlinkage long __arm64_sys_##sname(const struct pt_regs *__unused)
+ #endif
+ 
+ #ifndef COND_SYSCALL
+diff --git a/arch/arm64/kernel/sys.c b/arch/arm64/kernel/sys.c
+index 162a95ed0881..fe20c461582a 100644
+--- a/arch/arm64/kernel/sys.c
++++ b/arch/arm64/kernel/sys.c
+@@ -47,22 +47,26 @@ SYSCALL_DEFINE1(arm64_personality, unsigned int, personality)
+ 	return ksys_personality(personality);
+ }
+ 
++asmlinkage long sys_ni_syscall(void);
++
++asmlinkage long __arm64_sys_ni_syscall(const struct pt_regs *__unused)
++{
++	return sys_ni_syscall();
++}
++
+ /*
+  * Wrappers to pass the pt_regs argument.
+  */
+ #define __arm64_sys_personality		__arm64_sys_arm64_personality
+ 
+-asmlinkage long sys_ni_syscall(const struct pt_regs *);
+-#define __arm64_sys_ni_syscall	sys_ni_syscall
+-
+ #undef __SYSCALL
+ #define __SYSCALL(nr, sym)	asmlinkage long __arm64_##sym(const struct pt_regs *);
+ #include <asm/unistd.h>
+ 
+ #undef __SYSCALL
+-#define __SYSCALL(nr, sym)	[nr] = (syscall_fn_t)__arm64_##sym,
++#define __SYSCALL(nr, sym)	[nr] = __arm64_##sym,
+ 
+ const syscall_fn_t sys_call_table[__NR_syscalls] = {
+-	[0 ... __NR_syscalls - 1] = (syscall_fn_t)sys_ni_syscall,
++	[0 ... __NR_syscalls - 1] = __arm64_sys_ni_syscall,
+ #include <asm/unistd.h>
+ };
+diff --git a/arch/arm64/kernel/sys32.c b/arch/arm64/kernel/sys32.c
+index 0f8bcb7de700..3c80a40c1c9d 100644
+--- a/arch/arm64/kernel/sys32.c
++++ b/arch/arm64/kernel/sys32.c
+@@ -133,17 +133,14 @@ COMPAT_SYSCALL_DEFINE6(aarch32_fallocate, int, fd, int, mode,
+ 	return ksys_fallocate(fd, mode, arg_u64(offset), arg_u64(len));
+ }
+ 
+-asmlinkage long sys_ni_syscall(const struct pt_regs *);
+-#define __arm64_sys_ni_syscall	sys_ni_syscall
+-
+ #undef __SYSCALL
+ #define __SYSCALL(nr, sym)	asmlinkage long __arm64_##sym(const struct pt_regs *);
+ #include <asm/unistd32.h>
+ 
+ #undef __SYSCALL
+-#define __SYSCALL(nr, sym)	[nr] = (syscall_fn_t)__arm64_##sym,
++#define __SYSCALL(nr, sym)	[nr] = __arm64_##sym,
+ 
+ const syscall_fn_t compat_sys_call_table[__NR_compat_syscalls] = {
+-	[0 ... __NR_compat_syscalls - 1] = (syscall_fn_t)sys_ni_syscall,
++	[0 ... __NR_compat_syscalls - 1] = __arm64_sys_ni_syscall,
+ #include <asm/unistd32.h>
+ };
+diff --git a/arch/ia64/mm/numa.c b/arch/ia64/mm/numa.c
+index a03803506b0c..5e1015eb6d0d 100644
+--- a/arch/ia64/mm/numa.c
++++ b/arch/ia64/mm/numa.c
+@@ -55,6 +55,7 @@ paddr_to_nid(unsigned long paddr)
+ 
+ 	return (i < num_node_memblks) ? node_memblk[i].nid : (num_node_memblks ? -1 : 0);
+ }
++EXPORT_SYMBOL(paddr_to_nid);
+ 
+ #if defined(CONFIG_SPARSEMEM) && defined(CONFIG_NUMA)
+ /*
+diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
+index e6b5bb012ccb..1f9eb75ce95a 100644
+--- a/arch/powerpc/include/asm/kvm_host.h
++++ b/arch/powerpc/include/asm/kvm_host.h
+@@ -305,6 +305,7 @@ struct kvm_arch {
+ #ifdef CONFIG_PPC_BOOK3S_64
+ 	struct list_head spapr_tce_tables;
+ 	struct list_head rtas_tokens;
++	struct mutex rtas_token_lock;
+ 	DECLARE_BITMAP(enabled_hcalls, MAX_HCALL_OPCODE/4 + 1);
+ #endif
+ #ifdef CONFIG_KVM_MPIC
+@@ -317,6 +318,7 @@ struct kvm_arch {
+ #endif
+ 	struct kvmppc_ops *kvm_ops;
+ #ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
++	struct mutex mmu_setup_lock;	/* nests inside vcpu mutexes */
+ 	u64 l1_ptcr;
+ 	int max_nested_lpid;
+ 	struct kvm_nested_guest *nested_guests[KVM_MAX_NESTED_GUESTS];
+diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
+index 10c5579d20ce..020304403bae 100644
+--- a/arch/powerpc/kvm/book3s.c
++++ b/arch/powerpc/kvm/book3s.c
+@@ -878,6 +878,7 @@ int kvmppc_core_init_vm(struct kvm *kvm)
+ #ifdef CONFIG_PPC64
+ 	INIT_LIST_HEAD_RCU(&kvm->arch.spapr_tce_tables);
+ 	INIT_LIST_HEAD(&kvm->arch.rtas_tokens);
++	mutex_init(&kvm->arch.rtas_token_lock);
+ #endif
+ 
+ 	return kvm->arch.kvm_ops->init_vm(kvm);
+diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
+index be7bc070eae5..c1ced22455f9 100644
+--- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
++++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
+@@ -63,7 +63,7 @@ struct kvm_resize_hpt {
+ 	struct work_struct work;
+ 	u32 order;
+ 
+-	/* These fields protected by kvm->lock */
++	/* These fields protected by kvm->arch.mmu_setup_lock */
+ 
+ 	/* Possible values and their usage:
+ 	 *  <0     an error occurred during allocation,
+@@ -73,7 +73,7 @@ struct kvm_resize_hpt {
+ 	int error;
+ 
+ 	/* Private to the work thread, until error != -EBUSY,
+-	 * then protected by kvm->lock.
++	 * then protected by kvm->arch.mmu_setup_lock.
+ 	 */
+ 	struct kvm_hpt_info hpt;
+ };
+@@ -139,7 +139,7 @@ long kvmppc_alloc_reset_hpt(struct kvm *kvm, int order)
+ 	long err = -EBUSY;
+ 	struct kvm_hpt_info info;
+ 
+-	mutex_lock(&kvm->lock);
++	mutex_lock(&kvm->arch.mmu_setup_lock);
+ 	if (kvm->arch.mmu_ready) {
+ 		kvm->arch.mmu_ready = 0;
+ 		/* order mmu_ready vs. vcpus_running */
+@@ -183,7 +183,7 @@ out:
+ 		/* Ensure that each vcpu will flush its TLB on next entry. */
+ 		cpumask_setall(&kvm->arch.need_tlb_flush);
+ 
+-	mutex_unlock(&kvm->lock);
++	mutex_unlock(&kvm->arch.mmu_setup_lock);
+ 	return err;
+ }
+ 
+@@ -1447,7 +1447,7 @@ static void resize_hpt_pivot(struct kvm_resize_hpt *resize)
+ 
+ static void resize_hpt_release(struct kvm *kvm, struct kvm_resize_hpt *resize)
+ {
+-	if (WARN_ON(!mutex_is_locked(&kvm->lock)))
++	if (WARN_ON(!mutex_is_locked(&kvm->arch.mmu_setup_lock)))
+ 		return;
+ 
+ 	if (!resize)
+@@ -1474,14 +1474,14 @@ static void resize_hpt_prepare_work(struct work_struct *work)
+ 	if (WARN_ON(resize->error != -EBUSY))
+ 		return;
+ 
+-	mutex_lock(&kvm->lock);
++	mutex_lock(&kvm->arch.mmu_setup_lock);
+ 
+ 	/* Request is still current? */
+ 	if (kvm->arch.resize_hpt == resize) {
+ 		/* We may request large allocations here:
+-		 * do not sleep with kvm->lock held for a while.
++		 * do not sleep with kvm->arch.mmu_setup_lock held for a while.
+ 		 */
+-		mutex_unlock(&kvm->lock);
++		mutex_unlock(&kvm->arch.mmu_setup_lock);
+ 
+ 		resize_hpt_debug(resize, "resize_hpt_prepare_work(): order = %d\n",
+ 				 resize->order);
+@@ -1494,9 +1494,9 @@ static void resize_hpt_prepare_work(struct work_struct *work)
+ 		if (WARN_ON(err == -EBUSY))
+ 			err = -EINPROGRESS;
+ 
+-		mutex_lock(&kvm->lock);
++		mutex_lock(&kvm->arch.mmu_setup_lock);
+ 		/* It is possible that kvm->arch.resize_hpt != resize
+-		 * after we grab kvm->lock again.
++		 * after we grab kvm->arch.mmu_setup_lock again.
+ 		 */
+ 	}
+ 
+@@ -1505,7 +1505,7 @@ static void resize_hpt_prepare_work(struct work_struct *work)
+ 	if (kvm->arch.resize_hpt != resize)
+ 		resize_hpt_release(kvm, resize);
+ 
+-	mutex_unlock(&kvm->lock);
++	mutex_unlock(&kvm->arch.mmu_setup_lock);
+ }
+ 
+ long kvm_vm_ioctl_resize_hpt_prepare(struct kvm *kvm,
+@@ -1522,7 +1522,7 @@ long kvm_vm_ioctl_resize_hpt_prepare(struct kvm *kvm,
+ 	if (shift && ((shift < 18) || (shift > 46)))
+ 		return -EINVAL;
+ 
+-	mutex_lock(&kvm->lock);
++	mutex_lock(&kvm->arch.mmu_setup_lock);
+ 
+ 	resize = kvm->arch.resize_hpt;
+ 
+@@ -1565,7 +1565,7 @@ long kvm_vm_ioctl_resize_hpt_prepare(struct kvm *kvm,
+ 	ret = 100; /* estimated time in ms */
+ 
+ out:
+-	mutex_unlock(&kvm->lock);
++	mutex_unlock(&kvm->arch.mmu_setup_lock);
+ 	return ret;
+ }
+ 
+@@ -1588,7 +1588,7 @@ long kvm_vm_ioctl_resize_hpt_commit(struct kvm *kvm,
+ 	if (shift && ((shift < 18) || (shift > 46)))
+ 		return -EINVAL;
+ 
+-	mutex_lock(&kvm->lock);
++	mutex_lock(&kvm->arch.mmu_setup_lock);
+ 
+ 	resize = kvm->arch.resize_hpt;
+ 
+@@ -1625,7 +1625,7 @@ out:
+ 	smp_mb();
+ out_no_hpt:
+ 	resize_hpt_release(kvm, resize);
+-	mutex_unlock(&kvm->lock);
++	mutex_unlock(&kvm->arch.mmu_setup_lock);
+ 	return ret;
+ }
+ 
+@@ -1868,7 +1868,7 @@ static ssize_t kvm_htab_write(struct file *file, const char __user *buf,
+ 		return -EINVAL;
+ 
+ 	/* lock out vcpus from running while we're doing this */
+-	mutex_lock(&kvm->lock);
++	mutex_lock(&kvm->arch.mmu_setup_lock);
+ 	mmu_ready = kvm->arch.mmu_ready;
+ 	if (mmu_ready) {
+ 		kvm->arch.mmu_ready = 0;	/* temporarily */
+@@ -1876,7 +1876,7 @@ static ssize_t kvm_htab_write(struct file *file, const char __user *buf,
+ 		smp_mb();
+ 		if (atomic_read(&kvm->arch.vcpus_running)) {
+ 			kvm->arch.mmu_ready = 1;
+-			mutex_unlock(&kvm->lock);
++			mutex_unlock(&kvm->arch.mmu_setup_lock);
+ 			return -EBUSY;
+ 		}
+ 	}
+@@ -1963,7 +1963,7 @@ static ssize_t kvm_htab_write(struct file *file, const char __user *buf,
+ 	/* Order HPTE updates vs. mmu_ready */
+ 	smp_wmb();
+ 	kvm->arch.mmu_ready = mmu_ready;
+-	mutex_unlock(&kvm->lock);
++	mutex_unlock(&kvm->arch.mmu_setup_lock);
+ 
+ 	if (err)
+ 		return err;
+diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
+index bd68b3e59de5..6d4f0f72231f 100644
+--- a/arch/powerpc/kvm/book3s_hv.c
++++ b/arch/powerpc/kvm/book3s_hv.c
+@@ -445,12 +445,7 @@ static void kvmppc_dump_regs(struct kvm_vcpu *vcpu)
+ 
+ static struct kvm_vcpu *kvmppc_find_vcpu(struct kvm *kvm, int id)
+ {
+-	struct kvm_vcpu *ret;
+-
+-	mutex_lock(&kvm->lock);
+-	ret = kvm_get_vcpu_by_id(kvm, id);
+-	mutex_unlock(&kvm->lock);
+-	return ret;
++	return kvm_get_vcpu_by_id(kvm, id);
+ }
+ 
+ static void init_vpa(struct kvm_vcpu *vcpu, struct lppaca *vpa)
+@@ -1502,7 +1497,6 @@ static void kvmppc_set_lpcr(struct kvm_vcpu *vcpu, u64 new_lpcr,
+ 	struct kvmppc_vcore *vc = vcpu->arch.vcore;
+ 	u64 mask;
+ 
+-	mutex_lock(&kvm->lock);
+ 	spin_lock(&vc->lock);
+ 	/*
+ 	 * If ILE (interrupt little-endian) has changed, update the
+@@ -1542,7 +1536,6 @@ static void kvmppc_set_lpcr(struct kvm_vcpu *vcpu, u64 new_lpcr,
+ 		mask &= 0xFFFFFFFF;
+ 	vc->lpcr = (vc->lpcr & ~mask) | (new_lpcr & mask);
+ 	spin_unlock(&vc->lock);
+-	mutex_unlock(&kvm->lock);
+ }
+ 
+ static int kvmppc_get_one_reg_hv(struct kvm_vcpu *vcpu, u64 id,
+@@ -2257,11 +2250,17 @@ static struct kvm_vcpu *kvmppc_core_vcpu_create_hv(struct kvm *kvm,
+ 			pr_devel("KVM: collision on id %u", id);
+ 			vcore = NULL;
+ 		} else if (!vcore) {
++			/*
++			 * Take mmu_setup_lock for mutual exclusion
++			 * with kvmppc_update_lpcr().
++			 */
+ 			err = -ENOMEM;
+ 			vcore = kvmppc_vcore_create(kvm,
+ 					id & ~(kvm->arch.smt_mode - 1));
++			mutex_lock(&kvm->arch.mmu_setup_lock);
+ 			kvm->arch.vcores[core] = vcore;
+ 			kvm->arch.online_vcores++;
++			mutex_unlock(&kvm->arch.mmu_setup_lock);
+ 		}
+ 	}
+ 	mutex_unlock(&kvm->lock);
+@@ -3821,7 +3820,7 @@ static int kvmhv_setup_mmu(struct kvm_vcpu *vcpu)
+ 	int r = 0;
+ 	struct kvm *kvm = vcpu->kvm;
+ 
+-	mutex_lock(&kvm->lock);
++	mutex_lock(&kvm->arch.mmu_setup_lock);
+ 	if (!kvm->arch.mmu_ready) {
+ 		if (!kvm_is_radix(kvm))
+ 			r = kvmppc_hv_setup_htab_rma(vcpu);
+@@ -3831,7 +3830,7 @@ static int kvmhv_setup_mmu(struct kvm_vcpu *vcpu)
+ 			kvm->arch.mmu_ready = 1;
+ 		}
+ 	}
+-	mutex_unlock(&kvm->lock);
++	mutex_unlock(&kvm->arch.mmu_setup_lock);
+ 	return r;
+ }
+ 
+@@ -4439,7 +4438,8 @@ static void kvmppc_core_commit_memory_region_hv(struct kvm *kvm,
+ 
+ /*
+  * Update LPCR values in kvm->arch and in vcores.
+- * Caller must hold kvm->lock.
++ * Caller must hold kvm->arch.mmu_setup_lock (for mutual exclusion
++ * of kvm->arch.lpcr update).
+  */
+ void kvmppc_update_lpcr(struct kvm *kvm, unsigned long lpcr, unsigned long mask)
+ {
+@@ -4491,7 +4491,7 @@ void kvmppc_setup_partition_table(struct kvm *kvm)
+ 
+ /*
+  * Set up HPT (hashed page table) and RMA (real-mode area).
+- * Must be called with kvm->lock held.
++ * Must be called with kvm->arch.mmu_setup_lock held.
+  */
+ static int kvmppc_hv_setup_htab_rma(struct kvm_vcpu *vcpu)
+ {
+@@ -4579,7 +4579,10 @@ static int kvmppc_hv_setup_htab_rma(struct kvm_vcpu *vcpu)
+ 	goto out_srcu;
+ }
+ 
+-/* Must be called with kvm->lock held and mmu_ready = 0 and no vcpus running */
++/*
++ * Must be called with kvm->arch.mmu_setup_lock held and
++ * mmu_ready = 0 and no vcpus running.
++ */
+ int kvmppc_switch_mmu_to_hpt(struct kvm *kvm)
+ {
+ 	if (nesting_enabled(kvm))
+@@ -4596,7 +4599,10 @@ int kvmppc_switch_mmu_to_hpt(struct kvm *kvm)
+ 	return 0;
+ }
+ 
+-/* Must be called with kvm->lock held and mmu_ready = 0 and no vcpus running */
++/*
++ * Must be called with kvm->arch.mmu_setup_lock held and
++ * mmu_ready = 0 and no vcpus running.
++ */
+ int kvmppc_switch_mmu_to_radix(struct kvm *kvm)
+ {
+ 	int err;
+@@ -4701,6 +4707,8 @@ static int kvmppc_core_init_vm_hv(struct kvm *kvm)
+ 	char buf[32];
+ 	int ret;
+ 
++	mutex_init(&kvm->arch.mmu_setup_lock);
++
+ 	/* Allocate the guest's logical partition ID */
+ 
+ 	lpid = kvmppc_alloc_lpid();
+@@ -5226,7 +5234,7 @@ static int kvmhv_configure_mmu(struct kvm *kvm, struct kvm_ppc_mmuv3_cfg *cfg)
+ 	if (kvmhv_on_pseries() && !radix)
+ 		return -EINVAL;
+ 
+-	mutex_lock(&kvm->lock);
++	mutex_lock(&kvm->arch.mmu_setup_lock);
+ 	if (radix != kvm_is_radix(kvm)) {
+ 		if (kvm->arch.mmu_ready) {
+ 			kvm->arch.mmu_ready = 0;
+@@ -5254,7 +5262,7 @@ static int kvmhv_configure_mmu(struct kvm *kvm, struct kvm_ppc_mmuv3_cfg *cfg)
+ 	err = 0;
+ 
+  out_unlock:
+-	mutex_unlock(&kvm->lock);
++	mutex_unlock(&kvm->arch.mmu_setup_lock);
+ 	return err;
+ }
+ 
+diff --git a/arch/powerpc/kvm/book3s_rtas.c b/arch/powerpc/kvm/book3s_rtas.c
+index 4e178c4c1ea5..b7ae3dfbf00e 100644
+--- a/arch/powerpc/kvm/book3s_rtas.c
++++ b/arch/powerpc/kvm/book3s_rtas.c
+@@ -146,7 +146,7 @@ static int rtas_token_undefine(struct kvm *kvm, char *name)
+ {
+ 	struct rtas_token_definition *d, *tmp;
+ 
+-	lockdep_assert_held(&kvm->lock);
++	lockdep_assert_held(&kvm->arch.rtas_token_lock);
+ 
+ 	list_for_each_entry_safe(d, tmp, &kvm->arch.rtas_tokens, list) {
+ 		if (rtas_name_matches(d->handler->name, name)) {
+@@ -167,7 +167,7 @@ static int rtas_token_define(struct kvm *kvm, char *name, u64 token)
+ 	bool found;
+ 	int i;
+ 
+-	lockdep_assert_held(&kvm->lock);
++	lockdep_assert_held(&kvm->arch.rtas_token_lock);
+ 
+ 	list_for_each_entry(d, &kvm->arch.rtas_tokens, list) {
+ 		if (d->token == token)
+@@ -206,14 +206,14 @@ int kvm_vm_ioctl_rtas_define_token(struct kvm *kvm, void __user *argp)
+ 	if (copy_from_user(&args, argp, sizeof(args)))
+ 		return -EFAULT;
+ 
+-	mutex_lock(&kvm->lock);
++	mutex_lock(&kvm->arch.rtas_token_lock);
+ 
+ 	if (args.token)
+ 		rc = rtas_token_define(kvm, args.name, args.token);
+ 	else
+ 		rc = rtas_token_undefine(kvm, args.name);
+ 
+-	mutex_unlock(&kvm->lock);
++	mutex_unlock(&kvm->arch.rtas_token_lock);
+ 
+ 	return rc;
+ }
+@@ -245,7 +245,7 @@ int kvmppc_rtas_hcall(struct kvm_vcpu *vcpu)
+ 	orig_rets = args.rets;
+ 	args.rets = &args.args[be32_to_cpu(args.nargs)];
+ 
+-	mutex_lock(&vcpu->kvm->lock);
++	mutex_lock(&vcpu->kvm->arch.rtas_token_lock);
+ 
+ 	rc = -ENOENT;
+ 	list_for_each_entry(d, &vcpu->kvm->arch.rtas_tokens, list) {
+@@ -256,7 +256,7 @@ int kvmppc_rtas_hcall(struct kvm_vcpu *vcpu)
+ 		}
+ 	}
+ 
+-	mutex_unlock(&vcpu->kvm->lock);
++	mutex_unlock(&vcpu->kvm->arch.rtas_token_lock);
+ 
+ 	if (rc == 0) {
+ 		args.rets = orig_rets;
+@@ -282,8 +282,6 @@ void kvmppc_rtas_tokens_free(struct kvm *kvm)
+ {
+ 	struct rtas_token_definition *d, *tmp;
+ 
+-	lockdep_assert_held(&kvm->lock);
+-
+ 	list_for_each_entry_safe(d, tmp, &kvm->arch.rtas_tokens, list) {
+ 		list_del(&d->list);
+ 		kfree(d);
+diff --git a/arch/powerpc/platforms/powernv/opal-imc.c b/arch/powerpc/platforms/powernv/opal-imc.c
+index 3d27f02695e4..828f6656f8f7 100644
+--- a/arch/powerpc/platforms/powernv/opal-imc.c
++++ b/arch/powerpc/platforms/powernv/opal-imc.c
+@@ -161,6 +161,10 @@ static int imc_pmu_create(struct device_node *parent, int pmu_index, int domain)
+ 	struct imc_pmu *pmu_ptr;
+ 	u32 offset;
+ 
++	/* Return for unknown domain */
++	if (domain < 0)
++		return -EINVAL;
++
+ 	/* memory for pmu */
+ 	pmu_ptr = kzalloc(sizeof(*pmu_ptr), GFP_KERNEL);
+ 	if (!pmu_ptr)
+diff --git a/arch/s390/include/asm/ap.h b/arch/s390/include/asm/ap.h
+index e94a0a28b5eb..aea32dda3d14 100644
+--- a/arch/s390/include/asm/ap.h
++++ b/arch/s390/include/asm/ap.h
+@@ -160,8 +160,8 @@ struct ap_config_info {
+ 	unsigned char Nd;		/* max # of Domains - 1 */
+ 	unsigned char _reserved3[10];
+ 	unsigned int apm[8];		/* AP ID mask */
+-	unsigned int aqm[8];		/* AP queue mask */
+-	unsigned int adm[8];		/* AP domain mask */
++	unsigned int aqm[8];		/* AP (usage) queue mask */
++	unsigned int adm[8];		/* AP (control) domain mask */
+ 	unsigned char _reserved4[16];
+ } __aligned(8);
+ 
+diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
+index 10c99ce1fead..b71adf603b86 100644
+--- a/arch/x86/events/intel/ds.c
++++ b/arch/x86/events/intel/ds.c
+@@ -684,7 +684,7 @@ struct event_constraint intel_core2_pebs_event_constraints[] = {
+ 	INTEL_FLAGS_UEVENT_CONSTRAINT(0x1fc7, 0x1), /* SIMD_INST_RETURED.ANY */
+ 	INTEL_FLAGS_EVENT_CONSTRAINT(0xcb, 0x1),    /* MEM_LOAD_RETIRED.* */
+ 	/* INST_RETIRED.ANY_P, inv=1, cmask=16 (cycles:p). */
+-	INTEL_FLAGS_EVENT_CONSTRAINT(0x108000c0, 0x01),
++	INTEL_FLAGS_UEVENT_CONSTRAINT(0x108000c0, 0x01),
+ 	EVENT_CONSTRAINT_END
+ };
+ 
+@@ -693,7 +693,7 @@ struct event_constraint intel_atom_pebs_event_constraints[] = {
+ 	INTEL_FLAGS_UEVENT_CONSTRAINT(0x00c5, 0x1), /* MISPREDICTED_BRANCH_RETIRED */
+ 	INTEL_FLAGS_EVENT_CONSTRAINT(0xcb, 0x1),    /* MEM_LOAD_RETIRED.* */
+ 	/* INST_RETIRED.ANY_P, inv=1, cmask=16 (cycles:p). */
+-	INTEL_FLAGS_EVENT_CONSTRAINT(0x108000c0, 0x01),
++	INTEL_FLAGS_UEVENT_CONSTRAINT(0x108000c0, 0x01),
+ 	/* Allow all events as PEBS with no flags */
+ 	INTEL_ALL_EVENT_CONSTRAINT(0, 0x1),
+ 	EVENT_CONSTRAINT_END
+@@ -701,7 +701,7 @@ struct event_constraint intel_atom_pebs_event_constraints[] = {
+ 
+ struct event_constraint intel_slm_pebs_event_constraints[] = {
+ 	/* INST_RETIRED.ANY_P, inv=1, cmask=16 (cycles:p). */
+-	INTEL_FLAGS_EVENT_CONSTRAINT(0x108000c0, 0x1),
++	INTEL_FLAGS_UEVENT_CONSTRAINT(0x108000c0, 0x1),
+ 	/* Allow all events as PEBS with no flags */
+ 	INTEL_ALL_EVENT_CONSTRAINT(0, 0x1),
+ 	EVENT_CONSTRAINT_END
+@@ -726,7 +726,7 @@ struct event_constraint intel_nehalem_pebs_event_constraints[] = {
+ 	INTEL_FLAGS_EVENT_CONSTRAINT(0xcb, 0xf),    /* MEM_LOAD_RETIRED.* */
+ 	INTEL_FLAGS_EVENT_CONSTRAINT(0xf7, 0xf),    /* FP_ASSIST.* */
+ 	/* INST_RETIRED.ANY_P, inv=1, cmask=16 (cycles:p). */
+-	INTEL_FLAGS_EVENT_CONSTRAINT(0x108000c0, 0x0f),
++	INTEL_FLAGS_UEVENT_CONSTRAINT(0x108000c0, 0x0f),
+ 	EVENT_CONSTRAINT_END
+ };
+ 
+@@ -743,7 +743,7 @@ struct event_constraint intel_westmere_pebs_event_constraints[] = {
+ 	INTEL_FLAGS_EVENT_CONSTRAINT(0xcb, 0xf),    /* MEM_LOAD_RETIRED.* */
+ 	INTEL_FLAGS_EVENT_CONSTRAINT(0xf7, 0xf),    /* FP_ASSIST.* */
+ 	/* INST_RETIRED.ANY_P, inv=1, cmask=16 (cycles:p). */
+-	INTEL_FLAGS_EVENT_CONSTRAINT(0x108000c0, 0x0f),
++	INTEL_FLAGS_UEVENT_CONSTRAINT(0x108000c0, 0x0f),
+ 	EVENT_CONSTRAINT_END
+ };
+ 
+@@ -752,7 +752,7 @@ struct event_constraint intel_snb_pebs_event_constraints[] = {
+ 	INTEL_PLD_CONSTRAINT(0x01cd, 0x8),    /* MEM_TRANS_RETIRED.LAT_ABOVE_THR */
+ 	INTEL_PST_CONSTRAINT(0x02cd, 0x8),    /* MEM_TRANS_RETIRED.PRECISE_STORES */
+ 	/* UOPS_RETIRED.ALL, inv=1, cmask=16 (cycles:p). */
+-	INTEL_FLAGS_EVENT_CONSTRAINT(0x108001c2, 0xf),
++	INTEL_FLAGS_UEVENT_CONSTRAINT(0x108001c2, 0xf),
+         INTEL_EXCLEVT_CONSTRAINT(0xd0, 0xf),    /* MEM_UOP_RETIRED.* */
+         INTEL_EXCLEVT_CONSTRAINT(0xd1, 0xf),    /* MEM_LOAD_UOPS_RETIRED.* */
+         INTEL_EXCLEVT_CONSTRAINT(0xd2, 0xf),    /* MEM_LOAD_UOPS_LLC_HIT_RETIRED.* */
+@@ -767,9 +767,9 @@ struct event_constraint intel_ivb_pebs_event_constraints[] = {
+         INTEL_PLD_CONSTRAINT(0x01cd, 0x8),    /* MEM_TRANS_RETIRED.LAT_ABOVE_THR */
+ 	INTEL_PST_CONSTRAINT(0x02cd, 0x8),    /* MEM_TRANS_RETIRED.PRECISE_STORES */
+ 	/* UOPS_RETIRED.ALL, inv=1, cmask=16 (cycles:p). */
+-	INTEL_FLAGS_EVENT_CONSTRAINT(0x108001c2, 0xf),
++	INTEL_FLAGS_UEVENT_CONSTRAINT(0x108001c2, 0xf),
+ 	/* INST_RETIRED.PREC_DIST, inv=1, cmask=16 (cycles:ppp). */
+-	INTEL_FLAGS_EVENT_CONSTRAINT(0x108001c0, 0x2),
++	INTEL_FLAGS_UEVENT_CONSTRAINT(0x108001c0, 0x2),
+ 	INTEL_EXCLEVT_CONSTRAINT(0xd0, 0xf),    /* MEM_UOP_RETIRED.* */
+ 	INTEL_EXCLEVT_CONSTRAINT(0xd1, 0xf),    /* MEM_LOAD_UOPS_RETIRED.* */
+ 	INTEL_EXCLEVT_CONSTRAINT(0xd2, 0xf),    /* MEM_LOAD_UOPS_LLC_HIT_RETIRED.* */
+@@ -783,9 +783,9 @@ struct event_constraint intel_hsw_pebs_event_constraints[] = {
+ 	INTEL_FLAGS_UEVENT_CONSTRAINT(0x01c0, 0x2), /* INST_RETIRED.PRECDIST */
+ 	INTEL_PLD_CONSTRAINT(0x01cd, 0xf),    /* MEM_TRANS_RETIRED.* */
+ 	/* UOPS_RETIRED.ALL, inv=1, cmask=16 (cycles:p). */
+-	INTEL_FLAGS_EVENT_CONSTRAINT(0x108001c2, 0xf),
++	INTEL_FLAGS_UEVENT_CONSTRAINT(0x108001c2, 0xf),
+ 	/* INST_RETIRED.PREC_DIST, inv=1, cmask=16 (cycles:ppp). */
+-	INTEL_FLAGS_EVENT_CONSTRAINT(0x108001c0, 0x2),
++	INTEL_FLAGS_UEVENT_CONSTRAINT(0x108001c0, 0x2),
+ 	INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_NA(0x01c2, 0xf), /* UOPS_RETIRED.ALL */
+ 	INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_XLD(0x11d0, 0xf), /* MEM_UOPS_RETIRED.STLB_MISS_LOADS */
+ 	INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_XLD(0x21d0, 0xf), /* MEM_UOPS_RETIRED.LOCK_LOADS */
+@@ -806,9 +806,9 @@ struct event_constraint intel_bdw_pebs_event_constraints[] = {
+ 	INTEL_FLAGS_UEVENT_CONSTRAINT(0x01c0, 0x2), /* INST_RETIRED.PRECDIST */
+ 	INTEL_PLD_CONSTRAINT(0x01cd, 0xf),    /* MEM_TRANS_RETIRED.* */
+ 	/* UOPS_RETIRED.ALL, inv=1, cmask=16 (cycles:p). */
+-	INTEL_FLAGS_EVENT_CONSTRAINT(0x108001c2, 0xf),
++	INTEL_FLAGS_UEVENT_CONSTRAINT(0x108001c2, 0xf),
+ 	/* INST_RETIRED.PREC_DIST, inv=1, cmask=16 (cycles:ppp). */
+-	INTEL_FLAGS_EVENT_CONSTRAINT(0x108001c0, 0x2),
++	INTEL_FLAGS_UEVENT_CONSTRAINT(0x108001c0, 0x2),
+ 	INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_NA(0x01c2, 0xf), /* UOPS_RETIRED.ALL */
+ 	INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_LD(0x11d0, 0xf), /* MEM_UOPS_RETIRED.STLB_MISS_LOADS */
+ 	INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_LD(0x21d0, 0xf), /* MEM_UOPS_RETIRED.LOCK_LOADS */
+@@ -829,9 +829,9 @@ struct event_constraint intel_bdw_pebs_event_constraints[] = {
+ struct event_constraint intel_skl_pebs_event_constraints[] = {
+ 	INTEL_FLAGS_UEVENT_CONSTRAINT(0x1c0, 0x2),	/* INST_RETIRED.PREC_DIST */
+ 	/* INST_RETIRED.PREC_DIST, inv=1, cmask=16 (cycles:ppp). */
+-	INTEL_FLAGS_EVENT_CONSTRAINT(0x108001c0, 0x2),
++	INTEL_FLAGS_UEVENT_CONSTRAINT(0x108001c0, 0x2),
+ 	/* INST_RETIRED.TOTAL_CYCLES_PS (inv=1, cmask=16) (cycles:p). */
+-	INTEL_FLAGS_EVENT_CONSTRAINT(0x108000c0, 0x0f),
++	INTEL_FLAGS_UEVENT_CONSTRAINT(0x108000c0, 0x0f),
+ 	INTEL_PLD_CONSTRAINT(0x1cd, 0xf),		      /* MEM_TRANS_RETIRED.* */
+ 	INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_LD(0x11d0, 0xf), /* MEM_INST_RETIRED.STLB_MISS_LOADS */
+ 	INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_ST(0x12d0, 0xf), /* MEM_INST_RETIRED.STLB_MISS_STORES */
+diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
+index 01004bfb1a1b..524709dcf749 100644
+--- a/arch/x86/kernel/cpu/amd.c
++++ b/arch/x86/kernel/cpu/amd.c
+@@ -820,8 +820,11 @@ static void init_amd_zn(struct cpuinfo_x86 *c)
+ {
+ 	set_cpu_cap(c, X86_FEATURE_ZEN);
+ 
+-	/* Fix erratum 1076: CPB feature bit not being set in CPUID. */
+-	if (!cpu_has(c, X86_FEATURE_CPB))
++	/*
++	 * Fix erratum 1076: CPB feature bit not being set in CPUID.
++	 * Always set it, except when running under a hypervisor.
++	 */
++	if (!cpu_has(c, X86_FEATURE_HYPERVISOR) && !cpu_has(c, X86_FEATURE_CPB))
+ 		set_cpu_cap(c, X86_FEATURE_CPB);
+ }
+ 
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 11efca3534ad..00b826399228 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -2846,7 +2846,7 @@ struct request_queue *blk_mq_init_allocated_queue(struct blk_mq_tag_set *set,
+ 		goto err_exit;
+ 
+ 	if (blk_mq_alloc_ctxs(q))
+-		goto err_exit;
++		goto err_poll;
+ 
+ 	/* init q->mq_kobj and sw queues' kobjects */
+ 	blk_mq_sysfs_init(q);
+@@ -2907,6 +2907,9 @@ err_hctxs:
+ 	kfree(q->queue_hw_ctx);
+ err_sys_init:
+ 	blk_mq_sysfs_deinit(q);
++err_poll:
++	blk_stat_free_callback(q->poll_cb);
++	q->poll_cb = NULL;
+ err_exit:
+ 	q->mq_ops = NULL;
+ 	return ERR_PTR(-ENOMEM);
+diff --git a/drivers/acpi/device_pm.c b/drivers/acpi/device_pm.c
+index 824ae985ad93..ccb59768b1f3 100644
+--- a/drivers/acpi/device_pm.c
++++ b/drivers/acpi/device_pm.c
+@@ -949,8 +949,8 @@ static bool acpi_dev_needs_resume(struct device *dev, struct acpi_device *adev)
+ 	u32 sys_target = acpi_target_system_state();
+ 	int ret, state;
+ 
+-	if (!pm_runtime_suspended(dev) || !adev ||
+-	    device_may_wakeup(dev) != !!adev->wakeup.prepare_count)
++	if (!pm_runtime_suspended(dev) || !adev || (adev->wakeup.flags.valid &&
++	    device_may_wakeup(dev) != !!adev->wakeup.prepare_count))
+ 		return true;
+ 
+ 	if (sys_target == ACPI_STATE_S0)
+diff --git a/drivers/clk/ti/clkctrl.c b/drivers/clk/ti/clkctrl.c
+index 639f515e08f0..3325ee43bcc1 100644
+--- a/drivers/clk/ti/clkctrl.c
++++ b/drivers/clk/ti/clkctrl.c
+@@ -137,9 +137,6 @@ static int _omap4_clkctrl_clk_enable(struct clk_hw *hw)
+ 	int ret;
+ 	union omap4_timeout timeout = { 0 };
+ 
+-	if (!clk->enable_bit)
+-		return 0;
+-
+ 	if (clk->clkdm) {
+ 		ret = ti_clk_ll_ops->clkdm_clk_enable(clk->clkdm, hw->clk);
+ 		if (ret) {
+@@ -151,6 +148,9 @@ static int _omap4_clkctrl_clk_enable(struct clk_hw *hw)
+ 		}
+ 	}
+ 
++	if (!clk->enable_bit)
++		return 0;
++
+ 	val = ti_clk_ll_ops->clk_readl(&clk->enable_reg);
+ 
+ 	val &= ~OMAP4_MODULEMODE_MASK;
+@@ -179,7 +179,7 @@ static void _omap4_clkctrl_clk_disable(struct clk_hw *hw)
+ 	union omap4_timeout timeout = { 0 };
+ 
+ 	if (!clk->enable_bit)
+-		return;
++		goto exit;
+ 
+ 	val = ti_clk_ll_ops->clk_readl(&clk->enable_reg);
+ 
+diff --git a/drivers/gpio/Kconfig b/drivers/gpio/Kconfig
+index 3f50526a771f..864a1ba7aa3a 100644
+--- a/drivers/gpio/Kconfig
++++ b/drivers/gpio/Kconfig
+@@ -824,6 +824,7 @@ config GPIO_ADP5588
+ config GPIO_ADP5588_IRQ
+ 	bool "Interrupt controller support for ADP5588"
+ 	depends on GPIO_ADP5588=y
++	select GPIOLIB_IRQCHIP
+ 	help
+ 	  Say yes here to enable the adp5588 to be used as an interrupt
+ 	  controller. It requires the driver to be built in the kernel.
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_dump.c b/drivers/gpu/drm/etnaviv/etnaviv_dump.c
+index 33854c94cb85..515515ef24f9 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_dump.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_dump.c
+@@ -125,6 +125,8 @@ void etnaviv_core_dump(struct etnaviv_gpu *gpu)
+ 		return;
+ 	etnaviv_dump_core = false;
+ 
++	mutex_lock(&gpu->mmu->lock);
++
+ 	mmu_size = etnaviv_iommu_dump_size(gpu->mmu);
+ 
+ 	/* We always dump registers, mmu, ring and end marker */
+@@ -167,6 +169,7 @@ void etnaviv_core_dump(struct etnaviv_gpu *gpu)
+ 	iter.start = __vmalloc(file_size, GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY,
+ 			       PAGE_KERNEL);
+ 	if (!iter.start) {
++		mutex_unlock(&gpu->mmu->lock);
+ 		dev_warn(gpu->dev, "failed to allocate devcoredump file\n");
+ 		return;
+ 	}
+@@ -234,6 +237,8 @@ void etnaviv_core_dump(struct etnaviv_gpu *gpu)
+ 					 obj->base.size);
+ 	}
+ 
++	mutex_unlock(&gpu->mmu->lock);
++
+ 	etnaviv_core_dump_header(&iter, ETDUMP_BUF_END, iter.data);
+ 
+ 	dev_coredumpv(gpu->dev, iter.start, iter.data - iter.start, GFP_KERNEL);
+diff --git a/drivers/i2c/i2c-dev.c b/drivers/i2c/i2c-dev.c
+index 3f7b9af11137..776f36690448 100644
+--- a/drivers/i2c/i2c-dev.c
++++ b/drivers/i2c/i2c-dev.c
+@@ -283,6 +283,7 @@ static noinline int i2cdev_ioctl_rdwr(struct i2c_client *client,
+ 			    msgs[i].len < 1 || msgs[i].buf[0] < 1 ||
+ 			    msgs[i].len < msgs[i].buf[0] +
+ 					     I2C_SMBUS_BLOCK_MAX) {
++				i++;
+ 				res = -EINVAL;
+ 				break;
+ 			}
+diff --git a/drivers/iio/imu/inv_mpu6050/inv_mpu_core.c b/drivers/iio/imu/inv_mpu6050/inv_mpu_core.c
+index 650de0fefb7b..385f14a4d5a7 100644
+--- a/drivers/iio/imu/inv_mpu6050/inv_mpu_core.c
++++ b/drivers/iio/imu/inv_mpu6050/inv_mpu_core.c
+@@ -471,7 +471,10 @@ inv_mpu6050_read_raw(struct iio_dev *indio_dev,
+ 			return IIO_VAL_INT_PLUS_MICRO;
+ 		case IIO_TEMP:
+ 			*val = 0;
+-			*val2 = INV_MPU6050_TEMP_SCALE;
++			if (st->chip_type == INV_ICM20602)
++				*val2 = INV_ICM20602_TEMP_SCALE;
++			else
++				*val2 = INV_MPU6050_TEMP_SCALE;
+ 
+ 			return IIO_VAL_INT_PLUS_MICRO;
+ 		default:
+@@ -480,7 +483,10 @@ inv_mpu6050_read_raw(struct iio_dev *indio_dev,
+ 	case IIO_CHAN_INFO_OFFSET:
+ 		switch (chan->type) {
+ 		case IIO_TEMP:
+-			*val = INV_MPU6050_TEMP_OFFSET;
++			if (st->chip_type == INV_ICM20602)
++				*val = INV_ICM20602_TEMP_OFFSET;
++			else
++				*val = INV_MPU6050_TEMP_OFFSET;
+ 
+ 			return IIO_VAL_INT;
+ 		default:
+@@ -845,6 +851,32 @@ static const struct iio_chan_spec inv_mpu_channels[] = {
+ 	INV_MPU6050_CHAN(IIO_ACCEL, IIO_MOD_Z, INV_MPU6050_SCAN_ACCL_Z),
+ };
+ 
++static const struct iio_chan_spec inv_icm20602_channels[] = {
++	IIO_CHAN_SOFT_TIMESTAMP(INV_ICM20602_SCAN_TIMESTAMP),
++	{
++		.type = IIO_TEMP,
++		.info_mask_separate = BIT(IIO_CHAN_INFO_RAW)
++				| BIT(IIO_CHAN_INFO_OFFSET)
++				| BIT(IIO_CHAN_INFO_SCALE),
++		.scan_index = INV_ICM20602_SCAN_TEMP,
++		.scan_type = {
++				.sign = 's',
++				.realbits = 16,
++				.storagebits = 16,
++				.shift = 0,
++				.endianness = IIO_BE,
++			     },
++	},
++
++	INV_MPU6050_CHAN(IIO_ANGL_VEL, IIO_MOD_X, INV_ICM20602_SCAN_GYRO_X),
++	INV_MPU6050_CHAN(IIO_ANGL_VEL, IIO_MOD_Y, INV_ICM20602_SCAN_GYRO_Y),
++	INV_MPU6050_CHAN(IIO_ANGL_VEL, IIO_MOD_Z, INV_ICM20602_SCAN_GYRO_Z),
++
++	INV_MPU6050_CHAN(IIO_ACCEL, IIO_MOD_Y, INV_ICM20602_SCAN_ACCL_Y),
++	INV_MPU6050_CHAN(IIO_ACCEL, IIO_MOD_X, INV_ICM20602_SCAN_ACCL_X),
++	INV_MPU6050_CHAN(IIO_ACCEL, IIO_MOD_Z, INV_ICM20602_SCAN_ACCL_Z),
++};
++
+ /*
+  * The user can choose any frequency between INV_MPU6050_MIN_FIFO_RATE and
+  * INV_MPU6050_MAX_FIFO_RATE, but only these frequencies are matched by the
+@@ -1100,8 +1132,14 @@ int inv_mpu_core_probe(struct regmap *regmap, int irq, const char *name,
+ 		indio_dev->name = name;
+ 	else
+ 		indio_dev->name = dev_name(dev);
+-	indio_dev->channels = inv_mpu_channels;
+-	indio_dev->num_channels = ARRAY_SIZE(inv_mpu_channels);
++
++	if (chip_type == INV_ICM20602) {
++		indio_dev->channels = inv_icm20602_channels;
++		indio_dev->num_channels = ARRAY_SIZE(inv_icm20602_channels);
++	} else {
++		indio_dev->channels = inv_mpu_channels;
++		indio_dev->num_channels = ARRAY_SIZE(inv_mpu_channels);
++	}
+ 
+ 	indio_dev->info = &mpu_info;
+ 	indio_dev->modes = INDIO_BUFFER_TRIGGERED;
+diff --git a/drivers/iio/imu/inv_mpu6050/inv_mpu_iio.h b/drivers/iio/imu/inv_mpu6050/inv_mpu_iio.h
+index 325afd9f5f61..3d5fe4474378 100644
+--- a/drivers/iio/imu/inv_mpu6050/inv_mpu_iio.h
++++ b/drivers/iio/imu/inv_mpu6050/inv_mpu_iio.h
+@@ -208,6 +208,9 @@ struct inv_mpu6050_state {
+ #define INV_MPU6050_BYTES_PER_3AXIS_SENSOR   6
+ #define INV_MPU6050_FIFO_COUNT_BYTE          2
+ 
++/* ICM20602 FIFO samples include temperature readings */
++#define INV_ICM20602_BYTES_PER_TEMP_SENSOR   2
++
+ /* mpu6500 registers */
+ #define INV_MPU6500_REG_ACCEL_CONFIG_2      0x1D
+ #define INV_MPU6500_REG_ACCEL_OFFSET        0x77
+@@ -229,6 +232,9 @@ struct inv_mpu6050_state {
+ #define INV_MPU6050_GYRO_CONFIG_FSR_SHIFT    3
+ #define INV_MPU6050_ACCL_CONFIG_FSR_SHIFT    3
+ 
++#define INV_ICM20602_TEMP_OFFSET	     8170
++#define INV_ICM20602_TEMP_SCALE		     3060
++
+ /* 6 + 6 round up and plus 8 */
+ #define INV_MPU6050_OUTPUT_DATA_SIZE         24
+ 
+@@ -270,7 +276,7 @@ struct inv_mpu6050_state {
+ #define INV_ICM20608_WHOAMI_VALUE		0xAF
+ #define INV_ICM20602_WHOAMI_VALUE		0x12
+ 
+-/* scan element definition */
++/* scan element definition for generic MPU6xxx devices */
+ enum inv_mpu6050_scan {
+ 	INV_MPU6050_SCAN_ACCL_X,
+ 	INV_MPU6050_SCAN_ACCL_Y,
+@@ -281,6 +287,18 @@ enum inv_mpu6050_scan {
+ 	INV_MPU6050_SCAN_TIMESTAMP,
+ };
+ 
++/* scan element definition for ICM20602, which includes temperature */
++enum inv_icm20602_scan {
++	INV_ICM20602_SCAN_ACCL_X,
++	INV_ICM20602_SCAN_ACCL_Y,
++	INV_ICM20602_SCAN_ACCL_Z,
++	INV_ICM20602_SCAN_TEMP,
++	INV_ICM20602_SCAN_GYRO_X,
++	INV_ICM20602_SCAN_GYRO_Y,
++	INV_ICM20602_SCAN_GYRO_Z,
++	INV_ICM20602_SCAN_TIMESTAMP,
++};
++
+ enum inv_mpu6050_filter_e {
+ 	INV_MPU6050_FILTER_256HZ_NOLPF2 = 0,
+ 	INV_MPU6050_FILTER_188HZ,
+diff --git a/drivers/iio/imu/inv_mpu6050/inv_mpu_ring.c b/drivers/iio/imu/inv_mpu6050/inv_mpu_ring.c
+index 548e042f7b5b..57bd11bde56b 100644
+--- a/drivers/iio/imu/inv_mpu6050/inv_mpu_ring.c
++++ b/drivers/iio/imu/inv_mpu6050/inv_mpu_ring.c
+@@ -207,6 +207,9 @@ irqreturn_t inv_mpu6050_read_fifo(int irq, void *p)
+ 	if (st->chip_config.gyro_fifo_enable)
+ 		bytes_per_datum += INV_MPU6050_BYTES_PER_3AXIS_SENSOR;
+ 
++	if (st->chip_type == INV_ICM20602)
++		bytes_per_datum += INV_ICM20602_BYTES_PER_TEMP_SENSOR;
++
+ 	/*
+ 	 * read fifo_count register to know how many bytes are inside the FIFO
+ 	 * right now
+diff --git a/drivers/isdn/mISDN/socket.c b/drivers/isdn/mISDN/socket.c
+index a14e35d40538..84e1d4c2db66 100644
+--- a/drivers/isdn/mISDN/socket.c
++++ b/drivers/isdn/mISDN/socket.c
+@@ -393,7 +393,7 @@ data_sock_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
+ 			memcpy(di.channelmap, dev->channelmap,
+ 			       sizeof(di.channelmap));
+ 			di.nrbchan = dev->nrbchan;
+-			strcpy(di.name, dev_name(&dev->dev));
++			strscpy(di.name, dev_name(&dev->dev), sizeof(di.name));
+ 			if (copy_to_user((void __user *)arg, &di, sizeof(di)))
+ 				err = -EFAULT;
+ 		} else
+@@ -676,7 +676,7 @@ base_sock_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
+ 			memcpy(di.channelmap, dev->channelmap,
+ 			       sizeof(di.channelmap));
+ 			di.nrbchan = dev->nrbchan;
+-			strcpy(di.name, dev_name(&dev->dev));
++			strscpy(di.name, dev_name(&dev->dev), sizeof(di.name));
+ 			if (copy_to_user((void __user *)arg, &di, sizeof(di)))
+ 				err = -EFAULT;
+ 		} else
+@@ -690,6 +690,7 @@ base_sock_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
+ 			err = -EFAULT;
+ 			break;
+ 		}
++		dn.name[sizeof(dn.name) - 1] = '\0';
+ 		dev = get_mdevice(dn.id);
+ 		if (dev)
+ 			err = device_rename(&dev->dev, dn.name);
+diff --git a/drivers/net/dsa/microchip/ksz_common.c b/drivers/net/dsa/microchip/ksz_common.c
+index 39dace8e3512..f46086fa9064 100644
+--- a/drivers/net/dsa/microchip/ksz_common.c
++++ b/drivers/net/dsa/microchip/ksz_common.c
+@@ -83,6 +83,9 @@ static void ksz_mib_read_work(struct work_struct *work)
+ 	int i;
+ 
+ 	for (i = 0; i < dev->mib_port_cnt; i++) {
++		if (dsa_is_unused_port(dev->ds, i))
++			continue;
++
+ 		p = &dev->ports[i];
+ 		mib = &p->mib;
+ 		mutex_lock(&mib->cnt_mutex);
+diff --git a/drivers/net/dsa/rtl8366.c b/drivers/net/dsa/rtl8366.c
+index 6dedd43442cc..35b767baf21f 100644
+--- a/drivers/net/dsa/rtl8366.c
++++ b/drivers/net/dsa/rtl8366.c
+@@ -307,7 +307,8 @@ int rtl8366_vlan_filtering(struct dsa_switch *ds, int port, bool vlan_filtering)
+ 	struct rtl8366_vlan_4k vlan4k;
+ 	int ret;
+ 
+-	if (!smi->ops->is_vlan_valid(smi, port))
++	/* Use VLAN nr port + 1 since VLAN0 is not valid */
++	if (!smi->ops->is_vlan_valid(smi, port + 1))
+ 		return -EINVAL;
+ 
+ 	dev_info(smi->dev, "%s filtering on port %d\n",
+@@ -318,12 +319,12 @@ int rtl8366_vlan_filtering(struct dsa_switch *ds, int port, bool vlan_filtering)
+ 	 * The hardware support filter ID (FID) 0..7, I have no clue how to
+ 	 * support this in the driver when the callback only says on/off.
+ 	 */
+-	ret = smi->ops->get_vlan_4k(smi, port, &vlan4k);
++	ret = smi->ops->get_vlan_4k(smi, port + 1, &vlan4k);
+ 	if (ret)
+ 		return ret;
+ 
+ 	/* Just set the filter to FID 1 for now then */
+-	ret = rtl8366_set_vlan(smi, port,
++	ret = rtl8366_set_vlan(smi, port + 1,
+ 			       vlan4k.member,
+ 			       vlan4k.untag,
+ 			       1);
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_ring.c b/drivers/net/ethernet/aquantia/atlantic/aq_ring.c
+index e2ffb159cbe2..bf4aa7060f1a 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_ring.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_ring.c
+@@ -139,10 +139,10 @@ void aq_ring_queue_stop(struct aq_ring_s *ring)
+ bool aq_ring_tx_clean(struct aq_ring_s *self)
+ {
+ 	struct device *dev = aq_nic_get_dev(self->aq_nic);
+-	unsigned int budget = AQ_CFG_TX_CLEAN_BUDGET;
++	unsigned int budget;
+ 
+-	for (; self->sw_head != self->hw_head && budget--;
+-		self->sw_head = aq_ring_next_dx(self, self->sw_head)) {
++	for (budget = AQ_CFG_TX_CLEAN_BUDGET;
++	     budget && self->sw_head != self->hw_head; budget--) {
+ 		struct aq_ring_buff_s *buff = &self->buff_ring[self->sw_head];
+ 
+ 		if (likely(buff->is_mapped)) {
+@@ -167,6 +167,7 @@ bool aq_ring_tx_clean(struct aq_ring_s *self)
+ 
+ 		buff->pa = 0U;
+ 		buff->eop_index = 0xffffU;
++		self->sw_head = aq_ring_next_dx(self, self->sw_head);
+ 	}
+ 
+ 	return !!budget;
+diff --git a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0.c b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0.c
+index b31dba1b1a55..ec302fdfec63 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0.c
++++ b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0.c
+@@ -702,38 +702,41 @@ static int hw_atl_b0_hw_ring_rx_receive(struct aq_hw_s *self,
+ 		if ((rx_stat & BIT(0)) || rxd_wb->type & 0x1000U) {
+ 			/* MAC error or DMA error */
+ 			buff->is_error = 1U;
+-		} else {
+-			if (self->aq_nic_cfg->is_rss) {
+-				/* last 4 byte */
+-				u16 rss_type = rxd_wb->type & 0xFU;
+-
+-				if (rss_type && rss_type < 0x8U) {
+-					buff->is_hash_l4 = (rss_type == 0x4 ||
+-					rss_type == 0x5);
+-					buff->rss_hash = rxd_wb->rss_hash;
+-				}
++		}
++		if (self->aq_nic_cfg->is_rss) {
++			/* last 4 byte */
++			u16 rss_type = rxd_wb->type & 0xFU;
++
++			if (rss_type && rss_type < 0x8U) {
++				buff->is_hash_l4 = (rss_type == 0x4 ||
++				rss_type == 0x5);
++				buff->rss_hash = rxd_wb->rss_hash;
+ 			}
++		}
+ 
+-			if (HW_ATL_B0_RXD_WB_STAT2_EOP & rxd_wb->status) {
+-				buff->len = rxd_wb->pkt_len %
+-					AQ_CFG_RX_FRAME_MAX;
+-				buff->len = buff->len ?
+-					buff->len : AQ_CFG_RX_FRAME_MAX;
+-				buff->next = 0U;
+-				buff->is_eop = 1U;
++		if (HW_ATL_B0_RXD_WB_STAT2_EOP & rxd_wb->status) {
++			buff->len = rxd_wb->pkt_len %
++				AQ_CFG_RX_FRAME_MAX;
++			buff->len = buff->len ?
++				buff->len : AQ_CFG_RX_FRAME_MAX;
++			buff->next = 0U;
++			buff->is_eop = 1U;
++		} else {
++			buff->len =
++				rxd_wb->pkt_len > AQ_CFG_RX_FRAME_MAX ?
++				AQ_CFG_RX_FRAME_MAX : rxd_wb->pkt_len;
++
++			if (HW_ATL_B0_RXD_WB_STAT2_RSCCNT &
++				rxd_wb->status) {
++				/* LRO */
++				buff->next = rxd_wb->next_desc_ptr;
++				++ring->stats.rx.lro_packets;
+ 			} else {
+-				if (HW_ATL_B0_RXD_WB_STAT2_RSCCNT &
+-					rxd_wb->status) {
+-					/* LRO */
+-					buff->next = rxd_wb->next_desc_ptr;
+-					++ring->stats.rx.lro_packets;
+-				} else {
+-					/* jumbo */
+-					buff->next =
+-						aq_ring_next_dx(ring,
+-								ring->hw_head);
+-					++ring->stats.rx.jumbo_packets;
+-				}
++				/* jumbo */
++				buff->next =
++					aq_ring_next_dx(ring,
++							ring->hw_head);
++				++ring->stats.rx.jumbo_packets;
+ 			}
+ 		}
+ 	}
+diff --git a/drivers/net/ethernet/dec/tulip/de4x5.c b/drivers/net/ethernet/dec/tulip/de4x5.c
+index 66535d1653f6..f16853c3c851 100644
+--- a/drivers/net/ethernet/dec/tulip/de4x5.c
++++ b/drivers/net/ethernet/dec/tulip/de4x5.c
+@@ -2107,7 +2107,6 @@ static struct eisa_driver de4x5_eisa_driver = {
+ 		.remove  = de4x5_eisa_remove,
+         }
+ };
+-MODULE_DEVICE_TABLE(eisa, de4x5_eisa_ids);
+ #endif
+ 
+ #ifdef CONFIG_PCI
+diff --git a/drivers/net/ethernet/emulex/benet/be_ethtool.c b/drivers/net/ethernet/emulex/benet/be_ethtool.c
+index 4c218341c51b..6e635debc7fd 100644
+--- a/drivers/net/ethernet/emulex/benet/be_ethtool.c
++++ b/drivers/net/ethernet/emulex/benet/be_ethtool.c
+@@ -1105,7 +1105,7 @@ static int be_get_rxnfc(struct net_device *netdev, struct ethtool_rxnfc *cmd,
+ 		cmd->data = be_get_rss_hash_opts(adapter, cmd->flow_type);
+ 		break;
+ 	case ETHTOOL_GRXRINGS:
+-		cmd->data = adapter->num_rx_qs - 1;
++		cmd->data = adapter->num_rx_qs;
+ 		break;
+ 	default:
+ 		return -EINVAL;
+diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
+index d3f2408dc9e8..f38c3fa7d705 100644
+--- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
++++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
+@@ -780,7 +780,7 @@ static void dpaa_eth_add_channel(u16 channel)
+ 	struct qman_portal *portal;
+ 	int cpu;
+ 
+-	for_each_cpu(cpu, cpus) {
++	for_each_cpu_and(cpu, cpus, cpu_online_mask) {
+ 		portal = qman_get_affine_portal(cpu);
+ 		qman_p_static_dequeue_add(portal, pool);
+ 	}
+@@ -896,7 +896,7 @@ static void dpaa_fq_setup(struct dpaa_priv *priv,
+ 	u16 channels[NR_CPUS];
+ 	struct dpaa_fq *fq;
+ 
+-	for_each_cpu(cpu, affine_cpus)
++	for_each_cpu_and(cpu, affine_cpus, cpu_online_mask)
+ 		channels[num_portals++] = qman_affine_channel(cpu);
+ 
+ 	if (num_portals == 0)
+@@ -2174,7 +2174,6 @@ static int dpaa_eth_poll(struct napi_struct *napi, int budget)
+ 	if (cleaned < budget) {
+ 		napi_complete_done(napi, cleaned);
+ 		qman_p_irqsource_add(np->p, QM_PIRQ_DQRI);
+-
+ 	} else if (np->down) {
+ 		qman_p_irqsource_add(np->p, QM_PIRQ_DQRI);
+ 	}
+@@ -2448,7 +2447,7 @@ static void dpaa_eth_napi_enable(struct dpaa_priv *priv)
+ 	struct dpaa_percpu_priv *percpu_priv;
+ 	int i;
+ 
+-	for_each_possible_cpu(i) {
++	for_each_online_cpu(i) {
+ 		percpu_priv = per_cpu_ptr(priv->percpu_priv, i);
+ 
+ 		percpu_priv->np.down = 0;
+@@ -2461,7 +2460,7 @@ static void dpaa_eth_napi_disable(struct dpaa_priv *priv)
+ 	struct dpaa_percpu_priv *percpu_priv;
+ 	int i;
+ 
+-	for_each_possible_cpu(i) {
++	for_each_online_cpu(i) {
+ 		percpu_priv = per_cpu_ptr(priv->percpu_priv, i);
+ 
+ 		percpu_priv->np.down = 1;
+diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c b/drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c
+index bdee441bc3b7..7ce2e99b594d 100644
+--- a/drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c
++++ b/drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c
+@@ -569,7 +569,7 @@ static int dpaa_set_coalesce(struct net_device *dev,
+ 	qman_dqrr_get_ithresh(portal, &prev_thresh);
+ 
+ 	/* set new values */
+-	for_each_cpu(cpu, cpus) {
++	for_each_cpu_and(cpu, cpus, cpu_online_mask) {
+ 		portal = qman_get_affine_portal(cpu);
+ 		res = qman_portal_set_iperiod(portal, period);
+ 		if (res)
+@@ -586,7 +586,7 @@ static int dpaa_set_coalesce(struct net_device *dev,
+ 
+ revert_values:
+ 	/* restore previous values */
+-	for_each_cpu(cpu, cpus) {
++	for_each_cpu_and(cpu, cpus, cpu_online_mask) {
+ 		if (!needs_revert[cpu])
+ 			continue;
+ 		portal = qman_get_affine_portal(cpu);
+diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+index 57cbaa38d247..df371c81a706 100644
+--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
++++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+@@ -1966,7 +1966,7 @@ alloc_channel(struct dpaa2_eth_priv *priv)
+ 
+ 	channel->dpcon = setup_dpcon(priv);
+ 	if (IS_ERR_OR_NULL(channel->dpcon)) {
+-		err = PTR_ERR(channel->dpcon);
++		err = PTR_ERR_OR_ZERO(channel->dpcon);
+ 		goto err_setup;
+ 	}
+ 
+@@ -2022,7 +2022,7 @@ static int setup_dpio(struct dpaa2_eth_priv *priv)
+ 		/* Try to allocate a channel */
+ 		channel = alloc_channel(priv);
+ 		if (IS_ERR_OR_NULL(channel)) {
+-			err = PTR_ERR(channel);
++			err = PTR_ERR_OR_ZERO(channel);
+ 			if (err != -EPROBE_DEFER)
+ 				dev_info(dev,
+ 					 "No affine channel for cpu %d and above\n", i);
+diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-ethtool.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-ethtool.c
+index 591dfcf76adb..0610fc0bebc2 100644
+--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-ethtool.c
++++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-ethtool.c
+@@ -4,6 +4,7 @@
+  */
+ 
+ #include <linux/net_tstamp.h>
++#include <linux/nospec.h>
+ 
+ #include "dpni.h"	/* DPNI_LINK_OPT_* */
+ #include "dpaa2-eth.h"
+@@ -589,6 +590,8 @@ static int dpaa2_eth_get_rxnfc(struct net_device *net_dev,
+ 	case ETHTOOL_GRXCLSRULE:
+ 		if (rxnfc->fs.location >= max_rules)
+ 			return -EINVAL;
++		rxnfc->fs.location = array_index_nospec(rxnfc->fs.location,
++							max_rules);
+ 		if (!priv->cls_rules[rxnfc->fs.location].in_use)
+ 			return -EINVAL;
+ 		rxnfc->fs = priv->cls_rules[rxnfc->fs.location].fs;
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c
+index 392fd895f278..ae2240074d8e 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c
+@@ -1905,8 +1905,7 @@ static int mvpp2_prs_ip6_init(struct mvpp2 *priv)
+ }
+ 
+ /* Find tcam entry with matched pair <vid,port> */
+-static int mvpp2_prs_vid_range_find(struct mvpp2 *priv, int pmap, u16 vid,
+-				    u16 mask)
++static int mvpp2_prs_vid_range_find(struct mvpp2_port *port, u16 vid, u16 mask)
+ {
+ 	unsigned char byte[2], enable[2];
+ 	struct mvpp2_prs_entry pe;
+@@ -1914,13 +1913,13 @@ static int mvpp2_prs_vid_range_find(struct mvpp2 *priv, int pmap, u16 vid,
+ 	int tid;
+ 
+ 	/* Go through the all entries with MVPP2_PRS_LU_VID */
+-	for (tid = MVPP2_PE_VID_FILT_RANGE_START;
+-	     tid <= MVPP2_PE_VID_FILT_RANGE_END; tid++) {
+-		if (!priv->prs_shadow[tid].valid ||
+-		    priv->prs_shadow[tid].lu != MVPP2_PRS_LU_VID)
++	for (tid = MVPP2_PRS_VID_PORT_FIRST(port->id);
++	     tid <= MVPP2_PRS_VID_PORT_LAST(port->id); tid++) {
++		if (!port->priv->prs_shadow[tid].valid ||
++		    port->priv->prs_shadow[tid].lu != MVPP2_PRS_LU_VID)
+ 			continue;
+ 
+-		mvpp2_prs_init_from_hw(priv, &pe, tid);
++		mvpp2_prs_init_from_hw(port->priv, &pe, tid);
+ 
+ 		mvpp2_prs_tcam_data_byte_get(&pe, 2, &byte[0], &enable[0]);
+ 		mvpp2_prs_tcam_data_byte_get(&pe, 3, &byte[1], &enable[1]);
+@@ -1950,7 +1949,7 @@ int mvpp2_prs_vid_entry_add(struct mvpp2_port *port, u16 vid)
+ 	memset(&pe, 0, sizeof(pe));
+ 
+ 	/* Scan TCAM and see if entry with this <vid,port> already exist */
+-	tid = mvpp2_prs_vid_range_find(priv, (1 << port->id), vid, mask);
++	tid = mvpp2_prs_vid_range_find(port, vid, mask);
+ 
+ 	reg_val = mvpp2_read(priv, MVPP2_MH_REG(port->id));
+ 	if (reg_val & MVPP2_DSA_EXTENDED)
+@@ -2008,7 +2007,7 @@ void mvpp2_prs_vid_entry_remove(struct mvpp2_port *port, u16 vid)
+ 	int tid;
+ 
+ 	/* Scan TCAM and see if entry with this <vid,port> already exist */
+-	tid = mvpp2_prs_vid_range_find(priv, (1 << port->id), vid, 0xfff);
++	tid = mvpp2_prs_vid_range_find(port, vid, 0xfff);
+ 
+ 	/* No such entry */
+ 	if (tid < 0)
+@@ -2026,8 +2025,10 @@ void mvpp2_prs_vid_remove_all(struct mvpp2_port *port)
+ 
+ 	for (tid = MVPP2_PRS_VID_PORT_FIRST(port->id);
+ 	     tid <= MVPP2_PRS_VID_PORT_LAST(port->id); tid++) {
+-		if (priv->prs_shadow[tid].valid)
+-			mvpp2_prs_vid_entry_remove(port, tid);
++		if (priv->prs_shadow[tid].valid) {
++			mvpp2_prs_hw_inv(priv, tid);
++			priv->prs_shadow[tid].valid = false;
++		}
+ 	}
+ }
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+index be48c6440251..c205a80abdec 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+@@ -441,6 +441,10 @@ static int mlx5_internal_err_ret_value(struct mlx5_core_dev *dev, u16 op,
+ 	case MLX5_CMD_OP_CREATE_GENERAL_OBJECT:
+ 	case MLX5_CMD_OP_MODIFY_GENERAL_OBJECT:
+ 	case MLX5_CMD_OP_QUERY_GENERAL_OBJECT:
++	case MLX5_CMD_OP_CREATE_UCTX:
++	case MLX5_CMD_OP_DESTROY_UCTX:
++	case MLX5_CMD_OP_CREATE_UMEM:
++	case MLX5_CMD_OP_DESTROY_UMEM:
+ 	case MLX5_CMD_OP_ALLOC_MEMIC:
+ 		*status = MLX5_DRIVER_STATUS_ABORTED;
+ 		*synd = MLX5_DRIVER_SYND;
+@@ -629,6 +633,10 @@ const char *mlx5_command_str(int command)
+ 	MLX5_COMMAND_STR_CASE(ALLOC_MEMIC);
+ 	MLX5_COMMAND_STR_CASE(DEALLOC_MEMIC);
+ 	MLX5_COMMAND_STR_CASE(QUERY_HOST_PARAMS);
++	MLX5_COMMAND_STR_CASE(CREATE_UCTX);
++	MLX5_COMMAND_STR_CASE(DESTROY_UCTX);
++	MLX5_COMMAND_STR_CASE(CREATE_UMEM);
++	MLX5_COMMAND_STR_CASE(DESTROY_UMEM);
+ 	default: return "unknown command opcode";
+ 	}
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/dev.c b/drivers/net/ethernet/mellanox/mlx5/core/dev.c
+index ebc046fa97d3..f6b1da99e6c2 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/dev.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/dev.c
+@@ -248,11 +248,32 @@ void mlx5_unregister_interface(struct mlx5_interface *intf)
+ }
+ EXPORT_SYMBOL(mlx5_unregister_interface);
+ 
++/* Must be called with intf_mutex held */
++static bool mlx5_has_added_dev_by_protocol(struct mlx5_core_dev *mdev, int protocol)
++{
++	struct mlx5_device_context *dev_ctx;
++	struct mlx5_interface *intf;
++	bool found = false;
++
++	list_for_each_entry(intf, &intf_list, list) {
++		if (intf->protocol == protocol) {
++			dev_ctx = mlx5_get_device(intf, &mdev->priv);
++			if (dev_ctx && test_bit(MLX5_INTERFACE_ADDED, &dev_ctx->state))
++				found = true;
++			break;
++		}
++	}
++
++	return found;
++}
++
+ void mlx5_reload_interface(struct mlx5_core_dev *mdev, int protocol)
+ {
+ 	mutex_lock(&mlx5_intf_mutex);
+-	mlx5_remove_dev_by_protocol(mdev, protocol);
+-	mlx5_add_dev_by_protocol(mdev, protocol);
++	if (mlx5_has_added_dev_by_protocol(mdev, protocol)) {
++		mlx5_remove_dev_by_protocol(mdev, protocol);
++		mlx5_add_dev_by_protocol(mdev, protocol);
++	}
+ 	mutex_unlock(&mlx5_intf_mutex);
+ }
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
+index d3eaf2ceaa39..a80031b2cfaf 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
+@@ -1059,6 +1059,7 @@ void mlx5e_del_vxlan_port(struct net_device *netdev, struct udp_tunnel_info *ti)
+ netdev_features_t mlx5e_features_check(struct sk_buff *skb,
+ 				       struct net_device *netdev,
+ 				       netdev_features_t features);
++int mlx5e_set_features(struct net_device *netdev, netdev_features_t features);
+ #ifdef CONFIG_MLX5_ESWITCH
+ int mlx5e_set_vf_mac(struct net_device *dev, int vf, u8 *mac);
+ int mlx5e_set_vf_rate(struct net_device *dev, int vf, int min_tx_rate, int max_tx_rate);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
+index eec07b34b4ad..5efe9b5d9086 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
+@@ -11,24 +11,25 @@ static int get_route_and_out_devs(struct mlx5e_priv *priv,
+ 				  struct net_device **route_dev,
+ 				  struct net_device **out_dev)
+ {
++	struct net_device *uplink_dev, *uplink_upper, *real_dev;
+ 	struct mlx5_eswitch *esw = priv->mdev->priv.eswitch;
+-	struct net_device *uplink_dev, *uplink_upper;
+ 	bool dst_is_lag_dev;
+ 
++	real_dev = is_vlan_dev(dev) ? vlan_dev_real_dev(dev) : dev;
+ 	uplink_dev = mlx5_eswitch_uplink_get_proto_dev(esw, REP_ETH);
+ 	uplink_upper = netdev_master_upper_dev_get(uplink_dev);
+ 	dst_is_lag_dev = (uplink_upper &&
+ 			  netif_is_lag_master(uplink_upper) &&
+-			  dev == uplink_upper &&
++			  real_dev == uplink_upper &&
+ 			  mlx5_lag_is_sriov(priv->mdev));
+ 
+ 	/* if the egress device isn't on the same HW e-switch or
+ 	 * it's a LAG device, use the uplink
+ 	 */
+-	if (!netdev_port_same_parent_id(priv->netdev, dev) ||
++	if (!netdev_port_same_parent_id(priv->netdev, real_dev) ||
+ 	    dst_is_lag_dev) {
+-		*route_dev = uplink_dev;
+-		*out_dev = *route_dev;
++		*route_dev = dev;
++		*out_dev = uplink_dev;
+ 	} else {
+ 		*route_dev = dev;
+ 		if (is_vlan_dev(*route_dev))
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index 1e2688e2ed47..6a8dc73855c9 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -3698,8 +3698,7 @@ static int mlx5e_handle_feature(struct net_device *netdev,
+ 	return 0;
+ }
+ 
+-static int mlx5e_set_features(struct net_device *netdev,
+-			      netdev_features_t features)
++int mlx5e_set_features(struct net_device *netdev, netdev_features_t features)
+ {
+ 	netdev_features_t oper_features = netdev->features;
+ 	int err = 0;
+@@ -5166,6 +5165,11 @@ static void mlx5e_detach(struct mlx5_core_dev *mdev, void *vpriv)
+ 	struct mlx5e_priv *priv = vpriv;
+ 	struct net_device *netdev = priv->netdev;
+ 
++#ifdef CONFIG_MLX5_ESWITCH
++	if (MLX5_ESWITCH_MANAGER(mdev) && vpriv == mdev)
++		return;
++#endif
++
+ 	if (!netif_device_present(netdev))
+ 		return;
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+index 0b09fa91019d..fd8cede040b8 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+@@ -1350,6 +1350,7 @@ static const struct net_device_ops mlx5e_netdev_ops_uplink_rep = {
+ 	.ndo_get_vf_stats        = mlx5e_get_vf_stats,
+ 	.ndo_set_vf_vlan         = mlx5e_uplink_rep_set_vf_vlan,
+ 	.ndo_get_port_parent_id	 = mlx5e_rep_get_port_parent_id,
++	.ndo_set_features        = mlx5e_set_features,
+ };
+ 
+ bool mlx5e_eswitch_rep(struct net_device *netdev)
+@@ -1423,10 +1424,9 @@ static void mlx5e_build_rep_netdev(struct net_device *netdev)
+ 
+ 	netdev->watchdog_timeo    = 15 * HZ;
+ 
++	netdev->features       |= NETIF_F_NETNS_LOCAL;
+ 
+-	netdev->features	 |= NETIF_F_HW_TC | NETIF_F_NETNS_LOCAL;
+-	netdev->hw_features      |= NETIF_F_HW_TC;
+-
++	netdev->hw_features    |= NETIF_F_HW_TC;
+ 	netdev->hw_features    |= NETIF_F_SG;
+ 	netdev->hw_features    |= NETIF_F_IP_CSUM;
+ 	netdev->hw_features    |= NETIF_F_IPV6_CSUM;
+@@ -1435,7 +1435,9 @@ static void mlx5e_build_rep_netdev(struct net_device *netdev)
+ 	netdev->hw_features    |= NETIF_F_TSO6;
+ 	netdev->hw_features    |= NETIF_F_RXCSUM;
+ 
+-	if (rep->vport != MLX5_VPORT_UPLINK)
++	if (rep->vport == MLX5_VPORT_UPLINK)
++		netdev->hw_features |= NETIF_F_HW_VLAN_CTAG_RX;
++	else
+ 		netdev->features |= NETIF_F_VLAN_CHALLENGED;
+ 
+ 	netdev->features |= netdev->hw_features;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+index 4cb23631616b..a43ddfc0ff0b 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+@@ -2572,9 +2572,6 @@ static int parse_tc_fdb_actions(struct mlx5e_priv *priv,
+ 	if (!flow_action_has_entries(flow_action))
+ 		return -EINVAL;
+ 
+-	attr->in_rep = rpriv->rep;
+-	attr->in_mdev = priv->mdev;
+-
+ 	flow_action_for_each(i, act, flow_action) {
+ 		switch (act->id) {
+ 		case FLOW_ACTION_DROP:
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
+index 6b8aa3761899..f4acb38569e1 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
+@@ -3110,6 +3110,10 @@ mlxsw_sp_port_set_link_ksettings(struct net_device *dev,
+ 	ops->reg_ptys_eth_unpack(mlxsw_sp, ptys_pl, &eth_proto_cap, NULL, NULL);
+ 
+ 	autoneg = cmd->base.autoneg == AUTONEG_ENABLE;
++	if (!autoneg && cmd->base.speed == SPEED_56000) {
++		netdev_err(dev, "56G not supported with autoneg off\n");
++		return -EINVAL;
++	}
+ 	eth_proto_new = autoneg ?
+ 		ops->to_ptys_advert_link(mlxsw_sp, cmd) :
+ 		ops->to_ptys_speed(mlxsw_sp, cmd->base.speed);
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
+index d633bef5f105..77fe3ed38d1b 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
+@@ -411,9 +411,9 @@ static const struct mlxsw_sp_sb_pr mlxsw_sp1_sb_prs[] = {
+ 	MLXSW_SP_SB_PR(MLXSW_REG_SBPR_MODE_STATIC, MLXSW_SP_SB_INFI),
+ };
+ 
+-#define MLXSW_SP2_SB_PR_INGRESS_SIZE	40960000
++#define MLXSW_SP2_SB_PR_INGRESS_SIZE	38128752
++#define MLXSW_SP2_SB_PR_EGRESS_SIZE	38128752
+ #define MLXSW_SP2_SB_PR_INGRESS_MNG_SIZE (200 * 1000)
+-#define MLXSW_SP2_SB_PR_EGRESS_SIZE	40960000
+ 
+ static const struct mlxsw_sp_sb_pr mlxsw_sp2_sb_prs[] = {
+ 	/* Ingress pools. */
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c
+index 15f804453cd6..96b23c856f4d 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c
+@@ -247,8 +247,8 @@ static int mlxsw_sp_flower_parse_ip(struct mlxsw_sp *mlxsw_sp,
+ 				       match.mask->tos & 0x3);
+ 
+ 	mlxsw_sp_acl_rulei_keymask_u32(rulei, MLXSW_AFK_ELEMENT_IP_DSCP,
+-				       match.key->tos >> 6,
+-				       match.mask->tos >> 6);
++				       match.key->tos >> 2,
++				       match.mask->tos >> 2);
+ 
+ 	return 0;
+ }
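
The hunk above fixes the DSCP extraction in mlxsw's flower parser: in the IPv4 tos octet the low two bits are ECN and the upper six are DSCP, so DSCP is tos >> 2 rather than tos >> 6. A minimal standalone C sketch of that split (the sample value is made up for illustration, not taken from the driver):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint8_t tos  = 0xb8;       /* DSCP 46 (EF), ECN 0 */
	uint8_t ecn  = tos & 0x3;  /* low two bits        */
	uint8_t dscp = tos >> 2;   /* upper six bits      */

	printf("tos=0x%02x dscp=%u ecn=%u\n", tos, dscp, ecn);
	/* tos >> 6 would keep only the top two bits (the old bug) */
	return 0;
}
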
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
+index 902e766a8ed3..18d29b8f763f 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
+@@ -2363,7 +2363,7 @@ static void mlxsw_sp_router_probe_unresolved_nexthops(struct work_struct *work)
+ static void
+ mlxsw_sp_nexthop_neigh_update(struct mlxsw_sp *mlxsw_sp,
+ 			      struct mlxsw_sp_neigh_entry *neigh_entry,
+-			      bool removing);
++			      bool removing, bool dead);
+ 
+ static enum mlxsw_reg_rauht_op mlxsw_sp_rauht_op(bool adding)
+ {
+@@ -2494,7 +2494,8 @@ static void mlxsw_sp_router_neigh_event_work(struct work_struct *work)
+ 
+ 	memcpy(neigh_entry->ha, ha, ETH_ALEN);
+ 	mlxsw_sp_neigh_entry_update(mlxsw_sp, neigh_entry, entry_connected);
+-	mlxsw_sp_nexthop_neigh_update(mlxsw_sp, neigh_entry, !entry_connected);
++	mlxsw_sp_nexthop_neigh_update(mlxsw_sp, neigh_entry, !entry_connected,
++				      dead);
+ 
+ 	if (!neigh_entry->connected && list_empty(&neigh_entry->nexthop_list))
+ 		mlxsw_sp_neigh_entry_destroy(mlxsw_sp, neigh_entry);
+@@ -3458,13 +3459,79 @@ static void __mlxsw_sp_nexthop_neigh_update(struct mlxsw_sp_nexthop *nh,
+ 	nh->update = 1;
+ }
+ 
++static int
++mlxsw_sp_nexthop_dead_neigh_replace(struct mlxsw_sp *mlxsw_sp,
++				    struct mlxsw_sp_neigh_entry *neigh_entry)
++{
++	struct neighbour *n, *old_n = neigh_entry->key.n;
++	struct mlxsw_sp_nexthop *nh;
++	bool entry_connected;
++	u8 nud_state, dead;
++	int err;
++
++	nh = list_first_entry(&neigh_entry->nexthop_list,
++			      struct mlxsw_sp_nexthop, neigh_list_node);
++
++	n = neigh_lookup(nh->nh_grp->neigh_tbl, &nh->gw_addr, nh->rif->dev);
++	if (!n) {
++		n = neigh_create(nh->nh_grp->neigh_tbl, &nh->gw_addr,
++				 nh->rif->dev);
++		if (IS_ERR(n))
++			return PTR_ERR(n);
++		neigh_event_send(n, NULL);
++	}
++
++	mlxsw_sp_neigh_entry_remove(mlxsw_sp, neigh_entry);
++	neigh_entry->key.n = n;
++	err = mlxsw_sp_neigh_entry_insert(mlxsw_sp, neigh_entry);
++	if (err)
++		goto err_neigh_entry_insert;
++
++	read_lock_bh(&n->lock);
++	nud_state = n->nud_state;
++	dead = n->dead;
++	read_unlock_bh(&n->lock);
++	entry_connected = nud_state & NUD_VALID && !dead;
++
++	list_for_each_entry(nh, &neigh_entry->nexthop_list,
++			    neigh_list_node) {
++		neigh_release(old_n);
++		neigh_clone(n);
++		__mlxsw_sp_nexthop_neigh_update(nh, !entry_connected);
++		mlxsw_sp_nexthop_group_refresh(mlxsw_sp, nh->nh_grp);
++	}
++
++	neigh_release(n);
++
++	return 0;
++
++err_neigh_entry_insert:
++	neigh_entry->key.n = old_n;
++	mlxsw_sp_neigh_entry_insert(mlxsw_sp, neigh_entry);
++	neigh_release(n);
++	return err;
++}
++
+ static void
+ mlxsw_sp_nexthop_neigh_update(struct mlxsw_sp *mlxsw_sp,
+ 			      struct mlxsw_sp_neigh_entry *neigh_entry,
+-			      bool removing)
++			      bool removing, bool dead)
+ {
+ 	struct mlxsw_sp_nexthop *nh;
+ 
++	if (list_empty(&neigh_entry->nexthop_list))
++		return;
++
++	if (dead) {
++		int err;
++
++		err = mlxsw_sp_nexthop_dead_neigh_replace(mlxsw_sp,
++							  neigh_entry);
++		if (err)
++			dev_err(mlxsw_sp->bus_info->dev, "Failed to replace dead neigh\n");
++		return;
++	}
++
+ 	list_for_each_entry(nh, &neigh_entry->nexthop_list,
+ 			    neigh_list_node) {
+ 		__mlxsw_sp_nexthop_neigh_update(nh, removing);
+diff --git a/drivers/net/ethernet/renesas/sh_eth.c b/drivers/net/ethernet/renesas/sh_eth.c
+index e33af371b169..48967dd27bbf 100644
+--- a/drivers/net/ethernet/renesas/sh_eth.c
++++ b/drivers/net/ethernet/renesas/sh_eth.c
+@@ -1594,6 +1594,10 @@ static void sh_eth_dev_exit(struct net_device *ndev)
+ 	sh_eth_get_stats(ndev);
+ 	mdp->cd->soft_reset(ndev);
+ 
++	/* Set the RMII mode again if required */
++	if (mdp->cd->rmiimode)
++		sh_eth_write(ndev, 0x1, RMIIMODE);
++
+ 	/* Set MAC address again */
+ 	update_mac_address(ndev);
+ }
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-mediatek.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-mediatek.c
+index bf2562995fc8..126b66bb73a6 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-mediatek.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-mediatek.c
+@@ -346,8 +346,6 @@ static int mediatek_dwmac_probe(struct platform_device *pdev)
+ 		return PTR_ERR(plat_dat);
+ 
+ 	plat_dat->interface = priv_plat->phy_mode;
+-	/* clk_csr_i = 250-300MHz & MDC = clk_csr_i/124 */
+-	plat_dat->clk_csr = 5;
+ 	plat_dat->has_gmac4 = 1;
+ 	plat_dat->has_gmac = 0;
+ 	plat_dat->pmt = 0;
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index 3c409862c52e..635d88d82610 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -3338,6 +3338,7 @@ static inline void stmmac_rx_refill(struct stmmac_priv *priv, u32 queue)
+ 		entry = STMMAC_GET_ENTRY(entry, DMA_RX_SIZE);
+ 	}
+ 	rx_q->dirty_rx = entry;
++	stmmac_set_rx_tail_ptr(priv, priv->ioaddr, rx_q->rx_tail_addr, queue);
+ }
+ 
+ /**
+@@ -4379,10 +4380,10 @@ int stmmac_dvr_probe(struct device *device,
+ 	 * set the MDC clock dynamically according to the csr actual
+ 	 * clock input.
+ 	 */
+-	if (!priv->plat->clk_csr)
+-		stmmac_clk_csr_set(priv);
+-	else
++	if (priv->plat->clk_csr >= 0)
+ 		priv->clk_csr = priv->plat->clk_csr;
++	else
++		stmmac_clk_csr_set(priv);
+ 
+ 	stmmac_check_pcs_mode(priv);
+ 
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
+index 3031f2bf15d6..f45bfbef97d0 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
+@@ -408,7 +408,10 @@ stmmac_probe_config_dt(struct platform_device *pdev, const char **mac)
+ 	/* Default to phy auto-detection */
+ 	plat->phy_addr = -1;
+ 
+-	/* Get clk_csr from device tree */
++	/* Default to get clk_csr from stmmac_clk_csr_set(),
++	 * or get clk_csr from the device tree.
++	 */
++	plat->clk_csr = -1;
+ 	of_property_read_u32(np, "clk_csr", &plat->clk_csr);
+ 
+ 	/* "snps,phy-addr" is not a standard property. Mark it as deprecated
+diff --git a/drivers/net/geneve.c b/drivers/net/geneve.c
+index 5583d993480d..ffe421944429 100644
+--- a/drivers/net/geneve.c
++++ b/drivers/net/geneve.c
+@@ -396,7 +396,7 @@ static int geneve_udp_encap_err_lookup(struct sock *sk, struct sk_buff *skb)
+ 	u8 zero_vni[3] = { 0 };
+ 	u8 *vni = zero_vni;
+ 
+-	if (skb->len < GENEVE_BASE_HLEN)
++	if (!pskb_may_pull(skb, skb_transport_offset(skb) + GENEVE_BASE_HLEN))
+ 		return -EINVAL;
+ 
+ 	geneveh = geneve_hdr(skb);
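
This geneve hunk, like the matching vxlan one further down, replaces a bare skb->len comparison with pskb_may_pull(): a large enough total packet length does not guarantee the tunnel header is present in the linear data, nor that it is measured from the transport offset. A standalone sketch of the availability test at the heart of that check, using a toy buffer struct in place of sk_buff (all names here are illustrative, not kernel API):

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct toybuf {
	size_t total_len;     /* whole packet, including paged data */
	size_t linear_len;    /* bytes directly addressable         */
	size_t transport_off; /* where the tunnel header starts     */
};

/* The header is only safe to touch if the bytes up to
 * transport_off + hlen are in the linear area. */
static bool may_pull(const struct toybuf *b, size_t hlen)
{
	return b->transport_off + hlen <= b->linear_len;
}

int main(void)
{
	struct toybuf b = { .total_len = 1500, .linear_len = 64,
			    .transport_off = 42 };

	/* the length check passes, but the header is not linear: */
	printf("len check: %d, may_pull: %d\n",
	       b.total_len >= 8, may_pull(&b, 30));
	return 0;
}
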
+diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c
+index b20fb0fb595b..e7d8884b1a10 100644
+--- a/drivers/net/hyperv/netvsc_drv.c
++++ b/drivers/net/hyperv/netvsc_drv.c
+@@ -2414,7 +2414,7 @@ static struct  hv_driver netvsc_drv = {
+ 	.probe = netvsc_probe,
+ 	.remove = netvsc_remove,
+ 	.driver = {
+-		.probe_type = PROBE_PREFER_ASYNCHRONOUS,
++		.probe_type = PROBE_FORCE_SYNCHRONOUS,
+ 	},
+ };
+ 
+diff --git a/drivers/net/phy/dp83867.c b/drivers/net/phy/dp83867.c
+index 8448d01819ef..2995a1788ceb 100644
+--- a/drivers/net/phy/dp83867.c
++++ b/drivers/net/phy/dp83867.c
+@@ -26,10 +26,18 @@
+ 
+ /* Extended Registers */
+ #define DP83867_CFG4            0x0031
++#define DP83867_CFG4_SGMII_ANEG_MASK (BIT(5) | BIT(6))
++#define DP83867_CFG4_SGMII_ANEG_TIMER_11MS   (3 << 5)
++#define DP83867_CFG4_SGMII_ANEG_TIMER_800US  (2 << 5)
++#define DP83867_CFG4_SGMII_ANEG_TIMER_2US    (1 << 5)
++#define DP83867_CFG4_SGMII_ANEG_TIMER_16MS   (0 << 5)
++
+ #define DP83867_RGMIICTL	0x0032
+ #define DP83867_STRAP_STS1	0x006E
+ #define DP83867_RGMIIDCTL	0x0086
+ #define DP83867_IO_MUX_CFG	0x0170
++#define DP83867_10M_SGMII_CFG   0x016F
++#define DP83867_10M_SGMII_RATE_ADAPT_MASK BIT(7)
+ 
+ #define DP83867_SW_RESET	BIT(15)
+ #define DP83867_SW_RESTART	BIT(14)
+@@ -247,10 +255,8 @@ static int dp83867_config_init(struct phy_device *phydev)
+ 		ret = phy_write(phydev, MII_DP83867_PHYCTRL, val);
+ 		if (ret)
+ 			return ret;
+-	}
+ 
+-	if ((phydev->interface >= PHY_INTERFACE_MODE_RGMII_ID) &&
+-	    (phydev->interface <= PHY_INTERFACE_MODE_RGMII_RXID)) {
++		/* Set up RGMII delays */
+ 		val = phy_read_mmd(phydev, DP83867_DEVADDR, DP83867_RGMIICTL);
+ 
+ 		if (phydev->interface == PHY_INTERFACE_MODE_RGMII_ID)
+@@ -277,6 +283,33 @@ static int dp83867_config_init(struct phy_device *phydev)
+ 				       DP83867_IO_MUX_CFG_IO_IMPEDANCE_CTRL);
+ 	}
+ 
++	if (phydev->interface == PHY_INTERFACE_MODE_SGMII) {
++		/* To support SPEED_10 in SGMII mode, the
++		 * DP83867_10M_SGMII_RATE_ADAPT bit has to
++		 * be cleared by software. That does not
++		 * affect SPEED_100 and
++		 * SPEED_1000.
++		 */
++		ret = phy_modify_mmd(phydev, DP83867_DEVADDR,
++				     DP83867_10M_SGMII_CFG,
++				     DP83867_10M_SGMII_RATE_ADAPT_MASK,
++				     0);
++		if (ret)
++			return ret;
++
++		/* After reset, the SGMII autoneg timer is set to 2us (bits 6
++		 * and 5 are 01). That is not enough to finalize autoneg on
++		 * some devices. Increase the timer to its maximum, 16ms.
++		 */
++		ret = phy_modify_mmd(phydev, DP83867_DEVADDR,
++				     DP83867_CFG4,
++				     DP83867_CFG4_SGMII_ANEG_MASK,
++				     DP83867_CFG4_SGMII_ANEG_TIMER_16MS);
++
++		if (ret)
++			return ret;
++	}
++
+ 	/* Enable Interrupt output INT_OE in CFG3 register */
+ 	if (phy_interrupt_is_valid(phydev)) {
+ 		val = phy_read(phydev, DP83867_CFG3);
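
The SGMII additions above are masked read-modify-write updates of extended PHY registers through phy_modify_mmd(). A standalone sketch of that update pattern on a plain variable standing in for the register (the field values mirror the defines in the hunk; the modify() helper itself is illustrative):

#include <stdint.h>
#include <stdio.h>

#define CFG4_SGMII_ANEG_MASK        (3u << 5)  /* bits 6:5 */
#define CFG4_SGMII_ANEG_TIMER_16MS  (0u << 5)
#define CFG4_SGMII_ANEG_TIMER_2US   (1u << 5)

/* modify(reg, mask, set): clear the masked field, then set bits. */
static uint16_t modify(uint16_t reg, uint16_t mask, uint16_t set)
{
	return (uint16_t)((reg & ~mask) | (set & mask));
}

int main(void)
{
	/* reset value of the timer field, plus unrelated bits */
	uint16_t cfg4 = CFG4_SGMII_ANEG_TIMER_2US | 0x000f;

	cfg4 = modify(cfg4, CFG4_SGMII_ANEG_MASK, CFG4_SGMII_ANEG_TIMER_16MS);
	printf("cfg4=0x%04x (timer field now %u)\n", cfg4, (cfg4 >> 5) & 3);
	return 0;
}
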
+diff --git a/drivers/net/phy/phylink.c b/drivers/net/phy/phylink.c
+index 89750c7dfd6f..efa31fcda505 100644
+--- a/drivers/net/phy/phylink.c
++++ b/drivers/net/phy/phylink.c
+@@ -51,6 +51,10 @@ struct phylink {
+ 
+ 	/* The link configuration settings */
+ 	struct phylink_link_state link_config;
++
++	/* The current settings */
++	phy_interface_t cur_interface;
++
+ 	struct gpio_desc *link_gpio;
+ 	struct timer_list link_poll;
+ 	void (*get_fixed_state)(struct net_device *dev,
+@@ -453,12 +457,12 @@ static void phylink_resolve(struct work_struct *w)
+ 		if (!link_state.link) {
+ 			netif_carrier_off(ndev);
+ 			pl->ops->mac_link_down(ndev, pl->link_an_mode,
+-					       pl->phy_state.interface);
++					       pl->cur_interface);
+ 			netdev_info(ndev, "Link is Down\n");
+ 		} else {
++			pl->cur_interface = link_state.interface;
+ 			pl->ops->mac_link_up(ndev, pl->link_an_mode,
+-					     pl->phy_state.interface,
+-					     pl->phydev);
++					     pl->cur_interface, pl->phydev);
+ 
+ 			netif_carrier_on(ndev);
+ 
+diff --git a/drivers/net/vxlan.c b/drivers/net/vxlan.c
+index d76dfed8d9bb..38ecb66fb3e9 100644
+--- a/drivers/net/vxlan.c
++++ b/drivers/net/vxlan.c
+@@ -1765,7 +1765,7 @@ static int vxlan_err_lookup(struct sock *sk, struct sk_buff *skb)
+ 	struct vxlanhdr *hdr;
+ 	__be32 vni;
+ 
+-	if (skb->len < VXLAN_HLEN)
++	if (!pskb_may_pull(skb, skb_transport_offset(skb) + VXLAN_HLEN))
+ 		return -EINVAL;
+ 
+ 	hdr = vxlan_hdr(skb);
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index aae5374d2b93..08a2501b9357 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -111,6 +111,7 @@ struct nvme_tcp_ctrl {
+ 	struct work_struct	err_work;
+ 	struct delayed_work	connect_work;
+ 	struct nvme_tcp_request async_req;
++	u32			io_queues[HCTX_MAX_TYPES];
+ };
+ 
+ static LIST_HEAD(nvme_tcp_ctrl_list);
+@@ -473,7 +474,6 @@ static int nvme_tcp_handle_c2h_data(struct nvme_tcp_queue *queue,
+ 	}
+ 
+ 	return 0;
+-
+ }
+ 
+ static int nvme_tcp_handle_comp(struct nvme_tcp_queue *queue,
+@@ -634,7 +634,6 @@ static inline void nvme_tcp_end_request(struct request *rq, u16 status)
+ 	nvme_end_request(rq, cpu_to_le16(status << 1), res);
+ }
+ 
+-
+ static int nvme_tcp_recv_data(struct nvme_tcp_queue *queue, struct sk_buff *skb,
+ 			      unsigned int *offset, size_t *len)
+ {
+@@ -1425,7 +1424,8 @@ static int nvme_tcp_start_queue(struct nvme_ctrl *nctrl, int idx)
+ 	if (!ret) {
+ 		set_bit(NVME_TCP_Q_LIVE, &ctrl->queues[idx].flags);
+ 	} else {
+-		__nvme_tcp_stop_queue(&ctrl->queues[idx]);
++		if (test_bit(NVME_TCP_Q_ALLOCATED, &ctrl->queues[idx].flags))
++			__nvme_tcp_stop_queue(&ctrl->queues[idx]);
+ 		dev_err(nctrl->device,
+ 			"failed to connect queue: %d ret=%d\n", idx, ret);
+ 	}
+@@ -1535,7 +1535,7 @@ out_free_queue:
+ 	return ret;
+ }
+ 
+-static int nvme_tcp_alloc_io_queues(struct nvme_ctrl *ctrl)
++static int __nvme_tcp_alloc_io_queues(struct nvme_ctrl *ctrl)
+ {
+ 	int i, ret;
+ 
+@@ -1565,7 +1565,36 @@ static unsigned int nvme_tcp_nr_io_queues(struct nvme_ctrl *ctrl)
+ 	return nr_io_queues;
+ }
+ 
+-static int nvme_alloc_io_queues(struct nvme_ctrl *ctrl)
++static void nvme_tcp_set_io_queues(struct nvme_ctrl *nctrl,
++		unsigned int nr_io_queues)
++{
++	struct nvme_tcp_ctrl *ctrl = to_tcp_ctrl(nctrl);
++	struct nvmf_ctrl_options *opts = nctrl->opts;
++
++	if (opts->nr_write_queues && opts->nr_io_queues < nr_io_queues) {
++		/*
++		 * separate read/write queues
++		 * hand out dedicated default queues only after we have
++		 * sufficient read queues.
++		 */
++		ctrl->io_queues[HCTX_TYPE_READ] = opts->nr_io_queues;
++		nr_io_queues -= ctrl->io_queues[HCTX_TYPE_READ];
++		ctrl->io_queues[HCTX_TYPE_DEFAULT] =
++			min(opts->nr_write_queues, nr_io_queues);
++		nr_io_queues -= ctrl->io_queues[HCTX_TYPE_DEFAULT];
++	} else {
++		/*
++		 * shared read/write queues
++		 * either no write queues were requested, or we don't have
++		 * sufficient queue count to have dedicated default queues.
++		 */
++		ctrl->io_queues[HCTX_TYPE_DEFAULT] =
++			min(opts->nr_io_queues, nr_io_queues);
++		nr_io_queues -= ctrl->io_queues[HCTX_TYPE_DEFAULT];
++	}
++}
++
++static int nvme_tcp_alloc_io_queues(struct nvme_ctrl *ctrl)
+ {
+ 	unsigned int nr_io_queues;
+ 	int ret;
+@@ -1582,7 +1611,9 @@ static int nvme_alloc_io_queues(struct nvme_ctrl *ctrl)
+ 	dev_info(ctrl->device,
+ 		"creating %d I/O queues.\n", nr_io_queues);
+ 
+-	return nvme_tcp_alloc_io_queues(ctrl);
++	nvme_tcp_set_io_queues(ctrl, nr_io_queues);
++
++	return __nvme_tcp_alloc_io_queues(ctrl);
+ }
+ 
+ static void nvme_tcp_destroy_io_queues(struct nvme_ctrl *ctrl, bool remove)
+@@ -1599,7 +1630,7 @@ static int nvme_tcp_configure_io_queues(struct nvme_ctrl *ctrl, bool new)
+ {
+ 	int ret;
+ 
+-	ret = nvme_alloc_io_queues(ctrl);
++	ret = nvme_tcp_alloc_io_queues(ctrl);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -2090,23 +2121,34 @@ static blk_status_t nvme_tcp_queue_rq(struct blk_mq_hw_ctx *hctx,
+ static int nvme_tcp_map_queues(struct blk_mq_tag_set *set)
+ {
+ 	struct nvme_tcp_ctrl *ctrl = set->driver_data;
++	struct nvmf_ctrl_options *opts = ctrl->ctrl.opts;
+ 
+-	set->map[HCTX_TYPE_DEFAULT].queue_offset = 0;
+-	set->map[HCTX_TYPE_READ].nr_queues = ctrl->ctrl.opts->nr_io_queues;
+-	if (ctrl->ctrl.opts->nr_write_queues) {
++	if (opts->nr_write_queues && ctrl->io_queues[HCTX_TYPE_READ]) {
+ 		/* separate read/write queues */
+ 		set->map[HCTX_TYPE_DEFAULT].nr_queues =
+-				ctrl->ctrl.opts->nr_write_queues;
++			ctrl->io_queues[HCTX_TYPE_DEFAULT];
++		set->map[HCTX_TYPE_DEFAULT].queue_offset = 0;
++		set->map[HCTX_TYPE_READ].nr_queues =
++			ctrl->io_queues[HCTX_TYPE_READ];
+ 		set->map[HCTX_TYPE_READ].queue_offset =
+-				ctrl->ctrl.opts->nr_write_queues;
++			ctrl->io_queues[HCTX_TYPE_DEFAULT];
+ 	} else {
+-		/* mixed read/write queues */
++		/* shared read/write queues */
+ 		set->map[HCTX_TYPE_DEFAULT].nr_queues =
+-				ctrl->ctrl.opts->nr_io_queues;
++			ctrl->io_queues[HCTX_TYPE_DEFAULT];
++		set->map[HCTX_TYPE_DEFAULT].queue_offset = 0;
++		set->map[HCTX_TYPE_READ].nr_queues =
++			ctrl->io_queues[HCTX_TYPE_DEFAULT];
+ 		set->map[HCTX_TYPE_READ].queue_offset = 0;
+ 	}
+ 	blk_mq_map_queues(&set->map[HCTX_TYPE_DEFAULT]);
+ 	blk_mq_map_queues(&set->map[HCTX_TYPE_READ]);
++
++	dev_info(ctrl->ctrl.device,
++		"mapped %d/%d default/read queues.\n",
++		ctrl->io_queues[HCTX_TYPE_DEFAULT],
++		ctrl->io_queues[HCTX_TYPE_READ]);
++
+ 	return 0;
+ }
+ 
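
nvme_tcp_set_io_queues() above splits the granted queue count into dedicated default (write) and read queues when nr_write_queues is set, and shares them otherwise; nvme_tcp_map_queues() then derives nr_queues/queue_offset from the same io_queues[] values so the two stay consistent. A standalone sketch of the split arithmetic (toy types, same logic as the hunk):

#include <stdio.h>

static unsigned min_u(unsigned a, unsigned b) { return a < b ? a : b; }

static void set_io_queues(unsigned nr_granted, unsigned opt_io,
			  unsigned opt_write, unsigned out[2])
{
	unsigned def = 0, rd = 0;

	if (opt_write && opt_io < nr_granted) {
		/* dedicated read queues first, then default/write */
		rd = opt_io;
		nr_granted -= rd;
		def = min_u(opt_write, nr_granted);
	} else {
		/* shared: everything lands on the default set */
		def = min_u(opt_io, nr_granted);
	}
	out[0] = def; /* HCTX_TYPE_DEFAULT */
	out[1] = rd;  /* HCTX_TYPE_READ    */
}

int main(void)
{
	unsigned q[2];

	set_io_queues(8, 4, 4, q); /* separate: 4 default + 4 read */
	printf("default=%u read=%u\n", q[0], q[1]);
	set_io_queues(8, 8, 0, q); /* shared: reads map onto the 8 defaults */
	printf("default=%u read=%u\n", q[0], q[1]);
	return 0;
}
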
+diff --git a/drivers/pci/pci-acpi.c b/drivers/pci/pci-acpi.c
+index e1949f7efd9c..bf32fde328c2 100644
+--- a/drivers/pci/pci-acpi.c
++++ b/drivers/pci/pci-acpi.c
+@@ -666,7 +666,8 @@ static bool acpi_pci_need_resume(struct pci_dev *dev)
+ 	if (!adev || !acpi_device_power_manageable(adev))
+ 		return false;
+ 
+-	if (device_may_wakeup(&dev->dev) != !!adev->wakeup.prepare_count)
++	if (adev->wakeup.flags.valid &&
++	    device_may_wakeup(&dev->dev) != !!adev->wakeup.prepare_count)
+ 		return true;
+ 
+ 	if (acpi_target_system_state() == ACPI_STATE_S0)
+diff --git a/drivers/pinctrl/intel/pinctrl-intel.c b/drivers/pinctrl/intel/pinctrl-intel.c
+index 70638b74f9d6..95d224404c7c 100644
+--- a/drivers/pinctrl/intel/pinctrl-intel.c
++++ b/drivers/pinctrl/intel/pinctrl-intel.c
+@@ -913,35 +913,6 @@ static void intel_gpio_irq_ack(struct irq_data *d)
+ 	}
+ }
+ 
+-static void intel_gpio_irq_enable(struct irq_data *d)
+-{
+-	struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
+-	struct intel_pinctrl *pctrl = gpiochip_get_data(gc);
+-	const struct intel_community *community;
+-	const struct intel_padgroup *padgrp;
+-	int pin;
+-
+-	pin = intel_gpio_to_pin(pctrl, irqd_to_hwirq(d), &community, &padgrp);
+-	if (pin >= 0) {
+-		unsigned int gpp, gpp_offset, is_offset;
+-		unsigned long flags;
+-		u32 value;
+-
+-		gpp = padgrp->reg_num;
+-		gpp_offset = padgroup_offset(padgrp, pin);
+-		is_offset = community->is_offset + gpp * 4;
+-
+-		raw_spin_lock_irqsave(&pctrl->lock, flags);
+-		/* Clear interrupt status first to avoid unexpected interrupt */
+-		writel(BIT(gpp_offset), community->regs + is_offset);
+-
+-		value = readl(community->regs + community->ie_offset + gpp * 4);
+-		value |= BIT(gpp_offset);
+-		writel(value, community->regs + community->ie_offset + gpp * 4);
+-		raw_spin_unlock_irqrestore(&pctrl->lock, flags);
+-	}
+-}
+-
+ static void intel_gpio_irq_mask_unmask(struct irq_data *d, bool mask)
+ {
+ 	struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
+@@ -954,15 +925,20 @@ static void intel_gpio_irq_mask_unmask(struct irq_data *d, bool mask)
+ 	if (pin >= 0) {
+ 		unsigned int gpp, gpp_offset;
+ 		unsigned long flags;
+-		void __iomem *reg;
++		void __iomem *reg, *is;
+ 		u32 value;
+ 
+ 		gpp = padgrp->reg_num;
+ 		gpp_offset = padgroup_offset(padgrp, pin);
+ 
+ 		reg = community->regs + community->ie_offset + gpp * 4;
++		is = community->regs + community->is_offset + gpp * 4;
+ 
+ 		raw_spin_lock_irqsave(&pctrl->lock, flags);
++
++		/* Clear interrupt status first to avoid unexpected interrupt */
++		writel(BIT(gpp_offset), is);
++
+ 		value = readl(reg);
+ 		if (mask)
+ 			value &= ~BIT(gpp_offset);
+@@ -1106,7 +1082,6 @@ static irqreturn_t intel_gpio_irq(int irq, void *data)
+ 
+ static struct irq_chip intel_gpio_irqchip = {
+ 	.name = "intel-gpio",
+-	.irq_enable = intel_gpio_irq_enable,
+ 	.irq_ack = intel_gpio_irq_ack,
+ 	.irq_mask = intel_gpio_irq_mask,
+ 	.irq_unmask = intel_gpio_irq_unmask,
+diff --git a/drivers/s390/crypto/ap_bus.c b/drivers/s390/crypto/ap_bus.c
+index 1546389d71db..6717536a633c 100644
+--- a/drivers/s390/crypto/ap_bus.c
++++ b/drivers/s390/crypto/ap_bus.c
+@@ -254,19 +254,37 @@ static inline int ap_test_config_card_id(unsigned int id)
+ }
+ 
+ /*
+- * ap_test_config_domain(): Test, whether an AP usage domain is configured.
++ * ap_test_config_usage_domain(): Test, whether an AP usage domain
++ * is configured.
+  * @domain AP usage domain ID
+  *
+  * Returns 0 if the usage domain is not configured
+  *	   1 if the usage domain is configured or
+  *	     if the configuration information is not available
+  */
+-static inline int ap_test_config_domain(unsigned int domain)
++int ap_test_config_usage_domain(unsigned int domain)
+ {
+ 	if (!ap_configuration)	/* QCI not supported */
+ 		return domain < 16;
+ 	return ap_test_config(ap_configuration->aqm, domain);
+ }
++EXPORT_SYMBOL(ap_test_config_usage_domain);
++
++/*
++ * ap_test_config_ctrl_domain(): Test, whether an AP control domain
++ * is configured.
++ * @domain AP control domain ID
++ *
++ * Returns 1 if the control domain is configured
++ *	   0 in all other cases
++ */
++int ap_test_config_ctrl_domain(unsigned int domain)
++{
++	if (!ap_configuration)	/* QCI not supported */
++		return 0;
++	return ap_test_config(ap_configuration->adm, domain);
++}
++EXPORT_SYMBOL(ap_test_config_ctrl_domain);
+ 
+ /**
+  * ap_query_queue(): Check if an AP queue is available.
+@@ -1267,7 +1285,7 @@ static void ap_select_domain(void)
+ 	best_domain = -1;
+ 	max_count = 0;
+ 	for (i = 0; i < AP_DOMAINS; i++) {
+-		if (!ap_test_config_domain(i) ||
++		if (!ap_test_config_usage_domain(i) ||
+ 		    !test_bit_inv(i, ap_perms.aqm))
+ 			continue;
+ 		count = 0;
+@@ -1442,7 +1460,7 @@ static void _ap_scan_bus_adapter(int id)
+ 				      (void *)(long) qid,
+ 				      __match_queue_device_with_qid);
+ 		aq = dev ? to_ap_queue(dev) : NULL;
+-		if (!ap_test_config_domain(dom)) {
++		if (!ap_test_config_usage_domain(dom)) {
+ 			if (dev) {
+ 				/* Queue device exists but has been
+ 				 * removed from configuration.
+diff --git a/drivers/s390/crypto/ap_bus.h b/drivers/s390/crypto/ap_bus.h
+index 15a98a673c5c..6f3cf37776ca 100644
+--- a/drivers/s390/crypto/ap_bus.h
++++ b/drivers/s390/crypto/ap_bus.h
+@@ -251,6 +251,9 @@ void ap_wait(enum ap_wait wait);
+ void ap_request_timeout(struct timer_list *t);
+ void ap_bus_force_rescan(void);
+ 
++int ap_test_config_usage_domain(unsigned int domain);
++int ap_test_config_ctrl_domain(unsigned int domain);
++
+ void ap_queue_init_reply(struct ap_queue *aq, struct ap_message *ap_msg);
+ struct ap_queue *ap_queue_create(ap_qid_t qid, int device_type);
+ void ap_queue_prepare_remove(struct ap_queue *aq);
+diff --git a/drivers/s390/crypto/zcrypt_api.c b/drivers/s390/crypto/zcrypt_api.c
+index c31b2d31cd83..03b1853464db 100644
+--- a/drivers/s390/crypto/zcrypt_api.c
++++ b/drivers/s390/crypto/zcrypt_api.c
+@@ -822,7 +822,7 @@ static long _zcrypt_send_cprb(struct ap_perms *perms,
+ 	struct ap_message ap_msg;
+ 	unsigned int weight, pref_weight;
+ 	unsigned int func_code;
+-	unsigned short *domain;
++	unsigned short *domain, tdom;
+ 	int qid = 0, rc = -ENODEV;
+ 	struct module *mod;
+ 
+@@ -834,6 +834,17 @@ static long _zcrypt_send_cprb(struct ap_perms *perms,
+ 	if (rc)
+ 		goto out;
+ 
++	/*
++	 * If a valid target domain is set and this domain is NOT a usage
++	 * domain but a control only domain, use the default domain as target.
++	 */
++	tdom = *domain;
++	if (tdom >= 0 && tdom < AP_DOMAINS &&
++	    !ap_test_config_usage_domain(tdom) &&
++	    ap_test_config_ctrl_domain(tdom) &&
++	    ap_domain_index >= 0)
++		tdom = ap_domain_index;
++
+ 	pref_zc = NULL;
+ 	pref_zq = NULL;
+ 	spin_lock(&zcrypt_list_lock);
+@@ -856,8 +867,8 @@ static long _zcrypt_send_cprb(struct ap_perms *perms,
+ 			/* check if device is online and eligible */
+ 			if (!zq->online ||
+ 			    !zq->ops->send_cprb ||
+-			    ((*domain != (unsigned short) AUTOSELECT) &&
+-			     (*domain != AP_QID_QUEUE(zq->queue->qid))))
++			    (tdom != (unsigned short) AUTOSELECT &&
++			     tdom != AP_QID_QUEUE(zq->queue->qid)))
+ 				continue;
+ 			/* check if device node has admission for this queue */
+ 			if (!zcrypt_check_queue(perms,
+diff --git a/drivers/scsi/cxgbi/libcxgbi.c b/drivers/scsi/cxgbi/libcxgbi.c
+index 006372b3fba2..a50734f3c486 100644
+--- a/drivers/scsi/cxgbi/libcxgbi.c
++++ b/drivers/scsi/cxgbi/libcxgbi.c
+@@ -641,6 +641,10 @@ cxgbi_check_route(struct sockaddr *dst_addr, int ifindex)
+ 
+ 	if (ndev->flags & IFF_LOOPBACK) {
+ 		ndev = ip_dev_find(&init_net, daddr->sin_addr.s_addr);
++		if (!ndev) {
++			err = -ENETUNREACH;
++			goto rel_neigh;
++		}
+ 		mtu = ndev->mtu;
+ 		pr_info("rt dev %s, loopback -> %s, mtu %u.\n",
+ 			n->dev->name, ndev->name, mtu);
+diff --git a/drivers/scsi/device_handler/scsi_dh_alua.c b/drivers/scsi/device_handler/scsi_dh_alua.c
+index d7ac498ba35a..2a9dcb8973b7 100644
+--- a/drivers/scsi/device_handler/scsi_dh_alua.c
++++ b/drivers/scsi/device_handler/scsi_dh_alua.c
+@@ -1174,10 +1174,8 @@ static int __init alua_init(void)
+ 	int r;
+ 
+ 	kaluad_wq = alloc_workqueue("kaluad", WQ_MEM_RECLAIM, 0);
+-	if (!kaluad_wq) {
+-		/* Temporary failure, bypass */
+-		return SCSI_DH_DEV_TEMP_BUSY;
+-	}
++	if (!kaluad_wq)
++		return -ENOMEM;
+ 
+ 	r = scsi_register_device_handler(&alua_dh);
+ 	if (r != 0) {
+diff --git a/drivers/scsi/libsas/sas_expander.c b/drivers/scsi/libsas/sas_expander.c
+index 3611a4ef0d15..7c2d78d189e4 100644
+--- a/drivers/scsi/libsas/sas_expander.c
++++ b/drivers/scsi/libsas/sas_expander.c
+@@ -1014,6 +1014,8 @@ static struct domain_device *sas_ex_discover_expander(
+ 		list_del(&child->dev_list_node);
+ 		spin_unlock_irq(&parent->port->dev_list_lock);
+ 		sas_put_device(child);
++		sas_port_delete(phy->port);
++		phy->port = NULL;
+ 		return NULL;
+ 	}
+ 	list_add_tail(&child->siblings, &parent->ex_dev.children);
+diff --git a/drivers/scsi/smartpqi/smartpqi_init.c b/drivers/scsi/smartpqi/smartpqi_init.c
+index 75ec43aa8df3..531824afba5f 100644
+--- a/drivers/scsi/smartpqi/smartpqi_init.c
++++ b/drivers/scsi/smartpqi/smartpqi_init.c
+@@ -7285,7 +7285,7 @@ static int pqi_pci_init(struct pqi_ctrl_info *ctrl_info)
+ 	else
+ 		mask = DMA_BIT_MASK(32);
+ 
+-	rc = dma_set_mask(&ctrl_info->pci_dev->dev, mask);
++	rc = dma_set_mask_and_coherent(&ctrl_info->pci_dev->dev, mask);
+ 	if (rc) {
+ 		dev_err(&ctrl_info->pci_dev->dev, "failed to set DMA mask\n");
+ 		goto disable_device;
+diff --git a/drivers/staging/erofs/super.c b/drivers/staging/erofs/super.c
+index 15c784fba879..c8981662a49b 100644
+--- a/drivers/staging/erofs/super.c
++++ b/drivers/staging/erofs/super.c
+@@ -459,6 +459,7 @@ static int erofs_read_super(struct super_block *sb,
+ 	 */
+ err_devname:
+ 	dput(sb->s_root);
++	sb->s_root = NULL;
+ err_iget:
+ #ifdef EROFS_FS_HAS_MANAGED_CACHE
+ 	iput(sbi->managed_cache);
+diff --git a/drivers/staging/vc04_services/bcm2835-camera/controls.c b/drivers/staging/vc04_services/bcm2835-camera/controls.c
+index a2c55cb2192a..52f3c4be5ff8 100644
+--- a/drivers/staging/vc04_services/bcm2835-camera/controls.c
++++ b/drivers/staging/vc04_services/bcm2835-camera/controls.c
+@@ -576,7 +576,7 @@ exit:
+ 				dev->colourfx.enable ? "true" : "false",
+ 				dev->colourfx.u, dev->colourfx.v,
+ 				ret, (ret == 0 ? 0 : -EINVAL));
+-	return (ret == 0 ? 0 : EINVAL);
++	return (ret == 0 ? 0 : -EINVAL);
+ }
+ 
+ static int ctrl_set_colfx(struct bm2835_mmal_dev *dev,
+@@ -600,7 +600,7 @@ static int ctrl_set_colfx(struct bm2835_mmal_dev *dev,
+ 		 "%s: After: mmal_ctrl:%p ctrl id:0x%x ctrl val:%d ret %d(%d)\n",
+ 			__func__, mmal_ctrl, ctrl->id, ctrl->val, ret,
+ 			(ret == 0 ? 0 : -EINVAL));
+-	return (ret == 0 ? 0 : EINVAL);
++	return (ret == 0 ? 0 : -EINVAL);
+ }
+ 
+ static int ctrl_set_bitrate(struct bm2835_mmal_dev *dev,
+diff --git a/drivers/staging/wilc1000/wilc_wlan.c b/drivers/staging/wilc1000/wilc_wlan.c
+index c2389695fe20..70b1ab21f8a3 100644
+--- a/drivers/staging/wilc1000/wilc_wlan.c
++++ b/drivers/staging/wilc1000/wilc_wlan.c
+@@ -1076,13 +1076,17 @@ void wilc_wlan_cleanup(struct net_device *dev)
+ 	acquire_bus(wilc, WILC_BUS_ACQUIRE_AND_WAKEUP);
+ 
+ 	ret = wilc->hif_func->hif_read_reg(wilc, WILC_GP_REG_0, &reg);
+-	if (!ret)
++	if (!ret) {
+ 		release_bus(wilc, WILC_BUS_RELEASE_ALLOW_SLEEP);
++		return;
++	}
+ 
+ 	ret = wilc->hif_func->hif_write_reg(wilc, WILC_GP_REG_0,
+ 					(reg | ABORT_INT));
+-	if (!ret)
++	if (!ret) {
+ 		release_bus(wilc, WILC_BUS_RELEASE_ALLOW_SLEEP);
++		return;
++	}
+ 
+ 	release_bus(wilc, WILC_BUS_RELEASE_ALLOW_SLEEP);
+ 	wilc->hif_func->hif_deinit(NULL);
+diff --git a/drivers/tty/serial/sunhv.c b/drivers/tty/serial/sunhv.c
+index 63e34d868de8..f8503f8fc44e 100644
+--- a/drivers/tty/serial/sunhv.c
++++ b/drivers/tty/serial/sunhv.c
+@@ -397,7 +397,7 @@ static const struct uart_ops sunhv_pops = {
+ static struct uart_driver sunhv_reg = {
+ 	.owner			= THIS_MODULE,
+ 	.driver_name		= "sunhv",
+-	.dev_name		= "ttyS",
++	.dev_name		= "ttyHV",
+ 	.major			= TTY_MAJOR,
+ };
+ 
+diff --git a/drivers/usb/host/xhci-debugfs.c b/drivers/usb/host/xhci-debugfs.c
+index cadc01336bf8..7ba6afc7ef23 100644
+--- a/drivers/usb/host/xhci-debugfs.c
++++ b/drivers/usb/host/xhci-debugfs.c
+@@ -440,6 +440,9 @@ void xhci_debugfs_create_endpoint(struct xhci_hcd *xhci,
+ 	struct xhci_ep_priv	*epriv;
+ 	struct xhci_slot_priv	*spriv = dev->debugfs_private;
+ 
++	if (!spriv)
++		return;
++
+ 	if (spriv->eps[ep_index])
+ 		return;
+ 
+diff --git a/drivers/xen/pvcalls-front.c b/drivers/xen/pvcalls-front.c
+index 8a249c95c193..d7438fdc5706 100644
+--- a/drivers/xen/pvcalls-front.c
++++ b/drivers/xen/pvcalls-front.c
+@@ -540,7 +540,6 @@ out:
+ int pvcalls_front_sendmsg(struct socket *sock, struct msghdr *msg,
+ 			  size_t len)
+ {
+-	struct pvcalls_bedata *bedata;
+ 	struct sock_mapping *map;
+ 	int sent, tot_sent = 0;
+ 	int count = 0, flags;
+@@ -552,7 +551,6 @@ int pvcalls_front_sendmsg(struct socket *sock, struct msghdr *msg,
+ 	map = pvcalls_enter_sock(sock);
+ 	if (IS_ERR(map))
+ 		return PTR_ERR(map);
+-	bedata = dev_get_drvdata(&pvcalls_front_dev->dev);
+ 
+ 	mutex_lock(&map->active.out_mutex);
+ 	if ((flags & MSG_DONTWAIT) && !pvcalls_front_write_todo(map)) {
+@@ -635,7 +633,6 @@ out:
+ int pvcalls_front_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
+ 		     int flags)
+ {
+-	struct pvcalls_bedata *bedata;
+ 	int ret;
+ 	struct sock_mapping *map;
+ 
+@@ -645,7 +642,6 @@ int pvcalls_front_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
+ 	map = pvcalls_enter_sock(sock);
+ 	if (IS_ERR(map))
+ 		return PTR_ERR(map);
+-	bedata = dev_get_drvdata(&pvcalls_front_dev->dev);
+ 
+ 	mutex_lock(&map->active.in_mutex);
+ 	if (len > XEN_FLEX_RING_SIZE(PVCALLS_RING_ORDER))
+diff --git a/drivers/xen/xenbus/xenbus.h b/drivers/xen/xenbus/xenbus.h
+index 092981171df1..d75a2385b37c 100644
+--- a/drivers/xen/xenbus/xenbus.h
++++ b/drivers/xen/xenbus/xenbus.h
+@@ -83,6 +83,7 @@ struct xb_req_data {
+ 	int num_vecs;
+ 	int err;
+ 	enum xb_req_state state;
++	bool user_req;
+ 	void (*cb)(struct xb_req_data *);
+ 	void *par;
+ };
+@@ -133,4 +134,6 @@ void xenbus_ring_ops_init(void);
+ int xenbus_dev_request_and_reply(struct xsd_sockmsg *msg, void *par);
+ void xenbus_dev_queue_reply(struct xb_req_data *req);
+ 
++extern unsigned int xb_dev_generation_id;
++
+ #endif
+diff --git a/drivers/xen/xenbus/xenbus_dev_frontend.c b/drivers/xen/xenbus/xenbus_dev_frontend.c
+index 0782ff3c2273..39c63152a358 100644
+--- a/drivers/xen/xenbus/xenbus_dev_frontend.c
++++ b/drivers/xen/xenbus/xenbus_dev_frontend.c
+@@ -62,6 +62,8 @@
+ 
+ #include "xenbus.h"
+ 
++unsigned int xb_dev_generation_id;
++
+ /*
+  * An element of a list of outstanding transactions, for which we're
+  * still waiting a reply.
+@@ -69,6 +71,7 @@
+ struct xenbus_transaction_holder {
+ 	struct list_head list;
+ 	struct xenbus_transaction handle;
++	unsigned int generation_id;
+ };
+ 
+ /*
+@@ -441,6 +444,7 @@ static int xenbus_write_transaction(unsigned msg_type,
+ 			rc = -ENOMEM;
+ 			goto out;
+ 		}
++		trans->generation_id = xb_dev_generation_id;
+ 		list_add(&trans->list, &u->transactions);
+ 	} else if (msg->hdr.tx_id != 0 &&
+ 		   !xenbus_get_transaction(u, msg->hdr.tx_id))
+@@ -449,6 +453,20 @@ static int xenbus_write_transaction(unsigned msg_type,
+ 		 !(msg->hdr.len == 2 &&
+ 		   (!strcmp(msg->body, "T") || !strcmp(msg->body, "F"))))
+ 		return xenbus_command_reply(u, XS_ERROR, "EINVAL");
++	else if (msg_type == XS_TRANSACTION_END) {
++		trans = xenbus_get_transaction(u, msg->hdr.tx_id);
++		if (trans && trans->generation_id != xb_dev_generation_id) {
++			list_del(&trans->list);
++			kfree(trans);
++			if (!strcmp(msg->body, "T"))
++				return xenbus_command_reply(u, XS_ERROR,
++							    "EAGAIN");
++			else
++				return xenbus_command_reply(u,
++							    XS_TRANSACTION_END,
++							    "OK");
++		}
++	}
+ 
+ 	rc = xenbus_dev_request_and_reply(&msg->hdr, u);
+ 	if (rc && trans) {
+diff --git a/drivers/xen/xenbus/xenbus_xs.c b/drivers/xen/xenbus/xenbus_xs.c
+index 49a3874ae6bb..ddc18da61834 100644
+--- a/drivers/xen/xenbus/xenbus_xs.c
++++ b/drivers/xen/xenbus/xenbus_xs.c
+@@ -105,6 +105,7 @@ static void xs_suspend_enter(void)
+ 
+ static void xs_suspend_exit(void)
+ {
++	xb_dev_generation_id++;
+ 	spin_lock(&xs_state_lock);
+ 	xs_suspend_active--;
+ 	spin_unlock(&xs_state_lock);
+@@ -125,7 +126,7 @@ static uint32_t xs_request_enter(struct xb_req_data *req)
+ 		spin_lock(&xs_state_lock);
+ 	}
+ 
+-	if (req->type == XS_TRANSACTION_START)
++	if (req->type == XS_TRANSACTION_START && !req->user_req)
+ 		xs_state_users++;
+ 	xs_state_users++;
+ 	rq_id = xs_request_id++;
+@@ -140,7 +141,7 @@ void xs_request_exit(struct xb_req_data *req)
+ 	spin_lock(&xs_state_lock);
+ 	xs_state_users--;
+ 	if ((req->type == XS_TRANSACTION_START && req->msg.type == XS_ERROR) ||
+-	    (req->type == XS_TRANSACTION_END &&
++	    (req->type == XS_TRANSACTION_END && !req->user_req &&
+ 	     !WARN_ON_ONCE(req->msg.type == XS_ERROR &&
+ 			   !strcmp(req->body, "ENOENT"))))
+ 		xs_state_users--;
+@@ -286,6 +287,7 @@ int xenbus_dev_request_and_reply(struct xsd_sockmsg *msg, void *par)
+ 	req->num_vecs = 1;
+ 	req->cb = xenbus_dev_queue_reply;
+ 	req->par = par;
++	req->user_req = true;
+ 
+ 	xs_send(req, msg);
+ 
+@@ -313,6 +315,7 @@ static void *xs_talkv(struct xenbus_transaction t,
+ 	req->vec = iovec;
+ 	req->num_vecs = num_vecs;
+ 	req->cb = xs_wake_up;
++	req->user_req = false;
+ 
+ 	msg.req_id = 0;
+ 	msg.tx_id = t.id;
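
The xenbus changes thread a generation counter through user-space transactions: xs_suspend_exit() bumps xb_dev_generation_id, each transaction records the value current when it started, and an XS_TRANSACTION_END whose recorded value no longer matches is answered locally (EAGAIN on commit) rather than forwarded to a xenstore that has forgotten the transaction. A standalone, single-threaded sketch of that pattern (toy names throughout):

#include <stdbool.h>
#include <stdio.h>

static unsigned generation; /* bumped on every resume */

struct txn { unsigned gen_at_start; };

static void txn_start(struct txn *t) { t->gen_at_start = generation; }
static void resume(void)             { generation++; }

/* On commit, a transaction that straddled a suspend/resume must be
 * retried: the backend state it was built against is gone. */
static bool txn_end(const struct txn *t)
{
	if (t->gen_at_start != generation) {
		fprintf(stderr, "commit -> EAGAIN (stale generation)\n");
		return false;
	}
	puts("commit -> OK");
	return true;
}

int main(void)
{
	struct txn t;

	txn_start(&t);
	txn_end(&t); /* OK */

	txn_start(&t);
	resume();    /* e.g. migration */
	txn_end(&t); /* EAGAIN */
	return 0;
}
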
+diff --git a/fs/cifs/dfs_cache.c b/fs/cifs/dfs_cache.c
+index 09b7d0d4f6e4..007cfa39be5f 100644
+--- a/fs/cifs/dfs_cache.c
++++ b/fs/cifs/dfs_cache.c
+@@ -131,7 +131,7 @@ static inline void flush_cache_ent(struct dfs_cache_entry *ce)
+ 		return;
+ 
+ 	hlist_del_init_rcu(&ce->ce_hlist);
+-	kfree(ce->ce_path);
++	kfree_const(ce->ce_path);
+ 	free_tgts(ce);
+ 	dfs_cache_count--;
+ 	call_rcu(&ce->ce_rcu, free_cache_entry);
+@@ -421,7 +421,7 @@ alloc_cache_entry(const char *path, const struct dfs_info3_param *refs,
+ 
+ 	rc = copy_ref_data(refs, numrefs, ce, NULL);
+ 	if (rc) {
+-		kfree(ce->ce_path);
++		kfree_const(ce->ce_path);
+ 		kmem_cache_free(dfs_cache_slab, ce);
+ 		ce = ERR_PTR(rc);
+ 	}
+diff --git a/fs/configfs/dir.c b/fs/configfs/dir.c
+index 920d350df37b..809c1edffbaf 100644
+--- a/fs/configfs/dir.c
++++ b/fs/configfs/dir.c
+@@ -58,15 +58,13 @@ static void configfs_d_iput(struct dentry * dentry,
+ 	if (sd) {
+ 		/* Coordinate with configfs_readdir */
+ 		spin_lock(&configfs_dirent_lock);
+-		/* Coordinate with configfs_attach_attr where will increase
+-		 * sd->s_count and update sd->s_dentry to new allocated one.
+-		 * Only set sd->dentry to null when this dentry is the only
+-		 * sd owner.
+-		 * If not do so, configfs_d_iput may run just after
+-		 * configfs_attach_attr and set sd->s_dentry to null
+-		 * even it's still in use.
++		/*
++		 * Set sd->s_dentry to null only when this dentry is the one
++		 * that is going to be killed.  Otherwise configfs_d_iput may
++		 * run just after configfs_attach_attr and set sd->s_dentry to
++		 * NULL even if it's still in use.
+ 		 */
+-		if (atomic_read(&sd->s_count) <= 2)
++		if (sd->s_dentry == dentry)
+ 			sd->s_dentry = NULL;
+ 
+ 		spin_unlock(&configfs_dirent_lock);
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 4e32a033394c..e82adbf8adc1 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -2506,7 +2506,7 @@ static int io_sqe_buffer_register(struct io_ring_ctx *ctx, void __user *arg,
+ 
+ 		ret = io_copy_iov(ctx, &iov, arg, i);
+ 		if (ret)
+-			break;
++			goto err;
+ 
+ 		/*
+ 		 * Don't impose further limits on the size and buffer
+diff --git a/fs/ocfs2/filecheck.c b/fs/ocfs2/filecheck.c
+index f65f2b2f594d..1906cc962c4d 100644
+--- a/fs/ocfs2/filecheck.c
++++ b/fs/ocfs2/filecheck.c
+@@ -193,6 +193,7 @@ int ocfs2_filecheck_create_sysfs(struct ocfs2_super *osb)
+ 	ret = kobject_init_and_add(&entry->fs_kobj, &ocfs2_ktype_filecheck,
+ 					NULL, "filecheck");
+ 	if (ret) {
++		kobject_put(&entry->fs_kobj);
+ 		kfree(fcheck);
+ 		return ret;
+ 	}
+diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
+index a3fda9f024c3..4a7944078cc3 100644
+--- a/include/linux/sched/mm.h
++++ b/include/linux/sched/mm.h
+@@ -54,6 +54,10 @@ static inline void mmdrop(struct mm_struct *mm)
+  * followed by taking the mmap_sem for writing before modifying the
+  * vmas or anything the coredump pretends not to change from under it.
+  *
++ * It also has to be called when mmgrab() is used in the context of
++ * the process, but then the mm_count refcount is transferred outside
++ * the context of the process to run down_write() on that pinned mm.
++ *
+  * NOTE: find_extend_vma() called from GUP context is the only place
+  * that can modify the "mm" (notably the vm_start/end) under mmap_sem
+  * for reading and outside the context of the process, so it is also
+diff --git a/include/net/flow_dissector.h b/include/net/flow_dissector.h
+index 2b26979efb48..fc0d471af4b9 100644
+--- a/include/net/flow_dissector.h
++++ b/include/net/flow_dissector.h
+@@ -46,6 +46,7 @@ struct flow_dissector_key_tags {
+ 
+ struct flow_dissector_key_vlan {
+ 	u16	vlan_id:12,
++		vlan_dei:1,
+ 		vlan_priority:3;
+ 	__be16	vlan_tpid;
+ };
+diff --git a/include/net/netfilter/nft_fib.h b/include/net/netfilter/nft_fib.h
+index a88f92737308..e4c4d8eaca8c 100644
+--- a/include/net/netfilter/nft_fib.h
++++ b/include/net/netfilter/nft_fib.h
+@@ -34,5 +34,5 @@ void nft_fib6_eval(const struct nft_expr *expr, struct nft_regs *regs,
+ 		   const struct nft_pktinfo *pkt);
+ 
+ void nft_fib_store_result(void *reg, const struct nft_fib *priv,
+-			  const struct nft_pktinfo *pkt, int index);
++			  const struct net_device *dev);
+ #endif
+diff --git a/kernel/events/ring_buffer.c b/kernel/events/ring_buffer.c
+index 674b35383491..7a0c73e4b3eb 100644
+--- a/kernel/events/ring_buffer.c
++++ b/kernel/events/ring_buffer.c
+@@ -48,14 +48,30 @@ static void perf_output_put_handle(struct perf_output_handle *handle)
+ 	unsigned long head;
+ 
+ again:
++	/*
++	 * In order to avoid publishing a head value that goes backwards,
++	 * we must ensure the load of @rb->head happens after we've
++	 * incremented @rb->nest.
++	 *
++	 * Otherwise we can observe a @rb->head value before one published
++	 * by an IRQ/NMI happening between the load and the increment.
++	 */
++	barrier();
+ 	head = local_read(&rb->head);
+ 
+ 	/*
+-	 * IRQ/NMI can happen here, which means we can miss a head update.
++	 * IRQ/NMI can happen here and advance @rb->head, causing our
++	 * load above to be stale.
+ 	 */
+ 
+-	if (!local_dec_and_test(&rb->nest))
++	/*
++	 * If this isn't the outermost nesting, we don't have to update
++	 * @rb->user_page->data_head.
++	 */
++	if (local_read(&rb->nest) > 1) {
++		local_dec(&rb->nest);
+ 		goto out;
++	}
+ 
+ 	/*
+ 	 * Since the mmap() consumer (userspace) can run on a different CPU:
+@@ -84,12 +100,21 @@ again:
+ 	 * See perf_output_begin().
+ 	 */
+ 	smp_wmb(); /* B, matches C */
+-	rb->user_page->data_head = head;
++	WRITE_ONCE(rb->user_page->data_head, head);
++
++	/*
++	 * We must publish the head before decrementing the nest count,
++	 * otherwise an IRQ/NMI can publish a more recent head value and our
++	 * write will (temporarily) publish a stale value.
++	 */
++	barrier();
++	local_set(&rb->nest, 0);
+ 
+ 	/*
+-	 * Now check if we missed an update -- rely on previous implied
+-	 * compiler barriers to force a re-read.
++	 * Ensure we decrement @rb->nest before we validate the @rb->head.
++	 * Otherwise we cannot be sure we caught the 'last' nested update.
+ 	 */
++	barrier();
+ 	if (unlikely(head != local_read(&rb->head))) {
+ 		local_inc(&rb->nest);
+ 		goto again;
+@@ -471,7 +496,7 @@ void perf_aux_output_end(struct perf_output_handle *handle, unsigned long size)
+ 		perf_event_aux_event(handle->event, aux_head, size,
+ 				     handle->aux_flags);
+ 
+-	rb->user_page->aux_head = rb->aux_head;
++	WRITE_ONCE(rb->user_page->aux_head, rb->aux_head);
+ 	if (rb_need_aux_wakeup(rb))
+ 		wakeup = true;
+ 
+@@ -503,7 +528,7 @@ int perf_aux_output_skip(struct perf_output_handle *handle, unsigned long size)
+ 
+ 	rb->aux_head += size;
+ 
+-	rb->user_page->aux_head = rb->aux_head;
++	WRITE_ONCE(rb->user_page->aux_head, rb->aux_head);
+ 	if (rb_need_aux_wakeup(rb)) {
+ 		perf_output_wakeup(handle);
+ 		handle->wakeup = rb->aux_wakeup + rb->aux_watermark;
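
The ring_buffer.c rework orders the head publication against the nest count: load the head only after the nest increment is visible, publish it with WRITE_ONCE(), drop the nest count only after the publish, and re-check the head afterwards, with compiler barriers between each step so an IRQ/NMI landing in between cannot make the outermost writer publish a stale head. A single-threaded sketch of that sequence (barrier() spelled as a GCC/Clang compiler barrier; this illustrates the ordering, it is not a usable lock-free buffer):

#include <stdio.h>

#define barrier() __asm__ __volatile__("" ::: "memory")

static long nest, head, user_head;

static void output_end(void)
{
again:
	barrier();          /* load head only after nest was incremented */
	long h = head;

	if (nest > 1) {     /* inner nesting: the outermost call publishes */
		nest--;
		return;
	}

	user_head = h;      /* publish (WRITE_ONCE() in the kernel) */
	barrier();          /* publish before dropping the nest count */
	nest = 0;
	barrier();          /* drop nest before re-checking the head */
	if (h != head) {    /* missed a nested update: redo */
		nest++;
		goto again;
	}
}

int main(void)
{
	nest = 1; head = 128; /* as if output_begin() had run */
	output_end();
	printf("user_head=%ld\n", user_head);
	return 0;
}
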
+diff --git a/mm/khugepaged.c b/mm/khugepaged.c
+index 449044378782..79bcfe252d4d 100644
+--- a/mm/khugepaged.c
++++ b/mm/khugepaged.c
+@@ -1004,6 +1004,9 @@ static void collapse_huge_page(struct mm_struct *mm,
+ 	 * handled by the anon_vma lock + PG_lock.
+ 	 */
+ 	down_write(&mm->mmap_sem);
++	result = SCAN_ANY_PROCESS;
++	if (!mmget_still_valid(mm))
++		goto out;
+ 	result = hugepage_vma_revalidate(mm, address, &vma);
+ 	if (result)
+ 		goto out;
+diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
+index f2f03c655807..a58bd0db2155 100644
+--- a/mm/mmu_gather.c
++++ b/mm/mmu_gather.c
+@@ -93,8 +93,17 @@ void arch_tlb_finish_mmu(struct mmu_gather *tlb,
+ 	struct mmu_gather_batch *batch, *next;
+ 
+ 	if (force) {
++		/*
++		 * aarch64 yields better performance with fullmm by
++		 * avoiding multiple CPUs spamming TLBI messages at the
++		 * same time.
++		 *
++		 * On x86, non-fullmm doesn't yield a significant
++		 * difference from fullmm.
++		 */
++		tlb->fullmm = 1;
+ 		__tlb_reset_range(tlb);
+-		__tlb_adjust_range(tlb, start, end - start);
++		tlb->freed_tables = 1;
+ 	}
+ 
+ 	tlb_flush_mmu(tlb);
+@@ -249,10 +258,15 @@ void tlb_finish_mmu(struct mmu_gather *tlb,
+ {
+ 	/*
+ 	 * If there are parallel threads are doing PTE changes on same range
+-	 * under non-exclusive lock(e.g., mmap_sem read-side) but defer TLB
+-	 * flush by batching, a thread has stable TLB entry can fail to flush
+-	 * the TLB by observing pte_none|!pte_dirty, for example so flush TLB
+-	 * forcefully if we detect parallel PTE batching threads.
++	 * under non-exclusive lock (e.g., mmap_sem read-side) but defer TLB
++	 * flush by batching, one thread may end up seeing inconsistent PTEs
++	 * and result in having stale TLB entries.  So flush TLB forcefully
++	 * if we detect parallel PTE batching threads.
++	 *
++	 * However, some syscalls, e.g. munmap(), may free page tables; this
++	 * needs to force-flush everything in the given range. Otherwise
++	 * stale TLB entries may remain on architectures, e.g. aarch64,
++	 * whose TLB flushes can target a specific page-table level.
+ 	 */
+ 	bool force = mm_tlb_flush_nested(tlb->mm);
+ 
+diff --git a/net/ax25/ax25_route.c b/net/ax25/ax25_route.c
+index 66f74c85cf6b..66d54fc11831 100644
+--- a/net/ax25/ax25_route.c
++++ b/net/ax25/ax25_route.c
+@@ -429,9 +429,11 @@ int ax25_rt_autobind(ax25_cb *ax25, ax25_address *addr)
+ 	}
+ 
+ 	if (ax25->sk != NULL) {
++		local_bh_disable();
+ 		bh_lock_sock(ax25->sk);
+ 		sock_reset_flag(ax25->sk, SOCK_ZAPPED);
+ 		bh_unlock_sock(ax25->sk);
++		local_bh_enable();
+ 	}
+ 
+ put:
+diff --git a/net/core/ethtool.c b/net/core/ethtool.c
+index 7285a19bb135..7b84e014633a 100644
+--- a/net/core/ethtool.c
++++ b/net/core/ethtool.c
+@@ -3022,6 +3022,11 @@ ethtool_rx_flow_rule_create(const struct ethtool_rx_flow_spec_input *input)
+ 			match->mask.vlan.vlan_id =
+ 				ntohs(ext_m_spec->vlan_tci) & 0x0fff;
+ 
++			match->key.vlan.vlan_dei =
++				!!(ext_h_spec->vlan_tci & htons(0x1000));
++			match->mask.vlan.vlan_dei =
++				!!(ext_m_spec->vlan_tci & htons(0x1000));
++
+ 			match->key.vlan.vlan_priority =
+ 				(ntohs(ext_h_spec->vlan_tci) & 0xe000) >> 13;
+ 			match->mask.vlan.vlan_priority =
+diff --git a/net/core/neighbour.c b/net/core/neighbour.c
+index 9b9da5142613..cce4fbcd7dcb 100644
+--- a/net/core/neighbour.c
++++ b/net/core/neighbour.c
+@@ -3199,6 +3199,7 @@ static void *neigh_get_idx_any(struct seq_file *seq, loff_t *pos)
+ }
+ 
+ void *neigh_seq_start(struct seq_file *seq, loff_t *pos, struct neigh_table *tbl, unsigned int neigh_seq_flags)
++	__acquires(tbl->lock)
+ 	__acquires(rcu_bh)
+ {
+ 	struct neigh_seq_state *state = seq->private;
+@@ -3209,6 +3210,7 @@ void *neigh_seq_start(struct seq_file *seq, loff_t *pos, struct neigh_table *tbl
+ 
+ 	rcu_read_lock_bh();
+ 	state->nht = rcu_dereference_bh(tbl->nht);
++	read_lock(&tbl->lock);
+ 
+ 	return *pos ? neigh_get_idx_any(seq, pos) : SEQ_START_TOKEN;
+ }
+@@ -3242,8 +3244,13 @@ out:
+ EXPORT_SYMBOL(neigh_seq_next);
+ 
+ void neigh_seq_stop(struct seq_file *seq, void *v)
++	__releases(tbl->lock)
+ 	__releases(rcu_bh)
+ {
++	struct neigh_seq_state *state = seq->private;
++	struct neigh_table *tbl = state->tbl;
++
++	read_unlock(&tbl->lock);
+ 	rcu_read_unlock_bh();
+ }
+ EXPORT_SYMBOL(neigh_seq_stop);
+diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c
+index ac770940adb9..1086c3ccb601 100644
+--- a/net/ipv4/ip_output.c
++++ b/net/ipv4/ip_output.c
+@@ -923,7 +923,7 @@ static int __ip_append_data(struct sock *sk,
+ 		uarg = sock_zerocopy_realloc(sk, length, skb_zcopy(skb));
+ 		if (!uarg)
+ 			return -ENOBUFS;
+-		extra_uref = !skb;	/* only extra ref if !MSG_MORE */
++		extra_uref = !skb_zcopy(skb);	/* only ref on new uarg */
+ 		if (rt->dst.dev->features & NETIF_F_SG &&
+ 		    csummode == CHECKSUM_PARTIAL) {
+ 			paged = true;
+diff --git a/net/ipv4/netfilter/nft_fib_ipv4.c b/net/ipv4/netfilter/nft_fib_ipv4.c
+index 94eb25bc8d7e..c8888e52591f 100644
+--- a/net/ipv4/netfilter/nft_fib_ipv4.c
++++ b/net/ipv4/netfilter/nft_fib_ipv4.c
+@@ -58,11 +58,6 @@ void nft_fib4_eval_type(const struct nft_expr *expr, struct nft_regs *regs,
+ }
+ EXPORT_SYMBOL_GPL(nft_fib4_eval_type);
+ 
+-static int get_ifindex(const struct net_device *dev)
+-{
+-	return dev ? dev->ifindex : 0;
+-}
+-
+ void nft_fib4_eval(const struct nft_expr *expr, struct nft_regs *regs,
+ 		   const struct nft_pktinfo *pkt)
+ {
+@@ -94,8 +89,7 @@ void nft_fib4_eval(const struct nft_expr *expr, struct nft_regs *regs,
+ 
+ 	if (nft_hook(pkt) == NF_INET_PRE_ROUTING &&
+ 	    nft_fib_is_loopback(pkt->skb, nft_in(pkt))) {
+-		nft_fib_store_result(dest, priv, pkt,
+-				     nft_in(pkt)->ifindex);
++		nft_fib_store_result(dest, priv, nft_in(pkt));
+ 		return;
+ 	}
+ 
+@@ -108,8 +102,7 @@ void nft_fib4_eval(const struct nft_expr *expr, struct nft_regs *regs,
+ 	if (ipv4_is_zeronet(iph->saddr)) {
+ 		if (ipv4_is_lbcast(iph->daddr) ||
+ 		    ipv4_is_local_multicast(iph->daddr)) {
+-			nft_fib_store_result(dest, priv, pkt,
+-					     get_ifindex(pkt->skb->dev));
++			nft_fib_store_result(dest, priv, pkt->skb->dev);
+ 			return;
+ 		}
+ 	}
+@@ -150,17 +143,7 @@ void nft_fib4_eval(const struct nft_expr *expr, struct nft_regs *regs,
+ 		found = oif;
+ 	}
+ 
+-	switch (priv->result) {
+-	case NFT_FIB_RESULT_OIF:
+-		*dest = found->ifindex;
+-		break;
+-	case NFT_FIB_RESULT_OIFNAME:
+-		strncpy((char *)dest, found->name, IFNAMSIZ);
+-		break;
+-	default:
+-		WARN_ON_ONCE(1);
+-		break;
+-	}
++	nft_fib_store_result(dest, priv, found);
+ }
+ EXPORT_SYMBOL_GPL(nft_fib4_eval);
+ 
+diff --git a/net/ipv6/ip6_flowlabel.c b/net/ipv6/ip6_flowlabel.c
+index be5f3d7ceb96..f994f50e1516 100644
+--- a/net/ipv6/ip6_flowlabel.c
++++ b/net/ipv6/ip6_flowlabel.c
+@@ -254,9 +254,9 @@ struct ip6_flowlabel *fl6_sock_lookup(struct sock *sk, __be32 label)
+ 	rcu_read_lock_bh();
+ 	for_each_sk_fl_rcu(np, sfl) {
+ 		struct ip6_flowlabel *fl = sfl->fl;
+-		if (fl->label == label) {
++
++		if (fl->label == label && atomic_inc_not_zero(&fl->users)) {
+ 			fl->lastuse = jiffies;
+-			atomic_inc(&fl->users);
+ 			rcu_read_unlock_bh();
+ 			return fl;
+ 		}
+@@ -622,7 +622,8 @@ int ipv6_flowlabel_opt(struct sock *sk, char __user *optval, int optlen)
+ 						goto done;
+ 					}
+ 					fl1 = sfl->fl;
+-					atomic_inc(&fl1->users);
++					if (!atomic_inc_not_zero(&fl1->users))
++						fl1 = NULL;
+ 					break;
+ 				}
+ 			}
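
Both flow-label hunks replace a plain atomic_inc() with atomic_inc_not_zero(): an RCU-side lookup can find an object whose refcount has already dropped to zero and is about to be freed, and taking a reference at that point would resurrect it. A standalone C11 sketch of inc-not-zero (the kernel primitive is a cmpxchg loop along these lines):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* Take a reference only if the object is still live (count > 0). */
static bool inc_not_zero(atomic_int *refs)
{
	int old = atomic_load(refs);

	while (old != 0) {
		if (atomic_compare_exchange_weak(refs, &old, old + 1))
			return true; /* got a reference */
		/* old was reloaded by the failed CAS; retry */
	}
	return false;                /* object already dying */
}

int main(void)
{
	atomic_int live = 2, dying = 0;

	printf("live:  %d (refs now %d)\n", inc_not_zero(&live),
	       atomic_load(&live));
	printf("dying: %d (refs now %d)\n", inc_not_zero(&dying),
	       atomic_load(&dying));
	return 0;
}
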
+diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
+index b5e0c85bcd57..ed9f6a7d224b 100644
+--- a/net/ipv6/ip6_output.c
++++ b/net/ipv6/ip6_output.c
+@@ -1344,7 +1344,7 @@ emsgsize:
+ 		uarg = sock_zerocopy_realloc(sk, length, skb_zcopy(skb));
+ 		if (!uarg)
+ 			return -ENOBUFS;
+-		extra_uref = !skb;	/* only extra ref if !MSG_MORE */
++		extra_uref = !skb_zcopy(skb);	/* only ref on new uarg */
+ 		if (rt->dst.dev->features & NETIF_F_SG &&
+ 		    csummode == CHECKSUM_PARTIAL) {
+ 			paged = true;
+diff --git a/net/ipv6/netfilter/nft_fib_ipv6.c b/net/ipv6/netfilter/nft_fib_ipv6.c
+index 73cdc0bc63f7..ec068b0cffca 100644
+--- a/net/ipv6/netfilter/nft_fib_ipv6.c
++++ b/net/ipv6/netfilter/nft_fib_ipv6.c
+@@ -169,8 +169,7 @@ void nft_fib6_eval(const struct nft_expr *expr, struct nft_regs *regs,
+ 
+ 	if (nft_hook(pkt) == NF_INET_PRE_ROUTING &&
+ 	    nft_fib_is_loopback(pkt->skb, nft_in(pkt))) {
+-		nft_fib_store_result(dest, priv, pkt,
+-				     nft_in(pkt)->ifindex);
++		nft_fib_store_result(dest, priv, nft_in(pkt));
+ 		return;
+ 	}
+ 
+@@ -187,18 +186,7 @@ void nft_fib6_eval(const struct nft_expr *expr, struct nft_regs *regs,
+ 	if (oif && oif != rt->rt6i_idev->dev)
+ 		goto put_rt_err;
+ 
+-	switch (priv->result) {
+-	case NFT_FIB_RESULT_OIF:
+-		*dest = rt->rt6i_idev->dev->ifindex;
+-		break;
+-	case NFT_FIB_RESULT_OIFNAME:
+-		strncpy((char *)dest, rt->rt6i_idev->dev->name, IFNAMSIZ);
+-		break;
+-	default:
+-		WARN_ON_ONCE(1);
+-		break;
+-	}
+-
++	nft_fib_store_result(dest, priv, rt->rt6i_idev->dev);
+  put_rt_err:
+ 	ip6_rt_put(rt);
+ }
+diff --git a/net/lapb/lapb_iface.c b/net/lapb/lapb_iface.c
+index db6e0afe3a20..1740f852002e 100644
+--- a/net/lapb/lapb_iface.c
++++ b/net/lapb/lapb_iface.c
+@@ -182,6 +182,7 @@ int lapb_unregister(struct net_device *dev)
+ 	lapb = __lapb_devtostruct(dev);
+ 	if (!lapb)
+ 		goto out;
++	lapb_put(lapb);
+ 
+ 	lapb_stop_t1timer(lapb);
+ 	lapb_stop_t2timer(lapb);
+diff --git a/net/netfilter/ipvs/ip_vs_core.c b/net/netfilter/ipvs/ip_vs_core.c
+index 14457551bcb4..8ebf21149ec3 100644
+--- a/net/netfilter/ipvs/ip_vs_core.c
++++ b/net/netfilter/ipvs/ip_vs_core.c
+@@ -2312,7 +2312,6 @@ static void __net_exit __ip_vs_cleanup(struct net *net)
+ {
+ 	struct netns_ipvs *ipvs = net_ipvs(net);
+ 
+-	nf_unregister_net_hooks(net, ip_vs_ops, ARRAY_SIZE(ip_vs_ops));
+ 	ip_vs_service_net_cleanup(ipvs);	/* ip_vs_flush() with locks */
+ 	ip_vs_conn_net_cleanup(ipvs);
+ 	ip_vs_app_net_cleanup(ipvs);
+@@ -2327,6 +2326,7 @@ static void __net_exit __ip_vs_dev_cleanup(struct net *net)
+ {
+ 	struct netns_ipvs *ipvs = net_ipvs(net);
+ 	EnterFunction(2);
++	nf_unregister_net_hooks(net, ip_vs_ops, ARRAY_SIZE(ip_vs_ops));
+ 	ipvs->enable = 0;	/* Disable packet reception */
+ 	smp_wmb();
+ 	ip_vs_sync_net_cleanup(ipvs);
+diff --git a/net/netfilter/nf_nat_helper.c b/net/netfilter/nf_nat_helper.c
+index ccc06f7539d7..53aeb12b70fb 100644
+--- a/net/netfilter/nf_nat_helper.c
++++ b/net/netfilter/nf_nat_helper.c
+@@ -170,7 +170,7 @@ nf_nat_mangle_udp_packet(struct sk_buff *skb,
+ 	if (!udph->check && skb->ip_summed != CHECKSUM_PARTIAL)
+ 		return true;
+ 
+-	nf_nat_csum_recalc(skb, nf_ct_l3num(ct), IPPROTO_TCP,
++	nf_nat_csum_recalc(skb, nf_ct_l3num(ct), IPPROTO_UDP,
+ 			   udph, &udph->check, datalen, oldlen);
+ 
+ 	return true;
+diff --git a/net/netfilter/nf_queue.c b/net/netfilter/nf_queue.c
+index a36a77bae1d6..5b86574e7b89 100644
+--- a/net/netfilter/nf_queue.c
++++ b/net/netfilter/nf_queue.c
+@@ -254,6 +254,7 @@ static unsigned int nf_iterate(struct sk_buff *skb,
+ repeat:
+ 		verdict = nf_hook_entry_hookfn(hook, skb, state);
+ 		if (verdict != NF_ACCEPT) {
++			*index = i;
+ 			if (verdict != NF_REPEAT)
+ 				return verdict;
+ 			goto repeat;
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index aa5e7b00a581..101975386547 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -2261,13 +2261,13 @@ static int nf_tables_fill_rule_info(struct sk_buff *skb, struct net *net,
+ 				    u32 flags, int family,
+ 				    const struct nft_table *table,
+ 				    const struct nft_chain *chain,
+-				    const struct nft_rule *rule)
++				    const struct nft_rule *rule,
++				    const struct nft_rule *prule)
+ {
+ 	struct nlmsghdr *nlh;
+ 	struct nfgenmsg *nfmsg;
+ 	const struct nft_expr *expr, *next;
+ 	struct nlattr *list;
+-	const struct nft_rule *prule;
+ 	u16 type = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event);
+ 
+ 	nlh = nlmsg_put(skb, portid, seq, type, sizeof(struct nfgenmsg), flags);
+@@ -2287,8 +2287,7 @@ static int nf_tables_fill_rule_info(struct sk_buff *skb, struct net *net,
+ 			 NFTA_RULE_PAD))
+ 		goto nla_put_failure;
+ 
+-	if ((event != NFT_MSG_DELRULE) && (rule->list.prev != &chain->rules)) {
+-		prule = list_prev_entry(rule, list);
++	if (event != NFT_MSG_DELRULE && prule) {
+ 		if (nla_put_be64(skb, NFTA_RULE_POSITION,
+ 				 cpu_to_be64(prule->handle),
+ 				 NFTA_RULE_PAD))
+@@ -2335,7 +2334,7 @@ static void nf_tables_rule_notify(const struct nft_ctx *ctx,
+ 
+ 	err = nf_tables_fill_rule_info(skb, ctx->net, ctx->portid, ctx->seq,
+ 				       event, 0, ctx->family, ctx->table,
+-				       ctx->chain, rule);
++				       ctx->chain, rule, NULL);
+ 	if (err < 0) {
+ 		kfree_skb(skb);
+ 		goto err;
+@@ -2360,12 +2359,13 @@ static int __nf_tables_dump_rules(struct sk_buff *skb,
+ 				  const struct nft_chain *chain)
+ {
+ 	struct net *net = sock_net(skb->sk);
++	const struct nft_rule *rule, *prule;
+ 	unsigned int s_idx = cb->args[0];
+-	const struct nft_rule *rule;
+ 
++	prule = NULL;
+ 	list_for_each_entry_rcu(rule, &chain->rules, list) {
+ 		if (!nft_is_active(net, rule))
+-			goto cont;
++			goto cont_skip;
+ 		if (*idx < s_idx)
+ 			goto cont;
+ 		if (*idx > s_idx) {
+@@ -2377,11 +2377,13 @@ static int __nf_tables_dump_rules(struct sk_buff *skb,
+ 					NFT_MSG_NEWRULE,
+ 					NLM_F_MULTI | NLM_F_APPEND,
+ 					table->family,
+-					table, chain, rule) < 0)
++					table, chain, rule, prule) < 0)
+ 			return 1;
+ 
+ 		nl_dump_check_consistent(cb, nlmsg_hdr(skb));
+ cont:
++		prule = rule;
++cont_skip:
+ 		(*idx)++;
+ 	}
+ 	return 0;
+@@ -2537,7 +2539,7 @@ static int nf_tables_getrule(struct net *net, struct sock *nlsk,
+ 
+ 	err = nf_tables_fill_rule_info(skb2, net, NETLINK_CB(skb).portid,
+ 				       nlh->nlmsg_seq, NFT_MSG_NEWRULE, 0,
+-				       family, table, chain, rule);
++				       family, table, chain, rule, NULL);
+ 	if (err < 0)
+ 		goto err;
+ 
+diff --git a/net/netfilter/nft_fib.c b/net/netfilter/nft_fib.c
+index 21df8cccea65..77f00a99dfab 100644
+--- a/net/netfilter/nft_fib.c
++++ b/net/netfilter/nft_fib.c
+@@ -135,17 +135,17 @@ int nft_fib_dump(struct sk_buff *skb, const struct nft_expr *expr)
+ EXPORT_SYMBOL_GPL(nft_fib_dump);
+ 
+ void nft_fib_store_result(void *reg, const struct nft_fib *priv,
+-			  const struct nft_pktinfo *pkt, int index)
++			  const struct net_device *dev)
+ {
+-	struct net_device *dev;
+ 	u32 *dreg = reg;
++	int index;
+ 
+ 	switch (priv->result) {
+ 	case NFT_FIB_RESULT_OIF:
++		index = dev ? dev->ifindex : 0;
+ 		*dreg = (priv->flags & NFTA_FIB_F_PRESENT) ? !!index : index;
+ 		break;
+ 	case NFT_FIB_RESULT_OIFNAME:
+-		dev = dev_get_by_index_rcu(nft_net(pkt), index);
+ 		if (priv->flags & NFTA_FIB_F_PRESENT)
+ 			*dreg = !!dev;
+ 		else
+diff --git a/net/nfc/netlink.c b/net/nfc/netlink.c
+index 376181cc1def..9f2875efb4ac 100644
+--- a/net/nfc/netlink.c
++++ b/net/nfc/netlink.c
+@@ -922,7 +922,8 @@ static int nfc_genl_deactivate_target(struct sk_buff *skb,
+ 	u32 device_idx, target_idx;
+ 	int rc;
+ 
+-	if (!info->attrs[NFC_ATTR_DEVICE_INDEX])
++	if (!info->attrs[NFC_ATTR_DEVICE_INDEX] ||
++	    !info->attrs[NFC_ATTR_TARGET_INDEX])
+ 		return -EINVAL;
+ 
+ 	device_idx = nla_get_u32(info->attrs[NFC_ATTR_DEVICE_INDEX]);
+diff --git a/net/openvswitch/vport-internal_dev.c b/net/openvswitch/vport-internal_dev.c
+index 26f71cbf7527..5993405c25c1 100644
+--- a/net/openvswitch/vport-internal_dev.c
++++ b/net/openvswitch/vport-internal_dev.c
+@@ -170,7 +170,9 @@ static struct vport *internal_dev_create(const struct vport_parms *parms)
+ {
+ 	struct vport *vport;
+ 	struct internal_dev *internal_dev;
++	struct net_device *dev;
+ 	int err;
++	bool free_vport = true;
+ 
+ 	vport = ovs_vport_alloc(0, &ovs_internal_vport_ops, parms);
+ 	if (IS_ERR(vport)) {
+@@ -178,8 +180,9 @@ static struct vport *internal_dev_create(const struct vport_parms *parms)
+ 		goto error;
+ 	}
+ 
+-	vport->dev = alloc_netdev(sizeof(struct internal_dev),
+-				  parms->name, NET_NAME_USER, do_setup);
++	dev = alloc_netdev(sizeof(struct internal_dev),
++			   parms->name, NET_NAME_USER, do_setup);
++	vport->dev = dev;
+ 	if (!vport->dev) {
+ 		err = -ENOMEM;
+ 		goto error_free_vport;
+@@ -200,8 +203,10 @@ static struct vport *internal_dev_create(const struct vport_parms *parms)
+ 
+ 	rtnl_lock();
+ 	err = register_netdevice(vport->dev);
+-	if (err)
++	if (err) {
++		free_vport = false;
+ 		goto error_unlock;
++	}
+ 
+ 	dev_set_promiscuity(vport->dev, 1);
+ 	rtnl_unlock();
+@@ -211,11 +216,12 @@ static struct vport *internal_dev_create(const struct vport_parms *parms)
+ 
+ error_unlock:
+ 	rtnl_unlock();
+-	free_percpu(vport->dev->tstats);
++	free_percpu(dev->tstats);
+ error_free_netdev:
+-	free_netdev(vport->dev);
++	free_netdev(dev);
+ error_free_vport:
+-	ovs_vport_free(vport);
++	if (free_vport)
++		ovs_vport_free(vport);
+ error:
+ 	return ERR_PTR(err);
+ }
+diff --git a/net/sctp/sm_make_chunk.c b/net/sctp/sm_make_chunk.c
+index ae65a1cfa596..fb546b2d67ca 100644
+--- a/net/sctp/sm_make_chunk.c
++++ b/net/sctp/sm_make_chunk.c
+@@ -2600,6 +2600,8 @@ do_addr_param:
+ 	case SCTP_PARAM_STATE_COOKIE:
+ 		asoc->peer.cookie_len =
+ 			ntohs(param.p->length) - sizeof(struct sctp_paramhdr);
++		if (asoc->peer.cookie)
++			kfree(asoc->peer.cookie);
+ 		asoc->peer.cookie = kmemdup(param.cookie->body, asoc->peer.cookie_len, gfp);
+ 		if (!asoc->peer.cookie)
+ 			retval = 0;
+@@ -2664,6 +2666,8 @@ do_addr_param:
+ 			goto fall_through;
+ 
+ 		/* Save peer's random parameter */
++		if (asoc->peer.peer_random)
++			kfree(asoc->peer.peer_random);
+ 		asoc->peer.peer_random = kmemdup(param.p,
+ 					    ntohs(param.p->length), gfp);
+ 		if (!asoc->peer.peer_random) {
+@@ -2677,6 +2681,8 @@ do_addr_param:
+ 			goto fall_through;
+ 
+ 		/* Save peer's HMAC list */
++		if (asoc->peer.peer_hmacs)
++			kfree(asoc->peer.peer_hmacs);
+ 		asoc->peer.peer_hmacs = kmemdup(param.p,
+ 					    ntohs(param.p->length), gfp);
+ 		if (!asoc->peer.peer_hmacs) {
+@@ -2692,6 +2698,8 @@ do_addr_param:
+ 		if (!ep->auth_enable)
+ 			goto fall_through;
+ 
++		if (asoc->peer.peer_chunks)
++			kfree(asoc->peer.peer_chunks);
+ 		asoc->peer.peer_chunks = kmemdup(param.p,
+ 					    ntohs(param.p->length), gfp);
+ 		if (!asoc->peer.peer_chunks)
+diff --git a/net/tipc/group.c b/net/tipc/group.c
+index 63f39201e41e..df0c0c4b38d5 100644
+--- a/net/tipc/group.c
++++ b/net/tipc/group.c
+@@ -218,6 +218,7 @@ void tipc_group_delete(struct net *net, struct tipc_group *grp)
+ 
+ 	rbtree_postorder_for_each_entry_safe(m, tmp, tree, tree_node) {
+ 		tipc_group_proto_xmit(grp, m, GRP_LEAVE_MSG, &xmitq);
++		__skb_queue_purge(&m->deferredq);
+ 		list_del(&m->list);
+ 		kfree(m);
+ 	}
+diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
+index d350ff73a391..41e17ed0c94e 100644
+--- a/net/tls/tls_sw.c
++++ b/net/tls/tls_sw.c
+@@ -1128,7 +1128,6 @@ static int tls_sw_do_sendpage(struct sock *sk, struct page *page,
+ 
+ 		full_record = false;
+ 		record_room = TLS_MAX_PAYLOAD_SIZE - msg_pl->sg.size;
+-		copied = 0;
+ 		copy = size;
+ 		if (copy >= record_room) {
+ 			copy = record_room;
+diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
+index f3f3d06cb6d8..e30f53728725 100644
+--- a/net/vmw_vsock/virtio_transport_common.c
++++ b/net/vmw_vsock/virtio_transport_common.c
+@@ -871,8 +871,10 @@ virtio_transport_recv_connected(struct sock *sk,
+ 		if (le32_to_cpu(pkt->hdr.flags) & VIRTIO_VSOCK_SHUTDOWN_SEND)
+ 			vsk->peer_shutdown |= SEND_SHUTDOWN;
+ 		if (vsk->peer_shutdown == SHUTDOWN_MASK &&
+-		    vsock_stream_has_data(vsk) <= 0)
++		    vsock_stream_has_data(vsk) <= 0) {
++			sock_set_flag(sk, SOCK_DONE);
+ 			sk->sk_state = TCP_CLOSING;
++		}
+ 		if (le32_to_cpu(pkt->hdr.flags))
+ 			sk->sk_state_change(sk);
+ 		break;
+diff --git a/sound/firewire/fireface/ff-protocol-latter.c b/sound/firewire/fireface/ff-protocol-latter.c
+index c8236ff89b7f..b30d02d359b1 100644
+--- a/sound/firewire/fireface/ff-protocol-latter.c
++++ b/sound/firewire/fireface/ff-protocol-latter.c
+@@ -9,11 +9,11 @@
+ 
+ #include "ff.h"
+ 
+-#define LATTER_STF		0xffff00000004
+-#define LATTER_ISOC_CHANNELS	0xffff00000008
+-#define LATTER_ISOC_START	0xffff0000000c
+-#define LATTER_FETCH_MODE	0xffff00000010
+-#define LATTER_SYNC_STATUS	0x0000801c0000
++#define LATTER_STF		0xffff00000004ULL
++#define LATTER_ISOC_CHANNELS	0xffff00000008ULL
++#define LATTER_ISOC_START	0xffff0000000cULL
++#define LATTER_FETCH_MODE	0xffff00000010ULL
++#define LATTER_SYNC_STATUS	0x0000801c0000ULL
+ 
+ static int parse_clock_bits(u32 data, unsigned int *rate,
+ 			    enum snd_ff_clock_src *src)
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index 789308f54785..5c29d6490a18 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -375,6 +375,7 @@ enum {
+ 
+ #define IS_BXT(pci) ((pci)->vendor == 0x8086 && (pci)->device == 0x5a98)
+ #define IS_CFL(pci) ((pci)->vendor == 0x8086 && (pci)->device == 0xa348)
++#define IS_CNL(pci) ((pci)->vendor == 0x8086 && (pci)->device == 0x9dc8)
+ 
+ static char *driver_short_names[] = {
+ 	[AZX_DRIVER_ICH] = "HDA Intel",
+@@ -1700,8 +1701,8 @@ static int azx_create(struct snd_card *card, struct pci_dev *pci,
+ 	else
+ 		chip->bdl_pos_adj = bdl_pos_adj[dev];
+ 
+-	/* Workaround for a communication error on CFL (bko#199007) */
+-	if (IS_CFL(pci))
++	/* Workaround for a communication error on CFL (bko#199007) and CNL */
++	if (IS_CFL(pci) || IS_CNL(pci))
+ 		chip->polling_mode = 1;
+ 
+ 	err = azx_bus_init(chip, model[dev], &pci_hda_io_ops);
+diff --git a/tools/perf/arch/s390/util/machine.c b/tools/perf/arch/s390/util/machine.c
+index 0b2054007314..a19690a17291 100644
+--- a/tools/perf/arch/s390/util/machine.c
++++ b/tools/perf/arch/s390/util/machine.c
+@@ -5,16 +5,19 @@
+ #include "util.h"
+ #include "machine.h"
+ #include "api/fs/fs.h"
++#include "debug.h"
+ 
+ int arch__fix_module_text_start(u64 *start, const char *name)
+ {
++	u64 m_start = *start;
+ 	char path[PATH_MAX];
+ 
+ 	snprintf(path, PATH_MAX, "module/%.*s/sections/.text",
+ 				(int)strlen(name) - 2, name + 1);
+-
+-	if (sysfs__read_ull(path, (unsigned long long *)start) < 0)
+-		return -1;
++	if (sysfs__read_ull(path, (unsigned long long *)start) < 0) {
++		pr_debug2("Using module %s start:%#lx\n", path, m_start);
++		*start = m_start;
++	}
+ 
+ 	return 0;
+ }
+diff --git a/tools/perf/util/data-convert-bt.c b/tools/perf/util/data-convert-bt.c
+index 26af43ad9ddd..53d49fd8b8ae 100644
+--- a/tools/perf/util/data-convert-bt.c
++++ b/tools/perf/util/data-convert-bt.c
+@@ -271,7 +271,7 @@ static int string_set_value(struct bt_ctf_field *field, const char *string)
+ 				if (i > 0)
+ 					strncpy(buffer, string, i);
+ 			}
+-			strncat(buffer + p, numstr, 4);
++			memcpy(buffer + p, numstr, 4);
+ 			p += 3;
+ 		}
+ 	}
+diff --git a/tools/perf/util/thread.c b/tools/perf/util/thread.c
+index 50678d318185..b800752745af 100644
+--- a/tools/perf/util/thread.c
++++ b/tools/perf/util/thread.c
+@@ -132,7 +132,7 @@ void thread__put(struct thread *thread)
+ 	}
+ }
+ 
+-struct namespaces *thread__namespaces(const struct thread *thread)
++static struct namespaces *__thread__namespaces(const struct thread *thread)
+ {
+ 	if (list_empty(&thread->namespaces_list))
+ 		return NULL;
+@@ -140,10 +140,21 @@ struct namespaces *thread__namespaces(const struct thread *thread)
+ 	return list_first_entry(&thread->namespaces_list, struct namespaces, list);
+ }
+ 
++struct namespaces *thread__namespaces(const struct thread *thread)
++{
++	struct namespaces *ns;
++
++	down_read((struct rw_semaphore *)&thread->namespaces_lock);
++	ns = __thread__namespaces(thread);
++	up_read((struct rw_semaphore *)&thread->namespaces_lock);
++
++	return ns;
++}
++
+ static int __thread__set_namespaces(struct thread *thread, u64 timestamp,
+ 				    struct namespaces_event *event)
+ {
+-	struct namespaces *new, *curr = thread__namespaces(thread);
++	struct namespaces *new, *curr = __thread__namespaces(thread);
+ 
+ 	new = namespaces__new(event);
+ 	if (!new)
+diff --git a/tools/testing/selftests/netfilter/nft_nat.sh b/tools/testing/selftests/netfilter/nft_nat.sh
+index 3194007cf8d1..a59c5fd4e987 100755
+--- a/tools/testing/selftests/netfilter/nft_nat.sh
++++ b/tools/testing/selftests/netfilter/nft_nat.sh
+@@ -23,7 +23,11 @@ ip netns add ns0
+ ip netns add ns1
+ ip netns add ns2
+ 
+-ip link add veth0 netns ns0 type veth peer name eth0 netns ns1
++ip link add veth0 netns ns0 type veth peer name eth0 netns ns1 > /dev/null 2>&1
++if [ $? -ne 0 ];then
++    echo "SKIP: No virtual ethernet pair device support in kernel"
++    exit $ksft_skip
++fi
+ ip link add veth1 netns ns0 type veth peer name eth0 netns ns2
+ 
+ ip -net ns0 link set lo up

diff --git a/1013_linux-5.1.14.patch b/1013_linux-5.1.14.patch
new file mode 100644
index 0000000..a5fab59
--- /dev/null
+++ b/1013_linux-5.1.14.patch
@@ -0,0 +1,27 @@
+diff --git a/Makefile b/Makefile
+index dfcd51a35824..c4b1a345d3f0 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 1
+-SUBLEVEL = 13
++SUBLEVEL = 14
+ EXTRAVERSION =
+ NAME = Shy Crocodile
+ 
+diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
+index 2d86e1bc483c..b8b4ae555e34 100644
+--- a/net/ipv4/tcp_output.c
++++ b/net/ipv4/tcp_output.c
+@@ -1299,7 +1299,8 @@ int tcp_fragment(struct sock *sk, enum tcp_queue tcp_queue,
+ 	if (nsize < 0)
+ 		nsize = 0;
+ 
+-	if (unlikely((sk->sk_wmem_queued >> 1) > sk->sk_sndbuf)) {
++	if (unlikely((sk->sk_wmem_queued >> 1) > sk->sk_sndbuf &&
++		     tcp_queue != TCP_FRAG_IN_WRITE_QUEUE)) {
+ 		NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPWQUEUETOOBIG);
+ 		return -ENOMEM;
+ 	}