From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from lists.gentoo.org (pigeon.gentoo.org [208.92.234.80]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by finch.gentoo.org (Postfix) with ESMTPS id 64E3C138359 for ; Wed, 26 Aug 2020 11:16:43 +0000 (UTC) Received: from pigeon.gentoo.org (localhost [127.0.0.1]) by pigeon.gentoo.org (Postfix) with SMTP id 29380E0848; Wed, 26 Aug 2020 11:16:41 +0000 (UTC) Received: from smtp.gentoo.org (woodpecker.gentoo.org [IPv6:2001:470:ea4a:1:5054:ff:fec7:86e4]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by pigeon.gentoo.org (Postfix) with ESMTPS id 3F7C6E0848 for ; Wed, 26 Aug 2020 11:16:40 +0000 (UTC) Received: from oystercatcher.gentoo.org (oystercatcher.gentoo.org [148.251.78.52]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.gentoo.org (Postfix) with ESMTPS id E378E340B65 for ; Wed, 26 Aug 2020 11:16:35 +0000 (UTC) Received: from localhost.localdomain (localhost [IPv6:::1]) by oystercatcher.gentoo.org (Postfix) with ESMTP id 7E2E52FE for ; Wed, 26 Aug 2020 11:16:34 +0000 (UTC) From: "Mike Pagano" To: gentoo-commits@lists.gentoo.org Content-Transfer-Encoding: 8bit Content-type: text/plain; charset=UTF-8 Reply-To: gentoo-dev@lists.gentoo.org, "Mike Pagano" Message-ID: <1598440575.b0dc77c40c31e3115d85da6c51e04e85d0bde1c3.mpagano@gentoo> Subject: [gentoo-commits] proj/linux-patches:5.4 commit in: / X-VCS-Repository: proj/linux-patches X-VCS-Files: 0000_README 1060_linux-5.4.61.patch X-VCS-Directories: / X-VCS-Committer: mpagano X-VCS-Committer-Name: Mike Pagano X-VCS-Revision: b0dc77c40c31e3115d85da6c51e04e85d0bde1c3 X-VCS-Branch: 5.4 Date: Wed, 26 Aug 2020 11:16:34 +0000 (UTC) Precedence: bulk List-Post: List-Help: List-Unsubscribe: List-Subscribe: List-Id: Gentoo Linux mail X-BeenThere: gentoo-commits@lists.gentoo.org X-Auto-Response-Suppress: DR, RN, NRN, OOF, AutoReply X-Archives-Salt: 57f9b14b-2962-4034-be43-e66a6e2ab9df X-Archives-Hash: 2017182d8f9bb1d7a5d923d6b960fbdf commit: b0dc77c40c31e3115d85da6c51e04e85d0bde1c3 Author: Mike Pagano gentoo org> AuthorDate: Wed Aug 26 11:16:15 2020 +0000 Commit: Mike Pagano gentoo org> CommitDate: Wed Aug 26 11:16:15 2020 +0000 URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=b0dc77c4 Linux patch 5.4.61 Signed-off-by: Mike Pagano gentoo.org> 0000_README | 4 + 1060_linux-5.4.61.patch | 3600 +++++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 3604 insertions(+) diff --git a/0000_README b/0000_README index 5a3df13..fca4b03 100644 --- a/0000_README +++ b/0000_README @@ -283,6 +283,10 @@ Patch: 1059_linux-5.4.60.patch From: http://www.kernel.org Desc: Linux 5.4.60 +Patch: 1060_linux-5.4.61.patch +From: http://www.kernel.org +Desc: Linux 5.4.61 + Patch: 1500_XATTR_USER_PREFIX.patch From: https://bugs.gentoo.org/show_bug.cgi?id=470644 Desc: Support for namespace user.pax.* on tmpfs. diff --git a/1060_linux-5.4.61.patch b/1060_linux-5.4.61.patch new file mode 100644 index 0000000..bee93bc --- /dev/null +++ b/1060_linux-5.4.61.patch @@ -0,0 +1,3600 @@ +diff --git a/Documentation/kbuild/index.rst b/Documentation/kbuild/index.rst +index 0f144fad99a6a..3882bd5f7728c 100644 +--- a/Documentation/kbuild/index.rst ++++ b/Documentation/kbuild/index.rst +@@ -19,6 +19,7 @@ Kernel Build System + + issues + reproducible-builds ++ llvm + + .. 
only:: subproject and html + +diff --git a/Documentation/kbuild/kbuild.rst b/Documentation/kbuild/kbuild.rst +index f1e5dce86af7c..852ccc551bb3a 100644 +--- a/Documentation/kbuild/kbuild.rst ++++ b/Documentation/kbuild/kbuild.rst +@@ -262,3 +262,8 @@ KBUILD_BUILD_USER, KBUILD_BUILD_HOST + These two variables allow to override the user@host string displayed during + boot and in /proc/version. The default value is the output of the commands + whoami and host, respectively. ++ ++LLVM ++---- ++If this variable is set to 1, Kbuild will use Clang and LLVM utilities instead ++of GCC and GNU binutils to build the kernel. +diff --git a/Documentation/kbuild/llvm.rst b/Documentation/kbuild/llvm.rst +new file mode 100644 +index 0000000000000..c776b6eee969f +--- /dev/null ++++ b/Documentation/kbuild/llvm.rst +@@ -0,0 +1,87 @@ ++============================== ++Building Linux with Clang/LLVM ++============================== ++ ++This document covers how to build the Linux kernel with Clang and LLVM ++utilities. ++ ++About ++----- ++ ++The Linux kernel has always traditionally been compiled with GNU toolchains ++such as GCC and binutils. Ongoing work has allowed for `Clang ++`_ and `LLVM `_ utilities to be ++used as viable substitutes. Distributions such as `Android ++`_, `ChromeOS ++`_, and `OpenMandriva ++`_ use Clang built kernels. `LLVM is a ++collection of toolchain components implemented in terms of C++ objects ++`_. Clang is a front-end to LLVM that ++supports C and the GNU C extensions required by the kernel, and is pronounced ++"klang," not "see-lang." ++ ++Clang ++----- ++ ++The compiler used can be swapped out via `CC=` command line argument to `make`. ++`CC=` should be set when selecting a config and during a build. ++ ++ make CC=clang defconfig ++ ++ make CC=clang ++ ++Cross Compiling ++--------------- ++ ++A single Clang compiler binary will typically contain all supported backends, ++which can help simplify cross compiling. ++ ++ ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- make CC=clang ++ ++`CROSS_COMPILE` is not used to prefix the Clang compiler binary, instead ++`CROSS_COMPILE` is used to set a command line flag: `--target `. For ++example: ++ ++ clang --target aarch64-linux-gnu foo.c ++ ++LLVM Utilities ++-------------- ++ ++LLVM has substitutes for GNU binutils utilities. Kbuild supports `LLVM=1` ++to enable them. ++ ++ make LLVM=1 ++ ++They can be enabled individually. The full list of the parameters: ++ ++ make CC=clang LD=ld.lld AR=llvm-ar NM=llvm-nm STRIP=llvm-strip \\ ++ OBJCOPY=llvm-objcopy OBJDUMP=llvm-objdump OBJSIZE=llvm-size \\ ++ READELF=llvm-readelf HOSTCC=clang HOSTCXX=clang++ HOSTAR=llvm-ar \\ ++ HOSTLD=ld.lld ++ ++Currently, the integrated assembler is disabled by default. You can pass ++`LLVM_IAS=1` to enable it. 
++ ++Getting Help ++------------ ++ ++- `Website `_ ++- `Mailing List `_: ++- `Issue Tracker `_ ++- IRC: #clangbuiltlinux on chat.freenode.net ++- `Telegram `_: @ClangBuiltLinux ++- `Wiki `_ ++- `Beginner Bugs `_ ++ ++Getting LLVM ++------------- ++ ++- http://releases.llvm.org/download.html ++- https://github.com/llvm/llvm-project ++- https://llvm.org/docs/GettingStarted.html ++- https://llvm.org/docs/CMake.html ++- https://apt.llvm.org/ ++- https://www.archlinux.org/packages/extra/x86_64/llvm/ ++- https://github.com/ClangBuiltLinux/tc-build ++- https://github.com/ClangBuiltLinux/linux/wiki/Building-Clang-from-source ++- https://android.googlesource.com/platform/prebuilts/clang/host/linux-x86/ +diff --git a/MAINTAINERS b/MAINTAINERS +index fe6fa5d3a63e5..1407008df7491 100644 +--- a/MAINTAINERS ++++ b/MAINTAINERS +@@ -4028,6 +4028,7 @@ B: https://github.com/ClangBuiltLinux/linux/issues + C: irc://chat.freenode.net/clangbuiltlinux + S: Supported + K: \b(?i:clang|llvm)\b ++F: Documentation/kbuild/llvm.rst + + CLEANCACHE API + M: Konrad Rzeszutek Wilk +diff --git a/Makefile b/Makefile +index 7c001e21e28e7..2c21b922644d7 100644 +--- a/Makefile ++++ b/Makefile +@@ -1,7 +1,7 @@ + # SPDX-License-Identifier: GPL-2.0 + VERSION = 5 + PATCHLEVEL = 4 +-SUBLEVEL = 60 ++SUBLEVEL = 61 + EXTRAVERSION = + NAME = Kleptomaniac Octopus + +@@ -394,8 +394,13 @@ HOST_LFS_CFLAGS := $(shell getconf LFS_CFLAGS 2>/dev/null) + HOST_LFS_LDFLAGS := $(shell getconf LFS_LDFLAGS 2>/dev/null) + HOST_LFS_LIBS := $(shell getconf LFS_LIBS 2>/dev/null) + +-HOSTCC = gcc +-HOSTCXX = g++ ++ifneq ($(LLVM),) ++HOSTCC = clang ++HOSTCXX = clang++ ++else ++HOSTCC = gcc ++HOSTCXX = g++ ++endif + KBUILD_HOSTCFLAGS := -Wall -Wmissing-prototypes -Wstrict-prototypes -O2 \ + -fomit-frame-pointer -std=gnu89 $(HOST_LFS_CFLAGS) \ + $(HOSTCFLAGS) +@@ -404,16 +409,28 @@ KBUILD_HOSTLDFLAGS := $(HOST_LFS_LDFLAGS) $(HOSTLDFLAGS) + KBUILD_HOSTLDLIBS := $(HOST_LFS_LIBS) $(HOSTLDLIBS) + + # Make variables (CC, etc...) 
+-AS = $(CROSS_COMPILE)as +-LD = $(CROSS_COMPILE)ld +-CC = $(CROSS_COMPILE)gcc + CPP = $(CC) -E ++ifneq ($(LLVM),) ++CC = clang ++LD = ld.lld ++AR = llvm-ar ++NM = llvm-nm ++OBJCOPY = llvm-objcopy ++OBJDUMP = llvm-objdump ++READELF = llvm-readelf ++OBJSIZE = llvm-size ++STRIP = llvm-strip ++else ++CC = $(CROSS_COMPILE)gcc ++LD = $(CROSS_COMPILE)ld + AR = $(CROSS_COMPILE)ar + NM = $(CROSS_COMPILE)nm +-STRIP = $(CROSS_COMPILE)strip + OBJCOPY = $(CROSS_COMPILE)objcopy + OBJDUMP = $(CROSS_COMPILE)objdump ++READELF = $(CROSS_COMPILE)readelf + OBJSIZE = $(CROSS_COMPILE)size ++STRIP = $(CROSS_COMPILE)strip ++endif + PAHOLE = pahole + LEX = flex + YACC = bison +@@ -422,7 +439,6 @@ INSTALLKERNEL := installkernel + DEPMOD = /sbin/depmod + PERL = perl + PYTHON = python +-PYTHON2 = python2 + PYTHON3 = python3 + CHECK = sparse + BASH = bash +@@ -471,9 +487,9 @@ KBUILD_LDFLAGS := + GCC_PLUGINS_CFLAGS := + CLANG_FLAGS := + +-export ARCH SRCARCH CONFIG_SHELL BASH HOSTCC KBUILD_HOSTCFLAGS CROSS_COMPILE AS LD CC +-export CPP AR NM STRIP OBJCOPY OBJDUMP OBJSIZE PAHOLE LEX YACC AWK INSTALLKERNEL +-export PERL PYTHON PYTHON2 PYTHON3 CHECK CHECKFLAGS MAKE UTS_MACHINE HOSTCXX ++export ARCH SRCARCH CONFIG_SHELL BASH HOSTCC KBUILD_HOSTCFLAGS CROSS_COMPILE LD CC ++export CPP AR NM STRIP OBJCOPY OBJDUMP OBJSIZE READELF PAHOLE LEX YACC AWK INSTALLKERNEL ++export PERL PYTHON PYTHON3 CHECK CHECKFLAGS MAKE UTS_MACHINE HOSTCXX + export KBUILD_HOSTCXXFLAGS KBUILD_HOSTLDFLAGS KBUILD_HOSTLDLIBS LDFLAGS_MODULE + + export KBUILD_CPPFLAGS NOSTDINC_FLAGS LINUXINCLUDE OBJCOPYFLAGS KBUILD_LDFLAGS +@@ -534,7 +550,7 @@ endif + ifneq ($(GCC_TOOLCHAIN),) + CLANG_FLAGS += --gcc-toolchain=$(GCC_TOOLCHAIN) + endif +-ifeq ($(shell $(AS) --version 2>&1 | head -n 1 | grep clang),) ++ifneq ($(LLVM_IAS),1) + CLANG_FLAGS += -no-integrated-as + endif + CLANG_FLAGS += -Werror=unknown-warning-option +diff --git a/arch/alpha/include/asm/io.h b/arch/alpha/include/asm/io.h +index b771bf1b53523..103270d5a9fc6 100644 +--- a/arch/alpha/include/asm/io.h ++++ b/arch/alpha/include/asm/io.h +@@ -502,10 +502,10 @@ extern inline void writeq(u64 b, volatile void __iomem *addr) + } + #endif + +-#define ioread16be(p) be16_to_cpu(ioread16(p)) +-#define ioread32be(p) be32_to_cpu(ioread32(p)) +-#define iowrite16be(v,p) iowrite16(cpu_to_be16(v), (p)) +-#define iowrite32be(v,p) iowrite32(cpu_to_be32(v), (p)) ++#define ioread16be(p) swab16(ioread16(p)) ++#define ioread32be(p) swab32(ioread32(p)) ++#define iowrite16be(v,p) iowrite16(swab16(v), (p)) ++#define iowrite32be(v,p) iowrite32(swab32(v), (p)) + + #define inb_p inb + #define inw_p inw +diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h +index 1b179b1f46bc5..dd03d5e01a946 100644 +--- a/arch/arm/include/asm/kvm_host.h ++++ b/arch/arm/include/asm/kvm_host.h +@@ -266,7 +266,7 @@ int __kvm_arm_vcpu_set_events(struct kvm_vcpu *vcpu, + + #define KVM_ARCH_WANT_MMU_NOTIFIER + int kvm_unmap_hva_range(struct kvm *kvm, +- unsigned long start, unsigned long end); ++ unsigned long start, unsigned long end, unsigned flags); + int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte); + + unsigned long kvm_arm_num_regs(struct kvm_vcpu *vcpu); +diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile +index d65aef47ece3b..11a7d6208087f 100644 +--- a/arch/arm64/Makefile ++++ b/arch/arm64/Makefile +@@ -146,6 +146,7 @@ zinstall install: + PHONY += vdso_install + vdso_install: + $(Q)$(MAKE) $(build)=arch/arm64/kernel/vdso $@ ++ $(Q)$(MAKE) $(build)=arch/arm64/kernel/vdso32 $@ + + # We use 
MRPROPER_FILES and CLEAN_FILES now + archclean: +diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h +index 0c3bd6aff6e91..d719c6b4dd81c 100644 +--- a/arch/arm64/include/asm/kvm_host.h ++++ b/arch/arm64/include/asm/kvm_host.h +@@ -427,7 +427,7 @@ int __kvm_arm_vcpu_set_events(struct kvm_vcpu *vcpu, + + #define KVM_ARCH_WANT_MMU_NOTIFIER + int kvm_unmap_hva_range(struct kvm *kvm, +- unsigned long start, unsigned long end); ++ unsigned long start, unsigned long end, unsigned flags); + int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte); + int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end); + int kvm_test_age_hva(struct kvm *kvm, unsigned long hva); +diff --git a/arch/arm64/kernel/vdso32/Makefile b/arch/arm64/kernel/vdso32/Makefile +index 76b327f88fbb1..40dffe60b8454 100644 +--- a/arch/arm64/kernel/vdso32/Makefile ++++ b/arch/arm64/kernel/vdso32/Makefile +@@ -190,7 +190,7 @@ quiet_cmd_vdsosym = VDSOSYM $@ + cmd_vdsosym = $(NM) $< | $(gen-vdsosym) | LC_ALL=C sort > $@ + + # Install commands for the unstripped file +-quiet_cmd_vdso_install = INSTALL $@ ++quiet_cmd_vdso_install = INSTALL32 $@ + cmd_vdso_install = cp $(obj)/$@.dbg $(MODLIB)/vdso/vdso32.so + + vdso.so: $(obj)/vdso.so.dbg +diff --git a/arch/m68k/include/asm/m53xxacr.h b/arch/m68k/include/asm/m53xxacr.h +index 9138a624c5c81..692f90e7fecc1 100644 +--- a/arch/m68k/include/asm/m53xxacr.h ++++ b/arch/m68k/include/asm/m53xxacr.h +@@ -89,9 +89,9 @@ + * coherency though in all cases. And for copyback caches we will need + * to push cached data as well. + */ +-#define CACHE_INIT CACR_CINVA +-#define CACHE_INVALIDATE CACR_CINVA +-#define CACHE_INVALIDATED CACR_CINVA ++#define CACHE_INIT (CACHE_MODE + CACR_CINVA - CACR_EC) ++#define CACHE_INVALIDATE (CACHE_MODE + CACR_CINVA) ++#define CACHE_INVALIDATED (CACHE_MODE + CACR_CINVA) + + #define ACR0_MODE ((CONFIG_RAMBASE & 0xff000000) + \ + (0x000f0000) + \ +diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h +index 7b47a323dc23e..356c61074d136 100644 +--- a/arch/mips/include/asm/kvm_host.h ++++ b/arch/mips/include/asm/kvm_host.h +@@ -939,7 +939,7 @@ enum kvm_mips_fault_result kvm_trap_emul_gva_fault(struct kvm_vcpu *vcpu, + + #define KVM_ARCH_WANT_MMU_NOTIFIER + int kvm_unmap_hva_range(struct kvm *kvm, +- unsigned long start, unsigned long end); ++ unsigned long start, unsigned long end, unsigned flags); + int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte); + int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end); + int kvm_test_age_hva(struct kvm *kvm, unsigned long hva); +diff --git a/arch/mips/kernel/setup.c b/arch/mips/kernel/setup.c +index 7b06e6ee6817d..b8884de89c81e 100644 +--- a/arch/mips/kernel/setup.c ++++ b/arch/mips/kernel/setup.c +@@ -494,7 +494,7 @@ static void __init mips_parse_crashkernel(void) + if (ret != 0 || crash_size <= 0) + return; + +- if (!memblock_find_in_range(crash_base, crash_base + crash_size, crash_size, 0)) { ++ if (!memblock_find_in_range(crash_base, crash_base + crash_size, crash_size, 1)) { + pr_warn("Invalid memory region reserved for crash kernel\n"); + return; + } +diff --git a/arch/mips/kvm/mmu.c b/arch/mips/kvm/mmu.c +index 97e538a8c1be2..97f63a84aa51f 100644 +--- a/arch/mips/kvm/mmu.c ++++ b/arch/mips/kvm/mmu.c +@@ -512,7 +512,8 @@ static int kvm_unmap_hva_handler(struct kvm *kvm, gfn_t gfn, gfn_t gfn_end, + return 1; + } + +-int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end) ++int 
kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end, ++ unsigned flags) + { + handle_hva_to_gpa(kvm, start, end, &kvm_unmap_hva_handler, NULL); + +diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h +index 6fe6ad64cba57..740b52ec35097 100644 +--- a/arch/powerpc/include/asm/kvm_host.h ++++ b/arch/powerpc/include/asm/kvm_host.h +@@ -58,7 +58,8 @@ + #define KVM_ARCH_WANT_MMU_NOTIFIER + + extern int kvm_unmap_hva_range(struct kvm *kvm, +- unsigned long start, unsigned long end); ++ unsigned long start, unsigned long end, ++ unsigned flags); + extern int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end); + extern int kvm_test_age_hva(struct kvm *kvm, unsigned long hva); + extern int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte); +diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c +index ec2547cc5ecbe..1ff971f3b06f9 100644 +--- a/arch/powerpc/kvm/book3s.c ++++ b/arch/powerpc/kvm/book3s.c +@@ -867,7 +867,8 @@ void kvmppc_core_commit_memory_region(struct kvm *kvm, + kvm->arch.kvm_ops->commit_memory_region(kvm, mem, old, new, change); + } + +-int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end) ++int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end, ++ unsigned flags) + { + return kvm->arch.kvm_ops->unmap_hva_range(kvm, start, end); + } +diff --git a/arch/powerpc/kvm/e500_mmu_host.c b/arch/powerpc/kvm/e500_mmu_host.c +index 321db0fdb9db8..7154bd424d243 100644 +--- a/arch/powerpc/kvm/e500_mmu_host.c ++++ b/arch/powerpc/kvm/e500_mmu_host.c +@@ -734,7 +734,8 @@ static int kvm_unmap_hva(struct kvm *kvm, unsigned long hva) + return 0; + } + +-int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end) ++int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end, ++ unsigned flags) + { + /* kvm_unmap_hva flushes everything anyways */ + kvm_unmap_hva(kvm, start); +diff --git a/arch/powerpc/platforms/pseries/ras.c b/arch/powerpc/platforms/pseries/ras.c +index 13ef77fd648f4..b3c4848869e52 100644 +--- a/arch/powerpc/platforms/pseries/ras.c ++++ b/arch/powerpc/platforms/pseries/ras.c +@@ -184,7 +184,6 @@ static void handle_system_shutdown(char event_modifier) + case EPOW_SHUTDOWN_ON_UPS: + pr_emerg("Loss of system power detected. System is running on" + " UPS/battery. Check RTAS error log for details\n"); +- orderly_poweroff(true); + break; + + case EPOW_SHUTDOWN_LOSS_OF_CRITICAL_FUNCTIONS: +diff --git a/arch/s390/kernel/ptrace.c b/arch/s390/kernel/ptrace.c +index 5aa786063eb3e..c6aef2ecf2890 100644 +--- a/arch/s390/kernel/ptrace.c ++++ b/arch/s390/kernel/ptrace.c +@@ -1283,7 +1283,6 @@ static bool is_ri_cb_valid(struct runtime_instr_cb *cb) + cb->pc == 1 && + cb->qc == 0 && + cb->reserved2 == 0 && +- cb->key == PAGE_DEFAULT_KEY && + cb->reserved3 == 0 && + cb->reserved4 == 0 && + cb->reserved5 == 0 && +@@ -1347,7 +1346,11 @@ static int s390_runtime_instr_set(struct task_struct *target, + kfree(data); + return -EINVAL; + } +- ++ /* ++ * Override access key in any case, since user space should ++ * not be able to set it, nor should it care about it. 
++ */ ++ ri_cb.key = PAGE_DEFAULT_KEY >> 4; + preempt_disable(); + if (!target->thread.ri_cb) + target->thread.ri_cb = data; +diff --git a/arch/s390/kernel/runtime_instr.c b/arch/s390/kernel/runtime_instr.c +index 125c7f6e87150..1788a5454b6fc 100644 +--- a/arch/s390/kernel/runtime_instr.c ++++ b/arch/s390/kernel/runtime_instr.c +@@ -57,7 +57,7 @@ static void init_runtime_instr_cb(struct runtime_instr_cb *cb) + cb->k = 1; + cb->ps = 1; + cb->pc = 1; +- cb->key = PAGE_DEFAULT_KEY; ++ cb->key = PAGE_DEFAULT_KEY >> 4; + cb->v = 1; + } + +diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile +index 6b84afdd75382..98aac5b4bdb7e 100644 +--- a/arch/x86/boot/compressed/Makefile ++++ b/arch/x86/boot/compressed/Makefile +@@ -102,7 +102,7 @@ vmlinux-objs-$(CONFIG_EFI_MIXED) += $(obj)/efi_thunk_$(BITS).o + quiet_cmd_check_data_rel = DATAREL $@ + define cmd_check_data_rel + for obj in $(filter %.o,$^); do \ +- ${CROSS_COMPILE}readelf -S $$obj | grep -qF .rel.local && { \ ++ $(READELF) -S $$obj | grep -qF .rel.local && { \ + echo "error: $$obj has data relocations!" >&2; \ + exit 1; \ + } || true; \ +diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h +index 742de9d97ba14..c41686641c3fb 100644 +--- a/arch/x86/include/asm/kvm_host.h ++++ b/arch/x86/include/asm/kvm_host.h +@@ -1553,7 +1553,8 @@ asmlinkage void kvm_spurious_fault(void); + _ASM_EXTABLE(666b, 667b) + + #define KVM_ARCH_WANT_MMU_NOTIFIER +-int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end); ++int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end, ++ unsigned flags); + int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end); + int kvm_test_age_hva(struct kvm *kvm, unsigned long hva); + int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte); +diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c +index 342d9ddf35c3a..bb743f956c232 100644 +--- a/arch/x86/kvm/mmu.c ++++ b/arch/x86/kvm/mmu.c +@@ -2045,7 +2045,8 @@ static int kvm_handle_hva(struct kvm *kvm, unsigned long hva, + return kvm_handle_hva_range(kvm, hva, hva + 1, data, handler); + } + +-int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end) ++int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end, ++ unsigned flags) + { + return kvm_handle_hva_range(kvm, start, end, 0, kvm_unmap_rmapp); + } +diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c +index 38b2df0e71096..8920ee7b28811 100644 +--- a/arch/x86/kvm/x86.c ++++ b/arch/x86/kvm/x86.c +@@ -972,7 +972,7 @@ int kvm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4) + { + unsigned long old_cr4 = kvm_read_cr4(vcpu); + unsigned long pdptr_bits = X86_CR4_PGE | X86_CR4_PSE | X86_CR4_PAE | +- X86_CR4_SMEP | X86_CR4_SMAP | X86_CR4_PKE; ++ X86_CR4_SMEP; + + if (kvm_valid_cr4(vcpu, cr4)) + return 1; +diff --git a/arch/x86/pci/xen.c b/arch/x86/pci/xen.c +index 91220cc258547..5c11ae66b5d8e 100644 +--- a/arch/x86/pci/xen.c ++++ b/arch/x86/pci/xen.c +@@ -26,6 +26,7 @@ + #include + #include + #include ++#include + #include + + static int xen_pcifront_enable_irq(struct pci_dev *dev) +diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c +index d3d7c4ef7d045..53dc0fd6f6d3c 100644 +--- a/drivers/cpufreq/intel_pstate.c ++++ b/drivers/cpufreq/intel_pstate.c +@@ -1571,6 +1571,7 @@ static void intel_pstate_get_cpu_pstates(struct cpudata *cpu) + + intel_pstate_get_hwp_max(cpu->cpu, &phy_max, ¤t_max); + cpu->pstate.turbo_freq = phy_max * 
cpu->pstate.scaling; ++ cpu->pstate.turbo_pstate = phy_max; + } else { + cpu->pstate.turbo_freq = cpu->pstate.turbo_pstate * cpu->pstate.scaling; + } +diff --git a/drivers/firmware/efi/efi.c b/drivers/firmware/efi/efi.c +index c1167ef5d2b35..b299e22b7532a 100644 +--- a/drivers/firmware/efi/efi.c ++++ b/drivers/firmware/efi/efi.c +@@ -345,6 +345,7 @@ static int __init efisubsys_init(void) + efi_kobj = kobject_create_and_add("efi", firmware_kobj); + if (!efi_kobj) { + pr_err("efi: Firmware registration failed.\n"); ++ destroy_workqueue(efi_rts_wq); + return -ENOMEM; + } + +@@ -381,6 +382,7 @@ err_unregister: + generic_ops_unregister(); + err_put: + kobject_put(efi_kobj); ++ destroy_workqueue(efi_rts_wq); + return error; + } + +diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +index 6091194a3955c..2c0eb7140ca0e 100644 +--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c ++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +@@ -1434,6 +1434,7 @@ amdgpu_dm_update_connector_after_detect(struct amdgpu_dm_connector *aconnector) + + drm_connector_update_edid_property(connector, + aconnector->edid); ++ drm_add_edid_modes(connector, aconnector->edid); + + if (aconnector->dc_link->aux_mode) + drm_dp_cec_set_edid(&aconnector->dm_dp_aux.aux, +diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c +index c13dce760098c..05b98eadc2899 100644 +--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c ++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c +@@ -2845,7 +2845,7 @@ static bool dcn20_validate_bandwidth_internal(struct dc *dc, struct dc_state *co + int vlevel = 0; + int pipe_split_from[MAX_PIPES]; + int pipe_cnt = 0; +- display_e2e_pipe_params_st *pipes = kzalloc(dc->res_pool->pipe_count * sizeof(display_e2e_pipe_params_st), GFP_KERNEL); ++ display_e2e_pipe_params_st *pipes = kzalloc(dc->res_pool->pipe_count * sizeof(display_e2e_pipe_params_st), GFP_ATOMIC); + DC_LOGGER_INIT(dc->ctx->logger); + + BW_VAL_TRACE_COUNT(); +diff --git a/drivers/gpu/drm/amd/display/include/fixed31_32.h b/drivers/gpu/drm/amd/display/include/fixed31_32.h +index 89ef9f6860e5b..16df2a485dd0d 100644 +--- a/drivers/gpu/drm/amd/display/include/fixed31_32.h ++++ b/drivers/gpu/drm/amd/display/include/fixed31_32.h +@@ -431,6 +431,9 @@ struct fixed31_32 dc_fixpt_log(struct fixed31_32 arg); + */ + static inline struct fixed31_32 dc_fixpt_pow(struct fixed31_32 arg1, struct fixed31_32 arg2) + { ++ if (arg1.value == 0) ++ return arg2.value == 0 ? 
dc_fixpt_one : dc_fixpt_zero; ++ + return dc_fixpt_exp( + dc_fixpt_mul( + dc_fixpt_log(arg1), +diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c b/drivers/gpu/drm/ttm/ttm_bo_vm.c +index 46dc3de7e81bf..f2bad14ac04ab 100644 +--- a/drivers/gpu/drm/ttm/ttm_bo_vm.c ++++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c +@@ -358,8 +358,10 @@ static int ttm_bo_vm_access_kmap(struct ttm_buffer_object *bo, + static int ttm_bo_vm_access(struct vm_area_struct *vma, unsigned long addr, + void *buf, int len, int write) + { +- unsigned long offset = (addr) - vma->vm_start; + struct ttm_buffer_object *bo = vma->vm_private_data; ++ unsigned long offset = (addr) - vma->vm_start + ++ ((vma->vm_pgoff - drm_vma_node_start(&bo->base.vma_node)) ++ << PAGE_SHIFT); + int ret; + + if (len < 1 || (offset + len) >> PAGE_SHIFT > bo->num_pages) +diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c +index 909eba43664a2..204d1df5a21d1 100644 +--- a/drivers/gpu/drm/vgem/vgem_drv.c ++++ b/drivers/gpu/drm/vgem/vgem_drv.c +@@ -229,32 +229,6 @@ static int vgem_gem_dumb_create(struct drm_file *file, struct drm_device *dev, + return 0; + } + +-static int vgem_gem_dumb_map(struct drm_file *file, struct drm_device *dev, +- uint32_t handle, uint64_t *offset) +-{ +- struct drm_gem_object *obj; +- int ret; +- +- obj = drm_gem_object_lookup(file, handle); +- if (!obj) +- return -ENOENT; +- +- if (!obj->filp) { +- ret = -EINVAL; +- goto unref; +- } +- +- ret = drm_gem_create_mmap_offset(obj); +- if (ret) +- goto unref; +- +- *offset = drm_vma_node_offset_addr(&obj->vma_node); +-unref: +- drm_gem_object_put_unlocked(obj); +- +- return ret; +-} +- + static struct drm_ioctl_desc vgem_ioctls[] = { + DRM_IOCTL_DEF_DRV(VGEM_FENCE_ATTACH, vgem_fence_attach_ioctl, DRM_RENDER_ALLOW), + DRM_IOCTL_DEF_DRV(VGEM_FENCE_SIGNAL, vgem_fence_signal_ioctl, DRM_RENDER_ALLOW), +@@ -448,7 +422,6 @@ static struct drm_driver vgem_driver = { + .fops = &vgem_driver_fops, + + .dumb_create = vgem_gem_dumb_create, +- .dumb_map_offset = vgem_gem_dumb_map, + + .prime_handle_to_fd = drm_gem_prime_handle_to_fd, + .prime_fd_to_handle = drm_gem_prime_fd_to_handle, +diff --git a/drivers/infiniband/hw/bnxt_re/main.c b/drivers/infiniband/hw/bnxt_re/main.c +index 27e2df44d043d..cfe5f47d9890e 100644 +--- a/drivers/infiniband/hw/bnxt_re/main.c ++++ b/drivers/infiniband/hw/bnxt_re/main.c +@@ -789,7 +789,8 @@ static int bnxt_re_handle_qp_async_event(struct creq_qp_event *qp_event, + struct ib_event event; + unsigned int flags; + +- if (qp->qplib_qp.state == CMDQ_MODIFY_QP_NEW_STATE_ERR) { ++ if (qp->qplib_qp.state == CMDQ_MODIFY_QP_NEW_STATE_ERR && ++ rdma_is_kernel_res(&qp->ib_qp.res)) { + flags = bnxt_re_lock_cqs(qp); + bnxt_qplib_add_flush_qp(&qp->qplib_qp); + bnxt_re_unlock_cqs(qp, flags); +diff --git a/drivers/infiniband/hw/hfi1/tid_rdma.c b/drivers/infiniband/hw/hfi1/tid_rdma.c +index 7c6fd720fb2ea..c018fc633cca3 100644 +--- a/drivers/infiniband/hw/hfi1/tid_rdma.c ++++ b/drivers/infiniband/hw/hfi1/tid_rdma.c +@@ -3215,6 +3215,7 @@ bool hfi1_tid_rdma_wqe_interlock(struct rvt_qp *qp, struct rvt_swqe *wqe) + case IB_WR_ATOMIC_CMP_AND_SWP: + case IB_WR_ATOMIC_FETCH_AND_ADD: + case IB_WR_RDMA_WRITE: ++ case IB_WR_RDMA_WRITE_WITH_IMM: + switch (prev->wr.opcode) { + case IB_WR_TID_RDMA_WRITE: + req = wqe_to_tid_req(prev); +diff --git a/drivers/input/mouse/psmouse-base.c b/drivers/input/mouse/psmouse-base.c +index 527ae0b9a191e..0b4a3039f312f 100644 +--- a/drivers/input/mouse/psmouse-base.c ++++ b/drivers/input/mouse/psmouse-base.c +@@ -2042,7 +2042,7 @@ static int 
psmouse_get_maxproto(char *buffer, const struct kernel_param *kp) + { + int type = *((unsigned int *)kp->arg); + +- return sprintf(buffer, "%s", psmouse_protocol_by_type(type)->name); ++ return sprintf(buffer, "%s\n", psmouse_protocol_by_type(type)->name); + } + + static int __init psmouse_init(void) +diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c +index 25ad64a3919f6..2cbfcd99b7ee7 100644 +--- a/drivers/md/bcache/super.c ++++ b/drivers/md/bcache/super.c +@@ -816,19 +816,19 @@ static int bcache_device_init(struct bcache_device *d, unsigned int block_size, + struct request_queue *q; + const size_t max_stripes = min_t(size_t, INT_MAX, + SIZE_MAX / sizeof(atomic_t)); +- size_t n; ++ uint64_t n; + int idx; + + if (!d->stripe_size) + d->stripe_size = 1 << 31; + +- d->nr_stripes = DIV_ROUND_UP_ULL(sectors, d->stripe_size); +- +- if (!d->nr_stripes || d->nr_stripes > max_stripes) { +- pr_err("nr_stripes too large or invalid: %u (start sector beyond end of disk?)", +- (unsigned int)d->nr_stripes); ++ n = DIV_ROUND_UP_ULL(sectors, d->stripe_size); ++ if (!n || n > max_stripes) { ++ pr_err("nr_stripes too large or invalid: %llu (start sector beyond end of disk?)\n", ++ n); + return -ENOMEM; + } ++ d->nr_stripes = n; + + n = d->nr_stripes * sizeof(atomic_t); + d->stripe_sectors_dirty = kvzalloc(n, GFP_KERNEL); +diff --git a/drivers/media/pci/ttpci/budget-core.c b/drivers/media/pci/ttpci/budget-core.c +index fadbdeeb44955..293867b9e7961 100644 +--- a/drivers/media/pci/ttpci/budget-core.c ++++ b/drivers/media/pci/ttpci/budget-core.c +@@ -369,20 +369,25 @@ static int budget_register(struct budget *budget) + ret = dvbdemux->dmx.add_frontend(&dvbdemux->dmx, &budget->hw_frontend); + + if (ret < 0) +- return ret; ++ goto err_release_dmx; + + budget->mem_frontend.source = DMX_MEMORY_FE; + ret = dvbdemux->dmx.add_frontend(&dvbdemux->dmx, &budget->mem_frontend); + if (ret < 0) +- return ret; ++ goto err_release_dmx; + + ret = dvbdemux->dmx.connect_frontend(&dvbdemux->dmx, &budget->hw_frontend); + if (ret < 0) +- return ret; ++ goto err_release_dmx; + + dvb_net_init(&budget->dvb_adapter, &budget->dvb_net, &dvbdemux->dmx); + + return 0; ++ ++err_release_dmx: ++ dvb_dmxdev_release(&budget->dmxdev); ++ dvb_dmx_release(&budget->demux); ++ return ret; + } + + static void budget_unregister(struct budget *budget) +diff --git a/drivers/media/platform/davinci/vpss.c b/drivers/media/platform/davinci/vpss.c +index d38d2bbb6f0f8..7000f0bf0b353 100644 +--- a/drivers/media/platform/davinci/vpss.c ++++ b/drivers/media/platform/davinci/vpss.c +@@ -505,19 +505,31 @@ static void vpss_exit(void) + + static int __init vpss_init(void) + { ++ int ret; ++ + if (!request_mem_region(VPSS_CLK_CTRL, 4, "vpss_clock_control")) + return -EBUSY; + + oper_cfg.vpss_regs_base2 = ioremap(VPSS_CLK_CTRL, 4); + if (unlikely(!oper_cfg.vpss_regs_base2)) { +- release_mem_region(VPSS_CLK_CTRL, 4); +- return -ENOMEM; ++ ret = -ENOMEM; ++ goto err_ioremap; + } + + writel(VPSS_CLK_CTRL_VENCCLKEN | +- VPSS_CLK_CTRL_DACCLKEN, oper_cfg.vpss_regs_base2); ++ VPSS_CLK_CTRL_DACCLKEN, oper_cfg.vpss_regs_base2); ++ ++ ret = platform_driver_register(&vpss_driver); ++ if (ret) ++ goto err_pd_register; ++ ++ return 0; + +- return platform_driver_register(&vpss_driver); ++err_pd_register: ++ iounmap(oper_cfg.vpss_regs_base2); ++err_ioremap: ++ release_mem_region(VPSS_CLK_CTRL, 4); ++ return ret; + } + subsys_initcall(vpss_init); + module_exit(vpss_exit); +diff --git a/drivers/media/platform/qcom/camss/camss.c 
b/drivers/media/platform/qcom/camss/camss.c +index 3fdc9f964a3c6..2483641799dfb 100644 +--- a/drivers/media/platform/qcom/camss/camss.c ++++ b/drivers/media/platform/qcom/camss/camss.c +@@ -504,7 +504,6 @@ static int camss_of_parse_ports(struct camss *camss) + return num_subdevs; + + err_cleanup: +- v4l2_async_notifier_cleanup(&camss->notifier); + of_node_put(node); + return ret; + } +@@ -835,29 +834,38 @@ static int camss_probe(struct platform_device *pdev) + camss->csid_num = 4; + camss->vfe_num = 2; + } else { +- return -EINVAL; ++ ret = -EINVAL; ++ goto err_free; + } + + camss->csiphy = devm_kcalloc(dev, camss->csiphy_num, + sizeof(*camss->csiphy), GFP_KERNEL); +- if (!camss->csiphy) +- return -ENOMEM; ++ if (!camss->csiphy) { ++ ret = -ENOMEM; ++ goto err_free; ++ } + + camss->csid = devm_kcalloc(dev, camss->csid_num, sizeof(*camss->csid), + GFP_KERNEL); +- if (!camss->csid) +- return -ENOMEM; ++ if (!camss->csid) { ++ ret = -ENOMEM; ++ goto err_free; ++ } + + camss->vfe = devm_kcalloc(dev, camss->vfe_num, sizeof(*camss->vfe), + GFP_KERNEL); +- if (!camss->vfe) +- return -ENOMEM; ++ if (!camss->vfe) { ++ ret = -ENOMEM; ++ goto err_free; ++ } + + v4l2_async_notifier_init(&camss->notifier); + + num_subdevs = camss_of_parse_ports(camss); +- if (num_subdevs < 0) +- return num_subdevs; ++ if (num_subdevs < 0) { ++ ret = num_subdevs; ++ goto err_cleanup; ++ } + + ret = camss_init_subdevices(camss); + if (ret < 0) +@@ -936,6 +944,8 @@ err_register_entities: + v4l2_device_unregister(&camss->v4l2_dev); + err_cleanup: + v4l2_async_notifier_cleanup(&camss->notifier); ++err_free: ++ kfree(camss); + + return ret; + } +diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c +index 499845c32b1bc..0d7a173f8e61c 100644 +--- a/drivers/net/bonding/bond_main.c ++++ b/drivers/net/bonding/bond_main.c +@@ -2037,7 +2037,8 @@ static int bond_release_and_destroy(struct net_device *bond_dev, + int ret; + + ret = __bond_release_one(bond_dev, slave_dev, false, true); +- if (ret == 0 && !bond_has_slaves(bond)) { ++ if (ret == 0 && !bond_has_slaves(bond) && ++ bond_dev->reg_state != NETREG_UNREGISTERING) { + bond_dev->priv_flags |= IFF_DISABLE_NETPOLL; + netdev_info(bond_dev, "Destroying bond\n"); + bond_remove_proc_entry(bond); +@@ -2777,6 +2778,9 @@ static int bond_ab_arp_inspect(struct bonding *bond) + if (bond_time_in_interval(bond, last_rx, 1)) { + bond_propose_link_state(slave, BOND_LINK_UP); + commit++; ++ } else if (slave->link == BOND_LINK_BACK) { ++ bond_propose_link_state(slave, BOND_LINK_FAIL); ++ commit++; + } + continue; + } +@@ -2885,6 +2889,19 @@ static void bond_ab_arp_commit(struct bonding *bond) + + continue; + ++ case BOND_LINK_FAIL: ++ bond_set_slave_link_state(slave, BOND_LINK_FAIL, ++ BOND_SLAVE_NOTIFY_NOW); ++ bond_set_slave_inactive_flags(slave, ++ BOND_SLAVE_NOTIFY_NOW); ++ ++ /* A slave has just been enslaved and has become ++ * the current active slave. 
++ */ ++ if (rtnl_dereference(bond->curr_active_slave)) ++ RCU_INIT_POINTER(bond->current_arp_slave, NULL); ++ continue; ++ + default: + slave_err(bond->dev, slave->dev, + "impossible: link_new_state %d on slave\n", +@@ -2935,8 +2952,6 @@ static bool bond_ab_arp_probe(struct bonding *bond) + return should_notify_rtnl; + } + +- bond_set_slave_inactive_flags(curr_arp_slave, BOND_SLAVE_NOTIFY_LATER); +- + bond_for_each_slave_rcu(bond, slave, iter) { + if (!found && !before && bond_slave_is_up(slave)) + before = slave; +@@ -4246,13 +4261,23 @@ static netdev_tx_t bond_start_xmit(struct sk_buff *skb, struct net_device *dev) + return ret; + } + ++static u32 bond_mode_bcast_speed(struct slave *slave, u32 speed) ++{ ++ if (speed == 0 || speed == SPEED_UNKNOWN) ++ speed = slave->speed; ++ else ++ speed = min(speed, slave->speed); ++ ++ return speed; ++} ++ + static int bond_ethtool_get_link_ksettings(struct net_device *bond_dev, + struct ethtool_link_ksettings *cmd) + { + struct bonding *bond = netdev_priv(bond_dev); +- unsigned long speed = 0; + struct list_head *iter; + struct slave *slave; ++ u32 speed = 0; + + cmd->base.duplex = DUPLEX_UNKNOWN; + cmd->base.port = PORT_OTHER; +@@ -4264,8 +4289,13 @@ static int bond_ethtool_get_link_ksettings(struct net_device *bond_dev, + */ + bond_for_each_slave(bond, slave, iter) { + if (bond_slave_can_tx(slave)) { +- if (slave->speed != SPEED_UNKNOWN) +- speed += slave->speed; ++ if (slave->speed != SPEED_UNKNOWN) { ++ if (BOND_MODE(bond) == BOND_MODE_BROADCAST) ++ speed = bond_mode_bcast_speed(slave, ++ speed); ++ else ++ speed += slave->speed; ++ } + if (cmd->base.duplex == DUPLEX_UNKNOWN && + slave->duplex != DUPLEX_UNKNOWN) + cmd->base.duplex = slave->duplex; +diff --git a/drivers/net/dsa/b53/b53_common.c b/drivers/net/dsa/b53/b53_common.c +index 14850b7fe6d7f..4bd66ba72c03c 100644 +--- a/drivers/net/dsa/b53/b53_common.c ++++ b/drivers/net/dsa/b53/b53_common.c +@@ -1523,6 +1523,8 @@ static int b53_arl_op(struct b53_device *dev, int op, int port, + return ret; + + switch (ret) { ++ case -ETIMEDOUT: ++ return ret; + case -ENOSPC: + dev_dbg(dev->dev, "{%pM,%.4d} no space left in ARL\n", + addr, vid); +diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c +index 26325f7b3c1fa..4d0d13d5d0998 100644 +--- a/drivers/net/ethernet/amazon/ena/ena_netdev.c ++++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c +@@ -2835,16 +2835,14 @@ static void ena_fw_reset_device(struct work_struct *work) + { + struct ena_adapter *adapter = + container_of(work, struct ena_adapter, reset_task); +- struct pci_dev *pdev = adapter->pdev; + +- if (unlikely(!test_bit(ENA_FLAG_TRIGGER_RESET, &adapter->flags))) { +- dev_err(&pdev->dev, +- "device reset schedule while reset bit is off\n"); +- return; +- } + rtnl_lock(); +- ena_destroy_device(adapter, false); +- ena_restore_device(adapter); ++ ++ if (likely(test_bit(ENA_FLAG_TRIGGER_RESET, &adapter->flags))) { ++ ena_destroy_device(adapter, false); ++ ena_restore_device(adapter); ++ } ++ + rtnl_unlock(); + } + +@@ -3675,8 +3673,11 @@ static void __ena_shutoff(struct pci_dev *pdev, bool shutdown) + netdev->rx_cpu_rmap = NULL; + } + #endif /* CONFIG_RFS_ACCEL */ +- del_timer_sync(&adapter->timer_service); + ++ /* Make sure timer and reset routine won't be called after ++ * freeing device resources. 
++ */ ++ del_timer_sync(&adapter->timer_service); + cancel_work_sync(&adapter->reset_task); + + rtnl_lock(); /* lock released inside the below if-else block */ +diff --git a/drivers/net/ethernet/cortina/gemini.c b/drivers/net/ethernet/cortina/gemini.c +index 01ae113f122a0..28d4c54505f9a 100644 +--- a/drivers/net/ethernet/cortina/gemini.c ++++ b/drivers/net/ethernet/cortina/gemini.c +@@ -2388,7 +2388,7 @@ static int gemini_ethernet_port_probe(struct platform_device *pdev) + + dev_info(dev, "probe %s ID %d\n", dev_name(dev), id); + +- netdev = alloc_etherdev_mq(sizeof(*port), TX_QUEUE_NUM); ++ netdev = devm_alloc_etherdev_mqs(dev, sizeof(*port), TX_QUEUE_NUM, TX_QUEUE_NUM); + if (!netdev) { + dev_err(dev, "Can't allocate ethernet device #%d\n", id); + return -ENOMEM; +@@ -2520,7 +2520,6 @@ static int gemini_ethernet_port_probe(struct platform_device *pdev) + } + + port->netdev = NULL; +- free_netdev(netdev); + return ret; + } + +@@ -2529,7 +2528,6 @@ static int gemini_ethernet_port_remove(struct platform_device *pdev) + struct gemini_ethernet_port *port = platform_get_drvdata(pdev); + + gemini_port_remove(port); +- free_netdev(port->netdev); + return 0; + } + +diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c +index 39c112f1543c1..a0e4b12ac4ea2 100644 +--- a/drivers/net/ethernet/freescale/fec_main.c ++++ b/drivers/net/ethernet/freescale/fec_main.c +@@ -3707,11 +3707,11 @@ failed_mii_init: + failed_irq: + failed_init: + fec_ptp_stop(pdev); +- if (fep->reg_phy) +- regulator_disable(fep->reg_phy); + failed_reset: + pm_runtime_put_noidle(&pdev->dev); + pm_runtime_disable(&pdev->dev); ++ if (fep->reg_phy) ++ regulator_disable(fep->reg_phy); + failed_regulator: + clk_disable_unprepare(fep->clk_ahb); + failed_clk_ahb: +diff --git a/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h b/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h +index 69a2daaca5c56..d7684ac2522ef 100644 +--- a/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h ++++ b/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h +@@ -1211,7 +1211,7 @@ struct i40e_aqc_set_vsi_promiscuous_modes { + #define I40E_AQC_SET_VSI_PROMISC_BROADCAST 0x04 + #define I40E_AQC_SET_VSI_DEFAULT 0x08 + #define I40E_AQC_SET_VSI_PROMISC_VLAN 0x10 +-#define I40E_AQC_SET_VSI_PROMISC_TX 0x8000 ++#define I40E_AQC_SET_VSI_PROMISC_RX_ONLY 0x8000 + __le16 seid; + #define I40E_AQC_VSI_PROM_CMD_SEID_MASK 0x3FF + __le16 vlan_tag; +diff --git a/drivers/net/ethernet/intel/i40e/i40e_common.c b/drivers/net/ethernet/intel/i40e/i40e_common.c +index 3160b5bbe6728..66f7deaf46ae2 100644 +--- a/drivers/net/ethernet/intel/i40e/i40e_common.c ++++ b/drivers/net/ethernet/intel/i40e/i40e_common.c +@@ -1949,6 +1949,21 @@ i40e_status i40e_aq_set_phy_debug(struct i40e_hw *hw, u8 cmd_flags, + return status; + } + ++/** ++ * i40e_is_aq_api_ver_ge ++ * @aq: pointer to AdminQ info containing HW API version to compare ++ * @maj: API major value ++ * @min: API minor value ++ * ++ * Assert whether current HW API version is greater/equal than provided. 
++ **/ ++static bool i40e_is_aq_api_ver_ge(struct i40e_adminq_info *aq, u16 maj, ++ u16 min) ++{ ++ return (aq->api_maj_ver > maj || ++ (aq->api_maj_ver == maj && aq->api_min_ver >= min)); ++} ++ + /** + * i40e_aq_add_vsi + * @hw: pointer to the hw struct +@@ -2074,18 +2089,16 @@ i40e_status i40e_aq_set_vsi_unicast_promiscuous(struct i40e_hw *hw, + + if (set) { + flags |= I40E_AQC_SET_VSI_PROMISC_UNICAST; +- if (rx_only_promisc && +- (((hw->aq.api_maj_ver == 1) && (hw->aq.api_min_ver >= 5)) || +- (hw->aq.api_maj_ver > 1))) +- flags |= I40E_AQC_SET_VSI_PROMISC_TX; ++ if (rx_only_promisc && i40e_is_aq_api_ver_ge(&hw->aq, 1, 5)) ++ flags |= I40E_AQC_SET_VSI_PROMISC_RX_ONLY; + } + + cmd->promiscuous_flags = cpu_to_le16(flags); + + cmd->valid_flags = cpu_to_le16(I40E_AQC_SET_VSI_PROMISC_UNICAST); +- if (((hw->aq.api_maj_ver >= 1) && (hw->aq.api_min_ver >= 5)) || +- (hw->aq.api_maj_ver > 1)) +- cmd->valid_flags |= cpu_to_le16(I40E_AQC_SET_VSI_PROMISC_TX); ++ if (i40e_is_aq_api_ver_ge(&hw->aq, 1, 5)) ++ cmd->valid_flags |= ++ cpu_to_le16(I40E_AQC_SET_VSI_PROMISC_RX_ONLY); + + cmd->seid = cpu_to_le16(seid); + status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); +@@ -2182,11 +2195,17 @@ enum i40e_status_code i40e_aq_set_vsi_uc_promisc_on_vlan(struct i40e_hw *hw, + i40e_fill_default_direct_cmd_desc(&desc, + i40e_aqc_opc_set_vsi_promiscuous_modes); + +- if (enable) ++ if (enable) { + flags |= I40E_AQC_SET_VSI_PROMISC_UNICAST; ++ if (i40e_is_aq_api_ver_ge(&hw->aq, 1, 5)) ++ flags |= I40E_AQC_SET_VSI_PROMISC_RX_ONLY; ++ } + + cmd->promiscuous_flags = cpu_to_le16(flags); + cmd->valid_flags = cpu_to_le16(I40E_AQC_SET_VSI_PROMISC_UNICAST); ++ if (i40e_is_aq_api_ver_ge(&hw->aq, 1, 5)) ++ cmd->valid_flags |= ++ cpu_to_le16(I40E_AQC_SET_VSI_PROMISC_RX_ONLY); + cmd->seid = cpu_to_le16(seid); + cmd->vlan_tag = cpu_to_le16(vid | I40E_AQC_SET_VSI_VLAN_VALID); + +diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c +index 095ed81cc0ba4..b3c3911adfc2e 100644 +--- a/drivers/net/ethernet/intel/i40e/i40e_main.c ++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c +@@ -15342,6 +15342,9 @@ static void i40e_remove(struct pci_dev *pdev) + i40e_write_rx_ctl(hw, I40E_PFQF_HENA(0), 0); + i40e_write_rx_ctl(hw, I40E_PFQF_HENA(1), 0); + ++ while (test_bit(__I40E_RESET_RECOVERY_PENDING, pf->state)) ++ usleep_range(1000, 2000); ++ + /* no more scheduling of any task */ + set_bit(__I40E_SUSPENDED, pf->state); + set_bit(__I40E_DOWN, pf->state); +diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c +index 24bb721a12bc0..42eb7a7ecd96b 100644 +--- a/drivers/net/hyperv/netvsc_drv.c ++++ b/drivers/net/hyperv/netvsc_drv.c +@@ -501,7 +501,7 @@ static int netvsc_vf_xmit(struct net_device *net, struct net_device *vf_netdev, + int rc; + + skb->dev = vf_netdev; +- skb->queue_mapping = qdisc_skb_cb(skb)->slave_dev_queue_mapping; ++ skb_record_rx_queue(skb, qdisc_skb_cb(skb)->slave_dev_queue_mapping); + + rc = dev_queue_xmit(skb); + if (likely(rc == NET_XMIT_SUCCESS || rc == NET_XMIT_CN)) { +diff --git a/drivers/net/wan/Kconfig b/drivers/net/wan/Kconfig +index dd1a147f29716..058d77d2e693d 100644 +--- a/drivers/net/wan/Kconfig ++++ b/drivers/net/wan/Kconfig +@@ -200,7 +200,7 @@ config WANXL_BUILD_FIRMWARE + depends on WANXL && !PREVENT_FIRMWARE_BUILD + help + Allows you to rebuild firmware run by the QUICC processor. +- It requires as68k, ld68k and hexdump programs. ++ It requires m68k toolchains and hexdump programs. 
+ + You should never need this option, say N. + +diff --git a/drivers/net/wan/Makefile b/drivers/net/wan/Makefile +index 701f5d2fe3b61..cf7a0a65aae8d 100644 +--- a/drivers/net/wan/Makefile ++++ b/drivers/net/wan/Makefile +@@ -40,17 +40,17 @@ $(obj)/wanxl.o: $(obj)/wanxlfw.inc + + ifeq ($(CONFIG_WANXL_BUILD_FIRMWARE),y) + ifeq ($(ARCH),m68k) +- AS68K = $(AS) +- LD68K = $(LD) ++ M68KCC = $(CC) ++ M68KLD = $(LD) + else +- AS68K = as68k +- LD68K = ld68k ++ M68KCC = $(CROSS_COMPILE_M68K)gcc ++ M68KLD = $(CROSS_COMPILE_M68K)ld + endif + + quiet_cmd_build_wanxlfw = BLD FW $@ + cmd_build_wanxlfw = \ +- $(CPP) -D__ASSEMBLY__ -Wp,-MD,$(depfile) -I$(srctree)/include/uapi $< | $(AS68K) -m68360 -o $(obj)/wanxlfw.o; \ +- $(LD68K) --oformat binary -Ttext 0x1000 $(obj)/wanxlfw.o -o $(obj)/wanxlfw.bin; \ ++ $(M68KCC) -D__ASSEMBLY__ -Wp,-MD,$(depfile) -I$(srctree)/include/uapi -c -o $(obj)/wanxlfw.o $<; \ ++ $(M68KLD) --oformat binary -Ttext 0x1000 $(obj)/wanxlfw.o -o $(obj)/wanxlfw.bin; \ + hexdump -ve '"\n" 16/1 "0x%02X,"' $(obj)/wanxlfw.bin | sed 's/0x ,//g;1s/^/static const u8 firmware[]={/;$$s/,$$/\n};\n/' >$(obj)/wanxlfw.inc; \ + rm -f $(obj)/wanxlfw.bin $(obj)/wanxlfw.o + +diff --git a/drivers/opp/core.c b/drivers/opp/core.c +index 9ff0538ee83a0..7b057c32e11b1 100644 +--- a/drivers/opp/core.c ++++ b/drivers/opp/core.c +@@ -843,10 +843,12 @@ int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq) + + /* Return early if nothing to do */ + if (old_freq == freq) { +- dev_dbg(dev, "%s: old/new frequencies (%lu Hz) are same, nothing to do\n", +- __func__, freq); +- ret = 0; +- goto put_opp_table; ++ if (!opp_table->required_opp_tables && !opp_table->regulators) { ++ dev_dbg(dev, "%s: old/new frequencies (%lu Hz) are same, nothing to do\n", ++ __func__, freq); ++ ret = 0; ++ goto put_opp_table; ++ } + } + + temp_freq = old_freq; +diff --git a/drivers/rtc/rtc-goldfish.c b/drivers/rtc/rtc-goldfish.c +index 1a3420ee6a4d9..d5083b013fbce 100644 +--- a/drivers/rtc/rtc-goldfish.c ++++ b/drivers/rtc/rtc-goldfish.c +@@ -73,6 +73,7 @@ static int goldfish_rtc_set_alarm(struct device *dev, + rtc_alarm64 = rtc_tm_to_time64(&alrm->time) * NSEC_PER_SEC; + writel((rtc_alarm64 >> 32), base + TIMER_ALARM_HIGH); + writel(rtc_alarm64, base + TIMER_ALARM_LOW); ++ writel(1, base + TIMER_IRQ_ENABLED); + } else { + /* + * if this function was called with enabled=0 +diff --git a/drivers/s390/scsi/zfcp_fsf.c b/drivers/s390/scsi/zfcp_fsf.c +index cf63916814cca..5c652deb6fed4 100644 +--- a/drivers/s390/scsi/zfcp_fsf.c ++++ b/drivers/s390/scsi/zfcp_fsf.c +@@ -409,7 +409,7 @@ static void zfcp_fsf_req_complete(struct zfcp_fsf_req *req) + return; + } + +- del_timer(&req->timer); ++ del_timer_sync(&req->timer); + zfcp_fsf_protstatus_eval(req); + zfcp_fsf_fsfstatus_eval(req); + req->handler(req); +@@ -762,7 +762,7 @@ static int zfcp_fsf_req_send(struct zfcp_fsf_req *req) + req->qdio_req.qdio_outb_usage = atomic_read(&qdio->req_q_free); + req->issued = get_tod_clock(); + if (zfcp_qdio_send(qdio, &req->qdio_req)) { +- del_timer(&req->timer); ++ del_timer_sync(&req->timer); + /* lookup request again, list might have changed */ + zfcp_reqlist_find_rm(adapter->req_list, req_id); + zfcp_erp_adapter_reopen(adapter, 0, "fsrs__1"); +diff --git a/drivers/scsi/libfc/fc_disc.c b/drivers/scsi/libfc/fc_disc.c +index 2b865c6423e29..e00dc4693fcbd 100644 +--- a/drivers/scsi/libfc/fc_disc.c ++++ b/drivers/scsi/libfc/fc_disc.c +@@ -581,8 +581,12 @@ static void fc_disc_gpn_id_resp(struct fc_seq *sp, struct fc_frame *fp, + + if (PTR_ERR(fp) == 
-FC_EX_CLOSED) + goto out; +- if (IS_ERR(fp)) +- goto redisc; ++ if (IS_ERR(fp)) { ++ mutex_lock(&disc->disc_mutex); ++ fc_disc_restart(disc); ++ mutex_unlock(&disc->disc_mutex); ++ goto out; ++ } + + cp = fc_frame_payload_get(fp, sizeof(*cp)); + if (!cp) +@@ -609,7 +613,7 @@ static void fc_disc_gpn_id_resp(struct fc_seq *sp, struct fc_frame *fp, + new_rdata->disc_id = disc->disc_id; + fc_rport_login(new_rdata); + } +- goto out; ++ goto free_fp; + } + rdata->disc_id = disc->disc_id; + mutex_unlock(&rdata->rp_mutex); +@@ -626,6 +630,8 @@ redisc: + fc_disc_restart(disc); + mutex_unlock(&disc->disc_mutex); + } ++free_fp: ++ fc_frame_free(fp); + out: + kref_put(&rdata->kref, fc_rport_destroy); + if (!IS_ERR(fp)) +diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c +index d7ec4083a0911..d91c95d9981ac 100644 +--- a/drivers/scsi/qla2xxx/qla_os.c ++++ b/drivers/scsi/qla2xxx/qla_os.c +@@ -2804,10 +2804,6 @@ qla2x00_probe_one(struct pci_dev *pdev, const struct pci_device_id *id) + /* This may fail but that's ok */ + pci_enable_pcie_error_reporting(pdev); + +- /* Turn off T10-DIF when FC-NVMe is enabled */ +- if (ql2xnvmeenable) +- ql2xenabledif = 0; +- + ha = kzalloc(sizeof(struct qla_hw_data), GFP_KERNEL); + if (!ha) { + ql_log_pci(ql_log_fatal, pdev, 0x0009, +diff --git a/drivers/scsi/ufs/ufs_quirks.h b/drivers/scsi/ufs/ufs_quirks.h +index fe6cad9b2a0d2..03985919150b9 100644 +--- a/drivers/scsi/ufs/ufs_quirks.h ++++ b/drivers/scsi/ufs/ufs_quirks.h +@@ -12,6 +12,7 @@ + #define UFS_ANY_VENDOR 0xFFFF + #define UFS_ANY_MODEL "ANY_MODEL" + ++#define UFS_VENDOR_MICRON 0x12C + #define UFS_VENDOR_TOSHIBA 0x198 + #define UFS_VENDOR_SAMSUNG 0x1CE + #define UFS_VENDOR_SKHYNIX 0x1AD +diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c +index 2b6853c7375c9..b41b88bcab3d9 100644 +--- a/drivers/scsi/ufs/ufshcd.c ++++ b/drivers/scsi/ufs/ufshcd.c +@@ -217,6 +217,8 @@ ufs_get_desired_pm_lvl_for_dev_link_state(enum ufs_dev_pwr_mode dev_state, + + static struct ufs_dev_fix ufs_fixups[] = { + /* UFS cards deviations table */ ++ UFS_FIX(UFS_VENDOR_MICRON, UFS_ANY_MODEL, ++ UFS_DEVICE_QUIRK_DELAY_BEFORE_LPM), + UFS_FIX(UFS_VENDOR_SAMSUNG, UFS_ANY_MODEL, + UFS_DEVICE_QUIRK_DELAY_BEFORE_LPM), + UFS_FIX(UFS_VENDOR_SAMSUNG, UFS_ANY_MODEL, +diff --git a/drivers/spi/Kconfig b/drivers/spi/Kconfig +index 6f7fdcbb9151f..5bf7542087776 100644 +--- a/drivers/spi/Kconfig ++++ b/drivers/spi/Kconfig +@@ -944,4 +944,7 @@ config SPI_SLAVE_SYSTEM_CONTROL + + endif # SPI_SLAVE + ++config SPI_DYNAMIC ++ def_bool ACPI || OF_DYNAMIC || SPI_SLAVE ++ + endif # SPI +diff --git a/drivers/spi/spi-stm32.c b/drivers/spi/spi-stm32.c +index b222ce8d083ef..7e92ab0cc9920 100644 +--- a/drivers/spi/spi-stm32.c ++++ b/drivers/spi/spi-stm32.c +@@ -14,6 +14,7 @@ + #include + #include + #include ++#include + #include + #include + #include +@@ -1986,6 +1987,8 @@ static int stm32_spi_remove(struct platform_device *pdev) + + pm_runtime_disable(&pdev->dev); + ++ pinctrl_pm_select_sleep_state(&pdev->dev); ++ + return 0; + } + +@@ -1997,13 +2000,18 @@ static int stm32_spi_runtime_suspend(struct device *dev) + + clk_disable_unprepare(spi->clk); + +- return 0; ++ return pinctrl_pm_select_sleep_state(dev); + } + + static int stm32_spi_runtime_resume(struct device *dev) + { + struct spi_master *master = dev_get_drvdata(dev); + struct stm32_spi *spi = spi_master_get_devdata(master); ++ int ret; ++ ++ ret = pinctrl_pm_select_default_state(dev); ++ if (ret) ++ return ret; + + return clk_prepare_enable(spi->clk); + } +@@ -2033,10 
+2041,23 @@ static int stm32_spi_resume(struct device *dev) + return ret; + + ret = spi_master_resume(master); +- if (ret) ++ if (ret) { + clk_disable_unprepare(spi->clk); ++ return ret; ++ } + +- return ret; ++ ret = pm_runtime_get_sync(dev); ++ if (ret) { ++ dev_err(dev, "Unable to power device:%d\n", ret); ++ return ret; ++ } ++ ++ spi->cfg->config(spi); ++ ++ pm_runtime_mark_last_busy(dev); ++ pm_runtime_put_autosuspend(dev); ++ ++ return 0; + } + #endif + +diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c +index c6242f0a307f9..6a81b2a33cb4b 100644 +--- a/drivers/spi/spi.c ++++ b/drivers/spi/spi.c +@@ -475,6 +475,12 @@ static LIST_HEAD(spi_controller_list); + */ + static DEFINE_MUTEX(board_lock); + ++/* ++ * Prevents addition of devices with same chip select and ++ * addition of devices below an unregistering controller. ++ */ ++static DEFINE_MUTEX(spi_add_lock); ++ + /** + * spi_alloc_device - Allocate a new SPI device + * @ctlr: Controller to which device is connected +@@ -553,7 +559,6 @@ static int spi_dev_check(struct device *dev, void *data) + */ + int spi_add_device(struct spi_device *spi) + { +- static DEFINE_MUTEX(spi_add_lock); + struct spi_controller *ctlr = spi->controller; + struct device *dev = ctlr->dev.parent; + int status; +@@ -581,6 +586,13 @@ int spi_add_device(struct spi_device *spi) + goto done; + } + ++ /* Controller may unregister concurrently */ ++ if (IS_ENABLED(CONFIG_SPI_DYNAMIC) && ++ !device_is_registered(&ctlr->dev)) { ++ status = -ENODEV; ++ goto done; ++ } ++ + /* Descriptors take precedence */ + if (ctlr->cs_gpiods) + spi->cs_gpiod = ctlr->cs_gpiods[spi->chip_select]; +@@ -2582,6 +2594,10 @@ void spi_unregister_controller(struct spi_controller *ctlr) + struct spi_controller *found; + int id = ctlr->bus_num; + ++ /* Prevent addition of new devices, unregister existing ones */ ++ if (IS_ENABLED(CONFIG_SPI_DYNAMIC)) ++ mutex_lock(&spi_add_lock); ++ + device_for_each_child(&ctlr->dev, NULL, __unregister); + + /* First make sure that this controller was ever added */ +@@ -2602,6 +2618,9 @@ void spi_unregister_controller(struct spi_controller *ctlr) + if (found == ctlr) + idr_remove(&spi_master_idr, id); + mutex_unlock(&board_lock); ++ ++ if (IS_ENABLED(CONFIG_SPI_DYNAMIC)) ++ mutex_unlock(&spi_add_lock); + } + EXPORT_SYMBOL_GPL(spi_unregister_controller); + +diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c +index a497e7c1f4fcc..d766fb14942b3 100644 +--- a/drivers/target/target_core_user.c ++++ b/drivers/target/target_core_user.c +@@ -601,7 +601,7 @@ static inline void tcmu_flush_dcache_range(void *vaddr, size_t size) + size = round_up(size+offset, PAGE_SIZE); + + while (size) { +- flush_dcache_page(virt_to_page(start)); ++ flush_dcache_page(vmalloc_to_page(start)); + start += PAGE_SIZE; + size -= PAGE_SIZE; + } +diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c +index 6cc47af1f06d3..ca8c10aa4a4bc 100644 +--- a/drivers/vfio/vfio_iommu_type1.c ++++ b/drivers/vfio/vfio_iommu_type1.c +@@ -1187,13 +1187,16 @@ static int vfio_bus_type(struct device *dev, void *data) + static int vfio_iommu_replay(struct vfio_iommu *iommu, + struct vfio_domain *domain) + { +- struct vfio_domain *d; ++ struct vfio_domain *d = NULL; + struct rb_node *n; + unsigned long limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT; + int ret; + + /* Arbitrarily pick the first domain in the list for lookups */ +- d = list_first_entry(&iommu->domain_list, struct vfio_domain, next); ++ if (!list_empty(&iommu->domain_list)) ++ d = 
list_first_entry(&iommu->domain_list, ++ struct vfio_domain, next); ++ + n = rb_first(&iommu->dma_list); + + for (; n; n = rb_next(n)) { +@@ -1211,6 +1214,11 @@ static int vfio_iommu_replay(struct vfio_iommu *iommu, + phys_addr_t p; + dma_addr_t i; + ++ if (WARN_ON(!d)) { /* mapped w/o a domain?! */ ++ ret = -EINVAL; ++ goto unwind; ++ } ++ + phys = iommu_iova_to_phys(d->domain, iova); + + if (WARN_ON(!phys)) { +@@ -1240,7 +1248,7 @@ static int vfio_iommu_replay(struct vfio_iommu *iommu, + if (npage <= 0) { + WARN_ON(!npage); + ret = (int)npage; +- return ret; ++ goto unwind; + } + + phys = pfn << PAGE_SHIFT; +@@ -1249,14 +1257,67 @@ static int vfio_iommu_replay(struct vfio_iommu *iommu, + + ret = iommu_map(domain->domain, iova, phys, + size, dma->prot | domain->prot); +- if (ret) +- return ret; ++ if (ret) { ++ if (!dma->iommu_mapped) ++ vfio_unpin_pages_remote(dma, iova, ++ phys >> PAGE_SHIFT, ++ size >> PAGE_SHIFT, ++ true); ++ goto unwind; ++ } + + iova += size; + } ++ } ++ ++ /* All dmas are now mapped, defer to second tree walk for unwind */ ++ for (n = rb_first(&iommu->dma_list); n; n = rb_next(n)) { ++ struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node); ++ + dma->iommu_mapped = true; + } ++ + return 0; ++ ++unwind: ++ for (; n; n = rb_prev(n)) { ++ struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node); ++ dma_addr_t iova; ++ ++ if (dma->iommu_mapped) { ++ iommu_unmap(domain->domain, dma->iova, dma->size); ++ continue; ++ } ++ ++ iova = dma->iova; ++ while (iova < dma->iova + dma->size) { ++ phys_addr_t phys, p; ++ size_t size; ++ dma_addr_t i; ++ ++ phys = iommu_iova_to_phys(domain->domain, iova); ++ if (!phys) { ++ iova += PAGE_SIZE; ++ continue; ++ } ++ ++ size = PAGE_SIZE; ++ p = phys + size; ++ i = iova + size; ++ while (i < dma->iova + dma->size && ++ p == iommu_iova_to_phys(domain->domain, i)) { ++ size += PAGE_SIZE; ++ p += PAGE_SIZE; ++ i += PAGE_SIZE; ++ } ++ ++ iommu_unmap(domain->domain, iova, size); ++ vfio_unpin_pages_remote(dma, iova, phys >> PAGE_SHIFT, ++ size >> PAGE_SHIFT, true); ++ } ++ } ++ ++ return ret; + } + + /* +diff --git a/drivers/video/fbdev/efifb.c b/drivers/video/fbdev/efifb.c +index 51d97ec4f58f9..e0cbf5b3d2174 100644 +--- a/drivers/video/fbdev/efifb.c ++++ b/drivers/video/fbdev/efifb.c +@@ -453,7 +453,7 @@ static int efifb_probe(struct platform_device *dev) + info->apertures->ranges[0].base = efifb_fix.smem_start; + info->apertures->ranges[0].size = size_remap; + +- if (efi_enabled(EFI_BOOT) && ++ if (efi_enabled(EFI_MEMMAP) && + !efi_mem_desc_lookup(efifb_fix.smem_start, &md)) { + if ((efifb_fix.smem_start + efifb_fix.smem_len) > + (md.phys_addr + (md.num_pages << EFI_PAGE_SHIFT))) { +diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c +index 58b96baa8d488..4f7c73e6052f6 100644 +--- a/drivers/virtio/virtio_ring.c ++++ b/drivers/virtio/virtio_ring.c +@@ -1960,6 +1960,9 @@ bool virtqueue_poll(struct virtqueue *_vq, unsigned last_used_idx) + { + struct vring_virtqueue *vq = to_vvq(_vq); + ++ if (unlikely(vq->broken)) ++ return false; ++ + virtio_mb(vq->weak_barriers); + return vq->packed_ring ? 
virtqueue_poll_packed(_vq, last_used_idx) : + virtqueue_poll_split(_vq, last_used_idx); +diff --git a/drivers/xen/preempt.c b/drivers/xen/preempt.c +index 456a164364a22..98a9d6892d989 100644 +--- a/drivers/xen/preempt.c ++++ b/drivers/xen/preempt.c +@@ -27,7 +27,7 @@ EXPORT_SYMBOL_GPL(xen_in_preemptible_hcall); + asmlinkage __visible void xen_maybe_preempt_hcall(void) + { + if (unlikely(__this_cpu_read(xen_in_preemptible_hcall) +- && need_resched())) { ++ && need_resched() && !preempt_count())) { + /* + * Clear flag as we may be rescheduled on a different + * cpu. +diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c +index bd3a10dfac157..06346422f7432 100644 +--- a/drivers/xen/swiotlb-xen.c ++++ b/drivers/xen/swiotlb-xen.c +@@ -335,6 +335,7 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr, + int order = get_order(size); + phys_addr_t phys; + u64 dma_mask = DMA_BIT_MASK(32); ++ struct page *page; + + if (hwdev && hwdev->coherent_dma_mask) + dma_mask = hwdev->coherent_dma_mask; +@@ -346,9 +347,14 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr, + /* Convert the size to actually allocated. */ + size = 1UL << (order + XEN_PAGE_SHIFT); + ++ if (is_vmalloc_addr(vaddr)) ++ page = vmalloc_to_page(vaddr); ++ else ++ page = virt_to_page(vaddr); ++ + if (!WARN_ON((dev_addr + size - 1 > dma_mask) || + range_straddles_page_boundary(phys, size)) && +- TestClearPageXenRemapped(virt_to_page(vaddr))) ++ TestClearPageXenRemapped(page)) + xen_destroy_contiguous_region(phys, order); + + xen_free_coherent_pages(hwdev, size, vaddr, (dma_addr_t)phys, attrs); +diff --git a/fs/afs/dynroot.c b/fs/afs/dynroot.c +index 7503899c0a1b5..f07e53ab808e3 100644 +--- a/fs/afs/dynroot.c ++++ b/fs/afs/dynroot.c +@@ -289,15 +289,17 @@ void afs_dynroot_depopulate(struct super_block *sb) + net->dynroot_sb = NULL; + mutex_unlock(&net->proc_cells_lock); + +- inode_lock(root->d_inode); +- +- /* Remove all the pins for dirs created for manually added cells */ +- list_for_each_entry_safe(subdir, tmp, &root->d_subdirs, d_child) { +- if (subdir->d_fsdata) { +- subdir->d_fsdata = NULL; +- dput(subdir); ++ if (root) { ++ inode_lock(root->d_inode); ++ ++ /* Remove all the pins for dirs created for manually added cells */ ++ list_for_each_entry_safe(subdir, tmp, &root->d_subdirs, d_child) { ++ if (subdir->d_fsdata) { ++ subdir->d_fsdata = NULL; ++ dput(subdir); ++ } + } +- } + +- inode_unlock(root->d_inode); ++ inode_unlock(root->d_inode); ++ } + } +diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c +index 42d69e77f89d9..b167649f5f5de 100644 +--- a/fs/btrfs/block-group.c ++++ b/fs/btrfs/block-group.c +@@ -2168,7 +2168,7 @@ static int cache_save_setup(struct btrfs_block_group_cache *block_group, + return 0; + } + +- if (trans->aborted) ++ if (TRANS_ABORTED(trans)) + return 0; + again: + inode = lookup_free_space_inode(block_group, path); +diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h +index 2374f3f6f3b70..18357b054a91e 100644 +--- a/fs/btrfs/ctree.h ++++ b/fs/btrfs/ctree.h +@@ -2965,6 +2965,8 @@ int btrfs_defrag_leaves(struct btrfs_trans_handle *trans, + int btrfs_parse_options(struct btrfs_fs_info *info, char *options, + unsigned long new_flags); + int btrfs_sync_fs(struct super_block *sb, int wait); ++char *btrfs_get_subvol_name_from_objectid(struct btrfs_fs_info *fs_info, ++ u64 subvol_objectid); + + static inline __printf(2, 3) __cold + void btrfs_no_printk(const struct btrfs_fs_info *fs_info, const char *fmt, ...) 
+diff --git a/fs/btrfs/delayed-inode.c b/fs/btrfs/delayed-inode.c +index 5bcccfbcc7c15..a34ee9c2f3151 100644 +--- a/fs/btrfs/delayed-inode.c ++++ b/fs/btrfs/delayed-inode.c +@@ -1151,7 +1151,7 @@ static int __btrfs_run_delayed_items(struct btrfs_trans_handle *trans, int nr) + int ret = 0; + bool count = (nr > 0); + +- if (trans->aborted) ++ if (TRANS_ABORTED(trans)) + return -EIO; + + path = btrfs_alloc_path(); +diff --git a/fs/btrfs/export.c b/fs/btrfs/export.c +index ddf28ecf17f93..93cceeba484cc 100644 +--- a/fs/btrfs/export.c ++++ b/fs/btrfs/export.c +@@ -57,9 +57,9 @@ static int btrfs_encode_fh(struct inode *inode, u32 *fh, int *max_len, + return type; + } + +-static struct dentry *btrfs_get_dentry(struct super_block *sb, u64 objectid, +- u64 root_objectid, u32 generation, +- int check_generation) ++struct dentry *btrfs_get_dentry(struct super_block *sb, u64 objectid, ++ u64 root_objectid, u32 generation, ++ int check_generation) + { + struct btrfs_fs_info *fs_info = btrfs_sb(sb); + struct btrfs_root *root; +@@ -152,7 +152,7 @@ static struct dentry *btrfs_fh_to_dentry(struct super_block *sb, struct fid *fh, + return btrfs_get_dentry(sb, objectid, root_objectid, generation, 1); + } + +-static struct dentry *btrfs_get_parent(struct dentry *child) ++struct dentry *btrfs_get_parent(struct dentry *child) + { + struct inode *dir = d_inode(child); + struct btrfs_fs_info *fs_info = btrfs_sb(dir->i_sb); +diff --git a/fs/btrfs/export.h b/fs/btrfs/export.h +index 57488ecd7d4ef..f32f4113c976a 100644 +--- a/fs/btrfs/export.h ++++ b/fs/btrfs/export.h +@@ -18,4 +18,9 @@ struct btrfs_fid { + u64 parent_root_objectid; + } __attribute__ ((packed)); + ++struct dentry *btrfs_get_dentry(struct super_block *sb, u64 objectid, ++ u64 root_objectid, u32 generation, ++ int check_generation); ++struct dentry *btrfs_get_parent(struct dentry *child); ++ + #endif +diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c +index 739332b462059..a36bd4507bacd 100644 +--- a/fs/btrfs/extent-tree.c ++++ b/fs/btrfs/extent-tree.c +@@ -1561,7 +1561,7 @@ static int run_delayed_extent_op(struct btrfs_trans_handle *trans, + int err = 0; + int metadata = !extent_op->is_data; + +- if (trans->aborted) ++ if (TRANS_ABORTED(trans)) + return 0; + + if (metadata && !btrfs_fs_incompat(fs_info, SKINNY_METADATA)) +@@ -1681,7 +1681,7 @@ static int run_one_delayed_ref(struct btrfs_trans_handle *trans, + { + int ret = 0; + +- if (trans->aborted) { ++ if (TRANS_ABORTED(trans)) { + if (insert_reserved) + btrfs_pin_extent(trans->fs_info, node->bytenr, + node->num_bytes, 1); +@@ -2169,7 +2169,7 @@ int btrfs_run_delayed_refs(struct btrfs_trans_handle *trans, + int run_all = count == (unsigned long)-1; + + /* We'll clean this up in btrfs_cleanup_transaction */ +- if (trans->aborted) ++ if (TRANS_ABORTED(trans)) + return 0; + + if (test_bit(BTRFS_FS_CREATING_FREE_SPACE_TREE, &fs_info->flags)) +@@ -2892,7 +2892,7 @@ int btrfs_finish_extent_commit(struct btrfs_trans_handle *trans) + else + unpin = &fs_info->freed_extents[0]; + +- while (!trans->aborted) { ++ while (!TRANS_ABORTED(trans)) { + struct extent_state *cached_state = NULL; + + mutex_lock(&fs_info->unused_bg_unpin_mutex); +@@ -2924,7 +2924,7 @@ int btrfs_finish_extent_commit(struct btrfs_trans_handle *trans) + u64 trimmed = 0; + + ret = -EROFS; +- if (!trans->aborted) ++ if (!TRANS_ABORTED(trans)) + ret = btrfs_discard_extent(fs_info, + block_group->key.objectid, + block_group->key.offset, +diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c +index 035ea5bc692ad..5707bf0575d43 
100644 +--- a/fs/btrfs/extent_io.c ++++ b/fs/btrfs/extent_io.c +@@ -4073,7 +4073,7 @@ retry: + if (!test_bit(BTRFS_FS_STATE_ERROR, &fs_info->fs_state)) { + ret = flush_write_bio(&epd); + } else { +- ret = -EUCLEAN; ++ ret = -EROFS; + end_write_bio(&epd, ret); + } + return ret; +diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c +index a7b043fd7a572..498b824148187 100644 +--- a/fs/btrfs/scrub.c ++++ b/fs/btrfs/scrub.c +@@ -3717,7 +3717,7 @@ static noinline_for_stack int scrub_supers(struct scrub_ctx *sctx, + struct btrfs_fs_info *fs_info = sctx->fs_info; + + if (test_bit(BTRFS_FS_STATE_ERROR, &fs_info->fs_state)) +- return -EIO; ++ return -EROFS; + + /* Seed devices of a new filesystem has their own generation. */ + if (scrub_dev->fs_devices != fs_info->fs_devices) +diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c +index 4b0ee34aa65d5..a1498df419b4f 100644 +--- a/fs/btrfs/super.c ++++ b/fs/btrfs/super.c +@@ -241,7 +241,7 @@ void __btrfs_abort_transaction(struct btrfs_trans_handle *trans, + { + struct btrfs_fs_info *fs_info = trans->fs_info; + +- trans->aborted = errno; ++ WRITE_ONCE(trans->aborted, errno); + /* Nothing used. The other threads that have joined this + * transaction may be able to continue. */ + if (!trans->dirty && list_empty(&trans->new_bgs)) { +@@ -1009,8 +1009,8 @@ out: + return error; + } + +-static char *get_subvol_name_from_objectid(struct btrfs_fs_info *fs_info, +- u64 subvol_objectid) ++char *btrfs_get_subvol_name_from_objectid(struct btrfs_fs_info *fs_info, ++ u64 subvol_objectid) + { + struct btrfs_root *root = fs_info->tree_root; + struct btrfs_root *fs_root; +@@ -1291,6 +1291,7 @@ static int btrfs_show_options(struct seq_file *seq, struct dentry *dentry) + { + struct btrfs_fs_info *info = btrfs_sb(dentry->d_sb); + const char *compress_type; ++ const char *subvol_name; + + if (btrfs_test_opt(info, DEGRADED)) + seq_puts(seq, ",degraded"); +@@ -1375,8 +1376,13 @@ static int btrfs_show_options(struct seq_file *seq, struct dentry *dentry) + seq_puts(seq, ",ref_verify"); + seq_printf(seq, ",subvolid=%llu", + BTRFS_I(d_inode(dentry))->root->root_key.objectid); +- seq_puts(seq, ",subvol="); +- seq_dentry(seq, dentry, " \t\n\\"); ++ subvol_name = btrfs_get_subvol_name_from_objectid(info, ++ BTRFS_I(d_inode(dentry))->root->root_key.objectid); ++ if (!IS_ERR(subvol_name)) { ++ seq_puts(seq, ",subvol="); ++ seq_escape(seq, subvol_name, " \t\n\\"); ++ kfree(subvol_name); ++ } + return 0; + } + +@@ -1421,8 +1427,8 @@ static struct dentry *mount_subvol(const char *subvol_name, u64 subvol_objectid, + goto out; + } + } +- subvol_name = get_subvol_name_from_objectid(btrfs_sb(mnt->mnt_sb), +- subvol_objectid); ++ subvol_name = btrfs_get_subvol_name_from_objectid( ++ btrfs_sb(mnt->mnt_sb), subvol_objectid); + if (IS_ERR(subvol_name)) { + root = ERR_CAST(subvol_name); + subvol_name = NULL; +diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c +index 54589e940f9af..c346ee7ec18d4 100644 +--- a/fs/btrfs/transaction.c ++++ b/fs/btrfs/transaction.c +@@ -174,7 +174,7 @@ loop: + + cur_trans = fs_info->running_transaction; + if (cur_trans) { +- if (cur_trans->aborted) { ++ if (TRANS_ABORTED(cur_trans)) { + spin_unlock(&fs_info->trans_lock); + return cur_trans->aborted; + } +@@ -390,7 +390,7 @@ static inline int is_transaction_blocked(struct btrfs_transaction *trans) + { + return (trans->state >= TRANS_STATE_BLOCKED && + trans->state < TRANS_STATE_UNBLOCKED && +- !trans->aborted); ++ !TRANS_ABORTED(trans)); + } + + /* wait for commit against the current transaction to become unblocked 
+@@ -409,7 +409,7 @@ static void wait_current_trans(struct btrfs_fs_info *fs_info) + + wait_event(fs_info->transaction_wait, + cur_trans->state >= TRANS_STATE_UNBLOCKED || +- cur_trans->aborted); ++ TRANS_ABORTED(cur_trans)); + btrfs_put_transaction(cur_trans); + } else { + spin_unlock(&fs_info->trans_lock); +@@ -870,10 +870,13 @@ static int __btrfs_end_transaction(struct btrfs_trans_handle *trans, + if (throttle) + btrfs_run_delayed_iputs(info); + +- if (trans->aborted || ++ if (TRANS_ABORTED(trans) || + test_bit(BTRFS_FS_STATE_ERROR, &info->fs_state)) { + wake_up_process(info->transaction_kthread); +- err = -EIO; ++ if (TRANS_ABORTED(trans)) ++ err = trans->aborted; ++ else ++ err = -EROFS; + } + + kmem_cache_free(btrfs_trans_handle_cachep, trans); +@@ -1727,7 +1730,8 @@ static void wait_current_trans_commit_start(struct btrfs_fs_info *fs_info, + struct btrfs_transaction *trans) + { + wait_event(fs_info->transaction_blocked_wait, +- trans->state >= TRANS_STATE_COMMIT_START || trans->aborted); ++ trans->state >= TRANS_STATE_COMMIT_START || ++ TRANS_ABORTED(trans)); + } + + /* +@@ -1739,7 +1743,8 @@ static void wait_current_trans_commit_start_and_unblock( + struct btrfs_transaction *trans) + { + wait_event(fs_info->transaction_wait, +- trans->state >= TRANS_STATE_UNBLOCKED || trans->aborted); ++ trans->state >= TRANS_STATE_UNBLOCKED || ++ TRANS_ABORTED(trans)); + } + + /* +@@ -1957,7 +1962,7 @@ int btrfs_commit_transaction(struct btrfs_trans_handle *trans) + trans->dirty = true; + + /* Stop the commit early if ->aborted is set */ +- if (unlikely(READ_ONCE(cur_trans->aborted))) { ++ if (TRANS_ABORTED(cur_trans)) { + ret = cur_trans->aborted; + btrfs_end_transaction(trans); + return ret; +@@ -2031,7 +2036,7 @@ int btrfs_commit_transaction(struct btrfs_trans_handle *trans) + + wait_for_commit(cur_trans); + +- if (unlikely(cur_trans->aborted)) ++ if (TRANS_ABORTED(cur_trans)) + ret = cur_trans->aborted; + + btrfs_put_transaction(cur_trans); +@@ -2050,7 +2055,7 @@ int btrfs_commit_transaction(struct btrfs_trans_handle *trans) + spin_unlock(&fs_info->trans_lock); + + wait_for_commit(prev_trans); +- ret = prev_trans->aborted; ++ ret = READ_ONCE(prev_trans->aborted); + + btrfs_put_transaction(prev_trans); + if (ret) +@@ -2104,8 +2109,7 @@ int btrfs_commit_transaction(struct btrfs_trans_handle *trans) + wait_event(cur_trans->writer_wait, + atomic_read(&cur_trans->num_writers) == 1); + +- /* ->aborted might be set after the previous check, so check it */ +- if (unlikely(READ_ONCE(cur_trans->aborted))) { ++ if (TRANS_ABORTED(cur_trans)) { + ret = cur_trans->aborted; + goto scrub_continue; + } +@@ -2223,7 +2227,7 @@ int btrfs_commit_transaction(struct btrfs_trans_handle *trans) + * The tasks which save the space cache and inode cache may also + * update ->aborted, check it. 
+ */ +- if (unlikely(READ_ONCE(cur_trans->aborted))) { ++ if (TRANS_ABORTED(cur_trans)) { + ret = cur_trans->aborted; + mutex_unlock(&fs_info->tree_log_mutex); + mutex_unlock(&fs_info->reloc_mutex); +diff --git a/fs/btrfs/transaction.h b/fs/btrfs/transaction.h +index b15c31d231488..7291a2a930751 100644 +--- a/fs/btrfs/transaction.h ++++ b/fs/btrfs/transaction.h +@@ -116,6 +116,10 @@ struct btrfs_trans_handle { + struct btrfs_block_rsv *orig_rsv; + refcount_t use_count; + unsigned int type; ++ /* ++ * Error code of transaction abort, set outside of locks and must use ++ * the READ_ONCE/WRITE_ONCE access ++ */ + short aborted; + bool adding_csums; + bool allocating_chunk; +@@ -127,6 +131,14 @@ struct btrfs_trans_handle { + struct list_head new_bgs; + }; + ++/* ++ * The abort status can be changed between calls and is not protected by locks. ++ * This accepts btrfs_transaction and btrfs_trans_handle as types. Once it's ++ * set to a non-zero value it does not change, so the macro should be in checks ++ * but is not necessary for further reads of the value. ++ */ ++#define TRANS_ABORTED(trans) (unlikely(READ_ONCE((trans)->aborted))) ++ + struct btrfs_pending_snapshot { + struct dentry *dentry; + struct inode *dir; +diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c +index 701bc3f4d4ba1..b0077f5a31688 100644 +--- a/fs/ceph/mds_client.c ++++ b/fs/ceph/mds_client.c +@@ -4143,7 +4143,6 @@ int ceph_mdsc_init(struct ceph_fs_client *fsc) + return -ENOMEM; + } + +- fsc->mdsc = mdsc; + init_completion(&mdsc->safe_umount_waiters); + init_waitqueue_head(&mdsc->session_close_wq); + INIT_LIST_HEAD(&mdsc->waiting_for_map); +@@ -4195,6 +4194,8 @@ int ceph_mdsc_init(struct ceph_fs_client *fsc) + + strscpy(mdsc->nodename, utsname()->nodename, + sizeof(mdsc->nodename)); ++ ++ fsc->mdsc = mdsc; + return 0; + } + +diff --git a/fs/eventpoll.c b/fs/eventpoll.c +index 6307c1d883e0a..0d9b1e2b9da72 100644 +--- a/fs/eventpoll.c ++++ b/fs/eventpoll.c +@@ -1991,9 +1991,11 @@ static int ep_loop_check_proc(void *priv, void *cookie, int call_nests) + * not already there, and calling reverse_path_check() + * during ep_insert(). 
+ */ +- if (list_empty(&epi->ffd.file->f_tfile_llink)) ++ if (list_empty(&epi->ffd.file->f_tfile_llink)) { ++ get_file(epi->ffd.file); + list_add(&epi->ffd.file->f_tfile_llink, + &tfile_check_list); ++ } + } + } + mutex_unlock(&ep->mtx); +@@ -2037,6 +2039,7 @@ static void clear_tfile_check_list(void) + file = list_first_entry(&tfile_check_list, struct file, + f_tfile_llink); + list_del_init(&file->f_tfile_llink); ++ fput(file); + } + INIT_LIST_HEAD(&tfile_check_list); + } +@@ -2192,13 +2195,13 @@ SYSCALL_DEFINE4(epoll_ctl, int, epfd, int, op, int, fd, + mutex_lock(&epmutex); + if (is_file_epoll(tf.file)) { + error = -ELOOP; +- if (ep_loop_check(ep, tf.file) != 0) { +- clear_tfile_check_list(); ++ if (ep_loop_check(ep, tf.file) != 0) + goto error_tgt_fput; +- } +- } else ++ } else { ++ get_file(tf.file); + list_add(&tf.file->f_tfile_llink, + &tfile_check_list); ++ } + mutex_lock_nested(&ep->mtx, 0); + if (is_file_epoll(tf.file)) { + tep = tf.file->private_data; +@@ -2222,8 +2225,6 @@ SYSCALL_DEFINE4(epoll_ctl, int, epfd, int, op, int, fd, + error = ep_insert(ep, &epds, tf.file, fd, full_check); + } else + error = -EEXIST; +- if (full_check) +- clear_tfile_check_list(); + break; + case EPOLL_CTL_DEL: + if (epi) +@@ -2246,8 +2247,10 @@ SYSCALL_DEFINE4(epoll_ctl, int, epfd, int, op, int, fd, + mutex_unlock(&ep->mtx); + + error_tgt_fput: +- if (full_check) ++ if (full_check) { ++ clear_tfile_check_list(); + mutex_unlock(&epmutex); ++ } + + fdput(tf); + error_fput: +diff --git a/fs/ext4/block_validity.c b/fs/ext4/block_validity.c +index ff8e1205127ee..ceb54ccc937e9 100644 +--- a/fs/ext4/block_validity.c ++++ b/fs/ext4/block_validity.c +@@ -68,7 +68,7 @@ static int add_system_zone(struct ext4_system_blocks *system_blks, + ext4_fsblk_t start_blk, + unsigned int count) + { +- struct ext4_system_zone *new_entry = NULL, *entry; ++ struct ext4_system_zone *new_entry, *entry; + struct rb_node **n = &system_blks->root.rb_node, *node; + struct rb_node *parent = NULL, *new_node = NULL; + +@@ -79,30 +79,20 @@ static int add_system_zone(struct ext4_system_blocks *system_blks, + n = &(*n)->rb_left; + else if (start_blk >= (entry->start_blk + entry->count)) + n = &(*n)->rb_right; +- else { +- if (start_blk + count > (entry->start_blk + +- entry->count)) +- entry->count = (start_blk + count - +- entry->start_blk); +- new_node = *n; +- new_entry = rb_entry(new_node, struct ext4_system_zone, +- node); +- break; +- } ++ else /* Unexpected overlap of system zones. */ ++ return -EFSCORRUPTED; + } + +- if (!new_entry) { +- new_entry = kmem_cache_alloc(ext4_system_zone_cachep, +- GFP_KERNEL); +- if (!new_entry) +- return -ENOMEM; +- new_entry->start_blk = start_blk; +- new_entry->count = count; +- new_node = &new_entry->node; +- +- rb_link_node(new_node, parent, n); +- rb_insert_color(new_node, &system_blks->root); +- } ++ new_entry = kmem_cache_alloc(ext4_system_zone_cachep, ++ GFP_KERNEL); ++ if (!new_entry) ++ return -ENOMEM; ++ new_entry->start_blk = start_blk; ++ new_entry->count = count; ++ new_node = &new_entry->node; ++ ++ rb_link_node(new_node, parent, n); ++ rb_insert_color(new_node, &system_blks->root); + + /* Can we merge to the left? 
*/ + node = rb_prev(new_node); +diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c +index a564d0289a70a..36a81b57012a5 100644 +--- a/fs/ext4/namei.c ++++ b/fs/ext4/namei.c +@@ -1392,8 +1392,8 @@ int ext4_search_dir(struct buffer_head *bh, char *search_buf, int buf_size, + ext4_match(dir, fname, de)) { + /* found a match - just to be sure, do + * a full check */ +- if (ext4_check_dir_entry(dir, NULL, de, bh, bh->b_data, +- bh->b_size, offset)) ++ if (ext4_check_dir_entry(dir, NULL, de, bh, search_buf, ++ buf_size, offset)) + return -1; + *res_dir = de; + return 1; +@@ -1852,7 +1852,7 @@ static struct ext4_dir_entry_2 *do_split(handle_t *handle, struct inode *dir, + blocksize, hinfo, map); + map -= count; + dx_sort_map(map, count); +- /* Split the existing block in the middle, size-wise */ ++ /* Ensure that neither split block is over half full */ + size = 0; + move = 0; + for (i = count-1; i >= 0; i--) { +@@ -1862,8 +1862,18 @@ static struct ext4_dir_entry_2 *do_split(handle_t *handle, struct inode *dir, + size += map[i].size; + move++; + } +- /* map index at which we will split */ +- split = count - move; ++ /* ++ * map index at which we will split ++ * ++ * If the sum of active entries didn't exceed half the block size, just ++ * split it in half by count; each resulting block will have at least ++ * half the space free. ++ */ ++ if (i > 0) ++ split = count - move; ++ else ++ split = count/2; ++ + hash2 = map[split].hash; + continued = hash2 == map[split - 1].hash; + dxtrace(printk(KERN_INFO "Split block %lu at %x, %i/%i\n", +@@ -2462,7 +2472,7 @@ int ext4_generic_delete_entry(handle_t *handle, + de = (struct ext4_dir_entry_2 *)entry_buf; + while (i < buf_size - csum_size) { + if (ext4_check_dir_entry(dir, NULL, de, bh, +- bh->b_data, bh->b_size, i)) ++ entry_buf, buf_size, i)) + return -EFSCORRUPTED; + if (de == de_del) { + if (pde) +diff --git a/fs/gfs2/bmap.c b/fs/gfs2/bmap.c +index adbb8fef22162..50fa3e08c02f3 100644 +--- a/fs/gfs2/bmap.c ++++ b/fs/gfs2/bmap.c +@@ -1350,9 +1350,15 @@ int gfs2_extent_map(struct inode *inode, u64 lblock, int *new, u64 *dblock, unsi + return ret; + } + ++/* ++ * NOTE: Never call gfs2_block_zero_range with an open transaction because it ++ * uses iomap write to perform its actions, which begin their own transactions ++ * (iomap_begin, page_prepare, etc.) 
++ */ + static int gfs2_block_zero_range(struct inode *inode, loff_t from, + unsigned int length) + { ++ BUG_ON(current->journal_info); + return iomap_zero_range(inode, from, length, NULL, &gfs2_iomap_ops); + } + +@@ -1413,6 +1419,16 @@ static int trunc_start(struct inode *inode, u64 newsize) + u64 oldsize = inode->i_size; + int error; + ++ if (!gfs2_is_stuffed(ip)) { ++ unsigned int blocksize = i_blocksize(inode); ++ unsigned int offs = newsize & (blocksize - 1); ++ if (offs) { ++ error = gfs2_block_zero_range(inode, newsize, ++ blocksize - offs); ++ if (error) ++ return error; ++ } ++ } + if (journaled) + error = gfs2_trans_begin(sdp, RES_DINODE + RES_JDATA, GFS2_JTRUNC_REVOKES); + else +@@ -1426,19 +1442,10 @@ static int trunc_start(struct inode *inode, u64 newsize) + + gfs2_trans_add_meta(ip->i_gl, dibh); + +- if (gfs2_is_stuffed(ip)) { ++ if (gfs2_is_stuffed(ip)) + gfs2_buffer_clear_tail(dibh, sizeof(struct gfs2_dinode) + newsize); +- } else { +- unsigned int blocksize = i_blocksize(inode); +- unsigned int offs = newsize & (blocksize - 1); +- if (offs) { +- error = gfs2_block_zero_range(inode, newsize, +- blocksize - offs); +- if (error) +- goto out; +- } ++ else + ip->i_diskflags |= GFS2_DIF_TRUNC_IN_PROG; +- } + + i_size_write(inode, newsize); + ip->i_inode.i_mtime = ip->i_inode.i_ctime = current_time(&ip->i_inode); +@@ -2442,24 +2449,13 @@ int __gfs2_punch_hole(struct file *file, loff_t offset, loff_t length) + struct inode *inode = file_inode(file); + struct gfs2_inode *ip = GFS2_I(inode); + struct gfs2_sbd *sdp = GFS2_SB(inode); ++ unsigned int blocksize = i_blocksize(inode); ++ loff_t start, end; + int error; + +- if (gfs2_is_jdata(ip)) +- error = gfs2_trans_begin(sdp, RES_DINODE + 2 * RES_JDATA, +- GFS2_JTRUNC_REVOKES); +- else +- error = gfs2_trans_begin(sdp, RES_DINODE, 0); +- if (error) +- return error; ++ if (!gfs2_is_stuffed(ip)) { ++ unsigned int start_off, end_len; + +- if (gfs2_is_stuffed(ip)) { +- error = stuffed_zero_range(inode, offset, length); +- if (error) +- goto out; +- } else { +- unsigned int start_off, end_len, blocksize; +- +- blocksize = i_blocksize(inode); + start_off = offset & (blocksize - 1); + end_len = (offset + length) & (blocksize - 1); + if (start_off) { +@@ -2480,6 +2476,26 @@ int __gfs2_punch_hole(struct file *file, loff_t offset, loff_t length) + } + } + ++ start = round_down(offset, blocksize); ++ end = round_up(offset + length, blocksize) - 1; ++ error = filemap_write_and_wait_range(inode->i_mapping, start, end); ++ if (error) ++ return error; ++ ++ if (gfs2_is_jdata(ip)) ++ error = gfs2_trans_begin(sdp, RES_DINODE + 2 * RES_JDATA, ++ GFS2_JTRUNC_REVOKES); ++ else ++ error = gfs2_trans_begin(sdp, RES_DINODE, 0); ++ if (error) ++ return error; ++ ++ if (gfs2_is_stuffed(ip)) { ++ error = stuffed_zero_range(inode, offset, length); ++ if (error) ++ goto out; ++ } ++ + if (gfs2_is_jdata(ip)) { + BUG_ON(!current->journal_info); + gfs2_journaled_truncate_range(inode, offset, length); +diff --git a/fs/jbd2/journal.c b/fs/jbd2/journal.c +index fa58835668a62..b7c5819bfc411 100644 +--- a/fs/jbd2/journal.c ++++ b/fs/jbd2/journal.c +@@ -1348,8 +1348,10 @@ static int jbd2_write_superblock(journal_t *journal, int write_flags) + int ret; + + /* Buffer got discarded which means block device got invalidated */ +- if (!buffer_mapped(bh)) ++ if (!buffer_mapped(bh)) { ++ unlock_buffer(bh); + return -EIO; ++ } + + trace_jbd2_write_superblock(journal, write_flags); + if (!(journal->j_flags & JBD2_BARRIER)) +diff --git a/fs/jffs2/dir.c b/fs/jffs2/dir.c +index 
f20cff1194bb6..776493713153f 100644 +--- a/fs/jffs2/dir.c ++++ b/fs/jffs2/dir.c +@@ -590,10 +590,14 @@ static int jffs2_rmdir (struct inode *dir_i, struct dentry *dentry) + int ret; + uint32_t now = JFFS2_NOW(); + ++ mutex_lock(&f->sem); + for (fd = f->dents ; fd; fd = fd->next) { +- if (fd->ino) ++ if (fd->ino) { ++ mutex_unlock(&f->sem); + return -ENOTEMPTY; ++ } + } ++ mutex_unlock(&f->sem); + + ret = jffs2_do_unlink(c, dir_f, dentry->d_name.name, + dentry->d_name.len, f, now); +diff --git a/fs/romfs/storage.c b/fs/romfs/storage.c +index 6b2b4362089e6..b57b3ffcbc327 100644 +--- a/fs/romfs/storage.c ++++ b/fs/romfs/storage.c +@@ -217,10 +217,8 @@ int romfs_dev_read(struct super_block *sb, unsigned long pos, + size_t limit; + + limit = romfs_maxsize(sb); +- if (pos >= limit) ++ if (pos >= limit || buflen > limit - pos) + return -EIO; +- if (buflen > limit - pos) +- buflen = limit - pos; + + #ifdef CONFIG_ROMFS_ON_MTD + if (sb->s_mtd) +diff --git a/fs/signalfd.c b/fs/signalfd.c +index 44b6845b071c3..5b78719be4455 100644 +--- a/fs/signalfd.c ++++ b/fs/signalfd.c +@@ -314,9 +314,10 @@ SYSCALL_DEFINE4(signalfd4, int, ufd, sigset_t __user *, user_mask, + { + sigset_t mask; + +- if (sizemask != sizeof(sigset_t) || +- copy_from_user(&mask, user_mask, sizeof(mask))) ++ if (sizemask != sizeof(sigset_t)) + return -EINVAL; ++ if (copy_from_user(&mask, user_mask, sizeof(mask))) ++ return -EFAULT; + return do_signalfd4(ufd, &mask, flags); + } + +@@ -325,9 +326,10 @@ SYSCALL_DEFINE3(signalfd, int, ufd, sigset_t __user *, user_mask, + { + sigset_t mask; + +- if (sizemask != sizeof(sigset_t) || +- copy_from_user(&mask, user_mask, sizeof(mask))) ++ if (sizemask != sizeof(sigset_t)) + return -EINVAL; ++ if (copy_from_user(&mask, user_mask, sizeof(mask))) ++ return -EFAULT; + return do_signalfd4(ufd, &mask, 0); + } + +diff --git a/fs/xfs/xfs_sysfs.h b/fs/xfs/xfs_sysfs.h +index e9f810fc67317..43585850f1546 100644 +--- a/fs/xfs/xfs_sysfs.h ++++ b/fs/xfs/xfs_sysfs.h +@@ -32,9 +32,11 @@ xfs_sysfs_init( + struct xfs_kobj *parent_kobj, + const char *name) + { ++ struct kobject *parent; ++ ++ parent = parent_kobj ? 
&parent_kobj->kobject : NULL; + init_completion(&kobj->complete); +- return kobject_init_and_add(&kobj->kobject, ktype, +- &parent_kobj->kobject, "%s", name); ++ return kobject_init_and_add(&kobj->kobject, ktype, parent, "%s", name); + } + + static inline void +diff --git a/fs/xfs/xfs_trans_dquot.c b/fs/xfs/xfs_trans_dquot.c +index 16457465833ba..904780dd74aa3 100644 +--- a/fs/xfs/xfs_trans_dquot.c ++++ b/fs/xfs/xfs_trans_dquot.c +@@ -646,7 +646,7 @@ xfs_trans_dqresv( + } + } + if (ninos > 0) { +- total_count = be64_to_cpu(dqp->q_core.d_icount) + ninos; ++ total_count = dqp->q_res_icount + ninos; + timer = be32_to_cpu(dqp->q_core.d_itimer); + warns = be16_to_cpu(dqp->q_core.d_iwarns); + warnlimit = dqp->q_mount->m_quotainfo->qi_iwarnlimit; +diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c +index aa83538efc238..a793bd23fe56c 100644 +--- a/kernel/events/uprobes.c ++++ b/kernel/events/uprobes.c +@@ -211,7 +211,7 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr, + try_to_free_swap(old_page); + page_vma_mapped_walk_done(&pvmw); + +- if (vma->vm_flags & VM_LOCKED) ++ if ((vma->vm_flags & VM_LOCKED) && !PageCompound(old_page)) + munlock_vma_page(old_page); + put_page(old_page); + +diff --git a/kernel/kthread.c b/kernel/kthread.c +index b262f47046ca4..bfbfa481be3a5 100644 +--- a/kernel/kthread.c ++++ b/kernel/kthread.c +@@ -199,8 +199,15 @@ static void __kthread_parkme(struct kthread *self) + if (!test_bit(KTHREAD_SHOULD_PARK, &self->flags)) + break; + ++ /* ++ * Thread is going to call schedule(), do not preempt it, ++ * or the caller of kthread_park() may spend more time in ++ * wait_task_inactive(). ++ */ ++ preempt_disable(); + complete(&self->parked); +- schedule(); ++ schedule_preempt_disabled(); ++ preempt_enable(); + } + __set_current_state(TASK_RUNNING); + } +@@ -245,8 +252,14 @@ static int kthread(void *_create) + /* OK, tell user we're spawned, wait for stop or wakeup */ + __set_current_state(TASK_UNINTERRUPTIBLE); + create->result = current; ++ /* ++ * Thread is going to call schedule(), do not preempt it, ++ * or the creator may spend more time in wait_task_inactive(). ++ */ ++ preempt_disable(); + complete(done); +- schedule(); ++ schedule_preempt_disabled(); ++ preempt_enable(); + + ret = -EINTR; + if (!test_bit(KTHREAD_SHOULD_STOP, &self->flags)) { +diff --git a/kernel/relay.c b/kernel/relay.c +index 4b760ec163426..d3940becf2fc3 100644 +--- a/kernel/relay.c ++++ b/kernel/relay.c +@@ -197,6 +197,7 @@ free_buf: + static void relay_destroy_channel(struct kref *kref) + { + struct rchan *chan = container_of(kref, struct rchan, kref); ++ free_percpu(chan->buf); + kfree(chan); + } + +diff --git a/mm/hugetlb.c b/mm/hugetlb.c +index 2af1831596f22..2a83b03c54a69 100644 +--- a/mm/hugetlb.c ++++ b/mm/hugetlb.c +@@ -4846,25 +4846,21 @@ static bool vma_shareable(struct vm_area_struct *vma, unsigned long addr) + void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma, + unsigned long *start, unsigned long *end) + { +- unsigned long check_addr = *start; ++ unsigned long a_start, a_end; + + if (!(vma->vm_flags & VM_MAYSHARE)) + return; + +- for (check_addr = *start; check_addr < *end; check_addr += PUD_SIZE) { +- unsigned long a_start = check_addr & PUD_MASK; +- unsigned long a_end = a_start + PUD_SIZE; ++ /* Extend the range to be PUD aligned for a worst case scenario */ ++ a_start = ALIGN_DOWN(*start, PUD_SIZE); ++ a_end = ALIGN(*end, PUD_SIZE); + +- /* +- * If sharing is possible, adjust start/end if necessary. 
+- */ +- if (range_in_vma(vma, a_start, a_end)) { +- if (a_start < *start) +- *start = a_start; +- if (a_end > *end) +- *end = a_end; +- } +- } ++ /* ++ * Intersect the range with the vma range, since pmd sharing won't be ++ * across vma after all ++ */ ++ *start = max(vma->vm_start, a_start); ++ *end = min(vma->vm_end, a_end); + } + + /* +diff --git a/mm/khugepaged.c b/mm/khugepaged.c +index 719f49d1fba2f..3623d1c5343f2 100644 +--- a/mm/khugepaged.c ++++ b/mm/khugepaged.c +@@ -401,7 +401,7 @@ static void insert_to_mm_slots_hash(struct mm_struct *mm, + + static inline int khugepaged_test_exit(struct mm_struct *mm) + { +- return atomic_read(&mm->mm_users) == 0; ++ return atomic_read(&mm->mm_users) == 0 || !mmget_still_valid(mm); + } + + static bool hugepage_vma_check(struct vm_area_struct *vma, +@@ -438,7 +438,7 @@ int __khugepaged_enter(struct mm_struct *mm) + return -ENOMEM; + + /* __khugepaged_exit() must not run from under us */ +- VM_BUG_ON_MM(khugepaged_test_exit(mm), mm); ++ VM_BUG_ON_MM(atomic_read(&mm->mm_users) == 0, mm); + if (unlikely(test_and_set_bit(MMF_VM_HUGEPAGE, &mm->flags))) { + free_mm_slot(mm_slot); + return 0; +@@ -1019,9 +1019,6 @@ static void collapse_huge_page(struct mm_struct *mm, + * handled by the anon_vma lock + PG_lock. + */ + down_write(&mm->mmap_sem); +- result = SCAN_ANY_PROCESS; +- if (!mmget_still_valid(mm)) +- goto out; + result = hugepage_vma_revalidate(mm, address, &vma); + if (result) + goto out; +diff --git a/mm/page_alloc.c b/mm/page_alloc.c +index 8686fe760f34c..67a9943aa595f 100644 +--- a/mm/page_alloc.c ++++ b/mm/page_alloc.c +@@ -1256,6 +1256,11 @@ static void free_pcppages_bulk(struct zone *zone, int count, + struct page *page, *tmp; + LIST_HEAD(head); + ++ /* ++ * Ensure proper count is passed which otherwise would stuck in the ++ * below while (list_empty(list)) loop. ++ */ ++ count = min(pcp->count, count); + while (count) { + struct list_head *list; + +@@ -7867,7 +7872,7 @@ int __meminit init_per_zone_wmark_min(void) + + return 0; + } +-core_initcall(init_per_zone_wmark_min) ++postcore_initcall(init_per_zone_wmark_min) + + /* + * min_free_kbytes_sysctl_handler - just a wrapper around proc_dointvec() so +diff --git a/net/can/j1939/socket.c b/net/can/j1939/socket.c +index f7587428febdd..bf9fd6ee88fe0 100644 +--- a/net/can/j1939/socket.c ++++ b/net/can/j1939/socket.c +@@ -398,6 +398,7 @@ static int j1939_sk_init(struct sock *sk) + spin_lock_init(&jsk->sk_session_queue_lock); + INIT_LIST_HEAD(&jsk->sk_session_queue); + sk->sk_destruct = j1939_sk_sock_destruct; ++ sk->sk_protocol = CAN_J1939; + + return 0; + } +@@ -466,6 +467,14 @@ static int j1939_sk_bind(struct socket *sock, struct sockaddr *uaddr, int len) + goto out_release_sock; + } + ++ if (!ndev->ml_priv) { ++ netdev_warn_once(ndev, ++ "No CAN mid layer private allocated, please fix your driver and use alloc_candev()!\n"); ++ dev_put(ndev); ++ ret = -ENODEV; ++ goto out_release_sock; ++ } ++ + priv = j1939_netdev_start(ndev); + dev_put(ndev); + if (IS_ERR(priv)) { +@@ -553,6 +562,11 @@ static int j1939_sk_connect(struct socket *sock, struct sockaddr *uaddr, + static void j1939_sk_sock2sockaddr_can(struct sockaddr_can *addr, + const struct j1939_sock *jsk, int peer) + { ++ /* There are two holes (2 bytes and 3 bytes) to clear to avoid ++ * leaking kernel information to user space. 
++ */ ++ memset(addr, 0, J1939_MIN_NAMELEN); ++ + addr->can_family = AF_CAN; + addr->can_ifindex = jsk->ifindex; + addr->can_addr.j1939.pgn = jsk->addr.pgn; +diff --git a/net/can/j1939/transport.c b/net/can/j1939/transport.c +index 9f99af5b0b11e..dbd215cbc53d8 100644 +--- a/net/can/j1939/transport.c ++++ b/net/can/j1939/transport.c +@@ -352,17 +352,16 @@ void j1939_session_skb_queue(struct j1939_session *session, + skb_queue_tail(&session->skb_queue, skb); + } + +-static struct sk_buff *j1939_session_skb_find(struct j1939_session *session) ++static struct ++sk_buff *j1939_session_skb_find_by_offset(struct j1939_session *session, ++ unsigned int offset_start) + { + struct j1939_priv *priv = session->priv; ++ struct j1939_sk_buff_cb *do_skcb; + struct sk_buff *skb = NULL; + struct sk_buff *do_skb; +- struct j1939_sk_buff_cb *do_skcb; +- unsigned int offset_start; + unsigned long flags; + +- offset_start = session->pkt.dpo * 7; +- + spin_lock_irqsave(&session->skb_queue.lock, flags); + skb_queue_walk(&session->skb_queue, do_skb) { + do_skcb = j1939_skb_to_cb(do_skb); +@@ -382,6 +381,14 @@ static struct sk_buff *j1939_session_skb_find(struct j1939_session *session) + return skb; + } + ++static struct sk_buff *j1939_session_skb_find(struct j1939_session *session) ++{ ++ unsigned int offset_start; ++ ++ offset_start = session->pkt.dpo * 7; ++ return j1939_session_skb_find_by_offset(session, offset_start); ++} ++ + /* see if we are receiver + * returns 0 for broadcasts, although we will receive them + */ +@@ -716,10 +723,12 @@ static int j1939_session_tx_rts(struct j1939_session *session) + return ret; + + session->last_txcmd = dat[0]; +- if (dat[0] == J1939_TP_CMD_BAM) ++ if (dat[0] == J1939_TP_CMD_BAM) { + j1939_tp_schedule_txtimer(session, 50); +- +- j1939_tp_set_rxtimeout(session, 1250); ++ j1939_tp_set_rxtimeout(session, 250); ++ } else { ++ j1939_tp_set_rxtimeout(session, 1250); ++ } + + netdev_dbg(session->priv->ndev, "%s: 0x%p\n", __func__, session); + +@@ -766,7 +775,7 @@ static int j1939_session_tx_dat(struct j1939_session *session) + int ret = 0; + u8 dat[8]; + +- se_skb = j1939_session_skb_find(session); ++ se_skb = j1939_session_skb_find_by_offset(session, session->pkt.tx * 7); + if (!se_skb) + return -ENOBUFS; + +@@ -787,6 +796,18 @@ static int j1939_session_tx_dat(struct j1939_session *session) + if (len > 7) + len = 7; + ++ if (offset + len > se_skb->len) { ++ netdev_err_once(priv->ndev, ++ "%s: 0x%p: requested data outside of queued buffer: offset %i, len %i, pkt.tx: %i\n", ++ __func__, session, skcb->offset, se_skb->len , session->pkt.tx); ++ return -EOVERFLOW; ++ } ++ ++ if (!len) { ++ ret = -ENOBUFS; ++ break; ++ } ++ + memcpy(&dat[1], &tpdat[offset], len); + ret = j1939_tp_tx_dat(session, dat, len + 1); + if (ret < 0) { +@@ -1055,9 +1076,9 @@ static void __j1939_session_cancel(struct j1939_session *session, + lockdep_assert_held(&session->priv->active_session_list_lock); + + session->err = j1939_xtp_abort_to_errno(priv, err); ++ session->state = J1939_SESSION_WAITING_ABORT; + /* do not send aborts on incoming broadcasts */ + if (!j1939_cb_is_broadcast(&session->skcb)) { +- session->state = J1939_SESSION_WAITING_ABORT; + j1939_xtp_tx_abort(priv, &session->skcb, + !session->transmission, + err, session->skcb.addr.pgn); +@@ -1120,6 +1141,9 @@ static enum hrtimer_restart j1939_tp_txtimer(struct hrtimer *hrtimer) + * cleanup including propagation of the error to user space. 
+ */ + break; ++ case -EOVERFLOW: ++ j1939_session_cancel(session, J1939_XTP_ABORT_ECTS_TOO_BIG); ++ break; + case 0: + session->tx_retry = 0; + break; +@@ -1651,8 +1675,12 @@ static void j1939_xtp_rx_rts(struct j1939_priv *priv, struct sk_buff *skb, + return; + } + session = j1939_xtp_rx_rts_session_new(priv, skb); +- if (!session) ++ if (!session) { ++ if (cmd == J1939_TP_CMD_BAM && j1939_sk_recv_match(priv, skcb)) ++ netdev_info(priv->ndev, "%s: failed to create TP BAM session\n", ++ __func__); + return; ++ } + } else { + if (j1939_xtp_rx_rts_session_active(session, skb)) { + j1939_session_put(session); +@@ -1661,11 +1689,15 @@ static void j1939_xtp_rx_rts(struct j1939_priv *priv, struct sk_buff *skb, + } + session->last_cmd = cmd; + +- j1939_tp_set_rxtimeout(session, 1250); +- +- if (cmd != J1939_TP_CMD_BAM && !session->transmission) { +- j1939_session_txtimer_cancel(session); +- j1939_tp_schedule_txtimer(session, 0); ++ if (cmd == J1939_TP_CMD_BAM) { ++ if (!session->transmission) ++ j1939_tp_set_rxtimeout(session, 750); ++ } else { ++ if (!session->transmission) { ++ j1939_session_txtimer_cancel(session); ++ j1939_tp_schedule_txtimer(session, 0); ++ } ++ j1939_tp_set_rxtimeout(session, 1250); + } + + j1939_session_put(session); +@@ -1716,6 +1748,7 @@ static void j1939_xtp_rx_dat_one(struct j1939_session *session, + int offset; + int nbytes; + bool final = false; ++ bool remain = false; + bool do_cts_eoma = false; + int packet; + +@@ -1750,7 +1783,8 @@ static void j1939_xtp_rx_dat_one(struct j1939_session *session, + __func__, session); + goto out_session_cancel; + } +- se_skb = j1939_session_skb_find(session); ++ ++ se_skb = j1939_session_skb_find_by_offset(session, packet * 7); + if (!se_skb) { + netdev_warn(priv->ndev, "%s: 0x%p: no skb found\n", __func__, + session); +@@ -1777,6 +1811,8 @@ static void j1939_xtp_rx_dat_one(struct j1939_session *session, + j1939_cb_is_broadcast(&session->skcb)) { + if (session->pkt.rx >= session->pkt.total) + final = true; ++ else ++ remain = true; + } else { + /* never final, an EOMA must follow */ + if (session->pkt.rx >= session->pkt.last) +@@ -1784,7 +1820,11 @@ static void j1939_xtp_rx_dat_one(struct j1939_session *session, + } + + if (final) { ++ j1939_session_timers_cancel(session); + j1939_session_completed(session); ++ } else if (remain) { ++ if (!session->transmission) ++ j1939_tp_set_rxtimeout(session, 750); + } else if (do_cts_eoma) { + j1939_tp_set_rxtimeout(session, 1250); + if (!session->transmission) +@@ -1829,6 +1869,13 @@ static void j1939_xtp_rx_dat(struct j1939_priv *priv, struct sk_buff *skb) + else + j1939_xtp_rx_dat_one(session, skb); + } ++ ++ if (j1939_cb_is_broadcast(skcb)) { ++ session = j1939_session_get_by_addr(priv, &skcb->addr, false, ++ false); ++ if (session) ++ j1939_xtp_rx_dat_one(session, skb); ++ } + } + + /* j1939 main intf */ +@@ -1920,7 +1967,7 @@ static void j1939_tp_cmd_recv(struct j1939_priv *priv, struct sk_buff *skb) + if (j1939_tp_im_transmitter(skcb)) + j1939_xtp_rx_rts(priv, skb, true); + +- if (j1939_tp_im_receiver(skcb)) ++ if (j1939_tp_im_receiver(skcb) || j1939_cb_is_broadcast(skcb)) + j1939_xtp_rx_rts(priv, skb, false); + + break; +@@ -1984,7 +2031,7 @@ int j1939_tp_recv(struct j1939_priv *priv, struct sk_buff *skb) + { + struct j1939_sk_buff_cb *skcb = j1939_skb_to_cb(skb); + +- if (!j1939_tp_im_involved_anydir(skcb)) ++ if (!j1939_tp_im_involved_anydir(skcb) && !j1939_cb_is_broadcast(skcb)) + return 0; + + switch (skcb->addr.pgn) { +@@ -2017,6 +2064,10 @@ void j1939_simple_recv(struct j1939_priv 
*priv, struct sk_buff *skb) + if (!skb->sk) + return; + ++ if (skb->sk->sk_family != AF_CAN || ++ skb->sk->sk_protocol != CAN_J1939) ++ return; ++ + j1939_session_list_lock(priv); + session = j1939_session_get_simple(priv, skb); + j1939_session_list_unlock(priv); +diff --git a/net/core/filter.c b/net/core/filter.c +index bd1e46d61d8a1..5c490d473df1d 100644 +--- a/net/core/filter.c ++++ b/net/core/filter.c +@@ -8010,6 +8010,43 @@ static u32 sock_ops_convert_ctx_access(enum bpf_access_type type, + offsetof(OBJ, OBJ_FIELD)); \ + } while (0) + ++#define SOCK_OPS_GET_SK() \ ++ do { \ ++ int fullsock_reg = si->dst_reg, reg = BPF_REG_9, jmp = 1; \ ++ if (si->dst_reg == reg || si->src_reg == reg) \ ++ reg--; \ ++ if (si->dst_reg == reg || si->src_reg == reg) \ ++ reg--; \ ++ if (si->dst_reg == si->src_reg) { \ ++ *insn++ = BPF_STX_MEM(BPF_DW, si->src_reg, reg, \ ++ offsetof(struct bpf_sock_ops_kern, \ ++ temp)); \ ++ fullsock_reg = reg; \ ++ jmp += 2; \ ++ } \ ++ *insn++ = BPF_LDX_MEM(BPF_FIELD_SIZEOF( \ ++ struct bpf_sock_ops_kern, \ ++ is_fullsock), \ ++ fullsock_reg, si->src_reg, \ ++ offsetof(struct bpf_sock_ops_kern, \ ++ is_fullsock)); \ ++ *insn++ = BPF_JMP_IMM(BPF_JEQ, fullsock_reg, 0, jmp); \ ++ if (si->dst_reg == si->src_reg) \ ++ *insn++ = BPF_LDX_MEM(BPF_DW, reg, si->src_reg, \ ++ offsetof(struct bpf_sock_ops_kern, \ ++ temp)); \ ++ *insn++ = BPF_LDX_MEM(BPF_FIELD_SIZEOF( \ ++ struct bpf_sock_ops_kern, sk),\ ++ si->dst_reg, si->src_reg, \ ++ offsetof(struct bpf_sock_ops_kern, sk));\ ++ if (si->dst_reg == si->src_reg) { \ ++ *insn++ = BPF_JMP_A(1); \ ++ *insn++ = BPF_LDX_MEM(BPF_DW, reg, si->src_reg, \ ++ offsetof(struct bpf_sock_ops_kern, \ ++ temp)); \ ++ } \ ++ } while (0) ++ + #define SOCK_OPS_GET_TCP_SOCK_FIELD(FIELD) \ + SOCK_OPS_GET_FIELD(FIELD, FIELD, struct tcp_sock) + +@@ -8294,17 +8331,7 @@ static u32 sock_ops_convert_ctx_access(enum bpf_access_type type, + SOCK_OPS_GET_TCP_SOCK_FIELD(bytes_acked); + break; + case offsetof(struct bpf_sock_ops, sk): +- *insn++ = BPF_LDX_MEM(BPF_FIELD_SIZEOF( +- struct bpf_sock_ops_kern, +- is_fullsock), +- si->dst_reg, si->src_reg, +- offsetof(struct bpf_sock_ops_kern, +- is_fullsock)); +- *insn++ = BPF_JMP_IMM(BPF_JEQ, si->dst_reg, 0, 1); +- *insn++ = BPF_LDX_MEM(BPF_FIELD_SIZEOF( +- struct bpf_sock_ops_kern, sk), +- si->dst_reg, si->src_reg, +- offsetof(struct bpf_sock_ops_kern, sk)); ++ SOCK_OPS_GET_SK(); + break; + } + return insn - insn_buf; +diff --git a/net/netfilter/nft_exthdr.c b/net/netfilter/nft_exthdr.c +index a5e8469859e39..427d77b111b17 100644 +--- a/net/netfilter/nft_exthdr.c ++++ b/net/netfilter/nft_exthdr.c +@@ -44,7 +44,7 @@ static void nft_exthdr_ipv6_eval(const struct nft_expr *expr, + + err = ipv6_find_hdr(pkt->skb, &offset, priv->type, NULL, NULL); + if (priv->flags & NFT_EXTHDR_F_PRESENT) { +- *dest = (err >= 0); ++ nft_reg_store8(dest, err >= 0); + return; + } else if (err < 0) { + goto err; +@@ -141,7 +141,7 @@ static void nft_exthdr_ipv4_eval(const struct nft_expr *expr, + + err = ipv4_find_option(nft_net(pkt), skb, &offset, priv->type); + if (priv->flags & NFT_EXTHDR_F_PRESENT) { +- *dest = (err >= 0); ++ nft_reg_store8(dest, err >= 0); + return; + } else if (err < 0) { + goto err; +diff --git a/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c b/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c +index 0ce4e75b29812..d803d814a03ad 100644 +--- a/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c ++++ b/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c +@@ -265,6 +265,8 @@ static int svc_rdma_post_recv(struct svcxprt_rdma *rdma) + { + struct 
svc_rdma_recv_ctxt *ctxt; + ++ if (test_bit(XPT_CLOSE, &rdma->sc_xprt.xpt_flags)) ++ return 0; + ctxt = svc_rdma_recv_ctxt_get(rdma); + if (!ctxt) + return -ENOMEM; +diff --git a/scripts/kconfig/qconf.cc b/scripts/kconfig/qconf.cc +index 0f8c77f847114..a94909ad9a53a 100644 +--- a/scripts/kconfig/qconf.cc ++++ b/scripts/kconfig/qconf.cc +@@ -869,40 +869,40 @@ void ConfigList::focusInEvent(QFocusEvent *e) + + void ConfigList::contextMenuEvent(QContextMenuEvent *e) + { +- if (e->y() <= header()->geometry().bottom()) { +- if (!headerPopup) { +- QAction *action; +- +- headerPopup = new QMenu(this); +- action = new QAction("Show Name", this); +- action->setCheckable(true); +- connect(action, SIGNAL(toggled(bool)), +- parent(), SLOT(setShowName(bool))); +- connect(parent(), SIGNAL(showNameChanged(bool)), +- action, SLOT(setOn(bool))); +- action->setChecked(showName); +- headerPopup->addAction(action); +- action = new QAction("Show Range", this); +- action->setCheckable(true); +- connect(action, SIGNAL(toggled(bool)), +- parent(), SLOT(setShowRange(bool))); +- connect(parent(), SIGNAL(showRangeChanged(bool)), +- action, SLOT(setOn(bool))); +- action->setChecked(showRange); +- headerPopup->addAction(action); +- action = new QAction("Show Data", this); +- action->setCheckable(true); +- connect(action, SIGNAL(toggled(bool)), +- parent(), SLOT(setShowData(bool))); +- connect(parent(), SIGNAL(showDataChanged(bool)), +- action, SLOT(setOn(bool))); +- action->setChecked(showData); +- headerPopup->addAction(action); +- } +- headerPopup->exec(e->globalPos()); +- e->accept(); +- } else +- e->ignore(); ++ if (!headerPopup) { ++ QAction *action; ++ ++ headerPopup = new QMenu(this); ++ action = new QAction("Show Name", this); ++ action->setCheckable(true); ++ connect(action, SIGNAL(toggled(bool)), ++ parent(), SLOT(setShowName(bool))); ++ connect(parent(), SIGNAL(showNameChanged(bool)), ++ action, SLOT(setChecked(bool))); ++ action->setChecked(showName); ++ headerPopup->addAction(action); ++ ++ action = new QAction("Show Range", this); ++ action->setCheckable(true); ++ connect(action, SIGNAL(toggled(bool)), ++ parent(), SLOT(setShowRange(bool))); ++ connect(parent(), SIGNAL(showRangeChanged(bool)), ++ action, SLOT(setChecked(bool))); ++ action->setChecked(showRange); ++ headerPopup->addAction(action); ++ ++ action = new QAction("Show Data", this); ++ action->setCheckable(true); ++ connect(action, SIGNAL(toggled(bool)), ++ parent(), SLOT(setShowData(bool))); ++ connect(parent(), SIGNAL(showDataChanged(bool)), ++ action, SLOT(setChecked(bool))); ++ action->setChecked(showData); ++ headerPopup->addAction(action); ++ } ++ ++ headerPopup->exec(e->globalPos()); ++ e->accept(); + } + + ConfigView*ConfigView::viewList; +@@ -1228,7 +1228,7 @@ QMenu* ConfigInfoView::createStandardContextMenu(const QPoint & pos) + + action->setCheckable(true); + connect(action, SIGNAL(toggled(bool)), SLOT(setShowDebug(bool))); +- connect(this, SIGNAL(showDebugChanged(bool)), action, SLOT(setOn(bool))); ++ connect(this, SIGNAL(showDebugChanged(bool)), action, SLOT(setChecked(bool))); + action->setChecked(showDebug()); + popup->addSeparator(); + popup->addAction(action); +diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c +index 88629906f314c..06bbcfbb28153 100644 +--- a/sound/pci/hda/patch_realtek.c ++++ b/sound/pci/hda/patch_realtek.c +@@ -7666,6 +7666,8 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = { + SND_PCI_QUIRK(0x144d, 0xc109, "Samsung Ativ book 9 (NP900X3G)", ALC269_FIXUP_INV_DMIC), + 
SND_PCI_QUIRK(0x144d, 0xc169, "Samsung Notebook 9 Pen (NP930SBE-K01US)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET), + SND_PCI_QUIRK(0x144d, 0xc176, "Samsung Notebook 9 Pro (NP930MBE-K04US)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET), ++ SND_PCI_QUIRK(0x144d, 0xc189, "Samsung Galaxy Flex Book (NT950QCG-X716)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET), ++ SND_PCI_QUIRK(0x144d, 0xc18a, "Samsung Galaxy Book Ion (NT950XCJ-X716A)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET), + SND_PCI_QUIRK(0x144d, 0xc740, "Samsung Ativ book 8 (NP870Z5G)", ALC269_FIXUP_ATIV_BOOK_8), + SND_PCI_QUIRK(0x144d, 0xc812, "Samsung Notebook Pen S (NT950SBE-X58)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET), + SND_PCI_QUIRK(0x1458, 0xfa53, "Gigabyte BXBT-2807", ALC283_FIXUP_HEADSET_MIC), +diff --git a/sound/soc/codecs/msm8916-wcd-analog.c b/sound/soc/codecs/msm8916-wcd-analog.c +index 84289ebeae872..337bddb7c2a49 100644 +--- a/sound/soc/codecs/msm8916-wcd-analog.c ++++ b/sound/soc/codecs/msm8916-wcd-analog.c +@@ -19,8 +19,8 @@ + + #define CDC_D_REVISION1 (0xf000) + #define CDC_D_PERPH_SUBTYPE (0xf005) +-#define CDC_D_INT_EN_SET (0x015) +-#define CDC_D_INT_EN_CLR (0x016) ++#define CDC_D_INT_EN_SET (0xf015) ++#define CDC_D_INT_EN_CLR (0xf016) + #define MBHC_SWITCH_INT BIT(7) + #define MBHC_MIC_ELECTRICAL_INS_REM_DET BIT(6) + #define MBHC_BUTTON_PRESS_DET BIT(5) +diff --git a/sound/soc/intel/atom/sst-mfld-platform-pcm.c b/sound/soc/intel/atom/sst-mfld-platform-pcm.c +index 8cc3cc363eb03..31f1dd6541aa1 100644 +--- a/sound/soc/intel/atom/sst-mfld-platform-pcm.c ++++ b/sound/soc/intel/atom/sst-mfld-platform-pcm.c +@@ -331,7 +331,7 @@ static int sst_media_open(struct snd_pcm_substream *substream, + + ret_val = power_up_sst(stream); + if (ret_val < 0) +- return ret_val; ++ goto out_power_up; + + /* Make sure, that the period size is always even */ + snd_pcm_hw_constraint_step(substream->runtime, 0, +@@ -340,8 +340,9 @@ static int sst_media_open(struct snd_pcm_substream *substream, + return snd_pcm_hw_constraint_integer(runtime, + SNDRV_PCM_HW_PARAM_PERIODS); + out_ops: +- kfree(stream); + mutex_unlock(&sst_lock); ++out_power_up: ++ kfree(stream); + return ret_val; + } + +diff --git a/sound/soc/qcom/qdsp6/q6afe-dai.c b/sound/soc/qcom/qdsp6/q6afe-dai.c +index 2a5302f1db98a..0168af8492727 100644 +--- a/sound/soc/qcom/qdsp6/q6afe-dai.c ++++ b/sound/soc/qcom/qdsp6/q6afe-dai.c +@@ -1150,206 +1150,206 @@ static int q6afe_of_xlate_dai_name(struct snd_soc_component *component, + } + + static const struct snd_soc_dapm_widget q6afe_dai_widgets[] = { +- SND_SOC_DAPM_AIF_IN("HDMI_RX", NULL, 0, 0, 0, 0), +- SND_SOC_DAPM_AIF_IN("SLIMBUS_0_RX", NULL, 0, 0, 0, 0), +- SND_SOC_DAPM_AIF_IN("SLIMBUS_1_RX", NULL, 0, 0, 0, 0), +- SND_SOC_DAPM_AIF_IN("SLIMBUS_2_RX", NULL, 0, 0, 0, 0), +- SND_SOC_DAPM_AIF_IN("SLIMBUS_3_RX", NULL, 0, 0, 0, 0), +- SND_SOC_DAPM_AIF_IN("SLIMBUS_4_RX", NULL, 0, 0, 0, 0), +- SND_SOC_DAPM_AIF_IN("SLIMBUS_5_RX", NULL, 0, 0, 0, 0), +- SND_SOC_DAPM_AIF_IN("SLIMBUS_6_RX", NULL, 0, 0, 0, 0), +- SND_SOC_DAPM_AIF_OUT("SLIMBUS_0_TX", NULL, 0, 0, 0, 0), +- SND_SOC_DAPM_AIF_OUT("SLIMBUS_1_TX", NULL, 0, 0, 0, 0), +- SND_SOC_DAPM_AIF_OUT("SLIMBUS_2_TX", NULL, 0, 0, 0, 0), +- SND_SOC_DAPM_AIF_OUT("SLIMBUS_3_TX", NULL, 0, 0, 0, 0), +- SND_SOC_DAPM_AIF_OUT("SLIMBUS_4_TX", NULL, 0, 0, 0, 0), +- SND_SOC_DAPM_AIF_OUT("SLIMBUS_5_TX", NULL, 0, 0, 0, 0), +- SND_SOC_DAPM_AIF_OUT("SLIMBUS_6_TX", NULL, 0, 0, 0, 0), ++ SND_SOC_DAPM_AIF_IN("HDMI_RX", NULL, 0, SND_SOC_NOPM, 0, 0), ++ SND_SOC_DAPM_AIF_IN("SLIMBUS_0_RX", NULL, 0, SND_SOC_NOPM, 0, 0), ++ 
SND_SOC_DAPM_AIF_IN("SLIMBUS_1_RX", NULL, 0, SND_SOC_NOPM, 0, 0), ++ SND_SOC_DAPM_AIF_IN("SLIMBUS_2_RX", NULL, 0, SND_SOC_NOPM, 0, 0), ++ SND_SOC_DAPM_AIF_IN("SLIMBUS_3_RX", NULL, 0, SND_SOC_NOPM, 0, 0), ++ SND_SOC_DAPM_AIF_IN("SLIMBUS_4_RX", NULL, 0, SND_SOC_NOPM, 0, 0), ++ SND_SOC_DAPM_AIF_IN("SLIMBUS_5_RX", NULL, 0, SND_SOC_NOPM, 0, 0), ++ SND_SOC_DAPM_AIF_IN("SLIMBUS_6_RX", NULL, 0, SND_SOC_NOPM, 0, 0), ++ SND_SOC_DAPM_AIF_OUT("SLIMBUS_0_TX", NULL, 0, SND_SOC_NOPM, 0, 0), ++ SND_SOC_DAPM_AIF_OUT("SLIMBUS_1_TX", NULL, 0, SND_SOC_NOPM, 0, 0), ++ SND_SOC_DAPM_AIF_OUT("SLIMBUS_2_TX", NULL, 0, SND_SOC_NOPM, 0, 0), ++ SND_SOC_DAPM_AIF_OUT("SLIMBUS_3_TX", NULL, 0, SND_SOC_NOPM, 0, 0), ++ SND_SOC_DAPM_AIF_OUT("SLIMBUS_4_TX", NULL, 0, SND_SOC_NOPM, 0, 0), ++ SND_SOC_DAPM_AIF_OUT("SLIMBUS_5_TX", NULL, 0, SND_SOC_NOPM, 0, 0), ++ SND_SOC_DAPM_AIF_OUT("SLIMBUS_6_TX", NULL, 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_IN("QUAT_MI2S_RX", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_OUT("QUAT_MI2S_TX", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_IN("TERT_MI2S_RX", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_OUT("TERT_MI2S_TX", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_IN("SEC_MI2S_RX", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_OUT("SEC_MI2S_TX", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_IN("SEC_MI2S_RX_SD1", + "Secondary MI2S Playback SD1", +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_IN("PRI_MI2S_RX", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_OUT("PRI_MI2S_TX", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + + SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_RX_0", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_RX_1", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_RX_2", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_RX_3", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_RX_4", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_RX_5", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_RX_6", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_RX_7", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_TX_0", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_TX_1", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_TX_2", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_TX_3", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_TX_4", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_TX_5", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_TX_6", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_TX_7", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + + SND_SOC_DAPM_AIF_IN("SEC_TDM_RX_0", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_IN("SEC_TDM_RX_1", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_IN("SEC_TDM_RX_2", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_IN("SEC_TDM_RX_3", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + 
SND_SOC_DAPM_AIF_IN("SEC_TDM_RX_4", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_IN("SEC_TDM_RX_5", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_IN("SEC_TDM_RX_6", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_IN("SEC_TDM_RX_7", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_OUT("SEC_TDM_TX_0", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_OUT("SEC_TDM_TX_1", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_OUT("SEC_TDM_TX_2", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_OUT("SEC_TDM_TX_3", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_OUT("SEC_TDM_TX_4", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_OUT("SEC_TDM_TX_5", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_OUT("SEC_TDM_TX_6", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_OUT("SEC_TDM_TX_7", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + + SND_SOC_DAPM_AIF_IN("TERT_TDM_RX_0", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_IN("TERT_TDM_RX_1", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_IN("TERT_TDM_RX_2", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_IN("TERT_TDM_RX_3", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_IN("TERT_TDM_RX_4", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_IN("TERT_TDM_RX_5", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_IN("TERT_TDM_RX_6", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_IN("TERT_TDM_RX_7", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_OUT("TERT_TDM_TX_0", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_OUT("TERT_TDM_TX_1", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_OUT("TERT_TDM_TX_2", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_OUT("TERT_TDM_TX_3", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_OUT("TERT_TDM_TX_4", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_OUT("TERT_TDM_TX_5", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_OUT("TERT_TDM_TX_6", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_OUT("TERT_TDM_TX_7", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + + SND_SOC_DAPM_AIF_IN("QUAT_TDM_RX_0", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_IN("QUAT_TDM_RX_1", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_IN("QUAT_TDM_RX_2", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_IN("QUAT_TDM_RX_3", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_IN("QUAT_TDM_RX_4", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_IN("QUAT_TDM_RX_5", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_IN("QUAT_TDM_RX_6", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_IN("QUAT_TDM_RX_7", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_OUT("QUAT_TDM_TX_0", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_OUT("QUAT_TDM_TX_1", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_OUT("QUAT_TDM_TX_2", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_OUT("QUAT_TDM_TX_3", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + 
SND_SOC_DAPM_AIF_OUT("QUAT_TDM_TX_4", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_OUT("QUAT_TDM_TX_5", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_OUT("QUAT_TDM_TX_6", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_OUT("QUAT_TDM_TX_7", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + + SND_SOC_DAPM_AIF_IN("QUIN_TDM_RX_0", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_IN("QUIN_TDM_RX_1", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_IN("QUIN_TDM_RX_2", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_IN("QUIN_TDM_RX_3", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_IN("QUIN_TDM_RX_4", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_IN("QUIN_TDM_RX_5", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_IN("QUIN_TDM_RX_6", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_IN("QUIN_TDM_RX_7", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_OUT("QUIN_TDM_TX_0", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_OUT("QUIN_TDM_TX_1", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_OUT("QUIN_TDM_TX_2", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_OUT("QUIN_TDM_TX_3", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_OUT("QUIN_TDM_TX_4", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_OUT("QUIN_TDM_TX_5", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_OUT("QUIN_TDM_TX_6", NULL, +- 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), + SND_SOC_DAPM_AIF_OUT("QUIN_TDM_TX_7", NULL, +- 0, 0, 0, 0), +- SND_SOC_DAPM_AIF_OUT("DISPLAY_PORT_RX", "NULL", 0, 0, 0, 0), ++ 0, SND_SOC_NOPM, 0, 0), ++ SND_SOC_DAPM_AIF_OUT("DISPLAY_PORT_RX", "NULL", 0, SND_SOC_NOPM, 0, 0), + }; + + static const struct snd_soc_component_driver q6afe_dai_component = { +diff --git a/sound/soc/qcom/qdsp6/q6routing.c b/sound/soc/qcom/qdsp6/q6routing.c +index ddcd9978cf57b..745cc9dd14f38 100644 +--- a/sound/soc/qcom/qdsp6/q6routing.c ++++ b/sound/soc/qcom/qdsp6/q6routing.c +@@ -996,6 +996,20 @@ static int msm_routing_probe(struct snd_soc_component *c) + return 0; + } + ++static unsigned int q6routing_reg_read(struct snd_soc_component *component, ++ unsigned int reg) ++{ ++ /* default value */ ++ return 0; ++} ++ ++static int q6routing_reg_write(struct snd_soc_component *component, ++ unsigned int reg, unsigned int val) ++{ ++ /* dummy */ ++ return 0; ++} ++ + static const struct snd_soc_component_driver msm_soc_routing_component = { + .ops = &q6pcm_routing_ops, + .probe = msm_routing_probe, +@@ -1004,6 +1018,8 @@ static const struct snd_soc_component_driver msm_soc_routing_component = { + .num_dapm_widgets = ARRAY_SIZE(msm_qdsp6_widgets), + .dapm_routes = intercon, + .num_dapm_routes = ARRAY_SIZE(intercon), ++ .read = q6routing_reg_read, ++ .write = q6routing_reg_write, + }; + + static int q6pcm_routing_probe(struct platform_device *pdev) +diff --git a/tools/objtool/Makefile b/tools/objtool/Makefile +index ee08aeff30a19..f591c4d1b6fe2 100644 +--- a/tools/objtool/Makefile ++++ b/tools/objtool/Makefile +@@ -3,9 +3,15 @@ include ../scripts/Makefile.include + include ../scripts/Makefile.arch + + # always use the host compiler ++ifneq ($(LLVM),) ++HOSTAR ?= llvm-ar ++HOSTCC ?= clang ++HOSTLD ?= ld.lld ++else + HOSTAR ?= ar + HOSTCC ?= gcc + HOSTLD ?= ld ++endif + AR = $(HOSTAR) + CC = $(HOSTCC) + LD = $(HOSTLD) 
+diff --git a/tools/perf/util/probe-finder.c b/tools/perf/util/probe-finder.c +index dc9d495e3d6ab..849d8d2e5976b 100644 +--- a/tools/perf/util/probe-finder.c ++++ b/tools/perf/util/probe-finder.c +@@ -1362,7 +1362,7 @@ int debuginfo__find_trace_events(struct debuginfo *dbg, + tf.ntevs = 0; + + ret = debuginfo__find_probes(dbg, &tf.pf); +- if (ret < 0) { ++ if (ret < 0 || tf.ntevs == 0) { + for (i = 0; i < tf.ntevs; i++) + clear_probe_trace_event(&tf.tevs[i]); + zfree(tevs); +diff --git a/tools/testing/selftests/cgroup/cgroup_util.c b/tools/testing/selftests/cgroup/cgroup_util.c +index bdb69599c4bdc..5e939ff1e3f95 100644 +--- a/tools/testing/selftests/cgroup/cgroup_util.c ++++ b/tools/testing/selftests/cgroup/cgroup_util.c +@@ -105,7 +105,7 @@ int cg_read_strcmp(const char *cgroup, const char *control, + + /* Handle the case of comparing against empty string */ + if (!expected) +- size = 32; ++ return -1; + else + size = strlen(expected) + 1; + +diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c +index 767ac4eab4fe9..7501ec8a46004 100644 +--- a/virt/kvm/arm/mmu.c ++++ b/virt/kvm/arm/mmu.c +@@ -332,7 +332,8 @@ static void unmap_stage2_puds(struct kvm *kvm, pgd_t *pgd, + * destroying the VM), otherwise another faulting VCPU may come in and mess + * with things behind our backs. + */ +-static void unmap_stage2_range(struct kvm *kvm, phys_addr_t start, u64 size) ++static void __unmap_stage2_range(struct kvm *kvm, phys_addr_t start, u64 size, ++ bool may_block) + { + pgd_t *pgd; + phys_addr_t addr = start, end = start + size; +@@ -357,11 +358,16 @@ static void unmap_stage2_range(struct kvm *kvm, phys_addr_t start, u64 size) + * If the range is too large, release the kvm->mmu_lock + * to prevent starvation and lockup detector warnings. + */ +- if (next != end) ++ if (may_block && next != end) + cond_resched_lock(&kvm->mmu_lock); + } while (pgd++, addr = next, addr != end); + } + ++static void unmap_stage2_range(struct kvm *kvm, phys_addr_t start, u64 size) ++{ ++ __unmap_stage2_range(kvm, start, size, true); ++} ++ + static void stage2_flush_ptes(struct kvm *kvm, pmd_t *pmd, + phys_addr_t addr, phys_addr_t end) + { +@@ -2045,18 +2051,21 @@ static int handle_hva_to_gpa(struct kvm *kvm, + + static int kvm_unmap_hva_handler(struct kvm *kvm, gpa_t gpa, u64 size, void *data) + { +- unmap_stage2_range(kvm, gpa, size); ++ unsigned flags = *(unsigned *)data; ++ bool may_block = flags & MMU_NOTIFIER_RANGE_BLOCKABLE; ++ ++ __unmap_stage2_range(kvm, gpa, size, may_block); + return 0; + } + + int kvm_unmap_hva_range(struct kvm *kvm, +- unsigned long start, unsigned long end) ++ unsigned long start, unsigned long end, unsigned flags) + { + if (!kvm->arch.pgd) + return 0; + + trace_kvm_unmap_hva_range(start, end); +- handle_hva_to_gpa(kvm, start, end, &kvm_unmap_hva_handler, NULL); ++ handle_hva_to_gpa(kvm, start, end, &kvm_unmap_hva_handler, &flags); + return 0; + } + +diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c +index d5d4cd581af32..278bdc53047e8 100644 +--- a/virt/kvm/kvm_main.c ++++ b/virt/kvm/kvm_main.c +@@ -425,7 +425,8 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn, + * count is also read inside the mmu_lock critical section. + */ + kvm->mmu_notifier_count++; +- need_tlb_flush = kvm_unmap_hva_range(kvm, range->start, range->end); ++ need_tlb_flush = kvm_unmap_hva_range(kvm, range->start, range->end, ++ range->flags); + need_tlb_flush |= kvm->tlbs_dirty; + /* we've to flush the tlb before the pages can be freed */ + if (need_tlb_flush)